Automating Your Homelab with GitLab CI/CD and Terraform: Part 2 - Practical Application
📚 This is Part 2 of a 3-part series:
- Part 1: Foundation & Setup
- Part 2: Practical Application (you are here)
- Part 3: Production-Ready Practices
In Part 1, we built the foundation of our CI/CD pipeline and got everything configured. Now it’s time to put it to work! In this post, we’ll deploy real infrastructure, explore advanced pipeline features, and learn how to troubleshoot common issues.
Real-World Example: Deploying a Web Server
Now that we have everything configured, let’s walk through an actual example. We’ll deploy a simple web server using this pipeline. As before, be sure to adjust the Terraform code to match your environment.
Step 1: Create the Terraform configuration
```hcl
resource "proxmox_vm_qemu" "webserver" {
  name        = "web-production"
  target_node = "proxmox"
  clone       = "ubuntu-2204-template"

  cores   = 2
  memory  = 2048
  sockets = 1

  network {
    bridge = "vmbr0"
    model  = "virtio"
  }

  disk {
    storage = "local-lvm"
    type    = "virtio"
    size    = "30G"
  }

  ipconfig0 = "ip=dhcp"

  tags = "environment=production,role=webserver,managed-by=terraform"
}

output "webserver_ip" {
  value       = proxmox_vm_qemu.webserver.default_ipv4_address
  description = "IP address of the web server"
}
```

Step 2: Commit and push
```shell
git add main.tf
git commit -m "Add production web server"
git push origin main
```

Step 3: Watch the pipeline
Open GitLab → CI/CD → Pipelines. You’ll see three stages:
```text
validate → plan → apply
   ✅       ✅      ⏸️
```

Step 4: Review the plan
Click the plan job to see something like the output below:
```text
Terraform will perform the following actions:

  # proxmox_vm_qemu.webserver will be created
  + resource "proxmox_vm_qemu" "webserver" {
      + name        = "web-production"
      + target_node = "proxmox"
      + cores       = 2
      + memory      = 2048
      ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```

Step 5: Apply the changes
Click the “Run” button on the apply stage. Watch as Terraform:
- Clones your template
- Configures the VM
- Starts it up
- Reports the IP address
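Once the VM is up, you can also pull the reported address out of Terraform’s outputs in a script. A minimal sketch, using a canned `terraform output -json` payload so the parsing logic is runnable without a live cluster (the IP shown is made up):

```shell
# Simulated `terraform output -json` payload; in the pipeline you would run:
#   terraform output -json
sample='{"webserver_ip": {"value": "192.168.1.50"}}'

# Extract the webserver_ip value with python3 (jq works too, if installed)
ip=$(printf '%s' "$sample" | python3 -c 'import json,sys; print(json.load(sys.stdin)["webserver_ip"]["value"])')
echo "webserver IP: $ip"
```

This is handy for chaining steps, such as feeding the address into a smoke test later in the pipeline.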
Step 6: Verify in Proxmox
Once that’s finished, check your Proxmox UI. You should see the new web-production VM running with the tags you specified.
Advanced Pipeline Enhancements
Once the basics work and you’re comfortable with the pipeline, here are some more advanced enhancements I’ve added to mine:
1. Environment-Based Deployments
```yaml
.deploy-template:
  extends: .terraform-base
  stage: apply
  script:
    - terraform workspace select ${CI_ENVIRONMENT_NAME} || terraform workspace new ${CI_ENVIRONMENT_NAME}
    - terraform apply -auto-approve tfplan
  dependencies:
    - plan
  when: manual

deploy:dev:
  extends: .deploy-template
  environment:
    name: development
    url: http://dev.homelab.local
  only:
    - dev

deploy:prod:
  extends: .deploy-template
  environment:
    name: production
    url: http://prod.homelab.local
  only:
    - main
```

This creates separate environments in GitLab with their own deployment history, tracking, and approval gates. It’s one of the more powerful features for managing infrastructure across multiple stages.
How it works:
- Changes pushed to the `dev` branch automatically trigger the `deploy:dev` job → deploys to the Development environment
- Changes pushed to the `main` branch automatically trigger the `deploy:prod` job → deploys to the Production environment
- Each environment uses Terraform workspaces to maintain isolated state files
- GitLab tracks deployment history, showing who deployed what and when
The workspace magic: The command `terraform workspace select ${CI_ENVIRONMENT_NAME} || terraform workspace new ${CI_ENVIRONMENT_NAME}` does two things:
- Tries to switch to an existing workspace (e.g., “development” or “production”)
- If that workspace doesn’t exist, creates it automatically
This ensures your dev and prod infrastructure remain completely separate—different VMs, different IP addresses, different everything.
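The select-or-create idiom is easy to test outside CI, too. The sketch below mocks `terraform workspace` with a plain shell function backed by marker files, purely so the `||` fallback is runnable anywhere; the `workspace` function is my stand-in, but the pattern is exactly what the job runs:

```shell
# Mock of `terraform workspace select/new` backed by marker files in a temp dir,
# only to demonstrate the select-or-create fallback without a real Terraform setup.
WORKSPACE_DIR=$(mktemp -d)

workspace() {
  case "$1" in
    select) [ -f "$WORKSPACE_DIR/$2" ] && echo "Switched to workspace \"$2\"." ;;
    new)    touch "$WORKSPACE_DIR/$2" && echo "Created workspace \"$2\"." ;;
  esac
}

CI_ENVIRONMENT_NAME=development

# The exact pattern from the job: select if it exists, otherwise create it.
workspace select "$CI_ENVIRONMENT_NAME" || workspace new "$CI_ENVIRONMENT_NAME"
# Second run: the workspace now exists, so select succeeds.
workspace select "$CI_ENVIRONMENT_NAME" || workspace new "$CI_ENVIRONMENT_NAME"
```

The first run prints the "Created" message, the second prints "Switched", mirroring what you see in the job log the first time a new environment deploys.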
Real-world workflow:
- Developer makes changes: creates a feature branch `feature/new-vm-config`
- Merge to dev: changes merged to the `dev` branch → triggers the `deploy:dev` job
- Test in development: team validates changes in the dev environment
- Promote to production: create an MR from `dev` to `main`, get approval
- Deploy to production: merging triggers the `deploy:prod` job → same config, production workspace
Why this matters: This is exactly how professional teams manage infrastructure. You’re not testing in production, you have clear promotion paths, and every environment is tracked. Developers get a safe place to work and test without affecting production users.
Expanding to multiple environments: You might even see QA, staging, or other environments where changes flow through a controlled pipeline:
```yaml
deploy:qa:
  extends: .deploy-template
  environment:
    name: qa
    url: http://qa.homelab.local
  only:
    - qa

deploy:staging:
  extends: .deploy-template
  environment:
    name: staging
    url: http://staging.homelab.local
  only:
    - staging
```

This creates a promotion pipeline: Dev → QA → Staging → Production, where each stage validates changes before they reach production users.
Viewing in GitLab: Navigate to Deployments → Environments in GitLab to see:
- Current deployment status for each environment
- Deployment history with who/what/when
- Quick links to each environment’s URL
- Rollback capabilities to previous deployments
2. Automatic Destroy on Merge Request Close
```yaml
destroy:mr:
  extends: .terraform-base
  stage: apply
  script:
    - terraform destroy -auto-approve
  environment:
    name: review/${CI_MERGE_REQUEST_IID}
    action: stop
  when: manual
  only:
    - merge_requests
```

This feature is perfect for testing infrastructure changes in isolated environments without cluttering your homelab with orphaned resources.
How it works: When you open a merge request (MR) from a feature branch, GitLab creates a unique review environment named after the MR ID (e.g., review/42). You can deploy test infrastructure specific to that MR, verify your changes work, and then cleanly destroy everything when the MR is closed or merged.
Real-world example: Say you’re testing a new VM configuration. Instead of deploying to production or manually creating test VMs, you:
- Create a feature branch with your changes
- Open a merge request
- Deploy to the review environment
- Test your changes
- Click the manual destroy job to tear down the test infrastructure
- Merge your MR knowing the changes work
Why when: manual? This prevents accidental destruction. You control when to clean up the environment, which is useful if you need to keep it running for extended testing or demonstrations.
Cost savings: In a cloud environment, this pattern can save significant money by automatically cleaning up short-lived test infrastructure. In a homelab, it keeps your Proxmox cluster tidy and prevents resource exhaustion from forgotten test VMs.
3. Slack/Discord Notifications
```yaml
notify:success:
  stage: .post
  script:
    - |
      curl -X POST "https://discord.com/api/webhooks/YOUR_WEBHOOK" \
        -H "Content-Type: application/json" \
        -d "{\"content\":\"✅ Terraform deployment succeeded: ${CI_COMMIT_MESSAGE}\"}"
  when: on_success
  only:
    - main

notify:failure:
  stage: .post
  script:
    - |
      curl -X POST "https://discord.com/api/webhooks/YOUR_WEBHOOK" \
        -H "Content-Type: application/json" \
        -d "{\"content\":\"❌ Terraform deployment failed: ${CI_PIPELINE_URL}\"}"
  when: on_failure
  only:
    - main
```

Get instant feedback on deployments without constantly checking GitLab. These notification jobs run after your pipeline completes, sending real-time alerts to your communication channels.
Setting up Discord webhooks:
- In Discord, go to Server Settings → Integrations → Webhooks
- Click “New Webhook”
- Name it “GitLab Terraform”
- Select the channel where notifications should appear
- Copy the webhook URL
- Add it to GitLab → Settings → CI/CD → Variables as `DISCORD_WEBHOOK_URL` (masked)
Update your pipeline to use the variable:
```yaml
notify:success:
  stage: .post
  script:
    - |
      curl -X POST "${DISCORD_WEBHOOK_URL}" \
        -H "Content-Type: application/json" \
        -d "{\"content\":\"✅ Terraform deployment succeeded: ${CI_COMMIT_MESSAGE}\"}"
  when: on_success
  only:
    - main
```

The `.post` stage: This special stage runs after all other stages complete, regardless of success or failure. Perfect for notifications and cleanup tasks.
Why this matters: When you’re managing infrastructure changes, especially in production, immediate feedback is crucial. Instead of refreshing the GitLab page or checking email notifications, you get instant alerts in the communication tools you’re already using.
Pro tip: You can enhance these notifications with more context:
```yaml
notify:success:
  stage: .post
  script:
    - |
      curl -X POST "${DISCORD_WEBHOOK_URL}" \
        -H "Content-Type: application/json" \
        -d "{
          \"content\": \"✅ **Terraform Deployment Succeeded**\",
          \"embeds\": [{
            \"title\": \"${CI_COMMIT_TITLE}\",
            \"description\": \"Branch: ${CI_COMMIT_REF_NAME}\",
            \"url\": \"${CI_PIPELINE_URL}\",
            \"color\": 3066993,
            \"fields\": [
              {\"name\": \"Author\", \"value\": \"${GITLAB_USER_NAME}\", \"inline\": true},
              {\"name\": \"Duration\", \"value\": \"${CI_PIPELINE_DURATION} seconds\", \"inline\": true}
            ]
          }]
        }"
  when: on_success
  only:
    - main
```

This creates rich, formatted notifications with commit details, pipeline links, and timing information—making it easy to track deployments at a glance.
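Escaping JSON inside YAML inside shell is the fiddliest part of these jobs. One way to keep it sane is to build the payload in a variable and validate it locally before pasting it into the pipeline. A sketch, with local stand-ins for GitLab’s predefined variables and the webhook URL left as a placeholder:

```shell
# Stand-in values for GitLab's predefined CI variables when testing locally
CI_COMMIT_TITLE="Add production web server"
GITLAB_USER_NAME="homelab-admin"

# Build the Discord payload in one place instead of inline in the curl call
payload=$(printf '{"content": "✅ **Terraform Deployment Succeeded**", "embeds": [{"title": "%s", "fields": [{"name": "Author", "value": "%s", "inline": true}]}]}' \
  "$CI_COMMIT_TITLE" "$GITLAB_USER_NAME")

# Validate before sending; a malformed payload makes Discord reject the request
# with little explanation, so catching it locally saves a pipeline round-trip.
printf '%s' "$payload" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"

# When it checks out, send it (webhook URL placeholder):
# curl -X POST "${DISCORD_WEBHOOK_URL}" -H "Content-Type: application/json" -d "$payload"
```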
Troubleshooting Common Issues
While setting up and configuring this pipeline, I ran into a few issues getting everything connected. Issue 2, specifically, led to some deeper learning. Hopefully everything just works from Part 1, but if not, these are the issues you’re most likely to see.
Issue 1: “Runner not picking up jobs”
Symptoms: Pipeline stays in “pending” state forever.
Solution:
```shell
# Check runner status
docker exec gitlab-runner gitlab-runner verify

# Restart runner
docker restart gitlab-runner

# Check runner logs
docker logs gitlab-runner
```

Ensure the runner’s tag (`docker`) matches your job’s tags in `.gitlab-ci.yml`. I called this out in Part 1, but I’m sharing it again because it’s caught me a few times.
Issue 2: “Terraform state locking error”
Symptoms: Jobs fail with “Error acquiring the state lock”.
Solution:
```shell
# Manually unlock (use the lock ID from the error message)
terraform force-unlock <lock-id>
```

This happens if a previous job crashed without releasing the lock.
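If you don’t want to eyeball the error text, the lock ID can be parsed out of the message. A sketch against a canned copy of Terraform’s lock error (the ID shown is invented):

```shell
# A canned copy of the relevant part of Terraform's state lock error output.
error_msg='Error: Error acquiring the state lock
Lock Info:
  ID:        4cdd5841-31ec-1fcd-5c4d-f1e50e4cb531
  Operation: OperationTypeApply'

# Pull out the lock ID so it can be fed to `terraform force-unlock`.
lock_id=$(printf '%s\n' "$error_msg" | awk '/ID:/ {print $2; exit}')
echo "would run: terraform force-unlock $lock_id"
```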
Issue 3: “Authentication failed to Proxmox”
Symptoms: Plan stage fails with authentication errors.
Solution:
- Verify the API token is correct in GitLab variables
- Check the token hasn’t expired: `pveum user token list terraform@pve`
- Ensure the token has proper privileges
- Test manually (API tokens authenticate via the `Authorization` header rather than the ticket endpoint):

```shell
curl -k -H "Authorization: PVEAPIToken=terraform@pve!token-id=token-secret" \
  https://proxmox:8006/api2/json/version
```
Issue 4: “Plan artifact not found in apply stage”
Symptoms: Apply stage fails saying tfplan doesn’t exist.
Solution: Ensure dependencies: - plan is set in the apply job. Also check that artifact expiration hasn’t passed (default is 1 week).
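A cheap extra safeguard is to fail fast with a clear message before Terraform ever runs. The `check_plan` helper below is my own addition, not part of the pipeline from Part 1; a temp file stands in for `tfplan` so the sketch is runnable as-is:

```shell
# Hypothetical fail-fast guard for the top of the apply job's script.
check_plan() {
  if [ ! -f "$1" ]; then
    echo "plan artifact '$1' missing: check dependencies and artifact expiry" >&2
    return 1
  fi
  echo "plan artifact '$1' found"
}

# Demonstrate both paths, with a temp file standing in for tfplan:
tmp=$(mktemp)
check_plan "$tmp"
rm -f "$tmp"
check_plan "$tmp" || echo "would abort the apply job here"
```

A missing artifact then fails with an actionable message instead of a confusing Terraform error about a nonexistent plan file.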
Issue 5: “Terraform version mismatch”
Symptoms: State file was created with newer Terraform version.
Solution: Pin Terraform version in .gitlab-ci.yml:
```yaml
.terraform-base:
  image: hashicorp/terraform:1.6.5  # Pin to a specific version
```

What’s Next?
You now have a working pipeline that can deploy real infrastructure! In Part 3, we’ll cover production-ready practices including security best practices, real-world projects I’ve built, cost analysis of homelab vs cloud, and the professional skills this teaches.
📚 Continue the series:
- Part 1: Foundation & Setup - Build the pipeline from scratch
- Part 2: Practical Application (you just finished this!)
- Part 3: Production-Ready Practices - Security, real projects, and professional skills