Automating Your Homelab with GitLab CI/CD and Terraform: Part 3 - Production-Ready Practices
📚 This is Part 3 of a 3-part series:
- Part 1: Foundation & Setup
- Part 2: Practical Application
- Part 3: Production-Ready Practices (you are here)
In Part 1 and Part 2, we built and deployed infrastructure with our CI/CD pipeline. Now let’s elevate it to production standards with security best practices, explore real-world applications, and understand the professional value of these skills.
Security Best Practices
Never Commit Secrets
Always use GitLab CI/CD variables. Never put credentials in:
- `.gitlab-ci.yml`
- `terraform.tfvars`
- Any committed file
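One pattern worth showing: Terraform reads any environment variable named `TF_VAR_<name>` as the input variable `<name>`, so a masked GitLab CI/CD variable can flow into Terraform without ever touching a committed file. A minimal sketch (job and variable names here are illustrative, not from the earlier parts):

```yaml
# .gitlab-ci.yml (sketch) — PM_API_TOKEN_SECRET is defined under
# Settings → CI/CD → Variables as a masked, protected variable.
plan:
  stage: plan
  script:
    # TF_VAR_* maps to the Terraform input variable of the same name,
    # so the secret never appears in the repository or the plan file.
    - export TF_VAR_pm_api_token_secret="$PM_API_TOKEN_SECRET"
    - terraform init
    - terraform plan -out=tfplan
```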
Use Protected Branches
In GitLab → Settings → Repository → Protected Branches:
- Protect the `main` branch
- Allow only maintainers to push/merge
- Require MR approvals before merging
Enable Masked Variables
For all secrets in GitLab variables, enable:
- ✅ Masked (hidden in logs)
- ✅ Protected (only on protected branches)
Audit Pipeline Runs
GitLab logs every pipeline run. Regularly review:
- Who triggered deployments
- What changes were applied
- When changes occurred
Limit Runner Access
Run GitLab Runners on isolated networks. They have access to all your secrets, so treat them like production infrastructure.
Use API Tokens, Not Passwords
Proxmox API tokens are revocable and auditable. Never use root passwords in CI/CD.
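As a sketch of what this looks like in practice (assuming the community `telmate/proxmox` provider; adjust argument names if you use a different one), the provider block references a token ID while the secret arrives via the `PM_API_TOKEN_SECRET` environment variable set from a masked CI/CD variable:

```hcl
# Provider configuration sketch — no root password anywhere.
# pm_api_token_secret is intentionally omitted here; the provider
# reads it from the PM_API_TOKEN_SECRET environment variable.
provider "proxmox" {
  pm_api_url      = "https://proxmox.example.com:8006/api2/json"
  pm_api_token_id = "terraform@pve!ci" # revocable, auditable token
}
```

If the token is ever exposed, you revoke that one token in the Proxmox UI instead of rotating the root password everywhere.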
What I’ve Built With This Pipeline
Since implementing this pipeline, I’ve automated:
FortiGate Lab Environment: Deploy HA pairs of FortiGates for testing configurations. Push config changes via Git, and they’re automatically applied.
Development VMs: Spin up Ubuntu/CentOS VMs for testing software. Each feature branch gets its own isolated environment.
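One way to get per-branch isolation is a Terraform workspace keyed to the branch name. A sketch (the job name and stage are illustrative; `CI_COMMIT_REF_SLUG` is a built-in GitLab variable):

```yaml
# Sketch: one Terraform workspace per feature branch, so each
# branch's VMs have their own state and don't collide.
deploy_branch_env:
  stage: apply
  script:
    - terraform init
    # Select the branch's workspace, creating it on first run.
    - terraform workspace select "$CI_COMMIT_REF_SLUG" || terraform workspace new "$CI_COMMIT_REF_SLUG"
    - terraform apply -auto-approve
  rules:
    - if: $CI_COMMIT_BRANCH != "main"
```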
Network Segmentation: VLAN configurations and firewall rules as code. Changes are reviewed in merge requests before deployment.
Cost Analysis
Let’s compare homelab vs. cloud costs for a similar setup:
Homelab Setup
- Hardware: HP Z840 workstation (~$500 used)
- RAM Upgrade: 64GB (~$150)
- Storage: 1TB NVMe (~$80)
- Power: ~$15/month (200W average)
- Total First Year: ~$910
Equivalent Cloud Setup
- GitLab CI/CD: $19/month (Premium for advanced features)
- EC2 for GitLab: ~$150/month (r5.xlarge)
- EBS Storage: ~$30/month
- Data Transfer: ~$20/month
- Total First Year: ~$2,628
Note: Cloud pricing varies significantly by region, instance type, and usage. These are rough estimates based on US East pricing as of October 2025. Always check current AWS pricing for accurate costs.
Savings: $1,718 in the first year. After year one, ongoing savings of ~$2,400 annually.
Plus, you own the hardware and can use it for anything else.
Skills This Teaches
Building this pipeline has taught me a lot, including things I didn't even consider when I first started down this path:
Infrastructure as Code: Terraform for the win! I’m now focused on writing declarative configurations instead of clicking through UIs.
GitOps Workflows: Treating infrastructure changes like software development—with code review, testing, and CI/CD. This is the way.
Pipeline Design: Understanding stages, artifacts, dependencies, and job orchestration. I’ve done quite a bit of this in the past working as a Cloud Engineer but never felt like I truly owned it. Now that it’s in my homelab and I’m using it on a weekly basis, it’s game-changing.
Security Practices: Managing secrets, implementing least privilege, and auditing changes.
Troubleshooting: Reading logs, debugging failed deployments, and recovering from errors.
State Management: Understanding Terraform state, locking, and remote backends. I learned a LOT about state management with this project. I quickly realized I'd only had basic exposure before.
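On remote backends specifically: GitLab ships a built-in Terraform state backend (an HTTP backend with locking), which pairs nicely with this pipeline. A sketch, assuming a state name of `homelab`:

```hcl
# Remote state in GitLab's built-in Terraform state backend.
# The block stays empty on purpose — address, lock_address,
# unlock_address, username, and password are passed at init time
# via -backend-config flags using GitLab-provided CI variables
# (CI_API_V4_URL, CI_PROJECT_ID, CI_JOB_TOKEN), so nothing
# sensitive or environment-specific is committed.
terraform {
  backend "http" {}
}
```

With this in place, state lives in GitLab alongside the code, locking prevents concurrent applies, and no local `terraform.tfstate` files float around on workstations.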
These aren’t homelab skills—they’re professional DevOps/SRE skills. The only difference really is scale.
Conclusion
This pipeline has really transformed how I think about and manage my homelab. Instead of ad-hoc changes and manual deployments, I have:
- Version control for a lot of my infrastructure
- Code review for changes via merge requests
- Automated testing through validation and planning
- Audit trail of every change
- One-click rollbacks by reverting any of my Git commits
- Confidence that deployments will work as expected
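The one-click rollback point is worth making concrete. The toy repo below (filenames and values are illustrative, not from the real pipeline) shows the mechanic: `git revert` restores the previous configuration, and pushing that revert commit is what triggers the pipeline to plan and apply the known-good state again.

```shell
#!/bin/sh
# Toy demonstration of rollback-by-revert in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo 'variable "vm_count" { default = 1 }' > main.tf
git add main.tf
git commit -qm "initial: one VM"

echo 'variable "vm_count" { default = 2 }' > main.tf
git commit -qam "scale to two VMs"

# Roll back the scaling change. In the real setup, pushing this
# revert commit triggers the pipeline to re-apply the old config.
git revert --no-edit HEAD >/dev/null
cat main.tf   # prints: variable "vm_count" { default = 1 }
```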
The best part? Every concept here applies to production environments. Whether you’re managing a homelab or a fleet of VMs across customer sites, the patterns are identical.
If you’re studying for the Terraform Associate certification, this hands-on experience covers a lot of the exam objectives. If you’re preparing for DevOps interviews, being able to demo this pipeline will definitely be a conversation starter that proves real expertise.
Next Steps
Ready to build your own pipeline? Start here:
- Deploy GitLab CE on a VM (doesn’t have to be Proxmox)
- Set up a GitLab Runner with Docker executor
- Create a simple Terraform project (even just one VM)
- Copy the `.gitlab-ci.yml` from Part 1
- Push to GitLab and watch the magic happen
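If you don't have Part 1 open, a minimal skeleton looks something like this (a sketch, not the exact file from Part 1 — pin your own Terraform version and adjust the rules):

```yaml
stages: [validate, plan, apply]

image:
  name: hashicorp/terraform:1.9
  entrypoint: [""]   # override the image's terraform entrypoint

validate:
  stage: validate
  script:
    - terraform init -backend=false
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=tfplan
  artifacts:
    paths: [tfplan]   # hand the plan file to the apply job

apply:
  stage: apply
  script:
    - terraform init
    - terraform apply -auto-approve tfplan
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual    # human approves the apply on main
```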
Once the basics work, expand incrementally. Add a second VM. Implement workspaces. Add automated testing. Build more complex infrastructure.
Resources
- GitLab CI/CD Documentation
- Terraform Proxmox Provider
- HashiCorp Terraform Documentation
- Proxmox VE Documentation
📚 Series complete! You can review:
- Part 1: Foundation & Setup - Build the pipeline from scratch
- Part 2: Practical Application - Deploy real infrastructure
- Part 3: Production-Ready Practices (you just finished this!)
Thanks for following along! I learned a lot in this process and hope it can help others. If you found this series helpful, feel free to share it with others building their homelabs or learning DevOps practices.