HomeLab
I started a homelab because I wanted a space where I could learn by doing, without limits, approvals, or risk to production systems. This is essentially a personal space to experiment, break things, rebuild them, and understand why they work the way they do.
Early on, I learned an important lesson: define the goal first. It’s easy to get caught up buying powerful hardware “just in case,” but a homelab is most effective when it grows to meet real needs. Building only what you need keeps it focused, affordable, and intentional.
Beyond the technical side, there's also genuine satisfaction in it. Designing and maintaining systems that actually do something useful, especially for my family, is deeply rewarding. Even if it's on a small scale... it scratches both my creative and problem-solving itches.
HomeLab Use
Learning & Skill Development
The homelab gives me a safe environment to sharpen real-world IT skills. Whether it's networking concepts that map to certifications, system administration, automation, or troubleshooting, I can test ideas without fear of consequences. Let's be honest: if my child can't watch the latest episode of Bluey, or printing is disrupted for a couple of hours, the only blowback I face is from my spouse. In reality, I fear my spouse more than I fear any manager or CEO.
Self-Hosting & Control
One of the biggest motivators was reclaiming control from large cloud providers. Hosting my own services means understanding exactly where my data lives, how it's secured, and how it's accessed. It also lets me practice tangible concepts that carry over to the workplace, especially for those rare edge cases.
Media, Storage, and Backups
My homelab serves as the backbone for media delivery, centralized storage, and data protection. Media services like Jellyfin and Plex run alongside a storage architecture designed for resiliency, scalability, and full ownership of my data.
Storage is distributed across three nodes using Ceph, providing fault tolerance and high availability for workloads that require it. For large files and media libraries, I also maintain a dedicated Network Attached Storage (NAS), optimized for capacity and throughput.
The NAS plays a critical role in my backup strategy. Containers, virtual machines, and other essential workloads are regularly backed up to it, and critical data is synchronized with cloud providers such as OneDrive and Google Drive. This layered approach effectively implements a 3-2-1-style backup strategy, ensuring data remains protected even in the event of catastrophic hardware or site failure.
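For a flavor of how the off-site leg works, here's a minimal sketch, assuming rclone remotes named onedrive: and gdrive: have already been configured via rclone config (the paths, remote names, and schedule are illustrative, not my exact setup):

```
#!/usr/bin/env bash
# Illustrative off-site leg of the 3-2-1 strategy. Assumes rclone remotes
# "onedrive:" and "gdrive:" exist; the source path is an example.
set -euo pipefail

SRC="/mnt/nas/critical"

# Mirror the critical dataset to two independent cloud providers.
rclone sync "$SRC" onedrive:homelab-backup --checksum
rclone sync "$SRC" gdrive:homelab-backup --checksum

# Example cron entry to run this nightly at 02:00:
# 0 2 * * * /usr/local/bin/offsite-sync.sh >> /var/log/offsite-sync.log 2>&1
```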
Services & Experiments
My homelab hosts a variety of services: from personal websites and internal tools to smart home automation and network services like printing and scanning. It’s a sandbox where I can run workloads that would normally live in someone else’s cloud… and save my credit card in the process. Some services, like my print and scanning servers, simply can’t run in a public cloud, making self-hosting the only practical option.
Virtualization & Containers
My homelab leverages virtualization and containerization to simulate complex environments, test deployments, and experiment with architectures that would be difficult or costly elsewhere. I work with technologies including Kubernetes, LXD/LXC, Docker, and Ceph, enabling flexible and resilient infrastructure for both learning and practical projects.
HomeLab Approach
I didn’t start with perfect hardware...and that’s the point. My homelab began with an old PC and grew over time, expanding with inherited appliances, like a NAS from my father, and reasonably modern PCs I rescued from being discarded. It’s never been about having the latest or fastest equipment...it’s about the journey.
A homelab doesn’t need to be career-driven. It can be about curiosity, convenience, or just experimenting for fun. I don’t need 1ms latency or enterprise-grade performance; I just need a space to meet my digital needs and explore ideas freely.
For me, the mentality is simple: Why not?
Evolution of the HomeLab
From Proxmox to MicroCloud
My homelab journey began with Proxmox VE. Having used it in a professional environment, it felt familiar and worked reliably right out of the box. It’s also the platform most commonly recommended across YouTube and homelab communities when people are just getting started...and for good reason.
Over time, though, I realized my goals had outgrown simple VM management. Hosting virtual machines directly on my primary network quickly became cumbersome: managing individual public IPs, firewall rules, SSL termination, and service exposure added unnecessary complexity. While Proxmox does support clustering, I found that once I started working more heavily with containers, its customized Linux kernel and LXC implementation limited the experience more than I expected.
What I really wanted was hands-on experience with clustering and a more cloud-style infrastructure...something that aligned closely with real-world Linux environments and skills I could transfer beyond the homelab.
I stumbled upon Canonical’s Ubuntu MicroCloud almost by accident, and it immediately felt like home. It gave me fine-grained control over isolated networks, ingress and egress traffic, and service exposure. Containers and virtual machines could each live on their own isolated networks, and I could selectively expose services only when and how I needed to.
Built on stock Ubuntu and leveraging LXD, MicroCeph, and MicroOVN, MicroCloud offers seamless clustering with minimal overhead. It’s lightweight, scales easily, and runs exceptionally well on low-power and ARM hardware...making it an ideal foundation for experimentation with modern, scalable infrastructure.
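For context, bootstrapping a cluster is refreshingly short. A rough sketch, assuming the published snap packages (channels and the interactive prompts vary by release):

```
# On every node that will join the cluster:
sudo snap install lxd microceph microovn microcloud

# On one node, start the interactive bootstrap; it discovers the other
# nodes on the local network and wires up LXD, MicroCeph, and MicroOVN.
sudo microcloud init
```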
MicroCloud over Proxmox
- Standard Linux First: MicroCloud runs on stock Ubuntu Server. Every tool, workflow, and troubleshooting step maps directly to real-world Linux environments. There’s no walled garden...just Linux.
- True Hyper-Converged Clustering: Clustering isn’t an add-on; it’s the core design. Storage and networking scale automatically as new nodes are introduced, without bolting on extra layers.
- Ideal for Small, Mixed Hardware: From mini PCs and repurposed hardware to Raspberry Pis, MicroCloud performs well where heavier platforms begin to feel oversized.
- Containers as the Primary Abstraction: LXD system containers are lightweight, fast, and well-suited for Linux services. Virtual machines remain available when full isolation or alternate operating systems are required.
- DevOps-Friendly by Design: With native support for cloud-init and seamless integration with tools like Ansible and Terraform, MicroCloud aligns naturally with an infrastructure-as-code approach rather than a GUI-first workflow. A minimal sketch follows this list.
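Here's what that cloud-init workflow can look like under LXD, as a minimal sketch; the profile name, packages, and user are hypothetical, not my exact configuration:

```
# Hypothetical profile that seeds new instances with cloud-init user data.
lxc profile create base-web
lxc profile set base-web cloud-init.user-data "$(cat <<'EOF'
#cloud-config
packages:
  - nginx
users:
  - name: deploy
    groups: sudo
    shell: /bin/bash
EOF
)"

# Launch a container with the profile applied; cloud-init runs on first boot.
lxc launch ubuntu:24.04 web01 --profile default --profile base-web
```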
What I Host
Gitea
Self-hosted Git service used for source control, issue tracking, and project collaboration. Gitea provides a lightweight alternative to hosted Git platforms, allowing full control over repositories, authentication, and integrations. In the homelab, it serves as the central source of truth for automation scripts, infrastructure code, and personal projects.
The platform is complemented by self-hosted Gitea Actions runners, enabling CI/CD workflows to execute entirely within the homelab. These runners are used to build, test, and validate code, run automation checks, and prototype deployment pipelines without reliance on external CI providers, reinforcing a fully self-contained and reproducible development workflow.
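As a rough illustration, registering a runner uses Gitea's act_runner binary; the instance URL, token, and labels below are placeholders:

```
# Register a self-hosted Gitea Actions runner (the registration token comes
# from the Actions settings in the Gitea UI; values here are placeholders).
./act_runner register \
    --no-interactive \
    --instance https://git.example.lan \
    --token <REGISTRATION_TOKEN> \
    --name homelab-runner-1 \
    --labels ubuntu-latest

# Then keep it running as a long-lived service:
./act_runner daemon
```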
HAProxy
Layer 4/7 load balancer and reverse proxy responsible for routing inbound traffic to internal services. HAProxy provides TLS termination, virtual host routing, and a single entry point for web-based applications. This setup mirrors real-world edge and ingress architectures used in production environments.
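A minimal sketch of host-based routing with TLS termination; the hostnames, backend addresses, certificate path, and file location are assumptions, not my exact config (many installs keep everything in a single /etc/haproxy/haproxy.cfg):

```
# Illustrative ingress fragment: one TLS frontend, routing by Host header.
cat >/etc/haproxy/conf.d/ingress.cfg <<'EOF'
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/homelab.pem
    acl host_jellyfin hdr(host) -i jellyfin.example.lan
    use_backend jellyfin if host_jellyfin
    default_backend web

backend jellyfin
    server jf1 10.10.20.15:8096 check

backend web
    server web1 10.10.20.10:80 check
EOF
```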
Jellyfin
Self-hosted media server for managing and streaming video content across the network. Jellyfin is used to centralize media storage while maintaining full ownership of data without reliance on external streaming platforms. It also serves as a testbed for storage performance, transcoding, and service reliability.
Ollama
Local AI inference service used to run large language models on-premises. Ollama allows experimentation with AI workloads without sending data to third-party cloud providers. In the homelab, it’s used to explore GPU/CPU resource allocation, container isolation, and emerging AI infrastructure patterns.
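As a quick illustration, pulling a model and hitting the local API looks roughly like this (the model name is just an example; Ollama listens on port 11434 by default):

```
# Fetch a model, then query the local inference API.
ollama pull llama3
curl -s http://localhost:11434/api/generate \
     -d '{"model": "llama3", "prompt": "Summarize what a homelab is.", "stream": false}'
```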
Print Server
Virtual machine providing centralized network printing services across multiple subnets and protocols. Running as a full VM to support device drivers and broader OS compatibility, it integrates both IPv4 and IPv6 networking. This system reflects real enterprise requirements where containers are insufficient due to hardware or driver constraints.
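For flavor, enabling network-wide access on a CUPS-based print server can be sketched like this (a simplification; per-subnet access rules live in /etc/cups/cupsd.conf):

```
# Enable remote access and printer sharing on the CUPS server.
sudo cupsctl --remote-any --share-printers

# CUPS listens on IPP port 631 over both IPv4 and IPv6; verify with:
sudo ss -tlnp | grep 631
```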
Semaphore
Ansible Semaphore UI instance used to orchestrate and schedule automation jobs. It provides a controlled interface for running playbooks, managing inventories, and tracking execution history. In the homelab, Semaphore bridges infrastructure-as-code practices with repeatable operational workflows.
Wekan
Self-hosted Kanban board used for task tracking, project planning, and workflow visualization. Wekan supports personal productivity as well as structured project management, and serves as another example of replacing SaaS tools with self-hosted alternatives.
Home Assistant
Home automation platform used to centralize and automate smart home devices, sensors, and services. Home Assistant will integrate lighting, power monitoring, media playback, and environmental sensors into a single, locally controlled system without dependence on cloud-based vendor platforms.
In the homelab, Home Assistant is being introduced as both a practical automation tool and a systems integration exercise, tying together networking, service discovery, container orchestration, and event-driven automation. It will also serve as a real-world example of stateful services, persistent storage, and long-running workloads within the MicroCloud/LXD environment.
OpenWRT Router / Firewall Appliance
Custom-built network edge device running an OpenWRT router on an x86 PC equipped with nine network interfaces. This system functions as the primary router, firewall, and internal switching fabric for the homelab, providing full control over traffic flow between internal networks, services, and external connectivity.
Beyond basic routing, the appliance hosts additional network services including stateful firewalling, VLAN segmentation, and a WireGuard VPN, enabling secure remote access to internal services. Running OpenWRT on general-purpose hardware allows for flexibility, advanced customization, and enterprise-style networking features that exceed typical consumer router capabilities.
This setup serves as a hands-on platform for learning and validating real-world networking concepts such as segmentation, zero-trust access patterns, VPN design, and firewall rule management, while acting as a stable foundation for the rest of the homelab infrastructure.
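To give a sense of the configuration involved, a WireGuard interface on OpenWRT can be sketched with UCI like this (the keys, addresses, and port are placeholders):

```
# Define the WireGuard interface (placeholder key, subnet, and port).
uci set network.wg0=interface
uci set network.wg0.proto='wireguard'
uci set network.wg0.private_key='<SERVER_PRIVATE_KEY>'
uci set network.wg0.listen_port='51820'
uci add_list network.wg0.addresses='10.9.0.1/24'

# One remote peer allowed to reach the VPN subnet.
uci add network wireguard_wg0
uci set network.@wireguard_wg0[-1].public_key='<PEER_PUBLIC_KEY>'
uci add_list network.@wireguard_wg0[-1].allowed_ips='10.9.0.2/32'

uci commit network && /etc/init.d/network reload
```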
Immich
Self-hosted photo and video management platform used for automatic backup, organization, and browsing of personal media. Immich provides a modern, privacy-focused alternative to cloud photo services, offering features such as mobile uploads, metadata indexing, and timeline-based browsing.
Within the homelab, Immich is used to evaluate storage performance, database-backed services, and media lifecycle management while maintaining full control over personal data. It also serves as a real-world workload for testing backups, snapshot strategies, and storage scaling within the MicroCloud/LXD environment.
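One concrete example of that snapshot work is LXD's built-in scheduling, sketched here with an assumed instance name and illustrative retention settings:

```
# Daily snapshots of the Immich instance, kept for two weeks.
lxc config set immich snapshots.schedule "0 3 * * *"
lxc config set immich snapshots.expiry "14d"
lxc config set immich snapshots.pattern "auto-%d"
```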
Podgrab
Automated podcast aggregation and download service that fetches, organizes, and archives podcast episodes from subscribed feeds. Podgrab runs as a lightweight, always-on service that ensures episodes are captured and stored locally without reliance on third-party streaming platforms.
In the homelab, Podgrab supports the broader self-hosting and data ownership philosophy, while acting as a simple but effective example of scheduled jobs, persistent storage, and low-resource containerized services.
Kubernetes
A dedicated virtual machine designed to host a Kubernetes cluster for container orchestration, service deployment, and experimentation with cloud-native architectures. This VM will provide a sandboxed environment to test multi-container applications, CI/CD pipelines, scaling strategies, and service networking, all within the homelab.
The Kubernetes VM will allow hands-on experience with container scheduling, Helm charts, persistent volumes, and cluster monitoring, bridging the gap between single-node LXD containers and real-world distributed systems. It is also intended to serve as a platform for running microservices and experimental workloads without affecting existing stable services.
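A rough sketch of how such a VM could be stood up under LXD, using k3s as one lightweight distribution choice (the sizes and the distribution are assumptions, not final decisions):

```
# Launch a dedicated VM for the cluster (resource limits are illustrative).
lxc launch ubuntu:24.04 k8s --vm -c limits.cpu=4 -c limits.memory=8GiB

# Install k3s inside the VM and confirm the node is up.
lxc exec k8s -- bash -c 'curl -sfL https://get.k3s.io | sh -'
lxc exec k8s -- k3s kubectl get nodes
```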
Kiwix Offline Knowledge Server
A self-hosted Kiwix server, where Kiwix is an open-source platform that enables offline access to websites by distributing them as compressed, searchable archives (ZIM files). It is commonly used to mirror resources such as Wikipedia, Python Documentation, and other useful content for use without an active internet connection.
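Serving the archives is nearly a one-liner; a sketch with example paths and port (the ZIM files themselves are downloaded separately from the Kiwix library at https://download.kiwix.org/zim/):

```
# Serve every downloaded ZIM archive over HTTP on port 8080.
kiwix-serve --port 8080 /srv/zim/*.zim
```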
This service was implemented after learning about Kiwix and recognizing its real-world value following the Rogers nationwide internet outage in Canada, which disrupted internet access for a significant portion of the country. The outage highlighted the fragility of always-online assumptions and motivated the creation of a local, resilient knowledge base.
By hosting curated offline archives within the homelab, this service ensures continued access to useful reference and educational materials during extended connectivity outages. This project emphasizes resilience-focused infrastructure design, thoughtful adoption of lesser-known technologies, and practical problem-solving beyond typical self-hosted services.
Internal Debian Repository Mirror
A privately hosted internal Debian package repository mirror, accessible only within the homelab and intentionally not exposed to the public internet. This mirror caches and serves Debian packages locally, allowing systems to install and update software without relying on external connectivity.
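A sketch of the general pattern, using apt-mirror as one possible tool (the hostname, suites, and paths are illustrative):

```
# Server side: /etc/apt/mirror.list selects what to mirror, e.g.
#   deb http://deb.debian.org/debian bookworm main contrib
#   deb http://deb.debian.org/debian bookworm-updates main contrib
# A run populates /var/spool/apt-mirror, served by any web server:
sudo apt-mirror

# Client side: point APT at the internal mirror (hostname is a placeholder).
echo 'deb http://mirror.lan/debian bookworm main contrib' \
  | sudo tee /etc/apt/sources.list.d/internal.list
sudo apt update
```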
This project was inspired by the Canada-wide Rogers outage, during which extended internet downtime revealed an unexpected dependency: even basic tasks...such as playing a DVD on a computer...required downloading additional software despite having physical media available. The experience highlighted how modern systems often assume constant internet access.
In response, this mirror was implemented to ensure continued access to essential system and media software during network outages, improving overall infrastructure resilience. Beyond availability, the project also served as a practical exercise in repository hosting, package indexing, and secure internal distribution...knowledge directly applicable to maintaining custom software repositories should the need arise in the future.
This service demonstrates an understanding of software supply chains, offline-first design, and long-term infrastructure reliability, rather than simple service hosting.