Systems Portfolio

Applied AI systems running on a personal homelab

This work ties together local infrastructure, model-serving, automation runtimes, and operator-facing apps. The through-line is simple: build systems that are useful under real conditions, not just interesting in isolation.

Local inference · AI orchestration · Automation runtime · Observability surfaces · Operator-facing apps

Running across a Mac Mini M4 control plane, a Dell GB10 for local inference, and a cluster of Raspberry Pi nodes.

Mac Mini M4 control plane: online
GB10 inference host: active
Pi services: powering automations
Models, dashboards, and apps in daily use

One system, multiple layers

The projects here are connected. Infrastructure supports the runtime, the runtime routes work across the stack, and the applications sit on top of that foundation as tools I use day to day.

Layer 01

Infrastructure

Compute, networking, storage, and private services across the Mac Mini M4, GB10, Pi nodes, and supporting systems.

Layer 02

Runtime

Schedulers, model-serving, health checks, workflows, and the automation surface that keeps things moving.

Layer 03

AI layer

Local models, routing logic, agent orchestration, and tool-enabled execution across the environment.

Layer 04

Applications

Practical tools built on top of the system: dashboards, OCR workflows, decision support, and creative utilities.
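To make the layering concrete, here is a minimal sketch of the routing idea the runtime layer describes: prefer the local inference host, fall back to the control plane when it is unhealthy. The host names, roles, and health-check shape are illustrative assumptions, not the actual configuration.

```python
# Illustrative sketch only: hostnames, roles, and health flags are
# hypothetical stand-ins for the real runtime's health checks.
from dataclasses import dataclass


@dataclass
class Host:
    name: str
    healthy: bool
    role: str  # "inference" or "control"


def pick_host(hosts: list[Host]) -> Host:
    """Route work to the first healthy inference host, else any healthy host."""
    for host in hosts:
        if host.healthy and host.role == "inference":
            return host
    for host in hosts:
        if host.healthy:
            return host
    raise RuntimeError("no healthy hosts available")


hosts = [
    Host("gb10", healthy=False, role="inference"),  # hypothetical status
    Host("mac-mini-m4", healthy=True, role="control"),
]
print(pick_host(hosts).name)  # falls back to the control plane
```

The real system layers schedulers and workflows on top of this kind of decision, but the ordering principle is the same: route to specialized capacity first, degrade gracefully when it is unavailable.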

Selected applications

A curated set of operator-facing applications built on shared infrastructure, automation, and local AI, spanning sales intelligence, monitoring, OCR workflows, creative tooling, and personal automation.

Supporting systems and adjacent infrastructure

A broader view of the infrastructure, model-serving, and supporting systems that make the application layer possible.

This work reflects how I approach systems engineering: designing real infrastructure, running local AI, and building tools that are actually used.

Let’s talk