Most companies treat their codebase as a collection of projects. We treat ours as an operating system. One repository contains every product, every service, every piece of internal tooling. It's not just a code organization choice — it's a company architecture decision.
The structure
Our monorepo has three layers:
- apps/ — anything that runs. Each app has its own deploy target, its own port, its own domain. A chat-first PM tool, a client services portal, an AI generation API. They share nothing at runtime.
- packages/ — anything that's imported. Shared TypeScript configs, shared types, shared utilities. Apps import from packages, never from other apps.
- supabase/ — the shared database layer. One Postgres instance, isolated schemas per app. Shared tables (auth, profiles) live in public. App-specific tables live in their own schema.
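Concretely, the three layers might look like this on disk (a sketch; directory names beyond lova, studio, and ui are illustrative):

```
.
├── apps/
│   ├── lova/          # chat-first PM tool
│   ├── studio/        # client services portal
│   └── ...            # AI generation API, etc.
├── packages/
│   ├── ui/            # @repo/ui, shared components
│   ├── types/         # shared TypeScript types
│   └── tsconfig/      # shared TypeScript configs
└── supabase/
    └── migrations/    # one Postgres instance, schema per app
```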
Why this matters
The monorepo isn't about convenience — it's about leverage. Every client project we deliver through our services arm uses the same shared packages. Every improvement to a shared utility benefits every product simultaneously. Code written once gets deployed to every context that needs it.
This only works with a strict dependency rule: apps import from packages, never from other apps. If two apps need the same code, it gets extracted to a package. This prevents the monorepo from becoming a tangled mess of cross-app imports.
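The rule itself is small enough to state as code. Here is a minimal sketch, assuming a two-layer model; the module names are illustrative, and in practice this check would live in a lint rule or CI script applied to real import paths, not a standalone function:

```typescript
type Layer = "app" | "package";

interface Module {
  name: string;
  layer: Layer;
}

// Apps may import packages; packages may import packages.
// Nothing may ever import an app.
function importAllowed(importer: Module, imported: Module): boolean {
  return imported.layer === "package";
}

const chat: Module = { name: "apps/lova", layer: "app" };
const portal: Module = { name: "apps/studio", layer: "app" };
const ui: Module = { name: "packages/ui", layer: "package" };

importAllowed(chat, ui);     // true: app -> package is fine
importAllowed(chat, portal); // false: app -> app means extract a package
importAllowed(ui, chat);     // false: packages never depend on apps
```

Note that the check only inspects the target: an import is legal exactly when the thing being imported is a package, which is what makes the rule easy to enforce mechanically.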
Schema isolation
The database architecture mirrors the code architecture. Each app gets its own Postgres schema. Our PM tool writes to the lova schema. Our services portal writes to the studio schema. Auth and profiles are shared in public.
This gives us the isolation of separate databases with the simplicity of one Supabase project. If an app outgrows the shared instance, we can split it by pointing its env vars at a new project — the schema boundary makes migration a pg_dump away.
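One way to wire this up in app code is to resolve the schema in a single place, falling back to public for shared tables. This is a hedged sketch, not our actual config: the mapping, the schemaFor helper, and the "billing" example app are all hypothetical.

```typescript
// Each app resolves its own Postgres schema; unknown apps fall back
// to "public", where shared tables (auth, profiles) live.
const APP_SCHEMAS: Record<string, string> = {
  lova: "lova",     // PM tool
  studio: "studio", // services portal
};

function schemaFor(app: string): string {
  return APP_SCHEMAS[app] ?? "public";
}

// With supabase-js v2, the schema can be pinned when the client is
// created, so every query from that app lands in its own schema:
//
//   const supabase = createClient(url, anonKey, {
//     db: { schema: schemaFor("lova") },
//   });

schemaFor("lova");    // "lova"
schemaFor("billing"); // "public" (hypothetical app, no schema yet)
```

Because the schema is resolved from one helper, splitting an app out later means changing one lookup plus the env vars, and nothing else in the app's query code.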
The operating system analogy
An OS provides shared services (filesystem, networking, auth) and runs isolated applications on top. Our monorepo does the same thing: shared infrastructure (database, auth, types) with isolated applications that can't interfere with each other.
The key insight is that the repo structure is the org chart. Each app maps to a product or service. Each package maps to a capability. The dependency graph tells you how the company works better than any org chart ever could.
What we'd do differently
If we started over, we'd set up the package extraction pipeline earlier. We waited too long to pull shared UI components into @repo/ui, which meant duplicating button and input components across apps for a few weeks.
We'd also establish the schema-per-app convention from day one instead of retrofitting it after the second app needed its own tables. The migration was simple, but it would have been simpler as a default.
But the core decision — one repo, strict layers, shared infrastructure — has paid for itself many times over. Every new app starts with auth, types, and deployment working out of the box. That's not a small thing when you're moving fast.