Architecting a Multi-Venue Hospitality Platform
A multi-venue hospitality platform is best built on one Postgres database with row-level security for per-venue isolation, a unified bookings table with type-specific extensions (restaurant, spa, event, rooftop, room), a kanban event pipeline, and an admin with JWT-encoded role-based access. The pattern gives you per-venue isolation and group-level reporting in single queries, a combination that a separate-database-per-venue setup cannot deliver.
The hardest decision in a multi-venue hospitality build is the data model. Get it right and the rest of the system flows; get it wrong and every feature you ship for the next two years pays a tax.
This is the pattern we've shipped on multi-venue boutique builds, with the trade-offs we hit and the alternatives we considered. It's specific to Postgres + Supabase because that's where we live; the principles transfer to any SQL-with-RLS stack.
The shape of the problem
A boutique hospitality group with four venues under one brand is, structurally, a multi-tenant system where the tenants are the venues, the operator is a tenant of all of them, and a few user types sit on top of that:
- Per-venue staff — restaurant host, spa receptionist, events coordinator, rooftop FOH. Should see only their venue's bookings.
- Per-venue manager — restaurant GM, spa director. Should see their venue's bookings plus their venue's reports.
- Group owner — the operator. Should see everything across all venues.
- Group admin — typically the operator and one or two delegates. Read/write across all venues.
Plus: guests, who are not authenticated users but show up as records that may have interacted with multiple venues.
The system has to support per-venue isolation strict enough that a spa receptionist physically can't see the restaurant's reservations, while also supporting cross-venue reads strong enough that an owner can ask "show me everyone who booked spa and dinner in the same week" and get an answer in one query.
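To make the second half of that concrete, here is roughly what the owner's question looks like as SQL (a sketch; the bookings and guests tables it leans on are defined in the schema section below):

-- "everyone who booked spa and dinner in the same week", in one query
select distinct g.id, g.email
from bookings spa
join bookings din
  on din.guest_id = spa.guest_id
 and din.type = 'restaurant'
 and date_trunc('week', din.start_at) = date_trunc('week', spa.start_at)
join guests g on g.id = spa.guest_id
where spa.type = 'spa';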
Why one database, not four
The instinct on the first pass is one database per venue, and it's wrong. Three reasons:
Cross-venue reads are the high-value queries. "Returning guest" is a cross-venue concept. "Group-level revenue this month" is a cross-venue concept. "Email marketing list of guests who booked spa but never restaurant" is a cross-venue concept. With four databases, every one of these is four queries and a join in the application layer. With one database, every one of these is a single SQL query with a where venue_id in (...) clause.
Schema drift is inevitable. Four databases means four sets of migrations. The discipline to keep them in lockstep degrades fast. By month nine the spa database has a column the restaurant database doesn't, and now your shared admin code branches on that.
The isolation argument is solved by RLS, not by physical separation. Postgres row-level security, when written carefully and tested in CI, gives you the same isolation guarantee as separate databases without the cost of separate databases.
The one case where four databases wins is when the venues are independent legal entities with separate data-controller obligations under GDPR or PIPEDA. That's rare in boutique hospitality (the group is one entity); when it applies, the math flips.
The schema
Three layers of tables.
Layer 1 — The shared core
create table venues (
id uuid primary key default gen_random_uuid(),
group_id uuid not null references groups(id),
slug text unique not null, -- 'rooftop' | 'restaurant' | 'spa' | 'events'
name text not null,
city text not null,
timezone text not null,
active boolean not null default true,
created_at timestamptz not null default now()
);
create table guests (
id uuid primary key default gen_random_uuid(),
email text unique,
phone text,
first_seen_at timestamptz not null default now(),
last_seen_at timestamptz not null default now(),
marketing_consent boolean not null default false
);
guests is intentionally not multi-tenant. A guest is a guest of the group, not of any single venue — that's what makes "returning guest across the group" a concept the system can express.
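In practice that means every booking flow resolves the guest against this table before writing the booking. A minimal sketch, assuming email is the dedupe key (phone-only guests need their own path):

-- resolve-or-create the group-level guest record
insert into guests (email, marketing_consent)
values ('guest@example.com', false)
on conflict (email) do update
  set last_seen_at = now()
returning id;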
Layer 2 — The unified bookings table
create type booking_type as enum (
'restaurant', 'spa', 'event', 'rooftop', 'room'
);
create type booking_status as enum (
'pending', 'confirmed', 'cancelled', 'no_show', 'completed'
);
create table bookings (
id uuid primary key default gen_random_uuid(),
venue_id uuid not null references venues(id) on delete restrict,
guest_id uuid references guests(id),
type booking_type not null,
status booking_status not null default 'pending',
start_at timestamptz not null,
end_at timestamptz,
party_size int,
notes text,
created_at timestamptz not null default now(),
updated_at timestamptz not null default now()
);
create index on bookings (venue_id, start_at);
create index on bookings (guest_id);
The unified table holds what every booking has in common. Every type-specific table extends it with a 1:1 row.
Layer 3 — Type-specific extensions
create table spa_bookings (
booking_id uuid primary key references bookings(id) on delete cascade,
treatment text not null,
therapist_id uuid references staff(id),
room_id uuid references spa_rooms(id),
deposit_amount_minor int,
deposit_currency text,
deposit_status text -- 'unpaid' | 'paid' | 'refunded'
);
create table event_enquiries (
booking_id uuid primary key references bookings(id) on delete cascade,
guest_count int not null,
event_type text not null,
budget_band text,
kanban_status text not null default 'new', -- 'new' | 'in_discussion' | 'quoted' | 'confirmed' | 'completed'
source text,
assigned_to uuid references staff(id)
);
create table restaurant_bookings (
booking_id uuid primary key references bookings(id) on delete cascade,
table_id uuid references tables(id),
preferences jsonb default '{}'::jsonb
);
create table rooftop_bookings (
booking_id uuid primary key references bookings(id) on delete cascade,
area text, -- 'bar' | 'lounge' | 'private_section'
weather_contingency text
);
create table room_bookings (
booking_id uuid primary key references bookings(id) on delete cascade,
unit_id uuid references rooms(id),
nightly_rate_minor int not null,
nightly_rate_currency text not null,
number_of_nights int not null
);
The 1:1 extension pattern lets each booking type evolve its own columns without touching the others. Adding a cancellation_reason to spa_bookings is a single migration; nothing else moves.
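For instance, the cancellation_reason example above really is one statement (the column is hypothetical, shown only to make the point):

alter table spa_bookings add column cancellation_reason text;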
Row-level security: the policy that makes it safe
alter table bookings enable row level security;
alter table spa_bookings enable row level security;
alter table event_enquiries enable row level security;
alter table restaurant_bookings enable row level security;
alter table rooftop_bookings enable row level security;
alter table room_bookings enable row level security;
-- Helpers exposed to policies
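-- Note: on Supabase the top-level 'role' claim is the Postgres role ('anon' / 'authenticated'),
-- so an application role like 'group_admin' usually travels in app_metadata or a custom claim;
-- adjust the JSON paths below to wherever your tokens actually carry these values.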
create or replace function current_user_role() returns text
language sql stable as $$
select coalesce(auth.jwt() ->> 'role', 'public')
$$;
create or replace function current_user_venue_ids() returns uuid[]
language sql stable as $$
select case
when auth.jwt() ->> 'role' in ('group_admin', 'group_owner') then
(select array_agg(id) from venues where active)
else
coalesce(string_to_array(auth.jwt() ->> 'venue_ids', ',')::uuid[], '{}')
end
$$;
-- The bookings policy
create policy "scoped read on bookings"
on bookings for select
using (venue_id = any (current_user_venue_ids()));
create policy "scoped write on bookings"
on bookings for insert
with check (venue_id = any (current_user_venue_ids()));
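-- Updates and deletes want their own policies too; a sketch using the same venue check
create policy "scoped update on bookings"
on bookings for update
using (venue_id = any (current_user_venue_ids()))
with check (venue_id = any (current_user_venue_ids()));
create policy "scoped delete on bookings"
on bookings for delete
using (venue_id = any (current_user_venue_ids()));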
-- Type-specific tables inherit isolation through the booking_id FK
create policy "scoped read on spa_bookings"
on spa_bookings for select
using (booking_id in (select id from bookings));
The last policy is the trick: because the bookings policy already filters to the user's accessible venues, a select on spa_bookings where booking_id in (select id from bookings) returns only spa bookings for venues the user has access to. The type-specific tables get isolation for free, by composition.
The JWT contains the role and a comma-separated list of venue IDs the user belongs to. For group admins and owners the helper short-circuits to every active venue; everyone else is restricted to their explicit venue list.
Testing the security boundary
The risky part of RLS is the silent failure: a bad policy doesn't error, it just returns the wrong rows. The discipline is a test suite that signs in as each role and asserts which rows it can see.
describe("RLS isolation", () => {
it("a spa receptionist sees only their venue's bookings", async () => {
const client = await signInAs({
role: "staff",
venue_ids: [SPA_VENUE_ID],
});
const { data } = await client.from("bookings").select("*");
expect(data?.every((b) => b.venue_id === SPA_VENUE_ID)).toBe(true);
});
it("a group owner sees all venues", async () => {
const client = await signInAs({ role: "group_owner" });
const { data } = await client.from("bookings").select("venue_id");
const venueIds = new Set(data?.map((b) => b.venue_id));
expect(venueIds.size).toBe(4);
});
it("anonymous cannot read bookings at all", async () => {
const client = anonymousClient();
const { data, error } = await client.from("bookings").select("*");
expect(data?.length ?? 0).toBe(0);
});
});
This suite runs in CI on every PR. A regression in a policy fails the build. Without this discipline, RLS is the kind of thing that goes wrong silently in production six months later.
The kanban event pipeline
Events are different from reservations because the sales conversation is multi-step. A wedding enquiry isn't "confirmed at submission" — it's a six-week conversation that moves through discovery, proposal, negotiation, deposit, finalization. Squeezing that into a reservation primitive ("confirmed: yes/no") loses all the sales workflow.
The kanban model treats event_enquiries.kanban_status as the workflow state. The admin UI is a four-column board (new, in_discussion, quoted, confirmed) with a fifth completed archive. Drag-and-drop updates kanban_status and writes a row to event_enquiry_events for the audit trail:
create table event_enquiry_events (
id bigserial primary key,
booking_id uuid not null references bookings(id) on delete cascade,
from_status text,
to_status text not null,
actor_id uuid references staff(id),
note text,
occurred_at timestamptz not null default now()
);
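The transition itself is a two-statement write: update the enquiry, append the audit row. A sketch of wrapping both in one function the board calls on drop (move_enquiry is a hypothetical name, not part of the schema above):

create or replace function move_enquiry(
  p_booking_id uuid,
  p_to_status text,
  p_actor_id uuid,
  p_note text default null
) returns void
language plpgsql as $$
declare
  v_from text;
begin
  -- lock the enquiry row so two coordinators can't race the same card
  select kanban_status into v_from
  from event_enquiries
  where booking_id = p_booking_id
  for update;

  update event_enquiries
  set kanban_status = p_to_status
  where booking_id = p_booking_id;

  insert into event_enquiry_events (booking_id, from_status, to_status, actor_id, note)
  values (p_booking_id, v_from, p_to_status, p_actor_id, p_note);
end;
$$;

It runs as the caller, so whatever RLS policies cover event_enquiries and event_enquiry_events still apply.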
The audit trail is what lets the operator answer "how long does it take a wedding enquiry to convert in our funnel" — a question that, in our experience, every group asks within the first month of using the system and no off-the-shelf tool answers.
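That question falls straight out of the audit trail. A sketch, measuring from the enquiry's created_at to its first confirmed transition:

select ee.event_type,
       percentile_cont(0.5) within group (
         order by extract(epoch from (conf.first_confirmed - b.created_at)) / 86400
       ) as median_days_to_confirm
from event_enquiries ee
join bookings b on b.id = ee.booking_id
join lateral (
  select min(occurred_at) as first_confirmed
  from event_enquiry_events
  where booking_id = ee.booking_id
    and to_status = 'confirmed'
) conf on conf.first_confirmed is not null
group by ee.event_type;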
The unified admin
One CMS, role-based, multi-venue switcher in the top bar. Per-page architecture:
- Dashboard (group) — today's bookings across venues, this-week revenue, this-month occupancy
- Dashboard (venue) — venue-specific version of the above
- Bookings (per type) — restaurant covers, spa schedule, rooftop list, kanban for events, room calendar
- Guests — the cross-venue list, with per-guest history across all venue interactions
- Settings — per-venue config, hours, deposit policies, cancellation rules
The technology is unsurprising: Next.js 14 App Router, Supabase client with the user's JWT, Tailwind, server components for the read-heavy pages, server actions for mutations. The interesting decisions are in the data model above; the UI is mostly straightforward once the shape is right.
Where this architecture struggles
Three places.
The first is real-time conflict resolution on shared resources. Two staff members editing the same event enquiry at the same time will overwrite each other if you don't add optimistic concurrency control. We use an updated_at where clause on writes and surface a "this record was updated by someone else" message on conflict. It's a simple pattern; we forgot it on the first pass and had to retrofit.
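The retrofit is small: carry the updated_at the editor last saw through the form, and make the write conditional on it. A sketch ($1..$3 are the bound parameters):

-- $1 = booking id, $2 = the updated_at the editor last loaded, $3 = new notes
update bookings
set notes = $3,
    updated_at = now()
where id = $1
  and updated_at = $2;
-- zero rows updated means someone else saved first: reload and show the conflict message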
The second is reporting performance at scale. Group dashboards aggregate across venues — "this month's revenue, segmented by venue, by booking type." At a few thousand bookings per month, this is fine; at tens of thousands, the un-indexed cross-table joins start to hurt. The fix is materialized views refreshed on a schedule, not eager joins on every dashboard render. We do this when query time crosses ~200ms; it stays cheap until then.
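A minimal version of that fix, assuming the dashboard only needs counts by venue, type, and month (revenue would join whichever payments table the build actually has):

create materialized view mv_monthly_bookings as
select venue_id,
       type,
       date_trunc('month', start_at) as month,
       count(*) as bookings
from bookings
where status in ('confirmed', 'completed')
group by venue_id, type, date_trunc('month', start_at);

-- the unique index is what lets the refresh run concurrently
create unique index on mv_monthly_bookings (venue_id, type, month);
refresh materialized view concurrently mv_monthly_bookings;

One caution: a materialized view is not covered by the source table's RLS policies, so grant it only to the group-level dashboard roles.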
The third is the audit trail's growth rate. event_enquiry_events and similar audit tables grow fast. We partition them by month after the second year of operation, which keeps queries scoped to recent history without losing the long tail.
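The partitioned shape is declarative range partitioning on occurred_at; the one wrinkle is that the partition key has to be part of the primary key. A sketch of the shape the table is migrated to (the month boundaries are examples):

create table event_enquiry_events (
  id bigserial,
  booking_id uuid not null references bookings(id) on delete cascade,
  from_status text,
  to_status text not null,
  actor_id uuid references staff(id),
  note text,
  occurred_at timestamptz not null default now(),
  primary key (id, occurred_at)
) partition by range (occurred_at);

create table event_enquiry_events_2025_01
  partition of event_enquiry_events
  for values from ('2025-01-01') to ('2025-02-01');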
What we'd do differently next time
Two things, both small, both learned the hard way.
The bookings.notes field is a free-text catch-all and we wish we'd typed it sooner. By the time you've got six clusters of "things people put in notes," you have a half-built schema hiding in a string column. We now extract notes into specific columns at the second use case, not the fifth.
The deposit-status field on spa_bookings is just a string. It should have been a Postgres enum from day one, and the migration to enum after the fact is exactly the kind of churn no one wants to do. Pay the schema discipline cost up front; it pays back forever.
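For what it's worth, the statement itself is short; the churn is auditing every stray string and every app-side write before you can run it. A sketch, assuming the existing values are already clean:

create type deposit_status as enum ('unpaid', 'paid', 'refunded');
alter table spa_bookings
  alter column deposit_status type deposit_status
  using deposit_status::deposit_status;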
What this enables that off-the-shelf doesn't
A query like:
select g.email, count(*) as visits, array_agg(distinct b.type) as types
from bookings b
join guests g on g.id = b.guest_id
where b.created_at > now() - interval '90 days'
and b.status = 'completed'
and g.email is not null
group by g.email
having count(*) > 1
order by visits desc
limit 50;
Top fifty most-engaged returning guests in the last 90 days, with the booking types they crossed. Useful for a hand-written marketing follow-up. Not expressible in any single off-the-shelf hospitality system we've seen because none of them treat the guest as the cross-venue primitive.
If your group is growing across booking types and you're feeling the friction in the four-vendor stack, we'd scope a rebuild. Or if you want to go deeper on the boutique-hospitality stack-design questions, the why-not-OpenTable piece covers the buy-vs-build decision in detail.
Frequently asked questions
Why not just use a separate database per venue?
Because the most valuable queries cross venues — group-level reporting, returning-guest detection, marketing CRM. A separate database per venue means writing the same query four times and stitching the results together. One database with row-level security gives you isolation where you need it (per-venue staff cannot see each other's data) and unified reads where you need them (group dashboards, guest history, admin overview).
How does role-based access actually work in this model?
The JWT issued at sign-in carries a role (staff, manager, group_owner, group_admin) and the list of venue IDs the user belongs to. RLS policies on each venue-scoped table check the JWT: staff and managers see only their venues' rows, while group owners and group admins see every venue. The policy lives in the database, not the application, so a forgotten check at the API layer can't accidentally leak data.
What goes in the unified bookings table vs. the type-specific tables?
The unified table holds what every booking shares: id, venue_id, guest reference, type, status, start and end times, party size, timestamps. The type-specific tables hold what only that booking type has: a spa booking has treatment, therapist, room, and deposit fields; an event enquiry has guest_count, event_type, kanban_status. The split keeps the admin queries clean and lets each booking type evolve independently without schema migrations across the others.
Is the kanban event pipeline really worth the complexity?
For a restaurant alone, no — events fit OpenTable's reservation primitive. For a multi-venue group whose events are six-figure private bookings with multi-step sales conversations, yes. The pipeline is the system of record for the sales process — the alternative is sales-by-spreadsheet-and-Slack, which is what every group we've talked to was doing before.
What's the riskiest part of this architecture to get wrong?
The RLS policies. They're the security boundary, and a permissive default policy or a missing one means data crosses tenants. We test them with a dedicated suite that signs in as each role and asserts which rows are visible. The suite runs in CI; a regression there blocks deploy. Without that test discipline, RLS becomes a footgun.
Building or rebuilding your hospitality stack?
Start with a Stack Diagnostic. We'll scope what you have, what's breaking, and what to ship next.
Book a Stack Diagnostic