Shipping a Booking-First Website for a Solo Car Detailer
Supabase (RLS + RPC + constraints) + React (wizard UX) + Vite (fast shipping)
A friend of mine is starting a car detailing business on the side. The brief sounded simple:
> “Can you make me a website?”
But for a solo service business, a website isn’t a brochure. It’s an operations system. If it creates admin work, it fails. If it lets people double-book you, it fails. If timezone quirks or race conditions leak through, it fails at the worst possible moment—when someone is trying to give you money.
So I built a booking-first site where the product is essentially one flow:
Pick a service → pick a day + drop-off window → enter details → send request.
The key design decision: the database enforces correctness, not the frontend. The UI is allowed to be wrong. The backend is not.
This post covers the whole thing—schema, RLS, overlap prevention, slot generation RPC, and the React /booking page that stitches it together.
—
The real constraints of a solo detailer (especially a side hustle)
A one-person operator has two scarce resources:
Time (one job at a time)
Attention (they’re also working another job)
So the system needs to:
prevent impossible bookings
be low-maintenance
avoid back-and-forth as much as possible
handle concurrency cleanly (two people clicking the same slot)
And because this is New Zealand, it also needs to be timezone-correct in Pacific/Auckland, including DST edges.
—
The stack: fast frontend, strict backend
Vite + React for an app-like booking experience (fast dev loop, easy deploy)
Supabase/Postgres for the things that must be true:
scheduling rules
overlap prevention
data access control
server-side slot generation (RPC)
The philosophy is simple:
> Put “business truth” in Postgres.
> Keep the UI focused on reducing clicks.
—
Backend design: “Make bad bookings impossible”
Here’s the shape of the database, simplified into the important pieces.
Services: one source of truth for what’s bookable
A services table defines what customers can book:
duration_mins (validated: >= 15)
price_cents
marketing copy (title, subtitle, summary)
includes[], ideal_for[]
active, sort_order
This is the table the booking system trusts for duration and pricing metadata.
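A minimal sketch of that table. The column names mirror the post; the types, defaults, and the check constraint encoding the `>= 15` validation are my assumptions:

```sql
-- Illustrative DDL; column names follow the post, everything else
-- (types, defaults, constraint names) is assumed.
create table services (
  id            uuid    primary key default gen_random_uuid(),
  title         text    not null,
  subtitle      text,
  summary       text,
  includes      text[]  not null default '{}',
  ideal_for     text[]  not null default '{}',
  duration_mins int     not null check (duration_mins >= 15),
  price_cents   int     not null check (price_cents >= 0),
  active        boolean not null default true,
  sort_order    int     not null default 0
);
```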
Availability rules: day-of-week + effective dates
Availability is expressed as rules:
dow (0–6; Sunday–Saturday)
start_time, end_time
effective_from, effective_to (optional)
active
And there’s a small but important guardrail:
> only one active rule per day-of-week
That prevents the classic “two overlapping availability windows for Tuesday” bug.
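Postgres can express that guardrail directly as a partial unique index (table and index names here are illustrative):

```sql
-- One active rule per day-of-week: uniqueness only applies to rows
-- where active is true, so deactivated historical rules can coexist.
create unique index one_active_rule_per_dow
  on availability_rules (dow)
  where (active);
```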
Bookings: store time ranges, not “slot IDs”
Bookings are stored as:
start_at timestamptz
end_at timestamptz
service_id
customer details
status (confirmed or cancelled)
meta jsonb for extra intent (window preference, vehicle size, tax breakdown, etc.)
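Sketched as DDL, again with the post's columns and assumed types (the specific customer fields are illustrative):

```sql
-- Illustrative DDL for bookings; status is constrained to the two
-- states the post mentions, and end_at must follow start_at.
create table bookings (
  id             uuid        primary key default gen_random_uuid(),
  service_id     uuid        not null references services (id),
  start_at       timestamptz not null,
  end_at         timestamptz not null,
  customer_name  text        not null,
  customer_email text        not null,
  status         text        not null default 'confirmed'
                 check (status in ('confirmed', 'cancelled')),
  meta           jsonb       not null default '{}',
  check (end_at > start_at)
);
```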
The choice to store a time range unlocks the best part:
—
Overlap prevention (the correct way): exclusion constraints
Instead of “check then insert” (race condition city), Postgres can enforce:
> “No confirmed booking ranges may overlap.”
With btree_gist enabled, you add an exclusion constraint:
```sql
-- Enable btree_gist first (it also lets you add scalar columns,
-- e.g. a resource id, to the constraint later). Constraint name
-- is illustrative.
create extension if not exists btree_gist;

alter table bookings
  add constraint bookings_no_overlap
  exclude using gist (
    tstzrange(start_at, end_at, '[)') with &&
  )
  where (status = 'confirmed');
```
This means:
Two users select the same slot at the same time?
One insert succeeds.
One insert fails.
No double bookings. No race conditions. No “oops.”
Cancelled bookings don’t block time because the constraint only applies to status = 'confirmed'.
This is the kind of thing that makes Postgres feel unfair.
—
Slot generation: a single RPC that defines “availability”
The UI shouldn’t implement scheduling math. That gets duplicated, drift happens, and timezone pain follows.
So I built an RPC:
get_available_starts(p_service_id, p_from, p_to, p_step_mins default 15)
In plain English it does:
1. Look up the service duration from services.duration_mins
2. For each day in the requested range:
find the active availability rule for that day (respecting effective dates)
construct NZ-local window timestamps (make_timestamptz)
3. Generate candidate start times at a step interval
4. Filter out:
times in the past
times that overlap an existing confirmed booking
5. Return a sorted list of valid start_at timestamps
That RPC is the single source of truth for “what can be booked.”
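A sketch of what such an RPC can look like. The table and column names follow the post, but the implementation details (the CTE structure, the `make_timestamptz` calls for Pacific/Auckland) are my reconstruction of the steps above, not the actual function:

```sql
create or replace function get_available_starts(
  p_service_id uuid,
  p_from       date,
  p_to         date,
  p_step_mins  int default 15
) returns setof timestamptz
language sql stable as $$
  with svc as (
    select duration_mins from services
    where id = p_service_id and active
  ),
  days as (
    select d::date as day
    from generate_series(p_from, p_to, interval '1 day') as d
  ),
  -- NZ-local availability window for each day with an active rule
  windows as (
    select
      make_timestamptz(
        extract(year from day)::int, extract(month from day)::int,
        extract(day from day)::int,
        extract(hour from r.start_time)::int,
        extract(minute from r.start_time)::int,
        0, 'Pacific/Auckland') as win_start,
      make_timestamptz(
        extract(year from day)::int, extract(month from day)::int,
        extract(day from day)::int,
        extract(hour from r.end_time)::int,
        extract(minute from r.end_time)::int,
        0, 'Pacific/Auckland') as win_end
    from days
    join availability_rules r
      on r.dow = extract(dow from day)::int
     and r.active
     and (r.effective_from is null or day >= r.effective_from)
     and (r.effective_to   is null or day <= r.effective_to)
  ),
  -- candidate starts at the step interval, fitting the full duration
  candidates as (
    select gs as start_at,
           gs + make_interval(mins => s.duration_mins) as end_at
    from windows w
    cross join svc s
    cross join lateral generate_series(
      w.win_start,
      w.win_end - make_interval(mins => s.duration_mins),
      make_interval(mins => p_step_mins)) as gs
  )
  select c.start_at
  from candidates c
  where c.start_at > now()
    and not exists (
      select 1 from bookings b
      where b.status = 'confirmed'
        and tstzrange(b.start_at, b.end_at, '[)') &&
            tstzrange(c.start_at, c.end_at, '[)'))
  order by c.start_at
$$;
```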
—
RLS: public reads + controlled inserts + admin-only management
Row Level Security is where “public website” meets “real database.”
The policy approach:
Public can read active services
People need to see what exists and how long it takes.
Public can read active availability rules
Useful for transparency/debugging, but only the active ones.
Public can insert bookings only if the booking is valid
This is the most important policy in the system.
The insert policy enforces:
start_at > now()
booking end_at must equal start_at + service.duration_mins
service must be active
booking must be within availability (server-side is_within_availability)
and then the exclusion constraint ensures no overlaps
Meaning:
> Even if someone bypasses your UI and calls Supabase directly, they can’t insert garbage.
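Sketched as a policy, those checks look roughly like this (`is_within_availability` is the server-side helper the post names; its signature here is an assumption, as is the policy name):

```sql
create policy "public can request valid bookings"
on bookings
for insert
to anon
with check (
  start_at > now()
  -- subquery returns null for missing/inactive services, so the
  -- equality fails and the insert is rejected
  and end_at = start_at + make_interval(mins =>
        (select s.duration_mins
         from services s
         where s.id = service_id and s.active))
  and is_within_availability(start_at, end_at)
);
```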
Admin privileges are explicit
profiles has is_admin boolean, and there’s a helper function:
is_admin() -> boolean
Admin policies become readable and consistent:
admin can manage services, packages, availability rules, bookings
public cannot
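A sketch of the helper and one admin policy (`auth.uid()` is Supabase's current-user function; `security definer` lets the helper read `profiles` regardless of the caller's RLS — the names are otherwise illustrative):

```sql
create or replace function is_admin()
returns boolean
language sql stable security definer as $$
  select coalesce(
    (select p.is_admin from profiles p where p.id = auth.uid()),
    false)
$$;

-- Example admin policy; repeated per managed table in practice.
create policy "admin manages bookings"
on bookings for all
using (is_admin())
with check (is_admin());
```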
It isn’t complicated; it’s just strict.
—
Frontend: a wizard that removes clicks, not control
The React /booking page is a 4-step wizard:
1. Service
2. Schedule
3. Details
4. Review
The UX goal is: make the default path feel like one decision per screen.
The “drop-off window” pattern
Instead of dumping a huge grid of times, the UI offers windows:
Early (8–10)
Mid (10–12)
Afternoon (12–3)
Late (3–5)
Any time
By default, the user selects a day + window, and the system chooses the earliest available slot inside it.
If the user truly needs a specific time, they tap:
> “I need a specific time”
…and the UI reveals the exact slots.
This keeps the flow simple for most users, while still respecting edge cases.
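The "earliest available slot inside the window" choice can be sketched like this. The function name and the hour-bound representation are mine, not from the codebase:

```typescript
// Picks the earliest slot whose NZ-local hour falls inside a
// drop-off window. Bounds are NZ-local hours, e.g. Early = (8, 10).
const nzHour = new Intl.DateTimeFormat("en-NZ", {
  timeZone: "Pacific/Auckland",
  hour: "2-digit",
  hour12: false,
});

export function earliestInWindow(
  slotIsos: string[],
  startHour: number,
  endHour: number,
): string | null {
  // ISO-8601 UTC strings sort chronologically as plain strings
  for (const iso of [...slotIsos].sort()) {
    const h = Number(nzHour.format(new Date(iso)));
    if (h >= startHour && h < endHour) return iso;
  }
  return null;
}
```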
Cached + deduped availability requests
Availability changes. People browse slowly. Tabs get backgrounded. Mobile networks are weird.
So the client does three things:
caches availability per service for 60 seconds
dedupes concurrent fetches with an “in-flight” map
times out after 15 seconds and shows a retry UX
It also auto-refreshes:
on tab focus
on visibility change
every 2 minutes
That reduces the chance someone sees a stale slot list.
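A condensed sketch of the cache + in-flight dedupe. The fetcher signature, names, and TTL wiring are illustrative; the real client also layers on the 15-second timeout and the refresh triggers listed above:

```typescript
// Per-service availability cache with concurrent-fetch dedupe.
type Fetcher = (serviceId: string) => Promise<string[]>;

const TTL_MS = 60_000; // cache availability for 60 seconds
const cache = new Map<string, { at: number; slots: string[] }>();
const inFlight = new Map<string, Promise<string[]>>();

export async function getSlots(
  serviceId: string,
  fetchSlots: Fetcher,
  now: () => number = Date.now,
): Promise<string[]> {
  const hit = cache.get(serviceId);
  if (hit && now() - hit.at < TTL_MS) return hit.slots; // fresh cache

  const pending = inFlight.get(serviceId);
  if (pending) return pending; // dedupe concurrent callers

  const p = fetchSlots(serviceId)
    .then((slots) => {
      cache.set(serviceId, { at: now(), slots });
      return slots;
    })
    .finally(() => inFlight.delete(serviceId));
  inFlight.set(serviceId, p);
  return p;
}
```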
Timezone correctness is treated as a requirement
The booking page groups slot ISO timestamps into NZ dates using Intl.DateTimeFormat with Pacific/Auckland, and computes “Today / Tomorrow” based on NZ-local date keys.
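For instance, a grouping helper along these lines (`en-CA` is used because it formats dates as a sortable `YYYY-MM-DD` key; the function names are mine):

```typescript
// Groups slot ISO timestamps by their NZ-local calendar date.
const nzDay = new Intl.DateTimeFormat("en-CA", {
  timeZone: "Pacific/Auckland",
  year: "numeric",
  month: "2-digit",
  day: "2-digit",
});

export function nzDateKey(iso: string): string {
  // en-CA renders as YYYY-MM-DD, so keys sort chronologically
  return nzDay.format(new Date(iso));
}

export function groupByNzDate(isos: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const iso of isos) {
    const key = nzDateKey(iso);
    const list = groups.get(key) ?? [];
    list.push(iso);
    groups.set(key, list);
  }
  return groups;
}
```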
It’s the boring kind of correctness that prevents the worst kind of bug:
> “Why did it book me for the wrong day?”
—
Submitting: compute end time, store intent, let the DB judge
When the user submits a booking:
the client computes end_at = start_at + duration_mins
stores window preference and vehicle size in meta
inserts the booking
Even though the UI is computing end_at, the backend verifies:
the duration matches the service definition
the booking is within availability
the time doesn’t overlap (constraint)
So the user gets a fast experience, and the system stays correct.
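The client-side computation, sketched. The payload shape mirrors the schema described earlier; the function name is mine and the Supabase insert call itself is elided:

```typescript
// Builds the insert payload: end_at = start_at + duration, with
// window preference and vehicle size tucked into meta.
export function buildBooking(opts: {
  serviceId: string;
  startAtIso: string;
  durationMins: number;
  windowPreference: string;
  vehicleSize: string;
}) {
  const start = new Date(opts.startAtIso);
  const end = new Date(start.getTime() + opts.durationMins * 60_000);
  return {
    service_id: opts.serviceId,
    start_at: start.toISOString(),
    end_at: end.toISOString(),
    status: "confirmed",
    meta: {
      window_preference: opts.windowPreference,
      vehicle_size: opts.vehicleSize,
    },
  };
}
```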
The response message is intentionally human:
> “Request received — Dylan will message you to confirm the exact drop-off time and details.”
Because the business reality is: the operator still confirms specifics. The system is there to eliminate the tedious parts.
—
Why I kept both services and packages
This is a quiet design win.
services = what’s bookable (duration, price, active)
packages = stable marketing codes and copy that can evolve without breaking the booking model
It’s useful for:
stable URLs
analytics by package code
future renames without data churn
—
What I’d add next (only after the first real bookings)
Once the system has proven demand, the next steps are small but powerful:
friendlier handling for “slot was just taken” (constraint error → human message)
optional pending status if manual confirmation becomes important
buffers between bookings (either by extending duration or adding per-service buffer logic)
notifications via Edge Function (email/SMS) so the operator doesn’t miss requests
But the core is already doing the job: converting intent into a structured booking, safely.
—
The takeaway
This project worked because it treated booking as a correctness problem as much as a UX problem.
React + Vite makes it feel effortless.
Supabase/Postgres makes it reliable.
RPC + RLS + constraints ensure the system behaves even under concurrency and adversarial inputs.
For a solo service business, that’s the whole point:
> Simple on the outside. Strict on the inside.