All projects

Case study · 2024 – 2026

Brilliance.

Academic operations platform — 10 apps in a single TypeScript monorepo, AI question generation, full RBAC.

Shipped

Senior Software Developer · VoiceQube

10

apps in monorepo

4,159+

TS / JSX files

25+

DB tables

18

API route modules

The problem

Indian junior colleges that prepare students for JEE / NEET / CLAT / IPMAT run on a stack of disconnected tools. Attendance is on one SaaS, assessments on another, student communication happens over WhatsApp, and parents get nothing. Branch admins manage spreadsheets. Faculty draft questions in Word. Students write tests on printed paper, get scores three days later, and learn nothing about the wrong answers. The result is that the information loop the entire model depends on — practice → feedback → practice — runs at days of latency when it needs to run at minutes.

Brilliance was built to consolidate that loop into a single platform. One product, five user types (super admin, branch admin, faculty, student, parent), three surfaces (web, PWA, native mobile), AI-generated questions, and an analytics layer that tells every actor — admin, faculty, parent — what actually changed since yesterday. The platform had to feel light enough that a 16-year-old would open it on a 4G connection between classes, and structured enough that a branch admin could trust the attendance report at the end of a quarter.

Constraints

  • Ten apps, one type system. Super admin, branch admin, faculty dashboard, student PWA, parent PWA, plus React Native apps for student and parent — and three internal tools. The cost of letting these drift into separate type systems would have been weeks of debugging where the same field had different shapes per app.
  • India-grade network conditions. The student PWA had to work on flaky 3G, on 5-year-old Android phones with 2GB RAM, in classrooms where the wifi went out mid-test. Offline-first was not optional for the test runner.
  • RBAC has to be bulletproof. A branch admin in Mumbai must never see a student in Goa. A faculty member must only access their own batch's analytics. Permission bugs in this product aren't ergonomic — they're reputational.
  • AI question generation has to be checkable. Faculty trust is the entire lever. If Claude generated a flawed question and it shipped to a real test, the platform was done. Every AI-generated question had to pass through a faculty review queue before going live.

Architecture

Monorepo with strict module boundaries

Brilliance lives in one repo with 4,159+ TypeScript / JSX files organised under a Turborepo-style structure. Each of the 10 apps is its own package, importing a shared @brilliance/core for types, @brilliance/ui for shadcn-on-Tailwind-4 components, and @brilliance/api-client for a typed RPC client generated off the backend schema. Drift between apps stops at the type checker — if a backend field changes shape, every app sees the breakage at compile time.
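The generated client itself isn't reproduced here, but the idea behind it can be sketched in a few lines of plain TypeScript (all route and field names below are hypothetical, and a fake transport stands in for fetch): a shared route map means a renamed backend field is a compile error in every app.

```typescript
// Sketch of a shared route schema — the real @brilliance/api-client
// generates this from the backend; these shapes are illustrative only.
type Routes = {
  "GET /students/:id": {
    params: { id: string };
    response: { id: string; name: string; branchId: string };
  };
  "GET /branches/:id/attendance": {
    params: { id: string };
    response: { present: number; total: number };
  };
};

// Typed call: the route literal selects both the params and response types.
async function call<R extends keyof Routes>(
  route: R,
  params: Routes[R]["params"],
  transport: (route: string, params: unknown) => Promise<unknown> // fetch in a real client
): Promise<Routes[R]["response"]> {
  return (await transport(route, params)) as Routes[R]["response"];
}

// If the backend renames `branchId`, this access fails to compile in every app.
async function demo() {
  const fakeTransport = async () => ({ id: "s1", name: "Asha", branchId: "mumbai-1" });
  const student = await call("GET /students/:id", { id: "s1" }, fakeTransport);
  return student.branchId; // type-checked field access
}
```

The payoff is that "drift stops at the type checker" is enforced mechanically, not by convention.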

Hono backend with Drizzle ORM

The API is a single Hono server with 18 route modules, hosted in Mumbai (AWS) for low-RTT to Indian users. Hono's small footprint and fast cold-start matter when you have 10 frontends each making first paint requests. Persistence is Drizzle ORM over PostgreSQL, 25+ tables, with migrations versioned in-repo. Redis in front for hot paths — student leaderboards, the active test session cache, rate limits — and as a BullMQ queue backing for everything deferrable.
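The Redis hot paths follow a cache-aside shape. As a minimal sketch (a Map stands in for Redis and the loader for the Postgres query; the names are hypothetical, not the production code):

```typescript
// Cache-aside sketch: Map stands in for Redis, loader for the Drizzle query.
type Leaderboard = { batchId: string; topScores: number[] };

class HotCache {
  private store = new Map<string, { value: Leaderboard; expiresAt: number }>();
  misses = 0;

  constructor(
    private ttlMs: number,
    private loader: (batchId: string) => Promise<Leaderboard>
  ) {}

  async get(batchId: string): Promise<Leaderboard> {
    const hit = this.store.get(batchId);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // served from cache
    this.misses++; // fell through to the database
    const value = await this.loader(batchId);
    this.store.set(batchId, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```

Repeated reads of the same leaderboard within the TTL hit the cache and never touch Postgres, which is what keeps those endpoints fast under classroom-sized bursts.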

AI question generation with a faculty queue

The Claude integration sits in a single llm module fronting all model calls with prompt caching. Faculty pick a topic, syllabus chapter, and difficulty; Claude generates a candidate question, the answer, and a step-by-step solution. Generated questions land in a review queue, not the public question bank. A faculty member approves, edits, or rejects each one. Approved questions flow into the bank with provenance metadata so we always know which questions started AI-drafted vs. faculty-original. That single design choice — never auto-promoting — was what made AI generation acceptable to schools.
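The never-auto-promoting invariant reduces to a small state machine. A sketch (field names hypothetical): AI output can only enter the system in `pending_review`, and only an explicit faculty decision moves it forward, with provenance kept on the record.

```typescript
type Source = "ai-draft" | "faculty-original";
type Status = "pending_review" | "approved" | "rejected" | "live";

interface Question {
  id: string;
  source: Source;      // provenance survives into the question bank
  status: Status;
  reviewedBy?: string; // faculty id — required before a question goes live
}

// Claude output always lands in the review queue, never the bank.
function fromClaude(id: string): Question {
  return { id, source: "ai-draft", status: "pending_review" };
}

// Only an explicit faculty decision can move a question forward.
function review(
  q: Question,
  facultyId: string,
  decision: "approve" | "reject"
): Question {
  if (q.status !== "pending_review") throw new Error("already reviewed");
  return {
    ...q,
    reviewedBy: facultyId,
    status: decision === "approve" ? "live" : "rejected",
  };
}
```

Because there is no code path from `fromClaude` straight to `live`, "a flawed question shipped to a real test" requires a faculty member to have approved it — which is exactly the accountability the schools asked for.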

Auth + RBAC

Full JWT + OTP + RBAC with branch-scoped roles. A request carries a JWT with tenant_id, branch_id, and role; every database query is scoped at the Drizzle layer, not at the controller — closing the door on the most common authorisation mistake (forgetting to apply scope on a new endpoint). MSG91 powers SMS / WhatsApp OTPs and notifications, FCM handles native push.
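The scoping discipline can be sketched without Drizzle itself (all names hypothetical; an in-memory array stands in for the table): filters derived from JWT claims live in one shared helper, so a new endpoint cannot forget them.

```typescript
interface Claims {
  tenantId: string;
  branchId: string;
  role: "super_admin" | "branch_admin" | "faculty" | "student" | "parent";
}

interface StudentRow {
  id: string;
  tenantId: string;
  branchId: string;
  name: string;
}

// Every student read goes through this helper; the scope is applied at
// the data layer, never left to each controller.
function scopedStudents(all: StudentRow[], claims: Claims): StudentRow[] {
  return all.filter(
    (s) =>
      s.tenantId === claims.tenantId &&
      // super admins see all branches in their tenant; everyone else is branch-scoped
      (claims.role === "super_admin" || s.branchId === claims.branchId)
  );
}
```

In the real system the same predicate is expressed as Drizzle `where` clauses attached centrally, so the Mumbai-admin-sees-Goa-student bug is structurally impossible rather than merely tested for.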

Student PWA — offline-first test runner

The student app is a PWA with a service worker that pre-caches the next test bundle the moment a student starts a session. Answers are written to IndexedDB and synced on reconnect — losing wifi mid-test never loses data. KaTeX renders maths inline so JEE physics looks right on every screen. The same flow runs in a React Native (Expo SDK 55) wrapper for native install.
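The sync discipline is a write-ahead buffer: persist locally first, flush on reconnect, keep the batch if the network fails again. A sketch with an array standing in for IndexedDB (names hypothetical):

```typescript
type Answer = { questionId: string; choice: string; at: number };

class AnswerBuffer {
  private pending: Answer[] = []; // IndexedDB in the real PWA
  constructor(private send: (batch: Answer[]) => Promise<void>) {}

  // Every answer is persisted locally first — losing wifi never loses data.
  record(a: Answer) {
    this.pending.push(a);
  }

  // On reconnect, flush in order; on failure the batch stays buffered.
  async flush(): Promise<number> {
    if (this.pending.length === 0) return 0;
    const batch = [...this.pending];
    try {
      await this.send(batch);
      this.pending = [];
      return batch.length;
    } catch {
      return 0; // still buffered; retry on the next reconnect event
    }
  }

  get buffered() {
    return this.pending.length;
  }
}
```

The real runner wires `flush` to the browser's `online` event and retries with backoff, but the invariant is the same: an answer exists on the device before anything is attempted over the network.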

What I built

  • The full Hono backend — 18 route modules, the auth + RBAC layer, the test-session orchestrator, the analytics aggregator, the Claude question-generation pipeline with the faculty review queue, the BullMQ workers for grading and notification fanout, the migration system on top of Drizzle.
  • The shared @brilliance/core + @brilliance/ui packages — type system, RBAC guards, shadcn/ui-on-Tailwind-4 components customised for an Indian audience (KaTeX-aware text input, OTP-input keyboard, low-data image components).
  • Three of the ten apps end-to-end: Super Admin (org-level analytics, branch CRUD, faculty hiring flow), Faculty Dashboard (test composer, AI question review queue, batch analytics), and the Student PWA (test runner, leaderboard, performance trend).
  • The event-driven Claude Code agent orchestration in the dev workflow — Claude Code wired into the build/test/ship cadence so PRs come through with type-checked, lint-clean diffs and the migration plan is auto-attached to schema-changing PRs.

Trade-offs

  • Hono vs. NestJS. NestJS is the obvious pick for a 10-app system — module discipline, DI, ecosystem. We picked Hono for cold-start latency on the Mumbai region edge and the much smaller surface area to learn for a team that's mostly TS-first. The cost was rebuilding a few NestJS-style patterns ourselves (request-scoped context, structured logging) — worth it for the speed and bundle size.
  • Drizzle vs. Prisma. Prisma has the better migration UX. Drizzle wins on raw query control, type inference into the SQL layer, and edge-runtime compatibility. With 25+ tables, all of them relational and analytics-heavy, Drizzle's SQL-shaped query builder kept the code closer to what the query planner actually does.
  • One AI pass vs. two-pass with reranker. For question generation we had a choice: generate-and-ship vs. generate-and-reranker-grade. We landed on a single Claude pass + a faculty review queue rather than auto-grading. A reranker is a nice optimisation, but the trust gain from a real human in the loop was bigger than any auto-eval would have delivered.
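One of the NestJS-style patterns rebuilt by hand — request-scoped context — is only a few lines on Node's AsyncLocalStorage. A sketch (the Hono middleware wiring is omitted, and the field names are illustrative):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

interface RequestContext {
  requestId: string;
  branchId: string;
}

const als = new AsyncLocalStorage<RequestContext>();

// Middleware calls this once per request; everything downstream reads
// the context without threading it through function arguments.
function withContext<T>(ctx: RequestContext, fn: () => Promise<T>): Promise<T> {
  return als.run(ctx, fn);
}

function currentContext(): RequestContext {
  const ctx = als.getStore();
  if (!ctx) throw new Error("called outside a request");
  return ctx;
}

// A structured log line picks up the request id automatically.
function logLine(msg: string): string {
  const { requestId, branchId } = currentContext();
  return JSON.stringify({ requestId, branchId, msg });
}
```

That is most of what DI-managed request scope bought in NestJS, at a fraction of the framework surface.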

Outcome

Brilliance is shipped and live for the customer, with the admin console at dev.23yards.com. Ten apps, one backend, one type system. Faculty now spend the bulk of their question-creation budget on review-and-edit rather than blank-page drafting — a workflow shift that comes from the AI question pipeline. The platform's RBAC, offline test runner, and Claude-Code-driven dev loop are the same patterns I now reach for any time a system has to serve five user types across web and mobile with AI components in the mix.

Brilliance is also the project where I committed to typed monorepo + edge-runtime backend + AI-with-human-review as the default shape for a 10-surface product. Every prior architecture I had used would have buckled under the type drift alone.

Stack

Next.js 15
React 19
React Native (Expo)
Hono
Drizzle
PostgreSQL
Redis
Claude
Tailwind 4
shadcn/ui
Framer Motion
AWS
Sentry
JWT
OAuth

Want help shipping something like this? Book a call, or grab the snippets this case study draws from.