We built a single-user operator analytics dashboard on ClickHouse that assembles 9 parallel queries in under 450ms, with SVG-native charts and no third-party analytics dependency.
280-450ms
API response time (9 queries)
0
New infrastructure dependencies
SVG-native
Chart rendering
allowlist
Auth restriction
CHAPTER 01
Deployed platforms accumulate usage data that goes unread. A form submission counter is not product intelligence. The operator needs to know which features users are actually reaching, which search terms appear repeatedly, at what rate AI-assisted queries substitute for symbol lookups, and where users exit the conversion funnel before booking a call.
Avo's internal analytics requirement was specific. The dashboard was not for external stakeholders. It was a single-user operator view that needed to answer one question each morning: is the platform attracting the right users, and are they doing the thing that leads to a paid engagement?
The existing state was a Vercel Analytics integration that showed page views and nothing else. There was no feature event tracking, no search query logging, no conversion funnel, and no NPS collection.
CHAPTER 02
The analytics stack used ClickHouse exclusively. No third-party product analytics vendor was added. ClickHouse was already present for market data and was extended with four new tables: argus.search_log captured every symbol search and AI question; argus.feature_usage captured page-level and feature-level events; argus.help_views and argus.help_feedback captured documentation engagement and thumbs-up/down responses.
The dashboard fetched data from /api/admin/analytics, which fired nine ClickHouse queries in parallel via Promise.all. An NPS endpoint was queried separately and overlaid on the same dashboard. The API route used safeQuery, a wrapper around chQuery that caught any ClickHouse exception and returned an empty array. Tables that had not yet been created returned zero rows gracefully rather than returning a 500.
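The safeQuery behavior described above can be sketched as follows. This is a minimal illustration, not the platform's actual code: the chQuery stand-in here simply throws, as the real helper would for a table that does not yet exist.

```typescript
// Hypothetical sketch of the safeQuery wrapper around chQuery.
type Row = Record<string, unknown>;

// Stand-in for the platform's chQuery helper; assumed to throw
// when a queried table has not been created yet.
async function chQuery(sql: string): Promise<Row[]> {
  throw new Error(`Table not found for query: ${sql}`);
}

// safeQuery swallows ClickHouse exceptions and returns an empty
// array, so a missing table yields zero rows instead of a 500.
async function safeQuery(sql: string): Promise<Row[]> {
  try {
    return await chQuery(sql);
  } catch {
    return [];
  }
}
```

The design choice is that analytics tables can be added incrementally: a dashboard deploy never has to be coordinated with a schema migration, because an absent table degrades to an empty panel.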
ARCHITECTURE OVERVIEW

[Architecture diagram: Next.js 15 (App Router) presentation layer; API layer with auth + rate limit + versioning; Clerk auth service; ClickHouse 26.3 database; TypeScript 5.4; queue layer]
CHAPTER 03
The nine queries covered: total search count (7-day), AI question count (7-day), top 20 search terms (7-day), daily search volume with AI subset (30-day), top 30 features by usage, top 20 pages by unique users, and three additional analytical aggregations. The full response assembled in under 500 milliseconds in typical conditions.
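The parallel assembly pattern can be sketched as below. The SQL strings and result names are illustrative assumptions, not the platform's actual statements, and safeQuery here is a stand-in that resolves to an empty array, as the real wrapper does for absent tables.

```typescript
// Hypothetical sketch of the /api/admin/analytics assembly.
async function safeQuery(sql: string): Promise<unknown[]> {
  return []; // the real wrapper delegates to chQuery and swallows errors
}

async function buildAnalyticsPayload() {
  // All aggregations are issued at once via Promise.all, so total
  // latency is bounded by the slowest single query, not the sum.
  const [searches7d, aiQuestions7d, topTerms7d, dailyVolume30d] =
    await Promise.all([
      safeQuery("SELECT count() FROM argus.search_log WHERE ts > now() - INTERVAL 7 DAY"),
      safeQuery("SELECT count() FROM argus.search_log WHERE is_ai = 1 AND ts > now() - INTERVAL 7 DAY"),
      safeQuery("SELECT term, count() AS c FROM argus.search_log GROUP BY term ORDER BY c DESC LIMIT 20"),
      safeQuery("SELECT toDate(ts) AS d, count() AS c FROM argus.search_log GROUP BY d ORDER BY d"),
      // ...remaining aggregations elided in this sketch
    ]);
  return { searches7d, aiQuestions7d, topTerms7d, dailyVolume30d };
}
```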
The 30-day search volume chart was rendered as a pure SVG polyline computed from the ClickHouse daily bucket array, with a second dashed line overlaid for the AI-only subset. No charting library was introduced. NPS was computed as (promoters − detractors) / total × 100, where promoters were scores of 9 to 10 and detractors were scores of 1 to 6.
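Both computations are small enough to sketch directly. The function names and the 1-to-10 score scale follow the description above; the viewBox dimensions are illustrative assumptions.

```typescript
// Map daily counts onto an SVG polyline "points" attribute for a
// width x height viewBox. The y axis is inverted because SVG y
// coordinates grow downward.
function polylinePoints(counts: number[], width = 300, height = 80): string {
  const max = Math.max(...counts, 1);
  const stepX = width / Math.max(counts.length - 1, 1);
  return counts
    .map((c, i) => `${(i * stepX).toFixed(1)},${(height - (c / max) * height).toFixed(1)}`)
    .join(" ");
}

// NPS as described: promoters are scores 9-10, detractors 1-6,
// and the result is scaled to the [-100, 100] range.
function nps(scores: number[]): number {
  if (scores.length === 0) return 0;
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return ((promoters - detractors) / scores.length) * 100;
}
```

The points string drops straight into `<polyline points={...} />`; the dashed AI-only overlay is the same function applied to the AI subset of the buckets.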
The analytics endpoint used the same currentUser() Clerk-compatible wrapper as the rest of the platform. An email allowlist check supplemented the role-based check in publicMetadata. The auth failure path returned a 403, not a redirect.
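The layered check might look like the sketch below. ADMIN_EMAILS, the User shape, and authResponse are illustrative assumptions; only the allowlist-plus-role logic and the 403-not-redirect behavior come from the description above.

```typescript
// Hypothetical allowlist layered on the publicMetadata role check.
const ADMIN_EMAILS = new Set(["operator@example.com"]); // assumed config

interface User {
  email: string;
  publicMetadata: { role?: string };
}

function isAuthorized(user: User | null): boolean {
  if (!user) return false;
  // Both conditions must hold: the role-based check AND the allowlist.
  return user.publicMetadata.role === "admin" && ADMIN_EMAILS.has(user.email);
}

// The failure path is a 403 response, not a redirect, since the
// caller is a fetch from the dashboard rather than a browser navigation.
function authResponse(user: User | null): { status: number } {
  return isAuthorized(user) ? { status: 200 } : { status: 403 };
}
```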
CHAPTER 04
The /api/admin/analytics endpoint assembled and returned a full nine-query payload in 280 to 450 milliseconds at typical load. The safeQuery wrapper ensured that tables added incrementally did not require coordinated deployments. Feature usage tracking confirmed which sections of the platform received meaningful engagement. Search query logging exposed the most common symbols and revealed that AI-mode questions constituted a measurable fraction of all search events.
CHAPTER 05
DECISION · 01
ClickHouse is sufficient for product analytics at this scale. There was no case for PostHog, Mixpanel, or Amplitude at current event volume. ClickHouse handled arbitrary GROUP BY aggregations over millions of rows in under 500 milliseconds. The same database that stored market data stored behavioral events, eliminating operational overhead for a separate analytics infrastructure.
DECISION · 02
safeQuery everywhere is the right default. Analytics tables get created incrementally. A query that throws on a missing table should not take down the dashboard. Returning empty arrays and rendering "No data yet" placeholders is a better user experience than a 500 response.
DECISION · 03
The audit log should have been in from the start. Adding logAuditEvent later meant early admin views had no paper trail. For any admin surface that touches user data, audit logging should be wired before the first deployment, not retrofitted.