We built a crypto screener that computes 52-week highs, 30-day returns, and volume rankings for 19,172 symbols from 87M rows in a single ClickHouse GROUP BY scan, returning results in under 420ms.
19,172
Symbols in screener
180-420ms
Query execution time
85%+
Default cache hit rate
under 15ms
Default cache latency
CHAPTER 01
Crypto market data is abundant and effectively unusable in raw form. A trader scanning for momentum candidates across 1,200 perpetual futures pairs cannot process that universe manually. The filtering primitives that equity screeners offer do not exist natively across the crypto data layer when the underlying source is a time-series bar table rather than a pre-aggregated snapshot.
The Argus data layer had accumulated 87 million rows of daily bars in argus.bars_1d across spot pairs, perpetual futures, and leveraged tokens on 10+ exchanges. The screener requirement was to expose this full universe as a filterable, sortable table, computing derived metrics such as 7-day and 30-day returns, 52-week high/low proximity, and daily change percentage on demand from the bar table rather than from a pre-computed snapshot that would go stale immediately.
CHAPTER 02
The screener was implemented as a single SQL query against ClickHouse at GET /api/screener, parameterized by exchange filter, price range, volume floor, return range, proximity to 52-week extremes, sort column, and pagination.
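An illustrative call to the endpoint is shown below; the query parameter names are assumptions, since the case study lists only the filter dimensions, not their exact names.

```typescript
// Hypothetical request against GET /api/screener. Parameter names are
// illustrative; only the filter dimensions come from the case study.
const res = await fetch(
  '/api/screener?exchange=us&min_price=0.01&min_volume=1000000' +
    '&min_return_30d=20&sort=volume_30d&order=desc&page=1&limit=50',
);
const { data, total } = await res.json();
console.log(`showing ${data.length} of ${total} symbols`);
```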
ClickHouse was the correct engine. The argMax(close, ts) aggregation function returned the most recent close price per symbol in a single pass over the table. Conditional aggregations using FILTER(WHERE ts >= today() - N) computed windowed statistics in the same pass. The entire screener metric set for a filtered universe was computed in one query scan. The query architecture used a window function to avoid the classical double-scan problem: data and total count were returned in a single pass using count() OVER () as a virtual column.
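The shape of that single-pass query can be sketched as follows. This is a minimal illustration rather than the production query: it assumes bars_1d carries (symbol, ts, high, close, volume), uses the oldest close inside the 30-day window as a stand-in for the price 30 days ago, and hard-codes the sort and pagination that the real endpoint builds from validated request parameters. The FILTER clauses are equivalent to ClickHouse's -If combinators (argMinIf, maxIf, sumIf).

```typescript
// Minimal sketch of the single-pass screener scan; metric names are illustrative.
const screenerSql = `
  WITH per_symbol AS (
    SELECT
      symbol,
      argMax(close, ts)                                     AS last_close,    -- most recent close
      argMin(close, ts) FILTER (WHERE ts >= today() - 30)   AS close_30d_ago, -- oldest close in the 30-day window
      max(high)         FILTER (WHERE ts >= today() - 365)  AS high_52w,      -- 52-week high
      sum(volume)       FILTER (WHERE ts >= today() - 30)   AS volume_30d     -- 30-day volume for ranking
    FROM argus.bars_1d
    GROUP BY symbol
  )
  SELECT
    symbol,
    last_close,
    round((last_close / nullIf(close_30d_ago, 0) - 1) * 100, 2) AS return_30d_pct,
    round((last_close / nullIf(high_52w, 0)) * 100, 2)          AS pct_of_52w_high,
    volume_30d,
    count() OVER () AS total_rows  -- row count for pagination, no second scan
  FROM per_symbol
  WHERE volume_30d > 0             -- filter clauses are appended from validated params
  ORDER BY volume_30d DESC
  LIMIT 50 OFFSET 0
`;
```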
ARCHITECTURE OVERVIEW
PRESENTATION · Next.js 15 (App Router)
API LAYER · TypeScript 5.4, auth + rate limit + versioning
DATABASE · ClickHouse 26.3 (argMax, conditional aggregation)
SERVICES · Redis 7.2 (default cache)
QUEUE
CHAPTER 03
Exchange routing used a heuristic based on symbol format rather than an explicit exchange column. US equities were identified by symbols without dots and length under 6. Crypto symbols followed the EXCHANGE:BASE-QUOTE convention from the ingest daemons.
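A sketch of that heuristic follows; the function and type names are placeholders rather than the production code.

```typescript
// Hypothetical routing heuristic based purely on symbol format.
type Venue = 'us_equity' | 'crypto';

function routeSymbol(symbol: string): Venue {
  // Crypto symbols from the ingest daemons follow EXCHANGE:BASE-QUOTE, e.g. "BINANCE:BTC-USDT".
  if (symbol.includes(':')) return 'crypto';
  // Dot-free tickers shorter than six characters are treated as US equities, e.g. "AAPL".
  if (!symbol.includes('.') && symbol.length < 6) return 'us_equity';
  // Anything else falls through to the crypto path in this sketch.
  return 'crypto';
}
```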
SQL injection prevention: sort column names were validated against a Set of allowed columns before interpolation into the query. All numeric parameters were passed through parseFloat before interpolation and were never treated as strings.
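Both guards are small; the sketch below shows the intent, with placeholder column and helper names.

```typescript
// Sort columns are validated against an allowlist before interpolation into ORDER BY.
const SORTABLE_COLUMNS = new Set([
  'last_close', 'volume_30d', 'return_30d_pct', 'pct_of_52w_high',
]);

function buildOrderBy(sortBy: string, direction: string): string {
  if (!SORTABLE_COLUMNS.has(sortBy)) {
    throw new Error(`unsupported sort column: ${sortBy}`);
  }
  const dir = direction.toUpperCase() === 'ASC' ? 'ASC' : 'DESC';
  return `ORDER BY ${sortBy} ${dir}`;
}

// Numeric filters are coerced with parseFloat so only finite numbers reach the query text.
function toNumericParam(raw: string | null, fallback: number): number {
  const value = raw === null ? NaN : parseFloat(raw);
  return Number.isFinite(value) ? value : fallback;
}
```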
The first implementation computed windowed returns using a correlated subquery per symbol, which produced N+1 scans. Conditional aggregation with FILTER ran the same computation in a single GROUP BY pass. The performance difference was approximately 10x for typical filter combinations.
The Redis default cache covered only the exact default parameter combination: US exchange, no filters, page 1, sort by volume DESC, limit 50. The background cache warmer populated this key on a schedule. Cache hit rate for the default view exceeded 85% under normal traffic patterns.
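The fast path is a read-through wrapper around that one parameter combination. In the sketch below the key name, TTL, and client wiring are assumptions; the case study specifies only which view is cached and that a background warmer refreshes it.

```typescript
import { createClient } from 'redis';

// Only the exact default view is cached: US exchange, no filters, page 1,
// sort by volume DESC, limit 50. Key name and TTL are illustrative.
const redis = createClient({ url: process.env.REDIS_URL });
const DEFAULT_VIEW_KEY = 'screener:default:us:volume_desc:page1:limit50';

async function getDefaultScreenerView(
  runScreenerQuery: () => Promise<unknown>,
): Promise<unknown> {
  if (!redis.isOpen) await redis.connect();

  const cached = await redis.get(DEFAULT_VIEW_KEY);
  if (cached) return JSON.parse(cached);      // fast path, typically under 15ms

  const fresh = await runScreenerQuery();     // fall through to ClickHouse
  await redis.set(DEFAULT_VIEW_KEY, JSON.stringify(fresh), { EX: 120 }); // TTL is a placeholder
  return fresh;
}
```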
CHAPTER 04
The screener query executed against the full bars_1d table (87 million rows, 19,172 symbols, data from 2010 to 2026) in 180 to 420 milliseconds for typical filter combinations. The combined data-and-count query using count() OVER () avoided a second table scan, reducing query time by approximately 35% compared to the two-query approach. The Redis default-cache path returned in under 15 milliseconds for the most common query variant.
CHAPTER 05
DECISION · 01
argMax(close, ts) with FILTER(WHERE ts >= today() - N) is the right pattern for time-windowed aggregation. The first implementation computed windowed returns using a correlated subquery per symbol, which produced N+1 scans. Conditional aggregation with FILTER ran the same computation in a single GROUP BY pass, with approximately a 10x performance improvement.
DECISION · 02
The double-scan problem is real and solved by window functions. Fetching data and count as two separate queries doubled ClickHouse load for every paginated screener request. The count() OVER () window function eliminated this at the cost of slightly more complex result parsing.
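That extra parsing amounts to reading the total from any row and stripping the virtual column before the response is serialized; a small sketch with illustrative field names:

```typescript
// Every row of the combined result carries the same total_rows value, so the
// total is read once and the virtual column is dropped from the payload.
interface ScreenerRow {
  symbol: string;
  total_rows: number;
  [metric: string]: unknown;
}

function splitRowsAndTotal(rows: ScreenerRow[]) {
  const total = rows.length > 0 ? rows[0].total_rows : 0;
  const data = rows.map(({ total_rows, ...metrics }) => metrics);
  return { data, total };
}
```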
DECISION · 03
Regime data and screener data should be joinable. The screener returned price metrics but not the regime classification or novelty score for each symbol. A planned enhancement was to join the Redis-backed regime data into the screener response, enabling single-query filtering such as "crypto symbols in the trending_up regime with a 30-day return above 20%".
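A hedged sketch of what that join could look like, assuming the regime classification and novelty score live in Redis under per-symbol keys; the key shape, field names, and types below are hypothetical.

```typescript
import type { RedisClientType } from 'redis';

// Hypothetical enrichment step: after the ClickHouse scan, regime fields are
// read from Redis and merged into each screener row before the response is built.
interface RegimeInfo {
  regime: string;       // e.g. "trending_up"
  noveltyScore: number;
}

interface ScreenerRow {
  symbol: string;
  [metric: string]: unknown;
}

async function attachRegimes(
  rows: ScreenerRow[],
  redis: RedisClientType,
): Promise<ScreenerRow[]> {
  const keys = rows.map((r) => `regime:${r.symbol}`); // key shape is an assumption
  const values = await redis.mGet(keys);
  return rows.map((row, i) => {
    const raw = values[i];
    if (!raw) return row;                             // no regime data for this symbol
    const regime = JSON.parse(raw) as RegimeInfo;
    return { ...row, ...regime };
  });
}
```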