Build a Real-Time Commodity Price Dashboard with WebSockets and TimescaleDB

Turn soybean, corn and wheat ticks into a low-latency dashboard with WebSockets, TimescaleDB and React—full schema, code and scaling patterns.

If you’ve wrestled with jittery live price feeds, slow queries on tick data, or brittle deployment patterns for real-time dashboards—this guide gives you a repeatable, production-ready pattern for turning soybean, corn and wheat tick data into a low-latency, maintainable web dashboard that updates in real time.

We’ll combine WebSockets for low-latency transport, TimescaleDB for scalable time-series storage and React for a lightweight front end with live charts. You’ll get concrete schema, SQL, server and client code, scaling advice, and 2026 trends that should shape your architecture choices.

Why this matters in 2026

  • Commodities markets generate high-frequency tick data; teams need sub-second visibility for trading desks, logistics and analytics.
  • TimescaleDB has continued to gain traction as a leading Postgres-native time-series option for operational analytics—compression and continuous-aggregate patterns let you retain fine-grained ticks while supporting fast reads.
  • WebSockets remain the simplest, widely-supported approach for low-latency browser updates; newer alternatives like WebTransport are emerging, but WebSockets are still the mainstream choice for dashboards in 2026.

System overview — the architecture

At a glance the pipeline looks like this:

  1. Market data source (tick feed for soybean, corn, wheat) — raw ticks via exchange feed or market data vendor.
  2. Ingest worker(s) — normalize, batch and insert ticks into TimescaleDB. Also publish lightweight notifications.
  3. Message layer — LISTEN/NOTIFY for small-scale, Redis/NATS/Kafka for large-scale broadcast across WebSocket servers.
  4. WebSocket server(s) — accept browser connections, subscribe to message layer, broadcast tick updates to clients.
  5. React front end — receives tick/aggregate events, updates charts with a ring buffer and on-demand historical queries to TimescaleDB.

Design principle: store everything raw, compute aggregates for display. Raw ticks are cheap after compression—use continuous aggregates for the UI.

Data model: ticks + aggregates

Store raw ticks in a hypertable and generate continuous aggregates for common time buckets (1s, 1m, 5m). This gives the dual benefit of perfect fidelity and blazing-fast reads.

Tick table schema (TimescaleDB)

CREATE TABLE tick_quotes (
  time        timestamptz       NOT NULL,
  symbol      TEXT              NOT NULL,
  price       NUMERIC(12,4)     NOT NULL,
  size        BIGINT,
  side        SMALLINT,  -- 1 = buy, -1 = sell
  exchange    TEXT,
  received_at timestamptz DEFAULT now()
);

-- Convert to hypertable
SELECT create_hypertable('tick_quotes', 'time', chunk_time_interval => INTERVAL '12 hours');

-- Index for fast recent queries per symbol
CREATE INDEX ON tick_quotes (symbol, time DESC);

Continuous aggregates for the UI

Create a 1-second and 1-minute continuous aggregate for charting. Continuous aggregates let you query precomputed buckets without scanning raw ticks.

CREATE MATERIALIZED VIEW cagg_1s
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 second', time) AS bucket,
       symbol,
       first(price, time) AS open,
       max(price) AS high,
       min(price) AS low,
       last(price, time) AS close,
       sum(size) AS volume
FROM tick_quotes
GROUP BY bucket, symbol;

-- Add policy to refresh frequently (every 1 second)
SELECT add_continuous_aggregate_policy('cagg_1s',
    start_offset => INTERVAL '1 minute',
    end_offset => INTERVAL '10 seconds',
    schedule_interval => INTERVAL '1 second');

-- 1-minute aggregated view
CREATE MATERIALIZED VIEW cagg_1m
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 minute', time) AS bucket,
       symbol,
       first(price, time) AS open,
       max(price) AS high,
       min(price) AS low,
       last(price, time) AS close,
       sum(size) AS volume
FROM tick_quotes
GROUP BY bucket, symbol;

SELECT add_continuous_aggregate_policy('cagg_1m',
    start_offset => INTERVAL '1 hour',
    end_offset => INTERVAL '10 seconds',
    schedule_interval => INTERVAL '15 seconds');

Retention & compression (cost control)

Keep raw per-tick data for a short operational window (e.g., 30 days), compress older chunks and drop beyond retention. Timescale's compression is critical in 2026 for storing months of tick data cheaply.

-- Enable compression policy: compress chunks older than 2 days
ALTER TABLE tick_quotes SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'symbol'
);

SELECT add_compression_policy('tick_quotes', INTERVAL '2 days');

-- Optional: drop raw chunks older than 90 days
SELECT add_retention_policy('tick_quotes', INTERVAL '90 days');

Ingest pattern: batch and notify

High-frequency single-row inserts are expensive. Use micro-batches with COPY or multi-row INSERTs to maintain throughput, then send a low-latency notification for the downstream WebSocket layer.

Node.js ingest worker (pattern)

Example: ingest worker receives raw ticks from vendor, batches them every 50–200ms, writes to TimescaleDB, then NOTIFYs with a compact summary for broadcasting.

// Batched ingest worker (Node.js)
const { Pool } = require('pg');

const pool = new Pool({ connectionString: process.env.DSN });
const FLUSH_INTERVAL_MS = 100; // micro-batch window (50–200 ms works well)
let queue = [];

async function flushBatch() {
  if (!queue.length) return;
  const batch = queue;
  queue = [];

  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    // Multi-row INSERT: build "($1,$2,$3,$4),($5,...)" placeholders plus a flat params array
    const placeholders = batch
      .map((_, i) => `($${i * 4 + 1}, $${i * 4 + 2}, $${i * 4 + 3}, $${i * 4 + 4})`)
      .join(',');
    const params = batch.flatMap(t => [t.time, t.symbol, t.price, t.size]);
    await client.query(
      `INSERT INTO tick_quotes (time, symbol, price, size) VALUES ${placeholders}`,
      params
    );

    // Notify a lightweight summary for real-time broadcast.
    // NOTIFY cannot take bind parameters, so use pg_notify() instead.
    const last = batch[batch.length - 1];
    const payload = JSON.stringify({ symbol: last.symbol, last: last.price, t: last.time });
    await client.query('SELECT pg_notify($1, $2)', ['tick_notify', payload]);

    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    queue = batch.concat(queue); // re-queue the batch so the next flush retries it
    throw err;
  } finally {
    client.release();
  }
}

// Flush on a timer; the feed handler pushes normalized ticks into `queue`.
setInterval(() => flushBatch().catch(console.error), FLUSH_INTERVAL_MS);
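
For higher throughput than multi-row INSERTs, the same flush can stream the batch with COPY. A minimal sketch, assuming the pg-copy-streams package and CSV framing (the column list and formatting here are illustrative):

// COPY-based flush sketch (assumes the pg-copy-streams package)
const { pipeline } = require('node:stream/promises');
const { Readable } = require('node:stream');
const { from: copyFrom } = require('pg-copy-streams');

async function copyBatch(client, batch) {
  // One CSV line per tick, streamed into COPY ... FROM STDIN
  const csv = batch
    .map(t => `${t.time},${t.symbol},${t.price},${t.size}\n`)
    .join('');
  const copyStream = client.query(
    copyFrom('COPY tick_quotes (time, symbol, price, size) FROM STDIN WITH (FORMAT csv)')
  );
  await pipeline(Readable.from([csv]), copyStream);
}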

Note: pg NOTIFY payloads are limited (~8 KB), so send compact messages or message IDs. For high scale, use Redis/NATS/Kafka to distribute notifications and keep the DB dedicated to storage.

WebSocket server: broadcast with DB notifications

The WebSocket layer should be stateless regarding ticks and use a message bus to broadcast inbound notifications. For simpler deployments, a single WebSocket server can LISTEN/NOTIFY to Postgres and broadcast to all clients.

Node.js WebSocket server (example)

const WebSocket = require('ws');
const { Client } = require('pg');

const wss = new WebSocket.Server({ port: 8080 });

async function main() {
  const pgClient = new Client({ connectionString: process.env.DSN });
  await pgClient.connect();
  await pgClient.query('LISTEN tick_notify');

  pgClient.on('notification', msg => {
    if (msg.channel === 'tick_notify') {
      const data = JSON.parse(msg.payload);
      // Broadcast to all connected clients
      wss.clients.forEach(ws => {
        if (ws.readyState === WebSocket.OPEN) ws.send(JSON.stringify(data));
      });
    }
  });
}

wss.on('connection', ws => {
  // Optionally parse the query string to subscribe to symbol(s)
  ws.on('message', m => { /* subscribe/unsubscribe handling */ });
});

main().catch(console.error);

For production: add connection pooling (PgBouncer), a message broker (Redis or NATS) to fan out notifications to multiple WebSocket processes, and authentication (e.g., JWT) for client subscriptions. A shared pub/sub layer lets every WebSocket process receive every update and serve its own connected clients.
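
As a sketch of that shared pub/sub layer on the WebSocket side, each process can subscribe to a Redis channel and relay messages to its own clients. This assumes the node-redis v4 client and a channel named 'ticks'; both are illustrative choices:

// Redis fan-out sketch (assumes the `redis` npm package, v4 API)
const { createClient } = require('redis');

async function subscribeTicks(wss) {
  const sub = createClient({ url: process.env.REDIS_URL });
  await sub.connect();

  // Every WebSocket process subscribes; the ingest worker publishes to 'ticks'
  await sub.subscribe('ticks', message => {
    wss.clients.forEach(ws => {
      if (ws.readyState === 1 /* OPEN */) ws.send(message);
    });
  });
}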

React front end: ring buffer + incremental updates

The client should maintain a bounded in-memory buffer (e.g., last N seconds or last M ticks) and apply incremental updates rather than re-requesting full series. Send subscription messages to the WebSocket for the symbols the user wants.

React example (core)

// Pseudocode for React hook
import { useEffect, useRef, useState } from 'react';

function useLiveTicks(symbol) {
  const wsRef = useRef(null);
  const [data, setData] = useState([]); // ring buffer
  const bufferSize = 5000; // keep last 5000 ticks

  useEffect(() => {
    wsRef.current = new WebSocket('wss://api.example.com/ws');
    wsRef.current.onopen = () => wsRef.current.send(JSON.stringify({ type: 'subscribe', symbol }));

    wsRef.current.onmessage = e => {
      const tick = JSON.parse(e.data);
      setData(prev => {
        const next = prev.concat(tick);
        if (next.length > bufferSize) next.splice(0, next.length - bufferSize);
        return next;
      });
    };

    return () => {
      if (wsRef.current.readyState === WebSocket.OPEN) {
        wsRef.current.send(JSON.stringify({ type: 'unsubscribe', symbol }));
      }
      wsRef.current.close();
    };
  }, [symbol]);

  return data;
}

Use a fast drawing library: uPlot for lightweight canvas charts, or a WebGL option (e.g., Plotly's WebGL traces or LightningChart) for heavier workloads. Render incremental updates and avoid full re-renders.
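
As a rough sketch of incremental updates with uPlot (the layout options, container id and 5000-point cap are assumptions), keep parallel arrays of timestamps and prices and call setData when ticks arrive instead of recreating the chart:

// Incremental uPlot update sketch (assumes the `uplot` npm package)
import uPlot from 'uplot';

const times = [];   // x values (unix seconds)
const prices = [];  // y values

const chart = new uPlot(
  { width: 800, height: 300, series: [{}, { label: 'last', stroke: 'green' }] },
  [times, prices],
  document.getElementById('chart') // assumed container element
);

// Called for each incoming tick; trims the buffer and redraws in place
function pushTick(tick) {
  times.push(Math.floor(new Date(tick.t).getTime() / 1000));
  prices.push(Number(tick.last));
  if (times.length > 5000) { times.shift(); prices.shift(); }
  chart.setData([times, prices]);
}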

Performance & scaling checklist

  • Batch ingests: Use COPY or multi-row INSERT to reduce insert pressure.
  • Connection pooling: Use PgBouncer (transaction mode) to avoid connection explosion.
  • Message bus: Use Redis Pub/Sub or NATS for multi-process broadcast; use Kafka for durable replay and analytical pipelines.
  • LISTEN/NOTIFY: Lightweight and easy for small deployments; not a replacement for a real message bus at scale.
  • Indices: Index by (symbol, time DESC) for fast tail queries.
  • Continuous aggregates & compression: Use for UI performance and storage savings.
  • Backpressure: If clients cannot keep up, send aggregated snapshots (1s aggregates) rather than every tick.
  • Monitoring: Track p99 WebSocket latency, DB write latency and chunk compression lag; instrument with Prometheus and Grafana.

Operational considerations and pitfalls

DB connection storm

Browsers don’t connect to Postgres directly—but each WebSocket server that uses LISTEN needs a persistent DB connection. Keep that number small and use a shared pub/sub if you scale out to many WebSocket workers.

Small notifications, large realities

NOTIFY payload limits mean you should send compact JSON with symbol and timestamp, and let WebSocket servers query for full context or rely on the broadcast message format. Alternatively, publish detailed updates to Redis or NATS, which support larger payloads.
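
For example, the ingest worker could publish full tick payloads to Redis and keep NOTIFY as nothing more than a wake-up signal, or drop NOTIFY entirely. A minimal publisher sketch with the node-redis v4 client, mirroring the subscriber sketch earlier (the 'ticks' channel name is an assumption):

// Publisher sketch (ingest worker side): full payloads go to Redis, not NOTIFY
const { createClient } = require('redis');

const pub = createClient({ url: process.env.REDIS_URL });

async function publishBatch(batch) {
  if (!pub.isOpen) await pub.connect();
  for (const tick of batch) {
    // No practical payload limit here, unlike pg NOTIFY's ~8 KB
    await pub.publish('ticks', JSON.stringify(tick));
  }
}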

Fairness across symbols

High-volume symbols (e.g., corn during certain windows) can drown out others. Implement per-symbol rate limits and server-side aggregation to guarantee UI responsiveness, as in the sketch below.
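
A minimal sketch of that server-side aggregation: keep only the latest tick per symbol and broadcast on a fixed cadence, so a burst in one contract cannot starve the others. The 250 ms cadence and the broadcast() helper are assumptions:

// Per-symbol coalescing sketch: at most one update per symbol per interval
const latestBySymbol = new Map();
const BROADCAST_INTERVAL_MS = 250; // assumed cadence

// Called for every inbound tick from the message layer
function onTick(tick) {
  latestBySymbol.set(tick.symbol, tick); // later ticks overwrite earlier ones
}

// A fixed cadence keeps quiet symbols visible even during bursts elsewhere
setInterval(() => {
  for (const tick of latestBySymbol.values()) {
    broadcast(JSON.stringify(tick)); // broadcast() = your fan-out helper
  }
  latestBySymbol.clear();
}, BROADCAST_INTERVAL_MS);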

2026 trends to watch

  • WebTransport & QUIC: Emerging alternatives for low-latency binary transport—consider them for future upgrades if you need even lower tail latency and multiplexing features.
  • Edge computing: Pushing aggregation to edge functions (Cloudflare Workers, Fastly Compute) can reduce round-trip latency for geographically distributed clients.
  • Time-series SQL convergence: Postgres-based time-series tooling like Timescale has matured; expect tighter integration with cloud observability stacks and more built-in real-time utilities through 2026.
  • Hybrid architectures: Many teams combine a cloud-managed Timescale instance for storage with serverless or container-based WebSocket layers for cost efficiency and scaling.

Advanced strategies

Client-side delta compression

Send diffs instead of full frames. For charts, transmit only the newest ticks or aggregated OHLC per second and reconstruct streams on the client to save bandwidth.
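
A rough sketch of that idea: the server remembers the last frame per symbol and sends only fields that changed; the client merges each delta back into its local state. The field names here are assumptions, not a fixed wire format:

// Delta encoding sketch: send only fields that changed since the last frame
const lastSent = new Map(); // symbol -> last full frame

function encodeDelta(frame) {
  const prev = lastSent.get(frame.symbol) || {};
  const delta = { symbol: frame.symbol, t: frame.t };
  for (const key of ['last', 'bid', 'ask', 'volume']) {
    if (frame[key] !== prev[key]) delta[key] = frame[key];
  }
  lastSent.set(frame.symbol, frame);
  return delta;
}

// Client side: merge a delta into the current state before pushing to the chart
function applyDelta(state, delta) {
  return { ...state, ...delta };
}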

Materialized view prefetch

When a client first opens a symbol, query the 1m continuous aggregate for the last N intervals, then switch to the 1s stream for live updates. This hybrid pattern minimizes cold-start latency while preserving fidelity.
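
A sketch of the prefetch query against the cagg_1m view defined earlier, run once when a symbol is opened (the 120-interval window is an assumption):

// Prefetch sketch: seed the chart from the 1-minute continuous aggregate
async function loadInitialSeries(pool, symbol, intervals = 120) {
  const { rows } = await pool.query(
    `SELECT bucket, open, high, low, close, volume
       FROM cagg_1m
      WHERE symbol = $1
      ORDER BY bucket DESC
      LIMIT $2`,
    [symbol, intervals]
  );
  return rows.reverse(); // oldest-first for charting, then switch to the live stream
}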

Analytics side-channel

Duplicate the tick feed into a Kafka topic for downstream analytics, backtesting and ML. TimescaleDB remains the canonical store for operational queries and short-term retention; Kafka can feed long-term lakehouse storage.
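
A minimal sketch of that duplication with the kafkajs client; the topic name and broker address are assumptions:

// Kafka side-channel sketch (assumes the `kafkajs` package)
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'tick-ingest', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function startSideChannel() {
  await producer.connect();
}

async function publishToKafka(batch) {
  await producer.send({
    topic: 'commodity-ticks', // assumed topic
    messages: batch.map(t => ({
      key: t.symbol, // keying by symbol preserves per-symbol ordering within a partition
      value: JSON.stringify(t),
    })),
  });
}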

Quickstart checklist (copy-paste deployable)

  1. Deploy TimescaleDB (Cloud or self-host) and create the tick_quotes hypertable.
  2. Implement ingest worker that batches and writes ticks; notify on inserts (or publish to Redis).
  3. Run a simple WebSocket server that LISTENs for notifications and broadcasts to clients.
  4. Build a React client using a ring buffer and uPlot for charts; subscribe over WebSocket.
  5. Enable continuous aggregates and compression policy in TimescaleDB for 1s/1m buckets.
  6. Monitor with Prometheus (DB and servers) and set alerts for p99 latency and ingestion lag.

Mini case study — what you’ll see

In a recent prototype, an engineering team ingested ~120k ticks/s across three commodity symbols. Using micro-batched COPY inserts and a Redis pub/sub layer to broadcast compact JSON, they achieved:

  • WebSocket fanout latency < 50 ms for regional clients
  • Sub-second update rates on the browser chart for per-tick rendering
  • Storage reduction of ~12x after enabling Timescale compression and retention on 90-day data

Wrap-up: key takeaways

  • Model raw ticks in a TimescaleDB hypertable and generate continuous aggregates for charting—this balances fidelity and speed.
  • Batch writes, compress older chunks, and use retention policies to control cost without sacrificing analysis capability.
  • Use WebSockets for low-latency delivery, but introduce a proper message bus (Redis/NATS/Kafka) as you scale horizontally.
  • On the client, keep a ring buffer and apply deltas/aggregates to avoid full-series re-rendering and reduce bandwidth.

Next steps (practical)

  1. Spin up a TimescaleDB instance and create the tick schema above.
  2. Wire a small mock feed (random tick generator) into an ingest worker and verify inserts and NOTIFY flow; a generator sketch follows after this list.
  3. Stand up a local WebSocket server and build a minimal React UI to validate end-to-end latency and charting behavior.
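
A tiny mock feed you could drop into step 2: it random-walks prices for three symbols and pushes ticks into the ingest worker's queue. The symbol names, starting prices and 5 ms cadence are assumptions:

// Mock tick generator sketch: random-walk prices for three commodity symbols
const prices = { SOY: 1180.0, CORN: 445.0, WHEAT: 560.0 }; // assumed starting levels

function randomTick() {
  const symbols = Object.keys(prices);
  const symbol = symbols[Math.floor(Math.random() * symbols.length)];
  prices[symbol] += (Math.random() - 0.5) * 0.5; // small random walk
  return {
    time: new Date().toISOString(),
    symbol,
    price: Number(prices[symbol].toFixed(2)),
    size: 1 + Math.floor(Math.random() * 50),
  };
}

// Push a tick into the ingest worker's `queue` every few milliseconds
setInterval(() => queue.push(randomTick()), 5);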

Ready to prototype? Download the companion repo for this guide (schemas, example Node.js ingest + WebSocket servers, and a React demo) and try it with a mocked soybean/corn/wheat feed. If you want a production review—reach out for a short architecture consult and a tailored scaling plan for your tick volumes.
