You pick a chart library. Chart.js, Recharts, Highcharts, whatever. You build your dashboard. Then someone asks for a PNG export for a PDF report. Now you’re either wiring up a headless browser in your CI pipeline or copy-pasting rendering logic into a Node.js script that sort of works.
Then the design team wants dark mode. Your chart library has a theme system, but it’s tangled up with the DOM rendering, so now you’re debugging CSS specificity conflicts at 2am. Then someone on the team wants to unit test that the Y-axis scale domain is correct for a given dataset, and you realize you can’t do that without spinning up jsdom because the chart logic is inseparable from the SVG it produces.
This coupling between data transformation and rendering is so pervasive that most developers don’t even see it as a design choice. It’s just how chart libraries work. You hand data to a component, and it does everything: computes scales, generates tick marks, positions bars, and renders them to the DOM. It’s one monolithic operation.
But that monolith is the source of most of the friction. You can’t test math without a DOM. You can’t render without a browser. You can’t share logic between server and client without a compatibility shim. What if the chart library never touched the DOM at all?
Specs, Not Code
The core idea behind @opendata/viz is that you describe what you want, not how to render it. You write a JSON spec using a Vega-Lite-inspired encoding model: map data fields to visual channels (x, y, color, size), and the engine handles the rest.
const spec = {
type: 'bar',
data: [
{ category: 'Food', value: 332 },
{ category: 'Housing', value: 287 },
{ category: 'Transport', value: 198 },
],
encoding: {
x: { field: 'category', type: 'nominal' },
y: { field: 'value', type: 'quantitative' },
color: { field: 'category', type: 'nominal' },
},
chrome: {
title: 'CPI by Category',
subtitle: 'Consumer Price Index, December 2025',
source: 'Bureau of Labor Statistics',
},
};
The spec is the API. Not a component tree. Not a set of imperative drawing calls. A declarative description of what the chart should show. The engine’s job is turning this spec into math: scales, pixel positions, tick marks, legend entries. A renderer’s job is turning that math into pixels.
The spec can come from a database, from an LLM, from a config file. It serializes as JSON. You can validate it, diff it, version-control it. You can generate specs programmatically and inspect them before anything hits the screen.
The encoding model follows Vega-Lite conventions. Each channel (x, y, color, size) maps a field from the data to a visual property, with a type that tells the engine how to interpret the values: quantitative for continuous numbers, temporal for dates, nominal for unordered categories, ordinal for ordered ones. The engine uses this to pick the right scale type automatically. A temporal x-axis gets a time scale. A nominal color encoding gets an ordinal scale with the theme’s categorical palette.
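That channel-plus-type-to-scale mapping is small enough to sketch. The function below is illustrative (the name `scaleKindFor` is invented, not the library's API), but it encodes the same rules the paragraph describes:

```typescript
// Illustrative sketch: mapping a Vega-Lite-style encoding type to a
// scale kind. `scaleKindFor` is a hypothetical helper, not library code.
type EncodingType = 'quantitative' | 'temporal' | 'nominal' | 'ordinal';
type Channel = 'x' | 'y' | 'color' | 'size';
type ScaleKind = 'linear' | 'time' | 'band' | 'ordinal';

function scaleKindFor(type: EncodingType, channel: Channel): ScaleKind {
  switch (type) {
    case 'quantitative': return 'linear'; // continuous numbers
    case 'temporal':     return 'time';   // dates and timestamps
    case 'nominal':                       // unordered categories
    case 'ordinal':                       // ordered categories
      // positional channels divide space into bands; color/size map
      // discretely onto a palette or size ramp
      return channel === 'x' || channel === 'y' ? 'band' : 'ordinal';
  }
}

console.log(scaleKindFor('temporal', 'x'));     // "time"
console.log(scaleKindFor('nominal', 'color'));  // "ordinal"
```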
The Four-Package Split
The library is split into four packages with a strict dependency chain:
core → engine → vanilla → react
@opendata/viz-core is types and theme definitions. Zero DOM dependencies. This package defines the spec format (ChartSpec, TableSpec, GraphSpec), the layout format (ChartLayout, TableLayout), the theme system, color palettes, WCAG contrast checking, color-blindness simulation, and accessibility helpers like alt text generation. It’s pure TypeScript types, constants, and math utilities.
@opendata/viz-engine is the headless compiler. Pure math, no DOM, no SVG, no HTML. Takes a spec plus a width and height, produces a layout object with every position computed down to the pixel. Runs in Node.js, in the browser, in a worker thread. This is where scales are built, axes are generated, and chart marks are positioned.
@opendata/viz-vanilla is SVG and HTML rendering. Takes a layout object and produces actual DOM elements. This is the only package that touches the browser. It handles mounting, unmounting, resize observation, and chart exports.
@opendata/viz-react is thin wrappers around the vanilla renderer. useChart(), useDarkMode(), and a few convenience components. These are lifecycle bridges, not a parallel rendering implementation. The React <Chart /> component is roughly 30 lines: it creates a ref, calls createChart() from the vanilla package on mount, and chart.update(spec) when props change.
Why this matters in practice: you can run the engine in a Node.js server process for server-side rendering without pulling in jsdom or Puppeteer. The same compilation step produces identical output on server and client. And you can unit test chart logic (does this data produce the right scale domain? are the tick marks at the right pixel positions?) with plain Vitest, no DOM simulation required.
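To make the testing claim concrete, here is the flavor of test this enables, written against a stand-in for the engine's scale-domain logic (`quantitativeDomain` is a hypothetical helper, not the library's API). No jsdom, no browser, just assertions on math:

```typescript
// Stand-in for the engine's scale-domain step, to show that chart math
// is testable with plain assertions and no DOM simulation.
// `quantitativeDomain` is hypothetical; only the technique is the point.
function quantitativeDomain(values: number[], includeZero = true): [number, number] {
  const lo = Math.min(...values);
  const hi = Math.max(...values);
  // Bar-style charts anchor at zero so mark lengths stay honest.
  return includeZero ? [Math.min(0, lo), Math.max(0, hi)] : [lo, hi];
}

console.log(quantitativeDomain([332, 287, 198])); // [0, 332]
```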
What the Compiler Actually Does
The engine’s compileChart() function takes a raw spec and produces a ChartLayout through a pipeline of pure function transforms. Here’s the sequence:
1. Validate. Check the spec against the schema. Are required fields present? Are the encoding channels valid for this chart type? A scatter plot needs both x and y. A pie chart needs neither but requires a color encoding. The validator catches these and throws with specific error messages, not generic “invalid input.”
2. Normalize. Fill in defaults. If no dark mode setting is specified, default to "off". If chrome.title is a raw string, wrap it into a { text: string } object with style defaults. If encoding channels are missing type, infer it from the data (and emit a warning so the caller knows it happened).
3. Resolve theme. Merge the spec’s style overrides with the base theme. Dark mode support happens here: adaptTheme() flips backgrounds, adjusts text colors, and modifies gridline opacity. The theme carries categorical palettes, sequential color ramps, font families, spacing values, and border radii.
4. Compute legend. Determine legend entries from the color, size, and shape encodings. The legend needs to reserve space in the layout, so this runs early. The layout strategy (determined by responsive breakpoints) controls whether the legend goes on top, to the right, or inline with the chart.
5. Calculate dimensions. Compute the actual chart drawing area after subtracting margins, chrome heights (title, subtitle, source attribution), axis label space, and legend bounds. The output is a Rect with x, y, width, and height for the drawable area.
6. Build scales. Create scale functions mapping data values to pixel positions. Linear for quantitative data, band for categorical, time for temporal, log when specified. The zero and nice options control whether the domain snaps to clean values or includes zero. This is the mathematical core of the engine.
7. Generate axes. Compute tick positions, formatted tick labels, and axis line endpoints. The engine optimizes tick count based on available space and handles label rotation when categories crowd each other.
8. Compute gridlines. Derive from axis ticks. Each gridline gets a position and a major flag. Visibility is configurable per axis.
9. Position chart marks. This is the big step. The engine calls a chart-type-specific renderer (registered via a plugin registry) that maps each data point to its visual representation. For bars: x, y, width, height. For lines: an array of { x, y } points plus a pre-computed SVG path string with monotone curve interpolation. For scatter: cx, cy, r. Each mark carries its original data row and an ARIA label.
10. Process annotations. Convert reference lines, range highlights, and text callouts from data coordinates to pixel positions using the computed scales.
11. Generate tooltips. Compute tooltip content and anchor positions for each mark. The descriptors are a Map keyed by mark identifier, containing formatted field-value pairs with optional color swatches.
12. Build accessibility metadata. Auto-generate alt text from the spec and data. Produce a tabular data fallback for screen readers. Assign ARIA roles.
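As one example of the math these steps contain, the nice option from step 6 typically boils down to the classic nice-numbers rounding: snap a raw extent to tick boundaries that read cleanly. A sketch, under the assumption that the engine uses something like this standard algorithm:

```typescript
// Sketch of step 6's `nice` behavior: round a raw data extent to clean
// tick boundaries (the classic nice-numbers algorithm). Assumed, not
// the engine's actual implementation.
function niceNum(x: number, round: boolean): number {
  const exp = Math.floor(Math.log10(x));
  const f = x / 10 ** exp; // mantissa in [1, 10)
  let nf: number;
  if (round) nf = f < 1.5 ? 1 : f < 3 ? 2 : f < 7 ? 5 : 10;
  else nf = f <= 1 ? 1 : f <= 2 ? 2 : f <= 5 ? 5 : 10;
  return nf * 10 ** exp;
}

function niceExtent(min: number, max: number, maxTicks = 5) {
  const range = niceNum(max - min, false);
  const step = niceNum(range / (maxTicks - 1), true);
  return {
    min: Math.floor(min / step) * step,
    max: Math.ceil(max / step) * step,
    step,
  };
}

console.log(niceExtent(3, 997)); // { min: 0, max: 1000, step: 200 }
```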
Each step is a pure function. The output of step N feeds into step N+1. The final result is a ChartLayout object:
// The engine's output: pure data, no DOM
{
dimensions: { width: 800, height: 450 },
area: { x: 60, y: 55, width: 680, height: 340 },
chrome: {
title: { text: 'CPI by Category', x: 400, y: 20, style: { ... } },
subtitle: { text: 'Consumer Price Index...', x: 400, y: 42, style: { ... } },
source: { text: 'Bureau of Labor Statistics', x: 60, y: 440, style: { ... } },
},
axes: {
x: { ticks: [{ value: 'Food', position: 85, label: 'Food' }, ...] },
y: { ticks: [{ value: 100, position: 305, label: '100' }, ...] },
},
marks: [
{ type: 'rect', x: 10, y: 45, width: 150, height: 295,
fill: '#3b82f6', aria: { label: 'Food: 332' } },
// ...
],
legend: { position: 'top', entries: [...], bounds: { ... } },
a11y: {
altText: 'Bar chart showing CPI by Category across 3 categories (3 data points)',
dataTableFallback: [['category', 'value'], ['Food', 332], ...],
role: 'img',
},
theme: { ... }
}
Every value is resolved. No optionals, no “compute this later.” The renderer just walks the object and draws.
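To see where rect marks like the one above come from, here is a sketch of steps 6 and 9 for the bar case: a band scale divides the x range into equal steps, and each data row becomes a fully resolved rect. The helper names and numbers are invented for illustration:

```typescript
// Sketch of band-scale bar positioning (steps 6 and 9). Helper names
// are hypothetical; the technique is what matters.
interface BarMark { x: number; y: number; width: number; height: number }

function bandScale(domain: string[], range: [number, number], paddingRatio = 0.2) {
  const step = (range[1] - range[0]) / domain.length;
  const pad = step * paddingRatio;
  return {
    bandwidth: step - pad,
    position: (v: string) => range[0] + domain.indexOf(v) * step + pad / 2,
  };
}

// Assume a 600px-wide, 300px-tall drawable area and a y domain of [0, 340].
const data = [
  { category: 'Food', value: 332 },
  { category: 'Housing', value: 287 },
  { category: 'Transport', value: 198 },
];
const x = bandScale(data.map(d => d.category), [0, 600]);
const marks: BarMark[] = data.map(d => {
  const h = (d.value / 340) * 300; // linear y scale, zero-anchored
  return { x: x.position(d.category), y: 300 - h, width: x.bandwidth, height: h };
});

console.log(x.bandwidth); // 160
console.log(marks[1].x);  // 220
```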
Editorial-First Design
Most chart libraries treat titles, subtitles, and source attribution as afterthoughts. You pass a string, the library sticks it somewhere with default styling, and that’s that. Annotations are worse: most libraries either don’t support them at all or support them only through escape hatches into the underlying SVG.
@opendata/viz treats these as first-class structural elements through the Chrome system:
Title and subtitle are positioned with proper typographic hierarchy. The title renders bold, large, and dark. The subtitle renders at regular weight, smaller, in gray. This matches how editorial publications (The Economist, the Financial Times, Chartr) present data visualizations, where the headline carries the insight and the subtitle carries the methodology context.
Source attribution is a dedicated field in the spec. Rendered small, gray, at the bottom of the chart. Always present, because credible data visualization always cites its source. Making it a first-class field means it can’t go missing silently: you see the empty source field in your spec and think, “right, I should fill this in.”
Annotations come in three types: reference lines (horizontal or vertical lines at specific values, useful for baselines and thresholds), range highlights (shaded regions like recession bands), and text callouts (labels pointing to specific data points with optional connector lines). These are what turn a chart from “here’s some data” into “here’s the story in this data.”
Each chrome element accepts either a raw string or a ChromeText object with style overrides (fontSize, fontWeight, fontFamily, color). The engine resolves these against the theme, computes pixel positions, and includes them in the layout. The renderer doesn’t need to know anything about typography or positioning. It just draws text at the coordinates the engine provides.
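The string-or-object acceptance is a standard normalization pattern, and it is worth seeing how little it takes. A sketch (the field names follow this post; the default style values are invented):

```typescript
// Sketch of chrome-text normalization from step 2: accept a raw string
// or a styled object, always hand downstream code the resolved form.
// Default values here are invented for illustration.
interface ChromeStyle { fontSize: number; fontWeight: number; fontFamily: string; color: string }
interface ChromeText { text: string; style: ChromeStyle }

const TITLE_DEFAULTS: ChromeStyle = {
  fontSize: 18, fontWeight: 700, fontFamily: 'sans-serif', color: '#111',
};

function normalizeChromeText(
  input: string | { text: string; style?: Partial<ChromeStyle> },
  defaults: ChromeStyle = TITLE_DEFAULTS,
): ChromeText {
  if (typeof input === 'string') return { text: input, style: { ...defaults } };
  // Per-field overrides win; everything else falls back to the theme defaults.
  return { text: input.text, style: { ...defaults, ...input.style } };
}

console.log(normalizeChromeText('CPI by Category').style.fontWeight); // 700
```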
Accessibility Built In
Accessibility isn’t bolted on after rendering. It’s computed during compilation, alongside the marks and axes.
Auto-generated alt text. The engine produces descriptive alt text from the spec and data. For the CPI example: “Bar chart showing CPI by Category across 3 categories (3 data points).” When there are multiple series, it names them: “Line chart showing GDP Growth Rate from 2020 to 2024 with 2 series (US, UK).” This happens automatically. You don’t write alt text for every chart.
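The rule is mechanical enough to sketch. The wording below mirrors the example output; the `altText` function and its input shape are hypothetical, not the engine's actual code:

```typescript
// Sketch: derive alt text from a chart's type, title, and data shape.
// `altText` and `AltSpec` are invented for illustration; the real
// engine presumably handles more chart types and edge cases.
interface AltSpec {
  type: string;
  title: string;
  categories?: string[];
  series?: string[];
  points: number;
}

function altText(s: AltSpec): string {
  const kind = s.type[0].toUpperCase() + s.type.slice(1);
  let text = `${kind} chart showing ${s.title}`;
  if (s.categories) text += ` across ${s.categories.length} categories`;
  if (s.series && s.series.length > 1) {
    text += ` with ${s.series.length} series (${s.series.join(', ')})`;
  }
  return `${text} (${s.points} data points)`;
}

console.log(altText({
  type: 'bar',
  title: 'CPI by Category',
  categories: ['Food', 'Housing', 'Transport'],
  points: 3,
}));
// "Bar chart showing CPI by Category across 3 categories (3 data points)"
```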
ARIA labels on every mark. Each bar, line, point, and arc in the layout carries an aria object with a label field. “Food: 332.” The renderer maps these to aria-label attributes on the corresponding SVG elements. The structure is baked into the layout, not added by the renderer as an afterthought.
Color-blindness simulation. The core package includes simulation functions for protanopia, deuteranopia, and tritanopia using Brettel, Vienot, and Mollon (1997) matrices. checkPaletteDistinguishability() takes a palette and a deficiency type, simulates how each color appears, and checks whether all pairs maintain sufficient perceptual distance. This runs on color values, not on rendered images, so it works in the headless compilation step.
WCAG contrast checking. contrastRatio() computes the ratio between any two colors per WCAG 2.1. meetsAA() checks against the 4.5:1 threshold (3:1 for large text). findAccessibleColor() takes a color that fails contrast and binary-searches for an adjusted variant that preserves hue and saturation while meeting the target ratio.
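The contrast math itself comes straight from the WCAG 2.1 definitions, so it can be shown self-contained. This sketch implements `contrastRatio` and `meetsAA` per the spec's relative-luminance formula (the `relativeLuminance` helper and the `#rrggbb`-only parsing are my simplifications):

```typescript
// WCAG 2.1 relative luminance and contrast ratio, per the spec's
// published formulas. Hex parsing assumes "#rrggbb" for brevity.
function relativeLuminance(hex: string): number {
  const channel = (i: number) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // sRGB gamma expansion, as defined by WCAG 2.1
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * channel(1) + 0.7152 * channel(3) + 0.0722 * channel(5);
}

function contrastRatio(fg: string, bg: string): number {
  const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05); // lighter over darker, offset per spec
}

const meetsAA = (fg: string, bg: string, largeText = false) =>
  contrastRatio(fg, bg) >= (largeText ? 3 : 4.5);

console.log(contrastRatio('#000000', '#ffffff')); // 21
console.log(meetsAA('#767676', '#ffffff'));       // true (≈4.54:1)
```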
All of this means accessibility testing doesn’t require a browser. You can verify that a chart’s alt text is meaningful, its palette is distinguishable for color-blind users, and its text contrast meets standards, all in a unit test running in Node.
Chart Types
The engine supports six chart types through a plugin registry, plus tables.
Line. Time series, trends. Area fill is a variant: set type: 'area' in the spec, and the engine produces AreaMark objects with a computed SVG path for the filled region. Multi-series support via the color encoding channel.
Bar. Horizontal bars. Rankings, comparisons, anything where the categorical axis reads better vertically.
Column. Vertical bars. Categorical data, time-binned counts. Same computation as bars, rotated 90 degrees.
Scatter. Correlation, distribution. Add a size encoding for bubble charts. Each mark is a PointMark with cx, cy, and r. Trendline support is built into the scatter renderer.
Dot. Cleveland dot plots. Minimal, precise comparisons when you want to emphasize the exact value rather than the visual weight of a bar.
Pie (and donut via innerRadius). Parts of a whole. The engine computes arc paths, centroids for label positioning, and start/end angles. The donut variant is the same computation with a non-zero inner radius.
And then there are tables, which aren’t charts but are very much visualization. The table compiler handles sorting, searching, pagination, column resolution, and cell formatting. But the interesting part is the per-column visual features: heatmap cells (color-coded by value with interpolated backgrounds), inline bars (a mini bar chart in each cell showing the value relative to the column range), and sparklines (a mini line, bar, or column chart embedded in a cell showing trends across a related field). The engine normalizes sparkline data to a 0-1 range and produces rendering coordinates. These go beyond basic HTML tables into something closer to a spreadsheet with embedded visualizations. All computed headlessly.
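The sparkline normalization mentioned above is small but worth spelling out, because the flat-series edge case is where naive versions divide by zero. A sketch (the helper name is invented; pinning a flat series to the midline is my assumption about sensible behavior):

```typescript
// Sketch of sparkline normalization to [0, 1], so a cell renderer only
// deals with unit coordinates. Hypothetical helper; the midline choice
// for flat series is an assumption, not documented library behavior.
function normalizeSparkline(values: number[]): number[] {
  const min = Math.min(...values);
  const max = Math.max(...values);
  if (max === min) return values.map(() => 0.5); // flat series: avoid 0/0
  return values.map(v => (v - min) / (max - min));
}

console.log(normalizeSparkline([10, 20, 15])); // [0, 1, 0.5]
```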
Separating What From How
The separation of spec, compilation, and rendering isn’t a new idea. Vega and Vega-Lite pioneered this approach, and the grammar of graphics predates both. What’s different here is the packaging: four npm packages with explicit boundaries, zero DOM dependencies in the computation layer, and editorial design patterns (chrome, annotations, source attribution) as first-class features rather than extensions.
The result is a system where the hard part (getting the math right, generating accessible output, applying good typography defaults) happens once, in the engine. The easy part (rendering SVG elements or wrapping them in React components) is a thin layer on top that you can swap without touching any chart logic.
If you’re building data visualization and spending most of your time fighting the renderer (debugging resize observers, shimming SSR, wiring up dark mode, adding alt text by hand), computation and presentation are probably coupled too tightly. Splitting them costs more up front but pays off on every feature after that.