Most platforms forget. GroveStreams remembers — every value, every relationship, every change. One platform replaces your historian, your analytics engine, and your ML pipeline. Your AI agent queries it all with one language.
Thousands of organizations · Millions of streams · Hundreds of thousands of real-time derivations · A decade in production
Imagine a data platform where every cell has its own time axis.
Imagine relationships that evolve — and derivations that automatically recalculate.
Imagine changing the past — and watching the future recalculate.
Imagine the complete temporal story of your data, one query away.
The temporal FK chains, the deep cells, and the no-history-tables story aren't bound to meters and sensors. Here's the same diagram with an HR schema — employees, departments, offices — same pillars, same arrows, same architecture.
Every temporal data project requires the same infrastructure. GroveStreams builds it all in.
| Instead of building… | GroveStreams gives you… |
|---|---|
| A separate historian for time-series storage | 100M+ points per stream with real-time rollups built in |
| History tables, triggers, and temporal schema on your relational database | TEQ™ — relational query semantics over temporal streams |
| `_history` tables, SCD2 patterns, and change triggers for every entity | Every attribute is a stream — history is automatic, not hand-built |
| Batch analytics pipelines for rollups and statistics | Real-time derivations and dozens of pre-computed statistics per stream |
| A separate dashboard and reporting tool | Built-in visualization with ODBC/JDBC adapter (Beta) and OData export to Power BI, Tableau, and more — saved-query VIEWs expose full temporal history to BI tools |
| An ML team for time-series forecasting | 8 built-in forecasting models — configure or code, your choice |
| A separate geospatial database or bolt-on package for location tracking | Location is just another stream — track position over time with the same queries, rollups, and AI |
| A separate security layer per query path, per tool, per dashboard | Per-user RBAC inside the query engine — same dashboard, different data per user |
Every organization builds the same scaffolding: _history tables, audit triggers, slowly changing dimension patterns, materialized views for rollups, junction tables with effective dates. That scaffolding typically doubles or triples your table count — and you rebuild it from scratch for every project.
`employees` · `employee_salary_history` · `employee_title_history` · `employee_dept_history` · `dept_transfer_audit` · `headcount_quarterly_mv` · `compensation_rollup_mv` · …

9 objects. Hand-built. Per project.
An `employee` template with 5 streams: `salary`, `title`, `departmentUid`, `managerUid`, `location`. History is automatic. Rollups are automatic. Relationship tracking is automatic. Query any attribute at any point in time with a standard JOIN.
1 template. 5 streams. Done.
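As a sketch of what that buys you (assuming the `employee` template above, a `department` template for the HR example, and the temporal-parameter syntax shown in the GS SQL examples on this page), a year of quarterly salary history with each employee's department is a short query:

```sql
-- Quarterly average salary for the last year, per employee,
-- joined to the department each employee belongs to.
-- Sketch only: the department template and its cname stream
-- are illustrative assumptions for the HR example.
SELECT e.cname,
       e.salary(range(sd=-1y), CycleId='quarterly', Stat='avg'),
       d.cname
FROM employee e
INNER JOIN department d ON e.departmentUid = d.cuid
```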
This isn't "we added an AI chatbot." The architecture itself is why AI works better here.
Your AI agent has to navigate 9+ tables per domain. It needs to understand your SCD2 conventions, your trigger logic, your materialized view refresh schedules, and your junction table naming patterns. It generates 15-line queries with CTEs, date math, and temporal joins. Get any of it wrong and the query silently returns bad data.
Your AI agent sees 1 template, 5 streams, and asks:
salary(range(sd=-1y), CycleId='quarterly', Stat='avg')
Fewer objects. Fewer joins. Fewer round-trips. Faster answers. The agent generates correct queries on the first try because there's less to get wrong. And when it's time to build a new data model, the agent generates DDL — not a migration script across nine tables.
The simpler your data model, the more reliable — and faster — your AI becomes.
This is also why the GS SQL documentation is thorough enough to serve as an LLM's training context. Read it yourself — that's a trust signal.
Most platforms only show you where things are today. GroveStreams remembers the complete history of every value, every relationship, and every hierarchy. Query how your data model evolved over time — not just your data.
GS SQL extends standard SQL with temporal semantics — full DDL and CRUD (including `PLACE NODE`, `PLACE VIEW`, and `RELATIONSHIP`), cycle-based aggregations, temporal JOINs, point-in-time queries via temporal parameters, and time-filtered statistics. The built-in AI assistant connects to OpenAI, Anthropic, Gemini, or xAI and executes GS SQL against your live data — from ad-hoc queries to building entire data models from a natural-language description.
Create views (`CREATE VIEW ... CONNECTION`), materialized for caching or live on every reference. Views appear as tables in subsequent SQL and to BI tools via ODBC.

Define rollup calendars that aggregate high-frequency data into any time hierarchy — seconds to minutes to hours to days to months to quarters to years. Calendars align to any reference point: midnight, shift start, billing cycle date, or each component's individual signup date. Model real-world time boundaries, not just clock boundaries.
Ingest via REST API or MQTT with X.509 certificate security. Visualize with drag-and-drop dashboards, maps, and real-time charts. Connect Power BI, Tableau, DBeaver, Grafana, and Excel directly via standard ODBC/JDBC drivers — or use the OData connector. Custom branding with branded subdomains available.
Role-based access control is enforced inside the query engine — not at an API gateway, not in application code. User A sees their 50 components. User B sees their 200. The AI assistant, the ODBC/JDBC connection from Tableau, the OData feed to Power BI, and every GS SQL query all respect the same per-user permissions automatically.
GroveStreams is designed so your analysts can query it on day one — no specialized consultants, no onsite engagement, no six-figure implementation project. The AI assistant generates GS SQL from plain English, so your team doesn't need SQL experience to get started.
The architectural insight behind temporal intelligence — every cell carries its own time axis.
Visualize GroveStreams as a relational table where each entity is a row and each stream is a column — but every cell holds up to 100 million timestamped values, not just one. The current value sits on top. The complete history lives underneath.
This is what makes temporal intelligence possible: any query can reach any point in time. You don't pre-aggregate. You don't choose in advance which time ranges to keep. Every value, every relationship, every change — queryable with standard SQL, from one second to one decade.
Time-series databases store deep history but have no relational model — no JOINs across entities. Traditional databases have relational structure but each cell holds one value — temporal history requires separate history tables, triggers, and batch jobs. TimescaleDB and similar extensions add deep time-series storage to a relational model, but at the row level — rows are time-stamped, not cells. Historians store millions of points per tag but have no entity model at all. GroveStreams is the only platform we know of where every cell in a relational table is itself a complete time-series — and where the relationships between entities are themselves time-series too.
Relationships are deep cells too. When Pump-A is reconnected from Tank-7 to Tank-12, a new data point is appended to the relationship cell. The old connection stays in history. Query which pump fed which tank at any point in time — with a standard JOIN.
Location is a deep cell. Latitude, longitude, elevation, and heading are streams — not a separate spatial layer. A vehicle's position at 2pm last Tuesday lives in the same architecture as its fuel level, its assigned driver, and its maintenance schedule. Correlate location with any other stream using the same query language.
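A minimal sketch, assuming a `vehicle` template with `lat`, `lon`, and `fuelLevel` streams (names are illustrative, not taken from GroveStreams' docs): correlating position with any other stream uses the same temporal parameters shown in the GS SQL examples on this page.

```sql
-- Hourly average position and fuel level over the last day.
-- Sketch: vehicle template and stream IDs are assumptions.
SELECT v.cname,
       v.lat(range(sd=-1d), CycleId='hourly', Stat='avg'),
       v.lon(range(sd=-1d), CycleId='hourly', Stat='avg'),
       v.fuelLevel(range(sd=-1d), CycleId='hourly', Stat='avg')
FROM vehicle v
```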
Read the full architecture story →

| Entity | kwh | voltage | lat | customerUid |
|---|---|---|---|---|
| Meter-001 | **42.7**<br>41.2 → 42.7 → ... | **121.3**<br>120.8 → 121.3 → ... | **44.98**<br>44.98 | **cust-A**<br>cust-B → cust-A |
| Meter-002 | **18.4**<br>17.9 → 18.4 → ... | **119.7**<br>120.1 → 119.7 → ... | **33.75**<br>40.71 → 33.75 | **cust-C**<br>cust-C |
| Meter-003 | **71.2**<br>68.5 → 71.2 → ... | **122.0**<br>121.6 → 122.0 → ... | **41.88**<br>41.88 | **cust-A**<br>cust-D → cust-A |
Each cell holds its full temporal history — current value on top, history below.
| Entity | salary | title | departmentUid | managerUid |
|---|---|---|---|---|
| Emp-4201 | **145,000**<br>128,000 → 135,000 → 145,000 | **Sr. Engineer**<br>Engineer → Sr. Engineer | **dept-Eng**<br>dept-Ops → dept-Eng | **emp-3110**<br>emp-2900 → emp-3110 |
| Emp-4202 | **92,000**<br>85,000 → 92,000 | **Analyst**<br>Intern → Analyst | **dept-Fin**<br>dept-Fin | **emp-3205**<br>emp-3140 → emp-3205 |
| Emp-4203 | **210,000**<br>175,000 → 195,000 → 210,000 | **VP Sales**<br>Dir. Sales → VP Sales | **dept-Sales**<br>dept-Mktg → dept-Sales | **emp-1001**<br>emp-2010 → emp-1001 |
Salary changes, title promotions, department transfers, reporting changes — all temporal, all queryable.
The key differentiator: relationships are streams too.
When Pump-A is reconnected from Tank-7 to Tank-12, most platforms overwrite the pointer. GroveStreams appends a new data point to a link stream. The old connection stays in history.
Query which pump fed which tank on any date — with a standard JOIN. No junction tables. No history triggers. No audit logs. One stream.
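A sketch of that query, assuming a `pump` template with a `tankUid` link stream and a `tank` template (IDs are illustrative; the join pattern mirrors the meter-to-customer example later on this page):

```sql
-- Current state: which tank does each pump feed right now?
-- Sketch: pump/tank template and stream IDs are assumptions.
SELECT p.cname, t.cname
FROM pump p
INNER JOIN tank t ON p.tankUid = t.cuid

-- The link itself is a stream, so its history is queryable
-- like any other: a year of connection changes per pump.
SELECT p.cname, p.tankUid(range(sd=-1y))
FROM pump p
```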
This is where temporal relationships become powerful. When a derived stream depends on a related entity's data, GroveStreams automatically detects when the relationship changed, splits the calculation into segments, and uses the correct target entity for each time period.
If a meter was connected to Customer A from January through June, then moved to Customer B in July, a cost derivation over the full year automatically uses Customer A's rate for Jan–Jun and Customer B's rate for Jul–Dec. Multi-hop chains (meter → customer → supplier) work the same way.
We're not aware of another platform that does this declaratively — not the time-series databases, stream processors, cloud data warehouses, or traditional relational databases we've evaluated. On those platforms, handling relationship changes during a calculation requires custom pipeline code. In GroveStreams, it's a single SQL expression on the variable definition.
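As an ad-hoc sketch of the same idea (the meter and customer IDs follow the example later on this page; the `rate` stream on the customer template is an illustrative assumption), the temporal FK join resolves the owning customer for each period:

```sql
-- Monthly kWh totals joined to the owning customer's rate.
-- Because the JOIN is temporal, Jan-Jun resolves to Customer A
-- and Jul-Dec to Customer B automatically.
-- Sketch: the rate stream is an illustrative assumption.
SELECT m.cname,
       m.kwh(range(sd=-1y), CycleId='monthly', Stat='sum'),
       c.rate
FROM meter m
INNER JOIN customer c ON m.customerUid = c.cuid
```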
See How It Works →

Ingest with TW. Query system internals with TDQ™. Query your semantic model with TEQ™.
Real-time signal ingestion via HTTP REST or MQTT with X.509 certificates. The native ingestion layer for devices, APIs, and connectors.
Developer Docs →

The core system layer — all internals exposed. Query the Stream table with the full power of the Sample column for reporting, dashboarding, and analytics.
GS SQL Overview →

Query your semantic model with standard SQL. Template IDs as tables, stream IDs as columns, components as rows. FK JOINs between templates.
TEQ Guide →

Imagine an AI agent that can query 100 million data points per cell with temporal parameters that collapse complex time-series operations into a single line — and build entire data models from a natural-language description using DDL. That's what GroveStreams gives every agent.
Ask a traditional database for "daily kWh totals per meter for the last 7 days, joined to the customer's region." This is what you get — CTEs, date math, GROUP BY on truncated timestamps, explicit JOINs to a separate customer table. An AI agent must generate all of this correctly, or the query fails.
-- Traditional SQL: 15+ lines, multiple failure points for an LLM
WITH daily_kwh AS (
SELECT
m.meter_id,
DATE_TRUNC('day', r.reading_time) AS day,
SUM(r.kwh) AS daily_total
FROM meter_readings r
JOIN meters m ON r.meter_id = m.id
WHERE r.reading_time >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY m.meter_id, DATE_TRUNC('day', r.reading_time)
)
SELECT d.meter_id, d.day, d.daily_total, c.region
FROM daily_kwh d
JOIN meter_customers mc ON d.meter_id = mc.meter_id
AND mc.effective_date <= d.day
AND (mc.end_date IS NULL OR mc.end_date > d.day)
JOIN customers c ON mc.customer_id = c.id
ORDER BY d.meter_id, d.day
The same question. Temporal parameters handle the date range, rollup, and aggregation. The temporal foreign key JOIN resolves which customer each meter belonged to at each point in time — automatically. An AI agent generates this correctly on the first try.
-- GS SQL: 4 lines. Same answer. No date math. No CTEs. No SCD2 joins.
SELECT m.cname, m.kwh(range(sd=-7d), CycleId='daily', Stat='sum'), c.region
FROM meter m
INNER JOIN customer c ON m.customerUid = c.cuid
ORDER BY m.cname
Standard SQL is the #1 source of LLM hallucinations in data applications. Date arithmetic, window functions, CTEs, SCD2 temporal joins — each is a failure point where the model generates plausible but broken syntax. GS SQL eliminates all of them. Temporal parameters (sd, CycleId, Stat, TimeFilterId) collapse complex time-series operations into declarative hints on the column. The platform does the rest.
When an AI agent hears "average temperature last week, rolled up hourly" — it maps directly:
SELECT cname, time, sample(range(sd=-7d), CycleId='hourly', Stat='avg')
FROM STREAM
WHERE id = 'temperature'
No date arithmetic to get wrong. No window functions to hallucinate. No CTEs to nest incorrectly. The query is short enough for an AI agent to generate reliably, and deterministic enough for a human to verify at a glance.
Peak-hours-only filtering, running totals, lag/lead window operations — all expressed as parameters, not code.
-- Last 30 days, hourly averages, peak hours only
SELECT cname, AVG(sample(range(sd=-30d), CycleId='hourly', Stat='avg', TimeFilterId='peak')) FROM STREAM
WHERE cname LIKE 'Meter%' AND id = 'kwh'
GROUP BY cname
-- Running kWh total for the current month
SELECT cname, kwh(running='sum', range(currentCycle='month')) FROM meter
-- Previous day's temperature (lag window parameter)
SELECT cname, temperature(lag=1, CycleId='daily', range(sd=-30d)) FROM sensor
When you'd rather use Claude Desktop, Claude Code, Cursor, or any Model Context Protocol client, GroveStreams exposes a built-in MCP server. The agent gets four tools — `describe_org`, `run_gsql`, `ask_grovestreams`, `send_samples` — plus read-only Resources for help docs, templates, dashboards, and saved queries. Authentication is per-user via OAuth 2.1 + PKCE, so the agent runs under your RBAC, not a shared service account.
ODBC for your app, MCP for your AI.
Vibe design your entire organization — entities, events, and dashboards — by describing what you need in natural language. The AI assistant builds the schema, the alerts, and the visualizations for you.
Point your devices at our REST or MQTT API. Or pull from an existing PostgreSQL, MySQL, SQL Server, BigQuery, Snowflake, or Redshift database with the JDBC Import Wizard — it walks you through table selection, FK detection, and templating, then schedules incremental syncs. Components and streams register themselves automatically.
Vibe design your organization — entities, events, dashboards, temporal schemas, rollup calendars, and derivation formulas — through the studio interface, GS SQL DDL, or the AI assistant. Describe what you need in natural language and it builds it for you. No data modelers, no dashboard developers, no alert engineers required.
Query with GS SQL. Forecast with AI. Visualize in dashboards. Connect your BI tools directly via ODBC/JDBC or OData — saved queries appear as VIEWs so Tableau, Power BI, and Excel can access full temporal history, not just current values. Ask your AI assistant anything about your data.
Fair question. A determined team could assemble InfluxDB + TimescaleDB + dbt + an LLM SQL layer and approximate some of this.
You can build it. The question is whether you want to maintain it — and whether your AI agent can reliably query it. Read the GS SQL docs and decide for yourself.
Think of GroveStreams as a temporal spreadsheet. In Excel, change a cell and every formula that references it recalculates. GroveStreams works the same way — except the cells go 100 million rows deep in time, and the formulas follow relationships that themselves change over time.
A meter reading from last month turns out to be wrong. A relationship changed and nobody recorded it until now. A rate schedule is backdated. On most platforms, that's a manual rebuild — re-run batch jobs, invalidate materialized views, retrigger pipelines, hope you didn't miss a dependency. In GroveStreams, every change — insert, update, or delete — automatically triggers rollup recalculation, derivation recomputation, and dependency propagation up the entire precedent tree. Even historical relationship changes are detected and re-evaluated.
No batch jobs to re-run. No pipelines to retrigger. No manual reconciliation.
Correct the data. The platform does the rest.
Industry Solutions
30-day full-access trial. No credit card required. From single-stream prototypes to Fortune 100 deployments — 10 years in production.
START FREE TRIAL TRY A DEMO