GroveStreams Rebrands as a Temporal Intelligence Platform

Why we rebranded, what “temporal intelligence” means, and who's using it today.

Earlier this year, we changed how we describe GroveStreams. For most of our ten years in production we marketed it as an IoT analytics platform — a label that fit our earliest customers but, increasingly, didn't describe what people were actually doing with us. The platform had quietly grown into something broader: a temporal database, a relational query engine, an analytics layer, an AI forecasting service, and a dashboard tool, all sharing one architecture and one query language.

So we rebranded. GroveStreams — same product, same name — is now positioned as a Temporal Intelligence Platform™ rather than an IoT analytics platform. This is the announcement we should have written when the rebrand went live; we're writing it now because this week's release is the first one where the product visibly catches up with the new positioning.

If you only read one paragraph: GroveStreams reduces temporal data management to one primitive — the stream. Every property on every entity, and every relationship between entities, is a stream with its own time axis. One data structure replaces history tables, triggers, materialized views, batch roll-up jobs, and junction tables. Read the deeper architecture explanation here.

Why we rebranded

“IoT” was useful in 2014. It told prospects that we cared about high-frequency data, devices in the field, and real-time charts. By 2024 the label had quietly become limiting. The architecture didn't care that the data came from a sensor; it cared that the data had a time axis. Calling the platform “IoT analytics” described one input, not the system — and it was making us look like another sensor dashboard when the actual story was much bigger.

There was also a competitive reality. When Amazon and Microsoft moved aggressively into IoT, the writing was on the wall — a small team wasn't going to out-spend AWS or Azure on a marketing war. So years ago we made a deliberate choice: stop chasing the IoT category we were nominally part of, and double down on what our existing customers were already doing with the platform. Help them grow. Add the features that made the platform deeper, not the features that made the label louder. GS SQL came out of that. Derived Streams came out of that. The AI Assistant and forecasting layer came out of that. Each of those features pulled us further from “IoT analytics” and closer to something we didn't yet have a name for. Our customers and the market drove us here; the rebrand is just us catching up with where we already were.

Two years of customer conversations made the gap obvious. People weren't buying GroveStreams for sensors. They were buying it because their existing stack — usually some combination of PostgreSQL, a time-series database, a separate dashboard tool, a separate forecasting library, and three layers of glue code — wasn't solving the temporal problem cleanly. They wanted history, relationships, rollups, and AI on one substrate. The architecture had been doing that for years. The marketing hadn't.

“Temporal Intelligence Platform” is the term that finally fit. It's also a category nobody else is using yet, which is honest: nobody else has put these capabilities in one architecture. We'd rather name a category we built than borrow one we don't fit in.

What “temporal intelligence” actually means

Temporal intelligence is the ability to store, query, and reason across the complete history of every value and every relationship in your data — not just the current state. It means every data point knows when it happened, every relationship knows when it changed, and every query can travel through time.

Most data platforms forget. They keep the current state and discard what came before. If you want history, you build it — history tables, audit triggers, slowly-changing-dimension patterns, materialized views, batch rollup jobs. Every project. Every time. Every domain. The architectural debt compounds, and the AI agents and analysts that depend on the data inherit it as complexity.

GroveStreams takes a different shape. The whole platform is built around one primitive — the stream. A stream is a sequence of timestamped values. Every property on every entity is a stream. Every relationship between entities is a stream. A temperature reading every second? Stream. A salary that changes once a year? Stream. A pump-to-tank assignment that changes twice a year? Also a stream. All of them use identical query semantics.

Visualize it as a relational table where each entity is a row and each property is a column — but every cell holds up to 100 million timestamped values, not just one. Current value on top. Complete history underneath. JOINs, WHERE, GROUP BY, FK references, point-in-time temporal parameters — the relational semantics you already know — all work against any point in time.
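
To make that mental model concrete, here's a minimal sketch of a relational query over stream-backed entities. The table and column names (pump, tank, flow_rate, status, tank_uid) are invented for illustration, and the exact GS SQL syntax for point-in-time parameters lives in the grammar reference, so the time-travel part is left as a comment.

    -- Hypothetical schema: pump and tank are component templates (tables),
    -- flow_rate and status are streams (columns), and the pump-to-tank
    -- foreign key is itself a stream with its own history.
    SELECT t._name          AS tank,
           AVG(p.flow_rate) AS avg_flow
    FROM   pump p
    JOIN   tank t ON p.tank_uid = t._component_uid
    WHERE  p.status = 'RUNNING'
    GROUP BY t._name;
    -- Add a point-in-time temporal parameter (syntax in the grammar reference)
    -- and the same query answers for last March instead of right now.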

That's temporal intelligence. The deeper architectural explanation, with diagrams and worked examples, lives at Why Temporal Intelligence.

Who this is for

These are the four worlds the platform is built to serve. Some have been with us for years — energy analytics most visibly. Others, especially non-coder builders, are just starting to arrive with this release. We're describing where the architecture fits, not claiming every one of these audiences is already here.

Industrial operators

Replacing or supplementing historian platforms (AVEVA PI / OSIsoft PI, GE Proficy, Honeywell PHD) with a system that has actual entity relationships and a real query language. The historian stores tags; we store entities, relationships, and history together.

Time-series teams

Engineers who chose InfluxDB, TimescaleDB, or QuestDB for storage depth and now need entity relationships, FK-resolved derivations, and AI forecasting on top of it — without standing up another stack alongside.

Cloud data platform users

Teams running on Snowflake, Databricks, or Palantir Foundry who need a live operational layer the warehouse can't cover. Snowflake time travel was designed for accidental drops, not live temporal queries. We're the layer that handles the “happening right now and across the last decade” questions.

Non-coder builders

Operators, founders, and domain experts using Claude Code, ChatGPT, Cursor, and similar AI tools to build their own applications. They describe a domain in plain English; the AI generates GS DDL, dashboards, and derived streams. This is the newest of the four worlds — the first arrivals are showing up with this release.

The most visible example today is energy storage. An energy storage company that grew with us from startup to public company runs on the platform today, managing gigawatt-hours of assets across thousands of battery systems — with the kind of relationship history (cell to string to site to wholesale market) and per-system degradation forecasting the architecture was built for.

Other industries are earlier signals. A fleet operations business is building on us right now — the founder isn't a coder; he's vibe-designing the schema with Claude Code, which recommended GroveStreams over PostgreSQL for the temporal data model. We mention it not as a pillar but because it tells us where the platform is headed next. The full roster of customers and technology partners is on our Customers & Partners page.

A new category — and we're still learning where it fits

Temporal intelligence is a new category. We didn't invent the underlying ideas; temporal databases, time-series analytics, and bi-temporal data models all have decades of academic and industry history. What's new is putting them together into one architecture, with one query language, where every value and every relationship in a relational model is itself a complete time-series.

Because it's a new category, we're still learning where it fits best. The conversations we've had over the last year — with existing customers and interested parties exploring the platform — span a wider range of problems than we would have predicted. Some show up with a clear need; others arrive curious and figuring out whether we're a fit at all. Both kinds of conversations have been useful for sharpening where the architecture lands.

We're going to publish pages targeted at each of these audiences over the next several months. If the architecture matches your problem and you don't see a page that speaks to it yet, that's not a sign we're not for you — it's a sign we haven't written your page yet. Tell us what you're building. Your use case probably belongs in the next batch.

An honest note: we're a small company that has historically done very little marketing — we've grown for ten years almost entirely on word of mouth and people finding us by searching for problems we happen to solve. The shift you'll see over the coming months — new comparison pages, use-case pages, and resources — isn't a pivot. It's us finally describing in writing what we've been quietly doing for our customers all along.

What's in this release

This release is the first one where the product visibly catches up with the new positioning — and, judged by the scope of the changes, it's the largest update GroveStreams has shipped in years. Grouped highlights below.

AI & Vibe-Design

  • Vibe-design entire organizations. GS SQL DDL now covers nearly every entity in the platform — organizations, components, streams, derivations, dashboards, views, connectors, diagrams, and most tools. The AI Assistant can now create or modify a complete organization from natural-language conversation. Describe a brewery operation and the agent generates the full DDL in one interaction: component templates for fermentation tanks, temperature and pressure streams with derivations, dashboards, and import connectors (a rough sketch of that DDL follows this list).
  • Built-in AI Agents — no API keys required. AI Agents are now included in all plans; agent usage draws on plan tokens instead of requiring your own LLM keys. The agent itself was redesigned with a flat architecture, schema-aware tooling, file-based document tools, a session scratchpad, and an explicit permission step before any modification. Configurable agent prompts (via Tools) are available now, and their results can be referenced from dashboard AI widgets.
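
For a sense of what the generated output looks like, here is a deliberately loose sketch of the brewery example. The real GS SQL DDL grammar is documented at the Developer Portal and differs from plain SQL; every name below is invented.

    -- Loose, standard-SQL-flavored sketch; actual GS DDL syntax is in the
    -- Developer Portal. Names are invented for the brewery example.
    CREATE TABLE fermentation_tank (
        temperature DOUBLE,   -- a stream: every value keeps its timestamped history
        pressure    DOUBLE    -- another stream on the same component template
    );
    -- Follow-on statements (not shown) would declare the derived streams,
    -- dashboards, and import connectors the assistant generates in the same pass.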

GS SQL — A Real Relational Query Engine

  • TEQ relational query support. Full SQL semantics over temporal entity data: JOINs, GROUP BY, HAVING, UNION/INTERSECT/EXCEPT, DISTINCT, subqueries, and CASE expressions. Component templates are tables; their streams are columns; components are rows — except every cell holds its own time-series.
  • Foreign-key dependency derivations. Derived streams can now follow foreign-key relationship chains across the component hierarchy, with fan-in aggregation (SUM, AVG, MIN, MAX, COUNT). Roll child metrics up to parents, average across related equipment, propagate status changes through an entire asset tree — all defined declaratively in DDL.
  • User Catalogs. A namespace layer for tables, streams, and other entities — the same pattern Snowflake and Databricks use. Managed via DDL, referenced via dot notation (catalog.table.column) for cleaner multi-domain organization within a single org.
  • Views — local and materialized. Saved GS SQL queries that behave like tables in subsequent statements. Internal (over organization data) or external (over a remote database via JDBC). Materialized for caching, non-materialized for live execution. Usable inside dashboard SQL widgets.
  • Common Table Expressions. WITH for named subqueries usable in subsequent SELECT or INSERT statements, including multiple chained CTEs. WITH RECURSIVE is also supported (max recursion depth 100), enabling hierarchical traversals and generated sequences directly in GS SQL.
  • Window temporal parameters. Per-column window operations directly on the sample array — lag, lead (with optional defaults), trailing windows (slide + slidestat), cumulative running aggregates over sum/avg/min/max/first/last/count. Faster than SQL WINDOW/OVER() and respects both stream reference time filters and query-time TimeFilterId. Works in both TDQ and TEQ.
  • User-defined session variables. Session-scoped SET @var = expr (or SET @var = (subquery)) for reusable scalar values across statements. Case-insensitive. (A combined session-variable and CTE sketch follows this list.)
  • Expanded function library. NULLIF, COALESCE, DATEPART, DATETRUNC, GREATEST, LEAST, STRING_AGG, ISNULL, CONVERT, REVERSE, MOD, EXTERNAL_QUERY(), aggregate DISTINCT — plus virtual columns, virtual views, and quoted range parameters.
  • DDL stream derivations & fill-forward. Derived streams, time filters, throttling, and dashboard widgets can now be defined entirely in DDL. Fill-forward stretches the last known value across gaps in time-series data with configurable date-stretching behavior.
  • Component name expressions. Components and streams now support computed names derived from column values, with automatic change throttling when source values update.
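
The session-variable and CTE items above compose the way you would expect from standard SQL. A minimal sketch, with the meter table and its kwh and site_uid columns invented for illustration; window temporal parameters and time filters are omitted (see the grammar reference for those).

    -- Hypothetical schema; meter, kwh, and site_uid are placeholders.
    SET @threshold = (SELECT AVG(kwh) FROM meter);   -- session-scoped scalar

    WITH site_totals AS (
        SELECT site_uid, SUM(kwh) AS total_kwh
        FROM   meter
        GROUP BY site_uid
    )
    SELECT COUNT(*) AS sites_over_threshold
    FROM   site_totals
    WHERE  total_kwh > @threshold;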

Query Performance

  • Cost-based query planner. Per-organization table statistics now drive index selection and join cardinality estimates. The planner can distinguish a template owning 0.0005% of an org's stream rows from one owning 64% and choose scan strategies accordingly. A new ANALYZE STATS DDL statement triggers a synchronous refresh; stats also refresh automatically after JDBC imports and on a 30-minute background cycle. On large imported orgs, query times for predicates on common templates have dropped from minutes to seconds.

Data Integration & BI

  • JDBC Import Connectors. A new Database Connector type imports tables from external databases (MySQL, PostgreSQL, SQL Server, and others) into your org. Batch inserts, composite primary keys, a “don't overwrite” mode for incremental imports, and the option to save as a recurring scheduled connector for ongoing synchronization. Primary-key columns are indexed automatically on import; the import wizard can use AI to suggest collapsing closely related source tables into a single GS table.
  • PostgreSQL wire-protocol enhancements (Beta). Continued investment in BI-tool compatibility through the PostgreSQL ODBC/JDBC adapter. Tableau, Power BI Desktop, DBeaver, Grafana, Excel, DataGrip, and psql connect directly using a user's email and password — no special drivers, no API keys, no OData URLs. Saved-query VIEWs continue to expose full temporal query results (rollups, time filters, cross-stream joins) to BI tools as standard tables.
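
Once a BI tool (or plain psql) is connected with a GroveStreams email and password, a saved-query view reads like any other table. A hypothetical sketch; the catalog, view, and column names below are invented.

    -- energy.daily_site_rollup is a placeholder for a saved GS SQL view,
    -- referenced with the catalog dot notation and exposed to the BI tool
    -- as a standard table, rollups and time filters already applied.
    SELECT site, day, avg_kwh
    FROM   energy.daily_site_rollup
    ORDER BY day DESC;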

Visualization & Tooling

  • Entity Diagrams. A new visual diagram view shows the relationships between catalogs, tables, and columns in your organization. Diagram layouts are shared across users and managed programmatically via DDL (PLACE NODE, PLACE RELATIONSHIP, REMOVE). Drag-and-drop templates onto a diagram from Tools.

Data Management & Schema

  • Soft delete and trash. Deleted items move to a trash folder for recovery instead of being permanently removed. Folders and all of their contents can be sent to trash and restored together, preserving access rights. This applies to deletes from SQL and APIs as well as the UI. (Stream samples are moved to trash only when a parent component or stream is deleted; deletes of individual samples are not sent to trash.)
  • Column name standardization. All system SQL columns now use a consistent underscore-prefixed snake_case convention (_component_uid, _name, _created_date, and so on). A full database migration handles the transition automatically.
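
In day-to-day queries the change only shows up in the names you select. A minimal sketch, assuming a hypothetical fermentation_tank template.

    -- System columns below use the new underscore-prefixed convention;
    -- fermentation_tank is a placeholder for one of your component templates.
    SELECT _component_uid, _name, _created_date
    FROM   fermentation_tank
    ORDER BY _created_date DESC;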

Operations

  • Billing V2 with Stripe. New usage-based billing integrated with Stripe, with updated billing metrics and metering throughout the platform.

More release details — with technical specifics for each item above — are in the GitHub Discussion thread for this release. For developers: the GS SQL grammar reference, function library, and documentation live at the Developer Portal. The GS SQL LLM Reference is the compact guide your AI coding assistant should use as context when generating GS SQL.

What's next

Over the next several weeks and months we'll be publishing:

  • Comparison pages. Honest side-by-side write-ups against AVEVA PI, InfluxDB, TimescaleDB, Snowflake, Databricks, Palantir Foundry, and PostgreSQL. We'll explain when GroveStreams is and isn't the right answer — both directions matter.
  • Use-case deep dives. Energy storage, utility metering, fleet operations, financial services, manufacturing, building systems — the industries where customers have already shown us the architecture lands.
  • Problem-shaped articles. “How to replace your historian.” “Foreign keys that change over time.” “Build an AI agent on your time-series data.” “Vibe-design your data platform.”
  • Customer stories. With permission, the people building on the platform telling their own stories.

On the product side, the architectural priorities for the rest of 2026 follow the same theme: lean further into the AI agent surface (so non-coder builders can do more with less), tighten the BI integration story (so analysts can connect any tool that speaks PostgreSQL), and continue to invest in the temporal foreign key and FK-resolved derivation engine that, as our AI's honest assessment put it, is the single hardest part of the platform to replicate.

Try it, or talk to us

Two ways to engage

Start a free trial and see if it fits your problem. Or skip the trial and tell us what you're building — if your use case maps to the architecture, we'd like to hear about it. We read every email.

Thanks for reading. If you've been a GroveStreams customer for any length of time, you've helped get us here — the rebrand and this release are a result of paying attention to what you've actually been doing with the platform. Keep telling us what you need next.