Why we rebranded, what “temporal intelligence” means, and who's using it today.
Earlier this year, we changed how we describe GroveStreams. For most of our ten years in production we marketed it as an IoT analytics platform — a label that fit our earliest customers but, increasingly, didn't describe what people were actually doing with us. The platform had quietly grown into something broader: a temporal database, a relational query engine, an analytics layer, an AI forecasting service, and a dashboard tool, all sharing one architecture and one query language.
So we rebranded. GroveStreams — same product, same name — is now positioned as a Temporal Intelligence Platform™ rather than an IoT analytics platform. This is the announcement we should have written when the rebrand went live; we're writing it now because this week's release is the first one where the product visibly catches up with the new positioning.
“IoT” was useful in 2014. It told prospects that we cared about high-frequency data, devices in the field, and real-time charts. By 2024 the label had become limiting. The architecture didn't care that the data came from a sensor; it cared that the data had a time axis. Calling the platform “IoT analytics” described one input, not the system — and it was making us look like another sensor dashboard when the actual story was much bigger.
There was also a competitive reality. When Amazon and Microsoft moved aggressively into IoT, the writing was on the wall — a small team wasn't going to out-spend AWS or Azure on a marketing war. So years ago we made a deliberate choice: stop chasing the IoT category we were nominally part of, and double down on what our existing customers were already doing with the platform. Help them grow. Add the features that made the platform deeper, not the features that made the label louder. GS SQL came out of that. Derived Streams came out of that. The AI Assistant and forecasting layer came out of that. Each of those features pulled us further from “IoT analytics” and closer to something we didn't yet have a name for. Our customers and the market drove us here; the rebrand is just us catching up with where we already were.
Two years of customer conversations made the gap obvious. People weren't buying GroveStreams for sensors. They were buying it because their existing stack — usually some combination of PostgreSQL, a time-series database, a separate dashboard tool, a separate forecasting library, and three layers of glue code — wasn't solving the temporal problem cleanly. They wanted history, relationships, rollups, and AI on one substrate. The architecture had been doing that for years. The marketing hadn't.
“Temporal Intelligence Platform” is the term that finally fit. It's also a category nobody else is using yet, which is honest: nobody else has put these capabilities in one architecture. We'd rather name a category we built than borrow one we don't fit in.
Temporal intelligence is the ability to store, query, and reason across the complete history of every value and every relationship in your data — not just the current state. It means every data point knows when it happened, every relationship knows when it changed, and every query can travel through time.
Most data platforms forget. They keep the current state and discard what came before. If you want history, you build it — history tables, audit triggers, slowly-changing-dimension patterns, materialized views, batch rollup jobs. Every project. Every time. Every domain. The architectural debt compounds, and the AI agents and analysts that depend on the data inherit it as complexity.
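A toy sketch of that bolt-on pattern, assuming nothing about any particular database: current-state storage where an update destroys the previous value, plus a hand-rolled audit log that every write path must remember to maintain (all names here are hypothetical).

```python
from datetime import datetime, timezone

# Current-state-only storage: an update destroys the previous value.
pumps = {"pump-7": {"tank": "tank-A"}}

# So each project bolts history back on, e.g. an audit log that every
# write path in the codebase must remember to go through.
audit_log = []

def update_pump(pump_id, column, new_value):
    """Record the old value before overwriting it, type-2-SCD style."""
    old_value = pumps[pump_id].get(column)
    audit_log.append({
        "at": datetime.now(timezone.utc),
        "entity": pump_id,
        "column": column,
        "old": old_value,
        "new": new_value,
    })
    pumps[pump_id][column] = new_value

update_pump("pump-7", "tank", "tank-B")
pumps["pump-7"]["tank"]  # "tank-B" -- only the current state survives
audit_log[0]["old"]      # "tank-A" -- history lives in bolted-on glue code
```

One forgotten call to the wrapper and the history silently diverges from the data, which is the compounding debt the paragraph describes.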
GroveStreams takes a different shape. The whole platform is built around one primitive — the stream. A stream is a sequence of timestamped values. Every property on every entity is a stream. Every relationship between entities is a stream. A temperature reading every second? Stream. A salary that changes once a year? Stream. A pump-to-tank assignment that changes twice a year? Also a stream. All of them use identical query semantics.
Visualize it as a relational table where each entity is a row and each property is a column — but every cell holds up to 100 million timestamped values, not just one. Current value on top. Complete history underneath. JOINs, WHERE, GROUP BY, FK references, point-in-time temporal parameters — the relational semantics you already know — all work, against any point in time.
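The cell-as-history idea can be sketched in a few lines of plain Python. This is not GroveStreams code, just a minimal model of the primitive: a stream keeps every timestamped value, so "current value" and "value as of time t" are both cheap lookups.

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class Stream:
    """One cell's complete history: timestamped values, sorted by time."""
    times: list = field(default_factory=list)   # timestamps, ascending
    values: list = field(default_factory=list)

    def record(self, t, value):
        """Insert a value at time t, keeping the history ordered."""
        i = bisect.bisect_right(self.times, t)
        self.times.insert(i, t)
        self.values.insert(i, value)

    def at(self, t):
        """Point-in-time read: the most recent value at or before t."""
        i = bisect.bisect_right(self.times, t)
        return self.values[i - 1] if i else None

    def current(self):
        """The value 'on top' of the cell."""
        return self.values[-1] if self.values else None

# A pump-to-tank assignment is itself a stream, so a point-in-time
# query can answer "which tank was pump 7 feeding on day 100?"
assignment = Stream()
assignment.record(0, "tank-A")
assignment.record(180, "tank-B")   # reassigned on day 180

assignment.at(100)    # "tank-A" -- the answer as of day 100
assignment.current()  # "tank-B"
```

Because relationships are stored the same way as readings, the same `at(t)` semantics serve both a per-second sensor and a twice-a-year reassignment.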
That's temporal intelligence. The deeper architectural explanation, with diagrams and worked examples, lives at Why Temporal Intelligence.
These are the four worlds the platform is built to serve. Some have been with us for years — energy analytics most visibly. Others, especially non-coder builders, are just starting to arrive with this release. We're describing where the architecture fits, not claiming every one of these worlds is already populated.
- Replacing or supplementing historian platforms (AVEVA PI / OSIsoft PI, GE Proficy, Honeywell PHD) with a system that has actual entity relationships and a real query language. The historian stores tags; we store entities, relationships, and history together.
- Engineers who chose InfluxDB, TimescaleDB, or QuestDB for storage depth and now need entity relationships, FK-resolved derivations, and AI forecasting on top of it — without standing up another stack alongside.
- Teams running on Snowflake, Databricks, or Palantir Foundry who need a live operational layer the warehouse can't cover. Snowflake time travel was designed for accidental drops, not live temporal queries. We're the layer that handles the “happening right now and across the last decade” questions.
- Operators, founders, and domain experts using Claude Code, ChatGPT, Cursor, and similar AI tools to build their own applications. They describe a domain in plain English; the AI generates GS DDL, dashboards, and derived streams. This is the newest of the four worlds — the first arrivals are showing up with this release.
The most visible example today is energy storage. An energy storage company that grew with us from startup to public company runs on the platform today, managing gigawatt-hours of assets across thousands of battery systems — with the kind of relationship history (cell to string to site to wholesale market) and per-system degradation forecasting the architecture was built for.
Other industries are earlier signals. A fleet operations business is building on us right now — the founder isn't a coder; he's vibe-designing the schema with Claude Code, which recommended GroveStreams over PostgreSQL for the temporal data model. We mention it not as a pillar but because it tells us where the platform is headed next. The full roster of customers and technology partners is on our Customers & Partners page.
Temporal intelligence is a new category. We didn't invent the underlying ideas; temporal databases, time-series analytics, and bi-temporal data models all have decades of academic and industry history. What's new is putting them together into one architecture, with one query language, where every value and every relationship in a relational model is itself a complete time-series.
Because it's a new category, we're still learning where it fits best. The conversations we've had over the last year — with existing customers and interested parties exploring the platform — span a wider range of problems than we would have predicted. Some show up with a clear need; others arrive curious and figuring out whether we're a fit at all. Both kinds of conversations have been useful for sharpening where the architecture lands.
We're going to publish pages targeted at each of these audiences over the next several months. If the architecture matches your problem and you don't see a page that speaks to it yet, that's not a sign we're not for you — it's a sign we haven't written your page yet. Tell us what you're building. Your use case probably belongs in the next batch.
This release is the first one where the product visibly catches up with the new positioning — and, given the scope of the changes, it's the largest update GroveStreams has shipped in years. Grouped highlights below.
- Three-part names (catalog.table.column) for cleaner multi-domain organization within a single org.
- WITH for named subqueries usable in subsequent SELECT or INSERT statements, including multiple chained CTEs. WITH RECURSIVE is also supported (max recursion depth 100), enabling hierarchical traversals and generated sequences directly in GS SQL.
- New stream functions: lag and lead (with optional defaults), trailing windows (slide + slidestat), and cumulative running aggregates over sum/avg/min/max/first/last/count. Faster than SQL WINDOW/OVER() and respects both stream reference time filters and query-time TimeFilterId. Works in both TDQ and TEQ.
- SET @var = expr (or SET @var = (subquery)) for reusable scalar values across statements. Case-insensitive.
- ANALYZE STATS DDL triggers a synchronous statistics refresh; stats also refresh automatically after JDBC imports and on a 30-minute background cycle. On large imported orgs, query times for predicates on common templates have dropped from minutes to seconds.
- Clients such as psql connect directly using a user's email and password — no special drivers, no API keys, no OData URLs. Saved-query VIEWs continue to expose full temporal query results (rollups, time filters, cross-stream joins) to BI tools as standard tables.
- Diagram DDL statements (PLACE NODE, PLACE RELATIONSHIP, REMOVE). Drag-and-drop templates onto a diagram from Tools.
- Standardized system column names (_component_uid, _name, _created_date, and so on). A full database migration handles the transition automatically.

More release details — with technical specifics for each item above — are in the GitHub Discussion thread for this release. For developers: the GS SQL grammar reference, function library, and documentation live at the Developer Portal. The GS SQL LLM Reference is the compact guide your AI coding assistant should use as context when generating GS SQL.
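The offset and windowing functions in the highlights have familiar semantics. The sketch below is plain Python, not GS SQL, and the helper names are hypothetical; it only illustrates what lag, a trailing window, and a cumulative running aggregate compute over an ordered series.

```python
def lag(series, n=1, default=None):
    """Value n rows back, with a default for the first n rows."""
    return [default] * n + series[:-n] if n else series[:]

def trailing_avg(series, window):
    """Trailing-window average over the last `window` rows (or fewer
    at the start of the series)."""
    return [sum(series[max(0, i - window + 1): i + 1]) / min(window, i + 1)
            for i in range(len(series))]

def running_sum(series):
    """Cumulative running aggregate: each row sums everything so far."""
    out, total = [], 0
    for v in series:
        total += v
        out.append(total)
    return out

readings = [10, 12, 11, 15, 14]
lag(readings)              # [None, 10, 12, 11, 15]
trailing_avg(readings, 3)  # [10.0, 11.0, 11.0, 12.67, 13.33] (last two rounded)
running_sum(readings)      # [10, 22, 33, 48, 62]
```

lead is the mirror image of lag (values shifted the other way), and the min/max/first/last/count aggregates follow the same running pattern as the sum.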
Over the next several weeks and months we'll be publishing the audience-specific pages described above.
On the product side, the architectural priorities for the rest of 2026 follow the same theme: lean further into the AI agent surface (so non-coder builders can do more with less), tighten the BI integration story (so analysts can connect any tool that speaks PostgreSQL), and continue to invest in the temporal foreign key and FK-resolved derivation engine that, as our AI's honest assessment put it, is the single hardest part of the platform to replicate.
Start a free trial and see if it fits your problem. Or skip the trial and tell us what you're building — if your use case maps to the architecture, we'd like to hear about it. We read every email.
Thanks for reading. If you've been a GroveStreams customer for any length of time, you've helped get us here — the rebrand and this release are a result of paying attention to what you've actually been doing with the platform. Keep telling us what you need next.