
Grammar — DDL
Overview
DDL (Data Definition Language) statements manage component templates, resource entities, and views through GS SQL. DDL operates in TEQ mode — it uses template IDs as table names. For querying template tables, see the TEQ Guide. For the complete query grammar (functions, operators, predicates, temporal parameters), see the Grammar Reference. For general GS SQL information (data types, conventions, date/time), see the GS SQL Overview.
The Grammar Reference — DDL section provides a compact syntax summary of all DDL statements. This page provides the full guide with detailed explanations and examples.
DDL — CREATE TABLE, ALTER TABLE, DROP TABLE
TEQ supports DDL statements for managing component templates and their stream definitions directly through GS SQL. Templates define the schema (streams/columns) for components. DDL statements modify templates only, not existing components linked to them. To push template changes to existing components, use
ALTER TABLE ... RECONCILE NOW
or the admin reconciliation action in Observation Studio.
CREATE TABLE — creates a new component template. The identifier after
CREATE TABLE is the table ID (template ID)
used in queries and other DDL to reference this template.
The optional CNAME WITH option sets the display name shown in
Observation Studio (defaults to the table ID if omitted):
-- Column-list form (template definition)
CREATE TABLE sensor (
temperature DOUBLE,
humidity DOUBLE
) WITH (CNAME = 'Environmental Sensor', TIMEZONE_ID = 'America/New_York');
-- CTAS form (PG / dbt "table" materialization). Builds a result-cached object queryable by name.
-- Lowered to CREATE MATERIALIZED VIEW under the hood; drop with DROP MATERIALIZED VIEW.
CREATE TABLE recent_temps AS SELECT _component_name, temperature FROM sensor;
Columns support value types (
DOUBLE, LONG, STRING,
BOOLEAN, BIGDECIMAL), stream types (REGULAR,
POINT, INTERVAL), and options:
| Column Option | Description |
|---|---|
| NAME 'display_name' | Display name for the stream |
| DESCRIPTION 'text' | Stream description |
| BASE_CYCLE 'cycle_id' | Base cycle (required for INTERVAL streams). Also accepts CYCLE. |
| ROLLUP_CALENDAR 'calendar_id' | Rollup calendar for the stream |
| ROLLUP_METHOD 'method' | Rollup aggregation method (AVG, SUM, MIN, MAX, LAST, FIRST, NONE) |
| DELETE_PROFILE 'profile_id' | Delete profile for automatic data pruning |
| TIME FILTER 'filter_id' | Assigns a time filter constraint to the stream. Filtered-out intervals are set to NULL during saves. Rollups respect the filter. Useful for TOU rate periods, business-hours-only calculations, etc. |
| REFERENCES template_id | Declares a foreign key stream pointing to the specified template |
| PRIMARY_KEY true\|false | Marks the stream as a primary key. PK stream values uniquely identify components during feed PUT operations and drive default name-expression generation. Primary keys are automatically indexed — a WHERE predicate on a PK column resolves via the silv secondary index in sub-second time regardless of table size. |
| INDEXED true\|false | Maintains a secondary index on the stream's last value in the silv compound index. Use on non-PK columns that appear in the WHERE clause of frequent TEQ queries (e.g. a status or category column you regularly filter on). Comes at a small write-amp cost on every value change — one extra Delete + Put per indexed stream per update — so index selectively. PK columns are auto-indexed and don't need this flag. |
| MAX_STRING_LENGTH n | Sets the maximum character length for STRING streams (default 255, max 2,200,000). Use for streams that store large text such as agent prompts, plan content, or LLM responses. |
| DERIVED AS 'expr' | Declares a derived stream with an expression and dependencies (see Derived Columns below) |
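MAX_STRING_LENGTH and TIME FILTER are not demonstrated elsewhere on this page. A minimal sketch (the 'business_hours' time filter and all stream ids here are hypothetical — substitute your own):
-- Hypothetical ids: a large-text stream plus a time-filtered interval stream
CREATE TABLE agent_run (
run_id STRING POINT PRIMARY_KEY true,
llm_response STRING POINT MAX_STRING_LENGTH 100000,
kwh DOUBLE INTERVAL BASE_CYCLE 'hour' TIME FILTER 'business_hours'
);
With this definition, kwh intervals outside the 'business_hours' filter are saved as NULL and excluded from rollups.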
When column options are omitted, they fall back to the organization's stream defaults.
Table-level WITH options (used in the WITH clause after the column definitions):
| Option | Description |
|---|---|
| CNAME | Display name shown in Observation Studio. Defaults to the table ID if omitted. (CNAME is equivalent to NAME for tables.) |
| DESCRIPTION | Description text for the template. |
| TIMEZONE_ID | Time zone for the template (e.g. 'America/New_York'). Falls back to the org default. |
| FOLDER_PATH | Subfolder under /Tools/Templates. Intermediate folders are auto-created. Note: for INSERT INTO (component creation), FOLDER_PATH is under /Components instead. |
| REFERENCE_DATE | Reference date for the template in ISO format. |
| NAME_EXPRESSION | A format string that dynamically sets the component display name from stream values. Uses {stream_id} placeholders, e.g. '{first_name} {last_name}'. Referenced streams must exist on the template. Evaluated on each feed PUT; rate-limited to prevent performance degradation from frequently-changing values. |
Example combining several column options:
CREATE TABLE meter (
meter_id STRING POINT PRIMARY_KEY true,
kwh DOUBLE INTERVAL BASE_CYCLE 'hour' ROLLUP_METHOD 'SUM' ROLLUP_CALENDAR 'standard',
voltage DOUBLE REGULAR NAME 'Voltage' DESCRIPTION 'Line voltage',
status STRING POINT INDEXED true,
customerUid STRING REFERENCES customer
) WITH (CNAME = 'Energy Meter', TIMEZONE_ID = 'US/Eastern');
PRIMARY_KEY on meter_id means WHERE meter_id = 'M-123'
resolves via the compound index in sub-second time. INDEXED on status
lets WHERE status = 'fault' also use the index. voltage, kwh,
customerUid are not indexed — WHERE on them still works but falls back to a
range scan, so index only what you actually filter on.
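To illustrate the difference (a sketch against the meter template above; the literal values are examples):
-- Resolves via the silv index (PK column)
SELECT kwh FROM meter WHERE meter_id = 'M-123';
-- Resolves via the secondary index (INDEXED column)
SELECT _component_name FROM meter WHERE status = 'fault';
-- Falls back to a range scan (unindexed column)
SELECT _component_name FROM meter WHERE voltage > 250;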
ALTER TABLE — modifies an existing template:
-- Add a column
ALTER TABLE sensor ADD COLUMN pressure DOUBLE;
-- Drop a column
ALTER TABLE sensor DROP COLUMN humidity;
-- Modify column options
ALTER TABLE meter MODIFY COLUMN kwh ROLLUP_METHOD 'AVG' DELETE_PROFILE 'retain_1yr';
-- Change template properties
ALTER TABLE sensor SET CNAME = 'Environmental Sensor', DESCRIPTION = 'Updated';
-- Push changes to all linked components
ALTER TABLE sensor RECONCILE NOW;
-- Rename
ALTER TABLE sensor RENAME TO env_sensor;
-- PG aliases ALTER TABLE for ALTER VIEW — renaming a view via ALTER TABLE works
ALTER TABLE active_sensors RENAME TO live_sensors;
-- Rename a column (stream id within the template)
ALTER TABLE sensor RENAME COLUMN temp TO temperature;
RENAME TO is mutually exclusive with other actions in the same statement. RENAME COLUMN renames a stream's id within the template; reconciliation propagates the change to linked components.
DROP TABLE — deletes a template:
-- Drop template only (components become unlinked)
DROP TABLE sensor;
-- Drop template and all linked components (cascading delete)
DROP TABLE IF EXISTS sensor CASCADE;
ALTER ORG — Org-level GS SQL behavior toggles
Toggles org-level GS SQL behavior settings. Requires the executing user to be the org owner or an org admin — per-tool GS SQL ALTER permissions do not apply. Syntax:
ALTER ORG SET propertyName = TRUE | FALSE;
Supported properties:
gsqlMergeSampleCols — when TRUE, multiple _sample() calls in one SELECT that share the same range/cycle/timeFilter/gapFill/offsets merge into a single row per time-bucket. When FALSE, each _sample() call emits its own tuple set with NULLs in the other sample columns. New orgs default to TRUE (standard SQL behavior); orgs created before this feature default to FALSE to preserve their existing query semantics.
-- Opt in to multi-_sample() column merging
ALTER ORG SET gsqlMergeSampleCols = TRUE;
-- Opt back out (preserve pre-merge behavior)
ALTER ORG SET gsqlMergeSampleCols = FALSE;
Resource DDL — Cycles, Rollup Calendars, Stream Groups, Runnables
In addition to table (template) DDL, TEQ supports DDL for managing organizational resource entity types: Cycles, Rollup Calendars, Stream Groups, Runnables, Time Filters, Units, and Delete Profiles. These entities are created in the /Tools/ folder hierarchy.
Naming convention: In every
CREATE statement, the identifier immediately after the
entity type keyword (e.g. CREATE CYCLE hourly) is the entity ID.
This ID is how other DDL statements reference the entity — for example, a rollup calendar
references cycle IDs in its CYCLES list, and a runnable references cycle IDs in
SCHEDULE and CYCLE clauses.
The optional NAME (or CNAME) WITH option sets the display name
shown in Observation Studio. If omitted, the display name defaults to the entity ID.
CNAME is an alias for NAME.
Dependency ordering: Cycles → Rollup Calendars → Source Tables → Stream Groups → Aggregation Tables → Runnables → Views. Create entities in this order; drop them in reverse order. Tables with
AGGREGATION columns depend on stream groups, so they must come after
the stream group and source table they reference.
Cycles
Cycles define time interval boundaries used for roll-ups and scheduling. CREATE CYCLE — creates a new cycle definition. The identifier after
CREATE CYCLE is the cycle ID
(used to reference this cycle in rollup calendars, schedules, and runnable items):
-- cycle_id = 'hourly'
CREATE CYCLE hourly INTERVAL 1 HOUR;
-- cycle_id = 'daily'; NAME sets the display name in Observation Studio
CREATE CYCLE IF NOT EXISTS daily INTERVAL 1 DAY
WITH (NAME = 'Daily Cycle', DESCRIPTION = 'One day intervals',
TIMEZONE_ID = 'US/Eastern');
Supported interval types:
MILLISECOND, SECOND, MINUTE,
HOUR, DAY, WEEK, MONTH, YEAR.
| Option | Description |
|---|---|
| NAME | Display name shown in Observation Studio. Defaults to the cycle ID if omitted. CNAME is an alias for NAME. |
| DESCRIPTION | Description text for the cycle. |
| TIMEZONE_ID | Time zone for interval alignment (e.g. 'America/New_York'). When blank, the component's time zone is used. |
| REFERENCE_DATE | Anchor date for interval boundaries in ISO format (e.g. '2000-01-01T00:00:00'). Intervals are aligned relative to this date. Default: '2000-01-01T00:00:00'. |
| REFERENCE_DATE_TYPE | How the reference date is determined. 'USE_CYCLE' (default) uses the cycle's own reference date. 'USE_COMPONENT' uses each component's creation date as the reference. |
| FOLDER_PATH | Subfolder under /Tools/Cycles (e.g. '/energy'). Intermediate folders are auto-created. |
ALTER CYCLE — modifies an existing cycle:
ALTER CYCLE hourly INTERVAL 5 MINUTE;
ALTER CYCLE hourly SET (DESCRIPTION = 'Updated to 5-minute intervals');
DROP CYCLE — deletes a cycle:
DROP CYCLE hourly;
DROP CYCLE IF EXISTS hourly;
Rollup Calendars
Rollup calendars group one or more cycles into a named set for use by streams. CREATE ROLLUP CALENDAR — creates a new rollup calendar referencing existing cycles. The identifier after
CREATE ROLLUP CALENDAR is the calendar ID
(used to reference this calendar in column ROLLUP_CALENDAR clauses):
-- calendar_id = 'standard_rc'; references cycle_ids 'hourly' and 'daily'
CREATE ROLLUP CALENDAR standard_rc CYCLES ('hourly', 'daily');
CREATE ROLLUP CALENDAR IF NOT EXISTS my_rc CYCLES ('hourly')
WITH (NAME = 'Hourly Rollup', DESCRIPTION = 'Single hourly cycle');
The
CYCLES clause takes a comma-separated list of cycle IDs (quoted strings).
All referenced cycles must already exist.
| Option | Description |
|---|---|
| NAME | Display name shown in Observation Studio. Defaults to the calendar ID if omitted. |
| DESCRIPTION | Description text for the rollup calendar. |
| FOLDER_PATH | Subfolder under /Tools/Rollup Calendars. Intermediate folders are auto-created. |
ALTER ROLLUP CALENDAR — modifies an existing rollup calendar:
ALTER ROLLUP CALENDAR standard_rc CYCLES ('hourly', 'daily', 'weekly');
ALTER ROLLUP CALENDAR standard_rc SET (DESCRIPTION = 'Three-level rollup');
DROP ROLLUP CALENDAR — deletes a rollup calendar:
DROP ROLLUP CALENDAR standard_rc;
DROP ROLLUP CALENDAR IF EXISTS standard_rc;
Stream Groups
Stream groups define named collections of streams, used for aggregation runnables and batch operations. A stream group can be either CLASSIC (filter-based) or GSQL (query-based). CREATE STREAM GROUP — creates a new stream group. The identifier after
CREATE STREAM GROUP is the group ID
(used to reference this group in runnable ITEMS clauses):
-- group_id = 'energy_streams'
CREATE STREAM GROUP energy_streams
WITH (NAME = 'Energy Streams', STREAM_NAME_FILTER = 'kwh*');
-- GSQL (query-based) stream group
-- The AS query must return both _uid (stream UID) and _component_uid (component UID).
CREATE STREAM GROUP temp_streams AS SELECT _uid, _component_uid FROM system.stream WHERE _name = 'temperature';
| Option | Description |
|---|---|
| NAME | Display name shown in Observation Studio. Defaults to the group ID if omitted. |
| STREAM_NAME_FILTER | Wildcard filter on stream display names (e.g. 'kwh*'). Classic groups only. |
| STREAM_ID_FILTER | Wildcard filter on stream IDs. Classic groups only. |
| STREAM_LV_FILTER | Filter expression on stream last-values. Classic groups only. |
| NULL_IGNORED | Whether to ignore streams with null last-values. Default: 'true'. |
| FOLDER_PATH | Subfolder under /Tools/Stream Groups. Intermediate folders are auto-created. |
ALTER STREAM GROUP — modifies an existing stream group:
ALTER STREAM GROUP energy_streams SET (STREAM_NAME_FILTER = 'kwh*', NAME = 'Updated Energy');
DROP STREAM GROUP — deletes a stream group:
DROP STREAM GROUP energy_streams;
DROP STREAM GROUP IF EXISTS energy_streams;
REFRESH STREAM GROUP — re-executes the stream group's query and saves the updated stream list:
REFRESH STREAM GROUP energy_streams;
DERIVE STREAM — synchronously derives an expression-derived stream across all (or filtered) components of a template:
-- Derive for all components of the template
DERIVE STREAM batch.abv_actual;
-- Derive with component filter
DERIVE STREAM batch.abv_actual WHERE _component_id = 'ipa_46';
-- Derive with cascade (follow precedent derivations)
DERIVE STREAM tank.avg_temperature CASCADE;
-- With explicit limits
DERIVE STREAM batch.efficiency WHERE _component_id = 'ipa_46' CASCADE
WITH (DERIVE_MAX_STREAMS = 50, DERIVE_TIMEOUT = 15000);
Options:
DERIVE_MAX_STREAMS (default 100, max 100) limits precedent streams per derivation.
DERIVE_TIMEOUT (default 30000ms, max 30000ms) limits time per derivation.
Without a WHERE clause, templates with more than 1000 components will be rejected.
DERIVE AGGREGATION — synchronously aggregates a stream group aggregation stream across all (or filtered) components:
DERIVE AGGREGATION brewery_summary.avg_temp;
DERIVE AGGREGATION brewery_summary.avg_temp WHERE _component_id = 'overview';
Runnables
Runnables define scheduled or on-demand batch jobs that process stream groups at specified cycles. CREATE RUNNABLE — creates a new runnable. The identifier after
CREATE RUNNABLE is the runnable ID:
-- runnable_id = 'hourly_agg'; references cycle_ids 'hourly' and 'daily'
CREATE RUNNABLE hourly_agg
SCHEDULE 'hourly'
ITEMS (
STREAM GROUP 'energy_streams' CYCLE 'hourly',
STREAM GROUP 'temp_streams' CYCLE 'daily' STATS
)
WITH (NAME = 'Hourly Aggregation');
The
SCHEDULE clause specifies how often the runnable should execute (references a cycle ID).
Omit it for manual-only runnables.
The ITEMS clause lists stream groups (or individual streams) to process.
Item reference syntax:
| Syntax | Description |
|---|---|
| STREAM GROUP 'group_id' | Process a stream group. Optional CYCLE overrides the aggregation cycle. STATS enables statistical computation for the group. |
| STREAM 'stream_uid' 'comp_uid' | Process an individual stream identified by stream UID and component UID. |
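Omitting the SCHEDULE clause yields a manual-only runnable that executes only when triggered. A sketch using the ids from the examples above:
-- No SCHEDULE clause: runs only on demand
CREATE RUNNABLE on_demand_agg
ITEMS (STREAM GROUP 'energy_streams' CYCLE 'hourly');
-- Trigger it explicitly
RUN RUNNABLE on_demand_agg;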
Options:
| Option | Description |
|---|---|
| NAME | Display name shown in Observation Studio. Defaults to the runnable ID if omitted. |
| FOLDER_PATH | Subfolder under /Tools/Runnables. Intermediate folders are auto-created. |
ALTER RUNNABLE — modifies an existing runnable:
ALTER RUNNABLE hourly_agg SCHEDULE 'daily';
ALTER RUNNABLE hourly_agg NO SCHEDULE;
ALTER RUNNABLE hourly_agg SET (NAME = 'Updated Runnable');
DROP RUNNABLE — deletes a runnable:
DROP RUNNABLE hourly_agg;
DROP RUNNABLE IF EXISTS hourly_agg;
RUN RUNNABLE — triggers immediate execution:
RUN RUNNABLE hourly_agg;
AI Agent DDL
AI Agents are schedulable LLM-powered agents that run prompts against configured AI providers (Anthropic, OpenAI, Gemini, xAI) with full tool access. They can query data via GS SQL, create/update/delete components, and perform automated maintenance tasks. Results can optionally be written to an RDM stream for temporal history and dashboard display. Scheduling is paused for the current release — the schedule tab on the agent editor shows "Coming Soon" and saves force
hasSchedule=false. The DDL below stays
stable; agents can be defined and run manually via RUN AGENT or the UI, but cron
dispatch returns when the Process Queue redesign ships. When scheduling is enabled, a maximum of
2 scheduled tool agents can execute concurrently per organization — this limit gates
only the cron-driven scheduler path, not interactive chat, MCP ask_grovestreams, or
manual runs.
CREATE AGENT — creates a new AI agent. The identifier after
CREATE AGENT is the agent ID:
-- Basic scheduled agent
CREATE AGENT sleep_cycle
SCHEDULE 'day'
PROMPT 'Review observations older than 30 days...'
WITH (
NAME = 'Sleep Cycle Agent',
MAX_ITERATIONS = 30,
MAX_TIMEOUT_MS = 300000
);
-- Agent with result written to an RDM stream (dashboard-ready)
CREATE AGENT daily_summary
SCHEDULE 'day'
PROMPT 'Summarize the current state of all active initiatives...'
WITH (
NAME = 'Daily Summary',
RESULT_COMPONENT_ID = 'agent_reports',
RESULT_STREAM_ID = 'daily_summary'
);
| WITH Option | Description |
|---|---|
| NAME | Display name |
| FOLDER_PATH | Folder in the Tools tree |
| PROFILE_UID | SystemLLMProfile UID — determines AI provider and model. Uses system default if omitted. |
| RUNNING_USER | Run-as user ID for RBAC (defaults to creator) |
| MAX_ITERATIONS | Maximum tool-call iterations per run (default 50) |
| MAX_TIMEOUT_MS | Execution timeout in milliseconds (default 600000 = 10 min) |
| ENABLED | true/false — disabled agents skip scheduled runs (default true) |
| RESULT_COMPONENT_ID | Component ID to write the result to as an RDM sample (optional) |
| RESULT_STREAM_ID | Stream ID on that component for the result (optional) |
ALTER AGENT — modifies an existing agent:
ALTER AGENT sleep_cycle PROMPT 'Updated prompt...';
ALTER AGENT sleep_cycle SCHEDULE 'hour';
ALTER AGENT sleep_cycle NO SCHEDULE;
ALTER AGENT sleep_cycle SET (NAME = 'Updated Agent', MAX_ITERATIONS = 50);
DROP AGENT — deletes an agent:
DROP AGENT sleep_cycle;
DROP AGENT IF EXISTS sleep_cycle;
RUN AGENT — triggers immediate execution:
RUN AGENT sleep_cycle;
Entity Diagram DDL
Entity Diagrams are visual diagrams stored under Tools. They can be managed via DDL statements. CREATE ENTITY DIAGRAM — creates a new entity diagram. The identifier after
CREATE ENTITY DIAGRAM is the diagram ID.
Optional PLACE NODE clauses position template nodes on the canvas.
Relationship lines between placed templates are auto-discovered from FK streams —
if template A has an FK stream pointing to template B and both are placed on the diagram,
the connector line appears automatically:
CREATE ENTITY DIAGRAM my_diagram;
CREATE ENTITY DIAGRAM IF NOT EXISTS my_diagram;
CREATE ENTITY DIAGRAM my_diagram WITH (NAME = 'My Diagram', DESCRIPTION = 'Overview', CANVAS_WIDTH = 3000, CANVAS_HEIGHT = 2000);
-- Place nodes; meter->customer FK relationship auto-discovered
CREATE ENTITY DIAGRAM utility_model
WITH (NAME = 'Utility Model', CANVAS_WIDTH = 3000)
PLACE NODE 'meter' POSITION (100, 200) SIZE (250, 180)
PLACE NODE 'customer' POSITION (500, 200);
| Option | Description |
|---|---|
| NAME | Display name (defaults to the diagram ID) |
| DESCRIPTION | Description text |
| CANVAS_WIDTH | Canvas width in pixels (default 2000) |
| CANVAS_HEIGHT | Canvas height in pixels (default 1500) |
PLACE NODE — positions a component template node on the diagram canvas. The template is referenced by its template ID:
PLACE NODE 'template_id' POSITION (x, y)
PLACE NODE 'template_id' POSITION (x, y) SIZE (width, height)
POSITION sets the x,y coordinates on the canvas. SIZE sets width and height (defaults to 200 × 150).
When used in ALTER, placing a node with an existing template ID updates that node's position.
PLACE VIEW — positions a saved View node on the diagram canvas, alongside template nodes. Views are first-class queryable objects — rendering them on the diagram makes the relationship between a view and the templates it queries legible at a glance. The view is referenced by its view ID (the same ID used in
CREATE VIEW):
PLACE VIEW 'view_id' POSITION (x, y)
PLACE VIEW 'view_id' POSITION (x, y) SIZE (width, height)
Views are not catalog-scoped, so they appear on any entity diagram regardless of which catalog the surrounding template nodes belong to.
RELATIONSHIP (optional) — sets visual overrides on an auto-discovered relationship line. Relationships are auto-discovered from FK streams, so this clause is only needed to customize the connector line appearance:
RELATIONSHIP 'source_template'.'fk_stream_id' REFERENCES 'target_template' CARDINALITY 'many-to-one'
RELATIONSHIP 'source_template'.'fk_stream_id' REFERENCES 'target_template' WAYPOINTS ((x1, y1), (x2, y2))
CARDINALITY overrides the display label (default 'one-to-many').
WAYPOINTS define intermediate points for the connector line path.
ALTER ENTITY DIAGRAM — modifies an existing entity diagram. Supports
SET options, PLACE NODE, PLACE VIEW,
RELATIONSHIP, REMOVE NODE, and REMOVE VIEW clauses.
Relationships are re-discovered automatically after node changes —
adding a node may discover new relationships, and removing a node removes its relationships:
ALTER ENTITY DIAGRAM my_diagram SET (NAME = 'Updated Diagram', DESCRIPTION = 'New desc');
ALTER ENTITY DIAGRAM my_diagram SET (CANVAS_WIDTH = 4000, CANVAS_HEIGHT = 3000);
ALTER ENTITY DIAGRAM my_diagram SET (FOLDER_PATH = '/subfolder');
-- Add or update node positions
ALTER ENTITY DIAGRAM my_diagram
PLACE NODE 'sensor' POSITION (100, 300) SIZE (250, 180);
-- Add a saved View as a node
ALTER ENTITY DIAGRAM my_diagram
PLACE VIEW 'active_sensors' POSITION (400, 500);
-- Remove a node (its relationships are removed automatically)
ALTER ENTITY DIAGRAM my_diagram
REMOVE NODE 'sensor';
-- Remove a view from the diagram (the view itself is not deleted)
ALTER ENTITY DIAGRAM my_diagram
REMOVE VIEW 'active_sensors';
DROP ENTITY DIAGRAM — deletes an entity diagram:
DROP ENTITY DIAGRAM my_diagram;
DROP ENTITY DIAGRAM IF EXISTS my_diagram;
End-to-End Resource DDL Example
Create all four resource types in dependency order:
-- 1. Create cycles
CREATE CYCLE hourly INTERVAL 1 HOUR;
CREATE CYCLE daily INTERVAL 1 DAY;
-- 2. Create a rollup calendar referencing those cycles
CREATE ROLLUP CALENDAR standard_rc CYCLES ('hourly', 'daily');
-- 3. Create a table (template) that uses the rollup calendar
CREATE TABLE meter (kwh DOUBLE INTERVAL BASE_CYCLE 'hourly' ROLLUP_CALENDAR 'standard_rc');
-- 4. Create a stream group
CREATE STREAM GROUP energy_streams
WITH (STREAM_NAME_FILTER = 'kwh*');
-- 5. Create a runnable to aggregate on schedule
CREATE RUNNABLE hourly_agg SCHEDULE 'hourly'
ITEMS (STREAM GROUP 'energy_streams' CYCLE 'hourly');
All resource DDL statements return a single row with a
Rows Affected column
(1 for success, 0 for IF NOT EXISTS/IF EXISTS suppression).
INSERT INTO (Component Creation)
INSERT INTO creates new components from a template table. The component ID (_component_id) column
is mandatory. Stream columns can be included to set initial values at the current time.
Syntax:
-- Create a component with no initial stream data
INSERT INTO sensor (_component_id) VALUES ('sensor_001');
-- Create a component with initial stream values (appended at NOW)
INSERT INTO sensor (_component_id, temperature, humidity) VALUES ('sensor_001', 22.5, 60);
-- Create multiple components
INSERT INTO sensor (_component_id, temperature) VALUES ('s1', 22.5), ('s2', 23.0);
-- Create components from a SELECT query
INSERT INTO sensor (_component_id) SELECT DISTINCT _component_id FROM system.stream WHERE _template_id = 'old_sensor';
-- Create a component with GPS coordinates (for map display)
INSERT INTO sensor WITH (CNAME = 'Rooftop Sensor', LATITUDE = 40.7128, LONGITUDE = '-74.0060', ELEVATION = 85.0)
(_component_id, temperature) VALUES ('roof_01', 22.5);
WITH clause options for INSERT INTO:
| Option | Description |
|---|---|
| FOLDER_PATH | Subfolder under /Components. Intermediate folders are auto-created. Example: WITH (FOLDER_PATH = '/Building_A/Floor_2') |
| CNAME | Component display name. Defaults to the _component_id value if not specified. |
| DESCRIPTION | Description text for the component. |
| TIMEZONE_ID | Time zone for the component (e.g. 'America/New_York'). |
| COMP_TMPL_ID | Override the template ID used for component creation (defaults to the table name). |
| LATITUDE | GPS latitude (-90 to 90). Use a string for negative values: LATITUDE = '-33.86' |
| LONGITUDE | GPS longitude (-180 to 180). Use a string for negative values: LONGITUDE = '-77.04' |
| ELEVATION | Altitude in meters. Sets the component's elevation/altitude on the map. |
Auto-create user folders: When
FOLDER_PATH is omitted and the organization setting
Auto-create user folders is enabled (Organization > User Management), the component is automatically
placed in the calling user's folder (/Components/Users/{FolderName}). The user folder is auto-created
with owner rights if it doesn't already exist. This also applies to INSERT OR REPLACE INTO system.sample
when a component is auto-created.
The User folder name setting controls what is used as the folder name: Full Name (default), Sign-in Email, or User's UID (guarantees uniqueness and durability — the folder name won't change if the user updates their name or email).
INSERT INTO returns a single row with a
Rows Affected column indicating the number of components created.
Stream values provided during creation automatically trigger rollup calculation and derived stream computation for any dependent streams.
Derived Columns — Expressions
DDL supports defining derived streams (columns whose values are automatically computed from other streams). Derived streams use the FROM_EXPRESSION derivation type and are
recalculated whenever their dependent streams receive new data.
This includes all types of changes: appending new data, updating historical values, and deleting samples.
When a dependent stream's data changes — even retroactively — the derived stream automatically
recomputes, and changes propagate up the entire dependency tree.
For VIA (FK) dependencies, historical relationship changes are also detected and the calculation
uses the correct target entity for each time period.
Syntax:
column_id [value_type] [stream_type] DERIVED AS 'expression'
DEPENDS ON (
var FROM col [options]
| var VIA fk_chain [options]
| var VIA SQL 'resolution_sql' [options]
[, ...]
)
[ALLOW_NULLS] [NULL_AS_ZERO] [NO_AUTO_RECALC]
Per-dependent options: [CYCLE 'id'] [OFFSET n] [FK_AGGREGATOR 'fn'] [CYCLE_FUNCTION 'fn'] [FILL_FORWARD]
The
DERIVED AS clause specifies the derivation expression (a formula string).
VIA SQL — provides a raw resolution SQL query for FK dependencies. The SQL must return exactly 2 columns:
_component_uid and a stream ID string literal. Use @_component_uid to reference the current component:
-- Fan-in: average only temperature sensors attached to this tank
DEPENDS ON (temp VIA SQL 'SELECT _component_uid, ''reading''
FROM sensor WHERE tank_fk = @_component_uid AND type = ''temperature'''
FK_AGGREGATOR 'AVG' FILL_FORWARD)
The DEPENDS ON clause declares the dependent streams that feed into the expression,
binding each to a variable name used in the formula.
Dependency types:
- FROM (sibling) — References another column in the same template table. The variable is bound to the sibling stream's value.
  Syntax: variable_name FROM column_id [CYCLE 'id'] [OFFSET n] [FUNCTION 'fn']
- VIA (foreign key chain) — References a column in a related template table through one or more FK hops. The engine generates resolution SQL to traverse the FK chain.
  Syntax: variable_name VIA fk_column.target_template.target_column [CYCLE 'id'] [OFFSET n] [FUNCTION 'fn']
  Multi-hop: variable_name VIA fk_col.tmpl1.fk_col2.tmpl2.target_column
- The FK relationship stream (the stream that holds the component UID pointer, e.g., customer_fk) must be a Regular (RDM) stream. Interval streams are not allowed because FK streams store infrequent relationship changes, and loading the full history of an interval stream would be prohibitively expensive.
- For single-target FK deps (no FK_AGGREGATOR), the target data stream must be a Regular (RDM) or Point stream. Interval streams are not allowed because they can have millions of data points.
- For fan-in FK deps (with FK_AGGREGATOR), the target data stream can be an Interval stream. Fan-in aggregation is designed to read and aggregate across interval streams from multiple components.
| Option | Description |
| CYCLE 'cycle_id' | Override the rollup cycle for this dependent. The dependent's data will be aggregated to the specified cycle instead of the stream's base cycle. Uses the cycle ID (resolved to UID at execution time). |
| OFFSET n | Shift the interval time position. OFFSET -1 uses the previous interval, OFFSET -2 uses two intervals ago, etc. Useful for delta calculations and rolling averages. |
| FUNCTION 'fn' | Override the aggregation function when CYCLE is set to a different cycle than the stream's base cycle. Valid values: AVG, SUM, MIN, MAX, FIRST, LAST, STDDEVP, STDDEVA, SAMPLECOUNT, and others. Defaults to NONE (uses the stream's configured rollup method). The examples below write this option as CYCLE_FUNCTION. |
| FILL_FORWARD | Carry forward the last known value when this dependent has no data at a timestamp where other dependents do. Without FILL_FORWARD, the derivation engine requires all dependents to have a sample at the same timestamp (unless ALLOW_NULLS is set). This is essential for FK (VIA) dependencies where the referenced stream may have data at different times than sibling streams. Default: true for VIA (FK) dependencies, false for FROM (sibling) dependencies. Behavior with ALL_ARRIVED (default): the calc range is bounded by the earliest dependent's completedDate, and FILL_FORWARD fills gaps within that range by carrying forward the last known value. Behavior with ANY_ARRIVED: the calc range extends to the latest dependent's completedDate, and FILL_FORWARD fills gaps within that wider range, so a change to one dependent can produce results even if other fill-forward dependents haven't had new data recently. Restrictions: FILL_FORWARD cannot be used with rollup cycle dependencies (a CYCLE + FUNCTION combination) or point stream dependencies; attempting to do so produces a validation error. |
Derivation options (all optional, appended after the DEPENDS ON clause):
| Option | Description |
| ALLOW_NULLS | Allow the expression to evaluate even when some dependents have no value (NULL). Without this, derivation is skipped if any dependent is missing. |
| NULL_AS_ZERO | Treat NULL dependent values as zero in the expression. Implies ALLOW_NULLS behavior. |
| NO_AUTO_RECALC | Disable automatic recalculation when dependent streams are updated. The derived stream will only recalculate when explicitly triggered. |
Restrictions
Derived streams cannot use the POINT stream type. They must be REGULAR or INTERVAL.
Example — sibling dependencies: Compute heat index from temperature and humidity
columns in the same table:
CREATE TABLE sensor (
temperature DOUBLE,
humidity DOUBLE,
heat_index DOUBLE DERIVED AS '-42.379 + 2.049*t + 10.143*h'
DEPENDS ON (t FROM temperature, h FROM humidity)
)
Example — FK dependency (single hop): Pull a value from a related template through a foreign key:
CREATE TABLE meter (
kwh DOUBLE,
customerUid STRING REFERENCES customer,
billed DOUBLE DERIVED AS 'kwh * rate'
DEPENDS ON (
kwh FROM kwh,
rate VIA customerUid.customer.energy_rate
)
)
Example — FK dependency (multi-hop): Traverse two FK relationships to reach a value in a grandparent template:
CREATE TABLE meter (
kwh DOUBLE,
customerUid STRING REFERENCES customer,
adjusted DOUBLE DERIVED AS 'kwh * rate * margin'
DEPENDS ON (
kwh FROM kwh,
rate VIA customerUid.customer.energy_rate,
margin VIA customerUid.customer.supplierUid.supplier.margin
)
NULL_AS_ZERO
)
Example — OFFSET (delta calculation): Compute the difference between the current and previous interval:
CREATE TABLE meter (
kwh DOUBLE INTERVAL BASE_CYCLE 'hour',
kwh_delta DOUBLE DERIVED AS 'current - prev'
DEPENDS ON (current FROM kwh, prev FROM kwh OFFSET -1)
)
Example — CYCLE and FUNCTION: Pull the daily sum of an hourly stream:
CREATE TABLE meter (
kwh DOUBLE INTERVAL BASE_CYCLE 'hour',
kwh_daily DOUBLE DERIVED AS 'ds'
DEPENDS ON (ds FROM kwh CYCLE 'day' CYCLE_FUNCTION 'SUM')
)
Example — all three (previous day's sum):
CREATE TABLE meter (
kwh DOUBLE INTERVAL BASE_CYCLE 'hour',
yesterday_total DOUBLE DERIVED AS 'yt'
DEPENDS ON (yt FROM kwh CYCLE 'day' OFFSET -1 CYCLE_FUNCTION 'SUM')
)
Example — 3-point rolling average:
CREATE TABLE sensor (
temperature DOUBLE INTERVAL BASE_CYCLE 'hour',
temp_smooth DOUBLE DERIVED AS '(t0 + t1 + t2) / 3'
DEPENDS ON (
t0 FROM temperature,
t1 FROM temperature OFFSET -1,
t2 FROM temperature OFFSET -2
)
)
Example — derivation options:
CREATE TABLE sensor (
a DOUBLE,
b DOUBLE DERIVED AS 'x' DEPENDS ON (x FROM a)
ALLOW_NULLS NULL_AS_ZERO NO_AUTO_RECALC
)
ALTER TABLE also supports derived columns when adding or modifying columns:
ALTER TABLE sensor ADD COLUMN heat_index DOUBLE
DERIVED AS '-42.379 + 2.049*t + 10.143*h'
DEPENDS ON (t FROM temperature, h FROM humidity)
Derived Columns — Aggregation
DDL supports defining aggregation-derived streams (columns whose values are automatically computed by aggregating data from a stream group). These streams use the FROM_AGGREGATION derivation type and are recalculated when the source streams
in the stream group receive new data.
Syntax:
column_id [value_type] [INTERVAL] [BASE_CYCLE 'cycle_id']
AGGREGATION agg_function STREAM GROUP 'stream_group_id'
[CALC_RANGE n unit] [CALC_RANGE_TO n unit]
[PERCENT_GAP n] [SG_CYCLE 'cycle_id'] [SG_FUNCTION 'fn']
[NO_AUTO_RECALC]
The
AGGREGATION clause specifies the aggregation function and the stream group that
supplies the source data. Aggregation columns are always INTERVAL streams —
the stream type is automatically set to INTERVAL if not specified explicitly.
A BASE_CYCLE is required (either on the column or as an org default).
When to use aggregation vs. expression derivation:
Stream group aggregation uses background jobs for computation — it is designed for
large-scale rollups across hundreds or thousands of source streams. For smaller scenarios
(roughly 15 or fewer source streams), expression derivation is the better choice.
You can either reference streams directly in the expression (e.g., (a + b + c) / 3 with each variable as a FROM or VIA dependent), or use VIA SQL with FK_AGGREGATOR for dynamic fan-in resolution.
Expression derivation computes in real time, updates synchronously when source data changes, and does not require background job scheduling. Use aggregation when you need to aggregate across large, dynamic stream groups; use expression derivation for real-time, low-latency computation with a known, bounded set of sources.
Aggregation functions:
| Function | Description |
| SUM | Sum of all group stream values in each interval. |
| AVG | Time-weighted average of all group stream values. |
| MIN | Minimum value across all group streams in each interval. |
| MAX | Maximum value across all group streams in each interval. |
| GAPCOUNT | Number of group streams with no data (gaps) in each interval. |
| NONGAPCOUNT | Number of group streams with data (non-gaps) in each interval. |
| INTVLCOUNT | Total number of intervals with data across all group streams. |
Options:
| Option | Description |
| CALC_RANGE n unit | How far back in time to recalculate from the current time. Units: MINUTE, HOUR, DAY, WEEK, MONTH, YEAR. Default: 1 HOUR. |
| CALC_RANGE_TO n unit | How far forward in time to calculate. Same units as CALC_RANGE. Default: 0 HOUR (no forward calculation). |
| PERCENT_GAP n | Percentage of NULL/gap values allowed before the result is set to NULL. Range: 0 to 100. Default: 100 (all gaps allowed). |
| SG_CYCLE 'cycle_id' | Pre-aggregation rollup cycle. Group stream samples are rolled up to this cycle before aggregation occurs. References a cycle by ID. |
| SG_FUNCTION 'fn' | Pre-aggregation rollup function used with SG_CYCLE. Valid values: SUM, AVG, MIN, MAX, FIRST, LAST. Default: AVG. |
| NO_AUTO_RECALC | Disables automatic recalculation when source streams receive new data. The aggregation must be triggered manually. |
Restrictions:
- Aggregation columns must be INTERVAL streams (auto-set if not specified).
- A BASE_CYCLE is required.
- FILE value type streams cannot use aggregation.
- A column cannot have both DERIVED AS (expression derivation) and AGGREGATION — they are mutually exclusive.
- The referenced stream group must already exist before the CREATE TABLE statement executes.
Example — basic SUM aggregation:
CREATE TABLE energy_totals (
total_kwh DOUBLE INTERVAL BASE_CYCLE 'hour'
AGGREGATION SUM STREAM GROUP 'all_meters'
)
Example — aggregation with all options:
CREATE TABLE plant_summary (
avg_temp DOUBLE INTERVAL BASE_CYCLE 'hour'
AGGREGATION AVG STREAM GROUP 'floor_sensors'
CALC_RANGE 3 DAY
CALC_RANGE_TO 1 HOUR
PERCENT_GAP 50
SG_CYCLE 'minute'
SG_FUNCTION 'MAX'
NO_AUTO_RECALC
)
ALTER TABLE also supports aggregation columns when adding or modifying columns:
ALTER TABLE energy_totals ADD COLUMN peak_kwh DOUBLE INTERVAL BASE_CYCLE 'hour'
AGGREGATION MAX STREAM GROUP 'all_meters' CALC_RANGE 7 DAY
Views — Materialized and Live
TEQ supports two types of views for saving and reusing queries. Once created, views are first-class queryable objects — usable as tables in any FROM clause, JOIN, or subquery. See Views in the Tables reference for query-side usage.
| View Type | Description |
| MATERIALIZED VIEW | Cached snapshot of query results (backed by a saved query). Must be explicitly refreshed to update. Results are stored and served from cache until the next REFRESH. |
| VIEW | Live view that re-executes its query on every access. The stored SQL is inlined as a subquery whenever the view is referenced in a FROM clause. Uses SECURITY DEFINER semantics — the view body executes with the view owner's permissions. |
Connections (External Databases)
Connections define JDBC database connections for querying external databases alongside local data. The identifier after CREATE CONNECTION is the connection ID
— used to reference the connection in CREATE VIEW ... CONNECTION id
and CREATE MATERIALIZED VIEW ... CONNECTION id.
CREATE CONNECTION — creates a new JDBC connection:
CREATE CONNECTION mydb
TYPE 'postgresql'
HOST 'db.example.com' PORT 5432
DATABASE 'analytics'
USERNAME 'reader' PASSWORD 'secret'
WITH (maxPoolSize=10, connectionTimeout=30000);
Supported database types:
postgresql, mysql, sqlserver, oracle.
Passwords are encrypted at rest. Connection pool settings can be specified via WITH options.
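For the other supported types, only TYPE and PORT differ. A sketch of a MySQL connection (hypothetical host, database, and credentials):
-- MySQL connection; pool options in WITH are optional
CREATE CONNECTION mysql_sales
TYPE 'mysql'
HOST 'mysql.example.com' PORT 3306
DATABASE 'sales'
USERNAME 'reader' PASSWORD 'secret';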
ALTER CONNECTION — modifies a connection:
ALTER CONNECTION mydb SET host = 'new-host.example.com', password = 'newsecret';
DROP CONNECTION — removes a connection:
DROP CONNECTION IF EXISTS mydb;
JDBC Import Connectors
JDBC Import connectors pull data from an external database and ingest it into GroveStreams streams. Each import references a CONNECTION (created above) and defines two SQL statements:
an External SQL that runs against the remote database, and an Ingest SQL (GS SQL)
that processes the results via CSV_TABLE(@JDBC_RESULT) and inserts into streams.
See the JDBC Import section below for variables and examples.
CREATE JDBC IMPORT — creates a new JDBC import connector:
CREATE JDBC IMPORT [IF NOT EXISTS] import_name
CONNECTION connection_name
EXTERNAL SQL 'SELECT station_id, reading_time, temperature
FROM sensor_readings WHERE reading_time > 0'
INGEST SQL 'INSERT OR REPLACE INTO system.sample(_COMPONENT_ID, _ID, _TIME, _SAMPLE)
SELECT station_id, ''temperature'',
TOEPOCHMILLIS(reading_time, ''yyyy-MM-dd HH:mm:ss''),
temperature
FROM CSV_TABLE(@JDBC_RESULT)
WITH (1 station_id STRING, 2 reading_time STRING, 3 temperature DOUBLE)'
WITH (timezone_id = 'America/New_York', max_rows = 50000,
template_id = 'sensor', name = 'Station Importer');
| WITH Option | Description |
| timezone_id | Java timezone ID for GS SQL execution (e.g., 'America/New_York'). Default: org timezone. |
| max_rows | Safety cap on external result set size. Default: 100,000. |
| template_id | Optional component template ID to associate with this import. |
| name | Display name. Defaults to the import ID if not specified. |
| folder_path | Folder path under /Tools/Connectors. Default: root. |
The running user is automatically set to the user executing the DDL statement. Single quotes within the EXTERNAL SQL or INGEST SQL strings must be escaped as
'' (two single quotes).
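For example, to embed the literal value HQ inside an EXTERNAL SQL string, each inner quote is doubled (hypothetical remote table and filter):
ALTER JDBC IMPORT my_import
SET external_sql = 'SELECT id, ts, value FROM readings WHERE site = ''HQ''';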
ALTER JDBC IMPORT — modifies a JDBC import connector:
ALTER JDBC IMPORT my_import
SET external_sql = 'SELECT id, ts, value FROM new_table',
connection = 'other_connection',
max_rows = 25000, timezone_id = 'UTC',
template_id = 'meter';
Settable options:
external_sql, ingest_sql, connection,
max_rows, timezone_id, template_id, name.
DROP JDBC IMPORT — removes a JDBC import connector:
DROP JDBC IMPORT IF EXISTS my_import;
Materialized Views
CREATE MATERIALIZED VIEW — creates a new materialized view:
CREATE [OR REPLACE] MATERIALIZED VIEW [IF NOT EXISTS] view_id
[CONNECTION connection_name]
AS select_stmt | (select_stmt) | 'remote SQL string'
[WITH (options)]
The
AS clause accepts the SELECT bare or wrapped in parentheses (PG / dbt compatible).
OR REPLACE drops any existing view of the same name (along with its cached results) and recreates from the new definition.
By default, the query executes immediately on CREATE and the results are cached. Use
NO_EXECUTE = 'true' to save the definition without running it.
The optional
CONNECTION clause references an external database connection by its ID
(the identifier used in CREATE CONNECTION id).
When present, the AS clause must be a string literal containing remote-dialect SQL
(not GS SQL). Without a CONNECTION clause, the view queries local data using GS SQL.
The connector is referenced internally by UID, so renaming a connector does not break existing views.
A connection cannot be dropped while views reference it.
DROP CONNECTION will fail
with a list of referencing views.
| Option | Description |
| NAME | Display name for the materialized view. Defaults to the view_id. |
| TIMEZONE_ID | Time zone for the query execution context. |
| FOLDER_PATH | Subfolder under /Tools/Views. Intermediate folders are auto-created. |
| NO_EXECUTE | Set to 'true' to save the definition without executing. Default: 'false' (executes immediately). |
Examples:
-- Create and execute immediately
CREATE MATERIALIZED VIEW daily_energy
AS SELECT _component_id, _component_name, kwh, cost FROM meter WHERE kwh > 0
WITH (FOLDER_PATH = '/reports/energy', NAME = 'Daily Energy Summary');
-- Save definition only (no initial execution)
CREATE MATERIALIZED VIEW IF NOT EXISTS heavy_report
AS SELECT * FROM sensor JOIN customer ON sensor.customerUid = customer._component_uid
WITH (FOLDER_PATH = '/reports', NO_EXECUTE = 'true');
-- External view (queries a remote PostgreSQL database)
CREATE MATERIALIZED VIEW customer_snapshot
CONNECTION mydb
AS 'SELECT id, name, email FROM public.customers'
WITH (FOLDER_PATH = '/external');
ALTER MATERIALIZED VIEW — modifies an existing materialized view:
-- Change the SQL definition
ALTER MATERIALIZED VIEW daily_energy
AS SELECT _component_id, _component_name, kwh, cost, total_cost FROM meter WHERE kwh > 0;
-- Rename and move
ALTER MATERIALIZED VIEW daily_energy
SET (NAME = 'Daily Energy v2', FOLDER_PATH = '/reports/v2');
-- Combined
ALTER MATERIALIZED VIEW daily_energy
SET (NAME = 'Updated Report')
AS SELECT _component_id, kwh FROM meter;
-- Rename via PG-style RENAME TO (mutually exclusive with other actions)
ALTER MATERIALIZED VIEW daily_energy RENAME TO daily_kwh;
Note: ALTER does not re-execute the query. Use REFRESH to update cached results after an ALTER.
DROP MATERIALIZED VIEW — deletes a materialized view and its cached results:
DROP MATERIALIZED VIEW daily_energy;
DROP MATERIALIZED VIEW IF EXISTS daily_energy;
DROP MATERIALIZED VIEW IF EXISTS daily_energy CASCADE; -- CASCADE accepted for PG / dbt compatibility (no-op in GS)
REFRESH MATERIALIZED VIEW — re-executes the query and updates cached results:
-- Synchronous (blocks until complete)
REFRESH MATERIALIZED VIEW daily_energy;
-- Asynchronous (queues execution, returns immediately)
REFRESH MATERIALIZED VIEW daily_energy WITH (NO_WAIT = 'true');
Live Views
Live views store a SQL definition that is inlined as a subquery every time the view is referenced in a FROM clause. Unlike materialized views, live views always return fresh results.
CREATE VIEW — creates a new live view:
-- Local live view (GS SQL)
CREATE VIEW active_sensors
AS SELECT _component_id, _component_name, temperature, humidity
FROM sensor
WHERE temperature IS NOT NULL
WITH (NAME = 'Active Sensors', FOLDER_PATH = '/views/monitoring');
-- External live view (queries remote DB on every access)
CREATE VIEW customer_live
CONNECTION mydb
AS 'SELECT id, name, email FROM public.customers WHERE active = true';
| Option | Description |
| NAME | Display name for the view. Defaults to the view_id. |
| TIMEZONE_ID | Time zone for the view's query execution context. |
| FOLDER_PATH | Subfolder under /Tools/Views. Intermediate folders are auto-created. |
CREATE OR REPLACE VIEW replaces an existing view's definition without requiring a DROP:
CREATE OR REPLACE VIEW active_sensors
AS SELECT _component_id, _component_name, temperature, humidity, pressure FROM sensor;
Using views in queries — reference any view (local or external) in a FROM clause like any table:
-- Query a local view
SELECT * FROM active_sensors WHERE temperature > 100;
-- Temporal parameters work through local views
SELECT _component_name, temperature(range(sd=-1d)) FROM active_sensors;
-- Query an external view (LIVE mode — fetches from remote DB)
SELECT first_name, region FROM customer_live WHERE credit_limit > 5000;
-- Query a cached external view (CACHE mode — reads from cached results)
SELECT * FROM customer_snapshot;
-- Join an external view with a system table
SELECT s._component_name, c.first_name
FROM (SELECT DISTINCT _component_name FROM system.stream) s, customer_live c;
-- Join an external view with a template table
SELECT m._component_name, m.kwh, c.first_name
FROM meter m, customer_live c
WHERE m._component_name IS NOT NULL;
-- Join two external views
SELECT c.first_name, o.amount
FROM customer_live c JOIN orders_live o ON c.customer_id = o.customer_id;
External views can be mixed freely with system tables, template tables, and other external views in the same query. Column metadata is persisted when the view is created; if metadata is missing (e.g., the remote database was unreachable at creation time), the query engine returns a clear error asking you to re-execute or re-create the view.
LIVE vs CACHE mode in FROM clauses
LIVE views (created with CREATE VIEW ... CONNECTION) execute the remote SQL on every query. Use for data that must be current.
CACHE views (created with CREATE MATERIALIZED VIEW ... CONNECTION) read from locally cached results. Use REFRESH MATERIALIZED VIEW to update the cache. CACHE mode is faster but may return stale data.
EXTERNAL_QUERY() — inline federated queries
For ad-hoc remote queries without pre-defining a view, use EXTERNAL_QUERY() directly in the FROM clause. See the GS SQL Grammar reference for full syntax and examples. The remote SQL is written in the remote database's native dialect:
SELECT * FROM EXTERNAL_QUERY('mysql_employees', 'SELECT dept_name FROM departments') d;
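EXTERNAL_QUERY() results can be joined with local tables like any other derived table. A sketch, assuming the meter template from earlier and a hypothetical remote lookup table keyed by meter_id:
SELECT m._component_name, m.kwh, d.dept_name
FROM meter m
JOIN EXTERNAL_QUERY('mysql_employees', 'SELECT meter_id, dept_name FROM dept_meters') d
ON m._component_id = d.meter_id;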
Cross-database queries
A JDBC connection's Default Database sets the unqualified table context for external views and EXTERNAL_QUERY(). To query tables in other databases on the same server, use qualified table names in the remote SQL (e.g., 'other_db.table_name' for MySQL, 'other_db.dbo.table_name' for SQL Server). The connection's credentials must have access to the target database.
Views can reference other views in their SQL — including other local views, external views, template tables, and system tables. View nesting is supported up to 10 levels deep. Circular references (view A → view B → view A) are detected and rejected with a clear error.
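A sketch of a qualified-name query, assuming a SQL Server connection named mssql_main and a second database other_db on the same server:
SELECT * FROM EXTERNAL_QUERY('mssql_main',
'SELECT order_id, amount FROM other_db.dbo.orders') o;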
ALTER VIEW — modifies an existing live view:
ALTER VIEW active_sensors
SET (NAME = 'Active Sensors v2');
ALTER VIEW active_sensors
AS SELECT _component_id, _component_name, temperature FROM sensor WHERE _component_name ILIKE 'Building%';
ALTER VIEW active_sensors RENAME TO live_sensors;
DROP VIEW — deletes a live view:
DROP VIEW active_sensors;
DROP VIEW IF EXISTS active_sensors;
DROP VIEW IF EXISTS active_sensors CASCADE; -- CASCADE accepted for PG / dbt compatibility (no-op in GS)
Dependency constraints
DROP VIEW fails if other views depend on it. DROP TABLE ... CASCADE
also drops dependent live views that reference the template. ALTER TABLE ... DROP COLUMN
fails if a live view references that column.
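For example, to drop the sensor template from earlier together with any live views that reference it (a sketch using the CASCADE form described above):
DROP TABLE sensor CASCADE;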
JDBC Import
A JDBC Import connector reaches out to an external database on a schedule, fetches rows, and ingests them into GroveStreams streams using GS SQL. It uses a two-SQL model:
- External SQL — standard SQL that executes against the remote database.
- Ingest SQL — GS SQL that processes the fetched data and inserts it into streams. The external results are passed to the Ingest SQL through the @JDBC_RESULT variable.
JDBC Import Variables
| Variable | Description |
| @JDBC_RESULT | CSV string containing the external SQL results. The first row is the column header. Parse with CSV_TABLE(@JDBC_RESULT). |
| @LAST_RUN_DATE | Epoch millisecond timestamp of the last successful import. 0 on the first run. Use for incremental imports. |
| @CONNECTOR_ID | The ID of the JDBC Import connector. |
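A sketch of an incremental External SQL using @LAST_RUN_DATE — this assumes the variable is substituted before the query is sent to the remote database, and that reading_time is stored as epoch milliseconds on the remote side:
-- Fetch only rows newer than the last successful import
SELECT station_id, reading_time, temperature
FROM sensor_readings
WHERE reading_time > @LAST_RUN_DATE
ORDER BY reading_time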
JDBC Import Example
External SQL (runs on the remote database):
SELECT station_id, reading_time, temperature
FROM sensor_readings
WHERE reading_time > '2024-01-01'
ORDER BY reading_time
Ingest SQL (GS SQL that processes
@JDBC_RESULT):
INSERT OR REPLACE INTO
SAMPLE(_COMPONENT_ID, _ID, _TIME, _SAMPLE)
SELECT station_id, 'temperature',
TOEPOCHMILLIS(reading_time, 'yyyy-MM-dd HH:mm:ss'),
temperature
FROM CSV_TABLE(@JDBC_RESULT)
WITH (1 station_id STRING, 2 reading_time STRING, 3 temperature DOUBLE);
Max Rows limits the number of rows fetched from the external database (default: 100,000). The import can be scheduled or triggered manually via the editor's Run Now button. Use the Test Import tab to execute only the External SQL and preview the first 100 rows before running a full import.
See the Connectors page for more details.
Dashboards
Dashboard DDL lets you create, modify, and delete dashboards programmatically via GS SQL. Dashboards created via DDL are regular dashboards — they appear in the UI and can be customized visually after creation. Widget-specific visual details (line colors, axis labels, etc.) are defaulted by the UI when first opened.
CREATE DASHBOARD
Widget Types: LINE_CHART, BAR_CHART, COL_CHART, STACKED_BAR_CHART, STACKED_COL_CHART, PIE_CHART, GAUGE, BIG_NUMBER, LEVEL, TABLE_VIEW, GRID_VIEW, PIVOT_VIEW, TABLESQL_VIEW, PIVOTSQL_VIEW, BB_MAP, TRACK_MAP, URL, HTML, SS_RAD, SS_RADBAR, SS_SINGLE, SS_ODOM, SS_WIND, PIVOT_BAR, PIVOT_COL, PIVOT_LINE, PIVOT_STACKED_BAR, PIVOT_STACKED_COL, LLM_VIEW, DASH_LINK, STREAM_FORM2, RSS_FEED, RANGECREATE DASHBOARD dashboard_id ( widget_type ['name'] [widget_clauses], widget_type ['name'] [widget_clauses], ... ) [WITH (option = value, ...)]
Widget Clauses:
| Clause | Description |
| STREAMS ('comp/stream' [CYCLE 'id' STATS 'fn,...'], ...) | Stream references in componentId/streamId format. Each stream can optionally specify its own CYCLE and STATS for rollup aggregation. Per-stream CYCLE/STATS are only valid for chart, gauge, big number, level, and table widget types. |
| STREAM_GROUP 'id' | Reference a stream group by its ID. |
| HEIGHT n | Widget height in pixels (default: 300). |
| COLSPAN n | Number of dashboard columns the widget spans (default: 1). |
| RANGE_TYPE HEARTBEAT | FIXED | CYCLE | Time range type (default: HEARTBEAT). |
| POLLING n | Heartbeat polling interval in seconds. |
| VIEW_LAST n | Number of data points (intervals) to retrieve and display. Use with CYCLE to specify the interval size (e.g., VIEW_LAST 24 CYCLE 'hour' for a 24-hour chart). Without a CYCLE, retrieves raw samples. DURATION is accepted as an alias. |
| CYCLE 'id' | Widget-level cycle (interval) reference. For HEARTBEAT range type, specifies the rollup interval for VIEW_LAST. For CYCLE range type, specifies the current cycle range. |
| STATS 'fn,fn,...' | Widget-level default statistics (e.g., 'AVG', 'MIN,MAX'). |
| EXTRA_DATA 'json' | Raw JSON for widget-specific configuration. Validated on save. If omitted, the UI applies defaults when the dashboard is first opened. See examples below for SQL widget usage. |
Dashboard WITH Options:
| Option | Description |
| name | Dashboard display name (defaults to dashboard_id). |
| folder_path | Subfolder under /Tools/Dashboards. Intermediate folders are auto-created. |
| columns | Number of columns (0 = responsive, default: 2). |
| min_columns | Minimum columns when responsive (default: 1). |
| col_width | Fixed column width in pixels (0 = stretch, default: 0). |
| theme | Theme name: none, black, white, white2, whitetan, blue, green. |
| bg_color | Background color hex (e.g., 'f5f5f5'). |
| title | Dashboard title text. |
| title_style | CSS style for title. |
| title_height | Title area height in pixels (default: 50). |
| border_size | Widget border width in pixels (default: 2). |
| border_color | Widget border color hex. |
| border_radius | Widget corner radius in pixels (default: 4). |
| align | Widget alignment: left, center, right (default: center). |
| gutter_x / gutter_y | Spacing between widgets (default: 10). |
| padding_x / padding_y | Dashboard padding (default: 10). |
| hide_titles | Hide widget header bars (true/false). |
| hide_tools | Hide widget edit buttons (true/false). |
| headers_style | CSS style for widget headers. |
Examples:
-- Simple dashboard with two widgets
CREATE DASHBOARD temp_monitor (
LINE_CHART 'Temperature Trends'
STREAMS ('Sensor_A/temperature', 'Sensor_B/temperature')
HEIGHT 400 COLSPAN 2,
BIG_NUMBER 'Current Temp' STREAMS ('Sensor_A/temperature') HEIGHT 150
) WITH (columns = 2, theme = 'white', title = 'Temperature Monitor');
-- Per-stream cycle and stats for rollup aggregation
CREATE DASHBOARD hourly_report (
LINE_CHART 'Hourly Averages'
STREAMS (
'Meter_1/kwh' CYCLE 'every_hour' STATS 'AVG,MAX',
'Meter_2/kwh' CYCLE 'every_hour' STATS 'AVG'
)
HEIGHT 400 COLSPAN 3
RANGE_TYPE HEARTBEAT POLLING 60
VIEW_LAST 24 CYCLE 'every_hour'
) WITH (columns = 3, folder_path = '/Reports');
-- SQL Table widget with EXTRA_DATA (inline GS SQL)
CREATE DASHBOARD sql_report (
TABLESQL_VIEW 'Sensor Report' HEIGHT 400 COLSPAN 3
EXTRA_DATA '{
"sqlType": "sql",
"gsql": "SELECT _component_name, temperature, humidity FROM sensor ORDER BY _component_name",
"stripeRows": true,
"pageSize": 50,
"rowNumberer": true
}'
) WITH (columns = 3);
-- SQL Table widget referencing a saved View
CREATE DASHBOARD view_report (
TABLESQL_VIEW 'Saved View Report' HEIGHT 400
EXTRA_DATA '{"sqlType": "query", "query": {"name": "my_saved_view"}}'
);
-- URL widget and Map
CREATE DASHBOARD external_dash (
URL 'Grafana' HEIGHT 500 COLSPAN 2
EXTRA_DATA '{"url": "https://grafana.example.com/d/abc"}',
BB_MAP 'Asset Locations' HEIGHT 400
STREAMS ('Sensor_A/location')
EXTRA_DATA '{"zoomLevel": 12, "mapTypeId": "satellite"}'
);
ALTER DASHBOARD
-- Change dashboard settings
ALTER DASHBOARD dashboard_id SET (option = value, ...)
-- Add new widgets
ALTER DASHBOARD dashboard_id ADD (
widget_type ['name'] [widget_clauses],
...
)
Examples:
ALTER DASHBOARD temp_monitor SET (theme = 'black', columns = 4);
ALTER DASHBOARD temp_monitor ADD (
GAUGE 'Pressure' STREAMS ('Pump_1/psi') HEIGHT 200
);
DROP DASHBOARD
DROP DASHBOARD dashboard_id
DROP DASHBOARD IF EXISTS dashboard_id
Org-level permissions: CREATE/ALTER/DROP DASHBOARD operations respect the same
org-level GSQL, ODBC, and AI Chat permission settings as other resource DDL
(Create Other Tools, Alter Other Tools, Drop Other Tools).
Tools DDL — Time Filters, Units, Delete Profiles
TEQ also supports DDL for managing Time Filters, Units, and Delete Profiles. These entities are created in the /Tools/ folder hierarchy.
All operations respect the same org-level GSQL, ODBC, and AI Chat permission settings
(Create Other Tools, Alter Other Tools, Drop Other Tools).
Time Filters
Time filters define which time-of-day intervals to include when querying or aggregating data.
CREATE TIME FILTER — creates a new time filter definition. The
INTERVAL_SIZE clause specifies the minutes per interval for the time-of-day grid.
It must evenly divide 1440 (minutes per day). Common values:
60 (hourly — 24 intervals/day),
30 (half-hourly — 48 intervals/day),
15 (quarter-hourly — 96 intervals/day),
1440 (one interval per day).
Important: The time filter's
INTERVAL_SIZE does not need to match
the stream's base cycle. It only needs to evenly divide into a day (1440 minutes).
A stream with a 1-minute base cycle can use a time filter with INTERVAL_SIZE 60 (hourly)
— all 60 one-minute intervals within each hour slot will be included or excluded together.
The stream's base cycle must fit evenly into the time filter interval.
INCLUDE / EXCLUDE clauses control which day-of-week and hour ranges are active. When INCLUDE clauses are present, all slots default to excluded and only the specified ranges are included. When no INCLUDE clauses are present, all slots default to included. Hour values are 0–23 (start inclusive, end exclusive). The hour values are mapped to interval slots based on
INTERVAL_SIZE —
for example, with INTERVAL_SIZE 30, hour 8 maps to slot 16 (8×2), hour 17 maps to slot 34.
RANGE clauses define time ranges (seasons) that restrict when the filter applies. Dates are ISO format strings. When no RANGE clauses are present, the filter applies to all dates. When RANGE clauses are present, only data within those date ranges is included.
-- All hours included, hourly intervals (no INCLUDE clauses)
CREATE TIME FILTER always_on INTERVAL_SIZE 60;
-- Mon-Fri 8am to 5pm only, hourly intervals
CREATE TIME FILTER business_hours INTERVAL_SIZE 60
INCLUDE MONDAY TO FRIDAY 8 TO 17
WITH (NAME = 'Business Hours', TIMEZONE_ID = 'US/Eastern');
-- Weekdays only, one interval per day (matches screenshot: 1440 minutes)
CREATE TIME FILTER weekdays INTERVAL_SIZE 1440
INCLUDE MONDAY TO FRIDAY 0 TO 24;
-- Mon-Fri 8am-5pm with 30-minute intervals, plus Saturday mornings
CREATE TIME FILTER extended_hours INTERVAL_SIZE 30
INCLUDE MONDAY TO FRIDAY 8 TO 17
INCLUDE SATURDAY 9 TO 13;
-- Summer season only (date ranges restrict when filter applies)
CREATE TIME FILTER summer_business INTERVAL_SIZE 60
INCLUDE MONDAY TO FRIDAY 8 TO 17
RANGE '2024-06-01T00:00:00' TO '2024-09-22T00:00:00'
RANGE '2025-06-01T00:00:00' TO '2025-09-22T00:00:00'
WITH (NAME = 'Summer Business Hours');
-- TOU rate periods for energy (off-peak = nights + weekends)
CREATE TIME FILTER on_peak INTERVAL_SIZE 60
INCLUDE MONDAY TO FRIDAY 16 TO 21
WITH (NAME = 'On-Peak (M-F 4PM-9PM)');
CREATE TIME FILTER off_peak INTERVAL_SIZE 60
INCLUDE SUNDAY TO SATURDAY 0 TO 24
EXCLUDE MONDAY TO FRIDAY 8 TO 21
WITH (NAME = 'Off-Peak (Nights + Weekends)');
-- Assign a time filter to a stream column (CREATE TABLE or ALTER TABLE)
ALTER TABLE meter
ADD COLUMN on_peak_export DOUBLE INTERVAL BASE_CYCLE '5min'
TIME FILTER 'on_peak'
DERIVED AS 'net' DEPENDS ON (net FROM net_export);
Day-of-week keywords:
SUNDAY, MONDAY, TUESDAY,
WEDNESDAY, THURSDAY, FRIDAY, SATURDAY.
Use day TO day for ranges (e.g. MONDAY TO FRIDAY) or a single day.
| Option | Description |
| NAME | Display name shown in Observation Studio. Defaults to the time filter ID if omitted. CNAME is an alias. |
| DESCRIPTION | Description text for the time filter. |
| TIMEZONE_ID | Time zone for determining time-of-day boundaries (e.g. 'America/New_York'). |
| FOLDER_PATH | Subfolder under /Tools/Time Filters. Intermediate folders are auto-created. |
ALTER TIME FILTER — modifies an existing time filter.
INCLUDE and EXCLUDE clauses modify the existing time-of-day matrix
(INCLUDE turns slots on, EXCLUDE turns them off):
ALTER TIME FILTER business_hours SET (DESCRIPTION = 'Updated description');
-- Add Sunday afternoon to existing filter
ALTER TIME FILTER extended_hours INCLUDE SUNDAY 12 TO 17;
-- Remove Friday afternoons
ALTER TIME FILTER extended_hours EXCLUDE FRIDAY 15 TO 17;
DROP TIME FILTER — deletes a time filter:
DROP TIME FILTER business_hours;
DROP TIME FILTER IF EXISTS business_hours;
Units
Units define measurement formatting: symbol, number format, decimal/thousand separators, and boolean display style.
CREATE UNIT — creates a new unit definition:
CREATE UNIT kwh WITH (SYMBOL = ' kWh', SYMBOL_LOCATION = 'AFTER');
CREATE UNIT IF NOT EXISTS dollars
WITH (NAME = 'US Dollars', SYMBOL = '$', SYMBOL_LOCATION = 'BEFORE',
NUMBER_FORMAT = '0,000.00');
| Option | Description |
| NAME | Display name. Defaults to the unit ID if omitted. CNAME is an alias. |
| SYMBOL | Symbol string (e.g. '$', '%', ' kWh'). |
| SYMBOL_LOCATION | 'BEFORE' or 'AFTER' (default). Where the symbol is placed relative to the value. |
| NUMBER_FORMAT | Number format pattern (e.g. '0,000.00'). Default: '0,000.00'. |
| DECIMAL_SEP | Decimal separator character. Default: '.'. |
| THOUSAND_SEP | Thousand separator character. Default: ','. |
| BOOLEAN_STYLE | How boolean values display: 'TRUE_FALSE' (default), 'ON_OFF', 'YES_NO', 'ONE_ZERO', 'CLOSE_OPEN', 'OPEN_CLOSE', 'HIGH_LOW', 'LOW_HIGH'. |
| FOLDER_PATH | Subfolder under /Tools/Units. Intermediate folders are auto-created. |
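BOOLEAN_STYLE is useful for streams that hold on/off or open/closed states. A minimal sketch — the door_state unit ID is illustrative, not a built-in:
-- Hypothetical unit for a door-contact boolean stream; values render
-- in the OPEN_CLOSE style instead of the default TRUE_FALSE.
CREATE UNIT IF NOT EXISTS door_state
WITH (NAME = 'Door State', BOOLEAN_STYLE = 'OPEN_CLOSE');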
ALTER UNIT — modifies an existing unit:
ALTER UNIT kwh SET (SYMBOL = ' kilowatt-hours');
ALTER UNIT dollars SET (NUMBER_FORMAT = '0,000.0000');
DROP UNIT — deletes a unit:
DROP UNIT kwh;
DROP UNIT IF EXISTS kwh;
Catalogs
Catalogs provide three-level namespacing (org.catalog.table) for organizing imported database tables.
Each organization has a default catalog that cannot be deleted. User-created catalogs scope component ID uniqueness
to (catalog + template), allowing the same ID across different imported databases.
PG / ODBC schema mapping. Over the PostgreSQL wire adapter, GS catalogs surface as PG schemas. The
default catalog (and any template not assigned to a catalog) maps to PG schema public;
other catalog names pass through unchanged. Inbound queries against public.<table> resolve
to default-catalog templates, so PG clients (dbt, Power BI, DBeaver) work without configuration.
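As an illustration of the mapping, a PG client connected through the wire adapter could address a default-catalog template either bare or via the public schema (the sensor template here is assumed to exist in the default catalog):
-- Both forms resolve to the same default-catalog template:
SELECT _component_name, temperature FROM sensor LIMIT 10;
SELECT _component_name, temperature FROM public.sensor LIMIT 10;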
CREATE CATALOG — creates a new catalog:
CREATE CATALOG sampledb;
CREATE CATALOG IF NOT EXISTS sampledb;
CREATE CATALOG mydb WITH (NAME = 'My Database', DESCRIPTION = 'Imported from MySQL');
| Option | Description |
| NAME | Display name. Defaults to the catalog ID if omitted. |
| DESCRIPTION | Optional description. |
ALTER CATALOG — modifies an existing catalog:
ALTER CATALOG mydb SET (NAME = 'Updated Name', DESCRIPTION = 'New description');
DROP CATALOG — deletes a catalog:
DROP CATALOG mydb;
DROP CATALOG IF EXISTS mydb;
The default catalog cannot be dropped. A catalog cannot be dropped if any templates reference it.
Catalog-Qualified Table Names (Dot Notation) — tables can be qualified with a catalog name using dot notation:
-- Create a catalog and tables within it
CREATE CATALOG sampledb;
CREATE TABLE sampledb.orders (order_id STRING RDM PRIMARY_KEY true, customer_name STRING, total DOUBLE);
CREATE TABLE sampledb.customers (customer_id STRING RDM PRIMARY_KEY true, name STRING, email STRING);
-- Query using dot notation
SELECT * FROM sampledb.orders LIMIT 10;
SELECT sampledb.orders.customer_name FROM sampledb.orders WHERE total > 50;
-- Insert data
INSERT OR REPLACE INTO sampledb.orders (_component_id, customer_name, total)
VALUES ('ord001', 'Acme Corp', 99.95);
-- Unqualified table names use the default catalog
CREATE TABLE sensor (temperature DOUBLE, humidity DOUBLE);
When inserting into the system.sample table for a catalog-scoped template, use the catalog-qualified COMP_TMPL_ID:
INSERT OR REPLACE INTO system.sample
WITH (COMP_TMPL_ID = 'sampledb.orders', FOLDER_PATH = '/Components/sampledb/orders')
(_component_id, _id, _time, _sample)
VALUES ('ord001', 'total', 1234567890000, 99.95);
Delete Profiles
Delete profiles define data retention policies. They specify how long stream sample data is retained before nightly purges remove it.
CREATE DELETE PROFILE — creates a new delete profile. The
RETENTION clause specifies how many time units of data to retain:
-- Retain 13 months of data
CREATE DELETE PROFILE thirteen_months RETENTION 13 MONTH;
CREATE DELETE PROFILE IF NOT EXISTS one_year RETENTION 1 YEAR
WITH (NAME = 'One Year Retention', DESCRIPTION = 'Keep one year of samples');
Supported retention types:
MINUTE, HOUR, DAY,
WEEK, MONTH, YEAR.
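Any retention type pairs with a count. A brief sketch using DAY — the profile ID is illustrative:
-- Keep roughly one quarter of raw samples
CREATE DELETE PROFILE IF NOT EXISTS ninety_days RETENTION 90 DAY
WITH (NAME = '90-Day Retention');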
| Option | Description |
| NAME | Display name. Defaults to the delete profile ID if omitted. CNAME is an alias. |
| DESCRIPTION | Description text for the delete profile. |
| FOLDER_PATH | Subfolder under /Tools/Delete Profiles. Intermediate folders are auto-created. |
ALTER DELETE PROFILE — modifies an existing delete profile:
ALTER DELETE PROFILE thirteen_months RETENTION 6 MONTH;
ALTER DELETE PROFILE thirteen_months SET (DESCRIPTION = 'Updated to 6 months');
DROP DELETE PROFILE — deletes a delete profile:
DROP DELETE PROFILE thirteen_months;
DROP DELETE PROFILE IF EXISTS thirteen_months;
Statistics — ANALYZE STATS
The query planner consults per-org table statistics — per-template and per-catalog stream-row counts — when choosing between an index scan and a server-side filtered table scan. For predicates like WHERE _template_id = 'employees',
when a single template owns most of the org's stream rows, a table scan with a
server-side filter is faster than the two-phase index path; when the template is
small, the index path wins. Stats let the planner make the right call.
Stats are maintained automatically. Idle orgs incur near-zero overhead, and bulk JDBC imports trigger an asynchronous refresh so the planner sees the new selectivity right away.
ANALYZE STATS — forces a synchronous recompute of the org's table statistics. Useful after a mass schema change, when troubleshooting a planner choice, or in tests:
ANALYZE STATS;
The statement is org-scoped (no table argument) and takes no options. It is safe to run at any time — it only reads data and completes in seconds, even on orgs with millions of streams.
