AI Assistant



Overview

The GroveStreams AI Assistant is an agentic system, not a simple chatbot. When you send a message, the assistant autonomously reasons about your request, selects and executes tools, handles errors, and returns a final answer. You pick from a curated set of LLM profiles — spanning OpenAI, Anthropic, Google Gemini, and xAI — configured by GroveStreams. Your organization admin controls which profiles are available to your users and which one is the org default. Token usage is billed at a base rate multiplied by a per-model factor, so cheaper models cost less per token; the assistant handles everything else.

The assistant is available in two places:

  • Web UI — open the chat panel inside the GroveStreams application
  • HTTP API — call the AI Assistant API from external applications, scripts, or AI agents


How It Works

The assistant uses a flat architecture with escalation sub-agents. A single Brain agent has direct access to all tools and handles most requests itself. For specialized work, it delegates to sub-agents that run in isolated context windows.


Brain (Primary Agent)

The Brain receives every user message and has direct access to all tools — GS SQL execution, object lookup, documentation search, chart rendering, and more. For simple queries and lookups it answers directly without delegation. This avoids the overhead of routing through sub-agents for common operations.

GS SQL Agent

A dedicated SQL sub-agent that owns the full GS SQL grammar. It assembles queries, executes them, reads error messages, corrects mistakes, and retries — all within its own context. The Brain receives clean results without SQL noise filling up the main conversation. This agent always runs at the Brain model tier for accuracy, regardless of how it was invoked.
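As an illustration only, here is a roll-up query of the kind this sub-agent might assemble and refine. The component, stream, and interval names are hypothetical, and the authoritative grammar is in the GS SQL Overview:

SELECT stream_id, AVG(value) AS avg_value
FROM pump_station
WHERE time >= NOW() - INTERVAL '24' HOUR
GROUP BY stream_id;

If a query like this fails (say, a misspelled component name), the sub-agent reads the error, corrects the statement, and retries within its own context; the Brain receives only the final result set.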

Advanced Math Agent

Performs FFT, k-means clustering, matrix algebra, entropy calculations, z-scores, t-tests, and other statistical operations on stream data. Has access to the GS SQL Agent for fetching data.

Docs Search Agent

Searches and retrieves GroveStreams documentation. Handles multi-section lookups and how-to questions by scanning the help file system. Always runs at the Minion model tier (see below) regardless of who invokes it — reading help docs doesn't need top-tier reasoning, so this sub-agent stays cheap.

Each sub-agent gets an isolated context window. When the sub-agent finishes, its context is garbage collected — large documents, intermediate results, and retry attempts do not accumulate in the main conversation.


Brain and Minion Model Tiers

Each LLM profile pairs two models: a Brain for the main reasoning loop and a Minion for lightweight delegated work — documentation search, query rewriting, simple classifications. The Minion is typically a cheaper, faster model (e.g., GPT‑5‑nano paired with GPT‑5; Claude Haiku paired with Claude Sonnet). When admins pick a profile in Organization Settings, both slots are configured together; the GroveStreams default profiles ship with sensible Brain/Minion pairings out of the box.

Tier assignment is per sub-agent. The GS SQL Agent always runs at the Brain tier because SQL accuracy matters. The Docs Search Agent always runs at the Minion tier because reading docs is routine. The Brain agent itself runs at — unsurprisingly — the Brain tier.

Tiers also matter for billing: Brain and Minion bill at their own per-token multipliers. A profile that pairs an expensive Brain with a cheap Minion lets you keep main-loop reasoning quality high while keeping routine sub-agent work inexpensive. See the pricing FAQ for the per-plan token rates.
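As a worked illustration with made-up numbers (actual rates and multipliers are listed in the pricing FAQ): suppose the base rate is 1 credit per 1,000 tokens, the Brain model's multiplier is 1.0, and the Minion model's multiplier is 0.25. A request that consumes 8,000 Brain tokens and 20,000 Minion tokens would then meter:

  (8,000 / 1,000)  x 1.00 = 8.0 credits (Brain)
  (20,000 / 1,000) x 0.25 = 5.0 credits (Minion)
  Total                   = 13.0 credits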

Per-model multipliers track third-party LLM provider pricing in good faith but may lag behind vendor price changes — sometimes by hours, sometimes by weeks. Multipliers may be revised at any time without notice and revisions are not retroactive. AI Assistant credit charges are final once metered and are not refundable based on multiplier lag. See Terms of Service Section 10.8 for the full disclosure.



Key Capabilities

The assistant has access to over 20 built-in tools. Key capabilities include:

  • GS SQL queries — formulates and executes time-series queries, roll-ups, and cross-stream joins via GS SQL
  • DDL execution — creates, alters, and drops templates, cycles, rollup calendars, stream groups, views, materialized views, runnables, agents, and more
  • Object discovery — finds components, streams, templates, dashboards, connectors, and queries by name, ID, or wildcard
  • Statistical analysis — correlation detection, k-means clustering, z-scores, t-tests, FFT
  • Derived stream diagnostics — inspects derivation graphs, precedents, and dependents to troubleshoot formula issues
  • Dashboard and content navigation — browses content folders, finds dashboards referencing specific components
  • Job monitoring — retrieves job notifications and error details for background tasks
  • Documentation retrieval — searches and retrieves help documents and knowledge base articles
  • Chart rendering — generates chart images for stream data visualization

Read-Only by Default

By default, the assistant inspects and queries your data but cannot modify objects. Administrators can selectively enable write operations under Organization Settings → AI Chat:

  • Data operations: Allow INSERT, UPDATE, DELETE (sample-level)
  • Entity operations: Allow INSERT, UPDATE, DELETE Entity (components)
  • DDL operations: Allow CREATE/ALTER/DROP TABLE, DROP TABLE CASCADE
  • Other tool operations: Allow CREATE/ALTER/DROP/RUN Other Tools

AI Chat permissions are enforced in addition to global GS SQL settings. Both must be enabled for an operation to succeed. All settings default to false (read-only).
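For example, a sample-level write such as the following hypothetical statement (identifiers and syntax shown are illustrative) succeeds only when both the AI Chat setting "Allow INSERT, UPDATE, DELETE (sample-level)" and the corresponding global GS SQL setting are enabled:

UPDATE pump_station.flow_rate
SET value = 0
WHERE time = '2026-02-07T00:00:00Z';

With either setting disabled, the assistant reports the operation as not permitted instead of executing it.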



Built for Reliability
  • Auto-escalation — if a lighter model fails 3 times, the system escalates to a more capable model
  • Parallel tool execution — independent tool calls fire concurrently
  • Error recovery — agents read error messages, correct inputs, and retry (up to 3 attempts per tool call)
  • Context isolation — sub-agent contexts are garbage collected after use
  • Security — all tool calls run under the calling user's security context


Conversation History and Pinning
  • Persistent history — chat history is preserved across sessions; close the browser and pick up where you left off
  • Message pinning — pin important messages to keep them visible as the conversation grows
  • Context continuity — sub-agents receive recent conversation history so they understand the broader context


Designed for AI Agents

The AI Assistant is available as an HTTP API, making it straightforward to integrate with external AI agents and automation pipelines. GroveStreams also exposes MCP servers that allow external agents to interact with GroveStreams data through standardized tool calls.

The platform is built for AI agents to reason over:

  • Templates give agents a discoverable schema
  • GS SQL gives agents one query language for all temporal data
  • Deterministic results are traceable and verifiable


Getting Started
  1. (Org admin) In Organization Settings → AI Chat, review the available LLM profiles and check the ones your users should be able to pick from. Mark one as the org default. Profiles are pre-configured by GroveStreams — you don’t supply API keys.
  2. Open the chat panel in the GroveStreams web application, pick a profile from the dropdown if you want to override the org default, and start asking questions.
  3. Optionally enable write permissions in Organization Settings → AI Chat.

See the FAQ for common questions and configuration details.




Scheduled Agents

Scheduling is paused for the current release. The Schedule tab in the agent editor shows “Coming Soon”, and saving an agent forces hasSchedule=false. The DDL, the result-handling shape, and the concurrency model below are documented so the surface stays stable, but no agents will be cron-dispatched until the Process Queue redesign ships. You can still define agents, run them manually (right‑click → Run), and reference them from ask_grovestreams and the AI Assistant.

Scheduled Agents are LLM-powered jobs that run on a configurable schedule — a first-class job type alongside Runnables, Connectors, and Forecast Models. An agent has a prompt, a schedule, an LLM profile (which AI provider to use), and optional result stream targeting.

When an agent runs, it executes the full agentic loop with tool access — the same tools available to the interactive AI Assistant (GS SQL execution, object search, documentation, etc.). This means scheduled agents can query data, update components, generate reports, and perform any operation the interactive assistant can.

Configuration

  • Prompt: The agent’s instructions. Can reference templates, component IDs, and GS SQL queries.
  • Schedule: References a Cycle (e.g., 'day', 'hour'). Agents run when the cycle fires.
  • LLM Profile: A PROFILE_UID referencing a system-level LLM profile. If omitted, the system default profile is used.
  • Running User: The RBAC context for the agent’s tool calls. Defaults to the agent creator.

Result Handling

The agent’s prompt response is stored in up to four destinations:

  1. lastResultSummary — always stored on the agent object for quick access (last run only).
  2. RDM Stream — if RESULT_COMPONENT_ID and RESULT_STREAM_ID are set, the response is written as a timestamped data point, building a dashboard-ready temporal history of every run.
  3. Email — if schedule email is configured, the response (or error) is emailed on completion.
  4. System Notification — on error, a notification is written for visibility.
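Because RDM stream results are ordinary timestamped samples, they can be queried back with GS SQL. A sketch of reading the last week of results from the Daily Data Quality Check example later on this page (the SELECT syntax shown is illustrative; see the GS SQL Overview for the exact grammar):

SELECT time, value
FROM system_reports.data_quality
ORDER BY time DESC
LIMIT 7;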

Concurrency Limit

When scheduling is enabled, a maximum of 2 scheduled tool agents can execute concurrently per organization. If the limit is reached, the agent’s schedule date is preserved and retried on the next scheduler cycle. This limit gates only the cron-driven scheduler path — it does not apply to interactive AI Assistant chat sessions, MCP ask_grovestreams calls, or manually-run agents.

DDL Reference

Agents are managed via GS SQL DDL: CREATE AGENT, ALTER AGENT, DROP AGENT, RUN AGENT. See the DDL Reference — AI Agents for full syntax and examples.
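Illustrative shapes of the non-CREATE statements (the clauses shown are sketches only; the DDL Reference — AI Agents page has the exact syntax):

RUN AGENT data_quality_check;
ALTER AGENT data_quality_check WITH (MAX_ITERATIONS = 30);
DROP AGENT data_quality_check;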

Example: Daily Data Quality Check

CREATE AGENT data_quality_check
  SCHEDULE 'day'
  PROMPT 'Check all sensor streams for gaps in the last 24 hours. Report any streams with more than 1 hour of missing data.'
  WITH (
    NAME = 'Daily Data Quality',
    RESULT_COMPONENT_ID = 'system_reports',
    RESULT_STREAM_ID = 'data_quality',
    MAX_ITERATIONS = 20
  );



Related: AI Assistant API  |  MCP Servers  |  FAQ — AI Assistant  |  DDL — AI Agents  |  GS SQL Overview  |  Forecasting