Core Concepts

Workflows

A Workflow is a directed graph of nodes that defines how data moves through your AI pipeline. Each node performs a discrete task — fetching data, reasoning with an LLM, transforming output, branching on a condition, or notifying a service — and passes its result to the next node in the chain.

How Workflows Execute

Workflows execute from top to bottom in dependency order. Each node waits for all of its upstream nodes to complete before it starts. When a node finishes, its output is stored and made available to every node connected downstream. If any node fails, execution halts and the run is marked as failed — the error is captured in the execution trace.

Data Source → Transform → AI Assistant → Action

Node Types

Every node on the canvas is one of three types. Understanding what each type does is the key to designing effective workflows.

AI Assistant

agent

The reasoning core of your workflow. Each AI Assistant node is backed by a specific LLM provider and model (e.g. Claude 4.5 Sonnet, GPT-4o). It receives a prompt enriched with upstream data via expressions and produces a text output — plain text or JSON — passed to the next node.

  • Choose any connected LLM provider and model
  • Write a system prompt and inject upstream data using {{NodeLabel.json.field}} expressions
  • Agents that return JSON are auto-parsed — downstream nodes can reference fields directly
  • Configure temperature and max token limits per node
  • Connect MCP Server integrations to give the agent tool-calling capability
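The JSON auto-parsing described above can be pictured with a small sketch. This is illustrative only: the function name and the fence-stripping detail are assumptions, not the actual CipherSense implementation.

```javascript
// Illustrative sketch of auto-parsing an agent's text output.
// parseAgentOutput is a hypothetical name, not a CipherSense API.
function parseAgentOutput(text) {
  // Strip a ```json ... ``` fence if the model wrapped its answer in one.
  const stripped = text
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "")
    .trim();
  try {
    // Valid JSON: downstream nodes can reference {{NodeLabel.json.field}}.
    return { json: JSON.parse(stripped), content: text };
  } catch {
    // Not valid JSON: only {{NodeLabel.content}} is available downstream.
    return { json: null, content: text };
  }
}
```

If parsing succeeds, downstream nodes can address individual fields directly; otherwise the full text remains available as content.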

Integration

data source / enterprise tool / mcp server

Integration nodes connect to the external services you've set up under your project's Integrations tab. They fetch data, send messages, or call APIs — and pass their output downstream as structured context accessible via expressions.

  • Select any saved integration from the node's side panel
  • Configure integration-specific options (e.g. SQL query, Gmail filter, Drive operation)
  • Output is available downstream via {{NodeLabel.json.field}} or {{NodeLabel.content}}
  • Unconfigured nodes are shown with a warning indicator on the canvas
  • Google integrations (Sheets, Drive, Gmail, etc.) use OAuth — you'll be redirected to authorise on first connection

Utility

built-in

Built-in utility nodes handle branching, data transformation, raw input, and human review steps. They require no external integration and are available on all plans.

  • Text Input — provide raw text or a static document as workflow input
  • If/Else Logic — branch the workflow based on a condition evaluated against upstream data
  • Script Runner — execute a sandboxed JavaScript snippet and pass its return value downstream
  • JSON Parser — extract a specific key path from a JSON string produced by an upstream node
  • Human-in-the-Loop — pause execution and wait for a human to approve or review before continuing
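For a feel of what a Script Runner snippet looks like, here is a minimal sketch. The `input` variable is an assumed name for the upstream payload exposed in the sandbox, and the wrapping `run` function is illustrative.

```javascript
// Hypothetical Script Runner snippet: drop rows without a usable email
// address before they reach an AI Assistant. `input` is an assumed
// sandbox variable holding the upstream node's parsed output.
function run(input) {
  const rows = input.rows ?? [];
  return rows.filter((row) => row.email && row.email.trim() !== "");
}
```

The snippet's return value becomes the node's output, addressable downstream via expressions.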

AI Workflow Designer

The AI Workflow Designer lets you describe an automation in plain English and receive a fully-formed, executable workflow graph — no dragging, no wiring, no node-schema knowledge required. It uses one of your own configured LLM providers to generate the graph, so generation requests run on your own API keys.

Describe in chat → AI generates graph → Preview on canvas → Save & refine

The designer supports multi-turn conversation — ask follow-up questions, request changes, or say "add a Slack notification at the end" and it will update the graph in place. Your session persists between visits so you can pick up where you left off.

Opening the Designer

  1. Navigate to your project and open the Workflows tab.
  2. Click the Design with AI button (wand icon) next to the "New Workflow" button — or in the empty state if you have no workflows yet.
  3. The AI Workflow Designer opens as a full-screen overlay with a chat panel on the left and a live canvas preview on the right.
  4. In the header, select the LLM provider and model you want to use. Only providers you've configured under Organization Settings → LLM Providers are shown.

Provider & model selection

Select a provider (e.g. Anthropic, OpenAI, Google) from the first dropdown — this determines which API key is used. Then select any model supported by that provider from the second dropdown. You can freely mix providers: use Claude to design the workflow graph and GPT-4o inside the agent nodes themselves.

Chat → Canvas → Editor

Describe your automation

Type a plain-English description of what you want to automate. You can be brief ("email me a daily news summary") or detailed. The designer may ask up to 2 clarifying questions before generating.

"Scan Gmail for invoices, extract the total amount with Claude, and log each one to a Google Sheet."

Review the canvas preview

The generated workflow appears on the right panel with auto-applied Dagre layout. Each node shows its provider, label, and connections. The chat panel displays a plain-English summary of what was built.

Save & open in editor

Give the workflow a name, choose a project (if you belong to more than one), and click "Save as Workflow". You land directly in the Visual Designer to configure node parameters, connect integrations, and fine-tune prompts.

Refining & Iterating

The AI Designer maintains full conversation context across messages. If the first generated graph isn't quite right, continue the conversation rather than starting over. The current canvas state is automatically sent with each message so the model can make targeted changes.

Useful follow-up prompts

  • "Add a Human-in-the-Loop approval step before the email is sent."
  • "Replace the OpenAI node with Anthropic Claude Sonnet."
  • "Add an If/Else branch — if the score is below 7, skip the calendar booking."
  • "Add a Script Runner node after the JSON Parser to filter out empty rows."
  • "Regenerate the workflow but make the Gmail step read only the last 10 emails."

Session persistence

Your conversation and generated canvas are saved in your browser (localStorage) keyed by project. Close the overlay and reopen it — your session picks up exactly where you left off. Click New session in the header to clear it and start fresh.

Only connected integrations are available

The designer only suggests integration nodes for services you have already connected under the project's Integrations tab — connectors you haven't set up are never offered.

Starter prompts

If you're unsure where to begin, the chat panel shows example prompts when it's empty. Click any to send it instantly.

Using the Visual Designer

The Visual Designer is a drag-and-drop canvas built on React Flow. Open it by navigating to your project, selecting the Workflows tab, and clicking into any workflow.

Add nodes to the canvas

Open the node palette on the left side of the Visual Designer. Browse by category — Data Sources, Enterprise Tools, MCP Servers, and Utilities — or search by name. Drag a node onto the canvas to add it. AI Assistant nodes are added from the top of the palette.

Connect nodes with edges

Drag from the bottom handle of one node to the top handle of another to create a directed edge. Execution flows top-to-bottom along these connections. A node only runs after all its upstream nodes have completed successfully.

Configure each node

Click any node to open its configuration panel on the right. For AI Assistant nodes, write the system prompt and choose the model. For integration nodes, select the saved integration and fill in any node-level options (e.g. a SQL query, Gmail filter, or Drive operation mode).

Reference upstream data with expressions

Use {{NodeLabel.json.field}} and {{NodeLabel.content}} expressions anywhere in prompt templates or node parameter fields to pull in data from earlier nodes. The Available Outputs panel in each node's settings shows every expression available for that node based on its operation — no guessing required.

Save and run

Click "Save Progress" to persist your workflow graph, then click "Execute" to trigger an execution. The canvas shows live status updates as each node runs — yellow pulsing for in-progress, green for success, grey-dimmed for skipped (if/else branch not taken), red for error. Execution results are saved to the workflow's log history.

Expression Engine

Expressions are the primary way to pass data between nodes. Write them anywhere — in AI Assistant prompt templates, integration parameter fields, If/Else conditions, or store output keys. At runtime, CipherSense resolves each expression to the actual value before the node executes.

Expression Syntax

{{NodeLabel.json.field}} — JSON field

Access a specific field from a node's structured output. Use dot notation for nested fields and [0] for array indexes.

Example: {{Read Email.json.from.email}}

{{NodeLabel.content}} — Full content

The complete text output of a node — the full email body, agent response, or file content. Best used to pass rich context to an AI Assistant.

Example: {{Extract Info.content}}

{{store.key}} — Execution store

Read a value saved by the Store Outputs mapping of an earlier node. Useful for sharing a single value (e.g. a calendar slot) across many downstream nodes.

Example: {{store.startTime}}
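Dot-notation paths with [0] array indexes resolve along these lines (a sketch, not the engine's actual code):

```javascript
// Sketch of dot-path resolution with [0]-style array indexes.
function getPath(obj, path) {
  // "items[0].name" → ["items", "0", "name"]
  const parts = path.replace(/\[(\d+)\]/g, ".$1").split(".").filter(Boolean);
  // Walk the object; any missing link yields undefined instead of throwing.
  return parts.reduce((val, key) => (val == null ? undefined : val[key]), obj);
}
```

A missing link anywhere in the path yields `undefined` rather than an error, which matches the engine's "unresolvable expressions never fail the run" behaviour described below.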

Full prompt example — CV screening workflow

// AI Assistant prompt — "Qualify Applicant" node

You are a senior hiring manager. Evaluate this applicant.

From: {{Read Email.json.from.name}} <{{Read Email.json.from.email}}>
CV and application: {{Read Email.content}}
Extracted skills: {{Extract Info.json.skills}}

Rules & behaviour

  • Node labels are case-sensitive — {{Read Email.json.from.email}} and {{read email.json.from.email}} are different.
  • If two nodes share the same label, the second is addressable as "Label 2", "Label 3", etc. Rename nodes to avoid ambiguity.
  • An expression that cannot be resolved renders as an empty string — it never causes execution to fail.
  • Expressions are evaluated at dispatch time, so only already-completed upstream nodes are reachable.
  • You can mix static text and multiple expressions in the same field.
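The rules above (case-sensitive labels, empty string on a miss, mixing static text with multiple expressions) can be sketched as a tiny resolver. Node outputs and labels here are hypothetical, and this is not the engine's actual code.

```javascript
// Illustrative expression resolver: case-sensitive labels, and an
// unresolvable expression renders as an empty string.
function resolveTemplate(template, outputs) {
  return template.replace(/\{\{([^}]+)\}\}/g, (_, expr) => {
    const [label, ...path] = expr.trim().split(".");
    let value = outputs[label]; // exact, case-sensitive label match
    for (const key of path) {
      value = value == null ? undefined : value[key];
    }
    return value == null ? "" : String(value); // miss → empty string
  });
}
```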

Available Outputs panel

Every node's settings panel includes a collapsible Available Outputs section that lists every expression the node will produce, with its type and a description. The list updates automatically based on the selected operation (e.g. switching a Gmail node from Read Inbox to Send Email shows different fields). Click any row to copy the expression to the clipboard.

If/Else Branching

Drop an If/Else Logic utility node anywhere in your workflow to route execution down one of two branches based on a condition. The node exposes two output handles — TRUE (left, green) and FALSE (right, red) — which you wire to the first node of each branch.

Configuration

  • Field — An expression that resolves to the value to test — e.g. {{Qualify.json.qualified}}
  • Operator — equals, not_equals, contains, greater_than, is_empty, matches_regex, and more
  • Value — The value to compare against. Also accepts expressions — e.g. {{store.threshold}}

Skipped nodes

When the If/Else evaluates, every node on the non-taken branch is immediately marked as Skipped — shown dimmed with no border glow on the canvas. Skipped nodes do not execute and do not appear in the execution trace. Downstream nodes that receive both a real output and a skipped output will still run correctly.

// If/Else — qualification gate
Field:    {{Qualify.json.qualified}}
Operator: equals
Value:    yes

// TRUE  → send interview invitation
// FALSE → send rejection email

For Each

The For Each utility node enables fan-out iteration — it takes an array from an upstream node and runs the downstream subgraph once per item, passing each item individually as context. This lets you process rows from a database query, files from a Drive folder, or any list of records through an AI agent one at a time.

Configuration

Items Path (optional) — The key name of the array in the upstream node's output — e.g. rows, results, records. Leave blank to auto-detect from common keys (rows, results, items, records, data, collection).

If the upstream output is itself an array (no wrapping object), the node uses it directly — no configuration needed.
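The auto-detection amounts to checking the candidate keys in order, with explicit configuration and bare arrays taking precedence. A sketch, with the key list taken from the description above; the function name is illustrative:

```javascript
// Sketch of For Each array detection: explicit Items Path wins, then a
// bare array output is used directly, then common keys are scanned.
const COMMON_KEYS = ["rows", "results", "items", "records", "data", "collection"];

function detectItems(output, itemsPath) {
  if (itemsPath) return output[itemsPath];          // explicit config
  if (Array.isArray(output)) return output;         // bare array output
  for (const key of COMMON_KEYS) {
    if (Array.isArray(output[key])) return output[key];
  }
  return undefined;                                 // nothing iterable found
}
```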

Expressions inside the iteration

Inside the iterated subgraph, each node can reference the current item using the For Each node's label:

{{ForEach.json.item}} — current item (object or value)

{{ForEach.json.index}} — zero-based position (0, 1, 2…)

{{ForEach.json.total}} — total number of items

If the item is an object, access its fields with dot notation: {{ForEach.json.item.name}}, {{ForEach.json.item.email}}.

What is the "downstream subgraph"?

The subgraph is every node that is reachable from the For Each node where all of its incoming edges originate from within the subgraph or from the For Each node itself. Nodes that have inputs from outside the subgraph (convergence points) are excluded and run normally in the main execution loop after all iterations complete. This means you can safely merge iteration results back into the main flow.
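In graph terms, membership can be computed by starting from the For Each node and admitting a node only when all of its incoming edges come from nodes already admitted. A sketch of that fixpoint, assuming a simple `{source, target}` edge list (not the engine's actual code):

```javascript
// Sketch: compute the iterated subgraph. A node joins only if every
// incoming edge originates from the For Each node or another member;
// nodes with outside inputs (convergence points) stay in the main flow.
function computeSubgraph(forEachId, edges) {
  const members = new Set([forEachId]);
  let grew = true;
  while (grew) {
    grew = false;
    const candidates = new Set(
      edges.filter((e) => members.has(e.source)).map((e) => e.target)
    );
    for (const node of candidates) {
      if (members.has(node)) continue;
      const incoming = edges.filter((e) => e.target === node);
      if (incoming.every((e) => members.has(e.source))) {
        members.add(node);
        grew = true;
      }
    }
  }
  members.delete(forEachId); // the For Each node itself is the driver
  return members;
}
```

A convergence point with one edge from inside the iteration and one from outside is excluded, so it runs once, after all iterations, in the main loop.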

End-to-end example

1 — Fetch rows

A PostgreSQL node runs a query and returns { rows: [...], rowCount: 50 }. Connect its output to the For Each node.

2 — Iterate

The For Each node auto-detects rows as the array. For each row it runs the downstream agent with {{ForEach.json.item.email}} and {{ForEach.json.item.name}} in the prompt.

3 — Collect results

After all 50 iterations complete, the For Each node's output contains { iterations: [...], total: 50 }. A downstream Summary Agent can reference {{ForEach.json.iterations}} to process all results together.

// For Each node output after all iterations

{
  "total": 50,
  "iterations": [
    {
      "index": 0,
      "item": { ... },
      "results": { /* outputs of each subgraph node */ }
    },
    ...
  ]
}

Execution Store

The execution store is a flat key-value map that persists for the lifetime of a single run. Use it to share a value produced by one node across many downstream nodes without chaining long expression paths.

How to use it

  1. In the node's settings panel, open Store Outputs and add a mapping: set a Key (e.g. startTime) and a Path (the dot-path within the node's output, e.g. startTime).
  2. Any downstream node can then reference {{store.startTime}} in its fields.
  3. For most cases, the direct expression syntax ({{NodeLabel.json.field}}) is simpler — only use the store when the same value is needed by many nodes or across If/Else branches.
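A Store Outputs mapping is essentially "read a dot-path from this node's output, write it to the store under the key". A sketch (function and field names are illustrative):

```javascript
// Sketch: apply a node's Store Outputs mappings to the run-scoped store.
function applyStoreMappings(store, nodeOutput, mappings) {
  for (const { key, path } of mappings) {
    const value = path
      .split(".")
      .reduce((val, k) => (val == null ? undefined : val[k]), nodeOutput);
    store[key] = value; // later readable as {{store.<key>}}
  }
  return store;
}
```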

Execution & Monitoring

When a workflow runs, each node reports its status back to the canvas in real time. Node borders and labels update as execution progresses. All runs are persisted to the project's log history, accessible from the Logs tab.

  • Running — node is currently executing. The border pulses yellow.
  • Success — node completed successfully. Duration is shown below the label.
  • Slow (>10s) — node took more than 10 seconds. Border turns orange as a performance hint.
  • Bottleneck (>30s) — node took more than 30 seconds. Border turns red with a ⚠️ SLOW indicator.
  • Error — node failed. Open the execution trace to see the error message.
  • Skipped — node was on the non-taken branch of an If/Else split. Shown dimmed with no border glow.

Execution Trace

Click Trace on any log entry to open the step-by-step execution trace. For each node you can see its status, duration, the resolved prompt sent to the LLM, and the raw output returned. JSON outputs are syntax-highlighted for readability. This is the primary tool for debugging unexpected agent behaviour.

Node Settings Reference

Every node shares a set of common settings in addition to its type-specific parameters.

Available Outputs

A collapsible panel listing every expression the node produces for the currently selected operation. Updates dynamically when the operation is changed. Click a row to copy the expression.

Store Outputs

Map dot-paths from this node's output into the execution store under named keys. Stored values are accessible via {{store.key}} in any downstream node, including across If/Else branches.

Execution Timeout

Maximum time in seconds the node is allowed to run. If exceeded, the node is cancelled and the execution fails. Default is 300 seconds.

Retry on Fail

When enabled, the node will be retried up to the configured number of times with a fixed wait between attempts. Off by default. Useful for transient API or network errors.
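The retry behaviour reduces to a bounded loop. A simplified synchronous sketch; the fixed wait between attempts is elided, and the function name is illustrative:

```javascript
// Simplified retry sketch: re-run a failing node up to `maxRetries`
// extra times. The fixed wait between attempts is omitted for brevity.
function runWithRetry(execute, maxRetries) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return execute(attempt);
    } catch (err) {
      lastError = err; // transient failure: try again
    }
  }
  throw lastError; // exhausted all attempts
}
```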

Human-in-the-Loop Nodes

Drop a Human-in-the-Loop utility node anywhere in your workflow to insert a mandatory human checkpoint. When execution reaches this node, the workflow pauses and the run is marked as awaiting_input. A team member receives an in-app notification and email, reviews the upstream context, and approves or rejects before the workflow resumes.

Use Human-in-the-Loop nodes before high-stakes actions such as sending bulk emails, writing to production databases, or posting to external platforms. Configure a specific reviewer in the node settings, or leave it to default to the workflow owner.

Triggers

Every workflow has a trigger mode that determines when and how it starts. Choose the mode that fits your use case — you can change it at any time from the workflow's settings page.

On-Demand

manual trigger

The default mode. The workflow only runs when explicitly triggered — nothing happens automatically in the background. Use this for workflows you want full control over.

  • Click "Execute" in the Visual Designer to start a run immediately.
  • Trigger via the API using POST /api/workflows/run with your workflow ID and an API key.
  • Suitable for interactive workflows, one-off tasks, and workflows driven by user action.
  • Triggering a workflow that is already running returns 409 Conflict — only one concurrent run is allowed per workflow.

// Trigger on-demand via API
POST /api/workflows/run
Authorization: Bearer <api-key>

{ "workflowId": "wf_..." }

Schedule

time-based trigger

Set the workflow to run automatically on a recurring schedule. CipherSense checks active scheduled workflows every minute and fires any that are due. Configure the schedule from the workflow's settings page under Trigger.

  • Interval — run every N minutes. Set the interval (minimum 5, maximum 60 minutes) in the schedule config. Best for near-real-time polling workflows.
  • Daily — run once per day at a configured time. Ideal for overnight reports, daily digests, or morning summaries.
  • Weekly — run once per week on a chosen day and time. Use for weekly roll-ups, KPI reports, or recurring outreach tasks.

Scheduled workflows must have status: active to be picked up by the scheduler. Drafts and paused workflows are skipped. If a scheduled run is triggered while a previous run is still active, the new run is skipped until the previous one completes.
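For an interval trigger, the per-minute scheduler check reduces to a predicate like the following sketch. Field names such as `status`, `running`, `intervalMinutes`, and `lastRunAt` are assumptions for illustration, not the real schema.

```javascript
// Sketch of the per-minute scheduler check for an interval trigger.
// Field names (status, running, intervalMinutes, lastRunAt) are assumed.
function isDue(workflow, now) {
  if (workflow.status !== "active") return false; // drafts/paused skipped
  if (workflow.running) return false;             // no overlapping runs
  if (!workflow.lastRunAt) return true;           // never run yet
  const elapsedMin = (now - workflow.lastRunAt) / 60000;
  return elapsedMin >= workflow.intervalMinutes;
}
```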

Webhook

event-driven trigger

Enable a unique webhook URL for the workflow. Any external system — a form, a third-party automation, your own backend, or a cron job — can POST to that URL to start a run instantly. The request body is injected into the workflow as input data.

  • No authentication required — the unique token in the URL is the credential.
  • JSON body is injected into every source node and addressable via expressions.
  • Returns 202 Accepted immediately; execution runs in the background.
  • Rate-limited to 30 triggers per minute per webhook.

See the full setup guide and payload reference in the Webhook section below.

Webhook Triggers

Every workflow can be given a unique webhook URL. When an HTTP request is sent to that URL, CipherSense creates a new execution automatically and injects the request payload into the first node — no manual "Run" click required.

Enabling a Webhook

  1. Open the Visual Designer for any workflow.
  2. Scroll to the bottom of the left palette sidebar to find the "Webhook Trigger" section.
  3. Click "Enable". CipherSense generates a unique token and shows the full webhook URL.
  4. Copy the URL and use it in any external system — a form, a cron job, a third-party automation, or your own backend.
  5. To rotate credentials, click "Regenerate Token". The old URL stops working immediately.

Request

POST /api/webhooks/<token>
Content-Type: application/json

{
  "customer": "Alice",
  "amount": 149.99,
  "order": {
    "id": "ORD-001",
    "items": 3
  }
}

Returns 202 Accepted with an executionId. Execution runs asynchronously in the background.

Response

// 202 Accepted
{
  "executionId": "exec_...",
  "status": "running",
  "message": "Workflow triggered via webhook."
}

Use the executionId to look up the run in the workflow's Logs tab.

How payload data flows through the workflow

The JSON body you POST is injected into every source node — any node with no incoming edges. Its top-level fields become directly addressable expressions without needing a node label prefix. Nested objects are accessed with dot notation.

In the first (source) node

Reference top-level payload fields directly by name. Nested fields use dot notation.

// AI Assistant prompt
New order from {{customer}}.
Amount: {{amount}}.
Order ID: {{order.id}}.
Items: {{order.items}}.
Draft a confirmation email.

In downstream nodes

Downstream nodes access webhook data via the first node's output using the standard expression syntax — {{NodeLabel.json.field}} or {{NodeLabel.content}}.

// Prompt in a second AI Assistant
Email draft: {{EmailDrafter.content}}

// Or pass original fields forward
Customer: {{EmailDrafter.json.customer}}

Only source nodes receive the raw payload

Webhook field injection ({{customer}}, {{amount}} etc.) only works in nodes with no incoming edges. Nodes further downstream must reference upstream node outputs using the full {{NodeLabel.json.field}} syntax. Check the Available Outputs panel in any node's settings to see every expression available to that specific node.
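The injection rule can be sketched as: merge the raw payload into the expression scope of source nodes only. This is illustrative, not the actual engine; function and variable names are assumptions.

```javascript
// Sketch: webhook payload fields are only addressable in source nodes
// (nodes with no incoming edges); everything else sees node outputs only.
function expressionScope(nodeId, edges, payload, nodeOutputs) {
  const isSource = !edges.some((e) => e.target === nodeId);
  // Source nodes get the raw payload fields merged in; downstream nodes
  // must go through {{NodeLabel.json.field}} / {{NodeLabel.content}}.
  return isSource ? { ...nodeOutputs, ...payload } : { ...nodeOutputs };
}
```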

The token is the credential

Webhook URLs do not require an API key or session — the token in the URL authenticates the request. Keep it secret. If you suspect it has been exposed, click Regenerate Token to invalidate the old URL instantly.

Health check

Send a GET request to the same webhook URL to confirm it is active without triggering a run. Returns 200 with { "status": "active" } when enabled, or 403 when disabled.

Blueprint Export / Import

Blueprints are portable JSON snapshots of a workflow's graph — nodes, edges, prompts, and parameters — without any project-specific identifiers or execution history. You can export any workflow as a blueprint file, share it across teams, check it into version control, and import it into any project to create a new workflow instantly.

Exporting a Blueprint

There are two places to export a workflow as a blueprint:

From the Project page

  1. Open your project and go to the Workflows tab.
  2. Find the workflow card you want to export.
  3. Click the ⋯ overflow menu on the card and select "Export Blueprint".
  4. A JSON file named <workflow-name>_blueprint.json is downloaded immediately.

From the Visual Designer

  1. Open the workflow in the Visual Designer.
  2. Click the "Export" button (download icon) in the canvas toolbar — it sits between the Save and Validate buttons.
  3. The blueprint JSON is downloaded instantly without leaving the canvas.

Blueprints contain no credentials, integration tokens, or execution history. They are safe to share publicly or commit to a repository. Integration nodes retain their provider identifier (e.g. google-sheets) but not the linked integration record — a collaborator importing the blueprint must connect their own integration.

Importing a Blueprint

Import a blueprint to create a new workflow in any project from a previously exported JSON file. All node positions, prompts, expressions, and parameters are restored exactly as they were — only node IDs are regenerated to avoid collisions with existing workflows in the project.

  1. Open your project and go to the Workflows tab.
  2. Click "Import Blueprint" in the page header.
  3. Select a valid <name>_blueprint.json file from your filesystem.
  4. CipherSense validates the file structure and creates a new workflow. You are taken to the new workflow automatically.
  5. Open the Visual Designer and connect any integration nodes to your project's saved integrations.

What is preserved

  • Node layout and positions on the canvas
  • All edge connections and If/Else branch wiring
  • AI Assistant prompts, provider, and model selection
  • Integration node provider type and operation settings
  • Expression references between nodes
  • Node labels and configuration parameters

What is not carried over

  • Linked integration records (must reconnect in the target project)
  • Execution history and logs
  • Webhook tokens and trigger schedule configuration
  • Internal node IDs (regenerated to avoid conflicts)

Blueprint File Format

A blueprint is a plain JSON file with a fixed schema. The top-level fields are:

// <workflow-name>_blueprint.json
{
  "blueprint_version": "1.0",
  "exported_at": "2025-06-01T14:32:00Z",
  "name": "CV Screening Pipeline",
  "description": "Reads applicant emails and scores CVs.",
  "graph_data": {
    "nodes": [
      {
        "id": "node_abc123",
        "type": "agent", // "agent" | "integration" | "if-else"
        "position": { "x": 200, "y": 150 },
        "data": {
          "label": "Qualify Applicant",
          "provider": "openai",
          "model_id": "gpt-4o-mini",
          "params": { "prompt": "Evaluate this applicant…" },
          "itemId": null // always null in blueprints
        }
      }
    ],
    "edges": [
      {
        "id": "edge_abc_xyz",
        "source": "node_abc123",
        "target": "node_xyz456"
      }
    ]
  }
}

blueprint_version

Schema version — currently always "1.0". CipherSense will reject files with an unrecognised version.

graph_data.nodes

Array of node objects. Each node has id, type, position, and a data object containing the label, provider, model, and all configuration params.

graph_data.edges

Array of edge objects linking nodes by their IDs. If/Else edges carry a data.branch field set to "TRUE" or "FALSE".

Validation on import

CipherSense validates every blueprint before creating a workflow. It checks that the version field is recognised, that all nodes have a valid type, that every edge references existing node IDs, and that the graph contains no cycles. Files that fail validation show a descriptive error — no partial workflow is created.
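The cycle check on import can be done with a standard topological sort (Kahn's algorithm). Below is a sketch consistent with the validation rules above; the function name and return shape are assumptions, not CipherSense's actual code.

```javascript
// Sketch of blueprint graph validation: every edge must reference a
// known node, and Kahn's algorithm confirms the graph is acyclic.
function validateGraph(nodes, edges) {
  const ids = new Set(nodes.map((n) => n.id));
  if (!edges.every((e) => ids.has(e.source) && ids.has(e.target))) {
    return { valid: false, error: "edge references unknown node" };
  }
  // Count incoming edges per node.
  const indegree = new Map(nodes.map((n) => [n.id, 0]));
  for (const e of edges) indegree.set(e.target, indegree.get(e.target) + 1);
  // Repeatedly remove nodes with no remaining incoming edges.
  const queue = nodes.filter((n) => indegree.get(n.id) === 0).map((n) => n.id);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited++;
    for (const e of edges) {
      if (e.source !== id) continue;
      indegree.set(e.target, indegree.get(e.target) - 1);
      if (indegree.get(e.target) === 0) queue.push(e.target);
    }
  }
  // If any node was never removed, a cycle must exist.
  return visited === nodes.length
    ? { valid: true }
    : { valid: false, error: "graph contains a cycle" };
}
```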

Tips & Best Practices

Start with the AI Designer

Describe your workflow in plain English to generate a working starting graph. Then open the Visual Designer to wire up integrations, refine prompts, and add edge cases. Much faster than building from scratch.

Keep nodes focused

Each node should do one thing well. Avoid overloading a single AI Assistant with multiple responsibilities — chain specialised agents instead.

Use expressions, not copy-paste

Reference upstream data with {{NodeLabel.json.field}} rather than hard-coding values. The Available Outputs panel in every node lists every expression you can use.

Use Script Runner for transforms

When upstream data needs reshaping before it reaches an agent (e.g. filtering an array or computing a value), use a Script Runner node rather than asking the LLM to do it.

Enable Retry for external calls

Integration nodes that call external APIs can fail transiently. Enable Retry on Fail with 2–3 tries and a 1000ms wait to make your workflow resilient to brief outages.

Watch for bottlenecks

Nodes that run consistently orange or red are slowing down the whole workflow. Optimise the query, switch to a faster model, or split the node's work.

Test integrations before wiring

Use the "Save & Test" button on the Integrations page to verify connectivity before building the workflow that depends on it.

Ready to build your first workflow?

Describe your automation in plain English with the AI Designer, or jump straight into the Visual Designer to wire nodes together manually.