Compare commits

...

55 Commits

Author SHA1 Message Date
Kyle Carberry 56ff5dfded feat(coderd/database): add automations database schema and queries
Adds the database layer for the chat automations feature:

Schema (migration 000454):
- chat_automations table with status, rate limits, instructions, MCP config
- chat_automation_triggers table (webhook + cron types)
- chat_automation_events table for audit trail
- Indexes for performance and unique constraint on (owner, org, name)
- automation_id FK on chats table

SQL queries:
- chatautomations.sql: CRUD with authorize_filter support
- chatautomationtriggers.sql: CRUD + GetActiveChatAutomationCronTriggers JOIN query
- chatautomationevents.sql: insert, list, count windows, purge

RBAC:
- chat_automation resource with CRUD actions
- dbauthz wrappers for all 18 query methods

Also adds the `LockIDChatAutomationCron` advisory lock constant.
2026-04-01 14:10:38 +00:00
Kyle Carberry 4b52656958 fix(site): ensure Thinking indicator appears regardless of WebSocket event ordering (#23884)
The "Thinking..." indicator intermittently failed to render after
submitting a message. The behavior depended on the order of events
within a single WebSocket frame.

## Root Cause

`flushMessageParts()` was called before **all** non-`message_part`
events in the batch loop. When the server sent `[message_part,
status:"running"]` in the same WebSocket frame:

1. `message_part` → pushed to `partsBuf`
2. `status:"running"` → `flushMessageParts()` applied parts **first** →
`streamState` became non-null → then `chatStatus` set to `"running"`
3. Subscriber saw `streamState != null && chatStatus == "running"` →
`selectIsAwaitingFirstStreamChunk` returned `false` → no "Thinking..."

When events arrived in the reverse order (`[status:"running",
message_part]`), the indicator worked because the status was set before
parts were applied.

## Fix

- Move `flushMessageParts()` to only fire before `message` and `error`
events (which need prior parts visible)
- Add `discardBufferedParts()` for events that clear stream state
(`status:"pending"/"waiting"`, `retry`) so the deferred `setTimeout(0)`
flush doesn't re-populate cleared state
- Status changes are now always applied before parts within a batch, and
the deferred flush gives React one render cycle to show "Thinking..."

| Event | Flush? | Rationale |
|---|---|---|
| `message` | YES | Durable commit must include all stream parts |
| `error` | YES | Partial output should be visible alongside error |
| `status` | NO | Status must be set before parts so "starting" phase renders |
| `retry` | DISCARD | Retry clears stream state; flushing would re-populate it |
| `queue_update` | NO | Doesn't interact with stream state |

## Tests (written first, failing before fix)

1. **"shows starting phase when message_part arrives before
status:running in same batch"** — the exact bug scenario
2. **"shows starting phase when status:running arrives before
message_part in same batch"** — verifies the "good" order still works
3. **"discards buffered parts when status transitions to pending"** —
verifies parts don't leak through pending transitions

All tests are deterministic (fake timers, no race conditions).
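A minimal TypeScript sketch of the per-event policy in the table above (names like `flushMessageParts`, `discardBufferedParts`, and `partsBuf` follow the commit text; the state shape is simplified, and the real code additionally defers a final flush via `setTimeout(0)`):

```typescript
// Simplified sketch of the selective flush policy. Assumption: state is a
// plain object; the real store and the deferred setTimeout(0) flush are
// omitted for clarity.
type StreamEvent =
	| { type: "message_part"; part: string }
	| { type: "message" }
	| { type: "error" }
	| { type: "status"; status: "pending" | "waiting" | "running" }
	| { type: "retry" }
	| { type: "queue_update" };

interface StreamStore {
	partsBuf: string[];
	streamState: string[] | null;
	chatStatus: string;
}

// Apply buffered parts to visible stream state (flush).
function flushMessageParts(s: StreamStore): void {
	if (s.partsBuf.length === 0) return;
	s.streamState = [...(s.streamState ?? []), ...s.partsBuf];
	s.partsBuf = [];
}

// Drop buffered parts for events that clear stream state, so a deferred
// flush cannot re-populate the cleared state.
function discardBufferedParts(s: StreamStore): void {
	s.partsBuf = [];
}

function applyBatch(s: StreamStore, batch: StreamEvent[]): void {
	for (const ev of batch) {
		switch (ev.type) {
			case "message_part":
				s.partsBuf.push(ev.part);
				break;
			case "message": // durable commit: must include all stream parts
			case "error": // partial output stays visible alongside the error
				flushMessageParts(s);
				break;
			case "status":
				if (ev.status === "pending" || ev.status === "waiting") {
					discardBufferedParts(s);
					s.streamState = null;
				}
				// Status is applied while parts are still buffered, so the
				// "starting" phase can render.
				s.chatStatus = ev.status;
				break;
			case "retry":
				discardBufferedParts(s);
				s.streamState = null;
				break;
			case "queue_update":
				break; // doesn't interact with stream state
		}
	}
}

// The bug scenario: message_part before status:"running" in one batch.
const s: StreamStore = { partsBuf: [], streamState: null, chatStatus: "" };
applyBatch(s, [
	{ type: "message_part", part: "hi" },
	{ type: "status", status: "running" },
]);
// streamState is still null when chatStatus becomes "running", so the
// "Thinking..." selector can fire before the deferred flush applies parts.
const isAwaitingFirstStreamChunk =
	s.streamState === null && s.chatStatus === "running";
```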

<details><summary>Implementation plan & decision log</summary>

### Why not reorder events within the batch?
Reordering would change the semantic ordering of events from the server,
which could have subtle side effects. The simpler approach is to be
selective about when parts are flushed.

### Why discard (not flush) before pending/waiting/retry?
These events clear `streamState`. If parts were flushed before the
clear, they'd be visible for one frame then disappear. If the deferred
flush ran after the clear, it would re-populate the state. Discarding is
the only correct behavior.

### Why keep flush before error?
Errors should surface partial output so the user can see what the agent
was doing when it failed.

</details>
2026-04-01 07:45:38 -04:00
Danielle Maywood f5b98aa12d fix: stabilize flaky visual stories (#23893) 2026-04-01 11:51:06 +01:00
Cian Johnston 7ddde0887b feat(site): force-enable kyleosophy on dev.coder.com (#23892)
- Force-enable Kyleosophy on `dev.coder.com` via hostname check
- Toggle shows as checked + disabled with "Kyleosophy is mandatory on
`dev.coder.com`"
- `isKylesophyForced()` exported for UI and testability
- Tests for forced/non-forced hostname behavior
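A tiny sketch of the exported check (the `isKylesophyForced` name and the `dev.coder.com` hostname come from the commit; the `Set`-based form and the toggle-state helper are assumptions):

```typescript
// Hypothetical shape of the forced-hostname check; only the function name
// and the hostname are taken from the commit text.
const FORCED_HOSTNAMES = new Set(["dev.coder.com"]);

function isKylesophyForced(hostname: string): boolean {
	return FORCED_HOSTNAMES.has(hostname);
}

// UI sketch: when forced, the toggle renders checked and disabled.
function toggleState(hostname: string, userPref: boolean) {
	const forced = isKylesophyForced(hostname);
	return { checked: forced || userPref, disabled: forced };
}
```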

> 🤖 Written by a Coder Agent. Reviewed by a human.
2026-04-01 11:39:04 +01:00
Cian Johnston bec426b24f feat(site): add Kyleosophy alternative completion chimes (#23891)
- Add "Enable Kyleosophy" toggle to Settings > Behavior
- When enabled, replaces standard completion chime with random Kyle
sound clips
- Ships 8 alternative `.mp3` files as static assets (~82KB total)
- localStorage preference (`agents.kyleosophy`), defaults to off
- Pauses orphaned Audio elements on sound URL change to prevent overlap

<details><summary>Review findings addressed</summary>

- **P2** Stale JSDoc on `playChimeAudio` — updated to reflect
parameterized behavior
- **P3** Overlapping audio on rapid completions — added
`chimeAudio?.pause()` before replacement
- **P3** Test ordering dependency — pinned `Math.random` for
determinism, documented cache behavior
- **Nit** Setter naming — `setLocalKyleosophy` → `setKylesophyLocal`
- Toggle moved to bottom of Behavior page per product request
- Description changed to "IYKYK" per product request

</details>
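The pause-before-replace fix for overlapping chimes can be sketched as follows (`playChimeAudio` and the module-level audio handle are named in the commit; injecting an audio factory here is purely for illustration and testability):

```typescript
// Hedged sketch of the overlap fix: pause the previous Audio element
// before swapping to a new sound URL.
type ChimeAudio = { pause(): void; play(): void };

let chimeAudio: ChimeAudio | null = null;

function playChimeAudio(
	url: string,
	createAudio: (u: string) => ChimeAudio,
): void {
	// Pause any still-playing element before replacement, so rapid
	// completions don't layer clips on top of each other.
	chimeAudio?.pause();
	chimeAudio = createAudio(url);
	chimeAudio.play();
}
```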

> 🤖 Written by a Coder Agent. Reviewed by a human.
2026-04-01 10:06:01 +00:00
Cian Johnston d6df78c9b9 chore: remove racy ChatStatusPending assertions after CreateChat (#23882)
Removes 6 fragile `require.Equal(t, codersdk.ChatStatusPending,
chat.Status)` assertions from chat relay and creation tests.

**Root cause**: In HA tests with two replicas sharing the same DB, the
worker can acquire a just-created chat (flipping `pending → running` via
`AcquireChats`) before the HTTP response reaches the test. All affected
tests already synchronize via `require.Eventually` waiting for `running`
status, making the initial assertion both redundant and racy.

- Remove 5 assertions in `enterprise/coderd/exp_chats_test.go` (all
`TestChatStreamRelay` subtests)
- Remove 1 assertion in `coderd/exp_chats_test.go` (`TestPostChats`)
- An existing comment in `TestPostChats/Success` already documents this
exact race

Fixes flake:
https://github.com/coder/coder/actions/runs/23807597632/job/69385425724

> 🤖 Written by a Coder Agent. Will be reviewed by a human.
2026-04-01 10:00:50 +01:00
Danielle Maywood 19390a5841 fix: resolve TestScheduleOverride/extend flake caused by timezone hour boundary race (#23830) 2026-04-01 07:53:04 +01:00
Jake Howell 2d03f7fd3d fix: resolve rendering issues with GFM alert boxes in <DynamicParameter /> (#22241)
Closes #22189

GFM alerts (e.g., `> [!IMPORTANT]`) in Markdown content failed to render
when the alert body contained inline formatting like `**bold**`,
`*italic*`, or `` `code` ``. The alert marker and subsequent text were
merged into a single string node by the parser, causing the type
detection to fail and fall back to a plain blockquote.

Additionally, multi-line alert content (`> line one\n> line two`) lost
its line breaks — all lines collapsed into one.

- Split the alert marker from trailing content in shared string nodes so
type detection works with inline formatting
- Preserve embedded newlines as `<br/>` elements to match GitHub's GFM
alert rendering
- Wrap plain-text children instead of splitting on `\n` to avoid
stripping newline information early

<img width="447" height="187" alt="image"
src="https://github.com/user-attachments/assets/d2fa3495-0b31-483c-97d8-12fed6819e24"
/>
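The marker-splitting step can be sketched roughly like this (the regexp and return shape are assumptions, not the actual remark plugin code):

```typescript
// Separate the GFM alert marker from trailing content that the parser
// merged into one string node, so type detection still works when inline
// formatting follows the marker.
const ALERT_RE = /^\[!(NOTE|TIP|IMPORTANT|WARNING|CAUTION)\]\s*/;

function splitAlertMarker(
	text: string,
): { kind: string; rest: string } | null {
	const m = ALERT_RE.exec(text);
	if (!m) return null; // not an alert: fall back to a plain blockquote
	return { kind: m[1].toLowerCase(), rest: text.slice(m[0].length) };
}
```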
2026-04-01 17:20:55 +11:00
Ethan 153a66b579 fix(site/src/pages/AgentsPage): confirm active agent archive (#23887)
Add a confirmation dialog before archiving an agent that is actively
running from the Agents UI.

This PR came about as feedback on PR 23758:
https://github.com/coder/coder/pull/23758#issuecomment-4160424938.
Active agents now require confirmation before archive interrupts the
current run, while inactive agents keep the existing one-click archive
behavior.

<img width="450" height="242" alt="image"
src="https://github.com/user-attachments/assets/98ce6978-d2d6-440b-9841-3806038556ee"
/>
2026-04-01 15:34:21 +11:00
Ethan 5cba59af79 fix(coderd): unarchive child chats with parents (#23761)
Unarchiving a root chat now restores descendant chats in the database
and emits lifecycle events for every affected chat so passive sessions
converge without a full refetch.

This keeps archive and unarchive symmetric at both the data and
watch-stream layers by returning the affected chat family from the
database, using those post-update rows for chatd pubsub fanout, and
covering descendant lifecycle delivery with a watch-level regression
test.

Closes #23666
2026-04-01 15:30:25 +11:00
Jeremy Ruppel 1d16ff1ca6 fix(site): sessions list and timeline polish (#23885)
- Fix prompt table collapsing and improper sizing
- Make pretty much everything `text-sm` and `font-normal`
- Add model filter
- Back button on session threads page now navigates back instead of
going straight to `/aibridge/sessions`

---------

Co-authored-by: Jake Howell <jake@hwll.me>
2026-04-01 14:42:08 +11:00
Ethan b86161e0a6 test: fix TestServer_X11_EvictionLRU hang on fish shell (#23838)
`TestServer_X11_EvictionLRU` hangs forever when the developer's login
shell is `fish`. This is the only test in the repo that breaks on fish,
and it meant I couldn't run `make test` or similar without it blocking
indefinitely.

The test uses `sess.Shell()` to start interactive shell sessions, which
causes the SSH server to run the user's login shell directly (`fish
-l`). Fish buffers all piped stdin to EOF before executing any of it, so
the test's `echo ready-0\n` write never gets processed — fish sits
waiting for the pipe to close, and the test sits waiting for the echo
response.

The fix is a one-line change: `sess.Shell()` → `sess.Start("sh")`. The
test is exercising X11 LRU eviction, not shell behavior, so using `sh`
explicitly is both correct and shell-agnostic. The DISPLAY environment
variable is set identically either way since the x11-req handler runs
before `sessionStart`.
2026-04-01 12:31:22 +11:00
Cian Johnston a164d508cf fix(coderd/x/chatd): gate control subscriber to ignore stale pubsub notifications (#23865)
Fixes flaky `TestOpenAIReasoningWithWebSearchRoundTripStoreFalse` and
`TestOpenAIReasoningWithWebSearchRoundTrip`.

## Changes

- Gate the `processChat` control subscriber's cancel callback behind a
`chan struct{}` that is closed after publishing `"running"` status
- Add `TestGatedControlCancel` with 4 subtests exercising the gate logic

<details>
<summary>Root cause analysis</summary>

`SendMessage` publishes a `"pending"` notification on
`chat:stream:<chatID>` via PostgreSQL `NOTIFY`. `processChat` subscribes
to the same channel for control signals. Due to async NOTIFY delivery,
the `"pending"` notification can arrive at the control subscriber
**after** it registers its queue — even though it was published
**before**. `shouldCancelChatFromControlNotification("pending")` returns
`true`, immediately self-interrupting the processor before it does any
work.

The fix gates the cancel callback behind a closed channel. The channel
is closed after `processChat` publishes `"running"` status, so stale
notifications from before initialization are harmlessly ignored.
`close()` provides a happens-before guarantee in the Go memory model.
</details>
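The gate pattern can be sketched as follows (TypeScript for illustration; the actual fix is Go, where `close()` on a `chan struct{}` supplies the cross-goroutine happens-before guarantee that a plain boolean would not):

```typescript
// Illustrative gate: cancel-on-control-notification is ignored until the
// processor has published "running". A boolean stands in for the closed
// channel of the real implementation.
class ControlCancelGate {
	private open = false;
	cancelled = false;

	// Called by processChat right after it publishes "running" status.
	markRunning(): void {
		this.open = true;
	}

	// Control-subscriber callback for notifications on chat:stream:<chatID>.
	onNotification(status: string): void {
		if (!this.open) return; // stale pre-initialization notification: ignore
		if (status === "pending" || status === "waiting") {
			this.cancelled = true; // would cancel the processChat context
		}
	}
}
```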

> 🤖 Written by a Coder Agent. Reviewed by a human.
2026-03-31 22:55:20 +01:00
Kayla はな b9f140e53e chore: remove Language objects (#23866) 2026-03-31 15:26:59 -06:00
Jeremy Ruppel 7f7b13f0ab fix(site): share AI Bridge entitlement/permissions logic (#23834)
Introduces a new `getAIBridgePermissions` method that all AI Bridge
pages can use to restrict access/paywall. Also adds the paywall and
alert to the session threads page bc I totes forgot.

---------

Co-authored-by: Jake Howell <jacob@coder.com>
2026-03-31 17:19:46 -04:00
Michael Suchacz e2bbd12137 test(coderd/x/chatd): remove flaky OpenAI round-trip tests (#23877) 2026-03-31 17:04:56 -04:00
Danielle Maywood e769d1bd7d fix(site): update story play functions after HelpTooltip→HelpPopover migration (#23876) 2026-03-31 21:50:05 +01:00
Jeremy Ruppel cccb680ec2 chore: remove shared workspaces beta badge (#23873) 2026-03-31 16:25:42 -04:00
Danielle Maywood e8fb418820 fix(site): delay desktop VNC connection until tab is selected (#23861) 2026-03-31 18:42:36 +01:00
Kyle Carberry 2c5e003c91 refactor(site): use hover popover for context indicator with nested skill tooltips (#23870)
Replaces the tooltip-inside-tooltip approach for the context usage
indicator with a hover-based Popover. Skill descriptions now appear as
nested tooltips to the right, matching the ModelSelector pattern.

**Before**: Tooltip with inline skill descriptions (truncated, janky
nested tooltips)
**After**: Popover opens on hover, skill names listed cleanly,
descriptions appear to the right on hover

- Popover opens on `mouseEnter`, closes after 150ms delay on
`mouseLeave`
- `onOpenAutoFocus` prevented to avoid stealing chat input focus
- Mobile keeps tap-to-toggle Popover behavior
- Skill rows get subtle `hover:bg-surface-tertiary` highlight
- `TooltipProvider` with `delayDuration={300}` wraps skill items (same
as ModelSelector)
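The hover-open / delayed-close behavior amounts to a 150ms grace period before closing (a sketch; the names here are illustrative, not the Radix Popover API):

```typescript
// Popover opens immediately on mouseEnter; mouseLeave only schedules the
// close, so moving the pointer between trigger and content doesn't flicker
// the popover shut.
function createHoverPopover(delayMs = 150) {
	let open = false;
	let closeTimer: ReturnType<typeof setTimeout> | undefined;
	return {
		isOpen: () => open,
		onMouseEnter() {
			clearTimeout(closeTimer); // cancel a pending close on re-entry
			open = true;
		},
		onMouseLeave() {
			closeTimer = setTimeout(() => {
				open = false;
			}, delayMs);
		},
	};
}
```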
2026-03-31 13:40:26 -04:00
code-qtzl f44a8994da fix(site): improve keyboard navigation in help popovers (#23374) 2026-03-31 11:10:33 -06:00
Yevhenii Shcherbina 84b94a8376 feat: add chatgpt support for aibridge proxy (#23826)
Add ChatGPT support for AIBridgeProxy
2026-03-31 12:54:38 -04:00
Cian Johnston 2a990ce758 feat: show friendly alert for missing agents-access role (#23831)
Replaces the generic red `ErrorAlert` ("Forbidden.") with a proactive
permission check and friendly info alert when a user lacks the
`agents-access` role.

- Add `createChat` permission check to `permissions.json` using
`owner_id: "me"`
- Handle `"me"` owner substitution in `renderPermissions` (SSR path)
- Pass `canCreateChat` from `useAuthenticated().permissions` into
`AgentCreateForm`
- Show `ChatAccessDeniedAlert` and disable input immediately (no need to
trigger a 403 first)
- Also catch 403 errors as a fallback in case permissions aren't yet
loaded
- Add `ForbiddenNoAgentsRole` Storybook story with `play` assertions
- Add `TestRenderPermissionsResolvesMe` Go test to pin the `"me"`
sentinel substitution
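The `"me"` sentinel substitution described above can be sketched like so (the check shape and helper name are assumptions; the commit implements this in both the auth-check endpoint and the SSR `renderPermissions` path):

```typescript
// Resolve the "me" owner sentinel to the acting user's ID before running
// the authorization check.
interface PermissionCheck {
	owner_id?: string;
}

function resolveMeSentinel<T extends PermissionCheck>(
	checks: Record<string, T>,
	actorID: string,
): Record<string, T> {
	const resolved: Record<string, T> = {};
	for (const [name, check] of Object.entries(checks)) {
		resolved[name] =
			check.owner_id === "me" ? { ...check, owner_id: actorID } : check;
	}
	return resolved;
}
```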

<details><summary>Implementation plan & decision log</summary>

- Uses the existing `permissions.json` + `checkAuthorization` system
rather than a separate API call
- `owner_id: "me"` is resolved to the actor's ID by both the auth-check
API endpoint and the SSR `renderPermissions` function
- Go test uses a real `rbac.StrictCachingAuthorizer` (not a mock) so it
verifies both the sentinel substitution and the RBAC role evaluation
end-to-end
- Alert follows the exact same `Alert` pattern as the 409 usage-limit
block
- Uses `severity="info"` and links to the getting-started docs Step 3
- Textarea is disabled proactively so the user never sees the scary
generic error

</details>

> 🤖 Created by a Coder Agent and will be reviewed by a human.
2026-03-31 17:26:58 +01:00
Danny Kopping c86f1288f1 chore: update aibridge with latest changes (#23863)
https://github.com/coder/aibridge/compare/519b082ad666...a011104f377d

Includes https://github.com/coder/aibridge/pull/242 and
https://github.com/coder/aibridge/pull/229

Signed-off-by: Danny Kopping <danny@coder.com>
2026-03-31 16:11:50 +00:00
Yevhenii Shcherbina 9440adf435 feat: add chatgpt support for aibridge (#23822)
Registers a new aibridge provider for ChatGPT by reusing the existing
OpenAI provider with a different `Name` and `BaseURL`
(https://chatgpt.com/backend-api/codex). The ChatGPT backend API is
OpenAI-compatible, so no new provider type is needed.
ChatGPT authenticates exclusively via per-user OAuth JWTs (BYOK mode) —
no centralized API key is configured. The OpenAI provider already
handles this: when no key is set, it falls through to the bearer token
from the request's Authorization header.
  
  Depends on #23811
2026-03-31 12:08:45 -04:00
Kayla はな 755e8be5ad chore: migrate some emotion styles to tailwind (#23817) 2026-03-31 10:07:33 -06:00
Danielle Maywood c9e335c453 refactor: redesign compaction settings to table layout with batch save (#23844) 2026-03-31 17:06:12 +01:00
Jeremy Ruppel 2d1f35f8a6 feat(site): session timeline design feedback (#23836)
- Remove `border-surface-secondary` from all line art and use the
default border color
- Use `text-sm` for all timeline elements and tables *(except the `Thinking...` mono font, that's `text-xs`)*

---------

Co-authored-by: Jake Howell <jacob@coder.com>
2026-03-31 12:02:19 -04:00
Susana Ferreira b0036af57b feat: register multiple Copilot providers for business and enterprise upstreams (#23811)
## Description

Adds support for multiple Copilot provider instances to route requests to different Copilot upstreams (individual, business, enterprise). Each instance has its own name and base URL, enabling per-upstream metrics, logs, circuit breakers, API dump, and routing.

## Changes

* Add Copilot business and enterprise provider names and host constants
* Register three Copilot provider instances in aibridged (default, business, enterprise)
* Update `defaultAIBridgeProvider` in `aibridgeproxy` to route new Copilot hosts to their corresponding providers

## Related

* Depends on: https://github.com/coder/aibridge/pull/240
* Closes: https://github.com/coder/aibridge/issues/152

Note: documentation changes will be added in a follow-up PR.

_Disclaimer: initially produced by Claude Opus 4.6, heavily modified and reviewed by @ssncferreira ._
2026-03-31 16:00:37 +01:00
Kyle Carberry 2953245862 feat(site): display loaded context files and skills in context indicator tooltip (#23853)
Renders the `last_injected_context` data (AGENTS.md files and skills)
from the Chat API in the `ContextUsageIndicator` hover tooltip. On
hover, users now see:

- **Context files**: basename with full path on title hover, truncation
indicator
- **Skills**: name and optional description

Separated from the existing token usage info by a border divider when
both sections are present. Added `max-w-72` to prevent the tooltip from
getting too wide.

<img width="970" height="598" alt="image"
src="https://github.com/user-attachments/assets/5bc25cb2-1d92-41d2-ab1a-63e5e49f667a"
/>

<details>
<summary>Data flow</summary>

```
chatQuery.data.last_injected_context
  → AgentChatPage (AgentChatPageView prop)
    → AgentChatPageView (ChatPageInput prop)
      → ChatPageInput (spread into latestContextUsage)
        → AgentChatInput (contextUsage prop)
          → ContextUsageIndicator (usage.lastInjectedContext)
```

</details>

<details>
<summary>Files changed</summary>

| File | Change |
|---|---|
| `ContextUsageIndicator.tsx` | Add `lastInjectedContext` to interface, render context files and skills sections in tooltip |
| `ChatPageContent.tsx` | Thread `lastInjectedContext` prop, spread into context usage object |
| `AgentChatPageView.tsx` | Thread `lastInjectedContext` prop to `ChatPageInput` |
| `AgentChatPage.tsx` | Pass `chatQuery.data?.last_injected_context` down |

</details>
2026-03-31 14:43:32 +00:00
Danny Kopping 5d07014f9f chore: update aibridge lib (#23849)
https://github.com/coder/aibridge/pull/230 has been merged, update the
dependency to match.

Includes other changes as well:
https://github.com/coder/aibridge/compare/dd8c239e5566...77d597aa123b
(cc @evgeniy-scherbina, @pawbana)

Signed-off-by: Danny Kopping <danny@coder.com>
2026-03-31 16:11:40 +02:00
Jeremy Ruppel 002e88fefc fix(site): use mock client model in sessions list view (#23851)
Missed this lil guy when adding the `<ClientFilter />` in #23733
2026-03-31 10:00:21 -04:00
Ethan bbf3fbc830 fix(coderd/x/chatd): archive chat hard-interrupts active stream (#23758)
Archiving a chat now transitions pending or running chats to waiting
before setting the archived flag. This publishes a status notification
on `ChatStreamNotifyChannel` so `subscribeChatControl` cancels the
active `processChat` context via `ErrInterrupted` — the same codepath
used by the stop button.

The `processChat` cleanup also skips queued-message auto-promotion when
the chat is archived, so archiving behaves like a hard stop rather than
interrupt-and-continue.

Relates to https://github.com/coder/coder/issues/23666
2026-04-01 00:23:52 +11:00
Danny Kopping 9fa103929a perf: make ListAIBridgeSessions 10x faster (#23774)
_Disclaimer: produced using Claude Opus 4.6, reviewed by me, and
validated against Dogfood dataset._

The `ListAIBridgeSessions` query materialized and aggregated all
matching interceptions before paginating, then ran expensive
token/prompt lookups across the full dataset. For a page of 25 sessions
against ~200k interceptions (our dogfood dataset), this meant:
- Three CTEs scanning all rows (`filtered_interceptions`, `session_tokens`, `session_root`)
- `ARRAY_AGG(fi.id)` collecting every interception ID per session
- Lateral prompt lookup via `ANY(array_of_all_ids)` running for every session, not just the page
- ~90MB of disk sorts and JIT compilation kicking in

The improvement is to restructure to paginate first and enrich after: a
single CTE groups interceptions into sessions with only cheap aggregates
(MIN, MAX, COUNT), applies cursor pagination and LIMIT, then lateral
joins fetch metadata, tokens, and prompts for just the ~25-row page.

Measured against 220k interceptions / 160k sessions:

| Metric             | Before | After |
|--------------------|--------|-------|
| Execution time     | 1800ms | 185ms |
| Shared buffer hits | 737k   | 2.6k  |
| Disk sort spill    | 86MB   | 16MB  |
| Lateral loops      | 160k   | 25    |

The results are identical, just _much_ faster:
https://grafana.dev.coder.com/goto/fbODPGtvR?orgId=1

--- 

Also includes some additional tests which I added prior to refactoring
the query to ensure no regressions on edge-cases.

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2026-03-31 14:42:23 +02:00
Lukasz acd2ff63a7 chore: bump Go toolchain to 1.25.8 (#23772)
Bump the repository Go toolchain from 1.25.7 to 1.25.8.

Updates `go.mod`, the shared `setup-go` action default, and the dogfood
image checksum so local, CI, and dogfood builds stay aligned.
2026-03-31 14:04:58 +02:00
Atif Ali e3e17e15f7 fix(site): show accurate message and warning color for startup script failures in agent row (#23654)
The agent row tooltip showed "Error starting the agent" / "Something
went wrong during the agent startup" with a red border when a startup
script fails. This is misleading — the agent is started and functional,
only the startup script exited with a non-zero code.

Extracts shared message constants (`agentLifecycleMessages`,
`agentStatusMessages`) from `health.ts` so both the workspace-level
health classification and the per-agent-row tooltips reference the same
single source of truth. No more duplicated wording that can drift.

Changes:
- **`health.ts`**: Exports `agentLifecycleMessages` and
`agentStatusMessages` maps; `getAgentHealthIssue` now references them
instead of inline strings.
- **`AgentStatus.tsx`**: All lifecycle/status tooltip components
(`StartErrorLifecycle`, `StartTimeoutLifecycle`,
`ShutdownTimeoutLifecycle`, `ShutdownErrorLifecycle`, `TimeoutStatus`)
now import and render from the shared message constants.
`StartErrorLifecycle` icon changed from red (`errorWarning`) to orange
(`timeoutWarning`).
- **`AgentRow.tsx`**: `start_error` border changed from
`border-border-destructive` (red) to `border-border-warning` (orange).

Closes #23652
Refs #21389

> 🤖 This PR was created with the help of Coder Agents, and has been
reviewed by my human. 🧑‍💻
2026-03-31 12:04:51 +00:00
Michael Suchacz af678606fc fix(coderd/x/chatd): stabilize flaky request-count assertion in round-trip test (#23843)
The flaky test assumed the second streamed OpenAI request had already
been captured when the chat status event arrived. In practice, the
capture server can record that second request slightly later, which
intermittently left `streamRequestCount` at `1`.

This change waits for the second captured request before asserting on
the follow-up payload and relaxes the count check to a sanity check. The
test still verifies the `store=false` round-trip behavior without
depending on that timing race.

Fixes coder/internal#1433
2026-03-31 13:09:11 +02:00
Cian Johnston 3190406de3 fix(site): stop workspace deletes playing hide-and-seek (#23641)
- Fix workspaces list invalidation after kebab-menu delete and add
Storybook coverage for the immediate `Deleting` state.

> 🤖 This PR was made by Coder Agents and read by me.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-31 11:36:47 +01:00
Cian Johnston 3ce82bb885 feat: add chat-access site-wide role to gate chat creation (#23724)
- Add `chat-access` built-in role granting chat CRUD at User scope
- Exclude `ResourceChat` from member, org member, and org service
account `allPermsExcept` calls
- Allow system, owner, and user-admin to assign the new role
- Migration auto-assigns role to users who have ever created a chat
- Update RBAC test matrix: `memberMe` denied, `chatAccessUser` allowed

**Breaking change**: Members without `chat-access` lose chat creation
ability. Migration covers existing chat creators. Members who have never
created a chat do not get this role automatically applied.

> 🤖 This PR was created by a Coder Agent and reviewed by me.
2026-03-31 10:07:21 +01:00
Ethan 348a3bd693 fix(site): show archived filter on agents page when chat list is empty (#23793)
While running scaletests I noticed the archived filter button on the
agents sidebar would disappear when the current filter had zero results.
This made it impossible to switch between active and archived views once
one side was empty.

The filter dropdown was only rendered inside the Pinned or first
time-group section header. When `visibleRootIDs` was empty, neither
header existed, so the filter had nowhere to attach.

This keeps the original dropdown placement on section headers when chats
exist. When the list is empty, the empty-state box itself now provides a
"View archived →" or "← Back to active" link so users can always switch
filters without needing the dropdown.

<img width="322" height="191" alt="image"
src="https://github.com/user-attachments/assets/7fd9ca09-5f72-4796-a925-7fab570fdff5"
/>

<img width="320" height="184" alt="image"
src="https://github.com/user-attachments/assets/2f856088-c2dc-4e34-9ece-84144a1adf79"
/>

Both archived & unarchived look the same when there's at least one
agent:

<img width="322" height="194" alt="image"
src="https://github.com/user-attachments/assets/42c4d54b-e500-45b1-b045-c126144c35bd"
/>
2026-03-31 12:57:46 +11:00
Jeremy Ruppel 75f1503b41 feat(site): various Session Timeline fixes (#23791)
- Use Tooltip instead of Popover for AI Gov tooltip
- Fix Agentic Loop tool call summing
- Collapse all expandable sections by default
- Add solid background to "Show More" button
- Remove "Sort by" dropdown for v1
2026-03-30 19:25:03 -04:00
Danielle Maywood c33cd19a05 fix(site/scripts): guard check-compiler main block from test imports (#23825) 2026-03-30 22:11:15 +01:00
Danielle Maywood adcea865c7 fix(site): improve check-compiler.mjs quality and fix bugs (#23812) 2026-03-30 20:41:41 +00:00
Matt Vollmer 5e3bccd96c docs: fix tool tables and model option errors in agent docs (#23821)
Fixes factual errors found during a review of all pages under
`/docs/ai-coder/agents/`.

## Tool tables (`index.md`, `architecture.md`)

Both pages had incomplete tool tables. Added:

- `process_output`, `process_list`, `process_signal` — core workspace
tools always registered alongside `execute`, missing from both pages
- `propose_plan` — platform tool (root chats only), missing from both
pages
- `spawn_computer_use_agent` — orchestration tool (conditional), missing
from architecture.md

Also fixed the architecture.md claim that the agent is "restricted to
the tool set defined in this section" — it now mentions skills and MCP
tools with links to the relevant pages.

## Model options (`models.md`)

- **OpenAI / OpenRouter Reasoning Effort**: docs listed `low`, `medium`,
`high` — code has `none`, `minimal`, `low`, `medium`, `high`, `xhigh`.
Fixed both.
- **Removed hidden fields** that never appear in the admin UI:
  - Google: Safety Settings (`hidden:"true"`)
  - OpenRouter: Provider Order, Allow Fallbacks (parent struct `hidden:"true"`)
  - Vercel: Provider Options (`hidden:"true"`)

---

*PR generated with Coder Agents*
2026-03-30 16:24:45 -04:00
Mathias Fredriksson 3950947c58 fix(site): prevent scroll handler from killing autoScroll during pin convergence (#23818)
WebKit internally adjusts scrollTop during layout when content
above the viewport changes height, even with overflow-anchor:none.
These phantom adjustments fire scroll events where isNearBottom
returns false. The scroll handler was setting autoScrollRef =
nearBottom on every such event, permanently killing follow mode.

The scroll handler now only enables follow mode, never disables
it. When follow mode is active, the user is not wheel/touch
scrolling, and isNearBottom is false, this indicates a
browser-initiated scroll adjustment. Re-pin immediately and
set the restore guard so the pin's own scroll event is suppressed.

Disabling follow mode is exclusive to user-interaction handlers
(wheel, touch, scrollbar pointerdown) via handleUserInterrupt.

Guard-clear callbacks also check isNearBottom before dropping
the restoration flag, re-pinning if content grew between the
pin and the clear.
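The asymmetric rule (the scroll handler may only enable follow mode; disabling is reserved for user-interaction handlers) can be sketched as:

```typescript
// Names (autoScroll, isNearBottom, handleUserInterrupt) follow the commit
// text; the state shape and repin callback are illustrative.
interface FollowState {
	autoScroll: boolean;
	userScrolling: boolean; // wheel/touch/scrollbar interaction in progress
}

function handleScroll(
	s: FollowState,
	isNearBottom: boolean,
	repin: () => void,
): void {
	if (isNearBottom) {
		s.autoScroll = true; // back at the bottom: (re-)enable follow mode
		return;
	}
	// Follow mode on, no user input, yet not near bottom: treat this as a
	// browser-initiated layout adjustment and re-pin instead of unfollowing.
	if (s.autoScroll && !s.userScrolling) {
		repin();
	}
}

function handleUserInterrupt(s: FollowState): void {
	s.autoScroll = false; // only real user interaction disables follow mode
}
```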
2026-03-30 22:45:58 +03:00
Kyle Carberry b3d5b8d13c fix: stabilize flaky chatd subscribe/promote queued tests (#23816)
## Summary

Fixes three flaky chatd tests that intermittently fail due to timing
races with the background run loop.

Closes coder/internal#1428

## Root Cause

`CreateChat` and `PromoteQueued` call `signalWake()` which writes to
`wakeCh`, triggering `processOnce` immediately. Even though
`newTestServer` sets `PendingChatAcquireInterval: testutil.WaitLong` to
prevent ticker-based polling, the wake channel bypasses this. This
causes `processOnce` to acquire and process the chat concurrently with
the test's manual DB updates and assertions.

### Failing tests

| Test | Failure | Cause |
|------|---------|-------|
| `TestPromoteQueuedAllowsAlreadyQueuedMessageWhenUsageLimitReached` | `expected: "pending", actual: "running"` | Wake from `CreateChat` races with manual `UpdateChatStatus`; wake from `PromoteQueued` acquires the chat before the status assertion |
| `TestSendMessageInterruptBehaviorQueuesAndInterruptsWhenBusy` | `should have 1 item(s), but has 2` | Wake from `CreateChat` triggers `processChat` which auto-promotes a queued message, adding an extra row to `chat_messages` |
| `TestSubscribeNoPubsubNoDuplicateMessageParts` | `Condition satisfied` (duplicate events) | Pre-existing `WaitGroup.Add/Wait` race in the `Eventually` + `WaitUntilIdleForTest` pattern |
## Fix

Introduces a `waitForChatProcessed` helper that:
1. Polls until the chat reaches a **terminal state** (not pending AND
not running)
2. Then calls `WaitUntilIdleForTest` to wait for the inflight
`WaitGroup`

Waiting for a terminal state (not just "not pending") avoids a
`sync.WaitGroup` `Add/Wait` race: `AcquireChats` updates the DB status
to `running` **before** `processOnce` calls `inflight.Add(1)`. Checking
only `status != pending` could return while `Add(1)` hasn't happened
yet, causing `Wait()` to return prematurely.

### Per-test changes

- **`TestSendMessageInterruptBehaviorQueuesAndInterruptsWhenBusy`**: Call `waitForChatProcessed` after `CreateChat` before manually setting running status
- **`TestPromoteQueuedAllowsAlreadyQueuedMessageWhenUsageLimitReached`**: Call `waitForChatProcessed` after `CreateChat`; remove the inherently racy `status == pending` assertion after `PromoteQueued` (the wake immediately acquires the chat). Key assertions on promoted message, queue state, and message count remain.
- **`TestSubscribeNoPubsubNoDuplicateMessageParts`**: Replace inline `Eventually` with the safer `waitForChatProcessed` helper

## Verification

All three tests passed 150 consecutive executions (`-race -count=10`
across 15 runs) with 0 failures.
2026-03-30 18:23:47 +00:00
blinkagent[bot] a00afe4b5a chore(site): update proxy menu dialog text (#23765)
Updates the descriptive text in the proxy selection dropdown menu to be
clearer and more concise.

**Before:**
> Workspace proxies improve terminal and web app connections to
workspaces. This does not apply to CLI connections. A region must be
manually selected, otherwise the default primary region will be used.

**After:**
> Workspace proxies improve terminal and web app connections. CLI
connections are unaffected. If no region is selected, the primary region
will be used.

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-03-30 11:20:05 -07:00
Kyle Carberry a5cc579453 feat: add last_injected_context column to chats table (#23798)
Adds a nullable JSONB column `last_injected_context` to the `chats`
table that stores the most recently persisted injected context parts
(AGENTS.md context-file and skill message parts). The column is updated
only when `persistInstructionFiles()` runs — on first workspace attach
or when the agent changes — so there are no redundant writes on
subsequent turns.

Internal fields (`ContextFileContent`, `ContextFileOS`,
`ContextFileDirectory`, `SkillDir`) are stripped at write time so the
column only holds small metadata. No stripping needed on the read path.

<details>
<summary>Implementation notes</summary>

- New migration `000456` adds nullable `last_injected_context JSONB`
column.
- New SQL query `UpdateChatLastInjectedContext` writes the column
without touching `updated_at`.
- `persistInstructionFiles()` strips internal fields from parts via
`StripInternal()` before persisting.
- Sentinel path (no AGENTS.md) persists skill-only parts when skills
exist.
- `codersdk.Chat` exposes `LastInjectedContext []ChatMessagePart`
(omitempty).
- `db2sdk.Chat()` passes through the already-clean data.

</details>
2026-03-30 14:11:30 -04:00
Spike Curtis ef3aade647 chore: support agent updates in tunneler (#23730)
relates to GRU-18

Adds support for agent updates to the Tunneler
2026-03-30 13:50:06 -04:00
dependabot[bot] 3cc31de57a chore: bump github.com/go-git/go-git/v5 from 5.17.0 to 5.17.1 (#23813)
Bumps [github.com/go-git/go-git/v5](https://github.com/go-git/go-git)
from 5.17.0 to 5.17.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/go-git/go-git/releases">github.com/go-git/go-git/v5's
releases</a>.</em></p>
<blockquote>
<h2>v5.17.1</h2>
<h2>What's Changed</h2>
<ul>
<li>build: Update module github.com/cloudflare/circl to v1.6.3
[SECURITY] (releases/v5.x) by <a
href="https://github.com/go-git-renovate"><code>@​go-git-renovate</code></a>[bot]
in <a
href="https://redirect.github.com/go-git/go-git/pull/1930">go-git/go-git#1930</a></li>
<li>[v5] plumbing: format/index, Improve v4 entry name validation by <a
href="https://github.com/pjbgf"><code>@​pjbgf</code></a> in <a
href="https://redirect.github.com/go-git/go-git/pull/1935">go-git/go-git#1935</a></li>
<li>[v5] plumbing: format/idxfile, Fix version and fanout checks by <a
href="https://github.com/pjbgf"><code>@​pjbgf</code></a> in <a
href="https://redirect.github.com/go-git/go-git/pull/1937">go-git/go-git#1937</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/go-git/go-git/compare/v5.17.0...v5.17.1">https://github.com/go-git/go-git/compare/v5.17.0...v5.17.1</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/go-git/go-git/commit/5e23dfd02db92644dc4a3358ceb297fce875b772"><code>5e23dfd</code></a>
Merge pull request <a
href="https://redirect.github.com/go-git/go-git/issues/1937">#1937</a>
from pjbgf/idx-v5</li>
<li><a
href="https://github.com/go-git/go-git/commit/6b38a326816b80f64c20cc0e6113958b65c05a1c"><code>6b38a32</code></a>
Merge pull request <a
href="https://redirect.github.com/go-git/go-git/issues/1935">#1935</a>
from pjbgf/index-v5</li>
<li><a
href="https://github.com/go-git/go-git/commit/cd757fcb856a2dcc5fff6c110320a8ff62e99513"><code>cd757fc</code></a>
plumbing: format/idxfile, Fix version and fanout checks</li>
<li><a
href="https://github.com/go-git/go-git/commit/3ec0d70cb687ae1da5f4d18faa4229bd971a8710"><code>3ec0d70</code></a>
plumbing: format/index, Fix tree extension invalidated entry
parsing</li>
<li><a
href="https://github.com/go-git/go-git/commit/dbe10b6b425a2a4ea92a9d98e20cd68e15aede01"><code>dbe10b6</code></a>
plumbing: format/index, Align V2/V3 long name and V4 prefix encoding
with Git</li>
<li><a
href="https://github.com/go-git/go-git/commit/e9b65df44cb97faeba148b47523a362beaecddf9"><code>e9b65df</code></a>
plumbing: format/index, Improve v4 entry name validation</li>
<li><a
href="https://github.com/go-git/go-git/commit/adad18daabddee04c5a889f0230035e74bca32c0"><code>adad18d</code></a>
Merge pull request <a
href="https://redirect.github.com/go-git/go-git/issues/1930">#1930</a>
from go-git/renovate/releases/v5.x-go-github.com-clo...</li>
<li><a
href="https://github.com/go-git/go-git/commit/29470bd1d862c6e902996b8e8ff8eb7a0515a9be"><code>29470bd</code></a>
build: Update module github.com/cloudflare/circl to v1.6.3
[SECURITY]</li>
<li>See full diff in <a
href="https://github.com/go-git/go-git/compare/v5.17.0...v5.17.1">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 17:27:38 +00:00
Mathias Fredriksson d2c308e481 fix(site/src/pages/AgentsPage): unify scroll restore-guard lifecycle in ScrollAnchoredContainer (#23809)
Two ResizeObserver effects (content and container) each had their own
local restoreGuardRafId but both wrote to the shared
isRestoringScrollRef. Either observer's guard-clear RAF could fire
while the other's pin chain was in-flight, leaving isRestoringScrollRef
prematurely false.

scrollTranscriptToBottom also set isRestoringScrollRef without
cancelling any pending guard-clear, so a stale clear could drop
the flag mid-smooth-scroll animation.

Promote restoreGuardRafId to a single shared ref so all write
paths coordinate through one cancellation point.
2026-03-30 20:17:44 +03:00
Kyle Carberry 953c3bdc0f fix(site): prevent spurious startup warning during pending status (#23805)
## Problem

The `/agents` page frequently shows "Response startup is taking longer
than expected" even while the agent is actively working and messages are
appearing in the transcript.

## Root Cause

There's an inconsistency between `isActiveChatStatus` and
`shouldApplyMessagePart` during `"pending"` status (the state between
agent tool-call turns):

| Component | Treats `"pending"` as... |
|---|---|
| `isActiveChatStatus` | **active** — includes both `"running"` and `"pending"` |
| `shouldApplyMessagePart` | **inactive** — drops all `message_part` events during `"pending"` |
| Status handler | clears `streamState` to `null` on `"pending"` |

This creates a dead state during multi-turn tool-call cycles:

1. Agent finishes a turn → status = `"pending"` → `streamState` cleared
to `null`
2. `selectIsAwaitingFirstStreamChunk` returns `true` (status is
"active", stream is null, latest message isn't assistant)
3. Phase = `"starting"` → 15s timer starts
4. Stream parts from the server are **silently dropped**
(`shouldApplyMessagePart()` returns `false` for `"pending"`)
5. `streamState` stays `null` — phase is stuck at `"starting"`
6. Meanwhile, durable messages (tool calls, tool results) appear
normally in the transcript
7. After 15s → "Response startup is taking longer than expected" fires

## Fix

Narrow `selectIsAwaitingFirstStreamChunk` to only check `chatStatus ===
"running"` instead of `isActiveChatStatus(chatStatus)`. `"running"` is
the only status where the transport actually accepts stream parts, so
it's the only status where we should be showing the "starting"
indicator.

`isActiveChatStatus` is left unchanged since its other caller
(`shouldSurfaceReconnectState`) correctly needs to include `"pending"`.
2026-03-30 12:46:51 -04:00
Matt Vollmer ca879ffae6 docs: add extending-agents, mcp-servers, and usage-insights pages (#23810)
Adds three new documentation pages for major shipped features that had
no docs, and updates the platform controls index to reflect current
state.

## New pages

### Extending Agents (`extending-agents.md`)

Covers two workspace-level extension mechanisms:
- **Skills** — `.agents/skills/<name>/SKILL.md` directory structure,
frontmatter format, auto-discovery, `read_skill`/`read_skill_file`
tools, size limits, lazy loading
- **Workspace MCP tools** — `.mcp.json` format, stdio and HTTP
transports, tool name prefixing, discovery lifecycle and caching
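
For illustration, a hypothetical `.mcp.json` declaring one stdio server and one HTTP server; the exact keys are an assumption based on this summary, not the documented schema:

```json
{
  "mcpServers": {
    "docs": {
      "command": "docs-mcp",
      "args": ["--stdio"]
    },
    "issues": {
      "url": "https://mcp.example.com/issues"
    }
  }
}
```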

### MCP Servers (`platform-controls/mcp-servers.md`)

Admin MCP server configuration:
- CRUD via **Agents** > **Settings** > **MCP Servers**
- Four auth modes: none, OAuth2 (with auto-discovery), API key, custom
headers
- Availability policies: `force_on`, `default_on`, `default_off`
- Tool governance via allow/deny lists
- Permission model and secret redaction

### Usage & Insights (`platform-controls/usage-insights.md`)

Three admin dashboards:
- **Usage limits** — spend caps with per-user and per-group overrides,
priority hierarchy, enforcement behavior
- **Cost tracking** — per-user rollup with token breakdowns, date
filtering, per-model and per-chat drill-down

## Updated files

- **`platform-controls/index.md`** — Moved MCP servers, usage limits,
and analytics from "Where we are headed" into "What platform teams
control today" with links to the new pages. Removed the tool
customization roadmap section (now covered by MCP servers page).
- **`manifest.json`** — Added nav entries for all three new pages.

## Resulting nav hierarchy

```
Coder Agents
├── Getting Started
├── Early Access
├── Architecture
├── Models
├── Platform Controls
│   ├── Template Optimization
│   ├── MCP Servers              ← NEW
│   └── Usage & Insights         ← NEW
├── Extending Agents             ← NEW
└── Chats API
```

---

*PR generated with Coder Agents*
2026-03-30 12:46:34 -04:00
Cian Johnston 0880a4685b ci: fix pnpm not found in check-docs job (#23807)
- Enable corepack before the linkspector step so the `pnpm` shim is in PATH
- `action-linkspector@v1.4.1` internally calls `actions/setup-node@v5`,
which now defaults `package-manager-cache` to `true` — it detects
`pnpm-lock.yaml` and tries to resolve the `pnpm` binary, but it's not
installed on the runner
- Add TODO to remove the workaround when upstream is fixed

Upstream: https://github.com/UmbrellaDocs/action-linkspector/issues/54

> 🤖 Cian asked a Coder Agent to make this PR and then reviewed the
change.
2026-03-30 21:28:51 +05:00
Danielle Maywood 3f8e3007d8 fix(site): write WebSocket messages to React Query cache (#23618) 2026-03-30 15:56:08 +01:00
316 changed files with 10322 additions and 3213 deletions
+1 -1
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.25.7"
default: "1.25.8"
use-cache:
description: "Whether to use the cache."
default: "true"
+29 -9
@@ -240,6 +240,7 @@ jobs:
- name: Create Coder Task for Documentation Check
if: steps.check-secrets.outputs.skip != 'true'
id: create_task
continue-on-error: true
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.DOC_CHECK_CODER_URL }}
@@ -254,8 +255,21 @@ jobs:
github-issue-url: ${{ steps.determine-context.outputs.pr_url }}
comment-on-issue: false
- name: Handle Task Creation Failure
if: steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome != 'success'
run: |
{
echo "## Documentation Check Task"
echo ""
echo "⚠️ The external Coder task service was unavailable, so this"
echo "advisory documentation check did not run."
echo ""
echo "Maintainers can rerun the workflow or trigger it manually"
echo "after the service recovers."
} >> "${GITHUB_STEP_SUMMARY}"
- name: Write Task Info
if: steps.check-secrets.outputs.skip != 'true'
if: steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome == 'success'
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
@@ -273,7 +287,7 @@ jobs:
} >> "${GITHUB_STEP_SUMMARY}"
- name: Wait for Task Completion
if: steps.check-secrets.outputs.skip != 'true'
if: steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome == 'success'
id: wait_task
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
@@ -363,7 +377,7 @@ jobs:
fi
- name: Fetch Task Logs
if: always() && steps.check-secrets.outputs.skip != 'true'
if: always() && steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome == 'success'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
@@ -376,7 +390,7 @@ jobs:
echo "::endgroup::"
- name: Cleanup Task
if: always() && steps.check-secrets.outputs.skip != 'true'
if: always() && steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome == 'success'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
@@ -390,6 +404,7 @@ jobs:
- name: Write Final Summary
if: always() && steps.check-secrets.outputs.skip != 'true'
env:
CREATE_TASK_OUTCOME: ${{ steps.create_task.outcome }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_MESSAGE: ${{ steps.wait_task.outputs.task_message }}
RESULT_URI: ${{ steps.wait_task.outputs.result_uri }}
@@ -400,10 +415,15 @@ jobs:
echo "---"
echo "### Result"
echo ""
echo "**Status:** ${TASK_MESSAGE:-Task completed}"
if [[ -n "${RESULT_URI}" ]]; then
echo "**Comment:** ${RESULT_URI}"
if [[ "${CREATE_TASK_OUTCOME}" == "success" ]]; then
echo "**Status:** ${TASK_MESSAGE:-Task completed}"
if [[ -n "${RESULT_URI}" ]]; then
echo "**Comment:** ${RESULT_URI}"
fi
echo ""
echo "Task \`${TASK_NAME}\` has been cleaned up."
else
echo "**Status:** Skipped because the external Coder task"
echo "service was unavailable."
fi
echo ""
echo "Task \`${TASK_NAME}\` has been cleaned up."
} >> "${GITHUB_STEP_SUMMARY}"
+6
@@ -46,6 +46,12 @@ jobs:
echo " replacement: \"https://github.com/coder/coder/tree/${HEAD_SHA}/\""
} >> .github/.linkspector.yml
# TODO: Remove this workaround once action-linkspector sets
# package-manager-cache: false in its internal setup-node step.
# See: https://github.com/UmbrellaDocs/action-linkspector/issues/54
- name: Enable corepack
run: corepack enable pnpm
- name: Check Markdown links
uses: umbrelladocs/action-linkspector@37c85bcde51b30bf929936502bac6bfb7e8f0a4d # v1.4.1
id: markdown-link-check
+1 -1
@@ -211,7 +211,7 @@ func TestServer_X11_EvictionLRU(t *testing.T) {
require.NoError(t, err)
stderr, err := sess.StderrPipe()
require.NoError(t, err)
require.NoError(t, sess.Shell())
require.NoError(t, sess.Start("sh"))
// The SSH server lazily starts the session. We need to write a command
// and read back to ensure the X11 forwarding is started.
+13 -2
@@ -352,8 +352,6 @@ func TestScheduleOverride(t *testing.T) {
require.NoError(t, err, "invalid schedule")
ownerClient, _, _, ws := setupTestSchedule(t, sched)
now := time.Now()
// To avoid the likelihood of time-related flakes, only matching up to the hour.
expectedDeadline := now.In(loc).Add(10 * time.Hour).Format("2006-01-02T15:")
// When: we override the stop schedule
inv, root := clitest.New(t,
@@ -364,6 +362,19 @@ func TestScheduleOverride(t *testing.T) {
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Fetch the workspace to get the actual deadline set by the
// server. Computing our own expected deadline from a separately
// captured time.Now() is racy: the CLI command calls time.Now()
// internally, and with the Asia/Kolkata +05:30 offset the hour
// boundary falls at :30 UTC minutes. A small delay between our
// time.Now() and the command's is enough to land in different
// hours.
updated, err := ownerClient.Workspace(context.Background(), ws[0].ID)
require.NoError(t, err)
require.False(t, updated.LatestBuild.Deadline.IsZero(), "deadline should be set after extend")
require.WithinDuration(t, now.Add(10*time.Hour), updated.LatestBuild.Deadline.Time, 5*time.Minute)
expectedDeadline := updated.LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339)
// Then: the updated schedule should be shown
pty.ExpectMatch(ws[0].OwnerName + "/" + ws[0].Name)
pty.ExpectMatch(sched.Humanize())
+7 -2
@@ -857,13 +857,18 @@ aibridgeproxy:
# Comma-separated list of AI provider domains for which HTTPS traffic will be
# decrypted and routed through AI Bridge. Requests to other domains will be
# tunneled directly without decryption. Supported domains: api.anthropic.com,
# api.openai.com, api.individual.githubcopilot.com.
# (default: api.anthropic.com,api.openai.com,api.individual.githubcopilot.com,
# api.openai.com, api.individual.githubcopilot.com,
# api.business.githubcopilot.com, api.enterprise.githubcopilot.com, chatgpt.com.
# (default:
# api.anthropic.com,api.openai.com,api.individual.githubcopilot.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,chatgpt.com,
# type: string-array)
domain_allowlist:
- api.anthropic.com
- api.openai.com
- api.individual.githubcopilot.com
- api.business.githubcopilot.com
- api.enterprise.githubcopilot.com
- chatgpt.com
# URL of an upstream HTTP proxy to chain tunneled (non-allowlisted) requests
# through. Format: http://[user:pass@]host:port or https://[user:pass@]host:port.
# (default: <unset>, type: string)
+15
@@ -20,6 +20,21 @@ const HeaderCoderToken = "X-Coder-AI-Governance-Token" //nolint:gosec // This is
// request forwarded to aibridged for cross-service log correlation.
const HeaderCoderRequestID = "X-Coder-AI-Governance-Request-Id"
// Copilot provider.
const (
ProviderCopilotBusiness = "copilot-business"
HostCopilotBusiness = "api.business.githubcopilot.com"
ProviderCopilotEnterprise = "copilot-enterprise"
HostCopilotEnterprise = "api.enterprise.githubcopilot.com"
)
// ChatGPT provider.
const (
ProviderChatGPT = "chatgpt"
HostChatGPT = "chatgpt.com"
BaseURLChatGPT = "https://" + HostChatGPT + "/backend-api/codex"
)
// IsBYOK reports whether the request is using BYOK mode, determined
// by the presence of the X-Coder-AI-Governance-Token header.
func IsBYOK(header http.Header) bool {
+12
@@ -13445,6 +13445,11 @@ const docTemplate = `{
"chat:delete",
"chat:read",
"chat:update",
"chat_automation:*",
"chat_automation:create",
"chat_automation:delete",
"chat_automation:read",
"chat_automation:update",
"coder:all",
"coder:apikeys.manage_self",
"coder:application_connect",
@@ -13654,6 +13659,11 @@ const docTemplate = `{
"APIKeyScopeChatDelete",
"APIKeyScopeChatRead",
"APIKeyScopeChatUpdate",
"APIKeyScopeChatAutomationAll",
"APIKeyScopeChatAutomationCreate",
"APIKeyScopeChatAutomationDelete",
"APIKeyScopeChatAutomationRead",
"APIKeyScopeChatAutomationUpdate",
"APIKeyScopeCoderAll",
"APIKeyScopeCoderApikeysManageSelf",
"APIKeyScopeCoderApplicationConnect",
@@ -19038,6 +19048,7 @@ const docTemplate = `{
"audit_log",
"boundary_usage",
"chat",
"chat_automation",
"connection_log",
"crypto_key",
"debug_info",
@@ -19084,6 +19095,7 @@ const docTemplate = `{
"ResourceAuditLog",
"ResourceBoundaryUsage",
"ResourceChat",
"ResourceChatAutomation",
"ResourceConnectionLog",
"ResourceCryptoKey",
"ResourceDebugInfo",
+12
@@ -12015,6 +12015,11 @@
"chat:delete",
"chat:read",
"chat:update",
"chat_automation:*",
"chat_automation:create",
"chat_automation:delete",
"chat_automation:read",
"chat_automation:update",
"coder:all",
"coder:apikeys.manage_self",
"coder:application_connect",
@@ -12224,6 +12229,11 @@
"APIKeyScopeChatDelete",
"APIKeyScopeChatRead",
"APIKeyScopeChatUpdate",
"APIKeyScopeChatAutomationAll",
"APIKeyScopeChatAutomationCreate",
"APIKeyScopeChatAutomationDelete",
"APIKeyScopeChatAutomationRead",
"APIKeyScopeChatAutomationUpdate",
"APIKeyScopeCoderAll",
"APIKeyScopeCoderApikeysManageSelf",
"APIKeyScopeCoderApplicationConnect",
@@ -17410,6 +17420,7 @@
"audit_log",
"boundary_usage",
"chat",
"chat_automation",
"connection_log",
"crypto_key",
"debug_info",
@@ -17456,6 +17467,7 @@
"ResourceAuditLog",
"ResourceBoundaryUsage",
"ResourceChat",
"ResourceChatAutomation",
"ResourceConnectionLog",
"ResourceCryptoKey",
"ResourceDebugInfo",
+1 -1
@@ -220,7 +220,7 @@ func (api *API) checkAuthorization(rw http.ResponseWriter, r *http.Request) {
Type: string(v.Object.ResourceType),
AnyOrgOwner: v.Object.AnyOrgOwner,
}
if obj.Owner == "me" {
if obj.Owner == codersdk.Me {
obj.Owner = auth.ID
}
+5
@@ -7,6 +7,11 @@ type CheckConstraint string
// CheckConstraint enums.
const (
CheckAPIKeysAllowListNotEmpty CheckConstraint = "api_keys_allow_list_not_empty" // api_keys
CheckChatAutomationEventsChatExclusivity CheckConstraint = "chat_automation_events_chat_exclusivity" // chat_automation_events
CheckChatAutomationTriggersCronFields CheckConstraint = "chat_automation_triggers_cron_fields" // chat_automation_triggers
CheckChatAutomationTriggersWebhookFields CheckConstraint = "chat_automation_triggers_webhook_fields" // chat_automation_triggers
CheckChatAutomationsMaxChatCreatesPerHourCheck CheckConstraint = "chat_automations_max_chat_creates_per_hour_check" // chat_automations
CheckChatAutomationsMaxMessagesPerHourCheck CheckConstraint = "chat_automations_max_messages_per_hour_check" // chat_automations
CheckChatModelConfigsCompressionThresholdCheck CheckConstraint = "chat_model_configs_compression_threshold_check" // chat_model_configs
CheckChatModelConfigsContextLimitCheck CheckConstraint = "chat_model_configs_context_limit_check" // chat_model_configs
CheckChatProvidersProviderCheck CheckConstraint = "chat_providers_provider_check" // chat_providers
+11
@@ -1572,6 +1572,17 @@ func Chat(c database.Chat, diffStatus *database.ChatDiffStatus) codersdk.Chat {
convertedDiffStatus := ChatDiffStatus(c.ID, diffStatus)
chat.DiffStatus = &convertedDiffStatus
}
if c.LastInjectedContext.Valid {
var parts []codersdk.ChatMessagePart
// Internal fields are stripped at write time in
// chatd.updateLastInjectedContext, so no
// StripInternal call is needed here. Unmarshal
// errors are suppressed — the column is written by
// us with a known schema.
if err := json.Unmarshal(c.LastInjectedContext.RawMessage, &parts); err == nil {
chat.LastInjectedContext = parts
}
}
return chat
}
+7
@@ -541,6 +541,13 @@ func TestChat_AllFieldsPopulated(t *testing.T) {
PinOrder: 1,
MCPServerIDs: []uuid.UUID{uuid.New()},
Labels: database.StringMap{"env": "prod"},
LastInjectedContext: pqtype.NullRawMessage{
// Use a context-file part to verify internal
// fields are not present (they are stripped at
// write time by chatd, not at read time).
RawMessage: json.RawMessage(`[{"type":"context-file","context_file_path":"/AGENTS.md"}]`),
Valid: true,
},
}
// Only ChatID is needed here. This test checks that
// Chat.DiffStatus is non-nil, not that every DiffStatus
+238 -7
@@ -1570,13 +1570,13 @@ func (q *querier) AllUserIDs(ctx context.Context, includeSystem bool) ([]uuid.UU
return q.db.AllUserIDs(ctx, includeSystem)
}
func (q *querier) ArchiveChatByID(ctx context.Context, id uuid.UUID) error {
func (q *querier) ArchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, id)
if err != nil {
return err
return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return err
return nil, err
}
return q.db.ArchiveChatByID(ctx, id)
}
@@ -1694,6 +1694,13 @@ func (q *querier) CleanTailnetTunnels(ctx context.Context) error {
return q.db.CleanTailnetTunnels(ctx)
}
func (q *querier) CleanupDeletedMCPServerIDsFromChatAutomations(ctx context.Context) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceChatAutomation); err != nil {
return err
}
return q.db.CleanupDeletedMCPServerIDsFromChatAutomations(ctx)
}
func (q *querier) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceChat); err != nil {
return err
@@ -1731,6 +1738,28 @@ func (q *querier) CountAuditLogs(ctx context.Context, arg database.CountAuditLog
return q.db.CountAuthorizedAuditLogs(ctx, arg, prep)
}
func (q *querier) CountChatAutomationChatCreatesInWindow(ctx context.Context, arg database.CountChatAutomationChatCreatesInWindowParams) (int64, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return 0, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return 0, err
}
return q.db.CountChatAutomationChatCreatesInWindow(ctx, arg)
}
func (q *querier) CountChatAutomationMessagesInWindow(ctx context.Context, arg database.CountChatAutomationMessagesInWindowParams) (int64, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return 0, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return 0, err
}
return q.db.CountChatAutomationMessagesInWindow(ctx, arg)
}
func (q *querier) CountConnectionLogs(ctx context.Context, arg database.CountConnectionLogsParams) (int64, error) {
// Just like the actual query, shortcut if the user is an owner.
err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog)
@@ -1842,6 +1871,28 @@ func (q *querier) DeleteApplicationConnectAPIKeysByUserID(ctx context.Context, u
return q.db.DeleteApplicationConnectAPIKeysByUserID(ctx, userID)
}
func (q *querier) DeleteChatAutomationByID(ctx context.Context, id uuid.UUID) error {
return deleteQ(q.log, q.auth, q.db.GetChatAutomationByID, q.db.DeleteChatAutomationByID)(ctx, id)
}
// Triggers are sub-resources of an automation. Deleting a trigger
// is a configuration change, so we authorize ActionUpdate on the
// parent rather than ActionDelete.
func (q *querier) DeleteChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) error {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, id)
if err != nil {
return err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return err
}
return q.db.DeleteChatAutomationTriggerByID(ctx, id)
}
func (q *querier) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
@@ -2386,6 +2437,16 @@ func (q *querier) GetActiveAISeatCount(ctx context.Context) (int64, error) {
return q.db.GetActiveAISeatCount(ctx)
}
// GetActiveChatAutomationCronTriggers is a system-level query used by
// the cron scheduler. It requires read permission on all automations
// (admin gate) because it fetches triggers across all orgs and owners.
func (q *querier) GetActiveChatAutomationCronTriggers(ctx context.Context) ([]database.GetActiveChatAutomationCronTriggersRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChatAutomation.All()); err != nil {
return nil, err
}
return q.db.GetActiveChatAutomationCronTriggers(ctx)
}
func (q *querier) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTemplate.All()); err != nil {
return nil, err
@@ -2477,6 +2538,64 @@ func (q *querier) GetAuthorizationUserRoles(ctx context.Context, userID uuid.UUI
return q.db.GetAuthorizationUserRoles(ctx, userID)
}
func (q *querier) GetChatAutomationByID(ctx context.Context, id uuid.UUID) (database.ChatAutomation, error) {
return fetch(q.log, q.auth, q.db.GetChatAutomationByID)(ctx, id)
}
func (q *querier) GetChatAutomationEventsByAutomationID(ctx context.Context, arg database.GetChatAutomationEventsByAutomationIDParams) ([]database.ChatAutomationEvent, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return nil, err
}
return q.db.GetChatAutomationEventsByAutomationID(ctx, arg)
}
func (q *querier) GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (database.ChatAutomationTrigger, error) {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, id)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return database.ChatAutomationTrigger{}, err
}
return trigger, nil
}
func (q *querier) GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]database.ChatAutomationTrigger, error) {
automation, err := q.db.GetChatAutomationByID(ctx, automationID)
if err != nil {
return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return nil, err
}
return q.db.GetChatAutomationTriggersByAutomationID(ctx, automationID)
}
func (q *querier) GetChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams) ([]database.ChatAutomation, error) {
// Shortcut if the caller has broad read access (e.g. site admins
// / owners). The SQL filter adds noticeable overhead, so skip
// it when we can.
err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChatAutomation.All())
if err == nil {
return q.db.GetChatAutomations(ctx, arg)
}
// Fall back to SQL-level row filtering for normal users.
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceChatAutomation.Type)
if err != nil {
return nil, xerrors.Errorf("prepare chat automation SQL filter: %w", err)
}
return q.db.GetAuthorizedChatAutomations(ctx, arg, prep)
}
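The two-tier check above (authorize a broad-access shortcut first, then fall back to row-level filtering) can be sketched in isolation. Everything below is an illustrative stand-in, not the real dbauthz/rbac API; the real fallback pushes the filter into SQL via `prepareSQLFilter`.

```go
package main

import "fmt"

// Illustrative stand-ins only; not the real dbauthz/rbac types.
type automationRow struct{ Owner string }

type authorizer struct{ siteWide map[string]bool }

// canReadAll mimics the authorizeContext shortcut against
// ResourceChatAutomation.All(): true only for broad-access actors.
func (a authorizer) canReadAll(actor string) bool { return a.siteWide[actor] }

// listAutomations mirrors GetChatAutomations: broad-access actors skip
// row filtering entirely; everyone else gets per-row (in the real code,
// SQL-level) filtering.
func listAutomations(a authorizer, actor string, rows []automationRow) []automationRow {
	if a.canReadAll(actor) {
		return rows // fast path: no filter needed
	}
	var out []automationRow
	for _, r := range rows {
		if r.Owner == actor {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	auth := authorizer{siteWide: map[string]bool{"admin": true}}
	rows := []automationRow{{Owner: "alice"}, {Owner: "bob"}}
	fmt.Println(len(listAutomations(auth, "admin", rows))) // 2
	fmt.Println(len(listAutomations(auth, "alice", rows))) // 1
}
```

The ordering matters: the cheap broad check runs first so the common admin path never pays for filter preparation.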
func (q *querier) GetChatByID(ctx context.Context, id uuid.UUID) (database.Chat, error) {
return fetch(q.log, q.auth, q.db.GetChatByID)(ctx, id)
}
@@ -2811,7 +2930,15 @@ func (q *querier) GetDERPMeshKey(ctx context.Context) (string, error) {
}
func (q *querier) GetDefaultChatModelConfig(ctx context.Context) (database.ChatModelConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
// Any user who can read chat resources can read the default
// model config, since model resolution is required to create
// a chat. This avoids gating on ResourceDeploymentConfig
// which regular members lack.
act, ok := ActorFromContext(ctx)
if !ok {
return database.ChatModelConfig{}, ErrNoActor
}
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChat.WithOwner(act.ID)); err != nil {
return database.ChatModelConfig{}, err
}
return q.db.GetDefaultChatModelConfig(ctx)
@@ -4764,6 +4891,36 @@ func (q *querier) InsertChat(ctx context.Context, arg database.InsertChatParams)
return insert(q.log, q.auth, rbac.ResourceChat.WithOwner(arg.OwnerID.String()), q.db.InsertChat)(ctx, arg)
}
func (q *querier) InsertChatAutomation(ctx context.Context, arg database.InsertChatAutomationParams) (database.ChatAutomation, error) {
return insert(q.log, q.auth, rbac.ResourceChatAutomation.WithOwner(arg.OwnerID.String()).InOrg(arg.OrganizationID), q.db.InsertChatAutomation)(ctx, arg)
}
// Events are append-only records produced by the system when
// triggers fire. We authorize ActionUpdate on the parent
// automation because inserting an event is a side-effect of
// processing the automation, not an independent create action.
func (q *querier) InsertChatAutomationEvent(ctx context.Context, arg database.InsertChatAutomationEventParams) (database.ChatAutomationEvent, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return database.ChatAutomationEvent{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return database.ChatAutomationEvent{}, err
}
return q.db.InsertChatAutomationEvent(ctx, arg)
}
func (q *querier) InsertChatAutomationTrigger(ctx context.Context, arg database.InsertChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return database.ChatAutomationTrigger{}, err
}
return q.db.InsertChatAutomationTrigger(ctx, arg)
}
func (q *querier) InsertChatFile(ctx context.Context, arg database.InsertChatFileParams) (database.InsertChatFileRow, error) {
// Authorize create on chat resource scoped to the owner and org.
return insert(q.log, q.auth, rbac.ResourceChat.WithOwner(arg.OwnerID.String()).InOrg(arg.OrganizationID), q.db.InsertChatFile)(ctx, arg)
@@ -5561,6 +5718,13 @@ func (q *querier) PopNextQueuedMessage(ctx context.Context, chatID uuid.UUID) (d
return q.db.PopNextQueuedMessage(ctx, chatID)
}
func (q *querier) PurgeOldChatAutomationEvents(ctx context.Context, arg database.PurgeOldChatAutomationEventsParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceChatAutomation.All()); err != nil {
return 0, err
}
return q.db.PurgeOldChatAutomationEvents(ctx, arg)
}
func (q *querier) ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx context.Context, templateID uuid.UUID) error {
template, err := q.db.GetTemplateByID(ctx, templateID)
if err != nil {
@@ -5641,13 +5805,13 @@ func (q *querier) TryAcquireLock(ctx context.Context, id int64) (bool, error) {
return q.db.TryAcquireLock(ctx, id)
}
-func (q *querier) UnarchiveChatByID(ctx context.Context, id uuid.UUID) error {
+func (q *querier) UnarchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, id)
if err != nil {
-return err
+return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
-return err
+return nil, err
}
return q.db.UnarchiveChatByID(ctx, id)
}
@@ -5707,6 +5871,58 @@ func (q *querier) UpdateAPIKeyByID(ctx context.Context, arg database.UpdateAPIKe
return update(q.log, q.auth, fetch, q.db.UpdateAPIKeyByID)(ctx, arg)
}
func (q *querier) UpdateChatAutomation(ctx context.Context, arg database.UpdateChatAutomationParams) (database.ChatAutomation, error) {
fetchFunc := func(ctx context.Context, arg database.UpdateChatAutomationParams) (database.ChatAutomation, error) {
return q.db.GetChatAutomationByID(ctx, arg.ID)
}
return updateWithReturn(q.log, q.auth, fetchFunc, q.db.UpdateChatAutomation)(ctx, arg)
}
func (q *querier) UpdateChatAutomationTrigger(ctx context.Context, arg database.UpdateChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, arg.ID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return database.ChatAutomationTrigger{}, err
}
return q.db.UpdateChatAutomationTrigger(ctx, arg)
}
func (q *querier) UpdateChatAutomationTriggerLastTriggeredAt(ctx context.Context, arg database.UpdateChatAutomationTriggerLastTriggeredAtParams) error {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, arg.ID)
if err != nil {
return err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return err
}
return q.db.UpdateChatAutomationTriggerLastTriggeredAt(ctx, arg)
}
func (q *querier) UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, arg database.UpdateChatAutomationTriggerWebhookSecretParams) (database.ChatAutomationTrigger, error) {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, arg.ID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return database.ChatAutomationTrigger{}, err
}
return q.db.UpdateChatAutomationTriggerWebhookSecret(ctx, arg)
}
func (q *querier) UpdateChatBuildAgentBinding(ctx context.Context, arg database.UpdateChatBuildAgentBindingParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
@@ -5752,6 +5968,17 @@ func (q *querier) UpdateChatLabelsByID(ctx context.Context, arg database.UpdateC
return q.db.UpdateChatLabelsByID(ctx, arg)
}
func (q *querier) UpdateChatLastInjectedContext(ctx context.Context, arg database.UpdateChatLastInjectedContextParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return database.Chat{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return database.Chat{}, err
}
return q.db.UpdateChatLastInjectedContext(ctx, arg)
}
func (q *querier) UpdateChatLastModelConfigByID(ctx context.Context, arg database.UpdateChatLastModelConfigByIDParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
@@ -7333,3 +7560,7 @@ func (q *querier) ListAuthorizedAIBridgeSessionThreads(ctx context.Context, arg
func (q *querier) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, _ rbac.PreparedAuthorized) ([]database.GetChatsRow, error) {
return q.GetChats(ctx, arg)
}
func (q *querier) GetAuthorizedChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams, _ rbac.PreparedAuthorized) ([]database.ChatAutomation, error) {
return q.GetChatAutomations(ctx, arg)
}
@@ -392,14 +392,14 @@ func (s *MethodTestSuite) TestChats() {
s.Run("ArchiveChatByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
-dbm.EXPECT().ArchiveChatByID(gomock.Any(), chat.ID).Return(nil).AnyTimes()
-check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns()
+dbm.EXPECT().ArchiveChatByID(gomock.Any(), chat.ID).Return([]database.Chat{chat}, nil).AnyTimes()
+check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns([]database.Chat{chat})
}))
s.Run("UnarchiveChatByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
-dbm.EXPECT().UnarchiveChatByID(gomock.Any(), chat.ID).Return(nil).AnyTimes()
-check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns()
+dbm.EXPECT().UnarchiveChatByID(gomock.Any(), chat.ID).Return([]database.Chat{chat}, nil).AnyTimes()
+check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns([]database.Chat{chat})
}))
s.Run("PinChatByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
@@ -631,7 +631,7 @@ func (s *MethodTestSuite) TestChats() {
s.Run("GetDefaultChatModelConfig", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
config := testutil.Fake(s.T(), faker, database.ChatModelConfig{})
dbm.EXPECT().GetDefaultChatModelConfig(gomock.Any()).Return(config, nil).AnyTimes()
-check.Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(config)
+check.Asserts(rbac.ResourceChat.WithOwner(testActorID.String()), policy.ActionRead).Returns(config)
}))
s.Run("GetChatModelConfigs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
configA := testutil.Fake(s.T(), faker, database.ChatModelConfig{})
@@ -1121,6 +1121,10 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().CleanupDeletedMCPServerIDsFromChats(gomock.Any()).Return(nil).AnyTimes()
check.Args().Asserts(rbac.ResourceChat, policy.ActionUpdate)
}))
s.Run("CleanupDeletedMCPServerIDsFromChatAutomations", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().CleanupDeletedMCPServerIDsFromChatAutomations(gomock.Any()).Return(nil).AnyTimes()
check.Args().Asserts(rbac.ResourceChatAutomation, policy.ActionUpdate)
}))
s.Run("DeleteMCPServerConfigByID", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
id := uuid.New()
dbm.EXPECT().DeleteMCPServerConfigByID(gomock.Any(), id).Return(nil).AnyTimes()
@@ -1204,6 +1208,19 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UpdateChatMCPServerIDs(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateChatLastInjectedContext", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatLastInjectedContextParams{
ID: chat.ID,
LastInjectedContext: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`[{"type":"text","text":"test"}]`),
Valid: true,
},
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatLastInjectedContext(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateChatLastReadMessageID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatLastReadMessageIDParams{
@@ -1237,6 +1254,226 @@ func (s *MethodTestSuite) TestChats() {
}))
}
func (s *MethodTestSuite) TestChatAutomations() {
s.Run("CountChatAutomationChatCreatesInWindow", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.CountChatAutomationChatCreatesInWindowParams{
AutomationID: automation.ID,
WindowStart: dbtime.Now().Add(-time.Hour),
}
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().CountChatAutomationChatCreatesInWindow(gomock.Any(), arg).Return(int64(3), nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionRead).Returns(int64(3))
}))
s.Run("CountChatAutomationMessagesInWindow", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.CountChatAutomationMessagesInWindowParams{
AutomationID: automation.ID,
WindowStart: dbtime.Now().Add(-time.Hour),
}
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().CountChatAutomationMessagesInWindow(gomock.Any(), arg).Return(int64(5), nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionRead).Returns(int64(5))
}))
s.Run("DeleteChatAutomationByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().DeleteChatAutomationByID(gomock.Any(), automation.ID).Return(nil).AnyTimes()
check.Args(automation.ID).Asserts(automation, policy.ActionDelete).Returns()
}))
s.Run("DeleteChatAutomationTriggerByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
})
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().DeleteChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(nil).AnyTimes()
check.Args(trigger.ID).Asserts(automation, policy.ActionUpdate).Returns()
}))
s.Run("GetActiveChatAutomationCronTriggers", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
rows := []database.GetActiveChatAutomationCronTriggersRow{}
dbm.EXPECT().GetActiveChatAutomationCronTriggers(gomock.Any()).Return(rows, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceChatAutomation.All(), policy.ActionRead).Returns(rows)
}))
s.Run("GetChatAutomationByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
check.Args(automation.ID).Asserts(automation, policy.ActionRead).Returns(automation)
}))
s.Run("GetChatAutomationEventsByAutomationID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.GetChatAutomationEventsByAutomationIDParams{
AutomationID: automation.ID,
}
events := []database.ChatAutomationEvent{}
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationEventsByAutomationID(gomock.Any(), arg).Return(events, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionRead).Returns(events)
}))
s.Run("GetChatAutomationTriggerByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
})
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
check.Args(trigger.ID).Asserts(automation, policy.ActionRead).Returns(trigger)
}))
s.Run("GetChatAutomationTriggersByAutomationID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
triggers := []database.ChatAutomationTrigger{}
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationTriggersByAutomationID(gomock.Any(), automation.ID).Return(triggers, nil).AnyTimes()
check.Args(automation.ID).Asserts(automation, policy.ActionRead).Returns(triggers)
}))
s.Run("GetChatAutomations", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
params := database.GetChatAutomationsParams{}
dbm.EXPECT().GetChatAutomations(gomock.Any(), params).Return([]database.ChatAutomation{}, nil).AnyTimes()
dbm.EXPECT().GetAuthorizedChatAutomations(gomock.Any(), params, gomock.Any()).Return([]database.ChatAutomation{}, nil).AnyTimes()
check.Args(params).Asserts(rbac.ResourceChatAutomation.All(), policy.ActionRead).WithNotAuthorized("nil")
}))
s.Run("GetAuthorizedChatAutomations", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
params := database.GetChatAutomationsParams{}
dbm.EXPECT().GetAuthorizedChatAutomations(gomock.Any(), params, gomock.Any()).Return([]database.ChatAutomation{}, nil).AnyTimes()
dbm.EXPECT().GetChatAutomations(gomock.Any(), params).Return([]database.ChatAutomation{}, nil).AnyTimes()
check.Args(params, emptyPreparedAuthorized{}).Asserts(rbac.ResourceChatAutomation.All(), policy.ActionRead)
}))
s.Run("InsertChatAutomation", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
arg := database.InsertChatAutomationParams{
ID: uuid.New(),
OwnerID: uuid.New(),
OrganizationID: uuid.New(),
Name: "test-automation",
Description: "test description",
Instructions: "test instructions",
Status: database.ChatAutomationStatusActive,
CreatedAt: dbtime.Now(),
UpdatedAt: dbtime.Now(),
}
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{
ID: arg.ID,
OwnerID: arg.OwnerID,
OrganizationID: arg.OrganizationID,
Status: arg.Status,
})
dbm.EXPECT().InsertChatAutomation(gomock.Any(), arg).Return(automation, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChatAutomation.WithOwner(arg.OwnerID.String()).InOrg(arg.OrganizationID), policy.ActionCreate).Returns(automation)
}))
s.Run("InsertChatAutomationEvent", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.InsertChatAutomationEventParams{
ID: uuid.New(),
AutomationID: automation.ID,
ReceivedAt: dbtime.Now(),
Payload: json.RawMessage(`{}`),
Status: database.ChatAutomationEventStatusFiltered,
}
event := testutil.Fake(s.T(), faker, database.ChatAutomationEvent{
ID: arg.ID,
AutomationID: automation.ID,
Status: arg.Status,
})
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().InsertChatAutomationEvent(gomock.Any(), arg).Return(event, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(event)
}))
s.Run("InsertChatAutomationTrigger", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.InsertChatAutomationTriggerParams{
ID: uuid.New(),
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
CreatedAt: dbtime.Now(),
UpdatedAt: dbtime.Now(),
}
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
ID: arg.ID,
AutomationID: automation.ID,
Type: arg.Type,
})
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().InsertChatAutomationTrigger(gomock.Any(), arg).Return(trigger, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(trigger)
}))
s.Run("PurgeOldChatAutomationEvents", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.PurgeOldChatAutomationEventsParams{
Before: dbtime.Now().Add(-7 * 24 * time.Hour),
LimitCount: 1000,
}
dbm.EXPECT().PurgeOldChatAutomationEvents(gomock.Any(), arg).Return(int64(5), nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChatAutomation.All(), policy.ActionDelete).Returns(int64(5))
}))
s.Run("UpdateChatAutomation", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.UpdateChatAutomationParams{
ID: automation.ID,
Name: "updated-name",
Description: "updated description",
Status: database.ChatAutomationStatusActive,
UpdatedAt: dbtime.Now(),
}
updated := automation
updated.Name = arg.Name
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().UpdateChatAutomation(gomock.Any(), arg).Return(updated, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(updated)
}))
s.Run("UpdateChatAutomationTrigger", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeCron,
})
arg := database.UpdateChatAutomationTriggerParams{
ID: trigger.ID,
UpdatedAt: dbtime.Now(),
}
updated := trigger
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().UpdateChatAutomationTrigger(gomock.Any(), arg).Return(updated, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(updated)
}))
s.Run("UpdateChatAutomationTriggerLastTriggeredAt", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeCron,
})
arg := database.UpdateChatAutomationTriggerLastTriggeredAtParams{
ID: trigger.ID,
LastTriggeredAt: dbtime.Now(),
}
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().UpdateChatAutomationTriggerLastTriggeredAt(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns()
}))
s.Run("UpdateChatAutomationTriggerWebhookSecret", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
})
arg := database.UpdateChatAutomationTriggerWebhookSecretParams{
ID: trigger.ID,
UpdatedAt: dbtime.Now(),
WebhookSecret: sql.NullString{
String: "new-secret",
Valid: true,
},
}
updated := trigger
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().UpdateChatAutomationTriggerWebhookSecret(gomock.Any(), arg).Return(updated, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(updated)
}))
}
func (s *MethodTestSuite) TestFile() {
s.Run("GetFileByHashAndCreator", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
f := testutil.Fake(s.T(), faker, database.File{})
@@ -160,12 +160,12 @@ func (m queryMetricsStore) AllUserIDs(ctx context.Context, includeSystem bool) (
return r0, r1
}
-func (m queryMetricsStore) ArchiveChatByID(ctx context.Context, id uuid.UUID) error {
+func (m queryMetricsStore) ArchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
start := time.Now()
-r0 := m.s.ArchiveChatByID(ctx, id)
+r0, r1 := m.s.ArchiveChatByID(ctx, id)
m.queryLatencies.WithLabelValues("ArchiveChatByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ArchiveChatByID").Inc()
-return r0
+return r0, r1
}
func (m queryMetricsStore) ArchiveUnusedTemplateVersions(ctx context.Context, arg database.ArchiveUnusedTemplateVersionsParams) ([]uuid.UUID, error) {
@@ -264,6 +264,14 @@ func (m queryMetricsStore) CleanTailnetTunnels(ctx context.Context) error {
return r0
}
func (m queryMetricsStore) CleanupDeletedMCPServerIDsFromChatAutomations(ctx context.Context) error {
start := time.Now()
r0 := m.s.CleanupDeletedMCPServerIDsFromChatAutomations(ctx)
m.queryLatencies.WithLabelValues("CleanupDeletedMCPServerIDsFromChatAutomations").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CleanupDeletedMCPServerIDsFromChatAutomations").Inc()
return r0
}
func (m queryMetricsStore) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
start := time.Now()
r0 := m.s.CleanupDeletedMCPServerIDsFromChats(ctx)
@@ -296,6 +304,22 @@ func (m queryMetricsStore) CountAuditLogs(ctx context.Context, arg database.Coun
return r0, r1
}
func (m queryMetricsStore) CountChatAutomationChatCreatesInWindow(ctx context.Context, arg database.CountChatAutomationChatCreatesInWindowParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountChatAutomationChatCreatesInWindow(ctx, arg)
m.queryLatencies.WithLabelValues("CountChatAutomationChatCreatesInWindow").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CountChatAutomationChatCreatesInWindow").Inc()
return r0, r1
}
func (m queryMetricsStore) CountChatAutomationMessagesInWindow(ctx context.Context, arg database.CountChatAutomationMessagesInWindowParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountChatAutomationMessagesInWindow(ctx, arg)
m.queryLatencies.WithLabelValues("CountChatAutomationMessagesInWindow").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CountChatAutomationMessagesInWindow").Inc()
return r0, r1
}
func (m queryMetricsStore) CountConnectionLogs(ctx context.Context, arg database.CountConnectionLogsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountConnectionLogs(ctx, arg)
@@ -400,6 +424,22 @@ func (m queryMetricsStore) DeleteApplicationConnectAPIKeysByUserID(ctx context.C
return r0
}
func (m queryMetricsStore) DeleteChatAutomationByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatAutomationByID(ctx, id)
m.queryLatencies.WithLabelValues("DeleteChatAutomationByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteChatAutomationByID").Inc()
return r0
}
func (m queryMetricsStore) DeleteChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatAutomationTriggerByID(ctx, id)
m.queryLatencies.WithLabelValues("DeleteChatAutomationTriggerByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteChatAutomationTriggerByID").Inc()
return r0
}
func (m queryMetricsStore) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatModelConfigByID(ctx, id)
@@ -936,6 +976,14 @@ func (m queryMetricsStore) GetActiveAISeatCount(ctx context.Context) (int64, err
return r0, r1
}
func (m queryMetricsStore) GetActiveChatAutomationCronTriggers(ctx context.Context) ([]database.GetActiveChatAutomationCronTriggersRow, error) {
start := time.Now()
r0, r1 := m.s.GetActiveChatAutomationCronTriggers(ctx)
m.queryLatencies.WithLabelValues("GetActiveChatAutomationCronTriggers").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetActiveChatAutomationCronTriggers").Inc()
return r0, r1
}
func (m queryMetricsStore) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
start := time.Now()
r0, r1 := m.s.GetActivePresetPrebuildSchedules(ctx)
@@ -1032,6 +1080,46 @@ func (m queryMetricsStore) GetAuthorizationUserRoles(ctx context.Context, userID
return r0, r1
}
func (m queryMetricsStore) GetChatAutomationByID(ctx context.Context, id uuid.UUID) (database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomationByID(ctx, id)
m.queryLatencies.WithLabelValues("GetChatAutomationByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomationByID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatAutomationEventsByAutomationID(ctx context.Context, arg database.GetChatAutomationEventsByAutomationIDParams) ([]database.ChatAutomationEvent, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomationEventsByAutomationID(ctx, arg)
m.queryLatencies.WithLabelValues("GetChatAutomationEventsByAutomationID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomationEventsByAutomationID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomationTriggerByID(ctx, id)
m.queryLatencies.WithLabelValues("GetChatAutomationTriggerByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomationTriggerByID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomationTriggersByAutomationID(ctx, automationID)
m.queryLatencies.WithLabelValues("GetChatAutomationTriggersByAutomationID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomationTriggersByAutomationID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams) ([]database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomations(ctx, arg)
m.queryLatencies.WithLabelValues("GetChatAutomations").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomations").Inc()
return r0, r1
}
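Every queryMetricsStore method added here repeats the same shape: record a start time, delegate, observe elapsed seconds under the query name, bump the count, return. That boilerplate can be factored into a generic helper; the sketch below uses a toy recorder in place of the real Prometheus vectors, and the `timed` helper is hypothetical, not part of the codebase.

```go
package main

import (
	"fmt"
	"time"
)

// Toy stand-in for the queryLatencies histogram; the real code records
// into Prometheus HistogramVec / CounterVec instances.
type latencyRecorder struct{ observations map[string][]float64 }

func (l *latencyRecorder) observe(query string, seconds float64) {
	l.observations[query] = append(l.observations[query], seconds)
}

// timed mirrors the wrapper pattern repeated for every store method:
// record the start time, delegate, then observe elapsed seconds under
// the query's name before returning the delegate's results.
func timed[T any](l *latencyRecorder, query string, fn func() (T, error)) (T, error) {
	start := time.Now()
	r0, r1 := fn()
	l.observe(query, time.Since(start).Seconds())
	return r0, r1
}

func main() {
	rec := &latencyRecorder{observations: map[string][]float64{}}
	n, err := timed(rec, "GetChatAutomations", func() (int, error) { return 3, nil })
	fmt.Println(n, err, len(rec.observations["GetChatAutomations"])) // 3 <nil> 1
}
```

The generated wrappers stay as explicit per-method code in the actual store, which keeps the metric labels greppable; the helper above is only to show why each wrapper looks identical.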
func (m queryMetricsStore) GetChatByID(ctx context.Context, id uuid.UUID) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.GetChatByID(ctx, id)
@@ -3224,6 +3312,30 @@ func (m queryMetricsStore) InsertChat(ctx context.Context, arg database.InsertCh
return r0, r1
}
func (m queryMetricsStore) InsertChatAutomation(ctx context.Context, arg database.InsertChatAutomationParams) (database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.InsertChatAutomation(ctx, arg)
m.queryLatencies.WithLabelValues("InsertChatAutomation").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertChatAutomation").Inc()
return r0, r1
}
func (m queryMetricsStore) InsertChatAutomationEvent(ctx context.Context, arg database.InsertChatAutomationEventParams) (database.ChatAutomationEvent, error) {
start := time.Now()
r0, r1 := m.s.InsertChatAutomationEvent(ctx, arg)
m.queryLatencies.WithLabelValues("InsertChatAutomationEvent").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertChatAutomationEvent").Inc()
return r0, r1
}
func (m queryMetricsStore) InsertChatAutomationTrigger(ctx context.Context, arg database.InsertChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.InsertChatAutomationTrigger(ctx, arg)
m.queryLatencies.WithLabelValues("InsertChatAutomationTrigger").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertChatAutomationTrigger").Inc()
return r0, r1
}
func (m queryMetricsStore) InsertChatFile(ctx context.Context, arg database.InsertChatFileParams) (database.InsertChatFileRow, error) {
start := time.Now()
r0, r1 := m.s.InsertChatFile(ctx, arg)
@@ -3952,6 +4064,14 @@ func (m queryMetricsStore) PopNextQueuedMessage(ctx context.Context, chatID uuid
return r0, r1
}
func (m queryMetricsStore) PurgeOldChatAutomationEvents(ctx context.Context, arg database.PurgeOldChatAutomationEventsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.PurgeOldChatAutomationEvents(ctx, arg)
m.queryLatencies.WithLabelValues("PurgeOldChatAutomationEvents").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "PurgeOldChatAutomationEvents").Inc()
return r0, r1
}
func (m queryMetricsStore) ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx context.Context, templateID uuid.UUID) error {
start := time.Now()
r0 := m.s.ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx, templateID)
@@ -4024,12 +4144,12 @@ func (m queryMetricsStore) TryAcquireLock(ctx context.Context, pgTryAdvisoryXact
return r0, r1
}
-func (m queryMetricsStore) UnarchiveChatByID(ctx context.Context, id uuid.UUID) error {
+func (m queryMetricsStore) UnarchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
start := time.Now()
-r0 := m.s.UnarchiveChatByID(ctx, id)
+r0, r1 := m.s.UnarchiveChatByID(ctx, id)
m.queryLatencies.WithLabelValues("UnarchiveChatByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UnarchiveChatByID").Inc()
-return r0
+return r0, r1
}
func (m queryMetricsStore) UnarchiveTemplateVersion(ctx context.Context, arg database.UnarchiveTemplateVersionParams) error {
@@ -4080,6 +4200,38 @@ func (m queryMetricsStore) UpdateAPIKeyByID(ctx context.Context, arg database.Up
return r0
}
func (m queryMetricsStore) UpdateChatAutomation(ctx context.Context, arg database.UpdateChatAutomationParams) (database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatAutomation(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatAutomation").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatAutomation").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatAutomationTrigger(ctx context.Context, arg database.UpdateChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatAutomationTrigger(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatAutomationTrigger").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatAutomationTrigger").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatAutomationTriggerLastTriggeredAt(ctx context.Context, arg database.UpdateChatAutomationTriggerLastTriggeredAtParams) error {
start := time.Now()
r0 := m.s.UpdateChatAutomationTriggerLastTriggeredAt(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatAutomationTriggerLastTriggeredAt").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatAutomationTriggerLastTriggeredAt").Inc()
return r0
}
func (m queryMetricsStore) UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, arg database.UpdateChatAutomationTriggerWebhookSecretParams) (database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatAutomationTriggerWebhookSecret(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatAutomationTriggerWebhookSecret").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatAutomationTriggerWebhookSecret").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatBuildAgentBinding(ctx context.Context, arg database.UpdateChatBuildAgentBindingParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatBuildAgentBinding(ctx, arg)
@@ -4112,6 +4264,14 @@ func (m queryMetricsStore) UpdateChatLabelsByID(ctx context.Context, arg databas
return r0, r1
}
func (m queryMetricsStore) UpdateChatLastInjectedContext(ctx context.Context, arg database.UpdateChatLastInjectedContextParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatLastInjectedContext(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatLastInjectedContext").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatLastInjectedContext").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatLastModelConfigByID(ctx context.Context, arg database.UpdateChatLastModelConfigByIDParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatLastModelConfigByID(ctx, arg)
@@ -5343,3 +5503,11 @@ func (m queryMetricsStore) GetAuthorizedChats(ctx context.Context, arg database.
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetAuthorizedChats").Inc()
return r0, r1
}
func (m queryMetricsStore) GetAuthorizedChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams, prepared rbac.PreparedAuthorized) ([]database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.GetAuthorizedChatAutomations(ctx, arg, prepared)
m.queryLatencies.WithLabelValues("GetAuthorizedChatAutomations").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetAuthorizedChatAutomations").Inc()
return r0, r1
}
@@ -148,11 +148,12 @@ func (mr *MockStoreMockRecorder) AllUserIDs(ctx, includeSystem any) *gomock.Call
}
// ArchiveChatByID mocks base method.
-func (m *MockStore) ArchiveChatByID(ctx context.Context, id uuid.UUID) error {
+func (m *MockStore) ArchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ArchiveChatByID", ctx, id)
-ret0, _ := ret[0].(error)
-return ret0
+ret0, _ := ret[0].([]database.Chat)
+ret1, _ := ret[1].(error)
+return ret0, ret1
}
// ArchiveChatByID indicates an expected call of ArchiveChatByID.
@@ -334,6 +335,20 @@ func (mr *MockStoreMockRecorder) CleanTailnetTunnels(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CleanTailnetTunnels", reflect.TypeOf((*MockStore)(nil).CleanTailnetTunnels), ctx)
}
// CleanupDeletedMCPServerIDsFromChatAutomations mocks base method.
func (m *MockStore) CleanupDeletedMCPServerIDsFromChatAutomations(ctx context.Context) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CleanupDeletedMCPServerIDsFromChatAutomations", ctx)
ret0, _ := ret[0].(error)
return ret0
}
// CleanupDeletedMCPServerIDsFromChatAutomations indicates an expected call of CleanupDeletedMCPServerIDsFromChatAutomations.
func (mr *MockStoreMockRecorder) CleanupDeletedMCPServerIDsFromChatAutomations(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CleanupDeletedMCPServerIDsFromChatAutomations", reflect.TypeOf((*MockStore)(nil).CleanupDeletedMCPServerIDsFromChatAutomations), ctx)
}
// CleanupDeletedMCPServerIDsFromChats mocks base method.
func (m *MockStore) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
m.ctrl.T.Helper()
@@ -453,6 +468,36 @@ func (mr *MockStoreMockRecorder) CountAuthorizedConnectionLogs(ctx, arg, prepare
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountAuthorizedConnectionLogs", reflect.TypeOf((*MockStore)(nil).CountAuthorizedConnectionLogs), ctx, arg, prepared)
}
// CountChatAutomationChatCreatesInWindow mocks base method.
func (m *MockStore) CountChatAutomationChatCreatesInWindow(ctx context.Context, arg database.CountChatAutomationChatCreatesInWindowParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CountChatAutomationChatCreatesInWindow", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CountChatAutomationChatCreatesInWindow indicates an expected call of CountChatAutomationChatCreatesInWindow.
func (mr *MockStoreMockRecorder) CountChatAutomationChatCreatesInWindow(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountChatAutomationChatCreatesInWindow", reflect.TypeOf((*MockStore)(nil).CountChatAutomationChatCreatesInWindow), ctx, arg)
}
// CountChatAutomationMessagesInWindow mocks base method.
func (m *MockStore) CountChatAutomationMessagesInWindow(ctx context.Context, arg database.CountChatAutomationMessagesInWindowParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CountChatAutomationMessagesInWindow", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CountChatAutomationMessagesInWindow indicates an expected call of CountChatAutomationMessagesInWindow.
func (mr *MockStoreMockRecorder) CountChatAutomationMessagesInWindow(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountChatAutomationMessagesInWindow", reflect.TypeOf((*MockStore)(nil).CountChatAutomationMessagesInWindow), ctx, arg)
}
// CountConnectionLogs mocks base method.
func (m *MockStore) CountConnectionLogs(ctx context.Context, arg database.CountConnectionLogsParams) (int64, error) {
m.ctrl.T.Helper()
@@ -642,6 +687,34 @@ func (mr *MockStoreMockRecorder) DeleteApplicationConnectAPIKeysByUserID(ctx, us
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteApplicationConnectAPIKeysByUserID", reflect.TypeOf((*MockStore)(nil).DeleteApplicationConnectAPIKeysByUserID), ctx, userID)
}
// DeleteChatAutomationByID mocks base method.
func (m *MockStore) DeleteChatAutomationByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteChatAutomationByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteChatAutomationByID indicates an expected call of DeleteChatAutomationByID.
func (mr *MockStoreMockRecorder) DeleteChatAutomationByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteChatAutomationByID", reflect.TypeOf((*MockStore)(nil).DeleteChatAutomationByID), ctx, id)
}
// DeleteChatAutomationTriggerByID mocks base method.
func (m *MockStore) DeleteChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteChatAutomationTriggerByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteChatAutomationTriggerByID indicates an expected call of DeleteChatAutomationTriggerByID.
func (mr *MockStoreMockRecorder) DeleteChatAutomationTriggerByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteChatAutomationTriggerByID", reflect.TypeOf((*MockStore)(nil).DeleteChatAutomationTriggerByID), ctx, id)
}
// DeleteChatModelConfigByID mocks base method.
func (m *MockStore) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
@@ -1608,6 +1681,21 @@ func (mr *MockStoreMockRecorder) GetActiveAISeatCount(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetActiveAISeatCount", reflect.TypeOf((*MockStore)(nil).GetActiveAISeatCount), ctx)
}
// GetActiveChatAutomationCronTriggers mocks base method.
func (m *MockStore) GetActiveChatAutomationCronTriggers(ctx context.Context) ([]database.GetActiveChatAutomationCronTriggersRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetActiveChatAutomationCronTriggers", ctx)
ret0, _ := ret[0].([]database.GetActiveChatAutomationCronTriggersRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetActiveChatAutomationCronTriggers indicates an expected call of GetActiveChatAutomationCronTriggers.
func (mr *MockStoreMockRecorder) GetActiveChatAutomationCronTriggers(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetActiveChatAutomationCronTriggers", reflect.TypeOf((*MockStore)(nil).GetActiveChatAutomationCronTriggers), ctx)
}
// GetActivePresetPrebuildSchedules mocks base method.
func (m *MockStore) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
m.ctrl.T.Helper()
@@ -1803,6 +1891,21 @@ func (mr *MockStoreMockRecorder) GetAuthorizedAuditLogsOffset(ctx, arg, prepared
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedAuditLogsOffset", reflect.TypeOf((*MockStore)(nil).GetAuthorizedAuditLogsOffset), ctx, arg, prepared)
}
// GetAuthorizedChatAutomations mocks base method.
func (m *MockStore) GetAuthorizedChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams, prepared rbac.PreparedAuthorized) ([]database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetAuthorizedChatAutomations", ctx, arg, prepared)
ret0, _ := ret[0].([]database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetAuthorizedChatAutomations indicates an expected call of GetAuthorizedChatAutomations.
func (mr *MockStoreMockRecorder) GetAuthorizedChatAutomations(ctx, arg, prepared any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedChatAutomations", reflect.TypeOf((*MockStore)(nil).GetAuthorizedChatAutomations), ctx, arg, prepared)
}
// GetAuthorizedChats mocks base method.
func (m *MockStore) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, prepared rbac.PreparedAuthorized) ([]database.GetChatsRow, error) {
m.ctrl.T.Helper()
@@ -1893,6 +1996,81 @@ func (mr *MockStoreMockRecorder) GetAuthorizedWorkspacesAndAgentsByOwnerID(ctx,
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedWorkspacesAndAgentsByOwnerID", reflect.TypeOf((*MockStore)(nil).GetAuthorizedWorkspacesAndAgentsByOwnerID), ctx, ownerID, prepared)
}
// GetChatAutomationByID mocks base method.
func (m *MockStore) GetChatAutomationByID(ctx context.Context, id uuid.UUID) (database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomationByID", ctx, id)
ret0, _ := ret[0].(database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomationByID indicates an expected call of GetChatAutomationByID.
func (mr *MockStoreMockRecorder) GetChatAutomationByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomationByID", reflect.TypeOf((*MockStore)(nil).GetChatAutomationByID), ctx, id)
}
// GetChatAutomationEventsByAutomationID mocks base method.
func (m *MockStore) GetChatAutomationEventsByAutomationID(ctx context.Context, arg database.GetChatAutomationEventsByAutomationIDParams) ([]database.ChatAutomationEvent, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomationEventsByAutomationID", ctx, arg)
ret0, _ := ret[0].([]database.ChatAutomationEvent)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomationEventsByAutomationID indicates an expected call of GetChatAutomationEventsByAutomationID.
func (mr *MockStoreMockRecorder) GetChatAutomationEventsByAutomationID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomationEventsByAutomationID", reflect.TypeOf((*MockStore)(nil).GetChatAutomationEventsByAutomationID), ctx, arg)
}
// GetChatAutomationTriggerByID mocks base method.
func (m *MockStore) GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomationTriggerByID", ctx, id)
ret0, _ := ret[0].(database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomationTriggerByID indicates an expected call of GetChatAutomationTriggerByID.
func (mr *MockStoreMockRecorder) GetChatAutomationTriggerByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomationTriggerByID", reflect.TypeOf((*MockStore)(nil).GetChatAutomationTriggerByID), ctx, id)
}
// GetChatAutomationTriggersByAutomationID mocks base method.
func (m *MockStore) GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomationTriggersByAutomationID", ctx, automationID)
ret0, _ := ret[0].([]database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomationTriggersByAutomationID indicates an expected call of GetChatAutomationTriggersByAutomationID.
func (mr *MockStoreMockRecorder) GetChatAutomationTriggersByAutomationID(ctx, automationID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomationTriggersByAutomationID", reflect.TypeOf((*MockStore)(nil).GetChatAutomationTriggersByAutomationID), ctx, automationID)
}
// GetChatAutomations mocks base method.
func (m *MockStore) GetChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams) ([]database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomations", ctx, arg)
ret0, _ := ret[0].([]database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomations indicates an expected call of GetChatAutomations.
func (mr *MockStoreMockRecorder) GetChatAutomations(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomations", reflect.TypeOf((*MockStore)(nil).GetChatAutomations), ctx, arg)
}
// GetChatByID mocks base method.
func (m *MockStore) GetChatByID(ctx context.Context, id uuid.UUID) (database.Chat, error) {
m.ctrl.T.Helper()
@@ -6047,6 +6225,51 @@ func (mr *MockStoreMockRecorder) InsertChat(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChat", reflect.TypeOf((*MockStore)(nil).InsertChat), ctx, arg)
}
// InsertChatAutomation mocks base method.
func (m *MockStore) InsertChatAutomation(ctx context.Context, arg database.InsertChatAutomationParams) (database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertChatAutomation", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// InsertChatAutomation indicates an expected call of InsertChatAutomation.
func (mr *MockStoreMockRecorder) InsertChatAutomation(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatAutomation", reflect.TypeOf((*MockStore)(nil).InsertChatAutomation), ctx, arg)
}
// InsertChatAutomationEvent mocks base method.
func (m *MockStore) InsertChatAutomationEvent(ctx context.Context, arg database.InsertChatAutomationEventParams) (database.ChatAutomationEvent, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertChatAutomationEvent", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomationEvent)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// InsertChatAutomationEvent indicates an expected call of InsertChatAutomationEvent.
func (mr *MockStoreMockRecorder) InsertChatAutomationEvent(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatAutomationEvent", reflect.TypeOf((*MockStore)(nil).InsertChatAutomationEvent), ctx, arg)
}
// InsertChatAutomationTrigger mocks base method.
func (m *MockStore) InsertChatAutomationTrigger(ctx context.Context, arg database.InsertChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertChatAutomationTrigger", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// InsertChatAutomationTrigger indicates an expected call of InsertChatAutomationTrigger.
func (mr *MockStoreMockRecorder) InsertChatAutomationTrigger(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatAutomationTrigger", reflect.TypeOf((*MockStore)(nil).InsertChatAutomationTrigger), ctx, arg)
}
// InsertChatFile mocks base method.
func (m *MockStore) InsertChatFile(ctx context.Context, arg database.InsertChatFileParams) (database.InsertChatFileRow, error) {
m.ctrl.T.Helper()
@@ -7500,6 +7723,21 @@ func (mr *MockStoreMockRecorder) PopNextQueuedMessage(ctx, chatID any) *gomock.C
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PopNextQueuedMessage", reflect.TypeOf((*MockStore)(nil).PopNextQueuedMessage), ctx, chatID)
}
// PurgeOldChatAutomationEvents mocks base method.
func (m *MockStore) PurgeOldChatAutomationEvents(ctx context.Context, arg database.PurgeOldChatAutomationEventsParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "PurgeOldChatAutomationEvents", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// PurgeOldChatAutomationEvents indicates an expected call of PurgeOldChatAutomationEvents.
func (mr *MockStoreMockRecorder) PurgeOldChatAutomationEvents(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PurgeOldChatAutomationEvents", reflect.TypeOf((*MockStore)(nil).PurgeOldChatAutomationEvents), ctx, arg)
}
// ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate mocks base method.
func (m *MockStore) ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx context.Context, templateID uuid.UUID) error {
m.ctrl.T.Helper()
@@ -7632,11 +7870,12 @@ func (mr *MockStoreMockRecorder) TryAcquireLock(ctx, pgTryAdvisoryXactLock any)
}
// UnarchiveChatByID mocks base method.
-func (m *MockStore) UnarchiveChatByID(ctx context.Context, id uuid.UUID) error {
+func (m *MockStore) UnarchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UnarchiveChatByID", ctx, id)
-ret0, _ := ret[0].(error)
-return ret0
+ret0, _ := ret[0].([]database.Chat)
+ret1, _ := ret[1].(error)
+return ret0, ret1
}
// UnarchiveChatByID indicates an expected call of UnarchiveChatByID.
@@ -7730,6 +7969,65 @@ func (mr *MockStoreMockRecorder) UpdateAPIKeyByID(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateAPIKeyByID", reflect.TypeOf((*MockStore)(nil).UpdateAPIKeyByID), ctx, arg)
}
// UpdateChatAutomation mocks base method.
func (m *MockStore) UpdateChatAutomation(ctx context.Context, arg database.UpdateChatAutomationParams) (database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatAutomation", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatAutomation indicates an expected call of UpdateChatAutomation.
func (mr *MockStoreMockRecorder) UpdateChatAutomation(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatAutomation", reflect.TypeOf((*MockStore)(nil).UpdateChatAutomation), ctx, arg)
}
// UpdateChatAutomationTrigger mocks base method.
func (m *MockStore) UpdateChatAutomationTrigger(ctx context.Context, arg database.UpdateChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatAutomationTrigger", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatAutomationTrigger indicates an expected call of UpdateChatAutomationTrigger.
func (mr *MockStoreMockRecorder) UpdateChatAutomationTrigger(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatAutomationTrigger", reflect.TypeOf((*MockStore)(nil).UpdateChatAutomationTrigger), ctx, arg)
}
// UpdateChatAutomationTriggerLastTriggeredAt mocks base method.
func (m *MockStore) UpdateChatAutomationTriggerLastTriggeredAt(ctx context.Context, arg database.UpdateChatAutomationTriggerLastTriggeredAtParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatAutomationTriggerLastTriggeredAt", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// UpdateChatAutomationTriggerLastTriggeredAt indicates an expected call of UpdateChatAutomationTriggerLastTriggeredAt.
func (mr *MockStoreMockRecorder) UpdateChatAutomationTriggerLastTriggeredAt(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatAutomationTriggerLastTriggeredAt", reflect.TypeOf((*MockStore)(nil).UpdateChatAutomationTriggerLastTriggeredAt), ctx, arg)
}
// UpdateChatAutomationTriggerWebhookSecret mocks base method.
func (m *MockStore) UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, arg database.UpdateChatAutomationTriggerWebhookSecretParams) (database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatAutomationTriggerWebhookSecret", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatAutomationTriggerWebhookSecret indicates an expected call of UpdateChatAutomationTriggerWebhookSecret.
func (mr *MockStoreMockRecorder) UpdateChatAutomationTriggerWebhookSecret(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatAutomationTriggerWebhookSecret", reflect.TypeOf((*MockStore)(nil).UpdateChatAutomationTriggerWebhookSecret), ctx, arg)
}
// UpdateChatBuildAgentBinding mocks base method.
func (m *MockStore) UpdateChatBuildAgentBinding(ctx context.Context, arg database.UpdateChatBuildAgentBindingParams) (database.Chat, error) {
m.ctrl.T.Helper()
@@ -7790,6 +8088,21 @@ func (mr *MockStoreMockRecorder) UpdateChatLabelsByID(ctx, arg any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatLabelsByID", reflect.TypeOf((*MockStore)(nil).UpdateChatLabelsByID), ctx, arg)
}
// UpdateChatLastInjectedContext mocks base method.
func (m *MockStore) UpdateChatLastInjectedContext(ctx context.Context, arg database.UpdateChatLastInjectedContextParams) (database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatLastInjectedContext", ctx, arg)
ret0, _ := ret[0].(database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatLastInjectedContext indicates an expected call of UpdateChatLastInjectedContext.
func (mr *MockStoreMockRecorder) UpdateChatLastInjectedContext(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatLastInjectedContext", reflect.TypeOf((*MockStore)(nil).UpdateChatLastInjectedContext), ctx, arg)
}
// UpdateChatLastModelConfigByID mocks base method.
func (m *MockStore) UpdateChatLastModelConfigByID(ctx context.Context, arg database.UpdateChatLastModelConfigByIDParams) (database.Chat, error) {
m.ctrl.T.Helper()
@@ -220,7 +220,12 @@ CREATE TYPE api_key_scope AS ENUM (
'chat:read',
'chat:update',
'chat:delete',
-'chat:*'
+'chat:*',
+'chat_automation:create',
+'chat_automation:read',
+'chat_automation:update',
+'chat_automation:delete',
+'chat_automation:*'
);
CREATE TYPE app_sharing_level AS ENUM (
@@ -270,6 +275,32 @@ CREATE TYPE build_reason AS ENUM (
'task_resume'
);
CREATE TYPE chat_automation_event_status AS ENUM (
'filtered',
'preview',
'created',
'continued',
'rate_limited',
'error'
);
COMMENT ON TYPE chat_automation_event_status IS 'Outcome of a chat automation event: filtered, preview, created, continued, rate_limited, or error.';
CREATE TYPE chat_automation_status AS ENUM (
'disabled',
'preview',
'active'
);
COMMENT ON TYPE chat_automation_status IS 'Lifecycle state of a chat automation: disabled, preview, or active.';
CREATE TYPE chat_automation_trigger_type AS ENUM (
'webhook',
'cron'
);
COMMENT ON TYPE chat_automation_trigger_type IS 'Discriminator for chat automation triggers: webhook or cron.';
CREATE TYPE chat_message_role AS ENUM (
'system',
'user',
@@ -1238,6 +1269,104 @@ COMMENT ON COLUMN boundary_usage_stats.window_start IS 'Start of the time window
COMMENT ON COLUMN boundary_usage_stats.updated_at IS 'Timestamp of the last update to this row.';
CREATE TABLE chat_automation_events (
id uuid NOT NULL,
automation_id uuid NOT NULL,
trigger_id uuid,
received_at timestamp with time zone NOT NULL,
payload jsonb NOT NULL,
filter_matched boolean NOT NULL,
resolved_labels jsonb,
matched_chat_id uuid,
created_chat_id uuid,
status chat_automation_event_status NOT NULL,
error text,
CONSTRAINT chat_automation_events_chat_exclusivity CHECK (((matched_chat_id IS NULL) OR (created_chat_id IS NULL)))
);
COMMENT ON TABLE chat_automation_events IS 'Every trigger invocation produces an event row regardless of outcome. This table is the audit trail and the data source for rate-limit window counts. Rows are append-only and expected to be purged by a background job after a retention period.';
COMMENT ON COLUMN chat_automation_events.payload IS 'The raw payload that was evaluated. For webhooks this is the HTTP body; for cron triggers it is a synthetic JSON envelope with schedule metadata.';
COMMENT ON COLUMN chat_automation_events.filter_matched IS 'Whether the trigger filter conditions matched. False means the event was dropped before any chat interaction.';
COMMENT ON COLUMN chat_automation_events.resolved_labels IS 'Labels resolved from the payload via label_paths. Stored so the event log shows exactly which labels were computed.';
COMMENT ON COLUMN chat_automation_events.matched_chat_id IS 'ID of an existing chat that was found via label matching and continued with a new message.';
COMMENT ON COLUMN chat_automation_events.created_chat_id IS 'ID of a newly created chat (mutually exclusive with matched_chat_id in practice).';
COMMENT ON COLUMN chat_automation_events.status IS 'Outcome of the event: filtered — filter did not match; preview — automation is in preview mode; created — new chat was created; continued — existing chat was continued; rate_limited — rate limit prevented chat action; error — something went wrong.';
CREATE TABLE chat_automation_triggers (
id uuid NOT NULL,
automation_id uuid NOT NULL,
type chat_automation_trigger_type NOT NULL,
webhook_secret text,
webhook_secret_key_id text,
cron_schedule text,
last_triggered_at timestamp with time zone,
filter jsonb,
label_paths jsonb,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
CONSTRAINT chat_automation_triggers_cron_fields CHECK (((type <> 'cron'::chat_automation_trigger_type) OR ((cron_schedule IS NOT NULL) AND (webhook_secret IS NULL) AND (webhook_secret_key_id IS NULL)))),
CONSTRAINT chat_automation_triggers_webhook_fields CHECK (((type <> 'webhook'::chat_automation_trigger_type) OR ((webhook_secret IS NOT NULL) AND (cron_schedule IS NULL) AND (last_triggered_at IS NULL))))
);
COMMENT ON TABLE chat_automation_triggers IS 'Triggers define how an automation is invoked. Each automation can have multiple triggers (e.g. one webhook + one cron schedule). Webhook and cron triggers share the same row shape with type-specific nullable columns to keep the schema simple.';
COMMENT ON COLUMN chat_automation_triggers.type IS 'Discriminator: webhook or cron. Determines which nullable columns are meaningful.';
COMMENT ON COLUMN chat_automation_triggers.webhook_secret IS 'HMAC-SHA256 shared secret for webhook signature verification (X-Hub-Signature-256 header). NULL for cron triggers.';
COMMENT ON COLUMN chat_automation_triggers.cron_schedule IS 'Standard 5-field cron expression (minute hour dom month dow), with optional CRON_TZ= prefix. NULL for webhook triggers.';
COMMENT ON COLUMN chat_automation_triggers.last_triggered_at IS 'Timestamp of the last successful cron fire. The scheduler computes next = cron.Next(last_triggered_at) and fires when next <= now. NULL means the trigger has never fired. Not used for webhook triggers.';
COMMENT ON COLUMN chat_automation_triggers.filter IS 'gjson path-to-value filter conditions evaluated against the incoming webhook payload. All conditions must match for the trigger to fire. NULL or empty means match everything.';
COMMENT ON COLUMN chat_automation_triggers.label_paths IS 'Maps chat label keys to gjson paths. When a trigger fires, labels are resolved from the payload and used to find an existing chat to continue (by label match) or set on a newly created chat.';
CREATE TABLE chat_automations (
id uuid NOT NULL,
owner_id uuid NOT NULL,
organization_id uuid NOT NULL,
name text NOT NULL,
description text DEFAULT ''::text NOT NULL,
instructions text DEFAULT ''::text NOT NULL,
model_config_id uuid,
mcp_server_ids uuid[] DEFAULT '{}'::uuid[] NOT NULL,
allowed_tools text[] DEFAULT '{}'::text[] NOT NULL,
status chat_automation_status DEFAULT 'disabled'::chat_automation_status NOT NULL,
max_chat_creates_per_hour integer DEFAULT 10 NOT NULL,
max_messages_per_hour integer DEFAULT 60 NOT NULL,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
CONSTRAINT chat_automations_max_chat_creates_per_hour_check CHECK ((max_chat_creates_per_hour > 0)),
CONSTRAINT chat_automations_max_messages_per_hour_check CHECK ((max_messages_per_hour > 0))
);
COMMENT ON TABLE chat_automations IS 'Chat automations bridge external events (webhooks, cron schedules) to Coder chats. A chat automation defines what to say, which model and tools to use, and how fast it is allowed to create or continue chats.';
COMMENT ON COLUMN chat_automations.owner_id IS 'The user on whose behalf chats are created. All RBAC checks and chat ownership are scoped to this user.';
COMMENT ON COLUMN chat_automations.organization_id IS 'Organization scope for RBAC. Combined with owner_id and name to form a unique constraint so automations are namespaced per user per org.';
COMMENT ON COLUMN chat_automations.instructions IS 'The user-role message injected into every chat this automation creates. This is the core prompt that tells the LLM what to do.';
COMMENT ON COLUMN chat_automations.model_config_id IS 'Optional model configuration override. When NULL the deployment default is used. SET NULL on delete so automations survive config changes gracefully.';
COMMENT ON COLUMN chat_automations.mcp_server_ids IS 'MCP servers to attach to chats created by this automation. Stored as an array of UUIDs rather than a join table because the set is small and always read/written atomically.';
COMMENT ON COLUMN chat_automations.allowed_tools IS 'Tool allowlist. Empty means all tools available to the model config are permitted.';
COMMENT ON COLUMN chat_automations.status IS 'Lifecycle state: disabled — trigger events are silently dropped; preview — events are logged but no chat is created (dry-run); active — events create or continue chats.';
COMMENT ON COLUMN chat_automations.max_chat_creates_per_hour IS 'Maximum number of new chats this automation may create in a rolling one-hour window. Prevents runaway webhook storms from flooding the system.';
COMMENT ON COLUMN chat_automations.max_messages_per_hour IS 'Maximum total messages (creates + continues) this automation may send in a rolling one-hour window. A second, broader throttle that catches high-frequency continuation patterns.';
CREATE TABLE chat_diff_statuses (
chat_id uuid NOT NULL,
url text,
@@ -1403,7 +1532,9 @@ CREATE TABLE chats (
build_id uuid,
agent_id uuid,
pin_order integer DEFAULT 0 NOT NULL,
last_read_message_id bigint,
last_injected_context jsonb,
automation_id uuid
);
CREATE TABLE connection_logs (
@@ -3319,6 +3450,15 @@ ALTER TABLE ONLY audit_logs
ALTER TABLE ONLY boundary_usage_stats
ADD CONSTRAINT boundary_usage_stats_pkey PRIMARY KEY (replica_id);
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_pkey PRIMARY KEY (id);
ALTER TABLE ONLY chat_automation_triggers
ADD CONSTRAINT chat_automation_triggers_pkey PRIMARY KEY (id);
ALTER TABLE ONLY chat_automations
ADD CONSTRAINT chat_automations_pkey PRIMARY KEY (id);
ALTER TABLE ONLY chat_diff_statuses
ADD CONSTRAINT chat_diff_statuses_pkey PRIMARY KEY (chat_id);
@@ -3704,6 +3844,20 @@ CREATE INDEX idx_audit_log_user_id ON audit_logs USING btree (user_id);
CREATE INDEX idx_audit_logs_time_desc ON audit_logs USING btree ("time" DESC);
CREATE INDEX idx_chat_automation_events_automation_id_received_at ON chat_automation_events USING btree (automation_id, received_at DESC);
CREATE INDEX idx_chat_automation_events_rate_limit ON chat_automation_events USING btree (automation_id, received_at) WHERE (status = ANY (ARRAY['created'::chat_automation_event_status, 'continued'::chat_automation_event_status]));
CREATE INDEX idx_chat_automation_events_received_at ON chat_automation_events USING btree (received_at);
CREATE INDEX idx_chat_automation_triggers_automation_id ON chat_automation_triggers USING btree (automation_id);
CREATE INDEX idx_chat_automations_organization_id ON chat_automations USING btree (organization_id);
CREATE INDEX idx_chat_automations_owner_id ON chat_automations USING btree (owner_id);
CREATE UNIQUE INDEX idx_chat_automations_owner_org_name ON chat_automations USING btree (owner_id, organization_id, name);
CREATE INDEX idx_chat_diff_statuses_stale_at ON chat_diff_statuses USING btree (stale_at);
CREATE INDEX idx_chat_files_org ON chat_files USING btree (organization_id);
@@ -3732,6 +3886,8 @@ CREATE INDEX idx_chat_providers_enabled ON chat_providers USING btree (enabled);
CREATE INDEX idx_chat_queued_messages_chat_id ON chat_queued_messages USING btree (chat_id);
CREATE INDEX idx_chats_automation_id ON chats USING btree (automation_id);
CREATE INDEX idx_chats_labels ON chats USING gin (labels);
CREATE INDEX idx_chats_last_model_config_id ON chats USING btree (last_model_config_id);
@@ -4005,6 +4161,33 @@ ALTER TABLE ONLY aibridge_interceptions
ALTER TABLE ONLY api_keys
ADD CONSTRAINT api_keys_user_id_uuid_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_created_chat_id_fkey FOREIGN KEY (created_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_matched_chat_id_fkey FOREIGN KEY (matched_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_trigger_id_fkey FOREIGN KEY (trigger_id) REFERENCES chat_automation_triggers(id) ON DELETE SET NULL;
ALTER TABLE ONLY chat_automation_triggers
ADD CONSTRAINT chat_automation_triggers_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_automation_triggers
ADD CONSTRAINT chat_automation_triggers_webhook_secret_key_id_fkey FOREIGN KEY (webhook_secret_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ALTER TABLE ONLY chat_automations
ADD CONSTRAINT chat_automations_model_config_id_fkey FOREIGN KEY (model_config_id) REFERENCES chat_model_configs(id) ON DELETE SET NULL;
ALTER TABLE ONLY chat_automations
ADD CONSTRAINT chat_automations_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_automations
ADD CONSTRAINT chat_automations_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_diff_statuses
ADD CONSTRAINT chat_diff_statuses_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
@@ -4041,6 +4224,9 @@ ALTER TABLE ONLY chat_queued_messages
ALTER TABLE ONLY chats
ADD CONSTRAINT chats_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE SET NULL;
ALTER TABLE ONLY chats
ADD CONSTRAINT chats_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE SET NULL;
ALTER TABLE ONLY chats
ADD CONSTRAINT chats_build_id_fkey FOREIGN KEY (build_id) REFERENCES workspace_builds(id) ON DELETE SET NULL;
@@ -9,6 +9,15 @@ const (
ForeignKeyAiSeatStateUserID ForeignKeyConstraint = "ai_seat_state_user_id_fkey" // ALTER TABLE ONLY ai_seat_state ADD CONSTRAINT ai_seat_state_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyAibridgeInterceptionsInitiatorID ForeignKeyConstraint = "aibridge_interceptions_initiator_id_fkey" // ALTER TABLE ONLY aibridge_interceptions ADD CONSTRAINT aibridge_interceptions_initiator_id_fkey FOREIGN KEY (initiator_id) REFERENCES users(id);
ForeignKeyAPIKeysUserIDUUID ForeignKeyConstraint = "api_keys_user_id_uuid_fkey" // ALTER TABLE ONLY api_keys ADD CONSTRAINT api_keys_user_id_uuid_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyChatAutomationEventsAutomationID ForeignKeyConstraint = "chat_automation_events_automation_id_fkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE;
ForeignKeyChatAutomationEventsCreatedChatID ForeignKeyConstraint = "chat_automation_events_created_chat_id_fkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_created_chat_id_fkey FOREIGN KEY (created_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
ForeignKeyChatAutomationEventsMatchedChatID ForeignKeyConstraint = "chat_automation_events_matched_chat_id_fkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_matched_chat_id_fkey FOREIGN KEY (matched_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
ForeignKeyChatAutomationEventsTriggerID ForeignKeyConstraint = "chat_automation_events_trigger_id_fkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_trigger_id_fkey FOREIGN KEY (trigger_id) REFERENCES chat_automation_triggers(id) ON DELETE SET NULL;
ForeignKeyChatAutomationTriggersAutomationID ForeignKeyConstraint = "chat_automation_triggers_automation_id_fkey" // ALTER TABLE ONLY chat_automation_triggers ADD CONSTRAINT chat_automation_triggers_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE;
ForeignKeyChatAutomationTriggersWebhookSecretKeyID ForeignKeyConstraint = "chat_automation_triggers_webhook_secret_key_id_fkey" // ALTER TABLE ONLY chat_automation_triggers ADD CONSTRAINT chat_automation_triggers_webhook_secret_key_id_fkey FOREIGN KEY (webhook_secret_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ForeignKeyChatAutomationsModelConfigID ForeignKeyConstraint = "chat_automations_model_config_id_fkey" // ALTER TABLE ONLY chat_automations ADD CONSTRAINT chat_automations_model_config_id_fkey FOREIGN KEY (model_config_id) REFERENCES chat_model_configs(id) ON DELETE SET NULL;
ForeignKeyChatAutomationsOrganizationID ForeignKeyConstraint = "chat_automations_organization_id_fkey" // ALTER TABLE ONLY chat_automations ADD CONSTRAINT chat_automations_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ForeignKeyChatAutomationsOwnerID ForeignKeyConstraint = "chat_automations_owner_id_fkey" // ALTER TABLE ONLY chat_automations ADD CONSTRAINT chat_automations_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyChatDiffStatusesChatID ForeignKeyConstraint = "chat_diff_statuses_chat_id_fkey" // ALTER TABLE ONLY chat_diff_statuses ADD CONSTRAINT chat_diff_statuses_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ForeignKeyChatFilesOrganizationID ForeignKeyConstraint = "chat_files_organization_id_fkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ForeignKeyChatFilesOwnerID ForeignKeyConstraint = "chat_files_owner_id_fkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
@@ -21,6 +30,7 @@ const (
ForeignKeyChatProvidersCreatedBy ForeignKeyConstraint = "chat_providers_created_by_fkey" // ALTER TABLE ONLY chat_providers ADD CONSTRAINT chat_providers_created_by_fkey FOREIGN KEY (created_by) REFERENCES users(id);
ForeignKeyChatQueuedMessagesChatID ForeignKeyConstraint = "chat_queued_messages_chat_id_fkey" // ALTER TABLE ONLY chat_queued_messages ADD CONSTRAINT chat_queued_messages_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ForeignKeyChatsAgentID ForeignKeyConstraint = "chats_agent_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE SET NULL;
ForeignKeyChatsAutomationID ForeignKeyConstraint = "chats_automation_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE SET NULL;
ForeignKeyChatsBuildID ForeignKeyConstraint = "chats_build_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_build_id_fkey FOREIGN KEY (build_id) REFERENCES workspace_builds(id) ON DELETE SET NULL;
ForeignKeyChatsLastModelConfigID ForeignKeyConstraint = "chats_last_model_config_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_last_model_config_id_fkey FOREIGN KEY (last_model_config_id) REFERENCES chat_model_configs(id);
ForeignKeyChatsOwnerID ForeignKeyConstraint = "chats_owner_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
@@ -27,6 +27,7 @@ func TestCustomQueriesSyncedRowScan(t *testing.T) {
"GetWorkspaces": "GetAuthorizedWorkspaces",
"GetUsers": "GetAuthorizedUsers",
"GetChats": "GetAuthorizedChats",
"GetChatAutomations": "GetAuthorizedChatAutomations",
}
// Scan custom
@@ -15,6 +15,9 @@ const (
LockIDReconcilePrebuilds
LockIDReconcileSystemRoles
LockIDBoundaryUsageStats
// LockIDChatAutomationCron prevents concurrent cron trigger
// evaluation across coderd replicas.
LockIDChatAutomationCron
)
// GenLockID generates a unique and consistent lock ID from a given string.
@@ -0,0 +1 @@
ALTER TABLE chats DROP COLUMN last_injected_context;
@@ -0,0 +1 @@
ALTER TABLE chats ADD COLUMN last_injected_context JSONB;
@@ -0,0 +1,4 @@
-- Remove 'agents-access' from all users who have it.
UPDATE users
SET rbac_roles = array_remove(rbac_roles, 'agents-access')
WHERE 'agents-access' = ANY(rbac_roles);
@@ -0,0 +1,5 @@
-- Grant 'agents-access' to every user who has ever created a chat.
UPDATE users
SET rbac_roles = array_append(rbac_roles, 'agents-access')
WHERE id IN (SELECT DISTINCT owner_id FROM chats)
AND NOT ('agents-access' = ANY(rbac_roles));
@@ -0,0 +1,13 @@
ALTER TABLE chats DROP COLUMN IF EXISTS automation_id;
DROP TABLE IF EXISTS chat_automation_events;
DROP TABLE IF EXISTS chat_automation_triggers;
DROP TABLE IF EXISTS chat_automations;
DROP TYPE IF EXISTS chat_automation_event_status;
DROP TYPE IF EXISTS chat_automation_trigger_type;
DROP TYPE IF EXISTS chat_automation_status;
@@ -0,0 +1,238 @@
-- Chat automations bridge external events (webhooks, cron schedules) to
-- Coder chats. A chat automation defines *what* to say, *which* model
-- and tools to use, and *how fast* it is allowed to create or continue
-- chats.
CREATE TYPE chat_automation_status AS ENUM ('disabled', 'preview', 'active');
CREATE TYPE chat_automation_trigger_type AS ENUM ('webhook', 'cron');
CREATE TYPE chat_automation_event_status AS ENUM ('filtered', 'preview', 'created', 'continued', 'rate_limited', 'error');
CREATE TABLE chat_automations (
id uuid NOT NULL,
-- The user on whose behalf chats are created. All RBAC checks and
-- chat ownership are scoped to this user.
owner_id uuid NOT NULL,
-- Organization scope for RBAC. Combined with owner_id and name to
-- form a unique constraint so automations are namespaced per user
-- per org.
organization_id uuid NOT NULL,
-- Human-readable identifier. Unique within (owner_id, organization_id).
name text NOT NULL,
-- Optional long-form description shown in the UI.
description text NOT NULL DEFAULT '',
-- The user-role message injected into every chat this automation
-- creates. This is the core prompt that tells the LLM what to do.
instructions text NOT NULL DEFAULT '',
-- Optional model configuration override. When NULL the deployment
-- default is used. SET NULL on delete so automations survive config
-- changes gracefully.
model_config_id uuid,
-- MCP servers to attach to chats created by this automation.
-- Stored as an array of UUIDs rather than a join table because
-- the set is small and always read/written atomically.
mcp_server_ids uuid[] NOT NULL DEFAULT '{}',
-- Tool allowlist. Empty means all tools available to the model
-- config are permitted.
allowed_tools text[] NOT NULL DEFAULT '{}',
-- Lifecycle state:
-- disabled — trigger events are silently dropped.
-- preview — events are logged but no chat is created (dry-run).
-- active — events create or continue chats.
status chat_automation_status NOT NULL DEFAULT 'disabled',
-- Maximum number of *new* chats this automation may create in a
-- rolling one-hour window. Prevents runaway webhook storms from
-- flooding the system. Approximate under concurrency; the
-- check-then-insert is not serialized, so brief bursts may
-- slightly exceed the cap.
max_chat_creates_per_hour integer NOT NULL DEFAULT 10,
-- Maximum total messages (creates + continues) this automation may
-- send in a rolling one-hour window. A second, broader throttle
-- that catches high-frequency continuation patterns. Same
-- approximate-under-concurrency caveat as above.
max_messages_per_hour integer NOT NULL DEFAULT 60,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE,
FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE,
FOREIGN KEY (model_config_id) REFERENCES chat_model_configs(id) ON DELETE SET NULL,
CONSTRAINT chat_automations_max_chat_creates_per_hour_check CHECK (max_chat_creates_per_hour > 0),
CONSTRAINT chat_automations_max_messages_per_hour_check CHECK (max_messages_per_hour > 0)
);
CREATE INDEX idx_chat_automations_owner_id ON chat_automations (owner_id);
CREATE INDEX idx_chat_automations_organization_id ON chat_automations (organization_id);
-- Enforces that automation names are unique per user per org so they
-- can be referenced unambiguously in CLI/API calls.
CREATE UNIQUE INDEX idx_chat_automations_owner_org_name ON chat_automations (owner_id, organization_id, name);
-- Triggers define *how* an automation is invoked. Each automation can
-- have multiple triggers (e.g. one webhook + one cron schedule).
-- Webhook and cron triggers share the same row shape with type-specific
-- nullable columns to keep the schema simple.
CREATE TABLE chat_automation_triggers (
id uuid NOT NULL,
-- Parent automation. CASCADE delete ensures orphan triggers are
-- cleaned up when an automation is removed.
automation_id uuid NOT NULL,
-- Discriminator: 'webhook' or 'cron'. Determines which nullable
-- columns are meaningful.
type chat_automation_trigger_type NOT NULL,
-- HMAC-SHA256 shared secret for webhook signature verification
-- (X-Hub-Signature-256 header). NULL for cron triggers.
webhook_secret text,
-- Identifier of the dbcrypt key used to encrypt webhook_secret.
-- NULL means the secret is not yet encrypted. When dbcrypt is
-- enabled, this references the active key digest used for
-- AES-256-GCM encryption.
webhook_secret_key_id text REFERENCES dbcrypt_keys(active_key_digest),
-- Standard 5-field cron expression (minute hour dom month dow),
-- with optional CRON_TZ= prefix. NULL for webhook triggers.
cron_schedule text,
-- Timestamp of the last successful cron fire. The scheduler
-- computes next = cron.Next(last_triggered_at) and fires when
-- next <= now. NULL means the trigger has never fired; the
-- scheduler falls back to created_at as the reference time.
-- Not used for webhook triggers.
last_triggered_at timestamp with time zone,
-- gjson path→value filter conditions evaluated against the
-- incoming webhook payload. All conditions must match for the
-- trigger to fire. NULL or empty means "match everything".
filter jsonb,
-- Maps chat label keys to gjson paths. When a trigger fires,
-- labels are resolved from the payload and used to find an
-- existing chat to continue (by label match) or set on a
-- newly created chat. This is how automations route events
-- to the right conversation.
label_paths jsonb,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE,
CONSTRAINT chat_automation_triggers_webhook_fields CHECK (
type != 'webhook' OR (webhook_secret IS NOT NULL AND cron_schedule IS NULL AND last_triggered_at IS NULL)
),
CONSTRAINT chat_automation_triggers_cron_fields CHECK (
type != 'cron' OR (cron_schedule IS NOT NULL AND webhook_secret IS NULL AND webhook_secret_key_id IS NULL)
)
);
CREATE INDEX idx_chat_automation_triggers_automation_id ON chat_automation_triggers (automation_id);
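The webhook signature check described on `webhook_secret` can be sketched as follows; this is a minimal illustration of GitHub-style `X-Hub-Signature-256` verification, not the server's actual handler, and `verifySignature` is a hypothetical name:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// signBody computes the "sha256=<hex>" value a sender would place in
// the X-Hub-Signature-256 header: HMAC-SHA256 over the raw body keyed
// by the trigger's shared secret.
func signBody(secret, body []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(body)
	return "sha256=" + hex.EncodeToString(mac.Sum(nil))
}

// verifySignature recomputes the signature server-side and compares it
// to the header value. hmac.Equal is constant-time, which avoids
// leaking the expected signature through timing.
func verifySignature(secret, body []byte, header string) bool {
	return hmac.Equal([]byte(signBody(secret, body)), []byte(header))
}

func main() {
	secret := []byte("trigger-secret")
	body := []byte(`{"action":"opened"}`)

	fmt.Println(verifySignature(secret, body, signBody(secret, body)))
	fmt.Println(verifySignature(secret, body, "sha256=deadbeef"))
}
```

Because verification recomputes the HMAC from the stored secret, the secret must be readable server-side, which is why `webhook_secret_key_id` tracks the dbcrypt key used to encrypt it at rest.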
-- Every trigger invocation produces an event row regardless of outcome.
-- This table is the audit trail and the data source for rate-limit
-- window counts. Rows are append-only and expected to be purged by a
-- background job after a retention period.
CREATE TABLE chat_automation_events (
id uuid NOT NULL,
-- The automation that owns this event.
automation_id uuid NOT NULL,
-- The trigger that produced this event. SET NULL on delete so
-- historical events survive trigger removal.
trigger_id uuid,
-- When the event was received (webhook delivery time or cron
-- evaluation time). Used for rate-limit window calculations and
-- purge cutoffs.
received_at timestamp with time zone NOT NULL,
-- The raw payload that was evaluated. For webhooks this is the
-- HTTP body; for cron triggers it is a synthetic JSON envelope
-- with schedule metadata.
payload jsonb NOT NULL,
-- Whether the trigger's filter conditions matched. False means
-- the event was dropped before any chat interaction.
filter_matched boolean NOT NULL,
-- Labels resolved from the payload via label_paths. Stored so
-- the event log shows exactly which labels were computed.
resolved_labels jsonb,
-- ID of an existing chat that was found via label matching and
-- continued with a new message.
matched_chat_id uuid,
-- ID of a newly created chat (mutually exclusive with
-- matched_chat_id in practice).
created_chat_id uuid,
-- Outcome of the event:
-- filtered — filter did not match, event dropped.
-- preview — automation is in preview mode, no chat action.
-- created — new chat was created.
-- continued — existing chat was continued.
-- rate_limited — rate limit prevented chat action.
-- error — something went wrong (see error column).
status chat_automation_event_status NOT NULL,
-- Human-readable error description when status = 'error' or
-- 'rate_limited'. NULL for successful outcomes.
error text,
PRIMARY KEY (id),
FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE,
FOREIGN KEY (trigger_id) REFERENCES chat_automation_triggers(id) ON DELETE SET NULL,
FOREIGN KEY (matched_chat_id) REFERENCES chats(id) ON DELETE SET NULL,
FOREIGN KEY (created_chat_id) REFERENCES chats(id) ON DELETE SET NULL,
CONSTRAINT chat_automation_events_chat_exclusivity CHECK (
matched_chat_id IS NULL OR created_chat_id IS NULL
)
);
-- Composite index for listing events per automation in reverse
-- chronological order (the primary UI query pattern).
CREATE INDEX idx_chat_automation_events_automation_id_received_at ON chat_automation_events (automation_id, received_at DESC);
-- Standalone index on received_at for the purge job, which deletes
-- events older than the retention period across all automations.
CREATE INDEX idx_chat_automation_events_received_at ON chat_automation_events (received_at);
-- Partial index for rate-limit window count queries, which filter
-- by automation_id and status IN ('created', 'continued').
CREATE INDEX idx_chat_automation_events_rate_limit
ON chat_automation_events (automation_id, received_at)
WHERE status IN ('created', 'continued');
-- Link chats back to the automation that created them. SET NULL on
-- delete so chats survive if the automation is removed. Indexed for
-- lookup queries that list chats spawned by a given automation.
ALTER TABLE chats ADD COLUMN automation_id uuid REFERENCES chat_automations(id) ON DELETE SET NULL;
CREATE INDEX idx_chats_automation_id ON chats (automation_id);
-- Enum type comments.
COMMENT ON TYPE chat_automation_status IS 'Lifecycle state of a chat automation: disabled, preview, or active.';
COMMENT ON TYPE chat_automation_trigger_type IS 'Discriminator for chat automation triggers: webhook or cron.';
COMMENT ON TYPE chat_automation_event_status IS 'Outcome of a chat automation event: filtered, preview, created, continued, rate_limited, or error.';
-- Table comments.
COMMENT ON TABLE chat_automations IS 'Chat automations bridge external events (webhooks, cron schedules) to Coder chats. A chat automation defines what to say, which model and tools to use, and how fast it is allowed to create or continue chats.';
COMMENT ON TABLE chat_automation_triggers IS 'Triggers define how an automation is invoked. Each automation can have multiple triggers (e.g. one webhook + one cron schedule). Webhook and cron triggers share the same row shape with type-specific nullable columns to keep the schema simple.';
COMMENT ON TABLE chat_automation_events IS 'Every trigger invocation produces an event row regardless of outcome. This table is the audit trail and the data source for rate-limit window counts. Rows are append-only and expected to be purged by a background job after a retention period.';
-- Column comments for chat_automations.
COMMENT ON COLUMN chat_automations.owner_id IS 'The user on whose behalf chats are created. All RBAC checks and chat ownership are scoped to this user.';
COMMENT ON COLUMN chat_automations.organization_id IS 'Organization scope for RBAC. Combined with owner_id and name to form a unique constraint so automations are namespaced per user per org.';
COMMENT ON COLUMN chat_automations.instructions IS 'The user-role message injected into every chat this automation creates. This is the core prompt that tells the LLM what to do.';
COMMENT ON COLUMN chat_automations.model_config_id IS 'Optional model configuration override. When NULL the deployment default is used. SET NULL on delete so automations survive config changes gracefully.';
COMMENT ON COLUMN chat_automations.mcp_server_ids IS 'MCP servers to attach to chats created by this automation. Stored as an array of UUIDs rather than a join table because the set is small and always read/written atomically.';
COMMENT ON COLUMN chat_automations.allowed_tools IS 'Tool allowlist. Empty means all tools available to the model config are permitted.';
COMMENT ON COLUMN chat_automations.status IS 'Lifecycle state: disabled — trigger events are silently dropped; preview — events are logged but no chat is created (dry-run); active — events create or continue chats.';
COMMENT ON COLUMN chat_automations.max_chat_creates_per_hour IS 'Maximum number of new chats this automation may create in a rolling one-hour window. Prevents runaway webhook storms from flooding the system.';
COMMENT ON COLUMN chat_automations.max_messages_per_hour IS 'Maximum total messages (creates + continues) this automation may send in a rolling one-hour window. A second, broader throttle that catches high-frequency continuation patterns.';
-- Column comments for chat_automation_triggers.
COMMENT ON COLUMN chat_automation_triggers.type IS 'Discriminator: webhook or cron. Determines which nullable columns are meaningful.';
COMMENT ON COLUMN chat_automation_triggers.webhook_secret IS 'HMAC-SHA256 shared secret for webhook signature verification (X-Hub-Signature-256 header). NULL for cron triggers.';
COMMENT ON COLUMN chat_automation_triggers.cron_schedule IS 'Standard 5-field cron expression (minute hour dom month dow), with optional CRON_TZ= prefix. NULL for webhook triggers.';
COMMENT ON COLUMN chat_automation_triggers.filter IS 'gjson path-to-value filter conditions evaluated against the incoming webhook payload. All conditions must match for the trigger to fire. NULL or empty means match everything.';
COMMENT ON COLUMN chat_automation_triggers.label_paths IS 'Maps chat label keys to gjson paths. When a trigger fires, labels are resolved from the payload and used to find an existing chat to continue (by label match) or set on a newly created chat.';
COMMENT ON COLUMN chat_automation_triggers.last_triggered_at IS 'Timestamp of the last successful cron fire. The scheduler computes next = cron.Next(last_triggered_at) and fires when next <= now. NULL means the trigger has never fired. Not used for webhook triggers.';
-- Column comments for chat_automation_events.
COMMENT ON COLUMN chat_automation_events.payload IS 'The raw payload that was evaluated. For webhooks this is the HTTP body; for cron triggers it is a synthetic JSON envelope with schedule metadata.';
COMMENT ON COLUMN chat_automation_events.filter_matched IS 'Whether the trigger filter conditions matched. False means the event was dropped before any chat interaction.';
COMMENT ON COLUMN chat_automation_events.resolved_labels IS 'Labels resolved from the payload via label_paths. Stored so the event log shows exactly which labels were computed.';
COMMENT ON COLUMN chat_automation_events.matched_chat_id IS 'ID of an existing chat that was found via label matching and continued with a new message.';
COMMENT ON COLUMN chat_automation_events.created_chat_id IS 'ID of a newly created chat (mutually exclusive with matched_chat_id in practice).';
COMMENT ON COLUMN chat_automation_events.status IS 'Outcome of the event: filtered — filter did not match; preview — automation is in preview mode; created — new chat was created; continued — existing chat was continued; rate_limited — rate limit prevented chat action; error — something went wrong.';
-- Add API key scope values for the new chat_automation resource type.
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:create';
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:read';
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:update';
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:delete';
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:*';
@@ -877,3 +877,149 @@ func TestMigration000387MigrateTaskWorkspaces(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 0, antCount, "antagonist workspaces (deleted and regular) should not be migrated")
}
func TestMigration000457ChatAccessRole(t *testing.T) {
t.Parallel()
const migrationVersion = 457
sqlDB := testSQLDB(t)
// Migrate up to the migration before the one that grants
// agents-access roles.
next, err := migrations.Stepper(sqlDB)
require.NoError(t, err)
for {
version, more, err := next()
require.NoError(t, err)
if !more {
t.Fatalf("migration %d not found", migrationVersion)
}
if version == migrationVersion-1 {
break
}
}
ctx := testutil.Context(t, testutil.WaitSuperLong)
// Define test users.
userWithChat := uuid.New() // Has a chat, no agents-access role.
userAlreadyHasRole := uuid.New() // Has a chat and already has agents-access.
userNoChat := uuid.New() // No chat at all.
userWithChatAndRoles := uuid.New() // Has a chat and other existing roles.
now := time.Now().UTC().Truncate(time.Microsecond)
// We need a chat_provider and chat_model_config for the chats FK.
providerID := uuid.New()
modelConfigID := uuid.New()
tx, err := sqlDB.BeginTx(ctx, nil)
require.NoError(t, err)
defer tx.Rollback()
fixtures := []struct {
query string
args []any
}{
// Insert test users with varying rbac_roles.
{
`INSERT INTO users (id, username, email, hashed_password, created_at, updated_at, status, rbac_roles, login_type)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{userWithChat, "user-with-chat", "chat@test.com", []byte{}, now, now, "active", pq.StringArray{}, "password"},
},
{
`INSERT INTO users (id, username, email, hashed_password, created_at, updated_at, status, rbac_roles, login_type)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{userAlreadyHasRole, "user-already-has-role", "already@test.com", []byte{}, now, now, "active", pq.StringArray{"agents-access"}, "password"},
},
{
`INSERT INTO users (id, username, email, hashed_password, created_at, updated_at, status, rbac_roles, login_type)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{userNoChat, "user-no-chat", "nochat@test.com", []byte{}, now, now, "active", pq.StringArray{}, "password"},
},
{
`INSERT INTO users (id, username, email, hashed_password, created_at, updated_at, status, rbac_roles, login_type)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{userWithChatAndRoles, "user-with-roles", "roles@test.com", []byte{}, now, now, "active", pq.StringArray{"template-admin"}, "password"},
},
// Insert a chat provider and model config for the chats FK.
{
`INSERT INTO chat_providers (id, provider, display_name, api_key, enabled, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, $7)`,
[]any{providerID, "openai", "OpenAI", "", true, now, now},
},
{
`INSERT INTO chat_model_configs (id, provider, model, display_name, enabled, context_limit, compression_threshold, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{modelConfigID, "openai", "gpt-4", "GPT 4", true, 100000, 70, now, now},
},
		// Insert chats for userWithChat, userAlreadyHasRole, and
		// userWithChatAndRoles (not userNoChat).
{
`INSERT INTO chats (id, owner_id, last_model_config_id, title, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6)`,
[]any{uuid.New(), userWithChat, modelConfigID, "Chat A", now, now},
},
{
`INSERT INTO chats (id, owner_id, last_model_config_id, title, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6)`,
[]any{uuid.New(), userAlreadyHasRole, modelConfigID, "Chat B", now, now},
},
{
`INSERT INTO chats (id, owner_id, last_model_config_id, title, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6)`,
[]any{uuid.New(), userWithChatAndRoles, modelConfigID, "Chat D", now, now},
},
}
for i, f := range fixtures {
_, err := tx.ExecContext(ctx, f.query, f.args...)
require.NoError(t, err, "fixture %d", i)
}
require.NoError(t, tx.Commit())
// Run the migration.
version, _, err := next()
require.NoError(t, err)
require.EqualValues(t, migrationVersion, version)
// Helper to get rbac_roles for a user.
getRoles := func(t *testing.T, userID uuid.UUID) []string {
t.Helper()
var roles pq.StringArray
err := sqlDB.QueryRowContext(ctx,
"SELECT rbac_roles FROM users WHERE id = $1", userID,
).Scan(&roles)
require.NoError(t, err)
return roles
}
// Verify: user with chat gets agents-access.
roles := getRoles(t, userWithChat)
require.Contains(t, roles, "agents-access",
"user with chat should get agents-access")
// Verify: user who already had agents-access has no duplicate.
roles = getRoles(t, userAlreadyHasRole)
count := 0
for _, r := range roles {
if r == "agents-access" {
count++
}
}
require.Equal(t, 1, count,
"user who already had agents-access should not get a duplicate")
// Verify: user without chat does NOT get agents-access.
roles = getRoles(t, userNoChat)
require.NotContains(t, roles, "agents-access",
"user without chat should not get agents-access")
// Verify: user with chat and existing roles gets agents-access
// appended while preserving existing roles.
roles = getRoles(t, userWithChatAndRoles)
require.Contains(t, roles, "agents-access",
"user with chat and other roles should get agents-access")
require.Contains(t, roles, "template-admin",
"existing roles should be preserved")
}
@@ -0,0 +1,87 @@
INSERT INTO chat_automations (
id,
owner_id,
organization_id,
name,
description,
instructions,
model_config_id,
mcp_server_ids,
allowed_tools,
status,
max_chat_creates_per_hour,
max_messages_per_hour,
created_at,
updated_at
)
SELECT
'b3d0fd0e-8e1a-4f2c-9a3b-1234567890ab',
u.id,
o.id,
'fixture-automation',
'Fixture automation for migration testing.',
'You are a helpful assistant.',
NULL,
'{}',
'{}',
'active',
10,
60,
'2024-01-01 00:00:00+00',
'2024-01-01 00:00:00+00'
FROM users u
CROSS JOIN organizations o
ORDER BY u.created_at, u.id
LIMIT 1;
INSERT INTO chat_automation_triggers (
id,
automation_id,
type,
webhook_secret,
webhook_secret_key_id,
cron_schedule,
last_triggered_at,
filter,
label_paths,
created_at,
updated_at
) VALUES (
'c4e1fe1f-9f2b-4a3d-ab4c-234567890abc',
'b3d0fd0e-8e1a-4f2c-9a3b-1234567890ab',
'webhook',
'whsec_fixture_secret',
NULL,
NULL,
NULL,
'{"action": "opened"}'::jsonb,
'{"repo": "repository.full_name"}'::jsonb,
'2024-01-01 00:00:00+00',
'2024-01-01 00:00:00+00'
);
INSERT INTO chat_automation_events (
id,
automation_id,
trigger_id,
received_at,
payload,
filter_matched,
resolved_labels,
matched_chat_id,
created_chat_id,
status,
error
) VALUES (
'd5f20f20-a03c-4b4e-bc5d-345678901bcd',
'b3d0fd0e-8e1a-4f2c-9a3b-1234567890ab',
'c4e1fe1f-9f2b-4a3d-ab4c-234567890abc',
'2024-01-01 00:00:00+00',
'{"action": "opened", "repository": {"full_name": "coder/coder"}}'::jsonb,
TRUE,
'{"repo": "coder/coder"}'::jsonb,
NULL,
NULL,
'preview',
NULL
);
@@ -182,6 +182,13 @@ func (r GetChatsRow) RBACObject() rbac.Object {
return r.Chat.RBACObject()
}
func (a ChatAutomation) RBACObject() rbac.Object {
return rbac.ResourceChatAutomation.
WithID(a.ID).
WithOwner(a.OwnerID.String()).
InOrg(a.OrganizationID)
}
func (c ChatFile) RBACObject() rbac.Object {
return rbac.ResourceChat.WithID(c.ID).WithOwner(c.OwnerID.String()).InOrg(c.OrganizationID)
}
@@ -53,6 +53,7 @@ type customQuerier interface {
connectionLogQuerier
aibridgeQuerier
chatQuerier
chatAutomationQuerier
}
type templateQuerier interface {
@@ -795,7 +796,71 @@ func (q *sqlQuerier) GetAuthorizedChats(ctx context.Context, arg GetChatsParams,
&i.Chat.AgentID,
&i.Chat.PinOrder,
&i.Chat.LastReadMessageID,
&i.HasUnread); err != nil {
&i.Chat.LastInjectedContext,
&i.Chat.AutomationID,
&i.HasUnread,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
type chatAutomationQuerier interface {
GetAuthorizedChatAutomations(ctx context.Context, arg GetChatAutomationsParams, prepared rbac.PreparedAuthorized) ([]ChatAutomation, error)
}
func (q *sqlQuerier) GetAuthorizedChatAutomations(ctx context.Context, arg GetChatAutomationsParams, prepared rbac.PreparedAuthorized) ([]ChatAutomation, error) {
authorizedFilter, err := prepared.CompileToSQL(ctx, regosql.ConvertConfig{
VariableConverter: regosql.NoACLConverter(),
})
if err != nil {
return nil, xerrors.Errorf("compile authorized filter: %w", err)
}
filtered, err := insertAuthorizedFilter(getChatAutomations, fmt.Sprintf(" AND %s", authorizedFilter))
if err != nil {
return nil, xerrors.Errorf("insert authorized filter: %w", err)
}
// The name comment is for metric tracking
query := fmt.Sprintf("-- name: GetAuthorizedChatAutomations :many\n%s", filtered)
rows, err := q.db.QueryContext(ctx, query,
arg.OwnerID,
arg.OrganizationID,
arg.OffsetOpt,
arg.LimitOpt,
)
if err != nil {
return nil, err
}
defer rows.Close()
var items []ChatAutomation
for rows.Next() {
var i ChatAutomation
if err := rows.Scan(
&i.ID,
&i.OwnerID,
&i.OrganizationID,
&i.Name,
&i.Description,
&i.Instructions,
&i.ModelConfigID,
pq.Array(&i.MCPServerIDs),
pq.Array(&i.AllowedTools),
&i.Status,
&i.MaxChatCreatesPerHour,
&i.MaxMessagesPerHour,
&i.CreatedAt,
&i.UpdatedAt,
); err != nil {
return nil, err
}
items = append(items, i)
@@ -995,8 +1060,6 @@ func (q *sqlQuerier) ListAuthorizedAIBridgeSessions(ctx context.Context, arg Lis
query := fmt.Sprintf("-- name: ListAuthorizedAIBridgeSessions :many\n%s", filtered)
rows, err := q.db.QueryContext(ctx, query,
arg.AfterSessionID,
arg.Offset,
arg.Limit,
arg.StartedAfter,
arg.StartedBefore,
arg.InitiatorID,
@@ -1004,6 +1067,8 @@ func (q *sqlQuerier) ListAuthorizedAIBridgeSessions(ctx context.Context, arg Lis
arg.Model,
arg.Client,
arg.SessionID,
arg.Offset,
arg.Limit,
)
if err != nil {
return nil, err
@@ -224,6 +224,11 @@ const (
ApiKeyScopeChatUpdate APIKeyScope = "chat:update"
ApiKeyScopeChatDelete APIKeyScope = "chat:delete"
ApiKeyScopeChat APIKeyScope = "chat:*"
ApiKeyScopeChatAutomationCreate APIKeyScope = "chat_automation:create"
ApiKeyScopeChatAutomationRead APIKeyScope = "chat_automation:read"
ApiKeyScopeChatAutomationUpdate APIKeyScope = "chat_automation:update"
ApiKeyScopeChatAutomationDelete APIKeyScope = "chat_automation:delete"
ApiKeyScopeChatAutomation APIKeyScope = "chat_automation:*"
)
func (e *APIKeyScope) Scan(src interface{}) error {
@@ -467,7 +472,12 @@ func (e APIKeyScope) Valid() bool {
ApiKeyScopeChatRead,
ApiKeyScopeChatUpdate,
ApiKeyScopeChatDelete,
ApiKeyScopeChat:
ApiKeyScopeChat,
ApiKeyScopeChatAutomationCreate,
ApiKeyScopeChatAutomationRead,
ApiKeyScopeChatAutomationUpdate,
ApiKeyScopeChatAutomationDelete,
ApiKeyScopeChatAutomation:
return true
}
return false
@@ -680,6 +690,11 @@ func AllAPIKeyScopeValues() []APIKeyScope {
ApiKeyScopeChatUpdate,
ApiKeyScopeChatDelete,
ApiKeyScopeChat,
ApiKeyScopeChatAutomationCreate,
ApiKeyScopeChatAutomationRead,
ApiKeyScopeChatAutomationUpdate,
ApiKeyScopeChatAutomationDelete,
ApiKeyScopeChatAutomation,
}
}
@@ -1107,6 +1122,198 @@ func AllBuildReasonValues() []BuildReason {
}
}
// Outcome of a chat automation event: filtered, preview, created, continued, rate_limited, or error.
type ChatAutomationEventStatus string
const (
ChatAutomationEventStatusFiltered ChatAutomationEventStatus = "filtered"
ChatAutomationEventStatusPreview ChatAutomationEventStatus = "preview"
ChatAutomationEventStatusCreated ChatAutomationEventStatus = "created"
ChatAutomationEventStatusContinued ChatAutomationEventStatus = "continued"
ChatAutomationEventStatusRateLimited ChatAutomationEventStatus = "rate_limited"
ChatAutomationEventStatusError ChatAutomationEventStatus = "error"
)
func (e *ChatAutomationEventStatus) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = ChatAutomationEventStatus(s)
case string:
*e = ChatAutomationEventStatus(s)
default:
return fmt.Errorf("unsupported scan type for ChatAutomationEventStatus: %T", src)
}
return nil
}
type NullChatAutomationEventStatus struct {
ChatAutomationEventStatus ChatAutomationEventStatus `json:"chat_automation_event_status"`
Valid bool `json:"valid"` // Valid is true if ChatAutomationEventStatus is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullChatAutomationEventStatus) Scan(value interface{}) error {
if value == nil {
ns.ChatAutomationEventStatus, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.ChatAutomationEventStatus.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullChatAutomationEventStatus) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.ChatAutomationEventStatus), nil
}
func (e ChatAutomationEventStatus) Valid() bool {
switch e {
case ChatAutomationEventStatusFiltered,
ChatAutomationEventStatusPreview,
ChatAutomationEventStatusCreated,
ChatAutomationEventStatusContinued,
ChatAutomationEventStatusRateLimited,
ChatAutomationEventStatusError:
return true
}
return false
}
func AllChatAutomationEventStatusValues() []ChatAutomationEventStatus {
return []ChatAutomationEventStatus{
ChatAutomationEventStatusFiltered,
ChatAutomationEventStatusPreview,
ChatAutomationEventStatusCreated,
ChatAutomationEventStatusContinued,
ChatAutomationEventStatusRateLimited,
ChatAutomationEventStatusError,
}
}
// Lifecycle state of a chat automation: disabled, preview, or active.
type ChatAutomationStatus string
const (
ChatAutomationStatusDisabled ChatAutomationStatus = "disabled"
ChatAutomationStatusPreview ChatAutomationStatus = "preview"
ChatAutomationStatusActive ChatAutomationStatus = "active"
)
func (e *ChatAutomationStatus) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = ChatAutomationStatus(s)
case string:
*e = ChatAutomationStatus(s)
default:
return fmt.Errorf("unsupported scan type for ChatAutomationStatus: %T", src)
}
return nil
}
type NullChatAutomationStatus struct {
ChatAutomationStatus ChatAutomationStatus `json:"chat_automation_status"`
Valid bool `json:"valid"` // Valid is true if ChatAutomationStatus is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullChatAutomationStatus) Scan(value interface{}) error {
if value == nil {
ns.ChatAutomationStatus, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.ChatAutomationStatus.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullChatAutomationStatus) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.ChatAutomationStatus), nil
}
func (e ChatAutomationStatus) Valid() bool {
switch e {
case ChatAutomationStatusDisabled,
ChatAutomationStatusPreview,
ChatAutomationStatusActive:
return true
}
return false
}
func AllChatAutomationStatusValues() []ChatAutomationStatus {
return []ChatAutomationStatus{
ChatAutomationStatusDisabled,
ChatAutomationStatusPreview,
ChatAutomationStatusActive,
}
}
// Discriminator for chat automation triggers: webhook or cron.
type ChatAutomationTriggerType string
const (
ChatAutomationTriggerTypeWebhook ChatAutomationTriggerType = "webhook"
ChatAutomationTriggerTypeCron ChatAutomationTriggerType = "cron"
)
func (e *ChatAutomationTriggerType) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = ChatAutomationTriggerType(s)
case string:
*e = ChatAutomationTriggerType(s)
default:
return fmt.Errorf("unsupported scan type for ChatAutomationTriggerType: %T", src)
}
return nil
}
type NullChatAutomationTriggerType struct {
ChatAutomationTriggerType ChatAutomationTriggerType `json:"chat_automation_trigger_type"`
Valid bool `json:"valid"` // Valid is true if ChatAutomationTriggerType is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullChatAutomationTriggerType) Scan(value interface{}) error {
if value == nil {
ns.ChatAutomationTriggerType, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.ChatAutomationTriggerType.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullChatAutomationTriggerType) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.ChatAutomationTriggerType), nil
}
func (e ChatAutomationTriggerType) Valid() bool {
switch e {
case ChatAutomationTriggerTypeWebhook,
ChatAutomationTriggerTypeCron:
return true
}
return false
}
func AllChatAutomationTriggerTypeValues() []ChatAutomationTriggerType {
return []ChatAutomationTriggerType{
ChatAutomationTriggerTypeWebhook,
ChatAutomationTriggerTypeCron,
}
}
type ChatMessageRole string
const (
@@ -4153,28 +4360,99 @@ type BoundaryUsageStat struct {
}
type Chat struct {
ID uuid.UUID `db:"id" json:"id"`
OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
WorkspaceID uuid.NullUUID `db:"workspace_id" json:"workspace_id"`
Title string `db:"title" json:"title"`
Status ChatStatus `db:"status" json:"status"`
WorkerID uuid.NullUUID `db:"worker_id" json:"worker_id"`
StartedAt sql.NullTime `db:"started_at" json:"started_at"`
HeartbeatAt sql.NullTime `db:"heartbeat_at" json:"heartbeat_at"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
ParentChatID uuid.NullUUID `db:"parent_chat_id" json:"parent_chat_id"`
RootChatID uuid.NullUUID `db:"root_chat_id" json:"root_chat_id"`
LastModelConfigID uuid.UUID `db:"last_model_config_id" json:"last_model_config_id"`
Archived bool `db:"archived" json:"archived"`
LastError sql.NullString `db:"last_error" json:"last_error"`
Mode NullChatMode `db:"mode" json:"mode"`
MCPServerIDs []uuid.UUID `db:"mcp_server_ids" json:"mcp_server_ids"`
Labels StringMap `db:"labels" json:"labels"`
BuildID uuid.NullUUID `db:"build_id" json:"build_id"`
AgentID uuid.NullUUID `db:"agent_id" json:"agent_id"`
PinOrder int32 `db:"pin_order" json:"pin_order"`
LastReadMessageID sql.NullInt64 `db:"last_read_message_id" json:"last_read_message_id"`
ID uuid.UUID `db:"id" json:"id"`
OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
WorkspaceID uuid.NullUUID `db:"workspace_id" json:"workspace_id"`
Title string `db:"title" json:"title"`
Status ChatStatus `db:"status" json:"status"`
WorkerID uuid.NullUUID `db:"worker_id" json:"worker_id"`
StartedAt sql.NullTime `db:"started_at" json:"started_at"`
HeartbeatAt sql.NullTime `db:"heartbeat_at" json:"heartbeat_at"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
ParentChatID uuid.NullUUID `db:"parent_chat_id" json:"parent_chat_id"`
RootChatID uuid.NullUUID `db:"root_chat_id" json:"root_chat_id"`
LastModelConfigID uuid.UUID `db:"last_model_config_id" json:"last_model_config_id"`
Archived bool `db:"archived" json:"archived"`
LastError sql.NullString `db:"last_error" json:"last_error"`
Mode NullChatMode `db:"mode" json:"mode"`
MCPServerIDs []uuid.UUID `db:"mcp_server_ids" json:"mcp_server_ids"`
Labels StringMap `db:"labels" json:"labels"`
BuildID uuid.NullUUID `db:"build_id" json:"build_id"`
AgentID uuid.NullUUID `db:"agent_id" json:"agent_id"`
PinOrder int32 `db:"pin_order" json:"pin_order"`
LastReadMessageID sql.NullInt64 `db:"last_read_message_id" json:"last_read_message_id"`
LastInjectedContext pqtype.NullRawMessage `db:"last_injected_context" json:"last_injected_context"`
AutomationID uuid.NullUUID `db:"automation_id" json:"automation_id"`
}
// Chat automations bridge external events (webhooks, cron schedules) to Coder chats. A chat automation defines what to say, which model and tools to use, and how fast it is allowed to create or continue chats.
type ChatAutomation struct {
ID uuid.UUID `db:"id" json:"id"`
// The user on whose behalf chats are created. All RBAC checks and chat ownership are scoped to this user.
OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
// Organization scope for RBAC. Combined with owner_id and name to form a unique constraint so automations are namespaced per user per org.
OrganizationID uuid.UUID `db:"organization_id" json:"organization_id"`
Name string `db:"name" json:"name"`
Description string `db:"description" json:"description"`
// The user-role message injected into every chat this automation creates. This is the core prompt that tells the LLM what to do.
Instructions string `db:"instructions" json:"instructions"`
// Optional model configuration override. When NULL the deployment default is used. SET NULL on delete so automations survive config changes gracefully.
ModelConfigID uuid.NullUUID `db:"model_config_id" json:"model_config_id"`
// MCP servers to attach to chats created by this automation. Stored as an array of UUIDs rather than a join table because the set is small and always read/written atomically.
MCPServerIDs []uuid.UUID `db:"mcp_server_ids" json:"mcp_server_ids"`
// Tool allowlist. Empty means all tools available to the model config are permitted.
AllowedTools []string `db:"allowed_tools" json:"allowed_tools"`
// Lifecycle state: disabled — trigger events are silently dropped; preview — events are logged but no chat is created (dry-run); active — events create or continue chats.
Status ChatAutomationStatus `db:"status" json:"status"`
// Maximum number of new chats this automation may create in a rolling one-hour window. Prevents runaway webhook storms from flooding the system.
MaxChatCreatesPerHour int32 `db:"max_chat_creates_per_hour" json:"max_chat_creates_per_hour"`
// Maximum total messages (creates + continues) this automation may send in a rolling one-hour window. A second, broader throttle that catches high-frequency continuation patterns.
MaxMessagesPerHour int32 `db:"max_messages_per_hour" json:"max_messages_per_hour"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
}
// Every trigger invocation produces an event row regardless of outcome. This table is the audit trail and the data source for rate-limit window counts. Rows are append-only and expected to be purged by a background job after a retention period.
type ChatAutomationEvent struct {
ID uuid.UUID `db:"id" json:"id"`
AutomationID uuid.UUID `db:"automation_id" json:"automation_id"`
TriggerID uuid.NullUUID `db:"trigger_id" json:"trigger_id"`
ReceivedAt time.Time `db:"received_at" json:"received_at"`
// The raw payload that was evaluated. For webhooks this is the HTTP body; for cron triggers it is a synthetic JSON envelope with schedule metadata.
Payload json.RawMessage `db:"payload" json:"payload"`
// Whether the trigger filter conditions matched. False means the event was dropped before any chat interaction.
FilterMatched bool `db:"filter_matched" json:"filter_matched"`
// Labels resolved from the payload via label_paths. Stored so the event log shows exactly which labels were computed.
ResolvedLabels pqtype.NullRawMessage `db:"resolved_labels" json:"resolved_labels"`
// ID of an existing chat that was found via label matching and continued with a new message.
MatchedChatID uuid.NullUUID `db:"matched_chat_id" json:"matched_chat_id"`
// ID of a newly created chat (mutually exclusive with matched_chat_id in practice).
CreatedChatID uuid.NullUUID `db:"created_chat_id" json:"created_chat_id"`
// Outcome of the event: filtered — filter did not match; preview — automation is in preview mode; created — new chat was created; continued — existing chat was continued; rate_limited — rate limit prevented chat action; error — something went wrong.
Status ChatAutomationEventStatus `db:"status" json:"status"`
Error sql.NullString `db:"error" json:"error"`
}
// Triggers define how an automation is invoked. Each automation can have multiple triggers (e.g. one webhook + one cron schedule). Webhook and cron triggers share the same row shape with type-specific nullable columns to keep the schema simple.
type ChatAutomationTrigger struct {
ID uuid.UUID `db:"id" json:"id"`
AutomationID uuid.UUID `db:"automation_id" json:"automation_id"`
// Discriminator: webhook or cron. Determines which nullable columns are meaningful.
Type ChatAutomationTriggerType `db:"type" json:"type"`
// HMAC-SHA256 shared secret for webhook signature verification (X-Hub-Signature-256 header). NULL for cron triggers.
WebhookSecret sql.NullString `db:"webhook_secret" json:"webhook_secret"`
WebhookSecretKeyID sql.NullString `db:"webhook_secret_key_id" json:"webhook_secret_key_id"`
// Standard 5-field cron expression (minute hour dom month dow), with optional CRON_TZ= prefix. NULL for webhook triggers.
CronSchedule sql.NullString `db:"cron_schedule" json:"cron_schedule"`
// Timestamp of the last successful cron fire. The scheduler computes next = cron.Next(last_triggered_at) and fires when next <= now. NULL means the trigger has never fired. Not used for webhook triggers.
LastTriggeredAt sql.NullTime `db:"last_triggered_at" json:"last_triggered_at"`
// gjson path-to-value filter conditions evaluated against the incoming webhook payload. All conditions must match for the trigger to fire. NULL or empty means match everything.
Filter pqtype.NullRawMessage `db:"filter" json:"filter"`
// Maps chat label keys to gjson paths. When a trigger fires, labels are resolved from the payload and used to find an existing chat to continue (by label match) or set on a newly created chat.
LabelPaths pqtype.NullRawMessage `db:"label_paths" json:"label_paths"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
}
type ChatDiffStatus struct {
@@ -54,7 +54,7 @@ type sqlcQuerier interface {
ActivityBumpWorkspace(ctx context.Context, arg ActivityBumpWorkspaceParams) error
// AllUserIDs returns all UserIDs regardless of user status or deletion.
AllUserIDs(ctx context.Context, includeSystem bool) ([]uuid.UUID, error)
ArchiveChatByID(ctx context.Context, id uuid.UUID) error
ArchiveChatByID(ctx context.Context, id uuid.UUID) ([]Chat, error)
// Archiving templates is a soft delete action, so is reversible.
// Archiving prevents the version from being used and discovered
// by listing.
@@ -74,10 +74,21 @@ type sqlcQuerier interface {
CleanTailnetCoordinators(ctx context.Context) error
CleanTailnetLostPeers(ctx context.Context) error
CleanTailnetTunnels(ctx context.Context) error
CleanupDeletedMCPServerIDsFromChatAutomations(ctx context.Context) error
CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error
CountAIBridgeInterceptions(ctx context.Context, arg CountAIBridgeInterceptionsParams) (int64, error)
CountAIBridgeSessions(ctx context.Context, arg CountAIBridgeSessionsParams) (int64, error)
CountAuditLogs(ctx context.Context, arg CountAuditLogsParams) (int64, error)
// Counts new-chat events in the rate-limit window. This count is
// approximate under concurrency: concurrent webhook handlers may
// each read the same count before any of them insert, so brief
// bursts can slightly exceed the configured cap.
CountChatAutomationChatCreatesInWindow(ctx context.Context, arg CountChatAutomationChatCreatesInWindowParams) (int64, error)
// Counts total message events (creates + continues) in the rate-limit
// window. This count is approximate under concurrency: concurrent
// webhook handlers may each read the same count before any of them
// insert, so brief bursts can slightly exceed the configured cap.
CountChatAutomationMessagesInWindow(ctx context.Context, arg CountChatAutomationMessagesInWindowParams) (int64, error)
CountConnectionLogs(ctx context.Context, arg CountConnectionLogsParams) (int64, error)
// Counts enabled, non-deleted model configs that lack both input and
// output pricing in their JSONB options.cost configuration.
@@ -100,6 +111,8 @@ type sqlcQuerier interface {
// be recreated.
DeleteAllWebpushSubscriptions(ctx context.Context) error
DeleteApplicationConnectAPIKeysByUserID(ctx context.Context, userID uuid.UUID) error
DeleteChatAutomationByID(ctx context.Context, id uuid.UUID) error
DeleteChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) error
DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error
DeleteChatProviderByID(ctx context.Context, id uuid.UUID) error
DeleteChatQueuedMessage(ctx context.Context, arg DeleteChatQueuedMessageParams) error
@@ -197,6 +210,10 @@ type sqlcQuerier interface {
GetAPIKeysByUserID(ctx context.Context, arg GetAPIKeysByUserIDParams) ([]APIKey, error)
GetAPIKeysLastUsedAfter(ctx context.Context, lastUsed time.Time) ([]APIKey, error)
GetActiveAISeatCount(ctx context.Context) (int64, error)
// Returns all cron triggers whose parent automation is active or in
// preview mode. The scheduler uses this to evaluate which triggers
// are due.
GetActiveChatAutomationCronTriggers(ctx context.Context) ([]GetActiveChatAutomationCronTriggersRow, error)
GetActivePresetPrebuildSchedules(ctx context.Context) ([]TemplateVersionPresetPrebuildSchedule, error)
GetActiveUserCount(ctx context.Context, includeSystem bool) (int64, error)
GetActiveWorkspaceBuildsByTemplateID(ctx context.Context, templateID uuid.UUID) ([]WorkspaceBuild, error)
@@ -223,6 +240,11 @@ type sqlcQuerier interface {
// This function returns roles for authorization purposes. Implied member roles
// are included.
GetAuthorizationUserRoles(ctx context.Context, userID uuid.UUID) (GetAuthorizationUserRolesRow, error)
GetChatAutomationByID(ctx context.Context, id uuid.UUID) (ChatAutomation, error)
GetChatAutomationEventsByAutomationID(ctx context.Context, arg GetChatAutomationEventsByAutomationIDParams) ([]ChatAutomationEvent, error)
GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (ChatAutomationTrigger, error)
GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]ChatAutomationTrigger, error)
GetChatAutomations(ctx context.Context, arg GetChatAutomationsParams) ([]ChatAutomation, error)
GetChatByID(ctx context.Context, id uuid.UUID) (Chat, error)
GetChatByIDForUpdate(ctx context.Context, id uuid.UUID) (Chat, error)
// Per-root-chat cost breakdown for a single user within a date range.
@@ -696,6 +718,9 @@ type sqlcQuerier interface {
InsertAllUsersGroup(ctx context.Context, organizationID uuid.UUID) (Group, error)
InsertAuditLog(ctx context.Context, arg InsertAuditLogParams) (AuditLog, error)
InsertChat(ctx context.Context, arg InsertChatParams) (Chat, error)
InsertChatAutomation(ctx context.Context, arg InsertChatAutomationParams) (ChatAutomation, error)
InsertChatAutomationEvent(ctx context.Context, arg InsertChatAutomationEventParams) (ChatAutomationEvent, error)
InsertChatAutomationTrigger(ctx context.Context, arg InsertChatAutomationTriggerParams) (ChatAutomationTrigger, error)
InsertChatFile(ctx context.Context, arg InsertChatFileParams) (InsertChatFileRow, error)
InsertChatMessages(ctx context.Context, arg InsertChatMessagesParams) ([]ChatMessage, error)
InsertChatModelConfig(ctx context.Context, arg InsertChatModelConfigParams) (ChatModelConfig, error)
@@ -788,6 +813,10 @@ type sqlcQuerier interface {
// Returns paginated sessions with aggregated metadata, token counts, and
// the most recent user prompt. A "session" is a logical grouping of
// interceptions that share the same session_id (set by the client).
//
// Pagination-first strategy: identify the page of sessions cheaply via a
// single GROUP BY scan, then do expensive lateral joins (tokens, prompts,
// first-interception metadata) only for the ~page-size result set.
ListAIBridgeSessions(ctx context.Context, arg ListAIBridgeSessionsParams) ([]ListAIBridgeSessionsRow, error)
ListAIBridgeTokenUsagesByInterceptionIDs(ctx context.Context, interceptionIds []uuid.UUID) ([]AIBridgeTokenUsage, error)
ListAIBridgeToolUsagesByInterceptionIDs(ctx context.Context, interceptionIds []uuid.UUID) ([]AIBridgeToolUsage, error)
@@ -818,6 +847,10 @@ type sqlcQuerier interface {
// sequence, so this is acceptable.
PinChatByID(ctx context.Context, id uuid.UUID) error
PopNextQueuedMessage(ctx context.Context, chatID uuid.UUID) (ChatQueuedMessage, error)
// Deletes old chat automation events in bounded batches to avoid
// long-running locks on high-volume tables. Callers should loop
// until zero rows are returned.
PurgeOldChatAutomationEvents(ctx context.Context, arg PurgeOldChatAutomationEventsParams) (int64, error)
ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx context.Context, templateID uuid.UUID) error
RegisterWorkspaceProxy(ctx context.Context, arg RegisterWorkspaceProxyParams) (WorkspaceProxy, error)
RemoveUserFromGroups(ctx context.Context, arg RemoveUserFromGroupsParams) ([]uuid.UUID, error)
@@ -840,7 +873,7 @@ type sqlcQuerier interface {
// This must be called from within a transaction. The lock will be automatically
// released when the transaction ends.
TryAcquireLock(ctx context.Context, pgTryAdvisoryXactLock int64) (bool, error)
UnarchiveChatByID(ctx context.Context, id uuid.UUID) error
UnarchiveChatByID(ctx context.Context, id uuid.UUID) ([]Chat, error)
// This will always work regardless of the current state of the template version.
UnarchiveTemplateVersion(ctx context.Context, arg UnarchiveTemplateVersionParams) error
UnfavoriteWorkspace(ctx context.Context, id uuid.UUID) error
@@ -848,12 +881,21 @@ type sqlcQuerier interface {
UnsetDefaultChatModelConfigs(ctx context.Context) error
UpdateAIBridgeInterceptionEnded(ctx context.Context, arg UpdateAIBridgeInterceptionEndedParams) (AIBridgeInterception, error)
UpdateAPIKeyByID(ctx context.Context, arg UpdateAPIKeyByIDParams) error
UpdateChatAutomation(ctx context.Context, arg UpdateChatAutomationParams) (ChatAutomation, error)
UpdateChatAutomationTrigger(ctx context.Context, arg UpdateChatAutomationTriggerParams) (ChatAutomationTrigger, error)
UpdateChatAutomationTriggerLastTriggeredAt(ctx context.Context, arg UpdateChatAutomationTriggerLastTriggeredAtParams) error
UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, arg UpdateChatAutomationTriggerWebhookSecretParams) (ChatAutomationTrigger, error)
UpdateChatBuildAgentBinding(ctx context.Context, arg UpdateChatBuildAgentBindingParams) (Chat, error)
UpdateChatByID(ctx context.Context, arg UpdateChatByIDParams) (Chat, error)
// Bumps the heartbeat timestamp for a running chat so that other
// replicas know the worker is still alive.
UpdateChatHeartbeat(ctx context.Context, arg UpdateChatHeartbeatParams) (int64, error)
UpdateChatLabelsByID(ctx context.Context, arg UpdateChatLabelsByIDParams) (Chat, error)
// Updates the cached injected context parts (AGENTS.md +
// skills) on the chat row. Called only when context changes
// (first workspace attach or agent change). updated_at is
// intentionally not touched to avoid reordering the chat list.
UpdateChatLastInjectedContext(ctx context.Context, arg UpdateChatLastInjectedContextParams) (Chat, error)
UpdateChatLastModelConfigByID(ctx context.Context, arg UpdateChatLastModelConfigByIDParams) (Chat, error)
// Updates the last read message ID for a chat. This is used to track
// which messages the owner has seen, enabling unread indicators.
@@ -1251,8 +1251,12 @@ func TestGetAuthorizedChats(t *testing.T) {
owner := dbgen.User(t, db, database.User{
RBACRoles: []string{rbac.RoleOwner().String()},
})
member := dbgen.User(t, db, database.User{})
secondMember := dbgen.User(t, db, database.User{})
member := dbgen.User(t, db, database.User{
RBACRoles: pq.StringArray{rbac.RoleAgentsAccess().String()},
})
secondMember := dbgen.User(t, db, database.User{
RBACRoles: pq.StringArray{rbac.RoleAgentsAccess().String()},
})
// Create FK dependencies: a chat provider and model config.
ctx := testutil.Context(t, testutil.WaitMedium)
@@ -1407,7 +1411,9 @@ func TestGetAuthorizedChats(t *testing.T) {
// Use a dedicated user for pagination to avoid interference
// with the other parallel subtests.
paginationUser := dbgen.User(t, db, database.User{})
paginationUser := dbgen.User(t, db, database.User{
RBACRoles: pq.StringArray{rbac.RoleAgentsAccess().String()},
})
for i := range 7 {
_, err := db.InsertChat(ctx, database.InsertChatParams{
OwnerID: paginationUser.ID,
@@ -10640,7 +10646,8 @@ func TestChatPinOrderQueries(t *testing.T) {
}
// Archive the middle pin.
require.NoError(t, db.ArchiveChatByID(ctx, second.ID))
_, err := db.ArchiveChatByID(ctx, second.ID)
require.NoError(t, err)
// Archived chat should have pin_order cleared. Remaining
// pins keep their original positions; the next mutation
File diff suppressed because it is too large.
@@ -454,95 +454,91 @@ WHERE
-- Returns paginated sessions with aggregated metadata, token counts, and
-- the most recent user prompt. A "session" is a logical grouping of
-- interceptions that share the same session_id (set by the client).
WITH filtered_interceptions AS (
--
-- Pagination-first strategy: identify the page of sessions cheaply via a
-- single GROUP BY scan, then do expensive lateral joins (tokens, prompts,
-- first-interception metadata) only for the ~page-size result set.
WITH cursor_pos AS (
-- Resolve the cursor's started_at once, outside the HAVING clause,
-- so the planner cannot accidentally re-evaluate it per group.
SELECT MIN(aibridge_interceptions.started_at) AS started_at
FROM aibridge_interceptions
WHERE aibridge_interceptions.session_id = @after_session_id AND aibridge_interceptions.ended_at IS NOT NULL
),
session_page AS (
-- Paginate at the session level first; only cheap aggregates here.
SELECT
aibridge_interceptions.*
ai.session_id,
ai.initiator_id,
MIN(ai.started_at) AS started_at,
MAX(ai.ended_at) AS ended_at,
COUNT(*) FILTER (WHERE ai.thread_root_id IS NULL) AS threads
FROM
aibridge_interceptions
aibridge_interceptions ai
WHERE
-- Remove inflight interceptions (ones which lack an ended_at value).
aibridge_interceptions.ended_at IS NOT NULL
ai.ended_at IS NOT NULL
-- Filter by time frame
AND CASE
WHEN @started_after::timestamptz != '0001-01-01 00:00:00+00'::timestamptz THEN aibridge_interceptions.started_at >= @started_after::timestamptz
WHEN @started_after::timestamptz != '0001-01-01 00:00:00+00'::timestamptz THEN ai.started_at >= @started_after::timestamptz
ELSE true
END
AND CASE
WHEN @started_before::timestamptz != '0001-01-01 00:00:00+00'::timestamptz THEN aibridge_interceptions.started_at <= @started_before::timestamptz
WHEN @started_before::timestamptz != '0001-01-01 00:00:00+00'::timestamptz THEN ai.started_at <= @started_before::timestamptz
ELSE true
END
-- Filter initiator_id
AND CASE
WHEN @initiator_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN aibridge_interceptions.initiator_id = @initiator_id::uuid
WHEN @initiator_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN ai.initiator_id = @initiator_id::uuid
ELSE true
END
-- Filter provider
AND CASE
WHEN @provider::text != '' THEN aibridge_interceptions.provider = @provider::text
WHEN @provider::text != '' THEN ai.provider = @provider::text
ELSE true
END
-- Filter model
AND CASE
WHEN @model::text != '' THEN aibridge_interceptions.model = @model::text
WHEN @model::text != '' THEN ai.model = @model::text
ELSE true
END
-- Filter client
AND CASE
WHEN @client::text != '' THEN COALESCE(aibridge_interceptions.client, 'Unknown') = @client::text
WHEN @client::text != '' THEN COALESCE(ai.client, 'Unknown') = @client::text
ELSE true
END
-- Filter session_id
AND CASE
WHEN @session_id::text != '' THEN aibridge_interceptions.session_id = @session_id::text
WHEN @session_id::text != '' THEN ai.session_id = @session_id::text
ELSE true
END
-- Authorize Filter clause will be injected below in ListAuthorizedAIBridgeSessions
-- @authorize_filter
),
session_tokens AS (
-- Aggregate token usage across all interceptions in each session.
-- Group by (session_id, initiator_id) to avoid merging sessions from
-- different users who happen to share the same client_session_id.
SELECT
fi.session_id,
fi.initiator_id,
COALESCE(SUM(tu.input_tokens), 0)::bigint AS input_tokens,
COALESCE(SUM(tu.output_tokens), 0)::bigint AS output_tokens
-- TODO: add extra token types once https://github.com/coder/aibridge/issues/150 lands.
FROM
filtered_interceptions fi
LEFT JOIN
aibridge_token_usages tu ON fi.id = tu.interception_id
GROUP BY
fi.session_id, fi.initiator_id
),
session_root AS (
-- Build one summary row per session. Group by (session_id, initiator_id)
-- to avoid merging sessions from different users who happen to share the
-- same client_session_id. The ARRAY_AGG with ORDER BY picks values from
-- the chronologically first interception for fields that should represent
-- the session as a whole (client, metadata). Threads are counted as
-- distinct root interception IDs: an interception with a NULL
-- thread_root_id is itself a thread root.
SELECT
fi.session_id,
fi.initiator_id,
(ARRAY_AGG(fi.client ORDER BY fi.started_at, fi.id))[1] AS client,
(ARRAY_AGG(fi.metadata ORDER BY fi.started_at, fi.id))[1] AS metadata,
ARRAY_AGG(DISTINCT fi.provider ORDER BY fi.provider) AS providers,
ARRAY_AGG(DISTINCT fi.model ORDER BY fi.model) AS models,
MIN(fi.started_at) AS started_at,
MAX(fi.ended_at) AS ended_at,
COUNT(DISTINCT COALESCE(fi.thread_root_id, fi.id)) AS threads,
-- Collect IDs for lateral prompt lookup.
ARRAY_AGG(fi.id) AS interception_ids
FROM
filtered_interceptions fi
GROUP BY
fi.session_id, fi.initiator_id
ai.session_id, ai.initiator_id
HAVING
-- Cursor pagination: uses a composite (started_at, session_id)
-- cursor to support keyset pagination. The less-than comparison
-- matches the DESC sort order so rows after the cursor come
-- later in results. The cursor value comes from cursor_pos to
-- guarantee single evaluation.
CASE
WHEN @after_session_id::text != '' THEN (
(MIN(ai.started_at), ai.session_id) < (
(SELECT started_at FROM cursor_pos),
@after_session_id::text
)
)
ELSE true
END
ORDER BY
MIN(ai.started_at) DESC,
ai.session_id DESC
LIMIT COALESCE(NULLIF(@limit_::integer, 0), 100)
OFFSET @offset_
)
SELECT
sr.session_id,
sp.session_id,
visible_users.id AS user_id,
visible_users.username AS user_username,
visible_users.name AS user_name,
@@ -551,45 +547,48 @@ SELECT
sr.models::text[] AS models,
COALESCE(sr.client, '')::varchar(64) AS client,
sr.metadata::jsonb AS metadata,
sr.started_at::timestamptz AS started_at,
sr.ended_at::timestamptz AS ended_at,
sr.threads,
sp.started_at::timestamptz AS started_at,
sp.ended_at::timestamptz AS ended_at,
sp.threads,
COALESCE(st.input_tokens, 0)::bigint AS input_tokens,
COALESCE(st.output_tokens, 0)::bigint AS output_tokens,
COALESCE(slp.prompt, '') AS last_prompt
FROM
session_root sr
session_page sp
JOIN
visible_users ON visible_users.id = sr.initiator_id
LEFT JOIN
session_tokens st ON st.session_id = sr.session_id AND st.initiator_id = sr.initiator_id
visible_users ON visible_users.id = sp.initiator_id
LEFT JOIN LATERAL (
    -- Aggregate per-session fields: the first interception's client and
    -- metadata, the distinct providers/models, and the interception IDs
    -- consumed by the token and prompt laterals below.
SELECT
(ARRAY_AGG(ai.client ORDER BY ai.started_at, ai.id))[1] AS client,
(ARRAY_AGG(ai.metadata ORDER BY ai.started_at, ai.id))[1] AS metadata,
ARRAY_AGG(DISTINCT ai.provider ORDER BY ai.provider) AS providers,
ARRAY_AGG(DISTINCT ai.model ORDER BY ai.model) AS models,
ARRAY_AGG(ai.id) AS interception_ids
FROM aibridge_interceptions ai
WHERE ai.session_id = sp.session_id
AND ai.initiator_id = sp.initiator_id
AND ai.ended_at IS NOT NULL
) sr ON true
LEFT JOIN LATERAL (
-- Aggregate tokens only for this session's interceptions.
SELECT
COALESCE(SUM(tu.input_tokens), 0)::bigint AS input_tokens,
COALESCE(SUM(tu.output_tokens), 0)::bigint AS output_tokens
FROM aibridge_token_usages tu
WHERE tu.interception_id = ANY(sr.interception_ids)
) st ON true
LEFT JOIN LATERAL (
-- Fetch only the most recent user prompt across all interceptions
-- in the session.
SELECT up.prompt
FROM aibridge_user_prompts up
WHERE up.interception_id = ANY(sr.interception_ids)
ORDER BY up.created_at DESC, up.id DESC
LIMIT 1
) slp ON true
WHERE
-- Cursor pagination: uses a composite (started_at, session_id) cursor
-- to support keyset pagination. The less-than comparison matches the
-- DESC sort order so that rows after the cursor come later in results.
CASE
WHEN @after_session_id::text != '' THEN (
(sr.started_at, sr.session_id) < (
(SELECT started_at FROM session_root WHERE session_id = @after_session_id),
@after_session_id::text
)
)
ELSE true
END
ORDER BY
sr.started_at DESC,
sr.session_id DESC
LIMIT COALESCE(NULLIF(@limit_::integer, 0), 100)
OFFSET @offset_
sp.started_at DESC,
sp.session_id DESC
;
-- name: ListAIBridgeSessionThreads :many
@@ -0,0 +1,80 @@
-- name: InsertChatAutomationEvent :one
INSERT INTO chat_automation_events (
id,
automation_id,
trigger_id,
received_at,
payload,
filter_matched,
resolved_labels,
matched_chat_id,
created_chat_id,
status,
error
) VALUES (
@id::uuid,
@automation_id::uuid,
sqlc.narg('trigger_id')::uuid,
@received_at::timestamptz,
@payload::jsonb,
@filter_matched::boolean,
sqlc.narg('resolved_labels')::jsonb,
sqlc.narg('matched_chat_id')::uuid,
sqlc.narg('created_chat_id')::uuid,
@status::chat_automation_event_status,
sqlc.narg('error')::text
) RETURNING *;
-- name: GetChatAutomationEventsByAutomationID :many
SELECT
*
FROM
chat_automation_events
WHERE
automation_id = @automation_id::uuid
AND CASE
WHEN sqlc.narg('status_filter')::chat_automation_event_status IS NOT NULL THEN status = sqlc.narg('status_filter')::chat_automation_event_status
ELSE true
END
ORDER BY
received_at DESC
OFFSET @offset_opt
LIMIT
COALESCE(NULLIF(@limit_opt :: int, 0), 50);
-- name: CountChatAutomationChatCreatesInWindow :one
-- Counts new-chat events in the rate-limit window. This count is
-- approximate under concurrency: concurrent webhook handlers may
-- each read the same count before any of them inserts, so brief
-- bursts can slightly exceed the configured cap.
SELECT COUNT(*)
FROM chat_automation_events
WHERE automation_id = @automation_id::uuid
AND status = 'created'
AND received_at > @window_start::timestamptz;
-- name: CountChatAutomationMessagesInWindow :one
-- Counts total message events (creates + continues) in the rate-limit
-- window. This count is approximate under concurrency: concurrent
-- webhook handlers may each read the same count before any of them
-- inserts, so brief bursts can slightly exceed the configured cap.
SELECT COUNT(*)
FROM chat_automation_events
WHERE automation_id = @automation_id::uuid
AND status IN ('created', 'continued')
AND received_at > @window_start::timestamptz;
-- name: PurgeOldChatAutomationEvents :execrows
-- Deletes old chat automation events in bounded batches to avoid
-- long-running locks on high-volume tables. Callers should loop
-- until zero rows are returned.
WITH old_events AS (
SELECT id
FROM chat_automation_events
WHERE received_at < @before::timestamptz
ORDER BY received_at ASC
LIMIT @limit_count
)
DELETE FROM chat_automation_events
USING old_events
WHERE chat_automation_events.id = old_events.id;
@@ -0,0 +1,85 @@
-- name: InsertChatAutomation :one
INSERT INTO chat_automations (
id,
owner_id,
organization_id,
name,
description,
instructions,
model_config_id,
mcp_server_ids,
allowed_tools,
status,
max_chat_creates_per_hour,
max_messages_per_hour,
created_at,
updated_at
) VALUES (
@id::uuid,
@owner_id::uuid,
@organization_id::uuid,
@name::text,
@description::text,
@instructions::text,
sqlc.narg('model_config_id')::uuid,
COALESCE(@mcp_server_ids::uuid[], '{}'::uuid[]),
COALESCE(@allowed_tools::text[], '{}'::text[]),
@status::chat_automation_status,
@max_chat_creates_per_hour::integer,
@max_messages_per_hour::integer,
@created_at::timestamptz,
@updated_at::timestamptz
) RETURNING *;
-- name: GetChatAutomationByID :one
SELECT * FROM chat_automations WHERE id = @id::uuid;
-- name: GetChatAutomations :many
SELECT
*
FROM
chat_automations
WHERE
CASE
WHEN @owner_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN chat_automations.owner_id = @owner_id
ELSE true
END
AND CASE
WHEN @organization_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN chat_automations.organization_id = @organization_id
ELSE true
END
-- Authorize Filter clause will be injected below in GetAuthorizedChatAutomations
-- @authorize_filter
ORDER BY
created_at DESC, id DESC
OFFSET @offset_opt
LIMIT
COALESCE(NULLIF(@limit_opt :: int, 0), 50);
-- name: UpdateChatAutomation :one
UPDATE chat_automations SET
name = @name::text,
description = @description::text,
instructions = @instructions::text,
model_config_id = sqlc.narg('model_config_id')::uuid,
mcp_server_ids = COALESCE(@mcp_server_ids::uuid[], '{}'::uuid[]),
allowed_tools = COALESCE(@allowed_tools::text[], '{}'::text[]),
status = @status::chat_automation_status,
max_chat_creates_per_hour = @max_chat_creates_per_hour::integer,
max_messages_per_hour = @max_messages_per_hour::integer,
updated_at = @updated_at::timestamptz
WHERE id = @id::uuid
RETURNING *;
-- name: DeleteChatAutomationByID :exec
DELETE FROM chat_automations WHERE id = @id::uuid;
-- name: CleanupDeletedMCPServerIDsFromChatAutomations :exec
UPDATE chat_automations
SET mcp_server_ids = (
SELECT COALESCE(array_agg(sid), '{}')
FROM unnest(chat_automations.mcp_server_ids) AS sid
WHERE sid IN (SELECT id FROM mcp_server_configs)
)
WHERE mcp_server_ids != '{}'
AND NOT (mcp_server_ids <@ COALESCE((SELECT array_agg(id) FROM mcp_server_configs), '{}'));
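The `unnest`/filter plus the `<@` containment guard above amount to dropping IDs that no longer exist in `mcp_server_configs`, skipping rows that are already clean. The row-level operation in Go terms (illustrative helper, not part of the diff):

```go
package main

import "fmt"

// pruneIDs keeps only the IDs still present in valid, mirroring the
// unnest/filter in CleanupDeletedMCPServerIDsFromChatAutomations. The
// SQL's <@ check just avoids rewriting rows whose array is already a
// subset of the valid set.
func pruneIDs(ids []string, valid map[string]bool) []string {
	kept := make([]string, 0, len(ids))
	for _, id := range ids {
		if valid[id] {
			kept = append(kept, id)
		}
	}
	return kept
}

func main() {
	valid := map[string]bool{"a": true, "c": true}
	fmt.Println(pruneIDs([]string{"a", "b", "c"}, valid)) // [a c]
}
```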
@@ -0,0 +1,87 @@
-- name: InsertChatAutomationTrigger :one
INSERT INTO chat_automation_triggers (
id,
automation_id,
type,
webhook_secret,
webhook_secret_key_id,
cron_schedule,
filter,
label_paths,
created_at,
updated_at
) VALUES (
@id::uuid,
@automation_id::uuid,
@type::chat_automation_trigger_type,
sqlc.narg('webhook_secret')::text,
sqlc.narg('webhook_secret_key_id')::text,
sqlc.narg('cron_schedule')::text,
sqlc.narg('filter')::jsonb,
sqlc.narg('label_paths')::jsonb,
@created_at::timestamptz,
@updated_at::timestamptz
) RETURNING *;
-- name: GetChatAutomationTriggerByID :one
SELECT * FROM chat_automation_triggers WHERE id = @id::uuid;
-- name: GetChatAutomationTriggersByAutomationID :many
SELECT * FROM chat_automation_triggers
WHERE automation_id = @automation_id::uuid
ORDER BY created_at ASC;
-- name: UpdateChatAutomationTrigger :one
UPDATE chat_automation_triggers SET
cron_schedule = COALESCE(sqlc.narg('cron_schedule'), cron_schedule),
filter = COALESCE(sqlc.narg('filter'), filter),
label_paths = COALESCE(sqlc.narg('label_paths'), label_paths),
updated_at = @updated_at::timestamptz
WHERE id = @id::uuid
RETURNING *;
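`COALESCE(sqlc.narg(...), column)` in the update above gives partial-update semantics: a NULL argument keeps the stored value, a non-NULL one overwrites it. A Go sketch of the same rule (hypothetical generic helper):

```go
package main

import "fmt"

// coalesce mirrors the SQL COALESCE(sqlc.narg(...), column) pattern used
// in UpdateChatAutomationTrigger: a nil argument leaves the stored value
// unchanged, a non-nil one overwrites it.
func coalesce[T any](arg *T, current T) T {
	if arg != nil {
		return *arg
	}
	return current
}

type trigger struct {
	CronSchedule string
	Filter       string
}

func main() {
	t := trigger{CronSchedule: "0 * * * *", Filter: `{"repo":"coder"}`}
	newSchedule := "30 * * * *"
	// Only cron_schedule is provided; filter stays as stored.
	t.CronSchedule = coalesce(&newSchedule, t.CronSchedule)
	t.Filter = coalesce[string](nil, t.Filter)
	fmt.Println(t.CronSchedule, t.Filter)
}
```

One consequence worth noting: with this pattern a caller cannot set a column back to NULL, which is why the webhook-secret update below assigns `sqlc.narg(...)` directly instead of coalescing.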
-- name: UpdateChatAutomationTriggerWebhookSecret :one
UPDATE chat_automation_triggers SET
webhook_secret = sqlc.narg('webhook_secret')::text,
webhook_secret_key_id = sqlc.narg('webhook_secret_key_id')::text,
updated_at = @updated_at::timestamptz
WHERE id = @id::uuid
RETURNING *;
-- name: DeleteChatAutomationTriggerByID :exec
DELETE FROM chat_automation_triggers WHERE id = @id::uuid;
-- name: GetActiveChatAutomationCronTriggers :many
-- Returns all cron triggers whose parent automation is active or in
-- preview mode. The scheduler uses this to evaluate which triggers
-- are due.
SELECT
t.id,
t.automation_id,
t.type,
t.cron_schedule,
t.filter,
t.label_paths,
t.last_triggered_at,
t.created_at,
t.updated_at,
a.status AS automation_status,
a.owner_id AS automation_owner_id,
a.instructions AS automation_instructions,
a.name AS automation_name,
a.organization_id AS automation_organization_id,
a.model_config_id AS automation_model_config_id,
a.mcp_server_ids AS automation_mcp_server_ids,
a.allowed_tools AS automation_allowed_tools,
a.max_chat_creates_per_hour AS automation_max_chat_creates_per_hour,
a.max_messages_per_hour AS automation_max_messages_per_hour
FROM chat_automation_triggers t
JOIN chat_automations a ON a.id = t.automation_id
WHERE t.type = 'cron'
AND t.cron_schedule IS NOT NULL
AND a.status IN ('active', 'preview');
-- name: UpdateChatAutomationTriggerLastTriggeredAt :exec
UPDATE chat_automation_triggers
SET last_triggered_at = @last_triggered_at::timestamptz
WHERE id = @id::uuid;
@@ -1,9 +1,24 @@
-- name: ArchiveChatByID :exec
UPDATE chats SET archived = true, pin_order = 0, updated_at = NOW()
WHERE id = @id OR root_chat_id = @id;
-- name: ArchiveChatByID :many
WITH chats AS (
UPDATE chats
SET archived = true, pin_order = 0, updated_at = NOW()
WHERE id = @id::uuid OR root_chat_id = @id::uuid
RETURNING *
)
SELECT *
FROM chats
ORDER BY (id = @id::uuid) DESC, created_at ASC, id ASC;
-- name: UnarchiveChatByID :exec
UPDATE chats SET archived = false, updated_at = NOW() WHERE id = @id::uuid;
-- name: UnarchiveChatByID :many
WITH chats AS (
UPDATE chats
SET archived = false, updated_at = NOW()
WHERE id = @id::uuid OR root_chat_id = @id::uuid
RETURNING *
)
SELECT *
FROM chats
ORDER BY (id = @id::uuid) DESC, created_at ASC, id ASC;
-- name: PinChatByID :exec
WITH target_chat AS (
@@ -528,6 +543,17 @@ WHERE
id = @id::uuid
RETURNING *;
-- name: UpdateChatLastInjectedContext :one
-- Updates the cached injected context parts (AGENTS.md +
-- skills) on the chat row. Called only when context changes
-- (first workspace attach or agent change). updated_at is
-- intentionally not touched to avoid reordering the chat list.
UPDATE chats SET
last_injected_context = sqlc.narg('last_injected_context')::jsonb
WHERE
id = @id::uuid
RETURNING *;
-- name: UpdateChatMCPServerIDs :one
UPDATE
chats
@@ -247,6 +247,8 @@ sql:
mcp_server_tool_snapshots: MCPServerToolSnapshots
mcp_server_config_id: MCPServerConfigID
mcp_server_ids: MCPServerIDs
automation_mcp_server_ids: AutomationMCPServerIDs
webhook_secret_key_id: WebhookSecretKeyID
icon_url: IconURL
oauth2_client_id: OAuth2ClientID
oauth2_client_secret: OAuth2ClientSecret
@@ -15,6 +15,9 @@ const (
UniqueAPIKeysPkey UniqueConstraint = "api_keys_pkey" // ALTER TABLE ONLY api_keys ADD CONSTRAINT api_keys_pkey PRIMARY KEY (id);
UniqueAuditLogsPkey UniqueConstraint = "audit_logs_pkey" // ALTER TABLE ONLY audit_logs ADD CONSTRAINT audit_logs_pkey PRIMARY KEY (id);
UniqueBoundaryUsageStatsPkey UniqueConstraint = "boundary_usage_stats_pkey" // ALTER TABLE ONLY boundary_usage_stats ADD CONSTRAINT boundary_usage_stats_pkey PRIMARY KEY (replica_id);
UniqueChatAutomationEventsPkey UniqueConstraint = "chat_automation_events_pkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_pkey PRIMARY KEY (id);
UniqueChatAutomationTriggersPkey UniqueConstraint = "chat_automation_triggers_pkey" // ALTER TABLE ONLY chat_automation_triggers ADD CONSTRAINT chat_automation_triggers_pkey PRIMARY KEY (id);
UniqueChatAutomationsPkey UniqueConstraint = "chat_automations_pkey" // ALTER TABLE ONLY chat_automations ADD CONSTRAINT chat_automations_pkey PRIMARY KEY (id);
UniqueChatDiffStatusesPkey UniqueConstraint = "chat_diff_statuses_pkey" // ALTER TABLE ONLY chat_diff_statuses ADD CONSTRAINT chat_diff_statuses_pkey PRIMARY KEY (chat_id);
UniqueChatFilesPkey UniqueConstraint = "chat_files_pkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_pkey PRIMARY KEY (id);
UniqueChatMessagesPkey UniqueConstraint = "chat_messages_pkey" // ALTER TABLE ONLY chat_messages ADD CONSTRAINT chat_messages_pkey PRIMARY KEY (id);
@@ -125,6 +128,7 @@ const (
UniqueWorkspaceResourcesPkey UniqueConstraint = "workspace_resources_pkey" // ALTER TABLE ONLY workspace_resources ADD CONSTRAINT workspace_resources_pkey PRIMARY KEY (id);
UniqueWorkspacesPkey UniqueConstraint = "workspaces_pkey" // ALTER TABLE ONLY workspaces ADD CONSTRAINT workspaces_pkey PRIMARY KEY (id);
UniqueIndexAPIKeyName UniqueConstraint = "idx_api_key_name" // CREATE UNIQUE INDEX idx_api_key_name ON api_keys USING btree (user_id, token_name) WHERE (login_type = 'token'::login_type);
UniqueIndexChatAutomationsOwnerOrgName UniqueConstraint = "idx_chat_automations_owner_org_name" // CREATE UNIQUE INDEX idx_chat_automations_owner_org_name ON chat_automations USING btree (owner_id, organization_id, name);
UniqueIndexChatModelConfigsSingleDefault UniqueConstraint = "idx_chat_model_configs_single_default" // CREATE UNIQUE INDEX idx_chat_model_configs_single_default ON chat_model_configs USING btree ((1)) WHERE ((is_default = true) AND (deleted = false));
UniqueIndexConnectionLogsConnectionIDWorkspaceIDAgentName UniqueConstraint = "idx_connection_logs_connection_id_workspace_id_agent_name" // CREATE UNIQUE INDEX idx_connection_logs_connection_id_workspace_id_agent_name ON connection_logs USING btree (connection_id, workspace_id, agent_name);
UniqueIndexCustomRolesNameLowerOrganizationID UniqueConstraint = "idx_custom_roles_name_lower_organization_id" // CREATE UNIQUE INDEX idx_custom_roles_name_lower_organization_id ON custom_roles USING btree (lower(name), COALESCE(organization_id, '00000000-0000-0000-0000-000000000000'::uuid));
@@ -393,6 +393,11 @@ func (api *API) postChats(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
apiKey := httpmw.APIKey(r)
if !api.Authorize(r, policy.ActionCreate, rbac.ResourceChat.WithOwner(apiKey.UserID.String())) {
httpapi.Forbidden(rw)
return
}
var req codersdk.CreateChatRequest
if !httpapi.Read(ctx, rw, r, &req) {
return
@@ -498,6 +503,10 @@ func (api *API) postChats(rw http.ResponseWriter, r *http.Request) {
})
return
}
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
return
}
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to create chat.",
Detail: err.Error(),
@@ -616,6 +625,10 @@ func (api *API) chatCostSummary(rw http.ResponseWriter, r *http.Request) {
EndDate: endDate,
})
if err != nil {
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
return
}
httpapi.InternalServerError(rw, err)
return
}
@@ -626,6 +639,10 @@ func (api *API) chatCostSummary(rw http.ResponseWriter, r *http.Request) {
EndDate: endDate,
})
if err != nil {
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
return
}
httpapi.InternalServerError(rw, err)
return
}
@@ -636,6 +653,10 @@ func (api *API) chatCostSummary(rw http.ResponseWriter, r *http.Request) {
EndDate: endDate,
})
if err != nil {
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
return
}
httpapi.InternalServerError(rw, err)
return
}
@@ -1620,20 +1641,20 @@ func (api *API) patchChat(rw http.ResponseWriter, r *http.Request) {
}
var err error
// Use chatDaemon when available so it can notify active
// subscribers. Fall back to direct DB for the simple
// archive flag — no streaming state is involved.
// Use chatDaemon when available so it can interrupt active
// processing before broadcasting archive state. Fall back to
// direct DB when no daemon is running.
if archived {
if api.chatDaemon != nil {
err = api.chatDaemon.ArchiveChat(ctx, chat)
} else {
err = api.Database.ArchiveChatByID(ctx, chat.ID)
_, err = api.Database.ArchiveChatByID(ctx, chat.ID)
}
} else {
if api.chatDaemon != nil {
err = api.chatDaemon.UnarchiveChat(ctx, chat)
} else {
err = api.Database.UnarchiveChatByID(ctx, chat.ID)
_, err = api.Database.UnarchiveChatByID(ctx, chat.ID)
}
}
if err != nil {
@@ -194,10 +194,15 @@ func TestPostChats(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
user := coderdtest.CreateFirstUser(t, client.Client)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
modelConfig := createChatModelConfig(t, client)
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
// Use a member with agents-access instead of the owner to
// verify least-privilege access.
memberClientRaw, member := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID, rbac.RoleAgentsAccess())
memberClient := codersdk.NewExperimentalClient(memberClientRaw)
chat, err := memberClient.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
@@ -208,19 +213,18 @@ func TestPostChats(t *testing.T) {
require.NoError(t, err)
require.NotEqual(t, uuid.Nil, chat.ID)
require.Equal(t, user.UserID, chat.OwnerID)
require.Equal(t, member.ID, chat.OwnerID)
require.Equal(t, modelConfig.ID, chat.LastModelConfigID)
require.Equal(t, "hello from chats route tests", chat.Title)
require.Equal(t, codersdk.ChatStatusPending, chat.Status)
require.NotZero(t, chat.CreatedAt)
require.NotZero(t, chat.UpdatedAt)
require.Nil(t, chat.WorkspaceID)
require.NotNil(t, chat.RootChatID)
require.Equal(t, chat.ID, *chat.RootChatID)
chatResult, err := client.GetChat(ctx, chat.ID)
chatResult, err := memberClient.GetChat(ctx, chat.ID)
require.NoError(t, err)
messagesResult, err := client.GetChatMessages(ctx, chat.ID, nil)
messagesResult, err := memberClient.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
require.Equal(t, chat.ID, chatResult.ID)
@@ -240,6 +244,29 @@ func TestPostChats(t *testing.T) {
require.True(t, foundUserMessage)
})
t.Run("MemberWithoutAgentsAccess", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Member without agents-access should be denied.
memberClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID)
memberClient := codersdk.NewExperimentalClient(memberClientRaw)
_, err := memberClient.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "this should fail",
},
},
})
requireSDKError(t, err, http.StatusForbidden)
})
t.Run("HidesSystemPromptMessages", func(t *testing.T) {
t.Parallel()
@@ -271,7 +298,7 @@ func TestPostChats(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
adminClient, db := newChatClientWithDatabase(t)
firstUser := coderdtest.CreateFirstUser(t, adminClient.Client)
memberClientRaw, _ := coderdtest.CreateAnotherUser(t, adminClient.Client, firstUser.OrganizationID)
memberClientRaw, _ := coderdtest.CreateAnotherUser(t, adminClient.Client, firstUser.OrganizationID, rbac.RoleAgentsAccess())
memberClient := codersdk.NewExperimentalClient(memberClientRaw)
workspaceBuild := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
@@ -307,6 +334,7 @@ func TestPostChats(t *testing.T) {
adminClient.Client,
firstUser.OrganizationID,
rbac.ScopedRoleOrgAdmin(firstUser.OrganizationID),
rbac.RoleAgentsAccess(),
)
orgAdminClient := codersdk.NewExperimentalClient(orgAdminClientRaw)
@@ -518,7 +546,7 @@ func TestListChats(t *testing.T) {
})
require.NoError(t, err)
memberClientRaw, member := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID)
memberClientRaw, member := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID, rbac.RoleAgentsAccess())
memberClient := codersdk.NewExperimentalClient(memberClientRaw)
memberDBChat, err := db.InsertChat(dbauthz.AsSystemRestricted(ctx), database.InsertChatParams{
OwnerID: member.ID,
@@ -586,6 +614,32 @@ func TestListChats(t *testing.T) {
require.Equal(t, memberChats[0].ID, memberChats[0].DiffStatus.ChatID)
})
t.Run("MemberWithoutAgentsAccess", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, db := newChatClientWithDatabase(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
modelConfig := createChatModelConfig(t, client)
// Create a member without agents-access and insert a chat
// owned by them via system context. This verifies the
// RBAC filter actually excludes results rather than
// returning empty because no chats exist.
memberClientRaw, member := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID)
memberClient := codersdk.NewExperimentalClient(memberClientRaw)
_, err := db.InsertChat(dbauthz.AsSystemRestricted(ctx), database.InsertChatParams{
OwnerID: member.ID,
LastModelConfigID: modelConfig.ID,
Title: "member chat",
})
require.NoError(t, err)
chats, err := memberClient.ListChats(ctx, nil)
require.NoError(t, err)
require.Empty(t, chats)
})
t.Run("Unauthenticated", func(t *testing.T) {
t.Parallel()
@@ -997,6 +1051,102 @@ func TestWatchChats(t *testing.T) {
}
})
t.Run("ArchiveAndUnarchiveEmitEventsForDescendants", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, db := newChatClientWithDatabase(t)
user := coderdtest.CreateFirstUser(t, client.Client)
modelConfig := createChatModelConfig(t, client)
parentChat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "watch root chat",
},
},
})
require.NoError(t, err)
childOne, err := db.InsertChat(dbauthz.AsSystemRestricted(ctx), database.InsertChatParams{
OwnerID: user.UserID,
LastModelConfigID: modelConfig.ID,
Title: "watch child 1",
ParentChatID: uuid.NullUUID{UUID: parentChat.ID, Valid: true},
RootChatID: uuid.NullUUID{UUID: parentChat.ID, Valid: true},
})
require.NoError(t, err)
childTwo, err := db.InsertChat(dbauthz.AsSystemRestricted(ctx), database.InsertChatParams{
OwnerID: user.UserID,
LastModelConfigID: modelConfig.ID,
Title: "watch child 2",
ParentChatID: uuid.NullUUID{UUID: parentChat.ID, Valid: true},
RootChatID: uuid.NullUUID{UUID: parentChat.ID, Valid: true},
})
require.NoError(t, err)
conn, err := client.Dial(ctx, "/api/experimental/chats/watch", nil)
require.NoError(t, err)
defer conn.Close(websocket.StatusNormalClosure, "done")
type watchEvent struct {
Type codersdk.ServerSentEventType `json:"type"`
Data json.RawMessage `json:"data,omitempty"`
}
var ping watchEvent
err = wsjson.Read(ctx, conn, &ping)
require.NoError(t, err)
require.Equal(t, codersdk.ServerSentEventTypePing, ping.Type)
collectLifecycleEvents := func(expectedKind coderdpubsub.ChatEventKind) map[uuid.UUID]coderdpubsub.ChatEvent {
t.Helper()
events := make(map[uuid.UUID]coderdpubsub.ChatEvent, 3)
for len(events) < 3 {
var update watchEvent
err = wsjson.Read(ctx, conn, &update)
require.NoError(t, err)
if update.Type == codersdk.ServerSentEventTypePing {
continue
}
require.Equal(t, codersdk.ServerSentEventTypeData, update.Type)
var payload coderdpubsub.ChatEvent
err = json.Unmarshal(update.Data, &payload)
require.NoError(t, err)
if payload.Kind != expectedKind {
continue
}
events[payload.Chat.ID] = payload
}
return events
}
assertLifecycleEvents := func(events map[uuid.UUID]coderdpubsub.ChatEvent, archived bool) {
t.Helper()
require.Len(t, events, 3)
for _, chatID := range []uuid.UUID{parentChat.ID, childOne.ID, childTwo.ID} {
payload, ok := events[chatID]
require.True(t, ok, "missing event for chat %s", chatID)
require.Equal(t, archived, payload.Chat.Archived)
}
}
err = client.UpdateChat(ctx, parentChat.ID, codersdk.UpdateChatRequest{Archived: ptr.Ref(true)})
require.NoError(t, err)
deletedEvents := collectLifecycleEvents(coderdpubsub.ChatEventKindDeleted)
assertLifecycleEvents(deletedEvents, true)
err = client.UpdateChat(ctx, parentChat.ID, codersdk.UpdateChatRequest{Archived: ptr.Ref(false)})
require.NoError(t, err)
createdEvents := collectLifecycleEvents(coderdpubsub.ChatEventKindCreated)
assertLifecycleEvents(createdEvents, false)
})
t.Run("Unauthenticated", func(t *testing.T) {
t.Parallel()
@@ -1958,7 +2108,7 @@ func TestGetChat(t *testing.T) {
})
require.NoError(t, err)
-otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID)
+otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID, rbac.RoleAgentsAccess())
otherClient := codersdk.NewExperimentalClient(otherClientRaw)
_, err = otherClient.GetChat(ctx, createdChat.ID)
requireSDKError(t, err, http.StatusNotFound)
@@ -2155,6 +2305,96 @@ func TestUnarchiveChat(t *testing.T) {
require.Empty(t, archivedChats)
})
t.Run("UnarchivesChildren", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, db := newChatClientWithDatabase(t)
user := coderdtest.CreateFirstUser(t, client.Client)
modelConfig := createChatModelConfig(t, client)
parentChat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "parent chat",
},
},
})
require.NoError(t, err)
child1, err := db.InsertChat(dbauthz.AsSystemRestricted(ctx), database.InsertChatParams{
OwnerID: user.UserID,
LastModelConfigID: modelConfig.ID,
Title: "child 1",
ParentChatID: uuid.NullUUID{UUID: parentChat.ID, Valid: true},
RootChatID: uuid.NullUUID{UUID: parentChat.ID, Valid: true},
})
require.NoError(t, err)
child2, err := db.InsertChat(dbauthz.AsSystemRestricted(ctx), database.InsertChatParams{
OwnerID: user.UserID,
LastModelConfigID: modelConfig.ID,
Title: "child 2",
ParentChatID: uuid.NullUUID{UUID: parentChat.ID, Valid: true},
RootChatID: uuid.NullUUID{UUID: parentChat.ID, Valid: true},
})
require.NoError(t, err)
err = client.UpdateChat(ctx, parentChat.ID, codersdk.UpdateChatRequest{Archived: ptr.Ref(true)})
require.NoError(t, err)
err = client.UpdateChat(ctx, parentChat.ID, codersdk.UpdateChatRequest{Archived: ptr.Ref(false)})
require.NoError(t, err)
activeChats, err := client.ListChats(ctx, &codersdk.ListChatsOptions{
Query: "archived:false",
})
require.NoError(t, err)
var foundParent bool
var foundChild1 bool
var foundChild2 bool
for _, chat := range activeChats {
switch chat.ID {
case parentChat.ID:
foundParent = true
require.False(t, chat.Archived)
case child1.ID:
foundChild1 = true
require.False(t, chat.Archived)
case child2.ID:
foundChild2 = true
require.False(t, chat.Archived)
}
}
require.True(t, foundParent, "parent should be listed as active")
require.True(t, foundChild1, "child1 should be listed as active")
require.True(t, foundChild2, "child2 should be listed as active")
archivedChats, err := client.ListChats(ctx, &codersdk.ListChatsOptions{
Query: "archived:true",
})
require.NoError(t, err)
for _, chat := range archivedChats {
require.NotEqual(t, parentChat.ID, chat.ID, "parent should not remain archived")
require.NotEqual(t, child1.ID, chat.ID, "child1 should not remain archived")
require.NotEqual(t, child2.ID, chat.ID, "child2 should not remain archived")
}
dbParent, err := db.GetChatByID(dbauthz.AsSystemRestricted(ctx), parentChat.ID)
require.NoError(t, err)
require.False(t, dbParent.Archived, "parent should be unarchived")
dbChild1, err := db.GetChatByID(dbauthz.AsSystemRestricted(ctx), child1.ID)
require.NoError(t, err)
require.False(t, dbChild1.Archived, "child1 should be unarchived")
dbChild2, err := db.GetChatByID(dbauthz.AsSystemRestricted(ctx), child2.ID)
require.NoError(t, err)
require.False(t, dbChild2.Archived, "child2 should be unarchived")
})
t.Run("NotArchived", func(t *testing.T) {
t.Parallel()
@@ -3530,7 +3770,7 @@ func TestRegenerateChatTitle(t *testing.T) {
})
require.NoError(t, err)
-otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID)
+otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID, rbac.RoleAgentsAccess())
otherClient := codersdk.NewExperimentalClient(otherClientRaw)
_, err = otherClient.RegenerateChatTitle(ctx, createdChat.ID)
requireSDKError(t, err, http.StatusNotFound)
@@ -3855,7 +4095,7 @@ func TestGetChatDiffStatus(t *testing.T) {
})
require.NoError(t, err)
-otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID)
+otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID, rbac.RoleAgentsAccess())
otherClient := codersdk.NewExperimentalClient(otherClientRaw)
_, err = otherClient.GetChat(ctx, createdChat.ID)
requireSDKError(t, err, http.StatusNotFound)
@@ -4088,7 +4328,7 @@ func TestGetChatDiffContents(t *testing.T) {
})
require.NoError(t, err)
-otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID)
+otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID, rbac.RoleAgentsAccess())
otherClient := codersdk.NewExperimentalClient(otherClientRaw)
_, err = otherClient.GetChatDiffContents(ctx, createdChat.ID)
requireSDKError(t, err, http.StatusNotFound)
@@ -4884,7 +5124,7 @@ func TestGetChatFile(t *testing.T) {
uploaded, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", "test.png", bytes.NewReader(data))
require.NoError(t, err)
-otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID)
+otherClientRaw, _ := coderdtest.CreateAnotherUser(t, client.Client, firstUser.OrganizationID, rbac.RoleAgentsAccess())
otherClient := codersdk.NewExperimentalClient(otherClientRaw)
_, _, err = otherClient.GetChatFile(ctx, uploaded.ID)
requireSDKError(t, err, http.StatusNotFound)
@@ -82,6 +82,16 @@ var (
Type: "chat",
}
// ResourceChatAutomation
// Valid Actions
// - "ActionCreate" :: create a chat automation
// - "ActionDelete" :: delete a chat automation
// - "ActionRead" :: read chat automation configuration
// - "ActionUpdate" :: update a chat automation
ResourceChatAutomation = Object{
Type: "chat_automation",
}
// ResourceConnectionLog
// Valid Actions
// - "ActionRead" :: read connection logs
@@ -440,6 +450,7 @@ func AllResources() []Objecter {
ResourceAuditLog,
ResourceBoundaryUsage,
ResourceChat,
ResourceChatAutomation,
ResourceConnectionLog,
ResourceCryptoKey,
ResourceDebugInfo,
@@ -84,6 +84,13 @@ var chatActions = map[Action]ActionDefinition{
ActionDelete: "delete a chat",
}
var chatAutomationActions = map[Action]ActionDefinition{
ActionCreate: "create a chat automation",
ActionRead: "read chat automation configuration",
ActionUpdate: "update a chat automation",
ActionDelete: "delete a chat automation",
}
// RBACPermissions is indexed by the type
var RBACPermissions = map[string]PermissionDefinition{
// Wildcard is every object, and the action "*" provides all actions.
@@ -113,6 +120,9 @@ var RBACPermissions = map[string]PermissionDefinition{
"chat": {
Actions: chatActions,
},
"chat_automation": {
Actions: chatAutomationActions,
},
// Dormant workspaces have the same perms as workspaces.
"workspace_dormant": {
Actions: workspaceActions,
@@ -21,6 +21,7 @@ const (
templateAdmin string = "template-admin"
userAdmin string = "user-admin"
auditor string = "auditor"
agentsAccess string = "agents-access"
// customSiteRole is a placeholder for all custom site roles.
// This is used for what roles can assign other roles.
// TODO: Make this more dynamic to allow other roles to grant.
@@ -142,6 +143,7 @@ func RoleTemplateAdmin() RoleIdentifier { return RoleIdentifier{Name: templateAd
func RoleUserAdmin() RoleIdentifier { return RoleIdentifier{Name: userAdmin} }
func RoleMember() RoleIdentifier { return RoleIdentifier{Name: member} }
func RoleAuditor() RoleIdentifier { return RoleIdentifier{Name: auditor} }
func RoleAgentsAccess() RoleIdentifier { return RoleIdentifier{Name: agentsAccess} }
func RoleOrgAdmin() string {
return orgAdmin
@@ -316,7 +318,7 @@ func ReloadBuiltinRoles(opts *RoleOptions) {
denyPermissions...,
),
User: append(
-allPermsExcept(ResourceWorkspaceDormant, ResourcePrebuiltWorkspace, ResourceWorkspace, ResourceUser, ResourceOrganizationMember, ResourceOrganizationMember, ResourceBoundaryUsage, ResourceAibridgeInterception),
+allPermsExcept(ResourceWorkspaceDormant, ResourcePrebuiltWorkspace, ResourceWorkspace, ResourceUser, ResourceOrganizationMember, ResourceBoundaryUsage, ResourceAibridgeInterception, ResourceChat),
Permissions(map[string][]policy.Action{
// Users cannot do create/update/delete on themselves, but they
// can read their own details.
@@ -402,6 +404,21 @@ func ReloadBuiltinRoles(opts *RoleOptions) {
ByOrgID: map[string]OrgPermissions{},
}.withCachedRegoValue()
agentsAccessRole := Role{
Identifier: RoleAgentsAccess(),
DisplayName: "Coder Agents User",
Site: []Permission{},
User: Permissions(map[string][]policy.Action{
ResourceChat.Type: {
policy.ActionCreate,
policy.ActionRead,
policy.ActionUpdate,
policy.ActionDelete,
},
}),
ByOrgID: map[string]OrgPermissions{},
}.withCachedRegoValue()
builtInRoles = map[string]func(orgID uuid.UUID) Role{
// admin grants all actions to all resources.
owner: func(_ uuid.UUID) Role {
@@ -428,6 +445,13 @@ func ReloadBuiltinRoles(opts *RoleOptions) {
return userAdminRole
},
// agentsAccess grants all actions on chat resources owned
// by the user. Without this role, members cannot create
// or interact with chats.
agentsAccess: func(_ uuid.UUID) Role {
return agentsAccessRole
},
// orgAdmin returns a role with all actions allowed in a given
// organization scope.
orgAdmin: func(organizationID uuid.UUID) Role {
@@ -600,6 +624,7 @@ var assignRoles = map[string]map[string]bool{
userAdmin: true,
customSiteRole: true,
customOrganizationRole: true,
agentsAccess: true,
},
owner: {
owner: true,
@@ -615,10 +640,12 @@ var assignRoles = map[string]map[string]bool{
userAdmin: true,
customSiteRole: true,
customOrganizationRole: true,
agentsAccess: true,
},
userAdmin: {
-member: true,
-orgMember: true,
+member: true,
+orgMember: true,
+agentsAccess: true,
},
orgAdmin: {
orgAdmin: true,
@@ -854,13 +881,20 @@ func SiteBuiltInRoles() []Role {
for _, roleF := range builtInRoles {
// Must provide some non-nil uuid to filter out org roles.
role := roleF(uuid.New())
-if !role.Identifier.IsOrgRole() {
+if !role.Identifier.IsOrgRole() && role.Identifier != RoleAgentsAccess() {
roles = append(roles, role)
}
}
return roles
}
// AgentsAccessRole returns the agents-access role for use by callers
// that need to include it conditionally (e.g. when the agents
// experiment is enabled).
func AgentsAccessRole() Role {
return builtInRoles[agentsAccess](uuid.Nil)
}
// ChangeRoleSet is a helper function that finds the difference of 2 sets of
// roles. When setting a user's new roles, it is equivalent to adding and
// removing roles. This set determines the changes, so that the appropriate
@@ -49,6 +49,11 @@ func TestBuiltInRoles(t *testing.T) {
require.NoError(t, r.Valid(), "invalid role")
})
}
t.Run("agents-access", func(t *testing.T) {
t.Parallel()
require.NoError(t, rbac.AgentsAccessRole().Valid(), "invalid role")
})
}
// permissionGranted checks whether a permission list contains a
@@ -199,6 +204,7 @@ func TestRolePermissions(t *testing.T) {
orgUserAdmin := authSubject{Name: "org_user_admin", Actor: rbac.Subject{ID: templateAdminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgUserAdmin(orgID)}, Scope: rbac.ScopeAll}.WithCachedASTValue()}
orgTemplateAdmin := authSubject{Name: "org_template_admin", Actor: rbac.Subject{ID: userAdminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgTemplateAdmin(orgID)}, Scope: rbac.ScopeAll}.WithCachedASTValue()}
orgAdminBanWorkspace := authSubject{Name: "org_admin_workspace_ban", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgAdmin(orgID), rbac.ScopedRoleOrgWorkspaceCreationBan(orgID)}, Scope: rbac.ScopeAll}.WithCachedASTValue()}
agentsAccessUser := authSubject{Name: "chat_access", Actor: rbac.Subject{ID: currentUser.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.RoleAgentsAccess()}, Scope: rbac.ScopeAll}.WithCachedASTValue()}
setOrgNotMe := authSubjectSet{orgAdmin, orgAuditor, orgUserAdmin, orgTemplateAdmin}
otherOrgAdmin := authSubject{Name: "org_admin_other", Actor: rbac.Subject{ID: uuid.NewString(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgAdmin(otherOrg)}, Scope: rbac.ScopeAll}.WithCachedASTValue()}
@@ -210,7 +216,7 @@ func TestRolePermissions(t *testing.T) {
// requiredSubjects are required to be asserted in each test case. This is
// to make sure one is not forgotten.
requiredSubjects := []authSubject{
-memberMe, owner,
+memberMe, owner, agentsAccessUser,
orgAdmin, otherOrgAdmin, orgAuditor, orgUserAdmin, orgTemplateAdmin,
templateAdmin, userAdmin, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
}
@@ -233,7 +239,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceUserObject(currentUser),
AuthorizeMap: map[bool][]hasAuthSubjects{
-true: {owner, memberMe, templateAdmin, userAdmin, orgUserAdmin, otherOrgAdmin, otherOrgUserAdmin, orgAdmin},
+true: {owner, memberMe, agentsAccessUser, templateAdmin, userAdmin, orgUserAdmin, otherOrgAdmin, otherOrgUserAdmin, orgAdmin},
false: {
orgTemplateAdmin, orgAuditor,
otherOrgAuditor, otherOrgTemplateAdmin,
@@ -246,7 +252,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceUser,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, userAdmin},
-false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin},
+false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin},
},
},
{
@@ -256,7 +262,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin, orgAdminBanWorkspace},
-false: {setOtherOrg, memberMe, userAdmin, orgAuditor, orgUserAdmin},
+false: {setOtherOrg, memberMe, agentsAccessUser, userAdmin, orgAuditor, orgUserAdmin},
},
},
{
@@ -266,7 +272,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgAdminBanWorkspace},
-false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
+false: {setOtherOrg, memberMe, agentsAccessUser, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
{
@@ -276,7 +282,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
-false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgAdminBanWorkspace},
+false: {setOtherOrg, memberMe, agentsAccessUser, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgAdminBanWorkspace},
},
},
{
@@ -286,7 +292,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.InOrg(orgID).WithOwner(policy.WildcardSymbol),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
-false: {setOtherOrg, orgUserAdmin, orgAuditor, memberMe, userAdmin, templateAdmin, orgTemplateAdmin},
+false: {setOtherOrg, orgUserAdmin, orgAuditor, memberMe, agentsAccessUser, userAdmin, templateAdmin, orgTemplateAdmin},
},
},
{
@@ -296,7 +302,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
-false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
+false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -306,7 +312,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
-false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
+false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -315,7 +321,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
-false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgAdminBanWorkspace},
+false: {setOtherOrg, memberMe, agentsAccessUser, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgAdminBanWorkspace},
},
},
{
@@ -324,7 +330,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgAdminBanWorkspace},
-false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
+false: {setOtherOrg, memberMe, agentsAccessUser, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
{
@@ -337,7 +343,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgAdminBanWorkspace},
false: {
-memberMe, setOtherOrg,
+memberMe, agentsAccessUser, setOtherOrg,
templateAdmin, userAdmin,
orgTemplateAdmin, orgUserAdmin, orgAuditor,
},
@@ -354,7 +360,7 @@ func TestRolePermissions(t *testing.T) {
true: {},
false: {
orgAdmin, owner, setOtherOrg,
-userAdmin, memberMe,
+userAdmin, memberMe, agentsAccessUser,
templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor,
orgAdminBanWorkspace,
},
@@ -366,7 +372,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceTemplate.WithID(templateID).InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin},
-false: {setOtherOrg, orgUserAdmin, orgAuditor, memberMe, userAdmin},
+false: {setOtherOrg, orgUserAdmin, orgAuditor, memberMe, agentsAccessUser, userAdmin},
},
},
{
@@ -375,7 +381,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceTemplate.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAuditor, orgAdmin, templateAdmin, orgTemplateAdmin},
-false: {setOtherOrg, orgUserAdmin, memberMe, userAdmin},
+false: {setOtherOrg, orgUserAdmin, memberMe, agentsAccessUser, userAdmin},
},
},
{
@@ -386,7 +392,7 @@ func TestRolePermissions(t *testing.T) {
}),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin},
-false: {setOtherOrg, orgAuditor, orgUserAdmin, memberMe, userAdmin},
+false: {setOtherOrg, orgAuditor, orgUserAdmin, memberMe, agentsAccessUser, userAdmin},
},
},
{
@@ -397,7 +403,7 @@ func TestRolePermissions(t *testing.T) {
true: {owner, templateAdmin},
// Org template admins can only read org scoped files.
// File scope is currently not org scoped :cry:
-false: {setOtherOrg, orgTemplateAdmin, orgAdmin, memberMe, userAdmin, orgAuditor, orgUserAdmin},
+false: {setOtherOrg, orgTemplateAdmin, orgAdmin, memberMe, agentsAccessUser, userAdmin, orgAuditor, orgUserAdmin},
},
},
{
@@ -405,7 +411,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead},
Resource: rbac.ResourceFile.WithID(fileID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
-true: {owner, memberMe, templateAdmin},
+true: {owner, memberMe, agentsAccessUser, templateAdmin},
false: {setOtherOrg, setOrgNotMe, userAdmin},
},
},
@@ -415,7 +421,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOrganization,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
-false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
+false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -424,7 +430,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOrganization.WithID(orgID).InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
-false: {setOtherOrg, orgTemplateAdmin, orgUserAdmin, orgAuditor, memberMe, templateAdmin, userAdmin},
+false: {setOtherOrg, orgTemplateAdmin, orgUserAdmin, orgAuditor, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -433,7 +439,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOrganization.WithID(orgID).InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin, auditor, orgAuditor, userAdmin, orgUserAdmin},
-false: {setOtherOrg, memberMe},
+false: {setOtherOrg, memberMe, agentsAccessUser},
},
},
{
@@ -442,7 +448,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignOrgRole,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
-false: {setOtherOrg, setOrgNotMe, userAdmin, memberMe, templateAdmin},
+false: {setOtherOrg, setOrgNotMe, userAdmin, memberMe, agentsAccessUser, templateAdmin},
},
},
{
@@ -451,7 +457,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignRole,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, userAdmin},
-false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin},
+false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin},
},
},
{
@@ -459,7 +465,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceAssignRole,
AuthorizeMap: map[bool][]hasAuthSubjects{
-true: {setOtherOrg, setOrgNotMe, owner, memberMe, templateAdmin, userAdmin},
+true: {setOtherOrg, setOrgNotMe, owner, memberMe, agentsAccessUser, templateAdmin, userAdmin},
false: {},
},
},
@@ -469,7 +475,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignOrgRole.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, userAdmin, orgUserAdmin},
-false: {setOtherOrg, memberMe, templateAdmin, orgTemplateAdmin, orgAuditor},
+false: {setOtherOrg, memberMe, agentsAccessUser, templateAdmin, orgTemplateAdmin, orgAuditor},
},
},
{
@@ -478,7 +484,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignOrgRole.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
-false: {setOtherOrg, orgUserAdmin, orgTemplateAdmin, orgAuditor, memberMe, templateAdmin, userAdmin},
+false: {setOtherOrg, orgUserAdmin, orgTemplateAdmin, orgAuditor, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -487,7 +493,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignOrgRole.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgUserAdmin, userAdmin, templateAdmin},
-false: {setOtherOrg, memberMe, orgAuditor, orgTemplateAdmin},
+false: {setOtherOrg, memberMe, agentsAccessUser, orgAuditor, orgTemplateAdmin},
},
},
{
@@ -495,7 +501,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionDelete, policy.ActionUpdate},
Resource: rbac.ResourceApiKey.WithID(apiKeyID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
-true: {owner, memberMe},
+true: {owner, memberMe, agentsAccessUser},
false: {setOtherOrg, setOrgNotMe, templateAdmin, userAdmin},
},
},
@@ -507,7 +513,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceInboxNotification.WithID(uuid.New()).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
-false: {setOtherOrg, orgUserAdmin, orgTemplateAdmin, orgAuditor, templateAdmin, userAdmin, memberMe},
+false: {setOtherOrg, orgUserAdmin, orgTemplateAdmin, orgAuditor, templateAdmin, userAdmin, memberMe, agentsAccessUser},
},
},
{
@@ -515,7 +521,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionReadPersonal, policy.ActionUpdatePersonal},
Resource: rbac.ResourceUserObject(currentUser),
AuthorizeMap: map[bool][]hasAuthSubjects{
-true: {owner, memberMe, userAdmin},
+true: {owner, memberMe, agentsAccessUser, userAdmin},
false: {setOtherOrg, setOrgNotMe, templateAdmin},
},
},
@@ -525,7 +531,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOrganizationMember.WithID(currentUser).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, userAdmin, orgUserAdmin},
-false: {setOtherOrg, orgTemplateAdmin, orgAuditor, memberMe, templateAdmin},
+false: {setOtherOrg, orgTemplateAdmin, orgAuditor, memberMe, agentsAccessUser, templateAdmin},
},
},
{
@@ -534,7 +540,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOrganizationMember.WithID(currentUser).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAuditor, orgAdmin, userAdmin, templateAdmin, orgUserAdmin, orgTemplateAdmin},
-false: {memberMe, setOtherOrg},
+false: {memberMe, agentsAccessUser, setOtherOrg},
},
},
{
@@ -547,7 +553,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgUserAdmin, orgTemplateAdmin, orgAuditor},
-false: {setOtherOrg, memberMe, userAdmin},
+false: {setOtherOrg, memberMe, agentsAccessUser, userAdmin},
},
},
{
@@ -560,7 +566,7 @@ func TestRolePermissions(t *testing.T) {
}),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, userAdmin, orgUserAdmin},
-false: {setOtherOrg, memberMe, templateAdmin, orgTemplateAdmin, orgAuditor},
+false: {setOtherOrg, memberMe, agentsAccessUser, templateAdmin, orgTemplateAdmin, orgAuditor},
},
},
{
@@ -573,7 +579,7 @@ func TestRolePermissions(t *testing.T) {
}),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
-false: {setOtherOrg, memberMe},
+false: {setOtherOrg, memberMe, agentsAccessUser},
},
},
{
@@ -582,7 +588,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceGroupMember.WithID(currentUser).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAuditor, orgAdmin, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin},
-false: {setOtherOrg, memberMe},
+false: {setOtherOrg, memberMe, agentsAccessUser},
},
},
{
@@ -591,7 +597,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceGroupMember.WithID(adminID).InOrg(orgID).WithOwner(adminID.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAuditor, orgAdmin, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin},
-false: {setOtherOrg, memberMe},
+false: {setOtherOrg, memberMe, agentsAccessUser},
},
},
{
@@ -600,7 +606,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspaceDormant.WithID(uuid.New()).InOrg(orgID).WithOwner(memberMe.Actor.ID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {orgAdmin, owner},
-false: {setOtherOrg, userAdmin, memberMe, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
+false: {setOtherOrg, userAdmin, memberMe, agentsAccessUser, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
{
@@ -609,7 +615,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspaceDormant.WithID(uuid.New()).InOrg(orgID).WithOwner(memberMe.Actor.ID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {},
-false: {setOtherOrg, setOrgNotMe, memberMe, userAdmin, owner, templateAdmin},
+false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, userAdmin, owner, templateAdmin},
},
},
{
@@ -618,7 +624,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspace.WithID(uuid.New()).InOrg(orgID).WithOwner(memberMe.Actor.ID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
-false: {setOtherOrg, userAdmin, templateAdmin, memberMe, orgTemplateAdmin, orgUserAdmin, orgAuditor},
+false: {setOtherOrg, userAdmin, templateAdmin, memberMe, agentsAccessUser, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
{
@@ -627,7 +633,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourcePrebuiltWorkspace.WithID(uuid.New()).InOrg(orgID).WithOwner(database.PrebuildsSystemUserID.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin},
-false: {setOtherOrg, userAdmin, memberMe, orgUserAdmin, orgAuditor},
+false: {setOtherOrg, userAdmin, memberMe, agentsAccessUser, orgUserAdmin, orgAuditor},
},
},
{
@@ -636,7 +642,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceTask.WithID(uuid.New()).InOrg(orgID).WithOwner(memberMe.Actor.ID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
-false: {setOtherOrg, userAdmin, templateAdmin, memberMe, orgTemplateAdmin, orgUserAdmin, orgAuditor},
+false: {setOtherOrg, userAdmin, templateAdmin, memberMe, agentsAccessUser, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
// Some admin style resources
@@ -646,7 +652,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceLicense,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
-false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
+false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -655,7 +661,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceDeploymentStats,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
-false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
+false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -664,7 +670,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceDeploymentConfig,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -673,7 +679,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceDebugInfo,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -682,7 +688,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceReplicas,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -691,7 +697,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceTailnetCoordinator,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -700,7 +706,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAuditLog,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -709,7 +715,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceProvisionerDaemon.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, templateAdmin, orgAdmin, orgTemplateAdmin},
false: {setOtherOrg, orgAuditor, orgUserAdmin, memberMe, userAdmin},
false: {setOtherOrg, orgAuditor, orgUserAdmin, memberMe, agentsAccessUser, userAdmin},
},
},
{
@@ -718,7 +724,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceProvisionerDaemon.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, templateAdmin, orgAdmin, orgTemplateAdmin},
false: {setOtherOrg, memberMe, userAdmin, orgAuditor, orgUserAdmin},
false: {setOtherOrg, memberMe, agentsAccessUser, userAdmin, orgAuditor, orgUserAdmin},
},
},
{
@@ -727,7 +733,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceProvisionerDaemon.WithOwner(currentUser.String()).InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, templateAdmin, orgTemplateAdmin, orgAdmin},
false: {setOtherOrg, memberMe, userAdmin, orgUserAdmin, orgAuditor},
false: {setOtherOrg, memberMe, agentsAccessUser, userAdmin, orgUserAdmin, orgAuditor},
},
},
{
@@ -736,7 +742,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceProvisionerJobs.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgTemplateAdmin, orgAdmin},
false: {setOtherOrg, memberMe, templateAdmin, userAdmin, orgUserAdmin, orgAuditor},
false: {setOtherOrg, memberMe, agentsAccessUser, templateAdmin, userAdmin, orgUserAdmin, orgAuditor},
},
},
{
@@ -745,7 +751,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceSystem,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -754,7 +760,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOauth2App,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -762,7 +768,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceOauth2App,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
true: {owner, setOrgNotMe, setOtherOrg, memberMe, agentsAccessUser, templateAdmin, userAdmin},
false: {},
},
},
@@ -772,7 +778,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOauth2AppSecret,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
false: {setOrgNotMe, setOtherOrg, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -781,7 +787,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOauth2AppCodeToken,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
false: {setOrgNotMe, setOtherOrg, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -790,7 +796,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspaceProxy,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
false: {setOrgNotMe, setOtherOrg, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -798,7 +804,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceWorkspaceProxy,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
true: {owner, setOrgNotMe, setOtherOrg, memberMe, agentsAccessUser, templateAdmin, userAdmin},
false: {},
},
},
@@ -809,7 +815,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead, policy.ActionUpdate},
Resource: rbac.ResourceNotificationPreference.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {memberMe, owner},
true: {memberMe, agentsAccessUser, owner},
false: {
userAdmin, orgUserAdmin, templateAdmin,
orgAuditor, orgTemplateAdmin,
@@ -826,7 +832,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe, userAdmin, orgUserAdmin, templateAdmin,
memberMe, agentsAccessUser, userAdmin, orgUserAdmin, templateAdmin,
orgAuditor, orgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
orgAdmin, otherOrgAdmin,
@@ -840,7 +846,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe,
memberMe, agentsAccessUser,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -858,7 +864,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe, templateAdmin, orgUserAdmin, userAdmin,
memberMe, agentsAccessUser, templateAdmin, orgUserAdmin, userAdmin,
orgAdmin, orgAuditor, orgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
otherOrgAdmin,
@@ -871,7 +877,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionDelete},
Resource: rbac.ResourceWebpushSubscription.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, memberMe},
true: {owner, memberMe, agentsAccessUser},
false: {orgAdmin, otherOrgAdmin, orgAuditor, otherOrgAuditor, templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin, userAdmin, orgUserAdmin, otherOrgUserAdmin},
},
},
@@ -883,7 +889,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, userAdmin, orgAdmin, otherOrgAdmin, orgUserAdmin, otherOrgUserAdmin},
false: {
memberMe, templateAdmin,
memberMe, agentsAccessUser, templateAdmin,
orgTemplateAdmin, orgAuditor,
otherOrgAuditor, otherOrgTemplateAdmin,
},
@@ -896,7 +902,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin, orgAdmin, otherOrgAdmin},
false: {
userAdmin, memberMe,
userAdmin, memberMe, agentsAccessUser,
orgAuditor, orgUserAdmin,
otherOrgAuditor, otherOrgUserAdmin,
},
@@ -909,7 +915,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, otherOrgAdmin},
false: {
memberMe, userAdmin, templateAdmin,
memberMe, agentsAccessUser, userAdmin, templateAdmin,
orgAuditor, orgUserAdmin, orgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
},
@@ -921,7 +927,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceCryptoKey,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -932,7 +938,7 @@ func TestRolePermissions(t *testing.T) {
true: {owner, orgAdmin, orgUserAdmin, userAdmin},
false: {
otherOrgAdmin,
memberMe, templateAdmin,
memberMe, agentsAccessUser, templateAdmin,
orgAuditor, orgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
},
@@ -947,7 +953,7 @@ func TestRolePermissions(t *testing.T) {
false: {
orgAdmin, orgUserAdmin,
otherOrgAdmin,
memberMe, templateAdmin,
memberMe, agentsAccessUser, templateAdmin,
orgAuditor, orgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
},
@@ -960,7 +966,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe,
memberMe, agentsAccessUser,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -975,7 +981,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe,
memberMe, agentsAccessUser,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -989,7 +995,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceConnectionLog,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
// Only the user themselves can access their own secrets — no one else.
@@ -998,7 +1004,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionUpdate, policy.ActionDelete},
Resource: rbac.ResourceUserSecret.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {memberMe},
true: {memberMe, agentsAccessUser},
false: {
owner, orgAdmin,
otherOrgAdmin, orgAuditor, orgUserAdmin, orgTemplateAdmin,
@@ -1014,7 +1020,7 @@ func TestRolePermissions(t *testing.T) {
true: {},
false: {
owner,
memberMe,
memberMe, agentsAccessUser,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -1028,7 +1034,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionUpdate},
Resource: rbac.ResourceAibridgeInterception.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, memberMe},
true: {owner, memberMe, agentsAccessUser},
false: {
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
@@ -1045,7 +1051,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, auditor},
false: {
memberMe,
memberMe, agentsAccessUser,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -1058,7 +1064,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead, policy.ActionUpdate, policy.ActionDelete},
Resource: rbac.ResourceBoundaryUsage,
AuthorizeMap: map[bool][]hasAuthSubjects{
false: {owner, setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
false: {owner, setOtherOrg, setOrgNotMe, memberMe, agentsAccessUser, templateAdmin, userAdmin},
},
},
{
@@ -1066,8 +1072,9 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionUpdate, policy.ActionDelete},
Resource: rbac.ResourceChat.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, memberMe},
true: {owner, agentsAccessUser},
false: {
memberMe,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -1075,8 +1082,20 @@ func TestRolePermissions(t *testing.T) {
},
},
},
{
// Chat automations are admin-managed. Regular org
// members cannot manage automations even if they
// are the owner. The owner_id field is for audit
// tracking, not RBAC grants.
Name: "ChatAutomation",
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionUpdate, policy.ActionDelete},
Resource: rbac.ResourceChatAutomation.InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
false: {setOtherOrg, memberMe, agentsAccessUser, orgAuditor, orgUserAdmin, orgTemplateAdmin, templateAdmin, userAdmin},
},
},
}
// Build coverage set from test case definitions statically,
// so we don't need shared mutable state during execution.
// This allows subtests to run in parallel.
@@ -1217,7 +1236,6 @@ func TestListRoles(t *testing.T) {
"user-admin",
},
siteRoleNames)
orgID := uuid.New()
orgRoles := rbac.OrganizationRoles(orgID)
orgRoleNames := make([]string, 0, len(orgRoles))
@@ -32,6 +32,10 @@ const (
ScopeChatDelete ScopeName = "chat:delete"
ScopeChatRead ScopeName = "chat:read"
ScopeChatUpdate ScopeName = "chat:update"
ScopeChatAutomationCreate ScopeName = "chat_automation:create"
ScopeChatAutomationDelete ScopeName = "chat_automation:delete"
ScopeChatAutomationRead ScopeName = "chat_automation:read"
ScopeChatAutomationUpdate ScopeName = "chat_automation:update"
ScopeConnectionLogRead ScopeName = "connection_log:read"
ScopeConnectionLogUpdate ScopeName = "connection_log:update"
ScopeCryptoKeyCreate ScopeName = "crypto_key:create"
@@ -196,6 +200,10 @@ func (e ScopeName) Valid() bool {
ScopeChatDelete,
ScopeChatRead,
ScopeChatUpdate,
ScopeChatAutomationCreate,
ScopeChatAutomationDelete,
ScopeChatAutomationRead,
ScopeChatAutomationUpdate,
ScopeConnectionLogRead,
ScopeConnectionLogUpdate,
ScopeCryptoKeyCreate,
@@ -361,6 +369,10 @@ func AllScopeNameValues() []ScopeName {
ScopeChatDelete,
ScopeChatRead,
ScopeChatUpdate,
ScopeChatAutomationCreate,
ScopeChatAutomationDelete,
ScopeChatAutomationRead,
ScopeChatAutomationUpdate,
ScopeConnectionLogRead,
ScopeConnectionLogUpdate,
ScopeCryptoKeyCreate,
@@ -5,6 +5,7 @@ import (
"github.com/google/uuid"
"github.com/coder/coder/v2/buildinfo"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/db2sdk"
"github.com/coder/coder/v2/coderd/httpapi"
@@ -43,7 +44,16 @@ func (api *API) AssignableSiteRoles(rw http.ResponseWriter, r *http.Request) {
return
}
httpapi.Write(ctx, rw, http.StatusOK, assignableRoles(actorRoles.Roles, rbac.SiteBuiltInRoles(), dbCustomRoles))
siteRoles := rbac.SiteBuiltInRoles()
// Include the agents-access role only when the agents
// experiment is enabled or this is a dev build, matching
// the RequireExperimentWithDevBypass gate on chat routes.
if api.Experiments.Enabled(codersdk.ExperimentAgents) || buildinfo.IsDev() {
siteRoles = append(siteRoles, rbac.AgentsAccessRole())
}
httpapi.Write(ctx, rw, http.StatusOK,
assignableRoles(actorRoles.Roles, siteRoles, dbCustomRoles))
}
// assignableOrgRoles returns all org wide roles that can be assigned.
@@ -1244,32 +1244,90 @@ func (p *Server) EditMessage(
return result, nil
}
// ArchiveChat archives a chat and all descendants, then broadcasts a deleted event.
// ArchiveChat archives a chat family and broadcasts deleted events for each
// affected chat so watching clients converge without a full refetch. If the
// target chat is pending or running, it first transitions the chat back to
// waiting so active processing stops before the archive is broadcast.
func (p *Server) ArchiveChat(ctx context.Context, chat database.Chat) error {
if chat.ID == uuid.Nil {
return xerrors.New("chat_id is required")
}
if err := p.db.ArchiveChatByID(ctx, chat.ID); err != nil {
return xerrors.Errorf("archive chat: %w", err)
statusChat := chat
interrupted := false
var archivedChats []database.Chat
if err := p.db.InTx(func(tx database.Store) error {
lockedChat, err := tx.GetChatByIDForUpdate(ctx, chat.ID)
if err != nil {
return xerrors.Errorf("lock chat for archive: %w", err)
}
statusChat = lockedChat
// We do not call setChatWaiting here because it intentionally preserves
// pending chats so queued-message promotion can win. Archiving is a
// harder stop: both pending and running chats must transition to waiting.
if lockedChat.Status == database.ChatStatusPending || lockedChat.Status == database.ChatStatusRunning {
statusChat, err = tx.UpdateChatStatus(ctx, database.UpdateChatStatusParams{
ID: chat.ID,
Status: database.ChatStatusWaiting,
WorkerID: uuid.NullUUID{},
StartedAt: sql.NullTime{},
HeartbeatAt: sql.NullTime{},
LastError: sql.NullString{},
})
if err != nil {
return xerrors.Errorf("set chat waiting before archive: %w", err)
}
interrupted = true
}
archivedChats, err = tx.ArchiveChatByID(ctx, chat.ID)
if err != nil {
return xerrors.Errorf("archive chat: %w", err)
}
return nil
}, nil); err != nil {
return err
}
p.publishChatPubsubEvent(chat, coderdpubsub.ChatEventKindDeleted, nil)
if interrupted {
p.publishStatus(chat.ID, statusChat.Status, statusChat.WorkerID)
p.publishChatPubsubEvent(statusChat, coderdpubsub.ChatEventKindStatusChange, nil)
}
p.publishChatPubsubEvents(archivedChats, coderdpubsub.ChatEventKindDeleted)
return nil
}
// UnarchiveChat unarchives a chat and publishes a created event so sidebar
// clients are notified that the chat has reappeared.
// UnarchiveChat unarchives a chat family and publishes created events for
// each affected chat so watching clients see every chat that reappeared.
func (p *Server) UnarchiveChat(ctx context.Context, chat database.Chat) error {
if chat.ID == uuid.Nil {
return xerrors.New("chat_id is required")
}
if err := p.db.UnarchiveChatByID(ctx, chat.ID); err != nil {
return xerrors.Errorf("unarchive chat: %w", err)
return p.applyChatLifecycleTransition(
ctx,
chat.ID,
"unarchive",
coderdpubsub.ChatEventKindCreated,
p.db.UnarchiveChatByID,
)
}
func (p *Server) applyChatLifecycleTransition(
ctx context.Context,
chatID uuid.UUID,
action string,
kind coderdpubsub.ChatEventKind,
transition func(context.Context, uuid.UUID) ([]database.Chat, error),
) error {
updatedChats, err := transition(ctx, chatID)
if err != nil {
return xerrors.Errorf("%s chat: %w", action, err)
}
p.publishChatPubsubEvent(chat, coderdpubsub.ChatEventKindCreated, nil)
p.publishChatPubsubEvents(updatedChats, kind)
return nil
}
@@ -3099,6 +3157,13 @@ func (p *Server) publishChatStreamNotify(chatID uuid.UUID, notify coderdpubsub.C
}
}
// publishChatPubsubEvents broadcasts a lifecycle event for each affected chat.
func (p *Server) publishChatPubsubEvents(chats []database.Chat, kind coderdpubsub.ChatEventKind) {
for _, chat := range chats {
p.publishChatPubsubEvent(chat, kind, nil)
}
}
// publishChatPubsubEvent broadcasts a chat lifecycle event via PostgreSQL
// pubsub so that all replicas can push updates to watching clients.
func (p *Server) publishChatPubsubEvent(chat database.Chat, kind coderdpubsub.ChatEventKind, diffStatus *codersdk.ChatDiffStatus) {
@@ -3447,7 +3512,25 @@ func (p *Server) processChat(ctx context.Context, chat database.Chat) {
chatCtx, cancel := context.WithCancelCause(ctx)
defer cancel(nil)
controlCancel := p.subscribeChatControl(chatCtx, chat.ID, cancel, logger)
// Gate the control subscriber behind a channel that is closed
// after we publish "running" status. This prevents stale
// pubsub notifications (e.g. the "pending" notification from
// SendMessage that triggered this processing) from
// interrupting us before we start work. Due to async
// PostgreSQL NOTIFY delivery, a notification published before
// subscribeChatControl registers its queue can still arrive
// after registration.
controlArmed := make(chan struct{})
gatedCancel := func(cause error) {
select {
case <-controlArmed:
cancel(cause)
default:
logger.Debug(ctx, "ignoring control notification before armed")
}
}
controlCancel := p.subscribeChatControl(chatCtx, chat.ID, gatedCancel, logger)
defer func() {
if controlCancel != nil {
controlCancel()
@@ -3508,6 +3591,12 @@ func (p *Server) processChat(ctx context.Context, chat database.Chat) {
Valid: true,
})
// Arm the control subscriber. Closing the channel is a
// happens-before guarantee in the Go memory model — any
// notification dispatched after this point will correctly
// interrupt processing.
close(controlArmed)
// Determine the final status and last error to set when we're done.
status := database.ChatStatusWaiting
wasInterrupted := false
@@ -3563,9 +3652,10 @@ func (p *Server) processChat(ctx context.Context, chat database.Chat) {
// the worker and let the processor pick it back up.
if latestChat.Status == database.ChatStatusPending {
status = database.ChatStatusPending
} else if status == database.ChatStatusWaiting {
} else if status == database.ChatStatusWaiting && !latestChat.Archived {
// Queued messages were already admitted through SendMessage,
// so auto-promotion only preserves FIFO order here.
// so auto-promotion only preserves FIFO order here. Archived
// chats skip promotion so archiving behaves like a hard stop.
var promoteErr error
promotedMessage, remainingQueuedMessages, shouldPublishQueueUpdate, promoteErr = p.tryAutoPromoteQueuedMessage(cleanupCtx, tx, latestChat)
if promoteErr != nil {
@@ -4970,9 +5060,25 @@ func (p *Server) persistInstructionFiles(
chatprompt.CurrentContentVersion,
))
_, _ = p.db.InsertChatMessages(ctx, msgParams)
// Update the cache column: persist skills if any
// exist, or clear to NULL so stale data from a
// previous agent doesn't linger.
if len(discoveredSkills) > 0 {
skillParts := make([]codersdk.ChatMessagePart, 0, len(discoveredSkills))
for _, s := range discoveredSkills {
skillParts = append(skillParts, codersdk.ChatMessagePart{
Type: codersdk.ChatMessagePartTypeSkill,
SkillName: s.Name,
SkillDescription: s.Description,
ContextFileAgentID: uuid.NullUUID{UUID: agent.ID, Valid: true},
})
}
p.updateLastInjectedContext(ctx, chat.ID, skillParts)
} else {
p.updateLastInjectedContext(ctx, chat.ID, nil)
}
return "", discoveredSkills, nil
}
// Build context-file parts (one per instruction file) and
// skill parts (one per discovered skill).
parts := make([]codersdk.ChatMessagePart, 0, len(sections)+len(discoveredSkills))
@@ -5015,6 +5121,15 @@ func (p *Server) persistInstructionFiles(
if _, err := p.db.InsertChatMessages(ctx, msgParams); err != nil {
return "", nil, xerrors.Errorf("persist instruction files: %w", err)
}
// Build stripped copies for the cache column so internal
// fields (full file content, OS, directory, skill paths)
// are never persisted or returned to API clients.
stripped := make([]codersdk.ChatMessagePart, len(parts))
copy(stripped, parts)
for i := range stripped {
stripped[i].StripInternal()
}
p.updateLastInjectedContext(ctx, chat.ID, stripped)
// Return the formatted instruction text and discovered skills
// so the caller can inject them into this turn's prompt (since
@@ -5022,6 +5137,35 @@ func (p *Server) persistInstructionFiles(
return formatSystemInstructions(agent.OperatingSystem, directory, sections), discoveredSkills, nil
}
// updateLastInjectedContext persists the injected context
// parts (AGENTS.md files and skills) on the chat row so they
// are directly queryable without scanning messages. This is
// best-effort — a failure here is logged but does not block
// the turn.
func (p *Server) updateLastInjectedContext(ctx context.Context, chatID uuid.UUID, parts []codersdk.ChatMessagePart) {
param := pqtype.NullRawMessage{Valid: false}
if parts != nil {
raw, err := json.Marshal(parts)
if err != nil {
p.logger.Warn(ctx, "failed to marshal injected context",
slog.F("chat_id", chatID),
slog.Error(err),
)
return
}
param = pqtype.NullRawMessage{RawMessage: raw, Valid: true}
}
if _, err := p.db.UpdateChatLastInjectedContext(ctx, database.UpdateChatLastInjectedContextParams{
ID: chatID,
LastInjectedContext: param,
}); err != nil {
p.logger.Warn(ctx, "failed to update injected context",
slog.F("chat_id", chatID),
slog.Error(err),
)
}
}
// resolveUserCompactionThreshold looks up the user's per-model
// compaction threshold override. Returns the override value and
// true if one exists and is valid, or 0 and false otherwise.
@@ -484,6 +484,32 @@ func TestPersistInstructionFilesIncludesAgentMetadata(t *testing.T) {
agentID,
).Return(workspaceAgent, nil).Times(1)
db.EXPECT().InsertChatMessages(gomock.Any(), gomock.Any()).Return(nil, nil).AnyTimes()
db.EXPECT().UpdateChatLastInjectedContext(gomock.Any(),
gomock.Cond(func(x any) bool {
arg, ok := x.(database.UpdateChatLastInjectedContextParams)
if !ok || arg.ID != chat.ID {
return false
}
if !arg.LastInjectedContext.Valid {
return false
}
var parts []codersdk.ChatMessagePart
if err := json.Unmarshal(arg.LastInjectedContext.RawMessage, &parts); err != nil {
return false
}
// Expect at least one context-file part for the
// working-directory AGENTS.md, with internal fields
// stripped (no content, OS, or directory).
for _, p := range parts {
if p.Type == codersdk.ChatMessagePartTypeContextFile && p.ContextFilePath != "" {
return p.ContextFileContent == "" &&
p.ContextFileOS == "" &&
p.ContextFileDirectory == ""
}
}
return false
}),
).Return(database.Chat{}, nil).Times(1)
conn := agentconnmock.NewMockAgentConn(ctrl)
conn.EXPECT().SetExtraHeaders(gomock.Any()).Times(1)
@@ -569,6 +595,247 @@ func TestPersistInstructionFilesSkipsSentinelWhenWorkspaceUnavailable(t *testing
require.Empty(t, instruction)
}
func TestPersistInstructionFilesSentinelWithSkills(t *testing.T) {
t.Parallel()
ctx := context.Background()
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
workspaceID := uuid.New()
agentID := uuid.New()
chat := database.Chat{
ID: uuid.New(),
WorkspaceID: uuid.NullUUID{
UUID: workspaceID,
Valid: true,
},
AgentID: uuid.NullUUID{
UUID: agentID,
Valid: true,
},
}
workspaceAgent := database.WorkspaceAgent{
ID: agentID,
OperatingSystem: "linux",
Directory: "/home/coder/project",
ExpandedDirectory: "/home/coder/project",
}
db.EXPECT().GetWorkspaceAgentByID(
gomock.Any(),
agentID,
).Return(workspaceAgent, nil).Times(1)
db.EXPECT().InsertChatMessages(gomock.Any(), gomock.Any()).Return(nil, nil).AnyTimes()
db.EXPECT().UpdateChatLastInjectedContext(gomock.Any(),
gomock.Cond(func(x any) bool {
arg, ok := x.(database.UpdateChatLastInjectedContextParams)
if !ok || arg.ID != chat.ID {
return false
}
if !arg.LastInjectedContext.Valid {
return false
}
var parts []codersdk.ChatMessagePart
if err := json.Unmarshal(arg.LastInjectedContext.RawMessage, &parts); err != nil {
return false
}
// The sentinel path should persist only skill parts
// with ContextFileAgentID set.
for _, p := range parts {
if p.Type == codersdk.ChatMessagePartTypeSkill &&
p.SkillName == "my-skill" &&
p.ContextFileAgentID == (uuid.NullUUID{UUID: agentID, Valid: true}) {
return true
}
}
return false
}),
).Return(database.Chat{}, nil).Times(1)
conn := agentconnmock.NewMockAgentConn(ctrl)
conn.EXPECT().SetExtraHeaders(gomock.Any()).Times(1)
// Home LS (.coder directory): return 404 so no home
// instruction file is found.
conn.EXPECT().LS(gomock.Any(), "",
gomock.Cond(func(x any) bool {
req, ok := x.(workspacesdk.LSRequest)
return ok && req.Relativity == workspacesdk.LSRelativityHome
}),
).Return(
workspacesdk.LSResponse{},
codersdk.NewTestError(404, "POST", "/api/v0/list-directory"),
).Times(1)
// Pwd AGENTS.md: return 404 so no working-directory
// instruction file is found either.
conn.EXPECT().ReadFile(gomock.Any(),
"/home/coder/project/AGENTS.md",
int64(0),
int64(maxInstructionFileBytes+1),
).Return(
nil, "",
codersdk.NewTestError(404, "GET", "/api/v0/read-file"),
).Times(1)
// Skills LS (.agents/skills directory): return one skill
// directory so DiscoverSkills finds it.
conn.EXPECT().LS(gomock.Any(), "",
gomock.Cond(func(x any) bool {
req, ok := x.(workspacesdk.LSRequest)
return ok && req.Relativity == workspacesdk.LSRelativityRoot
}),
).Return(workspacesdk.LSResponse{
Contents: []workspacesdk.LSFile{{
Name: "my-skill",
AbsolutePathString: "/home/coder/project/.agents/skills/my-skill",
IsDir: true,
}},
}, nil).Times(1)
// Skills SKILL.md ReadFile: return valid frontmatter.
skillContent := "---\nname: my-skill\ndescription: A test skill\n---\nSkill body"
conn.EXPECT().ReadFile(gomock.Any(),
"/home/coder/project/.agents/skills/my-skill/SKILL.md",
int64(0),
int64(64*1024+1),
).Return(
io.NopCloser(strings.NewReader(skillContent)),
"",
nil,
).Times(1)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
server := &Server{
db: db,
logger: logger,
agentConnFn: func(context.Context, uuid.UUID) (workspacesdk.AgentConn, func(), error) {
return conn, func() {}, nil
},
}
chatStateMu := &sync.Mutex{}
currentChat := chat
workspaceCtx := turnWorkspaceContext{
server: server,
chatStateMu: chatStateMu,
currentChat: &currentChat,
loadChatSnapshot: func(context.Context, uuid.UUID) (database.Chat, error) { return database.Chat{}, nil },
}
t.Cleanup(workspaceCtx.close)
instruction, skills, err := server.persistInstructionFiles(
ctx,
chat,
uuid.New(),
workspaceCtx.getWorkspaceAgent,
workspaceCtx.getWorkspaceConn,
)
require.NoError(t, err)
// Sentinel path returns empty instruction string.
require.Empty(t, instruction)
// Skills are still discovered and returned.
require.Len(t, skills, 1)
require.Equal(t, "my-skill", skills[0].Name)
}
func TestPersistInstructionFilesSentinelNoSkillsClearsColumn(t *testing.T) {
t.Parallel()
ctx := context.Background()
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
workspaceID := uuid.New()
agentID := uuid.New()
chat := database.Chat{
ID: uuid.New(),
WorkspaceID: uuid.NullUUID{
UUID: workspaceID,
Valid: true,
},
AgentID: uuid.NullUUID{
UUID: agentID,
Valid: true,
},
}
workspaceAgent := database.WorkspaceAgent{
ID: agentID,
OperatingSystem: "linux",
Directory: "/home/coder/project",
ExpandedDirectory: "/home/coder/project",
}
db.EXPECT().GetWorkspaceAgentByID(
gomock.Any(),
agentID,
).Return(workspaceAgent, nil).Times(1)
db.EXPECT().InsertChatMessages(gomock.Any(), gomock.Any()).Return(nil, nil).AnyTimes()
db.EXPECT().UpdateChatLastInjectedContext(gomock.Any(),
gomock.Cond(func(x any) bool {
arg, ok := x.(database.UpdateChatLastInjectedContextParams)
if !ok || arg.ID != chat.ID {
return false
}
// No skills discovered, so the column should be
// cleared to NULL.
return !arg.LastInjectedContext.Valid
}),
).Return(database.Chat{}, nil).Times(1)
conn := agentconnmock.NewMockAgentConn(ctrl)
conn.EXPECT().SetExtraHeaders(gomock.Any()).Times(1)
// All LS calls return 404: no home .coder directory and no
// .agents/skills directory.
conn.EXPECT().LS(gomock.Any(), "", gomock.Any()).Return(
workspacesdk.LSResponse{},
codersdk.NewTestError(404, "POST", "/api/v0/list-directory"),
).AnyTimes()
// Pwd AGENTS.md: return 404.
conn.EXPECT().ReadFile(gomock.Any(),
"/home/coder/project/AGENTS.md",
int64(0),
int64(maxInstructionFileBytes+1),
).Return(
nil, "",
codersdk.NewTestError(404, "GET", "/api/v0/read-file"),
).Times(1)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
server := &Server{
db: db,
logger: logger,
agentConnFn: func(context.Context, uuid.UUID) (workspacesdk.AgentConn, func(), error) {
return conn, func() {}, nil
},
}
chatStateMu := &sync.Mutex{}
currentChat := chat
workspaceCtx := turnWorkspaceContext{
server: server,
chatStateMu: chatStateMu,
currentChat: &currentChat,
loadChatSnapshot: func(context.Context, uuid.UUID) (database.Chat, error) { return database.Chat{}, nil },
}
t.Cleanup(workspaceCtx.close)
instruction, skills, err := server.persistInstructionFiles(
ctx,
chat,
uuid.New(),
workspaceCtx.getWorkspaceAgent,
workspaceCtx.getWorkspaceConn,
)
require.NoError(t, err)
// Sentinel path: empty instruction, no skills.
require.Empty(t, instruction)
require.Empty(t, skills)
}
func TestTurnWorkspaceContext_BindingFirstPath(t *testing.T) {
t.Parallel()
@@ -1751,3 +2018,95 @@ func chatMessageWithParts(parts []codersdk.ChatMessagePart) database.ChatMessage
Content: pqtype.NullRawMessage{RawMessage: raw, Valid: true},
}
}
// TestProcessChat_IgnoresStaleControlNotification verifies that
// processChat is not interrupted by a "pending" notification
// published before processing begins. This is the race that caused
// TestOpenAIReasoningWithWebSearchRoundTripStoreFalse to flake:
// SendMessage publishes "pending" via PostgreSQL NOTIFY, and due
// to async delivery the notification can arrive at the control
// subscriber after it registers but before the processor publishes
// "running".
func TestProcessChat_IgnoresStaleControlNotification(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
ps := dbpubsub.NewInMemory()
clock := quartz.NewMock(t)
chatID := uuid.New()
workerID := uuid.New()
server := &Server{
db: db,
logger: logger,
pubsub: ps,
clock: clock,
workerID: workerID,
chatHeartbeatInterval: time.Minute,
configCache: newChatConfigCache(ctx, db, clock),
}
// Publish a stale "pending" notification on the control channel
// BEFORE processChat subscribes. In production this is the
// notification from SendMessage that triggered the processing.
staleNotify, err := json.Marshal(coderdpubsub.ChatStreamNotifyMessage{
Status: string(database.ChatStatusPending),
})
require.NoError(t, err)
err = ps.Publish(coderdpubsub.ChatStreamNotifyChannel(chatID), staleNotify)
require.NoError(t, err)
// Track which status processChat writes during cleanup.
var finalStatus database.ChatStatus
cleanupDone := make(chan struct{})
// The deferred cleanup in processChat runs a transaction.
db.EXPECT().InTx(gomock.Any(), gomock.Any()).DoAndReturn(
func(fn func(database.Store) error, _ *database.TxOptions) error {
return fn(db)
},
)
db.EXPECT().GetChatByIDForUpdate(gomock.Any(), chatID).Return(
database.Chat{ID: chatID, Status: database.ChatStatusRunning, WorkerID: uuid.NullUUID{UUID: workerID, Valid: true}}, nil,
)
db.EXPECT().UpdateChatStatus(gomock.Any(), gomock.Any()).DoAndReturn(
func(_ context.Context, params database.UpdateChatStatusParams) (database.Chat, error) {
finalStatus = params.Status
close(cleanupDone)
return database.Chat{ID: chatID, Status: params.Status}, nil
},
)
// resolveChatModel fails immediately — that's fine, we only
// need processChat to get past initialization without being
// interrupted by the stale notification.
db.EXPECT().GetChatModelConfigByID(gomock.Any(), gomock.Any()).Return(
database.ChatModelConfig{}, xerrors.New("no model configured"),
).AnyTimes()
db.EXPECT().GetEnabledChatProviders(gomock.Any()).Return(nil, nil).AnyTimes()
db.EXPECT().GetEnabledChatModelConfigs(gomock.Any()).Return(nil, nil).AnyTimes()
db.EXPECT().GetChatUsageLimitConfig(gomock.Any()).Return(
database.ChatUsageLimitConfig{}, sql.ErrNoRows,
).AnyTimes()
db.EXPECT().GetChatMessagesForPromptByChatID(gomock.Any(), chatID).Return(nil, nil).AnyTimes()
chat := database.Chat{ID: chatID, LastModelConfigID: uuid.New()}
go server.processChat(ctx, chat)
select {
case <-cleanupDone:
case <-ctx.Done():
t.Fatal("processChat did not complete")
}
// If the stale notification interrupted us, status would be
// "waiting" (the ErrInterrupted path). Since the gate blocked
// it, processChat reached runChat, which failed on model
// resolution → status is "error".
require.Equal(t, database.ChatStatusError, finalStatus,
"processChat should have reached runChat (error), not been interrupted (waiting)")
}
@@ -297,6 +297,180 @@ func TestInterruptChatClearsWorkerInDatabase(t *testing.T) {
require.False(t, fromDB.WorkerID.Valid)
}
func TestArchiveChatMovesPendingChatToWaiting(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
replica := newTestServer(t, db, ps, uuid.New())
ctx := testutil.Context(t, testutil.WaitLong)
user, model := seedChatDependencies(ctx, t, db)
chat, err := replica.CreateChat(ctx, chatd.CreateOptions{
OwnerID: user.ID,
Title: "archive-pending",
ModelConfigID: model.ID,
InitialUserContent: []codersdk.ChatMessagePart{codersdk.ChatMessageText("hello")},
})
require.NoError(t, err)
chat, err = db.UpdateChatStatus(ctx, database.UpdateChatStatusParams{
ID: chat.ID,
Status: database.ChatStatusPending,
WorkerID: uuid.NullUUID{},
StartedAt: sql.NullTime{},
HeartbeatAt: sql.NullTime{},
LastError: sql.NullString{},
})
require.NoError(t, err)
err = replica.ArchiveChat(ctx, chat)
require.NoError(t, err)
fromDB, err := db.GetChatByID(ctx, chat.ID)
require.NoError(t, err)
require.Equal(t, database.ChatStatusWaiting, fromDB.Status)
require.False(t, fromDB.WorkerID.Valid)
require.False(t, fromDB.StartedAt.Valid)
require.False(t, fromDB.HeartbeatAt.Valid)
require.True(t, fromDB.Archived)
require.Zero(t, fromDB.PinOrder)
}
func TestArchiveChatInterruptsActiveProcessing(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
ctx := testutil.Context(t, testutil.WaitLong)
streamStarted := make(chan struct{})
streamCanceled := make(chan struct{})
openAIURL := chattest.NewOpenAI(t, func(req *chattest.OpenAIRequest) chattest.OpenAIResponse {
if !req.Stream {
return chattest.OpenAINonStreamingResponse("title")
}
chunks := make(chan chattest.OpenAIChunk, 1)
go func() {
defer close(chunks)
chunks <- chattest.OpenAITextChunks("partial")[0]
select {
case <-streamStarted:
default:
close(streamStarted)
}
<-req.Context().Done()
select {
case <-streamCanceled:
default:
close(streamCanceled)
}
}()
return chattest.OpenAIResponse{StreamingChunks: chunks}
})
server := newActiveTestServer(t, db, ps)
user, model := seedChatDependencies(ctx, t, db)
setOpenAIProviderBaseURL(ctx, t, db, openAIURL)
chat, err := server.CreateChat(ctx, chatd.CreateOptions{
OwnerID: user.ID,
Title: "archive-interrupt",
ModelConfigID: model.ID,
InitialUserContent: []codersdk.ChatMessagePart{codersdk.ChatMessageText("hello")},
})
require.NoError(t, err)
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
fromDB, dbErr := db.GetChatByID(ctx, chat.ID)
if dbErr != nil {
return false
}
return fromDB.Status == database.ChatStatusRunning && fromDB.WorkerID.Valid
}, testutil.IntervalFast)
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
select {
case <-streamStarted:
return true
default:
return false
}
}, testutil.IntervalFast)
_, events, cancel, ok := server.Subscribe(ctx, chat.ID, nil, 0)
require.True(t, ok)
defer cancel()
queuedResult, err := server.SendMessage(ctx, chatd.SendMessageOptions{
ChatID: chat.ID,
Content: []codersdk.ChatMessagePart{codersdk.ChatMessageText("queued")},
BusyBehavior: chatd.SendMessageBusyBehaviorQueue,
})
require.NoError(t, err)
require.True(t, queuedResult.Queued)
require.NotNil(t, queuedResult.QueuedMessage)
err = server.ArchiveChat(ctx, chat)
require.NoError(t, err)
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
select {
case <-streamCanceled:
return true
default:
return false
}
}, testutil.IntervalFast)
gotWaitingStatus := false
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
for {
select {
case ev := <-events:
if ev.Type == codersdk.ChatStreamEventTypeStatus &&
ev.Status != nil &&
ev.Status.Status == codersdk.ChatStatusWaiting {
gotWaitingStatus = true
return true
}
default:
return gotWaitingStatus
}
}
}, testutil.IntervalFast)
require.True(t, gotWaitingStatus, "expected a waiting status event after archive")
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
fromDB, dbErr := db.GetChatByID(ctx, chat.ID)
if dbErr != nil {
return false
}
return fromDB.Archived &&
fromDB.Status == database.ChatStatusWaiting &&
!fromDB.WorkerID.Valid &&
!fromDB.StartedAt.Valid &&
!fromDB.HeartbeatAt.Valid
}, testutil.IntervalFast)
queuedMessages, err := db.GetChatQueuedMessages(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, queuedMessages, 1)
require.Equal(t, queuedResult.QueuedMessage.ID, queuedMessages[0].ID)
messages, err := db.GetChatMessagesByChatID(ctx, database.GetChatMessagesByChatIDParams{
ChatID: chat.ID,
AfterID: 0,
})
require.NoError(t, err)
userMessages := 0
for _, msg := range messages {
if msg.Role == database.ChatMessageRoleUser {
userMessages++
}
}
require.Equal(t, 1, userMessages, "expected queued message to stay queued after archive")
}
func TestUpdateChatHeartbeatRequiresOwnership(t *testing.T) {
t.Parallel()
@@ -473,6 +647,11 @@ func TestSendMessageInterruptBehaviorQueuesAndInterruptsWhenBusy(t *testing.T) {
})
require.NoError(t, err)
// CreateChat calls signalWake which triggers processOnce in
// the background. Wait for that processing to finish so it
// doesn't race with the manual status update below.
waitForChatProcessed(ctx, t, db, chat.ID, replica)
chat, err = db.UpdateChatStatus(ctx, database.UpdateChatStatusParams{
ID: chat.ID,
Status: database.ChatStatusRunning,
@@ -817,6 +996,11 @@ func TestPromoteQueuedAllowsAlreadyQueuedMessageWhenUsageLimitReached(t *testing
})
require.NoError(t, err)
// CreateChat calls signalWake which triggers processOnce in
// the background. Wait for that processing to finish so it
// doesn't race with the manual status update below.
waitForChatProcessed(ctx, t, db, chat.ID, replica)
chat, err = db.UpdateChatStatus(ctx, database.UpdateChatStatusParams{
ID: chat.ID,
Status: database.ChatStatusRunning,
@@ -879,10 +1063,6 @@ func TestPromoteQueuedAllowsAlreadyQueuedMessageWhenUsageLimitReached(t *testing
require.NoError(t, err)
require.Equal(t, database.ChatMessageRoleUser, result.PromotedMessage.Role)
chat, err = db.GetChatByID(ctx, chat.ID)
require.NoError(t, err)
require.Equal(t, database.ChatStatusPending, chat.Status)
queued, err := db.GetChatQueuedMessages(ctx, chat.ID)
require.NoError(t, err)
require.Empty(t, queued)
@@ -1709,13 +1889,9 @@ func TestSubscribeNoPubsubNoDuplicateMessageParts(t *testing.T) {
// subscribing, so the snapshot captures the final state.
// The wake signal may trigger processOnce which will fail
// (no LLM configured) and set the chat to error status.
// Poll until the chat leaves pending status, then wait for
// the goroutine to finish.
require.Eventually(t, func() bool {
c, err := db.GetChatByID(ctx, chat.ID)
return err == nil && c.Status != database.ChatStatusPending
}, testutil.WaitShort, testutil.IntervalFast)
chatd.WaitUntilIdleForTest(replica)
// Poll until the chat reaches a terminal state (not pending
// and not running), then wait for the goroutine to finish.
waitForChatProcessed(ctx, t, db, chat.ID, replica)
snapshot, events, cancel, ok := replica.Subscribe(ctx, chat.ID, nil, 0)
require.True(t, ok)
@@ -2598,6 +2774,39 @@ func TestHeartbeatNoWorkspaceNoBump(t *testing.T) {
require.Equal(t, 0, count, "expected no workspaces to be flushed when chat has no workspace")
}
// waitForChatProcessed waits for a wake-triggered processOnce to
// fully complete for the given chat. It polls until the chat leaves
// both pending and running states (meaning processChat has finished
// its cleanup and updated the DB), then calls WaitUntilIdleForTest.
//
// Waiting for a terminal state (not just "not pending") avoids a
// WaitGroup Add/Wait race: AcquireChats changes the DB status to
// running before processOnce calls inflight.Add(1). If we only
// waited for status != pending, we could call Wait() while Add(1)
// hasn't happened yet.
func waitForChatProcessed(
ctx context.Context,
t *testing.T,
db database.Store,
chatID uuid.UUID,
server *chatd.Server,
) {
t.Helper()
require.Eventually(t, func() bool {
c, err := db.GetChatByID(ctx, chatID)
if err != nil {
return false
}
// Wait until the chat reaches a terminal state — neither
// pending (waiting to be acquired) nor running (being
// processed). This guarantees that inflight.Add(1) has
// already been called by processOnce.
return c.Status != database.ChatStatusPending &&
c.Status != database.ChatStatusRunning
}, testutil.WaitShort, testutil.IntervalFast)
chatd.WaitUntilIdleForTest(server)
}
func newTestServer(
t *testing.T,
db database.Store,
@@ -1,24 +1,14 @@
package chatd_test
import (
"bytes"
"context"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"os"
"strconv"
"strings"
"sync"
"sync/atomic"
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/coderd/x/chatd/chattest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
)
@@ -597,306 +587,3 @@ func partTypeSet(parts []codersdk.ChatMessagePart) map[codersdk.ChatMessagePartT
}
return set
}
type openAIStoreMode string
const (
openAIStoreModeTrue openAIStoreMode = "store_true"
openAIStoreModeFalse openAIStoreMode = "store_false"
)
func TestOpenAIReasoningWithWebSearchRoundTrip(t *testing.T) {
t.Parallel()
runOpenAIReasoningWithWebSearchRoundTripTest(t, openAIStoreModeTrue)
}
func TestOpenAIReasoningWithWebSearchRoundTripStoreFalse(t *testing.T) {
t.Parallel()
runOpenAIReasoningWithWebSearchRoundTripTest(t, openAIStoreModeFalse)
}
func runOpenAIReasoningWithWebSearchRoundTripTest(t *testing.T, storeMode openAIStoreMode) {
t.Helper()
ctx := testutil.Context(t, testutil.WaitLong)
store := storeMode == openAIStoreModeTrue
type capturedOpenAIRequest struct {
Stream bool `json:"stream,omitempty"`
Store *bool `json:"store,omitempty"`
PreviousResponseID *string `json:"previous_response_id,omitempty"`
Prompt []interface{} `json:"input,omitempty"`
}
var (
streamRequestCount atomic.Int32
firstReq *capturedOpenAIRequest
secondReq *capturedOpenAIRequest
mu sync.Mutex
)
upstreamOpenAIURL := chattest.NewOpenAI(t, func(req *chattest.OpenAIRequest) chattest.OpenAIResponse {
if !req.Stream {
return chattest.OpenAINonStreamingResponse("reasoning + web search title")
}
switch req.Header.Get("X-Request-Ordinal") {
case "1":
return chattest.OpenAIResponse{
ResponseID: "resp_first_test",
StreamingChunks: chattest.OpenAIStreamingResponse(
chattest.OpenAITextChunks("Here is what I found.")...,
).StreamingChunks,
Reasoning: &chattest.OpenAIReasoningItem{
Summary: "thinking about the question",
EncryptedContent: "encrypted_data_here",
},
WebSearch: &chattest.OpenAIWebSearchCall{
Query: "latest AI news",
},
}
default:
return chattest.OpenAIStreamingResponse(
chattest.OpenAITextChunks("Follow-up answer.")...,
)
}
})
captureServer := httptest.NewServer(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
body, err := io.ReadAll(r.Body)
if err != nil {
t.Errorf("read OpenAI request body: %v", err)
http.Error(rw, err.Error(), http.StatusInternalServerError)
return
}
_ = r.Body.Close()
if r.URL.Path == "/responses" {
var captured capturedOpenAIRequest
if err := json.Unmarshal(body, &captured); err != nil {
t.Errorf("decode OpenAI request body: %v", err)
http.Error(rw, err.Error(), http.StatusBadRequest)
return
}
if captured.Stream {
requestCount := streamRequestCount.Add(1)
r.Header.Set("X-Request-Ordinal", strconv.Itoa(int(requestCount)))
mu.Lock()
switch requestCount {
case 1:
firstReq = &captured
default:
secondReq = &captured
}
mu.Unlock()
}
}
upstreamReq, err := http.NewRequestWithContext(
r.Context(),
r.Method,
upstreamOpenAIURL+r.URL.RequestURI(),
bytes.NewReader(body),
)
if err != nil {
t.Errorf("create upstream OpenAI request: %v", err)
http.Error(rw, err.Error(), http.StatusInternalServerError)
return
}
upstreamReq.Header = r.Header.Clone()
resp, err := http.DefaultClient.Do(upstreamReq)
if err != nil {
t.Errorf("forward OpenAI request: %v", err)
http.Error(rw, err.Error(), http.StatusBadGateway)
return
}
defer resp.Body.Close()
for key, values := range resp.Header {
for _, value := range values {
rw.Header().Add(key, value)
}
}
rw.WriteHeader(resp.StatusCode)
if _, err := io.Copy(rw, resp.Body); err != nil {
t.Errorf("copy OpenAI response body: %v", err)
}
}))
t.Cleanup(captureServer.Close)
openAIURL := captureServer.URL
deploymentValues := coderdtest.DeploymentValues(t)
deploymentValues.Experiments = []string{string(codersdk.ExperimentAgents)}
client := coderdtest.New(t, &coderdtest.Options{
DeploymentValues: deploymentValues,
})
_ = coderdtest.CreateFirstUser(t, client)
expClient := codersdk.NewExperimentalClient(client)
_, err := expClient.CreateChatProvider(ctx, codersdk.CreateChatProviderConfigRequest{
Provider: "openai",
APIKey: "test-api-key",
BaseURL: openAIURL,
})
require.NoError(t, err)
contextLimit := int64(200000)
isDefault := true
reasoningEffort := "medium"
reasoningSummary := "auto"
_, err = expClient.CreateChatModelConfig(ctx, codersdk.CreateChatModelConfigRequest{
Provider: "openai",
Model: "o4-mini",
ContextLimit: &contextLimit,
IsDefault: &isDefault,
ModelConfig: &codersdk.ChatModelCallConfig{
ProviderOptions: &codersdk.ChatModelProviderOptions{
OpenAI: &codersdk.ChatModelOpenAIProviderOptions{
Store: ptr.Ref(store),
ReasoningEffort: &reasoningEffort,
ReasoningSummary: &reasoningSummary,
WebSearchEnabled: ptr.Ref(true),
},
},
},
})
require.NoError(t, err)
t.Logf("Creating chat with reasoning + web search query (store=%t)...", store)
chat, err := expClient.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{{
Type: codersdk.ChatInputPartTypeText,
Text: "Search for the latest AI news and summarize it briefly.",
}},
})
require.NoError(t, err)
events, closer, err := expClient.StreamChat(ctx, chat.ID, nil)
require.NoError(t, err)
defer closer.Close()
waitForChatDone(ctx, t, events, "step 1")
chatData, err := expClient.GetChat(ctx, chat.ID)
require.NoError(t, err)
chatMsgs, err := expClient.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
require.Equal(t, codersdk.ChatStatusWaiting, chatData.Status,
"chat should be in waiting status after step 1")
assistantMsg := findAssistantWithText(t, chatMsgs.Messages)
require.NotNil(t, assistantMsg,
"expected an assistant message with text content after step 1")
partTypes := partTypeSet(assistantMsg.Content)
require.Contains(t, partTypes, codersdk.ChatMessagePartTypeReasoning,
"assistant message should contain reasoning parts")
require.Contains(t, partTypes, codersdk.ChatMessagePartTypeToolCall,
"assistant message should contain a provider-executed web search tool call")
require.Contains(t, partTypes, codersdk.ChatMessagePartTypeToolResult,
"assistant message should contain a provider-executed web search tool result")
require.Contains(t, partTypes, codersdk.ChatMessagePartTypeText,
"assistant message should contain a text part")
var foundReasoning, foundWebSearchCall, foundText bool
for _, part := range assistantMsg.Content {
switch part.Type {
case codersdk.ChatMessagePartTypeReasoning:
// fantasy emits a leading newline when the reasoning summary part is
// added, so match the persisted summary text after trimming whitespace.
if strings.TrimSpace(part.Text) == "thinking about the question" {
foundReasoning = true
}
case codersdk.ChatMessagePartTypeToolCall:
if part.ToolName == "web_search" {
require.True(t, part.ProviderExecuted,
"web search tool-call should be marked provider-executed")
foundWebSearchCall = true
}
case codersdk.ChatMessagePartTypeText:
if part.Text == "Here is what I found." {
foundText = true
}
}
}
require.True(t, foundReasoning, "expected reasoning summary text to be persisted")
require.True(t, foundWebSearchCall, "expected persisted web_search tool call")
require.True(t, foundText, "expected streamed assistant text to be persisted")
t.Log("Sending follow-up message...")
_, err = expClient.CreateChatMessage(ctx, chat.ID, codersdk.CreateChatMessageRequest{
Content: []codersdk.ChatInputPart{{
Type: codersdk.ChatInputPartTypeText,
Text: "What is the follow-up takeaway?",
}},
})
if !store && err != nil {
require.NotContains(t, err.Error(),
"Items are not persisted when store is set to false.",
"follow-up should reconstruct store=false responses without stale provider item IDs")
}
require.NoError(t, err)
events2, closer2, err := expClient.StreamChat(ctx, chat.ID, nil)
require.NoError(t, err)
defer closer2.Close()
waitForChatDone(ctx, t, events2, "step 2")
chatData2, err := expClient.GetChat(ctx, chat.ID)
require.NoError(t, err)
chatMsgs2, err := expClient.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
require.Equal(t, codersdk.ChatStatusWaiting, chatData2.Status,
"chat should be in waiting status after step 2")
require.Greater(t, len(chatMsgs2.Messages), len(chatMsgs.Messages),
"follow-up should have added more messages")
require.NotNil(t, findLastAssistantWithText(t, chatMsgs2.Messages),
"expected an assistant message with text after the follow-up")
require.Equal(t, int32(2), streamRequestCount.Load(),
"expected exactly two streamed OpenAI responses")
mu.Lock()
defer mu.Unlock()
require.NotNil(t, firstReq, "expected first streaming request to be captured")
if store {
require.NotNil(t, firstReq.Store, "first request should have store field")
require.True(t, *firstReq.Store, "store should be true")
} else if firstReq.Store != nil {
require.False(t, *firstReq.Store, "store should be false")
}
require.NotNil(t, secondReq, "expected second streaming request to be captured")
foundAssistantReplay := false
for _, item := range secondReq.Prompt {
m, ok := item.(map[string]interface{})
if !ok {
continue
}
role, _ := m["role"].(string)
if role == "assistant" {
foundAssistantReplay = true
}
if store {
require.NotEqual(t, "assistant", role,
"store=true chain-mode prompt should not replay assistant messages")
require.NotEqual(t, "tool", role,
"store=true chain-mode prompt should not replay tool messages")
}
}
if store {
require.NotNil(t, secondReq.PreviousResponseID,
"store=true follow-up should set previous_response_id")
require.Equal(t, "resp_first_test", *secondReq.PreviousResponseID,
"previous_response_id should match the first response's ID")
} else {
if secondReq.PreviousResponseID != nil {
require.Empty(t, *secondReq.PreviousResponseID,
"store=false follow-up should not set previous_response_id")
}
require.True(t, foundAssistantReplay,
"store=false follow-up should replay prior assistant history")
}
}
@@ -38,6 +38,11 @@ const (
APIKeyScopeChatDelete APIKeyScope = "chat:delete"
APIKeyScopeChatRead APIKeyScope = "chat:read"
APIKeyScopeChatUpdate APIKeyScope = "chat:update"
APIKeyScopeChatAutomationAll APIKeyScope = "chat_automation:*"
APIKeyScopeChatAutomationCreate APIKeyScope = "chat_automation:create"
APIKeyScopeChatAutomationDelete APIKeyScope = "chat_automation:delete"
APIKeyScopeChatAutomationRead APIKeyScope = "chat_automation:read"
APIKeyScopeChatAutomationUpdate APIKeyScope = "chat_automation:update"
APIKeyScopeCoderAll APIKeyScope = "coder:all"
APIKeyScopeCoderApikeysManageSelf APIKeyScope = "coder:apikeys.manage_self"
APIKeyScopeCoderApplicationConnect APIKeyScope = "coder:application_connect"
@@ -68,6 +68,11 @@ type Chat struct {
// the owner's read cursor, which updates on stream
// connect and disconnect.
HasUnread bool `json:"has_unread"`
// LastInjectedContext holds the most recently persisted
// injected context parts (AGENTS.md files and skills). It
// is updated only when context changes — first workspace
// attach or agent change.
LastInjectedContext []ChatMessagePart `json:"last_injected_context,omitempty"`
}
// ChatMessage represents a single message in a chat.
@@ -3923,15 +3923,17 @@ Write out the current server config as YAML to stdout.`,
YAML: "key_file",
},
{
Name: "AI Bridge Proxy Domain Allowlist",
Description: "Comma-separated list of AI provider domains for which HTTPS traffic will be decrypted and routed through AI Bridge. Requests to other domains will be tunneled directly without decryption. Supported domains: api.anthropic.com, api.openai.com, api.individual.githubcopilot.com.",
Flag: "aibridge-proxy-domain-allowlist",
Env: "CODER_AIBRIDGE_PROXY_DOMAIN_ALLOWLIST",
Value: &c.AI.BridgeProxyConfig.DomainAllowlist,
Default: "api.anthropic.com,api.openai.com,api.individual.githubcopilot.com",
Hidden: true,
Group: &deploymentGroupAIBridgeProxy,
YAML: "domain_allowlist",
Name: "AI Bridge Proxy Domain Allowlist",
Description: "Comma-separated list of AI provider domains for which HTTPS traffic will be decrypted and routed through AI Bridge. " +
"Requests to other domains will be tunneled directly without decryption. " +
"Supported domains: api.anthropic.com, api.openai.com, api.individual.githubcopilot.com, api.business.githubcopilot.com, api.enterprise.githubcopilot.com, chatgpt.com.",
Flag: "aibridge-proxy-domain-allowlist",
Env: "CODER_AIBRIDGE_PROXY_DOMAIN_ALLOWLIST",
Value: &c.AI.BridgeProxyConfig.DomainAllowlist,
Default: "api.anthropic.com,api.openai.com,api.individual.githubcopilot.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,chatgpt.com",
Hidden: true,
Group: &deploymentGroupAIBridgeProxy,
YAML: "domain_allowlist",
},
{
Name: "AI Bridge Proxy Upstream Proxy",
@@ -12,6 +12,7 @@ const (
ResourceAuditLog RBACResource = "audit_log"
ResourceBoundaryUsage RBACResource = "boundary_usage"
ResourceChat RBACResource = "chat"
ResourceChatAutomation RBACResource = "chat_automation"
ResourceConnectionLog RBACResource = "connection_log"
ResourceCryptoKey RBACResource = "crypto_key"
ResourceDebugInfo RBACResource = "debug_info"
@@ -84,6 +85,7 @@ var RBACResourceActions = map[RBACResource][]RBACAction{
ResourceAuditLog: {ActionCreate, ActionRead},
ResourceBoundaryUsage: {ActionDelete, ActionRead, ActionUpdate},
ResourceChat: {ActionCreate, ActionDelete, ActionRead, ActionUpdate},
ResourceChatAutomation: {ActionCreate, ActionDelete, ActionRead, ActionUpdate},
ResourceConnectionLog: {ActionRead, ActionUpdate},
ResourceCryptoKey: {ActionCreate, ActionDelete, ActionRead, ActionUpdate},
ResourceDebugInfo: {ActionRead},
@@ -1,12 +1,13 @@
package codersdk
// Ideally this roles would be generated from the rbac/roles.go package.
// Ideally these roles would be generated from the rbac/roles.go package.
const (
RoleOwner string = "owner"
RoleMember string = "member"
RoleTemplateAdmin string = "template-admin"
RoleUserAdmin string = "user-admin"
RoleAuditor string = "auditor"
RoleAgentsAccess string = "agents-access"
RoleOrganizationAdmin string = "organization-admin"
RoleOrganizationMember string = "organization-member"
@@ -31,17 +31,35 @@ type WorkspaceStarter interface {
StartWorkspace() error
}
type Client interface {
DialAgent(dialCtx context.Context, agentID uuid.UUID, options *workspacesdk.DialAgentOptions) (workspacesdk.AgentConn, error)
}
const (
// stateInit is the initial state of the FSM.
stateInit state = iota
// exit is the final state of the FSM, and implies that everything is closed or closing.
exit
// waitToStart means the workspace is in a state where we have to wait before we can create a new start build
waitToStart
// waitForWorkspaceStarted means the workspace is starting, or we have kicked off a goroutine to start it
waitForWorkspaceStarted
// waitForAgent means the workspace has started and we are waiting for the agent to connect or be ready
waitForAgent
// establishTailnet means we have kicked off a goroutine to dial the agent and are waiting for its results
establishTailnet
// tailnetUp means the tailnet connection came up and we kicked off a goroutine to start the NetworkedApplication.
tailnetUp
// applicationUp means the NetworkedApplication is up.
applicationUp
// shutdownApplication means we are in graceful shut down and waiting for the NetworkedApplication. It could be
// starting or closing, and we expect a networkedApplicationUpdate event when it finishes either.
shutdownApplication
// shutdownTailnet means that we are in graceful shut down and waiting for the tailnet. This implies the
// NetworkedApplication is down, e.g. closed or never started.
shutdownTailnet
// maxState is not a valid state for the FSM, and must be last in this list. It allows tests to iterate over all
// valid states using `range maxState`.
maxState // used for testing
)
@@ -49,7 +67,7 @@ type Tunneler struct {
config Config
ctx context.Context
cancel context.CancelFunc
client *workspacesdk.Client
client Client
state state
agentConn workspacesdk.AgentConn
events chan tunnelerEvent
@@ -98,22 +116,24 @@ type buildUpdate struct {
}
type agentUpdate struct {
// TODO: commented out to appease linter
// transition codersdk.WorkspaceTransition
// id uuid.UUID
lifecycle codersdk.WorkspaceAgentLifecycle
id uuid.UUID
}
type networkedApplicationUpdate struct {
// up is true if the application is up. False if it is down.
up bool
up bool
err error
}
type tailnetUpdate struct {
// up is true if the tailnet is up. False if it is down.
up bool
up bool
conn workspacesdk.AgentConn
err error
}
func NewTunneler(client *workspacesdk.Client, config Config) *Tunneler {
func NewTunneler(client Client, config Config) *Tunneler {
t := &Tunneler{
config: config,
client: client,
@@ -166,13 +186,17 @@ func (t *Tunneler) handleSignal() {
switch t.state {
case exit, shutdownTailnet, shutdownApplication:
return
case tailnetUp, applicationUp:
case applicationUp:
t.wg.Add(1)
go t.closeApp()
t.state = shutdownApplication
case tailnetUp:
// waiting for app to start; setting state here will cause us to tear it down when the app start goroutine
// event comes in.
t.state = shutdownApplication
case establishTailnet:
t.wg.Add(1)
go t.shutdownTailnet()
// waiting for tailnet to start; setting state here will cause us to tear it down when the tailnet dial
// goroutine event comes in.
t.state = shutdownTailnet
case stateInit, waitToStart, waitForWorkspaceStarted, waitForAgent:
t.cancel() // stops the watch
@@ -212,13 +236,12 @@ func (t *Tunneler) handleBuildUpdate(update *buildUpdate) {
if update.transition == codersdk.WorkspaceTransitionStart && canMakeProgress {
t.config.DebugLogger.Debug(t.ctx, "workspace is starting", slog.F("job_status", update.jobStatus))
switch t.state {
case establishTailnet:
// new build after we're already connecting
t.wg.Add(1)
go t.shutdownTailnet()
// new build after we have already connected
case establishTailnet: // we are starting the tailnet
t.state = shutdownTailnet
case applicationUp, tailnetUp:
// new build after we have already connected
case tailnetUp: // we are starting the application
t.state = shutdownApplication
case applicationUp:
t.wg.Add(1)
go t.closeApp()
t.state = shutdownApplication
@@ -241,14 +264,14 @@ func (t *Tunneler) handleBuildUpdate(update *buildUpdate) {
if update.transition == codersdk.WorkspaceTransitionStop {
// these cases take effect regardless of whether the transition is complete or not
switch t.state {
case establishTailnet:
// new build after we're already connecting
t.wg.Add(1)
go t.shutdownTailnet()
// all 3 of these mean a new build after we have already started connecting
case establishTailnet: // waiting for tailnet to start
t.state = shutdownTailnet
return
case applicationUp, tailnetUp:
// new build after we have already connected
case tailnetUp: // waiting for application to start
t.state = shutdownApplication
return
case applicationUp:
t.wg.Add(1)
go t.closeApp()
t.state = shutdownApplication
@@ -289,7 +312,39 @@ func (t *Tunneler) handleBuildUpdate(update *buildUpdate) {
func (*Tunneler) handleProvisionerJobLog(*codersdk.ProvisionerJobLog) {
}
func (*Tunneler) handleAgentUpdate(*agentUpdate) {
func (t *Tunneler) handleAgentUpdate(update *agentUpdate) {
if t.state != waitForAgent {
return
}
doConnect := func() {
t.wg.Add(1)
t.state = establishTailnet
go t.connectTailnet(update.id)
}
// A consequence of ignoring updates when we are not waiting for the agent is that we MUST receive
// the start-build-succeeded update BEFORE the agent connected/ready update. We should keep this
// in mind when implementing the watch in Coderd.
switch update.lifecycle {
case codersdk.WorkspaceAgentLifecycleReady:
doConnect()
return
case codersdk.WorkspaceAgentLifecycleStarting,
codersdk.WorkspaceAgentLifecycleStartError,
codersdk.WorkspaceAgentLifecycleStartTimeout:
if t.config.NoWaitForScripts {
doConnect()
return
}
case codersdk.WorkspaceAgentLifecycleShuttingDown:
case codersdk.WorkspaceAgentLifecycleShutdownError:
case codersdk.WorkspaceAgentLifecycleShutdownTimeout:
case codersdk.WorkspaceAgentLifecycleOff:
case codersdk.WorkspaceAgentLifecycleCreated: // initial state, so it hasn't connected yet
default:
// unhittable unless new states are added; we enumerate every lifecycle state explicitly
// so that any future, unhandled state hits this critical log.
t.config.DebugLogger.Critical(t.ctx, "unhandled agent update", slog.F("lifecycle", update.lifecycle))
}
}
func (*Tunneler) handleAgentLog(*codersdk.WorkspaceAgentLog) {
@@ -310,7 +365,7 @@ func (t *Tunneler) closeApp() {
select {
case <-t.ctx.Done():
t.config.DebugLogger.Info(t.ctx, "context expired before sending app down")
case t.events <- tunnelerEvent{appUpdate: &networkedApplicationUpdate{up: false, err: err}}:
}
}
@@ -325,20 +380,44 @@ func (t *Tunneler) startWorkspace() {
select {
case <-t.ctx.Done():
t.config.DebugLogger.Info(t.ctx, "context expired before sending signal after failed workspace start")
case t.events <- tunnelerEvent{appUpdate: &networkedApplicationUpdate{up: false}}:
}
}
}
func (t *Tunneler) connectTailnet(id uuid.UUID) {
defer t.wg.Done()
conn, err := t.client.DialAgent(t.ctx, id, &workspacesdk.DialAgentOptions{
Logger: t.config.DebugLogger.Named("dialer"),
})
if err != nil {
t.config.DebugLogger.Error(t.ctx, "failed to connect agent", slog.Error(err))
if t.config.LogWriter != nil {
_, _ = fmt.Fprintf(t.config.LogWriter, "failed to dial workspace agent: %s", err.Error())
}
select {
case <-t.ctx.Done():
t.config.DebugLogger.Info(t.ctx, "context expired before sending event after failed agent dial")
case t.events <- tunnelerEvent{tailnetUpdate: &tailnetUpdate{up: false, err: err}}:
}
return
}
select {
case <-t.ctx.Done():
t.config.DebugLogger.Info(t.ctx, "context expired before sending tailnet conn")
case t.events <- tunnelerEvent{tailnetUpdate: &tailnetUpdate{up: true, conn: conn}}:
}
}
// TODO: Restore this func when we implement tearing down the tailnet
// func (t *Tunneler) shutdownTailnet() {
// defer t.wg.Done()
// err := t.agentConn.Close()
// if err != nil {
// t.config.DebugLogger.Error(t.ctx, "failed to close agent connection", slog.Error(err))
// }
// select {
// case <-t.ctx.Done():
// t.config.DebugLogger.Debug(t.ctx, "context expired before sending event after shutting down tailnet")
// case t.events <- tunnelerEvent{tailnetUpdate: &tailnetUpdate{up: false, err: err}}:
// }
//}
@@ -27,46 +27,9 @@ func TestHandleBuildUpdate_Coverage(t *testing.T) {
for _, noWaitForScripts := range []bool{true, false} {
t.Run(fmt.Sprintf("%d_%s_%s_%t_%t", s, trans, jobStatus, noAutostart, noWaitForScripts), func(t *testing.T) {
t.Parallel()
coverUpdate(t, workspaceID, noAutostart, noWaitForScripts, s, func(uut *Tunneler) {
uut.handleBuildUpdate(&buildUpdate{transition: trans, jobStatus: jobStatus})
})
})
}
}
@@ -75,6 +38,51 @@ func TestHandleBuildUpdate_Coverage(t *testing.T) {
}
}
func coverUpdate(t *testing.T, workspaceID uuid.UUID, noAutostart bool, noWaitForScripts bool, s state, update func(uut *Tunneler)) {
ctrl := gomock.NewController(t)
mAgentConn := agentconnmock.NewMockAgentConn(ctrl)
logger := testutil.Logger(t)
fClient := &fakeClient{conn: mAgentConn}
testCtx := testutil.Context(t, testutil.WaitShort)
ctx, cancel := context.WithCancel(testCtx)
uut := &Tunneler{
client: fClient,
config: Config{
WorkspaceID: workspaceID,
App: fakeApp{},
WorkspaceStarter: &fakeWorkspaceStarter{},
AgentName: "test",
NoAutostart: noAutostart,
NoWaitForScripts: noWaitForScripts,
DebugLogger: logger.Named("tunneler"),
},
events: make(chan tunnelerEvent),
ctx: ctx,
cancel: cancel,
state: s,
agentConn: mAgentConn,
}
mAgentConn.EXPECT().Close().Return(nil).AnyTimes()
update(uut)
done := make(chan struct{})
go func() {
defer close(done)
uut.wg.Wait()
}()
cancel() // cancel in case the update triggers a goroutine that writes another event
// ensure we don't leak a goroutine
_ = testutil.TryReceive(testCtx, t, done)
// We're not asserting the resulting state, as there are just too many to directly enumerate
// due to the combinations. Unhandled cases will hit a critical log in the handler and fail
// the test.
require.Less(t, uut.state, maxState)
require.GreaterOrEqual(t, uut.state, 0)
}
func TestBuildUpdatesStoppedWorkspace(t *testing.T) {
t.Parallel()
workspaceID := uuid.UUID{1}
@@ -234,6 +242,96 @@ func TestBuildUpdatesNoAutostart(t *testing.T) {
require.Error(t, ctx.Err())
}
func TestAgentUpdate_Coverage(t *testing.T) {
t.Parallel()
workspaceID := uuid.UUID{1}
agentID := uuid.UUID{2}
for s := range maxState {
for _, lifecycle := range codersdk.WorkspaceAgentLifecycleOrder {
for _, noAutostart := range []bool{true, false} {
for _, noWaitForScripts := range []bool{true, false} {
t.Run(fmt.Sprintf("%d_%s_%t_%t", s, lifecycle, noAutostart, noWaitForScripts), func(t *testing.T) {
t.Parallel()
coverUpdate(t, workspaceID, noAutostart, noWaitForScripts, s, func(uut *Tunneler) {
uut.handleAgentUpdate(&agentUpdate{lifecycle: lifecycle, id: agentID})
})
})
}
}
}
}
}
func TestAgentUpdateReady(t *testing.T) {
t.Parallel()
workspaceID := uuid.UUID{1}
agentID := uuid.UUID{2}
logger := testutil.Logger(t)
ctrl := gomock.NewController(t)
mAgentConn := agentconnmock.NewMockAgentConn(ctrl)
fClient := &fakeClient{conn: mAgentConn}
testCtx := testutil.Context(t, testutil.WaitShort)
ctx, cancel := context.WithCancel(testCtx)
uut := &Tunneler{
config: Config{
WorkspaceID: workspaceID,
AgentName: "test",
DebugLogger: logger.Named("tunneler"),
},
events: make(chan tunnelerEvent),
ctx: ctx,
cancel: cancel,
state: waitForAgent,
client: fClient,
}
uut.handleAgentUpdate(&agentUpdate{lifecycle: codersdk.WorkspaceAgentLifecycleReady, id: agentID})
require.Equal(t, establishTailnet, uut.state)
event := testutil.RequireReceive(testCtx, t, uut.events)
require.NotNil(t, event.tailnetUpdate)
require.True(t, fClient.dialed)
require.Equal(t, mAgentConn, event.tailnetUpdate.conn)
require.True(t, event.tailnetUpdate.up)
}
func TestAgentUpdateNoWait(t *testing.T) {
t.Parallel()
workspaceID := uuid.UUID{1}
agentID := uuid.UUID{2}
logger := testutil.Logger(t)
ctrl := gomock.NewController(t)
mAgentConn := agentconnmock.NewMockAgentConn(ctrl)
fClient := &fakeClient{conn: mAgentConn}
testCtx := testutil.Context(t, testutil.WaitShort)
ctx, cancel := context.WithCancel(testCtx)
uut := &Tunneler{
config: Config{
WorkspaceID: workspaceID,
AgentName: "test",
DebugLogger: logger.Named("tunneler"),
NoWaitForScripts: true,
},
events: make(chan tunnelerEvent),
ctx: ctx,
cancel: cancel,
state: waitForAgent,
client: fClient,
}
uut.handleAgentUpdate(&agentUpdate{lifecycle: codersdk.WorkspaceAgentLifecycleStarting, id: agentID})
require.Equal(t, establishTailnet, uut.state)
event := testutil.RequireReceive(testCtx, t, uut.events)
require.NotNil(t, event.tailnetUpdate)
require.True(t, fClient.dialed)
require.Equal(t, mAgentConn, event.tailnetUpdate.conn)
require.True(t, event.tailnetUpdate.up)
}
func waitForGoroutines(ctx context.Context, t *testing.T, tunneler *Tunneler) {
done := make(chan struct{})
go func() {
@@ -259,3 +357,13 @@ func (fakeApp) Close() error {
}
func (fakeApp) Start(workspacesdk.AgentConn) {}
type fakeClient struct {
conn workspacesdk.AgentConn
dialed bool
}
func (f *fakeClient) DialAgent(context.Context, uuid.UUID, *workspacesdk.DialAgentOptions) (workspacesdk.AgentConn, error) {
f.dialed = true
return f.conn, nil
}
@@ -106,10 +106,15 @@ Tools are how the agent takes action. Each tool call from the LLM translates to
a concrete operation — either inside a workspace or within the control plane
itself.
The agent is restricted to the built-in tool set defined in this section,
plus any additional tools from workspace skills and MCP servers. Skills
provide structured instructions the agent loads on demand
(see [Extending Agents](./extending-agents.md)). MCP tools come from
admin-configured external servers
(see [MCP Servers](./platform-controls/mcp-servers.md)) and from workspace
`.mcp.json` files. The agent has no direct access to the Coder API beyond
what these tools expose and cannot execute arbitrary operations against the
control plane.
### Workspace connection lifecycle
@@ -144,24 +149,26 @@ workspace connection. Platform and orchestration tools are only available to
root chats — sub-agents spawned by `spawn_agent` do not have access to them
and cannot create workspaces or spawn further sub-agents.
| Tool | What it does |
|--------------------|-----------------------------------------------------------------------------------------|
| `list_templates` | Browses available workspace templates, sorted by popularity. |
| `read_template` | Gets template details and configurable parameters. |
| `create_workspace` | Creates a workspace from a template and waits for it to be ready. |
| `start_workspace` | Starts the chat's workspace if it is currently stopped. Idempotent if already running. |
| `propose_plan` | Presents a Markdown plan file from the workspace for user review before implementation. |
### Orchestration tools
These tools manage sub-agents — child chats that work on independent tasks in
parallel.
| Tool | What it does |
|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `spawn_agent` | Delegates a task to a sub-agent with its own context window. |
| `wait_agent` | Waits for a sub-agent to finish and collects its result. |
| `message_agent` | Sends a follow-up message to a running sub-agent. |
| `close_agent` | Stops a running sub-agent. |
| `spawn_computer_use_agent` | Spawns a sub-agent with desktop interaction capabilities (screenshot, mouse, keyboard). Requires an Anthropic provider and the desktop feature to be enabled by an administrator. |
### Provider tools
@@ -65,6 +65,9 @@ Once the server restarts with the experiment enabled:
1. Navigate to the **Agents** page in the Coder dashboard.
1. Open **Admin** settings and configure at least one LLM provider and model.
See [Models](./models.md) for detailed setup instructions.
1. Grant the **Coder Agents User** role to users who need to create chats.
Go to **Admin** > **Users**, click the roles icon next to each user,
and enable **Coder Agents User**.
1. Developers can then start a new chat from the Agents page.
## Licensing and availability
@@ -0,0 +1,130 @@
# Extending Agents
Workspace templates can extend the agent with custom skills and MCP tools.
These mechanisms let platform teams provide repository-specific instructions,
domain expertise, and external tool integrations without modifying the agent
itself.
## Skills
Skills are structured, reusable instruction sets that the agent loads on
demand. They live in the workspace filesystem and are discovered
automatically when a chat attaches to a workspace.
### How skills work
Place skill directories under `.agents/skills/` relative to the workspace
working directory. Each directory contains a required `SKILL.md` file and
any supporting files the skill needs.
On the first turn of a workspace-attached chat, the agent scans
`.agents/skills/` and builds an `<available-skills>` block in its system
prompt listing each skill's name and description. Only frontmatter is read
during discovery — the full skill content is loaded lazily when the agent
calls a tool.
Two tools are registered when skills are present:
| Tool | Parameters | Description |
|-------------------|----------------------------------|----------------------------------------------------------|
| `read_skill` | `name` (string) | Returns the SKILL.md body and a list of supporting files |
| `read_skill_file` | `name` (string), `path` (string) | Returns the content of a supporting file |
### Directory structure
```text
.agents/skills/
├── deep-review/
│ ├── SKILL.md
│ └── roles/
│ ├── security-reviewer.md
│ └── concurrency-reviewer.md
├── pull-requests/
│ └── SKILL.md
└── refine-plan/
└── SKILL.md
```
### SKILL.md format
Each `SKILL.md` starts with YAML frontmatter containing a `name` and an
optional `description`, followed by the full instructions in markdown:
```markdown
---
name: deep-review
description: "Multi-reviewer code review with domain-specific reviewers"
---
# Deep Review
Instructions for the skill go here...
```
### Naming and size constraints
- Names must be kebab-case (`^[a-z0-9]+(-[a-z0-9]+)*$`) and match the
directory name exactly.
- `SKILL.md` has a maximum size of 64 KB.
- Supporting files have a maximum size of 512 KB. Files exceeding the limit
are silently truncated.
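The kebab-case constraint maps directly onto the documented pattern. As a minimal, illustrative sketch (the function name `isValidSkillName` is ours, not part of the agent):

```go
package main

import (
	"fmt"
	"regexp"
)

// kebab-case: lowercase alphanumeric segments separated by single hyphens,
// matching the pattern documented for skill names.
var skillNameRe = regexp.MustCompile(`^[a-z0-9]+(-[a-z0-9]+)*$`)

// isValidSkillName reports whether name satisfies the kebab-case rule.
// A real implementation would also check that name matches the directory name.
func isValidSkillName(name string) bool {
	return skillNameRe.MatchString(name)
}

func main() {
	for _, n := range []string{"deep-review", "Deep-Review", "pull--requests"} {
		fmt.Printf("%s: %t\n", n, isValidSkillName(n))
	}
	// deep-review: true, Deep-Review: false, pull--requests: false
}
```

Note that doubled hyphens and uppercase letters both fail, since each hyphen must be followed by at least one lowercase alphanumeric character.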
### Path safety
`read_skill_file` rejects absolute paths, paths containing `..`, and
references to hidden files. All paths are resolved relative to the skill
directory.
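Those three rejection rules can be sketched as a small validator. This is illustrative only, with a hypothetical `safeSkillPath` helper rather than the agent's actual implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeSkillPath mirrors the documented rules for read_skill_file:
// reject absolute paths, any ".." traversal, and hidden files.
func safeSkillPath(path string) error {
	if filepath.IsAbs(path) {
		return fmt.Errorf("absolute paths are not allowed: %s", path)
	}
	for _, seg := range strings.Split(filepath.ToSlash(path), "/") {
		if seg == ".." {
			return fmt.Errorf("path traversal is not allowed: %s", path)
		}
		if strings.HasPrefix(seg, ".") && seg != "." {
			return fmt.Errorf("hidden files are not allowed: %s", path)
		}
	}
	return nil
}

func main() {
	for _, p := range []string{"roles/security-reviewer.md", "../secrets.txt", ".env"} {
		fmt.Println(p, "->", safeSkillPath(p))
	}
}
```

A valid path like `roles/security-reviewer.md` passes; traversal and dotfile paths return an error before any file is read.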
## Workspace MCP tools
Workspace templates can expose custom
[MCP](https://modelcontextprotocol.io/introduction) tools by placing a
`.mcp.json` file in the workspace working directory. The agent discovers
these tools automatically when it connects to a workspace and registers
them alongside its built-in tools.
### Configuration
Define MCP servers in `.mcp.json` at the workspace root. Each entry under
`mcpServers` describes a server. The transport type is inferred from
whether `command` or `url` is present, or you can set it explicitly with
`type`:
```json
{
"mcpServers": {
"github": {
"command": "github-mcp-server",
"args": ["--token", "..."]
},
"my-api": {
"type": "http",
"url": "http://localhost:8080/mcp",
"headers": { "Authorization": "Bearer ..." }
}
}
}
```
**Stdio transport** — set `command`, and optionally `args` and `env`. The
agent spawns the process in the workspace.
**HTTP transport** — set `url`, and optionally `headers`. The agent connects
to the HTTP endpoint from the workspace.
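The inference rule above can be sketched as: an explicit `type` wins, otherwise `command` implies stdio and `url` implies HTTP. A minimal sketch under those assumptions (the `serverConfig` struct and `inferTransport` function are ours, for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// serverConfig mirrors one entry under mcpServers in .mcp.json.
type serverConfig struct {
	Type    string `json:"type,omitempty"`
	Command string `json:"command,omitempty"`
	URL     string `json:"url,omitempty"`
}

// inferTransport applies the documented rule: explicit "type" wins,
// then "command" implies stdio, then "url" implies http.
func inferTransport(c serverConfig) (string, error) {
	switch {
	case c.Type != "":
		return c.Type, nil
	case c.Command != "":
		return "stdio", nil
	case c.URL != "":
		return "http", nil
	default:
		return "", fmt.Errorf("entry has neither command nor url")
	}
}

func main() {
	raw := `{"github":{"command":"github-mcp-server"},"my-api":{"type":"http","url":"http://localhost:8080/mcp"}}`
	var servers map[string]serverConfig
	if err := json.Unmarshal([]byte(raw), &servers); err != nil {
		panic(err)
	}
	for _, name := range []string{"github", "my-api"} {
		t, _ := inferTransport(servers[name])
		fmt.Println(name, t) // github stdio, my-api http
	}
}
```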
### How discovery works
The agent reads `.mcp.json` via the workspace agent connection on each chat
turn. Discovery uses a 5-second timeout. Servers that fail to
respond are skipped — partial success is acceptable. Empty results are not
cached because the MCP servers may still be starting.
### Tool naming
Tool names are prefixed with the server name as `serverName__toolName` to
avoid collisions between servers and with built-in tools.
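The prefixing scheme is simple string concatenation, sketched here for clarity (function name is illustrative):

```go
package main

import "fmt"

// qualifiedToolName applies the documented "serverName__toolName" prefix
// so tools from different servers cannot collide with each other or with
// built-in tools.
func qualifiedToolName(server, tool string) string {
	return server + "__" + tool
}

func main() {
	fmt.Println(qualifiedToolName("github", "create_issue")) // github__create_issue
}
```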
### Timeouts
- **Discovery**: 5-second timeout.
- **Tool calls**: 60 seconds per invocation.

@@ -24,6 +24,9 @@ Before you begin, confirm the following:
for the agent to select when provisioning workspaces.
- **Admin access** to the Coder deployment for enabling the experiment and
configuring providers.
- **Coder Agents User role** assigned to each user who needs to create or use chats.
Owners can assign this from **Admin** > **Users**. See
[Grant Coder Agents User](#step-3-grant-coder-agents-user) below.
## Step 1: Enable the experiment
@@ -69,7 +72,23 @@ Detailed instructions for each provider and model option are in the
> Start with a single frontier model to validate your setup before adding
> additional providers.
## Step 3: Grant Coder Agents User
The **Coder Agents User** role controls which users can create and use chats.
Members do not have Coder Agents User by default.
1. Go to **Admin** > **Users** in the Coder dashboard.
1. Click the roles icon next to the user you want to grant access to.
1. Enable the **Coder Agents User** role and save.
Repeat for each user who needs access. Owners always have full access
and do not need the role.
> [!NOTE]
> Users who created chats before this role was introduced are
> automatically granted the role during upgrade.
## Step 4: Start your first chat
1. Go to the **Agents** page in the Coder dashboard.
1. Select a model from the dropdown (your default will be pre-selected).
@@ -238,10 +238,14 @@ tasks:
| `read_template` | Get template details and configurable parameters |
| `create_workspace` | Create a workspace from a template |
| `start_workspace` | Start a stopped workspace for the current chat |
| `propose_plan` | Present a Markdown plan file for user review |
| `read_file` | Read file contents from the workspace |
| `write_file` | Write a file to the workspace |
| `edit_files` | Perform search-and-replace edits across files |
| `execute` | Run shell commands in the workspace |
| `process_output` | Retrieve output from a background process |
| `process_list` | List all tracked processes in the workspace |
| `process_signal` | Send a signal (terminate/kill) to a tracked process |
| `spawn_agent` | Delegate a task to a sub-agent running in parallel |
| `wait_agent` | Wait for a sub-agent to complete and collect its result |
| `message_agent` | Send a follow-up message to a running sub-agent |
@@ -253,7 +257,7 @@ web terminals and IDE access. No additional ports or services are required in
the workspace.
Platform tools (`list_templates`, `read_template`, `create_workspace`,
`start_workspace`, `propose_plan`) and orchestration tools (`spawn_agent`)
are only available to root chats. Sub-agents do
not have access to these tools and cannot create workspaces or spawn further
sub-agents.
@@ -132,11 +132,11 @@ fields appear dynamically in the admin UI when you select a provider.
#### OpenAI
| Option | Description |
|-----------------------|---------------------------------------------------------------------------------------------------|
| Reasoning Effort | How much effort the model spends reasoning (`none`, `minimal`, `low`, `medium`, `high`, `xhigh`). |
| Max Completion Tokens | Cap on completion tokens for reasoning models. |
| Parallel Tool Calls | Whether the model can call multiple tools at once. |
#### Google
@@ -144,24 +144,20 @@ fields appear dynamically in the admin UI when you select a provider.
|------------------|-----------------------------------------------------|
| Thinking Budget | Maximum tokens for the model's internal reasoning. |
| Include Thoughts | Whether to include thinking traces in the response. |
#### OpenRouter
| Option | Description |
|-------------------|-------------------------------------------------------------------------------|
| Reasoning Enabled | Enable extended reasoning mode. |
| Reasoning Effort | Reasoning effort level (`none`, `minimal`, `low`, `medium`, `high`, `xhigh`). |
#### Vercel AI Gateway
| Option | Description |
|-------------------|---------------------------------|
| Reasoning Enabled | Enable extended reasoning mode. |
| Reasoning Effort | Reasoning effort level. |
> [!NOTE]
> Azure OpenAI uses the same options as OpenAI. AWS Bedrock uses the same
@@ -74,24 +74,31 @@ discoverable descriptions, restricting template visibility, configuring network
boundaries, scoping credentials, and designing template parameters for agent
use.
### MCP servers
Administrators can register external MCP (Model Context Protocol) servers that
provide additional tools for agent chat sessions. This includes configuring
authentication, controlling which tools are exposed via allow/deny lists, and
setting availability policies that determine whether a server is mandatory,
opt-out, or opt-in for each chat.
See [MCP Servers](./mcp-servers.md) for configuration details.
### Usage limits and analytics
Administrators can set spend limits to cap LLM usage per user within a rolling
time period, with per-user and per-group overrides. The cost tracking dashboard
provides visibility into per-user spending, token consumption, and per-model
breakdowns.
See [Usage & Analytics](./usage-insights.md) for details.
## Where we are headed
The controls above cover providers, models, system prompts, templates, MCP
servers, and usage limits. We are continuing to invest in platform controls
based on what we hear from customers deploying agents in regulated and
enterprise environments.
### Infrastructure-level enforcement
@@ -107,13 +114,6 @@ Examples of what this looks like:
providers. You can create templates that only permit access to your git
provider and nothing else.
## Why we take this approach
The common pattern in the industry today is that each developer installs and
@@ -0,0 +1,125 @@
# MCP Servers
Administrators can register external MCP servers that provide additional tools
for agent chat sessions. Configured servers are injected into or offered to
users during chat depending on the availability policy.
This is an admin-only feature accessible at **Agents** > **Settings** >
**MCP Servers**.
## Add an MCP server
1. Navigate to **Agents** > **Settings** > **MCP Servers**.
1. Click **Add**.
1. Fill in the configuration fields described below.
1. Click **Save**.
### Identity
| Field | Required | Description |
|----------------|----------|---------------------------------------------------------------|
| `display_name` | Yes | Human-readable name shown to users in chat. |
| `slug` | Yes | URL-safe unique identifier, auto-generated from display name. |
| `description` | No | Brief summary of what the server provides. |
| `icon_url` | No | Emoji or image URL displayed alongside the server name. |
### Connection
| Field | Required | Description |
|-------------|----------|-------------------------------------------------|
| `url` | Yes | The MCP server endpoint URL. |
| `transport` | Yes | Transport protocol. `streamable_http` or `sse`. |
### Availability
| Field | Required | Description |
|----------------|----------|-------------------------------------------------------------------------------------------------------------------------------|
| `enabled` | No | Master toggle. Disabled servers are hidden from non-admin users. |
| `availability` | Yes | Controls how the server appears in chat sessions. See [Availability policies](#availability-policies). |
| `model_intent` | No | When enabled, requires the model to describe each tool call's purpose in natural language, shown as a status label in the UI. |
#### Availability policies
| Policy | Behavior |
|---------------|--------------------------------------------------------|
| `force_on` | Always injected into every chat. Users cannot opt out. |
| `default_on` | Pre-selected in new chats. Users can opt out. |
| `default_off` | Available in the server list but users must opt in. |
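The three policies reduce to two questions: is the server attached to a new chat by default, and may the user opt out? As an illustrative sketch (function names are ours):

```go
package main

import "fmt"

// selectedByDefault reports whether a server is pre-attached to a new chat
// under the documented availability policies.
func selectedByDefault(policy string) bool {
	return policy == "force_on" || policy == "default_on"
}

// canOptOut reports whether the user may remove the server from a chat.
// Only force_on forbids opting out.
func canOptOut(policy string) bool {
	return policy != "force_on"
}

func main() {
	for _, p := range []string{"force_on", "default_on", "default_off"} {
		fmt.Printf("%s: selected=%t optOut=%t\n", p, selectedByDefault(p), canOptOut(p))
	}
}
```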
## Authentication
Each MCP server uses one of four authentication modes. When you change the
auth type, fields from the previous type are automatically cleared.
Secrets are never returned in API responses — boolean flags indicate whether
a value is set.
### None
No credentials are sent. Use this for servers that do not require
authentication.
### OAuth2
Per-user authorization. The administrator configures the OAuth2 provider, and
each user independently completes the authorization flow.
**Manual configuration** — provide all three fields together:
| Field | Description |
|--------------------|-----------------------------|
| `oauth2_client_id` | OAuth2 client ID. |
| `oauth2_auth_url` | Authorization endpoint URL. |
| `oauth2_token_url` | Token endpoint URL. |
Optional fields:
| Field | Description |
|------------------------|---------------------------------|
| `oauth2_client_secret` | OAuth2 client secret. |
| `oauth2_scopes` | Space-separated list of scopes. |
**Auto-discovery** — leave `oauth2_client_id`, `oauth2_auth_url`, and
`oauth2_token_url` empty. The server attempts discovery in this order:
1. RFC 9728 — Protected Resource Metadata
1. RFC 8414 — Authorization Server Metadata
1. RFC 7591 — Dynamic Client Registration
Users connect through a popup that redirects through the OAuth2 provider.
Tokens are stored per-user and refreshed automatically. Users can disconnect
via the UI or API to remove stored tokens.
### API key
A static key sent as a header on every request.
| Field | Required | Description |
|------------------|----------|--------------------------------------|
| `api_key_header` | Yes | Header name (e.g., `Authorization`). |
| `api_key_value` | Yes | Secret value sent in the header. |
### Custom headers
Arbitrary key-value header pairs sent on every request. At least one header
is required when this mode is selected.
## Tool governance
Control which tools from a server are available in chat:
| Field | Description |
|-------------------|---------------------------------------------------------------------------------------|
| `tool_allow_list` | If non-empty, only the listed tool names are exposed. An empty list allows all tools. |
| `tool_deny_list` | Listed tool names are always blocked, even if they appear in the allow list. |
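The precedence between the two lists can be sketched as follows: the deny list always wins, and an empty allow list means "allow everything". This is an illustrative sketch, not Coder's implementation:

```go
package main

import "fmt"

// toolAllowed applies the documented precedence: a denied tool is always
// blocked, even if it also appears in the allow list; an empty allow list
// exposes all tools that are not denied.
func toolAllowed(name string, allow, deny []string) bool {
	for _, d := range deny {
		if d == name {
			return false
		}
	}
	if len(allow) == 0 {
		return true
	}
	for _, a := range allow {
		if a == name {
			return true
		}
	}
	return false
}

func main() {
	allow := []string{"search", "create_issue"}
	deny := []string{"create_issue"}
	for _, tool := range []string{"search", "create_issue", "delete_repo"} {
		fmt.Println(tool, toolAllowed(tool, allow, deny))
	}
	// search true, create_issue false, delete_repo false
}
```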
## Permissions
| Action | Required role |
|-------------------------------|---------------------------|
| Create, update, or delete | Admin (deployment config) |
| View enabled servers | Any authenticated user |
| OAuth2 connect and disconnect | Any authenticated user |
Non-admin users only see enabled servers. Sensitive fields such as API keys
and client secrets are redacted in API responses.
@@ -0,0 +1,85 @@
# Usage and Analytics
Coder provides two admin-only views for monitoring and controlling agent
spend: usage limits and cost tracking.
## Usage limits
Navigate to **Agents** > **Settings** > **Limits**.
Usage limits cap how much each user can spend on LLM usage within a rolling
time period. When enabled, the system checks the user's current spend before
processing each chat message.
### Configuration
- **Enable/disable toggle** — master on/off for the entire limit system.
- **Period** — `day`, `week`, or `month`. Periods are UTC-aligned: midnight
UTC for daily, Monday start for weekly, first of the month for monthly.
- **Default limit** — deployment-wide default in dollars. Applies to all
users who do not have a more specific override. Leave unset for no limit.
- **Per-user overrides** — set a custom dollar limit for an individual user.
Takes highest priority.
- **Per-group overrides** — set a limit for a group. When a user belongs to
multiple groups, the lowest group limit applies.
### Priority hierarchy
The system resolves a user's effective limit in this order:
1. Individual user override (highest priority)
1. Minimum group limit across all of the user's groups
1. Global default limit
1. No limit (if limits are disabled or no value is configured)
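The resolution order above can be sketched as a single function. Types and naming here are illustrative, under the assumption that `nil` means "no value configured":

```go
package main

import "fmt"

// effectiveLimit resolves a user's dollar limit per the documented priority:
// user override first, then the minimum across the user's group limits,
// then the global default. A nil result means no limit applies.
func effectiveLimit(userOverride *float64, groupLimits []float64, defaultLimit *float64) *float64 {
	if userOverride != nil {
		return userOverride
	}
	if len(groupLimits) > 0 {
		lowest := groupLimits[0]
		for _, l := range groupLimits[1:] {
			if l < lowest {
				lowest = l
			}
		}
		return &lowest
	}
	return defaultLimit
}

func main() {
	def := 100.0
	got := effectiveLimit(nil, []float64{50, 25}, &def)
	fmt.Println(*got) // 25: the lowest group limit wins over the global default
}
```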
### Enforcement
- Checked before each chat message is processed.
- When current spend meets or exceeds the limit, the chat returns a
**409 Conflict** response and the message is blocked.
- Fail-open: if the limit query itself fails, the message is allowed
through.
- Brief overage is possible when concurrent messages are in flight, because
cost is determined only after the LLM returns.
### User-facing status
Users can view their own spend status, including whether a limit is active,
their effective limit, current spend, and when the current period resets.
> [!NOTE]
> The admin configuration page shows the count of models without pricing
> data. Models missing pricing cannot be tracked accurately against limits.
## Cost tracking
Navigate to **Agents** > **Settings** > **Usage**.
This view shows deployment-wide LLM chat costs with per-user drill-down.
### Top-level view
A per-user rollup table with the following columns:
| Column | Description |
|--------------------|-------------------------------------|
| Total cost | Aggregate dollar spend for the user |
| Messages | Number of chat messages sent |
| Chats | Number of distinct chat sessions |
| Input tokens | Total input tokens consumed |
| Output tokens | Total output tokens consumed |
| Cache read tokens | Tokens served from cache |
| Cache write tokens | Tokens written to cache |
The table supports date range filtering (default: last 30 days), search by
name or username, and pagination.
### Per-user detail view
Select a user to see:
- **Summary cards** — total cost, token breakdowns, and message counts.
- **Usage limit progress** — if a limit is active, a color-coded progress
bar shows current spend relative to the limit.
- **Per-model breakdown** — table of costs and token usage by model.
- **Per-chat breakdown** — table of costs and token usage by chat session.
@@ -324,8 +324,7 @@
"title": "Workspace Sharing",
"description": "Sharing workspaces",
"path": "./user-guides/shared-workspaces.md",
"icon_path": "./images/icons/generic.svg",
"state": ["beta"]
"icon_path": "./images/icons/generic.svg"
},
{
"title": "Workspace Scheduling",
@@ -1238,9 +1237,27 @@
"description": "Best practices for creating templates that are discoverable and useful to Coder Agents",
"path": "./ai-coder/agents/platform-controls/template-optimization.md",
"state": ["early access"]
},
{
"title": "MCP Servers",
"description": "Configure external MCP servers that provide additional tools for agent chat sessions",
"path": "./ai-coder/agents/platform-controls/mcp-servers.md",
"state": ["early access"]
},
{
"title": "Usage \u0026 Analytics",
"description": "Spend limits and cost tracking for Coder Agents",
"path": "./ai-coder/agents/platform-controls/usage-insights.md",
"state": ["early access"]
}
]
},
{
"title": "Extending Agents",
"description": "Add custom skills and MCP tools to agent workspaces",
"path": "./ai-coder/agents/extending-agents.md",
"state": ["early access"]
},
{
"title": "Chats API",
"description": "Programmatic access to Coder Agents via the experimental Chats API",
@@ -193,10 +193,10 @@ Status Code **200**
#### Enumerated Values
| Property | Value(s) |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
| Property | Value(s) |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `chat_automation`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -326,10 +326,10 @@ Status Code **200**
#### Enumerated Values
| Property | Value(s) |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
| Property | Value(s) |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `chat_automation`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -459,10 +459,10 @@ Status Code **200**
#### Enumerated Values
| Property | Value(s) |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
| Property | Value(s) |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `chat_automation`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -554,10 +554,10 @@ Status Code **200**
#### Enumerated Values
| Property | Value(s) |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
| Property | Value(s) |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `chat_automation`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -960,9 +960,9 @@ Status Code **200**
#### Enumerated Values
| Property | Value(s) |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
| Property | Value(s) |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `action` | `application_connect`, `assign`, `create`, `create_agent`, `delete`, `delete_agent`, `read`, `read_personal`, `share`, `ssh`, `start`, `stop`, `unassign`, `update`, `update_agent`, `update_personal`, `use`, `view_insights` |
| `resource_type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `chat_automation`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -1286,9 +1286,9 @@
#### Enumerated Values
| Value(s) |
|----------|
| `aibridge_interception:*`, `aibridge_interception:create`, `aibridge_interception:read`, `aibridge_interception:update`, `all`, `api_key:*`, `api_key:create`, `api_key:delete`, `api_key:read`, `api_key:update`, `application_connect`, `assign_org_role:*`, `assign_org_role:assign`, `assign_org_role:create`, `assign_org_role:delete`, `assign_org_role:read`, `assign_org_role:unassign`, `assign_org_role:update`, `assign_role:*`, `assign_role:assign`, `assign_role:read`, `assign_role:unassign`, `audit_log:*`, `audit_log:create`, `audit_log:read`, `boundary_usage:*`, `boundary_usage:delete`, `boundary_usage:read`, `boundary_usage:update`, `chat:*`, `chat:create`, `chat:delete`, `chat:read`, `chat:update`, `coder:all`, `coder:apikeys.manage_self`, `coder:application_connect`, `coder:templates.author`, `coder:templates.build`, `coder:workspaces.access`, `coder:workspaces.create`, `coder:workspaces.delete`, `coder:workspaces.operate`, `connection_log:*`, `connection_log:read`, `connection_log:update`, `crypto_key:*`, `crypto_key:create`, `crypto_key:delete`, `crypto_key:read`, `crypto_key:update`, `debug_info:*`, `debug_info:read`, `deployment_config:*`, `deployment_config:read`, `deployment_config:update`, `deployment_stats:*`, `deployment_stats:read`, `file:*`, `file:create`, `file:read`, `group:*`, `group:create`, `group:delete`, `group:read`, `group:update`, `group_member:*`, `group_member:read`, `idpsync_settings:*`, `idpsync_settings:read`, `idpsync_settings:update`, `inbox_notification:*`, `inbox_notification:create`, `inbox_notification:read`, `inbox_notification:update`, `license:*`, `license:create`, `license:delete`, `license:read`, `notification_message:*`, `notification_message:create`, `notification_message:delete`, `notification_message:read`, `notification_message:update`, `notification_preference:*`, `notification_preference:read`, `notification_preference:update`, `notification_template:*`, `notification_template:read`, `notification_template:update`, 
`oauth2_app:*`, `oauth2_app:create`, `oauth2_app:delete`, `oauth2_app:read`, `oauth2_app:update`, `oauth2_app_code_token:*`, `oauth2_app_code_token:create`, `oauth2_app_code_token:delete`, `oauth2_app_code_token:read`, `oauth2_app_secret:*`, `oauth2_app_secret:create`, `oauth2_app_secret:delete`, `oauth2_app_secret:read`, `oauth2_app_secret:update`, `organization:*`, `organization:create`, `organization:delete`, `organization:read`, `organization:update`, `organization_member:*`, `organization_member:create`, `organization_member:delete`, `organization_member:read`, `organization_member:update`, `prebuilt_workspace:*`, `prebuilt_workspace:delete`, `prebuilt_workspace:update`, `provisioner_daemon:*`, `provisioner_daemon:create`, `provisioner_daemon:delete`, `provisioner_daemon:read`, `provisioner_daemon:update`, `provisioner_jobs:*`, `provisioner_jobs:create`, `provisioner_jobs:read`, `provisioner_jobs:update`, `replicas:*`, `replicas:read`, `system:*`, `system:create`, `system:delete`, `system:read`, `system:update`, `tailnet_coordinator:*`, `tailnet_coordinator:create`, `tailnet_coordinator:delete`, `tailnet_coordinator:read`, `tailnet_coordinator:update`, `task:*`, `task:create`, `task:delete`, `task:read`, `task:update`, `template:*`, `template:create`, `template:delete`, `template:read`, `template:update`, `template:use`, `template:view_insights`, `usage_event:*`, `usage_event:create`, `usage_event:read`, `usage_event:update`, `user:*`, `user:create`, `user:delete`, `user:read`, `user:read_personal`, `user:update`, `user:update_personal`, `user_secret:*`, `user_secret:create`, `user_secret:delete`, `user_secret:read`, `user_secret:update`, `webpush_subscription:*`, `webpush_subscription:create`, `webpush_subscription:delete`, `webpush_subscription:read`, `workspace:*`, `workspace:application_connect`, `workspace:create`, `workspace:create_agent`, `workspace:delete`, `workspace:delete_agent`, `workspace:read`, `workspace:share`, `workspace:ssh`, 
`workspace:start`, `workspace:stop`, `workspace:update`, `workspace:update_agent`, `workspace_agent_devcontainers:*`, `workspace_agent_devcontainers:create`, `workspace_agent_resource_monitor:*`, `workspace_agent_resource_monitor:create`, `workspace_agent_resource_monitor:read`, `workspace_agent_resource_monitor:update`, `workspace_dormant:*`, `workspace_dormant:application_connect`, `workspace_dormant:create`, `workspace_dormant:create_agent`, `workspace_dormant:delete`, `workspace_dormant:delete_agent`, `workspace_dormant:read`, `workspace_dormant:share`, `workspace_dormant:ssh`, `workspace_dormant:start`, `workspace_dormant:stop`, `workspace_dormant:update`, `workspace_dormant:update_agent`, `workspace_proxy:*`, `workspace_proxy:create`, `workspace_proxy:delete`, `workspace_proxy:read`, `workspace_proxy:update` |
| Value(s) |
|----------|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `aibridge_interception:*`, `aibridge_interception:create`, `aibridge_interception:read`, `aibridge_interception:update`, `all`, `api_key:*`, `api_key:create`, `api_key:delete`, `api_key:read`, `api_key:update`, `application_connect`, `assign_org_role:*`, `assign_org_role:assign`, `assign_org_role:create`, `assign_org_role:delete`, `assign_org_role:read`, `assign_org_role:unassign`, `assign_org_role:update`, `assign_role:*`, `assign_role:assign`, `assign_role:read`, `assign_role:unassign`, `audit_log:*`, `audit_log:create`, `audit_log:read`, `boundary_usage:*`, `boundary_usage:delete`, `boundary_usage:read`, `boundary_usage:update`, `chat:*`, `chat:create`, `chat:delete`, `chat:read`, `chat:update`, `chat_automation:*`, `chat_automation:create`, `chat_automation:delete`, `chat_automation:read`, `chat_automation:update`, `coder:all`, `coder:apikeys.manage_self`, `coder:application_connect`, `coder:templates.author`, `coder:templates.build`, `coder:workspaces.access`, `coder:workspaces.create`, `coder:workspaces.delete`, `coder:workspaces.operate`, `connection_log:*`, `connection_log:read`, `connection_log:update`, `crypto_key:*`, `crypto_key:create`, `crypto_key:delete`, `crypto_key:read`, `crypto_key:update`, `debug_info:*`, `debug_info:read`, `deployment_config:*`, `deployment_config:read`, `deployment_config:update`, `deployment_stats:*`, `deployment_stats:read`, `file:*`, `file:create`, `file:read`, `group:*`, `group:create`, `group:delete`, `group:read`, `group:update`, `group_member:*`, `group_member:read`, `idpsync_settings:*`, `idpsync_settings:read`, `idpsync_settings:update`, `inbox_notification:*`, `inbox_notification:create`, `inbox_notification:read`, `inbox_notification:update`, `license:*`, `license:create`, `license:delete`, `license:read`, `notification_message:*`, `notification_message:create`, `notification_message:delete`, `notification_message:read`, `notification_message:update`, `notification_preference:*`, `notification_preference:read`, 
`notification_preference:update`, `notification_template:*`, `notification_template:read`, `notification_template:update`, `oauth2_app:*`, `oauth2_app:create`, `oauth2_app:delete`, `oauth2_app:read`, `oauth2_app:update`, `oauth2_app_code_token:*`, `oauth2_app_code_token:create`, `oauth2_app_code_token:delete`, `oauth2_app_code_token:read`, `oauth2_app_secret:*`, `oauth2_app_secret:create`, `oauth2_app_secret:delete`, `oauth2_app_secret:read`, `oauth2_app_secret:update`, `organization:*`, `organization:create`, `organization:delete`, `organization:read`, `organization:update`, `organization_member:*`, `organization_member:create`, `organization_member:delete`, `organization_member:read`, `organization_member:update`, `prebuilt_workspace:*`, `prebuilt_workspace:delete`, `prebuilt_workspace:update`, `provisioner_daemon:*`, `provisioner_daemon:create`, `provisioner_daemon:delete`, `provisioner_daemon:read`, `provisioner_daemon:update`, `provisioner_jobs:*`, `provisioner_jobs:create`, `provisioner_jobs:read`, `provisioner_jobs:update`, `replicas:*`, `replicas:read`, `system:*`, `system:create`, `system:delete`, `system:read`, `system:update`, `tailnet_coordinator:*`, `tailnet_coordinator:create`, `tailnet_coordinator:delete`, `tailnet_coordinator:read`, `tailnet_coordinator:update`, `task:*`, `task:create`, `task:delete`, `task:read`, `task:update`, `template:*`, `template:create`, `template:delete`, `template:read`, `template:update`, `template:use`, `template:view_insights`, `usage_event:*`, `usage_event:create`, `usage_event:read`, `usage_event:update`, `user:*`, `user:create`, `user:delete`, `user:read`, `user:read_personal`, `user:update`, `user:update_personal`, `user_secret:*`, `user_secret:create`, `user_secret:delete`, `user_secret:read`, `user_secret:update`, `webpush_subscription:*`, `webpush_subscription:create`, `webpush_subscription:delete`, `webpush_subscription:read`, `workspace:*`, `workspace:application_connect`, `workspace:create`, 
`workspace:create_agent`, `workspace:delete`, `workspace:delete_agent`, `workspace:read`, `workspace:share`, `workspace:ssh`, `workspace:start`, `workspace:stop`, `workspace:update`, `workspace:update_agent`, `workspace_agent_devcontainers:*`, `workspace_agent_devcontainers:create`, `workspace_agent_resource_monitor:*`, `workspace_agent_resource_monitor:create`, `workspace_agent_resource_monitor:read`, `workspace_agent_resource_monitor:update`, `workspace_dormant:*`, `workspace_dormant:application_connect`, `workspace_dormant:create`, `workspace_dormant:create_agent`, `workspace_dormant:delete`, `workspace_dormant:delete_agent`, `workspace_dormant:read`, `workspace_dormant:share`, `workspace_dormant:ssh`, `workspace_dormant:start`, `workspace_dormant:stop`, `workspace_dormant:update`, `workspace_dormant:update_agent`, `workspace_proxy:*`, `workspace_proxy:create`, `workspace_proxy:delete`, `workspace_proxy:read`, `workspace_proxy:update` |
## codersdk.AddLicenseRequest
@@ -8080,9 +8080,9 @@ Only certain features set these fields: - FeatureManagedAgentLimit
#### Enumerated Values
| Value(s) |
|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
| Value(s) |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `chat_automation`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
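The permission strings enumerated above follow a `resource:action` convention, with `*` acting as a wildcard and a few bare values (`all`, `application_connect`) carrying no action part. As an illustrative sketch only (not Coder's actual RBAC parser), such a string can be decomposed like this:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPermission decomposes a permission string such as
// "chat_automation:create" into its resource and action parts.
// Bare values like "all" have no action component. This helper is
// hypothetical, for illustration of the naming convention only.
func splitPermission(perm string) (resource, action string) {
	resource, action, found := strings.Cut(perm, ":")
	if !found {
		return perm, ""
	}
	return resource, action
}

func main() {
	r, a := splitPermission("chat_automation:create")
	fmt.Printf("resource=%s action=%s\n", r, a)
}
```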
## codersdk.RateLimitConfig
+5 -5
@@ -849,11 +849,11 @@ Status Code **200**
#### Enumerated Values
| Property | Value(s) |
|--------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
| `login_type` | `github`, `oidc`, `password`, `token` |
| `scope` | `all`, `application_connect` |
| Property | Value(s) |
|--------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `type` | `*`, `aibridge_interception`, `api_key`, `assign_org_role`, `assign_role`, `audit_log`, `boundary_usage`, `chat`, `chat_automation`, `connection_log`, `crypto_key`, `debug_info`, `deployment_config`, `deployment_stats`, `file`, `group`, `group_member`, `idpsync_settings`, `inbox_notification`, `license`, `notification_message`, `notification_preference`, `notification_template`, `oauth2_app`, `oauth2_app_code_token`, `oauth2_app_secret`, `organization`, `organization_member`, `prebuilt_workspace`, `provisioner_daemon`, `provisioner_jobs`, `replicas`, `system`, `tailnet_coordinator`, `task`, `template`, `usage_event`, `user`, `user_secret`, `webpush_subscription`, `workspace`, `workspace_agent_devcontainers`, `workspace_agent_resource_monitor`, `workspace_dormant`, `workspace_proxy` |
| `login_type` | `github`, `oidc`, `password`, `token` |
| `scope` | `all`, `application_connect` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
+2 -2
@@ -11,8 +11,8 @@ RUN cargo install jj-cli typos-cli watchexec-cli
FROM ubuntu:jammy@sha256:ce4a593b4e323dcc3dd728e397e0a866a1bf516a1b7c31d6aa06991baec4f2e0 AS go
# Install Go manually, so that we can control the version
ARG GO_VERSION=1.25.7
ARG GO_CHECKSUM="12e6d6a191091ae27dc31f6efc630e3a3b8ba409baf3573d955b196fdf086005"
ARG GO_VERSION=1.25.8
ARG GO_CHECKSUM="ceb5e041bbc3893846bd1614d76cb4681c91dadee579426cf21a63f2d7e03be6"
# Boring Go is needed to build FIPS-compliant binaries.
RUN apt-get update && \
@@ -776,6 +776,12 @@ func defaultAIBridgeProvider(host string) string {
return aibridge.ProviderOpenAI
case HostCopilot:
return aibridge.ProviderCopilot
case agplaibridge.HostCopilotBusiness:
return agplaibridge.ProviderCopilotBusiness
case agplaibridge.HostCopilotEnterprise:
return agplaibridge.ProviderCopilotEnterprise
case agplaibridge.HostChatGPT:
return agplaibridge.ProviderChatGPT
default:
return ""
}
+20
@@ -10,6 +10,7 @@ import (
"github.com/coder/aibridge"
"github.com/coder/aibridge/config"
agplaibridge "github.com/coder/coder/v2/coderd/aibridge"
"github.com/coder/coder/v2/coderd/tracing"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/enterprise/aibridged"
@@ -37,20 +38,39 @@ func newAIBridgeDaemon(coderAPI *coderd.API) (*aibridged.Server, error) {
// Setup supported providers with circuit breaker config.
providers := []aibridge.Provider{
aibridge.NewOpenAIProvider(aibridge.OpenAIConfig{
Name: aibridge.ProviderOpenAI,
BaseURL: cfg.OpenAI.BaseURL.String(),
Key: cfg.OpenAI.Key.String(),
CircuitBreaker: cbConfig,
SendActorHeaders: cfg.SendActorHeaders.Value(),
}),
aibridge.NewAnthropicProvider(aibridge.AnthropicConfig{
Name: aibridge.ProviderAnthropic,
BaseURL: cfg.Anthropic.BaseURL.String(),
Key: cfg.Anthropic.Key.String(),
CircuitBreaker: cbConfig,
SendActorHeaders: cfg.SendActorHeaders.Value(),
}, getBedrockConfig(cfg.Bedrock)),
aibridge.NewCopilotProvider(aibridge.CopilotConfig{
Name: aibridge.ProviderCopilot,
CircuitBreaker: cbConfig,
}),
aibridge.NewCopilotProvider(aibridge.CopilotConfig{
Name: agplaibridge.ProviderCopilotBusiness,
BaseURL: "https://" + agplaibridge.HostCopilotBusiness,
CircuitBreaker: cbConfig,
}),
aibridge.NewCopilotProvider(aibridge.CopilotConfig{
Name: agplaibridge.ProviderCopilotEnterprise,
BaseURL: "https://" + agplaibridge.HostCopilotEnterprise,
CircuitBreaker: cbConfig,
}),
aibridge.NewOpenAIProvider(aibridge.OpenAIConfig{
Name: agplaibridge.ProviderChatGPT,
BaseURL: agplaibridge.BaseURLChatGPT,
CircuitBreaker: cbConfig,
SendActorHeaders: cfg.SendActorHeaders.Value(),
}),
}
reg := prometheus.WrapRegistererWithPrefix("coder_aibridged_", coderAPI.PrometheusRegistry)
+297 -1
@@ -440,7 +440,7 @@ func TestAIBridgeListInterceptions(t *testing.T) {
},
{
name: "Client/Unknown",
filter: codersdk.AIBridgeListInterceptionsFilter{Client: "Unknown"},
filter: codersdk.AIBridgeListInterceptionsFilter{Client: string(aiblib.ClientUnknown)},
want: []codersdk.AIBridgeInterception{i1SDK},
},
{
@@ -1213,6 +1213,302 @@ func TestAIBridgeListSessions(t *testing.T) {
require.Contains(t, sdkErr.Message, "Invalid pagination limit value.")
require.Empty(t, res.Sessions)
})
t.Run("StartedBeforeFilter", func(t *testing.T) {
t.Parallel()
client, db, firstUser := coderdenttest.NewWithDatabase(t, aibridgeOpts(t))
ctx := testutil.Context(t, testutil.WaitLong)
now := dbtime.Now()
// Session started recently.
recentEndedAt := now.Add(time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now,
}, &recentEndedAt)
// Session started 2 hours ago.
oldEndedAt := now.Add(-2*time.Hour + time.Minute)
old := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now.Add(-2 * time.Hour),
}, &oldEndedAt)
// Only the old session should be returned when started_before
// is set to 1 hour ago.
//nolint:gocritic // Owner role is irrelevant; testing filter.
res, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{
StartedBefore: now.Add(-time.Hour),
})
require.NoError(t, err)
require.EqualValues(t, 1, res.Count)
require.Len(t, res.Sessions, 1)
require.Equal(t, old.ID.String(), res.Sessions[0].ID)
})
t.Run("NullClientCoalescesToUnknown", func(t *testing.T) {
t.Parallel()
client, db, firstUser := coderdenttest.NewWithDatabase(t, aibridgeOpts(t))
ctx := testutil.Context(t, testutil.WaitLong)
now := dbtime.Now()
// Session with explicit client.
withClientEndedAt := now.Add(time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now,
Client: sql.NullString{String: "claude-code", Valid: true},
}, &withClientEndedAt)
// Session with NULL client (should COALESCE to ClientUnknown).
nullClientEndedAt := now.Add(-time.Hour + time.Minute)
nullClient := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now.Add(-time.Hour),
// Client field deliberately omitted (NULL).
}, &nullClientEndedAt)
// Filtering by ClientUnknown should return only the NULL-client
// session.
//nolint:gocritic // Owner role is irrelevant; testing COALESCE.
res, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{
Client: string(aiblib.ClientUnknown),
})
require.NoError(t, err)
require.EqualValues(t, 1, res.Count)
require.Len(t, res.Sessions, 1)
require.Equal(t, nullClient.ID.String(), res.Sessions[0].ID)
})
t.Run("MetadataFromFirstInterception", func(t *testing.T) {
t.Parallel()
client, db, firstUser := coderdenttest.NewWithDatabase(t, aibridgeOpts(t))
ctx := testutil.Context(t, testutil.WaitLong)
now := dbtime.Now()
// First interception (chronologically) carries the expected
// metadata for the session.
i1EndedAt := now.Add(time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now,
Metadata: json.RawMessage(`{"editor":"vscode"}`),
Client: sql.NullString{String: "claude-code", Valid: true},
ClientSessionID: sql.NullString{String: "meta-session", Valid: true},
}, &i1EndedAt)
// Second interception has different metadata.
i2EndedAt := now.Add(2 * time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now.Add(time.Minute),
Metadata: json.RawMessage(`{"editor":"jetbrains"}`),
Client: sql.NullString{String: "claude-code", Valid: true},
ClientSessionID: sql.NullString{String: "meta-session", Valid: true},
}, &i2EndedAt)
//nolint:gocritic // Owner role is irrelevant; testing metadata.
res, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{})
require.NoError(t, err)
require.Len(t, res.Sessions, 1)
// Metadata should come from the first interception.
require.Equal(t, "vscode", res.Sessions[0].Metadata["editor"])
})
t.Run("SessionTimestamps", func(t *testing.T) {
t.Parallel()
client, db, firstUser := coderdenttest.NewWithDatabase(t, aibridgeOpts(t))
ctx := testutil.Context(t, testutil.WaitLong)
now := dbtime.Now()
// Two interceptions in the same session with different
// started_at and ended_at values. The session should report
// MIN(started_at) and MAX(ended_at).
i1StartedAt := now
i1EndedAt := now.Add(time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: i1StartedAt,
ClientSessionID: sql.NullString{String: "ts-session", Valid: true},
}, &i1EndedAt)
i2StartedAt := now.Add(2 * time.Minute)
i2EndedAt := now.Add(5 * time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: i2StartedAt,
ClientSessionID: sql.NullString{String: "ts-session", Valid: true},
}, &i2EndedAt)
//nolint:gocritic // Owner role is irrelevant; testing timestamps.
res, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{})
require.NoError(t, err)
require.Len(t, res.Sessions, 1)
s := res.Sessions[0]
require.WithinDuration(t, i1StartedAt, s.StartedAt, time.Millisecond,
"session started_at should be MIN of interception started_at values")
require.NotNil(t, s.EndedAt)
require.WithinDuration(t, i2EndedAt, *s.EndedAt, time.Millisecond,
"session ended_at should be MAX of interception ended_at values")
})
t.Run("LastPromptAcrossInterceptions", func(t *testing.T) {
t.Parallel()
client, db, firstUser := coderdenttest.NewWithDatabase(t, aibridgeOpts(t))
ctx := testutil.Context(t, testutil.WaitLong)
now := dbtime.Now()
// Two interceptions in the same session.
i1EndedAt := now.Add(time.Minute)
i1 := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now,
ClientSessionID: sql.NullString{String: "prompt-session", Valid: true},
}, &i1EndedAt)
i2EndedAt := now.Add(3 * time.Minute)
i2 := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now.Add(2 * time.Minute),
ClientSessionID: sql.NullString{String: "prompt-session", Valid: true},
}, &i2EndedAt)
// Add prompts to both interceptions. The most recent prompt
// overall belongs to the second interception.
dbgen.AIBridgeUserPrompt(t, db, database.InsertAIBridgeUserPromptParams{
InterceptionID: i1.ID,
Prompt: "early prompt from i1",
CreatedAt: now,
})
dbgen.AIBridgeUserPrompt(t, db, database.InsertAIBridgeUserPromptParams{
InterceptionID: i2.ID,
Prompt: "latest prompt from i2",
CreatedAt: now.Add(2 * time.Minute),
})
//nolint:gocritic // Owner role is irrelevant; testing lateral join.
res, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{})
require.NoError(t, err)
require.Len(t, res.Sessions, 1)
require.NotNil(t, res.Sessions[0].LastPrompt)
require.Equal(t, "latest prompt from i2", *res.Sessions[0].LastPrompt,
"last_prompt should be the most recent prompt across all interceptions in the session")
})
t.Run("CombinedFilters", func(t *testing.T) {
t.Parallel()
client, db, firstUser := coderdenttest.NewWithDatabase(t, aibridgeOpts(t))
ctx := testutil.Context(t, testutil.WaitLong)
_, user2 := coderdtest.CreateAnotherUser(t, client, firstUser.OrganizationID)
now := dbtime.Now()
// Session A: user1, anthropic, claude-4, started now.
aEndedAt := now.Add(time.Minute)
a := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
Provider: "anthropic",
Model: "claude-4",
StartedAt: now,
}, &aEndedAt)
// Session B: user1, anthropic, gpt-4, started 2h ago.
bEndedAt := now.Add(-2*time.Hour + time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
Provider: "anthropic",
Model: "gpt-4",
StartedAt: now.Add(-2 * time.Hour),
}, &bEndedAt)
// Session C: user2, anthropic, claude-4, started 1h ago.
cEndedAt := now.Add(-time.Hour + time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: user2.ID,
Provider: "anthropic",
Model: "claude-4",
StartedAt: now.Add(-time.Hour),
}, &cEndedAt)
// Combining provider + model + started_after should return
// only session A (user1, anthropic, claude-4, recent).
//nolint:gocritic // Owner role is irrelevant; testing combined filters.
res, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{
Provider: "anthropic",
Model: "claude-4",
StartedAfter: now.Add(-30 * time.Minute),
})
require.NoError(t, err)
require.EqualValues(t, 1, res.Count)
require.Len(t, res.Sessions, 1)
require.Equal(t, a.ID.String(), res.Sessions[0].ID)
})
t.Run("CursorPaginationWithTiedStartedAt", func(t *testing.T) {
t.Parallel()
client, db, firstUser := coderdenttest.NewWithDatabase(t, aibridgeOpts(t))
ctx := testutil.Context(t, testutil.WaitLong)
now := dbtime.Now()
// Create 3 standalone sessions all starting at the same time.
// The tie-breaker is session_id DESC.
for range 3 {
endedAt := now.Add(time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now,
}, &endedAt)
}
// Fetch all to learn the sort order (started_at DESC,
// session_id DESC).
//nolint:gocritic // Owner role is irrelevant; testing cursor.
all, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{})
require.NoError(t, err)
require.Len(t, all.Sessions, 3)
// Use the first result as cursor. The remaining 2 should be
// returned.
afterID := all.Sessions[0].ID
page, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{
Pagination: codersdk.Pagination{Limit: 10},
AfterSessionID: afterID,
})
require.NoError(t, err)
require.Len(t, page.Sessions, 2)
require.Equal(t, all.Sessions[1].ID, page.Sessions[0].ID)
require.Equal(t, all.Sessions[2].ID, page.Sessions[1].ID)
})
t.Run("DefaultLimit", func(t *testing.T) {
t.Parallel()
client, db, firstUser := coderdenttest.NewWithDatabase(t, aibridgeOpts(t))
ctx := testutil.Context(t, testutil.WaitLong)
now := dbtime.Now()
// Create 3 sessions. Without an explicit limit the default of
// 100 should apply and return all 3.
for i := range 3 {
endedAt := now.Add(-time.Duration(i)*time.Hour + time.Minute)
dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
InitiatorID: firstUser.UserID,
StartedAt: now.Add(-time.Duration(i) * time.Hour),
}, &endedAt)
}
// No Pagination.Limit set.
//nolint:gocritic // Owner role is irrelevant; testing default limit.
res, err := client.AIBridgeListSessions(ctx, codersdk.AIBridgeListSessionsFilter{})
require.NoError(t, err)
require.Len(t, res.Sessions, 3)
require.EqualValues(t, 3, res.Count)
})
}
func TestAIBridgeListClients(t *testing.T) {
-5
@@ -100,7 +100,6 @@ func TestChatStreamRelay(t *testing.T) {
ModelConfigID: &model.ID,
})
require.NoError(t, err)
require.Equal(t, codersdk.ChatStatusPending, chat.Status)
var runningChat database.Chat
require.Eventually(t, func() bool {
@@ -289,7 +288,6 @@ func TestChatStreamRelay(t *testing.T) {
ModelConfigID: &model.ID,
})
require.NoError(t, err)
require.Equal(t, codersdk.ChatStatusPending, chat.Status)
var runningChat database.Chat
require.Eventually(t, func() bool {
@@ -459,7 +457,6 @@ func TestChatStreamRelay(t *testing.T) {
ModelConfigID: &model.ID,
})
require.NoError(t, err)
require.Equal(t, codersdk.ChatStatusPending, chat.Status)
var runningChat database.Chat
require.Eventually(t, func() bool {
@@ -631,7 +628,6 @@ func TestChatStreamRelay(t *testing.T) {
ModelConfigID: &model.ID,
})
require.NoError(t, err)
require.Equal(t, codersdk.ChatStatusPending, chat.Status)
var runningChat database.Chat
require.Eventually(t, func() bool {
@@ -779,7 +775,6 @@ func TestChatStreamRelay(t *testing.T) {
ModelConfigID: &model.ID,
})
require.NoError(t, err)
require.Equal(t, codersdk.ChatStatusPending, chat.Status)
var runningChat database.Chat
require.Eventually(t, func() bool {
+9
@@ -452,7 +452,13 @@ func TestCustomOrganizationRole(t *testing.T) {
func TestListRoles(t *testing.T) {
t.Parallel()
dv := coderdtest.DeploymentValues(t)
dv.Experiments = []string{string(codersdk.ExperimentAgents)}
client, owner := coderdenttest.New(t, &coderdenttest.Options{
Options: &coderdtest.Options{
DeploymentValues: dv,
},
LicenseOptions: &coderdenttest.LicenseOptions{
Features: license.Features{
codersdk.FeatureExternalProvisionerDaemons: 1,
@@ -487,6 +493,7 @@ func TestListRoles(t *testing.T) {
{Name: codersdk.RoleAuditor}: false,
{Name: codersdk.RoleTemplateAdmin}: false,
{Name: codersdk.RoleUserAdmin}: false,
{Name: codersdk.RoleAgentsAccess}: false,
}),
},
{
@@ -520,6 +527,7 @@ func TestListRoles(t *testing.T) {
{Name: codersdk.RoleAuditor}: false,
{Name: codersdk.RoleTemplateAdmin}: false,
{Name: codersdk.RoleUserAdmin}: false,
{Name: codersdk.RoleAgentsAccess}: false,
}),
},
{
@@ -553,6 +561,7 @@ func TestListRoles(t *testing.T) {
{Name: codersdk.RoleAuditor}: true,
{Name: codersdk.RoleTemplateAdmin}: true,
{Name: codersdk.RoleUserAdmin}: true,
{Name: codersdk.RoleAgentsAccess}: true,
}),
},
{
+73 -2
@@ -5,10 +5,12 @@ import (
"database/sql"
"strings"
"github.com/google/uuid"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbtime"
)
// Rotate rotates the database encryption keys by re-encrypting all user tokens
@@ -109,6 +111,30 @@ func Rotate(ctx context.Context, log slog.Logger, sqlDB *sql.DB, ciphers []Ciphe
log.Debug(ctx, "encrypted chat provider key", slog.F("provider", provider.Provider), slog.F("current", idx+1), slog.F("cipher", ciphers[0].HexDigest()))
}
// Re-encrypt chat automation webhook secrets.
triggerIDs, err := fetchEncryptedTriggerIDs(ctx, sqlDB)
if err != nil {
return err
}
log.Info(ctx, "encrypting chat automation webhook secrets", slog.F("trigger_count", len(triggerIDs)))
for _, triggerID := range triggerIDs {
trigger, err := cryptDB.GetChatAutomationTriggerByID(ctx, triggerID)
if err != nil {
return xerrors.Errorf("get chat automation trigger %s: %w", triggerID, err)
}
if trigger.WebhookSecretKeyID.String == ciphers[0].HexDigest() {
continue // Already encrypted with the primary key.
}
if _, err := cryptDB.UpdateChatAutomationTriggerWebhookSecret(ctx, database.UpdateChatAutomationTriggerWebhookSecretParams{
WebhookSecret: trigger.WebhookSecret, // decrypted by cryptDB
WebhookSecretKeyID: sql.NullString{}, // dbcrypt will set the new primary key
UpdatedAt: dbtime.Now(),
ID: triggerID,
}); err != nil {
return xerrors.Errorf("re-encrypt chat automation trigger %s: %w", triggerID, err)
}
}
// Revoke old keys
for _, c := range ciphers[1:] {
if err := db.RevokeDBCryptKey(ctx, c.HexDigest()); err != nil {
@@ -221,6 +247,27 @@ func Decrypt(ctx context.Context, log slog.Logger, sqlDB *sql.DB, ciphers []Ciph
log.Debug(ctx, "decrypted chat provider key", slog.F("provider", provider.Provider), slog.F("current", idx+1), slog.F("cipher", ciphers[0].HexDigest()))
}
// Decrypt chat automation webhook secrets.
triggerIDs, err := fetchEncryptedTriggerIDs(ctx, sqlDB)
if err != nil {
return err
}
log.Info(ctx, "decrypting chat automation webhook secrets", slog.F("trigger_count", len(triggerIDs)))
for _, triggerID := range triggerIDs {
trigger, err := cryptDB.GetChatAutomationTriggerByID(ctx, triggerID)
if err != nil {
return xerrors.Errorf("get chat automation trigger %s: %w", triggerID, err)
}
if _, err := cryptDB.UpdateChatAutomationTriggerWebhookSecret(ctx, database.UpdateChatAutomationTriggerWebhookSecretParams{
WebhookSecret: trigger.WebhookSecret, // decrypted by cryptDB
WebhookSecretKeyID: sql.NullString{}, // store in plaintext
UpdatedAt: dbtime.Now(),
ID: triggerID,
}); err != nil {
return xerrors.Errorf("decrypt chat automation trigger %s: %w", triggerID, err)
}
}
// Revoke _all_ keys
for _, c := range ciphers {
if err := db.RevokeDBCryptKey(ctx, c.HexDigest()); err != nil {
@@ -245,9 +292,33 @@ UPDATE chat_providers
SET api_key = '',
api_key_key_id = NULL
WHERE api_key_key_id IS NOT NULL;
DELETE FROM chat_automation_triggers WHERE webhook_secret_key_id IS NOT NULL;
COMMIT;
`
// fetchEncryptedTriggerIDs returns the IDs of all chat automation
// triggers that have an encrypted webhook secret. It uses a raw
// SQL query because there is no "get all triggers" SQLC query.
func fetchEncryptedTriggerIDs(ctx context.Context, sqlDB *sql.DB) ([]uuid.UUID, error) {
rows, err := sqlDB.QueryContext(ctx, `SELECT id FROM chat_automation_triggers WHERE webhook_secret_key_id IS NOT NULL`)
if err != nil {
return nil, xerrors.Errorf("get encrypted chat automation triggers: %w", err)
}
defer rows.Close()
var ids []uuid.UUID
for rows.Next() {
var id uuid.UUID
if err := rows.Scan(&id); err != nil {
return nil, xerrors.Errorf("scan chat automation trigger id: %w", err)
}
ids = append(ids, id)
}
if err := rows.Err(); err != nil {
return nil, xerrors.Errorf("iterate chat automation trigger ids: %w", err)
}
return ids, nil
}
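The rotate loop above follows a simple pattern: rows already encrypted under the primary cipher are skipped, and every other row is decrypted via the crypt-wrapped store and re-written, which re-encrypts it under the primary key. A minimal toy sketch of that skip-or-rewrite logic (hypothetical `row` type and `rotate` helper, not the real dbcrypt API):

```go
package main

import "fmt"

// row stands in for a chat automation trigger as seen through the
// crypt layer: value is plaintext on read, keyID is the digest of
// the cipher that encrypted the stored ciphertext.
type row struct {
	value string
	keyID string
}

// rotate re-writes every row not already on the primary key and
// returns how many rows needed re-encryption.
func rotate(rows []row, primaryKeyID string) int {
	rewritten := 0
	for i := range rows {
		if rows[i].keyID == primaryKeyID {
			continue // already encrypted with the primary key
		}
		rows[i].keyID = primaryKeyID // re-write ⇒ re-encrypt
		rewritten++
	}
	return rewritten
}

func main() {
	rows := []row{{"a", "old"}, {"b", "new"}, {"c", "old"}}
	fmt.Println(rotate(rows, "new"))
}
```

The real code differs in that the re-write goes through `UpdateChatAutomationTriggerWebhookSecret` with a cleared key ID, letting the dbcrypt layer stamp the new primary key on insert.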
// Delete deletes all user tokens and revokes all ciphers.
// This is a destructive operation and should only be used
// as a last resort, for example, if the database encryption key has been
@@ -256,9 +327,9 @@ func Delete(ctx context.Context, log slog.Logger, sqlDB *sql.DB) error {
store := database.New(sqlDB)
_, err := sqlDB.ExecContext(ctx, sqlDeleteEncryptedUserTokens)
if err != nil {
return xerrors.Errorf("delete encrypted tokens and chat provider keys: %w", err)
return xerrors.Errorf("delete encrypted tokens, chat provider keys, and chat automation webhook secrets: %w", err)
}
log.Info(ctx, "deleted encrypted user tokens and chat provider API keys")
log.Info(ctx, "deleted encrypted user tokens, chat provider API keys, and chat automation webhook secrets")
log.Info(ctx, "revoking all active keys")
keys, err := store.GetDBCryptKeys(ctx)
+67
@@ -385,6 +385,73 @@ func (db *dbCrypt) GetCryptoKeysByFeature(ctx context.Context, feature database.
return keys, nil
}
func (db *dbCrypt) GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (database.ChatAutomationTrigger, error) {
trigger, err := db.Store.GetChatAutomationTriggerByID(ctx, id)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := db.decryptField(&trigger.WebhookSecret.String, trigger.WebhookSecretKeyID); err != nil {
return database.ChatAutomationTrigger{}, err
}
return trigger, nil
}
func (db *dbCrypt) GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]database.ChatAutomationTrigger, error) {
triggers, err := db.Store.GetChatAutomationTriggersByAutomationID(ctx, automationID)
if err != nil {
return nil, err
}
for i := range triggers {
if err := db.decryptField(&triggers[i].WebhookSecret.String, triggers[i].WebhookSecretKeyID); err != nil {
return nil, err
}
}
return triggers, nil
}
func (db *dbCrypt) InsertChatAutomationTrigger(ctx context.Context, params database.InsertChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
if !params.WebhookSecret.Valid || strings.TrimSpace(params.WebhookSecret.String) == "" {
params.WebhookSecretKeyID = sql.NullString{}
} else if err := db.encryptField(&params.WebhookSecret.String, &params.WebhookSecretKeyID); err != nil {
return database.ChatAutomationTrigger{}, err
}
trigger, err := db.Store.InsertChatAutomationTrigger(ctx, params)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := db.decryptField(&trigger.WebhookSecret.String, trigger.WebhookSecretKeyID); err != nil {
return database.ChatAutomationTrigger{}, err
}
return trigger, nil
}
func (db *dbCrypt) UpdateChatAutomationTrigger(ctx context.Context, params database.UpdateChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
trigger, err := db.Store.UpdateChatAutomationTrigger(ctx, params)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := db.decryptField(&trigger.WebhookSecret.String, trigger.WebhookSecretKeyID); err != nil {
return database.ChatAutomationTrigger{}, err
}
return trigger, nil
}
func (db *dbCrypt) UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, params database.UpdateChatAutomationTriggerWebhookSecretParams) (database.ChatAutomationTrigger, error) {
if !params.WebhookSecret.Valid || strings.TrimSpace(params.WebhookSecret.String) == "" {
params.WebhookSecretKeyID = sql.NullString{}
} else if err := db.encryptField(&params.WebhookSecret.String, &params.WebhookSecretKeyID); err != nil {
return database.ChatAutomationTrigger{}, err
}
trigger, err := db.Store.UpdateChatAutomationTriggerWebhookSecret(ctx, params)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := db.decryptField(&trigger.WebhookSecret.String, trigger.WebhookSecretKeyID); err != nil {
return database.ChatAutomationTrigger{}, err
}
return trigger, nil
}
func (db *dbCrypt) GetChatProviderByID(ctx context.Context, id uuid.UUID) (database.ChatProvider, error) {
provider, err := db.Store.GetChatProviderByID(ctx, id)
if err != nil {
+193
@@ -1177,3 +1177,196 @@ func TestMCPServerUserTokens(t *testing.T) {
requireEncryptedEquals(t, ciphers[0], rawTok.RefreshToken, refreshToken)
})
}
func TestChatAutomationTriggers(t *testing.T) {
t.Parallel()
ctx := context.Background()
const (
//nolint:gosec // test credential
webhookSecret = "whsec-test-secret-value"
)
// insertTrigger creates a user, organization, chat automation, and
// a webhook trigger with a secret through the encrypted store.
insertTrigger := func(t *testing.T, crypt *dbCrypt, ciphers []Cipher) database.ChatAutomationTrigger {
t.Helper()
user := dbgen.User(t, crypt, database.User{})
org := dbgen.Organization(t, crypt, database.Organization{})
now := dbtime.Now()
automation, err := crypt.InsertChatAutomation(ctx, database.InsertChatAutomationParams{
ID: uuid.New(),
OwnerID: user.ID,
OrganizationID: org.ID,
Name: "test-automation-" + uuid.New().String()[:8],
Description: "test automation",
Instructions: "do stuff",
MCPServerIDs: []uuid.UUID{},
AllowedTools: []string{},
Status: database.ChatAutomationStatusActive,
MaxChatCreatesPerHour: 10,
MaxMessagesPerHour: 60,
CreatedAt: now,
UpdatedAt: now,
})
require.NoError(t, err)
trigger, err := crypt.InsertChatAutomationTrigger(ctx, database.InsertChatAutomationTriggerParams{
ID: uuid.New(),
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
WebhookSecret: sql.NullString{String: webhookSecret, Valid: true},
CreatedAt: now,
UpdatedAt: now,
})
require.NoError(t, err)
require.Equal(t, webhookSecret, trigger.WebhookSecret.String)
require.Equal(t, ciphers[0].HexDigest(), trigger.WebhookSecretKeyID.String)
return trigger
}
// requireTriggerRawEncrypted reads the trigger from the raw
// (unwrapped) store and asserts the secret field is encrypted.
requireTriggerRawEncrypted := func(
t *testing.T,
rawDB database.Store,
triggerID uuid.UUID,
ciphers []Cipher,
wantSecret string,
) {
t.Helper()
raw, err := rawDB.GetChatAutomationTriggerByID(ctx, triggerID)
require.NoError(t, err)
requireEncryptedEquals(t, ciphers[0], raw.WebhookSecret.String, wantSecret)
}
t.Run("InsertChatAutomationTrigger", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
trigger := insertTrigger(t, crypt, ciphers)
requireTriggerRawEncrypted(t, db, trigger.ID, ciphers, webhookSecret)
})
t.Run("GetChatAutomationTriggerByID", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
trigger := insertTrigger(t, crypt, ciphers)
got, err := crypt.GetChatAutomationTriggerByID(ctx, trigger.ID)
require.NoError(t, err)
require.Equal(t, webhookSecret, got.WebhookSecret.String)
require.Equal(t, ciphers[0].HexDigest(), got.WebhookSecretKeyID.String)
requireTriggerRawEncrypted(t, db, trigger.ID, ciphers, webhookSecret)
})
t.Run("GetChatAutomationTriggersByAutomationID", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
trigger := insertTrigger(t, crypt, ciphers)
triggers, err := crypt.GetChatAutomationTriggersByAutomationID(ctx, trigger.AutomationID)
require.NoError(t, err)
require.Len(t, triggers, 1)
require.Equal(t, webhookSecret, triggers[0].WebhookSecret.String)
require.Equal(t, ciphers[0].HexDigest(), triggers[0].WebhookSecretKeyID.String)
requireTriggerRawEncrypted(t, db, trigger.ID, ciphers, webhookSecret)
})
t.Run("UpdateChatAutomationTrigger", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
trigger := insertTrigger(t, crypt, ciphers)
// UpdateChatAutomationTrigger does not change the webhook
// secret itself; it updates cron_schedule/filter/label_paths.
// The returned trigger should still have the decrypted secret.
updated, err := crypt.UpdateChatAutomationTrigger(ctx, database.UpdateChatAutomationTriggerParams{
ID: trigger.ID,
UpdatedAt: dbtime.Now(),
})
require.NoError(t, err)
require.Equal(t, webhookSecret, updated.WebhookSecret.String)
require.Equal(t, ciphers[0].HexDigest(), updated.WebhookSecretKeyID.String)
requireTriggerRawEncrypted(t, db, trigger.ID, ciphers, webhookSecret)
})
t.Run("UpdateChatAutomationTriggerWebhookSecret", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
trigger := insertTrigger(t, crypt, ciphers)
const newSecret = "whsec-rotated-secret" //nolint:gosec // test credential
updated, err := crypt.UpdateChatAutomationTriggerWebhookSecret(ctx, database.UpdateChatAutomationTriggerWebhookSecretParams{
ID: trigger.ID,
WebhookSecret: sql.NullString{String: newSecret, Valid: true},
UpdatedAt: dbtime.Now(),
})
require.NoError(t, err)
require.Equal(t, newSecret, updated.WebhookSecret.String)
require.Equal(t, ciphers[0].HexDigest(), updated.WebhookSecretKeyID.String)
requireTriggerRawEncrypted(t, db, trigger.ID, ciphers, newSecret)
})
t.Run("CronTriggerThroughDecryptLoop", func(t *testing.T) {
t.Parallel()
_, crypt, ciphers := setup(t)
trigger := insertTrigger(t, crypt, ciphers)
// Insert a cron trigger under the same automation. Cron
// triggers have no webhook secret, so the secret fields
// should remain NULL through the encrypt/decrypt loop.
now := dbtime.Now()
cronTrigger, err := crypt.InsertChatAutomationTrigger(ctx, database.InsertChatAutomationTriggerParams{
ID: uuid.New(),
AutomationID: trigger.AutomationID,
Type: database.ChatAutomationTriggerTypeCron,
CronSchedule: sql.NullString{String: "0 * * * *", Valid: true},
WebhookSecret: sql.NullString{Valid: false},
CreatedAt: now,
UpdatedAt: now,
})
require.NoError(t, err)
require.False(t, cronTrigger.WebhookSecret.Valid)
require.False(t, cronTrigger.WebhookSecretKeyID.Valid)
// Fetch both triggers by automation ID and verify each
// comes back with the expected secret state.
triggers, err := crypt.GetChatAutomationTriggersByAutomationID(ctx, trigger.AutomationID)
require.NoError(t, err)
require.Len(t, triggers, 2)
for _, tr := range triggers {
switch tr.Type {
case database.ChatAutomationTriggerTypeWebhook:
require.Equal(t, webhookSecret, tr.WebhookSecret.String)
require.Equal(t, ciphers[0].HexDigest(), tr.WebhookSecretKeyID.String)
case database.ChatAutomationTriggerTypeCron:
require.False(t, tr.WebhookSecret.Valid, "cron trigger should have NULL secret")
require.False(t, tr.WebhookSecretKeyID.Valid, "cron trigger should have NULL key_id")
require.Equal(t, "0 * * * *", tr.CronSchedule.String)
default:
t.Fatalf("unexpected trigger type: %s", tr.Type)
}
}
})
t.Run("ClearWebhookSecretToNULL", func(t *testing.T) {
t.Parallel()
_, crypt, ciphers := setup(t)
trigger := insertTrigger(t, crypt, ciphers)
// The DB schema enforces that webhook-type triggers must
// always have a non-NULL webhook_secret via the
// chat_automation_triggers_webhook_fields constraint.
// Attempting to clear it to NULL should fail. This verifies
// the dbcrypt layer correctly clears the key_id and passes
// the NULL through to the DB, which then rejects it.
_, err := crypt.UpdateChatAutomationTriggerWebhookSecret(ctx, database.UpdateChatAutomationTriggerWebhookSecretParams{
ID: trigger.ID,
WebhookSecret: sql.NullString{Valid: false},
UpdatedAt: dbtime.Now(),
})
require.Error(t, err)
require.Contains(t, err.Error(), "chat_automation_triggers_webhook_fields")
})
}
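The tests above all exercise the same wrapper pattern: the crypt store encrypts a field before delegating writes to the underlying store and decrypts it on reads, so callers only ever see plaintext while the raw store holds ciphertext. A self-contained toy illustrating the shape (base64 stands in for a real cipher; the actual implementation uses dbcrypt's `encryptField`/`decryptField` with key digests):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// store is the raw, unencrypted persistence layer.
type store struct{ secret string }

func (s *store) write(v string) { s.secret = v }
func (s *store) read() string   { return s.secret }

// cryptStore wraps store, encrypting on write and decrypting on read.
type cryptStore struct{ store }

func (c *cryptStore) write(v string) {
	if strings.TrimSpace(v) == "" {
		c.store.write(v) // empty secrets stored as-is, no key id
		return
	}
	c.store.write(base64.StdEncoding.EncodeToString([]byte(v)))
}

func (c *cryptStore) read() string {
	raw, err := base64.StdEncoding.DecodeString(c.store.read())
	if err != nil {
		return c.store.read()
	}
	return string(raw)
}

func main() {
	var cs cryptStore
	cs.write("whsec-test")
	fmt.Println(cs.read())                       // plaintext round-trips
	fmt.Println(cs.store.read() == "whsec-test") // raw store differs: false
}
```

This mirrors why the tests read the trigger twice: once through `crypt` to check the decrypted value, and once through the raw store to assert the stored bytes are actually encrypted.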
+3 -3
@@ -1,6 +1,6 @@
module github.com/coder/coder/v2
go 1.25.7
go 1.25.8
// Required until a v3 of chroma is created to lazily initialize all XML files.
// None of our dependencies seem to use the registries anyways, so this
@@ -483,7 +483,7 @@ require (
github.com/anthropics/anthropic-sdk-go v1.19.0
github.com/brianvoe/gofakeit/v7 v7.14.0
github.com/coder/agentapi-sdk-go v0.0.0-20250505131810-560d1d88d225
github.com/coder/aibridge v1.0.8-0.20260324203533-dd8c239e5566
github.com/coder/aibridge v1.1.1-0.20260331154949-a011104f377d
github.com/coder/aisdk-go v0.0.9
github.com/coder/boundary v0.8.4-0.20260304164748-566aeea939ab
github.com/coder/preview v1.0.8
@@ -491,7 +491,7 @@ require (
github.com/dgraph-io/ristretto/v2 v2.4.0
github.com/elazarl/goproxy v1.8.0
github.com/fsnotify/fsnotify v1.9.0
github.com/go-git/go-git/v5 v5.17.0
github.com/go-git/go-git/v5 v5.17.1
github.com/mark3labs/mcp-go v0.38.0
github.com/shopspring/decimal v1.4.0
gonum.org/v1/gonum v0.17.0
+4 -4
@@ -314,8 +314,8 @@ github.com/cncf/xds/go v0.0.0-20260202195803-dba9d589def2 h1:aBangftG7EVZoUb69Os
github.com/cncf/xds/go v0.0.0-20260202195803-dba9d589def2/go.mod h1:qwXFYgsP6T7XnJtbKlf1HP8AjxZZyzxMmc+Lq5GjlU4=
github.com/coder/agentapi-sdk-go v0.0.0-20250505131810-560d1d88d225 h1:tRIViZ5JRmzdOEo5wUWngaGEFBG8OaE1o2GIHN5ujJ8=
github.com/coder/agentapi-sdk-go v0.0.0-20250505131810-560d1d88d225/go.mod h1:rNLVpYgEVeu1Zk29K64z6Od8RBP9DwqCu9OfCzh8MR4=
github.com/coder/aibridge v1.0.8-0.20260324203533-dd8c239e5566 h1:DK+a7Q9bPpTyq7ePaz81Ihauyp1ilXNhF8MI+7rmZpA=
github.com/coder/aibridge v1.0.8-0.20260324203533-dd8c239e5566/go.mod h1:u6WvGLMQQbk3ByeOw+LBdVgDNc/v/ujAtUc6MfvzQb4=
github.com/coder/aibridge v1.1.1-0.20260331154949-a011104f377d h1:yoDGndlvKP6fiKzivG7kYLYs7jDEt2phgGVagDmuAHY=
github.com/coder/aibridge v1.1.1-0.20260331154949-a011104f377d/go.mod h1:u6WvGLMQQbk3ByeOw+LBdVgDNc/v/ujAtUc6MfvzQb4=
github.com/coder/aisdk-go v0.0.9 h1:Vzo/k2qwVGLTR10ESDeP2Ecek1SdPfZlEjtTfMveiVo=
github.com/coder/aisdk-go v0.0.9/go.mod h1:KF6/Vkono0FJJOtWtveh5j7yfNrSctVTpwgweYWSp5M=
github.com/coder/boundary v0.8.4-0.20260304164748-566aeea939ab h1:HrlxyTmMQpOHfSKzRU1vf5TxrmV6vL5OiWq+Dvn5qh0=
@@ -516,8 +516,8 @@ github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66D
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
github.com/go-git/go-billy/v5 v5.8.0 h1:I8hjc3LbBlXTtVuFNJuwYuMiHvQJDq1AT6u4DwDzZG0=
github.com/go-git/go-billy/v5 v5.8.0/go.mod h1:RpvI/rw4Vr5QA+Z60c6d6LXH0rYJo0uD5SqfmrrheCY=
github.com/go-git/go-git/v5 v5.17.0 h1:AbyI4xf+7DsjINHMu35quAh4wJygKBKBuXVjV/pxesM=
github.com/go-git/go-git/v5 v5.17.0/go.mod h1:f82C4YiLx+Lhi8eHxltLeGC5uBTXSFa6PC5WW9o4SjI=
github.com/go-git/go-git/v5 v5.17.1 h1:WnljyxIzSj9BRRUlnmAU35ohDsjRK0EKmL0evDqi5Jk=
github.com/go-git/go-git/v5 v5.17.1/go.mod h1:pW/VmeqkanRFqR6AljLcs7EA7FbZaN5MQqO7oZADXpo=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
+4
@@ -118,5 +118,9 @@
"viewOAuth2AppSecrets": {
"object": { "resource_type": "oauth2_app_secret" },
"action": "read"
},
"createChat": {
"object": { "resource_type": "chat", "owner_id": "me" },
"action": "create"
}
}
+173 -37
@@ -1,18 +1,56 @@
/**
* React Compiler diagnostic checker.
*
* Runs babel-plugin-react-compiler over every .ts/.tsx file in the
* target directories and reports functions that failed to compile or
* were skipped. Exits with code 1 when any diagnostics are present
* or a target directory is missing.
*
* Usage: node scripts/check-compiler.mjs
*/
import { readFileSync, readdirSync } from "node:fs";
import { join, relative } from "node:path";
import { fileURLToPath } from "node:url";
import { transformSync } from "@babel/core";
// Resolve the site/ directory (ESM equivalent of __dirname + "..").
const siteDir = new URL("..", import.meta.url).pathname;
// Only AgentsPage is currently opted in to React Compiler. Add new
// directories here as more pages are migrated.
const targetDirs = [
"src/pages/AgentsPage",
];
const skipPatterns = [".test.", ".stories.", ".jest."];
// Maximum length for truncated error messages in the report.
const MAX_ERROR_LENGTH = 120;
// Set by collectFiles when a target directory is missing. Declared at
// module scope because collectFiles runs outside the main block below.
let hadCollectionErrors = false;
// ---------------------------------------------------------------------------
// File collection
// ---------------------------------------------------------------------------
/**
* Recursively collect .ts/.tsx files under `dir`, skipping test and
* story files. Returns paths relative to `siteDir`. Sets
* `hadCollectionErrors` and returns an empty array on ENOENT so the
* caller and recursive calls both stay safe.
*/
function collectFiles(dir) {
let entries;
try {
entries = readdirSync(dir, { withFileTypes: true });
} catch (e) {
if (e.code === "ENOENT") {
console.error(`Target directory not found: ${relative(siteDir, dir)}`);
hadCollectionErrors = true;
return [];
}
throw e;
}
const results = [];
for (const entry of readdirSync(dir, { withFileTypes: true })) {
for (const entry of entries) {
const full = join(dir, entry.name);
if (entry.isDirectory()) {
results.push(...collectFiles(full));
@@ -26,17 +64,63 @@ function collectFiles(dir) {
return results;
}
const files = targetDirs.flatMap((d) => collectFiles(join(siteDir, d)));
// ---------------------------------------------------------------------------
// Compilation & diagnostics
//
// We use transformSync deliberately. The React Compiler plugin is
// CPU-bound (parse-only takes ~2s vs ~19s with the compiler over all
// of site/src), so transformAsync + Promise.all gives no speedup —
// Node still runs all transforms on a single thread. Benchmarked
// sync, async-sequential, and async-parallel: all land within noise
// of each other. The sync API keeps the code simple.
// ---------------------------------------------------------------------------
let totalCompiled = 0;
const failures = [];
/**
* Shorten a compiler diagnostic message to its first sentence, stripping
* the leading "Error: " prefix and any trailing URL references so the
* one-line report stays readable.
*
* Example:
* "Error: Ref values are not allowed. Use ref types instead (https://…)."
* "Ref values are not allowed"
*/
export function shortenMessage(msg) {
const str = typeof msg === "string" ? msg : String(msg);
return str
.replace(/^Error: /, "")
.split(/\.\s/)[0]
.split("(http")[0]
.replace(/\.\s*$/, "")
.trim();
}
for (const file of files) {
const code = readFileSync(join(siteDir, file), "utf-8");
/**
* Remove diagnostics that share the same line + message. The compiler
* can emit duplicate events for the same function when it retries
* compilation, so we deduplicate before reporting.
*/
export function deduplicateDiagnostics(diagnostics) {
const seen = new Set();
return diagnostics.filter((d) => {
const key = `${d.line}:${d.short}`;
if (seen.has(key)) return false;
seen.add(key);
return true;
});
}
/**
* Run the React Compiler over a single file and return the number of
* successfully compiled functions plus any diagnostics. Transform
* errors are caught and returned as a diagnostic with line 0 rather
* than thrown, so the caller always gets a result.
*/
function compileFile(file) {
const isTSX = file.endsWith(".tsx");
const diagnostics = [];
try {
const code = readFileSync(join(siteDir, file), "utf-8");
const result = transformSync(code, {
plugins: [
["@babel/plugin-syntax-typescript", { isTSX }],
@@ -44,53 +128,105 @@ for (const file of files) {
logger: {
logEvent(_filename, event) {
if (event.kind === "CompileError" || event.kind === "CompileSkip") {
const msg = event.detail || event.reason || "";
const short = typeof msg === "string"
? msg.replace(/^Error: /, "").split(".")[0].split("(http")[0].trim()
: String(msg);
diagnostics.push({ line: event.fnLoc?.start?.line, short });
const msg = event.detail || event.reason || "(unknown)";
diagnostics.push({
line: event.fnLoc?.start?.line ?? 0,
short: shortenMessage(msg),
});
}
},
},
}],
],
filename: file,
// Skip config-file resolution. No babel.config.js exists in the
// repo, so the search is wasted I/O on every file.
configFile: false,
babelrc: false,
});
const slots = result.code.match(/const \$ = _c\(\d+\)/g) || [];
totalCompiled += slots.length;
// The compiler inserts `const $ = _c(N)` at the top of every
// function it successfully compiles, where N is the number of
// memoization slots. Counting these tells us how many functions
// were compiled in this file.
const compiledCount = result?.code?.match(/const \$ = _c\(\d+\)/g)?.length ?? 0;
if (diagnostics.length) {
const seen = new Set();
const unique = diagnostics.filter((d) => {
const key = `${d.line}:${d.short}`;
if (seen.has(key)) return false;
seen.add(key);
return true;
});
failures.push({ file, compiled: slots.length, diagnostics: unique });
}
return {
compiled: compiledCount,
diagnostics: deduplicateDiagnostics(diagnostics),
};
} catch (e) {
failures.push({
file, compiled: 0,
diagnostics: [{ line: 0, short: `Transform error: ${String(e.message).substring(0, 120)}` }],
});
return {
compiled: 0,
diagnostics: [{
line: 0,
// Truncate to keep the one-line report readable.
short: `Transform error: ${(e instanceof Error ? e.message : String(e)).substring(0, MAX_ERROR_LENGTH)}`,
}],
};
}
}
console.log(`\nTotal: ${totalCompiled} functions compiled across ${files.length} files`);
console.log(`Files with diagnostics: ${failures.length}\n`);
// ---------------------------------------------------------------------------
// Report
// ---------------------------------------------------------------------------
for (const f of failures) {
const short = f.file.replace("src/pages/AgentsPage/", "");
console.log(`${short} (${f.compiled} compiled)`);
for (const d of f.diagnostics) {
console.log(` line ${d.line}: ${d.short}`);
/**
* Derive a short display path by stripping the first matching target
* dir prefix so the output stays compact.
*/
export function shortPath(file, dirs = targetDirs) {
for (const dir of dirs) {
const prefix = `${dir}/`;
if (file.startsWith(prefix)) {
return file.slice(prefix.length);
}
}
return file;
}
/** Print a summary of compilation results and per-file diagnostics. */
function printReport(failures, totalCompiled, fileCount, hadErrors) {
console.log(`\nTotal: ${totalCompiled} functions compiled across ${fileCount} files`);
console.log(`Files with diagnostics: ${failures.length}\n`);
for (const f of failures) {
console.log(`${shortPath(f.file)} (${f.compiled} compiled)`);
for (const d of f.diagnostics) {
console.log(` line ${d.line}: ${d.short}`);
}
}
if (failures.length === 0 && !hadErrors) {
console.log("✓ All files compile cleanly.");
}
}
if (failures.length === 0) {
console.log("✓ All files compile cleanly.");
} else {
// ---------------------------------------------------------------------------
// Main
// ---------------------------------------------------------------------------
// Only run the main block when executed directly, not when imported
// by tests for the exported pure functions.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
hadCollectionErrors = false;
const files = targetDirs.flatMap((d) => collectFiles(join(siteDir, d)));
let totalCompiled = 0;
const failures = [];
for (const file of files) {
const { compiled, diagnostics } = compileFile(file);
totalCompiled += compiled;
if (diagnostics.length > 0) {
failures.push({ file, compiled, diagnostics });
}
}
printReport(failures, totalCompiled, files.length, hadCollectionErrors);
if (failures.length > 0 || hadCollectionErrors) {
process.exitCode = 1;
}
}
+95
@@ -0,0 +1,95 @@
import { describe, expect, it } from "vitest";
import {
deduplicateDiagnostics,
shortPath,
shortenMessage,
} from "./check-compiler.mjs";
describe("shortenMessage", () => {
it("strips Error: prefix and takes first sentence", () => {
expect(
shortenMessage(
"Error: Ref values are not allowed. Use ref types instead.",
),
).toBe("Ref values are not allowed");
});
it("strips trailing URL references", () => {
expect(
shortenMessage("Mutating a value returned from a hook(https://react.dev/reference)"),
).toBe("Mutating a value returned from a hook");
});
it("preserves dotted property paths", () => {
expect(
shortenMessage("Cannot destructure props.foo because it is null"),
).toBe("Cannot destructure props.foo because it is null");
});
it("coerces non-string values", () => {
expect(shortenMessage(42)).toBe("42");
expect(shortenMessage({ toString: () => "Error: obj. detail" })).toBe("obj");
});
it("normalizes trailing periods", () => {
expect(shortenMessage("Single sentence.")).toBe("Single sentence");
});
it("preserves empty string and (unknown) sentinel", () => {
expect(shortenMessage("")).toBe("");
expect(shortenMessage("(unknown)")).toBe("(unknown)");
});
});
describe("deduplicateDiagnostics", () => {
it("removes duplicates with same line and message", () => {
const input = [
{ line: 1, short: "error A" },
{ line: 1, short: "error A" },
{ line: 2, short: "error B" },
];
expect(deduplicateDiagnostics(input)).toEqual([
{ line: 1, short: "error A" },
{ line: 2, short: "error B" },
]);
});
it("keeps diagnostics with same message on different lines", () => {
const input = [
{ line: 1, short: "error A" },
{ line: 2, short: "error A" },
];
expect(deduplicateDiagnostics(input)).toEqual(input);
});
it("keeps diagnostics with same line but different messages", () => {
const input = [
{ line: 1, short: "error A" },
{ line: 1, short: "error B" },
];
expect(deduplicateDiagnostics(input)).toEqual(input);
});
it("returns empty array for empty input", () => {
expect(deduplicateDiagnostics([])).toEqual([]);
});
});
describe("shortPath", () => {
const dirs = ["src/pages/AgentsPage", "src/pages/Other"];
it("strips matching target dir prefix", () => {
expect(shortPath("src/pages/AgentsPage/components/Chat.tsx", dirs))
.toBe("components/Chat.tsx");
});
it("strips first matching prefix when multiple match", () => {
expect(shortPath("src/pages/Other/index.tsx", dirs))
.toBe("index.tsx");
});
it("returns file unchanged when no prefix matches", () => {
expect(shortPath("src/utils/helper.ts", dirs))
.toBe("src/utils/helper.ts");
});
});
+8 -1
@@ -571,9 +571,16 @@ func init() {
func (h *Handler) renderPermissions(ctx context.Context, actor rbac.Subject) string {
response := make(codersdk.AuthorizationResponse)
for k, v := range permissionChecks {
// Resolve the "me" sentinel so permission checks
// run against the actual actor, matching the
// API-side handling in coderd/authorize.go.
ownerID := v.Object.OwnerID
if ownerID == codersdk.Me {
ownerID = actor.ID
}
obj := rbac.Object{
ID: v.Object.ResourceID,
Owner: v.Object.OwnerID,
Owner: ownerID,
OrgID: v.Object.OrganizationID,
AnyOrgOwner: v.Object.AnyOrgOwner,
Type: string(v.Object.ResourceType),
+70
@@ -21,6 +21,7 @@ import (
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/google/uuid"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/exp/maps"
@@ -31,6 +32,7 @@ import (
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/httpmw"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/telemetry"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/site"
@@ -79,6 +81,74 @@ func TestInjection(t *testing.T) {
require.Equal(t, db2sdk.User(user, []uuid.UUID{}), got)
}
func TestRenderPermissionsResolvesMe(t *testing.T) {
t.Parallel()
// GIVEN: a site handler wired to a real RBAC authorizer and a
// template that renders only the SSR permissions JSON.
siteFS := fstest.MapFS{
"index.html": &fstest.MapFile{
Data: []byte("{{ .Permissions }}"),
},
}
db, _ := dbtestutil.NewDB(t)
authorizer := rbac.NewStrictCachingAuthorizer(prometheus.NewRegistry())
handler, err := site.New(&site.Options{
Telemetry: telemetry.NewNoop(),
Database: db,
SiteFS: siteFS,
Authorizer: authorizer,
})
require.NoError(t, err)
// GIVEN: a user with the agents-access role.
userWithRole := dbgen.User(t, db, database.User{
RBACRoles: []string{"agents-access"},
})
_, tokenWithRole := dbgen.APIKey(t, db, database.APIKey{
UserID: userWithRole.ID,
ExpiresAt: time.Now().Add(time.Hour),
})
// WHEN: the user loads the page.
r := httptest.NewRequest("GET", "/", nil)
r.Header.Set(codersdk.SessionTokenHeader, tokenWithRole)
rw := httptest.NewRecorder()
handler.ServeHTTP(rw, r)
require.Equal(t, http.StatusOK, rw.Code)
// THEN: the SSR-rendered permissions include createChat = true
// because the "me" sentinel in permissions.json was resolved to
// the actor's ID, and the agents-access role grants user-scoped
// chat create permission.
var permsWithRole codersdk.AuthorizationResponse
err = json.Unmarshal([]byte(html.UnescapeString(rw.Body.String())), &permsWithRole)
require.NoError(t, err)
assert.True(t, permsWithRole["createChat"], "user with agents-access role should have createChat = true")
// GIVEN: a user without the agents-access role.
userWithoutRole := dbgen.User(t, db, database.User{})
_, tokenWithoutRole := dbgen.APIKey(t, db, database.APIKey{
UserID: userWithoutRole.ID,
ExpiresAt: time.Now().Add(time.Hour),
})
// WHEN: the user loads the page.
r = httptest.NewRequest("GET", "/", nil)
r.Header.Set(codersdk.SessionTokenHeader, tokenWithoutRole)
rw = httptest.NewRecorder()
handler.ServeHTTP(rw, r)
require.Equal(t, http.StatusOK, rw.Code)
// THEN: createChat = false because the member role does not
// grant chat permissions.
var permsWithoutRole codersdk.AuthorizationResponse
err = json.Unmarshal([]byte(html.UnescapeString(rw.Body.String())), &permsWithoutRole)
require.NoError(t, err)
assert.False(t, permsWithoutRole["createChat"], "user without agents-access role should have createChat = false")
}
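The behavior under test reduces to one substitution: when a permission check's owner is the literal `"me"` sentinel, it is rewritten to the requesting actor's ID before authorization runs. A hedged sketch of just that resolution step (hypothetical `resolveOwner` helper; the real code does this inline in `renderPermissions`):

```go
package main

import "fmt"

// me is the sentinel owner value used in permissions.json, matching
// the API-side handling in coderd/authorize.go.
const me = "me"

// resolveOwner substitutes the actor's ID for the "me" sentinel so
// user-scoped checks like createChat evaluate against the real actor.
func resolveOwner(ownerID, actorID string) string {
	if ownerID == me {
		return actorID
	}
	return ownerID
}

func main() {
	fmt.Println(resolveOwner("me", "user-123"))       // resolved to the actor
	fmt.Println(resolveOwner("user-456", "user-123")) // explicit owners untouched
}
```

Without this substitution the RBAC object carries the literal string `"me"` as its owner, so owner-scoped grants like the agents-access role's chat permissions never match, which is exactly the false-negative the test guards against.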
func TestInjectionFailureProducesCleanHTML(t *testing.T) {
t.Parallel()
+1 -8
@@ -1,11 +1,5 @@
import { type AxiosError, type AxiosResponse, isAxiosError } from "axios";
-const Language = {
-	errorsByCode: {
-		defaultErrorCode: "Invalid value",
-	},
-};
export interface FieldError {
field: string;
detail: string;
@@ -64,8 +58,7 @@ export const mapApiErrorToFieldErrors = (
if (apiErrorResponse.validations) {
for (const error of apiErrorResponse.validations) {
-			result[error.field] =
-				error.detail || Language.errorsByCode.defaultErrorCode;
+			result[error.field] = error.detail || "Invalid value";
}
}
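The fallback in the hunk above can be exercised in isolation. A minimal sketch with a hypothetical input shape (the real function takes a full Axios error response):

```typescript
// Standalone sketch of the fallback behavior changed above: validation
// entries with a missing or empty detail now fall back to the literal
// "Invalid value" instead of the removed Language.errorsByCode lookup.
interface FieldError {
	field: string;
	detail: string;
}

function mapValidationsToFieldErrors(
	validations: readonly FieldError[],
): Record<string, string> {
	const result: Record<string, string> = {};
	for (const error of validations) {
		// Empty-string details are falsy, so they trigger the fallback too.
		result[error.field] = error.detail || "Invalid value";
	}
	return result;
}
```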
@@ -47,6 +47,12 @@ export const RBACResourceActions: Partial<
read: "read chat messages and metadata",
update: "update chat title or settings",
},
chat_automation: {
create: "create a chat automation",
delete: "delete a chat automation",
read: "read chat automation configuration",
update: "update a chat automation",
},
connection_log: {
read: "read connection logs",
update: "upsert connection log entries",
@@ -320,6 +320,11 @@ export type APIKeyScope =
| "boundary_usage:read"
| "boundary_usage:update"
| "chat:*"
| "chat_automation:*"
| "chat_automation:create"
| "chat_automation:delete"
| "chat_automation:read"
| "chat_automation:update"
| "chat:create"
| "chat:delete"
| "chat:read"
@@ -529,6 +534,11 @@ export const APIKeyScopes: APIKeyScope[] = [
"boundary_usage:read",
"boundary_usage:update",
"chat:*",
"chat_automation:*",
"chat_automation:create",
"chat_automation:delete",
"chat_automation:read",
"chat_automation:update",
"chat:create",
"chat:delete",
"chat:read",
@@ -1200,6 +1210,13 @@ export interface Chat {
* connect and disconnect.
*/
readonly has_unread: boolean;
/**
* LastInjectedContext holds the most recently persisted
* injected context parts (AGENTS.md files and skills). It
 * is updated only when context changes: on first workspace
 * attach or on agent change.
*/
readonly last_injected_context?: readonly ChatMessagePart[];
}
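The comment on `last_injected_context` describes an update-only-on-change rule. A hypothetical sketch of that rule (not the server's actual implementation; the change-detection strategy is assumed):

```typescript
// Hypothetical sketch of the persistence rule described above:
// last_injected_context is rewritten only when the injected context
// actually changes, i.e. on first workspace attach (nothing persisted
// yet) or when the agent changes the context parts.
interface ChatMessagePart {
	type: string;
	text: string;
}

function shouldPersistContext(
	previous: readonly ChatMessagePart[] | undefined,
	next: readonly ChatMessagePart[],
): boolean {
	if (previous === undefined) {
		return true; // first workspace attach: nothing persisted yet
	}
	// Structural comparison stands in for whatever change detection the
	// server really uses.
	return JSON.stringify(previous) !== JSON.stringify(next);
}
```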
// From codersdk/chats.go
@@ -5578,6 +5595,7 @@ export type RBACResource =
| "audit_log"
| "boundary_usage"
| "chat"
| "chat_automation"
| "connection_log"
| "crypto_key"
| "debug_info"
@@ -5624,6 +5642,7 @@ export const RBACResources: RBACResource[] = [
"audit_log",
"boundary_usage",
"chat",
"chat_automation",
"connection_log",
"crypto_key",
"debug_info",
@@ -5918,56 +5937,62 @@ export interface Role {
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleAgentsAccess = "agents-access";
// From codersdk/rbacroles.go
/**
* Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleAuditor = "auditor";
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleMember = "member";
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleOrganizationAdmin = "organization-admin";
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleOrganizationAuditor = "organization-auditor";
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleOrganizationMember = "organization-member";
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleOrganizationTemplateAdmin = "organization-template-admin";
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleOrganizationUserAdmin = "organization-user-admin";
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleOrganizationWorkspaceCreationBan =
"organization-workspace-creation-ban";
// From codersdk/rbacroles.go
/**
* Ideally this roles would be generated from the rbac/roles.go package.
* Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleOwner = "owner";
@@ -5986,13 +6011,13 @@ export interface RoleSyncSettings {
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleTemplateAdmin = "template-admin";
// From codersdk/rbacroles.go
/**
- * Ideally this roles would be generated from the rbac/roles.go package.
+ * Ideally these roles would be generated from the rbac/roles.go package.
*/
export const RoleUserAdmin = "user-admin";
@@ -7,12 +7,12 @@ import {
ChartTooltipContent,
} from "#/components/Chart/Chart";
import {
-	HelpTooltip,
-	HelpTooltipContent,
-	HelpTooltipIconTrigger,
-	HelpTooltipText,
-	HelpTooltipTitle,
-} from "#/components/HelpTooltip/HelpTooltip";
+	HelpPopover,
+	HelpPopoverContent,
+	HelpPopoverIconTrigger,
+	HelpPopoverText,
+	HelpPopoverTitle,
+} from "#/components/HelpPopover/HelpPopover";
import { formatDate } from "#/utils/time";
const chartConfig = {
@@ -120,18 +120,18 @@ export const ActiveUsersTitle: FC<ActiveUsersTitleProps> = ({ interval }) => {
return (
<div className="flex items-center gap-2">
{interval === "day" ? "Daily" : "Weekly"} Active Users
-			<HelpTooltip>
-				<HelpTooltipIconTrigger size="small" />
-				<HelpTooltipContent>
-					<HelpTooltipTitle>How do we calculate active users?</HelpTooltipTitle>
-					<HelpTooltipText>
+			<HelpPopover>
+				<HelpPopoverIconTrigger size="small" />
+				<HelpPopoverContent>
+					<HelpPopoverTitle>How do we calculate active users?</HelpPopoverTitle>
+					<HelpPopoverText>
When a connection is initiated to a user&apos;s workspace they are
considered an active user. e.g. apps, web terminal, SSH. This is for
measuring user activity and has no connection to license
consumption.
-					</HelpTooltipText>
-				</HelpTooltipContent>
-			</HelpTooltip>
+					</HelpPopoverText>
+				</HelpPopoverContent>
+			</HelpPopover>
</div>
);
};
@@ -37,7 +37,7 @@ export const AvatarCard: FC<AvatarCardProps> = ({
*
* @see {@link https://css-tricks.com/flexbox-truncated-text/}
*/}
-			<div css={{ marginRight: "auto", minWidth: 0 }}>
+			<div className="mr-auto min-w-0">
<h3
// Lets users hover over truncated text to see whole thing
title={header}
@@ -75,12 +75,7 @@ export const DeprecatedBadge: React.FC = () => {
export const Badges: React.FC<React.PropsWithChildren> = ({ children }) => {
return (
-		<Stack
-			css={{ margin: "0 0 16px" }}
-			direction="row"
-			alignItems="center"
-			spacing={1}
-		>
+		<Stack className="mb-4" direction="row" alignItems="center" spacing={1}>
{children}
</Stack>
);
@@ -6,6 +6,9 @@ import { Calendar } from "./Calendar";
const meta: Meta<typeof Calendar> = {
title: "components/Calendar",
component: Calendar,
args: {
today: new Date("2025-03-15"),
},
decorators: [
(Story) => (
<div className="rounded-lg border border-solid border-border-default w-fit">
@@ -21,6 +24,7 @@ type Story = StoryObj<typeof Calendar>;
export const Single: Story = {
args: {
mode: "single",
defaultMonth: new Date("2025-03-15"),
selected: new Date("2025-03-15"),
},
};
@@ -34,6 +38,8 @@ export const Range: Story = {
return (
<Calendar
mode="range"
today={new Date("2025-03-15")}
defaultMonth={new Date("2025-03-10")}
selected={range}
onSelect={(r) => r && setRange(r)}
numberOfMonths={2}
@@ -45,6 +51,7 @@ export const Range: Story = {
export const TwoMonths: Story = {
args: {
mode: "single",
defaultMonth: new Date("2025-03-15"),
numberOfMonths: 2,
selected: new Date("2025-03-15"),
},
@@ -53,6 +60,7 @@ export const TwoMonths: Story = {
export const DisabledFutureDates: Story = {
args: {
mode: "single",
defaultMonth: new Date("2025-03-15"),
selected: new Date("2025-03-15"),
disabled: { after: new Date("2025-03-20") },
},
@@ -1,19 +1,18 @@
-import type { Interpolation, Theme } from "@emotion/react";
import type { FC } from "react";
import { CoderIcon } from "#/components/Icons/CoderIcon";
import { getApplicationName, getLogoURL } from "#/utils/appearance";
+import { cn } from "#/utils/cn";
/**
* Enterprise customers can set a custom logo for their Coder application. Use
* the custom logo wherever the Coder logo is used, if a custom one is provided.
*/
-export const CustomLogo: FC<{ css?: Interpolation<Theme> }> = (props) => {
+export const CustomLogo: FC<{ className?: string }> = ({ className }) => {
const applicationName = getApplicationName();
const logoURL = getLogoURL();
return logoURL ? (
<img
-			{...props}
alt={applicationName}
src={logoURL}
// This prevent browser to display the ugly error icon if the
@@ -24,10 +23,9 @@ export const CustomLogo: FC<{ css?: Interpolation<Theme> }> = (props) => {
onLoad={(e) => {
e.currentTarget.style.display = "inline";
}}
-			css={{ maxWidth: 200 }}
-			className="application-logo"
+			className={cn("max-w-[200px] application-logo", className)}
/>
) : (
-		<CoderIcon {...props} className="w-12 h-12" />
+		<CoderIcon className={cn("w-12 h-12", className)} />
);
};
@@ -78,7 +78,7 @@ export const DeleteDialog: FC<DeleteDialogProps> = ({
<TextField
fullWidth
autoFocus
-				css={{ marginTop: 24 }}
+				className="mt-6"
name="confirmation"
autoComplete="off"
id={`${hookId}-confirm`}
@@ -77,12 +77,7 @@ export const DurationField: FC<DurationFieldProps> = (props) => {
return (
<div>
-			<div
-				css={{
-					display: "flex",
-					gap: 8,
-				}}
-			>
+			<div className="flex gap-2">
<TextField
{...textFieldProps}
fullWidth

Some files were not shown because too many files have changed in this diff.