Compare commits

...

189 Commits

Author SHA1 Message Date
Kyle Carberry 56ff5dfded feat(coderd/database): add automations database schema and queries
Adds the database layer for the chat automations feature:

Schema (migration 000454):
- chat_automations table with status, rate limits, instructions, MCP config
- chat_automation_triggers table (webhook + cron types)
- chat_automation_events table for audit trail
- Indexes for performance and unique constraint on (owner, org, name)
- automation_id FK on chats table

SQL queries:
- chatautomations.sql: CRUD with authorize_filter support
- chatautomationtriggers.sql: CRUD + GetActiveChatAutomationCronTriggers JOIN query
- chatautomationevents.sql: insert, list, count windows, purge

RBAC:
- chat_automation resource with CRUD actions
- dbauthz wrappers for all 18 query methods

Also adds LockIDChatAutomationCron advisory lock constant.
2026-04-01 14:10:38 +00:00
Kyle Carberry 4b52656958 fix(site): ensure Thinking indicator appears regardless of WebSocket event ordering (#23884)
The "Thinking..." indicator intermittently failed to render after
submitting a message. The behavior depended on the order of events
within a single WebSocket frame.

## Root Cause

`flushMessageParts()` was called before **all** non-`message_part`
events in the batch loop. When the server sent `[message_part,
status:"running"]` in the same SSE chunk:

1. `message_part` → pushed to `partsBuf`
2. `status:"running"` → `flushMessageParts()` applied parts **first** →
`streamState` became non-null → then `chatStatus` set to `"running"`
3. Subscriber saw `streamState != null && chatStatus == "running"` →
`selectIsAwaitingFirstStreamChunk` returned `false` → no "Thinking..."

When events arrived in the reverse order (`[status:"running",
message_part]`), the indicator worked because the status was set before
parts were applied.

## Fix

- Move `flushMessageParts()` to only fire before `message` and `error`
events (which need prior parts visible)
- Add `discardBufferedParts()` for events that clear stream state
(`status:"pending"/"waiting"`, `retry`) so the deferred `setTimeout(0)`
flush doesn't re-populate cleared state
- Status changes are now always applied before parts within a batch, and
the deferred flush gives React one render cycle to show "Thinking..."

| Event | Flush? | Rationale |
|---|---|---|
| `message` | YES | Durable commit must include all stream parts |
| `error` | YES | Partial output should be visible alongside error |
| `status` | NO | Status must be set before parts so "starting" phase renders |
| `retry` | DISCARD | Retry clears stream state; flushing would re-populate it |
| `queue_update` | NO | Doesn't interact with stream state |
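The flush rules above can be sketched as a single batch reducer. Names here are hypothetical, and the real fix defers the final flush via `setTimeout(0)` so React gets a render cycle to show "Thinking..." — this sketch only shows the per-event flush/discard decisions:

```typescript
type StreamEvent =
  | { type: "message_part"; part: string }
  | { type: "message"; id: string }
  | { type: "error"; message: string }
  | { type: "status"; status: "pending" | "waiting" | "running" }
  | { type: "retry" }
  | { type: "queue_update" };

interface BatchState {
  partsBuf: string[]; // parts buffered, not yet visible to subscribers
  streamParts: string[]; // the visible stream state
  chatStatus: string;
}

// Applies one WebSocket batch with selective flushing: only `message`
// and `error` flush; state-clearing events discard; `status` is always
// applied before any buffered parts become visible.
function applyBatch(state: BatchState, batch: StreamEvent[]): BatchState {
  const s: BatchState = {
    ...state,
    partsBuf: [...state.partsBuf],
    streamParts: [...state.streamParts],
  };
  const flush = () => {
    s.streamParts.push(...s.partsBuf);
    s.partsBuf = [];
  };
  const discard = () => {
    s.partsBuf = [];
  };
  for (const ev of batch) {
    switch (ev.type) {
      case "message_part":
        s.partsBuf.push(ev.part); // buffered only; stream state untouched
        break;
      case "message":
        flush(); // durable commit must include all prior parts
        break;
      case "error":
        flush(); // partial output stays visible next to the error
        break;
      case "status":
        if (ev.status === "pending" || ev.status === "waiting") discard();
        s.chatStatus = ev.status; // status lands before parts every time
        break;
      case "retry":
        discard(); // flushing here would re-populate cleared state
        break;
      case "queue_update":
        break; // no interaction with stream state
    }
  }
  return s;
}
```

With this ordering, the bug scenario `[message_part, status:"running"]` leaves `streamParts` empty when `chatStatus` flips to `"running"`, so the awaiting-first-chunk selector can still return `true`.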

## Tests (written first, failing before fix)

1. **"shows starting phase when message_part arrives before
status:running in same batch"** — the exact bug scenario
2. **"shows starting phase when status:running arrives before
message_part in same batch"** — verifies the "good" order still works
3. **"discards buffered parts when status transitions to pending"** —
verifies parts don't leak through pending transitions

All tests are deterministic (fake timers, no race conditions).

<details><summary>Implementation plan & decision log</summary>

### Why not reorder events within the batch?
Reordering would change the semantic ordering of events from the server,
which could have subtle side effects. The simpler approach is to be
selective about when parts are flushed.

### Why discard (not flush) before pending/waiting/retry?
These events clear `streamState`. If parts were flushed before the
clear, they'd be visible for one frame then disappear. If the deferred
flush ran after the clear, it would re-populate the state. Discarding is
the only correct behavior.

### Why keep flush before error?
Errors should surface partial output so the user can see what the agent
was doing when it failed.

</details>
2026-04-01 07:45:38 -04:00
Danielle Maywood f5b98aa12d fix: stabilize flaky visual stories (#23893) 2026-04-01 11:51:06 +01:00
Cian Johnston 7ddde0887b feat(site): force-enable kyleosophy on dev.coder.com (#23892)
- Force-enable Kyleosophy on `dev.coder.com` via hostname check
- Toggle shows as checked + disabled with "Kyleosophy is mandatory on
`dev.coder.com`"
- `isKylesophyForced()` exported for UI and testability
- Tests for forced/non-forced hostname behavior

> 🤖 Written by a Coder Agent. Reviewed by a human.
2026-04-01 11:39:04 +01:00
Cian Johnston bec426b24f feat(site): add Kyleosophy alternative completion chimes (#23891)
- Add "Enable Kyleosophy" toggle to Settings > Behavior
- When enabled, replaces standard completion chime with random Kyle
sound clips
- Ships 8 alternative `.mp3` files as static assets (~82KB total)
- localStorage preference (`agents.kyleosophy`), defaults to off
- Pauses orphaned Audio elements on sound URL change to prevent overlap
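The overlap fix in the last bullet amounts to pausing the previous `Audio` element before swapping in a new clip. A minimal sketch with hypothetical names (the real code works with `HTMLAudioElement`s keyed by sound URL):

```typescript
interface Playable {
  pause(): void;
  play(): void;
}

let chimeAudio: Playable | null = null;

// Rapid completions call this repeatedly; pausing the previous clip
// first prevents two chimes from playing on top of each other.
function playChime(createAudio: () => Playable): void {
  chimeAudio?.pause(); // stop any clip still playing from a prior completion
  chimeAudio = createAudio();
  chimeAudio.play();
}
```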

<details><summary>Review findings addressed</summary>

- **P2** Stale JSDoc on `playChimeAudio` — updated to reflect
parameterized behavior
- **P3** Overlapping audio on rapid completions — added
`chimeAudio?.pause()` before replacement
- **P3** Test ordering dependency — pinned `Math.random` for
determinism, documented cache behavior
- **Nit** Setter naming — `setLocalKyleosophy` → `setKylesophyLocal`
- Toggle moved to bottom of Behavior page per product request
- Description changed to "IYKYK" per product request

</details>

> 🤖 Written by a Coder Agent. Reviewed by a human.
2026-04-01 10:06:01 +00:00
Cian Johnston d6df78c9b9 chore: remove racy ChatStatusPending assertions after CreateChat (#23882)
Removes 6 fragile `require.Equal(t, codersdk.ChatStatusPending,
chat.Status)` assertions from chat relay and creation tests.

**Root cause**: In HA tests with two replicas sharing the same DB, the
worker can acquire a just-created chat (flipping `pending → running` via
`AcquireChats`) before the HTTP response reaches the test. All affected
tests already synchronize via `require.Eventually` waiting for `running`
status, making the initial assertion both redundant and racy.

- Remove 5 assertions in `enterprise/coderd/exp_chats_test.go` (all
`TestChatStreamRelay` subtests)
- Remove 1 assertion in `coderd/exp_chats_test.go` (`TestPostChats`)
- An existing comment in `TestPostChats/Success` already documents this
exact race

Fixes flake:
https://github.com/coder/coder/actions/runs/23807597632/job/69385425724

> 🤖 Written by a Coder Agent. Will be reviewed by a human.
2026-04-01 10:00:50 +01:00
Danielle Maywood 19390a5841 fix: resolve TestScheduleOverride/extend flake caused by timezone hour boundary race (#23830) 2026-04-01 07:53:04 +01:00
Jake Howell 2d03f7fd3d fix: resolve rendering issues with GFM alert boxes in <DynamicParameter /> (#22241)
Closes #22189

GFM alerts (e.g., `> [!IMPORTANT]`) in Markdown content failed to render
when the alert body contained inline formatting like `**bold**`,
`*italic*`, or `` `code` ``. The alert marker and subsequent text were
merged into a single string node by the parser, causing the type
detection to fail and fall back to a plain blockquote.

Additionally, multi-line alert content (`> line one\n> line two`) lost
its line breaks — all lines collapsed into one.

- Split the alert marker from trailing content in shared string nodes so
type detection works with inline formatting
- Preserve embedded newlines as `<br/>` elements to match GitHub's GFM
alert rendering
- Wrap plain-text children instead of splitting on `\n` to avoid
stripping newline information early
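The core of the first bullet is splitting the alert marker off the front of a shared string node. The real change walks the remark/mdast tree; this is only the string-level split, with hypothetical names:

```typescript
// GFM alert markers recognized by GitHub.
const ALERT_RE = /^\[!(NOTE|TIP|IMPORTANT|WARNING|CAUTION)\]\s*/;

// Separates "[!IMPORTANT] some **bold** text" into the alert type and
// the remaining content, so inline formatting after the marker no
// longer defeats type detection.
function splitAlertMarker(text: string): { type: string; rest: string } | null {
  const m = ALERT_RE.exec(text);
  if (m === null) return null; // plain blockquote, no alert
  return { type: m[1].toLowerCase(), rest: text.slice(m[0].length) };
}
```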

<img width="447" height="187" alt="image"
src="https://github.com/user-attachments/assets/d2fa3495-0b31-483c-97d8-12fed6819e24"
/>
2026-04-01 17:20:55 +11:00
Ethan 153a66b579 fix(site/src/pages/AgentsPage): confirm active agent archive (#23887)
Add a confirmation dialog before archiving an agent that is actively
running from the Agents UI.

This PR came about as feedback on PR 23758:
https://github.com/coder/coder/pull/23758#issuecomment-4160424938.
Active agents now require confirmation before archive interrupts the
current run, while inactive agents keep the existing one-click archive
behavior.

<img width="450" height="242" alt="image"
src="https://github.com/user-attachments/assets/98ce6978-d2d6-440b-9841-3806038556ee"
/>
2026-04-01 15:34:21 +11:00
Ethan 5cba59af79 fix(coderd): unarchive child chats with parents (#23761)
Unarchiving a root chat now restores descendant chats in the database
and emits lifecycle events for every affected chat so passive sessions
converge without a full refetch.

This keeps archive and unarchive symmetric at both the data and
watch-stream layers by returning the affected chat family from the
database, using those post-update rows for chatd pubsub fanout, and
covering descendant lifecycle delivery with a watch-level regression
test.

Closes #23666
2026-04-01 15:30:25 +11:00
Jeremy Ruppel 1d16ff1ca6 fix(site): sessions list and timeline polish (#23885)
- Fixed the prompt table collapsing and sizing improperly

- Make pretty much everything `text-sm` and `font-normal`
- Add model filter
- Back button on session threads page now navigates back instead of
going straight to `/aibridge/sessions`

---------

Co-authored-by: Jake Howell <jake@hwll.me>
2026-04-01 14:42:08 +11:00
Ethan b86161e0a6 test: fix TestServer_X11_EvictionLRU hang on fish shell (#23838)
`TestServer_X11_EvictionLRU` hangs forever when the developer's login
shell is `fish`. This is the only test in the repo that breaks on fish,
and it meant I couldn't run `make test` or similar without it blocking
indefinitely.

The test uses `sess.Shell()` to start interactive shell sessions, which
causes the SSH server to run the user's login shell directly (`fish
-l`). Fish buffers all piped stdin to EOF before executing any of it, so
the test's `echo ready-0\n` write never gets processed — fish sits
waiting for the pipe to close, and the test sits waiting for the echo
response.

The fix is a one-line change: `sess.Shell()` → `sess.Start("sh")`. The
test is exercising X11 LRU eviction, not shell behavior, so using `sh`
explicitly is both correct and shell-agnostic. The DISPLAY environment
variable is set identically either way since the x11-req handler runs
before `sessionStart`.
2026-04-01 12:31:22 +11:00
Cian Johnston a164d508cf fix(coderd/x/chatd): gate control subscriber to ignore stale pubsub notifications (#23865)
Fixes flaky `TestOpenAIReasoningWithWebSearchRoundTripStoreFalse` and
`TestOpenAIReasoningWithWebSearchRoundTrip`.

## Changes

- Gate the `processChat` control subscriber's cancel callback behind a
`chan struct{}` that is closed after publishing `"running"` status
- Add `TestGatedControlCancel` with 4 subtests exercising the gate logic

<details>
<summary>Root cause analysis</summary>

`SendMessage` publishes a `"pending"` notification on
`chat:stream:<chatID>` via PostgreSQL `NOTIFY`. `processChat` subscribes
to the same channel for control signals. Due to async NOTIFY delivery,
the `"pending"` notification can arrive at the control subscriber
**after** it registers its queue — even though it was published
**before**. `shouldCancelChatFromControlNotification("pending")` returns
`true`, immediately self-interrupting the processor before it does any
work.

The fix gates the cancel callback behind a closed channel. The channel
is closed after `processChat` publishes `"running"` status, so stale
notifications from before initialization are harmlessly ignored.
`close()` provides a happens-before guarantee in the Go memory model.
</details>

> 🤖 Written by a Coder Agent. Reviewed by a human.
2026-03-31 22:55:20 +01:00
Kayla はな b9f140e53e chore: remove Language objects (#23866) 2026-03-31 15:26:59 -06:00
Jeremy Ruppel 7f7b13f0ab fix(site): share AI Bridge entitlement/permissions logic (#23834)
Introduces a new `getAIBridgePermissions` method that all AI Bridge
pages can use to restrict access/paywall. Also adds the paywall and
alert to the session threads page bc I totes forgot.

---------

Co-authored-by: Jake Howell <jacob@coder.com>
2026-03-31 17:19:46 -04:00
Michael Suchacz e2bbd12137 test(coderd/x/chatd): remove flaky OpenAI round-trip tests (#23877) 2026-03-31 17:04:56 -04:00
Danielle Maywood e769d1bd7d fix(site): update story play functions after HelpTooltip→HelpPopover migration (#23876) 2026-03-31 21:50:05 +01:00
Jeremy Ruppel cccb680ec2 chore: remove shared workspaces beta badge (#23873) 2026-03-31 16:25:42 -04:00
Danielle Maywood e8fb418820 fix(site): delay desktop VNC connection until tab is selected (#23861) 2026-03-31 18:42:36 +01:00
Kyle Carberry 2c5e003c91 refactor(site): use hover popover for context indicator with nested skill tooltips (#23870)
Replaces the tooltip-inside-tooltip approach for the context usage
indicator with a hover-based Popover. Skill descriptions now appear as
nested tooltips to the right, matching the ModelSelector pattern.

**Before**: Tooltip with inline skill descriptions (truncated, janky
nested tooltips)
**After**: Popover opens on hover, skill names listed cleanly,
descriptions appear to the right on hover

- Popover opens on `mouseEnter`, closes after 150ms delay on
`mouseLeave`
- `onOpenAutoFocus` prevented to avoid stealing chat input focus
- Mobile keeps tap-to-toggle Popover behavior
- Skill rows get subtle `hover:bg-surface-tertiary` highlight
- `TooltipProvider` with `delayDuration={300}` wraps skill items (same
as ModelSelector)
2026-03-31 13:40:26 -04:00
code-qtzl f44a8994da fix(site): improve keyboard navigation in help popovers (#23374) 2026-03-31 11:10:33 -06:00
Yevhenii Shcherbina 84b94a8376 feat: add chatgpt support for aibridge proxy (#23826)
Add ChatGPT support for AIBridgeProxy
2026-03-31 12:54:38 -04:00
Cian Johnston 2a990ce758 feat: show friendly alert for missing agents-access role (#23831)
Replaces the generic red `ErrorAlert` ("Forbidden.") with a proactive
permission check and friendly info alert when a user lacks the
`agents-access` role.

- Add `createChat` permission check to `permissions.json` using
`owner_id: "me"`
- Handle `"me"` owner substitution in `renderPermissions` (SSR path)
- Pass `canCreateChat` from `useAuthenticated().permissions` into
`AgentCreateForm`
- Show `ChatAccessDeniedAlert` and disable input immediately (no need to
trigger a 403 first)
- Also catch 403 errors as a fallback in case permissions aren't yet
loaded
- Add `ForbiddenNoAgentsRole` Storybook story with `play` assertions
- Add `TestRenderPermissionsResolvesMe` Go test to pin the `"me"`
sentinel substitution

<details><summary>Implementation plan & decision log</summary>

- Uses the existing `permissions.json` + `checkAuthorization` system
rather than a separate API call
- `owner_id: "me"` is resolved to the actor's ID by both the auth-check
API endpoint and the SSR `renderPermissions` function
- Go test uses a real `rbac.StrictCachingAuthorizer` (not a mock) so it
verifies both the sentinel substitution and the RBAC role evaluation
end-to-end
- Alert follows the exact same `Alert` pattern as the 409 usage-limit
block
- Uses `severity="info"` and links to the getting-started docs Step 3
- Textarea is disabled proactively so the user never sees the scary
generic error

</details>
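The `"me"` sentinel substitution described above can be sketched as follows. This is an illustrative shape only — the real resolution happens in the auth-check API endpoint and the SSR `renderPermissions` path, and the field names here are assumptions:

```typescript
interface AuthzObject {
  resource_type: string;
  owner_id?: string;
}

// Replaces the "me" owner sentinel with the requesting actor's ID so
// the RBAC check evaluates against the user's own resources.
function resolveMeSentinel(obj: AuthzObject, actorID: string): AuthzObject {
  if (obj.owner_id !== "me") return obj;
  return { ...obj, owner_id: actorID };
}
```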

> 🤖 Created by a Coder Agent and will be reviewed by a human.
2026-03-31 17:26:58 +01:00
Danny Kopping c86f1288f1 chore: update aibridge with latest changes (#23863)
https://github.com/coder/aibridge/compare/519b082ad666...a011104f377d

Includes https://github.com/coder/aibridge/pull/242 and
https://github.com/coder/aibridge/pull/229

Signed-off-by: Danny Kopping <danny@coder.com>
2026-03-31 16:11:50 +00:00
Yevhenii Shcherbina 9440adf435 feat: add chatgpt support for aibridge (#23822)
Registers a new aibridge provider for ChatGPT by reusing the existing
OpenAI provider with a different `Name` and `BaseURL`
(https://chatgpt.com/backend-api/codex). The ChatGPT backend API is
OpenAI-compatible, so no new provider type is needed.
ChatGPT authenticates exclusively via per-user OAuth JWTs (BYOK mode) —
no centralized API key is configured. The OpenAI provider already
handles this: when no key is set, it falls through to the bearer token
from the request's Authorization header.
  
Depends on #23811
2026-03-31 12:08:45 -04:00
Kayla はな 755e8be5ad chore: migrate some emotion styles to tailwind (#23817) 2026-03-31 10:07:33 -06:00
Danielle Maywood c9e335c453 refactor: redesign compaction settings to table layout with batch save (#23844) 2026-03-31 17:06:12 +01:00
Jeremy Ruppel 2d1f35f8a6 feat(site): session timeline design feedback (#23836)
- Remove `border-surface-secondary` from all line art and use the
default border color
- Use `text-sm` for all timeline elements and tables *(except the `Thinking...` mono font, that's `text-xs`)*

---------

Co-authored-by: Jake Howell <jacob@coder.com>
2026-03-31 12:02:19 -04:00
Susana Ferreira b0036af57b feat: register multiple Copilot providers for business and enterprise upstreams (#23811)
## Description

Adds support for multiple Copilot provider instances to route requests to different Copilot upstreams (individual, business, enterprise). Each instance has its own name and base URL, enabling per-upstream metrics, logs, circuit breakers, API dump, and routing.

## Changes

* Add Copilot business and enterprise provider names and host constants
* Register three Copilot provider instances in aibridged (default, business, enterprise)
* Update `defaultAIBridgeProvider` in `aibridgeproxy` to route new Copilot hosts to their corresponding providers

## Related

* Depends on: https://github.com/coder/aibridge/pull/240
* Closes: https://github.com/coder/aibridge/issues/152

Note: documentation changes will be added in a follow-up PR.

_Disclaimer: initially produced by Claude Opus 4.6, heavily modified and reviewed by @ssncferreira ._
2026-03-31 16:00:37 +01:00
Kyle Carberry 2953245862 feat(site): display loaded context files and skills in context indicator tooltip (#23853)
Renders the `last_injected_context` data (AGENTS.md files and skills)
from the Chat API in the `ContextUsageIndicator` hover tooltip. On
hover, users now see:

- **Context files**: basename with full path on title hover, truncation
indicator
- **Skills**: name and optional description

Separated from the existing token usage info by a border divider when
both sections are present. Added `max-w-72` to prevent the tooltip from
getting too wide.

<img width="970" height="598" alt="image"
src="https://github.com/user-attachments/assets/5bc25cb2-1d92-41d2-ab1a-63e5e49f667a"
/>

<details>
<summary>Data flow</summary>

```
chatQuery.data.last_injected_context
  → AgentChatPage (AgentChatPageView prop)
    → AgentChatPageView (ChatPageInput prop)
      → ChatPageInput (spread into latestContextUsage)
        → AgentChatInput (contextUsage prop)
          → ContextUsageIndicator (usage.lastInjectedContext)
```

</details>

<details>
<summary>Files changed</summary>

| File | Change |
|---|---|
| `ContextUsageIndicator.tsx` | Add `lastInjectedContext` to interface, render context files and skills sections in tooltip |
| `ChatPageContent.tsx` | Thread `lastInjectedContext` prop, spread into context usage object |
| `AgentChatPageView.tsx` | Thread `lastInjectedContext` prop to `ChatPageInput` |
| `AgentChatPage.tsx` | Pass `chatQuery.data?.last_injected_context` down |

</details>
2026-03-31 14:43:32 +00:00
Danny Kopping 5d07014f9f chore: update aibridge lib (#23849)
https://github.com/coder/aibridge/pull/230 has been merged, update the
dependency to match.

Includes other changes as well:
https://github.com/coder/aibridge/compare/dd8c239e5566...77d597aa123b
(cc @evgeniy-scherbina, @pawbana)

Signed-off-by: Danny Kopping <danny@coder.com>
2026-03-31 16:11:40 +02:00
Jeremy Ruppel 002e88fefc fix(site): use mock client model in sessions list view (#23851)
Missed this lil guy when adding the `<ClientFilter />` in #23733
2026-03-31 10:00:21 -04:00
Ethan bbf3fbc830 fix(coderd/x/chatd): archive chat hard-interrupts active stream (#23758)
Archiving a chat now transitions pending or running chats to waiting
before setting the archived flag. This publishes a status notification
on `ChatStreamNotifyChannel` so `subscribeChatControl` cancels the
active `processChat` context via `ErrInterrupted` — the same codepath
used by the stop button.

The `processChat` cleanup also skips queued-message auto-promotion when
the chat is archived, so archiving behaves like a hard stop rather than
interrupt-and-continue.

Relates to https://github.com/coder/coder/issues/23666
2026-04-01 00:23:52 +11:00
Danny Kopping 9fa103929a perf: make ListAIBridgeSessions 10x faster (#23774)
_Disclaimer: produced using Claude Opus 4.6, reviewed by me, and
validated against Dogfood dataset._

The `ListAIBridgeSessions` query materialized and aggregated all
matching interceptions before paginating, then ran expensive
token/prompt lookups across the full dataset. For a page of 25 sessions
against ~200k interceptions (our dogfood dataset), this meant:
- Three CTEs scanning all rows (filtered_interceptions, session_tokens, session_root)
- ARRAY_AGG(fi.id) collecting every interception ID per session
- Lateral prompt lookup via ANY(array_of_all_ids) running for every session, not just the page
- ~90MB of disk sorts and JIT compilation kicking in

The improvement is to restructure to paginate first and enrich after: a
single CTE groups interceptions into sessions with only cheap aggregates
(MIN, MAX, COUNT), applies cursor pagination and LIMIT, then lateral
joins fetch metadata, tokens, and prompts for just the ~25-row page.

Measured against 220k interceptions / 160k sessions:

| Metric             | Before | After |
|--------------------|--------|-------|
| Execution time     | 1800ms | 185ms |
| Shared buffer hits | 737k   | 2.6k  |
| Disk sort spill    | 86MB   | 16MB  |
| Lateral loops      | 160k   | 25    |

Verified on Dogfood (https://grafana.dev.coder.com/goto/fbODPGtvR?orgId=1): the results are identical, just _much_ faster.

--- 

Also includes some additional tests which I added prior to refactoring
the query to ensure no regressions on edge-cases.
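The paginate-first/enrich-after restructure is a SQL change, but the shape of it can be illustrated in memory (hypothetical names): group with cheap aggregates, apply the LIMIT, and only then run the expensive per-session work:

```typescript
interface Interception {
  id: string;
  sessionId: string;
  createdAt: number;
}

interface SessionRow {
  sessionId: string;
  firstSeen: number;
  lastSeen: number;
  interceptions: number;
}

// Cheap aggregates (MIN, MAX, COUNT) for every session, then LIMIT,
// then expensive enrichment (tokens, prompts) for just the page —
// ~limit lateral-style lookups instead of one per session.
function listSessions(
  rows: Interception[],
  limit: number,
  enrich: (sessionId: string) => Record<string, unknown>,
): Array<SessionRow & { meta: Record<string, unknown> }> {
  const sessions = new Map<string, SessionRow>();
  for (const r of rows) {
    const s = sessions.get(r.sessionId);
    if (s === undefined) {
      sessions.set(r.sessionId, {
        sessionId: r.sessionId,
        firstSeen: r.createdAt,
        lastSeen: r.createdAt,
        interceptions: 1,
      });
    } else {
      s.firstSeen = Math.min(s.firstSeen, r.createdAt);
      s.lastSeen = Math.max(s.lastSeen, r.createdAt);
      s.interceptions++;
    }
  }
  // Pagination happens BEFORE enrichment.
  const page = [...sessions.values()]
    .sort((a, b) => b.lastSeen - a.lastSeen)
    .slice(0, limit);
  return page.map((s) => ({ ...s, meta: enrich(s.sessionId) }));
}
```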

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2026-03-31 14:42:23 +02:00
Lukasz acd2ff63a7 chore: bump Go toolchain to 1.25.8 (#23772)
Bump the repository Go toolchain from 1.25.7 to 1.25.8.

Updates `go.mod`, the shared `setup-go` action default, and the dogfood
image checksum so local, CI, and dogfood builds stay aligned.
2026-03-31 14:04:58 +02:00
Atif Ali e3e17e15f7 fix(site): show accurate message and warning color for startup script failures in agent row (#23654)
The agent row tooltip showed "Error starting the agent" / "Something
went wrong during the agent startup" with a red border when a startup
script fails. This is misleading — the agent is started and functional,
only the startup script exited with a non-zero code.

Extracts shared message constants (`agentLifecycleMessages`,
`agentStatusMessages`) from `health.ts` so both the workspace-level
health classification and the per-agent-row tooltips reference the same
single source of truth. No more duplicated wording that can drift.

Changes:
- **`health.ts`**: Exports `agentLifecycleMessages` and
`agentStatusMessages` maps; `getAgentHealthIssue` now references them
instead of inline strings.
- **`AgentStatus.tsx`**: All lifecycle/status tooltip components
(`StartErrorLifecycle`, `StartTimeoutLifecycle`,
`ShutdownTimeoutLifecycle`, `ShutdownErrorLifecycle`, `TimeoutStatus`)
now import and render from the shared message constants.
`StartErrorLifecycle` icon changed from red (`errorWarning`) to orange
(`timeoutWarning`).
- **`AgentRow.tsx`**: `start_error` border changed from
`border-border-destructive` (red) to `border-border-warning` (orange).

Closes #23652
Refs #21389

> 🤖 This PR was created with the help of Coder Agents, and has been
reviewed by my human. 🧑‍💻
2026-03-31 12:04:51 +00:00
Michael Suchacz af678606fc fix(coderd/x/chatd): stabilize flaky request-count assertion in round-trip test (#23843)
The flaky test assumed the second streamed OpenAI request had already
been captured when the chat status event arrived. In practice, the
capture server can record that second request slightly later, which
intermittently left `streamRequestCount` at `1`.

This change waits for the second captured request before asserting on
the follow-up payload and relaxes the count check to a sanity check. The
test still verifies the `store=false` round-trip behavior without
depending on that timing race.

Fixes coder/internal#1433
2026-03-31 13:09:11 +02:00
Cian Johnston 3190406de3 fix(site): stop workspace deletes playing hide-and-seek (#23641)
- Fix workspaces list invalidation after kebab-menu delete and add
Storybook coverage for the immediate `Deleting` state.

> 🤖 This PR was made by Coder Agents and read by me.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-31 11:36:47 +01:00
Cian Johnston 3ce82bb885 feat: add chat-access site-wide role to gate chat creation (#23724)
- Add `chat-access` built-in role granting chat CRUD at User scope
- Exclude `ResourceChat` from member, org member, and org service
account `allPermsExcept` calls
- Allow system, owner, and user-admin to assign the new role
- Migration auto-assigns role to users who have ever created a chat
- Update RBAC test matrix: `memberMe` denied, `chatAccessUser` allowed

**Breaking change**: Members without `chat-access` lose chat creation
ability. Migration covers existing chat creators. Members who have never
created a chat do not get this role automatically applied.

> 🤖 This PR was created by a Coder Agent and reviewed by me.
2026-03-31 10:07:21 +01:00
Ethan 348a3bd693 fix(site): show archived filter on agents page when chat list is empty (#23793)
While running scaletests I noticed the archived filter button on the
agents sidebar would disappear when the current filter had zero results.
This made it impossible to switch between active and archived views once
one side was empty.

The filter dropdown was only rendered inside the Pinned or first
time-group section header. When `visibleRootIDs` was empty, neither
header existed, so the filter had nowhere to attach.

This keeps the original dropdown placement on section headers when chats
exist. When the list is empty, the empty-state box itself now provides a
"View archived →" or "← Back to active" link so users can always switch
filters without needing the dropdown.

<img width="322" height="191" alt="image"
src="https://github.com/user-attachments/assets/7fd9ca09-5f72-4796-a925-7fab570fdff5"
/>

<img width="320" height="184" alt="image"
src="https://github.com/user-attachments/assets/2f856088-c2dc-4e34-9ece-84144a1adf79"
/>

Both archived & unarchived look the same when there's at least one
agent:

<img width="322" height="194" alt="image"
src="https://github.com/user-attachments/assets/42c4d54b-e500-45b1-b045-c126144c35bd"
/>
2026-03-31 12:57:46 +11:00
Jeremy Ruppel 75f1503b41 feat(site): various Session Timeline fixes (#23791)
- Use Tooltip instead of Popover for AI Gov tooltip
- Fix Agentic Loop tool call summing
- Collapse all expandable sections by default
- Add solid background to "Show More" button
- Remove "Sort by" dropdown for v1
2026-03-30 19:25:03 -04:00
Danielle Maywood c33cd19a05 fix(site/scripts): guard check-compiler main block from test imports (#23825) 2026-03-30 22:11:15 +01:00
Danielle Maywood adcea865c7 fix(site): improve check-compiler.mjs quality and fix bugs (#23812) 2026-03-30 20:41:41 +00:00
Matt Vollmer 5e3bccd96c docs: fix tool tables and model option errors in agent docs (#23821)
Fixes factual errors found during a review of all pages under
`/docs/ai-coder/agents/`.

## Tool tables (`index.md`, `architecture.md`)

Both pages had incomplete tool tables. Added:

- `process_output`, `process_list`, `process_signal` — core workspace
tools always registered alongside `execute`, missing from both pages
- `propose_plan` — platform tool (root chats only), missing from both
pages
- `spawn_computer_use_agent` — orchestration tool (conditional), missing
from architecture.md

Also fixed the architecture.md claim that the agent is "restricted to
the tool set defined in this section" — it now mentions skills and MCP
tools with links to the relevant pages.

## Model options (`models.md`)

- **OpenAI / OpenRouter Reasoning Effort**: docs listed `low`, `medium`,
`high` — code has `none`, `minimal`, `low`, `medium`, `high`, `xhigh`.
Fixed both.
- **Removed hidden fields** that never appear in the admin UI:
  - Google: Safety Settings (`hidden:"true"`)
- OpenRouter: Provider Order, Allow Fallbacks (parent struct
`hidden:"true"`)
  - Vercel: Provider Options (`hidden:"true"`)

---

*PR generated with Coder Agents*
2026-03-30 16:24:45 -04:00
Mathias Fredriksson 3950947c58 fix(site): prevent scroll handler from killing autoScroll during pin convergence (#23818)
WebKit internally adjusts scrollTop during layout when content
above the viewport changes height, even with overflow-anchor:none.
These phantom adjustments fire scroll events where isNearBottom
returns false. The scroll handler was setting autoScrollRef =
nearBottom on every such event, permanently killing follow mode.

The scroll handler now only enables follow mode, never disables
it. When follow mode is active, the user is not wheel/touch
scrolling, and isNearBottom is false, this indicates a
browser-initiated scroll adjustment. Re-pin immediately and
set the restore guard so the pin's own scroll event is suppressed.

Disabling follow mode is exclusive to user-interaction handlers
(wheel, touch, scrollbar pointerdown) via handleUserInterrupt.

Guard-clear callbacks also check isNearBottom before dropping
the restoration flag, re-pinning if content grew between the
pin and the clear.
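The invariant described above — the scroll handler may only enable follow mode, and only user-interaction handlers may disable it — can be sketched like this (hypothetical names; the real code lives in refs inside a React component):

```typescript
interface FollowState {
  autoScroll: boolean;
  userInteracting: boolean; // wheel/touch/scrollbar drag in flight
}

// Scroll handler: re-arms follow mode near the bottom; never drops it.
function handleScroll(state: FollowState, nearBottom: boolean, repin: () => void): void {
  if (nearBottom) {
    state.autoScroll = true; // reaching the bottom re-enables follow mode
    return;
  }
  if (state.autoScroll && !state.userInteracting) {
    // Follow mode on, no user input, yet not near bottom: this is a
    // browser-initiated adjustment (e.g. WebKit layout). Re-pin rather
    // than killing follow mode.
    repin();
  }
}

// Only explicit user interaction (wheel, touch, scrollbar pointerdown)
// is allowed to disable follow mode.
function handleUserInterrupt(state: FollowState, nearBottom: boolean): void {
  if (!nearBottom) state.autoScroll = false;
}
```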
2026-03-30 22:45:58 +03:00
Kyle Carberry b3d5b8d13c fix: stabilize flaky chatd subscribe/promote queued tests (#23816)
## Summary

Fixes three flaky chatd tests that intermittently fail due to timing
races with the background run loop.

Closes coder/internal#1428

## Root Cause

`CreateChat` and `PromoteQueued` call `signalWake()` which writes to
`wakeCh`, triggering `processOnce` immediately. Even though
`newTestServer` sets `PendingChatAcquireInterval: testutil.WaitLong` to
prevent ticker-based polling, the wake channel bypasses this. This
causes `processOnce` to acquire and process the chat concurrently with
the test's manual DB updates and assertions.

### Failing tests

| Test | Failure | Cause |
|------|---------|-------|
| `TestPromoteQueuedAllowsAlreadyQueuedMessageWhenUsageLimitReached` | `expected: "pending", actual: "running"` | Wake from `CreateChat` races with manual `UpdateChatStatus`; wake from `PromoteQueued` acquires the chat before the status assertion |
| `TestSendMessageInterruptBehaviorQueuesAndInterruptsWhenBusy` | `should have 1 item(s), but has 2` | Wake from `CreateChat` triggers `processChat` which auto-promotes a queued message, adding an extra row to `chat_messages` |
| `TestSubscribeNoPubsubNoDuplicateMessageParts` | `Condition satisfied` (duplicate events) | Pre-existing `WaitGroup.Add/Wait` race in the `Eventually` + `WaitUntilIdleForTest` pattern |
## Fix

Introduces a `waitForChatProcessed` helper that:
1. Polls until the chat reaches a **terminal state** (not pending AND
not running)
2. Then calls `WaitUntilIdleForTest` to wait for the inflight
`WaitGroup`

Waiting for a terminal state (not just "not pending") avoids a
`sync.WaitGroup` `Add/Wait` race: `AcquireChats` updates the DB status
to `running` **before** `processOnce` calls `inflight.Add(1)`. Checking
only `status != pending` could return while `Add(1)` hasn't happened
yet, causing `Wait()` to return prematurely.
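The wait-for-terminal-then-wait-for-idle shape can be sketched as follows. The real helper is Go and waits on a `sync.WaitGroup`; this TypeScript version is a hedged analog with hypothetical names:

```typescript
type ChatStatus = "pending" | "running" | "complete" | "failed";

// Polls until the chat reaches a terminal state, then waits for
// in-flight work to drain.
async function waitForChatProcessed(
  getStatus: () => Promise<ChatStatus>,
  waitUntilIdle: () => Promise<void>,
  timeoutMs = 5_000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const status = await getStatus();
    // Terminal = not pending AND not running. Checking only "not
    // pending" could return in the window between the DB flipping to
    // "running" and the in-flight counter being incremented, letting
    // the idle wait return prematurely.
    if (status !== "pending" && status !== "running") break;
    if (Date.now() > deadline) throw new Error(`chat not processed in time; status=${status}`);
    await new Promise((resolve) => setTimeout(resolve, 25));
  }
  await waitUntilIdle(); // in-flight work has been registered by now
}
```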

### Per-test changes

- **`TestSendMessageInterruptBehaviorQueuesAndInterruptsWhenBusy`**: Call `waitForChatProcessed` after `CreateChat` before manually setting running status
- **`TestPromoteQueuedAllowsAlreadyQueuedMessageWhenUsageLimitReached`**: Call `waitForChatProcessed` after `CreateChat`; remove the inherently racy `status == pending` assertion after `PromoteQueued` (the wake immediately acquires the chat). Key assertions on the promoted message, queue state, and message count remain.
- **`TestSubscribeNoPubsubNoDuplicateMessageParts`**: Replace the inline `Eventually` with the safer `waitForChatProcessed` helper
## Verification

All three tests pass 150 consecutive executions with `-race -count=10`
across 15 runs (0 failures).
2026-03-30 18:23:47 +00:00
blinkagent[bot] a00afe4b5a chore(site): update proxy menu dialog text (#23765)
Updates the descriptive text in the proxy selection dropdown menu to be
clearer and more concise.

**Before:**
> Workspace proxies improve terminal and web app connections to
workspaces. This does not apply to CLI connections. A region must be
manually selected, otherwise the default primary region will be used.

**After:**
> Workspace proxies improve terminal and web app connections. CLI
connections are unaffected. If no region is selected, the primary region
will be used.

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-03-30 11:20:05 -07:00
Kyle Carberry a5cc579453 feat: add last_injected_context column to chats table (#23798)
Adds a nullable JSONB column `last_injected_context` to the `chats`
table that stores the most recently persisted injected context parts
(AGENTS.md context-file and skill message parts). The column is updated
only when `persistInstructionFiles()` runs — on first workspace attach
or when the agent changes — so there are no redundant writes on
subsequent turns.

Internal fields (`ContextFileContent`, `ContextFileOS`,
`ContextFileDirectory`, `SkillDir`) are stripped at write time so the
column only holds small metadata. No stripping needed on the read path.

<details>
<summary>Implementation notes</summary>

- New migration `000456` adds nullable `last_injected_context JSONB`
column.
- New SQL query `UpdateChatLastInjectedContext` writes the column
without touching `updated_at`.
- `persistInstructionFiles()` strips internal fields from parts via
`StripInternal()` before persisting.
- Sentinel path (no AGENTS.md) persists skill-only parts when skills
exist.
- `codersdk.Chat` exposes `LastInjectedContext []ChatMessagePart`
(omitempty).
- `db2sdk.Chat()` passes through the already-clean data.

</details>
2026-03-30 14:11:30 -04:00
Spike Curtis ef3aade647 chore: support agent updates in tunneler (#23730)

relates to GRU-18

Adds support for agent updates to the Tunneler
2026-03-30 13:50:06 -04:00
dependabot[bot] 3cc31de57a chore: bump github.com/go-git/go-git/v5 from 5.17.0 to 5.17.1 (#23813)
Bumps [github.com/go-git/go-git/v5](https://github.com/go-git/go-git)
from 5.17.0 to 5.17.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/go-git/go-git/releases">github.com/go-git/go-git/v5's
releases</a>.</em></p>
<blockquote>
<h2>v5.17.1</h2>
<h2>What's Changed</h2>
<ul>
<li>build: Update module github.com/cloudflare/circl to v1.6.3
[SECURITY] (releases/v5.x) by <a
href="https://github.com/go-git-renovate"><code>@​go-git-renovate</code></a>[bot]
in <a
href="https://redirect.github.com/go-git/go-git/pull/1930">go-git/go-git#1930</a></li>
<li>[v5] plumbing: format/index, Improve v4 entry name validation by <a
href="https://github.com/pjbgf"><code>@​pjbgf</code></a> in <a
href="https://redirect.github.com/go-git/go-git/pull/1935">go-git/go-git#1935</a></li>
<li>[v5] plumbing: format/idxfile, Fix version and fanout checks by <a
href="https://github.com/pjbgf"><code>@​pjbgf</code></a> in <a
href="https://redirect.github.com/go-git/go-git/pull/1937">go-git/go-git#1937</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/go-git/go-git/compare/v5.17.0...v5.17.1">https://github.com/go-git/go-git/compare/v5.17.0...v5.17.1</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/go-git/go-git/commit/5e23dfd02db92644dc4a3358ceb297fce875b772"><code>5e23dfd</code></a>
Merge pull request <a
href="https://redirect.github.com/go-git/go-git/issues/1937">#1937</a>
from pjbgf/idx-v5</li>
<li><a
href="https://github.com/go-git/go-git/commit/6b38a326816b80f64c20cc0e6113958b65c05a1c"><code>6b38a32</code></a>
Merge pull request <a
href="https://redirect.github.com/go-git/go-git/issues/1935">#1935</a>
from pjbgf/index-v5</li>
<li><a
href="https://github.com/go-git/go-git/commit/cd757fcb856a2dcc5fff6c110320a8ff62e99513"><code>cd757fc</code></a>
plumbing: format/idxfile, Fix version and fanout checks</li>
<li><a
href="https://github.com/go-git/go-git/commit/3ec0d70cb687ae1da5f4d18faa4229bd971a8710"><code>3ec0d70</code></a>
plumbing: format/index, Fix tree extension invalidated entry
parsing</li>
<li><a
href="https://github.com/go-git/go-git/commit/dbe10b6b425a2a4ea92a9d98e20cd68e15aede01"><code>dbe10b6</code></a>
plumbing: format/index, Align V2/V3 long name and V4 prefix encoding
with Git</li>
<li><a
href="https://github.com/go-git/go-git/commit/e9b65df44cb97faeba148b47523a362beaecddf9"><code>e9b65df</code></a>
plumbing: format/index, Improve v4 entry name validation</li>
<li><a
href="https://github.com/go-git/go-git/commit/adad18daabddee04c5a889f0230035e74bca32c0"><code>adad18d</code></a>
Merge pull request <a
href="https://redirect.github.com/go-git/go-git/issues/1930">#1930</a>
from go-git/renovate/releases/v5.x-go-github.com-clo...</li>
<li><a
href="https://github.com/go-git/go-git/commit/29470bd1d862c6e902996b8e8ff8eb7a0515a9be"><code>29470bd</code></a>
build: Update module github.com/cloudflare/circl to v1.6.3
[SECURITY]</li>
<li>See full diff in <a
href="https://github.com/go-git/go-git/compare/v5.17.0...v5.17.1">compare
view</a></li>
</ul>
</details>


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 17:27:38 +00:00
Mathias Fredriksson d2c308e481 fix(site/src/pages/AgentsPage): unify scroll restore-guard lifecycle in ScrollAnchoredContainer (#23809)
Two ResizeObserver effects (content and container) each had their own
local restoreGuardRafId but both wrote to the shared
isRestoringScrollRef. Either observer's guard-clear RAF could fire
while the other's pin chain was in-flight, leaving isRestoringScrollRef
prematurely false.

scrollTranscriptToBottom also set isRestoringScrollRef without
cancelling any pending guard-clear, so a stale clear could drop
the flag mid-smooth-scroll animation.

Promote restoreGuardRafId to a single shared ref so all write
paths coordinate through one cancellation point.
2026-03-30 20:17:44 +03:00
Kyle Carberry 953c3bdc0f fix(site): prevent spurious startup warning during pending status (#23805)
## Problem

The `/agents` page frequently shows "Response startup is taking longer
than expected" even while the agent is actively working and messages are
appearing in the transcript.

## Root Cause

There's an inconsistency between `isActiveChatStatus` and
`shouldApplyMessagePart` during `"pending"` status (the state between
agent tool-call turns):

| Component | Treats `"pending"` as... |
|---|---|
| `isActiveChatStatus` | **active** — includes both `"running"` and
`"pending"` |
| `shouldApplyMessagePart` | **inactive** — drops all `message_part`
events during `"pending"` |
| Status handler | clears `streamState` to `null` on `"pending"` |

This creates a dead state during multi-turn tool-call cycles:

1. Agent finishes a turn → status = `"pending"` → `streamState` cleared
to `null`
2. `selectIsAwaitingFirstStreamChunk` returns `true` (status is
"active", stream is null, latest message isn't assistant)
3. Phase = `"starting"` → 15s timer starts
4. Stream parts from the server are **silently dropped**
(`shouldApplyMessagePart()` returns `false` for `"pending"`)
5. `streamState` stays `null` — phase is stuck at `"starting"`
6. Meanwhile, durable messages (tool calls, tool results) appear
normally in the transcript
7. After 15s → "Response startup is taking longer than expected" fires

## Fix

Narrow `selectIsAwaitingFirstStreamChunk` to only check `chatStatus ===
"running"` instead of `isActiveChatStatus(chatStatus)`. `"running"` is
the only status where the transport actually accepts stream parts, so
it's the only status where we should be showing the "starting"
indicator.

`isActiveChatStatus` is left unchanged since its other caller
(`shouldSurfaceReconnectState`) correctly needs to include `"pending"`.
2026-03-30 12:46:51 -04:00
Matt Vollmer ca879ffae6 docs: add extending-agents, mcp-servers, and usage-insights pages (#23810)
Adds three new documentation pages for major shipped features that had
no docs, and updates the platform controls index to reflect current
state.

## New pages

### Extending Agents (`extending-agents.md`)

Covers two workspace-level extension mechanisms:
- **Skills** — `.agents/skills/<name>/SKILL.md` directory structure,
frontmatter format, auto-discovery, `read_skill`/`read_skill_file`
tools, size limits, lazy loading
- **Workspace MCP tools** — `.mcp.json` format, stdio and HTTP
transports, tool name prefixing, discovery lifecycle and caching

### MCP Servers (`platform-controls/mcp-servers.md`)

Admin MCP server configuration:
- CRUD via **Agents** > **Settings** > **MCP Servers**
- Four auth modes: none, OAuth2 (with auto-discovery), API key, custom
headers
- Availability policies: `force_on`, `default_on`, `default_off`
- Tool governance via allow/deny lists
- Permission model and secret redaction

### Usage & Insights (`platform-controls/usage-insights.md`)

Three admin dashboards:
- **Usage limits** — spend caps with per-user and per-group overrides,
priority hierarchy, enforcement behavior
- **Cost tracking** — per-user rollup with token breakdowns, date
filtering, per-model and per-chat drill-down

## Updated files

- **`platform-controls/index.md`** — Moved MCP servers, usage limits,
and analytics from "Where we are headed" into "What platform teams
control today" with links to the new pages. Removed the tool
customization roadmap section (now covered by MCP servers page).
- **`manifest.json`** — Added nav entries for all three new pages.

## Resulting nav hierarchy

```
Coder Agents
├── Getting Started
├── Early Access
├── Architecture
├── Models
├── Platform Controls
│   ├── Template Optimization
│   ├── MCP Servers              ← NEW
│   └── Usage & Insights         ← NEW
├── Extending Agents             ← NEW
└── Chats API
```

---

*PR generated with Coder Agents*
2026-03-30 12:46:34 -04:00
Cian Johnston 0880a4685b ci: fix pnpm not found in check-docs job (#23807)
- Enable corepack before the linkspector step so `pnpm` shim is in PATH
- `action-linkspector@v1.4.1` internally calls `actions/setup-node@v5`,
which now defaults `package-manager-cache: true` — it detects
`pnpm-lock.yaml` and tries to resolve the `pnpm` binary, but it's not
installed on the runner
- Add TODO to remove the workaround when upstream is fixed

Upstream: https://github.com/UmbrellaDocs/action-linkspector/issues/54

> 🤖 Cian asked a Coder Agent to make this PR and then reviewed the
change.
2026-03-30 21:28:51 +05:00
Danielle Maywood 3f8e3007d8 fix(site): write WebSocket messages to React Query cache (#23618) 2026-03-30 15:56:08 +01:00
Matt Vollmer 8e57498a87 docs: update Chats API and platform controls docs to match current state (#23803)
The Chats API docs and platform controls docs had fallen behind the
implementation. This brings them up to date.

## Chats API docs (`chats-api.md`)

### Breaking: archive/unarchive endpoints removed

The old `POST /{chat}/archive` and `POST /{chat}/unarchive` endpoints no
longer exist. Replaced with the `PATCH /{chat}` update endpoint
(`{"archived": true/false}`).

### Chat object updated

Added all new fields to the example response and a new reference table:
- `build_id`, `agent_id` — workspace agent binding
- `parent_chat_id`, `root_chat_id` — delegated/child chat lineage
- `pin_order` — pinned chats
- `labels` — general-purpose key-value labels
- `mcp_server_ids` — MCP server bindings
- `has_unread` — read/unread tracking
- `diff_status` — PR/diff metadata

### New endpoints documented

- `PATCH /{chat}` — update chat (title, archived, pin_order, labels)
- `PATCH /{chat}/messages/{message}` — edit a user message
- `GET /watch` — watch all chats via WebSocket
- `POST /{chat}/title/regenerate` — regenerate title
- `GET /{chat}/diff` — get diff/PR status
- `DELETE /{chat}/queue/{id}` / `POST /{chat}/queue/{id}/promote` —
queue management

### Updated existing endpoint docs

- Create chat: added `mcp_server_ids` and `labels` fields
- Send message: added `mcp_server_ids` field
- List chats: added `q` and `label` query parameters
- Stream: noted read cursor behavior on connect/disconnect

## Platform controls docs

### Template allowlist (`platform-controls/index.md`)

- Updated the "Template routing" section to document the template
allowlist setting (**Agents** > **Settings** > **Templates**)
- Removed the "Template scoping for agents" bullet from "Where we are
headed" since it shipped

### Template optimization (`template-optimization.md`)

- Added "Restrict available templates" section documenting the allowlist
UI, behavior, and scope (agents only, not manual workspace creation)

---

*PR generated with Coder Agents*
2026-03-30 10:28:15 -04:00
Susana Ferreira 0fb3e5cba5 feat: extract, log, and strip aibridgeproxy request ID header in aibridged (#23731)
## Problem

`aibridgeproxyd` sends `X-AI-Bridge-Request-Id` on every MITM request to
`aibridged` for cross-service log correlation, but aibridged never reads
it. The header is silently forwarded to upstream LLM providers.

## Changes

* Renamed the header to `X-Coder-AI-Governance-Request-Id` to match the
existing `X-Coder-AI-Governance-*` convention.
* `aibridged` now extracts the header, logs it, and strips it before
forwarding upstream.
* Added `TestServeHTTP_StripInternalHeaders` to verify no `X-Coder-*`
headers leak upstream.
2026-03-30 15:21:30 +01:00
Mathias Fredriksson 7fb93dbf0e build: lock provider version in provisioner/terraform/testdata (#23776)
The terraform testdata fixtures silently drift when the coder provider
releases a new version. The .terraform.lock.hcl files are gitignored,
.tf files use loose constraints (>= 2.0.0), and generate.sh always
runs terraform init -upgrade. The Makefile only re-runs generate.sh
when the terraform CLI version changes, not the provider version.

Track a canonical lockfile and provider-version.txt in git. Change
generate.sh to respect the lockfile by default (terraform init without
-upgrade). Add --upgrade flag for intentional provider bumps, --check
for cheap staleness detection in the Makefile, and a new
update-terraform-testdata make target.
2026-03-30 16:37:25 +03:00
Michael Suchacz cf500b95b9 chore: move docker-chat-sandbox under templates/x (#23777)
Adds the experimental `docker-chat-sandbox` example template under
`examples/templates/x/`. It provisions a regular dev agent plus a
chat-designated agent that runs inside bubblewrap with a read-only root,
writable `/home/coder`, and outbound TCP restricted to the Coder
control-plane endpoint via `iptables`.

The chat agent still appears in dashboard and API responses, but the
template reserves it for chatd-managed sessions rather than normal user
interaction. `lint/examples` now walks nested template directories, so
experimental templates can live under `examples/templates/x/` without
treating `x/` itself as a template.
2026-03-30 15:17:55 +02:00
Danielle Maywood 6a2f389110 refactor(site/src/pages/AgentsPage): use createReconnectingWebSocket in git and workspace watchers (#23736) 2026-03-30 14:05:05 +01:00
Danielle Maywood 027f93c913 fix(site): make settings and analytics headers scrollable in Safari PWA (#23742) 2026-03-30 13:55:35 +01:00
Hugo Dutka 509e89d5c4 feat(site): refactor the wait for computer use subagent card (#23780)
Right now, when an agent is waiting for the computer use subagent, it
shows a VNC preview of the desktop that spans the full width of the
chat. It also displays a standard "waiting for <subagent name>" header
above it. See https://github.com/coder/coder/pull/23684 for a recording.

This PR refactors that preview to be smaller and changes the header to a
shimmering "Using the computer" label.


https://github.com/user-attachments/assets/0db5b4dc-6899-419b-bf7f-eb0de05722f1
2026-03-30 14:51:14 +02:00
Mathias Fredriksson 378f11d6dc fix(site/src/pages/AgentsPage): fix scroll-to-bottom pin starvation in agents chat (#23778)
scheduleBottomPin() cancelled any in-flight pin and restarted the
double-RAF chain on every ResizeObserver notification. When content
height changes on consecutive frames (e.g. during streaming where
SmoothText reveals characters each frame and markdown re-rendering
occasionally changes block height), the inner RAF that actually sets
scrollTop is perpetually cancelled before it fires. The scroll falls
behind the growing content.

Two fixes:

1. Make scheduleBottomPin() idempotent: if a pin is already in-flight,
   skip. The inner RAF reads scrollHeight at execution time so it
   always targets the latest bottom. User-interrupt paths (wheel,
   touch) still cancel via cancelPendingPins().

2. Add overscroll-behavior:contain to the scroll container. Prevents
   elastic overscroll from generating extra scroll events that could
   flip autoScrollRef to false.
2026-03-30 12:36:42 +00:00
Mathias Fredriksson f2845f6622 feat(site): humanize process_signal and show killed on processes (#23590)
Replace the raw JSON dump for process_signal with the standard
ToolCollapsible + ToolIcon + ToolLabel pipeline, matching process_list
and other generic tools. A thin ProcessSignalRenderer promotes soft
failures (success=false, isError=false) so the generic renderer shows
the error indicator. ToolLabel distinguishes running, success, and
failure states. TerminalIcon used for consistency with other process
tools.

When a process is killed via process_signal, the execute and
process_output blocks show a red OctagonX icon with signal details
on hover. The killedBySignal field is set on MergedTool during the
existing cross-message parsing pass; no new abstractions are added.

Stories for process_signal (10) and killed indicators (8). Unit
tests for the cross-tool annotation logic (3). Humanized labels
and TerminalIcon for process_list.
2026-03-30 15:35:39 +03:00
Jeremy Ruppel 076e97aa66 feat(site): add client filter to AI Bridge Session table (#23733) 2026-03-30 08:15:45 -04:00
dependabot[bot] 2875053b83 ci: bump the github-actions group with 4 updates (#23789)
Bumps the github-actions group with 4 updates:
[actions/cache](https://github.com/actions/cache),
[fluxcd/flux2](https://github.com/fluxcd/flux2),
[Mattraks/delete-workflow-runs](https://github.com/mattraks/delete-workflow-runs)
and
[umbrelladocs/action-linkspector](https://github.com/umbrelladocs/action-linkspector).

Updates `actions/cache` from 5.0.3 to 5.0.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.4</h2>
<h2>What's Changed</h2>
<ul>
<li>Add release instructions and update maintainer docs by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1696">actions/cache#1696</a></li>
<li>Potential fix for code scanning alert no. 52: Workflow does not
contain permissions by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1697">actions/cache#1697</a></li>
<li>Fix workflow permissions and cleanup workflow names / formatting by
<a href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1699">actions/cache#1699</a></li>
<li>docs: Update examples to use the latest version by <a
href="https://github.com/XZTDean"><code>@​XZTDean</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1690">actions/cache#1690</a></li>
<li>Fix proxy integration tests by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1701">actions/cache#1701</a></li>
<li>Fix cache key in examples.md for bun.lock by <a
href="https://github.com/RyPeck"><code>@​RyPeck</code></a> in <a
href="https://redirect.github.com/actions/cache/pull/1722">actions/cache#1722</a></li>
<li>Update dependencies &amp; patch security vulnerabilities by <a
href="https://github.com/Link"><code>@​Link</code></a>- in <a
href="https://redirect.github.com/actions/cache/pull/1738">actions/cache#1738</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/XZTDean"><code>@​XZTDean</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1690">actions/cache#1690</a></li>
<li><a href="https://github.com/RyPeck"><code>@​RyPeck</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/cache/pull/1722">actions/cache#1722</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/cache/compare/v5...v5.0.4">https://github.com/actions/cache/compare/v5...v5.0.4</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h2>How to prepare a release</h2>
<blockquote>
<p>[!NOTE]<br />
Relevant for maintainers with write access only.</p>
</blockquote>
<ol>
<li>Switch to a new branch from <code>main</code>.</li>
<li>Run <code>npm test</code> to ensure all tests are passing.</li>
<li>Update the version in <a
href="https://github.com/actions/cache/blob/main/package.json"><code>https://github.com/actions/cache/blob/main/package.json</code></a>.</li>
<li>Run <code>npm run build</code> to update the compiled files.</li>
<li>Update this <a
href="https://github.com/actions/cache/blob/main/RELEASES.md"><code>https://github.com/actions/cache/blob/main/RELEASES.md</code></a>
with the new version and changes in the <code>## Changelog</code>
section.</li>
<li>Run <code>licensed cache</code> to update the license report.</li>
<li>Run <code>licensed status</code> and resolve any warnings by
updating the <a
href="https://github.com/actions/cache/blob/main/.licensed.yml"><code>https://github.com/actions/cache/blob/main/.licensed.yml</code></a>
file with the exceptions.</li>
<li>Commit your changes and push your branch upstream.</li>
<li>Open a pull request against <code>main</code> and get it reviewed
and merged.</li>
<li>Draft a new release <a
href="https://github.com/actions/cache/releases">https://github.com/actions/cache/releases</a>
use the same version number used in <code>package.json</code>
<ol>
<li>Create a new tag with the version number.</li>
<li>Auto generate release notes and update them to match the changes you
made in <code>RELEASES.md</code>.</li>
<li>Toggle the set as the latest release option.</li>
<li>Publish the release.</li>
</ol>
</li>
<li>Navigate to <a
href="https://github.com/actions/cache/actions/workflows/release-new-action-version.yml">https://github.com/actions/cache/actions/workflows/release-new-action-version.yml</a>
<ol>
<li>There should be a workflow run queued with the same version
number.</li>
<li>Approve the run to publish the new version and update the major tags
for this action.</li>
</ol>
</li>
</ol>
<h2>Changelog</h2>
<h3>5.0.4</h3>
<ul>
<li>Bump <code>minimatch</code> to v3.1.5 (fixes ReDoS via globstar
patterns)</li>
<li>Bump <code>undici</code> to v6.24.1 (WebSocket decompression bomb
protection, header validation fixes)</li>
<li>Bump <code>fast-xml-parser</code> to v5.5.6</li>
</ul>
<h3>5.0.3</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.5 (Resolves: <a
href="https://github.com/actions/cache/security/dependabot/33">https://github.com/actions/cache/security/dependabot/33</a>)</li>
<li>Bump <code>@actions/core</code> to v2.0.3</li>
</ul>
<h3>5.0.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.3 <a
href="https://redirect.github.com/actions/cache/pull/1692">#1692</a></li>
</ul>
<h3>5.0.1</h3>
<ul>
<li>Update <code>@azure/storage-blob</code> to <code>^12.29.1</code> via
<code>@actions/cache@5.0.1</code> <a
href="https://redirect.github.com/actions/cache/pull/1685">#1685</a></li>
</ul>
<h3>5.0.0</h3>
<blockquote>
<p>[!IMPORTANT]
<code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of <code>2.327.1</code>.</p>
</blockquote>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/actions/cache/commit/668228422ae6a00e4ad889ee87cd7109ec5666a7"><code>6682284</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1738">#1738</a>
from actions/prepare-v5.0.4</li>
<li><a
href="https://github.com/actions/cache/commit/e34039626f957d3e3e50843d15c1b20547fc90e2"><code>e340396</code></a>
Update RELEASES</li>
<li><a
href="https://github.com/actions/cache/commit/8a671105293e81530f1af99863cdf94550aba1a6"><code>8a67110</code></a>
Add licenses</li>
<li><a
href="https://github.com/actions/cache/commit/1865903e1b0cb750dda9bc5c58be03424cc62830"><code>1865903</code></a>
Update dependencies &amp; patch security vulnerabilities</li>
<li><a
href="https://github.com/actions/cache/commit/565629816435f6c0b50676926c9b05c254113c0c"><code>5656298</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1722">#1722</a>
from RyPeck/patch-1</li>
<li><a
href="https://github.com/actions/cache/commit/4e380d19e192ace8e86f23f32ca6fdec98a673c6"><code>4e380d1</code></a>
Fix cache key in examples.md for bun.lock</li>
<li><a
href="https://github.com/actions/cache/commit/b7e8d49f17405cc70c1c120101943203c98d3a4b"><code>b7e8d49</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1701">#1701</a>
from actions/Link-/fix-proxy-integration-tests</li>
<li><a
href="https://github.com/actions/cache/commit/984a21b1cb176a0936f4edafb42be88978f93ef1"><code>984a21b</code></a>
Add traffic sanity check step</li>
<li><a
href="https://github.com/actions/cache/commit/acf2f1f76affe1ef80eee8e56dfddd3b3e5f0fba"><code>acf2f1f</code></a>
Fix resolution</li>
<li><a
href="https://github.com/actions/cache/commit/95a07c51324af6001b4d6ab8dff29f4dfadc2531"><code>95a07c5</code></a>
Add wait for proxy</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/cache/compare/cdf6c1fa76f9f475f3d7449005a359c84ca0f306...668228422ae6a00e4ad889ee87cd7109ec5666a7">compare
view</a></li>
</ul>
</details>
<br />

Updates `fluxcd/flux2` from 2.7.5 to 2.8.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/fluxcd/flux2/releases">fluxcd/flux2's
releases</a>.</em></p>
<blockquote>
<h2>v2.8.3</h2>
<h2>Highlights</h2>
<p>Flux v2.8.3 is a patch release that fixes a regression in
helm-controller. Users are encouraged to upgrade for the best
experience.</p>
<p>ℹ️ Please follow the <a
href="https://github.com/fluxcd/flux2/discussions/5572">Upgrade
Procedure for Flux v2.7+</a> for a smooth upgrade from Flux v2.6 to the
latest version.</p>
<p>Fixes:</p>
<ul>
<li>Fix templating errors for charts that include <code>---</code> in
the content, e.g. YAML separators, embedded scripts, CAs inside
ConfigMaps (helm-controller)</li>
</ul>
<h2>Components changelog</h2>
<ul>
<li>helm-controller <a
href="https://github.com/fluxcd/helm-controller/blob/v1.5.3/CHANGELOG.md">v1.5.3</a></li>
</ul>
<h2>CLI changelog</h2>
<ul>
<li>[release/v2.8.x] Add target branch name to update branch by <a
href="https://github.com/fluxcdbot"><code>@​fluxcdbot</code></a> in <a
href="https://redirect.github.com/fluxcd/flux2/pull/5774">fluxcd/flux2#5774</a></li>
<li>Update toolkit components by <a
href="https://github.com/fluxcdbot"><code>@​fluxcdbot</code></a> in <a
href="https://redirect.github.com/fluxcd/flux2/pull/5779">fluxcd/flux2#5779</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/fluxcd/flux2/compare/v2.8.2...v2.8.3">https://github.com/fluxcd/flux2/compare/v2.8.2...v2.8.3</a></p>
<h2>v2.8.2</h2>
<h2>Highlights</h2>
<p>Flux v2.8.2 is a patch release that comes with various fixes. Users
are encouraged to upgrade for the best experience.</p>
<p>ℹ️ Please follow the <a
href="https://github.com/fluxcd/flux2/discussions/5572">Upgrade
Procedure for Flux v2.7+</a> for a smooth upgrade from Flux v2.6 to the
latest version.</p>
<p>Fixes:</p>
<ul>
<li>Fix enqueuing new reconciliation requests for events on source Flux
objects when they are already reconciling the revision present in the
watch event (kustomize-controller, helm-controller)</li>
<li>Fix the Go templates bug of YAML separator <code>---</code> getting
concatenated to <code>apiVersion:</code> by updating to Helm 4.1.3
(helm-controller)</li>
<li>Fix canceled HelmReleases getting stuck when they don't have a retry
strategy configured by introducing a new feature gate
<code>DefaultToRetryOnFailure</code> that improves the experience when
the <code>CancelHealthCheckOnNewRevision</code> is enabled
(helm-controller)</li>
<li>Fix the auth scope for Azure Container Registry to use the
ACR-specific scope (source-controller, image-reflector-controller)</li>
<li>Fix potential Denial of Service (DoS) during TLS handshakes
(CVE-2026-27138) by building all controllers with Go 1.26.1</li>
</ul>
<h2>Components changelog</h2>
<ul>
<li>source-controller <a
href="https://github.com/fluxcd/source-controller/blob/v1.8.1/CHANGELOG.md">v1.8.1</a></li>
<li>kustomize-controller <a
href="https://github.com/fluxcd/kustomize-controller/blob/v1.8.2/CHANGELOG.md">v1.8.2</a></li>
<li>notification-controller <a
href="https://github.com/fluxcd/notification-controller/blob/v1.8.2/CHANGELOG.md">v1.8.2</a></li>
<li>helm-controller <a
href="https://github.com/fluxcd/helm-controller/blob/v1.5.2/CHANGELOG.md">v1.5.2</a></li>
<li>image-reflector-controller <a
href="https://github.com/fluxcd/image-reflector-controller/blob/v1.1.1/CHANGELOG.md">v1.1.1</a></li>
<li>image-automation-controller <a
href="https://github.com/fluxcd/image-automation-controller/blob/v1.1.1/CHANGELOG.md">v1.1.1</a></li>
<li>source-watcher <a
href="https://github.com/fluxcd/source-watcher/blob/v2.1.1/CHANGELOG.md">v2.1.1</a></li>
</ul>
<h2>CLI changelog</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/fluxcd/flux2/commit/871be9b40d53627786d3a3835a3ddba1e3234bd2"><code>871be9b</code></a>
Merge pull request <a
href="https://redirect.github.com/fluxcd/flux2/issues/5779">#5779</a>
from fluxcd/update-components-release/v2.8.x</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/f7a168935dd2d777109ea189e0ef094695caeea7"><code>f7a1689</code></a>
Update toolkit components</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/bf67d7799d07eff26891a8b373601f1f07ee4411"><code>bf67d77</code></a>
Merge pull request <a
href="https://redirect.github.com/fluxcd/flux2/issues/5774">#5774</a>
from fluxcd/backport-5773-to-release/v2.8.x</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/5cb2208cb7dda2abc7d4bdc971458981c6be8323"><code>5cb2208</code></a>
Add target branch name to update branch</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/bfa461ed2153ae5e0cca6bce08e0845268fb3088"><code>bfa461e</code></a>
Merge pull request <a
href="https://redirect.github.com/fluxcd/flux2/issues/5771">#5771</a>
from fluxcd/update-pkg-deps/release/v2.8.x</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/f11a921e0cdc6c681a157c7a4777150463eaeec8"><code>f11a921</code></a>
Update fluxcd/pkg dependencies</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/b248efab1d786a27ccddf4b341a1034d67c14b3b"><code>b248efa</code></a>
Merge pull request <a
href="https://redirect.github.com/fluxcd/flux2/issues/5770">#5770</a>
from fluxcd/backport-5769-to-release/v2.8.x</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/4d5e044eb9067a15d1099cb9bc81147b5d4daf37"><code>4d5e044</code></a>
Update toolkit components</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/3c8917ca28a93d6ab4b97379c0c81a4144e9f7d6"><code>3c8917c</code></a>
Merge pull request <a
href="https://redirect.github.com/fluxcd/flux2/issues/5767">#5767</a>
from fluxcd/update-pkg-deps/release/v2.8.x</li>
<li><a
href="https://github.com/fluxcd/flux2/commit/c1f11bcf3d6433dbbb81835eb9f8016c3067d7ef"><code>c1f11bc</code></a>
Update fluxcd/pkg dependencies</li>
<li>Additional commits viewable in <a
href="https://github.com/fluxcd/flux2/compare/8454b02a32e48d775b9f563cb51fdcb1787b5b93...871be9b40d53627786d3a3835a3ddba1e3234bd2">compare
view</a></li>
</ul>
</details>
<br />

Updates `Mattraks/delete-workflow-runs` from
5bf9a1dac5c4d041c029f0a8370ddf0c5cb5aeb7 to
b3018382ca039b53d238908238bd35d1fb14f8ee
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/mattraks/delete-workflow-runs/compare/5bf9a1dac5c4d041c029f0a8370ddf0c5cb5aeb7...b3018382ca039b53d238908238bd35d1fb14f8ee">compare
view</a></li>
</ul>
</details>
<br />

Updates `umbrelladocs/action-linkspector` from 1.4.0 to 1.4.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/umbrelladocs/action-linkspector/releases">umbrelladocs/action-linkspector's
releases</a>.</em></p>
<blockquote>
<h2>Release v1.4.1</h2>
<p>v1.4.1: PR <a
href="https://redirect.github.com/umbrelladocs/action-linkspector/issues/52">#52</a>
- chore: update actions/checkout to v5 across all workflows</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/UmbrellaDocs/action-linkspector/commit/37c85bcde51b30bf929936502bac6bfb7e8f0a4d"><code>37c85bc</code></a>
Merge pull request <a
href="https://redirect.github.com/umbrelladocs/action-linkspector/issues/52">#52</a>
from UmbrellaDocs/action-v5</li>
<li><a
href="https://github.com/UmbrellaDocs/action-linkspector/commit/badbe56d6b5b23e1b01e0a48b02c8c42c734488c"><code>badbe56</code></a>
chore: update actions/checkout to v5 across all workflows</li>
<li><a
href="https://github.com/UmbrellaDocs/action-linkspector/commit/e0578c9289f053a6b2ab5ff03a1ec3d507bbb790"><code>e0578c9</code></a>
Merge pull request <a
href="https://redirect.github.com/umbrelladocs/action-linkspector/issues/51">#51</a>
from UmbrellaDocs/caching-fix-50</li>
<li><a
href="https://github.com/UmbrellaDocs/action-linkspector/commit/5ede5ac56a1421d000b3c6188c227bee606869ac"><code>5ede5ac</code></a>
feat: enhance reviewdog setup with caching and version management</li>
<li><a
href="https://github.com/UmbrellaDocs/action-linkspector/commit/a73cfa2d0f04a59ec1ab98c0f00fdd36ff5a84a1"><code>a73cfa2</code></a>
Merge pull request <a
href="https://redirect.github.com/umbrelladocs/action-linkspector/issues/49">#49</a>
from Goooler/node24</li>
<li><a
href="https://github.com/UmbrellaDocs/action-linkspector/commit/aee511ae2bf96aa01d6d77ae1c775f2f18909d49"><code>aee511a</code></a>
Update action runtime to node 24</li>
<li>See full diff in <a
href="https://github.com/umbrelladocs/action-linkspector/compare/652f85bc57bb1e7d4327260decc10aa68f7694c3...37c85bcde51b30bf929936502bac6bfb7e8f0a4d">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the specified ignore condition for that dependency


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 12:05:40 +00:00
Jeremy Ruppel 548a648dcb feat(site): add AI session thread page (#23391)
Adds the Session Thread page

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Jake Howell <jacob@coder.com>
2026-03-30 08:03:52 -04:00
dependabot[bot] 7d0a49f54b chore: bump google.golang.org/api from 0.272.0 to 0.273.0 (#23782)
Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.272.0 to 0.273.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/releases">google.golang.org/api's
releases</a>.</em></p>
<blockquote>
<h2>v0.273.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.272.0...v0.273.0">0.273.0</a>
(2026-03-23)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3542">#3542</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/a4b47110f2ba5bf8bdb32174f26f609615e0e8dc">a4b4711</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3546">#3546</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/0cacfa8557f0f7d21166c4dfef84f60c6d9f1a49">0cacfa8</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md">google.golang.org/api's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.272.0...v0.273.0">0.273.0</a>
(2026-03-23)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3542">#3542</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/a4b47110f2ba5bf8bdb32174f26f609615e0e8dc">a4b4711</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3546">#3546</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/0cacfa8557f0f7d21166c4dfef84f60c6d9f1a49">0cacfa8</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/2e86962ce58da59e39ffacd1cb9930abe979fd3c"><code>2e86962</code></a>
chore(main): release 0.273.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3545">#3545</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/50ea74c1b06b4bb59546145272bc51fc205b36ed"><code>50ea74c</code></a>
chore(google-api-go-generator): restore aiplatform:v1beta1 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3549">#3549</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/0cacfa8557f0f7d21166c4dfef84f60c6d9f1a49"><code>0cacfa8</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3546">#3546</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/d38a12991f9cee22a29ada664c5eef3942116ad9"><code>d38a129</code></a>
chore(all): update all (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3548">#3548</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/a4b47110f2ba5bf8bdb32174f26f609615e0e8dc"><code>a4b4711</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3542">#3542</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/67cf706bd3f9bd26f2a61ada3290190c0c8545ff"><code>67cf706</code></a>
chore(all): update module google.golang.org/grpc to v1.79.3 [SECURITY]
(<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3544">#3544</a>)</li>
<li>See full diff in <a
href="https://github.com/googleapis/google-api-go-client/compare/v0.272.0...v0.273.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/api&package-manager=go_modules&previous-version=0.272.0&new-version=0.273.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 11:51:12 +00:00
dependabot[bot] f77d0c1649 chore: bump github.com/hashicorp/go-version from 1.8.0 to 1.9.0 (#23784)
Bumps
[github.com/hashicorp/go-version](https://github.com/hashicorp/go-version)
from 1.8.0 to 1.9.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/hashicorp/go-version/releases">github.com/hashicorp/go-version's
releases</a>.</em></p>
<blockquote>
<h2>v1.9.0</h2>
<h2>What's Changed</h2>
<h3>Enhancements</h3>
<ul>
<li>Add support for prefix of any character by <a
href="https://github.com/brondum"><code>@​brondum</code></a> in <a
href="https://redirect.github.com/hashicorp/go-version/pull/79">hashicorp/go-version#79</a></li>
</ul>
<h3>Internal</h3>
<ul>
<li>Update CHANGELOG for version 1.8.0 enhancements by <a
href="https://github.com/sonamtenzin2"><code>@​sonamtenzin2</code></a>
in <a
href="https://redirect.github.com/hashicorp/go-version/pull/178">hashicorp/go-version#178</a></li>
<li>Bump the github-actions-backward-compatible group across 1 directory
with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-version/pull/179">hashicorp/go-version#179</a></li>
<li>Bump the github-actions-breaking group with 4 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-version/pull/180">hashicorp/go-version#180</a></li>
<li>Bump the github-actions-backward-compatible group with 3 updates by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-version/pull/182">hashicorp/go-version#182</a></li>
<li>Update GitHub Actions to trigger on pull requests and update go
version by <a
href="https://github.com/ssagarverma"><code>@​ssagarverma</code></a> in
<a
href="https://redirect.github.com/hashicorp/go-version/pull/185">hashicorp/go-version#185</a></li>
<li>Bump actions/upload-artifact from 6.0.0 to 7.0.0 in the
github-actions-breaking group across 1 directory by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-version/pull/183">hashicorp/go-version#183</a></li>
<li>Bump the github-actions-backward-compatible group across 1 directory
with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-version/pull/186">hashicorp/go-version#186</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/sonamtenzin2"><code>@​sonamtenzin2</code></a>
made their first contribution in <a
href="https://redirect.github.com/hashicorp/go-version/pull/178">hashicorp/go-version#178</a></li>
<li><a href="https://github.com/brondum"><code>@​brondum</code></a> made
their first contribution in <a
href="https://redirect.github.com/hashicorp/go-version/pull/79">hashicorp/go-version#79</a></li>
<li><a
href="https://github.com/ssagarverma"><code>@​ssagarverma</code></a>
made their first contribution in <a
href="https://redirect.github.com/hashicorp/go-version/pull/185">hashicorp/go-version#185</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/hashicorp/go-version/compare/v1.8.0...v1.9.0">https://github.com/hashicorp/go-version/compare/v1.8.0...v1.9.0</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/hashicorp/go-version/blob/main/CHANGELOG.md">github.com/hashicorp/go-version's
changelog</a>.</em></p>
<blockquote>
<h1>1.9.0 (Mar 30, 2026)</h1>
<p>ENHANCEMENTS:</p>
<p>Support parsing versions with custom prefixes via opt-in option in <a
href="https://redirect.github.com/hashicorp/go-version/pull/79">hashicorp/go-version#79</a></p>
<p>INTERNAL:</p>
<ul>
<li>Bump the github-actions-backward-compatible group across 1 directory
with 2 updates in <a
href="https://redirect.github.com/hashicorp/go-version/pull/179">hashicorp/go-version#179</a></li>
<li>Bump the github-actions-breaking group with 4 updates in <a
href="https://redirect.github.com/hashicorp/go-version/pull/180">hashicorp/go-version#180</a></li>
<li>Bump the github-actions-backward-compatible group with 3 updates in
<a
href="https://redirect.github.com/hashicorp/go-version/pull/182">hashicorp/go-version#182</a></li>
<li>Update GitHub Actions to trigger on pull requests and update go
version in <a
href="https://redirect.github.com/hashicorp/go-version/pull/185">hashicorp/go-version#185</a></li>
<li>Bump actions/upload-artifact from 6.0.0 to 7.0.0 in the
github-actions-breaking group across 1 directory in <a
href="https://redirect.github.com/hashicorp/go-version/pull/183">hashicorp/go-version#183</a></li>
<li>Bump the github-actions-backward-compatible group across 1 directory
with 2 updates in <a
href="https://redirect.github.com/hashicorp/go-version/pull/186">hashicorp/go-version#186</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/hashicorp/go-version/commit/b80b1e68c4854757b38663ec02bada2d839b6f56"><code>b80b1e6</code></a>
Update CHANGELOG for version 1.9.0 (<a
href="https://redirect.github.com/hashicorp/go-version/issues/187">#187</a>)</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/e93736f31592c971fe8ebbd600844cad58b18ad8"><code>e93736f</code></a>
Bump the github-actions-backward-compatible group across 1 directory
with 2 u...</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/c009de06b736afce5f36f7180c1356d6a40bee38"><code>c009de0</code></a>
Bump actions/upload-artifact from 6.0.0 to 7.0.0 in the
github-actions-breaki...</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/0474357931d1b2fe3d7ac492bcd8ee4802b3c22c"><code>0474357</code></a>
Update GitHub Actions to trigger on pull requests and update go version
(<a
href="https://redirect.github.com/hashicorp/go-version/issues/185">#185</a>)</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/b4ab5fc7d9d3eb48253b467f8f00b22403ec8089"><code>b4ab5fc</code></a>
Support parsing versions with custom prefixes via opt-in option (<a
href="https://redirect.github.com/hashicorp/go-version/issues/79">#79</a>)</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/25c683be0f3830787e522175e0309e14de37ef7b"><code>25c683b</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/go-version/issues/182">#182</a>
from hashicorp/dependabot/github_actions/github-actio...</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/4f2bcd85ae00b22689501fa029976f6544d18a6b"><code>4f2bcd8</code></a>
Bump the github-actions-backward-compatible group with 3 updates</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/acb8b18f5cb9ada9a3c92a9477e54aab6dd7900f"><code>acb8b18</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/go-version/issues/180">#180</a>
from hashicorp/dependabot/github_actions/github-actio...</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/0394c4f5ebf87c7bdf0a3034ee48613bfe5bf341"><code>0394c4f</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/go-version/issues/179">#179</a>
from hashicorp/dependabot/github_actions/github-actio...</li>
<li><a
href="https://github.com/hashicorp/go-version/commit/b2fbaa797b31cd3b36e55bdc4f20a765acc9a251"><code>b2fbaa7</code></a>
Bump the github-actions-backward-compatible group across 1 directory
with 2 u...</li>
<li>Additional commits viewable in <a
href="https://github.com/hashicorp/go-version/compare/v1.8.0...v1.9.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/hashicorp/go-version&package-manager=go_modules&previous-version=1.8.0&new-version=1.9.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 11:50:56 +00:00
dependabot[bot] 9f51c44772 chore: bump rust from f7bf1c2 to 1d0000a in /dogfood/coder (#23787)
Bumps rust from `f7bf1c2` to `1d0000a`.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rust&package-manager=docker&previous-version=slim&new-version=slim)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 11:47:12 +00:00
Michael Suchacz 73f6cd8169 feat: suffix-based chat agent selection (#23741)
Adds suffix-based agent selection for chatd. Template authors can direct
chat traffic to a specific root workspace agent by naming it with the
`-coderd-chat` suffix (for example, `coder_agent "dev-coderd-chat"`).
When no suffix match exists, chatd falls back to the first root agent by
`DisplayOrder`, then `Name`. Multiple suffix matches return an error.

The selection logic lives in `coderd/x/chatd/internal/agentselect` and
is shared by chatd core plus the workspace chat tools so all chat entry
points pick the same agent deterministically.

No database migrations, API contract changes, or provider changes. The
experimental sandbox template was split out to #23777.
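The selection rules above can be sketched in Go. This is a simplified illustration, not the actual `agentselect` code; the `agent` struct and `selectChatAgent` function here are hypothetical stand-ins:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// agent is a hypothetical stand-in for a root workspace agent.
type agent struct {
	Name         string
	DisplayOrder int32
}

const chatSuffix = "-coderd-chat"

// selectChatAgent mirrors the described rules: prefer the single agent
// whose name ends in "-coderd-chat"; error on multiple suffix matches;
// otherwise fall back to the first agent ordered by DisplayOrder, then Name.
func selectChatAgent(agents []agent) (agent, error) {
	var matches []agent
	for _, a := range agents {
		if strings.HasSuffix(a.Name, chatSuffix) {
			matches = append(matches, a)
		}
	}
	switch {
	case len(matches) == 1:
		return matches[0], nil
	case len(matches) > 1:
		return agent{}, fmt.Errorf("%d agents match the %q suffix", len(matches), chatSuffix)
	}
	if len(agents) == 0 {
		return agent{}, fmt.Errorf("no root agents")
	}
	sorted := append([]agent(nil), agents...)
	sort.Slice(sorted, func(i, j int) bool {
		if sorted[i].DisplayOrder != sorted[j].DisplayOrder {
			return sorted[i].DisplayOrder < sorted[j].DisplayOrder
		}
		return sorted[i].Name < sorted[j].Name
	})
	return sorted[0], nil
}

func main() {
	a, _ := selectChatAgent([]agent{
		{Name: "main", DisplayOrder: 1},
		{Name: "dev-coderd-chat", DisplayOrder: 2},
	})
	fmt.Println(a.Name) // dev-coderd-chat
}
```

Sorting a copy of the slice keeps the fallback deterministic regardless of the order agents arrive in.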
2026-03-30 11:43:59 +00:00
Danielle Maywood 4c97b63d79 fix(site/src/pages/AgentsPage): toast when git refresh fails due to disconnection (#23779) 2026-03-30 12:35:23 +01:00
Jakub Domeracki 28484536b6 fix(enterprise/aibridgeproxyd): return 403 for blocked private IP CONNECT attempts (#23360)
Previously, when a CONNECT tunnel was blocked because the destination
resolved to a private/reserved IP range, the proxy returned 502 Bad
Gateway, implying an upstream failure rather than a deliberate policy
block.

Introduce `blockedIPError` as a sentinel type returned by both
`checkBlockedIP` and `checkBlockedIPAndDial`. `ConnectionErrHandler`
now inspects the error with `errors.As` and returns 403 Forbidden for
policy blocks, keeping 502 for genuine

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 12:25:33 +02:00
Danielle Maywood 7a5fd4c790 fix(site): align plus menu icons and add small switch variant (#23769) 2026-03-30 11:18:18 +01:00
Jake Howell 8f73e46c2f feat: automatically generate beta features (#23549)
Closes #15129

Adds a generated **Beta features** table on the feature stages doc,
using the same mainline/stable sparse-checkout approach as the
early-access experiments list.

- Walk `docs/manifest.json` for routes with `state: ["beta"]` and render
a table (title, description, mainline vs stable).
- Inject output between `<!-- BEGIN: available-beta-features -->` /
`END` in `docs/install/releases/feature-stages.md`.
- Rename `scripts/release/docs_update_experiments.sh` →
`docs_update_feature_stages.sh`, refresh the script header, and use
`build/docs/feature-stages` for clone output.

<img width="1624" height="1061" alt="image"
src="https://github.com/user-attachments/assets/5fa811dd-9b80-446b-ae65-ec6e6cfedd6a"
/>
2026-03-30 21:14:52 +11:00
Atif Ali 56171306ff ci: fix SLSA predicate schema in attestation steps (#23768)
Follow-up to #23763.

The custom predicate uses the **SLSA v0.2 schema** (`invocation`,
`configSource`, `metadata`) but declares `predicate-type` as v1.
GitHub's attestation API rejects the mismatch:

```
Error: Failed to persist attestation: Invalid Argument -
predicate is not of type slsa1.ProvenancePredicate
```

This was masked before #23763 because the steps failed earlier on
missing `subject-digest`. Now that digests are provided, this is the
next error.

## Fix

Remove the custom `predicate-type` and `predicate` inputs. Without them,
`actions/attest@v4` auto-generates a correct SLSA v1 predicate from the
GitHub Actions OIDC token — which is what `gh attestation verify`
expects.

- `ci.yaml`: 3 attestation steps (main, latest, version-specific)
- `release.yaml`: 3 attestation steps (base, main, latest)

<details>
<summary>Verification (source code trace of actions/attest@v4)</summary>

1. **`detect.ts`**: No `predicate-type`/`predicate` → returns
`'provenance'` (not `'custom'`)
2. **`main.ts`**: `getPredicateForType('provenance')` →
`generateProvenancePredicate()`
3. **`@actions/toolkit/.../provenance.ts`**:
`buildSLSAProvenancePredicate()` fetches OIDC claims, builds correct v1
predicate with `buildDefinition`/`runDetails`

</details>

> 🤖 This PR was created with the help of Coder Agents, and needs a human
review. 🧑‍💻
2026-03-30 15:07:13 +05:00
Danielle Maywood 0b07ce2a97 refactor(site): move AgentChatPageView to correct directory (#23770) 2026-03-30 10:49:21 +01:00
Ethan f2a7fdacfe ci: don't cancel in-progress linear release runs on main (#23766)
The Linear Release workflow had `cancel-in-progress: true`
unconditionally, so a new push to `main` would cancel an already-running
sync. This meant successive PR merges would show you a bunch of red Xs
on CI, even though nothing was wrong.

<img width="958" height="305" alt="image"
src="https://github.com/user-attachments/assets/1bd06948-ef2d-469f-9d48-a82277a6110c"
/>

Other workflows like CI guard against this with `cancel-in-progress: ${{
github.ref != 'refs/heads/main' }}`.

This PR does the same thing to the linear release workflow. The job will
be queued instead.

<img width="678" height="105" alt="image"
src="https://github.com/user-attachments/assets/931e38c8-3de4-40d6-b156-d5de5726d094"
/>

Letting the job finish is not particularly wasteful since the sync
takes ~30 seconds of CI time.
2026-03-30 20:46:21 +11:00
Jaayden Halko 0e78156bcd fix: create scrollable proxy menu list (#23764)
This allows the proxy menu to scroll for very large numbers of proxies
in the menu.

<img width="334" height="802" alt="screenshot"
src="https://github.com/user-attachments/assets/f0e45b9c-5b77-43da-b566-28d0572fd56b"
/>
2026-03-30 09:39:30 +01:00
Atif Ali bc5e4b5d54 ci: fix broken GitHub attestations and update SBOM tooling (#23763)
## Problem

GitHub SLSA provenance attestations have been silently failing on
**every release** since they were introduced. Confirmed across all 10+
release runs checked (v2.29.2 through v2.31.6).

The `actions/attest` action requires `subject-digest` (a `sha256:...`
hash) to identify the artifact being attested, but the workflow only
provided `subject-name` (the image tag like
`ghcr.io/coder/coder:v2.31.6`). This caused every attestation step to
error with:

```
Error: One of subject-path, subject-digest, or subject-checksums must be provided
```

The failures were masked by `continue-on-error: true` and only surfaced
as `##[warning]` annotations that nobody noticed. Enterprise customers
doing `gh attestation verify` would find no provenance records for any
of our Docker images.

> [!NOTE]
> The cosign SBOM attestation (separate step) has been working correctly
the entire time — it uses a different mechanism (`cosign attest --type
spdxjson`) that does not require the same inputs. This fix is
specifically for the GitHub-native SLSA provenance attestations.

## Fix

**Add `subject-digest` to all `actions/attest` steps** (release.yaml +
ci.yaml):
- Base image: capture digest from `depot/build-push-action` output
- Main image: resolve digest via `docker buildx imagetools inspect
--raw` after push
- Latest image: same approach
- Use `subject-name` without tag per the [actions/attest
docs](https://github.com/actions/attest#container-image)

**Update `anchore/sbom-action`** from v0.18.0 to v0.24.0 (node24
support, ahead of the [June 2
deadline](https://github.blog/changelog/2025-09-19-deprecation-of-node-20-on-github-actions-runners/)).

All changes remain non-blocking for the release process
(`continue-on-error: true` preserved).

> 🤖 This PR was created with the help of Coder Agents, and is reviewed
by a human.
2026-03-30 13:15:13 +05:00
Ethan 13dfc9a9bb test: harden chatd relay test setup (#23759)
These chatd relay tests were seeding chats through
`subscriber.CreateChat(...)`, which wakes the subscriber and can race
local acquisition against the intended remote-worker setup.

Seed waiting and remote-running chats directly in the database instead,
and point the default OpenAI provider at a local safety-net server so
accidental processing fails locally instead of reaching the live API.

Closes https://github.com/coder/internal/issues/1430
2026-03-30 17:52:01 +11:00
Ethan 54738e9e14 test(coderd/x/chatd): avoid zero-ttl config cache flake (#23762)
This fixes a flaky `TestConfigCache_UserPrompt_ExpiredEntryRefetches` by
making the seeded user prompt entry unambiguously expired before the
cache lookup runs.

The test previously inserted a `tlru` entry with a zero TTL, which
depends on `Set` and `Get` landing in different clock ticks. Switching
that seed entry to a negative TTL keeps the bounded `tlru` cache
behavior while removing the same-tick race.

Close https://github.com/coder/internal/issues/1432
2026-03-30 17:51:51 +11:00
TJ 78986efed8 fix(site): hide table headers during loading and empty states on workspaces page (#23446)
## Problem

When the workspaces table transitions between states (loading →
populated, or populated → empty search results), the table column
headers visibly jump. This happens because the Actions column's width is
content-driven: when workspace rows are present, the action buttons give
it intrinsic width, shrinking the Name/Template/Status columns. When the
body is empty or loading, the Actions column collapses to zero, and the
other columns expand to fill the space.

## Solution

Hide the header content during loading and empty states using
`visibility: hidden` (Tailwind's `invisible` class), which preserves the
row's layout height but hides the text. This prevents the visual jump
since headers aren't visible during the states where column widths
differ.

- **Loading**: first column shows a skeleton bar matching the body
skeleton aesthetic; other columns are invisible
- **Empty search results**: all header content is invisible
- **Populated**: headers display normally

---------

Co-authored-by: Jaayden Halko <jaayden@coder.com>
2026-03-29 23:28:30 -07:00
Kyle Carberry 4d2b0a2f82 feat: persist skills as message parts like AGENTS.md (#23748)
## Summary

Skills are now discovered once on the first turn (or when the workspace
agent changes) and persisted as `skill` message parts alongside
`context-file` parts. On subsequent turns, the skill index is
reconstructed from persisted parts instead of re-dialing the workspace
agent.

This makes skills consistent with the AGENTS.md pattern and is
groundwork for a future `/context` endpoint that surfaces loaded
workspace context to the frontend.

## Changes

- Add `skill` `ChatMessagePartType` with `SkillName` and
`SkillDescription` fields
- Extend `persistInstructionFiles` to also discover and persist skills
as parts
- Add `skillsFromParts()` to reconstruct skill index from persisted
parts on subsequent turns
- Update `runChat()` to use `skillsFromParts` instead of re-dialing
workspace for skills
- Frontend: handle new `skill` part type (skip rendering, hide
metadata-only messages)

## Before / After

| | AGENTS.md | Skills |
|---|---|---|
| **Before** | Persist as `context-file` parts, reconstruct from parts |
In-memory `skillsCache` only, re-dial workspace on cache miss |
| **After** | Persist as `context-file` parts, reconstruct from parts |
Persist as `skill` parts, reconstruct from parts |

The in-memory `skillsCache` remains for `read_skill`/`read_skill_file`
tool calls that need full skill bodies on demand.

<details><summary>Design context</summary>

This is the first step toward a unified workspace context
representation. Currently:
- Context files are persisted as message parts (works)
- Skills were only in-memory (inconsistent)
- Workspace MCP servers are cached in-memory (future work)

Persisting skills as parts means a future `/context` endpoint can query
both context files and skills from the same message parts in the DB,
without depending on ephemeral server-side caches.
</details>
2026-03-29 21:48:17 -04:00
Ethan f7aa46c4ba fix(scaletest/llmmock): emit Anthropic SSE event lines (#23587)
The llmmock Anthropic stream wrote each chunk as `data:` only, so
Anthropic clients never saw the named SSE events they dispatch on and
Claude responses arrived empty even though the stream completed with
HTTP 200.

Update `sendAnthropicStream()` to emit `event: <type>` and `data:
<json>` for each Anthropic chunk while leaving the OpenAI-style streams
unchanged.
2026-03-30 12:21:53 +11:00
dependabot[bot] 4bf46c4435 chore: bump the coder-modules group across 2 directories with 1 update (#23757)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 00:37:51 +00:00
Kyle Carberry be99b3cb74 fix: prioritize context cancellation in WebSocket sendEvent (#23756)
## Problem

Commit 386b449 (PR #23745) changed the `OneWayWebSocketEventSender`
event channel from unbuffered to buffered(64) to reduce chat streaming
latency. This introduced a nondeterministic race in `sendEvent`:

```go
sendEvent := func(event codersdk.ServerSentEvent) error {
    select {
    case eventC <- event:  // buffered channel — almost always ready
    case <-ctx.Done():     // also ready after cancellation
    }
    return nil
}
```

After context cancellation, Go's `select` randomly picks between two
ready cases, so `send()` sometimes returns `nil` instead of `ctx.Err()`.
With the old unbuffered channel the send case was rarely ready (no
reader), masking the bug.

## Fix

Add a priority `select` that checks `ctx.Done()` before attempting the
channel send:

```go
select {
case <-ctx.Done():
    return ctx.Err()
default:
}
select {
case eventC <- event:
case <-ctx.Done():
    return ctx.Err()
}
```

This is the standard Go pattern for prioritizing one channel over
another. When the context is already cancelled, the first select returns
immediately. The second select still handles the case where cancellation
happens concurrently with the send.

## Verification

- Ran the flaky test 20× in a loop (`-count=20`): all passed
- Ran the full `TestOneWayWebSocketEventSender` suite 5× (`-count=5`):
all passed
- Ran the complete `coderd/httpapi` test package: all passed

Fixes coder/internal#1429
2026-03-29 20:11:30 -04:00
Danielle Maywood 588beb0a03 fix(site): restore mobile back button on agent settings pages (#23752) 2026-03-28 23:12:49 +00:00
Michael Suchacz bfeb91d9cd fix: scope title regeneration per chat (#23729)
Previously, generating a new agent title used a page-global pending
state, so one in-flight regeneration disabled the action for every chat
in the Agents UI.

This change tracks regenerations by chat ID, updates the Agents page
contracts to use `regeneratingTitleChatIds`, and adds sidebar story
coverage that proves only the active chat is disabled.
2026-03-29 00:01:53 +01:00
Danielle Maywood a399aa8c0c refactor(site): restructure AgentsPage folder (#23648) 2026-03-28 21:33:42 +00:00
Kyle Carberry 386b449273 perf(coderd): reduce chat streaming latency with event-driven acquisition (#23745)
Previously, when a user sent a message, there was a 0–1000ms (avg
~500ms) polling delay before processing began.
`SendMessage`/`CreateChat`/`EditMessage` set `status='pending'` in the
DB and returned, but nothing woke the processing loop — it was a blind
1-second ticker.

## Changes

**Event-driven acquisition (main change):** Adds a `wakeCh` channel to
the chatd `Server`. `CreateChat`, `SendMessage`, `EditMessage`, and
`PromoteQueued` call `signalWake()` after committing their transactions,
which wakes the run loop to call `processOnce` immediately. The 1-second
ticker remains as a fallback safety net for edge cases (stale recovery,
missed signals).

**Buffer WebSocket write channel:** Changes the
`OneWayWebSocketEventSender` event channel from unbuffered to buffered
(64), decoupling the event producer from WebSocket write speed. The
existing 10s write timeout guards against stuck connections.

<details><summary>Implementation plan & analysis</summary>

The full latency analysis identified these sources of delay in the
streaming pipeline:

1. **Chat acquisition polling** — 0–1000ms (avg 500ms) dead time per
message. Fixed by wake channel.
2. **Unbuffered WebSocket write channel** — each token blocked on the
previous WS write completing. Fixed by buffering.
3. **PersistStep DB transaction per step** — `FOR UPDATE` lock + batch
insert. Not addressed in this PR (medium risk, would overlap DB write
with next provider TTFB).
4. **Multi-hop channel pipeline** — 4 channel hops per token. Not
addressed (medium complexity).

</details>

<details><summary>Test stabilization notes</summary>

`signalWake()` causes the chatd daemon to process chats immediately
after creation/send/edit, which exposed timing assumptions in several
tests that expected chats to remain in `pending` status long enough to
assert on. These tests were updated with `require.Eventually` +
`WaitUntilIdleForTest` patterns to wait for processing to settle before
asserting.

The race detector (`test-go-race-pg`) shows failures in
`TestCreateWorkspaceTool_EndToEnd` and `TestAwaitSubagentCompletion` —
these appear to be pre-existing races in the end-to-end chat flow that
are now exercised more aggressively because processing starts
immediately instead of after a 1s delay. Main branch CI (race detector)
passes without these changes.

</details>
2026-03-28 15:26:42 -04:00
Danielle Maywood 565cf846de fix(site): fix exec tool layout shift on command expand (#23739) 2026-03-28 16:27:54 +00:00
Kyle Carberry a2799560eb fix: use Popover for context indicator on mobile viewports (#23747)
## Problem

The context-usage indicator ring in the agents chat uses a Radix UI
`Tooltip`, which only opens on hover. On mobile/touch devices there is
no hover event, so tapping the indicator does nothing.

## Fix

On mobile viewports (`< 640 px`, matching the existing
`isMobileViewport()` helper), render a `Popover` instead of a `Tooltip`
so that tapping the ring toggles the context-usage info. Desktop
behavior (hover tooltip) is unchanged.

- Extract the trigger button and content into shared variables to avoid
duplication
- Conditionally render `Popover` (mobile) or `Tooltip` (desktop) based
on viewport width
- Both `Popover` and `PopoverContent` were already imported in the file
2026-03-28 12:26:23 -04:00
Michael Suchacz 73bde99495 fix(site): prevent mobile scroll jump during active touch gestures (#23734) 2026-03-28 16:24:12 +01:00
Kyle Carberry a708e9d869 feat: add tool rendering for read_skill, read_skill_file, start_workspace (#23744)
These tools previously fell through to the `GenericToolRenderer` which
showed a wrench icon and the raw tool name. Now they get dedicated icons
and contextual labels.

## Changes

**ToolIcon.tsx** — new icon mappings:
- `read_skill` / `read_skill_file` → `BookOpenIcon`
- `start_workspace` → `PlayIcon`
- `web_search` → `SearchIcon`

**ToolLabel.tsx** — contextual labels:
- `read_skill`: "Reading skill {name}…" → "Read skill {name}"
- `read_skill_file`: "Reading {skill}/{path}…" → "Read {skill}/{path}"
- `start_workspace`: "Starting workspace…" → "Started {name}"
- `web_search`: "Searching \"{query}\"…" → "Searched \"{query}\""

**tool.stories.tsx** — 12 new stories covering running, completed, and
error states for all four tools.

<details>
<summary>Visually verified in Storybook</summary>

All 10 story variants verified:
- ReadSkillRunning / ReadSkillCompleted / ReadSkillError
- ReadSkillFileRunning / ReadSkillFileCompleted / ReadSkillFileError
- StartWorkspaceRunning / StartWorkspaceCompleted / StartWorkspaceError
- WebSearchRunning / WebSearchCompleted / WebSearchNoQuery
</details>
2026-03-28 11:16:11 -04:00
Michael Suchacz 91217a97b9 fix(coderd/x/chatd): guard title generation meta replies (#23708)
Short prompts were producing title-generation meta responses such as "I
am a title generator" and prompt-echo titles. This rewrites the
automatic and manual title prompts to be shorter, less self-referential,
and more focused on returning only the title text.

The change also removes the broader post-generation guard layer, updates
manual regeneration to send real conversation text instead of a meta
instruction, and keeps regression coverage focused on the slimmer prompt
contract.
2026-03-28 15:58:53 +01:00
Danielle Maywood 399080e3bf fix(site/src/pages/AgentsPage): preserve right panel tab state across switches (#23737) 2026-03-28 00:00:17 +00:00
Danielle Maywood 50d9d510c5 refactor(site/src/pages/AgentsPage): clean up manual ref-sync callback patterns (#23732) 2026-03-27 23:09:51 +00:00
Jeremy Ruppel eda1bba969 fix(site): show table loader during session list pagination (#23719)
This patch shows the loading spinner on the AI Bridge Sessions table
during pagination. These API calls can take a noticeable amount of time
on dogfood, and without this the table shows stale data while the fetch
is in flight.

Claude with the assist 🤖

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Kayla はな <mckayla@hey.com>
2026-03-27 19:09:42 -04:00
Danielle Maywood 808dd64ef6 fix(site): reduce dropdown menu font size on agents page (#23711)
Co-authored-by: Kayla はな <mckayla@hey.com>
2026-03-27 23:09:27 +00:00
Kayla はな 04f7d19645 chore: additional typescript import modernization (#23722) 2026-03-27 16:41:56 -06:00
Jake Howell 71a492a374 feat: implement <ClientFilter /> to AI Bridge request logs (#22694)
Closes #22136

This pull request adds a `<ClientFilter />` to our `Request Logs` page
for AI Bridge, allowing the user to select a client to filter against.
Technically the backend can filter against multiple clients at once, but
the frontend doesn't currently have a nice way of supporting this
(future improvement).

<img width="1447" height="831" alt="image"
src="https://github.com/user-attachments/assets/0be234e2-25f2-4a89-b971-d74817395da1"
/>

---------

Co-authored-by: Jeremy Ruppel <jeremy.ruppel@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 17:18:28 -04:00
Charlie Voiselle 8c494e2a77 fix(site): sever opener in openAppInNewWindow for slim-window apps (#23117)
Split out from #23000.

## Problem

`openAppInNewWindow()` opens workspace apps in a slim popup without
`noopener`, leaving `window.opener` intact. Since workspace apps can
proxy arbitrary user-hosted content under the dashboard origin, this
exposes the Coder dashboard to tabnabbing and same-origin DOM access.

We cannot simply pass `"noopener"` to `window.open()` because the WHATWG
spec mandates that `window.open()` returns `null` when `"noopener"` is
present — indistinguishable from a blocked popup — which would break the
popup-blocked error toast.

## Solution

Use a three-step approach in `openAppInNewWindow()`:

1. Open `about:blank` first (without `noopener`) so we can detect popup
blockers via the `null` return
2. If the popup succeeded, sever the opener reference with `popup.opener
= null`
3. Navigate the popup to the target URL via `popup.location.href`

This preserves popup-block detection while eliminating the security
exposure.

## Tests

A vitest case in `AppLink.test.tsx` asserts that `open_in="slim-window"`
anchors do not carry `target` or `rel` attributes (since slim-window
opening is handled programmatically via `onClick` / `window.open()`, not
anchor attributes).

---------

Co-authored-by: Kayla はな <kayla@tree.camp>
2026-03-27 15:35:10 -04:00
Kyle Carberry 839165818b feat(coderd/x/chatd): add skills discovery and tools for chatd (#23715)
Adds skill discovery and tools to chatd so the agent can discover and
load `.agents/skills/` from workspaces, following the same pattern as
AGENTS.md instruction loading and MCP tool discovery.

## What changed

### `chattool/skill.go` — discovery, loading, and tools

- **DiscoverSkills** — walks `.agents/skills/` via `conn.LS()` +
`conn.ReadFile()`, parses SKILL.md frontmatter (name + description),
validates kebab-case names match directory names, silently skips
broken/missing entries.
- **FormatSkillIndex** — renders a compact `<available-skills>` XML
block for system prompt injection (~60 tokens for 3 skills). Progressive
disclosure: only names + descriptions in context, full body loaded on
demand.
- **LoadSkillBody** / **LoadSkillFile** — on-demand loading with path
traversal protection and size caps (64KB for SKILL.md, 512KB for
supporting files).
- **read_skill** / **read_skill_file** tools — `fantasy.AgentTool`
implementations following the same pattern as ReadFile and
WorkspaceMCPTool. Receive pre-discovered `[]SkillMeta` via closure to
avoid re-scanning on every call.

### `chatd.go` — integration into runChat

- Skills discovered in the `g2` errgroup parallel with instructions and
MCP tools.
- `skillsCache` (sync.Map) per chat+agent, same invalidation pattern as
MCP tools cache.
- Skill index injected via `InsertSystem` after workspace instructions.
- Re-injected in `ReloadMessages` callback so it survives compaction.
- `read_skill` + `read_skill_file` tools registered when skills are
present (for both root and subagent chats).
- Cache cleaned up in `cleanupStreamIfIdle` alongside MCP tools cache.

## Format compatibility

Uses the same `.agents/skills/<name>/SKILL.md` format as
[coder/mux](https://github.com/coder/mux) and
[openai/codex](https://github.com/openai/codex).
2026-03-27 15:22:13 -04:00
Kyle Carberry 6b77fa74a1 fix: remove bold font-weight from unread chats in agents sidebar (#23725)
Removes the `font-semibold` class applied to unread chat titles in the
`/agents` sidebar. The unread indicator dot already provides sufficient
visual distinction for unread chats.
2026-03-27 13:36:49 -04:00
Atif Ali 25e9fa7120 ci: improve Linear release tracking for scheduled pipeline (#23357)
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 22:26:02 +05:00
Danielle Maywood 60065f6c08 fix(site): replace inline delete confirmations with modals (#23710) 2026-03-27 17:02:18 +00:00
Kyle Carberry bcdc35ee3e feat: add chat read/unread indicator to sidebar (#23129)
## Summary

Adds read/unread tracking for chats so users can see which agent
conversations have new assistant messages they haven't viewed.

## Backend Changes

- Adds `last_read_message_id` column to the `chats` table (migration
000439).
- Computes `has_unread` as a virtual column in `GetChatsByOwnerID` using
an `EXISTS` subquery checking for assistant messages beyond the read
cursor.
- Exposes `has_unread` on the `codersdk.Chat` struct and auto-generated
TypeScript types.
- Updates `last_read_message_id` on stream connect/disconnect in
`streamChat`, avoiding per-message API calls during active streaming.
- Uses `context.WithoutCancel` for the deferred disconnect write so the
DB update succeeds even after the client disconnects.

## Frontend Changes

- Bold title (`font-semibold`) for unread chats in the sidebar.
- Small blue dot indicator next to the relative timestamp.
- Suppresses unread indicator for the currently active chat via
`isActive` from NavLink.

## Design Decisions

- Only `assistant` messages count as unread — the user's own messages
don't trigger the indicator.
- No foreign key on `last_read_message_id` since messages can be deleted
(via rollback/truncation) and the column is just a high-water mark.
- Zero API calls during streaming: exactly 2 DB writes per stream
session (connect + disconnect).
- Unread state refreshes on chat list load and window focus. The
`watchChats` WebSocket optimistically marks non-active chats as unread
on `status_change` events, but does not carry a server-computed
`has_unread` field. Navigating to a chat optimistically clears its
unread indicator in the cache.
2026-03-27 12:15:04 -04:00
Cian Johnston a5c72ba396 fix(coderd/agentapi): trim whitespace from workspace agent metadata values (#23709)
- Trim leading/trailing whitespace from metadata `value` and `error`
fields before storage
- Trimming happens before length validation so whitespace-padded values
are handled correctly
- Add `TrimWhitespace` test covering spaces, tabs, newlines, and
preserved inner whitespace
- No backfill needed (unlogged table, stores only latest value)

> 🤖 Created by a Coder Agent, reviewed by me.
2026-03-27 15:08:47 +00:00
Cian Johnston 3f55b35f68 refactor: replace AsSystemRestricted with narrower actors (#23712)
Replace overly-broad `AsSystemRestricted` with purpose-built actors:

- **OAuth2 provider paths** → `AsSystemOAuth2` (13 call sites across
`tokens.go`, `registration.go`, `apikey.go`)
- **Provisioner daemon health read** → `AsSystemReadProvisionerDaemons`
(1 site in `healthcheck/provisioner.go`)
- **Provisionerd file cache paths** → `AsProvisionerd` (2 sites in
`provisionerdserver.go`, matching existing usage nearby)

<details>
<summary>Implementation notes</summary>

Each replacement actor is a strict subset of `AsSystemRestricted`. Every
DB method
at each call site is already covered by the narrower actor's
permissions:

- `subjectSystemOAuth2`: OAuth2App/Secret/CodeToken (all), ApiKey (Read,
Delete), User (Read), Organization (Read)
- `subjectSystemReadProvisionerDaemons`: ProvisionerDaemon (Read)
- `subjectProvisionerd`: File (Create, Read) plus provisionerd-scoped
resources

No new permissions added. `nolint:gocritic` comments updated to reflect
the new actors.
</details>

> 🤖 Created by a Coder Agent, reviewed by me.
2026-03-27 15:08:30 +00:00
Ehab Younes 97a27d3c09 fix(site/src/pages/AgentsPage): fire chat-ready after store sync so scroll-to-bottom hits rendered DOM (#23693)
`onChatReady` now waits for the store to have all fetched messages (not
just query success), so the DOM has content when the parent frame acts
on the signal. Removed the `scroll-to-bottom` round-trip from
`onChatReady`; the embed page no longer needs the parent to echo a
scroll command back.

Also moved `autoScrollRef` check before the `widthChanged` guard in
`ScrollAnchoredContainer`'s ResizeObserver. The width guard is only
relevant for scroll compensation (user scrolled up); it should never
prevent pinning to bottom when auto-scroll is active. Also tightened the
store sync guard to `storeMessageCount < fetchedMessageCount`.
2026-03-27 18:01:38 +03:00
Kyle Carberry 4ed9094305 perf(site): memoize chat rendering hot path (#23720)
Addresses chat page rendering performance. Profiling with React Profiler
showed `AgentChat` actual render times of 20–31ms (exceeding the
16ms/60fps budget), with `StickyUserMessage` as the #1 component
bottleneck at 35.7% of self time.

## Changes

**Hoist `createComponents` to module scope** (`response.tsx`):
Previously every `<Response>` instance called `createComponents()` per
render, creating a fresh components map that forced Streamdown to
discard its cached render tree. Now both light/dark variants are
precomputed once at module scope.

**Wrap `StickyUserMessage` in `memo()`** (`ConversationTimeline.tsx`):
Profile-confirmed #1 bottleneck. Each instance carries
IntersectionObserver + ResizeObserver + scroll handlers; skipping
re-render avoids all that setup.

**Wrap `ConversationTimeline` in `memo()`** (`ConversationTimeline.tsx`):
Prevents cascade re-renders from the parent when props haven't changed.

**Remove duplicate `buildSubagentTitles`** (`ConversationTimeline.tsx` →
`AgentDetailContent.tsx`): Was computed in both `AgentDetailTimeline`
and `ConversationTimeline`. Now computed once and passed as a prop.

<details>
<summary>Profiling data & analysis</summary>

### Profiler Metrics
| Metric | Value |
|--------|-------|
| INP (Interaction to Next Paint) | 82ms |
| Processing duration (event handlers) | 52ms |
| AgentChat actual render | 20–31ms (budget: 16ms) |
| AgentChat base render (no memo) | ~100ms |

### Top Bottleneck Components (self-time %)
| Component | Self Time | % |
|-----------|-----------|---|
| StickyUserMessage | 11.0ms | 35.7% |
| ForwardRef (radix-ui) | 7.4ms | 24.0% |
| Presence (radix-ui) | 2.0ms | 6.5% |
| AgentChatInput | 1.4ms | 4.5% |

### Decision log
- Chose module-scope precomputation over `useMemo` for
  `createComponents` because there are only two possible theme variants
  and they're static.
- Did not add virtualization — sticky user messages + scroll anchoring
  make it complex. The memoization fixes should be measured first.
- Did not wrap `BlockList` in `memo()` — the React Compiler (enabled for
  `pages/AgentsPage/`) already auto-memoizes JSX elements inside it.
- Phase 2 (verify React Compiler effectiveness on
  `parseMessagesWithMergedTools`) and Phase 3 (radix-ui Tooltip
  lazy-mounting) deferred to follow-up PRs.

</details>
2026-03-27 14:28:27 +00:00
Kyle Carberry d973a709df feat: add model_intent option to MCP server configs (#23717)
Add a per-MCP-server `model_intent` toggle that wraps tool schemas with
a `model_intent` field, requiring the LLM to provide a human-readable
description of each tool call's purpose. The intent string is shown as a
status label in the UI instead of opaque tool names, and is
transparently stripped before the call reaches the remote MCP server.

Built-in tools have rich specialized renderers (terminal blocks, file
diffs, etc.) and don't need this. MCP tools hit `GenericToolRenderer`,
which only shows raw tool names and JSON — that's where model_intent
adds value.

The model learns what to provide via the JSON Schema `description` on
the `model_intent` property itself — no system prompt changes needed.

<details>
<summary>Implementation details</summary>

### Architecture

Inspired by the `withModelIntent()` pattern from `coder/blink`, adapted
for Go + React. The wrapping is entirely in the `mcpclient` layer — tool
implementations never see `model_intent`.

**Schema wrapping** (`mcpToolWrapper.Info()`): When enabled, wraps the
original tool parameters under a `properties` key and adds a
`model_intent` string field with a rich description that teaches the
model inline.

**Input unwrapping** (`mcpToolWrapper.Run()`): Strips `model_intent` and
unwraps `properties` before forwarding to the remote MCP server. Handles
three input shapes models may produce:
1. `{ model_intent, properties: {...} }` — correct format
2. `{ model_intent, key: val, ... }` — flat, no wrapper
3. Malformed — falls through gracefully

**Frontend extraction**: `streamState.ts` extracts `model_intent` from
incrementally parsed streaming JSON. `messageParsing.ts` extracts it
from persisted tool call args.

**UI rendering**: `GenericToolRenderer` shows the capitalized intent
string as the primary label when available, falling back to the raw
tool name.

### Changes
- Database: `model_intent` boolean column on `mcp_server_configs`
- SDK: `ModelIntent` field on config/create/update types
- API: pass-through in create/update handlers + converter
- mcpclient: schema wrapping in `Info()`, input unwrapping in `Run()`
- Frontend: extraction from streaming + persisted args
- UI: intent label in `GenericToolRenderer`, toggle in admin panel
- Tests: 6 new tests (schema wrapping, unwrapping, passthrough,
fallback)

### Decision log
- **Option lives on MCPServerConfig, not model config**: Built-in tools
  already have rich renderers; only MCP tools benefit from model_intent.
- **No system prompt changes**: The JSON Schema `description` on the
  `model_intent` property teaches the model inline.
- **Pointer bool on update request**: Follows existing pattern (`*bool`)
  so PATCH requests don't reset the value when omitted.

</details>
2026-03-27 14:23:25 +00:00
Kyle Carberry 50c0c89503 fix(coderd): refresh expired MCP OAuth2 tokens everywhere (#23713)
Fixes expired MCP OAuth2 tokens causing 401 errors and stale
`auth_connected` status in the UI.

When users authenticate MCP servers (e.g. GitHub) via OAuth2, the access
token and refresh token are stored in the database. However, when the
access token expired, nothing refreshed it anywhere:

- **chatd**: sent the expired token as-is, getting a 401 and skipping
the MCP server
- **list/get endpoints**: reported `auth_connected: true` just because a
token record existed, regardless of expiry

## Changes

### Shared utility: `mcpclient.RefreshOAuth2Token`

Pure function that uses `golang.org/x/oauth2` `TokenSource` to check if
a token is expired (or within 10s of expiry) and refresh it. No DB
dependency — callers handle persistence.

### chatd (`coderd/x/chatd/chatd.go`)

Before calling `mcpclient.ConnectAll`, refreshes expired tokens.
Persists new credentials to the database. Falls back to the old token if
refresh fails.

### List/get MCP server endpoints (`coderd/mcp.go`)

Both `listMCPServerConfigs` and `getMCPServerConfig` now attempt refresh
when checking `auth_connected`. If the token is expired:
- **Has refresh token**: attempt refresh, persist result, report
`auth_connected` based on success
- **No refresh token**: report `auth_connected: false` if expired

This means the UI accurately reflects whether the user's token is
actually usable, rather than just whether a record exists.

<details>
<summary>Design notes</summary>

- `RefreshOAuth2Token` lives in `mcpclient` to avoid circular imports
(`coderd` → `chatd` → `mcpclient` is fine; `chatd` → `coderd` would be
circular).
- DB persistence is handled by each caller with their own authz context
(`AsSystemRestricted` in both cases).
- The `buildAuthHeaders` warning in mcpclient about expired tokens is
kept as defense-in-depth logging.
</details>
2026-03-27 10:06:32 -04:00
Danielle Maywood 0ec0f8faaf fix: guard per-chat cancelQueries to prevent 'Chat not found' race (#23714) 2026-03-27 13:08:58 +00:00
Spike Curtis 9b4d15db9b chore: add Tunneler FSM and partial impl (#23691)
Adds the Tunneler state machine and logic for handling build updates.

This is a partial implementation and tests. Further PRs will fill out
the other event types.

Relates to GRU-18
2026-03-27 08:52:13 -04:00
Matt Vollmer 9e33035631 fix: optimistic pinned reorder and post-drag click suppression (#23704)
## Summary

Fixes two issues with pinned chat drag-to-reorder in the agents sidebar:

1. **Missing optimistic reorder**: After dragging a pinned chat to a new
position, the sidebar flashed back to the old order while waiting for
the server response, then snapped to the new order. Now the react-query
cache is updated optimistically in `onMutate` so the reorder is visually
instant.

2. **Post-drag navigation on upward drag**: Dragging a pinned chat
**upward** caused unintended navigation to that chat on drop. The
browser synthesizes a click from the final `pointerup`, and the previous
suppression listener was attached too low in the DOM tree to reliably
intercept it after items reorder.

## Changes

### Optimistic cache reorder (`site/src/api/queries/chats.ts`)

`reorderPinnedChat.onMutate` now extracts all pinned chats from the
cache, reorders them to match the new position, and writes the updated
`pin_order` values back via `updateInfiniteChatsCache`. The server
response still reconciles on `onSettled`.
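The optimistic reorder can be modeled as a pure function over the cached pinned list. A sketch with illustrative names (the real logic lives in `reorderPinnedChat.onMutate`):

```typescript
// Illustrative sketch of the optimistic reorder: move one pinned chat
// to a new 1-based position and rewrite every pin_order so the cache
// matches what the server will eventually return.
interface PinnedChat {
  id: string;
  pin_order: number; // 1-based display order; 0 means unpinned
}

function reorderPinned(
  chats: PinnedChat[],
  chatId: string,
  newPosition: number,
): PinnedChat[] {
  const sorted = [...chats].sort((a, b) => a.pin_order - b.pin_order);
  const from = sorted.findIndex((c) => c.id === chatId);
  if (from === -1) {
    return chats; // Not a pinned chat: nothing to reorder.
  }
  const [moved] = sorted.splice(from, 1);
  // Clamp the target position into [1, pinned_count].
  const clamped = Math.min(Math.max(newPosition, 1), sorted.length + 1);
  sorted.splice(clamped - 1, 0, moved);
  return sorted.map((c, i) => ({ ...c, pin_order: i + 1 }));
}
```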

### Document-level click suppression (`AgentsSidebar.tsx`)

- Moved click suppression from the pinned container div to
`document.addEventListener('click', handler, true)`. This fires before
any element-level handlers regardless of DOM reordering during drag.
Scoped via `pinnedContainerRef.contains(target)` so it only affects
pinned rows.
- Replaced `recentDragRef` (boolean cleared by `setTimeout(100ms)`) with
`lastDragEndedAtRef` (`performance.now()` timestamp checked against a
300ms window). This eliminates the race where the timeout fires before
the synthetic click arrives.
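The timestamp-window check can be sketched as (names and the 300ms constant mirror the description above):

```typescript
// Sketch of the post-drag click suppression: compare the click time
// against the drag-end timestamp instead of a boolean cleared by a
// timer. Names are illustrative.
const SUPPRESS_WINDOW_MS = 300;

function shouldSuppressClick(
  lastDragEndedAt: number | null,
  now: number,
): boolean {
  return (
    lastDragEndedAt !== null && now - lastDragEndedAt < SUPPRESS_WINDOW_MS
  );
}
```

Because the comparison happens at click time, there is no timer to fire early; the window simply expires by arithmetic.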

---

PR generated with Coder Agents
2026-03-27 07:57:23 -04:00
Ethan 83b2f85d63 feat(site/src/pages/AgentsPage): add ArrowUp shortcut to edit last user message (#23705)
Add a keyboard shortcut (ArrowUp on empty input) to start editing the
most recent user message, mirroring Mux's behavior. The shortcut reuses
the existing history-edit flow triggered by the pencil button.

Extract a shared `getEditableUserMessagePayload` helper so the pencil
button and the new shortcut both derive the edit payload identically.
Derive the last editable user message during render in
`AgentDetailInput` from the existing store selectors, keeping the
implementation Effect-free and React Compiler friendly.
2026-03-27 21:43:31 +11:00
Ethan c4ef94aacf fix(coderd/x/chatd): prevent chat hang when workspace agent is unavailable (#23707)
## Problem

Chats with a persisted `agent_id` binding hang indefinitely when the
workspace is stopped. The stale agent row still exists in the DB, so
`ensureWorkspaceAgent` succeeds, but the dial blocks forever in
`AwaitReachable`. The MCP discovery goroutine used an unbounded context,
so `g2.Wait()` never returned and the LLM never started.

## Fix

Three targeted changes restore the pre-binding behavior where stopped
workspaces degrade gracefully instead of blocking:

1. **`dialWithLazyValidation`**: "no agents in latest build" is now a
terminal fast-fail — the hanging dial is canceled and
`errChatHasNoWorkspaceAgent` returned immediately, instead of falling
through to `waitForOriginalDial`.

2. **Pre-LLM workspace setup**: MCP discovery and instruction
persistence gate on `workspaceAgentIDForConn` before attempting any
dial. MCP discovery is bounded by a 5s timeout and checks the in-memory
tool cache first (using the cheap cached agent from
`ensureWorkspaceAgent`), so the common subsequent-turn path has zero DB
queries.

3. **`persistInstructionFiles`**: tracks whether the workspace
connection succeeded and skips sentinel persistence on failure, so the
next turn retries if the workspace is restarted.

## Scenarios

**Running workspace, subsequent turn (hot path):** MCP cache hit via
in-memory cached agent. Zero DB queries, zero dials. Unchanged from
#23274.

**Stopped workspace, persisted binding (the bug):** MCP cache hit (stale
descriptors, fine — they fail at invocation). Pre-LLM setup completes
instantly. Tool invocation enters `dialWithLazyValidation`, dial fails
or hangs, validation discovers no agents, returns
`errChatHasNoWorkspaceAgent`. Model sees the error and can call
`start_workspace`.

**New chat, running workspace:** `ensureWorkspaceAgent` resolves via
latest-build, persists binding. MCP discovery dials and caches tools.

**New chat, stopped workspace:** `ensureWorkspaceAgent` finds no agents,
returns `errChatHasNoWorkspaceAgent`. Pre-LLM setup skips. LLM starts
with built-in tools only.

**Rebuilt workspace (agent switched):** MCP cache hit with stale agent
(harmless for one turn). Tool invocation dials stale agent, fails fast,
`dialWithLazyValidation` switches to new agent, persists updated
binding.

**Workspace restarted after stop:** No sentinel was persisted during the
stopped turn, so instruction persistence retries. Agent binding switches
to the new agent via `workspaceAgentIDForConn`.

**Transient DB error during validation:** Not
`errChatHasNoWorkspaceAgent`, so `dialWithLazyValidation` falls through
to `waitForOriginalDial` (cannot prove stale). No false positive.

**Tool invocation on stopped workspace:** `getWorkspaceConn` calls
`ensureWorkspaceAgent` (returns stale row), then
`dialWithLazyValidation` validation discovers no agents, returns
`errChatHasNoWorkspaceAgent`, cached state cleared, error returned to
model.
2026-03-27 18:47:39 +11:00
Ethan d678c6fb16 fix(coderd/x/chatd): forward local status events to fix delayed-startup banner (#23650)
## Problem

The agent chat delayed-startup banner ("Response startup is taking
longer than expected") could appear even though the model was already
streaming.

The root cause is in `Subscribe()`: `message_part` events were delivered
via the fast local in-process stream, while `status` events were
delivered via PostgreSQL pubsub. Both feed into the same `select`
statement, and Go's `select` picks whichever channel is ready first —
there is no ordering guarantee between channels. So a `message_part`
could outrun the `status=running` that logically precedes it.

The frontend saw content arrive while it still thought the chat was
pending, triggering the banner.

## Fix

Also forward `status` events from the local channel, alongside
`message_part`.

Both event types already travel through the same FIFO subscriber
channel: `publishStatus()` is called before the first `message_part`, so
channel ordering guarantees the frontend sees `status=running` before
any content.

Pubsub still delivers a duplicate `status` event later; the frontend
deduplicates it (`setChatStatus` is idempotent — it early-returns when
the status hasn't changed).
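The idempotent early-return can be sketched as a reducer that returns the same reference when nothing changed (illustrative, not the actual store code):

```typescript
// Sketch of an idempotent status setter: the duplicate pubsub status
// event is a no-op because the reducer early-returns with the same
// object reference, so subscribers see no change.
interface ChatState {
  status: string;
}

function setChatStatus(state: ChatState, status: string): ChatState {
  if (state.status === status) {
    return state; // Duplicate event: same reference, no re-render.
  }
  return { ...state, status };
}
```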
2026-03-27 17:55:19 +11:00
Jaayden Halko 86c3983fc0 feat: add AI Governance seat capacity banners (#23411)
## Summary

Add site-wide banners for AI Governance seat usage thresholds:

1. **90% capacity warning (admin-only):** When actual AI Governance
seats are ≥90% and <100% of the license limit, admins see:
   > "You have used 90% of your AI governance add-on seats."

2. **Over-limit banner (admin-only):** When actual seats exceed the
license limit, admins see a prominent warning:
> "Your organization is using {actual} / {limit} AI Governance user
seats ({X}% over the limit). Contact sales@coder.com"
   - The percentage is floored to a whole number (Go int division / `Math.floor`)
   - Includes a clickable `mailto:sales@coder.com` link
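A sketch of the floored over-limit percentage (mirroring Go integer division with `Math.floor`):

```typescript
// Whole-number over-limit percentage, e.g. 110 seats against a limit
// of 100 is "10% over the limit". Floor matches Go's int division.
function percentOverLimit(actual: number, limit: number): number {
  return Math.floor(((actual - limit) * 100) / limit);
}
```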
2026-03-27 05:51:51 +00:00
Michael Suchacz 2312e5c428 feat: add manual chat title regeneration (#23633)
## Summary

Adds a "Generate new title" action that lets users manually regenerate a
chat's title using richer conversation context than the automatic
first-message title path.

## Changes

### Backend
- **New endpoint:** `POST
/api/experimental/chats/{chatID}/title/regenerate` returns the updated
Chat with a regenerated title
- **Manual title algorithm:** Extracts useful user/assistant text turns
→ selects first user turn + last 3 turns → builds context with gap
markers → renders prompt with anti-recency guidance → calls lightweight
model → normalizes output
- **Helpers:** `extractManualTitleTurns`,
`selectManualTitleTurnIndexes`, `buildManualTitleContext`,
`renderManualTitlePrompt`, `generateManualTitle` — all private, with the
public `Server.RegenerateChatTitle` method
- **SDK:** `ExperimentalClient.RegenerateChatTitle(ctx, chatID) (Chat,
error)`
- Persists title via existing `UpdateChatByID` and broadcasts
`ChatEventKindTitleChange`

### Frontend
- API client method + React Query mutation with cache invalidation
- "Generate new title" menu item (with wand icon) in both TopBar and
Sidebar dropdown menus
- Loading/disabled state while regeneration is in-flight
- Error toast on failure
- Stories updated for both menus

### Tests
- `quickgen_test.go`: Table-driven tests for all 4 helper functions
(turn extraction, index selection, context building, prompt rendering)
- `exp_chats_test.go`: Handler tests (ChatNotFound,
NotFoundForDifferentUser, NoDaemon)

## Design notes
- The existing auto-title path (`maybeGenerateChatTitle`, `titleInput`)
is completely unchanged
- Manual regeneration uses richer context (first user turn + last 3
turns + gap markers) vs the auto path's single first message
- Endpoint is experimental and marked with `@x-apidocgen {"skip": true}`
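The manual-title context selection can be sketched as follows; the helper name and gap-marker text are illustrative, not the actual private helpers:

```typescript
// Sketch of the manual-title selection: keep the first user turn plus
// the last three turns, inserting a gap marker where turns were
// skipped. The real helpers are private Go functions in chatd.
interface Turn {
  role: "user" | "assistant";
  text: string;
}

const GAP_MARKER = "[...]";

function buildTitleContext(turns: Turn[]): string[] {
  const lastThreeStart = Math.max(turns.length - 3, 0);
  const firstUser = turns.findIndex((t) => t.role === "user");
  const picked: string[] = [];
  // Include the first user turn only if it is not already covered by
  // the trailing window.
  if (firstUser !== -1 && firstUser < lastThreeStart) {
    picked.push(`${turns[firstUser].role}: ${turns[firstUser].text}`);
    if (firstUser + 1 < lastThreeStart) {
      picked.push(GAP_MARKER); // Turns were skipped between selections.
    }
  }
  for (const t of turns.slice(lastThreeStart)) {
    picked.push(`${t.role}: ${t.text}`);
  }
  return picked;
}
```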
2026-03-27 01:47:19 +01:00
Michael Suchacz f35f2a28e6 refactor(site/src/pages/AgentsPage): normalize transcript scrolling to top-to-bottom flow (#23668)
Converts the Agents page transcript from reverse-scroll
(`flex-col-reverse`) to normal top-to-bottom document flow with
explicit auto-scroll pinning.

The previous `flex-col-reverse` approach (from #23451) required
browser-specific workarounds for Chrome vs Firefox `scrollTop` sign
conventions, and compensation logic that could fight user scroll intent
during streaming. This replaces it with standard scroll math
(`scrollHeight - scrollTop - clientHeight`) while keeping the proven
`ScrollAnchoredContainer` observer machinery.
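The standard bottom-detection math referenced above, as a minimal sketch (the threshold value is illustrative):

```typescript
// In normal top-to-bottom flow, the distance from the bottom of a
// scroll container is scrollHeight - scrollTop - clientHeight; it is
// 0 when fully scrolled down. No browser-specific sign handling needed.
interface ScrollMetrics {
  scrollHeight: number;
  scrollTop: number;
  clientHeight: number;
}

function distanceFromBottom(el: ScrollMetrics): number {
  return el.scrollHeight - el.scrollTop - el.clientHeight;
}

function isNearBottom(el: ScrollMetrics, threshold = 32): boolean {
  return distanceFromBottom(el) <= threshold;
}
```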

Changes:
- `AgentDetailView.tsx`: layout flip to `flex-col`, standard bottom
detection, explicit prepend restoration via `pendingPrependRef`
snapshot, sentinel moved inside content wrapper, initial mount bottom
pin, `scrollToBottomRef` imperative API for send/edit, user-interrupt
guard with `isNearBottom` check
- `AgentDetailView.stories.tsx`: all scroll stories updated for
normal-flow semantics, new `ScrollAnchorPreservedOnOlderHistoryLoad`
story with deterministic `IntersectionObserver` mock
- `AgentsSkeletons.tsx` + `AgentDetailLoadingView`: removed
`flex-col-reverse` so loading state matches the real transcript layout
- `AgentDetail.tsx`: replaced `scrollTop = 0` on send/edit with
imperative `scrollToBottomRef` call

Supersedes #23576 (which was reverted in #23638). Same behavioral goals
but keeps scroll logic local to `ScrollAnchoredContainer` rather than
extracting a separate hook.
2026-03-27 01:31:17 +01:00
Kayla はな 4fab372bdc chore: modernize utils/ imports (#23698) 2026-03-26 16:20:25 -06:00
Kayla はな aa81238cd0 chore: modernize all typescript imports (#23511) 2026-03-26 16:04:24 -06:00
Hugo Dutka f87d6e6e82 feat(site): rich preview for the spawn_computer_use_agent tool (#23684)
Adds preview cards for the `spawn_computer_use_agent` tool. Spawning
that agent now renders a rich preview card saying "Spawning computer
use sub-agent".
The "waiting for" tool now displays an inline desktop preview which can
be clicked to reveal the desktop in the sidebar.


https://github.com/user-attachments/assets/e486ca0e-a569-4142-bb12-db3b707967b8
2026-03-26 21:52:03 +00:00
Matt Vollmer 113aaa79a0 feat: add pinned chats with drag-to-reorder (#23615)
https://github.com/user-attachments/assets/bd5d12a1-61b3-4b7d-83b6-317bdfb60b3c

## Summary

Adds pinned chats to the agents page sidebar with server-side
persistence and drag-to-reorder. Users can pin/unpin chats via the
context menu, and pinned chats appear in a dedicated "Pinned" section
above the time-grouped list.

## Database

Migration `000453_chat_pin_order`: adds `pin_order integer DEFAULT 0 NOT
NULL` column on `chats` (0 = unpinned, 1+ = pinned in display order).
Three SQL queries handle pin operations server-side using CTEs with
`ROW_NUMBER()`:

- `PinChatByID`: normalizes existing orders and appends to end
- `UnpinChatByID`: sets target to 0 and compacts remaining pins
- `UpdateChatPinOrder`: shifts neighbors, clamps to `[1, pinned_count]`

All queries exclude archived chats. `ArchiveChatByID` clears `pin_order`
on archive. The handler rejects pinning archived chats with 400.

## Backend

Pin/unpin/reorder go through the existing `PATCH
/api/experimental/chats/{chat}` via the `pin_order` field on
`UpdateChatRequest`. The handler routes based on current pin state:
`pin_order == 0` unpins, `> 0` on an already-pinned chat reorders, `> 0`
on an unpinned chat appends to end.
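The routing rule can be sketched as (names illustrative):

```typescript
// Sketch of how the PATCH handler routes a pin_order update based on
// the chat's current pin state.
type PinAction = "unpin" | "reorder" | "pin";

function routePinUpdate(
  currentPinOrder: number,
  requestedPinOrder: number,
): PinAction {
  if (requestedPinOrder === 0) {
    return "unpin";
  }
  // Non-zero request: reorder if already pinned, otherwise append.
  return currentPinOrder > 0 ? "reorder" : "pin";
}
```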

## Frontend

- `pinChat` / `unpinChat` / `reorderPinnedChat` optimistic mutations
using shared `isChatListQuery` predicate
- Sidebar renders Pinned section above time groups, excludes pinned
chats from time groups
- Pin/Unpin context menu items (hidden for child/delegated chats)
- `@dnd-kit/core` + `@dnd-kit/sortable` for drag-to-reorder with
`MouseSensor`, `TouchSensor`, and `KeyboardSensor`
- Local pin-order override prevents flash on drop; click blocker
prevents NavLink navigation after drag

---
*PR generated with Coder Agents*
2026-03-26 16:52:02 -04:00
Mathias Fredriksson f3a8096ff6 perf(site): move useSpeechRecognition into compiled path (#23689)
The hook lived at src/hooks/, outside the React Compiler scope.
It returned a new object literal every render, causing three
handler guards and two downstream JSX guards in AgentChatInput
(233 cache slots) to always miss.

Move the hook to src/pages/AgentsPage/hooks/ where the compiler
processes it. The compiler auto-memoizes the return object, so
manual useMemo is unnecessary.

Also replace ctorRef.current render-time access with a useState
lazy initializer. The ref access caused a CompileError that
would have prevented compilation. Browser API availability is
constant, so useState captures it once.
2026-03-26 22:34:34 +02:00
Jeremy Ruppel beece6d351 fix(site): make AI Bridge sessions query key more specific (#23687)
Seeing what appears to be a react-query caching issue on dogfood. Not
able to repro on my workspace, so this is sort of a shot in the dark 🤷
2026-03-26 16:13:02 -04:00
Jeremy Ruppel 58f744a5c1 fix(site): remove noisy tooltips from AI Bridge session row (#23690)
The provider and client icons both had a tooltip containing the exact
same content as the badge. This removes the tooltips
2026-03-26 15:58:10 -04:00
Kyle Carberry 0f86c4237e feat: add workspace MCP tool discovery and proxying for chat (#23680)
Coder's chat (chatd) can now discover and use MCP servers configured in
a workspace's `.mcp.json` file. This brings project-specific tooling
(GitHub, databases, docs servers, etc.) into the chat without any manual
configuration.

## How it works

The workspace agent reads `.mcp.json` from the workspace directory (same
format Claude Code uses), connects to the declared MCP servers —
spawning child processes for stdio servers and connecting over the
network for HTTP/SSE — and caches their tool lists. Two new agent HTTP
endpoints expose this:

- `GET /api/v0/mcp/tools` returns the cached tool list (supports
`?refresh=true`)
- `POST /api/v0/mcp/call-tool` proxies calls to the correct server

On each chat turn, chatd calls `ListMCPTools` through the existing
`AgentConn` tailnet connection, wraps each tool as a
`fantasy.AgentTool`, and adds them to the LLM's tool set alongside
built-in and admin-configured MCP tools. Tool names are prefixed with
the server name (`github__create_issue`) to avoid collisions.
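A minimal sketch of the collision-avoiding prefix scheme:

```typescript
// Tool names are namespaced per MCP server with a double underscore,
// e.g. github__create_issue, so two servers exposing the same tool
// name never collide in the LLM's tool set.
function prefixToolName(server: string, tool: string): string {
  return `${server}__${tool}`;
}
```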

Failed server connections are logged and skipped — they never block the
agent or break the chat. Child stdio processes are terminated on agent
shutdown.
2026-03-26 19:57:02 +00:00
Jeremy Ruppel 02b58534a0 fix: use TokenBadges for session list row (#23619)
The sessions list row uses a bespoke token badge instead of the
`<TokenBadge />` component, so this fixes that.
2026-03-26 15:50:32 -04:00
Atif Ali e35fa8b9ee fix(site): use restartWorkspace instead of startWorkspace in schedule dialog (#23658) 2026-03-27 00:45:16 +05:00
Jeremy Ruppel 1358233c83 fix(site): decrease chevron size in AI Bridge session row (#23688)
The Sessions table row chevron icon is too big. Make it smaller 🤏
2026-03-26 15:10:41 -04:00
Asher ea4070c0ce feat: add multi-user dialog select for adding group members (#23396)
Instead of the single-user dropdown we had before.
2026-03-26 10:42:04 -08:00
Hugo Dutka 1b2fab8306 feat(site): enable copy and paste in agents desktop (#23686) 2026-03-26 19:32:48 +01:00
Mathias Fredriksson 94e5de22f7 perf(site): fix compiler memoization gap in AgentDetailInput (#23683)
The React Compiler failed to memoize the messages derivation
chain because a useDashboard() hook call sat between the
messages computation and its consumer (getLatestContextUsage).
An IIFE around the context usage logic also fragmented the
dependency chain.

Replacing the IIFE with a ternary and reordering the non-hook
computation before the hook call lets the compiler group
messages + getLatestContextUsage into a single cache guard
keyed on messagesByID and orderedMessageIDs.
2026-03-26 18:30:06 +00:00
Mathias Fredriksson 6cbb7c6da7 fix(provisioner/terraform): regenerate fixtures with current provider (#23685)
Two test fixtures (devcontainer-resources, multiple-agents-multiple-envs)
were generated before terraform-provider-coder v2.15.0 added the
merge_strategy attribute to coder_env. Running generate.sh with the
current provider adds merge_strategy: "replace" (the default) to all
coder_env resources, causing unstable diffs on every regeneration.
2026-03-26 18:22:45 +00:00
Jeremy Ruppel fc60a6bf9b feat(site): add AIBridge Sessions to deployment menu (#23679)
Adds AI Bridge Sessions link to the deployment menu.
2026-03-26 13:56:51 -04:00
Atif Ali a52153968d fix(site): use Anthropic icon instead of Claude icon for provider (#23661) 2026-03-26 22:25:55 +05:00
Danielle Maywood d18e700699 fix(site): collapse chat toolbar badges fluidly on overflow (#23663) 2026-03-26 17:21:01 +00:00
Mathias Fredriksson 0234e8fffd perf(site): narrow buildStreamTools compiler cache guard dependencies (#23677)
The React Compiler guarded buildStreamTools on the whole
streamState ref, which changes on every text chunk. Refactoring
the function to accept toolCalls and toolResults directly lets
the compiler guard on those sub-fields, which are stable during
text-only streaming.

Before: $[0] !== streamState (misses every text chunk)
After:  $[0] !== toolCalls || $[1] !== toolResults (passes when
        only blocks change)

Verified: 181 functions compile, 0 diagnostics. Reference
stability tests confirm toolCalls/toolResults retain identity
across text-part updates and change when tool data updates.
2026-03-26 17:18:35 +00:00
david-fraley fea4560a64 fix(site): use docs() helper for hardcoded documentation URLs (#23606) 2026-03-26 17:10:47 +00:00
Danielle Maywood 6dee7cf11d perf(site/src/pages/AgentsPage): convert renderBlockList to BlockList component (#23673) 2026-03-26 17:07:36 +00:00
Danielle Maywood d4fc4e0837 fix(site): fix StreamingCodeFence storybook flake (#23681) 2026-03-26 17:00:47 +00:00
david-fraley 8da45c14bc fix(site): fix grammar in batch update description (#23605) 2026-03-26 11:59:46 -05:00
Cian Johnston bfee7e6245 fix: populate all chat fields in pubsub events (#23664)
*Problem:* `publishChatPubsubEvent` was constructing a partial
`codersdk.Chat` that omitted `LastModelConfigID` and other fields. Go's
zero-value UUID caused the sidebar to show "Default model" for chats
received via SSE.

*Solution:*
- Extracted `convertChat`/`convertChats` from `exp_chats.go` into
`db2sdk.Chat`/`db2sdk.Chats`, alongside existing `ChatMessage`,
`ChatQueuedMessage`, and `ChatDiffStatus` converters.
`publishChatPubsubEvent` now calls `db2sdk.Chat(chat, nil)` instead of
maintaining its own copy of the conversion logic
- Added backend integration test
`TestWatchChats/CreatedEventIncludesAllChatFields`
- Added frontend regression tests for nil-UUID and valid model config ID
cases

> 🤖 Created by Coder Agents, reviewed by this human.
2026-03-26 16:49:26 +00:00
Danielle Maywood 52b5d5fdc6 fix(site): match date range picker button height on template insights page (#23667) 2026-03-26 16:36:51 +00:00
Mathias Fredriksson cc4cca90fd perf(site): memo-wrap SmoothedResponse and ReasoningDisclosure to skip completed blocks during streaming (#23674)
During streaming, StreamingOutput's compiler cache guard misses
every chunk because streamState.blocks and streamTools are new
references. This causes renderBlockList to recreate all child JSX
elements, and React calls every child function even for blocks
that finished streaming.

Wrapping SmoothedResponse and ReasoningDisclosure in React.memo
lets React skip the function call entirely when props are stable.
For N completed response blocks and M completed thinking blocks,
this reduces per-chunk function calls from N+M+1 to 1. The
compiler still compiles both inner functions cleanly (6 and 12
cache slots respectively, zero diagnostics).
2026-03-26 16:35:12 +00:00
Hugo Dutka 081d91982a fix(site): fix desktop visibility glitch (#23678)
If the desktop viewer component was hidden, for example after collapsing
the sidebar, the next time it was shown the viewer would be blank. This
PR fixes that.
2026-03-26 17:20:16 +01:00
Danielle Maywood 00cd7b7346 fix: match workspace picker font size to plus menu dropdown (#23670) 2026-03-26 16:12:24 +00:00
Danny Kopping 801e57d430 feat: session detail API (#23203) 2026-03-26 18:09:53 +02:00
Michael Suchacz e937f89081 feat: add enabled toggle to chat model admin panel (#23665)
Adds an `enabled` toggle to the chat model admin create/edit form so
admins can disable a model without soft-deleting it. Disabled models
stay visible in admin settings but stop appearing in user-facing model
selectors.

The backend already supported this (`chat_model_configs.enabled` column,
filtered queries, and SDK fields). This change wires it into the admin
UI and adds coverage on both sides.

**Backend:** three new subtests in `coderd/exp_chats_test.go` verifying
the visibility contract (admin sees disabled models, non-admin doesn't,
update-to-disabled preserves the record).

**Frontend:** `enabled` field added to form logic and seeded from the
existing model (defaults to `true` for new models). A Switch+Tooltip
control renders in the form header, matching the MCP Server panel
pattern.
Two interaction stories cover the create-disabled and toggle-existing
flows.
2026-03-26 17:07:20 +01:00
Danielle Maywood 5c7057a67f fix(site): enable streaming-mode Streamdown for live chat output (#23676) 2026-03-26 15:22:54 +00:00
Ehab Younes 249ef7c567 feat(site): harden Agents embed frame communication and add theme sync (#23574)
Add theme synchronization, navigation blocking, scroll-to-bottom
handling, and chat-ready signaling to the agent embed page. The parent
frame can now set light/dark theme via postMessage or query param, and
ThemeProvider skips its own class manipulation when the embed marker is
present. Navigation attempts that leave the embed route are intercepted
and forwarded to the parent frame. The scroll container ref is lifted
to the layout so the parent can request scroll-to-bottom.
2026-03-26 18:03:30 +03:00
Cian Johnston 81fe7543b4 chore: set tls.VersionTLS12 MinVersion in cli/server.go to address gosec warning (#23646)
I was investigating `//nolint` comments and this one popped up.
It raised my eyebrows enough to warrant its own PR.
2026-03-26 14:53:47 +00:00
Kyle Carberry 61d2a4a9b8 fix(site): preserve streaming output when queued message is sent (#23595)
## Problem

When the user sends a message while the agent is actively streaming a
response, `handleSend` called `store.clearStreamState()`
**unconditionally before** the POST request. If the server queues the
message (`response.queued = true` because the agent is busy), the
in-progress stream output is immediately wiped from the UI. The full
text only reappears once the agent finishes and the durable message
arrives via WebSocket — causing a visible cutoff mid-stream.

## Fix

Move `clearStreamState()` from before the POST to **after** the
response, gated behind `!response.queued`:

- **Queued sends** (`response.queued === true`): `clearStreamState()` is
never called. The stream continues uninterrupted. The WebSocket `status`
handler already clears stream state when the chat transitions to
`"pending"` / `"waiting"` after the queued message is dequeued.
- **Non-queued sends** (`response.queued === false`):
`clearStreamState()` + `upsertDurableMessage()` fire immediately after
the POST, same net behavior as before.
- **Edit and promote paths**: Unchanged — those are intentional
interruptions where eager clearing is correct.

### Additional behavior changes (both improvements)

1. **Failed sends no longer wipe stream state.** Previously
`clearStreamState()` ran before the `try` block, so a network error
still wiped the agent's in-progress output. Now the `catch` re-throws
before reaching `clearStreamState()`, preserving the stream on failure.

2. **`clearStreamState()` fires for all non-queued responses**, not just
those with a `message` body. The original guard was `!response.queued &&
response.message`; now `clearStreamState()` is under `!response.queued`
while `upsertDurableMessage` retains the `response.message` check. The
server always sets `message` for non-queued responses, so this is a
no-op in practice but is semantically correct.

## Testing

**AgentDetail.stories.tsx**: New `StreamingSurvivesQueuedSend` story
exercises the full flow — mocks `createChatMessage` to return `{ queued:
true }`, delivers streaming text via WebSocket, sends a message through
the UI, and asserts the streaming text remains visible.
2026-03-26 10:35:31 -04:00
Mathias Fredriksson b23c07cf23 perf(site): use lazy iteration in sliceAtGraphemeBoundary (#23671)
Array.from(graphemeSegmenter.segment(text)) materializes the
entire text into an array before iterating, even though the loop
breaks early at the visible prefix length. During streaming at
60fps, this makes each frame O(full text) instead of O(prefix).

Benchmark on 5000-char text with 200-char prefix: 22.6x faster
(1.44ms to 0.06ms per call, saving 8.3% of the frame budget).
The fallback codepoint path had the same issue with Array.from.
2026-03-26 16:33:48 +02:00
Ethan 87aafd4ae2 fix(site): stabilize date-dependent storybook snapshots (#23657)
_Generated by mux but reviewed by a human_

Several stories computed dates relative to `dayjs()` / `new Date()` at
render time, causing snapshot text to shift daily. I ran into this on my
PRs.

This adds an optional `now` prop to `DateRangePicker`,
`TemplateInsightsControls`, and `CreateTokenForm` so stories can inject
a deterministic clock without global mocking. License stories replace
the misleadingly-named `FIXED_NOW = dayjs().startOf("day")` with
absolute timestamps. All fixed timestamps use noon UTC to avoid timezone
boundary issues.

Affected stories:
- `AgentSettingsPageView`: Usage Date Filter, Usage Date Filter Refetch
Overlay
- `LicenseCard`: Expired/future AI Governance variants, Not Yet Valid
- `LicensesSettingsPage`: Shows Addon Ui For Future License Before Nbf
- `TemplateInsightsControls`: Day
- `CreateTokenPage`: Default
2026-03-27 01:21:52 +11:00
Ethan 4d74603045 fix(coderd/x/chatd): respect provider Retry-After headers in chat retry loop (#23351)
> **PR Stack**
> 1. **#23351** ← `#23282` *(you are here)*
> 2. #23282 ← `#23275`
> 3. #23275 ← `#23349`
> 4. #23349 ← `main`

---

## Summary

`chatretry.Retry()` used pure exponential backoff (1 s, 2 s, 4 s, …) and
never consulted provider `Retry-After` headers. Fantasy's
`ProviderError` carries `ResponseHeaders` including `Retry-After`, but
`chaterror.Classify()` only parsed error text and silently dropped the
structured transport metadata.

This makes `Retry-After` a first-class signal in the classification →
retry pipeline.

<img width="853" height="346" alt="image"
src="https://github.com/user-attachments/assets/65f012b6-8173-43d2-957e-ab9faddea525"
/>


## Changes

### `coderd/chatd/chaterror/classify.go`

- Added `RetryAfter time.Duration` field to `ClassifiedError` — a
normalized minimum retry delay derived from provider response metadata.
- `Classify()` now calls `extractProviderErrorDetails()` before falling
back to text heuristics. Structured `ProviderError.StatusCode` takes
priority over regex extraction.
- `normalizeClassification()` preserves and clamps `RetryAfter`.

### `coderd/chatd/chaterror/provider_error.go` (new)

Provider-specific extraction, isolated from the text-based
classification logic:

- `extractProviderErrorDetails()` unwraps `*fantasy.ProviderError` from
the error chain via `errors.As`.
- `retryAfterFromHeaders()` parses headers in priority order:
  1. `retry-after-ms` (OpenAI-specific, millisecond precision)
  2. `retry-after` (standard HTTP — integer seconds or HTTP-date)
- Case-insensitive header key lookup.
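The priority-ordered, case-insensitive lookup can be sketched as follows. This is a TypeScript illustration of the Go logic; HTTP-date parsing is omitted for brevity:

```typescript
// Sketch of retryAfterFromHeaders: prefer retry-after-ms (millisecond
// precision), fall back to retry-after in integer seconds. Returns a
// delay in milliseconds, or null when no usable header is present.
function retryAfterFromHeaders(headers: Record<string, string>): number | null {
  // Case-insensitive lookup.
  const lower = new Map(
    Object.entries(headers).map(([k, v]) => [k.toLowerCase(), v]),
  );
  const ms = Number(lower.get("retry-after-ms"));
  if (Number.isFinite(ms) && ms > 0) {
    return ms;
  }
  const secs = Number(lower.get("retry-after"));
  if (Number.isFinite(secs) && secs > 0) {
    return secs * 1000;
  }
  return null; // No hint: caller keeps its own backoff delay.
}
```

The effective delay is then `max(baseDelay, retryAfter)`, so the provider hint acts as a floor without weakening the local exponential backoff.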

### `coderd/chatd/chatretry/chatretry.go`

- `effectiveDelay(attempt, classified)` computes `max(Delay(attempt),
classified.RetryAfter)` — the provider hint acts as a floor without
weakening the local exponential backoff.
- `Retry()` now uses `effectiveDelay` and passes the effective delay to
both `onRetry(...)` and the sleep timer, so downstream payloads, logs,
and the frontend countdown stay aligned automatically.

### Tests

- `classify_test.go`: Structured provider status + `Retry-After`
extraction, `retry-after-ms` priority, HTTP-date parsing, invalid header
fallback, `WithProvider` preservation.
- `chatretry_test.go`: Retry-after-as-floor semantics — longer hint
wins, shorter hint keeps base delay.

## Design notes

- **No SDK/API/frontend changes needed.** `codersdk.ChatStreamRetry`
already carries `DelayMs` and `RetryingAt`, and the frontend already
consumes them. The fix is purely in the server-side delay computation.
- **Existing retryability rules unchanged.** This fixes *when* we sleep,
not *whether* an error is retryable.
- **Provider hint is a floor:** `max(baseDelay, RetryAfter)` ensures we
never retry earlier than the provider asks, and never weaken our own
backoff curve.
2026-03-27 01:20:46 +11:00
Cian Johnston 847a88c6ca chore: clean up stale and dangerous //nolint comments (#23643)
## Changes

- **Commit 1**: Remove 17 unnecessary `//nolint` directives:
  - `//nolint:varnamelen` — linter not active
  - `//nolint:unused` on exported `SlimUnsupported`
  - `//nolint:govet` in `coderd/httpmw/csrf` — no longer fires
  - `//nolint:revive` on functions refactored since the nolint was added
- `//nolint:paralleltest` citing Go 1.22 loop variable capture
(obsolete)
- Bare `//nolint` narrowed to specific `//nolint:gocritic` with
justification

- **Commit 2**: Fix root causes behind 5 dangerous nolint suppressions:
- Add `MinVersion: tls.VersionTLS12` to TLS client config (removes
`gosec` G402)
- Delete trivial unexported wrappers `apiKey()`/`normalizeProvider()` in
chatprovider (removes `revive` confusing-naming)
- Add doc comments to `StartWithAssert` and `Router` (removes `revive`
exported)
  - Rename unused parameters to `_` in integration test helpers

> 🤖 This PR was created using Coder Agents and reviewed by me.
2026-03-26 14:13:53 +00:00
Jeremy Ruppel a0283ff775 fix(site): use toLocaleString for pagination offsets (#23669)
The Pagination widget localizes the number format of the total results
but not the page offsets.
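A sketch of the fix (the locale is pinned here only to make the example deterministic; the real widget uses the browser default):

```typescript
// Offsets get the same toLocaleString treatment as the total count,
// so "1000001" renders as "1,000,001" in en-US and with the user's
// grouping convention elsewhere.
function formatOffset(n: number): string {
  return n.toLocaleString("en-US");
}
```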

Before

<img width="620" height="78" alt="Screenshot 2026-03-26 at 09 18 01"
src="https://github.com/user-attachments/assets/7ac0ad9a-7baa-4b30-b3d0-0e0325f8433b"
/>

After

<img width="297" height="42" alt="Screenshot 2026-03-26 at 9 41 22 AM"
src="https://github.com/user-attachments/assets/79c68366-95fa-4012-8419-5cd6f6e10ae3"
/>
2026-03-26 09:50:49 -04:00
Cian Johnston f164463c6a fix(scripts/metricsdocgen): shush the prometheus scanner in CI (#23642)
- Suppress informational `log.Printf` messages from the metrics scanner
when stdout is not a TTY (i.e. piped via `atomic_write` in `make gen` or
CI)
- Genuine warnings (`warnf`) still print unconditionally so real
problems remain visible
- `log.Fatalf` for fatal errors is unchanged

> 🤖 Created by Coder Agents and reviewed by a human
2026-03-26 12:58:02 +00:00
Michael Suchacz 4f063cdc47 feat: separate default and additional Coder Agents system prompts (#23616)
Admins can now control whether the built-in Coder Agents default system
prompt is prepended to their custom instructions, rather than having the
custom prompt silently replace the default.

**Changes:**
- New `include_default_system_prompt` boolean toggle (defaults to `true`
for existing deployments) stored as a site config key — no migration
needed.
- GET `/api/experimental/chats/config/system-prompt` returns the toggle
state, the custom prompt, and a preview of the built-in default.
- PUT persists both the toggle and custom prompt atomically in a single
transaction.
- `resolvedChatSystemPrompt()` composes `[default?, custom?]` joined by
`\n\n`, falling back to the built-in default on DB errors.
- Settings UI adds a Switch toggle with conditional helper text and a
"Preview" button that shows the built-in default prompt via the existing
`TextPreviewDialog`.
- Comprehensive test coverage: 15 subtests covering toggle behavior,
prompt composition matrix, auth boundaries, and integration with chat
creation.
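The composition rule can be sketched in Go (function name and prompt text are illustrative stand-ins, not the actual `resolvedChatSystemPrompt()` implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// builtinDefaultPrompt stands in for the real built-in system prompt.
const builtinDefaultPrompt = "You are a Coder agent."

// resolveSystemPrompt joins [default?, custom?] with a blank line, falling
// back to the built-in default when nothing else applies.
func resolveSystemPrompt(includeDefault bool, custom string) string {
	var parts []string
	if includeDefault {
		parts = append(parts, builtinDefaultPrompt)
	}
	if custom != "" {
		parts = append(parts, custom)
	}
	if len(parts) == 0 {
		// Never run with an empty prompt.
		return builtinDefaultPrompt
	}
	return strings.Join(parts, "\n\n")
}

func main() {
	fmt.Println(resolveSystemPrompt(true, "Prefer concise answers."))
}
```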
2026-03-26 13:32:41 +01:00
Cian Johnston d175e799da feat: show agent badge on workspace list (#23453)
- Adds `GET /api/experimental/chats/by-workspace` endpoint that returns
workspace_id → latest chat_id mapping
- Modifies FE to fetch this alongside the workspace list (gated on the
`agents` experiment) and render an "Agent" badge similar to the existing
"Task" badge in `WorkspacesTable`
- Badge links to the "latest chat" linked to the given workspace.

Notes:
- Intentionally uses `fetchWithPostFilter` for RBAC to decouple from
workspaces API — will migrate to `workspaces_expanded` view later.
- If users have multiple chats linked to the same workspace, the badge
will link to the most recently updated one.

> 🤖 This PR was created with the help of Coder Agents, and has been
reviewed by my human. 🧑‍💻
2026-03-26 11:30:12 +00:00
Jaayden Halko 3fb7c6264f feat: display the AI add-on column in the UI on the Users and Organization Members tables (#23291)
## Summary

Adds an entitlement-gated **AI add-on** column to both the **Users**
table and the **Organization Members** table. When
`ai_governance_user_limit` is entitled, each row shows whether the user
is consuming an AI seat.

## Background

The AI governance add-on tracks which users are consuming AI seats.
Admins need visibility into per-user seat consumption directly from the
user management tables. This change surfaces that information through
both the site-wide Users table and the per-organization Members table,
gated behind the `ai_governance_user_limit` entitlement so the column
only appears when the feature is licensed.

## Implementation

### Backend
- **New SQL query** `GetUserAISeatStates`
(`coderd/database/queries/aiseatstate.sql`) — returns user IDs consuming
an AI seat, derived from:
  - Users with entries in `aibridge_interceptions` (AI Bridge usage)
- Users who own workspaces with `has_ai_task = true` builds (AI Tasks
usage)
- **SDK types** — added `has_ai_seat: boolean` to `codersdk.User` and
`codersdk.OrganizationMemberWithUserData`
- **Handler wiring** — both the Users list endpoint (`coderd/users.go`)
and all Members endpoints (`coderd/members.go`) query AI seat state per
page of user IDs and populate the response field
- **dbauthz** — per-user `ActionRead` checks on `ResourceUserObject`

### Frontend
- **Shared `AISeatCell` component**
(`site/src/modules/users/AISeatCell.tsx`) — green `CircleCheck` for
consuming, gray `X` for non-consuming
- **`TableColumnHelpTooltip`** — extended with `ai_addon` variant with
tooltip: *"Users with access to AI features like AI Bridge, Boundary, or
Tasks who are actively consuming a seat."*
- **Column visibility** gated behind
`useFeatureVisibility().ai_governance_user_limit`

## Validation

- Backend: dbauthz full method suite (`TestMethodTestSuite`) passes
including new `GetUserAISeatStates` test
- Backend: `TestGetUsers`, `TestUsersFilter`, CLI golden file tests pass
- Frontend: 7/7 tests pass across `UsersPage.test.tsx` and
`OrganizationMembersPage.test.tsx` (column visibility gating both
directions)
- `go build ./coderd/...` compiles clean
- `pnpm --dir site run lint:types` passes
- `make gen` clean

## Risks

- **Pagination performance**: The AI seat query is scoped to the current
page's user IDs (not a full table scan), keeping it efficient for
paginated views.
- **Semantic scope**: The workspace-side AI seat derivation uses "any
build with `has_ai_task = true`" rather than "latest build only". If the
product intent is latest-build-only, this can be tightened in a
follow-up.

---

_Generated with `mux` • Model: `anthropic:claude-opus-4-6` • Thinking:
`xhigh` • Cost: `$27.25`_

<!-- mux-attribution: model=anthropic:claude-opus-4-6 thinking=xhigh
costs=27.25 -->
2026-03-26 10:36:40 +00:00
Danny Kopping 09d2588e2a docs: AI session auditing (#23660)
_Disclaimer: produced with the help of Claude Opus 4.6, heavily modified
by me._

Closes https://github.com/coder/internal/issues/1341

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2026-03-26 09:49:53 +00:00
Danny Kopping 8eade29e68 chore: update AI Bridge warning to require AI Governance Add-On (#23662)
*Disclaimer: implemented by a Coder Agent using Claude Opus 4.6,
reviewed by me.*

Replace the transitional soft warning message:

> AI Bridge is now Generally Available in v2.30. In a future Coder
version, your deployment will require the AI Governance Add-On to
continue using this feature. Please reach out to your account team or
sales@coder.com to learn more.

with the definitive requirement message:

> The AI Governance Add-On is required to use AI Bridge. Please reach
out to your account team or sales@coder.com to learn more.

Updated in:
- `enterprise/coderd/license/license.go`
- `enterprise/coderd/license/license_test.go` (2 occurrences)
2026-03-26 11:10:53 +02:00
Ethan 15f2fa55c6 perf(coderd/x/chatd): add process-wide config cache for hot DB queries (#23272)
## Summary

Adds a process-wide cache for three hot database queries in `chatd` that
were hitting Postgres on **every chat turn** despite returning
rarely-changing configuration data:

| Query | Before (50k turns) | After | Reduction |
|---|---|---|---|
| `GetEnabledChatProviders` | ~98.6k calls | ~500-1000 | ~99% |
| `GetChatModelConfigByID` | ~49.2k calls | ~500-1000 | ~98% |
| `GetUserChatCustomPrompt` | ~46.7k calls | ~1000-2000 | ~97% |

These were identified via `coder exp scaletest chat` (5000 concurrent
chats × 10 turns) as the dominant source of Postgres load during chat
processing.

## Design

Follows the established **webpush subscription cache pattern**
(`coderd/webpush/webpush.go`):
- `sync.RWMutex` + `tailscale.com/util/singleflight` (generic) +
generation-based stale prevention + TTL
- 10s TTL for provider/model config, 5s TTL for user prompts
- Negative caching for `sql.ErrNoRows` on user prompts (the common case
— most users don't set custom prompts)
- Deep-clones `ChatModelConfig.Options` (`json.RawMessage` = `[]byte`)
on both store and read paths

### Invalidation

Single pubsub channel (`chat:config_change`) with kind discriminator for
cross-replica cache invalidation. Seven publish points in
`coderd/chats.go` cover all admin mutation endpoints
(create/update/delete for providers and model configs, put for user
prompts).

_This PR was generated with mux and was reviewed by a human_
2026-03-26 18:04:53 +11:00
Danny Kopping 2ff329b68a feat(site): add banner on request-logs page directing users to sessions (#23629)
*Disclaimer: implemented by a Coder Agent using Claude Opus 4.6*

Adds an info banner on the `/aibridge/request-logs` page encouraging
users to visit `/aibridge/sessions` for an improved audit experience.

This allows us to validate whether customers still find the raw request
logs view useful before removing it in a future release.

Fixes #23563
2026-03-26 11:57:50 +05:00
Ethan ad3d934290 fix(site/src/pages/AgentsPage): clear retry banner on stream forward progress (#23653)
When a provider request fails and retries, the "Retrying request" banner
lingered in the UI after the retry succeeded. This happened because
`retryState` was only cleared on explicit `status` events (`running`,
`pending`, `waiting`), not when the stream resumed with `message_part`
or `message` events. Since the backend does not publish a
dedicated"retry resolved" event, the banner stayed visible for the
entire duration of the successful response.

Add `store.clearRetryState()` calls to the `message_part`, `message`,
and `status` event handlers so the banner disappears as soon as content
flows again.

Closes https://github.com/coder/coder/issues/23624
2026-03-26 17:41:13 +11:00
Ethan 21c2acbad5 fix: refine chat retry status UX (#23651)
Follow-up to #23282. The retry and terminal error callouts had a few UX
oddities:

- Auto-retrying states reused backend error text that said "Please try
again" even while the UI was already retrying on behalf of the user.
- Terminal error states also said "Please try again" with no action the
user could take.
- `startup_timeout` had no specific title or retry copy — it fell
through to the generic "Retrying request" heading.
- The kind pill showed raw enum values like `startup_timeout` and
`rate_limit`.
- Terminal error metadata showed a "Retryable" / "Not retryable" label
that does not help users.
- A separate "Provider anthropic" metadata row duplicated information
already present in the message body.
- The `usage-limit` error kind used a hyphen while every backend kind
uses underscores.

Changes:

**Backend (`chaterror/message.go`)**

- Split message generation into `terminalMessage()` and
`retryMessage()`, replacing the old `userFacingMessage()`.
- Terminal messages include HTTP status codes and actionable guidance
(e.g. "Check the API key, permissions, and billing settings.").
- Retry messages are clean factual statements without status codes or
remediation, suitable for the retry countdown UI (e.g. "Anthropic is
temporarily overloaded.").
- Removed "Please try again" / "Please try again later" from all paths.
- `StreamRetryPayload` calls `retryMessage()` instead of forwarding
`classified.Message`.

**Frontend**

- Removed the parallel frontend message-generation system:
`getRetryMessage()`, `getProviderDisplayName()`,
`getRetryProviderSubject()`, and the `PROVIDER_DISPLAY_NAMES` map are
all deleted from `chatStatusHelpers.ts`.
- `liveStatusModel.ts` passes `retryState.error` through directly — the
backend owns the copy.
- Added specific title and retry copy for `startup_timeout`, and
extended the title mapping to cover `auth` and `config`.
- Kind pills now show humanized labels ("Startup timeout", "Rate limit",
etc.) instead of raw enum strings.
- Removed the redundant "Provider anthropic" metadata row.
- Removed the terminal "Retryable" / "Not retryable" badge.
- Normalized `"usage-limit"` → `"usage_limit"` and added it to
`ChatProviderFailureKind` so all error kinds follow the same underscore
convention and live in one enum.

Refs #23282.
2026-03-26 17:37:27 +11:00
Ethan 411714cd73 fix(dogfood/coder): tolerate stale gh auth state (#23588)
## Problem

The dogfood startup script uses `gh auth status` to decide whether to
re-authenticate the GitHub CLI. That command exits non-zero when **any**
stored credential is invalid—even if Coder external auth already injects
a working `GITHUB_TOKEN` into the environment and `gh` commands work
fine.

On workspaces with a persistent home volume, `~/.config/gh/hosts.yml`
retains OAuth tokens written by previous `gh auth login --with-token`
calls. These tokens are issued by Coder's external auth integration and
can be rotated or revoked between workspace starts, but the copy in
`hosts.yml` persists on the volume. When the stored token goes stale,
`gh auth status` reports two accounts:

```
✓ Logged in to github.com account user (GITHUB_TOKEN)           ← works fine
✗ Failed to log in to github.com account user (hosts.yml)       ← stale token
```

It exits 1 because of the stale entry, even though `gh` API calls
succeed via `GITHUB_TOKEN`. This makes the auth state **indeterminate**
from `gh auth status` alone—you can't tell whether `gh` actually works
or not.

When the script enters the login branch:

1. `gh auth login --with-token` **refuses** to accept piped input when
`GITHUB_TOKEN` is already set in the environment, and exits 1.
2. `set -e` kills the script before it reaches `sudo service docker
start`.

The result: Docker never starts, devcontainer health checks fail, and
the workspace reports a startup error—all because of a stale GitHub CLI
credential that has no bearing on workspace functionality.

## Fix

- Switch the auth guard from `gh auth status` to `gh api user --jq
.login`, which tests whether GitHub API access actually works regardless
of which credential provides it.
- Wrap the fallback `gh auth login` so a failure logs the indeterminate
state but does not abort the script.
2026-03-26 17:25:42 +11:00
Ethan 61e31ec5cc perf(coderd/x/chatd): persist workspace agent binding across chat turns (#23274)
## Summary

This change removes the steady-state "resolve the latest workspace
agent" query from chat execution.

Instead of asking the database for the latest build's agent on every
turn, a chat now persists the workspace/build/agent binding it actually
uses and reuses that binding across subsequent turns. The common path
becomes "load the bound agent by ID and dial it", with fallback paths to
repair the binding when it is missing, stale, or intentionally changed.

## What changes

- add `workspace_id`, `build_id`, and `agent_id` binding fields to
`chats`
- expose those fields through the chat API / SDK so the execution
context is explicit
- load the persisted binding first in chatd, instead of always resolving
the latest build's agent
- persist a refreshed binding when chatd has to re-resolve the workspace
agent
- keep child / subagent chats on the same bound workspace context by
inheriting the parent binding
- leave `build_id` / `agent_id` unset for flows like `create_workspace`,
then bind them lazily on the next agent-backed turn

## Runtime behavior

The binding is treated as an optimistic cache of the agent a chat should
use:

- if the bound agent still exists and dials successfully, we use it
without a latest-build lookup
- if the bound agent is missing or no longer reachable, chatd
re-resolves against the latest build and persists the new binding
- if a workspace mutation changes the chat's target workspace, the
binding is updated as part of that mutation

To avoid reintroducing a hot-path query, dialing uses lazy validation:

- start dialing the cached agent immediately
- only validate against the latest build if the dial is still pending
after a short delay
- if validation finds a different agent, cancel the stale dial, switch
to the current agent, and persist the repaired binding

## Result

The hot path stops issuing
`GetWorkspaceAgentsInLatestBuildByWorkspaceID` for every user message,
which is the source of the DB pressure this PR is addressing. At the
same time, chats still converge to the correct workspace agent when the
binding becomes stale due to rebuilds or explicit workspace changes.
2026-03-26 17:22:38 +11:00
Ethan 17aea0b19c feat(site): make long execute tool commands expandable (#23562)
Previously, long bash commands in the execute tool were truncated with
an ellipsis and could not be viewed in full. The only way to see the
full command was to copy it via the copy button.

Adds overflow detection and an inline expand/collapse chevron next to
the copy button. Clicking the command text or the chevron toggles
between truncated and wrapped views. Short commands that fit on one line
are visually unchanged.



https://github.com/user-attachments/assets/88ec6cd4-5212-4608-9a90-9ce217d5dce7

EDIT: couldn't be bothered re-recording the video but the chevron is
hidden until hovered now, like the copy button.
2026-03-26 15:49:23 +11:00
Ethan 5112ab7da9 fix(site/e2e): fix flaky updateTemplate test expecting transient URL (#23655)
_PR generated by Mux but reviewed by a human_

## Problem

The e2e test `template update with new name redirects on successful
submit` is flaky.

After saving template settings, the app navigates to
`/templates/<name>`, which immediately redirects to
`/templates/<name>/docs` via the router's index route (`<Navigate
to="docs" replace />`). The assertion used `expect.poll()` with
`toHavePathNameEndingWith(`/${name}`)`, which matches only the
**transient intermediate URL** — it only exists while `TemplateLayout`'s
async data fetch is pending. Once the fetch resolves and the `<Outlet
/>` renders, the index route fires the `/docs` redirect and the URL no
longer matches.

## Why it's flaky (not deterministic)

The flakiness depends on whether the template query cache is warm:

- **Cache miss → PASSES**: The mutation's `onSuccess` handler
invalidates the query cache. If `TemplateLayout` needs to re-fetch, it
shows a `<Loader />`, which delays rendering the `<Outlet />` that
contains the `<Navigate to="docs">`. This gives `expect.poll()` time to
see the transient `/new-name` URL → **pass**.
- **Cache hit → FAILS**: If the template data is still in the query
client, `TemplateLayout` renders immediately and the `<Navigate
to="docs" replace />` fires nearly instantly. By the time the first poll
runs, the URL is already `/new-name/docs` → **fail**.

## Fix

Assert the **final stable URL** (`/${name}/docs`) instead of the
transient one.

This is safe because `expect.poll()` is retry-based: it keeps sampling
until a match is found (or timeout). Seeing the transient `/new-name`
URL just causes harmless retries — once the redirect completes and the
URL settles on `/new-name/docs`, the poll matches and the test passes.

| Poll | URL | Ends with `/new-name/docs`? | Action |
|---|---|---|---|
| 1st | `/templates/new-name` | No | Retry |
| 2nd | `/templates/new-name` | No | Retry |
| 3rd | `/templates/new-name/docs` | Yes | **Pass**  |

Closes https://github.com/coder/internal/issues/1403
2026-03-26 04:32:44 +00:00
Cian Johnston 7a9d57cd87 fix(coderd): actually wire the chat template allowlist into tools (#23626)
Problem: previously, the deployment-wide chat template allowlist was never actually wired in from `chatd.go`.

- Extracts `parseChatTemplateAllowlist` into shared `coderd/util/xjson.ParseUUIDList`
- Adds `Server.chatTemplateAllowlist()` method that reads the allowlist from DB
- Passes `AllowedTemplateIDs` callback to `ListTemplates`, `ReadTemplate`, and `CreateWorkspace` tool constructors

> 🤖 Created by Coder Agents and reviewed by a human.
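A rough stand-in for such a helper (the real `xjson.ParseUUIDList` presumably returns parsed `uuid.UUID`s; this stdlib-only sketch just validates shape with a regexp):

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

var uuidRe = regexp.MustCompile(
	`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`)

// parseUUIDList decodes a JSON array of UUID strings, rejecting malformed
// entries so a typo in the stored allowlist fails loudly instead of
// silently allowing nothing.
func parseUUIDList(raw string) ([]string, error) {
	var ids []string
	if err := json.Unmarshal([]byte(raw), &ids); err != nil {
		return nil, fmt.Errorf("parse allowlist: %w", err)
	}
	for _, id := range ids {
		if !uuidRe.MatchString(id) {
			return nil, fmt.Errorf("invalid UUID %q in allowlist", id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := parseUUIDList(`["b9e57f0e-6f7c-4b2a-9c3d-2f1e8a7d4c5b"]`)
	fmt.Println(len(ids), err)
}
```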
2026-03-25 22:15:27 +00:00
david-fraley dab4e6f0a4 fix(site): use standard dismiss label for cancel confirmation dialogs (#23599) 2026-03-25 21:24:53 +00:00
Kayla はな 0e69e0eaca chore: modernize typescript api client/types imports (#23637) 2026-03-25 15:21:19 -06:00
Kyle Carberry 09bcd0b260 fix: revert "refactor(site/src/pages/AgentsPage): normalize transcript scrolling" (#23638)
Reverts coder/coder#23576
2026-03-25 20:24:42 +00:00
Michael Suchacz 4025b582cd refactor(site): show one model picker option per config (#23533)
The `/agents` model picker collapsed distinct configured model variants
into fewer entries because options were built from the deduplicated
catalog (`ChatModelsResponse`). Two configs with the same provider/model
but different display names or settings appeared as a single option.

Switch option building from `getModelOptionsFromCatalog()` to a new
`getModelOptionsFromConfigs()` that emits one `ModelSelectorOption` per
enabled `ChatModelConfig` row. The option ID is the config UUID
directly, eliminating the catalog-ID ↔ config-ID mapping layer
(`buildModelConfigIDByModelID`, `buildModelIDByConfigID`).

Provider availability is still gated by the catalog response, and status
messaging ("no models configured" vs "models unavailable") is unchanged.
The sidebar now resolves model labels by config ID first, and the
/agents Storybook fixtures were updated so the stories seed matching
config IDs and model-config query data after the picker contract change.
2026-03-25 20:46:57 +01:00
Steven Masley 9d5b7f4579 test: assert on user id, not entire user (#23632)
User struct has "LastSeen" field which can change during the test


Replaces https://github.com/coder/coder/pull/23622
2026-03-25 19:09:25 +00:00
Michael Suchacz cf955b0e43 refactor(site/src/pages/AgentsPage): normalize transcript scrolling (#23576)
The `/agents` transcript used `flex-col-reverse` for bottom-anchored
chat layout, where `scrollTop = 0` means bottom and the sign of
`scrollTop` when scrolled up varies by browser engine. A
`ResizeObserver` detected content height changes and applied manual
`compensateScroll(delta)` to preserve position, which fought manual
upward scrolling during streaming — repeatedly adjusting the user's
scroll position when they were trying to read earlier content.

This replaces that model with normal DOM order (`flex-col`, standard
`overflow-y: auto`) and a dedicated `useAgentTranscriptAutoScroll` hook
that only auto-scrolls when follow-mode is enabled. When the user
scrolls up, follow-mode disables and incoming content does not move the
viewport.

Changes:
- **New**: `useAgentTranscriptAutoScroll.ts` — local hook with
follow-mode state, RAF-throttled button visibility, dual
`ResizeObserver` (content + container), and `jumpToBottom()`
- **Modified**: `AgentDetailView.tsx` — removed
`ScrollAnchoredContainer` (~350 lines of reverse-layout compensation),
replaced with normal-order container wired to the new hook, added
pagination prepend scroll restoration
- **Modified**: `AgentDetailView.stories.tsx` — updated scroll stories
for normal-order bottom-distance assertions
2026-03-25 20:07:35 +01:00
Steven Masley f65b915fe3 chore: add permissions to coder:workspace.* scopes for functionality (#23515)
`coder:workspaces.*` composite scopes did not provide enough permissions
to do what they say they can do.

Closes https://github.com/coder/coder/issues/22537
2026-03-25 13:46:58 -05:00
Kyle Carberry 1f13324075 fix(coderd): use path-aware discovery for MCP OAuth2 metadata (RFC 9728, RFC 8414) (#23520)
## Problem

MCP OAuth2 auto-discovery stripped the path component from the MCP server
URL before looking up Protected Resource Metadata. Per RFC 9728 §3.1, the
well-known URL should be path-aware:

```
{origin}/.well-known/oauth-protected-resource{path}
```

For `https://api.githubcopilot.com/mcp/`, the correct metadata URL is
`https://api.githubcopilot.com/.well-known/oauth-protected-resource/mcp/`,
not `https://api.githubcopilot.com/.well-known/oauth-protected-resource`
(which returns 404).

The same issue applied to RFC 8414 Authorization Server Metadata for
issuers
with path components (e.g. `https://github.com/login/oauth` →
`/.well-known/oauth-authorization-server/login/oauth`).

## Fix

Replace the `mcp-go` `OAuthHandler`-based discovery with a self-contained
implementation that correctly follows path-aware well-known URI construction
for both RFC 9728 and RFC 8414, falling back to root-level URLs when the
path-aware form returns an error. Also implements RFC 7591 registration
directly, removing the `mcp-go/client/transport` dependency from the
discovery path.

Note: this fix resolves the discovery half of the problem for servers like
GitHub Copilot. Full OAuth2 support for GitHub's MCP server also requires
dynamic client registration (RFC 7591), which GitHub's authorization server
does not currently support — that will be addressed separately.
2026-03-25 14:35:55 -04:00
Kyle Carberry c0f93583e4 fix(site): soften tool failure display and improve subagent timeout UX (#23617)
## Summary

Tool call failures in `/agents` previously displayed alarming red
styling (red icons, red text, red alert icons) that made it look like
the user did something wrong. This PR replaces the scary error
presentation with a calm, unified style and adds a dedicated timeout
display for subagent tools.

## Changes

### Unified failure style (all tools)
- Replace red `CircleAlertIcon` + `text-content-destructive` with a
muted `TriangleAlertIcon` in `text-content-secondary` across **all 11
tool renderers**.
- Remove red icon/label recoloring on error from `ToolIcon` and all
specialized tool components.
- Error details remain accessible via tooltip on hover.

### Subagent timeout display
- `ClockIcon` with "Timed out waiting for [Title]" instead of a generic
error display.
- `CircleXIcon` for non-timeout subagent errors with proper error verbs
("Failed to spawn", "Failed waiting for", etc.) instead of the
misleading running verb ("Waiting for").
- Timeout detection from result string/error field containing "timed
out".

### Title resolution for historical messages
- `ConversationTimeline` now computes `subagentTitles` via
`useMemo(buildSubagentTitles(...))` and passes it to historical
`ChatMessageItem` rendering, so `wait_agent` can resolve the actual
agent title from a prior `spawn_agent` result even outside streaming
mode.

### Stories
8 new stories: `GenericToolFailed`, `GenericToolFailedNoResult`,
`SubagentWaitTimedOut`, `SubagentWaitTimedOutWithTitle`,
`SubagentWaitTimedOutTitleFromMap`, `SubagentSpawnError`,
`SubagentWaitError`, `MCPToolFailedUnifiedStyle`.

## Files changed (15)
- `tool/Tool.tsx` — GenericToolRenderer + SubagentRenderer
- `tool/SubagentTool.tsx` — timeout/error verbs, icon changes
- `tool/ToolIcon.tsx` — remove destructive recoloring
- `tool/*.tsx` (10 specialized tools) — unified warning icon
- `ConversationTimeline.tsx` — pass subagentTitles to historical
rendering
- `tool.stories.tsx` — 8 new stories, updated existing assertions
2026-03-25 18:33:45 +00:00
Cian Johnston c753a622ad refactor(agent): move agentdesktop under x/ subpackage (#23610)
- Move `agent/agentdesktop/` to `agent/x/agentdesktop/` to signal
experimental/unstable status
- Update import paths in `agent/agent.go` and `api_test.go`

> 🤖 This mechanical refactor was performed by an agent. I made sure it
didn't change anything it wasn't supposed to.
2026-03-25 18:23:52 +00:00
Cian Johnston 5c9b0226c1 fix(coderd/x/chatd): make clarification rules coherent (#23625)
- Clarify the system prompt to prefer tools before asking the user for
clarification.
- Limit clarification to cases where ambiguity or user preferences
materially affect the outcome.
- Remove the contradictory instruction to always start by asking
clarifying questions.

> 🤖 This PR has been reviewed by the author.
2026-03-25 18:21:36 +00:00
Yevhenii Shcherbina a86b8ab6f8 feat: aibridge BYOK (#23013)
### Changes

  **coder/coder:**

- `coderd/aibridge/aibridge.go` — Added `HeaderCoderBYOKToken` constant,
`IsBYOK()` helper, and updated `ExtractAuthToken` to check the BYOK
header first.
- `enterprise/aibridged/http.go` — BYOK-aware header stripping: in BYOK
mode only the BYOK header is stripped (user's LLM credentials
preserved); in centralized mode all auth headers are stripped.
  
 <hr/>
 
**NOTE**: `X-Coder-Token` was removed! As of now `ExtractAuthToken`
retrieves token either from `X-Coder-AI-Governance-BYOK-Token` or from
`Authorization`/`X-Api-Key`.

---------

Co-authored-by: Susana Ferreira <susana@coder.com>
Co-authored-by: Danny Kopping <danny@coder.com>
2026-03-25 14:17:56 -04:00
1405 changed files with 53648 additions and 16465 deletions
+2 -2
@@ -5,6 +5,6 @@ runs:
using: "composite"
steps:
- name: Install syft
- uses: anchore/sbom-action/download-syft@f325610c9f50a54015d37c8d16cb3b0e2c8f4de0 # v0.18.0
+ uses: anchore/sbom-action/download-syft@e22c389904149dbc22b58101806040fa8d37a610 # v0.24.0
with:
- syft-version: "v1.20.0"
+ syft-version: "v1.26.1"
+1 -1
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
- default: "1.25.7"
+ default: "1.25.8"
use-cache:
description: "Whether to use the cache."
default: "true"
+26 -98
@@ -181,7 +181,7 @@ jobs:
echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"
- name: golangci-lint cache
- uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+ uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: |
${{ env.LINT_CACHE_DIR }}
@@ -1316,122 +1316,50 @@ jobs:
"${IMAGE}"
done
# GitHub attestation provides SLSA provenance for the Docker images, establishing a verifiable
# record that these images were built in GitHub Actions with specific inputs and environment.
# This complements our existing cosign attestations which focus on SBOMs.
#
# We attest each tag separately to ensure all tags have proper provenance records.
# TODO: Consider refactoring these steps to use a matrix strategy or composite action to reduce duplication
# while maintaining the required functionality for each tag.
+ - name: Resolve Docker image digests for attestation
+ id: docker_digests
+ if: github.ref == 'refs/heads/main'
+ continue-on-error: true
+ env:
+ IMAGE_BASE: ghcr.io/coder/coder-preview
+ BUILD_TAG: ${{ steps.build-docker.outputs.tag }}
+ run: |
+ set -euxo pipefail
+ main_digest=$(docker buildx imagetools inspect --raw "${IMAGE_BASE}:main" | sha256sum | awk '{print "sha256:"$1}')
+ echo "main_digest=${main_digest}" >> "$GITHUB_OUTPUT"
+ latest_digest=$(docker buildx imagetools inspect --raw "${IMAGE_BASE}:latest" | sha256sum | awk '{print "sha256:"$1}')
+ echo "latest_digest=${latest_digest}" >> "$GITHUB_OUTPUT"
+ version_digest=$(docker buildx imagetools inspect --raw "${IMAGE_BASE}:${BUILD_TAG}" | sha256sum | awk '{print "sha256:"$1}')
+ echo "version_digest=${version_digest}" >> "$GITHUB_OUTPUT"
- name: GitHub Attestation for Docker image
id: attest_main
- if: github.ref == 'refs/heads/main'
+ if: github.ref == 'refs/heads/main' && steps.docker_digests.outputs.main_digest != ''
continue-on-error: true
uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
with:
- subject-name: "ghcr.io/coder/coder-preview:main"
- predicate-type: "https://slsa.dev/provenance/v1"
- predicate: |
- {
- "buildType": "https://github.com/actions/runner-images/",
- "builder": {
- "id": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
- },
- "invocation": {
- "configSource": {
- "uri": "git+https://github.com/${{ github.repository }}@${{ github.ref }}",
- "digest": {
- "sha1": "${{ github.sha }}"
- },
- "entryPoint": ".github/workflows/ci.yaml"
- },
- "environment": {
- "github_workflow": "${{ github.workflow }}",
- "github_run_id": "${{ github.run_id }}"
- }
- },
- "metadata": {
- "buildInvocationID": "${{ github.run_id }}",
- "completeness": {
- "environment": true,
- "materials": true
- }
- }
- }
+ subject-name: ghcr.io/coder/coder-preview
+ subject-digest: ${{ steps.docker_digests.outputs.main_digest }}
+ push-to-registry: true
- name: GitHub Attestation for Docker image (latest tag)
id: attest_latest
- if: github.ref == 'refs/heads/main'
+ if: github.ref == 'refs/heads/main' && steps.docker_digests.outputs.latest_digest != ''
continue-on-error: true
uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
with:
- subject-name: "ghcr.io/coder/coder-preview:latest"
- predicate-type: "https://slsa.dev/provenance/v1"
- predicate: |
- {
- "buildType": "https://github.com/actions/runner-images/",
- "builder": {
- "id": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
- },
- "invocation": {
- "configSource": {
- "uri": "git+https://github.com/${{ github.repository }}@${{ github.ref }}",
- "digest": {
- "sha1": "${{ github.sha }}"
- },
- "entryPoint": ".github/workflows/ci.yaml"
- },
- "environment": {
- "github_workflow": "${{ github.workflow }}",
- "github_run_id": "${{ github.run_id }}"
- }
- },
- "metadata": {
- "buildInvocationID": "${{ github.run_id }}",
- "completeness": {
- "environment": true,
- "materials": true
- }
- }
- }
+ subject-name: ghcr.io/coder/coder-preview
+ subject-digest: ${{ steps.docker_digests.outputs.latest_digest }}
+ push-to-registry: true
- name: GitHub Attestation for version-specific Docker image
id: attest_version
- if: github.ref == 'refs/heads/main'
+ if: github.ref == 'refs/heads/main' && steps.docker_digests.outputs.version_digest != ''
continue-on-error: true
uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
with:
- subject-name: "ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}"
- predicate-type: "https://slsa.dev/provenance/v1"
- predicate: |
- {
- "buildType": "https://github.com/actions/runner-images/",
- "builder": {
- "id": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
- },
- "invocation": {
- "configSource": {
- "uri": "git+https://github.com/${{ github.repository }}@${{ github.ref }}",
- "digest": {
- "sha1": "${{ github.sha }}"
- },
- "entryPoint": ".github/workflows/ci.yaml"
- },
- "environment": {
- "github_workflow": "${{ github.workflow }}",
- "github_run_id": "${{ github.run_id }}"
- }
- },
- "metadata": {
- "buildInvocationID": "${{ github.run_id }}",
- "completeness": {
- "environment": true,
- "materials": true
- }
- }
- }
+ subject-name: ghcr.io/coder/coder-preview
+ subject-digest: ${{ steps.docker_digests.outputs.version_digest }}
+ push-to-registry: true
# Report attestation failures but don't fail the workflow
+1 -1
@@ -95,7 +95,7 @@ jobs:
AWS_DOGFOOD_DEPLOY_REGION: ${{ vars.AWS_DOGFOOD_DEPLOY_REGION }}
- name: Set up Flux CLI
uses: fluxcd/flux2/action@8454b02a32e48d775b9f563cb51fdcb1787b5b93 # v2.7.5
uses: fluxcd/flux2/action@871be9b40d53627786d3a3835a3ddba1e3234bd2 # v2.8.3
with:
# Keep this and the github action up to date with the version of flux installed in dogfood cluster
version: "2.8.2"
+29 -9
@@ -240,6 +240,7 @@ jobs:
- name: Create Coder Task for Documentation Check
if: steps.check-secrets.outputs.skip != 'true'
id: create_task
continue-on-error: true
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.DOC_CHECK_CODER_URL }}
@@ -254,8 +255,21 @@ jobs:
github-issue-url: ${{ steps.determine-context.outputs.pr_url }}
comment-on-issue: false
- name: Handle Task Creation Failure
if: steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome != 'success'
run: |
{
echo "## Documentation Check Task"
echo ""
echo "⚠️ The external Coder task service was unavailable, so this"
echo "advisory documentation check did not run."
echo ""
echo "Maintainers can rerun the workflow or trigger it manually"
echo "after the service recovers."
} >> "${GITHUB_STEP_SUMMARY}"
- name: Write Task Info
if: steps.check-secrets.outputs.skip != 'true'
if: steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome == 'success'
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
@@ -273,7 +287,7 @@ jobs:
} >> "${GITHUB_STEP_SUMMARY}"
- name: Wait for Task Completion
if: steps.check-secrets.outputs.skip != 'true'
if: steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome == 'success'
id: wait_task
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
@@ -363,7 +377,7 @@ jobs:
fi
- name: Fetch Task Logs
if: always() && steps.check-secrets.outputs.skip != 'true'
if: always() && steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome == 'success'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
@@ -376,7 +390,7 @@ jobs:
echo "::endgroup::"
- name: Cleanup Task
if: always() && steps.check-secrets.outputs.skip != 'true'
if: always() && steps.check-secrets.outputs.skip != 'true' && steps.create_task.outcome == 'success'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
@@ -390,6 +404,7 @@ jobs:
- name: Write Final Summary
if: always() && steps.check-secrets.outputs.skip != 'true'
env:
CREATE_TASK_OUTCOME: ${{ steps.create_task.outcome }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_MESSAGE: ${{ steps.wait_task.outputs.task_message }}
RESULT_URI: ${{ steps.wait_task.outputs.result_uri }}
@@ -400,10 +415,15 @@ jobs:
echo "---"
echo "### Result"
echo ""
echo "**Status:** ${TASK_MESSAGE:-Task completed}"
if [[ -n "${RESULT_URI}" ]]; then
echo "**Comment:** ${RESULT_URI}"
if [[ "${CREATE_TASK_OUTCOME}" == "success" ]]; then
echo "**Status:** ${TASK_MESSAGE:-Task completed}"
if [[ -n "${RESULT_URI}" ]]; then
echo "**Comment:** ${RESULT_URI}"
fi
echo ""
echo "Task \`${TASK_NAME}\` has been cleaned up."
else
echo "**Status:** Skipped because the external Coder task"
echo "service was unavailable."
fi
echo ""
echo "Task \`${TASK_NAME}\` has been cleaned up."
} >> "${GITHUB_STEP_SUMMARY}"
+99 -21
@@ -4,9 +4,7 @@ on:
push:
branches:
- main
# This event reads the workflow from the default branch (main), not the
# release branch. No cherry-pick needed.
# https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#release
- "release/2.[0-9]+"
release:
types: [published]
@@ -15,12 +13,13 @@ permissions:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# Queue rather than cancel so back-to-back pushes to main don't cancel the first sync.
cancel-in-progress: false
jobs:
sync:
name: Sync issues to Linear release
if: github.event_name == 'push'
sync-main:
name: Sync issues to next Linear release
if: github.event_name == 'push' && github.ref_name == 'main'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
@@ -28,18 +27,84 @@ jobs:
fetch-depth: 0
persist-credentials: false
- name: Detect next release version
id: version
# Find the highest release/2.X branch (exact pattern, no suffixes like
# release/2.31_hotfix) and derive the next minor version for the release
# currently in development on main.
run: |
LATEST_MINOR=$(git branch -r | grep -E '^\s*origin/release/2\.[0-9]+$' | \
sed 's/.*release\/2\.//' | sort -n | tail -1)
if [ -z "$LATEST_MINOR" ]; then
echo "No release branch found, skipping sync."
echo "skip=true" >> "$GITHUB_OUTPUT"
exit 0
fi
echo "version=2.$((LATEST_MINOR + 1))" >> "$GITHUB_OUTPUT"
echo "skip=false" >> "$GITHUB_OUTPUT"
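The branch-scan logic in this step can be exercised in isolation. A minimal sketch with a hypothetical branch list (the `2.31_hotfix` entry shows why the anchored `$` in the grep pattern matters):

```shell
# Hypothetical output of `git branch -r`; only exact release/2.X
# names should count, so release/2.31_hotfix must be ignored.
branches='origin/main
origin/release/2.29
origin/release/2.31
origin/release/2.31_hotfix'
LATEST_MINOR=$(printf '%s\n' "$branches" |
  grep -E '^\s*origin/release/2\.[0-9]+$' |
  sed 's/.*release\/2\.//' | sort -n | tail -1)
echo "next=2.$((LATEST_MINOR + 1))"
```

This prints `next=2.32`: the hotfix branch is filtered out, `2.31` wins the numeric sort, and the next in-development minor is derived from it.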
- name: Sync issues
id: sync
uses: linear/linear-release-action@5cbaabc187ceb63eee9d446e62e68e5c29a03ae8 # v0.5.0
if: steps.version.outputs.skip != 'true'
uses: linear/linear-release-action@755d50b5adb7dd42b976ee9334952745d62ceb2d # v0.6.0
with:
access_key: ${{ secrets.LINEAR_ACCESS_KEY }}
command: sync
version: ${{ steps.version.outputs.version }}
timeout: 300
- name: Print release URL
if: steps.sync.outputs.release-url
run: echo "Synced to $RELEASE_URL"
env:
RELEASE_URL: ${{ steps.sync.outputs.release-url }}
sync-release-branch:
name: Sync backports to Linear release
if: github.event_name == 'push' && startsWith(github.ref_name, 'release/')
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Extract release version
id: version
# The trigger only allows exact release/2.X branch names.
run: |
echo "version=${GITHUB_REF_NAME#release/}" >> "$GITHUB_OUTPUT"
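The `#release/` expansion above is plain POSIX prefix stripping; sketched with a sample ref name:

```shell
# `${var#pattern}` removes the shortest matching prefix from the value.
GITHUB_REF_NAME="release/2.31"
version="${GITHUB_REF_NAME#release/}"
echo "$version"
```

This prints `2.31`, which is exactly the string the Linear action expects as `version`.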
- name: Sync issues
id: sync
uses: linear/linear-release-action@755d50b5adb7dd42b976ee9334952745d62ceb2d # v0.6.0
with:
access_key: ${{ secrets.LINEAR_ACCESS_KEY }}
command: sync
version: ${{ steps.version.outputs.version }}
timeout: 300
code-freeze:
name: Move Linear release to Code Freeze
needs: sync-release-branch
if: >
github.event_name == 'push' &&
startsWith(github.ref_name, 'release/') &&
github.event.created == true
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Extract release version
id: version
run: |
echo "version=${GITHUB_REF_NAME#release/}" >> "$GITHUB_OUTPUT"
- name: Move to Code Freeze
id: update
uses: linear/linear-release-action@755d50b5adb7dd42b976ee9334952745d62ceb2d # v0.6.0
with:
access_key: ${{ secrets.LINEAR_ACCESS_KEY }}
command: update
stage: Code Freeze
version: ${{ steps.version.outputs.version }}
timeout: 300
complete:
name: Complete Linear release
@@ -50,16 +115,29 @@ jobs:
with:
persist-credentials: false
- name: Extract release version
id: version
# Strip "v" prefix and patch: "v2.31.0" -> "2.31". Also detect whether
# this is a minor release (v*.*.0) — patch releases (v2.31.1, v2.31.2,
# ...) are grouped into the same Linear release and must not re-complete
# it after it has already shipped.
run: |
VERSION=$(echo "$TAG" | sed 's/^v//' | cut -d. -f1,2)
echo "version=$VERSION" >> "$GITHUB_OUTPUT"
if [[ "$TAG" =~ ^v[0-9]+\.[0-9]+\.0$ ]]; then
echo "is_minor=true" >> "$GITHUB_OUTPUT"
else
echo "is_minor=false" >> "$GITHUB_OUTPUT"
fi
env:
TAG: ${{ github.event.release.tag_name }}
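The tag parsing in the step above can be sketched with sample tags; the anchored regex treats only `v*.*.0` tags as minor releases, so patch tags never re-complete a shipped Linear release:

```shell
for TAG in v2.31.0 v2.31.2; do
  # Strip the "v" prefix and the patch component: v2.31.0 -> 2.31.
  VERSION=$(echo "$TAG" | sed 's/^v//' | cut -d. -f1,2)
  if [[ "$TAG" =~ ^v[0-9]+\.[0-9]+\.0$ ]]; then
    is_minor=true
  else
    is_minor=false
  fi
  echo "$TAG version=$VERSION is_minor=$is_minor"
done
```

Both tags map to version `2.31`, but only `v2.31.0` sets `is_minor=true`.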
- name: Complete release
id: complete
uses: linear/linear-release-action@5cbaabc187ceb63eee9d446e62e68e5c29a03ae8 # v0
if: steps.version.outputs.is_minor == 'true'
uses: linear/linear-release-action@755d50b5adb7dd42b976ee9334952745d62ceb2d # v0.6.0
with:
access_key: ${{ secrets.LINEAR_ACCESS_KEY }}
command: complete
version: ${{ github.event.release.tag_name }}
- name: Print release URL
if: steps.complete.outputs.release-url
run: echo "Completed $RELEASE_URL"
env:
RELEASE_URL: ${{ steps.complete.outputs.release-url }}
version: ${{ steps.version.outputs.version }}
timeout: 300
+37 -113
@@ -302,6 +302,7 @@ jobs:
# This uses OIDC authentication, so no auth variables are required.
- name: Build base Docker image via depot.dev
id: build_base_image
if: steps.image-base-tag.outputs.tag != ''
uses: depot/build-push-action@5f3b3c2e5a00f0093de47f657aeaefcedff27d18 # v1.17.0
with:
@@ -349,48 +350,14 @@ jobs:
env:
IMAGE_TAG: ${{ steps.image-base-tag.outputs.tag }}
# GitHub attestation provides SLSA provenance for Docker images, establishing a verifiable
# record that these images were built in GitHub Actions with specific inputs and environment.
# This complements our existing cosign attestations (which focus on SBOMs) by adding
# GitHub-specific build provenance to enhance our supply chain security.
#
# TODO: Consider refactoring these attestation steps to use a matrix strategy or composite action
# to reduce duplication while maintaining the required functionality for each distinct image tag.
- name: GitHub Attestation for Base Docker image
id: attest_base
if: ${{ !inputs.dry_run && steps.image-base-tag.outputs.tag != '' }}
if: ${{ !inputs.dry_run && steps.build_base_image.outputs.digest != '' }}
continue-on-error: true
uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
with:
subject-name: ${{ steps.image-base-tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
predicate: |
{
"buildType": "https://github.com/actions/runner-images/",
"builder": {
"id": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
},
"invocation": {
"configSource": {
"uri": "git+https://github.com/${{ github.repository }}@${{ github.ref }}",
"digest": {
"sha1": "${{ github.sha }}"
},
"entryPoint": ".github/workflows/release.yaml"
},
"environment": {
"github_workflow": "${{ github.workflow }}",
"github_run_id": "${{ github.run_id }}"
}
},
"metadata": {
"buildInvocationID": "${{ github.run_id }}",
"completeness": {
"environment": true,
"materials": true
}
}
}
subject-name: ghcr.io/coder/coder-base
subject-digest: ${{ steps.build_base_image.outputs.digest }}
push-to-registry: true
- name: Build Linux Docker images
@@ -413,7 +380,6 @@ jobs:
# being pushed so will automatically push them.
make push/build/coder_"$version"_linux.tag
# Save multiarch image tag for attestation
multiarch_image="$(./scripts/image_tag.sh)"
echo "multiarch_image=${multiarch_image}" >> "$GITHUB_OUTPUT"
@@ -424,12 +390,14 @@ jobs:
# version in the repo, also create a multi-arch image as ":latest" and
# push it
if [[ "$(git tag | grep '^v' | grep -vE '(rc|dev|-|\+|\/)' | sort -r --version-sort | head -n1)" == "v$(./scripts/version.sh)" ]]; then
latest_target="$(./scripts/image_tag.sh --version latest)"
# shellcheck disable=SC2046
./scripts/build_docker_multiarch.sh \
--push \
--target "$(./scripts/image_tag.sh --version latest)" \
--target "${latest_target}" \
$(cat build/coder_"$version"_linux_{amd64,arm64,armv7}.tag)
echo "created_latest_tag=true" >> "$GITHUB_OUTPUT"
echo "latest_target=${latest_target}" >> "$GITHUB_OUTPUT"
else
echo "created_latest_tag=false" >> "$GITHUB_OUTPUT"
fi
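The highest-stable-tag check above filters out pre-release and dev tags before version-sorting; a sketch with a hypothetical tag list in place of `git tag`:

```shell
# Hypothetical `git tag` output; rc/dev/suffixed tags must not win.
tags='v2.30.0
v2.31.0-rc1
v2.31.0
v2.31.1
v2.32.0-dev'
highest=$(printf '%s\n' "$tags" |
  grep '^v' | grep -vE '(rc|dev|-|\+|\/)' |
  sort -r --version-sort | head -n1)
echo "$highest"
```

This prints `v2.31.1`: both suffixed tags are excluded by the second grep, and `--version-sort` ranks `v2.31.1` above `v2.31.0` and `v2.30.0`.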
@@ -450,7 +418,6 @@ jobs:
echo "Generating SBOM for multi-arch image: ${MULTIARCH_IMAGE}"
syft "${MULTIARCH_IMAGE}" -o spdx-json > "coder_${VERSION}_sbom.spdx.json"
# Attest SBOM to multi-arch image
echo "Attesting SBOM to multi-arch image: ${MULTIARCH_IMAGE}"
cosign clean --force=true "${MULTIARCH_IMAGE}"
cosign attest --type spdxjson \
@@ -472,85 +439,42 @@ jobs:
"${latest_tag}"
fi
- name: GitHub Attestation for Docker image
id: attest_main
- name: Resolve Docker image digests for attestation
id: docker_digests
if: ${{ !inputs.dry_run }}
continue-on-error: true
uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
with:
subject-name: ${{ steps.build_docker.outputs.multiarch_image }}
predicate-type: "https://slsa.dev/provenance/v1"
predicate: |
{
"buildType": "https://github.com/actions/runner-images/",
"builder": {
"id": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
},
"invocation": {
"configSource": {
"uri": "git+https://github.com/${{ github.repository }}@${{ github.ref }}",
"digest": {
"sha1": "${{ github.sha }}"
},
"entryPoint": ".github/workflows/release.yaml"
},
"environment": {
"github_workflow": "${{ github.workflow }}",
"github_run_id": "${{ github.run_id }}"
}
},
"metadata": {
"buildInvocationID": "${{ github.run_id }}",
"completeness": {
"environment": true,
"materials": true
}
}
}
push-to-registry: true
env:
MULTIARCH_IMAGE: ${{ steps.build_docker.outputs.multiarch_image }}
LATEST_TARGET: ${{ steps.build_docker.outputs.latest_target }}
run: |
set -euxo pipefail
if [[ -n "${MULTIARCH_IMAGE}" ]]; then
multiarch_digest=$(docker buildx imagetools inspect --raw "${MULTIARCH_IMAGE}" | sha256sum | awk '{print "sha256:"$1}')
echo "multiarch_digest=${multiarch_digest}" >> "$GITHUB_OUTPUT"
fi
if [[ -n "${LATEST_TARGET}" ]]; then
latest_digest=$(docker buildx imagetools inspect --raw "${LATEST_TARGET}" | sha256sum | awk '{print "sha256:"$1}')
echo "latest_digest=${latest_digest}" >> "$GITHUB_OUTPUT"
fi
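The digest resolution above relies on a registry convention: an image's content-addressable digest is the sha256 of its raw manifest bytes, which is what `imagetools inspect --raw` prints. A minimal sketch with a fixed payload standing in for the manifest (no registry or Docker daemon involved):

```shell
# Hypothetical raw manifest; in the workflow this comes from
# `docker buildx imagetools inspect --raw "$IMAGE"` instead.
manifest='{"schemaVersion": 2}'
digest="sha256:$(printf '%s' "$manifest" | sha256sum | awk '{print $1}')"
echo "$digest"
```

The result is a `sha256:<64 hex chars>` string in the exact shape `actions/attest` expects for `subject-digest`.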
# Get the latest tag name for attestation
- name: Get latest tag name
id: latest_tag
if: ${{ !inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true' }}
run: echo "tag=$(./scripts/image_tag.sh --version latest)" >> "$GITHUB_OUTPUT"
# If this is the highest version according to semver, also attest the "latest" tag
- name: GitHub Attestation for "latest" Docker image
id: attest_latest
if: ${{ !inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true' }}
- name: GitHub Attestation for Docker image
id: attest_main
if: ${{ !inputs.dry_run && steps.docker_digests.outputs.multiarch_digest != '' }}
continue-on-error: true
uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
with:
subject-name: ${{ steps.latest_tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
predicate: |
{
"buildType": "https://github.com/actions/runner-images/",
"builder": {
"id": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
},
"invocation": {
"configSource": {
"uri": "git+https://github.com/${{ github.repository }}@${{ github.ref }}",
"digest": {
"sha1": "${{ github.sha }}"
},
"entryPoint": ".github/workflows/release.yaml"
},
"environment": {
"github_workflow": "${{ github.workflow }}",
"github_run_id": "${{ github.run_id }}"
}
},
"metadata": {
"buildInvocationID": "${{ github.run_id }}",
"completeness": {
"environment": true,
"materials": true
}
}
}
subject-name: ghcr.io/coder/coder
subject-digest: ${{ steps.docker_digests.outputs.multiarch_digest }}
push-to-registry: true
- name: GitHub Attestation for "latest" Docker image
id: attest_latest
if: ${{ !inputs.dry_run && steps.docker_digests.outputs.latest_digest != '' }}
continue-on-error: true
uses: actions/attest@59d89421af93a897026c735860bf21b6eb4f7b26 # v4.1.0
with:
subject-name: ghcr.io/coder/coder
subject-digest: ${{ steps.docker_digests.outputs.latest_digest }}
push-to-registry: true
# Report attestation failures but don't fail the workflow
+2 -2
@@ -125,7 +125,7 @@ jobs:
egress-policy: audit
- name: Delete PR Cleanup workflow runs
uses: Mattraks/delete-workflow-runs@5bf9a1dac5c4d041c029f0a8370ddf0c5cb5aeb7 # v2.1.0
uses: Mattraks/delete-workflow-runs@b3018382ca039b53d238908238bd35d1fb14f8ee # v2.1.0
with:
token: ${{ github.token }}
repository: ${{ github.repository }}
@@ -134,7 +134,7 @@ jobs:
delete_workflow_pattern: pr-cleanup.yaml
- name: Delete PR Deploy workflow skipped runs
uses: Mattraks/delete-workflow-runs@5bf9a1dac5c4d041c029f0a8370ddf0c5cb5aeb7 # v2.1.0
uses: Mattraks/delete-workflow-runs@b3018382ca039b53d238908238bd35d1fb14f8ee # v2.1.0
with:
token: ${{ github.token }}
repository: ${{ github.repository }}
+7 -1
@@ -46,8 +46,14 @@ jobs:
echo " replacement: \"https://github.com/coder/coder/tree/${HEAD_SHA}/\""
} >> .github/.linkspector.yml
# TODO: Remove this workaround once action-linkspector sets
# package-manager-cache: false in its internal setup-node step.
# See: https://github.com/UmbrellaDocs/action-linkspector/issues/54
- name: Enable corepack
run: corepack enable pnpm
- name: Check Markdown links
uses: umbrelladocs/action-linkspector@652f85bc57bb1e7d4327260decc10aa68f7694c3 # v1.4.0
uses: umbrelladocs/action-linkspector@37c85bcde51b30bf929936502bac6bfb7e8f0a4d # v1.4.1
id: markdown-link-check
# checks all markdown files from /docs including all subfolders
with:
+1
@@ -54,6 +54,7 @@ site/stats/
*.tfstate.backup
*.tfplan
*.lock.hcl
!provisioner/terraform/testdata/resources/.terraform.lock.hcl
.terraform/
!coderd/testdata/parameters/modules/.terraform/
!provisioner/terraform/testdata/modules-source-caching/.terraform/
+12 -2
@@ -1260,11 +1260,21 @@ provisioner/terraform/testdata/.gen-golden: $(wildcard provisioner/terraform/tes
touch "$@"
provisioner/terraform/testdata/version:
if [[ "$(shell cat provisioner/terraform/testdata/version.txt)" != "$(shell terraform version -json | jq -r '.terraform_version')" ]]; then
./provisioner/terraform/testdata/generate.sh
@tf_match=true; \
if [[ "$$(cat provisioner/terraform/testdata/version.txt)" != \
"$$(terraform version -json | jq -r '.terraform_version')" ]]; then \
tf_match=false; \
fi; \
if ! $$tf_match || \
! ./provisioner/terraform/testdata/generate.sh --check; then \
./provisioner/terraform/testdata/generate.sh; \
fi
.PHONY: provisioner/terraform/testdata/version
update-terraform-testdata:
./provisioner/terraform/testdata/generate.sh --upgrade
.PHONY: update-terraform-testdata
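The regeneration rule above only reruns the generator when the recorded Terraform version drifts or the checked output is stale; the version comparison reduces to a plain string check, sketched here with fixed values standing in for `version.txt` and `terraform version -json`:

```shell
recorded="1.9.0"   # stand-in for provisioner/terraform/testdata/version.txt
installed="1.9.8"  # stand-in for `terraform version -json | jq -r '.terraform_version'`
tf_match=true
if [ "$recorded" != "$installed" ]; then
  tf_match=false
fi
echo "tf_match=$tf_match"
```

This prints `tf_match=false`, which is the branch where the Makefile falls through to `generate.sh --check` and, if that also fails, regenerates the testdata.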
# Set the retry flags if TEST_RETRIES is set
ifdef TEST_RETRIES
GOTESTSUM_RETRY_FLAGS := --rerun-fails=$(TEST_RETRIES)
+18 -1
@@ -38,7 +38,6 @@ import (
"cdr.dev/slog/v3"
"github.com/coder/clistat"
"github.com/coder/coder/v2/agent/agentcontainers"
"github.com/coder/coder/v2/agent/agentdesktop"
"github.com/coder/coder/v2/agent/agentexec"
"github.com/coder/coder/v2/agent/agentfiles"
"github.com/coder/coder/v2/agent/agentgit"
@@ -50,6 +49,8 @@ import (
"github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/agent/proto/resourcesmonitor"
"github.com/coder/coder/v2/agent/reconnectingpty"
"github.com/coder/coder/v2/agent/x/agentdesktop"
"github.com/coder/coder/v2/agent/x/agentmcp"
"github.com/coder/coder/v2/buildinfo"
"github.com/coder/coder/v2/cli/gitauth"
"github.com/coder/coder/v2/coderd/database/dbtime"
@@ -311,6 +312,8 @@ type agent struct {
gitAPI *agentgit.API
processAPI *agentproc.API
desktopAPI *agentdesktop.API
mcpManager *agentmcp.Manager
mcpAPI *agentmcp.API
socketServerEnabled bool
socketPath string
@@ -396,6 +399,8 @@ func (a *agent) init() {
a.logger.Named("desktop"), a.execer, a.scriptRunner.ScriptBinDir(),
)
a.desktopAPI = agentdesktop.NewAPI(a.logger.Named("desktop"), desktop, a.clock)
a.mcpManager = agentmcp.NewManager(a.logger.Named("mcp"))
a.mcpAPI = agentmcp.NewAPI(a.logger.Named("mcp"), a.mcpManager)
a.reconnectingPTYServer = reconnectingpty.NewServer(
a.logger.Named("reconnecting-pty"),
a.sshServer,
@@ -1348,6 +1353,14 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
}
a.metrics.startupScriptSeconds.WithLabelValues(label).Set(dur)
a.scriptRunner.StartCron()
// Connect to workspace MCP servers after the
// lifecycle transition to avoid delaying Ready.
// This runs inside the tracked goroutine so it
// is properly awaited on shutdown.
if mcpErr := a.mcpManager.Connect(a.gracefulCtx, manifest.Directory); mcpErr != nil {
a.logger.Warn(ctx, "failed to connect to workspace MCP servers", slog.Error(mcpErr))
}
})
if err != nil {
return xerrors.Errorf("track conn goroutine: %w", err)
@@ -2070,6 +2083,10 @@ func (a *agent) Close() error {
a.logger.Error(a.hardCtx, "desktop API close", slog.Error(err))
}
if err := a.mcpManager.Close(); err != nil {
a.logger.Error(a.hardCtx, "mcp manager close", slog.Error(err))
}
if a.boundaryLogProxy != nil {
err = a.boundaryLogProxy.Close()
if err != nil {
@@ -159,7 +159,6 @@ func TestConvertDockerVolume(t *testing.T) {
func TestConvertDockerInspect(t *testing.T) {
t.Parallel()
//nolint:paralleltest // variable recapture no longer required
for _, tt := range []struct {
name string
expect []codersdk.WorkspaceAgentContainer
@@ -388,7 +387,6 @@ func TestConvertDockerInspect(t *testing.T) {
},
},
} {
// nolint:paralleltest // variable recapture no longer required
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
bs, err := os.ReadFile(filepath.Join("testdata", tt.name, "docker_inspect.json"))
-2
@@ -166,7 +166,6 @@ func TestDockerEnvInfoer(t *testing.T) {
pool, err := dockertest.NewPool("")
require.NoError(t, err, "Could not connect to docker")
// nolint:paralleltest // variable recapture no longer required
for idx, tt := range []struct {
image string
labels map[string]string
@@ -223,7 +222,6 @@ func TestDockerEnvInfoer(t *testing.T) {
expectedUserShell: "/bin/bash",
},
} {
//nolint:paralleltest // variable recapture no longer required
t.Run(fmt.Sprintf("#%d", idx), func(t *testing.T) {
// Start a container with the given image
// and environment variables
+1 -1
@@ -211,7 +211,7 @@ func TestServer_X11_EvictionLRU(t *testing.T) {
require.NoError(t, err)
stderr, err := sess.StderrPipe()
require.NoError(t, err)
require.NoError(t, sess.Shell())
require.NoError(t, sess.Start("sh"))
// The SSH server lazily starts the session. We need to write a command
// and read back to ensure the X11 forwarding is started.
+1
@@ -31,6 +31,7 @@ func (a *agent) apiHandler() http.Handler {
r.Mount("/api/v0/git", a.gitAPI.Routes())
r.Mount("/api/v0/processes", a.processAPI.Routes())
r.Mount("/api/v0/desktop", a.desktopAPI.Routes())
r.Mount("/api/v0/mcp", a.mcpAPI.Routes())
if a.devcontainers {
r.Mount("/api/v0/containers", a.containerAPI.Routes())
@@ -15,7 +15,7 @@ import (
"golang.org/x/xerrors"
"cdr.dev/slog/v3/sloggers/slogtest"
"github.com/coder/coder/v2/agent/agentdesktop"
"github.com/coder/coder/v2/agent/x/agentdesktop"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/quartz"
+88
@@ -0,0 +1,88 @@
package agentmcp
import (
"errors"
"net/http"
"github.com/go-chi/chi/v5"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
)
// API exposes MCP tool discovery and call proxying through the
// agent.
type API struct {
logger slog.Logger
manager *Manager
}
// NewAPI creates a new MCP API handler backed by the given
// manager.
func NewAPI(logger slog.Logger, manager *Manager) *API {
return &API{
logger: logger,
manager: manager,
}
}
// Routes returns the HTTP handler for MCP-related routes.
func (api *API) Routes() http.Handler {
r := chi.NewRouter()
r.Get("/tools", api.handleListTools)
r.Post("/call-tool", api.handleCallTool)
return r
}
// handleListTools returns the cached MCP tool definitions,
// optionally refreshing them first if ?refresh=true is set.
func (api *API) handleListTools(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Allow callers to force a tool re-scan before listing.
if r.URL.Query().Get("refresh") == "true" {
if err := api.manager.RefreshTools(ctx); err != nil {
api.logger.Warn(ctx, "failed to refresh MCP tools", slog.Error(err))
}
}
tools := api.manager.Tools()
// Ensure non-nil so JSON serialization returns [] not null.
if tools == nil {
tools = []workspacesdk.MCPToolInfo{}
}
httpapi.Write(ctx, rw, http.StatusOK, workspacesdk.ListMCPToolsResponse{
Tools: tools,
})
}
// handleCallTool proxies a tool invocation to the appropriate
// MCP server based on the tool name prefix.
func (api *API) handleCallTool(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var req workspacesdk.CallMCPToolRequest
if !httpapi.Read(ctx, rw, r, &req) {
return
}
resp, err := api.manager.CallTool(ctx, req)
if err != nil {
status := http.StatusBadGateway
if errors.Is(err, ErrInvalidToolName) {
status = http.StatusBadRequest
} else if errors.Is(err, ErrUnknownServer) {
status = http.StatusNotFound
}
httpapi.Write(ctx, rw, status, codersdk.Response{
Message: "MCP tool call failed.",
Detail: err.Error(),
})
return
}
httpapi.Write(ctx, rw, http.StatusOK, resp)
}
+115
@@ -0,0 +1,115 @@
package agentmcp
import (
"encoding/json"
"os"
"slices"
"strings"
"golang.org/x/xerrors"
)
// ServerConfig describes a single MCP server parsed from a .mcp.json file.
type ServerConfig struct {
Name string `json:"name"`
Transport string `json:"type"`
Command string `json:"command"`
Args []string `json:"args"`
Env map[string]string `json:"env"`
URL string `json:"url"`
Headers map[string]string `json:"headers"`
}
// mcpConfigFile mirrors the on-disk .mcp.json schema.
type mcpConfigFile struct {
MCPServers map[string]json.RawMessage `json:"mcpServers"`
}
// mcpServerEntry is a single server block inside mcpServers.
type mcpServerEntry struct {
Command string `json:"command"`
Args []string `json:"args"`
Env map[string]string `json:"env"`
Type string `json:"type"`
URL string `json:"url"`
Headers map[string]string `json:"headers"`
}
// ParseConfig reads a .mcp.json file at path and returns the declared
// MCP servers sorted by name. It returns an empty slice when the
// mcpServers key is missing or empty.
func ParseConfig(path string) ([]ServerConfig, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, xerrors.Errorf("read mcp config %q: %w", path, err)
}
var cfg mcpConfigFile
if err := json.Unmarshal(data, &cfg); err != nil {
return nil, xerrors.Errorf("parse mcp config %q: %w", path, err)
}
if len(cfg.MCPServers) == 0 {
return []ServerConfig{}, nil
}
servers := make([]ServerConfig, 0, len(cfg.MCPServers))
for name, raw := range cfg.MCPServers {
var entry mcpServerEntry
if err := json.Unmarshal(raw, &entry); err != nil {
return nil, xerrors.Errorf("parse server %q in %q: %w", name, path, err)
}
if strings.Contains(name, ToolNameSep) || strings.HasPrefix(name, "_") || strings.HasSuffix(name, "_") {
return nil, xerrors.Errorf("server name %q in %q contains reserved separator %q or leading/trailing underscore", name, path, ToolNameSep)
}
transport := inferTransport(entry)
if transport == "" {
return nil, xerrors.Errorf("server %q in %q has no command or url", name, path)
}
resolveEnvVars(entry.Env)
servers = append(servers, ServerConfig{
Name: name,
Transport: transport,
Command: entry.Command,
Args: entry.Args,
Env: entry.Env,
URL: entry.URL,
Headers: entry.Headers,
})
}
slices.SortFunc(servers, func(a, b ServerConfig) int {
return strings.Compare(a.Name, b.Name)
})
return servers, nil
}
// inferTransport determines the transport type for a server entry.
// An explicit "type" field takes priority; otherwise the presence
// of "command" implies stdio and "url" implies http.
func inferTransport(e mcpServerEntry) string {
if e.Type != "" {
return e.Type
}
if e.Command != "" {
return "stdio"
}
if e.URL != "" {
return "http"
}
return ""
}
// resolveEnvVars expands ${VAR} references in env map values
// using the current process environment.
func resolveEnvVars(env map[string]string) {
for k, v := range env {
env[k] = os.Expand(v, os.Getenv)
}
}
+254
@@ -0,0 +1,254 @@
package agentmcp_test
import (
"encoding/json"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/agent/x/agentmcp"
)
func TestParseConfig(t *testing.T) {
t.Parallel()
tests := []struct {
name string
content string
expected []agentmcp.ServerConfig
expectError bool
}{
{
name: "StdioServer",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"my-server": map[string]any{
"command": "npx",
"args": []string{"-y", "@example/mcp-server"},
"env": map[string]string{"FOO": "bar"},
},
},
}),
expected: []agentmcp.ServerConfig{
{
Name: "my-server",
Transport: "stdio",
Command: "npx",
Args: []string{"-y", "@example/mcp-server"},
Env: map[string]string{"FOO": "bar"},
},
},
},
{
name: "HTTPServer",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"remote": map[string]any{
"url": "https://example.com/mcp",
"headers": map[string]string{"Authorization": "Bearer tok"},
},
},
}),
expected: []agentmcp.ServerConfig{
{
Name: "remote",
Transport: "http",
URL: "https://example.com/mcp",
Headers: map[string]string{"Authorization": "Bearer tok"},
},
},
},
{
name: "SSEServer",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"events": map[string]any{
"type": "sse",
"url": "https://example.com/sse",
},
},
}),
expected: []agentmcp.ServerConfig{
{
Name: "events",
Transport: "sse",
URL: "https://example.com/sse",
},
},
},
{
name: "ExplicitTypeOverridesInference",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"hybrid": map[string]any{
"command": "some-binary",
"type": "http",
},
},
}),
expected: []agentmcp.ServerConfig{
{
Name: "hybrid",
Transport: "http",
Command: "some-binary",
},
},
},
{
name: "EnvVarPassthrough",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"srv": map[string]any{
"command": "run",
"env": map[string]string{"PLAIN": "literal-value"},
},
},
}),
expected: []agentmcp.ServerConfig{
{
Name: "srv",
Transport: "stdio",
Command: "run",
Env: map[string]string{"PLAIN": "literal-value"},
},
},
},
{
name: "EmptyMCPServers",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{},
}),
expected: []agentmcp.ServerConfig{},
},
{
name: "MalformedJSON",
content: `{not valid json`,
expectError: true,
},
{
name: "ServerNameContainsSeparator",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"bad__name": map[string]any{"command": "run"},
},
}),
expectError: true,
},
{
name: "ServerNameTrailingUnderscore",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"server_": map[string]any{"command": "run"},
},
}),
expectError: true,
},
{
name: "ServerNameLeadingUnderscore",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"_server": map[string]any{"command": "run"},
},
}),
expectError: true,
},
{
name: "EmptyTransport", content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"empty": map[string]any{},
},
}),
expectError: true,
},
{
name: "MissingMCPServersKey",
content: mustJSON(t, map[string]any{
"servers": map[string]any{},
}),
expected: []agentmcp.ServerConfig{},
},
{
name: "MultipleServersSortedByName",
content: mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"zeta": map[string]any{"command": "z"},
"alpha": map[string]any{"command": "a"},
"mu": map[string]any{"command": "m"},
},
}),
expected: []agentmcp.ServerConfig{
{Name: "alpha", Transport: "stdio", Command: "a"},
{Name: "mu", Transport: "stdio", Command: "m"},
{Name: "zeta", Transport: "stdio", Command: "z"},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
dir := t.TempDir()
path := filepath.Join(dir, ".mcp.json")
err := os.WriteFile(path, []byte(tt.content), 0o600)
require.NoError(t, err)
got, err := agentmcp.ParseConfig(path)
if tt.expectError {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tt.expected, got)
})
}
}
// TestParseConfig_EnvVarInterpolation verifies that ${VAR} references
// in env values are resolved from the process environment. This test
// cannot be parallel because t.Setenv is incompatible with t.Parallel.
func TestParseConfig_EnvVarInterpolation(t *testing.T) {
t.Setenv("TEST_MCP_TOKEN", "secret123")
content := mustJSON(t, map[string]any{
"mcpServers": map[string]any{
"srv": map[string]any{
"command": "run",
"env": map[string]string{"TOKEN": "${TEST_MCP_TOKEN}"},
},
},
})
dir := t.TempDir()
path := filepath.Join(dir, ".mcp.json")
err := os.WriteFile(path, []byte(content), 0o600)
require.NoError(t, err)
got, err := agentmcp.ParseConfig(path)
require.NoError(t, err)
require.Equal(t, []agentmcp.ServerConfig{
{
Name: "srv",
Transport: "stdio",
Command: "run",
Env: map[string]string{"TOKEN": "secret123"},
},
}, got)
}
func TestParseConfig_FileNotFound(t *testing.T) {
t.Parallel()
_, err := agentmcp.ParseConfig(filepath.Join(t.TempDir(), "nonexistent.json"))
require.Error(t, err)
}
// mustJSON marshals v to a JSON string, failing the test on error.
func mustJSON(t *testing.T, v any) string {
t.Helper()
data, err := json.Marshal(v)
require.NoError(t, err)
return string(data)
}
@@ -0,0 +1,447 @@
package agentmcp
import (
"context"
"errors"
"fmt"
"io/fs"
"os"
"path/filepath"
"slices"
"strings"
"sync"
"time"
"github.com/mark3labs/mcp-go/client"
"github.com/mark3labs/mcp-go/client/transport"
"github.com/mark3labs/mcp-go/mcp"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/buildinfo"
"github.com/coder/coder/v2/codersdk/workspacesdk"
)
// ToolNameSep separates the server name from the original tool name
// in prefixed tool names. Double underscore avoids collisions with
// tool names that may contain single underscores.
const ToolNameSep = "__"
// connectTimeout bounds how long we wait for a single MCP server
// to start its transport and complete initialization.
const connectTimeout = 30 * time.Second
// toolCallTimeout bounds how long a single tool invocation may
// take before being canceled.
const toolCallTimeout = 60 * time.Second
var (
// ErrInvalidToolName is returned when the tool name format
// is not "server__tool".
ErrInvalidToolName = xerrors.New("invalid tool name format")
// ErrUnknownServer is returned when no MCP server matches
// the prefix in the tool name.
ErrUnknownServer = xerrors.New("unknown MCP server")
)
// Manager manages connections to MCP servers discovered from a
// workspace's .mcp.json file. It caches the aggregated tool list
// and proxies tool calls to the appropriate server.
type Manager struct {
mu sync.RWMutex
logger slog.Logger
closed bool
servers map[string]*serverEntry // keyed by server name
tools []workspacesdk.MCPToolInfo
}
// serverEntry pairs a server config with its connected client.
type serverEntry struct {
config ServerConfig
client *client.Client
}
// NewManager creates a new MCP client manager.
func NewManager(logger slog.Logger) *Manager {
return &Manager{
logger: logger,
servers: make(map[string]*serverEntry),
}
}
// Connect discovers .mcp.json in dir and connects to all
// configured servers. Failed servers are logged and skipped.
func (m *Manager) Connect(ctx context.Context, dir string) error {
path := filepath.Join(dir, ".mcp.json")
configs, err := ParseConfig(path)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
return nil
}
return xerrors.Errorf("parse mcp config: %w", err)
}
// Connect to servers in parallel without holding the
// lock, since each connectServer call may block on
// network I/O for up to connectTimeout.
type connectedServer struct {
name string
config ServerConfig
client *client.Client
}
var (
mu sync.Mutex
connected []connectedServer
)
var eg errgroup.Group
for _, cfg := range configs {
eg.Go(func() error {
c, err := m.connectServer(ctx, cfg)
if err != nil {
m.logger.Warn(ctx, "skipping MCP server",
slog.F("server", cfg.Name),
slog.F("transport", cfg.Transport),
slog.Error(err),
)
return nil // Don't fail the group.
}
mu.Lock()
connected = append(connected, connectedServer{
name: cfg.Name, config: cfg, client: c,
})
mu.Unlock()
return nil
})
}
_ = eg.Wait()
m.mu.Lock()
if m.closed {
m.mu.Unlock()
// Close the freshly-connected clients since we're
// shutting down.
for _, cs := range connected {
_ = cs.client.Close()
}
return xerrors.New("manager closed")
}
// Close previous connections to avoid leaking child
// processes on agent reconnect.
for _, entry := range m.servers {
_ = entry.client.Close()
}
m.servers = make(map[string]*serverEntry, len(connected))
for _, cs := range connected {
m.servers[cs.name] = &serverEntry{
config: cs.config,
client: cs.client,
}
}
m.mu.Unlock()
// Refresh tools outside the lock to avoid blocking
// concurrent reads during network I/O.
if err := m.RefreshTools(ctx); err != nil {
m.logger.Warn(ctx, "failed to refresh MCP tools after connect", slog.Error(err))
}
return nil
}
// connectServer establishes a connection to a single MCP server
// and returns the connected client. It does not modify any Manager
// state.
func (*Manager) connectServer(ctx context.Context, cfg ServerConfig) (*client.Client, error) {
tr, err := createTransport(cfg)
if err != nil {
return nil, xerrors.Errorf("create transport for %q: %w", cfg.Name, err)
}
c := client.NewClient(tr)
connectCtx, cancel := context.WithTimeout(ctx, connectTimeout)
defer cancel()
if err := c.Start(connectCtx); err != nil {
_ = c.Close()
return nil, xerrors.Errorf("start %q: %w", cfg.Name, err)
}
_, err = c.Initialize(connectCtx, mcp.InitializeRequest{
Params: mcp.InitializeParams{
ProtocolVersion: mcp.LATEST_PROTOCOL_VERSION,
ClientInfo: mcp.Implementation{
Name: "coder-agent",
Version: buildinfo.Version(),
},
},
})
if err != nil {
_ = c.Close()
return nil, xerrors.Errorf("initialize %q: %w", cfg.Name, err)
}
return c, nil
}
// createTransport builds the mcp-go transport for a server config.
func createTransport(cfg ServerConfig) (transport.Interface, error) {
switch cfg.Transport {
case "stdio":
return transport.NewStdio(
cfg.Command,
buildEnv(cfg.Env),
cfg.Args...,
), nil
case "http", "":
return transport.NewStreamableHTTP(
cfg.URL,
transport.WithHTTPHeaders(cfg.Headers),
)
case "sse":
return transport.NewSSE(
cfg.URL,
transport.WithHeaders(cfg.Headers),
)
default:
return nil, xerrors.Errorf("unsupported transport %q", cfg.Transport)
}
}
// buildEnv merges the current process environment with explicit
// overrides, returning the result as KEY=VALUE strings suitable
// for the stdio transport.
func buildEnv(explicit map[string]string) []string {
env := os.Environ()
if len(explicit) == 0 {
return env
}
// Index existing env so explicit keys can override in-place.
existing := make(map[string]int, len(env))
for i, kv := range env {
if k, _, ok := strings.Cut(kv, "="); ok {
existing[k] = i
}
}
for k, v := range explicit {
entry := k + "=" + v
if idx, ok := existing[k]; ok {
env[idx] = entry
} else {
env = append(env, entry)
}
}
return env
}
// Tools returns the cached tool list. Thread-safe.
func (m *Manager) Tools() []workspacesdk.MCPToolInfo {
m.mu.RLock()
defer m.mu.RUnlock()
return slices.Clone(m.tools)
}
// CallTool proxies a tool call to the appropriate MCP server.
func (m *Manager) CallTool(ctx context.Context, req workspacesdk.CallMCPToolRequest) (workspacesdk.CallMCPToolResponse, error) {
serverName, originalName, err := splitToolName(req.ToolName)
if err != nil {
return workspacesdk.CallMCPToolResponse{}, err
}
m.mu.RLock()
entry, ok := m.servers[serverName]
m.mu.RUnlock()
if !ok {
return workspacesdk.CallMCPToolResponse{}, xerrors.Errorf("%w: %q", ErrUnknownServer, serverName)
}
callCtx, cancel := context.WithTimeout(ctx, toolCallTimeout)
defer cancel()
result, err := entry.client.CallTool(callCtx, mcp.CallToolRequest{
Params: mcp.CallToolParams{
Name: originalName,
Arguments: req.Arguments,
},
})
if err != nil {
return workspacesdk.CallMCPToolResponse{}, xerrors.Errorf("call tool %q on %q: %w", originalName, serverName, err)
}
return convertResult(result), nil
}
// splitToolName extracts the server name and original tool name
// from a prefixed tool name like "server__tool".
func splitToolName(prefixed string) (serverName, toolName string, err error) {
server, tool, ok := strings.Cut(prefixed, ToolNameSep)
if !ok || server == "" || tool == "" {
return "", "", xerrors.Errorf("%w: expected format \"server%stool\", got %q", ErrInvalidToolName, ToolNameSep, prefixed)
}
return server, tool, nil
}
// convertResult translates an MCP CallToolResult into a
// workspacesdk.CallMCPToolResponse. It iterates over content
// items and maps each recognized type.
func convertResult(result *mcp.CallToolResult) workspacesdk.CallMCPToolResponse {
if result == nil {
return workspacesdk.CallMCPToolResponse{}
}
var content []workspacesdk.MCPToolContent
for _, item := range result.Content {
switch c := item.(type) {
case mcp.TextContent:
content = append(content, workspacesdk.MCPToolContent{
Type: "text",
Text: c.Text,
})
case mcp.ImageContent:
content = append(content, workspacesdk.MCPToolContent{
Type: "image",
Data: c.Data,
MediaType: c.MIMEType,
})
case mcp.AudioContent:
content = append(content, workspacesdk.MCPToolContent{
Type: "audio",
Data: c.Data,
MediaType: c.MIMEType,
})
case mcp.EmbeddedResource:
content = append(content, workspacesdk.MCPToolContent{
Type: "resource",
Text: fmt.Sprintf("[embedded resource: %T]", c.Resource),
})
case mcp.ResourceLink:
content = append(content, workspacesdk.MCPToolContent{
Type: "resource",
Text: fmt.Sprintf("[resource link: %s]", c.URI),
})
default:
content = append(content, workspacesdk.MCPToolContent{
Type: "text",
Text: fmt.Sprintf("[unsupported content type: %T]", item),
})
}
}
return workspacesdk.CallMCPToolResponse{
Content: content,
IsError: result.IsError,
}
}
// RefreshTools re-fetches tool lists from all connected servers
// in parallel and rebuilds the cache. On partial failure, tools
// from servers that responded successfully are merged with the
// existing cached tools for servers that failed, so a single
// dead server doesn't block updates from healthy ones.
func (m *Manager) RefreshTools(ctx context.Context) error {
// Snapshot servers under read lock.
m.mu.RLock()
servers := make(map[string]*serverEntry, len(m.servers))
for k, v := range m.servers {
servers[k] = v
}
m.mu.RUnlock()
// Fetch tool lists in parallel without holding any lock.
type serverTools struct {
name string
tools []workspacesdk.MCPToolInfo
}
var (
mu sync.Mutex
results []serverTools
failed []string
errs []error
)
var eg errgroup.Group
for name, entry := range servers {
eg.Go(func() error {
listCtx, cancel := context.WithTimeout(ctx, connectTimeout)
result, err := entry.client.ListTools(listCtx, mcp.ListToolsRequest{})
cancel()
if err != nil {
m.logger.Warn(ctx, "failed to list tools from MCP server",
slog.F("server", name),
slog.Error(err),
)
mu.Lock()
errs = append(errs, xerrors.Errorf("list tools from %q: %w", name, err))
failed = append(failed, name)
mu.Unlock()
return nil
}
var tools []workspacesdk.MCPToolInfo
for _, tool := range result.Tools {
tools = append(tools, workspacesdk.MCPToolInfo{
ServerName: name,
Name: name + ToolNameSep + tool.Name,
Description: tool.Description,
Schema: tool.InputSchema.Properties,
Required: tool.InputSchema.Required,
})
}
mu.Lock()
results = append(results, serverTools{name: name, tools: tools})
mu.Unlock()
return nil
})
}
_ = eg.Wait()
// Build the new tool list. For servers that failed, preserve
// their tools from the existing cache so a single dead server
// doesn't remove healthy tools.
var merged []workspacesdk.MCPToolInfo
for _, st := range results {
merged = append(merged, st.tools...)
}
if len(failed) > 0 {
failedSet := make(map[string]struct{}, len(failed))
for _, f := range failed {
failedSet[f] = struct{}{}
}
m.mu.RLock()
for _, t := range m.tools {
if _, ok := failedSet[t.ServerName]; ok {
merged = append(merged, t)
}
}
m.mu.RUnlock()
}
slices.SortFunc(merged, func(a, b workspacesdk.MCPToolInfo) int {
return strings.Compare(a.Name, b.Name)
})
m.mu.Lock()
m.tools = merged
m.mu.Unlock()
return errors.Join(errs...)
}
// Close terminates all MCP server connections and child
// processes.
func (m *Manager) Close() error {
m.mu.Lock()
defer m.mu.Unlock()
m.closed = true
var errs []error
for _, entry := range m.servers {
errs = append(errs, entry.client.Close())
}
m.servers = make(map[string]*serverEntry)
m.tools = nil
return errors.Join(errs...)
}
@@ -0,0 +1,195 @@
package agentmcp
import (
"testing"
"github.com/mark3labs/mcp-go/mcp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/codersdk/workspacesdk"
)
func TestSplitToolName(t *testing.T) {
t.Parallel()
tests := []struct {
name string
input string
wantServer string
wantTool string
wantErr bool
}{
{
name: "Valid",
input: "server__tool",
wantServer: "server",
wantTool: "tool",
},
{
name: "ValidWithUnderscoresInTool",
input: "server__my_tool",
wantServer: "server",
wantTool: "my_tool",
},
{
name: "MissingSeparator",
input: "servertool",
wantErr: true,
},
{
name: "EmptyServer",
input: "__tool",
wantErr: true,
},
{
name: "EmptyTool",
input: "server__",
wantErr: true,
},
{
name: "JustSeparator",
input: "__",
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
server, tool, err := splitToolName(tt.input)
if tt.wantErr {
require.Error(t, err)
assert.ErrorIs(t, err, ErrInvalidToolName)
return
}
require.NoError(t, err)
assert.Equal(t, tt.wantServer, server)
assert.Equal(t, tt.wantTool, tool)
})
}
}
func TestConvertResult(t *testing.T) {
t.Parallel()
tests := []struct {
name string
// input is a pointer so we can test nil.
input *mcp.CallToolResult
want workspacesdk.CallMCPToolResponse
}{
{
name: "NilInput",
input: nil,
want: workspacesdk.CallMCPToolResponse{},
},
{
name: "TextContent",
input: &mcp.CallToolResult{
Content: []mcp.Content{
mcp.TextContent{Type: "text", Text: "hello"},
},
},
want: workspacesdk.CallMCPToolResponse{
Content: []workspacesdk.MCPToolContent{
{Type: "text", Text: "hello"},
},
},
},
{
name: "ImageContent",
input: &mcp.CallToolResult{
Content: []mcp.Content{
mcp.ImageContent{
Type: "image",
Data: "base64data",
MIMEType: "image/png",
},
},
},
want: workspacesdk.CallMCPToolResponse{
Content: []workspacesdk.MCPToolContent{
{Type: "image", Data: "base64data", MediaType: "image/png"},
},
},
},
{
name: "AudioContent",
input: &mcp.CallToolResult{
Content: []mcp.Content{
mcp.AudioContent{
Type: "audio",
Data: "base64audio",
MIMEType: "audio/mp3",
},
},
},
want: workspacesdk.CallMCPToolResponse{
Content: []workspacesdk.MCPToolContent{
{Type: "audio", Data: "base64audio", MediaType: "audio/mp3"},
},
},
},
{
name: "IsErrorPropagation",
input: &mcp.CallToolResult{
Content: []mcp.Content{
mcp.TextContent{Type: "text", Text: "fail"},
},
IsError: true,
},
want: workspacesdk.CallMCPToolResponse{
Content: []workspacesdk.MCPToolContent{
{Type: "text", Text: "fail"},
},
IsError: true,
},
},
{
name: "MultipleContentItems",
input: &mcp.CallToolResult{
Content: []mcp.Content{
mcp.TextContent{Type: "text", Text: "caption"},
mcp.ImageContent{
Type: "image",
Data: "imgdata",
MIMEType: "image/jpeg",
},
},
},
want: workspacesdk.CallMCPToolResponse{
Content: []workspacesdk.MCPToolContent{
{Type: "text", Text: "caption"},
{Type: "image", Data: "imgdata", MediaType: "image/jpeg"},
},
},
},
{
name: "ResourceLink",
input: &mcp.CallToolResult{
Content: []mcp.Content{
mcp.ResourceLink{
Type: "resource_link",
URI: "file:///tmp/test.txt",
},
},
},
want: workspacesdk.CallMCPToolResponse{
Content: []workspacesdk.MCPToolContent{
{Type: "resource", Text: "[resource link: file:///tmp/test.txt]"},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
got := convertResult(tt.input)
assert.Equal(t, tt.want, got)
})
}
}
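The `server__tool` naming scheme round-trips cleanly. The self-contained sketch below (a hypothetical `split` helper mirroring `splitToolName`) shows why rejecting `__` inside server names at parse time keeps the split unambiguous even when tool names contain underscores:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

const sep = "__" // mirrors ToolNameSep

var errBadName = errors.New("invalid tool name format")

// split cuts at the FIRST "__"; both halves must be non-empty.
func split(prefixed string) (server, tool string, err error) {
	s, t, ok := strings.Cut(prefixed, sep)
	if !ok || s == "" || t == "" {
		return "", "", fmt.Errorf("%w: %q", errBadName, prefixed)
	}
	return s, t, nil
}

func main() {
	// Prefixing then splitting recovers the original parts.
	name := "github" + sep + "create_issue"
	s, tool, _ := split(name)
	fmt.Println(s, tool) // github create_issue

	// Because server names may not contain "__", cutting at the
	// first separator is always correct, even for tool names that
	// themselves contain "__".
	_, _, err := split("__orphan")
	fmt.Println(err != nil) // true
}
```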
@@ -173,7 +173,10 @@ func Start(t *testing.T, inv *serpent.Invocation) {
StartWithAssert(t, inv, nil)
}
func StartWithAssert(t *testing.T, inv *serpent.Invocation, assertCallback func(t *testing.T, err error)) { //nolint:revive
// StartWithAssert starts the given invocation and calls assertCallback
// with the resulting error when the invocation completes. If assertCallback
// is nil, expected shutdown errors are silently tolerated.
func StartWithAssert(t *testing.T, inv *serpent.Invocation, assertCallback func(t *testing.T, err error)) {
t.Helper()
closeCh := make(chan struct{})
@@ -173,7 +173,6 @@ func (selectModel) Init() tea.Cmd {
return nil
}
//nolint:revive // The linter complains about modifying 'm' but this is typical practice for bubbletea
func (m selectModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
var cmd tea.Cmd
@@ -463,7 +462,6 @@ func (multiSelectModel) Init() tea.Cmd {
return nil
}
//nolint:revive // For same reason as previous Update definition
func (m multiSelectModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
var cmd tea.Cmd
@@ -1414,7 +1414,6 @@ func tailLineStyle() pretty.Style {
return pretty.Style{pretty.Nop}
}
//nolint:unused
func SlimUnsupported(w io.Writer, cmd string) {
_, _ = fmt.Fprintf(w, "You are using a 'slim' build of Coder, which does not support the %s subcommand.\n", pretty.Sprint(cliui.DefaultStyles.Code, cmd))
_, _ = fmt.Fprintln(w, "")
@@ -352,8 +352,6 @@ func TestScheduleOverride(t *testing.T) {
require.NoError(t, err, "invalid schedule")
ownerClient, _, _, ws := setupTestSchedule(t, sched)
now := time.Now()
// To avoid the likelihood of time-related flakes, only matching up to the hour.
expectedDeadline := now.In(loc).Add(10 * time.Hour).Format("2006-01-02T15:")
// When: we override the stop schedule
inv, root := clitest.New(t,
@@ -364,6 +362,19 @@ func TestScheduleOverride(t *testing.T) {
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Fetch the workspace to get the actual deadline set by the
// server. Computing our own expected deadline from a separately
// captured time.Now() is racy: the CLI command calls time.Now()
// internally, and with the Asia/Kolkata +05:30 offset the hour
// boundary falls at :30 UTC minutes. A small delay between our
// time.Now() and the command's is enough to land in different
// hours.
updated, err := ownerClient.Workspace(context.Background(), ws[0].ID)
require.NoError(t, err)
require.False(t, updated.LatestBuild.Deadline.IsZero(), "deadline should be set after extend")
require.WithinDuration(t, now.Add(10*time.Hour), updated.LatestBuild.Deadline.Time, 5*time.Minute)
expectedDeadline := updated.LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339)
// Then: the updated schedule should be shown
pty.ExpectMatch(ws[0].OwnerName + "/" + ws[0].Name)
pty.ExpectMatch(sched.Humanize())
@@ -305,7 +305,6 @@ func enablePrometheus(
}
options.ProvisionerdServerMetrics = provisionerdserverMetrics
//nolint:revive
return ServeHandler(
ctx, logger, promhttp.InstrumentMetricHandler(
options.PrometheusRegistry, promhttp.HandlerFor(options.PrometheusRegistry, promhttp.HandlerOpts{}),
@@ -1637,8 +1636,6 @@ var defaultCipherSuites = func() []uint16 {
// configureServerTLS returns the TLS config used for the Coderd server
// connections to clients. A logger is passed in to allow printing warning
// messages that do not block startup.
//
//nolint:revive
func configureServerTLS(ctx context.Context, logger slog.Logger, tlsMinVersion, tlsClientAuth string, tlsCertFiles, tlsKeyFiles []string, tlsClientCAFile string, ciphers []string, allowInsecureCiphers bool) (*tls.Config, error) {
tlsConfig := &tls.Config{
MinVersion: tls.VersionTLS12,
@@ -2055,7 +2052,6 @@ func getGithubOAuth2ConfigParams(ctx context.Context, db database.Store, vals *c
return &params, nil
}
//nolint:revive // Ignore flag-parameter: parameter 'allowEveryone' seems to be a control flag, avoid control coupling (revive)
func configureGithubOAuth2(instrument *promoauth.Factory, params *githubOAuth2ConfigParams) (*coderd.GithubOAuth2Config, error) {
redirectURL, err := params.accessURL.Parse("/api/v2/users/oauth2/github/callback")
if err != nil {
@@ -2331,7 +2327,8 @@ func ConfigureHTTPClient(ctx context.Context, clientCertFile, clientKeyFile stri
return ctx, nil, err
}
tlsClientConfig := &tls.Config{ //nolint:gosec
tlsClientConfig := &tls.Config{
MinVersion: tls.VersionTLS12,
Certificates: certificates,
NextProtos: []string{"h2", "http/1.1"},
}
@@ -2123,7 +2123,6 @@ func TestServer_TelemetryDisable(t *testing.T) {
// Set the default telemetry to true (normally disabled in tests).
t.Setenv("CODER_TEST_TELEMETRY_DEFAULT_ENABLE", "true")
//nolint:paralleltest // No need to reinitialise the variable tt (Go version).
for _, tt := range []struct {
key string
val string
@@ -828,7 +828,7 @@ func TestTemplateEdit(t *testing.T) {
"--require-active-version",
}
inv, root := clitest.New(t, cmdArgs...)
//nolint
//nolint:gocritic // Using owner client is required for template editing.
clitest.SetupConfig(t, client, root)
ctx := testutil.Context(t, testutil.WaitLong)
@@ -858,7 +858,7 @@ func TestTemplateEdit(t *testing.T) {
"--name", "something-new",
}
inv, root := clitest.New(t, cmdArgs...)
//nolint
//nolint:gocritic // Using owner client is required for template editing.
clitest.SetupConfig(t, client, root)
ctx := testutil.Context(t, testutil.WaitLong)
@@ -17,7 +17,8 @@
"name": "owner",
"display_name": "Owner"
}
]
],
"has_ai_seat": false
},
{
"id": "==========[second user ID]==========",
@@ -31,6 +32,7 @@
"organization_ids": [
"===========[first org ID]==========="
],
"roles": []
"roles": [],
"has_ai_seat": false
}
]
@@ -857,13 +857,18 @@ aibridgeproxy:
# Comma-separated list of AI provider domains for which HTTPS traffic will be
# decrypted and routed through AI Bridge. Requests to other domains will be
# tunneled directly without decryption. Supported domains: api.anthropic.com,
# api.openai.com, api.individual.githubcopilot.com.
# (default: api.anthropic.com,api.openai.com,api.individual.githubcopilot.com,
# api.openai.com, api.individual.githubcopilot.com,
# api.business.githubcopilot.com, api.enterprise.githubcopilot.com, chatgpt.com.
# (default:
# api.anthropic.com,api.openai.com,api.individual.githubcopilot.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,chatgpt.com,
# type: string-array)
domain_allowlist:
- api.anthropic.com
- api.openai.com
- api.individual.githubcopilot.com
- api.business.githubcopilot.com
- api.enterprise.githubcopilot.com
- chatgpt.com
# URL of an upstream HTTP proxy to chain tunneled (non-allowlisted) requests
# through. Format: http://[user:pass@]host:port or https://[user:pass@]host:port.
# (default: <unset>, type: string)
@@ -101,7 +101,6 @@ func TestConnectionLog(t *testing.T) {
reason: "because error says so",
},
}
//nolint:paralleltest // No longer necessary to reinitialise the variable tt.
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
@@ -3,6 +3,7 @@ package agentapi
import (
"context"
"fmt"
"strings"
"time"
"github.com/google/uuid"
@@ -60,6 +61,8 @@ func (a *MetadataAPI) BatchUpdateMetadata(ctx context.Context, req *agentproto.B
}
)
for _, md := range req.Metadata {
md.Result.Value = strings.TrimSpace(md.Result.Value)
md.Result.Error = strings.TrimSpace(md.Result.Error)
metadataError := md.Result.Error
allKeysLen += len(md.Key)
@@ -57,16 +57,44 @@ func TestBatchUpdateMetadata(t *testing.T) {
CollectedAt: timestamppb.New(now.Add(-3 * time.Second)),
Age: 3,
Value: "",
Error: "uncool value",
Error: "\t uncool error ",
},
},
},
}
batchSize := len(req.Metadata)
// This test sends 2 metadata entries. With batch size 2, we expect
// exactly 1 capacity flush.
// This test sends 2 metadata entries (one clean, one with
// whitespace padding). With batch size 2 we expect exactly
// 1 capacity flush. The matcher verifies that stored values
// are trimmed while clean values pass through unchanged.
expectedValues := map[string]string{
"awesome key": "awesome value",
"uncool key": "",
}
expectedErrors := map[string]string{
"awesome key": "",
"uncool key": "uncool error",
}
store.EXPECT().
BatchUpdateWorkspaceAgentMetadata(gomock.Any(), gomock.Any()).
BatchUpdateWorkspaceAgentMetadata(
gomock.Any(),
gomock.Cond(func(arg database.BatchUpdateWorkspaceAgentMetadataParams) bool {
if len(arg.Key) != len(expectedValues) {
return false
}
for i, key := range arg.Key {
expVal, ok := expectedValues[key]
if !ok || arg.Value[i] != expVal {
return false
}
expErr, ok := expectedErrors[key]
if !ok || arg.Error[i] != expErr {
return false
}
}
return true
}),
).
Return(nil).
Times(1)
@@ -6,18 +6,47 @@ import (
"strings"
)
// HeaderCoderAuth is an internal header used to pass the Coder token
// from AI Proxy to AI Bridge for authentication. This header is stripped
// by AI Bridge before forwarding requests to upstream providers.
const HeaderCoderAuth = "X-Coder-Token"
// HeaderCoderToken is a header set by clients opting into BYOK
// (Bring Your Own Key) mode. It carries the Coder token so
// that Authorization and X-Api-Key can carry the user's own LLM
// credentials. When present, AI Bridge forwards the user's LLM
// headers unchanged instead of injecting the centralized key.
//
// The AI Bridge proxy also sets this header automatically for clients
// that use per-user LLM credentials but cannot set custom headers.
const HeaderCoderToken = "X-Coder-AI-Governance-Token" //nolint:gosec // This is a header name, not a credential.
// ExtractAuthToken extracts an authorization token from HTTP headers.
// It checks X-Coder-Token first (set by AI Proxy), then falls back
// to Authorization header (Bearer token) and X-Api-Key header, which represent
// the different ways clients authenticate against AI providers.
// If none are present, an empty string is returned.
// HeaderCoderRequestID is a header set by aibridgeproxyd on each
// request forwarded to aibridged for cross-service log correlation.
const HeaderCoderRequestID = "X-Coder-AI-Governance-Request-Id"
// GitHub Copilot Business and Enterprise providers.
const (
ProviderCopilotBusiness = "copilot-business"
HostCopilotBusiness = "api.business.githubcopilot.com"
ProviderCopilotEnterprise = "copilot-enterprise"
HostCopilotEnterprise = "api.enterprise.githubcopilot.com"
)
// ChatGPT provider.
const (
ProviderChatGPT = "chatgpt"
HostChatGPT = "chatgpt.com"
BaseURLChatGPT = "https://" + HostChatGPT + "/backend-api/codex"
)
// IsBYOK reports whether the request is using BYOK mode, determined
// by the presence of the X-Coder-AI-Governance-Token header.
func IsBYOK(header http.Header) bool {
return strings.TrimSpace(header.Get(HeaderCoderToken)) != ""
}
// ExtractAuthToken extracts a token from HTTP headers.
// It checks the BYOK header first (set by clients opting into BYOK),
// then falls back to Authorization: Bearer and X-Api-Key for direct
// centralized mode. If none are present, an empty string is returned.
func ExtractAuthToken(header http.Header) string {
if token := strings.TrimSpace(header.Get(HeaderCoderAuth)); token != "" {
if token := strings.TrimSpace(header.Get(HeaderCoderToken)); token != "" {
return token
}
if auth := strings.TrimSpace(header.Get("Authorization")); auth != "" {
@@ -84,6 +84,34 @@ const docTemplate = `{
}
}
},
"/aibridge/clients": {
"get": {
"produces": [
"application/json"
],
"tags": [
"AI Bridge"
],
"summary": "List AI Bridge clients",
"operationId": "list-ai-bridge-clients",
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"security": [
{
"CoderSessionToken": []
}
]
}
},
"/aibridge/interceptions": {
"get": {
"produces": [
@@ -214,6 +242,58 @@ const docTemplate = `{
]
}
},
"/aibridge/sessions/{session_id}": {
"get": {
"produces": [
"application/json"
],
"tags": [
"AI Bridge"
],
"summary": "Get AI Bridge session threads",
"operationId": "get-ai-bridge-session-threads",
"parameters": [
{
"type": "string",
"description": "Session ID (client_session_id or interception UUID)",
"name": "session_id",
"in": "path",
"required": true
},
{
"type": "string",
"description": "Thread pagination cursor (forward/older)",
"name": "after_id",
"in": "query"
},
{
"type": "string",
"description": "Thread pagination cursor (backward/newer)",
"name": "before_id",
"in": "query"
},
{
"type": "integer",
"description": "Number of threads per page (default 50)",
"name": "limit",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/codersdk.AIBridgeSessionThreadsResponse"
}
}
},
"security": [
{
"CoderSessionToken": []
}
]
}
},
"/appearance": {
"get": {
"produces": [
@@ -12675,6 +12755,29 @@ const docTemplate = `{
}
}
},
"codersdk.AIBridgeAgenticAction": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"thinking": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.AIBridgeModelThought"
}
},
"token_usage": {
"$ref": "#/definitions/codersdk.AIBridgeSessionThreadsTokenUsage"
},
"tool_calls": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.AIBridgeToolCall"
}
}
}
},
"codersdk.AIBridgeAnthropicConfig": {
"type": "object",
"properties": {
@@ -12843,6 +12946,14 @@ const docTemplate = `{
}
}
},
"codersdk.AIBridgeModelThought": {
"type": "object",
"properties": {
"text": {
"type": "string"
}
}
},
"codersdk.AIBridgeOpenAIConfig": {
"type": "object",
"properties": {
@@ -12942,6 +13053,76 @@ const docTemplate = `{
}
}
},
"codersdk.AIBridgeSessionThreadsResponse": {
"type": "object",
"properties": {
"client": {
"type": "string"
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"id": {
"type": "string"
},
"initiator": {
"$ref": "#/definitions/codersdk.MinimalUser"
},
"metadata": {
"type": "object",
"additionalProperties": {}
},
"models": {
"type": "array",
"items": {
"type": "string"
}
},
"page_ended_at": {
"type": "string",
"format": "date-time"
},
"page_started_at": {
"type": "string",
"format": "date-time"
},
"providers": {
"type": "array",
"items": {
"type": "string"
}
},
"started_at": {
"type": "string",
"format": "date-time"
},
"threads": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.AIBridgeThread"
}
},
"token_usage_summary": {
"$ref": "#/definitions/codersdk.AIBridgeSessionThreadsTokenUsage"
}
}
},
"codersdk.AIBridgeSessionThreadsTokenUsage": {
"type": "object",
"properties": {
"input_tokens": {
"type": "integer"
},
"metadata": {
"type": "object",
"additionalProperties": {}
},
"output_tokens": {
"type": "integer"
}
}
},
"codersdk.AIBridgeSessionTokenUsageSummary": {
"type": "object",
"properties": {
@@ -12953,6 +13134,41 @@ const docTemplate = `{
}
}
},
"codersdk.AIBridgeThread": {
"type": "object",
"properties": {
"agentic_actions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.AIBridgeAgenticAction"
}
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"id": {
"type": "string",
"format": "uuid"
},
"model": {
"type": "string"
},
"prompt": {
"type": "string"
},
"provider": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"token_usage": {
"$ref": "#/definitions/codersdk.AIBridgeSessionThreadsTokenUsage"
}
}
},
"codersdk.AIBridgeTokenUsage": {
"type": "object",
"properties": {
@@ -12983,6 +13199,42 @@ const docTemplate = `{
}
}
},
"codersdk.AIBridgeToolCall": {
"type": "object",
"properties": {
"created_at": {
"type": "string",
"format": "date-time"
},
"id": {
"type": "string",
"format": "uuid"
},
"injected": {
"type": "boolean"
},
"input": {
"type": "string"
},
"interception_id": {
"type": "string",
"format": "uuid"
},
"metadata": {
"type": "object",
"additionalProperties": {}
},
"provider_response_id": {
"type": "string"
},
"server_url": {
"type": "string"
},
"tool": {
"type": "string"
}
}
},
"codersdk.AIBridgeToolUsage": {
"type": "object",
"properties": {
@@ -13193,6 +13445,11 @@ const docTemplate = `{
"chat:delete",
"chat:read",
"chat:update",
"chat_automation:*",
"chat_automation:create",
"chat_automation:delete",
"chat_automation:read",
"chat_automation:update",
"coder:all",
"coder:apikeys.manage_self",
"coder:application_connect",
@@ -13402,6 +13659,11 @@ const docTemplate = `{
"APIKeyScopeChatDelete",
"APIKeyScopeChatRead",
"APIKeyScopeChatUpdate",
"APIKeyScopeChatAutomationAll",
"APIKeyScopeChatAutomationCreate",
"APIKeyScopeChatAutomationDelete",
"APIKeyScopeChatAutomationRead",
"APIKeyScopeChatAutomationUpdate",
"APIKeyScopeCoderAll",
"APIKeyScopeCoderApikeysManageSelf",
"APIKeyScopeCoderApplicationConnect",
@@ -17426,6 +17688,10 @@ const docTemplate = `{
"$ref": "#/definitions/codersdk.SlimRole"
}
},
"has_ai_seat": {
"description": "HasAISeat intentionally omits omitempty so the API always includes the\nfield, even when false.",
"type": "boolean"
},
"is_service_account": {
"type": "boolean"
},
@@ -18782,6 +19048,7 @@ const docTemplate = `{
"audit_log",
"boundary_usage",
"chat",
"chat_automation",
"connection_log",
"crypto_key",
"debug_info",
@@ -18828,6 +19095,7 @@ const docTemplate = `{
"ResourceAuditLog",
"ResourceBoundaryUsage",
"ResourceChat",
"ResourceChatAutomation",
"ResourceConnectionLog",
"ResourceCryptoKey",
"ResourceDebugInfo",
@@ -20222,6 +20490,10 @@ const docTemplate = `{
"type": "string",
"format": "email"
},
"has_ai_seat": {
"description": "HasAISeat intentionally omits omitempty so the API always includes the\nfield, even when false.",
"type": "boolean"
},
"id": {
"type": "string",
"format": "uuid"
@@ -21071,6 +21343,10 @@ const docTemplate = `{
"type": "string",
"format": "email"
},
"has_ai_seat": {
"description": "HasAISeat intentionally omits omitempty so the API always includes the\nfield, even when false.",
"type": "boolean"
},
"id": {
"type": "string",
"format": "uuid"
+268
@@ -65,6 +65,30 @@
}
}
},
"/aibridge/clients": {
"get": {
"produces": ["application/json"],
"tags": ["AI Bridge"],
"summary": "List AI Bridge clients",
"operationId": "list-ai-bridge-clients",
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"security": [
{
"CoderSessionToken": []
}
]
}
},
"/aibridge/interceptions": {
"get": {
"produces": ["application/json"],
@@ -183,6 +207,54 @@
]
}
},
"/aibridge/sessions/{session_id}": {
"get": {
"produces": ["application/json"],
"tags": ["AI Bridge"],
"summary": "Get AI Bridge session threads",
"operationId": "get-ai-bridge-session-threads",
"parameters": [
{
"type": "string",
"description": "Session ID (client_session_id or interception UUID)",
"name": "session_id",
"in": "path",
"required": true
},
{
"type": "string",
"description": "Thread pagination cursor (forward/older)",
"name": "after_id",
"in": "query"
},
{
"type": "string",
"description": "Thread pagination cursor (backward/newer)",
"name": "before_id",
"in": "query"
},
{
"type": "integer",
"description": "Number of threads per page (default 50)",
"name": "limit",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/codersdk.AIBridgeSessionThreadsResponse"
}
}
},
"security": [
{
"CoderSessionToken": []
}
]
}
},
"/appearance": {
"get": {
"produces": ["application/json"],
@@ -11261,6 +11333,29 @@
}
}
},
"codersdk.AIBridgeAgenticAction": {
"type": "object",
"properties": {
"model": {
"type": "string"
},
"thinking": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.AIBridgeModelThought"
}
},
"token_usage": {
"$ref": "#/definitions/codersdk.AIBridgeSessionThreadsTokenUsage"
},
"tool_calls": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.AIBridgeToolCall"
}
}
}
},
"codersdk.AIBridgeAnthropicConfig": {
"type": "object",
"properties": {
@@ -11429,6 +11524,14 @@
}
}
},
"codersdk.AIBridgeModelThought": {
"type": "object",
"properties": {
"text": {
"type": "string"
}
}
},
"codersdk.AIBridgeOpenAIConfig": {
"type": "object",
"properties": {
@@ -11528,6 +11631,76 @@
}
}
},
"codersdk.AIBridgeSessionThreadsResponse": {
"type": "object",
"properties": {
"client": {
"type": "string"
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"id": {
"type": "string"
},
"initiator": {
"$ref": "#/definitions/codersdk.MinimalUser"
},
"metadata": {
"type": "object",
"additionalProperties": {}
},
"models": {
"type": "array",
"items": {
"type": "string"
}
},
"page_ended_at": {
"type": "string",
"format": "date-time"
},
"page_started_at": {
"type": "string",
"format": "date-time"
},
"providers": {
"type": "array",
"items": {
"type": "string"
}
},
"started_at": {
"type": "string",
"format": "date-time"
},
"threads": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.AIBridgeThread"
}
},
"token_usage_summary": {
"$ref": "#/definitions/codersdk.AIBridgeSessionThreadsTokenUsage"
}
}
},
"codersdk.AIBridgeSessionThreadsTokenUsage": {
"type": "object",
"properties": {
"input_tokens": {
"type": "integer"
},
"metadata": {
"type": "object",
"additionalProperties": {}
},
"output_tokens": {
"type": "integer"
}
}
},
"codersdk.AIBridgeSessionTokenUsageSummary": {
"type": "object",
"properties": {
@@ -11539,6 +11712,41 @@
}
}
},
"codersdk.AIBridgeThread": {
"type": "object",
"properties": {
"agentic_actions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.AIBridgeAgenticAction"
}
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"id": {
"type": "string",
"format": "uuid"
},
"model": {
"type": "string"
},
"prompt": {
"type": "string"
},
"provider": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"token_usage": {
"$ref": "#/definitions/codersdk.AIBridgeSessionThreadsTokenUsage"
}
}
},
"codersdk.AIBridgeTokenUsage": {
"type": "object",
"properties": {
@@ -11569,6 +11777,42 @@
}
}
},
"codersdk.AIBridgeToolCall": {
"type": "object",
"properties": {
"created_at": {
"type": "string",
"format": "date-time"
},
"id": {
"type": "string",
"format": "uuid"
},
"injected": {
"type": "boolean"
},
"input": {
"type": "string"
},
"interception_id": {
"type": "string",
"format": "uuid"
},
"metadata": {
"type": "object",
"additionalProperties": {}
},
"provider_response_id": {
"type": "string"
},
"server_url": {
"type": "string"
},
"tool": {
"type": "string"
}
}
},
"codersdk.AIBridgeToolUsage": {
"type": "object",
"properties": {
@@ -11771,6 +12015,11 @@
"chat:delete",
"chat:read",
"chat:update",
"chat_automation:*",
"chat_automation:create",
"chat_automation:delete",
"chat_automation:read",
"chat_automation:update",
"coder:all",
"coder:apikeys.manage_self",
"coder:application_connect",
@@ -11980,6 +12229,11 @@
"APIKeyScopeChatDelete",
"APIKeyScopeChatRead",
"APIKeyScopeChatUpdate",
"APIKeyScopeChatAutomationAll",
"APIKeyScopeChatAutomationCreate",
"APIKeyScopeChatAutomationDelete",
"APIKeyScopeChatAutomationRead",
"APIKeyScopeChatAutomationUpdate",
"APIKeyScopeCoderAll",
"APIKeyScopeCoderApikeysManageSelf",
"APIKeyScopeCoderApplicationConnect",
@@ -15851,6 +16105,10 @@
"$ref": "#/definitions/codersdk.SlimRole"
}
},
"has_ai_seat": {
"description": "HasAISeat intentionally omits omitempty so the API always includes the\nfield, even when false.",
"type": "boolean"
},
"is_service_account": {
"type": "boolean"
},
@@ -17162,6 +17420,7 @@
"audit_log",
"boundary_usage",
"chat",
"chat_automation",
"connection_log",
"crypto_key",
"debug_info",
@@ -17208,6 +17467,7 @@
"ResourceAuditLog",
"ResourceBoundaryUsage",
"ResourceChat",
"ResourceChatAutomation",
"ResourceConnectionLog",
"ResourceCryptoKey",
"ResourceDebugInfo",
@@ -18547,6 +18807,10 @@
"type": "string",
"format": "email"
},
"has_ai_seat": {
"description": "HasAISeat intentionally omits omitempty so the API always includes the\nfield, even when false.",
"type": "boolean"
},
"id": {
"type": "string",
"format": "uuid"
@@ -19339,6 +19603,10 @@
"type": "string",
"format": "email"
},
"has_ai_seat": {
"description": "HasAISeat intentionally omits omitempty so the API always includes the\nfield, even when false.",
"type": "boolean"
},
"id": {
"type": "string",
"format": "uuid"
+1 -1
@@ -220,7 +220,7 @@ func (api *API) checkAuthorization(rw http.ResponseWriter, r *http.Request) {
Type: string(v.Object.ResourceType),
AnyOrgOwner: v.Object.AnyOrgOwner,
}
-	if obj.Owner == "me" {
+	if obj.Owner == codersdk.Me {
obj.Owner = auth.ID
}
+2
@@ -1155,6 +1155,7 @@ func New(options *Options) *API {
apiKeyMiddleware,
httpmw.RequireExperimentWithDevBypass(api.Experiments, codersdk.ExperimentAgents),
)
r.Get("/by-workspace", api.chatsByWorkspace)
r.Get("/", api.listChats)
r.Post("/", api.postChats)
r.Get("/models", api.listChatModels)
@@ -1233,6 +1234,7 @@ func New(options *Options) *API {
r.Get("/git", api.watchChatGit)
})
r.Post("/interrupt", api.interruptChat)
r.Post("/title/regenerate", api.regenerateChatTitle)
r.Get("/diff", api.getChatDiffContents)
r.Route("/queue/{queuedMessage}", func(r chi.Router) {
r.Delete("/", api.deleteChatQueuedMessage)
+5
@@ -7,6 +7,11 @@ type CheckConstraint string
// CheckConstraint enums.
const (
CheckAPIKeysAllowListNotEmpty CheckConstraint = "api_keys_allow_list_not_empty" // api_keys
CheckChatAutomationEventsChatExclusivity CheckConstraint = "chat_automation_events_chat_exclusivity" // chat_automation_events
CheckChatAutomationTriggersCronFields CheckConstraint = "chat_automation_triggers_cron_fields" // chat_automation_triggers
CheckChatAutomationTriggersWebhookFields CheckConstraint = "chat_automation_triggers_webhook_fields" // chat_automation_triggers
CheckChatAutomationsMaxChatCreatesPerHourCheck CheckConstraint = "chat_automations_max_chat_creates_per_hour_check" // chat_automations
CheckChatAutomationsMaxMessagesPerHourCheck CheckConstraint = "chat_automations_max_messages_per_hour_check" // chat_automations
CheckChatModelConfigsCompressionThresholdCheck CheckConstraint = "chat_model_configs_compression_threshold_check" // chat_model_configs
CheckChatModelConfigsContextLimitCheck CheckConstraint = "chat_model_configs_context_limit_check" // chat_model_configs
CheckChatProvidersProviderCheck CheckConstraint = "chat_providers_provider_check" // chat_providers
+373
@@ -1097,6 +1097,287 @@ func AIBridgeToolUsage(usage database.AIBridgeToolUsage) codersdk.AIBridgeToolUs
}
}
// AIBridgeSessionThreads converts session metadata and thread interceptions
// into the threads response. It groups interceptions into threads, builds
// agentic actions from tool usages and model thoughts, and aggregates
// token usage with metadata.
func AIBridgeSessionThreads(
session database.ListAIBridgeSessionsRow,
interceptions []database.ListAIBridgeSessionThreadsRow,
tokenUsages []database.AIBridgeTokenUsage,
toolUsages []database.AIBridgeToolUsage,
userPrompts []database.AIBridgeUserPrompt,
modelThoughts []database.AIBridgeModelThought,
) codersdk.AIBridgeSessionThreadsResponse {
// Index subresources by interception ID.
tokensByInterception := make(map[uuid.UUID][]database.AIBridgeTokenUsage, len(interceptions))
for _, tu := range tokenUsages {
tokensByInterception[tu.InterceptionID] = append(tokensByInterception[tu.InterceptionID], tu)
}
toolsByInterception := make(map[uuid.UUID][]database.AIBridgeToolUsage, len(interceptions))
for _, tu := range toolUsages {
toolsByInterception[tu.InterceptionID] = append(toolsByInterception[tu.InterceptionID], tu)
}
promptsByInterception := make(map[uuid.UUID][]database.AIBridgeUserPrompt, len(interceptions))
for _, up := range userPrompts {
promptsByInterception[up.InterceptionID] = append(promptsByInterception[up.InterceptionID], up)
}
thoughtsByInterception := make(map[uuid.UUID][]database.AIBridgeModelThought, len(interceptions))
for _, mt := range modelThoughts {
thoughtsByInterception[mt.InterceptionID] = append(thoughtsByInterception[mt.InterceptionID], mt)
}
// Group interceptions by thread_id, preserving the order returned by the
// SQL query.
interceptionsByThread := make(map[uuid.UUID][]database.AIBridgeInterception, len(interceptions))
var threadIDs []uuid.UUID
for _, row := range interceptions {
if _, ok := interceptionsByThread[row.ThreadID]; !ok {
threadIDs = append(threadIDs, row.ThreadID)
}
interceptionsByThread[row.ThreadID] = append(interceptionsByThread[row.ThreadID], row.AIBridgeInterception)
}
// Build threads and track page time bounds.
threads := make([]codersdk.AIBridgeThread, 0, len(threadIDs))
var pageStartedAt, pageEndedAt *time.Time
for _, threadID := range threadIDs {
intcs := interceptionsByThread[threadID]
thread := buildAIBridgeThread(threadID, intcs, tokensByInterception, toolsByInterception, promptsByInterception, thoughtsByInterception)
for _, intc := range intcs {
if pageStartedAt == nil || intc.StartedAt.Before(*pageStartedAt) {
t := intc.StartedAt
pageStartedAt = &t
}
if intc.EndedAt.Valid {
if pageEndedAt == nil || intc.EndedAt.Time.After(*pageEndedAt) {
t := intc.EndedAt.Time
pageEndedAt = &t
}
}
}
threads = append(threads, thread)
}
// Aggregate session-level token usage metadata from all token
// usages in the session (not just the page).
sessionTokenMeta := aggregateTokenMetadata(tokenUsages)
resp := codersdk.AIBridgeSessionThreadsResponse{
ID: session.SessionID,
Initiator: MinimalUserFromVisibleUser(database.VisibleUser{
ID: session.UserID,
Username: session.UserUsername,
Name: session.UserName,
AvatarURL: session.UserAvatarUrl,
}),
Providers: session.Providers,
Models: session.Models,
Metadata: jsonOrEmptyMap(pqtype.NullRawMessage{RawMessage: session.Metadata, Valid: len(session.Metadata) > 0}),
StartedAt: session.StartedAt,
PageStartedAt: pageStartedAt,
PageEndedAt: pageEndedAt,
TokenUsageSummary: codersdk.AIBridgeSessionThreadsTokenUsage{
InputTokens: session.InputTokens,
OutputTokens: session.OutputTokens,
Metadata: sessionTokenMeta,
},
Threads: threads,
}
if resp.Providers == nil {
resp.Providers = []string{}
}
if resp.Models == nil {
resp.Models = []string{}
}
if session.Client != "" {
resp.Client = &session.Client
}
if !session.EndedAt.IsZero() {
resp.EndedAt = &session.EndedAt
}
return resp
}
func buildAIBridgeThread(
threadID uuid.UUID,
interceptions []database.AIBridgeInterception,
tokensByInterception map[uuid.UUID][]database.AIBridgeTokenUsage,
toolsByInterception map[uuid.UUID][]database.AIBridgeToolUsage,
promptsByInterception map[uuid.UUID][]database.AIBridgeUserPrompt,
thoughtsByInterception map[uuid.UUID][]database.AIBridgeModelThought,
) codersdk.AIBridgeThread {
// Find the root interception (where id == threadID) to get the
// thread prompt and model.
var rootIntc *database.AIBridgeInterception
for i := range interceptions {
if interceptions[i].ID == threadID {
rootIntc = &interceptions[i]
break
}
}
// Fallback to first interception if root not found.
if rootIntc == nil && len(interceptions) > 0 {
rootIntc = &interceptions[0]
}
thread := codersdk.AIBridgeThread{
ID: threadID,
}
if rootIntc != nil {
thread.Model = rootIntc.Model
thread.Provider = rootIntc.Provider
// Get first user prompt from root interception.
// A thread can only have one prompt, by definition, since we currently
// only store the last prompt observed in an interception.
if prompts := promptsByInterception[rootIntc.ID]; len(prompts) > 0 {
thread.Prompt = &prompts[0].Prompt
}
}
// Compute thread time bounds from interceptions.
for _, intc := range interceptions {
if thread.StartedAt.IsZero() || intc.StartedAt.Before(thread.StartedAt) {
thread.StartedAt = intc.StartedAt
}
if intc.EndedAt.Valid {
if thread.EndedAt == nil || intc.EndedAt.Time.After(*thread.EndedAt) {
t := intc.EndedAt.Time
thread.EndedAt = &t
}
}
}
// Build agentic actions grouped by interception. Each interception that
// has tool calls produces one action with all its tool calls, thinking
// blocks, and token usage.
var actions []codersdk.AIBridgeAgenticAction
for _, intc := range interceptions {
tools := toolsByInterception[intc.ID]
if len(tools) == 0 {
continue
}
// Thinking blocks for this interception.
thoughts := thoughtsByInterception[intc.ID]
thinking := make([]codersdk.AIBridgeModelThought, 0, len(thoughts))
for _, mt := range thoughts {
thinking = append(thinking, codersdk.AIBridgeModelThought{
Text: mt.Content,
})
}
// Token usage for the interception.
actionTokenUsage := aggregateTokenUsage(tokensByInterception[intc.ID])
// Build tool call list.
toolCalls := make([]codersdk.AIBridgeToolCall, 0, len(tools))
for _, tu := range tools {
toolCalls = append(toolCalls, codersdk.AIBridgeToolCall{
ID: tu.ID,
InterceptionID: tu.InterceptionID,
ProviderResponseID: tu.ProviderResponseID,
ServerURL: tu.ServerUrl.String,
Tool: tu.Tool,
Injected: tu.Injected,
Input: tu.Input,
Metadata: jsonOrEmptyMap(tu.Metadata),
CreatedAt: tu.CreatedAt,
})
}
actions = append(actions, codersdk.AIBridgeAgenticAction{
Model: intc.Model,
TokenUsage: actionTokenUsage,
Thinking: thinking,
ToolCalls: toolCalls,
})
}
if actions == nil {
// Make an empty slice so we don't serialize `null`.
actions = make([]codersdk.AIBridgeAgenticAction, 0)
}
thread.AgenticActions = actions
// Aggregate thread-level token usage.
var threadTokens []database.AIBridgeTokenUsage
for _, intc := range interceptions {
threadTokens = append(threadTokens, tokensByInterception[intc.ID]...)
}
thread.TokenUsage = aggregateTokenUsage(threadTokens)
return thread
}
// aggregateTokenUsage sums token usage rows and aggregates metadata.
func aggregateTokenUsage(tokens []database.AIBridgeTokenUsage) codersdk.AIBridgeSessionThreadsTokenUsage {
var inputTokens, outputTokens int64
for _, tu := range tokens {
inputTokens += tu.InputTokens
outputTokens += tu.OutputTokens
// TODO: once https://github.com/coder/aibridge/issues/150 lands we
// should aggregate the other token types.
}
return codersdk.AIBridgeSessionThreadsTokenUsage{
InputTokens: inputTokens,
OutputTokens: outputTokens,
Metadata: aggregateTokenMetadata(tokens),
}
}
// aggregateTokenMetadata sums all numeric values from the metadata
// JSONB across the given token usage rows by key. Nested objects are
// flattened using dot-notation (e.g. {"cache": {"read_tokens": 10}}
// becomes "cache.read_tokens"). Non-numeric leaves (strings,
// booleans, arrays, nulls) are silently skipped.
func aggregateTokenMetadata(tokens []database.AIBridgeTokenUsage) map[string]any {
sums := make(map[string]int64)
for _, tu := range tokens {
if !tu.Metadata.Valid || len(tu.Metadata.RawMessage) == 0 {
continue
}
var m map[string]json.RawMessage
if err := json.Unmarshal(tu.Metadata.RawMessage, &m); err != nil {
continue
}
flattenAndSum(sums, "", m)
}
result := make(map[string]any, len(sums))
for k, v := range sums {
result[k] = v
}
return result
}
// flattenAndSum recursively walks a JSON object and sums all numeric
// leaf values into sums, using dot-separated keys for nested objects.
func flattenAndSum(sums map[string]int64, prefix string, m map[string]json.RawMessage) {
for k, raw := range m {
key := k
if prefix != "" {
key = prefix + "." + k
}
// Try as a number first.
var n json.Number
if err := json.Unmarshal(raw, &n); err == nil {
if v, err := n.Int64(); err == nil {
sums[key] += v
}
continue
}
// Try as a nested object.
var nested map[string]json.RawMessage
if err := json.Unmarshal(raw, &nested); err == nil {
flattenAndSum(sums, key, nested)
}
// Arrays, strings, booleans, nulls are skipped.
}
}
func InvalidatedPresets(invalidatedPresets []database.UpdatePresetsLastInvalidatedAtRow) []codersdk.InvalidatedPreset {
var presets []codersdk.InvalidatedPreset
for _, p := range invalidatedPresets {
@@ -1235,6 +1516,98 @@ func nullInt64Ptr(v sql.NullInt64) *int64 {
return &value
}
// Chat converts a database.Chat to a codersdk.Chat. It coalesces
// nil slices and maps to empty values for JSON serialization and
// derives RootChatID from the parent chain when not explicitly set.
func Chat(c database.Chat, diffStatus *database.ChatDiffStatus) codersdk.Chat {
mcpServerIDs := c.MCPServerIDs
if mcpServerIDs == nil {
mcpServerIDs = []uuid.UUID{}
}
labels := map[string]string(c.Labels)
if labels == nil {
labels = map[string]string{}
}
chat := codersdk.Chat{
ID: c.ID,
OwnerID: c.OwnerID,
LastModelConfigID: c.LastModelConfigID,
Title: c.Title,
Status: codersdk.ChatStatus(c.Status),
Archived: c.Archived,
PinOrder: c.PinOrder,
CreatedAt: c.CreatedAt,
UpdatedAt: c.UpdatedAt,
MCPServerIDs: mcpServerIDs,
Labels: labels,
}
if c.LastError.Valid {
chat.LastError = &c.LastError.String
}
if c.ParentChatID.Valid {
parentChatID := c.ParentChatID.UUID
chat.ParentChatID = &parentChatID
}
switch {
case c.RootChatID.Valid:
rootChatID := c.RootChatID.UUID
chat.RootChatID = &rootChatID
case c.ParentChatID.Valid:
rootChatID := c.ParentChatID.UUID
chat.RootChatID = &rootChatID
default:
rootChatID := c.ID
chat.RootChatID = &rootChatID
}
if c.WorkspaceID.Valid {
chat.WorkspaceID = &c.WorkspaceID.UUID
}
if c.BuildID.Valid {
chat.BuildID = &c.BuildID.UUID
}
if c.AgentID.Valid {
chat.AgentID = &c.AgentID.UUID
}
if diffStatus != nil {
convertedDiffStatus := ChatDiffStatus(c.ID, diffStatus)
chat.DiffStatus = &convertedDiffStatus
}
if c.LastInjectedContext.Valid {
var parts []codersdk.ChatMessagePart
// Internal fields are stripped at write time in
// chatd.updateLastInjectedContext, so no
// StripInternal call is needed here. Unmarshal
// errors are suppressed — the column is written by
// us with a known schema.
if err := json.Unmarshal(c.LastInjectedContext.RawMessage, &parts); err == nil {
chat.LastInjectedContext = parts
}
}
return chat
}
// ChatRows converts a slice of database.GetChatsRow (which embeds
// Chat plus HasUnread) to codersdk.Chat, looking up diff statuses
// from the provided map. When diffStatusesByChatID is non-nil,
// chats without an entry receive an empty DiffStatus.
func ChatRows(rows []database.GetChatsRow, diffStatusesByChatID map[uuid.UUID]database.ChatDiffStatus) []codersdk.Chat {
result := make([]codersdk.Chat, len(rows))
for i, row := range rows {
diffStatus, ok := diffStatusesByChatID[row.Chat.ID]
if ok {
result[i] = Chat(row.Chat, &diffStatus)
} else {
result[i] = Chat(row.Chat, nil)
if diffStatusesByChatID != nil {
emptyDiffStatus := ChatDiffStatus(row.Chat.ID, nil)
result[i].DiffStatus = &emptyDiffStatus
}
}
result[i].HasUnread = row.HasUnread
}
return result
}
// ChatDiffStatus converts a database.ChatDiffStatus to a
// codersdk.ChatDiffStatus. When status is nil an empty value
// containing only the chatID is returned.
@@ -0,0 +1,308 @@
package db2sdk
import (
"encoding/json"
"testing"
"github.com/google/uuid"
"github.com/sqlc-dev/pqtype"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/database"
)
func TestAggregateTokenMetadata(t *testing.T) {
t.Parallel()
t.Run("empty_input", func(t *testing.T) {
t.Parallel()
result := aggregateTokenMetadata(nil)
require.Empty(t, result)
})
t.Run("sums_across_rows", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"cache_read_tokens":100,"reasoning_tokens":50}`),
Valid: true,
},
},
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"cache_read_tokens":200,"reasoning_tokens":75}`),
Valid: true,
},
},
}
result := aggregateTokenMetadata(tokens)
require.Equal(t, int64(300), result["cache_read_tokens"])
require.Equal(t, int64(125), result["reasoning_tokens"])
require.Len(t, result, 2)
})
t.Run("skips_null_and_invalid_metadata", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{Valid: false},
},
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: nil,
Valid: true,
},
},
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"tokens":42}`),
Valid: true,
},
},
}
result := aggregateTokenMetadata(tokens)
require.Equal(t, int64(42), result["tokens"])
require.Len(t, result, 1)
})
t.Run("skips_non_integer_values", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
// Float values fail json.Number.Int64(), so they
// are silently dropped.
RawMessage: json.RawMessage(`{"good":10,"fractional":1.5}`),
Valid: true,
},
},
}
result := aggregateTokenMetadata(tokens)
require.Equal(t, int64(10), result["good"])
_, hasFractional := result["fractional"]
require.False(t, hasFractional)
})
t.Run("skips_malformed_json", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`not json`),
Valid: true,
},
},
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"tokens":5}`),
Valid: true,
},
},
}
result := aggregateTokenMetadata(tokens)
// The malformed row is skipped, the valid one is counted.
require.Equal(t, int64(5), result["tokens"])
require.Len(t, result, 1)
})
t.Run("flattens_nested_objects", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{
"cache_read_tokens": 100,
"cache": {"creation_tokens": 40, "read_tokens": 60},
"reasoning_tokens": 50,
"tags": ["a", "b"]
}`),
Valid: true,
},
},
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{
"cache_read_tokens": 200,
"cache": {"creation_tokens": 10}
}`),
Valid: true,
},
},
}
result := aggregateTokenMetadata(tokens)
require.Equal(t, int64(300), result["cache_read_tokens"])
require.Equal(t, int64(50), result["reasoning_tokens"])
require.Equal(t, int64(50), result["cache.creation_tokens"])
require.Equal(t, int64(60), result["cache.read_tokens"])
// Arrays are skipped.
_, hasTags := result["tags"]
require.False(t, hasTags)
require.Len(t, result, 4)
})
t.Run("flattens_deeply_nested_objects", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{
"provider": {
"anthropic": {"cache_creation_tokens": 100, "cache_read_tokens": 200},
"openai": {"reasoning_tokens": 50}
},
"total": 500
}`),
Valid: true,
},
},
}
result := aggregateTokenMetadata(tokens)
require.Equal(t, int64(100), result["provider.anthropic.cache_creation_tokens"])
require.Equal(t, int64(200), result["provider.anthropic.cache_read_tokens"])
require.Equal(t, int64(50), result["provider.openai.reasoning_tokens"])
require.Equal(t, int64(500), result["total"])
require.Len(t, result, 4)
})
// Real-world provider metadata shapes from
// https://github.com/coder/aibridge/issues/150.
t.Run("aggregates_real_provider_metadata", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
// Anthropic-style: cache fields are top-level.
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{
"cache_creation_input_tokens": 0,
"cache_read_input_tokens": 23490
}`),
Valid: true,
},
},
{
// OpenAI-style: cache fields are nested inside
// input_tokens_details.
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{
"input_tokens_details": {"cached_tokens": 11904}
}`),
Valid: true,
},
},
{
// Second Anthropic row to verify summing.
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{
"cache_creation_input_tokens": 500,
"cache_read_input_tokens": 10000
}`),
Valid: true,
},
},
}
result := aggregateTokenMetadata(tokens)
// Anthropic fields are summed across two rows.
require.Equal(t, int64(500), result["cache_creation_input_tokens"])
require.Equal(t, int64(33490), result["cache_read_input_tokens"])
// OpenAI nested field is flattened with dot notation.
require.Equal(t, int64(11904), result["input_tokens_details.cached_tokens"])
require.Len(t, result, 3)
})
t.Run("skips_string_boolean_null_values", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"tokens":10,"name":"test","enabled":true,"nothing":null}`),
Valid: true,
},
},
}
result := aggregateTokenMetadata(tokens)
require.Equal(t, int64(10), result["tokens"])
require.Len(t, result, 1)
})
}
func TestAggregateTokenUsage(t *testing.T) {
t.Parallel()
t.Run("empty_input", func(t *testing.T) {
t.Parallel()
result := aggregateTokenUsage(nil)
require.Equal(t, int64(0), result.InputTokens)
require.Equal(t, int64(0), result.OutputTokens)
require.Empty(t, result.Metadata)
})
t.Run("sums_tokens_and_metadata", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
InputTokens: 100,
OutputTokens: 50,
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"reasoning_tokens":20}`),
Valid: true,
},
},
{
ID: uuid.New(),
InputTokens: 200,
OutputTokens: 75,
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"reasoning_tokens":30}`),
Valid: true,
},
},
}
result := aggregateTokenUsage(tokens)
require.Equal(t, int64(300), result.InputTokens)
require.Equal(t, int64(125), result.OutputTokens)
require.Equal(t, int64(50), result.Metadata["reasoning_tokens"])
})
t.Run("handles_rows_without_metadata", func(t *testing.T) {
t.Parallel()
tokens := []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
InputTokens: 500,
OutputTokens: 200,
Metadata: pqtype.NullRawMessage{Valid: false},
},
}
result := aggregateTokenUsage(tokens)
require.Equal(t, int64(500), result.InputTokens)
require.Equal(t, int64(200), result.OutputTokens)
require.Empty(t, result.Metadata)
})
}
+64
@@ -5,6 +5,7 @@ import (
"database/sql"
"encoding/json"
"fmt"
"reflect"
"testing"
"time"
@@ -513,6 +514,69 @@ func TestChatQueuedMessage_ParsesUserContentParts(t *testing.T) {
require.Equal(t, "queued text", queued.Content[0].Text)
}
func TestChat_AllFieldsPopulated(t *testing.T) {
t.Parallel()
// Every field of database.Chat is set to a non-zero value so
// that the reflection check below catches any field that
// db2sdk.Chat forgets to populate. When someone adds a new
// field to codersdk.Chat, this test will fail until the
// converter is updated.
now := dbtime.Now()
input := database.Chat{
ID: uuid.New(),
OwnerID: uuid.New(),
WorkspaceID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
BuildID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
AgentID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ParentChatID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
RootChatID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
LastModelConfigID: uuid.New(),
Title: "all-fields-test",
Status: database.ChatStatusRunning,
LastError: sql.NullString{String: "boom", Valid: true},
CreatedAt: now,
UpdatedAt: now,
Archived: true,
PinOrder: 1,
MCPServerIDs: []uuid.UUID{uuid.New()},
Labels: database.StringMap{"env": "prod"},
LastInjectedContext: pqtype.NullRawMessage{
// Use a context-file part to verify internal
// fields are not present (they are stripped at
// write time by chatd, not at read time).
RawMessage: json.RawMessage(`[{"type":"context-file","context_file_path":"/AGENTS.md"}]`),
Valid: true,
},
}
// Only ChatID is needed here. This test checks that
// Chat.DiffStatus is non-nil, not that every DiffStatus
// field is populated — that would be a separate test for
// the ChatDiffStatus converter.
diffStatus := &database.ChatDiffStatus{
ChatID: input.ID,
}
got := db2sdk.Chat(input, diffStatus)
v := reflect.ValueOf(got)
typ := v.Type()
// HasUnread is populated by ChatRows (which joins the
// read-cursor query), not by Chat, so it is expected
// to remain zero here.
skip := map[string]bool{"HasUnread": true}
for i := range typ.NumField() {
field := typ.Field(i)
if skip[field.Name] {
continue
}
require.False(t, v.Field(i).IsZero(),
"codersdk.Chat field %q is zero-valued — db2sdk.Chat may not be populating it",
field.Name,
)
}
}
func TestChatQueuedMessage_MalformedContent(t *testing.T) {
t.Parallel()
+405 -19
@@ -1570,13 +1570,13 @@ func (q *querier) AllUserIDs(ctx context.Context, includeSystem bool) ([]uuid.UU
return q.db.AllUserIDs(ctx, includeSystem)
}
-func (q *querier) ArchiveChatByID(ctx context.Context, id uuid.UUID) error {
+func (q *querier) ArchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, id)
if err != nil {
-return err
+return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
-return err
+return nil, err
}
return q.db.ArchiveChatByID(ctx, id)
}
@@ -1694,6 +1694,13 @@ func (q *querier) CleanTailnetTunnels(ctx context.Context) error {
return q.db.CleanTailnetTunnels(ctx)
}
func (q *querier) CleanupDeletedMCPServerIDsFromChatAutomations(ctx context.Context) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceChatAutomation); err != nil {
return err
}
return q.db.CleanupDeletedMCPServerIDsFromChatAutomations(ctx)
}
func (q *querier) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceChat); err != nil {
return err
@@ -1731,6 +1738,28 @@ func (q *querier) CountAuditLogs(ctx context.Context, arg database.CountAuditLog
return q.db.CountAuthorizedAuditLogs(ctx, arg, prep)
}
func (q *querier) CountChatAutomationChatCreatesInWindow(ctx context.Context, arg database.CountChatAutomationChatCreatesInWindowParams) (int64, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return 0, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return 0, err
}
return q.db.CountChatAutomationChatCreatesInWindow(ctx, arg)
}
func (q *querier) CountChatAutomationMessagesInWindow(ctx context.Context, arg database.CountChatAutomationMessagesInWindowParams) (int64, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return 0, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return 0, err
}
return q.db.CountChatAutomationMessagesInWindow(ctx, arg)
}
func (q *querier) CountConnectionLogs(ctx context.Context, arg database.CountConnectionLogsParams) (int64, error) {
// Just like the actual query, shortcut if the user is an owner.
err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog)
@@ -1842,6 +1871,28 @@ func (q *querier) DeleteApplicationConnectAPIKeysByUserID(ctx context.Context, u
return q.db.DeleteApplicationConnectAPIKeysByUserID(ctx, userID)
}
func (q *querier) DeleteChatAutomationByID(ctx context.Context, id uuid.UUID) error {
return deleteQ(q.log, q.auth, q.db.GetChatAutomationByID, q.db.DeleteChatAutomationByID)(ctx, id)
}
// Triggers are sub-resources of an automation. Deleting a trigger
// is a configuration change, so we authorize ActionUpdate on the
// parent rather than ActionDelete.
func (q *querier) DeleteChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) error {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, id)
if err != nil {
return err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return err
}
return q.db.DeleteChatAutomationTriggerByID(ctx, id)
}
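The trigger wrappers above all follow the same chain: fetch the trigger, fetch its parent automation, authorize `ActionUpdate` on the parent, and only then delegate to the store. A minimal sketch of that chain, with hypothetical stand-in types (`Trigger`, `Automation`, `authorizeUpdate`) in place of the real store and RBAC machinery:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the database rows.
type Trigger struct {
	ID           string
	AutomationID string
}

type Automation struct {
	ID      string
	OwnerID string
}

type store struct {
	triggers    map[string]Trigger
	automations map[string]Automation
}

var errNotAuthorized = errors.New("not authorized")

// authorizeUpdate stands in for
// q.authorizeContext(ctx, policy.ActionUpdate, automation).
func authorizeUpdate(actorID string, a Automation) error {
	if actorID != a.OwnerID {
		return errNotAuthorized
	}
	return nil
}

// deleteTrigger mirrors the wrapper pattern above: resolve the
// sub-resource, resolve its parent, authorize the parent, then act.
func (s *store) deleteTrigger(actorID, triggerID string) error {
	trigger, ok := s.triggers[triggerID]
	if !ok {
		return errors.New("trigger not found")
	}
	automation, ok := s.automations[trigger.AutomationID]
	if !ok {
		return errors.New("automation not found")
	}
	if err := authorizeUpdate(actorID, automation); err != nil {
		return err
	}
	delete(s.triggers, triggerID)
	return nil
}

func main() {
	s := &store{
		triggers:    map[string]Trigger{"t1": {ID: "t1", AutomationID: "a1"}},
		automations: map[string]Automation{"a1": {ID: "a1", OwnerID: "alice"}},
	}
	fmt.Println(s.deleteTrigger("bob", "t1"))   // prints "not authorized"
	fmt.Println(s.deleteTrigger("alice", "t1")) // prints "<nil>"
}
```

Because the sub-resource carries no RBAC object of its own, every operation on it costs two fetches before the policy check; the real wrappers accept that in exchange for a single source of truth on the parent automation.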
func (q *querier) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
@@ -2386,6 +2437,16 @@ func (q *querier) GetActiveAISeatCount(ctx context.Context) (int64, error) {
return q.db.GetActiveAISeatCount(ctx)
}
// GetActiveChatAutomationCronTriggers is a system-level query used by
// the cron scheduler. It requires read permission on all automations
// (admin gate) because it fetches triggers across all orgs and owners.
func (q *querier) GetActiveChatAutomationCronTriggers(ctx context.Context) ([]database.GetActiveChatAutomationCronTriggersRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChatAutomation.All()); err != nil {
return nil, err
}
return q.db.GetActiveChatAutomationCronTriggers(ctx)
}
func (q *querier) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTemplate.All()); err != nil {
return nil, err
@@ -2477,6 +2538,64 @@ func (q *querier) GetAuthorizationUserRoles(ctx context.Context, userID uuid.UUI
return q.db.GetAuthorizationUserRoles(ctx, userID)
}
func (q *querier) GetChatAutomationByID(ctx context.Context, id uuid.UUID) (database.ChatAutomation, error) {
return fetch(q.log, q.auth, q.db.GetChatAutomationByID)(ctx, id)
}
func (q *querier) GetChatAutomationEventsByAutomationID(ctx context.Context, arg database.GetChatAutomationEventsByAutomationIDParams) ([]database.ChatAutomationEvent, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return nil, err
}
return q.db.GetChatAutomationEventsByAutomationID(ctx, arg)
}
func (q *querier) GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (database.ChatAutomationTrigger, error) {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, id)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return database.ChatAutomationTrigger{}, err
}
return trigger, nil
}
func (q *querier) GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]database.ChatAutomationTrigger, error) {
automation, err := q.db.GetChatAutomationByID(ctx, automationID)
if err != nil {
return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, automation); err != nil {
return nil, err
}
return q.db.GetChatAutomationTriggersByAutomationID(ctx, automationID)
}
func (q *querier) GetChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams) ([]database.ChatAutomation, error) {
// Shortcut if the caller has broad read access (e.g. site admins
// / owners). The SQL filter adds noticeable query overhead, so
// skip it when we can.
err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChatAutomation.All())
if err == nil {
return q.db.GetChatAutomations(ctx, arg)
}
// Fall back to SQL-level row filtering for normal users.
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceChatAutomation.Type)
if err != nil {
return nil, xerrors.Errorf("prepare chat automation SQL filter: %w", err)
}
return q.db.GetAuthorizedChatAutomations(ctx, arg, prep)
}
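`GetChatAutomations` above tries the cheap broad-permission check first and only compiles a per-row SQL filter when that fails. A sketch of the same two-tier pattern, with hypothetical stand-ins (`canReadAll`, an in-memory owner filter) for the RBAC check and the `prepareSQLFilter` machinery:

```go
package main

import "fmt"

// Hypothetical stand-in for a chat automation row.
type automation struct {
	Name    string
	OwnerID string
}

// canReadAll models the broad "read all automations" RBAC check
// that site admins pass.
func canReadAll(actorRole string) bool { return actorRole == "owner" }

// listAutomations mirrors the pattern above: if the actor passes the
// broad check, return everything unfiltered; otherwise fall back to
// per-row filtering (which the real code pushes down into SQL).
func listAutomations(actorID, actorRole string, all []automation) []automation {
	if canReadAll(actorRole) {
		return all
	}
	var visible []automation
	for _, a := range all {
		if a.OwnerID == actorID {
			visible = append(visible, a)
		}
	}
	return visible
}

func main() {
	all := []automation{
		{Name: "nightly", OwnerID: "alice"},
		{Name: "weekly", OwnerID: "bob"},
	}
	fmt.Println(len(listAutomations("alice", "member", all))) // prints "1"
	fmt.Println(len(listAutomations("carol", "owner", all)))  // prints "2"
}
```

The fast path matters because admins list across all orgs: skipping the row filter avoids paying the authorization predicate on every row of a potentially large result set.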
func (q *querier) GetChatByID(ctx context.Context, id uuid.UUID) (database.Chat, error) {
return fetch(q.log, q.auth, q.db.GetChatByID)(ctx, id)
}
@@ -2578,6 +2697,18 @@ func (q *querier) GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]dat
return files, nil
}
func (q *querier) GetChatIncludeDefaultSystemPrompt(ctx context.Context) (bool, error) {
// The include-default-system-prompt flag is a deployment-wide setting read
// during chat creation by every authenticated user, so no RBAC policy
// check is needed. We still verify that a valid actor exists in the
// context to ensure this is never callable by an unauthenticated or
// system-internal path without an explicit actor.
if _, ok := ActorFromContext(ctx); !ok {
return false, ErrNoActor
}
return q.db.GetChatIncludeDefaultSystemPrompt(ctx)
}
func (q *querier) GetChatMessageByID(ctx context.Context, id int64) (database.ChatMessage, error) {
// ChatMessages are authorized through their parent Chat.
// We need to fetch the message first to get its chat_id.
@@ -2602,6 +2733,14 @@ func (q *querier) GetChatMessagesByChatID(ctx context.Context, arg database.GetC
return q.db.GetChatMessagesByChatID(ctx, arg)
}
func (q *querier) GetChatMessagesByChatIDAscPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDAscPaginatedParams) ([]database.ChatMessage, error) {
_, err := q.GetChatByID(ctx, arg.ChatID)
if err != nil {
return nil, err
}
return q.db.GetChatMessagesByChatIDAscPaginated(ctx, arg)
}
func (q *querier) GetChatMessagesByChatIDDescPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDDescPaginatedParams) ([]database.ChatMessage, error) {
_, err := q.GetChatByID(ctx, arg.ChatID)
if err != nil {
@@ -2674,6 +2813,18 @@ func (q *querier) GetChatSystemPrompt(ctx context.Context) (string, error) {
return q.db.GetChatSystemPrompt(ctx)
}
func (q *querier) GetChatSystemPromptConfig(ctx context.Context) (database.GetChatSystemPromptConfigRow, error) {
// The system prompt configuration is a deployment-wide setting read during
// chat creation by every authenticated user, so no RBAC policy check is
// needed. We still verify that a valid actor exists in the context to
// ensure this is never callable by an unauthenticated or system-internal
// path without an explicit actor.
if _, ok := ActorFromContext(ctx); !ok {
return database.GetChatSystemPromptConfigRow{}, ErrNoActor
}
return q.db.GetChatSystemPromptConfig(ctx)
}
// GetChatTemplateAllowlist requires deployment-config read permission,
// unlike the peer getters (GetChatDesktopEnabled, etc.) which only
// check actor presence. The allowlist is admin-configuration that
@@ -2716,7 +2867,7 @@ func (q *querier) GetChatWorkspaceTTL(ctx context.Context) (string, error) {
return q.db.GetChatWorkspaceTTL(ctx)
}
-func (q *querier) GetChats(ctx context.Context, arg database.GetChatsParams) ([]database.Chat, error) {
+func (q *querier) GetChats(ctx context.Context, arg database.GetChatsParams) ([]database.GetChatsRow, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceChat.Type)
if err != nil {
return nil, xerrors.Errorf("(dev error) prepare sql filter: %w", err)
@@ -2724,6 +2875,10 @@ func (q *querier) GetChats(ctx context.Context, arg database.GetChatsParams) ([]
return q.db.GetAuthorizedChats(ctx, arg, prep)
}
func (q *querier) GetChatsByWorkspaceIDs(ctx context.Context, ids []uuid.UUID) ([]database.Chat, error) {
return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.GetChatsByWorkspaceIDs)(ctx, ids)
}
func (q *querier) GetConnectionLogsOffset(ctx context.Context, arg database.GetConnectionLogsOffsetParams) ([]database.GetConnectionLogsOffsetRow, error) {
// Just like with the audit logs query, shortcut if the user is an owner.
err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog)
@@ -2775,7 +2930,15 @@ func (q *querier) GetDERPMeshKey(ctx context.Context) (string, error) {
}
func (q *querier) GetDefaultChatModelConfig(ctx context.Context) (database.ChatModelConfig, error) {
-if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
+// Any user who can read chat resources can read the default
+// model config, since model resolution is required to create
+// a chat. This avoids gating on ResourceDeploymentConfig
+// which regular members lack.
+act, ok := ActorFromContext(ctx)
+if !ok {
+return database.ChatModelConfig{}, ErrNoActor
+}
+if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChat.WithOwner(act.ID)); err != nil {
return database.ChatModelConfig{}, err
}
return q.db.GetDefaultChatModelConfig(ctx)
@@ -3921,6 +4084,13 @@ func (q *querier) GetUnexpiredLicenses(ctx context.Context) ([]database.License,
return q.db.GetUnexpiredLicenses(ctx)
}
func (q *querier) GetUserAISeatStates(ctx context.Context, userIDs []uuid.UUID) ([]uuid.UUID, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceUser); err != nil {
return nil, err
}
return q.db.GetUserAISeatStates(ctx, userIDs)
}
func (q *querier) GetUserActivityInsights(ctx context.Context, arg database.GetUserActivityInsightsParams) ([]database.GetUserActivityInsightsRow, error) {
// Used by insights endpoints. Need to check both for auditors and for regular users with template acl perms.
if err := q.authorizeContext(ctx, policy.ActionViewInsights, rbac.ResourceTemplate); err != nil {
@@ -4721,6 +4891,36 @@ func (q *querier) InsertChat(ctx context.Context, arg database.InsertChatParams)
return insert(q.log, q.auth, rbac.ResourceChat.WithOwner(arg.OwnerID.String()), q.db.InsertChat)(ctx, arg)
}
func (q *querier) InsertChatAutomation(ctx context.Context, arg database.InsertChatAutomationParams) (database.ChatAutomation, error) {
return insert(q.log, q.auth, rbac.ResourceChatAutomation.WithOwner(arg.OwnerID.String()).InOrg(arg.OrganizationID), q.db.InsertChatAutomation)(ctx, arg)
}
// Events are append-only records produced by the system when
// triggers fire. We authorize ActionUpdate on the parent
// automation because inserting an event is a side-effect of
// processing the automation, not an independent create action.
func (q *querier) InsertChatAutomationEvent(ctx context.Context, arg database.InsertChatAutomationEventParams) (database.ChatAutomationEvent, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return database.ChatAutomationEvent{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return database.ChatAutomationEvent{}, err
}
return q.db.InsertChatAutomationEvent(ctx, arg)
}
func (q *querier) InsertChatAutomationTrigger(ctx context.Context, arg database.InsertChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
automation, err := q.db.GetChatAutomationByID(ctx, arg.AutomationID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return database.ChatAutomationTrigger{}, err
}
return q.db.InsertChatAutomationTrigger(ctx, arg)
}
func (q *querier) InsertChatFile(ctx context.Context, arg database.InsertChatFileParams) (database.InsertChatFileRow, error) {
// Authorize create on chat resource scoped to the owner and org.
return insert(q.log, q.auth, rbac.ResourceChat.WithOwner(arg.OwnerID.String()).InOrg(arg.OrganizationID), q.db.InsertChatFile)(ctx, arg)
@@ -5313,6 +5513,14 @@ func (q *querier) InsertWorkspaceResourceMetadata(ctx context.Context, arg datab
return q.db.InsertWorkspaceResourceMetadata(ctx, arg)
}
func (q *querier) ListAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams) ([]string, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceAibridgeInterception.Type)
if err != nil {
return nil, xerrors.Errorf("(dev error) prepare sql filter: %w", err)
}
return q.db.ListAuthorizedAIBridgeClients(ctx, arg, prep)
}
func (q *querier) ListAIBridgeInterceptions(ctx context.Context, arg database.ListAIBridgeInterceptionsParams) ([]database.ListAIBridgeInterceptionsRow, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceAibridgeInterception.Type)
if err != nil {
@@ -5328,6 +5536,13 @@ func (q *querier) ListAIBridgeInterceptionsTelemetrySummaries(ctx context.Contex
return q.db.ListAIBridgeInterceptionsTelemetrySummaries(ctx, arg)
}
func (q *querier) ListAIBridgeModelThoughtsByInterceptionIDs(ctx context.Context, interceptionIDs []uuid.UUID) ([]database.AIBridgeModelThought, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceAibridgeInterception); err != nil {
return nil, err
}
return q.db.ListAIBridgeModelThoughtsByInterceptionIDs(ctx, interceptionIDs)
}
func (q *querier) ListAIBridgeModels(ctx context.Context, arg database.ListAIBridgeModelsParams) ([]string, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceAibridgeInterception.Type)
if err != nil {
@@ -5336,6 +5551,14 @@ func (q *querier) ListAIBridgeModels(ctx context.Context, arg database.ListAIBri
return q.db.ListAuthorizedAIBridgeModels(ctx, arg, prep)
}
func (q *querier) ListAIBridgeSessionThreads(ctx context.Context, arg database.ListAIBridgeSessionThreadsParams) ([]database.ListAIBridgeSessionThreadsRow, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceAibridgeInterception.Type)
if err != nil {
return nil, xerrors.Errorf("(dev error) prepare sql filter: %w", err)
}
return q.db.ListAuthorizedAIBridgeSessionThreads(ctx, arg, prep)
}
func (q *querier) ListAIBridgeSessions(ctx context.Context, arg database.ListAIBridgeSessionsParams) ([]database.ListAIBridgeSessionsRow, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceAibridgeInterception.Type)
if err != nil {
@@ -5473,6 +5696,17 @@ func (q *querier) PaginatedOrganizationMembers(ctx context.Context, arg database
return q.db.PaginatedOrganizationMembers(ctx, arg)
}
func (q *querier) PinChatByID(ctx context.Context, id uuid.UUID) error {
chat, err := q.db.GetChatByID(ctx, id)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return err
}
return q.db.PinChatByID(ctx, id)
}
func (q *querier) PopNextQueuedMessage(ctx context.Context, chatID uuid.UUID) (database.ChatQueuedMessage, error) {
chat, err := q.db.GetChatByID(ctx, chatID)
if err != nil {
@@ -5484,6 +5718,13 @@ func (q *querier) PopNextQueuedMessage(ctx context.Context, chatID uuid.UUID) (d
return q.db.PopNextQueuedMessage(ctx, chatID)
}
func (q *querier) PurgeOldChatAutomationEvents(ctx context.Context, arg database.PurgeOldChatAutomationEventsParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceChatAutomation.All()); err != nil {
return 0, err
}
return q.db.PurgeOldChatAutomationEvents(ctx, arg)
}
func (q *querier) ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx context.Context, templateID uuid.UUID) error {
template, err := q.db.GetTemplateByID(ctx, templateID)
if err != nil {
@@ -5564,13 +5805,13 @@ func (q *querier) TryAcquireLock(ctx context.Context, id int64) (bool, error) {
return q.db.TryAcquireLock(ctx, id)
}
-func (q *querier) UnarchiveChatByID(ctx context.Context, id uuid.UUID) error {
+func (q *querier) UnarchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, id)
if err != nil {
-return err
+return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
-return err
+return nil, err
}
return q.db.UnarchiveChatByID(ctx, id)
}
@@ -5598,6 +5839,17 @@ func (q *querier) UnfavoriteWorkspace(ctx context.Context, id uuid.UUID) error {
return update(q.log, q.auth, fetch, q.db.UnfavoriteWorkspace)(ctx, id)
}
func (q *querier) UnpinChatByID(ctx context.Context, id uuid.UUID) error {
chat, err := q.db.GetChatByID(ctx, id)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return err
}
return q.db.UnpinChatByID(ctx, id)
}
func (q *querier) UnsetDefaultChatModelConfigs(ctx context.Context) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceSystem); err != nil {
return err
@@ -5619,6 +5871,70 @@ func (q *querier) UpdateAPIKeyByID(ctx context.Context, arg database.UpdateAPIKe
return update(q.log, q.auth, fetch, q.db.UpdateAPIKeyByID)(ctx, arg)
}
func (q *querier) UpdateChatAutomation(ctx context.Context, arg database.UpdateChatAutomationParams) (database.ChatAutomation, error) {
fetchFunc := func(ctx context.Context, arg database.UpdateChatAutomationParams) (database.ChatAutomation, error) {
return q.db.GetChatAutomationByID(ctx, arg.ID)
}
return updateWithReturn(q.log, q.auth, fetchFunc, q.db.UpdateChatAutomation)(ctx, arg)
}
func (q *querier) UpdateChatAutomationTrigger(ctx context.Context, arg database.UpdateChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, arg.ID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return database.ChatAutomationTrigger{}, err
}
return q.db.UpdateChatAutomationTrigger(ctx, arg)
}
func (q *querier) UpdateChatAutomationTriggerLastTriggeredAt(ctx context.Context, arg database.UpdateChatAutomationTriggerLastTriggeredAtParams) error {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, arg.ID)
if err != nil {
return err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return err
}
return q.db.UpdateChatAutomationTriggerLastTriggeredAt(ctx, arg)
}
func (q *querier) UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, arg database.UpdateChatAutomationTriggerWebhookSecretParams) (database.ChatAutomationTrigger, error) {
trigger, err := q.db.GetChatAutomationTriggerByID(ctx, arg.ID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
automation, err := q.db.GetChatAutomationByID(ctx, trigger.AutomationID)
if err != nil {
return database.ChatAutomationTrigger{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, automation); err != nil {
return database.ChatAutomationTrigger{}, err
}
return q.db.UpdateChatAutomationTriggerWebhookSecret(ctx, arg)
}
func (q *querier) UpdateChatBuildAgentBinding(ctx context.Context, arg database.UpdateChatBuildAgentBindingParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return database.Chat{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return database.Chat{}, err
}
return q.db.UpdateChatBuildAgentBinding(ctx, arg)
}
func (q *querier) UpdateChatByID(ctx context.Context, arg database.UpdateChatByIDParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
@@ -5652,6 +5968,39 @@ func (q *querier) UpdateChatLabelsByID(ctx context.Context, arg database.UpdateC
return q.db.UpdateChatLabelsByID(ctx, arg)
}
func (q *querier) UpdateChatLastInjectedContext(ctx context.Context, arg database.UpdateChatLastInjectedContextParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return database.Chat{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return database.Chat{}, err
}
return q.db.UpdateChatLastInjectedContext(ctx, arg)
}
func (q *querier) UpdateChatLastModelConfigByID(ctx context.Context, arg database.UpdateChatLastModelConfigByIDParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return database.Chat{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return database.Chat{}, err
}
return q.db.UpdateChatLastModelConfigByID(ctx, arg)
}
func (q *querier) UpdateChatLastReadMessageID(ctx context.Context, arg database.UpdateChatLastReadMessageIDParams) error {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return err
}
return q.db.UpdateChatLastReadMessageID(ctx, arg)
}
func (q *querier) UpdateChatMCPServerIDs(ctx context.Context, arg database.UpdateChatMCPServerIDsParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
@@ -5686,6 +6035,17 @@ func (q *querier) UpdateChatModelConfig(ctx context.Context, arg database.Update
return q.db.UpdateChatModelConfig(ctx, arg)
}
func (q *querier) UpdateChatPinOrder(ctx context.Context, arg database.UpdateChatPinOrderParams) error {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return err
}
return q.db.UpdateChatPinOrder(ctx, arg)
}
func (q *querier) UpdateChatProvider(ctx context.Context, arg database.UpdateChatProviderParams) (database.ChatProvider, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return database.ChatProvider{}, err
@@ -5706,7 +6066,18 @@ func (q *querier) UpdateChatStatus(ctx context.Context, arg database.UpdateChatS
return q.db.UpdateChatStatus(ctx, arg)
}
-func (q *querier) UpdateChatWorkspace(ctx context.Context, arg database.UpdateChatWorkspaceParams) (database.Chat, error) {
+func (q *querier) UpdateChatStatusPreserveUpdatedAt(ctx context.Context, arg database.UpdateChatStatusPreserveUpdatedAtParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return database.Chat{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return database.Chat{}, err
}
return q.db.UpdateChatStatusPreserveUpdatedAt(ctx, arg)
}
func (q *querier) UpdateChatWorkspaceBinding(ctx context.Context, arg database.UpdateChatWorkspaceBindingParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return database.Chat{}, err
@@ -5715,15 +6086,7 @@ func (q *querier) UpdateChatWorkspace(ctx context.Context, arg database.UpdateCh
return database.Chat{}, err
}
-// UpdateChatWorkspace is manually implemented for chat tables and may not be
-// present on every wrapped store interface yet.
-chatWorkspaceUpdater, ok := q.db.(interface {
-UpdateChatWorkspace(context.Context, database.UpdateChatWorkspaceParams) (database.Chat, error)
-})
-if !ok {
-return database.Chat{}, xerrors.New("update chat workspace is not implemented by wrapped store")
-}
-return chatWorkspaceUpdater.UpdateChatWorkspace(ctx, arg)
+return q.db.UpdateChatWorkspaceBinding(ctx, arg)
}
func (q *querier) UpdateCryptoKeyDeletesAt(ctx context.Context, arg database.UpdateCryptoKeyDeletesAtParams) (database.CryptoKey, error) {
@@ -6827,6 +7190,13 @@ func (q *querier) UpsertChatDiffStatusReference(ctx context.Context, arg databas
return q.db.UpsertChatDiffStatusReference(ctx, arg)
}
func (q *querier) UpsertChatIncludeDefaultSystemPrompt(ctx context.Context, includeDefaultSystemPrompt bool) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
}
return q.db.UpsertChatIncludeDefaultSystemPrompt(ctx, includeDefaultSystemPrompt)
}
func (q *querier) UpsertChatSystemPrompt(ctx context.Context, value string) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
@@ -7167,6 +7537,14 @@ func (q *querier) ListAuthorizedAIBridgeModels(ctx context.Context, arg database
return q.ListAIBridgeModels(ctx, arg)
}
func (q *querier) ListAuthorizedAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams, _ rbac.PreparedAuthorized) ([]string, error) {
// TODO: Delete this function, all ListAIBridgeClients should be
// authorized. For now just call ListAIBridgeClients on the authz
// querier. This cannot be deleted for now because it's included in
// the database.Store interface, so dbauthz needs to implement it.
return q.ListAIBridgeClients(ctx, arg)
}
func (q *querier) ListAuthorizedAIBridgeSessions(ctx context.Context, arg database.ListAIBridgeSessionsParams, prepared rbac.PreparedAuthorized) ([]database.ListAIBridgeSessionsRow, error) {
return q.db.ListAuthorizedAIBridgeSessions(ctx, arg, prepared)
}
@@ -7175,6 +7553,14 @@ func (q *querier) CountAuthorizedAIBridgeSessions(ctx context.Context, arg datab
return q.db.CountAuthorizedAIBridgeSessions(ctx, arg, prepared)
}
-func (q *querier) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, _ rbac.PreparedAuthorized) ([]database.Chat, error) {
func (q *querier) ListAuthorizedAIBridgeSessionThreads(ctx context.Context, arg database.ListAIBridgeSessionThreadsParams, prepared rbac.PreparedAuthorized) ([]database.ListAIBridgeSessionThreadsRow, error) {
return q.db.ListAuthorizedAIBridgeSessionThreads(ctx, arg, prepared)
}
+func (q *querier) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, _ rbac.PreparedAuthorized) ([]database.GetChatsRow, error) {
return q.GetChats(ctx, arg)
}
func (q *querier) GetAuthorizedChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams, _ rbac.PreparedAuthorized) ([]database.ChatAutomation, error) {
return q.GetChatAutomations(ctx, arg)
}
@@ -392,13 +392,25 @@ func (s *MethodTestSuite) TestChats() {
s.Run("ArchiveChatByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
-dbm.EXPECT().ArchiveChatByID(gomock.Any(), chat.ID).Return(nil).AnyTimes()
-check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns()
+dbm.EXPECT().ArchiveChatByID(gomock.Any(), chat.ID).Return([]database.Chat{chat}, nil).AnyTimes()
+check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns([]database.Chat{chat})
}))
s.Run("UnarchiveChatByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
-dbm.EXPECT().UnarchiveChatByID(gomock.Any(), chat.ID).Return(nil).AnyTimes()
+dbm.EXPECT().UnarchiveChatByID(gomock.Any(), chat.ID).Return([]database.Chat{chat}, nil).AnyTimes()
check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns([]database.Chat{chat})
}))
s.Run("PinChatByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().PinChatByID(gomock.Any(), chat.ID).Return(nil).AnyTimes()
check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns()
}))
s.Run("UnpinChatByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UnpinChatByID(gomock.Any(), chat.ID).Return(nil).AnyTimes()
check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns()
}))
s.Run("SoftDeleteChatMessagesAfterID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
@@ -449,6 +461,13 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().GetChatByIDForUpdate(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
check.Args(chat.ID).Asserts(chat, policy.ActionRead).Returns(chat)
}))
s.Run("GetChatsByWorkspaceIDs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chatA := testutil.Fake(s.T(), faker, database.Chat{})
chatB := testutil.Fake(s.T(), faker, database.Chat{})
arg := []uuid.UUID{chatA.WorkspaceID.UUID, chatB.WorkspaceID.UUID}
dbm.EXPECT().GetChatsByWorkspaceIDs(gomock.Any(), arg).Return([]database.Chat{chatA, chatB}, nil).AnyTimes()
check.Args(arg).Asserts(chatA, policy.ActionRead, chatB, policy.ActionRead).Returns([]database.Chat{chatA, chatB})
}))
s.Run("GetChatCostPerChat", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetChatCostPerChatParams{
OwnerID: uuid.New(),
@@ -573,6 +592,14 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().GetChatMessagesByChatID(gomock.Any(), arg).Return(msgs, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionRead).Returns(msgs)
}))
s.Run("GetChatMessagesByChatIDAscPaginated", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
msgs := []database.ChatMessage{testutil.Fake(s.T(), faker, database.ChatMessage{ChatID: chat.ID})}
arg := database.GetChatMessagesByChatIDAscPaginatedParams{ChatID: chat.ID, AfterID: 0, LimitVal: 50}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().GetChatMessagesByChatIDAscPaginated(gomock.Any(), arg).Return(msgs, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionRead).Returns(msgs)
}))
s.Run("GetChatMessagesByChatIDDescPaginated", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
msgs := []database.ChatMessage{testutil.Fake(s.T(), faker, database.ChatMessage{ChatID: chat.ID})}
@@ -604,7 +631,7 @@ func (s *MethodTestSuite) TestChats() {
s.Run("GetDefaultChatModelConfig", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
config := testutil.Fake(s.T(), faker, database.ChatModelConfig{})
dbm.EXPECT().GetDefaultChatModelConfig(gomock.Any()).Return(config, nil).AnyTimes()
-check.Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(config)
+check.Asserts(rbac.ResourceChat.WithOwner(testActorID.String()), policy.ActionRead).Returns(config)
}))
s.Run("GetChatModelConfigs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
configA := testutil.Fake(s.T(), faker, database.ChatModelConfig{})
@@ -631,13 +658,13 @@ func (s *MethodTestSuite) TestChats() {
}))
s.Run("GetChats", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
params := database.GetChatsParams{}
-dbm.EXPECT().GetAuthorizedChats(gomock.Any(), params, gomock.Any()).Return([]database.Chat{}, nil).AnyTimes()
+dbm.EXPECT().GetAuthorizedChats(gomock.Any(), params, gomock.Any()).Return([]database.GetChatsRow{}, nil).AnyTimes()
// No asserts here because SQLFilter.
check.Args(params).Asserts()
}))
s.Run("GetAuthorizedChats", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
params := database.GetChatsParams{}
-dbm.EXPECT().GetAuthorizedChats(gomock.Any(), params, gomock.Any()).Return([]database.Chat{}, nil).AnyTimes()
+dbm.EXPECT().GetAuthorizedChats(gomock.Any(), params, gomock.Any()).Return([]database.GetChatsRow{}, nil).AnyTimes()
// No asserts here because it re-routes through GetChats which uses SQLFilter.
check.Args(params, emptyPreparedAuthorized{}).Asserts()
}))
@@ -648,6 +675,17 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().GetChatQueuedMessages(gomock.Any(), chat.ID).Return(qms, nil).AnyTimes()
check.Args(chat.ID).Asserts(chat, policy.ActionRead).Returns(qms)
}))
s.Run("GetChatIncludeDefaultSystemPrompt", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().GetChatIncludeDefaultSystemPrompt(gomock.Any()).Return(true, nil).AnyTimes()
check.Args().Asserts()
}))
s.Run("GetChatSystemPromptConfig", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().GetChatSystemPromptConfig(gomock.Any()).Return(database.GetChatSystemPromptConfigRow{
ChatSystemPrompt: "prompt",
IncludeDefaultSystemPrompt: true,
}, nil).AnyTimes()
check.Args().Asserts()
}))
s.Run("GetChatSystemPrompt", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().GetChatSystemPrompt(gomock.Any()).Return("prompt", nil).AnyTimes()
check.Args().Asserts()
@@ -759,6 +797,26 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UpdateChatLabelsByID(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateChatLastModelConfigByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatLastModelConfigByIDParams{
ID: chat.ID,
LastModelConfigID: uuid.New(),
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatLastModelConfigByID(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateChatStatusPreserveUpdatedAt", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatStatusPreserveUpdatedAtParams{
ID: chat.ID,
Status: database.ChatStatusRunning,
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatStatusPreserveUpdatedAt(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateChatHeartbeat", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatHeartbeatParams{
@@ -809,6 +867,16 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UpdateChatProvider(gomock.Any(), arg).Return(provider, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate).Returns(provider)
}))
s.Run("UpdateChatPinOrder", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatPinOrderParams{
ID: chat.ID,
PinOrder: 2,
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatPinOrder(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns()
}))
s.Run("UpdateChatStatus", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatStatusParams{
@@ -819,15 +887,29 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UpdateChatStatus(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
-s.Run("UpdateChatWorkspace", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
+s.Run("UpdateChatBuildAgentBinding", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
-arg := database.UpdateChatWorkspaceParams{
-ID: chat.ID,
-WorkspaceID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
+arg := database.UpdateChatBuildAgentBindingParams{
+ID: chat.ID,
+BuildID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
+AgentID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
}
updatedChat := testutil.Fake(s.T(), faker, database.Chat{ID: chat.ID})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
-dbm.EXPECT().UpdateChatWorkspace(gomock.Any(), arg).Return(updatedChat, nil).AnyTimes()
+dbm.EXPECT().UpdateChatBuildAgentBinding(gomock.Any(), arg).Return(updatedChat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(updatedChat)
}))
s.Run("UpdateChatWorkspaceBinding", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatWorkspaceBindingParams{
ID: chat.ID,
WorkspaceID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
BuildID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
AgentID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
}
updatedChat := testutil.Fake(s.T(), faker, database.Chat{ID: chat.ID})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatWorkspaceBinding(gomock.Any(), arg).Return(updatedChat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(updatedChat)
}))
s.Run("UnsetDefaultChatModelConfigs", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
@@ -879,6 +961,10 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().BackoffChatDiffStatus(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChat, policy.ActionUpdate).Returns()
}))
s.Run("UpsertChatIncludeDefaultSystemPrompt", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().UpsertChatIncludeDefaultSystemPrompt(gomock.Any(), false).Return(nil).AnyTimes()
check.Args(false).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate)
}))
s.Run("UpsertChatSystemPrompt", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().UpsertChatSystemPrompt(gomock.Any(), "").Return(nil).AnyTimes()
check.Args("").Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate)
@@ -1035,6 +1121,10 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().CleanupDeletedMCPServerIDsFromChats(gomock.Any()).Return(nil).AnyTimes()
check.Args().Asserts(rbac.ResourceChat, policy.ActionUpdate)
}))
s.Run("CleanupDeletedMCPServerIDsFromChatAutomations", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().CleanupDeletedMCPServerIDsFromChatAutomations(gomock.Any()).Return(nil).AnyTimes()
check.Args().Asserts(rbac.ResourceChatAutomation, policy.ActionUpdate)
}))
s.Run("DeleteMCPServerConfigByID", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
id := uuid.New()
dbm.EXPECT().DeleteMCPServerConfigByID(gomock.Any(), id).Return(nil).AnyTimes()
@@ -1118,6 +1208,29 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UpdateChatMCPServerIDs(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateChatLastInjectedContext", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatLastInjectedContextParams{
ID: chat.ID,
LastInjectedContext: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`[{"type":"text","text":"test"}]`),
Valid: true,
},
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatLastInjectedContext(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateChatLastReadMessageID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatLastReadMessageIDParams{
ID: chat.ID,
LastReadMessageID: 42,
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatLastReadMessageID(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns()
}))
s.Run("UpdateMCPServerConfig", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
config := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
arg := database.UpdateMCPServerConfigParams{
@@ -1141,6 +1254,226 @@ func (s *MethodTestSuite) TestChats() {
}))
}
func (s *MethodTestSuite) TestChatAutomations() {
s.Run("CountChatAutomationChatCreatesInWindow", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.CountChatAutomationChatCreatesInWindowParams{
AutomationID: automation.ID,
WindowStart: dbtime.Now().Add(-time.Hour),
}
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().CountChatAutomationChatCreatesInWindow(gomock.Any(), arg).Return(int64(3), nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionRead).Returns(int64(3))
}))
s.Run("CountChatAutomationMessagesInWindow", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.CountChatAutomationMessagesInWindowParams{
AutomationID: automation.ID,
WindowStart: dbtime.Now().Add(-time.Hour),
}
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().CountChatAutomationMessagesInWindow(gomock.Any(), arg).Return(int64(5), nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionRead).Returns(int64(5))
}))
s.Run("DeleteChatAutomationByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().DeleteChatAutomationByID(gomock.Any(), automation.ID).Return(nil).AnyTimes()
check.Args(automation.ID).Asserts(automation, policy.ActionDelete).Returns()
}))
s.Run("DeleteChatAutomationTriggerByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
})
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().DeleteChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(nil).AnyTimes()
check.Args(trigger.ID).Asserts(automation, policy.ActionUpdate).Returns()
}))
s.Run("GetActiveChatAutomationCronTriggers", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
rows := []database.GetActiveChatAutomationCronTriggersRow{}
dbm.EXPECT().GetActiveChatAutomationCronTriggers(gomock.Any()).Return(rows, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceChatAutomation.All(), policy.ActionRead).Returns(rows)
}))
s.Run("GetChatAutomationByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
check.Args(automation.ID).Asserts(automation, policy.ActionRead).Returns(automation)
}))
s.Run("GetChatAutomationEventsByAutomationID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.GetChatAutomationEventsByAutomationIDParams{
AutomationID: automation.ID,
}
events := []database.ChatAutomationEvent{}
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationEventsByAutomationID(gomock.Any(), arg).Return(events, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionRead).Returns(events)
}))
s.Run("GetChatAutomationTriggerByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
})
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
check.Args(trigger.ID).Asserts(automation, policy.ActionRead).Returns(trigger)
}))
s.Run("GetChatAutomationTriggersByAutomationID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
triggers := []database.ChatAutomationTrigger{}
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationTriggersByAutomationID(gomock.Any(), automation.ID).Return(triggers, nil).AnyTimes()
check.Args(automation.ID).Asserts(automation, policy.ActionRead).Returns(triggers)
}))
s.Run("GetChatAutomations", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
params := database.GetChatAutomationsParams{}
dbm.EXPECT().GetChatAutomations(gomock.Any(), params).Return([]database.ChatAutomation{}, nil).AnyTimes()
dbm.EXPECT().GetAuthorizedChatAutomations(gomock.Any(), params, gomock.Any()).Return([]database.ChatAutomation{}, nil).AnyTimes()
check.Args(params).Asserts(rbac.ResourceChatAutomation.All(), policy.ActionRead).WithNotAuthorized("nil")
}))
s.Run("GetAuthorizedChatAutomations", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
params := database.GetChatAutomationsParams{}
dbm.EXPECT().GetAuthorizedChatAutomations(gomock.Any(), params, gomock.Any()).Return([]database.ChatAutomation{}, nil).AnyTimes()
dbm.EXPECT().GetChatAutomations(gomock.Any(), params).Return([]database.ChatAutomation{}, nil).AnyTimes()
check.Args(params, emptyPreparedAuthorized{}).Asserts(rbac.ResourceChatAutomation.All(), policy.ActionRead)
}))
s.Run("InsertChatAutomation", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
arg := database.InsertChatAutomationParams{
ID: uuid.New(),
OwnerID: uuid.New(),
OrganizationID: uuid.New(),
Name: "test-automation",
Description: "test description",
Instructions: "test instructions",
Status: database.ChatAutomationStatusActive,
CreatedAt: dbtime.Now(),
UpdatedAt: dbtime.Now(),
}
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{
ID: arg.ID,
OwnerID: arg.OwnerID,
OrganizationID: arg.OrganizationID,
Status: arg.Status,
})
dbm.EXPECT().InsertChatAutomation(gomock.Any(), arg).Return(automation, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChatAutomation.WithOwner(arg.OwnerID.String()).InOrg(arg.OrganizationID), policy.ActionCreate).Returns(automation)
}))
s.Run("InsertChatAutomationEvent", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.InsertChatAutomationEventParams{
ID: uuid.New(),
AutomationID: automation.ID,
ReceivedAt: dbtime.Now(),
Payload: json.RawMessage(`{}`),
Status: database.ChatAutomationEventStatusFiltered,
}
event := testutil.Fake(s.T(), faker, database.ChatAutomationEvent{
ID: arg.ID,
AutomationID: automation.ID,
Status: arg.Status,
})
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().InsertChatAutomationEvent(gomock.Any(), arg).Return(event, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(event)
}))
s.Run("InsertChatAutomationTrigger", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.InsertChatAutomationTriggerParams{
ID: uuid.New(),
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
CreatedAt: dbtime.Now(),
UpdatedAt: dbtime.Now(),
}
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
ID: arg.ID,
AutomationID: automation.ID,
Type: arg.Type,
})
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().InsertChatAutomationTrigger(gomock.Any(), arg).Return(trigger, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(trigger)
}))
s.Run("PurgeOldChatAutomationEvents", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.PurgeOldChatAutomationEventsParams{
Before: dbtime.Now().Add(-7 * 24 * time.Hour),
LimitCount: 1000,
}
dbm.EXPECT().PurgeOldChatAutomationEvents(gomock.Any(), arg).Return(int64(5), nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChatAutomation.All(), policy.ActionDelete).Returns(int64(5))
}))
s.Run("UpdateChatAutomation", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
arg := database.UpdateChatAutomationParams{
ID: automation.ID,
Name: "updated-name",
Description: "updated description",
Status: database.ChatAutomationStatusActive,
UpdatedAt: dbtime.Now(),
}
updated := automation
updated.Name = arg.Name
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().UpdateChatAutomation(gomock.Any(), arg).Return(updated, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(updated)
}))
s.Run("UpdateChatAutomationTrigger", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeCron,
})
arg := database.UpdateChatAutomationTriggerParams{
ID: trigger.ID,
UpdatedAt: dbtime.Now(),
}
updated := trigger
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().UpdateChatAutomationTrigger(gomock.Any(), arg).Return(updated, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(updated)
}))
s.Run("UpdateChatAutomationTriggerLastTriggeredAt", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeCron,
})
arg := database.UpdateChatAutomationTriggerLastTriggeredAtParams{
ID: trigger.ID,
LastTriggeredAt: dbtime.Now(),
}
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().UpdateChatAutomationTriggerLastTriggeredAt(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns()
}))
s.Run("UpdateChatAutomationTriggerWebhookSecret", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
automation := testutil.Fake(s.T(), faker, database.ChatAutomation{Status: database.ChatAutomationStatusActive})
trigger := testutil.Fake(s.T(), faker, database.ChatAutomationTrigger{
AutomationID: automation.ID,
Type: database.ChatAutomationTriggerTypeWebhook,
})
arg := database.UpdateChatAutomationTriggerWebhookSecretParams{
ID: trigger.ID,
UpdatedAt: dbtime.Now(),
WebhookSecret: sql.NullString{
String: "new-secret",
Valid: true,
},
}
updated := trigger
dbm.EXPECT().GetChatAutomationTriggerByID(gomock.Any(), trigger.ID).Return(trigger, nil).AnyTimes()
dbm.EXPECT().GetChatAutomationByID(gomock.Any(), automation.ID).Return(automation, nil).AnyTimes()
dbm.EXPECT().UpdateChatAutomationTriggerWebhookSecret(gomock.Any(), arg).Return(updated, nil).AnyTimes()
check.Args(arg).Asserts(automation, policy.ActionUpdate).Returns(updated)
}))
}
func (s *MethodTestSuite) TestFile() {
s.Run("GetFileByHashAndCreator", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
f := testutil.Fake(s.T(), faker, database.File{})
@@ -2159,6 +2492,14 @@ func (s *MethodTestSuite) TestUser() {
dbm.EXPECT().GetQuotaConsumedForUser(gomock.Any(), arg).Return(int64(0), nil).AnyTimes()
check.Args(arg).Asserts(u, policy.ActionRead).Returns(int64(0))
}))
s.Run("GetUserAISeatStates", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
a := testutil.Fake(s.T(), faker, database.User{})
b := testutil.Fake(s.T(), faker, database.User{})
ids := []uuid.UUID{a.ID, b.ID}
seatStates := []uuid.UUID{a.ID}
dbm.EXPECT().GetUserAISeatStates(gomock.Any(), ids).Return(seatStates, nil).AnyTimes()
check.Args(ids).Asserts(rbac.ResourceUser, policy.ActionRead).Returns(seatStates)
}))
s.Run("GetUserByEmailOrUsername", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
u := testutil.Fake(s.T(), faker, database.User{})
arg := database.GetUserByEmailOrUsernameParams{Email: u.Email}
@@ -5438,6 +5779,20 @@ func (s *MethodTestSuite) TestAIBridge() {
check.Args(params, emptyPreparedAuthorized{}).Asserts()
}))
s.Run("ListAIBridgeClients", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
params := database.ListAIBridgeClientsParams{}
db.EXPECT().ListAuthorizedAIBridgeClients(gomock.Any(), params, gomock.Any()).Return([]string{}, nil).AnyTimes()
// No asserts here because SQLFilter.
check.Args(params).Asserts()
}))
s.Run("ListAuthorizedAIBridgeClients", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
params := database.ListAIBridgeClientsParams{}
db.EXPECT().ListAuthorizedAIBridgeClients(gomock.Any(), params, gomock.Any()).Return([]string{}, nil).AnyTimes()
// No asserts here because SQLFilter.
check.Args(params, emptyPreparedAuthorized{}).Asserts()
}))
s.Run("ListAIBridgeSessions", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
params := database.ListAIBridgeSessionsParams{}
db.EXPECT().ListAuthorizedAIBridgeSessions(gomock.Any(), params, gomock.Any()).Return([]database.ListAIBridgeSessionsRow{}, nil).AnyTimes()
@@ -5484,6 +5839,26 @@ func (s *MethodTestSuite) TestAIBridge() {
check.Args(ids).Asserts(rbac.ResourceAibridgeInterception, policy.ActionRead).Returns([]database.AIBridgeToolUsage{})
}))
s.Run("ListAIBridgeModelThoughtsByInterceptionIDs", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
ids := []uuid.UUID{{1}}
db.EXPECT().ListAIBridgeModelThoughtsByInterceptionIDs(gomock.Any(), ids).Return([]database.AIBridgeModelThought{}, nil).AnyTimes()
check.Args(ids).Asserts(rbac.ResourceAibridgeInterception, policy.ActionRead).Returns([]database.AIBridgeModelThought{})
}))
s.Run("ListAIBridgeSessionThreads", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
params := database.ListAIBridgeSessionThreadsParams{}
db.EXPECT().ListAuthorizedAIBridgeSessionThreads(gomock.Any(), params, gomock.Any()).Return([]database.ListAIBridgeSessionThreadsRow{}, nil).AnyTimes()
// No asserts here because SQLFilter.
check.Args(params).Asserts()
}))
s.Run("ListAuthorizedAIBridgeSessionThreads", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
params := database.ListAIBridgeSessionThreadsParams{}
db.EXPECT().ListAuthorizedAIBridgeSessionThreads(gomock.Any(), params, gomock.Any()).Return([]database.ListAIBridgeSessionThreadsRow{}, nil).AnyTimes()
// No asserts here because SQLFilter.
check.Args(params, emptyPreparedAuthorized{}).Asserts()
}))
s.Run("UpdateAIBridgeInterceptionEnded", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
intcID := uuid.UUID{1}
params := database.UpdateAIBridgeInterceptionEndedParams{ID: intcID}
@@ -1663,6 +1663,17 @@ func AIBridgeToolUsage(t testing.TB, db database.Store, seed database.InsertAIBr
return toolUsage
}
func AIBridgeModelThought(t testing.TB, db database.Store, seed database.InsertAIBridgeModelThoughtParams) database.AIBridgeModelThought {
thought, err := db.InsertAIBridgeModelThought(genCtx, database.InsertAIBridgeModelThoughtParams{
InterceptionID: takeFirst(seed.InterceptionID, uuid.New()),
Content: takeFirst(seed.Content, ""),
Metadata: takeFirstSlice(seed.Metadata, json.RawMessage("{}")),
CreatedAt: takeFirst(seed.CreatedAt, dbtime.Now()),
})
require.NoError(t, err, "insert aibridge model thought")
return thought
}
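The `takeFirst(seed.X, default)` calls in the fixture above follow a common seeding idiom: use the caller-supplied field when it is non-zero, else fall back to a generated default. A self-contained sketch of that idiom using generics (the real dbgen helper may be implemented differently):

```go
package main

import (
	"fmt"
	"time"
)

// takeFirst returns the first non-zero value, mirroring the seeding idiom
// used by test fixtures: caller-supplied seed fields win over defaults.
func takeFirst[T comparable](values ...T) T {
	var zero T
	for _, v := range values {
		if v != zero {
			return v
		}
	}
	return zero
}

func main() {
	// An explicit seed value wins.
	fmt.Println(takeFirst("seeded-content", "default")) // seeded-content
	// A zero-valued seed falls through to the default.
	fmt.Println(takeFirst("", "default")) // default
	// Works for any comparable type, e.g. timestamps.
	fmt.Println(takeFirst(time.Time{}, time.Unix(0, 0).UTC()).IsZero()) // false
}
```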
func Task(t testing.TB, db database.Store, orig database.TaskTable) database.Task {
t.Helper()
@@ -160,12 +160,12 @@ func (m queryMetricsStore) AllUserIDs(ctx context.Context, includeSystem bool) (
return r0, r1
}
-func (m queryMetricsStore) ArchiveChatByID(ctx context.Context, id uuid.UUID) error {
+func (m queryMetricsStore) ArchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
start := time.Now()
-r0 := m.s.ArchiveChatByID(ctx, id)
+r0, r1 := m.s.ArchiveChatByID(ctx, id)
m.queryLatencies.WithLabelValues("ArchiveChatByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ArchiveChatByID").Inc()
-return r0
+return r0, r1
}
func (m queryMetricsStore) ArchiveUnusedTemplateVersions(ctx context.Context, arg database.ArchiveUnusedTemplateVersionsParams) ([]uuid.UUID, error) {
@@ -264,6 +264,14 @@ func (m queryMetricsStore) CleanTailnetTunnels(ctx context.Context) error {
return r0
}
func (m queryMetricsStore) CleanupDeletedMCPServerIDsFromChatAutomations(ctx context.Context) error {
start := time.Now()
r0 := m.s.CleanupDeletedMCPServerIDsFromChatAutomations(ctx)
m.queryLatencies.WithLabelValues("CleanupDeletedMCPServerIDsFromChatAutomations").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CleanupDeletedMCPServerIDsFromChatAutomations").Inc()
return r0
}
func (m queryMetricsStore) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
start := time.Now()
r0 := m.s.CleanupDeletedMCPServerIDsFromChats(ctx)
@@ -296,6 +304,22 @@ func (m queryMetricsStore) CountAuditLogs(ctx context.Context, arg database.Coun
return r0, r1
}
func (m queryMetricsStore) CountChatAutomationChatCreatesInWindow(ctx context.Context, arg database.CountChatAutomationChatCreatesInWindowParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountChatAutomationChatCreatesInWindow(ctx, arg)
m.queryLatencies.WithLabelValues("CountChatAutomationChatCreatesInWindow").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CountChatAutomationChatCreatesInWindow").Inc()
return r0, r1
}
func (m queryMetricsStore) CountChatAutomationMessagesInWindow(ctx context.Context, arg database.CountChatAutomationMessagesInWindowParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountChatAutomationMessagesInWindow(ctx, arg)
m.queryLatencies.WithLabelValues("CountChatAutomationMessagesInWindow").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CountChatAutomationMessagesInWindow").Inc()
return r0, r1
}
func (m queryMetricsStore) CountConnectionLogs(ctx context.Context, arg database.CountConnectionLogsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountConnectionLogs(ctx, arg)
@@ -400,6 +424,22 @@ func (m queryMetricsStore) DeleteApplicationConnectAPIKeysByUserID(ctx context.C
return r0
}
func (m queryMetricsStore) DeleteChatAutomationByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatAutomationByID(ctx, id)
m.queryLatencies.WithLabelValues("DeleteChatAutomationByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteChatAutomationByID").Inc()
return r0
}
func (m queryMetricsStore) DeleteChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatAutomationTriggerByID(ctx, id)
m.queryLatencies.WithLabelValues("DeleteChatAutomationTriggerByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteChatAutomationTriggerByID").Inc()
return r0
}
func (m queryMetricsStore) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatModelConfigByID(ctx, id)
@@ -936,6 +976,14 @@ func (m queryMetricsStore) GetActiveAISeatCount(ctx context.Context) (int64, err
return r0, r1
}
func (m queryMetricsStore) GetActiveChatAutomationCronTriggers(ctx context.Context) ([]database.GetActiveChatAutomationCronTriggersRow, error) {
start := time.Now()
r0, r1 := m.s.GetActiveChatAutomationCronTriggers(ctx)
m.queryLatencies.WithLabelValues("GetActiveChatAutomationCronTriggers").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetActiveChatAutomationCronTriggers").Inc()
return r0, r1
}
func (m queryMetricsStore) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
start := time.Now()
r0, r1 := m.s.GetActivePresetPrebuildSchedules(ctx)
@@ -1032,6 +1080,46 @@ func (m queryMetricsStore) GetAuthorizationUserRoles(ctx context.Context, userID
return r0, r1
}
func (m queryMetricsStore) GetChatAutomationByID(ctx context.Context, id uuid.UUID) (database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomationByID(ctx, id)
m.queryLatencies.WithLabelValues("GetChatAutomationByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomationByID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatAutomationEventsByAutomationID(ctx context.Context, arg database.GetChatAutomationEventsByAutomationIDParams) ([]database.ChatAutomationEvent, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomationEventsByAutomationID(ctx, arg)
m.queryLatencies.WithLabelValues("GetChatAutomationEventsByAutomationID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomationEventsByAutomationID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomationTriggerByID(ctx, id)
m.queryLatencies.WithLabelValues("GetChatAutomationTriggerByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomationTriggerByID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomationTriggersByAutomationID(ctx, automationID)
m.queryLatencies.WithLabelValues("GetChatAutomationTriggersByAutomationID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomationTriggersByAutomationID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams) ([]database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.GetChatAutomations(ctx, arg)
m.queryLatencies.WithLabelValues("GetChatAutomations").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatAutomations").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatByID(ctx context.Context, id uuid.UUID) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.GetChatByID(ctx, id)
@@ -1120,6 +1208,14 @@ func (m queryMetricsStore) GetChatFilesByIDs(ctx context.Context, ids []uuid.UUI
return r0, r1
}
func (m queryMetricsStore) GetChatIncludeDefaultSystemPrompt(ctx context.Context) (bool, error) {
start := time.Now()
r0, r1 := m.s.GetChatIncludeDefaultSystemPrompt(ctx)
m.queryLatencies.WithLabelValues("GetChatIncludeDefaultSystemPrompt").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatIncludeDefaultSystemPrompt").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatMessageByID(ctx context.Context, id int64) (database.ChatMessage, error) {
start := time.Now()
r0, r1 := m.s.GetChatMessageByID(ctx, id)
@@ -1136,6 +1232,14 @@ func (m queryMetricsStore) GetChatMessagesByChatID(ctx context.Context, chatID d
return r0, r1
}
func (m queryMetricsStore) GetChatMessagesByChatIDAscPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDAscPaginatedParams) ([]database.ChatMessage, error) {
start := time.Now()
r0, r1 := m.s.GetChatMessagesByChatIDAscPaginated(ctx, arg)
m.queryLatencies.WithLabelValues("GetChatMessagesByChatIDAscPaginated").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatMessagesByChatIDAscPaginated").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatMessagesByChatIDDescPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDDescPaginatedParams) ([]database.ChatMessage, error) {
start := time.Now()
r0, r1 := m.s.GetChatMessagesByChatIDDescPaginated(ctx, arg)
@@ -1208,6 +1312,14 @@ func (m queryMetricsStore) GetChatSystemPrompt(ctx context.Context) (string, err
return r0, r1
}
func (m queryMetricsStore) GetChatSystemPromptConfig(ctx context.Context) (database.GetChatSystemPromptConfigRow, error) {
start := time.Now()
r0, r1 := m.s.GetChatSystemPromptConfig(ctx)
m.queryLatencies.WithLabelValues("GetChatSystemPromptConfig").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatSystemPromptConfig").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatTemplateAllowlist(ctx context.Context) (string, error) {
start := time.Now()
r0, r1 := m.s.GetChatTemplateAllowlist(ctx)
@@ -1248,7 +1360,7 @@ func (m queryMetricsStore) GetChatWorkspaceTTL(ctx context.Context) (string, err
return r0, r1
}
func (m queryMetricsStore) GetChats(ctx context.Context, arg database.GetChatsParams) ([]database.Chat, error) {
func (m queryMetricsStore) GetChats(ctx context.Context, arg database.GetChatsParams) ([]database.GetChatsRow, error) {
start := time.Now()
r0, r1 := m.s.GetChats(ctx, arg)
m.queryLatencies.WithLabelValues("GetChats").Observe(time.Since(start).Seconds())
@@ -1256,6 +1368,14 @@ func (m queryMetricsStore) GetChats(ctx context.Context, arg database.GetChatsPa
return r0, r1
}
func (m queryMetricsStore) GetChatsByWorkspaceIDs(ctx context.Context, ids []uuid.UUID) ([]database.Chat, error) {
start := time.Now()
r0, r1 := m.s.GetChatsByWorkspaceIDs(ctx, ids)
m.queryLatencies.WithLabelValues("GetChatsByWorkspaceIDs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatsByWorkspaceIDs").Inc()
return r0, r1
}
func (m queryMetricsStore) GetConnectionLogsOffset(ctx context.Context, arg database.GetConnectionLogsOffsetParams) ([]database.GetConnectionLogsOffsetRow, error) {
start := time.Now()
r0, r1 := m.s.GetConnectionLogsOffset(ctx, arg)
@@ -2448,6 +2568,14 @@ func (m queryMetricsStore) GetUnexpiredLicenses(ctx context.Context) ([]database
return r0, r1
}
func (m queryMetricsStore) GetUserAISeatStates(ctx context.Context, userIds []uuid.UUID) ([]uuid.UUID, error) {
start := time.Now()
r0, r1 := m.s.GetUserAISeatStates(ctx, userIds)
m.queryLatencies.WithLabelValues("GetUserAISeatStates").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetUserAISeatStates").Inc()
return r0, r1
}
func (m queryMetricsStore) GetUserActivityInsights(ctx context.Context, arg database.GetUserActivityInsightsParams) ([]database.GetUserActivityInsightsRow, error) {
start := time.Now()
r0, r1 := m.s.GetUserActivityInsights(ctx, arg)
@@ -3184,6 +3312,30 @@ func (m queryMetricsStore) InsertChat(ctx context.Context, arg database.InsertCh
return r0, r1
}
func (m queryMetricsStore) InsertChatAutomation(ctx context.Context, arg database.InsertChatAutomationParams) (database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.InsertChatAutomation(ctx, arg)
m.queryLatencies.WithLabelValues("InsertChatAutomation").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertChatAutomation").Inc()
return r0, r1
}
func (m queryMetricsStore) InsertChatAutomationEvent(ctx context.Context, arg database.InsertChatAutomationEventParams) (database.ChatAutomationEvent, error) {
start := time.Now()
r0, r1 := m.s.InsertChatAutomationEvent(ctx, arg)
m.queryLatencies.WithLabelValues("InsertChatAutomationEvent").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertChatAutomationEvent").Inc()
return r0, r1
}
func (m queryMetricsStore) InsertChatAutomationTrigger(ctx context.Context, arg database.InsertChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.InsertChatAutomationTrigger(ctx, arg)
m.queryLatencies.WithLabelValues("InsertChatAutomationTrigger").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertChatAutomationTrigger").Inc()
return r0, r1
}
func (m queryMetricsStore) InsertChatFile(ctx context.Context, arg database.InsertChatFileParams) (database.InsertChatFileRow, error) {
start := time.Now()
r0, r1 := m.s.InsertChatFile(ctx, arg)
@@ -3712,6 +3864,14 @@ func (m queryMetricsStore) InsertWorkspaceResourceMetadata(ctx context.Context,
return r0, r1
}
func (m queryMetricsStore) ListAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams) ([]string, error) {
start := time.Now()
r0, r1 := m.s.ListAIBridgeClients(ctx, arg)
m.queryLatencies.WithLabelValues("ListAIBridgeClients").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListAIBridgeClients").Inc()
return r0, r1
}
func (m queryMetricsStore) ListAIBridgeInterceptions(ctx context.Context, arg database.ListAIBridgeInterceptionsParams) ([]database.ListAIBridgeInterceptionsRow, error) {
start := time.Now()
r0, r1 := m.s.ListAIBridgeInterceptions(ctx, arg)
@@ -3728,6 +3888,14 @@ func (m queryMetricsStore) ListAIBridgeInterceptionsTelemetrySummaries(ctx conte
return r0, r1
}
func (m queryMetricsStore) ListAIBridgeModelThoughtsByInterceptionIDs(ctx context.Context, interceptionIds []uuid.UUID) ([]database.AIBridgeModelThought, error) {
start := time.Now()
r0, r1 := m.s.ListAIBridgeModelThoughtsByInterceptionIDs(ctx, interceptionIds)
m.queryLatencies.WithLabelValues("ListAIBridgeModelThoughtsByInterceptionIDs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListAIBridgeModelThoughtsByInterceptionIDs").Inc()
return r0, r1
}
func (m queryMetricsStore) ListAIBridgeModels(ctx context.Context, arg database.ListAIBridgeModelsParams) ([]string, error) {
start := time.Now()
r0, r1 := m.s.ListAIBridgeModels(ctx, arg)
@@ -3736,6 +3904,14 @@ func (m queryMetricsStore) ListAIBridgeModels(ctx context.Context, arg database.
return r0, r1
}
func (m queryMetricsStore) ListAIBridgeSessionThreads(ctx context.Context, arg database.ListAIBridgeSessionThreadsParams) ([]database.ListAIBridgeSessionThreadsRow, error) {
start := time.Now()
r0, r1 := m.s.ListAIBridgeSessionThreads(ctx, arg)
m.queryLatencies.WithLabelValues("ListAIBridgeSessionThreads").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListAIBridgeSessionThreads").Inc()
return r0, r1
}
func (m queryMetricsStore) ListAIBridgeSessions(ctx context.Context, arg database.ListAIBridgeSessionsParams) ([]database.ListAIBridgeSessionsRow, error) {
start := time.Now()
r0, r1 := m.s.ListAIBridgeSessions(ctx, arg)
@@ -3872,6 +4048,14 @@ func (m queryMetricsStore) PaginatedOrganizationMembers(ctx context.Context, arg
return r0, r1
}
func (m queryMetricsStore) PinChatByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.PinChatByID(ctx, id)
m.queryLatencies.WithLabelValues("PinChatByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "PinChatByID").Inc()
return r0
}
func (m queryMetricsStore) PopNextQueuedMessage(ctx context.Context, chatID uuid.UUID) (database.ChatQueuedMessage, error) {
start := time.Now()
r0, r1 := m.s.PopNextQueuedMessage(ctx, chatID)
@@ -3880,6 +4064,14 @@ func (m queryMetricsStore) PopNextQueuedMessage(ctx context.Context, chatID uuid
return r0, r1
}
func (m queryMetricsStore) PurgeOldChatAutomationEvents(ctx context.Context, arg database.PurgeOldChatAutomationEventsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.PurgeOldChatAutomationEvents(ctx, arg)
m.queryLatencies.WithLabelValues("PurgeOldChatAutomationEvents").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "PurgeOldChatAutomationEvents").Inc()
return r0, r1
}
func (m queryMetricsStore) ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx context.Context, templateID uuid.UUID) error {
start := time.Now()
r0 := m.s.ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx, templateID)
@@ -3952,12 +4144,12 @@ func (m queryMetricsStore) TryAcquireLock(ctx context.Context, pgTryAdvisoryXact
return r0, r1
}
func (m queryMetricsStore) UnarchiveChatByID(ctx context.Context, id uuid.UUID) error {
func (m queryMetricsStore) UnarchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
start := time.Now()
r0 := m.s.UnarchiveChatByID(ctx, id)
r0, r1 := m.s.UnarchiveChatByID(ctx, id)
m.queryLatencies.WithLabelValues("UnarchiveChatByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UnarchiveChatByID").Inc()
return r0
return r0, r1
}
func (m queryMetricsStore) UnarchiveTemplateVersion(ctx context.Context, arg database.UnarchiveTemplateVersionParams) error {
@@ -3976,6 +4168,14 @@ func (m queryMetricsStore) UnfavoriteWorkspace(ctx context.Context, id uuid.UUID
return r0
}
func (m queryMetricsStore) UnpinChatByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.UnpinChatByID(ctx, id)
m.queryLatencies.WithLabelValues("UnpinChatByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UnpinChatByID").Inc()
return r0
}
func (m queryMetricsStore) UnsetDefaultChatModelConfigs(ctx context.Context) error {
start := time.Now()
r0 := m.s.UnsetDefaultChatModelConfigs(ctx)
@@ -4000,6 +4200,46 @@ func (m queryMetricsStore) UpdateAPIKeyByID(ctx context.Context, arg database.Up
return r0
}
func (m queryMetricsStore) UpdateChatAutomation(ctx context.Context, arg database.UpdateChatAutomationParams) (database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatAutomation(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatAutomation").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatAutomation").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatAutomationTrigger(ctx context.Context, arg database.UpdateChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatAutomationTrigger(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatAutomationTrigger").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatAutomationTrigger").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatAutomationTriggerLastTriggeredAt(ctx context.Context, arg database.UpdateChatAutomationTriggerLastTriggeredAtParams) error {
start := time.Now()
r0 := m.s.UpdateChatAutomationTriggerLastTriggeredAt(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatAutomationTriggerLastTriggeredAt").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatAutomationTriggerLastTriggeredAt").Inc()
return r0
}
func (m queryMetricsStore) UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, arg database.UpdateChatAutomationTriggerWebhookSecretParams) (database.ChatAutomationTrigger, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatAutomationTriggerWebhookSecret(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatAutomationTriggerWebhookSecret").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatAutomationTriggerWebhookSecret").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatBuildAgentBinding(ctx context.Context, arg database.UpdateChatBuildAgentBindingParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatBuildAgentBinding(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatBuildAgentBinding").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatBuildAgentBinding").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatByID(ctx context.Context, arg database.UpdateChatByIDParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatByID(ctx, arg)
@@ -4024,6 +4264,30 @@ func (m queryMetricsStore) UpdateChatLabelsByID(ctx context.Context, arg databas
return r0, r1
}
func (m queryMetricsStore) UpdateChatLastInjectedContext(ctx context.Context, arg database.UpdateChatLastInjectedContextParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatLastInjectedContext(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatLastInjectedContext").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatLastInjectedContext").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatLastModelConfigByID(ctx context.Context, arg database.UpdateChatLastModelConfigByIDParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatLastModelConfigByID(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatLastModelConfigByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatLastModelConfigByID").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatLastReadMessageID(ctx context.Context, arg database.UpdateChatLastReadMessageIDParams) error {
start := time.Now()
r0 := m.s.UpdateChatLastReadMessageID(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatLastReadMessageID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatLastReadMessageID").Inc()
return r0
}
func (m queryMetricsStore) UpdateChatMCPServerIDs(ctx context.Context, arg database.UpdateChatMCPServerIDsParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatMCPServerIDs(ctx, arg)
@@ -4048,6 +4312,14 @@ func (m queryMetricsStore) UpdateChatModelConfig(ctx context.Context, arg databa
return r0, r1
}
func (m queryMetricsStore) UpdateChatPinOrder(ctx context.Context, arg database.UpdateChatPinOrderParams) error {
start := time.Now()
r0 := m.s.UpdateChatPinOrder(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatPinOrder").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatPinOrder").Inc()
return r0
}
func (m queryMetricsStore) UpdateChatProvider(ctx context.Context, arg database.UpdateChatProviderParams) (database.ChatProvider, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatProvider(ctx, arg)
@@ -4064,11 +4336,19 @@ func (m queryMetricsStore) UpdateChatStatus(ctx context.Context, arg database.Up
return r0, r1
}
func (m queryMetricsStore) UpdateChatWorkspace(ctx context.Context, arg database.UpdateChatWorkspaceParams) (database.Chat, error) {
func (m queryMetricsStore) UpdateChatStatusPreserveUpdatedAt(ctx context.Context, arg database.UpdateChatStatusPreserveUpdatedAtParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatWorkspace(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatWorkspace").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatWorkspace").Inc()
r0, r1 := m.s.UpdateChatStatusPreserveUpdatedAt(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatStatusPreserveUpdatedAt").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatStatusPreserveUpdatedAt").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatWorkspaceBinding(ctx context.Context, arg database.UpdateChatWorkspaceBindingParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatWorkspaceBinding(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatWorkspaceBinding").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatWorkspaceBinding").Inc()
return r0, r1
}
@@ -4816,6 +5096,14 @@ func (m queryMetricsStore) UpsertChatDiffStatusReference(ctx context.Context, ar
return r0, r1
}
func (m queryMetricsStore) UpsertChatIncludeDefaultSystemPrompt(ctx context.Context, includeDefaultSystemPrompt bool) error {
start := time.Now()
r0 := m.s.UpsertChatIncludeDefaultSystemPrompt(ctx, includeDefaultSystemPrompt)
m.queryLatencies.WithLabelValues("UpsertChatIncludeDefaultSystemPrompt").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpsertChatIncludeDefaultSystemPrompt").Inc()
return r0
}
func (m queryMetricsStore) UpsertChatSystemPrompt(ctx context.Context, value string) error {
start := time.Now()
r0 := m.s.UpsertChatSystemPrompt(ctx, value)
@@ -5176,6 +5464,14 @@ func (m queryMetricsStore) ListAuthorizedAIBridgeModels(ctx context.Context, arg
return r0, r1
}
func (m queryMetricsStore) ListAuthorizedAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams, prepared rbac.PreparedAuthorized) ([]string, error) {
start := time.Now()
r0, r1 := m.s.ListAuthorizedAIBridgeClients(ctx, arg, prepared)
m.queryLatencies.WithLabelValues("ListAuthorizedAIBridgeClients").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListAuthorizedAIBridgeClients").Inc()
return r0, r1
}
func (m queryMetricsStore) ListAuthorizedAIBridgeSessions(ctx context.Context, arg database.ListAIBridgeSessionsParams, prepared rbac.PreparedAuthorized) ([]database.ListAIBridgeSessionsRow, error) {
start := time.Now()
r0, r1 := m.s.ListAuthorizedAIBridgeSessions(ctx, arg, prepared)
@@ -5192,10 +5488,26 @@ func (m queryMetricsStore) CountAuthorizedAIBridgeSessions(ctx context.Context,
return r0, r1
}
func (m queryMetricsStore) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, prepared rbac.PreparedAuthorized) ([]database.Chat, error) {
func (m queryMetricsStore) ListAuthorizedAIBridgeSessionThreads(ctx context.Context, arg database.ListAIBridgeSessionThreadsParams, prepared rbac.PreparedAuthorized) ([]database.ListAIBridgeSessionThreadsRow, error) {
start := time.Now()
r0, r1 := m.s.ListAuthorizedAIBridgeSessionThreads(ctx, arg, prepared)
m.queryLatencies.WithLabelValues("ListAuthorizedAIBridgeSessionThreads").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListAuthorizedAIBridgeSessionThreads").Inc()
return r0, r1
}
func (m queryMetricsStore) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, prepared rbac.PreparedAuthorized) ([]database.GetChatsRow, error) {
start := time.Now()
r0, r1 := m.s.GetAuthorizedChats(ctx, arg, prepared)
m.queryLatencies.WithLabelValues("GetAuthorizedChats").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetAuthorizedChats").Inc()
return r0, r1
}
func (m queryMetricsStore) GetAuthorizedChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams, prepared rbac.PreparedAuthorized) ([]database.ChatAutomation, error) {
start := time.Now()
r0, r1 := m.s.GetAuthorizedChatAutomations(ctx, arg, prepared)
m.queryLatencies.WithLabelValues("GetAuthorizedChatAutomations").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetAuthorizedChatAutomations").Inc()
return r0, r1
}
@@ -148,11 +148,12 @@ func (mr *MockStoreMockRecorder) AllUserIDs(ctx, includeSystem any) *gomock.Call
}
// ArchiveChatByID mocks base method.
func (m *MockStore) ArchiveChatByID(ctx context.Context, id uuid.UUID) error {
func (m *MockStore) ArchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ArchiveChatByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
ret0, _ := ret[0].([]database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ArchiveChatByID indicates an expected call of ArchiveChatByID.
@@ -334,6 +335,20 @@ func (mr *MockStoreMockRecorder) CleanTailnetTunnels(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CleanTailnetTunnels", reflect.TypeOf((*MockStore)(nil).CleanTailnetTunnels), ctx)
}
// CleanupDeletedMCPServerIDsFromChatAutomations mocks base method.
func (m *MockStore) CleanupDeletedMCPServerIDsFromChatAutomations(ctx context.Context) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CleanupDeletedMCPServerIDsFromChatAutomations", ctx)
ret0, _ := ret[0].(error)
return ret0
}
// CleanupDeletedMCPServerIDsFromChatAutomations indicates an expected call of CleanupDeletedMCPServerIDsFromChatAutomations.
func (mr *MockStoreMockRecorder) CleanupDeletedMCPServerIDsFromChatAutomations(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CleanupDeletedMCPServerIDsFromChatAutomations", reflect.TypeOf((*MockStore)(nil).CleanupDeletedMCPServerIDsFromChatAutomations), ctx)
}
// CleanupDeletedMCPServerIDsFromChats mocks base method.
func (m *MockStore) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
m.ctrl.T.Helper()
@@ -453,6 +468,36 @@ func (mr *MockStoreMockRecorder) CountAuthorizedConnectionLogs(ctx, arg, prepare
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountAuthorizedConnectionLogs", reflect.TypeOf((*MockStore)(nil).CountAuthorizedConnectionLogs), ctx, arg, prepared)
}
// CountChatAutomationChatCreatesInWindow mocks base method.
func (m *MockStore) CountChatAutomationChatCreatesInWindow(ctx context.Context, arg database.CountChatAutomationChatCreatesInWindowParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CountChatAutomationChatCreatesInWindow", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CountChatAutomationChatCreatesInWindow indicates an expected call of CountChatAutomationChatCreatesInWindow.
func (mr *MockStoreMockRecorder) CountChatAutomationChatCreatesInWindow(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountChatAutomationChatCreatesInWindow", reflect.TypeOf((*MockStore)(nil).CountChatAutomationChatCreatesInWindow), ctx, arg)
}
// CountChatAutomationMessagesInWindow mocks base method.
func (m *MockStore) CountChatAutomationMessagesInWindow(ctx context.Context, arg database.CountChatAutomationMessagesInWindowParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CountChatAutomationMessagesInWindow", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CountChatAutomationMessagesInWindow indicates an expected call of CountChatAutomationMessagesInWindow.
func (mr *MockStoreMockRecorder) CountChatAutomationMessagesInWindow(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountChatAutomationMessagesInWindow", reflect.TypeOf((*MockStore)(nil).CountChatAutomationMessagesInWindow), ctx, arg)
}
// CountConnectionLogs mocks base method.
func (m *MockStore) CountConnectionLogs(ctx context.Context, arg database.CountConnectionLogsParams) (int64, error) {
m.ctrl.T.Helper()
@@ -642,6 +687,34 @@ func (mr *MockStoreMockRecorder) DeleteApplicationConnectAPIKeysByUserID(ctx, us
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteApplicationConnectAPIKeysByUserID", reflect.TypeOf((*MockStore)(nil).DeleteApplicationConnectAPIKeysByUserID), ctx, userID)
}
// DeleteChatAutomationByID mocks base method.
func (m *MockStore) DeleteChatAutomationByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteChatAutomationByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteChatAutomationByID indicates an expected call of DeleteChatAutomationByID.
func (mr *MockStoreMockRecorder) DeleteChatAutomationByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteChatAutomationByID", reflect.TypeOf((*MockStore)(nil).DeleteChatAutomationByID), ctx, id)
}
// DeleteChatAutomationTriggerByID mocks base method.
func (m *MockStore) DeleteChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteChatAutomationTriggerByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteChatAutomationTriggerByID indicates an expected call of DeleteChatAutomationTriggerByID.
func (mr *MockStoreMockRecorder) DeleteChatAutomationTriggerByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteChatAutomationTriggerByID", reflect.TypeOf((*MockStore)(nil).DeleteChatAutomationTriggerByID), ctx, id)
}
// DeleteChatModelConfigByID mocks base method.
func (m *MockStore) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
@@ -1608,6 +1681,21 @@ func (mr *MockStoreMockRecorder) GetActiveAISeatCount(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetActiveAISeatCount", reflect.TypeOf((*MockStore)(nil).GetActiveAISeatCount), ctx)
}
// GetActiveChatAutomationCronTriggers mocks base method.
func (m *MockStore) GetActiveChatAutomationCronTriggers(ctx context.Context) ([]database.GetActiveChatAutomationCronTriggersRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetActiveChatAutomationCronTriggers", ctx)
ret0, _ := ret[0].([]database.GetActiveChatAutomationCronTriggersRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetActiveChatAutomationCronTriggers indicates an expected call of GetActiveChatAutomationCronTriggers.
func (mr *MockStoreMockRecorder) GetActiveChatAutomationCronTriggers(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetActiveChatAutomationCronTriggers", reflect.TypeOf((*MockStore)(nil).GetActiveChatAutomationCronTriggers), ctx)
}
// GetActivePresetPrebuildSchedules mocks base method.
func (m *MockStore) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
m.ctrl.T.Helper()
@@ -1803,11 +1891,26 @@ func (mr *MockStoreMockRecorder) GetAuthorizedAuditLogsOffset(ctx, arg, prepared
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedAuditLogsOffset", reflect.TypeOf((*MockStore)(nil).GetAuthorizedAuditLogsOffset), ctx, arg, prepared)
}
// GetAuthorizedChatAutomations mocks base method.
func (m *MockStore) GetAuthorizedChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams, prepared rbac.PreparedAuthorized) ([]database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetAuthorizedChatAutomations", ctx, arg, prepared)
ret0, _ := ret[0].([]database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetAuthorizedChatAutomations indicates an expected call of GetAuthorizedChatAutomations.
func (mr *MockStoreMockRecorder) GetAuthorizedChatAutomations(ctx, arg, prepared any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedChatAutomations", reflect.TypeOf((*MockStore)(nil).GetAuthorizedChatAutomations), ctx, arg, prepared)
}
// GetAuthorizedChats mocks base method.
func (m *MockStore) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, prepared rbac.PreparedAuthorized) ([]database.Chat, error) {
func (m *MockStore) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, prepared rbac.PreparedAuthorized) ([]database.GetChatsRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetAuthorizedChats", ctx, arg, prepared)
ret0, _ := ret[0].([]database.GetChatsRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
@@ -1893,6 +1996,81 @@ func (mr *MockStoreMockRecorder) GetAuthorizedWorkspacesAndAgentsByOwnerID(ctx,
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedWorkspacesAndAgentsByOwnerID", reflect.TypeOf((*MockStore)(nil).GetAuthorizedWorkspacesAndAgentsByOwnerID), ctx, ownerID, prepared)
}
// GetChatAutomationByID mocks base method.
func (m *MockStore) GetChatAutomationByID(ctx context.Context, id uuid.UUID) (database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomationByID", ctx, id)
ret0, _ := ret[0].(database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomationByID indicates an expected call of GetChatAutomationByID.
func (mr *MockStoreMockRecorder) GetChatAutomationByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomationByID", reflect.TypeOf((*MockStore)(nil).GetChatAutomationByID), ctx, id)
}
// GetChatAutomationEventsByAutomationID mocks base method.
func (m *MockStore) GetChatAutomationEventsByAutomationID(ctx context.Context, arg database.GetChatAutomationEventsByAutomationIDParams) ([]database.ChatAutomationEvent, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomationEventsByAutomationID", ctx, arg)
ret0, _ := ret[0].([]database.ChatAutomationEvent)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomationEventsByAutomationID indicates an expected call of GetChatAutomationEventsByAutomationID.
func (mr *MockStoreMockRecorder) GetChatAutomationEventsByAutomationID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomationEventsByAutomationID", reflect.TypeOf((*MockStore)(nil).GetChatAutomationEventsByAutomationID), ctx, arg)
}
// GetChatAutomationTriggerByID mocks base method.
func (m *MockStore) GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomationTriggerByID", ctx, id)
ret0, _ := ret[0].(database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomationTriggerByID indicates an expected call of GetChatAutomationTriggerByID.
func (mr *MockStoreMockRecorder) GetChatAutomationTriggerByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomationTriggerByID", reflect.TypeOf((*MockStore)(nil).GetChatAutomationTriggerByID), ctx, id)
}
// GetChatAutomationTriggersByAutomationID mocks base method.
func (m *MockStore) GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomationTriggersByAutomationID", ctx, automationID)
ret0, _ := ret[0].([]database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomationTriggersByAutomationID indicates an expected call of GetChatAutomationTriggersByAutomationID.
func (mr *MockStoreMockRecorder) GetChatAutomationTriggersByAutomationID(ctx, automationID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomationTriggersByAutomationID", reflect.TypeOf((*MockStore)(nil).GetChatAutomationTriggersByAutomationID), ctx, automationID)
}
// GetChatAutomations mocks base method.
func (m *MockStore) GetChatAutomations(ctx context.Context, arg database.GetChatAutomationsParams) ([]database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatAutomations", ctx, arg)
ret0, _ := ret[0].([]database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatAutomations indicates an expected call of GetChatAutomations.
func (mr *MockStoreMockRecorder) GetChatAutomations(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatAutomations", reflect.TypeOf((*MockStore)(nil).GetChatAutomations), ctx, arg)
}
// GetChatByID mocks base method.
func (m *MockStore) GetChatByID(ctx context.Context, id uuid.UUID) (database.Chat, error) {
m.ctrl.T.Helper()
@@ -2058,6 +2236,21 @@ func (mr *MockStoreMockRecorder) GetChatFilesByIDs(ctx, ids any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatFilesByIDs", reflect.TypeOf((*MockStore)(nil).GetChatFilesByIDs), ctx, ids)
}
// GetChatIncludeDefaultSystemPrompt mocks base method.
func (m *MockStore) GetChatIncludeDefaultSystemPrompt(ctx context.Context) (bool, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatIncludeDefaultSystemPrompt", ctx)
ret0, _ := ret[0].(bool)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatIncludeDefaultSystemPrompt indicates an expected call of GetChatIncludeDefaultSystemPrompt.
func (mr *MockStoreMockRecorder) GetChatIncludeDefaultSystemPrompt(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatIncludeDefaultSystemPrompt", reflect.TypeOf((*MockStore)(nil).GetChatIncludeDefaultSystemPrompt), ctx)
}
// GetChatMessageByID mocks base method.
func (m *MockStore) GetChatMessageByID(ctx context.Context, id int64) (database.ChatMessage, error) {
m.ctrl.T.Helper()
@@ -2088,6 +2281,21 @@ func (mr *MockStoreMockRecorder) GetChatMessagesByChatID(ctx, arg any) *gomock.C
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatMessagesByChatID", reflect.TypeOf((*MockStore)(nil).GetChatMessagesByChatID), ctx, arg)
}
// GetChatMessagesByChatIDAscPaginated mocks base method.
func (m *MockStore) GetChatMessagesByChatIDAscPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDAscPaginatedParams) ([]database.ChatMessage, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatMessagesByChatIDAscPaginated", ctx, arg)
ret0, _ := ret[0].([]database.ChatMessage)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatMessagesByChatIDAscPaginated indicates an expected call of GetChatMessagesByChatIDAscPaginated.
func (mr *MockStoreMockRecorder) GetChatMessagesByChatIDAscPaginated(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatMessagesByChatIDAscPaginated", reflect.TypeOf((*MockStore)(nil).GetChatMessagesByChatIDAscPaginated), ctx, arg)
}
// GetChatMessagesByChatIDDescPaginated mocks base method.
func (m *MockStore) GetChatMessagesByChatIDDescPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDDescPaginatedParams) ([]database.ChatMessage, error) {
m.ctrl.T.Helper()
@@ -2223,6 +2431,21 @@ func (mr *MockStoreMockRecorder) GetChatSystemPrompt(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatSystemPrompt", reflect.TypeOf((*MockStore)(nil).GetChatSystemPrompt), ctx)
}
// GetChatSystemPromptConfig mocks base method.
func (m *MockStore) GetChatSystemPromptConfig(ctx context.Context) (database.GetChatSystemPromptConfigRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatSystemPromptConfig", ctx)
ret0, _ := ret[0].(database.GetChatSystemPromptConfigRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatSystemPromptConfig indicates an expected call of GetChatSystemPromptConfig.
func (mr *MockStoreMockRecorder) GetChatSystemPromptConfig(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatSystemPromptConfig", reflect.TypeOf((*MockStore)(nil).GetChatSystemPromptConfig), ctx)
}
// GetChatTemplateAllowlist mocks base method.
func (m *MockStore) GetChatTemplateAllowlist(ctx context.Context) (string, error) {
m.ctrl.T.Helper()
@@ -2299,10 +2522,10 @@ func (mr *MockStoreMockRecorder) GetChatWorkspaceTTL(ctx any) *gomock.Call {
}
// GetChats mocks base method.
func (m *MockStore) GetChats(ctx context.Context, arg database.GetChatsParams) ([]database.GetChatsRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChats", ctx, arg)
ret0, _ := ret[0].([]database.GetChatsRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
@@ -2313,6 +2536,21 @@ func (mr *MockStoreMockRecorder) GetChats(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChats", reflect.TypeOf((*MockStore)(nil).GetChats), ctx, arg)
}
// GetChatsByWorkspaceIDs mocks base method.
func (m *MockStore) GetChatsByWorkspaceIDs(ctx context.Context, ids []uuid.UUID) ([]database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatsByWorkspaceIDs", ctx, ids)
ret0, _ := ret[0].([]database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatsByWorkspaceIDs indicates an expected call of GetChatsByWorkspaceIDs.
func (mr *MockStoreMockRecorder) GetChatsByWorkspaceIDs(ctx, ids any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatsByWorkspaceIDs", reflect.TypeOf((*MockStore)(nil).GetChatsByWorkspaceIDs), ctx, ids)
}
// GetConnectionLogsOffset mocks base method.
func (m *MockStore) GetConnectionLogsOffset(ctx context.Context, arg database.GetConnectionLogsOffsetParams) ([]database.GetConnectionLogsOffsetRow, error) {
m.ctrl.T.Helper()
@@ -4578,6 +4816,21 @@ func (mr *MockStoreMockRecorder) GetUnexpiredLicenses(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUnexpiredLicenses", reflect.TypeOf((*MockStore)(nil).GetUnexpiredLicenses), ctx)
}
// GetUserAISeatStates mocks base method.
func (m *MockStore) GetUserAISeatStates(ctx context.Context, userIds []uuid.UUID) ([]uuid.UUID, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetUserAISeatStates", ctx, userIds)
ret0, _ := ret[0].([]uuid.UUID)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetUserAISeatStates indicates an expected call of GetUserAISeatStates.
func (mr *MockStoreMockRecorder) GetUserAISeatStates(ctx, userIds any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUserAISeatStates", reflect.TypeOf((*MockStore)(nil).GetUserAISeatStates), ctx, userIds)
}
// GetUserActivityInsights mocks base method.
func (m *MockStore) GetUserActivityInsights(ctx context.Context, arg database.GetUserActivityInsightsParams) ([]database.GetUserActivityInsightsRow, error) {
m.ctrl.T.Helper()
@@ -5972,6 +6225,51 @@ func (mr *MockStoreMockRecorder) InsertChat(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChat", reflect.TypeOf((*MockStore)(nil).InsertChat), ctx, arg)
}
// InsertChatAutomation mocks base method.
func (m *MockStore) InsertChatAutomation(ctx context.Context, arg database.InsertChatAutomationParams) (database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertChatAutomation", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// InsertChatAutomation indicates an expected call of InsertChatAutomation.
func (mr *MockStoreMockRecorder) InsertChatAutomation(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatAutomation", reflect.TypeOf((*MockStore)(nil).InsertChatAutomation), ctx, arg)
}
// InsertChatAutomationEvent mocks base method.
func (m *MockStore) InsertChatAutomationEvent(ctx context.Context, arg database.InsertChatAutomationEventParams) (database.ChatAutomationEvent, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertChatAutomationEvent", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomationEvent)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// InsertChatAutomationEvent indicates an expected call of InsertChatAutomationEvent.
func (mr *MockStoreMockRecorder) InsertChatAutomationEvent(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatAutomationEvent", reflect.TypeOf((*MockStore)(nil).InsertChatAutomationEvent), ctx, arg)
}
// InsertChatAutomationTrigger mocks base method.
func (m *MockStore) InsertChatAutomationTrigger(ctx context.Context, arg database.InsertChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertChatAutomationTrigger", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// InsertChatAutomationTrigger indicates an expected call of InsertChatAutomationTrigger.
func (mr *MockStoreMockRecorder) InsertChatAutomationTrigger(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatAutomationTrigger", reflect.TypeOf((*MockStore)(nil).InsertChatAutomationTrigger), ctx, arg)
}
// InsertChatFile mocks base method.
func (m *MockStore) InsertChatFile(ctx context.Context, arg database.InsertChatFileParams) (database.InsertChatFileRow, error) {
m.ctrl.T.Helper()
@@ -6947,6 +7245,21 @@ func (mr *MockStoreMockRecorder) InsertWorkspaceResourceMetadata(ctx, arg any) *
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertWorkspaceResourceMetadata", reflect.TypeOf((*MockStore)(nil).InsertWorkspaceResourceMetadata), ctx, arg)
}
// ListAIBridgeClients mocks base method.
func (m *MockStore) ListAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams) ([]string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListAIBridgeClients", ctx, arg)
ret0, _ := ret[0].([]string)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListAIBridgeClients indicates an expected call of ListAIBridgeClients.
func (mr *MockStoreMockRecorder) ListAIBridgeClients(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAIBridgeClients", reflect.TypeOf((*MockStore)(nil).ListAIBridgeClients), ctx, arg)
}
// ListAIBridgeInterceptions mocks base method.
func (m *MockStore) ListAIBridgeInterceptions(ctx context.Context, arg database.ListAIBridgeInterceptionsParams) ([]database.ListAIBridgeInterceptionsRow, error) {
m.ctrl.T.Helper()
@@ -6977,6 +7290,21 @@ func (mr *MockStoreMockRecorder) ListAIBridgeInterceptionsTelemetrySummaries(ctx
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAIBridgeInterceptionsTelemetrySummaries", reflect.TypeOf((*MockStore)(nil).ListAIBridgeInterceptionsTelemetrySummaries), ctx, arg)
}
// ListAIBridgeModelThoughtsByInterceptionIDs mocks base method.
func (m *MockStore) ListAIBridgeModelThoughtsByInterceptionIDs(ctx context.Context, interceptionIds []uuid.UUID) ([]database.AIBridgeModelThought, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListAIBridgeModelThoughtsByInterceptionIDs", ctx, interceptionIds)
ret0, _ := ret[0].([]database.AIBridgeModelThought)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListAIBridgeModelThoughtsByInterceptionIDs indicates an expected call of ListAIBridgeModelThoughtsByInterceptionIDs.
func (mr *MockStoreMockRecorder) ListAIBridgeModelThoughtsByInterceptionIDs(ctx, interceptionIds any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAIBridgeModelThoughtsByInterceptionIDs", reflect.TypeOf((*MockStore)(nil).ListAIBridgeModelThoughtsByInterceptionIDs), ctx, interceptionIds)
}
// ListAIBridgeModels mocks base method.
func (m *MockStore) ListAIBridgeModels(ctx context.Context, arg database.ListAIBridgeModelsParams) ([]string, error) {
m.ctrl.T.Helper()
@@ -6992,6 +7320,21 @@ func (mr *MockStoreMockRecorder) ListAIBridgeModels(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAIBridgeModels", reflect.TypeOf((*MockStore)(nil).ListAIBridgeModels), ctx, arg)
}
// ListAIBridgeSessionThreads mocks base method.
func (m *MockStore) ListAIBridgeSessionThreads(ctx context.Context, arg database.ListAIBridgeSessionThreadsParams) ([]database.ListAIBridgeSessionThreadsRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListAIBridgeSessionThreads", ctx, arg)
ret0, _ := ret[0].([]database.ListAIBridgeSessionThreadsRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListAIBridgeSessionThreads indicates an expected call of ListAIBridgeSessionThreads.
func (mr *MockStoreMockRecorder) ListAIBridgeSessionThreads(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAIBridgeSessionThreads", reflect.TypeOf((*MockStore)(nil).ListAIBridgeSessionThreads), ctx, arg)
}
// ListAIBridgeSessions mocks base method.
func (m *MockStore) ListAIBridgeSessions(ctx context.Context, arg database.ListAIBridgeSessionsParams) ([]database.ListAIBridgeSessionsRow, error) {
m.ctrl.T.Helper()
@@ -7052,6 +7395,21 @@ func (mr *MockStoreMockRecorder) ListAIBridgeUserPromptsByInterceptionIDs(ctx, i
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAIBridgeUserPromptsByInterceptionIDs", reflect.TypeOf((*MockStore)(nil).ListAIBridgeUserPromptsByInterceptionIDs), ctx, interceptionIds)
}
// ListAuthorizedAIBridgeClients mocks base method.
func (m *MockStore) ListAuthorizedAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams, prepared rbac.PreparedAuthorized) ([]string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListAuthorizedAIBridgeClients", ctx, arg, prepared)
ret0, _ := ret[0].([]string)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListAuthorizedAIBridgeClients indicates an expected call of ListAuthorizedAIBridgeClients.
func (mr *MockStoreMockRecorder) ListAuthorizedAIBridgeClients(ctx, arg, prepared any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAuthorizedAIBridgeClients", reflect.TypeOf((*MockStore)(nil).ListAuthorizedAIBridgeClients), ctx, arg, prepared)
}
// ListAuthorizedAIBridgeInterceptions mocks base method.
func (m *MockStore) ListAuthorizedAIBridgeInterceptions(ctx context.Context, arg database.ListAIBridgeInterceptionsParams, prepared rbac.PreparedAuthorized) ([]database.ListAIBridgeInterceptionsRow, error) {
m.ctrl.T.Helper()
@@ -7082,6 +7440,21 @@ func (mr *MockStoreMockRecorder) ListAuthorizedAIBridgeModels(ctx, arg, prepared
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAuthorizedAIBridgeModels", reflect.TypeOf((*MockStore)(nil).ListAuthorizedAIBridgeModels), ctx, arg, prepared)
}
// ListAuthorizedAIBridgeSessionThreads mocks base method.
func (m *MockStore) ListAuthorizedAIBridgeSessionThreads(ctx context.Context, arg database.ListAIBridgeSessionThreadsParams, prepared rbac.PreparedAuthorized) ([]database.ListAIBridgeSessionThreadsRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListAuthorizedAIBridgeSessionThreads", ctx, arg, prepared)
ret0, _ := ret[0].([]database.ListAIBridgeSessionThreadsRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListAuthorizedAIBridgeSessionThreads indicates an expected call of ListAuthorizedAIBridgeSessionThreads.
func (mr *MockStoreMockRecorder) ListAuthorizedAIBridgeSessionThreads(ctx, arg, prepared any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAuthorizedAIBridgeSessionThreads", reflect.TypeOf((*MockStore)(nil).ListAuthorizedAIBridgeSessionThreads), ctx, arg, prepared)
}
// ListAuthorizedAIBridgeSessions mocks base method.
func (m *MockStore) ListAuthorizedAIBridgeSessions(ctx context.Context, arg database.ListAIBridgeSessionsParams, prepared rbac.PreparedAuthorized) ([]database.ListAIBridgeSessionsRow, error) {
m.ctrl.T.Helper()
@@ -7306,6 +7679,20 @@ func (mr *MockStoreMockRecorder) PaginatedOrganizationMembers(ctx, arg any) *gom
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PaginatedOrganizationMembers", reflect.TypeOf((*MockStore)(nil).PaginatedOrganizationMembers), ctx, arg)
}
// PinChatByID mocks base method.
func (m *MockStore) PinChatByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "PinChatByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
}
// PinChatByID indicates an expected call of PinChatByID.
func (mr *MockStoreMockRecorder) PinChatByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PinChatByID", reflect.TypeOf((*MockStore)(nil).PinChatByID), ctx, id)
}
// Ping mocks base method.
func (m *MockStore) Ping(ctx context.Context) (time.Duration, error) {
m.ctrl.T.Helper()
@@ -7336,6 +7723,21 @@ func (mr *MockStoreMockRecorder) PopNextQueuedMessage(ctx, chatID any) *gomock.C
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PopNextQueuedMessage", reflect.TypeOf((*MockStore)(nil).PopNextQueuedMessage), ctx, chatID)
}
// PurgeOldChatAutomationEvents mocks base method.
func (m *MockStore) PurgeOldChatAutomationEvents(ctx context.Context, arg database.PurgeOldChatAutomationEventsParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "PurgeOldChatAutomationEvents", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// PurgeOldChatAutomationEvents indicates an expected call of PurgeOldChatAutomationEvents.
func (mr *MockStoreMockRecorder) PurgeOldChatAutomationEvents(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PurgeOldChatAutomationEvents", reflect.TypeOf((*MockStore)(nil).PurgeOldChatAutomationEvents), ctx, arg)
}
// ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate mocks base method.
func (m *MockStore) ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx context.Context, templateID uuid.UUID) error {
m.ctrl.T.Helper()
@@ -7468,11 +7870,12 @@ func (mr *MockStoreMockRecorder) TryAcquireLock(ctx, pgTryAdvisoryXactLock any)
}
// UnarchiveChatByID mocks base method.
func (m *MockStore) UnarchiveChatByID(ctx context.Context, id uuid.UUID) ([]database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UnarchiveChatByID", ctx, id)
ret0, _ := ret[0].([]database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UnarchiveChatByID indicates an expected call of UnarchiveChatByID.
@@ -7509,6 +7912,20 @@ func (mr *MockStoreMockRecorder) UnfavoriteWorkspace(ctx, id any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnfavoriteWorkspace", reflect.TypeOf((*MockStore)(nil).UnfavoriteWorkspace), ctx, id)
}
// UnpinChatByID mocks base method.
func (m *MockStore) UnpinChatByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UnpinChatByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
}
// UnpinChatByID indicates an expected call of UnpinChatByID.
func (mr *MockStoreMockRecorder) UnpinChatByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UnpinChatByID", reflect.TypeOf((*MockStore)(nil).UnpinChatByID), ctx, id)
}
// UnsetDefaultChatModelConfigs mocks base method.
func (m *MockStore) UnsetDefaultChatModelConfigs(ctx context.Context) error {
m.ctrl.T.Helper()
@@ -7552,6 +7969,80 @@ func (mr *MockStoreMockRecorder) UpdateAPIKeyByID(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateAPIKeyByID", reflect.TypeOf((*MockStore)(nil).UpdateAPIKeyByID), ctx, arg)
}
// UpdateChatAutomation mocks base method.
func (m *MockStore) UpdateChatAutomation(ctx context.Context, arg database.UpdateChatAutomationParams) (database.ChatAutomation, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatAutomation", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomation)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatAutomation indicates an expected call of UpdateChatAutomation.
func (mr *MockStoreMockRecorder) UpdateChatAutomation(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatAutomation", reflect.TypeOf((*MockStore)(nil).UpdateChatAutomation), ctx, arg)
}
// UpdateChatAutomationTrigger mocks base method.
func (m *MockStore) UpdateChatAutomationTrigger(ctx context.Context, arg database.UpdateChatAutomationTriggerParams) (database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatAutomationTrigger", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatAutomationTrigger indicates an expected call of UpdateChatAutomationTrigger.
func (mr *MockStoreMockRecorder) UpdateChatAutomationTrigger(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatAutomationTrigger", reflect.TypeOf((*MockStore)(nil).UpdateChatAutomationTrigger), ctx, arg)
}
// UpdateChatAutomationTriggerLastTriggeredAt mocks base method.
func (m *MockStore) UpdateChatAutomationTriggerLastTriggeredAt(ctx context.Context, arg database.UpdateChatAutomationTriggerLastTriggeredAtParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatAutomationTriggerLastTriggeredAt", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// UpdateChatAutomationTriggerLastTriggeredAt indicates an expected call of UpdateChatAutomationTriggerLastTriggeredAt.
func (mr *MockStoreMockRecorder) UpdateChatAutomationTriggerLastTriggeredAt(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatAutomationTriggerLastTriggeredAt", reflect.TypeOf((*MockStore)(nil).UpdateChatAutomationTriggerLastTriggeredAt), ctx, arg)
}
// UpdateChatAutomationTriggerWebhookSecret mocks base method.
func (m *MockStore) UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, arg database.UpdateChatAutomationTriggerWebhookSecretParams) (database.ChatAutomationTrigger, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatAutomationTriggerWebhookSecret", ctx, arg)
ret0, _ := ret[0].(database.ChatAutomationTrigger)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatAutomationTriggerWebhookSecret indicates an expected call of UpdateChatAutomationTriggerWebhookSecret.
func (mr *MockStoreMockRecorder) UpdateChatAutomationTriggerWebhookSecret(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatAutomationTriggerWebhookSecret", reflect.TypeOf((*MockStore)(nil).UpdateChatAutomationTriggerWebhookSecret), ctx, arg)
}
// UpdateChatBuildAgentBinding mocks base method.
func (m *MockStore) UpdateChatBuildAgentBinding(ctx context.Context, arg database.UpdateChatBuildAgentBindingParams) (database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatBuildAgentBinding", ctx, arg)
ret0, _ := ret[0].(database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatBuildAgentBinding indicates an expected call of UpdateChatBuildAgentBinding.
func (mr *MockStoreMockRecorder) UpdateChatBuildAgentBinding(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatBuildAgentBinding", reflect.TypeOf((*MockStore)(nil).UpdateChatBuildAgentBinding), ctx, arg)
}
// UpdateChatByID mocks base method.
func (m *MockStore) UpdateChatByID(ctx context.Context, arg database.UpdateChatByIDParams) (database.Chat, error) {
m.ctrl.T.Helper()
@@ -7597,6 +8088,50 @@ func (mr *MockStoreMockRecorder) UpdateChatLabelsByID(ctx, arg any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatLabelsByID", reflect.TypeOf((*MockStore)(nil).UpdateChatLabelsByID), ctx, arg)
}
// UpdateChatLastInjectedContext mocks base method.
func (m *MockStore) UpdateChatLastInjectedContext(ctx context.Context, arg database.UpdateChatLastInjectedContextParams) (database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatLastInjectedContext", ctx, arg)
ret0, _ := ret[0].(database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatLastInjectedContext indicates an expected call of UpdateChatLastInjectedContext.
func (mr *MockStoreMockRecorder) UpdateChatLastInjectedContext(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatLastInjectedContext", reflect.TypeOf((*MockStore)(nil).UpdateChatLastInjectedContext), ctx, arg)
}
// UpdateChatLastModelConfigByID mocks base method.
func (m *MockStore) UpdateChatLastModelConfigByID(ctx context.Context, arg database.UpdateChatLastModelConfigByIDParams) (database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatLastModelConfigByID", ctx, arg)
ret0, _ := ret[0].(database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatLastModelConfigByID indicates an expected call of UpdateChatLastModelConfigByID.
func (mr *MockStoreMockRecorder) UpdateChatLastModelConfigByID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatLastModelConfigByID", reflect.TypeOf((*MockStore)(nil).UpdateChatLastModelConfigByID), ctx, arg)
}
// UpdateChatLastReadMessageID mocks base method.
func (m *MockStore) UpdateChatLastReadMessageID(ctx context.Context, arg database.UpdateChatLastReadMessageIDParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatLastReadMessageID", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// UpdateChatLastReadMessageID indicates an expected call of UpdateChatLastReadMessageID.
func (mr *MockStoreMockRecorder) UpdateChatLastReadMessageID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatLastReadMessageID", reflect.TypeOf((*MockStore)(nil).UpdateChatLastReadMessageID), ctx, arg)
}
// UpdateChatMCPServerIDs mocks base method.
func (m *MockStore) UpdateChatMCPServerIDs(ctx context.Context, arg database.UpdateChatMCPServerIDsParams) (database.Chat, error) {
m.ctrl.T.Helper()
@@ -7642,6 +8177,20 @@ func (mr *MockStoreMockRecorder) UpdateChatModelConfig(ctx, arg any) *gomock.Cal
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatModelConfig", reflect.TypeOf((*MockStore)(nil).UpdateChatModelConfig), ctx, arg)
}
// UpdateChatPinOrder mocks base method.
func (m *MockStore) UpdateChatPinOrder(ctx context.Context, arg database.UpdateChatPinOrderParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatPinOrder", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// UpdateChatPinOrder indicates an expected call of UpdateChatPinOrder.
func (mr *MockStoreMockRecorder) UpdateChatPinOrder(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatPinOrder", reflect.TypeOf((*MockStore)(nil).UpdateChatPinOrder), ctx, arg)
}
// UpdateChatProvider mocks base method.
func (m *MockStore) UpdateChatProvider(ctx context.Context, arg database.UpdateChatProviderParams) (database.ChatProvider, error) {
m.ctrl.T.Helper()
@@ -7672,19 +8221,34 @@ func (mr *MockStoreMockRecorder) UpdateChatStatus(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatStatus", reflect.TypeOf((*MockStore)(nil).UpdateChatStatus), ctx, arg)
}
// UpdateChatStatusPreserveUpdatedAt mocks base method.
func (m *MockStore) UpdateChatStatusPreserveUpdatedAt(ctx context.Context, arg database.UpdateChatStatusPreserveUpdatedAtParams) (database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatStatusPreserveUpdatedAt", ctx, arg)
ret0, _ := ret[0].(database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatWorkspace indicates an expected call of UpdateChatWorkspace.
func (mr *MockStoreMockRecorder) UpdateChatWorkspace(ctx, arg any) *gomock.Call {
// UpdateChatStatusPreserveUpdatedAt indicates an expected call of UpdateChatStatusPreserveUpdatedAt.
func (mr *MockStoreMockRecorder) UpdateChatStatusPreserveUpdatedAt(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatWorkspace", reflect.TypeOf((*MockStore)(nil).UpdateChatWorkspace), ctx, arg)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatStatusPreserveUpdatedAt", reflect.TypeOf((*MockStore)(nil).UpdateChatStatusPreserveUpdatedAt), ctx, arg)
}
// UpdateChatWorkspaceBinding mocks base method.
func (m *MockStore) UpdateChatWorkspaceBinding(ctx context.Context, arg database.UpdateChatWorkspaceBindingParams) (database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatWorkspaceBinding", ctx, arg)
ret0, _ := ret[0].(database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatWorkspaceBinding indicates an expected call of UpdateChatWorkspaceBinding.
func (mr *MockStoreMockRecorder) UpdateChatWorkspaceBinding(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatWorkspaceBinding", reflect.TypeOf((*MockStore)(nil).UpdateChatWorkspaceBinding), ctx, arg)
}
// UpdateCryptoKeyDeletesAt mocks base method.
@@ -9029,6 +9593,20 @@ func (mr *MockStoreMockRecorder) UpsertChatDiffStatusReference(ctx, arg any) *go
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertChatDiffStatusReference", reflect.TypeOf((*MockStore)(nil).UpsertChatDiffStatusReference), ctx, arg)
}
// UpsertChatIncludeDefaultSystemPrompt mocks base method.
func (m *MockStore) UpsertChatIncludeDefaultSystemPrompt(ctx context.Context, includeDefaultSystemPrompt bool) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpsertChatIncludeDefaultSystemPrompt", ctx, includeDefaultSystemPrompt)
ret0, _ := ret[0].(error)
return ret0
}
// UpsertChatIncludeDefaultSystemPrompt indicates an expected call of UpsertChatIncludeDefaultSystemPrompt.
func (mr *MockStoreMockRecorder) UpsertChatIncludeDefaultSystemPrompt(ctx, includeDefaultSystemPrompt any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertChatIncludeDefaultSystemPrompt", reflect.TypeOf((*MockStore)(nil).UpsertChatIncludeDefaultSystemPrompt), ctx, includeDefaultSystemPrompt)
}
// UpsertChatSystemPrompt mocks base method.
func (m *MockStore) UpsertChatSystemPrompt(ctx context.Context, value string) error {
m.ctrl.T.Helper()
@@ -220,7 +220,12 @@ CREATE TYPE api_key_scope AS ENUM (
'chat:read',
'chat:update',
'chat:delete',
'chat:*'
'chat:*',
'chat_automation:create',
'chat_automation:read',
'chat_automation:update',
'chat_automation:delete',
'chat_automation:*'
);
CREATE TYPE app_sharing_level AS ENUM (
@@ -270,6 +275,32 @@ CREATE TYPE build_reason AS ENUM (
'task_resume'
);
CREATE TYPE chat_automation_event_status AS ENUM (
'filtered',
'preview',
'created',
'continued',
'rate_limited',
'error'
);
COMMENT ON TYPE chat_automation_event_status IS 'Outcome of a chat automation event: filtered, preview, created, continued, rate_limited, or error.';
CREATE TYPE chat_automation_status AS ENUM (
'disabled',
'preview',
'active'
);
COMMENT ON TYPE chat_automation_status IS 'Lifecycle state of a chat automation: disabled, preview, or active.';
CREATE TYPE chat_automation_trigger_type AS ENUM (
'webhook',
'cron'
);
COMMENT ON TYPE chat_automation_trigger_type IS 'Discriminator for chat automation triggers: webhook or cron.';
CREATE TYPE chat_message_role AS ENUM (
'system',
'user',
@@ -1238,6 +1269,104 @@ COMMENT ON COLUMN boundary_usage_stats.window_start IS 'Start of the time window
COMMENT ON COLUMN boundary_usage_stats.updated_at IS 'Timestamp of the last update to this row.';
CREATE TABLE chat_automation_events (
id uuid NOT NULL,
automation_id uuid NOT NULL,
trigger_id uuid,
received_at timestamp with time zone NOT NULL,
payload jsonb NOT NULL,
filter_matched boolean NOT NULL,
resolved_labels jsonb,
matched_chat_id uuid,
created_chat_id uuid,
status chat_automation_event_status NOT NULL,
error text,
CONSTRAINT chat_automation_events_chat_exclusivity CHECK (((matched_chat_id IS NULL) OR (created_chat_id IS NULL)))
);
COMMENT ON TABLE chat_automation_events IS 'Every trigger invocation produces an event row regardless of outcome. This table is the audit trail and the data source for rate-limit window counts. Rows are append-only and expected to be purged by a background job after a retention period.';
COMMENT ON COLUMN chat_automation_events.payload IS 'The raw payload that was evaluated. For webhooks this is the HTTP body; for cron triggers it is a synthetic JSON envelope with schedule metadata.';
COMMENT ON COLUMN chat_automation_events.filter_matched IS 'Whether the trigger filter conditions matched. False means the event was dropped before any chat interaction.';
COMMENT ON COLUMN chat_automation_events.resolved_labels IS 'Labels resolved from the payload via label_paths. Stored so the event log shows exactly which labels were computed.';
COMMENT ON COLUMN chat_automation_events.matched_chat_id IS 'ID of an existing chat that was found via label matching and continued with a new message.';
COMMENT ON COLUMN chat_automation_events.created_chat_id IS 'ID of a newly created chat (mutually exclusive with matched_chat_id in practice).';
COMMENT ON COLUMN chat_automation_events.status IS 'Outcome of the event: filtered — filter did not match; preview — automation is in preview mode; created — new chat was created; continued — existing chat was continued; rate_limited — rate limit prevented chat action; error — something went wrong.';
CREATE TABLE chat_automation_triggers (
id uuid NOT NULL,
automation_id uuid NOT NULL,
type chat_automation_trigger_type NOT NULL,
webhook_secret text,
webhook_secret_key_id text,
cron_schedule text,
last_triggered_at timestamp with time zone,
filter jsonb,
label_paths jsonb,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
CONSTRAINT chat_automation_triggers_cron_fields CHECK (((type <> 'cron'::chat_automation_trigger_type) OR ((cron_schedule IS NOT NULL) AND (webhook_secret IS NULL) AND (webhook_secret_key_id IS NULL)))),
CONSTRAINT chat_automation_triggers_webhook_fields CHECK (((type <> 'webhook'::chat_automation_trigger_type) OR ((webhook_secret IS NOT NULL) AND (cron_schedule IS NULL) AND (last_triggered_at IS NULL))))
);
COMMENT ON TABLE chat_automation_triggers IS 'Triggers define how an automation is invoked. Each automation can have multiple triggers (e.g. one webhook + one cron schedule). Webhook and cron triggers share the same row shape with type-specific nullable columns to keep the schema simple.';
COMMENT ON COLUMN chat_automation_triggers.type IS 'Discriminator: webhook or cron. Determines which nullable columns are meaningful.';
COMMENT ON COLUMN chat_automation_triggers.webhook_secret IS 'HMAC-SHA256 shared secret for webhook signature verification (X-Hub-Signature-256 header). NULL for cron triggers.';
COMMENT ON COLUMN chat_automation_triggers.cron_schedule IS 'Standard 5-field cron expression (minute hour dom month dow), with optional CRON_TZ= prefix. NULL for webhook triggers.';
COMMENT ON COLUMN chat_automation_triggers.last_triggered_at IS 'Timestamp of the last successful cron fire. The scheduler computes next = cron.Next(last_triggered_at) and fires when next <= now. NULL means the trigger has never fired. Not used for webhook triggers.';
COMMENT ON COLUMN chat_automation_triggers.filter IS 'gjson path-to-value filter conditions evaluated against the incoming webhook payload. All conditions must match for the trigger to fire. NULL or empty means match everything.';
COMMENT ON COLUMN chat_automation_triggers.label_paths IS 'Maps chat label keys to gjson paths. When a trigger fires, labels are resolved from the payload and used to find an existing chat to continue (by label match) or set on a newly created chat.';
CREATE TABLE chat_automations (
id uuid NOT NULL,
owner_id uuid NOT NULL,
organization_id uuid NOT NULL,
name text NOT NULL,
description text DEFAULT ''::text NOT NULL,
instructions text DEFAULT ''::text NOT NULL,
model_config_id uuid,
mcp_server_ids uuid[] DEFAULT '{}'::uuid[] NOT NULL,
allowed_tools text[] DEFAULT '{}'::text[] NOT NULL,
status chat_automation_status DEFAULT 'disabled'::chat_automation_status NOT NULL,
max_chat_creates_per_hour integer DEFAULT 10 NOT NULL,
max_messages_per_hour integer DEFAULT 60 NOT NULL,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
CONSTRAINT chat_automations_max_chat_creates_per_hour_check CHECK ((max_chat_creates_per_hour > 0)),
CONSTRAINT chat_automations_max_messages_per_hour_check CHECK ((max_messages_per_hour > 0))
);
COMMENT ON TABLE chat_automations IS 'Chat automations bridge external events (webhooks, cron schedules) to Coder chats. A chat automation defines what to say, which model and tools to use, and how fast it is allowed to create or continue chats.';
COMMENT ON COLUMN chat_automations.owner_id IS 'The user on whose behalf chats are created. All RBAC checks and chat ownership are scoped to this user.';
COMMENT ON COLUMN chat_automations.organization_id IS 'Organization scope for RBAC. Combined with owner_id and name to form a unique constraint so automations are namespaced per user per org.';
COMMENT ON COLUMN chat_automations.instructions IS 'The user-role message injected into every chat this automation creates. This is the core prompt that tells the LLM what to do.';
COMMENT ON COLUMN chat_automations.model_config_id IS 'Optional model configuration override. When NULL the deployment default is used. SET NULL on delete so automations survive config changes gracefully.';
COMMENT ON COLUMN chat_automations.mcp_server_ids IS 'MCP servers to attach to chats created by this automation. Stored as an array of UUIDs rather than a join table because the set is small and always read/written atomically.';
COMMENT ON COLUMN chat_automations.allowed_tools IS 'Tool allowlist. Empty means all tools available to the model config are permitted.';
COMMENT ON COLUMN chat_automations.status IS 'Lifecycle state: disabled — trigger events are silently dropped; preview — events are logged but no chat is created (dry-run); active — events create or continue chats.';
COMMENT ON COLUMN chat_automations.max_chat_creates_per_hour IS 'Maximum number of new chats this automation may create in a rolling one-hour window. Prevents runaway webhook storms from flooding the system.';
COMMENT ON COLUMN chat_automations.max_messages_per_hour IS 'Maximum total messages (creates + continues) this automation may send in a rolling one-hour window. A second, broader throttle that catches high-frequency continuation patterns.';
CREATE TABLE chat_diff_statuses (
chat_id uuid NOT NULL,
url text,
@@ -1399,7 +1528,13 @@ CREATE TABLE chats (
last_error text,
mode chat_mode,
mcp_server_ids uuid[] DEFAULT '{}'::uuid[] NOT NULL,
labels jsonb DEFAULT '{}'::jsonb NOT NULL
labels jsonb DEFAULT '{}'::jsonb NOT NULL,
build_id uuid,
agent_id uuid,
pin_order integer DEFAULT 0 NOT NULL,
last_read_message_id bigint,
last_injected_context jsonb,
automation_id uuid
);
CREATE TABLE connection_logs (
@@ -1703,6 +1838,7 @@ CREATE TABLE mcp_server_configs (
updated_by uuid,
created_at timestamp with time zone DEFAULT now() NOT NULL,
updated_at timestamp with time zone DEFAULT now() NOT NULL,
model_intent boolean DEFAULT false NOT NULL,
CONSTRAINT mcp_server_configs_auth_type_check CHECK ((auth_type = ANY (ARRAY['none'::text, 'oauth2'::text, 'api_key'::text, 'custom_headers'::text]))),
CONSTRAINT mcp_server_configs_availability_check CHECK ((availability = ANY (ARRAY['force_on'::text, 'default_on'::text, 'default_off'::text]))),
CONSTRAINT mcp_server_configs_transport_check CHECK ((transport = ANY (ARRAY['streamable_http'::text, 'sse'::text])))
@@ -3314,6 +3450,15 @@ ALTER TABLE ONLY audit_logs
ALTER TABLE ONLY boundary_usage_stats
ADD CONSTRAINT boundary_usage_stats_pkey PRIMARY KEY (replica_id);
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_pkey PRIMARY KEY (id);
ALTER TABLE ONLY chat_automation_triggers
ADD CONSTRAINT chat_automation_triggers_pkey PRIMARY KEY (id);
ALTER TABLE ONLY chat_automations
ADD CONSTRAINT chat_automations_pkey PRIMARY KEY (id);
ALTER TABLE ONLY chat_diff_statuses
ADD CONSTRAINT chat_diff_statuses_pkey PRIMARY KEY (chat_id);
@@ -3699,6 +3844,20 @@ CREATE INDEX idx_audit_log_user_id ON audit_logs USING btree (user_id);
CREATE INDEX idx_audit_logs_time_desc ON audit_logs USING btree ("time" DESC);
CREATE INDEX idx_chat_automation_events_automation_id_received_at ON chat_automation_events USING btree (automation_id, received_at DESC);
CREATE INDEX idx_chat_automation_events_rate_limit ON chat_automation_events USING btree (automation_id, received_at) WHERE (status = ANY (ARRAY['created'::chat_automation_event_status, 'continued'::chat_automation_event_status]));
CREATE INDEX idx_chat_automation_events_received_at ON chat_automation_events USING btree (received_at);
CREATE INDEX idx_chat_automation_triggers_automation_id ON chat_automation_triggers USING btree (automation_id);
CREATE INDEX idx_chat_automations_organization_id ON chat_automations USING btree (organization_id);
CREATE INDEX idx_chat_automations_owner_id ON chat_automations USING btree (owner_id);
CREATE UNIQUE INDEX idx_chat_automations_owner_org_name ON chat_automations USING btree (owner_id, organization_id, name);
CREATE INDEX idx_chat_diff_statuses_stale_at ON chat_diff_statuses USING btree (stale_at);
CREATE INDEX idx_chat_files_org ON chat_files USING btree (organization_id);
@@ -3727,6 +3886,8 @@ CREATE INDEX idx_chat_providers_enabled ON chat_providers USING btree (enabled);
CREATE INDEX idx_chat_queued_messages_chat_id ON chat_queued_messages USING btree (chat_id);
CREATE INDEX idx_chats_automation_id ON chats USING btree (automation_id);
CREATE INDEX idx_chats_labels ON chats USING gin (labels);
CREATE INDEX idx_chats_last_model_config_id ON chats USING btree (last_model_config_id);
@@ -4000,6 +4161,33 @@ ALTER TABLE ONLY aibridge_interceptions
ALTER TABLE ONLY api_keys
ADD CONSTRAINT api_keys_user_id_uuid_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_created_chat_id_fkey FOREIGN KEY (created_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_matched_chat_id_fkey FOREIGN KEY (matched_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
ALTER TABLE ONLY chat_automation_events
ADD CONSTRAINT chat_automation_events_trigger_id_fkey FOREIGN KEY (trigger_id) REFERENCES chat_automation_triggers(id) ON DELETE SET NULL;
ALTER TABLE ONLY chat_automation_triggers
ADD CONSTRAINT chat_automation_triggers_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_automation_triggers
ADD CONSTRAINT chat_automation_triggers_webhook_secret_key_id_fkey FOREIGN KEY (webhook_secret_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ALTER TABLE ONLY chat_automations
ADD CONSTRAINT chat_automations_model_config_id_fkey FOREIGN KEY (model_config_id) REFERENCES chat_model_configs(id) ON DELETE SET NULL;
ALTER TABLE ONLY chat_automations
ADD CONSTRAINT chat_automations_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_automations
ADD CONSTRAINT chat_automations_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_diff_statuses
ADD CONSTRAINT chat_diff_statuses_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
@@ -4033,6 +4221,15 @@ ALTER TABLE ONLY chat_providers
ALTER TABLE ONLY chat_queued_messages
ADD CONSTRAINT chat_queued_messages_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ALTER TABLE ONLY chats
ADD CONSTRAINT chats_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE SET NULL;
ALTER TABLE ONLY chats
ADD CONSTRAINT chats_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE SET NULL;
ALTER TABLE ONLY chats
ADD CONSTRAINT chats_build_id_fkey FOREIGN KEY (build_id) REFERENCES workspace_builds(id) ON DELETE SET NULL;
ALTER TABLE ONLY chats
ADD CONSTRAINT chats_last_model_config_id_fkey FOREIGN KEY (last_model_config_id) REFERENCES chat_model_configs(id);
@@ -9,6 +9,15 @@ const (
ForeignKeyAiSeatStateUserID ForeignKeyConstraint = "ai_seat_state_user_id_fkey" // ALTER TABLE ONLY ai_seat_state ADD CONSTRAINT ai_seat_state_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyAibridgeInterceptionsInitiatorID ForeignKeyConstraint = "aibridge_interceptions_initiator_id_fkey" // ALTER TABLE ONLY aibridge_interceptions ADD CONSTRAINT aibridge_interceptions_initiator_id_fkey FOREIGN KEY (initiator_id) REFERENCES users(id);
ForeignKeyAPIKeysUserIDUUID ForeignKeyConstraint = "api_keys_user_id_uuid_fkey" // ALTER TABLE ONLY api_keys ADD CONSTRAINT api_keys_user_id_uuid_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyChatAutomationEventsAutomationID ForeignKeyConstraint = "chat_automation_events_automation_id_fkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE;
ForeignKeyChatAutomationEventsCreatedChatID ForeignKeyConstraint = "chat_automation_events_created_chat_id_fkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_created_chat_id_fkey FOREIGN KEY (created_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
ForeignKeyChatAutomationEventsMatchedChatID ForeignKeyConstraint = "chat_automation_events_matched_chat_id_fkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_matched_chat_id_fkey FOREIGN KEY (matched_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
ForeignKeyChatAutomationEventsTriggerID ForeignKeyConstraint = "chat_automation_events_trigger_id_fkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_trigger_id_fkey FOREIGN KEY (trigger_id) REFERENCES chat_automation_triggers(id) ON DELETE SET NULL;
ForeignKeyChatAutomationTriggersAutomationID ForeignKeyConstraint = "chat_automation_triggers_automation_id_fkey" // ALTER TABLE ONLY chat_automation_triggers ADD CONSTRAINT chat_automation_triggers_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE;
ForeignKeyChatAutomationTriggersWebhookSecretKeyID ForeignKeyConstraint = "chat_automation_triggers_webhook_secret_key_id_fkey" // ALTER TABLE ONLY chat_automation_triggers ADD CONSTRAINT chat_automation_triggers_webhook_secret_key_id_fkey FOREIGN KEY (webhook_secret_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ForeignKeyChatAutomationsModelConfigID ForeignKeyConstraint = "chat_automations_model_config_id_fkey" // ALTER TABLE ONLY chat_automations ADD CONSTRAINT chat_automations_model_config_id_fkey FOREIGN KEY (model_config_id) REFERENCES chat_model_configs(id) ON DELETE SET NULL;
ForeignKeyChatAutomationsOrganizationID ForeignKeyConstraint = "chat_automations_organization_id_fkey" // ALTER TABLE ONLY chat_automations ADD CONSTRAINT chat_automations_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ForeignKeyChatAutomationsOwnerID ForeignKeyConstraint = "chat_automations_owner_id_fkey" // ALTER TABLE ONLY chat_automations ADD CONSTRAINT chat_automations_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyChatDiffStatusesChatID ForeignKeyConstraint = "chat_diff_statuses_chat_id_fkey" // ALTER TABLE ONLY chat_diff_statuses ADD CONSTRAINT chat_diff_statuses_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ForeignKeyChatFilesOrganizationID ForeignKeyConstraint = "chat_files_organization_id_fkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ForeignKeyChatFilesOwnerID ForeignKeyConstraint = "chat_files_owner_id_fkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
@@ -20,6 +29,9 @@ const (
ForeignKeyChatProvidersAPIKeyKeyID ForeignKeyConstraint = "chat_providers_api_key_key_id_fkey" // ALTER TABLE ONLY chat_providers ADD CONSTRAINT chat_providers_api_key_key_id_fkey FOREIGN KEY (api_key_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ForeignKeyChatProvidersCreatedBy ForeignKeyConstraint = "chat_providers_created_by_fkey" // ALTER TABLE ONLY chat_providers ADD CONSTRAINT chat_providers_created_by_fkey FOREIGN KEY (created_by) REFERENCES users(id);
ForeignKeyChatQueuedMessagesChatID ForeignKeyConstraint = "chat_queued_messages_chat_id_fkey" // ALTER TABLE ONLY chat_queued_messages ADD CONSTRAINT chat_queued_messages_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ForeignKeyChatsAgentID ForeignKeyConstraint = "chats_agent_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE SET NULL;
ForeignKeyChatsAutomationID ForeignKeyConstraint = "chats_automation_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_automation_id_fkey FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE SET NULL;
ForeignKeyChatsBuildID ForeignKeyConstraint = "chats_build_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_build_id_fkey FOREIGN KEY (build_id) REFERENCES workspace_builds(id) ON DELETE SET NULL;
ForeignKeyChatsLastModelConfigID ForeignKeyConstraint = "chats_last_model_config_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_last_model_config_id_fkey FOREIGN KEY (last_model_config_id) REFERENCES chat_model_configs(id);
ForeignKeyChatsOwnerID ForeignKeyConstraint = "chats_owner_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyChatsParentChatID ForeignKeyConstraint = "chats_parent_chat_id_fkey" // ALTER TABLE ONLY chats ADD CONSTRAINT chats_parent_chat_id_fkey FOREIGN KEY (parent_chat_id) REFERENCES chats(id) ON DELETE SET NULL;
@@ -27,6 +27,7 @@ func TestCustomQueriesSyncedRowScan(t *testing.T) {
"GetWorkspaces": "GetAuthorizedWorkspaces",
"GetUsers": "GetAuthorizedUsers",
"GetChats": "GetAuthorizedChats",
"GetChatAutomations": "GetAuthorizedChatAutomations",
}
// Scan custom
@@ -15,6 +15,9 @@ const (
LockIDReconcilePrebuilds
LockIDReconcileSystemRoles
LockIDBoundaryUsageStats
// LockIDChatAutomationCron prevents concurrent cron trigger
// evaluation across coderd replicas.
LockIDChatAutomationCron
)
// GenLockID generates a unique and consistent lock ID from a given string.
@@ -0,0 +1,3 @@
ALTER TABLE chats
DROP COLUMN IF EXISTS build_id,
DROP COLUMN IF EXISTS agent_id;
@@ -0,0 +1,3 @@
ALTER TABLE chats
ADD COLUMN build_id UUID REFERENCES workspace_builds(id) ON DELETE SET NULL,
ADD COLUMN agent_id UUID REFERENCES workspace_agents(id) ON DELETE SET NULL;
@@ -0,0 +1 @@
ALTER TABLE chats DROP COLUMN pin_order;
@@ -0,0 +1 @@
ALTER TABLE chats ADD COLUMN pin_order integer DEFAULT 0 NOT NULL;
@@ -0,0 +1 @@
ALTER TABLE mcp_server_configs DROP COLUMN model_intent;
@@ -0,0 +1,2 @@
ALTER TABLE mcp_server_configs
ADD COLUMN model_intent BOOLEAN NOT NULL DEFAULT false;
@@ -0,0 +1 @@
ALTER TABLE chats DROP COLUMN last_read_message_id;
@@ -0,0 +1,9 @@
ALTER TABLE chats ADD COLUMN last_read_message_id BIGINT;
-- Backfill existing chats so they don't appear unread after deploy.
-- The has_unread query uses COALESCE(last_read_message_id, 0), so
-- leaving this NULL would mark every existing chat as unread.
UPDATE chats SET last_read_message_id = (
SELECT MAX(cm.id) FROM chat_messages cm
WHERE cm.chat_id = chats.id AND cm.role = 'assistant' AND cm.deleted = false
);
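The backfill comment above explains that the unread check uses COALESCE(last_read_message_id, 0) against the newest assistant message. A small sketch of that comparison (function and parameter names are assumptions for illustration, not the actual query helpers):

```go
package main

import "fmt"

// hasUnread mirrors the COALESCE(last_read_message_id, 0) logic from
// the migration comment: a chat is unread when its newest assistant
// message ID exceeds the last-read watermark, and a NULL watermark
// coalesces to 0 (which is why the backfill above is needed).
func hasUnread(latestAssistantMsgID int64, lastReadMessageID *int64) bool {
	var lastRead int64 // zero value stands in for SQL NULL → 0
	if lastReadMessageID != nil {
		lastRead = *lastReadMessageID
	}
	return latestAssistantMsgID > lastRead
}

func main() {
	read := int64(100)
	fmt.Println(hasUnread(120, &read)) // true: messages newer than watermark
	fmt.Println(hasUnread(100, &read)) // false: fully read
	fmt.Println(hasUnread(5, nil))     // true: NULL coalesces to 0
}
```

Without the backfill, every pre-existing chat with any assistant message would satisfy the NULL branch and show as unread after deploy.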
@@ -0,0 +1 @@
ALTER TABLE chats DROP COLUMN last_injected_context;
@@ -0,0 +1 @@
ALTER TABLE chats ADD COLUMN last_injected_context JSONB;
@@ -0,0 +1,4 @@
-- Remove 'agents-access' from all users who have it.
UPDATE users
SET rbac_roles = array_remove(rbac_roles, 'agents-access')
WHERE 'agents-access' = ANY(rbac_roles);
@@ -0,0 +1,5 @@
-- Grant 'agents-access' to every user who has ever created a chat.
UPDATE users
SET rbac_roles = array_append(rbac_roles, 'agents-access')
WHERE id IN (SELECT DISTINCT owner_id FROM chats)
AND NOT ('agents-access' = ANY(rbac_roles));
@@ -0,0 +1,13 @@
ALTER TABLE chats DROP COLUMN IF EXISTS automation_id;
DROP TABLE IF EXISTS chat_automation_events;
DROP TABLE IF EXISTS chat_automation_triggers;
DROP TABLE IF EXISTS chat_automations;
DROP TYPE IF EXISTS chat_automation_event_status;
DROP TYPE IF EXISTS chat_automation_trigger_type;
DROP TYPE IF EXISTS chat_automation_status;
@@ -0,0 +1,238 @@
-- Chat automations bridge external events (webhooks, cron schedules) to
-- Coder chats. A chat automation defines *what* to say, *which* model
-- and tools to use, and *how fast* it is allowed to create or continue
-- chats.
CREATE TYPE chat_automation_status AS ENUM ('disabled', 'preview', 'active');
CREATE TYPE chat_automation_trigger_type AS ENUM ('webhook', 'cron');
CREATE TYPE chat_automation_event_status AS ENUM ('filtered', 'preview', 'created', 'continued', 'rate_limited', 'error');
CREATE TABLE chat_automations (
id uuid NOT NULL,
-- The user on whose behalf chats are created. All RBAC checks and
-- chat ownership are scoped to this user.
owner_id uuid NOT NULL,
-- Organization scope for RBAC. Combined with owner_id and name to
-- form a unique constraint so automations are namespaced per user
-- per org.
organization_id uuid NOT NULL,
-- Human-readable identifier. Unique within (owner_id, organization_id).
name text NOT NULL,
-- Optional long-form description shown in the UI.
description text NOT NULL DEFAULT '',
-- The user-role message injected into every chat this automation
-- creates. This is the core prompt that tells the LLM what to do.
instructions text NOT NULL DEFAULT '',
-- Optional model configuration override. When NULL the deployment
-- default is used. SET NULL on delete so automations survive config
-- changes gracefully.
model_config_id uuid,
-- MCP servers to attach to chats created by this automation.
-- Stored as an array of UUIDs rather than a join table because
-- the set is small and always read/written atomically.
mcp_server_ids uuid[] NOT NULL DEFAULT '{}',
-- Tool allowlist. Empty means all tools available to the model
-- config are permitted.
allowed_tools text[] NOT NULL DEFAULT '{}',
-- Lifecycle state:
-- disabled — trigger events are silently dropped.
-- preview — events are logged but no chat is created (dry-run).
-- active — events create or continue chats.
status chat_automation_status NOT NULL DEFAULT 'disabled',
-- Maximum number of *new* chats this automation may create in a
-- rolling one-hour window. Prevents runaway webhook storms from
-- flooding the system. Approximate under concurrency; the
-- check-then-insert is not serialized, so brief bursts may
-- slightly exceed the cap.
max_chat_creates_per_hour integer NOT NULL DEFAULT 10,
-- Maximum total messages (creates + continues) this automation may
-- send in a rolling one-hour window. A second, broader throttle
-- that catches high-frequency continuation patterns. Same
-- approximate-under-concurrency caveat as above.
max_messages_per_hour integer NOT NULL DEFAULT 60,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE,
FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE,
FOREIGN KEY (model_config_id) REFERENCES chat_model_configs(id) ON DELETE SET NULL,
CONSTRAINT chat_automations_max_chat_creates_per_hour_check CHECK (max_chat_creates_per_hour > 0),
CONSTRAINT chat_automations_max_messages_per_hour_check CHECK (max_messages_per_hour > 0)
);
CREATE INDEX idx_chat_automations_owner_id ON chat_automations (owner_id);
CREATE INDEX idx_chat_automations_organization_id ON chat_automations (organization_id);
-- Enforces that automation names are unique per user per org so they
-- can be referenced unambiguously in CLI/API calls.
CREATE UNIQUE INDEX idx_chat_automations_owner_org_name ON chat_automations (owner_id, organization_id, name);
-- Triggers define *how* an automation is invoked. Each automation can
-- have multiple triggers (e.g. one webhook + one cron schedule).
-- Webhook and cron triggers share the same row shape with type-specific
-- nullable columns to keep the schema simple.
CREATE TABLE chat_automation_triggers (
id uuid NOT NULL,
-- Parent automation. CASCADE delete ensures orphan triggers are
-- cleaned up when an automation is removed.
automation_id uuid NOT NULL,
-- Discriminator: 'webhook' or 'cron'. Determines which nullable
-- columns are meaningful.
type chat_automation_trigger_type NOT NULL,
-- HMAC-SHA256 shared secret for webhook signature verification
-- (X-Hub-Signature-256 header). NULL for cron triggers.
webhook_secret text,
-- Identifier of the dbcrypt key used to encrypt webhook_secret.
-- NULL means the secret is not yet encrypted. When dbcrypt is
-- enabled, this references the active key digest used for
-- AES-256-GCM encryption.
webhook_secret_key_id text REFERENCES dbcrypt_keys(active_key_digest),
-- Standard 5-field cron expression (minute hour dom month dow),
-- with optional CRON_TZ= prefix. NULL for webhook triggers.
cron_schedule text,
-- Timestamp of the last successful cron fire. The scheduler
-- computes next = cron.Next(last_triggered_at) and fires when
-- next <= now. NULL means the trigger has never fired; the
-- scheduler falls back to created_at as the reference time.
-- Not used for webhook triggers.
last_triggered_at timestamp with time zone,
-- gjson path→value filter conditions evaluated against the
-- incoming webhook payload. All conditions must match for the
-- trigger to fire. NULL or empty means "match everything".
filter jsonb,
-- Maps chat label keys to gjson paths. When a trigger fires,
-- labels are resolved from the payload and used to find an
-- existing chat to continue (by label match) or set on a
-- newly created chat. This is how automations route events
-- to the right conversation.
label_paths jsonb,
created_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE,
CONSTRAINT chat_automation_triggers_webhook_fields CHECK (
type != 'webhook' OR (webhook_secret IS NOT NULL AND cron_schedule IS NULL AND last_triggered_at IS NULL)
),
CONSTRAINT chat_automation_triggers_cron_fields CHECK (
type != 'cron' OR (cron_schedule IS NOT NULL AND webhook_secret IS NULL AND webhook_secret_key_id IS NULL)
)
);
CREATE INDEX idx_chat_automation_triggers_automation_id ON chat_automation_triggers (automation_id);
-- Every trigger invocation produces an event row regardless of outcome.
-- This table is the audit trail and the data source for rate-limit
-- window counts. Rows are append-only and expected to be purged by a
-- background job after a retention period.
CREATE TABLE chat_automation_events (
id uuid NOT NULL,
-- The automation that owns this event.
automation_id uuid NOT NULL,
-- The trigger that produced this event. SET NULL on delete so
-- historical events survive trigger removal.
trigger_id uuid,
-- When the event was received (webhook delivery time or cron
-- evaluation time). Used for rate-limit window calculations and
-- purge cutoffs.
received_at timestamp with time zone NOT NULL,
-- The raw payload that was evaluated. For webhooks this is the
-- HTTP body; for cron triggers it is a synthetic JSON envelope
-- with schedule metadata.
payload jsonb NOT NULL,
-- Whether the trigger's filter conditions matched. False means
-- the event was dropped before any chat interaction.
filter_matched boolean NOT NULL,
-- Labels resolved from the payload via label_paths. Stored so
-- the event log shows exactly which labels were computed.
resolved_labels jsonb,
-- ID of an existing chat that was found via label matching and
-- continued with a new message.
matched_chat_id uuid,
-- ID of a newly created chat (mutually exclusive with
-- matched_chat_id in practice).
created_chat_id uuid,
-- Outcome of the event:
-- filtered — filter did not match, event dropped.
-- preview — automation is in preview mode, no chat action.
-- created — new chat was created.
-- continued — existing chat was continued.
-- rate_limited — rate limit prevented chat action.
-- error — something went wrong (see error column).
status chat_automation_event_status NOT NULL,
-- Human-readable error description when status = 'error' or
-- 'rate_limited'. NULL for successful outcomes.
error text,
PRIMARY KEY (id),
FOREIGN KEY (automation_id) REFERENCES chat_automations(id) ON DELETE CASCADE,
FOREIGN KEY (trigger_id) REFERENCES chat_automation_triggers(id) ON DELETE SET NULL,
FOREIGN KEY (matched_chat_id) REFERENCES chats(id) ON DELETE SET NULL,
FOREIGN KEY (created_chat_id) REFERENCES chats(id) ON DELETE SET NULL,
CONSTRAINT chat_automation_events_chat_exclusivity CHECK (
matched_chat_id IS NULL OR created_chat_id IS NULL
)
);
-- Composite index for listing events per automation in reverse
-- chronological order (the primary UI query pattern).
CREATE INDEX idx_chat_automation_events_automation_id_received_at ON chat_automation_events (automation_id, received_at DESC);
-- Standalone index on received_at for the purge job, which deletes
-- events older than the retention period across all automations.
CREATE INDEX idx_chat_automation_events_received_at ON chat_automation_events (received_at);
-- Partial index for rate-limit window count queries, which filter
-- by automation_id and status IN ('created', 'continued').
CREATE INDEX idx_chat_automation_events_rate_limit
ON chat_automation_events (automation_id, received_at)
WHERE status IN ('created', 'continued');
-- Link chats back to the automation that created them. SET NULL on
-- delete so chats survive if the automation is removed. Indexed for
-- lookup queries that list chats spawned by a given automation.
ALTER TABLE chats ADD COLUMN automation_id uuid REFERENCES chat_automations(id) ON DELETE SET NULL;
CREATE INDEX idx_chats_automation_id ON chats (automation_id);
-- Enum type comments.
COMMENT ON TYPE chat_automation_status IS 'Lifecycle state of a chat automation: disabled, preview, or active.';
COMMENT ON TYPE chat_automation_trigger_type IS 'Discriminator for chat automation triggers: webhook or cron.';
COMMENT ON TYPE chat_automation_event_status IS 'Outcome of a chat automation event: filtered, preview, created, continued, rate_limited, or error.';
-- Table comments.
COMMENT ON TABLE chat_automations IS 'Chat automations bridge external events (webhooks, cron schedules) to Coder chats. A chat automation defines what to say, which model and tools to use, and how fast it is allowed to create or continue chats.';
COMMENT ON TABLE chat_automation_triggers IS 'Triggers define how an automation is invoked. Each automation can have multiple triggers (e.g. one webhook + one cron schedule). Webhook and cron triggers share the same row shape with type-specific nullable columns to keep the schema simple.';
COMMENT ON TABLE chat_automation_events IS 'Every trigger invocation produces an event row regardless of outcome. This table is the audit trail and the data source for rate-limit window counts. Rows are append-only and expected to be purged by a background job after a retention period.';
-- Column comments for chat_automations.
COMMENT ON COLUMN chat_automations.owner_id IS 'The user on whose behalf chats are created. All RBAC checks and chat ownership are scoped to this user.';
COMMENT ON COLUMN chat_automations.organization_id IS 'Organization scope for RBAC. Combined with owner_id and name to form a unique constraint so automations are namespaced per user per org.';
COMMENT ON COLUMN chat_automations.instructions IS 'The user-role message injected into every chat this automation creates. This is the core prompt that tells the LLM what to do.';
COMMENT ON COLUMN chat_automations.model_config_id IS 'Optional model configuration override. When NULL the deployment default is used. SET NULL on delete so automations survive config changes gracefully.';
COMMENT ON COLUMN chat_automations.mcp_server_ids IS 'MCP servers to attach to chats created by this automation. Stored as an array of UUIDs rather than a join table because the set is small and always read/written atomically.';
COMMENT ON COLUMN chat_automations.allowed_tools IS 'Tool allowlist. Empty means all tools available to the model config are permitted.';
COMMENT ON COLUMN chat_automations.status IS 'Lifecycle state: disabled — trigger events are silently dropped; preview — events are logged but no chat is created (dry-run); active — events create or continue chats.';
COMMENT ON COLUMN chat_automations.max_chat_creates_per_hour IS 'Maximum number of new chats this automation may create in a rolling one-hour window. Prevents runaway webhook storms from flooding the system.';
COMMENT ON COLUMN chat_automations.max_messages_per_hour IS 'Maximum total messages (creates + continues) this automation may send in a rolling one-hour window. A second, broader throttle that catches high-frequency continuation patterns.';
-- Column comments for chat_automation_triggers.
COMMENT ON COLUMN chat_automation_triggers.type IS 'Discriminator: webhook or cron. Determines which nullable columns are meaningful.';
COMMENT ON COLUMN chat_automation_triggers.webhook_secret IS 'HMAC-SHA256 shared secret for webhook signature verification (X-Hub-Signature-256 header). NULL for cron triggers.';
COMMENT ON COLUMN chat_automation_triggers.cron_schedule IS 'Standard 5-field cron expression (minute hour dom month dow), with optional CRON_TZ= prefix. NULL for webhook triggers.';
COMMENT ON COLUMN chat_automation_triggers.filter IS 'gjson path-to-value filter conditions evaluated against the incoming webhook payload. All conditions must match for the trigger to fire. NULL or empty means match everything.';
COMMENT ON COLUMN chat_automation_triggers.label_paths IS 'Maps chat label keys to gjson paths. When a trigger fires, labels are resolved from the payload and used to find an existing chat to continue (by label match) or set on a newly created chat.';
COMMENT ON COLUMN chat_automation_triggers.last_triggered_at IS 'Timestamp of the last successful cron fire. The scheduler computes next = cron.Next(last_triggered_at) and fires when next <= now. NULL means the trigger has never fired. Not used for webhook triggers.';
-- Column comments for chat_automation_events.
COMMENT ON COLUMN chat_automation_events.payload IS 'The raw payload that was evaluated. For webhooks this is the HTTP body; for cron triggers it is a synthetic JSON envelope with schedule metadata.';
COMMENT ON COLUMN chat_automation_events.filter_matched IS 'Whether the trigger filter conditions matched. False means the event was dropped before any chat interaction.';
COMMENT ON COLUMN chat_automation_events.resolved_labels IS 'Labels resolved from the payload via label_paths. Stored so the event log shows exactly which labels were computed.';
COMMENT ON COLUMN chat_automation_events.matched_chat_id IS 'ID of an existing chat that was found via label matching and continued with a new message.';
COMMENT ON COLUMN chat_automation_events.created_chat_id IS 'ID of a newly created chat (mutually exclusive with matched_chat_id in practice).';
COMMENT ON COLUMN chat_automation_events.status IS 'Outcome of the event: filtered — filter did not match; preview — automation is in preview mode; created — new chat was created; continued — existing chat was continued; rate_limited — rate limit prevented chat action; error — something went wrong.';
-- Add API key scope values for the new chat_automation resource type.
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:create';
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:read';
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:update';
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:delete';
ALTER TYPE api_key_scope ADD VALUE IF NOT EXISTS 'chat_automation:*';
@@ -877,3 +877,149 @@ func TestMigration000387MigrateTaskWorkspaces(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 0, antCount, "antagonist workspaces (deleted and regular) should not be migrated")
}
func TestMigration000457ChatAccessRole(t *testing.T) {
t.Parallel()
const migrationVersion = 457
sqlDB := testSQLDB(t)
// Migrate up to the migration before the one that grants
// agents-access roles.
next, err := migrations.Stepper(sqlDB)
require.NoError(t, err)
for {
version, more, err := next()
require.NoError(t, err)
if !more {
t.Fatalf("migration %d not found", migrationVersion)
}
if version == migrationVersion-1 {
break
}
}
ctx := testutil.Context(t, testutil.WaitSuperLong)
// Define test users.
userWithChat := uuid.New() // Has a chat, no agents-access role.
userAlreadyHasRole := uuid.New() // Has a chat and already has agents-access.
userNoChat := uuid.New() // No chat at all.
userWithChatAndRoles := uuid.New() // Has a chat and other existing roles.
now := time.Now().UTC().Truncate(time.Microsecond)
// We need a chat_provider and chat_model_config for the chats FK.
providerID := uuid.New()
modelConfigID := uuid.New()
tx, err := sqlDB.BeginTx(ctx, nil)
require.NoError(t, err)
defer tx.Rollback()
fixtures := []struct {
query string
args []any
}{
// Insert test users with varying rbac_roles.
{
`INSERT INTO users (id, username, email, hashed_password, created_at, updated_at, status, rbac_roles, login_type)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{userWithChat, "user-with-chat", "chat@test.com", []byte{}, now, now, "active", pq.StringArray{}, "password"},
},
{
`INSERT INTO users (id, username, email, hashed_password, created_at, updated_at, status, rbac_roles, login_type)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{userAlreadyHasRole, "user-already-has-role", "already@test.com", []byte{}, now, now, "active", pq.StringArray{"agents-access"}, "password"},
},
{
`INSERT INTO users (id, username, email, hashed_password, created_at, updated_at, status, rbac_roles, login_type)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{userNoChat, "user-no-chat", "nochat@test.com", []byte{}, now, now, "active", pq.StringArray{}, "password"},
},
{
`INSERT INTO users (id, username, email, hashed_password, created_at, updated_at, status, rbac_roles, login_type)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{userWithChatAndRoles, "user-with-roles", "roles@test.com", []byte{}, now, now, "active", pq.StringArray{"template-admin"}, "password"},
},
// Insert a chat provider and model config for the chats FK.
{
`INSERT INTO chat_providers (id, provider, display_name, api_key, enabled, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, $7)`,
[]any{providerID, "openai", "OpenAI", "", true, now, now},
},
{
`INSERT INTO chat_model_configs (id, provider, model, display_name, enabled, context_limit, compression_threshold, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)`,
[]any{modelConfigID, "openai", "gpt-4", "GPT 4", true, 100000, 70, now, now},
},
// Insert chats for users A, B, and D (not C).
{
`INSERT INTO chats (id, owner_id, last_model_config_id, title, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6)`,
[]any{uuid.New(), userWithChat, modelConfigID, "Chat A", now, now},
},
{
`INSERT INTO chats (id, owner_id, last_model_config_id, title, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6)`,
[]any{uuid.New(), userAlreadyHasRole, modelConfigID, "Chat B", now, now},
},
{
`INSERT INTO chats (id, owner_id, last_model_config_id, title, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6)`,
[]any{uuid.New(), userWithChatAndRoles, modelConfigID, "Chat D", now, now},
},
}
for i, f := range fixtures {
_, err := tx.ExecContext(ctx, f.query, f.args...)
require.NoError(t, err, "fixture %d", i)
}
require.NoError(t, tx.Commit())
// Run the migration.
version, _, err := next()
require.NoError(t, err)
require.EqualValues(t, migrationVersion, version)
// Helper to get rbac_roles for a user.
getRoles := func(t *testing.T, userID uuid.UUID) []string {
t.Helper()
var roles pq.StringArray
err := sqlDB.QueryRowContext(ctx,
"SELECT rbac_roles FROM users WHERE id = $1", userID,
).Scan(&roles)
require.NoError(t, err)
return roles
}
// Verify: user with chat gets agents-access.
roles := getRoles(t, userWithChat)
require.Contains(t, roles, "agents-access",
"user with chat should get agents-access")
// Verify: user who already had agents-access has no duplicate.
roles = getRoles(t, userAlreadyHasRole)
count := 0
for _, r := range roles {
if r == "agents-access" {
count++
}
}
require.Equal(t, 1, count,
"user who already had agents-access should not get a duplicate")
// Verify: user without chat does NOT get agents-access.
roles = getRoles(t, userNoChat)
require.NotContains(t, roles, "agents-access",
"user without chat should not get agents-access")
// Verify: user with chat and existing roles gets agents-access
// appended while preserving existing roles.
roles = getRoles(t, userWithChatAndRoles)
require.Contains(t, roles, "agents-access",
"user with chat and other roles should get agents-access")
require.Contains(t, roles, "template-admin",
"existing roles should be preserved")
}
@@ -0,0 +1,87 @@
INSERT INTO chat_automations (
id,
owner_id,
organization_id,
name,
description,
instructions,
model_config_id,
mcp_server_ids,
allowed_tools,
status,
max_chat_creates_per_hour,
max_messages_per_hour,
created_at,
updated_at
)
SELECT
'b3d0fd0e-8e1a-4f2c-9a3b-1234567890ab',
u.id,
o.id,
'fixture-automation',
'Fixture automation for migration testing.',
'You are a helpful assistant.',
NULL,
'{}',
'{}',
'active',
10,
60,
'2024-01-01 00:00:00+00',
'2024-01-01 00:00:00+00'
FROM users u
CROSS JOIN organizations o
ORDER BY u.created_at, u.id
LIMIT 1;
INSERT INTO chat_automation_triggers (
id,
automation_id,
type,
webhook_secret,
webhook_secret_key_id,
cron_schedule,
last_triggered_at,
filter,
label_paths,
created_at,
updated_at
) VALUES (
'c4e1fe1f-9f2b-4a3d-ab4c-234567890abc',
'b3d0fd0e-8e1a-4f2c-9a3b-1234567890ab',
'webhook',
'whsec_fixture_secret',
NULL,
NULL,
NULL,
'{"action": "opened"}'::jsonb,
'{"repo": "repository.full_name"}'::jsonb,
'2024-01-01 00:00:00+00',
'2024-01-01 00:00:00+00'
);
INSERT INTO chat_automation_events (
id,
automation_id,
trigger_id,
received_at,
payload,
filter_matched,
resolved_labels,
matched_chat_id,
created_chat_id,
status,
error
) VALUES (
'd5f20f20-a03c-4b4e-bc5d-345678901bcd',
'b3d0fd0e-8e1a-4f2c-9a3b-1234567890ab',
'c4e1fe1f-9f2b-4a3d-ab4c-234567890abc',
'2024-01-01 00:00:00+00',
'{"action": "opened", "repository": {"full_name": "coder/coder"}}'::jsonb,
TRUE,
'{"repo": "coder/coder"}'::jsonb,
NULL,
NULL,
'preview',
NULL
);
@@ -178,6 +178,17 @@ func (c Chat) RBACObject() rbac.Object {
return rbac.ResourceChat.WithID(c.ID).WithOwner(c.OwnerID.String())
}
func (r GetChatsRow) RBACObject() rbac.Object {
return r.Chat.RBACObject()
}
func (a ChatAutomation) RBACObject() rbac.Object {
return rbac.ResourceChatAutomation.
WithID(a.ID).
WithOwner(a.OwnerID.String()).
InOrg(a.OrganizationID)
}
func (c ChatFile) RBACObject() rbac.Object {
return rbac.ResourceChat.WithID(c.ID).WithOwner(c.OwnerID.String()).InOrg(c.OrganizationID)
}
@@ -53,6 +53,7 @@ type customQuerier interface {
connectionLogQuerier
aibridgeQuerier
chatQuerier
chatAutomationQuerier
}
type templateQuerier interface {
@@ -741,10 +742,10 @@ func (q *sqlQuerier) CountAuthorizedConnectionLogs(ctx context.Context, arg Coun
}
type chatQuerier interface {
GetAuthorizedChats(ctx context.Context, arg GetChatsParams, prepared rbac.PreparedAuthorized) ([]Chat, error)
GetAuthorizedChats(ctx context.Context, arg GetChatsParams, prepared rbac.PreparedAuthorized) ([]GetChatsRow, error)
}
func (q *sqlQuerier) GetAuthorizedChats(ctx context.Context, arg GetChatsParams, prepared rbac.PreparedAuthorized) ([]Chat, error) {
func (q *sqlQuerier) GetAuthorizedChats(ctx context.Context, arg GetChatsParams, prepared rbac.PreparedAuthorized) ([]GetChatsRow, error) {
authorizedFilter, err := prepared.CompileToSQL(ctx, rbac.ConfigChats())
if err != nil {
return nil, xerrors.Errorf("compile authorized filter: %w", err)
@@ -769,28 +770,96 @@ func (q *sqlQuerier) GetAuthorizedChats(ctx context.Context, arg GetChatsParams,
return nil, err
}
defer rows.Close()
var items []Chat
var items []GetChatsRow
for rows.Next() {
var i Chat
var i GetChatsRow
if err := rows.Scan(
&i.Chat.ID,
&i.Chat.OwnerID,
&i.Chat.WorkspaceID,
&i.Chat.Title,
&i.Chat.Status,
&i.Chat.WorkerID,
&i.Chat.StartedAt,
&i.Chat.HeartbeatAt,
&i.Chat.CreatedAt,
&i.Chat.UpdatedAt,
&i.Chat.ParentChatID,
&i.Chat.RootChatID,
&i.Chat.LastModelConfigID,
&i.Chat.Archived,
&i.Chat.LastError,
&i.Chat.Mode,
pq.Array(&i.Chat.MCPServerIDs),
&i.Chat.Labels,
&i.Chat.BuildID,
&i.Chat.AgentID,
&i.Chat.PinOrder,
&i.Chat.LastReadMessageID,
&i.Chat.LastInjectedContext,
&i.Chat.AutomationID,
&i.HasUnread,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
type chatAutomationQuerier interface {
GetAuthorizedChatAutomations(ctx context.Context, arg GetChatAutomationsParams, prepared rbac.PreparedAuthorized) ([]ChatAutomation, error)
}
func (q *sqlQuerier) GetAuthorizedChatAutomations(ctx context.Context, arg GetChatAutomationsParams, prepared rbac.PreparedAuthorized) ([]ChatAutomation, error) {
authorizedFilter, err := prepared.CompileToSQL(ctx, regosql.ConvertConfig{
VariableConverter: regosql.NoACLConverter(),
})
if err != nil {
return nil, xerrors.Errorf("compile authorized filter: %w", err)
}
filtered, err := insertAuthorizedFilter(getChatAutomations, fmt.Sprintf(" AND %s", authorizedFilter))
if err != nil {
return nil, xerrors.Errorf("insert authorized filter: %w", err)
}
// The name comment is for metric tracking
query := fmt.Sprintf("-- name: GetAuthorizedChatAutomations :many\n%s", filtered)
rows, err := q.db.QueryContext(ctx, query,
arg.OwnerID,
arg.OrganizationID,
arg.OffsetOpt,
arg.LimitOpt,
)
if err != nil {
return nil, err
}
defer rows.Close()
var items []ChatAutomation
for rows.Next() {
var i ChatAutomation
if err := rows.Scan(
			&i.ID,
			&i.OwnerID,
			&i.OrganizationID,
			&i.Name,
			&i.Description,
			&i.Instructions,
			&i.ModelConfigID,
			pq.Array(&i.MCPServerIDs),
			pq.Array(&i.AllowedTools),
			&i.Status,
			&i.MaxChatCreatesPerHour,
			&i.MaxMessagesPerHour,
			&i.CreatedAt,
			&i.UpdatedAt,
); err != nil {
return nil, err
}
@@ -809,8 +878,10 @@ type aibridgeQuerier interface {
ListAuthorizedAIBridgeInterceptions(ctx context.Context, arg ListAIBridgeInterceptionsParams, prepared rbac.PreparedAuthorized) ([]ListAIBridgeInterceptionsRow, error)
CountAuthorizedAIBridgeInterceptions(ctx context.Context, arg CountAIBridgeInterceptionsParams, prepared rbac.PreparedAuthorized) (int64, error)
ListAuthorizedAIBridgeModels(ctx context.Context, arg ListAIBridgeModelsParams, prepared rbac.PreparedAuthorized) ([]string, error)
ListAuthorizedAIBridgeClients(ctx context.Context, arg ListAIBridgeClientsParams, prepared rbac.PreparedAuthorized) ([]string, error)
ListAuthorizedAIBridgeSessions(ctx context.Context, arg ListAIBridgeSessionsParams, prepared rbac.PreparedAuthorized) ([]ListAIBridgeSessionsRow, error)
CountAuthorizedAIBridgeSessions(ctx context.Context, arg CountAIBridgeSessionsParams, prepared rbac.PreparedAuthorized) (int64, error)
ListAuthorizedAIBridgeSessionThreads(ctx context.Context, arg ListAIBridgeSessionThreadsParams, prepared rbac.PreparedAuthorized) ([]ListAIBridgeSessionThreadsRow, error)
}
func (q *sqlQuerier) ListAuthorizedAIBridgeInterceptions(ctx context.Context, arg ListAIBridgeInterceptionsParams, prepared rbac.PreparedAuthorized) ([]ListAIBridgeInterceptionsRow, error) {
@@ -945,6 +1016,35 @@ func (q *sqlQuerier) ListAuthorizedAIBridgeModels(ctx context.Context, arg ListA
return items, nil
}
func (q *sqlQuerier) ListAuthorizedAIBridgeClients(ctx context.Context, arg ListAIBridgeClientsParams, prepared rbac.PreparedAuthorized) ([]string, error) {
authorizedFilter, err := prepared.CompileToSQL(ctx, regosql.ConvertConfig{
VariableConverter: regosql.AIBridgeInterceptionConverter(),
})
if err != nil {
return nil, xerrors.Errorf("compile authorized filter: %w", err)
}
filtered, err := insertAuthorizedFilter(listAIBridgeClients, fmt.Sprintf(" AND %s", authorizedFilter))
if err != nil {
return nil, xerrors.Errorf("insert authorized filter: %w", err)
}
query := fmt.Sprintf("-- name: ListAIBridgeClients :many\n%s", filtered)
rows, err := q.db.QueryContext(ctx, query, arg.Client, arg.Offset, arg.Limit)
if err != nil {
return nil, err
}
defer rows.Close()
var items []string
for rows.Next() {
var client string
if err := rows.Scan(&client); err != nil {
return nil, err
}
items = append(items, client)
}
return items, nil
}
func (q *sqlQuerier) ListAuthorizedAIBridgeSessions(ctx context.Context, arg ListAIBridgeSessionsParams, prepared rbac.PreparedAuthorized) ([]ListAIBridgeSessionsRow, error) {
authorizedFilter, err := prepared.CompileToSQL(ctx, regosql.ConvertConfig{
VariableConverter: regosql.AIBridgeInterceptionConverter(),
@@ -960,8 +1060,6 @@ func (q *sqlQuerier) ListAuthorizedAIBridgeSessions(ctx context.Context, arg Lis
query := fmt.Sprintf("-- name: ListAuthorizedAIBridgeSessions :many\n%s", filtered)
rows, err := q.db.QueryContext(ctx, query,
arg.AfterSessionID,
arg.Offset,
arg.Limit,
arg.StartedAfter,
arg.StartedBefore,
arg.InitiatorID,
@@ -969,6 +1067,8 @@ func (q *sqlQuerier) ListAuthorizedAIBridgeSessions(ctx context.Context, arg Lis
arg.Model,
arg.Client,
arg.SessionID,
arg.Offset,
arg.Limit,
)
if err != nil {
return nil, err
@@ -1048,11 +1148,66 @@ func (q *sqlQuerier) CountAuthorizedAIBridgeSessions(ctx context.Context, arg Co
return count, nil
}
func (q *sqlQuerier) ListAuthorizedAIBridgeSessionThreads(ctx context.Context, arg ListAIBridgeSessionThreadsParams, prepared rbac.PreparedAuthorized) ([]ListAIBridgeSessionThreadsRow, error) {
authorizedFilter, err := prepared.CompileToSQL(ctx, regosql.ConvertConfig{
VariableConverter: regosql.AIBridgeInterceptionConverter(),
})
if err != nil {
return nil, xerrors.Errorf("compile authorized filter: %w", err)
}
filtered, err := insertAuthorizedFilter(listAIBridgeSessionThreads, fmt.Sprintf(" AND %s", authorizedFilter))
if err != nil {
return nil, xerrors.Errorf("insert authorized filter: %w", err)
}
query := fmt.Sprintf("-- name: ListAuthorizedAIBridgeSessionThreads :many\n%s", filtered)
rows, err := q.db.QueryContext(ctx, query,
arg.SessionID,
arg.AfterID,
arg.BeforeID,
arg.Limit,
)
if err != nil {
return nil, err
}
defer rows.Close()
var items []ListAIBridgeSessionThreadsRow
for rows.Next() {
var i ListAIBridgeSessionThreadsRow
if err := rows.Scan(
&i.ThreadID,
&i.AIBridgeInterception.ID,
&i.AIBridgeInterception.InitiatorID,
&i.AIBridgeInterception.Provider,
&i.AIBridgeInterception.Model,
&i.AIBridgeInterception.StartedAt,
&i.AIBridgeInterception.Metadata,
&i.AIBridgeInterception.EndedAt,
&i.AIBridgeInterception.APIKeyID,
&i.AIBridgeInterception.Client,
&i.AIBridgeInterception.ThreadParentID,
&i.AIBridgeInterception.ThreadRootID,
&i.AIBridgeInterception.ClientSessionID,
&i.AIBridgeInterception.SessionID,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
func insertAuthorizedFilter(query string, replaceWith string) (string, error) {
if !strings.Contains(query, authorizedQueryPlaceholder) {
return "", xerrors.Errorf("query does not contain authorized replace string, this is not an authorized query")
}
filtered := strings.Replace(query, authorizedQueryPlaceholder, replaceWith, 1)
filtered := strings.ReplaceAll(query, authorizedQueryPlaceholder, replaceWith)
return filtered, nil
}
@@ -224,6 +224,11 @@ const (
ApiKeyScopeChatUpdate APIKeyScope = "chat:update"
ApiKeyScopeChatDelete APIKeyScope = "chat:delete"
ApiKeyScopeChat APIKeyScope = "chat:*"
ApiKeyScopeChatAutomationCreate APIKeyScope = "chat_automation:create"
ApiKeyScopeChatAutomationRead APIKeyScope = "chat_automation:read"
ApiKeyScopeChatAutomationUpdate APIKeyScope = "chat_automation:update"
ApiKeyScopeChatAutomationDelete APIKeyScope = "chat_automation:delete"
ApiKeyScopeChatAutomation APIKeyScope = "chat_automation:*"
)
func (e *APIKeyScope) Scan(src interface{}) error {
@@ -467,7 +472,12 @@ func (e APIKeyScope) Valid() bool {
ApiKeyScopeChatRead,
ApiKeyScopeChatUpdate,
ApiKeyScopeChatDelete,
ApiKeyScopeChat:
ApiKeyScopeChat,
ApiKeyScopeChatAutomationCreate,
ApiKeyScopeChatAutomationRead,
ApiKeyScopeChatAutomationUpdate,
ApiKeyScopeChatAutomationDelete,
ApiKeyScopeChatAutomation:
return true
}
return false
@@ -680,6 +690,11 @@ func AllAPIKeyScopeValues() []APIKeyScope {
ApiKeyScopeChatUpdate,
ApiKeyScopeChatDelete,
ApiKeyScopeChat,
ApiKeyScopeChatAutomationCreate,
ApiKeyScopeChatAutomationRead,
ApiKeyScopeChatAutomationUpdate,
ApiKeyScopeChatAutomationDelete,
ApiKeyScopeChatAutomation,
}
}
@@ -1107,6 +1122,198 @@ func AllBuildReasonValues() []BuildReason {
}
}
// Outcome of a chat automation event: filtered, preview, created, continued, rate_limited, or error.
type ChatAutomationEventStatus string
const (
ChatAutomationEventStatusFiltered ChatAutomationEventStatus = "filtered"
ChatAutomationEventStatusPreview ChatAutomationEventStatus = "preview"
ChatAutomationEventStatusCreated ChatAutomationEventStatus = "created"
ChatAutomationEventStatusContinued ChatAutomationEventStatus = "continued"
ChatAutomationEventStatusRateLimited ChatAutomationEventStatus = "rate_limited"
ChatAutomationEventStatusError ChatAutomationEventStatus = "error"
)
func (e *ChatAutomationEventStatus) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = ChatAutomationEventStatus(s)
case string:
*e = ChatAutomationEventStatus(s)
default:
return fmt.Errorf("unsupported scan type for ChatAutomationEventStatus: %T", src)
}
return nil
}
type NullChatAutomationEventStatus struct {
ChatAutomationEventStatus ChatAutomationEventStatus `json:"chat_automation_event_status"`
Valid bool `json:"valid"` // Valid is true if ChatAutomationEventStatus is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullChatAutomationEventStatus) Scan(value interface{}) error {
if value == nil {
ns.ChatAutomationEventStatus, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.ChatAutomationEventStatus.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullChatAutomationEventStatus) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.ChatAutomationEventStatus), nil
}
func (e ChatAutomationEventStatus) Valid() bool {
switch e {
case ChatAutomationEventStatusFiltered,
ChatAutomationEventStatusPreview,
ChatAutomationEventStatusCreated,
ChatAutomationEventStatusContinued,
ChatAutomationEventStatusRateLimited,
ChatAutomationEventStatusError:
return true
}
return false
}
func AllChatAutomationEventStatusValues() []ChatAutomationEventStatus {
return []ChatAutomationEventStatus{
ChatAutomationEventStatusFiltered,
ChatAutomationEventStatusPreview,
ChatAutomationEventStatusCreated,
ChatAutomationEventStatusContinued,
ChatAutomationEventStatusRateLimited,
ChatAutomationEventStatusError,
}
}
// Lifecycle state of a chat automation: disabled, preview, or active.
type ChatAutomationStatus string
const (
ChatAutomationStatusDisabled ChatAutomationStatus = "disabled"
ChatAutomationStatusPreview ChatAutomationStatus = "preview"
ChatAutomationStatusActive ChatAutomationStatus = "active"
)
func (e *ChatAutomationStatus) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = ChatAutomationStatus(s)
case string:
*e = ChatAutomationStatus(s)
default:
return fmt.Errorf("unsupported scan type for ChatAutomationStatus: %T", src)
}
return nil
}
type NullChatAutomationStatus struct {
ChatAutomationStatus ChatAutomationStatus `json:"chat_automation_status"`
Valid bool `json:"valid"` // Valid is true if ChatAutomationStatus is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullChatAutomationStatus) Scan(value interface{}) error {
if value == nil {
ns.ChatAutomationStatus, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.ChatAutomationStatus.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullChatAutomationStatus) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.ChatAutomationStatus), nil
}
func (e ChatAutomationStatus) Valid() bool {
switch e {
case ChatAutomationStatusDisabled,
ChatAutomationStatusPreview,
ChatAutomationStatusActive:
return true
}
return false
}
func AllChatAutomationStatusValues() []ChatAutomationStatus {
return []ChatAutomationStatus{
ChatAutomationStatusDisabled,
ChatAutomationStatusPreview,
ChatAutomationStatusActive,
}
}
// Discriminator for chat automation triggers: webhook or cron.
type ChatAutomationTriggerType string
const (
ChatAutomationTriggerTypeWebhook ChatAutomationTriggerType = "webhook"
ChatAutomationTriggerTypeCron ChatAutomationTriggerType = "cron"
)
func (e *ChatAutomationTriggerType) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = ChatAutomationTriggerType(s)
case string:
*e = ChatAutomationTriggerType(s)
default:
return fmt.Errorf("unsupported scan type for ChatAutomationTriggerType: %T", src)
}
return nil
}
type NullChatAutomationTriggerType struct {
ChatAutomationTriggerType ChatAutomationTriggerType `json:"chat_automation_trigger_type"`
Valid bool `json:"valid"` // Valid is true if ChatAutomationTriggerType is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullChatAutomationTriggerType) Scan(value interface{}) error {
if value == nil {
ns.ChatAutomationTriggerType, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.ChatAutomationTriggerType.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullChatAutomationTriggerType) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.ChatAutomationTriggerType), nil
}
func (e ChatAutomationTriggerType) Valid() bool {
switch e {
case ChatAutomationTriggerTypeWebhook,
ChatAutomationTriggerTypeCron:
return true
}
return false
}
func AllChatAutomationTriggerTypeValues() []ChatAutomationTriggerType {
return []ChatAutomationTriggerType{
ChatAutomationTriggerTypeWebhook,
ChatAutomationTriggerTypeCron,
}
}
type ChatMessageRole string
const (
@@ -4153,24 +4360,99 @@ type BoundaryUsageStat struct {
}
type Chat struct {
-ID uuid.UUID `db:"id" json:"id"`
-OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
-WorkspaceID uuid.NullUUID `db:"workspace_id" json:"workspace_id"`
-Title string `db:"title" json:"title"`
-Status ChatStatus `db:"status" json:"status"`
-WorkerID uuid.NullUUID `db:"worker_id" json:"worker_id"`
-StartedAt sql.NullTime `db:"started_at" json:"started_at"`
-HeartbeatAt sql.NullTime `db:"heartbeat_at" json:"heartbeat_at"`
-CreatedAt time.Time `db:"created_at" json:"created_at"`
-UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
-ParentChatID uuid.NullUUID `db:"parent_chat_id" json:"parent_chat_id"`
-RootChatID uuid.NullUUID `db:"root_chat_id" json:"root_chat_id"`
-LastModelConfigID uuid.UUID `db:"last_model_config_id" json:"last_model_config_id"`
-Archived bool `db:"archived" json:"archived"`
-LastError sql.NullString `db:"last_error" json:"last_error"`
-Mode NullChatMode `db:"mode" json:"mode"`
-MCPServerIDs []uuid.UUID `db:"mcp_server_ids" json:"mcp_server_ids"`
-Labels StringMap `db:"labels" json:"labels"`
+ID uuid.UUID `db:"id" json:"id"`
+OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
+WorkspaceID uuid.NullUUID `db:"workspace_id" json:"workspace_id"`
+Title string `db:"title" json:"title"`
+Status ChatStatus `db:"status" json:"status"`
+WorkerID uuid.NullUUID `db:"worker_id" json:"worker_id"`
+StartedAt sql.NullTime `db:"started_at" json:"started_at"`
+HeartbeatAt sql.NullTime `db:"heartbeat_at" json:"heartbeat_at"`
+CreatedAt time.Time `db:"created_at" json:"created_at"`
+UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
+ParentChatID uuid.NullUUID `db:"parent_chat_id" json:"parent_chat_id"`
+RootChatID uuid.NullUUID `db:"root_chat_id" json:"root_chat_id"`
+LastModelConfigID uuid.UUID `db:"last_model_config_id" json:"last_model_config_id"`
+Archived bool `db:"archived" json:"archived"`
+LastError sql.NullString `db:"last_error" json:"last_error"`
+Mode NullChatMode `db:"mode" json:"mode"`
+MCPServerIDs []uuid.UUID `db:"mcp_server_ids" json:"mcp_server_ids"`
+Labels StringMap `db:"labels" json:"labels"`
+BuildID uuid.NullUUID `db:"build_id" json:"build_id"`
+AgentID uuid.NullUUID `db:"agent_id" json:"agent_id"`
+PinOrder int32 `db:"pin_order" json:"pin_order"`
+LastReadMessageID sql.NullInt64 `db:"last_read_message_id" json:"last_read_message_id"`
+LastInjectedContext pqtype.NullRawMessage `db:"last_injected_context" json:"last_injected_context"`
+AutomationID uuid.NullUUID `db:"automation_id" json:"automation_id"`
}
// Chat automations bridge external events (webhooks, cron schedules) to Coder chats. A chat automation defines what to say, which model and tools to use, and how fast it is allowed to create or continue chats.
type ChatAutomation struct {
ID uuid.UUID `db:"id" json:"id"`
// The user on whose behalf chats are created. All RBAC checks and chat ownership are scoped to this user.
OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
// Organization scope for RBAC. Combined with owner_id and name to form a unique constraint so automations are namespaced per user per org.
OrganizationID uuid.UUID `db:"organization_id" json:"organization_id"`
Name string `db:"name" json:"name"`
Description string `db:"description" json:"description"`
// The user-role message injected into every chat this automation creates. This is the core prompt that tells the LLM what to do.
Instructions string `db:"instructions" json:"instructions"`
// Optional model configuration override. When NULL the deployment default is used. SET NULL on delete so automations survive config changes gracefully.
ModelConfigID uuid.NullUUID `db:"model_config_id" json:"model_config_id"`
// MCP servers to attach to chats created by this automation. Stored as an array of UUIDs rather than a join table because the set is small and always read/written atomically.
MCPServerIDs []uuid.UUID `db:"mcp_server_ids" json:"mcp_server_ids"`
// Tool allowlist. Empty means all tools available to the model config are permitted.
AllowedTools []string `db:"allowed_tools" json:"allowed_tools"`
// Lifecycle state: disabled — trigger events are silently dropped; preview — events are logged but no chat is created (dry-run); active — events create or continue chats.
Status ChatAutomationStatus `db:"status" json:"status"`
// Maximum number of new chats this automation may create in a rolling one-hour window. Prevents runaway webhook storms from flooding the system.
MaxChatCreatesPerHour int32 `db:"max_chat_creates_per_hour" json:"max_chat_creates_per_hour"`
// Maximum total messages (creates + continues) this automation may send in a rolling one-hour window. A second, broader throttle that catches high-frequency continuation patterns.
MaxMessagesPerHour int32 `db:"max_messages_per_hour" json:"max_messages_per_hour"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
}
// Every trigger invocation produces an event row regardless of outcome. This table is the audit trail and the data source for rate-limit window counts. Rows are append-only and expected to be purged by a background job after a retention period.
type ChatAutomationEvent struct {
ID uuid.UUID `db:"id" json:"id"`
AutomationID uuid.UUID `db:"automation_id" json:"automation_id"`
TriggerID uuid.NullUUID `db:"trigger_id" json:"trigger_id"`
ReceivedAt time.Time `db:"received_at" json:"received_at"`
// The raw payload that was evaluated. For webhooks this is the HTTP body; for cron triggers it is a synthetic JSON envelope with schedule metadata.
Payload json.RawMessage `db:"payload" json:"payload"`
// Whether the trigger filter conditions matched. False means the event was dropped before any chat interaction.
FilterMatched bool `db:"filter_matched" json:"filter_matched"`
// Labels resolved from the payload via label_paths. Stored so the event log shows exactly which labels were computed.
ResolvedLabels pqtype.NullRawMessage `db:"resolved_labels" json:"resolved_labels"`
// ID of an existing chat that was found via label matching and continued with a new message.
MatchedChatID uuid.NullUUID `db:"matched_chat_id" json:"matched_chat_id"`
// ID of a newly created chat (mutually exclusive with matched_chat_id in practice).
CreatedChatID uuid.NullUUID `db:"created_chat_id" json:"created_chat_id"`
// Outcome of the event: filtered — filter did not match; preview — automation is in preview mode; created — new chat was created; continued — existing chat was continued; rate_limited — rate limit prevented chat action; error — something went wrong.
Status ChatAutomationEventStatus `db:"status" json:"status"`
Error sql.NullString `db:"error" json:"error"`
}
// Triggers define how an automation is invoked. Each automation can have multiple triggers (e.g. one webhook + one cron schedule). Webhook and cron triggers share the same row shape with type-specific nullable columns to keep the schema simple.
type ChatAutomationTrigger struct {
ID uuid.UUID `db:"id" json:"id"`
AutomationID uuid.UUID `db:"automation_id" json:"automation_id"`
// Discriminator: webhook or cron. Determines which nullable columns are meaningful.
Type ChatAutomationTriggerType `db:"type" json:"type"`
// HMAC-SHA256 shared secret for webhook signature verification (X-Hub-Signature-256 header). NULL for cron triggers.
WebhookSecret sql.NullString `db:"webhook_secret" json:"webhook_secret"`
WebhookSecretKeyID sql.NullString `db:"webhook_secret_key_id" json:"webhook_secret_key_id"`
// Standard 5-field cron expression (minute hour dom month dow), with optional CRON_TZ= prefix. NULL for webhook triggers.
CronSchedule sql.NullString `db:"cron_schedule" json:"cron_schedule"`
// Timestamp of the last successful cron fire. The scheduler computes next = cron.Next(last_triggered_at) and fires when next <= now. NULL means the trigger has never fired. Not used for webhook triggers.
LastTriggeredAt sql.NullTime `db:"last_triggered_at" json:"last_triggered_at"`
// gjson path-to-value filter conditions evaluated against the incoming webhook payload. All conditions must match for the trigger to fire. NULL or empty means match everything.
Filter pqtype.NullRawMessage `db:"filter" json:"filter"`
// Maps chat label keys to gjson paths. When a trigger fires, labels are resolved from the payload and used to find an existing chat to continue (by label match) or set on a newly created chat.
LabelPaths pqtype.NullRawMessage `db:"label_paths" json:"label_paths"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
}
type ChatDiffStatus struct {
@@ -4485,6 +4767,7 @@ type MCPServerConfig struct {
UpdatedBy uuid.NullUUID `db:"updated_by" json:"updated_by"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
ModelIntent bool `db:"model_intent" json:"model_intent"`
}
type MCPServerUserToken struct {
@@ -54,7 +54,7 @@ type sqlcQuerier interface {
ActivityBumpWorkspace(ctx context.Context, arg ActivityBumpWorkspaceParams) error
// AllUserIDs returns all UserIDs regardless of user status or deletion.
AllUserIDs(ctx context.Context, includeSystem bool) ([]uuid.UUID, error)
-ArchiveChatByID(ctx context.Context, id uuid.UUID) error
+ArchiveChatByID(ctx context.Context, id uuid.UUID) ([]Chat, error)
// Archiving templates is a soft delete action, so is reversible.
// Archiving prevents the version from being used and discovered
// by listing.
@@ -74,10 +74,21 @@ type sqlcQuerier interface {
CleanTailnetCoordinators(ctx context.Context) error
CleanTailnetLostPeers(ctx context.Context) error
CleanTailnetTunnels(ctx context.Context) error
CleanupDeletedMCPServerIDsFromChatAutomations(ctx context.Context) error
CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error
CountAIBridgeInterceptions(ctx context.Context, arg CountAIBridgeInterceptionsParams) (int64, error)
CountAIBridgeSessions(ctx context.Context, arg CountAIBridgeSessionsParams) (int64, error)
CountAuditLogs(ctx context.Context, arg CountAuditLogsParams) (int64, error)
// Counts new-chat events in the rate-limit window. This count is
// approximate under concurrency: concurrent webhook handlers may
// each read the same count before any of them insert, so brief
// bursts can slightly exceed the configured cap.
CountChatAutomationChatCreatesInWindow(ctx context.Context, arg CountChatAutomationChatCreatesInWindowParams) (int64, error)
// Counts total message events (creates + continues) in the rate-limit
// window. This count is approximate under concurrency: concurrent
// webhook handlers may each read the same count before any of them
// insert, so brief bursts can slightly exceed the configured cap.
CountChatAutomationMessagesInWindow(ctx context.Context, arg CountChatAutomationMessagesInWindowParams) (int64, error)
CountConnectionLogs(ctx context.Context, arg CountConnectionLogsParams) (int64, error)
// Counts enabled, non-deleted model configs that lack both input and
// output pricing in their JSONB options.cost configuration.
@@ -100,6 +111,8 @@ type sqlcQuerier interface {
// be recreated.
DeleteAllWebpushSubscriptions(ctx context.Context) error
DeleteApplicationConnectAPIKeysByUserID(ctx context.Context, userID uuid.UUID) error
DeleteChatAutomationByID(ctx context.Context, id uuid.UUID) error
DeleteChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) error
DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error
DeleteChatProviderByID(ctx context.Context, id uuid.UUID) error
DeleteChatQueuedMessage(ctx context.Context, arg DeleteChatQueuedMessageParams) error
@@ -197,6 +210,10 @@ type sqlcQuerier interface {
GetAPIKeysByUserID(ctx context.Context, arg GetAPIKeysByUserIDParams) ([]APIKey, error)
GetAPIKeysLastUsedAfter(ctx context.Context, lastUsed time.Time) ([]APIKey, error)
GetActiveAISeatCount(ctx context.Context) (int64, error)
// Returns all cron triggers whose parent automation is active or in
// preview mode. The scheduler uses this to evaluate which triggers
// are due.
GetActiveChatAutomationCronTriggers(ctx context.Context) ([]GetActiveChatAutomationCronTriggersRow, error)
GetActivePresetPrebuildSchedules(ctx context.Context) ([]TemplateVersionPresetPrebuildSchedule, error)
GetActiveUserCount(ctx context.Context, includeSystem bool) (int64, error)
GetActiveWorkspaceBuildsByTemplateID(ctx context.Context, templateID uuid.UUID) ([]WorkspaceBuild, error)
@@ -223,6 +240,11 @@ type sqlcQuerier interface {
// This function returns roles for authorization purposes. Implied member roles
// are included.
GetAuthorizationUserRoles(ctx context.Context, userID uuid.UUID) (GetAuthorizationUserRolesRow, error)
GetChatAutomationByID(ctx context.Context, id uuid.UUID) (ChatAutomation, error)
GetChatAutomationEventsByAutomationID(ctx context.Context, arg GetChatAutomationEventsByAutomationIDParams) ([]ChatAutomationEvent, error)
GetChatAutomationTriggerByID(ctx context.Context, id uuid.UUID) (ChatAutomationTrigger, error)
GetChatAutomationTriggersByAutomationID(ctx context.Context, automationID uuid.UUID) ([]ChatAutomationTrigger, error)
GetChatAutomations(ctx context.Context, arg GetChatAutomationsParams) ([]ChatAutomation, error)
GetChatByID(ctx context.Context, id uuid.UUID) (Chat, error)
GetChatByIDForUpdate(ctx context.Context, id uuid.UUID) (Chat, error)
// Per-root-chat cost breakdown for a single user within a date range.
@@ -243,8 +265,14 @@ type sqlcQuerier interface {
GetChatDiffStatusesByChatIDs(ctx context.Context, chatIds []uuid.UUID) ([]ChatDiffStatus, error)
GetChatFileByID(ctx context.Context, id uuid.UUID) (ChatFile, error)
GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]ChatFile, error)
// GetChatIncludeDefaultSystemPrompt preserves the legacy default
// for deployments created before the explicit include-default toggle.
// When the toggle is unset, a non-empty custom prompt implies false;
// otherwise the setting defaults to true.
GetChatIncludeDefaultSystemPrompt(ctx context.Context) (bool, error)
GetChatMessageByID(ctx context.Context, id int64) (ChatMessage, error)
GetChatMessagesByChatID(ctx context.Context, arg GetChatMessagesByChatIDParams) ([]ChatMessage, error)
GetChatMessagesByChatIDAscPaginated(ctx context.Context, arg GetChatMessagesByChatIDAscPaginatedParams) ([]ChatMessage, error)
GetChatMessagesByChatIDDescPaginated(ctx context.Context, arg GetChatMessagesByChatIDDescPaginatedParams) ([]ChatMessage, error)
GetChatMessagesForPromptByChatID(ctx context.Context, chatID uuid.UUID) ([]ChatMessage, error)
GetChatModelConfigByID(ctx context.Context, id uuid.UUID) (ChatModelConfig, error)
@@ -254,6 +282,12 @@ type sqlcQuerier interface {
GetChatProviders(ctx context.Context) ([]ChatProvider, error)
GetChatQueuedMessages(ctx context.Context, chatID uuid.UUID) ([]ChatQueuedMessage, error)
GetChatSystemPrompt(ctx context.Context) (string, error)
// GetChatSystemPromptConfig returns both chat system prompt settings in a
// single read to avoid torn reads between separate site-config lookups.
// The include-default fallback preserves the legacy behavior where a
// non-empty custom prompt implied opting out before the explicit toggle
// existed.
GetChatSystemPromptConfig(ctx context.Context) (GetChatSystemPromptConfigRow, error)
// GetChatTemplateAllowlist returns the JSON-encoded template allowlist.
// Returns an empty string when no allowlist has been configured (all templates allowed).
GetChatTemplateAllowlist(ctx context.Context) (string, error)
@@ -263,7 +297,8 @@ type sqlcQuerier interface {
// Returns the global TTL for chat workspaces as a Go duration string.
// Returns "0s" (disabled) when no value has been configured.
GetChatWorkspaceTTL(ctx context.Context) (string, error)
-GetChats(ctx context.Context, arg GetChatsParams) ([]Chat, error)
+GetChats(ctx context.Context, arg GetChatsParams) ([]GetChatsRow, error)
GetChatsByWorkspaceIDs(ctx context.Context, ids []uuid.UUID) ([]Chat, error)
GetConnectionLogsOffset(ctx context.Context, arg GetConnectionLogsOffsetParams) ([]GetConnectionLogsOffsetRow, error)
GetCryptoKeyByFeatureAndSequence(ctx context.Context, arg GetCryptoKeyByFeatureAndSequenceParams) (CryptoKey, error)
GetCryptoKeys(ctx context.Context) ([]CryptoKey, error)
@@ -548,6 +583,10 @@ type sqlcQuerier interface {
// inclusive.
GetTotalUsageDCManagedAgentsV1(ctx context.Context, arg GetTotalUsageDCManagedAgentsV1Params) (int64, error)
GetUnexpiredLicenses(ctx context.Context) ([]License, error)
// Returns user IDs from the provided list that are consuming an AI seat.
// Filters to active, non-deleted, non-system users to match the canonical
// seat count query (GetActiveAISeatCount).
GetUserAISeatStates(ctx context.Context, userIds []uuid.UUID) ([]uuid.UUID, error)
// GetUserActivityInsights returns the ranking with top active users.
// The result can be filtered on template_ids, meaning only user data
// from workspaces based on those templates will be included.
@@ -679,6 +718,9 @@ type sqlcQuerier interface {
InsertAllUsersGroup(ctx context.Context, organizationID uuid.UUID) (Group, error)
InsertAuditLog(ctx context.Context, arg InsertAuditLogParams) (AuditLog, error)
InsertChat(ctx context.Context, arg InsertChatParams) (Chat, error)
InsertChatAutomation(ctx context.Context, arg InsertChatAutomationParams) (ChatAutomation, error)
InsertChatAutomationEvent(ctx context.Context, arg InsertChatAutomationEventParams) (ChatAutomationEvent, error)
InsertChatAutomationTrigger(ctx context.Context, arg InsertChatAutomationTriggerParams) (ChatAutomationTrigger, error)
InsertChatFile(ctx context.Context, arg InsertChatFileParams) (InsertChatFileRow, error)
InsertChatMessages(ctx context.Context, arg InsertChatMessagesParams) ([]ChatMessage, error)
InsertChatModelConfig(ctx context.Context, arg InsertChatModelConfigParams) (ChatModelConfig, error)
@@ -758,14 +800,23 @@ type sqlcQuerier interface {
InsertWorkspaceProxy(ctx context.Context, arg InsertWorkspaceProxyParams) (WorkspaceProxy, error)
InsertWorkspaceResource(ctx context.Context, arg InsertWorkspaceResourceParams) (WorkspaceResource, error)
InsertWorkspaceResourceMetadata(ctx context.Context, arg InsertWorkspaceResourceMetadataParams) ([]WorkspaceResourceMetadatum, error)
ListAIBridgeClients(ctx context.Context, arg ListAIBridgeClientsParams) ([]string, error)
ListAIBridgeInterceptions(ctx context.Context, arg ListAIBridgeInterceptionsParams) ([]ListAIBridgeInterceptionsRow, error)
// Finds all unique AI Bridge interception telemetry summary
// combinations (provider, model, client) in the given timeframe
// for telemetry reporting.
ListAIBridgeInterceptionsTelemetrySummaries(ctx context.Context, arg ListAIBridgeInterceptionsTelemetrySummariesParams) ([]ListAIBridgeInterceptionsTelemetrySummariesRow, error)
ListAIBridgeModelThoughtsByInterceptionIDs(ctx context.Context, interceptionIds []uuid.UUID) ([]AIBridgeModelThought, error)
ListAIBridgeModels(ctx context.Context, arg ListAIBridgeModelsParams) ([]string, error)
// Returns all interceptions belonging to paginated threads within a session.
// Threads are paginated by (started_at, thread_id) cursor.
ListAIBridgeSessionThreads(ctx context.Context, arg ListAIBridgeSessionThreadsParams) ([]ListAIBridgeSessionThreadsRow, error)
// Returns paginated sessions with aggregated metadata, token counts, and
// the most recent user prompt. A "session" is a logical grouping of
// interceptions that share the same session_id (set by the client).
//
// Pagination-first strategy: identify the page of sessions cheaply via a
// single GROUP BY scan, then do expensive lateral joins (tokens, prompts,
// first-interception metadata) only for the ~page-size result set.
ListAIBridgeSessions(ctx context.Context, arg ListAIBridgeSessionsParams) ([]ListAIBridgeSessionsRow, error)
ListAIBridgeTokenUsagesByInterceptionIDs(ctx context.Context, interceptionIds []uuid.UUID) ([]AIBridgeTokenUsage, error)
ListAIBridgeToolUsagesByInterceptionIDs(ctx context.Context, interceptionIds []uuid.UUID) ([]AIBridgeToolUsage, error)
@@ -789,7 +840,17 @@ type sqlcQuerier interface {
// - Use both to get a specific org member row
OrganizationMembers(ctx context.Context, arg OrganizationMembersParams) ([]OrganizationMembersRow, error)
PaginatedOrganizationMembers(ctx context.Context, arg PaginatedOrganizationMembersParams) ([]PaginatedOrganizationMembersRow, error)
// Under READ COMMITTED, concurrent pin operations for the same
// owner may momentarily produce duplicate pin_order values because
// each CTE snapshot does not see the other's writes. The next
// pin/unpin/reorder operation's ROW_NUMBER() self-heals the
// sequence, so this is acceptable.
PinChatByID(ctx context.Context, id uuid.UUID) error
PopNextQueuedMessage(ctx context.Context, chatID uuid.UUID) (ChatQueuedMessage, error)
// Deletes old chat automation events in bounded batches to avoid
// long-running locks on high-volume tables. Callers should loop
// until zero rows are returned.
PurgeOldChatAutomationEvents(ctx context.Context, arg PurgeOldChatAutomationEventsParams) (int64, error)
ReduceWorkspaceAgentShareLevelToAuthenticatedByTemplate(ctx context.Context, templateID uuid.UUID) error
RegisterWorkspaceProxy(ctx context.Context, arg RegisterWorkspaceProxyParams) (WorkspaceProxy, error)
RemoveUserFromGroups(ctx context.Context, arg RemoveUserFromGroupsParams) ([]uuid.UUID, error)
@@ -812,24 +873,41 @@ type sqlcQuerier interface {
// This must be called from within a transaction. The lock will be automatically
// released when the transaction ends.
TryAcquireLock(ctx context.Context, pgTryAdvisoryXactLock int64) (bool, error)
-UnarchiveChatByID(ctx context.Context, id uuid.UUID) error
+UnarchiveChatByID(ctx context.Context, id uuid.UUID) ([]Chat, error)
// This will always work regardless of the current state of the template version.
UnarchiveTemplateVersion(ctx context.Context, arg UnarchiveTemplateVersionParams) error
UnfavoriteWorkspace(ctx context.Context, id uuid.UUID) error
UnpinChatByID(ctx context.Context, id uuid.UUID) error
UnsetDefaultChatModelConfigs(ctx context.Context) error
UpdateAIBridgeInterceptionEnded(ctx context.Context, arg UpdateAIBridgeInterceptionEndedParams) (AIBridgeInterception, error)
UpdateAPIKeyByID(ctx context.Context, arg UpdateAPIKeyByIDParams) error
UpdateChatAutomation(ctx context.Context, arg UpdateChatAutomationParams) (ChatAutomation, error)
UpdateChatAutomationTrigger(ctx context.Context, arg UpdateChatAutomationTriggerParams) (ChatAutomationTrigger, error)
UpdateChatAutomationTriggerLastTriggeredAt(ctx context.Context, arg UpdateChatAutomationTriggerLastTriggeredAtParams) error
UpdateChatAutomationTriggerWebhookSecret(ctx context.Context, arg UpdateChatAutomationTriggerWebhookSecretParams) (ChatAutomationTrigger, error)
UpdateChatBuildAgentBinding(ctx context.Context, arg UpdateChatBuildAgentBindingParams) (Chat, error)
UpdateChatByID(ctx context.Context, arg UpdateChatByIDParams) (Chat, error)
// Bumps the heartbeat timestamp for a running chat so that other
// replicas know the worker is still alive.
UpdateChatHeartbeat(ctx context.Context, arg UpdateChatHeartbeatParams) (int64, error)
UpdateChatLabelsByID(ctx context.Context, arg UpdateChatLabelsByIDParams) (Chat, error)
// Updates the cached injected context parts (AGENTS.md +
// skills) on the chat row. Called only when context changes
// (first workspace attach or agent change). updated_at is
// intentionally not touched to avoid reordering the chat list.
UpdateChatLastInjectedContext(ctx context.Context, arg UpdateChatLastInjectedContextParams) (Chat, error)
UpdateChatLastModelConfigByID(ctx context.Context, arg UpdateChatLastModelConfigByIDParams) (Chat, error)
// Updates the last read message ID for a chat. This is used to track
// which messages the owner has seen, enabling unread indicators.
UpdateChatLastReadMessageID(ctx context.Context, arg UpdateChatLastReadMessageIDParams) error
UpdateChatMCPServerIDs(ctx context.Context, arg UpdateChatMCPServerIDsParams) (Chat, error)
UpdateChatMessageByID(ctx context.Context, arg UpdateChatMessageByIDParams) (ChatMessage, error)
UpdateChatModelConfig(ctx context.Context, arg UpdateChatModelConfigParams) (ChatModelConfig, error)
UpdateChatPinOrder(ctx context.Context, arg UpdateChatPinOrderParams) error
UpdateChatProvider(ctx context.Context, arg UpdateChatProviderParams) (ChatProvider, error)
UpdateChatStatus(ctx context.Context, arg UpdateChatStatusParams) (Chat, error)
-UpdateChatWorkspace(ctx context.Context, arg UpdateChatWorkspaceParams) (Chat, error)
+UpdateChatStatusPreserveUpdatedAt(ctx context.Context, arg UpdateChatStatusPreserveUpdatedAtParams) (Chat, error)
+UpdateChatWorkspaceBinding(ctx context.Context, arg UpdateChatWorkspaceBindingParams) (Chat, error)
UpdateCryptoKeyDeletesAt(ctx context.Context, arg UpdateCryptoKeyDeletesAtParams) (CryptoKey, error)
UpdateCustomRole(ctx context.Context, arg UpdateCustomRoleParams) (CustomRole, error)
UpdateExternalAuthLink(ctx context.Context, arg UpdateExternalAuthLinkParams) (ExternalAuthLink, error)
@@ -936,6 +1014,7 @@ type sqlcQuerier interface {
UpsertChatDesktopEnabled(ctx context.Context, enableDesktop bool) error
UpsertChatDiffStatus(ctx context.Context, arg UpsertChatDiffStatusParams) (ChatDiffStatus, error)
UpsertChatDiffStatusReference(ctx context.Context, arg UpsertChatDiffStatusReferenceParams) (ChatDiffStatus, error)
UpsertChatIncludeDefaultSystemPrompt(ctx context.Context, includeDefaultSystemPrompt bool) error
UpsertChatSystemPrompt(ctx context.Context, value string) error
UpsertChatTemplateAllowlist(ctx context.Context, templateAllowlist string) error
UpsertChatUsageLimitConfig(ctx context.Context, arg UpsertChatUsageLimitConfigParams) (ChatUsageLimitConfig, error)
@@ -1251,8 +1251,12 @@ func TestGetAuthorizedChats(t *testing.T) {
owner := dbgen.User(t, db, database.User{
RBACRoles: []string{rbac.RoleOwner().String()},
})
-member := dbgen.User(t, db, database.User{})
-secondMember := dbgen.User(t, db, database.User{})
+member := dbgen.User(t, db, database.User{
+RBACRoles: pq.StringArray{rbac.RoleAgentsAccess().String()},
+})
+secondMember := dbgen.User(t, db, database.User{
+RBACRoles: pq.StringArray{rbac.RoleAgentsAccess().String()},
+})
// Create FK dependencies: a chat provider and model config.
ctx := testutil.Context(t, testutil.WaitMedium)
@@ -1311,7 +1315,7 @@ func TestGetAuthorizedChats(t *testing.T) {
require.NoError(t, err)
require.Len(t, memberRows, 2)
for _, row := range memberRows {
-require.Equal(t, member.ID, row.OwnerID, "member should only see own chats")
+require.Equal(t, member.ID, row.Chat.OwnerID, "member should only see own chats")
}
// Owner should see at least the 5 pre-created chats (site-wide
@@ -1381,7 +1385,7 @@ func TestGetAuthorizedChats(t *testing.T) {
require.NoError(t, err)
require.Len(t, memberRows, 2)
for _, row := range memberRows {
-require.Equal(t, member.ID, row.OwnerID, "member should only see own chats")
+require.Equal(t, member.ID, row.Chat.OwnerID, "member should only see own chats")
}
// As owner: should see at least the 5 pre-created chats.
@@ -1407,7 +1411,9 @@ func TestGetAuthorizedChats(t *testing.T) {
// Use a dedicated user for pagination to avoid interference
// with the other parallel subtests.
-paginationUser := dbgen.User(t, db, database.User{})
+paginationUser := dbgen.User(t, db, database.User{
+RBACRoles: pq.StringArray{rbac.RoleAgentsAccess().String()},
+})
for i := range 7 {
_, err := db.InsertChat(ctx, database.InsertChatParams{
OwnerID: paginationUser.ID,
@@ -1429,13 +1435,13 @@ func TestGetAuthorizedChats(t *testing.T) {
require.NoError(t, err)
require.Len(t, page1, 2)
for _, row := range page1 {
-require.Equal(t, paginationUser.ID, row.OwnerID, "paginated results must belong to pagination user")
+require.Equal(t, paginationUser.ID, row.Chat.OwnerID, "paginated results must belong to pagination user")
}
// Fetch remaining pages and collect all chat IDs.
allIDs := make(map[uuid.UUID]struct{})
for _, row := range page1 {
-allIDs[row.ID] = struct{}{}
+allIDs[row.Chat.ID] = struct{}{}
}
offset := int32(2)
for {
@@ -1445,8 +1451,8 @@ func TestGetAuthorizedChats(t *testing.T) {
}, preparedMember)
require.NoError(t, err)
for _, row := range page {
-require.Equal(t, paginationUser.ID, row.OwnerID, "paginated results must belong to pagination user")
-allIDs[row.ID] = struct{}{}
+require.Equal(t, paginationUser.ID, row.Chat.OwnerID, "paginated results must belong to pagination user")
+allIDs[row.Chat.ID] = struct{}{}
}
if len(page) < 2 {
break
@@ -10487,6 +10493,186 @@ func TestGetPRInsights(t *testing.T) {
})
}
func TestChatPinOrderQueries(t *testing.T) {
t.Parallel()
if testing.Short() {
t.SkipNow()
}
setup := func(t *testing.T) (context.Context, database.Store, uuid.UUID, uuid.UUID) {
t.Helper()
db, _ := dbtestutil.NewDB(t)
owner := dbgen.User(t, db, database.User{})
// Use background context for fixture setup so the
// timed test context doesn't tick during DB init.
bg := context.Background()
_, err := db.InsertChatProvider(bg, database.InsertChatProviderParams{
Provider: "openai",
DisplayName: "OpenAI",
APIKey: "test-key",
Enabled: true,
})
require.NoError(t, err)
modelCfg, err := db.InsertChatModelConfig(bg, database.InsertChatModelConfigParams{
Provider: "openai",
Model: "test-model",
DisplayName: "Test Model",
CreatedBy: uuid.NullUUID{UUID: owner.ID, Valid: true},
UpdatedBy: uuid.NullUUID{UUID: owner.ID, Valid: true},
Enabled: true,
IsDefault: true,
ContextLimit: 128000,
CompressionThreshold: 80,
Options: json.RawMessage(`{}`),
})
require.NoError(t, err)
ctx := testutil.Context(t, testutil.WaitMedium)
return ctx, db, owner.ID, modelCfg.ID
}
createChat := func(t *testing.T, ctx context.Context, db database.Store, ownerID, modelCfgID uuid.UUID, title string) database.Chat {
t.Helper()
chat, err := db.InsertChat(ctx, database.InsertChatParams{
OwnerID: ownerID,
LastModelConfigID: modelCfgID,
Title: title,
})
require.NoError(t, err)
return chat
}
requirePinOrders := func(t *testing.T, ctx context.Context, db database.Store, want map[uuid.UUID]int32) {
t.Helper()
for chatID, wantPinOrder := range want {
chat, err := db.GetChatByID(ctx, chatID)
require.NoError(t, err)
require.EqualValues(t, wantPinOrder, chat.PinOrder)
}
}
t.Run("PinChatByIDAppendsWithinOwner", func(t *testing.T) {
t.Parallel()
ctx, db, ownerID, modelCfgID := setup(t)
first := createChat(t, ctx, db, ownerID, modelCfgID, "first")
second := createChat(t, ctx, db, ownerID, modelCfgID, "second")
third := createChat(t, ctx, db, ownerID, modelCfgID, "third")
otherOwner := dbgen.User(t, db, database.User{})
other := createChat(t, ctx, db, otherOwner.ID, modelCfgID, "other-owner")
require.NoError(t, db.PinChatByID(ctx, other.ID))
require.NoError(t, db.PinChatByID(ctx, first.ID))
require.NoError(t, db.PinChatByID(ctx, second.ID))
require.NoError(t, db.PinChatByID(ctx, third.ID))
requirePinOrders(t, ctx, db, map[uuid.UUID]int32{
first.ID: 1,
second.ID: 2,
third.ID: 3,
other.ID: 1,
})
})
t.Run("UpdateChatPinOrderShiftsNeighborsAndClamps", func(t *testing.T) {
t.Parallel()
ctx, db, ownerID, modelCfgID := setup(t)
first := createChat(t, ctx, db, ownerID, modelCfgID, "first")
second := createChat(t, ctx, db, ownerID, modelCfgID, "second")
third := createChat(t, ctx, db, ownerID, modelCfgID, "third")
for _, chat := range []database.Chat{first, second, third} {
require.NoError(t, db.PinChatByID(ctx, chat.ID))
}
require.NoError(t, db.UpdateChatPinOrder(ctx, database.UpdateChatPinOrderParams{
ID: third.ID,
PinOrder: 1,
}))
requirePinOrders(t, ctx, db, map[uuid.UUID]int32{
first.ID: 2,
second.ID: 3,
third.ID: 1,
})
require.NoError(t, db.UpdateChatPinOrder(ctx, database.UpdateChatPinOrderParams{
ID: third.ID,
PinOrder: 99,
}))
requirePinOrders(t, ctx, db, map[uuid.UUID]int32{
first.ID: 1,
second.ID: 2,
third.ID: 3,
})
})
t.Run("UnpinChatByIDCompactsPinnedChats", func(t *testing.T) {
t.Parallel()
ctx, db, ownerID, modelCfgID := setup(t)
first := createChat(t, ctx, db, ownerID, modelCfgID, "first")
second := createChat(t, ctx, db, ownerID, modelCfgID, "second")
third := createChat(t, ctx, db, ownerID, modelCfgID, "third")
for _, chat := range []database.Chat{first, second, third} {
require.NoError(t, db.PinChatByID(ctx, chat.ID))
}
require.NoError(t, db.UnpinChatByID(ctx, second.ID))
requirePinOrders(t, ctx, db, map[uuid.UUID]int32{
first.ID: 1,
second.ID: 0,
third.ID: 2,
})
})
t.Run("ArchiveClearsPinAndExcludesFromRanking", func(t *testing.T) {
t.Parallel()
ctx, db, ownerID, modelCfgID := setup(t)
first := createChat(t, ctx, db, ownerID, modelCfgID, "first")
second := createChat(t, ctx, db, ownerID, modelCfgID, "second")
third := createChat(t, ctx, db, ownerID, modelCfgID, "third")
for _, chat := range []database.Chat{first, second, third} {
require.NoError(t, db.PinChatByID(ctx, chat.ID))
}
// Archive the middle pin.
_, err := db.ArchiveChatByID(ctx, second.ID)
require.NoError(t, err)
// Archived chat should have pin_order cleared. Remaining
// pins keep their original positions; the next mutation
// compacts via ROW_NUMBER().
requirePinOrders(t, ctx, db, map[uuid.UUID]int32{
first.ID: 1,
second.ID: 0,
third.ID: 3,
})
// Reorder among remaining active pins — archived chat
// should not interfere with position calculation.
require.NoError(t, db.UpdateChatPinOrder(ctx, database.UpdateChatPinOrderParams{
ID: third.ID,
PinOrder: 1,
}))
// After reorder, ROW_NUMBER() compacts the sequence.
requirePinOrders(t, ctx, db, map[uuid.UUID]int32{
first.ID: 2,
second.ID: 0,
third.ID: 1,
})
})
}
func TestChatLabels(t *testing.T) {
t.Parallel()
if testing.Short() {
@@ -10670,7 +10856,7 @@ func TestChatLabels(t *testing.T) {
titles := make([]string, 0, len(results))
for _, c := range results {
titles = append(titles, c.Title)
titles = append(titles, c.Chat.Title)
}
require.Contains(t, titles, "filter-a")
require.Contains(t, titles, "filter-b")
@@ -10688,8 +10874,7 @@ func TestChatLabels(t *testing.T) {
})
require.NoError(t, err)
require.Len(t, results, 1)
require.Equal(t, "filter-a", results[0].Title)
require.Equal(t, "filter-a", results[0].Chat.Title)
// No filter — should return all chats for this owner.
allChats, err := db.GetChats(ctx, database.GetChatsParams{
OwnerID: owner.ID,
@@ -10698,3 +10883,121 @@ func TestChatLabels(t *testing.T) {
require.GreaterOrEqual(t, len(allChats), 3)
})
}
func TestChatHasUnread(t *testing.T) {
t.Parallel()
store, _ := dbtestutil.NewDB(t)
ctx := context.Background()
dbgen.Organization(t, store, database.Organization{})
user := dbgen.User(t, store, database.User{})
_, err := store.InsertChatProvider(ctx, database.InsertChatProviderParams{
Provider: "openai",
DisplayName: "OpenAI",
APIKey: "test-key",
Enabled: true,
})
require.NoError(t, err)
modelCfg, err := store.InsertChatModelConfig(ctx, database.InsertChatModelConfigParams{
Provider: "openai",
Model: "test-model-" + uuid.NewString(),
DisplayName: "Test Model",
CreatedBy: uuid.NullUUID{UUID: user.ID, Valid: true},
UpdatedBy: uuid.NullUUID{UUID: user.ID, Valid: true},
Enabled: true,
IsDefault: true,
ContextLimit: 128000,
CompressionThreshold: 80,
Options: json.RawMessage(`{}`),
})
require.NoError(t, err)
chat, err := store.InsertChat(ctx, database.InsertChatParams{
OwnerID: user.ID,
LastModelConfigID: modelCfg.ID,
Title: "test-chat-" + uuid.NewString(),
})
require.NoError(t, err)
getHasUnread := func() bool {
rows, err := store.GetChats(ctx, database.GetChatsParams{
OwnerID: user.ID,
})
require.NoError(t, err)
for _, row := range rows {
if row.Chat.ID == chat.ID {
return row.HasUnread
}
}
t.Fatal("chat not found in GetChats result")
return false
}
// New chat with no messages: not unread.
require.False(t, getHasUnread(), "new chat with no messages should not be unread")
// Helper to insert a single chat message.
insertMsg := func(role database.ChatMessageRole, text string) {
t.Helper()
_, err := store.InsertChatMessages(ctx, database.InsertChatMessagesParams{
ChatID: chat.ID,
CreatedBy: []uuid.UUID{user.ID},
ModelConfigID: []uuid.UUID{modelCfg.ID},
Role: []database.ChatMessageRole{role},
Content: []string{fmt.Sprintf(`[{"type":"text","text":%q}]`, text)},
ContentVersion: []int16{0},
Visibility: []database.ChatMessageVisibility{database.ChatMessageVisibilityBoth},
InputTokens: []int64{0},
OutputTokens: []int64{0},
TotalTokens: []int64{0},
ReasoningTokens: []int64{0},
CacheCreationTokens: []int64{0},
CacheReadTokens: []int64{0},
ContextLimit: []int64{0},
Compressed: []bool{false},
TotalCostMicros: []int64{0},
RuntimeMs: []int64{0},
ProviderResponseID: []string{""},
})
require.NoError(t, err)
}
// Insert an assistant message: becomes unread.
insertMsg(database.ChatMessageRoleAssistant, "hello")
require.True(t, getHasUnread(), "chat with unread assistant message should be unread")
// Mark as read: no longer unread.
lastMsg, err := store.GetLastChatMessageByRole(ctx, database.GetLastChatMessageByRoleParams{
ChatID: chat.ID,
Role: database.ChatMessageRoleAssistant,
})
require.NoError(t, err)
err = store.UpdateChatLastReadMessageID(ctx, database.UpdateChatLastReadMessageIDParams{
ID: chat.ID,
LastReadMessageID: lastMsg.ID,
})
require.NoError(t, err)
require.False(t, getHasUnread(), "chat should not be unread after marking as read")
// Insert another assistant message: becomes unread again.
insertMsg(database.ChatMessageRoleAssistant, "new message")
require.True(t, getHasUnread(), "new assistant message after read should be unread")
// Mark as read again, then verify user messages don't
// trigger unread.
lastMsg, err = store.GetLastChatMessageByRole(ctx, database.GetLastChatMessageByRoleParams{
ChatID: chat.ID,
Role: database.ChatMessageRoleAssistant,
})
require.NoError(t, err)
err = store.UpdateChatLastReadMessageID(ctx, database.UpdateChatLastReadMessageIDParams{
ID: chat.ID,
LastReadMessageID: lastMsg.ID,
})
require.NoError(t, err)
insertMsg(database.ChatMessageRoleUser, "user msg")
require.False(t, getHasUnread(), "user messages should not trigger unread")
}
File diff suppressed because it is too large.
@@ -454,95 +454,91 @@ WHERE
-- Returns paginated sessions with aggregated metadata, token counts, and
-- the most recent user prompt. A "session" is a logical grouping of
-- interceptions that share the same session_id (set by the client).
WITH filtered_interceptions AS (
--
-- Pagination-first strategy: identify the page of sessions cheaply via a
-- single GROUP BY scan, then do expensive lateral joins (tokens, prompts,
-- first-interception metadata) only for the ~page-size result set.
WITH cursor_pos AS (
-- Resolve the cursor's started_at once, outside the HAVING clause,
-- so the planner cannot accidentally re-evaluate it per group.
SELECT MIN(aibridge_interceptions.started_at) AS started_at
FROM aibridge_interceptions
WHERE aibridge_interceptions.session_id = @after_session_id AND aibridge_interceptions.ended_at IS NOT NULL
),
session_page AS (
-- Paginate at the session level first; only cheap aggregates here.
SELECT
aibridge_interceptions.*
ai.session_id,
ai.initiator_id,
MIN(ai.started_at) AS started_at,
MAX(ai.ended_at) AS ended_at,
COUNT(*) FILTER (WHERE ai.thread_root_id IS NULL) AS threads
FROM
aibridge_interceptions
aibridge_interceptions ai
WHERE
-- Remove inflight interceptions (ones which lack an ended_at value).
aibridge_interceptions.ended_at IS NOT NULL
ai.ended_at IS NOT NULL
-- Filter by time frame
AND CASE
WHEN @started_after::timestamptz != '0001-01-01 00:00:00+00'::timestamptz THEN aibridge_interceptions.started_at >= @started_after::timestamptz
WHEN @started_after::timestamptz != '0001-01-01 00:00:00+00'::timestamptz THEN ai.started_at >= @started_after::timestamptz
ELSE true
END
AND CASE
WHEN @started_before::timestamptz != '0001-01-01 00:00:00+00'::timestamptz THEN aibridge_interceptions.started_at <= @started_before::timestamptz
WHEN @started_before::timestamptz != '0001-01-01 00:00:00+00'::timestamptz THEN ai.started_at <= @started_before::timestamptz
ELSE true
END
-- Filter initiator_id
AND CASE
WHEN @initiator_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN aibridge_interceptions.initiator_id = @initiator_id::uuid
WHEN @initiator_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN ai.initiator_id = @initiator_id::uuid
ELSE true
END
-- Filter provider
AND CASE
WHEN @provider::text != '' THEN aibridge_interceptions.provider = @provider::text
WHEN @provider::text != '' THEN ai.provider = @provider::text
ELSE true
END
-- Filter model
AND CASE
WHEN @model::text != '' THEN aibridge_interceptions.model = @model::text
WHEN @model::text != '' THEN ai.model = @model::text
ELSE true
END
-- Filter client
AND CASE
WHEN @client::text != '' THEN COALESCE(aibridge_interceptions.client, 'Unknown') = @client::text
WHEN @client::text != '' THEN COALESCE(ai.client, 'Unknown') = @client::text
ELSE true
END
-- Filter session_id
AND CASE
WHEN @session_id::text != '' THEN aibridge_interceptions.session_id = @session_id::text
WHEN @session_id::text != '' THEN ai.session_id = @session_id::text
ELSE true
END
-- Authorize Filter clause will be injected below in ListAuthorizedAIBridgeSessions
-- @authorize_filter
),
session_tokens AS (
-- Aggregate token usage across all interceptions in each session.
-- Group by (session_id, initiator_id) to avoid merging sessions from
-- different users who happen to share the same client_session_id.
SELECT
fi.session_id,
fi.initiator_id,
COALESCE(SUM(tu.input_tokens), 0)::bigint AS input_tokens,
COALESCE(SUM(tu.output_tokens), 0)::bigint AS output_tokens
-- TODO: add extra token types once https://github.com/coder/aibridge/issues/150 lands.
FROM
filtered_interceptions fi
LEFT JOIN
aibridge_token_usages tu ON fi.id = tu.interception_id
GROUP BY
fi.session_id, fi.initiator_id
),
session_root AS (
-- Build one summary row per session. Group by (session_id, initiator_id)
-- to avoid merging sessions from different users who happen to share the
-- same client_session_id. The ARRAY_AGG with ORDER BY picks values from
-- the chronologically first interception for fields that should represent
-- the session as a whole (client, metadata). Threads are counted as
-- distinct root interception IDs: an interception with a NULL
-- thread_root_id is itself a thread root.
SELECT
fi.session_id,
fi.initiator_id,
(ARRAY_AGG(fi.client ORDER BY fi.started_at, fi.id))[1] AS client,
(ARRAY_AGG(fi.metadata ORDER BY fi.started_at, fi.id))[1] AS metadata,
ARRAY_AGG(DISTINCT fi.provider ORDER BY fi.provider) AS providers,
ARRAY_AGG(DISTINCT fi.model ORDER BY fi.model) AS models,
MIN(fi.started_at) AS started_at,
MAX(fi.ended_at) AS ended_at,
COUNT(DISTINCT COALESCE(fi.thread_root_id, fi.id)) AS threads,
-- Collect IDs for lateral prompt lookup.
ARRAY_AGG(fi.id) AS interception_ids
FROM
filtered_interceptions fi
GROUP BY
fi.session_id, fi.initiator_id
ai.session_id, ai.initiator_id
HAVING
-- Cursor pagination: uses a composite (started_at, session_id)
-- cursor to support keyset pagination. The less-than comparison
-- matches the DESC sort order so rows after the cursor come
-- later in results. The cursor value comes from cursor_pos to
-- guarantee single evaluation.
CASE
WHEN @after_session_id::text != '' THEN (
(MIN(ai.started_at), ai.session_id) < (
(SELECT started_at FROM cursor_pos),
@after_session_id::text
)
)
ELSE true
END
ORDER BY
MIN(ai.started_at) DESC,
ai.session_id DESC
LIMIT COALESCE(NULLIF(@limit_::integer, 0), 100)
OFFSET @offset_
)
SELECT
sr.session_id,
sp.session_id,
visible_users.id AS user_id,
visible_users.username AS user_username,
visible_users.name AS user_name,
@@ -551,47 +547,114 @@ SELECT
sr.models::text[] AS models,
COALESCE(sr.client, '')::varchar(64) AS client,
sr.metadata::jsonb AS metadata,
sr.started_at::timestamptz AS started_at,
sr.ended_at::timestamptz AS ended_at,
sr.threads,
sp.started_at::timestamptz AS started_at,
sp.ended_at::timestamptz AS ended_at,
sp.threads,
COALESCE(st.input_tokens, 0)::bigint AS input_tokens,
COALESCE(st.output_tokens, 0)::bigint AS output_tokens,
COALESCE(slp.prompt, '') AS last_prompt
FROM
session_root sr
session_page sp
JOIN
visible_users ON visible_users.id = sr.initiator_id
LEFT JOIN
session_tokens st ON st.session_id = sr.session_id AND st.initiator_id = sr.initiator_id
visible_users ON visible_users.id = sp.initiator_id
LEFT JOIN LATERAL (
-- Lateral join to efficiently fetch only the most recent user prompt
-- across all interceptions in the session, avoiding a full aggregation.
SELECT
(ARRAY_AGG(ai.client ORDER BY ai.started_at, ai.id))[1] AS client,
(ARRAY_AGG(ai.metadata ORDER BY ai.started_at, ai.id))[1] AS metadata,
ARRAY_AGG(DISTINCT ai.provider ORDER BY ai.provider) AS providers,
ARRAY_AGG(DISTINCT ai.model ORDER BY ai.model) AS models,
ARRAY_AGG(ai.id) AS interception_ids
FROM aibridge_interceptions ai
WHERE ai.session_id = sp.session_id
AND ai.initiator_id = sp.initiator_id
AND ai.ended_at IS NOT NULL
) sr ON true
LEFT JOIN LATERAL (
-- Aggregate tokens only for this session's interceptions.
SELECT
COALESCE(SUM(tu.input_tokens), 0)::bigint AS input_tokens,
COALESCE(SUM(tu.output_tokens), 0)::bigint AS output_tokens
FROM aibridge_token_usages tu
WHERE tu.interception_id = ANY(sr.interception_ids)
) st ON true
LEFT JOIN LATERAL (
-- Fetch only the most recent user prompt across all interceptions
-- in the session.
SELECT up.prompt
FROM aibridge_user_prompts up
WHERE up.interception_id = ANY(sr.interception_ids)
ORDER BY up.created_at DESC, up.id DESC
LIMIT 1
) slp ON true
WHERE
-- Cursor pagination: uses a composite (started_at, session_id) cursor
-- to support keyset pagination. The less-than comparison matches the
-- DESC sort order so that rows after the cursor come later in results.
CASE
WHEN @after_session_id::text != '' THEN (
(sr.started_at, sr.session_id) < (
(SELECT started_at FROM session_root WHERE session_id = @after_session_id),
@after_session_id::text
ORDER BY
sp.started_at DESC,
sp.session_id DESC
;
-- name: ListAIBridgeSessionThreads :many
-- Returns all interceptions belonging to paginated threads within a session.
-- Threads are paginated by (started_at, thread_id) cursor.
WITH paginated_threads AS (
SELECT
-- Find thread root interceptions (thread_root_id IS NULL), apply cursor
-- pagination, and return the page.
aibridge_interceptions.id AS thread_id,
aibridge_interceptions.started_at
FROM
aibridge_interceptions
WHERE
aibridge_interceptions.session_id = @session_id::text
AND aibridge_interceptions.ended_at IS NOT NULL
AND aibridge_interceptions.thread_root_id IS NULL
-- Pagination cursor.
AND (@after_id::uuid = '00000000-0000-0000-0000-000000000000'::uuid OR
(aibridge_interceptions.started_at, aibridge_interceptions.id) > (
(SELECT started_at FROM aibridge_interceptions ai2 WHERE ai2.id = @after_id),
@after_id::uuid
)
)
ELSE true
END
AND (@before_id::uuid = '00000000-0000-0000-0000-000000000000'::uuid OR
(aibridge_interceptions.started_at, aibridge_interceptions.id) < (
(SELECT started_at FROM aibridge_interceptions ai2 WHERE ai2.id = @before_id),
@before_id::uuid
)
)
-- @authorize_filter
ORDER BY
aibridge_interceptions.started_at ASC,
aibridge_interceptions.id ASC
LIMIT COALESCE(NULLIF(@limit_::integer, 0), 50)
)
SELECT
COALESCE(aibridge_interceptions.thread_root_id, aibridge_interceptions.id) AS thread_id,
sqlc.embed(aibridge_interceptions)
FROM
aibridge_interceptions
JOIN
paginated_threads pt
ON pt.thread_id = COALESCE(aibridge_interceptions.thread_root_id, aibridge_interceptions.id)
WHERE
aibridge_interceptions.session_id = @session_id::text
AND aibridge_interceptions.ended_at IS NOT NULL
-- @authorize_filter
ORDER BY
sr.started_at DESC,
sr.session_id DESC
LIMIT COALESCE(NULLIF(@limit_::integer, 0), 100)
OFFSET @offset_
-- Ensure threads and their associated interceptions (agentic loops) are sorted chronologically.
pt.started_at ASC,
pt.thread_id ASC,
aibridge_interceptions.started_at ASC,
aibridge_interceptions.id ASC
;
-- name: ListAIBridgeModelThoughtsByInterceptionIDs :many
SELECT
*
FROM
aibridge_model_thoughts
WHERE
interception_id = ANY(@interception_ids::uuid[])
ORDER BY
created_at ASC;
-- name: ListAIBridgeModels :many
SELECT
model
@@ -616,3 +679,27 @@ ORDER BY
LIMIT COALESCE(NULLIF(@limit_::integer, 0), 100)
OFFSET @offset_
;
-- name: ListAIBridgeClients :many
SELECT
COALESCE(client, 'Unknown') AS client
FROM
aibridge_interceptions
WHERE
ended_at IS NOT NULL
-- Filter client (prefix match to allow B-tree index usage).
AND CASE
WHEN @client::text != '' THEN COALESCE(aibridge_interceptions.client, 'Unknown') LIKE @client::text || '%'
ELSE true
END
-- We apply an `@authorize_filter` so that only clients from
-- interceptions the user is permitted to see are listed.
-- Authorize Filter clause will be injected below in
-- ListAIBridgeClientsAuthorized.
-- @authorize_filter
GROUP BY
client
LIMIT COALESCE(NULLIF(@limit_::integer, 0), 100)
OFFSET @offset_
;
@@ -0,0 +1,17 @@
-- name: GetUserAISeatStates :many
-- Returns user IDs from the provided list that are consuming an AI seat.
-- Filters to active, non-deleted, non-system users to match the canonical
-- seat count query (GetActiveAISeatCount).
SELECT
ais.user_id
FROM
ai_seat_state ais
JOIN
users u
ON
ais.user_id = u.id
WHERE
ais.user_id = ANY(@user_ids::uuid[])
AND u.status = 'active'::user_status
AND u.deleted = false
AND u.is_system = false;
@@ -0,0 +1,80 @@
-- name: InsertChatAutomationEvent :one
INSERT INTO chat_automation_events (
id,
automation_id,
trigger_id,
received_at,
payload,
filter_matched,
resolved_labels,
matched_chat_id,
created_chat_id,
status,
error
) VALUES (
@id::uuid,
@automation_id::uuid,
sqlc.narg('trigger_id')::uuid,
@received_at::timestamptz,
@payload::jsonb,
@filter_matched::boolean,
sqlc.narg('resolved_labels')::jsonb,
sqlc.narg('matched_chat_id')::uuid,
sqlc.narg('created_chat_id')::uuid,
@status::chat_automation_event_status,
sqlc.narg('error')::text
) RETURNING *;
-- name: GetChatAutomationEventsByAutomationID :many
SELECT
*
FROM
chat_automation_events
WHERE
automation_id = @automation_id::uuid
AND CASE
WHEN sqlc.narg('status_filter')::chat_automation_event_status IS NOT NULL THEN status = sqlc.narg('status_filter')::chat_automation_event_status
ELSE true
END
ORDER BY
received_at DESC
OFFSET @offset_opt
LIMIT
COALESCE(NULLIF(@limit_opt :: int, 0), 50);
-- name: CountChatAutomationChatCreatesInWindow :one
-- Counts new-chat events in the rate-limit window. This count is
-- approximate under concurrency: concurrent webhook handlers may
-- each read the same count before any of them insert, so brief
-- bursts can slightly exceed the configured cap.
SELECT COUNT(*)
FROM chat_automation_events
WHERE automation_id = @automation_id::uuid
AND status = 'created'
AND received_at > @window_start::timestamptz;
-- name: CountChatAutomationMessagesInWindow :one
-- Counts total message events (creates + continues) in the rate-limit
-- window. This count is approximate under concurrency: concurrent
-- webhook handlers may each read the same count before any of them
-- insert, so brief bursts can slightly exceed the configured cap.
SELECT COUNT(*)
FROM chat_automation_events
WHERE automation_id = @automation_id::uuid
AND status IN ('created', 'continued')
AND received_at > @window_start::timestamptz;
-- name: PurgeOldChatAutomationEvents :execrows
-- Deletes old chat automation events in bounded batches to avoid
-- long-running locks on high-volume tables. Callers should loop
-- until zero rows are returned.
WITH old_events AS (
SELECT id
FROM chat_automation_events
WHERE received_at < @before::timestamptz
ORDER BY received_at ASC
LIMIT @limit_count
)
DELETE FROM chat_automation_events
USING old_events
WHERE chat_automation_events.id = old_events.id;
@@ -0,0 +1,85 @@
-- name: InsertChatAutomation :one
INSERT INTO chat_automations (
id,
owner_id,
organization_id,
name,
description,
instructions,
model_config_id,
mcp_server_ids,
allowed_tools,
status,
max_chat_creates_per_hour,
max_messages_per_hour,
created_at,
updated_at
) VALUES (
@id::uuid,
@owner_id::uuid,
@organization_id::uuid,
@name::text,
@description::text,
@instructions::text,
sqlc.narg('model_config_id')::uuid,
COALESCE(@mcp_server_ids::uuid[], '{}'::uuid[]),
COALESCE(@allowed_tools::text[], '{}'::text[]),
@status::chat_automation_status,
@max_chat_creates_per_hour::integer,
@max_messages_per_hour::integer,
@created_at::timestamptz,
@updated_at::timestamptz
) RETURNING *;
-- name: GetChatAutomationByID :one
SELECT * FROM chat_automations WHERE id = @id::uuid;
-- name: GetChatAutomations :many
SELECT
*
FROM
chat_automations
WHERE
CASE
WHEN @owner_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN chat_automations.owner_id = @owner_id
ELSE true
END
AND CASE
WHEN @organization_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN chat_automations.organization_id = @organization_id
ELSE true
END
-- Authorize Filter clause will be injected below in GetAuthorizedChatAutomations
-- @authorize_filter
ORDER BY
created_at DESC, id DESC
OFFSET @offset_opt
LIMIT
COALESCE(NULLIF(@limit_opt :: int, 0), 50);
-- name: UpdateChatAutomation :one
UPDATE chat_automations SET
name = @name::text,
description = @description::text,
instructions = @instructions::text,
model_config_id = sqlc.narg('model_config_id')::uuid,
mcp_server_ids = COALESCE(@mcp_server_ids::uuid[], '{}'::uuid[]),
allowed_tools = COALESCE(@allowed_tools::text[], '{}'::text[]),
status = @status::chat_automation_status,
max_chat_creates_per_hour = @max_chat_creates_per_hour::integer,
max_messages_per_hour = @max_messages_per_hour::integer,
updated_at = @updated_at::timestamptz
WHERE id = @id::uuid
RETURNING *;
-- name: DeleteChatAutomationByID :exec
DELETE FROM chat_automations WHERE id = @id::uuid;
-- name: CleanupDeletedMCPServerIDsFromChatAutomations :exec
UPDATE chat_automations
SET mcp_server_ids = (
SELECT COALESCE(array_agg(sid), '{}')
FROM unnest(chat_automations.mcp_server_ids) AS sid
WHERE sid IN (SELECT id FROM mcp_server_configs)
)
WHERE mcp_server_ids != '{}'
AND NOT (mcp_server_ids <@ COALESCE((SELECT array_agg(id) FROM mcp_server_configs), '{}'));
@@ -0,0 +1,87 @@
-- name: InsertChatAutomationTrigger :one
INSERT INTO chat_automation_triggers (
id,
automation_id,
type,
webhook_secret,
webhook_secret_key_id,
cron_schedule,
filter,
label_paths,
created_at,
updated_at
) VALUES (
@id::uuid,
@automation_id::uuid,
@type::chat_automation_trigger_type,
sqlc.narg('webhook_secret')::text,
sqlc.narg('webhook_secret_key_id')::text,
sqlc.narg('cron_schedule')::text,
sqlc.narg('filter')::jsonb,
sqlc.narg('label_paths')::jsonb,
@created_at::timestamptz,
@updated_at::timestamptz
) RETURNING *;
-- name: GetChatAutomationTriggerByID :one
SELECT * FROM chat_automation_triggers WHERE id = @id::uuid;
-- name: GetChatAutomationTriggersByAutomationID :many
SELECT * FROM chat_automation_triggers
WHERE automation_id = @automation_id::uuid
ORDER BY created_at ASC;
-- name: UpdateChatAutomationTrigger :one
UPDATE chat_automation_triggers SET
cron_schedule = COALESCE(sqlc.narg('cron_schedule'), cron_schedule),
filter = COALESCE(sqlc.narg('filter'), filter),
label_paths = COALESCE(sqlc.narg('label_paths'), label_paths),
updated_at = @updated_at::timestamptz
WHERE id = @id::uuid
RETURNING *;
-- name: UpdateChatAutomationTriggerWebhookSecret :one
UPDATE chat_automation_triggers SET
webhook_secret = sqlc.narg('webhook_secret')::text,
webhook_secret_key_id = sqlc.narg('webhook_secret_key_id')::text,
updated_at = @updated_at::timestamptz
WHERE id = @id::uuid
RETURNING *;
-- name: DeleteChatAutomationTriggerByID :exec
DELETE FROM chat_automation_triggers WHERE id = @id::uuid;
-- name: GetActiveChatAutomationCronTriggers :many
-- Returns all cron triggers whose parent automation is active or in
-- preview mode. The scheduler uses this to evaluate which triggers
-- are due.
SELECT
t.id,
t.automation_id,
t.type,
t.cron_schedule,
t.filter,
t.label_paths,
t.last_triggered_at,
t.created_at,
t.updated_at,
a.status AS automation_status,
a.owner_id AS automation_owner_id,
a.instructions AS automation_instructions,
a.name AS automation_name,
a.organization_id AS automation_organization_id,
a.model_config_id AS automation_model_config_id,
a.mcp_server_ids AS automation_mcp_server_ids,
a.allowed_tools AS automation_allowed_tools,
a.max_chat_creates_per_hour AS automation_max_chat_creates_per_hour,
a.max_messages_per_hour AS automation_max_messages_per_hour
FROM chat_automation_triggers t
JOIN chat_automations a ON a.id = t.automation_id
WHERE t.type = 'cron'
AND t.cron_schedule IS NOT NULL
AND a.status IN ('active', 'preview');
-- name: UpdateChatAutomationTriggerLastTriggeredAt :exec
UPDATE chat_automation_triggers
SET last_triggered_at = @last_triggered_at::timestamptz
WHERE id = @id::uuid;
@@ -1,9 +1,192 @@
-- name: ArchiveChatByID :exec
UPDATE chats SET archived = true, updated_at = NOW()
WHERE id = @id OR root_chat_id = @id;
-- name: ArchiveChatByID :many
WITH chats AS (
UPDATE chats
SET archived = true, pin_order = 0, updated_at = NOW()
WHERE id = @id::uuid OR root_chat_id = @id::uuid
RETURNING *
)
SELECT *
FROM chats
ORDER BY (id = @id::uuid) DESC, created_at ASC, id ASC;
-- name: UnarchiveChatByID :exec
UPDATE chats SET archived = false, updated_at = NOW() WHERE id = @id::uuid;
-- name: UnarchiveChatByID :many
WITH chats AS (
UPDATE chats
SET archived = false, updated_at = NOW()
WHERE id = @id::uuid OR root_chat_id = @id::uuid
RETURNING *
)
SELECT *
FROM chats
ORDER BY (id = @id::uuid) DESC, created_at ASC, id ASC;
-- name: PinChatByID :exec
WITH target_chat AS (
SELECT
id,
owner_id
FROM
chats
WHERE
id = @id::uuid
),
-- Under READ COMMITTED, concurrent pin operations for the same
-- owner may momentarily produce duplicate pin_order values because
-- each CTE snapshot does not see the other's writes. The next
-- pin/unpin/reorder operation's ROW_NUMBER() self-heals the
-- sequence, so this is acceptable.
ranked AS (
SELECT
c.id,
ROW_NUMBER() OVER (ORDER BY c.pin_order ASC, c.id ASC) :: integer AS next_pin_order
FROM
chats c
JOIN
target_chat ON c.owner_id = target_chat.owner_id
WHERE
c.pin_order > 0
AND c.archived = FALSE
AND c.id <> target_chat.id
),
updates AS (
SELECT
ranked.id,
ranked.next_pin_order AS pin_order
FROM
ranked
UNION ALL
SELECT
target_chat.id,
COALESCE((
SELECT
MAX(ranked.next_pin_order)
FROM
ranked
), 0) + 1 AS pin_order
FROM
target_chat
)
UPDATE
chats c
SET
pin_order = updates.pin_order
FROM
updates
WHERE
c.id = updates.id;
-- name: UnpinChatByID :exec
WITH target_chat AS (
SELECT
id,
owner_id
FROM
chats
WHERE
id = @id::uuid
),
ranked AS (
SELECT
c.id,
ROW_NUMBER() OVER (ORDER BY c.pin_order ASC, c.id ASC) :: integer AS current_position
FROM
chats c
JOIN
target_chat ON c.owner_id = target_chat.owner_id
WHERE
c.pin_order > 0
AND c.archived = FALSE
),
target AS (
SELECT
ranked.id,
ranked.current_position
FROM
ranked
WHERE
ranked.id = @id::uuid
),
updates AS (
SELECT
ranked.id,
CASE
WHEN ranked.id = target.id THEN 0
WHEN ranked.current_position > target.current_position THEN ranked.current_position - 1
ELSE ranked.current_position
END AS pin_order
FROM
ranked
CROSS JOIN
target
)
UPDATE
chats c
SET
pin_order = updates.pin_order
FROM
updates
WHERE
c.id = updates.id;
-- name: UpdateChatPinOrder :exec
WITH target_chat AS (
SELECT
id,
owner_id
FROM
chats
WHERE
id = @id::uuid
),
ranked AS (
SELECT
c.id,
ROW_NUMBER() OVER (ORDER BY c.pin_order ASC, c.id ASC) :: integer AS current_position,
COUNT(*) OVER () :: integer AS pinned_count
FROM
chats c
JOIN
target_chat ON c.owner_id = target_chat.owner_id
WHERE
c.pin_order > 0
AND c.archived = FALSE
),
target AS (
SELECT
ranked.id,
ranked.current_position,
LEAST(GREATEST(@pin_order::integer, 1), ranked.pinned_count) AS desired_position
FROM
ranked
WHERE
ranked.id = @id::uuid
),
updates AS (
SELECT
ranked.id,
CASE
WHEN ranked.id = target.id THEN target.desired_position
WHEN target.desired_position < target.current_position
AND ranked.current_position >= target.desired_position
AND ranked.current_position < target.current_position THEN ranked.current_position + 1
WHEN target.desired_position > target.current_position
AND ranked.current_position > target.current_position
AND ranked.current_position <= target.desired_position THEN ranked.current_position - 1
ELSE ranked.current_position
END AS pin_order
FROM
ranked
CROSS JOIN
target
)
UPDATE
chats c
SET
pin_order = updates.pin_order
FROM
updates
WHERE
c.id = updates.id;
-- name: SoftDeleteChatMessagesAfterID :exec
UPDATE
@@ -52,6 +235,21 @@ WHERE
ORDER BY
created_at ASC;
-- name: GetChatMessagesByChatIDAscPaginated :many
SELECT
*
FROM
chat_messages
WHERE
chat_id = @chat_id::uuid
AND id > @after_id::bigint
AND visibility IN ('user', 'both')
AND deleted = false
ORDER BY
id ASC
LIMIT
COALESCE(NULLIF(@limit_val::int, 0), 50);
-- name: GetChatMessagesByChatIDDescPaginated :many
SELECT
*
@@ -130,7 +328,14 @@ ORDER BY
-- name: GetChats :many
SELECT
*
sqlc.embed(chats),
EXISTS (
SELECT 1 FROM chat_messages cm
WHERE cm.chat_id = chats.id
AND cm.role = 'assistant'
AND cm.deleted = false
AND cm.id > COALESCE(chats.last_read_message_id, 0)
) AS has_unread
FROM
chats
WHERE
@@ -180,6 +385,8 @@ LIMIT
INSERT INTO chats (
owner_id,
workspace_id,
build_id,
agent_id,
parent_chat_id,
root_chat_id,
last_model_config_id,
@@ -190,6 +397,8 @@ INSERT INTO chats (
) VALUES (
@owner_id::uuid,
sqlc.narg('workspace_id')::uuid,
sqlc.narg('build_id')::uuid,
sqlc.narg('agent_id')::uuid,
sqlc.narg('parent_chat_id')::uuid,
sqlc.narg('root_chat_id')::uuid,
@last_model_config_id::uuid,
@@ -294,6 +503,17 @@ WHERE
RETURNING
*;
-- name: UpdateChatLastModelConfigByID :one
UPDATE
chats
SET
-- NOTE: updated_at is intentionally NOT touched here to avoid changing list ordering.
last_model_config_id = @last_model_config_id::uuid
WHERE
id = @id::uuid
RETURNING
*;
-- name: UpdateChatLabelsByID :one
UPDATE
chats
@@ -305,16 +525,34 @@ WHERE
RETURNING
*;
-- name: UpdateChatWorkspace :one
UPDATE
chats
SET
-- name: UpdateChatWorkspaceBinding :one
UPDATE chats SET
workspace_id = sqlc.narg('workspace_id')::uuid,
build_id = sqlc.narg('build_id')::uuid,
agent_id = sqlc.narg('agent_id')::uuid,
updated_at = NOW()
WHERE id = @id::uuid
RETURNING *;
-- name: UpdateChatBuildAgentBinding :one
UPDATE chats SET
build_id = sqlc.narg('build_id')::uuid,
agent_id = sqlc.narg('agent_id')::uuid,
updated_at = NOW()
WHERE
id = @id::uuid
RETURNING
*;
RETURNING *;
-- name: UpdateChatLastInjectedContext :one
-- Updates the cached injected context parts (AGENTS.md +
-- skills) on the chat row. Called only when context changes
-- (first workspace attach or agent change). updated_at is
-- intentionally not touched to avoid reordering the chat list.
UPDATE chats SET
last_injected_context = sqlc.narg('last_injected_context')::jsonb
WHERE
id = @id::uuid
RETURNING *;
-- name: UpdateChatMCPServerIDs :one
UPDATE
@@ -371,6 +609,21 @@ WHERE
RETURNING
*;
-- name: UpdateChatStatusPreserveUpdatedAt :one
UPDATE
chats
SET
status = @status::chat_status,
worker_id = sqlc.narg('worker_id')::uuid,
started_at = sqlc.narg('started_at')::timestamptz,
heartbeat_at = sqlc.narg('heartbeat_at')::timestamptz,
last_error = sqlc.narg('last_error')::text,
updated_at = @updated_at::timestamptz
WHERE
id = @id::uuid
RETURNING
*;
-- name: GetStaleChats :many
-- Find chats that appear stuck (running but heartbeat has expired).
-- Used for recovery after coderd crashes or long hangs.
@@ -878,6 +1131,13 @@ JOIN group_members_expanded gme ON gme.group_id = g.id
WHERE gme.user_id = @user_id::uuid
AND g.chat_spend_limit_micros IS NOT NULL;
-- name: GetChatsByWorkspaceIDs :many
SELECT *
FROM chats
WHERE archived = false
AND workspace_id = ANY(@ids::uuid[])
ORDER BY workspace_id, updated_at DESC;
-- name: ResolveUserChatSpendLimit :one
-- Resolves the effective spend limit for a user using the hierarchy:
-- 1. Individual user override (highest priority)
@@ -905,3 +1165,10 @@ LEFT JOIN LATERAL (
) gl ON TRUE
WHERE u.id = @user_id::uuid
LIMIT 1;
-- name: UpdateChatLastReadMessageID :exec
-- Updates the last read message ID for a chat. This is used to track
-- which messages the owner has seen, enabling unread indicators.
UPDATE chats
SET last_read_message_id = @last_read_message_id::bigint
WHERE id = @id::uuid;
@@ -77,6 +77,7 @@ INSERT INTO mcp_server_configs (
tool_deny_list,
availability,
enabled,
model_intent,
created_by,
updated_by
) VALUES (
@@ -102,6 +103,7 @@ INSERT INTO mcp_server_configs (
@tool_deny_list::text[],
@availability::text,
@enabled::boolean,
@model_intent::boolean,
@created_by::uuid,
@updated_by::uuid
)
@@ -134,6 +136,7 @@ SET
tool_deny_list = @tool_deny_list::text[],
availability = @availability::text,
enabled = @enabled::boolean,
model_intent = @model_intent::boolean,
updated_by = @updated_by::uuid,
updated_at = NOW()
WHERE

@@ -137,6 +137,24 @@ SELECT
SELECT
COALESCE((SELECT value FROM site_configs WHERE key = 'agents_chat_system_prompt'), '') :: text AS chat_system_prompt;
-- GetChatSystemPromptConfig returns both chat system prompt settings in a
-- single read to avoid torn reads between separate site-config lookups.
-- The include-default fallback preserves the legacy behavior where a
-- non-empty custom prompt implied opting out before the explicit toggle
-- existed.
-- name: GetChatSystemPromptConfig :one
SELECT
COALESCE((SELECT value FROM site_configs WHERE key = 'agents_chat_system_prompt'), '') :: text AS chat_system_prompt,
COALESCE(
(SELECT value = 'true' FROM site_configs WHERE key = 'agents_chat_include_default_system_prompt'),
NOT EXISTS (
SELECT 1
FROM site_configs
WHERE key = 'agents_chat_system_prompt'
AND value != ''
)
) :: boolean AS include_default_system_prompt;
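The COALESCE fallback above can be restated as a small Go resolver. Hypothetical helper, not part of the codebase: an explicitly stored toggle wins; when unset, a non-empty custom prompt implies the legacy opt-out (false), otherwise the default is included.

```go
package main

// resolveIncludeDefault mirrors the COALESCE fallback in
// GetChatSystemPromptConfig. A nil toggle means the site config key
// is unset.
func resolveIncludeDefault(toggle *bool, customPrompt string) bool {
	if toggle != nil {
		return *toggle
	}
	return customPrompt == ""
}
```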
-- name: UpsertChatSystemPrompt :exec
INSERT INTO site_configs (key, value) VALUES ('agents_chat_system_prompt', $1)
ON CONFLICT (key) DO UPDATE SET value = $1 WHERE site_configs.key = 'agents_chat_system_prompt';
@@ -167,6 +185,38 @@ WHERE site_configs.key = 'agents_desktop_enabled';
SELECT
COALESCE((SELECT value FROM site_configs WHERE key = 'agents_template_allowlist'), '') :: text AS template_allowlist;
-- GetChatIncludeDefaultSystemPrompt preserves the legacy default
-- for deployments created before the explicit include-default toggle.
-- When the toggle is unset, a non-empty custom prompt implies false;
-- otherwise the setting defaults to true.
-- name: GetChatIncludeDefaultSystemPrompt :one
SELECT
COALESCE(
(SELECT value = 'true' FROM site_configs WHERE key = 'agents_chat_include_default_system_prompt'),
NOT EXISTS (
SELECT 1
FROM site_configs
WHERE key = 'agents_chat_system_prompt'
AND value != ''
)
) :: boolean AS include_default_system_prompt;
-- name: UpsertChatIncludeDefaultSystemPrompt :exec
INSERT INTO site_configs (key, value)
VALUES (
'agents_chat_include_default_system_prompt',
CASE
WHEN sqlc.arg(include_default_system_prompt)::bool THEN 'true'
ELSE 'false'
END
)
ON CONFLICT (key) DO UPDATE
SET value = CASE
WHEN sqlc.arg(include_default_system_prompt)::bool THEN 'true'
ELSE 'false'
END
WHERE site_configs.key = 'agents_chat_include_default_system_prompt';
-- name: GetChatWorkspaceTTL :one
-- Returns the global TTL for chat workspaces as a Go duration string.
-- Returns "0s" (disabled) when no value has been configured.
@@ -247,6 +247,8 @@ sql:
mcp_server_tool_snapshots: MCPServerToolSnapshots
mcp_server_config_id: MCPServerConfigID
mcp_server_ids: MCPServerIDs
automation_mcp_server_ids: AutomationMCPServerIDs
webhook_secret_key_id: WebhookSecretKeyID
icon_url: IconURL
oauth2_client_id: OAuth2ClientID
oauth2_client_secret: OAuth2ClientSecret
@@ -15,6 +15,9 @@ const (
UniqueAPIKeysPkey UniqueConstraint = "api_keys_pkey" // ALTER TABLE ONLY api_keys ADD CONSTRAINT api_keys_pkey PRIMARY KEY (id);
UniqueAuditLogsPkey UniqueConstraint = "audit_logs_pkey" // ALTER TABLE ONLY audit_logs ADD CONSTRAINT audit_logs_pkey PRIMARY KEY (id);
UniqueBoundaryUsageStatsPkey UniqueConstraint = "boundary_usage_stats_pkey" // ALTER TABLE ONLY boundary_usage_stats ADD CONSTRAINT boundary_usage_stats_pkey PRIMARY KEY (replica_id);
UniqueChatAutomationEventsPkey UniqueConstraint = "chat_automation_events_pkey" // ALTER TABLE ONLY chat_automation_events ADD CONSTRAINT chat_automation_events_pkey PRIMARY KEY (id);
UniqueChatAutomationTriggersPkey UniqueConstraint = "chat_automation_triggers_pkey" // ALTER TABLE ONLY chat_automation_triggers ADD CONSTRAINT chat_automation_triggers_pkey PRIMARY KEY (id);
UniqueChatAutomationsPkey UniqueConstraint = "chat_automations_pkey" // ALTER TABLE ONLY chat_automations ADD CONSTRAINT chat_automations_pkey PRIMARY KEY (id);
UniqueChatDiffStatusesPkey UniqueConstraint = "chat_diff_statuses_pkey" // ALTER TABLE ONLY chat_diff_statuses ADD CONSTRAINT chat_diff_statuses_pkey PRIMARY KEY (chat_id);
UniqueChatFilesPkey UniqueConstraint = "chat_files_pkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_pkey PRIMARY KEY (id);
UniqueChatMessagesPkey UniqueConstraint = "chat_messages_pkey" // ALTER TABLE ONLY chat_messages ADD CONSTRAINT chat_messages_pkey PRIMARY KEY (id);
@@ -125,6 +128,7 @@ const (
UniqueWorkspaceResourcesPkey UniqueConstraint = "workspace_resources_pkey" // ALTER TABLE ONLY workspace_resources ADD CONSTRAINT workspace_resources_pkey PRIMARY KEY (id);
UniqueWorkspacesPkey UniqueConstraint = "workspaces_pkey" // ALTER TABLE ONLY workspaces ADD CONSTRAINT workspaces_pkey PRIMARY KEY (id);
UniqueIndexAPIKeyName UniqueConstraint = "idx_api_key_name" // CREATE UNIQUE INDEX idx_api_key_name ON api_keys USING btree (user_id, token_name) WHERE (login_type = 'token'::login_type);
UniqueIndexChatAutomationsOwnerOrgName UniqueConstraint = "idx_chat_automations_owner_org_name" // CREATE UNIQUE INDEX idx_chat_automations_owner_org_name ON chat_automations USING btree (owner_id, organization_id, name);
UniqueIndexChatModelConfigsSingleDefault UniqueConstraint = "idx_chat_model_configs_single_default" // CREATE UNIQUE INDEX idx_chat_model_configs_single_default ON chat_model_configs USING btree ((1)) WHERE ((is_default = true) AND (deleted = false));
UniqueIndexConnectionLogsConnectionIDWorkspaceIDAgentName UniqueConstraint = "idx_connection_logs_connection_id_workspace_id_agent_name" // CREATE UNIQUE INDEX idx_connection_logs_connection_id_workspace_id_agent_name ON connection_logs USING btree (connection_id, workspace_id, agent_name);
UniqueIndexCustomRolesNameLowerOrganizationID UniqueConstraint = "idx_custom_roles_name_lower_organization_id" // CREATE UNIQUE INDEX idx_custom_roles_name_lower_organization_id ON custom_roles USING btree (lower(name), COALESCE(organization_id, '00000000-0000-0000-0000-000000000000'::uuid));
@@ -33,6 +33,7 @@ import (
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/db2sdk"
"github.com/coder/coder/v2/coderd/database/dbauthz"
dbpubsub "github.com/coder/coder/v2/coderd/database/pubsub"
"github.com/coder/coder/v2/coderd/externalauth"
"github.com/coder/coder/v2/coderd/externalauth/gitprovider"
"github.com/coder/coder/v2/coderd/httpapi"
@@ -44,6 +45,7 @@ import (
"github.com/coder/coder/v2/coderd/searchquery"
"github.com/coder/coder/v2/coderd/tracing"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/coderd/util/xjson"
"github.com/coder/coder/v2/coderd/workspaceapps"
"github.com/coder/coder/v2/coderd/x/chatd"
"github.com/coder/coder/v2/coderd/x/chatd/chatprovider"
@@ -109,6 +111,28 @@ func maybeWriteLimitErr(ctx context.Context, rw http.ResponseWriter, err error)
return false
}
func publishChatConfigEvent(logger slog.Logger, ps dbpubsub.Pubsub, kind pubsub.ChatConfigEventKind, entityID uuid.UUID) {
payload, err := json.Marshal(pubsub.ChatConfigEvent{
Kind: kind,
EntityID: entityID,
})
if err != nil {
logger.Error(context.Background(), "failed to marshal chat config event",
slog.F("kind", kind),
slog.F("entity_id", entityID),
slog.Error(err),
)
return
}
if err := ps.Publish(pubsub.ChatConfigEventChannel, payload); err != nil {
logger.Error(context.Background(), "failed to publish chat config event",
slog.F("kind", kind),
slog.F("entity_id", entityID),
slog.Error(err),
)
}
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
func (api *API) watchChats(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@@ -172,6 +196,88 @@ func (api *API) watchChats(rw http.ResponseWriter, r *http.Request) {
}
}
// EXPERIMENTAL: chatsByWorkspace returns a mapping of workspace ID to
// the latest non-archived chat ID for each requested workspace.
// The query returns all matching chats and RBAC post-filters them;
// the handler then picks the latest per workspace in Go. This avoids
// the DISTINCT ON + post-filter bug where the sole candidate is
// silently dropped when the caller can't read it.
//
// TODO:
// 1. move aggregation to a SQL view with proper in-query authz so we
// can return a single row per workspace without this two-pass approach.
// 2. Restore the below router annotation and un-skip docs gen
// <at>Router /experimental/chats/by-workspace [post]
//
// @Summary Get latest chats by workspace IDs
// @ID get-latest-chats-by-workspace-ids
// @Security CoderSessionToken
// @Tags Chats
// @Accept json
// @Produce json
// @Success 200
// @x-apidocgen {"skip": true}
func (api *API) chatsByWorkspace(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
idsParam := r.URL.Query().Get("workspace_ids")
if idsParam == "" {
httpapi.Write(ctx, rw, http.StatusOK, map[uuid.UUID]uuid.UUID{})
return
}
raw := strings.Split(idsParam, ",")
// maxWorkspaceIDs is coupled to DEFAULT_RECORDS_PER_PAGE (25) in
// site/src/components/PaginationWidget/utils.ts.
// If the page size changes, this limit should too.
const maxWorkspaceIDs = 25
if len(raw) > maxWorkspaceIDs {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: fmt.Sprintf("Too many workspace IDs, maximum is %d.", maxWorkspaceIDs),
})
return
}
workspaceIDs := make([]uuid.UUID, 0, len(raw))
for _, s := range raw {
id, err := uuid.Parse(strings.TrimSpace(s))
if err != nil {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: fmt.Sprintf("Invalid workspace ID %q: %s", s, err),
})
return
}
workspaceIDs = append(workspaceIDs, id)
}
chats, err := api.Database.GetChatsByWorkspaceIDs(ctx, workspaceIDs)
if httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return
} else if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to get chats by workspace.",
Detail: err.Error(),
})
return
}
// The SQL orders by (workspace_id, updated_at DESC), so the first
// chat seen per workspace after RBAC filtering is the latest
// readable one.
result := make(map[uuid.UUID]uuid.UUID, len(chats))
for _, chat := range chats {
if chat.WorkspaceID.Valid {
if _, exists := result[chat.WorkspaceID.UUID]; !exists {
result[chat.WorkspaceID.UUID] = chat.ID
}
}
}
httpapi.Write(ctx, rw, http.StatusOK, result)
}
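The "first chat seen per workspace wins" aggregation in the loop above can be isolated as a pure function. Sketch with string IDs for brevity; the real handler uses uuid.UUID and a nullable workspace column.

```go
package main

// chatRow is a minimal stand-in for the rows returned by
// GetChatsByWorkspaceIDs, already ordered by (workspace_id, updated_at DESC).
type chatRow struct {
	ID          string
	WorkspaceID string
}

// latestPerWorkspace keeps the first (latest readable) chat per workspace,
// matching the handler's two-pass RBAC-then-aggregate approach.
func latestPerWorkspace(chats []chatRow) map[string]string {
	result := make(map[string]string, len(chats))
	for _, c := range chats {
		if _, seen := result[c.WorkspaceID]; !seen {
			result[c.WorkspaceID] = c.ID
		}
	}
	return result
}
```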
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
func (api *API) listChats(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@@ -230,7 +336,7 @@ func (api *API) listChats(rw http.ResponseWriter, r *http.Request) {
LimitOpt: int32(paginationParams.Limit),
}
chats, err := api.Database.GetChats(ctx, params)
chatRows, err := api.Database.GetChats(ctx, params)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to list chats.",
@@ -239,7 +345,13 @@ func (api *API) listChats(rw http.ResponseWriter, r *http.Request) {
return
}
diffStatusesByChatID, err := api.getChatDiffStatusesByChatID(ctx, chats)
// Extract the Chat objects for diff status lookup.
dbChats := make([]database.Chat, len(chatRows))
for i, row := range chatRows {
dbChats[i] = row.Chat
}
diffStatusesByChatID, err := api.getChatDiffStatusesByChatID(ctx, dbChats)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to list chats.",
@@ -248,7 +360,7 @@ func (api *API) listChats(rw http.ResponseWriter, r *http.Request) {
return
}
httpapi.Write(ctx, rw, http.StatusOK, convertChats(chats, diffStatusesByChatID))
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.ChatRows(chatRows, diffStatusesByChatID))
}
func (api *API) getChatDiffStatusesByChatID(
@@ -281,6 +393,11 @@ func (api *API) postChats(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
apiKey := httpmw.APIKey(r)
if !api.Authorize(r, policy.ActionCreate, rbac.ResourceChat.WithOwner(apiKey.UserID.String())) {
httpapi.Forbidden(rw)
return
}
var req codersdk.CreateChatRequest
if !httpapi.Read(ctx, rw, r, &req) {
return
@@ -386,6 +503,10 @@ func (api *API) postChats(rw http.ResponseWriter, r *http.Request) {
})
return
}
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
return
}
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to create chat.",
Detail: err.Error(),
@@ -393,7 +514,7 @@ func (api *API) postChats(rw http.ResponseWriter, r *http.Request) {
return
}
httpapi.Write(ctx, rw, http.StatusCreated, convertChat(chat, nil))
httpapi.Write(ctx, rw, http.StatusCreated, db2sdk.Chat(chat, nil))
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
@@ -504,6 +625,10 @@ func (api *API) chatCostSummary(rw http.ResponseWriter, r *http.Request) {
EndDate: endDate,
})
if err != nil {
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
return
}
httpapi.InternalServerError(rw, err)
return
}
@@ -514,6 +639,10 @@ func (api *API) chatCostSummary(rw http.ResponseWriter, r *http.Request) {
EndDate: endDate,
})
if err != nil {
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
return
}
httpapi.InternalServerError(rw, err)
return
}
@@ -524,6 +653,10 @@ func (api *API) chatCostSummary(rw http.ResponseWriter, r *http.Request) {
EndDate: endDate,
})
if err != nil {
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
return
}
httpapi.InternalServerError(rw, err)
return
}
@@ -1118,7 +1251,7 @@ func (api *API) getChat(rw http.ResponseWriter, r *http.Request) {
slog.Error(err),
)
}
httpapi.Write(ctx, rw, http.StatusOK, convertChat(chat, diffStatus))
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(chat, diffStatus))
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
@@ -1449,8 +1582,8 @@ func (api *API) watchChatDesktop(rw http.ResponseWriter, r *http.Request) {
logger.Debug(ctx, "desktop Bicopy finished")
}
// patchChat updates a chat resource. Supports updating labels and
// toggling the archived state.
// patchChat updates a chat resource. Supports updating labels,
// archiving, pinning, and pinned-chat ordering.
func (api *API) patchChat(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
chat := httpmw.ChatParam(r)
@@ -1508,20 +1641,20 @@ func (api *API) patchChat(rw http.ResponseWriter, r *http.Request) {
}
var err error
// Use chatDaemon when available so it can notify active
// subscribers. Fall back to direct DB for the simple
// archive flag — no streaming state is involved.
// Use chatDaemon when available so it can interrupt active
// processing before broadcasting archive state. Fall back to
// direct DB when no daemon is running.
if archived {
if api.chatDaemon != nil {
err = api.chatDaemon.ArchiveChat(ctx, chat)
} else {
err = api.Database.ArchiveChatByID(ctx, chat.ID)
_, err = api.Database.ArchiveChatByID(ctx, chat.ID)
}
} else {
if api.chatDaemon != nil {
err = api.chatDaemon.UnarchiveChat(ctx, chat)
} else {
err = api.Database.UnarchiveChatByID(ctx, chat.ID)
_, err = api.Database.UnarchiveChatByID(ctx, chat.ID)
}
}
if err != nil {
@@ -1537,6 +1670,54 @@ func (api *API) patchChat(rw http.ResponseWriter, r *http.Request) {
}
}
if req.PinOrder != nil {
pinOrder := *req.PinOrder
if pinOrder < 0 {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "Pin order must be non-negative.",
})
return
}
if pinOrder > 0 && chat.Archived {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "Cannot pin an archived chat.",
})
return
}
// The behavior depends on current pin state:
// - pinOrder == 0: unpin.
// - pinOrder > 0 && already pinned: reorder (shift
// neighbors, clamp to [1, count]).
// - pinOrder > 0 && not pinned: append to end. The
// requested value is intentionally ignored because
// PinChatByID also bumps updated_at to keep the
// chat visible in the paginated sidebar.
var err error
errMsg := "Failed to pin chat."
switch {
case pinOrder == 0:
errMsg = "Failed to unpin chat."
err = api.Database.UnpinChatByID(ctx, chat.ID)
case chat.PinOrder > 0:
errMsg = "Failed to reorder pinned chat."
err = api.Database.UpdateChatPinOrder(ctx, database.UpdateChatPinOrderParams{
ID: chat.ID,
PinOrder: pinOrder,
})
default:
err = api.Database.PinChatByID(ctx, chat.ID)
}
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: errMsg,
Detail: err.Error(),
})
return
}
}
rw.WriteHeader(http.StatusNoContent)
}
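The three-way pin decision in patchChat can be summarized as a classifier. Sketch with illustrative action names; the handler dispatches to UnpinChatByID, UpdateChatPinOrder, or PinChatByID instead of returning strings.

```go
package main

// pinAction classifies a requested pin_order against the chat's current
// pin state, mirroring the switch in patchChat:
//   - requested == 0: unpin
//   - requested > 0 and already pinned: reorder among pinned chats
//   - requested > 0 and not pinned: append to the end of the pin list
func pinAction(requested, current int32) string {
	switch {
	case requested == 0:
		return "unpin"
	case current > 0:
		return "reorder"
	default:
		return "append"
	}
}
```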
@@ -1793,6 +1974,39 @@ func (api *API) promoteChatQueuedMessage(rw http.ResponseWriter, r *http.Request
httpapi.Write(ctx, rw, http.StatusOK, convertChatMessage(promoteResult.PromotedMessage))
}
// markChatAsRead updates the last read message ID for a chat to the
// latest message, so subsequent unread checks treat all current
// messages as seen. This is called on stream connect and disconnect
// to avoid per-message API calls during active streaming.
func (api *API) markChatAsRead(ctx context.Context, chatID uuid.UUID) {
lastMsg, err := api.Database.GetLastChatMessageByRole(ctx, database.GetLastChatMessageByRoleParams{
ChatID: chatID,
Role: database.ChatMessageRoleAssistant,
})
if errors.Is(err, sql.ErrNoRows) {
// No assistant messages yet, nothing to mark as read.
return
}
if err != nil {
api.Logger.Warn(ctx, "failed to get last assistant message for read marker",
slog.F("chat_id", chatID),
slog.Error(err),
)
return
}
err = api.Database.UpdateChatLastReadMessageID(ctx, database.UpdateChatLastReadMessageIDParams{
ID: chatID,
LastReadMessageID: lastMsg.ID,
})
if err != nil {
api.Logger.Warn(ctx, "failed to update chat last read message ID",
slog.F("chat_id", chatID),
slog.Error(err),
)
}
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
func (api *API) streamChat(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@@ -1849,6 +2063,12 @@ func (api *API) streamChat(rw http.ResponseWriter, r *http.Request) {
}()
defer cancel()
// Mark the chat as read when the stream connects and again
// when it disconnects so we avoid per-message API calls while
// messages are actively streaming.
api.markChatAsRead(ctx, chatID)
defer api.markChatAsRead(context.WithoutCancel(ctx), chatID)
sendChatStreamBatch := func(batch []codersdk.ChatStreamEvent) error {
if len(batch) == 0 {
return nil
@@ -1948,7 +2168,51 @@ func (api *API) interruptChat(rw http.ResponseWriter, r *http.Request) {
chat = updatedChat
}
httpapi.Write(ctx, rw, http.StatusOK, convertChat(chat, nil))
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(chat, nil))
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
//
//nolint:revive // HTTP handler writes to ResponseWriter.
func (api *API) regenerateChatTitle(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
chat := httpmw.ChatParam(r)
if !api.Authorize(r, policy.ActionUpdate, chat.RBACObject()) {
httpapi.ResourceNotFound(rw)
return
}
if api.chatDaemon == nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Chat processor is unavailable.",
Detail: "Chat processor is not configured.",
})
return
}
updatedChat, err := api.chatDaemon.RegenerateChatTitle(ctx, chat)
if err != nil {
if errors.Is(err, chatd.ErrManualTitleRegenerationInProgress) {
httpapi.Write(ctx, rw, http.StatusConflict, codersdk.Response{
Message: "Title regeneration already in progress for this chat.",
})
return
}
if maybeWriteLimitErr(ctx, rw, err) {
return
}
if httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return
}
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to regenerate chat title.",
Detail: err.Error(),
})
return
}
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(updatedChat, nil))
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
@@ -2679,25 +2943,35 @@ func detectChatFileType(data []byte) string {
//nolint:revive // get-return: revive assumes get* must be a getter, but this is an HTTP handler.
func (api *API) getChatSystemPrompt(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
prompt, err := api.Database.GetChatSystemPrompt(ctx)
if !api.Authorize(r, policy.ActionUpdate, rbac.ResourceDeploymentConfig) {
httpapi.ResourceNotFound(rw)
return
}
config, err := api.Database.GetChatSystemPromptConfig(ctx)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching chat system prompt.",
Message: "Internal error fetching chat system prompt configuration.",
Detail: err.Error(),
})
return
}
httpapi.Write(ctx, rw, http.StatusOK, codersdk.ChatSystemPrompt{
SystemPrompt: prompt,
httpapi.Write(ctx, rw, http.StatusOK, codersdk.ChatSystemPromptResponse{
SystemPrompt: config.ChatSystemPrompt,
IncludeDefaultSystemPrompt: config.IncludeDefaultSystemPrompt,
DefaultSystemPrompt: chatd.DefaultSystemPrompt,
})
}
func (api *API) putChatSystemPrompt(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
if !api.Authorize(r, policy.ActionUpdate, rbac.ResourceDeploymentConfig) {
httpapi.Forbidden(rw)
return
}
// Cap the raw request body to prevent excessive memory use from
// payloads padded with invisible characters that sanitize away.
r.Body = http.MaxBytesReader(rw, r.Body, int64(2*maxSystemPromptLenBytes))
var req codersdk.ChatSystemPrompt
var req codersdk.UpdateChatSystemPromptRequest
if !httpapi.Read(ctx, rw, r, &req) {
return
}
@@ -2711,13 +2985,23 @@ func (api *API) putChatSystemPrompt(rw http.ResponseWriter, r *http.Request) {
})
return
}
err := api.Database.UpsertChatSystemPrompt(ctx, sanitizedPrompt)
if httpapi.Is404Error(err) { // also catches authz error
httpapi.ResourceNotFound(rw)
return
} else if err != nil {
err := api.Database.InTx(func(tx database.Store) error {
if err := tx.UpsertChatSystemPrompt(ctx, sanitizedPrompt); err != nil {
return err
}
// Only update the include-default flag when the caller explicitly
// provides it. Omitting the field preserves whatever is currently
// stored (or the schema-level default for new deployments),
// avoiding a backward-compatibility regression for older clients
// that only send system_prompt.
if req.IncludeDefaultSystemPrompt != nil {
return tx.UpsertChatIncludeDefaultSystemPrompt(ctx, *req.IncludeDefaultSystemPrompt)
}
return nil
}, nil)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error updating chat system prompt.",
Message: "Internal error updating chat system prompt configuration.",
Detail: err.Error(),
})
return
@@ -2870,7 +3154,7 @@ func (api *API) getChatTemplateAllowlist(rw http.ResponseWriter, r *http.Request
})
return
}
ids, parseErr := parseChatTemplateAllowlist(raw)
parsed, parseErr := xjson.ParseUUIDList(raw)
if parseErr != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Stored template allowlist is corrupt.",
@@ -2878,6 +3162,10 @@ func (api *API) getChatTemplateAllowlist(rw http.ResponseWriter, r *http.Request
})
return
}
ids := make([]string, len(parsed))
for i, id := range parsed {
ids[i] = id.String()
}
resp := codersdk.ChatTemplateAllowlist{
TemplateIDs: ids,
}
@@ -2983,24 +3271,6 @@ func (api *API) putChatTemplateAllowlist(rw http.ResponseWriter, r *http.Request
rw.WriteHeader(http.StatusNoContent)
}
// parseChatTemplateAllowlist parses the raw JSON string from the
// database into a list of template ID strings. Returns an empty
// slice when the value is empty. Returns an error when the stored
// JSON is corrupt or otherwise cannot be unmarshalled.
func parseChatTemplateAllowlist(raw string) ([]string, error) {
if raw == "" {
return []string{}, nil
}
var ids []string
if err := json.Unmarshal([]byte(raw), &ids); err != nil {
return nil, xerrors.Errorf("unmarshal template allowlist: %w", err)
}
if ids == nil {
return []string{}, nil
}
return ids, nil
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
//
//nolint:revive // get-return: revive assumes get* must be a getter, but this is an HTTP handler.
@@ -3065,6 +3335,8 @@ func (api *API) putUserChatCustomPrompt(rw http.ResponseWriter, r *http.Request)
return
}
publishChatConfigEvent(api.Logger, api.Pubsub, pubsub.ChatConfigEventUserPrompt, apiKey.UserID)
httpapi.Write(ctx, rw, http.StatusOK, codersdk.UserChatCustomPrompt{
CustomPrompt: updatedConfig.Value,
})
@@ -3235,21 +3507,32 @@ func (api *API) deleteUserChatCompactionThreshold(rw http.ResponseWriter, r *htt
}
func (api *API) resolvedChatSystemPrompt(ctx context.Context) string {
custom, err := api.Database.GetChatSystemPrompt(ctx)
config, err := api.Database.GetChatSystemPromptConfig(ctx)
if err != nil {
// Log but don't fail chat creation — fall back to the
// built-in default so the user isn't blocked.
api.Logger.Error(ctx, "failed to fetch custom chat system prompt, using default", slog.Error(err))
// We intentionally fail open here. When the prompt configuration
// cannot be read, returning the built-in default keeps the chat
// grounded instead of sending no system guidance at all.
api.Logger.Error(ctx, "failed to fetch chat system prompt configuration, using default", slog.Error(err))
return chatd.DefaultSystemPrompt
}
sanitized := chatd.SanitizePromptText(custom)
if sanitized == "" && strings.TrimSpace(custom) != "" {
api.Logger.Warn(ctx, "custom system prompt became empty after sanitization, using default")
sanitizedCustom := chatd.SanitizePromptText(config.ChatSystemPrompt)
if sanitizedCustom == "" && strings.TrimSpace(config.ChatSystemPrompt) != "" {
api.Logger.Warn(ctx, "custom system prompt became empty after sanitization, omitting custom portion")
}
if sanitized != "" {
return sanitized
var parts []string
if config.IncludeDefaultSystemPrompt {
parts = append(parts, chatd.DefaultSystemPrompt)
}
return chatd.DefaultSystemPrompt
if sanitizedCustom != "" {
parts = append(parts, sanitizedCustom)
}
result := strings.Join(parts, "\n\n")
if result == "" {
api.Logger.Warn(ctx, "resolved system prompt is empty, no system prompt will be injected into chats")
}
return result
}
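The prompt composition at the end of resolvedChatSystemPrompt reduces to a join over the enabled parts. Sketch only: sanitization lives in chatd.SanitizePromptText and is assumed to have already run on the custom prompt.

```go
package main

import "strings"

// composeSystemPrompt joins the optional built-in default with the
// already-sanitized custom prompt, separated by a blank line, mirroring
// the resolution logic above. An empty result means no system prompt
// is injected.
func composeSystemPrompt(includeDefault bool, defaultPrompt, sanitizedCustom string) string {
	var parts []string
	if includeDefault {
		parts = append(parts, defaultPrompt)
	}
	if sanitizedCustom != "" {
		parts = append(parts, sanitizedCustom)
	}
	return strings.Join(parts, "\n\n")
}
```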
func (api *API) postChatFile(rw http.ResponseWriter, r *http.Request) {
@@ -3571,73 +3854,6 @@ func truncateRunes(value string, maxLen int) string {
return string(runes[:maxLen])
}
func convertChat(c database.Chat, diffStatus *database.ChatDiffStatus) codersdk.Chat {
mcpServerIDs := c.MCPServerIDs
if mcpServerIDs == nil {
mcpServerIDs = []uuid.UUID{}
}
labels := map[string]string(c.Labels)
if labels == nil {
labels = map[string]string{}
}
chat := codersdk.Chat{
ID: c.ID,
OwnerID: c.OwnerID,
LastModelConfigID: c.LastModelConfigID,
Title: c.Title,
Status: codersdk.ChatStatus(c.Status),
Archived: c.Archived,
CreatedAt: c.CreatedAt,
UpdatedAt: c.UpdatedAt,
MCPServerIDs: mcpServerIDs,
Labels: labels,
}
if c.LastError.Valid {
chat.LastError = &c.LastError.String
}
if c.ParentChatID.Valid {
parentChatID := c.ParentChatID.UUID
chat.ParentChatID = &parentChatID
}
switch {
case c.RootChatID.Valid:
rootChatID := c.RootChatID.UUID
chat.RootChatID = &rootChatID
case c.ParentChatID.Valid:
rootChatID := c.ParentChatID.UUID
chat.RootChatID = &rootChatID
default:
rootChatID := c.ID
chat.RootChatID = &rootChatID
}
if c.WorkspaceID.Valid {
chat.WorkspaceID = &c.WorkspaceID.UUID
}
if diffStatus != nil {
convertedDiffStatus := db2sdk.ChatDiffStatus(c.ID, diffStatus)
chat.DiffStatus = &convertedDiffStatus
}
return chat
}
func convertChats(chats []database.Chat, diffStatusesByChatID map[uuid.UUID]database.ChatDiffStatus) []codersdk.Chat {
result := make([]codersdk.Chat, len(chats))
for i, c := range chats {
diffStatus, ok := diffStatusesByChatID[c.ID]
if ok {
result[i] = convertChat(c, &diffStatus)
continue
}
result[i] = convertChat(c, nil)
if diffStatusesByChatID != nil {
emptyDiffStatus := db2sdk.ChatDiffStatus(c.ID, nil)
result[i].DiffStatus = &emptyDiffStatus
}
}
return result
}
func convertChatCostModelBreakdown(model database.GetChatCostPerModelRow) codersdk.ChatCostModelBreakdown {
displayName := strings.TrimSpace(model.DisplayName)
if displayName == "" {
@@ -3893,6 +4109,8 @@ func (api *API) createChatProvider(rw http.ResponseWriter, r *http.Request) {
}
}
publishChatConfigEvent(api.Logger, api.Pubsub, pubsub.ChatConfigEventProviders, uuid.Nil)
httpapi.Write(
ctx,
rw,
@@ -3979,6 +4197,8 @@ func (api *API) updateChatProvider(rw http.ResponseWriter, r *http.Request) {
return
}
publishChatConfigEvent(api.Logger, api.Pubsub, pubsub.ChatConfigEventProviders, uuid.Nil)
httpapi.Write(
ctx,
rw,
@@ -4033,6 +4253,8 @@ func (api *API) deleteChatProvider(rw http.ResponseWriter, r *http.Request) {
return
}
publishChatConfigEvent(api.Logger, api.Pubsub, pubsub.ChatConfigEventProviders, uuid.Nil)
rw.WriteHeader(http.StatusNoContent)
}
@@ -4212,6 +4434,8 @@ func (api *API) createChatModelConfig(rw http.ResponseWriter, r *http.Request) {
}
}
publishChatConfigEvent(api.Logger, api.Pubsub, pubsub.ChatConfigEventModelConfig, inserted.ID)
httpapi.Write(ctx, rw, http.StatusCreated, convertChatModelConfig(inserted))
}
@@ -4383,6 +4607,8 @@ func (api *API) updateChatModelConfig(rw http.ResponseWriter, r *http.Request) {
}
}
publishChatConfigEvent(api.Logger, api.Pubsub, pubsub.ChatConfigEventModelConfig, updated.ID)
httpapi.Write(ctx, rw, http.StatusOK, convertChatModelConfig(updated))
}
@@ -4423,6 +4649,8 @@ func (api *API) deleteChatModelConfig(rw http.ResponseWriter, r *http.Request) {
return
}
publishChatConfigEvent(api.Logger, api.Pubsub, pubsub.ChatConfigEventModelConfig, modelConfigID)
rw.WriteHeader(http.StatusNoContent)
}
File diff suppressed because it is too large.
@@ -71,8 +71,8 @@ func (r *ProvisionerDaemonsReport) Run(ctx context.Context, opts *ProvisionerDae
return
}
// nolint: gocritic // need an actor to fetch provisioner daemons
daemons, err := opts.Store.GetProvisionerDaemons(dbauthz.AsSystemRestricted(ctx))
// nolint: gocritic // Read-only access to provisioner daemons for health check
daemons, err := opts.Store.GetProvisionerDaemons(dbauthz.AsSystemReadProvisionerDaemons(ctx))
if err != nil {
r.Severity = health.SeverityError
r.Error = ptr.Ref("error fetching provisioner daemons: " + err.Error())
@@ -438,7 +438,7 @@ func OneWayWebSocketEventSender(log slog.Logger) func(rw http.ResponseWriter, r
}
go HeartbeatClose(ctx, log, cancel, socket)
eventC := make(chan codersdk.ServerSentEvent)
eventC := make(chan codersdk.ServerSentEvent, 64)
socketErrC := make(chan websocket.CloseError, 1)
closed := make(chan struct{})
go func() {
@@ -488,6 +488,16 @@ func OneWayWebSocketEventSender(log slog.Logger) func(rw http.ResponseWriter, r
}()
sendEvent := func(event codersdk.ServerSentEvent) error {
// Prioritize context cancellation over sending to the
// buffered channel. Without this check, both cases in
// the select below can fire simultaneously when the
// context is already done and the channel has capacity,
// making the result nondeterministic.
select {
case <-ctx.Done():
return ctx.Err()
default:
}
select {
case eventC <- event:
case <-ctx.Done():
@@ -699,8 +699,8 @@ func ExtractAPIKey(rw http.ResponseWriter, r *http.Request, cfg ExtractAPIKeyCon
// is being used with the correct audience/resource server (RFC 8707).
func validateOAuth2ProviderAppTokenAudience(ctx context.Context, db database.Store, key database.APIKey, accessURL *url.URL, r *http.Request) error {
// Get the OAuth2 provider app token to check its audience
//nolint:gocritic // System needs to access token for audience validation
token, err := db.GetOAuth2ProviderAppTokenByAPIKeyID(dbauthz.AsSystemRestricted(ctx), key.ID)
//nolint:gocritic // OAuth2 system context — audience validation for provider app tokens
token, err := db.GetOAuth2ProviderAppTokenByAPIKeyID(dbauthz.AsSystemOAuth2(ctx), key.ID)
if err != nil {
return xerrors.Errorf("failed to get OAuth2 token: %w", err)
}
-1
@@ -73,7 +73,6 @@ func CSRF(cookieCfg codersdk.HTTPCookieConfig) func(next http.Handler) http.Hand
// CSRF only affects requests that automatically attach credentials via a cookie.
// If no cookie is present, then there is no risk of CSRF.
-//nolint:govet
sessCookie, err := r.Cookie(codersdk.SessionTokenCookie)
if xerrors.Is(err, http.ErrNoCookie) {
return true
+430 -48
@@ -1,17 +1,20 @@
package coderd
import (
"bytes"
"context"
"database/sql"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
"github.com/go-chi/chi/v5"
"github.com/google/uuid"
"github.com/mark3labs/mcp-go/client/transport"
"github.com/mark3labs/mcp-go/mcp"
"golang.org/x/oauth2"
"golang.org/x/xerrors"
@@ -22,6 +25,7 @@ import (
"github.com/coder/coder/v2/coderd/httpmw"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/x/chatd/mcpclient"
"github.com/coder/coder/v2/codersdk"
)
@@ -56,7 +60,8 @@ func (api *API) listMCPServerConfigs(rw http.ResponseWriter, r *http.Request) {
}
// Look up the calling user's OAuth2 tokens so we can populate
-// auth_connected per server.
+// auth_connected per server. Attempt to refresh expired tokens
+// so the status is accurate and the token is ready for use.
//nolint:gocritic // Need to check user tokens across all servers.
userTokens, err := api.Database.GetMCPServerUserTokensByUserID(dbauthz.AsSystemRestricted(ctx), apiKey.UserID)
if err != nil {
@@ -66,9 +71,20 @@ func (api *API) listMCPServerConfigs(rw http.ResponseWriter, r *http.Request) {
})
return
}
+// Build a config lookup for the refresh helper.
+configByID := make(map[uuid.UUID]database.MCPServerConfig, len(configs))
+for _, c := range configs {
+configByID[c.ID] = c
+}
tokenMap := make(map[uuid.UUID]bool, len(userTokens))
-for _, t := range userTokens {
-tokenMap[t.MCPServerConfigID] = true
+for _, tok := range userTokens {
+cfg, ok := configByID[tok.MCPServerConfigID]
+if !ok {
+continue
+}
+tokenMap[tok.MCPServerConfigID] = api.refreshMCPUserToken(ctx, cfg, tok, apiKey.UserID)
}
resp := make([]codersdk.MCPServerConfig, 0, len(configs))
@@ -154,6 +170,7 @@ func (api *API) createMCPServerConfig(rw http.ResponseWriter, r *http.Request) {
ToolDenyList: coalesceStringSlice(trimStringSlice(req.ToolDenyList)),
Availability: strings.TrimSpace(req.Availability),
Enabled: req.Enabled,
ModelIntent: req.ModelIntent,
CreatedBy: apiKey.UserID,
UpdatedBy: apiKey.UserID,
})
@@ -182,7 +199,11 @@ func (api *API) createMCPServerConfig(rw http.ResponseWriter, r *http.Request) {
// Now build the callback URL with the actual ID.
callbackURL := fmt.Sprintf("%s/api/experimental/mcp/servers/%s/oauth2/callback", api.AccessURL.String(), inserted.ID)
-result, err := discoverAndRegisterMCPOAuth2(ctx, strings.TrimSpace(req.URL), callbackURL)
+httpClient := api.HTTPClient
+if httpClient == nil {
+httpClient = &http.Client{Timeout: 30 * time.Second}
+}
+result, err := discoverAndRegisterMCPOAuth2(ctx, httpClient, strings.TrimSpace(req.URL), callbackURL)
if err != nil {
// Clean up: delete the partially created config.
deleteErr := api.Database.DeleteMCPServerConfigByID(ctx, inserted.ID)
@@ -236,6 +257,7 @@ func (api *API) createMCPServerConfig(rw http.ResponseWriter, r *http.Request) {
ToolDenyList: inserted.ToolDenyList,
Availability: inserted.Availability,
Enabled: inserted.Enabled,
ModelIntent: inserted.ModelIntent,
UpdatedBy: apiKey.UserID,
})
if err != nil {
@@ -303,6 +325,7 @@ func (api *API) createMCPServerConfig(rw http.ResponseWriter, r *http.Request) {
ToolDenyList: coalesceStringSlice(trimStringSlice(req.ToolDenyList)),
Availability: strings.TrimSpace(req.Availability),
Enabled: req.Enabled,
ModelIntent: req.ModelIntent,
CreatedBy: apiKey.UserID,
UpdatedBy: apiKey.UserID,
})
@@ -379,7 +402,8 @@ func (api *API) getMCPServerConfig(rw http.ResponseWriter, r *http.Request) {
sdkConfig = convertMCPServerConfigRedacted(config)
}
-// Populate AuthConnected for the calling user.
+// Populate AuthConnected for the calling user. Attempt to
+// refresh the token so the status is accurate.
if config.AuthType == "oauth2" {
//nolint:gocritic // Need to check user token for this server.
userTokens, err := api.Database.GetMCPServerUserTokensByUserID(dbauthz.AsSystemRestricted(ctx), apiKey.UserID)
@@ -390,9 +414,9 @@ func (api *API) getMCPServerConfig(rw http.ResponseWriter, r *http.Request) {
})
return
}
-for _, t := range userTokens {
-if t.MCPServerConfigID == config.ID {
-sdkConfig.AuthConnected = true
+for _, tok := range userTokens {
+if tok.MCPServerConfigID == config.ID {
+sdkConfig.AuthConnected = api.refreshMCPUserToken(ctx, config, tok, apiKey.UserID)
break
}
}
@@ -551,6 +575,11 @@ func (api *API) updateMCPServerConfig(rw http.ResponseWriter, r *http.Request) {
enabled = *req.Enabled
}
modelIntent := existing.ModelIntent
if req.ModelIntent != nil {
modelIntent = *req.ModelIntent
}
// When auth_type changes, clear fields belonging to the
// previous auth type so stale secrets don't persist.
if authType != existing.AuthType {
@@ -618,6 +647,7 @@ func (api *API) updateMCPServerConfig(rw http.ResponseWriter, r *http.Request) {
ToolDenyList: toolDenyList,
Availability: availability,
Enabled: enabled,
ModelIntent: modelIntent,
UpdatedBy: apiKey.UserID,
ID: existing.ID,
})
@@ -995,6 +1025,67 @@ func (api *API) mcpServerOAuth2Disconnect(rw http.ResponseWriter, r *http.Reques
// parseMCPServerConfigID extracts the MCP server config UUID from the
// "mcpServer" path parameter.
// refreshMCPUserToken attempts to refresh an expired OAuth2 token
// for the given MCP server config. Returns true when the token is
// valid (either still fresh or successfully refreshed), false when
// the token is expired and cannot be refreshed.
func (api *API) refreshMCPUserToken(
ctx context.Context,
cfg database.MCPServerConfig,
tok database.MCPServerUserToken,
userID uuid.UUID,
) bool {
if cfg.AuthType != "oauth2" {
return true
}
if tok.RefreshToken == "" {
// No refresh token — consider connected only if not
// expired (or no expiry set).
return !tok.Expiry.Valid || tok.Expiry.Time.After(time.Now())
}
result, err := mcpclient.RefreshOAuth2Token(ctx, cfg, tok)
if err != nil {
api.Logger.Warn(ctx, "failed to refresh MCP oauth2 token",
slog.F("server_slug", cfg.Slug),
slog.Error(err),
)
// Refresh failed — token is dead.
return false
}
if result.Refreshed {
var expiry sql.NullTime
if !result.Expiry.IsZero() {
expiry = sql.NullTime{Time: result.Expiry, Valid: true}
}
//nolint:gocritic // Need system-level write access to
// persist the refreshed OAuth2 token.
_, err = api.Database.UpsertMCPServerUserToken(
dbauthz.AsSystemRestricted(ctx),
database.UpsertMCPServerUserTokenParams{
MCPServerConfigID: tok.MCPServerConfigID,
UserID: userID,
AccessToken: result.AccessToken,
AccessTokenKeyID: sql.NullString{},
RefreshToken: result.RefreshToken,
RefreshTokenKeyID: sql.NullString{},
TokenType: result.TokenType,
Expiry: expiry,
},
)
if err != nil {
api.Logger.Warn(ctx, "failed to persist refreshed MCP oauth2 token",
slog.F("server_slug", cfg.Slug),
slog.Error(err),
)
}
}
return true
}
func parseMCPServerConfigID(rw http.ResponseWriter, r *http.Request) (uuid.UUID, bool) {
mcpServerID, err := uuid.Parse(chi.URLParam(r, "mcpServer"))
if err != nil {
@@ -1038,9 +1129,10 @@ func convertMCPServerConfig(config database.MCPServerConfig) codersdk.MCPServerC
Availability: config.Availability,
-Enabled: config.Enabled,
-CreatedAt: config.CreatedAt,
-UpdatedAt: config.UpdatedAt,
+Enabled: config.Enabled,
+ModelIntent: config.ModelIntent,
+CreatedAt: config.CreatedAt,
+UpdatedAt: config.UpdatedAt,
}
}
@@ -1107,55 +1199,345 @@ type mcpOAuth2Discovery struct {
scopes string // space-separated
}
-// discoverAndRegisterMCPOAuth2 uses the mcp-go library's OAuthHandler to
-// perform the MCP OAuth2 discovery and Dynamic Client Registration flow:
// protectedResourceMetadata represents the response from a
// Protected Resource Metadata endpoint per RFC 9728 §2.
type protectedResourceMetadata struct {
Resource string `json:"resource"`
AuthorizationServers []string `json:"authorization_servers"`
ScopesSupported []string `json:"scopes_supported,omitempty"`
}
// authServerMetadata represents the response from an Authorization
// Server Metadata endpoint per RFC 8414 §2.
type authServerMetadata struct {
Issuer string `json:"issuer"`
AuthorizationEndpoint string `json:"authorization_endpoint"`
TokenEndpoint string `json:"token_endpoint"`
RegistrationEndpoint string `json:"registration_endpoint,omitempty"`
ScopesSupported []string `json:"scopes_supported,omitempty"`
}
// fetchJSON performs a GET request to the given URL with the
// standard MCP OAuth2 discovery headers and decodes the JSON
// response into dest. It returns nil on success or an error
// if the request fails or the server returns a non-200 status.
func fetchJSON(ctx context.Context, httpClient *http.Client, rawURL string, dest any) error {
req, err := http.NewRequestWithContext(
ctx, http.MethodGet, rawURL, nil,
)
if err != nil {
return xerrors.Errorf("create request for %s: %w", rawURL, err)
}
req.Header.Set("Accept", "application/json")
req.Header.Set("MCP-Protocol-Version", mcp.LATEST_PROTOCOL_VERSION)
resp, err := httpClient.Do(req)
if err != nil {
return xerrors.Errorf("GET %s: %w", rawURL, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return xerrors.Errorf(
"GET %s returned HTTP %d", rawURL, resp.StatusCode,
)
}
body, err := io.ReadAll(io.LimitReader(resp.Body, 1<<20))
if err != nil {
return xerrors.Errorf(
"read response from %s: %w", rawURL, err,
)
}
if err := json.Unmarshal(body, dest); err != nil {
return xerrors.Errorf(
"decode JSON from %s: %w", rawURL, err,
)
}
return nil
}
// discoverProtectedResource discovers the Protected Resource
// Metadata for the given MCP server per RFC 9728 §3.1. It
// tries the path-aware well-known URL first, then falls back
// to the root-level URL.
-//
-// 1. Discover the authorization server via Protected Resource Metadata
-// (RFC 9728) and Authorization Server Metadata (RFC 8414).
-// 2. Register a client via Dynamic Client Registration (RFC 7591).
-// 3. Return the discovered endpoints and generated credentials.
-func discoverAndRegisterMCPOAuth2(ctx context.Context, mcpServerURL, callbackURL string) (*mcpOAuth2Discovery, error) {
-// Per the MCP spec, the authorization base URL is the MCP server
-// URL with the path component discarded (scheme + host only).
// Path-aware: GET {origin}/.well-known/oauth-protected-resource{path}
// Root: GET {origin}/.well-known/oauth-protected-resource
func discoverProtectedResource(
ctx context.Context, httpClient *http.Client, origin, path string,
) (*protectedResourceMetadata, error) {
var urls []string
// Per RFC 9728 §3.1, when the resource URL contains a
// path component, the well-known URI is constructed by
// inserting the well-known prefix before the path.
if path != "" && path != "/" {
urls = append(
urls,
origin+"/.well-known/oauth-protected-resource"+path,
)
}
// Always try the root-level URL as a fallback.
urls = append(
urls, origin+"/.well-known/oauth-protected-resource",
)
var lastErr error
for _, u := range urls {
var meta protectedResourceMetadata
if err := fetchJSON(ctx, httpClient, u, &meta); err != nil {
lastErr = err
continue
}
if len(meta.AuthorizationServers) == 0 {
lastErr = xerrors.Errorf(
"protected resource metadata at %s "+
"has no authorization_servers", u,
)
continue
}
return &meta, nil
}
return nil, xerrors.Errorf(
"discover protected resource metadata: %w", lastErr,
)
}
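The candidate-URL ordering above can be reproduced standalone. A sketch of the RFC 9728 §3.1 well-known URI construction, with a hypothetical helper name `prmCandidates` for illustration:

```go
package main

import (
	"fmt"
	"net/url"
)

// prmCandidates reproduces the ordering used by
// discoverProtectedResource: the path-aware well-known URL
// (RFC 9728 §3.1) first, then the root-level fallback.
func prmCandidates(mcpServerURL string) ([]string, error) {
	parsed, err := url.Parse(mcpServerURL)
	if err != nil {
		return nil, err
	}
	origin := fmt.Sprintf("%s://%s", parsed.Scheme, parsed.Host)
	var urls []string
	if p := parsed.Path; p != "" && p != "/" {
		// Insert the well-known prefix before the path.
		urls = append(urls, origin+"/.well-known/oauth-protected-resource"+p)
	}
	return append(urls, origin+"/.well-known/oauth-protected-resource"), nil
}

func main() {
	urls, _ := prmCandidates("https://api.githubcopilot.com/mcp")
	for _, u := range urls {
		fmt.Println(u)
	}
}
```

For `https://api.githubcopilot.com/mcp` this yields the path-aware URL `/.well-known/oauth-protected-resource/mcp` first, then the root-level URL, matching the priority the tests below verify.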
// discoverAuthServerMetadata discovers the Authorization Server
// Metadata per RFC 8414 §3.1. When the authorization server
// issuer URL has a path component, the metadata URL is
// path-aware. Falls back to root-level and OpenID Connect
// discovery as a last resort.
//
// Path-aware: {origin}/.well-known/oauth-authorization-server{path}
// Root: {origin}/.well-known/oauth-authorization-server
// OpenID: {issuer}/.well-known/openid-configuration
func discoverAuthServerMetadata(
ctx context.Context, httpClient *http.Client, authServerURL string,
) (*authServerMetadata, error) {
parsed, err := url.Parse(authServerURL)
if err != nil {
return nil, xerrors.Errorf(
"parse auth server URL: %w", err,
)
}
asOrigin := fmt.Sprintf(
"%s://%s", parsed.Scheme, parsed.Host,
)
asPath := parsed.Path
var urls []string
// Per RFC 8414 §3.1, if the issuer URL has a path,
// insert the well-known prefix before the path.
if asPath != "" && asPath != "/" {
urls = append(
urls,
asOrigin+"/.well-known/oauth-authorization-server"+asPath,
)
}
// Root-level fallback.
urls = append(
urls,
asOrigin+"/.well-known/oauth-authorization-server",
)
// OpenID Connect discovery as a last resort. Note: this is
// tried after RFC 8414 (unlike the previous mcp-go code that
// tried OIDC first) because RFC 8414 is the MCP spec's
// recommended discovery mechanism.
// Per OpenID Connect Discovery 1.0 §4, the well-known URL
// is formed by appending to the full issuer (including
// path), not just the origin.
urls = append(
urls,
strings.TrimRight(authServerURL, "/")+
"/.well-known/openid-configuration",
)
var lastErr error
for _, u := range urls {
var meta authServerMetadata
if err := fetchJSON(ctx, httpClient, u, &meta); err != nil {
lastErr = err
continue
}
if meta.AuthorizationEndpoint == "" || meta.TokenEndpoint == "" {
lastErr = xerrors.Errorf(
"auth server metadata at %s missing required "+
"endpoints", u,
)
continue
}
return &meta, nil
}
return nil, xerrors.Errorf(
"discover auth server metadata: %w", lastErr,
)
}
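The three-tier fallback above can likewise be sketched standalone. A minimal illustration (hypothetical helper name `asMetadataCandidates`) of why path-aware RFC 8414 URLs insert the prefix before the path while OIDC discovery appends to the full issuer:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// asMetadataCandidates reproduces the ordering used by
// discoverAuthServerMetadata: path-aware RFC 8414 URL, root-level
// RFC 8414 URL, then OIDC discovery appended to the full issuer
// (path included, per OpenID Connect Discovery 1.0 §4).
func asMetadataCandidates(issuer string) ([]string, error) {
	parsed, err := url.Parse(issuer)
	if err != nil {
		return nil, err
	}
	origin := fmt.Sprintf("%s://%s", parsed.Scheme, parsed.Host)
	var urls []string
	if p := parsed.Path; p != "" && p != "/" {
		// RFC 8414 §3.1: insert the prefix before the path.
		urls = append(urls, origin+"/.well-known/oauth-authorization-server"+p)
	}
	urls = append(urls, origin+"/.well-known/oauth-authorization-server")
	// OIDC appends to the issuer itself, not just the origin.
	urls = append(urls, strings.TrimRight(issuer, "/")+"/.well-known/openid-configuration")
	return urls, nil
}

func main() {
	urls, _ := asMetadataCandidates("https://github.com/login/oauth")
	for _, u := range urls {
		fmt.Println(u)
	}
}
```

For a GitHub-style issuer with a path, the first candidate is `/.well-known/oauth-authorization-server/login/oauth`, which is exactly what the `PathAwareAuthServerMetadata` test exercises.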
// registerOAuth2Client performs Dynamic Client Registration per
// RFC 7591 by POSTing client metadata to the registration
// endpoint and returning the assigned client_id and optional
// client_secret.
func registerOAuth2Client(
ctx context.Context, httpClient *http.Client,
registrationEndpoint, callbackURL, clientName string,
) (clientID string, clientSecret string, err error) {
payload := map[string]any{
"client_name": clientName,
"redirect_uris": []string{callbackURL},
"token_endpoint_auth_method": "none",
"grant_types": []string{"authorization_code", "refresh_token"},
"response_types": []string{"code"},
}
body, err := json.Marshal(payload)
if err != nil {
return "", "", xerrors.Errorf(
"marshal registration request: %w", err,
)
}
req, err := http.NewRequestWithContext(
ctx, http.MethodPost,
registrationEndpoint, bytes.NewReader(body),
)
if err != nil {
return "", "", xerrors.Errorf(
"create registration request: %w", err,
)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Accept", "application/json")
resp, err := httpClient.Do(req)
if err != nil {
return "", "", xerrors.Errorf(
"POST %s: %w", registrationEndpoint, err,
)
}
defer resp.Body.Close()
respBody, err := io.ReadAll(io.LimitReader(resp.Body, 1<<20))
if err != nil {
return "", "", xerrors.Errorf(
"read registration response: %w", err,
)
}
if resp.StatusCode != http.StatusOK &&
resp.StatusCode != http.StatusCreated {
// Truncate to avoid leaking verbose upstream errors
// through the API.
const maxErrBody = 512
errMsg := string(respBody)
if len(errMsg) > maxErrBody {
errMsg = errMsg[:maxErrBody] + "..."
}
return "", "", xerrors.Errorf(
"registration endpoint returned HTTP %d: %s",
resp.StatusCode, errMsg,
)
}
var result struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
}
if err := json.Unmarshal(respBody, &result); err != nil {
return "", "", xerrors.Errorf(
"decode registration response: %w", err,
)
}
if result.ClientID == "" {
return "", "", xerrors.New(
"registration response missing client_id",
)
}
return result.ClientID, result.ClientSecret, nil
}
// discoverAndRegisterMCPOAuth2 performs the full MCP OAuth2
// discovery and Dynamic Client Registration flow:
//
// 1. Discover the authorization server via Protected Resource
// Metadata (RFC 9728).
// 2. Fetch Authorization Server Metadata (RFC 8414).
// 3. Register a client via Dynamic Client Registration
// (RFC 7591).
// 4. Return the discovered endpoints and credentials.
//
// Unlike a root-only approach, this implementation follows the
// path-aware well-known URI construction rules from RFC 9728
// §3.1 and RFC 8414 §3.1, which is required for servers that
// serve metadata at path-specific URLs (e.g.
// https://api.githubcopilot.com/mcp/).
func discoverAndRegisterMCPOAuth2(ctx context.Context, httpClient *http.Client, mcpServerURL, callbackURL string) (*mcpOAuth2Discovery, error) {
// Parse the MCP server URL into origin and path.
parsed, err := url.Parse(mcpServerURL)
if err != nil {
-return nil, xerrors.Errorf("parse MCP server URL: %w", err)
+return nil, xerrors.Errorf(
+"parse MCP server URL: %w", err,
+)
}
origin := fmt.Sprintf("%s://%s", parsed.Scheme, parsed.Host)
path := parsed.Path
-oauthHandler := transport.NewOAuthHandler(transport.OAuthConfig{
-RedirectURI: callbackURL,
-TokenStore:  transport.NewMemoryTokenStore(),
-})
-oauthHandler.SetBaseURL(origin)
-// Step 1: Discover authorization server metadata (RFC 9728 + RFC 8414).
-metadata, err := oauthHandler.GetServerMetadata(ctx)
+// Step 1: Discover the Protected Resource Metadata
+// (RFC 9728) to find the authorization server.
+prm, err := discoverProtectedResource(ctx, httpClient, origin, path)
if err != nil {
-return nil, xerrors.Errorf("discover authorization server: %w", err)
-}
-if metadata.AuthorizationEndpoint == "" {
-return nil, xerrors.New("authorization server metadata missing authorization_endpoint")
-}
-if metadata.TokenEndpoint == "" {
-return nil, xerrors.New("authorization server metadata missing token_endpoint")
-}
-if metadata.RegistrationEndpoint == "" {
-return nil, xerrors.New("authorization server does not advertise a registration_endpoint (dynamic client registration may not be supported)")
+return nil, xerrors.Errorf(
+"protected resource discovery: %w", err,
+)
}
-// Step 2: Register a client via Dynamic Client Registration (RFC 7591).
-if err := oauthHandler.RegisterClient(ctx, "Coder"); err != nil {
-return nil, xerrors.Errorf("dynamic client registration: %w", err)
+// Step 2: Fetch Authorization Server Metadata (RFC 8414)
+// from the first advertised authorization server.
+asMeta, err := discoverAuthServerMetadata(
+ctx, httpClient, prm.AuthorizationServers[0],
+)
+if err != nil {
+return nil, xerrors.Errorf(
+"auth server metadata discovery: %w", err,
+)
}
-scopes := strings.Join(metadata.ScopesSupported, " ")
// Only RegistrationEndpoint needs checking here;
// discoverAuthServerMetadata already validates that
// AuthorizationEndpoint and TokenEndpoint are present.
if asMeta.RegistrationEndpoint == "" {
return nil, xerrors.New(
"authorization server does not advertise a " +
"registration_endpoint (dynamic client " +
"registration may not be supported)",
)
}
// Step 3: Register via Dynamic Client Registration
// (RFC 7591).
clientID, clientSecret, err := registerOAuth2Client(
ctx, httpClient, asMeta.RegistrationEndpoint, callbackURL, "Coder",
)
if err != nil {
return nil, xerrors.Errorf(
"dynamic client registration: %w", err,
)
}
scopes := strings.Join(asMeta.ScopesSupported, " ")
return &mcpOAuth2Discovery{
-clientID: oauthHandler.GetClientID(),
-clientSecret: oauthHandler.GetClientSecret(),
-authURL: metadata.AuthorizationEndpoint,
-tokenURL: metadata.TokenEndpoint,
+clientID: clientID,
+clientSecret: clientSecret,
+authURL: asMeta.AuthorizationEndpoint,
+tokenURL: asMeta.TokenEndpoint,
scopes: scopes,
}, nil
}
+793 -7
@@ -473,17 +473,21 @@ func TestMCPServerConfigsOAuth2AutoDiscovery(t *testing.T) {
t.Cleanup(authServer.Close)
// Stand up a mock MCP server that serves RFC 9728 Protected
// Resource Metadata pointing to the auth server above.
// Resource Metadata at the path-aware well-known URL.
// The URL used for the config ends with /v1/mcp, so the
// path-aware metadata URL is
// /.well-known/oauth-protected-resource/v1/mcp.
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-if r.URL.Path == "/.well-known/oauth-protected-resource" {
+switch r.URL.Path {
+case "/.well-known/oauth-protected-resource/v1/mcp":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `",
"authorization_servers": ["` + authServer.URL + `"]
}`))
-return
+default:
+http.NotFound(w, r)
}
-http.NotFound(w, r)
}))
t.Cleanup(mcpServer.Close)
@@ -511,6 +515,275 @@ func TestMCPServerConfigsOAuth2AutoDiscovery(t *testing.T) {
require.Equal(t, "read write", created.OAuth2Scopes)
})
// Verify that when both path-aware and root-level protected
// resource metadata are available, the path-aware URL takes
// priority. Each points to a different auth server so we can
// distinguish which one was actually used.
t.Run("PathAwareTakesPriority", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
// Auth server that returns "path-scope" as the supported
// scope.
pathAuthServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `",
"authorization_endpoint": "` + "http://" + r.Host + `/authorize",
"token_endpoint": "` + "http://" + r.Host + `/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["path-scope"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "path-client-id",
"client_secret": "path-client-secret"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(pathAuthServer.Close)
// Auth server that returns "root-scope" as the supported
// scope.
rootAuthServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `",
"authorization_endpoint": "` + "http://" + r.Host + `/authorize",
"token_endpoint": "` + "http://" + r.Host + `/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["root-scope"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "root-client-id",
"client_secret": "root-client-secret"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(rootAuthServer.Close)
// MCP server serves different protected resource metadata at
// path-aware vs root URLs, each pointing to a different auth
// server.
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/v1/mcp":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `/v1/mcp",
"authorization_servers": ["` + pathAuthServer.URL + `"]
}`))
case "/.well-known/oauth-protected-resource":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `",
"authorization_servers": ["` + rootAuthServer.URL + `"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
created, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Priority Test",
Slug: "priority-test",
Transport: "streamable_http",
URL: mcpServer.URL + "/v1/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.NoError(t, err)
// The path-aware auth server returns "path-scope", the root
// auth server returns "root-scope". If path-aware takes
// priority, we get "path-scope".
require.Equal(t, "path-client-id", created.OAuth2ClientID)
require.Equal(t, "path-scope", created.OAuth2Scopes)
})
// Verify discovery works when the protected resource metadata
// is only available at the root-level well-known URL (no path
// component). This covers servers that don't use path-aware
// metadata.
t.Run("RootLevelFallback", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
authServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + r.Host + `",
"authorization_endpoint": "` + "http://" + r.Host + `/authorize",
"token_endpoint": "` + "http://" + r.Host + `/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["all"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "root-client-id",
"client_secret": "root-client-secret"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(authServer.Close)
// MCP server only serves metadata at the root well-known
// URL, NOT at the path-aware location.
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `",
"authorization_servers": ["` + authServer.URL + `"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
created, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Root Fallback Server",
Slug: "root-fallback",
Transport: "streamable_http",
URL: mcpServer.URL + "/v1/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.NoError(t, err)
require.Equal(t, "root-client-id", created.OAuth2ClientID)
require.True(t, created.HasOAuth2Secret)
require.Equal(t, authServer.URL+"/authorize", created.OAuth2AuthURL)
require.Equal(t, authServer.URL+"/token", created.OAuth2TokenURL)
require.Equal(t, "all", created.OAuth2Scopes)
})
// Verify that when the authorization server issuer URL has a
// path component (e.g. https://github.com/login/oauth), the
// discovery uses the path-aware metadata URL per RFC 8414 §3.1.
t.Run("PathAwareAuthServerMetadata", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
// Auth server that serves metadata at the path-aware URL.
// The issuer URL is http://host/login/oauth, so the
// metadata URL should be
// /.well-known/oauth-authorization-server/login/oauth.
authServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server/login/oauth":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `/login/oauth",
"authorization_endpoint": "` + "http://" + r.Host + `/login/oauth/authorize",
"token_endpoint": "` + "http://" + r.Host + `/login/oauth/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["repo", "read:org"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "path-aware-client-id"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(authServer.Close)
// MCP server that points to an auth server with a path
// in its issuer URL (like GitHub's /login/oauth).
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/mcp":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `/mcp",
"authorization_servers": ["` + authServer.URL + `/login/oauth"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
created, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Path-Aware Auth",
Slug: "path-aware-auth",
Transport: "streamable_http",
URL: mcpServer.URL + "/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.NoError(t, err)
require.Equal(t, "path-aware-client-id", created.OAuth2ClientID)
require.Equal(t, authServer.URL+"/login/oauth/authorize", created.OAuth2AuthURL)
require.Equal(t, authServer.URL+"/login/oauth/token", created.OAuth2TokenURL)
require.Equal(t, "repo read:org", created.OAuth2Scopes)
})
// Regression test: verify that during dynamic client registration
// the redirect_uris sent to the authorization server contain the
// real config UUID, NOT the literal string "{id}". Before the
@@ -572,15 +845,17 @@ func TestMCPServerConfigsOAuth2AutoDiscovery(t *testing.T) {
// Stand up a mock MCP server that returns RFC 9728 Protected
// Resource Metadata pointing to the auth server.
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-if r.URL.Path == "/.well-known/oauth-protected-resource" {
+switch r.URL.Path {
+case "/.well-known/oauth-protected-resource/v1/mcp",
+"/.well-known/oauth-protected-resource":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `",
"authorization_servers": ["` + authServer.URL + `"]
}`))
-return
+default:
+http.NotFound(w, r)
}
-http.NotFound(w, r)
}))
t.Cleanup(mcpServer.Close)
@@ -1055,3 +1330,514 @@ func createChatModelConfigForMCP(t testing.TB, client *codersdk.ExperimentalClie
require.NoError(t, err)
return modelConfig
}
func TestMCPOAuth2DiscoveryEdgeCases(t *testing.T) {
t.Parallel()
t.Run("EmptyAuthorizationServers", func(t *testing.T) {
t.Parallel()
// When the path-aware PRM returns an empty
// authorization_servers array, discovery should fall
// back to the root-level PRM.
t.Run("RootFallback", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
authServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `",
"authorization_endpoint": "` + "http://" + r.Host + `/authorize",
"token_endpoint": "` + "http://" + r.Host + `/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["fallback-scope"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "fallback-client-id",
"client_secret": "fallback-client-secret"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(authServer.Close)
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/v1/mcp":
// Path-aware: empty authorization_servers.
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `/v1/mcp",
"authorization_servers": []
}`))
case "/.well-known/oauth-protected-resource":
// Root: valid authorization_servers.
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `",
"authorization_servers": ["` + authServer.URL + `"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
created, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Empty Auth Servers Fallback",
Slug: "empty-as-fallback",
Transport: "streamable_http",
URL: mcpServer.URL + "/v1/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.NoError(t, err)
require.Equal(t, "fallback-client-id", created.OAuth2ClientID)
require.Equal(t, authServer.URL+"/authorize", created.OAuth2AuthURL)
require.Equal(t, authServer.URL+"/token", created.OAuth2TokenURL)
require.Equal(t, "fallback-scope", created.OAuth2Scopes)
})
// When both path-aware and root PRM return empty
// authorization_servers, discovery should fail.
t.Run("BothEmpty", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/v1/mcp",
"/.well-known/oauth-protected-resource":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `",
"authorization_servers": []
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
_, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Both Empty",
Slug: "both-empty-as",
Transport: "streamable_http",
URL: mcpServer.URL + "/v1/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
require.Equal(t, http.StatusBadRequest, sdkErr.StatusCode())
require.Contains(t, sdkErr.Message, "auto-discovery failed")
})
})
// When the path-aware PRM returns malformed JSON,
// discovery should fall back to the root-level PRM.
t.Run("MalformedJSONFromDiscovery", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
authServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `",
"authorization_endpoint": "` + "http://" + r.Host + `/authorize",
"token_endpoint": "` + "http://" + r.Host + `/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["json-fallback"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "json-fallback-client",
"client_secret": "json-fallback-secret"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(authServer.Close)
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/v1/mcp":
// Return valid HTTP 200 but invalid JSON.
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`not json`))
case "/.well-known/oauth-protected-resource":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `",
"authorization_servers": ["` + authServer.URL + `"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
created, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Malformed JSON Fallback",
Slug: "malformed-json",
Transport: "streamable_http",
URL: mcpServer.URL + "/v1/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.NoError(t, err)
require.Equal(t, "json-fallback-client", created.OAuth2ClientID)
require.Equal(t, authServer.URL+"/authorize", created.OAuth2AuthURL)
require.Equal(t, authServer.URL+"/token", created.OAuth2TokenURL)
require.Equal(t, "json-fallback", created.OAuth2Scopes)
})
// When the path-aware auth server metadata is missing required
// endpoints, discovery should fall back to the root-level
// metadata URL.
t.Run("AuthServerMetadataMissingEndpoints", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
// Auth server that returns incomplete metadata at the
// path-aware URL but complete metadata at the root URL.
authServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server/auth":
// Path-aware: missing required endpoints.
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `/auth"
}`))
case "/.well-known/oauth-authorization-server":
// Root-level: complete metadata.
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `",
"authorization_endpoint": "` + "http://" + r.Host + `/authorize",
"token_endpoint": "` + "http://" + r.Host + `/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["endpoint-fallback"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "endpoint-fallback-client",
"client_secret": "endpoint-fallback-secret"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(authServer.Close)
// PRM points to auth server with a path (/auth) so that
// discoverAuthServerMetadata tries the path-aware URL first.
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/v1/mcp":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `/v1/mcp",
"authorization_servers": ["` + authServer.URL + `/auth"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
created, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Missing Endpoints Fallback",
Slug: "missing-endpoints",
Transport: "streamable_http",
URL: mcpServer.URL + "/v1/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.NoError(t, err)
require.Equal(t, "endpoint-fallback-client", created.OAuth2ClientID)
require.Equal(t, authServer.URL+"/authorize", created.OAuth2AuthURL)
require.Equal(t, authServer.URL+"/token", created.OAuth2TokenURL)
require.Equal(t, "endpoint-fallback", created.OAuth2Scopes)
})
// When both RFC 8414 metadata URLs (path-aware and root) fail,
// discovery should fall back to the OIDC well-known URL.
// The auth server issuer has a path (/login/oauth) so the
// OIDC URL is {issuer}/.well-known/openid-configuration =
// /login/oauth/.well-known/openid-configuration.
t.Run("OIDCFallback", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
authServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/login/oauth/.well-known/openid-configuration":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `/login/oauth",
"authorization_endpoint": "` + "http://" + r.Host + `/login/oauth/authorize",
"token_endpoint": "` + "http://" + r.Host + `/login/oauth/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["oidc-scope"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "oidc-client-id",
"client_secret": "oidc-client-secret"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(authServer.Close)
// PRM points to auth server with a path (/login/oauth)
// so that RFC 8414 URLs are tried first and fail.
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/v1/mcp":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `/v1/mcp",
"authorization_servers": ["` + authServer.URL + `/login/oauth"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
created, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "OIDC Fallback",
Slug: "oidc-fallback",
Transport: "streamable_http",
URL: mcpServer.URL + "/v1/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.NoError(t, err)
require.Equal(t, "oidc-client-id", created.OAuth2ClientID)
require.Equal(t, authServer.URL+"/login/oauth/authorize", created.OAuth2AuthURL)
require.Equal(t, authServer.URL+"/login/oauth/token", created.OAuth2TokenURL)
require.Equal(t, "oidc-scope", created.OAuth2Scopes)
})
// When the registration endpoint returns a response
// without a client_id, the entire discovery flow should
// fail.
t.Run("RegistrationMissingClientID", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
authServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `",
"authorization_endpoint": "` + "http://" + r.Host + `/authorize",
"token_endpoint": "` + "http://" + r.Host + `/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
// Return response with client_secret but no
// client_id.
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_secret": "secret-without-id"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(authServer.Close)
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/v1/mcp":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `/v1/mcp",
"authorization_servers": ["` + authServer.URL + `"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
_, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Missing Client ID",
Slug: "missing-client-id",
Transport: "streamable_http",
URL: mcpServer.URL + "/v1/mcp",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
require.Equal(t, http.StatusBadRequest, sdkErr.StatusCode())
require.Contains(t, sdkErr.Message, "auto-discovery failed")
})
// Regression test for the exact scenario that motivated the PR:
// an MCP server URL with a trailing slash (like
// https://api.githubcopilot.com/mcp/).
t.Run("TrailingSlashURL", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
authServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-authorization-server":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"issuer": "` + "http://" + r.Host + `",
"authorization_endpoint": "` + "http://" + r.Host + `/authorize",
"token_endpoint": "` + "http://" + r.Host + `/token",
"registration_endpoint": "` + "http://" + r.Host + `/register",
"response_types_supported": ["code"],
"scopes_supported": ["read"]
}`))
case "/register":
if r.Method != http.MethodPost {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
_, _ = w.Write([]byte(`{
"client_id": "trailing-slash-client",
"client_secret": "trailing-slash-secret"
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(authServer.Close)
// Serve protected resource metadata at the path-aware URL
// WITH the trailing slash: /.well-known/oauth-protected-resource/mcp/
mcpServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/.well-known/oauth-protected-resource/mcp/":
w.Header().Set("Content-Type", "application/json")
_, _ = w.Write([]byte(`{
"resource": "` + "http://" + r.Host + `/mcp/",
"authorization_servers": ["` + authServer.URL + `"]
}`))
default:
http.NotFound(w, r)
}
}))
t.Cleanup(mcpServer.Close)
client := newMCPClient(t)
_ = coderdtest.CreateFirstUser(t, client)
// URL has a trailing slash, matching the GitHub Copilot URL
// pattern: https://api.githubcopilot.com/mcp/
created, err := client.CreateMCPServerConfig(ctx, codersdk.CreateMCPServerConfigRequest{
DisplayName: "Trailing Slash",
Slug: "trailing-slash",
Transport: "streamable_http",
URL: mcpServer.URL + "/mcp/",
AuthType: "oauth2",
Availability: "default_on",
Enabled: true,
ToolAllowList: []string{},
ToolDenyList: []string{},
})
require.NoError(t, err)
require.Equal(t, "trailing-slash-client", created.OAuth2ClientID)
require.True(t, created.HasOAuth2Secret)
})
}
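The fallback behavior these subtests exercise can be sketched as a small helper: the RFC 9728 path-aware protected-resource-metadata URL is built by inserting the well-known segment between the host and the resource path (preserving any trailing slash, per the TrailingSlashURL case), with the root-level URL as the fallback. A minimal sketch, assuming this URL-construction scheme; the helper name `prmCandidates` is hypothetical and is not Coder's actual implementation.

```go
package main

import (
	"fmt"
	"net/url"
)

// prmCandidates returns protected-resource-metadata URLs to try, in order:
// the path-aware URL first (well-known segment inserted between host and
// resource path, trailing slash preserved), then the root-level URL.
func prmCandidates(resource string) ([]string, error) {
	u, err := url.Parse(resource)
	if err != nil {
		return nil, err
	}
	base := u.Scheme + "://" + u.Host
	var candidates []string
	if u.Path != "" && u.Path != "/" {
		candidates = append(candidates, base+"/.well-known/oauth-protected-resource"+u.Path)
	}
	candidates = append(candidates, base+"/.well-known/oauth-protected-resource")
	return candidates, nil
}

func main() {
	for _, r := range []string{"http://example.com/v1/mcp", "http://example.com/mcp/"} {
		c, _ := prmCandidates(r)
		fmt.Println(c)
	}
}
```

Discovery walks the candidates in order and falls back to the next entry when a lookup returns empty `authorization_servers` or malformed JSON, matching the subtests above.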
@@ -2,6 +2,7 @@ package coderd
import (
"context"
"database/sql"
"fmt"
"net/http"
@@ -179,7 +180,17 @@ func (api *API) organizationMember(rw http.ResponseWriter, r *http.Request) {
return
}
- resp, err := convertOrganizationMembersWithUserData(ctx, api.Database, rows)
+ var aiSeatSet map[uuid.UUID]struct{}
+ if api.Entitlements.Enabled(codersdk.FeatureAIGovernanceUserLimit) {
+ //nolint:gocritic // AI seat state is a system-level read gated by entitlement.
+ aiSeatSet, err = getAISeatSetByUserIDs(dbauthz.AsSystemRestricted(ctx), api.Database, []uuid.UUID{member.UserID})
+ if err != nil {
+ httpapi.InternalServerError(rw, err)
+ return
+ }
+ }
+ resp, err := convertOrganizationMembersWithUserData(ctx, api.Database, rows, aiSeatSet)
if err != nil {
httpapi.InternalServerError(rw, err)
return
@@ -227,7 +238,21 @@ func (api *API) listMembers(rw http.ResponseWriter, r *http.Request) {
return
}
- resp, err := convertOrganizationMembersWithUserData(ctx, api.Database, members)
+ userIDs := make([]uuid.UUID, 0, len(members))
+ for _, member := range members {
+ userIDs = append(userIDs, member.OrganizationMember.UserID)
+ }
+ var aiSeatSet map[uuid.UUID]struct{}
+ if api.Entitlements.Enabled(codersdk.FeatureAIGovernanceUserLimit) {
+ //nolint:gocritic // AI seat state is a system-level read gated by entitlement.
+ aiSeatSet, err = getAISeatSetByUserIDs(dbauthz.AsSystemRestricted(ctx), api.Database, userIDs)
+ if err != nil {
+ httpapi.InternalServerError(rw, err)
+ return
+ }
+ }
+ resp, err := convertOrganizationMembersWithUserData(ctx, api.Database, members, aiSeatSet)
if err != nil {
httpapi.InternalServerError(rw, err)
return
@@ -324,7 +349,21 @@ func (api *API) paginatedMembers(rw http.ResponseWriter, r *http.Request) {
return
}
- members, err := convertOrganizationMembersWithUserData(ctx, api.Database, memberRows)
+ userIDs := make([]uuid.UUID, 0, len(memberRows))
+ for _, member := range memberRows {
+ userIDs = append(userIDs, member.OrganizationMember.UserID)
+ }
+ var aiSeatSet map[uuid.UUID]struct{}
+ if api.Entitlements.Enabled(codersdk.FeatureAIGovernanceUserLimit) {
+ //nolint:gocritic // AI seat state is a system-level read gated by entitlement.
+ aiSeatSet, err = getAISeatSetByUserIDs(dbauthz.AsSystemRestricted(ctx), api.Database, userIDs)
+ if err != nil {
+ httpapi.InternalServerError(rw, err)
+ return
+ }
+ }
+ members, err := convertOrganizationMembersWithUserData(ctx, api.Database, memberRows, aiSeatSet)
if err != nil {
httpapi.InternalServerError(rw, err)
return
@@ -337,6 +376,23 @@ func (api *API) paginatedMembers(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, resp)
}
+ func getAISeatSetByUserIDs(ctx context.Context, db database.Store, userIDs []uuid.UUID) (map[uuid.UUID]struct{}, error) {
+ aiSeatUserIDs, err := db.GetUserAISeatStates(ctx, userIDs)
+ if xerrors.Is(err, sql.ErrNoRows) {
+ err = nil
+ }
+ if err != nil {
+ return nil, err
+ }
+ aiSeatSet := make(map[uuid.UUID]struct{}, len(aiSeatUserIDs))
+ for _, uid := range aiSeatUserIDs {
+ aiSeatSet[uid] = struct{}{}
+ }
+ return aiSeatSet, nil
+ }
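The slice-to-set conversion in getAISeatSetByUserIDs (a `map[T]struct{}` for O(1) membership checks) is a common Go pattern. A standalone sketch of the same shape, using strings in place of `uuid.UUID` so it stays dependency-free:

```go
package main

import "fmt"

// toSet converts a slice of IDs into a set with O(1) membership checks,
// mirroring how getAISeatSetByUserIDs builds its map[uuid.UUID]struct{}.
func toSet(ids []string) map[string]struct{} {
	set := make(map[string]struct{}, len(ids))
	for _, id := range ids {
		set[id] = struct{}{}
	}
	return set
}

func main() {
	seats := toSet([]string{"alice", "bob"})
	_, hasSeat := seats["alice"]
	fmt.Println(hasSeat)
}
```

The empty `struct{}` value costs no memory per entry; the comma-ok lookup (`_, hasAISeat := set[id]`) is exactly how the converted member rows test seat membership.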
// @Summary Assign role to organization member
// @ID assign-role-to-organization-member
// @Security CoderSessionToken
@@ -508,7 +564,7 @@ func convertOrganizationMembers(ctx context.Context, db database.Store, mems []d
return converted, nil
}
- func convertOrganizationMembersWithUserData(ctx context.Context, db database.Store, rows []database.OrganizationMembersRow) ([]codersdk.OrganizationMemberWithUserData, error) {
+ func convertOrganizationMembersWithUserData(ctx context.Context, db database.Store, rows []database.OrganizationMembersRow, aiSeatSet map[uuid.UUID]struct{}) ([]codersdk.OrganizationMemberWithUserData, error) {
members := make([]database.OrganizationMember, 0)
for _, row := range rows {
members = append(members, row.OrganizationMember)
@@ -524,12 +580,14 @@ func convertOrganizationMembersWithUserData(ctx context.Context, db database.Sto
converted := make([]codersdk.OrganizationMemberWithUserData, 0)
for i := range convertedMembers {
+ _, hasAISeat := aiSeatSet[rows[i].OrganizationMember.UserID]
converted = append(converted, codersdk.OrganizationMemberWithUserData{
Username: rows[i].Username,
AvatarURL: rows[i].AvatarURL,
Name: rows[i].Name,
Email: rows[i].Email,
GlobalRoles: db2sdk.SlimRolesFromNames(rows[i].GlobalRoles),
+ HasAISeat: hasAISeat,
LastSeenAt: rows[i].LastSeenAt,
Status: codersdk.UserStatus(rows[i].Status),
IsServiceAccount: rows[i].IsServiceAccount,
@@ -73,8 +73,8 @@ func CreateDynamicClientRegistration(db database.Store, accessURL *url.URL, audi
// Store in database - use system context since this is a public endpoint
now := dbtime.Now()
clientName := req.GenerateClientName()
- //nolint:gocritic // Dynamic client registration is a public endpoint, system access required
- app, err := db.InsertOAuth2ProviderApp(dbauthz.AsSystemRestricted(ctx), database.InsertOAuth2ProviderAppParams{
+ //nolint:gocritic // OAuth2 system context — dynamic registration is a public endpoint
+ app, err := db.InsertOAuth2ProviderApp(dbauthz.AsSystemOAuth2(ctx), database.InsertOAuth2ProviderAppParams{
ID: clientID,
CreatedAt: now,
UpdatedAt: now,
@@ -121,8 +121,8 @@ func CreateDynamicClientRegistration(db database.Store, accessURL *url.URL, audi
return
}
- //nolint:gocritic // Dynamic client registration is a public endpoint, system access required
- _, err = db.InsertOAuth2ProviderAppSecret(dbauthz.AsSystemRestricted(ctx), database.InsertOAuth2ProviderAppSecretParams{
+ //nolint:gocritic // OAuth2 system context — dynamic registration is a public endpoint
+ _, err = db.InsertOAuth2ProviderAppSecret(dbauthz.AsSystemOAuth2(ctx), database.InsertOAuth2ProviderAppSecretParams{
ID: uuid.New(),
CreatedAt: now,
SecretPrefix: []byte(parsedSecret.Prefix),
@@ -183,8 +183,8 @@ func GetClientConfiguration(db database.Store) http.HandlerFunc {
}
// Get app by client ID
- //nolint:gocritic // RFC 7592 endpoints need system access to retrieve dynamically registered clients
- app, err := db.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
+ //nolint:gocritic // OAuth2 system context — RFC 7592 client configuration endpoint
+ app, err := db.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemOAuth2(ctx), clientID)
if err != nil {
if xerrors.Is(err, sql.ErrNoRows) {
writeOAuth2RegistrationError(ctx, rw, http.StatusUnauthorized,
@@ -269,8 +269,8 @@ func UpdateClientConfiguration(db database.Store, auditor *audit.Auditor, logger
req = req.ApplyDefaults()
// Get existing app to verify it exists and is dynamically registered
- //nolint:gocritic // RFC 7592 endpoints need system access to retrieve dynamically registered clients
- existingApp, err := db.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
+ //nolint:gocritic // OAuth2 system context — RFC 7592 client configuration endpoint
+ existingApp, err := db.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemOAuth2(ctx), clientID)
if err == nil {
aReq.Old = existingApp
}
@@ -294,8 +294,8 @@ func UpdateClientConfiguration(db database.Store, auditor *audit.Auditor, logger
// Update app in database
now := dbtime.Now()
- //nolint:gocritic // RFC 7592 endpoints need system access to update dynamically registered clients
- updatedApp, err := db.UpdateOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), database.UpdateOAuth2ProviderAppByClientIDParams{
+ //nolint:gocritic // OAuth2 system context — RFC 7592 client configuration endpoint
+ updatedApp, err := db.UpdateOAuth2ProviderAppByClientID(dbauthz.AsSystemOAuth2(ctx), database.UpdateOAuth2ProviderAppByClientIDParams{
ID: clientID,
UpdatedAt: now,
Name: req.GenerateClientName(),
@@ -377,8 +377,8 @@ func DeleteClientConfiguration(db database.Store, auditor *audit.Auditor, logger
}
// Get existing app to verify it exists and is dynamically registered
- //nolint:gocritic // RFC 7592 endpoints need system access to retrieve dynamically registered clients
- existingApp, err := db.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
+ //nolint:gocritic // OAuth2 system context — RFC 7592 client configuration endpoint
+ existingApp, err := db.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemOAuth2(ctx), clientID)
if err == nil {
aReq.Old = existingApp
}
@@ -401,8 +401,8 @@ func DeleteClientConfiguration(db database.Store, auditor *audit.Auditor, logger
}
// Delete the client and all associated data (tokens, secrets, etc.)
- //nolint:gocritic // RFC 7592 endpoints need system access to delete dynamically registered clients
- err = db.DeleteOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
+ //nolint:gocritic // OAuth2 system context — RFC 7592 client configuration endpoint
+ err = db.DeleteOAuth2ProviderAppByClientID(dbauthz.AsSystemOAuth2(ctx), clientID)
if err != nil {
writeOAuth2RegistrationError(ctx, rw, http.StatusInternalServerError,
"server_error", "Failed to delete client")
@@ -453,8 +453,8 @@ func RequireRegistrationAccessToken(db database.Store) func(http.Handler) http.H
}
// Get the client and verify the registration access token
- //nolint:gocritic // RFC 7592 endpoints need system access to validate dynamically registered clients
- app, err := db.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
+ //nolint:gocritic // OAuth2 system context — RFC 7592 registration access token validation
+ app, err := db.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemOAuth2(ctx), clientID)
if err != nil {
if xerrors.Is(err, sql.ErrNoRows) {
// Return 401 for authentication-related issues, not 404
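The token-grant hunks below look up secrets, codes, and tokens by prefix (e.g. GetOAuth2ProviderAppSecretByPrefix): only a short plaintext prefix is indexed for the database lookup, and the remainder is verified against a stored hash. A generic sketch of that prefix-plus-hash scheme, assuming a `<prefix>.<secret>` wire format; the format and the `store` type are illustrative, not Coder's actual encoding:

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
	"strings"
)

// store maps a plaintext prefix to the SHA-256 hash of the full token.
// The prefix locates the row; the hash verifies the rest of the secret.
type store map[string][32]byte

func (s store) add(token string) {
	prefix, _, _ := strings.Cut(token, ".")
	s[prefix] = sha256.Sum256([]byte(token))
}

func (s store) verify(token string) bool {
	prefix, _, ok := strings.Cut(token, ".")
	if !ok {
		return false // malformed token: no prefix separator
	}
	want, found := s[prefix]
	if !found {
		return false // unknown prefix: nothing to compare against
	}
	got := sha256.Sum256([]byte(token))
	// Constant-time comparison avoids leaking hash bytes via timing.
	return subtle.ConstantTimeCompare(want[:], got[:]) == 1
}

func main() {
	s := store{}
	s.add("abc123.supersecret")
	fmt.Println(s.verify("abc123.supersecret"), s.verify("abc123.wrong"))
}
```

Because the full secret is never stored in plaintext, a leaked table only exposes prefixes and hashes, while the indexed prefix keeps lookups O(1).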
@@ -217,8 +217,8 @@ func authorizationCodeGrant(ctx context.Context, db database.Store, app database
if err != nil {
return codersdk.OAuth2TokenResponse{}, errBadSecret
}
- //nolint:gocritic // Users cannot read secrets so we must use the system.
- dbSecret, err := db.GetOAuth2ProviderAppSecretByPrefix(dbauthz.AsSystemRestricted(ctx), []byte(secret.Prefix))
+ //nolint:gocritic // OAuth2 system context — users cannot read secrets
+ dbSecret, err := db.GetOAuth2ProviderAppSecretByPrefix(dbauthz.AsSystemOAuth2(ctx), []byte(secret.Prefix))
if errors.Is(err, sql.ErrNoRows) {
return codersdk.OAuth2TokenResponse{}, errBadSecret
}
@@ -236,8 +236,8 @@ func authorizationCodeGrant(ctx context.Context, db database.Store, app database
if err != nil {
return codersdk.OAuth2TokenResponse{}, errBadCode
}
- //nolint:gocritic // There is no user yet so we must use the system.
- dbCode, err := db.GetOAuth2ProviderAppCodeByPrefix(dbauthz.AsSystemRestricted(ctx), []byte(code.Prefix))
+ //nolint:gocritic // OAuth2 system context — no authenticated user during token exchange
+ dbCode, err := db.GetOAuth2ProviderAppCodeByPrefix(dbauthz.AsSystemOAuth2(ctx), []byte(code.Prefix))
if errors.Is(err, sql.ErrNoRows) {
return codersdk.OAuth2TokenResponse{}, errBadCode
}
@@ -384,8 +384,8 @@ func refreshTokenGrant(ctx context.Context, db database.Store, app database.OAut
if err != nil {
return codersdk.OAuth2TokenResponse{}, errBadToken
}
- //nolint:gocritic // There is no user yet so we must use the system.
- dbToken, err := db.GetOAuth2ProviderAppTokenByPrefix(dbauthz.AsSystemRestricted(ctx), []byte(token.Prefix))
+ //nolint:gocritic // OAuth2 system context — no authenticated user during refresh
+ dbToken, err := db.GetOAuth2ProviderAppTokenByPrefix(dbauthz.AsSystemOAuth2(ctx), []byte(token.Prefix))
if errors.Is(err, sql.ErrNoRows) {
return codersdk.OAuth2TokenResponse{}, errBadToken
}
@@ -411,8 +411,8 @@ func refreshTokenGrant(ctx context.Context, db database.Store, app database.OAut
}
// Grab the user roles so we can perform the refresh as the user.
- //nolint:gocritic // There is no user yet so we must use the system.
- prevKey, err := db.GetAPIKeyByID(dbauthz.AsSystemRestricted(ctx), dbToken.APIKeyID)
+ //nolint:gocritic // OAuth2 system context — need to read the previous API key
+ prevKey, err := db.GetAPIKeyByID(dbauthz.AsSystemOAuth2(ctx), dbToken.APIKeyID)
if err != nil {
return codersdk.OAuth2TokenResponse{}, err
}
@@ -1881,8 +1881,8 @@ func (s *server) completeTemplateImportJob(ctx context.Context, job database.Pro
hashBytes := sha256.Sum256(moduleFiles)
hash := hex.EncodeToString(hashBytes[:])
- // nolint:gocritic // Requires reading "system" files
- file, err := db.GetFileByHashAndCreator(dbauthz.AsSystemRestricted(ctx), database.GetFileByHashAndCreatorParams{Hash: hash, CreatedBy: uuid.Nil})
+ //nolint:gocritic // Acting as provisionerd
+ file, err := db.GetFileByHashAndCreator(dbauthz.AsProvisionerd(ctx), database.GetFileByHashAndCreatorParams{Hash: hash, CreatedBy: uuid.Nil})
switch {
case err == nil:
// This set of modules is already cached, which means we can reuse them
@@ -1893,8 +1893,8 @@ func (s *server) completeTemplateImportJob(ctx context.Context, job database.Pro
case !xerrors.Is(err, sql.ErrNoRows):
return xerrors.Errorf("check for cached modules: %w", err)
default:
- // nolint:gocritic // Requires creating a "system" file
- file, err = db.InsertFile(dbauthz.AsSystemRestricted(ctx), database.InsertFileParams{
+ //nolint:gocritic // Acting as provisionerd
+ file, err = db.InsertFile(dbauthz.AsProvisionerd(ctx), database.InsertFileParams{
ID: uuid.New(),
Hash: hash,
CreatedBy: uuid.Nil,

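The module-file caching in the last hunk is content-addressed: hash the bytes with SHA-256, look the file up by hash, and insert only on a miss so identical module sets are stored once. A minimal sketch of that check-then-insert pattern; the `fileCache` type is hypothetical and stands in for the database-backed file store:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// fileCache stores blobs keyed by the hex SHA-256 of their contents, the
// same content-addressing completeTemplateImportJob uses for module files.
type fileCache map[string][]byte

// put returns the content hash, inserting only when it is not cached yet.
func (c fileCache) put(data []byte) (hash string, hit bool) {
	sum := sha256.Sum256(data)
	hash = hex.EncodeToString(sum[:])
	if _, ok := c[hash]; ok {
		return hash, true // already cached: reuse the existing file
	}
	c[hash] = data
	return hash, false
}

func main() {
	c := fileCache{}
	h1, hit1 := c.put([]byte("modules"))
	_, hit2 := c.put([]byte("modules"))
	fmt.Println(h1[:8], hit1, hit2)
}
```

Keying on the content hash rather than an ID is what makes the cache-or-insert branch in the hunk safe: two imports with byte-identical modules resolve to the same row.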