Compare commits

...

187 Commits

Author SHA1 Message Date
Danielle Maywood d897f23f43 fix(site): remove scroll overshoot from diff viewer
The minHeight: viewportHeight style on the last file wrapper allowed
users to scroll the last file all the way to the top of the viewport,
leaving a huge blank space below the actual content. This happened on
the local (unstaged) diff view but not the remote (pushed) view
because the remote view's loading skeleton delayed the viewport
measurement, leaving viewportHeight at 0.

Remove the viewport measurement and minHeight entirely so scrolling
naturally stops at the bottom of the last change. The "X files
changed" footer and file-tree scrollIntoView navigation are kept.

Also drops the custom emptyMessage from LocalDiffPanel so both
panels use the same default.
2026-03-19 21:27:35 +00:00
Steven Masley cc6766e64a chore: apply monotonic validation to workspace builds (#23180)
This is still not applied at the dynamic parameters websocket. The
wsbuilder is the source of truth for previous values, so it is the most
accurate place to validate, and invalid values will still fail in the
synchronous API call that builds a workspace.
This mirrors how we handle immutable params.

Closes https://github.com/coder/coder/issues/19064
2026-03-19 14:05:34 -05:00
Kyle Carberry 7db77bbefa feat(site): add MCP server admin UI (#23301)
This adds the UI but does not add it to the Settings sidebar. Until it's
actually functional and usable (which will come in future PRs) it will
remain hidden.

Next step is wiring this up to chats and actually testing the full flow
end-to-end, but we aren't there yet.
2026-03-19 18:53:35 +00:00
Cian Johnston a908d51097 fix(site): prevent scroll overshoot in diff viewer end spacer (#23305)
- Replaces the fixed `100vh` spacer at the bottom of the diff list with a
dynamically sized one
- Adds an "X files changed" message to the bottom of the diff

> This PR was created with the help of Coder Agents, and was reviewed by several humans and robots. 🧑‍💻🤝🤖
2026-03-19 18:39:39 +00:00
Mathias Fredriksson 3ef13f54ab feat(site): add @storybook/addon-vitest for local story testing (#23303)
There are 333 stories with play functions but no local way to run them.
CI uses Chromatic, which means broken play functions aren't caught until
after push. For agents, the feedback loop is even worse since they can't
open a browser.

This adds the `@storybook/addon-vitest` integration so play functions
can run locally via vitest + Playwright:

```sh
pnpm test:storybook
pnpm test:storybook src/path/to/component.stories.tsx
```

The vitest config is restructured into two projects (`unit` and
`storybook`).
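
For reference, a two-project split along these lines might look like the
following sketch (assumed shape based on the description; the PR's actual
config may differ):

```ts
// vitest.config.ts — hypothetical sketch, not the PR's actual config
import { defineConfig } from "vitest/config";

export default defineConfig({
	test: {
		projects: [
			// Plain unit tests run in the default environment.
			{ test: { name: "unit", include: ["src/**/*.test.{ts,tsx}"] } },
			// Story play functions run via the storybook/vitest integration
			// with a Playwright-backed browser.
			{ test: { name: "storybook", include: ["src/**/*.stories.tsx"] } },
		],
	},
});
```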
2026-03-19 20:27:40 +02:00
Danielle Maywood b4b562dc9b fix(site/src/pages/AgentsPage): reserve scrollbar gutter in chat skeleton (#23310) 2026-03-19 18:20:12 +00:00
greg-the-coder 176f57bb13 docs: Updated AWS Reference Arch to support black background (#23311)
Updated to the latest Reference Architecture diagrams, which support a
black background, as provided by the Coder marketing content team.
2026-03-19 13:11:23 -05:00
Danielle Maywood 748022f2ba fix(site): add inline decorator spacing to chat input via Lexical theme (#23308) 2026-03-19 18:10:23 +00:00
Mathias Fredriksson 0a0c976a1a test(coderd/chatd): add P0 coverage tests for subagent auth and panic recovery (#23309)
The processChat defer at line 2464 catches panics on its main
goroutine and transitions the chat to error status. This was
previously untested.

The test wraps the database Store to panic during PersistStep's
InTx call, which runs synchronously on the processChat goroutine.
A tool-level panic wouldn't work because executeTools has its own
recover that converts panics into tool error results.
2026-03-19 17:54:03 +00:00
Danielle Maywood 436a17fcf2 fix(site/src/pages/AgentsPage): remove stale NoDiffUrl story (#23306) 2026-03-19 17:22:29 +00:00
Danielle Maywood 0176a5dd6b fix(site): sync skeleton layouts with real components (#23304) 2026-03-19 17:17:12 +00:00
Mathias Fredriksson bb7c5f93f3 fix(site): don't wipe stream state when non-assistant message arrives (#23295)
scheduleStreamReset() fired for every durable message event with
changed=true, including user messages (e.g. promoted queued messages).
When a batch contained trailing message_parts followed by a user
message event, the batch loop flushed the parts (building stream
state), then scheduleStreamReset cleared it immediately.

Restrict the reset to assistant messages, which are the only role
that ends a streaming turn.
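
The guard can be sketched as follows (hypothetical names, not the actual
site code):

```typescript
// Hypothetical sketch: only an assistant message ends a streaming turn,
// so only it should schedule a stream-state reset.
type Role = "assistant" | "user" | "system";

interface DurableMessage {
	role: Role;
	changed: boolean;
}

function shouldScheduleStreamReset(msg: DurableMessage): boolean {
	// Before the fix this effectively returned msg.changed for every role,
	// so a promoted user message wiped stream state built earlier in the
	// same batch.
	return msg.changed && msg.role === "assistant";
}
```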
2026-03-19 18:59:28 +02:00
Danielle Maywood 84f032d97c fix(site): use search params for model edit/add navigation (#23277) 2026-03-19 16:50:31 +00:00
Steven Masley 91d7516dc1 test: remove classic params from ephemeral params test (#23302)
Dynamic parameters supports ephemeral parameters. Updated the test to
use dynamic parameters.

Ephemeral params **require** a default value.
Closes https://github.com/coder/coder/issues/19065
2026-03-19 11:32:36 -05:00
Michael Suchacz bb6e826d91 docs(site): add frontend agent guidelines from PR review analysis (#23299) 2026-03-19 17:15:20 +01:00
Kyle Carberry 742694eb20 fix: filter empty text/reasoning parts before sending to LLM (#23284)
## Problem

Anthropic rejects requests containing empty text content blocks with:

```
messages: text content blocks must be non-empty
```

Empty text parts (`""` or whitespace-only like `" "`) get persisted in
the database when a stream sends `TextStart`/`TextEnd` with no
`TextDelta` in between. On the next turn, these parts are loaded from
the DB and sent to Anthropic, which rejects them.

## Fix

Filter empty/whitespace-only text and reasoning parts at the two LLM
dispatch boundaries, without modifying persistence (the raw record is
preserved):

- **`partsToMessageParts()`** in `chatprompt.go` — filters when
converting persisted DB messages to fantasy message parts for LLM calls.
This is the last gateway before the Anthropic provider creates
`TextBlockParam` objects.
- **`toResponseMessages()`** in `chatloop.go` — filters when building
in-flight conversation messages between steps within a single turn.

Note: `flushActiveState()` (the interruption path) already had this
guard — the normal `TextEnd` streaming path did not, but since we're not
changing persistence, the fix is applied at the dispatch layer.
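
As a rough illustration (types and names invented here, not the actual
`chatprompt.go`/`chatloop.go` code), the dispatch-boundary filter amounts
to:

```typescript
// Hypothetical sketch of the filter applied at the LLM dispatch
// boundaries: drop text/reasoning parts that are empty or
// whitespace-only; pass every other part type through untouched.
interface MessagePart {
	type: "text" | "reasoning" | "tool_call";
	text?: string;
}

function filterEmptyParts(parts: MessagePart[]): MessagePart[] {
	return parts.filter((p) => {
		if (p.type !== "text" && p.type !== "reasoning") {
			return true; // e.g. tool calls are never filtered
		}
		return (p.text ?? "").trim().length > 0;
	});
}
```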
2026-03-19 12:10:54 -04:00
Danielle Maywood 31fe58819e fix: place diff comment box at end of selection and highlight selected lines (#23288) 2026-03-19 16:04:25 +00:00
Michael Suchacz 62cf884e81 fix(site): show PR number instead of title on mobile top bar (#23296) 2026-03-19 15:53:38 +00:00
Kyle Carberry 86cb313765 fix: update fantasy to fix OpenAI reasoning replay with Store enabled (#23297)
## Problem

When `Store: true` is set for OpenAI Responses API calls (the new
default), multi-turn conversations with reasoning models fail on the
second message:

```
stream response: bad request: Item 'rs_xxx' of type 'reasoning' was provided
without its required following item.
```

The fantasy library was reconstructing full `OfReasoning` input items
(with encrypted content and summary) when replaying assistant messages.
The API cannot pair these reconstructed reasoning items with the output
items that originally followed them because the output items are sent as
plain `OfMessage` without server-side IDs.

## Fix

Updates the fantasy dependency (`kylecarbs/fantasy@cj/go1.25`) to skip
reasoning parts during conversation replay in `toResponsesPrompt`. With
`Store` enabled, the API already has the reasoning persisted server-side
— it doesn't need to be replayed in the input.

Fantasy PR: https://github.com/charmbracelet/fantasy/pull/181

## Testing

Adds `TestOpenAIReasoningRoundTrip` integration test that:
1. Sends a query to `o4-mini` (reasoning model with `Store: true`)
2. Verifies reasoning content is persisted
3. Sends a follow-up message — this was the failing step
4. Verifies the follow-up completes successfully

Requires `OPENAI_API_KEY` env var to run.
2026-03-19 15:36:29 +00:00
Mathias Fredriksson ca57a0bcab fix(site): prevent rehype-raw from swallowing JSX in chat output (#23293)
Omit rehype-raw from the Streamdown rehype plugin list so
HTML-like syntax in LLM output is escaped as text instead of
being parsed by the HTML5 engine and stripped by rehype-sanitize.

When the LLM writes JSX fragments like <Component prop={val} />
outside code fences, remark-parse tags them as html nodes.
rehype-raw then feeds them to parse5, and rehype-sanitize strips
the unknown elements, silently destroying content. Without
rehype-raw, Streamdown auto-injects a remark plugin that converts
html nodes to text, preserving them as visible escaped text.

Markdown formatting (bold, italic, links, code blocks, tables)
is unaffected since those go through remark/rehype directly.
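
The fallback behavior amounts to a tree transform like this sketch
(simplified node shape, not the real remark plugin):

```typescript
// Hypothetical sketch of what Streamdown's auto-injected fallback does
// when rehype-raw is omitted: raw `html` nodes in the mdast tree become
// plain `text` nodes, so JSX fragments render as visible escaped text
// instead of being parsed and stripped.
interface MdastNode {
	type: string;
	value?: string;
	children?: MdastNode[];
}

function htmlToText(node: MdastNode): void {
	if (node.type === "html") {
		node.type = "text"; // value is kept verbatim and renders escaped
	}
	for (const child of node.children ?? []) {
		htmlToText(child);
	}
}
```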
2026-03-19 17:34:45 +02:00
Ben Potter 00d292d764 docs: remove EC2 install guide and rename AWS marketplace doc (#23298)
## Summary

- **Removed** `docs/install/cloud/ec2.md` — the standalone EC2 install
guide.
- **Renamed** `docs/install/cloud/aws-mktplc-ce.md` →
`docs/install/cloud/aws-marketplace.md` for a clearer, more discoverable
filename.
- **Updated** `docs/manifest.json`: replaced the "AWS EC2" entry with
"AWS Marketplace" pointing to the renamed file.
- **Updated** `docs/install/cloud/index.md`: fixed the internal link to
the renamed file.
2026-03-19 15:31:32 +00:00
Cian Johnston c107d2bf5d feat: add confirmation dialog to archive & delete workspace action (#23150)
* Adds a "molly-guard" to require users to type the workspace name
before the 'Archive & delete workspace' action fires. This prevents
accidental deletion of 'pet' workspaces.
* This is only shown for workspaces created *before* the chat was
created. The logic here is that any workspace that existed prior to the
chat *cannot* have been created by the chat.
2026-03-19 15:22:55 +00:00
Michael Suchacz 6d214644f6 fix: make TestInterruptAutoPromotionIgnoresLaterUsageLimitIncrease deterministic (#23279)
Eliminates the timing flake in
`TestInterruptAutoPromotionIgnoresLaterUsageLimitIncrease` by making the
chatd worker loop clock-controllable.

## Changes

**`coderd/chatd/chatd.go`**
- Replace `time.NewTicker` calls in `Server.start()` with
`p.clock.NewTicker` using named quartz tags `("chatd", "acquire")` and
`("chatd", "stale-recovery")`.

**`coderd/chatd/chatd_test.go`**
- Inject `quartz.NewMock(t)` into the test via `newActiveTestServer`
config override.
- Trap the acquire ticker so the test controls exactly when pending
chats are reacquired.
- Rewrite the test flow as explicit clock-advance steps instead of
wall-clock polling.

**`AGENTS.md`**
- Document the PR title scope rule (scope must be a real path containing
all changed files).

## Validation
- `go test ./coderd/chatd -run
TestInterruptAutoPromotionIgnoresLaterUsageLimitIncrease -count=100` 
- `go test ./coderd/chatd` 
- `make lint` 
2026-03-19 15:14:00 +00:00
Thomas Kosiewski 83809bb380 feat: add token-to-cookie endpoint for embedded chat WebSocket auth (#23280)
## Problem

The VS Code extension embeds the Coder agent chat UI in an iframe,
passing the session token via `postMessage`. HTTP requests use the
`Coder-Session-Token` header, but browser WebSocket connections **cannot
carry custom headers** — they rely on cookies. This causes all WebSocket
requests (e.g. streaming chat messages) to fail with authorization
errors in the embedded iframe.

## Solution

Add `POST /api/v2/users/me/session/token-to-cookie` — a lightweight
endpoint that converts the current (already-validated) session token
into a `Set-Cookie` response. The frontend embed bootstrap flow calls
this immediately after `API.setSessionToken(token)`, before any
WebSocket connections are opened.

### Backend (`coderd/userauth.go`, `coderd/coderd.go`)
- New handler `postSessionTokenCookie` behind `apiKeyMiddleware`.
- Reads the validated token via `httpmw.APITokenFromRequest(r)`.
- Sets an `HttpOnly` cookie with the API key's expiry, applying
site-wide cookie config (Secure, SameSite, host prefix) via
`HTTPCookies.Apply`.
- Returns `204 No Content`.

### Frontend (`site/src/pages/AgentsPage/EmbedContext.tsx`)
- `bootstrapChatEmbedSessionFn` now calls the new endpoint after setting
the header token and before fetching user/permissions.
- The cookie is in place before any WebSocket connections are opened.

## Security

- **No privilege escalation**: The token is already valid — this just
moves it from a header credential to a cookie credential.
- **POST only**: Avoids CSRF-via-navigation.
- **Same origin**: The iframe loads from the Coder server, so the cookie
applies to the correct domain.
- **HttpOnly**: The cookie is not accessible to JavaScript.

> Built with [Coder Agents](https://coder.com/agents) 🤖
2026-03-19 16:12:31 +01:00
Mathias Fredriksson c424c31ab8 fix: diff panel follow-ups from #23243 (#23247)
LazyFileDiff memo comparator: ignore renderAnnotation reference
changes when the file has no lineAnnotations, preventing comment-box
interactions from re-rendering every file diff.

useGitWatcher stale socket guard: check socketRef.current against
the local socket variable in all event handlers. Stale close events
from superseded connections no longer clobber the active socket or
schedule spurious reconnects.

useGitWatcher field comparison guard: add a compile-time Record
type that errors when WorkspaceAgentRepoChanges gains a field not
covered by the bailout comparison.

Tests: stale close race, reference stability on duplicate messages,
per-field change detection (branch, remote_origin, unified_diff),
and no-op removal of unknown repos.

Follow-up to #23243
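
A sketch of what such a comparator could look like (prop names assumed
from the description, not the actual component props):

```typescript
// Hypothetical React.memo comparator sketch: a new renderAnnotation
// reference only matters for files that actually have line annotations,
// so annotation-free files skip re-rendering when only that callback's
// identity changed.
interface LazyFileDiffProps {
	file: string;
	lineAnnotations?: unknown[];
	renderAnnotation: (line: number) => unknown;
}

function arePropsEqual(prev: LazyFileDiffProps, next: LazyFileDiffProps): boolean {
	if (prev.file !== next.file) return false;
	if (prev.lineAnnotations !== next.lineAnnotations) return false;
	const hasAnnotations = (next.lineAnnotations?.length ?? 0) > 0;
	// Ignore renderAnnotation identity changes for annotation-free files.
	return hasAnnotations ? prev.renderAnnotation === next.renderAnnotation : true;
}
```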
2026-03-19 14:41:55 +00:00
Mathias Fredriksson 635ce1f064 fix: prevent git diff panel scroll jumps on chat updates (#23243)
Three changes that eliminate unnecessary re-renders cascading into
the FileDiff Shadow DOM components during chat/git-watcher updates:

useGitWatcher: compare repo fields before updating state, return
prev Map when nothing changed instead of always allocating a new one.

RemoteDiffPanel: remove dataUpdatedAt from parsedFiles memo deps,
replace it with a content-derived version counter. The memo now only
recomputes when the actual diff string changes.

DiffViewer: pre-compute per-file options and line annotations into
memoized Maps, wrap LazyFileDiff in React.memo so it skips renders
when props are reference-equal.
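
The bail-out pattern, sketched with assumed field names from the
description:

```typescript
// Hypothetical sketch of the useGitWatcher bail-out: compare repo fields
// before updating, and return the previous Map untouched when nothing
// changed, so downstream memos stay reference-stable.
interface RepoChanges {
	branch: string;
	remote_origin: string;
	unified_diff: string;
}

function mergeRepo(
	prev: Map<string, RepoChanges>,
	key: string,
	incoming: RepoChanges,
): Map<string, RepoChanges> {
	const existing = prev.get(key);
	if (
		existing &&
		existing.branch === incoming.branch &&
		existing.remote_origin === incoming.remote_origin &&
		existing.unified_diff === incoming.unified_diff
	) {
		return prev; // no allocation: same reference, no re-renders
	}
	const next = new Map(prev);
	next.set(key, incoming);
	return next;
}
```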
2026-03-19 16:32:17 +02:00
Kyle Carberry d8ff67fb68 feat: add MCP server configuration backend for chats (#23227)
## Summary

Adds the database schema, API endpoints, SDK types, and encryption
wrappers for admin-managed MCP (Model Context Protocol) server
configurations that chatd can consume. This is the backend foundation
for allowing external MCP tools (Sentry, Linear, GitHub, etc.) to be
used during AI chat sessions.

## Database

Two new tables:
- **`mcp_server_configs`**: Admin-managed server definitions with URL,
transport (Streamable HTTP / SSE), auth config (none / OAuth2 / API key
/ custom headers), tool allow/deny lists, and an availability policy
(`force_on` / `default_on` / `default_off`). Includes CHECK constraints
on transport, auth_type, and availability values.
- **`mcp_server_user_tokens`**: Per-user OAuth2 tokens for servers
requiring individual authentication. Cascades on user/config deletion.

New column on `chats` table:
- **`mcp_server_ids UUID[]`**: Per-chat MCP server selection, following
the same pattern as `model_config_id` — passed at chat creation,
changeable per-message with nil-means-no-change semantics.

## API Endpoints

All routes are under `/api/experimental/mcp/servers/` and gated behind
the `agents` experiment.

**Admin endpoints** (`ResourceDeploymentConfig` auth):
- `POST /` — Create MCP server config
- `PATCH /{id}` — Update MCP server config (full-replace)
- `DELETE /{id}` — Delete MCP server config

**Authenticated endpoints** (all users, enabled servers only for
non-admins):
- `GET /` — List configs (admins see all, members see enabled-only with
admin fields redacted)
- `GET /{id}` — Get config by ID (with `auth_connected` populated
per-user)

**OAuth2 per-user auth flow:**
- `GET /{id}/oauth2/connect` — Initiate OAuth2 flow (state cookie CSRF
protection)
- `GET /{id}/oauth2/callback` — Handle OAuth2 callback, store tokens
- `DELETE /{id}/oauth2/disconnect` — Remove stored OAuth2 tokens

## Security

- **Secrets never returned**: `OAuth2ClientSecret`, `APIKeyValue`, and
`CustomHeaders` are never in API responses — only boolean indicators
(`has_oauth2_secret`, `has_api_key`, `has_custom_headers`).
- **Field redaction for non-admins**: `convertMCPServerConfigRedacted`
strips `OAuth2ClientID`, auth URLs, scopes, and `APIKeyHeader` from
non-admin responses.
- **dbcrypt encryption at rest**: All 5 secret fields use `dbcrypt_keys`
encryption with full encrypt-on-write / decrypt-on-read wrappers (11
dbcrypt method overrides + 2 helpers), following the same pattern as
`chat_providers.api_key`.
- **OAuth2 CSRF protection**: State parameter stored in `HttpOnly`
cookie with `HTTPCookies.Apply()` for correct `Secure`/`SameSite` behind
TLS-terminating proxies.
- **dbauthz authorization**: All 18 querier methods have authorization
wrappers. Read operations use `ActionRead`, write operations use
`ActionUpdate` on `ResourceDeploymentConfig`.

## Governance Model

| Control | Implementation |
|---------|---------------|
| **Global kill switch** | `enabled` defaults to `false` |
| **Availability policy** | `force_on` (always injected), `default_on`
(pre-selected), `default_off` (opt-in) |
| **Per-chat selection** | `mcp_server_ids` on `CreateChatRequest` /
`CreateChatMessageRequest` |
| **Auth gate** | OAuth2 servers require per-user auth before tools are
injected |
| **Tool-level allow/deny** | Arrays on `mcp_server_configs` for
granular tool filtering |
| **Secrets encrypted at rest** | Uses `dbcrypt_keys` (same pattern as
`chat_providers.api_key`) |

## Tests

8 test functions covering:
- Full CRUD lifecycle (create, list, update, delete)
- Non-admin visibility filtering (enabled-only, field redaction)
- `auth_connected` population for OAuth2 vs non-OAuth2 servers
- Availability policy validation (valid values + invalid rejection)
- Unique slug enforcement (409 Conflict)
- OAuth2 disconnect idempotency
- Chat creation with `mcp_server_ids` persistence

## Known Limitations (Deferred)

These are documented and intentional for an experimental feature:
- **Audit logging** not yet wired — will add when feature stabilizes
- **Cross-field validation** (e.g., OAuth2 fields required when
`auth_type=oauth2`) — admin-only endpoint, will add when stabilizing
- **`force_on` auto-injection** — query exists but not yet wired into
chatd tool injection (follow-up)
- **Additional test coverage** — 403 auth tests, GET-by-ID tests,
callback CSRF tests planned for follow-up

## What's NOT in this PR

- Frontend UI (admin panel + chat picker)
- Actual MCP client connections (`chatd/chatmcp/` manager)
- Tool injection into `chatloop/`
2026-03-19 14:07:36 +00:00
Dean Sheather 8f78c5145f chore: force deploying to dogfood from main (#23290)
Always deploy from main for now. We want to keep testing commits to main
as soon as they're merged, so we're going to disable the release
freezing behavior. We will test cut releases on a separate deployment,
upgraded manually.
2026-03-19 13:52:30 +00:00
Danielle Maywood 1840d6f942 fix: improve inline file reference chip contrast and spacing (#23285) 2026-03-19 13:36:21 +00:00
Mathias Fredriksson f31a8277a9 fix: show promoted queued message in chat timeline immediately (#23232)
Two issues caused the promoted message to never appear:

1. handlePromoteQueuedMessage discarded the ChatMessage returned by
the promote API, relying on the WebSocket to deliver it.

2. Even when the WebSocket did deliver it (via upsertDurableMessage),
the queue_update event in the same batch called
updateChatQueuedMessages, which mutated the React Query cache. This
gave chatMessagesList a new reference, triggering the message sync
effect. The effect found the promoted message in the store but not in
the REST-fetched data, classified it as a stale entry (the path
designed for edit truncation), and called replaceMessages, wiping it.

Fix (1): capture the ChatMessage from the promote response and upsert
it into the store, matching handleSend for non-queued messages.

Fix (2): track the fetched message array elements across effect runs
using element-level reference comparison. Only run the
hasStaleEntries/replaceMessages path when the message objects actually
changed (e.g. a refetch producing new objects from the server), not
when only an unrelated field like queued_messages caused the query
data reference to update. Element references work because
useMemo(flatMap) preserves object identity when only non-message
fields change in the page data.
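
Fix (2) boils down to an element-level comparison along these lines
(names invented for illustration):

```typescript
// Hypothetical sketch: treat the fetched message list as changed only
// when its elements changed, not when the surrounding query data merely
// got a new container reference (e.g. because queued_messages updated).
function messagesActuallyChanged<T>(prevElems: T[], nextElems: T[]): boolean {
	if (prevElems.length !== nextElems.length) return true;
	// Element references are stable across unrelated cache updates, so a
	// reference mismatch means the server really produced new objects.
	return nextElems.some((msg, i) => msg !== prevElems[i]);
}
```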
2026-03-19 15:27:03 +02:00
Kyle Carberry fdc2366227 chore: update fantasy dep to rebased cj/go1.25 branch (#23242)
Updates the `charm.land/fantasy` replace to the rebased `cj/go1.25`
branch on `kylecarbs/fantasy`, which now includes:

- **chore: downgrade to Go 1.25**
- **feat: anthropic computer use**
- **chore: use kylecarbs/openai-go fork for coder/coder compat**

Switches the `openai-go/v3` replace from `SasSwart/openai-go` →
`kylecarbs/openai-go`, which is the same SasSwart perf fork plus a fix
for `WithJSONSet` being clobbered by deferred body serialization.
Without the fix, `NewStreaming` silently drops `stream: true` from
requests. See https://github.com/kylecarbs/openai-go/pull/2 for details.
2026-03-19 12:59:39 +00:00
Matt Vollmer d0f93f0818 fix(site): update editing message in agents chat input (#23283)
Updates the editing banner message in the agents chat input from:

> Editing message — all subsequent messages will be deleted

to:

> Editing will delete all subsequent messages and restart the
conversation here.

---

PR generated with Coder Agents
2026-03-19 12:57:59 +00:00
Ehab Younes 7a98b4a876 fix(coderd): gate OAuth2 well-known endpoints behind experiment flag (#23278)
- Add `RequireExperimentWithDevBypass` middleware to
`/.well-known/oauth-authorization-server` and
`/.well-known/oauth-protected-resource` routes, matching the existing
`/oauth2` routes.
- Clients can now detect OAuth2 support via unauthenticated discovery
(404 = not available).

Fixes #21608
2026-03-19 14:42:04 +03:00
Michael Suchacz 5b9a9e5bdf fix(site): guard malformed agent model refs (#23252)
## Summary
- guard Agent pages against malformed model provider/model values before
trimming
- reuse a shared model-ref normalizer across Agent detail, sidebar,
list, and create flows
- add regression coverage for malformed catalog and config entries

## Validation
- `cd site && pnpm exec vitest run
src/pages/AgentsPage/modelOptions.test.ts
src/pages/AgentsPage/AgentDetail.test.ts`
- `cd site && pnpm lint:types`
2026-03-19 12:27:24 +01:00
Danny Kopping 2ee90dfd84 revert: "ci: add verbose flag to flux reconcile and increase helmrelease timeout" (#23276)
Reverts coder/coder#23240
2026-03-19 10:01:16 +02:00
Ethan cda460f5df perf(coderd/chatd): skip same-replica stream DB rereads (#23218)
## Problem

Scaletest follow-up storms showed that the chat stream path was doing a
same-replica DB reread for every durable message it had already
delivered locally.

In a 600-chat / 10-turn run, `/stream`-attributed
`GetChatMessagesByChatID` calls reached about 14.2k across 5,400
follow-up turns — roughly **2.63 rereads per turn**. The primary coderd
replicas saturated their DB pools at 60/60 open connections during the
storm window.

The root cause: when pubsub was active, `Subscribe()` suppressed local
durable `message` events and relied entirely on pubsub notify →
`GetChatMessagesByChatID` for catch-up. Same-replica subscribers paid
the full DB round-trip even though the persisting process was on the
same replica.

## Solution

Add a bounded per-chat **durable message cache** to `chatStreamState` so
that same-replica subscribers can catch up from memory instead of the
database.

### How it works

1. `publishMessage()` caches the SDK event in `chatStreamState` before
local fanout and pubsub notify.
2. `publishEditedMessage()` replaces the cache with only the edited
message, then publishes `FullRefresh`.
3. `Subscribe()` handles ordinary `AfterMessageID` notifies by first
consulting the per-chat durable cache and only falling back to
`GetChatMessagesByChatID` on cache miss.
4. `FullRefresh` always forces a DB reread (cache is bypassed).

### Safety properties

- If the cache misses (e.g. message expired or remote replica), the DB
catch-up still runs — no silent message loss.
- `FullRefresh` (edits) always rereads from the database.
- Remote replicas still use the pubsub + DB path unchanged.
- The cache is bounded (`maxDurableMessageCacheSize = 256`) and scoped
per chat — no unbounded memory growth.

## Impact

This change removes the entire same-replica portion of the stream
rereads. Based on the 600-chat follow-up run, the upper bound on saved
work is the same-replica share of about 14.2k `GetChatMessagesByChatID`
rereads, with the observed total stream reread rate at about 2.63
rereads per follow-up turn.
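
A minimal sketch of the bounded cache idea, with invented names and
shapes rather than chatd's actual Go types:

```typescript
// Hypothetical sketch of the per-chat durable message cache: bounded
// in size, and a miss signals the caller to fall back to the DB
// catch-up query, so there is no silent message loss.
const maxDurableMessageCacheSize = 256;

interface DurableEvent {
	messageId: number;
	payload: string;
}

class DurableMessageCache {
	private events: DurableEvent[] = [];

	push(ev: DurableEvent): void {
		this.events.push(ev);
		if (this.events.length > maxDurableMessageCacheSize) {
			this.events.shift(); // bounded: evict the oldest entry
		}
	}

	// Returns cached events after the given message ID, or null on a
	// cache miss (caller then runs the DB catch-up instead).
	after(messageId: number): DurableEvent[] | null {
		const idx = this.events.findIndex((e) => e.messageId === messageId);
		if (idx === -1) return null;
		return this.events.slice(idx + 1);
	}
}
```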
2026-03-19 14:02:00 +11:00
dependabot[bot] 7877b26088 chore: bump google.golang.org/grpc from 1.79.2 to 1.79.3 (#23271)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from
1.79.2 to 1.79.3.
Release notes (sourced from google.golang.org/grpc's releases):

**Release 1.79.3: Security**
- server: fix an authorization bypass where malformed `:path` headers
(missing the leading slash) could bypass path-based restricted "deny"
rules in interceptors like `grpc/authz`. Any request with a
non-canonical path is now immediately rejected with an `Unimplemented`
error. (grpc/grpc-go#8981)
Commits:
- `dda86db` Change version to 1.79.3 (grpc/grpc-go#8983)
- `72186f1` grpc: enforce strict path checking for incoming requests on
the server (grpc/grpc-go#8981)
- `97ca352` Changing version to 1.79.3-dev (grpc/grpc-go#8954)
- Full diff: https://github.com/grpc/grpc-go/compare/v1.79.2...v1.79.3



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 02:46:30 +00:00
Jeremy Ruppel 9ba822628a refactor(site): remove derivable useEffect antipatterns (#23267)
React's [Why You Might Not Need An
Effect](https://react.dev/learn/you-might-not-need-an-effect) article
describes several antipatterns and footguns you might encounter when
working with useEffect, so I fed it to an agent and let it determine
some low hanging fruit to fix:

Replace useEffect+setState patterns with direct computations where the
values are purely derived from props, state, or query data:

- useWebpushNotifications: derive `enabled` inline from query data
instead of setting it via useEffect
- ProxyContext: replace `proxy` state + updateProxy callback + useEffect
with a single useMemo over its three inputs
- useSyncFormParameters: replace ref-sync useEffect with direct
assignment during render
- AgentsSidebar (LoadMoreSentinel): replace two ref-sync useEffects with
direct assignments during render

Co-authored by Coder Agent 🤖
2026-03-18 19:26:17 -04:00
Danielle Maywood 0339c083ab feat(site): add scroll-to-bottom button to agent chat (#23212) 2026-03-18 22:30:09 +00:00
Cian Johnston be1c06dec9 feat: add endpoint and CLI for users to view their own OIDC claims (#23053)
- Adds a new API endpoint `GET /api/v2/users/oidc-claims` that returns
only the **merged claims** (not the separate id_token/userinfo
breakdown). Scoped exclusively to the authenticated user's own identity
— no user parameter, so users cannot view each other's claims.
- Adds a new CLI command: `coder users oidc-claims` that hits the
above endpoint.
- The existing owner-only debug endpoint is preserved unchanged for
admins who need the full claim breakdown.


> 🤖 This PR was created with the help of Coder Agents, and will be
reviewed by my human. 🧑‍💻
2026-03-18 22:10:04 +00:00
greg-the-coder a6856320f9 docs: update Install to support AWS Marketplace Coder Community Edition (#22314)
Added new AWS install documentation and screenshots to support
deployment of AWS Marketplace Coder Community Edition, as the
primary/recommended method on AWS for POCs and experimenting with Coder.
2026-03-18 16:47:57 -05:00
Kyle Carberry 2245612ece fix(site): fix browser back navigation between agents settings pages (#23254) 2026-03-18 17:07:34 -04:00
Hugo Dutka d285a3e74e fix: handle null bytes in chat messages (#22946)
This PR fixes a bug where if a tool result contained binary data it
wouldn't be persisted to the database.

`jsonb` in Postgres is unable to store null bytes which are sometimes
output by tool results. This change makes it so that we encode them with
a special escape sequence before saving them to the database, and decode
them on read.

<img width="808" height="637" alt="Screenshot 2026-03-11 at 13 14 06"
src="https://github.com/user-attachments/assets/9be353eb-ff26-40ec-9f0a-195022b11f43"
/>
2026-03-18 21:19:25 +01:00
Jon Ayers eba7d943a0 fix: run stop build before starting a workspace with a failed start (#22925) 2026-03-18 14:58:20 -05:00
Kyle Carberry 147d627505 fix: deduplicate PR insights, fix cost computation, simplify UI (#23251)
## Problem

The `/agents/settings/insights` page had several issues:

1. **Duplicate PRs** in "Recent Pull Requests" — multiple chats
referencing the same PR URL each produced a row
2. **Wildly wrong costs** — the cost subquery summed ALL messages across
the entire chat *tree* (`GROUP BY root_chat_id`), so every chat in a
tree got the same inflated total. When aggregated, the same tree cost
was counted N× per PR in that tree
3. **UI clutter** — too many stat cards, too many table columns, mixed
naming conventions

## Fix

### Backend (SQL)
- **Deduplicate by PR URL** using `DISTINCT ON (COALESCE(cds.url,
c.id::text))` across all 4 queries
- **Fix cost computation**: use two CTEs — `pr_costs` sums cost from ALL
chats that reference a PR (so review chats contribute), `deduped` picks
one row per PR for state/additions/deletions via DISTINCT ON
- **Tests**: 3 subtests covering multi-chat cost summing, different PRs
no duplication, and duplicate URL counted once
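In application terms, the dedup rule keeps one row per `COALESCE(url, id)` key. A sketch with illustrative types (the real logic is SQL, not TypeScript):

```typescript
type PRRow = { chatId: string; prUrl?: string };

// Keep the first row per PR, keyed by URL when present, else the chat ID
// (mirroring SQL's DISTINCT ON (COALESCE(cds.url, c.id::text))).
function dedupeByPR(rows: PRRow[]): PRRow[] {
	const seen = new Map<string, PRRow>();
	for (const row of rows) {
		const key = row.prUrl ?? row.chatId;
		if (!seen.has(key)) seen.set(key, row);
	}
	return [...seen.values()];
}
```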

### Frontend
- **3 stat cards** (down from 5): Merged, Merge rate, Cost / merge
- **2-line chart** (down from 3): created (dashed) + merged (solid)
- **4-column model table** (down from 7): Model, Merged, Merge rate,
Cost/merge
- **4-column recent table** (down from 7): Title, Status, Cost, Created
— with `table-fixed` to prevent overflow
- **Consistent naming**: no mixed PR/PRs abbreviation, contextual labels
since page title establishes context
2026-03-18 15:50:50 -04:00
Cian Johnston 14ed3e3644 feat: bump workspace last_used_at on chat heartbeat (#23205)
- coderd: Wires `options.WorkspaceUsageTracker` into the chatd config.
- chatd: Adds `UsageTracker` and calls `UsageTracker.Add(workspaceID)`
on each heartbeat tick
- chatd: adds tests to verify `last_used_at` bump behaviour

> 🤖 This PR was created with the help of Coder Agents, and will be
reviewed by my human. 🧑‍💻
2026-03-18 19:07:21 +00:00
Mathias Fredriksson fb61c48227 fix: remove omitempty from required ChatMessagePart fields (#23250)
ChatMessagePart uses a flat struct with omitempty on all fields,
but some fields are required in their TypeScript variant (no ?
suffix in the variants struct tag). When Go omits a zero-valued
required field, the frontend receives undefined where it expects
a concrete value.

Remove omitempty from fields that are required in at least one
variant: Text, URL, MediaType, FileName, StartLine, EndLine,
Content. Fields where all variants use ? keep omitempty.

Add a sub-test to TestChatMessagePartVariantTags that enforces
this invariant via reflection so future additions cannot
reintroduce the mismatch.

Supersedes #23249
2026-03-18 18:43:50 +00:00
Kyle Carberry 1f0d896fc9 feat: add deleted flag to chat messages for soft-delete (#23223)
Adds a `deleted` boolean column to the `chat_messages` table. Messages
are never physically deleted from the database — instead they are marked
as deleted so that usage and cost data is preserved.

## Changes

### Migration
- New migration (000444) adds `deleted boolean NOT NULL DEFAULT false`
to `chat_messages`

### SQL queries
- `DeleteChatMessagesAfterID` → `SoftDeleteChatMessagesAfterID` (UPDATE
SET deleted=true instead of DELETE)
- New `SoftDeleteChatMessageByID` query for single-message soft-delete
- All read queries now filter `deleted = false`:
  - `GetChatMessageByID`
  - `GetChatMessagesByChatID`
  - `GetChatMessagesByChatIDDescPaginated`
  - `GetChatMessagesForPromptByChatID` (both CTE and main query)
  - `GetLastChatMessageByRole`
- Cost/usage queries (`GetChatCostSummary`, `GetChatCostPerModel`, etc.)
intentionally still include deleted messages to preserve accurate spend
tracking

### EditMessage behavior
- Previously: updated the message content in-place + hard-deleted
subsequent messages
- Now: soft-deletes the original message + soft-deletes subsequent
messages + inserts a new message with the updated content
- This preserves the original message data (tokens, cost, content) in
the database
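The resulting read/cost split can be sketched as an in-memory analogue (illustrative types, not the actual queries):

```typescript
type ChatMessage = { id: number; deleted: boolean; costMicros: number };

// Read paths filter soft-deleted messages (the SQL `deleted = false` predicate).
function visibleMessages(all: ChatMessage[]): ChatMessage[] {
	return all.filter((m) => !m.deleted);
}

// Cost queries intentionally include deleted messages to preserve spend tracking.
function totalCostMicros(all: ChatMessage[]): number {
	return all.reduce((sum, m) => sum + m.costMicros, 0);
}
```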
2026-03-18 14:37:09 -04:00
blinkagent[bot] f395e2e9c2 chore(dogfood): add gh CLI wrapper for automatic auth via coder external-auth (#23234)
- Adds a wrapper script at `/usr/local/bin/gh` in the dogfood image that
ensures the GitHub CLI stays authenticated even when tokens expire
during long-running workspace sessions.

Requested by @johnstcn, based on a suggestion from @kylecarbs.

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-03-18 18:35:54 +00:00
Kyle Carberry cbe29e4e25 fix: encode non-ASCII filenames in chat file upload header (#23241)
## Problem

Uploading a file on the `/agents` chat page fails with:

```
Failed to execute 'setRequestHeader' on 'XMLHttpRequest': String contains non ISO-8859-1 code point.
```

This happens when the image filename contains non-ASCII characters (e.g.
CJK characters from macOS screenshots like `スクリーンショット.png`, accented
characters, emoji, etc.). HTTP headers only support ISO-8859-1 code
points, and the filename was being interpolated directly into the
`Content-Disposition` header.

## Fix

Use [RFC 5987](https://datatracker.ietf.org/doc/html/rfc5987)
`filename*=UTF-8''` encoding so the percent-encoded name is always valid
in the header. A static ASCII `filename="file"` fallback is included for
older clients.

The server already uses Go's `mime.ParseMediaType` which decodes
`filename*` automatically, so no backend changes are needed.

### Before
```ts
"Content-Disposition": `attachment; filename="${file.name}"`
```

### After
```ts
"Content-Disposition": `attachment; filename="file"; filename*=UTF-8''${encodeURIComponent(file.name)}`
```

## Testing

Added a server-side test (`TestGetChatFile/UnicodeFilename`) that
uploads with a Japanese filename and verifies it round-trips correctly
through the `Content-Disposition` header.
2026-03-18 14:11:30 -04:00
Mathias Fredriksson dbb7aee65b fix: make reasoning part text optional in generated types (#23249)
Go serializes ChatMessagePart.Text with omitempty, so empty
reasoning text (from reasoning_start with no delta) is omitted
from JSON. The frontend receives {type: "reasoning"} with text
as undefined, crashing on .trim() calls.

Mark Text as optional in the reasoning variant via the variants
struct tag. This generates ChatReasoningPart with text?: string
and the frontend falls back to "" via nullish coalescing.

Closes #23245
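The frontend fallback described above amounts to the following (sketch using the generated type's shape):

```typescript
type ChatReasoningPart = { type: "reasoning"; text?: string };

// text may be omitted by Go's omitempty; fall back to "" before calling .trim().
function reasoningText(part: ChatReasoningPart): string {
	return (part.text ?? "").trim();
}
```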
2026-03-18 18:10:35 +00:00
Kyle Carberry 90cf4f0a91 refactor: consolidate chat streaming endpoints under /stream (#23248)
Moves per-chat streaming/watch endpoints under a `/stream` sub-route for
better API consistency:

| Before | After |
|--------|-------|
| `GET /{chat}/stream` | `GET /{chat}/stream/` |
| `GET /{chat}/desktop` | `GET /{chat}/stream/desktop` |
| `GET /{chat}/git/watch` | `GET /{chat}/stream/git` |

### Changes
- **`coderd/coderd.go`** — Route definitions: replaced flat routes with
`r.Route("/stream", ...)` sub-router
- **`site/src/api/api.ts`** — Updated WebSocket URLs for `watchChatGit`
and `watchChatDesktop`
- **`coderd/chats_test.go`** — Updated desktop test URL
- **`coderd/workspaceagents_internal_test.go`** — Updated git watcher
test URLs (route mounts + dial URLs)
- **`site/src/pages/AgentsPage/AgentDetail.stories.tsx`** — Updated
storybook WebSocket mock paths
2026-03-18 18:04:42 +00:00
Cian Johnston 0b13ba978a fix: rename chat logger from coderd.chats.chat-processor to coderd.chatd.processor (#23246)
- Rename logger `coderd.chats` to `coderd.chatd` in `coderd.go`
- Rename sub-logger `chat-processor` to `processor` in `chatd/chatd.go`
2026-03-18 17:48:47 +00:00
blinkagent[bot] 4ce9fbeaf0 fix: show accurate error message when startup script fails instead of misleading "agents not connected" (#22843)
Fixes #21946

When a startup script fails (exits with non-zero code), the UI displayed
a misleading "Workspace agents are not connected" error even though the
agent is actually connected and functional (SSH works, web terminal
works).

- Extracts the `WorkspaceAlert` component from `Workspace.tsx` to its
own component
- Updates the `WorkspaceAlert` component in `Workspace.tsx` to
distinguish correctly between agent disconnection, timeout, shutdown,
and startup script failures.
- Fixes a double-period bug in the alert description ("the agent has
not connected yet..")

Created on behalf of @kylecarbs

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Cian Johnston <cian@coder.com>
2026-03-18 17:22:18 +00:00
Danny Kopping 4b8a5e2b10 ci: add verbose flag to flux reconcile and increase helmrelease timeout (#23240)
*Disclaimer: implemented by a Coder Agent using Claude Opus 4.6*

Ref:
https://github.com/coder/coder/actions/runs/23251163568/job/67597250364

The deploy workflow Flux reconciliation step failed with no visibility
into what went wrong. Two changes:

- Add `--verbose` to every `flux reconcile` invocation to print
generated objects on failure
- Increase `--timeout` for the four `helmrelease` reconciliations from
the default 5m to 10m
2026-03-18 17:12:14 +00:00
Kyle Carberry d4a072b61e fix: address review comments on InsertChatMessages (#23239)
Follow-up to #23220, addressing Cian's review comments:

- **SQL casing**: Uppercase `UNNEST` to match `NULLIF`/`COALESCE`
convention in the query.
- **Builder pattern**: `chatMessage` struct now uses unexported fields
with a `newChatMessage` constructor for required fields (role, content,
visibility, modelConfigID, contentVersion) and chainable builder methods
(`withCreatedBy`, `withCompressed`, `withUsage`, `withContextLimit`,
`withTotalCostMicros`, `withRuntimeMs`) for optional/nullable fields.
- **Batch test in chats_test**: Replaced the `for i := 0; i < 2` loop
with a single batch insert of 2 messages to actually exercise the batch
logic.
- **Multi-message querier test**: Added `BatchInsertMultipleMessages`
test verifying 3-message batch insert with role ordering, sequential
IDs, nullable field semantics (NULL for zero UUIDs and zero ints), and
token/cost assertions.

---------

Co-authored-by: Cian Johnston <cian@coder.com>
2026-03-18 17:06:44 +00:00
Steven Masley c46136ff73 chore: update coder/trivy override (#23230)
Coder/preview does this update as well. Because it is a `replace`
directive, we have to update our `replace` manually too.
2026-03-18 12:03:56 -05:00
Cian Johnston 65b7658568 chore: extract testutil.FakeSink for slog test assertions (#23208)
Follow-up to [review comment on
#23025](https://github.com/coder/coder/pull/23025#discussion_r2930309487)
from @mafredri.

Extracts the repeated `logSink` / `fakeSink` test pattern into a shared
`testutil.FakeSink` and migrates all existing call sites.

> 🤖 This PR was created with the help of Coder Agents, and will be
reviewed by my human. 🧑‍💻

---------

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-03-18 17:02:38 +00:00
Kyle Carberry 2577d16af2 fix(site): use correct /api/experimental endpoint for PR insights (#23235)
## Problem

The `/agents/settings/insights` page was broken because
`InsightsContent` was calling `/api/v2/chats/insights/pull-requests`,
but the backend route is registered under
`/api/experimental/chats/insights/pull-requests` (the entire `/chats`
route block lives under `r.Route("/api/experimental", ...)` in
`coderd.go`).

Every other chat endpoint in the frontend correctly uses
`/api/experimental/chats/...`, but this one was missed.

## Fix

- Added `getPRInsights` method to the API client (`api.ts`) pointing to
`/api/experimental/chats/insights/pull-requests`
- Added a `prInsights` react-query helper in `api/queries/chats.ts`
(matching the pattern of `chatCostUsers`, etc.)
- Updated `InsightsContent.tsx` to use the query helper instead of a raw
`fetch()` with the wrong URL
2026-03-18 16:46:53 +00:00
Mathias Fredriksson 119030d795 fix(agent): default process working directory to agent dir or $HOME (#23224)
Processes started via the agent process API inherited the agent's
own working directory (/tmp/coder.xxx) when no WorkDir was
specified. SSH sessions already use a fallback chain: configured
agent directory > $HOME. This wires the same manifest directory
closure into the process manager so the priority is now:

  explicit req.WorkDir > agent configured dir > $HOME

The resolved directory is recorded on the process struct so
ProcessInfo.WorkDir and pathStore notifications reflect where
the process actually ran.
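The priority chain above can be sketched as follows (hypothetical helper, not the Go code):

```typescript
// Resolve a process working directory: explicit req.WorkDir wins, then the
// agent's configured directory, then $HOME. Empty strings mean "not set".
function resolveWorkDir(reqWorkDir: string, agentDir: string, home: string): string {
	return reqWorkDir || agentDir || home;
}
```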
2026-03-18 16:46:26 +00:00
Kyle Carberry 483adc59fe feat: replace InsertChatMessage with batch InsertChatMessages (#23220)
Replaces the singular `InsertChatMessage` query with
`InsertChatMessages` that uses PostgreSQL's `unnest()` for batch
inserts. This reduces the number of database round-trips when inserting
multiple messages in a single transaction.

## Changes

- **SQL**: New `InsertChatMessages :many` query using `unnest()` arrays
following the existing codebase pattern (e.g.,
`InsertWorkspaceAgentStats`). Preserves the CTE that updates
`chats.last_model_config_id` using the last non-null model config from
the batch. Uses `NULLIF` for UUID columns to handle NULL foreign keys.
- **Go layers**: Updated `querier.go`, `dbauthz.go`,
`dbmetrics/querymetrics.go`, `dbmock/dbmock.go`, and `queries.sql.go` to
use the new batch signature (`[]ChatMessage` return type, array params).
- **chatd.go**: All call sites converted to batch inserts:
  - **CreateChat**: System prompt + user message batched into one call
- **persistStep**: Assistant message + tool messages batched into one
call
- **persistSummary**: Hidden summary + assistant + tool messages batched
into one call
  - Single-message sites use the same API with single-element arrays
- **Helper**: New `appendChatMessage` function simplifies building batch
params at each call site.
- **Tests**: All test files updated to use the new API.

Builds on top of #23213.
2026-03-18 16:27:07 +00:00
Garrett Delfosse 1f5f6c9ccb chore: add new interactive release package (#22624) 2026-03-18 12:19:54 -04:00
Kyle Carberry a130a7dc97 fix: renumber duplicate migration 000444 to 000445 (#23229)
Two migrations were merged with the same number 000444:
- `000444_usage_events_ai_seats` (#22689, merged first at 09:30) — keeps
000444
- `000444_chat_message_runtime_ms` (#23219, merged second at 10:57) —
renumbered to **000445**

This collision causes `golang-migrate` to fail at runtime since it reads
both files as the same version.

**Fix:** Rename `000444_chat_message_runtime_ms.{up,down}.sql` →
`000445_chat_message_runtime_ms.{up,down}.sql`.

Closes https://github.com/coder/internal/issues/1411
2026-03-18 11:30:33 -04:00
Kyle Carberry d6fef96d72 feat: add PR insights analytics dashboard (#23215)
## What

Adds a new admin-only **PR Insights** page for the `/agents` analytics
view — a dashboard for engineering leaders to understand code shipped by
AI agents.

### Backend
- `GET /api/v2/chats/insights/pull-requests` — admin-only endpoint
- 4 SQL queries in `chatinsights.sql` aggregating `chat_diff_statuses`
joined with chat cost data (via root chat tree rollup)
- Runs 5 parallel DB queries: current summary, previous summary (for
trends), time series, per-model breakdown, recent PRs
- SDK types auto-generate to TypeScript

### Frontend (`PRInsightsView`)
- **Stat cards**: PRs created, Merged, Merge rate, Lines shipped,
Cost/merged PR — with trend badges comparing to previous period
- **Activity chart**: Stacked area chart (created/merged/closed) using
git color tokens (`git-added-bright`, `git-merged-bright`,
`git-deleted-bright`)
- **Model performance table**: Per-model PR counts, inline merge rate
bars, diff stats, cost breakdown
- **Recent PRs table**: Status badges, review state icons, author info,
external links
- **Time range filter**: 7d/14d/30d/90d button group
- **4 Storybook stories**: Default, HighPerformance, LowVolume, NoPRs

### Data source
All PR data comes from the existing `chat_diff_statuses` table
(populated by the `gitsync.Worker` background job that polls GitHub
every 120s). No new data collection required.

### Screenshot
View in Storybook: `pages/AgentsPage/PRInsightsView`
2026-03-18 15:29:29 +00:00
Kyle Carberry 4dd8531f37 feat: track step runtime_ms on chat messages (#23219)
## Summary

Adds a `runtime_ms` column to `chat_messages` that records the
wall-clock duration (in milliseconds) of each LLM step. This covers LLM
streaming, tool execution, and retries — the full time the agent is
"alive" for a step.

This is the foundation for billing by agent alive time. The column
follows the same pattern as `total_cost_micros`: stored per assistant
message, aggregatable with `SUM()` over time periods by user.

## Changes

- **Migration**: adds nullable `runtime_ms bigint` to `chat_messages`.
- **chatloop**: adds `Runtime time.Duration` field to `PersistedStep`,
measures `time.Since(stepStart)` at the beginning of each step (covering
stream + tool execution + retries).
- **chatd**: passes `step.Runtime.Milliseconds()` to the assistant
message `InsertChatMessage` call; all other message types (system, user,
tool) get `NULL`.
- **Tests**: adds `runtime > 0` assertion in chatloop tests.

## Billing query pattern

Once ready, aggregation mirrors the existing cost queries:

```sql
SELECT COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM chat_messages cm
JOIN chats c ON c.id = cm.chat_id
WHERE c.owner_id = @user_id
  AND cm.created_at >= @start_time
  AND cm.created_at < @end_time
  AND cm.runtime_ms IS NOT NULL;
```
2026-03-18 10:57:35 -04:00
Danielle Maywood 3bcb7de7c0 fix(site): normalize chat message spacing for visual consistency (#23222) 2026-03-18 14:49:50 +00:00
Kacper Sawicki 1e07ec49a6 feat: add merge_strategy support for coder_env resources (#23107)
## Description

Implements the server-side merge logic for the `merge_strategy`
attribute added to `coder_env` in [terraform-provider-coder
v2.15.0](https://github.com/coder/terraform-provider-coder/pull/489).
This allows template authors to control how duplicate environment
variable names are combined across multiple `coder_env` resources.

Relates to https://github.com/coder/coder/issues/21885

## Supported strategies

| Strategy | Behavior |
|----------|----------|
| `replace` (default) | Last value wins — backward compatible |
| `append` | Joins values with `:` separator (e.g. PATH additions) |
| `prepend` | Prepends value with `:` separator |
| `error` | Fails the build if the variable is already defined |

## Example

```hcl
resource "coder_env" "path_tools" {
  agent_id       = coder_agent.dev.id
  name           = "PATH"
  value          = "/home/coder/tools/bin"
  merge_strategy = "append"
}
```

## Changes

- **Proto**: Added `merge_strategy` field to `Env` message in
`provisioner.proto`
- **State reader**: Updated `agentEnvAttributes` struct and proto
construction in `resources.go`
- **Merge logic**: Added `mergeExtraEnvs()` function in
`provisionerdserver.go` with strategy-aware merging for both agent envs
and devcontainer subagent envs
- **Tests**: 15 unit tests covering all strategies, edge cases (empty
values, mixed strategies, multiple appends)
- **Dependency**: Bumped `terraform-provider-coder` v2.14.0 → v2.15.0
- **Fixtures**: Updated `duplicate-env-keys` test fixtures and golden
files

## Ordering

When multiple resources `append` or `prepend` to the same key, they are
processed in alphabetical order by Terraform resource address (per the
determinism fix in #22706).
2026-03-18 15:43:28 +01:00
Steven Masley 84de391f26 chore: add tallyman events for ai seat tracking (#22689)
AI seat tracking is inserted as a heartbeat into the usage table.
2026-03-18 09:30:22 -05:00
Kyle Carberry b83b93ea5c feat: add workspace awareness system message on chat creation (#23213)
When a chat is created via `chatd`, a system message is now inserted
informing the model whether the chat was created with or without a
workspace.

**With workspace:**
> This chat is attached to a workspace. You can use workspace tools like
execute, read_file, write_file, etc.

**Without workspace:**
> There is no workspace associated with this chat yet. Create one using
the create_workspace tool before using workspace tools like execute,
read_file, write_file, etc.

This is a model-only visibility system message (not shown to users) that
helps the model understand its available capabilities upfront —
particularly important for subagents spawned without a workspace, which
previously would attempt to use workspace tools and fail.

**Changes:**
- `coderd/chatd/chatd.go`: Added workspace awareness constants and
inserted the system message in `CreateChat` after the system prompt,
before the initial user message.
- `coderd/chatd/chatd_test.go`: Added
`TestCreateChatInsertsWorkspaceAwarenessMessage` with sub-tests for both
with-workspace and without-workspace cases.
2026-03-18 14:01:46 +00:00
Hugo Dutka 014e5b4f57 chore(site): remove experiment label from agents virtual desktop (#23217)
The "experiment" label is not needed since Coder Agents as a whole is an
experimental feature.
2026-03-18 13:55:30 +00:00
Ethan fc3508dc60 feat: configure acquire chat batch size (#23196)
## Summary
- add a hidden deployment config option for chat acquire batch size
(`CODER_CHAT_ACQUIRE_BATCH_SIZE` / `chat.acquireBatchSize`)
- thread the configured value into chatd startup while preserving the
existing default of `10`
- clamp the deployment value to the `int32` range before passing it into
chatd
- regenerate the API/docs/types/testdata artifacts for the new config
field

## Why
`chatd` currently acquires pending chats in batches of `10` via a
compile-time default. This change makes that batch size
operator-configurable from deployment config, so we can tune acquisition
behavior without another code change.
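The clamping step can be sketched as follows (hypothetical helper; the names and the fallback-on-nonpositive behavior are assumptions, not from the PR):

```typescript
const INT32_MAX = 2 ** 31 - 1;
const DEFAULT_BATCH_SIZE = 10;

// Fall back to the compiled-in default for unusable values; cap at int32 max
// before handing the value to chatd.
function clampBatchSize(configured: number): number {
	if (!Number.isFinite(configured) || configured < 1) return DEFAULT_BATCH_SIZE;
	return Math.min(Math.floor(configured), INT32_MAX);
}
```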
2026-03-19 00:54:32 +11:00
Mathias Fredriksson 8b4d35798a refactor: type both chat message parsers (#23176)
Both message parsers accepted untyped input and relied on scattered
asRecord/asString calls to extract fields at runtime. With the
discriminated ChatMessagePart union, both accept typed input directly
and narrow via switch (part.type).

parseMessageContent narrows from (content: unknown) to
(content: readonly ChatMessagePart[] | undefined), removing legacy
input shape handling the Go backend normalizes away.
applyMessagePartToStreamState narrows from Record<string, unknown>
to ChatMessagePart.

The SSE type guards had a & Record<string, unknown> intersection
that widened everything untyped downstream. Since the data comes
from our own API, the intersection was removed and all handlers in
ChatContext now use generated types directly.

Fixes tool_call_id and tool_name variant tags in codersdk/chats.go:
marked optional to match reality (Go guards against empty values,
omitempty omits them at the wire level).

Refs #23168, #23175
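A minimal sketch of the switch-based narrowing (simplified two-variant union, not the full generated type):

```typescript
type ChatMessagePart =
	| { type: "text"; text: string }
	| { type: "tool_call"; tool_name?: string };

function describePart(part: ChatMessagePart): string {
	switch (part.type) {
		case "text":
			return part.text; // narrowed: text is a required string here
		case "tool_call":
			return `tool: ${part.tool_name ?? "unknown"}`; // optional per variant tags
	}
}
```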
2026-03-18 15:50:57 +02:00
Danielle Maywood d69dcf18de fix: balance visual padding on agent chat sidebar items (#23211) 2026-03-18 13:44:37 +00:00
Cian Johnston fe82d0aeb9 fix: allow member users to generate support bundles (#23040)
Fixes AIGOV-141

The `coder support bundle` command previously required admin permissions
(`Read DeploymentConfig`) and would abort entirely for non-admin
`member` users with:

```
failed authorization check: cannot Read DeploymentValues
```

This change makes the command **degrade gracefully** instead of failing
outright.

<details>
<summary>
Changes
</summary>

### `support/support.go`
- **`Run()`**: The authorization check for `Read DeploymentValues` is
now a soft warning instead of a hard gate. Unauthenticated users (401)
still fail, but authenticated users with insufficient permissions
proceed with reduced data.
- **`DeploymentInfo()`**: `DeploymentConfig` and `DebugHealth` fetches
now handle 403/401 responses gracefully, matching the existing pattern
used by `DeploymentStats`, `Entitlements`, and `HealthSettings`.
- **`NetworkInfo()`**: Coordinator debug and tailnet debug fetches now
check response status codes for 403/401 before reading the body.

### `cli/support.go`
- **`summarizeBundle()`**: No longer returns early when `Config` or
`HealthReport` is nil. Instead prints warnings and continues summarizing
available data (e.g., netcheck).

### Tests
- `MissingPrivilege` → `MemberNoWorkspace`: Asserts member users can
generate a bundle successfully with degraded admin-only data.
- `NoPrivilege` → `MemberCanGenerateBundle`: Asserts the CLI produces a
valid zip bundle for member users.
- All existing tests continue to pass (`NoAuth`, `OK`, `OK_NoWorkspace`,
`DontPanic`, etc.).

## Behavior matrix

| User type | Before | After |
|---|---|---|
| **Admin** | Full bundle | Full bundle (no change) |
| **Member** | Hard error | Bundle with degraded admin-only data |
| **Unauthenticated** | Hard error | Hard error (no change) |

Related to PRODUCT-182
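The graceful-degradation pattern amounts to the following (sketch with an illustrative response shape, not the Go code):

```typescript
type ApiResponse<T> = { status: number; data?: T };

// Admin-only fetches return undefined on 401/403 instead of failing the
// whole bundle; other error statuses still surface.
function optionalAdminData<T>(res: ApiResponse<T>): T | undefined {
	if (res.status === 401 || res.status === 403) return undefined;
	if (res.status >= 400) throw new Error(`unexpected status ${res.status}`);
	return res.data;
}
```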
2026-03-18 13:43:10 +00:00
Ethan 81dba9da14 test: stabilize AgentsPageView analytics story date (#23216)
## Summary
The `AgentsPageView: Opens Analytics For Admins` story was flaky because
the analytics header renders a rolling 30-day date range in the
top-right corner. Since that range was based on the current date, the
story output changed every day.

This change makes the story deterministic by:
- adding an optional `analyticsNow` prop to `AgentsPageView`
- passing that value through to `AnalyticsPageContent` when the
analytics panel is shown
- setting a fixed local-noon timestamp in the story so the rendered
range label stays stable across timezones
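The idea can be sketched as follows (hypothetical label formatting; the real component differs):

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Defaulting `now` keeps production behavior; stories pass a fixed date so
// the rendered 30-day range is deterministic.
function rangeLabel(now: Date = new Date()): string {
	const start = new Date(now.getTime() - 30 * DAY_MS);
	return `${start.toISOString().slice(0, 10)} to ${now.toISOString().slice(0, 10)}`;
}
```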
2026-03-19 00:34:16 +11:00
Thomas Kosiewski 20ac96e68d feat(site): include chatId in editor deep links (#23214)
## Summary

- include the current agent chat ID in VS Code and Cursor deep links
opened from the agent detail page
- extend `getVSCodeHref` so `chatId` is added only when provided
- add focused tests for deep-link generation with and without `chatId`
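The conditional parameter can be sketched as follows (hypothetical helper; the real `getVSCodeHref` builds the full `vscode://` deep link):

```typescript
// Append chatId only when provided, leaving existing links unchanged.
function withChatId(href: string, chatId?: string): string {
	if (!chatId) return href;
	const sep = href.includes("?") ? "&" : "?";
	return `${href}${sep}chatId=${encodeURIComponent(chatId)}`;
}
```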

## Testing

- `pnpm -C site run format -- src/modules/apps/apps.ts
src/modules/apps/apps.test.ts src/pages/AgentsPage/AgentDetail.tsx`
- `pnpm -C site run check -- src/modules/apps/apps.ts
src/modules/apps/apps.test.ts src/pages/AgentsPage/AgentDetail.tsx`
- `pnpm -C site exec vitest run src/modules/apps/apps.test.ts`
- `pnpm -C site run lint:types`

---
_Generated with [`mux`](https://github.com/coder/mux) • Model:
`openai:gpt-5.4` • Thinking: `high`_
2026-03-18 14:25:38 +01:00
Atif Ali 677f90b78a chore: label community PRs on open (#23157) 2026-03-18 18:15:37 +05:00
35C4n0r d697213373 feat(docs/ai-coder/ai-bridge): update aibridge docs for codex to use model_provider (#23199) 2026-03-18 18:09:55 +05:00
Michael Suchacz 62144d230f feat(site): show PR link in TopBar header (#23178)
When a PR is detected for a chat, display a compact PR badge in the
AgentDetail TopBar. On mobile it is always visible; on desktop it is
hidden when the sidebar panel is open (which already surfaces PR info)
and shown when the panel is closed.

The badge shows a state-colored icon (open, draft, merged, closed) and
the PR title or number, linking to the PR URL. Only URLs confirmed as
real PRs (via explicit `pull_request_state` or a `/pull/<number>`
pathname) trigger the badge.

## Changes

- **`TopBar.tsx`** — Added `diffStatusData` prop, `PrStateIcon` helper,
and a PR link badge between the title and actions area. Hidden on
desktop when the sidebar panel is open.
- **`AgentDetailView.tsx`** — Pass `diffStatusData` through to
`AgentDetailTopBar`.
- **`TopBar.stories.tsx`** — Added stories for open, draft, merged, and
closed PR states.
2026-03-18 13:40:33 +01:00
Hugo Dutka 0d0c6c956d fix(dogfood): chrome desktop icons with compatibility flags (#23209)
Our dogfood image already included Chrome. Since we run dogfood
workspaces in Docker, Chrome requires some compatibility flags to run
properly; launched without them, some webpages crash and fail to load.

The newest release of https://github.com/coder/portabledesktop added an
icon dock. This PR edits the Chrome `.desktop` files so that opening
Chrome from the dock runs it with the correct flags.


https://github.com/user-attachments/assets/7bf880e1-22a4-4faa-8f7f-394863c6b127
2026-03-18 13:36:16 +01:00
Mathias Fredriksson 488ceb6e58 refactor(site/src/pages/AgentsPage): clean up RenderBlock types and dead fields (#23175)
RenderBlock's file-reference variant diverged from the API (camelCase
vs snake_case), and both file variants were defined inline duplicating
the generated ChatFilePart and ChatFileReferencePart types. The
thinking and file-reference variants carried dead fields (title, text)
that were never populated by the backend.

Replace inline definitions with references to generated types, remove
dead fields, and simplify ReasoningDisclosure (disclosure button path
was dead without title).

Refs #23168
2026-03-18 12:25:05 +00:00
Matt Vollmer 481c132135 docs: clarify agent permission inheritance and default security posture (#23194)
Addresses five documentation gaps identified from an internal agents
briefing Q&A, specifically around what permissions an agent inherits
from the user:

1. **No privilege escalation** — Added explicit statement that the agent
has the exact same permissions as the user. No escalation, no shared
service account.
2. **Cross-user workspace isolation** — Added statement that agents
cannot access workspaces belonging to other users.
3. **Default-state warning** — Added WARNING callouts that agent
workspaces inherit the user's full network access unless templates
explicitly restrict it.
4. **Tool boundary statement** — Added explicit statement that the agent
cannot act outside its defined tool set and has no direct access to the
Coder API.
5. **Template visibility scoped to user RBAC** — Clarified that template
selection respects the user's role and permissions.

Changes across 3 files:
- `docs/ai-coder/agents/index.md`
- `docs/ai-coder/agents/architecture.md`
- `docs/ai-coder/agents/platform-controls/template-optimization.md`

---
PR generated with Coder Agents
2026-03-18 12:15:50 +00:00
Kyle Carberry d42008e93d fix: persist partial assistant response when chat is interrupted mid-stream (#23193)
## Problem

When a user cancels a streaming chat response mid-stream, the partial
content disappears entirely — both from the UI and the database. The
streamed text vanishes as if the response never happened.

## Root Causes

Three issues combine to prevent partial message persistence on
interrupt:

### 1. StreamPartTypeError only matched `context.Canceled`
(`chatloop.go`)

The interrupt detection in `processStepStream` checked:
```go
errors.Is(part.Error, context.Canceled) && errors.Is(context.Cause(ctx), ErrInterrupted)
```
But some providers propagate `ErrInterrupted` directly as the stream
error rather than wrapping it in `context.Canceled`. This caused the
condition to fail, so `flushActiveState` was never called and partial
text accumulated in `activeTextContent` was lost.

### 2. No post-loop interrupt check (`chatloop.go`)

If the stream iterator stops yielding parts without producing a
`StreamPartTypeError` (e.g., a provider that silently closes the
response body on cancel), there was no check after the `for part :=
range stream` loop to detect the interrupt and flush active state.

### 3. Worker ownership check blocked interrupted persists (`chatd.go`)

`InterruptChat` → `setChatWaiting` clears `worker_id` in the DB
**before** the chatloop detects the interrupt. When
`persistInterruptedStep` (using `context.WithoutCancel`) tried to write
the partial message, the ownership check:
```go
if !lockedChat.WorkerID.Valid || lockedChat.WorkerID.UUID != p.workerID {
    return chatloop.ErrInterrupted  // always blocks!
}
```
unconditionally rejected the write. The error was silently logged as a
warning.

## Fix

- **Broaden the `StreamPartTypeError` interrupt detection** to match
both `context.Canceled` and `ErrInterrupted` as the stream error.
- **Add a post-loop interrupt check** in `processStepStream` that
flushes active state when the context was canceled with
`ErrInterrupted`.
- **Allow `persistStep` to write when the chat is in `waiting` status**
(interrupt) even if `worker_id` was cleared. The `pending` status (from
`EditMessage`, where history is truncated) still correctly blocks stale
writes.

## Testing

Added `TestInterruptChatPersistsPartialResponse` — an end-to-end
integration test that:
1. Streams partial text chunks from a mock LLM
2. Waits for the chatloop to publish `message_part` events (confirming
chunks were processed)
3. Interrupts the chat mid-stream
4. Verifies the partial assistant message is persisted in the database
with the expected text content
2026-03-18 11:48:28 +00:00
Danielle Maywood aa3cee6410 fix: polish agents UI (sidebar width, combobox, limits padding, back button) (#23204) 2026-03-18 11:46:56 +00:00
Danielle Maywood 4f566f92b5 fix(site): use ExternalImage for preset icons in task prompt (#23206) 2026-03-18 11:16:30 +00:00
Atif Ali bd5b62c976 feat: expose MCP tool annotations for tool grouping (#23195)
## Summary
- add shared MCP annotation metadata to toolsdk tools
- emit MCP tool annotations from both coderd and CLI MCP servers
- cover annotation serialization in toolsdk, coderd MCP e2e, and CLI MCP
tests

## Why
- Coder already exposed MCP tools, but it did not populate MCP tool
annotation hints (`readOnlyHint`, `destructiveHint`, `idempotentHint`,
`openWorldHint`).
- Hosts such as Claude Desktop use those hints to classify and group
tools, so without them Coder tools can get lumped together.
- This change adds a shared annotation source in `toolsdk` and has both
MCP servers emit those hints through `mcp.Tool.Annotations`, avoiding
drift between local and remote MCP implementations.

## Testing
- Tested locally on Claude Desktop and the tools are categorized
correctly.

<table>
<tr>
 <td> Before
 <td> After
<tr>
<td> <img width="613" height="183" alt="image"
src="https://github.com/user-attachments/assets/29d2e3fb-53bc-4ea7-bdb3-f10df4ef996b"
/>
<td> <img width="600" height="457" alt="image"
src="https://github.com/user-attachments/assets/cc384036-c9a7-4db9-9400-43ad51920ff5"
/>
</table>

Note: Done using Coder Agents, reviewed and tested by human locally
2026-03-18 10:21:45 +00:00
Mathias Fredriksson 66f809388e refactor: make ChatMessagePart a discriminated union in TypeScript (#23168)
The flat ChatMessagePart interface had 20+ optional fields, preventing
TypeScript from narrowing types on switch(part.type). Each consumer
needed runtime validation, type assertions, or defensive ?. chains.

Add `variants` struct tags to ChatMessagePart fields declaring which
union variants include each field. A codegen mutation in apitypings
reads these tags via reflect and generates per-variant sub-interfaces
(ChatTextPart, ChatReasoningPart, etc.) plus a union type alias.
A test validates every field has a variants tag or is explicitly
excluded, and every part type is covered.

Remove dead frontend code: normalizeBlockType, alias case branches
("thinking", "toolcall", "toolresult"), legacy field fallbacks
(line_number, typedBlock.name/id/input/output), and result_delta
handling. Add test coverage for args_delta streaming, provider_executed
skip logic, and source part parsing.
2026-03-18 09:27:51 +00:00
Mathias Fredriksson 563c00fb2c fix(dogfood/coder): suppress du stderr in docker usage metadata (#23200)
Transient 'No such file or directory' errors from disappearing
overlay2 layers during container operations pollute the displayed
metadata value. Redirect stderr to /dev/null.
2026-03-18 10:54:13 +02:00
Hugo Dutka 817fb4e67a feat: virtual desktop settings toggle frontend (#23173)
Add a toggle in agents settings to enable/disable virtual desktop. The
Desktop tab (next to the Git tab) will only be visible if the feature is
enabled.

<img width="879" height="648" alt="Screenshot 2026-03-17 at 18 01 26"
src="https://github.com/user-attachments/assets/09fc3850-c88d-4c5c-b6e4-760590e53b95"
/>
2026-03-18 09:50:14 +01:00
Hugo Dutka 2cf47ec384 feat: virtual desktop settings toggle backend (#23171)
Adds a new `site_config` entry that controls whether the virtual desktop
feature for Coder Agents is enabled. It can be set via a new
`/api/experimental/chats/config/desktop-enabled` endpoint, which will be
used by the frontend.
2026-03-18 09:35:13 +01:00
Ethan 11481d7bed perf(coderd/chatd): reduce lock contention in instruction cache and persistStep (#23144)
## Summary

Two targeted performance improvements to the chatd server, identified
through benchmarking.

### 1. RWMutex for instruction cache

The instruction cache is read on every chat turn to fetch the home
instruction file for a workspace agent. Writes only occur on cache
misses (once per agent per 5-minute TTL window), making the access
pattern ~90%+ reads.

Switching from `sync.Mutex` to `sync.RWMutex` and using
`RLock`/`RUnlock` on the read path allows concurrent readers instead of
serializing them.

**Benchmark (200 concurrent chats):**
| | ns/op |
|---|---|
| Mutex | 108 |
| RWMutex | 32 |
| **Speedup** | **3.4x** |

### 2. Hoist JSON marshaling out of persistStep transaction

`MarshalParts`, `PartFromContent`, `CalculateTotalCostMicros`, and the
`usageForCost` struct population are pure CPU work that ran inside the
`FOR UPDATE` transaction in `persistStep`. They have zero dependency on
the database transaction.

Moving all marshal and cost-calculation calls above `p.db.InTx()` means
the row lock is held only for `GetChatByIDForUpdate` +
`InsertChatMessage` calls.

**Benchmark (16 goroutines contending on same lock):**
| Tool calls | Inside lock | Outside lock | Speedup |
|---|---|---|---|
| 1 | 13,977 ns/op | 1,055 ns/op | 13x |
| 5 | 38,203 ns/op | 3,769 ns/op | 10x |
| 10 | 67,353 ns/op | 7,284 ns/op | 9x |
| 20 | 145,864 ns/op | 14,045 ns/op | 10x |

No behavioral changes in either commit.
2026-03-18 16:12:14 +11:00
Ben Potter f3bf5baba0 chore: update coder/tailscale fork to 33e050fd4bd9 (#23191)
Updates the tailscale replace directive to pick up two new commits from
[coder/tailscale](https://github.com/coder/tailscale):

- [feat(magicsock): add DERPTLSConfig for custom TLS configuration
(#105)](https://github.com/coder/tailscale/commit/8ffb3e998ba9c11d770eacac9a2f3932ce36590d)
- [chore: improve logging for derp server mesh clients
(#107)](https://github.com/coder/tailscale/commit/33e050fd4bd97d9e805afb4df7fac7a1c6e4abf8)

Relates to: PRODUCT-204
2026-03-18 15:14:02 +11:00
Matt Vollmer 9df7fda5f6 docs: rename "Template Routing" to "Template Optimization" (#23192)
Renames the page title from "Template Routing" to "Template
Optimization" in both the markdown H1 header and the docs manifest
entry.

---

PR generated with Coder Agents
2026-03-17 20:37:39 -04:00
Matt Vollmer 665db7bdeb docs: add agent workspaces best practices guide (#23142)
Add a new docs page under /docs/ai-coder/agents/ covering best practices
for creating templates that are discoverable and useful to Coder Agents.

Covers template descriptions, dedicated agent templates, network
boundaries, credential scoping, parameter design, pre-installed tooling,
and prebuilt workspaces for reducing provisioning latency.

<!--

If you have used AI to produce some or all of this PR, please ensure you
have read our [AI Contribution
guidelines](https://coder.com/docs/about/contributing/AI_CONTRIBUTING)
before submitting.

-->
2026-03-17 19:28:46 -04:00
Asher 903cfb183f feat: add --service-account to cli user creation (#23186) 2026-03-17 14:07:20 -08:00
Kayla はな 49e5547c22 feat: add support for creating service accounts (#23140) 2026-03-17 15:36:20 -06:00
Michael Suchacz f9c265ca6e feat: expose PromptCacheKey in OpenAI model config form (#23185)
## Summary

Remove the `hidden` tag from the `PromptCacheKey` field on
`ChatModelOpenAIProviderOptions` so the auto-generated JSON schema
no longer marks it as hidden. This allows the admin model
configuration UI to render a "Prompt Cache Key" text input for
OpenAI models alongside other visible options like Reasoning Effort,
Service Tier, and Web Search.

## Changes

- **`codersdk/chats.go`**: Remove `hidden:"true"` from `PromptCacheKey`
struct tag.
- **`site/src/api/chatModelOptionsGenerated.json`**: Regenerated via
`make gen` — `hidden: true` removed from the `prompt_cache_key` entry.
- **`modelConfigFormLogic.test.ts`**: Extend existing "all fields set"
tests to cover extract and build roundtrip for `promptCacheKey`.

## How it works

The `hidden` Go struct tag propagates through the code generation
pipeline:

1. Go struct tag → `scripts/modeloptionsgen` →
`chatModelOptionsGenerated.json`
2. The frontend `getVisibleProviderFields()` filters out fields with
`hidden: true`
3. Removing the tag makes the field visible in the schema-driven form
renderer

No new UI components are needed — the existing `ModelConfigFields`
component
automatically renders the field as a text input based on the schema
(`type: "string"`, `input_type: "input"`).

The field appears as **"Prompt Cache Key"** with description
"Key for enabling cross-request prompt caching" in the OpenAI provider
section of the admin model configuration form.
2026-03-17 21:58:36 +01:00
Danielle Maywood a65a31a5a3 fix(site): symmetric horizontal padding on agents sidebar chat rows (#23187) 2026-03-17 20:50:11 +00:00
Danielle Maywood 22a4a33886 fix(site): restore gap between agent chat messages (#23188) 2026-03-17 20:49:14 +00:00
Charlie Voiselle d3c9469e13 fix: open coder_app links in new tab when open_in is tab (#23000)
Fixes #18573

## Changes

When a `coder_app` resource sets `open_in = "tab"`, clicking the app
link now opens in a new browser tab instead of navigating in the same
tab.

`target="_blank"` and `rel="noreferrer"` are set inline on the
`<a>` elements in `AppLink.tsx`, gated on `app.open_in === "tab"`. This
follows the codebase convention of co-locating `target` and `rel` at the
render site.

`noreferrer` suppresses the Referer header to avoid leaking workspace IDs
to destination servers and implies `noopener`.
`noopener` prevents tabnabbing — without it, the opened page can
redirect the Coder dashboard tab via `window.opener`. This is especially
relevant for same-origin path-based apps, which would otherwise have
full DOM access to the dashboard. 

> **Future enhancement**: template admins could opt into sending the
referrer via a `coder_app` setting, enabling feedback pages built around
workspace context.

## Tests

A vitest case is added in `AppLink.test.tsx` (rather than a Storybook
story, since the assertions are purely behavioral with no visual
component):

- **`sets target=_blank and rel=noopener noreferrer when open_in is
tab`** — renders the app link with `open_in: "tab"` and asserts
`target="_blank"` and `rel="noreferrer"` are present on the
anchor.

## Slim-window behavior

The `slim-window` test case and the `openAppInNewWindow()` comment in
`apps.ts` have been split out into a follow-up PR for separate review,
since the `window.open()` / `noopener` tradeoffs there deserve dedicated
discussion.

---------

Co-authored-by: Kayla はな <kayla@tree.camp>
2026-03-17 15:32:45 -04:00
George K 91ec0f1484 feat: add service_accounts workspace sharing mode (#23093)
Introduce a three-way workspace sharing setting (none, everyone,
service_accounts) replacing the boolean workspace_sharing_disabled.
In service_accounts mode, only service account-owned workspaces can be
shared while regular members' share permissions are removed. Adds a
new organization-service-account system role with per-org permissions
reconciled alongside the existing organization-member system role.

Related to:
https://linear.app/codercom/issue/PLAT-28/feat-service-accounts-sharing-mode-and-rbac-role

---------

Co-authored-by: Steven Masley <Emyrk@users.noreply.github.com>
Co-authored-by: Kayla はな <mckayla@hey.com>
2026-03-17 12:16:43 -07:00
Danielle Maywood 6b76e30321 fix(site): align workspace combobox styling with model selector (#23181) 2026-03-17 18:46:35 +00:00
Kyle Carberry 6fc9f195f1 fix: resolve chat message pagination scroll issues (#23169)
## Summary

Fixes four interrelated issues that caused scroll position jumps and
phantom scroll growth when paginating older chat messages.

## Changes

### 1. Removed client-side message windowing (`useMessageWindow`)

There were two competing sentinel systems: server-side pagination and
client-side windowing. The client windowing sentinel was nested deep
inside the timeline with no explicit IntersectionObserver `root`,
causing scroll position jumps when messages were prepended. Blink
(coder/blink) has no client-side windowing. Removed it entirely; server
pagination + `contentVisibility` handled performance.

### 2. Removed `contentVisibility: "auto"` from message sections

Each section had `contentVisibility: "auto"` with `containIntrinsicSize:
"1px 600px"`, causing the scroll region to grow/shrink as the browser
swapped 600px placeholders for actual heights while scrolling. This
created phantom scroll growth with no fetch involved.

### 3. Gated WebSocket on initial REST data

The WebSocket `Subscribe` snapshot calls `GetChatMessagesByChatID` (no
LIMIT) which returns every message when `afterMessageID` is 0. The
WebSocket effect opened before the REST page resolved, so
`lastMessageIdRef` was undefined, causing the server to replay the
entire history and defeating pagination. Added `initialDataLoaded` guard
so the socket waits for the first REST page.

### 4. Manual scroll position restoration

Replaced unreliable CSS scroll anchoring in `flex-col-reverse` with a
`ScrollAnchoredContainer` that snapshots `scrollHeight` before fetch and
restores `scrollTop` via `useLayoutEffect` after render. Disabled
browser scroll anchoring (`overflow-anchor: none`) to prevent conflicts.
2026-03-17 14:26:53 -04:00
Mathias Fredriksson c2243addce fix(scripts/develop): allow empty access-url for devtunnel (#23166) 2026-03-17 18:06:55 +00:00
Danielle Maywood cd163d404b fix(site): strip SVN-style Index headers from diffs before parsing (#23179) 2026-03-17 17:57:00 +00:00
Danielle Maywood 41d12b8aa3 feat(site): improve edit-message UX with dedicated button and confirmation (#23172) 2026-03-17 17:39:28 +00:00
Kyle Carberry 497e1e6589 feat: render file references inline in user messages (#23174)
File references in user messages now render as inline chips (matching
the chat input style) instead of in a separate bordered section at the
bottom of the message bubble.

This reimplements #23131 which was accidentally reverted during the
merge of #23072 (the spend-limit UI PR resolved a merge conflict by
dropping the inline chip logic).

## Changes
- **FileReferenceNode.tsx**: Export `FileReferenceChip` so it can be
imported for read-only use (no remove button when `onRemove` is
omitted).
- **ConversationTimeline.tsx**: Iterate through `parsed.blocks` in
document order, rendering `response` blocks as text and `file-reference`
blocks as inline `FileReferenceChip` components. Removes the old
separated file-reference section with `border-t` divider.
- **ConversationTimeline.stories.tsx**: Added
`UserMessageWithInlineFileRef` and
`UserMessageWithMultipleInlineFileRefs` stories.
2026-03-17 16:52:00 +00:00
Kyle Carberry b779c9ee33 fix: use SQL-level auth filtering for chat listing (#23159)
## Problem

The chat listing endpoint (`GetChatsByOwnerID`) was using
`fetchWithPostFilter`, which fetches N rows from the database and then
filters them in Go memory using RBAC checks. This causes a pagination
bug: if the user requests `limit=25` but some rows fail the auth check,
fewer than 25 rows are returned even though more authorized rows exist
in the database. The client may incorrectly assume it has reached the
end of the list.

## Solution

Switch to the same pattern used by `GetWorkspaces`, `GetTemplates`, and
`GetUsers`: `prepareSQLFilter` + `GetAuthorized*` variant. The RBAC
filter is compiled to a SQL WHERE clause and injected into the query
before `ORDER BY`/`LIMIT`, so the database returns exactly the requested
number of authorized rows.

Additionally, `GetChatsByOwnerID` is renamed to `GetChats` with
`OwnerID` as an optional (nullable) filter parameter, matching the
`GetWorkspaces` naming convention.

## Changes

| File | Change |
|------|--------|
| `queries/chats.sql` | Renamed to `GetChats`, `owner_id` now optional
via CASE/NULL, added `-- @authorize_filter` |
| `queries.sql.go` | Renamed constant, params struct (`GetChatsParams`),
and method |
| `querier.go` | Interface method renamed |
| `modelqueries.go` | Added `chatQuerier` interface +
`GetAuthorizedChats` impl |
| `dbauthz/dbauthz.go` | `GetChats` now uses `prepareSQLFilter` instead
of `fetchWithPostFilter` |
| `dbauthz/dbauthz_test.go` | Updated tests for SQL filter pattern |
| `dbmock/dbmock.go` | Renamed + added mock for `GetAuthorizedChats` |
| `dbmetrics/querymetrics.go` | Renamed + added metrics wrapper |
| `rbac/regosql/configs.go` | Added `ChatConverter` (maps `org_owner` to
empty string literal since `chats` has no `organization_id` column) |
| `rbac/authz.go` | Added `ConfigChats()` |
| `chats.go` | Handler uses renamed method with `uuid.NullUUID` |
| `searchquery/search.go` | Updated return type |
| `gitsync/worker.go` | Updated interface and call site |
| Various test files | Updated for renamed types |
2026-03-17 12:46:24 -04:00
Mathias Fredriksson 144b32a4b6 fix(scripts/develop): skip build on Windows via build tag (#23118)
Previously main.go used syscall.SysProcAttr{Setpgid: true} and
syscall.Kill, both undefined on Windows. This broke GOOS=windows
cross-compilation.

Add a //go:build !windows constraint to the package since it is
a dev-only tool that requires Unix utilities (bash, make, etc.)
and is not intended to run on Windows.

Refs #23054
Fixes coder/internal#1407
2026-03-18 02:02:06 +11:00
Kyle Carberry a40716b6fe fix(site): stop spamming chats list endpoint on diff_status_change events (#23167)
## Problem

The WebSocket handler for `diff_status_change` events in
`AgentsPage.tsx` was triggering a burst of redundant HTTP requests on
every event:

1. **`invalidateChatListQueries(queryClient)`** — Full refetch of the
chats list endpoint. Unnecessary because `updateInfiniteChatsCache`
already writes `diff_status` into the sidebar cache optimistically on
every event.

2. **`invalidateQueries({ queryKey: chatKey(id) })`** — Refetch of the
individual chat. Also unnecessary — the SSE event carries `diff_status`
in its payload and the optimistic updater writes it into the `chatKey`
cache directly. Worse, this call was missing `exact: true`, so TanStack
Query's prefix matching cascaded the invalidation to `chatMessagesKey`,
`chatDiffContentsKey`, and every other query under `["chats", id]`.

Since diff status changes fire frequently during active agent work, this
spammed the chats list endpoint and caused redundant refetches of
messages and diff contents on every single event.

## Fix

Strip the handler down to the one invalidation that's actually needed —
`chatDiffContentsKey` (the file-level diff contents aren't in the SSE
payload):

```typescript
if (chatEvent.kind === "diff_status_change") {
    void queryClient.invalidateQueries({
        queryKey: chatDiffContentsKey(updatedChat.id),
        exact: true,
    });
}
```

## Why tests didn't catch this

The existing tests in `chats.test.ts` cover query utilities in isolation
(e.g. `invalidateChatListQueries` scoping, mutation invalidation). The
WebSocket event handler lives in the `AgentsPage` component — there was
no test covering what the `diff_status_change` code path actually
invalidates.

Added regression tests verifying that `exact: true` prevents
prefix-match cascade vs the old behavior.
2026-03-17 14:51:01 +00:00
Danielle Maywood 635c5d52a8 feat(site): move Settings and Analytics from dialogs to sidebar sub-navigation (#23126) 2026-03-17 14:48:09 +00:00
Kyle Carberry 075dfecd12 refactor: consolidate experimental chats API types (#23143)
## Summary

Consolidates three areas of type duplication in the experimental chats
API:

### 1. Merge archive/unarchive into `PATCH /{chat}`
- **Before:** `POST /{chat}/archive` + `POST /{chat}/unarchive` (two
endpoints, two handlers with mirrored logic)
- **After:** `PATCH /{chat}` accepting `{ "archived": true/false }` via
`UpdateChatRequest`
- Removes one endpoint and ~30 lines of duplicated handler code

### 2. Collapse identical request/response prompt types
- `ChatSystemPromptResponse` + `UpdateChatSystemPromptRequest` →
`ChatSystemPrompt`
- `UserChatCustomPromptResponse` + `UpdateUserChatCustomPromptRequest` →
`UserChatCustomPrompt`
- These pairs were field-for-field identical (single string field)

### 3. Merge duplicate reasoning options types
- `ChatModelOpenRouterReasoningOptions` +
`ChatModelVercelReasoningOptions` → `ChatModelReasoningOptions`
- Same 4 fields, same types — only field ordering and enum value sets
differed
- Unified type uses the superset of enum values

### Files changed
- `codersdk/chats.go` — SDK types and client methods
- `coderd/chats.go` — Handler consolidation
- `coderd/coderd.go` — Route change
- `coderd/chats_test.go` — Test updates
- `site/src/api/api.ts` — Frontend API client
- `site/src/api/queries/chats.ts` — Query mutations
- `site/src/api/queries/chats.test.ts` — Test mocks
- `site/src/pages/AgentsPage/AgentsPage.tsx` — Call site
- Generated files (`typesGenerated.ts`,
`chatModelOptionsGenerated.json`)

### Testing
- All Go tests pass (`TestArchiveChat`, `TestUnarchiveChat`,
`TestChatSystemPrompt`)
- All frontend tests pass (31/31 in `chats.test.ts`)
2026-03-17 14:31:11 +00:00
Hugo Dutka fdb1205bdf chore(agent): remove portabledesktop download logic (#23128)
The new way to install portabledesktop in a workspace will be via a
module: https://github.com/coder/registry/pull/805
2026-03-17 15:24:11 +01:00
Danielle Maywood 33a47fced3 fix(site): use theme-aware git tokens for PR status badges (#23148) 2026-03-17 14:12:36 +00:00
Kyle Carberry ca5158f94a fix: unify sidebar Git/Desktop tab styles with GitPanel tabs (#23164)
The active Git tab looked different than the Desktop tab, and they
didn't match the actual tabs in the Git section.
2026-03-17 10:08:18 -04:00
Hugo Dutka b7e0f42591 feat(dogfood): add the portabledesktop module (#23165) 2026-03-17 13:56:27 +00:00
Ethan 41bd7acf66 perf(chatd): remove redundant chat rereads (#23161)
## Summary
This PR removes two redundant chat rereads in `chatd`.

### Archive / unarchive
- `archiveChat` and `unarchiveChat` already come through
`httpmw.ChatParam`, so the handlers already have the `database.Chat`
row.
- Pass that row into `chatd.ArchiveChat` / `chatd.UnarchiveChat` instead
of rereading by ID before publishing the sidebar events.

### End-of-turn cleanup
- `processChat` no longer calls `GetChatByID` after the cleanup
transaction just to refresh the chat snapshot.
- Title generation already persists the generated title and emits its
own `title_change` event.
- To preserve best-effort title freshness for the cleanup path, the
async title-generation goroutine stores the generated title in per-turn
shared state and cleanup overlays it if available before publishing the
`status_change` event and dispatching push notifications.

## Why
- removes one DB read from archive / unarchive requests
- removes one DB read from completed turns, which is the larger hot-path
win
- keeps the existing pubsub/event contract intact instead of broadening
this into a larger event-model redesign

## Notes
- `title_change` remains the authoritative title update for clients
- cleanup does not wait for title generation; it uses the generated
title only when it is already available
2026-03-18 00:52:06 +11:00
Dean Sheather 87d4a29371 fix(site): add left offset to agents sidebar user dropdown (#23162) 2026-03-17 13:34:10 +00:00
Mathias Fredriksson a797a494ef feat: add starter template option and Coder Desktop URLs to scripts/develop (#23149)
- Add `--starter-template` option and properly create starter template
  with name and icon
- Add Coder Desktop URLs to listening banner
- Makefile tweak to avoid rebuilding `scripts/develop` every time Go
  code changes
2026-03-17 15:34:03 +02:00
Ethan a33605df58 perf(coderd/chatd): reuse workspace context within a turn (#23145)
## Summary
- reuse workspace agent context within a single `runChat()` turn
- remove duplicate latest-build agent lookups between
`resolveInstructions()` and `getWorkspaceConn()`
- avoid the extra `GetWorkspaceAgentByID` fetch when the selected
`WorkspaceAgent` already has the needed metadata
- add focused internal tests for reuse and refresh-on-dial-failure

## Why
This came out of a 5000-chat / 10-turn scaletest on bravo against a
single workspace.

The run completed successfully, but coderd stayed DB-pool bound, and one
workspace-backed hot path stood out:
- `GetWorkspaceAgentsInLatestBuildByWorkspaceID ≈ 46.7k`
- `GetWorkspaceByID ≈ 48.0k`
- `GetWorkspaceAgentByID ≈ 2.2k`

Within one `runChat()` turn, chatd was rediscovering the same workspace
agent multiple times just to resolve instructions and open the workspace
connection.

## What this changes
This PR introduces a **turn-local** workspace context helper so a single
acquired turn can:
- resolve the selected workspace agent once
- reuse that agent for instruction resolution
- reuse the same `AgentConn` for workspace tools and reload/compaction

This stays turn-local only, so a later turn on another replica still
rebuilds fresh context from the DB.

## Expected impact
This is an incremental improvement, not a full fix.

It should reduce duplicated workspace-agent lookups and shave some DB
pressure from a hot path for workspace-backed chats, while preserving
multi-replica correctness.

## Testing
- `go test ./coderd/chatd/...`
- `golangci-lint run ./coderd/chatd/...`
2026-03-18 00:33:44 +11:00
Dean Sheather 3c430a67fa fix(site): balance sidebar header spacing in agents page (#23163) 2026-03-18 00:33:14 +11:00
Dean Sheather abee77ac2f fix(site): move analytics date range above cards (#23158) 2026-03-17 12:58:43 +00:00
Kacper Sawicki 7946dc6645 fix(provisioner): skip duplicate-env-keys in generate.sh (#23155)
## Problem

When `generate.sh` is run (e.g. to regenerate fixtures after adding a
new field like `subagent_id`), the `duplicate-env-keys` fixture gets its
placeholder UUIDs scrambled.

The `minimize_diff()` function uses a bash associative array keyed by
JSON field name (`deleted["id"]`). The `duplicate-env-keys` fixture has
multiple `coder_env` resources, each with the same key names (`id`,
`agent_id`). Since an associative array can only hold one value per key,
UUIDs get cross-contaminated or left as random terraform-generated
values.

Discovered while working on #23122.

## Fix

Add `duplicate-env-keys` to the `toskip` array in `generate.sh`,
alongside `kubernetes-metadata`. This fixture uses hand-crafted
placeholder UUIDs and should not be regenerated.

Relates to #21885.
2026-03-17 13:41:47 +01:00
Kyle Carberry eb828a6a86 fix: skip input refocus after send on mobile viewports (#23141) 2026-03-17 12:40:26 +00:00
Mathias Fredriksson 4e2d7ffaa7 refactor(site/src/pages/AgentsPage): use ChatMessagePart for editingFileBlocks (#23151)
Replace the ad-hoc camelCase file block shape ({ mediaType, fileId, data })
with snake_case fields matching ChatMessagePart from the API types.

The RenderBlock file variant now uses media_type/file_id instead of
mediaType/fileId. The parsers in messageParsing.ts and streamState.ts
pass validated ChatMessagePart objects through directly instead of
destructuring and reassembling with renamed fields. This eliminates
the needless API → camelCase → snake_case roundtrip that the edit
flow previously required.

Refs #22735
2026-03-17 04:10:08 -08:00
Mathias Fredriksson 524bca4c87 fix(site/src/pages/AgentsPage): fix chat image paste bugs and refactor queued message display (#22735)
handleSubmit (triggered via Enter key) didn't check isUploading, so
messages could be sent while an image upload was still in progress.
The send button was correctly disabled via canSend, but the keyboard
shortcut bypassed that guard.

QueuedMessagesList used untyped extraction helpers that fell through to
JSON.stringify for attachment-only messages. Replace them with a single
getQueuedMessageInfo function using typed ChatMessagePart access.
Show an attachment badge (ImageIcon + count) for file parts, and use a
consistent "[Queued message]" placeholder for all no-text situations.

Editing a queued message with file attachments silently dropped all
attachments because handleStartQueueEdit only accepted text. Thread
file blocks from QueuedMessagesList through the edit callback into
handleStartQueueEdit, which now calls setEditingFileBlocks. The
existing useEffect in AgentDetailInput picks these up and populates
the attachment UI. Also clear editingFileBlocks in handleCancelQueueEdit
and handleSendFromInput.
2026-03-17 14:00:31 +02:00
Danny Kopping 365de3e367 feat: record model thoughts (#22676)
Depends on https://github.com/coder/aibridge/pull/203
Closes https://github.com/coder/internal/issues/1337

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2026-03-17 11:41:10 +00:00
Michael Suchacz 5d0eb772da fix(cored): fix flaky TestInterruptAutoPromotionIgnoresLaterUsageLimitIncrease (#23147) 2026-03-17 19:08:22 +11:00
Ethan 04fca84872 perf(coderd): reduce duplicated reads in push and webpush paths (#23115)
## Background

A 5000-chat scaletest (~50k turns, ~2m45s wall time) completed
successfully, but the main bottleneck was **DB pool starvation from
repeated reads**, not individually expensive SQL. The push/webpush path
showed a few especially noisy reads:

- `GetLastChatMessageByRole` for push body generation
- `GetEnabledChatProviders` + `GetChatModelConfigByID` for push summary
  model resolution
- `GetWebpushSubscriptionsByUserID` for every webpush dispatch

This PR keeps the optimizations that remove those duplicate reads while
leaving stream behavior unchanged.

## What changes in this PR

### 1. Reuse resolved chat state for push notifications

`maybeSendPushNotification` used to re-read the last assistant message
and re-resolve the chat model/provider after `runChat` had already done
that work.

Now `runChat` returns the final assistant text plus the already-resolved
model and provider keys, and the push goroutine uses that state directly.

That removes the extra push-path reads for:

- `GetLastChatMessageByRole`
- the second `resolveChatModel` path
- the provider/model lookups that came with that second resolution

### 2. Cache webpush subscriptions during dispatch

`Dispatch()` previously hit `GetWebpushSubscriptionsByUserID` on every
push. A small per-user in-memory cache now avoids those repeated reads.

The follow-up fix keeps that optimization correct: `InvalidateUser()`
bumps a per-user generation so an older in-flight fetch cannot
repopulate the cache with pre-mutation data after subscribe/unsubscribe.

That preserves the cache win without letting local subscription changes
be silently overwritten by stale fetch results.

## Why this is safe

- The push change only reuses data already produced during the same chat
  run. It does not change notification semantics; if there is no
  assistant text to summarize, the existing fallback body still applies.
- The webpush change keeps the existing TTL and `410 Gone` cleanup
  behavior. The generation guard only prevents stale in-flight fetches
  from poisoning the shared cache after invalidation.
- The final PR does **not** change stream setup, pubsub/relay behavior,
  or chat status snapshot timing.

## Deliberately not included

- No stream-path optimization in `Subscribe`.
- No inline pubsub message payloads.
- No distributed cross-replica webpush cache invalidation.
2026-03-17 13:50:47 +11:00
Michael Suchacz 7cca2b6176 feat(site): add chat spend limit UI (#23072)
Frontend for agent chat spend limiting on `/agents`.

## Changes
- add the limits management UI, API hooks, and validation for
deployment, group, and user overrides
- show spend limit status in Agents analytics and usage summaries
- surface limit-related chat errors consistently in the agent detail
experience
- add shared currency and usage-limit messaging helpers plus related
stories/tests
2026-03-17 02:01:51 +01:00
Michael Suchacz 1031da9738 feat: add agent chat spend limiting (backend) (#23071)
Introduces deployment-scoped spend limiting for Coder Agents, enabling
administrators to control LLM costs at global, group, and individual
user levels.

## Changes

- **Database migration (000437)**: `chat_usage_limit_config`
(singleton), `chat_usage_limit_overrides` (per-user),
`chat_usage_limit_group_overrides` (per-group)
- **Single-query limit resolution**: individual override > min(group) >
global default via `ResolveUserChatSpendLimit`
- **Fail-open enforcement** in chatd with documented TOCTOU trade-off
- **Experimental API** under `/api/experimental/chats/usage-limits` for
CRUD on limits
- **`AsChatd` RBAC subject** for narrowly-scoped daemon access (replaces
`AsSystemRestricted`)
- **Generated TypeScript types** for the frontend SDK

## Hierarchy

1. Individual user override (highest)
2. Minimum of group limits
3. Global default
4. Disabled / unlimited

Currency stored as micro-dollars (`1,000,000` = $1.00).
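The resolution hierarchy can be sketched in Go. This is an illustrative in-memory model, not the actual single-query `ResolveUserChatSpendLimit` (the function name and `nil`-means-unset convention here are assumptions):

```go
package main

import "fmt"

// resolveLimit sketches the documented hierarchy: an individual override
// beats the minimum of group limits, which beats the global default.
// Amounts are micro-dollars (1,000,000 = $1.00); nil means "not set".
func resolveLimit(user *int64, groups []int64, global *int64) (int64, bool) {
	if user != nil {
		return *user, true // 1. individual override (highest)
	}
	if len(groups) > 0 {
		min := groups[0] // 2. minimum across the user's groups
		for _, g := range groups[1:] {
			if g < min {
				min = g
			}
		}
		return min, true
	}
	if global != nil {
		return *global, true // 3. global default
	}
	return 0, false // 4. disabled / unlimited
}

func main() {
	global := int64(50_000_000) // $50 deployment default
	limit, ok := resolveLimit(nil, []int64{20_000_000, 10_000_000}, &global)
	fmt.Println(limit, ok) // 10000000 true: min of group limits wins
}
```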

Frontend PR: #23072
2026-03-17 01:24:03 +01:00
Kyle Carberry b69631cb35 chore(site): improve mobile layout for agent chat (#23139) 2026-03-16 18:36:37 -04:00
Kyle Carberry 7b0aa31b55 feat: render file references inline in user messages (#23131) 2026-03-16 17:17:23 -04:00
Steven Masley 93b9d70a9b chore: add audit log entry when ai seat is consumed (#22683)
When an ai seat is consumed, an audit log entry is made. This only happens the first time a seat is used.
2026-03-16 15:30:25 -05:00
Kyle Carberry 6972d073a2 fix: improve background process handling for agent tools (#23132)
## Problem

Models frequently use shell `&` instead of `run_in_background=true` when
starting long-running processes through `/agents`, causing them to die
shortly after starting. This happens because:

1. **No guidance in tool schema** — The `ExecuteArgs` struct had zero
`description` tags. The model saw `run_in_background: boolean
(optional)` with no explanation of when/why to use it.
2. **Shell `&` is silently broken** — `sh -c "command &"` forks the
process, the shell exits immediately, and the forked child becomes an
orphan not tracked by the process manager.
3. **No process group isolation** — The SSH subsystem sets `Setsid:
true` on spawned processes, but the agent process manager set no
`SysProcAttr` at all. Signals only hit the top-level `sh`, not child
processes.

## Investigation

Compared our implementation against **openai/codex** and **coder/mux**:

| Aspect | codex | mux | coder/coder (before) |
|--------|-------|-----|---------------------|
| Background flag | Yield/resume with `session_id` | `run_in_background`
with rich description | `run_in_background` with **no description** |
| `&` handling | `setsid()` + `killpg()` | `detached: true` +
`killProcessTree()` | **Nothing** — orphaned children escape |
| Process isolation | `setsid()` on every spawn | `set -m; nohup ...
setsid` for background | **No `SysProcAttr` at all** |
| Signal delivery | `killpg(pgid, sig)` — entire group | `kill -15
-\$pid` — negative PID | `proc.cmd.Process.Signal()` — **PID only** |

## Changes

### Fix 1: Add descriptions to `ExecuteArgs` (highest impact)
The model now sees explicit guidance: *"Use for long-running processes
like dev servers, file watchers, or builds. Do NOT use shell & — it will
not work correctly."*

### Fix 2: Update tool description
The top-level execute tool description now reinforces: *"Use
run_in_background=true for long-running processes. Never use shell '&'
for backgrounding."*

### Fix 3: Detect trailing `&` and auto-promote to background
Defense-in-depth: if the model still uses `command &`, we strip the `&`
and promote to `run_in_background=true` automatically. Correctly
distinguishes `&` from `&&`.
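The `&`-vs-`&&` distinction in Fix 3 can be sketched like this (hypothetical helper, not the actual agent code):

```go
package main

import (
	"fmt"
	"strings"
)

// promoteBackground strips a single trailing "&" and reports that the
// command should be promoted to run_in_background=true. A trailing "&&"
// is an incomplete AND-list, not a backgrounding request, so it is left
// untouched.
func promoteBackground(cmd string) (string, bool) {
	trimmed := strings.TrimRight(cmd, " \t")
	if !strings.HasSuffix(trimmed, "&") || strings.HasSuffix(trimmed, "&&") {
		return cmd, false
	}
	stripped := strings.TrimRight(strings.TrimSuffix(trimmed, "&"), " \t")
	return stripped, true
}

func main() {
	fmt.Println(promoteBackground("npm run dev &"))    // npm run dev true
	fmt.Println(promoteBackground("make a && make b")) // make a && make b false
}
```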

### Fix 4: Process group isolation (`Setpgid`)
New platform-specific files (`proc_other.go` / `proc_windows.go`)
following the same pattern as `agentssh/exec_other.go`. Every spawned
process gets its own process group.

### Fix 5: Process group signaling
`signal()` now uses `syscall.Kill(-pid, sig)` on Unix to signal the
entire process group, ensuring child processes from shell pipelines are
also cleaned up.

## Testing
All existing `agent/agentproc` tests pass. Both packages compile
cleanly.
2026-03-16 16:22:10 -04:00
Kyle Carberry 89bb5bb945 ci: fix build job disk exhaustion on Depot runners (#23136)
## Problem

The `build` job on `main` has been failing intermittently (and now
consistently) with `no space left on device` on the
`depot-ubuntu-22.04-8` runner. The runner's disk fills up during Docker
image builds or SBOM generation, depending on how close to the limit a
given run lands.

The build was already at the boundary — the Go build cache alone is ~1.3
GB, build artifacts are ~2 GB, and Docker image builds + SBOM scans need
several hundred MB of headroom in `/tmp`. No single commit caused this;
cumulative growth in dependencies and the scheduled `coder-base:latest`
rebuild on Monday morning nudged it past the limit.

## Fix

Three changes to reclaim ~2 GB of disk before Docker runs:

1. **Build all platform archives and packages in the Build step** —
moves arm64/armv7 `.tar.gz` and `.deb` from the Docker step to the Build
step so we can clean caches in between.

2. **Clean up Go caches between Build and Docker** — once binaries are
compiled, the Go build cache and module cache aren't needed. Also
removes `.apk`/`.rpm` packages that are never uploaded.

3. **Set `DOCKER_IMAGE_NO_PREREQUISITES`** — tells make to skip
redundantly building `.deb`/`.rpm`/`.apk`/`.tar.gz` as prerequisites of
Docker image targets. The Makefile already supports this flag for
exactly this purpose.
2026-03-16 15:38:58 -04:00
Kyle Carberry b7eab35734 fix(site): scope chat cache helpers to chat-list queries only (#23134)
## Problem

`updateInfiniteChatsCache`, `prependToInfiniteChatsCache`, and
`readInfiniteChatsCache` use `setQueriesData({ queryKey: ["chats"] })`
which prefix-matches **all** queries starting with `"chats"`, including
`["chats", chatId, "messages"]`.

After #23083 converted chat messages to `useInfiniteQuery`, the cached
messages data gained a `.pages` property containing
`ChatMessagesResponse` objects (not `Chat[]` arrays). The `if
(!prev.pages)` guard no longer bailed out, and the updater called
`.map()` on these objects — `TypeError: Z.map is not a function`.

## Fix

Extract the `isChatListQuery` predicate that already existed inline in
`invalidateChatListQueries` and apply it to all four cache helpers. This
scopes them to sidebar queries (`["chats"]` or `["chats",
<filterOpts>]`) and skips per-chat queries (`["chats", <id>, ...]`).
2026-03-16 14:23:56 -04:00
Zach 3f76f312e4 feat(cli): add --no-wait flag to coder create (#22867)
Adds a `--no-wait` flag (CODER_CREATE_NO_WAIT) to the create command,
matching the existing pattern in `coder start`. When set, the `coder
create` command returns immediately after the workspace creation API
call succeeds instead of streaming build logs until completion.

This enables fire-and-forget workspace creation in CI/automation
contexts (e.g., GitHub Actions), where waiting for the build to finish
is unnecessary. Combined with other existing flags, users can create a
workspace with no interactivity, assuming the user is already
authenticated.
2026-03-16 11:54:30 -06:00
Steven Masley abf59ee7a6 feat: track ai seat usage (#22682)
When a user uses an AI feature, we record them in the `ai_seat_state` as consuming a seat. 

Added debouncing to prevent excessive writes to the db for this feature. There is no need for frequent updates.
2026-03-16 12:36:26 -05:00
Steven Masley cabb611fd9 chore: implement database crud for AI seat usage (#22681)
Creates a new table `ai_seat_state` to keep track of when users consume an ai_seat. Once a user consumes an AI seat, they will remain in this table forever (as it stands today).
2026-03-16 11:53:20 -05:00
Matt Vollmer b2d8b67ff7 feat(site): add Early Access notice below agents chat input (#23130)
Adds a centered "Coder Agents is available via Early Access" line
directly beneath the chat input on the `/agents` index page. The "Early
Access" text links to
https://coder.com/docs/ai-coder/agents/early-access.

<img width="1192" height="683" alt="image"
src="https://github.com/user-attachments/assets/1823a5d2-6f02-48c2-ac70-a62b8f52be55"
/>

---

PR generated with Coder Agents
2026-03-16 12:45:17 -04:00
Thomas Kosiewski c1884148f0 feat: add VS Code iframe embed auth bootstrap (#23060)
## VS Code iframe embed auth via postMessage + setSessionToken

Adds embed auth for VS Code iframe integration, allowing the Coder agent
chat UI to be embedded in VS Code webviews without manual login — using
direct header auth instead of cookies.

### How it works

1. **Parent frame** (VS Code webview) loads an iframe pointing to
`/agents/:agentId/embed`
2. **Embed page** detects the user is signed out and posts
`coder:vscode-ready` to the parent
3. **Parent** responds with `coder:vscode-auth-bootstrap` containing the
user's Coder API token
4. **Embed page** calls `API.setSessionToken(token)` to set the
`Coder-Session-Token` header on all subsequent axios requests
5. **Embed page** fetches user + permissions, sets them in the React
Query cache atomically, and renders the authenticated agent chat UI

No cookies, no CSRF, no backend endpoint needed. The token is passed via
postMessage and used as a header on every API request.

### What changed

**Frontend** (`site/src/pages/AgentsPage/`):
- `AgentEmbedPage.tsx` — added postMessage bootstrap directly in the
embed page: listens for `coder:vscode-auth-bootstrap`, calls
`API.setSessionToken(token)`, fetches user/permissions atomically to
avoid race conditions
- `EmbedContext.tsx` — React context signaling embed mode (from previous
commit, unchanged)
- `AgentDetail/TopBar.tsx` — conditionally hides navigation elements in
embed mode (from previous commit, unchanged)
- Both `/agents/:agentId/embed` and `/agents/:agentId/embed/session`
routes live outside `RequireAuth`

**Auth bootstrap** (`site/src/api/queries/users.ts`):
- `bootstrapChatEmbedSessionFn` now calls `API.setSessionToken(token)`
instead of posting to a backend endpoint
- Fetches user and permissions directly via `API.getAuthenticatedUser()`
and `API.checkAuthorization()`, then sets both in the query cache
atomically — this avoids a race where `isSignedIn` flips before
permissions are loaded

**Removed** (no longer needed):
- `coderd/embedauth.go` — the `POST
/api/experimental/chats/embed-session` handler
- `coderd/embedauth_test.go` — backend tests for the endpoint
- `codersdk/embedauth.go` — `EmbedSessionTokenRequest` SDK type
- `site/src/api/api.ts` — `postChatEmbedSession` method
- `docs/user-guides/workspace-access/vscode-embed-auth.md` — doc page
for the old cookie flow
- Swagger/API doc entries for the endpoint

### Why not cookies?

The initial implementation used a backend endpoint to set an HttpOnly
session cookie. This required `SameSite=None; Secure` for cross-origin
iframes, which doesn't work over HTTP in development (Chrome requires
HTTPS for `Secure` cookies). The `setSessionToken` approach bypasses
cookies entirely — the token is set as an axios default header, and
header-based auth also naturally bypasses CSRF protection.

### Dogfooding

Tested end-to-end with a VS Code extension that:
1. Registers a `/openChat` deep link handler
(`vscode://coder.coder-remote/openChat?url=...&token=...&agentId=...`)
2. Starts a local HTTP reverse proxy (to work around VS Code webview
iframe sandboxing)
3. Loads `/agents/:agentId/embed` in an iframe through the proxy
4. Relays the postMessage handshake between the iframe and the extension
host
5. The embed page receives the token, calls `setSessionToken`, and
renders the chat

Verified: chat title, messages, and input field all display correctly in
VS Code's secondary sidebar panel.
2026-03-16 17:45:01 +01:00
Kyle Carberry 741af057dc feat: paginate chat messages endpoint with cursor-based infinite scroll (#23083)
Adds cursor-based pagination to the chat messages endpoint.

## Backend

- New `GetChatMessagesByChatIDPaginated` SQL query: returns messages in
`id DESC` order with a `before_id` keyset cursor and configurable
`limit`
- Handler parses `?before_id=N&limit=N` query params, uses the `LIMIT
N+1` trick to set `has_more` without a separate COUNT query
- Queued messages only returned on the first page (no cursor) since
they're always the most recent
- SDK client updated with `ChatMessagesPaginationOptions`
- Fully backward compatible: omitting params returns the 50 newest
messages
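The keyset cursor plus `LIMIT N+1` trick can be sketched with an in-memory stand-in for the SQL query (hypothetical helper; the real query runs server-side):

```go
package main

import "fmt"

// page fetches up to limit+1 message IDs newest-first, optionally
// starting strictly before a cursor (beforeID; 0 means first page),
// and derives hasMore from the extra row — no COUNT query needed.
func page(ids []int, beforeID, limit int) (out []int, hasMore bool) {
	for i := len(ids) - 1; i >= 0; i-- { // ids ascending; walk newest-first
		if beforeID != 0 && ids[i] >= beforeID {
			continue // keyset filter: id < before_id
		}
		out = append(out, ids[i])
		if len(out) == limit+1 {
			return out[:limit], true // extra row => more pages exist
		}
	}
	return out, false
}

func main() {
	ids := []int{1, 2, 3, 4, 5}
	first, more := page(ids, 0, 2)
	fmt.Println(first, more) // [5 4] true
	// The last ID of one page is the cursor for the next; inserts of
	// newer messages cannot shift it, unlike an offset.
	second, more := page(ids, first[len(first)-1], 2)
	fmt.Println(second, more) // [3 2] true
}
```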

## Frontend

- Switches `getChatMessages` from `useQuery` to `useInfiniteQuery` with
cursor chaining via `getNextPageParam`
- Pages flattened and sorted by `id` ascending for chronological display
- `MessagesPaginationSentinel` component uses `IntersectionObserver`
(200px rootMargin prefetch) inside the existing `flex-col-reverse`
scroll container
- `flex-col-reverse` handles scroll anchoring natively when older
messages are prepended — no manual `scrollTop` adjustment needed (same
pattern as coder/blink)

## Why cursor-based instead of offset/limit

Offset-based pagination breaks when new messages arrive while paginating
backward (offsets shift, causing duplicates or missed messages). The
`before_id` cursor is stable regardless of inserts — each page is
deterministic.
2026-03-16 16:40:59 +00:00
Kyle Carberry 32a894d4a7 fix: error on ambiguous matches in edit_files tool (#23125)
## Problem

The `edit_files` tool used `strings.ReplaceAll` for exact substring
matches, silently replacing **every** occurrence. When an LLM's search
string wasn't unique in the file, this caused unintended edits. Fuzzy
matches (passes 2 and 3) only replaced the first occurrence, creating
inconsistent behavior. Zero matches were also silently ignored.

## Investigation

Investigated how **coder/mux** and **openai/codex** handle this:

| Tool | Multiple matches | No match | Flag |
|---|---|---|---|
| **coder/mux** `file_edit_replace_string` | Error (default
`replace_count=1`) | Error | `replace_count` (int, default 1, -1=all) |
| **openai/codex** `apply_patch` | Uses first match after cursor
(structural disambiguation via context lines + `@@` markers) | Error |
None (different paradigm) |
| **coder/coder** `edit_files` (before) | Exact: replaces all. Fuzzy:
replaces first. | Silent success | None |

## Solution

Adopted the mux approach (error on ambiguity) with a simpler
`replace_all: bool` instead of `replace_count: int`:

- **Default (`replace_all: false`)**: search string must match exactly
once. Multiple matches → error with guidance: *"search string matches N
occurrences. Include more surrounding context to make the match unique,
or set replace_all to true"*
- **`replace_all: true`**: replaces all occurrences (opt-in for
intentional bulk operations like variable renames)
- **Zero matches**: now returns an error instead of silently succeeding

Chose `bool` over `int` count because:
1. LLMs are bad at counting occurrences
2. The real intent is binary (one specific spot vs. all occurrences)
3. Simpler error recovery loop for the LLM
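The adopted behavior can be sketched as follows (hypothetical helper, not the actual `agentfiles` implementation): count matches before replacing, error on zero or ambiguous matches, and let `replace_all` opt in to bulk replacement.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// applyEdit replaces search with replace in content. Zero matches and
// ambiguous matches both return errors; replaceAll opts in to replacing
// every occurrence.
func applyEdit(content, search, replace string, replaceAll bool) (string, error) {
	n := strings.Count(content, search)
	switch {
	case n == 0:
		return "", errors.New("search string not found")
	case n > 1 && !replaceAll:
		return "", fmt.Errorf("search string matches %d occurrences; "+
			"include more surrounding context to make the match unique, "+
			"or set replace_all to true", n)
	}
	return strings.ReplaceAll(content, search, replace), nil
}

func main() {
	_, err := applyEdit("a b a", "a", "x", false)
	fmt.Println(err != nil) // true: ambiguous without replace_all
	out, _ := applyEdit("a b a", "a", "x", true)
	fmt.Println(out) // x b x
}
```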

## Changes

| File | Change |
|---|---|
| `codersdk/workspacesdk/agentconn.go` | Add `ReplaceAll bool` to
`FileEdit` struct |
| `agent/agentfiles/files.go` | Count matches before replacing; error if
>1 and not opted in; error on zero matches; add `countLineMatches`
helper |
| `codersdk/toolsdk/toolsdk.go` | Expose `replace_all` in tool schema
with description |
| `agent/agentfiles/files_test.go` | Update existing tests, add
`EditEditAmbiguous`, `EditEditReplaceAll`, `NoMatchErrors`,
`AmbiguousExactMatch`, `ReplaceAllExact` |
2026-03-16 16:17:33 +00:00
Spike Curtis 4fdd48b3f5 chore: randomize task status update times in load generator (#23058)
fixes https://github.com/coder/scaletest/issues/92

Randomizes the time between task status updates so that we don't send them all at the same time for load testing.
2026-03-16 12:06:29 -04:00
Charlie Voiselle e94de0bdab fix(coderd): render HTML error page for OIDC email validation failures (#23059)
## Summary

When the email address returned from an OIDC provider doesn't match the
configured allowed domain list (or isn't verified), users previously saw
raw JSON dumped directly in the browser — an ugly and confusing
experience during a browser-redirect flow.

This PR replaces those JSON responses with the same styled static HTML
error page already used for group allow-list errors, signups-disabled,
and wrong-login-type errors.

## Changes

### `coderd/userauth.go`
Replaced 3 `httpapi.Write` calls in `userOIDC` with
`site.RenderStaticErrorPage`:

| Error case | Title shown |
|---|---|
| Email domain not in allowed list | "Unauthorized email" |
| Malformed email (no `@`) with domain restrictions | "Unauthorized
email" |
| `email_verified` is `false` | "Email not verified" |

All render HTTP 403 with `HideStatus: true` and a "Back to login" action
button.

### `coderd/userauth_test.go`
- Updated `AssertResponse` callbacks on existing table-driven tests
(`EmailNotVerified`, `NotInRequiredEmailDomain`,
`EmailDomainForbiddenWithLeadingAt`) to verify HTML Content-Type and
page content.
- Extended `TestOIDCDomainErrorMessage` to additionally assert HTML
rendering.
- Added new `TestOIDCErrorPageRendering` with 3 subtests covering all
error scenarios, verifying: HTML doctype, expected title/description,
"Back to login" link, and absence of JSON markers.

---------

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-03-16 11:56:59 -04:00
Mathias Fredriksson fa8693605f test(provisioner/terraform/testdata): add subagent_id to UUID preservation and regenerate (#23122)
The minimize_diff function in generate.sh preserves autogenerated
values (UUIDs, tokens, etc.) across regeneration to keep diffs
minimal. The subagent_id field was missing from the preservation
list, causing unnecessary churn on devcontainer test data.

Also regenerates all testdata with the current terraform (1.14.5)
and coder provider (2.14.0).
2026-03-16 15:51:06 +00:00
Kyle Carberry af1be592cf fix: disable agent notification chime by default (#23124)
The completion chime on `/agents` was enabled by default for new users
(or when no localStorage preference existed). This changes the default
to disabled, so users must explicitly opt in via the sound toggle
button.

## Changes

- `getChimeEnabled()` now returns `false` when no preference is stored
(was `true`)
- `catch` fallback also returns `false` (was `true`)
- Updated tests to reflect the new default and explicitly enable the
chime in `maybePlayChime` tests
2026-03-16 11:49:04 -04:00
Kyle Carberry 6f97539122 fix: update sidebar diff status on WebSocket events (#23116)
## Problem

The sidebar diff status (PR icon, +additions/-deletions, file count) was
not updating in real-time. Users had to reload the page to see changes.

Two root causes:

1. **Frontend**: The `diff_status_change` WebSocket handler in
`AgentsPage.tsx` had an early `return` (line 398) that skipped
`updateInfiniteChatsCache`, so the sidebar's cache was never updated.
Even for other event types, the cache merge only spread `status` and
`title` — never `diff_status`.

2. **Server**: `publishChatPubsubEvent` in `chatd.go` constructed a
minimal `Chat` payload without `DiffStatus`, so even if the frontend
consumed the event, `updatedChat.diff_status` would be `undefined`.

## Fix

### Server (`coderd/chatd/chatd.go`)
- `publishChatPubsubEvent` now accepts an optional
`*codersdk.ChatDiffStatus` parameter; when non-nil it's set on the
outgoing `Chat` payload.
- `PublishDiffStatusChange` fetches the diff status from the DB,
converts it, and passes it through.
- Added `convertDBChatDiffStatus` (mirrors `coderd/chats.go`'s converter
to avoid circular import).
- All other callers pass `nil`.

### Frontend (`site/src/pages/AgentsPage/AgentsPage.tsx`)
- Removed the early `return` so `diff_status_change` events fall through
to the cache update logic.
- Added `isDiffStatusEvent` flag and spread `diff_status` into both the
infinite chats cache (sidebar) and the individual chat cache.
2026-03-16 15:41:32 +00:00
Kyle Carberry 530872873e chore: remove swagger annotations from experimental chat endpoints (#23120)
The `/archive` and `/desktop` chat endpoints had swagger route comments
(`@Summary`, `@ID`, `@Router`, etc.) that would cause them to appear in
generated API docs. Since these live under `/experimental/chats`, they
should not be documented.

This removes the swagger annotations and adds the standard `//
EXPERIMENTAL: this endpoint is experimental and is subject to change.`
comment to `archiveChat` (the `watchChatDesktop` handler already had it,
just needed the swagger block removed).
2026-03-16 08:41:13 -07:00
Matt Vollmer 115011bd70 docs: rename Chat API to Chats API (#23121)
Renames the page title and manifest label from "Chat API" to "Chats API"
to match the plural endpoint path (`/api/experimental/chats`).
2026-03-16 11:31:43 -04:00
Matt Vollmer 3c6445606d docs: add Chat API page under Coder Agents (#22898)
Adds `docs/ai-coder/agents/chat-api.md` — a concise guide for the
experimental `/api/experimental/chats` endpoints.

**What's included:**
- Authentication
- Quick start curl example
- Core workflow (create → stream → follow-up)
- All major endpoints: create, messages, stream, list, get, archive,
interrupt
- File uploads
- Chat status reference

Also marks all Coder Agents child pages as `early access` in
`docs/manifest.json`.
2026-03-16 11:00:36 -04:00
Cian Johnston f8dff3f758 fix: improve push notification message shown on subscribe (#23052)
Updates push notification message for test notification.
2026-03-16 14:52:31 +00:00
Kyle Carberry 27cbf5474b refactor: remove /diff-status endpoint, include diff_status in chat payload (#23082)
The `/chats/{chat}/diff-status` endpoint was redundant because:
- The `Chat` type already has a `DiffStatus` field
- Listing chats already resolves and returns `diff_status`
- The `getChat` endpoint was the only one not resolving it (passing
`nil`)

## Changes

**Backend:**
- `getChat` now calls `resolveChatDiffStatus` and includes the result in
the response
- Removed `getChatDiffStatus` handler, route (`GET /diff-status`), and
SDK method
- Tests updated to use `GetChat` instead of `GetChatDiffStatus`

**Frontend:**
- `AgentDetail.tsx`: uses `chatQuery.data?.diff_status` instead of
separate query
- `RemoteDiffPanel.tsx`: accepts `diffStatus` as a prop instead of
fetching internally
- `AgentsPage.tsx`: `diff_status_change` events now invalidate the chat
query
- Removed `chatDiffStatus` query, `chatDiffStatusKey`, and
`getChatDiffStatus` API method
2026-03-16 14:40:22 +00:00
blinkagent[bot] 3704e930a1 docs: update release calendar for v2.31 (#23113)
The release calendar was outdated — it still showed v2.30 as Mainline
and v2.31 as Not Released.

This runs the `scripts/update-release-calendar.sh` script and manually
re-adds the ESR rows that the script doesn't handle:

**Changes:**
- v2.28: Security Support → Not Supported
- v2.29: Stable + ESR → Security Support + ESR (v2.29.8)
- v2.30: Mainline → Stable (v2.30.3)
- v2.31: Not Released → Mainline (v2.31.5)
- Added 2.32 as Not Released
- Kept 2.24 as Extended Support Release
- Updated latest patch versions for all releases
- Removed 2.25 (no longer in the rolling window)

Created on behalf of @matifali

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-03-16 14:20:39 +00:00
Mathias Fredriksson 3a3537a642 refactor: rewrite develop.sh orchestrator in Go (#23054)
Replace the ~370-line bash develop.sh with a Go program using
serpent for CLI flags, errgroup for process lifecycle, and
codersdk for setup. develop.sh becomes a thin make + exec wrapper.

- Process groups for clean shutdown of child trees
- Docker template auto-creation via SDK ExampleID
- Idempotent setup (users, orgs, templates)
- Configurable --port, --web-port, --proxy-port
- Preflight runs lib.sh dependency checks
- TCP dial for port-busy checks
- Make target (build/.bin/develop) for build caching
2026-03-16 16:13:57 +02:00
Ethan c4db03f11a perf(coderd/database): skip redundant chat row update in InsertChatMessage (#23111)
## Summary

- add an `IS DISTINCT FROM` guard to `InsertChatMessage`'s
`updated_chat` CTE so `chats.last_model_config_id` is only rewritten
when the incoming `model_config_id` actually changes
- regenerate the query layer
- add focused regression coverage for the two meaningful behaviors:
same-model inserts and real model switches
- trim redundant message-field assertions so the new test stays focused
on the guard behavior

## Proof this is an improvement

This PR reduces work in the hottest chat write query without changing
the insert behavior.

### Why the old query did unnecessary work

Before this change, `InsertChatMessage` always ran this update whenever
`model_config_id` was non-null:

```sql
UPDATE chats
SET last_model_config_id = sqlc.narg('model_config_id')::uuid
WHERE id = @chat_id::uuid
  AND sqlc.narg('model_config_id')::uuid IS NOT NULL
```

That means the query rewrote the `chats` row even when
`chats.last_model_config_id` was already equal to the incoming value.

### What changes in this PR

This PR adds:

```sql
AND chats.last_model_config_id IS DISTINCT FROM sqlc.narg('model_config_id')::uuid
```

So same-model inserts still insert the message, but they no longer
perform a redundant `UPDATE chats`.

### Why this matters on the hot path

From the chat scaletest investigation that motivated this change:

- `InsertChatMessage` (+ `updated_chat` CTE) was the hottest write query
- about **104k calls**
- about **0.69 ms average latency**
- about **71.8 s total DB execution time**

We also verified common callsites where the update is provably
redundant:

- `CreateChat` inserts the chat with `LastModelConfigID =
opts.ModelConfigID`, then immediately inserts initial system/user
messages with that same model config
- follow-up user messages commonly pass `lockedChat.LastModelConfigID`
straight into `InsertChatMessage`
- assistant/tool/summary persistence keeps the current model in the
common case; only real switches or fallback cases need the chat row
update

That means a meaningful fraction of executions of the hottest DB write
query move from:

- **before:** insert message **+** rewrite chat row
- **after:** insert message only

This should reduce row churn and write contention on `chats`, especially
against other chat-row writers like `UpdateChatStatus` and
`GetChatByIDForUpdate`.
2026-03-17 00:44:10 +11:00
Danielle Maywood 08107b35d7 fix: remove stray whitespace in agents UI (#23110) 2026-03-16 13:42:59 +00:00
Michael Suchacz fbc8930fc3 fix(coderd): make chat cost summary tests deterministic (#23097)
Fixes flaky `TestChatCostSummary_UnpricedMessages` (and siblings) by
replacing implicit handler-default date windows with explicit time
windows derived from database-assigned message timestamps.

**Root cause:** Tests called `GetChatCostSummary` with empty options,
triggering the handler to use `[time.Now()-30d, time.Now())` as the
query window. The SQL filter's exclusive upper bound (`created_at <
@end_date`) can exclude freshly-inserted messages when the handler's
clock drifts even slightly past the message's `created_at`.

**Fix (test-only, `coderd/chats_test.go`):**
- `seedChatCostFixture` now captures `InsertChatMessage` return values
and exposes `EarliestCreatedAt`/`LatestCreatedAt`.
- Added `safeOptions()` helper that builds a padded ±1 min window around
DB timestamps.
- Updated 4 tests to use explicit date windows;
`TestChatCostSummary_DateRange` unchanged.

Validated with `go test -count=20` (100/100 passes).
2026-03-16 14:42:06 +01:00
Matt Vollmer 59553b8df8 docs(ai-coder): add enablement instructions for agents experiment (#23057)
Adds a new **Enable Coder Agents** section to the Early Access doc
explaining how to activate the `agents` experiment flag via
`CODER_EXPERIMENTS` or `--experiments`.

## Changes

### `docs/ai-coder/agents/early-access.md`
- New **Enable Coder Agents** section with env var and CLI flag
examples.
- Note that the `agents` flag is excluded from wildcard (`*`) opt-in.
- Quick-start checklist: dashboard → Admin → configure provider/model →
start chatting.
- Link to GitHub issues for feedback.

### `docs/ai-coder/agents/index.md`
- Updated **Product status** from "internal preview" to "Early Access"
with a link to the early-access page for enablement instructions.
2026-03-16 08:40:31 -04:00
Danielle Maywood 68fd82e0ba fix(site): right-align admin badge in agent settings nav tabs (#23104) 2026-03-16 12:01:05 +00:00
dependabot[bot] 2927fea959 chore: bump the x group with 6 updates (#23100)
Bumps the x group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.48.0` |
`0.49.0` |
| [golang.org/x/mod](https://github.com/golang/mod) | `0.33.0` |
`0.34.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.51.0` |
`0.52.0` |
| [golang.org/x/term](https://github.com/golang/term) | `0.40.0` |
`0.41.0` |
| [golang.org/x/text](https://github.com/golang/text) | `0.34.0` |
`0.35.0` |
| [golang.org/x/tools](https://github.com/golang/tools) | `0.42.0` |
`0.43.0` |

Updates `golang.org/x/crypto` from 0.48.0 to 0.49.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/crypto/commit/982eaa62dfb7273603b97fc1835561450096f3bd"><code>982eaa6</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="https://github.com/golang/crypto/commit/159944f128e9b3fdeb5a5b9b102a961904601a87"><code>159944f</code></a>
ssh,acme: clean up tautological/impossible nil conditions</li>
<li><a
href="https://github.com/golang/crypto/commit/a408498e55412f2ae2a058336f78889fb1ba6115"><code>a408498</code></a>
acme: only require prompt if server has terms of service</li>
<li><a
href="https://github.com/golang/crypto/commit/cab0f718548e8a858701b7b48161f44748532f58"><code>cab0f71</code></a>
all: upgrade go directive to at least 1.25.0 [generated]</li>
<li><a
href="https://github.com/golang/crypto/commit/2f26647a795e74e712b3aebc2655bca60b2686f9"><code>2f26647</code></a>
x509roots/fallback: update bundle</li>
<li>See full diff in <a
href="https://github.com/golang/crypto/compare/v0.48.0...v0.49.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/mod` from 0.33.0 to 0.34.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/mod/commit/1ac721dff8591283e59aba6412a0eafc8b950d83"><code>1ac721d</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="https://github.com/golang/mod/commit/fb1fac8b369ec75b114cb416119e80d3aebda7f5"><code>fb1fac8</code></a>
all: upgrade go directive to at least 1.25.0 [generated]</li>
<li>See full diff in <a
href="https://github.com/golang/mod/compare/v0.33.0...v0.34.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/net` from 0.51.0 to 0.52.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/net/commit/316e20ce34d380337f7983808c26948232e16455"><code>316e20c</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="https://github.com/golang/net/commit/9767a42264fa70b674c643d0c87ee95c309a4553"><code>9767a42</code></a>
internal/http3: add support for plugging into net/http</li>
<li><a
href="https://github.com/golang/net/commit/4a812844d820f49985ee15998af285c43b0a6b96"><code>4a81284</code></a>
http2: update docs to disrecommend this package</li>
<li><a
href="https://github.com/golang/net/commit/dec6603c16144712aab7f44821471346b35a2230"><code>dec6603</code></a>
dns/dnsmessage: reject too large of names early during unpack</li>
<li><a
href="https://github.com/golang/net/commit/8afa12f927391ba32da2b75b864a3ad04cac6376"><code>8afa12f</code></a>
http2: deprecate write schedulers</li>
<li><a
href="https://github.com/golang/net/commit/38019a2dbc2645a4c06a1e983681eefb041171c8"><code>38019a2</code></a>
http2: add missing copyright header to export_test.go</li>
<li><a
href="https://github.com/golang/net/commit/039b87fac41ca283465e12a3bcc170ccd6c92f84"><code>039b87f</code></a>
internal/http3: return error when Write is used after status 304 is
set</li>
<li><a
href="https://github.com/golang/net/commit/6267c6c4c825a78e4c9cbdc19c705bc81716597c"><code>6267c6c</code></a>
internal/http3: add HTTP 103 Early Hints support to ClientConn</li>
<li><a
href="https://github.com/golang/net/commit/591bdf35bce56ad50f53555c3cbb31e4bdda2d58"><code>591bdf3</code></a>
internal/http3: add HTTP 103 Early Hints support to Server</li>
<li><a
href="https://github.com/golang/net/commit/1faa6d8722697d9a1d8d4e973b3c46c7a5563f6c"><code>1faa6d8</code></a>
internal/http3: avoid potential race when aborting RoundTrip</li>
<li>Additional commits viewable in <a
href="https://github.com/golang/net/compare/v0.51.0...v0.52.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/term` from 0.40.0 to 0.41.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/term/commit/9d2dc074d2bdcb2229cbbaa0a252eace245a6489"><code>9d2dc07</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="https://github.com/golang/term/commit/d954e03213327a5b6380b6c2aec621192ee56007"><code>d954e03</code></a>
all: upgrade go directive to at least 1.25.0 [generated]</li>
<li>See full diff in <a
href="https://github.com/golang/term/compare/v0.40.0...v0.41.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/text` from 0.34.0 to 0.35.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/text/commit/7ca2c6d99153f6456168837916829c735c67d355"><code>7ca2c6d</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="https://github.com/golang/text/commit/73d1ba91404d0de47cb6a9b3fb52a31565ca4d25"><code>73d1ba9</code></a>
all: upgrade go directive to at least 1.25.0 [generated]</li>
<li>See full diff in <a
href="https://github.com/golang/text/compare/v0.34.0...v0.35.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/tools` from 0.42.0 to 0.43.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/tools/commit/24a8e95f9d7ae2696f66314da5e50c0d98ccaa90"><code>24a8e95</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="https://github.com/golang/tools/commit/3dd57fba1a6eed320cd9ea2b292cacdacda1e5e8"><code>3dd57fb</code></a>
gopls/internal/mcp: refactor unified diff generation</li>
<li><a
href="https://github.com/golang/tools/commit/fcc014db2b644cc1e0a9d08157efab0156699ada"><code>fcc014d</code></a>
cmd/digraph: fix package doc</li>
<li><a
href="https://github.com/golang/tools/commit/39f0f5c6d34afcb5664463f6e97c076187a305ea"><code>39f0f5c</code></a>
cmd/stress: add -failfast flag</li>
<li><a
href="https://github.com/golang/tools/commit/063c2644e296d3154b4dcbfc15ebeb09e6f07290"><code>063c264</code></a>
gopls/test/integration/misc: add diagnostics to flaky test</li>
<li><a
href="https://github.com/golang/tools/commit/deb6130cda665525d826291d591e988ace74f447"><code>deb6130</code></a>
gopls/internal/golang: fix hover panic in raw strings with CRLF</li>
<li><a
href="https://github.com/golang/tools/commit/5f1186b97512a314f8a35509072d7657eaf7c60a"><code>5f1186b</code></a>
gopls/internal/analysis/driverutil: remove unnecessary new imports</li>
<li><a
href="https://github.com/golang/tools/commit/ff454944261ad40f98abfc097fae89272ce40935"><code>ff45494</code></a>
go/analysis: expose GoMod etc. to Pass.Module</li>
<li><a
href="https://github.com/golang/tools/commit/62daff4834809b6cce693f6f0dff1c2722cb6328"><code>62daff4</code></a>
go/analysis/passes/inline: fix panic in inlineAlias with instantiated
generic...</li>
<li><a
href="https://github.com/golang/tools/commit/fcb6088b9059538dd6bcbd5238c10ffdc71700b5"><code>fcb6088</code></a>
x/tools: delete obsolete code</li>
<li>Additional commits viewable in <a
href="https://github.com/golang/tools/compare/v0.42.0...v0.43.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 11:52:51 +00:00
Thomas Kosiewski d6306461bb feat(site): render computer tool screenshots as images in chat UI (#23074)
Instead of showing raw base64 JSON for Anthropic's computer use tool,
render the screenshot as an inline image. The image is clickable to open
at full resolution in a new tab.

## Changes

- **ComputerTool.tsx** — New component that renders base64 image data as
an `<img>` tag
- **Tool.tsx** — Added `ComputerRenderer` handling both single-object
and array-of-blocks result shapes
- **ToolIcon.tsx** — Added `MonitorIcon` for the `computer` tool
- **ToolLabel.tsx** — Added "Screenshot" label for the `computer` tool
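
The normalization described above can be sketched roughly as follows. The type and function names here are illustrative assumptions, not the actual definitions in `Tool.tsx`/`ComputerTool.tsx`; the real components render JSX around this logic.

```typescript
// Hypothetical shape of an Anthropic computer-tool image result block;
// the real types live in the site's Tool.tsx / ComputerTool.tsx.
type ImageBlock = {
	type: "image";
	source: { media_type: string; data: string }; // base64-encoded payload
};

// The renderer handles both result shapes: a single object or an
// array of blocks. Normalize to the first image block, if any.
function firstImageBlock(
	result: ImageBlock | ImageBlock[],
): ImageBlock | undefined {
	const blocks = Array.isArray(result) ? result : [result];
	return blocks.find((b) => b.type === "image");
}

// Build a data URL usable as the <img src> — and as the href of the
// link that opens the screenshot at full resolution in a new tab.
function toImageSrc(block: ImageBlock): string {
	return `data:${block.source.media_type};base64,${block.source.data}`;
}
```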
2026-03-16 12:36:18 +01:00
Michael Suchacz cb05419872 fix(site): inject time via prop for deterministic analytics story snapshots (#23092) 2026-03-16 12:34:55 +01:00
dependabot[bot] 29225252f6 chore: bump google.golang.org/api from 0.269.0 to 0.271.0 (#23102)
Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.269.0 to 0.271.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/releases">google.golang.org/api's
releases</a>.</em></p>
<blockquote>
<h2>v0.271.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.270.0...v0.271.0">0.271.0</a>
(2026-03-10)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3532">#3532</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/ccff5b35c0d730214473de122dcb96b110be0029">ccff5b3</a>)</li>
</ul>
<h2>v0.270.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.269.0...v0.270.0">0.270.0</a>
(2026-03-08)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3515">#3515</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/44db8ef7d07171dad68a5cc9026ab3f1cd77ef12">44db8ef</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3518">#3518</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b3dc663d78cba7be5dbd998a439edcdf4991b807">b3dc663</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3519">#3519</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/01c06b9034963e27855bf188049d1752fc2de525">01c06b9</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3520">#3520</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/7ed04540e547ca9cef1f9f48d54c1277f24773bf">7ed0454</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3521">#3521</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/d11f54e813163dfc52515d214065c67bc944c7ef">d11f54e</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3523">#3523</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/ce39b40dedcd239ea2fb4a18aedf23ba61b8ae90">ce39b40</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3525">#3525</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/15b140d66a7b67dd6bfea7d1473bd2df4d878f95">15b140d</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3526">#3526</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/1b18158bb7807b1a5a9f73dd4ec450f274a81da8">1b18158</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3527">#3527</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/a932a454c4fd97dfc66f0cca97afeae231a7e4e9">a932a45</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3528">#3528</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/f6ede69e7094cf4f7353841d593867f087f06b84">f6ede69</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3529">#3529</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b73e4fbc0017249279922cb4c223e44f98cc5db9">b73e4fb</a>)</li>
<li><strong>option/internaloption:</strong> Add more option
introspection (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3524">#3524</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/ac5da8f06619417a42c5e128dcb5aafcb1912353">ac5da8f</a>)</li>
<li><strong>option/internaloption:</strong> Unsafe option resolver (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3514">#3514</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b263ceeb1a4062ae6cda17c49073d5051d96fc90">b263cee</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md">google.golang.org/api's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.270.0...v0.271.0">0.271.0</a>
(2026-03-10)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3532">#3532</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/ccff5b35c0d730214473de122dcb96b110be0029">ccff5b3</a>)</li>
</ul>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.269.0...v0.270.0">0.270.0</a>
(2026-03-08)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3515">#3515</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/44db8ef7d07171dad68a5cc9026ab3f1cd77ef12">44db8ef</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3518">#3518</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b3dc663d78cba7be5dbd998a439edcdf4991b807">b3dc663</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3519">#3519</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/01c06b9034963e27855bf188049d1752fc2de525">01c06b9</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3520">#3520</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/7ed04540e547ca9cef1f9f48d54c1277f24773bf">7ed0454</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3521">#3521</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/d11f54e813163dfc52515d214065c67bc944c7ef">d11f54e</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3523">#3523</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/ce39b40dedcd239ea2fb4a18aedf23ba61b8ae90">ce39b40</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3525">#3525</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/15b140d66a7b67dd6bfea7d1473bd2df4d878f95">15b140d</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3526">#3526</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/1b18158bb7807b1a5a9f73dd4ec450f274a81da8">1b18158</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3527">#3527</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/a932a454c4fd97dfc66f0cca97afeae231a7e4e9">a932a45</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3528">#3528</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/f6ede69e7094cf4f7353841d593867f087f06b84">f6ede69</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3529">#3529</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b73e4fbc0017249279922cb4c223e44f98cc5db9">b73e4fb</a>)</li>
<li><strong>option/internaloption:</strong> Add more option
introspection (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3524">#3524</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/ac5da8f06619417a42c5e128dcb5aafcb1912353">ac5da8f</a>)</li>
<li><strong>option/internaloption:</strong> Unsafe option resolver (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3514">#3514</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b263ceeb1a4062ae6cda17c49073d5051d96fc90">b263cee</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/e79327bd305ea52af1334ef6b5385cf7a5acbbdc"><code>e79327b</code></a>
chore(main): release 0.271.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3533">#3533</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/a3dde28f12bc0c1aaab4a8a74ad9f46b53d53004"><code>a3dde28</code></a>
chore(deps): bump github.com/cloudflare/circl from 1.6.1 to 1.6.3 in
/interna...</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/bad57c0a2c19b7e0e5f5083d911544cca340a98a"><code>bad57c0</code></a>
chore(all): update all (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3530">#3530</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/ccff5b35c0d730214473de122dcb96b110be0029"><code>ccff5b3</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3532">#3532</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/15dd0b11d31423e7811736bbabe7e512a214f225"><code>15dd0b1</code></a>
chore(option/internaloption): more accessors (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3531">#3531</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/ad5d5aa8fa892f0129604d9c139081cc99eb4700"><code>ad5d5aa</code></a>
chore(main): release 0.270.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3516">#3516</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/b73e4fbc0017249279922cb4c223e44f98cc5db9"><code>b73e4fb</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3529">#3529</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/f6ede69e7094cf4f7353841d593867f087f06b84"><code>f6ede69</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3528">#3528</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/7342fc24a37cfa818cf4834578e0198c1b5e0334"><code>7342fc2</code></a>
chore(all): update all (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3522">#3522</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/a932a454c4fd97dfc66f0cca97afeae231a7e4e9"><code>a932a45</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3527">#3527</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/googleapis/google-api-go-client/compare/v0.269.0...v0.271.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/api&package-manager=go_modules&previous-version=0.269.0&new-version=0.271.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 11:34:33 +00:00
dependabot[bot] 93ea5f5d22 chore: bump github.com/coder/terraform-provider-coder/v2 from 2.13.1 to 2.14.0 (#23101)
Bumps
[github.com/coder/terraform-provider-coder/v2](https://github.com/coder/terraform-provider-coder)
from 2.13.1 to 2.14.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/coder/terraform-provider-coder/releases">github.com/coder/terraform-provider-coder/v2's
releases</a>.</em></p>
<blockquote>
<h2>v2.14.0</h2>
<h2>What's Changed</h2>
<ul>
<li>build(deps): Bump golang.org/x/mod from 0.29.0 to 0.30.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/463">coder/terraform-provider-coder#463</a></li>
<li>build(deps): Bump actions/checkout from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/468">coder/terraform-provider-coder#468</a></li>
<li>build(deps): Bump golang.org/x/crypto from 0.43.0 to 0.45.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/467">coder/terraform-provider-coder#467</a></li>
<li>build(deps): Bump github.com/hashicorp/terraform-plugin-log from
0.9.0 to 0.10.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/465">coder/terraform-provider-coder#465</a></li>
<li>fix: typo in data coder_external_auth example and docs by <a
href="https://github.com/krispage"><code>@​krispage</code></a> in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/420">coder/terraform-provider-coder#420</a></li>
<li>feat: add confliction with <code>subdomain</code> by <a
href="https://github.com/jakehwll"><code>@​jakehwll</code></a> in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/469">coder/terraform-provider-coder#469</a></li>
<li>build(deps): Bump golang.org/x/mod from 0.30.0 to 0.31.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/472">coder/terraform-provider-coder#472</a></li>
<li>build(deps): Bump golang.org/x/mod from 0.31.0 to 0.32.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/473">coder/terraform-provider-coder#473</a></li>
<li>feat: add <code>subagent_id</code> attribute to
<code>coder_devcontainer</code> resource by <a
href="https://github.com/DanielleMaywood"><code>@​DanielleMaywood</code></a>
in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/474">coder/terraform-provider-coder#474</a></li>
<li>fix: embed timezone database via <code>time/tzdata</code> by <a
href="https://github.com/mtojek"><code>@​mtojek</code></a> in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/476">coder/terraform-provider-coder#476</a></li>
<li>build(deps): Bump golang.org/x/mod from 0.32.0 to 0.33.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/477">coder/terraform-provider-coder#477</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/krispage"><code>@​krispage</code></a>
made their first contribution in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/420">coder/terraform-provider-coder#420</a></li>
<li><a href="https://github.com/jakehwll"><code>@​jakehwll</code></a>
made their first contribution in <a
href="https://redirect.github.com/coder/terraform-provider-coder/pull/469">coder/terraform-provider-coder#469</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/coder/terraform-provider-coder/compare/v2.13.1...v2.14.0">https://github.com/coder/terraform-provider-coder/compare/v2.13.1...v2.14.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/7fa3c10eaaf66dd1f67a14176a438cf05ec9e98e"><code>7fa3c10</code></a>
build(deps): Bump golang.org/x/mod from 0.32.0 to 0.33.0 (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/477">#477</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/ef9a6dda578892cdcf7ab7cf920a732010b86151"><code>ef9a6dd</code></a>
fix: embed timezone database via <code>time/tzdata</code> (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/476">#476</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/b6966bf427c6d9d418dd6a217fe8897bc15f618c"><code>b6966bf</code></a>
feat: add <code>subagent_id</code> attribute to
<code>coder_devcontainer</code> resource (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/474">#474</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/c9f205fca1ca25c70704be555ff524a46dff9f2e"><code>c9f205f</code></a>
build(deps): Bump golang.org/x/mod from 0.31.0 to 0.32.0 (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/473">#473</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/7a81d185379885b6b30a96a40fd8e5f7eee2640c"><code>7a81d18</code></a>
build(deps): Bump golang.org/x/mod from 0.30.0 to 0.31.0 (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/472">#472</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/76bda72ec5f47be88edd6d0c1347802609b1d041"><code>76bda72</code></a>
feat: add confliction with <code>subdomain</code> (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/469">#469</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/aee79c41a4e4f6770db90291dffe01c53667d8dc"><code>aee79c4</code></a>
fix: typo in data coder_external_auth example and docs (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/420">#420</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/9cfd35f441fa567150ecd5aa97c5f854a2800182"><code>9cfd35f</code></a>
build(deps): Bump github.com/hashicorp/terraform-plugin-log (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/465">#465</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/dd6246532b4f0047c0125bdcd70f6e900ca69d65"><code>dd62465</code></a>
build(deps): Bump golang.org/x/crypto from 0.43.0 to 0.45.0 (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/467">#467</a>)</li>
<li><a
href="https://github.com/coder/terraform-provider-coder/commit/60377bb12b7593f11f23a986e8a386d5566a0718"><code>60377bb</code></a>
build(deps): Bump actions/checkout from 5 to 6 (<a
href="https://redirect.github.com/coder/terraform-provider-coder/issues/468">#468</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/coder/terraform-provider-coder/compare/v2.13.1...v2.14.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/coder/terraform-provider-coder/v2&package-manager=go_modules&previous-version=2.13.1&new-version=2.14.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 11:33:55 +00:00
dependabot[bot] 9a6356513b chore: bump rust from d6782f2 to 7d37016 in /dogfood/coder (#23103)
Bumps rust from `d6782f2` to `7d37016`.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rust&package-manager=docker&previous-version=slim&new-version=slim)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 11:27:33 +00:00
Thomas Kosiewski 069d3e2beb fix(coderd): require ssh access for workspace chats (#23094)
### Motivation
- The chat creation flow associated a workspace agent for a chat if the requester could read the workspace, enabling privilege escalation where users without SSH/app-connect permissions could cause the daemon to open privileged agent connections and execute commands.
- The intent is to ensure that attaching a workspace agent to a chat only happens when the requester has the workspace SSH permission so the chat daemon cannot be abused to bypass RBAC.

### Description
- Require request-scoped authorization for workspace agent usage by changing `validateCreateChatWorkspaceSelection` to accept the `*http.Request` and calling `api.Authorize(r, policy.ActionSSH, workspace)` before selecting the workspace for a chat.
- Pass the HTTP request into the validator from `postChats` so authorization is evaluated in the request context (`postChats` now calls `validateCreateChatWorkspaceSelection(ctx, r, req)`).
- Add a regression test `WorkspaceAccessibleButNoSSH` in `coderd/chats_test.go` which creates an org-admin-scoped user (read access but no `ActionSSH`) and asserts that creating a chat with `WorkspaceID` is denied.

### Testing
- Ran `gofmt -w coderd/chats.go coderd/chats_test.go` which succeeded.
- Attempted to run repository pre-commit checks (`make pre-commit`) and targeted `go test` invocations; these could not be completed in this environment due to missing local tooling (protobuf include resolution, containerized DB access via the Docker socket, and long-running golden generation tasks), so full CI/pre-commit verification and end-to-end test runs did not complete here.
- Added a focused regression unit test (`WorkspaceAccessibleButNoSSH`) to prevent reintroduction of the authorization bypass; this test is included in the change and should be executed in CI where the full toolchain and test environment are available.

------
[Codex Task](https://chatgpt.com/codex/tasks/task_b_69b432502670832e91d14e937745de46)
2026-03-16 11:42:01 +01:00
Mathias Fredriksson aa6f301305 ci: add conventional commit PR title linting (#23096)
Restore PR title validation that was removed in 828f33a when
cdr-bot was expected to handle it. That bot has since been disabled.

The new title job in contrib.yaml validates:
- Conventional commit format (type(scope): description)
- Type from the same set used by release notes generation
- Scope validity derived from the changed files in the PR diff
- All changed files fall under the declared scope

Uses actions/github-script (no third-party marketplace actions).

Also fixes `feat(api)` examples across docs (no `api` folder exists)
and consolidates commit rules into CONTRIBUTING.md as the single
source of truth.
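The title validation described above can be sketched in TypeScript. This is an illustrative model only, assuming the same parse regex and type list as the github-script step; `parseTitle` is a hypothetical helper, not code from the workflow itself.

```typescript
// Allowed types mirror the set used by release notes generation.
const allowedTypes = [
	"feat", "fix", "docs", "style", "refactor",
	"perf", "test", "build", "ci", "chore", "revert",
];

interface ParsedTitle {
	type: string;
	scope?: string;
	breaking: boolean;
}

// Parse "type(scope)!: description"; scope and "!" are optional.
// Returns null when the title is not a valid conventional commit.
function parseTitle(title: string): ParsedTitle | null {
	const match = title.match(/^(\w+)(\(([^)]*)\))?(!)?\s*:\s*.+/);
	if (!match || !allowedTypes.includes(match[1])) {
		return null;
	}
	return { type: match[1], scope: match[3], breaking: match[4] === "!" };
}
```

Scope-versus-changed-files validation happens in a second step against the PR diff, which this sketch omits.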
2026-03-16 12:24:59 +02:00
Cian Johnston ae8bed4d8e feat(site): improve DERP health page readability (#22984)
## Why

The DERP health page displayed raw field names like
`MappingVariesByDestIP`, `PMP`, `PCP`, `HairPinning` with no context.
Users without deep networking knowledge had no way to understand what
these flags meant or why they mattered. This change makes the page
self-documenting.

## What

- DERPPage (`/health/derp`)
  - Replace flat pill row with four logically grouped tables: **Connectivity**, **IPv6 Support**, **NAT Traversal**, **Port Mapping**.
  - Rename section from "Flags" to "Network Checks".
  - Surface `CaptivePortal` flag (previously missing from the UI entirely).
  - Invert display of `MappingVariesByDestIP` and `CaptivePortal` so green always means good.
  - Handle `null` boolean fields (e.g. UPnP, PMP, PCP) with a distinct "not checked" neutral icon.

- DERPRegionPage (`/health/derp/regions/:regionId`)
  - Replace per-node `BooleanPill` row with a table showing **Exchange Messages**, **Direct HTTP Upgrade**, **STUN Enabled**, and **STUN Reachable** per node.
  - Invert `uses_websocket` display as "Direct HTTP Upgrade" (green when websocket is not needed).
  - Surface **STUN Enabled** and **STUN Reachable** per node (data was returned by the API but never rendered).
  - Add null guards for `region` and `node` (remove `!` non-null assertions).
  - Convert all emotion/MUI styles to Tailwind classes; remove `reportStyles` object and `useTheme` import.

- Content.tsx (shared)
  - Adds an exported `StatusIcon` component with three states: `true` (green check), `false` (red minus), `null` (neutral help icon).
2026-03-16 09:14:24 +00:00
Mathias Fredriksson 703b974757 fix(coderd): remove false devcontainers early access warning (#23056)
The script source claimed Dev Containers are early access and told
users to set CODER_AGENT_DEVCONTAINERS_ENABLE=true, which already
defaults to true. Clear the script source and set RunOnStart to
false since there is nothing to run.
2026-03-16 10:16:14 +02:00
dependabot[bot] 9c2f217ca2 chore: bump the coder-modules group across 3 directories with 2 updates (#23091)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 00:37:06 +00:00
Kyle Carberry 3d9628c27e ci: split build artifacts into per-platform uploads (#23081)
Splits the single `coder` artifact (containing all platforms in a 1.3GB
zip) into individual artifacts per OS/arch/format.

## Problem

All CI build artifacts are uploaded as a single artifact named `coder`,
producing a 1.3GB zip containing every platform's binary. This makes it
impossible to download a single platform's binary without pulling the
entire bundle.

## Solution

Upload each platform/format combination as a separate artifact:

| Artifact Name | Contents |
|---|---|
| `coder-linux-amd64.tar.gz` | Linux amd64 tarball |
| `coder-linux-amd64.deb` | Linux amd64 deb package |
| `coder-linux-arm64.tar.gz` | Linux arm64 tarball |
| `coder-linux-arm64.deb` | Linux arm64 deb package |
| `coder-linux-armv7.tar.gz` | Linux armv7 tarball |
| `coder-linux-armv7.deb` | Linux armv7 deb package |
| `coder-windows-amd64.zip` | Windows amd64 zip |

## Plan

This is the first step toward letting customers install directly from
`main` via:

```bash
curl -L https://coder.com/install.sh | sh -s -- --unsafe-unstable
```

GitHub Actions artifact downloads require authentication even for public
repos, so the next steps are to add a small Cloudflare Worker (similar
to the one we already have for `install.sh`) that:

1. Lists artifacts via the GitHub API (unauthenticated) to find the
latest artifact ID for the requested platform
2. Calls the download endpoint with a GitHub token (CF Worker secret) to
get a 302 redirect to a time-limited Azure Blob URL
3. Redirects the caller to that URL (which requires no auth)

This gives us publicly accessible per-platform URLs that the
`--unsafe-unstable` flag would point at. The worker doesn't proxy the
binary itself — it only proxies the metadata API call (~1KB) and
redirects for the actual download.

This PR splits the artifacts so the worker can serve individual platform
downloads (~200MB each) instead of forcing a 1.3GB bundle.
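Step 1 of that worker could look roughly like the sketch below. Everything here is hypothetical (`Artifact`, `latestArtifactID`, and the exact selection rules are assumptions modeled on the GitHub artifacts API response shape), not the actual worker implementation.

```typescript
// Abridged shape of an entry from GET /repos/{owner}/{repo}/actions/artifacts.
interface Artifact {
	id: number;
	name: string;
	created_at: string; // ISO 8601 timestamp
	expired: boolean;
}

// Pick the newest non-expired artifact whose name matches the
// requested platform (e.g. "coder-linux-amd64.tar.gz").
function latestArtifactID(artifacts: Artifact[], platform: string): number | undefined {
	return artifacts
		.filter((a) => a.name === platform && !a.expired)
		.sort((a, b) => b.created_at.localeCompare(a.created_at))[0]?.id;
}
```

The worker would then call the artifact download endpoint with its token and pass the resulting 302 Location back to the caller.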
2026-03-15 09:56:18 -04:00
Dean Sheather a2b8564c48 chore: update deploy to use EKS (#23084) 2026-03-16 00:55:09 +11:00
Mathias Fredriksson 1adc22fffd fix(agent/reaper): skip reaper tests in CI (#23068)
ForkReap's syscall.ForkExec and process-directed signals remain
flaky in CI despite the subprocess isolation added in #22894.
Restore the testutil.InCI() skip guard that was removed in that
change.

Fixes coder/internal#1402
2026-03-14 21:15:47 +01:00
Kyle Carberry 266c611716 refactor(site): consolidate Git panel diff viewers and polish UI (#23080)
## Summary

Refactors the Git panel in the Agents page to consolidate duplicated
diff viewer code and significantly improve the UI.

### Deduplication
- **RemoteDiffPanel** now uses the shared `DiffViewer` component instead
of duplicating file tree, lazy loading, scroll tracking, and layout
(~500 lines removed).
- Renamed `RepoChangesPanel` → `LocalDiffPanel`, `FilesChangedPanel` →
`RemoteDiffPanel` to reflect actual scope.
- Removed `headerLeft`/`headerRight` abstraction from `DiffViewer` —
each consumer owns its own header.
- Replaced hand-rolled `ChatDiffStatusResponse` with auto-generated
`ChatDiffStatus` from `typesGenerated.ts`.

### Tab Redesign
- Per-repo tabs: each local repo gets its own tab (`Working <repo>`)
instead of a single stacked view.
- PR tab shows state icon + PR title; branch-only tab shows branch icon.
- Tabs use `Button variant="outline"` matching the Git/Desktop tab
style.
- Radix `ScrollArea` with thin horizontal scrollbar for tab overflow.
- Diff style toggle and refresh button lifted to shared toolbar, always
visible.

### PR Header
- Compact sub-header: `base_branch ←`, state badge
(`Open`/`Draft`/`Merged`/`Closed`), diff stats, and `View PR` button.
- GitHub-style state-aware icons (green open, gray draft, purple merged,
red closed).
- New API fields synced: `base_branch`, `author_login`, `pr_number`,
`commits`, `approved`, `reviewer_count`.

### Local Changes Header  
- Compact sub-header: branch name, repo root path, diff stats, and
`Commit` button (styled to match `View PR`).
- `CircleDotIcon` (amber) for working changes tabs — universal
"modified" indicator.

### Visual Polish
- All text in sub-headers and buttons at 13px matching chat font size.
- All badges (`DiffStatBadge`, PR state, `View PR`, `Commit`) use
consistent `border-border-default`, `rounded-sm`, `leading-5`.
- No background color on diff viewer header bars.
- Tabs hidden when their view has no content; auto-switch when active
tab disappears.

### Stories
- New `GitPanel.stories.tsx` covering: open PR + working changes, draft
PR, merged PR, closed PR, branch only, working changes only, multiple
repos, empty state.
- Removed old `LocalDiffPanel.stories.tsx` and
`RemoteDiffPanel.stories.tsx`.
2026-03-14 15:21:30 -04:00
Kyle Carberry 83e4f9f93e fix(agents): narrow chat mutation query invalidation (#23078)
## Problem

Sending a message on the `/agents` page triggers a burst of redundant
HTTP requests. The root cause is that chat mutations call
`invalidateQueries({ queryKey: ["chats"] })` which, due to React Query's
default **prefix matching**, cascades to every query whose key starts
with `["chats"]`:

- `["chats", {archived: false}]` — infinite sidebar list
- `["chats", chatId]` — individual chat detail
- `["chats", chatId, "messages"]` — all messages
- `["chats", chatId, "diff-status"]` — diff status
- `["chats", chatId, "diff-contents"]` — diff contents
- `["chats", "costSummary", ...]` — cost summaries

All of these have active subscribers on the page, so each one fires a
network request. The WebSocket stream already delivers these updates in
real-time, making the HTTP refetches completely redundant.

## Fix

| Mutation | Before | After |
|---|---|---|
| `createChatMessage` | `invalidateQueries({ queryKey: chatsKey })` — prefix cascade | **Removed** — WebSocket delivers messages + sidebar updates |
| `interruptChat` | `invalidateQueries({ queryKey: chatsKey })` — prefix cascade | **Removed** — WebSocket delivers status changes |
| `editChatMessage` | 3 broad invalidations including `chatsKey` prefix | 2 targeted with `exact: true`: `chatKey(id)` + `chatMessagesKey(id)` |
| `promoteChatQueuedMessage` | 3 broad invalidations including `chatsKey` prefix | 2 targeted with `exact: true`: `chatKey(id)` + `chatMessagesKey(id)` |

`editChatMessage` keeps `chatMessagesKey` invalidation because editing
truncates messages server-side and the WebSocket can only insert/update,
never remove stale entries.

## Net effect

Sending a message previously triggered **5–7 HTTP requests**. Now it
triggers **zero** — the WebSocket handles everything.
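React Query's default prefix matching, which caused the cascade, can be modeled in a few lines. This is an illustrative sketch of the matching rule, not the library's code; `matchesKey` is hypothetical.

```typescript
type QueryKey = readonly unknown[];

// Default behavior: a filter key matches any query key it is a prefix
// of. With exact: true, the keys must match element-for-element.
function matchesKey(filter: QueryKey, key: QueryKey, exact = false): boolean {
	if (exact) {
		return JSON.stringify(filter) === JSON.stringify(key);
	}
	return filter.every((part, i) => JSON.stringify(part) === JSON.stringify(key[i]));
}
```

Under this rule, invalidating `["chats"]` hits `["chats", chatId, "messages"]` and every other chat query, which is exactly the cascade the fix removes.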

## Tests

Added `describe("mutation invalidation scope")` with 8 test cases
asserting that each mutation only invalidates the queries it genuinely
needs.
2026-03-14 18:22:37 +00:00
Kyle Carberry ff9d061ae9 fix(site): prevent duplicate chat in agents sidebar on creation (#23077)
## Problem

When creating a new chat in the agents page (`/agents`), the chat could
appear multiple times in the sidebar. This was a race condition
triggered by the WebSocket `created` event handler.

## Root Cause

`updateInfiniteChatsCache` applies its updater function **independently
on each page** of the infinite query:

```ts
const nextPages = prev.pages.map((page) => updater(page));
```

When the `watchChats` WebSocket received a `"created"` event, the
handler checked `exists` only within the *current page*, then prepended
the new chat if not found:

```ts
updateInfiniteChatsCache(queryClient, (chats) => {
    const exists = chats.some((c) => c.id === updatedChat.id);
    // ...
    if (chatEvent.kind === "created") {
        return [updatedChat, ...chats]; // runs per page!
    }
});
```

Since a brand-new chat doesn't exist in any page, **every loaded page**
prepends it. After `pages.flat()`, the chat appears once per loaded page
in the sidebar.

## Fix

- Added `prependToInfiniteChatsCache` in `chats.ts` that checks across
**all pages** before prepending, and only adds to page 0.
- Split the WebSocket handler so `"created"` events use the new safe
prepend, while update events (`title_change`, `status_change`) continue
using `updateInfiniteChatsCache` (which is safe for `.map()` operations
that don't add entries).
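A minimal sketch of the all-pages guard (illustrative only; the real helper in `chats.ts` operates on the React Query infinite cache and its generated types):

```typescript
interface Chat {
	id: string;
}

interface InfiniteData {
	pages: Chat[][];
}

// Prepend a chat only if it exists in no loaded page, and only to page 0.
function prependChat(data: InfiniteData, chat: Chat): InfiniteData {
	const exists = data.pages.some((page) => page.some((c) => c.id === chat.id));
	if (exists || data.pages.length === 0) {
		// Already present, or no pages loaded yet to prepend into.
		return data;
	}
	return {
		pages: data.pages.map((page, i) => (i === 0 ? [chat, ...page] : page)),
	};
}
```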
2026-03-14 13:27:54 -04:00
Kyle Carberry 0d3e39a24e feat: add head_branch to pull request diff status (#23076)
Adds the `head_branch` field (the source/feature branch name of a PR) to
the diff status pipeline. Previously only `base_branch` (target branch)
and the head commit SHA were captured from the GitHub API, but not the
head branch name itself.

## Changes

- **Migration 438**: Add `head_branch` nullable TEXT column to
`chat_diff_statuses`
- **gitprovider**: Parse `head.ref` from the GitHub API response
(alongside `head.sha`) and add `HeadBranch` to `PRStatus`
- **gitsync**: Wire `HeadBranch` through `refreshOne()` into the DB
upsert params
- **worker**: Map `HeadBranch` in `chatDiffStatusFromRow()`
- **coderd**: Convert `HeadBranch` in `convertChatDiffStatus()`
- **codersdk**: Expose as `head_branch` (`*string`, omitempty) in
`ChatDiffStatus` API response
- **Tests**: Updated `github_test.go` pull JSON fixtures and assertions
2026-03-14 17:24:19 +00:00
Thomas Kosiewski 3f7f25b3ee fix(chats): enforce desktop connect authorization (#23073)
### Motivation

- The desktop watch handler opened a VNC stream using the chat's
workspace ID while only relying on workspace read permissions, allowing
read-only users to escalate to interactive desktop access.
- Enforce connect-level authorization so only actors with
`ActionApplicationConnect` or `ActionSSH` can open the desktop stream.

### Description

- Added an explicit workspace lookup in `watchChatDesktop` using
`GetWorkspaceByID` to obtain a workspace object for authorization.
- Require the requester to be authorized for either
`policy.ActionApplicationConnect` or `policy.ActionSSH` on the workspace
before proceeding to locate agents or connect to the VNC stream, and
return `403 Forbidden` when neither permission is present.
- The change is minimal and localized to `coderd/chats.go` and does not
alter other code paths or behavior when the requester has the necessary
connect permissions.

### Testing

- Ran `gofmt -w coderd/chats.go` to format the modified file, which
succeeded.
- Attempted to run the unit test `TestWatchChatDesktop/NoWorkspace` via
`go test`, but the run did not complete within this environment's
constraints and so did not produce a pass result.
- Attempted to run the repository pre-commit/gen steps but they could
not complete due to missing developer tooling and services in this
environment (e.g. `sqlc`, `mockgen`, `protoc` plugins and test services
like Docker/Postgres), so full pre-commit validation did not finish
here.
- Code review and static validation confirm the added authorization
check properly prevents read-only access from opening the desktop VNC
stream.

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_b_69b46a4ac5c4832ea9d330aeba43c32d)
2026-03-14 17:53:05 +01:00
Kyle Carberry ddd1e86a90 fix(site): prevent infinite scroll from spamming duplicate chat list requests (#23075)
## Problem

The agents sidebar infinite scroll was spamming the `/api/v2/chats`
endpoint with duplicate requests at the same offset, caused by the
`LoadMoreSentinel` component.

### Root cause

`onLoadMore` is an inline arrow function (`() => void
chatsQuery.fetchNextPage()`), creating a **new function reference on
every render**. The `useEffect` in `LoadMoreSentinel` depended on
`[onLoadMore]`, so it tore down and re-created the
`IntersectionObserver` on every render. Each new observer immediately
fired its callback when the sentinel was already visible, triggering
duplicate fetches.

## Fix

- Store `onLoadMore` and `isFetchingNextPage` in **refs** so the
observer callback always reads the latest values without needing to tear
down/re-create.
- Create the `IntersectionObserver` **once on mount** (empty deps
array).
- **Guard** against calling `onLoadMore` while `isFetchingNextPage` is
true.

## Tests

- **LoadMoreSentinel behavior tests** (6 tests): verifies no duplicate
calls across re-renders, proper `isFetchingNextPage` gating, ref-based
observer stability, and correct resume after fetch completes.
- **`infiniteChats` query factory tests** (6 tests): covers
`getNextPageParam` and `queryFn` offset computation to prevent
pagination regressions.
2026-03-14 12:12:11 -04:00
Michael Suchacz 969066b55e feat(site): improve cost analytics view (#23069)
Surfaces cache token data in the analytics views and fixes table
spacing.

### Changes

- **Cache token columns**: Added cache read and cache write token counts
to all analytics views (user and admin), from SQL queries through Go SDK
types to the frontend tables and summary cards.
- **Table spacing fix**: Replaced the bare React fragment in
`ChatCostSummaryView` with a `space-y-6` container so the model and chat
breakdown tables no longer overlap.

### Data flow

`chat_messages` table already stores `cache_read_tokens` and
`cache_creation_tokens` (and uses them for cost calculation). This PR
aggregates and displays them alongside input/output tokens in:

- Summary cards (6 cards: Total Cost, Input, Output, Cache Read, Cache
Write, Messages)
- Per-model breakdown table
- Per-chat breakdown table
- Admin per-user table
2026-03-14 01:22:00 -05:00
Michael Suchacz f6976fd6c1 chore(dogfood): bump mux to 1.4.3 (#23039)
## Summary
- bump the dogfood Mux module to 1.4.3
- enable the new restart logic and allow up to 10 restart attempts

## Testing
- terraform fmt -check -diff dogfood/coder/main.tf
- git diff --check -- dogfood/coder/main.tf
- terraform -chdir=dogfood/coder validate
2026-03-14 06:46:44 +01:00
Michael Suchacz cbb3841e81 test(chats): verify cost summaries survive model deletion (#23051) 2026-03-14 06:35:46 +01:00
Callum Styan 36665e17b2 feat: add WatchAllWorkspaceBuilds endpoint for autostart scaletests (#22057)
This PR adds a `WatchAllWorkspaces` function with `watch-all-workspaces`
endpoint, which can be used to listen on a single global pubsub channel
for _all_ workspace build updates, and makes use of it in the autostart
scaletest.

This negates the need to use a workspace watch pubsub channel _per_
workspace, which has auth overhead associated with each call. This is
especially relevant in situations such as the autostart scaletest, where
we need to start/stop a set of workspaces before we can configure their
autostart config. The overhead associated with all the watch requests
skews the scaletest results and makes it harder to reason about the
performance of the autostart feature itself.

The autostart scaletest also no longer generates its own metrics nor
does it wait for all the workspaces to actually start via autostart. We
should update the scaletest dashboard after both PRs are merged to
measure autostart performance via the new metrics.



The new function/endpoint and its usage in the autostart scaletest are
gated behind an experiment feature flag, this is something we should
discuss whether we want to enable the endpoint in prod by default or
not. If so, we can remove the experiment.

---------

Signed-off-by: Callum Styan <callumstyan@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Callum Styan <callum@coder.com>
2026-03-13 20:37:41 -07:00
Kyle Carberry b492c42624 chore(dogfood): add Google Chrome to dogfood image (#23063)
Install Google Chrome stable directly from `dl.google.com`. Ubuntu 22.04
ships `chromium-browser` as a snap-only package, which does not work in
Docker containers.

```dockerfile
RUN wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && \
    apt-get install --yes ./google-chrome-stable_current_amd64.deb && \
    rm google-chrome-stable_current_amd64.deb
```

Verified in a running dogfood workspace:
```
$ google-chrome --version
Google Chrome 146.0.7680.75
```
2026-03-13 19:22:58 -04:00
Kyle Carberry c5b8611c5a feat(gitsync): enrich PR status with author, base branch, review info (#23038)
## Summary

Adds 7 new fields to the PR status stored by gitsync, all sourced from
the existing GitHub API calls (**zero additional HTTP requests**):

| Field | Source | Purpose |
|---|---|---|
| `author_login` | `pull.user.login` | PR author username |
| `author_avatar_url` | `pull.user.avatar_url` | PR author avatar for UI |
| `base_branch` | `pull.base.ref` | Target branch (e.g. `main`) |
| `pr_number` | `pull.number` | Explicit PR number |
| `commits` | `pull.commits` | Number of commits in PR |
| `approved` | Derived from reviews | True when ≥1 approved, no outstanding changes requested |
| `reviewer_count` | Derived from reviews | Distinct reviewers with a decisive state |

## Changes

- **`gitprovider/gitprovider.go`**: Added 7 fields to `PRStatus` struct.
- **`gitprovider/github.go`**: Expanded the anonymous struct in
`FetchPullRequestStatus` to decode new JSON fields. Replaced
`hasOutstandingChangesRequested()` with `summarizeReviews()` returning a
`reviewStats` struct with `changesRequested`, `approved`, and
`reviewerCount`.
- **Migration 000434**: Adds 7 columns to `chat_diff_statuses`.
- **`queries/chats.sql`**: Updated `UpsertChatDiffStatus`
INSERT/VALUES/ON CONFLICT.
- **`gitsync/gitsync.go`**: Maps new `PRStatus` fields into upsert
params.
- **`gitsync/worker.go`**: Maps new columns in row-to-model converter.
- **`codersdk/chats.go`**: Added fields to SDK `ChatDiffStatus` type.
- **`coderd/chats.go`**: Maps new DB fields in
`convertChatDiffStatus()`.
- Auto-generated: `models.go`, `queries.sql.go`, `dump.sql`,
`typesGenerated.ts`.
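The review derivation can be sketched as follows. This is an illustrative TypeScript model of the behavior described above; the real `summarizeReviews` is Go code in `gitprovider/github.go`, and treating only `APPROVED`/`CHANGES_REQUESTED` as decisive is an assumption.

```typescript
interface Review {
	userLogin: string;
	state: "APPROVED" | "CHANGES_REQUESTED" | "COMMENTED" | "DISMISSED";
}

interface ReviewStats {
	approved: boolean;
	changesRequested: boolean;
	reviewerCount: number;
}

function summarizeReviews(reviews: Review[]): ReviewStats {
	// Keep each reviewer's latest decisive state; COMMENTED and
	// DISMISSED reviews do not count as decisive here.
	const latest = new Map<string, Review["state"]>();
	for (const r of reviews) {
		if (r.state === "APPROVED" || r.state === "CHANGES_REQUESTED") {
			latest.set(r.userLogin, r.state);
		}
	}
	const states = [...latest.values()];
	const changesRequested = states.includes("CHANGES_REQUESTED");
	return {
		// Approved only when at least one approval exists and no
		// reviewer has outstanding requested changes.
		approved: states.includes("APPROVED") && !changesRequested,
		changesRequested,
		reviewerCount: latest.size,
	};
}
```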
2026-03-13 18:54:07 -04:00
459 changed files with 48499 additions and 14195 deletions
+1 -1
@@ -113,7 +113,7 @@ Coder emphasizes clear error handling, with specific patterns required:
All tests should run in parallel using `t.Parallel()` to ensure efficient testing and expose potential race conditions. The codebase is rigorously linted with golangci-lint to maintain consistent code quality.
Git contributions follow a standard format with commit messages structured as `type: <message>`, where type is one of `feat`, `fix`, or `chore`.
Git contributions follow [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/). See [CONTRIBUTING.md](docs/about/contributing/CONTRIBUTING.md#commit-messages) for full rules. PR titles are linted in CI.
## Development Workflow
+5 -14
@@ -4,22 +4,13 @@ This guide documents the PR description style used in the Coder repository, base
## PR Title Format
Follow [Conventional Commits 1.0.0](https://www.conventionalcommits.org/en/v1.0.0/) format:
Format: `type(scope): description`. See [CONTRIBUTING.md](docs/about/contributing/CONTRIBUTING.md#commit-messages) for full rules. PR titles are linted in CI.
```text
type(scope): brief description
```
- Types: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `build`, `ci`, `chore`, `revert`
- Scopes must be a real path (directory or file stem) containing all changed files
- Omit scope if changes span multiple top-level directories
**Common types:**
- `feat`: New features
- `fix`: Bug fixes
- `refactor`: Code refactoring without behavior change
- `perf`: Performance improvements
- `docs`: Documentation changes
- `chore`: Dependency updates, tooling changes
**Examples:**
Examples:
- `feat: add tracing to aibridge`
- `fix: move contexts to appropriate locations`
+5 -3
@@ -136,9 +136,11 @@ Then make your changes and push normally. Don't use `git push --force` unless th
## Commit Style
- Follow [Conventional Commits 1.0.0](https://www.conventionalcommits.org/en/v1.0.0/)
- Format: `type(scope): message`
- Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
Format: `type(scope): message`. See [CONTRIBUTING.md](docs/about/contributing/CONTRIBUTING.md#commit-messages) for full rules. PR titles are linted in CI.
- Types: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `build`, `ci`, `chore`, `revert`
- Scopes must be a real path (directory or file stem) containing all changed files
- Omit scope if changes span multiple top-level directories
- Keep message titles concise (~70 characters)
- Use imperative, present tense in commit titles
+69 -7
@@ -1198,7 +1198,7 @@ jobs:
make -j \
build/coder_linux_{amd64,arm64,armv7} \
build/coder_"$version"_windows_amd64.zip \
build/coder_"$version"_linux_amd64.{tar.gz,deb}
build/coder_"$version"_linux_{amd64,arm64,armv7}.{tar.gz,deb}
env:
# The Windows and Darwin slim binaries must be signed for Coder
# Desktop to accept them.
@@ -1216,11 +1216,28 @@ jobs:
GCLOUD_ACCESS_TOKEN: ${{ steps.gcloud_auth.outputs.access_token }}
JSIGN_PATH: /tmp/jsign-6.0.jar
# Free up disk space before building Docker images. The preceding
# Build step produces ~2 GB of binaries and packages, the Go build
# cache is ~1.3 GB, and node_modules is ~500 MB. Docker image
# builds, pushes, and SBOM generation need headroom that isn't
# available without reclaiming some of that space.
- name: Clean up build cache
run: |
set -euxo pipefail
# Go caches are no longer needed — binaries are already compiled.
go clean -cache -modcache
# Remove .apk and .rpm packages that are not uploaded as
# artifacts and were only built as make prerequisites.
rm -f ./build/*.apk ./build/*.rpm
- name: Build Linux Docker images
id: build-docker
env:
CODER_IMAGE_BASE: ghcr.io/coder/coder-preview
DOCKER_CLI_EXPERIMENTAL: "enabled"
# Skip building .deb/.rpm/.apk/.tar.gz as prerequisites for
# the Docker image targets — they were already built above.
DOCKER_IMAGE_NO_PREREQUISITES: "true"
run: |
set -euxo pipefail
@@ -1438,15 +1455,60 @@ jobs:
^v
prune-untagged: true
- name: Upload build artifacts
- name: Upload build artifact (coder-linux-amd64.tar.gz)
if: github.ref == 'refs/heads/main'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coder
path: |
./build/*.zip
./build/*.tar.gz
./build/*.deb
name: coder-linux-amd64.tar.gz
path: ./build/*_linux_amd64.tar.gz
retention-days: 7
- name: Upload build artifact (coder-linux-amd64.deb)
if: github.ref == 'refs/heads/main'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coder-linux-amd64.deb
path: ./build/*_linux_amd64.deb
retention-days: 7
- name: Upload build artifact (coder-linux-arm64.tar.gz)
if: github.ref == 'refs/heads/main'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coder-linux-arm64.tar.gz
path: ./build/*_linux_arm64.tar.gz
retention-days: 7
- name: Upload build artifact (coder-linux-arm64.deb)
if: github.ref == 'refs/heads/main'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coder-linux-arm64.deb
path: ./build/*_linux_arm64.deb
retention-days: 7
- name: Upload build artifact (coder-linux-armv7.tar.gz)
if: github.ref == 'refs/heads/main'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coder-linux-armv7.tar.gz
path: ./build/*_linux_armv7.tar.gz
retention-days: 7
- name: Upload build artifact (coder-linux-armv7.deb)
if: github.ref == 'refs/heads/main'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coder-linux-armv7.deb
path: ./build/*_linux_armv7.deb
retention-days: 7
- name: Upload build artifact (coder-windows-amd64.zip)
if: github.ref == 'refs/heads/main'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: coder-windows-amd64.zip
path: ./build/*_windows_amd64.zip
retention-days: 7
# Deploy is handled in deploy.yaml so we can apply concurrency limits.
+141
@@ -23,6 +23,44 @@ permissions:
concurrency: pr-${{ github.ref }}
jobs:
community-label:
runs-on: ubuntu-latest
permissions:
pull-requests: write
if: >-
${{
github.event_name == 'pull_request_target' &&
github.event.action == 'opened' &&
github.event.pull_request.author_association != 'MEMBER' &&
github.event.pull_request.author_association != 'COLLABORATOR' &&
github.event.pull_request.author_association != 'OWNER'
}}
steps:
- name: Add community label
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
const params = {
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
}
const labels = context.payload.pull_request.labels.map((label) => label.name)
if (labels.includes("community")) {
console.log('PR already has "community" label.')
return
}
console.log(
'Adding "community" label for author association "%s".',
context.payload.pull_request.author_association,
)
await github.rest.issues.addLabels({
...params,
labels: ["community"],
})
cla:
runs-on: ubuntu-latest
permissions:
@@ -45,6 +83,109 @@ jobs:
# Some users have signed a corporate CLA with Coder so are exempt from signing our community one.
allowlist: "coryb,aaronlehmann,dependabot*,blink-so*,blinkagent*"
title:
runs-on: ubuntu-latest
if: ${{ github.event_name == 'pull_request_target' }}
steps:
- name: Validate PR title
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
const { pull_request } = context.payload;
const title = pull_request.title;
const repo = { owner: context.repo.owner, repo: context.repo.repo };
const allowedTypes = [
"feat", "fix", "docs", "style", "refactor",
"perf", "test", "build", "ci", "chore", "revert",
];
const expectedFormat = `"type(scope): description" or "type: description"`;
const guidelinesLink = `See: https://github.com/coder/coder/blob/main/docs/about/contributing/CONTRIBUTING.md#commit-messages`;
const scopeHint = (type) =>
`Use a broader scope or no scope (e.g., "${type}: ...") for cross-cutting changes.\n` +
guidelinesLink;
console.log("Title: %s", title);
// Parse conventional commit format: type(scope)!: description
const match = title.match(/^(\w+)(\(([^)]*)\))?(!)?\s*:\s*.+/);
if (!match) {
core.setFailed(
`PR title does not match conventional commit format.\n` +
`Expected: ${expectedFormat}\n` +
`Allowed types: ${allowedTypes.join(", ")}\n` +
guidelinesLink
);
return;
}
const type = match[1];
const scope = match[3]; // undefined if no parentheses
// Validate type.
if (!allowedTypes.includes(type)) {
core.setFailed(
`PR title has invalid type "${type}".\n` +
`Expected: ${expectedFormat}\n` +
`Allowed types: ${allowedTypes.join(", ")}\n` +
guidelinesLink
);
return;
}
// If no scope, we're done.
if (!scope) {
console.log("No scope provided, title is valid.");
return;
}
console.log("Scope: %s", scope);
// Fetch changed files.
const files = await github.paginate(github.rest.pulls.listFiles, {
...repo,
pull_number: pull_request.number,
per_page: 100,
});
const changedPaths = files.map(f => f.filename);
console.log("Changed files: %d", changedPaths.length);
// Derive scope type from the changed files. The diff is the
// source of truth: if files exist under the scope, the path
// exists on the PR branch. No need for Contents API calls.
const isDir = changedPaths.some(f => f.startsWith(scope + "/"));
const isFile = changedPaths.some(f => f === scope);
const isStem = changedPaths.some(f => f.startsWith(scope + "."));
if (!isDir && !isFile && !isStem) {
core.setFailed(
`PR title scope "${scope}" does not match any files changed in this PR.\n` +
`Scopes must reference a path (directory or file stem) that contains changed files.\n` +
scopeHint(type)
);
return;
}
// Verify all changed files fall under the scope.
const outsideFiles = changedPaths.filter(f => {
if (isDir && f.startsWith(scope + "/")) return false;
if (f === scope) return false;
if (isStem && f.startsWith(scope + ".")) return false;
return true;
});
if (outsideFiles.length > 0) {
const listed = outsideFiles.map(f => " - " + f).join("\n");
core.setFailed(
`PR title scope "${scope}" does not contain all changed files.\n` +
`Files outside scope:\n${listed}\n\n` +
scopeHint(type)
);
return;
}
console.log("PR title is valid.");
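The workflow script above hinges on a single regular expression to split a PR title into its type and scope. A minimal standalone Go sketch of the same parse (the `parseTitle` helper is illustrative, not part of the workflow):

```go
package main

import (
	"fmt"
	"regexp"
)

// titleRe mirrors the workflow's pattern: type(scope)!: description.
// Group 1 is the type; group 3 is the scope (empty when no parentheses).
var titleRe = regexp.MustCompile(`^(\w+)(\(([^)]*)\))?(!)?\s*:\s*.+`)

// parseTitle returns the type and scope of a conventional-commit PR
// title, or ok=false when the title does not match the format.
func parseTitle(title string) (typ, scope string, ok bool) {
	m := titleRe.FindStringSubmatch(title)
	if m == nil {
		return "", "", false
	}
	return m[1], m[3], true
}

func main() {
	typ, scope, ok := parseTitle("fix(coderd/chatd): handle nil chat")
	fmt.Println(typ, scope, ok) // fix coderd/chatd true
	_, _, ok = parseTitle("not a conventional title")
	fmt.Println(ok) // false
}
```

The same submatch indices (`match[1]` for type, `match[3]` for scope) are what the JavaScript above reads.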
release-labels:
runs-on: ubuntu-latest
permissions:
+11 -15
@@ -61,7 +61,7 @@ jobs:
if: needs.should-deploy.outputs.verdict == 'DEPLOY'
permissions:
contents: read
id-token: write
id-token: write # to authenticate to EKS cluster
packages: write # to retag image as dogfood
steps:
- name: Harden Runner
@@ -82,27 +82,23 @@ jobs:
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@8df5847569e6427dd6c4fb1cf565c83acfa8afa7 # v6.0.0
with:
workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
role-to-assume: ${{ vars.AWS_DOGFOOD_DEPLOY_ROLE }}
aws-region: ${{ vars.AWS_DOGFOOD_DEPLOY_REGION }}
- name: Set up Google Cloud SDK
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
- name: Get Cluster Credentials
run: aws eks update-kubeconfig --name "$AWS_DOGFOOD_CLUSTER_NAME" --region "$AWS_DOGFOOD_DEPLOY_REGION"
env:
AWS_DOGFOOD_CLUSTER_NAME: ${{ vars.AWS_DOGFOOD_CLUSTER_NAME }}
AWS_DOGFOOD_DEPLOY_REGION: ${{ vars.AWS_DOGFOOD_DEPLOY_REGION }}
- name: Set up Flux CLI
uses: fluxcd/flux2/action@8454b02a32e48d775b9f563cb51fdcb1787b5b93 # v2.7.5
with:
# Keep this and the github action up to date with the version of flux installed in dogfood cluster
version: "2.7.0"
- name: Get Cluster Credentials
uses: google-github-actions/get-gke-credentials@3da1e46a907576cefaa90c484278bb5b259dd395 # v3.0.0
with:
cluster_name: dogfood-v2
location: us-central1-a
project_id: coder-dogfood-v2
version: "2.8.2"
# Retag image as dogfood while maintaining the multi-arch manifest
- name: Tag image as dogfood
+1 -3
@@ -700,11 +700,9 @@ jobs:
name: Publish to Homebrew tap
runs-on: ubuntu-latest
needs: release
if: ${{ !inputs.dry_run }}
if: ${{ !inputs.dry_run && inputs.release_channel == 'mainline' }}
steps:
# TODO: skip this if it's not a new release (i.e. a backport). This is
# fine right now because it just makes a PR that we can close.
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
+5
@@ -208,6 +208,11 @@ seems like it should use `time.Sleep`, read through https://github.com/coder/qua
- Follow [Uber Go Style Guide](https://github.com/uber-go/guide/blob/master/style.md)
- Commit format: `type(scope): message`
- PR titles follow the same `type(scope): message` format.
- When you use a scope, it must be a real filesystem path containing every
changed file.
- Use a broader path scope, or omit the scope, for cross-cutting changes.
- Example: `fix(coderd/chatd): ...` for changes only in `coderd/chatd/`.
### Frontend Patterns
+8 -10
@@ -136,18 +136,10 @@ endif
# the search path so that these exclusions match.
FIND_EXCLUSIONS= \
-not \( \( -path '*/.git/*' -o -path './build/*' -o -path './vendor/*' -o -path './.coderv2/*' -o -path '*/node_modules/*' -o -path '*/out/*' -o -path './coderd/apidoc/*' -o -path '*/.next/*' -o -path '*/.terraform/*' -o -path './_gen/*' \) -prune \)
# Source files used for make targets, evaluated on use.
GO_SRC_FILES := $(shell find . $(FIND_EXCLUSIONS) -type f -name '*.go' -not -name '*_test.go')
# Same as GO_SRC_FILES but excluding certain files that have problematic
# Makefile dependencies (e.g. pnpm).
MOST_GO_SRC_FILES := $(shell \
find . \
$(FIND_EXCLUSIONS) \
-type f \
-name '*.go' \
-not -name '*_test.go' \
-not -wholename './agent/agentcontainers/dcspec/dcspec_gen.go' \
)
# All the shell files in the repo, excluding ignored files.
SHELL_SRC_FILES := $(shell find . $(FIND_EXCLUSIONS) -type f -name '*.sh')
@@ -514,6 +506,12 @@ install: build/coder_$(VERSION)_$(GOOS)_$(GOARCH)$(GOOS_BIN_EXT)
cp "$<" "$$output_file"
.PHONY: install
# Only wildcard the go files in the develop directory to avoid rebuilds
# when project files are changed. Technically changes to some imports may
# not be detected, but it's unlikely to cause any issues.
build/.bin/develop: go.mod go.sum $(wildcard scripts/develop/*.go)
CGO_ENABLED=0 go build -o $@ ./scripts/develop
BOLD := $(shell tput bold 2>/dev/null)
GREEN := $(shell tput setaf 2 2>/dev/null)
RED := $(shell tput setaf 1 2>/dev/null)
+7 -2
@@ -385,11 +385,16 @@ func (a *agent) init() {
pathStore := agentgit.NewPathStore()
a.filesAPI = agentfiles.NewAPI(a.logger.Named("files"), a.filesystem, pathStore)
a.processAPI = agentproc.NewAPI(a.logger.Named("processes"), a.execer, a.updateCommandEnv, pathStore)
a.processAPI = agentproc.NewAPI(a.logger.Named("processes"), a.execer, a.updateCommandEnv, pathStore, func() string {
if m := a.manifest.Load(); m != nil {
return m.Directory
}
return ""
})
gitOpts := append([]agentgit.Option{agentgit.WithClock(a.clock)}, a.gitAPIOptions...)
a.gitAPI = agentgit.NewAPI(a.logger.Named("git"), pathStore, gitOpts...)
desktop := agentdesktop.NewPortableDesktop(
a.logger.Named("desktop"), a.execer, a.scriptDataDir,
a.logger.Named("desktop"), a.execer, a.scriptRunner.ScriptBinDir(),
)
a.desktopAPI = agentdesktop.NewAPI(a.logger.Named("desktop"), desktop, a.clock)
a.reconnectingPTYServer = reconnectingpty.NewServer(
+24 -169
@@ -2,13 +2,9 @@ package agentdesktop
import (
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"os"
"os/exec"
"path/filepath"
@@ -24,28 +20,6 @@ import (
"github.com/coder/coder/v2/codersdk/workspacesdk"
)
const (
portableDesktopVersion = "v0.0.4"
downloadRetries = 3
downloadRetryDelay = time.Second
)
// platformBinaries maps GOARCH to download URL and expected SHA-256
// digest for each supported platform.
var platformBinaries = map[string]struct {
URL string
SHA256 string
}{
"amd64": {
URL: "https://github.com/coder/portabledesktop/releases/download/" + portableDesktopVersion + "/portabledesktop-linux-x64",
SHA256: "a04e05e6c7d6f2e6b3acbf1729a7b21271276300b4fee321f4ffee6136538317",
},
"arm64": {
URL: "https://github.com/coder/portabledesktop/releases/download/" + portableDesktopVersion + "/portabledesktop-linux-arm64",
SHA256: "b8cb9142dc32d46a608f25229cbe8168ff2a3aadc54253c74ff54cd347e16ca6",
},
}
// portableDesktopOutput is the JSON output from
// `portabledesktop up --json`.
type portableDesktopOutput struct {
@@ -78,43 +52,31 @@ type screenshotOutput struct {
// portableDesktop implements Desktop by shelling out to the
// portabledesktop CLI via agentexec.Execer.
type portableDesktop struct {
logger slog.Logger
execer agentexec.Execer
dataDir string // agent's ScriptDataDir, used for binary caching
logger slog.Logger
execer agentexec.Execer
scriptBinDir string // coder script bin directory
mu sync.Mutex
session *desktopSession // nil until started
binPath string // resolved path to binary, cached
closed bool
// httpClient is used for downloading the binary. If nil,
// http.DefaultClient is used.
httpClient *http.Client
}
// NewPortableDesktop creates a Desktop backed by the portabledesktop
// CLI binary, using execer to spawn child processes. dataDir is used
// to cache the downloaded binary.
// CLI binary, using execer to spawn child processes. scriptBinDir is
// the coder script bin directory checked for the binary.
func NewPortableDesktop(
logger slog.Logger,
execer agentexec.Execer,
dataDir string,
scriptBinDir string,
) Desktop {
return &portableDesktop{
logger: logger,
execer: execer,
dataDir: dataDir,
logger: logger,
execer: execer,
scriptBinDir: scriptBinDir,
}
}
// httpDo returns the HTTP client to use for downloads.
func (p *portableDesktop) httpDo() *http.Client {
if p.httpClient != nil {
return p.httpClient
}
return http.DefaultClient
}
// Start launches the desktop session (idempotent).
func (p *portableDesktop) Start(ctx context.Context) (DisplayConfig, error) {
p.mu.Lock()
@@ -399,8 +361,8 @@ func (p *portableDesktop) runCmd(ctx context.Context, args ...string) (string, e
return string(out), nil
}
// ensureBinary resolves or downloads the portabledesktop binary. It
// must be called while p.mu is held.
// ensureBinary resolves the portabledesktop binary from PATH or the
// coder script bin directory. It must be called while p.mu is held.
func (p *portableDesktop) ensureBinary(ctx context.Context) error {
if p.binPath != "" {
return nil
@@ -415,130 +377,23 @@ func (p *portableDesktop) ensureBinary(ctx context.Context) error {
return nil
}
// 2. Platform checks.
if runtime.GOOS != "linux" {
return xerrors.New("portabledesktop is only supported on Linux")
}
bin, ok := platformBinaries[runtime.GOARCH]
if !ok {
return xerrors.Errorf("unsupported architecture for portabledesktop: %s", runtime.GOARCH)
}
// 3. Check cache.
cacheDir := filepath.Join(p.dataDir, "portabledesktop", bin.SHA256)
cachedPath := filepath.Join(cacheDir, "portabledesktop")
if info, err := os.Stat(cachedPath); err == nil && !info.IsDir() {
// Verify it is executable.
if info.Mode()&0o100 != 0 {
p.logger.Info(ctx, "using cached portabledesktop binary",
slog.F("path", cachedPath),
// 2. Check the coder script bin directory.
scriptBinPath := filepath.Join(p.scriptBinDir, "portabledesktop")
if info, err := os.Stat(scriptBinPath); err == nil && !info.IsDir() {
// On Windows, permission bits don't indicate executability,
// so accept any regular file.
if runtime.GOOS == "windows" || info.Mode()&0o111 != 0 {
p.logger.Info(ctx, "found portabledesktop in script bin directory",
slog.F("path", scriptBinPath),
)
p.binPath = cachedPath
p.binPath = scriptBinPath
return nil
}
}
// 4. Download with retry.
p.logger.Info(ctx, "downloading portabledesktop binary",
slog.F("url", bin.URL),
slog.F("version", portableDesktopVersion),
slog.F("arch", runtime.GOARCH),
)
var lastErr error
for attempt := range downloadRetries {
if err := downloadBinary(ctx, p.httpDo(), bin.URL, bin.SHA256, cachedPath); err != nil {
lastErr = err
p.logger.Warn(ctx, "download attempt failed",
slog.F("attempt", attempt+1),
slog.F("max_attempts", downloadRetries),
slog.Error(err),
)
if attempt < downloadRetries-1 {
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(downloadRetryDelay):
}
}
continue
}
p.binPath = cachedPath
p.logger.Info(ctx, "downloaded portabledesktop binary",
slog.F("path", cachedPath),
)
return nil
}
return xerrors.Errorf("download portabledesktop after %d attempts: %w", downloadRetries, lastErr)
}
// downloadBinary fetches a binary from url, verifies its SHA-256
// digest matches expectedSHA256, and atomically writes it to destPath.
func downloadBinary(ctx context.Context, client *http.Client, url, expectedSHA256, destPath string) error {
if err := os.MkdirAll(filepath.Dir(destPath), 0o700); err != nil {
return xerrors.Errorf("create cache directory: %w", err)
}
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
return xerrors.Errorf("create HTTP request: %w", err)
}
resp, err := client.Do(req)
if err != nil {
return xerrors.Errorf("HTTP GET %s: %w", url, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return xerrors.Errorf("HTTP GET %s: status %d", url, resp.StatusCode)
}
// Write to a temp file in the same directory so the final rename
// is atomic on the same filesystem.
tmpFile, err := os.CreateTemp(filepath.Dir(destPath), "portabledesktop-download-*")
if err != nil {
return xerrors.Errorf("create temp file: %w", err)
}
tmpPath := tmpFile.Name()
// Clean up the temp file on any error path.
success := false
defer func() {
if !success {
_ = tmpFile.Close()
_ = os.Remove(tmpPath)
}
}()
// Stream the response body while computing SHA-256.
hasher := sha256.New()
if _, err := io.Copy(tmpFile, io.TeeReader(resp.Body, hasher)); err != nil {
return xerrors.Errorf("download body: %w", err)
}
if err := tmpFile.Close(); err != nil {
return xerrors.Errorf("close temp file: %w", err)
}
// Verify digest.
actualSHA256 := hex.EncodeToString(hasher.Sum(nil))
if actualSHA256 != expectedSHA256 {
return xerrors.Errorf(
"SHA-256 mismatch: expected %s, got %s",
expectedSHA256, actualSHA256,
p.logger.Warn(ctx, "portabledesktop found in script bin directory but not executable",
slog.F("path", scriptBinPath),
slog.F("mode", info.Mode().String()),
)
}
if err := os.Chmod(tmpPath, 0o700); err != nil {
return xerrors.Errorf("chmod: %w", err)
}
if err := os.Rename(tmpPath, destPath); err != nil {
return xerrors.Errorf("rename to final path: %w", err)
}
success = true
return nil
return xerrors.New("portabledesktop binary not found in PATH or script bin directory")
}
@@ -2,11 +2,6 @@ package agentdesktop
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"net/http"
"net/http/httptest"
"os"
"os/exec"
"path/filepath"
@@ -77,7 +72,6 @@ func TestPortableDesktop_Start_ParsesOutput(t *testing.T) {
t.Parallel()
logger := slogtest.Make(t, nil)
dataDir := t.TempDir()
// The "up" script prints the JSON line then sleeps until
// the context is canceled (simulating a long-running process).
@@ -88,13 +82,13 @@ func TestPortableDesktop_Start_ParsesOutput(t *testing.T) {
}
pd := &portableDesktop{
logger: logger,
execer: rec,
dataDir: dataDir,
binPath: "portabledesktop", // pre-set so ensureBinary is a no-op
logger: logger,
execer: rec,
scriptBinDir: t.TempDir(),
binPath: "portabledesktop", // pre-set so ensureBinary is a no-op
}
ctx := context.Background()
ctx := t.Context()
cfg, err := pd.Start(ctx)
require.NoError(t, err)
@@ -111,7 +105,6 @@ func TestPortableDesktop_Start_Idempotent(t *testing.T) {
t.Parallel()
logger := slogtest.Make(t, nil)
dataDir := t.TempDir()
rec := &recordedExecer{
scripts: map[string]string{
@@ -120,13 +113,13 @@ func TestPortableDesktop_Start_Idempotent(t *testing.T) {
}
pd := &portableDesktop{
logger: logger,
execer: rec,
dataDir: dataDir,
binPath: "portabledesktop",
logger: logger,
execer: rec,
scriptBinDir: t.TempDir(),
binPath: "portabledesktop",
}
ctx := context.Background()
ctx := t.Context()
cfg1, err := pd.Start(ctx)
require.NoError(t, err)
@@ -154,7 +147,6 @@ func TestPortableDesktop_Screenshot(t *testing.T) {
t.Parallel()
logger := slogtest.Make(t, nil)
dataDir := t.TempDir()
rec := &recordedExecer{
scripts: map[string]string{
@@ -163,13 +155,13 @@ func TestPortableDesktop_Screenshot(t *testing.T) {
}
pd := &portableDesktop{
logger: logger,
execer: rec,
dataDir: dataDir,
binPath: "portabledesktop",
logger: logger,
execer: rec,
scriptBinDir: t.TempDir(),
binPath: "portabledesktop",
}
ctx := context.Background()
ctx := t.Context()
result, err := pd.Screenshot(ctx, ScreenshotOptions{})
require.NoError(t, err)
@@ -180,7 +172,6 @@ func TestPortableDesktop_Screenshot_WithTargetDimensions(t *testing.T) {
t.Parallel()
logger := slogtest.Make(t, nil)
dataDir := t.TempDir()
rec := &recordedExecer{
scripts: map[string]string{
@@ -189,13 +180,13 @@ func TestPortableDesktop_Screenshot_WithTargetDimensions(t *testing.T) {
}
pd := &portableDesktop{
logger: logger,
execer: rec,
dataDir: dataDir,
binPath: "portabledesktop",
logger: logger,
execer: rec,
scriptBinDir: t.TempDir(),
binPath: "portabledesktop",
}
ctx := context.Background()
ctx := t.Context()
_, err := pd.Screenshot(ctx, ScreenshotOptions{
TargetWidth: 800,
TargetHeight: 600,
@@ -287,13 +278,13 @@ func TestPortableDesktop_MouseMethods(t *testing.T) {
}
pd := &portableDesktop{
logger: logger,
execer: rec,
dataDir: t.TempDir(),
binPath: "portabledesktop",
logger: logger,
execer: rec,
scriptBinDir: t.TempDir(),
binPath: "portabledesktop",
}
err := tt.invoke(context.Background(), pd)
err := tt.invoke(t.Context(), pd)
require.NoError(t, err)
cmds := rec.allCommands()
@@ -372,13 +363,13 @@ func TestPortableDesktop_KeyboardMethods(t *testing.T) {
}
pd := &portableDesktop{
logger: logger,
execer: rec,
dataDir: t.TempDir(),
binPath: "portabledesktop",
logger: logger,
execer: rec,
scriptBinDir: t.TempDir(),
binPath: "portabledesktop",
}
err := tt.invoke(context.Background(), pd)
err := tt.invoke(t.Context(), pd)
require.NoError(t, err)
cmds := rec.allCommands()
@@ -404,13 +395,13 @@ func TestPortableDesktop_CursorPosition(t *testing.T) {
}
pd := &portableDesktop{
logger: logger,
execer: rec,
dataDir: t.TempDir(),
binPath: "portabledesktop",
logger: logger,
execer: rec,
scriptBinDir: t.TempDir(),
binPath: "portabledesktop",
}
x, y, err := pd.CursorPosition(context.Background())
x, y, err := pd.CursorPosition(t.Context())
require.NoError(t, err)
assert.Equal(t, 100, x)
assert.Equal(t, 200, y)
@@ -428,13 +419,13 @@ func TestPortableDesktop_Close(t *testing.T) {
}
pd := &portableDesktop{
logger: logger,
execer: rec,
dataDir: t.TempDir(),
binPath: "portabledesktop",
logger: logger,
execer: rec,
scriptBinDir: t.TempDir(),
binPath: "portabledesktop",
}
ctx := context.Background()
ctx := t.Context()
_, err := pd.Start(ctx)
require.NoError(t, err)
@@ -457,81 +448,6 @@ func TestPortableDesktop_Close(t *testing.T) {
assert.Contains(t, err.Error(), "desktop is closed")
}
// --- downloadBinary tests ---
func TestDownloadBinary_Success(t *testing.T) {
t.Parallel()
binaryContent := []byte("#!/bin/sh\necho portable\n")
hash := sha256.Sum256(binaryContent)
expectedSHA := hex.EncodeToString(hash[:])
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write(binaryContent)
}))
defer srv.Close()
destDir := t.TempDir()
destPath := filepath.Join(destDir, "portabledesktop")
err := downloadBinary(context.Background(), srv.Client(), srv.URL, expectedSHA, destPath)
require.NoError(t, err)
// Verify the file exists and has correct content.
got, err := os.ReadFile(destPath)
require.NoError(t, err)
assert.Equal(t, binaryContent, got)
// Verify executable permissions.
info, err := os.Stat(destPath)
require.NoError(t, err)
assert.NotZero(t, info.Mode()&0o700, "binary should be executable")
}
func TestDownloadBinary_ChecksumMismatch(t *testing.T) {
t.Parallel()
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("real binary content"))
}))
defer srv.Close()
destDir := t.TempDir()
destPath := filepath.Join(destDir, "portabledesktop")
wrongSHA := "0000000000000000000000000000000000000000000000000000000000000000"
err := downloadBinary(context.Background(), srv.Client(), srv.URL, wrongSHA, destPath)
require.Error(t, err)
assert.Contains(t, err.Error(), "SHA-256 mismatch")
// The destination file should not exist (temp file cleaned up).
_, statErr := os.Stat(destPath)
assert.True(t, os.IsNotExist(statErr), "dest file should not exist after checksum failure")
// No leftover temp files in the directory.
entries, err := os.ReadDir(destDir)
require.NoError(t, err)
assert.Empty(t, entries, "no leftover temp files should remain")
}
func TestDownloadBinary_HTTPError(t *testing.T) {
t.Parallel()
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(http.StatusNotFound)
}))
defer srv.Close()
destDir := t.TempDir()
destPath := filepath.Join(destDir, "portabledesktop")
err := downloadBinary(context.Background(), srv.Client(), srv.URL, "irrelevant", destPath)
require.Error(t, err)
assert.Contains(t, err.Error(), "status 404")
}
// --- ensureBinary tests ---
func TestEnsureBinary_UsesCachedBinPath(t *testing.T) {
@@ -541,173 +457,89 @@ func TestEnsureBinary_UsesCachedBinPath(t *testing.T) {
// immediately without doing any work.
logger := slogtest.Make(t, nil)
pd := &portableDesktop{
logger: logger,
execer: agentexec.DefaultExecer,
dataDir: t.TempDir(),
binPath: "/already/set",
logger: logger,
execer: agentexec.DefaultExecer,
scriptBinDir: t.TempDir(),
binPath: "/already/set",
}
err := pd.ensureBinary(context.Background())
err := pd.ensureBinary(t.Context())
require.NoError(t, err)
assert.Equal(t, "/already/set", pd.binPath)
}
func TestEnsureBinary_UsesCachedBinary(t *testing.T) {
func TestEnsureBinary_UsesScriptBinDir(t *testing.T) {
// Cannot use t.Parallel because t.Setenv modifies the process
// environment.
if runtime.GOOS != "linux" {
t.Skip("portabledesktop is only supported on Linux")
}
bin, ok := platformBinaries[runtime.GOARCH]
if !ok {
t.Skipf("no platformBinary entry for %s", runtime.GOARCH)
}
dataDir := t.TempDir()
cacheDir := filepath.Join(dataDir, "portabledesktop", bin.SHA256)
require.NoError(t, os.MkdirAll(cacheDir, 0o700))
cachedPath := filepath.Join(cacheDir, "portabledesktop")
require.NoError(t, os.WriteFile(cachedPath, []byte("#!/bin/sh\n"), 0o600))
scriptBinDir := t.TempDir()
binPath := filepath.Join(scriptBinDir, "portabledesktop")
require.NoError(t, os.WriteFile(binPath, []byte("#!/bin/sh\n"), 0o600))
require.NoError(t, os.Chmod(binPath, 0o755))
logger := slogtest.Make(t, nil)
pd := &portableDesktop{
logger: logger,
execer: agentexec.DefaultExecer,
dataDir: dataDir,
logger: logger,
execer: agentexec.DefaultExecer,
scriptBinDir: scriptBinDir,
}
// Clear PATH so LookPath won't find a real binary.
t.Setenv("PATH", "")
err := pd.ensureBinary(context.Background())
err := pd.ensureBinary(t.Context())
require.NoError(t, err)
assert.Equal(t, cachedPath, pd.binPath)
assert.Equal(t, binPath, pd.binPath)
}
func TestEnsureBinary_Downloads(t *testing.T) {
func TestEnsureBinary_ScriptBinDirNotExecutable(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("Windows does not support Unix permission bits")
}
// Cannot use t.Parallel because t.Setenv modifies the process
// environment and we override the package-level platformBinaries.
if runtime.GOOS != "linux" {
t.Skip("portabledesktop is only supported on Linux")
}
// environment.
binaryContent := []byte("#!/bin/sh\necho downloaded\n")
hash := sha256.Sum256(binaryContent)
expectedSHA := hex.EncodeToString(hash[:])
scriptBinDir := t.TempDir()
binPath := filepath.Join(scriptBinDir, "portabledesktop")
// Write without execute permission.
require.NoError(t, os.WriteFile(binPath, []byte("#!/bin/sh\n"), 0o600))
_ = binPath
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write(binaryContent)
}))
defer srv.Close()
// Save and restore platformBinaries for this test.
origBinaries := platformBinaries
platformBinaries = map[string]struct {
URL string
SHA256 string
}{
runtime.GOARCH: {
URL: srv.URL + "/portabledesktop",
SHA256: expectedSHA,
},
}
t.Cleanup(func() { platformBinaries = origBinaries })
dataDir := t.TempDir()
logger := slogtest.Make(t, nil)
pd := &portableDesktop{
logger: logger,
execer: agentexec.DefaultExecer,
dataDir: dataDir,
httpClient: srv.Client(),
logger: logger,
execer: agentexec.DefaultExecer,
scriptBinDir: scriptBinDir,
}
// Ensure PATH doesn't contain a real portabledesktop binary.
// Clear PATH so LookPath won't find a real binary.
t.Setenv("PATH", "")
err := pd.ensureBinary(context.Background())
require.NoError(t, err)
expectedPath := filepath.Join(dataDir, "portabledesktop", expectedSHA, "portabledesktop")
assert.Equal(t, expectedPath, pd.binPath)
// Verify the downloaded file has correct content.
got, err := os.ReadFile(expectedPath)
require.NoError(t, err)
assert.Equal(t, binaryContent, got)
err := pd.ensureBinary(t.Context())
require.Error(t, err)
assert.Contains(t, err.Error(), "not found")
}
func TestEnsureBinary_RetriesOnFailure(t *testing.T) {
t.Parallel()
func TestEnsureBinary_NotFound(t *testing.T) {
// Cannot use t.Parallel because t.Setenv modifies the process
// environment.
if runtime.GOOS != "linux" {
t.Skip("portabledesktop is only supported on Linux")
logger := slogtest.Make(t, nil)
pd := &portableDesktop{
logger: logger,
execer: agentexec.DefaultExecer,
scriptBinDir: t.TempDir(), // empty directory
}
binaryContent := []byte("#!/bin/sh\necho retried\n")
hash := sha256.Sum256(binaryContent)
expectedSHA := hex.EncodeToString(hash[:])
// Clear PATH so LookPath won't find a real binary.
t.Setenv("PATH", "")
var mu sync.Mutex
attempt := 0
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
mu.Lock()
current := attempt
attempt++
mu.Unlock()
// Fail the first 2 attempts, succeed on the third.
if current < 2 {
w.WriteHeader(http.StatusServiceUnavailable)
return
}
w.WriteHeader(http.StatusOK)
_, _ = w.Write(binaryContent)
}))
defer srv.Close()
// Test downloadBinary directly to avoid time.Sleep in
// ensureBinary's retry loop. We call it 3 times to simulate
// what ensureBinary would do.
destDir := t.TempDir()
destPath := filepath.Join(destDir, "portabledesktop")
var lastErr error
for i := range 3 {
lastErr = downloadBinary(context.Background(), srv.Client(), srv.URL, expectedSHA, destPath)
if lastErr == nil {
break
}
if i < 2 {
// In the real code, ensureBinary sleeps here.
// We skip the sleep in tests.
continue
}
}
require.NoError(t, lastErr, "download should succeed on the third attempt")
got, err := os.ReadFile(destPath)
require.NoError(t, err)
assert.Equal(t, binaryContent, got)
mu.Lock()
assert.Equal(t, 3, attempt, "server should have been hit 3 times")
mu.Unlock()
err := pd.ensureBinary(t.Context())
require.Error(t, err)
assert.Contains(t, err.Error(), "not found")
}
// Ensure that portableDesktop satisfies the Desktop interface at
// compile time. This uses the unexported type so it lives in the
// internal test package.
var _ Desktop = (*portableDesktop)(nil)
// Silence the linter about unused imports — agentexec.DefaultExecer
// is used in TestEnsureBinary_UsesCachedBinPath and others, and
// fmt.Sscanf is used indirectly via the implementation.
var (
_ = agentexec.DefaultExecer
_ = fmt.Sprintf
)
+89 -38
@@ -447,13 +447,10 @@ func (api *API) editFile(ctx context.Context, path string, edits []workspacesdk.
content := string(data)
for _, edit := range edits {
var ok bool
content, ok = fuzzyReplace(content, edit.Search, edit.Replace)
if !ok {
api.logger.Warn(ctx, "edit search string not found, skipping",
slog.F("path", path),
slog.F("search_preview", truncate(edit.Search, 64)),
)
var err error
content, err = fuzzyReplace(content, edit)
if err != nil {
return http.StatusBadRequest, xerrors.Errorf("edit %s: %w", path, err)
}
}
@@ -480,51 +477,92 @@ func (api *API) editFile(ctx context.Context, path string, edits []workspacesdk.
return 0, nil
}
// fuzzyReplace attempts to find `search` inside `content` and replace its first
// occurrence with `replace`. It uses a cascading match strategy inspired by
// fuzzyReplace attempts to find `search` inside `content` and replace it
// with `replace`. It uses a cascading match strategy inspired by
// openai/codex's apply_patch:
//
// 1. Exact substring match (byte-for-byte).
// 2. Line-by-line match ignoring trailing whitespace on each line.
// 3. Line-by-line match ignoring all leading/trailing whitespace (indentation-tolerant).
// 3. Line-by-line match ignoring all leading/trailing whitespace
// (indentation-tolerant).
//
// When a fuzzy match is found (passes 2 or 3), the replacement is still applied
// at the byte offsets of the original content so that surrounding text (including
// indentation of untouched lines) is preserved.
// When edit.ReplaceAll is false (the default), the search string must
// match exactly one location. If multiple matches are found, an error
// is returned asking the caller to include more context or set
// replace_all.
//
// Returns the (possibly modified) content and a bool indicating whether a match
// was found.
func fuzzyReplace(content, search, replace string) (string, bool) {
// Pass 1 exact substring (replace all occurrences).
// When a fuzzy match is found (passes 2 or 3), the replacement is still
// applied at the byte offsets of the original content so that surrounding
// text (including indentation of untouched lines) is preserved.
func fuzzyReplace(content string, edit workspacesdk.FileEdit) (string, error) {
search := edit.Search
replace := edit.Replace
// Pass 1 exact substring match.
if strings.Contains(content, search) {
return strings.ReplaceAll(content, search, replace), true
if edit.ReplaceAll {
return strings.ReplaceAll(content, search, replace), nil
}
count := strings.Count(content, search)
if count > 1 {
return "", xerrors.Errorf("search string matches %d occurrences "+
"(expected exactly 1). Include more surrounding "+
"context to make the match unique, or set "+
"replace_all to true", count)
}
// Exactly one match.
return strings.Replace(content, search, replace, 1), nil
}
// For line-level fuzzy matching we split both content and search into lines.
// For line-level fuzzy matching we split both content and search
// into lines.
contentLines := strings.SplitAfter(content, "\n")
searchLines := strings.SplitAfter(search, "\n")
// A trailing newline in the search produces an empty final element from
// SplitAfter. Drop it so it doesn't interfere with line matching.
// A trailing newline in the search produces an empty final element
// from SplitAfter. Drop it so it doesn't interfere with line
// matching.
if len(searchLines) > 0 && searchLines[len(searchLines)-1] == "" {
searchLines = searchLines[:len(searchLines)-1]
}
// Pass 2 trim trailing whitespace on each line.
if start, end, ok := seekLines(contentLines, searchLines, func(a, b string) bool {
trimRight := func(a, b string) bool {
return strings.TrimRight(a, " \t\r\n") == strings.TrimRight(b, " \t\r\n")
}); ok {
return spliceLines(contentLines, start, end, replace), true
}
// Pass 3 trim all leading and trailing whitespace (indentation-tolerant).
if start, end, ok := seekLines(contentLines, searchLines, func(a, b string) bool {
trimAll := func(a, b string) bool {
return strings.TrimSpace(a) == strings.TrimSpace(b)
}); ok {
return spliceLines(contentLines, start, end, replace), true
}
return content, false
// Pass 2 trim trailing whitespace on each line.
if start, end, ok := seekLines(contentLines, searchLines, trimRight); ok {
if !edit.ReplaceAll {
if count := countLineMatches(contentLines, searchLines, trimRight); count > 1 {
return "", xerrors.Errorf("search string matches %d occurrences "+
"(expected exactly 1). Include more surrounding "+
"context to make the match unique, or set "+
"replace_all to true", count)
}
}
return spliceLines(contentLines, start, end, replace), nil
}
// Pass 3 trim all leading and trailing whitespace
// (indentation-tolerant).
if start, end, ok := seekLines(contentLines, searchLines, trimAll); ok {
if !edit.ReplaceAll {
if count := countLineMatches(contentLines, searchLines, trimAll); count > 1 {
return "", xerrors.Errorf("search string matches %d occurrences "+
"(expected exactly 1). Include more surrounding "+
"context to make the match unique, or set "+
"replace_all to true", count)
}
}
return spliceLines(contentLines, start, end, replace), nil
}
return "", xerrors.New("search string not found in file. Verify the search " +
"string matches the file content exactly, including whitespace " +
"and indentation")
}
// seekLines scans contentLines looking for a contiguous subsequence that matches
@@ -549,6 +587,26 @@ outer:
return 0, 0, false
}
// countLineMatches counts how many non-overlapping contiguous
// subsequences of contentLines match searchLines according to eq.
func countLineMatches(contentLines, searchLines []string, eq func(a, b string) bool) int {
count := 0
if len(searchLines) == 0 || len(searchLines) > len(contentLines) {
return count
}
outer:
for i := 0; i <= len(contentLines)-len(searchLines); i++ {
for j, sLine := range searchLines {
if !eq(contentLines[i+j], sLine) {
continue outer
}
}
count++
i += len(searchLines) - 1 // skip past this match
}
return count
}
// spliceLines replaces contentLines[start:end] with replacement text, returning
// the full content as a single string.
func spliceLines(contentLines []string, start, end int, replacement string) string {
@@ -562,10 +620,3 @@ func spliceLines(contentLines []string, start, end int, replacement string) stri
}
return b.String()
}
func truncate(s string, n int) string {
if len(s) <= n {
return s
}
return s[:n] + "..."
}
+68 -3
@@ -576,7 +576,9 @@ func TestEditFiles(t *testing.T) {
expected: map[string]string{filepath.Join(tmpdir, "edit1"): "bar bar"},
},
{
// When the second edit creates ambiguity (two "bar"
// occurrences), it should fail.
name: "EditEditAmbiguous",
contents: map[string]string{filepath.Join(tmpdir, "edit-edit"): "foo bar"},
edits: []workspacesdk.FileEdits{
{
@@ -593,7 +595,33 @@ func TestEditFiles(t *testing.T) {
},
},
},
errCode: http.StatusBadRequest,
errors: []string{"matches 2 occurrences"},
// File should not be modified on error.
expected: map[string]string{filepath.Join(tmpdir, "edit-edit"): "foo bar"},
},
{
// With replace_all the cascading edit replaces
// both occurrences.
name: "EditEditReplaceAll",
contents: map[string]string{filepath.Join(tmpdir, "edit-edit-ra"): "foo bar"},
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "edit-edit-ra"),
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
{
Search: "bar",
Replace: "qux",
ReplaceAll: true,
},
},
},
},
expected: map[string]string{filepath.Join(tmpdir, "edit-edit-ra"): "qux qux"},
},
{
name: "Multiline",
@@ -720,7 +748,7 @@ func TestEditFiles(t *testing.T) {
expected: map[string]string{filepath.Join(tmpdir, "exact-preferred"): "goodbye world"},
},
{
name: "NoMatchErrors",
contents: map[string]string{filepath.Join(tmpdir, "no-match"): "original content"},
edits: []workspacesdk.FileEdits{
{
@@ -733,9 +761,46 @@ func TestEditFiles(t *testing.T) {
},
},
},
errCode: http.StatusBadRequest,
errors: []string{"search string not found in file"},
// File should remain unchanged.
expected: map[string]string{filepath.Join(tmpdir, "no-match"): "original content"},
},
{
name: "AmbiguousExactMatch",
contents: map[string]string{filepath.Join(tmpdir, "ambig-exact"): "foo bar foo baz foo"},
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "ambig-exact"),
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "qux",
},
},
},
},
errCode: http.StatusBadRequest,
errors: []string{"matches 3 occurrences"},
expected: map[string]string{filepath.Join(tmpdir, "ambig-exact"): "foo bar foo baz foo"},
},
{
name: "ReplaceAllExact",
contents: map[string]string{filepath.Join(tmpdir, "ra-exact"): "foo bar foo baz foo"},
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "ra-exact"),
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "qux",
ReplaceAll: true,
},
},
},
},
expected: map[string]string{filepath.Join(tmpdir, "ra-exact"): "qux bar qux baz qux"},
},
{
name: "MixedWhitespaceMultiline",
contents: map[string]string{filepath.Join(tmpdir, "mixed-ws"): "func main() {\n\tresult := compute()\n\tfmt.Println(result)\n}"},
+2 -2
@@ -26,10 +26,10 @@ type API struct {
}
// NewAPI creates a new process API handler.
func NewAPI(logger slog.Logger, execer agentexec.Execer, updateEnv func(current []string) (updated []string, err error), pathStore *agentgit.PathStore, workingDir func() string) *API {
return &API{
logger: logger,
manager: newManager(logger, execer, updateEnv, workingDir),
pathStore: pathStore,
}
}
+105 -3
@@ -7,6 +7,7 @@ import (
"fmt"
"net/http"
"net/http/httptest"
"os"
"runtime"
"strings"
"testing"
@@ -97,18 +98,25 @@ func postSignal(t *testing.T, handler http.Handler, id string, req workspacesdk.
// execer, returning the handler and API.
func newTestAPI(t *testing.T) http.Handler {
t.Helper()
return newTestAPIWithOptions(t, nil, nil)
}
// newTestAPIWithUpdateEnv creates a new API with an optional
// updateEnv hook for testing environment injection.
func newTestAPIWithUpdateEnv(t *testing.T, updateEnv func([]string) ([]string, error)) http.Handler {
t.Helper()
return newTestAPIWithOptions(t, updateEnv, nil)
}
// newTestAPIWithOptions creates a new API with optional
// updateEnv and workingDir hooks.
func newTestAPIWithOptions(t *testing.T, updateEnv func([]string) ([]string, error), workingDir func() string) http.Handler {
t.Helper()
logger := slogtest.Make(t, &slogtest.Options{
IgnoreErrors: true,
}).Leveled(slog.LevelDebug)
api := agentproc.NewAPI(logger, agentexec.DefaultExecer, updateEnv, nil, workingDir)
t.Cleanup(func() {
_ = api.Close()
})
@@ -253,6 +261,100 @@ func TestStartProcess(t *testing.T) {
require.Contains(t, resp.Output, "marker.txt")
})
t.Run("DefaultWorkDirIsHome", func(t *testing.T) {
t.Parallel()
// No working directory closure, so the process
// should fall back to $HOME. We verify through
// the process list API which reports the resolved
// working directory using native OS paths,
// avoiding shell path format mismatches on
// Windows (Git Bash returns POSIX paths).
handler := newTestAPI(t)
homeDir, err := os.UserHomeDir()
require.NoError(t, err)
id := startAndGetID(t, handler, workspacesdk.StartProcessRequest{
Command: "echo ok",
})
resp := waitForExit(t, handler, id)
require.NotNil(t, resp.ExitCode)
require.Equal(t, 0, *resp.ExitCode)
w := getList(t, handler)
require.Equal(t, http.StatusOK, w.Code)
var listResp workspacesdk.ListProcessesResponse
require.NoError(t, json.NewDecoder(w.Body).Decode(&listResp))
var proc *workspacesdk.ProcessInfo
for i := range listResp.Processes {
if listResp.Processes[i].ID == id {
proc = &listResp.Processes[i]
break
}
}
require.NotNil(t, proc, "process not found in list")
require.Equal(t, homeDir, proc.WorkDir)
})
t.Run("DefaultWorkDirFromClosure", func(t *testing.T) {
t.Parallel()
// The closure provides a valid directory, so the
// process should start there. Use the marker file
// pattern to avoid path format mismatches on
// Windows.
tmpDir := t.TempDir()
handler := newTestAPIWithOptions(t, nil, func() string {
return tmpDir
})
id := startAndGetID(t, handler, workspacesdk.StartProcessRequest{
Command: "touch marker.txt && ls marker.txt",
})
resp := waitForExit(t, handler, id)
require.NotNil(t, resp.ExitCode)
require.Equal(t, 0, *resp.ExitCode)
require.Contains(t, resp.Output, "marker.txt")
})
t.Run("DefaultWorkDirClosureNonExistentFallsBackToHome", func(t *testing.T) {
t.Parallel()
// The closure returns a path that doesn't exist,
// so the process should fall back to $HOME.
handler := newTestAPIWithOptions(t, nil, func() string {
return "/tmp/nonexistent-dir-" + fmt.Sprintf("%d", time.Now().UnixNano())
})
homeDir, err := os.UserHomeDir()
require.NoError(t, err)
id := startAndGetID(t, handler, workspacesdk.StartProcessRequest{
Command: "echo ok",
})
resp := waitForExit(t, handler, id)
require.NotNil(t, resp.ExitCode)
require.Equal(t, 0, *resp.ExitCode)
w := getList(t, handler)
require.Equal(t, http.StatusOK, w.Code)
var listResp workspacesdk.ListProcessesResponse
require.NoError(t, json.NewDecoder(w.Body).Decode(&listResp))
var proc *workspacesdk.ProcessInfo
for i := range listResp.Processes {
if listResp.Processes[i].ID == id {
proc = &listResp.Processes[i]
break
}
}
require.NotNil(t, proc, "process not found in list")
require.Equal(t, homeDir, proc.WorkDir)
})
t.Run("CustomEnv", func(t *testing.T) {
t.Parallel()
@@ -781,7 +883,7 @@ func TestHandleStartProcess_ChatHeaders_EmptyWorkDir_StillNotifies(t *testing.T)
logger := slogtest.Make(t, nil).Leveled(slog.LevelDebug)
api := agentproc.NewAPI(logger, agentexec.DefaultExecer, func(current []string) ([]string, error) {
return current, nil
}, pathStore, nil)
defer api.Close()
routes := api.Routes()
+26
@@ -0,0 +1,26 @@
//go:build !windows
package agentproc
import (
"os"
"syscall"
)
// procSysProcAttr returns the SysProcAttr to use when spawning
// processes. On Unix, Setpgid creates a new process group so
// that signals can be delivered to the entire group (the shell
// and all its children).
func procSysProcAttr() *syscall.SysProcAttr {
return &syscall.SysProcAttr{
Setpgid: true,
}
}
// signalProcess sends a signal to the process group rooted at p.
// Using the negative PID sends the signal to every process in the
// group, ensuring child processes (e.g. from shell pipelines) are
// also signaled.
func signalProcess(p *os.Process, sig syscall.Signal) error {
return syscall.Kill(-p.Pid, sig)
}
+20
@@ -0,0 +1,20 @@
package agentproc
import (
"os"
"syscall"
)
// procSysProcAttr returns the SysProcAttr to use when spawning
// processes. On Windows, process groups are not supported in the
// same way as Unix, so this returns an empty struct.
func procSysProcAttr() *syscall.SysProcAttr {
return &syscall.SysProcAttr{}
}
// signalProcess sends a signal directly to the process. Windows
// does not support process group signaling, so we fall back to
// sending the signal to the process itself.
func signalProcess(p *os.Process, _ syscall.Signal) error {
return p.Kill()
}
+45 -21
@@ -70,23 +70,25 @@ func (p *process) output() (string, *workspacesdk.ProcessTruncation) {
// manager tracks processes spawned by the agent.
type manager struct {
mu sync.Mutex
logger slog.Logger
execer agentexec.Execer
clock quartz.Clock
procs map[string]*process
closed bool
updateEnv func(current []string) (updated []string, err error)
workingDir func() string
}
// newManager creates a new process manager.
func newManager(logger slog.Logger, execer agentexec.Execer, updateEnv func(current []string) (updated []string, err error), workingDir func() string) *manager {
return &manager{
logger: logger,
execer: execer,
clock: quartz.NewReal(),
procs: make(map[string]*process),
updateEnv: updateEnv,
workingDir: workingDir,
}
}
@@ -109,10 +111,9 @@ func (m *manager) start(req workspacesdk.StartProcessRequest, chatID string) (*p
// the process is not tied to any HTTP request.
ctx, cancel := context.WithCancel(context.Background())
cmd := m.execer.CommandContext(ctx, "sh", "-c", req.Command)
cmd.Dir = m.resolveWorkDir(req.WorkDir)
cmd.Stdin = nil
cmd.SysProcAttr = procSysProcAttr()
// WaitDelay ensures cmd.Wait returns promptly after
// the process is killed, even if child processes are
@@ -157,7 +158,7 @@ func (m *manager) start(req workspacesdk.StartProcessRequest, chatID string) (*p
proc := &process{
id: id,
command: req.Command,
workDir: cmd.Dir,
background: req.Background,
chatID: chatID,
cmd: cmd,
@@ -272,13 +273,15 @@ func (m *manager) signal(id string, sig string) error {
switch sig {
case "kill":
// Use process group kill to ensure child processes
// (e.g. from shell pipelines) are also killed.
if err := signalProcess(proc.cmd.Process, syscall.SIGKILL); err != nil {
return xerrors.Errorf("kill process: %w", err)
}
case "terminate":
// Use process group signal to ensure child processes
// are also terminated.
if err := signalProcess(proc.cmd.Process, syscall.SIGTERM); err != nil {
return xerrors.Errorf("terminate process: %w", err)
}
default:
@@ -316,3 +319,24 @@ func (m *manager) Close() error {
return nil
}
// resolveWorkDir returns the directory a process should start in.
// Priority: explicit request dir > agent configured dir > $HOME.
// Falls through when a candidate is empty or does not exist on
// disk, matching the behavior of SSH sessions.
func (m *manager) resolveWorkDir(requested string) string {
if requested != "" {
return requested
}
if m.workingDir != nil {
if dir := m.workingDir(); dir != "" {
if info, err := os.Stat(dir); err == nil && info.IsDir() {
return dir
}
}
}
if home, err := os.UserHomeDir(); err == nil {
return home
}
return ""
}
+6 -27
@@ -6,7 +6,6 @@ import (
"context"
"net"
"path/filepath"
"sync"
"testing"
"github.com/google/uuid"
@@ -23,26 +22,6 @@ import (
"github.com/coder/coder/v2/testutil"
)
// logSink captures structured log entries for testing.
type logSink struct {
mu sync.Mutex
entries []slog.SinkEntry
}
func (s *logSink) LogEntry(_ context.Context, e slog.SinkEntry) {
s.mu.Lock()
defer s.mu.Unlock()
s.entries = append(s.entries, e)
}
func (*logSink) Sync() {}
func (s *logSink) getEntries() []slog.SinkEntry {
s.mu.Lock()
defer s.mu.Unlock()
return append([]slog.SinkEntry{}, s.entries...)
}
// getField returns the value of a field by name from a slog.Map.
func getField(fields slog.Map, name string) interface{} {
for _, f := range fields {
@@ -76,8 +55,8 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
require.NoError(t, err)
t.Cleanup(func() { require.NoError(t, srv.Close()) })
sink := testutil.NewFakeSink(t)
logger := sink.Logger(slog.LevelInfo)
workspaceID := uuid.New()
templateID := uuid.New()
templateVersionID := uuid.New()
@@ -118,10 +97,10 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
sendBoundaryLogsRequest(t, conn, req)
require.Eventually(t, func() bool {
return len(sink.Entries()) >= 1
}, testutil.WaitShort, testutil.IntervalFast)
entries := sink.Entries()
require.Len(t, entries, 1)
entry := entries[0]
require.Equal(t, slog.LevelInfo, entry.Level)
@@ -152,10 +131,10 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
sendBoundaryLogsRequest(t, conn, req2)
require.Eventually(t, func() bool {
return len(sink.Entries()) >= 2
}, testutil.WaitShort, testutil.IntervalFast)
entries = sink.Entries()
entry = entries[1]
require.Len(t, entries, 2)
require.Equal(t, slog.LevelInfo, entry.Level)
+9
@@ -78,6 +78,9 @@ func withDone(t *testing.T) []reaper.Option {
// processes and passes their PIDs through the shared channel.
func TestReap(t *testing.T) {
t.Parallel()
if testutil.InCI() {
t.Skip("Detected CI, skipping reaper tests")
}
if !runSubprocess(t) {
return
}
@@ -124,6 +127,9 @@ func TestReap(t *testing.T) {
//nolint:tparallel // Subtests must be sequential, each starts its own reaper.
func TestForkReapExitCodes(t *testing.T) {
t.Parallel()
if testutil.InCI() {
t.Skip("Detected CI, skipping reaper tests")
}
if !runSubprocess(t) {
return
}
@@ -164,6 +170,9 @@ func TestForkReapExitCodes(t *testing.T) {
// ensures SIGINT cannot kill the parent test binary.
func TestReapInterrupt(t *testing.T) {
t.Parallel()
if testutil.InCI() {
t.Skip("Detected CI, skipping reaper tests")
}
if !runSubprocess(t) {
return
}
+15
@@ -46,6 +46,7 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
autoUpdates string
copyParametersFrom string
useParameterDefaults bool
noWait bool
// Organization context is only required if more than 1 template
// shares the same name across multiple organizations.
orgContext = NewOrganizationContext()
@@ -372,6 +373,14 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
cliutil.WarnMatchedProvisioners(inv.Stderr, workspace.LatestBuild.MatchedProvisioners, workspace.LatestBuild.Job)
if noWait {
_, _ = fmt.Fprintf(inv.Stdout,
"\nThe %s workspace has been created and is building in the background.\n",
cliui.Keyword(workspace.Name),
)
return nil
}
err = cliui.WorkspaceBuild(inv.Context(), inv.Stdout, client, workspace.LatestBuild.ID)
if err != nil {
return xerrors.Errorf("watch build: %w", err)
@@ -445,6 +454,12 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
Description: "Automatically accept parameter defaults when no value is provided.",
Value: serpent.BoolOf(&useParameterDefaults),
},
serpent.Option{
Flag: "no-wait",
Env: "CODER_CREATE_NO_WAIT",
Description: "Return immediately after creating the workspace. The build will run in the background.",
Value: serpent.BoolOf(&noWait),
},
cliui.SkipPromptOption(),
)
cmd.Options = append(cmd.Options, parameterFlags.cliParameters()...)
+75
@@ -603,6 +603,81 @@ func TestCreate(t *testing.T) {
assert.Nil(t, ws.AutostartSchedule, "expected workspace autostart schedule to be nil")
}
})
t.Run("NoWait", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
ctx := testutil.Context(t, testutil.WaitLong)
inv, root := clitest.New(t, "create", "my-workspace",
"--template", template.Name,
"-y",
"--no-wait",
)
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
pty.ExpectMatchContext(ctx, "building in the background")
_ = testutil.TryReceive(ctx, t, doneChan)
// Verify workspace was actually created.
ws, err := member.WorkspaceByOwnerAndName(ctx, codersdk.Me, "my-workspace", codersdk.WorkspaceOptions{})
require.NoError(t, err)
assert.Equal(t, ws.TemplateName, template.Name)
})
t.Run("NoWaitWithParameterDefaults", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
{Name: "region", Type: "string", DefaultValue: "us-east-1"},
{Name: "instance_type", Type: "string", DefaultValue: "t3.micro"},
}))
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
ctx := testutil.Context(t, testutil.WaitLong)
inv, root := clitest.New(t, "create", "my-workspace",
"--template", template.Name,
"-y",
"--use-parameter-defaults",
"--no-wait",
)
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
pty.ExpectMatchContext(ctx, "building in the background")
_ = testutil.TryReceive(ctx, t, doneChan)
// Verify workspace was created and parameters were applied.
ws, err := member.WorkspaceByOwnerAndName(ctx, codersdk.Me, "my-workspace", codersdk.WorkspaceOptions{})
require.NoError(t, err)
assert.Equal(t, ws.TemplateName, template.Name)
buildParams, err := member.WorkspaceBuildParameters(ctx, ws.LatestBuild.ID)
require.NoError(t, err)
assert.Contains(t, buildParams, codersdk.WorkspaceBuildParameter{Name: "region", Value: "us-east-1"})
assert.Contains(t, buildParams, codersdk.WorkspaceBuildParameter{Name: "instance_type", Value: "t3.micro"})
})
}
func prepareEchoResponses(parameters []*proto.RichParameter, presets ...*proto.Preset) *echo.Responses {
+6
@@ -1000,6 +1000,12 @@ func mcpFromSDK(sdkTool toolsdk.GenericTool, tb toolsdk.Deps) server.ServerTool
Properties: sdkTool.Schema.Properties,
Required: sdkTool.Schema.Required,
},
Annotations: mcp.ToolAnnotation{
ReadOnlyHint: mcp.ToBoolPtr(sdkTool.MCPAnnotations.ReadOnlyHint),
DestructiveHint: mcp.ToBoolPtr(sdkTool.MCPAnnotations.DestructiveHint),
IdempotentHint: mcp.ToBoolPtr(sdkTool.MCPAnnotations.IdempotentHint),
OpenWorldHint: mcp.ToBoolPtr(sdkTool.MCPAnnotations.OpenWorldHint),
},
},
Handler: func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
var buf bytes.Buffer
+16 -1
@@ -81,7 +81,13 @@ func TestExpMcpServer(t *testing.T) {
var toolsResponse struct {
Result struct {
Tools []struct {
Name string `json:"name"`
Annotations struct {
ReadOnlyHint *bool `json:"readOnlyHint"`
DestructiveHint *bool `json:"destructiveHint"`
IdempotentHint *bool `json:"idempotentHint"`
OpenWorldHint *bool `json:"openWorldHint"`
} `json:"annotations"`
} `json:"tools"`
} `json:"result"`
}
@@ -94,6 +100,15 @@ func TestExpMcpServer(t *testing.T) {
}
slices.Sort(foundTools)
require.Equal(t, []string{"coder_get_authenticated_user"}, foundTools)
annotations := toolsResponse.Result.Tools[0].Annotations
require.NotNil(t, annotations.ReadOnlyHint)
require.NotNil(t, annotations.DestructiveHint)
require.NotNil(t, annotations.IdempotentHint)
require.NotNil(t, annotations.OpenWorldHint)
assert.True(t, *annotations.ReadOnlyHint)
assert.False(t, *annotations.DestructiveHint)
assert.True(t, *annotations.IdempotentHint)
assert.False(t, *annotations.OpenWorldHint)
// Call the tool and ensure it works.
toolPayload := `{"jsonrpc":"2.0","id":3,"method":"tools/call", "params": {"name": "coder_get_authenticated_user", "arguments": {}}}`
+74 -45
@@ -1732,19 +1732,18 @@ const (
func (r *RootCmd) scaletestAutostart() *serpent.Command {
var (
workspaceCount int64
workspaceJobTimeout time.Duration
autostartBuildTimeout time.Duration
autostartDelay time.Duration
template string
noCleanup bool
parameterFlags workspaceParameterFlags
tracingFlags = &scaletestTracingFlags{}
timeoutStrategy = &timeoutFlags{}
cleanupStrategy = newScaletestCleanupStrategy()
output = &scaletestOutputFlags{}
prometheusFlags = &scaletestPrometheusFlags{}
)
cmd := &serpent.Command{
@@ -1772,7 +1771,7 @@ func (r *RootCmd) scaletestAutostart() *serpent.Command {
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("parse output flags: %w", err)
}
tpl, err := parseTemplate(ctx, client, me.OrganizationIDs, template)
@@ -1803,15 +1802,41 @@ func (r *RootCmd) scaletestAutostart() *serpent.Command {
}
tracer := tracerProvider.Tracer(scaletestTracerName)
reg := prometheus.NewRegistry()
setupBarrier := new(sync.WaitGroup)
setupBarrier.Add(int(workspaceCount))
// The workspace-build-updates experiment must be enabled to use
// the centralized pubsub channel for coordinating workspace builds.
experiments, err := client.Experiments(ctx)
if err != nil {
return xerrors.Errorf("get experiments: %w", err)
}
if !experiments.Enabled(codersdk.ExperimentWorkspaceBuildUpdates) {
return xerrors.New("the workspace-build-updates experiment must be enabled to run the autostart scaletest")
}
workspaceNames := make([]string, 0, workspaceCount)
resultSink := make(chan autostart.RunResult, workspaceCount)
for i := range workspaceCount {
id := strconv.Itoa(int(i))
workspaceNames = append(workspaceNames, loadtestutil.GenerateDeterministicWorkspaceName(id))
}
dispatcher := autostart.NewWorkspaceDispatcher(workspaceNames)
decoder, err := client.WatchAllWorkspaceBuilds(ctx)
if err != nil {
return xerrors.Errorf("watch all workspace builds: %w", err)
}
defer decoder.Close()
// Start the dispatcher. It will run in a goroutine and automatically
// close all workspace channels when the build updates channel closes.
dispatcher.Start(ctx, decoder.Chan())
th := harness.NewTestHarness(timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}), cleanupStrategy.toStrategy())
for workspaceName, buildUpdatesChannel := range dispatcher.Channels {
id := strings.TrimPrefix(workspaceName, loadtestutil.ScaleTestPrefix+"-")
config := autostart.Config{
User: createusers.Config{
OrganizationID: me.OrganizationIDs[0],
@@ -1821,13 +1846,16 @@ func (r *RootCmd) scaletestAutostart() *serpent.Command {
Request: codersdk.CreateWorkspaceRequest{
TemplateID: tpl.ID,
RichParameterValues: richParameters,
// Use deterministic workspace name so we can pre-create the channel.
Name: workspaceName,
},
},
WorkspaceJobTimeout: workspaceJobTimeout,
AutostartBuildTimeout: autostartBuildTimeout,
AutostartDelay: autostartDelay,
SetupBarrier: setupBarrier,
BuildUpdates: buildUpdatesChannel,
ResultSink: resultSink,
}
if err := config.Validate(); err != nil {
return xerrors.Errorf("validate config: %w", err)
@@ -1849,18 +1877,11 @@ func (r *RootCmd) scaletestAutostart() *serpent.Command {
th.AddRun(autostartTestName, id, runner)
}
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
_, _ = fmt.Fprintln(inv.Stderr, "\nUploading traces...")
if err := closeTracing(ctx); err != nil {
_, _ = fmt.Fprintf(inv.Stderr, "\nError uploading traces: %+v\n", err)
}
// Wait for prometheus metrics to be scraped
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
_, _ = fmt.Fprintln(inv.Stderr, "Running autostart load test...")
@@ -1871,31 +1892,40 @@ func (r *RootCmd) scaletestAutostart() *serpent.Command {
return xerrors.Errorf("run test harness (harness failure, not a test failure): %w", err)
}
// Collect all metrics from the channel.
close(resultSink)
var runResults []autostart.RunResult
for r := range resultSink {
runResults = append(runResults, r)
}
res := th.Results()
if res.TotalFail > 0 {
return xerrors.New("load test failed, see above for more details")
}
_, _ = fmt.Fprintf(inv.Stderr, "\nAll %d autostart builds completed successfully (elapsed: %s)\n", res.TotalRuns, time.Duration(res.Elapsed).Round(time.Millisecond))
if len(runResults) > 0 {
results := autostart.NewRunResults(runResults)
for _, out := range outputs {
if err := out.write(results.ToHarnessResults(), inv.Stdout); err != nil {
return xerrors.Errorf("write output: %w", err)
}
}
}
if !noCleanup {
_, _ = fmt.Fprintln(inv.Stderr, "\nCleaning up...")
cleanupCtx, cleanupCancel := cleanupStrategy.toContext(context.Background())
defer cleanupCancel()
err = th.Cleanup(cleanupCtx)
if err != nil {
return xerrors.Errorf("cleanup tests: %w", err)
}
_, _ = fmt.Fprintln(inv.Stderr, "Cleanup complete")
} else {
_, _ = fmt.Fprintln(inv.Stderr, "\nSkipping cleanup (--no-cleanup specified). Resources left running.")
}
return nil
@@ -1918,6 +1948,13 @@ func (r *RootCmd) scaletestAutostart() *serpent.Command {
Description: "Timeout for workspace jobs (e.g. build, start).",
Value: serpent.DurationOf(&workspaceJobTimeout),
},
{
Flag: "autostart-build-timeout",
Env: "CODER_SCALETEST_AUTOSTART_BUILD_TIMEOUT",
Default: "15m",
Description: "Timeout for the autostart build to complete. Must be longer than workspace-job-timeout to account for queueing time in high-load scenarios.",
Value: serpent.DurationOf(&autostartBuildTimeout),
},
{
Flag: "autostart-delay",
Env: "CODER_SCALETEST_AUTOSTART_DELAY",
@@ -1925,13 +1962,6 @@ func (r *RootCmd) scaletestAutostart() *serpent.Command {
Description: "How long after all the workspaces have been stopped to schedule them to be started again.",
Value: serpent.DurationOf(&autostartDelay),
},
{
Flag: "autostart-timeout",
Env: "CODER_SCALETEST_AUTOSTART_TIMEOUT",
Default: "5m",
Description: "Timeout for the autostart build to be initiated after the scheduled start time.",
Value: serpent.DurationOf(&autostartTimeout),
},
{
Flag: "template",
FlagShorthand: "t",
@@ -1950,10 +1980,9 @@ func (r *RootCmd) scaletestAutostart() *serpent.Command {
cmd.Options = append(cmd.Options, parameterFlags.cliParameters()...)
tracingFlags.attach(&cmd.Options)
output.attach(&cmd.Options)
timeoutStrategy.attach(&cmd.Options)
cleanupStrategy.attach(&cmd.Options)
prometheusFlags.attach(&cmd.Options)
return cmd
}
+1 -1
@@ -214,7 +214,7 @@ func (r *RootCmd) createOrganizationRole(orgContext *OrganizationContext) *serpe
} else {
updated, err = client.CreateOrganizationRole(ctx, customRole)
if err != nil {
return xerrors.Errorf("create role: %w", err)
}
}
+23
@@ -79,6 +79,29 @@ func (r *RootCmd) start() *serpent.Command {
)
build = workspace.LatestBuild
default:
// If the last build was a failed start, run a stop
// first to clean up any partially-provisioned
// resources.
if workspace.LatestBuild.Status == codersdk.WorkspaceStatusFailed &&
workspace.LatestBuild.Transition == codersdk.WorkspaceTransitionStart {
_, _ = fmt.Fprintf(inv.Stdout, "The last start build failed. Cleaning up before retrying...\n")
stopBuild, stopErr := client.CreateWorkspaceBuild(inv.Context(), workspace.ID, codersdk.CreateWorkspaceBuildRequest{
Transition: codersdk.WorkspaceTransitionStop,
})
if stopErr != nil {
return xerrors.Errorf("cleanup stop after failed start: %w", stopErr)
}
stopErr = cliui.WorkspaceBuild(inv.Context(), inv.Stdout, client, stopBuild.ID)
if stopErr != nil {
return xerrors.Errorf("wait for cleanup stop: %w", stopErr)
}
// Re-fetch workspace after stop completes so
// startWorkspace sees the latest state.
workspace, err = namedWorkspace(inv.Context(), client, inv.Args[0])
if err != nil {
return err
}
}
build, err = startWorkspace(inv, client, workspace, parameterFlags, bflags, WorkspaceStart)
// It's possible for a workspace build to fail because the template
// requires workspaces to start with the active version.
+52
@@ -534,3 +534,55 @@ func TestStart_WithReason(t *testing.T) {
workspace = coderdtest.MustWorkspace(t, member, workspace.ID)
require.Equal(t, codersdk.BuildReasonCLI, workspace.LatestBuild.Reason)
}
func TestStart_FailedStartCleansUp(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
store, ps := dbtestutil.NewDB(t)
client := coderdtest.New(t, &coderdtest.Options{
Database: store,
Pubsub: ps,
IncludeProvisionerDaemon: true,
})
owner := coderdtest.CreateFirstUser(t, client)
memberClient, member := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, memberClient, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
// Insert a failed start build directly into the database so that
// the workspace's latest build is a failed "start" transition.
dbfake.WorkspaceBuild(t, store, database.WorkspaceTable{
ID: workspace.ID,
OwnerID: member.ID,
OrganizationID: owner.OrganizationID,
TemplateID: template.ID,
}).
Seed(database.WorkspaceBuild{
TemplateVersionID: version.ID,
Transition: database.WorkspaceTransitionStart,
BuildNumber: workspace.LatestBuild.BuildNumber + 1,
}).
Failed().
Do()
inv, root := clitest.New(t, "start", workspace.Name)
clitest.SetupConfig(t, memberClient, root)
pty := ptytest.New(t).Attach(inv)
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
// The CLI should detect the failed start and clean up first.
pty.ExpectMatch("Cleaning up before retrying")
pty.ExpectMatch("workspace has been started")
_ = testutil.TryReceive(ctx, t, doneChan)
}
+26 -17
@@ -113,6 +113,20 @@ func (r *RootCmd) supportBundle() *serpent.Command {
)
cliLog.Debug(inv.Context(), "invocation", slog.F("args", strings.Join(os.Args, " ")))
// Bypass rate limiting for support bundle collection since it makes many API calls.
// Note: this can only be done by the owner user.
if ok, err := support.CanGenerateFull(inv.Context(), client); err == nil && ok {
cliLog.Debug(inv.Context(), "running as owner")
client.HTTPClient.Transport = &codersdk.HeaderTransport{
Transport: client.HTTPClient.Transport,
Header: http.Header{codersdk.BypassRatelimitHeader: {"true"}},
}
} else if !ok {
cliLog.Warn(inv.Context(), "not running as owner, not all information available")
} else {
cliLog.Error(inv.Context(), "failed to look up current user", slog.Error(err))
}
// Check if we're running inside a workspace
if val, found := os.LookupEnv("CODER"); found && val == "true" {
cliui.Warn(inv.Stderr, "Running inside Coder workspace; this can affect results!")
@@ -200,12 +214,6 @@ func (r *RootCmd) supportBundle() *serpent.Command {
_, _ = fmt.Fprintln(inv.Stderr, "pprof data collection will take approximately 30 seconds...")
}
// Bypass rate limiting for support bundle collection since it makes many API calls.
client.HTTPClient.Transport = &codersdk.HeaderTransport{
Transport: client.HTTPClient.Transport,
Header: http.Header{codersdk.BypassRatelimitHeader: {"true"}},
}
deps := support.Deps{
Client: client,
// Support adds a sink so we don't need to supply one ourselves.
@@ -354,19 +362,20 @@ func summarizeBundle(inv *serpent.Invocation, bun *support.Bundle) {
return
}
if bun.Deployment.Config == nil {
cliui.Error(inv.Stdout, "No deployment configuration available!")
return
var docsURL string
if bun.Deployment.Config != nil {
docsURL = bun.Deployment.Config.Values.DocsURL.String()
} else {
cliui.Warn(inv.Stdout, "No deployment configuration available. This may require the Owner role.")
}
docsURL := bun.Deployment.Config.Values.DocsURL.String()
if bun.Deployment.HealthReport == nil {
cliui.Error(inv.Stdout, "No deployment health report available!")
return
}
deployHealthSummary := bun.Deployment.HealthReport.Summarize(docsURL)
if len(deployHealthSummary) > 0 {
cliui.Warn(inv.Stdout, "Deployment health issues detected:", deployHealthSummary...)
if bun.Deployment.HealthReport != nil {
deployHealthSummary := bun.Deployment.HealthReport.Summarize(docsURL)
if len(deployHealthSummary) > 0 {
cliui.Warn(inv.Stdout, "Deployment health issues detected:", deployHealthSummary...)
}
} else {
cliui.Warn(inv.Stdout, "No deployment health report available.")
}
if bun.Network.Netcheck == nil {
+30 -3
@@ -132,12 +132,35 @@ func TestSupportBundle(t *testing.T) {
assertBundleContents(t, path, true, false, []string{secretValue})
})
t.Run("NoPrivilege", func(t *testing.T) {
t.Run("MemberCanGenerateBundle", func(t *testing.T) {
t.Parallel()
inv, root := clitest.New(t, "support", "bundle", memberWorkspace.Workspace.Name, "--yes")
d := t.TempDir()
path := filepath.Join(d, "bundle.zip")
inv, root := clitest.New(t, "support", "bundle", memberWorkspace.Workspace.Name, "--output-file", path, "--yes")
clitest.SetupConfig(t, memberClient, root)
err := inv.Run()
require.ErrorContains(t, err, "failed authorization check")
require.NoError(t, err)
r, err := zip.OpenReader(path)
require.NoError(t, err, "open zip file")
defer r.Close()
fileNames := make(map[string]struct{}, len(r.File))
for _, f := range r.File {
fileNames[f.Name] = struct{}{}
}
// These should always be present in the zip structure, even if
// the content is null/empty for non-admin users.
for _, name := range []string{
"deployment/buildinfo.json",
"deployment/config.json",
"workspace/workspace.json",
"logs.txt",
"cli_logs.txt",
"network/netcheck.json",
"network/interfaces.json",
} {
require.Contains(t, fileNames, name)
}
})
// This ensures that the CLI does not panic when trying to generate a support bundle
@@ -159,6 +182,10 @@ func TestSupportBundle(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
t.Logf("received request: %s %s", r.Method, r.URL)
switch r.URL.Path {
case "/api/v2/users/me":
resp := codersdk.User{}
w.WriteHeader(http.StatusOK)
assert.NoError(t, json.NewEncoder(w).Encode(resp))
case "/api/v2/authcheck":
// Fake auth check
resp := codersdk.AuthorizationResponse{
+4
@@ -20,6 +20,10 @@ OPTIONS:
--copy-parameters-from string, $CODER_WORKSPACE_COPY_PARAMETERS_FROM
Specify the source workspace name to copy parameters from.
--no-wait bool, $CODER_CREATE_NO_WAIT
Return immediately after creating the workspace. The build will run in
the background.
--parameter string-array, $CODER_RICH_PARAMETER
Rich parameter value in the format "name=value".
+1 -1
@@ -7,7 +7,7 @@
"last_seen_at": "====[timestamp]=====",
"name": "test-daemon",
"version": "v0.0.0-devel",
"api_version": "1.15",
"api_version": "1.16",
"provisioners": [
"echo"
],
+11 -10
@@ -8,16 +8,17 @@ USAGE:
Aliases: user
SUBCOMMANDS:
activate Update a user's status to 'active'. Active users can fully
interact with the platform
create Create a new user.
delete Delete a user by username or user_id.
edit-roles Edit a user's roles by username or id
list Prints the list of users.
show Show a single user. Use 'me' to indicate the currently
authenticated user.
suspend Update a user's status to 'suspended'. A suspended user cannot
log into the platform
activate Update a user's status to 'active'. Active users can fully
interact with the platform
create Create a new user.
delete Delete a user by username or user_id.
edit-roles Edit a user's roles by username or id
list Prints the list of users.
oidc-claims Display the OIDC claims for the authenticated user.
show Show a single user. Use 'me' to indicate the currently
authenticated user.
suspend Update a user's status to 'suspended'. A suspended user
cannot log into the platform
———
Run `coder --help` for a list of global options.
+4
@@ -24,6 +24,10 @@ OPTIONS:
-p, --password string
Specifies a password for the new user.
--service-account bool
Create a user account intended to be used by a service or as an
intermediary rather than by a human.
-u, --username string
Specifies a username for the new user.
+24
@@ -0,0 +1,24 @@
coder v0.0.0-devel
USAGE:
coder users oidc-claims [flags]
Display the OIDC claims for the authenticated user.
- Display your OIDC claims:
$ coder users oidc-claims
- Display your OIDC claims as JSON:
$ coder users oidc-claims -o json
OPTIONS:
-c, --column [key|value] (default: key,value)
Columns to display in table output.
-o, --output table|json (default: table)
Output format.
———
Run `coder --help` for a list of global options.
+5
@@ -752,6 +752,11 @@ workspace_prebuilds:
# limit; disabled when set to zero.
# (default: 3, type: int)
failure_hard_limit: 3
# Configure the background chat processing daemon.
chat:
# How many pending chats a worker should acquire per polling cycle.
# (default: 10, type: int)
acquireBatchSize: 10
aibridge:
# Whether to start an in-memory aibridged instance.
# (default: false, type: bool)
+37 -12
@@ -17,13 +17,14 @@ import (
func (r *RootCmd) userCreate() *serpent.Command {
var (
email string
username string
name string
password string
disableLogin bool
loginType string
orgContext = NewOrganizationContext()
email string
username string
name string
password string
disableLogin bool
loginType string
serviceAccount bool
orgContext = NewOrganizationContext()
)
cmd := &serpent.Command{
Use: "create",
@@ -32,6 +33,23 @@ func (r *RootCmd) userCreate() *serpent.Command {
serpent.RequireNArgs(0),
),
Handler: func(inv *serpent.Invocation) error {
if serviceAccount {
switch {
case loginType != "":
return xerrors.New("You cannot use --login-type with --service-account")
case password != "":
return xerrors.New("You cannot use --password with --service-account")
case email != "":
return xerrors.New("You cannot use --email with --service-account")
case disableLogin:
return xerrors.New("You cannot use --disable-login with --service-account")
}
}
if disableLogin && loginType != "" {
return xerrors.New("You cannot specify both --disable-login and --login-type")
}
client, err := r.InitClient(inv)
if err != nil {
return err
@@ -59,7 +77,7 @@ func (r *RootCmd) userCreate() *serpent.Command {
return err
}
}
if email == "" {
if email == "" && !serviceAccount {
email, err = cliui.Prompt(inv, cliui.PromptOptions{
Text: "Email:",
Validate: func(s string) error {
@@ -87,10 +105,7 @@ func (r *RootCmd) userCreate() *serpent.Command {
}
}
userLoginType := codersdk.LoginTypePassword
if disableLogin && loginType != "" {
return xerrors.New("You cannot specify both --disable-login and --login-type")
}
if disableLogin {
if disableLogin || serviceAccount {
userLoginType = codersdk.LoginTypeNone
} else if loginType != "" {
userLoginType = codersdk.LoginType(loginType)
@@ -111,6 +126,7 @@ func (r *RootCmd) userCreate() *serpent.Command {
Password: password,
OrganizationIDs: []uuid.UUID{organization.ID},
UserLoginType: userLoginType,
ServiceAccount: serviceAccount,
})
if err != nil {
return err
@@ -127,6 +143,10 @@ func (r *RootCmd) userCreate() *serpent.Command {
case codersdk.LoginTypeOIDC:
authenticationMethod = `Login is authenticated through the configured OIDC provider.`
}
if serviceAccount {
email = "n/a"
authenticationMethod = "Service accounts must authenticate with a token and cannot log in."
}
_, _ = fmt.Fprintln(inv.Stderr, `A new user has been created!
Share the instructions below to get them started.
@@ -194,6 +214,11 @@ Create a workspace `+pretty.Sprint(cliui.DefaultStyles.Code, "coder create")+`!
)),
Value: serpent.StringOf(&loginType),
},
{
Flag: "service-account",
Description: "Create a user account intended to be used by a service or as an intermediary rather than by a human.",
Value: serpent.BoolOf(&serviceAccount),
},
}
orgContext.AttachOptions(cmd)
+53
@@ -8,6 +8,7 @@ import (
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
@@ -124,4 +125,56 @@ func TestUserCreate(t *testing.T) {
assert.Equal(t, args[5], created.Username)
assert.Empty(t, created.Name)
})
tests := []struct {
name string
args []string
err string
}{
{
name: "ServiceAccount",
args: []string{"--service-account", "-u", "dean"},
},
{
name: "ServiceAccountLoginType",
args: []string{"--service-account", "-u", "dean", "--login-type", "none"},
err: "You cannot use --login-type with --service-account",
},
{
name: "ServiceAccountDisableLogin",
args: []string{"--service-account", "-u", "dean", "--disable-login"},
err: "You cannot use --disable-login with --service-account",
},
{
name: "ServiceAccountEmail",
args: []string{"--service-account", "-u", "dean", "--email", "dean@coder.com"},
err: "You cannot use --email with --service-account",
},
{
name: "ServiceAccountPassword",
args: []string{"--service-account", "-u", "dean", "--password", "1n5ecureP4ssw0rd!"},
err: "You cannot use --password with --service-account",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, append([]string{"users", "create"}, tt.args...)...)
clitest.SetupConfig(t, client, root)
err := inv.Run()
if tt.err == "" {
require.NoError(t, err)
ctx := testutil.Context(t, testutil.WaitShort)
created, err := client.User(ctx, "dean")
require.NoError(t, err)
assert.Equal(t, codersdk.LoginTypeNone, created.LoginType)
} else {
require.Error(t, err)
require.ErrorContains(t, err, tt.err)
}
})
}
}
+79
@@ -0,0 +1,79 @@
package cli
import (
"fmt"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/serpent"
)
func (r *RootCmd) userOIDCClaims() *serpent.Command {
formatter := cliui.NewOutputFormatter(
cliui.ChangeFormatterData(
cliui.TableFormat([]claimRow{}, []string{"key", "value"}),
func(data any) (any, error) {
resp, ok := data.(codersdk.OIDCClaimsResponse)
if !ok {
return nil, xerrors.Errorf("expected type %T, got %T", resp, data)
}
rows := make([]claimRow, 0, len(resp.Claims))
for k, v := range resp.Claims {
rows = append(rows, claimRow{
Key: k,
Value: fmt.Sprintf("%v", v),
})
}
return rows, nil
},
),
cliui.JSONFormat(),
)
cmd := &serpent.Command{
Use: "oidc-claims",
Short: "Display the OIDC claims for the authenticated user.",
Long: FormatExamples(
Example{
Description: "Display your OIDC claims",
Command: "coder users oidc-claims",
},
Example{
Description: "Display your OIDC claims as JSON",
Command: "coder users oidc-claims -o json",
},
),
Middleware: serpent.Chain(
serpent.RequireNArgs(0),
),
Handler: func(inv *serpent.Invocation) error {
client, err := r.InitClient(inv)
if err != nil {
return err
}
resp, err := client.UserOIDCClaims(inv.Context())
if err != nil {
return xerrors.Errorf("get oidc claims: %w", err)
}
out, err := formatter.Format(inv.Context(), resp)
if err != nil {
return err
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
}
formatter.AttachOptions(&cmd.Options)
return cmd
}
type claimRow struct {
Key string `json:"-" table:"key,default_sort"`
Value string `json:"-" table:"value"`
}
+161
@@ -0,0 +1,161 @@
package cli_test
import (
"bytes"
"encoding/json"
"testing"
"github.com/golang-jwt/jwt/v4"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/coderdtest/oidctest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
)
func TestUserOIDCClaims(t *testing.T) {
t.Parallel()
newOIDCTest := func(t *testing.T) (*oidctest.FakeIDP, *codersdk.Client) {
t.Helper()
fake := oidctest.NewFakeIDP(t,
oidctest.WithServing(),
)
cfg := fake.OIDCConfig(t, nil, func(cfg *coderd.OIDCConfig) {
cfg.AllowSignups = true
})
ownerClient := coderdtest.New(t, &coderdtest.Options{
OIDCConfig: cfg,
})
return fake, ownerClient
}
t.Run("OwnClaims", func(t *testing.T) {
t.Parallel()
fake, ownerClient := newOIDCTest(t)
claims := jwt.MapClaims{
"email": "alice@coder.com",
"email_verified": true,
"sub": uuid.NewString(),
"groups": []string{"admin", "eng"},
}
userClient, loginResp := fake.Login(t, ownerClient, claims)
defer loginResp.Body.Close()
inv, root := clitest.New(t, "users", "oidc-claims", "-o", "json")
clitest.SetupConfig(t, userClient, root)
buf := bytes.NewBuffer(nil)
inv.Stdout = buf
err := inv.WithContext(testutil.Context(t, testutil.WaitMedium)).Run()
require.NoError(t, err)
var resp codersdk.OIDCClaimsResponse
err = json.Unmarshal(buf.Bytes(), &resp)
require.NoError(t, err, "unmarshal JSON output")
require.NotEmpty(t, resp.Claims, "claims should not be empty")
assert.Equal(t, "alice@coder.com", resp.Claims["email"])
})
t.Run("Table", func(t *testing.T) {
t.Parallel()
fake, ownerClient := newOIDCTest(t)
claims := jwt.MapClaims{
"email": "bob@coder.com",
"email_verified": true,
"sub": uuid.NewString(),
}
userClient, loginResp := fake.Login(t, ownerClient, claims)
defer loginResp.Body.Close()
inv, root := clitest.New(t, "users", "oidc-claims")
clitest.SetupConfig(t, userClient, root)
buf := bytes.NewBuffer(nil)
inv.Stdout = buf
err := inv.WithContext(testutil.Context(t, testutil.WaitMedium)).Run()
require.NoError(t, err)
output := buf.String()
require.Contains(t, output, "email")
require.Contains(t, output, "bob@coder.com")
})
t.Run("NotOIDCUser", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
_ = coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, "users", "oidc-claims")
clitest.SetupConfig(t, client, root)
err := inv.WithContext(testutil.Context(t, testutil.WaitMedium)).Run()
require.Error(t, err)
require.Contains(t, err.Error(), "not an OIDC user")
})
// Verify that two different OIDC users each only see their own
// claims. The endpoint has no user parameter, so there is no way
// to request another user's claims by design.
t.Run("OnlyOwnClaims", func(t *testing.T) {
t.Parallel()
aliceFake, aliceOwnerClient := newOIDCTest(t)
aliceClaims := jwt.MapClaims{
"email": "alice-isolation@coder.com",
"email_verified": true,
"sub": uuid.NewString(),
}
aliceClient, aliceLoginResp := aliceFake.Login(t, aliceOwnerClient, aliceClaims)
defer aliceLoginResp.Body.Close()
bobFake, bobOwnerClient := newOIDCTest(t)
bobClaims := jwt.MapClaims{
"email": "bob-isolation@coder.com",
"email_verified": true,
"sub": uuid.NewString(),
}
bobClient, bobLoginResp := bobFake.Login(t, bobOwnerClient, bobClaims)
defer bobLoginResp.Body.Close()
ctx := testutil.Context(t, testutil.WaitMedium)
// Alice sees her own claims.
aliceResp, err := aliceClient.UserOIDCClaims(ctx)
require.NoError(t, err)
assert.Equal(t, "alice-isolation@coder.com", aliceResp.Claims["email"])
// Bob sees his own claims.
bobResp, err := bobClient.UserOIDCClaims(ctx)
require.NoError(t, err)
assert.Equal(t, "bob-isolation@coder.com", bobResp.Claims["email"])
})
t.Run("ClaimsNeverNull", func(t *testing.T) {
t.Parallel()
fake, ownerClient := newOIDCTest(t)
// Use minimal claims — just enough for OIDC login.
claims := jwt.MapClaims{
"email": "minimal@coder.com",
"email_verified": true,
"sub": uuid.NewString(),
}
userClient, loginResp := fake.Login(t, ownerClient, claims)
defer loginResp.Body.Close()
ctx := testutil.Context(t, testutil.WaitMedium)
resp, err := userClient.UserOIDCClaims(ctx)
require.NoError(t, err)
require.NotNil(t, resp.Claims, "claims should never be nil, expected empty map")
})
}
+1
@@ -19,6 +19,7 @@ func (r *RootCmd) users() *serpent.Command {
r.userSingle(),
r.userDelete(),
r.userEditRoles(),
r.userOIDCClaims(),
r.createUserStatusCommand(codersdk.UserStatusActive),
r.createUserStatusCommand(codersdk.UserStatusSuspended),
},
+38
@@ -0,0 +1,38 @@
// Package aiseats is the AGPL version of the package.
// The actual implementation is in `enterprise/aiseats`.
package aiseats
import (
"context"
"github.com/google/uuid"
"github.com/coder/coder/v2/coderd/database"
)
type Reason struct {
EventType database.AiSeatUsageReason
Description string
}
// ReasonAIBridge constructs a reason for usage originating from AI Bridge.
func ReasonAIBridge(description string) Reason {
return Reason{EventType: database.AiSeatUsageReasonAibridge, Description: description}
}
// ReasonTask constructs a reason for usage originating from tasks.
func ReasonTask(description string) Reason {
return Reason{EventType: database.AiSeatUsageReasonTask, Description: description}
}
// SeatTracker records AI seat consumption state.
type SeatTracker interface {
// RecordUsage does not return an error to prevent blocking the user from using
// AI features. This method is used to record usage, not enforce it.
RecordUsage(ctx context.Context, userID uuid.UUID, reason Reason)
}
// Noop is an AGPL seat tracker that does nothing.
type Noop struct{}
func (Noop) RecordUsage(context.Context, uuid.UUID, Reason) {}
+2052 -1716
File diff suppressed because it is too large
+2030 -1716
File diff suppressed because it is too large
+2 -1
@@ -32,7 +32,8 @@ type Auditable interface {
idpsync.OrganizationSyncSettings |
idpsync.GroupSyncSettings |
idpsync.RoleSyncSettings |
database.TaskTable
database.TaskTable |
database.AiSeatState
}
// Map is a map of changed fields in an audited resource. It maps field names to
+8
@@ -132,6 +132,8 @@ func ResourceTarget[T Auditable](tgt T) string {
return "Organization Role Sync"
case database.TaskTable:
return typed.Name
case database.AiSeatState:
return "AI Seat"
default:
panic(fmt.Sprintf("unknown resource %T for ResourceTarget", tgt))
}
@@ -196,6 +198,8 @@ func ResourceID[T Auditable](tgt T) uuid.UUID {
return noID // Org field on audit log has org id
case database.TaskTable:
return typed.ID
case database.AiSeatState:
return typed.UserID
default:
panic(fmt.Sprintf("unknown resource %T for ResourceID", tgt))
}
@@ -251,6 +255,8 @@ func ResourceType[T Auditable](tgt T) database.ResourceType {
return database.ResourceTypeIdpSyncSettingsGroup
case database.TaskTable:
return database.ResourceTypeTask
case database.AiSeatState:
return database.ResourceTypeAiSeat
default:
panic(fmt.Sprintf("unknown resource %T for ResourceType", typed))
}
@@ -309,6 +315,8 @@ func ResourceRequiresOrgID[T Auditable]() bool {
return true
case database.TaskTable:
return true
case database.AiSeatState:
return false
default:
panic(fmt.Sprintf("unknown resource %T for ResourceRequiresOrgID", tgt))
}
+984 -529
File diff suppressed because it is too large
+370
@@ -2,13 +2,23 @@ package chatd
import (
"context"
"sync"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"go.uber.org/mock/gomock"
"golang.org/x/xerrors"
"cdr.dev/slog/v3/sloggers/slogtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbmock"
dbpubsub "github.com/coder/coder/v2/coderd/database/pubsub"
coderdpubsub "github.com/coder/coder/v2/coderd/pubsub"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/coder/v2/codersdk/workspacesdk/agentconnmock"
)
func TestRefreshChatWorkspaceSnapshot_NoReloadWhenWorkspacePresent(t *testing.T) {
@@ -84,3 +94,363 @@ func TestRefreshChatWorkspaceSnapshot_ReturnsReloadError(t *testing.T) {
require.ErrorContains(t, err, loadErr.Error())
require.Equal(t, chat, refreshed)
}
func TestResolveInstructionsReusesTurnLocalWorkspaceAgent(t *testing.T) {
t.Parallel()
ctx := context.Background()
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
workspaceID := uuid.New()
chat := database.Chat{
ID: uuid.New(),
WorkspaceID: uuid.NullUUID{
UUID: workspaceID,
Valid: true,
},
}
workspaceAgent := database.WorkspaceAgent{
ID: uuid.New(),
OperatingSystem: "linux",
Directory: "/home/coder/project",
ExpandedDirectory: "/home/coder/project",
}
db.EXPECT().GetWorkspaceAgentsInLatestBuildByWorkspaceID(
gomock.Any(),
workspaceID,
).Return([]database.WorkspaceAgent{workspaceAgent}, nil).Times(1)
conn := agentconnmock.NewMockAgentConn(ctrl)
conn.EXPECT().SetExtraHeaders(gomock.Any()).Times(1)
conn.EXPECT().LS(gomock.Any(), "", gomock.Any()).Return(
workspacesdk.LSResponse{},
codersdk.NewTestError(404, "POST", "/api/v0/list-directory"),
).Times(1)
conn.EXPECT().ReadFile(
gomock.Any(),
"/home/coder/project/AGENTS.md",
int64(0),
int64(maxInstructionFileBytes+1),
).Return(
nil,
"",
codersdk.NewTestError(404, "GET", "/api/v0/read-file"),
).Times(1)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
server := &Server{
db: db,
logger: logger,
instructionCache: make(map[uuid.UUID]cachedInstruction),
agentConnFn: func(context.Context, uuid.UUID) (workspacesdk.AgentConn, func(), error) {
return conn, func() {}, nil
},
}
chatStateMu := &sync.Mutex{}
currentChat := chat
workspaceCtx := turnWorkspaceContext{
server: server,
chatStateMu: chatStateMu,
currentChat: &currentChat,
loadChatSnapshot: func(context.Context, uuid.UUID) (database.Chat, error) { return database.Chat{}, nil },
}
t.Cleanup(workspaceCtx.close)
instruction := server.resolveInstructions(
ctx,
chat,
workspaceCtx.getWorkspaceAgent,
workspaceCtx.getWorkspaceConn,
)
require.Contains(t, instruction, "Operating System: linux")
require.Contains(t, instruction, "Working Directory: /home/coder/project")
}
func TestTurnWorkspaceContextGetWorkspaceConnRefreshesWorkspaceAgent(t *testing.T) {
t.Parallel()
ctx := context.Background()
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
workspaceID := uuid.New()
chat := database.Chat{
ID: uuid.New(),
WorkspaceID: uuid.NullUUID{
UUID: workspaceID,
Valid: true,
},
}
initialAgent := database.WorkspaceAgent{ID: uuid.New()}
refreshedAgent := database.WorkspaceAgent{ID: uuid.New()}
gomock.InOrder(
db.EXPECT().GetWorkspaceAgentsInLatestBuildByWorkspaceID(
gomock.Any(),
workspaceID,
).Return([]database.WorkspaceAgent{initialAgent}, nil),
db.EXPECT().GetWorkspaceAgentsInLatestBuildByWorkspaceID(
gomock.Any(),
workspaceID,
).Return([]database.WorkspaceAgent{refreshedAgent}, nil),
)
conn := agentconnmock.NewMockAgentConn(ctrl)
conn.EXPECT().SetExtraHeaders(gomock.Any()).Times(1)
var dialed []uuid.UUID
server := &Server{db: db}
server.agentConnFn = func(_ context.Context, agentID uuid.UUID) (workspacesdk.AgentConn, func(), error) {
dialed = append(dialed, agentID)
if agentID == initialAgent.ID {
return nil, nil, xerrors.New("dial failed")
}
return conn, func() {}, nil
}
chatStateMu := &sync.Mutex{}
currentChat := chat
workspaceCtx := turnWorkspaceContext{
server: server,
chatStateMu: chatStateMu,
currentChat: &currentChat,
loadChatSnapshot: func(context.Context, uuid.UUID) (database.Chat, error) { return database.Chat{}, nil },
}
t.Cleanup(workspaceCtx.close)
gotConn, err := workspaceCtx.getWorkspaceConn(ctx)
require.NoError(t, err)
require.Same(t, conn, gotConn)
require.Equal(t, []uuid.UUID{initialAgent.ID, refreshedAgent.ID}, dialed)
}
func TestSubscribeSkipsDatabaseCatchupForLocallyDeliveredMessage(t *testing.T) {
t.Parallel()
ctx, cancelCtx := context.WithCancel(context.Background())
defer cancelCtx()
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
chatID := uuid.New()
chat := database.Chat{ID: chatID, Status: database.ChatStatusPending}
initialMessage := database.ChatMessage{
ID: 1,
ChatID: chatID,
Role: database.ChatMessageRoleUser,
}
localMessage := database.ChatMessage{
ID: 2,
ChatID: chatID,
Role: database.ChatMessageRoleAssistant,
}
gomock.InOrder(
db.EXPECT().GetChatMessagesByChatID(gomock.Any(), database.GetChatMessagesByChatIDParams{
ChatID: chatID,
AfterID: 0,
}).Return([]database.ChatMessage{initialMessage}, nil),
db.EXPECT().GetChatQueuedMessages(gomock.Any(), chatID).Return(nil, nil),
db.EXPECT().GetChatByID(gomock.Any(), chatID).Return(chat, nil),
)
server := newSubscribeTestServer(t, db)
_, events, cancel, ok := server.Subscribe(ctx, chatID, nil, 0)
require.True(t, ok)
defer cancel()
server.publishMessage(chatID, localMessage)
event := requireStreamMessageEvent(t, events)
require.Equal(t, int64(2), event.Message.ID)
requireNoStreamEvent(t, events, 200*time.Millisecond)
}
func TestSubscribeUsesDurableCacheWhenLocalMessageWasNotDelivered(t *testing.T) {
t.Parallel()
ctx, cancelCtx := context.WithCancel(context.Background())
defer cancelCtx()
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
chatID := uuid.New()
chat := database.Chat{ID: chatID, Status: database.ChatStatusPending}
initialMessage := database.ChatMessage{
ID: 1,
ChatID: chatID,
Role: database.ChatMessageRoleUser,
}
cachedMessage := codersdk.ChatMessage{
ID: 2,
ChatID: chatID,
Role: codersdk.ChatMessageRoleAssistant,
}
gomock.InOrder(
db.EXPECT().GetChatMessagesByChatID(gomock.Any(), database.GetChatMessagesByChatIDParams{
ChatID: chatID,
AfterID: 0,
}).Return([]database.ChatMessage{initialMessage}, nil),
db.EXPECT().GetChatQueuedMessages(gomock.Any(), chatID).Return(nil, nil),
db.EXPECT().GetChatByID(gomock.Any(), chatID).Return(chat, nil),
)
server := newSubscribeTestServer(t, db)
server.cacheDurableMessage(chatID, codersdk.ChatStreamEvent{
Type: codersdk.ChatStreamEventTypeMessage,
ChatID: chatID,
Message: &cachedMessage,
})
_, events, cancel, ok := server.Subscribe(ctx, chatID, nil, 0)
require.True(t, ok)
defer cancel()
server.publishChatStreamNotify(chatID, coderdpubsub.ChatStreamNotifyMessage{
AfterMessageID: 1,
})
event := requireStreamMessageEvent(t, events)
require.Equal(t, int64(2), event.Message.ID)
requireNoStreamEvent(t, events, 200*time.Millisecond)
}
func TestSubscribeQueriesDatabaseWhenDurableCacheMisses(t *testing.T) {
t.Parallel()
ctx, cancelCtx := context.WithCancel(context.Background())
defer cancelCtx()
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
chatID := uuid.New()
chat := database.Chat{ID: chatID, Status: database.ChatStatusPending}
initialMessage := database.ChatMessage{
ID: 1,
ChatID: chatID,
Role: database.ChatMessageRoleUser,
}
catchupMessage := database.ChatMessage{
ID: 2,
ChatID: chatID,
Role: database.ChatMessageRoleAssistant,
}
gomock.InOrder(
db.EXPECT().GetChatMessagesByChatID(gomock.Any(), database.GetChatMessagesByChatIDParams{
ChatID: chatID,
AfterID: 0,
}).Return([]database.ChatMessage{initialMessage}, nil),
db.EXPECT().GetChatQueuedMessages(gomock.Any(), chatID).Return(nil, nil),
db.EXPECT().GetChatByID(gomock.Any(), chatID).Return(chat, nil),
db.EXPECT().GetChatMessagesByChatID(gomock.Any(), database.GetChatMessagesByChatIDParams{
ChatID: chatID,
AfterID: 1,
}).Return([]database.ChatMessage{catchupMessage}, nil),
)
server := newSubscribeTestServer(t, db)
_, events, cancel, ok := server.Subscribe(ctx, chatID, nil, 0)
require.True(t, ok)
defer cancel()
server.publishChatStreamNotify(chatID, coderdpubsub.ChatStreamNotifyMessage{
AfterMessageID: 1,
})
event := requireStreamMessageEvent(t, events)
require.Equal(t, int64(2), event.Message.ID)
requireNoStreamEvent(t, events, 200*time.Millisecond)
}
func TestSubscribeFullRefreshStillUsesDatabaseCatchup(t *testing.T) {
t.Parallel()
ctx, cancelCtx := context.WithCancel(context.Background())
defer cancelCtx()
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
chatID := uuid.New()
chat := database.Chat{ID: chatID, Status: database.ChatStatusPending}
initialMessage := database.ChatMessage{
ID: 1,
ChatID: chatID,
Role: database.ChatMessageRoleUser,
}
editedMessage := database.ChatMessage{
ID: 1,
ChatID: chatID,
Role: database.ChatMessageRoleUser,
}
gomock.InOrder(
db.EXPECT().GetChatMessagesByChatID(gomock.Any(), database.GetChatMessagesByChatIDParams{
ChatID: chatID,
AfterID: 0,
}).Return([]database.ChatMessage{initialMessage}, nil),
db.EXPECT().GetChatQueuedMessages(gomock.Any(), chatID).Return(nil, nil),
db.EXPECT().GetChatByID(gomock.Any(), chatID).Return(chat, nil),
db.EXPECT().GetChatMessagesByChatID(gomock.Any(), database.GetChatMessagesByChatIDParams{
ChatID: chatID,
AfterID: 0,
}).Return([]database.ChatMessage{editedMessage}, nil),
)
server := newSubscribeTestServer(t, db)
_, events, cancel, ok := server.Subscribe(ctx, chatID, nil, 0)
require.True(t, ok)
defer cancel()
server.publishEditedMessage(chatID, editedMessage)
event := requireStreamMessageEvent(t, events)
require.Equal(t, int64(1), event.Message.ID)
requireNoStreamEvent(t, events, 200*time.Millisecond)
}
func newSubscribeTestServer(t *testing.T, db database.Store) *Server {
t.Helper()
return &Server{
db: db,
logger: slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}),
pubsub: dbpubsub.NewInMemory(),
}
}
func requireStreamMessageEvent(t *testing.T, events <-chan codersdk.ChatStreamEvent) codersdk.ChatStreamEvent {
t.Helper()
select {
case event, ok := <-events:
require.True(t, ok, "chat stream closed before delivering an event")
require.Equal(t, codersdk.ChatStreamEventTypeMessage, event.Type)
require.NotNil(t, event.Message)
return event
case <-time.After(time.Second):
t.Fatal("timed out waiting for chat stream message event")
return codersdk.ChatStreamEvent{}
}
}
func requireNoStreamEvent(t *testing.T, events <-chan codersdk.ChatStreamEvent, wait time.Duration) {
t.Helper()
select {
case event, ok := <-events:
if !ok {
t.Fatal("chat stream closed unexpectedly")
}
t.Fatalf("unexpected chat stream event: %+v", event)
case <-time.After(wait):
}
}
+1463 -132
+32 -6
@@ -42,6 +42,11 @@ type PersistedStep struct {
Content []fantasy.Content
Usage fantasy.Usage
ContextLimit sql.NullInt64
// Runtime is the wall-clock duration of this step,
// covering LLM streaming, tool execution, and retries.
// Zero indicates the duration was not measured (e.g.
// interrupted steps).
Runtime time.Duration
}
// RunOptions configures a single streaming chat loop run.
@@ -122,7 +127,7 @@ func (r stepResult) toResponseMessages() []fantasy.Message {
switch c.GetType() {
case fantasy.ContentTypeText:
text, ok := fantasy.AsContentType[fantasy.TextContent](c)
- if !ok {
+ if !ok || strings.TrimSpace(text.Text) == "" {
continue
}
assistantParts = append(assistantParts, fantasy.TextPart{
@@ -131,7 +136,7 @@ func (r stepResult) toResponseMessages() []fantasy.Message {
})
case fantasy.ContentTypeReasoning:
reasoning, ok := fantasy.AsContentType[fantasy.ReasoningContent](c)
- if !ok {
+ if !ok || strings.TrimSpace(reasoning.Text) == "" {
continue
}
assistantParts = append(assistantParts, fantasy.ReasoningPart{
@@ -260,6 +265,7 @@ func Run(ctx context.Context, opts RunOptions) error {
for step := 0; totalSteps < opts.MaxSteps; step++ {
totalSteps++
stepStart := time.Now()
// Copy messages so that provider-specific caching
// mutations don't leak back to the caller's slice.
// copy copies Message structs by value, so field
@@ -365,6 +371,7 @@ func Run(ctx context.Context, opts RunOptions) error {
Content: result.content,
Usage: result.usage,
ContextLimit: contextLimit,
Runtime: time.Since(stepStart),
}); err != nil {
if errors.Is(err, ErrInterrupted) {
persistInterruptedStep(ctx, opts, &result)
@@ -610,10 +617,12 @@ func processStepStream(
result.providerMetadata = part.ProviderMetadata
case fantasy.StreamPartTypeError:
- // Detect interruption: context canceled with
- // ErrInterrupted as the cause.
- if errors.Is(part.Error, context.Canceled) &&
- 	errors.Is(context.Cause(ctx), ErrInterrupted) {
+ // Detect interruption: the stream may surface the
+ // cancel as context.Canceled or propagate the
+ // ErrInterrupted cause directly, depending on
+ // the provider implementation.
+ if errors.Is(context.Cause(ctx), ErrInterrupted) &&
+ 	(errors.Is(part.Error, context.Canceled) || errors.Is(part.Error, ErrInterrupted)) {
// Flush in-progress content so that
// persistInterruptedStep has access to partial
// text, reasoning, and tool calls that were
@@ -631,6 +640,23 @@ func processStepStream(
}
}
// The stream iterator may stop yielding parts without
// producing a StreamPartTypeError when the context is
// canceled (e.g. some providers close the response body
// silently). Detect this case and flush partial content
// so that persistInterruptedStep can save it.
if ctx.Err() != nil &&
errors.Is(context.Cause(ctx), ErrInterrupted) {
flushActiveState(
&result,
activeTextContent,
activeReasoningContent,
activeToolCalls,
toolNames,
)
return result, ErrInterrupted
}
hasLocalToolCalls := false
for _, tc := range result.toolCalls {
if !tc.ProviderExecuted {
+81
@@ -7,6 +7,7 @@ import (
"strings"
"sync"
"testing"
"time"
"charm.land/fantasy"
fantasyanthropic "charm.land/fantasy/providers/anthropic"
@@ -64,6 +65,8 @@ func TestRun_ActiveToolsPrepareBehavior(t *testing.T) {
require.Equal(t, 1, persistStepCalls)
require.True(t, persistedStep.ContextLimit.Valid)
require.Equal(t, int64(4096), persistedStep.ContextLimit.Int64)
require.Greater(t, persistedStep.Runtime, time.Duration(0),
"step runtime should be positive")
require.NotEmpty(t, capturedCall.Prompt)
require.False(t, containsPromptSentinel(capturedCall.Prompt))
@@ -575,6 +578,84 @@ func TestToResponseMessages_ProviderExecutedToolResultInAssistantMessage(t *test
assert.False(t, localTR.ProviderExecuted)
}
func TestToResponseMessages_FiltersEmptyTextAndReasoningParts(t *testing.T) {
t.Parallel()
sr := stepResult{
content: []fantasy.Content{
// Empty text — should be filtered.
fantasy.TextContent{Text: ""},
// Whitespace-only text — should be filtered.
fantasy.TextContent{Text: " \t\n"},
// Empty reasoning — should be filtered.
fantasy.ReasoningContent{Text: ""},
// Whitespace-only reasoning — should be filtered.
fantasy.ReasoningContent{Text: " \n"},
// Non-empty text — should pass through.
fantasy.TextContent{Text: "hello world"},
// Leading/trailing whitespace with content — kept
// with the original value (not trimmed).
fantasy.TextContent{Text: " hello "},
// Non-empty reasoning — should pass through.
fantasy.ReasoningContent{Text: "let me think"},
// Tool call — should be unaffected by filtering.
fantasy.ToolCallContent{
ToolCallID: "tc-1",
ToolName: "read_file",
Input: `{"path":"main.go"}`,
},
// Local tool result — should be unaffected by filtering.
fantasy.ToolResultContent{
ToolCallID: "tc-1",
ToolName: "read_file",
Result: fantasy.ToolResultOutputContentText{Text: "file contents"},
},
},
}
msgs := sr.toResponseMessages()
require.Len(t, msgs, 2, "expected assistant + tool messages")
// First message: assistant role with non-empty text, reasoning,
// and the tool call. The four empty/whitespace-only parts must
// have been dropped.
assistantMsg := msgs[0]
assert.Equal(t, fantasy.MessageRoleAssistant, assistantMsg.Role)
require.Len(t, assistantMsg.Content, 4,
"assistant message should have two TextParts, one ReasoningPart, and one ToolCallPart")
// Part 0: non-empty text.
textPart, ok := fantasy.AsMessagePart[fantasy.TextPart](assistantMsg.Content[0])
require.True(t, ok, "part 0 should be TextPart")
assert.Equal(t, "hello world", textPart.Text)
// Part 1: padded text — original whitespace preserved.
paddedPart, ok := fantasy.AsMessagePart[fantasy.TextPart](assistantMsg.Content[1])
require.True(t, ok, "part 1 should be TextPart")
assert.Equal(t, " hello ", paddedPart.Text)
// Part 2: non-empty reasoning.
reasoningPart, ok := fantasy.AsMessagePart[fantasy.ReasoningPart](assistantMsg.Content[2])
require.True(t, ok, "part 2 should be ReasoningPart")
assert.Equal(t, "let me think", reasoningPart.Text)
// Part 3: tool call (unaffected by text/reasoning filtering).
toolCallPart, ok := fantasy.AsMessagePart[fantasy.ToolCallPart](assistantMsg.Content[3])
require.True(t, ok, "part 3 should be ToolCallPart")
assert.Equal(t, "tc-1", toolCallPart.ToolCallID)
assert.Equal(t, "read_file", toolCallPart.ToolName)
// Second message: tool role with the local tool result.
toolMsg := msgs[1]
assert.Equal(t, fantasy.MessageRoleTool, toolMsg.Role)
require.Len(t, toolMsg.Content, 1,
"tool message should have only the local ToolResultPart")
toolResultPart, ok := fantasy.AsMessagePart[fantasy.ToolResultPart](toolMsg.Content[0])
require.True(t, ok, "tool part should be ToolResultPart")
assert.Equal(t, "tc-1", toolResultPart.ToolCallID)
}
func hasAnthropicEphemeralCacheControl(message fantasy.Message) bool {
if len(message.ProviderOptions) == 0 {
return false
+215 -3
@@ -139,9 +139,13 @@ func ConvertMessagesWithFiles(
},
})
case codersdk.ChatMessageRoleUser:
userParts := partsToMessageParts(logger, pm.parts, resolved)
if len(userParts) == 0 {
continue
}
prompt = append(prompt, fantasy.Message{
Role: fantasy.MessageRoleUser,
- Content: partsToMessageParts(logger, pm.parts, resolved),
+ Content: userParts,
})
case codersdk.ChatMessageRoleAssistant:
fantasyParts := normalizeAssistantToolCallInputs(
@@ -153,6 +157,9 @@ func ConvertMessagesWithFiles(
}
toolNameByCallID[sanitizeToolCallID(toolCall.ToolCallID)] = toolCall.ToolName
}
if len(fantasyParts) == 0 {
continue
}
prompt = append(prompt, fantasy.Message{
Role: fantasy.MessageRoleAssistant,
Content: fantasyParts,
@@ -166,9 +173,13 @@ func ConvertMessagesWithFiles(
}
}
}
toolParts := partsToMessageParts(logger, pm.parts, resolved)
if len(toolParts) == 0 {
continue
}
prompt = append(prompt, fantasy.Message{
Role: fantasy.MessageRoleTool,
- Content: partsToMessageParts(logger, pm.parts, resolved),
+ Content: toolParts,
})
}
}
@@ -321,6 +332,7 @@ func parseContentV1(role codersdk.ChatMessageRole, raw pqtype.NullRawMessage) ([
if err := json.Unmarshal(raw.RawMessage, &parts); err != nil {
return nil, xerrors.Errorf("parse %s content: %w", role, err)
}
decodeNulInParts(parts)
return parts, nil
}
@@ -1018,11 +1030,16 @@ func sanitizeToolCallID(id string) string {
}
// MarshalParts encodes SDK chat message parts for persistence.
// NUL characters in string fields are encoded as PUA sentinel
// pairs (U+E000 U+E001) before marshaling so the resulting JSON
// never contains \u0000 (rejected by PostgreSQL jsonb). The
// encoding operates on Go string values, not JSON bytes, so it
// survives jsonb text normalization.
func MarshalParts(parts []codersdk.ChatMessagePart) (pqtype.NullRawMessage, error) {
if len(parts) == 0 {
return pqtype.NullRawMessage{}, nil
}
- data, err := json.Marshal(parts)
+ data, err := json.Marshal(encodeNulInParts(parts))
if err != nil {
return pqtype.NullRawMessage{}, xerrors.Errorf("encode chat message parts: %w", err)
}
@@ -1169,11 +1186,23 @@ func partsToMessageParts(
for _, part := range parts {
switch part.Type {
case codersdk.ChatMessagePartTypeText:
// Anthropic rejects empty text content blocks with
// "text content blocks must be non-empty". Empty parts
// can arise when a stream sends TextStart/TextEnd with
// no delta in between. We filter them here rather than
// at persistence time to preserve the raw record.
if strings.TrimSpace(part.Text) == "" {
continue
}
result = append(result, fantasy.TextPart{
Text: part.Text,
ProviderOptions: providerMetadataToOptions(logger, part.ProviderMetadata),
})
case codersdk.ChatMessagePartTypeReasoning:
// Same guard as text parts above.
if strings.TrimSpace(part.Text) == "" {
continue
}
result = append(result, fantasy.ReasoningPart{
Text: part.Text,
ProviderOptions: providerMetadataToOptions(logger, part.ProviderMetadata),
@@ -1216,3 +1245,186 @@ func partsToMessageParts(
}
return result
}
// encodeNulInString replaces NUL (U+0000) characters in s with
// the sentinel pair U+E000 U+E001, and doubles any pre-existing
// U+E000 to U+E000 U+E000 so the encoding is reversible.
// Operates on Unicode code points, not JSON escape sequences,
// making it safe through jsonb round-trips (jsonb stores parsed
// characters, not original escape text).
func encodeNulInString(s string) string {
if !strings.ContainsRune(s, 0) && !strings.ContainsRune(s, '\uE000') {
return s
}
var b strings.Builder
b.Grow(len(s))
for _, r := range s {
switch r {
case '\uE000':
_, _ = b.WriteRune('\uE000')
_, _ = b.WriteRune('\uE000')
case 0:
_, _ = b.WriteRune('\uE000')
_, _ = b.WriteRune('\uE001')
default:
_, _ = b.WriteRune(r)
}
}
return b.String()
}
// decodeNulInString reverses encodeNulInString: U+E000 U+E000
// becomes U+E000, and U+E000 U+E001 becomes NUL.
func decodeNulInString(s string) string {
if !strings.ContainsRune(s, '\uE000') {
return s
}
var b strings.Builder
b.Grow(len(s))
runes := []rune(s)
for i := 0; i < len(runes); i++ {
if runes[i] == '\uE000' && i+1 < len(runes) {
switch runes[i+1] {
case '\uE000':
_, _ = b.WriteRune('\uE000')
i++
case '\uE001':
_, _ = b.WriteRune(0)
i++
default:
// Unpaired sentinel — preserve as-is.
_, _ = b.WriteRune(runes[i])
}
} else {
_, _ = b.WriteRune(runes[i])
}
}
return b.String()
}
// encodeNulInValue recursively walks a JSON value (as produced
// by json.Unmarshal with UseNumber) and applies
// encodeNulInString to every string, including map keys.
func encodeNulInValue(v any) any {
switch val := v.(type) {
case string:
return encodeNulInString(val)
case map[string]any:
out := make(map[string]any, len(val))
for k, elem := range val {
out[encodeNulInString(k)] = encodeNulInValue(elem)
}
return out
case []any:
out := make([]any, len(val))
for i, elem := range val {
out[i] = encodeNulInValue(elem)
}
return out
default:
return v // numbers, bools, nil
}
}
// decodeNulInValue recursively walks a JSON value and applies
// decodeNulInString to every string, including map keys.
func decodeNulInValue(v any) any {
switch val := v.(type) {
case string:
return decodeNulInString(val)
case map[string]any:
out := make(map[string]any, len(val))
for k, elem := range val {
out[decodeNulInString(k)] = decodeNulInValue(elem)
}
return out
case []any:
out := make([]any, len(val))
for i, elem := range val {
out[i] = decodeNulInValue(elem)
}
return out
default:
return v
}
}
// encodeNulInJSON walks all string values (and keys) inside a
// json.RawMessage and applies encodeNulInString. Returns the
// original unchanged when the raw message does not contain NUL
// escapes or U+E000 bytes, or when parsing fails.
func encodeNulInJSON(raw json.RawMessage) json.RawMessage {
if len(raw) == 0 {
return raw
}
// Quick exit: no \u0000 escape and no U+E000 UTF-8 bytes.
if !bytes.Contains(raw, []byte(`\u0000`)) &&
!bytes.Contains(raw, []byte{0xEE, 0x80, 0x80}) {
return raw
}
dec := json.NewDecoder(bytes.NewReader(raw))
dec.UseNumber()
var v any
if err := dec.Decode(&v); err != nil {
return raw
}
result, err := json.Marshal(encodeNulInValue(v))
if err != nil {
return raw
}
return result
}
// decodeNulInJSON walks all string values (and keys) inside a
// json.RawMessage and applies decodeNulInString.
func decodeNulInJSON(raw json.RawMessage) json.RawMessage {
if len(raw) == 0 {
return raw
}
// U+E000 encoded as UTF-8 is 0xEE 0x80 0x80.
if !bytes.Contains(raw, []byte{0xEE, 0x80, 0x80}) {
return raw
}
dec := json.NewDecoder(bytes.NewReader(raw))
dec.UseNumber()
var v any
if err := dec.Decode(&v); err != nil {
return raw
}
result, err := json.Marshal(decodeNulInValue(v))
if err != nil {
return raw
}
return result
}
// encodeNulInParts returns a shallow copy of parts with all
// string and json.RawMessage fields NUL-encoded. The caller's
// slice is not modified.
func encodeNulInParts(parts []codersdk.ChatMessagePart) []codersdk.ChatMessagePart {
encoded := make([]codersdk.ChatMessagePart, len(parts))
copy(encoded, parts)
for i := range encoded {
p := &encoded[i]
p.Text = encodeNulInString(p.Text)
p.Content = encodeNulInString(p.Content)
p.Args = encodeNulInJSON(p.Args)
p.ArgsDelta = encodeNulInString(p.ArgsDelta)
p.Result = encodeNulInJSON(p.Result)
p.ResultDelta = encodeNulInString(p.ResultDelta)
}
return encoded
}
// decodeNulInParts reverses encodeNulInParts in place.
func decodeNulInParts(parts []codersdk.ChatMessagePart) {
for i := range parts {
p := &parts[i]
p.Text = decodeNulInString(p.Text)
p.Content = decodeNulInString(p.Content)
p.Args = decodeNulInJSON(p.Args)
p.ArgsDelta = decodeNulInString(p.ArgsDelta)
p.Result = decodeNulInJSON(p.Result)
p.ResultDelta = decodeNulInString(p.ResultDelta)
}
}
+327
@@ -17,7 +17,10 @@ import (
"github.com/coder/coder/v2/coderd/chatd/chatprompt"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/db2sdk"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
)
// testMsg builds a database.ChatMessage for ParseContent tests.
@@ -1441,3 +1444,327 @@ func extractToolResultIDs(t *testing.T, msgs ...fantasy.Message) []string {
}
return ids
}
func TestNulEscapeRoundTrip(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := testutil.Context(t, testutil.WaitShort)
// Seed minimal dependencies for the DB round-trip path:
// user, provider, model config, chat.
user := dbgen.User(t, db, database.User{})
_, err := db.InsertChatProvider(ctx, database.InsertChatProviderParams{
Provider: "openai",
DisplayName: "openai",
APIKey: "test-key",
CreatedBy: uuid.NullUUID{UUID: user.ID, Valid: true},
Enabled: true,
})
require.NoError(t, err)
model, err := db.InsertChatModelConfig(ctx, database.InsertChatModelConfigParams{
Provider: "openai",
Model: "gpt-4o-mini",
DisplayName: "Test Model",
CreatedBy: uuid.NullUUID{UUID: user.ID, Valid: true},
UpdatedBy: uuid.NullUUID{UUID: user.ID, Valid: true},
Enabled: true,
IsDefault: true,
ContextLimit: 128000,
CompressionThreshold: 70,
Options: json.RawMessage(`{}`),
})
require.NoError(t, err)
chat, err := db.InsertChat(ctx, database.InsertChatParams{
OwnerID: user.ID,
LastModelConfigID: model.ID,
Title: "nul-roundtrip-test",
})
require.NoError(t, err)
textTests := []struct {
name string
input string
hasNul bool // Whether the input contains actual NUL bytes.
}{
// --- basic ---
{"NoNul", "hello world", false},
{"SingleNul", "a\x00b", true},
{"MultipleNuls", "a\x00b\x00c", true},
{"ConsecutiveNuls", "\x00\x00\x00", true},
// --- boundaries ---
{"EmptyString", "", false},
{"NulOnly", "\x00", true},
{"NulAtStart", "\x00hello", true},
{"NulAtEnd", "hello\x00", true},
// --- sentinel / marker in original data ---
// U+E000 is the sentinel character. The encoder must
// double it so it round-trips without being mistaken
// for an encoded NUL.
{"SentinelInOriginal", "a\uE000b", false},
{"ConsecutiveSentinels", "\uE000\uE000\uE000", false},
// U+E001 is the marker character used in the NUL pair.
{"MarkerCharInOriginal", "a\uE001b", false},
// U+E000 followed by U+E001 looks exactly like an
// encoded NUL in the encoded form, so the encoder must
// double the U+E000 to avoid confusion.
{"SentinelThenMarkerChar", "\uE000\uE001", false},
{"NulAndSentinel", "a\x00b\uE000c", true},
// Both orders: sentinel adjacent to NUL.
{"SentinelThenNul", "\uE000\x00", true},
{"NulThenSentinel", "\x00\uE000", true},
{"AlternatingSentinelNul", "\x00\uE000\x00\uE000", true},
// --- strings containing backslashes ---
// Backslashes are normal characters at the Go string
// level; no special handling needed (unlike the old
// JSON-byte approach).
{"BackslashU0000Text", "\\u0000", false},
{"BackslashThenNul", "\\\x00", true},
// --- literal text that looks like escape patterns ---
{"LiteralTextU0000", "the value is u0000 here", false},
{"LiteralTextUE000", "sentinel uE000 text", false},
// --- other control characters mixed with NUL ---
{"ControlCharsMixedWithNul", "\x01\x00\x02\x00\x1f", true},
// --- long / stress ---
{"LongNulRun", "\x00\x00\x00\x00\x00\x00\x00\x00", true},
// Simulated find -print0 output.
{"FindPrint0", "/usr/bin/ls\x00/usr/bin/cat\x00/usr/bin/grep\x00", true},
}
for _, tc := range textTests {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
parts := []codersdk.ChatMessagePart{
codersdk.ChatMessageText(tc.input),
}
encoded, err := chatprompt.MarshalParts(parts)
require.NoError(t, err)
// When the input has real NUL bytes, the stored JSON
// must not contain the \u0000 escape sequence.
if tc.hasNul {
require.NotContains(t, string(encoded.RawMessage), `\u0000`,
"encoded JSON must not contain \\u0000")
}
// In-memory round-trip through ParseContent.
msg := testMsgV1(codersdk.ChatMessageRoleAssistant, encoded)
decoded, err := chatprompt.ParseContent(msg)
require.NoError(t, err)
require.Len(t, decoded, 1)
require.Equal(t, tc.input, decoded[0].Text)
// Full DB round-trip: write to PostgreSQL jsonb, read
// back, and verify the value survives storage.
ctx := testutil.Context(t, testutil.WaitShort)
dbMsgs, err := db.InsertChatMessages(ctx, database.InsertChatMessagesParams{
ChatID: chat.ID,
CreatedBy: []uuid.UUID{user.ID},
ModelConfigID: []uuid.UUID{model.ID},
Role: []database.ChatMessageRole{database.ChatMessageRoleAssistant},
Content: []string{string(encoded.RawMessage)},
ContentVersion: []int16{chatprompt.CurrentContentVersion},
Visibility: []database.ChatMessageVisibility{database.ChatMessageVisibilityBoth},
InputTokens: []int64{0},
OutputTokens: []int64{0},
TotalTokens: []int64{0},
ReasoningTokens: []int64{0},
CacheCreationTokens: []int64{0},
CacheReadTokens: []int64{0},
ContextLimit: []int64{0},
Compressed: []bool{false},
TotalCostMicros: []int64{0},
RuntimeMs: []int64{0},
})
require.NoError(t, err)
require.Len(t, dbMsgs, 1)
readBack, err := db.GetChatMessageByID(ctx, dbMsgs[0].ID)
require.NoError(t, err)
dbDecoded, err := chatprompt.ParseContent(readBack)
require.NoError(t, err)
require.Len(t, dbDecoded, 1)
require.Equal(t, tc.input, dbDecoded[0].Text)
})
}
// Tool result with NUL in the result JSON value.
t.Run("ToolResultWithNul", func(t *testing.T) {
t.Parallel()
resultJSON := json.RawMessage(`"output:\u0000done"`)
parts := []codersdk.ChatMessagePart{
codersdk.ChatMessageToolResult("call-1", "my_tool", resultJSON, false),
}
encoded, err := chatprompt.MarshalParts(parts)
require.NoError(t, err)
require.NotContains(t, string(encoded.RawMessage), `\u0000`,
"encoded JSON must not contain \\u0000")
msg := testMsgV1(codersdk.ChatMessageRoleTool, encoded)
decoded, err := chatprompt.ParseContent(msg)
require.NoError(t, err)
require.Len(t, decoded, 1)
// JSON re-serialization may reformat, so compare
// semantically.
assert.JSONEq(t, string(resultJSON), string(decoded[0].Result))
})
// Multiple parts in one message: one with NUL, one without.
t.Run("MultiPartMixed", func(t *testing.T) {
t.Parallel()
parts := []codersdk.ChatMessagePart{
codersdk.ChatMessageText("clean text"),
codersdk.ChatMessageText("has\x00nul"),
}
encoded, err := chatprompt.MarshalParts(parts)
require.NoError(t, err)
require.NotContains(t, string(encoded.RawMessage), `\u0000`,
"encoded JSON must not contain \\u0000")
msg := testMsgV1(codersdk.ChatMessageRoleAssistant, encoded)
decoded, err := chatprompt.ParseContent(msg)
require.NoError(t, err)
require.Len(t, decoded, 2)
require.Equal(t, "clean text", decoded[0].Text)
require.Equal(t, "has\x00nul", decoded[1].Text)
})
}
func TestConvertMessagesWithFiles_FiltersEmptyTextAndReasoningParts(t *testing.T) {
t.Parallel()
// Helper to build a DB message from SDK parts.
makeMsg := func(t *testing.T, role database.ChatMessageRole, parts []codersdk.ChatMessagePart) database.ChatMessage {
t.Helper()
encoded, err := chatprompt.MarshalParts(parts)
require.NoError(t, err)
return database.ChatMessage{
Role: role,
Visibility: database.ChatMessageVisibilityBoth,
Content: encoded,
ContentVersion: chatprompt.CurrentContentVersion,
}
}
t.Run("UserRole", func(t *testing.T) {
t.Parallel()
parts := []codersdk.ChatMessagePart{
codersdk.ChatMessageText(""), // empty — filtered
codersdk.ChatMessageText(" \t\n "), // whitespace — filtered
codersdk.ChatMessageReasoning(""), // empty — filtered
codersdk.ChatMessageReasoning(" \n"), // whitespace — filtered
codersdk.ChatMessageText("hello"), // kept
codersdk.ChatMessageText(" hello "), // kept with original whitespace
codersdk.ChatMessageReasoning("thinking deeply"), // kept
codersdk.ChatMessageToolCall("call-1", "my_tool", json.RawMessage(`{"x":1}`)),
codersdk.ChatMessageToolResult("call-1", "my_tool", json.RawMessage(`{"ok":true}`), false),
}
prompt, err := chatprompt.ConvertMessagesWithFiles(
context.Background(),
[]database.ChatMessage{makeMsg(t, database.ChatMessageRoleUser, parts)},
nil,
slogtest.Make(t, nil),
)
require.NoError(t, err)
require.Len(t, prompt, 1)
resultParts := prompt[0].Content
require.Len(t, resultParts, 5, "expected 5 parts after filtering empty text/reasoning")
textPart, ok := fantasy.AsMessagePart[fantasy.TextPart](resultParts[0])
require.True(t, ok, "expected TextPart at index 0")
require.Equal(t, "hello", textPart.Text)
// Leading/trailing whitespace is preserved — only
// all-whitespace parts are dropped.
paddedPart, ok := fantasy.AsMessagePart[fantasy.TextPart](resultParts[1])
require.True(t, ok, "expected TextPart at index 1")
require.Equal(t, " hello ", paddedPart.Text)
reasoningPart, ok := fantasy.AsMessagePart[fantasy.ReasoningPart](resultParts[2])
require.True(t, ok, "expected ReasoningPart at index 2")
require.Equal(t, "thinking deeply", reasoningPart.Text)
toolCallPart, ok := fantasy.AsMessagePart[fantasy.ToolCallPart](resultParts[3])
require.True(t, ok, "expected ToolCallPart at index 3")
require.Equal(t, "call-1", toolCallPart.ToolCallID)
toolResultPart, ok := fantasy.AsMessagePart[fantasy.ToolResultPart](resultParts[4])
require.True(t, ok, "expected ToolResultPart at index 4")
require.Equal(t, "call-1", toolResultPart.ToolCallID)
})
t.Run("AssistantRole", func(t *testing.T) {
t.Parallel()
parts := []codersdk.ChatMessagePart{
codersdk.ChatMessageText(""), // empty — filtered
codersdk.ChatMessageText(" "), // whitespace — filtered
codersdk.ChatMessageReasoning(""), // empty — filtered
codersdk.ChatMessageText(" reply "), // kept with whitespace
codersdk.ChatMessageToolCall("tc-1", "read_file", json.RawMessage(`{"path":"x"}`)),
}
prompt, err := chatprompt.ConvertMessagesWithFiles(
context.Background(),
[]database.ChatMessage{makeMsg(t, database.ChatMessageRoleAssistant, parts)},
nil,
slogtest.Make(t, nil),
)
require.NoError(t, err)
// 2 messages: assistant + synthetic tool result injected
// by injectMissingToolResults for the unmatched tool call.
require.Len(t, prompt, 2)
resultParts := prompt[0].Content
require.Len(t, resultParts, 2, "expected text + tool-call after filtering")
textPart, ok := fantasy.AsMessagePart[fantasy.TextPart](resultParts[0])
require.True(t, ok, "expected TextPart")
require.Equal(t, " reply ", textPart.Text)
tcPart, ok := fantasy.AsMessagePart[fantasy.ToolCallPart](resultParts[1])
require.True(t, ok, "expected ToolCallPart")
require.Equal(t, "tc-1", tcPart.ToolCallID)
})
t.Run("AllEmptyDropsMessage", func(t *testing.T) {
t.Parallel()
// When every part is filtered, the message itself should
// be dropped rather than appending an empty-content message.
parts := []codersdk.ChatMessagePart{
codersdk.ChatMessageText(""),
codersdk.ChatMessageText(" "),
codersdk.ChatMessageReasoning(""),
}
prompt, err := chatprompt.ConvertMessagesWithFiles(
context.Background(),
[]database.ChatMessage{makeMsg(t, database.ChatMessageRoleAssistant, parts)},
nil,
slogtest.Make(t, nil),
)
require.NoError(t, err)
require.Empty(t, prompt, "all-empty message should be dropped entirely")
})
}
+9 -1
@@ -1083,6 +1083,7 @@ func openAIProviderOptionsFromChatConfig(
SafetyIdentifier: normalizedStringPointer(options.SafetyIdentifier),
ServiceTier: openAIServiceTierFromChat(options.ServiceTier),
StrictJSONSchema: options.StrictJSONSchema,
Store: boolPtrOrDefault(options.Store, true),
TextVerbosity: OpenAITextVerbosityFromChat(options.TextVerbosity),
User: normalizedStringPointer(options.User),
}
@@ -1099,7 +1100,7 @@ func openAIProviderOptionsFromChatConfig(
MaxCompletionTokens: options.MaxCompletionTokens,
TextVerbosity: normalizedStringPointer(options.TextVerbosity),
Prediction: options.Prediction,
- Store: options.Store,
+ Store: boolPtrOrDefault(options.Store, true),
Metadata: options.Metadata,
PromptCacheKey: normalizedStringPointer(options.PromptCacheKey),
SafetyIdentifier: normalizedStringPointer(options.SafetyIdentifier),
@@ -1280,6 +1281,13 @@ func useOpenAIResponsesOptions(model fantasy.LanguageModel) bool {
}
}
func boolPtrOrDefault(value *bool, def bool) *bool {
if value != nil {
return value
}
return &def
}
func normalizedStringPointer(value *string) *string {
if value == nil {
return nil
@@ -82,7 +82,7 @@ func TestMergeMissingProviderOptions_OpenRouterNested(t *testing.T) {
options := &codersdk.ChatModelProviderOptions{
OpenRouter: &codersdk.ChatModelOpenRouterProviderOptions{
- Reasoning: &codersdk.ChatModelOpenRouterReasoningOptions{
+ Reasoning: &codersdk.ChatModelReasoningOptions{
Enabled: boolPtr(true),
},
Provider: &codersdk.ChatModelOpenRouterProvider{
@@ -92,7 +92,7 @@ func TestMergeMissingProviderOptions_OpenRouterNested(t *testing.T) {
}
defaults := &codersdk.ChatModelProviderOptions{
OpenRouter: &codersdk.ChatModelOpenRouterProviderOptions{
- Reasoning: &codersdk.ChatModelOpenRouterReasoningOptions{
+ Reasoning: &codersdk.ChatModelReasoningOptions{
Enabled: boolPtr(false),
Exclude: boolPtr(true),
MaxTokens: int64Ptr(123),
+15 -5
@@ -78,10 +78,10 @@ type ProcessToolOptions struct {
// ExecuteArgs are the parameters accepted by the execute tool.
type ExecuteArgs struct {
- Command string `json:"command"`
- Timeout *string `json:"timeout,omitempty"`
- WorkDir *string `json:"workdir,omitempty"`
- RunInBackground *bool `json:"run_in_background,omitempty"`
+ Command string `json:"command" description:"The shell command to execute."`
+ Timeout *string `json:"timeout,omitempty" description:"Timeout duration (e.g. '30s', '5m'). Default is 10s. Only applies to foreground commands."`
+ WorkDir *string `json:"workdir,omitempty" description:"Working directory for the command."`
+ RunInBackground *bool `json:"run_in_background,omitempty" description:"Run this command in the background without blocking. Use for long-running processes like dev servers, file watchers, or builds that run longer than 5 seconds. Do NOT use shell & to background processes — it will not work correctly. Always use this parameter instead."`
}
// Execute returns an AgentTool that runs a shell command in the
@@ -89,7 +89,7 @@ type ExecuteArgs struct {
func Execute(options ExecuteOptions) fantasy.AgentTool {
return fantasy.NewAgentTool(
"execute",
- "Execute a shell command in the workspace.",
+ "Execute a shell command in the workspace. Use run_in_background=true for long-running processes (dev servers, file watchers, builds). Never use shell '&' for backgrounding.",
func(ctx context.Context, args ExecuteArgs, _ fantasy.ToolCall) (fantasy.ToolResponse, error) {
if options.GetWorkspaceConn == nil {
return fantasy.NewTextErrorResponse("workspace connection resolver is not configured"), nil
@@ -122,6 +122,16 @@ func executeTool(
background := args.RunInBackground != nil && *args.RunInBackground
// Detect shell-style backgrounding (trailing &) and promote to
// background mode. Models sometimes use "cmd &" instead of the
// run_in_background parameter, which causes the shell to fork
// and exit immediately, leaving an untracked orphan process.
trimmed := strings.TrimSpace(args.Command)
if !background && strings.HasSuffix(trimmed, "&") && !strings.HasSuffix(trimmed, "&&") {
background = true
args.Command = strings.TrimSpace(strings.TrimSuffix(trimmed, "&"))
}
var workDir string
if args.WorkDir != nil {
workDir = *args.WorkDir
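The trailing-`&` promotion in that hunk can be sketched standalone. `promoteBackground` is a hypothetical helper mirroring the inline logic: a single trailing `&` flips the call into background mode and is stripped from the command, while `&&` (a shell AND list) is left untouched:

```go
package main

import (
	"fmt"
	"strings"
)

// promoteBackground detects shell-style backgrounding (a single
// trailing "&") and promotes it to explicit background mode,
// returning the cleaned command. A trailing "&&" is a shell AND
// list, not backgrounding, so it is left alone.
func promoteBackground(command string) (string, bool) {
	trimmed := strings.TrimSpace(command)
	if strings.HasSuffix(trimmed, "&") && !strings.HasSuffix(trimmed, "&&") {
		return strings.TrimSpace(strings.TrimSuffix(trimmed, "&")), true
	}
	return command, false
}

func main() {
	fmt.Println(promoteBackground("npm run dev &"))    // "npm run dev" true
	fmt.Println(promoteBackground("make && make test")) // unchanged, false
}
```

This keeps orphan processes tracked even when a model ignores the `run_in_background` parameter and emits `cmd &` instead.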
+152 -2
@@ -92,7 +92,7 @@ func TestAnthropicWebSearchRoundTrip(t *testing.T) {
// Verify the chat completed and messages were persisted.
chatData, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
- chatMsgs, err := client.GetChatMessages(ctx, chat.ID)
+ chatMsgs, err := client.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
t.Logf("Chat status after step 1: %s, messages: %d",
chatData.Status, len(chatMsgs.Messages))
@@ -154,7 +154,7 @@ func TestAnthropicWebSearchRoundTrip(t *testing.T) {
// Verify the follow-up completed and produced content.
chatData2, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
- chatMsgs2, err := client.GetChatMessages(ctx, chat.ID)
+ chatMsgs2, err := client.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
t.Logf("Chat status after step 2: %s, messages: %d",
chatData2.Status, len(chatMsgs2.Messages))
@@ -272,6 +272,156 @@ func logMessages(t *testing.T, msgs []codersdk.ChatMessage) {
}
}
// TestOpenAIReasoningRoundTrip is an integration test that verifies
// reasoning items from OpenAI's Responses API survive the full
// persist → reconstruct → re-send cycle when Store: true. It sends
// a query to a reasoning model, waits for completion, then sends a
// follow-up message. If reasoning items are sent back without their
// required following output item, the API rejects the second request:
//
// Item 'rs_xxx' of type 'reasoning' was provided without its
// required following item.
//
// The test requires OPENAI_API_KEY to be set.
func TestOpenAIReasoningRoundTrip(t *testing.T) {
t.Parallel()
apiKey := os.Getenv("OPENAI_API_KEY")
if apiKey == "" {
t.Skip("OPENAI_API_KEY not set; skipping OpenAI integration test")
}
baseURL := os.Getenv("OPENAI_BASE_URL")
ctx := testutil.Context(t, testutil.WaitSuperLong)
// Stand up a full coderd with the agents experiment.
deploymentValues := coderdtest.DeploymentValues(t)
deploymentValues.Experiments = []string{string(codersdk.ExperimentAgents)}
client := coderdtest.New(t, &coderdtest.Options{
DeploymentValues: deploymentValues,
})
_ = coderdtest.CreateFirstUser(t, client)
// Configure an OpenAI provider with the real API key.
_, err := client.CreateChatProvider(ctx, codersdk.CreateChatProviderConfigRequest{
Provider: "openai",
APIKey: apiKey,
BaseURL: baseURL,
})
require.NoError(t, err)
// Create a model config for a reasoning model with Store: true
// (the default). Using o4-mini because it always produces
// reasoning items.
contextLimit := int64(200000)
isDefault := true
reasoningSummary := "auto"
_, err = client.CreateChatModelConfig(ctx, codersdk.CreateChatModelConfigRequest{
Provider: "openai",
Model: "o4-mini",
ContextLimit: &contextLimit,
IsDefault: &isDefault,
ModelConfig: &codersdk.ChatModelCallConfig{
ProviderOptions: &codersdk.ChatModelProviderOptions{
OpenAI: &codersdk.ChatModelOpenAIProviderOptions{
Store: ptr.Ref(true),
ReasoningSummary: &reasoningSummary,
},
},
},
})
require.NoError(t, err)
// --- Step 1: Send a message that triggers reasoning ---
t.Log("Creating chat with reasoning query...")
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "What is 2+2? Be brief.",
},
},
})
require.NoError(t, err)
t.Logf("Chat created: %s (status=%s)", chat.ID, chat.Status)
// Stream events until the chat reaches a terminal status.
events, closer, err := client.StreamChat(ctx, chat.ID, nil)
require.NoError(t, err)
defer closer.Close()
waitForChatDone(ctx, t, events, "step 1")
// Verify the chat completed and messages were persisted.
chatData, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
chatMsgs, err := client.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
t.Logf("Chat status after step 1: %s, messages: %d",
chatData.Status, len(chatMsgs.Messages))
logMessages(t, chatMsgs.Messages)
require.Equal(t, codersdk.ChatStatusWaiting, chatData.Status,
"chat should be in waiting status after step 1")
// Verify the assistant message has reasoning content.
assistantMsg := findAssistantWithText(t, chatMsgs.Messages)
require.NotNil(t, assistantMsg,
"expected an assistant message with text content after step 1")
partTypes := partTypeSet(assistantMsg.Content)
require.Contains(t, partTypes, codersdk.ChatMessagePartTypeReasoning,
"assistant message should contain reasoning parts from o4-mini")
require.Contains(t, partTypes, codersdk.ChatMessagePartTypeText,
"assistant message should contain a text part")
// --- Step 2: Send a follow-up message ---
// This is the critical test: if reasoning items are sent back
// without their required following item, the API will reject
// the request with:
// Item 'rs_xxx' of type 'reasoning' was provided without its
// required following item.
t.Log("Sending follow-up message...")
_, err = client.CreateChatMessage(ctx, chat.ID,
codersdk.CreateChatMessageRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "And what is 3+3? Be brief.",
},
},
})
require.NoError(t, err)
// Stream the follow-up response.
events2, closer2, err := client.StreamChat(ctx, chat.ID, nil)
require.NoError(t, err)
defer closer2.Close()
waitForChatDone(ctx, t, events2, "step 2")
// Verify the follow-up completed and produced content.
chatData2, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
chatMsgs2, err := client.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
t.Logf("Chat status after step 2: %s, messages: %d",
chatData2.Status, len(chatMsgs2.Messages))
logMessages(t, chatMsgs2.Messages)
require.Equal(t, codersdk.ChatStatusWaiting, chatData2.Status,
"chat should be in waiting status after step 2")
require.Greater(t, len(chatMsgs2.Messages), len(chatMsgs.Messages),
"follow-up should have added more messages")
// The last assistant message should have text.
lastAssistant := findLastAssistantWithText(t, chatMsgs2.Messages)
require.NotNil(t, lastAssistant,
"expected an assistant message with text in the follow-up")
t.Log("OpenAI reasoning round-trip test passed.")
}
// partTypeSet returns the set of part types present in a message.
func partTypeSet(parts []codersdk.ChatMessagePart) map[codersdk.ChatMessagePartType]struct{} {
set := make(map[codersdk.ChatMessagePartType]struct{}, len(parts))
+3 -1
@@ -62,6 +62,7 @@ func (p *Server) maybeGenerateChatTitle(
messages []database.ChatMessage,
fallbackModel fantasy.LanguageModel,
keys chatprovider.ProviderAPIKeys,
generatedTitle *generatedChatTitle,
logger slog.Logger,
) {
input, ok := titleInput(chat, messages)
@@ -111,7 +112,8 @@ func (p *Server) maybeGenerateChatTitle(
return
}
chat.Title = title
-p.publishChatPubsubEvent(chat, coderdpubsub.ChatEventKindTitleChange)
+generatedTitle.Store(title)
+p.publishChatPubsubEvent(chat, coderdpubsub.ChatEventKindTitleChange, nil)
return
}
+10 -3
@@ -84,6 +84,14 @@ func (p *Server) isAnthropicConfigured(ctx context.Context) bool {
return false
}
func (p *Server) isDesktopEnabled(ctx context.Context) bool {
enabled, err := p.db.GetChatDesktopEnabled(ctx)
if err != nil {
return false
}
return enabled
}
func (p *Server) subagentTools(ctx context.Context, currentChat func() database.Chat) []fantasy.AgentTool {
tools := []fantasy.AgentTool{
fantasy.NewAgentTool(
@@ -253,9 +261,8 @@ func (p *Server) subagentTools(ctx context.Context, currentChat func() database.
}
// Only include the computer use tool when an Anthropic
-// provider is configured, since it requires an Anthropic
-// model.
-if p.isAnthropicConfigured(ctx) {
+// provider is configured and desktop is enabled.
+if p.isAnthropicConfigured(ctx) && p.isDesktopEnabled(ctx) {
tools = append(tools, fantasy.NewAgentTool(
"spawn_computer_use_agent",
"Spawn a dedicated computer use agent that can see the desktop "+
+173 -3
@@ -15,6 +15,7 @@ import (
"github.com/coder/coder/v2/coderd/chatd/chatprovider"
"github.com/coder/coder/v2/coderd/chatd/chattool"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/pubsub"
@@ -144,14 +145,20 @@ func findToolByName(tools []fantasy.AgentTool, name string) fantasy.AgentTool {
return nil
}
func chatdTestContext(t *testing.T) context.Context {
t.Helper()
return dbauthz.AsChatd(testutil.Context(t, testutil.WaitLong))
}
func TestSpawnComputerUseAgent_NoAnthropicProvider(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
require.NoError(t, db.UpsertChatDesktopEnabled(chatdTestContext(t), true))
// No Anthropic key in ProviderAPIKeys.
server := newInternalTestServer(t, db, ps, chatprovider.ProviderAPIKeys{})
-ctx := testutil.Context(t, testutil.WaitLong)
+ctx := chatdTestContext(t)
user, model := seedInternalChatDeps(ctx, t, db)
// Create a root parent chat.
@@ -176,12 +183,13 @@ func TestSpawnComputerUseAgent_NotAvailableForChildChats(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
require.NoError(t, db.UpsertChatDesktopEnabled(chatdTestContext(t), true))
// Provide an Anthropic key so the provider check passes.
server := newInternalTestServer(t, db, ps, chatprovider.ProviderAPIKeys{
Anthropic: "test-anthropic-key",
})
-ctx := testutil.Context(t, testutil.WaitLong)
+ctx := chatdTestContext(t)
user, model := seedInternalChatDeps(ctx, t, db)
// Create a root parent chat.
@@ -232,16 +240,42 @@ func TestSpawnComputerUseAgent_NotAvailableForChildChats(t *testing.T) {
assert.Contains(t, resp.Content, "delegated chats cannot create child subagents")
}
func TestSpawnComputerUseAgent_DesktopDisabled(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
server := newInternalTestServer(t, db, ps, chatprovider.ProviderAPIKeys{
Anthropic: "test-anthropic-key",
})
ctx := chatdTestContext(t)
user, model := seedInternalChatDeps(ctx, t, db)
parent, err := server.CreateChat(ctx, CreateOptions{
OwnerID: user.ID,
Title: "parent-desktop-disabled",
ModelConfigID: model.ID,
InitialUserContent: []codersdk.ChatMessagePart{codersdk.ChatMessageText("hello")},
})
require.NoError(t, err)
parentChat, err := db.GetChatByID(ctx, parent.ID)
require.NoError(t, err)
tools := server.subagentTools(ctx, func() database.Chat { return parentChat })
tool := findToolByName(tools, "spawn_computer_use_agent")
assert.Nil(t, tool, "spawn_computer_use_agent tool must be omitted when desktop is disabled")
}
func TestSpawnComputerUseAgent_UsesComputerUseModelNotParent(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
require.NoError(t, db.UpsertChatDesktopEnabled(chatdTestContext(t), true))
// Provide an Anthropic key so the tool can proceed.
server := newInternalTestServer(t, db, ps, chatprovider.ProviderAPIKeys{
Anthropic: "test-anthropic-key",
})
-ctx := testutil.Context(t, testutil.WaitLong)
+ctx := chatdTestContext(t)
user, model := seedInternalChatDeps(ctx, t, db)
// The parent uses an OpenAI model.
@@ -298,3 +332,139 @@ func TestSpawnComputerUseAgent_UsesComputerUseModelNotParent(t *testing.T) {
assert.Equal(t, "anthropic", chattool.ComputerUseModelProvider)
assert.NotEmpty(t, chattool.ComputerUseModelName)
}
func TestIsSubagentDescendant(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
server := newInternalTestServer(t, db, ps, chatprovider.ProviderAPIKeys{})
ctx := chatdTestContext(t)
user, model := seedInternalChatDeps(ctx, t, db)
// Build a chain: root -> child -> grandchild.
root, err := server.CreateChat(ctx, CreateOptions{
OwnerID: user.ID,
Title: "root",
ModelConfigID: model.ID,
InitialUserContent: []codersdk.ChatMessagePart{codersdk.ChatMessageText("root")},
})
require.NoError(t, err)
child, err := server.CreateChat(ctx, CreateOptions{
OwnerID: user.ID,
ParentChatID: uuid.NullUUID{
UUID: root.ID,
Valid: true,
},
RootChatID: uuid.NullUUID{
UUID: root.ID,
Valid: true,
},
Title: "child",
ModelConfigID: model.ID,
InitialUserContent: []codersdk.ChatMessagePart{codersdk.ChatMessageText("child")},
})
require.NoError(t, err)
grandchild, err := server.CreateChat(ctx, CreateOptions{
OwnerID: user.ID,
ParentChatID: uuid.NullUUID{
UUID: child.ID,
Valid: true,
},
RootChatID: uuid.NullUUID{
UUID: root.ID,
Valid: true,
},
Title: "grandchild",
ModelConfigID: model.ID,
InitialUserContent: []codersdk.ChatMessagePart{codersdk.ChatMessageText("grandchild")},
})
require.NoError(t, err)
// Build a separate, unrelated chain.
unrelated, err := server.CreateChat(ctx, CreateOptions{
OwnerID: user.ID,
Title: "unrelated-root",
ModelConfigID: model.ID,
InitialUserContent: []codersdk.ChatMessagePart{codersdk.ChatMessageText("unrelated")},
})
require.NoError(t, err)
unrelatedChild, err := server.CreateChat(ctx, CreateOptions{
OwnerID: user.ID,
ParentChatID: uuid.NullUUID{
UUID: unrelated.ID,
Valid: true,
},
RootChatID: uuid.NullUUID{
UUID: unrelated.ID,
Valid: true,
},
Title: "unrelated-child",
ModelConfigID: model.ID,
InitialUserContent: []codersdk.ChatMessagePart{codersdk.ChatMessageText("unrelated-child")},
})
require.NoError(t, err)
tests := []struct {
name string
ancestor uuid.UUID
target uuid.UUID
want bool
}{
{
name: "SameID",
ancestor: root.ID,
target: root.ID,
want: false,
},
{
name: "DirectChild",
ancestor: root.ID,
target: child.ID,
want: true,
},
{
name: "GrandChild",
ancestor: root.ID,
target: grandchild.ID,
want: true,
},
{
name: "Unrelated",
ancestor: root.ID,
target: unrelatedChild.ID,
want: false,
},
{
name: "RootChat",
ancestor: child.ID,
target: root.ID,
want: false,
},
{
name: "BrokenChain",
ancestor: root.ID,
target: uuid.New(),
want: false,
},
{
name: "NotDescendant",
ancestor: unrelated.ID,
target: child.ID,
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ctx := chatdTestContext(t)
got, err := isSubagentDescendant(ctx, db, tt.ancestor, tt.target)
require.NoError(t, err)
assert.Equal(t, tt.want, got)
})
}
}
+128
@@ -0,0 +1,128 @@
package chatd
import (
"context"
"database/sql"
"errors"
"fmt"
"time"
"github.com/google/uuid"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/codersdk"
)
// ComputeUsagePeriodBounds returns the UTC-aligned start and end bounds for the
// active usage-limit period containing now.
func ComputeUsagePeriodBounds(now time.Time, period codersdk.ChatUsageLimitPeriod) (start, end time.Time) {
utcNow := now.UTC()
switch period {
case codersdk.ChatUsageLimitPeriodDay:
start = time.Date(utcNow.Year(), utcNow.Month(), utcNow.Day(), 0, 0, 0, 0, time.UTC)
end = start.AddDate(0, 0, 1)
case codersdk.ChatUsageLimitPeriodWeek:
// Walk backward to Monday of the current ISO week.
// ISO 8601 weeks always start on Monday, so this never
// crosses an ISO-week boundary.
start = time.Date(utcNow.Year(), utcNow.Month(), utcNow.Day(), 0, 0, 0, 0, time.UTC)
for start.Weekday() != time.Monday {
start = start.AddDate(0, 0, -1)
}
end = start.AddDate(0, 0, 7)
case codersdk.ChatUsageLimitPeriodMonth:
start = time.Date(utcNow.Year(), utcNow.Month(), 1, 0, 0, 0, 0, time.UTC)
end = start.AddDate(0, 1, 0)
default:
panic(fmt.Sprintf("unknown chat usage limit period: %q", period))
}
return start, end
}
// ResolveUsageLimitStatus resolves the current usage-limit status for userID.
//
// Note: There is a potential race condition where two concurrent messages
// from the same user can both pass the limit check if processed in
// parallel, allowing brief overage. This is acceptable because:
// - Cost is only known after the LLM API returns.
// - Overage is bounded by message cost × concurrency.
// - Fail-open is the deliberate design choice for this feature.
//
// Architecture note: today this path enforces one period globally
// (day/week/month) from config.
// To support simultaneous periods, add nullable
// daily/weekly/monthly_limit_micros columns on override tables, where NULL
// means no limit for that period.
// Then scan spend once over the widest active window with conditional SUMs
// for each period and compare each spend/limit pair Go-side, blocking on
// whichever period is tightest.
func ResolveUsageLimitStatus(ctx context.Context, db database.Store, userID uuid.UUID, now time.Time) (*codersdk.ChatUsageLimitStatus, error) {
//nolint:gocritic // AsChatd provides narrowly-scoped daemon access for
// deployment config reads and cross-user chat spend aggregation.
authCtx := dbauthz.AsChatd(ctx)
config, err := db.GetChatUsageLimitConfig(authCtx)
if err != nil {
if errors.Is(err, sql.ErrNoRows) {
return nil, nil //nolint:nilnil // Nil status cleanly signals disabled limits.
}
return nil, err
}
if !config.Enabled {
return nil, nil //nolint:nilnil // Nil status cleanly signals disabled limits.
}
period, ok := mapDBPeriodToSDK(config.Period)
if !ok {
return nil, xerrors.Errorf("invalid chat usage limit period %q", config.Period)
}
// Resolve effective limit in a single query:
// individual override > group limit > global default.
effectiveLimit, err := db.ResolveUserChatSpendLimit(authCtx, userID)
if err != nil {
return nil, err
}
// -1 means limits are disabled (shouldn't happen since we checked above,
// but handle gracefully).
if effectiveLimit < 0 {
return nil, nil //nolint:nilnil // Nil status cleanly signals disabled limits.
}
start, end := ComputeUsagePeriodBounds(now, period)
spendTotal, err := db.GetUserChatSpendInPeriod(authCtx, database.GetUserChatSpendInPeriodParams{
UserID: userID,
StartTime: start,
EndTime: end,
})
if err != nil {
return nil, err
}
return &codersdk.ChatUsageLimitStatus{
IsLimited: true,
Period: period,
SpendLimitMicros: &effectiveLimit,
CurrentSpend: spendTotal,
PeriodStart: start,
PeriodEnd: end,
}, nil
}
func mapDBPeriodToSDK(dbPeriod string) (codersdk.ChatUsageLimitPeriod, bool) {
switch dbPeriod {
case string(codersdk.ChatUsageLimitPeriodDay):
return codersdk.ChatUsageLimitPeriodDay, true
case string(codersdk.ChatUsageLimitPeriodWeek):
return codersdk.ChatUsageLimitPeriodWeek, true
case string(codersdk.ChatUsageLimitPeriodMonth):
return codersdk.ChatUsageLimitPeriodMonth, true
default:
return "", false
}
}
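The "compare each spend/limit pair Go-side" idea from the architecture note above can be sketched roughly like this. All names and shapes here are hypothetical (the nullable per-period limit columns do not exist yet); nil means no limit for that period, and the narrowest period is checked first:

```go
package main

import "fmt"

// tightestBlocked reports the first period, narrowest first, whose
// spend has reached its limit. A nil limit means that period is
// unlimited. This is a design sketch, not the shipped enforcement path.
func tightestBlocked(spend map[string]int64, limits map[string]*int64) (string, bool) {
	for _, period := range []string{"day", "week", "month"} {
		if limit := limits[period]; limit != nil && spend[period] >= *limit {
			return period, true
		}
	}
	return "", false
}

func main() {
	day := int64(500_000) // e.g. a $0.50 daily limit, in micros
	period, blocked := tightestBlocked(
		map[string]int64{"day": 600_000, "week": 600_000, "month": 600_000},
		map[string]*int64{"day": &day}, // week/month unlimited (nil)
	)
	fmt.Println(period, blocked) // day true
}
```

The single wide scan with conditional SUMs would populate the `spend` map in one query; only the comparisons happen in Go.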
+132
@@ -0,0 +1,132 @@
package chatd //nolint:testpackage // Keeps chatd unit tests in the package.
import (
"testing"
"time"
"github.com/coder/coder/v2/codersdk"
)
func TestComputeUsagePeriodBounds(t *testing.T) {
t.Parallel()
newYork, err := time.LoadLocation("America/New_York")
if err != nil {
t.Fatalf("load America/New_York: %v", err)
}
tests := []struct {
name string
now time.Time
period codersdk.ChatUsageLimitPeriod
wantStart time.Time
wantEnd time.Time
}{
{
name: "day/mid_day",
now: time.Date(2025, time.June, 15, 14, 30, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodDay,
wantStart: time.Date(2025, time.June, 15, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.June, 16, 0, 0, 0, 0, time.UTC),
},
{
name: "day/midnight_exactly",
now: time.Date(2025, time.June, 15, 0, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodDay,
wantStart: time.Date(2025, time.June, 15, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.June, 16, 0, 0, 0, 0, time.UTC),
},
{
name: "day/end_of_day",
now: time.Date(2025, time.June, 15, 23, 59, 59, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodDay,
wantStart: time.Date(2025, time.June, 15, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.June, 16, 0, 0, 0, 0, time.UTC),
},
{
name: "week/wednesday",
now: time.Date(2025, time.June, 11, 10, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodWeek,
wantStart: time.Date(2025, time.June, 9, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.June, 16, 0, 0, 0, 0, time.UTC),
},
{
name: "week/monday",
now: time.Date(2025, time.June, 9, 0, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodWeek,
wantStart: time.Date(2025, time.June, 9, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.June, 16, 0, 0, 0, 0, time.UTC),
},
{
name: "week/sunday",
now: time.Date(2025, time.June, 15, 23, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodWeek,
wantStart: time.Date(2025, time.June, 9, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.June, 16, 0, 0, 0, 0, time.UTC),
},
{
name: "week/year_boundary",
now: time.Date(2024, time.December, 31, 12, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodWeek,
wantStart: time.Date(2024, time.December, 30, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.January, 6, 0, 0, 0, 0, time.UTC),
},
{
name: "month/mid_month",
now: time.Date(2025, time.June, 15, 0, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodMonth,
wantStart: time.Date(2025, time.June, 1, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.July, 1, 0, 0, 0, 0, time.UTC),
},
{
name: "month/first_day",
now: time.Date(2025, time.June, 1, 0, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodMonth,
wantStart: time.Date(2025, time.June, 1, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.July, 1, 0, 0, 0, 0, time.UTC),
},
{
name: "month/last_day",
now: time.Date(2025, time.June, 30, 23, 59, 59, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodMonth,
wantStart: time.Date(2025, time.June, 1, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.July, 1, 0, 0, 0, 0, time.UTC),
},
{
name: "month/february",
now: time.Date(2025, time.February, 15, 12, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodMonth,
wantStart: time.Date(2025, time.February, 1, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.March, 1, 0, 0, 0, 0, time.UTC),
},
{
name: "month/leap_year_february",
now: time.Date(2024, time.February, 29, 12, 0, 0, 0, time.UTC),
period: codersdk.ChatUsageLimitPeriodMonth,
wantStart: time.Date(2024, time.February, 1, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2024, time.March, 1, 0, 0, 0, 0, time.UTC),
},
{
name: "day/non_utc_timezone",
now: time.Date(2025, time.June, 15, 22, 0, 0, 0, newYork),
period: codersdk.ChatUsageLimitPeriodDay,
wantStart: time.Date(2025, time.June, 16, 0, 0, 0, 0, time.UTC),
wantEnd: time.Date(2025, time.June, 17, 0, 0, 0, 0, time.UTC),
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
start, end := ComputeUsagePeriodBounds(tc.now, tc.period)
if !start.Equal(tc.wantStart) {
t.Errorf("start: got %v, want %v", start, tc.wantStart)
}
if !end.Equal(tc.wantEnd) {
t.Errorf("end: got %v, want %v", end, tc.wantEnd)
}
})
}
}
+1030 -194
File diff suppressed because it is too large
+980 -139
File diff suppressed because it is too large
+81 -18
@@ -10,6 +10,7 @@ import (
"flag"
"fmt"
"io"
"math"
"net/http"
httppprof "net/http/pprof"
"net/url"
@@ -44,6 +45,7 @@ import (
"github.com/coder/coder/v2/buildinfo"
"github.com/coder/coder/v2/coderd/agentapi"
"github.com/coder/coder/v2/coderd/agentapi/metadatabatcher"
"github.com/coder/coder/v2/coderd/aiseats"
_ "github.com/coder/coder/v2/coderd/apidoc" // Used for swagger docs.
"github.com/coder/coder/v2/coderd/appearance"
"github.com/coder/coder/v2/coderd/audit"
@@ -629,7 +631,9 @@ func New(options *Options) *API {
),
dbRolluper: options.DatabaseRolluper,
ProfileCollector: defaultProfileCollector{},
AISeatTracker: aiseats.Noop{},
}
api.WorkspaceAppsProvider = workspaceapps.NewDBTokenProvider(
ctx,
options.Logger.Named("workspaceapps"),
@@ -763,17 +767,27 @@ func New(options *Options) *API {
}
api.agentProvider = stn
maxChatsPerAcquire := options.DeploymentValues.AI.Chat.AcquireBatchSize.Value()
if maxChatsPerAcquire > math.MaxInt32 {
maxChatsPerAcquire = math.MaxInt32
}
if maxChatsPerAcquire < math.MinInt32 {
maxChatsPerAcquire = math.MinInt32
}
api.chatDaemon = chatd.New(chatd.Config{
-Logger: options.Logger.Named("chats"),
-Database: options.Database,
-ReplicaID: api.ID,
-SubscribeFn: options.ChatSubscribeFn,
-ProviderAPIKeys: chatProviderAPIKeysFromDeploymentValues(options.DeploymentValues),
-AgentConn: api.agentProvider.AgentConn,
-CreateWorkspace: api.chatCreateWorkspace,
-StartWorkspace: api.chatStartWorkspace,
-Pubsub: options.Pubsub,
-WebpushDispatcher: options.WebPushDispatcher,
+Logger: options.Logger.Named("chatd"),
+Database: options.Database,
+ReplicaID: api.ID,
+SubscribeFn: options.ChatSubscribeFn,
+MaxChatsPerAcquire: int32(maxChatsPerAcquire), //nolint:gosec // maxChatsPerAcquire is clamped to int32 range above.
+ProviderAPIKeys: chatProviderAPIKeysFromDeploymentValues(options.DeploymentValues),
+AgentConn: api.agentProvider.AgentConn,
+CreateWorkspace: api.chatCreateWorkspace,
+StartWorkspace: api.chatStartWorkspace,
+Pubsub: options.Pubsub,
+WebpushDispatcher: options.WebPushDispatcher,
+UsageTracker: options.WorkspaceUsageTracker,
})
gitSyncLogger := options.Logger.Named("gitsync")
refresher := gitsync.NewRefresher(
@@ -1030,10 +1044,12 @@ func New(options *Options) *API {
// OAuth2 metadata endpoint for RFC 8414 discovery
r.Route("/.well-known/oauth-authorization-server", func(r chi.Router) {
r.Use(httpmw.RequireExperimentWithDevBypass(api.Experiments, codersdk.ExperimentOAuth2))
r.Get("/*", api.oauth2AuthorizationServerMetadata())
})
// OAuth2 protected resource metadata endpoint for RFC 9728 discovery
r.Route("/.well-known/oauth-protected-resource", func(r chi.Router) {
r.Use(httpmw.RequireExperimentWithDevBypass(api.Experiments, codersdk.ExperimentOAuth2))
r.Get("/*", api.oauth2ProtectedResourceMetadata())
})
@@ -1146,6 +1162,9 @@ func New(options *Options) *API {
r.Get("/summary", api.chatCostSummary)
})
})
r.Route("/insights", func(r chi.Router) {
r.Get("/pull-requests", api.prInsights)
})
r.Route("/files", func(r chi.Router) {
r.Use(httpmw.RateLimit(options.FilesRateLimit, time.Minute))
r.Post("/", api.postChatFile)
@@ -1154,6 +1173,8 @@ func New(options *Options) *API {
r.Route("/config", func(r chi.Router) {
r.Get("/system-prompt", api.getChatSystemPrompt)
r.Put("/system-prompt", api.putChatSystemPrompt)
r.Get("/desktop-enabled", api.getChatDesktopEnabled)
r.Put("/desktop-enabled", api.putChatDesktopEnabled)
r.Get("/user-prompt", api.getUserChatCustomPrompt)
r.Put("/user-prompt", api.putUserChatCustomPrompt)
})
@@ -1175,19 +1196,32 @@ func New(options *Options) *API {
r.Delete("/", api.deleteChatModelConfig)
})
})
r.Route("/usage-limits", func(r chi.Router) {
r.Get("/", api.getChatUsageLimitConfig)
r.Put("/", api.updateChatUsageLimitConfig)
r.Get("/status", api.getMyChatUsageLimitStatus)
r.Route("/overrides/{user}", func(r chi.Router) {
r.Put("/", api.upsertChatUsageLimitOverride)
r.Delete("/", api.deleteChatUsageLimitOverride)
})
r.Route("/group-overrides/{group}", func(r chi.Router) {
r.Put("/", api.upsertChatUsageLimitGroupOverride)
r.Delete("/", api.deleteChatUsageLimitGroupOverride)
})
})
r.Route("/{chat}", func(r chi.Router) {
r.Use(httpmw.ExtractChatParam(options.Database))
r.Get("/", api.getChat)
-r.Get("/git/watch", api.watchChatGit)
-r.Get("/desktop", api.watchChatDesktop)
r.Post("/archive", api.archiveChat)
r.Post("/unarchive", api.unarchiveChat)
r.Patch("/", api.patchChat)
r.Get("/messages", api.getChatMessages)
r.Post("/messages", api.postChatMessages)
r.Patch("/messages/{message}", api.patchChatMessage)
-r.Get("/stream", api.streamChat)
+r.Route("/stream", func(r chi.Router) {
+r.Get("/", api.streamChat)
+r.Get("/desktop", api.watchChatDesktop)
+r.Get("/git", api.watchChatGit)
+})
r.Post("/interrupt", api.interruptChat)
r.Get("/diff-status", api.getChatDiffStatus)
r.Get("/diff", api.getChatDiffContents)
r.Route("/queue/{queuedMessage}", func(r chi.Router) {
r.Delete("/", api.deleteChatQueuedMessage)
@@ -1199,10 +1233,34 @@ func New(options *Options) *API {
r.Route("/mcp", func(r chi.Router) {
r.Use(
apiKeyMiddleware,
httpmw.RequireExperimentWithDevBypass(api.Experiments, codersdk.ExperimentOAuth2, codersdk.ExperimentMCPServerHTTP),
)
// MCP server configuration endpoints.
r.Route("/servers", func(r chi.Router) {
r.Use(httpmw.RequireExperimentWithDevBypass(api.Experiments, codersdk.ExperimentAgents))
r.Get("/", api.listMCPServerConfigs)
r.Post("/", api.createMCPServerConfig)
r.Route("/{mcpServer}", func(r chi.Router) {
r.Get("/", api.getMCPServerConfig)
r.Patch("/", api.updateMCPServerConfig)
r.Delete("/", api.deleteMCPServerConfig)
// OAuth2 user flow
r.Get("/oauth2/connect", api.mcpServerOAuth2Connect)
r.Get("/oauth2/callback", api.mcpServerOAuth2Callback)
r.Delete("/oauth2/disconnect", api.mcpServerOAuth2Disconnect)
})
})
// MCP HTTP transport endpoint with mandatory authentication
-r.Mount("/http", api.mcpHTTPHandler())
+r.Route("/http", func(r chi.Router) {
+r.Use(httpmw.RequireExperimentWithDevBypass(api.Experiments, codersdk.ExperimentOAuth2, codersdk.ExperimentMCPServerHTTP))
+r.Mount("/", api.mcpHTTPHandler())
+})
})
r.Route("/watch-all-workspacebuilds", func(r chi.Router) {
r.Use(
apiKeyMiddleware,
httpmw.RequireExperiment(api.Experiments, codersdk.ExperimentWorkspaceBuildUpdates),
)
r.Get("/", api.watchAllWorkspaceBuilds)
})
})
@@ -1457,6 +1515,8 @@ func New(options *Options) *API {
r.Post("/", api.postUser)
r.Get("/", api.users)
r.Post("/logout", api.postLogout)
r.Post("/me/session/token-to-cookie", api.postSessionTokenCookie)
r.Get("/oidc-claims", api.userOIDCClaims)
// These routes query information about site wide roles.
r.Route("/roles", func(r chi.Router) {
r.Get("/", api.AssignableSiteRoles)
@@ -2027,6 +2087,8 @@ type API struct {
dbRolluper *dbrollup.Rolluper
// chatDaemon handles background processing of pending chats.
chatDaemon *chatd.Server
// AISeatTracker records AI seat usage.
AISeatTracker aiseats.SeatTracker
// gitSyncWorker refreshes stale chat diff statuses in the
// background.
gitSyncWorker *gitsync.Worker
@@ -2239,6 +2301,7 @@ func (api *API) CreateInMemoryTaggedProvisionerDaemon(dialCtx context.Context, n
provisionerdserver.Options{
OIDCConfig: api.OIDCConfig,
ExternalAuthConfigs: api.ExternalAuthConfigs,
AISeatTracker: api.AISeatTracker,
Clock: api.Clock,
HeartbeatFn: options.heartbeatFn,
},
+9
@@ -879,6 +879,15 @@ func createAnotherUserRetry(t testing.TB, client *codersdk.Client, organizationI
m(&req)
}
// Service accounts cannot have a password or email and must
// use login_type=none. Enforce this after mutators so callers
// only need to set ServiceAccount=true.
if req.ServiceAccount {
req.Password = ""
req.Email = ""
req.UserLoginType = codersdk.LoginTypeNone
}
user, err := client.CreateUserWithOrgs(context.Background(), req)
var apiError *codersdk.Error
// If the user already exists by username or email conflict, try again up to "retries" times.
+39 -7
@@ -13,32 +13,64 @@ var _ usage.Inserter = (*UsageInserter)(nil)
type UsageInserter struct {
sync.Mutex
-events []usagetypes.DiscreteEvent
+discreteEvents []usagetypes.DiscreteEvent
+heartbeatEvents []usagetypes.HeartbeatEvent
+seenHeartbeats map[string]struct{}
seenHeartbeats map[string]struct{}
}
func NewUsageInserter() *UsageInserter {
return &UsageInserter{
-events: []usagetypes.DiscreteEvent{},
+discreteEvents: []usagetypes.DiscreteEvent{},
+seenHeartbeats: map[string]struct{}{},
+heartbeatEvents: []usagetypes.HeartbeatEvent{},
}
}
func (u *UsageInserter) InsertDiscreteUsageEvent(_ context.Context, _ database.Store, event usagetypes.DiscreteEvent) error {
u.Lock()
defer u.Unlock()
-u.events = append(u.events, event)
+u.discreteEvents = append(u.discreteEvents, event)
return nil
}
-func (u *UsageInserter) GetEvents() []usagetypes.DiscreteEvent {
+func (u *UsageInserter) InsertHeartbeatUsageEvent(_ context.Context, _ database.Store, id string, event usagetypes.HeartbeatEvent) error {
u.Lock()
defer u.Unlock()
-eventsCopy := make([]usagetypes.DiscreteEvent, len(u.events))
-copy(eventsCopy, u.events)
+if _, seen := u.seenHeartbeats[id]; seen {
+return nil
+}
+u.seenHeartbeats[id] = struct{}{}
+u.heartbeatEvents = append(u.heartbeatEvents, event)
+return nil
+}
+func (u *UsageInserter) GetHeartbeatEvents() []usagetypes.HeartbeatEvent {
+u.Lock()
+defer u.Unlock()
+eventsCopy := make([]usagetypes.HeartbeatEvent, len(u.heartbeatEvents))
+copy(eventsCopy, u.heartbeatEvents)
+return eventsCopy
+}
+func (u *UsageInserter) GetDiscreteEvents() []usagetypes.DiscreteEvent {
+u.Lock()
+defer u.Unlock()
+eventsCopy := make([]usagetypes.DiscreteEvent, len(u.discreteEvents))
+copy(eventsCopy, u.discreteEvents)
return eventsCopy
}
func (u *UsageInserter) TotalEventCount() int {
u.Lock()
defer u.Unlock()
return len(u.discreteEvents) + len(u.heartbeatEvents)
}
func (u *UsageInserter) Reset() {
u.Lock()
defer u.Unlock()
-u.events = []usagetypes.DiscreteEvent{}
+u.seenHeartbeats = map[string]struct{}{}
+u.discreteEvents = []usagetypes.DiscreteEvent{}
+u.heartbeatEvents = []usagetypes.HeartbeatEvent{}
}
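The `seenHeartbeats` guard above is a first-write-wins set keyed by the caller-supplied id, so a retried heartbeat insert counts exactly once. The pattern in isolation (a hypothetical `idSet` type, not part of the fake inserter's API):

```go
package main

import (
	"fmt"
	"sync"
)

// idSet is a minimal sketch of the dedup-by-id pattern used by the
// fake inserter: the first Insert for a given id wins, and retries
// with the same id are silent no-ops.
type idSet struct {
	mu   sync.Mutex
	seen map[string]struct{}
	n    int // number of distinct ids accepted
}

// Insert records id and reports whether it was seen for the first time.
func (s *idSet) Insert(id string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.seen[id]; ok {
		return false
	}
	s.seen[id] = struct{}{}
	s.n++
	return true
}

func main() {
	s := &idSet{seen: map[string]struct{}{}}
	fmt.Println(s.Insert("hb-1"), s.Insert("hb-1"), s.n) // true false 1
}
```

Holding the mutex for the check-then-insert keeps the dedup correct under concurrent callers, matching the embedded `sync.Mutex` in `UsageInserter`.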
+26 -18
@@ -6,22 +6,30 @@ type CheckConstraint string
// CheckConstraint enums.
const (
-CheckAPIKeysAllowListNotEmpty CheckConstraint = "api_keys_allow_list_not_empty" // api_keys
-CheckChatModelConfigsCompressionThresholdCheck CheckConstraint = "chat_model_configs_compression_threshold_check" // chat_model_configs
-CheckChatModelConfigsContextLimitCheck CheckConstraint = "chat_model_configs_context_limit_check" // chat_model_configs
-CheckChatProvidersProviderCheck CheckConstraint = "chat_providers_provider_check" // chat_providers
-CheckOrganizationIDNotZero CheckConstraint = "organization_id_not_zero" // custom_roles
-CheckOneTimePasscodeSet CheckConstraint = "one_time_passcode_set" // users
-CheckUsersEmailNotEmpty CheckConstraint = "users_email_not_empty" // users
-CheckUsersServiceAccountLoginType CheckConstraint = "users_service_account_login_type" // users
-CheckUsersUsernameMinLength CheckConstraint = "users_username_min_length" // users
-CheckMaxProvisionerLogsLength CheckConstraint = "max_provisioner_logs_length" // provisioner_jobs
-CheckMaxLogsLength CheckConstraint = "max_logs_length" // workspace_agents
-CheckSubsystemsNotNone CheckConstraint = "subsystems_not_none" // workspace_agents
-CheckWorkspaceBuildsDeadlineBelowMaxDeadline CheckConstraint = "workspace_builds_deadline_below_max_deadline" // workspace_builds
-CheckGroupAclIsObject CheckConstraint = "group_acl_is_object" // workspaces
-CheckUserAclIsObject CheckConstraint = "user_acl_is_object" // workspaces
-CheckTelemetryLockEventTypeConstraint CheckConstraint = "telemetry_lock_event_type_constraint" // telemetry_locks
-CheckValidationMonotonicOrder CheckConstraint = "validation_monotonic_order" // template_version_parameters
-CheckUsageEventTypeCheck CheckConstraint = "usage_event_type_check" // usage_events
+CheckAPIKeysAllowListNotEmpty CheckConstraint = "api_keys_allow_list_not_empty" // api_keys
+CheckChatModelConfigsCompressionThresholdCheck CheckConstraint = "chat_model_configs_compression_threshold_check" // chat_model_configs
+CheckChatModelConfigsContextLimitCheck CheckConstraint = "chat_model_configs_context_limit_check" // chat_model_configs
+CheckChatProvidersProviderCheck CheckConstraint = "chat_providers_provider_check" // chat_providers
+CheckChatUsageLimitConfigDefaultLimitMicrosCheck CheckConstraint = "chat_usage_limit_config_default_limit_micros_check" // chat_usage_limit_config
+CheckChatUsageLimitConfigPeriodCheck CheckConstraint = "chat_usage_limit_config_period_check" // chat_usage_limit_config
+CheckChatUsageLimitConfigSingletonCheck CheckConstraint = "chat_usage_limit_config_singleton_check" // chat_usage_limit_config
+CheckOrganizationIDNotZero CheckConstraint = "organization_id_not_zero" // custom_roles
+CheckGroupsChatSpendLimitMicrosCheck CheckConstraint = "groups_chat_spend_limit_micros_check" // groups
+CheckOneTimePasscodeSet CheckConstraint = "one_time_passcode_set" // users
+CheckUsersChatSpendLimitMicrosCheck CheckConstraint = "users_chat_spend_limit_micros_check" // users
+CheckUsersEmailNotEmpty CheckConstraint = "users_email_not_empty" // users
+CheckUsersServiceAccountLoginType CheckConstraint = "users_service_account_login_type" // users
+CheckUsersUsernameMinLength CheckConstraint = "users_username_min_length" // users
+CheckMcpServerConfigsAuthTypeCheck CheckConstraint = "mcp_server_configs_auth_type_check" // mcp_server_configs
+CheckMcpServerConfigsAvailabilityCheck CheckConstraint = "mcp_server_configs_availability_check" // mcp_server_configs
+CheckMcpServerConfigsTransportCheck CheckConstraint = "mcp_server_configs_transport_check" // mcp_server_configs
+CheckMaxProvisionerLogsLength CheckConstraint = "max_provisioner_logs_length" // provisioner_jobs
+CheckMaxLogsLength CheckConstraint = "max_logs_length" // workspace_agents
+CheckSubsystemsNotNone CheckConstraint = "subsystems_not_none" // workspace_agents
+CheckWorkspaceBuildsDeadlineBelowMaxDeadline CheckConstraint = "workspace_builds_deadline_below_max_deadline" // workspace_builds
+CheckGroupAclIsObject CheckConstraint = "group_acl_is_object" // workspaces
+CheckUserAclIsObject CheckConstraint = "user_acl_is_object" // workspaces
+CheckTelemetryLockEventTypeConstraint CheckConstraint = "telemetry_lock_event_type_constraint" // telemetry_locks
+CheckValidationMonotonicOrder CheckConstraint = "validation_monotonic_order" // template_version_parameters
+CheckUsageEventTypeCheck CheckConstraint = "usage_event_type_check" // usage_events
)
+92 -7
@@ -21,6 +21,7 @@ import (
agentproto "github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/coderd/chatd/chatprompt"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/externalauth/gitprovider"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/render"
@@ -194,13 +195,14 @@ func MinimalUserFromVisibleUser(user database.VisibleUser) codersdk.MinimalUser
func ReducedUser(user database.User) codersdk.ReducedUser {
return codersdk.ReducedUser{
-MinimalUser: MinimalUser(user),
-Email: user.Email,
-CreatedAt: user.CreatedAt,
-UpdatedAt: user.UpdatedAt,
-LastSeenAt: user.LastSeenAt,
-Status: codersdk.UserStatus(user.Status),
-LoginType: codersdk.LoginType(user.LoginType),
+MinimalUser: MinimalUser(user),
+Email: user.Email,
+CreatedAt: user.CreatedAt,
+UpdatedAt: user.UpdatedAt,
+LastSeenAt: user.LastSeenAt,
+Status: codersdk.UserStatus(user.Status),
+LoginType: codersdk.LoginType(user.LoginType),
+IsServiceAccount: user.IsServiceAccount,
}
}
@@ -1164,3 +1166,86 @@ func nullInt64Ptr(v sql.NullInt64) *int64 {
value := v.Int64
return &value
}
// ChatDiffStatus converts a database.ChatDiffStatus to a
// codersdk.ChatDiffStatus. When status is nil an empty value
// containing only the chatID is returned.
func ChatDiffStatus(chatID uuid.UUID, status *database.ChatDiffStatus) codersdk.ChatDiffStatus {
result := codersdk.ChatDiffStatus{
ChatID: chatID,
}
if status == nil {
return result
}
result.ChatID = status.ChatID
if status.Url.Valid {
u := strings.TrimSpace(status.Url.String)
if u != "" {
result.URL = &u
}
}
if result.URL == nil {
// Try to build a branch URL from the stored origin.
// Since this function does not have access to the API
// instance, we construct a GitHub provider directly as
// a best-effort fallback.
// TODO: This uses the default github.com API base URL,
// so branch URLs for GitHub Enterprise instances will
// be incorrect. To fix this, this function would need
// access to the external auth configs.
gp := gitprovider.New("github", "", nil)
if gp != nil {
if owner, repo, _, ok := gp.ParseRepositoryOrigin(status.GitRemoteOrigin); ok {
branchURL := gp.BuildBranchURL(owner, repo, status.GitBranch)
if branchURL != "" {
result.URL = &branchURL
}
}
}
}
if status.PullRequestState.Valid {
pullRequestState := strings.TrimSpace(status.PullRequestState.String)
if pullRequestState != "" {
result.PullRequestState = &pullRequestState
}
}
result.PullRequestTitle = status.PullRequestTitle
result.PullRequestDraft = status.PullRequestDraft
result.ChangesRequested = status.ChangesRequested
result.Additions = status.Additions
result.Deletions = status.Deletions
result.ChangedFiles = status.ChangedFiles
if status.AuthorLogin.Valid {
result.AuthorLogin = &status.AuthorLogin.String
}
if status.AuthorAvatarUrl.Valid {
result.AuthorAvatarURL = &status.AuthorAvatarUrl.String
}
if status.BaseBranch.Valid {
result.BaseBranch = &status.BaseBranch.String
}
if status.HeadBranch.Valid {
result.HeadBranch = &status.HeadBranch.String
}
if status.PrNumber.Valid {
result.PRNumber = &status.PrNumber.Int32
}
if status.Commits.Valid {
result.Commits = &status.Commits.Int32
}
if status.Approved.Valid {
result.Approved = &status.Approved.Bool
}
if status.ReviewerCount.Valid {
result.ReviewerCount = &status.ReviewerCount.Int32
}
if status.RefreshedAt.Valid {
refreshedAt := status.RefreshedAt.Time
result.RefreshedAt = &refreshedAt
}
staleAt := status.StaleAt
result.StaleAt = &staleAt
return result
}
+332 -21
@@ -1264,7 +1264,7 @@ func (q *querier) canAssignRoles(ctx context.Context, orgID uuid.UUID, added, re
// System roles are stored in the database but have a fixed, code-defined
// meaning. Do not rewrite the name for them so the static "who can assign
// what" mapping applies.
-if !rbac.SystemRoleName(roleName.Name) {
+if !rolestore.IsSystemRoleName(roleName.Name) {
// To support a dynamic mapping of what roles can assign what, we need
// to store this in the database. For now, just use a static role so
// owners and org admins can assign roles.
@@ -1691,6 +1691,13 @@ func (q *querier) CleanTailnetTunnels(ctx context.Context) error {
return q.db.CleanTailnetTunnels(ctx)
}
func (q *querier) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceChat); err != nil {
return err
}
return q.db.CleanupDeletedMCPServerIDsFromChats(ctx)
}
func (q *querier) CountAIBridgeInterceptions(ctx context.Context, arg database.CountAIBridgeInterceptionsParams) (int64, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceAibridgeInterception.Type)
if err != nil {
@@ -1726,6 +1733,13 @@ func (q *querier) CountConnectionLogs(ctx context.Context, arg database.CountCon
return q.db.CountAuthorizedConnectionLogs(ctx, arg, prep)
}
func (q *querier) CountEnabledModelsWithoutPricing(ctx context.Context) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return 0, err
}
return q.db.CountEnabledModelsWithoutPricing(ctx)
}
func (q *querier) CountInProgressPrebuilds(ctx context.Context) ([]database.CountInProgressPrebuildsRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceWorkspace.All()); err != nil {
return nil, err
@@ -1817,18 +1831,6 @@ func (q *querier) DeleteApplicationConnectAPIKeysByUserID(ctx context.Context, u
return q.db.DeleteApplicationConnectAPIKeysByUserID(ctx, userID)
}
-func (q *querier) DeleteChatMessagesAfterID(ctx context.Context, arg database.DeleteChatMessagesAfterIDParams) error {
-// Authorize update on the parent chat.
-chat, err := q.db.GetChatByID(ctx, arg.ChatID)
-if err != nil {
-return err
-}
-if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
-return err
-}
-return q.db.DeleteChatMessagesAfterID(ctx, arg)
-}
func (q *querier) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
@@ -1854,6 +1856,20 @@ func (q *querier) DeleteChatQueuedMessage(ctx context.Context, arg database.Dele
return q.db.DeleteChatQueuedMessage(ctx, arg)
}
func (q *querier) DeleteChatUsageLimitGroupOverride(ctx context.Context, groupID uuid.UUID) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
}
return q.db.DeleteChatUsageLimitGroupOverride(ctx, groupID)
}
func (q *querier) DeleteChatUsageLimitUserOverride(ctx context.Context, userID uuid.UUID) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
}
return q.db.DeleteChatUsageLimitUserOverride(ctx, userID)
}
func (q *querier) DeleteCryptoKey(ctx context.Context, arg database.DeleteCryptoKeyParams) (database.CryptoKey, error) {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceCryptoKey); err != nil {
return database.CryptoKey{}, err
@@ -1911,6 +1927,20 @@ func (q *querier) DeleteLicense(ctx context.Context, id int32) (int32, error) {
return id, nil
}
func (q *querier) DeleteMCPServerConfigByID(ctx context.Context, id uuid.UUID) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
}
return q.db.DeleteMCPServerConfigByID(ctx, id)
}
func (q *querier) DeleteMCPServerUserToken(ctx context.Context, arg database.DeleteMCPServerUserTokenParams) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
}
return q.db.DeleteMCPServerUserToken(ctx, arg)
}
func (q *querier) DeleteOAuth2ProviderAppByClientID(ctx context.Context, id uuid.UUID) error {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceOauth2App); err != nil {
return err
@@ -2124,12 +2154,12 @@ func (q *querier) DeleteWorkspaceACLByID(ctx context.Context, id uuid.UUID) erro
return fetchAndExec(q.log, q.auth, policy.ActionShare, fetch, q.db.DeleteWorkspaceACLByID)(ctx, id)
}
-func (q *querier) DeleteWorkspaceACLsByOrganization(ctx context.Context, organizationID uuid.UUID) error {
+func (q *querier) DeleteWorkspaceACLsByOrganization(ctx context.Context, params database.DeleteWorkspaceACLsByOrganizationParams) error {
// This is a system-only function.
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceSystem); err != nil {
return err
}
-return q.db.DeleteWorkspaceACLsByOrganization(ctx, organizationID)
+return q.db.DeleteWorkspaceACLsByOrganization(ctx, params)
}
func (q *querier) DeleteWorkspaceAgentPortShare(ctx context.Context, arg database.DeleteWorkspaceAgentPortShareParams) error {
@@ -2327,6 +2357,13 @@ func (q *querier) GetAPIKeysLastUsedAfter(ctx context.Context, lastUsed time.Tim
return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.GetAPIKeysLastUsedAfter)(ctx, lastUsed)
}
func (q *querier) GetActiveAISeatCount(ctx context.Context) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceLicense); err != nil {
return 0, err
}
return q.db.GetActiveAISeatCount(ctx)
}
func (q *querier) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTemplate.All()); err != nil {
return nil, err
@@ -2454,6 +2491,17 @@ func (q *querier) GetChatCostSummary(ctx context.Context, arg database.GetChatCo
return q.db.GetChatCostSummary(ctx, arg)
}
func (q *querier) GetChatDesktopEnabled(ctx context.Context) (bool, error) {
// The desktop-enabled flag is a deployment-wide setting read by any
// authenticated chat user and by chatd when deciding whether to expose
// computer-use tooling. We only require that an explicit actor is present
// in the context so unauthenticated calls fail closed.
if _, ok := ActorFromContext(ctx); !ok {
return false, ErrNoActor
}
return q.db.GetChatDesktopEnabled(ctx)
}
func (q *querier) GetChatDiffStatusByChatID(ctx context.Context, chatID uuid.UUID) (database.ChatDiffStatus, error) {
// Authorize read on the parent chat.
_, err := q.GetChatByID(ctx, chatID)
@@ -2532,6 +2580,14 @@ func (q *querier) GetChatMessagesByChatID(ctx context.Context, arg database.GetC
return q.db.GetChatMessagesByChatID(ctx, arg)
}
func (q *querier) GetChatMessagesByChatIDDescPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDDescPaginatedParams) ([]database.ChatMessage, error) {
_, err := q.GetChatByID(ctx, arg.ChatID)
if err != nil {
return nil, err
}
return q.db.GetChatMessagesByChatIDDescPaginated(ctx, arg)
}
func (q *querier) GetChatMessagesForPromptByChatID(ctx context.Context, chatID uuid.UUID) ([]database.ChatMessage, error) {
// Authorize read on the parent chat.
_, err := q.GetChatByID(ctx, chatID)
@@ -2596,8 +2652,33 @@ func (q *querier) GetChatSystemPrompt(ctx context.Context) (string, error) {
return q.db.GetChatSystemPrompt(ctx)
}
-func (q *querier) GetChatsByOwnerID(ctx context.Context, ownerID database.GetChatsByOwnerIDParams) ([]database.Chat, error) {
-return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.GetChatsByOwnerID)(ctx, ownerID)
+func (q *querier) GetChatUsageLimitConfig(ctx context.Context) (database.ChatUsageLimitConfig, error) {
+if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
+return database.ChatUsageLimitConfig{}, err
+}
+return q.db.GetChatUsageLimitConfig(ctx)
+}
+func (q *querier) GetChatUsageLimitGroupOverride(ctx context.Context, groupID uuid.UUID) (database.GetChatUsageLimitGroupOverrideRow, error) {
+if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
+return database.GetChatUsageLimitGroupOverrideRow{}, err
+}
+return q.db.GetChatUsageLimitGroupOverride(ctx, groupID)
+}
+func (q *querier) GetChatUsageLimitUserOverride(ctx context.Context, userID uuid.UUID) (database.GetChatUsageLimitUserOverrideRow, error) {
+if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
+return database.GetChatUsageLimitUserOverrideRow{}, err
+}
+return q.db.GetChatUsageLimitUserOverride(ctx, userID)
+}
+func (q *querier) GetChats(ctx context.Context, arg database.GetChatsParams) ([]database.Chat, error) {
+prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceChat.Type)
+if err != nil {
+return nil, xerrors.Errorf("(dev error) prepare sql filter: %w", err)
+}
+return q.db.GetAuthorizedChats(ctx, arg, prep)
}
func (q *querier) GetConnectionLogsOffset(ctx context.Context, arg database.GetConnectionLogsOffsetParams) ([]database.GetConnectionLogsOffsetRow, error) {
@@ -2703,6 +2784,13 @@ func (q *querier) GetEnabledChatProviders(ctx context.Context) ([]database.ChatP
return q.db.GetEnabledChatProviders(ctx)
}
func (q *querier) GetEnabledMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetEnabledMCPServerConfigs(ctx)
}
func (q *querier) GetExternalAuthLink(ctx context.Context, arg database.GetExternalAuthLinkParams) (database.ExternalAuthLink, error) {
return fetchWithAction(q.log, q.auth, policy.ActionReadPersonal, q.db.GetExternalAuthLink)(ctx, arg)
}
@@ -2761,6 +2849,13 @@ func (q *querier) GetFilteredInboxNotificationsByUserID(ctx context.Context, arg
return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.GetFilteredInboxNotificationsByUserID)(ctx, arg)
}
func (q *querier) GetForcedMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetForcedMCPServerConfigs(ctx)
}
func (q *querier) GetGitSSHKey(ctx context.Context, userID uuid.UUID) (database.GitSSHKey, error) {
return fetchWithAction(q.log, q.auth, policy.ActionReadPersonal, q.db.GetGitSSHKey)(ctx, userID)
}
@@ -2901,6 +2996,48 @@ func (q *querier) GetLogoURL(ctx context.Context) (string, error) {
return q.db.GetLogoURL(ctx)
}
func (q *querier) GetMCPServerConfigByID(ctx context.Context, id uuid.UUID) (database.MCPServerConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return database.MCPServerConfig{}, err
}
return q.db.GetMCPServerConfigByID(ctx, id)
}
func (q *querier) GetMCPServerConfigBySlug(ctx context.Context, slug string) (database.MCPServerConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return database.MCPServerConfig{}, err
}
return q.db.GetMCPServerConfigBySlug(ctx, slug)
}
func (q *querier) GetMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetMCPServerConfigs(ctx)
}
func (q *querier) GetMCPServerConfigsByIDs(ctx context.Context, ids []uuid.UUID) ([]database.MCPServerConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetMCPServerConfigsByIDs(ctx, ids)
}
func (q *querier) GetMCPServerUserToken(ctx context.Context, arg database.GetMCPServerUserTokenParams) (database.MCPServerUserToken, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return database.MCPServerUserToken{}, err
}
return q.db.GetMCPServerUserToken(ctx, arg)
}
func (q *querier) GetMCPServerUserTokensByUserID(ctx context.Context, userID uuid.UUID) ([]database.MCPServerUserToken, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetMCPServerUserTokensByUserID(ctx, userID)
}
func (q *querier) GetNotificationMessagesByStatus(ctx context.Context, arg database.GetNotificationMessagesByStatusParams) ([]database.NotificationMessage, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceNotificationMessage); err != nil {
return nil, err
@@ -3087,6 +3224,34 @@ func (q *querier) GetOrganizationsWithPrebuildStatus(ctx context.Context, arg da
return q.db.GetOrganizationsWithPrebuildStatus(ctx, arg)
}
func (q *querier) GetPRInsightsPerModel(ctx context.Context, arg database.GetPRInsightsPerModelParams) ([]database.GetPRInsightsPerModelRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetPRInsightsPerModel(ctx, arg)
}
func (q *querier) GetPRInsightsRecentPRs(ctx context.Context, arg database.GetPRInsightsRecentPRsParams) ([]database.GetPRInsightsRecentPRsRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetPRInsightsRecentPRs(ctx, arg)
}
func (q *querier) GetPRInsightsSummary(ctx context.Context, arg database.GetPRInsightsSummaryParams) (database.GetPRInsightsSummaryRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return database.GetPRInsightsSummaryRow{}, err
}
return q.db.GetPRInsightsSummary(ctx, arg)
}
func (q *querier) GetPRInsightsTimeSeries(ctx context.Context, arg database.GetPRInsightsTimeSeriesParams) ([]database.GetPRInsightsTimeSeriesRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetPRInsightsTimeSeries(ctx, arg)
}
func (q *querier) GetParameterSchemasByJobID(ctx context.Context, jobID uuid.UUID) ([]database.ParameterSchema, error) {
version, err := q.db.GetTemplateVersionByJobID(ctx, jobID)
if err != nil {
@@ -3750,6 +3915,13 @@ func (q *querier) GetUserChatCustomPrompt(ctx context.Context, userID uuid.UUID)
return q.db.GetUserChatCustomPrompt(ctx, userID)
}
func (q *querier) GetUserChatSpendInPeriod(ctx context.Context, arg database.GetUserChatSpendInPeriodParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChat.WithOwner(arg.UserID.String())); err != nil {
return 0, err
}
return q.db.GetUserChatSpendInPeriod(ctx, arg)
}
func (q *querier) GetUserCount(ctx context.Context, includeSystem bool) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceSystem); err != nil {
return 0, err
@@ -3757,6 +3929,13 @@ func (q *querier) GetUserCount(ctx context.Context, includeSystem bool) (int64,
return q.db.GetUserCount(ctx, includeSystem)
}
func (q *querier) GetUserGroupSpendLimit(ctx context.Context, userID uuid.UUID) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChat.WithOwner(userID.String())); err != nil {
return 0, err
}
return q.db.GetUserGroupSpendLimit(ctx, userID)
}
func (q *querier) GetUserLatencyInsights(ctx context.Context, arg database.GetUserLatencyInsightsParams) ([]database.GetUserLatencyInsightsRow, error) {
// Used by insights endpoints. Need to check both for auditors and for regular users with template acl perms.
if err := q.authorizeContext(ctx, policy.ActionViewInsights, rbac.ResourceTemplate); err != nil {
@@ -4426,6 +4605,13 @@ func (q *querier) InsertAIBridgeInterception(ctx context.Context, arg database.I
return insert(q.log, q.auth, rbac.ResourceAibridgeInterception.WithOwner(arg.InitiatorID.String()), q.db.InsertAIBridgeInterception)(ctx, arg)
}
func (q *querier) InsertAIBridgeModelThought(ctx context.Context, arg database.InsertAIBridgeModelThoughtParams) (database.AIBridgeModelThought, error) {
if err := q.authorizeAIBridgeInterceptionAction(ctx, policy.ActionUpdate, arg.InterceptionID); err != nil {
return database.AIBridgeModelThought{}, err
}
return q.db.InsertAIBridgeModelThought(ctx, arg)
}
func (q *querier) InsertAIBridgeTokenUsage(ctx context.Context, arg database.InsertAIBridgeTokenUsageParams) (database.AIBridgeTokenUsage, error) {
// All aibridge_token_usages records belong to the initiator of their associated interception.
if err := q.authorizeAIBridgeInterceptionAction(ctx, policy.ActionUpdate, arg.InterceptionID); err != nil {
@@ -4482,16 +4668,16 @@ func (q *querier) InsertChatFile(ctx context.Context, arg database.InsertChatFil
return insert(q.log, q.auth, rbac.ResourceChat.WithOwner(arg.OwnerID.String()).InOrg(arg.OrganizationID), q.db.InsertChatFile)(ctx, arg)
}
-func (q *querier) InsertChatMessage(ctx context.Context, arg database.InsertChatMessageParams) (database.ChatMessage, error) {
+func (q *querier) InsertChatMessages(ctx context.Context, arg database.InsertChatMessagesParams) ([]database.ChatMessage, error) {
// Authorize create on the parent chat (using update permission).
chat, err := q.db.GetChatByID(ctx, arg.ChatID)
if err != nil {
-return database.ChatMessage{}, err
+return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
-return database.ChatMessage{}, err
+return nil, err
}
-return q.db.InsertChatMessage(ctx, arg)
+return q.db.InsertChatMessages(ctx, arg)
}
func (q *querier) InsertChatModelConfig(ctx context.Context, arg database.InsertChatModelConfigParams) (database.ChatModelConfig, error) {
@@ -4618,6 +4804,13 @@ func (q *querier) InsertLicense(ctx context.Context, arg database.InsertLicenseP
return q.db.InsertLicense(ctx, arg)
}
func (q *querier) InsertMCPServerConfig(ctx context.Context, arg database.InsertMCPServerConfigParams) (database.MCPServerConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return database.MCPServerConfig{}, err
}
return q.db.InsertMCPServerConfig(ctx, arg)
}
func (q *querier) InsertMemoryResourceMonitor(ctx context.Context, arg database.InsertMemoryResourceMonitorParams) (database.WorkspaceAgentMemoryResourceMonitor, error) {
if err := q.authorizeContext(ctx, policy.ActionCreate, rbac.ResourceWorkspaceAgentResourceMonitor); err != nil {
return database.WorkspaceAgentMemoryResourceMonitor{}, err
@@ -5115,6 +5308,20 @@ func (q *querier) ListAIBridgeUserPromptsByInterceptionIDs(ctx context.Context,
return q.db.ListAIBridgeUserPromptsByInterceptionIDs(ctx, interceptionIDs)
}
func (q *querier) ListChatUsageLimitGroupOverrides(ctx context.Context) ([]database.ListChatUsageLimitGroupOverridesRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.ListChatUsageLimitGroupOverrides(ctx)
}
func (q *querier) ListChatUsageLimitOverrides(ctx context.Context) ([]database.ListChatUsageLimitOverridesRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.ListChatUsageLimitOverrides(ctx)
}
func (q *querier) ListProvisionerKeysByOrganization(ctx context.Context, organizationID uuid.UUID) ([]database.ProvisionerKey, error) {
return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.ListProvisionerKeysByOrganization)(ctx, organizationID)
}
@@ -5234,6 +5441,13 @@ func (q *querier) RemoveUserFromGroups(ctx context.Context, arg database.RemoveU
return q.db.RemoveUserFromGroups(ctx, arg)
}
func (q *querier) ResolveUserChatSpendLimit(ctx context.Context, userID uuid.UUID) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceChat.WithOwner(userID.String())); err != nil {
return 0, err
}
return q.db.ResolveUserChatSpendLimit(ctx, userID)
}
func (q *querier) RevokeDBCryptKey(ctx context.Context, activeKeyDigest string) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceSystem); err != nil {
return err
@@ -5249,6 +5463,32 @@ func (q *querier) SelectUsageEventsForPublishing(ctx context.Context, arg time.T
return q.db.SelectUsageEventsForPublishing(ctx, arg)
}
func (q *querier) SoftDeleteChatMessageByID(ctx context.Context, id int64) error {
msg, err := q.db.GetChatMessageByID(ctx, id)
if err != nil {
return err
}
chat, err := q.db.GetChatByID(ctx, msg.ChatID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return err
}
return q.db.SoftDeleteChatMessageByID(ctx, id)
}
func (q *querier) SoftDeleteChatMessagesAfterID(ctx context.Context, arg database.SoftDeleteChatMessagesAfterIDParams) error {
chat, err := q.db.GetChatByID(ctx, arg.ChatID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return err
}
return q.db.SoftDeleteChatMessagesAfterID(ctx, arg)
}
func (q *querier) TryAcquireLock(ctx context.Context, id int64) (bool, error) {
return q.db.TryAcquireLock(ctx, id)
}
@@ -5330,6 +5570,17 @@ func (q *querier) UpdateChatHeartbeat(ctx context.Context, arg database.UpdateCh
return q.db.UpdateChatHeartbeat(ctx, arg)
}
func (q *querier) UpdateChatMCPServerIDs(ctx context.Context, arg database.UpdateChatMCPServerIDsParams) (database.Chat, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return database.Chat{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return database.Chat{}, err
}
return q.db.UpdateChatMCPServerIDs(ctx, arg)
}
func (q *querier) UpdateChatMessageByID(ctx context.Context, arg database.UpdateChatMessageByIDParams) (database.ChatMessage, error) {
// Authorize update on the parent chat of the edited message.
msg, err := q.db.GetChatMessageByID(ctx, arg.ID)
@@ -5494,6 +5745,13 @@ func (q *querier) UpdateInboxNotificationReadStatus(ctx context.Context, args da
return update(q.log, q.auth, fetchFunc, q.db.UpdateInboxNotificationReadStatus)(ctx, args)
}
func (q *querier) UpdateMCPServerConfig(ctx context.Context, arg database.UpdateMCPServerConfigParams) (database.MCPServerConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return database.MCPServerConfig{}, err
}
return q.db.UpdateMCPServerConfig(ctx, arg)
}
func (q *querier) UpdateMemberRoles(ctx context.Context, arg database.UpdateMemberRolesParams) (database.OrganizationMember, error) {
// Authorized fetch will check that the actor has read access to the org member since the org member is returned.
member, err := database.ExpectOne(q.OrganizationMembers(ctx, database.OrganizationMembersParams{
@@ -6417,6 +6675,13 @@ func (q *querier) UpdateWorkspacesTTLByTemplateID(ctx context.Context, arg datab
return q.db.UpdateWorkspacesTTLByTemplateID(ctx, arg)
}
func (q *querier) UpsertAISeatState(ctx context.Context, arg database.UpsertAISeatStateParams) (bool, error) {
if err := q.authorizeContext(ctx, policy.ActionCreate, rbac.ResourceSystem); err != nil {
return false, err
}
return q.db.UpsertAISeatState(ctx, arg)
}
func (q *querier) UpsertAnnouncementBanners(ctx context.Context, value string) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
@@ -6438,6 +6703,13 @@ func (q *querier) UpsertBoundaryUsageStats(ctx context.Context, arg database.Ups
return q.db.UpsertBoundaryUsageStats(ctx, arg)
}
func (q *querier) UpsertChatDesktopEnabled(ctx context.Context, enableDesktop bool) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return err
}
return q.db.UpsertChatDesktopEnabled(ctx, enableDesktop)
}
func (q *querier) UpsertChatDiffStatus(ctx context.Context, arg database.UpsertChatDiffStatusParams) (database.ChatDiffStatus, error) {
// Authorize update on the parent chat.
chat, err := q.db.GetChatByID(ctx, arg.ChatID)
@@ -6469,6 +6741,27 @@ func (q *querier) UpsertChatSystemPrompt(ctx context.Context, value string) erro
return q.db.UpsertChatSystemPrompt(ctx, value)
}
func (q *querier) UpsertChatUsageLimitConfig(ctx context.Context, arg database.UpsertChatUsageLimitConfigParams) (database.ChatUsageLimitConfig, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return database.ChatUsageLimitConfig{}, err
}
return q.db.UpsertChatUsageLimitConfig(ctx, arg)
}
func (q *querier) UpsertChatUsageLimitGroupOverride(ctx context.Context, arg database.UpsertChatUsageLimitGroupOverrideParams) (database.UpsertChatUsageLimitGroupOverrideRow, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return database.UpsertChatUsageLimitGroupOverrideRow{}, err
}
return q.db.UpsertChatUsageLimitGroupOverride(ctx, arg)
}
func (q *querier) UpsertChatUsageLimitUserOverride(ctx context.Context, arg database.UpsertChatUsageLimitUserOverrideParams) (database.UpsertChatUsageLimitUserOverrideRow, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return database.UpsertChatUsageLimitUserOverrideRow{}, err
}
return q.db.UpsertChatUsageLimitUserOverride(ctx, arg)
}
func (q *querier) UpsertConnectionLog(ctx context.Context, arg database.UpsertConnectionLogParams) (database.ConnectionLog, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceConnectionLog); err != nil {
return database.ConnectionLog{}, err
@@ -6504,6 +6797,13 @@ func (q *querier) UpsertLogoURL(ctx context.Context, value string) error {
return q.db.UpsertLogoURL(ctx, value)
}
func (q *querier) UpsertMCPServerUserToken(ctx context.Context, arg database.UpsertMCPServerUserTokenParams) (database.MCPServerUserToken, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceDeploymentConfig); err != nil {
return database.MCPServerUserToken{}, err
}
return q.db.UpsertMCPServerUserToken(ctx, arg)
}
func (q *querier) UpsertNotificationReportGeneratorLog(ctx context.Context, arg database.UpsertNotificationReportGeneratorLogParams) error {
if err := q.authorizeContext(ctx, policy.ActionCreate, rbac.ResourceSystem); err != nil {
return err
@@ -6656,6 +6956,13 @@ func (q *querier) UpsertWorkspaceAppAuditSession(ctx context.Context, arg databa
return q.db.UpsertWorkspaceAppAuditSession(ctx, arg)
}
func (q *querier) UsageEventExistsByID(ctx context.Context, id string) (bool, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceUsageEvent); err != nil {
return false, err
}
return q.db.UsageEventExistsByID(ctx, id)
}
func (q *querier) ValidateGroupIDs(ctx context.Context, groupIDs []uuid.UUID) (database.ValidateGroupIDsRow, error) {
// This check is probably overly restrictive, but the "correct" check isn't
// necessarily obvious. It's only used as a verification check for ACLs right
@@ -6751,3 +7058,7 @@ func (q *querier) ListAuthorizedAIBridgeModels(ctx context.Context, arg database
// database.Store interface, so dbauthz needs to implement it.
return q.ListAIBridgeModels(ctx, arg)
}
func (q *querier) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, _ rbac.PreparedAuthorized) ([]database.Chat, error) {
return q.GetChats(ctx, arg)
}
+348 -20
@@ -401,16 +401,27 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UnarchiveChatByID(gomock.Any(), chat.ID).Return(nil).AnyTimes()
check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns()
}))
-s.Run("DeleteChatMessagesAfterID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
+s.Run("SoftDeleteChatMessagesAfterID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
-arg := database.DeleteChatMessagesAfterIDParams{
+arg := database.SoftDeleteChatMessagesAfterIDParams{
ChatID: chat.ID,
AfterID: 123,
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
-dbm.EXPECT().DeleteChatMessagesAfterID(gomock.Any(), arg).Return(nil).AnyTimes()
+dbm.EXPECT().SoftDeleteChatMessagesAfterID(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns()
}))
s.Run("SoftDeleteChatMessageByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
msg := database.ChatMessage{
ID: 456,
ChatID: chat.ID,
}
dbm.EXPECT().GetChatMessageByID(gomock.Any(), msg.ID).Return(msg, nil).AnyTimes()
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().SoftDeleteChatMessageByID(gomock.Any(), msg.ID).Return(nil).AnyTimes()
check.Args(msg.ID).Asserts(chat, policy.ActionUpdate).Returns()
}))
s.Run("DeleteChatModelConfigByID", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
id := uuid.New()
dbm.EXPECT().DeleteChatModelConfigByID(gomock.Any(), id).Return(nil).AnyTimes()
@@ -513,6 +524,10 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().GetChatCostSummary(gomock.Any(), arg).Return(row, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChat.WithOwner(arg.OwnerID.String()), policy.ActionRead).Returns(row)
}))
s.Run("CountEnabledModelsWithoutPricing", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().CountEnabledModelsWithoutPricing(gomock.Any()).Return(int64(3), nil).AnyTimes()
check.Args().Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(int64(3))
}))
s.Run("GetChatDiffStatusByChatID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
diffStatus := testutil.Fake(s.T(), faker, database.ChatDiffStatus{ChatID: chat.ID})
@@ -558,6 +573,14 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().GetChatMessagesByChatID(gomock.Any(), arg).Return(msgs, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionRead).Returns(msgs)
}))
s.Run("GetChatMessagesByChatIDDescPaginated", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
msgs := []database.ChatMessage{testutil.Fake(s.T(), faker, database.ChatMessage{ChatID: chat.ID})}
arg := database.GetChatMessagesByChatIDDescPaginatedParams{ChatID: chat.ID, BeforeID: 0, LimitVal: 50}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().GetChatMessagesByChatIDDescPaginated(gomock.Any(), arg).Return(msgs, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionRead).Returns(msgs)
}))
s.Run("GetLastChatMessageByRole", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
msg := testutil.Fake(s.T(), faker, database.ChatMessage{ChatID: chat.ID})
@@ -606,12 +629,17 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().GetChatProviders(gomock.Any()).Return([]database.ChatProvider{providerA, providerB}, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns([]database.ChatProvider{providerA, providerB})
}))
-s.Run("GetChatsByOwnerID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
-c1 := testutil.Fake(s.T(), faker, database.Chat{})
-c2 := testutil.Fake(s.T(), faker, database.Chat{})
-params := database.GetChatsByOwnerIDParams{OwnerID: c1.OwnerID}
-dbm.EXPECT().GetChatsByOwnerID(gomock.Any(), params).Return([]database.Chat{c1, c2}, nil).AnyTimes()
-check.Args(params).Asserts(c1, policy.ActionRead, c2, policy.ActionRead).Returns([]database.Chat{c1, c2})
+s.Run("GetChats", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
+params := database.GetChatsParams{}
+dbm.EXPECT().GetAuthorizedChats(gomock.Any(), params, gomock.Any()).Return([]database.Chat{}, nil).AnyTimes()
+// No asserts here because SQLFilter.
+check.Args(params).Asserts()
}))
s.Run("GetAuthorizedChats", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
params := database.GetChatsParams{}
dbm.EXPECT().GetAuthorizedChats(gomock.Any(), params, gomock.Any()).Return([]database.Chat{}, nil).AnyTimes()
// No asserts here because it re-routes through GetChats which uses SQLFilter.
check.Args(params, emptyPreparedAuthorized{}).Asserts()
}))
s.Run("GetChatQueuedMessages", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
@@ -624,6 +652,10 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().GetChatSystemPrompt(gomock.Any()).Return("prompt", nil).AnyTimes()
check.Args().Asserts()
}))
s.Run("GetChatDesktopEnabled", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().GetChatDesktopEnabled(gomock.Any()).Return(false, nil).AnyTimes()
check.Args().Asserts()
}))
s.Run("GetEnabledChatModelConfigs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
configA := testutil.Fake(s.T(), faker, database.ChatModelConfig{})
configB := testutil.Fake(s.T(), faker, database.ChatModelConfig{})
@@ -654,13 +686,13 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().InsertChatFile(gomock.Any(), arg).Return(file, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChat.WithOwner(arg.OwnerID.String()).InOrg(arg.OrganizationID), policy.ActionCreate).Returns(file)
}))
-s.Run("InsertChatMessage", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
+s.Run("InsertChatMessages", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
-arg := testutil.Fake(s.T(), faker, database.InsertChatMessageParams{ChatID: chat.ID})
-msg := testutil.Fake(s.T(), faker, database.ChatMessage{ChatID: chat.ID})
+arg := testutil.Fake(s.T(), faker, database.InsertChatMessagesParams{ChatID: chat.ID})
+msgs := []database.ChatMessage{testutil.Fake(s.T(), faker, database.ChatMessage{ChatID: chat.ID})}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
-dbm.EXPECT().InsertChatMessage(gomock.Any(), arg).Return(msg, nil).AnyTimes()
-check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(msg)
+dbm.EXPECT().InsertChatMessages(gomock.Any(), arg).Return(msgs, nil).AnyTimes()
+check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(msgs)
}))
s.Run("InsertChatQueuedMessage", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
@@ -833,6 +865,254 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UpsertChatSystemPrompt(gomock.Any(), "").Return(nil).AnyTimes()
check.Args("").Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate)
}))
s.Run("UpsertChatDesktopEnabled", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().UpsertChatDesktopEnabled(gomock.Any(), false).Return(nil).AnyTimes()
check.Args(false).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate)
}))
s.Run("GetUserChatSpendInPeriod", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetUserChatSpendInPeriodParams{
UserID: uuid.New(),
StartTime: time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC),
EndTime: time.Date(2025, 2, 1, 0, 0, 0, 0, time.UTC),
}
spend := int64(123)
dbm.EXPECT().GetUserChatSpendInPeriod(gomock.Any(), arg).Return(spend, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChat.WithOwner(arg.UserID.String()), policy.ActionRead).Returns(spend)
}))
s.Run("GetUserGroupSpendLimit", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
userID := uuid.New()
limit := int64(456)
dbm.EXPECT().GetUserGroupSpendLimit(gomock.Any(), userID).Return(limit, nil).AnyTimes()
check.Args(userID).Asserts(rbac.ResourceChat.WithOwner(userID.String()), policy.ActionRead).Returns(limit)
}))
s.Run("ResolveUserChatSpendLimit", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
userID := uuid.New()
limit := int64(789)
dbm.EXPECT().ResolveUserChatSpendLimit(gomock.Any(), userID).Return(limit, nil).AnyTimes()
check.Args(userID).Asserts(rbac.ResourceChat.WithOwner(userID.String()), policy.ActionRead).Returns(limit)
}))
s.Run("GetChatUsageLimitConfig", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
now := dbtime.Now()
config := database.ChatUsageLimitConfig{
ID: 1,
Singleton: true,
Enabled: true,
DefaultLimitMicros: 1_000_000,
Period: "monthly",
CreatedAt: now,
UpdatedAt: now,
}
dbm.EXPECT().GetChatUsageLimitConfig(gomock.Any()).Return(config, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(config)
}))
s.Run("GetChatUsageLimitGroupOverride", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
groupID := uuid.New()
override := database.GetChatUsageLimitGroupOverrideRow{
GroupID: groupID,
SpendLimitMicros: sql.NullInt64{Int64: 2_000_000, Valid: true},
}
dbm.EXPECT().GetChatUsageLimitGroupOverride(gomock.Any(), groupID).Return(override, nil).AnyTimes()
check.Args(groupID).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(override)
}))
s.Run("GetChatUsageLimitUserOverride", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
userID := uuid.New()
override := database.GetChatUsageLimitUserOverrideRow{
UserID: userID,
SpendLimitMicros: sql.NullInt64{Int64: 3_000_000, Valid: true},
}
dbm.EXPECT().GetChatUsageLimitUserOverride(gomock.Any(), userID).Return(override, nil).AnyTimes()
check.Args(userID).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(override)
}))
s.Run("ListChatUsageLimitGroupOverrides", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
overrides := []database.ListChatUsageLimitGroupOverridesRow{{
GroupID: uuid.New(),
GroupName: "group-name",
GroupDisplayName: "Group Name",
GroupAvatarUrl: "https://example.com/group.png",
SpendLimitMicros: sql.NullInt64{Int64: 4_000_000, Valid: true},
MemberCount: 5,
}}
dbm.EXPECT().ListChatUsageLimitGroupOverrides(gomock.Any()).Return(overrides, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(overrides)
}))
s.Run("ListChatUsageLimitOverrides", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
overrides := []database.ListChatUsageLimitOverridesRow{{
UserID: uuid.New(),
Username: "usage-limit-user",
Name: "Usage Limit User",
AvatarURL: "https://example.com/avatar.png",
SpendLimitMicros: sql.NullInt64{Int64: 5_000_000, Valid: true},
}}
dbm.EXPECT().ListChatUsageLimitOverrides(gomock.Any()).Return(overrides, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(overrides)
}))
s.Run("UpsertChatUsageLimitConfig", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
now := dbtime.Now()
arg := database.UpsertChatUsageLimitConfigParams{
Enabled: true,
DefaultLimitMicros: 6_000_000,
Period: "monthly",
}
config := database.ChatUsageLimitConfig{
ID: 1,
Singleton: true,
Enabled: arg.Enabled,
DefaultLimitMicros: arg.DefaultLimitMicros,
Period: arg.Period,
CreatedAt: now,
UpdatedAt: now,
}
dbm.EXPECT().UpsertChatUsageLimitConfig(gomock.Any(), arg).Return(config, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate).Returns(config)
}))
s.Run("UpsertChatUsageLimitGroupOverride", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.UpsertChatUsageLimitGroupOverrideParams{
SpendLimitMicros: 7_000_000,
GroupID: uuid.New(),
}
override := database.UpsertChatUsageLimitGroupOverrideRow{
GroupID: arg.GroupID,
Name: "group",
DisplayName: "Group",
AvatarURL: "",
SpendLimitMicros: sql.NullInt64{Int64: arg.SpendLimitMicros, Valid: true},
}
dbm.EXPECT().UpsertChatUsageLimitGroupOverride(gomock.Any(), arg).Return(override, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate).Returns(override)
}))
s.Run("UpsertChatUsageLimitUserOverride", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.UpsertChatUsageLimitUserOverrideParams{
SpendLimitMicros: 8_000_000,
UserID: uuid.New(),
}
override := database.UpsertChatUsageLimitUserOverrideRow{
UserID: arg.UserID,
Username: "user",
Name: "User",
AvatarURL: "",
SpendLimitMicros: sql.NullInt64{Int64: arg.SpendLimitMicros, Valid: true},
}
dbm.EXPECT().UpsertChatUsageLimitUserOverride(gomock.Any(), arg).Return(override, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate).Returns(override)
}))
s.Run("DeleteChatUsageLimitGroupOverride", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
groupID := uuid.New()
dbm.EXPECT().DeleteChatUsageLimitGroupOverride(gomock.Any(), groupID).Return(nil).AnyTimes()
check.Args(groupID).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate)
}))
s.Run("DeleteChatUsageLimitUserOverride", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
userID := uuid.New()
dbm.EXPECT().DeleteChatUsageLimitUserOverride(gomock.Any(), userID).Return(nil).AnyTimes()
check.Args(userID).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate)
}))
s.Run("CleanupDeletedMCPServerIDsFromChats", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().CleanupDeletedMCPServerIDsFromChats(gomock.Any()).Return(nil).AnyTimes()
check.Args().Asserts(rbac.ResourceChat, policy.ActionUpdate)
}))
s.Run("DeleteMCPServerConfigByID", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
id := uuid.New()
dbm.EXPECT().DeleteMCPServerConfigByID(gomock.Any(), id).Return(nil).AnyTimes()
check.Args(id).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate)
}))
s.Run("DeleteMCPServerUserToken", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.DeleteMCPServerUserTokenParams{
MCPServerConfigID: uuid.New(),
UserID: uuid.New(),
}
dbm.EXPECT().DeleteMCPServerUserToken(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate)
}))
s.Run("GetEnabledMCPServerConfigs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
configA := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
configB := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
dbm.EXPECT().GetEnabledMCPServerConfigs(gomock.Any()).Return([]database.MCPServerConfig{configA, configB}, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns([]database.MCPServerConfig{configA, configB})
}))
s.Run("GetForcedMCPServerConfigs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
configA := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
configB := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
dbm.EXPECT().GetForcedMCPServerConfigs(gomock.Any()).Return([]database.MCPServerConfig{configA, configB}, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns([]database.MCPServerConfig{configA, configB})
}))
s.Run("GetMCPServerConfigByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
config := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
dbm.EXPECT().GetMCPServerConfigByID(gomock.Any(), config.ID).Return(config, nil).AnyTimes()
check.Args(config.ID).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(config)
}))
s.Run("GetMCPServerConfigBySlug", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
slug := "test-mcp-server"
config := testutil.Fake(s.T(), faker, database.MCPServerConfig{Slug: slug})
dbm.EXPECT().GetMCPServerConfigBySlug(gomock.Any(), slug).Return(config, nil).AnyTimes()
check.Args(slug).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(config)
}))
s.Run("GetMCPServerConfigs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
configA := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
configB := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
dbm.EXPECT().GetMCPServerConfigs(gomock.Any()).Return([]database.MCPServerConfig{configA, configB}, nil).AnyTimes()
check.Args().Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns([]database.MCPServerConfig{configA, configB})
}))
s.Run("GetMCPServerConfigsByIDs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
configA := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
configB := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
ids := []uuid.UUID{configA.ID, configB.ID}
dbm.EXPECT().GetMCPServerConfigsByIDs(gomock.Any(), ids).Return([]database.MCPServerConfig{configA, configB}, nil).AnyTimes()
check.Args(ids).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns([]database.MCPServerConfig{configA, configB})
}))
s.Run("GetMCPServerUserToken", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
arg := database.GetMCPServerUserTokenParams{
MCPServerConfigID: uuid.New(),
UserID: uuid.New(),
}
token := testutil.Fake(s.T(), faker, database.MCPServerUserToken{MCPServerConfigID: arg.MCPServerConfigID, UserID: arg.UserID})
dbm.EXPECT().GetMCPServerUserToken(gomock.Any(), arg).Return(token, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(token)
}))
s.Run("GetMCPServerUserTokensByUserID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
userID := uuid.New()
tokens := []database.MCPServerUserToken{testutil.Fake(s.T(), faker, database.MCPServerUserToken{UserID: userID})}
dbm.EXPECT().GetMCPServerUserTokensByUserID(gomock.Any(), userID).Return(tokens, nil).AnyTimes()
check.Args(userID).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead).Returns(tokens)
}))
s.Run("InsertMCPServerConfig", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
arg := database.InsertMCPServerConfigParams{
DisplayName: "Test MCP Server",
Slug: "test-mcp-server",
}
config := testutil.Fake(s.T(), faker, database.MCPServerConfig{DisplayName: arg.DisplayName, Slug: arg.Slug})
dbm.EXPECT().InsertMCPServerConfig(gomock.Any(), arg).Return(config, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate).Returns(config)
}))
s.Run("UpdateChatMCPServerIDs", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatMCPServerIDsParams{
ID: chat.ID,
MCPServerIDs: []uuid.UUID{uuid.New()},
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatMCPServerIDs(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateMCPServerConfig", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
config := testutil.Fake(s.T(), faker, database.MCPServerConfig{})
arg := database.UpdateMCPServerConfigParams{
ID: config.ID,
DisplayName: "Updated MCP Server",
Slug: "updated-mcp-server",
}
dbm.EXPECT().UpdateMCPServerConfig(gomock.Any(), arg).Return(config, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate).Returns(config)
}))
s.Run("UpsertMCPServerUserToken", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
arg := database.UpsertMCPServerUserTokenParams{
MCPServerConfigID: uuid.New(),
UserID: uuid.New(),
AccessToken: "test-access-token",
TokenType: "bearer",
}
token := testutil.Fake(s.T(), faker, database.MCPServerUserToken{MCPServerConfigID: arg.MCPServerConfigID, UserID: arg.UserID})
dbm.EXPECT().UpsertMCPServerUserToken(gomock.Any(), arg).Return(token, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionUpdate).Returns(token)
}))
}
func (s *MethodTestSuite) TestFile() {
@@ -1155,6 +1435,14 @@ func (s *MethodTestSuite) TestProvisionerJob() {
}
func (s *MethodTestSuite) TestLicense() {
s.Run("GetActiveAISeatCount", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().GetActiveAISeatCount(gomock.Any()).Return(int64(100), nil).AnyTimes()
check.Args().Asserts(rbac.ResourceLicense, policy.ActionRead).Returns(int64(100))
}))
s.Run("UpsertAISeatState", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().UpsertAISeatState(gomock.Any(), gomock.Any()).Return(true, nil).AnyTimes()
check.Args(database.UpsertAISeatStateParams{}).Asserts(rbac.ResourceSystem, policy.ActionCreate)
}))
s.Run("GetLicenses", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
a := database.License{ID: 1}
b := database.License{ID: 2}
@@ -1201,8 +1489,8 @@ func (s *MethodTestSuite) TestLicense() {
check.Args().Asserts().Returns("value")
}))
s.Run("GetDefaultProxyConfig", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
-dbm.EXPECT().GetDefaultProxyConfig(gomock.Any()).Return(database.GetDefaultProxyConfigRow{DisplayName: "Default", IconUrl: "/emojis/1f3e1.png"}, nil).AnyTimes()
-check.Args().Asserts().Returns(database.GetDefaultProxyConfigRow{DisplayName: "Default", IconUrl: "/emojis/1f3e1.png"})
+dbm.EXPECT().GetDefaultProxyConfig(gomock.Any()).Return(database.GetDefaultProxyConfigRow{DisplayName: "Default", IconURL: "/emojis/1f3e1.png"}, nil).AnyTimes()
+check.Args().Asserts().Returns(database.GetDefaultProxyConfigRow{DisplayName: "Default", IconURL: "/emojis/1f3e1.png"})
}))
s.Run("GetLogoURL", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().GetLogoURL(gomock.Any()).Return("value", nil).AnyTimes()
@@ -1324,7 +1612,7 @@ func (s *MethodTestSuite) TestOrganization() {
org := testutil.Fake(s.T(), faker, database.Organization{})
arg := database.UpdateOrganizationWorkspaceSharingSettingsParams{
ID: org.ID,
-WorkspaceSharingDisabled: true,
+ShareableWorkspaceOwners: database.ShareableWorkspaceOwnersNone,
}
dbm.EXPECT().GetOrganizationByID(gomock.Any(), org.ID).Return(org, nil).AnyTimes()
dbm.EXPECT().UpdateOrganizationWorkspaceSharingSettings(gomock.Any(), arg).Return(org, nil).AnyTimes()
@@ -1755,6 +2043,26 @@ func (s *MethodTestSuite) TestTemplate() {
dbm.EXPECT().GetTemplateInsightsByTemplate(gomock.Any(), arg).Return([]database.GetTemplateInsightsByTemplateRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceTemplate, policy.ActionViewInsights)
}))
s.Run("GetPRInsightsSummary", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetPRInsightsSummaryParams{}
dbm.EXPECT().GetPRInsightsSummary(gomock.Any(), arg).Return(database.GetPRInsightsSummaryRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead)
}))
s.Run("GetPRInsightsTimeSeries", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetPRInsightsTimeSeriesParams{}
dbm.EXPECT().GetPRInsightsTimeSeries(gomock.Any(), arg).Return([]database.GetPRInsightsTimeSeriesRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead)
}))
s.Run("GetPRInsightsPerModel", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetPRInsightsPerModelParams{}
dbm.EXPECT().GetPRInsightsPerModel(gomock.Any(), arg).Return([]database.GetPRInsightsPerModelRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead)
}))
s.Run("GetPRInsightsRecentPRs", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetPRInsightsRecentPRsParams{}
dbm.EXPECT().GetPRInsightsRecentPRs(gomock.Any(), arg).Return([]database.GetPRInsightsRecentPRsRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead)
}))
s.Run("GetTelemetryTaskEvents", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetTelemetryTaskEventsParams{}
dbm.EXPECT().GetTelemetryTaskEvents(gomock.Any(), arg).Return([]database.GetTelemetryTaskEventsRow{}, nil).AnyTimes()
@@ -2243,9 +2551,12 @@ func (s *MethodTestSuite) TestWorkspace() {
check.Args(w.ID).Asserts(w, policy.ActionShare)
}))
s.Run("DeleteWorkspaceACLsByOrganization", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
-orgID := uuid.New()
-dbm.EXPECT().DeleteWorkspaceACLsByOrganization(gomock.Any(), orgID).Return(nil).AnyTimes()
-check.Args(orgID).Asserts(rbac.ResourceSystem, policy.ActionUpdate)
+arg := database.DeleteWorkspaceACLsByOrganizationParams{
+OrganizationID: uuid.New(),
+ExcludeServiceAccounts: false,
+}
+dbm.EXPECT().DeleteWorkspaceACLsByOrganization(gomock.Any(), arg).Return(nil).AnyTimes()
+check.Args(arg).Asserts(rbac.ResourceSystem, policy.ActionUpdate)
}))
s.Run("GetLatestWorkspaceBuildByWorkspaceID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
w := testutil.Fake(s.T(), faker, database.Workspace{})
@@ -4951,6 +5262,12 @@ func (s *MethodTestSuite) TestUsageEvents() {
check.Args(params).Asserts(rbac.ResourceUsageEvent, policy.ActionCreate)
}))
s.Run("UsageEventExistsByID", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
id := uuid.NewString()
db.EXPECT().UsageEventExistsByID(gomock.Any(), id).Return(true, nil)
check.Args(id).Asserts(rbac.ResourceUsageEvent, policy.ActionRead)
}))
s.Run("SelectUsageEventsForPublishing", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
now := dbtime.Now()
db.EXPECT().SelectUsageEventsForPublishing(gomock.Any(), now).Return([]database.UsageEvent{}, nil)
@@ -5011,6 +5328,17 @@ func (s *MethodTestSuite) TestAIBridge() {
check.Args(params).Asserts(intc, policy.ActionCreate)
}))
s.Run("InsertAIBridgeModelThought", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
intID := uuid.UUID{2}
intc := testutil.Fake(s.T(), faker, database.AIBridgeInterception{ID: intID})
db.EXPECT().GetAIBridgeInterceptionByID(gomock.Any(), intID).Return(intc, nil).AnyTimes() // Validation.
params := database.InsertAIBridgeModelThoughtParams{InterceptionID: intc.ID}
expected := testutil.Fake(s.T(), faker, database.AIBridgeModelThought{InterceptionID: intc.ID})
db.EXPECT().InsertAIBridgeModelThought(gomock.Any(), params).Return(expected, nil).AnyTimes()
check.Args(params).Asserts(intc, policy.ActionUpdate)
}))
s.Run("InsertAIBridgeTokenUsage", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
intID := uuid.UUID{2}
intc := testutil.Fake(s.T(), faker, database.AIBridgeInterception{ID: intID})
+2 -1
@@ -29,6 +29,7 @@ import (
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/rbac/regosql"
+"github.com/coder/coder/v2/coderd/rbac/rolestore"
"github.com/coder/coder/v2/coderd/util/slice"
)
@@ -143,7 +144,7 @@ func (s *MethodTestSuite) Mocked(testCaseF func(dmb *dbmock.MockStore, faker *go
UUID: pair.OrganizationID,
Valid: pair.OrganizationID != uuid.Nil,
},
-IsSystem: rbac.SystemRoleName(pair.Name),
+IsSystem: rolestore.IsSystemRoleName(pair.Name),
ID: uuid.New(),
})
}
+17 -25
@@ -650,34 +650,26 @@ func Organization(t testing.TB, db database.Store, orig database.Organization) d
})
require.NoError(t, err, "insert organization")
-// Populate the placeholder organization-member system role (created by
-// DB trigger/migration) so org members have expected permissions.
-//nolint:gocritic // ReconcileOrgMemberRole needs the system:update
+// Populate the placeholder system roles (created by DB
+// trigger/migration) so org members have expected permissions.
+//nolint:gocritic // ReconcileSystemRole needs the system:update
// permission that `genCtx` does not have.
sysCtx := dbauthz.AsSystemRestricted(genCtx)
-_, _, err = rolestore.ReconcileOrgMemberRole(sysCtx, db, database.CustomRole{
-Name: rbac.RoleOrgMember(),
-OrganizationID: uuid.NullUUID{
-UUID: org.ID,
-Valid: true,
-},
-}, org.WorkspaceSharingDisabled)
-if errors.Is(err, sql.ErrNoRows) {
-// The trigger that creates the placeholder role didn't run (e.g.,
-// triggers were disabled in the test). Create the role manually.
-err = rolestore.CreateOrgMemberRole(sysCtx, db, org)
-require.NoError(t, err, "create organization-member role")
-_, _, err = rolestore.ReconcileOrgMemberRole(sysCtx, db, database.CustomRole{
-Name: rbac.RoleOrgMember(),
-OrganizationID: uuid.NullUUID{
-UUID: org.ID,
-Valid: true,
-},
-}, org.WorkspaceSharingDisabled)
+for roleName := range rolestore.SystemRoleNames {
+role := database.CustomRole{
+Name: roleName,
+OrganizationID: uuid.NullUUID{UUID: org.ID, Valid: true},
+}
+_, _, err = rolestore.ReconcileSystemRole(sysCtx, db, role, org)
+if errors.Is(err, sql.ErrNoRows) {
+// The trigger that creates the placeholder role didn't run (e.g.,
+// triggers were disabled in the test). Create the role manually.
+err = rolestore.CreateSystemRole(sysCtx, db, org, roleName)
+require.NoError(t, err, "create role "+roleName)
+_, _, err = rolestore.ReconcileSystemRole(sysCtx, db, role, org)
+}
+require.NoError(t, err, "reconcile role "+roleName)
+}
-require.NoError(t, err, "reconcile organization-member role")
return org
}
+356 -18
@@ -264,6 +264,14 @@ func (m queryMetricsStore) CleanTailnetTunnels(ctx context.Context) error {
return r0
}
func (m queryMetricsStore) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
start := time.Now()
r0 := m.s.CleanupDeletedMCPServerIDsFromChats(ctx)
m.queryLatencies.WithLabelValues("CleanupDeletedMCPServerIDsFromChats").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CleanupDeletedMCPServerIDsFromChats").Inc()
return r0
}
func (m queryMetricsStore) CountAIBridgeInterceptions(ctx context.Context, arg database.CountAIBridgeInterceptionsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountAIBridgeInterceptions(ctx, arg)
@@ -288,6 +296,14 @@ func (m queryMetricsStore) CountConnectionLogs(ctx context.Context, arg database
return r0, r1
}
func (m queryMetricsStore) CountEnabledModelsWithoutPricing(ctx context.Context) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountEnabledModelsWithoutPricing(ctx)
m.queryLatencies.WithLabelValues("CountEnabledModelsWithoutPricing").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CountEnabledModelsWithoutPricing").Inc()
return r0, r1
}
func (m queryMetricsStore) CountInProgressPrebuilds(ctx context.Context) ([]database.CountInProgressPrebuildsRow, error) {
start := time.Now()
r0, r1 := m.s.CountInProgressPrebuilds(ctx)
@@ -376,14 +392,6 @@ func (m queryMetricsStore) DeleteApplicationConnectAPIKeysByUserID(ctx context.C
return r0
}
func (m queryMetricsStore) DeleteChatMessagesAfterID(ctx context.Context, arg database.DeleteChatMessagesAfterIDParams) error {
start := time.Now()
r0 := m.s.DeleteChatMessagesAfterID(ctx, arg)
m.queryLatencies.WithLabelValues("DeleteChatMessagesAfterID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteChatMessagesAfterID").Inc()
return r0
}
func (m queryMetricsStore) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatModelConfigByID(ctx, id)
@@ -408,6 +416,22 @@ func (m queryMetricsStore) DeleteChatQueuedMessage(ctx context.Context, arg data
return r0
}
func (m queryMetricsStore) DeleteChatUsageLimitGroupOverride(ctx context.Context, groupID uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatUsageLimitGroupOverride(ctx, groupID)
m.queryLatencies.WithLabelValues("DeleteChatUsageLimitGroupOverride").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteChatUsageLimitGroupOverride").Inc()
return r0
}
func (m queryMetricsStore) DeleteChatUsageLimitUserOverride(ctx context.Context, userID uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteChatUsageLimitUserOverride(ctx, userID)
m.queryLatencies.WithLabelValues("DeleteChatUsageLimitUserOverride").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteChatUsageLimitUserOverride").Inc()
return r0
}
func (m queryMetricsStore) DeleteCryptoKey(ctx context.Context, arg database.DeleteCryptoKeyParams) (database.CryptoKey, error) {
start := time.Now()
r0, r1 := m.s.DeleteCryptoKey(ctx, arg)
@@ -464,6 +488,22 @@ func (m queryMetricsStore) DeleteLicense(ctx context.Context, id int32) (int32,
return r0, r1
}
func (m queryMetricsStore) DeleteMCPServerConfigByID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteMCPServerConfigByID(ctx, id)
m.queryLatencies.WithLabelValues("DeleteMCPServerConfigByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteMCPServerConfigByID").Inc()
return r0
}
func (m queryMetricsStore) DeleteMCPServerUserToken(ctx context.Context, arg database.DeleteMCPServerUserTokenParams) error {
start := time.Now()
r0 := m.s.DeleteMCPServerUserToken(ctx, arg)
m.queryLatencies.WithLabelValues("DeleteMCPServerUserToken").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteMCPServerUserToken").Inc()
return r0
}
func (m queryMetricsStore) DeleteOAuth2ProviderAppByClientID(ctx context.Context, id uuid.UUID) error {
start := time.Now()
r0 := m.s.DeleteOAuth2ProviderAppByClientID(ctx, id)
@@ -672,10 +712,11 @@ func (m queryMetricsStore) DeleteWorkspaceACLByID(ctx context.Context, id uuid.U
return r0
}
func (m queryMetricsStore) DeleteWorkspaceACLsByOrganization(ctx context.Context, organizationID uuid.UUID) error {
func (m queryMetricsStore) DeleteWorkspaceACLsByOrganization(ctx context.Context, arg database.DeleteWorkspaceACLsByOrganizationParams) error {
start := time.Now()
r0 := m.s.DeleteWorkspaceACLsByOrganization(ctx, organizationID)
r0 := m.s.DeleteWorkspaceACLsByOrganization(ctx, arg)
m.queryLatencies.WithLabelValues("DeleteWorkspaceACLsByOrganization").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteWorkspaceACLsByOrganization").Inc()
return r0
}
@@ -871,6 +912,14 @@ func (m queryMetricsStore) GetAPIKeysLastUsedAfter(ctx context.Context, lastUsed
return r0, r1
}
func (m queryMetricsStore) GetActiveAISeatCount(ctx context.Context) (int64, error) {
start := time.Now()
r0, r1 := m.s.GetActiveAISeatCount(ctx)
m.queryLatencies.WithLabelValues("GetActiveAISeatCount").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetActiveAISeatCount").Inc()
return r0, r1
}
func (m queryMetricsStore) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
start := time.Now()
r0, r1 := m.s.GetActivePresetPrebuildSchedules(ctx)
@@ -1015,6 +1064,14 @@ func (m queryMetricsStore) GetChatCostSummary(ctx context.Context, arg database.
return r0, r1
}
func (m queryMetricsStore) GetChatDesktopEnabled(ctx context.Context) (bool, error) {
start := time.Now()
r0, r1 := m.s.GetChatDesktopEnabled(ctx)
m.queryLatencies.WithLabelValues("GetChatDesktopEnabled").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatDesktopEnabled").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatDiffStatusByChatID(ctx context.Context, chatID uuid.UUID) (database.ChatDiffStatus, error) {
start := time.Now()
r0, r1 := m.s.GetChatDiffStatusByChatID(ctx, chatID)
@@ -1063,6 +1120,14 @@ func (m queryMetricsStore) GetChatMessagesByChatID(ctx context.Context, chatID d
return r0, r1
}
func (m queryMetricsStore) GetChatMessagesByChatIDDescPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDDescPaginatedParams) ([]database.ChatMessage, error) {
start := time.Now()
r0, r1 := m.s.GetChatMessagesByChatIDDescPaginated(ctx, arg)
m.queryLatencies.WithLabelValues("GetChatMessagesByChatIDDescPaginated").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatMessagesByChatIDDescPaginated").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatMessagesForPromptByChatID(ctx context.Context, chatID uuid.UUID) ([]database.ChatMessage, error) {
start := time.Now()
r0, r1 := m.s.GetChatMessagesForPromptByChatID(ctx, chatID)
@@ -1127,11 +1192,35 @@ func (m queryMetricsStore) GetChatSystemPrompt(ctx context.Context) (string, err
return r0, r1
}
func (m queryMetricsStore) GetChatsByOwnerID(ctx context.Context, ownerID database.GetChatsByOwnerIDParams) ([]database.Chat, error) {
func (m queryMetricsStore) GetChatUsageLimitConfig(ctx context.Context) (database.ChatUsageLimitConfig, error) {
start := time.Now()
r0, r1 := m.s.GetChatsByOwnerID(ctx, ownerID)
m.queryLatencies.WithLabelValues("GetChatsByOwnerID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatsByOwnerID").Inc()
r0, r1 := m.s.GetChatUsageLimitConfig(ctx)
m.queryLatencies.WithLabelValues("GetChatUsageLimitConfig").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatUsageLimitConfig").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatUsageLimitGroupOverride(ctx context.Context, groupID uuid.UUID) (database.GetChatUsageLimitGroupOverrideRow, error) {
start := time.Now()
r0, r1 := m.s.GetChatUsageLimitGroupOverride(ctx, groupID)
m.queryLatencies.WithLabelValues("GetChatUsageLimitGroupOverride").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatUsageLimitGroupOverride").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatUsageLimitUserOverride(ctx context.Context, userID uuid.UUID) (database.GetChatUsageLimitUserOverrideRow, error) {
start := time.Now()
r0, r1 := m.s.GetChatUsageLimitUserOverride(ctx, userID)
m.queryLatencies.WithLabelValues("GetChatUsageLimitUserOverride").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatUsageLimitUserOverride").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChats(ctx context.Context, arg database.GetChatsParams) ([]database.Chat, error) {
start := time.Now()
r0, r1 := m.s.GetChats(ctx, arg)
m.queryLatencies.WithLabelValues("GetChats").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChats").Inc()
return r0, r1
}
@@ -1263,6 +1352,14 @@ func (m queryMetricsStore) GetEnabledChatProviders(ctx context.Context) ([]datab
return r0, r1
}
func (m queryMetricsStore) GetEnabledMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
start := time.Now()
r0, r1 := m.s.GetEnabledMCPServerConfigs(ctx)
m.queryLatencies.WithLabelValues("GetEnabledMCPServerConfigs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetEnabledMCPServerConfigs").Inc()
return r0, r1
}
func (m queryMetricsStore) GetExternalAuthLink(ctx context.Context, arg database.GetExternalAuthLinkParams) (database.ExternalAuthLink, error) {
start := time.Now()
r0, r1 := m.s.GetExternalAuthLink(ctx, arg)
@@ -1319,6 +1416,14 @@ func (m queryMetricsStore) GetFilteredInboxNotificationsByUserID(ctx context.Con
return r0, r1
}
func (m queryMetricsStore) GetForcedMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
start := time.Now()
r0, r1 := m.s.GetForcedMCPServerConfigs(ctx)
m.queryLatencies.WithLabelValues("GetForcedMCPServerConfigs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetForcedMCPServerConfigs").Inc()
return r0, r1
}
func (m queryMetricsStore) GetGitSSHKey(ctx context.Context, userID uuid.UUID) (database.GitSSHKey, error) {
start := time.Now()
r0, r1 := m.s.GetGitSSHKey(ctx, userID)
@@ -1479,6 +1584,54 @@ func (m queryMetricsStore) GetLogoURL(ctx context.Context) (string, error) {
return r0, r1
}
func (m queryMetricsStore) GetMCPServerConfigByID(ctx context.Context, id uuid.UUID) (database.MCPServerConfig, error) {
start := time.Now()
r0, r1 := m.s.GetMCPServerConfigByID(ctx, id)
m.queryLatencies.WithLabelValues("GetMCPServerConfigByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetMCPServerConfigByID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetMCPServerConfigBySlug(ctx context.Context, slug string) (database.MCPServerConfig, error) {
start := time.Now()
r0, r1 := m.s.GetMCPServerConfigBySlug(ctx, slug)
m.queryLatencies.WithLabelValues("GetMCPServerConfigBySlug").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetMCPServerConfigBySlug").Inc()
return r0, r1
}
func (m queryMetricsStore) GetMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
start := time.Now()
r0, r1 := m.s.GetMCPServerConfigs(ctx)
m.queryLatencies.WithLabelValues("GetMCPServerConfigs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetMCPServerConfigs").Inc()
return r0, r1
}
func (m queryMetricsStore) GetMCPServerConfigsByIDs(ctx context.Context, ids []uuid.UUID) ([]database.MCPServerConfig, error) {
start := time.Now()
r0, r1 := m.s.GetMCPServerConfigsByIDs(ctx, ids)
m.queryLatencies.WithLabelValues("GetMCPServerConfigsByIDs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetMCPServerConfigsByIDs").Inc()
return r0, r1
}
func (m queryMetricsStore) GetMCPServerUserToken(ctx context.Context, arg database.GetMCPServerUserTokenParams) (database.MCPServerUserToken, error) {
start := time.Now()
r0, r1 := m.s.GetMCPServerUserToken(ctx, arg)
m.queryLatencies.WithLabelValues("GetMCPServerUserToken").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetMCPServerUserToken").Inc()
return r0, r1
}
func (m queryMetricsStore) GetMCPServerUserTokensByUserID(ctx context.Context, userID uuid.UUID) ([]database.MCPServerUserToken, error) {
start := time.Now()
r0, r1 := m.s.GetMCPServerUserTokensByUserID(ctx, userID)
m.queryLatencies.WithLabelValues("GetMCPServerUserTokensByUserID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetMCPServerUserTokensByUserID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetNotificationMessagesByStatus(ctx context.Context, arg database.GetNotificationMessagesByStatusParams) ([]database.NotificationMessage, error) {
start := time.Now()
r0, r1 := m.s.GetNotificationMessagesByStatus(ctx, arg)
@@ -1671,6 +1824,38 @@ func (m queryMetricsStore) GetOrganizationsWithPrebuildStatus(ctx context.Contex
return r0, r1
}
func (m queryMetricsStore) GetPRInsightsPerModel(ctx context.Context, arg database.GetPRInsightsPerModelParams) ([]database.GetPRInsightsPerModelRow, error) {
start := time.Now()
r0, r1 := m.s.GetPRInsightsPerModel(ctx, arg)
m.queryLatencies.WithLabelValues("GetPRInsightsPerModel").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetPRInsightsPerModel").Inc()
return r0, r1
}
func (m queryMetricsStore) GetPRInsightsRecentPRs(ctx context.Context, arg database.GetPRInsightsRecentPRsParams) ([]database.GetPRInsightsRecentPRsRow, error) {
start := time.Now()
r0, r1 := m.s.GetPRInsightsRecentPRs(ctx, arg)
m.queryLatencies.WithLabelValues("GetPRInsightsRecentPRs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetPRInsightsRecentPRs").Inc()
return r0, r1
}
func (m queryMetricsStore) GetPRInsightsSummary(ctx context.Context, arg database.GetPRInsightsSummaryParams) (database.GetPRInsightsSummaryRow, error) {
start := time.Now()
r0, r1 := m.s.GetPRInsightsSummary(ctx, arg)
m.queryLatencies.WithLabelValues("GetPRInsightsSummary").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetPRInsightsSummary").Inc()
return r0, r1
}
func (m queryMetricsStore) GetPRInsightsTimeSeries(ctx context.Context, arg database.GetPRInsightsTimeSeriesParams) ([]database.GetPRInsightsTimeSeriesRow, error) {
start := time.Now()
r0, r1 := m.s.GetPRInsightsTimeSeries(ctx, arg)
m.queryLatencies.WithLabelValues("GetPRInsightsTimeSeries").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetPRInsightsTimeSeries").Inc()
return r0, r1
}
func (m queryMetricsStore) GetParameterSchemasByJobID(ctx context.Context, jobID uuid.UUID) ([]database.ParameterSchema, error) {
start := time.Now()
r0, r1 := m.s.GetParameterSchemasByJobID(ctx, jobID)
@@ -2255,6 +2440,14 @@ func (m queryMetricsStore) GetUserChatCustomPrompt(ctx context.Context, userID u
return r0, r1
}
func (m queryMetricsStore) GetUserChatSpendInPeriod(ctx context.Context, arg database.GetUserChatSpendInPeriodParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.GetUserChatSpendInPeriod(ctx, arg)
m.queryLatencies.WithLabelValues("GetUserChatSpendInPeriod").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetUserChatSpendInPeriod").Inc()
return r0, r1
}
func (m queryMetricsStore) GetUserCount(ctx context.Context, includeSystem bool) (int64, error) {
start := time.Now()
r0, r1 := m.s.GetUserCount(ctx, includeSystem)
@@ -2263,6 +2456,14 @@ func (m queryMetricsStore) GetUserCount(ctx context.Context, includeSystem bool)
return r0, r1
}
func (m queryMetricsStore) GetUserGroupSpendLimit(ctx context.Context, userID uuid.UUID) (int64, error) {
start := time.Now()
r0, r1 := m.s.GetUserGroupSpendLimit(ctx, userID)
m.queryLatencies.WithLabelValues("GetUserGroupSpendLimit").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetUserGroupSpendLimit").Inc()
return r0, r1
}
func (m queryMetricsStore) GetUserLatencyInsights(ctx context.Context, arg database.GetUserLatencyInsightsParams) ([]database.GetUserLatencyInsightsRow, error) {
start := time.Now()
r0, r1 := m.s.GetUserLatencyInsights(ctx, arg)
@@ -2871,6 +3072,14 @@ func (m queryMetricsStore) InsertAIBridgeInterception(ctx context.Context, arg d
return r0, r1
}
func (m queryMetricsStore) InsertAIBridgeModelThought(ctx context.Context, arg database.InsertAIBridgeModelThoughtParams) (database.AIBridgeModelThought, error) {
start := time.Now()
r0, r1 := m.s.InsertAIBridgeModelThought(ctx, arg)
m.queryLatencies.WithLabelValues("InsertAIBridgeModelThought").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertAIBridgeModelThought").Inc()
return r0, r1
}
func (m queryMetricsStore) InsertAIBridgeTokenUsage(ctx context.Context, arg database.InsertAIBridgeTokenUsageParams) (database.AIBridgeTokenUsage, error) {
start := time.Now()
r0, r1 := m.s.InsertAIBridgeTokenUsage(ctx, arg)
@@ -2935,11 +3144,11 @@ func (m queryMetricsStore) InsertChatFile(ctx context.Context, arg database.Inse
return r0, r1
}
func (m queryMetricsStore) InsertChatMessage(ctx context.Context, arg database.InsertChatMessageParams) (database.ChatMessage, error) {
func (m queryMetricsStore) InsertChatMessages(ctx context.Context, arg database.InsertChatMessagesParams) ([]database.ChatMessage, error) {
start := time.Now()
r0, r1 := m.s.InsertChatMessage(ctx, arg)
m.queryLatencies.WithLabelValues("InsertChatMessage").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertChatMessage").Inc()
r0, r1 := m.s.InsertChatMessages(ctx, arg)
m.queryLatencies.WithLabelValues("InsertChatMessages").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertChatMessages").Inc()
return r0, r1
}
@@ -3063,6 +3272,14 @@ func (m queryMetricsStore) InsertLicense(ctx context.Context, arg database.Inser
return r0, r1
}
func (m queryMetricsStore) InsertMCPServerConfig(ctx context.Context, arg database.InsertMCPServerConfigParams) (database.MCPServerConfig, error) {
start := time.Now()
r0, r1 := m.s.InsertMCPServerConfig(ctx, arg)
m.queryLatencies.WithLabelValues("InsertMCPServerConfig").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertMCPServerConfig").Inc()
return r0, r1
}
func (m queryMetricsStore) InsertMemoryResourceMonitor(ctx context.Context, arg database.InsertMemoryResourceMonitorParams) (database.WorkspaceAgentMemoryResourceMonitor, error) {
start := time.Now()
r0, r1 := m.s.InsertMemoryResourceMonitor(ctx, arg)
@@ -3495,6 +3712,22 @@ func (m queryMetricsStore) ListAIBridgeUserPromptsByInterceptionIDs(ctx context.
return r0, r1
}
func (m queryMetricsStore) ListChatUsageLimitGroupOverrides(ctx context.Context) ([]database.ListChatUsageLimitGroupOverridesRow, error) {
start := time.Now()
r0, r1 := m.s.ListChatUsageLimitGroupOverrides(ctx)
m.queryLatencies.WithLabelValues("ListChatUsageLimitGroupOverrides").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListChatUsageLimitGroupOverrides").Inc()
return r0, r1
}
func (m queryMetricsStore) ListChatUsageLimitOverrides(ctx context.Context) ([]database.ListChatUsageLimitOverridesRow, error) {
start := time.Now()
r0, r1 := m.s.ListChatUsageLimitOverrides(ctx)
m.queryLatencies.WithLabelValues("ListChatUsageLimitOverrides").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListChatUsageLimitOverrides").Inc()
return r0, r1
}
func (m queryMetricsStore) ListProvisionerKeysByOrganization(ctx context.Context, organizationID uuid.UUID) ([]database.ProvisionerKey, error) {
start := time.Now()
r0, r1 := m.s.ListProvisionerKeysByOrganization(ctx, organizationID)
@@ -3607,6 +3840,14 @@ func (m queryMetricsStore) RemoveUserFromGroups(ctx context.Context, arg databas
return r0, r1
}
func (m queryMetricsStore) ResolveUserChatSpendLimit(ctx context.Context, userID uuid.UUID) (int64, error) {
start := time.Now()
r0, r1 := m.s.ResolveUserChatSpendLimit(ctx, userID)
m.queryLatencies.WithLabelValues("ResolveUserChatSpendLimit").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ResolveUserChatSpendLimit").Inc()
return r0, r1
}
func (m queryMetricsStore) RevokeDBCryptKey(ctx context.Context, activeKeyDigest string) error {
start := time.Now()
r0 := m.s.RevokeDBCryptKey(ctx, activeKeyDigest)
@@ -3623,6 +3864,22 @@ func (m queryMetricsStore) SelectUsageEventsForPublishing(ctx context.Context, n
return r0, r1
}
func (m queryMetricsStore) SoftDeleteChatMessageByID(ctx context.Context, id int64) error {
start := time.Now()
r0 := m.s.SoftDeleteChatMessageByID(ctx, id)
m.queryLatencies.WithLabelValues("SoftDeleteChatMessageByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "SoftDeleteChatMessageByID").Inc()
return r0
}
func (m queryMetricsStore) SoftDeleteChatMessagesAfterID(ctx context.Context, arg database.SoftDeleteChatMessagesAfterIDParams) error {
start := time.Now()
r0 := m.s.SoftDeleteChatMessagesAfterID(ctx, arg)
m.queryLatencies.WithLabelValues("SoftDeleteChatMessagesAfterID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "SoftDeleteChatMessagesAfterID").Inc()
return r0
}
func (m queryMetricsStore) TryAcquireLock(ctx context.Context, pgTryAdvisoryXactLock int64) (bool, error) {
start := time.Now()
r0, r1 := m.s.TryAcquireLock(ctx, pgTryAdvisoryXactLock)
@@ -3695,6 +3952,14 @@ func (m queryMetricsStore) UpdateChatHeartbeat(ctx context.Context, arg database
return r0, r1
}
func (m queryMetricsStore) UpdateChatMCPServerIDs(ctx context.Context, arg database.UpdateChatMCPServerIDsParams) (database.Chat, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatMCPServerIDs(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatMCPServerIDs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatMCPServerIDs").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateChatMessageByID(ctx context.Context, arg database.UpdateChatMessageByIDParams) (database.ChatMessage, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatMessageByID(ctx, arg)
@@ -3799,6 +4064,14 @@ func (m queryMetricsStore) UpdateInboxNotificationReadStatus(ctx context.Context
return r0
}
func (m queryMetricsStore) UpdateMCPServerConfig(ctx context.Context, arg database.UpdateMCPServerConfigParams) (database.MCPServerConfig, error) {
start := time.Now()
r0, r1 := m.s.UpdateMCPServerConfig(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateMCPServerConfig").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateMCPServerConfig").Inc()
return r0, r1
}
func (m queryMetricsStore) UpdateMemberRoles(ctx context.Context, arg database.UpdateMemberRolesParams) (database.OrganizationMember, error) {
start := time.Now()
r0, r1 := m.s.UpdateMemberRoles(ctx, arg)
@@ -3859,6 +4132,7 @@ func (m queryMetricsStore) UpdateOrganizationWorkspaceSharingSettings(ctx contex
start := time.Now()
r0, r1 := m.s.UpdateOrganizationWorkspaceSharingSettings(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateOrganizationWorkspaceSharingSettings").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateOrganizationWorkspaceSharingSettings").Inc()
return r0, r1
}
@@ -4406,6 +4680,14 @@ func (m queryMetricsStore) UpdateWorkspacesTTLByTemplateID(ctx context.Context,
return r0
}
func (m queryMetricsStore) UpsertAISeatState(ctx context.Context, arg database.UpsertAISeatStateParams) (bool, error) {
start := time.Now()
r0, r1 := m.s.UpsertAISeatState(ctx, arg)
m.queryLatencies.WithLabelValues("UpsertAISeatState").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpsertAISeatState").Inc()
return r0, r1
}
func (m queryMetricsStore) UpsertAnnouncementBanners(ctx context.Context, value string) error {
start := time.Now()
r0 := m.s.UpsertAnnouncementBanners(ctx, value)
@@ -4430,6 +4712,14 @@ func (m queryMetricsStore) UpsertBoundaryUsageStats(ctx context.Context, arg dat
return r0, r1
}
func (m queryMetricsStore) UpsertChatDesktopEnabled(ctx context.Context, enableDesktop bool) error {
start := time.Now()
r0 := m.s.UpsertChatDesktopEnabled(ctx, enableDesktop)
m.queryLatencies.WithLabelValues("UpsertChatDesktopEnabled").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpsertChatDesktopEnabled").Inc()
return r0
}
func (m queryMetricsStore) UpsertChatDiffStatus(ctx context.Context, arg database.UpsertChatDiffStatusParams) (database.ChatDiffStatus, error) {
start := time.Now()
r0, r1 := m.s.UpsertChatDiffStatus(ctx, arg)
@@ -4454,6 +4744,30 @@ func (m queryMetricsStore) UpsertChatSystemPrompt(ctx context.Context, value str
return r0
}
func (m queryMetricsStore) UpsertChatUsageLimitConfig(ctx context.Context, arg database.UpsertChatUsageLimitConfigParams) (database.ChatUsageLimitConfig, error) {
start := time.Now()
r0, r1 := m.s.UpsertChatUsageLimitConfig(ctx, arg)
m.queryLatencies.WithLabelValues("UpsertChatUsageLimitConfig").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpsertChatUsageLimitConfig").Inc()
return r0, r1
}
func (m queryMetricsStore) UpsertChatUsageLimitGroupOverride(ctx context.Context, arg database.UpsertChatUsageLimitGroupOverrideParams) (database.UpsertChatUsageLimitGroupOverrideRow, error) {
start := time.Now()
r0, r1 := m.s.UpsertChatUsageLimitGroupOverride(ctx, arg)
m.queryLatencies.WithLabelValues("UpsertChatUsageLimitGroupOverride").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpsertChatUsageLimitGroupOverride").Inc()
return r0, r1
}
func (m queryMetricsStore) UpsertChatUsageLimitUserOverride(ctx context.Context, arg database.UpsertChatUsageLimitUserOverrideParams) (database.UpsertChatUsageLimitUserOverrideRow, error) {
start := time.Now()
r0, r1 := m.s.UpsertChatUsageLimitUserOverride(ctx, arg)
m.queryLatencies.WithLabelValues("UpsertChatUsageLimitUserOverride").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpsertChatUsageLimitUserOverride").Inc()
return r0, r1
}
func (m queryMetricsStore) UpsertConnectionLog(ctx context.Context, arg database.UpsertConnectionLogParams) (database.ConnectionLog, error) {
start := time.Now()
r0, r1 := m.s.UpsertConnectionLog(ctx, arg)
@@ -4494,6 +4808,14 @@ func (m queryMetricsStore) UpsertLogoURL(ctx context.Context, value string) erro
return r0
}
func (m queryMetricsStore) UpsertMCPServerUserToken(ctx context.Context, arg database.UpsertMCPServerUserTokenParams) (database.MCPServerUserToken, error) {
start := time.Now()
r0, r1 := m.s.UpsertMCPServerUserToken(ctx, arg)
m.queryLatencies.WithLabelValues("UpsertMCPServerUserToken").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpsertMCPServerUserToken").Inc()
return r0, r1
}
func (m queryMetricsStore) UpsertNotificationReportGeneratorLog(ctx context.Context, arg database.UpsertNotificationReportGeneratorLogParams) error {
start := time.Now()
r0 := m.s.UpsertNotificationReportGeneratorLog(ctx, arg)
@@ -4630,6 +4952,14 @@ func (m queryMetricsStore) UpsertWorkspaceAppAuditSession(ctx context.Context, a
return r0, r1
}
func (m queryMetricsStore) UsageEventExistsByID(ctx context.Context, id string) (bool, error) {
start := time.Now()
r0, r1 := m.s.UsageEventExistsByID(ctx, id)
m.queryLatencies.WithLabelValues("UsageEventExistsByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UsageEventExistsByID").Inc()
return r0, r1
}
func (m queryMetricsStore) ValidateGroupIDs(ctx context.Context, groupIds []uuid.UUID) (database.ValidateGroupIDsRow, error) {
start := time.Now()
r0, r1 := m.s.ValidateGroupIDs(ctx, groupIds)
@@ -4749,3 +5079,11 @@ func (m queryMetricsStore) ListAuthorizedAIBridgeModels(ctx context.Context, arg
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListAuthorizedAIBridgeModels").Inc()
return r0, r1
}
func (m queryMetricsStore) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, prepared rbac.PreparedAuthorized) ([]database.Chat, error) {
start := time.Now()
r0, r1 := m.s.GetAuthorizedChats(ctx, arg, prepared)
m.queryLatencies.WithLabelValues("GetAuthorizedChats").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetAuthorizedChats").Inc()
return r0, r1
}
+654 -31
@@ -334,6 +334,20 @@ func (mr *MockStoreMockRecorder) CleanTailnetTunnels(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CleanTailnetTunnels", reflect.TypeOf((*MockStore)(nil).CleanTailnetTunnels), ctx)
}
// CleanupDeletedMCPServerIDsFromChats mocks base method.
func (m *MockStore) CleanupDeletedMCPServerIDsFromChats(ctx context.Context) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CleanupDeletedMCPServerIDsFromChats", ctx)
ret0, _ := ret[0].(error)
return ret0
}
// CleanupDeletedMCPServerIDsFromChats indicates an expected call of CleanupDeletedMCPServerIDsFromChats.
func (mr *MockStoreMockRecorder) CleanupDeletedMCPServerIDsFromChats(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CleanupDeletedMCPServerIDsFromChats", reflect.TypeOf((*MockStore)(nil).CleanupDeletedMCPServerIDsFromChats), ctx)
}
// CountAIBridgeInterceptions mocks base method.
func (m *MockStore) CountAIBridgeInterceptions(ctx context.Context, arg database.CountAIBridgeInterceptionsParams) (int64, error) {
m.ctrl.T.Helper()
@@ -424,6 +438,21 @@ func (mr *MockStoreMockRecorder) CountConnectionLogs(ctx, arg any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountConnectionLogs", reflect.TypeOf((*MockStore)(nil).CountConnectionLogs), ctx, arg)
}
// CountEnabledModelsWithoutPricing mocks base method.
func (m *MockStore) CountEnabledModelsWithoutPricing(ctx context.Context) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CountEnabledModelsWithoutPricing", ctx)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CountEnabledModelsWithoutPricing indicates an expected call of CountEnabledModelsWithoutPricing.
func (mr *MockStoreMockRecorder) CountEnabledModelsWithoutPricing(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountEnabledModelsWithoutPricing", reflect.TypeOf((*MockStore)(nil).CountEnabledModelsWithoutPricing), ctx)
}
// CountInProgressPrebuilds mocks base method.
func (m *MockStore) CountInProgressPrebuilds(ctx context.Context) ([]database.CountInProgressPrebuildsRow, error) {
m.ctrl.T.Helper()
@@ -583,20 +612,6 @@ func (mr *MockStoreMockRecorder) DeleteApplicationConnectAPIKeysByUserID(ctx, us
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteApplicationConnectAPIKeysByUserID", reflect.TypeOf((*MockStore)(nil).DeleteApplicationConnectAPIKeysByUserID), ctx, userID)
}
// DeleteChatMessagesAfterID mocks base method.
func (m *MockStore) DeleteChatMessagesAfterID(ctx context.Context, arg database.DeleteChatMessagesAfterIDParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteChatMessagesAfterID", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteChatMessagesAfterID indicates an expected call of DeleteChatMessagesAfterID.
func (mr *MockStoreMockRecorder) DeleteChatMessagesAfterID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteChatMessagesAfterID", reflect.TypeOf((*MockStore)(nil).DeleteChatMessagesAfterID), ctx, arg)
}
// DeleteChatModelConfigByID mocks base method.
func (m *MockStore) DeleteChatModelConfigByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
@@ -639,6 +654,34 @@ func (mr *MockStoreMockRecorder) DeleteChatQueuedMessage(ctx, arg any) *gomock.C
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteChatQueuedMessage", reflect.TypeOf((*MockStore)(nil).DeleteChatQueuedMessage), ctx, arg)
}
// DeleteChatUsageLimitGroupOverride mocks base method.
func (m *MockStore) DeleteChatUsageLimitGroupOverride(ctx context.Context, groupID uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteChatUsageLimitGroupOverride", ctx, groupID)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteChatUsageLimitGroupOverride indicates an expected call of DeleteChatUsageLimitGroupOverride.
func (mr *MockStoreMockRecorder) DeleteChatUsageLimitGroupOverride(ctx, groupID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteChatUsageLimitGroupOverride", reflect.TypeOf((*MockStore)(nil).DeleteChatUsageLimitGroupOverride), ctx, groupID)
}
// DeleteChatUsageLimitUserOverride mocks base method.
func (m *MockStore) DeleteChatUsageLimitUserOverride(ctx context.Context, userID uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteChatUsageLimitUserOverride", ctx, userID)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteChatUsageLimitUserOverride indicates an expected call of DeleteChatUsageLimitUserOverride.
func (mr *MockStoreMockRecorder) DeleteChatUsageLimitUserOverride(ctx, userID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteChatUsageLimitUserOverride", reflect.TypeOf((*MockStore)(nil).DeleteChatUsageLimitUserOverride), ctx, userID)
}
// DeleteCryptoKey mocks base method.
func (m *MockStore) DeleteCryptoKey(ctx context.Context, arg database.DeleteCryptoKeyParams) (database.CryptoKey, error) {
m.ctrl.T.Helper()
@@ -740,6 +783,34 @@ func (mr *MockStoreMockRecorder) DeleteLicense(ctx, id any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteLicense", reflect.TypeOf((*MockStore)(nil).DeleteLicense), ctx, id)
}
// DeleteMCPServerConfigByID mocks base method.
func (m *MockStore) DeleteMCPServerConfigByID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteMCPServerConfigByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteMCPServerConfigByID indicates an expected call of DeleteMCPServerConfigByID.
func (mr *MockStoreMockRecorder) DeleteMCPServerConfigByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteMCPServerConfigByID", reflect.TypeOf((*MockStore)(nil).DeleteMCPServerConfigByID), ctx, id)
}
// DeleteMCPServerUserToken mocks base method.
func (m *MockStore) DeleteMCPServerUserToken(ctx context.Context, arg database.DeleteMCPServerUserTokenParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteMCPServerUserToken", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteMCPServerUserToken indicates an expected call of DeleteMCPServerUserToken.
func (mr *MockStoreMockRecorder) DeleteMCPServerUserToken(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteMCPServerUserToken", reflect.TypeOf((*MockStore)(nil).DeleteMCPServerUserToken), ctx, arg)
}
// DeleteOAuth2ProviderAppByClientID mocks base method.
func (m *MockStore) DeleteOAuth2ProviderAppByClientID(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
@@ -1112,17 +1183,17 @@ func (mr *MockStoreMockRecorder) DeleteWorkspaceACLByID(ctx, id any) *gomock.Cal
}
// DeleteWorkspaceACLsByOrganization mocks base method.
-func (m *MockStore) DeleteWorkspaceACLsByOrganization(ctx context.Context, organizationID uuid.UUID) error {
+func (m *MockStore) DeleteWorkspaceACLsByOrganization(ctx context.Context, arg database.DeleteWorkspaceACLsByOrganizationParams) error {
m.ctrl.T.Helper()
-ret := m.ctrl.Call(m, "DeleteWorkspaceACLsByOrganization", ctx, organizationID)
+ret := m.ctrl.Call(m, "DeleteWorkspaceACLsByOrganization", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteWorkspaceACLsByOrganization indicates an expected call of DeleteWorkspaceACLsByOrganization.
-func (mr *MockStoreMockRecorder) DeleteWorkspaceACLsByOrganization(ctx, organizationID any) *gomock.Call {
+func (mr *MockStoreMockRecorder) DeleteWorkspaceACLsByOrganization(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
-return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteWorkspaceACLsByOrganization", reflect.TypeOf((*MockStore)(nil).DeleteWorkspaceACLsByOrganization), ctx, organizationID)
+return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteWorkspaceACLsByOrganization", reflect.TypeOf((*MockStore)(nil).DeleteWorkspaceACLsByOrganization), ctx, arg)
}
// DeleteWorkspaceAgentPortShare mocks base method.
@@ -1478,6 +1549,21 @@ func (mr *MockStoreMockRecorder) GetAPIKeysLastUsedAfter(ctx, lastUsed any) *gom
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAPIKeysLastUsedAfter", reflect.TypeOf((*MockStore)(nil).GetAPIKeysLastUsedAfter), ctx, lastUsed)
}
// GetActiveAISeatCount mocks base method.
func (m *MockStore) GetActiveAISeatCount(ctx context.Context) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetActiveAISeatCount", ctx)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetActiveAISeatCount indicates an expected call of GetActiveAISeatCount.
func (mr *MockStoreMockRecorder) GetActiveAISeatCount(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetActiveAISeatCount", reflect.TypeOf((*MockStore)(nil).GetActiveAISeatCount), ctx)
}
// GetActivePresetPrebuildSchedules mocks base method.
func (m *MockStore) GetActivePresetPrebuildSchedules(ctx context.Context) ([]database.TemplateVersionPresetPrebuildSchedule, error) {
m.ctrl.T.Helper()
@@ -1673,6 +1759,21 @@ func (mr *MockStoreMockRecorder) GetAuthorizedAuditLogsOffset(ctx, arg, prepared
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedAuditLogsOffset", reflect.TypeOf((*MockStore)(nil).GetAuthorizedAuditLogsOffset), ctx, arg, prepared)
}
// GetAuthorizedChats mocks base method.
func (m *MockStore) GetAuthorizedChats(ctx context.Context, arg database.GetChatsParams, prepared rbac.PreparedAuthorized) ([]database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetAuthorizedChats", ctx, arg, prepared)
ret0, _ := ret[0].([]database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetAuthorizedChats indicates an expected call of GetAuthorizedChats.
func (mr *MockStoreMockRecorder) GetAuthorizedChats(ctx, arg, prepared any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedChats", reflect.TypeOf((*MockStore)(nil).GetAuthorizedChats), ctx, arg, prepared)
}
// GetAuthorizedConnectionLogsOffset mocks base method.
func (m *MockStore) GetAuthorizedConnectionLogsOffset(ctx context.Context, arg database.GetConnectionLogsOffsetParams, prepared rbac.PreparedAuthorized) ([]database.GetConnectionLogsOffsetRow, error) {
m.ctrl.T.Helper()
@@ -1838,6 +1939,21 @@ func (mr *MockStoreMockRecorder) GetChatCostSummary(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatCostSummary", reflect.TypeOf((*MockStore)(nil).GetChatCostSummary), ctx, arg)
}
// GetChatDesktopEnabled mocks base method.
func (m *MockStore) GetChatDesktopEnabled(ctx context.Context) (bool, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatDesktopEnabled", ctx)
ret0, _ := ret[0].(bool)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatDesktopEnabled indicates an expected call of GetChatDesktopEnabled.
func (mr *MockStoreMockRecorder) GetChatDesktopEnabled(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatDesktopEnabled", reflect.TypeOf((*MockStore)(nil).GetChatDesktopEnabled), ctx)
}
// GetChatDiffStatusByChatID mocks base method.
func (m *MockStore) GetChatDiffStatusByChatID(ctx context.Context, chatID uuid.UUID) (database.ChatDiffStatus, error) {
m.ctrl.T.Helper()
@@ -1928,6 +2044,21 @@ func (mr *MockStoreMockRecorder) GetChatMessagesByChatID(ctx, arg any) *gomock.C
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatMessagesByChatID", reflect.TypeOf((*MockStore)(nil).GetChatMessagesByChatID), ctx, arg)
}
// GetChatMessagesByChatIDDescPaginated mocks base method.
func (m *MockStore) GetChatMessagesByChatIDDescPaginated(ctx context.Context, arg database.GetChatMessagesByChatIDDescPaginatedParams) ([]database.ChatMessage, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatMessagesByChatIDDescPaginated", ctx, arg)
ret0, _ := ret[0].([]database.ChatMessage)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatMessagesByChatIDDescPaginated indicates an expected call of GetChatMessagesByChatIDDescPaginated.
func (mr *MockStoreMockRecorder) GetChatMessagesByChatIDDescPaginated(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatMessagesByChatIDDescPaginated", reflect.TypeOf((*MockStore)(nil).GetChatMessagesByChatIDDescPaginated), ctx, arg)
}
// GetChatMessagesForPromptByChatID mocks base method.
func (m *MockStore) GetChatMessagesForPromptByChatID(ctx context.Context, chatID uuid.UUID) ([]database.ChatMessage, error) {
m.ctrl.T.Helper()
@@ -2048,19 +2179,64 @@ func (mr *MockStoreMockRecorder) GetChatSystemPrompt(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatSystemPrompt", reflect.TypeOf((*MockStore)(nil).GetChatSystemPrompt), ctx)
}
-// GetChatsByOwnerID mocks base method.
-func (m *MockStore) GetChatsByOwnerID(ctx context.Context, arg database.GetChatsByOwnerIDParams) ([]database.Chat, error) {
+// GetChatUsageLimitConfig mocks base method.
+func (m *MockStore) GetChatUsageLimitConfig(ctx context.Context) (database.ChatUsageLimitConfig, error) {
m.ctrl.T.Helper()
-ret := m.ctrl.Call(m, "GetChatsByOwnerID", ctx, arg)
+ret := m.ctrl.Call(m, "GetChatUsageLimitConfig", ctx)
ret0, _ := ret[0].(database.ChatUsageLimitConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatUsageLimitConfig indicates an expected call of GetChatUsageLimitConfig.
func (mr *MockStoreMockRecorder) GetChatUsageLimitConfig(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatUsageLimitConfig", reflect.TypeOf((*MockStore)(nil).GetChatUsageLimitConfig), ctx)
}
// GetChatUsageLimitGroupOverride mocks base method.
func (m *MockStore) GetChatUsageLimitGroupOverride(ctx context.Context, groupID uuid.UUID) (database.GetChatUsageLimitGroupOverrideRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatUsageLimitGroupOverride", ctx, groupID)
ret0, _ := ret[0].(database.GetChatUsageLimitGroupOverrideRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatUsageLimitGroupOverride indicates an expected call of GetChatUsageLimitGroupOverride.
func (mr *MockStoreMockRecorder) GetChatUsageLimitGroupOverride(ctx, groupID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatUsageLimitGroupOverride", reflect.TypeOf((*MockStore)(nil).GetChatUsageLimitGroupOverride), ctx, groupID)
}
// GetChatUsageLimitUserOverride mocks base method.
func (m *MockStore) GetChatUsageLimitUserOverride(ctx context.Context, userID uuid.UUID) (database.GetChatUsageLimitUserOverrideRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatUsageLimitUserOverride", ctx, userID)
ret0, _ := ret[0].(database.GetChatUsageLimitUserOverrideRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatUsageLimitUserOverride indicates an expected call of GetChatUsageLimitUserOverride.
func (mr *MockStoreMockRecorder) GetChatUsageLimitUserOverride(ctx, userID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatUsageLimitUserOverride", reflect.TypeOf((*MockStore)(nil).GetChatUsageLimitUserOverride), ctx, userID)
}
// GetChats mocks base method.
func (m *MockStore) GetChats(ctx context.Context, arg database.GetChatsParams) ([]database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChats", ctx, arg)
ret0, _ := ret[0].([]database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
-// GetChatsByOwnerID indicates an expected call of GetChatsByOwnerID.
-func (mr *MockStoreMockRecorder) GetChatsByOwnerID(ctx, arg any) *gomock.Call {
+// GetChats indicates an expected call of GetChats.
+func (mr *MockStoreMockRecorder) GetChats(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
-return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatsByOwnerID", reflect.TypeOf((*MockStore)(nil).GetChatsByOwnerID), ctx, arg)
+return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChats", reflect.TypeOf((*MockStore)(nil).GetChats), ctx, arg)
}
// GetConnectionLogsOffset mocks base method.
@@ -2303,6 +2479,21 @@ func (mr *MockStoreMockRecorder) GetEnabledChatProviders(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetEnabledChatProviders", reflect.TypeOf((*MockStore)(nil).GetEnabledChatProviders), ctx)
}
// GetEnabledMCPServerConfigs mocks base method.
func (m *MockStore) GetEnabledMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetEnabledMCPServerConfigs", ctx)
ret0, _ := ret[0].([]database.MCPServerConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetEnabledMCPServerConfigs indicates an expected call of GetEnabledMCPServerConfigs.
func (mr *MockStoreMockRecorder) GetEnabledMCPServerConfigs(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetEnabledMCPServerConfigs", reflect.TypeOf((*MockStore)(nil).GetEnabledMCPServerConfigs), ctx)
}
// GetExternalAuthLink mocks base method.
func (m *MockStore) GetExternalAuthLink(ctx context.Context, arg database.GetExternalAuthLinkParams) (database.ExternalAuthLink, error) {
m.ctrl.T.Helper()
@@ -2408,6 +2599,21 @@ func (mr *MockStoreMockRecorder) GetFilteredInboxNotificationsByUserID(ctx, arg
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetFilteredInboxNotificationsByUserID", reflect.TypeOf((*MockStore)(nil).GetFilteredInboxNotificationsByUserID), ctx, arg)
}
// GetForcedMCPServerConfigs mocks base method.
func (m *MockStore) GetForcedMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetForcedMCPServerConfigs", ctx)
ret0, _ := ret[0].([]database.MCPServerConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetForcedMCPServerConfigs indicates an expected call of GetForcedMCPServerConfigs.
func (mr *MockStoreMockRecorder) GetForcedMCPServerConfigs(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetForcedMCPServerConfigs", reflect.TypeOf((*MockStore)(nil).GetForcedMCPServerConfigs), ctx)
}
// GetGitSSHKey mocks base method.
func (m *MockStore) GetGitSSHKey(ctx context.Context, userID uuid.UUID) (database.GitSSHKey, error) {
m.ctrl.T.Helper()
@@ -2708,6 +2914,96 @@ func (mr *MockStoreMockRecorder) GetLogoURL(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetLogoURL", reflect.TypeOf((*MockStore)(nil).GetLogoURL), ctx)
}
// GetMCPServerConfigByID mocks base method.
func (m *MockStore) GetMCPServerConfigByID(ctx context.Context, id uuid.UUID) (database.MCPServerConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetMCPServerConfigByID", ctx, id)
ret0, _ := ret[0].(database.MCPServerConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetMCPServerConfigByID indicates an expected call of GetMCPServerConfigByID.
func (mr *MockStoreMockRecorder) GetMCPServerConfigByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetMCPServerConfigByID", reflect.TypeOf((*MockStore)(nil).GetMCPServerConfigByID), ctx, id)
}
// GetMCPServerConfigBySlug mocks base method.
func (m *MockStore) GetMCPServerConfigBySlug(ctx context.Context, slug string) (database.MCPServerConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetMCPServerConfigBySlug", ctx, slug)
ret0, _ := ret[0].(database.MCPServerConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetMCPServerConfigBySlug indicates an expected call of GetMCPServerConfigBySlug.
func (mr *MockStoreMockRecorder) GetMCPServerConfigBySlug(ctx, slug any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetMCPServerConfigBySlug", reflect.TypeOf((*MockStore)(nil).GetMCPServerConfigBySlug), ctx, slug)
}
// GetMCPServerConfigs mocks base method.
func (m *MockStore) GetMCPServerConfigs(ctx context.Context) ([]database.MCPServerConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetMCPServerConfigs", ctx)
ret0, _ := ret[0].([]database.MCPServerConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetMCPServerConfigs indicates an expected call of GetMCPServerConfigs.
func (mr *MockStoreMockRecorder) GetMCPServerConfigs(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetMCPServerConfigs", reflect.TypeOf((*MockStore)(nil).GetMCPServerConfigs), ctx)
}
// GetMCPServerConfigsByIDs mocks base method.
func (m *MockStore) GetMCPServerConfigsByIDs(ctx context.Context, ids []uuid.UUID) ([]database.MCPServerConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetMCPServerConfigsByIDs", ctx, ids)
ret0, _ := ret[0].([]database.MCPServerConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetMCPServerConfigsByIDs indicates an expected call of GetMCPServerConfigsByIDs.
func (mr *MockStoreMockRecorder) GetMCPServerConfigsByIDs(ctx, ids any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetMCPServerConfigsByIDs", reflect.TypeOf((*MockStore)(nil).GetMCPServerConfigsByIDs), ctx, ids)
}
// GetMCPServerUserToken mocks base method.
func (m *MockStore) GetMCPServerUserToken(ctx context.Context, arg database.GetMCPServerUserTokenParams) (database.MCPServerUserToken, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetMCPServerUserToken", ctx, arg)
ret0, _ := ret[0].(database.MCPServerUserToken)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetMCPServerUserToken indicates an expected call of GetMCPServerUserToken.
func (mr *MockStoreMockRecorder) GetMCPServerUserToken(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetMCPServerUserToken", reflect.TypeOf((*MockStore)(nil).GetMCPServerUserToken), ctx, arg)
}
// GetMCPServerUserTokensByUserID mocks base method.
func (m *MockStore) GetMCPServerUserTokensByUserID(ctx context.Context, userID uuid.UUID) ([]database.MCPServerUserToken, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetMCPServerUserTokensByUserID", ctx, userID)
ret0, _ := ret[0].([]database.MCPServerUserToken)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetMCPServerUserTokensByUserID indicates an expected call of GetMCPServerUserTokensByUserID.
func (mr *MockStoreMockRecorder) GetMCPServerUserTokensByUserID(ctx, userID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetMCPServerUserTokensByUserID", reflect.TypeOf((*MockStore)(nil).GetMCPServerUserTokensByUserID), ctx, userID)
}
// GetNotificationMessagesByStatus mocks base method.
func (m *MockStore) GetNotificationMessagesByStatus(ctx context.Context, arg database.GetNotificationMessagesByStatusParams) ([]database.NotificationMessage, error) {
m.ctrl.T.Helper()
@@ -3068,6 +3364,66 @@ func (mr *MockStoreMockRecorder) GetOrganizationsWithPrebuildStatus(ctx, arg any
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetOrganizationsWithPrebuildStatus", reflect.TypeOf((*MockStore)(nil).GetOrganizationsWithPrebuildStatus), ctx, arg)
}
// GetPRInsightsPerModel mocks base method.
func (m *MockStore) GetPRInsightsPerModel(ctx context.Context, arg database.GetPRInsightsPerModelParams) ([]database.GetPRInsightsPerModelRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetPRInsightsPerModel", ctx, arg)
ret0, _ := ret[0].([]database.GetPRInsightsPerModelRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetPRInsightsPerModel indicates an expected call of GetPRInsightsPerModel.
func (mr *MockStoreMockRecorder) GetPRInsightsPerModel(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPRInsightsPerModel", reflect.TypeOf((*MockStore)(nil).GetPRInsightsPerModel), ctx, arg)
}
// GetPRInsightsRecentPRs mocks base method.
func (m *MockStore) GetPRInsightsRecentPRs(ctx context.Context, arg database.GetPRInsightsRecentPRsParams) ([]database.GetPRInsightsRecentPRsRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetPRInsightsRecentPRs", ctx, arg)
ret0, _ := ret[0].([]database.GetPRInsightsRecentPRsRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetPRInsightsRecentPRs indicates an expected call of GetPRInsightsRecentPRs.
func (mr *MockStoreMockRecorder) GetPRInsightsRecentPRs(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPRInsightsRecentPRs", reflect.TypeOf((*MockStore)(nil).GetPRInsightsRecentPRs), ctx, arg)
}
// GetPRInsightsSummary mocks base method.
func (m *MockStore) GetPRInsightsSummary(ctx context.Context, arg database.GetPRInsightsSummaryParams) (database.GetPRInsightsSummaryRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetPRInsightsSummary", ctx, arg)
ret0, _ := ret[0].(database.GetPRInsightsSummaryRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetPRInsightsSummary indicates an expected call of GetPRInsightsSummary.
func (mr *MockStoreMockRecorder) GetPRInsightsSummary(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPRInsightsSummary", reflect.TypeOf((*MockStore)(nil).GetPRInsightsSummary), ctx, arg)
}
// GetPRInsightsTimeSeries mocks base method.
func (m *MockStore) GetPRInsightsTimeSeries(ctx context.Context, arg database.GetPRInsightsTimeSeriesParams) ([]database.GetPRInsightsTimeSeriesRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetPRInsightsTimeSeries", ctx, arg)
ret0, _ := ret[0].([]database.GetPRInsightsTimeSeriesRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetPRInsightsTimeSeries indicates an expected call of GetPRInsightsTimeSeries.
func (mr *MockStoreMockRecorder) GetPRInsightsTimeSeries(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPRInsightsTimeSeries", reflect.TypeOf((*MockStore)(nil).GetPRInsightsTimeSeries), ctx, arg)
}
// GetParameterSchemasByJobID mocks base method.
func (m *MockStore) GetParameterSchemasByJobID(ctx context.Context, jobID uuid.UUID) ([]database.ParameterSchema, error) {
m.ctrl.T.Helper()
@@ -4193,6 +4549,21 @@ func (mr *MockStoreMockRecorder) GetUserChatCustomPrompt(ctx, userID any) *gomoc
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUserChatCustomPrompt", reflect.TypeOf((*MockStore)(nil).GetUserChatCustomPrompt), ctx, userID)
}
// GetUserChatSpendInPeriod mocks base method.
func (m *MockStore) GetUserChatSpendInPeriod(ctx context.Context, arg database.GetUserChatSpendInPeriodParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetUserChatSpendInPeriod", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetUserChatSpendInPeriod indicates an expected call of GetUserChatSpendInPeriod.
func (mr *MockStoreMockRecorder) GetUserChatSpendInPeriod(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUserChatSpendInPeriod", reflect.TypeOf((*MockStore)(nil).GetUserChatSpendInPeriod), ctx, arg)
}
// GetUserCount mocks base method.
func (m *MockStore) GetUserCount(ctx context.Context, includeSystem bool) (int64, error) {
m.ctrl.T.Helper()
@@ -4208,6 +4579,21 @@ func (mr *MockStoreMockRecorder) GetUserCount(ctx, includeSystem any) *gomock.Ca
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUserCount", reflect.TypeOf((*MockStore)(nil).GetUserCount), ctx, includeSystem)
}
// GetUserGroupSpendLimit mocks base method.
func (m *MockStore) GetUserGroupSpendLimit(ctx context.Context, userID uuid.UUID) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetUserGroupSpendLimit", ctx, userID)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetUserGroupSpendLimit indicates an expected call of GetUserGroupSpendLimit.
func (mr *MockStoreMockRecorder) GetUserGroupSpendLimit(ctx, userID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUserGroupSpendLimit", reflect.TypeOf((*MockStore)(nil).GetUserGroupSpendLimit), ctx, userID)
}
// GetUserLatencyInsights mocks base method.
func (m *MockStore) GetUserLatencyInsights(ctx context.Context, arg database.GetUserLatencyInsightsParams) ([]database.GetUserLatencyInsightsRow, error) {
m.ctrl.T.Helper()
@@ -5362,6 +5748,21 @@ func (mr *MockStoreMockRecorder) InsertAIBridgeInterception(ctx, arg any) *gomoc
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertAIBridgeInterception", reflect.TypeOf((*MockStore)(nil).InsertAIBridgeInterception), ctx, arg)
}
// InsertAIBridgeModelThought mocks base method.
func (m *MockStore) InsertAIBridgeModelThought(ctx context.Context, arg database.InsertAIBridgeModelThoughtParams) (database.AIBridgeModelThought, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertAIBridgeModelThought", ctx, arg)
ret0, _ := ret[0].(database.AIBridgeModelThought)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// InsertAIBridgeModelThought indicates an expected call of InsertAIBridgeModelThought.
func (mr *MockStoreMockRecorder) InsertAIBridgeModelThought(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertAIBridgeModelThought", reflect.TypeOf((*MockStore)(nil).InsertAIBridgeModelThought), ctx, arg)
}
// InsertAIBridgeTokenUsage mocks base method.
func (m *MockStore) InsertAIBridgeTokenUsage(ctx context.Context, arg database.InsertAIBridgeTokenUsageParams) (database.AIBridgeTokenUsage, error) {
m.ctrl.T.Helper()
@@ -5482,19 +5883,19 @@ func (mr *MockStoreMockRecorder) InsertChatFile(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatFile", reflect.TypeOf((*MockStore)(nil).InsertChatFile), ctx, arg)
}
-// InsertChatMessage mocks base method.
-func (m *MockStore) InsertChatMessage(ctx context.Context, arg database.InsertChatMessageParams) (database.ChatMessage, error) {
+// InsertChatMessages mocks base method.
+func (m *MockStore) InsertChatMessages(ctx context.Context, arg database.InsertChatMessagesParams) ([]database.ChatMessage, error) {
m.ctrl.T.Helper()
-ret := m.ctrl.Call(m, "InsertChatMessage", ctx, arg)
-ret0, _ := ret[0].(database.ChatMessage)
+ret := m.ctrl.Call(m, "InsertChatMessages", ctx, arg)
+ret0, _ := ret[0].([]database.ChatMessage)
ret1, _ := ret[1].(error)
return ret0, ret1
}
-// InsertChatMessage indicates an expected call of InsertChatMessage.
-func (mr *MockStoreMockRecorder) InsertChatMessage(ctx, arg any) *gomock.Call {
+// InsertChatMessages indicates an expected call of InsertChatMessages.
+func (mr *MockStoreMockRecorder) InsertChatMessages(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
-return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatMessage", reflect.TypeOf((*MockStore)(nil).InsertChatMessage), ctx, arg)
+return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertChatMessages", reflect.TypeOf((*MockStore)(nil).InsertChatMessages), ctx, arg)
}
// InsertChatModelConfig mocks base method.
@@ -5718,6 +6119,21 @@ func (mr *MockStoreMockRecorder) InsertLicense(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertLicense", reflect.TypeOf((*MockStore)(nil).InsertLicense), ctx, arg)
}
// InsertMCPServerConfig mocks base method.
func (m *MockStore) InsertMCPServerConfig(ctx context.Context, arg database.InsertMCPServerConfigParams) (database.MCPServerConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertMCPServerConfig", ctx, arg)
ret0, _ := ret[0].(database.MCPServerConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// InsertMCPServerConfig indicates an expected call of InsertMCPServerConfig.
func (mr *MockStoreMockRecorder) InsertMCPServerConfig(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertMCPServerConfig", reflect.TypeOf((*MockStore)(nil).InsertMCPServerConfig), ctx, arg)
}
// InsertMemoryResourceMonitor mocks base method.
func (m *MockStore) InsertMemoryResourceMonitor(ctx context.Context, arg database.InsertMemoryResourceMonitorParams) (database.WorkspaceAgentMemoryResourceMonitor, error) {
m.ctrl.T.Helper()
@@ -6547,6 +6963,36 @@ func (mr *MockStoreMockRecorder) ListAuthorizedAIBridgeModels(ctx, arg, prepared
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListAuthorizedAIBridgeModels", reflect.TypeOf((*MockStore)(nil).ListAuthorizedAIBridgeModels), ctx, arg, prepared)
}
// ListChatUsageLimitGroupOverrides mocks base method.
func (m *MockStore) ListChatUsageLimitGroupOverrides(ctx context.Context) ([]database.ListChatUsageLimitGroupOverridesRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListChatUsageLimitGroupOverrides", ctx)
ret0, _ := ret[0].([]database.ListChatUsageLimitGroupOverridesRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListChatUsageLimitGroupOverrides indicates an expected call of ListChatUsageLimitGroupOverrides.
func (mr *MockStoreMockRecorder) ListChatUsageLimitGroupOverrides(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListChatUsageLimitGroupOverrides", reflect.TypeOf((*MockStore)(nil).ListChatUsageLimitGroupOverrides), ctx)
}
// ListChatUsageLimitOverrides mocks base method.
func (m *MockStore) ListChatUsageLimitOverrides(ctx context.Context) ([]database.ListChatUsageLimitOverridesRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListChatUsageLimitOverrides", ctx)
ret0, _ := ret[0].([]database.ListChatUsageLimitOverridesRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListChatUsageLimitOverrides indicates an expected call of ListChatUsageLimitOverrides.
func (mr *MockStoreMockRecorder) ListChatUsageLimitOverrides(ctx any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListChatUsageLimitOverrides", reflect.TypeOf((*MockStore)(nil).ListChatUsageLimitOverrides), ctx)
}
// ListProvisionerKeysByOrganization mocks base method.
func (m *MockStore) ListProvisionerKeysByOrganization(ctx context.Context, organizationID uuid.UUID) ([]database.ProvisionerKey, error) {
m.ctrl.T.Helper()
@@ -6785,6 +7231,21 @@ func (mr *MockStoreMockRecorder) RemoveUserFromGroups(ctx, arg any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RemoveUserFromGroups", reflect.TypeOf((*MockStore)(nil).RemoveUserFromGroups), ctx, arg)
}
// ResolveUserChatSpendLimit mocks base method.
func (m *MockStore) ResolveUserChatSpendLimit(ctx context.Context, userID uuid.UUID) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ResolveUserChatSpendLimit", ctx, userID)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ResolveUserChatSpendLimit indicates an expected call of ResolveUserChatSpendLimit.
func (mr *MockStoreMockRecorder) ResolveUserChatSpendLimit(ctx, userID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ResolveUserChatSpendLimit", reflect.TypeOf((*MockStore)(nil).ResolveUserChatSpendLimit), ctx, userID)
}
// RevokeDBCryptKey mocks base method.
func (m *MockStore) RevokeDBCryptKey(ctx context.Context, activeKeyDigest string) error {
m.ctrl.T.Helper()
@@ -6814,6 +7275,34 @@ func (mr *MockStoreMockRecorder) SelectUsageEventsForPublishing(ctx, now any) *g
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SelectUsageEventsForPublishing", reflect.TypeOf((*MockStore)(nil).SelectUsageEventsForPublishing), ctx, now)
}
// SoftDeleteChatMessageByID mocks base method.
func (m *MockStore) SoftDeleteChatMessageByID(ctx context.Context, id int64) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SoftDeleteChatMessageByID", ctx, id)
ret0, _ := ret[0].(error)
return ret0
}
// SoftDeleteChatMessageByID indicates an expected call of SoftDeleteChatMessageByID.
func (mr *MockStoreMockRecorder) SoftDeleteChatMessageByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SoftDeleteChatMessageByID", reflect.TypeOf((*MockStore)(nil).SoftDeleteChatMessageByID), ctx, id)
}
// SoftDeleteChatMessagesAfterID mocks base method.
func (m *MockStore) SoftDeleteChatMessagesAfterID(ctx context.Context, arg database.SoftDeleteChatMessagesAfterIDParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SoftDeleteChatMessagesAfterID", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// SoftDeleteChatMessagesAfterID indicates an expected call of SoftDeleteChatMessagesAfterID.
func (mr *MockStoreMockRecorder) SoftDeleteChatMessagesAfterID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SoftDeleteChatMessagesAfterID", reflect.TypeOf((*MockStore)(nil).SoftDeleteChatMessagesAfterID), ctx, arg)
}
// TryAcquireLock mocks base method.
func (m *MockStore) TryAcquireLock(ctx context.Context, pgTryAdvisoryXactLock int64) (bool, error) {
m.ctrl.T.Helper()
@@ -6944,6 +7433,21 @@ func (mr *MockStoreMockRecorder) UpdateChatHeartbeat(ctx, arg any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatHeartbeat", reflect.TypeOf((*MockStore)(nil).UpdateChatHeartbeat), ctx, arg)
}
// UpdateChatMCPServerIDs mocks base method.
func (m *MockStore) UpdateChatMCPServerIDs(ctx context.Context, arg database.UpdateChatMCPServerIDsParams) (database.Chat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateChatMCPServerIDs", ctx, arg)
ret0, _ := ret[0].(database.Chat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateChatMCPServerIDs indicates an expected call of UpdateChatMCPServerIDs.
func (mr *MockStoreMockRecorder) UpdateChatMCPServerIDs(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatMCPServerIDs", reflect.TypeOf((*MockStore)(nil).UpdateChatMCPServerIDs), ctx, arg)
}
// UpdateChatMessageByID mocks base method.
func (m *MockStore) UpdateChatMessageByID(ctx context.Context, arg database.UpdateChatMessageByIDParams) (database.ChatMessage, error) {
m.ctrl.T.Helper()
@@ -7137,6 +7641,21 @@ func (mr *MockStoreMockRecorder) UpdateInboxNotificationReadStatus(ctx, arg any)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateInboxNotificationReadStatus", reflect.TypeOf((*MockStore)(nil).UpdateInboxNotificationReadStatus), ctx, arg)
}
// UpdateMCPServerConfig mocks base method.
func (m *MockStore) UpdateMCPServerConfig(ctx context.Context, arg database.UpdateMCPServerConfigParams) (database.MCPServerConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateMCPServerConfig", ctx, arg)
ret0, _ := ret[0].(database.MCPServerConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpdateMCPServerConfig indicates an expected call of UpdateMCPServerConfig.
func (mr *MockStoreMockRecorder) UpdateMCPServerConfig(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateMCPServerConfig", reflect.TypeOf((*MockStore)(nil).UpdateMCPServerConfig), ctx, arg)
}
// UpdateMemberRoles mocks base method.
func (m *MockStore) UpdateMemberRoles(ctx context.Context, arg database.UpdateMemberRolesParams) (database.OrganizationMember, error) {
m.ctrl.T.Helper()
@@ -8229,6 +8748,21 @@ func (mr *MockStoreMockRecorder) UpdateWorkspacesTTLByTemplateID(ctx, arg any) *
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateWorkspacesTTLByTemplateID", reflect.TypeOf((*MockStore)(nil).UpdateWorkspacesTTLByTemplateID), ctx, arg)
}
// UpsertAISeatState mocks base method.
func (m *MockStore) UpsertAISeatState(ctx context.Context, arg database.UpsertAISeatStateParams) (bool, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpsertAISeatState", ctx, arg)
ret0, _ := ret[0].(bool)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpsertAISeatState indicates an expected call of UpsertAISeatState.
func (mr *MockStoreMockRecorder) UpsertAISeatState(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertAISeatState", reflect.TypeOf((*MockStore)(nil).UpsertAISeatState), ctx, arg)
}
// UpsertAnnouncementBanners mocks base method.
func (m *MockStore) UpsertAnnouncementBanners(ctx context.Context, value string) error {
m.ctrl.T.Helper()
@@ -8272,6 +8806,20 @@ func (mr *MockStoreMockRecorder) UpsertBoundaryUsageStats(ctx, arg any) *gomock.
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertBoundaryUsageStats", reflect.TypeOf((*MockStore)(nil).UpsertBoundaryUsageStats), ctx, arg)
}
// UpsertChatDesktopEnabled mocks base method.
func (m *MockStore) UpsertChatDesktopEnabled(ctx context.Context, enableDesktop bool) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpsertChatDesktopEnabled", ctx, enableDesktop)
ret0, _ := ret[0].(error)
return ret0
}
// UpsertChatDesktopEnabled indicates an expected call of UpsertChatDesktopEnabled.
func (mr *MockStoreMockRecorder) UpsertChatDesktopEnabled(ctx, enableDesktop any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertChatDesktopEnabled", reflect.TypeOf((*MockStore)(nil).UpsertChatDesktopEnabled), ctx, enableDesktop)
}
// UpsertChatDiffStatus mocks base method.
func (m *MockStore) UpsertChatDiffStatus(ctx context.Context, arg database.UpsertChatDiffStatusParams) (database.ChatDiffStatus, error) {
m.ctrl.T.Helper()
@@ -8316,6 +8864,51 @@ func (mr *MockStoreMockRecorder) UpsertChatSystemPrompt(ctx, value any) *gomock.
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertChatSystemPrompt", reflect.TypeOf((*MockStore)(nil).UpsertChatSystemPrompt), ctx, value)
}
// UpsertChatUsageLimitConfig mocks base method.
func (m *MockStore) UpsertChatUsageLimitConfig(ctx context.Context, arg database.UpsertChatUsageLimitConfigParams) (database.ChatUsageLimitConfig, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpsertChatUsageLimitConfig", ctx, arg)
ret0, _ := ret[0].(database.ChatUsageLimitConfig)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpsertChatUsageLimitConfig indicates an expected call of UpsertChatUsageLimitConfig.
func (mr *MockStoreMockRecorder) UpsertChatUsageLimitConfig(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertChatUsageLimitConfig", reflect.TypeOf((*MockStore)(nil).UpsertChatUsageLimitConfig), ctx, arg)
}
// UpsertChatUsageLimitGroupOverride mocks base method.
func (m *MockStore) UpsertChatUsageLimitGroupOverride(ctx context.Context, arg database.UpsertChatUsageLimitGroupOverrideParams) (database.UpsertChatUsageLimitGroupOverrideRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpsertChatUsageLimitGroupOverride", ctx, arg)
ret0, _ := ret[0].(database.UpsertChatUsageLimitGroupOverrideRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpsertChatUsageLimitGroupOverride indicates an expected call of UpsertChatUsageLimitGroupOverride.
func (mr *MockStoreMockRecorder) UpsertChatUsageLimitGroupOverride(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertChatUsageLimitGroupOverride", reflect.TypeOf((*MockStore)(nil).UpsertChatUsageLimitGroupOverride), ctx, arg)
}
// UpsertChatUsageLimitUserOverride mocks base method.
func (m *MockStore) UpsertChatUsageLimitUserOverride(ctx context.Context, arg database.UpsertChatUsageLimitUserOverrideParams) (database.UpsertChatUsageLimitUserOverrideRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpsertChatUsageLimitUserOverride", ctx, arg)
ret0, _ := ret[0].(database.UpsertChatUsageLimitUserOverrideRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpsertChatUsageLimitUserOverride indicates an expected call of UpsertChatUsageLimitUserOverride.
func (mr *MockStoreMockRecorder) UpsertChatUsageLimitUserOverride(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertChatUsageLimitUserOverride", reflect.TypeOf((*MockStore)(nil).UpsertChatUsageLimitUserOverride), ctx, arg)
}
// UpsertConnectionLog mocks base method.
func (m *MockStore) UpsertConnectionLog(ctx context.Context, arg database.UpsertConnectionLogParams) (database.ConnectionLog, error) {
m.ctrl.T.Helper()
@@ -8387,6 +8980,21 @@ func (mr *MockStoreMockRecorder) UpsertLogoURL(ctx, value any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertLogoURL", reflect.TypeOf((*MockStore)(nil).UpsertLogoURL), ctx, value)
}
// UpsertMCPServerUserToken mocks base method.
func (m *MockStore) UpsertMCPServerUserToken(ctx context.Context, arg database.UpsertMCPServerUserTokenParams) (database.MCPServerUserToken, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpsertMCPServerUserToken", ctx, arg)
ret0, _ := ret[0].(database.MCPServerUserToken)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UpsertMCPServerUserToken indicates an expected call of UpsertMCPServerUserToken.
func (mr *MockStoreMockRecorder) UpsertMCPServerUserToken(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertMCPServerUserToken", reflect.TypeOf((*MockStore)(nil).UpsertMCPServerUserToken), ctx, arg)
}
// UpsertNotificationReportGeneratorLog mocks base method.
func (m *MockStore) UpsertNotificationReportGeneratorLog(ctx context.Context, arg database.UpsertNotificationReportGeneratorLogParams) error {
m.ctrl.T.Helper()
@@ -8633,6 +9241,21 @@ func (mr *MockStoreMockRecorder) UpsertWorkspaceAppAuditSession(ctx, arg any) *g
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpsertWorkspaceAppAuditSession", reflect.TypeOf((*MockStore)(nil).UpsertWorkspaceAppAuditSession), ctx, arg)
}
// UsageEventExistsByID mocks base method.
func (m *MockStore) UsageEventExistsByID(ctx context.Context, id string) (bool, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UsageEventExistsByID", ctx, id)
ret0, _ := ret[0].(bool)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// UsageEventExistsByID indicates an expected call of UsageEventExistsByID.
func (mr *MockStoreMockRecorder) UsageEventExistsByID(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UsageEventExistsByID", reflect.TypeOf((*MockStore)(nil).UsageEventExistsByID), ctx, id)
}
// ValidateGroupIDs mocks base method.
func (m *MockStore) ValidateGroupIDs(ctx context.Context, groupIds []uuid.UUID) (database.ValidateGroupIDsRow, error) {
m.ctrl.T.Helper()
+216 -15
@@ -10,6 +10,11 @@ CREATE TYPE agent_key_scope_enum AS ENUM (
'no_user_data'
);
CREATE TYPE ai_seat_usage_reason AS ENUM (
'aibridge',
'task'
);
CREATE TYPE api_key_scope AS ENUM (
'coder:all',
'coder:application_connect',
@@ -503,7 +508,14 @@ CREATE TYPE resource_type AS ENUM (
'workspace_agent',
'workspace_app',
'prebuilds_settings',
-'task'
+'task',
+'ai_seat'
);
CREATE TYPE shareable_workspace_owners AS ENUM (
'none',
'everyone',
'service_accounts'
);
CREATE TYPE startup_script_behavior AS ENUM (
@@ -608,28 +620,35 @@ CREATE FUNCTION aggregate_usage_event() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
--- Check for supported event types and throw error for unknown types
-IF NEW.event_type NOT IN ('dc_managed_agents_v1') THEN
+-- Check for supported event types and throw error for unknown types.
+IF NEW.event_type NOT IN ('dc_managed_agents_v1', 'hb_ai_seats_v1') THEN
RAISE EXCEPTION 'Unhandled usage event type in aggregate_usage_event: %', NEW.event_type;
END IF;
INSERT INTO usage_events_daily (day, event_type, usage_data)
VALUES (
-- Extract the date from the created_at timestamp, always using UTC for
-- consistency
date_trunc('day', NEW.created_at AT TIME ZONE 'UTC')::date,
NEW.event_type,
NEW.event_data
)
ON CONFLICT (day, event_type) DO UPDATE SET
usage_data = CASE
--- Handle simple counter events by summing the count
+-- Handle simple counter events by summing the count.
WHEN NEW.event_type IN ('dc_managed_agents_v1') THEN
jsonb_build_object(
'count',
COALESCE((usage_events_daily.usage_data->>'count')::bigint, 0) +
COALESCE((NEW.event_data->>'count')::bigint, 0)
)
-- Heartbeat events: keep the max value seen that day
WHEN NEW.event_type IN ('hb_ai_seats_v1') THEN
jsonb_build_object(
'count',
GREATEST(
COALESCE((usage_events_daily.usage_data->>'count')::bigint, 0),
COALESCE((NEW.event_data->>'count')::bigint, 0)
)
)
END;
RETURN NEW;
@@ -786,7 +805,7 @@ BEGIN
END;
$$;
-CREATE FUNCTION insert_org_member_system_role() RETURNS trigger
+CREATE FUNCTION insert_organization_system_roles() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
@@ -801,7 +820,8 @@ BEGIN
is_system,
created_at,
updated_at
-) VALUES (
+) VALUES
+(
'organization-member',
'',
NEW.id,
@@ -812,6 +832,18 @@ BEGIN
true,
NOW(),
NOW()
),
(
'organization-service-account',
'',
NEW.id,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
true,
NOW(),
NOW()
);
RETURN NEW;
END;
@@ -1046,6 +1078,15 @@ BEGIN
END;
$$;
CREATE TABLE ai_seat_state (
user_id uuid NOT NULL,
first_used_at timestamp with time zone NOT NULL,
last_used_at timestamp with time zone NOT NULL,
last_event_type ai_seat_usage_reason NOT NULL,
last_event_description text NOT NULL,
updated_at timestamp with time zone NOT NULL
);
CREATE TABLE aibridge_interceptions (
id uuid NOT NULL,
initiator_id uuid NOT NULL,
@@ -1071,6 +1112,15 @@ COMMENT ON COLUMN aibridge_interceptions.thread_root_id IS 'The root interceptio
COMMENT ON COLUMN aibridge_interceptions.client_session_id IS 'The session ID supplied by the client (optional and not universally supported).';
CREATE TABLE aibridge_model_thoughts (
interception_id uuid NOT NULL,
content text NOT NULL,
metadata jsonb,
created_at timestamp with time zone NOT NULL
);
COMMENT ON TABLE aibridge_model_thoughts IS 'Audit log of model thinking in intercepted requests in AI Bridge';
CREATE TABLE aibridge_token_usages (
id uuid NOT NULL,
interception_id uuid NOT NULL,
@@ -1200,7 +1250,15 @@ CREATE TABLE chat_diff_statuses (
git_branch text DEFAULT ''::text NOT NULL,
git_remote_origin text DEFAULT ''::text NOT NULL,
pull_request_title text DEFAULT ''::text NOT NULL,
-pull_request_draft boolean DEFAULT false NOT NULL
+pull_request_draft boolean DEFAULT false NOT NULL,
author_login text,
author_avatar_url text,
base_branch text,
pr_number integer,
commits integer,
approved boolean,
reviewer_count integer,
head_branch text
);
CREATE TABLE chat_files (
@@ -1231,7 +1289,9 @@ CREATE TABLE chat_messages (
compressed boolean DEFAULT false NOT NULL,
created_by uuid,
content_version smallint NOT NULL,
-total_cost_micros bigint
+total_cost_micros bigint,
runtime_ms bigint,
deleted boolean DEFAULT false NOT NULL
);
CREATE SEQUENCE chat_messages_id_seq
@@ -1295,6 +1355,28 @@ CREATE SEQUENCE chat_queued_messages_id_seq
ALTER SEQUENCE chat_queued_messages_id_seq OWNED BY chat_queued_messages.id;
CREATE TABLE chat_usage_limit_config (
id bigint NOT NULL,
singleton boolean DEFAULT true NOT NULL,
enabled boolean DEFAULT false NOT NULL,
default_limit_micros bigint DEFAULT 0 NOT NULL,
period text DEFAULT 'month'::text NOT NULL,
created_at timestamp with time zone DEFAULT now() NOT NULL,
updated_at timestamp with time zone DEFAULT now() NOT NULL,
CONSTRAINT chat_usage_limit_config_default_limit_micros_check CHECK ((default_limit_micros >= 0)),
CONSTRAINT chat_usage_limit_config_period_check CHECK ((period = ANY (ARRAY['day'::text, 'week'::text, 'month'::text]))),
CONSTRAINT chat_usage_limit_config_singleton_check CHECK (singleton)
);
CREATE SEQUENCE chat_usage_limit_config_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER SEQUENCE chat_usage_limit_config_id_seq OWNED BY chat_usage_limit_config.id;
CREATE TABLE chats (
id uuid DEFAULT gen_random_uuid() NOT NULL,
owner_id uuid NOT NULL,
@@ -1311,7 +1393,8 @@ CREATE TABLE chats (
last_model_config_id uuid NOT NULL,
archived boolean DEFAULT false NOT NULL,
last_error text,
-mode chat_mode
+mode chat_mode,
mcp_server_ids uuid[] DEFAULT '{}'::uuid[] NOT NULL
);
CREATE TABLE connection_logs (
@@ -1451,7 +1534,9 @@ CREATE TABLE groups (
avatar_url text DEFAULT ''::text NOT NULL,
quota_allowance integer DEFAULT 0 NOT NULL,
display_name text DEFAULT ''::text NOT NULL,
-source group_source DEFAULT 'user'::group_source NOT NULL
+source group_source DEFAULT 'user'::group_source NOT NULL,
chat_spend_limit_micros bigint,
CONSTRAINT groups_chat_spend_limit_micros_check CHECK (((chat_spend_limit_micros IS NULL) OR (chat_spend_limit_micros > 0)))
);
COMMENT ON COLUMN groups.display_name IS 'Display name is a custom, human-friendly group name that user can set. This is not required to be unique and can be the empty string.';
@@ -1486,7 +1571,9 @@ CREATE TABLE users (
one_time_passcode_expires_at timestamp with time zone,
is_system boolean DEFAULT false NOT NULL,
is_service_account boolean DEFAULT false NOT NULL,
chat_spend_limit_micros bigint,
CONSTRAINT one_time_passcode_set CHECK ((((hashed_one_time_passcode IS NULL) AND (one_time_passcode_expires_at IS NULL)) OR ((hashed_one_time_passcode IS NOT NULL) AND (one_time_passcode_expires_at IS NOT NULL)))),
CONSTRAINT users_chat_spend_limit_micros_check CHECK (((chat_spend_limit_micros IS NULL) OR (chat_spend_limit_micros > 0))),
CONSTRAINT users_email_not_empty CHECK (((is_service_account = true) = (email = ''::text))),
CONSTRAINT users_service_account_login_type CHECK (((is_service_account = false) OR (login_type = 'none'::login_type))),
CONSTRAINT users_username_min_length CHECK ((length(username) >= 1))
@@ -1584,6 +1671,53 @@ CREATE SEQUENCE licenses_id_seq
ALTER SEQUENCE licenses_id_seq OWNED BY licenses.id;
CREATE TABLE mcp_server_configs (
id uuid DEFAULT gen_random_uuid() NOT NULL,
display_name text NOT NULL,
slug text NOT NULL,
description text DEFAULT ''::text NOT NULL,
icon_url text DEFAULT ''::text NOT NULL,
transport text DEFAULT 'streamable_http'::text NOT NULL,
url text NOT NULL,
auth_type text DEFAULT 'none'::text NOT NULL,
oauth2_client_id text DEFAULT ''::text NOT NULL,
oauth2_client_secret text DEFAULT ''::text NOT NULL,
oauth2_client_secret_key_id text,
oauth2_auth_url text DEFAULT ''::text NOT NULL,
oauth2_token_url text DEFAULT ''::text NOT NULL,
oauth2_scopes text DEFAULT ''::text NOT NULL,
api_key_header text DEFAULT 'Authorization'::text NOT NULL,
api_key_value text DEFAULT ''::text NOT NULL,
api_key_value_key_id text,
custom_headers text DEFAULT '{}'::text NOT NULL,
custom_headers_key_id text,
tool_allow_list text[] DEFAULT '{}'::text[] NOT NULL,
tool_deny_list text[] DEFAULT '{}'::text[] NOT NULL,
availability text DEFAULT 'default_off'::text NOT NULL,
enabled boolean DEFAULT false NOT NULL,
created_by uuid,
updated_by uuid,
created_at timestamp with time zone DEFAULT now() NOT NULL,
updated_at timestamp with time zone DEFAULT now() NOT NULL,
CONSTRAINT mcp_server_configs_auth_type_check CHECK ((auth_type = ANY (ARRAY['none'::text, 'oauth2'::text, 'api_key'::text, 'custom_headers'::text]))),
CONSTRAINT mcp_server_configs_availability_check CHECK ((availability = ANY (ARRAY['force_on'::text, 'default_on'::text, 'default_off'::text]))),
CONSTRAINT mcp_server_configs_transport_check CHECK ((transport = ANY (ARRAY['streamable_http'::text, 'sse'::text])))
);
CREATE TABLE mcp_server_user_tokens (
id uuid DEFAULT gen_random_uuid() NOT NULL,
mcp_server_config_id uuid NOT NULL,
user_id uuid NOT NULL,
access_token text NOT NULL,
access_token_key_id text,
refresh_token text DEFAULT ''::text NOT NULL,
refresh_token_key_id text,
token_type text DEFAULT 'Bearer'::text NOT NULL,
expiry timestamp with time zone,
created_at timestamp with time zone DEFAULT now() NOT NULL,
updated_at timestamp with time zone DEFAULT now() NOT NULL
);
CREATE TABLE notification_messages (
id uuid NOT NULL,
notification_template_id uuid NOT NULL,
@@ -1774,9 +1908,11 @@ CREATE TABLE organizations (
display_name text NOT NULL,
icon text DEFAULT ''::text NOT NULL,
deleted boolean DEFAULT false NOT NULL,
-workspace_sharing_disabled boolean DEFAULT false NOT NULL
+shareable_workspace_owners shareable_workspace_owners DEFAULT 'everyone'::shareable_workspace_owners NOT NULL
);
COMMENT ON COLUMN organizations.shareable_workspace_owners IS 'Controls whose workspaces can be shared: none, everyone, or service_accounts.';
CREATE TABLE parameter_schemas (
id uuid NOT NULL,
created_at timestamp with time zone NOT NULL,
@@ -2576,7 +2712,7 @@ CREATE TABLE usage_events (
publish_started_at timestamp with time zone,
published_at timestamp with time zone,
failure_message text,
-CONSTRAINT usage_event_type_check CHECK ((event_type = 'dc_managed_agents_v1'::text))
+CONSTRAINT usage_event_type_check CHECK ((event_type = ANY (ARRAY['dc_managed_agents_v1'::text, 'hb_ai_seats_v1'::text])))
);
COMMENT ON TABLE usage_events IS 'usage_events contains usage data that is collected from the product and potentially shipped to the usage collector service.';
@@ -3133,6 +3269,8 @@ ALTER TABLE ONLY chat_messages ALTER COLUMN id SET DEFAULT nextval('chat_message
ALTER TABLE ONLY chat_queued_messages ALTER COLUMN id SET DEFAULT nextval('chat_queued_messages_id_seq'::regclass);
ALTER TABLE ONLY chat_usage_limit_config ALTER COLUMN id SET DEFAULT nextval('chat_usage_limit_config_id_seq'::regclass);
ALTER TABLE ONLY licenses ALTER COLUMN id SET DEFAULT nextval('licenses_id_seq'::regclass);
ALTER TABLE ONLY provisioner_job_logs ALTER COLUMN id SET DEFAULT nextval('provisioner_job_logs_id_seq'::regclass);
@@ -3148,6 +3286,9 @@ ALTER TABLE ONLY workspace_resource_metadata ALTER COLUMN id SET DEFAULT nextval
ALTER TABLE ONLY workspace_agent_stats
ADD CONSTRAINT agent_stats_pkey PRIMARY KEY (id);
ALTER TABLE ONLY ai_seat_state
ADD CONSTRAINT ai_seat_state_pkey PRIMARY KEY (user_id);
ALTER TABLE ONLY aibridge_interceptions
ADD CONSTRAINT aibridge_interceptions_pkey PRIMARY KEY (id);
@@ -3190,6 +3331,12 @@ ALTER TABLE ONLY chat_providers
ALTER TABLE ONLY chat_queued_messages
ADD CONSTRAINT chat_queued_messages_pkey PRIMARY KEY (id);
ALTER TABLE ONLY chat_usage_limit_config
ADD CONSTRAINT chat_usage_limit_config_pkey PRIMARY KEY (id);
ALTER TABLE ONLY chat_usage_limit_config
ADD CONSTRAINT chat_usage_limit_config_singleton_key UNIQUE (singleton);
ALTER TABLE ONLY chats
ADD CONSTRAINT chats_pkey PRIMARY KEY (id);
@@ -3244,6 +3391,18 @@ ALTER TABLE ONLY licenses
ALTER TABLE ONLY licenses
ADD CONSTRAINT licenses_pkey PRIMARY KEY (id);
ALTER TABLE ONLY mcp_server_configs
ADD CONSTRAINT mcp_server_configs_pkey PRIMARY KEY (id);
ALTER TABLE ONLY mcp_server_configs
ADD CONSTRAINT mcp_server_configs_slug_key UNIQUE (slug);
ALTER TABLE ONLY mcp_server_user_tokens
ADD CONSTRAINT mcp_server_user_tokens_mcp_server_config_id_user_id_key UNIQUE (mcp_server_config_id, user_id);
ALTER TABLE ONLY mcp_server_user_tokens
ADD CONSTRAINT mcp_server_user_tokens_pkey PRIMARY KEY (id);
ALTER TABLE ONLY notification_messages
ADD CONSTRAINT notification_messages_pkey PRIMARY KEY (id);
@@ -3502,6 +3661,8 @@ CREATE INDEX idx_aibridge_interceptions_thread_parent_id ON aibridge_interceptio
CREATE INDEX idx_aibridge_interceptions_thread_root_id ON aibridge_interceptions USING btree (thread_root_id);
CREATE INDEX idx_aibridge_model_thoughts_interception_id ON aibridge_model_thoughts USING btree (interception_id);
CREATE INDEX idx_aibridge_token_usages_interception_id ON aibridge_token_usages USING btree (interception_id);
CREATE INDEX idx_aibridge_token_usages_provider_response_id ON aibridge_token_usages USING btree (provider_response_id);
@@ -3542,6 +3703,8 @@ CREATE INDEX idx_chat_messages_compressed_summary_boundary ON chat_messages USIN
CREATE INDEX idx_chat_messages_created_at ON chat_messages USING btree (created_at);
CREATE INDEX idx_chat_messages_owner_spend ON chat_messages USING btree (chat_id, created_at) WHERE (total_cost_micros IS NOT NULL);
CREATE INDEX idx_chat_model_configs_enabled ON chat_model_configs USING btree (enabled);
CREATE INDEX idx_chat_model_configs_provider ON chat_model_configs USING btree (provider);
@@ -3588,6 +3751,12 @@ CREATE INDEX idx_inbox_notifications_user_id_read_at ON inbox_notifications USIN
CREATE INDEX idx_inbox_notifications_user_id_template_id_targets ON inbox_notifications USING btree (user_id, template_id, targets);
CREATE INDEX idx_mcp_server_configs_enabled ON mcp_server_configs USING btree (enabled) WHERE (enabled = true);
CREATE INDEX idx_mcp_server_configs_forced ON mcp_server_configs USING btree (enabled, availability) WHERE ((enabled = true) AND (availability = 'force_on'::text));
CREATE INDEX idx_mcp_server_user_tokens_user_id ON mcp_server_user_tokens USING btree (user_id);
CREATE INDEX idx_notification_messages_status ON notification_messages USING btree (status);
CREATE INDEX idx_organization_member_organization_id_uuid ON organization_members USING btree (organization_id);
@@ -3616,6 +3785,8 @@ CREATE INDEX idx_template_versions_has_ai_task ON template_versions USING btree
CREATE UNIQUE INDEX idx_unique_preset_name ON template_version_presets USING btree (name, template_version_id);
CREATE INDEX idx_usage_events_ai_seats ON usage_events USING btree (event_type, created_at) WHERE (event_type = 'hb_ai_seats_v1'::text);
CREATE INDEX idx_usage_events_select_for_publishing ON usage_events USING btree (published_at, publish_started_at, created_at);
CREATE INDEX idx_user_deleted_deleted_at ON user_deleted USING btree (deleted_at);
@@ -3790,7 +3961,7 @@ CREATE TRIGGER trigger_delete_oauth2_provider_app_token AFTER DELETE ON oauth2_p
CREATE TRIGGER trigger_insert_apikeys BEFORE INSERT ON api_keys FOR EACH ROW EXECUTE FUNCTION insert_apikey_fail_if_user_deleted();
-CREATE TRIGGER trigger_insert_org_member_system_role AFTER INSERT ON organizations FOR EACH ROW EXECUTE FUNCTION insert_org_member_system_role();
+CREATE TRIGGER trigger_insert_organization_system_roles AFTER INSERT ON organizations FOR EACH ROW EXECUTE FUNCTION insert_organization_system_roles();
CREATE TRIGGER trigger_nullify_next_start_at_on_workspace_autostart_modificati AFTER UPDATE ON workspaces FOR EACH ROW EXECUTE FUNCTION nullify_next_start_at_on_workspace_autostart_modification();
@@ -3808,6 +3979,9 @@ COMMENT ON TRIGGER workspace_agent_name_unique_trigger ON workspace_agents IS 'U
the uniqueness requirement. A trigger allows us to enforce uniqueness going
forward without requiring a migration to clean up historical data.';
ALTER TABLE ONLY ai_seat_state
ADD CONSTRAINT ai_seat_state_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE ONLY aibridge_interceptions
ADD CONSTRAINT aibridge_interceptions_initiator_id_fkey FOREIGN KEY (initiator_id) REFERENCES users(id);
@@ -3907,6 +4081,33 @@ ALTER TABLE ONLY jfrog_xray_scans
ALTER TABLE ONLY jfrog_xray_scans
ADD CONSTRAINT jfrog_xray_scans_workspace_id_fkey FOREIGN KEY (workspace_id) REFERENCES workspaces(id) ON DELETE CASCADE;
ALTER TABLE ONLY mcp_server_configs
ADD CONSTRAINT mcp_server_configs_api_key_value_key_id_fkey FOREIGN KEY (api_key_value_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ALTER TABLE ONLY mcp_server_configs
ADD CONSTRAINT mcp_server_configs_created_by_fkey FOREIGN KEY (created_by) REFERENCES users(id) ON DELETE SET NULL;
ALTER TABLE ONLY mcp_server_configs
ADD CONSTRAINT mcp_server_configs_custom_headers_key_id_fkey FOREIGN KEY (custom_headers_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ALTER TABLE ONLY mcp_server_configs
ADD CONSTRAINT mcp_server_configs_oauth2_client_secret_key_id_fkey FOREIGN KEY (oauth2_client_secret_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ALTER TABLE ONLY mcp_server_configs
ADD CONSTRAINT mcp_server_configs_updated_by_fkey FOREIGN KEY (updated_by) REFERENCES users(id) ON DELETE SET NULL;
ALTER TABLE ONLY mcp_server_user_tokens
ADD CONSTRAINT mcp_server_user_tokens_access_token_key_id_fkey FOREIGN KEY (access_token_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ALTER TABLE ONLY mcp_server_user_tokens
ADD CONSTRAINT mcp_server_user_tokens_mcp_server_config_id_fkey FOREIGN KEY (mcp_server_config_id) REFERENCES mcp_server_configs(id) ON DELETE CASCADE;
ALTER TABLE ONLY mcp_server_user_tokens
ADD CONSTRAINT mcp_server_user_tokens_refresh_token_key_id_fkey FOREIGN KEY (refresh_token_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ALTER TABLE ONLY mcp_server_user_tokens
ADD CONSTRAINT mcp_server_user_tokens_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE ONLY notification_messages
ADD CONSTRAINT notification_messages_notification_template_id_fkey FOREIGN KEY (notification_template_id) REFERENCES notification_templates(id) ON DELETE CASCADE;
@@ -6,6 +6,7 @@ type ForeignKeyConstraint string
// ForeignKeyConstraint enums.
const (
ForeignKeyAiSeatStateUserID ForeignKeyConstraint = "ai_seat_state_user_id_fkey" // ALTER TABLE ONLY ai_seat_state ADD CONSTRAINT ai_seat_state_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyAibridgeInterceptionsInitiatorID ForeignKeyConstraint = "aibridge_interceptions_initiator_id_fkey" // ALTER TABLE ONLY aibridge_interceptions ADD CONSTRAINT aibridge_interceptions_initiator_id_fkey FOREIGN KEY (initiator_id) REFERENCES users(id);
ForeignKeyAPIKeysUserIDUUID ForeignKeyConstraint = "api_keys_user_id_uuid_fkey" // ALTER TABLE ONLY api_keys ADD CONSTRAINT api_keys_user_id_uuid_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyChatDiffStatusesChatID ForeignKeyConstraint = "chat_diff_statuses_chat_id_fkey" // ALTER TABLE ONLY chat_diff_statuses ADD CONSTRAINT chat_diff_statuses_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
@@ -39,6 +40,15 @@ const (
ForeignKeyInboxNotificationsUserID ForeignKeyConstraint = "inbox_notifications_user_id_fkey" // ALTER TABLE ONLY inbox_notifications ADD CONSTRAINT inbox_notifications_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyJfrogXrayScansAgentID ForeignKeyConstraint = "jfrog_xray_scans_agent_id_fkey" // ALTER TABLE ONLY jfrog_xray_scans ADD CONSTRAINT jfrog_xray_scans_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ForeignKeyJfrogXrayScansWorkspaceID ForeignKeyConstraint = "jfrog_xray_scans_workspace_id_fkey" // ALTER TABLE ONLY jfrog_xray_scans ADD CONSTRAINT jfrog_xray_scans_workspace_id_fkey FOREIGN KEY (workspace_id) REFERENCES workspaces(id) ON DELETE CASCADE;
ForeignKeyMcpServerConfigsAPIKeyValueKeyID ForeignKeyConstraint = "mcp_server_configs_api_key_value_key_id_fkey" // ALTER TABLE ONLY mcp_server_configs ADD CONSTRAINT mcp_server_configs_api_key_value_key_id_fkey FOREIGN KEY (api_key_value_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ForeignKeyMcpServerConfigsCreatedBy ForeignKeyConstraint = "mcp_server_configs_created_by_fkey" // ALTER TABLE ONLY mcp_server_configs ADD CONSTRAINT mcp_server_configs_created_by_fkey FOREIGN KEY (created_by) REFERENCES users(id) ON DELETE SET NULL;
ForeignKeyMcpServerConfigsCustomHeadersKeyID ForeignKeyConstraint = "mcp_server_configs_custom_headers_key_id_fkey" // ALTER TABLE ONLY mcp_server_configs ADD CONSTRAINT mcp_server_configs_custom_headers_key_id_fkey FOREIGN KEY (custom_headers_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ForeignKeyMcpServerConfigsOauth2ClientSecretKeyID ForeignKeyConstraint = "mcp_server_configs_oauth2_client_secret_key_id_fkey" // ALTER TABLE ONLY mcp_server_configs ADD CONSTRAINT mcp_server_configs_oauth2_client_secret_key_id_fkey FOREIGN KEY (oauth2_client_secret_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ForeignKeyMcpServerConfigsUpdatedBy ForeignKeyConstraint = "mcp_server_configs_updated_by_fkey" // ALTER TABLE ONLY mcp_server_configs ADD CONSTRAINT mcp_server_configs_updated_by_fkey FOREIGN KEY (updated_by) REFERENCES users(id) ON DELETE SET NULL;
ForeignKeyMcpServerUserTokensAccessTokenKeyID ForeignKeyConstraint = "mcp_server_user_tokens_access_token_key_id_fkey" // ALTER TABLE ONLY mcp_server_user_tokens ADD CONSTRAINT mcp_server_user_tokens_access_token_key_id_fkey FOREIGN KEY (access_token_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ForeignKeyMcpServerUserTokensMcpServerConfigID ForeignKeyConstraint = "mcp_server_user_tokens_mcp_server_config_id_fkey" // ALTER TABLE ONLY mcp_server_user_tokens ADD CONSTRAINT mcp_server_user_tokens_mcp_server_config_id_fkey FOREIGN KEY (mcp_server_config_id) REFERENCES mcp_server_configs(id) ON DELETE CASCADE;
ForeignKeyMcpServerUserTokensRefreshTokenKeyID ForeignKeyConstraint = "mcp_server_user_tokens_refresh_token_key_id_fkey" // ALTER TABLE ONLY mcp_server_user_tokens ADD CONSTRAINT mcp_server_user_tokens_refresh_token_key_id_fkey FOREIGN KEY (refresh_token_key_id) REFERENCES dbcrypt_keys(active_key_digest);
ForeignKeyMcpServerUserTokensUserID ForeignKeyConstraint = "mcp_server_user_tokens_user_id_fkey" // ALTER TABLE ONLY mcp_server_user_tokens ADD CONSTRAINT mcp_server_user_tokens_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyNotificationMessagesNotificationTemplateID ForeignKeyConstraint = "notification_messages_notification_template_id_fkey" // ALTER TABLE ONLY notification_messages ADD CONSTRAINT notification_messages_notification_template_id_fkey FOREIGN KEY (notification_template_id) REFERENCES notification_templates(id) ON DELETE CASCADE;
ForeignKeyNotificationMessagesUserID ForeignKeyConstraint = "notification_messages_user_id_fkey" // ALTER TABLE ONLY notification_messages ADD CONSTRAINT notification_messages_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyNotificationPreferencesNotificationTemplateID ForeignKeyConstraint = "notification_preferences_notification_template_id_fkey" // ALTER TABLE ONLY notification_preferences ADD CONSTRAINT notification_preferences_notification_template_id_fkey FOREIGN KEY (notification_template_id) REFERENCES notification_templates(id) ON DELETE CASCADE;
@@ -26,6 +26,7 @@ func TestCustomQueriesSyncedRowScan(t *testing.T) {
"GetTemplatesWithFilter": "GetAuthorizedTemplates",
"GetWorkspaces": "GetAuthorizedWorkspaces",
"GetUsers": "GetAuthorizedUsers",
"GetChats": "GetAuthorizedChats",
}
// Scan custom
@@ -0,0 +1,7 @@
ALTER TABLE chat_diff_statuses DROP COLUMN author_login;
ALTER TABLE chat_diff_statuses DROP COLUMN author_avatar_url;
ALTER TABLE chat_diff_statuses DROP COLUMN base_branch;
ALTER TABLE chat_diff_statuses DROP COLUMN pr_number;
ALTER TABLE chat_diff_statuses DROP COLUMN commits;
ALTER TABLE chat_diff_statuses DROP COLUMN approved;
ALTER TABLE chat_diff_statuses DROP COLUMN reviewer_count;
@@ -0,0 +1,7 @@
ALTER TABLE chat_diff_statuses ADD COLUMN author_login TEXT;
ALTER TABLE chat_diff_statuses ADD COLUMN author_avatar_url TEXT;
ALTER TABLE chat_diff_statuses ADD COLUMN base_branch TEXT;
ALTER TABLE chat_diff_statuses ADD COLUMN pr_number INTEGER;
ALTER TABLE chat_diff_statuses ADD COLUMN commits INTEGER;
ALTER TABLE chat_diff_statuses ADD COLUMN approved BOOLEAN;
ALTER TABLE chat_diff_statuses ADD COLUMN reviewer_count INTEGER;
@@ -0,0 +1 @@
ALTER TABLE chat_diff_statuses DROP COLUMN head_branch;
@@ -0,0 +1 @@
ALTER TABLE chat_diff_statuses ADD COLUMN head_branch TEXT;
@@ -0,0 +1,3 @@
DROP TABLE ai_seat_state;
DROP TYPE ai_seat_usage_reason;
@@ -0,0 +1,13 @@
CREATE TYPE ai_seat_usage_reason AS ENUM (
'aibridge',
'task'
);
CREATE TABLE ai_seat_state (
user_id uuid NOT NULL PRIMARY KEY REFERENCES users (id) ON DELETE CASCADE,
first_used_at timestamptz NOT NULL,
last_used_at timestamptz NOT NULL,
last_event_type ai_seat_usage_reason NOT NULL,
last_event_description text NOT NULL,
updated_at timestamptz NOT NULL
);
@@ -0,0 +1 @@
-- resource_type enum values cannot be removed safely; no-op.
@@ -0,0 +1 @@
ALTER TYPE resource_type ADD VALUE IF NOT EXISTS 'ai_seat';
@@ -0,0 +1,4 @@
DROP INDEX IF EXISTS idx_chat_messages_owner_spend;
ALTER TABLE groups DROP COLUMN IF EXISTS chat_spend_limit_micros;
ALTER TABLE users DROP COLUMN IF EXISTS chat_spend_limit_micros;
DROP TABLE IF EXISTS chat_usage_limit_config;
@@ -0,0 +1,32 @@
-- 1. Singleton config table
CREATE TABLE chat_usage_limit_config (
id BIGSERIAL PRIMARY KEY,
-- Only one row allowed (enforced by CHECK).
singleton BOOLEAN NOT NULL DEFAULT TRUE CHECK (singleton),
UNIQUE (singleton),
enabled BOOLEAN NOT NULL DEFAULT FALSE,
-- Limit per user per period, in micro-dollars (1 USD = 1,000,000).
default_limit_micros BIGINT NOT NULL DEFAULT 0
CHECK (default_limit_micros >= 0),
-- Period length: 'day', 'week', or 'month'.
period TEXT NOT NULL DEFAULT 'month'
CHECK (period IN ('day', 'week', 'month')),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Seed a single disabled row so reads never return empty.
INSERT INTO chat_usage_limit_config (singleton) VALUES (TRUE);
-- 2. Per-user overrides (inline on users table).
ALTER TABLE users ADD COLUMN chat_spend_limit_micros BIGINT DEFAULT NULL
CHECK (chat_spend_limit_micros IS NULL OR chat_spend_limit_micros > 0);
-- 3. Per-group overrides (inline on groups table).
ALTER TABLE groups ADD COLUMN chat_spend_limit_micros BIGINT DEFAULT NULL
CHECK (chat_spend_limit_micros IS NULL OR chat_spend_limit_micros > 0);
-- Speed up per-user spend aggregation in the usage-limit hot path.
CREATE INDEX idx_chat_messages_owner_spend
ON chat_messages (chat_id, created_at)
WHERE total_cost_micros IS NOT NULL;
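The migration above relies on a singleton-row pattern: a boolean column constrained to be TRUE by a CHECK, combined with a UNIQUE constraint on that column, so the table can never hold more than one row. A minimal sketch of the idea in SQLite (simplified columns, not the actual migration):

```python
import sqlite3

# Sketch of the singleton-row pattern from the migration above: CHECK
# forces `singleton` to be TRUE, and UNIQUE(singleton) then forbids a
# second TRUE row, so exactly one row can ever exist.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chat_usage_limit_config (
        id INTEGER PRIMARY KEY,
        singleton BOOLEAN NOT NULL DEFAULT TRUE CHECK (singleton),
        enabled BOOLEAN NOT NULL DEFAULT FALSE,
        default_limit_micros INTEGER NOT NULL DEFAULT 0,
        UNIQUE (singleton)
    )
""")
# Seed the single row, mirroring the migration's INSERT.
conn.execute("INSERT INTO chat_usage_limit_config (singleton) VALUES (TRUE)")

# A second row violates UNIQUE(singleton); inserting singleton = FALSE
# would violate the CHECK instead. Either way, the row count stays at 1.
try:
    conn.execute("INSERT INTO chat_usage_limit_config (singleton) VALUES (TRUE)")
except sqlite3.IntegrityError as e:
    print("second row rejected:", e)
```

The same two-constraint trick works in PostgreSQL, which is why reads of this table can assume exactly one row after the seed INSERT.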
@@ -0,0 +1,3 @@
DROP INDEX idx_aibridge_model_thoughts_interception_id;
DROP TABLE aibridge_model_thoughts;
@@ -0,0 +1,10 @@
CREATE TABLE aibridge_model_thoughts (
interception_id UUID NOT NULL,
content TEXT NOT NULL,
metadata jsonb,
created_at TIMESTAMPTZ NOT NULL
);
COMMENT ON TABLE aibridge_model_thoughts IS 'Audit log of model thinking in intercepted requests in AI Bridge';
CREATE INDEX idx_aibridge_model_thoughts_interception_id ON aibridge_model_thoughts(interception_id);
@@ -0,0 +1,52 @@
DELETE FROM custom_roles
WHERE name = 'organization-service-account' AND is_system = true;
ALTER TABLE organizations
ADD COLUMN workspace_sharing_disabled boolean NOT NULL DEFAULT false;
-- Migrate back: 'none' -> disabled, everything else -> enabled.
UPDATE organizations
SET workspace_sharing_disabled = true
WHERE shareable_workspace_owners = 'none';
ALTER TABLE organizations DROP COLUMN shareable_workspace_owners;
DROP TYPE shareable_workspace_owners;
-- Restore the original single-role trigger from migration 408.
DROP TRIGGER IF EXISTS trigger_insert_organization_system_roles ON organizations;
DROP FUNCTION IF EXISTS insert_organization_system_roles;
CREATE OR REPLACE FUNCTION insert_org_member_system_role() RETURNS trigger AS $$
BEGIN
INSERT INTO custom_roles (
name,
display_name,
organization_id,
site_permissions,
org_permissions,
user_permissions,
member_permissions,
is_system,
created_at,
updated_at
) VALUES (
'organization-member',
'',
NEW.id,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
true,
NOW(),
NOW()
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_insert_org_member_system_role
AFTER INSERT ON organizations
FOR EACH ROW
EXECUTE FUNCTION insert_org_member_system_role();
@@ -0,0 +1,101 @@
CREATE TYPE shareable_workspace_owners AS ENUM ('none', 'everyone', 'service_accounts');
ALTER TABLE organizations
ADD COLUMN shareable_workspace_owners shareable_workspace_owners NOT NULL DEFAULT 'everyone';
COMMENT ON COLUMN organizations.shareable_workspace_owners IS 'Controls whose workspaces can be shared: none, everyone, or service_accounts.';
-- Migrate existing data from the boolean column.
UPDATE organizations
SET shareable_workspace_owners = 'none'
WHERE workspace_sharing_disabled = true;
ALTER TABLE organizations DROP COLUMN workspace_sharing_disabled;
-- Defensively rename any existing 'organization-service-account' roles
-- so they don't collide with the new system role.
UPDATE custom_roles
SET name = name || '-' || id::text
-- lower(name) is part of the existing unique index
WHERE lower(name) = 'organization-service-account';
-- Create skeleton organization-service-account system roles for all
-- existing organizations, mirroring what migration 408 did for
-- organization-member.
INSERT INTO custom_roles (
name,
display_name,
organization_id,
site_permissions,
org_permissions,
user_permissions,
member_permissions,
is_system,
created_at,
updated_at
)
SELECT
'organization-service-account',
'',
id,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
true,
NOW(),
NOW()
FROM
organizations;
-- Replace the single-role trigger with one that creates both system
-- roles when a new organization is inserted.
DROP TRIGGER IF EXISTS trigger_insert_org_member_system_role ON organizations;
DROP FUNCTION IF EXISTS insert_org_member_system_role;
CREATE OR REPLACE FUNCTION insert_organization_system_roles() RETURNS trigger AS $$
BEGIN
INSERT INTO custom_roles (
name,
display_name,
organization_id,
site_permissions,
org_permissions,
user_permissions,
member_permissions,
is_system,
created_at,
updated_at
) VALUES
(
'organization-member',
'',
NEW.id,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
true,
NOW(),
NOW()
),
(
'organization-service-account',
'',
NEW.id,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
true,
NOW(),
NOW()
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_insert_organization_system_roles
AFTER INSERT ON organizations
FOR EACH ROW
EXECUTE FUNCTION insert_organization_system_roles();
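The trigger above fans each new organization row out into two skeleton system roles. A small SQLite sketch of the same fan-out (table shapes are simplified stand-ins, not the real schema):

```python
import sqlite3

# Illustrative sketch of the two-role AFTER INSERT trigger: every new
# organizations row causes two system roles to be created for it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE organizations (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE custom_roles (
        name TEXT NOT NULL,
        organization_id INTEGER NOT NULL,
        is_system BOOLEAN NOT NULL
    );
    CREATE TRIGGER trigger_insert_organization_system_roles
    AFTER INSERT ON organizations
    BEGIN
        INSERT INTO custom_roles (name, organization_id, is_system) VALUES
            ('organization-member', NEW.id, TRUE),
            ('organization-service-account', NEW.id, TRUE);
    END;
""")
conn.execute("INSERT INTO organizations (name) VALUES ('acme')")
roles = [r[0] for r in conn.execute(
    "SELECT name FROM custom_roles ORDER BY name")]
print(roles)  # both system roles created by the trigger
```

The backfilling `INSERT ... SELECT` earlier in the migration covers organizations that existed before the trigger; the trigger covers everything created afterwards.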
@@ -0,0 +1,38 @@
DROP INDEX IF EXISTS idx_usage_events_ai_seats;
-- Remove hb_ai_seats_v1 rows so the original constraint can be restored.
DELETE FROM usage_events WHERE event_type = 'hb_ai_seats_v1';
DELETE FROM usage_events_daily WHERE event_type = 'hb_ai_seats_v1';
-- Restore original constraint.
ALTER TABLE usage_events
DROP CONSTRAINT usage_event_type_check,
ADD CONSTRAINT usage_event_type_check CHECK (event_type IN ('dc_managed_agents_v1'));
-- Restore the original aggregate function without hb_ai_seats_v1 support.
CREATE OR REPLACE FUNCTION aggregate_usage_event()
RETURNS TRIGGER AS $$
BEGIN
IF NEW.event_type NOT IN ('dc_managed_agents_v1') THEN
RAISE EXCEPTION 'Unhandled usage event type in aggregate_usage_event: %', NEW.event_type;
END IF;
INSERT INTO usage_events_daily (day, event_type, usage_data)
VALUES (
date_trunc('day', NEW.created_at AT TIME ZONE 'UTC')::date,
NEW.event_type,
NEW.event_data
)
ON CONFLICT (day, event_type) DO UPDATE SET
usage_data = CASE
WHEN NEW.event_type IN ('dc_managed_agents_v1') THEN
jsonb_build_object(
'count',
COALESCE((usage_events_daily.usage_data->>'count')::bigint, 0) +
COALESCE((NEW.event_data->>'count')::bigint, 0)
)
END;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
@@ -0,0 +1,50 @@
-- Expand the CHECK constraint to allow hb_ai_seats_v1.
ALTER TABLE usage_events
DROP CONSTRAINT usage_event_type_check,
ADD CONSTRAINT usage_event_type_check CHECK (event_type IN ('dc_managed_agents_v1', 'hb_ai_seats_v1'));
-- Partial index for efficient lookups of AI seat heartbeat events by time.
-- This will be used for the admin dashboard to see seat count over time.
CREATE INDEX idx_usage_events_ai_seats
ON usage_events (event_type, created_at)
WHERE event_type = 'hb_ai_seats_v1';
-- Update the aggregate function to handle hb_ai_seats_v1 events.
-- Heartbeat events replace the previous value for the same time period.
CREATE OR REPLACE FUNCTION aggregate_usage_event()
RETURNS TRIGGER AS $$
BEGIN
-- Check for supported event types and throw error for unknown types.
IF NEW.event_type NOT IN ('dc_managed_agents_v1', 'hb_ai_seats_v1') THEN
RAISE EXCEPTION 'Unhandled usage event type in aggregate_usage_event: %', NEW.event_type;
END IF;
INSERT INTO usage_events_daily (day, event_type, usage_data)
VALUES (
date_trunc('day', NEW.created_at AT TIME ZONE 'UTC')::date,
NEW.event_type,
NEW.event_data
)
ON CONFLICT (day, event_type) DO UPDATE SET
usage_data = CASE
-- Handle simple counter events by summing the count.
WHEN NEW.event_type IN ('dc_managed_agents_v1') THEN
jsonb_build_object(
'count',
COALESCE((usage_events_daily.usage_data->>'count')::bigint, 0) +
COALESCE((NEW.event_data->>'count')::bigint, 0)
)
-- Heartbeat events: keep the max value seen that day
WHEN NEW.event_type IN ('hb_ai_seats_v1') THEN
jsonb_build_object(
'count',
GREATEST(
COALESCE((usage_events_daily.usage_data->>'count')::bigint, 0),
COALESCE((NEW.event_data->>'count')::bigint, 0)
)
)
END;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
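The CASE in the aggregate function above gives the two event families different daily semantics: counter events sum across the day, while heartbeat events keep the maximum value seen. A hedged Python sketch of just that aggregation logic (event names from the migration; everything else simplified):

```python
# Sketch of aggregate_usage_event's per-day semantics: counters
# (dc_managed_agents_v1) accumulate by summing, heartbeats
# (hb_ai_seats_v1) keep the max value seen that day, and unknown
# event types raise, mirroring the trigger's RAISE EXCEPTION.
def aggregate(daily: dict, event_type: str, count: int) -> None:
    prev = daily.get(event_type, 0)
    if event_type == "dc_managed_agents_v1":
        daily[event_type] = prev + count
    elif event_type == "hb_ai_seats_v1":
        daily[event_type] = max(prev, count)
    else:
        raise ValueError(f"Unhandled usage event type: {event_type}")

day = {}
for evt, n in [("dc_managed_agents_v1", 2), ("dc_managed_agents_v1", 3),
               ("hb_ai_seats_v1", 5), ("hb_ai_seats_v1", 4)]:
    aggregate(day, evt, n)
print(day)  # counters summed to 5; heartbeat max stays 5
```

Max rather than sum is what makes heartbeats idempotent-ish: re-sending a seat-count heartbeat for the same day never inflates the daily figure.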
@@ -0,0 +1 @@
ALTER TABLE chat_messages DROP COLUMN runtime_ms;
@@ -0,0 +1 @@
ALTER TABLE chat_messages ADD COLUMN runtime_ms bigint;
@@ -0,0 +1,2 @@
DELETE FROM chat_messages WHERE deleted = true;
ALTER TABLE chat_messages DROP COLUMN deleted;
@@ -0,0 +1 @@
ALTER TABLE chat_messages ADD COLUMN deleted boolean NOT NULL DEFAULT false;
@@ -0,0 +1,6 @@
ALTER TABLE chats DROP COLUMN IF EXISTS mcp_server_ids;
DROP INDEX IF EXISTS idx_mcp_server_configs_enabled;
DROP INDEX IF EXISTS idx_mcp_server_configs_forced;
DROP INDEX IF EXISTS idx_mcp_server_user_tokens_user_id;
DROP TABLE IF EXISTS mcp_server_user_tokens;
DROP TABLE IF EXISTS mcp_server_configs;