Compare commits


58 Commits

Author SHA1 Message Date
Jon Ayers cfd7730194 chore(enterprise/tailnet): add debug logging for LOST and DISCONNECTED peer updates 2026-04-10 00:40:57 +00:00
Jon Ayers 1937ada0cd fix: use enriched logger for HeartbeatClose, reduce AwaitReachable backoff to 5s 2026-04-09 23:45:59 +00:00
Jon Ayers d64cd6415d revert: move HeartbeatClose back before agent dial 2026-04-09 22:37:31 +00:00
Jon Ayers c1851d9453 chore(coderd/workspaceapps): add workspace_id and elapsed time to PTY dial logs 2026-04-09 22:35:44 +00:00
Jon Ayers 8f73453681 fix(coderd/workspaceapps): move HeartbeatClose after agent dial, add 1m setup timeout 2026-04-09 21:39:28 +00:00
Jon Ayers 165db3d31c perf(enterprise/tailnet): increase coordinator worker counts and batch size for 10k scale 2026-04-09 21:26:52 +00:00
Jon Ayers 1bd1516fd1 perf(tailnet): singleflight AwaitReachable to deduplicate concurrent ping storms 2026-04-09 19:34:22 +00:00
Jon Ayers 81ba35a987 fix(coderd/tailnet): move ensureAgent Send outside mutex using singleflight 2026-04-09 19:12:27 +00:00
Jon Ayers 53d63cf8e9 perf(coderd/database/pubsub): batch-drain msgQueue to amortize lock overhead
Replace the one-at-a-time dequeue loop in msgQueue.run() with a batch
drain that copies up to 256 messages per lock acquisition. This
amortizes mutex acquire/release and cond.Wait costs across many
messages, improving drain throughput during bursts and reducing the
likelihood of ring buffer overflow.
2026-04-08 00:02:29 +00:00
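The batch-drain pattern in the commit above can be sketched as a minimal `sync.Cond`-guarded queue. This is a hypothetical stand-in, not the real `msgQueue` from `coderd/database/pubsub`; only the 256-message batch cap comes from the commit text, everything else is illustrative:

```go
package main

import "sync"

// batchSize caps how many messages are copied out per lock acquisition.
const batchSize = 256

type msgQueue struct {
	mu   sync.Mutex
	cond *sync.Cond
	q    []string
}

func newMsgQueue() *msgQueue {
	m := &msgQueue{}
	m.cond = sync.NewCond(&m.mu)
	return m
}

func (m *msgQueue) enqueue(msg string) {
	m.mu.Lock()
	m.q = append(m.q, msg)
	m.mu.Unlock()
	m.cond.Signal()
}

// drainBatch blocks until at least one message is queued, then removes and
// returns up to batchSize messages under a single mutex acquisition,
// amortizing lock and cond.Wait costs across the whole batch.
func (m *msgQueue) drainBatch() []string {
	m.mu.Lock()
	defer m.mu.Unlock()
	for len(m.q) == 0 {
		m.cond.Wait()
	}
	n := len(m.q)
	if n > batchSize {
		n = batchSize
	}
	batch := make([]string, n)
	copy(batch, m.q[:n])
	m.q = m.q[n:]
	return batch
}
```

A consumer loop then processes each returned batch outside the lock, which is what improves drain throughput during bursts.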
Jon Ayers 4213a43b53 fix(enterprise/tailnet): async singleflight-coalesced resyncPeerMappings in pubsub callbacks
Replace synchronous resyncPeerMappings() calls in listenPeer and
listenTunnel with async goroutines using singleflight.Do. This
prevents blocking the pubsub drain goroutine when ErrDroppedMessages
arrives, avoiding cascading buffer overflows.
2026-04-08 00:02:21 +00:00
Garrett Delfosse 5453a6c6d6 fix(scripts/releaser): simplify branch regex and fix changelog range (#23947)
Two fixes for the release script:

**1. Branch regex cleanup** — Simplified to only match `release/X.Y`. Removed
support for `release/X.Y.Z` and `release/X.Y-rc.N` branch formats. RCs are
now tagged from main (not from release branches), and the three-segment
`release/X.Y.Z` format will not be used going forward.

**2. Changelog range for first release on a new minor** — When no tags match
the branch's major.minor, the commit range fell back to `HEAD` (entire git
history, ~13k lines of changelog). Now computes `git merge-base` with the
previous minor's release branch (e.g. `origin/release/2.32`) as the changelog
starting point. This works even when that branch has no tags pushed yet. Falls
back to the latest reachable tag from a previous minor if the branch doesn't
exist.
2026-04-07 17:07:21 +00:00
Jake Howell 21c08a37d7 feat: de-mui <LogLine /> and <Logs /> (#24043)
Migrated LogLine and Logs components from Emotion CSS-in-JS to Tailwind
CSS classes.

- Replaced Emotion `css` prop and theme-based styling with Tailwind
utility classes in `LogLine` and `LogLinePrefix` components
- Converted CSS-in-JS styles object to conditional Tailwind classes
using the `cn` utility function
- Updated log level styling (error, debug, warn) to use Tailwind classes
with design token references
- Migrated the Logs container component styling from Emotion to Tailwind
classes
- Removed Emotion imports and theme dependencies
2026-04-07 16:35:10 +00:00
Jake Howell 2bd261fbbf fix: cleanup useKebabMenu code (#24042)
Refactored the tab overflow hook by renaming `useTabOverflowKebabMenu`
to `useKebabMenu` and removing the configurable `alwaysVisibleTabsCount`
parameter.

- Renamed `useTabOverflowKebabMenu` to `useKebabMenu` and moved it to a
new file
- Removed the `alwaysVisibleTabsCount` parameter and hardcoded it to 1
tab as `ALWAYS_VISIBLE_TABS_COUNT`
- Removed the `utils/index.ts` export file for the Tabs component
- Updated the import in `AgentRow.tsx` to use the new hook name and
removed the `alwaysVisibleTabsCount` prop
- Refactored the internal logic to use a more functional approach with
`reduce` instead of imperative loops
- Added better performance optimizations to prevent unnecessary
re-renders
2026-04-08 02:25:18 +10:00
Kyle Carberry cffc68df58 feat(site): render read_skill body as markdown (#24069) 2026-04-07 11:50:21 -04:00
Jake Howell 6e5335df1e feat: implement new workspace download logs dropdown (#23963)
This PR improves the agent log download functionality by replacing the
single download button with a comprehensive dropdown menu system.

- Replaced single download button with a dropdown menu offering multiple
download options
- Added ability to download all logs or individual log sources
separately
- Updated download button to show chevron icon indicating dropdown
functionality
- Enhanced download options with appropriate icons for each log source

<img width="370" height="305" alt="image"
src="https://github.com/user-attachments/assets/ddf025f5-f936-499a-9165-6e81b62d6860"
/>
2026-04-07 15:27:43 +00:00
Kyle Carberry 16265e834e chore: update fantasy fork to use github.com/coder/fantasy (#24100)
Moves the `charm.land/fantasy` replace directive from
`github.com/kylecarbs/fantasy` to `github.com/coder/fantasy`, pointing
at the same `cj/go1.25` branch and commit (`112927d9b6d8`).

> Generated by Coder Agents
2026-04-07 16:11:49 +01:00
Zach 565a15bc9b feat: update user secrets queries for REST API and injection (#23998)
Update queries as prep work for user secrets API development:
- Switch all lookups and mutations from ID-based to user_id + name
- Split list query into metadata-only (for API responses) and
with-values (for provisioner/agent)
- Add partial update support using CASE WHEN pattern for write-only
value fields
- Include value_key_id in create for dbcrypt encryption support
- Update dbauthz wrappers and remove stale methods from dbmetrics
2026-04-07 09:03:28 -06:00
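The CASE WHEN partial-update pattern mentioned above pairs each write-only value with a boolean "set" flag, so unset fields keep their stored value. A hypothetical Go analog of those semantics (the field names are illustrative, not the real user-secrets schema):

```go
package main

// Hypothetical Go analog of the SQL partial-update pattern
//   SET value = CASE WHEN @set_value::bool THEN @value ELSE value END
// Fields whose "set" flag is false are left untouched, which is what lets
// the API update a description without resending a write-only value.
type secret struct {
	Value       string
	Description string
}

type secretPatch struct {
	SetValue       bool
	Value          string
	SetDescription bool
	Description    string
}

// applyPatch mirrors the CASE WHEN expression for each field.
func applyPatch(s secret, p secretPatch) secret {
	if p.SetValue {
		s.Value = p.Value
	}
	if p.SetDescription {
		s.Description = p.Description
	}
	return s
}
```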
Ethan 76a2cb1af5 fix(site/src/pages/AgentsPage): reset provider form after create (#23975)
Previously, after creating a provider config in the agents provider
editor, the Save changes button stayed enabled for the lifetime of the
mounted form. The form kept the pre-create local baseline, so the
freshly-saved values still looked dirty.

Key `ProviderForm` by provider config identity so React remounts the
form when a config is created and re-establishes the pristine state from
the saved provider values.
2026-04-08 00:32:36 +10:00
Kyle Carberry 684f21740d perf(coderd): batch chat heartbeat queries into single UPDATE per interval (#24037)
## Summary

Replaces N per-chat heartbeat goroutines with a single centralized
heartbeat loop that issues one `UPDATE` per 30s interval for all running
chats on a worker.

## Problem

Each running chat spawned a dedicated goroutine that issued an
individual `UPDATE chats SET heartbeat_at = NOW() WHERE id = $1 AND
worker_id = $2 AND status = 'running'` query every 30 seconds. At 10,000
concurrent chats this produces **~333 DB queries/second** just for
heartbeats, plus ~333 `ActivityBumpWorkspace` CTE queries/second from
`trackWorkspaceUsage`.

## Solution

New `UpdateChatHeartbeats` (plural) SQL query replaces the old singular
`UpdateChatHeartbeat`:

```sql
UPDATE chats
SET    heartbeat_at = @now::timestamptz
WHERE  worker_id = @worker_id::uuid
  AND  status = 'running'::chat_status
RETURNING id;
```

A single `heartbeatLoop` goroutine on the `Server`:
1. Ticks every `chatHeartbeatInterval` (30s)
2. Issues one batch UPDATE for all registered chats
3. Detects stolen/completed chats via set-difference (equivalent of old
`rows == 0`)
4. Calls `trackWorkspaceUsage` for surviving chats

`processChat` registers an entry in the heartbeat registry instead of
spawning a goroutine.

## Impact

| Metric | Before (10K chats) | After (10K chats) |
|---|---|---|
| Heartbeat queries/sec | ~333 | ~0.03 (1 per 30s per replica) |
| Heartbeat goroutines | 10,000 | 1 |
| Self-interrupt detection | Per-chat `rows==0` | Batch set-difference |

---

> 🤖 Generated by Coder Agents

<details><summary>Implementation notes</summary>

- Uses `@now` parameter instead of `NOW()` so tests with `quartz.Mock`
can control timestamps.
- `heartbeatEntry` stores `context.CancelCauseFunc` + workspace state
for the centralized loop.
- `recoverStaleChats` is unaffected — it reads `heartbeat_at` which is
still updated.
- The old singular `UpdateChatHeartbeat` is removed entirely.
- `dbauthz` wrapper uses system-level `rbac.ResourceChat` authorization
(same pattern as `AcquireChats`).

</details>
2026-04-07 10:25:46 -04:00
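The set-difference step in the heartbeat commit above can be sketched directly: the batch UPDATE's `RETURNING id` gives the set of chats still owned by this worker, and any registered chat absent from that set has been stolen or completed. A minimal sketch with string IDs standing in for UUIDs:

```go
package main

// lostChats computes registered \ updated: the batch UPDATE returns the IDs
// it touched, so any registered chat missing from that set was stolen or
// completed (the batch equivalent of the old per-chat rows == 0 check).
func lostChats(registered, updated []string) []string {
	seen := make(map[string]struct{}, len(updated))
	for _, id := range updated {
		seen[id] = struct{}{}
	}
	var lost []string
	for _, id := range registered {
		if _, ok := seen[id]; !ok {
			lost = append(lost, id)
		}
	}
	return lost
}
```

The heartbeat loop would cancel the contexts of the lost chats and call `trackWorkspaceUsage` only for the survivors.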
George K 86ca61d6ca perf: cap count queries and emit native UUID comparisons for audit/connection logs (#23835)
Audit and connection log pages were timing out due to expensive COUNT(*)
queries over large tables. This commit adds opt-in count capping: responses can
include a `count_cap` field signaling that the count was truncated at a threshold,
avoiding the full table scans that caused page timeouts.

Text-cast UUID comparisons in regosql-generated authorization queries
also contributed to the slowdown by preventing index usage for connection
and audit log queries. These now emit native UUID operators.

Frontend changes handle the capped state in usePaginatedQuery and
PaginationWidget, optionally displaying a capped count in the pagination
UI (e.g. "Showing 2,076 to 2,100 of 2,000+ logs").

Related to:
https://linear.app/codercom/issue/PLAT-31/connectionaudit-log-performance-issue
2026-04-07 07:24:53 -07:00
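One common way to cap a count, which the commit above may or may not use verbatim, is to count inside a LIMIT-ed subquery so the scan stops at the threshold. A hypothetical sketch of that query shape plus the capped display label (table name, filter, and the 2000 cap are all illustrative):

```go
package main

import "fmt"

// countCap is an illustrative threshold, not the real product value.
const countCap = 2000

// cappedCountSQL builds the capped form of a filtered count: the inner
// SELECT stops after countCap rows, so COUNT(*) never scans the full table.
// The table and WHERE clause here are placeholders, not the coder schema.
func cappedCountSQL(table, where string) string {
	return fmt.Sprintf(
		"SELECT count(*) FROM (SELECT 1 FROM %s WHERE %s LIMIT %d) sub",
		table, where, countCap)
}

// displayCount renders the pagination label, appending "+" when the count
// hit the cap and is therefore only a lower bound.
func displayCount(count int) string {
	if count >= countCap {
		return fmt.Sprintf("%d+", countCap)
	}
	return fmt.Sprintf("%d", count)
}
```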
Jake Howell f0521cfa3c fix: resolve <LogLine /> storybook flake (#24084)
This pull request stabilizes the test by pinning the log timestamp to a
consistent date, so the content no longer changes with each new storybook
artifact.

Closes https://github.com/coder/internal/issues/1454
2026-04-08 00:17:06 +10:00
Danielle Maywood 0c5d189aff fix(site): stabilize mutation callbacks for React Compiler memoization (#24089) 2026-04-07 15:05:27 +01:00
Michael Suchacz d7c8213eee fix(coderd/x/chatd/mcpclient): deterministic external MCP tool ordering (#24075)
> This PR was authored by Mux on behalf of Mike.

External MCP tools returned by `ConnectAll` were ordered by goroutine
completion, making the tool list nondeterministic across chat turns.
This broke prompt-cache stability since tools are serialized in order.

Sort tools by their model-visible name after all connections complete,
matching the existing pattern in workspace MCP tools
(`agent/x/agentmcp/manager.go`). Also guards against a nil-client panic
in cleanup when a connected server contributes zero tools after
filtering.
2026-04-07 14:42:30 +02:00
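The ordering fix above amounts to sorting the collected tools by their model-visible name once all connections complete. A minimal sketch, with an illustrative `tool` type rather than the real MCP client types:

```go
package main

import "sort"

type tool struct {
	Name string // model-visible name
}

// sortTools orders tools deterministically by model-visible name, so the
// serialized tool list is stable across chat turns regardless of which
// MCP connection goroutine finished first.
func sortTools(tools []tool) {
	sort.Slice(tools, func(i, j int) bool {
		return tools[i].Name < tools[j].Name
	})
}
```

Stable ordering matters here because tools are serialized into the prompt in list order, and a reordered prompt invalidates the prompt cache.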
Cian Johnston 63924ac687 fix(site): use async findByLabelText in ProviderAccordionCards story (#24087)
- Use async `findByLabelText` instead of sync `getByLabelText` in
`ProviderAccordionCards` story
- Same bug fixed in #23999 for three other stories but missed for this
one

> 🤖 Written by a Coder Agent. Will be reviewed by a human.
2026-04-07 14:13:56 +02:00
dependabot[bot] 6c47e9ea23 ci: bump the github-actions group with 3 updates (#24085)
Bumps the github-actions group with 3 updates:
[step-security/harden-runner](https://github.com/step-security/harden-runner),
[dependabot/fetch-metadata](https://github.com/dependabot/fetch-metadata)
and [github/codeql-action](https://github.com/github/codeql-action).

Updates `step-security/harden-runner` from 2.16.0 to 2.16.1

Updates `dependabot/fetch-metadata` from 2.5.0 to 3.0.0

Updates `github/codeql-action` from 4.31.9 to 4.35.1
workflows, such as Default Setup. This setting can only be enabled by
GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3485">#3485</a></li>
<li>Added a setting which enables GitHub-managed workflows, such as
Default Setup, to use a <a
href="https://github.com/dsp-testing/codeql-cli-nightlies">nightly
CodeQL CLI release</a> instead of the latest, stable release that is
used by default. This will help GitHub staff support customers whose
analyses for a given repository or organization require early access to
a change in an upcoming CodeQL CLI release. This setting can only be
enabled by GitHub staff. <a
href="https://redirect.github.com/github/codeql-action/pull/3484">#3484</a></li>
</ul>
<h2>v4.32.3</h2>
<ul>
<li>Added experimental support for testing connections to <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries</a>. This feature is not currently enabled for any
analysis. In the future, it may be enabled by default for Default Setup.
<a
href="https://redirect.github.com/github/codeql-action/pull/3466">#3466</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's
changelog</a>.</em></p>
<blockquote>
<h1>CodeQL Action Changelog</h1>
<p>See the <a
href="https://github.com/github/codeql-action/releases">releases
page</a> for the relevant changes to the CodeQL CLI and language
packs.</p>
<h2>[UNRELEASED]</h2>
<ul>
<li>The undocumented TRAP cache cleanup feature that could be enabled
using the <code>CODEQL_ACTION_CLEANUP_TRAP_CACHES</code> environment
variable is deprecated and will be removed in May 2026. If you are
affected by this, we recommend disabling TRAP caching by passing the
<code>trap-caching: false</code> input to the <code>init</code> Action.
<a
href="https://redirect.github.com/github/codeql-action/pull/3795">#3795</a></li>
<li>The Git version 2.36.0 requirement for improved incremental analysis
now only applies to repositories that contain submodules. <a
href="https://redirect.github.com/github/codeql-action/pull/3789">#3789</a></li>
<li>Python analysis on GHES no longer extracts the standard library,
relying instead on models of the standard library. This should result in
significantly faster extraction and analysis times, while the effect on
alerts should be minimal. <a
href="https://redirect.github.com/github/codeql-action/pull/3794">#3794</a></li>
</ul>
<h2>4.35.1 - 27 Mar 2026</h2>
<ul>
<li>Fix incorrect minimum required Git version for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a>: it should have been 2.36.0, not 2.11.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3781">#3781</a></li>
</ul>
<h2>4.35.0 - 27 Mar 2026</h2>
<ul>
<li>Reduced the minimum Git version required for <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> from 2.38.0 to 2.11.0. <a
href="https://redirect.github.com/github/codeql-action/pull/3767">#3767</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.1">2.25.1</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3773">#3773</a></li>
</ul>
<h2>4.34.1 - 20 Mar 2026</h2>
<ul>
<li>Downgrade default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.3">2.24.3</a>
due to issues with a small percentage of Actions and JavaScript
analyses. <a
href="https://redirect.github.com/github/codeql-action/pull/3762">#3762</a></li>
</ul>
<h2>4.34.0 - 20 Mar 2026</h2>
<ul>
<li>Added an experimental change which disables TRAP caching when <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis</a> is enabled, since improved incremental analysis
supersedes TRAP caching. This will improve performance and reduce
Actions cache usage. We expect to roll this change out to everyone in
March. <a
href="https://redirect.github.com/github/codeql-action/pull/3569">#3569</a></li>
<li>We are rolling out improved incremental analysis to C/C++ analyses
that use build mode <code>none</code>. We expect this rollout to be
complete by the end of April 2026. <a
href="https://redirect.github.com/github/codeql-action/pull/3584">#3584</a></li>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.25.0">2.25.0</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3585">#3585</a></li>
</ul>
<h2>4.33.0 - 16 Mar 2026</h2>
<ul>
<li>
<p>Upcoming change: Starting April 2026, the CodeQL Action will skip
collecting file coverage information on pull requests to improve
analysis performance. File coverage information will still be computed
on non-PR analyses. Pull request analyses will log a warning about this
upcoming change. <a
href="https://redirect.github.com/github/codeql-action/pull/3562">#3562</a></p>
<p>To opt out of this change:</p>
<ul>
<li><strong>Repositories owned by an organization:</strong> Create a
custom repository property with the name
<code>github-codeql-file-coverage-on-prs</code> and the type
&quot;True/false&quot;, then set this property to <code>true</code> in
the repository's settings. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>.
Alternatively, if you are using an advanced setup workflow, you can set
the <code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable
to <code>true</code> in your workflow.</li>
<li><strong>User-owned repositories using default setup:</strong> Switch
to an advanced setup workflow and set the
<code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable to
<code>true</code> in your workflow.</li>
<li><strong>User-owned repositories using advanced setup:</strong> Set
the <code>CODEQL_ACTION_FILE_COVERAGE_ON_PRS</code> environment variable
to <code>true</code> in your workflow.</li>
</ul>
</li>
<li>
<p>Fixed <a
href="https://redirect.github.com/github/codeql-action/issues/3555">a
bug</a> which caused the CodeQL Action to fail loading repository
properties if a &quot;Multi select&quot; repository property was
configured for the repository. <a
href="https://redirect.github.com/github/codeql-action/pull/3557">#3557</a></p>
</li>
<li>
<p>The CodeQL Action now loads <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">custom
repository properties</a> on GitHub Enterprise Server, enabling the
customization of features such as
<code>github-codeql-disable-overlay</code> that were previously only
available on GitHub.com. <a
href="https://redirect.github.com/github/codeql-action/pull/3559">#3559</a></p>
</li>
<li>
<p>Once <a
href="https://docs.github.com/en/code-security/how-tos/secure-at-scale/configure-organization-security/manage-usage-and-access/giving-org-access-private-registries">private
package registries</a> can be configured with OIDC-based authentication
for organizations, the CodeQL Action will now be able to accept such
configurations. <a
href="https://redirect.github.com/github/codeql-action/pull/3563">#3563</a></p>
</li>
<li>
<p>Fixed the retry mechanism for database uploads. Previously this would
fail with the error &quot;Response body object should not be disturbed
or locked&quot;. <a
href="https://redirect.github.com/github/codeql-action/pull/3564">#3564</a></p>
</li>
<li>
<p>A warning is now emitted if the CodeQL Action detects a repository
property whose name suggests that it relates to the CodeQL Action, but
which is not one of the properties recognised by the current version of
the CodeQL Action. <a
href="https://redirect.github.com/github/codeql-action/pull/3570">#3570</a></p>
</li>
</ul>
<h2>4.32.6 - 05 Mar 2026</h2>
<ul>
<li>Update default CodeQL bundle version to <a
href="https://github.com/github/codeql-action/releases/tag/codeql-bundle-v2.24.3">2.24.3</a>.
<a
href="https://redirect.github.com/github/codeql-action/pull/3548">#3548</a></li>
</ul>
<h2>4.32.5 - 02 Mar 2026</h2>
<ul>
<li>Repositories owned by an organization can now set up the
<code>github-codeql-disable-overlay</code> custom repository property to
disable <a
href="https://redirect.github.com/github/roadmap/issues/1158">improved
incremental analysis for CodeQL</a>. First, create a custom repository
property with the name <code>github-codeql-disable-overlay</code> and
the type &quot;True/false&quot; in the organization's settings. Then in
the repository's settings, set this property to <code>true</code> to
disable improved incremental analysis. For more information, see <a
href="https://docs.github.com/en/organizations/managing-organization-settings/managing-custom-properties-for-repositories-in-your-organization">Managing
custom properties for repositories in your organization</a>. This
feature is not yet available on GitHub Enterprise Server. <a
href="https://redirect.github.com/github/codeql-action/pull/3507">#3507</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/github/codeql-action/commit/c10b8064de6f491fea524254123dbe5e09572f13"><code>c10b806</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3782">#3782</a>
from github/update-v4.35.1-d6d1743b8</li>
<li><a
href="https://github.com/github/codeql-action/commit/c5ffd0683786820677d054e3505e1c5bb4b8c227"><code>c5ffd06</code></a>
Update changelog for v4.35.1</li>
<li><a
href="https://github.com/github/codeql-action/commit/d6d1743b8ec7ecd94f78ad1ce4cb3d8d2ba58001"><code>d6d1743</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3781">#3781</a>
from github/henrymercer/update-git-minimum-version</li>
<li><a
href="https://github.com/github/codeql-action/commit/65d2efa7333ad65f97cc54be40f4cd18630f884c"><code>65d2efa</code></a>
Add changelog note</li>
<li><a
href="https://github.com/github/codeql-action/commit/2437b20ab31021229573a66717323dd5c6ce9319"><code>2437b20</code></a>
Update minimum git version for overlay to 2.36.0</li>
<li><a
href="https://github.com/github/codeql-action/commit/ea5f71947c021286c99f61cc426a10d715fe4434"><code>ea5f719</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3775">#3775</a>
from github/dependabot/npm_and_yarn/node-forge-1.4.0</li>
<li><a
href="https://github.com/github/codeql-action/commit/45ceeea896ba2293e10982f871198d1950ee13d6"><code>45ceeea</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3777">#3777</a>
from github/mergeback/v4.35.0-to-main-b8bb9f28</li>
<li><a
href="https://github.com/github/codeql-action/commit/24448c98434f429f901d27db7ddae55eec5cc1c4"><code>24448c9</code></a>
Rebuild</li>
<li><a
href="https://github.com/github/codeql-action/commit/7c510606312e5c68ac8b27c009e5254f226f5dfa"><code>7c51060</code></a>
Update changelog and version after v4.35.0</li>
<li><a
href="https://github.com/github/codeql-action/commit/b8bb9f28b8d3f992092362369c57161b755dea45"><code>b8bb9f2</code></a>
Merge pull request <a
href="https://redirect.github.com/github/codeql-action/issues/3776">#3776</a>
from github/update-v4.35.0-0078ad667</li>
<li>Additional commits viewable in <a
href="https://github.com/github/codeql-action/compare/5d4e8d1aca955e8d8589aabd499c5cae939e33c7...c10b8064de6f491fea524254123dbe5e09572f13">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the specified ignore condition for that dependency


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 11:24:29 +00:00
Danielle Maywood aede045549 chore: bump @biomejs/biome from 2.2 to 2.4.10 (#24074) 2026-04-07 12:22:18 +01:00
dependabot[bot] 2ea08aa168 chore: bump github.com/gohugoio/hugo from 0.159.2 to 0.160.0 (#24081)
Bumps [github.com/gohugoio/hugo](https://github.com/gohugoio/hugo) from
0.159.2 to 0.160.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/gohugoio/hugo/releases">github.com/gohugoio/hugo's
releases</a>.</em></p>
<blockquote>
<h2>v0.160.0</h2>
<p>Now you can inject <a
href="https://gohugo.io/functions/css/build/#vars">CSS vars</a>, e.g.
from the configuration, into your stylesheets when building with <a
href="https://gohugo.io/functions/css/build/">css.Build</a>. Also, all
the render hooks now have a <a
href="https://gohugo.io/render-hooks/links/#position">.Position</a>
method, which is now more accurate and effective.</p>
<h2>Bug fixes</h2>
<ul>
<li>Fix some recently introduced Position issues 4e91e14c <a
href="https://github.com/bep"><code>@​bep</code></a> <a
href="https://redirect.github.com/gohugoio/hugo/issues/14710">#14710</a></li>
<li>markup/goldmark: Fix double-escaping of ampersands in link URLs
dc9b51d2 <a href="https://github.com/bep"><code>@​bep</code></a> <a
href="https://redirect.github.com/gohugoio/hugo/issues/14715">#14715</a></li>
<li>tpl: Fix stray quotes from partial decorator in script context
43aad711 <a href="https://github.com/bep"><code>@​bep</code></a> <a
href="https://redirect.github.com/gohugoio/hugo/issues/14711">#14711</a></li>
</ul>
<h2>Improvements</h2>
<ul>
<li>all: Replace NewIntegrationTestBuilder with Test/TestE/TestRunning
481baa08 <a href="https://github.com/bep"><code>@​bep</code></a></li>
<li>tpl/css: Support <code>@import</code>
&quot;hugo:vars&quot; for CSS custom properties in css.Build 5d09b5e3 <a
href="https://github.com/bep"><code>@​bep</code></a> <a
href="https://redirect.github.com/gohugoio/hugo/issues/14699">#14699</a></li>
<li>Improve and extend .Position handling in Goldmark render hooks
303e443e <a href="https://github.com/bep"><code>@​bep</code></a> <a
href="https://redirect.github.com/gohugoio/hugo/issues/14663">#14663</a></li>
<li>markup/goldmark: Clean up test 638262ce <a
href="https://github.com/bep"><code>@​bep</code></a></li>
</ul>
<h2>Dependency Updates</h2>
<ul>
<li>build(deps): bump github.com/magefile/mage from 1.16.1 to 1.17.1
bf6e35a7 <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]</li>
<li>build(deps): bump github.com/go-jose/go-jose/v4 from 4.1.3 to 4.1.4
0eda24e6 <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]</li>
<li>build(deps): bump golang.org/x/image from 0.37.0 to 0.38.0 beb57a68
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]</li>
</ul>
<h2>Documentation</h2>
<ul>
<li>readme: Revise edition descriptions and installation instructions
9f1f1be0 <a
href="https://github.com/jmooring"><code>@​jmooring</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/gohugoio/hugo/commit/652fc5acddf94e0501f778e196a8b630566b39ad"><code>652fc5a</code></a>
releaser: Bump versions for release of 0.160.0</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/bf6e35a7557bb31b0e38b29eb10b94e03afa0d8a"><code>bf6e35a</code></a>
build(deps): bump github.com/magefile/mage from 1.16.1 to 1.17.1</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/4e91e14cb0152f6e6bd216c0cd2f0913e6e17325"><code>4e91e14</code></a>
Fix some recently introduced Position issues</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/dc9b51d2e2fa1bfc2b7c68c01417bb7ae2c9c6a2"><code>dc9b51d</code></a>
markup/goldmark: Fix double-escaping of ampersands in link URLs</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/481baa08968e29e2a2771e9d6022c9f995b2fc11"><code>481baa0</code></a>
all: Replace NewIntegrationTestBuilder with Test/TestE/TestRunning</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/43aad7118da6f8365d9cdb4aaada1878ce68fb98"><code>43aad71</code></a>
tpl: Fix stray quotes from partial decorator in script context</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/9f1f1be0be2e5b8280e16df647d838c538edb9c2"><code>9f1f1be</code></a>
readme: Revise edition descriptions and installation instructions</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/0eda24e65fdde77878a17d9583c5f2bce4f3d437"><code>0eda24e</code></a>
build(deps): bump github.com/go-jose/go-jose/v4 from 4.1.3 to 4.1.4</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/5d09b5e32a4d0e9b3fe8797c91804f6a7804bb5a"><code>5d09b5e</code></a>
tpl/css: Support <code>@import</code>
&quot;hugo:vars&quot; for CSS custom properties in css.Build</li>
<li><a
href="https://github.com/gohugoio/hugo/commit/303e443ea7ba5c22dc5d2b5df5d7c5392b0dcc3a"><code>303e443</code></a>
Improve and extend .Position handling in Goldmark render hooks</li>
<li>Additional commits viewable in <a
href="https://github.com/gohugoio/hugo/compare/v0.159.2...v0.160.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/gohugoio/hugo&package-manager=go_modules&previous-version=0.159.2&new-version=0.160.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 11:17:57 +00:00
dependabot[bot] d4b9248202 chore: bump github.com/valyala/fasthttp from 1.69.0 to 1.70.0 (#24080)
Bumps [github.com/valyala/fasthttp](https://github.com/valyala/fasthttp)
from 1.69.0 to 1.70.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/valyala/fasthttp/releases">github.com/valyala/fasthttp's
releases</a>.</em></p>
<blockquote>
<h2>v1.70.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Go 1.26 and golangci-lint updates by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2146">valyala/fasthttp#2146</a></li>
<li>Add WithLimit methods for uncompression by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2147">valyala/fasthttp#2147</a></li>
<li>Honor Root for fs.FS and normalize fs-style roots by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2145">valyala/fasthttp#2145</a></li>
<li>Sanitize header values in all setter paths to prevent CRLF injection
by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2162">valyala/fasthttp#2162</a></li>
<li>Add ServeFileLiteral, ServeFSLiteral and SendFileLiteral by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2163">valyala/fasthttp#2163</a></li>
<li>Prevent chunk extension request smuggling by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2165">valyala/fasthttp#2165</a></li>
<li>Validate request URI format during header parsing to reject
malformed requests by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2168">valyala/fasthttp#2168</a></li>
<li>HTTP1/1 requires exactly one Host header by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2164">valyala/fasthttp#2164</a></li>
<li>Strict HTTP version validation and simplified first line parsing by
<a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2167">valyala/fasthttp#2167</a></li>
<li>Only normalize pre-colon whitespace for HTTP headers by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2172">valyala/fasthttp#2172</a></li>
<li>fs: reject '..' path segments in rewritten paths by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2173">valyala/fasthttp#2173</a></li>
<li>fasthttpproxy: reject CRLF in HTTP proxy CONNECT target by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2174">valyala/fasthttp#2174</a></li>
<li>fasthttpproxy: scope proxy auth cache to GetDialFunc by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2144">valyala/fasthttp#2144</a></li>
<li>feat: enhance performance by <a
href="https://github.com/ReneWerner87"><code>@​ReneWerner87</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2135">valyala/fasthttp#2135</a></li>
<li>export ErrConnectionClosed by <a
href="https://github.com/pjebs"><code>@​pjebs</code></a> in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2152">valyala/fasthttp#2152</a></li>
<li>fix: detect master process death in prefork children by <a
href="https://github.com/meruiden"><code>@​meruiden</code></a> in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2158">valyala/fasthttp#2158</a></li>
<li>return prev values by <a
href="https://github.com/pjebs"><code>@​pjebs</code></a> in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2123">valyala/fasthttp#2123</a></li>
<li>docs: added httpgo to related projects by <a
href="https://github.com/MUlt1mate"><code>@​MUlt1mate</code></a> in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2169">valyala/fasthttp#2169</a></li>
<li>chore(deps): bump actions/upload-artifact from 6 to 7 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2149">valyala/fasthttp#2149</a></li>
<li>chore(deps): bump github.com/andybalholm/brotli from 1.2.0 to 1.2.1
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2170">valyala/fasthttp#2170</a></li>
<li>chore(deps): bump github.com/klauspost/compress from 1.18.2 to
1.18.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2129">valyala/fasthttp#2129</a></li>
<li>chore(deps): bump github.com/klauspost/compress from 1.18.3 to
1.18.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2140">valyala/fasthttp#2140</a></li>
<li>chore(deps): bump github.com/klauspost/compress from 1.18.4 to
1.18.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2166">valyala/fasthttp#2166</a></li>
<li>chore(deps): bump golang.org/x/crypto from 0.47.0 to 0.48.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2139">valyala/fasthttp#2139</a></li>
<li>chore(deps): bump golang.org/x/net from 0.48.0 to 0.49.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2128">valyala/fasthttp#2128</a></li>
<li>chore(deps): bump golang.org/x/net from 0.49.0 to 0.50.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2138">valyala/fasthttp#2138</a></li>
<li>chore(deps): bump golang.org/x/sys from 0.39.0 to 0.40.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2125">valyala/fasthttp#2125</a></li>
<li>chore(deps): bump golang.org/x/sys from 0.40.0 to 0.41.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2137">valyala/fasthttp#2137</a></li>
<li>chore(deps): bump securego/gosec from 2.22.11 to 2.23.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2142">valyala/fasthttp#2142</a></li>
<li>Update securego/gosec from 2.23.0 to 2.25.0 by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2161">valyala/fasthttp#2161</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/MUlt1mate"><code>@​MUlt1mate</code></a>
made their first contribution in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2169">valyala/fasthttp#2169</a></li>
<li><a href="https://github.com/meruiden"><code>@​meruiden</code></a>
made their first contribution in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2158">valyala/fasthttp#2158</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/valyala/fasthttp/compare/v1.69.0...v1.70.0">https://github.com/valyala/fasthttp/compare/v1.69.0...v1.70.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/valyala/fasthttp/commit/534461ad123bfbcc1190d29cb3553a19b72d2845"><code>534461a</code></a>
fasthttpproxy: reject CRLF in HTTP proxy CONNECT target (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2174">#2174</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/267e740f5657cb606d35de3ca54df55b2625508c"><code>267e740</code></a>
fs: reject '..' path segments in rewritten paths (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2173">#2173</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/a95a1ad11ceeb1726740070ab464b8d22d3278d8"><code>a95a1ad</code></a>
Only normalize pre-colon whitespace for HTTP headers (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2172">#2172</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/ab8c2aceea3da871f9f901e595425fd144d1790f"><code>ab8c2ac</code></a>
fix: detect master process death in prefork children (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2158">#2158</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/c4569c5fbb7b0142cb2607dbb170f6efcec96894"><code>c4569c5</code></a>
feat: enhance performance (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2135">#2135</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/beab280ed3f7be24111fe5b452564be647370ee7"><code>beab280</code></a>
chore(deps): bump github.com/andybalholm/brotli from 1.2.0 to 1.2.1 (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2170">#2170</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/82254a7addc61a494b6a504fb0c65871a9c0444f"><code>82254a7</code></a>
Normalize framing header names with pre-colon whitespace</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/611132707f1d75db30a7f3347092e36bcd87094e"><code>6111327</code></a>
Strict HTTP version validation and simplified first line parsing (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2167">#2167</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/eb38f5fc140be062aa5acbbeb97571e538a4e781"><code>eb38f5f</code></a>
HTTP1/1 requires exactly one Host header (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2164">#2164</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/7d90713bda6f90f398f42dced466942912b44fd6"><code>7d90713</code></a>
Validate request URI format during header parsing to reject malformed
request...</li>
<li>Additional commits viewable in <a
href="https://github.com/valyala/fasthttp/compare/v1.69.0...v1.70.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/valyala/fasthttp&package-manager=go_modules&previous-version=1.69.0&new-version=1.70.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 11:17:34 +00:00
dependabot[bot] fd6c623560 chore: bump google.golang.org/api from 0.273.0 to 0.274.0 (#24079)
Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.273.0 to 0.274.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/releases">google.golang.org/api's
releases</a>.</em></p>
<blockquote>
<h2>v0.274.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.273.1...v0.274.0">0.274.0</a>
(2026-04-02)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3555">#3555</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/0e634ae13e626c6082c534eda8c03d5d3e673605">0e634ae</a>)</li>
</ul>
<h2>v0.273.1</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.273.0...v0.273.1">0.273.1</a>
(2026-03-31)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Merge duplicate x-goog-request-params header (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3547">#3547</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/2008108eb50215407a945afc2db9c45998c42bbe">2008108</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md">google.golang.org/api's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.273.1...v0.274.0">0.274.0</a>
(2026-04-02)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3555">#3555</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/0e634ae13e626c6082c534eda8c03d5d3e673605">0e634ae</a>)</li>
</ul>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.273.0...v0.273.1">0.273.1</a>
(2026-03-31)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Merge duplicate x-goog-request-params header (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3547">#3547</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/2008108eb50215407a945afc2db9c45998c42bbe">2008108</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/6c759a2bb66da9db49027475e4e76301b8d063df"><code>6c759a2</code></a>
chore(main): release 0.274.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3556">#3556</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/0e634ae13e626c6082c534eda8c03d5d3e673605"><code>0e634ae</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3555">#3555</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/0f75259689c5e80bd73e6e7018dbb9ec0dfd7d48"><code>0f75259</code></a>
chore: embargo aiplatform:v1beta1 temporarily (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3554">#3554</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/550f00c8f854c300c59f266cc0ddd60568ccfe20"><code>550f00c</code></a>
chore(main): release 0.273.1 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3551">#3551</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/da01f6aec8d3dd7914c6be434ce3bf26c1903396"><code>da01f6a</code></a>
chore(deps): bump github.com/go-git/go-git/v5 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3552">#3552</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/2008108eb50215407a945afc2db9c45998c42bbe"><code>2008108</code></a>
fix: merge duplicate x-goog-request-params header (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3547">#3547</a>)</li>
<li>See full diff in <a
href="https://github.com/googleapis/google-api-go-client/compare/v0.273.0...v0.274.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/api&package-manager=go_modules&previous-version=0.273.0&new-version=0.274.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 11:16:33 +00:00
dependabot[bot] 99da498679 chore: bump rust from 1d0000a to a08d20a in /dogfood/coder (#24083)
Bumps rust from `1d0000a` to `a08d20a`.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rust&package-manager=docker&previous-version=slim&new-version=slim)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 11:10:47 +00:00
dependabot[bot] a20b817c28 chore: bump ubuntu from 5e5b128 to eb29ed2 in /dogfood/coder (#24082)
Bumps ubuntu from `5e5b128` to `eb29ed2`.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ubuntu&package-manager=docker&previous-version=jammy&new-version=jammy)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 11:10:33 +00:00
Cian Johnston d5a1792f07 feat: track chat file associations with chat_file_links on chats (#23537)
Needed by #23833

Adds a `chat_file_links` association table to track which files are
associated with each chat.

- `AppendChatFileIDs` query links a file to a chat with deduplication
- `GetChatFileMetadataByIDs` query returns lightweight file metadata by
IDs
- Tool-created files (e.g. `propose_plan`) are linked to the chat after
insert
- User-uploaded files are linked to the chat when the referencing
message is sent
- Single-chat GET endpoint hydrates `files: ChatFileMetadata[]` on the
response

> 🤖 Created by Coder Agents and massaged into shape by a human.
2026-04-07 12:05:29 +01:00
Danielle Maywood beb99c17de fix(site): prevent chat messages from disappearing and duplicating (#23995) 2026-04-07 11:05:40 +01:00
Danielle Maywood 8913f9f5c1 fix(site): remove non-null assertion on optional chain in ExternalAuthPage (#24073) 2026-04-07 10:41:39 +01:00
Kyle Carberry acd5f01b4b fix: use GreaterOrEqual for step runtime assertion in chatloop test (#24067)
Fixes https://github.com/coder/internal/issues/1418

The `TestRun_ActiveToolsPrepareBehavior` test asserts
`persistedStep.Runtime > 0`, but on Windows the timer resolution (~15ms)
means the in-memory mock model can complete within the same clock tick,
producing a measured duration of `0s`.

Change the assertion from `require.Greater` to `require.GreaterOrEqual`
so that a legitimately measured zero duration on low-resolution clocks
does not cause a flake.

> Generated by Coder Agents
2026-04-07 02:08:49 +00:00
Kyle Carberry 6c62d8f5e6 fix(coderd/x/chatd): fix flaky TestAwaitSubagentCompletion/CompletesViaPubsub (#24066)
## Fix flaky TestAwaitSubagentCompletion/CompletesViaPubsub

Fixes coder/internal#1435

### Root Cause

During `createParentChildChats`, the processor publishes notifications
on `ChatStreamNotifyChannel(child.ID)` via PostgreSQL `LISTEN/NOTIFY`.
After `drainInflight()` returns, these stale notifications can still be
buffered in the pgListener's `NotifyChan()`. When
`awaitSubagentCompletion` subscribes and a stale notification is
dispatched between `setChatStatus(Waiting)` and
`insertAssistantMessage`, `checkSubagentCompletion` sees `done=true`
(status is `Waiting`) but returns an empty report because the message
hasn't been committed yet.

### Fix

Swap the order: insert the assistant message **before** transitioning
the status to `Waiting`. This guarantees the report is committed before
the status makes the chat appear complete to `checkSubagentCompletion`.

### Verification

- 50 consecutive runs of the specific test: all pass
- 10 runs of the full `TestAwaitSubagentCompletion` suite: all pass
- 20 runs with `-race`: all pass

> Generated by Coder Agents
2026-04-07 02:04:48 +00:00
dependabot[bot] 5000f15021 chore: bump the coder-modules group across 2 directories with 1 update (#24061)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 00:36:48 +00:00
dependabot[bot] 44be5a0d1e chore: update kreuzwerker/docker requirement from ~> 3.6 to ~> 4.0 in /dogfood/coder (#24062)
Updates the requirements on
[kreuzwerker/docker](https://github.com/kreuzwerker/terraform-provider-docker)
to permit the latest version.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/kreuzwerker/terraform-provider-docker/releases">kreuzwerker/docker's
releases</a>.</em></p>
<blockquote>
<h1>v4.0.0</h1>
<p><strong>Please read <a
href="https://github.com/kreuzwerker/terraform-provider-docker/blob/master/docs/v3_v4_migration.md">https://github.com/kreuzwerker/terraform-provider-docker/blob/master/docs/v3_v4_migration.md</a></strong></p>
<p>This is a major release with potential breaking changes. For most
users, however, no changes to terraform code are needed.</p>
<h2>What's Changed</h2>
<h3>New Features</h3>
<ul>
<li>feat: Add muxing to introduce new plugin framework by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/838">kreuzwerker/terraform-provider-docker#838</a></li>
<li>Feature: Multiple enhancements by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/854">kreuzwerker/terraform-provider-docker#854</a></li>
<li>Feat: Make buildx builder default by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/855">kreuzwerker/terraform-provider-docker#855</a></li>
<li>Feature: Add new docker container attributes by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/857">kreuzwerker/terraform-provider-docker#857</a></li>
<li>feat: add selinux_relabel attribute to docker_container volumes by
<a href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/883">kreuzwerker/terraform-provider-docker#883</a></li>
<li>feat: Add CDI device support by <a
href="https://github.com/jdon"><code>@​jdon</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/762">kreuzwerker/terraform-provider-docker#762</a></li>
<li>feat: Implement proper parsing of GPU device requests when using
gpus… by <a href="https://github.com/Junkern"><code>@​Junkern</code></a>
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/881">kreuzwerker/terraform-provider-docker#881</a></li>
</ul>
<h3>Fixes</h3>
<ul>
<li>fix(deps): update module golang.org/x/sync to v0.19.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/828">kreuzwerker/terraform-provider-docker#828</a></li>
<li>fix(deps): update module github.com/hashicorp/terraform-plugin-log
to v0.10.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/823">kreuzwerker/terraform-provider-docker#823</a></li>
<li>fix(deps): update module github.com/morikuni/aec to v1.1.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/829">kreuzwerker/terraform-provider-docker#829</a></li>
<li>fix(deps): update module google.golang.org/protobuf to v1.36.11 by
<a href="https://github.com/renovate"><code>@​renovate</code></a>[bot]
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/830">kreuzwerker/terraform-provider-docker#830</a></li>
<li>fix(deps): update module github.com/sirupsen/logrus to v1.9.4 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/836">kreuzwerker/terraform-provider-docker#836</a></li>
<li>chore: Add deprecation for docker_service.networks_advanced.name by
<a href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/837">kreuzwerker/terraform-provider-docker#837</a></li>
<li>fix: Refactor docker container state handling to properly restart
whe… by <a href="https://github.com/Junkern"><code>@​Junkern</code></a>
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/841">kreuzwerker/terraform-provider-docker#841</a></li>
<li>fix: docker container stopped ports by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/842">kreuzwerker/terraform-provider-docker#842</a></li>
<li>fix: correctly set docker_container devices by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/843">kreuzwerker/terraform-provider-docker#843</a></li>
<li>fix(deps): update module github.com/katbyte/terrafmt to v0.5.6 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/844">kreuzwerker/terraform-provider-docker#844</a></li>
<li>fix(deps): update module
github.com/hashicorp/terraform-plugin-sdk/v2 to v2.38.2 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/847">kreuzwerker/terraform-provider-docker#847</a></li>
<li>fix: Use DOCKER_CONFIG env same way as with docker cli by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/849">kreuzwerker/terraform-provider-docker#849</a></li>
<li>Fix: calculation of Dockerfile path in docker_image build by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/853">kreuzwerker/terraform-provider-docker#853</a></li>
<li>chore(deps): update actions/checkout action to v6 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/825">kreuzwerker/terraform-provider-docker#825</a></li>
<li>chore(deps): update hashicorp/setup-terraform action to v4 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/860">kreuzwerker/terraform-provider-docker#860</a></li>
<li>fix(deps): update module github.com/hashicorp/terraform-plugin-go to
v0.30.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/861">kreuzwerker/terraform-provider-docker#861</a></li>
<li>fix(deps): update module
github.com/hashicorp/terraform-plugin-framework to v1.18.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/862">kreuzwerker/terraform-provider-docker#862</a></li>
<li>fix(deps): update module github.com/hashicorp/terraform-plugin-mux
to v0.22.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/863">kreuzwerker/terraform-provider-docker#863</a></li>
<li>fix(deps): update module
github.com/hashicorp/terraform-plugin-sdk/v2 to v2.39.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/864">kreuzwerker/terraform-provider-docker#864</a></li>
<li>chore(deps): update docker/setup-docker-action action to v5 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/866">kreuzwerker/terraform-provider-docker#866</a></li>
<li>chore(deps): update dependency golangci/golangci-lint to v2.10.1 by
<a href="https://github.com/renovate"><code>@​renovate</code></a>[bot]
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/869">kreuzwerker/terraform-provider-docker#869</a></li>
<li>fix(deps): update module golang.org/x/sync to v0.20.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/872">kreuzwerker/terraform-provider-docker#872</a></li>
<li>Prevent <code>docker_registry_image</code> panic on registries
returning nil body without digest header by <a
href="https://github.com/Copilot"><code>@​Copilot</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/880">kreuzwerker/terraform-provider-docker#880</a></li>
<li>fix: Handle size_bytes in tmpfs_options in docker_service by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/882">kreuzwerker/terraform-provider-docker#882</a></li>
<li>chore(deps): update dependency golangci/golangci-lint to v2.11.4 by
<a href="https://github.com/renovate"><code>@​renovate</code></a>[bot]
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/871">kreuzwerker/terraform-provider-docker#871</a></li>
<li>fix: tests for healthcheck is not required for docker container
resource by <a
href="https://github.com/vnghia"><code>@​vnghia</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/834">kreuzwerker/terraform-provider-docker#834</a></li>
<li>chore: Prepare 4.0.0 release by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/884">kreuzwerker/terraform-provider-docker#884</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/kreuzwerker/terraform-provider-docker/blob/master/CHANGELOG.md">kreuzwerker/docker's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/kreuzwerker/terraform-provider-docker/compare/v3.9.0...v4.0.0">v4.0.0</a>
(2026-04-03)</h2>
<h3>Chore</h3>
<ul>
<li>Add deprecation for docker_service.networks_advanced.name (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/837">#837</a>)</li>
</ul>
<h3>Feat</h3>
<ul>
<li>add selinux_relabel attribute to docker_container volumes (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/883">#883</a>)</li>
<li>Implement proper parsing of GPU device requests when using gpus… (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/881">#881</a>)</li>
<li>Add CDI device support (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/762">#762</a>)</li>
<li>Add muxing to introduce new plugin framework (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/838">#838</a>)</li>
</ul>
<h3>Feat</h3>
<ul>
<li>Make buildx builder default (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/855">#855</a>)</li>
</ul>
<h3>Feature</h3>
<ul>
<li>Add new docker container attributes (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/857">#857</a>)</li>
<li>Multiple enhancements (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/854">#854</a>)</li>
</ul>
<h3>Fix</h3>
<ul>
<li>tests for healthcheck is not required for docker container resource
(<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/834">#834</a>)</li>
<li>Handle size_bytes in tmpfs_options in docker_service (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/882">#882</a>)</li>
<li>Use DOCKER_CONFIG env same way as with docker cli (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/849">#849</a>)</li>
<li>correctly set docker_container devices (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/843">#843</a>)</li>
<li>docker container stopped ports (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/842">#842</a>)</li>
<li>Refactor docker container state handling to properly restart when
exited (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/841">#841</a>)</li>
</ul>
<h3>Fix</h3>
<ul>
<li>calculation of Dockerfile path in docker_image build (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/853">#853</a>)</li>
</ul>
<p><!-- raw HTML omitted --><!-- raw HTML omitted --></p>
<h2><a
href="https://github.com/kreuzwerker/terraform-provider-docker/compare/v3.8.0...v3.9.0">v3.9.0</a>
(2025-11-09)</h2>
<h3>Chore</h3>
<ul>
<li>Prepare release v3.9.0 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/821">#821</a>)</li>
<li>Add file requested by hashicorp (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/813">#813</a>)</li>
<li>Prepare release v3.8.0 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/806">#806</a>)</li>
</ul>
<h3>Feat</h3>
<ul>
<li>Implement caching of docker provider (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/808">#808</a>)</li>
</ul>
<h3>Fix</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/b7296b7ec5af2f1c7516077d7056d563a1da774e"><code>b7296b7</code></a>
chore: Prepare 4.0.0 release (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/884">#884</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/b25e44ac7b3ede532d307fc6abe6daf39c7d6d56"><code>b25e44a</code></a>
feat: add selinux_relabel attribute to docker_container volumes (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/883">#883</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/83b9e13b64fb78923ef88a8baeeece4611f61930"><code>83b9e13</code></a>
fix: tests for healthcheck is not required for docker container resource
(<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/834">#834</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/5f4cbc5673699b01c31801ba6154e9f1243a6af0"><code>5f4cbc5</code></a>
chore(deps): update dependency golangci/golangci-lint to v2.11.4 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/871">#871</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/83a89ad5a139bb9bffe11cef3b14b98f28109b36"><code>83a89ad</code></a>
fix: Handle size_bytes in tmpfs_options in docker_service (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/882">#882</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/57d8be485145db54678b2773d38f1dd7c9927cda"><code>57d8be4</code></a>
feat: Implement proper parsing of GPU device requests when using gpus…
(<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/881">#881</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/e63d18d450f11e3293fa14b52cb20ee3f11b2cba"><code>e63d18d</code></a>
Prevent <code>docker_registry_image</code> panic on registries returning
nil body withou...</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/8bac991400ae971425d61be5c6e442a1b3f8515c"><code>8bac991</code></a>
feat: Add CDI device support (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/762">#762</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/5c3c660fb54e52ccfd82b76ceb685bc82aed7885"><code>5c3c660</code></a>
fix(deps): update module golang.org/x/sync to v0.20.0 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/872">#872</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/75cba1d6ef1b76777443035f0f96c19b5c974553"><code>75cba1d</code></a>
chore(deps): update dependency golangci/golangci-lint to v2.10.1 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/869">#869</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/kreuzwerker/terraform-provider-docker/compare/v3.6.0...v4.0.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 00:29:03 +00:00
dependabot[bot] 3ca2aae9ca chore: update kreuzwerker/docker requirement from ~> 3.0 to ~> 4.0 in /dogfood/coder-envbuilder (#24063)
Updates the requirements on
[kreuzwerker/docker](https://github.com/kreuzwerker/terraform-provider-docker)
to permit the latest version.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/kreuzwerker/terraform-provider-docker/releases">kreuzwerker/docker's
releases</a>.</em></p>
<blockquote>
<h1>v4.0.0</h1>
<p><strong>Please read <a
href="https://github.com/kreuzwerker/terraform-provider-docker/blob/master/docs/v3_v4_migration.md">https://github.com/kreuzwerker/terraform-provider-docker/blob/master/docs/v3_v4_migration.md</a></strong></p>
<p>This is a major release with potential breaking changes. For most
users, however, no changes to terraform code are needed.</p>
<h2>What's Changed</h2>
<h3>New Features</h3>
<ul>
<li>feat: Add muxing to introduce new plugin framework by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/838">kreuzwerker/terraform-provider-docker#838</a></li>
<li>Feature: Multiple enhancements by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/854">kreuzwerker/terraform-provider-docker#854</a></li>
<li>Feat: Make buildx builder default by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/855">kreuzwerker/terraform-provider-docker#855</a></li>
<li>Feature: Add new docker container attributes by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/857">kreuzwerker/terraform-provider-docker#857</a></li>
<li>feat: add selinux_relabel attribute to docker_container volumes by
<a href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/883">kreuzwerker/terraform-provider-docker#883</a></li>
<li>feat: Add CDI device support by <a
href="https://github.com/jdon"><code>@​jdon</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/762">kreuzwerker/terraform-provider-docker#762</a></li>
<li>feat: Implement proper parsing of GPU device requests when using
gpus… by <a href="https://github.com/Junkern"><code>@​Junkern</code></a>
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/881">kreuzwerker/terraform-provider-docker#881</a></li>
</ul>
<h3>Fixes</h3>
<ul>
<li>fix(deps): update module golang.org/x/sync to v0.19.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/828">kreuzwerker/terraform-provider-docker#828</a></li>
<li>fix(deps): update module github.com/hashicorp/terraform-plugin-log
to v0.10.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/823">kreuzwerker/terraform-provider-docker#823</a></li>
<li>fix(deps): update module github.com/morikuni/aec to v1.1.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/829">kreuzwerker/terraform-provider-docker#829</a></li>
<li>fix(deps): update module google.golang.org/protobuf to v1.36.11 by
<a href="https://github.com/renovate"><code>@​renovate</code></a>[bot]
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/830">kreuzwerker/terraform-provider-docker#830</a></li>
<li>fix(deps): update module github.com/sirupsen/logrus to v1.9.4 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/836">kreuzwerker/terraform-provider-docker#836</a></li>
<li>chore: Add deprecation for docker_service.networks_advanced.name by
<a href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/837">kreuzwerker/terraform-provider-docker#837</a></li>
<li>fix: Refactor docker container state handling to properly restart
whe… by <a href="https://github.com/Junkern"><code>@​Junkern</code></a>
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/841">kreuzwerker/terraform-provider-docker#841</a></li>
<li>fix: docker container stopped ports by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/842">kreuzwerker/terraform-provider-docker#842</a></li>
<li>fix: correctly set docker_container devices by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/843">kreuzwerker/terraform-provider-docker#843</a></li>
<li>fix(deps): update module github.com/katbyte/terrafmt to v0.5.6 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/844">kreuzwerker/terraform-provider-docker#844</a></li>
<li>fix(deps): update module
github.com/hashicorp/terraform-plugin-sdk/v2 to v2.38.2 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/847">kreuzwerker/terraform-provider-docker#847</a></li>
<li>fix: Use DOCKER_CONFIG env same way as with docker cli by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/849">kreuzwerker/terraform-provider-docker#849</a></li>
<li>Fix: calculation of Dockerfile path in docker_image build by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/853">kreuzwerker/terraform-provider-docker#853</a></li>
<li>chore(deps): update actions/checkout action to v6 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/825">kreuzwerker/terraform-provider-docker#825</a></li>
<li>chore(deps): update hashicorp/setup-terraform action to v4 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/860">kreuzwerker/terraform-provider-docker#860</a></li>
<li>fix(deps): update module github.com/hashicorp/terraform-plugin-go to
v0.30.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/861">kreuzwerker/terraform-provider-docker#861</a></li>
<li>fix(deps): update module
github.com/hashicorp/terraform-plugin-framework to v1.18.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/862">kreuzwerker/terraform-provider-docker#862</a></li>
<li>fix(deps): update module github.com/hashicorp/terraform-plugin-mux
to v0.22.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/863">kreuzwerker/terraform-provider-docker#863</a></li>
<li>fix(deps): update module
github.com/hashicorp/terraform-plugin-sdk/v2 to v2.39.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/864">kreuzwerker/terraform-provider-docker#864</a></li>
<li>chore(deps): update docker/setup-docker-action action to v5 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/866">kreuzwerker/terraform-provider-docker#866</a></li>
<li>chore(deps): update dependency golangci/golangci-lint to v2.10.1 by
<a href="https://github.com/renovate"><code>@​renovate</code></a>[bot]
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/869">kreuzwerker/terraform-provider-docker#869</a></li>
<li>fix(deps): update module golang.org/x/sync to v0.20.0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/872">kreuzwerker/terraform-provider-docker#872</a></li>
<li>Prevent <code>docker_registry_image</code> panic on registries
returning nil body without digest header by <a
href="https://github.com/Copilot"><code>@​Copilot</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/880">kreuzwerker/terraform-provider-docker#880</a></li>
<li>fix: Handle size_bytes in tmpfs_options in docker_service by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/882">kreuzwerker/terraform-provider-docker#882</a></li>
<li>chore(deps): update dependency golangci/golangci-lint to v2.11.4 by
<a href="https://github.com/renovate"><code>@​renovate</code></a>[bot]
in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/871">kreuzwerker/terraform-provider-docker#871</a></li>
<li>fix: tests for healthcheck is not required for docker container
resource by <a
href="https://github.com/vnghia"><code>@​vnghia</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/834">kreuzwerker/terraform-provider-docker#834</a></li>
<li>chore: Prepare 4.0.0 release by <a
href="https://github.com/Junkern"><code>@​Junkern</code></a> in <a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/pull/884">kreuzwerker/terraform-provider-docker#884</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/kreuzwerker/terraform-provider-docker/blob/master/CHANGELOG.md">kreuzwerker/docker's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/kreuzwerker/terraform-provider-docker/compare/v3.9.0...v4.0.0">v4.0.0</a>
(2026-04-03)</h2>
<h3>Chore</h3>
<ul>
<li>Add deprecation for docker_service.networks_advanced.name (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/837">#837</a>)</li>
</ul>
<h3>Feat</h3>
<ul>
<li>add selinux_relabel attribute to docker_container volumes (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/883">#883</a>)</li>
<li>Implement proper parsing of GPU device requests when using gpus… (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/881">#881</a>)</li>
<li>Add CDI device support (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/762">#762</a>)</li>
<li>Add muxing to introduce new plugin framework (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/838">#838</a>)</li>
</ul>
<h3>Feat</h3>
<ul>
<li>Make buildx builder default (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/855">#855</a>)</li>
</ul>
<h3>Feature</h3>
<ul>
<li>Add new docker container attributes (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/857">#857</a>)</li>
<li>Multiple enhancements (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/854">#854</a>)</li>
</ul>
<h3>Fix</h3>
<ul>
<li>tests for healthcheck is not required for docker container resource
(<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/834">#834</a>)</li>
<li>Handle size_bytes in tmpfs_options in docker_service (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/882">#882</a>)</li>
<li>Use DOCKER_CONFIG env same way as with docker cli (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/849">#849</a>)</li>
<li>correctly set docker_container devices (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/843">#843</a>)</li>
<li>docker container stopped ports (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/842">#842</a>)</li>
<li>Refactor docker container state handling to properly restart when
exited (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/841">#841</a>)</li>
</ul>
<h3>Fix</h3>
<ul>
<li>calculation of Dockerfile path in docker_image build (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/853">#853</a>)</li>
</ul>
<p><!-- raw HTML omitted --><!-- raw HTML omitted --></p>
<h2><a
href="https://github.com/kreuzwerker/terraform-provider-docker/compare/v3.8.0...v3.9.0">v3.9.0</a>
(2025-11-09)</h2>
<h3>Chore</h3>
<ul>
<li>Prepare release v3.9.0 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/821">#821</a>)</li>
<li>Add file requested by hashicorp (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/813">#813</a>)</li>
<li>Prepare release v3.8.0 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/806">#806</a>)</li>
</ul>
<h3>Feat</h3>
<ul>
<li>Implement caching of docker provider (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/808">#808</a>)</li>
</ul>
<h3>Fix</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/b7296b7ec5af2f1c7516077d7056d563a1da774e"><code>b7296b7</code></a>
chore: Prepare 4.0.0 release (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/884">#884</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/b25e44ac7b3ede532d307fc6abe6daf39c7d6d56"><code>b25e44a</code></a>
feat: add selinux_relabel attribute to docker_container volumes (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/883">#883</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/83b9e13b64fb78923ef88a8baeeece4611f61930"><code>83b9e13</code></a>
fix: tests for healthcheck is not required for docker container resource
(<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/834">#834</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/5f4cbc5673699b01c31801ba6154e9f1243a6af0"><code>5f4cbc5</code></a>
chore(deps): update dependency golangci/golangci-lint to v2.11.4 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/871">#871</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/83a89ad5a139bb9bffe11cef3b14b98f28109b36"><code>83a89ad</code></a>
fix: Handle size_bytes in tmpfs_options in docker_service (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/882">#882</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/57d8be485145db54678b2773d38f1dd7c9927cda"><code>57d8be4</code></a>
feat: Implement proper parsing of GPU device requests when using gpus…
(<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/881">#881</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/e63d18d450f11e3293fa14b52cb20ee3f11b2cba"><code>e63d18d</code></a>
Prevent <code>docker_registry_image</code> panic on registries returning
nil body withou...</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/8bac991400ae971425d61be5c6e442a1b3f8515c"><code>8bac991</code></a>
feat: Add CDI device support (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/762">#762</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/5c3c660fb54e52ccfd82b76ceb685bc82aed7885"><code>5c3c660</code></a>
fix(deps): update module golang.org/x/sync to v0.20.0 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/872">#872</a>)</li>
<li><a
href="https://github.com/kreuzwerker/terraform-provider-docker/commit/75cba1d6ef1b76777443035f0f96c19b5c974553"><code>75cba1d</code></a>
chore(deps): update dependency golangci/golangci-lint to v2.10.1 (<a
href="https://redirect.github.com/kreuzwerker/terraform-provider-docker/issues/869">#869</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/kreuzwerker/terraform-provider-docker/compare/v3.0.0...v4.0.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 00:28:48 +00:00
david-fraley 01080302a5 feat: add onboarding info fields to first user setup (#23829)
Add optional demographic and newsletter preference fields to the first
user setup page, and redesign the setup form using non-MUI components

## New fields

**Newsletter preferences** (opt-in checkboxes):
- **Marketing updates** — product announcements, tips, best practices
- **Release & security updates** — new releases, patches, security
advisories

## Frontend redesign

Migrated the setup page from MUI to the shadcn/ui design system used
across the rest of the app:

- Replaced MUI `TextField`, `MenuItem`, `Checkbox`, `Autocomplete` with
`Input`, `Label`, `Select`, and `Checkbox` from `#/components`
- Switched from Emotion `css` props to Tailwind utility classes
- Left-aligned header, widened form container to 500px
- Updated copy: "30-day trial", "Learn more", "Help us make Coder
better"
- Side-by-side layouts for first/last name, phone/country
- Moved privacy policy text to always-visible onboarding section
- Removed "Number of developers" field from trial section

### Implementation notes

- The `onboarding_info` payload is fire-and-forget via
`Telemetry.Report()` — not stored in the database
- Country picker switched from MUI Autocomplete to Radix Select for
design consistency
- GitHub OAuth button preserved — conditionally rendered when
`authMethods.github.enabled`
- `NewPasswordField` is a drop-in replacement for the MUI
PasswordField

### References
- #23989 
- #24021
- #24014
- #24018

---------

Co-authored-by: Tracy Johnson <tracy@coder.com>
Co-authored-by: Jeremy Ruppel <jeremy.ruppel@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Jeremy Ruppel <jeremyruppel@users.noreply.github.com>
Co-authored-by: Kayla はな <mckayla@hey.com>
2026-04-07 00:16:22 +00:00
Sushant P 61d6c728b9 docs: adding a tutorial for persistent shared workspaces (#23738)
Co-authored-by: Jiachen Jiang <jcjiang42@gmail.com>
2026-04-06 13:22:25 -07:00
Kyle Carberry 648787e739 feat: expose busy_behavior on chat message API (#24054)
The backend (`chatd.go`) already fully implements both `"queue"` and
`"interrupt"` busy behaviors for `SendMessage`, and the `message_agent`
subagent tool already leverages both internally. However the HTTP API
hardcoded `"queue"` and the SDK had no way for callers to request
interrupt-on-send.

This adds a `ChatBusyBehavior` enum type to the SDK and an optional
`busy_behavior` field on `CreateChatMessageRequest`. The HTTP handler
validates the field and passes it through to `chatd.SendMessage`.
Default remains `"queue"` for full backward compatibility.

<details><summary>Implementation notes</summary>

- `codersdk/chats.go`: New `ChatBusyBehavior` type with
`ChatBusyBehaviorQueue` and `ChatBusyBehaviorInterrupt` constants. Added
`BusyBehavior` field to `CreateChatMessageRequest` with `enums` tag for
codegen.
- `coderd/exp_chats.go`: `postChatMessages` now reads
`req.BusyBehavior`, maps SDK constants to
`chatd.SendMessageBusyBehavior*`, returns 400 on invalid values.
- `site/src/api/typesGenerated.ts`: Auto-generated via `make gen`.
- No frontend behavior changes — the field is available but unused by
the UI.

</details>

> [!NOTE]
> Generated by Coder Agents
2026-04-06 16:20:14 -04:00
blinkagent[bot] d2950e7615 docs: document that license validation works offline (#24013)
## What

Documents that Coder license keys are validated locally using
cryptographic signatures and do not require an outbound connection to
Coder's servers. This is a common question from customers evaluating
Coder for air-gapped environments.

## Changes

- **`docs/admin/licensing/index.md`**: Added an "Offline license
validation" section explaining that license keys are signed JWTs
validated locally with no phone-home requirement.
- **`docs/install/airgap.md`**: Added a "License validation" row to the
air-gapped comparison table, confirming no changes are needed for
offline license validation and linking to the licensing docs.

## Why

While the air-gapped docs state that "all Coder features are supported"
offline, there was no explicit mention that the license itself doesn't
require connectivity. This is a frequent question from
security-conscious and air-gapped customers.

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Matyas Danter <mdanter@gmail.com>
2026-04-06 20:43:49 +02:00
Jake Howell df8f695e84 fix: update logline prefix to use timestamp (#23966)
This change replaces the line number display in agent logs with formatted timestamps. The `AgentLogLine` component now shows timestamps in `HH:mm:ss.SSS` format using dayjs instead of sequential line numbers. The component no longer requires `number` and `maxLineNumber` props, and the associated styling for line number formatting has been removed.

This is a global change, but I don't think it's one that will do much damage.
2026-04-07 04:43:35 +10:00
Jake Howell 8bb48ffdda feat: implement kebab menu overflow to <Tabs /> (#23959)
Added `data-slot` attributes to all Tabs components for better CSS
targeting and component identification. Replaced generic button
selectors with data-slot attribute selectors in tab styling variants.

Implemented `useTabOverflowKebabMenu` hook to handle tab overflow
scenarios by measuring tab widths and determining which tabs should be
hidden in a dropdown menu when container space is limited.

Enhanced the AgentRow logs section with:

- Tab overflow handling using a kebab menu (three dots) for tabs that
don't fit
- Copy logs button with visual feedback using CheckIcon animation
- Download logs functionality for selected tab content with proper
filename generation
- Improved layout with flex containers and proper spacing

A few prop and component updates:

* Added `overflowKebabMenu` prop to TabsList component to enable
`flex-nowrap` behavior when overflow handling is active.
* Created `<DownloadSelectedAgentLogsButton />` component to replace the
previous download functionality, now working with filtered log content
based on selected tab.


https://github.com/user-attachments/assets/af48ca39-c906-4a11-a891-0d4399eee827
2026-04-07 04:42:01 +10:00
Kyle Carberry 4cfbf544a0 feat: add per-chat system prompt option (#24053)
Adds a `system_prompt` field to `CreateChatRequest` that allows API
consumers to provide custom instructions when creating a chat. The
per-chat prompt is stored as a separate system message (`role=system`,
`visibility=model`) in the `chat_messages` table, inserted between the
deployment system prompt and the workspace awareness message.

Also moves deployment system prompt resolution from the HTTP handler
(`resolvedChatSystemPrompt`) into `chatd.CreateChat` where it belongs.
The handler no longer assembles system prompts —
`CreateOptions.SystemPrompt` is now purely the per-chat user prompt, and
the deployment prompt is resolved internally by chatd.

No database schema changes required.

**Message insertion order:**
1. Deployment system prompt (resolved by chatd, existing)
2. Per-chat user system prompt (new, from `CreateOptions.SystemPrompt`)
3. Workspace awareness (existing)
4. Initial user message (existing)

🤖 Generated with [Coder Agents](https://coder.com/agents)
2026-04-06 17:19:05 +00:00
Kyle Carberry a2ce74f398 feat: add total_runtime_ms to chat cost analytics endpoints (#24050)
Surface the aggregated `runtime_ms` from `chat_messages` through all
four cost analytics queries (summary, per-model, per-chat, per-user).
This is the key billing metric for agent compute time.

The per-chat breakdown already groups by `root_chat_id`, so subagent
runtime is automatically rolled up under the parent chat — no additional
query changes needed.

<details>
<summary>Implementation details</summary>

**SQL** (`coderd/database/queries/chats.sql`): Added
`COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms` to
`GetChatCostSummary`, `GetChatCostPerModel`, `GetChatCostPerChat`, and
`GetChatCostPerUser`.

**Go SDK** (`codersdk/chats.go`): Added `TotalRuntimeMs int64` to
`ChatCostSummary`, `ChatCostModelBreakdown`, `ChatCostChatBreakdown`,
and `ChatCostUserRollup`.

**Handler** (`coderd/exp_chats.go`): Wired the new field through all
converter functions and the response assembly.

**Tests** (`coderd/exp_chats_test.go`): Updated fixture to seed non-zero
`runtime_ms` values and added assertions for the new field at summary,
per-model, and per-chat levels.
</details>

> 🤖 Generated by Coder Agents
2026-04-06 12:10:57 -04:00
Jake Howell 0060dee222 fix: remove all mui <IconButton /> instances (#24045)
This pull request removes all instances of `<IconButton />` imported
from `@mui/material/IconButton`. This means that we've removed
one whole dependency from MUI and replaced all instances with the local
variant.
2026-04-06 15:38:21 +00:00
blinkagent[bot] 5ff1058f30 feat: add AWS PRM user-agent attribution for partner revenue tracking (#23138)
Sets `AWS_SDK_UA_APP_ID` in the Terraform provisioner environment so
that all AWS API calls made during workspace builds include Coder's AWS
Partner Revenue Measurement (PRM) attribution in the user-agent header.

This enables AWS to attribute resource usage driven by Coder back to us
as an AWS partner across all deployments.

## How it works

- `provisionEnv()` now unconditionally sets
`AWS_SDK_UA_APP_ID=APN_1.1/pc_cdfmjwn8i6u8l9fwz8h82e4w3$` in the
environment passed to `terraform plan` and `terraform apply`
- The Terraform AWS provider picks this up and appends it to the
user-agent header on every AWS API call
- If a customer has already set `AWS_SDK_UA_APP_ID` in their environment
(e.g. via `coder.env`), we don't override it
- Templates that don't use the AWS provider are unaffected — the env var
is simply ignored

## Notes

- The product code is hardcoded in the source. It may be worth
obfuscating this value (e.g. via `-ldflags -X` at build time) to keep it
out of the public repo, though it is technically a public identifier.
- This covers user-agent attribution only. Resource-level `aws-apn-id`
tags for cost allocation are a separate effort that requires template
changes.

## References

- [AWS SDK Application ID
docs](https://docs.aws.amazon.com/sdkref/latest/guide/feature-appid.html)
- [AWS PRM Automated User
Agent](https://prm.partner.aws.dev/automated-user-agent.html) (partner
login required)

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: DevCats <christofer@coder.com>
2026-04-06 10:33:24 -05:00
Kyle Carberry 500fc5e2a4 feat: polish model config form UI (#24047)
Polishes the AI model configuration form (add/edit model) with tighter
layout and better input affordances.

**Frontend changes:**
- Replace "Unset" with "Default" in select dropdowns to communicate
system fallback
- Show pricing fields inline instead of behind a collapsible toggle
- Use flat section dividers (`border-t`) instead of bordered fieldsets
- Move field descriptions into info-icon tooltips to fix input
misalignment
- Add InputGroup adornments: `$` prefix + `/1M` suffix on pricing,
`tokens` suffix on token fields, `%` suffix on compression threshold,
range placeholders on temperature/penalty fields
- Shorter pricing labels (Input, Output, Cache Read, Cache Write)
- Compact JSON textareas (1-row height, resizable)
- Smart grid layouts by field type (3-col provider, 4-col pricing, 3-col
advanced)
- Boolean fields render as a segmented control (Default · On · Off)
instead of a dropdown

**Backend changes:**
- Add `enum` tags to OpenAI `service_tier`
(`auto,default,flex,scale,priority`) and `reasoning_summary`
(`auto,concise,detailed`) so they render as select dropdowns instead of
free-text inputs

> 🤖 Generated by Coder Agents
2026-04-06 10:49:42 -04:00
Jake Howell baba9e6ede feat: disallow auditors from editing <NotificationsPage /> settings (#22382)
Auditors can still access and read this page, but they aren't able to
update any of its content, and the UI should make that clear to them.
Note also that this page isn't shown in the sidebar when the user is an
Auditor.
2026-04-06 22:32:51 +10:00
Jake Howell b36619b905 feat: implement tabs for agent logs (#23952)
This PR adds log source tabs to the workspace agent logs panel so users
can quickly focus on specific log streams instead of scanning one
combined feed. It also updates the shared tabs trigger behavior to
explicitly use `type="button"` when rendered as a native button,
preventing unintended form submission behavior.

- Adds per-source tabs in `AgentRow` with an **All Logs** default view.
- Shows only log sources that currently have output, with source icons
and sorted labels.
- Filters rendered log lines based on the active tab while preserving
existing log streaming/scroll behavior.
- Refines the logs container layout/styling for the new tabbed UI.
- Updates `TabsTrigger` to safely default button type when not using
`asChild`.


https://github.com/user-attachments/assets/9b3e7a9d-72e3-4c12-aba2-2b70cdbc04c1
2026-04-06 18:51:00 +10:00
Kyle Carberry 937f50f0ae fix: show message action tooltips at bottom on agents page (#24041)
The CopyButton tooltip on `/agents` defaulted to top (Radix default),
while the Edit button already used `side="bottom"`. This adds an
optional `tooltipSide` prop to `CopyButton` and passes `"bottom"` in the
agents `ConversationTimeline` so both tooltips appear below the buttons
consistently.

## Changes

- `CopyButton`: added optional `tooltipSide` prop, forwarded to
`<TooltipContent side={tooltipSide}>`
- `ConversationTimeline`: passed `tooltipSide="bottom"` to the
copy-message `CopyButton`

> Generated by Coder Agents
2026-04-05 20:44:59 +00:00
Kyle Carberry a16755dd66 fix: prevent stale REST status from dropping streamed parts (#24040)
The `useEffect` that syncs `chatRecord.status` from React Query
unconditionally overwrites the store's `chatStatus`. The `chat(chatId)`
query has no `staleTime` (defaults to 0), so it refetches on window
focus, remount, etc. If the REST response catches a transient
`"pending"` status (e.g. between multi-step tool-call cycles), it
regresses `chatStatus` from `"running"` to `"pending"`.

Since `shouldApplyMessagePart()` drops ALL parts when status is
`"pending"` or `"waiting"`, every incoming `message_part` event is
silently discarded — not even buffered. Parts are visible on the
WebSocket but nothing renders, and the UI shows "Response is taking
longer than expected". A page reload fixes it because a fresh REST fetch
returns the current status.

**Fix:** Add `wsStatusReceivedRef` — once the WebSocket delivers a
status event, it becomes the authoritative source and REST refetches can
no longer overwrite it. This mirrors the existing
`wsQueueUpdateReceivedRef` pattern already used for queued messages. The
ref resets on chat change.

> Generated with [Coder Agents](https://coder.com/agents)
2026-04-05 14:10:26 -04:00
Kyle Carberry 8bdc35f91f refactor(site): unify message copy/edit UX across user and assistant messages (#24039)
Aligns the copy/edit action bar so both user and assistant messages use
the same hover-to-reveal pattern.

## Changes

- Replace bifurcated copy UX (inline `afterResponseSlot` for assistant,
floating toolbar for user) with a single unified action bar using
`CopyButton` + optional edit `Button`
- Remove `BlockList` `afterResponseSlot` prop and related machinery
- Remove per-message `copyHovered`/`useClipboard` state and left-border
highlight effect
- Remove `lastAssistantPerTurnIds`/`isTurnActive` computation — all
messages with content get actions on hover
- Hide actions on mid-chain assistant messages (only last in consecutive
chain shows buttons)
- Reduce inter-message gap from `gap-3` to `gap-2`
- Shrink action buttons to `size-6` for tighter vertical spacing
- Add 8px sticky top offset for user messages

> 🤖 Generated by Coder Agents
2026-04-05 13:27:21 -04:00
Kyle Carberry 5b32c4d79d fix: prevent stdio MCP server subprocess from dying after connect (#24035)
## Problem

MCP servers configured in `.mcp.json` with stdio transport are
discovered successfully (tools appear) but die immediately after
connection, making all tool calls fail.

## Root Cause

In `connectServer`, the subprocess is spawned with `connectCtx` — a
30-second timeout context whose `cancel()` is deferred:

```go
connectCtx, cancel := context.WithTimeout(ctx, connectTimeout)
defer cancel()
if err := c.Start(connectCtx); err != nil { ... }
```

The mcp-go stdio transport calls `exec.CommandContext(connectCtx, ...)`.
When `connectServer` returns, `cancel()` fires, and
`exec.CommandContext` sends SIGKILL to the subprocess. The process
immediately becomes a zombie.

Confirmed by checking `/proc/<pid>/status` after context cancellation:
```
State: Z (zombie)
```

## Fix

Pass the parent `ctx` (which is `a.gracefulCtx` — the agent's long-lived
context) to `c.Start()`. `connectCtx` continues to bound only the
`Initialize()` handshake. The subprocess is cleaned up when the Manager
is closed or the parent context is canceled.

## Regression Test

Added `TestConnectServer_StdioProcessSurvivesConnect` which:
- Spawns a real subprocess (re-execs the test binary as a fake MCP
server)
- Calls `connectServer` and lets it return (internal `connectCtx` gets
canceled)
- Verifies the subprocess is still alive by calling `ListTools`

The test **fails** on the old code with `transport error: context
deadline exceeded` and **passes** with the fix.

> Generated with [Coder Agents](https://coder.com/agents)
2026-04-05 12:04:13 +00:00
Kyle Carberry 8625543413 feat(coderd/x/chatd): parallelize ConvertMessagesWithFiles with g2 errgroup (#24034)
## Summary

Move `ConvertMessagesWithFiles` into the `g2` errgroup so prompt
conversion runs concurrently with instruction persistence, user prompt
resolution, MCP server connections, and workspace MCP tool discovery.

## Problem

In `runChat`, the setup before the first LLM `Stream()` call is
sequential across two errgroups:

```
g.Wait()                          // model + messages + MCP configs
ConvertMessagesWithFiles()        // sequential — blocked on g2 starting
g2.Wait()                         // instructions + user prompt + MCP connect + workspace MCP
```

`ConvertMessagesWithFiles` can take non-trivial time on conversations
with file attachments (batch DB resolution), and it was blocking g2 from
starting.

## Fix

`ConvertMessagesWithFiles` only reads the `messages` slice (available
after `g.Wait()`) and resolves file references via the database. No g2
task reads or writes the `prompt` variable. This makes it safe to
overlap with g2:

```
g.Wait()
g2.Wait()   // now includes ConvertMessagesWithFiles in parallel
```

The `InsertSystem` call for parent chats and the `promptErr` check are
deferred to after `g2.Wait()`, preserving correctness.

<details><summary>Decision log</summary>

- `ConvertMessagesWithFiles` is read-only on `messages` — no mutation,
safe for concurrent access
- `prompt` and `promptErr` are written only by the conversion goroutine,
read only after `g2.Wait()` — no data race
- Error from prompt conversion is checked immediately after `g2.Wait()`,
before any code that uses `prompt`
- `chatloop.Run` now uses `:=` instead of `=` since the prior `err`
declaration from `prompt, err :=` was removed

</details>

> Generated by Coder Agents
2026-04-05 11:42:07 +00:00
Kyle Carberry e18094825a fix: retain message_part buffer for cross-replica relay (#24031) 2026-04-04 17:24:41 -04:00
192 changed files with 7368 additions and 3099 deletions
+17 -17
View File
@@ -35,7 +35,7 @@ jobs:
tailnet-integration: ${{ steps.filter.outputs.tailnet-integration }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -157,7 +157,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -247,7 +247,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -272,7 +272,7 @@ jobs:
if: ${{ !cancelled() }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -327,7 +327,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -379,7 +379,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -575,7 +575,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -637,7 +637,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -709,7 +709,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -736,7 +736,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -769,7 +769,7 @@ jobs:
name: ${{ matrix.variant.name }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -849,7 +849,7 @@ jobs:
if: needs.changes.outputs.site == 'true' || needs.changes.outputs.ci == 'true'
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -930,7 +930,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -1005,7 +1005,7 @@ jobs:
if: always()
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -1043,7 +1043,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -1097,7 +1097,7 @@ jobs:
IMAGE: ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -1479,7 +1479,7 @@ jobs:
if: needs.changes.outputs.db == 'true' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+1 -1
View File
@@ -23,7 +23,7 @@ jobs:
steps:
- name: Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@21025c705c08248db411dc16f3619e6b5f9ea21a # v2.5.0
uses: dependabot/fetch-metadata@ffa630c65fa7e0ecfa0625b5ceda64399aea1b36 # v3.0.0
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"
+3 -3
View File
@@ -36,7 +36,7 @@ jobs:
verdict: ${{ steps.check.outputs.verdict }} # DEPLOY or NOOP
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -65,7 +65,7 @@ jobs:
packages: write # to retag image as dogfood
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -142,7 +142,7 @@ jobs:
needs: deploy
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+1 -1
View File
@@ -38,7 +38,7 @@ jobs:
if: github.repository_owner == 'coder'
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+2 -2
View File
@@ -26,7 +26,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-4' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -125,7 +125,7 @@ jobs:
id-token: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+1 -1
View File
@@ -28,7 +28,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+1 -1
View File
@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+1 -1
View File
@@ -19,7 +19,7 @@ jobs:
packages: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+5 -5
View File
@@ -39,7 +39,7 @@ jobs:
PR_OPEN: ${{ steps.check_pr.outputs.pr_open }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -76,7 +76,7 @@ jobs:
runs-on: "ubuntu-latest"
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -184,7 +184,7 @@ jobs:
pull-requests: write # needed for commenting on PRs
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -228,7 +228,7 @@ jobs:
CODER_IMAGE_TAG: ${{ needs.get_info.outputs.CODER_IMAGE_TAG }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -288,7 +288,7 @@ jobs:
PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+1 -1
View File
@@ -14,7 +14,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+3 -3
View File
@@ -81,7 +81,7 @@ jobs:
version: ${{ steps.version.outputs.version }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -673,7 +673,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -749,7 +749,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+2 -2
View File
@@ -20,7 +20,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -47,6 +47,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v3.29.5
uses: github/codeql-action/upload-sarif@c10b8064de6f491fea524254123dbe5e09572f13 # v3.29.5
with:
sarif_file: results.sarif
+3 -3
View File
@@ -27,7 +27,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -40,7 +40,7 @@ jobs:
uses: ./.github/actions/setup-go
- name: Initialize CodeQL
uses: github/codeql-action/init@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v3.29.5
uses: github/codeql-action/init@c10b8064de6f491fea524254123dbe5e09572f13 # v3.29.5
with:
languages: go, javascript
@@ -50,7 +50,7 @@ jobs:
rm Makefile
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@5d4e8d1aca955e8d8589aabd499c5cae939e33c7 # v3.29.5
uses: github/codeql-action/analyze@c10b8064de6f491fea524254123dbe5e09572f13 # v3.29.5
- name: Send Slack notification on failure
if: ${{ failure() }}
+3 -3
View File
@@ -18,7 +18,7 @@ jobs:
pull-requests: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -96,7 +96,7 @@ jobs:
contents: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
@@ -120,7 +120,7 @@ jobs:
actions: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+1 -1
View File
@@ -21,7 +21,7 @@ jobs:
pull-requests: write # required to post PR review comments by the action
steps:
- name: Harden Runner
uses: step-security/harden-runner@fa2e9d605c4eeb9fcad4c99c224cee0c6c7f3594 # v2.16.0
uses: step-security/harden-runner@fe104658747b27e96e4f7e80cd0a94068e53901d # v2.16.1
with:
egress-policy: audit
+5 -1
View File
@@ -187,7 +187,11 @@ func (*Manager) connectServer(ctx context.Context, cfg ServerConfig) (*client.Cl
connectCtx, cancel := context.WithTimeout(ctx, connectTimeout)
defer cancel()
if err := c.Start(connectCtx); err != nil {
// Use the parent ctx (not connectCtx) so the subprocess outlives
// the connect/initialize handshake. connectCtx bounds only the
// Initialize call below. The subprocess is cleaned up when the
// Manager is closed or ctx is canceled.
if err := c.Start(ctx); err != nil {
_ = c.Close()
return nil, xerrors.Errorf("start %q: %w", cfg.Name, err)
}
+121
View File
@@ -1,6 +1,11 @@
package agentmcp
import (
"bufio"
"context"
"encoding/json"
"fmt"
"os"
"testing"
"github.com/mark3labs/mcp-go/mcp"
@@ -8,6 +13,7 @@ import (
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/coder/v2/testutil"
)
func TestSplitToolName(t *testing.T) {
@@ -193,3 +199,118 @@ func TestConvertResult(t *testing.T) {
})
}
}
// TestConnectServer_StdioProcessSurvivesConnect verifies that a stdio MCP
// server subprocess remains alive after connectServer returns. This is a
// regression test for a bug where the subprocess was tied to a short-lived
// connectCtx and killed as soon as the context was canceled.
func TestConnectServer_StdioProcessSurvivesConnect(t *testing.T) {
t.Parallel()
if os.Getenv("TEST_MCP_FAKE_SERVER") == "1" {
// Child process: act as a minimal MCP server over stdio.
runFakeMCPServer()
return
}
// Get the path to the test binary so we can re-exec ourselves
// as a fake MCP server subprocess.
testBin, err := os.Executable()
require.NoError(t, err)
cfg := ServerConfig{
Name: "fake",
Transport: "stdio",
Command: testBin,
Args: []string{"-test.run=^TestConnectServer_StdioProcessSurvivesConnect$"},
Env: map[string]string{"TEST_MCP_FAKE_SERVER": "1"},
}
ctx := testutil.Context(t, testutil.WaitLong)
m := &Manager{}
client, err := m.connectServer(ctx, cfg)
require.NoError(t, err, "connectServer should succeed")
t.Cleanup(func() { _ = client.Close() })
// At this point connectServer has returned and its internal
// connectCtx has been canceled. The subprocess must still be
// alive. Verify by listing tools (requires a live server).
listCtx, listCancel := context.WithTimeout(ctx, testutil.WaitShort)
defer listCancel()
result, err := client.ListTools(listCtx, mcp.ListToolsRequest{})
require.NoError(t, err, "ListTools should succeed — server must be alive after connect")
require.Len(t, result.Tools, 1)
assert.Equal(t, "echo", result.Tools[0].Name)
}
// runFakeMCPServer implements a minimal JSON-RPC / MCP server over
// stdin/stdout, just enough for initialize + tools/list.
func runFakeMCPServer() {
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
line := scanner.Bytes()
var req struct {
JSONRPC string `json:"jsonrpc"`
ID json.RawMessage `json:"id"`
Method string `json:"method"`
}
if err := json.Unmarshal(line, &req); err != nil {
continue
}
var resp any
switch req.Method {
case "initialize":
resp = map[string]any{
"jsonrpc": "2.0",
"id": req.ID,
"result": map[string]any{
"protocolVersion": "2025-03-26",
"capabilities": map[string]any{
"tools": map[string]any{},
},
"serverInfo": map[string]any{
"name": "fake-server",
"version": "0.0.1",
},
},
}
case "notifications/initialized":
// No response needed for notifications.
continue
case "tools/list":
resp = map[string]any{
"jsonrpc": "2.0",
"id": req.ID,
"result": map[string]any{
"tools": []map[string]any{
{
"name": "echo",
"description": "echoes input",
"inputSchema": map[string]any{
"type": "object",
"properties": map[string]any{},
},
},
},
},
}
default:
resp = map[string]any{
"jsonrpc": "2.0",
"id": req.ID,
"error": map[string]any{
"code": -32601,
"message": "method not found",
},
}
}
out, err := json.Marshal(resp)
if err != nil {
continue
}
_, _ = fmt.Fprintf(os.Stdout, "%s\n", out)
}
}
+28 -19
View File
@@ -3,11 +3,13 @@
"enabled": true,
"clientKind": "git",
"useIgnoreFile": true,
"defaultBranch": "main",
"defaultBranch": "main"
},
"files": {
"includes": ["**", "!**/pnpm-lock.yaml"],
"ignoreUnknown": true,
// static/*.html are Go templates with {{ }} directives that
// Biome's HTML parser does not support.
"includes": ["**", "!**/pnpm-lock.yaml", "!**/static/*.html"],
"ignoreUnknown": true
},
"linter": {
"rules": {
@@ -15,7 +17,7 @@
"noSvgWithoutTitle": "off",
"useButtonType": "off",
"useSemanticElements": "off",
"noStaticElementInteractions": "off",
"noStaticElementInteractions": "off"
},
"correctness": {
"noUnusedImports": "warn",
@@ -24,9 +26,9 @@
"noUnusedVariables": {
"level": "warn",
"options": {
"ignoreRestSiblings": true,
},
},
"ignoreRestSiblings": true
}
}
},
"style": {
"noNonNullAssertion": "off",
@@ -47,7 +49,7 @@
"paths": {
"react": {
"message": "React 19 no longer requires forwardRef. Use ref as a prop instead.",
"importNames": ["forwardRef"],
"importNames": ["forwardRef"]
},
// "@mui/material/Alert": "Use components/Alert/Alert instead.",
// "@mui/material/AlertTitle": "Use components/Alert/Alert instead.",
@@ -115,10 +117,10 @@
"@emotion/styled": "Use Tailwind CSS instead.",
// "@emotion/cache": "Use Tailwind CSS instead.",
// "components/Stack/Stack": "Use Tailwind flex utilities instead (e.g., <div className='flex flex-col gap-4'>).",
"lodash": "Use lodash/<name> instead.",
},
},
},
"lodash": "Use lodash/<name> instead."
}
}
}
},
"suspicious": {
"noArrayIndexKey": "off",
@@ -129,14 +131,21 @@
"noConsole": {
"level": "error",
"options": {
"allow": ["error", "info", "warn"],
},
},
"allow": ["error", "info", "warn"]
}
}
},
"complexity": {
"noImportantStyles": "off", // TODO: check and fix !important styles
},
},
"noImportantStyles": "off" // TODO: check and fix !important styles
}
}
},
"$schema": "./node_modules/@biomejs/biome/configuration_schema.json",
"css": {
"parser": {
// Biome 2.3+ requires opt-in for @apply and other
// Tailwind directives.
"tailwindDirectives": true
}
},
"$schema": "./node_modules/@biomejs/biome/configuration_schema.json"
}
+6
View File
@@ -14175,6 +14175,9 @@ const docTemplate = `{
},
"count": {
"type": "integer"
},
"count_cap": {
"type": "integer"
}
}
},
@@ -14496,6 +14499,9 @@ const docTemplate = `{
},
"count": {
"type": "integer"
},
"count_cap": {
"type": "integer"
}
}
},
+6
View File
@@ -12739,6 +12739,9 @@
},
"count": {
"type": "integer"
},
"count_cap": {
"type": "integer"
}
}
},
@@ -13039,6 +13042,9 @@
},
"count": {
"type": "integer"
},
"count_cap": {
"type": "integer"
}
}
},
+8 -1
View File
@@ -26,6 +26,11 @@ import (
"github.com/coder/coder/v2/codersdk"
)
// Limit the count query to avoid a slow sequential scan due to joins
// on a large table. Set to 0 to disable capping (but also see the note
// in the SQL query).
const auditLogCountCap = 2000
// @Summary Get audit logs
// @ID get-audit-logs
// @Security CoderSessionToken
@@ -66,7 +71,7 @@ func (api *API) auditLogs(rw http.ResponseWriter, r *http.Request) {
countFilter.Username = ""
}
// Use the same filters to count the number of audit logs
countFilter.CountCap = auditLogCountCap
count, err := api.Database.CountAuditLogs(ctx, countFilter)
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
@@ -81,6 +86,7 @@ func (api *API) auditLogs(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.AuditLogResponse{
AuditLogs: []codersdk.AuditLog{},
Count: 0,
CountCap: auditLogCountCap,
})
return
}
@@ -98,6 +104,7 @@ func (api *API) auditLogs(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.AuditLogResponse{
AuditLogs: api.convertAuditLogs(ctx, dblogs),
Count: count,
CountCap: auditLogCountCap,
})
}
+19 -3
View File
@@ -1528,7 +1528,10 @@ func nullInt64Ptr(v sql.NullInt64) *int64 {
// Chat converts a database.Chat to a codersdk.Chat. It coalesces
// nil slices and maps to empty values for JSON serialization and
// derives RootChatID from the parent chain when not explicitly set.
func Chat(c database.Chat, diffStatus *database.ChatDiffStatus) codersdk.Chat {
// When diffStatus is non-nil the response includes diff metadata.
// When files is non-empty the response includes file metadata;
// pass nil to omit the files field (e.g. list endpoints).
func Chat(c database.Chat, diffStatus *database.ChatDiffStatus, files []database.GetChatFileMetadataByChatIDRow) codersdk.Chat {
mcpServerIDs := c.MCPServerIDs
if mcpServerIDs == nil {
mcpServerIDs = []uuid.UUID{}
@@ -1581,6 +1584,19 @@ func Chat(c database.Chat, diffStatus *database.ChatDiffStatus) codersdk.Chat {
convertedDiffStatus := ChatDiffStatus(c.ID, diffStatus)
chat.DiffStatus = &convertedDiffStatus
}
if len(files) > 0 {
chat.Files = make([]codersdk.ChatFileMetadata, 0, len(files))
for _, row := range files {
chat.Files = append(chat.Files, codersdk.ChatFileMetadata{
ID: row.ID,
OwnerID: row.OwnerID,
OrganizationID: row.OrganizationID,
Name: row.Name,
MimeType: row.Mimetype,
CreatedAt: row.CreatedAt,
})
}
}
if c.LastInjectedContext.Valid {
var parts []codersdk.ChatMessagePart
// Internal fields are stripped at write time in
@@ -1604,9 +1620,9 @@ func ChatRows(rows []database.GetChatsRow, diffStatusesByChatID map[uuid.UUID]da
for i, row := range rows {
diffStatus, ok := diffStatusesByChatID[row.Chat.ID]
if ok {
result[i] = Chat(row.Chat, &diffStatus)
result[i] = Chat(row.Chat, &diffStatus, nil)
} else {
result[i] = Chat(row.Chat, nil)
result[i] = Chat(row.Chat, nil, nil)
if diffStatusesByChatID != nil {
emptyDiffStatus := ChatDiffStatus(row.Chat.ID, nil)
result[i].DiffStatus = &emptyDiffStatus
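The `Chat` converter's doc comment notes that it "coalesces nil slices and maps to empty values for JSON serialization." The reason is a quirk of `encoding/json`: a nil slice marshals as JSON `null`, while an empty slice marshals as `[]`, and API clients generally expect an array. A self-contained sketch (not Coder code) of the pattern:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalIDs shows why converters coalesce nil slices before encoding:
// without the coalesce, a nil slice would serialize as JSON null.
func marshalIDs(ids []string) string {
	if ids == nil {
		ids = []string{} // coalesce so clients always receive an array
	}
	b, _ := json.Marshal(ids)
	return string(b)
}

func main() {
	fmt.Println(marshalIDs(nil))           // []
	fmt.Println(marshalIDs([]string{"a"})) // ["a"]
}
```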
+122 -4
@@ -561,14 +561,26 @@ func TestChat_AllFieldsPopulated(t *testing.T) {
ChatID: input.ID,
}
got := db2sdk.Chat(input, diffStatus)
fileRows := []database.GetChatFileMetadataByChatIDRow{
{
ID: uuid.New(),
OwnerID: input.OwnerID,
OrganizationID: uuid.New(),
Name: "test.png",
Mimetype: "image/png",
CreatedAt: now,
},
}
got := db2sdk.Chat(input, diffStatus, fileRows)
v := reflect.ValueOf(got)
typ := v.Type()
// HasUnread is populated by ChatRows (which joins the
// read-cursor query), not by Chat, so it is expected
// to remain zero here.
skip := map[string]bool{"HasUnread": true}
// read-cursor query), not by Chat. Warnings is a transient
// field populated by handlers, not the converter. Both are
// expected to remain zero here.
skip := map[string]bool{"HasUnread": true, "Warnings": true}
for i := range typ.NumField() {
field := typ.Field(i)
if skip[field.Name] {
@@ -581,6 +593,112 @@ func TestChat_AllFieldsPopulated(t *testing.T) {
}
}
func TestChat_FileMetadataConversion(t *testing.T) {
t.Parallel()
ownerID := uuid.New()
orgID := uuid.New()
fileID := uuid.New()
now := dbtime.Now()
chat := database.Chat{
ID: uuid.New(),
OwnerID: ownerID,
LastModelConfigID: uuid.New(),
Title: "file metadata test",
Status: database.ChatStatusWaiting,
CreatedAt: now,
UpdatedAt: now,
}
rows := []database.GetChatFileMetadataByChatIDRow{
{
ID: fileID,
OwnerID: ownerID,
OrganizationID: orgID,
Name: "screenshot.png",
Mimetype: "image/png",
CreatedAt: now,
},
}
result := db2sdk.Chat(chat, nil, rows)
require.Len(t, result.Files, 1)
f := result.Files[0]
require.Equal(t, fileID, f.ID)
require.Equal(t, ownerID, f.OwnerID, "OwnerID must be mapped from DB row")
require.Equal(t, orgID, f.OrganizationID, "OrganizationID must be mapped from DB row")
require.Equal(t, "screenshot.png", f.Name)
require.Equal(t, "image/png", f.MimeType)
require.Equal(t, now, f.CreatedAt)
// Verify JSON serialization uses snake_case for mime_type.
data, err := json.Marshal(f)
require.NoError(t, err)
require.Contains(t, string(data), `"mime_type"`)
require.NotContains(t, string(data), `"mimetype"`)
}
func TestChat_NilFilesOmitted(t *testing.T) {
t.Parallel()
chat := database.Chat{
ID: uuid.New(),
OwnerID: uuid.New(),
LastModelConfigID: uuid.New(),
Title: "no files",
Status: database.ChatStatusWaiting,
CreatedAt: dbtime.Now(),
UpdatedAt: dbtime.Now(),
}
result := db2sdk.Chat(chat, nil, nil)
require.Empty(t, result.Files)
}
func TestChat_MultipleFiles(t *testing.T) {
t.Parallel()
now := dbtime.Now()
file1 := uuid.New()
file2 := uuid.New()
chat := database.Chat{
ID: uuid.New(),
OwnerID: uuid.New(),
LastModelConfigID: uuid.New(),
Title: "multi file test",
Status: database.ChatStatusWaiting,
CreatedAt: now,
UpdatedAt: now,
}
rows := []database.GetChatFileMetadataByChatIDRow{
{
ID: file1,
OwnerID: chat.OwnerID,
OrganizationID: uuid.New(),
Name: "a.png",
Mimetype: "image/png",
CreatedAt: now,
},
{
ID: file2,
OwnerID: chat.OwnerID,
OrganizationID: uuid.New(),
Name: "b.txt",
Mimetype: "text/plain",
CreatedAt: now,
},
}
result := db2sdk.Chat(chat, nil, rows)
require.Len(t, result.Files, 2)
require.Equal(t, "a.png", result.Files[0].Name)
require.Equal(t, "b.txt", result.Files[1].Name)
}
func TestChatQueuedMessage_MalformedContent(t *testing.T) {
t.Parallel()
+42 -40
@@ -2155,17 +2155,12 @@ func (q *querier) DeleteUserChatProviderKey(ctx context.Context, arg database.De
return q.db.DeleteUserChatProviderKey(ctx, arg)
}
func (q *querier) DeleteUserSecret(ctx context.Context, id uuid.UUID) error {
// First get the secret to check ownership
secret, err := q.GetUserSecret(ctx, id)
if err != nil {
func (q *querier) DeleteUserSecretByUserIDAndName(ctx context.Context, arg database.DeleteUserSecretByUserIDAndNameParams) error {
obj := rbac.ResourceUserSecret.WithOwner(arg.UserID.String())
if err := q.authorizeContext(ctx, policy.ActionDelete, obj); err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionDelete, secret); err != nil {
return err
}
return q.db.DeleteUserSecret(ctx, id)
return q.db.DeleteUserSecretByUserIDAndName(ctx, arg)
}
func (q *querier) DeleteWebpushSubscriptionByUserIDAndEndpoint(ctx context.Context, arg database.DeleteWebpushSubscriptionByUserIDAndEndpointParams) error {
@@ -2583,6 +2578,10 @@ func (q *querier) GetChatFileByID(ctx context.Context, id uuid.UUID) (database.C
return file, nil
}
func (q *querier) GetChatFileMetadataByChatID(ctx context.Context, chatID uuid.UUID) ([]database.GetChatFileMetadataByChatIDRow, error) {
return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.GetChatFileMetadataByChatID)(ctx, chatID)
}
func (q *querier) GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]database.ChatFile, error) {
files, err := q.db.GetChatFilesByIDs(ctx, ids)
if err != nil {
@@ -4124,19 +4123,6 @@ func (q *querier) GetUserNotificationPreferences(ctx context.Context, userID uui
return q.db.GetUserNotificationPreferences(ctx, userID)
}
func (q *querier) GetUserSecret(ctx context.Context, id uuid.UUID) (database.UserSecret, error) {
// First get the secret to check ownership
secret, err := q.db.GetUserSecret(ctx, id)
if err != nil {
return database.UserSecret{}, err
}
if err := q.authorizeContext(ctx, policy.ActionRead, secret); err != nil {
return database.UserSecret{}, err
}
return secret, nil
}
func (q *querier) GetUserSecretByUserIDAndName(ctx context.Context, arg database.GetUserSecretByUserIDAndNameParams) (database.UserSecret, error) {
obj := rbac.ResourceUserSecret.WithOwner(arg.UserID.String())
if err := q.authorizeContext(ctx, policy.ActionRead, obj); err != nil {
@@ -5393,6 +5379,17 @@ func (q *querier) InsertWorkspaceResourceMetadata(ctx context.Context, arg datab
return q.db.InsertWorkspaceResourceMetadata(ctx, arg)
}
func (q *querier) LinkChatFiles(ctx context.Context, arg database.LinkChatFilesParams) (int32, error) {
chat, err := q.db.GetChatByID(ctx, arg.ChatID)
if err != nil {
return 0, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return 0, err
}
return q.db.LinkChatFiles(ctx, arg)
}
func (q *querier) ListAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams) ([]string, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceAibridgeInterception.Type)
if err != nil {
@@ -5509,7 +5506,7 @@ func (q *querier) ListUserChatCompactionThresholds(ctx context.Context, userID u
return q.db.ListUserChatCompactionThresholds(ctx, userID)
}
func (q *querier) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]database.UserSecret, error) {
func (q *querier) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]database.ListUserSecretsRow, error) {
obj := rbac.ResourceUserSecret.WithOwner(userID.String())
if err := q.authorizeContext(ctx, policy.ActionRead, obj); err != nil {
return nil, err
@@ -5517,6 +5514,16 @@ func (q *querier) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]data
return q.db.ListUserSecrets(ctx, userID)
}
func (q *querier) ListUserSecretsWithValues(ctx context.Context, userID uuid.UUID) ([]database.UserSecret, error) {
// This query returns decrypted secret values and must only be called
// from system contexts (provisioner, agent manifest). REST API
// handlers should use ListUserSecrets (metadata only).
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceSystem); err != nil {
return nil, err
}
return q.db.ListUserSecretsWithValues(ctx, userID)
}
func (q *querier) ListWorkspaceAgentPortShares(ctx context.Context, workspaceID uuid.UUID) ([]database.WorkspaceAgentPortShare, error) {
workspace, err := q.db.GetWorkspaceByID(ctx, workspaceID)
if err != nil {
@@ -5767,15 +5774,15 @@ func (q *querier) UpdateChatByID(ctx context.Context, arg database.UpdateChatByI
return q.db.UpdateChatByID(ctx, arg)
}
func (q *querier) UpdateChatHeartbeat(ctx context.Context, arg database.UpdateChatHeartbeatParams) (int64, error) {
chat, err := q.db.GetChatByID(ctx, arg.ID)
if err != nil {
return 0, err
func (q *querier) UpdateChatHeartbeats(ctx context.Context, arg database.UpdateChatHeartbeatsParams) ([]uuid.UUID, error) {
// The batch heartbeat is a system-level operation filtered by
// worker_id. Authorization is enforced by the AsChatd context
// at the call site rather than per-row, because checking each
// row individually would defeat the purpose of batching.
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceChat); err != nil {
return nil, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, chat); err != nil {
return 0, err
}
return q.db.UpdateChatHeartbeat(ctx, arg)
return q.db.UpdateChatHeartbeats(ctx, arg)
}
func (q *querier) UpdateChatLabelsByID(ctx context.Context, arg database.UpdateChatLabelsByIDParams) (database.Chat, error) {
@@ -6617,17 +6624,12 @@ func (q *querier) UpdateUserRoles(ctx context.Context, arg database.UpdateUserRo
return q.db.UpdateUserRoles(ctx, arg)
}
func (q *querier) UpdateUserSecret(ctx context.Context, arg database.UpdateUserSecretParams) (database.UserSecret, error) {
// First get the secret to check ownership
secret, err := q.db.GetUserSecret(ctx, arg.ID)
if err != nil {
func (q *querier) UpdateUserSecretByUserIDAndName(ctx context.Context, arg database.UpdateUserSecretByUserIDAndNameParams) (database.UserSecret, error) {
obj := rbac.ResourceUserSecret.WithOwner(arg.UserID.String())
if err := q.authorizeContext(ctx, policy.ActionUpdate, obj); err != nil {
return database.UserSecret{}, err
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, secret); err != nil {
return database.UserSecret{}, err
}
return q.db.UpdateUserSecret(ctx, arg)
return q.db.UpdateUserSecretByUserIDAndName(ctx, arg)
}
func (q *querier) UpdateUserStatus(ctx context.Context, arg database.UpdateUserStatusParams) (database.User, error) {
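The user-secret refactor in this file follows one pattern throughout: instead of fetching the row first and authorizing against it (one extra database round trip per call), the querier builds the RBAC object from the owner ID already present in the request and authorizes once before delegating. A simplified sketch of that shape, with a toy authorizer standing in for the real `rbac` package:

```go
package main

import (
	"errors"
	"fmt"
)

// object is a toy stand-in for an RBAC resource scoped to an owner.
type object struct {
	Type  string
	Owner string
}

// authorize is a toy policy check: only the owner may act on the object.
func authorize(actor, action string, obj object) error {
	if actor != obj.Owner {
		return errors.New("forbidden")
	}
	return nil
}

// deleteUserSecret authorizes against the owner scope taken from the
// request arguments, then delegates; no pre-fetch of the row is needed.
func deleteUserSecret(actor, userID, name string) error {
	obj := object{Type: "user_secret", Owner: userID}
	if err := authorize(actor, "delete", obj); err != nil {
		return err
	}
	// q.db.DeleteUserSecretByUserIDAndName(...) would run here.
	return nil
}

func main() {
	fmt.Println(deleteUserSecret("u1", "u1", "s")) // owner: allowed
	fmt.Println(deleteUserSecret("u2", "u1", "s")) // non-owner: forbidden
}
```

The trade-off, visible in the diff, is that the check becomes scope-based (any secret owned by that user) rather than per-row, which is why the keyed lookups were reworked to take `UserID` and `Name` instead of a bare secret ID.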
+53 -29
@@ -400,6 +400,17 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UnarchiveChatByID(gomock.Any(), chat.ID).Return([]database.Chat{chat}, nil).AnyTimes()
check.Args(chat.ID).Asserts(chat, policy.ActionUpdate).Returns([]database.Chat{chat})
}))
s.Run("LinkChatFiles", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.LinkChatFilesParams{
ChatID: chat.ID,
MaxFileLinks: int32(codersdk.MaxChatFileIDs),
FileIds: []uuid.UUID{uuid.New()},
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().LinkChatFiles(gomock.Any(), arg).Return(int32(0), nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(int32(0))
}))
s.Run("PinChatByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
@@ -576,6 +587,19 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().GetChatFilesByIDs(gomock.Any(), []uuid.UUID{file.ID}).Return([]database.ChatFile{file}, nil).AnyTimes()
check.Args([]uuid.UUID{file.ID}).Asserts(rbac.ResourceChat.WithOwner(file.OwnerID.String()).InOrg(file.OrganizationID).WithID(file.ID), policy.ActionRead).Returns([]database.ChatFile{file})
}))
s.Run("GetChatFileMetadataByChatID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
file := testutil.Fake(s.T(), faker, database.ChatFile{})
rows := []database.GetChatFileMetadataByChatIDRow{{
ID: file.ID,
Name: file.Name,
Mimetype: file.Mimetype,
CreatedAt: file.CreatedAt,
OwnerID: file.OwnerID,
OrganizationID: file.OrganizationID,
}}
dbm.EXPECT().GetChatFileMetadataByChatID(gomock.Any(), file.ID).Return(rows, nil).AnyTimes()
check.Args(file.ID).Asserts(rbac.ResourceChat.WithOwner(file.OwnerID.String()).InOrg(file.OrganizationID).WithID(file.ID), policy.ActionRead).Returns(rows)
}))
s.Run("GetChatMessageByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
msg := testutil.Fake(s.T(), faker, database.ChatMessage{ChatID: chat.ID})
@@ -818,15 +842,15 @@ func (s *MethodTestSuite) TestChats() {
dbm.EXPECT().UpdateChatStatusPreserveUpdatedAt(gomock.Any(), arg).Return(chat, nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(chat)
}))
s.Run("UpdateChatHeartbeat", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
arg := database.UpdateChatHeartbeatParams{
ID: chat.ID,
s.Run("UpdateChatHeartbeats", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
resultID := uuid.New()
arg := database.UpdateChatHeartbeatsParams{
IDs: []uuid.UUID{resultID},
WorkerID: uuid.New(),
Now: time.Now(),
}
dbm.EXPECT().GetChatByID(gomock.Any(), chat.ID).Return(chat, nil).AnyTimes()
dbm.EXPECT().UpdateChatHeartbeat(gomock.Any(), arg).Return(int64(1), nil).AnyTimes()
check.Args(arg).Asserts(chat, policy.ActionUpdate).Returns(int64(1))
dbm.EXPECT().UpdateChatHeartbeats(gomock.Any(), arg).Return([]uuid.UUID{resultID}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceChat, policy.ActionUpdate).Returns([]uuid.UUID{resultID})
}))
s.Run("UpdateChatMessageByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
chat := testutil.Fake(s.T(), faker, database.Chat{})
@@ -5322,19 +5346,20 @@ func (s *MethodTestSuite) TestUserSecrets() {
Asserts(rbac.ResourceUserSecret.WithOwner(user.ID.String()), policy.ActionRead).
Returns(secret)
}))
s.Run("GetUserSecret", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
secret := testutil.Fake(s.T(), faker, database.UserSecret{})
dbm.EXPECT().GetUserSecret(gomock.Any(), secret.ID).Return(secret, nil).AnyTimes()
check.Args(secret.ID).
Asserts(secret, policy.ActionRead).
Returns(secret)
}))
s.Run("ListUserSecrets", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
user := testutil.Fake(s.T(), faker, database.User{})
secret := testutil.Fake(s.T(), faker, database.UserSecret{UserID: user.ID})
dbm.EXPECT().ListUserSecrets(gomock.Any(), user.ID).Return([]database.UserSecret{secret}, nil).AnyTimes()
row := testutil.Fake(s.T(), faker, database.ListUserSecretsRow{UserID: user.ID})
dbm.EXPECT().ListUserSecrets(gomock.Any(), user.ID).Return([]database.ListUserSecretsRow{row}, nil).AnyTimes()
check.Args(user.ID).
Asserts(rbac.ResourceUserSecret.WithOwner(user.ID.String()), policy.ActionRead).
Returns([]database.ListUserSecretsRow{row})
}))
s.Run("ListUserSecretsWithValues", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
user := testutil.Fake(s.T(), faker, database.User{})
secret := testutil.Fake(s.T(), faker, database.UserSecret{UserID: user.ID})
dbm.EXPECT().ListUserSecretsWithValues(gomock.Any(), user.ID).Return([]database.UserSecret{secret}, nil).AnyTimes()
check.Args(user.ID).
Asserts(rbac.ResourceSystem, policy.ActionRead).
Returns([]database.UserSecret{secret})
}))
s.Run("CreateUserSecret", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
@@ -5346,22 +5371,21 @@ func (s *MethodTestSuite) TestUserSecrets() {
Asserts(rbac.ResourceUserSecret.WithOwner(user.ID.String()), policy.ActionCreate).
Returns(ret)
}))
s.Run("UpdateUserSecret", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
secret := testutil.Fake(s.T(), faker, database.UserSecret{})
updated := testutil.Fake(s.T(), faker, database.UserSecret{ID: secret.ID})
arg := database.UpdateUserSecretParams{ID: secret.ID}
dbm.EXPECT().GetUserSecret(gomock.Any(), secret.ID).Return(secret, nil).AnyTimes()
dbm.EXPECT().UpdateUserSecret(gomock.Any(), arg).Return(updated, nil).AnyTimes()
s.Run("UpdateUserSecretByUserIDAndName", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
user := testutil.Fake(s.T(), faker, database.User{})
updated := testutil.Fake(s.T(), faker, database.UserSecret{UserID: user.ID})
arg := database.UpdateUserSecretByUserIDAndNameParams{UserID: user.ID, Name: "test"}
dbm.EXPECT().UpdateUserSecretByUserIDAndName(gomock.Any(), arg).Return(updated, nil).AnyTimes()
check.Args(arg).
Asserts(secret, policy.ActionUpdate).
Asserts(rbac.ResourceUserSecret.WithOwner(user.ID.String()), policy.ActionUpdate).
Returns(updated)
}))
s.Run("DeleteUserSecret", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
secret := testutil.Fake(s.T(), faker, database.UserSecret{})
dbm.EXPECT().GetUserSecret(gomock.Any(), secret.ID).Return(secret, nil).AnyTimes()
dbm.EXPECT().DeleteUserSecret(gomock.Any(), secret.ID).Return(nil).AnyTimes()
check.Args(secret.ID).
Asserts(secret, policy.ActionRead, secret, policy.ActionDelete).
s.Run("DeleteUserSecretByUserIDAndName", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
user := testutil.Fake(s.T(), faker, database.User{})
arg := database.DeleteUserSecretByUserIDAndNameParams{UserID: user.ID, Name: "test"}
dbm.EXPECT().DeleteUserSecretByUserIDAndName(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).
Asserts(rbac.ResourceUserSecret.WithOwner(user.ID.String()), policy.ActionDelete).
Returns()
}))
}
+1
@@ -1597,6 +1597,7 @@ func UserSecret(t testing.TB, db database.Store, seed database.UserSecret) datab
Name: takeFirst(seed.Name, "secret-name"),
Description: takeFirst(seed.Description, "secret description"),
Value: takeFirst(seed.Value, "secret value"),
ValueKeyID: seed.ValueKeyID,
EnvName: takeFirst(seed.EnvName, "SECRET_ENV_NAME"),
FilePath: takeFirst(seed.FilePath, "~/secret/file/path"),
})
+37 -21
@@ -712,11 +712,11 @@ func (m queryMetricsStore) DeleteUserChatProviderKey(ctx context.Context, arg da
return r0
}
func (m queryMetricsStore) DeleteUserSecret(ctx context.Context, id uuid.UUID) error {
func (m queryMetricsStore) DeleteUserSecretByUserIDAndName(ctx context.Context, arg database.DeleteUserSecretByUserIDAndNameParams) error {
start := time.Now()
r0 := m.s.DeleteUserSecret(ctx, id)
m.queryLatencies.WithLabelValues("DeleteUserSecret").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteUserSecret").Inc()
r0 := m.s.DeleteUserSecretByUserIDAndName(ctx, arg)
m.queryLatencies.WithLabelValues("DeleteUserSecretByUserIDAndName").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "DeleteUserSecretByUserIDAndName").Inc()
return r0
}
@@ -1128,6 +1128,14 @@ func (m queryMetricsStore) GetChatFileByID(ctx context.Context, id uuid.UUID) (d
return r0, r1
}
func (m queryMetricsStore) GetChatFileMetadataByChatID(ctx context.Context, chatID uuid.UUID) ([]database.GetChatFileMetadataByChatIDRow, error) {
start := time.Now()
r0, r1 := m.s.GetChatFileMetadataByChatID(ctx, chatID)
m.queryLatencies.WithLabelValues("GetChatFileMetadataByChatID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetChatFileMetadataByChatID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]database.ChatFile, error) {
start := time.Now()
r0, r1 := m.s.GetChatFilesByIDs(ctx, ids)
@@ -2616,14 +2624,6 @@ func (m queryMetricsStore) GetUserNotificationPreferences(ctx context.Context, u
return r0, r1
}
func (m queryMetricsStore) GetUserSecret(ctx context.Context, id uuid.UUID) (database.UserSecret, error) {
start := time.Now()
r0, r1 := m.s.GetUserSecret(ctx, id)
m.queryLatencies.WithLabelValues("GetUserSecret").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetUserSecret").Inc()
return r0, r1
}
func (m queryMetricsStore) GetUserSecretByUserIDAndName(ctx context.Context, arg database.GetUserSecretByUserIDAndNameParams) (database.UserSecret, error) {
start := time.Now()
r0, r1 := m.s.GetUserSecretByUserIDAndName(ctx, arg)
@@ -3776,6 +3776,14 @@ func (m queryMetricsStore) InsertWorkspaceResourceMetadata(ctx context.Context,
return r0, r1
}
func (m queryMetricsStore) LinkChatFiles(ctx context.Context, arg database.LinkChatFilesParams) (int32, error) {
start := time.Now()
r0, r1 := m.s.LinkChatFiles(ctx, arg)
m.queryLatencies.WithLabelValues("LinkChatFiles").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "LinkChatFiles").Inc()
return r0, r1
}
func (m queryMetricsStore) ListAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams) ([]string, error) {
start := time.Now()
r0, r1 := m.s.ListAIBridgeClients(ctx, arg)
@@ -3904,7 +3912,7 @@ func (m queryMetricsStore) ListUserChatCompactionThresholds(ctx context.Context,
return r0, r1
}
func (m queryMetricsStore) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]database.UserSecret, error) {
func (m queryMetricsStore) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]database.ListUserSecretsRow, error) {
start := time.Now()
r0, r1 := m.s.ListUserSecrets(ctx, userID)
m.queryLatencies.WithLabelValues("ListUserSecrets").Observe(time.Since(start).Seconds())
@@ -3912,6 +3920,14 @@ func (m queryMetricsStore) ListUserSecrets(ctx context.Context, userID uuid.UUID
return r0, r1
}
func (m queryMetricsStore) ListUserSecretsWithValues(ctx context.Context, userID uuid.UUID) ([]database.UserSecret, error) {
start := time.Now()
r0, r1 := m.s.ListUserSecretsWithValues(ctx, userID)
m.queryLatencies.WithLabelValues("ListUserSecretsWithValues").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "ListUserSecretsWithValues").Inc()
return r0, r1
}
func (m queryMetricsStore) ListWorkspaceAgentPortShares(ctx context.Context, workspaceID uuid.UUID) ([]database.WorkspaceAgentPortShare, error) {
start := time.Now()
r0, r1 := m.s.ListWorkspaceAgentPortShares(ctx, workspaceID)
@@ -4120,11 +4136,11 @@ func (m queryMetricsStore) UpdateChatByID(ctx context.Context, arg database.Upda
return r0, r1
}
func (m queryMetricsStore) UpdateChatHeartbeat(ctx context.Context, arg database.UpdateChatHeartbeatParams) (int64, error) {
func (m queryMetricsStore) UpdateChatHeartbeats(ctx context.Context, arg database.UpdateChatHeartbeatsParams) ([]uuid.UUID, error) {
start := time.Now()
r0, r1 := m.s.UpdateChatHeartbeat(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatHeartbeat").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatHeartbeat").Inc()
r0, r1 := m.s.UpdateChatHeartbeats(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateChatHeartbeats").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateChatHeartbeats").Inc()
return r0, r1
}
@@ -4680,11 +4696,11 @@ func (m queryMetricsStore) UpdateUserRoles(ctx context.Context, arg database.Upd
return r0, r1
}
func (m queryMetricsStore) UpdateUserSecret(ctx context.Context, arg database.UpdateUserSecretParams) (database.UserSecret, error) {
func (m queryMetricsStore) UpdateUserSecretByUserIDAndName(ctx context.Context, arg database.UpdateUserSecretByUserIDAndNameParams) (database.UserSecret, error) {
start := time.Now()
r0, r1 := m.s.UpdateUserSecret(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateUserSecret").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateUserSecret").Inc()
r0, r1 := m.s.UpdateUserSecretByUserIDAndName(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateUserSecretByUserIDAndName").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateUserSecretByUserIDAndName").Inc()
return r0, r1
}
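Every method in `queryMetricsStore` above follows the same decorator shape: time the wrapped store call, then record the latency and a count under the query's name. A generic sketch of that wrapper, with a plain callback standing in for the Prometheus histogram `Observe`:

```go
package main

import (
	"fmt"
	"time"
)

// timed wraps a store call, timing it and reporting the elapsed seconds
// under the query's name. observe stands in for a histogram's Observe.
func timed(name string, observe func(query string, seconds float64), fn func() error) error {
	start := time.Now()
	err := fn()
	observe(name, time.Since(start).Seconds())
	return err
}

func main() {
	_ = timed("ListUserSecrets", func(q string, s float64) {
		fmt.Printf("%s took %.6fs\n", q, s)
	}, func() error { return nil })
}
```

The generated code inlines this per method (so each label value is a compile-time constant), but the renames in this diff are mechanical precisely because every body is an instance of this one pattern.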
+66 -36
@@ -1199,18 +1199,18 @@ func (mr *MockStoreMockRecorder) DeleteUserChatProviderKey(ctx, arg any) *gomock
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteUserChatProviderKey", reflect.TypeOf((*MockStore)(nil).DeleteUserChatProviderKey), ctx, arg)
}
// DeleteUserSecret mocks base method.
func (m *MockStore) DeleteUserSecret(ctx context.Context, id uuid.UUID) error {
// DeleteUserSecretByUserIDAndName mocks base method.
func (m *MockStore) DeleteUserSecretByUserIDAndName(ctx context.Context, arg database.DeleteUserSecretByUserIDAndNameParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteUserSecret", ctx, id)
ret := m.ctrl.Call(m, "DeleteUserSecretByUserIDAndName", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// DeleteUserSecret indicates an expected call of DeleteUserSecret.
func (mr *MockStoreMockRecorder) DeleteUserSecret(ctx, id any) *gomock.Call {
// DeleteUserSecretByUserIDAndName indicates an expected call of DeleteUserSecretByUserIDAndName.
func (mr *MockStoreMockRecorder) DeleteUserSecretByUserIDAndName(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteUserSecret", reflect.TypeOf((*MockStore)(nil).DeleteUserSecret), ctx, id)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteUserSecretByUserIDAndName", reflect.TypeOf((*MockStore)(nil).DeleteUserSecretByUserIDAndName), ctx, arg)
}
// DeleteWebpushSubscriptionByUserIDAndEndpoint mocks base method.
@@ -2072,6 +2072,21 @@ func (mr *MockStoreMockRecorder) GetChatFileByID(ctx, id any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatFileByID", reflect.TypeOf((*MockStore)(nil).GetChatFileByID), ctx, id)
}
// GetChatFileMetadataByChatID mocks base method.
func (m *MockStore) GetChatFileMetadataByChatID(ctx context.Context, chatID uuid.UUID) ([]database.GetChatFileMetadataByChatIDRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetChatFileMetadataByChatID", ctx, chatID)
ret0, _ := ret[0].([]database.GetChatFileMetadataByChatIDRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetChatFileMetadataByChatID indicates an expected call of GetChatFileMetadataByChatID.
func (mr *MockStoreMockRecorder) GetChatFileMetadataByChatID(ctx, chatID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetChatFileMetadataByChatID", reflect.TypeOf((*MockStore)(nil).GetChatFileMetadataByChatID), ctx, chatID)
}
// GetChatFilesByIDs mocks base method.
func (m *MockStore) GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]database.ChatFile, error) {
m.ctrl.T.Helper()
@@ -4892,21 +4907,6 @@ func (mr *MockStoreMockRecorder) GetUserNotificationPreferences(ctx, userID any)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUserNotificationPreferences", reflect.TypeOf((*MockStore)(nil).GetUserNotificationPreferences), ctx, userID)
}
// GetUserSecret mocks base method.
func (m *MockStore) GetUserSecret(ctx context.Context, id uuid.UUID) (database.UserSecret, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetUserSecret", ctx, id)
ret0, _ := ret[0].(database.UserSecret)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetUserSecret indicates an expected call of GetUserSecret.
func (mr *MockStoreMockRecorder) GetUserSecret(ctx, id any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUserSecret", reflect.TypeOf((*MockStore)(nil).GetUserSecret), ctx, id)
}
// GetUserSecretByUserIDAndName mocks base method.
func (m *MockStore) GetUserSecretByUserIDAndName(ctx context.Context, arg database.GetUserSecretByUserIDAndNameParams) (database.UserSecret, error) {
m.ctrl.T.Helper()
@@ -7066,6 +7066,21 @@ func (mr *MockStoreMockRecorder) InsertWorkspaceResourceMetadata(ctx, arg any) *
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertWorkspaceResourceMetadata", reflect.TypeOf((*MockStore)(nil).InsertWorkspaceResourceMetadata), ctx, arg)
}
// LinkChatFiles mocks base method.
func (m *MockStore) LinkChatFiles(ctx context.Context, arg database.LinkChatFilesParams) (int32, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "LinkChatFiles", ctx, arg)
ret0, _ := ret[0].(int32)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// LinkChatFiles indicates an expected call of LinkChatFiles.
func (mr *MockStoreMockRecorder) LinkChatFiles(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "LinkChatFiles", reflect.TypeOf((*MockStore)(nil).LinkChatFiles), ctx, arg)
}
// ListAIBridgeClients mocks base method.
func (m *MockStore) ListAIBridgeClients(ctx context.Context, arg database.ListAIBridgeClientsParams) ([]string, error) {
m.ctrl.T.Helper()
@@ -7382,10 +7397,10 @@ func (mr *MockStoreMockRecorder) ListUserChatCompactionThresholds(ctx, userID an
}
// ListUserSecrets mocks base method.
func (m *MockStore) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]database.UserSecret, error) {
func (m *MockStore) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]database.ListUserSecretsRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListUserSecrets", ctx, userID)
ret0, _ := ret[0].([]database.UserSecret)
ret0, _ := ret[0].([]database.ListUserSecretsRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
@@ -7396,6 +7411,21 @@ func (mr *MockStoreMockRecorder) ListUserSecrets(ctx, userID any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListUserSecrets", reflect.TypeOf((*MockStore)(nil).ListUserSecrets), ctx, userID)
}
// ListUserSecretsWithValues mocks base method.
func (m *MockStore) ListUserSecretsWithValues(ctx context.Context, userID uuid.UUID) ([]database.UserSecret, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ListUserSecretsWithValues", ctx, userID)
ret0, _ := ret[0].([]database.UserSecret)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// ListUserSecretsWithValues indicates an expected call of ListUserSecretsWithValues.
func (mr *MockStoreMockRecorder) ListUserSecretsWithValues(ctx, userID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ListUserSecretsWithValues", reflect.TypeOf((*MockStore)(nil).ListUserSecretsWithValues), ctx, userID)
}
// ListWorkspaceAgentPortShares mocks base method.
func (m *MockStore) ListWorkspaceAgentPortShares(ctx context.Context, workspaceID uuid.UUID) ([]database.WorkspaceAgentPortShare, error) {
m.ctrl.T.Helper()
@@ -7805,19 +7835,19 @@ func (mr *MockStoreMockRecorder) UpdateChatByID(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatByID", reflect.TypeOf((*MockStore)(nil).UpdateChatByID), ctx, arg)
}
-// UpdateChatHeartbeat mocks base method.
-func (m *MockStore) UpdateChatHeartbeat(ctx context.Context, arg database.UpdateChatHeartbeatParams) (int64, error) {
+// UpdateChatHeartbeats mocks base method.
+func (m *MockStore) UpdateChatHeartbeats(ctx context.Context, arg database.UpdateChatHeartbeatsParams) ([]uuid.UUID, error) {
m.ctrl.T.Helper()
-ret := m.ctrl.Call(m, "UpdateChatHeartbeat", ctx, arg)
-ret0, _ := ret[0].(int64)
+ret := m.ctrl.Call(m, "UpdateChatHeartbeats", ctx, arg)
+ret0, _ := ret[0].([]uuid.UUID)
ret1, _ := ret[1].(error)
return ret0, ret1
}
-// UpdateChatHeartbeat indicates an expected call of UpdateChatHeartbeat.
-func (mr *MockStoreMockRecorder) UpdateChatHeartbeat(ctx, arg any) *gomock.Call {
+// UpdateChatHeartbeats indicates an expected call of UpdateChatHeartbeats.
+func (mr *MockStoreMockRecorder) UpdateChatHeartbeats(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
-return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatHeartbeat", reflect.TypeOf((*MockStore)(nil).UpdateChatHeartbeat), ctx, arg)
+return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateChatHeartbeats", reflect.TypeOf((*MockStore)(nil).UpdateChatHeartbeats), ctx, arg)
}
// UpdateChatLabelsByID mocks base method.
@@ -8824,19 +8854,19 @@ func (mr *MockStoreMockRecorder) UpdateUserRoles(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateUserRoles", reflect.TypeOf((*MockStore)(nil).UpdateUserRoles), ctx, arg)
}
-// UpdateUserSecret mocks base method.
-func (m *MockStore) UpdateUserSecret(ctx context.Context, arg database.UpdateUserSecretParams) (database.UserSecret, error) {
+// UpdateUserSecretByUserIDAndName mocks base method.
+func (m *MockStore) UpdateUserSecretByUserIDAndName(ctx context.Context, arg database.UpdateUserSecretByUserIDAndNameParams) (database.UserSecret, error) {
m.ctrl.T.Helper()
-ret := m.ctrl.Call(m, "UpdateUserSecret", ctx, arg)
+ret := m.ctrl.Call(m, "UpdateUserSecretByUserIDAndName", ctx, arg)
ret0, _ := ret[0].(database.UserSecret)
ret1, _ := ret[1].(error)
return ret0, ret1
}
-// UpdateUserSecret indicates an expected call of UpdateUserSecret.
-func (mr *MockStoreMockRecorder) UpdateUserSecret(ctx, arg any) *gomock.Call {
+// UpdateUserSecretByUserIDAndName indicates an expected call of UpdateUserSecretByUserIDAndName.
+func (mr *MockStoreMockRecorder) UpdateUserSecretByUserIDAndName(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
-return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateUserSecret", reflect.TypeOf((*MockStore)(nil).UpdateUserSecret), ctx, arg)
+return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateUserSecretByUserIDAndName", reflect.TypeOf((*MockStore)(nil).UpdateUserSecretByUserIDAndName), ctx, arg)
}
// UpdateUserStatus mocks base method.
+16
@@ -1269,6 +1269,11 @@ CREATE TABLE chat_diff_statuses (
head_branch text
);
CREATE TABLE chat_file_links (
chat_id uuid NOT NULL,
file_id uuid NOT NULL
);
CREATE TABLE chat_files (
id uuid DEFAULT gen_random_uuid() NOT NULL,
owner_id uuid NOT NULL,
@@ -3344,6 +3349,9 @@ ALTER TABLE ONLY boundary_usage_stats
ALTER TABLE ONLY chat_diff_statuses
ADD CONSTRAINT chat_diff_statuses_pkey PRIMARY KEY (chat_id);
ALTER TABLE ONLY chat_file_links
ADD CONSTRAINT chat_file_links_chat_id_file_id_key UNIQUE (chat_id, file_id);
ALTER TABLE ONLY chat_files
ADD CONSTRAINT chat_files_pkey PRIMARY KEY (id);
@@ -3734,6 +3742,8 @@ CREATE INDEX idx_audit_logs_time_desc ON audit_logs USING btree ("time" DESC);
CREATE INDEX idx_chat_diff_statuses_stale_at ON chat_diff_statuses USING btree (stale_at);
CREATE INDEX idx_chat_file_links_chat_id ON chat_file_links USING btree (chat_id);
CREATE INDEX idx_chat_files_org ON chat_files USING btree (organization_id);
CREATE INDEX idx_chat_files_owner ON chat_files USING btree (owner_id);
@@ -4036,6 +4046,12 @@ ALTER TABLE ONLY api_keys
ALTER TABLE ONLY chat_diff_statuses
ADD CONSTRAINT chat_diff_statuses_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_file_links
ADD CONSTRAINT chat_file_links_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_file_links
ADD CONSTRAINT chat_file_links_file_id_fkey FOREIGN KEY (file_id) REFERENCES chat_files(id) ON DELETE CASCADE;
ALTER TABLE ONLY chat_files
ADD CONSTRAINT chat_files_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
@@ -10,6 +10,8 @@ const (
ForeignKeyAibridgeInterceptionsInitiatorID ForeignKeyConstraint = "aibridge_interceptions_initiator_id_fkey" // ALTER TABLE ONLY aibridge_interceptions ADD CONSTRAINT aibridge_interceptions_initiator_id_fkey FOREIGN KEY (initiator_id) REFERENCES users(id);
ForeignKeyAPIKeysUserIDUUID ForeignKeyConstraint = "api_keys_user_id_uuid_fkey" // ALTER TABLE ONLY api_keys ADD CONSTRAINT api_keys_user_id_uuid_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyChatDiffStatusesChatID ForeignKeyConstraint = "chat_diff_statuses_chat_id_fkey" // ALTER TABLE ONLY chat_diff_statuses ADD CONSTRAINT chat_diff_statuses_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ForeignKeyChatFileLinksChatID ForeignKeyConstraint = "chat_file_links_chat_id_fkey" // ALTER TABLE ONLY chat_file_links ADD CONSTRAINT chat_file_links_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ForeignKeyChatFileLinksFileID ForeignKeyConstraint = "chat_file_links_file_id_fkey" // ALTER TABLE ONLY chat_file_links ADD CONSTRAINT chat_file_links_file_id_fkey FOREIGN KEY (file_id) REFERENCES chat_files(id) ON DELETE CASCADE;
ForeignKeyChatFilesOrganizationID ForeignKeyConstraint = "chat_files_organization_id_fkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ForeignKeyChatFilesOwnerID ForeignKeyConstraint = "chat_files_owner_id_fkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyChatMessagesChatID ForeignKeyConstraint = "chat_messages_chat_id_fkey" // ALTER TABLE ONLY chat_messages ADD CONSTRAINT chat_messages_chat_id_fkey FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
@@ -0,0 +1,9 @@
ALTER TABLE chats ADD COLUMN file_ids uuid[] DEFAULT '{}'::uuid[] NOT NULL;
UPDATE chats SET file_ids = (
SELECT COALESCE(array_agg(cfl.file_id), '{}')
FROM chat_file_links cfl
WHERE cfl.chat_id = chats.id
);
DROP TABLE chat_file_links;
@@ -0,0 +1,17 @@
CREATE TABLE chat_file_links (
chat_id uuid NOT NULL,
file_id uuid NOT NULL,
UNIQUE (chat_id, file_id)
);
CREATE INDEX idx_chat_file_links_chat_id ON chat_file_links (chat_id);
ALTER TABLE chat_file_links
ADD CONSTRAINT chat_file_links_chat_id_fkey
FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE;
ALTER TABLE chat_file_links
ADD CONSTRAINT chat_file_links_file_id_fkey
FOREIGN KEY (file_id) REFERENCES chat_files(id) ON DELETE CASCADE;
ALTER TABLE chats DROP COLUMN IF EXISTS file_ids;
@@ -0,0 +1,5 @@
INSERT INTO chat_file_links (chat_id, file_id)
VALUES (
'72c0438a-18eb-4688-ab80-e4c6a126ef96',
'00000000-0000-0000-0000-000000000099'
);
+4
@@ -187,6 +187,10 @@ func (c ChatFile) RBACObject() rbac.Object {
return rbac.ResourceChat.WithID(c.ID).WithOwner(c.OwnerID.String()).InOrg(c.OrganizationID)
}
func (c GetChatFileMetadataByChatIDRow) RBACObject() rbac.Object {
return rbac.ResourceChat.WithID(c.ID).WithOwner(c.OwnerID.String()).InOrg(c.OrganizationID)
}
func (s APIKeyScope) ToRBAC() rbac.ScopeName {
switch s {
case ApiKeyScopeCoderAll:
+2
@@ -584,6 +584,7 @@ func (q *sqlQuerier) CountAuthorizedAuditLogs(ctx context.Context, arg CountAudi
arg.DateTo,
arg.BuildReason,
arg.RequestID,
arg.CountCap,
)
if err != nil {
return 0, err
@@ -720,6 +721,7 @@ func (q *sqlQuerier) CountAuthorizedConnectionLogs(ctx context.Context, arg Coun
arg.WorkspaceID,
arg.ConnectionID,
arg.Status,
arg.CountCap,
)
if err != nil {
return 0, err
@@ -145,5 +145,13 @@ func extractWhereClause(query string) string {
// Remove SQL comments
whereClause = regexp.MustCompile(`(?m)--.*$`).ReplaceAllString(whereClause, "")
// Normalize indentation so subquery wrapping doesn't cause
// mismatches.
lines := strings.Split(whereClause, "\n")
for i, line := range lines {
lines[i] = strings.TrimLeft(line, " \t")
}
whereClause = strings.Join(lines, "\n")
return strings.TrimSpace(whereClause)
}
+5
@@ -4218,6 +4218,11 @@ type ChatFile struct {
Data []byte `db:"data" json:"data"`
}
type ChatFileLink struct {
ChatID uuid.UUID `db:"chat_id" json:"chat_id"`
FileID uuid.UUID `db:"file_id" json:"file_id"`
}
type ChatMessage struct {
ID int64 `db:"id" json:"id"`
ChatID uuid.UUID `db:"chat_id" json:"chat_id"`
+27 -17
@@ -81,8 +81,8 @@ func newMsgQueue(ctx context.Context, l Listener, le ListenerWithErr) *msgQueue
}
func (q *msgQueue) run() {
+var batch [maxDrainBatch]msgOrErr
for {
// wait until there is something on the queue or we are closed
q.cond.L.Lock()
for q.size == 0 && !q.closed {
q.cond.Wait()
@@ -91,28 +91,32 @@ func (q *msgQueue) run() {
q.cond.L.Unlock()
return
}
-item := q.q[q.front]
-q.front = (q.front + 1) % BufferSize
-q.size--
+// Drain up to maxDrainBatch items while holding the lock.
+n := min(q.size, maxDrainBatch)
+for i := range n {
+batch[i] = q.q[q.front]
+q.front = (q.front + 1) % BufferSize
+}
+q.size -= n
q.cond.L.Unlock()
-// process item without holding lock
-if item.err == nil {
-// real message
-if q.l != nil {
-q.l(q.ctx, item.msg)
+// Dispatch each message individually without holding the lock.
+for i := range n {
+item := batch[i]
+if item.err == nil {
+if q.l != nil {
+q.l(q.ctx, item.msg)
+continue
+}
+if q.le != nil {
+q.le(q.ctx, item.msg, nil)
+continue
+}
continue
}
if q.le != nil {
-q.le(q.ctx, item.msg, nil)
-continue
+q.le(q.ctx, nil, item.err)
}
-// unhittable
-continue
}
-// if the listener wants errors, send it.
-if q.le != nil {
-q.le(q.ctx, nil, item.err)
-}
}
}
@@ -233,6 +237,12 @@ type PGPubsub struct {
// for a subscriber before dropping messages.
const BufferSize = 2048
// maxDrainBatch is the maximum number of messages to drain from the ring
// buffer per iteration. Batching amortizes the cost of mutex
// acquire/release and cond.Wait across many messages, improving drain
// throughput during bursts.
const maxDrainBatch = 256
// Subscribe calls the listener when an event matching the name is received.
func (p *PGPubsub) Subscribe(event string, listener Listener) (cancel func(), err error) {
return p.subscribeQueue(event, newMsgQueue(context.Background(), listener, nil))
+27 -7
@@ -152,7 +152,7 @@ type sqlcQuerier interface {
DeleteTask(ctx context.Context, arg DeleteTaskParams) (uuid.UUID, error)
DeleteUserChatCompactionThreshold(ctx context.Context, arg DeleteUserChatCompactionThresholdParams) error
DeleteUserChatProviderKey(ctx context.Context, arg DeleteUserChatProviderKeyParams) error
-DeleteUserSecret(ctx context.Context, id uuid.UUID) error
+DeleteUserSecretByUserIDAndName(ctx context.Context, arg DeleteUserSecretByUserIDAndNameParams) error
DeleteWebpushSubscriptionByUserIDAndEndpoint(ctx context.Context, arg DeleteWebpushSubscriptionByUserIDAndEndpointParams) error
DeleteWebpushSubscriptions(ctx context.Context, ids []uuid.UUID) error
DeleteWorkspaceACLByID(ctx context.Context, id uuid.UUID) error
@@ -244,6 +244,10 @@ type sqlcQuerier interface {
GetChatDiffStatusByChatID(ctx context.Context, chatID uuid.UUID) (ChatDiffStatus, error)
GetChatDiffStatusesByChatIDs(ctx context.Context, chatIds []uuid.UUID) ([]ChatDiffStatus, error)
GetChatFileByID(ctx context.Context, id uuid.UUID) (ChatFile, error)
// GetChatFileMetadataByChatID returns lightweight file metadata for
// all files linked to a chat. The data column is excluded to avoid
// loading file content.
GetChatFileMetadataByChatID(ctx context.Context, chatID uuid.UUID) ([]GetChatFileMetadataByChatIDRow, error)
GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]ChatFile, error)
// GetChatIncludeDefaultSystemPrompt preserves the legacy default
// for deployments created before the explicit include-default toggle.
@@ -594,7 +598,6 @@ type sqlcQuerier interface {
GetUserLinkByUserIDLoginType(ctx context.Context, arg GetUserLinkByUserIDLoginTypeParams) (UserLink, error)
GetUserLinksByUserID(ctx context.Context, userID uuid.UUID) ([]UserLink, error)
GetUserNotificationPreferences(ctx context.Context, userID uuid.UUID) ([]NotificationPreference, error)
-GetUserSecret(ctx context.Context, id uuid.UUID) (UserSecret, error)
GetUserSecretByUserIDAndName(ctx context.Context, arg GetUserSecretByUserIDAndNameParams) (UserSecret, error)
// GetUserStatusCounts returns the count of users in each status over time.
// The time range is inclusively defined by the start_time and end_time parameters.
@@ -778,6 +781,15 @@ type sqlcQuerier interface {
InsertWorkspaceProxy(ctx context.Context, arg InsertWorkspaceProxyParams) (WorkspaceProxy, error)
InsertWorkspaceResource(ctx context.Context, arg InsertWorkspaceResourceParams) (WorkspaceResource, error)
InsertWorkspaceResourceMetadata(ctx context.Context, arg InsertWorkspaceResourceMetadataParams) ([]WorkspaceResourceMetadatum, error)
// LinkChatFiles inserts file associations into the chat_file_links
// join table with deduplication (ON CONFLICT DO NOTHING). The INSERT
// is conditional: it only proceeds when the total number of links
// (existing + genuinely new) does not exceed max_file_links. Returns
// the number of genuinely new file IDs that were NOT inserted due to
// the cap. A return value of 0 means all files were linked (or were
// already linked). A positive value means the cap blocked that many
// new links.
LinkChatFiles(ctx context.Context, arg LinkChatFilesParams) (int32, error)
ListAIBridgeClients(ctx context.Context, arg ListAIBridgeClientsParams) ([]string, error)
ListAIBridgeInterceptions(ctx context.Context, arg ListAIBridgeInterceptionsParams) ([]ListAIBridgeInterceptionsRow, error)
// Finds all unique AI Bridge interception telemetry summaries combinations
@@ -805,7 +817,13 @@ type sqlcQuerier interface {
ListProvisionerKeysByOrganizationExcludeReserved(ctx context.Context, organizationID uuid.UUID) ([]ProvisionerKey, error)
ListTasks(ctx context.Context, arg ListTasksParams) ([]Task, error)
ListUserChatCompactionThresholds(ctx context.Context, userID uuid.UUID) ([]UserConfig, error)
-ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]UserSecret, error)
+// Returns metadata only (no value or value_key_id) for the
+// REST API list and get endpoints.
+ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]ListUserSecretsRow, error)
+// Returns all columns including the secret value. Used by the
+// provisioner (build-time injection) and the agent manifest
+// (runtime injection).
+ListUserSecretsWithValues(ctx context.Context, userID uuid.UUID) ([]UserSecret, error)
ListWorkspaceAgentPortShares(ctx context.Context, workspaceID uuid.UUID) ([]WorkspaceAgentPortShare, error)
MarkAllInboxNotificationsAsRead(ctx context.Context, arg MarkAllInboxNotificationsAsReadParams) error
OIDCClaimFieldValues(ctx context.Context, arg OIDCClaimFieldValuesParams) ([]string, error)
@@ -857,9 +875,11 @@ type sqlcQuerier interface {
UpdateAPIKeyByID(ctx context.Context, arg UpdateAPIKeyByIDParams) error
UpdateChatBuildAgentBinding(ctx context.Context, arg UpdateChatBuildAgentBindingParams) (Chat, error)
UpdateChatByID(ctx context.Context, arg UpdateChatByIDParams) (Chat, error)
-// Bumps the heartbeat timestamp for a running chat so that other
-// replicas know the worker is still alive.
-UpdateChatHeartbeat(ctx context.Context, arg UpdateChatHeartbeatParams) (int64, error)
+// Bumps the heartbeat timestamp for the given set of chat IDs,
+// provided they are still running and owned by the specified
+// worker. Returns the IDs that were actually updated so the
+// caller can detect stolen or completed chats via set-difference.
+UpdateChatHeartbeats(ctx context.Context, arg UpdateChatHeartbeatsParams) ([]uuid.UUID, error)
UpdateChatLabelsByID(ctx context.Context, arg UpdateChatLabelsByIDParams) (Chat, error)
// Updates the cached injected context parts (AGENTS.md +
// skills) on the chat row. Called only when context changes
@@ -942,7 +962,7 @@ type sqlcQuerier interface {
UpdateUserProfile(ctx context.Context, arg UpdateUserProfileParams) (User, error)
UpdateUserQuietHoursSchedule(ctx context.Context, arg UpdateUserQuietHoursScheduleParams) (User, error)
UpdateUserRoles(ctx context.Context, arg UpdateUserRolesParams) (User, error)
-UpdateUserSecret(ctx context.Context, arg UpdateUserSecretParams) (UserSecret, error)
+UpdateUserSecretByUserIDAndName(ctx context.Context, arg UpdateUserSecretByUserIDAndNameParams) (UserSecret, error)
UpdateUserStatus(ctx context.Context, arg UpdateUserStatusParams) (User, error)
UpdateUserTaskNotificationAlertDismissed(ctx context.Context, arg UpdateUserTaskNotificationAlertDismissedParams) (bool, error)
UpdateUserTerminalFont(ctx context.Context, arg UpdateUserTerminalFontParams) (UserConfig, error)
+45 -30
@@ -7339,13 +7339,7 @@ func TestUserSecretsCRUDOperations(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, secretID, createdSecret.ID)
-// 2. READ by ID
-readSecret, err := db.GetUserSecret(ctx, createdSecret.ID)
-require.NoError(t, err)
-assert.Equal(t, createdSecret.ID, readSecret.ID)
-assert.Equal(t, "workflow-secret", readSecret.Name)
-// 3. READ by UserID and Name
+// 2. READ by UserID and Name
readByNameParams := database.GetUserSecretByUserIDAndNameParams{
UserID: testUser.ID,
Name: "workflow-secret",
@@ -7353,33 +7347,43 @@ func TestUserSecretsCRUDOperations(t *testing.T) {
readByNameSecret, err := db.GetUserSecretByUserIDAndName(ctx, readByNameParams)
require.NoError(t, err)
assert.Equal(t, createdSecret.ID, readByNameSecret.ID)
assert.Equal(t, "workflow-secret", readByNameSecret.Name)
-// 4. LIST
+// 3. LIST (metadata only)
secrets, err := db.ListUserSecrets(ctx, testUser.ID)
require.NoError(t, err)
require.Len(t, secrets, 1)
assert.Equal(t, createdSecret.ID, secrets[0].ID)
-// 5. UPDATE
-updateParams := database.UpdateUserSecretParams{
-ID: createdSecret.ID,
-Description: "Updated workflow description",
-Value: "updated-workflow-value",
-EnvName: "UPDATED_WORKFLOW_ENV",
-FilePath: "/updated/workflow/path",
+// 4. LIST with values
+secretsWithValues, err := db.ListUserSecretsWithValues(ctx, testUser.ID)
+require.NoError(t, err)
+require.Len(t, secretsWithValues, 1)
+assert.Equal(t, "workflow-value", secretsWithValues[0].Value)
+// 5. UPDATE (partial - only description)
+updateParams := database.UpdateUserSecretByUserIDAndNameParams{
+UserID: testUser.ID,
+Name: "workflow-secret",
+UpdateDescription: true,
+Description: "Updated workflow description",
}
-updatedSecret, err := db.UpdateUserSecret(ctx, updateParams)
+updatedSecret, err := db.UpdateUserSecretByUserIDAndName(ctx, updateParams)
require.NoError(t, err)
assert.Equal(t, "Updated workflow description", updatedSecret.Description)
-assert.Equal(t, "updated-workflow-value", updatedSecret.Value)
+assert.Equal(t, "workflow-value", updatedSecret.Value) // Value unchanged
+assert.Equal(t, "WORKFLOW_ENV", updatedSecret.EnvName) // EnvName unchanged
// 6. DELETE
-err = db.DeleteUserSecret(ctx, createdSecret.ID)
+err = db.DeleteUserSecretByUserIDAndName(ctx, database.DeleteUserSecretByUserIDAndNameParams{
+UserID: testUser.ID,
+Name: "workflow-secret",
+})
require.NoError(t, err)
// Verify deletion
-_, err = db.GetUserSecret(ctx, createdSecret.ID)
+_, err = db.GetUserSecretByUserIDAndName(ctx, readByNameParams)
require.Error(t, err)
assert.Contains(t, err.Error(), "no rows in result set")
@@ -7449,9 +7453,13 @@ func TestUserSecretsCRUDOperations(t *testing.T) {
})
// Verify both secrets exist
-_, err = db.GetUserSecret(ctx, secret1.ID)
+_, err = db.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
+UserID: testUser.ID, Name: secret1.Name,
+})
require.NoError(t, err)
-_, err = db.GetUserSecret(ctx, secret2.ID)
+_, err = db.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
+UserID: testUser.ID, Name: secret2.Name,
+})
require.NoError(t, err)
})
}
@@ -7474,14 +7482,14 @@ func TestUserSecretsAuthorization(t *testing.T) {
org := dbgen.Organization(t, db, database.Organization{})
// Create secrets for users
-user1Secret := dbgen.UserSecret(t, db, database.UserSecret{
+_ = dbgen.UserSecret(t, db, database.UserSecret{
UserID: user1.ID,
Name: "user1-secret",
Description: "User 1's secret",
Value: "user1-value",
})
-user2Secret := dbgen.UserSecret(t, db, database.UserSecret{
+_ = dbgen.UserSecret(t, db, database.UserSecret{
UserID: user2.ID,
Name: "user2-secret",
Description: "User 2's secret",
@@ -7491,7 +7499,8 @@ func TestUserSecretsAuthorization(t *testing.T) {
testCases := []struct {
name string
subject rbac.Subject
-secretID uuid.UUID
+lookupUserID uuid.UUID
+lookupName string
expectedAccess bool
}{
{
@@ -7501,7 +7510,8 @@ func TestUserSecretsAuthorization(t *testing.T) {
Roles: rbac.RoleIdentifiers{rbac.RoleMember()},
Scope: rbac.ScopeAll,
},
-secretID: user1Secret.ID,
+lookupUserID: user1.ID,
+lookupName: "user1-secret",
expectedAccess: true,
},
{
@@ -7511,7 +7521,8 @@ func TestUserSecretsAuthorization(t *testing.T) {
Roles: rbac.RoleIdentifiers{rbac.RoleMember()},
Scope: rbac.ScopeAll,
},
-secretID: user2Secret.ID,
+lookupUserID: user2.ID,
+lookupName: "user2-secret",
expectedAccess: false,
},
{
@@ -7521,7 +7532,8 @@ func TestUserSecretsAuthorization(t *testing.T) {
Roles: rbac.RoleIdentifiers{rbac.RoleOwner()},
Scope: rbac.ScopeAll,
},
-secretID: user1Secret.ID,
+lookupUserID: user1.ID,
+lookupName: "user1-secret",
expectedAccess: false,
},
{
@@ -7531,7 +7543,8 @@ func TestUserSecretsAuthorization(t *testing.T) {
Roles: rbac.RoleIdentifiers{rbac.ScopedRoleOrgAdmin(org.ID)},
Scope: rbac.ScopeAll,
},
-secretID: user1Secret.ID,
+lookupUserID: user1.ID,
+lookupName: "user1-secret",
expectedAccess: false,
},
}
@@ -7543,8 +7556,10 @@ func TestUserSecretsAuthorization(t *testing.T) {
authCtx := dbauthz.As(ctx, tc.subject)
// Test GetUserSecret
-_, err := authDB.GetUserSecret(authCtx, tc.secretID)
+_, err := authDB.GetUserSecretByUserIDAndName(authCtx, database.GetUserSecretByUserIDAndNameParams{
+UserID: tc.lookupUserID,
+Name: tc.lookupName,
+})
if tc.expectedAccess {
require.NoError(t, err, "expected access to be granted")
+482 -264
@@ -2275,93 +2275,105 @@ func (q *sqlQuerier) UpdateAPIKeyByID(ctx context.Context, arg UpdateAPIKeyByIDP
}
const countAuditLogs = `-- name: CountAuditLogs :one
SELECT COUNT(*)
FROM audit_logs
LEFT JOIN users ON audit_logs.user_id = users.id
LEFT JOIN organizations ON audit_logs.organization_id = organizations.id
-- First join on workspaces to get the initial workspace create
-- to workspace build 1 id. This is because the first create
-- is a different audit log than subsequent starts.
LEFT JOIN workspaces ON audit_logs.resource_type = 'workspace'
AND audit_logs.resource_id = workspaces.id
-- Get the reason from the build if the resource type
-- is a workspace_build
LEFT JOIN workspace_builds wb_build ON audit_logs.resource_type = 'workspace_build'
AND audit_logs.resource_id = wb_build.id
-- Get the reason from the build #1 if this is the first
-- workspace create.
LEFT JOIN workspace_builds wb_workspace ON audit_logs.resource_type = 'workspace'
AND audit_logs.action = 'create'
AND workspaces.id = wb_workspace.workspace_id
AND wb_workspace.build_number = 1
WHERE
-- Filter resource_type
CASE
WHEN $1::text != '' THEN resource_type = $1::resource_type
ELSE true
END
-- Filter resource_id
AND CASE
WHEN $2::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN resource_id = $2
ELSE true
END
-- Filter organization_id
AND CASE
WHEN $3::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.organization_id = $3
ELSE true
END
-- Filter by resource_target
AND CASE
WHEN $4::text != '' THEN resource_target = $4
ELSE true
END
-- Filter action
AND CASE
WHEN $5::text != '' THEN action = $5::audit_action
ELSE true
END
-- Filter by user_id
AND CASE
WHEN $6::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN user_id = $6
ELSE true
END
-- Filter by username
AND CASE
WHEN $7::text != '' THEN user_id = (
SELECT id
FROM users
WHERE lower(username) = lower($7)
AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN $8::text != '' THEN users.email = $8
ELSE true
END
-- Filter by date_from
AND CASE
WHEN $9::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" >= $9
ELSE true
END
-- Filter by date_to
AND CASE
WHEN $10::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" <= $10
ELSE true
END
-- Filter by build_reason
AND CASE
WHEN $11::text != '' THEN COALESCE(wb_build.reason::text, wb_workspace.reason::text) = $11
ELSE true
END
-- Filter request_id
AND CASE
WHEN $12::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.request_id = $12
ELSE true
END
-- Authorize Filter clause will be injected below in CountAuthorizedAuditLogs
-- @authorize_filter
SELECT COUNT(*) FROM (
SELECT 1
FROM audit_logs
LEFT JOIN users ON audit_logs.user_id = users.id
LEFT JOIN organizations ON audit_logs.organization_id = organizations.id
-- First join on workspaces to get the initial workspace create
-- to workspace build 1 id. This is because the first create
-- is a different audit log than subsequent starts.
LEFT JOIN workspaces ON audit_logs.resource_type = 'workspace'
AND audit_logs.resource_id = workspaces.id
-- Get the reason from the build if the resource type
-- is a workspace_build
LEFT JOIN workspace_builds wb_build ON audit_logs.resource_type = 'workspace_build'
AND audit_logs.resource_id = wb_build.id
-- Get the reason from the build #1 if this is the first
-- workspace create.
LEFT JOIN workspace_builds wb_workspace ON audit_logs.resource_type = 'workspace'
AND audit_logs.action = 'create'
AND workspaces.id = wb_workspace.workspace_id
AND wb_workspace.build_number = 1
WHERE
-- Filter resource_type
CASE
WHEN $1::text != '' THEN resource_type = $1::resource_type
ELSE true
END
-- Filter resource_id
AND CASE
WHEN $2::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN resource_id = $2
ELSE true
END
-- Filter organization_id
AND CASE
WHEN $3::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.organization_id = $3
ELSE true
END
-- Filter by resource_target
AND CASE
WHEN $4::text != '' THEN resource_target = $4
ELSE true
END
-- Filter action
AND CASE
WHEN $5::text != '' THEN action = $5::audit_action
ELSE true
END
-- Filter by user_id
AND CASE
WHEN $6::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN user_id = $6
ELSE true
END
-- Filter by username
AND CASE
WHEN $7::text != '' THEN user_id = (
SELECT id
FROM users
WHERE lower(username) = lower($7)
AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN $8::text != '' THEN users.email = $8
ELSE true
END
-- Filter by date_from
AND CASE
WHEN $9::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" >= $9
ELSE true
END
-- Filter by date_to
AND CASE
WHEN $10::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" <= $10
ELSE true
END
-- Filter by build_reason
AND CASE
WHEN $11::text != '' THEN COALESCE(wb_build.reason::text, wb_workspace.reason::text) = $11
ELSE true
END
-- Filter request_id
AND CASE
WHEN $12::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.request_id = $12
ELSE true
END
-- Authorize Filter clause will be injected below in CountAuthorizedAuditLogs
-- @authorize_filter
-- Avoid a slow scan on a large table with joins. The caller
-- passes the count cap and we add 1 so the frontend can detect
-- capping and show "... of N+". A cap of 0 means no limit (NULLIF
-- -> NULL + 1 = NULL).
-- NOTE: Parameterizing this so that we can easily change from,
-- e.g., 2000 to 5000. However, use literal NULL (or no LIMIT)
-- here if disabling the capping on a large table permanently.
-- This way the PG planner can plan parallel execution for
-- potential large wins.
LIMIT NULLIF($13::int, 0) + 1
) AS limited_count
`
type CountAuditLogsParams struct {
@@ -2377,6 +2389,7 @@ type CountAuditLogsParams struct {
DateTo time.Time `db:"date_to" json:"date_to"`
BuildReason string `db:"build_reason" json:"build_reason"`
RequestID uuid.UUID `db:"request_id" json:"request_id"`
CountCap int32 `db:"count_cap" json:"count_cap"`
}
func (q *sqlQuerier) CountAuditLogs(ctx context.Context, arg CountAuditLogsParams) (int64, error) {
@@ -2393,6 +2406,7 @@ func (q *sqlQuerier) CountAuditLogs(ctx context.Context, arg CountAuditLogsParam
arg.DateTo,
arg.BuildReason,
arg.RequestID,
arg.CountCap,
)
var count int64
err := row.Scan(&count)
@@ -2889,6 +2903,56 @@ func (q *sqlQuerier) GetChatFileByID(ctx context.Context, id uuid.UUID) (ChatFil
return i, err
}
const getChatFileMetadataByChatID = `-- name: GetChatFileMetadataByChatID :many
SELECT cf.id, cf.owner_id, cf.organization_id, cf.name, cf.mimetype, cf.created_at
FROM chat_files cf
JOIN chat_file_links cfl ON cfl.file_id = cf.id
WHERE cfl.chat_id = $1::uuid
ORDER BY cf.created_at ASC
`
type GetChatFileMetadataByChatIDRow struct {
ID uuid.UUID `db:"id" json:"id"`
OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
OrganizationID uuid.UUID `db:"organization_id" json:"organization_id"`
Name string `db:"name" json:"name"`
Mimetype string `db:"mimetype" json:"mimetype"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
}
// GetChatFileMetadataByChatID returns lightweight file metadata for
// all files linked to a chat. The data column is excluded to avoid
// loading file content.
func (q *sqlQuerier) GetChatFileMetadataByChatID(ctx context.Context, chatID uuid.UUID) ([]GetChatFileMetadataByChatIDRow, error) {
rows, err := q.db.QueryContext(ctx, getChatFileMetadataByChatID, chatID)
if err != nil {
return nil, err
}
defer rows.Close()
var items []GetChatFileMetadataByChatIDRow
for rows.Next() {
var i GetChatFileMetadataByChatIDRow
if err := rows.Scan(
&i.ID,
&i.OwnerID,
&i.OrganizationID,
&i.Name,
&i.Mimetype,
&i.CreatedAt,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getChatFilesByIDs = `-- name: GetChatFilesByIDs :many
SELECT id, owner_id, organization_id, created_at, name, mimetype, data FROM chat_files WHERE id = ANY($1::uuid[])
`
@@ -4530,7 +4594,8 @@ WITH chat_costs AS (
COALESCE(SUM(cm.input_tokens), 0)::bigint AS total_input_tokens,
COALESCE(SUM(cm.output_tokens), 0)::bigint AS total_output_tokens,
COALESCE(SUM(cm.cache_read_tokens), 0)::bigint AS total_cache_read_tokens,
-COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens
+COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens,
+COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM chat_messages cm
JOIN chats c ON c.id = cm.chat_id
WHERE c.owner_id = $1::uuid
@@ -4547,7 +4612,8 @@ SELECT
cc.total_input_tokens,
cc.total_output_tokens,
cc.total_cache_read_tokens,
-cc.total_cache_creation_tokens
+cc.total_cache_creation_tokens,
+cc.total_runtime_ms
FROM chat_costs cc
LEFT JOIN chats rc ON rc.id = cc.root_chat_id
ORDER BY cc.total_cost_micros DESC
@@ -4568,6 +4634,7 @@ type GetChatCostPerChatRow struct {
TotalOutputTokens int64 `db:"total_output_tokens" json:"total_output_tokens"`
TotalCacheReadTokens int64 `db:"total_cache_read_tokens" json:"total_cache_read_tokens"`
TotalCacheCreationTokens int64 `db:"total_cache_creation_tokens" json:"total_cache_creation_tokens"`
TotalRuntimeMs int64 `db:"total_runtime_ms" json:"total_runtime_ms"`
}
// Per-root-chat cost breakdown for a single user within a date range.
@@ -4591,6 +4658,7 @@ func (q *sqlQuerier) GetChatCostPerChat(ctx context.Context, arg GetChatCostPerC
&i.TotalOutputTokens,
&i.TotalCacheReadTokens,
&i.TotalCacheCreationTokens,
&i.TotalRuntimeMs,
); err != nil {
return nil, err
}
@@ -4622,7 +4690,8 @@ SELECT
COALESCE(SUM(cm.input_tokens), 0)::bigint AS total_input_tokens,
COALESCE(SUM(cm.output_tokens), 0)::bigint AS total_output_tokens,
COALESCE(SUM(cm.cache_read_tokens), 0)::bigint AS total_cache_read_tokens,
-COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens
+COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens,
+COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM
chat_messages cm
JOIN
@@ -4657,6 +4726,7 @@ type GetChatCostPerModelRow struct {
TotalOutputTokens int64 `db:"total_output_tokens" json:"total_output_tokens"`
TotalCacheReadTokens int64 `db:"total_cache_read_tokens" json:"total_cache_read_tokens"`
TotalCacheCreationTokens int64 `db:"total_cache_creation_tokens" json:"total_cache_creation_tokens"`
TotalRuntimeMs int64 `db:"total_runtime_ms" json:"total_runtime_ms"`
}
// Per-model cost breakdown for a single user within a date range.
@@ -4681,6 +4751,7 @@ func (q *sqlQuerier) GetChatCostPerModel(ctx context.Context, arg GetChatCostPer
&i.TotalOutputTokens,
&i.TotalCacheReadTokens,
&i.TotalCacheCreationTokens,
&i.TotalRuntimeMs,
); err != nil {
return nil, err
}
@@ -4714,7 +4785,8 @@ WITH chat_cost_users AS (
COALESCE(SUM(cm.input_tokens), 0)::bigint AS total_input_tokens,
COALESCE(SUM(cm.output_tokens), 0)::bigint AS total_output_tokens,
COALESCE(SUM(cm.cache_read_tokens), 0)::bigint AS total_cache_read_tokens,
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens,
COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM
chat_messages cm
JOIN
@@ -4748,6 +4820,7 @@ SELECT
total_output_tokens,
total_cache_read_tokens,
total_cache_creation_tokens,
total_runtime_ms,
COUNT(*) OVER()::bigint AS total_count
FROM
chat_cost_users
@@ -4780,6 +4853,7 @@ type GetChatCostPerUserRow struct {
TotalOutputTokens int64 `db:"total_output_tokens" json:"total_output_tokens"`
TotalCacheReadTokens int64 `db:"total_cache_read_tokens" json:"total_cache_read_tokens"`
TotalCacheCreationTokens int64 `db:"total_cache_creation_tokens" json:"total_cache_creation_tokens"`
TotalRuntimeMs int64 `db:"total_runtime_ms" json:"total_runtime_ms"`
TotalCount int64 `db:"total_count" json:"total_count"`
}
@@ -4812,6 +4886,7 @@ func (q *sqlQuerier) GetChatCostPerUser(ctx context.Context, arg GetChatCostPerU
&i.TotalOutputTokens,
&i.TotalCacheReadTokens,
&i.TotalCacheCreationTokens,
&i.TotalRuntimeMs,
&i.TotalCount,
); err != nil {
return nil, err
@@ -4846,7 +4921,8 @@ SELECT
COALESCE(SUM(cm.input_tokens), 0)::bigint AS total_input_tokens,
COALESCE(SUM(cm.output_tokens), 0)::bigint AS total_output_tokens,
COALESCE(SUM(cm.cache_read_tokens), 0)::bigint AS total_cache_read_tokens,
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens,
COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM
chat_messages cm
JOIN
@@ -4872,6 +4948,7 @@ type GetChatCostSummaryRow struct {
TotalOutputTokens int64 `db:"total_output_tokens" json:"total_output_tokens"`
TotalCacheReadTokens int64 `db:"total_cache_read_tokens" json:"total_cache_read_tokens"`
TotalCacheCreationTokens int64 `db:"total_cache_creation_tokens" json:"total_cache_creation_tokens"`
TotalRuntimeMs int64 `db:"total_runtime_ms" json:"total_runtime_ms"`
}
// Aggregate cost summary for a single user within a date range.
@@ -4887,6 +4964,7 @@ func (q *sqlQuerier) GetChatCostSummary(ctx context.Context, arg GetChatCostSumm
&i.TotalOutputTokens,
&i.TotalCacheReadTokens,
&i.TotalCacheCreationTokens,
&i.TotalRuntimeMs,
)
return i, err
}
@@ -6019,6 +6097,57 @@ func (q *sqlQuerier) InsertChatQueuedMessage(ctx context.Context, arg InsertChat
return i, err
}
const linkChatFiles = `-- name: LinkChatFiles :one
WITH current AS (
SELECT COUNT(*) AS cnt
FROM chat_file_links
WHERE chat_id = $1::uuid
),
new_links AS (
SELECT $1::uuid AS chat_id, unnest($2::uuid[]) AS file_id
),
genuinely_new AS (
SELECT nl.chat_id, nl.file_id
FROM new_links nl
WHERE NOT EXISTS (
SELECT 1 FROM chat_file_links cfl
WHERE cfl.chat_id = nl.chat_id AND cfl.file_id = nl.file_id
)
),
inserted AS (
INSERT INTO chat_file_links (chat_id, file_id)
SELECT gn.chat_id, gn.file_id
FROM genuinely_new gn, current c
WHERE c.cnt + (SELECT COUNT(*) FROM genuinely_new) <= $3::int
ON CONFLICT (chat_id, file_id) DO NOTHING
RETURNING file_id
)
SELECT
(SELECT COUNT(*)::int FROM genuinely_new) -
(SELECT COUNT(*)::int FROM inserted) AS rejected_new_files
`
type LinkChatFilesParams struct {
ChatID uuid.UUID `db:"chat_id" json:"chat_id"`
FileIds []uuid.UUID `db:"file_ids" json:"file_ids"`
MaxFileLinks int32 `db:"max_file_links" json:"max_file_links"`
}
// LinkChatFiles inserts file associations into the chat_file_links
// join table with deduplication (ON CONFLICT DO NOTHING). The INSERT
// is conditional: it only proceeds when the total number of links
// (existing + genuinely new) does not exceed max_file_links. Returns
// the number of genuinely new file IDs that were NOT inserted due to
// the cap. A return value of 0 means all files were linked (or were
// already linked). A positive value means the cap blocked that many
// new links.
func (q *sqlQuerier) LinkChatFiles(ctx context.Context, arg LinkChatFilesParams) (int32, error) {
row := q.db.QueryRowContext(ctx, linkChatFiles, arg.ChatID, pq.Array(arg.FileIds), arg.MaxFileLinks)
var rejected_new_files int32
err := row.Scan(&rejected_new_files)
return rejected_new_files, err
}
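The cap arithmetic that LinkChatFiles performs in SQL can be sketched in plain Go. This is a hypothetical helper for illustration only (`rejectedNewFiles` is not part of the generated code); it mirrors the all-or-nothing behavior of the CTE: if existing plus genuinely new links would exceed the cap, the INSERT is skipped entirely and every new link counts as rejected.

```go
package main

import "fmt"

// rejectedNewFiles mirrors the LinkChatFiles return value: given the
// count of existing links, the count of genuinely new file IDs, and
// the cap, it returns how many new links the cap would block. The SQL
// is all-or-nothing: when the total would exceed max_file_links, none
// of the new links are inserted, so all of them are rejected.
func rejectedNewFiles(existing, genuinelyNew, maxFileLinks int) int {
	if existing+genuinelyNew <= maxFileLinks {
		return 0 // everything fits; all new links inserted
	}
	return genuinelyNew // cap exceeded; the conditional INSERT is skipped
}

func main() {
	fmt.Println(rejectedNewFiles(3, 2, 10))
	fmt.Println(rejectedNewFiles(9, 2, 10))
}
```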
const listChatUsageLimitGroupOverrides = `-- name: ListChatUsageLimitGroupOverrides :many
SELECT
g.id AS group_id,
@@ -6486,30 +6615,49 @@ func (q *sqlQuerier) UpdateChatByID(ctx context.Context, arg UpdateChatByIDParam
return i, err
}
const updateChatHeartbeat = `-- name: UpdateChatHeartbeat :execrows
const updateChatHeartbeats = `-- name: UpdateChatHeartbeats :many
UPDATE
chats
SET
heartbeat_at = NOW()
heartbeat_at = $1::timestamptz
WHERE
id = $1::uuid
AND worker_id = $2::uuid
id = ANY($2::uuid[])
AND worker_id = $3::uuid
AND status = 'running'::chat_status
RETURNING id
`
type UpdateChatHeartbeatParams struct {
ID uuid.UUID `db:"id" json:"id"`
WorkerID uuid.UUID `db:"worker_id" json:"worker_id"`
type UpdateChatHeartbeatsParams struct {
Now time.Time `db:"now" json:"now"`
IDs []uuid.UUID `db:"ids" json:"ids"`
WorkerID uuid.UUID `db:"worker_id" json:"worker_id"`
}
// Bumps the heartbeat timestamp for a running chat so that other
// replicas know the worker is still alive.
func (q *sqlQuerier) UpdateChatHeartbeat(ctx context.Context, arg UpdateChatHeartbeatParams) (int64, error) {
result, err := q.db.ExecContext(ctx, updateChatHeartbeat, arg.ID, arg.WorkerID)
// Bumps the heartbeat timestamp for the given set of chat IDs,
// provided they are still running and owned by the specified
// worker. Returns the IDs that were actually updated so the
// caller can detect stolen or completed chats via set-difference.
func (q *sqlQuerier) UpdateChatHeartbeats(ctx context.Context, arg UpdateChatHeartbeatsParams) ([]uuid.UUID, error) {
rows, err := q.db.QueryContext(ctx, updateChatHeartbeats, arg.Now, pq.Array(arg.IDs), arg.WorkerID)
if err != nil {
return 0, err
return nil, err
}
return result.RowsAffected()
defer rows.Close()
var items []uuid.UUID
for rows.Next() {
var id uuid.UUID
if err := rows.Scan(&id); err != nil {
return nil, err
}
items = append(items, id)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
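The set-difference the doc comment describes can be sketched as follows. This is a hypothetical helper, not part of the generated code, and it uses string IDs in place of `uuid.UUID` so the sketch stays stdlib-only; the idea is identical: any ID sent in the heartbeat batch but absent from the returned slice belongs to a chat that was stolen by another worker or has completed.

```go
package main

import "fmt"

// staleChatIDs returns the IDs that were included in the heartbeat
// batch but not returned by UpdateChatHeartbeats, i.e. chats no
// longer running under this worker (stolen or completed).
func staleChatIDs(sent, updated []string) []string {
	ok := make(map[string]struct{}, len(updated))
	for _, id := range updated {
		ok[id] = struct{}{}
	}
	var stale []string
	for _, id := range sent {
		if _, found := ok[id]; !found {
			stale = append(stale, id)
		}
	}
	return stale
}

func main() {
	fmt.Println(staleChatIDs([]string{"a", "b", "c"}, []string{"a", "c"}))
}
```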
const updateChatLabelsByID = `-- name: UpdateChatLabelsByID :one
@@ -7456,110 +7604,113 @@ func (q *sqlQuerier) BatchUpsertConnectionLogs(ctx context.Context, arg BatchUps
}
const countConnectionLogs = `-- name: CountConnectionLogs :one
SELECT
COUNT(*) AS count
FROM
connection_logs
JOIN users AS workspace_owner ON
connection_logs.workspace_owner_id = workspace_owner.id
LEFT JOIN users ON
connection_logs.user_id = users.id
JOIN organizations ON
connection_logs.organization_id = organizations.id
WHERE
-- Filter organization_id
CASE
WHEN $1 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.organization_id = $1
ELSE true
END
-- Filter by workspace owner username
AND CASE
WHEN $2 :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE lower(username) = lower($2) AND deleted = false
)
ELSE true
END
-- Filter by workspace_owner_id
AND CASE
WHEN $3 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
workspace_owner_id = $3
ELSE true
END
-- Filter by workspace_owner_email
AND CASE
WHEN $4 :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE email = $4 AND deleted = false
)
ELSE true
END
-- Filter by type
AND CASE
WHEN $5 :: text != '' THEN
type = $5 :: connection_type
ELSE true
END
-- Filter by user_id
AND CASE
WHEN $6 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
user_id = $6
ELSE true
END
-- Filter by username
AND CASE
WHEN $7 :: text != '' THEN
user_id = (
SELECT id FROM users
WHERE lower(username) = lower($7) AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN $8 :: text != '' THEN
users.email = $8
ELSE true
END
-- Filter by connected_after
AND CASE
WHEN $9 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time >= $9
ELSE true
END
-- Filter by connected_before
AND CASE
WHEN $10 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time <= $10
ELSE true
END
-- Filter by workspace_id
AND CASE
WHEN $11 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.workspace_id = $11
ELSE true
END
-- Filter by connection_id
AND CASE
WHEN $12 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.connection_id = $12
ELSE true
END
-- Filter by whether the session has a disconnect_time
AND CASE
WHEN $13 :: text != '' THEN
(($13 = 'ongoing' AND disconnect_time IS NULL) OR
($13 = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
ELSE true
END
-- Authorize Filter clause will be injected below in
-- CountAuthorizedConnectionLogs
-- @authorize_filter
SELECT COUNT(*) AS count FROM (
SELECT 1
FROM
connection_logs
JOIN users AS workspace_owner ON
connection_logs.workspace_owner_id = workspace_owner.id
LEFT JOIN users ON
connection_logs.user_id = users.id
JOIN organizations ON
connection_logs.organization_id = organizations.id
WHERE
-- Filter organization_id
CASE
WHEN $1 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.organization_id = $1
ELSE true
END
-- Filter by workspace owner username
AND CASE
WHEN $2 :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE lower(username) = lower($2) AND deleted = false
)
ELSE true
END
-- Filter by workspace_owner_id
AND CASE
WHEN $3 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
workspace_owner_id = $3
ELSE true
END
-- Filter by workspace_owner_email
AND CASE
WHEN $4 :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE email = $4 AND deleted = false
)
ELSE true
END
-- Filter by type
AND CASE
WHEN $5 :: text != '' THEN
type = $5 :: connection_type
ELSE true
END
-- Filter by user_id
AND CASE
WHEN $6 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
user_id = $6
ELSE true
END
-- Filter by username
AND CASE
WHEN $7 :: text != '' THEN
user_id = (
SELECT id FROM users
WHERE lower(username) = lower($7) AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN $8 :: text != '' THEN
users.email = $8
ELSE true
END
-- Filter by connected_after
AND CASE
WHEN $9 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time >= $9
ELSE true
END
-- Filter by connected_before
AND CASE
WHEN $10 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time <= $10
ELSE true
END
-- Filter by workspace_id
AND CASE
WHEN $11 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.workspace_id = $11
ELSE true
END
-- Filter by connection_id
AND CASE
WHEN $12 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.connection_id = $12
ELSE true
END
-- Filter by whether the session has a disconnect_time
AND CASE
WHEN $13 :: text != '' THEN
(($13 = 'ongoing' AND disconnect_time IS NULL) OR
($13 = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
ELSE true
END
-- Authorize Filter clause will be injected below in
-- CountAuthorizedConnectionLogs
-- @authorize_filter
-- NOTE: See the CountAuditLogs LIMIT note.
LIMIT NULLIF($14::int, 0) + 1
) AS limited_count
`
type CountConnectionLogsParams struct {
@@ -7576,6 +7727,7 @@ type CountConnectionLogsParams struct {
WorkspaceID uuid.UUID `db:"workspace_id" json:"workspace_id"`
ConnectionID uuid.UUID `db:"connection_id" json:"connection_id"`
Status string `db:"status" json:"status"`
CountCap int32 `db:"count_cap" json:"count_cap"`
}
func (q *sqlQuerier) CountConnectionLogs(ctx context.Context, arg CountConnectionLogsParams) (int64, error) {
@@ -7593,6 +7745,7 @@ func (q *sqlQuerier) CountConnectionLogs(ctx context.Context, arg CountConnectio
arg.WorkspaceID,
arg.ConnectionID,
arg.Status,
arg.CountCap,
)
var count int64
err := row.Scan(&count)
@@ -22486,21 +22639,30 @@ INSERT INTO user_secrets (
name,
description,
value,
value_key_id,
env_name,
file_path
) VALUES (
$1, $2, $3, $4, $5, $6, $7
$1,
$2,
$3,
$4,
$5,
$6,
$7,
$8
) RETURNING id, user_id, name, description, value, env_name, file_path, created_at, updated_at, value_key_id
`
type CreateUserSecretParams struct {
ID uuid.UUID `db:"id" json:"id"`
UserID uuid.UUID `db:"user_id" json:"user_id"`
Name string `db:"name" json:"name"`
Description string `db:"description" json:"description"`
Value string `db:"value" json:"value"`
EnvName string `db:"env_name" json:"env_name"`
FilePath string `db:"file_path" json:"file_path"`
ID uuid.UUID `db:"id" json:"id"`
UserID uuid.UUID `db:"user_id" json:"user_id"`
Name string `db:"name" json:"name"`
Description string `db:"description" json:"description"`
Value string `db:"value" json:"value"`
ValueKeyID sql.NullString `db:"value_key_id" json:"value_key_id"`
EnvName string `db:"env_name" json:"env_name"`
FilePath string `db:"file_path" json:"file_path"`
}
func (q *sqlQuerier) CreateUserSecret(ctx context.Context, arg CreateUserSecretParams) (UserSecret, error) {
@@ -22510,6 +22672,7 @@ func (q *sqlQuerier) CreateUserSecret(ctx context.Context, arg CreateUserSecretP
arg.Name,
arg.Description,
arg.Value,
arg.ValueKeyID,
arg.EnvName,
arg.FilePath,
)
@@ -22529,41 +22692,24 @@ func (q *sqlQuerier) CreateUserSecret(ctx context.Context, arg CreateUserSecretP
return i, err
}
const deleteUserSecret = `-- name: DeleteUserSecret :exec
const deleteUserSecretByUserIDAndName = `-- name: DeleteUserSecretByUserIDAndName :exec
DELETE FROM user_secrets
WHERE id = $1
WHERE user_id = $1 AND name = $2
`
func (q *sqlQuerier) DeleteUserSecret(ctx context.Context, id uuid.UUID) error {
_, err := q.db.ExecContext(ctx, deleteUserSecret, id)
type DeleteUserSecretByUserIDAndNameParams struct {
UserID uuid.UUID `db:"user_id" json:"user_id"`
Name string `db:"name" json:"name"`
}
func (q *sqlQuerier) DeleteUserSecretByUserIDAndName(ctx context.Context, arg DeleteUserSecretByUserIDAndNameParams) error {
_, err := q.db.ExecContext(ctx, deleteUserSecretByUserIDAndName, arg.UserID, arg.Name)
return err
}
const getUserSecret = `-- name: GetUserSecret :one
SELECT id, user_id, name, description, value, env_name, file_path, created_at, updated_at, value_key_id FROM user_secrets
WHERE id = $1
`
func (q *sqlQuerier) GetUserSecret(ctx context.Context, id uuid.UUID) (UserSecret, error) {
row := q.db.QueryRowContext(ctx, getUserSecret, id)
var i UserSecret
err := row.Scan(
&i.ID,
&i.UserID,
&i.Name,
&i.Description,
&i.Value,
&i.EnvName,
&i.FilePath,
&i.CreatedAt,
&i.UpdatedAt,
&i.ValueKeyID,
)
return i, err
}
const getUserSecretByUserIDAndName = `-- name: GetUserSecretByUserIDAndName :one
SELECT id, user_id, name, description, value, env_name, file_path, created_at, updated_at, value_key_id FROM user_secrets
SELECT id, user_id, name, description, value, env_name, file_path, created_at, updated_at, value_key_id
FROM user_secrets
WHERE user_id = $1 AND name = $2
`
@@ -22591,17 +22737,76 @@ func (q *sqlQuerier) GetUserSecretByUserIDAndName(ctx context.Context, arg GetUs
}
const listUserSecrets = `-- name: ListUserSecrets :many
SELECT id, user_id, name, description, value, env_name, file_path, created_at, updated_at, value_key_id FROM user_secrets
SELECT
id, user_id, name, description,
env_name, file_path,
created_at, updated_at
FROM user_secrets
WHERE user_id = $1
ORDER BY name ASC
`
func (q *sqlQuerier) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]UserSecret, error) {
type ListUserSecretsRow struct {
ID uuid.UUID `db:"id" json:"id"`
UserID uuid.UUID `db:"user_id" json:"user_id"`
Name string `db:"name" json:"name"`
Description string `db:"description" json:"description"`
EnvName string `db:"env_name" json:"env_name"`
FilePath string `db:"file_path" json:"file_path"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
}
// Returns metadata only (no value or value_key_id) for the
// REST API list and get endpoints.
func (q *sqlQuerier) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]ListUserSecretsRow, error) {
rows, err := q.db.QueryContext(ctx, listUserSecrets, userID)
if err != nil {
return nil, err
}
defer rows.Close()
var items []ListUserSecretsRow
for rows.Next() {
var i ListUserSecretsRow
if err := rows.Scan(
&i.ID,
&i.UserID,
&i.Name,
&i.Description,
&i.EnvName,
&i.FilePath,
&i.CreatedAt,
&i.UpdatedAt,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const listUserSecretsWithValues = `-- name: ListUserSecretsWithValues :many
SELECT id, user_id, name, description, value, env_name, file_path, created_at, updated_at, value_key_id
FROM user_secrets
WHERE user_id = $1
ORDER BY name ASC
`
// Returns all columns including the secret value. Used by the
// provisioner (build-time injection) and the agent manifest
// (runtime injection).
func (q *sqlQuerier) ListUserSecretsWithValues(ctx context.Context, userID uuid.UUID) ([]UserSecret, error) {
rows, err := q.db.QueryContext(ctx, listUserSecretsWithValues, userID)
if err != nil {
return nil, err
}
defer rows.Close()
var items []UserSecret
for rows.Next() {
var i UserSecret
@@ -22630,33 +22835,46 @@ func (q *sqlQuerier) ListUserSecrets(ctx context.Context, userID uuid.UUID) ([]U
return items, nil
}
const updateUserSecret = `-- name: UpdateUserSecret :one
const updateUserSecretByUserIDAndName = `-- name: UpdateUserSecretByUserIDAndName :one
UPDATE user_secrets
SET
description = $2,
value = $3,
env_name = $4,
file_path = $5,
updated_at = CURRENT_TIMESTAMP
WHERE id = $1
value = CASE WHEN $1::bool THEN $2 ELSE value END,
value_key_id = CASE WHEN $1::bool THEN $3 ELSE value_key_id END,
description = CASE WHEN $4::bool THEN $5 ELSE description END,
env_name = CASE WHEN $6::bool THEN $7 ELSE env_name END,
file_path = CASE WHEN $8::bool THEN $9 ELSE file_path END,
updated_at = CURRENT_TIMESTAMP
WHERE user_id = $10 AND name = $11
RETURNING id, user_id, name, description, value, env_name, file_path, created_at, updated_at, value_key_id
`
type UpdateUserSecretParams struct {
ID uuid.UUID `db:"id" json:"id"`
Description string `db:"description" json:"description"`
Value string `db:"value" json:"value"`
EnvName string `db:"env_name" json:"env_name"`
FilePath string `db:"file_path" json:"file_path"`
type UpdateUserSecretByUserIDAndNameParams struct {
UpdateValue bool `db:"update_value" json:"update_value"`
Value string `db:"value" json:"value"`
ValueKeyID sql.NullString `db:"value_key_id" json:"value_key_id"`
UpdateDescription bool `db:"update_description" json:"update_description"`
Description string `db:"description" json:"description"`
UpdateEnvName bool `db:"update_env_name" json:"update_env_name"`
EnvName string `db:"env_name" json:"env_name"`
UpdateFilePath bool `db:"update_file_path" json:"update_file_path"`
FilePath string `db:"file_path" json:"file_path"`
UserID uuid.UUID `db:"user_id" json:"user_id"`
Name string `db:"name" json:"name"`
}
func (q *sqlQuerier) UpdateUserSecret(ctx context.Context, arg UpdateUserSecretParams) (UserSecret, error) {
row := q.db.QueryRowContext(ctx, updateUserSecret,
arg.ID,
arg.Description,
func (q *sqlQuerier) UpdateUserSecretByUserIDAndName(ctx context.Context, arg UpdateUserSecretByUserIDAndNameParams) (UserSecret, error) {
row := q.db.QueryRowContext(ctx, updateUserSecretByUserIDAndName,
arg.UpdateValue,
arg.Value,
arg.ValueKeyID,
arg.UpdateDescription,
arg.Description,
arg.UpdateEnvName,
arg.EnvName,
arg.UpdateFilePath,
arg.FilePath,
arg.UserID,
arg.Name,
)
var i UserSecret
err := row.Scan(
@@ -149,94 +149,105 @@ VALUES (
RETURNING *;
-- name: CountAuditLogs :one
SELECT COUNT(*)
FROM audit_logs
LEFT JOIN users ON audit_logs.user_id = users.id
LEFT JOIN organizations ON audit_logs.organization_id = organizations.id
-- First join on workspaces to get the initial workspace create
-- to workspace build 1 id. This is because the first create is
-- is a different audit log than subsequent starts.
LEFT JOIN workspaces ON audit_logs.resource_type = 'workspace'
AND audit_logs.resource_id = workspaces.id
-- Get the reason from the build if the resource type
-- is a workspace_build
LEFT JOIN workspace_builds wb_build ON audit_logs.resource_type = 'workspace_build'
AND audit_logs.resource_id = wb_build.id
-- Get the reason from the build #1 if this is the first
-- workspace create.
LEFT JOIN workspace_builds wb_workspace ON audit_logs.resource_type = 'workspace'
AND audit_logs.action = 'create'
AND workspaces.id = wb_workspace.workspace_id
AND wb_workspace.build_number = 1
WHERE
-- Filter resource_type
CASE
WHEN @resource_type::text != '' THEN resource_type = @resource_type::resource_type
ELSE true
END
-- Filter resource_id
AND CASE
WHEN @resource_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN resource_id = @resource_id
ELSE true
END
-- Filter organization_id
AND CASE
WHEN @organization_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.organization_id = @organization_id
ELSE true
END
-- Filter by resource_target
AND CASE
WHEN @resource_target::text != '' THEN resource_target = @resource_target
ELSE true
END
-- Filter action
AND CASE
WHEN @action::text != '' THEN action = @action::audit_action
ELSE true
END
-- Filter by user_id
AND CASE
WHEN @user_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN user_id = @user_id
ELSE true
END
-- Filter by username
AND CASE
WHEN @username::text != '' THEN user_id = (
SELECT id
FROM users
WHERE lower(username) = lower(@username)
AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN @email::text != '' THEN users.email = @email
ELSE true
END
-- Filter by date_from
AND CASE
WHEN @date_from::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" >= @date_from
ELSE true
END
-- Filter by date_to
AND CASE
WHEN @date_to::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" <= @date_to
ELSE true
END
-- Filter by build_reason
AND CASE
WHEN @build_reason::text != '' THEN COALESCE(wb_build.reason::text, wb_workspace.reason::text) = @build_reason
ELSE true
END
-- Filter request_id
AND CASE
WHEN @request_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.request_id = @request_id
ELSE true
END
-- Authorize Filter clause will be injected below in CountAuthorizedAuditLogs
-- @authorize_filter
;
SELECT COUNT(*) FROM (
SELECT 1
FROM audit_logs
LEFT JOIN users ON audit_logs.user_id = users.id
LEFT JOIN organizations ON audit_logs.organization_id = organizations.id
-- First join on workspaces to get the initial workspace create
-- to workspace build 1 id. This is because the first create
-- is a different audit log than subsequent starts.
LEFT JOIN workspaces ON audit_logs.resource_type = 'workspace'
AND audit_logs.resource_id = workspaces.id
-- Get the reason from the build if the resource type
-- is a workspace_build
LEFT JOIN workspace_builds wb_build ON audit_logs.resource_type = 'workspace_build'
AND audit_logs.resource_id = wb_build.id
-- Get the reason from the build #1 if this is the first
-- workspace create.
LEFT JOIN workspace_builds wb_workspace ON audit_logs.resource_type = 'workspace'
AND audit_logs.action = 'create'
AND workspaces.id = wb_workspace.workspace_id
AND wb_workspace.build_number = 1
WHERE
-- Filter resource_type
CASE
WHEN @resource_type::text != '' THEN resource_type = @resource_type::resource_type
ELSE true
END
-- Filter resource_id
AND CASE
WHEN @resource_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN resource_id = @resource_id
ELSE true
END
-- Filter organization_id
AND CASE
WHEN @organization_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.organization_id = @organization_id
ELSE true
END
-- Filter by resource_target
AND CASE
WHEN @resource_target::text != '' THEN resource_target = @resource_target
ELSE true
END
-- Filter action
AND CASE
WHEN @action::text != '' THEN action = @action::audit_action
ELSE true
END
-- Filter by user_id
AND CASE
WHEN @user_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN user_id = @user_id
ELSE true
END
-- Filter by username
AND CASE
WHEN @username::text != '' THEN user_id = (
SELECT id
FROM users
WHERE lower(username) = lower(@username)
AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN @email::text != '' THEN users.email = @email
ELSE true
END
-- Filter by date_from
AND CASE
WHEN @date_from::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" >= @date_from
ELSE true
END
-- Filter by date_to
AND CASE
WHEN @date_to::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" <= @date_to
ELSE true
END
-- Filter by build_reason
AND CASE
WHEN @build_reason::text != '' THEN COALESCE(wb_build.reason::text, wb_workspace.reason::text) = @build_reason
ELSE true
END
-- Filter request_id
AND CASE
WHEN @request_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.request_id = @request_id
ELSE true
END
-- Authorize Filter clause will be injected below in CountAuthorizedAuditLogs
-- @authorize_filter
-- Avoid a slow scan on a large table with joins. The caller
-- passes the count cap and we add 1 so the frontend can detect
-- capping and show "... of N+". A cap of 0 means no limit (NULLIF
-- -> NULL + 1 = NULL).
-- NOTE: Parameterizing this so that we can easily change from,
-- e.g., 2000 to 5000. However, use literal NULL (or no LIMIT)
-- here if disabling the capping on a large table permanently.
-- This way the PG planner can plan parallel execution for
-- potential large wins.
LIMIT NULLIF(@count_cap::int, 0) + 1
) AS limited_count;
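The cap-plus-one trick in the comment above can be interpreted on the caller side roughly like this. `displayCount` is a hypothetical helper (not from this codebase): because the query counts at most cap+1 rows, any result above the cap means the true total was capped, and a cap of 0 disables the limit entirely.

```go
package main

import "fmt"

// displayCount interprets a count produced by a query using
// LIMIT NULLIF(cap, 0) + 1: a result above the cap means the true
// total was truncated, so the UI shows "N+" rather than an exact
// figure. A cap of 0 means no limit was applied.
func displayCount(count, countCap int64) string {
	if countCap > 0 && count > countCap {
		return fmt.Sprintf("%d+", countCap)
	}
	return fmt.Sprintf("%d", count)
}

func main() {
	fmt.Println(displayCount(2001, 2000))
	fmt.Println(displayCount(1500, 2000))
	fmt.Println(displayCount(7, 0))
}
```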
-- name: DeleteOldAuditLogConnectionEvents :exec
DELETE FROM audit_logs
@@ -8,3 +8,13 @@ SELECT * FROM chat_files WHERE id = @id::uuid;
-- name: GetChatFilesByIDs :many
SELECT * FROM chat_files WHERE id = ANY(@ids::uuid[]);
-- name: GetChatFileMetadataByChatID :many
-- GetChatFileMetadataByChatID returns lightweight file metadata for
-- all files linked to a chat. The data column is excluded to avoid
-- loading file content.
SELECT cf.id, cf.owner_id, cf.organization_id, cf.name, cf.mimetype, cf.created_at
FROM chat_files cf
JOIN chat_file_links cfl ON cfl.file_id = cf.id
WHERE cfl.chat_id = @chat_id::uuid
ORDER BY cf.created_at ASC;
@@ -567,6 +567,43 @@ WHERE
RETURNING
*;
-- name: LinkChatFiles :one
-- LinkChatFiles inserts file associations into the chat_file_links
-- join table with deduplication (ON CONFLICT DO NOTHING). The INSERT
-- is conditional: it only proceeds when the total number of links
-- (existing + genuinely new) does not exceed max_file_links. Returns
-- the number of genuinely new file IDs that were NOT inserted due to
-- the cap. A return value of 0 means all files were linked (or were
-- already linked). A positive value means the cap blocked that many
-- new links.
WITH current AS (
SELECT COUNT(*) AS cnt
FROM chat_file_links
WHERE chat_id = @chat_id::uuid
),
new_links AS (
SELECT @chat_id::uuid AS chat_id, unnest(@file_ids::uuid[]) AS file_id
),
genuinely_new AS (
SELECT nl.chat_id, nl.file_id
FROM new_links nl
WHERE NOT EXISTS (
SELECT 1 FROM chat_file_links cfl
WHERE cfl.chat_id = nl.chat_id AND cfl.file_id = nl.file_id
)
),
inserted AS (
INSERT INTO chat_file_links (chat_id, file_id)
SELECT gn.chat_id, gn.file_id
FROM genuinely_new gn, current c
WHERE c.cnt + (SELECT COUNT(*) FROM genuinely_new) <= @max_file_links::int
ON CONFLICT (chat_id, file_id) DO NOTHING
RETURNING file_id
)
SELECT
(SELECT COUNT(*)::int FROM genuinely_new) -
(SELECT COUNT(*)::int FROM inserted) AS rejected_new_files;
-- name: AcquireChats :many
-- Acquires up to @num_chats pending chats for processing. Uses SKIP LOCKED
-- to prevent multiple replicas from acquiring the same chat.
@@ -637,17 +674,20 @@ WHERE
status = 'running'::chat_status
AND heartbeat_at < @stale_threshold::timestamptz;
-- name: UpdateChatHeartbeat :execrows
-- Bumps the heartbeat timestamp for a running chat so that other
-- replicas know the worker is still alive.
-- name: UpdateChatHeartbeats :many
-- Bumps the heartbeat timestamp for the given set of chat IDs,
-- provided they are still running and owned by the specified
-- worker. Returns the IDs that were actually updated so the
-- caller can detect stolen or completed chats via set-difference.
UPDATE
chats
SET
heartbeat_at = NOW()
heartbeat_at = @now::timestamptz
WHERE
id = @id::uuid
id = ANY(@ids::uuid[])
AND worker_id = @worker_id::uuid
AND status = 'running'::chat_status;
AND status = 'running'::chat_status
RETURNING id;
-- name: GetChatDiffStatusByChatID :one
SELECT
@@ -883,7 +923,8 @@ SELECT
COALESCE(SUM(cm.input_tokens), 0)::bigint AS total_input_tokens,
COALESCE(SUM(cm.output_tokens), 0)::bigint AS total_output_tokens,
COALESCE(SUM(cm.cache_read_tokens), 0)::bigint AS total_cache_read_tokens,
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens,
COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM
chat_messages cm
JOIN
@@ -913,7 +954,8 @@ SELECT
COALESCE(SUM(cm.input_tokens), 0)::bigint AS total_input_tokens,
COALESCE(SUM(cm.output_tokens), 0)::bigint AS total_output_tokens,
COALESCE(SUM(cm.cache_read_tokens), 0)::bigint AS total_cache_read_tokens,
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens,
COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM
chat_messages cm
JOIN
@@ -948,7 +990,8 @@ WITH chat_costs AS (
COALESCE(SUM(cm.input_tokens), 0)::bigint AS total_input_tokens,
COALESCE(SUM(cm.output_tokens), 0)::bigint AS total_output_tokens,
COALESCE(SUM(cm.cache_read_tokens), 0)::bigint AS total_cache_read_tokens,
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens,
COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM chat_messages cm
JOIN chats c ON c.id = cm.chat_id
WHERE c.owner_id = @owner_id::uuid
@@ -965,7 +1008,8 @@ SELECT
cc.total_input_tokens,
cc.total_output_tokens,
cc.total_cache_read_tokens,
cc.total_cache_creation_tokens
cc.total_cache_creation_tokens,
cc.total_runtime_ms
FROM chat_costs cc
LEFT JOIN chats rc ON rc.id = cc.root_chat_id
ORDER BY cc.total_cost_micros DESC;
@@ -991,7 +1035,8 @@ WITH chat_cost_users AS (
COALESCE(SUM(cm.input_tokens), 0)::bigint AS total_input_tokens,
COALESCE(SUM(cm.output_tokens), 0)::bigint AS total_output_tokens,
COALESCE(SUM(cm.cache_read_tokens), 0)::bigint AS total_cache_read_tokens,
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens
COALESCE(SUM(cm.cache_creation_tokens), 0)::bigint AS total_cache_creation_tokens,
COALESCE(SUM(cm.runtime_ms), 0)::bigint AS total_runtime_ms
FROM
chat_messages cm
JOIN
@@ -1025,6 +1070,7 @@ SELECT
total_output_tokens,
total_cache_read_tokens,
total_cache_creation_tokens,
total_runtime_ms,
COUNT(*) OVER()::bigint AS total_count
FROM
chat_cost_users
@@ -133,111 +133,113 @@ OFFSET
@offset_opt;
-- name: CountConnectionLogs :one
SELECT
COUNT(*) AS count
FROM
connection_logs
JOIN users AS workspace_owner ON
connection_logs.workspace_owner_id = workspace_owner.id
LEFT JOIN users ON
connection_logs.user_id = users.id
JOIN organizations ON
connection_logs.organization_id = organizations.id
WHERE
-- Filter organization_id
CASE
WHEN @organization_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.organization_id = @organization_id
ELSE true
END
-- Filter by workspace owner username
AND CASE
WHEN @workspace_owner :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE lower(username) = lower(@workspace_owner) AND deleted = false
)
ELSE true
END
-- Filter by workspace_owner_id
AND CASE
WHEN @workspace_owner_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
workspace_owner_id = @workspace_owner_id
ELSE true
END
-- Filter by workspace_owner_email
AND CASE
WHEN @workspace_owner_email :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE email = @workspace_owner_email AND deleted = false
)
ELSE true
END
-- Filter by type
AND CASE
WHEN @type :: text != '' THEN
type = @type :: connection_type
ELSE true
END
-- Filter by user_id
AND CASE
WHEN @user_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
user_id = @user_id
ELSE true
END
-- Filter by username
AND CASE
WHEN @username :: text != '' THEN
user_id = (
SELECT id FROM users
WHERE lower(username) = lower(@username) AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN @user_email :: text != '' THEN
users.email = @user_email
ELSE true
END
-- Filter by connected_after
AND CASE
WHEN @connected_after :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time >= @connected_after
ELSE true
END
-- Filter by connected_before
AND CASE
WHEN @connected_before :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time <= @connected_before
ELSE true
END
-- Filter by workspace_id
AND CASE
WHEN @workspace_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.workspace_id = @workspace_id
ELSE true
END
-- Filter by connection_id
AND CASE
WHEN @connection_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.connection_id = @connection_id
ELSE true
END
-- Filter by whether the session has a disconnect_time
AND CASE
WHEN @status :: text != '' THEN
((@status = 'ongoing' AND disconnect_time IS NULL) OR
(@status = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
ELSE true
END
-- Authorize Filter clause will be injected below in
-- CountAuthorizedConnectionLogs
-- @authorize_filter
;
SELECT COUNT(*) AS count FROM (
SELECT 1
FROM
connection_logs
JOIN users AS workspace_owner ON
connection_logs.workspace_owner_id = workspace_owner.id
LEFT JOIN users ON
connection_logs.user_id = users.id
JOIN organizations ON
connection_logs.organization_id = organizations.id
WHERE
-- Filter organization_id
CASE
WHEN @organization_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.organization_id = @organization_id
ELSE true
END
-- Filter by workspace owner username
AND CASE
WHEN @workspace_owner :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE lower(username) = lower(@workspace_owner) AND deleted = false
)
ELSE true
END
-- Filter by workspace_owner_id
AND CASE
WHEN @workspace_owner_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
workspace_owner_id = @workspace_owner_id
ELSE true
END
-- Filter by workspace_owner_email
AND CASE
WHEN @workspace_owner_email :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE email = @workspace_owner_email AND deleted = false
)
ELSE true
END
-- Filter by type
AND CASE
WHEN @type :: text != '' THEN
type = @type :: connection_type
ELSE true
END
-- Filter by user_id
AND CASE
WHEN @user_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
user_id = @user_id
ELSE true
END
-- Filter by username
AND CASE
WHEN @username :: text != '' THEN
user_id = (
SELECT id FROM users
WHERE lower(username) = lower(@username) AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN @user_email :: text != '' THEN
users.email = @user_email
ELSE true
END
-- Filter by connected_after
AND CASE
WHEN @connected_after :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time >= @connected_after
ELSE true
END
-- Filter by connected_before
AND CASE
WHEN @connected_before :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time <= @connected_before
ELSE true
END
-- Filter by workspace_id
AND CASE
WHEN @workspace_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.workspace_id = @workspace_id
ELSE true
END
-- Filter by connection_id
AND CASE
WHEN @connection_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.connection_id = @connection_id
ELSE true
END
-- Filter by whether the session has a disconnect_time
AND CASE
WHEN @status :: text != '' THEN
((@status = 'ongoing' AND disconnect_time IS NULL) OR
(@status = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
ELSE true
END
-- Authorize Filter clause will be injected below in
-- CountAuthorizedConnectionLogs
-- @authorize_filter
-- NOTE: See the CountAuditLogs LIMIT note.
LIMIT NULLIF(@count_cap::int, 0) + 1
) AS limited_count;
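The `LIMIT NULLIF(@count_cap::int, 0) + 1` trick above scans at most cap+1 rows (and removes the limit entirely when the cap is 0, since `LIMIT NULL` means unlimited), so a caller can distinguish "exactly N" from "more than cap" without a full table scan. A minimal Go sketch of how a caller might interpret such a result — `interpretCappedCount` is a hypothetical helper for illustration, not code from this diff:

```go
package main

import "fmt"

// interpretCappedCount renders a count produced by a query that is
// LIMITed to maxCount+1 rows: any value above maxCount means the
// true total was truncated, so we show "maxCount+" instead of a
// misleading exact number. maxCount == 0 means no cap was applied.
func interpretCappedCount(count, maxCount int64) string {
	if maxCount > 0 && count > maxCount {
		return fmt.Sprintf("%d+", maxCount)
	}
	return fmt.Sprintf("%d", count)
}

func main() {
	fmt.Println(interpretCappedCount(42, 100))  // under the cap: exact
	fmt.Println(interpretCappedCount(101, 100)) // cap+1 rows seen: truncated
	fmt.Println(interpretCappedCount(7, 0))     // no cap configured
}
```

The inner `SELECT 1 ... LIMIT` subquery is what keeps the row scan bounded; counting the outer `limited_count` relation is then cheap.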
-- name: DeleteOldConnectionLogs :execrows
WITH old_logs AS (
@@ -1,14 +1,26 @@
-- name: GetUserSecretByUserIDAndName :one
SELECT * FROM user_secrets
WHERE user_id = $1 AND name = $2;
-- name: GetUserSecret :one
SELECT * FROM user_secrets
WHERE id = $1;
SELECT *
FROM user_secrets
WHERE user_id = @user_id AND name = @name;
-- name: ListUserSecrets :many
SELECT * FROM user_secrets
WHERE user_id = $1
-- Returns metadata only (no value or value_key_id) for the
-- REST API list and get endpoints.
SELECT
id, user_id, name, description,
env_name, file_path,
created_at, updated_at
FROM user_secrets
WHERE user_id = @user_id
ORDER BY name ASC;
-- name: ListUserSecretsWithValues :many
-- Returns all columns including the secret value. Used by the
-- provisioner (build-time injection) and the agent manifest
-- (runtime injection).
SELECT *
FROM user_secrets
WHERE user_id = @user_id
ORDER BY name ASC;
-- name: CreateUserSecret :one
@@ -18,23 +30,32 @@ INSERT INTO user_secrets (
name,
description,
value,
value_key_id,
env_name,
file_path
) VALUES (
$1, $2, $3, $4, $5, $6, $7
@id,
@user_id,
@name,
@description,
@value,
@value_key_id,
@env_name,
@file_path
) RETURNING *;
-- name: UpdateUserSecret :one
-- name: UpdateUserSecretByUserIDAndName :one
UPDATE user_secrets
SET
description = $2,
value = $3,
env_name = $4,
file_path = $5,
updated_at = CURRENT_TIMESTAMP
WHERE id = $1
value = CASE WHEN @update_value::bool THEN @value ELSE value END,
value_key_id = CASE WHEN @update_value::bool THEN @value_key_id ELSE value_key_id END,
description = CASE WHEN @update_description::bool THEN @description ELSE description END,
env_name = CASE WHEN @update_env_name::bool THEN @env_name ELSE env_name END,
file_path = CASE WHEN @update_file_path::bool THEN @file_path ELSE file_path END,
updated_at = CURRENT_TIMESTAMP
WHERE user_id = @user_id AND name = @name
RETURNING *;
-- name: DeleteUserSecret :exec
-- name: DeleteUserSecretByUserIDAndName :exec
DELETE FROM user_secrets
WHERE id = $1;
WHERE user_id = @user_id AND name = @name;
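The rewritten `UpdateUserSecretByUserIDAndName` uses a `CASE WHEN @update_x::bool THEN @x ELSE x END` guard per column, so one query supports partial updates: only columns whose flag is set are overwritten. A small Go sketch of the same selection logic, evaluated in memory — `pick` is a hypothetical illustration of the pattern, not code from this diff:

```go
package main

import "fmt"

// pick mirrors the SQL guard
//   CASE WHEN @update::bool THEN @new ELSE old END
// returning the new value only when the caller flagged the column
// for update, and the existing value otherwise.
func pick(update bool, newVal, oldVal string) string {
	if update {
		return newVal
	}
	return oldVal
}

func main() {
	// Only the description is flagged; the stored value is untouched
	// even though the caller passed an empty string for it.
	desc := pick(true, "rotated token", "old description")
	value := pick(false, "", "s3cr3t")
	fmt.Println(desc, "/", value)
}
```

This is why the params struct needs paired fields (`UpdateValue` + `Value`, and so on): an empty string alone cannot distinguish "clear this column" from "leave it alone".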
@@ -247,6 +247,7 @@ sql:
mcp_server_tool_snapshots: MCPServerToolSnapshots
mcp_server_config_id: MCPServerConfigID
mcp_server_ids: MCPServerIDs
max_file_links: MaxFileLinks
icon_url: IconURL
oauth2_client_id: OAuth2ClientID
oauth2_client_secret: OAuth2ClientSecret
@@ -16,6 +16,7 @@ const (
UniqueAuditLogsPkey UniqueConstraint = "audit_logs_pkey" // ALTER TABLE ONLY audit_logs ADD CONSTRAINT audit_logs_pkey PRIMARY KEY (id);
UniqueBoundaryUsageStatsPkey UniqueConstraint = "boundary_usage_stats_pkey" // ALTER TABLE ONLY boundary_usage_stats ADD CONSTRAINT boundary_usage_stats_pkey PRIMARY KEY (replica_id);
UniqueChatDiffStatusesPkey UniqueConstraint = "chat_diff_statuses_pkey" // ALTER TABLE ONLY chat_diff_statuses ADD CONSTRAINT chat_diff_statuses_pkey PRIMARY KEY (chat_id);
UniqueChatFileLinksChatIDFileIDKey UniqueConstraint = "chat_file_links_chat_id_file_id_key" // ALTER TABLE ONLY chat_file_links ADD CONSTRAINT chat_file_links_chat_id_file_id_key UNIQUE (chat_id, file_id);
UniqueChatFilesPkey UniqueConstraint = "chat_files_pkey" // ALTER TABLE ONLY chat_files ADD CONSTRAINT chat_files_pkey PRIMARY KEY (id);
UniqueChatMessagesPkey UniqueConstraint = "chat_messages_pkey" // ALTER TABLE ONLY chat_messages ADD CONSTRAINT chat_messages_pkey PRIMARY KEY (id);
UniqueChatModelConfigsPkey UniqueConstraint = "chat_model_configs_pkey" // ALTER TABLE ONLY chat_model_configs ADD CONSTRAINT chat_model_configs_pkey PRIMARY KEY (id);
@@ -403,7 +403,17 @@ func (api *API) postChats(rw http.ResponseWriter, r *http.Request) {
return
}
contentBlocks, titleSource, inputError := createChatInputFromRequest(ctx, api.Database, req)
// Validate per-chat system prompt length.
const maxSystemPromptLen = 10000
if len(req.SystemPrompt) > maxSystemPromptLen {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "System prompt exceeds maximum length.",
Detail: fmt.Sprintf("System prompt must be at most %d characters, got %d.", maxSystemPromptLen, len(req.SystemPrompt)),
})
return
}
contentBlocks, titleSource, fileIDs, inputError := createChatInputFromRequest(ctx, api.Database, req)
if inputError != nil {
httpapi.Write(ctx, rw, http.StatusBadRequest, *inputError)
return
@@ -483,7 +493,7 @@ func (api *API) postChats(rw http.ResponseWriter, r *http.Request) {
WorkspaceID: workspaceSelection.WorkspaceID,
Title: title,
ModelConfigID: modelConfigID,
SystemPrompt: api.resolvedChatSystemPrompt(ctx),
SystemPrompt: req.SystemPrompt,
InitialUserContent: contentBlocks,
MCPServerIDs: mcpServerIDs,
Labels: labels,
@@ -514,7 +524,32 @@ func (api *API) postChats(rw http.ResponseWriter, r *http.Request) {
return
}
httpapi.Write(ctx, rw, http.StatusCreated, db2sdk.Chat(chat, nil))
// Link any user-uploaded files referenced in the initial
// message to this newly created chat (best-effort; cap
// enforced in SQL).
unlinked, capExceeded := api.linkFilesToChat(ctx, chat.ID, fileIDs)
// Re-read the chat so the response reflects the authoritative
// database state (file links are deduped in the join table).
chat, err = api.Database.GetChatByID(ctx, chat.ID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to read back chat after creation.",
Detail: err.Error(),
})
return
}
chatFiles := api.fetchChatFileMetadata(ctx, chat.ID)
response := db2sdk.Chat(chat, nil, chatFiles)
if len(unlinked) > 0 {
if capExceeded {
response.Warnings = append(response.Warnings, fileLinkCapWarning(len(unlinked)))
} else {
response.Warnings = append(response.Warnings, fileLinkErrorWarning(len(unlinked)))
}
}
httpapi.Write(ctx, rw, http.StatusCreated, response)
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
@@ -717,6 +752,7 @@ func (api *API) chatCostSummary(rw http.ResponseWriter, r *http.Request) {
TotalOutputTokens: summary.TotalOutputTokens,
TotalCacheReadTokens: summary.TotalCacheReadTokens,
TotalCacheCreationTokens: summary.TotalCacheCreationTokens,
TotalRuntimeMs: summary.TotalRuntimeMs,
ByModel: modelBreakdowns,
ByChat: chatBreakdowns,
}
@@ -1290,7 +1326,11 @@ func (api *API) getChat(rw http.ResponseWriter, r *http.Request) {
slog.Error(err),
)
}
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(chat, diffStatus))
// Hydrate file metadata for all files linked to this chat.
chatFiles := api.fetchChatFileMetadata(ctx, chat.ID)
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(chat, diffStatus, chatFiles))
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
@@ -1780,7 +1820,7 @@ func (api *API) postChatMessages(rw http.ResponseWriter, r *http.Request) {
return
}
contentBlocks, _, inputError := createChatInputFromParts(ctx, api.Database, req.Content, "content")
contentBlocks, _, fileIDs, inputError := createChatInputFromParts(ctx, api.Database, req.Content, "content")
if inputError != nil {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: inputError.Message,
@@ -1819,6 +1859,20 @@ func (api *API) postChatMessages(rw http.ResponseWriter, r *http.Request) {
}
}
busyBehavior := chatd.SendMessageBusyBehaviorQueue
switch req.BusyBehavior {
case codersdk.ChatBusyBehaviorInterrupt:
busyBehavior = chatd.SendMessageBusyBehaviorInterrupt
case codersdk.ChatBusyBehaviorQueue, "":
// Default to queue.
default:
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "Invalid busy_behavior value.",
Detail: `Must be "queue" or "interrupt".`,
})
return
}
sendResult, sendErr := api.chatDaemon.SendMessage(
ctx,
chatd.SendMessageOptions{
@@ -1826,7 +1880,7 @@ func (api *API) postChatMessages(rw http.ResponseWriter, r *http.Request) {
CreatedBy: apiKey.UserID,
Content: contentBlocks,
ModelConfigID: req.ModelConfigID,
BusyBehavior: chatd.SendMessageBusyBehaviorQueue,
BusyBehavior: busyBehavior,
MCPServerIDs: req.MCPServerIDs,
},
)
@@ -1848,6 +1902,9 @@ func (api *API) postChatMessages(rw http.ResponseWriter, r *http.Request) {
return
}
// Link any user-uploaded files referenced in this message
// to the chat (best-effort; cap enforced in SQL).
unlinked, capExceeded := api.linkFilesToChat(ctx, chatID, fileIDs)
response := codersdk.CreateChatMessageResponse{Queued: sendResult.Queued}
if sendResult.Queued {
if sendResult.QueuedMessage != nil {
@@ -1857,6 +1914,13 @@ func (api *API) postChatMessages(rw http.ResponseWriter, r *http.Request) {
message := convertChatMessage(sendResult.Message)
response.Message = &message
}
if len(unlinked) > 0 {
if capExceeded {
response.Warnings = append(response.Warnings, fileLinkCapWarning(len(unlinked)))
} else {
response.Warnings = append(response.Warnings, fileLinkErrorWarning(len(unlinked)))
}
}
httpapi.Write(ctx, rw, http.StatusOK, response)
}
@@ -1890,7 +1954,7 @@ func (api *API) patchChatMessage(rw http.ResponseWriter, r *http.Request) {
return
}
contentBlocks, _, inputError := createChatInputFromParts(ctx, api.Database, req.Content, "content")
contentBlocks, _, fileIDs, inputError := createChatInputFromParts(ctx, api.Database, req.Content, "content")
if inputError != nil {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: inputError.Message,
@@ -1929,8 +1993,20 @@ func (api *API) patchChatMessage(rw http.ResponseWriter, r *http.Request) {
return
}
message := convertChatMessage(editResult.Message)
httpapi.Write(ctx, rw, http.StatusOK, message)
// Link any user-uploaded files referenced in the edited
// message to the chat (best-effort; cap enforced in SQL).
unlinked, capExceeded := api.linkFilesToChat(ctx, chat.ID, fileIDs)
response := codersdk.EditChatMessageResponse{
Message: convertChatMessage(editResult.Message),
}
if len(unlinked) > 0 {
if capExceeded {
response.Warnings = append(response.Warnings, fileLinkCapWarning(len(unlinked)))
} else {
response.Warnings = append(response.Warnings, fileLinkErrorWarning(len(unlinked)))
}
}
httpapi.Write(ctx, rw, http.StatusOK, response)
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
@@ -2207,7 +2283,7 @@ func (api *API) interruptChat(rw http.ResponseWriter, r *http.Request) {
chat = updatedChat
}
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(chat, nil))
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(chat, nil, nil))
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
@@ -2251,7 +2327,7 @@ func (api *API) regenerateChatTitle(rw http.ResponseWriter, r *http.Request) {
return
}
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(updatedChat, nil))
httpapi.Write(ctx, rw, http.StatusOK, db2sdk.Chat(updatedChat, nil, nil))
}
// EXPERIMENTAL: this endpoint is experimental and is subject to change.
@@ -3476,35 +3552,6 @@ func (api *API) deleteUserChatCompactionThreshold(rw http.ResponseWriter, r *htt
rw.WriteHeader(http.StatusNoContent)
}
func (api *API) resolvedChatSystemPrompt(ctx context.Context) string {
config, err := api.Database.GetChatSystemPromptConfig(ctx)
if err != nil {
// We intentionally fail open here. When the prompt configuration
// cannot be read, returning the built-in default keeps the chat
// grounded instead of sending no system guidance at all.
api.Logger.Error(ctx, "failed to fetch chat system prompt configuration, using default", slog.Error(err))
return chatd.DefaultSystemPrompt
}
sanitizedCustom := chatd.SanitizePromptText(config.ChatSystemPrompt)
if sanitizedCustom == "" && strings.TrimSpace(config.ChatSystemPrompt) != "" {
api.Logger.Warn(ctx, "custom system prompt became empty after sanitization, omitting custom portion")
}
var parts []string
if config.IncludeDefaultSystemPrompt {
parts = append(parts, chatd.DefaultSystemPrompt)
}
if sanitizedCustom != "" {
parts = append(parts, sanitizedCustom)
}
result := strings.Join(parts, "\n\n")
if result == "" {
api.Logger.Warn(ctx, "resolved system prompt is empty, no system prompt will be injected into chats")
}
return result
}
func (api *API) postChatFile(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
apiKey := httpmw.APIKey(r)
@@ -3692,6 +3739,7 @@ func (api *API) chatFileByID(rw http.ResponseWriter, r *http.Request) {
func createChatInputFromRequest(ctx context.Context, db database.Store, req codersdk.CreateChatRequest) (
[]codersdk.ChatMessagePart,
string,
[]uuid.UUID,
*codersdk.Response,
) {
return createChatInputFromParts(ctx, db, req.Content, "content")
@@ -3702,14 +3750,15 @@ func createChatInputFromParts(
db database.Store,
parts []codersdk.ChatInputPart,
fieldName string,
) ([]codersdk.ChatMessagePart, string, *codersdk.Response) {
) ([]codersdk.ChatMessagePart, string, []uuid.UUID, *codersdk.Response) {
if len(parts) == 0 {
return nil, "", &codersdk.Response{
return nil, "", nil, &codersdk.Response{
Message: "Content is required.",
Detail: "Content cannot be empty.",
}
}
var fileIDs []uuid.UUID
content := make([]codersdk.ChatMessagePart, 0, len(parts))
textParts := make([]string, 0, len(parts))
for i, part := range parts {
@@ -3717,7 +3766,7 @@ func createChatInputFromParts(
case string(codersdk.ChatInputPartTypeText):
text := strings.TrimSpace(part.Text)
if text == "" {
return nil, "", &codersdk.Response{
return nil, "", nil, &codersdk.Response{
Message: "Invalid input part.",
Detail: fmt.Sprintf("%s[%d].text cannot be empty.", fieldName, i),
}
@@ -3726,7 +3775,7 @@ func createChatInputFromParts(
textParts = append(textParts, text)
case string(codersdk.ChatInputPartTypeFile):
if part.FileID == uuid.Nil {
return nil, "", &codersdk.Response{
return nil, "", nil, &codersdk.Response{
Message: "Invalid input part.",
Detail: fmt.Sprintf("%s[%d].file_id is required for file parts.", fieldName, i),
}
@@ -3737,20 +3786,23 @@ func createChatInputFromParts(
chatFile, err := db.GetChatFileByID(ctx, part.FileID)
if err != nil {
if httpapi.Is404Error(err) {
return nil, "", &codersdk.Response{
return nil, "", nil, &codersdk.Response{
Message: "Invalid input part.",
Detail: fmt.Sprintf("%s[%d].file_id references a file that does not exist.", fieldName, i),
}
}
return nil, "", &codersdk.Response{
return nil, "", nil, &codersdk.Response{
Message: "Internal error.",
Detail: fmt.Sprintf("Failed to retrieve file for %s[%d].", fieldName, i),
}
}
content = append(content, codersdk.ChatMessageFile(part.FileID, chatFile.Mimetype))
fileIDs = append(fileIDs, part.FileID)
// File-reference parts carry inline code snippets, not uploaded
// files. They have no FileID and are excluded from file tracking.
case string(codersdk.ChatInputPartTypeFileReference):
if part.FileName == "" {
return nil, "", &codersdk.Response{
return nil, "", nil, &codersdk.Response{
Message: "Invalid input part.",
Detail: fmt.Sprintf("%s[%d].file_name cannot be empty for file-reference.", fieldName, i),
}
@@ -3767,21 +3819,8 @@ func createChatInputFromParts(
_, _ = fmt.Fprintf(&sb, "\n```%s\n%s\n```", part.FileName, strings.TrimSpace(part.Content))
}
textParts = append(textParts, sb.String())
case string(codersdk.ChatInputPartTypeSkill):
if part.SkillName == "" {
return nil, "", &codersdk.Response{
Message: "Invalid input part.",
Detail: fmt.Sprintf("%s[%d].skill_name cannot be empty for skill parts.", fieldName, i),
}
}
content = append(content, codersdk.ChatMessagePart{
Type: codersdk.ChatMessagePartTypeSkill,
SkillName: part.SkillName,
SkillDescription: part.SkillDescription,
})
textParts = append(textParts, fmt.Sprintf("Use the %q skill", part.SkillName))
default:
return nil, "", &codersdk.Response{
return nil, "", nil, &codersdk.Response{
Message: "Invalid input part.",
Detail: fmt.Sprintf(
"%s[%d].type %q is not supported.",
@@ -3796,13 +3835,13 @@ func createChatInputFromParts(
// Allow file-only messages. The titleSource may be empty
// when only file parts are provided, callers handle this.
if len(content) == 0 {
return nil, "", &codersdk.Response{
return nil, "", nil, &codersdk.Response{
Message: "Content is required.",
Detail: fmt.Sprintf("%s must include at least one text or file part.", fieldName),
}
}
titleSource := strings.TrimSpace(strings.Join(textParts, " "))
return content, titleSource, nil
return content, titleSource, fileIDs, nil
}
func chatTitleFromMessage(message string) string {
@@ -3837,6 +3876,70 @@ func truncateRunes(value string, maxLen int) string {
return string(runes[:maxLen])
}
// linkFilesToChat inserts file-link rows into the chat_file_links
// join table. Cap enforcement and dedup are handled atomically in
// SQL. On success returns (nil, false). On failure returns the full
// input fileIDs slice — linking is all-or-nothing because the
// SQL operates on the batch atomically. capExceeded indicates
// whether the failure was due to the cap being exceeded (true)
// or a database error (false).
// Failures are logged but never block the caller.
func (api *API) linkFilesToChat(ctx context.Context, chatID uuid.UUID, fileIDs []uuid.UUID) (unlinked []uuid.UUID, capExceeded bool) {
if len(fileIDs) == 0 {
return nil, false
}
rejected, err := api.Database.LinkChatFiles(ctx, database.LinkChatFilesParams{
ChatID: chatID,
MaxFileLinks: int32(codersdk.MaxChatFileIDs),
FileIds: fileIDs,
})
if err != nil {
api.Logger.Error(ctx, "failed to link files to chat",
slog.F("chat_id", chatID),
slog.F("file_ids", fileIDs),
slog.Error(err),
)
return fileIDs, false
}
if rejected > 0 {
api.Logger.Warn(ctx, "file cap reached, files not linked",
slog.F("chat_id", chatID),
slog.F("file_ids", fileIDs),
slog.F("max_file_links", codersdk.MaxChatFileIDs),
)
return fileIDs, true
}
return nil, false
}
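The contract documented above — dedup against already-linked IDs, then reject the whole batch atomically if linking the remainder would exceed the cap — can be modeled in memory. The sketch below is a hypothetical simplification of what the `LinkChatFiles` SQL enforces, useful for reasoning about the test cases later in this diff (re-linking at cap is a no-op; an overflow batch rejects everything):

```go
package main

import "fmt"

// linkBatch is an in-memory model of the cap semantics: IDs already
// in linked are deduped out first, and if adding the remaining fresh
// IDs would push the total past maxLinks, the entire batch is
// rejected (all-or-nothing) and nothing is linked.
func linkBatch(linked map[string]bool, batch []string, maxLinks int) (rejected int) {
	fresh := make([]string, 0, len(batch))
	for _, id := range batch {
		if !linked[id] {
			fresh = append(fresh, id)
		}
	}
	if len(linked)+len(fresh) > maxLinks {
		return len(batch) // reject the whole batch, link nothing
	}
	for _, id := range fresh {
		linked[id] = true
	}
	return 0
}

func main() {
	linked := map[string]bool{"a": true}
	fmt.Println(linkBatch(linked, []string{"a"}, 2))      // dedup no-op
	fmt.Println(linkBatch(linked, []string{"b", "c"}, 2)) // would exceed cap
	fmt.Println(linkBatch(linked, []string{"b"}, 2))      // fits exactly
}
```

Note that `rejected > 0` signals total failure here, matching why `linkFilesToChat` returns the full input slice on rejection rather than a partial remainder.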
// fileLinkCapWarning builds a user-facing warning when a batch
// of file IDs was atomically rejected because the resulting
// array would exceed the per-chat file cap.
func fileLinkCapWarning(count int) string {
return fmt.Sprintf("file linking skipped: batch of %d file(s) would exceed limit of %d", count, codersdk.MaxChatFileIDs)
}
// fileLinkErrorWarning builds a user-facing warning when a
// database error prevented linking files to a chat.
func fileLinkErrorWarning(count int) string {
return fmt.Sprintf("%d file(s) could not be linked due to a server error", count)
}
// fetchChatFileMetadata returns metadata for all files linked to
// the given chat. Errors are logged and result in a nil return
// (callers treat file metadata as best-effort).
func (api *API) fetchChatFileMetadata(ctx context.Context, chatID uuid.UUID) []database.GetChatFileMetadataByChatIDRow {
rows, err := api.Database.GetChatFileMetadataByChatID(ctx, chatID)
if err != nil {
api.Logger.Error(ctx, "failed to fetch chat file metadata",
slog.F("chat_id", chatID),
slog.Error(err),
)
return nil
}
return rows
}
func convertChatCostModelBreakdown(model database.GetChatCostPerModelRow) codersdk.ChatCostModelBreakdown {
displayName := strings.TrimSpace(model.DisplayName)
if displayName == "" {
@@ -3853,6 +3956,7 @@ func convertChatCostModelBreakdown(model database.GetChatCostPerModelRow) coders
TotalOutputTokens: model.TotalOutputTokens,
TotalCacheReadTokens: model.TotalCacheReadTokens,
TotalCacheCreationTokens: model.TotalCacheCreationTokens,
TotalRuntimeMs: model.TotalRuntimeMs,
}
}
@@ -3866,6 +3970,7 @@ func convertChatCostChatBreakdown(chat database.GetChatCostPerChatRow) codersdk.
TotalOutputTokens: chat.TotalOutputTokens,
TotalCacheReadTokens: chat.TotalCacheReadTokens,
TotalCacheCreationTokens: chat.TotalCacheCreationTokens,
TotalRuntimeMs: chat.TotalRuntimeMs,
}
}
@@ -3882,6 +3987,7 @@ func convertChatCostUserRollup(user database.GetChatCostPerUserRow) codersdk.Cha
TotalOutputTokens: user.TotalOutputTokens,
TotalCacheReadTokens: user.TotalCacheReadTokens,
TotalCacheCreationTokens: user.TotalCacheCreationTokens,
TotalRuntimeMs: user.TotalRuntimeMs,
}
}
@@ -313,6 +313,111 @@ func TestPostChats(t *testing.T) {
}
})
t.Run("WithPerChatSystemPrompt", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, db := newChatClientWithDatabase(t)
_ = coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "hello with system prompt",
},
},
SystemPrompt: "You are a Go expert.",
})
require.NoError(t, err)
require.NotEqual(t, uuid.Nil, chat.ID)
// Use the DB directly to see system messages, which are
// hidden from the public API.
dbMessages, err := db.GetChatMessagesForPromptByChatID(dbauthz.AsSystemRestricted(ctx), chat.ID)
require.NoError(t, err)
// Expect: deployment system prompt, per-chat system prompt,
// workspace awareness, user message.
var systemMessages []database.ChatMessage
for _, msg := range dbMessages {
if msg.Role == database.ChatMessageRoleSystem {
systemMessages = append(systemMessages, msg)
}
}
require.GreaterOrEqual(t, len(systemMessages), 2,
"expected at least deployment + per-chat system messages")
// The per-chat system prompt should be the second system
// message and contain the user-specified text.
foundPerChat := false
for _, msg := range systemMessages {
if msg.Content.Valid {
raw := string(msg.Content.RawMessage)
if strings.Contains(raw, "You are a Go expert.") {
foundPerChat = true
break
}
}
}
require.True(t, foundPerChat,
"per-chat system prompt not found in system messages")
})
t.Run("PerChatSystemPromptEmpty", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, db := newChatClientWithDatabase(t)
_ = coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "hello without system prompt",
},
},
SystemPrompt: "",
})
require.NoError(t, err)
dbMessages, err := db.GetChatMessagesForPromptByChatID(dbauthz.AsSystemRestricted(ctx), chat.ID)
require.NoError(t, err)
// No per-chat system prompt should be present.
for _, msg := range dbMessages {
if msg.Role == database.ChatMessageRoleSystem && msg.Content.Valid {
raw := string(msg.Content.RawMessage)
require.NotContains(t, raw, "You are a Go expert.",
"unexpected per-chat system prompt in messages")
}
}
})
t.Run("PerChatSystemPromptTooLong", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
_ = coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
longPrompt := strings.Repeat("a", 10001)
_, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "hello",
},
},
SystemPrompt: longPrompt,
})
requireSDKError(t, err, http.StatusBadRequest)
})
t.Run("WorkspaceNotAccessible", func(t *testing.T) {
t.Parallel()
@@ -3122,6 +3227,153 @@ func TestGetChat(t *testing.T) {
_, err = otherClient.GetChat(ctx, createdChat.ID)
requireSDKError(t, err, http.StatusNotFound)
})
t.Run("FilesHydrated", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Upload a file.
pngData := append([]byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A}, make([]byte, 64)...)
uploadResp, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", "hydrated.png", bytes.NewReader(pngData))
require.NoError(t, err)
// Create a chat with a text + file part.
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "check file hydration"},
{Type: codersdk.ChatInputPartTypeFile, FileID: uploadResp.ID},
},
})
require.NoError(t, err)
// GET the chat — files must be hydrated with all metadata fields.
chatResult, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, 1)
f := chatResult.Files[0]
require.Equal(t, uploadResp.ID, f.ID)
require.Equal(t, firstUser.UserID, f.OwnerID)
require.NotEqual(t, uuid.Nil, f.OrganizationID)
require.Equal(t, "image/png", f.MimeType)
require.Equal(t, "hydrated.png", f.Name)
require.NotZero(t, f.CreatedAt)
})
// ToolCreatedFilesLinked exercises the DB path that chatd uses
// when a tool (e.g. propose_plan) creates a file: InsertChatFile
// then LinkChatFiles. This is a DB-level test because driving
// the full chatd tool-call pipeline requires an LLM mock.
t.Run("ToolCreatedFilesLinked", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, store := newChatClientWithDatabase(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Create a chat via the API so all metadata is set up.
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "tool file test"},
},
})
require.NoError(t, err)
// Mimic what chatd's StoreFile closure does:
// 1. InsertChatFile
// 2. LinkChatFiles
//nolint:gocritic // Using AsChatd to mimic the chatd background worker.
chatdCtx := dbauthz.AsChatd(ctx)
fileRow, err := store.InsertChatFile(chatdCtx, database.InsertChatFileParams{
OwnerID: firstUser.UserID,
OrganizationID: firstUser.OrganizationID,
Name: "plan.md",
Mimetype: "text/markdown",
Data: []byte("# Plan"),
})
require.NoError(t, err)
rejected, err := store.LinkChatFiles(chatdCtx, database.LinkChatFilesParams{
ChatID: chat.ID,
MaxFileLinks: int32(codersdk.MaxChatFileIDs),
FileIds: []uuid.UUID{fileRow.ID},
})
require.NoError(t, err)
require.Equal(t, int32(0), rejected, "0 rejected = all files linked")
// Verify via the API that the file appears in the chat.
chatResult, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, 1)
f := chatResult.Files[0]
require.Equal(t, fileRow.ID, f.ID)
require.Equal(t, firstUser.UserID, f.OwnerID)
require.Equal(t, firstUser.OrganizationID, f.OrganizationID)
require.Equal(t, "plan.md", f.Name)
require.Equal(t, "text/markdown", f.MimeType)
// Fill up to the cap by inserting more files via the
// chatd DB path, then verify the cap is enforced.
for i := 1; i < codersdk.MaxChatFileIDs; i++ {
extra, err := store.InsertChatFile(chatdCtx, database.InsertChatFileParams{
OwnerID: firstUser.UserID,
OrganizationID: firstUser.OrganizationID,
Name: fmt.Sprintf("file%d.md", i),
Mimetype: "text/markdown",
Data: []byte("data"),
})
require.NoError(t, err)
_, err = store.LinkChatFiles(chatdCtx, database.LinkChatFilesParams{
ChatID: chat.ID,
MaxFileLinks: int32(codersdk.MaxChatFileIDs),
FileIds: []uuid.UUID{extra.ID},
})
require.NoError(t, err)
}
// Chat should now have exactly MaxChatFileIDs files.
chatResult, err = client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, codersdk.MaxChatFileIDs)
// Attempt to add one more file. The link should be rejected:
// no rows are inserted and the rejected count is 1.
overflow, err := store.InsertChatFile(chatdCtx, database.InsertChatFileParams{
OwnerID: firstUser.UserID,
OrganizationID: firstUser.OrganizationID,
Name: "overflow.md",
Mimetype: "text/markdown",
Data: []byte("too many"),
})
require.NoError(t, err)
rejected, err = store.LinkChatFiles(chatdCtx, database.LinkChatFilesParams{
ChatID: chat.ID,
MaxFileLinks: int32(codersdk.MaxChatFileIDs),
FileIds: []uuid.UUID{overflow.ID},
})
require.NoError(t, err)
require.Equal(t, int32(1), rejected, "cap should reject the 21st file")
// Re-appending an already-linked ID at cap should succeed
// (dedup means no array growth).
rejected, err = store.LinkChatFiles(chatdCtx, database.LinkChatFilesParams{
ChatID: chat.ID,
MaxFileLinks: int32(codersdk.MaxChatFileIDs),
FileIds: []uuid.UUID{fileRow.ID},
})
require.NoError(t, err)
// ON CONFLICT DO NOTHING returns 0 rows when the link
// already exists, which is fine — the file is still linked.
require.Equal(t, int32(0), rejected, "dedup of existing ID should be a no-op")
// Count should still be exactly MaxChatFileIDs.
chatResult, err = client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, codersdk.MaxChatFileIDs)
})
}
func TestArchiveChat(t *testing.T) {
@@ -4135,104 +4387,6 @@ func TestChatMessageWithFileReferences(t *testing.T) {
})
}
func TestChatMessageWithSkillParts(t *testing.T) {
t.Parallel()
createChatForTest := func(t *testing.T, client *codersdk.ExperimentalClient) codersdk.Chat {
t.Helper()
ctx := testutil.Context(t, testutil.WaitLong)
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{{
Type: codersdk.ChatInputPartTypeText,
Text: "initial message",
}},
})
require.NoError(t, err)
return chat
}
t.Run("SkillPartRoundTrip", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
_ = coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
chat := createChatForTest(t, client)
created, err := client.CreateChatMessage(ctx, chat.ID, codersdk.CreateChatMessageRequest{
Content: []codersdk.ChatInputPart{
{
Type: codersdk.ChatInputPartTypeText,
Text: "please run this skill",
},
{
Type: codersdk.ChatInputPartTypeSkill,
SkillName: "deep-review",
SkillDescription: "Multi-reviewer code review",
},
},
})
require.NoError(t, err)
checkSkill := func(part codersdk.ChatMessagePart) bool {
return part.Type == codersdk.ChatMessagePartTypeSkill &&
part.SkillName == "deep-review" &&
part.SkillDescription == "Multi-reviewer code review"
}
var found bool
require.Eventually(t, func() bool {
messagesResult, getErr := client.GetChatMessages(ctx, chat.ID, nil)
if getErr != nil {
return false
}
for _, message := range messagesResult.Messages {
if message.Role != codersdk.ChatMessageRoleUser {
continue
}
for _, part := range message.Content {
if checkSkill(part) {
found = true
return true
}
}
}
if created.Queued && created.QueuedMessage != nil {
for _, queued := range messagesResult.QueuedMessages {
for _, part := range queued.Content {
if checkSkill(part) {
found = true
return true
}
}
}
}
return false
}, testutil.WaitLong, testutil.IntervalFast)
require.True(t, found, "expected to find skill part in stored message")
})
t.Run("SkillPartEmptyNameRejected", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
_ = coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
chat := createChatForTest(t, client)
_, err := client.CreateChatMessage(ctx, chat.ID, codersdk.CreateChatMessageRequest{
Content: []codersdk.ChatInputPart{{
Type: codersdk.ChatInputPartTypeSkill,
SkillName: "",
}},
})
require.Error(t, err)
require.Contains(t, err.Error(), "skill_name cannot be empty")
})
}
func TestChatMessageWithFiles(t *testing.T) {
t.Parallel()
@@ -4366,6 +4520,14 @@ func TestChatMessageWithFiles(t *testing.T) {
// With no text, chatTitleFromMessage("") returns "New Chat".
require.Equal(t, "New Chat", chat.Title)
require.Len(t, chat.Files, 1)
f := chat.Files[0]
require.Equal(t, uploadResp.ID, f.ID)
require.Equal(t, firstUser.UserID, f.OwnerID)
require.NotEqual(t, uuid.Nil, f.OrganizationID)
require.Equal(t, "image/png", f.MimeType)
require.Equal(t, "test.png", f.Name)
require.NotZero(t, f.CreatedAt)
})
t.Run("InvalidFileID", func(t *testing.T) {
@@ -4400,6 +4562,189 @@ func TestChatMessageWithFiles(t *testing.T) {
require.Equal(t, "Invalid input part.", sdkErr.Message)
require.Contains(t, sdkErr.Detail, "does not exist")
})
t.Run("FilesLinkedOnSend", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Create a text-only chat (no files initially).
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "no files yet"},
},
})
require.NoError(t, err)
// Upload a file.
pngData := append([]byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A}, make([]byte, 64)...)
uploadResp, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", "linked.png", bytes.NewReader(pngData))
require.NoError(t, err)
// Send a message with the file.
_, err = client.CreateChatMessage(ctx, chat.ID, codersdk.CreateChatMessageRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "here is a file"},
{Type: codersdk.ChatInputPartTypeFile, FileID: uploadResp.ID},
},
})
require.NoError(t, err)
// GET the chat — file should be linked.
chatResult, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, 1)
require.Equal(t, uploadResp.ID, chatResult.Files[0].ID)
require.Equal(t, "linked.png", chatResult.Files[0].Name)
})
t.Run("DedupFileIDs", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Upload a file.
pngData := append([]byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A}, make([]byte, 64)...)
uploadResp, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", "dedup.png", bytes.NewReader(pngData))
require.NoError(t, err)
// Create a chat with a file.
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "first mention"},
{Type: codersdk.ChatInputPartTypeFile, FileID: uploadResp.ID},
},
})
require.NoError(t, err)
// Send another message with the SAME file.
msgResp, err := client.CreateChatMessage(ctx, chat.ID, codersdk.CreateChatMessageRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "same file again"},
{Type: codersdk.ChatInputPartTypeFile, FileID: uploadResp.ID},
},
})
require.NoError(t, err)
require.Empty(t, msgResp.Warnings, "dedup below cap should not produce warnings")
// GET — should have exactly 1 file (deduped by SQL DISTINCT).
chatResult, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, 1, "duplicate file IDs should be deduped")
require.Equal(t, uploadResp.ID, chatResult.Files[0].ID)
})
t.Run("FileCapExceeded", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
pngData := append([]byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A}, make([]byte, 64)...)
// Upload MaxChatFileIDs files.
fileIDs := make([]uuid.UUID, 0, codersdk.MaxChatFileIDs)
for i := range codersdk.MaxChatFileIDs {
resp, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", fmt.Sprintf("file%d.png", i), bytes.NewReader(pngData))
require.NoError(t, err)
fileIDs = append(fileIDs, resp.ID)
}
// Create a chat using all MaxChatFileIDs files.
parts := []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "max files"},
}
for _, fid := range fileIDs {
parts = append(parts, codersdk.ChatInputPart{Type: codersdk.ChatInputPartTypeFile, FileID: fid})
}
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{Content: parts})
require.NoError(t, err)
require.Empty(t, chat.Warnings, "creating a chat at exactly the cap should not warn")
require.Len(t, chat.Files, codersdk.MaxChatFileIDs, "all files should be linked on creation")
// Upload one more file.
extraResp, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", "one-too-many.png", bytes.NewReader(pngData))
require.NoError(t, err)
// Sending a message with the extra file should succeed
// (message goes through) but the file should NOT be linked
// (cap enforced in SQL). The response includes a warning.
msgResp, err := client.CreateChatMessage(ctx, chat.ID, codersdk.CreateChatMessageRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "one too many"},
{Type: codersdk.ChatInputPartTypeFile, FileID: extraResp.ID},
},
})
require.NoError(t, err)
require.NotEmpty(t, msgResp.Warnings, "response should warn about unlinked files")
require.Contains(t, msgResp.Warnings[0], "file linking skipped")
// The extra file should NOT appear in the chat's files.
chatResult, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, codersdk.MaxChatFileIDs,
"file count should not exceed the cap")
// Sending a message referencing an already-linked file
// should succeed with no warnings (dedup, no array growth).
msgResp2, err := client.CreateChatMessage(ctx, chat.ID, codersdk.CreateChatMessageRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "re-reference existing"},
{Type: codersdk.ChatInputPartTypeFile, FileID: fileIDs[0]},
},
})
require.NoError(t, err)
require.Empty(t, msgResp2.Warnings, "re-referencing an existing file should not warn")
})
t.Run("FileCapOnCreate", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
pngData := append([]byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A}, make([]byte, 64)...)
// Upload MaxChatFileIDs + 1 files.
fileIDs := make([]uuid.UUID, 0, codersdk.MaxChatFileIDs+1)
for i := range codersdk.MaxChatFileIDs + 1 {
resp, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", fmt.Sprintf("create%d.png", i), bytes.NewReader(pngData))
require.NoError(t, err)
fileIDs = append(fileIDs, resp.ID)
}
// Create a chat with all files (one over the cap).
parts := []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "over cap on create"},
}
for _, fid := range fileIDs {
parts = append(parts, codersdk.ChatInputPart{Type: codersdk.ChatInputPartTypeFile, FileID: fid})
}
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{Content: parts})
require.NoError(t, err, "chat creation should succeed even when cap is exceeded")
require.NotEmpty(t, chat.Warnings, "response should warn about unlinked files")
require.Contains(t, chat.Warnings[0], "file linking skipped")
// With SQL-level batch rejection, ALL files are rejected
// when the result would exceed the cap. Since we're
// sending MaxChatFileIDs+1 files, the deduped count
// exceeds the cap, so 0 rows are affected and no files
// are linked.
chatResult, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Empty(t, chatResult.Files, "no files should be linked when batch exceeds cap")
})
}
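The cap behavior these tests exercise (dedup of already-linked IDs plus all-or-nothing batch rejection) can be sketched in memory. This is a hypothetical helper, not the real SQL enforcement; `linkFiles`, `limit`, and the map-based link set are all illustrative names:

```go
package main

import "fmt"

// linkFiles is an in-memory sketch of the batch-rejection semantics
// (the real enforcement lives in SQL): IDs already linked are deduped
// for free, and if the deduped union would exceed the cap, the ENTIRE
// batch is rejected rather than partially applied.
func linkFiles(linked map[string]bool, incoming []string, limit int) (rejected int) {
	seen := make(map[string]bool)
	newIDs := make([]string, 0, len(incoming))
	for _, id := range incoming {
		if linked[id] || seen[id] {
			continue // dedup: re-referencing an existing link is a no-op
		}
		seen[id] = true
		newIDs = append(newIDs, id)
	}
	if len(linked)+len(newIDs) > limit {
		return len(newIDs) // all-or-nothing: reject the whole batch
	}
	for _, id := range newIDs {
		linked[id] = true
	}
	return 0
}

func main() {
	linked := map[string]bool{"a": true}
	fmt.Println(linkFiles(linked, []string{"a"}, 2))      // dedup, no growth
	fmt.Println(linkFiles(linked, []string{"b"}, 2))      // fits under the cap
	fmt.Println(linkFiles(linked, []string{"c", "d"}, 2)) // whole batch rejected
	fmt.Println(len(linked))
}
```

Note the third call rejects both `c` and `d` even though linking one of them alone would fit; that mirrors the "0 rows affected" behavior the FileCapOnCreate test asserts.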
func TestPatchChatMessage(t *testing.T) {
@@ -4446,11 +4791,11 @@ func TestPatchChatMessage(t *testing.T) {
require.NoError(t, err)
// The edited message is soft-deleted and a new one is inserted,
// so the returned ID will differ from the original.
require.NotEqual(t, userMessageID, edited.ID)
require.Equal(t, codersdk.ChatMessageRoleUser, edited.Role)
require.NotEqual(t, userMessageID, edited.Message.ID)
require.Equal(t, codersdk.ChatMessageRoleUser, edited.Message.Role)
foundEditedText := false
for _, part := range edited.Content {
for _, part := range edited.Message.Content {
if part.Type == codersdk.ChatMessagePartTypeText && part.Text == "hello after edit" {
foundEditedText = true
}
@@ -4538,11 +4883,11 @@ func TestPatchChatMessage(t *testing.T) {
require.NoError(t, err)
// The edited message is soft-deleted and a new one is inserted,
// so the returned ID will differ from the original.
require.NotEqual(t, userMessageID, edited.ID)
require.NotEqual(t, userMessageID, edited.Message.ID)
// Assert the edit response preserves the file_id.
var foundText, foundFile bool
for _, part := range edited.Content {
for _, part := range edited.Message.Content {
if part.Type == codersdk.ChatMessagePartTypeText && part.Text == "after edit with file" {
foundText = true
}
@@ -4685,6 +5030,112 @@ func TestPatchChatMessage(t *testing.T) {
sdkErr := requireSDKError(t, err, http.StatusBadRequest)
require.Equal(t, "Invalid chat message ID.", sdkErr.Message)
})
t.Run("FilesLinkedOnEdit", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Create a text-only chat.
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "before file edit"},
},
})
require.NoError(t, err)
// Upload a file.
pngData := append([]byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A}, make([]byte, 64)...)
uploadResp, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", "edit-linked.png", bytes.NewReader(pngData))
require.NoError(t, err)
// Find the user message ID.
messagesResult, err := client.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
var userMessageID int64
for _, msg := range messagesResult.Messages {
if msg.Role == codersdk.ChatMessageRoleUser {
userMessageID = msg.ID
break
}
}
require.NotZero(t, userMessageID)
// Edit the message to include the file.
_, err = client.EditChatMessage(ctx, chat.ID, userMessageID, codersdk.EditChatMessageRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "after file edit"},
{Type: codersdk.ChatInputPartTypeFile, FileID: uploadResp.ID},
},
})
require.NoError(t, err)
// GET the chat — file should be linked.
chatResult, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, 1)
f := chatResult.Files[0]
require.Equal(t, uploadResp.ID, f.ID)
require.Equal(t, "edit-linked.png", f.Name)
require.Equal(t, "image/png", f.MimeType)
})
t.Run("CapExceededOnEdit", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := newChatClient(t)
firstUser := coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Create a chat with MaxChatFileIDs files already linked.
parts := []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "fill to cap"},
}
pngData := append([]byte{0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A}, make([]byte, 64)...)
for i := range codersdk.MaxChatFileIDs {
up, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", fmt.Sprintf("cap-%d.png", i), bytes.NewReader(pngData))
require.NoError(t, err)
parts = append(parts, codersdk.ChatInputPart{Type: codersdk.ChatInputPartTypeFile, FileID: up.ID})
}
chat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{Content: parts})
require.NoError(t, err)
require.Empty(t, chat.Warnings, "all files should link on create")
// Find the user message.
messagesResult, err := client.GetChatMessages(ctx, chat.ID, nil)
require.NoError(t, err)
var userMessageID int64
for _, msg := range messagesResult.Messages {
if msg.Role == codersdk.ChatMessageRoleUser {
userMessageID = msg.ID
break
}
}
require.NotZero(t, userMessageID)
// Upload one more file and try to link via edit.
extra, err := client.UploadChatFile(ctx, firstUser.OrganizationID, "image/png", "one-too-many.png", bytes.NewReader(pngData))
require.NoError(t, err)
edited, err := client.EditChatMessage(ctx, chat.ID, userMessageID, codersdk.EditChatMessageRequest{
Content: []codersdk.ChatInputPart{
{Type: codersdk.ChatInputPartTypeText, Text: "edit with extra file"},
{Type: codersdk.ChatInputPartTypeFile, FileID: extra.ID},
},
})
require.NoError(t, err)
require.NotEmpty(t, edited.Warnings, "edit should surface cap warning")
require.Contains(t, edited.Warnings[0], "file linking skipped")
// Verify the cap is still enforced.
chatResult, err := client.GetChat(ctx, chat.ID)
require.NoError(t, err)
require.Len(t, chatResult.Files, codersdk.MaxChatFileIDs,
"file count should not exceed the cap")
})
}
func TestStreamChat(t *testing.T) {
@@ -6177,7 +6628,7 @@ func seedChatCostFixture(t *testing.T) chatCostTestFixture {
ContextLimit: []int64{0, 0},
Compressed: []bool{false, false},
TotalCostMicros: []int64{500, 500},
RuntimeMs: []int64{0, 0},
RuntimeMs: []int64{1500, 2500},
})
require.NoError(t, err)
require.Len(t, results, 2)
@@ -6210,16 +6661,19 @@ func assertChatCostSummary(t *testing.T, summary codersdk.ChatCostSummary, model
require.Equal(t, int64(0), summary.UnpricedMessageCount)
require.Equal(t, int64(200), summary.TotalInputTokens)
require.Equal(t, int64(100), summary.TotalOutputTokens)
require.Equal(t, int64(4000), summary.TotalRuntimeMs)
require.Len(t, summary.ByModel, 1)
require.Equal(t, modelConfigID, summary.ByModel[0].ModelConfigID)
require.Equal(t, int64(1000), summary.ByModel[0].TotalCostMicros)
require.Equal(t, int64(2), summary.ByModel[0].MessageCount)
require.Equal(t, int64(4000), summary.ByModel[0].TotalRuntimeMs)
require.Len(t, summary.ByChat, 1)
require.Equal(t, chatID, summary.ByChat[0].RootChatID)
require.Equal(t, int64(1000), summary.ByChat[0].TotalCostMicros)
require.Equal(t, int64(2), summary.ByChat[0].MessageCount)
require.Equal(t, int64(4000), summary.ByChat[0].TotalRuntimeMs)
}
func TestChatCostSummary(t *testing.T) {
@@ -298,6 +298,40 @@ neq(input.object.owner, "");
ExpectedSQL: p("'' = 'org-id'"),
VariableConverter: regosql.ChatConverter(),
},
{
Name: "AuditLogUUID",
Queries: []string{
`"8c0b9bdc-a013-4b14-a49b-5747bc335708" = input.object.org_owner`,
`input.object.org_owner != ""`,
`neq(input.object.org_owner, "8c0b9bdc-a013-4b14-a49b-5747bc335708")`,
`input.object.org_owner in {"8c0b9bdc-a013-4b14-a49b-5747bc335708", "05f58202-4bfc-43ce-9ba4-5ff6e0174a71"}`,
`"read" in input.object.acl_group_list[input.object.org_owner]`,
},
ExpectedSQL: p(
p("audit_logs.organization_id = '8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid") + " OR " +
p("audit_logs.organization_id IS NOT NULL") + " OR " +
p("audit_logs.organization_id != '8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid") + " OR " +
p("audit_logs.organization_id = ANY(ARRAY ['05f58202-4bfc-43ce-9ba4-5ff6e0174a71'::uuid,'8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid])") + " OR " +
"(false)"),
VariableConverter: regosql.AuditLogConverter(),
},
{
Name: "ConnectionLogUUID",
Queries: []string{
`"8c0b9bdc-a013-4b14-a49b-5747bc335708" = input.object.org_owner`,
`input.object.org_owner != ""`,
`neq(input.object.org_owner, "8c0b9bdc-a013-4b14-a49b-5747bc335708")`,
`input.object.org_owner in {"8c0b9bdc-a013-4b14-a49b-5747bc335708"}`,
`"read" in input.object.acl_group_list[input.object.org_owner]`,
},
ExpectedSQL: p(
p("connection_logs.organization_id = '8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid") + " OR " +
p("connection_logs.organization_id IS NOT NULL") + " OR " +
p("connection_logs.organization_id != '8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid") + " OR " +
p("connection_logs.organization_id = ANY(ARRAY ['8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid])") + " OR " +
"(false)"),
VariableConverter: regosql.ConnectionLogConverter(),
},
}
for _, tc := range testCases {
@@ -53,7 +53,7 @@ func WorkspaceConverter() *sqltypes.VariableConverter {
func AuditLogConverter() *sqltypes.VariableConverter {
matcher := sqltypes.NewVariableConverter().RegisterMatcher(
resourceIDMatcher(),
sqltypes.StringVarMatcher("COALESCE(audit_logs.organization_id :: text, '')", []string{"input", "object", "org_owner"}),
sqltypes.UUIDVarMatcher("audit_logs.organization_id", []string{"input", "object", "org_owner"}),
// Audit logs have no user owner; they are owned only by an organization.
sqltypes.AlwaysFalse(userOwnerMatcher()),
)
@@ -67,7 +67,7 @@ func AuditLogConverter() *sqltypes.VariableConverter {
func ConnectionLogConverter() *sqltypes.VariableConverter {
matcher := sqltypes.NewVariableConverter().RegisterMatcher(
resourceIDMatcher(),
sqltypes.StringVarMatcher("COALESCE(connection_logs.organization_id :: text, '')", []string{"input", "object", "org_owner"}),
sqltypes.UUIDVarMatcher("connection_logs.organization_id", []string{"input", "object", "org_owner"}),
// Connection logs have no user owner; they are owned only by an organization.
sqltypes.AlwaysFalse(userOwnerMatcher()),
)
@@ -0,0 +1,114 @@
package sqltypes
import (
"fmt"
"strings"
"github.com/open-policy-agent/opa/ast"
"golang.org/x/xerrors"
)
var (
_ VariableMatcher = astUUIDVar{}
_ Node = astUUIDVar{}
_ SupportsEquality = astUUIDVar{}
)
// astUUIDVar is a variable that represents a UUID column. Unlike
// astStringVar it emits native UUID comparisons (column = 'val'::uuid)
// instead of text-based ones (COALESCE(column::text, '') = 'val').
// This allows PostgreSQL to use indexes on UUID columns.
type astUUIDVar struct {
Source RegoSource
FieldPath []string
ColumnString string
}
func UUIDVarMatcher(sqlColumn string, regoPath []string) VariableMatcher {
return astUUIDVar{FieldPath: regoPath, ColumnString: sqlColumn}
}
func (astUUIDVar) UseAs() Node { return astUUIDVar{} }
func (u astUUIDVar) ConvertVariable(rego ast.Ref) (Node, bool) {
left, err := RegoVarPath(u.FieldPath, rego)
if err == nil && len(left) == 0 {
return astUUIDVar{
Source: RegoSource(rego.String()),
FieldPath: u.FieldPath,
ColumnString: u.ColumnString,
}, true
}
return nil, false
}
func (u astUUIDVar) SQLString(_ *SQLGenerator) string {
return u.ColumnString
}
// EqualsSQLString handles equality comparisons for UUID columns.
// Rego always produces string literals, so we accept AstString and
// cast the literal to ::uuid in the output SQL. This lets PG use
// native UUID indexes instead of falling back to text comparisons.
// nolint:revive
func (u astUUIDVar) EqualsSQLString(cfg *SQLGenerator, not bool, other Node) (string, error) {
switch other.UseAs().(type) {
case AstString:
// The other side is a rego string literal like
// "8c0b9bdc-a013-4b14-a49b-5747bc335708". Emit a comparison
// that casts the literal to uuid so PG can use indexes:
// column = 'val'::uuid
// instead of the text-based:
// 'val' = COALESCE(column::text, '')
s, ok := other.(AstString)
if !ok {
return "", xerrors.Errorf("expected AstString, got %T", other)
}
if s.Value == "" {
// Empty string in rego means "no value". Compare the
// column against NULL since UUID columns represent
// absent values as NULL, not empty strings.
op := "IS NULL"
if not {
op = "IS NOT NULL"
}
return fmt.Sprintf("%s %s", u.ColumnString, op), nil
}
return fmt.Sprintf("%s %s '%s'::uuid",
u.ColumnString, equalsOp(not), s.Value), nil
case astUUIDVar:
return basicSQLEquality(cfg, not, u, other), nil
default:
return "", xerrors.Errorf("unsupported equality: %T %s %T",
u, equalsOp(not), other)
}
}
// ContainedInSQL implements SupportsContainedIn so that a UUID column
// can appear in membership checks like `col = ANY(ARRAY[...])`. The
// array elements are rego strings, so we cast each to ::uuid.
func (u astUUIDVar) ContainedInSQL(_ *SQLGenerator, haystack Node) (string, error) {
arr, ok := haystack.(ASTArray)
if !ok {
return "", xerrors.Errorf("unsupported containedIn: %T in %T", u, haystack)
}
if len(arr.Value) == 0 {
return "false", nil
}
// Build ARRAY['uuid1'::uuid, 'uuid2'::uuid, ...]
values := make([]string, 0, len(arr.Value))
for _, v := range arr.Value {
s, ok := v.(AstString)
if !ok {
return "", xerrors.Errorf("expected AstString array element, got %T", v)
}
values = append(values, fmt.Sprintf("'%s'::uuid", s.Value))
}
return fmt.Sprintf("%s = ANY(ARRAY [%s])",
u.ColumnString,
strings.Join(values, ",")), nil
}
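The membership clause `ContainedInSQL` emits can be reproduced with a small standalone function. This is a sketch mirroring the string-building shown above (the `uuidAnyClause` name is illustrative), useful for checking the output against the expected SQL in the converter tests:

```go
package main

import (
	"fmt"
	"strings"
)

// uuidAnyClause builds the membership SQL emitted by ContainedInSQL:
// each literal is cast to ::uuid so PostgreSQL can use a native UUID
// index instead of falling back to a text comparison.
func uuidAnyClause(column string, ids []string) string {
	if len(ids) == 0 {
		return "false"
	}
	values := make([]string, 0, len(ids))
	for _, id := range ids {
		values = append(values, fmt.Sprintf("'%s'::uuid", id))
	}
	return fmt.Sprintf("%s = ANY(ARRAY [%s])", column, strings.Join(values, ","))
}

func main() {
	fmt.Println(uuidAnyClause("audit_logs.organization_id", []string{
		"8c0b9bdc-a013-4b14-a49b-5747bc335708",
	}))
	// Prints the same clause the AuditLogUUID test case expects.
}
```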
@@ -66,7 +66,7 @@ func AuditLogs(ctx context.Context, db database.Store, query string) (database.G
}
// Prepare the count filter, which uses the same parameters as the GetAuditLogsOffsetParams.
// nolint:exhaustruct // UserID is not obtained from the query parameters.
// nolint:exhaustruct // UserID and CountCap are not obtained from the query parameters.
countFilter := database.CountAuditLogsParams{
RequestID: filter.RequestID,
ResourceID: filter.ResourceID,
@@ -123,6 +123,7 @@ func ConnectionLogs(ctx context.Context, db database.Store, query string, apiKey
}
// This MUST be kept in sync with the above
// nolint:exhaustruct // CountCap is not obtained from the query parameters.
countFilter := database.CountConnectionLogsParams{
OrganizationID: filter.OrganizationID,
WorkspaceOwner: filter.WorkspaceOwner,
@@ -19,6 +19,7 @@ import (
"github.com/google/uuid"
"github.com/prometheus/client_golang/prometheus"
"go.opentelemetry.io/otel/trace"
"golang.org/x/sync/singleflight"
"golang.org/x/xerrors"
"tailscale.com/derp"
"tailscale.com/tailcfg"
@@ -389,6 +390,7 @@ type MultiAgentController struct {
// connections to the destination
tickets map[uuid.UUID]map[uuid.UUID]struct{}
coordination *tailnet.BasicCoordination
sendGroup singleflight.Group
cancel context.CancelFunc
expireOldAgentsDone chan struct{}
@@ -418,28 +420,44 @@ func (m *MultiAgentController) New(client tailnet.CoordinatorClient) tailnet.Clo
func (m *MultiAgentController) ensureAgent(agentID uuid.UUID) error {
m.mu.Lock()
defer m.mu.Unlock()
_, ok := m.connectionTimes[agentID]
// If we don't have the agent, subscribe.
if !ok {
m.logger.Debug(context.Background(),
"subscribing to agent", slog.F("agent_id", agentID))
if m.coordination != nil {
err := m.coordination.Client.Send(&proto.CoordinateRequest{
AddTunnel: &proto.CoordinateRequest_Tunnel{Id: agentID[:]},
})
if err != nil {
err = xerrors.Errorf("subscribe agent: %w", err)
m.coordination.SendErr(err)
_ = m.coordination.Client.Close()
m.coordination = nil
return err
}
}
m.tickets[agentID] = map[uuid.UUID]struct{}{}
if ok {
m.connectionTimes[agentID] = time.Now()
m.mu.Unlock()
return nil
}
m.mu.Unlock()
m.logger.Debug(context.Background(),
"subscribing to agent", slog.F("agent_id", agentID))
_, err, _ := m.sendGroup.Do(agentID.String(), func() (interface{}, error) {
m.mu.Lock()
coord := m.coordination
m.mu.Unlock()
if coord == nil {
return nil, xerrors.New("no active coordination")
}
err := coord.Client.Send(&proto.CoordinateRequest{
AddTunnel: &proto.CoordinateRequest_Tunnel{Id: agentID[:]},
})
if err != nil {
return nil, err
}
m.mu.Lock()
m.tickets[agentID] = map[uuid.UUID]struct{}{}
m.mu.Unlock()
return nil, nil
})
if err != nil {
m.logger.Error(context.Background(), "ensureAgent send failed",
slog.F("agent_id", agentID), slog.Error(err))
return xerrors.Errorf("send AddTunnel: %w", err)
}
m.mu.Lock()
m.connectionTimes[agentID] = time.Now()
m.mu.Unlock()
return nil
}
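The `sendGroup.Do` call above relies on `golang.org/x/sync/singleflight` to ensure that while one `AddTunnel` send is in flight for an agent, concurrent callers wait for its result instead of issuing duplicate sends. A stdlib-only sketch of that coalescing behavior (the `group` and `call` types here are simplified stand-ins, not the real library):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// group coalesces concurrent calls that share a key: the first caller
// runs fn, later callers with the same key wait for its result.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
}

type call struct {
	done chan struct{}
	err  error
}

func (g *group) Do(key string, fn func() error) error {
	g.mu.Lock()
	if g.calls == nil {
		g.calls = make(map[string]*call)
	}
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		<-c.done // coalesce: wait for the in-flight call's result
		return c.err
	}
	c := &call{done: make(chan struct{})}
	g.calls[key] = c
	g.mu.Unlock()

	c.err = fn()

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	close(c.done)
	return c.err
}

func main() {
	var g group
	sends := 0
	release := make(chan struct{})
	started := make(chan struct{})
	results := make(chan error, 2)
	go func() {
		results <- g.Do("agent-1", func() error {
			sends++
			close(started) // first send is now in flight
			<-release
			return nil
		})
	}()
	<-started
	go func() {
		// Arrives while the first send is in flight, so it waits
		// on the same call instead of sending again.
		results <- g.Do("agent-1", func() error { sends++; return nil })
	}()
	time.Sleep(100 * time.Millisecond) // let the second caller register
	close(release)
	fmt.Println(<-results, <-results, "sends:", sends)
}
```

The same property is what keeps a ping storm from turning into a send storm: N concurrent `ensureAgent` calls for one agent collapse into a single coordinator send.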
@@ -730,7 +730,10 @@ func (s *Server) workspaceAgentPTY(rw http.ResponseWriter, r *http.Request) {
if !ok {
return
}
log := s.Logger.With(slog.F("agent_id", appToken.AgentID))
log := s.Logger.With(
slog.F("agent_id", appToken.AgentID),
slog.F("workspace_id", appToken.WorkspaceID),
)
log.Debug(ctx, "resolved PTY request")
values := r.URL.Query()
@@ -765,19 +768,21 @@ func (s *Server) workspaceAgentPTY(rw http.ResponseWriter, r *http.Request) {
})
return
}
go httpapi.HeartbeatClose(ctx, s.Logger, cancel, conn)
ctx, wsNetConn := WebsocketNetConn(ctx, conn, websocket.MessageBinary)
defer wsNetConn.Close() // Also closes conn.
go httpapi.HeartbeatClose(ctx, log, cancel, conn)
dialStart := time.Now()
agentConn, release, err := s.AgentProvider.AgentConn(ctx, appToken.AgentID)
if err != nil {
log.Debug(ctx, "dial workspace agent", slog.Error(err))
log.Debug(ctx, "dial workspace agent", slog.Error(err), slog.F("elapsed_ms", time.Since(dialStart).Milliseconds()))
_ = conn.Close(websocket.StatusInternalError, httpapi.WebsocketCloseSprintf("dial workspace agent: %s", err))
return
}
defer release()
log.Debug(ctx, "dialed workspace agent")
log.Debug(ctx, "dialed workspace agent", slog.F("elapsed_ms", time.Since(dialStart).Milliseconds()))
// #nosec G115 - Safe conversion for terminal height/width which are expected to be within uint16 range (0-65535)
ptNetConn, err := agentConn.ReconnectingPTY(ctx, reconnect, uint16(height), uint16(width), r.URL.Query().Get("command"), func(arp *workspacesdk.AgentReconnectingPTYInit) {
arp.Container = container
@@ -785,12 +790,12 @@ func (s *Server) workspaceAgentPTY(rw http.ResponseWriter, r *http.Request) {
arp.BackendType = backendType
})
if err != nil {
log.Debug(ctx, "dial reconnecting pty server in workspace agent", slog.Error(err))
log.Debug(ctx, "dial reconnecting pty server in workspace agent", slog.Error(err), slog.F("elapsed_ms", time.Since(dialStart).Milliseconds()))
_ = conn.Close(websocket.StatusInternalError, httpapi.WebsocketCloseSprintf("dial: %s", err))
return
}
defer ptNetConn.Close()
log.Debug(ctx, "obtained PTY")
log.Debug(ctx, "obtained PTY", slog.F("elapsed_ms", time.Since(dialStart).Milliseconds()))
report := newStatsReportFromSignedToken(*appToken)
s.collectStats(report)
@@ -800,7 +805,7 @@ func (s *Server) workspaceAgentPTY(rw http.ResponseWriter, r *http.Request) {
}()
agentssh.Bicopy(ctx, wsNetConn, ptNetConn)
log.Debug(ctx, "pty Bicopy finished")
log.Debug(ctx, "pty Bicopy finished", slog.F("elapsed_ms", time.Since(dialStart).Milliseconds()))
}
func (s *Server) collectStats(stats StatsReport) {
@@ -7,6 +7,7 @@ import (
"encoding/json"
"errors"
"fmt"
"maps"
"net/http"
"slices"
"strconv"
@@ -90,6 +91,12 @@ const (
// goroutines and lifecycle management.
streamDropWarnInterval = 10 * time.Second
// bufferRetainGracePeriod is how long the message_part
// buffer is kept after processing completes. This gives
// cross-replica relay subscribers time to connect and
// snapshot the buffer before it is garbage-collected.
bufferRetainGracePeriod = 5 * time.Second
// DefaultMaxChatsPerAcquire is the maximum number of chats to
// acquire in a single processOnce call. Batching avoids
// waiting a full polling interval between acquisitions
@@ -145,6 +152,12 @@ type Server struct {
inFlightChatStaleAfter time.Duration
chatHeartbeatInterval time.Duration
// heartbeatMu guards heartbeatRegistry.
heartbeatMu sync.Mutex
// heartbeatRegistry maps chat IDs to their cancel functions
// and workspace state for the centralized heartbeat loop.
heartbeatRegistry map[uuid.UUID]*heartbeatEntry
// wakeCh is signaled by SendMessage, EditMessage, CreateChat,
// and PromoteQueued so the run loop calls processOnce
// immediately instead of waiting for the next ticker.
@@ -691,6 +704,24 @@ type chatStreamState struct {
bufferLastWarnAt time.Time
subscriberDropCount int64
subscriberLastWarnAt time.Time
// bufferRetainedAt records when processing completed and
// the buffer was retained for late-connecting relay
// subscribers. Zero while buffering is active. When
// non-zero, cleanupStreamIfIdle skips GC until the grace
// period expires so cross-replica relays can still
// snapshot the buffer.
bufferRetainedAt time.Time
}
// heartbeatEntry tracks a single chat's cancel function and workspace
// state for the centralized heartbeat loop. Instead of spawning a
// per-chat goroutine, processChat registers an entry here and the
// single heartbeatLoop goroutine handles all chats.
type heartbeatEntry struct {
cancelWithCause context.CancelCauseFunc
chatID uuid.UUID
workspaceID uuid.NullUUID
logger slog.Logger
}
// resetDropCounters zeroes the rate-limiting state for both buffer
@@ -873,7 +904,8 @@ func (p *Server) CreateChat(ctx context.Context, opts CreateOptions) (database.C
return xerrors.Errorf("insert chat: %w", err)
}
systemPrompt := strings.TrimSpace(opts.SystemPrompt)
deploymentPrompt := p.resolveDeploymentSystemPrompt(ctx)
userPrompt := SanitizePromptText(opts.SystemPrompt)
var workspaceAwareness string
if opts.WorkspaceID.Valid {
workspaceAwareness = "This chat is attached to a workspace. You can use workspace tools like execute, read_file, write_file, etc."
@@ -895,16 +927,32 @@ func (p *Server) CreateChat(ctx context.Context, opts CreateOptions) (database.C
ChatID: insertedChat.ID,
}
if systemPrompt != "" {
systemContent, err := chatprompt.MarshalParts([]codersdk.ChatMessagePart{
codersdk.ChatMessageText(systemPrompt),
if deploymentPrompt != "" {
deploymentContent, err := chatprompt.MarshalParts([]codersdk.ChatMessagePart{
codersdk.ChatMessageText(deploymentPrompt),
})
if err != nil {
return xerrors.Errorf("marshal system prompt: %w", err)
return xerrors.Errorf("marshal deployment system prompt: %w", err)
}
appendChatMessage(&msgParams, newChatMessage(
database.ChatMessageRoleSystem,
systemContent,
deploymentContent,
database.ChatMessageVisibilityModel,
opts.ModelConfigID,
chatprompt.CurrentContentVersion,
))
}
if userPrompt != "" {
userPromptContent, err := chatprompt.MarshalParts([]codersdk.ChatMessagePart{
codersdk.ChatMessageText(userPrompt),
})
if err != nil {
return xerrors.Errorf("marshal user system prompt: %w", err)
}
appendChatMessage(&msgParams, newChatMessage(
database.ChatMessageRoleSystem,
userPromptContent,
database.ChatMessageVisibilityModel,
opts.ModelConfigID,
chatprompt.CurrentContentVersion,
@@ -2390,8 +2438,8 @@ func New(cfg Config) *Server {
clock: clk,
recordingSem: make(chan struct{}, maxConcurrentRecordingUploads),
wakeCh: make(chan struct{}, 1),
heartbeatRegistry: make(map[uuid.UUID]*heartbeatEntry),
}
//nolint:gocritic // The chat processor uses a scoped chatd context.
ctx = dbauthz.AsChatd(ctx)
@@ -2431,6 +2479,9 @@ func (p *Server) start(ctx context.Context) {
// to handle chats orphaned by crashed or redeployed workers.
p.recoverStaleChats(ctx)
// Single heartbeat loop for all chats on this replica.
go p.heartbeatLoop(ctx)
acquireTicker := p.clock.NewTicker(
p.pendingChatAcquireInterval,
"chatd",
@@ -2681,11 +2732,113 @@ func (p *Server) getOrCreateStreamState(chatID uuid.UUID) *chatStreamState {
// cleanupStreamIfIdle removes the chat entry from the sync.Map
// when there are no subscribers and the stream is not buffering.
// When bufferRetainedAt is set, cleanup is deferred until the
// grace period expires so cross-replica relay subscribers can
// still snapshot the buffer.
// The caller must hold state.mu.
func (p *Server) cleanupStreamIfIdle(chatID uuid.UUID, state *chatStreamState) {
if !state.buffering && len(state.subscribers) == 0 {
p.chatStreams.Delete(chatID)
p.workspaceMCPToolsCache.Delete(chatID)
if state.buffering || len(state.subscribers) > 0 {
return
}
// Keep stream state alive during the grace period so
// late-connecting relay subscribers can snapshot the
// buffer after the worker finishes processing.
if !state.bufferRetainedAt.IsZero() &&
p.clock.Now().Before(state.bufferRetainedAt.Add(bufferRetainGracePeriod)) {
return
}
p.chatStreams.Delete(chatID)
p.workspaceMCPToolsCache.Delete(chatID)
}
// registerHeartbeat enrolls a chat in the centralized batch
// heartbeat loop. Must be called after chatCtx is created.
func (p *Server) registerHeartbeat(entry *heartbeatEntry) {
p.heartbeatMu.Lock()
defer p.heartbeatMu.Unlock()
if _, exists := p.heartbeatRegistry[entry.chatID]; exists {
p.logger.Warn(context.Background(),
"duplicate heartbeat registration, skipping",
slog.F("chat_id", entry.chatID))
return
}
p.heartbeatRegistry[entry.chatID] = entry
}
// unregisterHeartbeat removes a chat from the centralized
// heartbeat loop when chat processing finishes.
func (p *Server) unregisterHeartbeat(chatID uuid.UUID) {
p.heartbeatMu.Lock()
defer p.heartbeatMu.Unlock()
delete(p.heartbeatRegistry, chatID)
}
// heartbeatLoop runs in a single goroutine, issuing one batch
// heartbeat query per interval for all registered chats.
func (p *Server) heartbeatLoop(ctx context.Context) {
ticker := p.clock.NewTicker(p.chatHeartbeatInterval, "chatd", "batch-heartbeat")
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
p.heartbeatTick(ctx)
}
}
}
// heartbeatTick issues a single batch UPDATE for all running chats
// owned by this worker. Chats missing from the result set are
// interrupted (stolen by another replica or already completed).
func (p *Server) heartbeatTick(ctx context.Context) {
// Snapshot the registry under the lock.
p.heartbeatMu.Lock()
snapshot := maps.Clone(p.heartbeatRegistry)
p.heartbeatMu.Unlock()
if len(snapshot) == 0 {
return
}
// Collect the IDs we believe we own.
ids := slices.Collect(maps.Keys(snapshot))
//nolint:gocritic // AsChatd provides narrowly-scoped daemon
// access for batch-updating heartbeats.
chatdCtx := dbauthz.AsChatd(ctx)
updatedIDs, err := p.db.UpdateChatHeartbeats(chatdCtx, database.UpdateChatHeartbeatsParams{
IDs: ids,
WorkerID: p.workerID,
Now: p.clock.Now(),
})
if err != nil {
p.logger.Error(ctx, "batch heartbeat failed", slog.Error(err))
return
}
// Build a set of IDs that were successfully updated.
updated := make(map[uuid.UUID]struct{}, len(updatedIDs))
for _, id := range updatedIDs {
updated[id] = struct{}{}
}
// Interrupt registered chats that were not in the result
// (stolen by another replica or already completed).
for id, entry := range snapshot {
if _, ok := updated[id]; !ok {
entry.logger.Warn(ctx, "chat not in batch heartbeat result, interrupting")
entry.cancelWithCause(chatloop.ErrInterrupted)
continue
}
// Bump workspace usage for surviving chats.
newWsID := p.trackWorkspaceUsage(ctx, entry.chatID, entry.workspaceID, entry.logger)
// Update workspace ID in the registry for next tick.
p.heartbeatMu.Lock()
if current, exists := p.heartbeatRegistry[id]; exists {
current.workspaceID = newWsID
}
p.heartbeatMu.Unlock()
}
}
@@ -3163,7 +3316,11 @@ func (p *Server) publishChatPubsubEvent(chat database.Chat, kind coderdpubsub.Ch
if p.pubsub == nil {
return
}
sdkChat := db2sdk.Chat(chat, nil) // we have diffStatus already converted
// diffStatus is applied below. File metadata is intentionally
// omitted from pubsub events to avoid an extra DB query per
// publish. Clients must merge pubsub updates, not replace
// cached file metadata.
sdkChat := db2sdk.Chat(chat, nil, nil)
if diffStatus != nil {
sdkChat.DiffStatus = diffStatus
}
@@ -3530,33 +3687,17 @@ func (p *Server) processChat(ctx context.Context, chat database.Chat) {
}
}()
// Periodically update the heartbeat so other replicas know this
// worker is still alive. The goroutine stops when chatCtx is
// canceled (either by completion or interruption).
go func() {
ticker := p.clock.NewTicker(p.chatHeartbeatInterval, "chatd", "heartbeat")
defer ticker.Stop()
for {
select {
case <-chatCtx.Done():
return
case <-ticker.C:
rows, err := p.db.UpdateChatHeartbeat(chatCtx, database.UpdateChatHeartbeatParams{
ID: chat.ID,
WorkerID: p.workerID,
})
if err != nil {
logger.Warn(chatCtx, "failed to update chat heartbeat", slog.Error(err))
continue
}
if rows == 0 {
cancel(chatloop.ErrInterrupted)
return
}
chat.WorkspaceID = p.trackWorkspaceUsage(chatCtx, chat.ID, chat.WorkspaceID, logger)
}
}
}()
// Register with the centralized heartbeat loop instead of
// running a per-chat goroutine. The loop issues a single batch
// UPDATE for all chats on this worker and detects stolen chats
// via set-difference.
p.registerHeartbeat(&heartbeatEntry{
cancelWithCause: cancel,
chatID: chat.ID,
workspaceID: chat.WorkspaceID,
logger: logger,
})
defer p.unregisterHeartbeat(chat.ID)
// Start buffering stream events BEFORE publishing the running
// status. This closes a race where a subscriber sees
@@ -3567,15 +3708,20 @@ func (p *Server) processChat(ctx context.Context, chat database.Chat) {
streamState := p.getOrCreateStreamState(chat.ID)
streamState.mu.Lock()
streamState.buffer = nil
streamState.bufferRetainedAt = time.Time{}
streamState.resetDropCounters()
streamState.buffering = true
streamState.mu.Unlock()
defer func() {
streamState.mu.Lock()
streamState.buffer = nil
streamState.resetDropCounters()
streamState.buffering = false
p.cleanupStreamIfIdle(chat.ID, streamState)
// Retain the buffer for a grace period so
// cross-replica relay subscribers can still snapshot
// it after processing completes. The buffer is
// cleared when the next processChat starts or when
// cleanupStreamIfIdle runs after the grace period.
streamState.bufferRetainedAt = p.clock.Now()
streamState.mu.Unlock()
}()
@@ -3905,14 +4051,6 @@ func (p *Server) runChat(
)
}()
prompt, err := chatprompt.ConvertMessagesWithFiles(ctx, messages, p.chatFileResolver(), logger)
if err != nil {
return result, xerrors.Errorf("build chat prompt: %w", err)
}
if chat.ParentChatID.Valid {
prompt = chatprompt.InsertSystem(prompt, defaultSubagentInstruction)
}
// Detect computer-use subagent via the mode column.
isComputerUse := chat.Mode.Valid && chat.Mode.ChatMode == database.ChatModeComputerUse
@@ -3969,7 +4107,20 @@ func (p *Server) runChat(
needsInstructionPersist = true
}
}
// Convert messages to prompt format in parallel with g2 work.
// ConvertMessagesWithFiles only reads `messages` (available
// after g.Wait()) and resolves file references via the DB.
// No g2 task reads or writes `prompt`, so this is safe.
var prompt []fantasy.Message
var g2 errgroup.Group
g2.Go(func() error {
var err error
prompt, err = chatprompt.ConvertMessagesWithFiles(ctx, messages, p.chatFileResolver(), logger)
if err != nil {
return xerrors.Errorf("build chat prompt: %w", err)
}
return nil
})
if needsInstructionPersist {
g2.Go(func() error {
var persistErr error
@@ -4083,8 +4234,12 @@ func (p *Server) runChat(
return nil
})
}
// All g2 goroutines return nil; error is discarded.
_ = g2.Wait()
if err := g2.Wait(); err != nil {
return result, err
}
if chat.ParentChatID.Valid {
prompt = chatprompt.InsertSystem(prompt, defaultSubagentInstruction)
}
if mcpCleanup != nil {
defer mcpCleanup()
}
@@ -4302,17 +4457,12 @@ func (p *Server) runChat(
p.publishMessage(chat.ID, msg)
}
// Clear the stream buffer now that the step is
// persisted. Late-joining subscribers will load
// these messages from the database instead.
if val, ok := p.chatStreams.Load(chat.ID); ok {
if ss, ok := val.(*chatStreamState); ok {
ss.mu.Lock()
ss.buffer = nil
ss.resetDropCounters()
ss.mu.Unlock()
}
}
// Do NOT clear the stream buffer here. Cross-replica
// relay subscribers may still need to snapshot buffered
// message_parts after processing completes. The buffer
// is bounded by maxStreamBufferSize and is cleared when
// the next processChat starts or when the stream state
// is garbage-collected after the retention grace period.
return nil
}
@@ -4475,6 +4625,27 @@ func (p *Server) runChat(
return uuid.Nil, xerrors.Errorf("insert chat file: %w", err)
}
// Cap enforcement and dedup are handled atomically
// in SQL. rejected > 0 = cap exceeded.
rejected, err := p.db.LinkChatFiles(ctx, database.LinkChatFilesParams{
ChatID: chatSnapshot.ID,
MaxFileLinks: int32(codersdk.MaxChatFileIDs),
FileIds: []uuid.UUID{row.ID},
})
switch {
case err != nil:
p.logger.Error(ctx, "failed to link file to chat",
slog.F("chat_id", chatSnapshot.ID),
slog.F("file_id", row.ID),
slog.Error(err),
)
case rejected > 0:
p.logger.Warn(ctx, "file cap reached, file not linked to chat",
slog.F("chat_id", chatSnapshot.ID),
slog.F("file_id", row.ID),
slog.F("max_file_links", codersdk.MaxChatFileIDs),
)
}
return row.ID, nil
},
}))
@@ -4547,7 +4718,7 @@ func (p *Server) runChat(
prompt = filterPromptForChainMode(prompt, chainInfo.trailingUserCount)
}
err = chatloop.Run(ctx, chatloop.RunOptions{
err := chatloop.Run(ctx, chatloop.RunOptions{
Model: model,
Messages: prompt,
Tools: tools, MaxSteps: maxChatSteps,
@@ -5196,6 +5367,37 @@ func (p *Server) resolveUserCompactionThreshold(ctx context.Context, userID uuid
return int32(val), true
}
// resolveDeploymentSystemPrompt builds the deployment-level system
// prompt from the built-in default and the admin-configured custom
// prompt stored in site_configs.
func (p *Server) resolveDeploymentSystemPrompt(ctx context.Context) string {
config, err := p.db.GetChatSystemPromptConfig(ctx)
if err != nil {
// Fail open: use the built-in default so chats always have
// some system guidance.
p.logger.Error(ctx, "failed to fetch chat system prompt configuration, using default", slog.Error(err))
return DefaultSystemPrompt
}
sanitizedCustom := SanitizePromptText(config.ChatSystemPrompt)
if sanitizedCustom == "" && strings.TrimSpace(config.ChatSystemPrompt) != "" {
p.logger.Warn(ctx, "custom system prompt became empty after sanitization, omitting custom portion")
}
var parts []string
if config.IncludeDefaultSystemPrompt {
parts = append(parts, DefaultSystemPrompt)
}
if sanitizedCustom != "" {
parts = append(parts, sanitizedCustom)
}
result := strings.Join(parts, "\n\n")
if result == "" {
p.logger.Warn(ctx, "resolved system prompt is empty, no system prompt will be injected into chats")
}
return result
}
// resolveUserPrompt fetches the user's custom chat prompt from the
// database and wraps it in <user-instructions> tags. Returns empty
// string if no prompt is set.
+129

@@ -21,6 +21,7 @@ import (
dbpubsub "github.com/coder/coder/v2/coderd/database/pubsub"
coderdpubsub "github.com/coder/coder/v2/coderd/pubsub"
"github.com/coder/coder/v2/coderd/x/chatd/chaterror"
"github.com/coder/coder/v2/coderd/x/chatd/chatloop"
"github.com/coder/coder/v2/coderd/x/chatd/chatprovider"
"github.com/coder/coder/v2/coderd/x/chatd/chattest"
"github.com/coder/coder/v2/coderd/x/chatd/chattool"
@@ -2071,6 +2072,7 @@ func TestProcessChat_IgnoresStaleControlNotification(t *testing.T) {
workerID: workerID,
chatHeartbeatInterval: time.Minute,
configCache: newChatConfigCache(ctx, db, clock),
heartbeatRegistry: make(map[uuid.UUID]*heartbeatEntry),
}
// Publish a stale "pending" notification on the control channel
@@ -2133,3 +2135,130 @@ func TestProcessChat_IgnoresStaleControlNotification(t *testing.T) {
require.Equal(t, database.ChatStatusError, finalStatus,
"processChat should have reached runChat (error), not been interrupted (waiting)")
}
// TestHeartbeatTick_StolenChatIsInterrupted verifies that when the
// batch heartbeat UPDATE does not return a registered chat's ID
// (because another replica stole it or it was completed), the
// heartbeat tick cancels that chat's context with ErrInterrupted
// while leaving surviving chats untouched.
func TestHeartbeatTick_StolenChatIsInterrupted(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
clock := quartz.NewMock(t)
workerID := uuid.New()
server := &Server{
db: db,
logger: logger,
clock: clock,
workerID: workerID,
chatHeartbeatInterval: time.Minute,
heartbeatRegistry: make(map[uuid.UUID]*heartbeatEntry),
}
// Create three chats with independent cancel functions.
chat1 := uuid.New()
chat2 := uuid.New()
chat3 := uuid.New()
_, cancel1 := context.WithCancelCause(ctx)
_, cancel2 := context.WithCancelCause(ctx)
ctx3, cancel3 := context.WithCancelCause(ctx)
server.registerHeartbeat(&heartbeatEntry{
cancelWithCause: cancel1,
chatID: chat1,
logger: logger,
})
server.registerHeartbeat(&heartbeatEntry{
cancelWithCause: cancel2,
chatID: chat2,
logger: logger,
})
server.registerHeartbeat(&heartbeatEntry{
cancelWithCause: cancel3,
chatID: chat3,
logger: logger,
})
// The batch UPDATE returns only chat1 and chat2 —
// chat3 was "stolen" by another replica.
db.EXPECT().UpdateChatHeartbeats(gomock.Any(), gomock.Any()).DoAndReturn(
func(_ context.Context, params database.UpdateChatHeartbeatsParams) ([]uuid.UUID, error) {
require.Equal(t, workerID, params.WorkerID)
require.Len(t, params.IDs, 3)
// Return only chat1 and chat2 as surviving.
return []uuid.UUID{chat1, chat2}, nil
},
)
server.heartbeatTick(ctx)
// chat3's context should be canceled with ErrInterrupted.
require.ErrorIs(t, context.Cause(ctx3), chatloop.ErrInterrupted,
"stolen chat should be interrupted")
// The heartbeat tick only cancels stolen chats; it does not
// unregister them. In production, processChat's deferred
// unregisterHeartbeat removes the entry. Verify all three
// entries are therefore still present in the registry.
server.heartbeatMu.Lock()
_, chat1Exists := server.heartbeatRegistry[chat1]
_, chat2Exists := server.heartbeatRegistry[chat2]
_, chat3Exists := server.heartbeatRegistry[chat3]
server.heartbeatMu.Unlock()
require.True(t, chat1Exists, "surviving chat1 should remain registered")
require.True(t, chat2Exists, "surviving chat2 should remain registered")
require.True(t, chat3Exists,
"stolen chat3 should still be in registry (processChat defer removes it)")
}
// TestHeartbeatTick_DBErrorDoesNotInterruptChats verifies that a
// transient database failure causes the tick to log and return
// without canceling any registered chats.
func TestHeartbeatTick_DBErrorDoesNotInterruptChats(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
ctrl := gomock.NewController(t)
db := dbmock.NewMockStore(ctrl)
clock := quartz.NewMock(t)
server := &Server{
db: db,
logger: logger,
clock: clock,
workerID: uuid.New(),
chatHeartbeatInterval: time.Minute,
heartbeatRegistry: make(map[uuid.UUID]*heartbeatEntry),
}
chatID := uuid.New()
chatCtx, cancel := context.WithCancelCause(ctx)
server.registerHeartbeat(&heartbeatEntry{
cancelWithCause: cancel,
chatID: chatID,
logger: logger,
})
// Simulate a transient DB error.
db.EXPECT().UpdateChatHeartbeats(gomock.Any(), gomock.Any()).Return(
nil, xerrors.New("connection reset"),
)
server.heartbeatTick(ctx)
// Chat should NOT be interrupted — the tick logged and
// returned early.
require.NoError(t, chatCtx.Err(),
"chat context should not be canceled on transient DB error")
}
+12 -7
@@ -474,7 +474,7 @@ func TestArchiveChatInterruptsActiveProcessing(t *testing.T) {
require.Equal(t, 1, userMessages, "expected queued message to stay queued after archive")
}
func TestUpdateChatHeartbeatRequiresOwnership(t *testing.T) {
func TestUpdateChatHeartbeatsRequiresOwnership(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
@@ -501,19 +501,24 @@ func TestUpdateChatHeartbeatRequiresOwnership(t *testing.T) {
})
require.NoError(t, err)
rows, err := db.UpdateChatHeartbeat(ctx, database.UpdateChatHeartbeatParams{
ID: chat.ID,
// Wrong worker_id should return no IDs.
ids, err := db.UpdateChatHeartbeats(ctx, database.UpdateChatHeartbeatsParams{
IDs: []uuid.UUID{chat.ID},
WorkerID: uuid.New(),
Now: time.Now(),
})
require.NoError(t, err)
require.Equal(t, int64(0), rows)
require.Empty(t, ids)
rows, err = db.UpdateChatHeartbeat(ctx, database.UpdateChatHeartbeatParams{
ID: chat.ID,
// Correct worker_id should return the chat's ID.
ids, err = db.UpdateChatHeartbeats(ctx, database.UpdateChatHeartbeatsParams{
IDs: []uuid.UUID{chat.ID},
WorkerID: workerID,
Now: time.Now(),
})
require.NoError(t, err)
require.Equal(t, int64(1), rows)
require.Len(t, ids, 1)
require.Equal(t, chat.ID, ids[0])
}
func TestSendMessageQueueBehaviorQueuesWhenBusy(t *testing.T) {
+2 -2
@@ -70,8 +70,8 @@ func TestRun_ActiveToolsPrepareBehavior(t *testing.T) {
require.Equal(t, 1, persistStepCalls)
require.True(t, persistedStep.ContextLimit.Valid)
require.Equal(t, int64(4096), persistedStep.ContextLimit.Int64)
require.Greater(t, persistedStep.Runtime, time.Duration(0),
"step runtime should be positive")
require.GreaterOrEqual(t, persistedStep.Runtime, time.Duration(0),
"step runtime should be non-negative")
require.NotEmpty(t, capturedCall.Prompt)
require.False(t, containsPromptSentinel(capturedCall.Prompt))
-23
@@ -1192,23 +1192,6 @@ func fileReferencePartToText(part codersdk.ChatMessagePart) string {
return sb.String()
}
// skillPartToText formats a skill SDK part as plain text for
// LLM consumption. The user explicitly attached this skill chip
// to their message, so the text makes clear they want the agent
// to use the named skill (listed in <available-skills>).
func skillPartToText(part codersdk.ChatMessagePart) string {
if part.SkillDescription != "" {
return fmt.Sprintf(
"Use the %q skill (%s). Read it with read_skill before following its instructions.",
part.SkillName, part.SkillDescription,
)
}
return fmt.Sprintf(
"Use the %q skill. Read it with read_skill before following its instructions.",
part.SkillName,
)
}
// toolResultPartToMessagePart converts an SDK tool-result part
// into a fantasy ToolResultPart for LLM dispatch.
func toolResultPartToMessagePart(logger slog.Logger, part codersdk.ChatMessagePart) fantasy.ToolResultPart {
@@ -1377,12 +1360,6 @@ func partsToMessageParts(
result = append(result, fantasy.TextPart{
Text: fileReferencePartToText(part),
})
case codersdk.ChatMessagePartTypeSkill:
// Skill parts from user input are converted to text
// so the LLM knows which skill the user requested.
result = append(result, fantasy.TextPart{
Text: skillPartToText(part),
})
case codersdk.ChatMessagePartTypeContextFile:
if part.ContextFileContent == "" {
continue
@@ -1088,49 +1088,6 @@ func TestFileReferencePreservation(t *testing.T) {
assert.Contains(t, textPart.Text, "func main() {}")
}
// TestSkillPartPreservation verifies skill parts survive the
// storage round-trip and convert to text for LLMs.
func TestSkillPartPreservation(t *testing.T) {
t.Parallel()
raw, err := chatprompt.MarshalParts([]codersdk.ChatMessagePart{{
Type: codersdk.ChatMessagePartTypeSkill,
SkillName: "deep-review",
SkillDescription: "Multi-reviewer code review",
}})
require.NoError(t, err)
// Storage round-trip: all fields intact.
parts, err := chatprompt.ParseContent(testMsg(codersdk.ChatMessageRoleUser, raw))
require.NoError(t, err)
require.Len(t, parts, 1)
assert.Equal(t, codersdk.ChatMessagePartTypeSkill, parts[0].Type)
assert.Equal(t, "deep-review", parts[0].SkillName)
assert.Equal(t, "Multi-reviewer code review", parts[0].SkillDescription)
// LLM dispatch: skill becomes a TextPart.
prompt, err := chatprompt.ConvertMessagesWithFiles(
context.Background(),
[]database.ChatMessage{{
Role: database.ChatMessageRoleUser,
Visibility: database.ChatMessageVisibilityBoth,
Content: raw,
}},
nil,
slogtest.Make(t, nil),
)
require.NoError(t, err)
require.Len(t, prompt, 1)
require.Len(t, prompt[0].Content, 1)
textPart, ok := fantasy.AsMessagePart[fantasy.TextPart](prompt[0].Content[0])
require.True(t, ok, "skill should become TextPart for LLM")
assert.Contains(t, textPart.Text, "deep-review")
assert.Contains(t, textPart.Text, "read_skill")
assert.Contains(t, textPart.Text, "Multi-reviewer code review")
}
// TestAssistantWriteRoundTrip verifies the Stage 4 write path:
// fantasy.Content (with ProviderMetadata) → PartFromContent →
// MarshalParts → DB → ParseContent (SDK path) →
+34 -5
@@ -9,6 +9,7 @@ import (
"fmt"
"net/http"
"net/url"
"slices"
"strings"
"sync"
"time"
@@ -49,10 +50,11 @@ const connectTimeout = 10 * time.Second
const toolCallTimeout = 60 * time.Second
// ConnectAll connects to all configured MCP servers, discovers
// their tools, and returns them as fantasy.AgentTool values. It
// skips servers that fail to connect and logs warnings. The
// returned cleanup function must be called to close all
// connections.
// their tools, and returns them as fantasy.AgentTool values.
// Tools are sorted by their prefixed name so callers
// receive a deterministic order. It skips servers that fail to
// connect and logs warnings. The returned cleanup function
// must be called to close all connections.
func ConnectAll(
ctx context.Context,
logger slog.Logger,
@@ -108,7 +110,9 @@ func ConnectAll(
}
mu.Lock()
clients = append(clients, mcpClient)
if mcpClient != nil {
clients = append(clients, mcpClient)
}
tools = append(tools, serverTools...)
mu.Unlock()
return nil
@@ -119,6 +123,31 @@ func ConnectAll(
// discarded.
_ = eg.Wait()
// Sort tools by prefixed name for deterministic ordering
// regardless of goroutine completion order. Ties, possible
// when the __ separator produces ambiguous prefixed names,
// are broken by config ID. Stable prompt construction
// depends on consistent tool ordering.
slices.SortFunc(tools, func(a, b fantasy.AgentTool) int {
// All tools in this slice are mcpToolWrapper values
// created by connectOne above, so these checked
// assertions should always succeed. The config ID
// tiebreaker resolves the __ separator ambiguity
// documented at the top of this file.
aTool, ok := a.(MCPToolIdentifier)
if !ok {
panic(fmt.Sprintf("unexpected tool type %T", a))
}
bTool, ok := b.(MCPToolIdentifier)
if !ok {
panic(fmt.Sprintf("unexpected tool type %T", b))
}
return cmp.Or(
cmp.Compare(a.Info().Name, b.Info().Name),
cmp.Compare(aTool.MCPServerConfigID().String(), bTool.MCPServerConfigID().String()),
)
})
return tools, cleanup
}
+126
@@ -63,6 +63,17 @@ func greetTool() mcpserver.ServerTool {
}
}
// makeTool returns a ServerTool with the given name and a
// no-op handler that always returns "ok".
func makeTool(name string) mcpserver.ServerTool {
return mcpserver.ServerTool{
Tool: mcp.NewTool(name),
Handler: func(_ context.Context, _ mcp.CallToolRequest) (*mcp.CallToolResult, error) {
return mcp.NewToolResultText("ok"), nil
},
}
}
// makeConfig builds a database.MCPServerConfig suitable for tests.
func makeConfig(slug, url string) database.MCPServerConfig {
return database.MCPServerConfig{
@@ -198,6 +209,121 @@ func TestConnectAll_MultipleServers(t *testing.T) {
assert.Contains(t, names, "beta__greet")
}
func TestConnectAll_NoToolsAfterFiltering(t *testing.T) {
t.Parallel()
ctx := context.Background()
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
ts := newTestMCPServer(t, echoTool())
cfg := makeConfig("filtered", ts.URL)
cfg.ToolAllowList = []string{"greet"}
tools, cleanup := mcpclient.ConnectAll(
ctx,
logger,
[]database.MCPServerConfig{cfg},
nil,
)
require.Empty(t, tools)
assert.NotPanics(t, cleanup)
}
func TestConnectAll_DeterministicOrder(t *testing.T) {
t.Parallel()
t.Run("AcrossServers", func(t *testing.T) {
t.Parallel()
ctx := context.Background()
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
ts1 := newTestMCPServer(t, makeTool("zebra"))
ts2 := newTestMCPServer(t, makeTool("alpha"))
ts3 := newTestMCPServer(t, makeTool("middle"))
tools, cleanup := mcpclient.ConnectAll(
ctx,
logger,
[]database.MCPServerConfig{
makeConfig("srv3", ts3.URL),
makeConfig("srv1", ts1.URL),
makeConfig("srv2", ts2.URL),
},
nil,
)
t.Cleanup(cleanup)
require.Len(t, tools, 3)
// Sorted by full prefixed name (slug__tool), so slug
// order determines the sequence, not the tool name.
assert.Equal(t,
[]string{"srv1__zebra", "srv2__alpha", "srv3__middle"},
toolNames(tools),
)
})
t.Run("WithMultiToolServer", func(t *testing.T) {
t.Parallel()
ctx := context.Background()
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
multi := newTestMCPServer(t, makeTool("zeta"), makeTool("beta"))
other := newTestMCPServer(t, makeTool("gamma"))
tools, cleanup := mcpclient.ConnectAll(
ctx,
logger,
[]database.MCPServerConfig{
makeConfig("zzz", multi.URL),
makeConfig("aaa", other.URL),
},
nil,
)
t.Cleanup(cleanup)
require.Len(t, tools, 3)
assert.Equal(t,
[]string{"aaa__gamma", "zzz__beta", "zzz__zeta"},
toolNames(tools),
)
})
t.Run("TiebreakByConfigID", func(t *testing.T) {
t.Parallel()
ctx := context.Background()
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
ts1 := newTestMCPServer(t, makeTool("b__z"))
ts2 := newTestMCPServer(t, makeTool("z"))
// Use fixed UUIDs so the tiebreaker order is
// predictable. Both servers produce the same prefixed
// name, a__b__z, due to the __ separator ambiguity.
cfg1 := makeConfig("a", ts1.URL)
cfg1.ID = uuid.MustParse("00000000-0000-0000-0000-000000000002")
cfg2 := makeConfig("a__b", ts2.URL)
cfg2.ID = uuid.MustParse("00000000-0000-0000-0000-000000000001")
tools, cleanup := mcpclient.ConnectAll(
ctx,
logger,
[]database.MCPServerConfig{cfg1, cfg2},
nil,
)
t.Cleanup(cleanup)
require.Len(t, tools, 2)
assert.Equal(t, []string{"a__b__z", "a__b__z"}, toolNames(tools))
id0 := tools[0].(mcpclient.MCPToolIdentifier).MCPServerConfigID()
id1 := tools[1].(mcpclient.MCPToolIdentifier).MCPServerConfigID()
assert.Equal(t, cfg2.ID, id0, "lower config ID should sort first")
assert.Equal(t, cfg1.ID, id1, "higher config ID should sort second")
})
}
func TestConnectAll_AuthHeaders(t *testing.T) {
t.Parallel()
ctx := context.Background()
+10 -5
@@ -1055,12 +1055,17 @@ func TestAwaitSubagentCompletion(t *testing.T) {
require.NoError(t, err)
defer cancelProbe()
// Transition the child first, then publish once the
// durable completion state is observable. Pubsub only
// wakes the waiter; it does not guarantee the report is
// visible in the same instant as the notification.
setChatStatus(ctx, t, db, child.ID, database.ChatStatusWaiting, "")
// Insert the message BEFORE transitioning to Waiting.
// Stale PG LISTEN/NOTIFY notifications from the
// processor's earlier run can still be buffered in the
// pgListener after drainInflight returns. If such a
// notification is dispatched between setChatStatus and
// insertAssistantMessage, checkSubagentCompletion would
// see done=true (Waiting) with an empty report. By
// inserting the message first, the report is guaranteed
// to be committed before the status makes it visible.
insertAssistantMessage(ctx, t, db, child.ID, model.ID, "pubsub result")
setChatStatus(ctx, t, db, child.ID, database.ChatStatusWaiting, "")
require.EventuallyWithT(t, func(c *assert.CollectT) {
chat, report, done, err := server.checkSubagentCompletion(ctx, child.ID)
require.NoError(c, err)
+6 -5
@@ -1,6 +1,7 @@
package chatd_test
import (
"strings"
"testing"
"github.com/google/uuid"
@@ -110,19 +111,19 @@ func TestSpawnComputerUseAgent_SystemPromptFormat(t *testing.T) {
// The system message raw content is a JSON-encoded string.
// It should contain the system prompt with the user prompt.
var rawSystemContent string
var foundPrompt bool
for _, msg := range messages {
if msg.Role != "system" {
continue
}
if msg.Content.Valid {
rawSystemContent = string(msg.Content.RawMessage)
if msg.Content.Valid && strings.Contains(string(msg.Content.RawMessage), prompt) {
foundPrompt = true
break
}
}
assert.Contains(t, rawSystemContent, prompt,
"system prompt raw content should contain the user prompt")
assert.True(t, foundPrompt,
"at least one system message should contain the user prompt")
}
func TestSpawnComputerUseAgent_ChildIsListedUnderParent(t *testing.T) {
+1
@@ -212,6 +212,7 @@ type AuditLogsRequest struct {
type AuditLogResponse struct {
AuditLogs []AuditLog `json:"audit_logs"`
Count int64 `json:"count"`
CountCap int64 `json:"count_cap"`
}
type CreateTestAuditLogRequest struct {
+81 -37
@@ -26,6 +26,12 @@ import (
// threshold settings.
const ChatCompactionThresholdKeyPrefix = "chat_compaction_threshold_pct:"
// MaxChatFileIDs is the maximum number of file IDs that can be
// associated with a single chat. This limit prevents unbounded
// growth in the chat_file_links table. It is easier to raise
// this limit than to lower it.
const MaxChatFileIDs = 20
// CompactionThresholdKey returns the user-config key for a specific
// model configuration's compaction threshold.
func CompactionThresholdKey(modelConfigID uuid.UUID) string {
@@ -46,24 +52,25 @@ const (
// Chat represents a chat session with an AI agent.
type Chat struct {
ID uuid.UUID `json:"id" format:"uuid"`
OwnerID uuid.UUID `json:"owner_id" format:"uuid"`
WorkspaceID *uuid.UUID `json:"workspace_id,omitempty" format:"uuid"`
BuildID *uuid.UUID `json:"build_id,omitempty" format:"uuid"`
AgentID *uuid.UUID `json:"agent_id,omitempty" format:"uuid"`
ParentChatID *uuid.UUID `json:"parent_chat_id,omitempty" format:"uuid"`
RootChatID *uuid.UUID `json:"root_chat_id,omitempty" format:"uuid"`
LastModelConfigID uuid.UUID `json:"last_model_config_id" format:"uuid"`
Title string `json:"title"`
Status ChatStatus `json:"status"`
LastError *string `json:"last_error"`
DiffStatus *ChatDiffStatus `json:"diff_status,omitempty"`
CreatedAt time.Time `json:"created_at" format:"date-time"`
UpdatedAt time.Time `json:"updated_at" format:"date-time"`
Archived bool `json:"archived"`
PinOrder int32 `json:"pin_order"`
MCPServerIDs []uuid.UUID `json:"mcp_server_ids" format:"uuid"`
Labels map[string]string `json:"labels"`
ID uuid.UUID `json:"id" format:"uuid"`
OwnerID uuid.UUID `json:"owner_id" format:"uuid"`
WorkspaceID *uuid.UUID `json:"workspace_id,omitempty" format:"uuid"`
BuildID *uuid.UUID `json:"build_id,omitempty" format:"uuid"`
AgentID *uuid.UUID `json:"agent_id,omitempty" format:"uuid"`
ParentChatID *uuid.UUID `json:"parent_chat_id,omitempty" format:"uuid"`
RootChatID *uuid.UUID `json:"root_chat_id,omitempty" format:"uuid"`
LastModelConfigID uuid.UUID `json:"last_model_config_id" format:"uuid"`
Title string `json:"title"`
Status ChatStatus `json:"status"`
LastError *string `json:"last_error"`
DiffStatus *ChatDiffStatus `json:"diff_status,omitempty"`
CreatedAt time.Time `json:"created_at" format:"date-time"`
UpdatedAt time.Time `json:"updated_at" format:"date-time"`
Archived bool `json:"archived"`
PinOrder int32 `json:"pin_order"`
MCPServerIDs []uuid.UUID `json:"mcp_server_ids" format:"uuid"`
Labels map[string]string `json:"labels"`
Files []ChatFileMetadata `json:"files,omitempty"`
// HasUnread is true when assistant messages exist beyond
// the owner's read cursor, which updates on stream
// connect and disconnect.
@@ -73,6 +80,18 @@ type Chat struct {
// is updated only when context changes — first workspace
// attach or agent change.
LastInjectedContext []ChatMessagePart `json:"last_injected_context,omitempty"`
Warnings []string `json:"warnings,omitempty"`
}
// ChatFileMetadata contains lightweight metadata about a file
// associated with a chat, excluding the file content itself.
type ChatFileMetadata struct {
ID uuid.UUID `json:"id" format:"uuid"`
OwnerID uuid.UUID `json:"owner_id" format:"uuid"`
OrganizationID uuid.UUID `json:"organization_id" format:"uuid"`
Name string `json:"name"`
MimeType string `json:"mime_type"`
CreatedAt time.Time `json:"created_at" format:"date-time"`
}
// ChatMessage represents a single message in a chat.
@@ -326,7 +345,6 @@ const (
ChatInputPartTypeText ChatInputPartType = "text"
ChatInputPartTypeFile ChatInputPartType = "file"
ChatInputPartTypeFileReference ChatInputPartType = "file-reference"
ChatInputPartTypeSkill ChatInputPartType = "skill"
)
// ChatInputPart is a single user input part for creating a chat.
@@ -341,15 +359,12 @@ type ChatInputPart struct {
EndLine int `json:"end_line,omitempty"`
// The code content from the diff that was commented on.
Content string `json:"content,omitempty"`
// The following fields are only set when Type is
// ChatInputPartTypeSkill.
SkillName string `json:"skill_name,omitempty"`
SkillDescription string `json:"skill_description,omitempty"`
}
// CreateChatRequest is the request to create a new chat.
type CreateChatRequest struct {
Content []ChatInputPart `json:"content"`
SystemPrompt string `json:"system_prompt,omitempty"`
WorkspaceID *uuid.UUID `json:"workspace_id,omitempty" format:"uuid"`
ModelConfigID *uuid.UUID `json:"model_config_id,omitempty" format:"uuid"`
MCPServerIDs []uuid.UUID `json:"mcp_server_ids,omitempty" format:"uuid"`
@@ -373,11 +388,27 @@ type UpdateChatRequest struct {
Labels *map[string]string `json:"labels,omitempty"`
}
// ChatBusyBehavior controls what happens when a user sends a message
// while the chat is already processing.
type ChatBusyBehavior string
const (
// ChatBusyBehaviorQueue queues the message for processing after
// the current run finishes.
ChatBusyBehaviorQueue ChatBusyBehavior = "queue"
// ChatBusyBehaviorInterrupt queues the message and interrupts
// the active run. The partial assistant response is persisted
// before the queued message is promoted, preserving correct
// conversation order.
ChatBusyBehaviorInterrupt ChatBusyBehavior = "interrupt"
)
// CreateChatMessageRequest is the request to add a message to a chat.
type CreateChatMessageRequest struct {
Content []ChatInputPart `json:"content"`
ModelConfigID *uuid.UUID `json:"model_config_id,omitempty" format:"uuid"`
MCPServerIDs *[]uuid.UUID `json:"mcp_server_ids,omitempty" format:"uuid"`
Content []ChatInputPart `json:"content"`
ModelConfigID *uuid.UUID `json:"model_config_id,omitempty" format:"uuid"`
MCPServerIDs *[]uuid.UUID `json:"mcp_server_ids,omitempty" format:"uuid"`
BusyBehavior ChatBusyBehavior `json:"busy_behavior,omitempty" enums:"queue,interrupt"`
}
// EditChatMessageRequest is the request to edit a user message in a chat.
@@ -390,6 +421,15 @@ type CreateChatMessageResponse struct {
Message *ChatMessage `json:"message,omitempty"`
QueuedMessage *ChatQueuedMessage `json:"queued_message,omitempty"`
Queued bool `json:"queued"`
Warnings []string `json:"warnings,omitempty"`
}
// EditChatMessageResponse is the response from editing a message in a chat.
// Edits are always synchronous (no queueing), so the message is returned
// directly.
type EditChatMessageResponse struct {
Message ChatMessage `json:"message"`
Warnings []string `json:"warnings,omitempty"`
}
// UploadChatFileResponse is the response from uploading a chat file.
@@ -631,7 +671,7 @@ type ChatModelOpenAIProviderOptions struct {
ParallelToolCalls *bool `json:"parallel_tool_calls,omitempty" description:"Whether the model may make multiple tool calls in parallel"`
User *string `json:"user,omitempty" description:"Unique identifier for the end user for abuse monitoring" hidden:"true"`
ReasoningEffort *string `json:"reasoning_effort,omitempty" description:"Controls the level of reasoning effort" enum:"none,minimal,low,medium,high,xhigh"`
ReasoningSummary *string `json:"reasoning_summary,omitempty" description:"Controls whether reasoning tokens are summarized in the response"`
ReasoningSummary *string `json:"reasoning_summary,omitempty" description:"Controls whether reasoning tokens are summarized in the response" enum:"auto,concise,detailed"`
MaxCompletionTokens *int64 `json:"max_completion_tokens,omitempty" description:"Upper bound on tokens the model may generate"`
TextVerbosity *string `json:"text_verbosity,omitempty" description:"Controls the verbosity of the text response" enum:"low,medium,high"`
Prediction map[string]any `json:"prediction,omitempty" description:"Predicted output content to speed up responses" hidden:"true"`
@@ -639,12 +679,12 @@ type ChatModelOpenAIProviderOptions struct {
Metadata map[string]any `json:"metadata,omitempty" description:"Arbitrary metadata to attach to the request" hidden:"true"`
PromptCacheKey *string `json:"prompt_cache_key,omitempty" description:"Key for enabling cross-request prompt caching"`
SafetyIdentifier *string `json:"safety_identifier,omitempty" description:"Developer-specific safety identifier for the request" hidden:"true"`
ServiceTier *string `json:"service_tier,omitempty" description:"Latency tier to use for processing the request"`
ServiceTier *string `json:"service_tier,omitempty" description:"Latency tier to use for processing the request" enum:"auto,default,flex,scale,priority"`
StructuredOutputs *bool `json:"structured_outputs,omitempty" description:"Whether to enable structured JSON output mode" hidden:"true"`
StrictJSONSchema *bool `json:"strict_json_schema,omitempty" description:"Whether to enforce strict adherence to the JSON schema" hidden:"true"`
WebSearchEnabled *bool `json:"web_search_enabled,omitempty" description:"Enable OpenAI web search tool for grounding responses with real-time information"`
SearchContextSize *string `json:"search_context_size,omitempty" description:"Amount of search context to use" enum:"low,medium,high"`
AllowedDomains []string `json:"allowed_domains,omitempty" description:"Restrict web search to these domains"`
AllowedDomains []string `json:"allowed_domains,omitempty" label:"Web Search: Allowed Domains" description:"Restrict web search to these domains"`
}
// ChatModelAnthropicThinkingOptions configures Anthropic thinking budget.
@@ -656,11 +696,11 @@ type ChatModelAnthropicThinkingOptions struct {
type ChatModelAnthropicProviderOptions struct {
SendReasoning *bool `json:"send_reasoning,omitempty" description:"Whether to include reasoning content in the response"`
Thinking *ChatModelAnthropicThinkingOptions `json:"thinking,omitempty" description:"Configuration for extended thinking"`
Effort *string `json:"effort,omitempty" description:"Controls the level of reasoning effort" enum:"low,medium,high,max"`
Effort *string `json:"effort,omitempty" label:"Reasoning Effort" description:"Controls the level of reasoning effort" enum:"low,medium,high,max"`
DisableParallelToolUse *bool `json:"disable_parallel_tool_use,omitempty" description:"Whether to disable parallel tool execution"`
WebSearchEnabled *bool `json:"web_search_enabled,omitempty" description:"Enable Anthropic web search tool for grounding responses with real-time information"`
AllowedDomains []string `json:"allowed_domains,omitempty" description:"Restrict web search to these domains (cannot be used with blocked_domains)"`
BlockedDomains []string `json:"blocked_domains,omitempty" description:"Block web search on these domains (cannot be used with allowed_domains)"`
AllowedDomains []string `json:"allowed_domains,omitempty" label:"Web Search: Allowed Domains" description:"Restrict web search to these domains (cannot be used with blocked_domains)"`
BlockedDomains []string `json:"blocked_domains,omitempty" label:"Web Search: Blocked Domains" description:"Block web search on these domains (cannot be used with allowed_domains)"`
}
// ChatModelGoogleThinkingConfig configures Google thinking behavior.
@@ -979,6 +1019,7 @@ type ChatCostSummary struct {
TotalOutputTokens int64 `json:"total_output_tokens"`
TotalCacheReadTokens int64 `json:"total_cache_read_tokens"`
TotalCacheCreationTokens int64 `json:"total_cache_creation_tokens"`
TotalRuntimeMs int64 `json:"total_runtime_ms"`
ByModel []ChatCostModelBreakdown `json:"by_model"`
ByChat []ChatCostChatBreakdown `json:"by_chat"`
UsageLimit *ChatUsageLimitStatus `json:"usage_limit,omitempty"`
@@ -996,6 +1037,7 @@ type ChatCostModelBreakdown struct {
TotalOutputTokens int64 `json:"total_output_tokens"`
TotalCacheReadTokens int64 `json:"total_cache_read_tokens"`
TotalCacheCreationTokens int64 `json:"total_cache_creation_tokens"`
TotalRuntimeMs int64 `json:"total_runtime_ms"`
}
// ChatCostChatBreakdown contains per-root-chat cost aggregation.
@@ -1008,6 +1050,7 @@ type ChatCostChatBreakdown struct {
TotalOutputTokens int64 `json:"total_output_tokens"`
TotalCacheReadTokens int64 `json:"total_cache_read_tokens"`
TotalCacheCreationTokens int64 `json:"total_cache_creation_tokens"`
TotalRuntimeMs int64 `json:"total_runtime_ms"`
}
// ChatCostUserRollup contains per-user cost aggregation for admin views.
@@ -1023,6 +1066,7 @@ type ChatCostUserRollup struct {
TotalOutputTokens int64 `json:"total_output_tokens"`
TotalCacheReadTokens int64 `json:"total_cache_read_tokens"`
TotalCacheCreationTokens int64 `json:"total_cache_creation_tokens"`
TotalRuntimeMs int64 `json:"total_runtime_ms"`
}
// ChatCostUsersResponse is the response from the admin chat cost users endpoint.
@@ -1940,7 +1984,7 @@ func (c *ExperimentalClient) EditChatMessage(
chatID uuid.UUID,
messageID int64,
req EditChatMessageRequest,
) (ChatMessage, error) {
) (EditChatMessageResponse, error) {
res, err := c.Request(
ctx,
http.MethodPatch,
@@ -1948,14 +1992,14 @@ func (c *ExperimentalClient) EditChatMessage(
req,
)
if err != nil {
return ChatMessage{}, err
return EditChatMessageResponse{}, err
}
if res.StatusCode != http.StatusOK {
return ChatMessage{}, readBodyAsChatUsageLimitError(res)
return EditChatMessageResponse{}, readBodyAsChatUsageLimitError(res)
}
defer res.Body.Close()
var message ChatMessage
return message, json.NewDecoder(res.Body).Decode(&message)
var resp EditChatMessageResponse
return resp, json.NewDecoder(res.Body).Decode(&resp)
}
// InterruptChat cancels an in-flight chat run and leaves it waiting.
+1
@@ -96,6 +96,7 @@ type ConnectionLogsRequest struct {
type ConnectionLogResponse struct {
ConnectionLogs []ConnectionLog `json:"connection_logs"`
Count int64 `json:"count"`
CountCap int64 `json:"count_cap"`
}
func (c *Client) ConnectionLogs(ctx context.Context, req ConnectionLogsRequest) (ConnectionLogResponse, error) {
+8
@@ -7,6 +7,14 @@ features, you can [request a trial](https://coder.com/trial) or
![Licenses screen shows license information and seat consumption](../../images/admin/licenses/licenses-screen.png)
## Offline license validation
Coder license keys are signed JWTs that are validated locally using cryptographic
signatures. No outbound connection to Coder's servers is required for license
validation. This means licenses work in
[air-gapped and offline deployments](../../install/airgap.md) without any
additional configuration.
## Adding your license key
There are two ways to add a license to a Coder deployment:
+24 -2
@@ -134,6 +134,12 @@ edited message onward, truncating any messages that followed it.
|-----------|-------------------|----------|----------------------------------|
| `content` | `ChatInputPart[]` | yes | The replacement message content. |
The response is an `EditChatMessageResponse` with the edited `message`
and an optional `warnings` array. When file references in the edited
content cannot be linked (e.g. the per-chat file cap is reached), the
edit still succeeds and the `warnings` array describes which files
were not linked.
### Stream updates
`GET /api/experimental/chats/{chat}/stream`
@@ -201,7 +207,9 @@ Each event is a JSON object with `kind` and `chat` fields:
`GET /api/experimental/chats`
Returns all chats owned by the authenticated user.
Returns all chats owned by the authenticated user. The `files` field is
populated on `POST /chats` and `GET /chats/{id}`. Other endpoints that
return a `Chat` object omit it.
| Query parameter | Type | Required | Description |
|-----------------|----------|----------|------------------------------------------------------------------|
@@ -212,7 +220,17 @@ Returns all chats owned by the authenticated user.
`GET /api/experimental/chats/{chat}`
Returns the `Chat` object (metadata only, no messages).
Returns the `Chat` object (metadata only, no messages). The response
includes a `files` field (`ChatFileMetadata[]`) containing metadata for
files that have been successfully linked to the chat. File linking is
best-effort; if linking fails, the file remains in message content but
will be absent from this field.
When file linking is skipped (e.g. the per-chat file cap is reached),
`POST /chats` includes a `warnings` array on the `Chat` response and
`POST /chats/{chat}/messages` includes a `warnings` array on the
`CreateChatMessageResponse`. The `warnings` field is `omitempty` and
absent when all files are linked successfully.
### Get chat messages
@@ -295,6 +313,10 @@ file, use `GET /api/experimental/chats/files/{file}`.
Supported formats: PNG, JPEG, GIF, WebP (up to 10 MB). The server
validates actual file content regardless of the declared `Content-Type`.
Files referenced in messages are automatically linked to the chat and
appear in the `files` field on subsequent
`GET /api/experimental/chats/{chat}` responses.
## Chat statuses
| Status | Meaning |
+1
@@ -13,6 +13,7 @@ air-gapped with Kubernetes or Docker.
| PostgreSQL | If no [PostgreSQL connection URL](../reference/cli/server.md#--postgres-url) is specified, Coder will download Postgres from [repo1.maven.org](https://repo1.maven.org) | An external database is required, you must specify a [PostgreSQL connection URL](../reference/cli/server.md#--postgres-url) |
| Telemetry | Telemetry is on by default, and [can be disabled](../reference/cli/server.md#--telemetry) | Telemetry [can be disabled](../reference/cli/server.md#--telemetry) |
| Update check | By default, Coder checks for updates from [GitHub releases](https://github.com/coder/coder/releases) | Update checks [can be disabled](../reference/cli/server.md#--update-check) |
| License validation | License keys are validated locally using cryptographic signatures. No outbound connection to Coder is required | No changes needed. See [offline license validation](../admin/licensing/index.md#offline-license-validation) |
| AI Governance Usage Count | By default, deployments with the [AI Governance Add On](../ai-coder/ai-governance.md) report usage data | [Contact us](https://coder.com/contact) to request a license with usage reporting off. |
## Air-gapped container images
+6
@@ -1299,6 +1299,12 @@
"description": "Custom claims/scopes with Okta for group/role sync",
"path": "./tutorials/configuring-okta.md"
},
{
"title": "Persistent Shared Workspaces",
"description": "Set up long-lived shared workspaces with service accounts and workspace sharing",
"path": "./tutorials/persistent-shared-workspaces.md",
"state": ["premium"]
},
{
"title": "Google to AWS Federation",
"description": "Federating a Google Cloud service account to AWS",
+2 -1
@@ -90,7 +90,8 @@ curl -X GET http://coder-server:8080/api/v2/audit?limit=0 \
"user_agent": "string"
}
],
"count": 0
"count": 0,
"count_cap": 0
}
```
+2 -1
@@ -291,7 +291,8 @@ curl -X GET http://coder-server:8080/api/v2/connectionlog?limit=0 \
"workspace_owner_username": "string"
}
],
"count": 0
"count": 0,
"count_cap": 0
}
```
+6 -2
@@ -1740,7 +1740,8 @@
"user_agent": "string"
}
],
"count": 0
"count": 0,
"count_cap": 0
}
```
@@ -1750,6 +1751,7 @@
|--------------|-------------------------------------------------|----------|--------------|-------------|
| `audit_logs` | array of [codersdk.AuditLog](#codersdkauditlog) | false | | |
| `count` | integer | false | | |
| `count_cap` | integer | false | | |
## codersdk.AuthMethod
@@ -2173,7 +2175,8 @@ AuthorizationObject can represent a "set" of objects, such as: all workspaces in
"workspace_owner_username": "string"
}
],
"count": 0
"count": 0,
"count_cap": 0
}
```
@@ -2183,6 +2186,7 @@ AuthorizationObject can represent a "set" of objects, such as: all workspaces in
|-------------------|-----------------------------------------------------------|----------|--------------|-------------|
| `connection_logs` | array of [codersdk.ConnectionLog](#codersdkconnectionlog) | false | | |
| `count` | integer | false | | |
| `count_cap` | integer | false | | |
## codersdk.ConnectionLogSSHInfo
@@ -0,0 +1,258 @@
# Persistent Shared Workspaces with Service Accounts
> [!NOTE]
> This guide requires a
> [Premium license](https://coder.com/pricing#compare-plans) because service
> accounts are a Premium feature. For more details,
> [contact your account team](https://coder.com/contact).
This guide walks through setting up a long-lived workspace that is owned by a
service account and shared with a rotating set of users. Because no single
person owns the workspace, it persists across team changes and every user
authenticates as themselves.
This pattern is useful for any scenario where a workspace outlives the people
who use it:
- **On-call rotations** — Engineers share a workspace pre-loaded with runbooks,
dashboards, and monitoring tools. Access rotates with the shift schedule.
- **Shared staging or QA** — A team workspace hosts a persistent staging
environment. Testers and reviewers are added and removed as sprints change.
- **Pair programming** — A service-account-owned workspace gives two or more
developers a shared environment without either one owning (and accidentally
deleting) it.
- **Contractor onboarding** — An external team gets scoped access to a workspace
for the duration of an engagement, then access is revoked.
The steps below use an **on-call SRE workspace** as a running example, but the
same commands apply to any of the scenarios above. Substitute the usernames,
group names, and template to match your use case.
## Prerequisites
- A running Coder deployment (v2.32+) with workspace sharing enabled. Sharing
is on by default for OSS; Premium deployments may require
[admin configuration](../user-guides/shared-workspaces.md#policies).
- The [Coder CLI](../install/index.md) installed and authenticated.
- An account with the `Owner` or `User Admin` role.
- [OIDC authentication](../admin/users/oidc-auth/index.md) configured so
shared users log in with their corporate SSO identity. Configure
[refresh tokens](../admin/users/oidc-auth/refresh-tokens.md) to prevent
session timeouts during long work sessions.
- A [wildcard access URL](../admin/networking/wildcard-access-url.md) configured
(e.g. `*.coder.example.com`) so that shared users can access workspace apps
without a 404.
- (Recommended) [IdP Group Sync](../admin/users/idp-sync.md#group-sync)
configured if your identity provider manages group membership for the teams
that will share the workspace.
## 1. Create a service account
Create a dedicated service account that will own the shared workspace. Service
accounts are non-human accounts intended for automation and shared ownership.
Because no individual user owns the workspace, there are no personal
credentials to expose and the shared environment is not affected when any user
leaves the team or the organization.
```shell
# On-call example — substitute a name that fits your use case
coder users create \
--username oncall-sre \
--service-account
```
## 2. Generate an API token for the service account
Generate a long-lived API token so you can create and manage workspaces on
behalf of the service account:
```shell
coder tokens create \
--user oncall-sre \
--name oncall-automation \
--lifetime 8760h
```
Store this token securely (e.g. in a secrets manager like Vault or AWS Secrets
Manager).
> [!IMPORTANT]
> Never distribute this token to end users. The token is for workspace
> administration only. Shared users authenticate as themselves and reach the
> workspace through sharing.
## 3. Create the workspace
Authenticate as the service account and create the workspace:
```shell
export CODER_SESSION_TOKEN="<token-from-step-2>"
coder create oncall-sre/oncall-workspace \
--template your-oncall-template \
--use-parameter-defaults \
--yes
```
> [!TIP]
> Design a dedicated template for the workspace with the tools your team
> needs pre-installed (e.g. monitoring dashboards for on-call, test runners
> for QA). Set `subdomain = true` on workspace apps so that shared users can
> access web-based tools without a 404. See
> [Accessing workspace apps in shared workspaces](../user-guides/shared-workspaces.md#accessing-workspace-apps-in-shared-workspaces).
## 4. Share the workspace
Use `coder sharing share` to grant access to users who need the workspace:
```shell
coder sharing share oncall-sre/oncall-workspace --user alice
```
This gives `alice` the default `use` role, which allows connection via SSH and
workspace apps, starting and stopping the workspace, and viewing logs and stats.
To grant the `admin` role, which includes all `use` permissions plus renaming
and updating the workspace and inviting others to join with the `use` role:
```shell
coder sharing share oncall-sre/oncall-workspace --user alice:admin
```
To share with multiple users at once:
```shell
coder sharing share oncall-sre/oncall-workspace --user alice:admin,bob
```
To share with an entire Coder group:
```shell
coder sharing share oncall-sre/oncall-workspace --group sre-oncall
```
> [!NOTE]
> Groups can be synced from your identity provider using
> [IdP Sync](../admin/users/idp-sync.md#group-sync). If your IdP already
> manages team membership, sharing with a group is the simplest approach.
## 5. Rotate access
When team membership changes, remove outgoing users and add incoming ones:
```shell
# Remove outgoing user
coder sharing remove oncall-sre/oncall-workspace --user alice
# Add incoming user
coder sharing share oncall-sre/oncall-workspace --user carol
```
> [!IMPORTANT]
> The workspace must be restarted for user removal to take effect.
Verify current sharing status at any time:
```shell
coder sharing status oncall-sre/oncall-workspace
```
## 6. Automate access changes (optional)
For use cases with frequent rotation (such as on-call shifts), you can integrate
the share/remove commands into external tooling like PagerDuty, Opsgenie, or a
cron job.
### Rotation script
```shell
#!/bin/bash
# rotate-access.sh
# Usage: ./rotate-access.sh <outgoing-user> <incoming-user>
WORKSPACE="oncall-sre/oncall-workspace"
OUTGOING="$1"
INCOMING="$2"
if [ -n "$OUTGOING" ]; then
echo "Removing access for $OUTGOING..."
coder sharing remove "$WORKSPACE" --user "$OUTGOING"
fi
echo "Granting access to $INCOMING..."
coder sharing share "$WORKSPACE" --user "$INCOMING"
echo "Restarting workspace to apply changes..."
coder restart "$WORKSPACE" --yes
echo "Current sharing status:"
coder sharing status "$WORKSPACE"
```
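A hypothetical invocation at shift handoff (the script name and usernames are
placeholders matching the example above):

```shell
# Hand off access from the outgoing engineer to the incoming one
./rotate-access.sh alice carol

# Initial grant with no outgoing user: pass an empty first argument
./rotate-access.sh "" carol
```

Call this from whatever drives your rotation, for example a PagerDuty or
Opsgenie webhook handler that fires at shift boundaries.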
### Group-based rotation with IdP Sync
If your identity provider manages group membership (e.g. an `sre-oncall` group
in Okta or Azure AD), you can skip manual share/remove commands entirely:
1. Configure [Group Sync](../admin/users/idp-sync.md#group-sync) to
synchronize the group from your IdP to Coder.
1. Share the workspace with the group once:
```shell
coder sharing share oncall-sre/oncall-workspace --group sre-oncall
```
1. When your IdP rotates group membership, Coder group membership updates on
next login. All current members have access; removed members lose access
after a workspace restart.
## Finding shared workspaces
Shared users can find workspaces shared with them:
```shell
# List all workspaces shared with you
coder list --search shared:true
# List workspaces shared with a specific user
coder list --search shared_with_user:alice
# List workspaces shared with a specific group
coder list --search shared_with_group:sre-oncall
```
## Troubleshooting
### Shared user sees 404 on workspace apps
Workspace apps using path-based routing block non-owners by default. Configure a
[wildcard access URL](../admin/networking/wildcard-access-url.md) and set
`subdomain = true` on the workspace app in your template.
### Removed user still has access
Access removal requires a workspace restart. Run
`coder restart <workspace>` after removing a user or group.
### Group sync not updating membership
Group membership changes in your IdP are not reflected until the user logs out
and back in. Group sync runs at login time, not on a polling schedule. Check the
Coder server logs with
`CODER_LOG_FILTER=".*userauth.*|.*groups returned.*"` for details. See
[Troubleshooting group sync](../admin/users/idp-sync.md#troubleshooting-grouproleorganization-sync)
for more information.
## Next steps
- [Shared Workspaces](../user-guides/shared-workspaces.md) — full reference
for workspace sharing features and UI
- [IdP Sync](../admin/users/idp-sync.md) — group, role, and organization
sync configuration
- [Configuring Okta](./configuring-okta.md) — Okta-specific OIDC setup with
custom claims and scopes
- [Security Best Practices](./best-practices/security-best-practices.md) —
deployment-wide security hardening
- [Sessions and Tokens](../admin/users/sessions-tokens.md) — API token
management and scoping
+1 -1
@@ -5,7 +5,7 @@ terraform {
}
docker = {
source = "kreuzwerker/docker"
version = "~> 3.0"
version = "~> 4.0"
}
envbuilder = {
source = "coder/envbuilder"
+3 -3
@@ -1,5 +1,5 @@
# 1.93.1
FROM rust:slim@sha256:1d0000a49fb62f4fde24455f49d59c6c088af46202d65d8f455b722f7263e8f8 AS rust-utils
FROM rust:slim@sha256:a08d20a404f947ed358dfb63d1ee7e0b88ecad3c45ba9682ccbf2cb09c98acca AS rust-utils
# Install rust helper programs
ENV CARGO_INSTALL_ROOT=/tmp/
# Use more reliable mirrors for Debian packages
@@ -8,7 +8,7 @@ RUN sed -i 's|http://deb.debian.org/debian|http://mirrors.edge.kernel.org/debian
RUN apt-get update && apt-get install -y libssl-dev openssl pkg-config build-essential
RUN cargo install jj-cli typos-cli watchexec-cli
FROM ubuntu:jammy@sha256:5e5b128eb4ff35ee52687c20d081dcc15b8cb55e113247683f435224fc58b956 AS go
FROM ubuntu:jammy@sha256:eb29ed27b0821dca09c2e28b39135e185fc1302036427d5f4d70a41ce8fd7659 AS go
# Install Go manually, so that we can control the version
ARG GO_VERSION=1.25.8
@@ -83,7 +83,7 @@ RUN curl -L -o protoc.zip https://github.com/protocolbuffers/protobuf/releases/d
unzip protoc.zip && \
rm protoc.zip
FROM ubuntu:jammy@sha256:5e5b128eb4ff35ee52687c20d081dcc15b8cb55e113247683f435224fc58b956
FROM ubuntu:jammy@sha256:eb29ed27b0821dca09c2e28b39135e185fc1302036427d5f4d70a41ce8fd7659
SHELL ["/bin/bash", "-c"]
+2 -2
@@ -6,7 +6,7 @@ terraform {
}
docker = {
source = "kreuzwerker/docker"
version = "~> 3.6"
version = "~> 4.0"
}
}
}
@@ -922,7 +922,7 @@ resource "coder_script" "boundary_config_setup" {
module "claude-code" {
count = data.coder_task.me.enabled ? data.coder_workspace.me.start_count : 0
source = "dev.registry.coder.com/coder/claude-code/coder"
version = "4.8.2"
version = "4.9.1"
enable_boundary = true
agent_id = coder_agent.dev.id
workdir = local.repo_dir
+6
@@ -16,6 +16,9 @@ import (
"github.com/coder/coder/v2/codersdk"
)
// NOTE: See the auditLogCountCap note.
const connectionLogCountCap = 2000
// @Summary Get connection logs
// @ID get-connection-logs
// @Security CoderSessionToken
@@ -49,6 +52,7 @@ func (api *API) connectionLogs(rw http.ResponseWriter, r *http.Request) {
// #nosec G115 - Safe conversion as pagination limit is expected to be within int32 range
filter.LimitOpt = int32(page.Limit)
countFilter.CountCap = connectionLogCountCap
count, err := api.Database.CountConnectionLogs(ctx, countFilter)
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
@@ -63,6 +67,7 @@ func (api *API) connectionLogs(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.ConnectionLogResponse{
ConnectionLogs: []codersdk.ConnectionLog{},
Count: 0,
CountCap: connectionLogCountCap,
})
return
}
@@ -80,6 +85,7 @@ func (api *API) connectionLogs(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.ConnectionLogResponse{
ConnectionLogs: convertConnectionLogs(dblogs),
Count: count,
CountCap: connectionLogCountCap,
})
}
+90 -4
@@ -26,6 +26,12 @@ const RelaySourceHeader = "X-Coder-Relay-Source-Replica"
const (
authorizationHeader = "Authorization"
cookieHeader = "Cookie"
// relayDrainTimeout is how long an established relay is
// kept open after the chat leaves running state, giving
// buffered snapshot events time to be forwarded before
// the relay is torn down.
relayDrainTimeout = 200 * time.Millisecond
)
// MultiReplicaSubscribeConfig holds the dependencies for multi-replica chat
@@ -169,6 +175,21 @@ func NewMultiReplicaSubscribeFn(
var reconnectTimer *quartz.Timer
var reconnectCh <-chan time.Time
// drainAndClose is set when the chat transitions away
// from running while a relay dial is still in progress.
// Instead of canceling the dial immediately, we let it
// complete so the snapshot of buffered message_parts
// can be forwarded to the subscriber.
var drainAndClose bool
// Drain timer state. When the relay connects in
// drain-and-close mode, a short timer is started.
// During this window the normal relayPartsCh case
// forwards buffered snapshot events. When the timer
// fires the relay is torn down.
var drainTimer *quartz.Timer
var drainTimerCh <-chan time.Time
// Helper to close relay and stop any pending reconnect
// timer.
closeRelay := func() {
@@ -200,6 +221,12 @@ func NewMultiReplicaSubscribeFn(
reconnectTimer = nil
reconnectCh = nil
}
if drainTimer != nil {
drainTimer.Stop()
drainTimer = nil
drainTimerCh = nil
}
drainAndClose = false
}
// openRelayAsync dials the remote replica in a background
@@ -335,16 +362,52 @@ func NewMultiReplicaSubscribeFn(
// A nil parts channel signals the dial
// failed — schedule a retry.
if result.parts == nil {
scheduleRelayReconnect()
if drainAndClose {
// Dial failed and we were only
// waiting to drain — nothing to do.
drainAndClose = false
} else {
scheduleRelayReconnect()
}
continue
}
// An async relay dial completed; swap
// in the new relay channel.
if relayCancel != nil {
relayCancel()
}
relayParts = result.parts
relayCancel = result.cancel
if drainAndClose {
// The chat is no longer running on
// the remote worker, but the dial
// completed. Verify no new worker
// has claimed the chat before we
// drain stale parts.
currentChat, dbErr := params.DB.GetChatByID(ctx, chatID)
if dbErr != nil {
logger.Warn(ctx, "failed to check chat status for relay drain",
slog.F("chat_id", chatID),
slog.Error(dbErr),
)
}
if dbErr == nil && currentChat.Status == database.ChatStatusRunning &&
currentChat.WorkerID.Valid &&
currentChat.WorkerID.UUID != params.WorkerID {
// A new worker picked up the chat;
// discard the stale relay and let
// openRelayAsync handle the new one.
closeRelay()
} else {
// Chat is still idle — drain the
// buffered snapshot before closing.
if drainTimer != nil {
drainTimer.Stop()
}
drainTimer = cfg.clock().NewTimer(relayDrainTimeout, "drain")
drainTimerCh = drainTimer.C
drainAndClose = false
}
}
case <-reconnectCh:
reconnectCh = nil
// Re-check whether the chat is still
@@ -374,8 +437,31 @@ func NewMultiReplicaSubscribeFn(
if sn.Status == database.ChatStatusRunning && sn.WorkerID != uuid.Nil && sn.WorkerID != params.WorkerID {
openRelayAsync(sn.WorkerID)
} else {
closeRelay()
switch {
case dialCancel != nil && relayParts == nil:
// In-progress dial: let it complete
// so its snapshot can be forwarded.
drainAndClose = true
case relayParts != nil:
// Active relay: give it a short
// window to deliver any remaining
// buffered parts before closing.
if drainTimer != nil {
drainTimer.Stop()
}
drainTimer = cfg.clock().NewTimer(relayDrainTimeout, "drain")
drainTimerCh = drainTimer.C
default:
closeRelay()
}
}
case <-drainTimerCh:
drainTimerCh = nil
drainTimer = nil
closeRelay()
case event, ok := <-relayPartsCh:
if !ok {
if relayCancel != nil {
+331
@@ -1245,3 +1245,334 @@ func TestSubscribeRelayMultipleReconnects(t *testing.T) {
consumePart("relay-3")
require.GreaterOrEqual(t, int(callCount.Load()), 3)
}
// TestSubscribeRelayDialCanceledOnFastCompletion reproduces a race in
// multi-replica chat streaming where the worker completes processing
// before the async relay dial from the subscriber replica to the worker
// replica finishes, and verifies that the drain-and-close path still
// delivers the buffered streaming parts.
//
// Scenario:
// 1. Subscriber subscribes to a chat while it's in waiting state (no relay).
// 2. User sends a message → chat becomes pending → worker picks it up.
// 3. Subscriber receives status=running via pubsub → enterprise opens relay async.
// 4. Worker completes quickly → publishes committed message + status=waiting.
// 5. Previously, the subscriber canceled the in-progress relay dial on
//    status=waiting, so the relay was never established, no message_part
//    events were delivered, and only the committed message arrived via
//    pubsub (durable path); streaming was lost.
//
// That was the user-facing issue where refreshing the page was needed to
// see a response. With drain-and-close, the dial is allowed to complete
// and the buffered snapshot is forwarded to the subscriber.
func TestSubscribeRelayDialCanceledOnFastCompletion(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
workerID := uuid.New()
subscriberID := uuid.New()
var dialAttempted atomic.Bool
// Gate: closed when the worker finishes processing.
workerDone := make(chan struct{})
openAIURL := chattest.NewOpenAI(t, func(req *chattest.OpenAIRequest) chattest.OpenAIResponse {
if !req.Stream {
return chattest.OpenAINonStreamingResponse("fast-completion-relay-race")
}
return chattest.OpenAIStreamingResponse(
chattest.OpenAITextChunks("hello ", "world ", "from ", "the ", "worker")...,
)
})
// Worker server with a 1-hour acquire interval so it only processes
// when explicitly woken by SendMessage's signalWake.
workerLogger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
worker := osschatd.New(osschatd.Config{
Logger: workerLogger,
Database: db,
ReplicaID: workerID,
Pubsub: ps,
PendingChatAcquireInterval: time.Hour,
InFlightChatStaleAfter: testutil.WaitSuperLong,
})
t.Cleanup(func() {
require.NoError(t, worker.Close())
})
// Subscriber's relay dialer blocks until the worker finishes,
// simulating a slow relay dial (network latency between replicas).
// After the worker completes, the dialer connects to the worker
// to retrieve buffered parts from the retained buffer.
subscriber := newTestServer(t, db, ps, subscriberID, func(
ctx context.Context,
chatID uuid.UUID,
targetWorkerID uuid.UUID,
requestHeader http.Header,
) (
[]codersdk.ChatStreamEvent,
<-chan codersdk.ChatStreamEvent,
func(),
error,
) {
dialAttempted.Store(true)
// Block until the worker finishes processing, simulating
// a slow relay dial.
select {
case <-workerDone:
case <-ctx.Done():
return nil, nil, nil, ctx.Err()
}
// Connect to the worker. The buffer is retained for a
// grace period after processing, so the relay still gets
// the message_part snapshot.
snapshot, relayEvents, cancel, ok := worker.Subscribe(ctx, chatID, requestHeader, math.MaxInt64)
if !ok {
return nil, nil, nil, xerrors.New("worker subscribe failed")
}
return snapshot, relayEvents, cancel, nil
}, nil)
ctx := testutil.Context(t, testutil.WaitLong)
user, model := seedChatDependencies(ctx, t, db)
setOpenAIProviderBaseURL(ctx, t, db, openAIURL)
// Create the chat in waiting state so the subscriber sees it
// before the worker picks it up (avoids the synchronous relay
// path in Subscribe).
chat := seedWaitingChat(ctx, t, db, user, model, "fast-completion-relay-race")
// Subscribe from the subscriber replica while the chat is idle.
// No relay is opened because the chat is in waiting state.
_, events, subCancel, ok := subscriber.Subscribe(ctx, chat.ID, nil, 0)
require.True(t, ok)
defer subCancel()
// Send a message via the worker server to transition the chat to
// pending and wake the worker's processing loop.
_, err := worker.SendMessage(ctx, osschatd.SendMessageOptions{
ChatID: chat.ID,
CreatedBy: user.ID,
Content: []codersdk.ChatMessagePart{codersdk.ChatMessageText("hello")},
})
require.NoError(t, err)
// Wait for the worker to fully process the chat.
require.Eventually(t, func() bool {
fromDB, dbErr := db.GetChatByID(ctx, chat.ID)
if dbErr != nil {
return false
}
return fromDB.Status == database.ChatStatusWaiting
}, testutil.WaitMedium, testutil.IntervalFast)
// Release the relay dial now that the worker is done.
close(workerDone)
// Collect all events that arrived at the subscriber.
var messageParts []string
var committedAssistantMsgs int
// Drain events until we see both the committed message (via
// pubsub) and at least one streaming part (via relay
// drain-and-close).
require.Eventually(t, func() bool {
select {
case event := <-events:
switch event.Type {
case codersdk.ChatStreamEventTypeMessagePart:
if event.MessagePart != nil {
messageParts = append(messageParts, event.MessagePart.Part.Text)
}
case codersdk.ChatStreamEventTypeMessage:
if event.Message != nil && event.Message.Role == codersdk.ChatMessageRoleAssistant {
committedAssistantMsgs++
}
}
return committedAssistantMsgs > 0 && len(messageParts) > 0
default:
return false
}
}, testutil.WaitLong, testutil.IntervalFast)
// The committed assistant message arrives via pubsub → DB query
// (durable path).
require.Equal(t, 1, committedAssistantMsgs,
"committed assistant message should arrive via pubsub durable path")
// The relay dial was attempted when status=running arrived.
require.True(t, dialAttempted.Load(),
"relay dial should have been attempted when status changed to running")
// Streaming parts are now received even though the relay was
// slower than the worker: the OSS buffer retention grace period
// keeps parts available, and the enterprise relay completes the
// dial (drain-and-close) instead of canceling it immediately.
require.NotEmpty(t, messageParts,
"streaming parts should be received via the relay even when the "+
"worker completes before the relay is established")
}
// TestSubscribeRelayEstablishedMidStream demonstrates that when the
// relay is established while the worker is still streaming, the
// subscriber receives buffered parts via the relay snapshot and live
// parts through the relay channel.
//
// This is the complementary test to TestSubscribeRelayDialCanceledOnFastCompletion:
// it shows the relay mechanism works correctly when timing is favorable
// (relay connects before the worker finishes), contrasting with the race
// condition where the relay is too slow.
func TestSubscribeRelayEstablishedMidStream(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
workerID := uuid.New()
subscriberID := uuid.New()
// Gate: worker blocks after first streaming request until we
// release it. This gives the relay time to establish.
firstChunkEmitted := make(chan struct{})
continueStreaming := make(chan struct{})
openAIURL := chattest.NewOpenAI(t, func(req *chattest.OpenAIRequest) chattest.OpenAIResponse {
if !req.Stream {
return chattest.OpenAINonStreamingResponse("mid-stream-relay")
}
// Signal that the first streaming request was received,
// then block until released.
select {
case <-firstChunkEmitted:
default:
close(firstChunkEmitted)
}
<-continueStreaming
return chattest.OpenAIStreamingResponse(
chattest.OpenAITextChunks("continued ", "response")...,
)
})
// Worker with a 1-hour acquire interval; only processes when
// explicitly woken.
workerLogger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
worker := osschatd.New(osschatd.Config{
Logger: workerLogger,
Database: db,
ReplicaID: workerID,
Pubsub: ps,
PendingChatAcquireInterval: time.Hour,
InFlightChatStaleAfter: testutil.WaitSuperLong,
})
t.Cleanup(func() {
require.NoError(t, worker.Close())
})
// Subscriber's dialer connects to the worker with no delay.
// This simulates a relay that succeeds promptly.
subscriber := newTestServer(t, db, ps, subscriberID, func(
ctx context.Context,
chatID uuid.UUID,
targetWorkerID uuid.UUID,
requestHeader http.Header,
) (
[]codersdk.ChatStreamEvent,
<-chan codersdk.ChatStreamEvent,
func(),
error,
) {
if targetWorkerID != workerID {
return nil, nil, nil, xerrors.Errorf("unexpected relay target %s", targetWorkerID)
}
snapshot, relayEvents, cancel, ok := worker.Subscribe(ctx, chatID, requestHeader, math.MaxInt64)
if !ok {
return nil, nil, nil, xerrors.New("worker subscribe failed")
}
return snapshot, relayEvents, cancel, nil
}, nil)
ctx := testutil.Context(t, testutil.WaitLong)
user, model := seedChatDependencies(ctx, t, db)
setOpenAIProviderBaseURL(ctx, t, db, openAIURL)
// Create the chat in waiting state.
chat := seedWaitingChat(ctx, t, db, user, model, "mid-stream-relay")
// Subscribe from the subscriber replica while the chat is idle.
_, events, subCancel, ok := subscriber.Subscribe(ctx, chat.ID, nil, 0)
require.True(t, ok)
defer subCancel()
// Send a message to make the chat pending and wake the worker.
_, err := worker.SendMessage(ctx, osschatd.SendMessageOptions{
ChatID: chat.ID,
CreatedBy: user.ID,
Content: []codersdk.ChatMessagePart{codersdk.ChatMessageText("hello")},
})
require.NoError(t, err)
// Wait for the worker to reach the LLM (first streaming request).
select {
case <-firstChunkEmitted:
case <-ctx.Done():
t.Fatal("timed out waiting for worker to start streaming")
}
// Wait for the subscriber to receive the running status, which
// triggers the relay. Because the dialer is non-blocking, the
// relay establishes promptly.
require.Eventually(t, func() bool {
select {
case event := <-events:
return event.Type == codersdk.ChatStreamEventTypeStatus &&
event.Status != nil &&
event.Status.Status == codersdk.ChatStatusRunning
default:
return false
}
}, testutil.WaitMedium, testutil.IntervalFast)
// Now release the worker to continue streaming.
close(continueStreaming)
// Wait for the worker to complete.
require.Eventually(t, func() bool {
fromDB, dbErr := db.GetChatByID(ctx, chat.ID)
if dbErr != nil {
return false
}
return fromDB.Status == database.ChatStatusWaiting
}, testutil.WaitMedium, testutil.IntervalFast)
// Collect remaining events.
var messageParts []string
var hasCommittedMsg bool
require.Eventually(t, func() bool {
select {
case event := <-events:
switch event.Type {
case codersdk.ChatStreamEventTypeMessagePart:
if event.MessagePart != nil {
messageParts = append(messageParts, event.MessagePart.Part.Text)
}
case codersdk.ChatStreamEventTypeMessage:
if event.Message != nil && event.Message.Role == codersdk.ChatMessageRoleAssistant {
hasCommittedMsg = true
}
}
return hasCommittedMsg
default:
return false
}
}, testutil.WaitLong, testutil.IntervalFast)
// The committed message arrives via pubsub.
require.True(t, hasCommittedMsg,
"committed assistant message should arrive")
// When the relay is established mid-stream, streaming parts
// SHOULD be received live through the relay. This contrasts with
// TestSubscribeRelayDialCanceledOnFastCompletion, where parts only
// arrive after the worker finishes, via the drain-and-close path.
require.NotEmpty(t, messageParts,
"streaming parts should be received when relay establishes while worker is still streaming")
}
+27 -7
@@ -12,6 +12,7 @@ import (
"github.com/cenkalti/backoff/v4"
"github.com/google/uuid"
"golang.org/x/sync/singleflight"
"golang.org/x/xerrors"
gProto "google.golang.org/protobuf/proto"
@@ -33,9 +34,9 @@ const (
eventReadyForHandshake = "tailnet_ready_for_handshake"
HeartbeatPeriod = time.Second * 2
MissedHeartbeats = 3
numQuerierWorkers = 10
numQuerierWorkers = 40
numBinderWorkers = 10
numTunnelerWorkers = 10
numTunnelerWorkers = 20
numHandshakerWorkers = 5
dbMaxBackoff = 10 * time.Second
cleanupPeriod = time.Hour
@@ -770,6 +771,9 @@ func (m *mapper) bestToUpdate(best map[uuid.UUID]mapping) *proto.CoordinateRespo
for k := range m.sent {
if _, ok := best[k]; !ok {
m.logger.Debug(m.ctx, "peer no longer in best mappings, sending DISCONNECTED",
slog.F("peer_id", k),
)
resp.PeerUpdates = append(resp.PeerUpdates, &proto.CoordinateResponse_PeerUpdate{
Id: agpl.UUIDToByteSlice(k),
Kind: proto.CoordinateResponse_PeerUpdate_DISCONNECTED,
@@ -820,6 +824,8 @@ type querier struct {
mu sync.Mutex
mappers map[mKey]*mapper
healthy bool
resyncGroup singleflight.Group
}
func newQuerier(ctx context.Context,
@@ -958,7 +964,7 @@ func (q *querier) cleanupConn(c *connIO) {
// maxBatchSize is the maximum number of keys to process in a single batch
// query.
const maxBatchSize = 50
const maxBatchSize = 200
func (q *querier) peerUpdateWorker() {
defer q.wg.Done()
@@ -1207,8 +1213,13 @@ func (q *querier) subscribe() {
func (q *querier) listenPeer(_ context.Context, msg []byte, err error) {
if xerrors.Is(err, pubsub.ErrDroppedMessages) {
q.logger.Warn(q.ctx, "pubsub may have dropped peer updates")
// we need to schedule a full resync of peer mappings
q.resyncPeerMappings()
// Schedule a full resync asynchronously so we don't block the
// pubsub drain goroutine. Singleflight coalesces concurrent
// resync requests.
go q.resyncGroup.Do("resync", func() (any, error) {
q.resyncPeerMappings()
return nil, nil
})
return
}
if err != nil {
@@ -1234,8 +1245,13 @@ func (q *querier) listenPeer(_ context.Context, msg []byte, err error) {
func (q *querier) listenTunnel(_ context.Context, msg []byte, err error) {
if xerrors.Is(err, pubsub.ErrDroppedMessages) {
q.logger.Warn(q.ctx, "pubsub may have dropped tunnel updates")
// we need to schedule a full resync of peer mappings
q.resyncPeerMappings()
// Schedule a full resync asynchronously so we don't block the
// pubsub drain goroutine. Singleflight coalesces concurrent
// resync requests.
go q.resyncGroup.Do("resync", func() (any, error) {
q.resyncPeerMappings()
return nil, nil
})
return
}
if err != nil {
@@ -1601,6 +1617,10 @@ func (h *heartbeats) filter(mappings []mapping) []mapping {
// the only mapping available for it. Newer mappings will take
// precedence.
m.kind = proto.CoordinateResponse_PeerUpdate_LOST
h.logger.Debug(h.ctx, "mapping rewritten to LOST due to missed heartbeats",
slog.F("peer_id", m.peer),
slog.F("coordinator_id", m.coordinator),
)
}
}
+1 -1
@@ -33,7 +33,7 @@ data "coder_task" "me" {}
module "claude-code" {
count = data.coder_workspace.me.start_count
source = "registry.coder.com/coder/claude-code/coder"
version = "4.8.2"
version = "4.9.1"
agent_id = coder_agent.main.id
workdir = "/home/coder/projects"
order = 999
+9 -9
@@ -76,11 +76,11 @@ replace github.com/aquasecurity/trivy => github.com/coder/trivy v0.0.0-202603091
// https://github.com/spf13/afero/pull/487
replace github.com/spf13/afero => github.com/aslilac/afero v0.0.0-20250403163713-f06e86036696
// Forked from kylecarbs/fantasy (cj/go1.25 branch) which adds:
// Forked from coder/fantasy (cj/go1.25 branch) which adds:
// 1) Anthropic computer use + thinking effort
// 2) Go 1.25 downgrade for Windows CI compat
// 3) ibetitsmike/fantasy#4 skip ephemeral replay items when store=false
replace charm.land/fantasy => github.com/kylecarbs/fantasy v0.0.0-20260325145725-112927d9b6d8
replace charm.land/fantasy => github.com/coder/fantasy v0.0.0-20260325145725-112927d9b6d8
replace github.com/charmbracelet/anthropic-sdk-go => github.com/kylecarbs/anthropic-sdk-go v0.0.0-20260223140439-63879b0b8dab
@@ -91,7 +91,7 @@ require (
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d
github.com/adrg/xdg v0.5.0
github.com/ammario/tlru v0.4.0
github.com/andybalholm/brotli v1.2.0
github.com/andybalholm/brotli v1.2.1
github.com/aquasecurity/trivy-iac v0.8.0
github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2
github.com/awalterschulze/gographviz v2.0.3+incompatible
@@ -136,11 +136,11 @@ require (
github.com/go-chi/chi/v5 v5.2.4
github.com/go-chi/cors v1.2.1
github.com/go-chi/httprate v0.15.0
github.com/go-jose/go-jose/v4 v4.1.3
github.com/go-jose/go-jose/v4 v4.1.4
github.com/go-logr/logr v1.4.3
github.com/go-playground/validator/v10 v10.30.0
github.com/gofrs/flock v0.13.0
github.com/gohugoio/hugo v0.159.2
github.com/gohugoio/hugo v0.160.0
github.com/golang-jwt/jwt/v4 v4.5.2
github.com/golang-migrate/migrate/v4 v4.19.0
github.com/gomarkdown/markdown v0.0.0-20240930133441-72d49d9543d8
@@ -162,7 +162,7 @@ require (
github.com/justinas/nosurf v1.2.0
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/kirsle/configdir v0.0.0-20170128060238-e45d2f54772f
github.com/klauspost/compress v1.18.4
github.com/klauspost/compress v1.18.5
github.com/lib/pq v1.10.9
github.com/mattn/go-isatty v0.0.20
github.com/mitchellh/go-wordwrap v1.0.1
@@ -193,7 +193,7 @@ require (
github.com/tidwall/gjson v1.18.0
github.com/u-root/u-root v0.14.0
github.com/unrolled/secure v1.17.0
github.com/valyala/fasthttp v1.69.0
github.com/valyala/fasthttp v1.70.0
github.com/wagslane/go-password-validator v0.3.0
github.com/zclconf/go-cty-yaml v1.2.0
go.mozilla.org/pkcs7 v0.9.0
@@ -218,7 +218,7 @@ require (
golang.org/x/text v0.35.0
golang.org/x/tools v0.43.0
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da
google.golang.org/api v0.273.0
google.golang.org/api v0.274.0
google.golang.org/grpc v1.80.0
google.golang.org/protobuf v1.36.11
gopkg.in/DataDog/dd-trace-go.v1 v1.74.0
@@ -434,7 +434,7 @@ require (
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 // indirect
github.com/yashtewari/glob-intersection v0.2.0 // indirect
github.com/yuin/goldmark v1.7.17 // indirect
github.com/yuin/goldmark v1.8.2 // indirect
github.com/yuin/goldmark-emoji v1.0.6 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
github.com/zclconf/go-cty v1.17.0
+22 -22
@@ -128,8 +128,8 @@ github.com/ammario/tlru v0.4.0 h1:sJ80I0swN3KOX2YxC6w8FbCqpQucWdbb+J36C05FPuU=
github.com/ammario/tlru v0.4.0/go.mod h1:aYzRFu0XLo4KavE9W8Lx7tzjkX+pAApz+NgcKYIFUBQ=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883 h1:bvNMNQO63//z+xNgfBlViaCIJKLlCJ6/fmUseuG0wVQ=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwToPjQ=
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/andybalholm/brotli v1.2.1 h1:R+f5xP285VArJDRgowrfb9DqL18yVK0gKAW/F+eTWro=
github.com/andybalholm/brotli v1.2.1/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/apparentlymart/go-cidr v1.1.0 h1:2mAhrMoF+nhXqxTzSZMUzDHkLjmIHC+Zzn4tdgBZjnU=
@@ -322,6 +322,8 @@ github.com/coder/bubbletea v1.2.2-0.20241212190825-007a1cdb2c41 h1:SBN/DA63+ZHwu
github.com/coder/bubbletea v1.2.2-0.20241212190825-007a1cdb2c41/go.mod h1:I9ULxr64UaOSUv7hcb3nX4kowodJCVS7vt7VVJk/kW4=
github.com/coder/clistat v1.2.1 h1:P9/10njXMyj5cWzIU5wkRsSy5LVQH49+tcGMsAgWX0w=
github.com/coder/clistat v1.2.1/go.mod h1:m7SC0uj88eEERgvF8Kn6+w6XF21BeSr+15f7GoLAw0A=
github.com/coder/fantasy v0.0.0-20260325145725-112927d9b6d8 h1:n+6v+yT1B6V4oSGPmXFh7mul1E+RzG9rnqp50Vb7M/w=
github.com/coder/fantasy v0.0.0-20260325145725-112927d9b6d8/go.mod h1:ktfNX0xDpIKeggZbP/j5IYJci6pyMOR3WmZSfz9XLYw=
github.com/coder/flog v1.1.0 h1:kbAes1ai8fIS5OeV+QAnKBQE22ty1jRF/mcAwHpLBa4=
github.com/coder/flog v1.1.0/go.mod h1:UQlQvrkJBvnRGo69Le8E24Tcl5SJleAAR7gYEHzAmdQ=
github.com/coder/go-httpstat v0.0.0-20230801153223-321c88088322 h1:m0lPZjlQ7vdVpRBPKfYIFlmgevoTkBxB10wv6l2gOaU=
@@ -518,8 +520,8 @@ github.com/go-git/go-git/v5 v5.17.1 h1:WnljyxIzSj9BRRUlnmAU35ohDsjRK0EKmL0evDqi5
github.com/go-git/go-git/v5 v5.17.1/go.mod h1:pW/VmeqkanRFqR6AljLcs7EA7FbZaN5MQqO7oZADXpo=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-jose/go-jose/v4 v4.1.4 h1:moDMcTHmvE6Groj34emNPLs/qtYXRVcd6S7NHbHz3kA=
github.com/go-jose/go-jose/v4 v4.1.4/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e h1:Lf/gRkoycfOBPa42vU2bbgPurFong6zXeFtPoxholzU=
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e/go.mod h1:uNVvRXArCGbZ508SxYYTC5v1JWoz2voff5pm25jU1Ok=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
@@ -611,12 +613,12 @@ github.com/gohugoio/hashstructure v0.6.0 h1:7wMB/2CfXoThFYhdWRGv3u3rUM761Cq29CxU
github.com/gohugoio/hashstructure v0.6.0/go.mod h1:lapVLk9XidheHG1IQ4ZSbyYrXcaILU1ZEP/+vno5rBQ=
github.com/gohugoio/httpcache v0.8.0 h1:hNdsmGSELztetYCsPVgjA960zSa4dfEqqF/SficorCU=
github.com/gohugoio/httpcache v0.8.0/go.mod h1:fMlPrdY/vVJhAriLZnrF5QpN3BNAcoBClgAyQd+lGFI=
github.com/gohugoio/hugo v0.159.2 h1:tpS6pcShcP3Khl8WA1NAxVHi2D3/ib9BbM8+m7NECUA=
github.com/gohugoio/hugo v0.159.2/go.mod h1:vKww5V9i8MYzFD8JVvhRN+AKdDfKV0UvbFpmCDtTr/A=
github.com/gohugoio/hugo-goldmark-extensions/extras v0.6.0 h1:c16engMi6zyOGeCrP73RWC9fom94wXGpVzncu3GXBjI=
github.com/gohugoio/hugo-goldmark-extensions/extras v0.6.0/go.mod h1:e3+TRCT4Uz6NkZOAVMOMgPeJ+7KEtQMX8hdB+WG4qRs=
github.com/gohugoio/hugo-goldmark-extensions/passthrough v0.4.0 h1:awFlqaCQ0N/RS9ndIBpDYNms101I1sGbDRG1bksa5Js=
github.com/gohugoio/hugo-goldmark-extensions/passthrough v0.4.0/go.mod h1:lK1CjqrueCd3OBnsLLQJGrQ+uodWfT9M9Cq2zfDWJCE=
github.com/gohugoio/hugo v0.160.0 h1:WmmygLg2ahijM4w2VHFn/DdBR+OpJ9H9pH3d8OApNDY=
github.com/gohugoio/hugo v0.160.0/go.mod h1:+VA5jOO3iGELh+6cig098PT2Cd9iNhwUPRqCUe3Ce7w=
github.com/gohugoio/hugo-goldmark-extensions/extras v0.7.0 h1:I/n6v7VImJ3aISLnn73JAHXyjcQsMVvbguQPTk9Ehus=
github.com/gohugoio/hugo-goldmark-extensions/extras v0.7.0/go.mod h1:9LJNfKWFmhEJ7HW0in5znezMwH+FYMBIhNZ3VWtRcRs=
github.com/gohugoio/hugo-goldmark-extensions/passthrough v0.5.0 h1:p13Q0DBCrBRpJGtbtlgkYNCs4TnIlZJh8vHgnAiofrI=
github.com/gohugoio/hugo-goldmark-extensions/passthrough v0.5.0/go.mod h1:ob9PCHy/ocsQhTz68uxhyInaYCbbVNpOOrJkIoSeD+8=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
@@ -794,8 +796,8 @@ github.com/kirsle/configdir v0.0.0-20170128060238-e45d2f54772f h1:dKccXx7xA56UNq
github.com/kirsle/configdir v0.0.0-20170128060238-e45d2f54772f/go.mod h1:4rEELDSfUAlBSyUjPG0JnaNGjf13JySHFeRdD/3dLP0=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.4 h1:RPhnKRAQ4Fh8zU2FY/6ZFDwTVTxgJ/EMydqSTzE9a2c=
github.com/klauspost/compress v1.18.4/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2skhE=
github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/kortschak/wol v0.0.0-20200729010619-da482cc4850a h1:+RR6SqnTkDLWyICxS1xpjCi/3dhyV+TgZwA6Ww3KncQ=
@@ -813,8 +815,6 @@ github.com/kylecarbs/anthropic-sdk-go v0.0.0-20260223140439-63879b0b8dab h1:5UMY
github.com/kylecarbs/anthropic-sdk-go v0.0.0-20260223140439-63879b0b8dab/go.mod h1:hqlYqR7uPKOKfnNeicUbZp0Ps0GeYFlKYtwh5HGDCx8=
github.com/kylecarbs/chroma/v2 v2.0.0-20240401211003-9e036e0631f3 h1:Z9/bo5PSeMutpdiKYNt/TTSfGM1Ll0naj3QzYX9VxTc=
github.com/kylecarbs/chroma/v2 v2.0.0-20240401211003-9e036e0631f3/go.mod h1:BUGjjsD+ndS6eX37YgTchSEG+Jg9Jv1GiZs9sqPqztk=
github.com/kylecarbs/fantasy v0.0.0-20260325145725-112927d9b6d8 h1:fZ0208U3B438fDSHCc/GNioPIyaFqn6eBsQTO61QtrI=
github.com/kylecarbs/fantasy v0.0.0-20260325145725-112927d9b6d8/go.mod h1:ktfNX0xDpIKeggZbP/j5IYJci6pyMOR3WmZSfz9XLYw=
github.com/kylecarbs/openai-go/v3 v3.0.0-20260319113850-9477dcaedcae h1:xlFZNX4nnxpj9Cf6mTwD3pirXGNtBJ/6COsf9iZmsL0=
github.com/kylecarbs/openai-go/v3 v3.0.0-20260319113850-9477dcaedcae/go.mod h1:cdufnVK14cWcT9qA1rRtrXx4FTRsgbDPW7Ia7SS5cZo=
github.com/kylecarbs/spinner v1.18.2-0.20220329160715-20702b5af89e h1:OP0ZMFeZkUnOzTFRfpuK3m7Kp4fNvC6qN+exwj7aI4M=
@@ -1188,8 +1188,8 @@ github.com/urfave/cli/v2 v2.27.5 h1:WoHEJLdsXr6dDWoJgMq/CboDmyY/8HMMH1fTECbih+w=
github.com/urfave/cli/v2 v2.27.5/go.mod h1:3Sevf16NykTbInEnD0yKkjDAeZDS0A6bzhBH5hrMvTQ=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.69.0 h1:fNLLESD2SooWeh2cidsuFtOcrEi4uB4m1mPrkJMZyVI=
github.com/valyala/fasthttp v1.69.0/go.mod h1:4wA4PfAraPlAsJ5jMSqCE2ug5tqUPwKXxVj8oNECGcw=
github.com/valyala/fasthttp v1.70.0 h1:LAhMGcWk13QZWm85+eg8ZBNbrq5mnkWFGbHMUJHIdXA=
github.com/valyala/fasthttp v1.70.0/go.mod h1:oDZEHHkJ/Buyklg6uURmYs19442zFSnCIfX3j1FY3pE=
github.com/valyala/fastjson v1.6.4 h1:uAUNq9Z6ymTgGhcm0UynUAB6tlbakBrz6CQFax3BXVQ=
github.com/valyala/fastjson v1.6.4/go.mod h1:CLCAqky6SMuOcxStkYQvblddUtoRxhYMGLrsQns1aXY=
github.com/vbatts/tar-split v0.12.2 h1:w/Y6tjxpeiFMR47yzZPlPj/FcPLpXbTUi/9H7d3CPa4=
@@ -1252,8 +1252,8 @@ github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yuin/goldmark v1.7.17 h1:p36OVWwRb246iHxA/U4p8OPEpOTESm4n+g+8t0EE5uA=
github.com/yuin/goldmark v1.7.17/go.mod h1:ip/1k0VRfGynBgxOz0yCqHrbZXhcjxyuS66Brc7iBKg=
github.com/yuin/goldmark v1.8.2 h1:kEGpgqJXdgbkhcOgBxkC0X0PmoPG1ZyoZ117rDVp4zE=
github.com/yuin/goldmark v1.8.2/go.mod h1:ip/1k0VRfGynBgxOz0yCqHrbZXhcjxyuS66Brc7iBKg=
github.com/yuin/goldmark-emoji v1.0.6 h1:QWfF2FYaXwL74tfGOW5izeiZepUDroDJfWubQI9HTHs=
github.com/yuin/goldmark-emoji v1.0.6/go.mod h1:ukxJDKFpdFb5x0a5HqbdlcKtebh086iJpI31LTKmWuA=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
@@ -1375,8 +1375,8 @@ golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
golang.org/x/exp v0.0.0-20260218203240-3dfff04db8fa h1:Zt3DZoOFFYkKhDT3v7Lm9FDMEV06GpzjG2jrqW+QTE0=
golang.org/x/exp v0.0.0-20260218203240-3dfff04db8fa/go.mod h1:K79w1Vqn7PoiZn+TkNpx3BUWUQksGO3JcVX6qIjytmA=
golang.org/x/image v0.37.0 h1:ZiRjArKI8GwxZOoEtUfhrBtaCN+4b/7709dlT6SSnQA=
golang.org/x/image v0.37.0/go.mod h1:/3f6vaXC+6CEanU4KJxbcUZyEePbyKbaLoDOe4ehFYY=
golang.org/x/image v0.38.0 h1:5l+q+Y9JDC7mBOMjo4/aPhMDcxEptsX+Tt3GgRQRPuE=
golang.org/x/image v0.38.0/go.mod h1:/3f6vaXC+6CEanU4KJxbcUZyEePbyKbaLoDOe4ehFYY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -1514,8 +1514,8 @@ golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus
golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/api v0.273.0 h1:r/Bcv36Xa/te1ugaN1kdJ5LoA5Wj/cL+a4gj6FiPBjQ=
google.golang.org/api v0.273.0/go.mod h1:JbAt7mF+XVmWu6xNP8/+CTiGH30ofmCmk9nM8d8fHew=
google.golang.org/api v0.274.0 h1:aYhycS5QQCwxHLwfEHRRLf9yNsfvp1JadKKWBE54RFA=
google.golang.org/api v0.274.0/go.mod h1:JbAt7mF+XVmWu6xNP8/+CTiGH30ofmCmk9nM8d8fHew=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
+1 -1
@@ -9,7 +9,7 @@
"storybook": "pnpm run -C site/ storybook"
},
"devDependencies": {
"@biomejs/biome": "2.2.0",
"@biomejs/biome": "2.4.10",
"markdown-table-formatter": "^1.6.1",
"markdownlint-cli2": "^0.16.0",
"quicktype": "^23.0.0"
+37 -37
@@ -12,8 +12,8 @@ importers:
.:
devDependencies:
'@biomejs/biome':
specifier: 2.2.0
version: 2.2.0
specifier: 2.4.10
version: 2.4.10
markdown-table-formatter:
specifier: ^1.6.1
version: 1.6.1
@@ -26,55 +26,55 @@ importers:
packages:
'@biomejs/biome@2.2.0':
resolution: {integrity: sha512-3On3RSYLsX+n9KnoSgfoYlckYBoU6VRM22cw1gB4Y0OuUVSYd/O/2saOJMrA4HFfA1Ff0eacOvMN1yAAvHtzIw==}
'@biomejs/biome@2.4.10':
resolution: {integrity: sha512-xxA3AphFQ1geij4JTHXv4EeSTda1IFn22ye9LdyVPoJU19fNVl0uzfEuhsfQ4Yue/0FaLs2/ccVi4UDiE7R30w==}
engines: {node: '>=14.21.3'}
hasBin: true
'@biomejs/cli-darwin-arm64@2.2.0':
resolution: {integrity: sha512-zKbwUUh+9uFmWfS8IFxmVD6XwqFcENjZvEyfOxHs1epjdH3wyyMQG80FGDsmauPwS2r5kXdEM0v/+dTIA9FXAg==}
'@biomejs/cli-darwin-arm64@2.4.10':
resolution: {integrity: sha512-vuzzI1cWqDVzOMIkYyHbKqp+AkQq4K7k+UCXWpkYcY/HDn1UxdsbsfgtVpa40shem8Kax4TLDLlx8kMAecgqiw==}
engines: {node: '>=14.21.3'}
cpu: [arm64]
os: [darwin]
'@biomejs/cli-darwin-x64@2.2.0':
resolution: {integrity: sha512-+OmT4dsX2eTfhD5crUOPw3RPhaR+SKVspvGVmSdZ9y9O/AgL8pla6T4hOn1q+VAFBHuHhsdxDRJgFCSC7RaMOw==}
'@biomejs/cli-darwin-x64@2.4.10':
resolution: {integrity: sha512-14fzASRo+BPotwp7nWULy2W5xeUyFnTaq1V13Etrrxkrih+ez/2QfgFm5Ehtf5vSjtgx/IJycMMpn5kPd5ZNaA==}
engines: {node: '>=14.21.3'}
cpu: [x64]
os: [darwin]
'@biomejs/cli-linux-arm64-musl@2.2.0':
resolution: {integrity: sha512-egKpOa+4FL9YO+SMUMLUvf543cprjevNc3CAgDNFLcjknuNMcZ0GLJYa3EGTCR2xIkIUJDVneBV3O9OcIlCEZQ==}
'@biomejs/cli-linux-arm64-musl@2.4.10':
resolution: {integrity: sha512-WrJY6UuiSD/Dh+nwK2qOTu8kdMDlLV3dLMmychIghHPAysWFq1/DGC1pVZx8POE3ZkzKR3PUUnVrtZfMfaJjyQ==}
engines: {node: '>=14.21.3'}
cpu: [arm64]
os: [linux]
'@biomejs/cli-linux-arm64@2.2.0':
resolution: {integrity: sha512-6eoRdF2yW5FnW9Lpeivh7Mayhq0KDdaDMYOJnH9aT02KuSIX5V1HmWJCQQPwIQbhDh68Zrcpl8inRlTEan0SXw==}
'@biomejs/cli-linux-arm64@2.4.10':
resolution: {integrity: sha512-7MH1CMW5uuxQ/s7FLST63qF8B3Hgu2HRdZ7tA1X1+mk+St4JOuIrqdhIBnnyqeyWJNI+Bww7Es5QZ0wIc1Cmkw==}
engines: {node: '>=14.21.3'}
cpu: [arm64]
os: [linux]
'@biomejs/cli-linux-x64-musl@2.2.0':
resolution: {integrity: sha512-I5J85yWwUWpgJyC1CcytNSGusu2p9HjDnOPAFG4Y515hwRD0jpR9sT9/T1cKHtuCvEQ/sBvx+6zhz9l9wEJGAg==}
'@biomejs/cli-linux-x64-musl@2.4.10':
resolution: {integrity: sha512-kDTi3pI6PBN6CiczsWYOyP2zk0IJI08EWEQyDMQWW221rPaaEz6FvjLhnU07KMzLv8q3qSuoB93ua6inSQ55Tw==}
engines: {node: '>=14.21.3'}
cpu: [x64]
os: [linux]
'@biomejs/cli-linux-x64@2.2.0':
resolution: {integrity: sha512-5UmQx/OZAfJfi25zAnAGHUMuOd+LOsliIt119x2soA2gLggQYrVPA+2kMUxR6Mw5M1deUF/AWWP2qpxgH7Nyfw==}
'@biomejs/cli-linux-x64@2.4.10':
resolution: {integrity: sha512-tZLvEEi2u9Xu1zAqRjTcpIDGVtldigVvzug2fTuPG0ME/g8/mXpRPcNgLB22bGn6FvLJpHHnqLnwliOu8xjYrg==}
engines: {node: '>=14.21.3'}
cpu: [x64]
os: [linux]
'@biomejs/cli-win32-arm64@2.2.0':
resolution: {integrity: sha512-n9a1/f2CwIDmNMNkFs+JI0ZjFnMO0jdOyGNtihgUNFnlmd84yIYY2KMTBmMV58ZlVHjgmY5Y6E1hVTnSRieggA==}
'@biomejs/cli-win32-arm64@2.4.10':
resolution: {integrity: sha512-umwQU6qPzH+ISTf/eHyJ/QoQnJs3V9Vpjz2OjZXe9MVBZ7prgGafMy7yYeRGnlmDAn87AKTF3Q6weLoMGpeqdQ==}
engines: {node: '>=14.21.3'}
cpu: [arm64]
os: [win32]
'@biomejs/cli-win32-x64@2.2.0':
resolution: {integrity: sha512-Nawu5nHjP/zPKTIryh2AavzTc/KEg4um/MxWdXW0A6P/RZOyIpa7+QSjeXwAwX/utJGaCoXRPWtF3m5U/bB3Ww==}
'@biomejs/cli-win32-x64@2.4.10':
resolution: {integrity: sha512-aW/JU5GuyH4uxMrNYpoC2kjaHlyJGLgIa3XkhPEZI0uKhZhJZU8BuEyJmvgzSPQNGozBwWjC972RaNdcJ9KyJg==}
engines: {node: '>=14.21.3'}
cpu: [x64]
os: [win32]
@@ -778,39 +778,39 @@ packages:
snapshots:
'@biomejs/biome@2.2.0':
'@biomejs/biome@2.4.10':
optionalDependencies:
'@biomejs/cli-darwin-arm64': 2.2.0
'@biomejs/cli-darwin-x64': 2.2.0
'@biomejs/cli-linux-arm64': 2.2.0
'@biomejs/cli-linux-arm64-musl': 2.2.0
'@biomejs/cli-linux-x64': 2.2.0
'@biomejs/cli-linux-x64-musl': 2.2.0
'@biomejs/cli-win32-arm64': 2.2.0
'@biomejs/cli-win32-x64': 2.2.0
'@biomejs/cli-darwin-arm64': 2.4.10
'@biomejs/cli-darwin-x64': 2.4.10
'@biomejs/cli-linux-arm64': 2.4.10
'@biomejs/cli-linux-arm64-musl': 2.4.10
'@biomejs/cli-linux-x64': 2.4.10
'@biomejs/cli-linux-x64-musl': 2.4.10
'@biomejs/cli-win32-arm64': 2.4.10
'@biomejs/cli-win32-x64': 2.4.10
'@biomejs/cli-darwin-arm64@2.2.0':
'@biomejs/cli-darwin-arm64@2.4.10':
optional: true
'@biomejs/cli-darwin-x64@2.2.0':
'@biomejs/cli-darwin-x64@2.4.10':
optional: true
'@biomejs/cli-linux-arm64-musl@2.2.0':
'@biomejs/cli-linux-arm64-musl@2.4.10':
optional: true
'@biomejs/cli-linux-arm64@2.2.0':
'@biomejs/cli-linux-arm64@2.4.10':
optional: true
'@biomejs/cli-linux-x64-musl@2.2.0':
'@biomejs/cli-linux-x64-musl@2.4.10':
optional: true
'@biomejs/cli-linux-x64@2.2.0':
'@biomejs/cli-linux-x64@2.4.10':
optional: true
'@biomejs/cli-win32-arm64@2.2.0':
'@biomejs/cli-win32-arm64@2.4.10':
optional: true
'@biomejs/cli-win32-x64@2.2.0':
'@biomejs/cli-win32-x64@2.4.10':
optional: true
'@cspotcode/source-map-support@0.8.1':
+1
@@ -381,6 +381,7 @@ func provisionEnv(
"CODER_WORKSPACE_BUILD_ID="+metadata.GetWorkspaceBuildId(),
"CODER_TASK_ID="+metadata.GetTaskId(),
"CODER_TASK_PROMPT="+metadata.GetTaskPrompt(),
"AWS_SDK_UA_APP_ID=APN_1.1/pc_cdfmjwn8i6u8l9fwz8h82e4w3$",
)
if metadata.GetPrebuiltWorkspaceBuildStage().IsPrebuild() {
env = append(env, provider.IsPrebuildEnvironmentVariable()+"=true")
+1
@@ -1298,6 +1298,7 @@ func TestProvision_SafeEnv(t *testing.T) {
require.Contains(t, log, passedValue)
require.NotContains(t, log, secretValue)
require.Contains(t, log, "CODER_")
require.Contains(t, log, "AWS_SDK_UA_APP_ID=APN_1.1/pc_cdfmjwn8i6u8l9fwz8h82e4w3$")
apply := applyComplete.Type.(*proto.Response_Apply)
require.NotEmpty(t, apply.Apply.State, "state exists")
+3
@@ -18,6 +18,7 @@ type SchemaField struct {
GoName string `json:"go_name"`
Type string `json:"type"`
Description string `json:"description,omitempty"`
Label string `json:"label,omitempty"`
Required bool `json:"required"`
Enum []string `json:"enum,omitempty"`
InputType string `json:"input_type"`
@@ -135,6 +136,7 @@ func extractFields(t reflect.Type, prefix string, skip map[string]bool) FieldGro
typeName := goTypeToSchemaType(f.Type)
description := f.Tag.Get("description")
label := f.Tag.Get("label")
enumTag := f.Tag.Get("enum")
var enumValues []string
@@ -150,6 +152,7 @@ func extractFields(t reflect.Type, prefix string, skip map[string]bool) FieldGro
GoName: goFieldPath(prefix, f.Name, t, fullJSONName),
Type: typeName,
Description: description,
Label: label,
Required: required,
Enum: enumValues,
InputType: inputType,
+44 -35
@@ -68,17 +68,17 @@ func runRelease(ctx context.Context, inv *serpent.Invocation, executor ReleaseEx
return xerrors.Errorf("detecting branch: %w", err)
}
// Match standard release branches (release/2.32) and RC
// branches (release/2.32-rc.0).
branchRe := regexp.MustCompile(`^release/(\d+)\.(\d+)(?:-rc\.(\d+))?$`)
// Match release branches (release/X.Y). RCs are tagged
// from main, not from release branches.
branchRe := regexp.MustCompile(`^release/(\d+)\.(\d+)$`)
m := branchRe.FindStringSubmatch(currentBranch)
if m == nil {
warnf(w, "Current branch %q is not a release branch (release/X.Y or release/X.Y-rc.N).", currentBranch)
warnf(w, "Current branch %q is not a release branch (release/X.Y).", currentBranch)
branchInput, err := cliui.Prompt(inv, cliui.PromptOptions{
Text: "Enter the release branch to use (e.g. release/2.21 or release/2.21-rc.0)",
Text: "Enter the release branch to use (e.g. release/2.21)",
Validate: func(s string) error {
if !branchRe.MatchString(s) {
return xerrors.New("must be in format release/X.Y or release/X.Y-rc.N (e.g. release/2.21 or release/2.21-rc.0)")
return xerrors.New("must be in format release/X.Y (e.g. release/2.21)")
}
return nil
},
@@ -91,10 +91,6 @@ func runRelease(ctx context.Context, inv *serpent.Invocation, executor ReleaseEx
}
branchMajor, _ := strconv.Atoi(m[1])
branchMinor, _ := strconv.Atoi(m[2])
branchRC := -1 // -1 means not an RC branch.
if m[3] != "" {
branchRC, _ = strconv.Atoi(m[3])
}
successf(w, "Using release branch: %s", currentBranch)
// --- Fetch & sync check ---
@@ -138,31 +134,44 @@ func runRelease(ctx context.Context, inv *serpent.Invocation, executor ReleaseEx
}
}
// changelogBaseRef is the git ref used as the starting point
// for release notes generation. When a tag already exists in
// this minor series we use it directly. For the first release
// on a new minor no matching tag exists, so we compute the
// merge-base with the previous minor's release branch instead.
// This works even when that branch has no tags yet (it was
// just cut and pushed). As a last resort we fall back to the
// latest reachable tag from a previous minor.
var changelogBaseRef string
if prevVersion != nil {
changelogBaseRef = prevVersion.String()
} else {
prevReleaseBranch := fmt.Sprintf("release/%d.%d", branchMajor, branchMinor-1)
if err := gitRun("fetch", "--quiet", "origin", prevReleaseBranch); err != nil {
warnf(w, "Could not fetch %s: %v", prevReleaseBranch, err)
}
if mb, mbErr := gitOutput("merge-base", "HEAD", "origin/"+prevReleaseBranch); mbErr == nil && mb != "" {
changelogBaseRef = mb
infof(w, "Using merge-base with %s as changelog base: %s", prevReleaseBranch, mb[:12])
} else {
// No previous release branch found; fall back to
// the latest reachable tag from a previous minor.
for _, t := range mergedTags {
if t.Major == branchMajor && t.Minor < branchMinor {
changelogBaseRef = t.String()
break
}
}
}
}
var suggested version
if prevVersion == nil {
infof(w, "No previous release tag found on this branch.")
suggested = version{Major: branchMajor, Minor: branchMinor, Patch: 0}
if branchRC >= 0 {
suggested.Pre = fmt.Sprintf("rc.%d", branchRC)
}
} else {
infof(w, "Previous release tag: %s", prevVersion.String())
if branchRC >= 0 {
// On an RC branch, suggest the next RC for
// the same base version.
nextRC := 0
if prevVersion.IsRC() {
nextRC = prevVersion.rcNumber() + 1
}
suggested = version{
Major: prevVersion.Major,
Minor: prevVersion.Minor,
Patch: prevVersion.Patch,
Pre: fmt.Sprintf("rc.%d", nextRC),
}
} else {
suggested = version{Major: prevVersion.Major, Minor: prevVersion.Minor, Patch: prevVersion.Patch + 1}
}
suggested = version{Major: prevVersion.Major, Minor: prevVersion.Minor, Patch: prevVersion.Patch + 1}
}
fmt.Fprintln(w)
@@ -366,8 +375,8 @@ func runRelease(ctx context.Context, inv *serpent.Invocation, executor ReleaseEx
infof(w, "Generating release notes...")
commitRange := "HEAD"
if prevVersion != nil {
commitRange = prevVersion.String() + "..HEAD"
if changelogBaseRef != "" {
commitRange = changelogBaseRef + "..HEAD"
}
commits, err := commitLog(commitRange)
@@ -473,16 +482,16 @@ func runRelease(ctx context.Context, inv *serpent.Invocation, executor ReleaseEx
}
if !hasContent {
prevStr := "the beginning of time"
if prevVersion != nil {
prevStr = prevVersion.String()
if changelogBaseRef != "" {
prevStr = changelogBaseRef
}
fmt.Fprintf(&notes, "\n_No changes since %s._\n", prevStr)
}
// Compare link.
if prevVersion != nil {
if changelogBaseRef != "" {
fmt.Fprintf(&notes, "\nCompare: [`%s...%s`](https://github.com/%s/%s/compare/%s...%s)\n",
prevVersion, newVersion, owner, repo, prevVersion, newVersion)
changelogBaseRef, newVersion, owner, repo, changelogBaseRef, newVersion)
}
// Container image.
+152 -150
@@ -12,162 +12,164 @@ import {
} from "../helpers";
import { beforeCoderTest, resetExternalAuthKey } from "../hooks";
test.describe.skip("externalAuth", () => {
test.beforeAll(async ({ baseURL }) => {
const srv = await createServer(gitAuth.webPort);
test.describe
.skip("externalAuth", () => {
test.beforeAll(async ({ baseURL }) => {
const srv = await createServer(gitAuth.webPort);
// The GitHub validate endpoint returns the currently authenticated user!
srv.use(gitAuth.validatePath, (_req, res) => {
res.write(JSON.stringify(ghUser));
res.end();
// The GitHub validate endpoint returns the currently authenticated user!
srv.use(gitAuth.validatePath, (_req, res) => {
res.write(JSON.stringify(ghUser));
res.end();
});
srv.use(gitAuth.tokenPath, (_req, res) => {
const r = (Math.random() + 1).toString(36).substring(7);
res.write(JSON.stringify({ access_token: r }));
res.end();
});
srv.use(gitAuth.authPath, (req, res) => {
res.redirect(
`${baseURL}/external-auth/${gitAuth.webProvider}/callback?code=1234&state=${req.query.state}`,
);
});
});
srv.use(gitAuth.tokenPath, (_req, res) => {
const r = (Math.random() + 1).toString(36).substring(7);
res.write(JSON.stringify({ access_token: r }));
res.end();
test.beforeEach(async ({ context, page }) => {
beforeCoderTest(page);
await login(page);
await resetExternalAuthKey(context);
});
srv.use(gitAuth.authPath, (req, res) => {
res.redirect(
`${baseURL}/external-auth/${gitAuth.webProvider}/callback?code=1234&state=${req.query.state}`,
// Ensures that a Git auth provider with the device flow functions and completes!
test("external auth device", async ({ page }) => {
const device: ExternalAuthDevice = {
device_code: "1234",
user_code: "1234-5678",
expires_in: 900,
interval: 1,
verification_uri: "",
};
// Start a server to mock the GitHub API.
const srv = await createServer(gitAuth.devicePort);
srv.use(gitAuth.validatePath, (_req, res) => {
res.write(JSON.stringify(ghUser));
res.end();
});
srv.use(gitAuth.codePath, (_req, res) => {
res.write(JSON.stringify(device));
res.end();
});
srv.use(gitAuth.installationsPath, (_req, res) => {
res.write(JSON.stringify(ghInstall));
res.end();
});
const token = {
access_token: "",
error: "authorization_pending",
error_description: "",
};
// First we send a result from the API that the token hasn't been
// authorized yet to ensure the UI reacts properly.
const sentPending = new Awaiter();
srv.use(gitAuth.tokenPath, (_req, res) => {
res.write(JSON.stringify(token));
res.end();
sentPending.done();
});
await page.goto(`/external-auth/${gitAuth.deviceProvider}`, {
waitUntil: "domcontentloaded",
});
await page.getByText(device.user_code).isVisible();
await sentPending.wait();
// Update the token to be valid and ensure the UI updates!
token.error = "";
token.access_token = "hello-world";
await page.waitForSelector("text=1 organization authorized");
});
test("external auth web", async ({ page }) => {
await page.goto(`/external-auth/${gitAuth.webProvider}`, {
waitUntil: "domcontentloaded",
});
// This endpoint doesn't have the installations URL set intentionally!
await page.waitForSelector("text=You've authenticated with GitHub!");
});
test("successful external auth from workspace", async ({ page }) => {
const templateName = await createTemplate(
page,
echoResponsesWithExternalAuth([
{ id: gitAuth.webProvider, optional: false },
]),
);
await createWorkspace(page, templateName, { useExternalAuth: true });
});
});
test.beforeEach(async ({ context, page }) => {
beforeCoderTest(page);
await login(page);
await resetExternalAuthKey(context);
});
// Ensures that a Git auth provider with the device flow functions and completes!
test("external auth device", async ({ page }) => {
const device: ExternalAuthDevice = {
device_code: "1234",
user_code: "1234-5678",
expires_in: 900,
interval: 1,
verification_uri: "",
const ghUser: Endpoints["GET /user"]["response"]["data"] = {
login: "kylecarbs",
id: 7122116,
node_id: "MDQ6VXNlcjcxMjIxMTY=",
avatar_url: "https://avatars.githubusercontent.com/u/7122116?v=4",
gravatar_id: "",
url: "https://api.github.com/users/kylecarbs",
html_url: "https://github.com/kylecarbs",
followers_url: "https://api.github.com/users/kylecarbs/followers",
following_url:
"https://api.github.com/users/kylecarbs/following{/other_user}",
gists_url: "https://api.github.com/users/kylecarbs/gists{/gist_id}",
starred_url:
"https://api.github.com/users/kylecarbs/starred{/owner}{/repo}",
subscriptions_url: "https://api.github.com/users/kylecarbs/subscriptions",
organizations_url: "https://api.github.com/users/kylecarbs/orgs",
repos_url: "https://api.github.com/users/kylecarbs/repos",
events_url: "https://api.github.com/users/kylecarbs/events{/privacy}",
received_events_url:
"https://api.github.com/users/kylecarbs/received_events",
type: "User",
site_admin: false,
name: "Kyle Carberry",
company: "@coder",
blog: "https://carberry.com",
location: "Austin, TX",
email: "kyle@carberry.com",
hireable: null,
bio: "hey there",
twitter_username: "kylecarbs",
public_repos: 52,
public_gists: 9,
followers: 208,
following: 31,
created_at: "2014-04-01T02:24:41Z",
updated_at: "2023-06-26T13:03:09Z",
};
// Start a server to mock the GitHub API.
const srv = await createServer(gitAuth.devicePort);
srv.use(gitAuth.validatePath, (_req, res) => {
res.write(JSON.stringify(ghUser));
res.end();
});
srv.use(gitAuth.codePath, (_req, res) => {
res.write(JSON.stringify(device));
res.end();
});
srv.use(gitAuth.installationsPath, (_req, res) => {
res.write(JSON.stringify(ghInstall));
res.end();
});
const token = {
access_token: "",
error: "authorization_pending",
error_description: "",
};
// First we send a result from the API that the token hasn't been
// authorized yet to ensure the UI reacts properly.
const sentPending = new Awaiter();
srv.use(gitAuth.tokenPath, (_req, res) => {
res.write(JSON.stringify(token));
res.end();
sentPending.done();
});
await page.goto(`/external-auth/${gitAuth.deviceProvider}`, {
waitUntil: "domcontentloaded",
});
await page.getByText(device.user_code).isVisible();
await sentPending.wait();
// Update the token to be valid and ensure the UI updates!
token.error = "";
token.access_token = "hello-world";
await page.waitForSelector("text=1 organization authorized");
});
test("external auth web", async ({ page }) => {
await page.goto(`/external-auth/${gitAuth.webProvider}`, {
waitUntil: "domcontentloaded",
});
// This endpoint doesn't have the installations URL set intentionally!
await page.waitForSelector("text=You've authenticated with GitHub!");
});
test("successful external auth from workspace", async ({ page }) => {
const templateName = await createTemplate(
page,
echoResponsesWithExternalAuth([
{ id: gitAuth.webProvider, optional: false },
]),
);
await createWorkspace(page, templateName, { useExternalAuth: true });
});
const ghUser: Endpoints["GET /user"]["response"]["data"] = {
login: "kylecarbs",
id: 7122116,
node_id: "MDQ6VXNlcjcxMjIxMTY=",
avatar_url: "https://avatars.githubusercontent.com/u/7122116?v=4",
gravatar_id: "",
url: "https://api.github.com/users/kylecarbs",
html_url: "https://github.com/kylecarbs",
followers_url: "https://api.github.com/users/kylecarbs/followers",
following_url:
"https://api.github.com/users/kylecarbs/following{/other_user}",
gists_url: "https://api.github.com/users/kylecarbs/gists{/gist_id}",
starred_url:
"https://api.github.com/users/kylecarbs/starred{/owner}{/repo}",
subscriptions_url: "https://api.github.com/users/kylecarbs/subscriptions",
organizations_url: "https://api.github.com/users/kylecarbs/orgs",
repos_url: "https://api.github.com/users/kylecarbs/repos",
events_url: "https://api.github.com/users/kylecarbs/events{/privacy}",
received_events_url:
"https://api.github.com/users/kylecarbs/received_events",
type: "User",
site_admin: false,
name: "Kyle Carberry",
company: "@coder",
blog: "https://carberry.com",
location: "Austin, TX",
email: "kyle@carberry.com",
hireable: null,
bio: "hey there",
twitter_username: "kylecarbs",
public_repos: 52,
public_gists: 9,
followers: 208,
following: 31,
created_at: "2014-04-01T02:24:41Z",
updated_at: "2023-06-26T13:03:09Z",
};
const ghInstall: Endpoints["GET /user/installations"]["response"]["data"] = {
installations: [
const ghInstall: Endpoints["GET /user/installations"]["response"]["data"] =
{
id: 1,
access_tokens_url: "",
account: ghUser,
app_id: 1,
app_slug: "coder",
created_at: "2014-04-01T02:24:41Z",
events: [],
html_url: "",
permissions: {},
repositories_url: "",
repository_selection: "all",
single_file_name: "",
suspended_at: null,
suspended_by: null,
target_id: 1,
target_type: "",
updated_at: "2023-06-26T13:03:09Z",
},
],
total_count: 1,
};
});
installations: [
{
id: 1,
access_tokens_url: "",
account: ghUser,
app_id: 1,
app_slug: "coder",
created_at: "2014-04-01T02:24:41Z",
events: [],
html_url: "",
permissions: {},
repositories_url: "",
repository_selection: "all",
single_file_name: "",
suspended_at: null,
suspended_by: null,
target_id: 1,
target_type: "",
updated_at: "2023-06-26T13:03:09Z",
},
],
total_count: 1,
};
});
+1 -1
@@ -127,7 +127,7 @@
"devDependencies": {
"@babel/core": "7.29.0",
"@babel/plugin-syntax-typescript": "7.28.6",
"@biomejs/biome": "2.2.4",
"@biomejs/biome": "2.4.10",
"@chromatic-com/storybook": "5.0.1",
"@octokit/types": "12.6.0",
"@playwright/test": "1.50.1",
+40 -40
@@ -276,8 +276,8 @@ importers:
specifier: 7.28.6
version: 7.28.6(@babel/core@7.29.0)
'@biomejs/biome':
specifier: 2.2.4
version: 2.2.4
specifier: 2.4.10
version: 2.4.10
'@chromatic-com/storybook':
specifier: 5.0.1
version: 5.0.1(storybook@10.3.3(@testing-library/dom@10.4.0)(prettier@3.4.1)(react-dom@19.2.2(react@19.2.2))(react@19.2.2))
@@ -469,7 +469,7 @@ importers:
version: 8.0.2(@emnapi/core@1.7.1)(@emnapi/runtime@1.7.1)(@types/node@20.19.25)(esbuild@0.25.12)(jiti@1.21.7)(yaml@2.7.0)
vite-plugin-checker:
specifier: 0.12.0
version: 0.12.0(@biomejs/biome@2.2.4)(optionator@0.9.3)(typescript@6.0.2)(vite@8.0.2(@emnapi/core@1.7.1)(@emnapi/runtime@1.7.1)(@types/node@20.19.25)(esbuild@0.25.12)(jiti@1.21.7)(yaml@2.7.0))
version: 0.12.0(@biomejs/biome@2.4.10)(optionator@0.9.3)(typescript@6.0.2)(vite@8.0.2(@emnapi/core@1.7.1)(@emnapi/runtime@1.7.1)(@types/node@20.19.25)(esbuild@0.25.12)(jiti@1.21.7)(yaml@2.7.0))
vitest:
specifier: 4.1.1
version: 4.1.1(@types/node@20.19.25)(@vitest/browser-playwright@4.1.1)(jsdom@27.2.0)(msw@2.4.8(typescript@6.0.2))(vite@8.0.2(@emnapi/core@1.7.1)(@emnapi/runtime@1.7.1)(@types/node@20.19.25)(esbuild@0.25.12)(jiti@1.21.7)(yaml@2.7.0))
@@ -708,55 +708,55 @@ packages:
'@bcoe/v8-coverage@0.2.3':
resolution: {integrity: sha512-0hYQ8SB4Db5zvZB4axdMHGwEaQjkZzFjQiN9LVYvIFB2nSUHW9tYpxWriPrWDASIxiaXax83REcLxuSdnGPZtw==, tarball: https://registry.npmjs.org/@bcoe/v8-coverage/-/v8-coverage-0.2.3.tgz}
'@biomejs/biome@2.2.4':
resolution: {integrity: sha512-TBHU5bUy/Ok6m8c0y3pZiuO/BZoY/OcGxoLlrfQof5s8ISVwbVBdFINPQZyFfKwil8XibYWb7JMwnT8wT4WVPg==, tarball: https://registry.npmjs.org/@biomejs/biome/-/biome-2.2.4.tgz}
'@biomejs/biome@2.4.10':
resolution: {integrity: sha512-xxA3AphFQ1geij4JTHXv4EeSTda1IFn22ye9LdyVPoJU19fNVl0uzfEuhsfQ4Yue/0FaLs2/ccVi4UDiE7R30w==, tarball: https://registry.npmjs.org/@biomejs/biome/-/biome-2.4.10.tgz}
engines: {node: '>=14.21.3'}
hasBin: true
'@biomejs/cli-darwin-arm64@2.2.4':
resolution: {integrity: sha512-RJe2uiyaloN4hne4d2+qVj3d3gFJFbmrr5PYtkkjei1O9c+BjGXgpUPVbi8Pl8syumhzJjFsSIYkcLt2VlVLMA==, tarball: https://registry.npmjs.org/@biomejs/cli-darwin-arm64/-/cli-darwin-arm64-2.2.4.tgz}
'@biomejs/cli-darwin-arm64@2.4.10':
resolution: {integrity: sha512-vuzzI1cWqDVzOMIkYyHbKqp+AkQq4K7k+UCXWpkYcY/HDn1UxdsbsfgtVpa40shem8Kax4TLDLlx8kMAecgqiw==, tarball: https://registry.npmjs.org/@biomejs/cli-darwin-arm64/-/cli-darwin-arm64-2.4.10.tgz}
engines: {node: '>=14.21.3'}
cpu: [arm64]
os: [darwin]
'@biomejs/cli-darwin-x64@2.2.4':
resolution: {integrity: sha512-cFsdB4ePanVWfTnPVaUX+yr8qV8ifxjBKMkZwN7gKb20qXPxd/PmwqUH8mY5wnM9+U0QwM76CxFyBRJhC9tQwg==, tarball: https://registry.npmjs.org/@biomejs/cli-darwin-x64/-/cli-darwin-x64-2.2.4.tgz}
'@biomejs/cli-darwin-x64@2.4.10':
resolution: {integrity: sha512-14fzASRo+BPotwp7nWULy2W5xeUyFnTaq1V13Etrrxkrih+ez/2QfgFm5Ehtf5vSjtgx/IJycMMpn5kPd5ZNaA==, tarball: https://registry.npmjs.org/@biomejs/cli-darwin-x64/-/cli-darwin-x64-2.4.10.tgz}
engines: {node: '>=14.21.3'}
cpu: [x64]
os: [darwin]
'@biomejs/cli-linux-arm64-musl@2.2.4':
resolution: {integrity: sha512-7TNPkMQEWfjvJDaZRSkDCPT/2r5ESFPKx+TEev+I2BXDGIjfCZk2+b88FOhnJNHtksbOZv8ZWnxrA5gyTYhSsQ==, tarball: https://registry.npmjs.org/@biomejs/cli-linux-arm64-musl/-/cli-linux-arm64-musl-2.2.4.tgz}
'@biomejs/cli-linux-arm64-musl@2.4.10':
resolution: {integrity: sha512-WrJY6UuiSD/Dh+nwK2qOTu8kdMDlLV3dLMmychIghHPAysWFq1/DGC1pVZx8POE3ZkzKR3PUUnVrtZfMfaJjyQ==, tarball: https://registry.npmjs.org/@biomejs/cli-linux-arm64-musl/-/cli-linux-arm64-musl-2.4.10.tgz}
engines: {node: '>=14.21.3'}
cpu: [arm64]
os: [linux]
'@biomejs/cli-linux-arm64@2.2.4':
resolution: {integrity: sha512-M/Iz48p4NAzMXOuH+tsn5BvG/Jb07KOMTdSVwJpicmhN309BeEyRyQX+n1XDF0JVSlu28+hiTQ2L4rZPvu7nMw==, tarball: https://registry.npmjs.org/@biomejs/cli-linux-arm64/-/cli-linux-arm64-2.2.4.tgz}
'@biomejs/cli-linux-arm64@2.4.10':
resolution: {integrity: sha512-7MH1CMW5uuxQ/s7FLST63qF8B3Hgu2HRdZ7tA1X1+mk+St4JOuIrqdhIBnnyqeyWJNI+Bww7Es5QZ0wIc1Cmkw==, tarball: https://registry.npmjs.org/@biomejs/cli-linux-arm64/-/cli-linux-arm64-2.4.10.tgz}
engines: {node: '>=14.21.3'}
cpu: [arm64]
os: [linux]
'@biomejs/cli-linux-x64-musl@2.2.4':
resolution: {integrity: sha512-m41nFDS0ksXK2gwXL6W6yZTYPMH0LughqbsxInSKetoH6morVj43szqKx79Iudkp8WRT5SxSh7qVb8KCUiewGg==, tarball: https://registry.npmjs.org/@biomejs/cli-linux-x64-musl/-/cli-linux-x64-musl-2.2.4.tgz}
'@biomejs/cli-linux-x64-musl@2.4.10':
resolution: {integrity: sha512-kDTi3pI6PBN6CiczsWYOyP2zk0IJI08EWEQyDMQWW221rPaaEz6FvjLhnU07KMzLv8q3qSuoB93ua6inSQ55Tw==, tarball: https://registry.npmjs.org/@biomejs/cli-linux-x64-musl/-/cli-linux-x64-musl-2.4.10.tgz}
engines: {node: '>=14.21.3'}
cpu: [x64]
os: [linux]
'@biomejs/cli-linux-x64@2.2.4':
resolution: {integrity: sha512-orr3nnf2Dpb2ssl6aihQtvcKtLySLta4E2UcXdp7+RTa7mfJjBgIsbS0B9GC8gVu0hjOu021aU8b3/I1tn+pVQ==, tarball: https://registry.npmjs.org/@biomejs/cli-linux-x64/-/cli-linux-x64-2.2.4.tgz}
'@biomejs/cli-linux-x64@2.4.10':
resolution: {integrity: sha512-tZLvEEi2u9Xu1zAqRjTcpIDGVtldigVvzug2fTuPG0ME/g8/mXpRPcNgLB22bGn6FvLJpHHnqLnwliOu8xjYrg==, tarball: https://registry.npmjs.org/@biomejs/cli-linux-x64/-/cli-linux-x64-2.4.10.tgz}
engines: {node: '>=14.21.3'}
cpu: [x64]
os: [linux]
'@biomejs/cli-win32-arm64@2.2.4':
resolution: {integrity: sha512-NXnfTeKHDFUWfxAefa57DiGmu9VyKi0cDqFpdI+1hJWQjGJhJutHPX0b5m+eXvTKOaf+brU+P0JrQAZMb5yYaQ==, tarball: https://registry.npmjs.org/@biomejs/cli-win32-arm64/-/cli-win32-arm64-2.2.4.tgz}
'@biomejs/cli-win32-arm64@2.4.10':
resolution: {integrity: sha512-umwQU6qPzH+ISTf/eHyJ/QoQnJs3V9Vpjz2OjZXe9MVBZ7prgGafMy7yYeRGnlmDAn87AKTF3Q6weLoMGpeqdQ==, tarball: https://registry.npmjs.org/@biomejs/cli-win32-arm64/-/cli-win32-arm64-2.4.10.tgz}
engines: {node: '>=14.21.3'}
cpu: [arm64]
os: [win32]
'@biomejs/cli-win32-x64@2.2.4':
resolution: {integrity: sha512-3Y4V4zVRarVh/B/eSHczR4LYoSVyv3Dfuvm3cWs5w/HScccS0+Wt/lHOcDTRYeHjQmMYVC3rIRWqyN2EI52+zg==, tarball: https://registry.npmjs.org/@biomejs/cli-win32-x64/-/cli-win32-x64-2.2.4.tgz}
'@biomejs/cli-win32-x64@2.4.10':
resolution: {integrity: sha512-aW/JU5GuyH4uxMrNYpoC2kjaHlyJGLgIa3XkhPEZI0uKhZhJZU8BuEyJmvgzSPQNGozBwWjC972RaNdcJ9KyJg==, tarball: https://registry.npmjs.org/@biomejs/cli-win32-x64/-/cli-win32-x64-2.4.10.tgz}
engines: {node: '>=14.21.3'}
cpu: [x64]
os: [win32]
@@ -7697,39 +7697,39 @@ snapshots:
'@bcoe/v8-coverage@0.2.3': {}
'@biomejs/biome@2.2.4':
'@biomejs/biome@2.4.10':
optionalDependencies:
'@biomejs/cli-darwin-arm64': 2.2.4
'@biomejs/cli-darwin-x64': 2.2.4
'@biomejs/cli-linux-arm64': 2.2.4
'@biomejs/cli-linux-arm64-musl': 2.2.4
'@biomejs/cli-linux-x64': 2.2.4
'@biomejs/cli-linux-x64-musl': 2.2.4
'@biomejs/cli-win32-arm64': 2.2.4
'@biomejs/cli-win32-x64': 2.2.4
'@biomejs/cli-darwin-arm64': 2.4.10
'@biomejs/cli-darwin-x64': 2.4.10
'@biomejs/cli-linux-arm64': 2.4.10
'@biomejs/cli-linux-arm64-musl': 2.4.10
'@biomejs/cli-linux-x64': 2.4.10
'@biomejs/cli-linux-x64-musl': 2.4.10
'@biomejs/cli-win32-arm64': 2.4.10
'@biomejs/cli-win32-x64': 2.4.10
'@biomejs/cli-darwin-arm64@2.2.4':
'@biomejs/cli-darwin-arm64@2.4.10':
optional: true
'@biomejs/cli-darwin-x64@2.2.4':
'@biomejs/cli-darwin-x64@2.4.10':
optional: true
'@biomejs/cli-linux-arm64-musl@2.2.4':
'@biomejs/cli-linux-arm64-musl@2.4.10':
optional: true
'@biomejs/cli-linux-arm64@2.2.4':
'@biomejs/cli-linux-arm64@2.4.10':
optional: true
'@biomejs/cli-linux-x64-musl@2.2.4':
'@biomejs/cli-linux-x64-musl@2.4.10':
optional: true
'@biomejs/cli-linux-x64@2.2.4':
'@biomejs/cli-linux-x64@2.4.10':
optional: true
'@biomejs/cli-win32-arm64@2.2.4':
'@biomejs/cli-win32-arm64@2.4.10':
optional: true
'@biomejs/cli-win32-x64@2.2.4':
'@biomejs/cli-win32-x64@2.4.10':
optional: true
'@blazediff/core@1.9.1': {}
@@ -15034,7 +15034,7 @@ snapshots:
d3-time: 3.1.0
d3-timer: 3.0.1
vite-plugin-checker@0.12.0(@biomejs/biome@2.2.4)(optionator@0.9.3)(typescript@6.0.2)(vite@8.0.2(@emnapi/core@1.7.1)(@emnapi/runtime@1.7.1)(@types/node@20.19.25)(esbuild@0.25.12)(jiti@1.21.7)(yaml@2.7.0)):
vite-plugin-checker@0.12.0(@biomejs/biome@2.4.10)(optionator@0.9.3)(typescript@6.0.2)(vite@8.0.2(@emnapi/core@1.7.1)(@emnapi/runtime@1.7.1)(@types/node@20.19.25)(esbuild@0.25.12)(jiti@1.21.7)(yaml@2.7.0)):
dependencies:
'@babel/code-frame': 7.29.0
chokidar: 4.0.3
@@ -15046,7 +15046,7 @@ snapshots:
vite: 8.0.2(@emnapi/core@1.7.1)(@emnapi/runtime@1.7.1)(@types/node@20.19.25)(esbuild@0.25.12)(jiti@1.21.7)(yaml@2.7.0)
vscode-uri: 3.1.0
optionalDependencies:
'@biomejs/biome': 2.2.4
'@biomejs/biome': 2.4.10
optionator: 0.9.3
typescript: 6.0.2
+6 -12
@@ -147,12 +147,9 @@ describe("api.ts", () => {
{ q: "owner:me" },
"/api/v2/workspaces?q=owner%3Ame",
],
])(
"Workspaces - getURLWithSearchParams(%p, %p) returns %p",
(basePath, filter, expected) => {
expect(getURLWithSearchParams(basePath, filter)).toBe(expected);
},
);
])("Workspaces - getURLWithSearchParams(%p, %p) returns %p", (basePath, filter, expected) => {
expect(getURLWithSearchParams(basePath, filter)).toBe(expected);
});
});
describe("getURLWithSearchParams - users", () => {
@@ -164,12 +161,9 @@ describe("api.ts", () => {
"/api/v2/users?q=status%3Aactive",
],
["/api/v2/users", { q: "" }, "/api/v2/users"],
])(
"Users - getURLWithSearchParams(%p, %p) returns %p",
(basePath, filter, expected) => {
expect(getURLWithSearchParams(basePath, filter)).toBe(expected);
},
);
])("Users - getURLWithSearchParams(%p, %p) returns %p", (basePath, filter, expected) => {
expect(getURLWithSearchParams(basePath, filter)).toBe(expected);
});
});
describe("update", () => {
+2 -3
@@ -3170,14 +3170,13 @@ class ExperimentalApiMethods {
chatId: string,
messageId: number,
req: TypesGen.EditChatMessageRequest,
): Promise<TypesGen.ChatMessage> => {
const response = await this.axios.patch<TypesGen.ChatMessage>(
): Promise<TypesGen.EditChatMessageResponse> => {
const response = await this.axios.patch<TypesGen.EditChatMessageResponse>(
`/api/experimental/chats/${chatId}/messages/${messageId}`,
req,
);
return response.data;
};
interruptChat = async (chatId: string): Promise<TypesGen.Chat> => {
const response = await this.axios.post<TypesGen.Chat>(
`/api/experimental/chats/${chatId}/interrupt`,
+2
@@ -13,6 +13,8 @@ export interface FieldSchema {
type: "string" | "integer" | "number" | "boolean" | "array" | "object";
/** Human-readable description of the field. May be absent for some fields. */
description?: string;
/** Optional display label override. When absent, derive from json_name. */
label?: string;
/** Whether this field is required when configuring the provider. */
required: boolean;
/** Hint for how the frontend should render the input control. */
+8 -2
@@ -107,6 +107,7 @@
"go_name": "Effort",
"type": "string",
"description": "Controls the level of reasoning effort",
"label": "Reasoning Effort",
"required": false,
"enum": ["low", "medium", "high", "max"],
"input_type": "select"
@@ -132,6 +133,7 @@
"go_name": "AllowedDomains",
"type": "array",
"description": "Restrict web search to these domains (cannot be used with blocked_domains)",
"label": "Web Search: Allowed Domains",
"required": false,
"input_type": "json"
},
@@ -140,6 +142,7 @@
"go_name": "BlockedDomains",
"type": "array",
"description": "Block web search on these domains (cannot be used with allowed_domains)",
"label": "Web Search: Blocked Domains",
"required": false,
"input_type": "json"
}
@@ -286,7 +289,8 @@
"type": "string",
"description": "Controls whether reasoning tokens are summarized in the response",
"required": false,
"input_type": "input"
"enum": ["auto", "concise", "detailed"],
"input_type": "select"
},
{
"json_name": "max_completion_tokens",
@@ -354,7 +358,8 @@
"type": "string",
"description": "Latency tier to use for processing the request",
"required": false,
"input_type": "input"
"enum": ["auto", "default", "flex", "scale", "priority"],
"input_type": "select"
},
{
"json_name": "structured_outputs",
@@ -396,6 +401,7 @@
"go_name": "AllowedDomains",
"type": "array",
"description": "Restrict web search to these domains",
"label": "Web Search: Allowed Domains",
"required": false,
"input_type": "json"
}
