Compare commits


11 Commits

Author SHA1 Message Date
default 72960aeb77 fix: show devcontainer Delete menu for failed/error states
The three-dot menu (containing Delete) was gated behind
`showDevcontainerControls`, which requires both a subAgent AND a
container reference. When a devcontainer fails (e.g. exit status 1), the
container and/or subAgent may be missing, hiding the menu entirely.

Decouple the Delete menu from showDevcontainerControls so it renders
whenever the devcontainer is not in a transitioning state (starting,
stopping, deleting). The delete API only needs parentAgent.id and
devcontainer.id, so no subAgent or container is required.

Fixes #23754
2026-04-10 17:27:15 +00:00
Mathias Fredriksson a62ead8588 fix(coderd): sort pinned chats first in GetChats pagination (#24222)
The GetChats SQL query ordered by (updated_at, id) DESC with no
pin_order awareness. A pinned chat with an old updated_at could
land on page 2+ and be invisible in the sidebar's Pinned section.

Add a 4-column ORDER BY: pinned-first flag DESC, negated pin_order
DESC, updated_at DESC, id DESC. The negation trick keeps all sort
columns DESC so the cursor tuple < comparison still works. Update
the after_id cursor clause to match the expanded sort key.

Fix the false handler comment claiming PinChatByID bumps updated_at.
2026-04-10 17:13:19 +00:00
dependabot[bot] b68c14dd04 chore: bump github.com/hashicorp/go-getter from 1.8.4 to 1.8.6 (#24247)
Bumps
[github.com/hashicorp/go-getter](https://github.com/hashicorp/go-getter)
from 1.8.4 to 1.8.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/hashicorp/go-getter/releases">github.com/hashicorp/go-getter's
releases</a>.</em></p>
<blockquote>
<h2>v1.8.6</h2>
<p>No release notes provided.</p>
<h2>v1.8.5</h2>
<h2>What's Changed</h2>
<ul>
<li>[chore] : Bump the go group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/576">hashicorp/go-getter#576</a></li>
<li>use %w to wrap error by <a
href="https://github.com/Ericwww"><code>@​Ericwww</code></a> in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/475">hashicorp/go-getter#475</a></li>
<li>fix: <a
href="https://redirect.github.com/hashicorp/go-getter/issues/538">#538</a>
http file download skipped if headResp.ContentLength is 0 by <a
href="https://github.com/martijnvdp"><code>@​martijnvdp</code></a> in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/539">hashicorp/go-getter#539</a></li>
<li>chore: fix error message capitalization in checksum function by <a
href="https://github.com/ssagarverma"><code>@​ssagarverma</code></a> in
<a
href="https://redirect.github.com/hashicorp/go-getter/pull/578">hashicorp/go-getter#578</a></li>
<li>[chore] : Bump the go group with 8 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/577">hashicorp/go-getter#577</a></li>
<li>Fix git url with ambiguous ref by <a
href="https://github.com/nimasamii"><code>@​nimasamii</code></a> in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/382">hashicorp/go-getter#382</a></li>
<li>fix: resolve compilation errors in get_git_test.go by <a
href="https://github.com/CreatorHead"><code>@​CreatorHead</code></a> in
<a
href="https://redirect.github.com/hashicorp/go-getter/pull/579">hashicorp/go-getter#579</a></li>
<li>[chore] : Bump the actions group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/582">hashicorp/go-getter#582</a></li>
<li>[chore] : Bump the go group with 3 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/583">hashicorp/go-getter#583</a></li>
<li>test that arbitrary files cannot be checksummed by <a
href="https://github.com/schmichael"><code>@​schmichael</code></a> in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/250">hashicorp/go-getter#250</a></li>
<li>[chore] : Bump google.golang.org/api from 0.260.0 to 0.262.0 in the
go group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/585">hashicorp/go-getter#585</a></li>
<li>[chore] : Bump actions/checkout from 6.0.1 to 6.0.2 in the actions
group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/586">hashicorp/go-getter#586</a></li>
<li>[chore] : Bump the go group with 3 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/588">hashicorp/go-getter#588</a></li>
<li>[chore] : Bump actions/cache from 5.0.2 to 5.0.3 in the actions
group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/589">hashicorp/go-getter#589</a></li>
<li>[chore] : Bump aws-actions/configure-aws-credentials from 5.1.1 to
6.0.0 in the actions group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/592">hashicorp/go-getter#592</a></li>
<li>[chore] : Bump google.golang.org/api from 0.264.0 to 0.265.0 in the
go group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/591">hashicorp/go-getter#591</a></li>
<li>[chore] : Bump the go group with 5 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/593">hashicorp/go-getter#593</a></li>
<li>IND-6310 - CRT Onboarding by <a
href="https://github.com/nasareeny"><code>@​nasareeny</code></a> in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/584">hashicorp/go-getter#584</a></li>
<li>Fix crt build path by <a
href="https://github.com/ssagarverma"><code>@​ssagarverma</code></a> in
<a
href="https://redirect.github.com/hashicorp/go-getter/pull/594">hashicorp/go-getter#594</a></li>
<li>[chore] : Bump the go group with 3 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/596">hashicorp/go-getter#596</a></li>
<li>fix: remove checkout action from set-product-version job by <a
href="https://github.com/ssagarverma"><code>@​ssagarverma</code></a> in
<a
href="https://redirect.github.com/hashicorp/go-getter/pull/598">hashicorp/go-getter#598</a></li>
<li>[chore] : Bump the actions group with 4 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/595">hashicorp/go-getter#595</a></li>
<li>fix(deps): upgrade go.opentelemetry.io/otel/sdk to v1.40.0
(GO-2026-4394) by <a
href="https://github.com/ssagarverma"><code>@​ssagarverma</code></a> in
<a
href="https://redirect.github.com/hashicorp/go-getter/pull/599">hashicorp/go-getter#599</a></li>
<li>Prepare go-getter for v1.8.5 release by <a
href="https://github.com/nasareeny"><code>@​nasareeny</code></a> in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/597">hashicorp/go-getter#597</a></li>
<li>[chore] : Bump the actions group with 2 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/600">hashicorp/go-getter#600</a></li>
<li>sec: bump go and xrepos + redact aws tokens in url by <a
href="https://github.com/dduzgun-security"><code>@​dduzgun-security</code></a>
in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/604">hashicorp/go-getter#604</a></li>
</ul>
<p><strong>NOTES:</strong></p>
<p>Binary Distribution Update: To streamline our release process and
align with other HashiCorp tools, all release binaries will now be
published exclusively to the official HashiCorp <a
href="https://releases.hashicorp.com/go-getter/">release</a> site. We
will no longer attach release assets to GitHub Releases.</p>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Ericwww"><code>@​Ericwww</code></a> made
their first contribution in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/475">hashicorp/go-getter#475</a></li>
<li><a
href="https://github.com/martijnvdp"><code>@​martijnvdp</code></a> made
their first contribution in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/539">hashicorp/go-getter#539</a></li>
<li><a href="https://github.com/nimasamii"><code>@​nimasamii</code></a>
made their first contribution in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/382">hashicorp/go-getter#382</a></li>
<li><a href="https://github.com/nasareeny"><code>@​nasareeny</code></a>
made their first contribution in <a
href="https://redirect.github.com/hashicorp/go-getter/pull/584">hashicorp/go-getter#584</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/hashicorp/go-getter/compare/v1.8.4...v1.8.5">https://github.com/hashicorp/go-getter/compare/v1.8.4...v1.8.5</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/hashicorp/go-getter/commit/d23bff48fb87c956bb507a03d35a63ee45470e34"><code>d23bff4</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/go-getter/issues/608">#608</a>
from hashicorp/dependabot/go_modules/go-security-9c51...</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/2c4aba8e5286c18bc66358236454a3e3b0aa7421"><code>2c4aba8</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/go-getter/issues/613">#613</a>
from hashicorp/pull/v1.8.6</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/fe61ed9454b818721d81328d7e880fc2ed2c8d15"><code>fe61ed9</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/go-getter/issues/611">#611</a>
from hashicorp/SECVULN-41053</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/d53365612c5250f7df8d586ba3be70fbd42e613b"><code>d533656</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/go-getter/issues/606">#606</a>
from hashicorp/pull/CRT</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/388f23d7d40f1f1e1a9f5b40ee5590c08154cd6d"><code>388f23d</code></a>
Additional test for local branch and head</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/b7ceaa59b11a203c14cf58e5fcaa8f169c0ced6e"><code>b7ceaa5</code></a>
harden checkout ref handling and added regression tests</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/769cc14fdb0df5ac548f4ead1193b5c40460f11e"><code>769cc14</code></a>
Release version bump up</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/6086a6a1f6347f735401c26429d9a0e14ad29444"><code>6086a6a</code></a>
Review Comments Addressed</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/e02063cd28e97bb8a23a63e72e2a4a4ab6e982cf"><code>e02063c</code></a>
Revert &quot;SECVULN Fix for git checkout argument injection enables
arbitrary fil...</li>
<li><a
href="https://github.com/hashicorp/go-getter/commit/c93084dc4306b2c49c54fe6fbfbe79c98956e5f8"><code>c93084d</code></a>
[chore] : Bump google.golang.org/grpc</li>
<li>Additional commits viewable in <a
href="https://github.com/hashicorp/go-getter/compare/v1.8.4...v1.8.6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/hashicorp/go-getter&package-manager=go_modules&previous-version=1.8.4&new-version=1.8.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts page](https://github.com/coder/coder/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 15:36:57 +00:00
Zach 508114d484 feat: user secret database encryption (#24218)
Add dbcrypt support for user secret values. When database encryption is
enabled, secret values are transparently encrypted on write and
decrypted on read through the existing dbcrypt store wrapper.

- Wrap `CreateUserSecret`, `GetUserSecretByUserIDAndName`,
`ListUserSecretsWithValues`, and `UpdateUserSecretByUserIDAndName` in
enterprise/dbcrypt/dbcrypt.go.
- Add rotate and decrypt support for user secrets in
enterprise/dbcrypt/cliutil.go (`server dbcrypt rotate` and `server
dbcrypt decrypt`).
- Add internal tests covering encrypt-on-create, decrypt-on-read,
re-encrypt-on-update, and plaintext passthrough when no cipher is
configured.
2026-04-10 09:34:11 -06:00
Garrett Delfosse e0fbb0e4ec feat: comment on original PR after cherry-pick PR is created (#24243)
After the cherry-pick workflow creates a backport PR, it now comments on
the original PR to notify the author with a link to the new PR.

If the cherry-pick had conflicts, the comment includes a warning.

## Changes

- Capture the URL output of `gh pr create` into `NEW_PR_URL`
- Add `gh pr comment` on the original PR with the link
- Append a conflict warning to the comment when applicable

> Generated by Coder Agents
2026-04-10 11:21:13 -04:00
J. Scott Miller 7bde763b66 feat: add workspace build transition to provisioner job list (#24131)
Closes #16332

Previously `coder provisioner jobs list` showed no indication of what a workspace
build job was doing (i.e., start, stop, or delete). This adds
`workspace_build_transition` to the provisioner job metadata, exposed in
both the REST API and CLI. Template and workspace name columns were also
added, both available via `-c`.

```
$ coder provisioner jobs list -c id,type,status,"workspace build transition"
ID                                    TYPE                     STATUS     WORKSPACE BUILD TRANSITION
95f35545-a59f-4900-813d-80b8c8fd7a33  template_version_import  succeeded
0a903bbe-cef5-4e72-9e62-f7e7b4dfbb7a  workspace_build          succeeded  start
```
2026-04-10 09:50:11 -05:00
Matt Vollmer 36141fafad feat: stack insights tables vertically and paginate Pull requests table (#24198)
The "By model" and "Pull requests" tables on the PR Insights page
(`/agents/settings/insights`) were side-by-side at `lg` breakpoints, and
the Pull requests table was hard-capped at 20 rows by the backend.

- Replaced `lg:grid-cols-2` with a single-column stacked layout so both
tables span the full content width.
- Removed the `LIMIT 20` from the `GetPRInsightsRecentPRs` SQL query so
all PRs in the selected time range are returned.
- Can add this back if we need it. If we do, we should add a little
subheader above this table to indicate that we're not showing all PRs
within the selected timeframe.
- Added client-side pagination to the Pull requests table using
`PaginationWidgetBase` (page size 10), matching the existing pattern in
`ChatCostSummaryView`.
- Renamed the section heading from "Recent" to "Pull requests" since it
now shows the full set for the time range.
<img width="1481" height="1817" alt="image"
src="https://github.com/user-attachments/assets/0066c42f-4d7b-4cee-b64b-6680848edc68"
/>


> 🤖 PR generated with Coder Agents
2026-04-10 10:48:54 -04:00
Garrett Delfosse 3462c31f43 fix: update directory for terraform-managed subagents (#24220)
When a devcontainer subagent is terraform-managed, the provisioner sets
its directory to the host-side `workspace_folder` path at build time. At
runtime, the agent injection code determines the correct
container-internal path from `devcontainer read-configuration` and sends
it via `CreateSubAgent`.

However, the `CreateSubAgent` handler only updated `display_apps` for
pre-existing agents, ignoring the `Directory` field. This caused
SSH/terminal sessions to land in `~` instead of the workspace folder
(e.g. `/workspaces/foo`).

Add `UpdateWorkspaceAgentDirectoryByID` query and call it in the
terraform-managed subagent update path to also persist the directory.

Fixes PLAT-118

<details><summary>Root cause analysis</summary>

Two code paths set the subagent `Directory` field:

1. **Provisioner (build time):** `insertDevcontainerSubagent` in
   `provisionerdserver.go` stores `dc.GetWorkspaceFolder()` — the
   **host-side** path from the `coder_devcontainer` Terraform resource
   (e.g. `/home/coder/project`).

2. **Agent injection (runtime):** `maybeInjectSubAgentIntoContainerLocked`
   in `api.go` reads the devcontainer config and gets the correct
   **container-internal** path (e.g. `/workspaces/project`), then calls
   `client.Create(ctx, subAgentConfig)`.

For terraform-managed subagents (those with `req.Id != nil`),
`CreateSubAgent` in `coderd/agentapi/subagent.go` recognized the
pre-existing agent and entered the update path — but only called
`UpdateWorkspaceAgentDisplayAppsByID`, discarding the `Directory` field
from the request. The agent kept the stale host-side path, which doesn't
exist inside the container, causing `expandPathToAbs` to fall back to `~`.
</details>

> [!NOTE]
> Generated by Coder Agents
2026-04-10 10:11:22 -04:00
Ethan a0ea71b74c perf(site/src): optimistically edit chat messages (#23976)
Previously, editing a past user message in Agents chat waited for the
PATCH round-trip and cache reconciliation before the conversation
visibly settled. The edited bubble and truncated tail could briefly fall
back to older fetched state, and a failed edit did not restore the full
local editing context cleanly.

Keep history editing optimistic end-to-end: update the edited user
bubble and truncate the tail immediately, preserve that visible
conversation until the authoritative replacement message and cache catch
up, and restore the draft/editor/attachment state on failure. The route
already scopes each `agentId` to a keyed `AgentChatPage` instance with
its own store/cache-writing closures, so navigating between chats does
not need an extra post-await active-chat guard to keep one chat's edit
response out of another chat.
2026-04-10 23:40:49 +10:00
Cian Johnston 0a14bb529e refactor(site): convert OrganizationAutocomplete to fully controlled component (#24211)
Fixes https://github.com/coder/internal/issues/1440

- Convert `OrganizationAutocomplete` to a purely presentational, fully
controlled component
- Accept `value`, `onChange`, `options` from parent; remove internal
state, data fetching, and permission filtering
- Update `CreateTemplateForm` and `CreateUserForm` to own org fetching,
permission checks, auto-select, and invalid-value clearing inline
- Memoize `orgOptions` in callers for stable `useEffect` deps
- Rewrite Storybook stories for the new controlled API


> 🤖 Written by a Coder Agent. Reviewed by a human.
2026-04-10 13:56:43 +01:00
Danielle Maywood 2c32d84f12 fix: remove double bottom border on build logs table (#24000) 2026-04-10 13:50:36 +01:00
96 changed files with 2518 additions and 2236 deletions
+16 -7
@@ -134,10 +134,19 @@ jobs:
exit 0
fi
gh pr create \
--base "$RELEASE_BRANCH" \
--head "$BACKPORT_BRANCH" \
--title "$TITLE" \
--body "$BODY" \
--assignee "$SENDER" \
--reviewer "$SENDER"
NEW_PR_URL=$(
gh pr create \
--base "$RELEASE_BRANCH" \
--head "$BACKPORT_BRANCH" \
--title "$TITLE" \
--body "$BODY" \
--assignee "$SENDER" \
--reviewer "$SENDER"
)
# Comment on the original PR to notify the author.
COMMENT="Cherry-pick PR created: ${NEW_PR_URL}"
if [ "$CONFLICT" = true ]; then
COMMENT="${COMMENT} (⚠️ conflicts need manual resolution)"
fi
gh pr comment "$PR_NUMBER" --body "$COMMENT"
+120
@@ -2862,6 +2862,126 @@ func TestAPI(t *testing.T) {
"rebuilt agent should include updated display apps")
})
// Verify that when a terraform-managed subagent is injected into
// a devcontainer, the Directory field sent to Create reflects
// the container-internal workspaceFolder from devcontainer
// read-configuration, not the host-side workspace_folder from
// the terraform resource. This is the scenario described in
// https://linear.app/codercom/issue/PRODUCT-259:
// 1. Non-terraform subagent → directory = /workspaces/foo (correct)
// 2. Terraform subagent → directory was stuck on host path (bug)
t.Run("TerraformDefinedSubAgentUsesContainerInternalDirectory", func(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("Dev Container tests are not supported on Windows (this test uses mocks but fails due to Windows paths)")
}
var (
ctx = testutil.Context(t, testutil.WaitMedium)
logger = slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
mCtrl = gomock.NewController(t)
terraformAgentID = uuid.New()
containerID = "test-container-id"
// Given: A container with a host-side workspace folder.
terraformContainer = codersdk.WorkspaceAgentContainer{
ID: containerID,
FriendlyName: "test-container",
Image: "test-image",
Running: true,
CreatedAt: time.Now(),
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/home/coder/project",
agentcontainers.DevcontainerConfigFileLabel: "/home/coder/project/.devcontainer/devcontainer.json",
},
}
// Given: A terraform-defined devcontainer whose
// workspace_folder is the HOST-side path (set by provisioner).
terraformDevcontainer = codersdk.WorkspaceAgentDevcontainer{
ID: uuid.New(),
Name: "terraform-devcontainer",
WorkspaceFolder: "/home/coder/project",
ConfigPath: "/home/coder/project/.devcontainer/devcontainer.json",
SubagentID: uuid.NullUUID{UUID: terraformAgentID, Valid: true},
}
fCCLI = &fakeContainerCLI{
containers: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{terraformContainer},
},
arch: runtime.GOARCH,
}
// Given: devcontainer read-configuration returns the
// CONTAINER-INTERNAL workspace folder.
fDCCLI = &fakeDevcontainerCLI{
upID: containerID,
readConfig: agentcontainers.DevcontainerConfig{
Workspace: agentcontainers.DevcontainerWorkspace{
WorkspaceFolder: "/workspaces/project",
},
MergedConfiguration: agentcontainers.DevcontainerMergedConfiguration{
Customizations: agentcontainers.DevcontainerMergedCustomizations{
Coder: []agentcontainers.CoderCustomization{{}},
},
},
},
}
mSAC = acmock.NewMockSubAgentClient(mCtrl)
createCalls = make(chan agentcontainers.SubAgent, 1)
closed bool
)
mSAC.EXPECT().List(gomock.Any()).Return([]agentcontainers.SubAgent{}, nil).AnyTimes()
mSAC.EXPECT().Create(gomock.Any(), gomock.Any()).DoAndReturn(
func(_ context.Context, agent agentcontainers.SubAgent) (agentcontainers.SubAgent, error) {
agent.AuthToken = uuid.New()
createCalls <- agent
return agent, nil
},
).Times(1)
mSAC.EXPECT().Delete(gomock.Any(), gomock.Any()).DoAndReturn(func(_ context.Context, _ uuid.UUID) error {
assert.True(t, closed, "Delete should only be called after Close")
return nil
}).AnyTimes()
api := agentcontainers.NewAPI(logger,
agentcontainers.WithContainerCLI(fCCLI),
agentcontainers.WithDevcontainerCLI(fDCCLI),
agentcontainers.WithDevcontainers(
[]codersdk.WorkspaceAgentDevcontainer{terraformDevcontainer},
[]codersdk.WorkspaceAgentScript{{ID: terraformDevcontainer.ID, LogSourceID: uuid.New()}},
),
agentcontainers.WithSubAgentClient(mSAC),
agentcontainers.WithSubAgentURL("test-subagent-url"),
agentcontainers.WithWatcher(watcher.NewNoop()),
)
api.Start()
defer func() {
closed = true
api.Close()
}()
// When: The devcontainer is created (triggering injection).
err := api.CreateDevcontainer(terraformDevcontainer.WorkspaceFolder, terraformDevcontainer.ConfigPath)
require.NoError(t, err)
// Then: The subagent sent to Create has the correct
// container-internal directory, not the host path.
createdAgent := testutil.RequireReceive(ctx, t, createCalls)
assert.Equal(t, terraformAgentID, createdAgent.ID,
"agent should use terraform-defined ID")
assert.Equal(t, "/workspaces/project", createdAgent.Directory,
"directory should be the container-internal path from devcontainer "+
"read-configuration, not the host-side workspace_folder")
})
t.Run("Error", func(t *testing.T) {
t.Parallel()
+1 -9
@@ -79,7 +79,6 @@ import (
"github.com/coder/coder/v2/coderd/notifications"
"github.com/coder/coder/v2/coderd/notifications/reports"
"github.com/coder/coder/v2/coderd/oauthpki"
"github.com/coder/coder/v2/coderd/objstore"
"github.com/coder/coder/v2/coderd/pproflabel"
"github.com/coder/coder/v2/coderd/prometheusmetrics"
"github.com/coder/coder/v2/coderd/prometheusmetrics/insights"
@@ -639,19 +638,12 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
vals.WorkspaceHostnameSuffix.String())
}
objStore, err := objstore.FromConfig(ctx, vals.ObjectStore, r.globalConfig)
if err != nil {
return xerrors.Errorf("initialize object store: %w", err)
}
defer objStore.Close()
options := &coderd.Options{
AccessURL: vals.AccessURL.Value(),
AppHostname: appHostname,
AppHostnameRegex: appHostnameRegex,
Logger: logger.Named("coderd"),
Database: nil,
ObjectStore: objStore,
BaseDERPMap: derpMap,
Pubsub: nil,
CacheDir: cacheDir,
@@ -1083,7 +1075,7 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
defer shutdownConns()
// Ensures that old database entries are cleaned up over time!
purger := dbpurge.New(ctx, logger.Named("dbpurge"), options.Database, options.DeploymentValues, quartz.NewReal(), options.PrometheusRegistry, objStore)
purger := dbpurge.New(ctx, logger.Named("dbpurge"), options.Database, options.DeploymentValues, quartz.NewReal(), options.PrometheusRegistry)
defer purger.Close()
// Updates workspace usage
+1 -1
@@ -11,7 +11,7 @@ OPTIONS:
-O, --org string, $CODER_ORGANIZATION
Select which organization (uuid or name) to use.
-c, --column [id|created at|started at|completed at|canceled at|error|error code|status|worker id|worker name|file id|tags|queue position|queue size|organization id|initiator id|template version id|workspace build id|type|available workers|template version name|template id|template name|template display name|template icon|workspace id|workspace name|logs overflowed|organization|queue] (default: created at,id,type,template display name,status,queue,tags)
-c, --column [id|created at|started at|completed at|canceled at|error|error code|status|worker id|worker name|file id|tags|queue position|queue size|organization id|initiator id|template version id|workspace build id|type|available workers|template version name|template id|template name|template display name|template icon|workspace id|workspace name|workspace build transition|logs overflowed|organization|queue] (default: created at,id,type,template display name,status,queue,tags)
Columns to display in table output.
-i, --initiator string, $CODER_PROVISIONER_JOB_LIST_INITIATOR
@@ -58,7 +58,8 @@
"template_display_name": "",
"template_icon": "",
"workspace_id": "===========[workspace ID]===========",
"workspace_name": "test-workspace"
"workspace_name": "test-workspace",
"workspace_build_transition": "start"
},
"logs_overflowed": false,
"organization_name": "Coder"
-35
@@ -773,41 +773,6 @@ OIDC OPTIONS:
requirement, and can lead to an insecure OIDC configuration. It is not
recommended to use this flag.
OBJECT STORE OPTIONS:
Configure the object storage backend for binary data (chat files, transcripts,
etc.). Defaults to local filesystem storage.
--objectstore-backend string, $CODER_OBJECTSTORE_BACKEND (default: local)
The storage backend for binary data such as chat files. Valid values:
local, s3, gcs.
--objectstore-gcs-bucket string, $CODER_OBJECTSTORE_GCS_BUCKET
GCS bucket name. Required when the backend is "gcs".
--objectstore-gcs-credentials-file string, $CODER_OBJECTSTORE_GCS_CREDENTIALS_FILE
Path to a GCS service account key file. If empty, Application Default
Credentials are used.
--objectstore-gcs-prefix string, $CODER_OBJECTSTORE_GCS_PREFIX
Optional key prefix within the GCS bucket.
--objectstore-local-dir string, $CODER_OBJECTSTORE_LOCAL_DIR
Root directory for the local filesystem object store backend. Only
used when the backend is "local".
--objectstore-s3-bucket string, $CODER_OBJECTSTORE_S3_BUCKET
S3 bucket name. Required when the backend is "s3".
--objectstore-s3-endpoint string, $CODER_OBJECTSTORE_S3_ENDPOINT
Custom S3-compatible endpoint URL (e.g. for MinIO, R2, Cloudflare).
Leave empty for standard AWS S3.
--objectstore-s3-prefix string, $CODER_OBJECTSTORE_S3_PREFIX
Optional key prefix within the S3 bucket.
--objectstore-s3-region string, $CODER_OBJECTSTORE_S3_REGION
AWS region for the S3 bucket.
PROVISIONING OPTIONS:
Tune the behavior of the provisioner, which is responsible for creating,
updating, and deleting workspace resources.
-34
@@ -908,37 +908,3 @@ retention:
# build are always retained. Set to 0 to disable automatic deletion.
# (default: 7d, type: duration)
workspace_agent_logs: 168h0m0s
# Configure the object storage backend for binary data (chat files, transcripts,
# etc.). Defaults to local filesystem storage.
objectStore:
# The storage backend for binary data such as chat files. Valid values: local, s3,
# gcs.
# (default: local, type: string)
backend: local
# Root directory for the local filesystem object store backend. Only used when the
# backend is "local".
# (default: <unset>, type: string)
local_dir: ""
# S3 bucket name. Required when the backend is "s3".
# (default: <unset>, type: string)
s3_bucket: ""
# AWS region for the S3 bucket.
# (default: <unset>, type: string)
s3_region: ""
# Optional key prefix within the S3 bucket.
# (default: <unset>, type: string)
s3_prefix: ""
# Custom S3-compatible endpoint URL (e.g. for MinIO, R2, Cloudflare). Leave empty
# for standard AWS S3.
# (default: <unset>, type: string)
s3_endpoint: ""
# GCS bucket name. Required when the backend is "gcs".
# (default: <unset>, type: string)
gcs_bucket: ""
# Optional key prefix within the GCS bucket.
# (default: <unset>, type: string)
gcs_prefix: ""
# Path to a GCS service account key file. If empty, Application Default
# Credentials are used.
# (default: <unset>, type: string)
gcs_credentials_file: ""
+11 -1
@@ -71,7 +71,7 @@ func (a *SubAgentAPI) CreateSubAgent(ctx context.Context, req *agentproto.Create
// An ID is only given in the request when it is a terraform-defined devcontainer
// that has attached resources. These subagents are pre-provisioned by terraform
// (the agent record already exists), so we update configurable fields like
// display_apps rather than creating a new agent.
// display_apps and directory rather than creating a new agent.
if req.Id != nil {
id, err := uuid.FromBytes(req.Id)
if err != nil {
@@ -97,6 +97,16 @@ func (a *SubAgentAPI) CreateSubAgent(ctx context.Context, req *agentproto.Create
return nil, xerrors.Errorf("update workspace agent display apps: %w", err)
}
if req.Directory != "" {
if err := a.Database.UpdateWorkspaceAgentDirectoryByID(ctx, database.UpdateWorkspaceAgentDirectoryByIDParams{
ID: id,
Directory: req.Directory,
UpdatedAt: createdAt,
}); err != nil {
return nil, xerrors.Errorf("update workspace agent directory: %w", err)
}
}
return &agentproto.CreateSubAgentResponse{
Agent: &agentproto.SubAgent{
Name: subAgent.Name,
+38 -2
View File
@@ -1267,11 +1267,11 @@ func TestSubAgentAPI(t *testing.T) {
agentID, err := uuid.FromBytes(resp.Agent.Id)
require.NoError(t, err)
// And: The database agent's other fields are unchanged.
// And: The database agent's name, architecture, and OS are unchanged.
updatedAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), agentID)
require.NoError(t, err)
require.Equal(t, baseChildAgent.Name, updatedAgent.Name)
require.Equal(t, baseChildAgent.Directory, updatedAgent.Directory)
require.Equal(t, "/different/path", updatedAgent.Directory)
require.Equal(t, baseChildAgent.Architecture, updatedAgent.Architecture)
require.Equal(t, baseChildAgent.OperatingSystem, updatedAgent.OperatingSystem)
@@ -1280,6 +1280,42 @@ func TestSubAgentAPI(t *testing.T) {
require.Equal(t, database.DisplayAppWebTerminal, updatedAgent.DisplayApps[0])
},
},
{
name: "OK_DirectoryUpdated",
setup: func(t *testing.T, db database.Store, agent database.WorkspaceAgent) *proto.CreateSubAgentRequest {
// Given: An existing child agent with a stale host-side
// directory (as set by the provisioner at build time).
childAgent := dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ParentID: uuid.NullUUID{Valid: true, UUID: agent.ID},
ResourceID: agent.ResourceID,
Name: baseChildAgent.Name,
Directory: "/home/coder/project",
Architecture: baseChildAgent.Architecture,
OperatingSystem: baseChildAgent.OperatingSystem,
DisplayApps: baseChildAgent.DisplayApps,
})
// When: Agent injection sends the correct
// container-internal path.
return &proto.CreateSubAgentRequest{
Id: childAgent.ID[:],
Directory: "/workspaces/project",
DisplayApps: []proto.CreateSubAgentRequest_DisplayApp{
proto.CreateSubAgentRequest_WEB_TERMINAL,
},
}
},
check: func(t *testing.T, ctx context.Context, db database.Store, resp *proto.CreateSubAgentResponse, agent database.WorkspaceAgent) {
agentID, err := uuid.FromBytes(resp.Agent.Id)
require.NoError(t, err)
// Then: Directory is updated to the container-internal
// path.
updatedAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), agentID)
require.NoError(t, err)
require.Equal(t, "/workspaces/project", updatedAgent.Directory)
},
},
{
name: "Error/MalformedID",
setup: func(t *testing.T, db database.Store, agent database.WorkspaceAgent) *proto.CreateSubAgentRequest {
+3 -44
View File
@@ -15925,9 +15925,6 @@ const docTemplate = `{
"oauth2": {
"$ref": "#/definitions/codersdk.OAuth2Config"
},
"object_store": {
"$ref": "#/definitions/codersdk.ObjectStoreConfig"
},
"oidc": {
"$ref": "#/definitions/codersdk.OIDCConfig"
},
@@ -17944,47 +17941,6 @@ const docTemplate = `{
}
}
},
"codersdk.ObjectStoreConfig": {
"type": "object",
"properties": {
"backend": {
"description": "Backend selects the storage backend: \"local\" (default), \"s3\", or \"gcs\".",
"type": "string"
},
"gcs_bucket": {
"description": "GCSBucket is the GCS bucket name. Required when Backend is \"gcs\".",
"type": "string"
},
"gcs_credentials_file": {
"description": "GCSCredentialsFile is an optional path to a GCS service account\nkey file. If empty, Application Default Credentials are used.",
"type": "string"
},
"gcs_prefix": {
"description": "GCSPrefix is an optional key prefix within the GCS bucket.",
"type": "string"
},
"local_dir": {
"description": "LocalDir is the root directory for the local filesystem backend.\nOnly used when Backend is \"local\". Defaults to \u003cconfig-dir\u003e/objectstore/.",
"type": "string"
},
"s3_bucket": {
"description": "S3Bucket is the S3 bucket name. Required when Backend is \"s3\".",
"type": "string"
},
"s3_endpoint": {
"description": "S3Endpoint is a custom S3-compatible endpoint URL (for MinIO, R2, etc.).",
"type": "string"
},
"s3_prefix": {
"description": "S3Prefix is an optional key prefix within the S3 bucket.",
"type": "string"
},
"s3_region": {
"description": "S3Region is the AWS region for the S3 bucket.",
"type": "string"
}
}
},
"codersdk.OptionType": {
"type": "string",
"enum": [
@@ -19193,6 +19149,9 @@ const docTemplate = `{
"template_version_name": {
"type": "string"
},
"workspace_build_transition": {
"$ref": "#/definitions/codersdk.WorkspaceTransition"
},
"workspace_id": {
"type": "string",
"format": "uuid"
+3 -44
View File
@@ -14392,9 +14392,6 @@
"oauth2": {
"$ref": "#/definitions/codersdk.OAuth2Config"
},
"object_store": {
"$ref": "#/definitions/codersdk.ObjectStoreConfig"
},
"oidc": {
"$ref": "#/definitions/codersdk.OIDCConfig"
},
@@ -16341,47 +16338,6 @@
}
}
},
"codersdk.ObjectStoreConfig": {
"type": "object",
"properties": {
"backend": {
"description": "Backend selects the storage backend: \"local\" (default), \"s3\", or \"gcs\".",
"type": "string"
},
"gcs_bucket": {
"description": "GCSBucket is the GCS bucket name. Required when Backend is \"gcs\".",
"type": "string"
},
"gcs_credentials_file": {
"description": "GCSCredentialsFile is an optional path to a GCS service account\nkey file. If empty, Application Default Credentials are used.",
"type": "string"
},
"gcs_prefix": {
"description": "GCSPrefix is an optional key prefix within the GCS bucket.",
"type": "string"
},
"local_dir": {
"description": "LocalDir is the root directory for the local filesystem backend.\nOnly used when Backend is \"local\". Defaults to \u003cconfig-dir\u003e/objectstore/.",
"type": "string"
},
"s3_bucket": {
"description": "S3Bucket is the S3 bucket name. Required when Backend is \"s3\".",
"type": "string"
},
"s3_endpoint": {
"description": "S3Endpoint is a custom S3-compatible endpoint URL (for MinIO, R2, etc.).",
"type": "string"
},
"s3_prefix": {
"description": "S3Prefix is an optional key prefix within the S3 bucket.",
"type": "string"
},
"s3_region": {
"description": "S3Region is the AWS region for the S3 bucket.",
"type": "string"
}
}
},
"codersdk.OptionType": {
"type": "string",
"enum": ["string", "number", "bool", "list(string)"],
@@ -17553,6 +17509,9 @@
"template_version_name": {
"type": "string"
},
"workspace_build_transition": {
"$ref": "#/definitions/codersdk.WorkspaceTransition"
},
"workspace_id": {
"type": "string",
"format": "uuid"
-3
View File
@@ -71,7 +71,6 @@ import (
"github.com/coder/coder/v2/coderd/metricscache"
"github.com/coder/coder/v2/coderd/notifications"
"github.com/coder/coder/v2/coderd/oauth2provider"
"github.com/coder/coder/v2/coderd/objstore"
"github.com/coder/coder/v2/coderd/portsharing"
"github.com/coder/coder/v2/coderd/pproflabel"
"github.com/coder/coder/v2/coderd/prebuilds"
@@ -159,7 +158,6 @@ type Options struct {
AppHostnameRegex *regexp.Regexp
Logger slog.Logger
Database database.Store
ObjectStore objstore.Store
Pubsub pubsub.Pubsub
RuntimeConfig *runtimeconfig.Manager
@@ -794,7 +792,6 @@ func New(options *Options) *API {
Pubsub: options.Pubsub,
WebpushDispatcher: options.WebPushDispatcher,
UsageTracker: options.WorkspaceUsageTracker,
ObjectStore: options.ObjectStore,
})
gitSyncLogger := options.Logger.Named("gitsync")
refresher := gitsync.NewRefresher(
+17 -4
View File
@@ -2042,9 +2042,9 @@ func (q *querier) DeleteOldAuditLogs(ctx context.Context, arg database.DeleteOld
return q.db.DeleteOldAuditLogs(ctx, arg)
}
func (q *querier) DeleteOldChatFiles(ctx context.Context, arg database.DeleteOldChatFilesParams) ([]database.DeleteOldChatFilesRow, error) {
func (q *querier) DeleteOldChatFiles(ctx context.Context, arg database.DeleteOldChatFilesParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceSystem); err != nil {
return nil, err
return 0, err
}
return q.db.DeleteOldChatFiles(ctx, arg)
}
@@ -3401,11 +3401,11 @@ func (q *querier) GetPRInsightsPerModel(ctx context.Context, arg database.GetPRI
return q.db.GetPRInsightsPerModel(ctx, arg)
}
func (q *querier) GetPRInsightsRecentPRs(ctx context.Context, arg database.GetPRInsightsRecentPRsParams) ([]database.GetPRInsightsRecentPRsRow, error) {
func (q *querier) GetPRInsightsPullRequests(ctx context.Context, arg database.GetPRInsightsPullRequestsParams) ([]database.GetPRInsightsPullRequestsRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceDeploymentConfig); err != nil {
return nil, err
}
return q.db.GetPRInsightsRecentPRs(ctx, arg)
return q.db.GetPRInsightsPullRequests(ctx, arg)
}
func (q *querier) GetPRInsightsSummary(ctx context.Context, arg database.GetPRInsightsSummaryParams) (database.GetPRInsightsSummaryRow, error) {
@@ -6783,6 +6783,19 @@ func (q *querier) UpdateWorkspaceAgentConnectionByID(ctx context.Context, arg da
return q.db.UpdateWorkspaceAgentConnectionByID(ctx, arg)
}
func (q *querier) UpdateWorkspaceAgentDirectoryByID(ctx context.Context, arg database.UpdateWorkspaceAgentDirectoryByIDParams) error {
workspace, err := q.db.GetWorkspaceByAgentID(ctx, arg.ID)
if err != nil {
return err
}
if err := q.authorizeContext(ctx, policy.ActionUpdateAgent, workspace); err != nil {
return err
}
return q.db.UpdateWorkspaceAgentDirectoryByID(ctx, arg)
}
func (q *querier) UpdateWorkspaceAgentDisplayAppsByID(ctx context.Context, arg database.UpdateWorkspaceAgentDisplayAppsByIDParams) error {
workspace, err := q.db.GetWorkspaceByAgentID(ctx, arg.ID)
if err != nil {
+14 -3
View File
@@ -2261,9 +2261,9 @@ func (s *MethodTestSuite) TestTemplate() {
dbm.EXPECT().GetPRInsightsPerModel(gomock.Any(), arg).Return([]database.GetPRInsightsPerModelRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead)
}))
s.Run("GetPRInsightsRecentPRs", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetPRInsightsRecentPRsParams{}
dbm.EXPECT().GetPRInsightsRecentPRs(gomock.Any(), arg).Return([]database.GetPRInsightsRecentPRsRow{}, nil).AnyTimes()
s.Run("GetPRInsightsPullRequests", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetPRInsightsPullRequestsParams{}
dbm.EXPECT().GetPRInsightsPullRequests(gomock.Any(), arg).Return([]database.GetPRInsightsPullRequestsRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceDeploymentConfig, policy.ActionRead)
}))
s.Run("GetTelemetryTaskEvents", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
@@ -2935,6 +2935,17 @@ func (s *MethodTestSuite) TestWorkspace() {
dbm.EXPECT().UpdateWorkspaceAgentStartupByID(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(w, policy.ActionUpdate).Returns()
}))
s.Run("UpdateWorkspaceAgentDirectoryByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
w := testutil.Fake(s.T(), faker, database.Workspace{})
agt := testutil.Fake(s.T(), faker, database.WorkspaceAgent{})
arg := database.UpdateWorkspaceAgentDirectoryByIDParams{
ID: agt.ID,
Directory: "/workspaces/project",
}
dbm.EXPECT().GetWorkspaceByAgentID(gomock.Any(), agt.ID).Return(w, nil).AnyTimes()
dbm.EXPECT().UpdateWorkspaceAgentDirectoryByID(gomock.Any(), arg).Return(nil).AnyTimes()
check.Args(arg).Asserts(w, policy.ActionUpdateAgent).Returns()
}))
s.Run("UpdateWorkspaceAgentDisplayAppsByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
w := testutil.Fake(s.T(), faker, database.Workspace{})
agt := testutil.Fake(s.T(), faker, database.WorkspaceAgent{})
+13 -5
View File
@@ -600,7 +600,7 @@ func (m queryMetricsStore) DeleteOldAuditLogs(ctx context.Context, arg database.
return r0, r1
}
func (m queryMetricsStore) DeleteOldChatFiles(ctx context.Context, arg database.DeleteOldChatFilesParams) ([]database.DeleteOldChatFilesRow, error) {
func (m queryMetricsStore) DeleteOldChatFiles(ctx context.Context, arg database.DeleteOldChatFilesParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.DeleteOldChatFiles(ctx, arg)
m.queryLatencies.WithLabelValues("DeleteOldChatFiles").Observe(time.Since(start).Seconds())
@@ -1992,11 +1992,11 @@ func (m queryMetricsStore) GetPRInsightsPerModel(ctx context.Context, arg databa
return r0, r1
}
func (m queryMetricsStore) GetPRInsightsRecentPRs(ctx context.Context, arg database.GetPRInsightsRecentPRsParams) ([]database.GetPRInsightsRecentPRsRow, error) {
func (m queryMetricsStore) GetPRInsightsPullRequests(ctx context.Context, arg database.GetPRInsightsPullRequestsParams) ([]database.GetPRInsightsPullRequestsRow, error) {
start := time.Now()
r0, r1 := m.s.GetPRInsightsRecentPRs(ctx, arg)
m.queryLatencies.WithLabelValues("GetPRInsightsRecentPRs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetPRInsightsRecentPRs").Inc()
r0, r1 := m.s.GetPRInsightsPullRequests(ctx, arg)
m.queryLatencies.WithLabelValues("GetPRInsightsPullRequests").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetPRInsightsPullRequests").Inc()
return r0, r1
}
@@ -4840,6 +4840,14 @@ func (m queryMetricsStore) UpdateWorkspaceAgentConnectionByID(ctx context.Contex
return r0
}
func (m queryMetricsStore) UpdateWorkspaceAgentDirectoryByID(ctx context.Context, arg database.UpdateWorkspaceAgentDirectoryByIDParams) error {
start := time.Now()
r0 := m.s.UpdateWorkspaceAgentDirectoryByID(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateWorkspaceAgentDirectoryByID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateWorkspaceAgentDirectoryByID").Inc()
return r0
}
func (m queryMetricsStore) UpdateWorkspaceAgentDisplayAppsByID(ctx context.Context, arg database.UpdateWorkspaceAgentDisplayAppsByIDParams) error {
start := time.Now()
r0 := m.s.UpdateWorkspaceAgentDisplayAppsByID(ctx, arg)
+23 -9
View File
@@ -999,10 +999,10 @@ func (mr *MockStoreMockRecorder) DeleteOldAuditLogs(ctx, arg any) *gomock.Call {
}
// DeleteOldChatFiles mocks base method.
func (m *MockStore) DeleteOldChatFiles(ctx context.Context, arg database.DeleteOldChatFilesParams) ([]database.DeleteOldChatFilesRow, error) {
func (m *MockStore) DeleteOldChatFiles(ctx context.Context, arg database.DeleteOldChatFilesParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteOldChatFiles", ctx, arg)
ret0, _ := ret[0].([]database.DeleteOldChatFilesRow)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
@@ -3692,19 +3692,19 @@ func (mr *MockStoreMockRecorder) GetPRInsightsPerModel(ctx, arg any) *gomock.Cal
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPRInsightsPerModel", reflect.TypeOf((*MockStore)(nil).GetPRInsightsPerModel), ctx, arg)
}
// GetPRInsightsRecentPRs mocks base method.
func (m *MockStore) GetPRInsightsRecentPRs(ctx context.Context, arg database.GetPRInsightsRecentPRsParams) ([]database.GetPRInsightsRecentPRsRow, error) {
// GetPRInsightsPullRequests mocks base method.
func (m *MockStore) GetPRInsightsPullRequests(ctx context.Context, arg database.GetPRInsightsPullRequestsParams) ([]database.GetPRInsightsPullRequestsRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetPRInsightsRecentPRs", ctx, arg)
ret0, _ := ret[0].([]database.GetPRInsightsRecentPRsRow)
ret := m.ctrl.Call(m, "GetPRInsightsPullRequests", ctx, arg)
ret0, _ := ret[0].([]database.GetPRInsightsPullRequestsRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetPRInsightsRecentPRs indicates an expected call of GetPRInsightsRecentPRs.
func (mr *MockStoreMockRecorder) GetPRInsightsRecentPRs(ctx, arg any) *gomock.Call {
// GetPRInsightsPullRequests indicates an expected call of GetPRInsightsPullRequests.
func (mr *MockStoreMockRecorder) GetPRInsightsPullRequests(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPRInsightsRecentPRs", reflect.TypeOf((*MockStore)(nil).GetPRInsightsRecentPRs), ctx, arg)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPRInsightsPullRequests", reflect.TypeOf((*MockStore)(nil).GetPRInsightsPullRequests), ctx, arg)
}
// GetPRInsightsSummary mocks base method.
@@ -9120,6 +9120,20 @@ func (mr *MockStoreMockRecorder) UpdateWorkspaceAgentConnectionByID(ctx, arg any
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateWorkspaceAgentConnectionByID", reflect.TypeOf((*MockStore)(nil).UpdateWorkspaceAgentConnectionByID), ctx, arg)
}
// UpdateWorkspaceAgentDirectoryByID mocks base method.
func (m *MockStore) UpdateWorkspaceAgentDirectoryByID(ctx context.Context, arg database.UpdateWorkspaceAgentDirectoryByIDParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateWorkspaceAgentDirectoryByID", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// UpdateWorkspaceAgentDirectoryByID indicates an expected call of UpdateWorkspaceAgentDirectoryByID.
func (mr *MockStoreMockRecorder) UpdateWorkspaceAgentDirectoryByID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateWorkspaceAgentDirectoryByID", reflect.TypeOf((*MockStore)(nil).UpdateWorkspaceAgentDirectoryByID), ctx, arg)
}
// UpdateWorkspaceAgentDisplayAppsByID mocks base method.
func (m *MockStore) UpdateWorkspaceAgentDisplayAppsByID(ctx context.Context, arg database.UpdateWorkspaceAgentDisplayAppsByIDParams) error {
m.ctrl.T.Helper()
+2 -100
View File
@@ -3,7 +3,6 @@ package dbpurge
import (
"context"
"io"
"sync"
"time"
"github.com/prometheus/client_golang/prometheus"
@@ -13,7 +12,6 @@ import (
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/objstore"
"github.com/coder/coder/v2/coderd/pproflabel"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/quartz"
@@ -43,15 +41,11 @@ const (
chatFilesBatchSize = 1000
)
// chatFilesNamespace is the object store namespace under which chat
// files are stored.
const chatFilesNamespace = "chatfiles"
// New creates a new periodically purging database instance.
// It is the caller's responsibility to call Close on the returned instance.
//
// This is for cleaning up old, unused resources from the database that take up space.
func New(ctx context.Context, logger slog.Logger, db database.Store, vals *codersdk.DeploymentValues, clk quartz.Clock, reg prometheus.Registerer, objStore objstore.Store) io.Closer {
func New(ctx context.Context, logger slog.Logger, db database.Store, vals *codersdk.DeploymentValues, clk quartz.Clock, reg prometheus.Registerer) io.Closer {
closed := make(chan struct{})
ctx, cancelFunc := context.WithCancel(ctx)
@@ -75,22 +69,6 @@ func New(ctx context.Context, logger slog.Logger, db database.Store, vals *coder
}, []string{"record_type"})
reg.MustRegister(recordsPurged)
objStoreInflight := prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "dbpurge",
Name: "objstore_delete_inflight",
Help: "Number of object store files currently enqueued for deletion.",
})
reg.MustRegister(objStoreInflight)
objStoreDeleted := prometheus.NewCounter(prometheus.CounterOpts{
Namespace: "coderd",
Subsystem: "dbpurge",
Name: "objstore_files_deleted_total",
Help: "Total number of object store files successfully deleted.",
})
reg.MustRegister(objStoreDeleted)
inst := &instance{
cancel: cancelFunc,
closed: closed,
@@ -99,9 +77,6 @@ func New(ctx context.Context, logger slog.Logger, db database.Store, vals *coder
clk: clk,
iterationDuration: iterationDuration,
recordsPurged: recordsPurged,
objStore: objStore,
objStoreInflight: objStoreInflight,
objStoreDeleted: objStoreDeleted,
}
// Start the ticker with the initial delay.
@@ -275,20 +250,13 @@ func (i *instance) purgeTick(ctx context.Context, db database.Store, start time.
return xerrors.Errorf("failed to delete old chats: %w", err)
}
deletedFiles, err := tx.DeleteOldChatFiles(ctx, database.DeleteOldChatFilesParams{
purgedChatFiles, err = tx.DeleteOldChatFiles(ctx, database.DeleteOldChatFilesParams{
BeforeTime: deleteChatsBefore,
LimitCount: chatFilesBatchSize,
})
if err != nil {
return xerrors.Errorf("failed to delete old chat files: %w", err)
}
purgedChatFiles = int64(len(deletedFiles))
// Collect object store keys from the deleted rows
// and delete them in a background goroutine so
// slow object store I/O does not hold the
// advisory lock or block the next tick.
i.deleteObjStoreKeys(ctx, deletedFiles)
}
i.logger.Debug(ctx, "purged old database entries",
slog.F("workspace_agent_logs", purgedWorkspaceAgentLogs),
@@ -327,13 +295,6 @@ type instance struct {
clk quartz.Clock
iterationDuration *prometheus.HistogramVec
recordsPurged *prometheus.CounterVec
objStore objstore.Store
objStoreInflight prometheus.Gauge
objStoreDeleted prometheus.Counter
// objDeleteMu serializes background object store delete batches
// so at most one goroutine is deleting at a time.
objDeleteMu sync.Mutex
}
func (i *instance) Close() error {
@@ -341,62 +302,3 @@ func (i *instance) Close() error {
<-i.closed
return nil
}
// deleteObjStoreKeys removes object store entries for the given
// deleted chat file rows. The work runs in a background goroutine
// guarded by a mutex so that slow object store I/O never blocks
// the purge transaction or the next tick. At most one delete batch
// runs at a time; if a batch is already in flight the new keys are
// silently dropped (they will be orphan-collected on a future tick
// if needed).
func (i *instance) deleteObjStoreKeys(ctx context.Context, rows []database.DeleteOldChatFilesRow) {
// Collect non-empty object store keys.
var keys []string
for _, r := range rows {
if r.ObjectStoreKey.Valid && r.ObjectStoreKey.String != "" {
keys = append(keys, r.ObjectStoreKey.String)
}
}
if len(keys) == 0 {
return
}
// Try to acquire the mutex without blocking. If another
// delete batch is already running, skip this one.
if !i.objDeleteMu.TryLock() {
i.logger.Debug(ctx, "object store delete already in progress, skipping batch",
slog.F("skipped_keys", len(keys)))
return
}
i.objStoreInflight.Add(float64(len(keys)))
go func() {
defer i.objDeleteMu.Unlock()
var deleted int
for _, key := range keys {
if ctx.Err() != nil {
remaining := len(keys) - deleted
i.objStoreInflight.Sub(float64(remaining))
i.logger.Debug(ctx, "context canceled during object store cleanup",
slog.F("deleted", deleted),
slog.F("remaining", remaining))
return
}
if err := i.objStore.Delete(ctx, chatFilesNamespace, key); err != nil {
i.logger.Warn(ctx, "failed to delete chat file from object store",
slog.F("key", key),
slog.Error(err))
} else {
deleted++
}
i.objStoreInflight.Dec()
}
i.objStoreDeleted.Add(float64(deleted))
i.logger.Debug(ctx, "deleted chat files from object store",
slog.F("deleted", deleted),
slog.F("failed", len(keys)-deleted))
}()
}
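The removed `deleteObjStoreKeys` illustrates a general pattern worth noting: `TryLock` plus a background goroutine keeps slow I/O out of a hot loop, at the cost of dropping work when a batch is already in flight. A minimal standalone sketch of that hand-off — hypothetical names, assuming dropped keys are acceptable because a later pass can retry:

```go
package main

import (
	"fmt"
	"sync"
)

// batcher serializes background delete batches so at most one runs at a
// time, mirroring the objDeleteMu field removed above.
type batcher struct {
	mu sync.Mutex
}

// enqueue starts a background delete for keys unless a batch is already
// running, in which case the keys are dropped and false is returned.
// done reports how many deletes succeeded.
func (b *batcher) enqueue(keys []string, del func(string) error, done func(deleted int)) bool {
	if len(keys) == 0 {
		return false
	}
	// Non-blocking acquire: never stall the caller's loop.
	if !b.mu.TryLock() {
		return false
	}
	go func() {
		defer b.mu.Unlock()
		deleted := 0
		for _, k := range keys {
			if del(k) == nil {
				deleted++
			}
		}
		done(deleted)
	}()
	return true
}

func main() {
	b := &batcher{}
	result := make(chan int, 1)
	b.enqueue([]string{"a", "b", "c"}, func(string) error { return nil },
		func(n int) { result <- n })
	fmt.Println(<-result) // 3
}
```

`sync.Mutex.TryLock` (Go 1.18+) is what makes the skip-instead-of-wait behavior possible; a plain `Lock` here would reintroduce the blocking the pattern exists to avoid.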
+19 -19
View File
@@ -56,7 +56,7 @@ func TestPurge(t *testing.T) {
mDB := dbmock.NewMockStore(gomock.NewController(t))
mDB.EXPECT().GetChatRetentionDays(gomock.Any()).Return(int32(0), nil).AnyTimes()
mDB.EXPECT().InTx(gomock.Any(), database.DefaultTXOptions().WithID("db_purge")).Return(nil).Times(2)
purger := dbpurge.New(context.Background(), testutil.Logger(t), mDB, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
purger := dbpurge.New(context.Background(), testutil.Logger(t), mDB, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
<-done // wait for doTick() to run.
require.NoError(t, purger.Close())
}
@@ -90,7 +90,7 @@ func TestMetrics(t *testing.T) {
Retention: codersdk.RetentionConfig{
APIKeys: serpent.Duration(7 * 24 * time.Hour), // 7 days retention
},
}, clk, reg, nil)
}, clk, reg)
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -158,7 +158,7 @@ func TestMetrics(t *testing.T) {
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, mDB, &codersdk.DeploymentValues{}, clk, reg, nil)
closer := dbpurge.New(ctx, logger, mDB, &codersdk.DeploymentValues{}, clk, reg)
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -248,7 +248,7 @@ func TestDeleteOldWorkspaceAgentStats(t *testing.T) {
})
// when
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
// then
@@ -273,7 +273,7 @@ func TestDeleteOldWorkspaceAgentStats(t *testing.T) {
// Start a new purger to immediately trigger delete after rollup.
_ = closer.Close()
closer = dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer = dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
// then
@@ -368,7 +368,7 @@ func TestDeleteOldWorkspaceAgentLogs(t *testing.T) {
Retention: codersdk.RetentionConfig{
WorkspaceAgentLogs: serpent.Duration(7 * 24 * time.Hour),
},
}, clk, prometheus.NewRegistry(), nil)
}, clk, prometheus.NewRegistry())
defer closer.Close()
<-done // doTick() has now run.
@@ -583,7 +583,7 @@ func TestDeleteOldWorkspaceAgentLogsRetention(t *testing.T) {
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: tc.retentionConfig,
}, clk, prometheus.NewRegistry(), nil)
}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -674,7 +674,7 @@ func TestDeleteOldProvisionerDaemons(t *testing.T) {
require.NoError(t, err)
// when
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
// then
@@ -778,7 +778,7 @@ func TestDeleteOldAuditLogConnectionEvents(t *testing.T) {
// Run the purge
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
// Wait for tick
testutil.TryReceive(ctx, t, done)
@@ -941,7 +941,7 @@ func TestDeleteOldTelemetryHeartbeats(t *testing.T) {
require.NoError(t, err)
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
<-done // doTick() has now run.
@@ -1060,7 +1060,7 @@ func TestDeleteOldConnectionLogs(t *testing.T) {
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: tc.retentionConfig,
}, clk, prometheus.NewRegistry(), nil)
}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -1316,7 +1316,7 @@ func TestDeleteOldAIBridgeRecords(t *testing.T) {
Retention: serpent.Duration(tc.retention),
},
},
}, clk, prometheus.NewRegistry(), nil)
}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -1403,7 +1403,7 @@ func TestDeleteOldAuditLogs(t *testing.T) {
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: tc.retentionConfig,
}, clk, prometheus.NewRegistry(), nil)
}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -1493,7 +1493,7 @@ func TestDeleteOldAuditLogs(t *testing.T) {
Retention: codersdk.RetentionConfig{
AuditLogs: serpent.Duration(retentionPeriod),
},
}, clk, prometheus.NewRegistry(), nil)
}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -1613,7 +1613,7 @@ func TestDeleteExpiredAPIKeys(t *testing.T) {
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: tc.retentionConfig,
}, clk, prometheus.NewRegistry(), nil)
}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -1740,7 +1740,7 @@ func TestDeleteOldChatFiles(t *testing.T) {
oldFileID := createChatFile(ctx, t, db, rawDB, deps.user.ID, deps.org.ID, now.Add(-31*24*time.Hour))
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -1797,7 +1797,7 @@ func TestDeleteOldChatFiles(t *testing.T) {
activeChat := createChat(ctx, t, db, rawDB, deps.user.ID, deps.modelConfig.ID, false, now)
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -1854,7 +1854,7 @@ func TestDeleteOldChatFiles(t *testing.T) {
fileBoundary := createChatFile(ctx, t, db, rawDB, deps.user.ID, deps.org.ID, now.Add(-30*24*time.Hour).Add(time.Hour))
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
@@ -1934,7 +1934,7 @@ func TestDeleteOldChatFiles(t *testing.T) {
require.NoError(t, err)
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry(), nil)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk, prometheus.NewRegistry())
defer closer.Close()
testutil.TryReceive(ctx, t, done)
+1 -4
View File
@@ -1293,8 +1293,7 @@ CREATE TABLE chat_files (
created_at timestamp with time zone DEFAULT now() NOT NULL,
name text DEFAULT ''::text NOT NULL,
mimetype text NOT NULL,
data bytea,
object_store_key text
data bytea NOT NULL
);
CREATE TABLE chat_messages (
@@ -3792,8 +3791,6 @@ CREATE INDEX idx_chats_last_model_config_id ON chats USING btree (last_model_con
CREATE INDEX idx_chats_owner ON chats USING btree (owner_id);
CREATE INDEX idx_chats_owner_updated_id ON chats USING btree (owner_id, updated_at DESC, id DESC);
CREATE INDEX idx_chats_parent_chat_id ON chats USING btree (parent_chat_id);
CREATE INDEX idx_chats_pending ON chats USING btree (status) WHERE (status = 'pending'::chat_status);
@@ -1,7 +0,0 @@
-- Backfilling NULL data values before restoring NOT NULL would require
-- reading from the object store, which is not possible in a migration.
-- Instead, delete rows that only exist in the object store.
DELETE FROM chat_files WHERE data IS NULL;
ALTER TABLE chat_files ALTER COLUMN data SET NOT NULL;
ALTER TABLE chat_files DROP COLUMN object_store_key;
@@ -1,8 +0,0 @@
-- Add object_store_key to track files stored in external object storage.
-- When non-NULL, the file data lives in the object store under this key
-- and the data column may be NULL.
ALTER TABLE chat_files ADD COLUMN object_store_key TEXT;
-- Make data nullable so new writes can skip the BYTEA column when
-- storing in the object store.
ALTER TABLE chat_files ALTER COLUMN data DROP NOT NULL;
@@ -0,0 +1 @@
CREATE INDEX idx_chats_owner_updated_id ON chats (owner_id, updated_at DESC, id DESC);
@@ -0,0 +1,5 @@
-- The GetChats ORDER BY changed from (updated_at, id) DESC to a 4-column
-- expression sort (pinned-first flag, negated pin_order, updated_at, id).
-- This index was purpose-built for the old sort and no longer provides
-- read benefit. The simpler idx_chats_owner covers the owner_id filter.
DROP INDEX IF EXISTS idx_chats_owner_updated_id;
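The 4-column sort this comment describes (pinned chats first, ordered by ascending `pin_order`, then `updated_at` and `id` descending) can be sketched as a comparator. The negation trick keeps every tuple column DESC, which is what lets a keyset cursor use a single tuple comparison. A hypothetical Go mirror of the ordering, not the actual SQL:

```go
package main

import (
	"fmt"
	"sort"
)

// chat carries only the columns that appear in the sort key.
type chat struct {
	ID        int
	Pinned    bool
	PinOrder  int
	UpdatedAt int64
}

// sortKey builds the DESC tuple: pinned-first flag, negated pin_order,
// updated_at, id. Negating pin_order yields ascending pin order among
// pinned chats while every column still sorts descending.
func sortKey(c chat) [4]int64 {
	var pinned, negOrder int64
	if c.Pinned {
		pinned = 1
		negOrder = -int64(c.PinOrder)
	}
	return [4]int64{pinned, negOrder, c.UpdatedAt, int64(c.ID)}
}

// sortChats orders chats the way the updated GetChats query does:
// compare tuples column by column, descending.
func sortChats(cs []chat) {
	sort.SliceStable(cs, func(i, j int) bool {
		a, b := sortKey(cs[i]), sortKey(cs[j])
		for k := range a {
			if a[k] != b[k] {
				return a[k] > b[k]
			}
		}
		return false
	})
}

func main() {
	cs := []chat{
		{ID: 1, UpdatedAt: 100},                          // unpinned, freshest
		{ID: 2, Pinned: true, PinOrder: 2, UpdatedAt: 10},
		{ID: 3, Pinned: true, PinOrder: 1, UpdatedAt: 5}, // oldest, but pin_order 1
	}
	sortChats(cs)
	for _, c := range cs {
		fmt.Println(c.ID)
	}
	// Pinned chats (3, then 2) sort ahead of the newer unpinned chat.
}
```

Because all four columns share one direction, a cursor can resume with `(key tuple) < (last seen tuple)` rather than a per-column mix of `<` and `>` comparisons.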
+7 -8
@@ -4275,14 +4275,13 @@ type ChatDiffStatus struct {
}
type ChatFile struct {
ID uuid.UUID `db:"id" json:"id"`
OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
OrganizationID uuid.UUID `db:"organization_id" json:"organization_id"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
Name string `db:"name" json:"name"`
Mimetype string `db:"mimetype" json:"mimetype"`
Data []byte `db:"data" json:"data"`
ObjectStoreKey sql.NullString `db:"object_store_key" json:"object_store_key"`
ID uuid.UUID `db:"id" json:"id"`
OwnerID uuid.UUID `db:"owner_id" json:"owner_id"`
OrganizationID uuid.UUID `db:"organization_id" json:"organization_id"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
Name string `db:"name" json:"name"`
Mimetype string `db:"mimetype" json:"mimetype"`
Data []byte `db:"data" json:"data"`
}
type ChatFileLink struct {
+6 -6
@@ -138,9 +138,7 @@ type sqlcQuerier interface {
// 1. Orphaned files not linked to any chat.
// 2. Files whose every referencing chat has been archived for longer
// than the retention period.
// Returns the deleted rows so callers can clean up associated object
// store entries.
DeleteOldChatFiles(ctx context.Context, arg DeleteOldChatFilesParams) ([]DeleteOldChatFilesRow, error)
DeleteOldChatFiles(ctx context.Context, arg DeleteOldChatFilesParams) (int64, error)
// Deletes chats that have been archived for longer than the given
// threshold. Active (non-archived) chats are never deleted.
// Related chat_messages, chat_diff_statuses, and
@@ -420,11 +418,12 @@ type sqlcQuerier interface {
// per PR for state/additions/deletions/model (model comes from the
// most recent chat).
GetPRInsightsPerModel(ctx context.Context, arg GetPRInsightsPerModelParams) ([]GetPRInsightsPerModelRow, error)
// Returns individual PR rows with cost for the recent PRs table.
// Returns all individual PR rows with cost for the selected time range.
// Uses two CTEs: pr_costs sums cost for the PR-linked chat and its
// direct children (that lack their own PR), and deduped picks one row
// per PR for metadata.
GetPRInsightsRecentPRs(ctx context.Context, arg GetPRInsightsRecentPRsParams) ([]GetPRInsightsRecentPRsRow, error)
// per PR for metadata. A safety-cap LIMIT guards against unexpectedly
// large result sets from direct API callers.
GetPRInsightsPullRequests(ctx context.Context, arg GetPRInsightsPullRequestsParams) ([]GetPRInsightsPullRequestsRow, error)
// PR Insights queries for the /agents analytics dashboard.
// These aggregate data from chat_diff_statuses (PR metadata) joined
// with chats and chat_messages (cost) to power the PR Insights view.
@@ -1013,6 +1012,7 @@ type sqlcQuerier interface {
UpdateWorkspace(ctx context.Context, arg UpdateWorkspaceParams) (WorkspaceTable, error)
UpdateWorkspaceACLByID(ctx context.Context, arg UpdateWorkspaceACLByIDParams) error
UpdateWorkspaceAgentConnectionByID(ctx context.Context, arg UpdateWorkspaceAgentConnectionByIDParams) error
UpdateWorkspaceAgentDirectoryByID(ctx context.Context, arg UpdateWorkspaceAgentDirectoryByIDParams) error
UpdateWorkspaceAgentDisplayAppsByID(ctx context.Context, arg UpdateWorkspaceAgentDisplayAppsByIDParams) error
UpdateWorkspaceAgentLifecycleStateByID(ctx context.Context, arg UpdateWorkspaceAgentLifecycleStateByIDParams) error
UpdateWorkspaceAgentLogOverflowByID(ctx context.Context, arg UpdateWorkspaceAgentLogOverflowByIDParams) error
+34 -20
@@ -10408,11 +10408,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(1), summary.TotalPrsCreated)
assert.Equal(t, int64(8_000_000), summary.TotalCostMicros)
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 1)
@@ -10442,11 +10441,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(1), summary.TotalPrsMerged)
// RecentPRs ordered by created_at DESC: chatB is newer.
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 2)
@@ -10491,11 +10489,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(1), summary.TotalPrsCreated)
assert.Equal(t, int64(1), summary.TotalPrsMerged)
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 1)
@@ -10533,11 +10530,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(9_000_000), summary.TotalCostMicros)
// RecentPRs should return 1 row with the full tree cost.
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 1)
@@ -10575,11 +10571,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(2), summary.TotalPrsCreated)
assert.Equal(t, int64(8_000_000), summary.TotalCostMicros)
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 2)
@@ -10621,11 +10616,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(2), summary.TotalPrsCreated)
assert.Equal(t, int64(17_000_000), summary.TotalCostMicros)
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 2)
@@ -10658,11 +10652,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(2), summary.TotalPrsCreated)
assert.Equal(t, int64(10_000_000), summary.TotalCostMicros)
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 2)
@@ -10695,11 +10688,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(1), summary.TotalPrsCreated)
assert.Equal(t, int64(15_000_000), summary.TotalCostMicros)
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 1)
@@ -10724,11 +10716,10 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(1), summary.TotalPrsCreated)
assert.Equal(t, int64(0), summary.TotalCostMicros)
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 1)
@@ -10767,11 +10758,10 @@ func TestGetPRInsights(t *testing.T) {
require.Len(t, byModel, 1)
assert.Equal(t, modelName, byModel[0].DisplayName)
recent, err := store.GetPRInsightsRecentPRs(context.Background(), database.GetPRInsightsRecentPRsParams{
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
LimitVal: 20,
})
require.NoError(t, err)
require.Len(t, recent, 1)
@@ -10803,6 +10793,30 @@ func TestGetPRInsights(t *testing.T) {
assert.Equal(t, int64(8_000_000), summary.TotalCostMicros)
assert.Equal(t, int64(5_000_000), summary.MergedCostMicros)
})
t.Run("AllPRsReturnedWithSafetyCap", func(t *testing.T) {
t.Parallel()
store, userID, mcID := setupChatInfra(t)
// Create 25 distinct PRs — more than the old LIMIT 20 — and
// verify all are returned.
const prCount = 25
for i := range prCount {
chat := createChat(t, store, userID, mcID, fmt.Sprintf("chat-%d", i))
insertCostMessage(t, store, chat.ID, userID, mcID, 1_000_000)
linkPR(t, store, chat.ID,
fmt.Sprintf("https://github.com/org/repo/pull/%d", 100+i),
"merged", fmt.Sprintf("fix: pr-%d", i), 10, 2, 1)
}
recent, err := store.GetPRInsightsPullRequests(context.Background(), database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: noOwner,
})
require.NoError(t, err)
assert.Len(t, recent, prCount, "all PRs within the date range should be returned")
})
}
func TestChatPinOrderQueries(t *testing.T) {
+81 -86
@@ -2900,7 +2900,7 @@ func (q *sqlQuerier) UpsertBoundaryUsageStats(ctx context.Context, arg UpsertBou
return new_period, err
}
const deleteOldChatFiles = `-- name: DeleteOldChatFiles :many
const deleteOldChatFiles = `-- name: DeleteOldChatFiles :execrows
WITH kept_file_ids AS (
-- NOTE: This uses updated_at as a proxy for archive time
-- because there is no archived_at column. Correctness
@@ -2924,7 +2924,6 @@ deletable AS (
DELETE FROM chat_files
USING deletable
WHERE chat_files.id = deletable.id
RETURNING chat_files.id, chat_files.object_store_key
`
type DeleteOldChatFilesParams struct {
@@ -2932,11 +2931,6 @@ type DeleteOldChatFilesParams struct {
LimitCount int32 `db:"limit_count" json:"limit_count"`
}
type DeleteOldChatFilesRow struct {
ID uuid.UUID `db:"id" json:"id"`
ObjectStoreKey sql.NullString `db:"object_store_key" json:"object_store_key"`
}
// TODO(cian): Add indexes on chats(archived, updated_at) and
// chat_files(created_at) for purge query performance.
// See: https://github.com/coder/internal/issues/1438
@@ -2946,34 +2940,16 @@ type DeleteOldChatFilesRow struct {
// 1. Orphaned files not linked to any chat.
// 2. Files whose every referencing chat has been archived for longer
// than the retention period.
//
// Returns the deleted rows so callers can clean up associated object
// store entries.
func (q *sqlQuerier) DeleteOldChatFiles(ctx context.Context, arg DeleteOldChatFilesParams) ([]DeleteOldChatFilesRow, error) {
rows, err := q.db.QueryContext(ctx, deleteOldChatFiles, arg.BeforeTime, arg.LimitCount)
func (q *sqlQuerier) DeleteOldChatFiles(ctx context.Context, arg DeleteOldChatFilesParams) (int64, error) {
result, err := q.db.ExecContext(ctx, deleteOldChatFiles, arg.BeforeTime, arg.LimitCount)
if err != nil {
return nil, err
return 0, err
}
defer rows.Close()
var items []DeleteOldChatFilesRow
for rows.Next() {
var i DeleteOldChatFilesRow
if err := rows.Scan(&i.ID, &i.ObjectStoreKey); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Close(); err != nil {
return nil, err
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
return result.RowsAffected()
}
const getChatFileByID = `-- name: GetChatFileByID :one
SELECT id, owner_id, organization_id, created_at, name, mimetype, data, object_store_key FROM chat_files WHERE id = $1::uuid
SELECT id, owner_id, organization_id, created_at, name, mimetype, data FROM chat_files WHERE id = $1::uuid
`
func (q *sqlQuerier) GetChatFileByID(ctx context.Context, id uuid.UUID) (ChatFile, error) {
@@ -2987,7 +2963,6 @@ func (q *sqlQuerier) GetChatFileByID(ctx context.Context, id uuid.UUID) (ChatFil
&i.Name,
&i.Mimetype,
&i.Data,
&i.ObjectStoreKey,
)
return i, err
}
@@ -3043,7 +3018,7 @@ func (q *sqlQuerier) GetChatFileMetadataByChatID(ctx context.Context, chatID uui
}
const getChatFilesByIDs = `-- name: GetChatFilesByIDs :many
SELECT id, owner_id, organization_id, created_at, name, mimetype, data, object_store_key FROM chat_files WHERE id = ANY($1::uuid[])
SELECT id, owner_id, organization_id, created_at, name, mimetype, data FROM chat_files WHERE id = ANY($1::uuid[])
`
func (q *sqlQuerier) GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]ChatFile, error) {
@@ -3063,7 +3038,6 @@ func (q *sqlQuerier) GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]
&i.Name,
&i.Mimetype,
&i.Data,
&i.ObjectStoreKey,
); err != nil {
return nil, err
}
@@ -3079,8 +3053,8 @@ func (q *sqlQuerier) GetChatFilesByIDs(ctx context.Context, ids []uuid.UUID) ([]
}
const insertChatFile = `-- name: InsertChatFile :one
INSERT INTO chat_files (owner_id, organization_id, name, mimetype, data, object_store_key)
VALUES ($1::uuid, $2::uuid, $3::text, $4::text, $5::bytea, $6::text)
INSERT INTO chat_files (owner_id, organization_id, name, mimetype, data)
VALUES ($1::uuid, $2::uuid, $3::text, $4::text, $5::bytea)
RETURNING id, owner_id, organization_id, created_at, name, mimetype
`
@@ -3090,7 +3064,6 @@ type InsertChatFileParams struct {
Name string `db:"name" json:"name"`
Mimetype string `db:"mimetype" json:"mimetype"`
Data []byte `db:"data" json:"data"`
ObjectStoreKey string `db:"object_store_key" json:"object_store_key"`
}
type InsertChatFileRow struct {
@@ -3109,7 +3082,6 @@ func (q *sqlQuerier) InsertChatFile(ctx context.Context, arg InsertChatFileParam
arg.Name,
arg.Mimetype,
arg.Data,
arg.ObjectStoreKey,
)
var i InsertChatFileRow
err := row.Scan(
@@ -3246,7 +3218,7 @@ func (q *sqlQuerier) GetPRInsightsPerModel(ctx context.Context, arg GetPRInsight
return items, nil
}
const getPRInsightsRecentPRs = `-- name: GetPRInsightsRecentPRs :many
const getPRInsightsPullRequests = `-- name: GetPRInsightsPullRequests :many
WITH pr_costs AS (
SELECT
prc.pr_key,
@@ -3266,9 +3238,9 @@ WITH pr_costs AS (
AND cds2.pull_request_state IS NOT NULL
))
WHERE cds.pull_request_state IS NOT NULL
AND c.created_at >= $2::timestamptz
AND c.created_at < $3::timestamptz
AND ($4::uuid IS NULL OR c.owner_id = $4::uuid)
AND c.created_at >= $1::timestamptz
AND c.created_at < $2::timestamptz
AND ($3::uuid IS NULL OR c.owner_id = $3::uuid)
) prc
LEFT JOIN LATERAL (
SELECT COALESCE(SUM(cm.total_cost_micros), 0) AS cost_micros
@@ -3303,9 +3275,9 @@ deduped AS (
JOIN chats c ON c.id = cds.chat_id
LEFT JOIN chat_model_configs cmc ON cmc.id = c.last_model_config_id
WHERE cds.pull_request_state IS NOT NULL
AND c.created_at >= $2::timestamptz
AND c.created_at < $3::timestamptz
AND ($4::uuid IS NULL OR c.owner_id = $4::uuid)
AND c.created_at >= $1::timestamptz
AND c.created_at < $2::timestamptz
AND ($3::uuid IS NULL OR c.owner_id = $3::uuid)
ORDER BY COALESCE(NULLIF(cds.url, ''), c.id::text), c.created_at DESC, c.id DESC
)
SELECT chat_id, pr_title, pr_url, pr_number, state, draft, additions, deletions, changed_files, commits, approved, changes_requested, reviewer_count, author_login, author_avatar_url, base_branch, model_display_name, cost_micros, created_at FROM (
@@ -3333,17 +3305,16 @@ SELECT chat_id, pr_title, pr_url, pr_number, state, draft, additions, deletions,
JOIN pr_costs pc ON pc.pr_key = d.pr_key
) sub
ORDER BY sub.created_at DESC
LIMIT $1::int
LIMIT 500
`
type GetPRInsightsRecentPRsParams struct {
LimitVal int32 `db:"limit_val" json:"limit_val"`
type GetPRInsightsPullRequestsParams struct {
StartDate time.Time `db:"start_date" json:"start_date"`
EndDate time.Time `db:"end_date" json:"end_date"`
OwnerID uuid.NullUUID `db:"owner_id" json:"owner_id"`
}
type GetPRInsightsRecentPRsRow struct {
type GetPRInsightsPullRequestsRow struct {
ChatID uuid.UUID `db:"chat_id" json:"chat_id"`
PrTitle string `db:"pr_title" json:"pr_title"`
PrUrl sql.NullString `db:"pr_url" json:"pr_url"`
@@ -3365,24 +3336,20 @@ type GetPRInsightsRecentPRsRow struct {
CreatedAt time.Time `db:"created_at" json:"created_at"`
}
// Returns individual PR rows with cost for the recent PRs table.
// Returns all individual PR rows with cost for the selected time range.
// Uses two CTEs: pr_costs sums cost for the PR-linked chat and its
// direct children (that lack their own PR), and deduped picks one row
// per PR for metadata.
func (q *sqlQuerier) GetPRInsightsRecentPRs(ctx context.Context, arg GetPRInsightsRecentPRsParams) ([]GetPRInsightsRecentPRsRow, error) {
rows, err := q.db.QueryContext(ctx, getPRInsightsRecentPRs,
arg.LimitVal,
arg.StartDate,
arg.EndDate,
arg.OwnerID,
)
// per PR for metadata. A safety-cap LIMIT guards against unexpectedly
// large result sets from direct API callers.
func (q *sqlQuerier) GetPRInsightsPullRequests(ctx context.Context, arg GetPRInsightsPullRequestsParams) ([]GetPRInsightsPullRequestsRow, error) {
rows, err := q.db.QueryContext(ctx, getPRInsightsPullRequests, arg.StartDate, arg.EndDate, arg.OwnerID)
if err != nil {
return nil, err
}
defer rows.Close()
var items []GetPRInsightsRecentPRsRow
var items []GetPRInsightsPullRequestsRow
for rows.Next() {
var i GetPRInsightsRecentPRsRow
var i GetPRInsightsPullRequestsRow
if err := rows.Scan(
&i.ChatID,
&i.PrTitle,
@@ -5851,20 +5818,18 @@ WHERE
ELSE chats.archived = $2 :: boolean
END
AND CASE
-- This allows using the last element on a page as effectively a cursor.
-- This is an important option for scripts that need to paginate without
-- duplicating or missing data.
-- Cursor pagination: the last element on a page acts as the cursor.
-- The 4-tuple matches the ORDER BY below. All columns sort DESC
-- (pin_order is negated so lower values sort first in DESC order),
-- which lets us use a single tuple < comparison.
WHEN $3 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN (
-- The pagination cursor is the last ID of the previous page.
-- The query is ordered by the updated_at field, so select all
-- rows before the cursor.
(updated_at, id) < (
(CASE WHEN pin_order > 0 THEN 1 ELSE 0 END, -pin_order, updated_at, id) < (
SELECT
updated_at, id
CASE WHEN c2.pin_order > 0 THEN 1 ELSE 0 END, -c2.pin_order, c2.updated_at, c2.id
FROM
chats
chats c2
WHERE
id = $3
c2.id = $3
)
)
ELSE true
@@ -5876,9 +5841,15 @@ WHERE
-- Authorize Filter clause will be injected below in GetAuthorizedChats
-- @authorize_filter
ORDER BY
-- Deterministic and consistent ordering of all rows, even if they share
-- a timestamp. This is to ensure consistent pagination.
(updated_at, id) DESC OFFSET $5
-- Pinned chats (pin_order > 0) sort before unpinned ones. Within
-- pinned chats, lower pin_order values come first. The negation
-- trick (-pin_order) keeps all sort columns DESC so the cursor
-- tuple < comparison works with uniform direction.
CASE WHEN pin_order > 0 THEN 1 ELSE 0 END DESC,
-pin_order DESC,
updated_at DESC,
id DESC
OFFSET $5
LIMIT
-- The chat list is unbounded and expected to grow large.
-- Default to 50 to prevent accidental excessively large queries.
@@ -17547,7 +17518,8 @@ SELECT
w.id AS workspace_id,
COALESCE(w.name, '') AS workspace_name,
-- Include the name of the provisioner_daemon associated to the job
COALESCE(pd.name, '') AS worker_name
COALESCE(pd.name, '') AS worker_name,
wb.transition as workspace_build_transition
FROM
provisioner_jobs pj
LEFT JOIN
@@ -17592,7 +17564,8 @@ GROUP BY
t.icon,
w.id,
w.name,
pd.name
pd.name,
wb.transition
ORDER BY
pj.created_at DESC
LIMIT
@@ -17609,18 +17582,19 @@ type GetProvisionerJobsByOrganizationAndStatusWithQueuePositionAndProvisionerPar
}
type GetProvisionerJobsByOrganizationAndStatusWithQueuePositionAndProvisionerRow struct {
ProvisionerJob ProvisionerJob `db:"provisioner_job" json:"provisioner_job"`
QueuePosition int64 `db:"queue_position" json:"queue_position"`
QueueSize int64 `db:"queue_size" json:"queue_size"`
AvailableWorkers []uuid.UUID `db:"available_workers" json:"available_workers"`
TemplateVersionName string `db:"template_version_name" json:"template_version_name"`
TemplateID uuid.NullUUID `db:"template_id" json:"template_id"`
TemplateName string `db:"template_name" json:"template_name"`
TemplateDisplayName string `db:"template_display_name" json:"template_display_name"`
TemplateIcon string `db:"template_icon" json:"template_icon"`
WorkspaceID uuid.NullUUID `db:"workspace_id" json:"workspace_id"`
WorkspaceName string `db:"workspace_name" json:"workspace_name"`
WorkerName string `db:"worker_name" json:"worker_name"`
ProvisionerJob ProvisionerJob `db:"provisioner_job" json:"provisioner_job"`
QueuePosition int64 `db:"queue_position" json:"queue_position"`
QueueSize int64 `db:"queue_size" json:"queue_size"`
AvailableWorkers []uuid.UUID `db:"available_workers" json:"available_workers"`
TemplateVersionName string `db:"template_version_name" json:"template_version_name"`
TemplateID uuid.NullUUID `db:"template_id" json:"template_id"`
TemplateName string `db:"template_name" json:"template_name"`
TemplateDisplayName string `db:"template_display_name" json:"template_display_name"`
TemplateIcon string `db:"template_icon" json:"template_icon"`
WorkspaceID uuid.NullUUID `db:"workspace_id" json:"workspace_id"`
WorkspaceName string `db:"workspace_name" json:"workspace_name"`
WorkerName string `db:"worker_name" json:"worker_name"`
WorkspaceBuildTransition NullWorkspaceTransition `db:"workspace_build_transition" json:"workspace_build_transition"`
}
func (q *sqlQuerier) GetProvisionerJobsByOrganizationAndStatusWithQueuePositionAndProvisioner(ctx context.Context, arg GetProvisionerJobsByOrganizationAndStatusWithQueuePositionAndProvisionerParams) ([]GetProvisionerJobsByOrganizationAndStatusWithQueuePositionAndProvisionerRow, error) {
@@ -17672,6 +17646,7 @@ func (q *sqlQuerier) GetProvisionerJobsByOrganizationAndStatusWithQueuePositionA
&i.WorkspaceID,
&i.WorkspaceName,
&i.WorkerName,
&i.WorkspaceBuildTransition,
); err != nil {
return nil, err
}
@@ -26844,6 +26819,26 @@ func (q *sqlQuerier) UpdateWorkspaceAgentConnectionByID(ctx context.Context, arg
return err
}
const updateWorkspaceAgentDirectoryByID = `-- name: UpdateWorkspaceAgentDirectoryByID :exec
UPDATE
workspace_agents
SET
directory = $2, updated_at = $3
WHERE
id = $1
`
type UpdateWorkspaceAgentDirectoryByIDParams struct {
ID uuid.UUID `db:"id" json:"id"`
Directory string `db:"directory" json:"directory"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
}
func (q *sqlQuerier) UpdateWorkspaceAgentDirectoryByID(ctx context.Context, arg UpdateWorkspaceAgentDirectoryByIDParams) error {
_, err := q.db.ExecContext(ctx, updateWorkspaceAgentDirectoryByID, arg.ID, arg.Directory, arg.UpdatedAt)
return err
}
const updateWorkspaceAgentDisplayAppsByID = `-- name: UpdateWorkspaceAgentDisplayAppsByID :exec
UPDATE
workspace_agents
+4 -7
@@ -1,6 +1,6 @@
-- name: InsertChatFile :one
INSERT INTO chat_files (owner_id, organization_id, name, mimetype, data, object_store_key)
VALUES (@owner_id::uuid, @organization_id::uuid, @name::text, @mimetype::text, @data::bytea, @object_store_key::text)
INSERT INTO chat_files (owner_id, organization_id, name, mimetype, data)
VALUES (@owner_id::uuid, @organization_id::uuid, @name::text, @mimetype::text, @data::bytea)
RETURNING id, owner_id, organization_id, created_at, name, mimetype;
-- name: GetChatFileByID :one
@@ -22,15 +22,13 @@ ORDER BY cf.created_at ASC;
-- TODO(cian): Add indexes on chats(archived, updated_at) and
-- chat_files(created_at) for purge query performance.
-- See: https://github.com/coder/internal/issues/1438
-- name: DeleteOldChatFiles :many
-- name: DeleteOldChatFiles :execrows
-- Deletes chat files that are older than the given threshold and are
-- not referenced by any chat that is still active or was archived
-- within the same threshold window. This covers two cases:
-- 1. Orphaned files not linked to any chat.
-- 2. Files whose every referencing chat has been archived for longer
-- than the retention period.
-- Returns the deleted rows so callers can clean up associated object
-- store entries.
WITH kept_file_ids AS (
-- NOTE: This uses updated_at as a proxy for archive time
-- because there is no archived_at column. Correctness
@@ -53,5 +51,4 @@ deletable AS (
)
DELETE FROM chat_files
USING deletable
WHERE chat_files.id = deletable.id
RETURNING chat_files.id, chat_files.object_store_key;
WHERE chat_files.id = deletable.id;
+5 -4
@@ -173,11 +173,12 @@ JOIN pr_costs pc ON pc.pr_key = d.pr_key
GROUP BY d.model_config_id, d.display_name, d.model, d.provider
ORDER BY total_prs DESC;
-- name: GetPRInsightsRecentPRs :many
-- Returns individual PR rows with cost for the recent PRs table.
-- name: GetPRInsightsPullRequests :many
-- Returns all individual PR rows with cost for the selected time range.
-- Uses two CTEs: pr_costs sums cost for the PR-linked chat and its
-- direct children (that lack their own PR), and deduped picks one row
-- per PR for metadata.
-- per PR for metadata. A safety-cap LIMIT guards against unexpectedly
-- large result sets from direct API callers.
WITH pr_costs AS (
SELECT
prc.pr_key,
@@ -264,4 +265,4 @@ SELECT * FROM (
JOIN pr_costs pc ON pc.pr_key = d.pr_key
) sub
ORDER BY sub.created_at DESC
LIMIT @limit_val::int;
LIMIT 500;
+17 -13
@@ -353,20 +353,18 @@ WHERE
ELSE chats.archived = sqlc.narg('archived') :: boolean
END
AND CASE
-- This allows using the last element on a page as effectively a cursor.
-- This is an important option for scripts that need to paginate without
-- duplicating or missing data.
-- Cursor pagination: the last element on a page acts as the cursor.
-- The 4-tuple matches the ORDER BY below. All columns sort DESC
-- (pin_order is negated so lower values sort first in DESC order),
-- which lets us use a single tuple < comparison.
WHEN @after_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN (
-- The pagination cursor is the last ID of the previous page.
-- The query is ordered by the updated_at field, so select all
-- rows before the cursor.
(updated_at, id) < (
(CASE WHEN pin_order > 0 THEN 1 ELSE 0 END, -pin_order, updated_at, id) < (
SELECT
updated_at, id
CASE WHEN c2.pin_order > 0 THEN 1 ELSE 0 END, -c2.pin_order, c2.updated_at, c2.id
FROM
chats
chats c2
WHERE
id = @after_id
c2.id = @after_id
)
)
ELSE true
@@ -378,9 +376,15 @@ WHERE
-- Authorize Filter clause will be injected below in GetAuthorizedChats
-- @authorize_filter
ORDER BY
-- Deterministic and consistent ordering of all rows, even if they share
-- a timestamp. This is to ensure consistent pagination.
(updated_at, id) DESC OFFSET @offset_opt
-- Pinned chats (pin_order > 0) sort before unpinned ones. Within
-- pinned chats, lower pin_order values come first. The negation
-- trick (-pin_order) keeps all sort columns DESC so the cursor
-- tuple < comparison works with uniform direction.
CASE WHEN pin_order > 0 THEN 1 ELSE 0 END DESC,
-pin_order DESC,
updated_at DESC,
id DESC
OFFSET @offset_opt
LIMIT
-- The chat list is unbounded and expected to grow large.
-- Default to 50 to prevent accidental excessively large queries.
+4 -2
@@ -195,7 +195,8 @@ SELECT
w.id AS workspace_id,
COALESCE(w.name, '') AS workspace_name,
-- Include the name of the provisioner_daemon associated to the job
COALESCE(pd.name, '') AS worker_name
COALESCE(pd.name, '') AS worker_name,
wb.transition as workspace_build_transition
FROM
provisioner_jobs pj
LEFT JOIN
@@ -240,7 +241,8 @@ GROUP BY
t.icon,
w.id,
w.name,
pd.name
pd.name,
wb.transition
ORDER BY
pj.created_at DESC
LIMIT
@@ -190,6 +190,14 @@ SET
WHERE
id = $1;
-- name: UpdateWorkspaceAgentDirectoryByID :exec
UPDATE
workspace_agents
SET
directory = $2, updated_at = $3
WHERE
id = $1;
-- name: GetWorkspaceAgentLogsAfter :many
SELECT
*
+10 -43
@@ -1810,9 +1810,9 @@ func (api *API) patchChat(rw http.ResponseWriter, r *http.Request) {
// - pinOrder > 0 && already pinned: reorder (shift
// neighbors, clamp to [1, count]).
// - pinOrder > 0 && not pinned: append to end. The
// requested value is intentionally ignored because
// PinChatByID also bumps updated_at to keep the
// chat visible in the paginated sidebar.
// requested value is intentionally ignored; the
// SQL ORDER BY sorts pinned chats first so they
// appear on page 1 of the paginated sidebar.
var err error
errMsg := "Failed to pin chat."
switch {
@@ -2988,8 +2988,6 @@ const (
maxChatFileSize = 10 << 20
// maxChatFileName is the maximum length of an uploaded file name.
maxChatFileName = 255
// chatFilesNamespace is the object store namespace for chat files.
chatFilesNamespace = "chatfiles"
)
// allowedChatFileMIMETypes lists the content types accepted for chat
@@ -3786,21 +3784,12 @@ func (api *API) postChatFile(rw http.ResponseWriter, r *http.Request) {
}
}
key := uuid.New().String()
if err := api.ObjectStore.Write(ctx, chatFilesNamespace, key, data); err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to save chat file.",
Detail: err.Error(),
})
return
}
chatFile, err := api.Database.InsertChatFile(ctx, database.InsertChatFileParams{
OwnerID: apiKey.UserID,
OrganizationID: orgID,
Name: filename,
Mimetype: detected,
ObjectStoreKey: key,
Data: data,
})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
@@ -3847,27 +3836,6 @@ func (api *API) chatFileByID(rw http.ResponseWriter, r *http.Request) {
rw.Header().Set("Content-Disposition", "inline")
}
rw.Header().Set("Cache-Control", "private, max-age=31536000, immutable")
// Serve from object store, falling back to the database BYTEA
// column for files that predate the migration.
if chatFile.ObjectStoreKey.Valid && chatFile.ObjectStoreKey.String != "" {
rc, info, err := api.ObjectStore.Read(ctx, chatFilesNamespace, chatFile.ObjectStoreKey.String)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to read chat file from storage.",
Detail: err.Error(),
})
return
}
defer rc.Close()
rw.Header().Set("Content-Length", strconv.FormatInt(info.Size, 10))
rw.WriteHeader(http.StatusOK)
if _, err := io.Copy(rw, rc); err != nil {
api.Logger.Debug(ctx, "failed to stream chat file response", slog.Error(err))
}
return
}
rw.Header().Set("Content-Length", strconv.Itoa(len(chatFile.Data)))
rw.WriteHeader(http.StatusOK)
if _, err := rw.Write(chatFile.Data); err != nil {
@@ -5658,7 +5626,7 @@ func (api *API) prInsights(rw http.ResponseWriter, r *http.Request) {
previousSummary database.GetPRInsightsSummaryRow
timeSeries []database.GetPRInsightsTimeSeriesRow
byModel []database.GetPRInsightsPerModelRow
recentPRs []database.GetPRInsightsRecentPRsRow
recentPRs []database.GetPRInsightsPullRequestsRow
)
eg, egCtx := errgroup.WithContext(ctx)
@@ -5706,11 +5674,10 @@ func (api *API) prInsights(rw http.ResponseWriter, r *http.Request) {
eg.Go(func() error {
var err error
recentPRs, err = api.Database.GetPRInsightsRecentPRs(egCtx, database.GetPRInsightsRecentPRsParams{
recentPRs, err = api.Database.GetPRInsightsPullRequests(egCtx, database.GetPRInsightsPullRequestsParams{
StartDate: startDate,
EndDate: endDate,
OwnerID: ownerID,
LimitVal: 20,
})
return err
})
@@ -5820,10 +5787,10 @@ func (api *API) prInsights(rw http.ResponseWriter, r *http.Request) {
}
httpapi.Write(ctx, rw, http.StatusOK, codersdk.PRInsightsResponse{
Summary: summary,
TimeSeries: tsEntries,
ByModel: modelEntries,
RecentPRs: prEntries,
Summary: summary,
TimeSeries: tsEntries,
ByModel: modelEntries,
PullRequests: prEntries,
})
}
+180
@@ -876,6 +876,186 @@ func TestListChats(t *testing.T) {
require.NoError(t, err)
require.Len(t, allChats, totalChats)
})
// Test that a pinned chat with an old updated_at appears on page 1.
t.Run("PinnedOnFirstPage", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, _ := newChatClientWithDatabase(t)
_ = coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Create the chat that will later be pinned. It gets the
// earliest updated_at because it is inserted first.
pinnedChat, err := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{{
Type: codersdk.ChatInputPartTypeText,
Text: "pinned-chat",
}},
})
require.NoError(t, err)
// Fill page 1 with newer chats so the pinned chat would
// normally be pushed off the first page (default limit 50).
const fillerCount = 51
fillerChats := make([]codersdk.Chat, 0, fillerCount)
for i := range fillerCount {
c, createErr := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{{
Type: codersdk.ChatInputPartTypeText,
Text: fmt.Sprintf("filler-%d", i),
}},
})
require.NoError(t, createErr)
fillerChats = append(fillerChats, c)
}
// Wait for all chats to reach a terminal status so
// updated_at is stable before paginating. A single
// polling loop checks every chat per tick to avoid
// O(N) separate Eventually loops.
allCreated := append([]codersdk.Chat{pinnedChat}, fillerChats...)
pending := make(map[uuid.UUID]struct{}, len(allCreated))
for _, c := range allCreated {
pending[c.ID] = struct{}{}
}
testutil.Eventually(ctx, t, func(_ context.Context) bool {
all, listErr := client.ListChats(ctx, &codersdk.ListChatsOptions{
Pagination: codersdk.Pagination{Limit: fillerCount + 10},
})
if listErr != nil {
return false
}
for _, ch := range all {
if _, ok := pending[ch.ID]; ok && ch.Status != codersdk.ChatStatusPending && ch.Status != codersdk.ChatStatusRunning {
delete(pending, ch.ID)
}
}
return len(pending) == 0
}, testutil.IntervalFast)
// Pin the earliest chat.
err = client.UpdateChat(ctx, pinnedChat.ID, codersdk.UpdateChatRequest{
PinOrder: ptr.Ref(int32(1)),
})
require.NoError(t, err)
// Fetch page 1 with default limit (50).
page1, err := client.ListChats(ctx, &codersdk.ListChatsOptions{
Pagination: codersdk.Pagination{Limit: 50},
})
require.NoError(t, err)
// The pinned chat must appear on page 1.
page1IDs := make(map[uuid.UUID]struct{}, len(page1))
for _, c := range page1 {
page1IDs[c.ID] = struct{}{}
}
_, found := page1IDs[pinnedChat.ID]
require.True(t, found, "pinned chat should appear on page 1")
// The pinned chat should be the first item in the list.
require.Equal(t, pinnedChat.ID, page1[0].ID, "pinned chat should be first")
})
// Test cursor pagination with a mix of pinned and unpinned chats.
t.Run("CursorWithPins", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, _ := newChatClientWithDatabase(t)
_ = coderdtest.CreateFirstUser(t, client.Client)
_ = createChatModelConfig(t, client)
// Create 5 chats: 2 will be pinned, 3 unpinned.
const totalChats = 5
createdChats := make([]codersdk.Chat, 0, totalChats)
for i := range totalChats {
c, createErr := client.CreateChat(ctx, codersdk.CreateChatRequest{
Content: []codersdk.ChatInputPart{{
Type: codersdk.ChatInputPartTypeText,
Text: fmt.Sprintf("cursor-pin-chat-%d", i),
}},
})
require.NoError(t, createErr)
createdChats = append(createdChats, c)
}
// Wait for all chats to reach terminal status.
// Check each chat by ID rather than fetching the full list.
testutil.Eventually(ctx, t, func(_ context.Context) bool {
for _, c := range createdChats {
ch, err := client.GetChat(ctx, c.ID)
require.NoError(t, err, "GetChat should succeed for just-created chat %s", c.ID)
if ch.Status == codersdk.ChatStatusPending || ch.Status == codersdk.ChatStatusRunning {
return false
}
}
return true
}, testutil.IntervalFast)
// Pin the first two chats (oldest updated_at).
err := client.UpdateChat(ctx, createdChats[0].ID, codersdk.UpdateChatRequest{
PinOrder: ptr.Ref(int32(1)),
})
require.NoError(t, err)
err = client.UpdateChat(ctx, createdChats[1].ID, codersdk.UpdateChatRequest{
PinOrder: ptr.Ref(int32(2)),
})
require.NoError(t, err)
// Paginate with limit=2 using cursor (after_id).
const pageSize = 2
maxPages := totalChats/pageSize + 2
var allPaginated []codersdk.Chat
var afterID uuid.UUID
for range maxPages {
opts := &codersdk.ListChatsOptions{
Pagination: codersdk.Pagination{Limit: pageSize},
}
if afterID != uuid.Nil {
opts.Pagination.AfterID = afterID
}
page, listErr := client.ListChats(ctx, opts)
require.NoError(t, listErr)
if len(page) == 0 {
break
}
allPaginated = append(allPaginated, page...)
afterID = page[len(page)-1].ID
}
// All chats should appear exactly once.
seenIDs := make(map[uuid.UUID]struct{}, len(allPaginated))
for _, c := range allPaginated {
_, dup := seenIDs[c.ID]
require.False(t, dup, "chat %s appeared more than once", c.ID)
seenIDs[c.ID] = struct{}{}
}
require.Len(t, seenIDs, totalChats, "all chats should appear in paginated results")
// Pinned chats should come before unpinned ones, and
// within the pinned group, lower pin_order sorts first.
pinnedSeen := false
unpinnedSeen := false
for _, c := range allPaginated {
if c.PinOrder > 0 {
require.False(t, unpinnedSeen, "pinned chat %s appeared after unpinned chat", c.ID)
pinnedSeen = true
} else {
unpinnedSeen = true
}
}
require.True(t, pinnedSeen, "at least one pinned chat should exist")
// Verify within-pinned ordering: pin_order=1 before
// pin_order=2 (the -pin_order DESC column).
require.Equal(t, createdChats[0].ID, allPaginated[0].ID,
"pin_order=1 chat should be first")
require.Equal(t, createdChats[1].ID, allPaginated[1].ID,
"pin_order=2 chat should be second")
})
}
func TestListChatModels(t *testing.T) {
@@ -1,197 +0,0 @@
package objstore
import (
"context"
"os"
"path/filepath"
"github.com/aws/aws-sdk-go-v2/aws"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/s3"
"gocloud.dev/blob"
"gocloud.dev/blob/fileblob"
"gocloud.dev/blob/gcsblob"
"gocloud.dev/blob/s3blob"
"gocloud.dev/gcp"
"golang.org/x/oauth2/google"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/codersdk"
)
// Backend enumerates the supported storage backends.
type Backend string
const (
BackendLocal Backend = "local"
BackendS3 Backend = "s3"
BackendGCS Backend = "gcs"
)
// LocalConfig configures the local filesystem backend.
type LocalConfig struct {
// Dir is the root directory for stored objects. The directory
// is created if it does not exist.
Dir string
}
// S3Config configures an S3-compatible backend.
type S3Config struct {
Bucket string
Region string
// Prefix is an optional key prefix within the bucket.
Prefix string
// Endpoint is a custom S3-compatible endpoint (e.g. MinIO, R2).
// Leave empty for standard AWS S3.
Endpoint string
}
// GCSConfig configures a Google Cloud Storage backend.
type GCSConfig struct {
Bucket string
// Prefix is an optional key prefix within the bucket.
Prefix string
// CredentialsFile is an optional path to a service account key
// file. If empty, Application Default Credentials are used.
CredentialsFile string
}
// NewLocal creates a Store backed by the local filesystem.
func NewLocal(cfg LocalConfig) (Store, error) {
if cfg.Dir == "" {
return nil, xerrors.New("local object store directory is required")
}
if err := os.MkdirAll(cfg.Dir, 0o700); err != nil {
return nil, xerrors.Errorf("create object store directory %q: %w", cfg.Dir, err)
}
bucket, err := fileblob.OpenBucket(cfg.Dir, &fileblob.Options{
// Place temp files next to the target files instead of
// os.TempDir. This avoids EXDEV (cross-device link) errors
// when the storage directory is on a different filesystem.
NoTempDir: true,
// We handle metadata in the database, not in sidecar files.
Metadata: fileblob.MetadataDontWrite,
})
if err != nil {
return nil, xerrors.Errorf("open local bucket at %q: %w", cfg.Dir, err)
}
return newPrefixed(bucket, ""), nil
}
// NewS3 creates a Store backed by an S3-compatible service.
func NewS3(ctx context.Context, cfg S3Config) (Store, error) {
if cfg.Bucket == "" {
return nil, xerrors.New("S3 bucket name is required")
}
opts := []func(*awsconfig.LoadOptions) error{}
if cfg.Region != "" {
opts = append(opts, awsconfig.WithRegion(cfg.Region))
}
awsCfg, err := awsconfig.LoadDefaultConfig(ctx, opts...)
if err != nil {
return nil, xerrors.Errorf("load AWS config: %w", err)
}
s3Opts := []func(*s3.Options){}
if cfg.Endpoint != "" {
s3Opts = append(s3Opts, func(o *s3.Options) {
o.BaseEndpoint = aws.String(cfg.Endpoint)
o.UsePathStyle = true
})
}
client := s3.NewFromConfig(awsCfg, s3Opts...)
bucket, err := s3blob.OpenBucket(ctx, client, cfg.Bucket, nil)
if err != nil {
return nil, xerrors.Errorf("open S3 bucket %q: %w", cfg.Bucket, err)
}
return newPrefixed(bucket, cfg.Prefix), nil
}
// NewGCS creates a Store backed by Google Cloud Storage.
func NewGCS(ctx context.Context, cfg GCSConfig) (Store, error) {
if cfg.Bucket == "" {
return nil, xerrors.New("GCS bucket name is required")
}
var creds *google.Credentials
var err error
if cfg.CredentialsFile != "" {
jsonData, err := os.ReadFile(cfg.CredentialsFile)
if err != nil {
return nil, xerrors.Errorf("read GCS credentials file %q: %w", cfg.CredentialsFile, err)
}
//nolint:staticcheck // CredentialsFromJSON is the standard way to load service account keys.
creds, err = google.CredentialsFromJSON(ctx, jsonData, "https://www.googleapis.com/auth/cloud-platform")
if err != nil {
return nil, xerrors.Errorf("parse GCS credentials: %w", err)
}
} else {
creds, err = gcp.DefaultCredentials(ctx)
if err != nil {
return nil, xerrors.Errorf("obtain GCP default credentials: %w", err)
}
}
gcpClient, err := gcp.NewHTTPClient(gcp.DefaultTransport(), gcp.CredentialsTokenSource(creds))
if err != nil {
return nil, xerrors.Errorf("create GCP HTTP client: %w", err)
}
bucket, err := gcsblob.OpenBucket(ctx, gcpClient, cfg.Bucket, nil)
if err != nil {
return nil, xerrors.Errorf("open GCS bucket %q: %w", cfg.Bucket, err)
}
return newPrefixed(bucket, cfg.Prefix), nil
}
// newPrefixed wraps a bucket with an optional key prefix and returns
// a Store.
func newPrefixed(bucket *blob.Bucket, prefix string) Store {
if prefix != "" {
bucket = blob.PrefixedBucket(bucket, prefix+"/")
}
return New(bucket)
}
// FromConfig creates a Store from deployment configuration. The
// configDir is the Coder config directory (e.g. ~/.config/coderv2)
// and is used as the default root when the local backend is selected
// without an explicit directory.
func FromConfig(ctx context.Context, cfg codersdk.ObjectStoreConfig, configDir string) (Store, error) {
switch Backend(cfg.Backend.String()) {
case BackendLocal, "":
dir := cfg.LocalDir.String()
if dir == "" {
dir = filepath.Join(configDir, "objectstore")
}
return NewLocal(LocalConfig{Dir: dir})
case BackendS3:
return NewS3(ctx, S3Config{
Bucket: cfg.S3Bucket.String(),
Region: cfg.S3Region.String(),
Prefix: cfg.S3Prefix.String(),
Endpoint: cfg.S3Endpoint.String(),
})
case BackendGCS:
return NewGCS(ctx, GCSConfig{
Bucket: cfg.GCSBucket.String(),
Prefix: cfg.GCSPrefix.String(),
CredentialsFile: cfg.GCSCredentialsFile.String(),
})
default:
return nil, xerrors.Errorf("unknown object store backend: %q", cfg.Backend.String())
}
}
@@ -1,63 +0,0 @@
package objstore
import (
"context"
"io"
"iter"
"time"
"golang.org/x/xerrors"
)
// Sentinel errors.
var (
// ErrNotFound is returned when a Read or Delete targets a key
// that does not exist.
ErrNotFound = xerrors.New("object not found")
// ErrClosed is returned when an operation is attempted on a
// closed store.
ErrClosed = xerrors.New("object store closed")
)
// ObjectInfo describes a stored object.
type ObjectInfo struct {
// Key is the object's key within its namespace.
Key string
// Size is the object's size in bytes. May be -1 if unknown.
Size int64
// LastModified is the time the object was last written.
LastModified time.Time
}
// Store provides namespace-scoped CRUD operations on opaque binary
// objects. Namespaces are implicit string prefixes created on first
// write; they require no registration.
//
// Implementations must be safe for concurrent use.
type Store interface {
// Read returns a reader for the object at namespace/key. The
// caller MUST close the returned ReadCloser when done. Returns
// ErrNotFound if the object does not exist.
Read(ctx context.Context, namespace, key string) (io.ReadCloser, ObjectInfo, error)
// Write stores data at namespace/key. Semantics are
// unconditional put: last writer wins.
Write(ctx context.Context, namespace, key string, data []byte) error
// List returns an iterator over objects in the given namespace
// whose keys start with prefix. Pass "" for prefix to list all
// objects in the namespace.
//
// The iterator is lazy and fetches pages on demand. Context
// cancellation is respected between page fetches.
List(ctx context.Context, namespace, prefix string) iter.Seq2[ObjectInfo, error]
// Delete removes the object at namespace/key. Returns
// ErrNotFound if the object does not exist.
Delete(ctx context.Context, namespace, key string) error
// Close releases any resources held by the store.
// Operations after Close return ErrClosed.
io.Closer
}
@@ -1,146 +0,0 @@
// Package objstoretest provides an in-memory Store implementation
// for use in tests.
package objstoretest
import (
"bytes"
"context"
"io"
"iter"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/coder/coder/v2/coderd/objstore"
)
type memObject struct {
data []byte
modified time.Time
}
// MemoryStore is an in-memory implementation of objstore.Store.
// It is safe for concurrent use.
type MemoryStore struct {
mu sync.RWMutex
objects map[string]memObject // full key = namespace/key
closed atomic.Bool
}
// NewMemory returns a Store backed entirely by memory. Useful for
// unit tests that need object storage but don't care about backend
// specifics.
func NewMemory() *MemoryStore {
return &MemoryStore{
objects: make(map[string]memObject),
}
}
func (m *MemoryStore) Read(_ context.Context, namespace, key string) (io.ReadCloser, objstore.ObjectInfo, error) {
if m.closed.Load() {
return nil, objstore.ObjectInfo{}, objstore.ErrClosed
}
full := fullKey(namespace, key)
m.mu.RLock()
obj, ok := m.objects[full]
m.mu.RUnlock()
if !ok {
return nil, objstore.ObjectInfo{}, objstore.ErrNotFound
}
info := objstore.ObjectInfo{
Key: key,
Size: int64(len(obj.data)),
LastModified: obj.modified,
}
return io.NopCloser(bytes.NewReader(obj.data)), info, nil
}
func (m *MemoryStore) Write(_ context.Context, namespace, key string, data []byte) error {
if m.closed.Load() {
return objstore.ErrClosed
}
full := fullKey(namespace, key)
// Copy to avoid retaining caller's slice.
cp := make([]byte, len(data))
copy(cp, data)
m.mu.Lock()
m.objects[full] = memObject{
data: cp,
modified: time.Now(),
}
m.mu.Unlock()
return nil
}
func (m *MemoryStore) List(_ context.Context, namespace, prefix string) iter.Seq2[objstore.ObjectInfo, error] {
return func(yield func(objstore.ObjectInfo, error) bool) {
if m.closed.Load() {
yield(objstore.ObjectInfo{}, objstore.ErrClosed)
return
}
fullPrefix := namespace + "/"
if prefix != "" {
fullPrefix += prefix
}
m.mu.RLock()
defer m.mu.RUnlock()
for k, obj := range m.objects {
if !strings.HasPrefix(k, fullPrefix) {
continue
}
relKey := k[len(namespace)+1:]
info := objstore.ObjectInfo{
Key: relKey,
Size: int64(len(obj.data)),
LastModified: obj.modified,
}
if !yield(info, nil) {
return
}
}
}
}
func (m *MemoryStore) Delete(_ context.Context, namespace, key string) error {
if m.closed.Load() {
return objstore.ErrClosed
}
full := fullKey(namespace, key)
m.mu.Lock()
defer m.mu.Unlock()
if _, ok := m.objects[full]; !ok {
return objstore.ErrNotFound
}
delete(m.objects, full)
return nil
}
func (m *MemoryStore) Close() error {
m.closed.Store(true)
return nil
}
func fullKey(namespace, key string) string {
return namespace + "/" + key
}
// Compile-time check.
var _ objstore.Store = (*MemoryStore)(nil)
@@ -1,150 +0,0 @@
package objstore
import (
"context"
"io"
"iter"
"path"
"sync/atomic"
"gocloud.dev/blob"
"gocloud.dev/gcerrors"
"golang.org/x/xerrors"
)
// bucketStore implements Store on top of a gocloud.dev/blob.Bucket.
// Namespaces are mapped to key prefixes separated by "/".
type bucketStore struct {
bucket *blob.Bucket
closed atomic.Bool
}
// New wraps a gocloud.dev/blob.Bucket as a Store. The caller
// retains no ownership of the bucket after this call; Close on
// the returned Store will close the underlying bucket.
func New(bucket *blob.Bucket) Store {
return &bucketStore{bucket: bucket}
}
func (s *bucketStore) Read(ctx context.Context, namespace, key string) (io.ReadCloser, ObjectInfo, error) {
if s.closed.Load() {
return nil, ObjectInfo{}, ErrClosed
}
objKey := objectKey(namespace, key)
// Fetch attributes first so we can populate ObjectInfo
// before handing back the reader.
attrs, err := s.bucket.Attributes(ctx, objKey)
if err != nil {
return nil, ObjectInfo{}, mapError(err, namespace, key)
}
reader, err := s.bucket.NewReader(ctx, objKey, nil)
if err != nil {
return nil, ObjectInfo{}, mapError(err, namespace, key)
}
info := ObjectInfo{
Key: key,
Size: attrs.Size,
LastModified: attrs.ModTime,
}
return reader, info, nil
}
func (s *bucketStore) Write(ctx context.Context, namespace, key string, data []byte) error {
if s.closed.Load() {
return ErrClosed
}
return mapError(
s.bucket.WriteAll(ctx, objectKey(namespace, key), data, nil),
namespace, key,
)
}
func (s *bucketStore) List(ctx context.Context, namespace, prefix string) iter.Seq2[ObjectInfo, error] {
return func(yield func(ObjectInfo, error) bool) {
if s.closed.Load() {
yield(ObjectInfo{}, ErrClosed)
return
}
fullPrefix := namespace + "/"
if prefix != "" {
fullPrefix += prefix
}
it := s.bucket.List(&blob.ListOptions{
Prefix: fullPrefix,
})
for {
obj, err := it.Next(ctx)
if err != nil {
if err == io.EOF {
return
}
yield(ObjectInfo{}, xerrors.Errorf("list %q/%q: %w", namespace, prefix, err))
return
}
if obj.IsDir {
continue
}
// Strip namespace prefix from key to return
// namespace-relative keys.
relKey := obj.Key[len(namespace)+1:]
info := ObjectInfo{
Key: relKey,
Size: obj.Size,
LastModified: obj.ModTime,
}
if !yield(info, nil) {
return
}
}
}
}
func (s *bucketStore) Delete(ctx context.Context, namespace, key string) error {
if s.closed.Load() {
return ErrClosed
}
return mapError(
s.bucket.Delete(ctx, objectKey(namespace, key)),
namespace, key,
)
}
func (s *bucketStore) Close() error {
if s.closed.Swap(true) {
return nil
}
return s.bucket.Close()
}
// objectKey builds the full key from namespace and key.
func objectKey(namespace, key string) string {
return path.Join(namespace, key)
}
// mapError translates gocloud error codes into our sentinel
// errors.
func mapError(err error, namespace, key string) error {
if err == nil {
return nil
}
if gcerrors.Code(err) == gcerrors.NotFound {
return xerrors.Errorf("%s/%s: %w", namespace, key, ErrNotFound)
}
return err
}
// Compile-time check.
var _ Store = (*bucketStore)(nil)
@@ -1,198 +0,0 @@
package objstore_test
import (
"context"
"errors"
"io"
"slices"
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/objstore"
)
func TestLocalFS(t *testing.T) {
t.Parallel()
newStore := func(t *testing.T) objstore.Store {
t.Helper()
store, err := objstore.NewLocal(objstore.LocalConfig{Dir: t.TempDir()})
require.NoError(t, err)
t.Cleanup(func() { store.Close() })
return store
}
t.Run("WriteAndRead", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
data := []byte("hello, object store")
err := store.Write(ctx, "ns", "key1", data)
require.NoError(t, err)
rc, info, err := store.Read(ctx, "ns", "key1")
require.NoError(t, err)
defer rc.Close()
require.Equal(t, "key1", info.Key)
require.Equal(t, int64(len(data)), info.Size)
require.False(t, info.LastModified.IsZero())
got, err := io.ReadAll(rc)
require.NoError(t, err)
require.Equal(t, data, got)
})
t.Run("ReadNotFound", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
_, _, err := store.Read(ctx, "ns", "nonexistent")
require.Error(t, err)
require.True(t, errors.Is(err, objstore.ErrNotFound), "expected ErrNotFound, got: %v", err)
})
t.Run("Overwrite", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
err := store.Write(ctx, "ns", "key1", []byte("v1"))
require.NoError(t, err)
err = store.Write(ctx, "ns", "key1", []byte("v2"))
require.NoError(t, err)
rc, _, err := store.Read(ctx, "ns", "key1")
require.NoError(t, err)
defer rc.Close()
got, err := io.ReadAll(rc)
require.NoError(t, err)
require.Equal(t, []byte("v2"), got)
})
t.Run("Delete", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
err := store.Write(ctx, "ns", "key1", []byte("data"))
require.NoError(t, err)
err = store.Delete(ctx, "ns", "key1")
require.NoError(t, err)
_, _, err = store.Read(ctx, "ns", "key1")
require.True(t, errors.Is(err, objstore.ErrNotFound))
})
t.Run("DeleteNotFound", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
err := store.Delete(ctx, "ns", "nonexistent")
require.True(t, errors.Is(err, objstore.ErrNotFound), "expected ErrNotFound, got: %v", err)
})
t.Run("ListAll", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
err := store.Write(ctx, "ns", "a", []byte("1"))
require.NoError(t, err)
err = store.Write(ctx, "ns", "b", []byte("2"))
require.NoError(t, err)
err = store.Write(ctx, "ns", "c", []byte("3"))
require.NoError(t, err)
var keys []string
for info, err := range store.List(ctx, "ns", "") {
require.NoError(t, err)
keys = append(keys, info.Key)
}
slices.Sort(keys)
require.Equal(t, []string{"a", "b", "c"}, keys)
})
t.Run("ListWithPrefix", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
err := store.Write(ctx, "ns", "logs/a", []byte("1"))
require.NoError(t, err)
err = store.Write(ctx, "ns", "logs/b", []byte("2"))
require.NoError(t, err)
err = store.Write(ctx, "ns", "other/c", []byte("3"))
require.NoError(t, err)
var keys []string
for info, err := range store.List(ctx, "ns", "logs/") {
require.NoError(t, err)
keys = append(keys, info.Key)
}
slices.Sort(keys)
require.Equal(t, []string{"logs/a", "logs/b"}, keys)
})
t.Run("ListEmptyNamespace", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
var count int
for _, err := range store.List(ctx, "empty", "") {
require.NoError(t, err)
count++
}
require.Zero(t, count)
})
t.Run("NamespaceIsolation", func(t *testing.T) {
t.Parallel()
store := newStore(t)
ctx := context.Background()
err := store.Write(ctx, "ns1", "key", []byte("ns1-data"))
require.NoError(t, err)
err = store.Write(ctx, "ns2", "key", []byte("ns2-data"))
require.NoError(t, err)
rc1, _, err := store.Read(ctx, "ns1", "key")
require.NoError(t, err)
got1, _ := io.ReadAll(rc1)
rc1.Close()
rc2, _, err := store.Read(ctx, "ns2", "key")
require.NoError(t, err)
got2, _ := io.ReadAll(rc2)
rc2.Close()
require.Equal(t, []byte("ns1-data"), got1)
require.Equal(t, []byte("ns2-data"), got2)
})
t.Run("CloseThenOps", func(t *testing.T) {
t.Parallel()
store, err := objstore.NewLocal(objstore.LocalConfig{Dir: t.TempDir()})
require.NoError(t, err)
err = store.Close()
require.NoError(t, err)
err = store.Write(context.Background(), "ns", "key", []byte("data"))
require.True(t, errors.Is(err, objstore.ErrClosed), "expected ErrClosed, got: %v", err)
_, _, err = store.Read(context.Background(), "ns", "key")
require.True(t, errors.Is(err, objstore.ErrClosed), "expected ErrClosed, got: %v", err)
})
}
@@ -435,6 +435,9 @@ func convertProvisionerJobWithQueuePosition(pj database.GetProvisionerJobsByOrga
if pj.WorkspaceID.Valid {
job.Metadata.WorkspaceID = &pj.WorkspaceID.UUID
}
if pj.WorkspaceBuildTransition.Valid {
job.Metadata.WorkspaceBuildTransition = codersdk.WorkspaceTransition(pj.WorkspaceBuildTransition.WorkspaceTransition)
}
return job
}
@@ -97,13 +97,14 @@ func TestProvisionerJobs(t *testing.T) {
// Verify that job metadata is correct.
assert.Equal(t, job2.Metadata, codersdk.ProvisionerJobMetadata{
TemplateVersionName: version.Name,
TemplateID: template.ID,
TemplateName: template.Name,
TemplateDisplayName: template.DisplayName,
TemplateIcon: template.Icon,
WorkspaceID: &w.ID,
WorkspaceName: w.Name,
TemplateVersionName: version.Name,
TemplateID: template.ID,
TemplateName: template.Name,
TemplateDisplayName: template.DisplayName,
TemplateIcon: template.Icon,
WorkspaceID: &w.ID,
WorkspaceName: w.Name,
WorkspaceBuildTransition: codersdk.WorkspaceTransitionStart,
})
})
})
@@ -7,7 +7,6 @@ import (
"encoding/json"
"errors"
"fmt"
"io"
"maps"
"net/http"
"slices"
@@ -29,7 +28,6 @@ import (
"github.com/coder/coder/v2/coderd/database/db2sdk"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/pubsub"
"github.com/coder/coder/v2/coderd/objstore"
coderdpubsub "github.com/coder/coder/v2/coderd/pubsub"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/coderd/util/xjson"
@@ -147,7 +145,6 @@ type Server struct {
usageTracker *workspacestats.UsageTracker
clock quartz.Clock
recordingSem chan struct{}
objStore objstore.Store
// Configuration
pendingChatAcquireInterval time.Duration
@@ -2694,9 +2691,6 @@ type Config struct {
WebpushDispatcher webpush.Dispatcher
UsageTracker *workspacestats.UsageTracker
Clock quartz.Clock
// ObjectStore is optional. When set, chat file data is stored
// in the object store instead of the database BYTEA column.
ObjectStore objstore.Store
}
// New creates a new chat processor. The processor polls for pending
@@ -2762,7 +2756,6 @@ func New(cfg Config) *Server {
usageTracker: cfg.UsageTracker,
clock: clk,
recordingSem: make(chan struct{}, maxConcurrentRecordingUploads),
objStore: cfg.ObjectStore,
wakeCh: make(chan struct{}, 1),
heartbeatRegistry: make(map[uuid.UUID]*heartbeatEntry),
}
@@ -3915,9 +3908,7 @@ func (p *Server) subscribeChatControl(
}
// chatFileResolver returns a FileResolver that fetches chat file
// content. When an object store is configured and the file has an
// object_store_key, data is read from the store. Otherwise it falls
// back to the database BYTEA column.
// content from the database by ID.
func (p *Server) chatFileResolver() chatprompt.FileResolver {
return func(ctx context.Context, ids []uuid.UUID) (map[uuid.UUID]chatprompt.FileData, error) {
files, err := p.db.GetChatFilesByIDs(ctx, ids)
@@ -3926,13 +3917,9 @@ func (p *Server) chatFileResolver() chatprompt.FileResolver {
}
result := make(map[uuid.UUID]chatprompt.FileData, len(files))
for _, f := range files {
data, err := p.readChatFileData(ctx, f)
if err != nil {
return nil, xerrors.Errorf("read chat file %s: %w", f.ID, err)
}
result[f.ID] = chatprompt.FileData{
Name: f.Name,
Data: data,
Data: f.Data,
MediaType: f.Mimetype,
}
}
@@ -3940,24 +3927,6 @@ func (p *Server) chatFileResolver() chatprompt.FileResolver {
}
}
// chatFilesNamespace is the object store namespace for chat files.
const chatFilesNamespace = "chatfiles"
// readChatFileData returns the binary content for a chat file. It
// reads from the object store, falling back to the database BYTEA
// column for files that predate the migration.
func (p *Server) readChatFileData(ctx context.Context, f database.ChatFile) ([]byte, error) {
if f.ObjectStoreKey.Valid && f.ObjectStoreKey.String != "" {
rc, _, err := p.objStore.Read(ctx, chatFilesNamespace, f.ObjectStoreKey.String)
if err != nil {
return nil, err
}
defer rc.Close()
return io.ReadAll(rc)
}
return f.Data, nil
}
// tryAutoPromoteQueuedMessage pops the next queued message and converts it
// into a pending user message inside the caller's transaction. Queued
// messages were already admitted through SendMessage, so this preserves FIFO
@@ -163,24 +163,34 @@ func (p *Server) stopAndStoreRecording(
}
}
// Second pass: store the collected data.
// Second pass: store the collected data in the database.
if videoData != nil {
row, err := p.storeChatFile(ctx, ownerID, ws.OrganizationID,
fmt.Sprintf("recording-%s.mp4", p.clock.Now().UTC().Format("2006-01-02T15-04-05Z")),
"video/mp4", videoData)
//nolint:gocritic // AsChatd is required to insert chat files from the recording pipeline.
row, err := p.db.InsertChatFile(dbauthz.AsChatd(ctx), database.InsertChatFileParams{
OwnerID: ownerID,
OrganizationID: ws.OrganizationID,
Name: fmt.Sprintf("recording-%s.mp4", p.clock.Now().UTC().Format("2006-01-02T15-04-05Z")),
Mimetype: "video/mp4",
Data: videoData,
})
if err != nil {
p.logger.Warn(ctx, "failed to store recording",
p.logger.Warn(ctx, "failed to store recording in database",
slog.Error(err))
} else {
result.recordingFileID = row.ID.String()
}
}
if thumbnailData != nil && result.recordingFileID != "" {
row, err := p.storeChatFile(ctx, ownerID, ws.OrganizationID,
fmt.Sprintf("thumbnail-%s.jpg", p.clock.Now().UTC().Format("2006-01-02T15-04-05Z")),
"image/jpeg", thumbnailData)
//nolint:gocritic // AsChatd is required to insert chat files from the recording pipeline.
row, err := p.db.InsertChatFile(dbauthz.AsChatd(ctx), database.InsertChatFileParams{
OwnerID: ownerID,
OrganizationID: ws.OrganizationID,
Name: fmt.Sprintf("thumbnail-%s.jpg", p.clock.Now().UTC().Format("2006-01-02T15-04-05Z")),
Mimetype: "image/jpeg",
Data: thumbnailData,
})
if err != nil {
p.logger.Warn(ctx, "failed to store thumbnail",
p.logger.Warn(ctx, "failed to store thumbnail in database",
slog.Error(err))
} else {
result.thumbnailFileID = row.ID.String()
@@ -189,28 +199,3 @@ func (p *Server) stopAndStoreRecording(
return result
}
// storeChatFile writes file data to the object store (when
// configured) and inserts a metadata row into chat_files. Falls back
// to storing the data directly in the database BYTEA column when no
// object store is available.
func (p *Server) storeChatFile(
ctx context.Context,
ownerID, orgID uuid.UUID,
name, mimetype string,
data []byte,
) (database.InsertChatFileRow, error) {
key := uuid.New().String()
if err := p.objStore.Write(ctx, chatFilesNamespace, key, data); err != nil {
return database.InsertChatFileRow{}, fmt.Errorf("write to object store: %w", err)
}
//nolint:gocritic // AsChatd is required to insert chat files from the recording pipeline.
return p.db.InsertChatFile(dbauthz.AsChatd(ctx), database.InsertChatFileParams{
OwnerID: ownerID,
OrganizationID: orgID,
Name: name,
Mimetype: mimetype,
ObjectStoreKey: key,
})
}
@@ -2411,10 +2411,10 @@ func (c *ExperimentalClient) GetChatsByWorkspace(ctx context.Context, workspaceI
// PRInsightsResponse is the response from the PR insights endpoint.
type PRInsightsResponse struct {
Summary PRInsightsSummary `json:"summary"`
TimeSeries []PRInsightsTimeSeriesEntry `json:"time_series"`
ByModel []PRInsightsModelBreakdown `json:"by_model"`
RecentPRs []PRInsightsPullRequest `json:"recent_prs"`
Summary PRInsightsSummary `json:"summary"`
TimeSeries []PRInsightsTimeSeriesEntry `json:"time_series"`
ByModel []PRInsightsModelBreakdown `json:"by_model"`
PullRequests []PRInsightsPullRequest `json:"recent_prs"`
}
// PRInsightsSummary contains aggregate PR metrics for a time period,
@@ -645,7 +645,6 @@ type DeploymentValues struct {
HideAITasks serpent.Bool `json:"hide_ai_tasks,omitempty" typescript:",notnull"`
AI AIConfig `json:"ai,omitempty"`
StatsCollection StatsCollectionConfig `json:"stats_collection,omitempty" typescript:",notnull"`
ObjectStore ObjectStoreConfig `json:"object_store,omitempty" typescript:",notnull"`
Config serpent.YAMLConfigPath `json:"config,omitempty" typescript:",notnull"`
WriteConfig serpent.Bool `json:"write_config,omitempty" typescript:",notnull"`
@@ -1047,31 +1046,6 @@ type HealthcheckConfig struct {
}
// RetentionConfig contains configuration for data retention policies.
// ObjectStoreConfig configures the object storage backend used for
// binary data such as chat files and transcripts.
type ObjectStoreConfig struct {
// Backend selects the storage backend: "local" (default), "s3", or "gcs".
Backend serpent.String `json:"backend" typescript:",notnull"`
// LocalDir is the root directory for the local filesystem backend.
// Only used when Backend is "local". Defaults to <config-dir>/objectstore/.
LocalDir serpent.String `json:"local_dir" typescript:",notnull"`
// S3Bucket is the S3 bucket name. Required when Backend is "s3".
S3Bucket serpent.String `json:"s3_bucket" typescript:",notnull"`
// S3Region is the AWS region for the S3 bucket.
S3Region serpent.String `json:"s3_region" typescript:",notnull"`
// S3Prefix is an optional key prefix within the S3 bucket.
S3Prefix serpent.String `json:"s3_prefix" typescript:",notnull"`
// S3Endpoint is a custom S3-compatible endpoint URL (for MinIO, R2, etc.).
S3Endpoint serpent.String `json:"s3_endpoint" typescript:",notnull"`
// GCSBucket is the GCS bucket name. Required when Backend is "gcs".
GCSBucket serpent.String `json:"gcs_bucket" typescript:",notnull"`
// GCSPrefix is an optional key prefix within the GCS bucket.
GCSPrefix serpent.String `json:"gcs_prefix" typescript:",notnull"`
// GCSCredentialsFile is an optional path to a GCS service account
// key file. If empty, Application Default Credentials are used.
GCSCredentialsFile serpent.String `json:"gcs_credentials_file" typescript:",notnull"`
}
// These settings control how long various types of data are retained in the database
// before being automatically purged. Setting a value to 0 disables retention for that
// data type (data is kept indefinitely).
@@ -1488,11 +1462,6 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
Description: "Configure data retention policies for various database tables. Retention policies automatically purge old data to reduce database size and improve performance. Setting a retention duration to 0 disables automatic purging for that data type.",
YAML: "retention",
}
deploymentGroupObjectStore = serpent.Group{
Name: "Object Store",
Description: "Configure the object storage backend for binary data (chat files, transcripts, etc.). Defaults to local filesystem storage.",
YAML: "objectStore",
}
)
httpAddress := serpent.Option{
@@ -4049,97 +4018,6 @@ Write out the current server config as YAML to stdout.`,
YAML: "workspace_agent_logs",
Annotations: serpent.Annotations{}.Mark(annotationFormatDuration, "true"),
},
// Object Store options
{
Name: "Object Store Backend",
Description: "The storage backend for binary data such as chat files. Valid values: local, s3, gcs.",
Flag: "objectstore-backend",
Env: "CODER_OBJECTSTORE_BACKEND",
Value: &c.ObjectStore.Backend,
Default: "local",
Group: &deploymentGroupObjectStore,
YAML: "backend",
},
{
Name: "Object Store Local Directory",
Description: "Root directory for the local filesystem object store backend. Only used when the backend is \"local\".",
Flag: "objectstore-local-dir",
Env: "CODER_OBJECTSTORE_LOCAL_DIR",
Value: &c.ObjectStore.LocalDir,
Default: "",
Group: &deploymentGroupObjectStore,
YAML: "local_dir",
},
{
Name: "Object Store S3 Bucket",
Description: "S3 bucket name. Required when the backend is \"s3\".",
Flag: "objectstore-s3-bucket",
Env: "CODER_OBJECTSTORE_S3_BUCKET",
Value: &c.ObjectStore.S3Bucket,
Default: "",
Group: &deploymentGroupObjectStore,
YAML: "s3_bucket",
},
{
Name: "Object Store S3 Region",
Description: "AWS region for the S3 bucket.",
Flag: "objectstore-s3-region",
Env: "CODER_OBJECTSTORE_S3_REGION",
Value: &c.ObjectStore.S3Region,
Default: "",
Group: &deploymentGroupObjectStore,
YAML: "s3_region",
},
{
Name: "Object Store S3 Prefix",
Description: "Optional key prefix within the S3 bucket.",
Flag: "objectstore-s3-prefix",
Env: "CODER_OBJECTSTORE_S3_PREFIX",
Value: &c.ObjectStore.S3Prefix,
Default: "",
Group: &deploymentGroupObjectStore,
YAML: "s3_prefix",
},
{
Name: "Object Store S3 Endpoint",
Description: "Custom S3-compatible endpoint URL (e.g. for MinIO, R2, Cloudflare). Leave empty for standard AWS S3.",
Flag: "objectstore-s3-endpoint",
Env: "CODER_OBJECTSTORE_S3_ENDPOINT",
Value: &c.ObjectStore.S3Endpoint,
Default: "",
Group: &deploymentGroupObjectStore,
YAML: "s3_endpoint",
},
{
Name: "Object Store GCS Bucket",
Description: "GCS bucket name. Required when the backend is \"gcs\".",
Flag: "objectstore-gcs-bucket",
Env: "CODER_OBJECTSTORE_GCS_BUCKET",
Value: &c.ObjectStore.GCSBucket,
Default: "",
Group: &deploymentGroupObjectStore,
YAML: "gcs_bucket",
},
{
Name: "Object Store GCS Prefix",
Description: "Optional key prefix within the GCS bucket.",
Flag: "objectstore-gcs-prefix",
Env: "CODER_OBJECTSTORE_GCS_PREFIX",
Value: &c.ObjectStore.GCSPrefix,
Default: "",
Group: &deploymentGroupObjectStore,
YAML: "gcs_prefix",
},
{
Name: "Object Store GCS Credentials File",
Description: "Path to a GCS service account key file. If empty, Application Default Credentials are used.",
Flag: "objectstore-gcs-credentials-file",
Env: "CODER_OBJECTSTORE_GCS_CREDENTIALS_FILE",
Value: &c.ObjectStore.GCSCredentialsFile,
Default: "",
Group: &deploymentGroupObjectStore,
YAML: "gcs_credentials_file",
},
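The flag/env pairs in the options above map one-to-one onto environment variables. Configuring an S3-compatible store such as MinIO would look like the following (values illustrative; the env names come from the option definitions above):

```shell
# Illustrative values for an S3-compatible backend (e.g. MinIO).
export CODER_OBJECTSTORE_BACKEND=s3
export CODER_OBJECTSTORE_S3_BUCKET=coder-objects
export CODER_OBJECTSTORE_S3_REGION=us-east-1
export CODER_OBJECTSTORE_S3_ENDPOINT=http://minio.internal:9000
export CODER_OBJECTSTORE_S3_PREFIX=chat-files/
```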
{
Name: "Enable Authorization Recordings",
Description: "All api requests will have a header including all authorization calls made during the request. " +
+8 -7
@@ -143,13 +143,14 @@ type ProvisionerJobInput struct {
// ProvisionerJobMetadata contains metadata for the job.
type ProvisionerJobMetadata struct {
TemplateVersionName string `json:"template_version_name" table:"template version name"`
TemplateID uuid.UUID `json:"template_id" format:"uuid" table:"template id"`
TemplateName string `json:"template_name" table:"template name"`
TemplateDisplayName string `json:"template_display_name" table:"template display name"`
TemplateIcon string `json:"template_icon" table:"template icon"`
WorkspaceID *uuid.UUID `json:"workspace_id,omitempty" format:"uuid" table:"workspace id"`
WorkspaceName string `json:"workspace_name,omitempty" table:"workspace name"`
TemplateVersionName string `json:"template_version_name" table:"template version name"`
TemplateID uuid.UUID `json:"template_id" format:"uuid" table:"template id"`
TemplateName string `json:"template_name" table:"template name"`
TemplateDisplayName string `json:"template_display_name" table:"template display name"`
TemplateIcon string `json:"template_icon" table:"template icon"`
WorkspaceID *uuid.UUID `json:"workspace_id,omitempty" format:"uuid" table:"workspace id"`
WorkspaceName string `json:"workspace_name,omitempty" table:"workspace name"`
WorkspaceBuildTransition WorkspaceTransition `json:"workspace_build_transition,omitempty" table:"workspace build transition"`
}
// ProvisionerJobType represents the type of job.
-2
@@ -201,8 +201,6 @@ deployment. They will always be available from the agent.
| `coderd_db_tx_duration_seconds` | histogram | Duration of transactions in seconds. | `success` `tx_id` |
| `coderd_db_tx_executions_count` | counter | Total count of transactions executed. 'retries' is expected to be 0 for a successful transaction. | `retries` `success` `tx_id` |
| `coderd_dbpurge_iteration_duration_seconds` | histogram | Duration of each dbpurge iteration in seconds. | `success` |
| `coderd_dbpurge_objstore_delete_inflight` | gauge | Number of object store files currently enqueued for deletion. | |
| `coderd_dbpurge_objstore_files_deleted_total` | counter | Total number of object store files successfully deleted. | |
| `coderd_dbpurge_records_purged_total` | counter | Total number of records purged by type. | `record_type` |
| `coderd_experiments` | gauge | Indicates whether each experiment is enabled (1) or not (0) | `experiment` |
| `coderd_insights_applications_usage_seconds` | gauge | The application usage per template. | `application_name` `organization_name` `slug` `template_name` |
@@ -23,6 +23,7 @@ The following database fields are currently encrypted:
- `external_auth_links.oauth_access_token`
- `external_auth_links.oauth_refresh_token`
- `crypto_keys.secret`
- `user_secrets.value`
Additional database fields may be encrypted in the future.
+21 -14
@@ -60,6 +60,7 @@ curl -X GET http://coder-server:8080/api/v2/users/{user}/workspace/{workspacenam
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -300,6 +301,7 @@ curl -X GET http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild} \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1008,6 +1010,7 @@ curl -X GET http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild}/sta
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1359,6 +1362,7 @@ curl -X GET http://coder-server:8080/api/v2/workspaces/{workspace}/builds \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1577,6 +1581,7 @@ Status Code **200**
| `»»» template_id` | string(uuid) | false | | |
| `»»» template_name` | string | false | | |
| `»»» template_version_name` | string | false | | |
| `»»» workspace_build_transition` | [codersdk.WorkspaceTransition](schemas.md#codersdkworkspacetransition) | false | | |
| `»»» workspace_id` | string(uuid) | false | | |
| `»»» workspace_name` | string | false | | |
| `»» organization_id` | string(uuid) | false | | |
@@ -1710,20 +1715,21 @@ Status Code **200**
#### Enumerated Values
| Property | Value(s) |
|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `status` | `canceled`, `canceling`, `connected`, `connecting`, `deleted`, `deleting`, `disconnected`, `failed`, `pending`, `running`, `starting`, `stopped`, `stopping`, `succeeded`, `timeout` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
| `reason` | `autostart`, `autostop`, `initiator` |
| `health` | `disabled`, `healthy`, `initializing`, `unhealthy` |
| `open_in` | `slim-window`, `tab` |
| `sharing_level` | `authenticated`, `organization`, `owner`, `public` |
| `state` | `complete`, `failure`, `idle`, `working` |
| `lifecycle_state` | `created`, `off`, `ready`, `shutdown_error`, `shutdown_timeout`, `shutting_down`, `start_error`, `start_timeout`, `starting` |
| `startup_script_behavior` | `blocking`, `non-blocking` |
| `workspace_transition` | `delete`, `start`, `stop` |
| `transition` | `delete`, `start`, `stop` |
| Property | Value(s) |
|------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `workspace_build_transition` | `delete`, `start`, `stop` |
| `status` | `canceled`, `canceling`, `connected`, `connecting`, `deleted`, `deleting`, `disconnected`, `failed`, `pending`, `running`, `starting`, `stopped`, `stopping`, `succeeded`, `timeout` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
| `reason` | `autostart`, `autostop`, `initiator` |
| `health` | `disabled`, `healthy`, `initializing`, `unhealthy` |
| `open_in` | `slim-window`, `tab` |
| `sharing_level` | `authenticated`, `organization`, `owner`, `public` |
| `state` | `complete`, `failure`, `idle`, `working` |
| `lifecycle_state` | `created`, `off`, `ready`, `shutdown_error`, `shutdown_timeout`, `shutting_down`, `start_error`, `start_timeout`, `starting` |
| `startup_script_behavior` | `blocking`, `non-blocking` |
| `workspace_transition` | `delete`, `start`, `stop` |
| `transition` | `delete`, `start`, `stop` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -1810,6 +1816,7 @@ curl -X POST http://coder-server:8080/api/v2/workspaces/{workspace}/builds \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
-11
@@ -404,17 +404,6 @@ curl -X GET http://coder-server:8080/api/v2/deployment/config \
"enterprise_base_url": "string"
}
},
"object_store": {
"backend": "string",
"gcs_bucket": "string",
"gcs_credentials_file": "string",
"gcs_prefix": "string",
"local_dir": "string",
"s3_bucket": "string",
"s3_endpoint": "string",
"s3_prefix": "string",
"s3_region": "string"
},
"oidc": {
"allow_signups": true,
"auth_url_params": {},
+44 -40
@@ -317,6 +317,7 @@ curl -X GET http://coder-server:8080/api/v2/organizations/{organization}/provisi
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -346,49 +347,51 @@ curl -X GET http://coder-server:8080/api/v2/organizations/{organization}/provisi
Status Code **200**
| Name | Type | Required | Restrictions | Description |
|----------------------------|------------------------------------------------------------------------------|----------|--------------|-------------|
| `[array item]` | array | false | | |
| `» available_workers` | array | false | | |
| `» canceled_at` | string(date-time) | false | | |
| `» completed_at` | string(date-time) | false | | |
| `» created_at` | string(date-time) | false | | |
| `» error` | string | false | | |
| `» error_code` | [codersdk.JobErrorCode](schemas.md#codersdkjoberrorcode) | false | | |
| `» file_id` | string(uuid) | false | | |
| `» id` | string(uuid) | false | | |
| `» initiator_id` | string(uuid) | false | | |
| `» input` | [codersdk.ProvisionerJobInput](schemas.md#codersdkprovisionerjobinput) | false | | |
| `»» error` | string | false | | |
| `»» template_version_id` | string(uuid) | false | | |
| `»» workspace_build_id` | string(uuid) | false | | |
| `» logs_overflowed` | boolean | false | | |
| `» metadata` | [codersdk.ProvisionerJobMetadata](schemas.md#codersdkprovisionerjobmetadata) | false | | |
| `»» template_display_name` | string | false | | |
| `»» template_icon` | string | false | | |
| `»» template_id` | string(uuid) | false | | |
| `»» template_name` | string | false | | |
| `»» template_version_name` | string | false | | |
| `»» workspace_id` | string(uuid) | false | | |
| `»» workspace_name` | string | false | | |
| `» organization_id` | string(uuid) | false | | |
| `» queue_position` | integer | false | | |
| `» queue_size` | integer | false | | |
| `» started_at` | string(date-time) | false | | |
| `» status` | [codersdk.ProvisionerJobStatus](schemas.md#codersdkprovisionerjobstatus) | false | | |
| `» tags` | object | false | | |
| `»» [any property]` | string | false | | |
| `» type` | [codersdk.ProvisionerJobType](schemas.md#codersdkprovisionerjobtype) | false | | |
| `» worker_id` | string(uuid) | false | | |
| `» worker_name` | string | false | | |
| Name | Type | Required | Restrictions | Description |
|---------------------------------|------------------------------------------------------------------------------|----------|--------------|-------------|
| `[array item]` | array | false | | |
| `» available_workers` | array | false | | |
| `» canceled_at` | string(date-time) | false | | |
| `» completed_at` | string(date-time) | false | | |
| `» created_at` | string(date-time) | false | | |
| `» error` | string | false | | |
| `» error_code` | [codersdk.JobErrorCode](schemas.md#codersdkjoberrorcode) | false | | |
| `» file_id` | string(uuid) | false | | |
| `» id` | string(uuid) | false | | |
| `» initiator_id` | string(uuid) | false | | |
| `» input` | [codersdk.ProvisionerJobInput](schemas.md#codersdkprovisionerjobinput) | false | | |
| `»» error` | string | false | | |
| `»» template_version_id` | string(uuid) | false | | |
| `»» workspace_build_id` | string(uuid) | false | | |
| `» logs_overflowed` | boolean | false | | |
| `» metadata` | [codersdk.ProvisionerJobMetadata](schemas.md#codersdkprovisionerjobmetadata) | false | | |
| `»» template_display_name` | string | false | | |
| `»» template_icon` | string | false | | |
| `»» template_id` | string(uuid) | false | | |
| `»» template_name` | string | false | | |
| `»» template_version_name` | string | false | | |
| `»» workspace_build_transition` | [codersdk.WorkspaceTransition](schemas.md#codersdkworkspacetransition) | false | | |
| `»» workspace_id` | string(uuid) | false | | |
| `»» workspace_name` | string | false | | |
| `» organization_id` | string(uuid) | false | | |
| `» queue_position` | integer | false | | |
| `» queue_size` | integer | false | | |
| `» started_at` | string(date-time) | false | | |
| `» status` | [codersdk.ProvisionerJobStatus](schemas.md#codersdkprovisionerjobstatus) | false | | |
| `» tags` | object | false | | |
| `»» [any property]` | string | false | | |
| `» type` | [codersdk.ProvisionerJobType](schemas.md#codersdkprovisionerjobtype) | false | | |
| `» worker_id` | string(uuid) | false | | |
| `» worker_name` | string | false | | |
#### Enumerated Values
| Property | Value(s) |
|--------------|--------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `status` | `canceled`, `canceling`, `failed`, `pending`, `running`, `succeeded` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
| Property | Value(s) |
|------------------------------|--------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `workspace_build_transition` | `delete`, `start`, `stop` |
| `status` | `canceled`, `canceling`, `failed`, `pending`, `running`, `succeeded` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -441,6 +444,7 @@ curl -X GET http://coder-server:8080/api/v2/organizations/{organization}/provisi
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
+18 -62
@@ -3456,17 +3456,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"enterprise_base_url": "string"
}
},
"object_store": {
"backend": "string",
"gcs_bucket": "string",
"gcs_credentials_file": "string",
"gcs_prefix": "string",
"local_dir": "string",
"s3_bucket": "string",
"s3_endpoint": "string",
"s3_prefix": "string",
"s3_region": "string"
},
"oidc": {
"allow_signups": true,
"auth_url_params": {},
@@ -4045,17 +4034,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"enterprise_base_url": "string"
}
},
"object_store": {
"backend": "string",
"gcs_bucket": "string",
"gcs_credentials_file": "string",
"gcs_prefix": "string",
"local_dir": "string",
"s3_bucket": "string",
"s3_endpoint": "string",
"s3_prefix": "string",
"s3_region": "string"
},
"oidc": {
"allow_signups": true,
"auth_url_params": {},
@@ -4313,7 +4291,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
| `metrics_cache_refresh_interval` | integer | false | | |
| `notifications` | [codersdk.NotificationsConfig](#codersdknotificationsconfig) | false | | |
| `oauth2` | [codersdk.OAuth2Config](#codersdkoauth2config) | false | | |
| `object_store` | [codersdk.ObjectStoreConfig](#codersdkobjectstoreconfig) | false | | |
| `oidc` | [codersdk.OIDCConfig](#codersdkoidcconfig) | false | | |
| `pg_auth` | string | false | | |
| `pg_conn_max_idle` | string | false | | |
@@ -6467,36 +6444,6 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
| `user_roles_default` | array of string | false | | |
| `username_field` | string | false | | |
## codersdk.ObjectStoreConfig
```json
{
"backend": "string",
"gcs_bucket": "string",
"gcs_credentials_file": "string",
"gcs_prefix": "string",
"local_dir": "string",
"s3_bucket": "string",
"s3_endpoint": "string",
"s3_prefix": "string",
"s3_region": "string"
}
```
### Properties
| Name | Type | Required | Restrictions | Description |
|------------------------|--------|----------|--------------|---------------------------------------------------------------------------------------------------------------------------------------------|
| `backend` | string | false | | Backend selects the storage backend: "local" (default), "s3", or "gcs". |
| `gcs_bucket` | string | false | | Gcs bucket is the GCS bucket name. Required when Backend is "gcs". |
| `gcs_credentials_file` | string | false | | Gcs credentials file is an optional path to a GCS service account key file. If empty, Application Default Credentials are used. |
| `gcs_prefix` | string | false | | Gcs prefix is an optional key prefix within the GCS bucket. |
| `local_dir` | string | false | | Local dir is the root directory for the local filesystem backend. Only used when Backend is "local". Defaults to <config-dir>/objectstore/. |
| `s3_bucket` | string | false | | S3 bucket is the S3 bucket name. Required when Backend is "s3". |
| `s3_endpoint` | string | false | | S3 endpoint is a custom S3-compatible endpoint URL (for MinIO, R2, etc.). |
| `s3_prefix` | string | false | | S3 prefix is an optional key prefix within the S3 bucket. |
| `s3_region` | string | false | | S3 region is the AWS region for the S3 bucket. |
## codersdk.OptionType
```json
@@ -7174,6 +7121,7 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -7840,6 +7788,7 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -7949,6 +7898,7 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
}
@@ -7956,15 +7906,16 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
### Properties
| Name | Type | Required | Restrictions | Description |
|-------------------------|--------|----------|--------------|-------------|
| `template_display_name` | string | false | | |
| `template_icon` | string | false | | |
| `template_id` | string | false | | |
| `template_name` | string | false | | |
| `template_version_name` | string | false | | |
| `workspace_id` | string | false | | |
| `workspace_name` | string | false | | |
| Name | Type | Required | Restrictions | Description |
|------------------------------|--------------------------------------------------------------|----------|--------------|-------------|
| `template_display_name` | string | false | | |
| `template_icon` | string | false | | |
| `template_id` | string | false | | |
| `template_name` | string | false | | |
| `template_version_name` | string | false | | |
| `workspace_build_transition` | [codersdk.WorkspaceTransition](#codersdkworkspacetransition) | false | | |
| `workspace_id` | string | false | | |
| `workspace_name` | string | false | | |
## codersdk.ProvisionerJobStatus
@@ -8520,6 +8471,7 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -10067,6 +10019,7 @@ Restarts will only happen on weekdays in this list on weeks which line up with W
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -11457,6 +11410,7 @@ If the schedule is empty, the user will be updated to use the default schedule.|
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -12615,6 +12569,7 @@ If the schedule is empty, the user will be updated to use the default schedule.|
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -13447,6 +13402,7 @@ If the schedule is empty, the user will be updated to use the default schedule.|
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
+2
@@ -425,6 +425,7 @@ curl -X POST http://coder-server:8080/api/v2/tasks/{user}/{task}/pause \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -668,6 +669,7 @@ curl -X POST http://coder-server:8080/api/v2/tasks/{user}/{task}/resume \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
+135 -122
@@ -493,6 +493,7 @@ curl -X GET http://coder-server:8080/api/v2/organizations/{organization}/templat
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -595,6 +596,7 @@ curl -X GET http://coder-server:8080/api/v2/organizations/{organization}/templat
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -721,6 +723,7 @@ curl -X POST http://coder-server:8080/api/v2/organizations/{organization}/templa
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1335,6 +1338,7 @@ curl -X GET http://coder-server:8080/api/v2/templates/{template}/versions \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1379,70 +1383,72 @@ curl -X GET http://coder-server:8080/api/v2/templates/{template}/versions \
Status Code **200**
| Name | Type | Required | Restrictions | Description |
|-----------------------------|------------------------------------------------------------------------------|----------|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `[array item]` | array | false | | |
| `» archived` | boolean | false | | |
| `» created_at` | string(date-time) | false | | |
| `» created_by` | [codersdk.MinimalUser](schemas.md#codersdkminimaluser) | false | | |
| `»» avatar_url` | string(uri) | false | | |
| `»» id` | string(uuid) | true | | |
| `»» name` | string | false | | |
| `»» username` | string | true | | |
| `» has_external_agent` | boolean | false | | |
| `» id` | string(uuid) | false | | |
| `» job` | [codersdk.ProvisionerJob](schemas.md#codersdkprovisionerjob) | false | | |
| `»» available_workers` | array | false | | |
| `»» canceled_at` | string(date-time) | false | | |
| `»» completed_at` | string(date-time) | false | | |
| `»» created_at` | string(date-time) | false | | |
| `»» error` | string | false | | |
| `»» error_code` | [codersdk.JobErrorCode](schemas.md#codersdkjoberrorcode) | false | | |
| `»» file_id` | string(uuid) | false | | |
| `»» id` | string(uuid) | false | | |
| `»» initiator_id` | string(uuid) | false | | |
| `»» input` | [codersdk.ProvisionerJobInput](schemas.md#codersdkprovisionerjobinput) | false | | |
| `»»» error` | string | false | | |
| `»»» template_version_id` | string(uuid) | false | | |
| `»»» workspace_build_id` | string(uuid) | false | | |
| `»» logs_overflowed` | boolean | false | | |
| `»» metadata` | [codersdk.ProvisionerJobMetadata](schemas.md#codersdkprovisionerjobmetadata) | false | | |
| `»»» template_display_name` | string | false | | |
| `»»» template_icon` | string | false | | |
| `»»» template_id` | string(uuid) | false | | |
| `»»» template_name` | string | false | | |
| `»»» template_version_name` | string | false | | |
| `»»» workspace_id` | string(uuid) | false | | |
| `»»» workspace_name` | string | false | | |
| `»» organization_id` | string(uuid) | false | | |
| `»» queue_position` | integer | false | | |
| `»» queue_size` | integer | false | | |
| `»» started_at` | string(date-time) | false | | |
| `»» status` | [codersdk.ProvisionerJobStatus](schemas.md#codersdkprovisionerjobstatus) | false | | |
| `»» tags` | object | false | | |
| `»»» [any property]` | string | false | | |
| `»» type` | [codersdk.ProvisionerJobType](schemas.md#codersdkprovisionerjobtype) | false | | |
| `»» worker_id` | string(uuid) | false | | |
| `»» worker_name` | string | false | | |
| `» matched_provisioners` | [codersdk.MatchedProvisioners](schemas.md#codersdkmatchedprovisioners) | false | | |
| `»» available` | integer | false | | Available is the number of provisioner daemons that are available to take jobs. This may be less than the count if some provisioners are busy or have been stopped. |
| `»» count` | integer | false | | Count is the number of provisioner daemons that matched the given tags. If the count is 0, it means no provisioner daemons matched the requested tags. |
| `»» most_recently_seen` | string(date-time) | false | | Most recently seen is the most recently seen time of the set of matched provisioners. If no provisioners matched, this field will be null. |
| `» message` | string | false | | |
| `» name` | string | false | | |
| `» organization_id` | string(uuid) | false | | |
| `» readme` | string | false | | |
| `» template_id` | string(uuid) | false | | |
| `» updated_at` | string(date-time) | false | | |
| `» warnings` | array | false | | |
| Name | Type | Required | Restrictions | Description |
|----------------------------------|------------------------------------------------------------------------------|----------|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `[array item]` | array | false | | |
| `» archived` | boolean | false | | |
| `» created_at` | string(date-time) | false | | |
| `» created_by` | [codersdk.MinimalUser](schemas.md#codersdkminimaluser) | false | | |
| `»» avatar_url` | string(uri) | false | | |
| `»» id` | string(uuid) | true | | |
| `»» name` | string | false | | |
| `»» username` | string | true | | |
| `» has_external_agent` | boolean | false | | |
| `» id` | string(uuid) | false | | |
| `» job` | [codersdk.ProvisionerJob](schemas.md#codersdkprovisionerjob) | false | | |
| `»» available_workers` | array | false | | |
| `»» canceled_at` | string(date-time) | false | | |
| `»» completed_at` | string(date-time) | false | | |
| `»» created_at` | string(date-time) | false | | |
| `»» error` | string | false | | |
| `»» error_code` | [codersdk.JobErrorCode](schemas.md#codersdkjoberrorcode) | false | | |
| `»» file_id` | string(uuid) | false | | |
| `»» id` | string(uuid) | false | | |
| `»» initiator_id` | string(uuid) | false | | |
| `»» input` | [codersdk.ProvisionerJobInput](schemas.md#codersdkprovisionerjobinput) | false | | |
| `»»» error` | string | false | | |
| `»»» template_version_id` | string(uuid) | false | | |
| `»»» workspace_build_id` | string(uuid) | false | | |
| `»» logs_overflowed` | boolean | false | | |
| `»» metadata` | [codersdk.ProvisionerJobMetadata](schemas.md#codersdkprovisionerjobmetadata) | false | | |
| `»»» template_display_name` | string | false | | |
| `»»» template_icon` | string | false | | |
| `»»» template_id` | string(uuid) | false | | |
| `»»» template_name` | string | false | | |
| `»»» template_version_name` | string | false | | |
| `»»» workspace_build_transition` | [codersdk.WorkspaceTransition](schemas.md#codersdkworkspacetransition) | false | | |
| `»»» workspace_id` | string(uuid) | false | | |
| `»»» workspace_name` | string | false | | |
| `»» organization_id` | string(uuid) | false | | |
| `»» queue_position` | integer | false | | |
| `»» queue_size` | integer | false | | |
| `»» started_at` | string(date-time) | false | | |
| `»» status` | [codersdk.ProvisionerJobStatus](schemas.md#codersdkprovisionerjobstatus) | false | | |
| `»» tags` | object | false | | |
| `»»» [any property]` | string | false | | |
| `»» type` | [codersdk.ProvisionerJobType](schemas.md#codersdkprovisionerjobtype) | false | | |
| `»» worker_id` | string(uuid) | false | | |
| `»» worker_name` | string | false | | |
| `» matched_provisioners` | [codersdk.MatchedProvisioners](schemas.md#codersdkmatchedprovisioners) | false | | |
| `»» available` | integer | false | | Available is the number of provisioner daemons that are available to take jobs. This may be less than the count if some provisioners are busy or have been stopped. |
| `»» count` | integer | false | | Count is the number of provisioner daemons that matched the given tags. If the count is 0, it means no provisioner daemons matched the requested tags. |
| `»» most_recently_seen` | string(date-time) | false | | Most recently seen is the most recently seen time of the set of matched provisioners. If no provisioners matched, this field will be null. |
| `» message` | string | false | | |
| `» name` | string | false | | |
| `» organization_id` | string(uuid) | false | | |
| `» readme` | string | false | | |
| `» template_id` | string(uuid) | false | | |
| `» updated_at` | string(date-time) | false | | |
| `» warnings` | array | false | | |
#### Enumerated Values
| Property | Value(s) |
|--------------|--------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `status` | `canceled`, `canceling`, `failed`, `pending`, `running`, `succeeded` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
| Property | Value(s) |
|------------------------------|--------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `workspace_build_transition` | `delete`, `start`, `stop` |
| `status` | `canceled`, `canceling`, `failed`, `pending`, `running`, `succeeded` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
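The `matched_provisioners` object in this response works as a quick health signal for a queued job. A minimal Go sketch of how the two counters can be read, assuming the field semantics stated in the table descriptions above (the struct here is a simplified stand-in for `codersdk.MatchedProvisioners`, not the real type):

```go
package main

import "fmt"

// MatchedProvisioners mirrors the numeric fields documented above
// (a stand-in for illustration; the real struct lives in codersdk).
type MatchedProvisioners struct {
	Count     int // provisioner daemons matching the job's tags
	Available int // matching daemons currently able to take jobs
}

// jobOutlook summarizes what the two counters imply for a queued job.
func jobOutlook(mp MatchedProvisioners) string {
	switch {
	case mp.Count == 0:
		return "no match" // no daemon matches the tags; the job cannot run
	case mp.Available == 0:
		return "queued" // daemons match, but all are busy or stopped
	default:
		return "runnable"
	}
}

func main() {
	fmt.Println(jobOutlook(MatchedProvisioners{Count: 0}))               // prints no match
	fmt.Println(jobOutlook(MatchedProvisioners{Count: 3, Available: 0})) // prints queued
	fmt.Println(jobOutlook(MatchedProvisioners{Count: 3, Available: 2})) // prints runnable
}
```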
@@ -1615,6 +1621,7 @@ curl -X GET http://coder-server:8080/api/v2/templates/{template}/versions/{templ
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1659,70 +1666,72 @@ curl -X GET http://coder-server:8080/api/v2/templates/{template}/versions/{templ
Status Code **200**
| Name | Type | Required | Restrictions | Description |
|-----------------------------|------------------------------------------------------------------------------|----------|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `[array item]` | array | false | | |
| `» archived` | boolean | false | | |
| `» created_at` | string(date-time) | false | | |
| `» created_by` | [codersdk.MinimalUser](schemas.md#codersdkminimaluser) | false | | |
| `»» avatar_url` | string(uri) | false | | |
| `»» id` | string(uuid) | true | | |
| `»» name` | string | false | | |
| `»» username` | string | true | | |
| `» has_external_agent` | boolean | false | | |
| `» id` | string(uuid) | false | | |
| `» job` | [codersdk.ProvisionerJob](schemas.md#codersdkprovisionerjob) | false | | |
| `»» available_workers` | array | false | | |
| `»» canceled_at` | string(date-time) | false | | |
| `»» completed_at` | string(date-time) | false | | |
| `»» created_at` | string(date-time) | false | | |
| `»» error` | string | false | | |
| `»» error_code` | [codersdk.JobErrorCode](schemas.md#codersdkjoberrorcode) | false | | |
| `»» file_id` | string(uuid) | false | | |
| `»» id` | string(uuid) | false | | |
| `»» initiator_id` | string(uuid) | false | | |
| `»» input` | [codersdk.ProvisionerJobInput](schemas.md#codersdkprovisionerjobinput) | false | | |
| `»»» error` | string | false | | |
| `»»» template_version_id` | string(uuid) | false | | |
| `»»» workspace_build_id` | string(uuid) | false | | |
| `»» logs_overflowed` | boolean | false | | |
| `»» metadata` | [codersdk.ProvisionerJobMetadata](schemas.md#codersdkprovisionerjobmetadata) | false | | |
| `»»» template_display_name` | string | false | | |
| `»»» template_icon` | string | false | | |
| `»»» template_id` | string(uuid) | false | | |
| `»»» template_name` | string | false | | |
| `»»» template_version_name` | string | false | | |
| `»»» workspace_id` | string(uuid) | false | | |
| `»»» workspace_name` | string | false | | |
| `»» organization_id` | string(uuid) | false | | |
| `»» queue_position` | integer | false | | |
| `»» queue_size` | integer | false | | |
| `»» started_at` | string(date-time) | false | | |
| `»» status` | [codersdk.ProvisionerJobStatus](schemas.md#codersdkprovisionerjobstatus) | false | | |
| `»» tags` | object | false | | |
| `»»» [any property]` | string | false | | |
| `»» type` | [codersdk.ProvisionerJobType](schemas.md#codersdkprovisionerjobtype) | false | | |
| `»» worker_id` | string(uuid) | false | | |
| `»» worker_name` | string | false | | |
| `» matched_provisioners` | [codersdk.MatchedProvisioners](schemas.md#codersdkmatchedprovisioners) | false | | |
| `»» available` | integer | false | | Available is the number of provisioner daemons that are available to take jobs. This may be less than the count if some provisioners are busy or have been stopped. |
| `»» count` | integer | false | | Count is the number of provisioner daemons that matched the given tags. If the count is 0, it means no provisioner daemons matched the requested tags. |
| `»» most_recently_seen` | string(date-time) | false | | Most recently seen is the most recently seen time of the set of matched provisioners. If no provisioners matched, this field will be null. |
| `» message` | string | false | | |
| `» name` | string | false | | |
| `» organization_id` | string(uuid) | false | | |
| `» readme` | string | false | | |
| `» template_id` | string(uuid) | false | | |
| `» updated_at` | string(date-time) | false | | |
| `» warnings` | array | false | | |
| Name | Type | Required | Restrictions | Description |
|----------------------------------|------------------------------------------------------------------------------|----------|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `[array item]` | array | false | | |
| `» archived` | boolean | false | | |
| `» created_at` | string(date-time) | false | | |
| `» created_by` | [codersdk.MinimalUser](schemas.md#codersdkminimaluser) | false | | |
| `»» avatar_url` | string(uri) | false | | |
| `»» id` | string(uuid) | true | | |
| `»» name` | string | false | | |
| `»» username` | string | true | | |
| `» has_external_agent` | boolean | false | | |
| `» id` | string(uuid) | false | | |
| `» job` | [codersdk.ProvisionerJob](schemas.md#codersdkprovisionerjob) | false | | |
| `»» available_workers` | array | false | | |
| `»» canceled_at` | string(date-time) | false | | |
| `»» completed_at` | string(date-time) | false | | |
| `»» created_at` | string(date-time) | false | | |
| `»» error` | string | false | | |
| `»» error_code` | [codersdk.JobErrorCode](schemas.md#codersdkjoberrorcode) | false | | |
| `»» file_id` | string(uuid) | false | | |
| `»» id` | string(uuid) | false | | |
| `»» initiator_id` | string(uuid) | false | | |
| `»» input` | [codersdk.ProvisionerJobInput](schemas.md#codersdkprovisionerjobinput) | false | | |
| `»»» error` | string | false | | |
| `»»» template_version_id` | string(uuid) | false | | |
| `»»» workspace_build_id` | string(uuid) | false | | |
| `»» logs_overflowed` | boolean | false | | |
| `»» metadata` | [codersdk.ProvisionerJobMetadata](schemas.md#codersdkprovisionerjobmetadata) | false | | |
| `»»» template_display_name` | string | false | | |
| `»»» template_icon` | string | false | | |
| `»»» template_id` | string(uuid) | false | | |
| `»»» template_name` | string | false | | |
| `»»» template_version_name` | string | false | | |
| `»»» workspace_build_transition` | [codersdk.WorkspaceTransition](schemas.md#codersdkworkspacetransition) | false | | |
| `»»» workspace_id` | string(uuid) | false | | |
| `»»» workspace_name` | string | false | | |
| `»» organization_id` | string(uuid) | false | | |
| `»» queue_position` | integer | false | | |
| `»» queue_size` | integer | false | | |
| `»» started_at` | string(date-time) | false | | |
| `»» status` | [codersdk.ProvisionerJobStatus](schemas.md#codersdkprovisionerjobstatus) | false | | |
| `»» tags` | object | false | | |
| `»»» [any property]` | string | false | | |
| `»» type` | [codersdk.ProvisionerJobType](schemas.md#codersdkprovisionerjobtype) | false | | |
| `»» worker_id` | string(uuid) | false | | |
| `»» worker_name` | string | false | | |
| `» matched_provisioners` | [codersdk.MatchedProvisioners](schemas.md#codersdkmatchedprovisioners) | false | | |
| `»» available` | integer | false | | Available is the number of provisioner daemons that are available to take jobs. This may be less than the count if some provisioners are busy or have been stopped. |
| `»» count` | integer | false | | Count is the number of provisioner daemons that matched the given tags. If the count is 0, it means no provisioner daemons matched the requested tags. |
| `»» most_recently_seen` | string(date-time) | false | | Most recently seen is the most recently seen time of the set of matched provisioners. If no provisioners matched, this field will be null. |
| `» message` | string | false | | |
| `» name` | string | false | | |
| `» organization_id` | string(uuid) | false | | |
| `» readme` | string | false | | |
| `» template_id` | string(uuid) | false | | |
| `» updated_at` | string(date-time) | false | | |
| `» warnings` | array | false | | |
#### Enumerated Values
| Property | Value(s) |
|--------------|--------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `status` | `canceled`, `canceling`, `failed`, `pending`, `running`, `succeeded` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
| Property | Value(s) |
|------------------------------|--------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `workspace_build_transition` | `delete`, `start`, `stop` |
| `status` | `canceled`, `canceling`, `failed`, `pending`, `running`, `succeeded` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -1785,6 +1794,7 @@ curl -X GET http://coder-server:8080/api/v2/templateversions/{templateversion} \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1896,6 +1906,7 @@ curl -X PATCH http://coder-server:8080/api/v2/templateversions/{templateversion}
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -2095,6 +2106,7 @@ curl -X POST http://coder-server:8080/api/v2/templateversions/{templateversion}/
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -2170,6 +2182,7 @@ curl -X GET http://coder-server:8080/api/v2/templateversions/{templateversion}/d
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
+6
View File
@@ -115,6 +115,7 @@ of the template will be used.
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -478,6 +479,7 @@ curl -X GET http://coder-server:8080/api/v2/users/{user}/workspace/{workspacenam
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -808,6 +810,7 @@ of the template will be used.
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1116,6 +1119,7 @@ curl -X GET http://coder-server:8080/api/v2/workspaces \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1405,6 +1409,7 @@ curl -X GET http://coder-server:8080/api/v2/workspaces/{workspace} \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
@@ -1971,6 +1976,7 @@ curl -X PUT http://coder-server:8080/api/v2/workspaces/{workspace}/dormant \
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_build_transition": "start",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
+4 -4
@@ -54,10 +54,10 @@ Select which organization (uuid or name) to use.
### -c, --column
| | |
|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Type | <code>[id\|created at\|started at\|completed at\|canceled at\|error\|error code\|status\|worker id\|worker name\|file id\|tags\|queue position\|queue size\|organization id\|initiator id\|template version id\|workspace build id\|type\|available workers\|template version name\|template id\|template name\|template display name\|template icon\|workspace id\|workspace name\|logs overflowed\|organization\|queue]</code> |
| Default | <code>created at,id,type,template display name,status,queue,tags</code> |
| | |
|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Type | <code>[id\|created at\|started at\|completed at\|canceled at\|error\|error code\|status\|worker id\|worker name\|file id\|tags\|queue position\|queue size\|organization id\|initiator id\|template version id\|workspace build id\|type\|available workers\|template version name\|template id\|template name\|template display name\|template icon\|workspace id\|workspace name\|workspace build transition\|logs overflowed\|organization\|queue]</code> |
| Default | <code>created at,id,type,template display name,status,queue,tags</code> |
Columns to display in table output.
+19
@@ -197,6 +197,10 @@ func TestServerDBCrypt(t *testing.T) {
gitAuthLinks, err := db.GetExternalAuthLinksByUserID(ctx, usr.ID)
require.NoError(t, err, "failed to get git auth links for user %s", usr.ID)
require.Empty(t, gitAuthLinks)
userSecrets, err := db.ListUserSecretsWithValues(ctx, usr.ID)
require.NoError(t, err, "failed to get user secrets for user %s", usr.ID)
require.Empty(t, userSecrets)
}
// Validate that the key has been revoked in the database.
@@ -242,6 +246,14 @@ func genData(t *testing.T, db database.Store) []database.User {
OAuthRefreshToken: "refresh-" + usr.ID.String(),
})
}
_ = dbgen.UserSecret(t, db, database.UserSecret{
UserID: usr.ID,
Name: "secret-" + usr.ID.String(),
Value: "value-" + usr.ID.String(),
EnvName: "",
FilePath: "",
})
users = append(users, usr)
}
}
@@ -283,6 +295,13 @@ func requireEncryptedWithCipher(ctx context.Context, t *testing.T, db database.S
require.Equal(t, c.HexDigest(), gal.OAuthAccessTokenKeyID.String)
require.Equal(t, c.HexDigest(), gal.OAuthRefreshTokenKeyID.String)
}
userSecrets, err := db.ListUserSecretsWithValues(ctx, userID)
require.NoError(t, err, "failed to get user secrets for user %s", userID)
for _, s := range userSecrets {
requireEncryptedEquals(t, c, "value-"+userID.String(), s.Value)
require.Equal(t, c.HexDigest(), s.ValueKeyID.String)
}
}
// nullCipher is a dbcrypt.Cipher that does not encrypt or decrypt.
@@ -11,7 +11,7 @@ OPTIONS:
-O, --org string, $CODER_ORGANIZATION
Select which organization (uuid or name) to use.
-c, --column [id|created at|started at|completed at|canceled at|error|error code|status|worker id|worker name|file id|tags|queue position|queue size|organization id|initiator id|template version id|workspace build id|type|available workers|template version name|template id|template name|template display name|template icon|workspace id|workspace name|logs overflowed|organization|queue] (default: created at,id,type,template display name,status,queue,tags)
-c, --column [id|created at|started at|completed at|canceled at|error|error code|status|worker id|worker name|file id|tags|queue position|queue size|organization id|initiator id|template version id|workspace build id|type|available workers|template version name|template id|template name|template display name|template icon|workspace id|workspace name|workspace build transition|logs overflowed|organization|queue] (default: created at,id,type,template display name,status,queue,tags)
Columns to display in table output.
-i, --initiator string, $CODER_PROVISIONER_JOB_LIST_INITIATOR
-35
@@ -774,41 +774,6 @@ OIDC OPTIONS:
requirement, and can lead to an insecure OIDC configuration. It is not
recommended to use this flag.
OBJECT STORE OPTIONS:
Configure the object storage backend for binary data (chat files, transcripts,
etc.). Defaults to local filesystem storage.
--objectstore-backend string, $CODER_OBJECTSTORE_BACKEND (default: local)
The storage backend for binary data such as chat files. Valid values:
local, s3, gcs.
--objectstore-gcs-bucket string, $CODER_OBJECTSTORE_GCS_BUCKET
GCS bucket name. Required when the backend is "gcs".
--objectstore-gcs-credentials-file string, $CODER_OBJECTSTORE_GCS_CREDENTIALS_FILE
Path to a GCS service account key file. If empty, Application Default
Credentials are used.
--objectstore-gcs-prefix string, $CODER_OBJECTSTORE_GCS_PREFIX
Optional key prefix within the GCS bucket.
--objectstore-local-dir string, $CODER_OBJECTSTORE_LOCAL_DIR
Root directory for the local filesystem object store backend. Only
used when the backend is "local".
--objectstore-s3-bucket string, $CODER_OBJECTSTORE_S3_BUCKET
S3 bucket name. Required when the backend is "s3".
--objectstore-s3-endpoint string, $CODER_OBJECTSTORE_S3_ENDPOINT
Custom S3-compatible endpoint URL (e.g. for MinIO, R2, Cloudflare).
Leave empty for standard AWS S3.
--objectstore-s3-prefix string, $CODER_OBJECTSTORE_S3_PREFIX
Optional key prefix within the S3 bucket.
--objectstore-s3-region string, $CODER_OBJECTSTORE_S3_REGION
AWS region for the S3 bucket.
PROVISIONING OPTIONS:
Tune the behavior of the provisioner, which is responsible for creating,
updating, and deleting workspace resources.
+58
@@ -96,6 +96,34 @@ func Rotate(ctx context.Context, log slog.Logger, sqlDB *sql.DB, ciphers []Ciphe
}
log.Debug(ctx, "encrypted user chat provider key", slog.F("user_id", uid), slog.F("chat_provider_id", userProviderKey.ChatProviderID), slog.F("current", idx+1), slog.F("cipher", ciphers[0].HexDigest()))
}
userSecrets, err := cryptTx.ListUserSecretsWithValues(ctx, uid)
if err != nil {
return xerrors.Errorf("get user secrets for user %s: %w", uid, err)
}
for _, secret := range userSecrets {
if secret.ValueKeyID.Valid && secret.ValueKeyID.String == ciphers[0].HexDigest() {
log.Debug(ctx, "skipping user secret", slog.F("user_id", uid), slog.F("secret_name", secret.Name), slog.F("current", idx+1), slog.F("cipher", ciphers[0].HexDigest()))
continue
}
if _, err := cryptTx.UpdateUserSecretByUserIDAndName(ctx, database.UpdateUserSecretByUserIDAndNameParams{
UserID: uid,
Name: secret.Name,
UpdateValue: true,
Value: secret.Value,
ValueKeyID: sql.NullString{}, // dbcrypt will re-encrypt
UpdateDescription: false,
Description: "",
UpdateEnvName: false,
EnvName: "",
UpdateFilePath: false,
FilePath: "",
}); err != nil {
return xerrors.Errorf("rotate user secret user_id=%s name=%s: %w", uid, secret.Name, err)
}
log.Debug(ctx, "rotated user secret", slog.F("user_id", uid), slog.F("secret_name", secret.Name), slog.F("current", idx+1), slog.F("cipher", ciphers[0].HexDigest()))
}
return nil
}, &database.TxOptions{
Isolation: sql.LevelRepeatableRead,
@@ -235,6 +263,34 @@ func Decrypt(ctx context.Context, log slog.Logger, sqlDB *sql.DB, ciphers []Ciph
}
log.Debug(ctx, "decrypted user chat provider key", slog.F("user_id", uid), slog.F("chat_provider_id", userProviderKey.ChatProviderID), slog.F("current", idx+1))
}
userSecrets, err := tx.ListUserSecretsWithValues(ctx, uid)
if err != nil {
return xerrors.Errorf("get user secrets for user %s: %w", uid, err)
}
for _, secret := range userSecrets {
if !secret.ValueKeyID.Valid {
log.Debug(ctx, "skipping user secret", slog.F("user_id", uid), slog.F("secret_name", secret.Name), slog.F("current", idx+1))
continue
}
if _, err := tx.UpdateUserSecretByUserIDAndName(ctx, database.UpdateUserSecretByUserIDAndNameParams{
UserID: uid,
Name: secret.Name,
UpdateValue: true,
Value: secret.Value,
ValueKeyID: sql.NullString{}, // clear the key ID
UpdateDescription: false,
Description: "",
UpdateEnvName: false,
EnvName: "",
UpdateFilePath: false,
FilePath: "",
}); err != nil {
return xerrors.Errorf("decrypt user secret user_id=%s name=%s: %w", uid, secret.Name, err)
}
log.Debug(ctx, "decrypted user secret", slog.F("user_id", uid), slog.F("secret_name", secret.Name), slog.F("current", idx+1))
}
return nil
}, &database.TxOptions{
Isolation: sql.LevelRepeatableRead,
@@ -292,6 +348,8 @@ DELETE FROM external_auth_links
OR oauth_refresh_token_key_id IS NOT NULL;
DELETE FROM user_chat_provider_keys
WHERE api_key_key_id IS NOT NULL;
DELETE FROM user_secrets
WHERE value_key_id IS NOT NULL;
UPDATE chat_providers
SET api_key = '',
api_key_key_id = NULL
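The rotation loop added to dbcrypt above follows a skip-or-rewrite shape: records already sealed with the primary cipher's digest are skipped, everything else is decrypted and re-encrypted under the new cipher. A simplified, self-contained Go sketch of that shape (the types and helpers here are illustrative stand-ins, not the real dbcrypt API):

```go
package main

import "fmt"

// record is a stand-in for an encrypted row: ciphertext plus the digest
// of the cipher that sealed it.
type record struct {
	Value string // ciphertext
	KeyID string // digest of the sealing cipher
}

type cipher struct{ digest string }

func (c cipher) HexDigest() string { return c.digest }

// rotate re-seals every record not already under the newest cipher and
// reports how many were rewritten. decrypt/encrypt are injected so the
// sketch stays self-contained.
func rotate(records []record, newest cipher,
	decrypt func(record) string, encrypt func(string) string) ([]record, int) {
	rotated := 0
	for i, r := range records {
		if r.KeyID == newest.HexDigest() {
			continue // already sealed with the primary cipher; skip
		}
		records[i] = record{Value: encrypt(decrypt(r)), KeyID: newest.HexDigest()}
		rotated++
	}
	return records, rotated
}

func main() {
	newest := cipher{digest: "abc123"}
	recs := []record{
		{Value: "old-ct", KeyID: "deadbeef"}, // sealed with a retired cipher
		{Value: "new-ct", KeyID: "abc123"},   // already current
	}
	recs, n := rotate(recs, newest,
		func(r record) string { return "plaintext" },
		func(s string) string { return "resealed-ct" })
	fmt.Println(n)             // prints 1
	fmt.Println(recs[0].KeyID) // prints abc123
}
```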
+54
@@ -717,6 +717,60 @@ func (db *dbCrypt) UpsertMCPServerUserToken(ctx context.Context, params database
return tok, nil
}
func (db *dbCrypt) CreateUserSecret(ctx context.Context, params database.CreateUserSecretParams) (database.UserSecret, error) {
if err := db.encryptField(&params.Value, &params.ValueKeyID); err != nil {
return database.UserSecret{}, err
}
secret, err := db.Store.CreateUserSecret(ctx, params)
if err != nil {
return database.UserSecret{}, err
}
if err := db.decryptField(&secret.Value, secret.ValueKeyID); err != nil {
return database.UserSecret{}, err
}
return secret, nil
}
func (db *dbCrypt) GetUserSecretByUserIDAndName(ctx context.Context, arg database.GetUserSecretByUserIDAndNameParams) (database.UserSecret, error) {
secret, err := db.Store.GetUserSecretByUserIDAndName(ctx, arg)
if err != nil {
return database.UserSecret{}, err
}
if err := db.decryptField(&secret.Value, secret.ValueKeyID); err != nil {
return database.UserSecret{}, err
}
return secret, nil
}
func (db *dbCrypt) ListUserSecretsWithValues(ctx context.Context, userID uuid.UUID) ([]database.UserSecret, error) {
secrets, err := db.Store.ListUserSecretsWithValues(ctx, userID)
if err != nil {
return nil, err
}
for i := range secrets {
if err := db.decryptField(&secrets[i].Value, secrets[i].ValueKeyID); err != nil {
return nil, err
}
}
return secrets, nil
}
func (db *dbCrypt) UpdateUserSecretByUserIDAndName(ctx context.Context, arg database.UpdateUserSecretByUserIDAndNameParams) (database.UserSecret, error) {
if arg.UpdateValue {
if err := db.encryptField(&arg.Value, &arg.ValueKeyID); err != nil {
return database.UserSecret{}, err
}
}
secret, err := db.Store.UpdateUserSecretByUserIDAndName(ctx, arg)
if err != nil {
return database.UserSecret{}, err
}
if err := db.decryptField(&secret.Value, secret.ValueKeyID); err != nil {
return database.UserSecret{}, err
}
return secret, nil
}
func (db *dbCrypt) encryptField(field *string, digest *sql.NullString) error {
// If no cipher is loaded, then we can't encrypt anything!
if db.ciphers == nil || db.primaryCipherDigest == "" {
+195
@@ -1287,3 +1287,198 @@ func TestUserChatProviderKeys(t *testing.T) {
requireEncryptedEquals(t, ciphers[0], rawKey.APIKey, updatedAPIKey)
})
}
func TestUserSecrets(t *testing.T) {
t.Parallel()
ctx := context.Background()
const (
//nolint:gosec // test credentials
initialValue = "super-secret-value-initial"
//nolint:gosec // test credentials
updatedValue = "super-secret-value-updated"
)
insertUserSecret := func(
t *testing.T,
crypt *dbCrypt,
ciphers []Cipher,
) database.UserSecret {
t.Helper()
user := dbgen.User(t, crypt, database.User{})
secret, err := crypt.CreateUserSecret(ctx, database.CreateUserSecretParams{
ID: uuid.New(),
UserID: user.ID,
Name: "test-secret-" + uuid.NewString()[:8],
Value: initialValue,
})
require.NoError(t, err)
require.Equal(t, initialValue, secret.Value)
if len(ciphers) > 0 {
require.Equal(t, ciphers[0].HexDigest(), secret.ValueKeyID.String)
}
return secret
}
t.Run("CreateUserSecretEncryptsValue", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
secret := insertUserSecret(t, crypt, ciphers)
// Reading through crypt should return plaintext.
got, err := crypt.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
UserID: secret.UserID,
Name: secret.Name,
})
require.NoError(t, err)
require.Equal(t, initialValue, got.Value)
// Reading through raw DB should return encrypted value.
raw, err := db.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
UserID: secret.UserID,
Name: secret.Name,
})
require.NoError(t, err)
require.NotEqual(t, initialValue, raw.Value)
requireEncryptedEquals(t, ciphers[0], raw.Value, initialValue)
})
t.Run("ListUserSecretsWithValuesDecrypts", func(t *testing.T) {
t.Parallel()
_, crypt, ciphers := setup(t)
secret := insertUserSecret(t, crypt, ciphers)
secrets, err := crypt.ListUserSecretsWithValues(ctx, secret.UserID)
require.NoError(t, err)
require.Len(t, secrets, 1)
require.Equal(t, initialValue, secrets[0].Value)
})
t.Run("UpdateUserSecretReEncryptsValue", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
secret := insertUserSecret(t, crypt, ciphers)
updated, err := crypt.UpdateUserSecretByUserIDAndName(ctx, database.UpdateUserSecretByUserIDAndNameParams{
UserID: secret.UserID,
Name: secret.Name,
UpdateValue: true,
Value: updatedValue,
ValueKeyID: sql.NullString{},
})
require.NoError(t, err)
require.Equal(t, updatedValue, updated.Value)
require.Equal(t, ciphers[0].HexDigest(), updated.ValueKeyID.String)
// Raw DB should have new encrypted value.
raw, err := db.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
UserID: secret.UserID,
Name: secret.Name,
})
require.NoError(t, err)
require.NotEqual(t, updatedValue, raw.Value)
requireEncryptedEquals(t, ciphers[0], raw.Value, updatedValue)
})
t.Run("NoCipherStoresPlaintext", func(t *testing.T) {
t.Parallel()
db, crypt := setupNoCiphers(t)
user := dbgen.User(t, crypt, database.User{})
secret, err := crypt.CreateUserSecret(ctx, database.CreateUserSecretParams{
ID: uuid.New(),
UserID: user.ID,
Name: "plaintext-secret",
Value: initialValue,
})
require.NoError(t, err)
require.Equal(t, initialValue, secret.Value)
require.False(t, secret.ValueKeyID.Valid)
// Raw DB should also have plaintext.
raw, err := db.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
UserID: user.ID,
Name: "plaintext-secret",
})
require.NoError(t, err)
require.Equal(t, initialValue, raw.Value)
require.False(t, raw.ValueKeyID.Valid)
})
t.Run("UpdateMetadataOnlySkipsEncryption", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
secret := insertUserSecret(t, crypt, ciphers)
// Read the raw encrypted value from the database.
rawBefore, err := db.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
UserID: secret.UserID,
Name: secret.Name,
})
require.NoError(t, err)
// Perform a metadata-only update (no value change).
updated, err := crypt.UpdateUserSecretByUserIDAndName(ctx, database.UpdateUserSecretByUserIDAndNameParams{
UserID: secret.UserID,
Name: secret.Name,
UpdateValue: false,
Value: "",
ValueKeyID: sql.NullString{},
UpdateDescription: true,
Description: "updated description",
UpdateEnvName: false,
EnvName: "",
UpdateFilePath: false,
FilePath: "",
})
require.NoError(t, err)
require.Equal(t, "updated description", updated.Description)
require.Equal(t, initialValue, updated.Value)
// Read the raw encrypted value again.
rawAfter, err := db.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
UserID: secret.UserID,
Name: secret.Name,
})
require.NoError(t, err)
require.Equal(t, rawBefore.Value, rawAfter.Value)
require.Equal(t, rawBefore.ValueKeyID, rawAfter.ValueKeyID)
})
t.Run("GetUserSecretDecryptErr", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
user := dbgen.User(t, db, database.User{})
dbgen.UserSecret(t, db, database.UserSecret{
UserID: user.ID,
Name: "corrupt-secret",
Value: fakeBase64RandomData(t, 32),
ValueKeyID: sql.NullString{String: ciphers[0].HexDigest(), Valid: true},
})
_, err := crypt.GetUserSecretByUserIDAndName(ctx, database.GetUserSecretByUserIDAndNameParams{
UserID: user.ID,
Name: "corrupt-secret",
})
require.Error(t, err)
var derr *DecryptFailedError
require.ErrorAs(t, err, &derr)
})
t.Run("ListUserSecretsWithValuesDecryptErr", func(t *testing.T) {
t.Parallel()
db, crypt, ciphers := setup(t)
user := dbgen.User(t, db, database.User{})
dbgen.UserSecret(t, db, database.UserSecret{
UserID: user.ID,
Name: "corrupt-list-secret",
Value: fakeBase64RandomData(t, 32),
ValueKeyID: sql.NullString{String: ciphers[0].HexDigest(), Valid: true},
})
_, err := crypt.ListUserSecretsWithValues(ctx, user.ID)
require.Error(t, err)
var derr *DecryptFailedError
require.ErrorAs(t, err, &derr)
})
}
+5 -7
@@ -494,7 +494,6 @@ require (
require (
charm.land/fantasy v0.8.1
github.com/anthropics/anthropic-sdk-go v1.19.0
github.com/aws/aws-sdk-go-v2/service/s3 v1.97.3
github.com/brianvoe/gofakeit/v7 v7.14.0
github.com/coder/agentapi-sdk-go v0.0.0-20250505131810-560d1d88d225
github.com/coder/aibridge v1.1.1-0.20260408143328-f72a795f1e77
@@ -509,7 +508,6 @@ require (
github.com/invopop/jsonschema v0.13.0
github.com/mark3labs/mcp-go v0.38.0
github.com/shopspring/decimal v1.4.0
gocloud.dev v0.45.0
gonum.org/v1/gonum v0.17.0
)
@@ -520,7 +518,7 @@ require (
cloud.google.com/go/logging v1.13.2 // indirect
cloud.google.com/go/longrunning v0.8.0 // indirect
cloud.google.com/go/monitoring v1.24.3 // indirect
cloud.google.com/go/storage v1.60.0 // indirect
cloud.google.com/go/storage v1.61.3 // indirect
git.sr.ht/~jackmordaunt/go-toast v1.1.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
@@ -538,9 +536,9 @@ require (
github.com/aquasecurity/trivy v0.61.1-0.20250407075540-f1329c7ea1aa // indirect
github.com/aquasecurity/trivy-checks v1.12.2-0.20251219190323-79d27547baf5 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.8 // indirect
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.13 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.21 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.97.3 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.8 // indirect
github.com/bahlo/generic-list-go v0.2.0 // indirect
github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect
@@ -573,13 +571,13 @@ require (
github.com/go-openapi/swag/stringutils v0.25.4 // indirect
github.com/go-openapi/swag/typeutils v0.25.4 // indirect
github.com/go-openapi/swag/yamlutils v0.25.4 // indirect
github.com/go-sql-driver/mysql v1.9.3 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/goccy/go-yaml v1.19.2 // indirect
github.com/google/go-containerregistry v0.20.7 // indirect
github.com/google/wire v0.7.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.70 // indirect
github.com/hashicorp/go-getter v1.8.4 // indirect
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.72 // indirect
github.com/hashicorp/go-getter v1.8.6 // indirect
github.com/hexops/gotextdiff v1.0.3 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackmordaunt/icns/v3 v3.0.1 // indirect
+8 -18
@@ -18,8 +18,8 @@ cloud.google.com/go/longrunning v0.8.0 h1:LiKK77J3bx5gDLi4SMViHixjD2ohlkwBi+mKA7
cloud.google.com/go/longrunning v0.8.0/go.mod h1:UmErU2Onzi+fKDg2gR7dusz11Pe26aknR4kHmJJqIfk=
cloud.google.com/go/monitoring v1.24.3 h1:dde+gMNc0UhPZD1Azu6at2e79bfdztVDS5lvhOdsgaE=
cloud.google.com/go/monitoring v1.24.3/go.mod h1:nYP6W0tm3N9H/bOw8am7t62YTzZY+zUeQ+Bi6+2eonI=
cloud.google.com/go/storage v1.60.0 h1:oBfZrSOCimggVNz9Y/bXY35uUcts7OViubeddTTVzQ8=
cloud.google.com/go/storage v1.60.0/go.mod h1:q+5196hXfejkctrnx+VYU8RKQr/L3c0cBIlrjmiAKE0=
cloud.google.com/go/storage v1.61.3 h1:VS//ZfBuPGDvakfD9xyPW1RGF1Vy3BWUoVZXgW1KMOg=
cloud.google.com/go/storage v1.61.3/go.mod h1:JtqK8BBB7TWv0HVGHubtUdzYYrakOQIsMLffZ2Z/HWk=
cloud.google.com/go/trace v1.11.7 h1:kDNDX8JkaAG3R2nq1lIdkb7FCSi1rCmsEtKVsty7p+U=
cloud.google.com/go/trace v1.11.7/go.mod h1:TNn9d5V3fQVf6s4SCveVMIBS2LJUqo73GACmq/Tky0s=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
@@ -171,8 +171,6 @@ github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.20 h1:zOgq3uezl5nznfoK3ODuqb
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.20/go.mod h1:z/MVwUARehy6GAg/yQ1GO2IMl0k++cu1ohP9zo887wE=
github.com/aws/aws-sdk-go-v2/feature/rds/auth v1.6.14 h1:gKXU53GYsPuYgkdTdMHh6vNdcbIgoxFQLQGjg+iRG+k=
github.com/aws/aws-sdk-go-v2/feature/rds/auth v1.6.14/go.mod h1:jyoemRAktfCyZR9bTb5gT3kn/Vj2KwYDm0Pev5TsmEQ=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12 h1:Zy6Tme1AA13kX8x3CnkHx5cqdGWGaj/anwOiWGnA0Xo=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.12/go.mod h1:ql4uXYKoTM9WUAUSmthY4AtPVrlTBZOvnBJTiCUdPxI=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.21 h1:Rgg6wvjjtX8bNHcvi9OnXWwcE0a2vGpbwmtICOsvcf4=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.21/go.mod h1:A/kJFst/nm//cyqonihbdpQZwiUhhzpqTsdbhDdRF9c=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.21 h1:PEgGVtPoB6NTpPrBgqSE5hE/o47Ij9qk/SEZFbUOe9A=
@@ -661,10 +659,6 @@ github.com/google/go-github/v61 v61.0.0 h1:VwQCBwhyE9JclCI+22/7mLB1PuU9eowCXKY5p
github.com/google/go-github/v61 v61.0.0/go.mod h1:0WR+KmsWX75G2EbpyGsGmradjo3IiciuI4BmdVCobQY=
github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
github.com/google/go-replayers/grpcreplay v1.3.0 h1:1Keyy0m1sIpqstQmgz307zhiJ1pV4uIlFds5weTmxbo=
github.com/google/go-replayers/grpcreplay v1.3.0/go.mod h1:v6NgKtkijC0d3e3RW8il6Sy5sqRVUwoQa4mHOGEy8DI=
github.com/google/go-replayers/httpreplay v1.2.0 h1:VM1wEyyjaoU53BwrOnaf9VhAyQQEEioJvFYxYcLRKzk=
github.com/google/go-replayers/httpreplay v1.2.0/go.mod h1:WahEFFZZ7a1P4VM1qEeHy+tME4bwyqPcwWbNlUI1Mcg=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
@@ -681,8 +675,6 @@ github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaU
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/wire v0.7.0 h1:JxUKI6+CVBgCO2WToKy/nQk0sS+amI9z9EjVmdaocj4=
github.com/google/wire v0.7.0/go.mod h1:n6YbUQD9cPKTnHXEBN2DXlOp/mVADhVErcMFb0v3J18=
github.com/googleapis/enterprise-certificate-proxy v0.3.14 h1:yh8ncqsbUY4shRD5dA6RlzjJaT4hi3kII+zYw8wmLb8=
github.com/googleapis/enterprise-certificate-proxy v0.3.14/go.mod h1:vqVt9yG9480NtzREnTlmGSBmFrA+bzb0yl0TxoBQXOg=
github.com/googleapis/gax-go/v2 v2.19.0 h1:fYQaUOiGwll0cGj7jmHT/0nPlcrZDFPrZRhTsoCr8hE=
@@ -695,8 +687,8 @@ github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
github.com/hairyhenderson/go-codeowners v0.7.0 h1:s0W4wF8bdsBEjTWzwzSlsatSthWtTAF2xLgo4a4RwAo=
github.com/hairyhenderson/go-codeowners v0.7.0/go.mod h1:wUlNgQ3QjqC4z8DnM5nnCYVq/icpqXJyJOukKx5U8/Q=
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.70 h1:0HADrxxqaQkGycO1JoUUA+B4FnIkuo8d2bz/hSaTFFQ=
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.70/go.mod h1:fm2FdDCzJdtbXF7WKAMvBb5NEPouXPHFbGNYs9ShFns=
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.72 h1:vTCWu1wbdYo7PEZFem/rlr01+Un+wwVmI7wiegFdRLk=
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.72/go.mod h1:Vn+BBgKQHVQYdVQ4NZDICE1Brb+JfaONyDHr3q07oQc=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
@@ -706,8 +698,8 @@ github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9n
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-cty v1.5.0 h1:EkQ/v+dDNUqnuVpmS5fPqyY71NXVgT5gf32+57xY8g0=
github.com/hashicorp/go-cty v1.5.0/go.mod h1:lFUCG5kd8exDobgSfyj4ONE/dc822kiYMguVKdHGMLM=
github.com/hashicorp/go-getter v1.8.4 h1:hGEd2xsuVKgwkMtPVufq73fAmZU/x65PPcqH3cb0D9A=
github.com/hashicorp/go-getter v1.8.4/go.mod h1:x27pPGSg9kzoB147QXI8d/nDvp2IgYGcwuRjpaXE9Yg=
github.com/hashicorp/go-getter v1.8.6 h1:9sQboWULaydVphxc4S64oAI4YqpuCk7nPmvbk131ebY=
github.com/hashicorp/go-getter v1.8.6/go.mod h1:nVH12eOV2P58dIiL3rsU6Fh3wLeJEKBOJzhMmzlSWoo=
github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
@@ -1330,8 +1322,8 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.40.0 h1:DvJDO
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.40.0/go.mod h1:EtekO9DEJb4/jRyN4v4Qjc2yA7AtfCBuz2FynRUWTXs=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.39.0 h1:5gn2urDL/FBnK8OkCfD1j3/ER79rUuTYmCvlXBKeYL8=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.39.0/go.mod h1:0fBG6ZJxhqByfFZDwSwpZGzJU671HkwpWaNe2t4VUPI=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.40.0 h1:ZrPRak/kS4xI3AVXy8F7pipuDXmDsrO8Lg+yQjBLjw0=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.40.0/go.mod h1:3y6kQCWztq6hyW8Z9YxQDDm0Je9AJoFar2G0yDcmhRk=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.37.0 h1:SNhVp/9q4Go/XHBkQ1/d5u9P/U+L1yaGPoi0x+mStaI=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.37.0/go.mod h1:tx8OOlGH6R4kLV67YaYO44GFXloEjGPZuMjEkaaqIp4=
go.opentelemetry.io/otel/metric v1.42.0 h1:2jXG+3oZLNXEPfNmnpxKDeZsFI5o4J+nz6xUlaFdF/4=
@@ -1367,8 +1359,6 @@ go4.org/mem v0.0.0-20220726221520-4f986261bf13 h1:CbZeCBZ0aZj8EfVgnqQcYZgf0lpZ3H
go4.org/mem v0.0.0-20220726221520-4f986261bf13/go.mod h1:reUoABIJ9ikfM5sgtSF3Wushcza7+WeD01VB9Lirh3g=
go4.org/netipx v0.0.0-20230728180743-ad4cb58a6516 h1:X66ZEoMN2SuaoI/dfZVYobB6E5zjZyyHUMWlCA7MgGE=
go4.org/netipx v0.0.0-20230728180743-ad4cb58a6516/go.mod h1:TQvodOM+hJTioNQJilmLXu08JNb8i+ccq418+KWu1/Y=
gocloud.dev v0.45.0 h1:WknIK8IbRdmynDvara3Q7G6wQhmEiOGwpgJufbM39sY=
gocloud.dev v0.45.0/go.mod h1:0kXKmkCLG6d31N7NyLZWzt7jDSQura9zD/mWgiB6THI=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200117160349-530e935923ad/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-6
@@ -235,12 +235,6 @@ coderd_db_tx_executions_count{success="",retries="",tx_id=""} 0
# HELP coderd_dbpurge_iteration_duration_seconds Duration of each dbpurge iteration in seconds.
# TYPE coderd_dbpurge_iteration_duration_seconds histogram
coderd_dbpurge_iteration_duration_seconds{success=""} 0
# HELP coderd_dbpurge_objstore_delete_inflight Number of object store files currently enqueued for deletion.
# TYPE coderd_dbpurge_objstore_delete_inflight gauge
coderd_dbpurge_objstore_delete_inflight 0
# HELP coderd_dbpurge_objstore_files_deleted_total Total number of object store files successfully deleted.
# TYPE coderd_dbpurge_objstore_files_deleted_total counter
coderd_dbpurge_objstore_files_deleted_total 0
# HELP coderd_dbpurge_records_purged_total Total number of records purged by type.
# TYPE coderd_dbpurge_records_purged_total counter
coderd_dbpurge_records_purged_total{record_type=""} 0
@@ -0,0 +1,44 @@
import { describe, expect, it } from "vitest";
import type * as TypesGen from "#/api/typesGenerated";
import { buildOptimisticEditedMessage } from "./chatMessageEdits";
const makeUserMessage = (
content: readonly TypesGen.ChatMessagePart[] = [
{ type: "text", text: "original" },
],
): TypesGen.ChatMessage => ({
id: 1,
chat_id: "chat-1",
created_at: "2025-01-01T00:00:00.000Z",
role: "user",
content,
});
describe("buildOptimisticEditedMessage", () => {
it("preserves image MIME types for newly attached files", () => {
const message = buildOptimisticEditedMessage({
requestContent: [{ type: "file", file_id: "image-1" }],
originalMessage: makeUserMessage(),
attachmentMediaTypes: new Map([["image-1", "image/png"]]),
});
expect(message.content).toEqual([
{ type: "file", file_id: "image-1", media_type: "image/png" },
]);
});
it("reuses existing file parts before local attachment metadata", () => {
const existingFilePart: TypesGen.ChatFilePart = {
type: "file",
file_id: "existing-1",
media_type: "image/jpeg",
};
const message = buildOptimisticEditedMessage({
requestContent: [{ type: "file", file_id: "existing-1" }],
originalMessage: makeUserMessage([existingFilePart]),
attachmentMediaTypes: new Map([["existing-1", "text/plain"]]),
});
expect(message.content).toEqual([existingFilePart]);
});
});
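The two tests above pin down a precedence order for file parts: an existing cached part wins over locally tracked attachment metadata, which wins over an octet-stream fallback. A minimal standalone sketch of that resolution order (using hypothetical simplified shapes rather than the generated `TypesGen` types):

```typescript
// Hypothetical simplified shape; the real code uses TypesGen.ChatFilePart.
type FilePart = { type: "file"; file_id: string; media_type: string };

// Resolution order: existing cached part > local attachment metadata > fallback.
const resolveFilePart = (
  fileId: string,
  existing: ReadonlyMap<string, FilePart>,
  attachmentMediaTypes?: ReadonlyMap<string, string>,
): FilePart =>
  existing.get(fileId) ?? {
    type: "file",
    file_id: fileId,
    media_type: attachmentMediaTypes?.get(fileId) ?? "application/octet-stream",
  };

const existing = new Map<string, FilePart>([
  ["a", { type: "file", file_id: "a", media_type: "image/jpeg" }],
]);
const local = new Map([
  ["a", "text/plain"],
  ["b", "image/png"],
]);

console.log(resolveFilePart("a", existing, local).media_type); // "image/jpeg"
console.log(resolveFilePart("b", existing, local).media_type); // "image/png"
console.log(resolveFilePart("c", existing, local).media_type); // "application/octet-stream"
```

Preferring the existing part means a stale local `Map` entry (like `text/plain` for `"a"` above) can never overwrite server-known metadata.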
+148
@@ -0,0 +1,148 @@
import type { InfiniteData } from "react-query";
import type * as TypesGen from "#/api/typesGenerated";
const buildOptimisticEditedContent = ({
requestContent,
originalMessage,
attachmentMediaTypes,
}: {
requestContent: readonly TypesGen.ChatInputPart[];
originalMessage: TypesGen.ChatMessage;
attachmentMediaTypes?: ReadonlyMap<string, string>;
}): readonly TypesGen.ChatMessagePart[] => {
const existingFilePartsByID = new Map<string, TypesGen.ChatFilePart>();
for (const part of originalMessage.content ?? []) {
if (part.type === "file" && part.file_id) {
existingFilePartsByID.set(part.file_id, part);
}
}
return requestContent.map((part): TypesGen.ChatMessagePart => {
if (part.type === "text") {
return { type: "text", text: part.text ?? "" };
}
if (part.type === "file-reference") {
return {
type: "file-reference",
file_name: part.file_name ?? "",
start_line: part.start_line ?? 1,
end_line: part.end_line ?? 1,
content: part.content ?? "",
};
}
const fileId = part.file_id ?? "";
return (
existingFilePartsByID.get(fileId) ?? {
type: "file",
file_id: part.file_id,
media_type:
attachmentMediaTypes?.get(fileId) ?? "application/octet-stream",
}
);
});
};
export const buildOptimisticEditedMessage = ({
requestContent,
originalMessage,
attachmentMediaTypes,
}: {
requestContent: readonly TypesGen.ChatInputPart[];
originalMessage: TypesGen.ChatMessage;
attachmentMediaTypes?: ReadonlyMap<string, string>;
}): TypesGen.ChatMessage => ({
...originalMessage,
content: buildOptimisticEditedContent({
requestContent,
originalMessage,
attachmentMediaTypes,
}),
});
const sortMessagesDescending = (
messages: readonly TypesGen.ChatMessage[],
): TypesGen.ChatMessage[] => [...messages].sort((a, b) => b.id - a.id);
const upsertFirstPageMessage = (
messages: readonly TypesGen.ChatMessage[],
message: TypesGen.ChatMessage,
): TypesGen.ChatMessage[] => {
const byID = new Map(
messages.map((existingMessage) => [existingMessage.id, existingMessage]),
);
byID.set(message.id, message);
return sortMessagesDescending(Array.from(byID.values()));
};
export const projectEditedConversationIntoCache = ({
currentData,
editedMessageId,
replacementMessage,
queuedMessages,
}: {
currentData: InfiniteData<TypesGen.ChatMessagesResponse> | undefined;
editedMessageId: number;
replacementMessage?: TypesGen.ChatMessage;
queuedMessages?: readonly TypesGen.ChatQueuedMessage[];
}): InfiniteData<TypesGen.ChatMessagesResponse> | undefined => {
if (!currentData?.pages?.length) {
return currentData;
}
const truncatedPages = currentData.pages.map((page, pageIndex) => {
const truncatedMessages = page.messages.filter(
(message) => message.id < editedMessageId,
);
const nextPage = {
...page,
...(pageIndex === 0 && queuedMessages !== undefined
? { queued_messages: queuedMessages }
: {}),
};
if (pageIndex !== 0 || !replacementMessage) {
return { ...nextPage, messages: truncatedMessages };
}
return {
...nextPage,
messages: upsertFirstPageMessage(truncatedMessages, replacementMessage),
};
});
return {
...currentData,
pages: truncatedPages,
};
};
export const reconcileEditedMessageInCache = ({
currentData,
optimisticMessageId,
responseMessage,
}: {
currentData: InfiniteData<TypesGen.ChatMessagesResponse> | undefined;
optimisticMessageId: number;
responseMessage: TypesGen.ChatMessage;
}): InfiniteData<TypesGen.ChatMessagesResponse> | undefined => {
if (!currentData?.pages?.length) {
return currentData;
}
const replacedPages = currentData.pages.map((page, pageIndex) => {
const preservedMessages = page.messages.filter(
(message) =>
message.id !== optimisticMessageId && message.id !== responseMessage.id,
);
if (pageIndex !== 0) {
return { ...page, messages: preservedMessages };
}
return {
...page,
messages: upsertFirstPageMessage(preservedMessages, responseMessage),
};
});
return {
...currentData,
pages: replacedPages,
};
};
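The core behavior of `projectEditedConversationIntoCache` — truncate the edited message and everything newer, then upsert the optimistic replacement at the head of the first page — can be sketched in isolation. This is a simplified illustration with hypothetical `Msg`/`Page` shapes, not the real `TypesGen` types:

```typescript
// Hypothetical simplified shapes; the real helpers operate on
// InfiniteData<TypesGen.ChatMessagesResponse>.
type Msg = { id: number; text: string };
type Page = { messages: Msg[]; queued: Msg[] };

// Drop the edited message and everything newer, clear queued messages,
// and re-insert the optimistic replacement at the head of page 0.
const projectEdit = (
  pages: Page[],
  editedId: number,
  replacement?: Msg,
): Page[] =>
  pages.map((page, i) => {
    const kept = page.messages.filter((m) => m.id < editedId);
    if (i !== 0 || !replacement) {
      return { ...page, messages: kept };
    }
    // Upsert by id, then restore newest-first order.
    const byId = new Map(kept.map((m) => [m.id, m]));
    byId.set(replacement.id, replacement);
    return {
      ...page,
      queued: [],
      messages: [...byId.values()].sort((a, b) => b.id - a.id),
    };
  });

const pages: Page[] = [
  {
    messages: [5, 4, 3, 2, 1].map((id) => ({ id, text: `m${id}` })),
    queued: [{ id: 11, text: "queued" }],
  },
];
const out = projectEdit(pages, 3, { id: 3, text: "edited" });
console.log(out[0].messages.map((m) => m.id)); // [ 3, 2, 1 ]
console.log(out[0].queued); // []
```

Upserting by id (rather than unshifting) is what lets a WebSocket handler that races the mutation write the same message without producing duplicates.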
+177 -19
@@ -2,6 +2,7 @@ import { QueryClient } from "react-query";
import { describe, expect, it, vi } from "vitest";
import { API } from "#/api/api";
import type * as TypesGen from "#/api/typesGenerated";
import { buildOptimisticEditedMessage } from "./chatMessageEdits";
import {
archiveChat,
cancelChatListRefetches,
@@ -795,14 +796,44 @@ describe("mutation invalidation scope", () => {
content: [{ type: "text" as const, text: `msg ${id}` }],
});
const makeQueuedMessage = (
chatId: string,
id: number,
): TypesGen.ChatQueuedMessage => ({
id,
chat_id: chatId,
created_at: `2025-01-01T00:10:${String(id).padStart(2, "0")}Z`,
content: [{ type: "text" as const, text: `queued ${id}` }],
});
const editReq = {
content: [{ type: "text" as const, text: "edited" }],
};
it("editChatMessage optimistically removes truncated messages from cache", async () => {
const requireMessage = (
messages: readonly TypesGen.ChatMessage[],
messageId: number,
): TypesGen.ChatMessage => {
const message = messages.find((candidate) => candidate.id === messageId);
if (!message) {
throw new Error(`missing message ${messageId}`);
}
return message;
};
const buildOptimisticMessage = (message: TypesGen.ChatMessage) =>
buildOptimisticEditedMessage({
originalMessage: message,
requestContent: editReq.content,
});
it("editChatMessage writes the optimistic replacement into cache", async () => {
const queryClient = createTestQueryClient();
const chatId = "chat-1";
const messages = [5, 4, 3, 2, 1].map((id) => makeMsg(chatId, id));
const optimisticMessage = buildOptimisticMessage(
requireMessage(messages, 3),
);
queryClient.setQueryData<InfMessages>(chatMessagesKey(chatId), {
pages: [{ messages, queued_messages: [], has_more: false }],
@@ -812,18 +843,58 @@ describe("mutation invalidation scope", () => {
const mutation = editChatMessage(queryClient, chatId);
const context = await mutation.onMutate({
messageId: 3,
optimisticMessage,
req: editReq,
});
const data = queryClient.getQueryData<InfMessages>(chatMessagesKey(chatId));
expect(data?.pages[0]?.messages.map((m) => m.id)).toEqual([2, 1]);
expect(data?.pages[0]?.messages.map((message) => message.id)).toEqual([
3, 2, 1,
]);
expect(data?.pages[0]?.messages[0]?.content).toEqual(
optimisticMessage.content,
);
expect(context?.previousData?.pages[0]?.messages).toHaveLength(5);
});
it("editChatMessage clears queued messages in cache during optimistic history edit", async () => {
const queryClient = createTestQueryClient();
const chatId = "chat-1";
const messages = [5, 4, 3, 2, 1].map((id) => makeMsg(chatId, id));
const optimisticMessage = buildOptimisticMessage(
requireMessage(messages, 3),
);
const queuedMessages = [makeQueuedMessage(chatId, 11)];
queryClient.setQueryData<InfMessages>(chatMessagesKey(chatId), {
pages: [
{
messages,
queued_messages: queuedMessages,
has_more: false,
},
],
pageParams: [undefined],
});
const mutation = editChatMessage(queryClient, chatId);
await mutation.onMutate({
messageId: 3,
optimisticMessage,
req: editReq,
});
const data = queryClient.getQueryData<InfMessages>(chatMessagesKey(chatId));
expect(data?.pages[0]?.queued_messages).toEqual([]);
});
it("editChatMessage restores cache on error", async () => {
const queryClient = createTestQueryClient();
const chatId = "chat-1";
const messages = [5, 4, 3, 2, 1].map((id) => makeMsg(chatId, id));
const optimisticMessage = buildOptimisticMessage(
requireMessage(messages, 3),
);
queryClient.setQueryData<InfMessages>(chatMessagesKey(chatId), {
pages: [{ messages, queued_messages: [], has_more: false }],
@@ -833,22 +904,85 @@ describe("mutation invalidation scope", () => {
const mutation = editChatMessage(queryClient, chatId);
const context = await mutation.onMutate({
messageId: 3,
optimisticMessage,
req: editReq,
});
expect(
queryClient.getQueryData<InfMessages>(chatMessagesKey(chatId))?.pages[0]
?.messages,
).toHaveLength(2);
).toHaveLength(3);
mutation.onError(
new Error("network failure"),
{ messageId: 3, req: editReq },
{ messageId: 3, optimisticMessage, req: editReq },
context,
);
const data = queryClient.getQueryData<InfMessages>(chatMessagesKey(chatId));
expect(data?.pages[0]?.messages.map((m) => m.id)).toEqual([5, 4, 3, 2, 1]);
expect(data?.pages[0]?.messages.map((message) => message.id)).toEqual([
5, 4, 3, 2, 1,
]);
});
it("editChatMessage preserves websocket-upserted newer messages on success", async () => {
const queryClient = createTestQueryClient();
const chatId = "chat-1";
const messages = [5, 4, 3, 2, 1].map((id) => makeMsg(chatId, id));
const optimisticMessage = buildOptimisticMessage(
requireMessage(messages, 3),
);
const responseMessage = {
...makeMsg(chatId, 9),
content: [{ type: "text" as const, text: "edited authoritative" }],
};
const websocketMessage = {
...makeMsg(chatId, 10),
content: [{ type: "text" as const, text: "assistant follow-up" }],
role: "assistant" as const,
};
queryClient.setQueryData<InfMessages>(chatMessagesKey(chatId), {
pages: [{ messages, queued_messages: [], has_more: false }],
pageParams: [undefined],
});
const mutation = editChatMessage(queryClient, chatId);
await mutation.onMutate({
messageId: 3,
optimisticMessage,
req: editReq,
});
queryClient.setQueryData<InfMessages | undefined>(
chatMessagesKey(chatId),
(current) => {
if (!current) {
return current;
}
return {
...current,
pages: [
{
...current.pages[0],
messages: [websocketMessage, ...current.pages[0].messages],
},
...current.pages.slice(1),
],
};
},
);
mutation.onSuccess(
{ message: responseMessage },
{ messageId: 3, optimisticMessage, req: editReq },
);
const data = queryClient.getQueryData<InfMessages>(chatMessagesKey(chatId));
expect(data?.pages[0]?.messages.map((message) => message.id)).toEqual([
10, 9, 2, 1,
]);
expect(data?.pages[0]?.messages[1]?.content).toEqual(
responseMessage.content,
);
});
it("editChatMessage onMutate is a no-op when cache is empty", async () => {
@@ -890,13 +1024,14 @@ describe("mutation invalidation scope", () => {
expect(data?.pages[0]?.messages.map((m) => m.id)).toEqual([3, 2, 1]);
});
it("editChatMessage onMutate filters across multiple pages", async () => {
it("editChatMessage onMutate updates the first page and preserves older pages", async () => {
const queryClient = createTestQueryClient();
const chatId = "chat-1";
// Page 0 (newest): IDs 10–6. Page 1 (older): IDs 5–1.
const page0 = [10, 9, 8, 7, 6].map((id) => makeMsg(chatId, id));
const page1 = [5, 4, 3, 2, 1].map((id) => makeMsg(chatId, id));
const optimisticMessage = buildOptimisticMessage(requireMessage(page0, 7));
queryClient.setQueryData<InfMessages>(chatMessagesKey(chatId), {
pages: [
@@ -907,19 +1042,28 @@ describe("mutation invalidation scope", () => {
});
const mutation = editChatMessage(queryClient, chatId);
await mutation.onMutate({ messageId: 7, req: editReq });
await mutation.onMutate({
messageId: 7,
optimisticMessage,
req: editReq,
});
const data = queryClient.getQueryData<InfMessages>(chatMessagesKey(chatId));
// Page 0: only ID 6 survives (< 7).
expect(data?.pages[0]?.messages.map((m) => m.id)).toEqual([6]);
// Page 1: all survive (all < 7).
expect(data?.pages[1]?.messages.map((m) => m.id)).toEqual([5, 4, 3, 2, 1]);
expect(data?.pages[0]?.messages.map((message) => message.id)).toEqual([
7, 6,
]);
expect(data?.pages[1]?.messages.map((message) => message.id)).toEqual([
5, 4, 3, 2, 1,
]);
});
it("editChatMessage onMutate editing the first message empties all pages", async () => {
it("editChatMessage onMutate keeps the optimistic replacement when editing the first message", async () => {
const queryClient = createTestQueryClient();
const chatId = "chat-1";
const messages = [5, 4, 3, 2, 1].map((id) => makeMsg(chatId, id));
const optimisticMessage = buildOptimisticMessage(
requireMessage(messages, 1),
);
queryClient.setQueryData<InfMessages>(chatMessagesKey(chatId), {
pages: [{ messages, queued_messages: [], has_more: false }],
@@ -927,20 +1071,25 @@ describe("mutation invalidation scope", () => {
});
const mutation = editChatMessage(queryClient, chatId);
await mutation.onMutate({ messageId: 1, req: editReq });
await mutation.onMutate({
messageId: 1,
optimisticMessage,
req: editReq,
});
const data = queryClient.getQueryData<InfMessages>(chatMessagesKey(chatId));
// All messages have id >= 1, so the page is empty.
expect(data?.pages[0]?.messages).toHaveLength(0);
// Sibling fields survive the spread.
expect(data?.pages[0]?.messages.map((message) => message.id)).toEqual([1]);
expect(data?.pages[0]?.queued_messages).toEqual([]);
expect(data?.pages[0]?.has_more).toBe(false);
});
it("editChatMessage onMutate editing the latest message keeps earlier ones", async () => {
it("editChatMessage onMutate keeps earlier messages when editing the latest message", async () => {
const queryClient = createTestQueryClient();
const chatId = "chat-1";
const messages = [5, 4, 3, 2, 1].map((id) => makeMsg(chatId, id));
const optimisticMessage = buildOptimisticMessage(
requireMessage(messages, 5),
);
queryClient.setQueryData<InfMessages>(chatMessagesKey(chatId), {
pages: [{ messages, queued_messages: [], has_more: false }],
@@ -948,10 +1097,19 @@ describe("mutation invalidation scope", () => {
});
const mutation = editChatMessage(queryClient, chatId);
await mutation.onMutate({ messageId: 5, req: editReq });
await mutation.onMutate({
messageId: 5,
optimisticMessage,
req: editReq,
});
const data = queryClient.getQueryData<InfMessages>(chatMessagesKey(chatId));
expect(data?.pages[0]?.messages.map((m) => m.id)).toEqual([4, 3, 2, 1]);
expect(data?.pages[0]?.messages.map((message) => message.id)).toEqual([
5, 4, 3, 2, 1,
]);
expect(data?.pages[0]?.messages[0]?.content).toEqual(
optimisticMessage.content,
);
});
it("interruptChat does not invalidate unrelated queries", async () => {
+36 -27
@@ -6,6 +6,10 @@ import type {
import { API } from "#/api/api";
import type * as TypesGen from "#/api/typesGenerated";
import type { UsePaginatedQueryOptions } from "#/hooks/usePaginatedQuery";
import {
projectEditedConversationIntoCache,
reconcileEditedMessageInCache,
} from "./chatMessageEdits";
export const chatsKey = ["chats"] as const;
export const chatKey = (chatId: string) => ["chats", chatId] as const;
@@ -601,13 +605,21 @@ export const createChatMessage = (
type EditChatMessageMutationArgs = {
messageId: number;
optimisticMessage?: TypesGen.ChatMessage;
req: TypesGen.EditChatMessageRequest;
};
type EditChatMessageMutationContext = {
previousData?: InfiniteData<TypesGen.ChatMessagesResponse> | undefined;
};
export const editChatMessage = (queryClient: QueryClient, chatId: string) => ({
mutationFn: ({ messageId, req }: EditChatMessageMutationArgs) =>
API.experimental.editChatMessage(chatId, messageId, req),
onMutate: async ({ messageId }: EditChatMessageMutationArgs) => {
onMutate: async ({
messageId,
optimisticMessage,
}: EditChatMessageMutationArgs): Promise<EditChatMessageMutationContext> => {
// Cancel in-flight refetches so they don't overwrite the
// optimistic update before the mutation completes.
await queryClient.cancelQueries({
@@ -619,40 +631,23 @@ export const editChatMessage = (queryClient: QueryClient, chatId: string) => ({
InfiniteData<TypesGen.ChatMessagesResponse>
>(chatMessagesKey(chatId));
// Optimistically remove the edited message and everything
// after it. The server soft-deletes these and inserts a
// replacement with a new ID. Without this, the WebSocket
// handler's upsertCacheMessages adds new messages to the
// React Query cache without removing the soft-deleted ones,
// causing deleted messages to flash back into view until
// the full REST refetch resolves.
queryClient.setQueryData<
InfiniteData<TypesGen.ChatMessagesResponse> | undefined
>(chatMessagesKey(chatId), (current) => {
if (!current?.pages?.length) {
return current;
}
return {
...current,
pages: current.pages.map((page) => ({
...page,
messages: page.messages.filter((m) => m.id < messageId),
})),
};
});
>(chatMessagesKey(chatId), (current) =>
projectEditedConversationIntoCache({
currentData: current,
editedMessageId: messageId,
replacementMessage: optimisticMessage,
queuedMessages: [],
}),
);
return { previousData };
},
onError: (
_error: unknown,
_variables: EditChatMessageMutationArgs,
context:
| {
previousData?:
| InfiniteData<TypesGen.ChatMessagesResponse>
| undefined;
}
| undefined,
context: EditChatMessageMutationContext | undefined,
) => {
// Restore the cache on failure so the user sees the
// original messages again.
@@ -660,6 +655,20 @@ export const editChatMessage = (queryClient: QueryClient, chatId: string) => ({
queryClient.setQueryData(chatMessagesKey(chatId), context.previousData);
}
},
onSuccess: (
response: TypesGen.EditChatMessageResponse,
variables: EditChatMessageMutationArgs,
) => {
queryClient.setQueryData<
InfiniteData<TypesGen.ChatMessagesResponse> | undefined
>(chatMessagesKey(chatId), (current) =>
reconcileEditedMessageInCache({
currentData: current,
optimisticMessageId: variables.messageId,
responseMessage: response.message,
}),
);
},
onSettled: () => {
// Always reconcile with the server regardless of whether
// the mutation succeeded or failed. On success this picks
+119
@@ -0,0 +1,119 @@
import { describe, expect, it, vi } from "vitest";
import { API } from "#/api/api";
import type { AuthorizationCheck, Organization } from "#/api/typesGenerated";
import { permittedOrganizations } from "./organizations";
// Mock the API module
vi.mock("#/api/api", () => ({
API: {
getOrganizations: vi.fn(),
checkAuthorization: vi.fn(),
},
}));
const MockOrg1: Organization = {
id: "org-1",
name: "org-one",
display_name: "Org One",
description: "",
icon: "",
created_at: "",
updated_at: "",
is_default: true,
};
const MockOrg2: Organization = {
id: "org-2",
name: "org-two",
display_name: "Org Two",
description: "",
icon: "",
created_at: "",
updated_at: "",
is_default: false,
};
const templateCreateCheck: AuthorizationCheck = {
object: { resource_type: "template" },
action: "create",
};
describe("permittedOrganizations", () => {
it("returns query config with correct queryKey", () => {
const config = permittedOrganizations(templateCreateCheck);
expect(config.queryKey).toEqual([
"organizations",
"permitted",
templateCreateCheck,
]);
});
it("fetches orgs and filters by permission check", async () => {
const getOrgsMock = vi.mocked(API.getOrganizations);
const checkAuthMock = vi.mocked(API.checkAuthorization);
getOrgsMock.mockResolvedValue([MockOrg1, MockOrg2]);
checkAuthMock.mockResolvedValue({
"org-1": true,
"org-2": false,
});
const config = permittedOrganizations(templateCreateCheck);
const result = await config.queryFn!();
// Should only return org-1 (which passed the check)
expect(result).toEqual([MockOrg1]);
// Verify the auth check was called with per-org checks
expect(checkAuthMock).toHaveBeenCalledWith({
checks: {
"org-1": {
...templateCreateCheck,
object: {
...templateCreateCheck.object,
organization_id: "org-1",
},
},
"org-2": {
...templateCreateCheck,
object: {
...templateCreateCheck.object,
organization_id: "org-2",
},
},
},
});
});
it("returns all orgs when all pass the check", async () => {
const getOrgsMock = vi.mocked(API.getOrganizations);
const checkAuthMock = vi.mocked(API.checkAuthorization);
getOrgsMock.mockResolvedValue([MockOrg1, MockOrg2]);
checkAuthMock.mockResolvedValue({
"org-1": true,
"org-2": true,
});
const config = permittedOrganizations(templateCreateCheck);
const result = await config.queryFn!();
expect(result).toEqual([MockOrg1, MockOrg2]);
});
it("returns empty array when no orgs pass the check", async () => {
const getOrgsMock = vi.mocked(API.getOrganizations);
const checkAuthMock = vi.mocked(API.checkAuthorization);
getOrgsMock.mockResolvedValue([MockOrg1, MockOrg2]);
checkAuthMock.mockResolvedValue({
"org-1": false,
"org-2": false,
});
const config = permittedOrganizations(templateCreateCheck);
const result = await config.queryFn!();
expect(result).toEqual([]);
});
});
+27 -1
@@ -5,6 +5,7 @@ import {
type GetProvisionerJobsParams,
} from "#/api/api";
import type {
AuthorizationCheck,
CreateOrganizationRequest,
GroupSyncSettings,
Organization,
@@ -160,7 +161,7 @@ export const updateOrganizationMemberRoles = (
};
};
export const organizationsKey = ["organizations"] as const;
const organizationsKey = ["organizations"] as const;
const notAvailable = { available: false, value: undefined } as const;
@@ -295,6 +296,31 @@ export const provisionerJobs = (
};
};
/**
* Fetch organizations the current user is permitted to use for a given
* action. Fetches all organizations, runs a per-org authorization
* check, and returns only those that pass.
*/
export const permittedOrganizations = (check: AuthorizationCheck) => {
return {
queryKey: ["organizations", "permitted", check],
queryFn: async (): Promise<Organization[]> => {
const orgs = await API.getOrganizations();
const checks = Object.fromEntries(
orgs.map((org) => [
org.id,
{
...check,
object: { ...check.object, organization_id: org.id },
},
]),
);
const permissions = await API.checkAuthorization({ checks });
return orgs.filter((org) => permissions[org.id]);
},
};
};
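The new `permittedOrganizations` query above fans a single authorization check out into one scoped check per organization, then keeps only the orgs whose check passed. A minimal framework-free sketch of that expand-and-filter step (the `Org`/`Check` shapes and helper names here are simplified stand-ins, not the real site API):

```typescript
// Simplified stand-ins for the generated API types.
type Org = { id: string; name: string };
type Check = { action: string; object: Record<string, unknown> };

// Expand one check into a per-org map, scoping each copy to its org.
const buildChecks = (orgs: Org[], check: Check): Record<string, Check> =>
  Object.fromEntries(
    orgs.map((org): [string, Check] => [
      org.id,
      { ...check, object: { ...check.object, organization_id: org.id } },
    ]),
  );

// Keep only the orgs whose permission result came back true.
const filterPermitted = (
  orgs: Org[],
  permissions: Record<string, boolean>,
): Org[] => orgs.filter((org) => permissions[org.id]);

const orgs: Org[] = [
  { id: "org-1", name: "org-one" },
  { id: "org-2", name: "org-two" },
];
const checks = buildChecks(orgs, {
  action: "create",
  object: { resource_type: "template" },
});
const permitted = filterPermitted(orgs, { "org-1": true, "org-2": false });
```

This mirrors the query's two round trips: one fetch for the org list, one batched `checkAuthorization` call keyed by org id, with filtering done client-side.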
/**
* Fetch permissions for all provided organizations.
*
+2 -48
@@ -3256,7 +3256,6 @@ export interface DeploymentValues {
readonly hide_ai_tasks?: boolean;
readonly ai?: AIConfig;
readonly stats_collection?: StatsCollectionConfig;
readonly object_store?: ObjectStoreConfig;
readonly config?: string;
readonly write_config?: boolean;
/**
@@ -4984,53 +4983,6 @@ export interface OIDCConfig {
readonly redirect_url: string;
}
// From codersdk/deployment.go
/**
* RetentionConfig contains configuration for data retention policies.
* ObjectStoreConfig configures the object storage backend used for
* binary data such as chat files and transcripts.
*/
export interface ObjectStoreConfig {
/**
* Backend selects the storage backend: "local" (default), "s3", or "gcs".
*/
readonly backend: string;
/**
* LocalDir is the root directory for the local filesystem backend.
* Only used when Backend is "local". Defaults to <config-dir>/objectstore/.
*/
readonly local_dir: string;
/**
* S3Bucket is the S3 bucket name. Required when Backend is "s3".
*/
readonly s3_bucket: string;
/**
* S3Region is the AWS region for the S3 bucket.
*/
readonly s3_region: string;
/**
* S3Prefix is an optional key prefix within the S3 bucket.
*/
readonly s3_prefix: string;
/**
* S3Endpoint is a custom S3-compatible endpoint URL (for MinIO, R2, etc.).
*/
readonly s3_endpoint: string;
/**
* GCSBucket is the GCS bucket name. Required when Backend is "gcs".
*/
readonly gcs_bucket: string;
/**
* GCSPrefix is an optional key prefix within the GCS bucket.
*/
readonly gcs_prefix: string;
/**
* GCSCredentialsFile is an optional path to a GCS service account
* key file. If empty, Application Default Credentials are used.
*/
readonly gcs_credentials_file: string;
}
// From codersdk/parameters.go
export type OptionType = "bool" | "list(string)" | "number" | "string";
@@ -5663,6 +5615,7 @@ export interface ProvisionerJobMetadata {
readonly template_icon: string;
readonly workspace_id?: string;
readonly workspace_name?: string;
readonly workspace_build_transition?: WorkspaceTransition;
}
// From codersdk/provisionerdaemons.go
@@ -6144,6 +6097,7 @@ export interface ResumeTaskResponse {
// From codersdk/deployment.go
/**
* RetentionConfig contains configuration for data retention policies.
* These settings control how long various types of data are retained in the database
* before being automatically purged. Setting a value to 0 disables retention for that
* data type (data is kept indefinitely).
@@ -1,18 +1,14 @@
import type { Meta, StoryObj } from "@storybook/react-vite";
import { action } from "storybook/actions";
import { userEvent, within } from "storybook/test";
import {
MockOrganization,
MockOrganization2,
MockUserOwner,
} from "#/testHelpers/entities";
import { expect, fn, screen, userEvent, waitFor, within } from "storybook/test";
import { MockOrganization, MockOrganization2 } from "#/testHelpers/entities";
import { OrganizationAutocomplete } from "./OrganizationAutocomplete";
const meta: Meta<typeof OrganizationAutocomplete> = {
title: "components/OrganizationAutocomplete",
component: OrganizationAutocomplete,
args: {
onChange: action("Selected organization"),
onChange: fn(),
options: [MockOrganization, MockOrganization2],
},
};
@@ -20,36 +16,51 @@ export default meta;
type Story = StoryObj<typeof OrganizationAutocomplete>;
export const ManyOrgs: Story = {
parameters: {
showOrganizations: true,
user: MockUserOwner,
features: ["multiple_organizations"],
permissions: { viewDeploymentConfig: true },
queries: [
{
key: ["organizations"],
data: [MockOrganization, MockOrganization2],
},
],
args: {
value: null,
},
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
const button = canvas.getByRole("button");
await userEvent.click(button);
await waitFor(() => {
expect(
screen.getByText(MockOrganization.display_name),
).toBeInTheDocument();
expect(
screen.getByText(MockOrganization2.display_name),
).toBeInTheDocument();
});
},
};
export const WithValue: Story = {
args: {
value: MockOrganization2,
},
play: async ({ canvasElement, args }) => {
const canvas = within(canvasElement);
await waitFor(() => {
expect(
canvas.getByText(MockOrganization2.display_name),
).toBeInTheDocument();
});
expect(args.onChange).not.toHaveBeenCalled();
},
};
export const OneOrg: Story = {
parameters: {
showOrganizations: true,
user: MockUserOwner,
features: ["multiple_organizations"],
permissions: { viewDeploymentConfig: true },
queries: [
{
key: ["organizations"],
data: [MockOrganization],
},
],
args: {
value: MockOrganization,
options: [MockOrganization],
},
play: async ({ canvasElement, args }) => {
const canvas = within(canvasElement);
await waitFor(() => {
expect(
canvas.getByText(MockOrganization.display_name),
).toBeInTheDocument();
});
expect(args.onChange).not.toHaveBeenCalled();
},
};
@@ -1,9 +1,6 @@
import { Check } from "lucide-react";
import { type FC, useEffect, useState } from "react";
import { useQuery } from "react-query";
import { checkAuthorization } from "#/api/queries/authCheck";
import { organizations } from "#/api/queries/organizations";
import type { AuthorizationCheck, Organization } from "#/api/typesGenerated";
import { type FC, useState } from "react";
import type { Organization } from "#/api/typesGenerated";
import { ChevronDownIcon } from "#/components/AnimatedIcons/ChevronDown";
import { Avatar } from "#/components/Avatar/Avatar";
import { Button } from "#/components/Button/Button";
@@ -22,62 +19,21 @@ import {
} from "#/components/Popover/Popover";
type OrganizationAutocompleteProps = {
value: Organization | null;
onChange: (organization: Organization | null) => void;
options: Organization[];
id?: string;
required?: boolean;
check?: AuthorizationCheck;
};
export const OrganizationAutocomplete: FC<OrganizationAutocompleteProps> = ({
value,
onChange,
options,
id,
required,
check,
}) => {
const [open, setOpen] = useState(false);
const [selected, setSelected] = useState<Organization | null>(null);
const organizationsQuery = useQuery(organizations());
const checks =
check &&
organizationsQuery.data &&
Object.fromEntries(
organizationsQuery.data.map((org) => [
org.id,
{
...check,
object: { ...check.object, organization_id: org.id },
},
]),
);
const permissionsQuery = useQuery({
...checkAuthorization({ checks: checks ?? {} }),
enabled: Boolean(check && organizationsQuery.data),
});
// If an authorization check was provided, filter the organizations based on
// the results of that check.
let options = organizationsQuery.data ?? [];
if (check) {
options = permissionsQuery.data
? options.filter((org) => permissionsQuery.data[org.id])
: [];
}
// Unfortunate: this useEffect sets a default org value
// if only one is available and is necessary as the autocomplete loads
// its own data. Until we refactor, proceed cautiously!
useEffect(() => {
const org = options[0];
if (options.length !== 1 || org === selected) {
return;
}
setSelected(org);
onChange(org);
}, [options, selected, onChange]);
return (
<Popover open={open} onOpenChange={setOpen}>
@@ -90,14 +46,14 @@ export const OrganizationAutocomplete: FC<OrganizationAutocompleteProps> = ({
data-testid="organization-autocomplete"
className="w-full justify-start gap-2 font-normal"
>
{selected ? (
{value ? (
<>
<Avatar
size="sm"
src={selected.icon}
fallback={selected.display_name}
src={value.icon}
fallback={value.display_name}
/>
<span className="truncate">{selected.display_name}</span>
<span className="truncate">{value.display_name}</span>
</>
) : (
<span className="text-content-secondary">
@@ -121,7 +77,6 @@ export const OrganizationAutocomplete: FC<OrganizationAutocompleteProps> = ({
key={org.id}
value={`${org.display_name} ${org.name}`}
onSelect={() => {
setSelected(org);
onChange(org);
setOpen(false);
}}
@@ -134,7 +89,7 @@ export const OrganizationAutocomplete: FC<OrganizationAutocompleteProps> = ({
<span className="truncate">
{org.display_name || org.name}
</span>
{selected?.id === org.id && (
{value?.id === org.id && (
<Check className="ml-auto size-icon-sm shrink-0" />
)}
</CommandItem>
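The hunks above convert `OrganizationAutocomplete` from internal `selected` state to a fully controlled component: the rendered selection is derived from the `value` prop, and picks are only reported to the parent via `onChange`. A toy sketch of that controlled contract, with illustrative names rather than the real component:

```typescript
type Org = { id: string; display_name: string };

type AutocompleteProps = {
  value: Org | null; // parent-owned selection
  onChange: (org: Org | null) => void;
  options: Org[];
};

// The label is derived purely from props; no local selected state.
const renderLabel = (props: AutocompleteProps): string =>
  props.value ? props.value.display_name : "Select organization";

// Selecting an option only notifies the parent.
const select = (props: AutocompleteProps, id: string): void => {
  const org = props.options.find((o) => o.id === id) ?? null;
  props.onChange(org);
};

// The parent owns the state, as the refactored component expects.
let current: Org | null = null;
const props: AutocompleteProps = {
  value: current,
  onChange: (org) => {
    current = org;
  },
  options: [{ id: "org-2", display_name: "Org Two" }],
};
select(props, "org-2");
```

Moving ownership to the parent is what lets the removed `useEffect` default-selection hack go away: whoever renders the component can pick a default before mounting it.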
@@ -1,5 +1,5 @@
import type { Meta, StoryObj } from "@storybook/react-vite";
import { screen, spyOn, userEvent, within } from "storybook/test";
import { expect, screen, spyOn, userEvent, within } from "storybook/test";
import { API } from "#/api/api";
import { getPreferredProxy } from "#/contexts/ProxyContext";
import { chromatic } from "#/testHelpers/chromatic";
@@ -57,6 +57,13 @@ export const HasError: Story = {
agent: undefined,
},
},
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
const moreActionsButton = canvas.getByRole("button", {
name: "Dev Container actions",
});
expect(moreActionsButton).toBeVisible();
},
};
export const NoPorts: Story = {};
@@ -123,6 +130,13 @@ export const NoContainerOrSubAgent: Story = {
},
subAgents: [],
},
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
const moreActionsButton = canvas.getByRole("button", {
name: "Dev Container actions",
});
expect(moreActionsButton).toBeVisible();
},
};
export const NoContainerOrAgentOrName: Story = {
@@ -274,7 +274,7 @@ export const AgentDevcontainerCard: FC<AgentDevcontainerCardProps> = ({
/>
)}
{showDevcontainerControls && (
{!isTransitioning && (
<AgentDevcontainerMoreActions
deleteDevContainer={deleteDevcontainerMutation.mutate}
/>
@@ -89,7 +89,7 @@ export const WorkspaceBuildLogs: FC<WorkspaceBuildLogsProps> = ({
<div
className={cn(
"logs-header",
"flex items-center border-solid border-0 border-b border-border font-sans",
"flex items-center border-solid border-0 border-b last:border-b-0 border-border font-sans",
"bg-surface-primary text-xs font-semibold leading-none",
"first-of-type:pt-4",
)}
@@ -4,9 +4,12 @@ import { beforeEach, describe, expect, it, vi } from "vitest";
import {
draftInputStorageKeyPrefix,
getPersistedDraftInputValue,
restoreOptimisticRequestSnapshot,
useConversationEditingState,
} from "./AgentChatPage";
import type { ChatMessageInputRef } from "./components/AgentChatInput";
import { createChatStore } from "./components/ChatConversation/chatStore";
import type { PendingAttachment } from "./components/ChatPageContent";
type MockChatInputHandle = {
handle: ChatMessageInputRef;
@@ -84,6 +87,41 @@ describe("getPersistedDraftInputValue", () => {
});
});
describe("restoreOptimisticRequestSnapshot", () => {
it("restores queued messages, stream output, status, and stream error", () => {
const store = createChatStore();
store.setQueuedMessages([
{
id: 9,
chat_id: "chat-abc-123",
created_at: "2025-01-01T00:00:00.000Z",
content: [{ type: "text" as const, text: "queued" }],
},
]);
store.setChatStatus("running");
store.applyMessagePart({ type: "text", text: "partial response" });
store.setStreamError({ kind: "generic", message: "old error" });
const previousSnapshot = store.getSnapshot();
store.batch(() => {
store.setQueuedMessages([]);
store.setChatStatus("pending");
store.clearStreamState();
store.clearStreamError();
});
restoreOptimisticRequestSnapshot(store, previousSnapshot);
const restoredSnapshot = store.getSnapshot();
expect(restoredSnapshot.queuedMessages).toEqual(
previousSnapshot.queuedMessages,
);
expect(restoredSnapshot.chatStatus).toBe(previousSnapshot.chatStatus);
expect(restoredSnapshot.streamState).toBe(previousSnapshot.streamState);
expect(restoredSnapshot.streamError).toEqual(previousSnapshot.streamError);
});
});
describe("useConversationEditingState", () => {
const chatID = "chat-abc-123";
const expectedKey = `${draftInputStorageKeyPrefix}${chatID}`;
@@ -327,6 +365,64 @@ describe("useConversationEditingState", () => {
unmount();
});
it("forwards pending attachments through history-edit send", async () => {
const { result, onSend, unmount } = renderEditing();
const attachments: PendingAttachment[] = [
{ fileId: "file-1", mediaType: "image/png" },
];
act(() => {
result.current.handleEditUserMessage(7, "hello");
});
await act(async () => {
await result.current.handleSendFromInput("hello", attachments);
});
expect(onSend).toHaveBeenCalledWith("hello", attachments, 7);
unmount();
});
it("restores the edit draft and file-block seed when an edit submission fails", async () => {
const { result, onSend, unmount } = renderEditing();
const mockInput = createMockChatInputHandle("edited message");
const fileBlocks = [
{ type: "file", file_id: "file-1", media_type: "image/png" },
] as const;
result.current.chatInputRef.current = mockInput.handle;
onSend.mockRejectedValueOnce(new Error("boom"));
const editorState = JSON.stringify({
root: {
children: [
{
children: [{ text: "edited message" }],
type: "paragraph",
},
],
type: "root",
},
});
act(() => {
result.current.handleEditUserMessage(7, "edited message", fileBlocks);
result.current.handleContentChange("edited message", editorState, false);
});
await act(async () => {
await expect(
result.current.handleSendFromInput("edited message"),
).rejects.toThrow("boom");
});
expect(mockInput.clear).toHaveBeenCalled();
expect(result.current.inputValueRef.current).toBe("edited message");
expect(result.current.editingMessageId).toBe(7);
expect(result.current.editingFileBlocks).toEqual(fileBlocks);
expect(result.current.editorInitialValue).toBe("edited message");
expect(result.current.initialEditorState).toBe(editorState);
unmount();
});
it("clears the composer and persisted draft after a successful send", async () => {
localStorage.setItem(expectedKey, "draft to clear");
const { result, onSend, unmount } = renderEditing();
+136 -30
@@ -11,6 +11,7 @@ import { toast } from "sonner";
import type { UrlTransform } from "streamdown";
import { API, watchWorkspace } from "#/api/api";
import { isApiError } from "#/api/errors";
import { buildOptimisticEditedMessage } from "#/api/queries/chatMessageEdits";
import {
chat,
chatDesktopEnabled,
@@ -51,11 +52,14 @@ import {
getWorkspaceAgent,
} from "./components/ChatConversation/chatHelpers";
import {
type ChatStore,
type ChatStoreState,
selectChatStatus,
useChatSelector,
useChatStore,
} from "./components/ChatConversation/chatStore";
import { useWorkspaceCreationWatcher } from "./components/ChatConversation/useWorkspaceCreationWatcher";
import type { PendingAttachment } from "./components/ChatPageContent";
import {
getDefaultMCPSelection,
getSavedMCPSelection,
@@ -101,12 +105,47 @@ export function getPersistedDraftInputValue(
).text;
}
/** @internal Exported for testing. */
export const restoreOptimisticRequestSnapshot = (
store: Pick<
ChatStore,
| "batch"
| "setChatStatus"
| "setQueuedMessages"
| "setStreamError"
| "setStreamState"
>,
snapshot: Pick<
ChatStoreState,
"chatStatus" | "queuedMessages" | "streamError" | "streamState"
>,
): void => {
store.batch(() => {
store.setQueuedMessages(snapshot.queuedMessages);
store.setChatStatus(snapshot.chatStatus);
store.setStreamState(snapshot.streamState);
store.setStreamError(snapshot.streamError);
});
};
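`restoreOptimisticRequestSnapshot` above batches several setters to roll the chat store back to a snapshot captured before an optimistic mutation. A minimal sketch of that capture/restore pattern with a toy store (state shape deliberately simplified):

```typescript
type Snapshot = { chatStatus: string; queuedMessages: number[] };

const createToyStore = () => {
  let state: Snapshot = { chatStatus: "pending", queuedMessages: [] };
  return {
    // Snapshot copies the array so later mutations can't alias it.
    getSnapshot: (): Snapshot => ({
      chatStatus: state.chatStatus,
      queuedMessages: [...state.queuedMessages],
    }),
    setChatStatus: (s: string) => {
      state = { ...state, chatStatus: s };
    },
    setQueuedMessages: (q: number[]) => {
      state = { ...state, queuedMessages: q };
    },
  };
};

const store = createToyStore();
store.setChatStatus("running");
store.setQueuedMessages([9]);

// Capture before the optimistic update...
const previous = store.getSnapshot();
store.setQueuedMessages([]);
store.setChatStatus("pending");

// ...and restore it when the request fails.
store.setQueuedMessages(previous.queuedMessages);
store.setChatStatus(previous.chatStatus);
```

Restoring every field from one snapshot (rather than re-deriving individual pieces, as `handlePromoteQueuedMessage` did before this change) keeps the rollback correct even as new store fields are added.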
const buildAttachmentMediaTypes = (
attachments?: readonly PendingAttachment[],
): ReadonlyMap<string, string> | undefined => {
if (!attachments?.length) {
return undefined;
}
return new Map(
attachments.map(({ fileId, mediaType }) => [fileId, mediaType]),
);
};
/** @internal Exported for testing. */
export function useConversationEditingState(deps: {
chatID: string | undefined;
onSend: (
message: string,
fileIds?: string[],
attachments?: readonly PendingAttachment[],
editedMessageID?: number,
) => Promise<void>;
onDeleteQueuedMessage: (id: number) => Promise<void>;
@@ -130,6 +169,9 @@ export function useConversationEditingState(deps: {
};
},
);
const serializedEditorStateRef = useRef<string | undefined>(
initialEditorState,
);
// Monotonic counter to force LexicalComposer remount.
const [remountKey, setRemountKey] = useState(0);
@@ -176,6 +218,7 @@ export function useConversationEditingState(deps: {
editorInitialValue: text,
initialEditorState: undefined,
});
serializedEditorStateRef.current = undefined;
setRemountKey((k) => k + 1);
inputValueRef.current = text;
setEditingFileBlocks(fileBlocks ?? []);
@@ -188,6 +231,7 @@ export function useConversationEditingState(deps: {
editorInitialValue: savedText,
initialEditorState: savedState,
});
serializedEditorStateRef.current = savedState;
setRemountKey((k) => k + 1);
inputValueRef.current = savedText;
setEditingMessageId(null);
@@ -221,6 +265,7 @@ export function useConversationEditingState(deps: {
editorInitialValue: text,
initialEditorState: undefined,
});
serializedEditorStateRef.current = undefined;
setRemountKey((k) => k + 1);
inputValueRef.current = text;
setEditingFileBlocks(fileBlocks);
@@ -233,6 +278,7 @@ export function useConversationEditingState(deps: {
editorInitialValue: savedText,
initialEditorState: savedState,
});
serializedEditorStateRef.current = savedState;
setRemountKey((k) => k + 1);
inputValueRef.current = savedText;
setEditingQueuedMessageID(null);
@@ -240,25 +286,48 @@ export function useConversationEditingState(deps: {
setEditingFileBlocks([]);
};
// Wraps the parent onSend to clear local input/editing state
// and handle queue-edit deletion.
const handleSendFromInput = async (message: string, fileIds?: string[]) => {
const editedMessageID =
editingMessageId !== null ? editingMessageId : undefined;
const queueEditID = editingQueuedMessageID;
// Clears the composer for an in-flight history edit and
// returns a rollback function that restores the editing draft
// if the send fails.
const clearInputForHistoryEdit = (message: string) => {
const snapshot = {
editorState: serializedEditorStateRef.current,
fileBlocks: editingFileBlocks,
messageId: editingMessageId,
};
await onSend(message, fileIds, editedMessageID);
// Clear input and editing state on success.
chatInputRef.current?.clear();
inputValueRef.current = "";
setEditingMessageId(null);
return () => {
setDraftState({
editorInitialValue: message,
initialEditorState: snapshot.editorState,
});
serializedEditorStateRef.current = snapshot.editorState;
setRemountKey((k) => k + 1);
inputValueRef.current = message;
setEditingMessageId(snapshot.messageId);
setEditingFileBlocks(snapshot.fileBlocks);
};
};
// Clears all input and editing state after a successful send.
const finalizeSuccessfulSend = (
editedMessageID: number | undefined,
queueEditID: number | null,
) => {
chatInputRef.current?.clear();
if (!isMobileViewport()) {
chatInputRef.current?.focus();
}
inputValueRef.current = "";
serializedEditorStateRef.current = undefined;
if (draftStorageKey) {
localStorage.removeItem(draftStorageKey);
}
if (editingMessageId !== null) {
setEditingMessageId(null);
if (editedMessageID !== undefined) {
setDraftBeforeHistoryEdit(null);
setEditingFileBlocks([]);
}
@@ -270,12 +339,41 @@ export function useConversationEditingState(deps: {
}
};
// Wraps the parent onSend to clear local input/editing state
// and handle queue-edit deletion.
const handleSendFromInput = async (
message: string,
attachments?: readonly PendingAttachment[],
) => {
const editedMessageID =
editingMessageId !== null ? editingMessageId : undefined;
const queueEditID = editingQueuedMessageID;
const sendPromise = onSend(message, attachments, editedMessageID);
// For history edits, clear input immediately and prepare
// a rollback in case the send fails.
const rollback =
editedMessageID !== undefined
? clearInputForHistoryEdit(message)
: undefined;
try {
await sendPromise;
} catch (error) {
rollback?.();
throw error;
}
finalizeSuccessfulSend(editedMessageID, queueEditID);
};
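For history edits, `handleSendFromInput` above clears the composer immediately and keeps a rollback that restores the draft if the send rejects. A stripped-down sketch of that optimistic-clear-with-rollback flow (toy composer, not the real chat input ref):

```typescript
const makeComposer = (initial: string) => {
  let draft = initial;
  return {
    read: () => draft,
    write: (v: string) => {
      draft = v;
    },
  };
};

const sendWithRollback = async (
  composer: ReturnType<typeof makeComposer>,
  send: (msg: string) => Promise<void>,
): Promise<void> => {
  const message = composer.read();
  composer.write(""); // optimistic clear so the input feels instant
  try {
    await send(message);
  } catch (error) {
    composer.write(message); // rollback: the user keeps their draft
    throw error; // rethrow so callers still observe the failure
  }
};
```

Rethrowing after the rollback matters: the page-level caller also needs the rejection to restore the store snapshot and surface usage-limit errors.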
const handleContentChange = (
content: string,
serializedEditorState: string,
hasFileReferences: boolean,
) => {
inputValueRef.current = content;
serializedEditorStateRef.current = serializedEditorState;
// Don't overwrite the persisted draft while editing a
// history or queued message — the original draft (possibly
@@ -430,9 +528,6 @@ const AgentChatPage: FC = () => {
} = useOutletContext<AgentsOutletContext>();
const queryClient = useQueryClient();
const [selectedModel, setSelectedModel] = useState("");
const [pendingEditMessageId, setPendingEditMessageId] = useState<
number | null
>(null);
const scrollToBottomRef = useRef<(() => void) | null>(null);
const chatInputRef = useRef<ChatMessageInputRef | null>(null);
const inputValueRef = useRef(
@@ -775,7 +870,7 @@ const AgentChatPage: FC = () => {
const handleSend = async (
message: string,
fileIds?: string[],
attachments?: readonly PendingAttachment[],
editedMessageID?: number,
) => {
const chatInputHandle = (
@@ -790,7 +885,9 @@ const AgentChatPage: FC = () => {
(p) => p.type === "file-reference",
);
const hasContent =
message.trim() || (fileIds && fileIds.length > 0) || hasFileReferences;
message.trim() ||
(attachments && attachments.length > 0) ||
hasFileReferences;
if (!hasContent || isSubmissionPending || !agentId || !hasModelOptions) {
return;
}
@@ -818,28 +915,41 @@ const AgentChatPage: FC = () => {
}
}
// Add pre-uploaded file references.
if (fileIds && fileIds.length > 0) {
for (const fileId of fileIds) {
// Add pre-uploaded file attachments.
if (attachments && attachments.length > 0) {
for (const { fileId } of attachments) {
content.push({ type: "file", file_id: fileId });
}
}
if (editedMessageID !== undefined) {
const request: TypesGen.EditChatMessageRequest = { content };
const originalEditedMessage = chatMessagesList?.find(
(existingMessage) => existingMessage.id === editedMessageID,
);
const optimisticMessage = originalEditedMessage
? buildOptimisticEditedMessage({
requestContent: request.content,
originalMessage: originalEditedMessage,
attachmentMediaTypes: buildAttachmentMediaTypes(attachments),
})
: undefined;
const previousSnapshot = store.getSnapshot();
clearChatErrorReason(agentId);
clearStreamError();
setPendingEditMessageId(editedMessageID);
store.batch(() => {
store.setQueuedMessages([]);
store.setChatStatus("running");
store.clearStreamState();
});
scrollToBottomRef.current?.();
try {
await editMessage({
messageId: editedMessageID,
optimisticMessage,
req: request,
});
store.clearStreamState();
store.setChatStatus("running");
setPendingEditMessageId(null);
} catch (error) {
setPendingEditMessageId(null);
restoreOptimisticRequestSnapshot(store, previousSnapshot);
handleUsageLimitError(error);
throw error;
}
@@ -918,10 +1028,8 @@ const AgentChatPage: FC = () => {
const handlePromoteQueuedMessage = async (id: number) => {
const previousSnapshot = store.getSnapshot();
const previousQueuedMessages = previousSnapshot.queuedMessages;
const previousChatStatus = previousSnapshot.chatStatus;
store.setQueuedMessages(
previousQueuedMessages.filter((message) => message.id !== id),
previousSnapshot.queuedMessages.filter((message) => message.id !== id),
);
store.clearStreamState();
if (agentId) {
@@ -937,8 +1045,7 @@ const AgentChatPage: FC = () => {
store.upsertDurableMessage(promotedMessage);
upsertCacheMessages([promotedMessage]);
} catch (error) {
store.setQueuedMessages(previousQueuedMessages);
store.setChatStatus(previousChatStatus);
restoreOptimisticRequestSnapshot(store, previousSnapshot);
handleUsageLimitError(error);
throw error;
}
@@ -1133,7 +1240,6 @@ const AgentChatPage: FC = () => {
workspaceAgent={workspaceAgent}
store={store}
editing={editing}
pendingEditMessageId={pendingEditMessageId}
effectiveSelectedModel={effectiveSelectedModel}
setSelectedModel={setSelectedModel}
modelOptions={modelOptions}
@@ -113,7 +113,6 @@ const StoryAgentChatPageView: FC<StoryProps> = ({ editing, ...overrides }) => {
parentChat: undefined as TypesGen.Chat | undefined,
isArchived: false,
store: createChatStore(),
pendingEditMessageId: null as number | null,
effectiveSelectedModel: defaultModelConfigID,
setSelectedModel: fn(),
modelOptions: defaultModelOptions,
@@ -474,16 +473,6 @@ const buildStoreWithMessages = (
return store;
};
const gapTestStore = createChatStore();
gapTestStore.replaceMessages([
buildMessage(1, "user", "Explain the layout."),
buildMessage(2, "assistant", "Here is the explanation."),
buildMessage(3, "user", "Can you elaborate?"),
]);
gapTestStore.applyMessageParts([
{ type: "text", text: "Certainly, here are more details..." },
]);
// ---------------------------------------------------------------------------
// Editing flow stories
// ---------------------------------------------------------------------------
@@ -515,42 +504,6 @@ export const EditingMessage: Story = {
),
};
/** The saving state while an edit is in progress shows the pending
* indicator on the message being saved. */
export const EditingSaving: Story = {
render: () => (
<StoryAgentChatPageView
store={buildStoreWithMessages(editingMessages)}
editing={{
editingMessageId: 3,
editorInitialValue: "Now tell me a better joke",
}}
pendingEditMessageId={3}
isSubmissionPending
/>
),
};
export const ConsistentGapBetweenTimelineAndStream: Story = {
render: () => <StoryAgentChatPageView store={gapTestStore} />,
play: async ({ canvasElement }) => {
const wrapper = canvasElement.querySelector(
'[data-testid="chat-timeline-wrapper"]',
);
expect(wrapper).not.toBeNull();
const outerGap = window.getComputedStyle(wrapper!).rowGap;
expect(outerGap).toBe("8px");
const timeline = wrapper!.querySelector(
'[data-testid="conversation-timeline"]',
);
expect(timeline).not.toBeNull();
const innerGap = window.getComputedStyle(timeline!).rowGap;
expect(innerGap).toBe("8px");
},
};
// ---------------------------------------------------------------------------
// AgentChatPageNotFoundView stories
// ---------------------------------------------------------------------------
@@ -36,6 +36,7 @@ import {
import type { useChatStore } from "./components/ChatConversation/chatStore";
import type { ModelSelectorOption } from "./components/ChatElements";
import { DesktopPanelContext } from "./components/ChatElements/tools/DesktopPanelContext";
import type { PendingAttachment } from "./components/ChatPageContent";
import { ChatPageInput, ChatPageTimeline } from "./components/ChatPageContent";
import { ChatScrollContainer } from "./components/ChatScrollContainer";
import { ChatTopBar } from "./components/ChatTopBar";
@@ -69,7 +70,10 @@ interface EditingState {
fileBlocks: readonly ChatMessagePart[],
) => void;
handleCancelQueueEdit: () => void;
handleSendFromInput: (message: string, fileIds?: string[]) => void;
handleSendFromInput: (
message: string,
attachments?: readonly PendingAttachment[],
) => void;
handleContentChange: (
content: string,
serializedEditorState: string,
@@ -92,7 +96,6 @@ interface AgentChatPageViewProps {
// Editing state.
editing: EditingState;
pendingEditMessageId: number | null;
// Model/input configuration.
effectiveSelectedModel: string;
@@ -179,7 +182,6 @@ export const AgentChatPageView: FC<AgentChatPageViewProps> = ({
workspace,
store,
editing,
pendingEditMessageId,
effectiveSelectedModel,
setSelectedModel,
modelOptions,
@@ -387,7 +389,6 @@ export const AgentChatPageView: FC<AgentChatPageViewProps> = ({
persistedError={persistedError}
onEditUserMessage={editing.handleEditUserMessage}
editingMessageId={editing.editingMessageId}
savingMessageId={pendingEditMessageId}
urlTransform={urlTransform}
mcpServers={mcpServers}
/>
@@ -668,9 +668,8 @@ export const AgentChatInput: FC<AgentChatInputProps> = ({
<div className="flex items-center justify-between border-b border-border-warning/50 px-3 py-1.5">
<span className="flex items-center gap-1.5 text-xs font-medium text-content-warning">
<PencilIcon className="h-3.5 w-3.5" />
{isLoading
? "Saving edit..."
: "Editing will delete all subsequent messages and restart the conversation here."}
Editing will delete all subsequent messages and restart the
conversation here.
</span>
<Button
type="button"
@@ -12,7 +12,6 @@ import type { UrlTransform } from "streamdown";
import type * as TypesGen from "#/api/typesGenerated";
import { Button } from "#/components/Button/Button";
import { CopyButton } from "#/components/CopyButton/CopyButton";
import { Spinner } from "#/components/Spinner/Spinner";
import {
Tooltip,
TooltipContent,
@@ -427,7 +426,6 @@ const ChatMessageItem = memo<{
fileBlocks?: readonly TypesGen.ChatMessagePart[],
) => void;
editingMessageId?: number | null;
savingMessageId?: number | null;
isAfterEditingMessage?: boolean;
hideActions?: boolean;
@@ -446,7 +444,6 @@ const ChatMessageItem = memo<{
parsed,
onEditUserMessage,
editingMessageId,
savingMessageId,
isAfterEditingMessage = false,
hideActions = false,
fadeFromBottom = false,
@@ -458,7 +455,6 @@ const ChatMessageItem = memo<{
showDesktopPreviews,
}) => {
const isUser = message.role === "user";
const isSavingMessage = savingMessageId === message.id;
const [previewImage, setPreviewImage] = useState<string | null>(null);
const [previewText, setPreviewText] = useState<string | null>(null);
if (
@@ -541,7 +537,6 @@ const ChatMessageItem = memo<{
"rounded-lg border border-solid border-border-default bg-surface-secondary px-3 py-2 font-sans shadow-sm transition-shadow",
editingMessageId === message.id &&
"border-surface-secondary shadow-[0_0_0_2px_hsla(var(--border-warning),0.6)]",
isSavingMessage && "ring-2 ring-content-secondary/40",
fadeFromBottom && "relative overflow-hidden",
)}
style={
@@ -572,13 +567,6 @@ const ChatMessageItem = memo<{
: parsed.markdown || ""}
</span>
)}
{isSavingMessage && (
<Spinner
className="mt-0.5 h-3.5 w-3.5 shrink-0 text-content-secondary"
aria-label="Saving message edit"
loading
/>
)}
</div>
)}
{hasFileBlocks && (
@@ -704,7 +692,6 @@ const StickyUserMessage = memo<{
fileBlocks?: readonly TypesGen.ChatMessagePart[],
) => void;
editingMessageId?: number | null;
savingMessageId?: number | null;
isAfterEditingMessage?: boolean;
}>(
({
@@ -712,7 +699,6 @@ const StickyUserMessage = memo<{
parsed,
onEditUserMessage,
editingMessageId,
savingMessageId,
isAfterEditingMessage = false,
}) => {
const [isStuck, setIsStuck] = useState(false);
@@ -937,7 +923,6 @@ const StickyUserMessage = memo<{
parsed={parsed}
onEditUserMessage={handleEditUserMessage}
editingMessageId={editingMessageId}
savingMessageId={savingMessageId}
isAfterEditingMessage={isAfterEditingMessage}
/>
</div>
@@ -980,7 +965,6 @@ const StickyUserMessage = memo<{
parsed={parsed}
onEditUserMessage={handleEditUserMessage}
editingMessageId={editingMessageId}
savingMessageId={savingMessageId}
isAfterEditingMessage={isAfterEditingMessage}
fadeFromBottom
/>
@@ -1002,7 +986,6 @@ interface ConversationTimelineProps {
fileBlocks?: readonly TypesGen.ChatMessagePart[],
) => void;
editingMessageId?: number | null;
savingMessageId?: number | null;
urlTransform?: UrlTransform;
mcpServers?: readonly TypesGen.MCPServerConfig[];
computerUseSubagentIds?: Set<string>;
@@ -1016,7 +999,6 @@ export const ConversationTimeline = memo<ConversationTimelineProps>(
subagentTitles,
onEditUserMessage,
editingMessageId,
savingMessageId,
urlTransform,
mcpServers,
computerUseSubagentIds,
@@ -1053,7 +1035,6 @@ export const ConversationTimeline = memo<ConversationTimelineProps>(
parsed={parsed}
onEditUserMessage={onEditUserMessage}
editingMessageId={editingMessageId}
savingMessageId={savingMessageId}
isAfterEditingMessage={afterEditingMessageIds.has(message.id)}
/>
);
@@ -1067,7 +1048,6 @@ export const ConversationTimeline = memo<ConversationTimelineProps>(
key={message.id}
message={message}
parsed={parsed}
savingMessageId={savingMessageId}
urlTransform={urlTransform}
isAfterEditingMessage={afterEditingMessageIds.has(message.id)}
hideActions={!isLastInChain}
@@ -190,6 +190,27 @@ describe("setChatStatus", () => {
});
});
// ---------------------------------------------------------------------------
// setStreamState
// ---------------------------------------------------------------------------
describe("setStreamState", () => {
it("does not notify when setting the same stream state reference", () => {
const store = createChatStore();
store.applyMessagePart({ type: "text", text: "hello" });
const streamState = store.getSnapshot().streamState;
expect(streamState).not.toBeNull();
let notified = false;
store.subscribe(() => {
notified = true;
});
store.setStreamState(streamState);
expect(notified).toBe(false);
});
});
// ---------------------------------------------------------------------------
// setStreamError / clearStreamError
// ---------------------------------------------------------------------------
@@ -4213,6 +4213,96 @@ describe("store/cache desync protection", () => {
expect(result.current.orderedMessageIDs).toEqual([1]);
});
});
it("reflects optimistic and authoritative history-edit cache updates through the normal sync effect", async () => {
immediateAnimationFrame();
const chatID = "chat-local-edit-sync";
const msg1 = makeMessage(chatID, 1, "user", "first");
const msg2 = makeMessage(chatID, 2, "assistant", "second");
const msg3 = makeMessage(chatID, 3, "user", "third");
const optimisticReplacement = {
...msg3,
content: [{ type: "text" as const, text: "edited draft" }],
};
const authoritativeReplacement = makeMessage(chatID, 9, "user", "edited");
const mockSocket = createMockSocket();
mockWatchChatReturn(mockSocket);
const queryClient = createTestQueryClient();
const wrapper: FC<PropsWithChildren> = ({ children }) => (
<QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
);
const initialOptions = {
chatID,
chatMessages: [msg1, msg2, msg3],
chatRecord: makeChat(chatID),
chatMessagesData: {
messages: [msg1, msg2, msg3],
queued_messages: [],
has_more: false,
},
chatQueuedMessages: [] as TypesGen.ChatQueuedMessage[],
setChatErrorReason: vi.fn(),
clearChatErrorReason: vi.fn(),
};
const { result, rerender } = renderHook(
(options: Parameters<typeof useChatStore>[0]) => {
const { store } = useChatStore(options);
return {
store,
messagesByID: useChatSelector(store, selectMessagesByID),
orderedMessageIDs: useChatSelector(store, selectOrderedMessageIDs),
};
},
{ initialProps: initialOptions, wrapper },
);
await waitFor(() => {
expect(result.current.orderedMessageIDs).toEqual([1, 2, 3]);
});
act(() => {
mockSocket.emitOpen();
});
rerender({
...initialOptions,
chatMessages: [msg1, msg2, optimisticReplacement],
chatMessagesData: {
messages: [msg1, msg2, optimisticReplacement],
queued_messages: [],
has_more: false,
},
});
await waitFor(() => {
expect(result.current.orderedMessageIDs).toEqual([1, 2, 3]);
expect(result.current.messagesByID.get(3)?.content).toEqual(
optimisticReplacement.content,
);
});
rerender({
...initialOptions,
chatMessages: [msg1, msg2, authoritativeReplacement],
chatMessagesData: {
messages: [msg1, msg2, authoritativeReplacement],
queued_messages: [],
has_more: false,
},
});
await waitFor(() => {
expect(result.current.orderedMessageIDs).toEqual([1, 2, 9]);
expect(result.current.messagesByID.has(3)).toBe(false);
expect(result.current.messagesByID.get(9)?.content).toEqual(
authoritativeReplacement.content,
);
});
});
});
describe("parse errors", () => {
@@ -174,6 +174,7 @@ export type ChatStore = {
queuedMessages: readonly TypesGen.ChatQueuedMessage[] | undefined,
) => void;
setChatStatus: (status: TypesGen.ChatStatus | null) => void;
setStreamState: (streamState: StreamState | null) => void;
setStreamError: (reason: ChatDetailError | null) => void;
clearStreamError: () => void;
setRetryState: (state: RetryState | null) => void;
@@ -412,6 +413,20 @@ export const createChatStore = (): ChatStore => {
chatStatus: status,
}));
},
setStreamState: (streamState) => {
if (state.streamState === streamState) {
return;
}
setState((current) => {
if (current.streamState === streamState) {
return current;
}
return {
...current,
streamState,
};
});
},
setStreamError: (reason) => {
setState((current) => {
if (chatDetailErrorsEqual(current.streamError, reason)) {
@@ -206,10 +206,9 @@ export const useChatStore = (
const fetchedIDs = new Set(chatMessages.map((m) => m.id));
// Only classify a store-held ID as stale if it was
// present in the PREVIOUS sync's fetched data. IDs
// added to the store after the last sync (by the WS
// handler or handleSend) are new, not stale, and
// must not trigger the destructive replaceMessages
// path.
// added to the store after the last sync (for example
// by the WS handler) are new, not stale, and must not
// trigger the destructive replaceMessages path.
const prevIDs = new Set(prev.map((m) => m.id));
const hasStaleEntries =
contentChanged &&
@@ -21,6 +21,7 @@ import {
} from "#/components/Tooltip/Tooltip";
import { formatTokenCount } from "#/utils/analytics";
import { formatCostMicros } from "#/utils/currency";
import { paginateItems } from "#/utils/paginateItems";
interface ChatCostSummaryViewProps {
summary: TypesGen.ChatCostSummary | undefined;
@@ -95,25 +96,19 @@ export const ChatCostSummaryView: FC<ChatCostSummaryViewProps> = ({
}
const modelPageSize = 10;
const modelMaxPage = Math.max(
1,
Math.ceil(summary.by_model.length / modelPageSize),
);
const clampedModelPage = Math.min(modelPage, modelMaxPage);
const pagedModels = summary.by_model.slice(
(clampedModelPage - 1) * modelPageSize,
clampedModelPage * modelPageSize,
);
const {
pagedItems: pagedModels,
clampedPage: clampedModelPage,
hasPreviousPage: hasModelPrev,
hasNextPage: hasModelNext,
} = paginateItems(summary.by_model, modelPageSize, modelPage);
const chatPageSize = 10;
const chatMaxPage = Math.max(
1,
Math.ceil(summary.by_chat.length / chatPageSize),
);
const clampedChatPage = Math.min(chatPage, chatMaxPage);
const pagedChats = summary.by_chat.slice(
(clampedChatPage - 1) * chatPageSize,
clampedChatPage * chatPageSize,
);
const {
pagedItems: pagedChats,
clampedPage: clampedChatPage,
hasPreviousPage: hasChatPrev,
hasNextPage: hasChatNext,
} = paginateItems(summary.by_chat, chatPageSize, chatPage);
const usageLimit = summary.usage_limit;
const showUsageLimitCard = usageLimit?.is_limited === true;
@@ -333,10 +328,8 @@ export const ChatCostSummaryView: FC<ChatCostSummaryViewProps> = ({
currentPage={clampedModelPage}
pageSize={modelPageSize}
onPageChange={setModelPage}
hasPreviousPage={clampedModelPage > 1}
hasNextPage={
clampedModelPage * modelPageSize < summary.by_model.length
}
hasPreviousPage={hasModelPrev}
hasNextPage={hasModelNext}
/>
</div>
)}
@@ -403,10 +396,8 @@ export const ChatCostSummaryView: FC<ChatCostSummaryViewProps> = ({
currentPage={clampedChatPage}
pageSize={chatPageSize}
onPageChange={setChatPage}
hasPreviousPage={clampedChatPage > 1}
hasNextPage={
clampedChatPage * chatPageSize < summary.by_chat.length
}
hasPreviousPage={hasChatPrev}
hasNextPage={hasChatNext}
/>
</div>
)}
@@ -48,7 +48,6 @@ interface ChatPageTimelineProps {
fileBlocks?: readonly TypesGen.ChatMessagePart[],
) => void;
editingMessageId?: number | null;
savingMessageId?: number | null;
urlTransform?: UrlTransform;
mcpServers?: readonly TypesGen.MCPServerConfig[];
}
@@ -59,7 +58,6 @@ export const ChatPageTimeline: FC<ChatPageTimelineProps> = ({
persistedError,
onEditUserMessage,
editingMessageId,
savingMessageId,
urlTransform,
mcpServers,
}) => {
@@ -100,7 +98,6 @@ export const ChatPageTimeline: FC<ChatPageTimelineProps> = ({
subagentTitles={subagentTitles}
onEditUserMessage={onEditUserMessage}
editingMessageId={editingMessageId}
savingMessageId={savingMessageId}
urlTransform={urlTransform}
mcpServers={mcpServers}
computerUseSubagentIds={computerUseSubagentIds}
@@ -121,10 +118,18 @@ export const ChatPageTimeline: FC<ChatPageTimelineProps> = ({
);
};
export type PendingAttachment = {
fileId: string;
mediaType: string;
};
interface ChatPageInputProps {
store: ChatStoreHandle;
compressionThreshold: number | undefined;
onSend: (message: string, fileIds?: string[]) => void;
onSend: (
message: string,
attachments?: readonly PendingAttachment[],
) => Promise<void> | void;
onDeleteQueuedMessage: (id: number) => Promise<void>;
onPromoteQueuedMessage: (id: number) => Promise<void>;
onInterrupt: () => void;
@@ -315,9 +320,10 @@ export const ChatPageInput: FC<ChatPageInputProps> = ({
<AgentChatInput
onSend={(message) => {
void (async () => {
// Collect file IDs from already-uploaded attachments.
// Skip files in error state (e.g. too large).
const fileIds: string[] = [];
// Collect uploaded attachment metadata for the optimistic
// transcript builder while keeping the server payload
// shape unchanged downstream.
const pendingAttachments: PendingAttachment[] = [];
let skippedErrors = 0;
for (const file of attachments) {
const state = uploadStates.get(file);
@@ -326,7 +332,10 @@ export const ChatPageInput: FC<ChatPageInputProps> = ({
continue;
}
if (state?.status === "uploaded" && state.fileId) {
fileIds.push(state.fileId);
pendingAttachments.push({
fileId: state.fileId,
mediaType: file.type || "application/octet-stream",
});
}
}
if (skippedErrors > 0) {
@@ -334,9 +343,10 @@ export const ChatPageInput: FC<ChatPageInputProps> = ({
`${skippedErrors} attachment${skippedErrors > 1 ? "s" : ""} could not be sent (upload failed)`,
);
}
const fileArg = fileIds.length > 0 ? fileIds : undefined;
const attachmentArg =
pendingAttachments.length > 0 ? pendingAttachments : undefined;
try {
await onSend(message, fileArg);
await onSend(message, attachmentArg);
} catch {
// Attachments preserved for retry on failure.
return;
@@ -1,7 +1,7 @@
import dayjs from "dayjs";
import relativeTime from "dayjs/plugin/relativeTime";
import { CodeIcon, ExternalLinkIcon } from "lucide-react";
import type { FC } from "react";
import { type FC, useState } from "react";
import { Area, AreaChart, CartesianGrid, XAxis, YAxis } from "recharts";
import type * as TypesGen from "#/api/typesGenerated";
import { Button } from "#/components/Button/Button";
@@ -11,6 +11,7 @@ import {
ChartTooltip,
ChartTooltipContent,
} from "#/components/Chart/Chart";
import { PaginationWidgetBase } from "#/components/PaginationWidget/PaginationWidgetBase";
import {
Table,
TableBody,
@@ -21,6 +22,7 @@ import {
} from "#/components/Table/Table";
import { cn } from "#/utils/cn";
import { formatCostMicros } from "#/utils/currency";
import { paginateItems } from "#/utils/paginateItems";
import { PrStateIcon } from "./GitPanel/GitPanel";
dayjs.extend(relativeTime);
@@ -286,6 +288,8 @@ const TimeRangeFilter: FC<{
// Main view
// ---------------------------------------------------------------------------
const RECENT_PRS_PAGE_SIZE = 10;
export const PRInsightsView: FC<PRInsightsViewProps> = ({
data,
timeRange,
@@ -294,6 +298,18 @@ export const PRInsightsView: FC<PRInsightsViewProps> = ({
const { summary, time_series, by_model, recent_prs } = data;
const isEmpty = summary.total_prs_created === 0;
// Client-side pagination for recent PRs table.
// Page resets to 1 on data refresh because the parent unmounts this
// component during loading. Clamping ensures the page is valid if the
// list shrinks without a full remount.
const [recentPrsPage, setRecentPrsPage] = useState(1);
const {
pagedItems: pagedRecentPrs,
clampedPage: clampedRecentPrsPage,
hasPreviousPage: hasRecentPrsPrev,
hasNextPage: hasRecentPrsNext,
} = paginateItems(recent_prs, RECENT_PRS_PAGE_SIZE, recentPrsPage);
return (
<div className="space-y-8">
{/* ── Header ── */}
@@ -354,8 +370,8 @@ export const PRInsightsView: FC<PRInsightsViewProps> = ({
</div>
</section>
{/* ── Model breakdown + Recent PRs side by side ── */}
<div className="grid grid-cols-1 gap-6 lg:grid-cols-2">
{/* ── Model breakdown + Recent PRs ── */}
<div className="space-y-6">
{/* ── Model performance (simplified) ── */}
{by_model.length > 0 && (
<section>
@@ -413,7 +429,7 @@ export const PRInsightsView: FC<PRInsightsViewProps> = ({
{recent_prs.length > 0 && (
<section>
<div className="mb-4">
<SectionTitle>Recent</SectionTitle>
<SectionTitle>Pull requests</SectionTitle>
</div>
<div className="overflow-hidden rounded-lg border border-border-default">
<Table className="table-fixed text-sm">
@@ -436,7 +452,7 @@ export const PRInsightsView: FC<PRInsightsViewProps> = ({
</TableRow>
</TableHeader>{" "}
<TableBody>
{recent_prs.map((pr) => (
{pagedRecentPrs.map((pr) => (
<TableRow
key={pr.chat_id}
className="border-t border-border-default transition-colors hover:bg-surface-secondary/50"
@@ -480,6 +496,18 @@ export const PRInsightsView: FC<PRInsightsViewProps> = ({
</TableBody>
</Table>
</div>
{recent_prs.length > RECENT_PRS_PAGE_SIZE && (
<div className="pt-4">
<PaginationWidgetBase
totalRecords={recent_prs.length}
currentPage={clampedRecentPrsPage}
pageSize={RECENT_PRS_PAGE_SIZE}
onPageChange={setRecentPrsPage}
hasPreviousPage={hasRecentPrsPrev}
hasNextPage={hasRecentPrsNext}
/>
</div>
)}
</section>
)}
</div>
@@ -1,10 +1,7 @@
import type { Meta, StoryObj } from "@storybook/react-vite";
import { action } from "storybook/actions";
import { screen, userEvent } from "storybook/test";
import {
getProvisionerDaemonsKey,
organizationsKey,
} from "#/api/queries/organizations";
import { expect, screen, userEvent, waitFor } from "storybook/test";
import { getProvisionerDaemonsKey } from "#/api/queries/organizations";
import {
MockDefaultOrganization,
MockOrganization2,
@@ -61,40 +58,20 @@ export const StarterTemplateWithOrgPicker: Story = {
},
};
const canCreateTemplate = (organizationId: string) => {
return {
[organizationId]: {
object: {
resource_type: "template",
organization_id: organizationId,
},
action: "create",
},
};
};
// Query key used by permittedOrganizations() in the form.
const permittedOrgsKey = [
"organizations",
"permitted",
{ object: { resource_type: "template" }, action: "create" },
];
export const StarterTemplateWithProvisionerWarning: Story = {
parameters: {
queries: [
{
key: organizationsKey,
key: permittedOrgsKey,
data: [MockDefaultOrganization, MockOrganization2],
},
{
key: [
"authorization",
{
checks: {
...canCreateTemplate(MockDefaultOrganization.id),
...canCreateTemplate(MockOrganization2.id),
},
},
],
data: {
[MockDefaultOrganization.id]: true,
[MockOrganization2.id]: true,
},
},
{
key: getProvisionerDaemonsKey(MockOrganization2.id),
data: [],
@@ -117,27 +94,11 @@ export const StarterTemplatePermissionsCheck: Story = {
parameters: {
queries: [
{
key: organizationsKey,
data: [MockDefaultOrganization, MockOrganization2],
},
{
key: [
"authorization",
{
checks: {
...canCreateTemplate(MockDefaultOrganization.id),
...canCreateTemplate(MockOrganization2.id),
},
},
],
data: {
[MockDefaultOrganization.id]: true,
[MockOrganization2.id]: false,
},
},
{
key: getProvisionerDaemonsKey(MockOrganization2.id),
data: [],
// Only MockDefaultOrganization passes the permission
// check; MockOrganization2 is filtered out by the
// permittedOrganizations query.
key: permittedOrgsKey,
data: [MockDefaultOrganization],
},
],
},
@@ -146,7 +107,14 @@ export const StarterTemplatePermissionsCheck: Story = {
showOrganizationPicker: true,
},
play: async () => {
// When only one org passes the permission check, it should be
// auto-selected in the picker.
const organizationPicker = screen.getByTestId("organization-autocomplete");
await waitFor(() =>
expect(organizationPicker).toHaveTextContent(
MockDefaultOrganization.display_name,
),
);
await userEvent.click(organizationPicker);
},
};
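For reference, the `permittedOrgsKey` arrays hard-coded in these stories must stay byte-for-byte in sync with whatever key `permittedOrganizations()` builds, or the mocked queries silently miss the cache. A minimal sketch of that coupling (the key shape is copied from this diff; the helper name and body are an assumption, not the actual implementation):

```typescript
// Shape of an authorization check, mirroring the objects used in
// the stories above (hypothetical type, for illustration only).
type AuthorizationCheck = {
  object: { resource_type: string; organization_id?: string };
  action: string;
};

// Builds the ["organizations", "permitted", check] key that the
// story mocks hard-code, so mock data and the real query line up.
function permittedOrganizationsKey(check: AuthorizationCheck) {
  return ["organizations", "permitted", check] as const;
}

const key = permittedOrganizationsKey({
  object: { resource_type: "template" },
  action: "create",
});
// key deep-equals the permittedOrgsKey literal in the stories above.
```

Deriving the key from one shared function (rather than duplicating the literal) is the usual way to keep stories and production queries from drifting apart.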
@@ -7,7 +7,10 @@ import { type FC, useState } from "react";
import { useQuery } from "react-query";
import { useSearchParams } from "react-router";
import * as Yup from "yup";
import { provisionerDaemons } from "#/api/queries/organizations";
import {
permittedOrganizations,
provisionerDaemons,
} from "#/api/queries/organizations";
import type {
CreateTemplateVersionRequest,
Organization,
@@ -191,6 +194,10 @@ type CreateTemplateFormProps = (
showOrganizationPicker?: boolean;
};
// Stable reference for empty org options to avoid re-render loops
// in the render-time state adjustment pattern.
const emptyOrgs: Organization[] = [];
export const CreateTemplateForm: FC<CreateTemplateFormProps> = (props) => {
const [searchParams] = useSearchParams();
const [selectedOrg, setSelectedOrg] = useState<Organization | null>(null);
@@ -222,6 +229,34 @@ export const CreateTemplateForm: FC<CreateTemplateFormProps> = (props) => {
});
const getFieldHelpers = getFormHelpers<CreateTemplateFormData>(form, error);
const permittedOrgsQuery = useQuery({
...permittedOrganizations({
object: { resource_type: "template" },
action: "create",
}),
enabled: Boolean(showOrganizationPicker),
});
const orgOptions = permittedOrgsQuery.data ?? emptyOrgs;
// Clear invalid selections when permission filtering removes the
// selected org. Uses the React render-time adjustment pattern.
const [prevOrgOptions, setPrevOrgOptions] = useState(orgOptions);
if (orgOptions !== prevOrgOptions) {
setPrevOrgOptions(orgOptions);
if (selectedOrg && !orgOptions.some((o) => o.id === selectedOrg.id)) {
setSelectedOrg(null);
void form.setFieldValue("organization", "");
}
}
// Auto-select when exactly one org is available and nothing is
// selected. Runs every render (not gated on options change) so it
// works when mock data is available synchronously on first render.
if (orgOptions.length === 1 && selectedOrg === null) {
setSelectedOrg(orgOptions[0]);
void form.setFieldValue("organization", orgOptions[0].name || "");
}
const { data: provisioners } = useQuery({
...provisionerDaemons(selectedOrg?.id ?? ""),
enabled: showOrganizationPicker && Boolean(selectedOrg),
@@ -263,9 +298,10 @@ export const CreateTemplateForm: FC<CreateTemplateFormProps> = (props) => {
<div className="flex flex-col gap-2">
<Label htmlFor="organization">Organization</Label>
<OrganizationAutocomplete
{...getFieldHelpers("organization")}
id="organization"
required
value={selectedOrg}
options={orgOptions}
onChange={(newValue) => {
setSelectedOrg(newValue);
void form.setFieldValue(
@@ -273,10 +309,6 @@ export const CreateTemplateForm: FC<CreateTemplateFormProps> = (props) => {
newValue?.name || "",
);
}}
check={{
object: { resource_type: "template" },
action: "create",
}}
/>
</div>
</>
@@ -1,8 +1,6 @@
import type { Meta, StoryObj } from "@storybook/react-vite";
import { action } from "storybook/actions";
import { userEvent, within } from "storybook/test";
import { organizationsKey } from "#/api/queries/organizations";
import type { Organization } from "#/api/typesGenerated";
import {
MockOrganization,
MockOrganization2,
@@ -26,37 +24,20 @@ type Story = StoryObj<typeof CreateUserForm>;
export const Ready: Story = {};
const permissionCheckQuery = (organizations: Organization[]) => {
return {
key: [
"authorization",
{
checks: Object.fromEntries(
organizations.map((org) => [
org.id,
{
action: "create",
object: {
resource_type: "organization_member",
organization_id: org.id,
},
},
]),
),
},
],
data: Object.fromEntries(organizations.map((org) => [org.id, true])),
};
};
// Query key used by permittedOrganizations() in the form.
const permittedOrgsKey = [
"organizations",
"permitted",
{ object: { resource_type: "organization_member" }, action: "create" },
];
export const WithOrganizations: Story = {
parameters: {
queries: [
{
key: organizationsKey,
key: permittedOrgsKey,
data: [MockOrganization, MockOrganization2],
},
permissionCheckQuery([MockOrganization, MockOrganization2]),
],
},
args: {
@@ -1,9 +1,11 @@
import { useFormik } from "formik";
import { Check } from "lucide-react";
import { Select as SelectPrimitive } from "radix-ui";
import type { FC } from "react";
import { type FC, useState } from "react";
import { useQuery } from "react-query";
import * as Yup from "yup";
import { hasApiFieldErrors, isApiError } from "#/api/errors";
import { permittedOrganizations } from "#/api/queries/organizations";
import type * as TypesGen from "#/api/typesGenerated";
import { ErrorAlert } from "#/components/Alert/ErrorAlert";
import { Button } from "#/components/Button/Button";
@@ -90,6 +92,10 @@ interface CreateUserFormProps {
serviceAccountsEnabled: boolean;
}
// Stable reference for empty org options to avoid re-render loops
// in the render-time state adjustment pattern.
const emptyOrgs: TypesGen.Organization[] = [];
export const CreateUserForm: FC<CreateUserFormProps> = ({
error,
isLoading,
@@ -125,6 +131,38 @@ export const CreateUserForm: FC<CreateUserFormProps> = ({
enableReinitialize: true,
});
const [selectedOrg, setSelectedOrg] = useState<TypesGen.Organization | null>(
null,
);
const permittedOrgsQuery = useQuery({
...permittedOrganizations({
object: { resource_type: "organization_member" },
action: "create",
}),
enabled: showOrganizations,
});
const orgOptions = permittedOrgsQuery.data ?? emptyOrgs;
// Clear invalid selections when permission filtering removes the
// selected org. Uses the React render-time adjustment pattern.
const [prevOrgOptions, setPrevOrgOptions] = useState(orgOptions);
if (orgOptions !== prevOrgOptions) {
setPrevOrgOptions(orgOptions);
if (selectedOrg && !orgOptions.some((o) => o.id === selectedOrg.id)) {
setSelectedOrg(null);
void form.setFieldValue("organization", "");
}
}
// Auto-select when exactly one org is available and nothing is
// selected. Runs every render (not gated on options change) so it
// works when mock data is available synchronously on first render.
if (orgOptions.length === 1 && selectedOrg === null) {
setSelectedOrg(orgOptions[0]);
void form.setFieldValue("organization", orgOptions[0].id ?? "");
}
const getFieldHelpers = getFormHelpers(form, error);
const isServiceAccount = form.values.login_type === "none";
@@ -174,16 +212,14 @@ export const CreateUserForm: FC<CreateUserFormProps> = ({
<div className="flex flex-col gap-2">
<Label htmlFor="organization">Organization</Label>
<OrganizationAutocomplete
{...getFieldHelpers("organization")}
id="organization"
required
value={selectedOrg}
options={orgOptions}
onChange={(newValue) => {
setSelectedOrg(newValue);
void form.setFieldValue("organization", newValue?.id ?? "");
}}
check={{
object: { resource_type: "organization_member" },
action: "create",
}}
/>
</div>
)}
@@ -0,0 +1,64 @@
import { describe, expect, it } from "vitest";
import { paginateItems } from "./paginateItems";
// 25 items numbered 1-25 for readable assertions.
const items = Array.from({ length: 25 }, (_, i) => i + 1);
describe("paginateItems", () => {
it("returns the first page of items", () => {
const result = paginateItems(items, 10, 1);
expect(result.pagedItems).toEqual([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
expect(result.clampedPage).toBe(1);
expect(result.totalPages).toBe(3);
expect(result.hasPreviousPage).toBe(false);
expect(result.hasNextPage).toBe(true);
});
it("returns a partial last page", () => {
const result = paginateItems(items, 10, 3);
expect(result.pagedItems).toEqual([21, 22, 23, 24, 25]);
expect(result.clampedPage).toBe(3);
expect(result.totalPages).toBe(3);
expect(result.hasPreviousPage).toBe(true);
expect(result.hasNextPage).toBe(false);
});
it("clamps currentPage down when beyond total pages", () => {
const result = paginateItems(items, 10, 99);
expect(result.clampedPage).toBe(3);
expect(result.pagedItems).toEqual([21, 22, 23, 24, 25]);
expect(result.hasPreviousPage).toBe(true);
expect(result.hasNextPage).toBe(false);
});
it("clamps currentPage up when 0", () => {
const result = paginateItems(items, 10, 0);
expect(result.clampedPage).toBe(1);
expect(result.pagedItems).toEqual([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
expect(result.hasPreviousPage).toBe(false);
expect(result.hasNextPage).toBe(true);
});
it("clamps currentPage up when negative", () => {
const result = paginateItems(items, 10, -5);
expect(result.clampedPage).toBe(1);
expect(result.pagedItems).toEqual([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
expect(result.hasPreviousPage).toBe(false);
expect(result.hasNextPage).toBe(true);
});
it("returns empty pagedItems with clampedPage=1 for an empty array", () => {
const result = paginateItems([], 10, 1);
expect(result.pagedItems).toEqual([]);
expect(result.clampedPage).toBe(1);
expect(result.totalPages).toBe(1);
expect(result.hasPreviousPage).toBe(false);
expect(result.hasNextPage).toBe(false);
});
it("reports hasPreviousPage correctly for middle pages", () => {
const result = paginateItems(items, 10, 2);
expect(result.hasPreviousPage).toBe(true);
expect(result.hasNextPage).toBe(true);
});
});
@@ -0,0 +1,25 @@
export function paginateItems<T>(
items: readonly T[],
pageSize: number,
currentPage: number,
): {
pagedItems: T[];
clampedPage: number;
totalPages: number;
hasPreviousPage: boolean;
hasNextPage: boolean;
} {
const totalPages = Math.max(1, Math.ceil(items.length / pageSize));
const clampedPage = Math.max(1, Math.min(currentPage, totalPages));
const pagedItems = items.slice(
(clampedPage - 1) * pageSize,
clampedPage * pageSize,
);
return {
pagedItems,
clampedPage,
totalPages,
hasPreviousPage: clampedPage > 1,
hasNextPage: clampedPage * pageSize < items.length,
};
}
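The clamping behavior the new tests cover can be exercised standalone. A minimal sketch (the function body is copied from the new file above so the snippet is self-contained):

```typescript
// Self-contained copy of the paginateItems helper added above.
function paginateItems<T>(
  items: readonly T[],
  pageSize: number,
  currentPage: number,
) {
  const totalPages = Math.max(1, Math.ceil(items.length / pageSize));
  const clampedPage = Math.max(1, Math.min(currentPage, totalPages));
  const pagedItems = items.slice(
    (clampedPage - 1) * pageSize,
    clampedPage * pageSize,
  );
  return {
    pagedItems,
    clampedPage,
    totalPages,
    hasPreviousPage: clampedPage > 1,
    hasNextPage: clampedPage * pageSize < items.length,
  };
}

// An out-of-range page (for example after the list shrinks without a
// remount) clamps down to the last valid page instead of rendering an
// empty table.
const items = Array.from({ length: 25 }, (_, i) => i + 1);
const page = paginateItems(items, 10, 99);
// page.clampedPage === 3, page.pagedItems === [21, 22, 23, 24, 25]
```

This is what lets ChatCostSummaryView and PRInsightsView in this changeset drop their hand-rolled `Math.max`/`Math.ceil`/`slice` blocks in favor of one shared call.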