Compare commits

...

164 Commits

Author SHA1 Message Date
Jake Howell
13fe1546a3 feat: add data-[state=open] modifiers 2026-02-01 13:26:37 +00:00
Jake Howell
b14a709adb fix: resolve <Badges /> to use <Badge /> (#21747)
Continuing the work from #21740 

This pull request updates all of our badges to use the `<Badge />`
component. This is in line with our Figma design guidelines, so
going forward we're standardised across the application. I've added
`<EnterpriseBadge />` and `<DeprecatedBadge />` to
`Badges.stories.tsx` so we can track these in the future (they were
missing previously).

In `site/src/components/Form/Form.tsx` we were using these components
within an `<h2 />`, which produced invalid semantic HTML. I took the
easy route around this and made them sit in their own `<header>` with
flex layout.

### Preview

| Old | New |
| --- | --- |
| <img width="512" height="288" alt="BADGES_OLD"
src="https://github.com/user-attachments/assets/196b0a53-37b2-4aee-b66e-454ac0ff1271"
/> | <img width="512" height="288" alt="BADGES_OLD-1"
src="https://github.com/user-attachments/assets/f0fb2871-40e2-4f0d-972c-cbf4249cf2d7"
/> |
| <img width="512" height="288" alt="DEPRECATED_OLD"
src="https://github.com/user-attachments/assets/cce36b6c-e91a-47f6-8d20-02b9f40ea44e"
/> | <img width="512" height="289" alt="DEPRECATED_NEW"
src="https://github.com/user-attachments/assets/8a1f5168-d128-4733-819e-c1cb6641b83b"
/> |
| <img width="512" height="288" alt="ENTERPRISE_OLD"
src="https://github.com/user-attachments/assets/aba677ce-23c7-4820-913b-886d049f81ef"
/> | <img width="512" height="288" alt="ENTERPRISE_NEW"
src="https://github.com/user-attachments/assets/eca9729d-c98a-4848-9f10-28e42e2c3cd3"
/> |

---------

Co-authored-by: Ben Potter <me@bpmct.net>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 12:22:58 +11:00
Jon Ayers
3d97f677e5 chore: bump alpine to 3.23.3 (#21804) 2026-01-30 22:18:54 +00:00
dependabot[bot]
8985120c36 chore(examples/templates/tasks-docker): bump claude-code module from 4.3.0 to 4.4.2 (#21551)
[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=coder/claude-code/coder&package-manager=terraform&previous-version=4.3.0&new-version=4.4.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 20:47:42 +00:00
George K
c60f802580 fix(coderd/rbac): make workspace ACL disabled flag atomic (#21799)
The flag is a package-global that was only meant to be set once on
startup. That assumption didn't hold: the lack of synchronization
caused test flakes.

Related to:
https://github.com/coder/internal/issues/1317
https://github.com/coder/internal/issues/1318
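The fix can be sketched with `sync/atomic` (names hypothetical, not the actual Coder code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// workspaceACLDisabled is a hypothetical stand-in for the package-global
// flag described above. atomic.Bool makes the one-time write on startup
// safe against concurrent reads (and repeated writes in tests).
var workspaceACLDisabled atomic.Bool

// SetWorkspaceACLDisabled is called once on startup (and by tests).
func SetWorkspaceACLDisabled(v bool) { workspaceACLDisabled.Store(v) }

// WorkspaceACLDisabled is read on every authorization check.
func WorkspaceACLDisabled() bool { return workspaceACLDisabled.Load() }

func main() {
	SetWorkspaceACLDisabled(true)
	fmt.Println(WorkspaceACLDisabled())
}
```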
2026-01-30 11:21:27 -08:00
Danielle Maywood
37aecda165 feat(coderd/provisionerdserver): insert sub agent resource (#21699)
Update provisionerdserver to handle the changes introduced to
provisionerd in https://github.com/coder/coder/pull/21602

We now create a relationship between `workspace_agent_devcontainers` and
`workspace_agents` with the newly created `subagent_id`.
2026-01-30 17:19:19 +00:00
Cian Johnston
14b4650d6c chore: fix flakiness in TestSSH/StdioExitOnParentDeath (#21792)
Relates to https://github.com/coder/internal/issues/1289
2026-01-30 15:46:38 +00:00
blinkagent[bot]
b035843484 docs: clarify that only Coder tokens work with AI Bridge authentication (#21791)
## Summary

Clarifies the [AI Bridge client config authentication
section](https://coder.com/docs/ai-coder/ai-bridge/client-config#authentication)
to explicitly state that only **Coder-issued tokens** are accepted.

## Changes

- Changed "API key" to "Coder API key" throughout the Authentication
section
- Added a note clarifying that provider-specific API keys (OpenAI,
Anthropic, etc.) will not work with AI Bridge

Fixes #21790

---

Created on behalf of @dannykopping

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-01-30 14:49:06 +00:00
Mathias Fredriksson
21eabb1d73 feat(coderd): return log snapshot for paused tasks (#21771)
Previously the task logs endpoint only worked when the workspace was
running, leaving users unable to view task history after pausing.

This change adds snapshot retrieval with state-based branching: active
tasks fetch live logs from AgentAPI, paused/initializing/pending tasks
return stored snapshots (providing continuity during pause/resume), and
error/unknown states return HTTP 409 Conflict.

The response includes snapshot metadata (snapshot, snapshot_at) to
indicate whether logs are live or historical.

Closes coder/internal#1254
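The state-based branching above can be sketched as follows (state names are approximated from the description, not the exact API values):

```go
package main

import "fmt"

// taskLogSource sketches the branching described above: live logs for
// active tasks, stored snapshots for paused/initializing/pending tasks,
// and 409 Conflict for error/unknown states.
func taskLogSource(state string) (source string, status int) {
	switch state {
	case "active":
		return "live", 200 // fetch live logs from AgentAPI
	case "paused", "initializing", "pending":
		return "snapshot", 200 // return the stored snapshot
	default:
		return "", 409 // error/unknown states
	}
}

func main() {
	src, code := taskLogSource("paused")
	fmt.Println(src, code)
}
```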
2026-01-30 16:09:45 +02:00
Danny Kopping
536bca7ea9 chore: log api key on each HTTP API request (#21785)
Operators need to know which API key was used in HTTP requests.

For example, if a key is leaking and a DDOS is underway using that key, operators need a way to identify the key in use and take steps to expire the key (see https://github.com/coder/coder/issues/21782).

_Disclaimer: created using Claude Opus 4.5_
2026-01-30 14:48:10 +02:00
Jake Howell
e45635aab6 fix: refactor <Paywall /> component to be universal (#21740)
During development of #21659 I approved some `<Paywall />` code that had
an extensive props system; however, I wasn't a huge fan of this. This
approach takes it further, like something `shadcn` would do,
wherein we define the `<Paywall />` (and its subset of components) and
wrap around those when needed for `<PaywallAIGovernance />` and
`<PaywallPremium />`.

Theoretically there are no real CSS/design changes here. However,
screenshots for posterity.

| Previously | Now |
| --- | --- |
| <img width="2306" height="614" alt="CleanShot 2026-01-29 at 10 56
05@2x"
src="https://github.com/user-attachments/assets/83a4aa1b-da74-459d-ae11-fae06c1a8371"
/> | <img width="2308" height="622" alt="CleanShot 2026-01-29 at 10 55
05@2x"
src="https://github.com/user-attachments/assets/4aa43b09-6705-4af3-86cc-edc0c08e53b1"
/> |

---------

Co-authored-by: Ben Potter <me@bpmct.net>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 23:44:07 +11:00
Marcin Tojek
036ed5672f fix!: remove deprecated prometheus metrics (#21788)
## Description

Removes the following deprecated Prometheus metrics:

- `coderd_api_workspace_latest_build_total` → use
`coderd_api_workspace_latest_build` instead
- `coderd_oauth2_external_requests_rate_limit_total` → use
`coderd_oauth2_external_requests_rate_limit` instead

These metrics were deprecated in #12976 because gauge metrics should
avoid the `_total` suffix per [Prometheus naming
conventions](https://prometheus.io/docs/practices/naming/).

## Changes

- Removed deprecated metric `coderd_api_workspace_latest_build_total`
from `coderd/prometheusmetrics/prometheusmetrics.go`
- Removed deprecated metric
`coderd_oauth2_external_requests_rate_limit_total` from
`coderd/promoauth/oauth2.go`
- Updated tests to use the non-deprecated metric name

Fixes #12999
2026-01-30 13:30:06 +01:00
Marcin Tojek
90cf4809ec fix(site): use version name instead of ID in View source button URL (#21784)
Fixes #19921

The "View source" button was using `versionId` (UUID) instead of version
name in the URL, causing broken links.
2026-01-30 12:43:09 +01:00
Jaayden Halko
4847920407 fix: don't allow sharing admins to change own role (#21634)
resolve coder/internal#1280
2026-01-30 06:27:30 -05:00
Ethan
a464ab67c6 test: use explicit names in TestStartAutoUpdate to prevent flake (#21745)
The test was creating two template versions without explicit names,
relying on `namesgenerator.NameDigitWith()` which can produce
collisions. When both versions got the same random name, the test failed
with a 409 Conflict error.

Fix by giving each version an explicit name (`v1`, `v2`).

Closes https://github.com/coder/internal/issues/1309

---

*Generated by [mux](https://mux.coder.com)*
2026-01-30 13:24:06 +11:00
Zach
0611e90dd3 feat: add time window fields to telemetry boundary usage (#21772)
Add PeriodStart and PeriodDurationMilliseconds fields to BoundaryUsageSummary
so consumers of telemetry data can understand usage within a particular time window.
2026-01-29 13:40:55 -07:00
blinkagent[bot]
5da28ff72f docs: clarify Tasks limit and AI Governance relationship (#21774)
## Summary

This PR updates the note on the Tasks documentation page to more clearly
explain the relationship between Premium task limits and the AI
Governance Add-On.

## Problem

The previous wording:
> "Premium Coder deployments are limited to running 1,000 tasks. Contact
us for pricing options or learn more about our AI Governance Add-On to
evaluate all of Coder's AI features."

The "or" in this sentence could be interpreted as two separate paths:
(1) contact sales for custom pricing that might not require the add-on,
OR (2) get AI Governance. This led to confusion about whether higher
task limits could be obtained without the AI Governance Add-On.

## Solution

Updated the note to be explicit about the scaling path:
> "Premium deployments include 1,000 Agent Workspace Builds for
proof-of-concept use. To scale beyond this limit, the AI Governance
Add-On provides expanded usage pools that grow with your user count.
Contact us to discuss pricing."

This makes it clear that:
1. Premium includes 1,000 builds for POC use
2. Scaling beyond that requires the AI Governance Add-On
3. Contact sales to discuss pricing for the add-on

Created on behalf of @mattvollmer

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Matt Vollmer <matthewjvollmer@outlook.com>
2026-01-29 14:17:06 -06:00
George K
f5d4926bc1 fix(site): use total_member_count for group subtitles when sharing (#21744)
Justification:

- Populating `members` is authorized with `group_member.read` which is
not required to be able to share a workspace

- Populating `total_member_count` is authorized with `group.read` which
is required to be able to share

- The updated helper is only used in template/workspace sharing UIs, so
other pages that might need counts of readable members are unaffected

Related to: https://github.com/coder/internal/issues/1302
2026-01-29 08:33:02 -08:00
Susana Ferreira
9f6ce7542a feat: add metrics to aibridgeproxy (#21709)
## Description

Adds Prometheus metrics to the AI Bridge Proxy for observability into
proxy traffic and performance.

## Changes
* Add `Metrics` struct with the following metrics:
  * `connect_sessions_total`: counts CONNECT sessions by type (mitm/tunneled)
  * `mitm_requests_total`: counts MITM requests by provider
  * `inflight_mitm_requests`: gauge tracking in-flight requests by provider
  * `mitm_request_duration_seconds`: histogram of request latencies by provider
  * `mitm_responses_total`: counts responses by status code class (2XX/3XX/4XX/5XX) and provider
* Register metrics with `coder_aibridgeproxyd_` prefix in CLI
* Unregister metrics on server close to prevent registry leaks
* Add `tunneledMiddleware` to track non-allowlisted CONNECT sessions
* Add tests for metric recording in both MITM and tunneled paths

Closes: https://github.com/coder/internal/issues/1185
2026-01-29 15:11:36 +00:00
Kacper Sawicki
d09300eadf feat(cli): add 'coder login token' command to print session token (#21627)
Adds a new subcommand to print the current session token for use in
scripts and automation, similar to `gh auth token`.

## Usage

```bash
CODER_SESSION_TOKEN=$(coder login token)
```

Fixes #21515
2026-01-29 16:06:17 +01:00
Kacper Sawicki
9a417df940 ci: add retry logic for Go module operations (#21609)
## Description

Add exponential backoff retries to all `go install` and `go mod
download` commands across CI workflows and actions.

## Why

Fixes
[coder/internal#1276](https://github.com/coder/internal/issues/1276) -
CI fails when `sum.golang.org` returns 500 errors during Go module
verification. This is an infrastructure-level flake that can't be
controlled.

## Changes

- Created `.github/scripts/retry.sh` - reusable retry helper with
exponential backoff (2s, 4s, 8s delays, max 3 attempts), using
`scripts/lib.sh` helpers
- Wrapped all `go install` and `go mod download` commands with retry in:
  - `.github/actions/setup-go/action.yaml`
  - `.github/actions/setup-sqlc/action.yaml`
  - `.github/actions/setup-go-tools/action.yaml`
  - `.github/workflows/ci.yaml`
  - `.github/workflows/release.yaml`
  - `.github/workflows/security.yaml`
- Added GNU tools setup (bash 4+, GNU getopt, make 4+) for macOS in
`test-go-pg` job, since `retry.sh` uses `lib.sh` which requires these
tools
2026-01-29 16:05:49 +01:00
Yevhenii Shcherbina
8ee4f594d5 chore: update boundary policy (#21738)
Relates to https://github.com/coder/coder/pull/21548
2026-01-29 08:46:30 -05:00
Kacper Sawicki
9eda6569b8 docs: fix broken Kilo Code link in AI Bridge client-config (#21754)
## Summary

Fixes the broken Kilo Code documentation link in the AI Bridge
client-config page.

## Changes

- Updated the Kilo Code link from the old
`/docs/features/api-configuration-profiles` (returns 404) to the current
`/docs/ai-providers/openai-compatible` page

The Kilo Code documentation was restructured and the old URL no longer
exists.

Fixes #21750
2026-01-29 13:43:08 +00:00
Marcin Tojek
bb7b49de6a fix(cli): ignore space in custom input mode (#21752)
Fixes: https://github.com/coder/internal/issues/560

"Select" CLI UI component should ignore "space" when `+Add custom value`
is highlighted. Otherwise it interprets that as a potential option...
and panics.
2026-01-29 14:40:02 +01:00
Danny Kopping
5ae0e08494 chore: ensure consistent YAML names for aibridge flags (#21751)
Closes https://github.com/coder/internal/issues/1205

_Implemented by Claude Opus 4.5_

Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-29 13:03:58 +00:00
Marcin Tojek
04b0253e8a feat: add Prometheus metrics for license warnings and errors (#21749)
Fixes: coder/internal#767

Adds two new Prometheus metrics for license health monitoring:

- `coderd_license_warnings` - count of active license warnings
- `coderd_license_errors` - count of active license errors

Metrics endpoint after startup of a deployment with license enabled:

```
...
# HELP coderd_license_errors The number of active license errors.
# TYPE coderd_license_errors gauge
coderd_license_errors 0
...
# HELP coderd_license_warnings The number of active license warnings.
# TYPE coderd_license_warnings gauge
coderd_license_warnings 0
...
```
2026-01-29 13:50:15 +01:00
Spike Curtis
06e396188f test: subscribe to heartbeats synchronously on PGCoord startup (#21746)
fixes: https://github.com/coder/internal/issues/1304

Subscribe to heartbeats synchronously on startup of PGCoordinator. This ensures tests that send heartbeats don't race with this subscription.
2026-01-29 13:34:34 +04:00
Jake Howell
62704eb858 feat: implement ai governance consumption frontend (#21595)
Closes [#1246](https://github.com/coder/internal/issues/1246)

This PR adds a new component to display AI Governance user entitlements
in the Licenses Settings page. The implementation includes:

- New `AIGovernanceUsersConsumptionChart` component that shows the
number of entitled users for AI Governance features
- Storybook stories for various states (default, disabled, error states)
- Integration with the existing license settings page
- Collapsible "Learn more" section with links to relevant documentation
- Updated the ManagedAgentsConsumption component with clearer
terminology ("Agent Workspace Builds" instead of "Managed AI Agents")

The chart displays the number of users entitled to use AI features like
AI Bridge, Boundary, and Tasks, with a note that additional analytics
are coming soon.

### Preview

<img width="3516" height="2390" alt="CleanShot 2026-01-27 at 22 44
25@2x"
src="https://github.com/user-attachments/assets/cb97a215-f054-45cb-a3e7-3055c249ef04"
/>

<img width="3516" height="2390" alt="CleanShot 2026-01-27 at 22 45
04@2x"
src="https://github.com/user-attachments/assets/d2534189-cffb-4ad2-b2e2-67eb045572e8"
/>

---------

Co-authored-by: Jaayden Halko <jaayden.halko@gmail.com>
2026-01-29 11:22:11 +11:00
Danielle Maywood
1a94aa67a3 feat(provisioner): associate resources with coder_devcontainer (#21602)
Closes https://github.com/coder/internal/issues/1239

Allow associating `coder_env`, `coder_script` and `coder_app` with
`coder_devcontainer` resource. To do this we make use of the newly added
`subagent_id` field in the `coder_devcontainer` resource added in
https://github.com/coder/terraform-provider-coder/pull/474
2026-01-29 00:01:30 +00:00
Matt Vollmer
7473b57e54 feat(docs): add use cases section to AI Governance docs (#21717)
- Added use cases
- Moved GA section after use cases
2026-01-28 17:51:32 -06:00
Ben Potter
57ab991a95 chore: update paywall to mention AI governance-add on (#21659)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:37:15 -06:00
DevCats
1b31279506 chore: update doc-check workflow to prevent unnecessary comments (#21737)
This pull request makes a minor update to the documentation check
workflow. It clarifies that a comment should not be posted if there are
no documentation changes needed and simplifies the comment format
instructions.
2026-01-28 22:02:16 +00:00
Jon Ayers
4f1fd82ed7 fix: propagate correct agent exit code (#21718)
The reaper (PID 1) now returns the child's exit code instead of always
exiting 0. Signal termination uses the standard Unix convention of 128 +
signal number.

fixes #21661
2026-01-28 15:56:04 -06:00
Jon Ayers
4ce4b5ef9f chore: fix trivy dependency (#21736) 2026-01-28 22:36:42 +01:00
Steven Masley
dfbd541cee chore: move List util out of db2sdk to avoid circular imports (#21733) 2026-01-28 13:07:53 -06:00
Steven Masley
921fad098b chore: make corrupted directories non-fatal (#21707)
From https://github.com/coder/coder/pull/20563#discussion_r2513135196
Closes https://github.com/coder/coder/issues/20751
2026-01-28 11:35:17 -06:00
George K
264ae77458 chore(docs): update workspace sharing docs to reflect current state (#21662)
This PR updates the workspace sharing documentation to reflect
the current behavior.
2026-01-28 08:58:29 -08:00
Cian Johnston
c2c225052a chore(enterprise/coderd): ensure TestManagedAgentLimit differentiates between tasks and workspaces (#21731)
My previous change to this test did not create another **workspace**
using the template containing `coder_ai_task` resources, meaning that
this test was not actually testing the right thing. This PR addresses
this oversight.
2026-01-28 16:30:56 +00:00
Steven Masley
e13f2a9869 chore: remove extra stop_modules from provisionerd proto (#21706)
It was a duplicate of `start_modules`.

Closes https://github.com/coder/coder/issues/21206
2026-01-28 09:25:47 -06:00
Mathias Fredriksson
d06b21df45 test(cli): increase timeout in TestGitSSH to reduce flakes (#21725)
The test occasionally times out at 15s on Windows CI runners.
Investigation of CI logs shows the HTTP request to the agent's
gitsshkey endpoint never appears in server logs, suggesting it
hangs before the request completes (possibly in connection setup,
middleware, or database queries). Increase to 60s to reduce flake
rate.

Fixes coder/internal#770
2026-01-28 14:01:07 +02:00
Susana Ferreira
327c885292 feat: add provider to aibridgeproxy requestContext (#21710)
## Description

Moves the provider lookup from `handleRequest` to `authMiddleware` so
that the provider is determined during the `CONNECT` handshake and
stored in the request context. This enables provider information to be
available earlier in the request lifecycle.

## Changes

* Move `aibridgeProviderFromHost` call from `handleRequest` to
`authMiddleware`
* Store `Provider` in `requestContext` during `CONNECT` handshake
* Add provider validation in `authMiddleware` (reject if no provider
mapping)
* Keep defensive provider check in `handleRequest` for safety

Follow-up from: https://github.com/coder/coder/pull/21617
2026-01-28 08:44:17 +00:00
Jake Howell
7a8d8d2f86 feat: add icon and description to preset dropdown (#21694)
Closes #20598 

This pull request implements a very basic change to also render the
`icon` of the `Preset` when one is specifically defined within the
template. Furthermore, there's a `ⓘ` icon with a description.

### Preview

<img width="984" height="442" alt="CleanShot 2026-01-27 at 20 15 29@2x"
src="https://github.com/user-attachments/assets/d4ceebf9-a5fe-4df4-a8b2-a8355d6bb25e"
/>
2026-01-28 18:51:22 +11:00
Spike Curtis
7090a1e205 chore: renumber duplicate migration 000411 (#21720)
Fixes recent duplicate DB migration in #21607
2026-01-28 08:01:58 +04:00
Spike Curtis
f358a6db11 chore: convert tailnet tables to UNLOGGED for improved write performance (#21607)
This migration converts all tailnet coordination tables to UNLOGGED:
- `tailnet_coordinators`
- `tailnet_peers`
- `tailnet_tunnels`

UNLOGGED tables skip Write-Ahead Log (WAL) writes, significantly
improving performance for high-frequency updates like coordinator
heartbeats and peer state changes.

The trade-off is that UNLOGGED tables are truncated on crash recovery
and are not replicated to standby servers. This is acceptable for these
tables because the data is ephemeral:
1. Coordinators re-register on startup
2. Peers re-establish connections on reconnect
3. Tunnels are re-created based on current peer state

**Migration notes:**
- Child tables must be converted before the parent table because LOGGED
child tables cannot reference UNLOGGED parent tables (but the reverse is
allowed)
- The down migration reverses the order: parent first, then children

Fixes https://github.com/coder/coder/issues/21333
2026-01-28 07:12:32 +04:00
Zach
2204731ddb feat: implement boundary usage tracker and telemetry collection (#21716)
Implements telemetry for boundary usage tracking across all Coder
replicas and reports them via telemetry.

Changes:
- Implement Tracker with Track(), FlushToDB(), and StartFlushLoop() methods
- Add telemetry integration via collectBoundaryUsageSummary()
- Use telemetry lock to ensure only one replica collects per period

The tracker accumulates unique workspaces, unique users, and request
counts (allowed/denied) in memory, then flushes to the database
periodically. During telemetry collection, stats are aggregated across
all replicas and reset for the next period.
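The accumulate-then-flush shape can be sketched as follows (a simplified, hypothetical rendering of the description; the real `Tracker` lives in `coderd/boundaryusage` and flushes to the database):

```go
package main

import (
	"fmt"
	"sync"
)

// Tracker accumulates unique workspaces, unique users, and allowed/denied
// request counts in memory, as described above. Field and method names
// are approximations, not the actual implementation.
type Tracker struct {
	mu         sync.Mutex
	workspaces map[string]struct{}
	users      map[string]struct{}
	allowed    int
	denied     int
}

func NewTracker() *Tracker {
	return &Tracker{
		workspaces: map[string]struct{}{},
		users:      map[string]struct{}{},
	}
}

// Track records one boundary request.
func (t *Tracker) Track(workspaceID, userID string, allowed bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.workspaces[workspaceID] = struct{}{}
	t.users[userID] = struct{}{}
	if allowed {
		t.allowed++
	} else {
		t.denied++
	}
}

// Snapshot stands in for the periodic FlushToDB: it returns the counts
// and resets the in-memory state for the next period.
func (t *Tracker) Snapshot() (workspaces, users, allowed, denied int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	workspaces, users = len(t.workspaces), len(t.users)
	allowed, denied = t.allowed, t.denied
	t.workspaces = map[string]struct{}{}
	t.users = map[string]struct{}{}
	t.allowed, t.denied = 0, 0
	return workspaces, users, allowed, denied
}

func main() {
	tr := NewTracker()
	tr.Track("ws1", "alice", true)
	tr.Track("ws1", "bob", false)
	w, u, a, d := tr.Snapshot()
	fmt.Println(w, u, a, d)
}
```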
2026-01-27 19:11:40 -07:00
Jake Howell
d7037280da feat: improve max-height on <PopoverContent /> (#21600)
Closes #21593

Various `<PopoverContent>`s across the application were found where,
when the screen size was too small, the full content wasn't visible
unless the window was resized. This pull request ensures that the
content never extends past the
`--radix-popper-available-height` without an appropriate
scrollbar.

| Before | After |
| --- | --- |
| <img width="948" height="960" alt="CleanShot 2026-01-21 at 20 56
48@2x"
src="https://github.com/user-attachments/assets/5d15fbf9-1c62-427b-bbed-81239922a6bc"
/> | <img width="896" height="906" alt="CleanShot 2026-01-21 at 21 19
03@2x"
src="https://github.com/user-attachments/assets/cfa5baa5-2ec1-438c-9454-bf3073dc6534"
/> |
2026-01-28 01:57:17 +00:00
Steven Masley
799b190dee fix: do not enforce managed agent limit for non-task workspaces (#21689)
Only task workspaces have the checks in wsbuilder for violating the
managed agent caps in the license.

Stopped tasks that are resumed with a regular workspace start **still
count as usage**.
2026-01-27 19:01:17 -06:00
Ben Potter
3eeeabfd68 chore: clarify "agent workspace builds" were "managed agents" (#21594)
Clarified the definition of Agent Workspace Builds and updated the
previous term used.

<!--

If you have used AI to produce some or all of this PR, please ensure you
have read our [AI Contribution
guidelines](https://coder.com/docs/about/contributing/AI_CONTRIBUTING)
before submitting.

-->
2026-01-27 15:08:36 -06:00
Zach
7dfa33b410 feat: add boundary usage tracking database schema and tracker skeleton (#21670)
feat: add boundary usage telemetry database schema and RBAC

Adds the foundation for tracking boundary usage telemetry across Coder
replicas. This includes:

  - Database schema: `boundary_usage_stats` table with per-replica stats
    (unique workspaces, unique users, allowed/denied request counts)
  - Database queries: upsert stats, get aggregated summary, reset stats,
    delete by replica ID
  - RBAC: `boundary_usage` resource type with read/update/delete actions,
    accessible only via system `BoundaryUsageTracker` subject (not regular
    user roles)
  - Tracker skeleton + docs: stub implementation in `coderd/boundaryusage/`

The tracker accumulates stats in memory and periodically flushes to the
database. Stats are aggregated across replicas for telemetry reporting,
then reset when a new reporting period begins. The tracker implementation
and plumbing will be done in a subsequent commit/PR.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 13:29:21 -07:00
Yevhenii Shcherbina
e008f720b6 chore: bump claude code module version (#21708)
Changes:
- bump claude-code module version
- add docs for version compatibility (for customers using older version
of coder or claude-code module; before GA)
2026-01-27 15:13:45 -05:00
Callum Styan
d4cd982608 chore: undeprecate the workspace rename flag and clarify potential issues (#21669)
This undeprecates the `allow-workspace-renames` flag. IIUC, the 'danger'
with using this flag is that the workspace name might have been used in
the definition of some other terraform resources within template code,
so a rename could cause problems such as with persistent disks.

for https://github.com/coder/coder/issues/21628

---------

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2026-01-27 10:53:13 -08:00
Danny Kopping
3ee4f6d0ec chore: update to Go 1.25.6 (#21693)
## Summary
- Update Go version from 1.24.11 to 1.25.6
- Update go.mod to specify Go 1.25.6
- Update GitHub Actions setup-go default version
- Update dogfood Dockerfile with new Go version and SHA256 checksum

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
[Coder
Task](https://dev.coder.com/tasks/danny/42dcc0b6-17e1-4caf-bb44-8d6c8f346bef)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 20:44:21 +02:00
George K
c352a51b22 fix(coderd): authorize workspace start/stop/delete by transition action (#21691)
Use transition-specific actions when authorizing workspace build
parameter inserts in the database layer so start/stop/delete do not
require workspace.update.

Related to: https://github.com/coder/internal/issues/1299
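The idea can be sketched as a mapping from build transition to authorization action (the action strings here are illustrative, not Coder's actual RBAC identifiers):

```go
package main

import "fmt"

// actionForTransition sketches authorizing each workspace build by its
// transition-specific action rather than a blanket workspace.update,
// as described above.
func actionForTransition(transition string) (string, error) {
	switch transition {
	case "start":
		return "workspace_start", nil
	case "stop":
		return "workspace_stop", nil
	case "delete":
		return "delete", nil
	default:
		return "", fmt.Errorf("unknown transition %q", transition)
	}
}

func main() {
	action, _ := actionForTransition("start")
	fmt.Println(action)
}
```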
2026-01-27 09:08:12 -08:00
DevCats
2ee3386cc5 chore: add ready_for_review trigger and disable auto-commenting to doc-check workflow (#21667)
This pull request updates the `.github/workflows/doc-check.yaml`
workflow to improve how documentation reviews are triggered and handled,
particularly for pull requests that are converted from draft to ready
for review. The changes ensure that documentation checks are performed
at the appropriate times and clarify the workflow's behavior.

**Workflow trigger and logic enhancements:**

* Added support for triggering the documentation check when a pull
request is marked as "ready for review" (converted from draft), both in
the workflow triggers and in the workflow logic.
* Updated the internal context and trigger type handling to recognize
and describe the "ready_for_review" event, providing more accurate
context for the agent.

**Workflow behavior adjustment:**

* Changed the `comment-on-issue` setting to `false`, so the workflow
will no longer automatically comment on the PR when running, which was
creating unnecessary noise.
2026-01-27 08:09:54 -06:00
Susana Ferreira
8f3bb0b0d1 feat: add Copilot provider to aibridge (#21663)
Adds GitHub Copilot as a supported AI provider in aibridge. 

Depends on: https://github.com/coder/aibridge/pull/137
Closes: https://github.com/coder/internal/issues/1235
2026-01-27 14:02:35 +00:00
Cian Johnston
b1267c458c chore(dogfood/coder): use opus instead of sonnet for claude-code module (#21700) 2026-01-27 12:23:35 +00:00
Paweł Banaszewski
a5c06a3751 chore: bump AI Bridge version (#21698)
New AI Bridge version adds:
* Universal Client Compatibility
* [Responses API](https://github.com/coder/aibridge/issues/83) support
* Various fixes and improvements
2026-01-27 13:16:35 +01:00
Cian Johnston
7b44976618 fix(coderd/provisionerdserver): correct managed agent tracking (#21696)
Relates to https://github.com/coder/internal/issues/1282

Updates tracking of managed agents to be predicated instead on the
presence of a related `task_id` instead of the presence of a
`coder_ai_task` resource.
2026-01-27 12:14:52 +00:00
Susana Ferreira
c3f41ce08c fix: return proxy auth challenge on missing/invalid credentials (#21677)
## Description

When `CONNECT` requests are missing or have invalid
`Proxy-Authorization` credentials, the proxy now returns a proper `407
Proxy Authentication Required` response with a `Proxy-Authenticate`
challenge header instead of rejecting the connection without an HTTP
response.

Some clients (e.g. Copilot in VS Code) do not send the
`Proxy-Authorization` header on the initial request and rely on
receiving a `407` challenge to prompt for credentials. Without this fix,
those clients would fail to connect.

## Changes

* Added `newProxyAuthRequiredResponse` helper function to create
consistent `407` responses with the appropriate `Proxy-Authenticate`
header.
* Updated `authMiddleware` to return a `407` challenge instead of
rejecting unauthenticated `CONNECT` requests without an HTTP response.
* Refactored `handleRequest` to use the same helper for consistency.
* Updated `TestProxy_Authentication` to verify the `407` response
status, `Proxy-Authenticate` header, and response body.

Related to: https://github.com/coder/internal/issues/1235
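
The challenge described above can be sketched in Go. This is a minimal illustration, not Coder's actual implementation: the `newProxyAuthRequiredResponse` signature and the `Basic realm` value are assumptions for demonstration.

```go
package main

import (
	"fmt"
	"net/http"
)

// newProxyAuthRequiredResponse sketches the helper described above
// (hypothetical signature): it builds a 407 response carrying the
// Proxy-Authenticate challenge that clients rely on to prompt for
// credentials instead of silently failing the CONNECT.
func newProxyAuthRequiredResponse(req *http.Request) *http.Response {
	h := http.Header{}
	// The realm value here is illustrative, not Coder's actual realm.
	h.Set("Proxy-Authenticate", `Basic realm="aibridge"`)
	return &http.Response{
		StatusCode: http.StatusProxyAuthRequired, // 407
		Status:     "407 Proxy Authentication Required",
		Proto:      "HTTP/1.1",
		ProtoMajor: 1,
		ProtoMinor: 1,
		Header:     h,
		Request:    req,
	}
}

func main() {
	req, _ := http.NewRequest(http.MethodConnect, "http://example.com:443", nil)
	resp := newProxyAuthRequiredResponse(req)
	fmt.Println(resp.StatusCode, resp.Header.Get("Proxy-Authenticate"))
}
```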
2026-01-27 11:57:24 +00:00
Jake Howell
6f15b178a4 feat: extend premium license for aigovernance (#21499)
Closes [#1227](https://github.com/coder/internal/issues/1227)

Added support for license addons, starting with AI Governance, to enable
dynamic feature grouping without requiring license reissuance.

### What changed?

- Introduced a new `Addon` type to represent groupings of features that
can be added to licenses
- Created the first addon `AddonAIGovernance` which includes AI Bridge
and Boundary features
- Added validation for addon dependencies to ensure required features
are present
- Added new features: `FeatureBoundary` and
`FeatureAIGovernanceUserLimit`
- Updated license entitlement logic to handle addons and their features
- Added helper methods to check if features belong to addons
- Updated tests to verify addon functionality

### Why make this change?

This change introduces a more flexible licensing model that allows
features to be grouped into addons that can be added to licenses without
requiring reissuance when new features are added to an addon. This is
particularly useful for specialized feature sets like AI Governance,
where related features can be bundled together and sold as a separate
SKU. The addon approach allows for better organization of features and
more granular control over entitlements.
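
The addon grouping can be sketched as follows. Type and constant names here are illustrative guesses at the shapes the PR describes; the real definitions live in Coder's licensing packages and may differ.

```go
package main

import "fmt"

// FeatureName identifies a licensable feature (illustrative).
type FeatureName string

const (
	FeatureAIBridge FeatureName = "aibridge"
	FeatureBoundary FeatureName = "boundary"
)

// Addon groups features so a license can grant them as one unit
// without being reissued when the group gains a new feature.
type Addon string

const AddonAIGovernance Addon = "ai_governance"

// addonFeatures maps each addon to the features it currently includes.
var addonFeatures = map[Addon][]FeatureName{
	AddonAIGovernance: {FeatureAIBridge, FeatureBoundary},
}

// addonContains reports whether a feature belongs to an addon,
// mirroring the "helper methods to check if features belong to
// addons" mentioned above.
func addonContains(a Addon, f FeatureName) bool {
	for _, feat := range addonFeatures[a] {
		if feat == f {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(addonContains(AddonAIGovernance, FeatureBoundary))
}
```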
2026-01-27 22:33:53 +11:00
blinkagent[bot]
1375fd9ead docs: add administrator configuration for disabling Coder Desktop auto-updates (#21641)
## Summary

Adds documentation for the "disable automatic updates" feature in Coder
Desktop.

This adds a new "Administrator Configuration" section to the Coder
Desktop docs that documents:

- **macOS**: Setting the `disableUpdater` UserDefaults key via MDM or
`defaults` command
- **Windows**: Setting the `Updater:Enable` registry value in
`HKLM\SOFTWARE\Coder Desktop\App`

The feature already exists in both platforms but was not documented in
the user-facing docs.

## Changes

- Added new "Administrator Configuration" section before
"Troubleshooting"
- Documented macOS MDM configuration for disabling updates
- Documented Windows registry configuration for disabling updates
- Mentioned the `ForcedChannel` option for locking update channels

---

Created on behalf of @ethanndickson

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-01-27 22:26:40 +11:00
Susana Ferreira
7546e94534 feat: improve aibridgeproxyd logging (#21617)
## Description

Improves logging in `aibridgeproxyd` to provide better observability for
proxy requests. Adds structured logging with request correlation IDs and
propagates request context through the proxy chain.

## Changes

* Add `requestContext` struct to propagate metadata (token, provider,
session ID) through the proxy request/response chain
* ~Add `handleTunnelRequest` to log passthrough requests for
non-allowlisted domains at debug level~ (removed due to verbosity)
* Add `handleResponse` to log responses from `aibridged`
* Log MITM requests routed to `aibridged` at info level, tunneled
requests at debug level

Related to: https://github.com/coder/internal/issues/1185
2026-01-27 11:21:31 +00:00
Sas Swart
59b2afaa80 perf: use the more efficient dannykopping/anthropic-sdk-go for AI Bridge (#21695)
2026-01-27 13:19:41 +02:00
Danny Kopping
303389e75a fix: correct https://github.com/coder/internal/issues/1167 behaviour (#21692)
Closes https://github.com/coder/internal/issues/1167

Previously we were checking that start != end time; this was flaking on
Windows.

On Windows, `time.Now()` has limited resolution (~1ms with Go runtime's
`timeBeginPeriod`, or ~15.6ms at the default system resolution). When two
`time.Now()` calls execute within the same clock tick, they return
identical timestamps, causing `StartedAt.Before(EndedAt)` to return
`false`.

**References:**
- [Go issue #8687](https://github.com/golang/go/issues/8687) - Windows
system clock resolution issue
- [Go issue #67066](https://github.com/golang/go/issues/67066) -
time.Now precision on Windows (still open)

Instead, we're changing the assertion to (the more semantically correct)
"end not before start".

A possible future enhancement could be to plumb coder/quartz through the
recording mechanism, but it's unnecessary for now.

Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-27 12:36:48 +02:00
Mathias Fredriksson
25d7f27cdb feat(coderd): add task log snapshot storage endpoint (#21644)
This change adds a POST /workspaceagents/me/tasks/{task}/log-snapshot
endpoint for agents to upload task conversation history during
workspace shutdown. This allows users to view task logs even when the
workspace is stopped.

The endpoint accepts agentapi format payloads (typically last 10
messages, max 64KB), wraps them in a format envelope, and upserts to the
task_snapshots table. Uses agent token auth and validates the task
belongs to the agent's workspace.

Closes coder/internal#1253
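
The envelope-and-size-cap behavior described above can be sketched as follows. The struct fields, `"agentapi"` format tag, and helper name are assumptions for illustration; the real wire format in coderd may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// snapshotEnvelope sketches the "format envelope" described above
// (field names are hypothetical).
type snapshotEnvelope struct {
	Format string          `json:"format"` // e.g. "agentapi"
	Data   json.RawMessage `json:"data"`
}

// maxSnapshotBytes is the 64KB cap from the PR description.
const maxSnapshotBytes = 64 * 1024

// wrapSnapshot enforces the size cap and wraps the raw agentapi
// payload in the envelope before it would be upserted.
func wrapSnapshot(payload []byte) (snapshotEnvelope, error) {
	if len(payload) > maxSnapshotBytes {
		return snapshotEnvelope{}, fmt.Errorf("snapshot too large: %d bytes", len(payload))
	}
	return snapshotEnvelope{Format: "agentapi", Data: payload}, nil
}

func main() {
	env, err := wrapSnapshot([]byte(`{"messages":[]}`))
	if err != nil {
		panic(err)
	}
	out, _ := json.Marshal(env)
	fmt.Println(string(out))
}
```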
2026-01-27 11:09:24 +02:00
Sushant P
f2e998848e fix: resolve organization member visibility issue during owned work sharing (#21657)
The workspace sharing autocomplete was using the site-wide /api/v2/users
endpoint which requires user:read permission. Regular org members don't
have this permission, so they couldn't see other members to share with.

## Sharing Scope
* This iteration of shared workspaces is slated for beta, and the
currently understood use case does not include cross-org workspace
sharing. This can be addressed later if necessary.

## What's Changed
* Changed to use /api/v2/organizations/{org}/members instead, which only
requires organization_member:read permission (already granted to org
members when workspace sharing is enabled).
* Added `OrganizationMemberWithUserData` to the `UserLike` union to
allow for more flexibility in differentiating groups from users
2026-01-26 18:25:12 -08:00
Zach
d2e54819bf docs: clarify boundary logs are independent from app logs (#21578) 2026-01-26 14:34:06 -07:00
Callum Styan
806d7e4c11 docs: update metrics docs to include metadata batcher metrics (#21665)
This updates the metrics docs to include metrics added in
https://github.com/coder/coder/pull/21330

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2026-01-26 09:22:14 -08:00
Danny Kopping
7123518baa feat: conditionally send aibridge actor headers (#21643)
Also passes along the authenticated username as actor metadata.

Closes https://github.com/coder/aibridge/issues/135
Depends on https://github.com/coder/aibridge/pull/142

**Replace aibridge tag with merge commit once
https://github.com/coder/aibridge/pull/142 lands.**

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-26 15:08:17 +00:00
dependabot[bot]
bb186b8699 ci: bump the github-actions group across 1 directory with 4 updates (#21683)
Bumps the github-actions group with 4 updates in the / directory:
[actions/checkout](https://github.com/actions/checkout),
[actions/cache](https://github.com/actions/cache),
[chromaui/action](https://github.com/chromaui/action) and
[nix-community/cache-nix-action](https://github.com/nix-community/cache-nix-action).

Updates `actions/checkout` from 6.0.1 to 6.0.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/releases">actions/checkout's
releases</a>.</em></p>
<blockquote>
<h2>v6.0.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Add orchestration_id to git user-agent when ACTIONS_ORCHESTRATION_ID
is set by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2355">actions/checkout#2355</a></li>
<li>Fix tag handling: preserve annotations and explicit fetch-tags by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2356">actions/checkout#2356</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/checkout/compare/v6.0.1...v6.0.2">https://github.com/actions/checkout/compare/v6.0.1...v6.0.2</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/checkout/blob/main/CHANGELOG.md">actions/checkout's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2>v6.0.2</h2>
<ul>
<li>Fix tag handling: preserve annotations and explicit fetch-tags by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2356">actions/checkout#2356</a></li>
</ul>
<h2>v6.0.1</h2>
<ul>
<li>Add worktree support for persist-credentials includeIf by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2327">actions/checkout#2327</a></li>
</ul>
<h2>v6.0.0</h2>
<ul>
<li>Persist creds to a separate file by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2286">actions/checkout#2286</a></li>
<li>Update README to include Node.js 24 support details and requirements
by <a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2248">actions/checkout#2248</a></li>
</ul>
<h2>v5.0.1</h2>
<ul>
<li>Port v6 cleanup to v5 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2301">actions/checkout#2301</a></li>
</ul>
<h2>v5.0.0</h2>
<ul>
<li>Update actions checkout to use node 24 by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2226">actions/checkout#2226</a></li>
</ul>
<h2>v4.3.1</h2>
<ul>
<li>Port v6 cleanup to v4 by <a
href="https://github.com/ericsciple"><code>@​ericsciple</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2305">actions/checkout#2305</a></li>
</ul>
<h2>v4.3.0</h2>
<ul>
<li>docs: update README.md by <a
href="https://github.com/motss"><code>@​motss</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1971">actions/checkout#1971</a></li>
<li>Add internal repos for checking out multiple repositories by <a
href="https://github.com/mouismail"><code>@​mouismail</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1977">actions/checkout#1977</a></li>
<li>Documentation update - add recommended permissions to Readme by <a
href="https://github.com/benwells"><code>@​benwells</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2043">actions/checkout#2043</a></li>
<li>Adjust positioning of user email note and permissions heading by <a
href="https://github.com/joshmgross"><code>@​joshmgross</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2044">actions/checkout#2044</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2194">actions/checkout#2194</a></li>
<li>Update CODEOWNERS for actions by <a
href="https://github.com/TingluoHuang"><code>@​TingluoHuang</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/2224">actions/checkout#2224</a></li>
<li>Update package dependencies by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/2236">actions/checkout#2236</a></li>
</ul>
<h2>v4.2.2</h2>
<ul>
<li><code>url-helper.ts</code> now leverages well-known environment
variables by <a href="https://github.com/jww3"><code>@​jww3</code></a>
in <a
href="https://redirect.github.com/actions/checkout/pull/1941">actions/checkout#1941</a></li>
<li>Expand unit test coverage for <code>isGhes</code> by <a
href="https://github.com/jww3"><code>@​jww3</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1946">actions/checkout#1946</a></li>
</ul>
<h2>v4.2.1</h2>
<ul>
<li>Check out other refs/* by commit if provided, fall back to ref by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1924">actions/checkout#1924</a></li>
</ul>
<h2>v4.2.0</h2>
<ul>
<li>Add Ref and Commit outputs by <a
href="https://github.com/lucacome"><code>@​lucacome</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1180">actions/checkout#1180</a></li>
<li>Dependency updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>- <a
href="https://redirect.github.com/actions/checkout/pull/1777">actions/checkout#1777</a>,
<a
href="https://redirect.github.com/actions/checkout/pull/1872">actions/checkout#1872</a></li>
</ul>
<h2>v4.1.7</h2>
<ul>
<li>Bump the minor-npm-dependencies group across 1 directory with 4
updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1739">actions/checkout#1739</a></li>
<li>Bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1697">actions/checkout#1697</a></li>
<li>Check out other refs/* by commit by <a
href="https://github.com/orhantoy"><code>@​orhantoy</code></a> in <a
href="https://redirect.github.com/actions/checkout/pull/1774">actions/checkout#1774</a></li>
<li>Pin actions/checkout's own workflows to a known, good, stable
version. by <a href="https://github.com/jww3"><code>@​jww3</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1776">actions/checkout#1776</a></li>
</ul>
<h2>v4.1.6</h2>
<ul>
<li>Check platform to set archive extension appropriately by <a
href="https://github.com/cory-miller"><code>@​cory-miller</code></a> in
<a
href="https://redirect.github.com/actions/checkout/pull/1732">actions/checkout#1732</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="de0fac2e45"><code>de0fac2</code></a>
Fix tag handling: preserve annotations and explicit fetch-tags (<a
href="https://redirect.github.com/actions/checkout/issues/2356">#2356</a>)</li>
<li><a
href="064fe7f331"><code>064fe7f</code></a>
Add orchestration_id to git user-agent when ACTIONS_ORCHESTRATION_ID is
set (...</li>
<li>See full diff in <a
href="8e8c483db8...de0fac2e45">compare
view</a></li>
</ul>
</details>
<br />

Updates `actions/cache` from 5.0.1 to 5.0.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/releases">actions/cache's
releases</a>.</em></p>
<blockquote>
<h2>v.5.0.2</h2>
<h1>v5.0.2</h1>
<h2>What's Changed</h2>
<p>When creating cache entries, 429s returned from the cache service
will not be retried.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/actions/cache/blob/main/RELEASES.md">actions/cache's
changelog</a>.</em></p>
<blockquote>
<h1>Releases</h1>
<h2>Changelog</h2>
<h3>5.0.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v5.0.3 <a
href="https://redirect.github.com/actions/cache/pull/1692">#1692</a></li>
</ul>
<h3>5.0.1</h3>
<ul>
<li>Update <code>@azure/storage-blob</code> to <code>^12.29.1</code> via
<code>@actions/cache@5.0.1</code> <a
href="https://redirect.github.com/actions/cache/pull/1685">#1685</a></li>
</ul>
<h3>5.0.0</h3>
<blockquote>
<p>[!IMPORTANT]
<code>actions/cache@v5</code> runs on the Node.js 24 runtime and
requires a minimum Actions Runner version of <code>2.327.1</code>.
If you are using self-hosted runners, ensure they are updated before
upgrading.</p>
</blockquote>
<h3>4.3.0</h3>
<ul>
<li>Bump <code>@actions/cache</code> to <a
href="https://redirect.github.com/actions/toolkit/pull/2132">v4.1.0</a></li>
</ul>
<h3>4.2.4</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.5</li>
</ul>
<h3>4.2.3</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.3 (obfuscates SAS token in
debug logs for cache entries)</li>
</ul>
<h3>4.2.2</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.2</li>
</ul>
<h3>4.2.1</h3>
<ul>
<li>Bump <code>@actions/cache</code> to v4.0.1</li>
</ul>
<h3>4.2.0</h3>
<p>TLDR; The cache backend service has been rewritten from the ground up
for improved performance and reliability. <a
href="https://github.com/actions/cache">actions/cache</a> now integrates
with the new cache service (v2) APIs.</p>
<p>The new service will gradually roll out as of <strong>February 1st,
2025</strong>. The legacy service will also be sunset on the same date.
Changes in these release are <strong>fully backward
compatible</strong>.</p>
<p><strong>We are deprecating some versions of this action</strong>. We
recommend upgrading to version <code>v4</code> or <code>v3</code> as
soon as possible before <strong>February 1st, 2025.</strong> (Upgrade
instructions below).</p>
<p>If you are using pinned SHAs, please use the SHAs of versions
<code>v4.2.0</code> or <code>v3.4.0</code></p>
<p>If you do not upgrade, all workflow runs using any of the deprecated
<a href="https://github.com/actions/cache">actions/cache</a> will
fail.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8b402f58fb"><code>8b402f5</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/cache/issues/1692">#1692</a>
from GhadimiR/main</li>
<li><a
href="304ab5a070"><code>304ab5a</code></a>
license for httpclient</li>
<li><a
href="609fc19e67"><code>609fc19</code></a>
Update licensed record for cache</li>
<li><a
href="b22231e43d"><code>b22231e</code></a>
Build</li>
<li><a
href="93150cdfb3"><code>93150cd</code></a>
Add PR link to releases</li>
<li><a
href="9b8ca9f07e"><code>9b8ca9f</code></a>
Bump actions/cache to 5.0.3</li>
<li>See full diff in <a
href="9255dc7a25...8b402f58fb">compare
view</a></li>
</ul>
</details>
<br />

Updates `chromaui/action` from 13.3.4 to 13.3.5
<details>
<summary>Commits</summary>
<ul>
<li><a
href="07791f8243"><code>07791f8</code></a>
v13.3.5</li>
<li>See full diff in <a
href="4c20b95e9d...07791f8243">compare
view</a></li>
</ul>
</details>
<br />

Updates `nix-community/cache-nix-action` from 7.0.0 to 7.0.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nix-community/cache-nix-action/releases">nix-community/cache-nix-action's
releases</a>.</em></p>
<blockquote>
<h2>v7.0.1</h2>
<h2>What's Changed</h2>
<h2>Fixed</h2>
<ul>
<li>Checkpoint Nix store database before saving cache by <a
href="https://github.com/CathalMullan"><code>@​CathalMullan</code></a>
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/278">nix-community/cache-nix-action#278</a></li>
<li>Checkpoint Nix store database before copying it by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/279">nix-community/cache-nix-action#279</a></li>
</ul>
<h2>Fixed (CI)</h2>
<ul>
<li>Fix formatting in CI by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/280">nix-community/cache-nix-action#280</a></li>
<li>Fix workflows for PRs in CI by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/281">nix-community/cache-nix-action#281</a></li>
</ul>
<h2>Changed (deps)</h2>
<!-- raw HTML omitted -->
<ul>
<li>chore(deps): bump <code>@​actions/github</code> from 6.0.1 to 7.0.0
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/272">nix-community/cache-nix-action#272</a></li>
<li>chore(deps-dev): bump eslint-config-love from 140.0.0 to 144.0.0 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/271">nix-community/cache-nix-action#271</a></li>
<li>chore(deps-dev): bump <code>@​typescript-eslint/parser</code> from
8.51.0 to 8.52.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/269">nix-community/cache-nix-action#269</a></li>
<li>chore(deps-dev): bump eslint-plugin-jest from 29.12.0 to 29.12.1 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/266">nix-community/cache-nix-action#266</a></li>
<li>chore(deps-dev): bump <code>@​typescript-eslint/eslint-plugin</code>
from 8.51.0 to 8.52.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/268">nix-community/cache-nix-action#268</a></li>
<li>chore(deps-dev): bump <code>@​typescript-eslint/parser</code> from
8.52.0 to 8.53.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/273">nix-community/cache-nix-action#273</a></li>
<li>chore(deps-dev): bump prettier from 3.7.4 to 3.8.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/277">nix-community/cache-nix-action#277</a></li>
<li>chore(deps-dev): bump <code>@​typescript-eslint/eslint-plugin</code>
from 8.52.0 to 8.53.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/274">nix-community/cache-nix-action#274</a></li>
</ul>
<!-- raw HTML omitted -->
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/CathalMullan"><code>@​CathalMullan</code></a>
made their first contribution in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/278">nix-community/cache-nix-action#278</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nix-community/cache-nix-action/compare/v7...v7.0.1">https://github.com/nix-community/cache-nix-action/compare/v7...v7.0.1</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="106bba72ed"><code>106bba7</code></a>
fix(ci): use a modern command</li>
<li><a
href="b244431fab"><code>b244431</code></a>
chore: update src</li>
<li><a
href="052bf75174"><code>052bf75</code></a>
chore: update docs</li>
<li><a
href="c19319ee78"><code>c19319e</code></a>
chore: build the action</li>
<li><a
href="e3b90182d2"><code>e3b9018</code></a>
feat(action): add comment about checkpointing after database
merging</li>
<li><a
href="05419d3e13"><code>05419d3</code></a>
feat(readme): mention that the action may affect the workflow speed</li>
<li><a
href="0c043090a0"><code>0c04309</code></a>
refactor(readme): group limitations and list them in separate
sections</li>
<li><a
href="084a7ec7cc"><code>084a7ec</code></a>
fix(github): adress linter comments and format templates</li>
<li><a
href="b23f7c961d"><code>b23f7c9</code></a>
fix(ci): don't fail-fast</li>
<li><a
href="6b5a012f6e"><code>6b5a012</code></a>
Merge pull request <a
href="https://redirect.github.com/nix-community/cache-nix-action/issues/281">#281</a>
from nix-community/fix-prs</li>
<li>Additional commits viewable in <a
href="b426b118b6...106bba72ed">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 13:20:40 +00:00
dependabot[bot]
bbca7f546c chore: bump google.golang.org/api from 0.260.0 to 0.262.0 (#21680)
Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.260.0 to 0.262.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/releases">google.golang.org/api's
releases</a>.</em></p>
<blockquote>
<h2>v0.262.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.261.0...v0.262.0">0.262.0</a>
(2026-01-22)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3446">#3446</a>)
(<a
href="e7cf4692f3">e7cf469</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3450">#3450</a>)
(<a
href="b32ced9c87">b32ced9</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>internaloption:</strong> Add WithTelemetryAttributes (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3442">#3442</a>)
(<a
href="2a5c807a86">2a5c807</a>)</li>
</ul>
<h2>v0.261.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.260.0...v0.261.0">0.261.0</a>
(2026-01-20)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3439">#3439</a>)
(<a
href="70a0e3729f">70a0e37</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3441">#3441</a>)
(<a
href="c32590dc1e">c32590d</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3443">#3443</a>)
(<a
href="1c9ed9b363">1c9ed9b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3444">#3444</a>)
(<a
href="9b31e6d02b">9b31e6d</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md">google.golang.org/api's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.261.0...v0.262.0">0.262.0</a>
(2026-01-22)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3446">#3446</a>)
(<a
href="e7cf4692f3">e7cf469</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3450">#3450</a>)
(<a
href="b32ced9c87">b32ced9</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>internaloption:</strong> Add WithTelemetryAttributes (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3442">#3442</a>)
(<a
href="2a5c807a86">2a5c807</a>)</li>
</ul>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.260.0...v0.261.0">0.261.0</a>
(2026-01-20)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3439">#3439</a>)
(<a
href="70a0e3729f">70a0e37</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3441">#3441</a>)
(<a
href="c32590dc1e">c32590d</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3443">#3443</a>)
(<a
href="1c9ed9b363">1c9ed9b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3444">#3444</a>)
(<a
href="9b31e6d02b">9b31e6d</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1faae8daa9"><code>1faae8d</code></a>
chore(main): release 0.262.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3449">#3449</a>)</li>
<li><a
href="2a5c807a86"><code>2a5c807</code></a>
fix(internaloption): add WithTelemetryAttributes (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3442">#3442</a>)</li>
<li><a
href="b32ced9c87"><code>b32ced9</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3450">#3450</a>)</li>
<li><a
href="e7cf4692f3"><code>e7cf469</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3446">#3446</a>)</li>
<li><a
href="ff4f52bc32"><code>ff4f52b</code></a>
chore(main): release 0.261.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3440">#3440</a>)</li>
<li><a
href="8e75a1d937"><code>8e75a1d</code></a>
chore(all): update all (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3445">#3445</a>)</li>
<li><a
href="9b31e6d02b"><code>9b31e6d</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3444">#3444</a>)</li>
<li><a
href="1c9ed9b363"><code>1c9ed9b</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3443">#3443</a>)</li>
<li><a
href="c32590dc1e"><code>c32590d</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3441">#3441</a>)</li>
<li><a
href="70a0e3729f"><code>70a0e37</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3439">#3439</a>)</li>
<li>See full diff in <a
href="https://github.com/googleapis/google-api-go-client/compare/v0.260.0...v0.262.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/api&package-manager=go_modules&previous-version=0.260.0&new-version=0.262.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 13:11:28 +00:00
dependabot[bot]
4bff2f7296 chore: bump github.com/dgraph-io/ristretto/v2 from 2.3.0 to 2.4.0 (#21681)
Bumps
[github.com/dgraph-io/ristretto/v2](https://github.com/dgraph-io/ristretto)
from 2.3.0 to 2.4.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dgraph-io/ristretto/releases">github.com/dgraph-io/ristretto/v2's
releases</a>.</em></p>
<blockquote>
<h2>v2.4.0</h2>
<h2>What's Changed</h2>
<ul>
<li>feat: add value iterator by <a
href="https://github.com/SkArchon"><code>@​SkArchon</code></a> in <a
href="https://redirect.github.com/dgraph-io/ristretto/pull/475">dgraph-io/ristretto#475</a></li>
<li>fix: allow custom key types with underlying types in Key constraint
by <a
href="https://github.com/matthewmcneely"><code>@​matthewmcneely</code></a>
in <a
href="https://redirect.github.com/dgraph-io/ristretto/pull/478">dgraph-io/ristretto#478</a></li>
<li>chore(deps): Update actions/checkout action to v5 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/dgraph-io/ristretto/pull/464">dgraph-io/ristretto#464</a></li>
<li>chore(deps): Update actions/setup-go action to v6 by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/dgraph-io/ristretto/pull/468">dgraph-io/ristretto#468</a></li>
<li>chore(deps): Update go minor and patch by <a
href="https://github.com/renovate"><code>@​renovate</code></a>[bot] in
<a
href="https://redirect.github.com/dgraph-io/ristretto/pull/467">dgraph-io/ristretto#467</a></li>
<li>chore: update trunk for 1.25 toolchain by <a
href="https://github.com/matthewmcneely"><code>@​matthewmcneely</code></a>
in <a
href="https://redirect.github.com/dgraph-io/ristretto/pull/471">dgraph-io/ristretto#471</a></li>
<li>chore: update readme and trunk config by <a
href="https://github.com/matthewmcneely"><code>@​matthewmcneely</code></a>
in <a
href="https://redirect.github.com/dgraph-io/ristretto/pull/474">dgraph-io/ristretto#474</a></li>
<li>chore(test): fix test files compilation on 32-bit archs (<a
href="https://redirect.github.com/dgraph-io/ristretto/issues/465">#465</a>)
by <a href="https://github.com/jas4711"><code>@​jas4711</code></a> in <a
href="https://redirect.github.com/dgraph-io/ristretto/pull/470">dgraph-io/ristretto#470</a></li>
<li>chore: prepare for release v2.4.0 by <a
href="https://github.com/matthewmcneely"><code>@​matthewmcneely</code></a>
in <a
href="https://redirect.github.com/dgraph-io/ristretto/pull/479">dgraph-io/ristretto#479</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/jas4711"><code>@​jas4711</code></a> made
their first contribution in <a
href="https://redirect.github.com/dgraph-io/ristretto/pull/470">dgraph-io/ristretto#470</a></li>
<li><a href="https://github.com/SkArchon"><code>@​SkArchon</code></a>
made their first contribution in <a
href="https://redirect.github.com/dgraph-io/ristretto/pull/475">dgraph-io/ristretto#475</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/dgraph-io/ristretto/compare/v2.3.0...v2.4.0">https://github.com/dgraph-io/ristretto/compare/v2.3.0...v2.4.0</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/dgraph-io/ristretto/blob/main/CHANGELOG.md">github.com/dgraph-io/ristretto/v2's
changelog</a>.</em></p>
<blockquote>
<h2>[v2.4.0] - 2026-01-21</h2>
<h3>Added</h3>
<ul>
<li>Implement public <code>Cache.IterValues()</code> method (<a
href="https://redirect.github.com/dgraph-io/ristretto/issues/475">#475</a>)</li>
<li>Allow custom key types with underlying types in Key constraint (<a
href="https://redirect.github.com/dgraph-io/ristretto/issues/478">#478</a>)</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>Fix compilation on 32-bit archs (<a
href="https://redirect.github.com/dgraph-io/ristretto/issues/465">#465</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/dgraph-io/ristretto/compare/v2.3.0...v2.4.0">https://github.com/dgraph-io/ristretto/compare/v2.3.0...v2.4.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="402101df6c"><code>402101d</code></a>
Update change log</li>
<li><a
href="25534dc98b"><code>25534dc</code></a>
Update README</li>
<li><a
href="3eb2a417db"><code>3eb2a41</code></a>
Update trunk config</li>
<li><a
href="20299c01d0"><code>20299c0</code></a>
allow custom key types with underlying types in Key constraint</li>
<li><a
href="3e164e48c1"><code>3e164e4</code></a>
feat: value iterator</li>
<li><a
href="4f24d62b51"><code>4f24d62</code></a>
Fix compilation on 32-bit archs (<a
href="https://redirect.github.com/dgraph-io/ristretto/issues/465">#465</a>)</li>
<li><a
href="2149cc3abb"><code>2149cc3</code></a>
Update template</li>
<li><a
href="b53c3918ca"><code>b53c391</code></a>
Update codeowners</li>
<li><a
href="6f2e7876f6"><code>6f2e787</code></a>
Update copyright notices</li>
<li><a
href="3537f72a0f"><code>3537f72</code></a>
Update trunk configuration</li>
<li>Additional commits viewable in <a
href="https://github.com/dgraph-io/ristretto/compare/v2.3.0...v2.4.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/dgraph-io/ristretto/v2&package-manager=go_modules&previous-version=2.3.0&new-version=2.4.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 13:10:49 +00:00
dependabot[bot]
c3cd3614e4 chore: bump rust from bf3368a to df6ca8f in /dogfood/coder (#21682)
Bumps rust from `bf3368a` to `df6ca8f`.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rust&package-manager=docker&previous-version=slim&new-version=slim)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-26 13:10:09 +00:00
Cian Johnston
612aae2523 chore: replace httpapi.Heartbeat with httpapi.HeartbeatClose (#21676)
Relates to https://github.com/coder/coder/pull/21676

* Replaces all existing usages of `httpapi.Heartbeat` with `httpapi.HeartbeatClose`
* Removes the now-unused `httpapi.Heartbeat`
2026-01-26 12:11:40 +00:00
Ethan
49f135bcd4 fix(.devcontainer): make post_start.sh idempotent (#21678)
When running multiple instances of the coder/coder devcontainer, the
`postStartCommand` fails with exit code 1:

```
postStartCommand from devcontainer.json failed with exit code 1
Command failed: ./.devcontainer/scripts/post_start.sh
```

`service docker start` returns exit code 1 when Docker is already
running:

| Docker State | Exit Code |
|--------------|-----------|
| Not running | 0 |
| Already running | **1** |

## Fix

Check if Docker is already running before attempting to start it:

```bash
sudo service docker status >/dev/null 2>&1 || sudo service docker start
```

---

*This PR and description were generated by
[Mux](https://mux.coder.com).*
2026-01-26 10:52:05 +00:00
Spike Curtis
f47f89d997 chore: remove unused tailnet v1 tables and queries (#21646)
Removes the legacy tailnet v1 API tables (`tailnet_clients`, `tailnet_agents`, `tailnet_client_subscriptions`) and their associated queries, triggers, and functions. These were superseded by the v2 tables (`tailnet_peers`, `tailnet_tunnels`) in migration 000168, and the v1 API code was removed in commit d6154c4310, but the database artifacts were never cleaned up.

**Changes:**
- New migration `000410_remove_tailnet_v1_tables` to drop the unused tables
- Removed 11 unused queries from `tailnet.sql`
- Removed associated manual wrapper methods in `dbauthz` and `dbmetrics`
- ~930 lines deleted across 11 files
2026-01-26 14:27:17 +04:00
Kacper Sawicki
78bc5861e0 feat(enterprise/coderd): add soft warning for AI Bridge GA transition (#21675)
## Summary

AI Bridge is moving to General Availability in v2.30 and will require
the AI Governance Add-On license in future versions. This adds a soft
warning for deployments using AI Bridge via Premium/Enterprise
FeatureSet without an explicit AI Bridge add-on license.

Relates to: https://github.com/coder/internal/issues/1226

## Changes

- Track whether AI Bridge was explicitly granted via license Features
(add-on) vs inherited from FeatureSet
- Show soft warning when AI Bridge is enabled and entitled via
FeatureSet but not via explicit add-on
- Changed AI Bridge enablement from hardcoded `true` to check
`CODER_AIBRIDGE_ENABLED` deployment config

## Behavior Change

AI Bridge is now only marked as "enabled" in entitlements when
`CODER_AIBRIDGE_ENABLED=true` is set in the deployment config.
Previously, it was always enabled for Premium/Enterprise licenses
regardless of the config setting.

This change ensures that users who do not use AI Bridge will not see the
soft warning about the upcoming license requirement.

## Warning Message

> AI Bridge is now Generally Available in v2.30. In a future Coder
version, your deployment will require the AI Governance Add-On to
continue using this feature. Please reach out to your account team or
sales@coder.com to learn more.

## Behavior

| Condition | Warning Shown |
|-----------|---------------|
| AI Bridge disabled | No |
| AI Bridge enabled + explicit add-on license | No |
| AI Bridge enabled + Premium/Enterprise FeatureSet (no add-on) | Yes |

## Screenshots

### 1. No license
<img width="1708" height="577" alt="image"
src="https://github.com/user-attachments/assets/cbdbfd4d-55de-4d70-8abf-2665f458e96f"
/>

### 2. No license + CODER_AIBRIDGE_ENABLED=true
<img width="1716" height="513" alt="image"
src="https://github.com/user-attachments/assets/344aae76-7703-485f-b568-1f13a1efa48f"
/>

### 3. Premium license + CODER_AIBRIDGE_ENABLED=false
<img width="1687" height="389" alt="image"
src="https://github.com/user-attachments/assets/c2be12b0-1c0f-438d-a293-f9ec9fe6a736"
/>

### 4. Premium license + CODER_AIBRIDGE_ENABLED=true
<img width="1707" height="525" alt="image"
src="https://github.com/user-attachments/assets/1a4640e1-e656-4f9b-bed0-9390cb5d6a84"
/>

## Notes

- TODO comments added to mark code that should be removed when AI Bridge
enforcement is added
- Feature continues to work - this is just a transitional warning (soft
enforcement)
2026-01-26 10:46:45 +01:00
Cian Johnston
0d21365825 chore: fix failing agent tests with non-default shell (#21671)
* Updates agent tests to write `exit 0` to stdin before closing.
* Updates agent stats tests to detect required stats split out over multiple reports
2026-01-26 09:42:24 +00:00
Danielle Maywood
409360c62d fix(coderd): ensure inbox WebSocket is closed when client disconnects (#21652)
Relates to https://github.com/coder/coder/issues/19715

This is similar to https://github.com/coder/coder/pull/19711

This endpoint works by doing the following:
- Subscribing to the database's with pubsub
- Accepts a WebSocket upgrade
- Starts a `httpapi.Heartbeat`
- Creates a json encoder
- **Infinitely loops waiting for notification until request context
cancelled**

The critical issue here is that `httpapi.Heartbeat` silently fails when
the client has disconnected. This means we never cancel the request
context, leaving the WebSocket alive until we receive a notification
from the database and fail to write that down the pipe.

By replacing usage of `httpapi.Heartbeat` with `httpapi.HeartbeatClose`,
we cancel the context _when the heartbeat fails to write_ due to the
client disconnecting. This allows us to cleanup without waiting for a
notification to come through the pubsub channel.
2026-01-26 09:24:45 +00:00
christin
6c8209bdf1 fix(site): update bulk action checkbox style for workspace and task lists (#21535)
## Summary

Updates the bulk action checkbox style in workspace and task lists to
use the new Shadcn Checkbox component that aligns with the Coder design
system (as specified in
[Figma](https://www.figma.com/design/WfqIgsTFN2BscBSSyXWF8/Coder-kit?node-id=489-4187&t=KRtpi391rVPHRXJI-1)).

## Changes

- **WorkspacesTable.tsx**: Replace MUI Checkbox with Shadcn Checkbox
component
- **TasksTable.tsx**: Replace MUI Checkbox with Shadcn Checkbox
component
- **WorkspacesPageView.stories.tsx**: Add `WithCheckedWorkspaces` story
to showcase the new design

## Key Improvements

The new checkbox design features:
- Consistent 20px × 20px sizing (vs. old larger MUI checkbox)
- 🎨 Clean inverted color scheme (light background unchecked, dark when
checked)
- Proper indeterminate state support for "select all" functionality
- 🎯 Smooth hover and focus states with proper ring indicators
- 📐 Better alignment with Coder design language from Figma

## API Changes

Updated from MUI Checkbox API to Radix UI Checkbox API:
- `onChange={(e) => ...}` → `onCheckedChange={(checked) => ...}`
- Removed MUI-specific `size` props (`xsmall`, `small`) 
- Updated `checked` prop to support boolean | "indeterminate"

## Testing

### Storybook

The checkbox changes can be reviewed in Storybook:

1. **Checkbox Component** - `components/Checkbox` - Base component
examples
2. **Workspaces Page** - `pages/WorkspacesPage/WithCheckedWorkspaces` -
Shows workspace list with selections
3. **Tasks Page** - `pages/TasksPage/BatchActionsSomeSelected` - Shows
task list with selections

### Feature Flag Requirement

**Note**: The bulk action checkboxes require the
`workspace_batch_actions` or `task_batch_actions` feature flags to be
enabled (Premium feature). To test in a live environment, you'll need a
valid license that includes these features.

## Screenshots

Before: Old MUI checkbox with prominent blue styling and larger size
After: New Shadcn checkbox with refined design matching Coder's design
system

_(Screenshots can be viewed in Storybook at the URLs above)_

Closes #21444

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Jake Howell <jake@hwll.me>
Co-authored-by: Jaayden Halko <jaayden@coder.com>
2026-01-26 09:04:15 +01:00
Ben Potter
ece531ab4e chore: mention usage data reporting in AI Gov docs (#21664)
2026-01-23 21:40:17 +00:00
Yevhenii Shcherbina
15c61906e2 test: fix flaky boundary test (#21660)
Closes https://github.com/coder/internal/issues/1297

Rewrite `TestBoundarySubcommand` in a way similar to
`TestPrebuildsCommand`.
2026-01-23 15:25:15 -05:00
Yevhenii Shcherbina
8d6822b23a ci: skip flaky test (#21658) 2026-01-23 18:46:31 +00:00
ケイラ
98834a7837 chore: continue vitest test migrations (#21639) 2026-01-23 11:28:59 -07:00
DevCats
338b952d71 chore: skip doc-check when secrets are not available (#21637)
This pull request updates the `.github/workflows/doc-check.yaml`
workflow to improve its handling of required secrets. The workflow now
checks for the presence of necessary secrets before proceeding, and
conditionally skips all subsequent steps if the secrets are unavailable.
This prevents failures on pull requests where secrets are not accessible
(such as from forks), and provides clear messaging for maintainers about
manual triggering options.

Key improvements:

**Secret availability checks and conditional execution:**

* Added an explicit step at the start of the workflow to check if
required secrets (`DOC_CHECK_CODER_URL` and
`DOC_CHECK_CODER_SESSION_TOKEN`) are available, and set an output flag
(`skip`) accordingly.
* Updated all subsequent workflow steps to include a conditional (`if:
steps.check-secrets.outputs.skip != 'true'`), ensuring they only run if
the secrets are present. This includes setup, context extraction, task
creation, waiting, and summary steps.
[[1]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149R59-R82)
[[2]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149R140)
[[3]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149R205)
[[4]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149R215)
[[5]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149R232)
[[6]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149R250)
* Modified the "Fetch Task Logs", "Cleanup Task", and "Write Final
Summary" steps to combine their existing `always()` condition with the
new secrets check, preventing unnecessary errors when secrets are
missing.
[[1]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149L314-R340)
[[2]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149L327-R353)
[[3]](diffhunk://#diff-46e6065a312f35e5d294476e7865089afd10e6072fed80ac77b257e090def149L339-R365)
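
A minimal sketch of the pattern those bullets describe, using the step ID and secret names from the description above (the `run` bodies are placeholders, not the real workflow steps):

```yaml
jobs:
  doc-check:
    runs-on: ubuntu-latest
    steps:
      - name: Check for required secrets
        id: check-secrets
        run: |
          if [ -z "${{ secrets.DOC_CHECK_CODER_URL }}" ] || [ -z "${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}" ]; then
            echo "skip=true" >> "$GITHUB_OUTPUT"
            echo "Secrets unavailable (e.g. fork PR); skipping doc-check." >> "$GITHUB_STEP_SUMMARY"
          fi
      - name: Create doc-check task
        if: steps.check-secrets.outputs.skip != 'true'
        run: ./scripts/create-task.sh   # placeholder
      - name: Cleanup Task
        if: always() && steps.check-secrets.outputs.skip != 'true'
        run: ./scripts/cleanup-task.sh  # placeholder
```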

**Documentation and messaging:**

* Added comments at the top of the workflow file to explain the secret
requirements and the expected behavior for PRs without secrets,
including instructions for maintainers on manually triggering the check
on PRs originating from forks.

2026-01-23 12:15:07 -06:00
Yevhenii Shcherbina
9b14fd3adc feat: add boundary premium feature (#21589)
Source code changes:

- Added a wrapper for the boundary subcommand that checks feature
entitlement before executing the underlying command.
- Added a helper that returns the Boundary version using the
runtime/debug package, which reads this information from the go.mod
file.
- Added FeatureBoundary to the corresponding enum.
- Moved the boundary command from AGPL to enterprise.

`NOTE`: From now on, the Boundary version will be specified in go.mod
instead of being defined in AI modules.
2026-01-23 12:56:36 -05:00
Kacper Sawicki
b82693d4cc feat(codersdk): revert "remove AI Bridge entitlement from Premium license" (#21653)
Reverts coder/coder#21540
2026-01-23 15:58:12 +00:00
Susana Ferreira
f5858c8a18 fix: unregister metrics on reconciler stop to prevent panic on restart (#21647)
## Description

Fixes a panic that occurs when the prebuilds feature is toggled by
adding/removing a license. The `StoreReconciler` was not unregistering
the `reconciliationDuration` histogram, causing a "duplicate metrics
collector registration attempted" panic when a new reconciler was
created.

## Changes

* Unregister the `reconciliationDuration` histogram in `Stop()`
alongside the existing metrics collector
* Change log level when stopping the reconciler with a cause, since
"entitlements change" is not an error condition
* Add `TestReconcilerLifecycle` to verify the reconciler can be stopped
and recreated with the same prometheus registry

Related to internal slack thread:
https://codercom.slack.com/archives/C07GRNNRW03/p1769116582171379
2026-01-23 14:45:27 +00:00
Kacper Sawicki
9843adb8c6 feat(codersdk): remove AI Bridge entitlement from Premium license (#21540)
## Summary

AI Bridge is moving out of Premium as a separate add-on (GA in Feb 3).

Closes https://github.com/coder/internal/issues/1226

## Changes

- Excludes `FeatureAIBridge` from `Enterprise()` and
`FeatureSetPremium.Features()`
- Adds soft warning for deployments with AI Bridge enabled but not
entitled
- Warning is displayed to Auditor/Owner roles in UI banner and CLI
headers

## Warning Message

When AI Bridge is enabled (`CODER_AIBRIDGE_ENABLED=true`) but the
license doesn't include the entitlement:

> AI Bridge has reached General Availability and your Coder deployment
is not entitled to run this feature. Contact your account team
(https://coder.com/contact) for information around getting a license
with AI Bridge.

## Behavior

- The feature remains usable in v2.30 (soft warning only)
- Future versions may include hard enforcement
2026-01-23 13:48:27 +01:00
Cian Johnston
fa7baebdd8 fix(coderd): handle rbac.NotAuthorizedError when deleting template (#21645)
Relates to
https://github.com/coder/aibridge/pull/143/changes#r2720659638

We previously returned the following when a delete attempt failed due to
lack of permissions.

```
500 Internal error deleting template: unauthorized: rbac: forbidden
```

This PR updates the handler to return our usual 403 forbidden response.
2026-01-23 12:02:46 +00:00
Spike Curtis
3398833919 test: don't drop error on blank IP address in report (#21642)
fixes https://github.com/coder/internal/issues/1286

We can get a blank IP address from the net connection if the client has
already disconnected, as was the case in this flake. The fix is to only
log an error if we get something non-empty that we can't parse.

---------

Co-authored-by: Mathias Fredriksson <mafredri@gmail.com>
2026-01-23 10:26:44 +00:00
Cian Johnston
365ab0e609 test: bump timeout on TestSSH/StdioExitOnParentDeath (#21630)
Relates to https://github.com/coder/internal/issues/1289

I was able to reproduce the issue locally -- it appears to sometimes
just take 25 seconds to get all of the test dependencies stood up:

```
t.go:111: 2026-01-22 16:39:15.388 [debu]  pubsub: pubsub dialing postgres  network=tcp  address=127.0.0.1:5432  timeout_ms=0 
...
    t.go:111: 2026-01-22 16:39:38.789 [info]  agent.net.tailnet.tcp: accepted connection  src=[fd7a:115c:a1e0:44b1:8901:8f09:e605:d019]:55406  dst=[fd7a:115c:a1e0:4cfd:a892:e4e2:8cad:8534]:1
...
    ssh_test.go:1208: 
                Error Trace:    /Users/cian/src/coder/coder/testutil/chan.go:74
                                                        /Users/cian/src/coder/coder/cli/ssh_test.go:1208
                Error:          SoftTryReceive: context expired
                Test:           TestSSH/StdioExitOnParentDeath
    ssh_test.go:1212: 
```

Hopefully bumping the timeout should fix it.
2026-01-23 10:17:41 +00:00
Michael Suchacz
7c948a7ad8 test: make backedpipe ForceReconnect tests deterministic (#21635) 2026-01-23 08:20:13 +01:00
Callum Styan
e195856c43 perf: reduce pg_notify call volume by batching together agent metadata updates (#21330)
---------

Signed-off-by: Callum Styan <callumstyan@gmail.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-22 22:47:49 -08:00
Jaayden Halko
6a81474ff0 chore: add warning display on user or group removal (#21624)
resolve coder/internal#1274

Warn the user that a workspace restart is needed to complete the user or
group removal

<img width="606" height="350" alt="Screenshot 2026-01-22 at 13 59 30"
src="https://github.com/user-attachments/assets/4e4af209-9714-46ef-b126-0a084c6e6d38"
/>
2026-01-23 04:26:55 +00:00
Zach
6c49938fca feat: add template version ID to re-emitted boundary logs (#21636)
Adds template_version_id to re-emitted boundary audit logs to allow
filtering and analysis by specific template versions in addition to the
existing template_id field. Since boundary policies are defined in the
template, the template version is critical for determining which policy
was responsible for a boundary decision in a workspace.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-22 15:06:02 -07:00
Steven Masley
e1282b6904 chore: reword unhealthy agents on workspace page depending on the failure (#21622)
Also remove the "restart" button. The language is still there, and a
restart button already exists elsewhere on this page. Its presence here
led users to click it when waiting was the correct solution.

So the "Restart" option is now less prominent.
2026-01-22 11:36:07 -06:00
Yevhenii Shcherbina
57cc50c703 chore: bump boundary version (#21629) 2026-01-22 11:12:45 -05:00
George K
d29a168785 fix(coderd/rbac): reinstate deployment-wide workspace.share permission for owner role (#21620)
The removal of that permission from the role broke valid use cases (e.g.
a site owner user creating a workspace owned by a system account and
then trying to share it with another user).

The bulk of the PR is made up of the rollbacks of the previously
introduced test updates necessitated by the removal.

Related to: https://github.com/coder/internal/issues/1285
2026-01-22 08:12:15 -08:00
christin
859099f1f2 fix: shorten "Share workspace" to "Share" in tasks UI (#21601)
Simplifies the CTA text from "Share workspace" to "Share" for better UX.
Users don't need to understand the underlying infrastructure when
working on tasks, so the shorter, more direct text is clearer.

**Affected locations:**
- Tasks table dropdown menu
- Tasks sidebar dropdown menu

Fixes https://github.com/coder/coder/issues/21599

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-22 16:55:19 +01:00
Zach
6d8e6d4830 feat: include template ID in re-emitted boundary logs (#21618)
Boundary policies are currently defined at the template level, so
including the template ID in re-emitted logs by the control plane
allows policy creators to filter and observe boundary activity for
specific templates. This makes it easier to verify that policies are
working as expected and to debug issues with specific template
configurations.
2026-01-22 08:37:16 -07:00
Mathias Fredriksson
4c7844ad3d feat(coderd): bump workspace deadline on AI agent activity (#21584)
AI agents report status via patchWorkspaceAgentAppStatus, but this wasn't
extending workspace deadlines. This prevented proper task auto-pause behavior,
causing tasks to pause mid-execution when there were no human connections.

Now we call ActivityBumpWorkspace when agents report status, using the same
logic as SSH/IDE connections. We bump when transitioning to or from the working
state.

Closes coder/internal#1251
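
The bump rule above can be sketched as a small predicate (the state names here are illustrative stand-ins for the SDK constants, not the actual implementation):

```go
package main

import "fmt"

// bumpOnStatusChange mirrors the rule above: extend the workspace
// deadline when an agent transitions to or from the "working" state.
func bumpOnStatusChange(oldState, newState string) bool {
	if oldState == newState {
		return false
	}
	return oldState == "working" || newState == "working"
}

func main() {
	fmt.Println(bumpOnStatusChange("idle", "working"))     // entering working: bump
	fmt.Println(bumpOnStatusChange("working", "complete")) // leaving working: bump
	fmt.Println(bumpOnStatusChange("idle", "complete"))    // no working state: no bump
}
```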
2026-01-22 13:52:32 +02:00
Danielle Maywood
5dcc9dd8ab chore(site): replace usage of deprecated Stack component (#21613)
Replace usage of deprecated Stack component in AgentDevcontainerCard
2026-01-22 10:20:34 +00:00
Sas Swart
fdd928e01c fix: set bridge scaletest generator timeout per request (#21626)
2026-01-22 10:56:40 +02:00
Spike Curtis
f0152e291a docs: fix 10k docs to include 600 provisioners (#21597)
fixes typo in docs
2026-01-22 10:43:13 +04:00
DevCats
26ce070393 feat: update doc-check workflow to utilize claude-skills (#21588)
## Summary

Updates our existing doc-check workflow to utilize new claude-skills
that exist in the repository for better contextual behavior with less
prompting requirements in the workflow

## Changes

### New: Claude Skill (`.claude/skills/doc-check/SKILL.md`)

Defines the doc-check workflow for Claude:
- Check PR title first to skip irrelevant reviews (refactors, tests,
chores)
- Read code changes and search existing docs for related content
- Post structured comments with specific recommendations

### Updated: GitHub Workflow (`.github/workflows/doc-check.yaml`)

- **Triggers**: PR opened, updated (synchronize), `doc-check` label
added, or manual dispatch
- **Task lifecycle**: Creates task → monitors completion → fetches logs
→ cleans up
- **Context-aware prompts**: Tells Claude if this is a new PR, update,
or manual request
- Uses `coder-username` parameter to run as `doc-check-bot` service
account
2026-01-21 16:14:43 -06:00
Matt Vollmer
e78d89620b docs: update AI Governance nav label to AI Governance Add-On (#21616)
Updates the page title and left navigation for the AI Governance page
from "AI Governance" to "AI Governance Add-On"
2026-01-21 15:18:21 -05:00
Danny Kopping
1dd0519a38 docs: clarify max_connections implications (#21596)
Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-21 22:10:12 +02:00
Susana Ferreira
47b3846bca feat: use coder specific header for aibridge authentication from AI proxy (#21590)
## Description

Introduces a new `X-Coder-Token` header for authenticating requests from
AI Proxy to AI Bridge. Previously, the proxy overwrote the
`Authorization` header with the Coder token, which prevented the
original authentication headers from flowing through to upstream
providers.

With this change, AI Proxy sets the Coder token in a separate header,
preserving the original `Authorization` and `X-Api-Key` headers. AI
Bridge uses this header for authentication and removes it before
forwarding requests to upstream providers. For requests that don't come
through AI Proxy, AI Bridge continues to use `Authorization` and
`X-Api-Key` for authentication.

## Changes

* Add `HeaderCoderAuth` constant and update `ExtractAuthToken` to check
headers in the following order: `X-Coder-Token` > `Authorization` >
`X-Api-Key`
* Update AI Proxy to set `X-Coder-Token` instead of overwriting
`Authorization`
* Remove `X-Coder-Token` in AI Bridge before forwarding to upstream
providers
* Add tests for header handling and token extraction priority

Related to: https://github.com/coder/internal/issues/1235
2026-01-21 19:06:19 +00:00
Michael Suchacz
3e29eec560 chore(dogfood): capitalize Mux display name (#21612)
Capitalizes the Mux display name in the dogfood template to match
branding.
2026-01-21 15:55:17 +00:00
Steven Masley
1b03202e90 chore: add script to calculate workspace 'on' hours in a given time window (#21505)
Calculates how long each workspace has been "on" during a given time
window defined by "start/stop".
2026-01-21 14:35:37 +00:00
Cian Johnston
f799cba395 fix(cli): allow coder ssh --stdio to exit when parent process dies (#21583)
Relates to https://github.com/coder/internal/issues/1217

Adds a background goroutine in `--stdio` mode to check if the parent PID
is still alive and exit if it is no longer present.

🤖 Implemented using Mux + Claude Opus 4.5, reviewed and refactored by
me.
2026-01-21 14:14:51 +00:00
blinkagent[bot]
408a35a961 feat(site): move AI Bridge settings to AI Governance page (#21598)
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-01-21 19:11:28 +05:00
Mathias Fredriksson
97e8a5b093 fix(coderd): allow agent auth during workspace shutdown (#21538)
Agents were losing authentication during workspace shutdown, causing
shutdown scripts to fail. The auth query required agents to belong to
the latest build, but during shutdown a `stop` build becomes latest while
the `start` build's agents are still running.

Modified the auth query to allow `start` build agents to authenticate
temporarily during `stop` execution. The query allows auth when:

- Agent's `start` build job succeeded
- Latest build is `stop` with `pending`/`running` job status
- Builds are adjacent (`stop` is `build_number + 1`)
- Template versions match

Auth closes once `stop` completes.

Renamed `GetWorkspaceAgentAndLatestBuildByAuthToken` to
`GetAuthenticatedWorkspaceAgentAndBuildByAuthToken` since it returns the
agent's build (not always latest) during shutdown.

Closes coder/internal#1249
Fixes #19467
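
The four conditions can be sketched as a predicate (a pared-down illustration; the real check is part of the database auth query, and the field names here are stand-ins):

```go
package main

import "fmt"

// build is a pared-down view of a workspace build for illustration.
type build struct {
	transition      string // "start" or "stop"
	buildNumber     int
	jobStatus       string // "pending", "running", "succeeded", ...
	templateVersion string
}

// canAuthDuringShutdown mirrors the conditions listed above: a start
// build's agents may authenticate while the adjacent stop build runs.
func canAuthDuringShutdown(agentBuild, latest build) bool {
	return agentBuild.transition == "start" &&
		agentBuild.jobStatus == "succeeded" &&
		latest.transition == "stop" &&
		(latest.jobStatus == "pending" || latest.jobStatus == "running") &&
		latest.buildNumber == agentBuild.buildNumber+1 &&
		latest.templateVersion == agentBuild.templateVersion
}

func main() {
	start := build{"start", 7, "succeeded", "v3"}
	stop := build{"stop", 8, "running", "v3"}
	fmt.Println(canAuthDuringShutdown(start, stop)) // allowed mid-shutdown
	stop.jobStatus = "succeeded"
	fmt.Println(canAuthDuringShutdown(start, stop)) // closed once stop completes
}
```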
2026-01-21 13:18:43 +00:00
Danny Kopping
a14a22eb54 feat: support custom bedrock base url (#21582)
Closes https://github.com/coder/aibridge/issues/126
Depends on https://github.com/coder/aibridge/pull/131

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-21 12:48:56 +00:00
Susana Ferreira
6ef9670384 fix: limit concurrent database connections in prebuild reconciliation (#20908)
## Description

This PR addresses database connection pool exhaustion during prebuilds
reconciliation by introducing two changes:
* `CanSkipReconciliation`: Filters out presets that don't need
reconciliation before spawning goroutines. This ensures we only create
goroutines for presets that will (_most likely_) perform database
operations, avoiding unnecessary connection pool usage.
* Dynamic `eg.SetLimit`: Limits concurrent goroutines based on the
configured database connection pool size (`CODER_PG_CONN_MAX_OPEN / 2`).
This replaces the previous hardcoded limit of 5, ensuring the
reconciliation loop scales appropriately with the configured pool size
while leaving capacity for other database operations.

## Changes

* Add `CanSkipReconciliation()` method to `PresetSnapshot` that returns
true for inactive presets with no running workspaces, no pending jobs,
or expired prebuilds.
* Add `maxDBConnections` parameter to `NewStoreReconciler` and compute
`reconciliationConcurrency` as half the pool size (minimum 1).
* Add `ReconciliationConcurrency()` getter method to `StoreReconciler`.
* Add `eg.SetLimit(c.reconciliationConcurrency)` to bound concurrent
reconciliation goroutines.
* Add `PresetsTotal` and `PresetsReconciled` to `ReconcileStats` for
observability.
* Add `TestCanSkipReconciliation` unit tests.
* Add `TestReconciliationConcurrency` unit tests.
* Add benchmark tests for reconciliation performance.

## Benchmarks

* `BenchmarkReconcileAll_NoOps`: Tests presets with no reconciliation
actions. All presets are filtered by `CanSkipReconciliation`, resulting
in no goroutines spawned and no database connections used.
* `BenchmarkReconcileAll_ConnectionContention`: Tests presets where all
require reconciliation actions. All presets spawn goroutines, but
concurrency is limited by `eg.SetLimit(reconciliationConcurrency)`.
* `BenchmarkReconcileAll_Mix`: Simulates a realistic scenario with a
large subset of inactive presets (filtered by `CanSkipReconciliation`)
and a smaller subset requiring reconciliation (limited by
`eg.SetLimit`).

Closes: https://github.com/coder/coder/issues/20606
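
The two mechanisms can be sketched together — the limit formula plus a bounded loop that filters skippable presets before spawning goroutines. This uses a channel semaphore in place of `eg.SetLimit` so the sketch is self-contained; the `preset` type is an illustrative stand-in:

```go
package main

import (
	"fmt"
	"sync"
)

// reconciliationConcurrency mirrors the rule above: half the
// configured pool size (CODER_PG_CONN_MAX_OPEN / 2), minimum 1.
func reconciliationConcurrency(maxOpenConns int) int {
	c := maxOpenConns / 2
	if c < 1 {
		c = 1
	}
	return c
}

type preset struct{ canSkip bool }

// reconcileAll sketches the loop: skippable presets never spawn a
// goroutine, and the rest run under a semaphore sized like eg.SetLimit.
func reconcileAll(presets []preset, limit int) (reconciled int) {
	sem := make(chan struct{}, limit) // bounds concurrent DB work
	var wg sync.WaitGroup
	var mu sync.Mutex
	for _, p := range presets {
		if p.canSkip { // filtered before any goroutine is created
			continue
		}
		wg.Add(1)
		sem <- struct{}{}
		go func() {
			defer wg.Done()
			defer func() { <-sem }()
			mu.Lock()
			reconciled++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return reconciled
}

func main() {
	presets := []preset{{canSkip: true}, {}, {}, {canSkip: true}, {}}
	fmt.Println(reconciliationConcurrency(10)) // 5
	fmt.Println(reconcileAll(presets, reconciliationConcurrency(10)))
}
```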
2026-01-21 10:56:31 +00:00
Mathias Fredriksson
2132c53f28 feat(coderd/database): add schema for task pause/resume lifecycle (#21557)
Creates migration 000409 with the database foundation for pausing and
resuming task workspaces.

The task_snapshots table stores conversation history (AgentAPI messages)
so users can view task logs even when the workspace is stopped. Each task
gets one snapshot, overwritten on each pause.

Three new build_reason values (task_auto_pause, task_manual_pause,
task_resume) let us distinguish task lifecycle events in telemetry and
audit logs from regular workspace operations.

Uses a regular table rather than UNLOGGED for snapshots. While UNLOGGED
would be faster, losing snapshots on database crash creates user confusion
(logs disappear until next pause). We can switch to UNLOGGED post-GA if
write performance becomes a problem.

Closes coder/internal#1250
2026-01-21 12:12:12 +02:00
Jake Howell
59b71f296f feat: implement non-brittle TestDBPurgeAuthorization (#21442)
Closes #21440 

The `TestDBPurgeAuthorization` test was overfitting by calling each
purge method individually, which reimplemented dbpurge logic in the test
and created a maintenance burden. When new purge steps are added, each
must be reflected in the test or it becomes a testing blind spot.

This change extracts the `doTick` closure into an exported `PurgeTick`
function that returns an error, making the core purge logic testable.
The test now calls `PurgeTick` directly to exercise the actual dbpurge
behavior rather than reimplementing it. Retention values are configured
to ensure all purge operations run, so we test RBAC permissions for all
code paths.

- Tests actual dbpurge behavior instead of reimplementing it
- Automatically covers new purge steps when they're added
- Still validates that all operations have proper RBAC permissions

The test focuses on authorization (checking for RBAC errors) rather than
verifying deletion behavior, which is already covered by other tests
like `TestDeleteExpiredAPIKeys` and `TestDeleteOldAuditLogs`.
2026-01-21 11:27:01 +11:00
Jake Howell
0ac05b4144 fix: temporarily hide prompt column from table view (#21586) 2026-01-21 10:09:39 +11:00
Ben Potter
6346eb7af8 docs: mention AI Governance add-on (#21592)
Ironically, no AI was used to make this PR.

---------

Co-authored-by: Matt Vollmer <matthewjvollmer@outlook.com>
2026-01-20 16:44:14 -06:00
Susana Ferreira
09f50046cb feat: validate aiproxy allowlisted domains have aibridge provider mappings at startup (#21577)
## Description

Adds startup validation to ensure all allowlisted domains have
corresponding AI Bridge provider mappings. This prevents a
misconfiguration where a domain could be MITM'd (decrypted) but have no
route to aibridge.

Previously, if a domain was in the allowlist but had no provider
mapping, requests would be decrypted and forwarded to the original
destination, a potential privacy concern. Now the server fails to start
if this misconfiguration is detected.
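
The startup check amounts to verifying that every allowlisted domain has a mapping; a minimal sketch, assuming a simple domain-to-provider map (names here are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// validateAllowlist mirrors the startup check described above: every
// MITM'd (allowlisted) domain must map to an aibridge provider, or
// the server refuses to start.
func validateAllowlist(allowlisted []string, providerMappings map[string]string) error {
	var missing []string
	for _, domain := range allowlisted {
		if _, ok := providerMappings[domain]; !ok {
			missing = append(missing, domain)
		}
	}
	if len(missing) > 0 {
		sort.Strings(missing)
		return fmt.Errorf("allowlisted domains without provider mappings: %v", missing)
	}
	return nil
}

func main() {
	mappings := map[string]string{"api.openai.com": "openai"}
	fmt.Println(validateAllowlist([]string{"api.openai.com"}, mappings))
	fmt.Println(validateAllowlist([]string{"api.openai.com", "api.anthropic.com"}, mappings))
}
```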
2026-01-20 17:13:12 +00:00
Kacper Sawicki
ed679bb3da feat(codersdk): add circuit breaker configuration support for aibridge (#21546)
## Summary

Add circuit breaker support for AI Bridge to protect against cascading
failures from upstream AI provider rate limits (HTTP 429, 503, and
Anthropic's 529 overloaded responses).

## Changes

- Add 5 new CLI options for circuit breaker configuration:
  - `--aibridge-circuit-breaker-enabled` (default: false)
  - `--aibridge-circuit-breaker-failure-threshold` (default: 5)
  - `--aibridge-circuit-breaker-interval` (default: 10s)
  - `--aibridge-circuit-breaker-timeout` (default: 30s)
  - `--aibridge-circuit-breaker-max-requests` (default: 3)
- Update aibridge dependency to include circuit breaker support
- Add tests for pool creation with circuit breaker providers

## Notes

- Circuit breaker is **disabled by default** for backward compatibility
- When enabled, applies to both OpenAI and Anthropic providers
- Uses sony/gobreaker internally via the aibridge library

## Testing

```
make test RUN=TestPoolWithCircuitBreakerProviders
```
2026-01-20 14:59:29 +01:00
Sas Swart
bfae5b03dc chore: update the dependency on aibridge (#21554)
Update our dependency on coder/aibridge. This allows us to benefit from
the following additions to bridge:

- feat: inner agentic loop for openai responses requests (blocking only)
  (coder/aibridge#127)
- feat: req/resp logging middleware (coder/aibridge#105)
- perf: eliminate unnecessary json marshalling for anthropic requests to
  bridge (coder/aibridge#102)
- feat: add token usage recording for responses streaming interceptor
  (coder/aibridge#125)
- feat: add token usage recording for responses blocking interceptor
  (coder/aibridge#124)
- feat: add tool usage recording to streaming responses interceptor
  (coder/aibridge#123)
- feat: add tool usage recording for blocking responses interceptor
  (coder/aibridge#122)
- Extend circuit breaker functionality to support per-model isolation
  (coder/aibridge#111)
- feat: add circuit breaker for upstream provider overload protection
  (coder/aibridge#75)
- chore: change blocking request timeouts to 10m (coder/aibridge#118)
- feat: add prompt recording for responses API (coder/aibridge#109)
- feat: add basic responses API interceptor (coder/aibridge#107)
2026-01-20 15:08:53 +02:00
Jakub Domeracki
ca2e728fcb chore: update the extended expiry GPG public key (#21579)
Updates the GPG public key used for release signing with an extended
expiration date
2026-01-20 10:51:19 +01:00
blinkagent[bot]
12a6a9b5f0 fix: support open_in for external apps with HTTP URLs (#21558)
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-01-20 14:21:54 +05:00
Rowan Smith
b163b4c950 feat: support bundle updates to enable pprof and telemetry collection (#21486)
- Adds pprof collection support now that the listeners start
automatically (requires Coder server 2.28.0+, includes a version
check). Collects heap, allocs, profile (30s), block, mutex, goroutine,
threadcreate, trace (30s), cmdline, and symbol. Capture runs for 30
seconds and emits a log line saying so. Enable it by supplying the
`--pprof` flag or `CODER_SUPPORT_BUNDLE_PPROF` env var. pprof data is
collected from both coderd and the Coder agent.
- Adds collection of Prometheus metrics, also requires 2.28.0+
- Adds the ability to include a template in the bundle independently of
supplying the details of a running workspace by supplying the
`--template` flag or `CODER_SUPPORT_BUNDLE_TEMPLATE` env var
- Captures a list of workspaces the user has access to. Defaults to a
max of 10, configurable via `--workspaces-total-cap` /
`CODER_SUPPORT_BUNDLE_WORKSPACES_TOTAL_CAP`
- Collects additional stats from the coderd deployment (aggregated
workspace/session metrics), as well as entitlements via license and
dismissed health checks.

created with help from mux
2026-01-20 10:28:52 +11:00
Cian Johnston
9776dc16bd fix(coderd/database/dbmetrics): fix incorrect query label in GetWorkspaceAgentAndWorkspaceByID (#21576)
Fixes an incorrect label.
2026-01-19 16:25:36 +00:00
dependabot[bot]
e79f1d0406 chore: bump github.com/elazarl/goproxy from 1.7.2 to 1.8.0 (#21565)
Bumps [github.com/elazarl/goproxy](https://github.com/elazarl/goproxy)
from 1.7.2 to 1.8.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/elazarl/goproxy/releases">github.com/elazarl/goproxy's
releases</a>.</em></p>
<blockquote>
<h2>v1.8.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix typo in example code snippet by <a
href="https://github.com/PrinceShaji"><code>@​PrinceShaji</code></a> in
<a
href="https://redirect.github.com/elazarl/goproxy/pull/653">elazarl/goproxy#653</a></li>
<li>Bump golang.org/x/net from 0.35.0 to 0.36.0 in /ext by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/elazarl/goproxy/pull/656">elazarl/goproxy#656</a></li>
<li>Only chunk MITM response when body was modified by <a
href="https://github.com/Skn0tt"><code>@​Skn0tt</code></a> in <a
href="https://redirect.github.com/elazarl/goproxy/pull/720">elazarl/goproxy#720</a></li>
<li>Bump actions/checkout from 4 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/elazarl/goproxy/pull/728">elazarl/goproxy#728</a></li>
<li>Bump golangci/golangci-lint-action from 6 to 9 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/elazarl/goproxy/pull/725">elazarl/goproxy#725</a></li>
<li>Fix keep alive logic and replace legacy response write logic by <a
href="https://github.com/ErikPelli"><code>@​ErikPelli</code></a> in <a
href="https://redirect.github.com/elazarl/goproxy/pull/734">elazarl/goproxy#734</a></li>
<li>Bump github.com/stretchr/testify from 1.10.0 to 1.11.1 in /ext by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/elazarl/goproxy/pull/708">elazarl/goproxy#708</a></li>
<li>Bump github.com/coder/websocket from 1.8.12 to 1.8.14 in /examples
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/elazarl/goproxy/pull/711">elazarl/goproxy#711</a></li>
<li>Bump actions/setup-go from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/elazarl/goproxy/pull/709">elazarl/goproxy#709</a></li>
<li>fix auth remote proxy in cascadeproxy by <a
href="https://github.com/mcarbonneaux"><code>@​mcarbonneaux</code></a>
in <a
href="https://redirect.github.com/elazarl/goproxy/pull/664">elazarl/goproxy#664</a></li>
<li>Fix linter configuration &amp; issues by <a
href="https://github.com/ErikPelli"><code>@​ErikPelli</code></a> in <a
href="https://redirect.github.com/elazarl/goproxy/pull/735">elazarl/goproxy#735</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/PrinceShaji"><code>@​PrinceShaji</code></a>
made their first contribution in <a
href="https://redirect.github.com/elazarl/goproxy/pull/653">elazarl/goproxy#653</a></li>
<li><a href="https://github.com/Skn0tt"><code>@​Skn0tt</code></a> made
their first contribution in <a
href="https://redirect.github.com/elazarl/goproxy/pull/720">elazarl/goproxy#720</a></li>
<li><a
href="https://github.com/mcarbonneaux"><code>@​mcarbonneaux</code></a>
made their first contribution in <a
href="https://redirect.github.com/elazarl/goproxy/pull/664">elazarl/goproxy#664</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/elazarl/goproxy/compare/v1.7.2...v1.8.0">https://github.com/elazarl/goproxy/compare/v1.7.2...v1.8.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="26d3e758aa"><code>26d3e75</code></a>
Fix linter configuration &amp; issues (<a
href="https://redirect.github.com/elazarl/goproxy/issues/735">#735</a>)</li>
<li><a
href="5f529678d4"><code>5f52967</code></a>
fix auth remote proxy in cascadeproxy (<a
href="https://redirect.github.com/elazarl/goproxy/issues/664">#664</a>)</li>
<li><a
href="b81733c462"><code>b81733c</code></a>
Bump actions/setup-go from 5 to 6 (<a
href="https://redirect.github.com/elazarl/goproxy/issues/709">#709</a>)</li>
<li><a
href="2df6d8b266"><code>2df6d8b</code></a>
Bump github.com/coder/websocket from 1.8.12 to 1.8.14 in /examples (<a
href="https://redirect.github.com/elazarl/goproxy/issues/711">#711</a>)</li>
<li><a
href="18547706ca"><code>1854770</code></a>
Bump github.com/stretchr/testify from 1.10.0 to 1.11.1 in /ext (<a
href="https://redirect.github.com/elazarl/goproxy/issues/708">#708</a>)</li>
<li><a
href="78c76be575"><code>78c76be</code></a>
Fix keep alive logic and replace legacy response write logic (<a
href="https://redirect.github.com/elazarl/goproxy/issues/734">#734</a>)</li>
<li><a
href="8766328c5e"><code>8766328</code></a>
Bump golangci/golangci-lint-action from 6 to 9 (<a
href="https://redirect.github.com/elazarl/goproxy/issues/725">#725</a>)</li>
<li><a
href="fad3713f17"><code>fad3713</code></a>
Merge pull request <a
href="https://redirect.github.com/elazarl/goproxy/issues/728">#728</a>
from elazarl/dependabot/github_actions/actions/checko...</li>
<li><a
href="3cfbd83639"><code>3cfbd83</code></a>
Bump actions/checkout from 4 to 6</li>
<li><a
href="29d155006e"><code>29d1550</code></a>
Merge pull request <a
href="https://redirect.github.com/elazarl/goproxy/issues/720">#720</a>
from Skn0tt/reproduce-mitm-content-length-bug</li>
<li>Additional commits viewable in <a
href="https://github.com/elazarl/goproxy/compare/v1.7.2...v1.8.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/elazarl/goproxy&package-manager=go_modules&previous-version=1.7.2&new-version=1.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 13:48:38 +00:00
dependabot[bot]
2bfd54dfdb chore: bump google.golang.org/api from 0.259.0 to 0.260.0 (#21566)

Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.259.0 to 0.260.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/releases">google.golang.org/api's
releases</a>.</em></p>
<blockquote>
<h2>v0.260.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.259.0...v0.260.0">0.260.0</a>
(2026-01-14)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3428">#3428</a>)
(<a
href="0afb986761">0afb986</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3430">#3430</a>)
(<a
href="6fe40c61fa">6fe40c6</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3431">#3431</a>)
(<a
href="02e27cf37d">02e27cf</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3432">#3432</a>)
(<a
href="b147c8bae5">b147c8b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3433">#3433</a>)
(<a
href="d2187ce982">d2187ce</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3435">#3435</a>)
(<a
href="b93c288ec0">b93c288</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3437">#3437</a>)
(<a
href="28ff500331">28ff500</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3438">#3438</a>)
(<a
href="0172d5662d">0172d56</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md">google.golang.org/api's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.259.0...v0.260.0">0.260.0</a>
(2026-01-14)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3428">#3428</a>)
(<a
href="0afb986761">0afb986</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3430">#3430</a>)
(<a
href="6fe40c61fa">6fe40c6</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3431">#3431</a>)
(<a
href="02e27cf37d">02e27cf</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3432">#3432</a>)
(<a
href="b147c8bae5">b147c8b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3433">#3433</a>)
(<a
href="d2187ce982">d2187ce</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3435">#3435</a>)
(<a
href="b93c288ec0">b93c288</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3437">#3437</a>)
(<a
href="28ff500331">28ff500</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3438">#3438</a>)
(<a
href="0172d5662d">0172d56</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b916f2cc94"><code>b916f2c</code></a>
chore(main): release 0.260.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3429">#3429</a>)</li>
<li><a
href="0172d5662d"><code>0172d56</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3438">#3438</a>)</li>
<li><a
href="ccb5b87ebc"><code>ccb5b87</code></a>
chore: switch test driver to use gotestsum (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3436">#3436</a>)</li>
<li><a
href="28ff500331"><code>28ff500</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3437">#3437</a>)</li>
<li><a
href="33b7ab5e94"><code>33b7ab5</code></a>
chore(all): update module
github.com/googleapis/enterprise-certificate-proxy ...</li>
<li><a
href="b93c288ec0"><code>b93c288</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3435">#3435</a>)</li>
<li><a
href="d2187ce982"><code>d2187ce</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3433">#3433</a>)</li>
<li><a
href="b147c8bae5"><code>b147c8b</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3432">#3432</a>)</li>
<li><a
href="02e27cf37d"><code>02e27cf</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3431">#3431</a>)</li>
<li><a
href="6fe40c61fa"><code>6fe40c6</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3430">#3430</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/googleapis/google-api-go-client/compare/v0.259.0...v0.260.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/api&package-manager=go_modules&previous-version=0.259.0&new-version=0.260.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 13:47:32 +00:00
dependabot[bot]
8db1e0481a chore: bump the x group with 3 updates (#21564)
Bumps the x group with 3 updates:
[golang.org/x/crypto](https://github.com/golang/crypto),
[golang.org/x/net](https://github.com/golang/net) and
[golang.org/x/tools](https://github.com/golang/tools).

Updates `golang.org/x/crypto` from 0.46.0 to 0.47.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="506e022208"><code>506e022</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="7dacc380ba"><code>7dacc38</code></a>
chacha20poly1305: error out in fips140=only mode</li>
<li>See full diff in <a
href="https://github.com/golang/crypto/compare/v0.46.0...v0.47.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/net` from 0.48.0 to 0.49.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d977772e17"><code>d977772</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="eea413e294"><code>eea413e</code></a>
internal/http3: use go1.25 synctest.Test instead of go1.24
synctest.Run</li>
<li><a
href="9ace223794"><code>9ace223</code></a>
websocket: add missing call to resp.Body.Close</li>
<li><a
href="7d3dbb06ce"><code>7d3dbb0</code></a>
http2: buffer the most recently received PRIORITY_UPDATE frame</li>
<li>See full diff in <a
href="https://github.com/golang/net/compare/v0.48.0...v0.49.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/tools` from 0.40.0 to 0.41.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="2ad2b30edf"><code>2ad2b30</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="5832cce571"><code>5832cce</code></a>
internal/diff/lcs: introduce line diffs</li>
<li><a
href="67c42573e2"><code>67c4257</code></a>
gopls/internal/golang: Definition: fix Windows bug wrt //go:embed</li>
<li><a
href="12c1f0453e"><code>12c1f04</code></a>
gopls/completion: check Selection invariant</li>
<li><a
href="6d87185788"><code>6d87185</code></a>
internal/server: add vulncheck scanning after vulncheck prompt</li>
<li><a
href="0c3a1fec56"><code>0c3a1fe</code></a>
go/ast/inspector: FindByPos returns the first innermost node</li>
<li><a
href="ca281cf950"><code>ca281cf</code></a>
go/analysis/passes/ctrlflow: add noreturn funcs from popular pkgs</li>
<li><a
href="09c21a9342"><code>09c21a9</code></a>
gopls/internal/analysis/unusedfunc: remove warnings for unused enum
consts</li>
<li><a
href="03cb4551c6"><code>03cb455</code></a>
internal/modindex: suppress missing modcacheindex message</li>
<li><a
href="15d13e8a95"><code>15d13e8</code></a>
gopls/internal/util/typesutil: refine EnclosingSignature bug.Report</li>
<li>Additional commits viewable in <a
href="https://github.com/golang/tools/compare/v0.40.0...v0.41.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 13:46:32 +00:00
dependabot[bot]
31654deb87 chore: bump rust from 6cff8a3 to bf3368a in /dogfood/coder (#21569)
Bumps rust from `6cff8a3` to `bf3368a`.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rust&package-manager=docker&previous-version=slim&new-version=slim)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 13:46:11 +00:00
dependabot[bot]
25ac3dbab8 chore: bump ubuntu from 104ae83 to c7eb020 in /dogfood/coder (#21570)
Bumps ubuntu from `104ae83` to `c7eb020`.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ubuntu&package-manager=docker&previous-version=jammy&new-version=jammy)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-19 13:46:07 +00:00
Susana Ferreira
a002fbbae6 refactor: avoid terminology collision with aibridge by renaming passthrough to tunneled (#21562)
## Description

Renames "passthrough" to "tunneled" in aiproxy to avoid terminology
collision with aibridge, which has its own passthrough concept.

Follow-up from:
https://github.com/coder/coder/pull/21512#discussion_r2698231778

---------

Co-authored-by: Danny Kopping <danny@coder.com>
2026-01-19 13:23:42 +00:00
Cian Johnston
08343a7a9f perf: reduce number of queries made by /api/v2/workspaceagents/{id} (#21522)
Relates to https://github.com/coder/internal/issues/1214

The `ExtractWorkspaceAgentParam` middleware ends up making 4 database
queries to follow the chain of `WorkspaceAgent` -> `WorkspaceResource`
-> `ProvisionerJob` -> `WorkspaceBuild` -- but then dropping all that
hard work on the floor. The `api.workspaceAgent` handler that references
this middleware then has to do all of that work again, plus one more
query to get the related `User` so we can get the username. This pattern
is also mirrored in `getDatabaseTerminal` but without the middleware.

This PR:
* Adds a new query `GetWorkspaceAgentAndWorkspaceByID` to fetch all
this information at once to avoid the multiple round-trips,
* Updates the existing usage of `GetWorkspaceAgentByID` to this new
query instead,
* Updates `ExtractWorkspaceAgentParam` to also store the workspace in
the request context

Dalibo: [0.63ms](https://explain.dalibo.com/plan/40bb597f3539gc6c)
2026-01-19 12:36:33 +00:00
Cian Johnston
d176714f90 chore: increase du interval to 1h in dogfood/coder template (#21555)
Increases the interval of running `du` on `/home/coder` and
`/var/lib/docker` to 1h.
Also decreases the timeout to 1m; having `du` run for longer is likely
not great.
2026-01-19 09:57:29 +00:00
Cian Johnston
34c7fe2eaf chore: update agent metadata in WCOC template (#21549)
This PR:
- Removes the host-related agent metadata. It's not particularly useful
and the hosts have dedicated monitoring via Netdata.
- Adds two metadata blocks to expose the sizes of `/home/coder` and
`/var/lib/docker`
2026-01-19 09:07:44 +00:00
Susana Ferreira
a406ed7cc5 feat: add upstream proxy support to aiproxy for passthrough requests (#21512)
## Description

Adds upstream proxy support for AI Bridge Proxy passthrough requests.
This allows aiproxy to forward non-allowlisted requests through an
upstream proxy. Currently, the only supported configuration is when
aiproxy is the first proxy in the chain (client → aiproxy → upstream
proxy).

## Changes

* Add `--aibridge-proxy-upstream` option to configure an upstream
HTTP/HTTPS proxy URL for passthrough requests
* Add `--aibridge-proxy-upstream-ca` option to trust custom CA
certificates for HTTPS upstream proxies
* Passthrough requests (non-allowlisted domains) are forwarded through
the upstream proxy
* MITM'd requests (allowlisted domains) continue to go directly to
aibridge, not through the upstream proxy
* Add tests for upstream proxy configuration and request routing

Closes: https://github.com/coder/internal/issues/1204
2026-01-19 08:50:57 +00:00
Dean Sheather
1813605012 chore: update dogfood templates to new server (#21543) 2026-01-18 01:23:31 +11:00
Atif Ali
a4e14448c2 chore: add Go module domains to boundary allowlist (#21548)
Add 21 domains to the boundary allowlist to support Go module downloads
in the dogfood environment.

When running `go mod download` with `GOPROXY=direct`, Go fetches modules
directly from their source domains. Several dependencies in `go.mod` use
non-standard import paths that were being blocked by boundary with `403
Forbidden` errors.

**Added domains:**

| Domain | Purpose |
|--------|---------|
| `go.dev`, `dl.google.com` | Go toolchain downloads |
| `cdr.dev` | cdr.dev/slog (Coder logging) |
| `cel.dev` | cel.dev/expr |
| `dario.cat` | dario.cat/mergo |
| `git.sr.ht` | git.sr.ht/~jackmordaunt/go-toast |
| `go.mozilla.org` | go.mozilla.org/pkcs7 |
| `go.nhat.io` | go.nhat.io/otelsql |
| `go.opentelemetry.io` | OpenTelemetry packages |
| `go.uber.org` | go.uber.org/atomic, etc. |
| `go.yaml.in` | go.yaml.in/yaml |
| `go4.org` | go4.org/netipx |
| `golang.zx2c4.com` | WireGuard Go packages |
| `gonum.org` | gonum.org/v1/gonum |
| `gopkg.in` | gopkg.in/yaml.v3, etc. |
| `gvisor.dev` | gvisor.dev/gvisor |
| `howett.net` | howett.net/plist |
| `kernel.org` | libcap packages |
| `mvdan.cc` | mvdan.cc/gofumpt |
| `sigs.k8s.io` | sigs.k8s.io/yaml |
| `storj.io` | storj.io/drpc |

**Tested:** All domains verified working through boundary in a Linux
container.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 09:20:43 -05:00
Asher
4d414a0df7 feat: add --use-parameter-defaults flag (#21119)
This is like `--yes`, but for parameter prompts.
2026-01-16 17:04:57 -09:00
Asher
ff9ed91811 chore: move agent's file API into separate package (#21531)
This makes it so we can test it directly without having to go through
Tailnet, which appears to be causing flakes in CI where the requests
time out and never make it to the agent.

Takes inspiration from the container-related API endpoints.

Would probably make sense to refactor the ls tests to also go through
the API (rather than be internal tests like they are currently) but I
left those alone for now to keep the diff minimal.
2026-01-16 17:03:17 -09:00
Zach
ea465d4ea3 docs: add documentation for boundary audit logs (#21529) 2026-01-16 13:04:06 -07:00
Yevhenii Shcherbina
fe68ec9095 chore: bump claude-code module version (#21527)
- update boundary docs
- bump claude-code module version
- modify boundary policy for dogfood
2026-01-16 12:31:25 -05:00
Cian Johnston
ab126e0f0a feat: improve usability of coder show (#21539)
This PR improves the usability of `coder show`:

- Adds a header with workspace owner/name, latest build status and time
since, and template name / version name.
- Updates `namedWorkspace` to allow looking up by UUID
- Also improves associated `TestShow` to respect context deadlines.
2026-01-16 15:45:33 +00:00
Cian Johnston
ad23ea3561 chore: remove unused ExtractWorkspaceAndAgentParam (#21537)
While investigating https://github.com/coder/internal/issues/1214 I
noticed that `ExtractWorkspaceAndAgentParam` appeared to be unused
outside of tests.
2026-01-16 15:11:10 +00:00
blinkagent[bot]
3b07f7b9c4 fix: remove unreachable exit after error call in check_pg_schema.sh (#21530)
Fixes shellcheck warning reported in
https://github.com/coder/coder/pull/21496#discussion_r2696470065

## Problem

The `error()` function in `lib.sh` already calls `exit 1`, so the `exit
1` on line 17 of `check_pg_schema.sh` was unreachable:

```
In ./scripts/check_pg_schema.sh line 17:
	exit 1
        ^----^ SC2317 (info): Command appears to be unreachable.
```

## Solution

Remove the redundant `exit 1` since `error()` already handles exiting.
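The underlying shell idiom can be sketched like this (a hypothetical `error()` that mirrors the behavior described for `lib.sh`, not the actual implementation):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for lib.sh's error(): print to stderr, then exit.
error() {
	echo "ERROR: $*" >&2
	exit 1
}

check_file() {
	[ -f "$1" ] || error "missing file: $1"
	# An `exit 1` here would be unreachable: error() never returns.
	echo "found $1"
}
```

Because `error()` terminates the script itself, callers never need a follow-up `exit`, which is exactly what shellcheck's SC2317 flagged.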

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-01-16 10:48:20 +02:00
uzair-coder07
11b35a5f94 feat(helm): add com.coder/component pod annotation to identify component type (#21378) 2026-01-16 09:17:11 +11:00
Asher
170fbcdb14 chore: refactor template insight page conditionals (#21331) 2026-01-15 12:31:05 -09:00
Jaayden Halko
3db5558603 fix: fix navigation when clicking on share workspace from the task overview page (#21523) 2026-01-15 15:17:20 -05:00
Yevhenii Shcherbina
61961db41d docs: update boundary docs (#21524)
- update boundary docs
- bump boundary version in dogfood
2026-01-15 15:07:40 -05:00
ケイラ
d2d7c0ee40 chore: migrate a bunch of tests to vitest (#21514) 2026-01-15 12:38:29 -07:00
Jaayden Halko
d25d95231f feat: add workspace sharing toggle on organization settings page (#21456)
resolves coder/internal#1211

<img width="1448" height="757" alt="Screenshot 2026-01-08 at 11 16 34"
src="https://github.com/user-attachments/assets/8d1e1b8b-e808-42a4-825a-f7f0f6fd8689"
/>

<img width="600" height="384" alt="Screenshot 2026-01-08 at 11 03 49"
src="https://github.com/user-attachments/assets/7fbe9b77-4617-4621-a566-972712210cbb"
/>

---------

Co-authored-by: George Katsitadze <george.katsitadze@gmail.com>
2026-01-15 17:18:23 +00:00
Cian Johnston
3a62a8e70e chore: improve healthcheck timeout message (#21520)
Relates to https://github.com/coder/internal/issues/272

This flake has been persisting for a while, and unfortunately there's no
detail on which healthcheck in particular is holding things up.

This PR adds a concurrency-safe `healthcheck.Progress` and wires it
through `healthcheck.Run`. If the healthcheck times out, it will provide
information on which healthchecks are completed / running, and how long
they took / are still taking.

🤖 Claude Opus 4.5 completed the first round of this implementation,
which I then refactored.
2026-01-15 16:37:05 +00:00
blinkagent[bot]
7fc84ecf0b feat: move stop button from shortcuts to kebab menu in workspace list (#21518)
## Summary

Moves the stop action from the icon-button shortcuts to the kebab menu
(WorkspaceMoreActions) in the workspaces list view.

## Problem

The stop icon was difficult to recognize without context in the
workspace list view. Users couldn't easily identify what the stop button
did based on the icon alone.

## Solution

- The stop action is not a primary action and doesn't need to be
highlighted in the icon-button view
- Moved the stop action into the kebab (⋮) menu
- The start button remains as a primary action when the workspace is
offline, since starting a workspace is a more common and expected action

## Changes

- `WorkspaceMoreActions`: Added optional `onStop` and `isStopPending`
props to conditionally render a "Stop" menu item
- `WorkspacesTable`: Removed the stop `PrimaryAction` button and instead
passes the stop callback to `WorkspaceMoreActions` when the workspace
can be stopped

## Testing

- TypeScript compiles without errors
- All existing tests pass
- Manually verified that the stop action appears in the kebab menu when
the workspace is running

Fixes #21516

---

Created on behalf of @jacobhqh1

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Jake Howell <jake@hwll.me>
2026-01-15 16:19:20 +00:00
Sas Swart
0ebe8e57ad chore: add scaletesting tools for aibridge (#21279)
This pull request adds scaletesting tools for aibridge.

See
https://www.notion.so/Scale-tests-2c5d579be5928088b565d15dd8bdea41?source=copy_link
for information and instructions.

closes: https://github.com/coder/internal/issues/1156
closes: https://github.com/coder/internal/issues/1155
closes: https://github.com/coder/internal/issues/1158
2026-01-15 17:05:46 +02:00
Jakub Domeracki
3894edbcc3 chore: update Go to 1.24.11 (#21519)
Resolves:
https://github.com/coder/coder/issues/21470
2026-01-15 15:12:31 +01:00
blinkagent[bot]
d5296a4855 chore: add lint/migrations to detect hardcoded public schema (#21496)
## Problem

Migration 000401 introduced a hardcoded `public.` schema qualifier which
broke deployments using non-public schemas (see #21493). We need to
prevent this from happening again.

## Solution

Adds a new `lint/migrations` Make target that validates database
migrations do not hardcode the `public` schema qualifier. Migrations
should rely on `search_path` instead to support deployments using
non-public schemas.

## Changes

- Added `scripts/check_migrations_schema.sh` - a linter script that
checks for `public.` references in migration files (excluding test
fixtures)
- Added `lint/migrations` target to the Makefile
- Added `lint/migrations` to the main `lint` target so it runs in CI

## Testing

- Verified the linter **fails** on current `main` (which has the
hardcoded `public.` in migration 000401)
- Verified the linter **passes** after applying the fix from #21493

```bash
# On main (fails)
$ make lint/migrations
ERROR: Migrations must not hardcode the 'public' schema. Use unqualified table names instead.

# After fix (passes)
$ make lint/migrations
Migration schema references OK
```

## Depends on

- #21493 must be merged first (or this PR will fail CI until it is)

---------

Signed-off-by: Danny Kopping <danny@coder.com>
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Danny Kopping <danny@coder.com>
2026-01-15 14:17:16 +02:00
Cian Johnston
5073493850 feat(coderd/database/dbmetrics): add query_counts_total metric (#21506)
Adds a new Prometheus metric `coderd_db_query_counts_total` that tracks
the total number of queries by route, method, and query name. This is
aimed at helping us track down potential optimization candidates for
HTTP handlers that may trigger a number of queries. It is expected to be
used alongside `coderd_api_requests_processed_total` for correlation.

Depends upon new middleware introduced in
https://github.com/coder/coder/pull/21498

Relates to https://github.com/coder/internal/issues/1214
2026-01-15 10:58:56 +00:00
Cian Johnston
32354261d3 chore(coderd/httpmw): extract HTTPRoute middleware (#21498)
Extracts part of the prometheus middleware that stores the route
information in the request context into its own middleware. Also adds
request method information to context.

Relates to https://github.com/coder/internal/issues/1214
2026-01-15 10:26:50 +00:00
Ehab Younes
6683d807ac refactor: add RFC-compliant enum types and use SDK as source of truth (#21468)
Add comprehensive OAuth2 enum types to codersdk following RFC specifications:
- OAuth2ProviderGrantType (RFC 6749)
- OAuth2ProviderResponseType (RFC 6749)
- OAuth2TokenEndpointAuthMethod (RFC 7591)
- OAuth2PKCECodeChallengeMethod (RFC 7636)
- OAuth2TokenType (RFC 6749, RFC 9449)
- OAuth2RevocationTokenTypeHint (RFC 7009)
- OAuth2ErrorCode (RFC 6749, RFC 7009, RFC 8707)

Add OAuth2TokenRequest, OAuth2TokenResponse, OAuth2TokenRevocationRequest,
and OAuth2Error structs to the SDK. Update OAuth2ClientRegistrationRequest,
OAuth2ClientRegistrationResponse, OAuth2ClientConfiguration, and
OAuth2AuthorizationServerMetadata to use typed enums instead of raw strings.

This makes codersdk the single source of truth for OAuth2 types, eliminating
duplication between SDK and server-side structs.

Closes #21476
2026-01-15 12:41:28 +03:00
Atif Ali
7c2479ce92 chore(dogfood): remove JetBrains Fleet module (#21510) 2026-01-15 14:32:13 +05:00
Jaayden Halko
e1156b050f feat: add workspace sharing buttons to tasks (#21491)
resolves coder/internal#1130

This adds a workspace sharing button to tasks in 3 places

Figma:
https://www.figma.com/design/KriBGfS73GAwkplnVhCBoU/Tasks?node-id=278-2455&t=vhU6Q8G1b7fDWiAP-1

<img width="320" height="374" alt="Screenshot 2026-01-13 at 15 16 06"
src="https://github.com/user-attachments/assets/cf232a12-b0c8-4f5c-91fa-d84eac8cb106"
/>
<img width="582" height="372" alt="Screenshot 2026-01-13 at 15 16 36"
src="https://github.com/user-attachments/assets/90654afc-720a-4bfe-9c67-fcbcebb4aa2b"
/>
<img width="768" height="317" alt="Screenshot 2026-01-13 at 15 18 03"
src="https://github.com/user-attachments/assets/0281cb84-c941-4075-9a20-00ad3958864b"
/>
2026-01-14 23:21:44 +00:00
George K
0712faef4f feat(enterprise): implement organization "disable workspace sharing" option (#21376)
Adds a per-organization setting to disable workspace sharing. When enabled,
all existing workspace ACLs in the organization are cleared and the workspace
ACL mutation API endpoints return `403 Forbidden`.

This complements the existing site-wide `--disable-workspace-sharing` flag by
providing more granular control at the organization level.

Closes https://github.com/coder/internal/issues/1073 (part 2)

---------

Co-authored-by: Steven Masley <Emyrk@users.noreply.github.com>
2026-01-14 09:47:50 -08:00
Danny Kopping
7d5cd06f83 feat: add aibridge structured logging (#21492)
Closes https://github.com/coder/internal/issues/1151

Sample:

```
[API] 2026-01-13 15:50:20.795 [info]  coderd.aibridgedserver: interception started  trace=8bb5a1d8eb10526cc46ad90f191bb468  span=a3e5b5da9546032a  record_type=interception_start  interception_id=97461880-4a6c-47c1-8292-3588dd715312  initiator_id=360c6167-a93a-4442-9c3e-f87a6d1cfb66  api_key_id=vg1sbUv97d  provider=anthropic  model=claude-opus-4-5-20251101  started_at="2026-01-13T15:50:20.790690781Z"  metadata={}
[API] 2026-01-13 15:50:23.741 [info]  coderd.aibridgedserver: token usage recorded  trace=8bb5a1d8eb10526cc46ad90f191bb468  span=a114f0cc3047296e  record_type=token_usage  interception_id=97461880-4a6c-47c1-8292-3588dd715312  msg_id=msg_01VJH1rYKspfun8BW29CrYEu  input_tokens=10  output_tokens=8  created_at="2026-01-13T15:50:23.731587038Z"  metadata={"cache_creation_input":53194,"cache_ephemeral_1h_input":0,"cache_ephemeral_5m_input":53194,"cache_read_input":0,"web_search_requests":0}
[API] 2026-01-13 15:50:26.265 [info]  coderd.aibridgedserver: token usage recorded  trace=8bb5a1d8eb10526cc46ad90f191bb468  span=dbdafb563bff2c9c  record_type=token_usage  interception_id=97461880-4a6c-47c1-8292-3588dd715312  msg_id=msg_01VJH1rYKspfun8BW29CrYEu  input_tokens=0  output_tokens=130  created_at="2026-01-13T15:50:26.254467904Z"  metadata={}
[API] 2026-01-13 15:50:26.268 [info]  coderd.aibridgedserver: prompt usage recorded  trace=8bb5a1d8eb10526cc46ad90f191bb468  span=da51887a757226fc  record_type=prompt_usage  interception_id=97461880-4a6c-47c1-8292-3588dd715312  msg_id=msg_01VJH1rYKspfun8BW29CrYEu  prompt="list the jmia share price"  created_at="2026-01-13T15:50:26.255299811Z"  metadata={}
[API] 2026-01-13 15:50:26.268 [info]  coderd.aibridgedserver: interception ended  trace=8bb5a1d8eb10526cc46ad90f191bb468  span=3fa25397705ee7c9  record_type=interception_end  interception_id=97461880-4a6c-47c1-8292-3588dd715312  ended_at="2026-01-13T15:50:26.25555547Z"
[API] 2026-01-13 15:50:26.269 [info]  coderd.aibridgedserver: tool usage recorded  trace=8bb5a1d8eb10526cc46ad90f191bb468  span=b54af90afc604d29  record_type=tool_usage  interception_id=97461880-4a6c-47c1-8292-3588dd715312  msg_id=msg_01VJH1rYKspfun8BW29CrYEu  tool=mcp__stonks__getStockPriceSnapshot  input="{\"ticker\":\"JMIA\"}"  server_url=""  injected=false  invocation_error=""  created_at="2026-01-13T15:50:26.255164652Z"  metadata={}
```

Structured logging is only enabled when
`CODER_AIBRIDGE_STRUCTURED_LOGGING=true`.

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-14 17:26:08 +02:00
614 changed files with 28319 additions and 8521 deletions


@@ -0,0 +1,79 @@
---
name: doc-check
description: Checks if code changes require documentation updates
---

# Documentation Check Skill

Review code changes and determine if documentation updates or new documentation
is needed.

## Workflow

1. **Get the code changes** - Use the method provided in the prompt, or if none
   specified:
   - For a PR: `gh pr diff <PR_NUMBER> --repo coder/coder`
   - For local changes: `git diff main` or `git diff --staged`
   - For a branch: `git diff main...<branch>`
2. **Understand the scope** - Consider what changed:
   - Is this user-facing or internal?
   - Does it change behavior, APIs, CLI flags, or configuration?
   - Even for "internal" or "chore" changes, always verify the actual diff
3. **Search the docs** for related content in `docs/`
4. **Decide what's needed**:
   - Do existing docs need updates to match the code?
   - Is new documentation needed for undocumented features?
   - Or is everything already covered?
5. **Report findings** - Use the method provided in the prompt, or if none
   specified, summarize findings directly

## What to Check

- **Accuracy**: Does documentation match current code behavior?
- **Completeness**: Are new features/options documented?
- **Examples**: Do code examples still work?
- **CLI/API changes**: Are new flags, endpoints, or options documented?
- **Configuration**: Are new environment variables or settings documented?
- **Breaking changes**: Are migration steps documented if needed?
- **Premium features**: Should docs indicate `(Premium)` in the title?

## Key Documentation Info

- **`docs/manifest.json`** - Navigation structure; new pages MUST be added here
- **`docs/reference/cli/*.md`** - Auto-generated from Go code, don't edit directly
- **Premium features** - H1 title should include `(Premium)` suffix

## Coder-Specific Patterns

### Callouts

Use GitHub-Flavored Markdown alerts:

```markdown
> [!NOTE]
> Additional helpful information.

> [!WARNING]
> Important warning about potential issues.

> [!TIP]
> Helpful tip for users.
```

### CLI Documentation

CLI docs in `docs/reference/cli/` are auto-generated. Don't suggest editing them
directly. Instead, changes should be made in the Go code that defines the CLI
commands (typically in `cli/` directory).

### Code Examples

Use `sh` for shell commands:

```sh
coder server --flag-name value
```


@@ -1,4 +1,4 @@
#!/bin/sh
# Start Docker service if not already running.
sudo service docker start
sudo service docker status >/dev/null 2>&1 || sudo service docker start
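The guarded form in this hunk follows a general `status || start` idiom: the start command runs only when the status check fails. A minimal self-contained sketch (with hypothetical stand-in functions, not the real Docker service):

```shell
#!/usr/bin/env bash
# The `status || start` guard runs the start command only when the status
# check fails, so repeated invocations are idempotent.
pidfile="$(mktemp -u)" # hypothetical stand-in for real service state
starts=0

service_status() { [ -e "$pidfile" ]; }
service_start() {
	starts=$((starts + 1))
	: >"$pidfile"
}

service_status || service_start # starts: no prior state
service_status || service_start # no-op: already "running"
```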


@@ -7,6 +7,6 @@ runs:
- name: go install tools
shell: bash
run: |
go install tool
./.github/scripts/retry.sh -- go install tool
# NOTE: protoc-gen-go cannot be installed with `go get`
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30


@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.24.10"
default: "1.25.6"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
@@ -22,14 +22,14 @@ runs:
- name: Install gotestsum
shell: bash
run: go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
run: ./.github/scripts/retry.sh -- go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
- name: Install mtimehash
shell: bash
run: go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
run: ./.github/scripts/retry.sh -- go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
# It isn't necessary that we ever do this, but it helps
# separate the "setup" from the "run" times.
- name: go mod download
shell: bash
run: go mod download -x
run: ./.github/scripts/retry.sh -- go mod download -x


@@ -14,4 +14,4 @@ runs:
# - https://github.com/sqlc-dev/sqlc/pull/4159
shell: bash
run: |
CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
./.github/scripts/retry.sh -- env CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05

.github/scripts/retry.sh vendored Executable file

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Retry a command with exponential backoff.
#
# Usage: retry.sh [--max-attempts N] -- <command...>
#
# Example:
# retry.sh --max-attempts 3 -- go install gotest.tools/gotestsum@latest
#
# This will retry the command up to 3 times with exponential backoff
# (2s, 4s, 8s delays between attempts).
set -euo pipefail
# shellcheck source=scripts/lib.sh
source "$(dirname "${BASH_SOURCE[0]}")/../../scripts/lib.sh"
max_attempts=3
args="$(getopt -o "" -l max-attempts: -- "$@")"
eval set -- "$args"
while true; do
case "$1" in
--max-attempts)
max_attempts="$2"
shift 2
;;
--)
shift
break
;;
*)
error "Unrecognized option: $1"
;;
esac
done
if [[ $# -lt 1 ]]; then
error "Usage: retry.sh [--max-attempts N] -- <command...>"
fi
attempt=1
until "$@"; do
if ((attempt >= max_attempts)); then
error "Command failed after $max_attempts attempts: $*"
fi
delay=$((2 ** attempt))
log "Attempt $attempt/$max_attempts failed, retrying in ${delay}s..."
sleep "$delay"
((attempt++))
done
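
The retry loop above can be exercised standalone. The following sketch reproduces its backoff logic with a stand-in flaky command — the `flaky` function and the attempt count are illustrative, not part of the real script:

```shell
#!/usr/bin/env bash
# Standalone sketch of retry.sh's backoff loop; `flaky` is a stand-in
# for a transiently failing command such as `go install`.
set -euo pipefail

max_attempts=3
attempt=1
tries=0
# Fails twice, then succeeds -- simulates a transient network error.
flaky() { tries=$((tries + 1)); ((tries >= 3)); }

until flaky; do
  if ((attempt >= max_attempts)); then
    echo "Command failed after $max_attempts attempts" >&2
    exit 1
  fi
  delay=$((2 ** attempt)) # 2s, then 4s, then 8s...
  echo "Attempt $attempt/$max_attempts failed, retrying in ${delay}s..."
  sleep "$delay"
  ((attempt++))
done
echo "succeeded on try $tries"
```

With three attempts the worst case adds 2s + 4s of delay before the final try, matching the "(2s, 4s, 8s delays)" note in the script's header comment.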

View File

@@ -40,7 +40,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -124,7 +124,7 @@ jobs:
# runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
# steps:
# - name: Checkout
# uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
# uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
# with:
# fetch-depth: 1
# # See: https://github.com/stefanzweifel/git-auto-commit-action?tab=readme-ov-file#commits-made-by-this-action-do-not-trigger-new-workflow-runs
@@ -162,7 +162,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -176,12 +176,12 @@ jobs:
- name: Get golangci-lint cache dir
run: |
linter_ver=$(grep -Eo 'GOLANGCI_LINT_VERSION=\S+' dogfood/coder/Dockerfile | cut -d '=' -f 2)
go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
./.github/scripts/retry.sh -- go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
dir=$(golangci-lint cache status | awk '/Dir/ { print $2 }')
echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"
- name: golangci-lint cache
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: |
${{ env.LINT_CACHE_DIR }}
@@ -256,7 +256,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -313,7 +313,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -329,7 +329,7 @@ jobs:
uses: ./.github/actions/setup-go
- name: Install shfmt
run: go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
run: ./.github/scripts/retry.sh -- go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
- name: make fmt
timeout-minutes: 7
@@ -386,7 +386,7 @@ jobs:
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -395,6 +395,18 @@ jobs:
id: go-paths
uses: ./.github/actions/setup-go-paths
# macOS default bash and coreutils are too old for our scripts
# (lib.sh requires bash 4+, GNU getopt, make 4+).
- name: Setup GNU tools (macOS)
if: runner.os == 'macOS'
run: |
brew install bash gnu-getopt make
{
echo "$(brew --prefix bash)/bin"
echo "$(brew --prefix gnu-getopt)/bin"
echo "$(brew --prefix make)/libexec/gnubin"
} >> "$GITHUB_PATH"
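
The requirement the comment states (bash 4+) can be verified directly once the Homebrew paths are prepended. A hedged sketch, assuming only that the shell exposes `BASH_VERSINFO`:

```shell
#!/usr/bin/env bash
# Sketch: fail fast if the bash on PATH is older than the 4.x that
# lib.sh requires. The threshold comes from the comment above.
major="${BASH_VERSINFO[0]}"
if ((major < 4)); then
  echo "bash $BASH_VERSION is too old; need 4+" >&2
  exit 1
fi
echo "bash $major.x is new enough"
```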
- name: Setup Go
uses: ./.github/actions/setup-go
with:
@@ -559,7 +571,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -621,7 +633,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -693,7 +705,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -720,7 +732,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -753,7 +765,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
@@ -833,7 +845,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
# 👇 Ensures Chromatic can read your full git history
fetch-depth: 0
@@ -849,7 +861,7 @@ jobs:
# the check to pass. This is desired in PRs, but not in mainline.
- name: Publish to Chromatic (non-mainline)
if: github.ref != 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
uses: chromaui/action@07791f8243f4cb2698bf4d00426baf4b2d1cb7e0 # v13.3.5
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -881,7 +893,7 @@ jobs:
# infinitely "in progress" in mainline unless we re-review each build.
- name: Publish to Chromatic (mainline)
if: github.ref == 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
uses: chromaui/action@07791f8243f4cb2698bf4d00426baf4b2d1cb7e0 # v13.3.5
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -914,7 +926,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
# 0 is required here for version.sh to work.
fetch-depth: 0
@@ -1018,7 +1030,7 @@ jobs:
steps:
# Harden Runner doesn't work on macOS
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -1068,7 +1080,7 @@ jobs:
- name: Build dylibs
run: |
set -euxo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
make gen/mark-fresh
make build/coder-dylib
@@ -1105,7 +1117,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -1117,10 +1129,10 @@ jobs:
uses: ./.github/actions/setup-go
- name: Install go-winres
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nfpm
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
- name: Install zstd
run: sudo apt-get install -y zstd
@@ -1128,7 +1140,7 @@ jobs:
- name: Build
run: |
set -euxo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
make gen/mark-fresh
make build
@@ -1160,7 +1172,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -1207,10 +1219,10 @@ jobs:
java-version: "11.0"
- name: Install go-winres
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nfpm
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
- name: Install zstd
run: sudo apt-get install -y zstd
@@ -1258,7 +1270,7 @@ jobs:
- name: Build
run: |
set -euxo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
version="$(./scripts/version.sh)"
tag="main-${version//+/-}"
@@ -1557,7 +1569,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false

View File

@@ -215,7 +215,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
path: ./.github/actions/create-task-action

View File

@@ -249,7 +249,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
path: ./.github/actions/create-task-action

View File

@@ -41,7 +41,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -70,7 +70,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -151,7 +151,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false

View File

@@ -2,14 +2,26 @@
# It creates a Coder Task that uses AI to analyze the PR changes,
# search existing docs, and comment with recommendations.
#
# Triggered by: Adding the "doc-check" label to a PR, or manual dispatch.
# Triggers:
# - New PR opened: Initial documentation review
# - PR updated (synchronize): Re-review after changes
# - Label "doc-check" added: Manual trigger for review
# - PR marked ready for review: Review when draft is promoted
# - Workflow dispatch: Manual run with PR URL
#
# Note: This workflow requires access to secrets and will be skipped for:
# - Any PR where secrets are not available
# For these PRs, maintainers can manually trigger via workflow_dispatch.
name: AI Documentation Check
on:
pull_request:
types:
- opened
- synchronize
- labeled
- ready_for_review
workflow_dispatch:
inputs:
pr_url:
@@ -26,8 +38,16 @@ jobs:
doc-check:
name: Analyze PR for Documentation Updates Needed
runs-on: ubuntu-latest
# Run on: opened, synchronize, labeled (with doc-check label), ready_for_review, or workflow_dispatch
# Skip draft PRs unless manually triggered
if: |
(github.event.label.name == 'doc-check' || github.event_name == 'workflow_dispatch') &&
(
github.event.action == 'opened' ||
github.event.action == 'synchronize' ||
github.event.label.name == 'doc-check' ||
github.event.action == 'ready_for_review' ||
github.event_name == 'workflow_dispatch'
) &&
(github.event.pull_request.draft == false || github.event_name == 'workflow_dispatch')
timeout-minutes: 30
env:
@@ -39,120 +59,154 @@ jobs:
actions: write
steps:
- name: Check if secrets are available
id: check-secrets
env:
CODER_URL: ${{ secrets.DOC_CHECK_CODER_URL }}
CODER_TOKEN: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
run: |
if [[ -z "${CODER_URL}" || -z "${CODER_TOKEN}" ]]; then
echo "skip=true" >> "${GITHUB_OUTPUT}"
echo "Secrets not available - skipping doc-check."
echo "This is expected for PRs where secrets are not available."
echo "Maintainers can manually trigger via workflow_dispatch if needed."
{
echo "⚠️ Workflow skipped: Secrets not available"
echo ""
echo "This workflow requires secrets that are unavailable for this run."
echo "Maintainers can manually trigger via workflow_dispatch if needed."
} >> "${GITHUB_STEP_SUMMARY}"
else
echo "skip=false" >> "${GITHUB_OUTPUT}"
fi
- name: Setup Coder CLI
if: steps.check-secrets.outputs.skip != 'true'
uses: coder/setup-action@4a607a8113d4e676e2d7c34caa20a814bc88bfda # v1
with:
access_url: ${{ secrets.DOC_CHECK_CODER_URL }}
coder_session_token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
- name: Determine PR Context
if: steps.check-secrets.outputs.skip != 'true'
id: determine-context
env:
GITHUB_ACTOR: ${{ github.actor }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_ACTION: ${{ github.event.action }}
GITHUB_EVENT_PR_HTML_URL: ${{ github.event.pull_request.html_url }}
GITHUB_EVENT_PR_NUMBER: ${{ github.event.pull_request.number }}
GITHUB_EVENT_SENDER_ID: ${{ github.event.sender.id }}
GITHUB_EVENT_SENDER_LOGIN: ${{ github.event.sender.login }}
INPUTS_PR_URL: ${{ inputs.pr_url }}
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || '' }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"
# For workflow_dispatch, use the provided PR URL
# Determine trigger type for task context
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
echo "trigger_type=manual" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${INPUTS_PR_URL}"
# Validate PR URL format
if [[ ! "${INPUTS_PR_URL}" =~ ^https://github\.com/[^/]+/[^/]+/pull/[0-9]+$ ]]; then
echo "::error::Invalid PR URL format: ${INPUTS_PR_URL}"
echo "::error::Expected format: https://github.com/owner/repo/pull/NUMBER"
exit 1
fi
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${INPUTS_PR_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${INPUTS_PR_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
# Extract PR number from URL for later use
PR_NUMBER=$(echo "${INPUTS_PR_URL}" | grep -oP '(?<=pull/)\d+')
echo "pr_number=${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
elif [[ "${GITHUB_EVENT_NAME}" == "pull_request" ]]; then
GITHUB_USER_ID=${GITHUB_EVENT_SENDER_ID}
echo "Using label adder: ${GITHUB_EVENT_SENDER_LOGIN} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_EVENT_SENDER_LOGIN}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${GITHUB_EVENT_PR_HTML_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${GITHUB_EVENT_PR_HTML_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
echo "pr_number=${GITHUB_EVENT_PR_NUMBER}" >> "${GITHUB_OUTPUT}"
# Set trigger type based on action
case "${GITHUB_EVENT_ACTION}" in
opened)
echo "trigger_type=new_pr" >> "${GITHUB_OUTPUT}"
;;
synchronize)
echo "trigger_type=pr_updated" >> "${GITHUB_OUTPUT}"
;;
labeled)
echo "trigger_type=label_requested" >> "${GITHUB_OUTPUT}"
;;
ready_for_review)
echo "trigger_type=ready_for_review" >> "${GITHUB_OUTPUT}"
;;
*)
echo "trigger_type=unknown" >> "${GITHUB_OUTPUT}"
;;
esac
else
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
exit 1
fi
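
The URL handling in the dispatch branch can be reproduced in isolation. A sketch with a made-up PR URL, using parameter expansion in place of the `grep -oP` extraction used in the workflow:

```shell
#!/usr/bin/env bash
set -euo pipefail

pr_url="https://github.com/example/repo/pull/123" # illustrative URL
# Same format check as the workflow step above.
if [[ ! "$pr_url" =~ ^https://github\.com/[^/]+/[^/]+/pull/[0-9]+$ ]]; then
  echo "Invalid PR URL format: $pr_url" >&2
  exit 1
fi
# Convert /pull/ to /issues/ for create-task-action compatibility.
issue_url="${pr_url/\/pull\//\/issues\/}"
# Extract the trailing PR number (the workflow uses grep -oP here).
pr_number="${pr_url##*/}"
echo "$issue_url $pr_number"
```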
- name: Extract changed files and build prompt
- name: Build task prompt
if: steps.check-secrets.outputs.skip != 'true'
id: extract-context
env:
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
GH_TOKEN: ${{ github.token }}
TRIGGER_TYPE: ${{ steps.determine-context.outputs.trigger_type }}
run: |
echo "Analyzing PR #${PR_NUMBER}"
echo "Analyzing PR #${PR_NUMBER} (trigger: ${TRIGGER_TYPE})"
# Build task prompt - using unquoted heredoc so variables expand
TASK_PROMPT=$(cat <<EOF
Review PR #${PR_NUMBER} and determine if documentation needs updating or creating.
# Build context based on trigger type
case "${TRIGGER_TYPE}" in
new_pr)
CONTEXT="This is a NEW PR. Perform a thorough documentation review."
;;
pr_updated)
CONTEXT="This PR was UPDATED with new commits. Only comment if the changes affect documentation needs or address previous feedback."
;;
label_requested)
CONTEXT="A documentation review was REQUESTED via label. Perform a thorough documentation review."
;;
ready_for_review)
CONTEXT="This PR was marked READY FOR REVIEW (converted from draft). Perform a thorough documentation review."
;;
manual)
CONTEXT="This is a MANUAL review request. Perform a thorough documentation review."
;;
*)
CONTEXT="Perform a thorough documentation review."
;;
esac
PR URL: ${PR_URL}
# Build task prompt with PR-specific context
TASK_PROMPT="Use the doc-check skill to review PR #${PR_NUMBER} in coder/coder.
WORKFLOW:
1. Setup (repo is pre-cloned at ~/coder)
cd ~/coder
git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER}
git checkout pr-${PR_NUMBER}
${CONTEXT}
2. Get PR info
Use GitHub MCP tools to get PR title, body, and diff
Or use: git diff main...pr-${PR_NUMBER}
Use \`gh\` to get PR details, diff, and all comments. Check for previous doc-check comments (from coder-doc-check) and only post a new comment if it adds value.
3. Understand Changes
Read the diff and identify what changed
Ask: Is this user-facing? Does it change behavior? Is it a new feature?
**Do not comment if no documentation changes are needed.**
4. Search for Related Docs
cat ~/coder/docs/manifest.json | jq '.routes[] | {title, path}' | head -50
grep -ri "relevant_term" ~/coder/docs/ --include="*.md"
## Comment format
5. Decide
NEEDS DOCS if: New feature, API change, CLI change, behavior change, user-visible
NO DOCS if: Internal refactor, test-only, already documented, non-user-facing, dependency updates
FIRST check: Did this PR already update docs? If yes and complete, say "No Changes Needed"
Use this structure (only include relevant sections):
6. Comment on the PR using this format
\`\`\`
## Documentation Check
COMMENT FORMAT:
## 📚 Documentation Check
### Previous Feedback
[For re-reviews only: Addressed | Partially addressed | Not yet addressed]
### Updates Needed
- **[docs/path/file.md](github_link)** - Brief what needs changing
### Updates Needed
- [ ] \`docs/path/file.md\` - [what needs to change]
### 📝 New Docs Needed
- **docs/suggested/location.md** - What should be documented
### ✨ No Changes Needed
[Reason: Documents already updated in PR | Internal changes only | Test-only | No user-facing impact]
### New Documentation Needed
- [ ] \`docs/suggested/path.md\` - [what should be documented]
---
*This comment was generated by an AI Agent through [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
DOCS STRUCTURE:
Read ~/coder/docs/manifest.json for the complete documentation structure.
Common areas include: reference/, admin/, user-guides/, ai-coder/, install/, tutorials/
But check manifest.json - it has everything.
EOF
)
*Automated review via [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
\`\`\`"
# Output the prompt
{
@@ -162,7 +216,8 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
if: steps.check-secrets.outputs.skip != 'true'
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
@@ -171,22 +226,24 @@ jobs:
repository: coder/create-task-action
- name: Create Coder Task for Documentation Check
if: steps.check-secrets.outputs.skip != 'true'
id: create_task
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.DOC_CHECK_CODER_URL }}
coder-token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
coder-organization: "default"
coder-template-name: coder
coder-template-name: coder-workflow-bot
coder-template-preset: ${{ steps.determine-context.outputs.template_preset }}
coder-task-name-prefix: doc-check
coder-task-prompt: ${{ steps.extract-context.outputs.task_prompt }}
github-user-id: ${{ steps.determine-context.outputs.github_user_id }}
coder-username: doc-check-bot
github-token: ${{ github.token }}
github-issue-url: ${{ steps.determine-context.outputs.pr_url }}
comment-on-issue: true
comment-on-issue: false
- name: Write outputs
- name: Write Task Info
if: steps.check-secrets.outputs.skip != 'true'
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
@@ -201,5 +258,140 @@ jobs:
echo "**Task name:** ${TASK_NAME}"
echo "**Task URL:** ${TASK_URL}"
echo ""
echo "The Coder task is analyzing the PR changes and will comment with documentation recommendations."
} >> "${GITHUB_STEP_SUMMARY}"
- name: Wait for Task Completion
if: steps.check-secrets.outputs.skip != 'true'
id: wait_task
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
echo "Waiting for task to complete..."
echo "Task name: ${TASK_NAME}"
if [[ -z "${TASK_NAME}" ]]; then
echo "::error::TASK_NAME is empty"
exit 1
fi
MAX_WAIT=600 # 10 minutes
WAITED=0
POLL_INTERVAL=3
LAST_STATUS=""
is_workspace_message() {
local msg="$1"
[[ -z "$msg" ]] && return 0 # Empty = treat as workspace/startup
[[ "$msg" =~ ^Workspace ]] && return 0
[[ "$msg" =~ ^Agent ]] && return 0
return 1
}
while [[ $WAITED -lt $MAX_WAIT ]]; do
# Get task status (|| true prevents set -e from exiting on non-zero)
RAW_OUTPUT=$(coder task status "${TASK_NAME}" -o json 2>&1) || true
STATUS_JSON=$(echo "$RAW_OUTPUT" | grep -v "^version mismatch\|^download v" || true)
# Debug: show first poll's raw output
if [[ $WAITED -eq 0 ]]; then
echo "Raw status output: ${RAW_OUTPUT:0:500}"
fi
if [[ -z "$STATUS_JSON" ]] || ! echo "$STATUS_JSON" | jq -e . >/dev/null 2>&1; then
if [[ "$LAST_STATUS" != "waiting" ]]; then
echo "[${WAITED}s] Waiting for task status..."
LAST_STATUS="waiting"
fi
sleep $POLL_INTERVAL
WAITED=$((WAITED + POLL_INTERVAL))
continue
fi
TASK_STATE=$(echo "$STATUS_JSON" | jq -r '.current_state.state // "unknown"')
TASK_MESSAGE=$(echo "$STATUS_JSON" | jq -r '.current_state.message // ""')
WORKSPACE_STATUS=$(echo "$STATUS_JSON" | jq -r '.workspace_status // "unknown"')
# Build current status string for comparison
CURRENT_STATUS="${TASK_STATE}|${WORKSPACE_STATUS}|${TASK_MESSAGE}"
# Only log if status changed
if [[ "$CURRENT_STATUS" != "$LAST_STATUS" ]]; then
if [[ "$TASK_STATE" == "idle" ]] && is_workspace_message "$TASK_MESSAGE"; then
echo "[${WAITED}s] Workspace ready, waiting for Agent..."
else
echo "[${WAITED}s] State: ${TASK_STATE} | Workspace: ${WORKSPACE_STATUS} | ${TASK_MESSAGE}"
fi
LAST_STATUS="$CURRENT_STATUS"
fi
if [[ "$WORKSPACE_STATUS" == "failed" || "$WORKSPACE_STATUS" == "canceled" ]]; then
echo "::error::Workspace failed: ${WORKSPACE_STATUS}"
exit 1
fi
if [[ "$TASK_STATE" == "idle" ]]; then
if ! is_workspace_message "$TASK_MESSAGE"; then
# Real completion message from Claude!
echo ""
echo "Task completed: ${TASK_MESSAGE}"
RESULT_URI=$(echo "$STATUS_JSON" | jq -r '.current_state.uri // ""')
echo "result_uri=${RESULT_URI}" >> "${GITHUB_OUTPUT}"
echo "task_message=${TASK_MESSAGE}" >> "${GITHUB_OUTPUT}"
break
fi
fi
sleep $POLL_INTERVAL
WAITED=$((WAITED + POLL_INTERVAL))
done
if [[ $WAITED -ge $MAX_WAIT ]]; then
echo "::error::Task monitoring timed out after ${MAX_WAIT}s"
exit 1
fi
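
Stripped of the Coder-specific status parsing, the wait step above is a poll-until-condition loop with a hard timeout. A compressed sketch — the intervals and the `task_done` stand-in are illustrative, not the real `coder task status` check:

```shell
#!/usr/bin/env bash
set -euo pipefail

MAX_WAIT=5      # seconds (600 in the real step)
POLL_INTERVAL=1 # seconds (3 in the real step)
WAITED=0
result=""

# Stand-in for: state == "idle" with a real completion message.
task_done() { ((WAITED >= 2)); }

while [[ $WAITED -lt $MAX_WAIT ]]; do
  if task_done; then
    result="completed after ${WAITED}s"
    break
  fi
  sleep "$POLL_INTERVAL"
  WAITED=$((WAITED + POLL_INTERVAL))
done
[[ -n "$result" ]] || { echo "timed out after ${MAX_WAIT}s" >&2; exit 1; }
echo "$result"
```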
- name: Fetch Task Logs
if: always() && steps.check-secrets.outputs.skip != 'true'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
echo "::group::Task Conversation Log"
if [[ -n "${TASK_NAME}" ]]; then
coder task logs "${TASK_NAME}" 2>&1 || echo "Failed to fetch logs"
else
echo "No task name, skipping log fetch"
fi
echo "::endgroup::"
- name: Cleanup Task
if: always() && steps.check-secrets.outputs.skip != 'true'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
if [[ -n "${TASK_NAME}" ]]; then
echo "Deleting task: ${TASK_NAME}"
coder task delete "${TASK_NAME}" -y 2>&1 || echo "Task deletion failed or already deleted"
else
echo "No task name, skipping cleanup"
fi
- name: Write Final Summary
if: always() && steps.check-secrets.outputs.skip != 'true'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_MESSAGE: ${{ steps.wait_task.outputs.task_message }}
RESULT_URI: ${{ steps.wait_task.outputs.result_uri }}
PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
run: |
{
echo ""
echo "---"
echo "### Result"
echo ""
echo "**Status:** ${TASK_MESSAGE:-Task completed}"
if [[ -n "${RESULT_URI}" ]]; then
echo "**Comment:** ${RESULT_URI}"
fi
echo ""
echo "Task \`${TASK_NAME}\` has been cleaned up."
} >> "${GITHUB_STEP_SUMMARY}"

View File

@@ -43,7 +43,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false

View File

@@ -23,7 +23,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false

View File

@@ -31,7 +31,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
@@ -42,7 +42,7 @@ jobs:
# on version 2.29 and above.
nix_version: "2.28.5"
- uses: nix-community/cache-nix-action@b426b118b6dc86d6952988d396aa7c6b09776d08 # v7.0.0
- uses: nix-community/cache-nix-action@106bba72ed8e29c8357661199511ef07790175e9 # v7.0.1
with:
# restore and save a cache using this key
primary-key: nix-${{ runner.os }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
@@ -130,7 +130,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false

View File

@@ -54,7 +54,7 @@ jobs:
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false

View File

@@ -44,7 +44,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
@@ -81,7 +81,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -233,7 +233,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -337,7 +337,7 @@ jobs:
kubectl create namespace "pr${PR_NUMBER}"
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false

View File

@@ -65,7 +65,7 @@ jobs:
steps:
# Harden Runner doesn't work on macOS.
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -121,7 +121,7 @@ jobs:
- name: Build dylibs
run: |
set -euxo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
make gen/mark-fresh
make build/coder-dylib
@@ -169,7 +169,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -259,7 +259,7 @@ jobs:
java-version: "11.0"
- name: Install go-winres
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nsis and zstd
run: sudo apt-get install -y nsis zstd
@@ -341,7 +341,7 @@ jobs:
- name: Build binaries
run: |
set -euo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
version="$(./scripts/version.sh)"
make gen/mark-fresh
@@ -888,7 +888,7 @@ jobs:
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -976,7 +976,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false

View File

@@ -25,7 +25,7 @@ jobs:
egress-policy: audit
- name: "Checkout code"
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false

View File

@@ -32,7 +32,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
@@ -74,7 +74,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
@@ -97,11 +97,11 @@ jobs:
- name: Install yq
run: go run github.com/mikefarah/yq/v4@v4.44.3
- name: Install mockgen
run: go install go.uber.org/mock/mockgen@v0.5.0
run: ./.github/scripts/retry.sh -- go install go.uber.org/mock/mockgen@v0.6.0
- name: Install protoc-gen-go
run: go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
run: ./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
- name: Install protoc-gen-go-drpc
run: go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
run: ./.github/scripts/retry.sh -- go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
- name: Install Protoc
run: |
# protoc must be in lockstep with our dogfood Dockerfile or the

View File

@@ -101,7 +101,7 @@ jobs:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Run delete-old-branches-action

View File

@@ -153,7 +153,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
path: ./.github/actions/create-task-action

View File

@@ -26,7 +26,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false

View File

@@ -69,6 +69,9 @@ MOST_GO_SRC_FILES := $(shell \
# All the shell files in the repo, excluding ignored files.
SHELL_SRC_FILES := $(shell find . $(FIND_EXCLUSIONS) -type f -name '*.sh')
MIGRATION_FILES := $(shell find ./coderd/database/migrations/ -maxdepth 1 $(FIND_EXCLUSIONS) -type f -name '*.sql')
FIXTURE_FILES := $(shell find ./coderd/database/migrations/testdata/fixtures/ $(FIND_EXCLUSIONS) -type f -name '*.sql')
# Ensure we don't use the user's git configs which might cause side-effects
GIT_FLAGS = GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_SYSTEM=/dev/null
@@ -561,7 +564,7 @@ endif
# Note: we don't run zizmor in the lint target because it takes a while. CI
# runs it explicitly.
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes lint/migrations
.PHONY: lint
lint/site-icons:
@@ -619,6 +622,12 @@ lint/check-scopes: coderd/database/dump.sql
go run ./scripts/check-scopes
.PHONY: lint/check-scopes
# Verify migrations do not hardcode the public schema.
lint/migrations:
./scripts/check_pg_schema.sh "Migrations" $(MIGRATION_FILES)
./scripts/check_pg_schema.sh "Fixtures" $(FIXTURE_FILES)
.PHONY: lint/migrations
# All files generated by the database should be added here, and this can be used
# as a target for jobs that need to run after the database is generated.
DB_GEN_FILES := \

View File

@@ -40,6 +40,7 @@ import (
"github.com/coder/clistat"
"github.com/coder/coder/v2/agent/agentcontainers"
"github.com/coder/coder/v2/agent/agentexec"
"github.com/coder/coder/v2/agent/agentfiles"
"github.com/coder/coder/v2/agent/agentscripts"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agentssh"
@@ -295,6 +296,8 @@ type agent struct {
containerAPIOptions []agentcontainers.Option
containerAPI *agentcontainers.API
filesAPI *agentfiles.API
socketServerEnabled bool
socketPath string
socketServer *agentsocket.Server
@@ -365,6 +368,8 @@ func (a *agent) init() {
a.containerAPI = agentcontainers.NewAPI(a.logger.Named("containers"), containerAPIOpts...)
a.filesAPI = agentfiles.NewAPI(a.logger.Named("files"), a.filesystem)
a.reconnectingPTYServer = reconnectingpty.NewServer(
a.logger.Named("reconnecting-pty"),
a.sshServer,
@@ -877,12 +882,16 @@ const (
)
func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_Type, ip string) (disconnected func(code int, reason string)) {
// Remove the port from the IP because ports are not supported in coderd.
if host, _, err := net.SplitHostPort(ip); err != nil {
a.logger.Error(a.hardCtx, "split host and port for connection report failed", slog.F("ip", ip), slog.Error(err))
} else {
// Best effort.
ip = host
// A blank IP can unfortunately happen if the connection is broken in a data race before we get to introspect it. We
// still report it, and the recipient can handle a blank IP.
if ip != "" {
// Remove the port from the IP because ports are not supported in coderd.
if host, _, err := net.SplitHostPort(ip); err != nil {
a.logger.Error(a.hardCtx, "split host and port for connection report failed", slog.F("ip", ip), slog.Error(err))
} else {
// Best effort.
ip = host
}
}
// If the IP is "localhost" (which it can be in some cases), set it to

View File

@@ -121,7 +121,8 @@ func TestAgent_ImmediateClose(t *testing.T) {
require.NoError(t, err)
}
// NOTE: These tests only work when your default shell is bash for some reason.
// NOTE(Cian): I noticed that these tests would fail when my default shell was zsh.
// Writing "exit 0" to stdin before closing fixed the issue for me.
func TestAgent_Stats_SSH(t *testing.T) {
t.Parallel()
@@ -148,16 +149,37 @@ func TestAgent_Stats_SSH(t *testing.T) {
require.NoError(t, err)
var s *proto.Stats
// We are looking for four different stats to be reported. They might not all
// arrive at the same time, so we loop until we've seen them all.
var connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountSSHSeen bool
require.Eventuallyf(t, func() bool {
var ok bool
s, ok = <-stats
return ok && s.ConnectionCount > 0 && s.RxBytes > 0 && s.TxBytes > 0 && s.SessionCountSsh == 1
if !ok {
return false
}
if s.ConnectionCount > 0 {
connectionCountSeen = true
}
if s.RxBytes > 0 {
rxBytesSeen = true
}
if s.TxBytes > 0 {
txBytesSeen = true
}
if s.SessionCountSsh == 1 {
sessionCountSSHSeen = true
}
return connectionCountSeen && rxBytesSeen && txBytesSeen && sessionCountSSHSeen
}, testutil.WaitLong, testutil.IntervalFast,
"never saw stats: %+v", s,
"never saw all stats: %+v, saw connectionCount: %t, rxBytes: %t, txBytes: %t, sessionCountSsh: %t",
s, connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountSSHSeen,
)
_, err = stdin.Write([]byte("exit 0\n"))
require.NoError(t, err, "writing exit to stdin")
_ = stdin.Close()
err = session.Wait()
require.NoError(t, err)
require.NoError(t, err, "waiting for session to exit")
})
}
}
@@ -183,12 +205,31 @@ func TestAgent_Stats_ReconnectingPTY(t *testing.T) {
require.NoError(t, err)
var s *proto.Stats
// We are looking for four different stats to be reported. They might not all
// arrive at the same time, so we loop until we've seen them all.
var connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountReconnectingPTYSeen bool
require.Eventuallyf(t, func() bool {
var ok bool
s, ok = <-stats
return ok && s.ConnectionCount > 0 && s.RxBytes > 0 && s.TxBytes > 0 && s.SessionCountReconnectingPty == 1
if !ok {
return false
}
if s.ConnectionCount > 0 {
connectionCountSeen = true
}
if s.RxBytes > 0 {
rxBytesSeen = true
}
if s.TxBytes > 0 {
txBytesSeen = true
}
if s.SessionCountReconnectingPty == 1 {
sessionCountReconnectingPTYSeen = true
}
return connectionCountSeen && rxBytesSeen && txBytesSeen && sessionCountReconnectingPTYSeen
}, testutil.WaitLong, testutil.IntervalFast,
"never saw stats: %+v", s,
"never saw all stats: %+v, saw connectionCount: %t, rxBytes: %t, txBytes: %t, sessionCountReconnectingPTY: %t",
s, connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountReconnectingPTYSeen,
)
}
@@ -218,9 +259,10 @@ func TestAgent_Stats_Magic(t *testing.T) {
require.NoError(t, err)
require.Equal(t, expected, strings.TrimSpace(string(output)))
})
t.Run("TracksVSCode", func(t *testing.T) {
t.Parallel()
if runtime.GOOS == "window" {
if runtime.GOOS == "windows" {
t.Skip("Sleeping for infinity doesn't work on Windows")
}
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
@@ -252,7 +294,9 @@ func TestAgent_Stats_Magic(t *testing.T) {
}, testutil.WaitLong, testutil.IntervalFast,
"never saw stats",
)
// The shell will automatically exit if there is no stdin!
_, err = stdin.Write([]byte("exit 0\n"))
require.NoError(t, err, "writing exit to stdin")
_ = stdin.Close()
err = session.Wait()
require.NoError(t, err)
@@ -3633,9 +3677,11 @@ func TestAgent_Metrics_SSH(t *testing.T) {
}
}
_, err = stdin.Write([]byte("exit 0\n"))
require.NoError(t, err, "writing exit to stdin")
_ = stdin.Close()
err = session.Wait()
require.NoError(t, err)
require.NoError(t, err, "waiting for session to exit")
}
// echoOnce accepts a single connection, reads 4 bytes and echoes them back

View File

@@ -779,10 +779,13 @@ func (api *API) watchContainers(rw http.ResponseWriter, r *http.Request) {
// close frames.
_ = conn.CloseRead(context.Background())
ctx, cancel := context.WithCancel(ctx)
defer cancel()
ctx, wsNetConn := codersdk.WebsocketNetConn(ctx, conn, websocket.MessageText)
defer wsNetConn.Close()
go httpapi.Heartbeat(ctx, conn)
go httpapi.HeartbeatClose(ctx, api.logger, cancel, conn)
updateCh := make(chan struct{}, 1)

agent/agentfiles/api.go Normal file
View File

@@ -0,0 +1,36 @@
package agentfiles
import (
"net/http"
"github.com/go-chi/chi/v5"
"github.com/spf13/afero"
"cdr.dev/slog/v3"
)
// API exposes file-related operations performed through the agent.
type API struct {
logger slog.Logger
filesystem afero.Fs
}
func NewAPI(logger slog.Logger, filesystem afero.Fs) *API {
api := &API{
logger: logger,
filesystem: filesystem,
}
return api
}
// Routes returns the HTTP handler for file-related routes.
func (api *API) Routes() http.Handler {
r := chi.NewRouter()
r.Post("/list-directory", api.HandleLS)
r.Get("/read-file", api.HandleReadFile)
r.Post("/write-file", api.HandleWriteFile)
r.Post("/edit-files", api.HandleEditFiles)
return r
}

View File

@@ -1,4 +1,4 @@
package agent
package agentfiles
import (
"context"
@@ -25,7 +25,7 @@ import (
type HTTPResponseCode = int
func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
func (api *API) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
query := r.URL.Query()
@@ -42,7 +42,7 @@ func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
return
}
status, err := a.streamFile(ctx, rw, path, offset, limit)
status, err := api.streamFile(ctx, rw, path, offset, limit)
if err != nil {
httpapi.Write(ctx, rw, status, codersdk.Response{
Message: err.Error(),
@@ -51,12 +51,12 @@ func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
}
}
func (a *agent) streamFile(ctx context.Context, rw http.ResponseWriter, path string, offset, limit int64) (HTTPResponseCode, error) {
func (api *API) streamFile(ctx context.Context, rw http.ResponseWriter, path string, offset, limit int64) (HTTPResponseCode, error) {
if !filepath.IsAbs(path) {
return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
}
f, err := a.filesystem.Open(path)
f, err := api.filesystem.Open(path)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -97,13 +97,13 @@ func (a *agent) streamFile(ctx context.Context, rw http.ResponseWriter, path str
reader := io.NewSectionReader(f, offset, bytesToRead)
_, err = io.Copy(rw, reader)
if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
a.logger.Error(ctx, "workspace agent read file", slog.Error(err))
api.logger.Error(ctx, "workspace agent read file", slog.Error(err))
}
return 0, nil
}
func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
func (api *API) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
query := r.URL.Query()
@@ -118,7 +118,7 @@ func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
return
}
status, err := a.writeFile(ctx, r, path)
status, err := api.writeFile(ctx, r, path)
if err != nil {
httpapi.Write(ctx, rw, status, codersdk.Response{
Message: err.Error(),
@@ -131,13 +131,13 @@ func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
})
}
func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HTTPResponseCode, error) {
func (api *API) writeFile(ctx context.Context, r *http.Request, path string) (HTTPResponseCode, error) {
if !filepath.IsAbs(path) {
return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
}
dir := filepath.Dir(path)
err := a.filesystem.MkdirAll(dir, 0o755)
err := api.filesystem.MkdirAll(dir, 0o755)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -149,7 +149,7 @@ func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HT
return status, err
}
f, err := a.filesystem.Create(path)
f, err := api.filesystem.Create(path)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -164,13 +164,13 @@ func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HT
_, err = io.Copy(f, r.Body)
if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
a.logger.Error(ctx, "workspace agent write file", slog.Error(err))
api.logger.Error(ctx, "workspace agent write file", slog.Error(err))
}
return 0, nil
}
func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
func (api *API) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var req workspacesdk.FileEditRequest
@@ -188,7 +188,7 @@ func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
var combinedErr error
status := http.StatusOK
for _, edit := range req.Files {
s, err := a.editFile(r.Context(), edit.Path, edit.Edits)
s, err := api.editFile(r.Context(), edit.Path, edit.Edits)
// Keep the highest response status, so 500 will be preferred over 400, etc.
if s > status {
status = s
@@ -210,7 +210,7 @@ func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
})
}
func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.FileEdit) (int, error) {
func (api *API) editFile(ctx context.Context, path string, edits []workspacesdk.FileEdit) (int, error) {
if path == "" {
return http.StatusBadRequest, xerrors.New("\"path\" is required")
}
@@ -223,7 +223,7 @@ func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.
return http.StatusBadRequest, xerrors.New("must specify at least one edit")
}
f, err := a.filesystem.Open(path)
f, err := api.filesystem.Open(path)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -252,7 +252,7 @@ func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.
// Create an adjacent file to ensure it will be on the same device and can be
// moved atomically.
tmpfile, err := afero.TempFile(a.filesystem, filepath.Dir(path), filepath.Base(path))
tmpfile, err := afero.TempFile(api.filesystem, filepath.Dir(path), filepath.Base(path))
if err != nil {
return http.StatusInternalServerError, err
}
@@ -260,13 +260,13 @@ func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.
_, err = io.Copy(tmpfile, replace.Chain(f, transforms...))
if err != nil {
if rerr := a.filesystem.Remove(tmpfile.Name()); rerr != nil {
a.logger.Warn(ctx, "unable to clean up temp file", slog.Error(rerr))
if rerr := api.filesystem.Remove(tmpfile.Name()); rerr != nil {
api.logger.Warn(ctx, "unable to clean up temp file", slog.Error(rerr))
}
return http.StatusInternalServerError, xerrors.Errorf("edit %s: %w", path, err)
}
err = a.filesystem.Rename(tmpfile.Name(), path)
err = api.filesystem.Rename(tmpfile.Name(), path)
if err != nil {
return http.StatusInternalServerError, err
}

View File

@@ -1,11 +1,13 @@
package agent_test
package agentfiles_test
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"runtime"
@@ -16,10 +18,10 @@ import (
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk/agentsdk"
"cdr.dev/slog/v3"
"cdr.dev/slog/v3/sloggers/slogtest"
"github.com/coder/coder/v2/agent/agentfiles"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/coder/v2/testutil"
)
@@ -106,15 +108,15 @@ func TestReadFile(t *testing.T) {
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms")
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath {
return os.ErrPermission
}
return nil
})
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
if file == noPermsFilePath {
return os.ErrPermission
}
return nil
})
api := agentfiles.NewAPI(logger, fs)
dirPath := filepath.Join(tmpdir, "a-directory")
err := fs.MkdirAll(dirPath, 0o755)
@@ -260,19 +262,22 @@ func TestReadFile(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
reader, mimeType, err := conn.ReadFile(ctx, tt.path, tt.offset, tt.limit)
w := httptest.NewRecorder()
r := httptest.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("/read-file?path=%s&offset=%d&limit=%d", tt.path, tt.offset, tt.limit), nil)
api.Routes().ServeHTTP(w, r)
if tt.errCode != 0 {
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
require.Contains(t, cerr.Error(), tt.error)
require.Equal(t, tt.errCode, cerr.StatusCode())
} else {
got := &codersdk.Error{}
err := json.NewDecoder(w.Body).Decode(got)
require.NoError(t, err)
defer reader.Close()
bytes, err := io.ReadAll(reader)
require.ErrorContains(t, got, tt.error)
require.Equal(t, tt.errCode, w.Code)
} else {
bytes, err := io.ReadAll(w.Body)
require.NoError(t, err)
require.Equal(t, tt.bytes, bytes)
require.Equal(t, tt.mimeType, mimeType)
require.Equal(t, tt.mimeType, w.Header().Get("Content-Type"))
require.Equal(t, http.StatusOK, w.Code)
}
})
}
@@ -284,15 +289,14 @@ func TestWriteFile(t *testing.T) {
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
noPermsDirPath := filepath.Join(tmpdir, "no-perms-dir")
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath || file == noPermsDirPath {
return os.ErrPermission
}
return nil
})
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
if file == noPermsFilePath || file == noPermsDirPath {
return os.ErrPermission
}
return nil
})
api := agentfiles.NewAPI(logger, fs)
dirPath := filepath.Join(tmpdir, "directory")
err := fs.MkdirAll(dirPath, 0o755)
@@ -371,17 +375,21 @@ func TestWriteFile(t *testing.T) {
defer cancel()
reader := bytes.NewReader(tt.bytes)
err := conn.WriteFile(ctx, tt.path, reader)
w := httptest.NewRecorder()
r := httptest.NewRequestWithContext(ctx, http.MethodPost, fmt.Sprintf("/write-file?path=%s", tt.path), reader)
api.Routes().ServeHTTP(w, r)
if tt.errCode != 0 {
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
require.Contains(t, cerr.Error(), tt.error)
require.Equal(t, tt.errCode, cerr.StatusCode())
got := &codersdk.Error{}
err := json.NewDecoder(w.Body).Decode(got)
require.NoError(t, err)
require.ErrorContains(t, got, tt.error)
require.Equal(t, tt.errCode, w.Code)
} else {
bytes, err := afero.ReadFile(fs, tt.path)
require.NoError(t, err)
b, err := afero.ReadFile(fs, tt.path)
require.NoError(t, err)
require.Equal(t, tt.bytes, b)
require.Equal(t, tt.bytes, bytes)
require.Equal(t, http.StatusOK, w.Code)
}
})
}
@@ -393,21 +401,20 @@ func TestEditFiles(t *testing.T) {
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
failRenameFilePath := filepath.Join(tmpdir, "fail-rename")
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath {
return &os.PathError{
Op: call,
Path: file,
Err: os.ErrPermission,
}
} else if file == failRenameFilePath && call == "rename" {
return xerrors.New("rename failed")
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
if file == noPermsFilePath {
return &os.PathError{
Op: call,
Path: file,
Err: os.ErrPermission,
}
return nil
})
} else if file == failRenameFilePath && call == "rename" {
return xerrors.New("rename failed")
}
return nil
})
api := agentfiles.NewAPI(logger, fs)
dirPath := filepath.Join(tmpdir, "directory")
err := fs.MkdirAll(dirPath, 0o755)
@@ -701,16 +708,26 @@ func TestEditFiles(t *testing.T) {
require.NoError(t, err)
}
err := conn.EditFiles(ctx, workspacesdk.FileEditRequest{Files: tt.edits})
buf := bytes.NewBuffer(nil)
enc := json.NewEncoder(buf)
enc.SetEscapeHTML(false)
err := enc.Encode(workspacesdk.FileEditRequest{Files: tt.edits})
require.NoError(t, err)
w := httptest.NewRecorder()
r := httptest.NewRequestWithContext(ctx, http.MethodPost, "/edit-files", buf)
api.Routes().ServeHTTP(w, r)
if tt.errCode != 0 {
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
for _, error := range tt.errors {
require.Contains(t, cerr.Error(), error)
}
require.Equal(t, tt.errCode, cerr.StatusCode())
} else {
got := &codersdk.Error{}
err := json.NewDecoder(w.Body).Decode(got)
require.NoError(t, err)
for _, error := range tt.errors {
require.ErrorContains(t, got, error)
}
require.Equal(t, tt.errCode, w.Code)
} else {
require.Equal(t, http.StatusOK, w.Code)
}
for path, expect := range tt.expected {
b, err := afero.ReadFile(fs, path)

View File

@@ -1,4 +1,4 @@
package agent
package agentfiles
import (
"errors"
@@ -21,7 +21,7 @@ import (
var WindowsDriveRegex = regexp.MustCompile(`^[a-zA-Z]:\\$`)
func (a *agent) HandleLS(rw http.ResponseWriter, r *http.Request) {
func (api *API) HandleLS(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// An absolute path may be optionally provided, otherwise a path split into an
@@ -43,7 +43,7 @@ func (a *agent) HandleLS(rw http.ResponseWriter, r *http.Request) {
return
}
resp, err := listFiles(a.filesystem, path, req)
resp, err := listFiles(api.filesystem, path, req)
if err != nil {
status := http.StatusInternalServerError
switch {

View File

@@ -1,4 +1,4 @@
package agent
package agentfiles
import (
"os"

View File

@@ -27,6 +27,8 @@ func (a *agent) apiHandler() http.Handler {
})
})
r.Mount("/api/v0", a.filesAPI.Routes())
if a.devcontainers {
r.Mount("/api/v0/containers", a.containerAPI.Routes())
} else if manifest := a.manifest.Load(); manifest != nil && manifest.ParentID != uuid.Nil {
@@ -49,10 +51,6 @@ func (a *agent) apiHandler() http.Handler {
r.Get("/api/v0/listening-ports", a.listeningPortsHandler.handler)
r.Get("/api/v0/netcheck", a.HandleNetcheck)
r.Post("/api/v0/list-directory", a.HandleLS)
r.Get("/api/v0/read-file", a.HandleReadFile)
r.Post("/api/v0/write-file", a.HandleWriteFile)
r.Post("/api/v0/edit-files", a.HandleEditFiles)
r.Get("/debug/logs", a.HandleHTTPDebugLogs)
r.Get("/debug/magicsock", a.HandleHTTPDebugMagicsock)
r.Get("/debug/magicsock/debug-logging/{state}", a.HandleHTTPMagicsockDebugLoggingState)

View File

@@ -78,9 +78,13 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
sink := &logSink{}
logger := slog.Make(sink)
workspaceID := uuid.New()
templateID := uuid.New()
templateVersionID := uuid.New()
reporter := &agentapi.BoundaryLogsAPI{
Log: logger,
WorkspaceID: workspaceID,
Log: logger,
WorkspaceID: workspaceID,
TemplateID: templateID,
TemplateVersionID: templateVersionID,
}
ctx, cancel := context.WithCancel(context.Background())
@@ -123,6 +127,8 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
require.Equal(t, "boundary_request", entry.Message)
require.Equal(t, "allow", getField(entry.Fields, "decision"))
require.Equal(t, workspaceID.String(), getField(entry.Fields, "workspace_id"))
require.Equal(t, templateID.String(), getField(entry.Fields, "template_id"))
require.Equal(t, templateVersionID.String(), getField(entry.Fields, "template_version_id"))
require.Equal(t, "GET", getField(entry.Fields, "http_method"))
require.Equal(t, "https://example.com/allowed", getField(entry.Fields, "http_url"))
require.Equal(t, "*.example.com", getField(entry.Fields, "matched_rule"))
@@ -155,6 +161,8 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
require.Equal(t, "boundary_request", entry.Message)
require.Equal(t, "deny", getField(entry.Fields, "decision"))
require.Equal(t, workspaceID.String(), getField(entry.Fields, "workspace_id"))
require.Equal(t, templateID.String(), getField(entry.Fields, "template_id"))
require.Equal(t, templateVersionID.String(), getField(entry.Fields, "template_version_id"))
require.Equal(t, "POST", getField(entry.Fields, "http_method"))
require.Equal(t, "https://blocked.com/denied", getField(entry.Fields, "http_url"))
require.Equal(t, nil, getField(entry.Fields, "matched_rule"))

View File

@@ -81,6 +81,10 @@ type BackedPipe struct {
// Unified error handling with generation filtering
errChan chan ErrorEvent
// forceReconnectHook is a test hook invoked after ForceReconnect registers
// with the singleflight group.
forceReconnectHook func()
// singleflight group to dedupe concurrent ForceReconnect calls
sf singleflight.Group
@@ -324,6 +328,13 @@ func (bp *BackedPipe) handleConnectionError(errorEvt ErrorEvent) {
}
}
// SetForceReconnectHookForTests sets a hook invoked after ForceReconnect
// registers with the singleflight group. It must be set before any
// concurrent ForceReconnect calls.
func (bp *BackedPipe) SetForceReconnectHookForTests(hook func()) {
bp.forceReconnectHook = hook
}
// ForceReconnect forces a reconnection attempt immediately.
// This can be used to force a reconnection if a new connection is established.
// It prevents duplicate reconnections when called concurrently.
@@ -331,7 +342,7 @@ func (bp *BackedPipe) ForceReconnect() error {
// Deduplicate concurrent ForceReconnect calls so only one reconnection
// attempt runs at a time from this API. Use the pipe's internal context
// to ensure Close() cancels any in-flight attempt.
_, err, _ := bp.sf.Do("force-reconnect", func() (interface{}, error) {
resultChan := bp.sf.DoChan("force-reconnect", func() (interface{}, error) {
bp.mu.Lock()
defer bp.mu.Unlock()
@@ -346,5 +357,11 @@ func (bp *BackedPipe) ForceReconnect() error {
return nil, bp.reconnectLocked()
})
return err
if hook := bp.forceReconnectHook; hook != nil {
hook()
}
result := <-resultChan
return result.Err
}

View File

@@ -742,12 +742,15 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
const numConcurrent = 3
startSignals := make([]chan struct{}, numConcurrent)
startedSignals := make([]chan struct{}, numConcurrent)
for i := range startSignals {
startSignals[i] = make(chan struct{})
startedSignals[i] = make(chan struct{})
}
enteredSignals := make(chan struct{}, numConcurrent)
bp.SetForceReconnectHookForTests(func() {
enteredSignals <- struct{}{}
})
errors := make([]error, numConcurrent)
var wg sync.WaitGroup
@@ -758,15 +761,12 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
defer wg.Done()
// Wait for the signal to start
<-startSignals[idx]
// Signal that we're about to call ForceReconnect
close(startedSignals[idx])
errors[idx] = bp.ForceReconnect()
}(i)
}
// Start the first ForceReconnect and wait for it to block
close(startSignals[0])
<-startedSignals[0]
// Wait for the first reconnect to actually start and block
testutil.RequireReceive(testCtx, t, blockedChan)
@@ -777,9 +777,9 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
close(startSignals[i])
}
// Wait for all additional goroutines to have started their calls
for i := 1; i < numConcurrent; i++ {
<-startedSignals[i]
// Wait for all ForceReconnect calls to join the singleflight operation.
for i := 0; i < numConcurrent; i++ {
testutil.RequireReceive(testCtx, t, enteredSignals)
}
// At this point, one reconnect has started and is blocked,

View File

@@ -7,6 +7,6 @@ func IsInitProcess() bool {
return false
}
func ForkReap(_ ...Option) error {
return nil
func ForkReap(_ ...Option) (int, error) {
return 0, nil
}

View File

@@ -32,12 +32,13 @@ func TestReap(t *testing.T) {
}
pids := make(reap.PidCh, 1)
err := reaper.ForkReap(
exitCode, err := reaper.ForkReap(
reaper.WithPIDCallback(pids),
// Provide some argument that immediately exits.
reaper.WithExecArgs("/bin/sh", "-c", "exit 0"),
)
require.NoError(t, err)
require.Equal(t, 0, exitCode)
cmd := exec.Command("tail", "-f", "/dev/null")
err = cmd.Start()
@@ -65,6 +66,36 @@ func TestReap(t *testing.T) {
}
}
//nolint:paralleltest
func TestForkReapExitCodes(t *testing.T) {
if testutil.InCI() {
t.Skip("Detected CI, skipping reaper tests")
}
tests := []struct {
name string
command string
expectedCode int
}{
{"exit 0", "exit 0", 0},
{"exit 1", "exit 1", 1},
{"exit 42", "exit 42", 42},
{"exit 255", "exit 255", 255},
{"SIGKILL", "kill -9 $$", 128 + 9},
{"SIGTERM", "kill -15 $$", 128 + 15},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
exitCode, err := reaper.ForkReap(
reaper.WithExecArgs("/bin/sh", "-c", tt.command),
)
require.NoError(t, err)
require.Equal(t, tt.expectedCode, exitCode, "exit code mismatch for %q", tt.command)
})
}
}
//nolint:paralleltest // Signal handling.
func TestReapInterrupt(t *testing.T) {
// Don't run the reaper test in CI. It does weird
@@ -84,13 +115,17 @@ func TestReapInterrupt(t *testing.T) {
defer signal.Stop(usrSig)
go func() {
errC <- reaper.ForkReap(
exitCode, err := reaper.ForkReap(
reaper.WithPIDCallback(pids),
reaper.WithCatchSignals(os.Interrupt),
// Signal propagation does not extend to children of children, so
// we create a little bash script to ensure sleep is interrupted.
reaper.WithExecArgs("/bin/sh", "-c", fmt.Sprintf("pid=0; trap 'kill -USR2 %d; kill -TERM $pid' INT; sleep 10 &\npid=$!; kill -USR1 %d; wait", os.Getpid(), os.Getpid())),
)
// The trap catches SIGINT and sends SIGTERM to the sleep process, so the
// child's exit code varies (typically 128 + SIGTERM (15) = 143); ignore it
// here and only propagate the error.
_ = exitCode
errC <- err
}()
require.Equal(t, <-usrSig, syscall.SIGUSR1)

View File

@@ -40,7 +40,10 @@ func catchSignals(pid int, sigs []os.Signal) {
// the reaper and an exec.Command waiting for its process to complete.
// The provided 'pids' channel may be nil if the caller does not care about the
// reaped children's PIDs.
func ForkReap(opt ...Option) error {
//
// Returns the child's exit code (using 128+signal for signal termination)
// and any error from Wait4.
func ForkReap(opt ...Option) (int, error) {
opts := &options{
ExecArgs: os.Args,
}
@@ -53,7 +56,7 @@ func ForkReap(opt ...Option) error {
pwd, err := os.Getwd()
if err != nil {
return xerrors.Errorf("get wd: %w", err)
return 1, xerrors.Errorf("get wd: %w", err)
}
pattrs := &syscall.ProcAttr{
@@ -72,7 +75,7 @@ func ForkReap(opt ...Option) error {
//#nosec G204
pid, err := syscall.ForkExec(opts.ExecArgs[0], opts.ExecArgs, pattrs)
if err != nil {
return xerrors.Errorf("fork exec: %w", err)
return 1, xerrors.Errorf("fork exec: %w", err)
}
go catchSignals(pid, opts.CatchSignals)
@@ -82,5 +85,18 @@ func ForkReap(opt ...Option) error {
for xerrors.Is(err, syscall.EINTR) {
_, err = syscall.Wait4(pid, &wstatus, 0, nil)
}
return err
// Convert wait status to exit code using standard Unix conventions:
// - Normal exit: use the exit code
// - Signal termination: use 128 + signal number
var exitCode int
switch {
case wstatus.Exited():
exitCode = wstatus.ExitStatus()
case wstatus.Signaled():
exitCode = 128 + int(wstatus.Signal())
default:
exitCode = 1
}
return exitCode, err
}
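The wait-status mapping added here follows the usual shell convention; isolated as a pure function for illustration (hypothetical `exitCode` helper, platform-independent):

```go
package main

import "fmt"

// exitCode maps a child's termination to a shell-style code: a normal
// exit keeps its status, signal death becomes 128 + signal number.
func exitCode(exited bool, status int, signaled bool, sig int) int {
	switch {
	case exited:
		return status
	case signaled:
		return 128 + sig
	default:
		return 1 // stopped/unknown: treat as a generic failure
	}
}

func main() {
	fmt.Println(exitCode(true, 42, false, 0)) // plain exit 42
	fmt.Println(exitCode(false, 0, true, 9))  // SIGKILL -> 128+9
	fmt.Println(exitCode(false, 0, true, 15)) // SIGTERM -> 128+15
}
```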

View File

@@ -136,7 +136,7 @@ func workspaceAgent() *serpent.Command {
// to do this else we fork bomb ourselves.
//nolint:gocritic
args := append(os.Args, "--no-reap")
err := reaper.ForkReap(
exitCode, err := reaper.ForkReap(
reaper.WithExecArgs(args...),
reaper.WithCatchSignals(StopSignals...),
)
@@ -145,8 +145,8 @@ func workspaceAgent() *serpent.Command {
return xerrors.Errorf("fork reap: %w", err)
}
logger.Info(ctx, "reaper process exiting")
return nil
logger.Info(ctx, "reaper child process exited", slog.F("exit_code", exitCode))
return ExitError(exitCode, nil)
}
// Handle interrupt signals to allow for graceful shutdown,

View File

@@ -10,12 +10,8 @@ import (
"github.com/coder/serpent"
)
func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.TemplateVersionParameter, defaultOverrides map[string]string) (string, error) {
label := templateVersionParameter.Name
if templateVersionParameter.DisplayName != "" {
label = templateVersionParameter.DisplayName
}
func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.TemplateVersionParameter, name, defaultValue string) (string, error) {
label := name
if templateVersionParameter.Ephemeral {
label += pretty.Sprint(DefaultStyles.Warn, " (build option)")
}
@@ -26,11 +22,6 @@ func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.Te
_, _ = fmt.Fprintln(inv.Stdout, " "+strings.TrimSpace(strings.Join(strings.Split(templateVersionParameter.DescriptionPlaintext, "\n"), "\n "))+"\n")
}
defaultValue := templateVersionParameter.DefaultValue
if v, ok := defaultOverrides[templateVersionParameter.Name]; ok {
defaultValue = v
}
var err error
var value string
switch {


@@ -32,12 +32,12 @@ type PromptOptions struct {
const skipPromptFlag = "yes"
// SkipPromptOption adds a "--yes/-y" flag to the cmd that can be used to skip
// prompts.
// confirmation prompts.
func SkipPromptOption() serpent.Option {
return serpent.Option{
Flag: skipPromptFlag,
FlagShorthand: "y",
Description: "Bypass prompts.",
Description: "Bypass confirmation prompts.",
// Discard
Value: serpent.BoolOf(new(bool)),
}


@@ -491,6 +491,11 @@ func (m multiSelectModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case tea.KeySpace:
options := m.filteredOptions()
if m.enableCustomInput && m.cursor == len(options) {
return m, nil
}
if len(options) != 0 {
options[m.cursor].chosen = !options[m.cursor].chosen
}
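The guard added above stops the space key from toggling when the cursor sits on the synthetic custom-input row appended after the real options. An isolated sketch of the same bounds logic (names here are illustrative, not the actual bubbletea model):

```go
package main

import "fmt"

// toggleAt flips the selection at cursor, skipping the synthetic
// "enter custom value" row that lives one past the last option when
// custom input is enabled.
func toggleAt(chosen []bool, cursor int, customInput bool) {
	if customInput && cursor == len(chosen) {
		return // cursor is on the custom-input row; nothing to toggle
	}
	if len(chosen) != 0 && cursor >= 0 && cursor < len(chosen) {
		chosen[cursor] = !chosen[cursor]
	}
}

func main() {
	opts := []bool{false, false}
	toggleAt(opts, 1, true) // toggles the second option
	toggleAt(opts, 2, true) // cursor on custom-input row: no-op
	fmt.Println(opts)
}
```

Without the first check, a cursor resting on the extra row would index one past the end of the options slice.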


@@ -42,9 +42,10 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
stopAfter time.Duration
workspaceName string
parameterFlags workspaceParameterFlags
autoUpdates string
copyParametersFrom string
parameterFlags workspaceParameterFlags
autoUpdates string
copyParametersFrom string
useParameterDefaults bool
// Organization context is only required if more than 1 template
// shares the same name across multiple organizations.
orgContext = NewOrganizationContext()
@@ -308,7 +309,7 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
displayAppliedPreset(inv, preset, presetParameters)
} else {
// Inform the user that no preset was applied
_, _ = fmt.Fprintf(inv.Stdout, "%s", cliui.Bold("No preset applied."))
_, _ = fmt.Fprintf(inv.Stdout, "%s\n", cliui.Bold("No preset applied."))
}
if opts.BeforeCreate != nil {
@@ -329,6 +330,8 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
RichParameterDefaults: cliBuildParameterDefaults,
SourceWorkspaceParameters: sourceWorkspaceParameters,
UseParameterDefaults: useParameterDefaults,
})
if err != nil {
return xerrors.Errorf("prepare build: %w", err)
@@ -435,6 +438,12 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
Description: "Specify the source workspace name to copy parameters from.",
Value: serpent.StringOf(&copyParametersFrom),
},
serpent.Option{
Flag: "use-parameter-defaults",
Env: "CODER_WORKSPACE_USE_PARAMETER_DEFAULTS",
Description: "Automatically accept parameter defaults when no value is provided.",
Value: serpent.BoolOf(&useParameterDefaults),
},
cliui.SkipPromptOption(),
)
cmd.Options = append(cmd.Options, parameterFlags.cliParameters()...)
@@ -459,6 +468,8 @@ type prepWorkspaceBuildArgs struct {
RichParameters []codersdk.WorkspaceBuildParameter
RichParameterFile string
RichParameterDefaults []codersdk.WorkspaceBuildParameter
UseParameterDefaults bool
}
// resolvePreset returns the preset matching the given presetName (if specified),
@@ -561,7 +572,8 @@ func prepWorkspaceBuild(inv *serpent.Invocation, client *codersdk.Client, args p
WithPromptRichParameters(args.PromptRichParameters).
WithRichParameters(args.RichParameters).
WithRichParametersFile(parameterFile).
WithRichParametersDefaults(args.RichParameterDefaults)
WithRichParametersDefaults(args.RichParameterDefaults).
WithUseParameterDefaults(args.UseParameterDefaults)
buildParameters, err := resolver.Resolve(inv, args.Action, templateVersionParameters)
if err != nil {
return nil, err
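Based on the flag and environment-variable definitions in this change, invocation would look roughly like the following (workspace and template names are placeholders):

```shell
# Accept every template default without prompting; parameters that have
# no default and no supplied value are still prompted for.
coder create my-workspace --template my-template --use-parameter-defaults -y

# The same behavior via the environment variable added above.
CODER_WORKSPACE_USE_PARAMETER_DEFAULTS=true coder create my-workspace --template my-template -y
```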


@@ -139,12 +139,15 @@ func TestCreate(t *testing.T) {
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent())
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Create a new version
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
@@ -318,353 +321,438 @@ func prepareEchoResponses(parameters []*proto.RichParameter, presets ...*proto.P
}
}
type param struct {
name string
ptype string
value string
mutable bool
}
func TestCreateWithRichParameters(t *testing.T) {
t.Parallel()
const (
firstParameterName = "first_parameter"
firstParameterDescription = "This is first parameter"
firstParameterValue = "1"
secondParameterName = "second_parameter"
secondParameterDisplayName = "Second Parameter"
secondParameterDescription = "This is second parameter"
secondParameterValue = "2"
immutableParameterName = "third_parameter"
immutableParameterDescription = "This is not mutable parameter"
immutableParameterValue = "4"
)
echoResponses := func() *echo.Responses {
return prepareEchoResponses([]*proto.RichParameter{
{Name: firstParameterName, Description: firstParameterDescription, Mutable: true},
{Name: secondParameterName, DisplayName: secondParameterDisplayName, Description: secondParameterDescription, Mutable: true},
{Name: immutableParameterName, Description: immutableParameterDescription, Mutable: false},
})
// Default parameters and their expected values.
params := []param{
{
name: "number_param",
ptype: "number",
value: "777",
mutable: true,
},
{
name: "string_param",
ptype: "string",
value: "qux",
mutable: true,
},
{
name: "bool_param",
// TODO: Setting the type breaks booleans. It claims the default is false
// but when you then accept this default it errors saying that the value
// must be true or false. For now, use a string.
ptype: "string",
value: "false",
mutable: true,
},
{
name: "immutable_string_param",
ptype: "string",
value: "i am eternal",
mutable: false,
},
}
t.Run("InputParameters", func(t *testing.T) {
t.Parallel()
type testContext struct {
client *codersdk.Client
member *codersdk.Client
owner codersdk.CreateFirstUserResponse
template codersdk.Template
workspaceName string
}
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
tests := []struct {
name string
// setup runs before the command is started and returns arguments that will
// be appended to the create command.
setup func() []string
// handlePty optionally runs after the command is started. It should handle
// all expected prompts from the pty.
handlePty func(pty *ptytest.PTY)
// postRun runs after the command has finished but before the workspace is
// verified. It must return the workspace name to check (used for the copy
// workspace tests).
postRun func(t *testing.T, args testContext) string
// errors contains expected errors. The workspace will not be verified if
// errors are expected.
errors []string
// inputParameters overrides the default parameters.
inputParameters []param
// expectedParameters defaults to inputParameters.
expectedParameters []param
// withDefaults sets DefaultValue to each parameter's value.
withDefaults bool
}{
{
name: "ValuesFromPrompt",
handlePty: func(pty *ptytest.PTY) {
// Enter the value for each parameter as prompted.
for _, param := range params {
pty.ExpectMatch(param.name)
pty.WriteLine(param.value)
}
// Confirm the creation.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
name: "ValuesFromDefaultFlags",
setup: func() []string {
// Provide the defaults on the command line.
args := []string{}
for _, param := range params {
args = append(args, "--parameter-default", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
handlePty: func(pty *ptytest.PTY) {
// Simply accept the defaults.
for _, param := range params {
pty.ExpectMatch(param.name)
pty.ExpectMatch(`Enter a value (default: "` + param.value + `")`)
pty.WriteLine("")
}
// Confirm the creation.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
name: "ValuesFromFile",
setup: func() []string {
// Create a file with the values.
tempDir := t.TempDir()
removeTmpDirUntilSuccessAfterTest(t, tempDir)
parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
for _, param := range params {
_, err := parameterFile.WriteString(fmt.Sprintf("%s: %s\n", param.name, param.value))
require.NoError(t, err)
}
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
return []string{"--rich-parameter-file", parameterFile.Name()}
},
handlePty: func(pty *ptytest.PTY) {
// No prompts, we only need to confirm.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
name: "ValuesFromFlags",
setup: func() []string {
// Provide the values on the command line.
var args []string
for _, param := range params {
args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
handlePty: func(pty *ptytest.PTY) {
// No prompts, we only need to confirm.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
name: "MisspelledParameter",
setup: func() []string {
// Provide the values on the command line.
args := []string{}
for i, param := range params {
if i == 0 {
// Slightly misspell the first parameter with an extra character.
args = append(args, "--parameter", fmt.Sprintf("n%s=%s", param.name, param.value))
} else {
args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
}
}
return args
},
errors: []string{
"parameter \"n" + params[0].name + "\" is not present in the template",
"Did you mean: " + params[0].name,
},
},
{
name: "ValuesFromWorkspace",
setup: func() []string {
// Provide the values on the command line.
args := []string{"-y"}
for _, param := range params {
args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
postRun: func(t *testing.T, tctx testContext) string {
inv, root := clitest.New(t, "create", "--copy-parameters-from", tctx.workspaceName, "other-workspace", "-y")
clitest.SetupConfig(t, tctx.member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "failed to create a workspace based on the source workspace")
return "other-workspace"
},
},
{
name: "ValuesFromOutdatedWorkspace",
setup: func() []string {
// Provide the values on the command line.
args := []string{"-y"}
for _, param := range params {
args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
postRun: func(t *testing.T, tctx testContext) string {
// Update the template to a new version.
version2 := coderdtest.CreateTemplateVersion(t, tctx.client, tctx.owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
{Name: "another_parameter", Type: "string", DefaultValue: "not-relevant"},
}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = tctx.template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, tctx.client, version2.ID)
coderdtest.UpdateActiveTemplateVersion(t, tctx.client, tctx.template.ID, version2.ID)
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name)
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
// Then create the copy. It should use the old template version.
inv, root := clitest.New(t, "create", "--copy-parameters-from", tctx.workspaceName, "other-workspace", "-y")
clitest.SetupConfig(t, tctx.member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "failed to create a workspace based on the source workspace")
return "other-workspace"
},
},
{
name: "ValuesFromTemplateDefaults",
handlePty: func(pty *ptytest.PTY) {
// Simply accept the defaults.
for _, param := range params {
pty.ExpectMatch(param.name)
pty.ExpectMatch(`Enter a value (default: "` + param.value + `")`)
pty.WriteLine("")
}
// Confirm the creation.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
withDefaults: true,
},
{
name: "ValuesFromTemplateDefaultsNoPrompt",
setup: func() []string {
return []string{"--use-parameter-defaults"}
},
handlePty: func(pty *ptytest.PTY) {
// Default values should get printed.
for _, param := range params {
pty.ExpectMatch(fmt.Sprintf("%s: '%s'", param.name, param.value))
}
// No prompts, we only need to confirm.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
withDefaults: true,
},
{
name: "ValuesFromDefaultFlagsNoPrompt",
setup: func() []string {
// Provide the defaults on the command line.
args := []string{"--use-parameter-defaults"}
for _, param := range params {
args = append(args, "--parameter-default", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
handlePty: func(pty *ptytest.PTY) {
// Default values should get printed.
for _, param := range params {
pty.ExpectMatch(fmt.Sprintf("%s: '%s'", param.name, param.value))
}
// No prompts, we only need to confirm.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
// File and flags should override template defaults. Additionally, if a
// value has no default value we should still get a prompt for it.
name: "ValuesFromMultipleSources",
setup: func() []string {
tempDir := t.TempDir()
removeTmpDirUntilSuccessAfterTest(t, tempDir)
parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
_, err := parameterFile.WriteString(`
file_param: from file
cli_param: from file`)
require.NoError(t, err)
return []string{
"--use-parameter-defaults",
"--rich-parameter-file", parameterFile.Name(),
"--parameter-default", "file_param=from cli default",
"--parameter-default", "cli_param=from cli default",
"--parameter", "cli_param=from cli",
}
},
handlePty: func(pty *ptytest.PTY) {
// Should get prompted for the input param since it has no default.
pty.ExpectMatch("input_param")
pty.WriteLine("from input")
matches := []string{
firstParameterDescription, firstParameterValue,
secondParameterDisplayName, "",
secondParameterDescription, secondParameterValue,
immutableParameterDescription, immutableParameterValue,
"Confirm create?", "yes",
}
for i := 0; i < len(matches); i += 2 {
match := matches[i]
value := matches[i+1]
pty.ExpectMatch(match)
// Confirm the creation.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
withDefaults: true,
inputParameters: []param{
{
name: "template_param",
value: "from template default",
},
{
name: "file_param",
value: "from template default",
},
{
name: "cli_param",
value: "from template default",
},
{
name: "input_param",
},
},
expectedParameters: []param{
{
name: "template_param",
value: "from template default",
},
{
name: "file_param",
value: "from file",
},
{
name: "cli_param",
value: "from cli",
},
{
name: "input_param",
value: "from input",
},
},
},
}
if value != "" {
pty.WriteLine(value)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
parameters := params
if len(tt.inputParameters) > 0 {
parameters = tt.inputParameters
}
}
<-doneChan
})
t.Run("ParametersDefaults", func(t *testing.T) {
t.Parallel()
// Convert parameters for the echo provisioner response.
var rparams []*proto.RichParameter
for i, param := range parameters {
defaultValue := ""
if tt.withDefaults {
defaultValue = param.value
}
rparams = append(rparams, &proto.RichParameter{
Name: param.name,
Type: param.ptype,
Mutable: param.mutable,
DefaultValue: defaultValue,
Order: int32(i), //nolint:gosec
})
}
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
// Set up the template.
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(rparams))
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
"--parameter-default", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter-default", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter-default", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
// Run the command, possibly setting up values.
workspaceName := "my-workspace"
args := []string{"create", workspaceName, "--template", template.Name}
if tt.setup != nil {
args = append(args, tt.setup()...)
}
inv, root := clitest.New(t, args...)
clitest.SetupConfig(t, member, root)
doneChan := make(chan error)
pty := ptytest.New(t).Attach(inv)
go func() {
doneChan <- inv.Run()
}()
matches := []string{
firstParameterDescription, firstParameterValue,
secondParameterDescription, secondParameterValue,
immutableParameterDescription, immutableParameterValue,
}
for i := 0; i < len(matches); i += 2 {
match := matches[i]
defaultValue := matches[i+1]
// The test may do something with the pty.
if tt.handlePty != nil {
tt.handlePty(pty)
}
pty.ExpectMatch(match)
pty.ExpectMatch(`Enter a value (default: "` + defaultValue + `")`)
pty.WriteLine("")
}
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
<-doneChan
// Wait for the command to exit.
err := <-doneChan
// Verify that the expected default values were used.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
// The test may want to run additional setup like copying the workspace.
if tt.postRun != nil {
workspaceName = tt.postRun(t, testContext{
client: client,
member: member,
owner: owner,
template: template,
workspaceName: workspaceName,
})
}
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
Name: "my-workspace",
if len(tt.errors) > 0 {
require.Error(t, err)
for _, errstr := range tt.errors {
assert.ErrorContains(t, err, errstr)
}
} else {
require.NoError(t, err)
// Verify the workspace was created and has the right template and values.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{Name: workspaceName})
require.NoError(t, err, "expected to find created workspace")
require.Len(t, workspaces.Workspaces, 1)
workspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
require.Equal(t, version.ID, workspaceLatestBuild.TemplateVersionID)
buildParameters, err := client.WorkspaceBuildParameters(ctx, workspaceLatestBuild.ID)
require.NoError(t, err)
if len(tt.expectedParameters) > 0 {
parameters = tt.expectedParameters
}
require.Len(t, buildParameters, len(parameters))
for _, param := range parameters {
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: param.name, Value: param.value})
}
}
})
require.NoError(t, err, "can't list available workspaces")
require.Len(t, workspaces.Workspaces, 1)
workspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
require.Equal(t, version.ID, workspaceLatestBuild.TemplateVersionID)
buildParameters, err := client.WorkspaceBuildParameters(ctx, workspaceLatestBuild.ID)
require.NoError(t, err)
require.Len(t, buildParameters, 3)
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
})
t.Run("RichParametersFile", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
tempDir := t.TempDir()
removeTmpDirUntilSuccessAfterTest(t, tempDir)
parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
_, _ = parameterFile.WriteString(
firstParameterName + ": " + firstParameterValue + "\n" +
secondParameterName + ": " + secondParameterValue + "\n" +
immutableParameterName + ": " + immutableParameterValue)
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "--rich-parameter-file", parameterFile.Name())
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
matches := []string{
"Confirm create?", "yes",
}
for i := 0; i < len(matches); i += 2 {
match := matches[i]
value := matches[i+1]
pty.ExpectMatch(match)
pty.WriteLine(value)
}
<-doneChan
})
t.Run("ParameterFlags", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
matches := []string{
"Confirm create?", "yes",
}
for i := 0; i < len(matches); i += 2 {
match := matches[i]
value := matches[i+1]
pty.ExpectMatch(match)
pty.WriteLine(value)
}
<-doneChan
})
t.Run("WrongParameterName/DidYouMean", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
wrongFirstParameterName := "frst-prameter"
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
"--parameter", fmt.Sprintf("%s=%s", wrongFirstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
assert.ErrorContains(t, err, "parameter \""+wrongFirstParameterName+"\" is not present in the template")
assert.ErrorContains(t, err, "Did you mean: "+firstParameterName)
})
t.Run("CopyParameters", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Firstly, create a regular workspace using template with parameters.
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "-y",
"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "can't create first workspace")
// Secondly, create a new workspace using parameters from the previous workspace.
const otherWorkspace = "other-workspace"
inv, root = clitest.New(t, "create", "--copy-parameters-from", "my-workspace", otherWorkspace, "-y")
clitest.SetupConfig(t, member, root)
pty = ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err = inv.Run()
require.NoError(t, err, "can't create a workspace based on the source workspace")
// Verify if the new workspace uses expected parameters.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
Name: otherWorkspace,
})
require.NoError(t, err, "can't list available workspaces")
require.Len(t, workspaces.Workspaces, 1)
otherWorkspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
buildParameters, err := client.WorkspaceBuildParameters(ctx, otherWorkspaceLatestBuild.ID)
require.NoError(t, err)
require.Len(t, buildParameters, 3)
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
})
t.Run("CopyParametersFromNotUpdatedWorkspace", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Firstly, create a regular workspace using template with parameters.
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "-y",
"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "can't create first workspace")
// Secondly, update the template to the newer version.
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
{Name: "third_parameter", Type: "string", DefaultValue: "not-relevant"},
}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
coderdtest.UpdateActiveTemplateVersion(t, client, template.ID, version2.ID)
// Thirdly, create a new workspace using parameters from the previous workspace.
const otherWorkspace = "other-workspace"
inv, root = clitest.New(t, "create", "--copy-parameters-from", "my-workspace", otherWorkspace, "-y")
clitest.SetupConfig(t, member, root)
pty = ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err = inv.Run()
require.NoError(t, err, "can't create a workspace based on the source workspace")
// Verify if the new workspace uses expected parameters.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
Name: otherWorkspace,
})
require.NoError(t, err, "can't list available workspaces")
require.Len(t, workspaces.Workspaces, 1)
otherWorkspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
require.Equal(t, version.ID, otherWorkspaceLatestBuild.TemplateVersionID)
buildParameters, err := client.WorkspaceBuildParameters(ctx, otherWorkspaceLatestBuild.ID)
require.NoError(t, err)
require.Len(t, buildParameters, 3)
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
})
}
}
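The ValuesFromMultipleSources case above asserts a precedence order: an explicit `--parameter` flag beats the rich-parameter file, which beats `--parameter-default` and the template's own default, and anything still unset is prompted for. A hedged sketch of that resolution order (helper names are illustrative, not the actual resolver API):

```go
package main

import "fmt"

// resolveParam returns a parameter's value by source precedence:
// --parameter flag, then the rich-parameter file, then a default
// (--parameter-default or the template's). A false return means the
// caller still has to prompt interactively.
func resolveParam(name string, flags, file, defaults map[string]string) (string, bool) {
	for _, src := range []map[string]string{flags, file, defaults} {
		if v, ok := src[name]; ok {
			return v, true
		}
	}
	return "", false
}

func main() {
	flags := map[string]string{"cli_param": "from cli"}
	file := map[string]string{"cli_param": "from file", "file_param": "from file"}
	defaults := map[string]string{
		"cli_param": "d", "file_param": "d",
		"template_param": "from template default",
	}
	for _, name := range []string{"cli_param", "file_param", "template_param", "input_param"} {
		v, ok := resolveParam(name, flags, file, defaults)
		if !ok {
			v = "<prompt>"
		}
		fmt.Printf("%s: %s\n", name, v)
	}
}
```

The four printed values line up with the test's expectedParameters: cli wins for cli_param, the file wins for file_param, the template default survives for template_param, and input_param falls through to a prompt.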
func TestCreateWithPreset(t *testing.T) {


@@ -1,12 +0,0 @@
package cli
import (
boundarycli "github.com/coder/boundary/cli"
"github.com/coder/serpent"
)
func (*RootCmd) boundary() *serpent.Command {
cmd := boundarycli.BaseCommand() // Package coder/boundary/cli exports a "base command" designed to be integrated as a subcommand.
cmd.Use += " [args...]" // The base command looks like `boundary -- command`. Serpent adds the flags piece, but we need to add the args.
return cmd
}


@@ -1,33 +0,0 @@
package cli_test
import (
"testing"
"github.com/stretchr/testify/assert"
boundarycli "github.com/coder/boundary/cli"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
// Actually testing the functionality of coder/boundary takes place in the
// coder/boundary repo, since it's a dependency of coder.
// Here we want to test basically that integrating it as a subcommand doesn't break anything.
func TestBoundarySubcommand(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
inv, _ := clitest.New(t, "exp", "boundary", "--help")
pty := ptytest.New(t).Attach(inv)
go func() {
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
}()
// Expect the --help output to include the short description.
// We're simply confirming that `coder boundary --help` ran without a runtime error as
// a good chunk of serpents self validation logic happens at runtime.
pty.ExpectMatch(boundarycli.BaseCommand().Short)
}


@@ -174,6 +174,19 @@ func (RootCmd) promptExample() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
return multiSelectError
}, useThingsOption, enableCustomInputOption),
promptCmd("multi-select-no-defaults", func(inv *serpent.Invocation) error {
if len(multiSelectValues) == 0 {
multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{
Message: "Select some things:",
Options: []string{
"Code", "Chairs", "Whale",
},
EnableCustomInput: enableCustomInput,
})
}
_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
return multiSelectError
}, useThingsOption, enableCustomInputOption),
promptCmd("rich-multi-select", func(inv *serpent.Invocation) error {
if len(multiSelectValues) == 0 {
multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{

View File

@@ -68,6 +68,8 @@ func (r *RootCmd) scaletestCmd() *serpent.Command {
r.scaletestTaskStatus(),
r.scaletestSMTP(),
r.scaletestPrebuilds(),
r.scaletestBridge(),
r.scaletestLLMMock(),
},
}

cli/exp_scaletest_bridge.go Normal file
View File

@@ -0,0 +1,278 @@
//go:build !slim
package cli
import (
"fmt"
"net/http"
"os/signal"
"strconv"
"text/tabwriter"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/scaletest/bridge"
"github.com/coder/coder/v2/scaletest/createusers"
"github.com/coder/coder/v2/scaletest/harness"
"github.com/coder/serpent"
)
func (r *RootCmd) scaletestBridge() *serpent.Command {
var (
concurrentUsers int64
noCleanup bool
mode string
upstreamURL string
provider string
requestsPerUser int64
useStreamingAPI bool
requestPayloadSize int64
numMessages int64
httpTimeout time.Duration
timeoutStrategy = &timeoutFlags{}
cleanupStrategy = newScaletestCleanupStrategy()
output = &scaletestOutputFlags{}
prometheusFlags = &scaletestPrometheusFlags{}
)
cmd := &serpent.Command{
Use: "bridge",
Short: "Generate load on the AI Bridge service.",
Long: `Generate load for AI Bridge testing. Supports two modes: 'bridge' mode routes requests through the Coder AI Bridge, while 'direct' mode makes requests directly to an upstream URL (useful for baseline comparisons).
Examples:
# Test OpenAI API through bridge
coder scaletest bridge --mode bridge --provider openai --concurrent-users 10 --request-count 5 --num-messages 10
# Test Anthropic API through bridge
coder scaletest bridge --mode bridge --provider anthropic --concurrent-users 10 --request-count 5 --num-messages 10
# Test directly against mock server
coder scaletest bridge --mode direct --provider openai --upstream-url http://localhost:8080/v1/chat/completions
`,
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
client, err := r.InitClient(inv)
if err != nil {
return err
}
client.HTTPClient = &http.Client{
Transport: &codersdk.HeaderTransport{
Transport: http.DefaultTransport,
Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
reg := prometheus.NewRegistry()
metrics := bridge.NewMetrics(reg)
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
notifyCtx, stop := signal.NotifyContext(ctx, StopSignals...)
defer stop()
ctx = notifyCtx
var userConfig createusers.Config
if bridge.RequestMode(mode) == bridge.RequestModeBridge {
me, err := requireAdmin(ctx, client)
if err != nil {
return err
}
if len(me.OrganizationIDs) == 0 {
return xerrors.New("admin user must have at least one organization")
}
userConfig = createusers.Config{
OrganizationID: me.OrganizationIDs[0],
}
_, _ = fmt.Fprintln(inv.Stderr, "Bridge mode: creating users and making requests through AI Bridge...")
} else {
_, _ = fmt.Fprintf(inv.Stderr, "Direct mode: making requests directly to %s\n", upstreamURL)
}
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("parse output flags: %w", err)
}
config := bridge.Config{
Mode: bridge.RequestMode(mode),
Metrics: metrics,
Provider: provider,
RequestCount: int(requestsPerUser),
Stream: useStreamingAPI,
RequestPayloadSize: int(requestPayloadSize),
NumMessages: int(numMessages),
HTTPTimeout: httpTimeout,
UpstreamURL: upstreamURL,
User: userConfig,
}
if err := config.Validate(); err != nil {
return xerrors.Errorf("validate config: %w", err)
}
if err := config.PrepareRequestBody(); err != nil {
return xerrors.Errorf("prepare request body: %w", err)
}
th := harness.NewTestHarness(timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}), cleanupStrategy.toStrategy())
for i := range concurrentUsers {
id := strconv.Itoa(int(i))
name := fmt.Sprintf("bridge-%s", id)
var runner harness.Runnable = bridge.NewRunner(client, config)
th.AddRun(name, id, runner)
}
_, _ = fmt.Fprintln(inv.Stderr, "Bridge scaletest configuration:")
tw := tabwriter.NewWriter(inv.Stderr, 0, 0, 2, ' ', 0)
for _, opt := range inv.Command.Options {
if opt.Hidden || opt.ValueSource == serpent.ValueSourceNone {
continue
}
_, _ = fmt.Fprintf(tw, " %s:\t%s", opt.Name, opt.Value.String())
if opt.ValueSource != serpent.ValueSourceDefault {
_, _ = fmt.Fprintf(tw, "\t(from %s)", opt.ValueSource)
}
_, _ = fmt.Fprintln(tw)
}
_ = tw.Flush()
_, _ = fmt.Fprintln(inv.Stderr, "\nRunning bridge scaletest...")
testCtx, testCancel := timeoutStrategy.toContext(ctx)
defer testCancel()
err = th.Run(testCtx)
if err != nil {
return xerrors.Errorf("run test harness (harness failure, not a test failure): %w", err)
}
// If the command was interrupted, skip stats.
if notifyCtx.Err() != nil {
return notifyCtx.Err()
}
res := th.Results()
for _, o := range outputs {
err = o.write(res, inv.Stdout)
if err != nil {
return xerrors.Errorf("write output %q to %q: %w", o.format, o.path, err)
}
}
if !noCleanup {
_, _ = fmt.Fprintln(inv.Stderr, "\nCleaning up...")
cleanupCtx, cleanupCancel := cleanupStrategy.toContext(ctx)
defer cleanupCancel()
err = th.Cleanup(cleanupCtx)
if err != nil {
return xerrors.Errorf("cleanup tests: %w", err)
}
}
if res.TotalFail > 0 {
return xerrors.New("load test failed, see above for more details")
}
return nil
},
}
cmd.Options = serpent.OptionSet{
{
Flag: "concurrent-users",
FlagShorthand: "c",
Env: "CODER_SCALETEST_BRIDGE_CONCURRENT_USERS",
Description: "Required: Number of concurrent users.",
Value: serpent.Validate(serpent.Int64Of(&concurrentUsers), func(value *serpent.Int64) error {
if value == nil || value.Value() <= 0 {
return xerrors.New("--concurrent-users must be greater than 0")
}
return nil
}),
Required: true,
},
{
Flag: "mode",
Env: "CODER_SCALETEST_BRIDGE_MODE",
Default: "direct",
Description: "Request mode: 'bridge' (create users and use AI Bridge) or 'direct' (make requests directly to upstream-url).",
Value: serpent.EnumOf(&mode, string(bridge.RequestModeBridge), string(bridge.RequestModeDirect)),
},
{
Flag: "upstream-url",
Env: "CODER_SCALETEST_BRIDGE_UPSTREAM_URL",
Description: "URL to make requests to directly (required in direct mode, e.g., http://localhost:8080/v1/chat/completions).",
Value: serpent.StringOf(&upstreamURL),
},
{
Flag: "provider",
Env: "CODER_SCALETEST_BRIDGE_PROVIDER",
Default: "openai",
Description: "API provider to use.",
Value: serpent.EnumOf(&provider, "openai", "anthropic"),
},
{
Flag: "request-count",
Env: "CODER_SCALETEST_BRIDGE_REQUEST_COUNT",
Default: "1",
Description: "Number of sequential requests to make per runner.",
Value: serpent.Validate(serpent.Int64Of(&requestsPerUser), func(value *serpent.Int64) error {
if value == nil || value.Value() <= 0 {
return xerrors.New("--request-count must be greater than 0")
}
return nil
}),
},
{
Flag: "stream",
Env: "CODER_SCALETEST_BRIDGE_STREAM",
Description: "Enable streaming requests.",
Value: serpent.BoolOf(&useStreamingAPI),
},
{
Flag: "request-payload-size",
Env: "CODER_SCALETEST_BRIDGE_REQUEST_PAYLOAD_SIZE",
Default: "1024",
Description: "Size in bytes of the request payload (user message content). If 0, uses default message content.",
Value: serpent.Int64Of(&requestPayloadSize),
},
{
Flag: "num-messages",
Env: "CODER_SCALETEST_BRIDGE_NUM_MESSAGES",
Default: "1",
Description: "Number of messages to include in the conversation.",
Value: serpent.Int64Of(&numMessages),
},
{
Flag: "no-cleanup",
Env: "CODER_SCALETEST_NO_CLEANUP",
Description: "Do not clean up resources after the test completes.",
Value: serpent.BoolOf(&noCleanup),
},
{
Flag: "http-timeout",
Env: "CODER_SCALETEST_BRIDGE_HTTP_TIMEOUT",
Default: "30s",
Description: "Timeout for individual HTTP requests to the upstream provider.",
Value: serpent.DurationOf(&httpTimeout),
},
}
timeoutStrategy.attach(&cmd.Options)
cleanupStrategy.attach(&cmd.Options)
output.attach(&cmd.Options)
prometheusFlags.attach(&cmd.Options)
return cmd
}

View File

@@ -0,0 +1,118 @@
//go:build !slim
package cli
import (
"fmt"
"os/signal"
"time"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"cdr.dev/slog/v3/sloggers/sloghuman"
"github.com/coder/coder/v2/scaletest/llmmock"
"github.com/coder/serpent"
)
func (*RootCmd) scaletestLLMMock() *serpent.Command {
var (
address string
artificialLatency time.Duration
responsePayloadSize int64
pprofEnable bool
pprofAddress string
traceEnable bool
)
cmd := &serpent.Command{
Use: "llm-mock",
Short: "Start a mock LLM API server for testing.",
Long: `Start a mock LLM API server that simulates the OpenAI and Anthropic APIs.`,
Handler: func(inv *serpent.Invocation) error {
ctx, stop := signal.NotifyContext(inv.Context(), StopSignals...)
defer stop()
logger := slog.Make(sloghuman.Sink(inv.Stderr)).Leveled(slog.LevelInfo)
if pprofEnable {
closePprof := ServeHandler(ctx, logger, nil, pprofAddress, "pprof")
defer closePprof()
logger.Info(ctx, "pprof server started", slog.F("address", pprofAddress))
}
config := llmmock.Config{
Address: address,
Logger: logger,
ArtificialLatency: artificialLatency,
ResponsePayloadSize: int(responsePayloadSize),
PprofEnable: pprofEnable,
PprofAddress: pprofAddress,
TraceEnable: traceEnable,
}
srv := new(llmmock.Server)
if err := srv.Start(ctx, config); err != nil {
return xerrors.Errorf("start mock LLM server: %w", err)
}
defer func() {
_ = srv.Stop()
}()
_, _ = fmt.Fprintf(inv.Stdout, "Mock LLM API server started on %s\n", srv.APIAddress())
_, _ = fmt.Fprintf(inv.Stdout, " OpenAI endpoint: %s/v1/chat/completions\n", srv.APIAddress())
_, _ = fmt.Fprintf(inv.Stdout, " Anthropic endpoint: %s/v1/messages\n", srv.APIAddress())
<-ctx.Done()
return nil
},
}
cmd.Options = []serpent.Option{
{
Flag: "address",
Env: "CODER_SCALETEST_LLM_MOCK_ADDRESS",
Default: "localhost",
Description: "Address to bind the mock LLM API server. Can include a port (e.g., 'localhost:8080' or ':8080'). Uses a random port if no port is specified.",
Value: serpent.StringOf(&address),
},
{
Flag: "artificial-latency",
Env: "CODER_SCALETEST_LLM_MOCK_ARTIFICIAL_LATENCY",
Default: "0s",
Description: "Artificial latency to add to each response (e.g., 100ms, 1s). Simulates slow upstream processing.",
Value: serpent.DurationOf(&artificialLatency),
},
{
Flag: "response-payload-size",
Env: "CODER_SCALETEST_LLM_MOCK_RESPONSE_PAYLOAD_SIZE",
Default: "0",
Description: "Size in bytes of the response payload. If 0, uses default context-aware responses.",
Value: serpent.Int64Of(&responsePayloadSize),
},
{
Flag: "pprof-enable",
Env: "CODER_SCALETEST_LLM_MOCK_PPROF_ENABLE",
Default: "false",
Description: "Serve pprof metrics on the address defined by pprof-address.",
Value: serpent.BoolOf(&pprofEnable),
},
{
Flag: "pprof-address",
Env: "CODER_SCALETEST_LLM_MOCK_PPROF_ADDRESS",
Default: "127.0.0.1:6060",
Description: "The bind address to serve pprof.",
Value: serpent.StringOf(&pprofAddress),
},
{
Flag: "trace-enable",
Env: "CODER_SCALETEST_LLM_MOCK_TRACE_ENABLE",
Default: "false",
Description: "Whether application tracing data is collected. It exports to a backend configured by environment variables. See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md.",
Value: serpent.BoolOf(&traceEnable),
},
}
return cmd
}

View File

@@ -141,7 +141,9 @@ func TestGitSSH(t *testing.T) {
"-o", "IdentitiesOnly=yes",
"127.0.0.1",
)
ctx := testutil.Context(t, testutil.WaitMedium)
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx := testutil.Context(t, testutil.WaitSuperLong)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.EqualValues(t, 1, inc)
@@ -205,7 +207,9 @@ func TestGitSSH(t *testing.T) {
inv, _ := clitest.New(t, cmdArgs...)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
ctx := testutil.Context(t, testutil.WaitMedium)
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx := testutil.Context(t, testutil.WaitSuperLong)
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
select {
@@ -223,7 +227,9 @@ func TestGitSSH(t *testing.T) {
inv, _ = clitest.New(t, cmdArgs...)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
ctx = testutil.Context(t, testutil.WaitMedium) // Reset context for second cmd test.
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx = testutil.Context(t, testutil.WaitSuperLong) // Reset context for second cmd test.
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
select {

View File

@@ -462,9 +462,38 @@ func (r *RootCmd) login() *serpent.Command {
Value: serpent.BoolOf(&useTokenForSession),
},
}
cmd.Children = []*serpent.Command{
r.loginToken(),
}
return cmd
}
func (r *RootCmd) loginToken() *serpent.Command {
return &serpent.Command{
Use: "token",
Short: "Print the current session token",
Long: "Print the session token for use in scripts and automation.",
Middleware: serpent.RequireNArgs(0),
Handler: func(inv *serpent.Invocation) error {
tok, err := r.ensureTokenBackend().Read(r.clientURL)
if err != nil {
if xerrors.Is(err, os.ErrNotExist) {
return xerrors.New("no session token found - run 'coder login' first")
}
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
return errKeyringNotSupported
}
return xerrors.Errorf("read session token: %w", err)
}
if tok == "" {
return xerrors.New("no session token found - run 'coder login' first")
}
_, err = fmt.Fprintln(inv.Stdout, tok)
return err
},
}
}
// isWSL determines if coder-cli is running within Windows Subsystem for Linux
func isWSL() (bool, error) {
if runtime.GOOS == goosDarwin || runtime.GOOS == goosWindows {

View File

@@ -537,3 +537,31 @@ func TestLogin(t *testing.T) {
require.Equal(t, selected, first.OrganizationID.String())
})
}
func TestLoginToken(t *testing.T) {
t.Parallel()
t.Run("PrintsToken", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, "login", "token", "--url", client.URL.String())
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitShort)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
pty.ExpectMatch(client.SessionToken())
})
t.Run("NoTokenStored", func(t *testing.T) {
t.Parallel()
inv, _ := clitest.New(t, "login", "token")
ctx := testutil.Context(t, testutil.WaitShort)
err := inv.WithContext(ctx).Run()
require.Error(t, err)
require.Contains(t, err.Error(), "no session token found")
})
}

View File

@@ -65,6 +65,22 @@ func (r *RootCmd) organizationSettings(orgContext *OrganizationContext) *serpent
return cli.OrganizationIDPSyncSettings(ctx)
},
},
{
Name: "workspace-sharing",
Aliases: []string{"workspacesharing"},
Short: "Workspace sharing settings for the organization.",
Patch: func(ctx context.Context, cli *codersdk.Client, org uuid.UUID, input json.RawMessage) (any, error) {
var req codersdk.WorkspaceSharingSettings
err := json.Unmarshal(input, &req)
if err != nil {
return nil, xerrors.Errorf("unmarshalling workspace sharing settings: %w", err)
}
return cli.PatchWorkspaceSharingSettings(ctx, org.String(), req)
},
Fetch: func(ctx context.Context, cli *codersdk.Client, org uuid.UUID) (any, error) {
return cli.WorkspaceSharingSettings(ctx, org.String())
},
},
}
cmd := &serpent.Command{
Use: "settings",

View File

@@ -34,6 +34,7 @@ type ParameterResolver struct {
promptRichParameters bool
promptEphemeralParameters bool
useParameterDefaults bool
}
func (pr *ParameterResolver) WithLastBuildParameters(params []codersdk.WorkspaceBuildParameter) *ParameterResolver {
@@ -86,8 +87,21 @@ func (pr *ParameterResolver) WithPromptEphemeralParameters(promptEphemeralParame
return pr
}
// Resolve gathers workspace build parameters in a layered fashion, applying values from various sources
// in order of precedence: parameter file < CLI/ENV < source build < last build < preset < user input.
func (pr *ParameterResolver) WithUseParameterDefaults(useParameterDefaults bool) *ParameterResolver {
pr.useParameterDefaults = useParameterDefaults
return pr
}
// Resolve gathers workspace build parameters in a layered fashion, applying
// values from various sources in order of precedence:
// 1. template defaults (if auto-accepting defaults)
// 2. cli parameter defaults (if auto-accepting defaults)
// 3. parameter file
// 4. CLI/ENV
// 5. source build
// 6. last build
// 7. preset
// 8. user input (unless auto-accepting defaults)
func (pr *ParameterResolver) Resolve(inv *serpent.Invocation, action WorkspaceCLIAction, templateVersionParameters []codersdk.TemplateVersionParameter) ([]codersdk.WorkspaceBuildParameter, error) {
var staged []codersdk.WorkspaceBuildParameter
var err error
@@ -262,9 +276,25 @@ func (pr *ParameterResolver) resolveWithInput(resolved []codersdk.WorkspaceBuild
(action == WorkspaceUpdate && tvp.Mutable && tvp.Required) ||
(action == WorkspaceUpdate && !tvp.Mutable && firstTimeUse) ||
(tvp.Mutable && !tvp.Ephemeral && pr.promptRichParameters) {
parameterValue, err := cliui.RichParameter(inv, tvp, pr.richParametersDefaults)
if err != nil {
return nil, err
name := tvp.Name
if tvp.DisplayName != "" {
name = tvp.DisplayName
}
parameterValue := tvp.DefaultValue
if v, ok := pr.richParametersDefaults[tvp.Name]; ok {
parameterValue = v
}
// Auto-accept the default if there is one.
if pr.useParameterDefaults && parameterValue != "" {
_, _ = fmt.Fprintf(inv.Stdout, "Using default value for %s: '%s'\n", name, parameterValue)
} else {
var err error
parameterValue, err = cliui.RichParameter(inv, tvp, name, parameterValue)
if err != nil {
return nil, err
}
}
resolved = append(resolved, codersdk.WorkspaceBuildParameter{

View File

@@ -24,6 +24,7 @@ import (
"text/tabwriter"
"time"
"github.com/google/uuid"
"github.com/mattn/go-isatty"
"github.com/mitchellh/go-wordwrap"
"golang.org/x/mod/semver"
@@ -150,7 +151,6 @@ func (r *RootCmd) AGPLExperimental() []*serpent.Command {
r.promptExample(),
r.rptyCommand(),
r.syncCommand(),
r.boundary(),
}
}
@@ -332,6 +332,12 @@ func (r *RootCmd) Command(subcommands []*serpent.Command) (*serpent.Command, err
// support links.
return
}
if cmd.Name() == "boundary" {
// The boundary command is integrated from the boundary package
// and has YAML-only options (e.g., allowlist from config file)
// that don't have flags or env vars.
return
}
merr = errors.Join(
merr,
xerrors.Errorf("option %q in %q should have a flag or env", opt.Name, cmd.FullName()),
@@ -923,6 +929,9 @@ func splitNamedWorkspace(identifier string) (owner string, workspaceName string,
// a bare name (for a workspace owned by the current user) or a "user/workspace" combination,
// where user is either a username or UUID.
func namedWorkspace(ctx context.Context, client *codersdk.Client, identifier string) (codersdk.Workspace, error) {
if uid, err := uuid.Parse(identifier); err == nil {
return client.Workspace(ctx, uid)
}
owner, name, err := splitNamedWorkspace(identifier)
if err != nil {
return codersdk.Workspace{}, err

View File

@@ -197,7 +197,7 @@ func TestSharingStatus(t *testing.T) {
ctx = testutil.Context(t, testutil.WaitMedium)
)
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toShareWithUser.ID.String(): codersdk.WorkspaceRoleUse,
},
@@ -248,7 +248,7 @@ func TestSharingRemove(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitMedium)
// Share the workspace with a user to later remove
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toShareWithUser.ID.String(): codersdk.WorkspaceRoleUse,
toRemoveUser.ID.String(): codersdk.WorkspaceRoleUse,
@@ -309,7 +309,7 @@ func TestSharingRemove(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitMedium)
// Share the workspace with a user to later remove
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toRemoveUser2.ID.String(): codersdk.WorkspaceRoleUse,
toRemoveUser1.ID.String(): codersdk.WorkspaceRoleUse,

View File

@@ -1,8 +1,10 @@
package cli
import (
"fmt"
"sort"
"sync"
"time"
"github.com/google/uuid"
"golang.org/x/xerrors"
@@ -43,11 +45,11 @@ func (r *RootCmd) show() *serpent.Command {
if err != nil {
return xerrors.Errorf("get workspace: %w", err)
}
options := cliui.WorkspaceResourcesOptions{
WorkspaceName: workspace.Name,
ServerVersion: buildInfo.Version,
ShowDetails: details,
Title: fmt.Sprintf("%s/%s (%s since %s) %s:%s", workspace.OwnerName, workspace.Name, workspace.LatestBuild.Status, time.Since(workspace.LatestBuild.CreatedAt).Round(time.Second).String(), workspace.TemplateName, workspace.LatestBuild.TemplateVersionName),
}
if workspace.LatestBuild.Status == codersdk.WorkspaceStatusRunning {
// Get listening ports for each agent.
@@ -55,7 +57,6 @@ func (r *RootCmd) show() *serpent.Command {
options.ListeningPorts = ports
options.Devcontainers = devcontainers
}
return cliui.WorkspaceResources(inv.Stdout, workspace.LatestBuild.Resources, options)
},
}

View File

@@ -2,6 +2,7 @@ package cli_test
import (
"bytes"
"fmt"
"testing"
"time"
@@ -15,6 +16,7 @@ import (
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
func TestShow(t *testing.T) {
@@ -28,7 +30,7 @@ func TestShow(t *testing.T) {
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, member, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
build := coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
args := []string{
"show",
@@ -38,26 +40,30 @@ func TestShow(t *testing.T) {
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitShort)
go func() {
defer close(doneChan)
err := inv.Run()
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
}()
matches := []struct {
match string
write string
}{
{match: fmt.Sprintf("%s/%s", workspace.OwnerName, workspace.Name)},
{match: fmt.Sprintf("(%s since ", build.Status)},
{match: fmt.Sprintf("%s:%s", workspace.TemplateName, workspace.LatestBuild.TemplateVersionName)},
{match: "compute.main"},
{match: "smith (linux, i386)"},
{match: "coder ssh " + workspace.Name},
}
for _, m := range matches {
pty.ExpectMatch(m.match)
pty.ExpectMatchContext(ctx, m.match)
if len(m.write) > 0 {
pty.WriteLine(m.write)
}
}
<-doneChan
_ = testutil.TryReceive(ctx, t, doneChan)
})
}

View File

@@ -24,6 +24,7 @@ import (
"github.com/gofrs/flock"
"github.com/google/uuid"
"github.com/mattn/go-isatty"
"github.com/shirou/gopsutil/v4/process"
"github.com/spf13/afero"
gossh "golang.org/x/crypto/ssh"
gosshagent "golang.org/x/crypto/ssh/agent"
@@ -84,6 +85,9 @@ func (r *RootCmd) ssh() *serpent.Command {
containerName string
containerUser string
// Used in tests to simulate the parent exiting.
testForcePPID int64
)
cmd := &serpent.Command{
Annotations: workspaceCommand,
@@ -175,6 +179,24 @@ func (r *RootCmd) ssh() *serpent.Command {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// When running as a ProxyCommand (stdio mode), monitor the parent process
// and exit if it dies to avoid leaving orphaned processes. This is
// particularly important when editors like VSCode/Cursor spawn SSH
// connections and then crash or are killed - we don't want zombie
// `coder ssh` processes accumulating.
// Note: we use gopsutil to check the parent process because it also
// handles Windows processes in a standard way.
if stdio {
ppid := int32(os.Getppid()) // nolint:gosec
checkParentInterval := 10 * time.Second // Arbitrary interval chosen to avoid polling too frequently.
if testForcePPID > 0 {
ppid = int32(testForcePPID) // nolint:gosec
checkParentInterval = 100 * time.Millisecond // Shorter interval for testing
}
ctx, cancel = watchParentContext(ctx, quartz.NewReal(), ppid, process.PidExistsWithContext, checkParentInterval)
defer cancel()
}
// Prevent unnecessary logs from the stdlib from messing up the TTY.
// See: https://github.com/coder/coder/issues/13144
log.SetOutput(io.Discard)
@@ -775,6 +797,12 @@ func (r *RootCmd) ssh() *serpent.Command {
Value: serpent.BoolOf(&forceNewTunnel),
Hidden: true,
},
{
Flag: "test.force-ppid",
Description: "Override the parent process ID to simulate a different parent process. ONLY USE THIS IN TESTS.",
Value: serpent.Int64Of(&testForcePPID),
Hidden: true,
},
sshDisableAutostartOption(serpent.BoolOf(&disableAutostart)),
}
return cmd
@@ -1662,3 +1690,33 @@ func normalizeWorkspaceInput(input string) string {
return input // Fallback
}
}
// watchParentContext returns a context that is canceled when the parent process
// dies. It polls using the provided clock and checks if the parent is alive
// using the provided pidExists function.
func watchParentContext(ctx context.Context, clock quartz.Clock, originalPPID int32, pidExists func(context.Context, int32) (bool, error), interval time.Duration) (context.Context, context.CancelFunc) {
ctx, cancel := context.WithCancel(ctx) // intentionally shadowed
go func() {
ticker := clock.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
alive, err := pidExists(ctx, originalPPID)
// If we get an error checking the parent process (e.g., permission
// denied, the process is in an unknown state), we assume the parent
// is still alive to avoid disrupting the SSH connection. We only
// cancel when we definitively know the parent is gone (alive=false, err=nil).
if !alive && err == nil {
cancel()
return
}
}
}
}()
return ctx, cancel
}

View File

@@ -312,6 +312,102 @@ type fakeCloser struct {
err error
}
func TestWatchParentContext(t *testing.T) {
t.Parallel()
t.Run("CancelsWhenParentDies", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
parentAlive := true
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return parentAlive, nil
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we simulate parent death and advance the clock
parentAlive = false
mClock.AdvanceNext()
// Then: The context should be canceled
_ = testutil.TryReceive(ctx, t, childCtx.Done())
})
t.Run("DoesNotCancelWhenParentAlive", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil // Parent always alive
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance the clock several times with the parent alive
for range 3 {
mClock.AdvanceNext()
}
// Then: context should not be canceled
require.NoError(t, childCtx.Err())
})
t.Run("RespectsParentContext", func(t *testing.T) {
t.Parallel()
ctx, cancelParent := context.WithCancel(context.Background())
mClock := quartz.NewMock(t)
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil
}, testutil.WaitShort)
defer cancel()
// When: we cancel the parent context
cancelParent()
// Then: The context should be canceled
require.ErrorIs(t, childCtx.Err(), context.Canceled)
})
t.Run("DoesNotCancelOnError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
// Simulate an error checking parent status (e.g., permission denied).
// We should not cancel the context in this case to avoid disrupting
// the SSH connection.
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return false, xerrors.New("permission denied")
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance clock several times
for range 3 {
mClock.AdvanceNext()
}
// Context should NOT be canceled since we got an error (not a definitive "not alive")
require.NoError(t, childCtx.Err(), "context was canceled even though pidExists returned an error")
})
}
func (c *fakeCloser) Close() error {
*c.closes = append(*c.closes, c)
return c.err

View File

@@ -1122,6 +1122,107 @@ func TestSSH(t *testing.T) {
}
})
// This test ensures that the SSH session exits when the parent process dies.
t.Run("StdioExitOnParentDeath", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitSuperLong)
defer cancel()
// sleepStart -> agentReady -> sessionStarted -> sleepKill -> sleepDone -> cmdDone
sleepStart := make(chan int)
agentReady := make(chan struct{})
sessionStarted := make(chan struct{})
sleepKill := make(chan struct{})
sleepDone := make(chan struct{})
// Start a sleep process which we will pretend is the parent.
go func() {
sleepCmd := exec.Command("sleep", "infinity")
if !assert.NoError(t, sleepCmd.Start(), "failed to start sleep command") {
return
}
sleepStart <- sleepCmd.Process.Pid
defer close(sleepDone)
<-sleepKill
_ = sleepCmd.Process.Kill()
_ = sleepCmd.Wait()
}()
client, workspace, agentToken := setupWorkspaceForAgent(t)
go func() {
defer close(agentReady)
_ = agenttest.New(t, client.URL, agentToken)
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).WaitFor(coderdtest.AgentsReady)
}()
clientOutput, clientInput := io.Pipe()
serverOutput, serverInput := io.Pipe()
defer func() {
for _, c := range []io.Closer{clientOutput, clientInput, serverOutput, serverInput} {
_ = c.Close()
}
}()
// Start a connection to the agent once it's ready
go func() {
<-agentReady
conn, channels, requests, err := ssh.NewClientConn(&testutil.ReaderWriterConn{
Reader: serverOutput,
Writer: clientInput,
}, "", &ssh.ClientConfig{
// #nosec
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
})
if !assert.NoError(t, err, "failed to create SSH client connection") {
return
}
defer conn.Close()
sshClient := ssh.NewClient(conn, channels, requests)
defer sshClient.Close()
session, err := sshClient.NewSession()
if !assert.NoError(t, err, "failed to create SSH session") {
return
}
close(sessionStarted)
<-sleepDone
// Ref: https://github.com/coder/internal/issues/1289
// This may return either a nil error or io.EOF.
// There is an inherent race here:
// 1. Sleep process is killed -> sleepDone is closed.
// 2. watchParentContext detects parent death, cancels context,
// causing SSH session teardown.
// 3. We receive from sleepDone and attempt to call session.Close()
// Now either:
// a. Session teardown completes before we call Close(), resulting in io.EOF
// b. We call Close() first, resulting in a nil error.
_ = session.Close()
}()
// Wait for our "parent" process to start
sleepPid := testutil.RequireReceive(ctx, t, sleepStart)
// Wait for the agent to be ready
testutil.SoftTryReceive(ctx, t, agentReady)
inv, root := clitest.New(t, "ssh", "--stdio", workspace.Name, "--test.force-ppid", fmt.Sprintf("%d", sleepPid))
clitest.SetupConfig(t, client, root)
inv.Stdin = clientOutput
inv.Stdout = serverInput
inv.Stderr = io.Discard
// Start the command
clitest.Start(t, inv.WithContext(ctx))
// Wait for a session to be established
testutil.SoftTryReceive(ctx, t, sessionStarted)
// Now kill the fake "parent"
close(sleepKill)
// The sleep process should exit
testutil.SoftTryReceive(ctx, t, sleepDone)
// And then the command should exit. This is tracked by clitest.Start.
})
t.Run("ForwardAgent", func(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("Test not supported on windows")


@@ -367,7 +367,9 @@ func TestStartAutoUpdate(t *testing.T) {
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version1.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version1.ID)
workspace := coderdtest.CreateWorkspace(t, member, template.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
@@ -379,6 +381,7 @@ func TestStartAutoUpdate(t *testing.T) {
coderdtest.MustTransitionWorkspace(t, member, workspace.ID, codersdk.WorkspaceTransitionStart, codersdk.WorkspaceTransitionStop)
}
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(stringRichParameters), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)


@@ -7,6 +7,7 @@ import (
"encoding/base64"
"encoding/json"
"fmt"
"net/http"
"net/url"
"os"
"path/filepath"
@@ -44,13 +45,18 @@ var supportBundleBlurb = cliui.Bold("This will collect the following information
` - Coder deployment version
- Coder deployment Configuration (sanitized), including enabled experiments
- Coder deployment health snapshot
- Coder deployment stats (aggregated workspace/session metrics)
- Entitlements (if available)
- Health settings (dismissed healthchecks)
- Coder deployment Network troubleshooting information
- Workspace list accessible to the user (sanitized)
- Workspace configuration, parameters, and build logs
- Template version and source code for the given workspace
- Agent details (with environment variables sanitized)
- Agent network diagnostics
- Agent logs
- License status
- pprof profiling data (if --pprof is enabled)
` + cliui.Bold("Note: ") +
cliui.Wrap("While we try to sanitize sensitive data from support bundles, we cannot guarantee that they do not contain information that you or your organization may consider sensitive.\n") +
cliui.Bold("Please confirm that you will:\n") +
@@ -61,6 +67,9 @@ var supportBundleBlurb = cliui.Bold("This will collect the following information
func (r *RootCmd) supportBundle() *serpent.Command {
var outputPath string
var coderURLOverride string
var workspacesTotalCap64 int64 = 10
var templateName string
var pprof bool
cmd := &serpent.Command{
Use: "bundle <workspace> [<agent>]",
Short: "Generate a support bundle to troubleshoot issues connecting to a workspace.",
@@ -121,8 +130,9 @@ func (r *RootCmd) supportBundle() *serpent.Command {
}
var (
wsID uuid.UUID
agtID uuid.UUID
wsID uuid.UUID
agtID uuid.UUID
templateID uuid.UUID
)
if len(inv.Args) == 0 {
@@ -155,6 +165,16 @@ func (r *RootCmd) supportBundle() *serpent.Command {
}
}
// Resolve template by name if provided (captures active version)
// Fallback: if canonical name lookup fails, match DisplayName (case-insensitive).
if templateName != "" {
id, err := resolveTemplateID(inv.Context(), client, templateName)
if err != nil {
return err
}
templateID = id
}
if outputPath == "" {
cwd, err := filepath.Abs(".")
if err != nil {
@@ -176,12 +196,25 @@ func (r *RootCmd) supportBundle() *serpent.Command {
if r.verbose {
clientLog.AppendSinks(sloghuman.Sink(inv.Stderr))
}
if pprof {
_, _ = fmt.Fprintln(inv.Stderr, "pprof data collection will take approximately 30 seconds...")
}
// Bypass rate limiting for support bundle collection since it makes many API calls.
client.HTTPClient.Transport = &codersdk.HeaderTransport{
Transport: client.HTTPClient.Transport,
Header: http.Header{codersdk.BypassRatelimitHeader: {"true"}},
}
deps := support.Deps{
Client: client,
// Support adds a sink so we don't need to supply one ourselves.
Log: clientLog,
WorkspaceID: wsID,
AgentID: agtID,
Log: clientLog,
WorkspaceID: wsID,
AgentID: agtID,
WorkspacesTotalCap: int(workspacesTotalCap64),
TemplateID: templateID,
CollectPprof: pprof,
}
bun, err := support.Run(inv.Context(), &deps)
@@ -217,11 +250,102 @@ func (r *RootCmd) supportBundle() *serpent.Command {
Description: "Override the URL to your Coder deployment. This may be useful, for example, if you need to troubleshoot a specific Coder replica.",
Value: serpent.StringOf(&coderURLOverride),
},
{
Flag: "workspaces-total-cap",
Env: "CODER_SUPPORT_BUNDLE_WORKSPACES_TOTAL_CAP",
Description: "Maximum number of workspaces to include in the support bundle. Set to 0 or a negative value to disable the cap. Defaults to 10.",
Value: serpent.Int64Of(&workspacesTotalCap64),
},
{
Flag: "template",
Env: "CODER_SUPPORT_BUNDLE_TEMPLATE",
Description: "Template name to include in the support bundle. Use org_name/template_name if the template name is reused across multiple organizations.",
Value: serpent.StringOf(&templateName),
},
{
Flag: "pprof",
Env: "CODER_SUPPORT_BUNDLE_PPROF",
Description: "Collect pprof profiling data from the Coder server and agent. Requires Coder server version 2.28.0 or newer.",
Value: serpent.BoolOf(&pprof),
},
}
return cmd
}
// resolveTemplateID resolves a template to its ID, supporting:
// - org/name form
// - slug or display name match (case-insensitive) across all memberships
func resolveTemplateID(ctx context.Context, client *codersdk.Client, templateArg string) (uuid.UUID, error) {
orgPart := ""
namePart := templateArg
if slash := strings.IndexByte(templateArg, '/'); slash > 0 && slash < len(templateArg)-1 {
orgPart = templateArg[:slash]
namePart = templateArg[slash+1:]
}
resolveInOrg := func(orgID uuid.UUID) (codersdk.Template, bool, error) {
if t, err := client.TemplateByName(ctx, orgID, namePart); err == nil {
return t, true, nil
}
tpls, err := client.TemplatesByOrganization(ctx, orgID)
if err != nil {
return codersdk.Template{}, false, nil
}
for _, t := range tpls {
if strings.EqualFold(t.Name, namePart) || strings.EqualFold(t.DisplayName, namePart) {
return t, true, nil
}
}
return codersdk.Template{}, false, nil
}
if orgPart != "" {
org, err := client.OrganizationByName(ctx, orgPart)
if err != nil {
return uuid.Nil, xerrors.Errorf("get organization %q: %w", orgPart, err)
}
t, found, err := resolveInOrg(org.ID)
if err != nil {
return uuid.Nil, err
}
if !found {
return uuid.Nil, xerrors.Errorf("template %q not found in organization %q", namePart, orgPart)
}
return t.ID, nil
}
orgs, err := client.OrganizationsByUser(ctx, codersdk.Me)
if err != nil {
return uuid.Nil, xerrors.Errorf("get organizations: %w", err)
}
var (
foundTpl codersdk.Template
foundOrgs []string
)
for _, org := range orgs {
if t, found, err := resolveInOrg(org.ID); err == nil && found {
if len(foundOrgs) == 0 {
foundTpl = t
}
foundOrgs = append(foundOrgs, org.Name)
}
}
switch len(foundOrgs) {
case 0:
return uuid.Nil, xerrors.Errorf("template %q not found in your organizations", namePart)
case 1:
return foundTpl.ID, nil
default:
return uuid.Nil, xerrors.Errorf(
"template %q found in multiple organizations (%s); use --template \"<org_name>/%s\" to target the desired template",
namePart,
strings.Join(foundOrgs, ", "),
namePart,
)
}
}
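The org/name parsing at the top of resolveTemplateID can be exercised in isolation. A minimal sketch — `splitTemplateArg` is a hypothetical helper name, not part of the source:

```go
package main

import (
	"fmt"
	"strings"
)

// splitTemplateArg mirrors the parsing above: a slash that is neither
// the first nor the last byte splits the argument into organization
// and template name; otherwise the whole argument is the name.
func splitTemplateArg(arg string) (org, name string) {
	if slash := strings.IndexByte(arg, '/'); slash > 0 && slash < len(arg)-1 {
		return arg[:slash], arg[slash+1:]
	}
	return "", arg
}

func main() {
	for _, arg := range []string{"acme/base", "base", "/base", "acme/"} {
		org, name := splitTemplateArg(arg)
		fmt.Printf("%q -> org=%q name=%q\n", arg, org, name)
	}
}
```

Note that a leading or trailing slash does not split, so arguments like "/base" fall through to the cross-organization name lookup.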
// summarizeBundle makes a best-effort attempt to write a short summary
// of the support bundle to the user's terminal.
func summarizeBundle(inv *serpent.Invocation, bun *support.Bundle) {
@@ -283,6 +407,10 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
"deployment/config.json": src.Deployment.Config,
"deployment/experiments.json": src.Deployment.Experiments,
"deployment/health.json": src.Deployment.HealthReport,
"deployment/stats.json": src.Deployment.Stats,
"deployment/entitlements.json": src.Deployment.Entitlements,
"deployment/health_settings.json": src.Deployment.HealthSettings,
"deployment/workspaces.json": src.Deployment.Workspaces,
"network/connection_info.json": src.Network.ConnectionInfo,
"network/netcheck.json": src.Network.Netcheck,
"network/interfaces.json": src.Network.Interfaces,
@@ -302,6 +430,49 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
}
}
// Include named template artifacts (if requested)
if src.NamedTemplate.Template.ID != uuid.Nil {
name := src.NamedTemplate.Template.Name
// JSON files
for k, v := range map[string]any{
"templates/" + name + "/template.json": src.NamedTemplate.Template,
"templates/" + name + "/template_version.json": src.NamedTemplate.TemplateVersion,
} {
f, err := dest.Create(k)
if err != nil {
return xerrors.Errorf("create file %q in archive: %w", k, err)
}
enc := json.NewEncoder(f)
enc.SetIndent("", " ")
if err := enc.Encode(v); err != nil {
return xerrors.Errorf("write json to %q: %w", k, err)
}
}
// Binary template file (zip)
if namedZipBytes, err := base64.StdEncoding.DecodeString(src.NamedTemplate.TemplateFileBase64); err == nil {
k := "templates/" + name + "/template_file.zip"
f, err := dest.Create(k)
if err != nil {
return xerrors.Errorf("create file %q in archive: %w", k, err)
}
if _, err := f.Write(namedZipBytes); err != nil {
return xerrors.Errorf("write file %q in archive: %w", k, err)
}
}
}
var buildInfoRef string
if src.Deployment.BuildInfo != nil {
if raw, err := json.Marshal(src.Deployment.BuildInfo); err == nil {
buildInfoRef = base64.StdEncoding.EncodeToString(raw)
}
}
tailnetHTML := src.Network.TailnetDebug
if buildInfoRef != "" {
tailnetHTML += "\n<!-- trace " + buildInfoRef + " -->"
}
templateVersionBytes, err := base64.StdEncoding.DecodeString(src.Workspace.TemplateFileBase64)
if err != nil {
return xerrors.Errorf("decode template zip from base64: %w", err)
@@ -319,10 +490,11 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
"agent/client_magicsock.html": string(src.Agent.ClientMagicsockHTML),
"agent/startup_logs.txt": humanizeAgentLogs(src.Agent.StartupLogs),
"agent/prometheus.txt": string(src.Agent.Prometheus),
"deployment/prometheus.txt": string(src.Deployment.Prometheus),
"cli_logs.txt": string(src.CLILogs),
"logs.txt": strings.Join(src.Logs, "\n"),
"network/coordinator_debug.html": src.Network.CoordinatorDebug,
"network/tailnet_debug.html": src.Network.TailnetDebug,
"network/tailnet_debug.html": tailnetHTML,
"workspace/build_logs.txt": humanizeBuildLogs(src.Workspace.BuildLogs),
"workspace/template_file.zip": string(templateVersionBytes),
"license-status.txt": licenseStatus,
@@ -335,12 +507,89 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
return xerrors.Errorf("write file %q in archive: %w", k, err)
}
}
// Write pprof binary data
if err := writePprofData(src.Pprof, dest); err != nil {
return xerrors.Errorf("write pprof data: %w", err)
}
if err := dest.Close(); err != nil {
return xerrors.Errorf("close zip file: %w", err)
}
return nil
}
func writePprofData(pprof support.Pprof, dest *zip.Writer) error {
// Write server pprof data directly to pprof directory
if pprof.Server != nil {
if err := writePprofCollection("pprof", pprof.Server, dest); err != nil {
return xerrors.Errorf("write server pprof data: %w", err)
}
}
// Write agent pprof data
if pprof.Agent != nil {
if err := writePprofCollection("pprof/agent", pprof.Agent, dest); err != nil {
return xerrors.Errorf("write agent pprof data: %w", err)
}
}
return nil
}
func writePprofCollection(basePath string, collection *support.PprofCollection, dest *zip.Writer) error {
// Map each pprof payload to its filename in the archive
files := map[string][]byte{
"allocs.prof.gz": collection.Allocs,
"heap.prof.gz": collection.Heap,
"profile.prof.gz": collection.Profile,
"block.prof.gz": collection.Block,
"mutex.prof.gz": collection.Mutex,
"goroutine.prof.gz": collection.Goroutine,
"threadcreate.prof.gz": collection.Threadcreate,
"trace.gz": collection.Trace,
}
// Write binary pprof files
for filename, data := range files {
if len(data) > 0 {
filePath := basePath + "/" + filename
f, err := dest.Create(filePath)
if err != nil {
return xerrors.Errorf("create pprof file %q: %w", filePath, err)
}
if _, err := f.Write(data); err != nil {
return xerrors.Errorf("write pprof file %q: %w", filePath, err)
}
}
}
// Write cmdline as text file
if collection.Cmdline != "" {
filePath := basePath + "/cmdline.txt"
f, err := dest.Create(filePath)
if err != nil {
return xerrors.Errorf("create cmdline file %q: %w", filePath, err)
}
if _, err := f.Write([]byte(collection.Cmdline)); err != nil {
return xerrors.Errorf("write cmdline file %q: %w", filePath, err)
}
}
if collection.Symbol != "" {
filePath := basePath + "/symbol.txt"
f, err := dest.Create(filePath)
if err != nil {
return xerrors.Errorf("create symbol file %q: %w", filePath, err)
}
if _, err := f.Write([]byte(collection.Symbol)); err != nil {
return xerrors.Errorf("write symbol file %q: %w", filePath, err)
}
}
return nil
}
func humanizeAgentLogs(ls []codersdk.WorkspaceAgentLog) string {
var buf bytes.Buffer
tw := tabwriter.NewWriter(&buf, 0, 2, 1, ' ', 0)


@@ -46,6 +46,8 @@ func TestSupportBundle(t *testing.T) {
// Support bundle tests can share a single coderdtest instance.
var dc codersdk.DeploymentConfig
dc.Values = coderdtest.DeploymentValues(t)
dc.Values.Prometheus.Enable = true
secretValue := uuid.NewString()
seedSecretDeploymentOptions(t, &dc, secretValue)
client, closer, api := coderdtest.NewWithAPI(t, &coderdtest.Options{
@@ -203,6 +205,10 @@ func assertBundleContents(t *testing.T, path string, wantWorkspace bool, wantAge
var v codersdk.DeploymentConfig
decodeJSONFromZip(t, f, &v)
require.NotEmpty(t, v, "deployment config should not be empty")
case "deployment/entitlements.json":
var v codersdk.Entitlements
decodeJSONFromZip(t, f, &v)
require.NotNil(t, v, "entitlements should not be nil")
case "deployment/experiments.json":
var v codersdk.Experiments
decodeJSONFromZip(t, f, &v)
@@ -211,6 +217,22 @@ func assertBundleContents(t *testing.T, path string, wantWorkspace bool, wantAge
var v healthsdk.HealthcheckReport
decodeJSONFromZip(t, f, &v)
require.NotEmpty(t, v, "health report should not be empty")
case "deployment/health_settings.json":
var v healthsdk.HealthSettings
decodeJSONFromZip(t, f, &v)
require.NotEmpty(t, v, "health settings should not be empty")
case "deployment/stats.json":
var v codersdk.DeploymentStats
decodeJSONFromZip(t, f, &v)
require.NotNil(t, v, "deployment stats should not be nil")
case "deployment/workspaces.json":
var v codersdk.Workspace
decodeJSONFromZip(t, f, &v)
require.NotNil(t, v, "deployment workspaces should not be nil")
case "deployment/prometheus.txt":
bs := readBytesFromZip(t, f)
require.NotEmpty(t, bs, "prometheus metrics should not be empty")
require.Contains(t, string(bs), "go_goroutines", "prometheus metrics should contain go runtime metrics")
case "network/connection_info.json":
var v workspacesdk.AgentConnectionInfo
decodeJSONFromZip(t, f, &v)


@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -55,7 +55,7 @@ OPTIONS:
configured in the workspace template is used.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -49,8 +49,11 @@ OPTIONS:
--template-version string, $CODER_TEMPLATE_VERSION
Specify a template version name.
--use-parameter-defaults bool, $CODER_WORKSPACE_USE_PARAMETER_DEFAULTS
Automatically accept parameter defaults when no value is provided.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -18,7 +18,7 @@ OPTIONS:
resources.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -24,7 +24,7 @@ OPTIONS:
empty, will use $HOME.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -9,6 +9,9 @@ USAGE:
macOS and Windows and a plain text file on Linux. Use the --use-keyring flag
or CODER_USE_KEYRING environment variable to change the storage mechanism.
SUBCOMMANDS:
token Print the current session token
OPTIONS:
--first-user-email string, $CODER_FIRST_USER_EMAIL
Specifies an email address to use if creating the first user for the


@@ -0,0 +1,11 @@
coder v0.0.0-devel
USAGE:
coder login token
Print the current session token
Print the session token for use in scripts and automation.
———
Run `coder --help` for a list of global options.


@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -18,7 +18,7 @@ OPTIONS:
Reads stdin for the json role definition to upload.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -23,7 +23,7 @@ OPTIONS:
Reads stdin for the json role definition to upload.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -15,6 +15,7 @@ SUBCOMMANDS:
memberships from an IdP.
role-sync Role sync settings to sync organization roles from an
IdP.
workspace-sharing Workspace sharing settings for the organization.
———
Run `coder --help` for a list of global options.


@@ -15,6 +15,7 @@ SUBCOMMANDS:
memberships from an IdP.
role-sync Role sync settings to sync organization roles from an
IdP.
workspace-sharing Workspace sharing settings for the organization.
———
Run `coder --help` for a list of global options.


@@ -15,6 +15,7 @@ SUBCOMMANDS:
memberships from an IdP.
role-sync Role sync settings to sync organization roles from an
IdP.
workspace-sharing Workspace sharing settings for the organization.
———
Run `coder --help` for a list of global options.


@@ -15,6 +15,7 @@ SUBCOMMANDS:
memberships from an IdP.
role-sync Role sync settings to sync organization roles from an
IdP.
workspace-sharing Workspace sharing settings for the organization.
———
Run `coder --help` for a list of global options.


@@ -7,7 +7,7 @@
"last_seen_at": "====[timestamp]=====",
"name": "test-daemon",
"version": "v0.0.0-devel",
"api_version": "1.14",
"api_version": "1.15",
"provisioners": [
"echo"
],


@@ -13,7 +13,7 @@ OPTIONS:
services it's registered with.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -39,7 +39,7 @@ OPTIONS:
pairs for the parameters.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -15,9 +15,11 @@ SUBCOMMANDS:
OPTIONS:
--allow-workspace-renames bool, $CODER_ALLOW_WORKSPACE_RENAMES (default: false)
DEPRECATED: Allow users to rename their workspaces. Use only for
temporary compatibility reasons, this will be removed in a future
release.
Allow users to rename their workspaces. WARNING: Renaming a workspace
can cause Terraform resources that depend on the workspace name to be
destroyed and recreated, potentially causing data loss. Only enable
this if your templates do not use workspace names in resource
identifiers, or if you understand the risks.
--cache-dir string, $CODER_CACHE_DIRECTORY (default: [cache dir])
The directory to cache temporary files. If unspecified and
@@ -109,11 +111,18 @@ AI BRIDGE OPTIONS:
The access key secret to use with the access key to authenticate
against the AWS Bedrock API.
--aibridge-bedrock-base-url string, $CODER_AIBRIDGE_BEDROCK_BASE_URL
The base URL to use for the AWS Bedrock API. Use this setting to
specify an exact URL to use. Takes precedence over
CODER_AIBRIDGE_BEDROCK_REGION.
--aibridge-bedrock-model string, $CODER_AIBRIDGE_BEDROCK_MODEL (default: global.anthropic.claude-sonnet-4-5-20250929-v1:0)
The model to use when making requests to the AWS Bedrock API.
--aibridge-bedrock-region string, $CODER_AIBRIDGE_BEDROCK_REGION
The AWS Bedrock API region.
The AWS Bedrock API region to use. Constructs a base URL to use for
the AWS Bedrock API in the form of
'https://bedrock-runtime.<region>.amazonaws.com'.
--aibridge-bedrock-small-fastmodel string, $CODER_AIBRIDGE_BEDROCK_SMALL_FAST_MODEL (default: global.anthropic.claude-haiku-4-5-20251001-v1:0)
The small fast model to use when making requests to the AWS Bedrock
@@ -121,6 +130,10 @@ AI BRIDGE OPTIONS:
See
https://docs.claude.com/en/docs/claude-code/settings#environment-variables.
--aibridge-circuit-breaker-enabled bool, $CODER_AIBRIDGE_CIRCUIT_BREAKER_ENABLED (default: false)
Enable the circuit breaker to protect against cascading failures from
upstream AI provider rate limits (429, 503, 529 overloaded).
--aibridge-retention duration, $CODER_AIBRIDGE_RETENTION (default: 60d)
Length of time to retain data such as interceptions and all related
records (token, prompt, tool use).
@@ -147,6 +160,18 @@ AI BRIDGE OPTIONS:
Maximum number of AI Bridge requests per second per replica. Set to 0
to disable (unlimited).
--aibridge-send-actor-headers bool, $CODER_AIBRIDGE_SEND_ACTOR_HEADERS (default: false)
Once enabled, extra headers will be added to upstream requests to
identify the user (actor) making requests to AI Bridge. This is only
needed if you are using a proxy between AI Bridge and an upstream AI
provider. This will send X-Ai-Bridge-Actor-Id (the ID of the user
making the request) and X-Ai-Bridge-Actor-Metadata-Username (their
username).
--aibridge-structured-logging bool, $CODER_AIBRIDGE_STRUCTURED_LOGGING (default: false)
Emit structured logs for AI Bridge interception records. Use this for
exporting these records to external SIEM or observability systems.
AI BRIDGE PROXY OPTIONS:
--aibridge-proxy-cert-file string, $CODER_AIBRIDGE_PROXY_CERT_FILE
Path to the CA certificate file for AI Bridge Proxy.
@@ -161,6 +186,17 @@ AI BRIDGE PROXY OPTIONS:
--aibridge-proxy-listen-addr string, $CODER_AIBRIDGE_PROXY_LISTEN_ADDR (default: :8888)
The address the AI Bridge Proxy will listen on.
--aibridge-proxy-upstream string, $CODER_AIBRIDGE_PROXY_UPSTREAM
URL of an upstream HTTP proxy to chain tunneled (non-allowlisted)
requests through. Format: http://[user:pass@]host:port or
https://[user:pass@]host:port.
--aibridge-proxy-upstream-ca string, $CODER_AIBRIDGE_PROXY_UPSTREAM_CA
Path to a PEM-encoded CA certificate to trust for the upstream proxy's
TLS connection. Only needed for HTTPS upstream proxies with
certificates not trusted by the system. If not provided, the system
certificate pool is used.
CLIENT OPTIONS:
These options change the behavior of how clients interact with the Coder.
Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.


@@ -42,7 +42,7 @@ OPTIONS:
pairs for the parameters.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -14,12 +14,25 @@ OPTIONS:
File path for writing the generated support bundle. Defaults to
coder-support-$(date +%s).zip.
--pprof bool, $CODER_SUPPORT_BUNDLE_PPROF
Collect pprof profiling data from the Coder server and agent. Requires
Coder server version 2.28.0 or newer.
--template string, $CODER_SUPPORT_BUNDLE_TEMPLATE
Template name to include in the support bundle. Use
org_name/template_name if the template name is reused across multiple
organizations.
--url-override string, $CODER_SUPPORT_BUNDLE_URL_OVERRIDE
Override the URL to your Coder deployment. This may be useful, for
example, if you need to troubleshoot a specific Coder replica.
--workspaces-total-cap int, $CODER_SUPPORT_BUNDLE_WORKSPACES_TOTAL_CAP
Maximum number of workspaces to include in the support bundle. Set to
0 or a negative value to disable the cap. Defaults to 10.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -21,7 +21,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -14,7 +14,7 @@ OPTIONS:
versions are archived.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -68,7 +68,7 @@ OPTIONS:
Specify a file path with values for Terraform-managed variables.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -12,7 +12,7 @@ OPTIONS:
Select which organization (uuid or name) to use.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -91,7 +91,7 @@ OPTIONS:
for more details.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -18,7 +18,7 @@ OPTIONS:
the template version to pull.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
--zip bool
Output the template as a zip archive to stdout.


@@ -48,7 +48,7 @@ OPTIONS:
Specify a file path with values for Terraform-managed variables.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -11,7 +11,7 @@ OPTIONS:
Select which organization (uuid or name) to use.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -11,7 +11,7 @@ OPTIONS:
Select which organization (uuid or name) to use.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -11,7 +11,7 @@ OPTIONS:
the user may have.
-y, --yes bool
Bypass prompts.
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.


@@ -575,8 +575,10 @@ userQuietHoursSchedule:
# change their quiet hours schedule and the site default is always used.
# (default: true, type: bool)
allowCustomQuietHours: true
# DEPRECATED: Allow users to rename their workspaces. Use only for temporary
# compatibility reasons, this will be removed in a future release.
# Allow users to rename their workspaces. WARNING: Renaming a workspace can cause
# Terraform resources that depend on the workspace name to be destroyed and
# recreated, potentially causing data loss. Only enable this if your templates do
# not use workspace names in resource identifiers, or if you understand the risks.
# (default: false, type: bool)
allowWorkspaceRenames: false
# Configure how emails are sent.
@@ -746,7 +748,12 @@ aibridge:
# The base URL of the Anthropic API.
# (default: https://api.anthropic.com/, type: string)
anthropic_base_url: https://api.anthropic.com/
# The AWS Bedrock API region.
# The base URL to use for the AWS Bedrock API. Use this setting to specify an
# exact URL to use. Takes precedence over CODER_AIBRIDGE_BEDROCK_REGION.
# (default: <unset>, type: string)
bedrock_base_url: ""
# The AWS Bedrock API region to use. Constructs a base URL to use for the AWS
# Bedrock API in the form of 'https://bedrock-runtime.<region>.amazonaws.com'.
# (default: <unset>, type: string)
bedrock_region: ""
# The model to use when making requests to the AWS Bedrock API.
@@ -768,11 +775,39 @@ aibridge:
# Maximum number of concurrent AI Bridge requests per replica. Set to 0 to disable
# (unlimited).
# (default: 0, type: int)
maxConcurrency: 0
max_concurrency: 0
# Maximum number of AI Bridge requests per second per replica. Set to 0 to disable
# (unlimited).
# (default: 0, type: int)
rateLimit: 0
rate_limit: 0
# Emit structured logs for AI Bridge interception records. Use this for exporting
# these records to external SIEM or observability systems.
# (default: false, type: bool)
structured_logging: false
# Once enabled, extra headers will be added to upstream requests to identify the
# user (actor) making requests to AI Bridge. This is only needed if you are using
# a proxy between AI Bridge and an upstream AI provider. This will send
# X-Ai-Bridge-Actor-Id (the ID of the user making the request) and
# X-Ai-Bridge-Actor-Metadata-Username (their username).
# (default: false, type: bool)
send_actor_headers: false
# Enable the circuit breaker to protect against cascading failures from upstream
# AI provider rate limits (429, 503, 529 overloaded).
# (default: false, type: bool)
circuit_breaker_enabled: false
# Number of consecutive failures that triggers the circuit breaker to open.
# (default: 5, type: int)
circuit_breaker_failure_threshold: 5
# Cyclic period of the closed state for clearing internal failure counts.
# (default: 10s, type: duration)
circuit_breaker_interval: 10s
# How long the circuit breaker stays open before transitioning to half-open state.
# (default: 30s, type: duration)
circuit_breaker_timeout: 30s
# Maximum number of requests allowed in half-open state before deciding to close
# or re-open the circuit.
# (default: 3, type: int)
circuit_breaker_max_requests: 3
aibridgeproxy:
# Enable the AI Bridge MITM Proxy for intercepting and decrypting AI provider
# requests.
@@ -787,13 +822,25 @@ aibridgeproxy:
# Path to the CA private key file for AI Bridge Proxy.
# (default: <unset>, type: string)
key_file: ""
# Comma-separated list of domains for which HTTPS traffic will be decrypted and
# routed through AI Bridge. Requests to other domains will be tunneled directly
# without decryption.
# (default: api.anthropic.com,api.openai.com, type: string-array)
# Comma-separated list of AI provider domains for which HTTPS traffic will be
# decrypted and routed through AI Bridge. Requests to other domains will be
# tunneled directly without decryption. Supported domains: api.anthropic.com,
# api.openai.com, api.individual.githubcopilot.com.
# (default: api.anthropic.com,api.openai.com,api.individual.githubcopilot.com,
# type: string-array)
domain_allowlist:
- api.anthropic.com
- api.openai.com
- api.individual.githubcopilot.com
# URL of an upstream HTTP proxy to chain tunneled (non-allowlisted) requests
# through. Format: http://[user:pass@]host:port or https://[user:pass@]host:port.
# (default: <unset>, type: string)
upstream_proxy: ""
# Path to a PEM-encoded CA certificate to trust for the upstream proxy's TLS
# connection. Only needed for HTTPS upstream proxies with certificates not trusted
# by the system. If not provided, the system certificate pool is used.
# (default: <unset>, type: string)
upstream_proxy_ca: ""
# Configure data retention policies for various database tables. Retention
# policies automatically purge old data to reduce database size and improve
# performance. Setting a retention duration to 0 disables automatic purging for


@@ -17,8 +17,10 @@ import (
"cdr.dev/slog/v3"
agentproto "github.com/coder/coder/v2/agent/proto"
+	"github.com/coder/coder/v2/coderd/agentapi/metadatabatcher"
"github.com/coder/coder/v2/coderd/agentapi/resourcesmonitor"
"github.com/coder/coder/v2/coderd/appearance"
+	"github.com/coder/coder/v2/coderd/boundaryusage"
"github.com/coder/coder/v2/coderd/connectionlog"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/pubsub"
@@ -65,10 +67,11 @@ type API struct {
var _ agentproto.DRPCAgentServer = &API{}
type Options struct {
-	AgentID        uuid.UUID
-	OwnerID        uuid.UUID
-	WorkspaceID    uuid.UUID
-	OrganizationID uuid.UUID
+	AgentID           uuid.UUID
+	OwnerID           uuid.UUID
+	WorkspaceID       uuid.UUID
+	OrganizationID    uuid.UUID
+	TemplateVersionID uuid.UUID
AuthenticatedCtx context.Context
Log slog.Logger
@@ -80,10 +83,12 @@ type Options struct {
DerpMapFn func() *tailcfg.DERPMap
TailnetCoordinator *atomic.Pointer[tailnet.Coordinator]
StatsReporter *workspacestats.Reporter
+	MetadataBatcher *metadatabatcher.Batcher
AppearanceFetcher *atomic.Pointer[appearance.Fetcher]
PublishWorkspaceUpdateFn func(ctx context.Context, userID uuid.UUID, event wspubsub.WorkspaceEvent)
PublishWorkspaceAgentLogsUpdateFn func(ctx context.Context, workspaceAgentID uuid.UUID, msg agentsdk.LogsNotifyMessage)
NetworkTelemetryHandler func(batch []*tailnetproto.TelemetryEvent)
+	BoundaryUsageTracker *boundaryusage.Tracker
AccessURL *url.URL
AppHostname string
@@ -178,8 +183,8 @@ func New(opts Options, workspace database.Workspace) *API {
AgentFn: api.agent,
Workspace: api.cachedWorkspaceFields,
Database: opts.Database,
-		Pubsub:   opts.Pubsub,
		Log:      opts.Log,
+		Batcher:  opts.MetadataBatcher,
}
api.LogsAPI = &LogsAPI{
@@ -221,8 +226,12 @@ func New(opts Options, workspace database.Workspace) *API {
}
api.BoundaryLogsAPI = &BoundaryLogsAPI{
-		Log:         opts.Log,
-		WorkspaceID: opts.WorkspaceID,
+		Log:                  opts.Log,
+		WorkspaceID:          opts.WorkspaceID,
+		OwnerID:              opts.OwnerID,
+		TemplateID:           workspace.TemplateID,
+		TemplateVersionID:    opts.TemplateVersionID,
+		BoundaryUsageTracker: opts.BoundaryUsageTracker,
}
// Start background cache refresh loop to handle workspace changes

Some files were not shown because too many files have changed in this diff.