Compare commits


45 Commits

Author SHA1 Message Date
Jon Ayers aefc75133a fix: use separate http.Transports for wsproxy tests 2026-02-24 23:02:37 +00:00
Jon Ayers b9181c3934 feat(wsproxy): add /debug/expvar endpoint for DERP server stats 2026-02-21 00:19:41 +00:00
Jon Ayers a90471db53 feat(monitoring): add wsproxy DERP section to Grafana dashboard
Adds a new 'Workspace Proxy - DERP' row with 6 panels:
- DERP Connections (current connections and home connections)
- DERP Client Breakdown (local, remote, total)
- DERP Throughput (bytes received/sent rate)
- DERP Packets (received/sent/forwarded rate)
- DERP Packet Drops (by reason label)
- DERP Queue Duration (average queue duration)
2026-02-20 23:44:24 +00:00
Jon Ayers cb71f5e789 feat(wsproxy): add DERP websocket throughput metrics
Add Prometheus metrics tracking active DERP websocket connections and
bytes relayed through the wsproxy:

- coder_wsproxy_derp_websocket_active_connections (gauge)
- coder_wsproxy_derp_websocket_bytes_total (counter, direction=read|write)

Implementation adds a DERPWebsocketMetrics hook struct and countingConn
wrapper in tailnet/, and a new WithWebsocketSupportAndMetrics function
that instruments the websocket connection lifecycle. The existing
WithWebsocketSupport function delegates to the new one with nil metrics.
2026-02-20 23:44:21 +00:00
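The `countingConn` idea above can be sketched as follows. This is a stdlib-only illustration: atomic counters stand in for the Prometheus counters with the `direction=read|write` label, and all names here are hypothetical rather than the actual `tailnet/` implementation.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"sync/atomic"
)

// countingConn wraps a net.Conn and tallies bytes in each direction.
// In the real change these totals would feed labeled Prometheus
// counters; plain atomics keep this sketch self-contained.
type countingConn struct {
	net.Conn
	readBytes, writeBytes *atomic.Int64
}

func (c *countingConn) Read(p []byte) (int, error) {
	n, err := c.Conn.Read(p)
	c.readBytes.Add(int64(n))
	return n, err
}

func (c *countingConn) Write(p []byte) (int, error) {
	n, err := c.Conn.Write(p)
	c.writeBytes.Add(int64(n))
	return n, err
}

// relayAndCount pushes a payload through a wrapped pipe and returns
// how many bytes the wrapper observed being read.
func relayAndCount(payload []byte) int64 {
	client, server := net.Pipe()
	var r, w atomic.Int64
	counted := &countingConn{Conn: server, readBytes: &r, writeBytes: &w}
	go func() {
		client.Write(payload)
		client.Close()
	}()
	io.Copy(io.Discard, counted) // read until EOF, counting as we go
	return r.Load()
}

func main() {
	fmt.Println(relayAndCount([]byte("hello"))) // 5
}
```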
Jon Ayers f50707bc3e feat(wsproxy): add Prometheus collector for DERP server expvar metrics
Create a prometheus.Collector that bridges the tailscale derp.Server's
expvar-based stats to Prometheus metrics with namespace coder, subsystem
wsproxy_derp. Handles counters, gauges, labeled metrics (nested
metrics.Set for drop reasons, packet types, etc.), and the average
queue duration (converted from ms to seconds).

Register the collector in the wsproxy server after derpServer creation.
2026-02-20 23:40:03 +00:00
Jeremy Ruppel 065266412a fix(site): respect meta user appearance preference as theme fallback (#22152)
Use the server-rendered meta tag value as an intermediate fallback for
theme preference, between the JS-fetched value and the default theme.
This ensures the correct theme is applied before the API response loads.

Fixes #20050
2026-02-20 16:32:49 -05:00
Jeremy Ruppel de4ff78cd1 fix(site): show when secret deployment options are configured (#22151)
Previously, when secret deployment options like CODER_OIDC_CLIENT_SECRET
were populated, the API correctly returned the "secret": "true"
annotation, but the UI did not indicate that these secrets were
configured. The UI would show "Not set" regardless of whether the secret
was set or not.

Now, the UI checks both the secret annotation and the value_source
field. When a secret is configured (value_source is set), it displays
"Set" to indicate the secret is populated. When a secret is not
configured, it displays "Not set".

Fixes #18913

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-02-20 15:42:28 -05:00
Yevhenii Shcherbina e6f0a1b2f6 docs: improve boundary docs (#22183) 2026-02-20 15:41:54 -05:00
Steven Masley e2cbf03f85 fix: ensure stopping a workspace before starting it when updating (#22201)
Dynamic parameters were not following the same code path as legacy parameters.

Closes https://github.com/coder/coder/issues/20333
2026-02-20 14:21:33 -06:00
Jakub Domeracki ceb417f8ba fix: revert "automatically set 'host-prefix-cookie' in https deployments" (#22225)
Reverts coder/coder#22224
2026-02-20 20:12:51 +01:00
Steven Masley 67044d80a0 chore: automatically set 'host-prefix-cookie' in https deployments (#22224)
The feature was never released, so this is not a breaking change
2026-02-20 17:17:50 +00:00
Paweł Banaszewski 381c55a97a chore: update AI Bridge to v1.0.5 (#22223)
Updates aibridge library to `v1.0.5`
Fixes adaptive thinking in Anthropic messages API
(https://github.com/coder/aibridge/issues/177)
2026-02-20 21:40:16 +05:00
Steven Masley b0f35316da chore!: automatically use secure cookies if using https access-url (#22198)
`--secure-auth-cookie` now automatically sources its default value from `--access-url`.

If the access URL uses HTTPS, secure is set to `true`.
To revert to the old behavior, set the value explicitly to `false`.
2026-02-20 10:33:37 -06:00
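A minimal sketch of this default resolution, assuming an explicitly set flag always wins; the function name and signature are illustrative, not the actual coder code:

```go
package main

import (
	"fmt"
	"net/url"
)

// secureCookieDefault derives the secure-auth-cookie value: an explicit
// flag takes precedence, otherwise the access URL's scheme decides.
func secureCookieDefault(accessURL string, explicit *bool) bool {
	if explicit != nil {
		return *explicit // explicitly set flag always wins
	}
	u, err := url.Parse(accessURL)
	return err == nil && u.Scheme == "https"
}

func main() {
	fmt.Println(secureCookieDefault("https://coder.example.com", nil)) // true
	f := false
	fmt.Println(secureCookieDefault("https://coder.example.com", &f)) // false: explicit override
	fmt.Println(secureCookieDefault("http://coder.example.com", nil)) // false
}
```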
Steven Masley efdaaa2c8f chore: add oidc redirect url to override access url (#21521)
If a deployment has 2 domains, overriding the oidc url allows the oidc
redirect to differ from the access_url

response to https://github.com/coder/coder/discussions/21500

**This config setting is hidden by default**
2026-02-20 09:11:01 -06:00
Steven Masley e5f64eb21d chore: optionally prefix authentication related cookies (#22148)
When the deployment option is enabled, auth cookies are prefixed with
`__Host-`
([info](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Set-Cookie)).

This is all done in a middleware that intercepts all requests and strips
the prefix on incoming request cookies.
2026-02-20 09:01:00 -06:00
Spike Curtis 1069ce6e19 feat: add support for agentsock on Windows (#22171)
relates to #21335

Adds support for the agentsock, and thus the `coder exp sync` command, on Windows. This support was previously missing.
2026-02-20 16:27:32 +04:00
Lukasz 9bbe3c6af9 chore: update trivy-action to v0.34.0 (#22216)
Update trivy-action to v0.34.0.
2026-02-20 12:27:44 +01:00
Jake Howell d700f9ebc4 fix: restore block to Managed Agents on Enterprise (#22210)
#21998 accidentally allowed `Managed Agents` usage while on an
`Enterprise` license. This was incorrect; it should work as follows
(the same as prior to #21998):

| Scenario | Before #21998 | After #21998 (bug) | After this fix |
|---|---|---|---|
| Unlicensed (AGPL) | Permitted | Permitted | Permitted |
| Licensed, no entitlement | **Blocked** | Permitted | **Blocked** |
| Licensed, explicitly disabled (limit=0) | **Blocked** | Permitted | **Blocked** |
| Licensed, entitled, under limit | Permitted | Permitted | Permitted |
| Licensed, entitled, over limit | Blocked | Permitted (advisory) | Permitted (advisory) |
| Any license, stop/delete | Permitted | Permitted | Permitted |
| Any license, non-AI build | Permitted | Permitted | Permitted |
2026-02-20 20:15:32 +11:00
Atif Ali a955de906a docs: convert a note to GFM style (#22197)
2026-02-20 13:34:35 +05:00
Jake Howell 051ed34580 feat: convert soft_limit to limit (#22048)
In relation to
[`internal#1281`](https://github.com/coder/internal/issues/1281)

Remove the `soft_limit` field from the `Feature` type and simplify
license limit handling. This change:

- Removes the `soft_limit` field from the API and SDK
- Uses the soft limit value as the single `limit` value in the UI and
API
- Simplifies warning logic to only show warnings when the limit is
exceeded
- Updates tests to reflect the new behavior
- Updates the UI to use the single limit value for display
2026-02-20 16:09:12 +11:00
Jake Howell 203899718f feat: remove agent workspaces limit (#21998)
In relation to
[`internal#1281`](https://github.com/coder/internal/issues/1281)

Managed agent workspace build limits are now advisory only. Breaching
the limit no longer blocks workspace creation — it only surfaces a
warning.

- Removed hard-limit enforcement in `checkAIBuildUsage` so AI task
builds are always permitted regardless of managed agent count.
- Updated the license warning to remove "Further managed agent builds
will be blocked." verbiage.
- Updated tests to assert builds succeed beyond the limit instead of
failing.
- Removed the "Limit" display from the `ManagedAgentsConsumption`
progress bar — the bar is now relative to the included allowance (soft
limit) only, and turns orange when usage exceeds it.

Bonus:

- De-MUI'd `LicenseBannerView` — replaced Emotion CSS and MUI `Link`
with Tailwind classes.
- Added `highlight-orange` color token to the Tailwind theme.
2026-02-20 12:56:00 +11:00
Jake Howell ccb5b83c19 feat: add animations to each <ChevronDown /> (#22068)
This pull request implements animations for each of our `<ChevronDown />`
(and a few other chevrons) so that everything is uniform with
`<Autocomplete />`.
2026-02-20 12:55:02 +11:00
Jake Howell 00d6f15e7c chore: deprecate <ChooseOne /> (#22107)
Based on previous PR reviews it appears we don't want to use these
components anymore. We previously deprecated the use of `<Stack />` in
this way in #20973 so it would be good to take the same approach here.
2026-02-20 12:54:25 +11:00
Jake Howell d23f5ea86f fix: add optimizeDeps on @emotion/* and @mui/* (#22130)
This PR stops Vite from repeatedly re-optimizing certain MUI modules
during development, which was triggering an HMR feedback loop and
crashing my dev environment on specific pages — most notably
`<LicensesSettingsPage />`.

After some digging, the culprit turned out to be:

```ts
import Paper from "@mui/material/Paper";
```

Importing components this way causes Vite to continuously re-optimize
them during HMR, which leads to the page refreshing over and over until
the dev server taps out and `504 "Outdated Optimize Dep"`'s us.

The fix ensures these modules are computed once at startup instead of
being reprocessed on every hot update. Development is now stable, and
the infinite refresh loop is gone.

I did experiment with using globs to handle this more generically, but
since they’re still early-access in this context, they ended up breaking
things 😔

In short: fewer re-optimizations, no more HMR meltdown, and a much
calmer dev experience.
2026-02-20 12:53:18 +11:00
Jake Howell e857060010 feat: upgrade to storybook@10 (#22187)
Continuation of #22186 (without `vitest` addon)

Upgrades the dependency so that we can actively make use of new
features/speed/less-dependencies. Short simple sweet and lovely 🙂
2026-02-20 12:52:35 +11:00
dependabot[bot] db343a9885 chore: bump filippo.io/edwards25519 from 1.1.0 to 1.1.1 (#22199)
Bumps
[filippo.io/edwards25519](https://github.com/FiloSottile/edwards25519)
from 1.1.0 to 1.1.1.
Commits:
- [`d1c650a`](https://github.com/FiloSottile/edwards25519/commit/d1c650afb95fad0742b98d95f2eb2cf031393abb) extra: initialize receiver in MultiScalarMult
- Full diff: [compare view](https://github.com/FiloSottile/edwards25519/compare/v1.1.0...v1.1.1)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-19 19:05:36 +00:00
Garrett Delfosse e8d6016807 fix: allow users with workspace:create for any owner to list users (#21947)
## Summary

Custom roles that can create workspaces on behalf of other users need to
be able to list users to populate the owner dropdown in the workspace
creation UI. Previously, this required a separate `user:read`
permission, causing the dropdown to fail for custom roles.

## Changes

- Modified `GetUsers` in `dbauthz` to check if the user can create
workspaces for any owner (`workspace:create` with `owner_id: *`)
- If the user has this permission, they can list all users without
needing explicit `user:read` permission
- Added tests to verify the new behavior

## Testing

- Updated mock tests to assert the new authorization check
- Added integration tests for both positive and negative cases

Fixes #18203
2026-02-19 13:04:53 -05:00
Danielle Maywood 911d734df9 fix: avoid re-using AuthInstanceID for sub agents (#22196)
Parent agents were re-using AuthInstanceID when spawning child agents.
This caused GetWorkspaceAgentByInstanceID to return the most recently
created sub agent instead of the parent when the parent tried to refetch
its own manifest.

Fix by not reusing AuthInstanceID for sub agents, and updating
GetWorkspaceAgentByInstanceID to filter them out entirely.
2026-02-19 16:56:29 +00:00
blinkagent[bot] 0f6fbe7736 chore(examples): clarify azure-linux resource lifecycle on stop vs delete (#22150)
The existing README for the Azure Linux starter template only mentioned
that the VM is ephemeral and the managed disk is persistent, but did not
explain that the resource group, virtual network, subnet, and network
interface also persist when a workspace is stopped.

This led to confusion where users expected all Azure resources to be
cleaned up on stop, when in reality only the VM is destroyed.

## Changes

- Added the persistent networking/infrastructure resources to the
resource list
- Added "What happens on stop" section explaining which resources
persist and why
- Added "What happens on delete" section confirming all resources are
cleaned up
- Moved the existing note about ephemeral tools/files into a "Workspace
restarts" subsection for clarity

These changes exactly mirror https://github.com/coder/registry/pull/713
since the registry is not yet linked to the starter templates in
`coder/coder`. Once the registry is linked, the starter templates will
pull from the registry and this duplication will no longer be necessary.

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-02-19 10:53:05 -06:00
Ehab Younes 3fcd8c6128 feat(site): show task log preview in paused and failed states (#22063)
Add a `TaskLogPreview` component that displays the last N messages of AI
chat logs when a task is paused or its build has failed. The preview
fetches log snapshots via a new `getTaskLogs` API method and renders
them in a scrollable panel with `[user]` and `[agent]` labels, colored
left borders on type transitions, and a snapshot timestamp tooltip.

The build-logs auto-scroll in `BuildingWorkspace` was simplified by
replacing the `useRef`/`useLayoutEffect` pattern with a `useCallback`
ref, and client-side message slicing was removed in favor of
server-side limits. `InfoTooltip` now accepts an optional `title` prop.
2026-02-19 14:54:59 +01:00
Danielle Maywood 02a80eac2e docs: document new terraform-managed devcontainers (#21978) 2026-02-19 11:45:04 +00:00
blinkagent[bot] c8335fdc54 docs: rename ANTHROPIC_API_KEY to ANTHROPIC_AUTH_TOKEN in Claude Code docs (#22188)
Updates the reference to `ANTHROPIC_API_KEY` in the Claude Code client
docs to `ANTHROPIC_AUTH_TOKEN`.

**File changed:**
- `docs/ai-coder/ai-bridge/clients/claude-code.md` — configuration
instructions

Created on behalf of @dannykopping

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-02-19 13:23:47 +02:00
Cian Johnston cfdbd5251a chore: add compose alternative to develop.sh (#22157)
Adds a `compose.dev.yml` intended as a pure-Docker alternative to
`develop.sh`.

---------

Co-authored-by: Steven Masley <stevenmasley@gmail.com>
2026-02-19 09:28:52 +00:00
Danielle Maywood 92a6d6c2c0 chore: remove unnecessary loop variable captures (#22180)
Since Go 1.22, the loop variable capture issue is resolved. Variables
declared by for loops are now per-iteration rather than per-loop, making
the 'v := v' pattern unnecessary.
2026-02-19 09:02:19 +00:00
Rowan Smith d9ec892b9a chore: helm - tolerations - change format from object to array (#22185)
`tolerations` is a list/array, not a map, and should be represented using
`[]` instead of `{}`.

closes #22179
2026-02-19 15:22:54 +11:00
Rowan Smith c664e4f72d chore: add active field to template versions json output (#22165)
`coder templates version list` makes a call to determine the `active`
version:

```
➜  ~ coder templates version list aws-linux-dynamic 
NAME                 CREATED AT                 CREATED BY  STATUS     ACTIVE  
infallible_feistel2  2025-10-10T10:34:02+11:00  rowansmith  Succeeded  Active  
mystifying_almeida1  2025-10-10T10:32:38+11:00  rowansmith  Succeeded      
```

but this is not carried across to the `-ojson` output, so this
PR implements that in order to support programmatic addressing.

It is added as a top-level entry. If it should be nested under
`TemplateVersion`, let me know.

```
➜  ~ ./Downloads/coder-cli-templateversions-json-active templates version list aws-linux-dynamic -ojson | jq '.[] | select(.active == true) | { active, id: .TemplateVersion.id }'      

{
  "active": true,
  "id": "38f66eae-ec63-49b7-a9d2-cdb79c379d19"
}

➜  ~ ./Downloads/coder-cli-templateversions-json-active templates version list aws-linux-dynamic -ojson |jq '.[] | select(.active == true)'
{
  "TemplateVersion": {
    "id": "38f66eae-ec63-49b7-a9d2-cdb79c379d19",
    "template_id": "1a84ce78-06a6-41ad-99e4-8ea5d9b91e89",
    "organization_id": "35f75f20-890e-4095-95f1-bb8f2ba02e79",
    "created_at": "2025-10-10T10:34:02.254357+11:00",
    "updated_at": "2025-10-10T10:34:46.594032+11:00",
    "name": "infallible_feistel2",
    "message": "Uploaded from the CLI",
    "job": {
      "id": "8afd05ca-b4be-48d5-a6b9-82dcfd12c960",
      "created_at": "2025-10-10T10:34:02.251234+11:00",
      "started_at": "2025-10-10T10:34:02.257301+11:00",
      "completed_at": "2025-10-10T10:34:46.594032+11:00",
      "status": "succeeded",
      "worker_id": "a0940ade-ecdd-47c2-98c6-f2a4e5eb0733",
      "file_id": "05fd653c-3a3f-4e5c-856b-29407732e1b1",
      "tags": {
        "owner": "",
        "scope": "organization"
      },
      "queue_position": 0,
      "queue_size": 0,
      "organization_id": "35f75f20-890e-4095-95f1-bb8f2ba02e79",
      "initiator_id": "d20c05ff-ecf3-4521-a99d-516c8befbaa6",
      "input": {
        "template_version_id": "38f66eae-ec63-49b7-a9d2-cdb79c379d19"
      },
      "type": "template_version_import",
      "metadata": {
        "template_version_name": "",
        "template_id": "00000000-0000-0000-0000-000000000000",
        "template_name": "",
        "template_display_name": "",
        "template_icon": ""
      },
      "logs_overflowed": false
    },
    "readme": "---\ndxxxxx",
    "created_by": {
      "id": "d20c05ff-ecf3-4521-a99d-516c8befbaa6",
      "username": "rowansmith",
      "name": "rowan smith"
    },
    "archived": false,
    "has_external_agent": false
  },
  "active": true
}
```
2026-02-19 09:31:12 +11:00
Yevhenii Shcherbina 385554dff8 chore: add boundary and k8s docs (#22153) 2026-02-18 13:33:22 -05:00
blinkagent[bot] fb027da8bb docs: add Antigravity IDE integration documentation (#22177)
Closes #21130

Adds documentation for Google Antigravity IDE integration, following the
same pattern as Cursor and Windsurf (dedicated page for desktop IDEs).

**Changes:**

- `docs/user-guides/workspace-access/antigravity.md` — New dedicated
page with install guide, Coder extension setup, and template
configuration example using the [Antigravity registry
module](https://registry.coder.com/modules/coder/antigravity)
- `docs/user-guides/workspace-access/index.md` — Added Antigravity IDE
section alongside Cursor and Windsurf
- `docs/manifest.json` — Added sidebar navigation entry after Windsurf

Antigravity uses the `antigravity://` protocol (added in #20873) and the
built-in `/icon/antigravity.svg` icon (added in #21068). The [registry
module](https://registry.coder.com/modules/coder/antigravity) wraps
`vscode-desktop-core` with `protocol = "antigravity"`.

Created on behalf of @matifali

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-02-18 22:06:44 +05:00
Danielle Maywood 31c1279202 feat: notify on task auto pause, manual pause and manual resume (#22050) 2026-02-18 16:30:16 +00:00
Yevhenii Shcherbina dcdca814d6 chore: fix pty-max-limit flake (#22147)
### Notes
- Closes https://github.com/coder/internal/issues/558
- I closed the previous attempt with `ptySemaphore`:
https://github.com/coder/coder/pull/21981
- We can consider implementing the retries proposed by Spike in:
https://github.com/coder/coder/pull/21981#pullrequestreview-3783200423,
if increasing the limit isn’t enough.
- I looked into Datadog — this particular test doesn’t seem very flaky
right now. It failed once in the Nightly gauntlet (3 weeks ago), but it
hasn’t failed again in the last 3 months (at least I couldn’t find any
other failures in Datadog).

## Fix PTY exhaustion flake on macOS CI

### Problem
macOS CI runners were experiencing PTY exhaustion during test runs,
causing flakes. The default PTY limit on macOS is 511, which can be
insufficient when running parallel tests.

### Solution
Added a CI step to increase the PTY limit on macOS runners from the
default 511 to the maximum allowed value of 999 before running tests.

### Changes
- Added `Increase PTY limit (macOS)` step in `.github/workflows/ci.yaml`
- Sets `kern.tty.ptmx_max=999` using `sysctl` (maximum value on our CI
runners)
- Runs only on macOS runners before the test-go-pg action
2026-02-18 08:38:35 -05:00
Danielle Maywood 873e054be0 fix(site): render username with content-primary, not white (#22172) 2026-02-18 12:48:58 +00:00
Lukasz 4c0c621f2a chore: bump bundled terraform to 1.14.5 (#22167)
Description:
This PR updates the bundled Terraform binary and related version pins
from 1.14.1 to 1.14.5 (base image, installer fallback, and CI/test
fixtures). Terraform is statically built with an embedded Go runtime.
Moving to 1.14.5 updates the embedded toolchain and is intended to
address Go stdlib CVEs reported by security scanning.

Notes:
- Change is version-only; no functional Coder logic changes.
- Backport-friendly: intended to be cherry-picked to release branches
after merge.
2026-02-18 12:18:38 +01:00
Kacper Sawicki f016d9e505 fix(coderd): add role param to agent RPC to prevent false connectivity (#22052)
## Summary

coder-logstream-kube and other tools that use the agent token to connect
to the RPC endpoint were incorrectly triggering connection monitoring,
causing false connected/disconnected timestamps on the agent. This led
to VSCode/JetBrains disconnections and incorrect dashboard status.

## Changes

Add a `role` query parameter to `/api/v2/workspaceagents/me/rpc`:
- `role=agent`: triggers connection monitoring (default for the agent
SDK)
- any other value (e.g. `logstream-kube`): skips connection monitoring
- omitted: triggers monitoring for backward compatibility with older
agents

The agent SDK now sends `role=agent` by default. A new `Role` field on
the `agentsdk.Client` allows non-agent callers to specify a different
role.

## Required follow-up

coder-logstream-kube needs to set `client.Role = "logstream-kube"`
before calling `ConnectRPC20()`. Without that change, it will still send
`role=agent` and trigger monitoring.

Fixes #21625
2026-02-18 09:44:06 +01:00
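How a client might append this query parameter to the RPC endpoint can be sketched as follows; the function name is illustrative, not the actual `agentsdk` code.

```go
package main

import (
	"fmt"
	"net/url"
)

// rpcURL appends the role query parameter to the RPC endpoint.
// An empty role leaves the URL untouched, matching the
// backward-compatible "omitted" case described above.
func rpcURL(base, role string) string {
	u, err := url.Parse(base)
	if err != nil {
		return base
	}
	q := u.Query()
	if role != "" {
		q.Set("role", role) // "agent" enables connection monitoring
	}
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(rpcURL("https://coder.example.com/api/v2/workspaceagents/me/rpc", "agent"))
	// https://coder.example.com/api/v2/workspaceagents/me/rpc?role=agent
	fmt.Println(rpcURL("https://coder.example.com/api/v2/workspaceagents/me/rpc", "logstream-kube"))
	// https://coder.example.com/api/v2/workspaceagents/me/rpc?role=logstream-kube
}
```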
Rowan Smith 1c4dd78b05 chore: add id to template version output columns (#22163)
At present it is not possible to obtain the `id` of the template version
in the table output:

```
➜  ~ coder templates version list -h                
coder v2.30.1+16408b1

USAGE:
  coder templates versions list [flags] <template>

  List all the versions of the specified template

OPTIONS:
  -O, --org string, $CODER_ORGANIZATION
          Select which organization (uuid or name) to use.

  -c, --column [name|created at|created by|status|active|archived] (default: name,created at,created by,status,active)
          Columns to display in table output.

➜  ~ coder templates version list aws-linux-dynamic 
NAME                 CREATED AT                 CREATED BY  STATUS     ACTIVE  
infallible_feistel2  2025-10-10T10:34:02+11:00  rowansmith  Succeeded  Active  
mystifying_almeida1  2025-10-10T10:32:38+11:00  rowansmith  Succeeded         
```

Adding this because it is useful when wanting to programmatically
retrieve the details of the latest template version, and `-ojson` does
not include `active` details in its output.

```
➜  Downloads ./coder-cli-templateversions-list-id templates version list -h                
coder v2.30.1-devel+bab99db9e7

USAGE:
  coder templates versions list [flags] <template>

  List all the versions of the specified template

OPTIONS:
  -O, --org string, $CODER_ORGANIZATION
          Select which organization (uuid or name) to use.

  -c, --column [id|name|created at|created by|status|active|archived] (default: name,created at,created by,status,active)
          Columns to display in table output.

      --include-archived bool
          Include archived versions in the result list.

  -o, --output table|json (default: table)
          Output format.

———
Run `coder --help` for a list of global options.

➜  Downloads ./coder-cli-templateversions-list-id templates version list aws-linux-dynamic -c id,name,'created at','created by',status,active
ID                                    NAME                 CREATED AT                 CREATED BY  STATUS     ACTIVE  
38f66eae-ec63-49b7-a9d2-cdb79c379d19  infallible_feistel2  2025-10-10T10:34:02+11:00  rowansmith  Succeeded  Active  
aa797ea5-4221-461b-80b0-90c5164f8dc0  mystifying_almeida1  2025-10-10T10:32:38+11:00  rowansmith  Succeeded
```
2026-02-18 16:47:45 +11:00
Jon Ayers e82edf1b6b chore: update Go from 1.25.6 to 1.25.7 (#22042) 2026-02-17 22:31:20 -06:00
187 changed files with 6341 additions and 2959 deletions
+1 -1
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
-default: "1.25.6"
+default: "1.25.7"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
+1 -1
@@ -7,5 +7,5 @@ runs:
- name: Install Terraform
uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
with:
-terraform_version: 1.14.1
+terraform_version: 1.14.5
terraform_wrapper: false
+8
@@ -489,6 +489,14 @@ jobs:
# macOS will output "The default interactive shell is now zsh" intermittently in CI.
touch ~/.bash_profile && echo "export BASH_SILENCE_DEPRECATION_WARNING=1" >> ~/.bash_profile
- name: Increase PTY limit (macOS)
if: runner.os == 'macOS'
shell: bash
run: |
# Increase PTY limit to avoid exhaustion during tests.
# Default is 511; 999 is the maximum value on CI runner.
sudo sysctl -w kern.tty.ptmx_max=999
- name: Test with PostgreSQL Database (Linux)
if: runner.os == 'Linux'
uses: ./.github/actions/test-go-pg
+1 -1
@@ -146,7 +146,7 @@ jobs:
echo "image=$(cat "$image_job")" >> "$GITHUB_OUTPUT"
- name: Run Trivy vulnerability scanner
-uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8
+uses: aquasecurity/trivy-action@c1824fd6edce30d7ab345a9989de00bbd46ef284 # v0.34.0
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
+3
@@ -98,3 +98,6 @@ AGENTS.local.md
# Ignore plans written by AI agents.
PLAN.md
# Ignore any dev licenses
license.txt
+10 -2
@@ -111,6 +111,12 @@ type Client interface {
ConnectRPC28(ctx context.Context) (
proto.DRPCAgentClient28, tailnetproto.DRPCTailnetClient28, error,
)
// ConnectRPC28WithRole is like ConnectRPC28 but sends an explicit
// role query parameter to the server. The workspace agent should
// use role "agent" to enable connection monitoring.
ConnectRPC28WithRole(ctx context.Context, role string) (
proto.DRPCAgentClient28, tailnetproto.DRPCTailnetClient28, error,
)
tailnet.DERPMapRewriter
agentsdk.RefreshableSessionTokenProvider
}
@@ -997,8 +1003,10 @@ func (a *agent) run() (retErr error) {
return xerrors.Errorf("refresh token: %w", err)
}
-// ConnectRPC returns the dRPC connection we use for the Agent and Tailnet v2+ APIs
-aAPI, tAPI, err := a.client.ConnectRPC28(a.hardCtx)
+// ConnectRPC returns the dRPC connection we use for the Agent and Tailnet v2+ APIs.
+// We pass role "agent" to enable connection monitoring on the server, which tracks
+// the agent's connectivity state (first_connected_at, last_connected_at, disconnected_at).
+aAPI, tAPI, err := a.client.ConnectRPC28WithRole(a.hardCtx, "agent")
if err != nil {
return err
}
+2 -103
@@ -1,37 +1,22 @@
package agentsocket_test
import (
"context"
"path/filepath"
"runtime"
"testing"
"github.com/google/uuid"
"github.com/spf13/afero"
"github.com/stretchr/testify/require"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agenttest"
agentproto "github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/tailnet"
"github.com/coder/coder/v2/tailnet/tailnettest"
"github.com/coder/coder/v2/testutil"
)
func TestServer(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("StartStop", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(t.TempDir(), "test.sock")
+socketPath := testutil.AgentSocketPath(t)
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
@@ -41,7 +26,7 @@ func TestServer(t *testing.T) {
t.Run("AlreadyStarted", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(t.TempDir(), "test.sock")
+socketPath := testutil.AgentSocketPath(t)
logger := slog.Make().Leveled(slog.LevelDebug)
server1, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
@@ -49,90 +34,4 @@ func TestServer(t *testing.T) {
_, err = agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.ErrorContains(t, err, "create socket")
})
t.Run("AutoSocketPath", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
require.NoError(t, server.Close())
})
}
-func TestServerWindowsNotSupported(t *testing.T) {
-t.Parallel()
-if runtime.GOOS != "windows" {
-t.Skip("this test only runs on Windows")
-}
-t.Run("NewServer", func(t *testing.T) {
-t.Parallel()
-socketPath := filepath.Join(t.TempDir(), "test.sock")
-logger := slog.Make().Leveled(slog.LevelDebug)
-_, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
-require.ErrorContains(t, err, "agentsocket is not supported on Windows")
-})
-t.Run("NewClient", func(t *testing.T) {
-t.Parallel()
-_, err := agentsocket.NewClient(context.Background(), agentsocket.WithPath("test.sock"))
-require.ErrorContains(t, err, "agentsocket is not supported on Windows")
-})
-}
-func TestAgentInitializesOnWindowsWithoutSocketServer(t *testing.T) {
-t.Parallel()
-if runtime.GOOS != "windows" {
-t.Skip("this test only runs on Windows")
-}
-ctx := testutil.Context(t, testutil.WaitShort)
-logger := testutil.Logger(t).Named("agent")
-derpMap, _ := tailnettest.RunDERPAndSTUN(t)
-coordinator := tailnet.NewCoordinator(logger)
-t.Cleanup(func() {
-_ = coordinator.Close()
-})
-statsCh := make(chan *agentproto.Stats, 50)
-agentID := uuid.New()
-manifest := agentsdk.Manifest{
-AgentID: agentID,
-AgentName: "test-agent",
-WorkspaceName: "test-workspace",
-OwnerName: "test-user",
-WorkspaceID: uuid.New(),
-DERPMap: derpMap,
-}
-client := agenttest.NewClient(t, logger.Named("agenttest"), agentID, manifest, statsCh, coordinator)
-t.Cleanup(client.Close)
-options := agent.Options{
-Client: client,
-Filesystem: afero.NewMemMapFs(),
-Logger: logger.Named("agent"),
-ReconnectingPTYTimeout: testutil.WaitShort,
-EnvironmentVariables: map[string]string{},
-SocketPath: "",
-}
-agnt := agent.New(options)
-t.Cleanup(func() {
-_ = agnt.Close()
-})
-startup := testutil.TryReceive(ctx, t, client.GetStartup())
-require.NotNil(t, startup, "agent should send startup message")
-err := agnt.Close()
-require.NoError(t, err, "agent should close cleanly")
-}
+11 -17
@@ -2,8 +2,6 @@ package agentsocket_test
import (
"context"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/require"
@@ -30,14 +28,10 @@ func newSocketClient(ctx context.Context, t *testing.T, socketPath string) *agen
func TestDRPCAgentSocketService(t *testing.T) {
t.Parallel()
-if runtime.GOOS == "windows" {
-t.Skip("agentsocket is not supported on Windows")
-}
t.Run("Ping", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -57,7 +51,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("NewUnit", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -79,7 +73,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitAlreadyStarted", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -109,7 +103,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitAlreadyCompleted", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -148,7 +142,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -178,7 +172,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("NewUnits", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -203,7 +197,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("DependencyAlreadyRegistered", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -238,7 +232,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("DependencyAddedAfterDependentStarted", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -280,7 +274,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnregisteredUnit", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -299,7 +293,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -323,7 +317,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitReady", func(t *testing.T) {
t.Parallel()
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
+47 -6
@@ -4,19 +4,60 @@ package agentsocket
import (
"context"
"fmt"
"net"
"os"
"os/user"
"strings"
"github.com/Microsoft/go-winio"
"golang.org/x/xerrors"
)
-func createSocket(_ string) (net.Listener, error) {
-return nil, xerrors.New("agentsocket is not supported on Windows")
const defaultSocketPath = `\\.\pipe\com.coder.agentsocket`
func createSocket(path string) (net.Listener, error) {
if path == "" {
path = defaultSocketPath
}
if !strings.HasPrefix(path, `\\.\pipe\`) {
return nil, xerrors.Errorf("%q is not a valid local socket path", path)
}
user, err := user.Current()
if err != nil {
return nil, fmt.Errorf("unable to look up current user: %w", err)
}
sid := user.Uid
// SecurityDescriptor is in SDDL format. c.f.
// https://learn.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format for full details.
// D: indicates this is a Discretionary Access Control List (DACL), which is Windows-speak for ACLs that allow or
// deny access (as opposed to SACL which controls audit logging).
// P indicates that this DACL is "protected" from being modified through inheritance.
// () delimit access control entries (ACEs); here we only have one, which allows (A) generic all (GA) access to our
// specific user's security ID (SID).
//
// Note that although Microsoft docs at https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipes warns that
// named pipes are accessible from remote machines in the general case, the `winio` package sets the flag
// windows.FILE_PIPE_REJECT_REMOTE_CLIENTS when creating pipes, so connections from remote machines are always
// denied. This is important because we sort of expect customers to run the Coder agent under a generic user
// account unless they are very sophisticated. We don't want this socket to cross the boundary of the local machine.
configuration := &winio.PipeConfig{
SecurityDescriptor: fmt.Sprintf("D:P(A;;GA;;;%s)", sid),
}
listener, err := winio.ListenPipe(path, configuration)
if err != nil {
return nil, xerrors.Errorf("failed to open named pipe: %w", err)
}
return listener, nil
}
-func cleanupSocket(_ string) error {
-return nil
func cleanupSocket(path string) error {
return os.Remove(path)
}
-func dialSocket(_ context.Context, _ string) (net.Conn, error) {
-return nil, xerrors.New("agentsocket is not supported on Windows")
func dialSocket(ctx context.Context, path string) (net.Conn, error) {
return winio.DialPipeContext(ctx, path)
}
+6
@@ -124,6 +124,12 @@ func (c *Client) Close() {
c.derpMapOnce.Do(func() { close(c.derpMapUpdates) })
}
func (c *Client) ConnectRPC28WithRole(ctx context.Context, _ string) (
agentproto.DRPCAgentClient28, proto.DRPCTailnetClient28, error,
) {
return c.ConnectRPC28(ctx)
}
func (c *Client) ConnectRPC28(ctx context.Context) (
agentproto.DRPCAgentClient28, proto.DRPCTailnetClient28, error,
) {
+9
@@ -137,6 +137,15 @@ func createOIDCConfig(ctx context.Context, logger slog.Logger, vals *codersdk.De
if err != nil {
return nil, xerrors.Errorf("parse oidc oauth callback url: %w", err)
}
if vals.OIDC.RedirectURL.String() != "" {
redirectURL, err = vals.OIDC.RedirectURL.Value().Parse("/api/v2/users/oidc/callback")
if err != nil {
return nil, xerrors.Errorf("parse oidc redirect url: %w", err)
}
logger.Warn(ctx, "custom OIDC redirect URL used instead of 'access_url', ensure this matches the value configured in your OIDC provider")
}
// If the scopes contain 'groups', we enable group support.
// Do not override any custom value set by the user.
if slice.Contains(vals.OIDC.Scopes, "groups") && vals.OIDC.GroupField == "" {
+12
@@ -1740,6 +1740,18 @@ func TestServer(t *testing.T) {
// Next, we instruct the same server to display the YAML config
// and then save it.
// Because this is literally the same invocation, DefaultFn sets the
// value of 'Default', which triggers a mutually exclusive error on the
// next parse. Usually we only parse flags once, so this is not an issue.
for _, c := range inv.Command.Children {
if c.Name() == "server" {
for i := range c.Options {
c.Options[i].DefaultFn = nil
}
break
}
}
inv = inv.WithContext(testutil.Context(t, testutil.WaitMedium))
//nolint:gocritic
inv.Args = append(args, "--write-config")
+9 -7
@@ -1,5 +1,3 @@
-//go:build !windows
package cli_test
import (
@@ -7,6 +5,7 @@ import (
"context"
"os"
"path/filepath"
"runtime"
"testing"
"time"
@@ -25,12 +24,15 @@ func setupSocketServer(t *testing.T) (path string, cleanup func()) {
t.Helper()
// Use a temporary socket path for each test
-socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
socketPath := testutil.AgentSocketPath(t)
-// Create parent directory if needed
-parentDir := filepath.Dir(socketPath)
-err := os.MkdirAll(parentDir, 0o700)
-require.NoError(t, err, "create socket directory")
// Create parent directory if needed. Not necessary on Windows because named pipes live in an abstract namespace
// not tied to any real files.
if runtime.GOOS != "windows" {
parentDir := filepath.Dir(socketPath)
err := os.MkdirAll(parentDir, 0o700)
require.NoError(t, err, "create socket directory")
}
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
+4
@@ -139,8 +139,10 @@ func (r *RootCmd) templateVersionsList() *serpent.Command {
type templateVersionRow struct {
// For json format:
TemplateVersion codersdk.TemplateVersion `table:"-"`
ActiveJSON bool `json:"active" table:"-"`
// For table format:
ID string `json:"-" table:"id"`
Name string `json:"-" table:"name,default_sort"`
CreatedAt time.Time `json:"-" table:"created at"`
CreatedBy string `json:"-" table:"created by"`
@@ -166,6 +168,8 @@ func templateVersionsToRows(activeVersionID uuid.UUID, templateVersions ...coder
rows[i] = templateVersionRow{
TemplateVersion: templateVersion,
ActiveJSON: templateVersion.ID == activeVersionID,
ID: templateVersion.ID.String(),
Name: templateVersion.Name,
CreatedAt: templateVersion.CreatedAt,
CreatedBy: templateVersion.CreatedBy.Username,
+29
@@ -1,7 +1,9 @@
package cli_test
import (
"bytes"
"context"
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
@@ -40,6 +42,33 @@ func TestTemplateVersions(t *testing.T) {
pty.ExpectMatch(version.CreatedBy.Username)
pty.ExpectMatch("Active")
})
t.Run("ListVersionsJSON", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
inv, root := clitest.New(t, "templates", "versions", "list", template.Name, "--output", "json")
clitest.SetupConfig(t, member, root)
var stdout bytes.Buffer
inv.Stdout = &stdout
require.NoError(t, inv.Run())
var rows []struct {
TemplateVersion codersdk.TemplateVersion `json:"TemplateVersion"`
Active bool `json:"active"`
}
require.NoError(t, json.Unmarshal(stdout.Bytes(), &rows))
require.Len(t, rows, 1)
assert.Equal(t, version.ID, rows[0].TemplateVersion.ID)
assert.True(t, rows[0].Active)
})
}
func TestTemplateVersionsPromote(t *testing.T) {
+5 -1
@@ -383,13 +383,17 @@ NETWORKING OPTIONS:
--samesite-auth-cookie lax|none, $CODER_SAMESITE_AUTH_COOKIE (default: lax)
Controls how the 'SameSite' property is set on browser session cookies.
---secure-auth-cookie bool, $CODER_SECURE_AUTH_COOKIE
--secure-auth-cookie bool, $CODER_SECURE_AUTH_COOKIE (default: false)
Controls if the 'Secure' property is set on browser session cookies.
--wildcard-access-url string, $CODER_WILDCARD_ACCESS_URL
Specifies the wildcard hostname to use for workspace applications in
the form "*.example.com".
--host-prefix-cookie bool, $CODER_HOST_PREFIX_COOKIE (default: false)
Recommended to be enabled. Enables `__Host-` prefix for cookies to
guarantee they are only set by the right domain.
NETWORKING / DERP OPTIONS:
Most Coder deployments never have to think about DERP because all connections
between workspaces and users are peer-to-peer. However, when Coder cannot
+1 -1
@@ -9,7 +9,7 @@ OPTIONS:
-O, --org string, $CODER_ORGANIZATION
Select which organization (uuid or name) to use.
--c, --column [name|created at|created by|status|active|archived] (default: name,created at,created by,status,active)
-c, --column [id|name|created at|created by|status|active|archived] (default: name,created at,created by,status,active)
Columns to display in table output.
--include-archived bool
+10 -1
@@ -176,11 +176,15 @@ networking:
# (default: <unset>, type: string-array)
proxyTrustedOrigins: []
# Controls if the 'Secure' property is set on browser session cookies.
-# (default: <unset>, type: bool)
# (default: false, type: bool)
secureAuthCookie: false
# Controls how the 'SameSite' property is set on browser session cookies.
# (default: lax, type: enum[lax\|none])
sameSiteAuthCookie: lax
# Recommended to be enabled. Enables `__Host-` prefix for cookies to guarantee
# they are only set by the right domain.
# (default: false, type: bool)
hostPrefixCookie: false
# Whether Coder only allows connections to workspaces via the browser.
# (default: <unset>, type: bool)
browserOnly: false
@@ -417,6 +421,11 @@ oidc:
# an insecure OIDC configuration. It is not recommended to use this flag.
# (default: <unset>, type: bool)
dangerousSkipIssuerChecks: false
# Optional override of the default redirect url which uses the deployment's access
# url. Useful in situations where a deployment has more than 1 domain. Using this
# setting can also break OIDC, so use with caution.
# (default: <unset>, type: url)
oidc-redirect-url:
# Telemetry is critical to our ability to improve Coder. We strip all personal
# information before sending data to our servers. Please only disable telemetry
# when required by your organization's security policy.
+1 -1
@@ -128,7 +128,7 @@ func (a *SubAgentAPI) CreateSubAgent(ctx context.Context, req *agentproto.Create
Name: agentName,
ResourceID: parentAgent.ResourceID,
AuthToken: uuid.New(),
-AuthInstanceID: parentAgent.AuthInstanceID,
AuthInstanceID: sql.NullString{},
Architecture: req.Architecture,
EnvironmentVariables: pqtype.NullRawMessage{},
OperatingSystem: req.OperatingSystem,
+46 -1
@@ -175,6 +175,52 @@ func TestSubAgentAPI(t *testing.T) {
}
})
// Context: https://github.com/coder/coder/pull/22196
t.Run("CreateSubAgentDoesNotInheritAuthInstanceID", func(t *testing.T) {
t.Parallel()
var (
log = testutil.Logger(t)
clock = quartz.NewMock(t)
db, org = newDatabaseWithOrg(t)
user, agent = newUserWithWorkspaceAgent(t, db, org)
)
// Given: The parent agent has an AuthInstanceID set
ctx := testutil.Context(t, testutil.WaitShort)
parentAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), agent.ID)
require.NoError(t, err)
require.True(t, parentAgent.AuthInstanceID.Valid, "parent agent should have an AuthInstanceID")
require.NotEmpty(t, parentAgent.AuthInstanceID.String)
api := newAgentAPI(t, log, db, clock, user, org, agent)
// When: We create a sub agent
createResp, err := api.CreateSubAgent(ctx, &proto.CreateSubAgentRequest{
Name: "sub-agent",
Directory: "/workspaces/test",
Architecture: "amd64",
OperatingSystem: "linux",
})
require.NoError(t, err)
subAgentID, err := uuid.FromBytes(createResp.Agent.Id)
require.NoError(t, err)
// Then: The sub-agent must NOT re-use the parent's AuthInstanceID.
subAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), subAgentID)
require.NoError(t, err)
assert.False(t, subAgent.AuthInstanceID.Valid, "sub-agent should not have an AuthInstanceID")
assert.Empty(t, subAgent.AuthInstanceID.String, "sub-agent AuthInstanceID string should be empty")
// Double-check: looking up by the parent's instance ID must
// still return the parent, not the sub-agent.
lookedUp, err := db.GetWorkspaceAgentByInstanceID(dbauthz.AsSystemRestricted(ctx), parentAgent.AuthInstanceID.String)
require.NoError(t, err)
assert.Equal(t, parentAgent.ID, lookedUp.ID, "instance ID lookup should still return the parent agent")
})
type expectedAppError struct {
index int32
field string
@@ -1320,7 +1366,6 @@ func TestSubAgentAPI(t *testing.T) {
}
for _, tc := range tests {
-tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
+35
@@ -21,10 +21,12 @@ import (
agentapisdk "github.com/coder/agentapi-sdk-go"
"github.com/coder/coder/v2/coderd/audit"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpapi/httperror"
"github.com/coder/coder/v2/coderd/httpmw"
"github.com/coder/coder/v2/coderd/notifications"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/searchquery"
@@ -1300,6 +1302,23 @@ func (api *API) pauseTask(rw http.ResponseWriter, r *http.Request) {
return
}
if _, err := api.NotificationsEnqueuer.Enqueue(
// nolint:gocritic // Need notifier actor to enqueue notifications.
dbauthz.AsNotifier(ctx),
workspace.OwnerID,
notifications.TemplateTaskPaused,
map[string]string{
"task": task.Name,
"task_id": task.ID.String(),
"workspace": workspace.Name,
"pause_reason": "manual",
},
"api-task-pause",
workspace.ID, workspace.OwnerID, workspace.OrganizationID,
); err != nil {
api.Logger.Warn(ctx, "failed to notify of task paused", slog.Error(err), slog.F("task_id", task.ID), slog.F("workspace_id", workspace.ID))
}
httpapi.Write(ctx, rw, http.StatusAccepted, codersdk.PauseTaskResponse{
WorkspaceBuild: &build,
})
@@ -1387,6 +1406,22 @@ func (api *API) resumeTask(rw http.ResponseWriter, r *http.Request) {
httperror.WriteWorkspaceBuildError(ctx, rw, err)
return
}
if _, err := api.NotificationsEnqueuer.Enqueue(
// nolint:gocritic // Need notifier actor to enqueue notifications.
dbauthz.AsNotifier(ctx),
workspace.OwnerID,
notifications.TemplateTaskResumed,
map[string]string{
"task": task.Name,
"task_id": task.ID.String(),
"workspace": workspace.Name,
},
"api-task-resume",
workspace.ID, workspace.OwnerID, workspace.OrganizationID,
); err != nil {
api.Logger.Warn(ctx, "failed to notify of task resumed", slog.Error(err), slog.F("task_id", task.ID), slog.F("workspace_id", workspace.ID))
}
httpapi.Write(ctx, rw, http.StatusAccepted, codersdk.ResumeTaskResponse{
WorkspaceBuild: &build,
})
+110 -38
@@ -45,10 +45,10 @@ import (
)
// createTaskInState is a helper to create a task in the desired state.
-// It returns a function that takes context, test, and status, and returns the task ID.
// It returns a function that takes context, test, and status, and returns the task.
// The caller is responsible for setting up the database, owner, and user.
-func createTaskInState(db database.Store, ownerSubject rbac.Subject, ownerOrgID, userID uuid.UUID) func(context.Context, *testing.T, database.TaskStatus) uuid.UUID {
-return func(ctx context.Context, t *testing.T, status database.TaskStatus) uuid.UUID {
func createTaskInState(db database.Store, ownerSubject rbac.Subject, ownerOrgID, userID uuid.UUID) func(context.Context, *testing.T, database.TaskStatus) database.Task {
return func(ctx context.Context, t *testing.T, status database.TaskStatus) database.Task {
ctx = dbauthz.As(ctx, ownerSubject)
builder := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
@@ -65,6 +65,9 @@ func createTaskInState(db database.Store, ownerSubject rbac.Subject, ownerOrgID,
builder = builder.Pending()
case database.TaskStatusInitializing:
builder = builder.Starting()
case database.TaskStatusActive:
// Default builder produces a succeeded start build.
// Post-processing below sets agent and app to active.
case database.TaskStatusPaused:
builder = builder.Seed(database.WorkspaceBuild{
Transition: database.WorkspaceTransitionStop,
@@ -76,31 +79,32 @@ func createTaskInState(db database.Store, ownerSubject rbac.Subject, ownerOrgID,
}
resp := builder.Do()
-taskID := resp.Task.ID
// Post-process by manipulating agent and app state.
-if status == database.TaskStatusError {
-// First, set agent to ready state so agent_status returns 'active'.
-// This ensures the cascade reaches app_status.
if status == database.TaskStatusActive || status == database.TaskStatusError {
// Set agent to ready state so agent_status returns 'active'.
err := db.UpdateWorkspaceAgentLifecycleStateByID(ctx, database.UpdateWorkspaceAgentLifecycleStateByIDParams{
ID: resp.Agents[0].ID,
LifecycleState: database.WorkspaceAgentLifecycleStateReady,
})
require.NoError(t, err)
-// Then set workspace app health to unhealthy to trigger error state.
apps, err := db.GetWorkspaceAppsByAgentID(ctx, resp.Agents[0].ID)
require.NoError(t, err)
require.Len(t, apps, 1, "expected exactly one app for task")
appHealth := database.WorkspaceAppHealthHealthy
if status == database.TaskStatusError {
appHealth = database.WorkspaceAppHealthUnhealthy
}
err = db.UpdateWorkspaceAppHealthByID(ctx, database.UpdateWorkspaceAppHealthByIDParams{
ID: apps[0].ID,
-Health: database.WorkspaceAppHealthUnhealthy,
Health: appHealth,
})
require.NoError(t, err)
}
-return taskID
return resp.Task
}
}
@@ -845,9 +849,9 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusPaused)
task := createTask(ctx, t, database.TaskStatusPaused)
-err := client.TaskSend(ctx, "me", taskID, codersdk.TaskSendRequest{
err := client.TaskSend(ctx, "me", task.ID, codersdk.TaskSendRequest{
Input: "Hello",
})
@@ -863,9 +867,9 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusInitializing)
task := createTask(ctx, t, database.TaskStatusInitializing)
-err := client.TaskSend(ctx, "me", taskID, codersdk.TaskSendRequest{
err := client.TaskSend(ctx, "me", task.ID, codersdk.TaskSendRequest{
Input: "Hello",
})
@@ -881,9 +885,9 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusPending)
task := createTask(ctx, t, database.TaskStatusPending)
-err := client.TaskSend(ctx, "me", taskID, codersdk.TaskSendRequest{
err := client.TaskSend(ctx, "me", task.ID, codersdk.TaskSendRequest{
Input: "Hello",
})
@@ -899,9 +903,9 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusError)
task := createTask(ctx, t, database.TaskStatusError)
-err := client.TaskSend(ctx, "me", taskID, codersdk.TaskSendRequest{
err := client.TaskSend(ctx, "me", task.ID, codersdk.TaskSendRequest{
Input: "Hello",
})
@@ -1120,16 +1124,16 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusPending)
task := createTask(ctx, t, database.TaskStatusPending)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
-TaskID: taskID,
TaskID: task.ID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
-logsResp, err := client.TaskLogs(ctx, "me", taskID)
logsResp, err := client.TaskLogs(ctx, "me", task.ID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
@@ -1138,16 +1142,16 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusInitializing)
task := createTask(ctx, t, database.TaskStatusInitializing)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
-TaskID: taskID,
TaskID: task.ID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
-logsResp, err := client.TaskLogs(ctx, "me", taskID)
logsResp, err := client.TaskLogs(ctx, "me", task.ID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
@@ -1156,16 +1160,16 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusPaused)
task := createTask(ctx, t, database.TaskStatusPaused)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
-TaskID: taskID,
TaskID: task.ID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
-logsResp, err := client.TaskLogs(ctx, "me", taskID)
logsResp, err := client.TaskLogs(ctx, "me", task.ID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
@@ -1174,9 +1178,9 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusPending)
task := createTask(ctx, t, database.TaskStatusPending)
-logsResp, err := client.TaskLogs(ctx, "me", taskID)
logsResp, err := client.TaskLogs(ctx, "me", task.ID)
require.NoError(t, err)
assert.True(t, logsResp.Snapshot)
@@ -1188,7 +1192,7 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusPending)
task := createTask(ctx, t, database.TaskStatusPending)
invalidEnvelope := coderd.TaskLogSnapshotEnvelope{
Format: "unknown-format",
@@ -1198,13 +1202,13 @@ func TestTasks(t *testing.T) {
require.NoError(t, err)
err = db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
-TaskID: taskID,
TaskID: task.ID,
LogSnapshot: json.RawMessage(invalidJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err)
-_, err = client.TaskLogs(ctx, "me", taskID)
_, err = client.TaskLogs(ctx, "me", task.ID)
require.Error(t, err)
var sdkErr *codersdk.Error
@@ -1217,16 +1221,16 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusPending)
task := createTask(ctx, t, database.TaskStatusPending)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
-TaskID: taskID,
TaskID: task.ID,
LogSnapshot: json.RawMessage(`{"format":"agentapi","data":"not an object"}`),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err)
-_, err = client.TaskLogs(ctx, "me", taskID)
_, err = client.TaskLogs(ctx, "me", task.ID)
require.Error(t, err)
var sdkErr *codersdk.Error
@@ -1238,9 +1242,9 @@ func TestTasks(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
-taskID := createTask(ctx, t, database.TaskStatusError)
task := createTask(ctx, t, database.TaskStatusError)
-_, err := client.TaskLogs(ctx, "me", taskID)
_, err := client.TaskLogs(ctx, "me", task.ID)
require.Error(t, err)
var sdkErr *codersdk.Error
@@ -2563,7 +2567,6 @@ func TestPauseTask(t *testing.T) {
}
for _, tc := range cases {
-tc := tc
t.Run(tc.name, func(t *testing.T) {
task, _ := setupWorkspaceTask(t, db, owner)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, tc.roles...)
@@ -2787,6 +2790,41 @@ func TestPauseTask(t *testing.T) {
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
})
t.Run("Notification", func(t *testing.T) {
t.Parallel()
var (
notifyEnq = &notificationstest.FakeEnqueuer{}
ownerClient, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{NotificationsEnqueuer: notifyEnq})
owner = coderdtest.CreateFirstUser(t, ownerClient)
)
ctx := testutil.Context(t, testutil.WaitMedium)
ownerUser, err := ownerClient.User(ctx, owner.UserID.String())
require.NoError(t, err)
createTask := createTaskInState(db, coderdtest.AuthzUserSubject(ownerUser), owner.OrganizationID, owner.UserID)
// Given: A task in an active state
task := createTask(ctx, t, database.TaskStatusActive)
workspace, err := ownerClient.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
// When: We pause the task
_, err = ownerClient.PauseTask(ctx, codersdk.Me, task.ID)
require.NoError(t, err)
// Then: A notification should be sent
sent := notifyEnq.Sent(notificationstest.WithTemplateID(notifications.TemplateTaskPaused))
require.Len(t, sent, 1)
require.Equal(t, owner.UserID, sent[0].UserID)
require.Equal(t, task.Name, sent[0].Labels["task"])
require.Equal(t, task.ID.String(), sent[0].Labels["task_id"])
require.Equal(t, workspace.Name, sent[0].Labels["workspace"])
require.Equal(t, "manual", sent[0].Labels["pause_reason"])
})
}
func TestResumeTask(t *testing.T) {
@@ -3116,4 +3154,38 @@ func TestResumeTask(t *testing.T) {
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
})
t.Run("Notification", func(t *testing.T) {
t.Parallel()
var (
notifyEnq = &notificationstest.FakeEnqueuer{}
ownerClient, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{NotificationsEnqueuer: notifyEnq})
owner = coderdtest.CreateFirstUser(t, ownerClient)
)
ctx := testutil.Context(t, testutil.WaitMedium)
ownerUser, err := ownerClient.User(ctx, owner.UserID.String())
require.NoError(t, err)
createTask := createTaskInState(db, coderdtest.AuthzUserSubject(ownerUser), owner.OrganizationID, owner.UserID)
// Given: A task in a paused state
task := createTask(ctx, t, database.TaskStatusPaused)
workspace, err := ownerClient.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
// When: We resume the task
_, err = ownerClient.ResumeTask(ctx, codersdk.Me, task.ID)
require.NoError(t, err)
// Then: A notification should be sent
sent := notifyEnq.Sent(notificationstest.WithTemplateID(notifications.TemplateTaskResumed))
require.Len(t, sent, 1)
require.Equal(t, owner.UserID, sent[0].UserID)
require.Equal(t, task.Name, sent[0].Labels["task"])
require.Equal(t, task.ID.String(), sent[0].Labels["task_id"])
require.Equal(t, workspace.Name, sent[0].Labels["workspace"])
})
}
+75 -5
@@ -3745,6 +3745,69 @@ const docTemplate = `{
}
}
},
"/organizations/{organization}/members/{user}/workspaces/available-users": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": [
"application/json"
],
"tags": [
"Workspaces"
],
"summary": "Get users available for workspace creation",
"operationId": "get-users-available-for-workspace-creation",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Organization ID",
"name": "organization",
"in": "path",
"required": true
},
{
"type": "string",
"description": "User ID, name, or me",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"description": "Search query",
"name": "q",
"in": "query"
},
{
"type": "integer",
"description": "Limit results",
"name": "limit",
"in": "query"
},
{
"type": "integer",
"description": "Offset for pagination",
"name": "offset",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.MinimalUser"
}
}
}
}
}
},
"/organizations/{organization}/paginated-members": {
"get": {
"security": [
@@ -15305,10 +15368,6 @@ const docTemplate = `{
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -15512,6 +15571,9 @@ const docTemplate = `{
"codersdk.HTTPCookieConfig": {
"type": "object",
"properties": {
"host_prefix": {
"type": "boolean"
},
"same_site": {
"type": "string"
},
@@ -16722,6 +16784,14 @@ const docTemplate = `{
"organization_mapping": {
"type": "object"
},
"redirect_url": {
"description": "RedirectURL is optional, defaulting to 'ACCESS_URL'. Only useful in niche\nsituations where the OIDC callback domain is different from the ACCESS_URL\ndomain.",
"allOf": [
{
"$ref": "#/definitions/serpent.URL"
}
]
},
"scopes": {
"type": "array",
"items": {
@@ -22805,7 +22875,7 @@ const docTemplate = `{
]
},
"default": {
"description": "Default is parsed into Value if set.",
"description": "Default is parsed into Value if set.\nMust be ` + "`" + `\"\"` + "`" + ` if ` + "`" + `DefaultFn` + "`" + ` != nil",
"type": "string"
},
"description": {
+71 -5
@@ -3296,6 +3296,65 @@
}
}
},
"/organizations/{organization}/members/{user}/workspaces/available-users": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": ["application/json"],
"tags": ["Workspaces"],
"summary": "Get users available for workspace creation",
"operationId": "get-users-available-for-workspace-creation",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Organization ID",
"name": "organization",
"in": "path",
"required": true
},
{
"type": "string",
"description": "User ID, name, or me",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"description": "Search query",
"name": "q",
"in": "query"
},
{
"type": "integer",
"description": "Limit results",
"name": "limit",
"in": "query"
},
{
"type": "integer",
"description": "Offset for pagination",
"name": "offset",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.MinimalUser"
}
}
}
}
}
},
"/organizations/{organization}/paginated-members": {
"get": {
"security": [
@@ -13836,10 +13895,6 @@
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -14037,6 +14092,9 @@
"codersdk.HTTPCookieConfig": {
"type": "object",
"properties": {
"host_prefix": {
"type": "boolean"
},
"same_site": {
"type": "string"
},
@@ -15190,6 +15248,14 @@
"organization_mapping": {
"type": "object"
},
"redirect_url": {
"description": "RedirectURL is optional, defaulting to 'ACCESS_URL'. Only useful in niche\nsituations where the OIDC callback domain is different from the ACCESS_URL\ndomain.",
"allOf": [
{
"$ref": "#/definitions/serpent.URL"
}
]
},
"scopes": {
"type": "array",
"items": {
@@ -20979,7 +21045,7 @@
]
},
"default": {
"description": "Default is parsed into Value if set.",
"description": "Default is parsed into Value if set.\nMust be `\"\"` if `DefaultFn` != nil",
"type": "string"
},
"description": {
+27
@@ -231,6 +231,7 @@ func (e *Executor) runOnce(t time.Time) Stats {
job *database.ProvisionerJob
auditLog *auditParams
shouldNotifyDormancy bool
shouldNotifyTaskPause bool
nextBuild *database.WorkspaceBuild
activeTemplateVersion database.TemplateVersion
ws database.Workspace
@@ -316,6 +317,10 @@ func (e *Executor) runOnce(t time.Time) Stats {
return nil
}
if reason == database.BuildReasonTaskAutoPause {
shouldNotifyTaskPause = true
}
// Get the template version job to access tags
templateVersionJob, err := tx.GetProvisionerJobByID(e.ctx, activeTemplateVersion.JobID)
if err != nil {
@@ -482,6 +487,28 @@ func (e *Executor) runOnce(t time.Time) Stats {
log.Warn(e.ctx, "failed to notify of workspace marked as dormant", slog.Error(err), slog.F("workspace_id", ws.ID))
}
}
if shouldNotifyTaskPause {
task, err := e.db.GetTaskByID(e.ctx, ws.TaskID.UUID)
if err != nil {
log.Warn(e.ctx, "failed to get task for pause notification", slog.Error(err), slog.F("task_id", ws.TaskID.UUID), slog.F("workspace_id", ws.ID))
} else {
if _, err := e.notificationsEnqueuer.Enqueue(
e.ctx,
ws.OwnerID,
notifications.TemplateTaskPaused,
map[string]string{
"task": task.Name,
"task_id": task.ID.String(),
"workspace": ws.Name,
"pause_reason": "inactivity exceeded the dormancy threshold",
},
"lifecycle_executor",
ws.ID, ws.OwnerID, ws.OrganizationID,
); err != nil {
log.Warn(e.ctx, "failed to notify of task paused", slog.Error(err), slog.F("task_id", ws.TaskID.UUID), slog.F("workspace_id", ws.ID))
}
}
}
return nil
}()
if err != nil && !xerrors.Is(err, context.Canceled) {
@@ -2026,4 +2026,62 @@ func TestExecutorTaskWorkspace(t *testing.T) {
workspace = coderdtest.MustWorkspace(t, client, workspace.ID)
assert.Equal(t, codersdk.BuildReasonTaskAutoPause, workspace.LatestBuild.Reason, "task workspace should use TaskAutoPause build reason")
})
t.Run("AutostopNotification", func(t *testing.T) {
t.Parallel()
var (
tickCh = make(chan time.Time)
statsCh = make(chan autobuild.Stats)
notifyEnq = notificationstest.FakeEnqueuer{}
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
AutobuildTicker: tickCh,
IncludeProvisionerDaemon: true,
AutobuildStats: statsCh,
NotificationsEnqueuer: &notifyEnq,
})
admin = coderdtest.CreateFirstUser(t, client)
)
// Given: A task workspace with an 8 hour deadline
ctx := testutil.Context(t, testutil.WaitShort)
template := createTaskTemplate(t, client, admin.OrganizationID, ctx, 8*time.Hour)
workspace := createTaskWorkspace(t, client, template, ctx, "test task for autostop notification")
// Given: The workspace is currently running
workspace = coderdtest.MustWorkspace(t, client, workspace.ID)
require.Equal(t, codersdk.WorkspaceTransitionStart, workspace.LatestBuild.Transition)
require.NotZero(t, workspace.LatestBuild.Deadline, "workspace should have a deadline for autostop")
p, err := coderdtest.GetProvisionerForTags(db, time.Now(), workspace.OrganizationID, map[string]string{})
require.NoError(t, err)
// When: the autobuild executor ticks after the deadline
go func() {
tickTime := workspace.LatestBuild.Deadline.Time.Add(time.Minute)
coderdtest.UpdateProvisionerLastSeenAt(t, db, p.ID, tickTime)
tickCh <- tickTime
close(tickCh)
}()
// Then: We expect to see a stop transition
stats := <-statsCh
require.Len(t, stats.Transitions, 1, "lifecycle executor should transition the task workspace")
assert.Contains(t, stats.Transitions, workspace.ID, "task workspace should be in transitions")
assert.Equal(t, database.WorkspaceTransitionStop, stats.Transitions[workspace.ID], "should autostop the workspace")
require.Empty(t, stats.Errors, "should have no errors when managing task workspaces")
// Then: A task paused notification was sent with the dormancy reason
require.True(t, workspace.TaskID.Valid, "workspace should have a task ID")
task, err := db.GetTaskByID(dbauthz.AsSystemRestricted(ctx), workspace.TaskID.UUID)
require.NoError(t, err)
sent := notifyEnq.Sent(notificationstest.WithTemplateID(notifications.TemplateTaskPaused))
require.Len(t, sent, 1)
require.Equal(t, workspace.OwnerID, sent[0].UserID)
require.Equal(t, task.Name, sent[0].Labels["task"])
require.Equal(t, task.ID.String(), sent[0].Labels["task_id"])
require.Equal(t, workspace.Name, sent[0].Labels["workspace"])
require.Equal(t, "inactivity exceeded the dormancy threshold", sent[0].Labels["pause_reason"])
})
}
+5 -1
@@ -900,6 +900,7 @@ func New(options *Options) *API {
sharedhttpmw.Recover(api.Logger),
httpmw.WithProfilingLabels,
tracing.StatusWriterMiddleware,
options.DeploymentValues.HTTPCookies.Middleware,
tracing.Middleware(api.TracerProvider),
httpmw.AttachRequestID,
httpmw.ExtractRealIP(api.RealIPConfig),
@@ -1232,7 +1233,10 @@ func New(options *Options) *API {
r.Get("/", api.organizationMember)
r.Delete("/", api.deleteOrganizationMember)
r.Put("/roles", api.putMemberRoles)
r.Post("/workspaces", api.postWorkspacesByOrganization)
r.Route("/workspaces", func(r chi.Router) {
r.Post("/", api.postWorkspacesByOrganization)
r.Get("/available-users", api.workspaceAvailableUsers)
})
})
})
})
@@ -0,0 +1,4 @@
-- Remove Task 'paused' transition template notification
DELETE FROM notification_templates WHERE id = '2a74f3d3-ab09-4123-a4a5-ca238f4f65a1';
-- Remove Task 'resumed' transition template notification
DELETE FROM notification_templates WHERE id = '843ee9c3-a8fb-4846-afa9-977bec578649';
@@ -0,0 +1,63 @@
-- Task transition to 'paused' status
INSERT INTO notification_templates (
id,
name,
title_template,
body_template,
actions,
"group",
method,
kind,
enabled_by_default
) VALUES (
'2a74f3d3-ab09-4123-a4a5-ca238f4f65a1',
'Task Paused',
E'Task ''{{.Labels.task}}'' is paused',
E'The task ''{{.Labels.task}}'' was paused ({{.Labels.pause_reason}}).',
'[
{
"label": "View task",
"url": "{{base_url}}/tasks/{{.UserUsername}}/{{.Labels.task_id}}"
},
{
"label": "View workspace",
"url": "{{base_url}}/@{{.UserUsername}}/{{.Labels.workspace}}"
}
]'::jsonb,
'Task Events',
NULL,
'system'::notification_template_kind,
true
);
-- Task transition to 'resumed' status
INSERT INTO notification_templates (
id,
name,
title_template,
body_template,
actions,
"group",
method,
kind,
enabled_by_default
) VALUES (
'843ee9c3-a8fb-4846-afa9-977bec578649',
'Task Resumed',
E'Task ''{{.Labels.task}}'' has resumed',
E'The task ''{{.Labels.task}}'' has resumed.',
'[
{
"label": "View task",
"url": "{{base_url}}/tasks/{{.UserUsername}}/{{.Labels.task_id}}"
},
{
"label": "View workspace",
"url": "{{base_url}}/@{{.UserUsername}}/{{.Labels.workspace}}"
}
]'::jsonb,
'Task Events',
NULL,
'system'::notification_template_kind,
true
);
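The `E'…'` strings in the migration use doubled single quotes for SQL escaping; once stored, the templates are ordinary Go `text/template` bodies evaluated against the notification payload's labels. A minimal rendering sketch (label names come from the migration above; the `payload` struct is a simplified stand-in for the real message payload type):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// payload mirrors only the field the templates reference here.
type payload struct {
	Labels map[string]string
}

// render executes one template body against a payload.
func render(body string, p payload) (string, error) {
	tmpl, err := template.New("notification").Parse(body)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	if err := tmpl.Execute(&sb, p); err != nil {
		return "", err
	}
	return sb.String(), nil
}

func main() {
	// After SQL unescaping, E'Task ''{{.Labels.task}}'' is paused'
	// becomes the template string below.
	title, err := render("Task '{{.Labels.task}}' is paused", payload{
		Labels: map[string]string{"task": "my-task"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(title)
}
```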
@@ -138,7 +138,6 @@ func TestCheckLatestVersion(t *testing.T) {
}
for i, tc := range tests {
i, tc := i, tc
t.Run(fmt.Sprintf("entry %d", i), func(t *testing.T) {
t.Parallel()
+50 -3
@@ -6319,6 +6319,56 @@ func TestGetWorkspaceAgentsByParentID(t *testing.T) {
})
}
func TestGetWorkspaceAgentByInstanceID(t *testing.T) {
t.Parallel()
// Context: https://github.com/coder/coder/pull/22196
t.Run("DoesNotReturnSubAgents", func(t *testing.T) {
t.Parallel()
// Given: A parent workspace agent with an AuthInstanceID and a
// sub-agent that shares the same AuthInstanceID.
db, _ := dbtestutil.NewDB(t)
org := dbgen.Organization(t, db, database.Organization{})
job := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{
Type: database.ProvisionerJobTypeTemplateVersionImport,
OrganizationID: org.ID,
})
resource := dbgen.WorkspaceResource(t, db, database.WorkspaceResource{
JobID: job.ID,
})
authInstanceID := fmt.Sprintf("instance-%s-%d", t.Name(), time.Now().UnixNano())
parentAgent := dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ResourceID: resource.ID,
AuthInstanceID: sql.NullString{
String: authInstanceID,
Valid: true,
},
})
// Create a sub-agent with the same AuthInstanceID (simulating
// the old behavior before the fix).
_ = dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ParentID: uuid.NullUUID{UUID: parentAgent.ID, Valid: true},
ResourceID: resource.ID,
AuthInstanceID: sql.NullString{
String: authInstanceID,
Valid: true,
},
})
ctx := testutil.Context(t, testutil.WaitShort)
// When: We look up the agent by instance ID.
agent, err := db.GetWorkspaceAgentByInstanceID(ctx, authInstanceID)
require.NoError(t, err)
// Then: The result must be the parent agent, not the sub-agent.
assert.Equal(t, parentAgent.ID, agent.ID, "instance ID lookup should return the parent agent, not a sub-agent")
assert.False(t, agent.ParentID.Valid, "returned agent should not have a parent (should be the parent itself)")
})
}
func requireUsersMatch(t testing.TB, expected []database.User, found []database.GetUsersRow, msg string) {
t.Helper()
require.ElementsMatch(t, expected, database.ConvertUserRows(found), msg)
@@ -6646,7 +6696,6 @@ func TestUserSecretsAuthorization(t *testing.T) {
}
for _, tc := range testCases {
tc := tc // capture range variable
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
@@ -7460,7 +7509,6 @@ func TestGetTaskByWorkspaceID(t *testing.T) {
db, _ := dbtestutil.NewDB(t)
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
@@ -8000,7 +8048,6 @@ func TestUpdateTaskWorkspaceID(t *testing.T) {
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
+2
@@ -18251,6 +18251,8 @@ WHERE
auth_instance_id = $1 :: TEXT
-- Filter out deleted sub agents.
AND deleted = FALSE
-- Filter out sub agents; they do not authenticate with auth_instance_id.
AND parent_id IS NULL
ORDER BY
created_at DESC
`
@@ -17,6 +17,8 @@ WHERE
auth_instance_id = @auth_instance_id :: TEXT
-- Filter out deleted sub agents.
AND deleted = FALSE
-- Filter out sub agents; they do not authenticate with auth_instance_id.
AND parent_id IS NULL
ORDER BY
created_at DESC;
-1
@@ -86,7 +86,6 @@ func FindClosestNode(nodes []Node) (Node, error) {
eg = errgroup.Group{}
)
for i, node := range nodes {
i, node := i, node
eg.Go(func() error {
pinger, err := ping.NewPinger(node.HostnameHTTPS)
if err != nil {
-4
@@ -106,7 +106,6 @@ func TestNormalizeAudienceURI(t *testing.T) {
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
result := normalizeAudienceURI(tc.input)
@@ -157,7 +156,6 @@ func TestNormalizeHost(t *testing.T) {
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
result := normalizeHost(tc.input)
@@ -203,7 +201,6 @@ func TestNormalizePathSegments(t *testing.T) {
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
result := normalizePathSegments(tc.input)
@@ -247,7 +244,6 @@ func TestExtractExpectedAudience(t *testing.T) {
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var req *http.Request
+2
@@ -59,4 +59,6 @@ var (
TemplateTaskIdle = uuid.MustParse("d4a6271c-cced-4ed0-84ad-afd02a9c7799")
TemplateTaskCompleted = uuid.MustParse("8c5a4d12-9f7e-4b3a-a1c8-6e4f2d9b5a7c")
TemplateTaskFailed = uuid.MustParse("3b7e8f1a-4c2d-49a6-b5e9-7f3a1c8d6b4e")
TemplateTaskPaused = uuid.MustParse("2a74f3d3-ab09-4123-a4a5-ca238f4f65a1")
TemplateTaskResumed = uuid.MustParse("843ee9c3-a8fb-4846-afa9-977bec578649")
)
@@ -1302,6 +1302,37 @@ func TestNotificationTemplates_Golden(t *testing.T) {
Data: map[string]any{},
},
},
{
name: "TemplateTaskPaused",
id: notifications.TemplateTaskPaused,
payload: types.MessagePayload{
UserName: "Bobby",
UserEmail: "bobby@coder.com",
UserUsername: "bobby",
Labels: map[string]string{
"task": "my-task",
"task_id": "00000000-0000-0000-0000-000000000000",
"workspace": "my-workspace",
"pause_reason": "idle timeout",
},
Data: map[string]any{},
},
},
{
name: "TemplateTaskResumed",
id: notifications.TemplateTaskResumed,
payload: types.MessagePayload{
UserName: "Bobby",
UserEmail: "bobby@coder.com",
UserUsername: "bobby",
Labels: map[string]string{
"task": "my-task",
"task_id": "00000000-0000-0000-0000-000000000001",
"workspace": "my-workspace",
},
Data: map[string]any{},
},
},
}
// We must have a test case for every notification_template. This is enforced below:
@@ -0,0 +1,85 @@
From: system@coder.com
To: bobby@coder.com
Subject: Task 'my-task' is paused
Message-Id: 02ee4935-73be-4fa1-a290-ff9999026b13@blush-whale-48
Date: Fri, 11 Oct 2024 09:03:06 +0000
Content-Type: multipart/alternative; boundary=bbe61b741255b6098bb6b3c1f41b885773df633cb18d2a3002b68e4bc9c4
MIME-Version: 1.0
--bbe61b741255b6098bb6b3c1f41b885773df633cb18d2a3002b68e4bc9c4
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=UTF-8
Hi Bobby,
The task 'my-task' was paused (idle timeout).
View task: http://test.com/tasks/bobby/00000000-0000-0000-0000-000000000000
View workspace: http://test.com/@bobby/my-workspace
--bbe61b741255b6098bb6b3c1f41b885773df633cb18d2a3002b68e4bc9c4
Content-Transfer-Encoding: quoted-printable
Content-Type: text/html; charset=UTF-8
<!doctype html>
<html lang=3D"en">
<head>
<meta charset=3D"UTF-8" />
<meta name=3D"viewport" content=3D"width=3Ddevice-width, initial-scale=
=3D1.0" />
<title>Task 'my-task' is paused</title>
</head>
<body style=3D"margin: 0; padding: 0; font-family: -apple-system, system-=
ui, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', 'Ubuntu', 'Cantarel=
l', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', sans-serif; color: #020617=
; background: #f8fafc;">
<div style=3D"max-width: 600px; margin: 20px auto; padding: 60px; borde=
r: 1px solid #e2e8f0; border-radius: 8px; background-color: #fff; text-alig=
n: left; font-size: 14px; line-height: 1.5;">
<div style=3D"text-align: center;">
<img src=3D"https://coder.com/coder-logo-horizontal.png" alt=3D"Cod=
er Logo" style=3D"height: 40px;" />
</div>
<h1 style=3D"text-align: center; font-size: 24px; font-weight: 400; m=
argin: 8px 0 32px; line-height: 1.5;">
Task 'my-task' is paused
</h1>
<div style=3D"line-height: 1.5;">
<p>Hi Bobby,</p>
<p>The task &lsquo;my-task&rsquo; was paused (idle timeout).</p>
</div>
<div style=3D"text-align: center; margin-top: 32px;">
=20
<a href=3D"http://test.com/tasks/bobby/00000000-0000-0000-0000-0000=
00000000" style=3D"display: inline-block; padding: 13px 24px; background-co=
lor: #020617; color: #f8fafc; text-decoration: none; border-radius: 8px; ma=
rgin: 0 4px;">
View task
</a>
=20
<a href=3D"http://test.com/@bobby/my-workspace" style=3D"display: i=
nline-block; padding: 13px 24px; background-color: #020617; color: #f8fafc;=
text-decoration: none; border-radius: 8px; margin: 0 4px;">
View workspace
</a>
=20
</div>
<div style=3D"border-top: 1px solid #e2e8f0; color: #475569; font-siz=
e: 12px; margin-top: 64px; padding-top: 24px; line-height: 1.6;">
<p>&copy;&nbsp;2024&nbsp;Coder. All rights reserved&nbsp;-&nbsp;<a =
href=3D"http://test.com" style=3D"color: #2563eb; text-decoration: none;">h=
ttp://test.com</a></p>
<p><a href=3D"http://test.com/settings/notifications" style=3D"colo=
r: #2563eb; text-decoration: none;">Click here to manage your notification =
settings</a></p>
<p><a href=3D"http://test.com/settings/notifications?disabled=3D2a7=
4f3d3-ab09-4123-a4a5-ca238f4f65a1" style=3D"color: #2563eb; text-decoration=
: none;">Stop receiving emails like this</a></p>
</div>
</div>
</body>
</html>
--bbe61b741255b6098bb6b3c1f41b885773df633cb18d2a3002b68e4bc9c4--
@@ -0,0 +1,85 @@
From: system@coder.com
To: bobby@coder.com
Subject: Task 'my-task' has resumed
Message-Id: 02ee4935-73be-4fa1-a290-ff9999026b13@blush-whale-48
Date: Fri, 11 Oct 2024 09:03:06 +0000
Content-Type: multipart/alternative; boundary=bbe61b741255b6098bb6b3c1f41b885773df633cb18d2a3002b68e4bc9c4
MIME-Version: 1.0
--bbe61b741255b6098bb6b3c1f41b885773df633cb18d2a3002b68e4bc9c4
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=UTF-8
Hi Bobby,
The task 'my-task' has resumed.
View task: http://test.com/tasks/bobby/00000000-0000-0000-0000-000000000001
View workspace: http://test.com/@bobby/my-workspace
--bbe61b741255b6098bb6b3c1f41b885773df633cb18d2a3002b68e4bc9c4
Content-Transfer-Encoding: quoted-printable
Content-Type: text/html; charset=UTF-8
<!doctype html>
<html lang=3D"en">
<head>
<meta charset=3D"UTF-8" />
<meta name=3D"viewport" content=3D"width=3Ddevice-width, initial-scale=
=3D1.0" />
<title>Task 'my-task' has resumed</title>
</head>
<body style=3D"margin: 0; padding: 0; font-family: -apple-system, system-=
ui, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', 'Ubuntu', 'Cantarel=
l', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', sans-serif; color: #020617=
; background: #f8fafc;">
<div style=3D"max-width: 600px; margin: 20px auto; padding: 60px; borde=
r: 1px solid #e2e8f0; border-radius: 8px; background-color: #fff; text-alig=
n: left; font-size: 14px; line-height: 1.5;">
<div style=3D"text-align: center;">
<img src=3D"https://coder.com/coder-logo-horizontal.png" alt=3D"Cod=
er Logo" style=3D"height: 40px;" />
</div>
<h1 style=3D"text-align: center; font-size: 24px; font-weight: 400; m=
argin: 8px 0 32px; line-height: 1.5;">
Task 'my-task' has resumed
</h1>
<div style=3D"line-height: 1.5;">
<p>Hi Bobby,</p>
<p>The task &lsquo;my-task&rsquo; has resumed.</p>
</div>
<div style=3D"text-align: center; margin-top: 32px;">
=20
<a href=3D"http://test.com/tasks/bobby/00000000-0000-0000-0000-0000=
00000001" style=3D"display: inline-block; padding: 13px 24px; background-co=
lor: #020617; color: #f8fafc; text-decoration: none; border-radius: 8px; ma=
rgin: 0 4px;">
View task
</a>
=20
<a href=3D"http://test.com/@bobby/my-workspace" style=3D"display: i=
nline-block; padding: 13px 24px; background-color: #020617; color: #f8fafc;=
text-decoration: none; border-radius: 8px; margin: 0 4px;">
View workspace
</a>
=20
</div>
<div style=3D"border-top: 1px solid #e2e8f0; color: #475569; font-siz=
e: 12px; margin-top: 64px; padding-top: 24px; line-height: 1.6;">
<p>&copy;&nbsp;2024&nbsp;Coder. All rights reserved&nbsp;-&nbsp;<a =
href=3D"http://test.com" style=3D"color: #2563eb; text-decoration: none;">h=
ttp://test.com</a></p>
<p><a href=3D"http://test.com/settings/notifications" style=3D"colo=
r: #2563eb; text-decoration: none;">Click here to manage your notification =
settings</a></p>
<p><a href=3D"http://test.com/settings/notifications?disabled=3D843=
ee9c3-a8fb-4846-afa9-977bec578649" style=3D"color: #2563eb; text-decoration=
: none;">Stop receiving emails like this</a></p>
</div>
</div>
</body>
</html>
--bbe61b741255b6098bb6b3c1f41b885773df633cb18d2a3002b68e4bc9c4--
@@ -0,0 +1,35 @@
{
"_version": "1.1",
"msg_id": "00000000-0000-0000-0000-000000000000",
"payload": {
"_version": "1.2",
"notification_name": "Task Paused",
"notification_template_id": "00000000-0000-0000-0000-000000000000",
"user_id": "00000000-0000-0000-0000-000000000000",
"user_email": "bobby@coder.com",
"user_name": "Bobby",
"user_username": "bobby",
"actions": [
{
"label": "View task",
"url": "http://test.com/tasks/bobby/00000000-0000-0000-0000-000000000000"
},
{
"label": "View workspace",
"url": "http://test.com/@bobby/my-workspace"
}
],
"labels": {
"pause_reason": "idle timeout",
"task": "my-task",
"task_id": "00000000-0000-0000-0000-000000000000",
"workspace": "my-workspace"
},
"data": {},
"targets": null
},
"title": "Task 'my-task' is paused",
"title_markdown": "Task 'my-task' is paused",
"body": "The task 'my-task' was paused (idle timeout).",
"body_markdown": "The task 'my-task' was paused (idle timeout)."
}
@@ -0,0 +1,34 @@
{
"_version": "1.1",
"msg_id": "00000000-0000-0000-0000-000000000000",
"payload": {
"_version": "1.2",
"notification_name": "Task Resumed",
"notification_template_id": "00000000-0000-0000-0000-000000000000",
"user_id": "00000000-0000-0000-0000-000000000000",
"user_email": "bobby@coder.com",
"user_name": "Bobby",
"user_username": "bobby",
"actions": [
{
"label": "View task",
"url": "http://test.com/tasks/bobby/00000000-0000-0000-0000-000000000001"
},
{
"label": "View workspace",
"url": "http://test.com/@bobby/my-workspace"
}
],
"labels": {
"task": "my-task",
"task_id": "00000000-0000-0000-0000-000000000001",
"workspace": "my-workspace"
},
"data": {},
"targets": null
},
"title": "Task 'my-task' has resumed",
"title_markdown": "Task 'my-task' has resumed",
"body": "The task 'my-task' has resumed.",
"body_markdown": "The task 'my-task' has resumed."
}
-1
@@ -53,7 +53,6 @@ func TestVerifyPKCE(t *testing.T) {
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
result := oauth2provider.VerifyPKCE(tt.challenge, tt.verifier)
-1
@@ -217,7 +217,6 @@ func TestOAuth2ClientRegistrationValidation(t *testing.T) {
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
-1
@@ -128,7 +128,6 @@ func TestFindMatchingPresetID(t *testing.T) {
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
-2
@@ -1193,7 +1193,6 @@ func TestMatchesCron(t *testing.T) {
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
@@ -1518,7 +1517,6 @@ func TestCalculateDesiredInstances(t *testing.T) {
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
desiredInstances := tc.snapshot.CalculateDesiredInstances(tc.at)
-1
@@ -190,7 +190,6 @@ func TestTemplateVersionPresetsDefault(t *testing.T) {
}
for _, tc := range cases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
@@ -3319,7 +3319,7 @@ func insertDevcontainerSubagent(
ResourceID: resourceID,
Name: dc.GetName(),
AuthToken: uuid.New(),
AuthInstanceID: parentAgent.AuthInstanceID,
AuthInstanceID: sql.NullString{},
Architecture: parentAgent.Architecture,
EnvironmentVariables: envJSON,
Directory: dc.GetWorkspaceFolder(),
@@ -4072,6 +4072,54 @@ func TestInsertWorkspaceResource(t *testing.T) {
}},
},
},
{
// This test verifies that subagents created via
// devcontainers do not inherit the parent agent's
// AuthInstanceID.
// Context: https://github.com/coder/coder/pull/22196
name: "SubAgentDoesNotInheritAuthInstanceID",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Architecture: "amd64",
OperatingSystem: "linux",
Auth: &sdkproto.Agent_InstanceId{
InstanceId: "parent-instance-id",
},
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "sub",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "code-server", DisplayName: "VS Code", Url: "http://localhost:8080"},
},
}},
}},
},
expectSubAgentCount: 1,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, subAgents []database.WorkspaceAgent, _ bool) {
// Parent should have the AuthInstanceID set.
require.True(t, parentAgent.AuthInstanceID.Valid, "parent agent should have an AuthInstanceID")
require.Equal(t, "parent-instance-id", parentAgent.AuthInstanceID.String)
require.Len(t, subAgents, 1)
subAgent := subAgents[0]
// Sub-agent must NOT inherit the parent's AuthInstanceID.
assert.False(t, subAgent.AuthInstanceID.Valid, "sub-agent should not have an AuthInstanceID")
assert.Empty(t, subAgent.AuthInstanceID.String, "sub-agent AuthInstanceID string should be empty")
// Looking up by the parent's instance ID must still
// return the parent, not the sub-agent.
lookedUp, err := db.GetWorkspaceAgentByInstanceID(ctx, parentAgent.AuthInstanceID.String)
require.NoError(t, err)
assert.Equal(t, parentAgent.ID, lookedUp.ID, "instance ID lookup should still return the parent agent")
},
},
{
// This test verifies the backward-compatibility behavior where a
// devcontainer with a SubagentId but no apps, scripts, or envs does
+10
@@ -265,6 +265,16 @@ func TestRolePermissions(t *testing.T) {
false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgAdminBanWorkspace},
},
},
{
Name: "CreateWorkspaceForMembers",
// When creating the WithID won't be set, but it does not change the result.
Actions: []policy.Action{policy.ActionCreate},
Resource: rbac.ResourceWorkspace.InOrg(orgID).WithOwner(policy.WildcardSymbol),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
false: {setOtherOrg, orgUserAdmin, orgAuditor, memberMe, userAdmin, templateAdmin, orgTemplateAdmin},
},
},
{
Name: "MyWorkspaceInOrgExecution",
// When creating the WithID won't be set, but it does not change the result.
-1
@@ -253,7 +253,6 @@ func TestIsWithinRange(t *testing.T) {
}
for _, testCase := range testCases {
testCase := testCase
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
sched, err := cron.Weekly(testCase.spec)
+1 -1
@@ -704,7 +704,7 @@ func (api *API) postLogout(rw http.ResponseWriter, r *http.Request) {
Name: codersdk.SessionTokenCookie,
Path: "/",
}
http.SetCookie(rw, cookie)
http.SetCookie(rw, api.DeploymentValues.HTTPCookies.Apply(cookie))
// Delete the session token from database.
apiKey := httpmw.APIKey(r)
+5 -2
@@ -1905,10 +1905,13 @@ func TestUserLogout(t *testing.T) {
// Create a custom database so it's easier to make scoped tokens for
// testing.
db, pubSub := dbtestutil.NewDB(t)
dv := coderdtest.DeploymentValues(t)
dv.HTTPCookies.EnableHostPrefix = true
client := coderdtest.New(t, &coderdtest.Options{
Database: db,
Pubsub: pubSub,
DeploymentValues: dv,
Database: db,
Pubsub: pubSub,
})
firstUser := coderdtest.CreateFirstUser(t, client)
-2
@@ -606,7 +606,6 @@ func TestWorkspaceAgentAppStatus_ActivityBump(t *testing.T) {
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
@@ -1920,7 +1919,6 @@ func TestWorkspaceAgentDeleteDevcontainer(t *testing.T) {
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
+20 -4
@@ -59,6 +59,17 @@ func (api *API) workspaceAgentRPC(rw http.ResponseWriter, r *http.Request) {
return
}
// The role parameter distinguishes the real workspace agent from
// other clients using the same agent token (e.g. coder-logstream-kube).
// Only connections with the "agent" role trigger connection monitoring
// that updates first_connected_at/last_connected_at/disconnected_at.
// For backward compatibility, we default to monitoring when the role
// is omitted, since older agents don't send this parameter. In a
// future release, once all agents include role=agent, we can change
// this default to skip monitoring for unspecified roles.
role := r.URL.Query().Get("role")
monitorConnection := role == "" || role == "agent"
api.WebsocketWaitMutex.Lock()
api.WebsocketWaitGroup.Add(1)
api.WebsocketWaitMutex.Unlock()
@@ -121,10 +132,15 @@ func (api *API) workspaceAgentRPC(rw http.ResponseWriter, r *http.Request) {
slog.F("agent_api_version", workspaceAgent.APIVersion),
slog.F("agent_resource_id", workspaceAgent.ResourceID))
closeCtx, closeCtxCancel := context.WithCancel(ctx)
defer closeCtxCancel()
monitor := api.startAgentYamuxMonitor(closeCtx, workspace, workspaceAgent, build, mux)
defer monitor.close()
if monitorConnection {
closeCtx, closeCtxCancel := context.WithCancel(ctx)
defer closeCtxCancel()
monitor := api.startAgentYamuxMonitor(closeCtx, workspace, workspaceAgent, build, mux)
defer monitor.close()
} else {
logger.Debug(ctx, "skipping agent connection monitoring",
slog.F("role", role))
}
agentAPI := agentapi.New(agentapi.Options{
AgentID: workspaceAgent.ID,
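The comment block in this handler spells out the backward-compatibility default for the `role` query parameter; the monitoring decision reduces to a single predicate. A minimal sketch of that predicate as extracted from the diff:

```go
package main

import "fmt"

// shouldMonitor reports whether a connection with the given ?role=
// value should drive first_connected_at/last_connected_at updates.
// An empty role defaults to monitoring, since older agents omit the
// parameter entirely.
func shouldMonitor(role string) bool {
	return role == "" || role == "agent"
}

func main() {
	for _, role := range []string{"", "agent", "coder-logstream-kube"} {
		fmt.Printf("role=%q monitor=%v\n", role, shouldMonitor(role))
	}
}
```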
+83
@@ -11,6 +11,7 @@ import (
agentproto "github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/rbac"
@@ -168,3 +169,85 @@ func TestAgentAPI_LargeManifest(t *testing.T) {
})
}
}
func TestWorkspaceAgentRPCRole(t *testing.T) {
t.Parallel()
t.Run("AgentRoleMonitorsConnection", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, db := coderdtest.NewWithDatabase(t, nil)
user := coderdtest.CreateFirstUser(t, client)
r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
// Connect with role=agent using ConnectRPCWithRole. This is
// how the real workspace agent connects.
ac := agentsdk.New(client.URL, agentsdk.WithFixedToken(r.AgentToken))
conn, err := ac.ConnectRPCWithRole(ctx, "agent")
require.NoError(t, err)
defer func() {
_ = conn.Close()
}()
// The connection monitor updates the database asynchronously,
// so we need to wait for first_connected_at to be set.
var agent database.WorkspaceAgent
require.Eventually(t, func() bool {
agent, err = db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), r.Agents[0].ID)
if err != nil {
return false
}
return agent.FirstConnectedAt.Valid
}, testutil.WaitShort, testutil.IntervalFast)
assert.True(t, agent.LastConnectedAt.Valid,
"last_connected_at should be set for agent role")
})
t.Run("NonAgentRoleSkipsMonitoring", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, db := coderdtest.NewWithDatabase(t, nil)
user := coderdtest.CreateFirstUser(t, client)
r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
// Connect with a non-agent role using ConnectRPCWithRole.
// This is how coder-logstream-kube should connect.
ac := agentsdk.New(client.URL, agentsdk.WithFixedToken(r.AgentToken))
conn, err := ac.ConnectRPCWithRole(ctx, "logstream-kube")
require.NoError(t, err)
// Send a log to confirm the RPC connection is functional.
agentAPI := agentproto.NewDRPCAgentClient(conn)
_, err = agentAPI.BatchCreateLogs(ctx, &agentproto.BatchCreateLogsRequest{
LogSourceId: []byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
})
// We don't care about the log source error, just that the
// RPC is functional.
_ = err
// Close the connection and give the server time to process.
_ = conn.Close()
time.Sleep(100 * time.Millisecond)
// Verify that connectivity timestamps were never set.
agent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), r.Agents[0].ID)
require.NoError(t, err)
assert.False(t, agent.FirstConnectedAt.Valid,
"first_connected_at should NOT be set for non-agent role")
assert.False(t, agent.LastConnectedAt.Valid,
"last_connected_at should NOT be set for non-agent role")
assert.False(t, agent.DisconnectedAt.Valid,
"disconnected_at should NOT be set for non-agent role")
})
// NOTE: Backward compatibility (empty role) is implicitly tested by
// existing tests like TestWorkspaceAgentReportStats which use
// ConnectRPC() (no role). The server defaults to monitoring when
// the role query parameter is omitted.
}
+45
@@ -2952,3 +2952,48 @@ func convertToWorkspaceRole(actions []policy.Action) codersdk.WorkspaceRole {
return codersdk.WorkspaceRoleDeleted
}
// @Summary Get users available for workspace creation
// @ID get-users-available-for-workspace-creation
// @Security CoderSessionToken
// @Produce json
// @Tags Workspaces
// @Param organization path string true "Organization ID" format(uuid)
// @Param user path string true "User ID, name, or me"
// @Param q query string false "Search query"
// @Param limit query int false "Limit results"
// @Param offset query int false "Offset for pagination"
// @Success 200 {array} codersdk.MinimalUser
// @Router /organizations/{organization}/members/{user}/workspaces/available-users [get]
func (api *API) workspaceAvailableUsers(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
organization := httpmw.OrganizationParam(r)
// This endpoint requires the user to be able to create workspaces for other
// users in this organization. We check if they can create a workspace with
// a wildcard owner.
if !api.Authorize(r, policy.ActionCreate, rbac.ResourceWorkspace.InOrg(organization.ID).WithOwner(policy.WildcardSymbol)) {
httpapi.Forbidden(rw)
return
}
// Use system context to list all users. The authorization check above
// ensures only users who can create workspaces for others can access this.
//nolint:gocritic // System context needed to list users for workspace owner selection.
users, _, ok := api.GetUsers(rw, r.WithContext(dbauthz.AsSystemRestricted(ctx)))
if !ok {
return
}
minimalUsers := make([]codersdk.MinimalUser, 0, len(users))
for _, user := range users {
minimalUsers = append(minimalUsers, codersdk.MinimalUser{
ID: user.ID,
Username: user.Username,
Name: user.Name,
AvatarURL: user.AvatarURL,
})
}
httpapi.Write(ctx, rw, http.StatusOK, minimalUsers)
}
+48
@@ -5625,6 +5625,54 @@ func TestWorkspaceSharingDisabled(t *testing.T) {
})
}
func TestWorkspaceAvailableUsers(t *testing.T) {
t.Parallel()
t.Run("OrgAdminCanListUsers", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
ctx := testutil.Context(t, testutil.WaitMedium)
// Create an org admin and additional users
orgAdminClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, rbac.ScopedRoleOrgAdmin(owner.OrganizationID))
_, user1 := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
_, user2 := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
// Org admin should be able to list available users
users, err := orgAdminClient.WorkspaceAvailableUsers(ctx, owner.OrganizationID, "me")
require.NoError(t, err)
require.GreaterOrEqual(t, len(users), 4) // owner + orgAdmin + 2 users
// Verify the users we created are in the list
usernames := make([]string, 0, len(users))
for _, u := range users {
usernames = append(usernames, u.Username)
}
require.Contains(t, usernames, user1.Username)
require.Contains(t, usernames, user2.Username)
})
t.Run("MemberCannotListUsers", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
ctx := testutil.Context(t, testutil.WaitMedium)
// Create a regular member
memberClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
// Regular member should not be able to list available users
_, err := memberClient.WorkspaceAvailableUsers(ctx, owner.OrganizationID, "me")
require.Error(t, err)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusForbidden, apiErr.StatusCode())
})
}
func TestWorkspaceCreateWithImplicitPreset(t *testing.T) {
t.Parallel()
+43 -15
@@ -152,7 +152,7 @@ func (c *Client) RewriteDERPMap(derpMap *tailcfg.DERPMap) {
// Release Versions from 2.9+
// Deprecated: use ConnectRPC20WithTailnet
func (c *Client) ConnectRPC20(ctx context.Context) (proto.DRPCAgentClient20, error) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 0))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 0), "")
if err != nil {
return nil, err
}
@@ -165,7 +165,7 @@ func (c *Client) ConnectRPC20(ctx context.Context) (proto.DRPCAgentClient20, err
func (c *Client) ConnectRPC20WithTailnet(ctx context.Context) (
proto.DRPCAgentClient20, tailnetproto.DRPCTailnetClient20, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 0))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 0), "")
if err != nil {
return nil, nil, err
}
@@ -176,7 +176,7 @@ func (c *Client) ConnectRPC20WithTailnet(ctx context.Context) (
// maximally compatible with Coderd Release Versions from 2.12+
// Deprecated: use ConnectRPC21WithTailnet
func (c *Client) ConnectRPC21(ctx context.Context) (proto.DRPCAgentClient21, error) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 1))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 1), "")
if err != nil {
return nil, err
}
@@ -188,7 +188,7 @@ func (c *Client) ConnectRPC21(ctx context.Context) (proto.DRPCAgentClient21, err
func (c *Client) ConnectRPC21WithTailnet(ctx context.Context) (
proto.DRPCAgentClient21, tailnetproto.DRPCTailnetClient21, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 1))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 1), "")
if err != nil {
return nil, nil, err
}
@@ -200,7 +200,7 @@ func (c *Client) ConnectRPC21WithTailnet(ctx context.Context) (
func (c *Client) ConnectRPC22(ctx context.Context) (
proto.DRPCAgentClient22, tailnetproto.DRPCTailnetClient22, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 2))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 2), "")
if err != nil {
return nil, nil, err
}
@@ -212,7 +212,7 @@ func (c *Client) ConnectRPC22(ctx context.Context) (
func (c *Client) ConnectRPC23(ctx context.Context) (
proto.DRPCAgentClient23, tailnetproto.DRPCTailnetClient23, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 3))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 3), "")
if err != nil {
return nil, nil, err
}
@@ -224,7 +224,7 @@ func (c *Client) ConnectRPC23(ctx context.Context) (
func (c *Client) ConnectRPC24(ctx context.Context) (
proto.DRPCAgentClient24, tailnetproto.DRPCTailnetClient24, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 4))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 4), "")
if err != nil {
return nil, nil, err
}
@@ -236,7 +236,7 @@ func (c *Client) ConnectRPC24(ctx context.Context) (
func (c *Client) ConnectRPC25(ctx context.Context) (
proto.DRPCAgentClient25, tailnetproto.DRPCTailnetClient25, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 5))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 5), "")
if err != nil {
return nil, nil, err
}
@@ -248,7 +248,7 @@ func (c *Client) ConnectRPC25(ctx context.Context) (
func (c *Client) ConnectRPC26(ctx context.Context) (
proto.DRPCAgentClient26, tailnetproto.DRPCTailnetClient26, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 6))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 6), "")
if err != nil {
return nil, nil, err
}
@@ -260,7 +260,7 @@ func (c *Client) ConnectRPC26(ctx context.Context) (
func (c *Client) ConnectRPC27(ctx context.Context) (
proto.DRPCAgentClient27, tailnetproto.DRPCTailnetClient27, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 7))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 7), "")
if err != nil {
return nil, nil, err
}
@@ -272,25 +272,53 @@ func (c *Client) ConnectRPC27(ctx context.Context) (
func (c *Client) ConnectRPC28(ctx context.Context) (
proto.DRPCAgentClient28, tailnetproto.DRPCTailnetClient28, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 8))
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 8), "")
if err != nil {
return nil, nil, err
}
return proto.NewDRPCAgentClient(conn), tailnetproto.NewDRPCTailnetClient(conn), nil
}
// ConnectRPC connects to the workspace agent API and tailnet API
func (c *Client) ConnectRPC(ctx context.Context) (drpc.Conn, error) {
return c.connectRPCVersion(ctx, proto.CurrentVersion)
// ConnectRPC28WithRole is like ConnectRPC28 but sends an explicit role
// query parameter to the server. Use "agent" for workspace agents to
// enable connection monitoring.
func (c *Client) ConnectRPC28WithRole(ctx context.Context, role string) (
proto.DRPCAgentClient28, tailnetproto.DRPCTailnetClient28, error,
) {
conn, err := c.connectRPCVersion(ctx, apiversion.New(2, 8), role)
if err != nil {
return nil, nil, err
}
return proto.NewDRPCAgentClient(conn), tailnetproto.NewDRPCTailnetClient(conn), nil
}
func (c *Client) connectRPCVersion(ctx context.Context, version *apiversion.APIVersion) (drpc.Conn, error) {
// ConnectRPC connects to the workspace agent API and tailnet API.
// It does not send a role query parameter, so the server will apply
// its default behavior (currently: enable connection monitoring for
// backward compatibility). Use ConnectRPCWithRole to explicitly
// identify the caller's role.
func (c *Client) ConnectRPC(ctx context.Context) (drpc.Conn, error) {
return c.connectRPCVersion(ctx, proto.CurrentVersion, "")
}
// ConnectRPCWithRole connects to the workspace agent RPC API with an
// explicit role. The role parameter is sent to the server to identify
// the type of client. Use "agent" for workspace agents to enable
// connection monitoring.
func (c *Client) ConnectRPCWithRole(ctx context.Context, role string) (drpc.Conn, error) {
return c.connectRPCVersion(ctx, proto.CurrentVersion, role)
}
func (c *Client) connectRPCVersion(ctx context.Context, version *apiversion.APIVersion, role string) (drpc.Conn, error) {
rpcURL, err := c.SDK.URL.Parse("/api/v2/workspaceagents/me/rpc")
if err != nil {
return nil, xerrors.Errorf("parse url: %w", err)
}
q := rpcURL.Query()
q.Add("version", version.String())
if role != "" {
q.Add("role", role)
}
rpcURL.RawQuery = q.Encode()
jar, err := cookiejar.New(nil)
+113 -7
@@ -372,10 +372,6 @@ type Feature struct {
// Below is only for features that use usage periods.
// SoftLimit is the soft limit of the feature, and is only used for showing
// included limits in the dashboard. No license validation or warnings are
// generated from this value.
SoftLimit *int64 `json:"soft_limit,omitempty"`
// UsagePeriod denotes that the usage is a counter that accumulates over
// this period (and most likely resets with the issuance of the next
// license).
@@ -822,6 +818,11 @@ type OIDCConfig struct {
IconURL serpent.URL `json:"icon_url" typescript:",notnull"`
SignupsDisabledText serpent.String `json:"signups_disabled_text" typescript:",notnull"`
SkipIssuerChecks serpent.Bool `json:"skip_issuer_checks" typescript:",notnull"`
// RedirectURL is optional, defaulting to 'ACCESS_URL'. Only useful in niche
// situations where the OIDC callback domain is different from the ACCESS_URL
// domain.
RedirectURL serpent.URL `json:"redirect_url" typescript:",notnull"`
}
type TelemetryConfig struct {
@@ -852,14 +853,87 @@ type TraceConfig struct {
DataDog serpent.Bool `json:"data_dog" typescript:",notnull"`
}
const cookieHostPrefix = "__Host-"
type HTTPCookieConfig struct {
Secure serpent.Bool `json:"secure_auth_cookie,omitempty" typescript:",notnull"`
SameSite string `json:"same_site,omitempty" typescript:",notnull"`
Secure serpent.Bool `json:"secure_auth_cookie,omitempty" typescript:",notnull"`
SameSite string `json:"same_site,omitempty" typescript:",notnull"`
EnableHostPrefix bool `json:"host_prefix,omitempty" typescript:",notnull"`
}
// cookiesToPrefix is the set of cookies that should be prefixed with the host prefix if EnableHostPrefix is true.
// This is a constant, do not ever mutate it.
var cookiesToPrefix = map[string]struct{}{
SessionTokenCookie: {},
}
// Middleware handles cookie mutation on incoming requests.
//
// This code runs on every request, so efficiency is important; for its
// performance characteristics, see 'BenchmarkHTTPCookieConfigMiddleware'.
// If making changes, please consider the performance implications and run
// the benchmarks.
func (cfg *HTTPCookieConfig) Middleware(next http.Handler) http.Handler {
prefixed := make(map[string]struct{})
for name := range cookiesToPrefix {
prefixed[cookieHostPrefix+name] = struct{}{}
}
return http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
if !cfg.EnableHostPrefix {
// If a deployment had this config enabled and later turned it off, stale
// __Host- cookies could still exist in clients' browsers. These cookies
// have no impact, so we ignore them if they exist (a niche scenario).
next.ServeHTTP(rw, r)
return
}
// When 'EnableHostPrefix', some cookies are set with a `__Host-` prefix. This
// middleware will strip any prefixes, so the backend is unaware of this security
// feature.
//
// This code also handles any unprefixed cookies that are now invalid.
cookies := r.Cookies()
for i, c := range cookies {
// If any cookies that should be prefixed are found without the prefix, remove
// them from the client and the request. This is usually from a migration where
// the prefix was just turned on. In any case, these cookies MUST be dropped.
if _, ok := cookiesToPrefix[c.Name]; ok {
// Remove the cookie from the client to prevent any future requests from sending it.
http.SetCookie(rw, &http.Cookie{
MaxAge: -1, // Delete
Name: c.Name,
Path: "/",
})
// And remove it from the request so the rest of the code doesn't see it.
cookies[i] = nil
}
// Only strip the prefix from the cookies we care about; leave other `__Host-` cookies alone.
if _, ok := prefixed[c.Name]; ok {
c.Name = strings.TrimPrefix(c.Name, cookieHostPrefix)
}
}
// r.Cookies() returns copies, so we need to rebuild the header.
r.Header.Del("Cookie")
for _, c := range cookies {
if c != nil {
r.AddCookie(c)
}
}
next.ServeHTTP(rw, r)
})
}
func (cfg *HTTPCookieConfig) Apply(c *http.Cookie) *http.Cookie {
c.Secure = cfg.Secure.Value()
c.SameSite = cfg.HTTPSameSite()
if cfg.EnableHostPrefix {
// Only prefix the cookies we want to be prefixed.
if _, ok := cookiesToPrefix[c.Name]; ok {
c.Name = cookieHostPrefix + c.Name
}
}
return c
}
@@ -1379,7 +1453,8 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
Value: &c.HTTPAddress,
Group: &deploymentGroupNetworkingHTTP,
YAML: "httpAddress",
Annotations: serpent.Annotations{}.Mark(annotationExternalProxies, "true"),
Annotations: serpent.Annotations{}.
Mark(annotationExternalProxies, "true"),
}
tlsBindAddress := serpent.Option{
Name: "TLS Address",
@@ -2365,6 +2440,21 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
Group: &deploymentGroupOIDC,
YAML: "dangerousSkipIssuerChecks",
},
{
Name: "OIDC Redirect URL",
Description: "Optional override of the default redirect URL, which is derived from the deployment's access URL. " +
"Useful when a deployment serves more than one domain. This setting can also break OIDC, so use it with caution.",
Required: false,
Flag: "oidc-redirect-url",
Env: "CODER_OIDC_REDIRECT_URL",
YAML: "oidc-redirect-url",
Value: &c.OIDC.RedirectURL,
Group: &deploymentGroupOIDC,
UseInstead: nil,
// In most deployments, this setting can only complicate and break OIDC.
// So hide it, and only surface it to the small number of users that need it.
Hidden: true,
},
// Telemetry settings
telemetryEnable,
{
@@ -2800,6 +2890,9 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
Description: "Controls if the 'Secure' property is set on browser session cookies.",
Flag: "secure-auth-cookie",
Env: "CODER_SECURE_AUTH_COOKIE",
DefaultFn: func() string {
return strconv.FormatBool(c.AccessURL.Scheme == "https")
},
Value: &c.HTTPCookies.Secure,
Group: &deploymentGroupNetworking,
YAML: "secureAuthCookie",
@@ -2817,6 +2910,19 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
YAML: "sameSiteAuthCookie",
Annotations: serpent.Annotations{}.Mark(annotationExternalProxies, "true"),
},
{
Name: "__Host Prefix Cookies",
Description: "Recommended to be enabled. Adds the `__Host-` prefix to cookies to guarantee they are only set by the correct domain.",
Flag: "host-prefix-cookie",
Env: "CODER_HOST_PREFIX_COOKIE",
Value: serpent.BoolOf(&c.HTTPCookies.EnableHostPrefix),
// Ideally this would default to true, but that would break existing
// frontend interactions with the Coder API. So for compatibility
// reasons, it defaults to false.
Default: "false",
Group: &deploymentGroupNetworking,
YAML: "hostPrefixCookie",
Annotations: serpent.Annotations{}.Mark(annotationExternalProxies, "true"),
},
{
Name: "Terms of Service URL",
Description: "A URL to an external Terms of Service that must be accepted by users when logging in.",
+224 -1
@@ -5,6 +5,8 @@ import (
"embed"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"runtime"
"strings"
"testing"
@@ -747,7 +749,6 @@ func TestRetentionConfigParsing(t *testing.T) {
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
@@ -883,3 +884,225 @@ func TestComputeMaxIdleConns(t *testing.T) {
})
}
}
func TestHTTPCookieConfigMiddleware(t *testing.T) {
t.Parallel()
// Realistic cookies that are always present in production.
// These cookies are added to every test.
baseCookies := []*http.Cookie{
{Name: "_ga", Value: "GA1.1.661026807.1770083336"},
{Name: "_ga_G0Q1B9GRC0", Value: "GS2.1.s1771343727$o49$g1$t1771343993$j48$l0$h0"},
{Name: "csrf_token", Value: "gDiKk8GjTM2iCUHAPfN9GlC+DGjzAprlLi2vJ+5TBU0="},
}
cases := []struct {
name string
cfg codersdk.HTTPCookieConfig
extraCookies []*http.Cookie
expectedCookies map[string]string // cookie name -> value that handler should see
expectedDeleted []string // if any cookies are supposed to be deleted via Set-Cookie
}{
{
name: "Disabled_PassesThrough",
cfg: codersdk.HTTPCookieConfig{},
extraCookies: []*http.Cookie{
{Name: codersdk.SessionTokenCookie, Value: "token123"},
},
expectedCookies: map[string]string{
codersdk.SessionTokenCookie: "token123",
},
},
{
name: "Enabled_StripsPrefixFromCookie",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
{Name: "__Host-" + codersdk.SessionTokenCookie, Value: "token123"},
},
expectedCookies: map[string]string{
codersdk.SessionTokenCookie: "token123",
},
},
{
name: "Enabled_DeletesUnprefixedCookie",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
// Unprefixed cookie that should be in the "to prefix" list.
{Name: codersdk.SessionTokenCookie, Value: "unprefixed-token"},
},
expectedCookies: map[string]string{
// Session token should NOT be present - it was deleted.
},
expectedDeleted: []string{codersdk.SessionTokenCookie},
},
{
name: "Enabled_BothPrefixedAndUnprefixed",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
// Browser might send both during migration.
{Name: codersdk.SessionTokenCookie, Value: "unprefixed-token"},
{Name: "__Host-" + codersdk.SessionTokenCookie, Value: "prefixed-token"},
},
expectedCookies: map[string]string{
codersdk.SessionTokenCookie: "prefixed-token", // Prefixed wins.
},
expectedDeleted: []string{codersdk.SessionTokenCookie},
},
{
name: "Enabled_MultiplePrefixedCookies",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
{Name: "__Host-" + codersdk.SessionTokenCookie, Value: "session"},
{Name: "__Host-SomeOtherCookie", Value: "other-cookie"},
{Name: "__Host-Santa", Value: "santa"},
},
expectedCookies: map[string]string{
codersdk.SessionTokenCookie: "session",
"__Host-SomeOtherCookie": "other-cookie",
"__Host-Santa": "santa",
},
},
{
name: "Enabled_UnrelatedCookiesUnchanged",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
{Name: "custom_cookie", Value: "custom-value"},
{Name: "__Host-" + codersdk.SessionTokenCookie, Value: "session"},
{Name: "__Host-foobar", Value: "do-not-change-me"},
},
expectedCookies: map[string]string{
"custom_cookie": "custom-value",
codersdk.SessionTokenCookie: "session",
"__Host-foobar": "do-not-change-me",
},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var handlerCookies []*http.Cookie
handler := tc.cfg.Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
handlerCookies = r.Cookies()
}))
req := httptest.NewRequest("GET", "/", nil)
for _, c := range baseCookies {
req.AddCookie(c)
}
for _, c := range tc.extraCookies {
req.AddCookie(c)
}
rw := httptest.NewRecorder()
handler.ServeHTTP(rw, req)
// Verify cookies seen by handler.
gotCookies := make(map[string]string)
for _, c := range handlerCookies {
gotCookies[c.Name] = c.Value
}
for _, v := range baseCookies {
tc.expectedCookies[v.Name] = v.Value
}
assert.Equal(t, tc.expectedCookies, gotCookies)
// Verify Set-Cookie header for deletion.
setCookies := rw.Result().Cookies()
if len(tc.expectedDeleted) > 0 {
assert.NotEmpty(t, setCookies, "expected Set-Cookie header for cookie deletion")
expDel := make(map[string]struct{})
for _, name := range tc.expectedDeleted {
expDel[name] = struct{}{}
}
// Verify it's a deletion (MaxAge < 0).
for _, c := range setCookies {
assert.Less(t, c.MaxAge, 0, "Set-Cookie should have MaxAge < 0 for deletion")
delete(expDel, c.Name)
}
require.Empty(t, expDel, "expected Set-Cookie header for deletion")
} else {
assert.Empty(t, setCookies, "did not expect Set-Cookie header")
}
})
}
}
func BenchmarkHTTPCookieConfigMiddleware(b *testing.B) {
noop := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {})
// Realistic cookies that are always present in production.
baseCookies := []*http.Cookie{
{Name: "_ga", Value: "GA1.1.661026807.1770083336"},
{Name: "_ga_G0Q1B9GRC0", Value: "GS2.1.s1771343727$o49$g1$t1771343993$j48$l0$h0"},
{Name: "csrf_token", Value: "gDiKk8GjTM2iCUHAPfN9GlC+DGjzAprlLi2vJ+5TBU0="},
}
cases := []struct {
name string
cfg codersdk.HTTPCookieConfig
extraCookies []*http.Cookie
}{
{
name: "Disabled",
cfg: codersdk.HTTPCookieConfig{},
extraCookies: []*http.Cookie{
{Name: codersdk.SessionTokenCookie, Value: "KybJV9fNul-u11vlll9wiF6eLQDxBVucD"},
},
},
{
name: "Enabled_NoPrefixedCookies",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
{Name: codersdk.SessionTokenCookie, Value: "KybJV9fNul-u11vlll9wiF6eLQDxBVucD"},
},
},
{
name: "Enabled_WithPrefixedCookie",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
{Name: "__Host-" + codersdk.SessionTokenCookie, Value: "KybJV9fNul-u11vlll9wiF6eLQDxBVucD"},
},
},
{
name: "Enabled_MultiplePrefixedCookies",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
{Name: "__Host-" + codersdk.SessionTokenCookie, Value: "KybJV9fNul-u11vlll9wiF6eLQDxBVucD"},
{Name: "__Host-" + codersdk.PathAppSessionTokenCookie, Value: "xyz123"},
{Name: "__Host-" + codersdk.SubdomainAppSessionTokenCookie, Value: "abc456"},
{Name: "__Host-" + "foobar", Value: "do-not-change-me"},
},
},
{
name: "Enabled_NonSessionPrefixedCookies",
cfg: codersdk.HTTPCookieConfig{EnableHostPrefix: true},
extraCookies: []*http.Cookie{
{Name: "__Host-foobar", Value: "do-not-change-me"},
},
},
}
for _, tc := range cases {
b.Run(tc.name, func(b *testing.B) {
handler := tc.cfg.Middleware(noop)
rw := httptest.NewRecorder()
// Combine base cookies with test-specific cookies.
allCookies := make([]*http.Cookie, len(baseCookies), len(baseCookies)+len(tc.extraCookies))
copy(allCookies, baseCookies)
allCookies = append(allCookies, tc.extraCookies...)
b.ResetTimer()
for i := 0; i < b.N; i++ {
req := httptest.NewRequest("GET", "/", nil)
for _, c := range allCookies {
req.AddCookie(c)
}
handler.ServeHTTP(rw, req)
}
})
}
}
+3 -2
@@ -12,8 +12,9 @@ import (
)
const (
LicenseExpiryClaim = "license_expires"
LicenseTelemetryRequiredErrorText = "License requires telemetry but telemetry is disabled"
LicenseExpiryClaim = "license_expires"
LicenseTelemetryRequiredErrorText = "License requires telemetry but telemetry is disabled"
LicenseManagedAgentLimitExceededWarningText = "You have built more workspaces with managed agents than your license allows."
)
type AddLicenseRequest struct {
+16
@@ -787,3 +787,19 @@ func (c *Client) WorkspaceExternalAgentCredentials(ctx context.Context, workspac
var credentials ExternalAgentCredentials
return credentials, json.NewDecoder(res.Body).Decode(&credentials)
}
// WorkspaceAvailableUsers returns users available for workspace creation.
// This is used to populate the owner dropdown when creating workspaces for
// other users.
func (c *Client) WorkspaceAvailableUsers(ctx context.Context, organizationID uuid.UUID, userID string) ([]MinimalUser, error) {
res, err := c.Request(ctx, http.MethodGet, fmt.Sprintf("/api/v2/organizations/%s/members/%s/workspaces/available-users", organizationID, userID), nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, ReadBodyAsError(res)
}
var users []MinimalUser
return users, json.NewDecoder(res.Body).Decode(&users)
}
+363
@@ -0,0 +1,363 @@
# docker-compose.dev.yml — Development environment
services:
database:
labels:
- "com.coder.dev"
networks:
- coder-dev
image: postgres:17
environment:
POSTGRES_USER: coder
POSTGRES_PASSWORD: coder
POSTGRES_DB: coder
ports:
- "5432:5432"
volumes:
- coder_dev_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U coder"]
interval: 2s
timeout: 5s
retries: 10
# Ensure named volumes are owned by the coder user (uid 1000)
# since Docker creates them as root by default.
init-volumes:
labels:
- "com.coder.dev"
image: codercom/oss-dogfood:latest
user: "0:0"
volumes:
- go_cache:/go-cache
- coder_cache:/cache
- bootstrap_token:/bootstrap
- site_node_modules:/app/site/node_modules
command: >
chown -R 1000:1000
/go-cache
/cache
/bootstrap
/app/site/node_modules
build-slim:
labels:
- "com.coder.dev"
network_mode: "host"
image: codercom/oss-dogfood:latest
depends_on:
init-volumes:
condition: service_completed_successfully
database:
condition: service_healthy
working_dir: /app
# Add the Docker group so coderd can access the Docker socket.
# If your host's docker group is not 999, override it:
# export DOCKER_GROUP=$(getent group docker | cut -d: -f3)
group_add:
- "${DOCKER_GROUP:-999}"
environment:
GOMODCACHE: /go-cache/mod
GOCACHE: /go-cache/build
DOCKER_HOST: "${CODER_DEV_DOCKER_HOST:-unix:///var/run/docker.sock}"
volumes:
- .:/app
- go_cache:/go-cache
- coder_cache:/cache
- "${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock"
command: >
sh -c '
if [ "${CODER_BUILD_AGPL:-0}" = "1" ]; then
make -j build-slim CODER_BUILD_AGPL=1
else
make -j build-slim
fi &&
mkdir -p /cache/site/orig/bin &&
cp site/out/bin/coder-* /cache/site/orig/bin/
'
  coderd:
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    image: codercom/oss-dogfood:latest
    depends_on:
      database:
        condition: service_healthy
      build-slim:
        condition: service_completed_successfully
    environment:
      CODER_PG_CONNECTION_URL: "postgresql://coder:coder@database:5432/coder?sslmode=disable"
      CODER_HTTP_ADDRESS: "0.0.0.0:3000"
      CODER_ACCESS_URL: "${CODER_DEV_ACCESS_URL:-http://localhost:3000}"
      CODER_DEV_ADMIN_PASSWORD: "${CODER_DEV_ADMIN_PASSWORD:-SomeSecurePassword!}"
      CODER_SWAGGER_ENABLE: "true"
      CODER_DANGEROUS_ALLOW_CORS_REQUESTS: "true"
      CODER_TELEMETRY_ENABLE: "false"
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
      CODER_CACHE_DIRECTORY: /cache
      DOCKER_HOST: "${CODER_DEV_DOCKER_HOST:-unix:///var/run/docker.sock}"
    # Add the Docker group so coderd can access the Docker socket.
    # Override DOCKER_GROUP if your host's docker group is not 999.
    group_add:
      - "${DOCKER_GROUP:-999}"
    ports:
      - "3000:3000"
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:3000/healthz || exit 1"]
      interval: 5s
      timeout: 5s
      retries: 30
      start_period: 120s
    working_dir: /app
    volumes:
      - .:/app
      - go_cache:/go-cache
      - coder_cache:/cache
      - "${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock"
    command: >
      sh -c '
        CMD_PATH="./enterprise/cmd/coder"
        [ "${CODER_BUILD_AGPL:-0}" = "1" ] && CMD_PATH="./cmd/coder"
        exec go run "$$CMD_PATH" server \
          --http-address 0.0.0.0:3000 \
          --access-url "${CODER_DEV_ACCESS_URL:-http://localhost:3000}" \
          --swagger-enable \
          --dangerous-allow-cors-requests=true \
          --enable-terraform-debug-mode
      '
  setup-init:
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    image: codercom/oss-dogfood:latest
    depends_on:
      coderd:
        condition: service_healthy
    working_dir: /app
    environment:
      CODER_URL: "http://coderd:3000"
      CODER_DEV_ADMIN_PASSWORD: "${CODER_DEV_ADMIN_PASSWORD:-SomeSecurePassword!}"
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
    volumes:
      - .:/app
      - go_cache:/go-cache
      - bootstrap_token:/bootstrap
      - ./scripts/docker-dev:/scripts:ro
    command: ["sh", "/scripts/setup-init.sh"]
  setup-users:
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    image: codercom/oss-dogfood:latest
    depends_on:
      setup-init:
        condition: service_completed_successfully
    working_dir: /app
    environment:
      CODER_URL: "http://coderd:3000"
      CODER_DEV_MEMBER_PASSWORD: "${CODER_DEV_MEMBER_PASSWORD:-SomeSecurePassword!}"
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
    volumes:
      - .:/app
      - go_cache:/go-cache
      - bootstrap_token:/bootstrap:ro
      - ./scripts/docker-dev:/scripts:ro
    command: ["sh", "/scripts/setup-users.sh"]
  setup-template:
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    image: codercom/oss-dogfood:latest
    depends_on:
      setup-init:
        condition: service_completed_successfully
    working_dir: /app
    environment:
      CODER_URL: "http://coderd:3000"
      DOCKER_HOST: "${CODER_DEV_DOCKER_HOST:-unix:///var/run/docker.sock}"
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
    volumes:
      - .:/app
      - go_cache:/go-cache
      - bootstrap_token:/bootstrap:ro
      - ./scripts/docker-dev:/scripts:ro
      - "${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock"
    command: ["sh", "/scripts/setup-template.sh"]
  site:
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    image: codercom/oss-dogfood:latest
    depends_on:
      setup-template:
        condition: service_completed_successfully
    working_dir: /app/site
    environment:
      CODER_HOST: "http://coderd:3000"
    ports:
      - "8080:8080"
    volumes:
      - ./site:/app/site
      - site_node_modules:/app/site/node_modules
    command: sh -c "pnpm install --frozen-lockfile && pnpm dev --host"
  wsproxy:
    profiles: ["proxy"]
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    image: codercom/oss-dogfood:latest
    depends_on:
      setup-init:
        condition: service_completed_successfully
    working_dir: /app
    environment:
      CODER_URL: "http://coderd:3000"
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
    volumes:
      - .:/app
      - go_cache:/go-cache
      - bootstrap_token:/bootstrap:ro
    ports:
      - "3010:3010"
    command: >
      sh -c '
        export CODER_SESSION_TOKEN=$$(cat /bootstrap/token) &&
        go run ./cmd/coder wsproxy delete local-proxy --yes 2>/dev/null || true
        PROXY_TOKEN=$$(go run ./cmd/coder wsproxy create \
          --name=local-proxy \
          --display-name="Local Proxy" \
          --icon="/emojis/1f4bb.png" \
          --only-token)
        exec go run ./cmd/coder wsproxy server \
          --dangerous-allow-cors-requests=true \
          --http-address=0.0.0.0:3010 \
          --proxy-session-token="$$PROXY_TOKEN" \
          --primary-access-url=http://coderd:3000
      '
  setup-multi-org:
    profiles: ["multi-org"]
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    image: codercom/oss-dogfood:latest
    depends_on:
      setup-users:
        condition: service_completed_successfully
      setup-template:
        condition: service_completed_successfully
    working_dir: /app
    environment:
      CODER_URL: "http://coderd:3000"
      DOCKER_HOST: "${CODER_DEV_DOCKER_HOST:-unix:///var/run/docker.sock}"
      LICENSE_FILE: "${CODER_DEV_LICENSE_FILE:-./license.txt}"
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
    volumes:
      - .:/app
      - go_cache:/go-cache
      - bootstrap_token:/bootstrap:ro
      - ./scripts/docker-dev:/scripts:ro
      - "${CODER_DEV_LICENSE_FILE:-./license.txt}:/license.txt:ro"
    command: ["sh", "/scripts/setup-multi-org.sh"]
  ext-provisioner:
    profiles: ["multi-org"]
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:2112"]
    image: codercom/oss-dogfood:latest
    depends_on:
      setup-multi-org:
        condition: service_completed_successfully
    group_add:
      - "${DOCKER_GROUP:-999}"
    working_dir: /app
    environment:
      CODER_URL: "http://coderd:3000"
      DOCKER_HOST: "${CODER_DEV_DOCKER_HOST:-unix:///var/run/docker.sock}"
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
      CODER_PROMETHEUS_ENABLE: "1"
    volumes:
      - .:/app
      - go_cache:/go-cache
      - bootstrap_token:/bootstrap:ro
      - "${DOCKER_SOCKET:-/var/run/docker.sock}:/var/run/docker.sock"
    command: >
      sh -c '
        export CODER_SESSION_TOKEN=$$(cat /bootstrap/token) &&
        exec go run ./enterprise/cmd/coder provisionerd start \
          --tag "scope=organization" \
          --name second-org-daemon \
          --org second-organization
      '
  setup-multi-org-template:
    profiles: ["multi-org"]
    labels:
      - "com.coder.dev"
    networks:
      - coder-dev
    image: codercom/oss-dogfood:latest
    depends_on:
      setup-multi-org:
        condition: service_completed_successfully
      ext-provisioner:
        condition: service_healthy
    working_dir: /app
    environment:
      CODER_URL: "http://coderd:3000"
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
    volumes:
      - .:/app
      - go_cache:/go-cache
      - bootstrap_token:/bootstrap:ro
      - ./scripts/docker-dev:/scripts:ro
    command: ["sh", "-c", "/scripts/setup-template.sh second-organization"]
volumes:
  coder_dev_data:
    labels:
      - "com.coder.dev"
  go_cache:
    labels:
      - "com.coder.dev"
  coder_cache:
    labels:
      - "com.coder.dev"
  site_node_modules:
    labels:
      - "com.coder.dev"
  bootstrap_token:
    labels:
      - "com.coder.dev"
networks:
  coder-dev:
    labels:
      - "com.coder.dev"
    name: coder-dev
    driver: bridge
# Dev Containers
Dev Containers allow developers to define their development environment
as code using the [Dev Container specification](https://containers.dev/).
Configuration lives in a `devcontainer.json` file alongside source code,
enabling consistent, reproducible environments.
By adopting Dev Containers, organizations can:
- **Standardize environments**: Eliminate "works on my machine" issues while
still allowing developers to customize their tools within approved boundaries.
- **Improve security**: Use hardened base images and controlled package
registries to enforce security policies while enabling developer self-service.
Coder supports multiple approaches for running Dev Containers. Choose based on
your infrastructure and workflow requirements.
## Comparison
| Method | Dev Container CLI | Envbuilder | CI/CD Pre-built |
|-------------------------------------------|--------------------------------------------------------|---------------------------------------|-----------------------------------------------------------|
| **Standard Dev Container implementation** | ✅ Yes | ❌ No | ✅ Yes |
| **Full Dev Container Spec Support** | ✅ All options | ❌ Limited options | \~ Most options |
| **Startup Time** | Build at runtime, faster with caching | Build at runtime, faster with caching | Fast (pre-built) |
| **Docker Required** | ❌ Yes | ✅ No | ✅ No |
| **Caching** | More difficult | ✅ Yes | ✅ Yes |
| **Repo Discovery** | ✅ Yes | ❌ No | ❌ No |
| **Custom Apps in-spec** | ✅ Via spec args | ❌ No | ❌ No |
| **Debugging** | Easy | Very difficult | Moderate |
| **Versioning** | \~ Via spec, or template | \~ Via spec, or template | ✅ Image tags |
| **Testing Pipeline** | \~ Via CLI in CI/CD | \~ Via CLI in CI/CD | ✅ Yes, via the same pipeline |
| **Feedback Loop** | ✅ Fast | ✅ Fast | Slow (build, and then test) |
| **Maintenance Status** | ✅ Active | ⚠️ Maintenance mode | ✅ Active |
| **Best For** | Dev flexibility, rapid iteration, feature completeness | Restricted environments | Controlled and centralized releases, less dev flexibility |
## Dev Container CLI
The Dev Container CLI integration uses the standard `@devcontainers/cli` and Docker to run
Dev Containers inside your workspace. This is the recommended approach for most use
cases and provides the most complete Dev Container experience.
It uses the
[devcontainers-cli module](https://registry.coder.com/modules/devcontainers-cli),
the `coder_devcontainer` Terraform resource, and
`CODER_AGENT_DEVCONTAINERS_ENABLE=true`.
**Pros:**
- Standard Dev Container implementation via Microsoft's official `@devcontainers/cli` package.
- Supports all Dev Container configuration options.
- Supports custom arguments in the Dev Container spec for defining custom apps
without needing template changes.
- Supports discovery of repos with Dev Containers in them.
- Easier to debug, since you have access to the outer container.
**Cons / Requirements:**
- Requires Docker in workspaces. This does not necessarily mean Docker-in-Docker
or a specific Kubernetes runtime — you could use Rootless Podman or a
privileged sidecar.
- Caching is more difficult than with Envbuilder or CI/CD pre-built approaches.
**Best for:**
- Dev flexibility, rapid iteration, and feature completeness.
- Workspaces with Docker available (Docker-in-Docker or mounted socket).
- Dev Container management in the Coder dashboard (discovery, status, rebuild).
- Multiple Dev Containers per workspace.
See the [Dev Containers Integration](./integration.md) page for instructions.
For user documentation, see the
[Dev Containers user guide](../../../user-guides/devcontainers/index.md).
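For orientation, here is a minimal `devcontainer.json` sketch for this approach. The image, feature, and command shown are illustrative placeholders, not requirements of the integration:

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/node:1": {}
  },
  "postCreateCommand": "npm install"
}
```

Placing this file at `.devcontainer/devcontainer.json` in the repository is enough for the CLI integration to build and start the container.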
## Envbuilder
Envbuilder transforms the workspace environment itself from a Dev Container spec (i.e., `devcontainer.json`),
rather than running containers inside the workspace. It does not require a Docker
daemon.
> [!NOTE]
> Envbuilder is in **maintenance mode**. No new features are planned to be
> implemented. For most use cases, the
> [Dev Container CLI](#dev-container-cli) or [CI/CD Pre-built](#cicd-pre-built)
> approaches are recommended.
**Pros:**
- Does not require Docker in workspaces.
- Easier caching.
**Cons:**
- Very complicated to debug, since Envbuilder replaces the filesystem of the
container. You can't access that environment within Coder easily if it fails,
and you won't have many debug tools.
- Does not support all of the Dev Container configuration options.
- Does not support discovery of repos with Dev Containers in them.
- Less flexible and more complex in general.
**Best for:**
- Environments where Docker is unavailable or restricted.
- Infrastructure-level control over image builds, caching, and security scanning.
- Kubernetes-native deployments without privileged containers.
See the [Envbuilder](./envbuilder/index.md) page for instructions.
## CI/CD Pre-built
Build the Dev Container image in your CI/CD pipeline and reference it from
Terraform. This approach separates the image build from workspace startup,
resulting in fast startup times and a generic template without any
Dev Container-specific configuration.
**Pros:**
- Standard Dev Container implementation via Microsoft's official `@devcontainers/cli` package.
- Faster startup time — no need for a specific caching setup.
- The template is generic and doesn't have any Dev Container-specific
configuration items.
- Versioned via image tags.
- Testable pipeline.
**Cons:**
- Adds a build step.
- Does not support all of the runtime options, but still supports more options
than Envbuilder.
- Does not support discovery of repos with Dev Containers.
- Slow feedback loop (build, then test).
**Best for:**
- Controlled and centralized releases with less dev flexibility.
- Teams that already have CI/CD pipelines for building images.
- Environments that need fast, predictable startup times.
For an example workflow, see the
[uwu/basic-env CI/CD workflow](https://github.com/uwu/basic-env/blob/main/.github/workflows/_build-and-push.yml).
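A minimal sketch of such a pipeline, assuming GitHub Actions and the `devcontainers/ci` action (the registry path and image name are placeholders for your own):

```yaml
name: build-devcontainer
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      # Builds the image from .devcontainer/devcontainer.json and pushes it.
      - uses: devcontainers/ci@v0.3
        with:
          imageName: ghcr.io/your-org/your-dev-env
          push: always
```

Your Terraform template then references the pushed tag as an ordinary workspace image.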
# Configure a template for Dev Containers
This guide covers the Dev Containers CLI Integration, which uses Docker. For
environments without Docker, see [Envbuilder](./envbuilder/index.md) as an
alternative.
To enable Dev Containers in workspaces, configure your template with the [`devcontainers-cli`](https://registry.coder.com/modules/coder/devcontainers-cli)
module and configurations outlined in this doc.
> [!WARNING]
> Dev Containers are currently not supported in Windows or macOS workspaces.
## Configuration Modes
There are two approaches to configuring Dev Containers in Coder:
### Manual Configuration
Use the [`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer) Terraform resource to explicitly define which Dev
Container(s) should be started in your workspace. This approach provides:
- Predictable behavior and explicit control
- Clear template configuration
or work with many projects, as it reduces template maintenance overhead.
Use the
[devcontainers-cli](https://registry.coder.com/modules/devcontainers-cli) module
to ensure that the `@devcontainers/cli` NPM package is installed in your workspace:
```terraform
module "devcontainers-cli" {
  # ... (module source, version, and agent_id; see the registry page)
}
```

Alternatively, install the devcontainer CLI manually in your base image.
The
[`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer)
resource automatically starts a specific Dev Container in your workspace, ensuring it's
ready when you access the workspace:
```terraform
resource "coder_devcontainer" "my-repository" {
  agent_id         = coder_agent.dev.id
  workspace_folder = "/home/coder/my-repository"
}
```

For multi-repo workspaces, define multiple `coder_devcontainer` resources, each
pointing to a different repository. Each one runs as a separate sub-agent with
its own terminal and apps in the dashboard.
## Enable Dev Containers CLI Integration
The Dev Containers CLI Integration is **enabled by default** in Coder 2.24.0 and later.
You don't need to set any environment variables unless you want to change the
default behavior.
### `CODER_AGENT_DEVCONTAINERS_ENABLE`
**Default: `true`** • **Added in: v2.24.0**
Enables the Dev Containers CLI Integration in the Coder agent.
The Dev Containers feature is enabled by default. You can explicitly disable it
by setting this to `false`.
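If you do want to disable it, one way is to set the variable on the agent in your template. This is a sketch; the agent name, `os`, and `arch` are placeholders for your own template's values:

```terraform
resource "coder_agent" "dev" {
  os   = "linux"
  arch = "amd64"

  # Explicitly opt out of the Dev Containers CLI integration.
  env = {
    CODER_AGENT_DEVCONTAINERS_ENABLE = "false"
  }
}
```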
during workspace initialization. This only applies to Dev Containers found via
project discovery. Dev Containers defined with the `coder_devcontainer` resource
always auto-start regardless of this setting.
## Attach Resources to Dev Containers
You can attach
[`coder_app`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/app),
[`coder_script`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/script),
and [`coder_env`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/env)
resources to a `coder_devcontainer` by referencing its `subagent_id` attribute
as the `agent_id`:
```terraform
resource "coder_devcontainer" "my-repository" {
  count            = data.coder_workspace.me.start_count
  agent_id         = coder_agent.dev.id
  workspace_folder = "/home/coder/my-repository"
}

resource "coder_app" "code-server" {
  count    = data.coder_workspace.me.start_count
  agent_id = coder_devcontainer.my-repository[0].subagent_id
  # ...
}

resource "coder_script" "dev-setup" {
  count    = data.coder_workspace.me.start_count
  agent_id = coder_devcontainer.my-repository[0].subagent_id
  # ...
}

resource "coder_env" "my-var" {
  count    = data.coder_workspace.me.start_count
  agent_id = coder_devcontainer.my-repository[0].subagent_id
  # ...
}
```
This also enables using [Coder registry](https://registry.coder.com) modules
that depend on these resources inside dev containers, by passing the
`subagent_id` as the module's `agent_id`.
### Terraform-managed dev containers
When a `coder_devcontainer` has any `coder_app`, `coder_script`, or `coder_env`
resource attached, it becomes a **terraform-managed** dev container. This
changes how Coder handles the sub-agent:
- The sub-agent is pre-defined during Terraform provisioning rather than created
dynamically.
- On dev container configuration changes, Coder updates the sub-agent in-place
instead of deleting and recreating it.
### Interaction with devcontainer.json customizations
Terraform-defined resources and
[`devcontainer.json` customizations](../../../user-guides/devcontainers/customizing-dev-containers.md)
work together with some limitations. The `displayApps` settings from
`devcontainer.json` are applied to terraform-managed dev containers, so you can
control built-in app visibility (e.g., hide VS Code Insiders) via
`devcontainer.json` even when using Terraform resources.
However, custom `apps` defined in `devcontainer.json` are **not applied** to
terraform-managed dev containers. If you need custom apps, define them as
`coder_app` resources in Terraform instead.
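For example, hiding VS Code Insiders via `displayApps` still works for a terraform-managed dev container. A sketch, with key names following the `displayApps` convention described in the user guide:

```json
{
  "customizations": {
    "coder": {
      "displayApps": {
        "vscode_insiders": false
      }
    }
  }
}
```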
## Per-Container Customizations
Developers can customize individual dev containers using the `customizations.coder`
block in their `devcontainer.json` file. Available options include:
For the full reference, see
[Customizing dev containers](../../../user-guides/devcontainers/customizing-dev-containers.md).
## Simplified Template Example
Here's a simplified template example that uses Dev Containers with manual
configuration:
```terraform
resource "coder_devcontainer" "my-repository" {
  agent_id         = coder_agent.dev.id
  workspace_folder = "/home/coder/my-repository"
}
# Attaching resources to dev containers is optional. By attaching
# this resource to the dev container, we are changing how the dev
# container will be treated by Coder. This limits the ability to
# customize the injected agent via the devcontainer.json file.
resource "coder_env" "env" {
  count    = data.coder_workspace.me.start_count
  agent_id = coder_devcontainer.my-repository[0].subagent_id
  name     = "MY_VAR"
  value    = "my-value"
}
```
### Alternative: Project Discovery with Autostart
By default, discovered containers appear in the dashboard but developers must
manually start them. To have them start automatically, enable autostart by
setting the `CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE` environment
variable to `true` within the workspace. For example, with a Docker-based
template:
```terraform
resource "docker_container" "workspace" {
  # ... other workspace container configuration ...
  env = [
    "CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=true",
  ]
}
```

With autostart enabled:
- Discovered containers automatically build and start during workspace
initialization
- The [`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer) resource is not required
- Developers can work with multiple projects seamlessly
> [!NOTE]
>
> When using project discovery, you still need to install the `devcontainer` CLI
> using the module or in your base image.
## Example Template
The [Docker (Dev Containers)](https://github.com/coder/coder/tree/main/examples/templates/docker-devcontainer)
starter template demonstrates the Dev Containers CLI Integration using Docker-in-Docker.
It includes the [`devcontainers-cli`](https://registry.coder.com/modules/coder/devcontainers-cli) module, [`git-clone`](https://registry.coder.com/modules/git-clone) module, and the
[`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer) resource.
## Next Steps
- [Dev Containers CLI Integration user guide](../../../user-guides/devcontainers/index.md)
- [Customizing Dev Containers](../../../user-guides/devcontainers/customizing-dev-containers.md)
- [Working with Dev Containers](../../../user-guides/devcontainers/working-with-dev-containers.md)
- [Troubleshooting Dev Containers](../../../user-guides/devcontainers/troubleshooting-dev-containers.md)
@@ -4,11 +4,11 @@ Dev containers extend your template with containerized development environments,
allowing developers to work in consistent, reproducible setups defined by
`devcontainer.json` files.
Coder's main Dev Containers Integration uses the standard `@devcontainers/cli` and
Docker to run containers inside workspaces.
For setup instructions, see
[Dev Containers Integration](../../integrations/devcontainers/integration.md).
For alternative approaches and comparisons between them, see the
[Dev Containers](../../integrations/devcontainers/index.md) page.
For an alternative approach that doesn't require Docker, see
[Envbuilder](../../integrations/devcontainers/envbuilder/index.md).
with different characteristics and requirements:
1. **nsjail** - Uses Linux namespaces for isolation. This is the default jail
type and provides network namespace isolation. See
[nsjail documentation](./nsjail/index.md) for detailed information about runtime
requirements and Docker configuration.
2. **landjail** - Uses Landlock V4 for network isolation. This provides network
# nsjail on Docker
This page describes the runtime and permission requirements for running Agent
Boundaries with the **nsjail** jail type on **Docker**.
For an overview of nsjail, see [nsjail](./index.md).
## Runtime & Permission Requirements for Running Boundary in Docker
This section describes the Linux capabilities and runtime configurations
required to run Agent Boundaries with nsjail inside a Docker container.
# nsjail on ECS
This page describes the runtime and permission requirements for running
Boundary with the **nsjail** jail type on **Amazon ECS**.
## Runtime & Permission Requirements for Running Boundary in ECS
The setup for ECS is similar to [nsjail on Kubernetes](./k8s.md); that environment
is better explored and tested, so the Kubernetes page is a useful reference. On
ECS, requirements depend on the node OS and how ECS runs your tasks. The
following examples use **ECS with Self Managed Node Groups** (EC2 launch type).
---
### Example 1: ECS + Self Managed Node Groups + Amazon Linux
On **Amazon Linux** nodes with ECS, the default Docker seccomp profile enforced
by ECS blocks the syscalls needed for Boundary. Because it is difficult to
disable or modify the seccomp profile on ECS, you must grant `SYS_ADMIN` (along
with `NET_ADMIN`) so that Boundary can create namespaces and run nsjail.
**Task definition (Terraform) — `linuxParameters`:**
```hcl
container_definitions = jsonencode([{
  name  = "coder-agent"
  image = "your-coder-agent-image"
  linuxParameters = {
    capabilities = {
      add = ["NET_ADMIN", "SYS_ADMIN"]
    }
  }
}])
```
This gives the container the capabilities required for nsjail when ECS uses the
default Docker seccomp profile.
# nsjail Jail Type
nsjail is Agent Boundaries' default jail type that uses Linux namespaces to
provide process isolation. It creates unprivileged network namespaces to control
and monitor network access for processes running under Boundary.
**Running on Docker, Kubernetes, or ECS?** See the relevant page for runtime
and permission requirements:
- [nsjail on Docker](./docker.md)
- [nsjail on Kubernetes](./k8s.md)
- [nsjail on ECS](./ecs.md)
## Overview
nsjail leverages Linux namespace technology to isolate processes at the network
level. When Agent Boundaries runs with nsjail, it creates a separate network
namespace for the isolated process, allowing Agent Boundaries to intercept and
filter all network traffic according to the configured policy.
This jail type requires Linux capabilities to create and manage network
namespaces, which means it has specific runtime requirements when running in
containerized environments like Docker and Kubernetes.
## Architecture
<img width="1228" height="604" alt="Boundary" src="https://github.com/user-attachments/assets/1b7c8c5b-7b8f-4adf-8795-325bd28715c6" />
# nsjail on Kubernetes
This page describes the runtime and permission requirements for running Agent
Boundaries with the **nsjail** jail type on **Kubernetes**.
## Runtime & Permission Requirements for Running Boundary in Kubernetes
Requirements depend on the node OS and the container runtime. The following
examples use **EKS with Managed Node Groups** for two common node AMIs.
---
### Example 1: EKS + Managed Node Groups + Amazon Linux
On **Amazon Linux** nodes, the default seccomp and runtime behavior typically
allow the syscalls needed for Boundary. You only need to
grant `NET_ADMIN`.
**Container `securityContext`:**
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: coder-agent
spec:
  containers:
    - name: coder-agent
      image: your-coder-agent-image
      securityContext:
        capabilities:
          add:
            - NET_ADMIN
      # ... rest of container spec
```
---
### Example 2: EKS + Managed Node Groups + Bottlerocket
On **Bottlerocket** nodes, the default seccomp profile often blocks the `clone`
syscalls required for unprivileged user namespaces. You must either disable or
modify seccomp for the pod (see [Docker Seccomp Profile Considerations](./docker.md#docker-seccomp-profile-considerations)) or grant `SYS_ADMIN`.
**Option A: `NET_ADMIN` + disable seccomp**
Disabling the seccomp profile allows the container to create namespaces
without granting `SYS_ADMIN` capabilities.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: coder-agent
spec:
  containers:
    - name: coder-agent
      image: your-coder-agent-image
      securityContext:
        capabilities:
          add:
            - NET_ADMIN
        seccompProfile:
          type: Unconfined
      # ... rest of container spec
```
**Option B: `NET_ADMIN` + `SYS_ADMIN`**
Granting `SYS_ADMIN` bypasses many seccomp restrictions and allows namespace
creation.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: coder-agent
spec:
  containers:
    - name: coder-agent
      image: your-coder-agent-image
      securityContext:
        capabilities:
          add:
            - NET_ADMIN
            - SYS_ADMIN
      # ... rest of container spec
```
### User namespaces on Bottlerocket
User namespaces are often disabled (`user.max_user_namespaces=0`) on Bottlerocket
nodes. Check and enable user namespaces:
```bash
# Check current value
sysctl user.max_user_namespaces
# If it returns 0, enable user namespaces
sysctl -w user.max_user_namespaces=65536
```
If `sysctl -w` is not allowed, configure it via Bottlerocket bootstrap settings
when creating the node group (e.g., in Terraform):
```hcl
bootstrap_extra_args = <<-EOT
  [settings.kernel.sysctl]
  "user.max_user_namespaces" = "65536"
EOT
```
This ensures Boundary can create user namespaces with nsjail.
### Running without user namespaces
If the environment is restricted and you cannot enable user namespaces (e.g.
Bottlerocket in EKS auto-mode), you can run Boundary with the
`--no-user-namespace` flag. Use this when you have no way to allow user namespace creation.
---
### Example 3: EKS + Fargate (Firecracker VMs)
nsjail is not currently supported on **EKS Fargate** (Firecracker-based VMs),
because Fargate blocks the capabilities nsjail needs.

If you run on Fargate, we recommend using [landjail](../landjail.md) instead,
provided the kernel version supports it (Linux 6.7+).
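As a quick way to check whether a node's kernel meets that bar, here is a small sketch. The 6.7 threshold comes from the note above; the exact `uname -r` output format varies by distro:

```shell
#!/bin/sh
# Succeeds if kernel version string $1 (e.g. "6.8.0-40-generic")
# is at least major.minor given by $2 and $3.
kernel_at_least() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  minor=${minor%%-*}
  [ "$major" -gt "$2" ] || { [ "$major" -eq "$2" ] && [ "$minor" -ge "$3" ]; }
}

if kernel_at_least "$(uname -r)" 6 7; then
  echo "kernel new enough for landjail"
else
  echo "kernel too old for landjail; consider a newer AMI or nsjail"
fi
```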
Claude Code can be configured using environment variables.
* **Base URL**: `ANTHROPIC_BASE_URL` should point to `https://coder.example.com/api/v2/aibridge/anthropic`
* **Auth Token**: `ANTHROPIC_AUTH_TOKEN` should be your [Coder session token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself).
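In practice this is just two environment variables, for example in a shell profile. The URL and token below are placeholders for your own deployment:

```shell
# Placeholders: substitute your deployment URL and a real session token.
export ANTHROPIC_BASE_URL="https://coder.example.com/api/v2/aibridge/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-coder-session-token"
```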
### Pre-configuring in Templates
Example trace of an interception using Jaeger backend:
### Capturing Logs in Traces
> [!NOTE]
> Enabling log capture may generate a large volume of trace events.
To include log messages as trace events, enable trace log capture
by setting the `CODER_TRACE_LOGS` environment variable or using
"title": "Windsurf",
"description": "Access your workspace with Windsurf",
"path": "./user-guides/workspace-access/windsurf.md"
},
{
"title": "Antigravity",
"description": "Access your workspace with Antigravity",
"path": "./user-guides/workspace-access/antigravity.md"
}
]
},
{
"title": "NS Jail",
"description": "Documentation for Namespace Jail",
"path": "./ai-coder/agent-boundaries/nsjail.md"
"path": "./ai-coder/agent-boundaries/nsjail/index.md",
"children": [
{
"title": "NS Jail on Docker",
"description": "Runtime and permission requirements for running NS Jail on Docker",
"path": "./ai-coder/agent-boundaries/nsjail/docker.md"
},
{
"title": "NS Jail on Kubernetes",
"description": "Runtime and permission requirements for running NS Jail on Kubernetes",
"path": "./ai-coder/agent-boundaries/nsjail/k8s.md"
},
{
"title": "NS Jail on ECS",
"description": "Runtime and permission requirements for running NS Jail on ECS",
"path": "./ai-coder/agent-boundaries/nsjail/ecs.md"
}
]
},
{
"title": "LandJail",
-2
View File
@@ -329,7 +329,6 @@ curl -X GET http://coder-server:8080/api/v2/entitlements \
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
@@ -341,7 +340,6 @@ curl -X GET http://coder-server:8080/api/v2/entitlements \
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
+14
View File
@@ -314,6 +314,7 @@ curl -X GET http://coder-server:8080/api/v2/deployment/config \
"hide_ai_tasks": true,
"http_address": "string",
"http_cookies": {
"host_prefix": true,
"same_site": "string",
"secure_auth_cookie": true
},
@@ -431,6 +432,19 @@ curl -X GET http://coder-server:8080/api/v2/deployment/config \
"organization_assign_default": true,
"organization_field": "string",
"organization_mapping": {},
"redirect_url": {
"forceQuery": true,
"fragment": "string",
"host": "string",
"omitHost": true,
"opaque": "string",
"path": "string",
"rawFragment": "string",
"rawPath": "string",
"rawQuery": "string",
"scheme": "string",
"user": {}
},
"scopes": [
"string"
],
+51 -11
View File
@@ -2815,6 +2815,7 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"hide_ai_tasks": true,
"http_address": "string",
"http_cookies": {
"host_prefix": true,
"same_site": "string",
"secure_auth_cookie": true
},
@@ -2932,6 +2933,19 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"organization_assign_default": true,
"organization_field": "string",
"organization_mapping": {},
"redirect_url": {
"forceQuery": true,
"fragment": "string",
"host": "string",
"omitHost": true,
"opaque": "string",
"path": "string",
"rawFragment": "string",
"rawPath": "string",
"rawQuery": "string",
"scheme": "string",
"user": {}
},
"scopes": [
"string"
],
@@ -3369,6 +3383,7 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"hide_ai_tasks": true,
"http_address": "string",
"http_cookies": {
"host_prefix": true,
"same_site": "string",
"secure_auth_cookie": true
},
@@ -3486,6 +3501,19 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"organization_assign_default": true,
"organization_field": "string",
"organization_mapping": {},
"redirect_url": {
"forceQuery": true,
"fragment": "string",
"host": "string",
"omitHost": true,
"opaque": "string",
"path": "string",
"rawFragment": "string",
"rawPath": "string",
"rawQuery": "string",
"scheme": "string",
"user": {}
},
"scopes": [
"string"
],
@@ -3902,7 +3930,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
@@ -3914,7 +3941,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
@@ -4196,7 +4222,6 @@ Git clone makes use of this by parsing the URL from: 'Username for "https://gith
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
@@ -4207,13 +4232,12 @@ Git clone makes use of this by parsing the URL from: 'Username for "https://gith
### Properties
| Name | Type | Required | Restrictions | Description |
|---------------|----------------------------------------------|----------|--------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `actual` | integer | false | | |
| `enabled` | boolean | false | | |
| `entitlement` | [codersdk.Entitlement](#codersdkentitlement) | false | | |
| `limit` | integer | false | | |
| `soft_limit` | integer | false | | Soft limit is the soft limit of the feature, and is only used for showing included limits in the dashboard. No license validation or warnings are generated from this value. |
| Name | Type | Required | Restrictions | Description |
|---------------|----------------------------------------------|----------|--------------|-------------|
| `actual` | integer | false | | |
| `enabled` | boolean | false | | |
| `entitlement` | [codersdk.Entitlement](#codersdkentitlement) | false | | |
| `limit` | integer | false | | |
| `usage_period` | [codersdk.UsagePeriod](#codersdkusageperiod) | false | | Usage period denotes that the usage is a counter that accumulates over this period (and most likely resets with the issuance of the next license). These dates are determined from the license that this entitlement comes from, see enterprise/coderd/license/license.go. Only certain features set these fields: - FeatureManagedAgentLimit |
@@ -4492,6 +4516,7 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
```json
{
"host_prefix": true,
"same_site": "string",
"secure_auth_cookie": true
}
@@ -4501,6 +4526,7 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
| Name | Type | Required | Restrictions | Description |
|----------------------|---------|----------|--------------|-------------|
| `host_prefix` | boolean | false | | |
| `same_site` | string | false | | |
| `secure_auth_cookie` | boolean | false | | |
@@ -5731,6 +5757,19 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
"organization_assign_default": true,
"organization_field": "string",
"organization_mapping": {},
"redirect_url": {
"forceQuery": true,
"fragment": "string",
"host": "string",
"omitHost": true,
"opaque": "string",
"path": "string",
"rawFragment": "string",
"rawPath": "string",
"rawQuery": "string",
"scheme": "string",
"user": {}
},
"scopes": [
"string"
],
@@ -5772,6 +5811,7 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
| `organization_assign_default` | boolean | false | | |
| `organization_field` | string | false | | |
| `organization_mapping` | object | false | | |
| `redirect_url` | [serpent.URL](#serpenturl) | false | | Redirect URL is optional, defaulting to 'ACCESS_URL'. Only useful in niche situations where the OIDC callback domain is different from the ACCESS_URL domain. |
| `scopes` | array of string | false | | |
| `sign_in_text` | string | false | | |
| `signups_disabled_text` | string | false | | |
@@ -14083,7 +14123,7 @@ None
| Name | Type | Required | Restrictions | Description |
|------------------|--------------------------------------------|----------|--------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
| `annotations` | [serpent.Annotations](#serpentannotations) | false | | Annotations enable extensions to serpent higher up in the stack. It's useful for help formatting and documentation generation. |
| `default` | string | false | | Default is parsed into Value if set. |
| `default` | string | false | | Default is parsed into Value if set. Must be `""` if `DefaultFn` != nil |
| `description` | string | false | | |
| `env` | string | false | | Env is the environment variable used to configure this option. If unset, environment configuring is disabled. |
| `flag` | string | false | | Flag is the long name of the flag used to configure this option. If unset, flag configuring is disabled. |
+58
View File
@@ -331,6 +331,64 @@ of the template will be used.
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get users available for workspace creation
### Code samples
```shell
# Example request using curl
curl -X GET http://coder-server:8080/api/v2/organizations/{organization}/members/{user}/workspaces/available-users \
-H 'Accept: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`GET /organizations/{organization}/members/{user}/workspaces/available-users`
### Parameters
| Name | In | Type | Required | Description |
|----------------|-------|--------------|----------|-----------------------|
| `organization` | path | string(uuid) | true | Organization ID |
| `user` | path | string | true | User ID, name, or me |
| `q` | query | string | false | Search query |
| `limit` | query | integer | false | Limit results |
| `offset` | query | integer | false | Offset for pagination |
### Example responses
> 200 Response
```json
[
{
"avatar_url": "http://example.com",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"name": "string",
"username": "string"
}
]
```
### Responses
| Status | Meaning | Description | Schema |
|--------|---------------------------------------------------------|-------------|-----------------------------------------------------------------|
| 200 | [OK](https://tools.ietf.org/html/rfc7231#section-6.3.1) | OK | array of [codersdk.MinimalUser](schemas.md#codersdkminimaluser) |
<h3 id="get-users-available-for-workspace-creation-responseschema">Response Schema</h3>
Status Code **200**
| Name | Type | Required | Restrictions | Description |
|----------------|--------------|----------|--------------|-------------|
| `[array item]` | array | false | | |
| `» avatar_url` | string(uri) | false | | |
| `» id` | string(uuid) | true | | |
| `» name` | string | false | | |
| `» username` | string | true | | |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get workspace metadata by user and workspace name
### Code samples
+11
View File
@@ -1058,6 +1058,17 @@ Controls if the 'Secure' property is set on browser session cookies.
Controls how the 'SameSite' property is set on browser session cookies.
### --host-prefix-cookie
| | |
|-------------|------------------------------------------|
| Type | <code>bool</code> |
| Environment | <code>$CODER_HOST_PREFIX_COOKIE</code> |
| YAML | <code>networking.hostPrefixCookie</code> |
| Default | <code>false</code> |
Recommended to be enabled. Enables `__Host-` prefix for cookies to guarantee they are only set by the right domain.
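As a sketch, opting in via the documented environment variable looks like this (`false` is the documented default, so the prefix is off unless you set it):

```shell
# Opt in to the __Host- cookie prefix for browser session cookies.
export CODER_HOST_PREFIX_COOKIE=true
```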
### --terms-of-service-url
| | |
+4 -4
View File
@@ -30,10 +30,10 @@ Select which organization (uuid or name) to use.
### -c, --column
| | |
|---------|-----------------------------------------------------------------------|
| Type | <code>[name\|created at\|created by\|status\|active\|archived]</code> |
| Default | <code>name,created at,created by,status,active</code> |
| | |
|---------|---------------------------------------------------------------------------|
| Type | <code>[id\|name\|created at\|created by\|status\|active\|archived]</code> |
| Default | <code>name,created at,created by,status,active</code> |
Columns to display in table output.
@@ -4,6 +4,13 @@ Coder supports custom configuration in your `devcontainer.json` file through the
`customizations.coder` block. These options let you control how Coder interacts
with your dev container without requiring template changes.
> [!TIP]
>
> Alternatively, template administrators can also define apps, scripts, and
> environment variables for dev containers directly in Terraform. See
> [Attach resources to dev containers](../../admin/integrations/devcontainers/integration.md#attach-resources-to-dev-containers)
> for details.
## Ignore a dev container
Use the `ignore` option to hide a dev container from Coder completely:
+37 -32
View File
@@ -1,4 +1,4 @@
# Dev Containers (via `@devcontainers/cli` CLI)
# Dev Containers
[Dev containers](https://containers.dev/) define your development environment
as code using a `devcontainer.json` file. Coder's Dev Containers integration
@@ -6,30 +6,32 @@ uses the [`@devcontainers/cli`](https://github.com/devcontainers/cli) and
[Docker](https://www.docker.com) to seamlessly build and run these containers,
with management in your dashboard.
This guide covers the Dev Containers CLI integration which requires Docker. For workspaces without Docker in them,
administrators can look into
[other options](../../admin/integrations/devcontainers/index.md#comparison) instead.
This guide covers the Dev Containers integration. For workspaces without Docker,
administrators can configure
[Envbuilder](../../admin/integrations/devcontainers/envbuilder/index.md) instead,
which builds the workspace image itself from your dev container configuration.
![Two Dev Containers running as sub-agents in a Coder workspace](../../images/user-guides/devcontainers/devcontainer-running.png)_Dev containers appear as sub-agents with their own apps, SSH access, and port forwarding_
![Two dev containers running as sub-agents in a Coder workspace](../../images/user-guides/devcontainers/devcontainer-running.png)_Dev containers appear as sub-agents with their own apps, SSH access, and port forwarding_
## Prerequisites
- Coder version 2.24.0 or later
- Docker available inside your workspace (via Docker-in-Docker or a mounted socket, see [Docker in workspaces](../../admin/templates/extending-templates/docker-in-workspaces.md) for details on how to achieve this)
- The `devcontainer` CLI (`@devcontainers/cli` NPM package) installed in your workspace
- Docker available inside your workspace
- The `@devcontainers/cli` installed in your workspace
The Dev Containers CLI integration is enabled by default in Coder.
Most templates with Dev Containers support include all these prerequisites. See
[Configure a template for Dev Containers](../../admin/integrations/devcontainers/integration.md)
Dev Containers integration is enabled by default. Your workspace needs Docker
(via Docker-in-Docker or a mounted socket) and the devcontainers CLI. Most
templates with Dev Containers support include both. See
[Configure a template for dev containers](../../admin/integrations/devcontainers/integration.md)
for setup details.
## Features
- Automatic Dev Container detection from repositories
- Automatic dev container detection from repositories
- Seamless container startup during workspace initialization
- Change detection with outdated status indicator
- On-demand container rebuild via dashboard button
- Template-defined apps, scripts, and environment variables via Terraform (see [limitations](../../admin/integrations/devcontainers/integration.md#interaction-with-devcontainerjson-customizations))
- Integrated IDE experience with VS Code
- Direct SSH access to containers
- Automatic port detection
@@ -45,7 +47,7 @@ development environment. You can place it in:
- `.devcontainer.json` (root of repository)
- `.devcontainer/<folder>/devcontainer.json` (for multiple configurations)
The third option allows monorepos to define multiple Dev Container
The third option allows monorepos to define multiple dev container
configurations in separate sub-folders. See the
[Dev Container specification](https://containers.dev/implementors/spec/#devcontainerjson)
for details.
@@ -62,52 +64,55 @@ Here's a minimal example:
For more configuration options, see the
[Dev Container specification](https://containers.dev/).
### Start your Dev Container
### Start your dev container
Coder automatically discovers Dev Container configurations in your repositories
Coder automatically discovers dev container configurations in your repositories
and displays them in your workspace dashboard. From there, you can start a dev
container with a single click.
![Discovered Dev Containers with Start buttons](../../images/user-guides/devcontainers/devcontainer-discovery.png)_Coder detects Dev Container configurations and displays them with a Start button_
![Discovered dev containers with Start buttons](../../images/user-guides/devcontainers/devcontainer-discovery.png)_Coder detects dev container configurations and displays them with a Start button_
If your template administrator has configured automatic startup (via the
`coder_devcontainer` Terraform resource or autostart settings), your dev
container will build and start automatically when the workspace starts.
### Connect to your Dev Container
### Connect to your dev container
Once running, your Dev Container appears as a sub-agent in your workspace
Once running, your dev container appears as a sub-agent in your workspace
dashboard. You can connect via:
- **Web terminal** in the Coder dashboard
- **SSH** using `coder ssh <workspace>.<agent>`
- **VS Code** using the "Open in VS Code Desktop" button
See [Working with Dev Containers](./working-with-dev-containers.md) for detailed
See [Working with dev containers](./working-with-dev-containers.md) for detailed
connection instructions.
## How it works
The Dev Containers CLI integration uses the `devcontainer` command from
The Dev Containers integration uses the `devcontainer` command from
[`@devcontainers/cli`](https://github.com/devcontainers/cli) to manage
containers within your Coder workspace.
When a workspace with Dev Containers integration starts:
1. If the template defines `coder_app`, `coder_script`, or `coder_env` resources
attached to the dev container, a sub-agent is pre-created with these resources.
1. The workspace initializes the Docker environment.
1. The integration detects repositories with Dev Container configurations.
1. Detected Dev Containers appear in the Coder dashboard.
1. The integration detects repositories with dev container configurations.
1. Detected dev containers appear in the Coder dashboard.
1. If auto-start is configured (via `coder_devcontainer` or autostart settings),
the integration builds and starts the Dev Container automatically.
1. Coder creates a sub-agent for the running container, enabling direct access.
the integration builds and starts the dev container automatically.
1. Coder creates a sub-agent (or updates the pre-created one) for the running
container, enabling direct access.
Without auto-start, users can manually start discovered Dev Containers from the
Without auto-start, users can manually start discovered dev containers from the
dashboard.
### Agent naming
Each Dev Container gets its own agent name, derived from the workspace folder
path. For example, a Dev Container with workspace folder `/home/coder/my-app`
Each dev container gets its own agent name, derived from the workspace folder
path. For example, a dev container with workspace folder `/home/coder/my-app`
will have an agent named `my-app`.
Agent names are sanitized to contain only lowercase alphanumeric characters and
@@ -123,17 +128,17 @@ in your `devcontainer.json`.
button
- The `forwardPorts` property in `devcontainer.json` with `host:port` syntax
(e.g., `"db:5432"`) for Docker Compose sidecar containers is not yet
supported. For single-container Dev Containers, use `coder port-forward` to
supported. For single-container dev containers, use `coder port-forward` to
access ports directly on the sub-agent.
- Some advanced Dev Container features may have limited support
- Some advanced dev container features may have limited support
## Next steps
- [Working with Dev Containers](./working-with-dev-containers.md) — SSH, IDE
- [Working with dev containers](./working-with-dev-containers.md) — SSH, IDE
integration, and port forwarding
- [Customizing Dev Containers](./customizing-dev-containers.md) — Custom agent
- [Customizing dev containers](./customizing-dev-containers.md) — Custom agent
names, apps, and display options
- [Troubleshooting Dev Containers](./troubleshooting-dev-containers.md) —
- [Troubleshooting dev containers](./troubleshooting-dev-containers.md) —
Diagnose common issues
- [Dev Container specification](https://containers.dev/) — Advanced
configuration options
@@ -0,0 +1,68 @@
# Antigravity
[Antigravity](https://antigravity.google/) is Google's desktop IDE.
Follow this guide to use Antigravity to access your Coder workspaces.
If your team uses Antigravity regularly, ask your Coder administrator to add Antigravity as a workspace application in your template.
You can also use the [Antigravity module](https://registry.coder.com/modules/coder/antigravity) to easily add Antigravity to your Coder templates.
## Install Antigravity
Antigravity connects to your Coder workspaces using the Coder extension:
1. [Install Antigravity](https://antigravity.google/) on your local machine.
1. Open Antigravity and sign in with your Google account.
## Install the Coder extension
1. You can install the Coder extension through the Marketplace built into Antigravity or manually.
<div class="tabs">
## Extension Marketplace
Search for Coder from the Extensions Pane and select **Install**.
## Manually
1. Download the [latest vscode-coder extension](https://github.com/coder/vscode-coder/releases/latest) `.vsix` file.
1. Drag the `.vsix` file into the extensions pane of Antigravity.
Alternatively:
1. Open the Command Palette
(<kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd> or <kbd>Cmd</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd>) and search for `vsix`.
1. Select **Extensions: Install from VSIX** and select the vscode-coder extension you downloaded.
</div>
## Open a workspace in Antigravity
1. From the Antigravity Command Palette (<kbd>Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd> or <kbd>Cmd</kbd>+<kbd>Shift</kbd>+<kbd>P</kbd>),
enter `coder` and select **Coder: Login**.
1. Follow the prompts to log in and copy your session token.
Paste the session token into the **Coder API Key** dialog in Antigravity.
1. Antigravity prompts you to open a workspace, or you can use the Command Palette to run **Coder: Open Workspace**.
## Template configuration
Your Coder administrator can add Antigravity as a one-click workspace app using
the [Antigravity module](https://registry.coder.com/modules/coder/antigravity)
from the Coder registry:
```tf
module "antigravity" {
count = data.coder_workspace.me.start_count
source = "registry.coder.com/coder/antigravity/coder"
version = "1.0.0"
agent_id = coder_agent.example.id
folder = "/home/coder/project"
}
```
@@ -102,6 +102,13 @@ Read more about [using Cursor with your workspace](./cursor.md).
[Windsurf](./windsurf.md) is Codeium's code editor designed for AI-assisted development.
Windsurf connects using the Coder extension.
## Antigravity
[Antigravity](https://antigravity.google/) is Google's desktop IDE.
Antigravity connects using the Coder extension.
Read more about [using Antigravity with your workspace](./antigravity.md).
## JetBrains IDEs
We support JetBrains IDEs using
+4 -4
View File
@@ -11,8 +11,8 @@ RUN cargo install jj-cli typos-cli watchexec-cli
FROM ubuntu:jammy@sha256:c7eb020043d8fc2ae0793fb35a37bff1cf33f156d4d4b12ccc7f3ef8706c38b1 AS go
# Install Go manually, so that we can control the version
ARG GO_VERSION=1.25.6
ARG GO_CHECKSUM="f022b6aad78e362bcba9b0b94d09ad58c5a70c6ba3b7582905fababf5fe0181a"
ARG GO_VERSION=1.25.7
ARG GO_CHECKSUM="12e6d6a191091ae27dc31f6efc630e3a3b8ba409baf3573d955b196fdf086005"
# Boring Go is needed to build FIPS-compliant binaries.
RUN apt-get update && \
@@ -212,9 +212,9 @@ RUN sed -i 's|http://archive.ubuntu.com/ubuntu/|http://mirrors.edge.kernel.org/u
# Configure FIPS-compliant policies
update-crypto-policies --set FIPS
# NOTE: In scripts/Dockerfile.base we specifically install Terraform version 1.12.2.
# NOTE: In scripts/Dockerfile.base we specifically install Terraform version 1.14.5.
# Installing the same version here to match.
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.14.1/terraform_1.14.1_linux_amd64.zip" && \
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.14.5/terraform_1.14.5_linux_amd64.zip" && \
unzip /tmp/terraform.zip -d /usr/local/bin && \
rm -f /tmp/terraform.zip && \
chmod +x /usr/local/bin/terraform && \
-1
View File
@@ -390,7 +390,6 @@ func TestSchedulePrebuilds(t *testing.T) {
}
for _, tc := range cases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
+5 -1
View File
@@ -384,13 +384,17 @@ NETWORKING OPTIONS:
--samesite-auth-cookie lax|none, $CODER_SAMESITE_AUTH_COOKIE (default: lax)
Controls how the 'SameSite' property is set on browser session cookies.
--secure-auth-cookie bool, $CODER_SECURE_AUTH_COOKIE
--secure-auth-cookie bool, $CODER_SECURE_AUTH_COOKIE (default: false)
Controls if the 'Secure' property is set on browser session cookies.
--wildcard-access-url string, $CODER_WILDCARD_ACCESS_URL
Specifies the wildcard hostname to use for workspace applications in
the form "*.example.com".
--host-prefix-cookie bool, $CODER_HOST_PREFIX_COOKIE (default: false)
Recommended to be enabled. Enables `__Host-` prefix for cookies to
guarantee they are only set by the right domain.
NETWORKING / DERP OPTIONS:
Most Coder deployments never have to think about DERP because all connections
between workspaces and users are peer-to-peer. However, when Coder cannot
+18 -48
View File
@@ -983,7 +983,13 @@ func (api *API) updateEntitlements(ctx context.Context) error {
var _ wsbuilder.UsageChecker = &API{}
func (api *API) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, task *database.Task, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
func (api *API) CheckBuildUsage(
_ context.Context,
_ database.Store,
templateVersion *database.TemplateVersion,
task *database.Task,
transition database.WorkspaceTransition,
) (wsbuilder.UsageCheckResponse, error) {
// If the template version has an external agent, we need to check that the
// license is entitled to this feature.
if templateVersion.HasExternalAgent.Valid && templateVersion.HasExternalAgent.Bool {
@@ -996,59 +1002,23 @@ func (api *API) CheckBuildUsage(ctx context.Context, store database.Store, templ
}
}
resp, err := api.checkAIBuildUsage(ctx, store, task, transition)
if err != nil {
return wsbuilder.UsageCheckResponse{}, err
}
if !resp.Permitted {
return resp, nil
}
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
}
// checkAIBuildUsage validates AI-related usage constraints. It is a no-op
// unless the transition is "start" and the template version has an AI task.
func (api *API) checkAIBuildUsage(ctx context.Context, store database.Store, task *database.Task, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
// Only check AI usage rules for start transitions.
if transition != database.WorkspaceTransitionStart {
// Verify managed agent entitlement for AI task builds.
// The count/limit check is intentionally omitted — breaching the
// limit is advisory only and surfaced as a warning via entitlements.
if transition != database.WorkspaceTransitionStart || task == nil {
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
}
// If the template version doesn't have an AI task, we don't need to check usage.
if task == nil {
if !api.Entitlements.HasLicense() {
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
}
// When licensed, ensure we haven't breached the managed agent limit.
// Unlicensed deployments are allowed to use unlimited managed agents.
if api.Entitlements.HasLicense() {
managedAgentLimit, ok := api.Entitlements.Feature(codersdk.FeatureManagedAgentLimit)
if !ok || !managedAgentLimit.Enabled || managedAgentLimit.Limit == nil || managedAgentLimit.UsagePeriod == nil {
return wsbuilder.UsageCheckResponse{
Permitted: false,
Message: "Your license is not entitled to managed agents. Please contact sales to continue using managed agents.",
}, nil
}
// This check is intentionally not committed to the database. It's fine
// if it's not 100% accurate or allows for minor breaches due to build
// races.
// nolint:gocritic // Requires permission to read all usage events.
managedAgentCount, err := store.GetTotalUsageDCManagedAgentsV1(agpldbauthz.AsSystemRestricted(ctx), database.GetTotalUsageDCManagedAgentsV1Params{
StartDate: managedAgentLimit.UsagePeriod.Start,
EndDate: managedAgentLimit.UsagePeriod.End,
})
if err != nil {
return wsbuilder.UsageCheckResponse{}, xerrors.Errorf("get managed agent count: %w", err)
}
if managedAgentCount >= *managedAgentLimit.Limit {
return wsbuilder.UsageCheckResponse{
Permitted: false,
Message: "You have breached the managed agent limit in your license. Please contact sales to continue using managed agents.",
}, nil
}
managedAgentLimit, ok := api.Entitlements.Feature(codersdk.FeatureManagedAgentLimit)
if !ok || !managedAgentLimit.Enabled {
return wsbuilder.UsageCheckResponse{
Permitted: false,
Message: "Your license is not entitled to managed agents. Please contact sales to continue using managed agents.",
}, nil
}
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
+100 -20
View File
@@ -678,7 +678,7 @@ func TestManagedAgentLimit(t *testing.T) {
// expiry warnings.
GraceAt: time.Now().Add(time.Hour * 24 * 60),
ExpiresAt: time.Now().Add(time.Hour * 24 * 90),
}).ManagedAgentLimit(1, 1),
}).ManagedAgentLimit(1),
})
// Get entitlements to check that the license is a-ok.
@@ -689,11 +689,7 @@ func TestManagedAgentLimit(t *testing.T) {
require.True(t, agentLimit.Enabled)
require.NotNil(t, agentLimit.Limit)
require.EqualValues(t, 1, *agentLimit.Limit)
require.NotNil(t, agentLimit.SoftLimit)
require.EqualValues(t, 1, *agentLimit.SoftLimit)
require.Empty(t, sdkEntitlements.Errors)
// There should be a warning since we're really close to our agent limit.
require.Equal(t, sdkEntitlements.Warnings[0], "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.")
// Create a fake provision response that claims there are agents in the
// template and every built workspace.
@@ -765,15 +761,20 @@ func TestManagedAgentLimit(t *testing.T) {
require.NoError(t, err, "fetching AI workspace must succeed")
coderdtest.AwaitWorkspaceBuildJobCompleted(t, cli, workspace.LatestBuild.ID)
// Create a second AI task, which should fail due to breaching the limit.
_, err = cli.CreateTask(ctx, owner.UserID.String(), codersdk.CreateTaskRequest{
// Create a second AI task, which should succeed even though the limit is
// breached. Managed agent limits are advisory only and should never block
// workspace creation.
task2, err := cli.CreateTask(ctx, owner.UserID.String(), codersdk.CreateTaskRequest{
Name: namesgenerator.UniqueNameWith("-"),
TemplateVersionID: aiTemplate.ActiveVersionID,
TemplateVersionPresetID: uuid.Nil,
Input: "hi",
DisplayName: namesgenerator.UniqueName(),
})
require.ErrorContains(t, err, "You have breached the managed agent limit in your license")
require.NoError(t, err, "creating task beyond managed agent limit must succeed")
workspace2, err := cli.Workspace(ctx, task2.WorkspaceID.UUID)
require.NoError(t, err, "fetching AI workspace must succeed")
coderdtest.AwaitWorkspaceBuildJobCompleted(t, cli, workspace2.LatestBuild.ID)
// Create a third workspace using the same template, which should succeed.
workspace = coderdtest.CreateWorkspace(t, cli, aiTemplate.ID)
@@ -784,12 +785,12 @@ func TestManagedAgentLimit(t *testing.T) {
coderdtest.AwaitWorkspaceBuildJobCompleted(t, cli, workspace.LatestBuild.ID)
}
func TestCheckBuildUsage_SkipsAIForNonStartTransitions(t *testing.T) {
func TestCheckBuildUsage_NeverBlocksOnManagedAgentLimit(t *testing.T) {
t.Parallel()
ctrl := gomock.NewController(t)
defer ctrl.Finish()
// Prepare entitlements with a managed agent limit to enforce.
// Prepare entitlements with a managed agent limit.
entSet := entitlements.New()
entSet.Modify(func(e *codersdk.Entitlements) {
e.HasLicense = true
@@ -825,32 +826,111 @@ func TestCheckBuildUsage_SkipsAIForNonStartTransitions(t *testing.T) {
TemplateVersionID: tv.ID,
}
// Mock DB: expect exactly one count call for the "start" transition.
// Mock DB: no calls expected since managed agent limits are
// advisory only and no longer query the database at build time.
mDB := dbmock.NewMockStore(ctrl)
mDB.EXPECT().
GetTotalUsageDCManagedAgentsV1(gomock.Any(), gomock.Any()).
Times(1).
Return(int64(1), nil) // equal to limit -> should breach
ctx := context.Background()
// Start transition: should be not permitted due to limit breach.
// Start transition: should be permitted even though the limit is
// breached. Managed agent limits are advisory only.
startResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionStart)
require.NoError(t, err)
require.False(t, startResp.Permitted)
require.Contains(t, startResp.Message, "breached the managed agent limit")
require.True(t, startResp.Permitted)
// Stop transition: should be permitted and must not trigger additional DB calls.
// Stop transition: should also be permitted.
stopResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionStop)
require.NoError(t, err)
require.True(t, stopResp.Permitted)
// Delete transition: should be permitted and must not trigger additional DB calls.
// Delete transition: should also be permitted.
deleteResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionDelete)
require.NoError(t, err)
require.True(t, deleteResp.Permitted)
}
func TestCheckBuildUsage_BlocksWithoutManagedAgentEntitlement(t *testing.T) {
t.Parallel()
tv := &database.TemplateVersion{
HasAITask: sql.NullBool{Valid: true, Bool: true},
HasExternalAgent: sql.NullBool{Valid: true, Bool: false},
}
task := &database.Task{
TemplateVersionID: tv.ID,
}
// Both "feature absent" and "feature explicitly disabled" should
// block AI task builds on licensed deployments.
tests := []struct {
name string
setupEnts func(e *codersdk.Entitlements)
}{
{
name: "FeatureAbsent",
setupEnts: func(e *codersdk.Entitlements) {
e.HasLicense = true
},
},
{
name: "FeatureDisabled",
setupEnts: func(e *codersdk.Entitlements) {
e.HasLicense = true
e.Features[codersdk.FeatureManagedAgentLimit] = codersdk.Feature{
Enabled: false,
}
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctrl := gomock.NewController(t)
defer ctrl.Finish()
entSet := entitlements.New()
entSet.Modify(tc.setupEnts)
agpl := &agplcoderd.API{
Options: &agplcoderd.Options{
Entitlements: entSet,
},
}
eapi := &coderd.API{
AGPL: agpl,
Options: &coderd.Options{Options: agpl.Options},
}
mDB := dbmock.NewMockStore(ctrl)
ctx := context.Background()
// Start transition with a task: should be blocked because the
// license doesn't include the managed agent entitlement.
resp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionStart)
require.NoError(t, err)
require.False(t, resp.Permitted)
require.Contains(t, resp.Message, "not entitled to managed agents")
// Stop and delete transitions should still be permitted so
// that existing workspaces can be stopped/cleaned up.
stopResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionStop)
require.NoError(t, err)
require.True(t, stopResp.Permitted)
deleteResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionDelete)
require.NoError(t, err)
require.True(t, deleteResp.Permitted)
// Start transition without a task: should be permitted (not
// an AI task build, so the entitlement check doesn't apply).
noTaskResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, nil, database.WorkspaceTransitionStart)
require.NoError(t, err)
require.True(t, noTaskResp.Permitted)
})
}
}
// testDBAuthzRole returns a context with a subject that has a role
// with permissions required for test setup.
func testDBAuthzRole(ctx context.Context) context.Context {
@@ -231,12 +231,8 @@ func (opts *LicenseOptions) AIGovernanceAddon(limit int64) *LicenseOptions {
return opts.Feature(codersdk.FeatureAIGovernanceUserLimit, limit)
}
func (opts *LicenseOptions) ManagedAgentLimit(soft int64, hard int64) *LicenseOptions {
// These don't use named or exported feature names, see
// enterprise/coderd/license/license.go.
opts = opts.Feature(codersdk.FeatureName("managed_agent_limit_soft"), soft)
opts = opts.Feature(codersdk.FeatureName("managed_agent_limit_hard"), hard)
return opts
func (opts *LicenseOptions) ManagedAgentLimit(limit int64) *LicenseOptions {
return opts.Feature(codersdk.FeatureManagedAgentLimit, limit)
}
func (opts *LicenseOptions) Feature(name codersdk.FeatureName, value int64) *LicenseOptions {
+5 -1
@@ -146,8 +146,12 @@ func NewWorkspaceProxyReplica(t *testing.T, coderdAPI *coderd.API, owner *coders
logger := testutil.Logger(t).With(slog.F("server_url", serverURL.String()))
// nolint: forcetypeassert // This is a stdlib transport it's unnecessary to type assert especially in tests.
wssrv, err := wsproxy.New(ctx, &wsproxy.Options{
Logger: logger,
Logger: logger,
// It's important to ensure each test has its own isolated transport to avoid interfering with other tests
// especially in shutdown.
HTTPClient: &http.Client{Transport: http.DefaultTransport.(*http.Transport).Clone()},
Experiments: options.Experiments,
DashboardURL: coderdAPI.AccessURL,
AccessURL: accessURL,
+31 -156
@@ -15,60 +15,9 @@ import (
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/codersdk"
)
const (
// These features are only included in the license and are not actually
// entitlements after the licenses are processed. These values will be
// merged into the codersdk.FeatureManagedAgentLimit feature.
//
// The reason we need two separate features is because the License v3 format
// uses map[string]int64 for features, so we're unable to use a single value
// with a struct like `{"soft": 100, "hard": 200}`. This is unfortunate and
// we should fix this with a new license format v4 in the future.
//
// These are intentionally not exported as they should not be used outside
// of this package (except tests).
featureManagedAgentLimitHard codersdk.FeatureName = "managed_agent_limit_hard"
featureManagedAgentLimitSoft codersdk.FeatureName = "managed_agent_limit_soft"
)
var (
// Mapping of license feature names to the SDK feature name.
// This is used to map from multiple usage period features into a single SDK
// feature.
featureGrouping = map[codersdk.FeatureName]struct {
// The parent feature.
sdkFeature codersdk.FeatureName
// Whether the value of the license feature is the soft limit or the hard
// limit.
isSoft bool
}{
// Map featureManagedAgentLimitHard and featureManagedAgentLimitSoft to
// codersdk.FeatureManagedAgentLimit.
featureManagedAgentLimitHard: {
sdkFeature: codersdk.FeatureManagedAgentLimit,
isSoft: false,
},
featureManagedAgentLimitSoft: {
sdkFeature: codersdk.FeatureManagedAgentLimit,
isSoft: true,
},
}
// Features that are forbidden to be set in a license. These are the SDK
// features in the featureGrouping map above.
licenseForbiddenFeatures = func() map[codersdk.FeatureName]struct{} {
features := make(map[codersdk.FeatureName]struct{})
for _, feature := range featureGrouping {
features[feature.sdkFeature] = struct{}{}
}
return features
}()
)
// Entitlements processes licenses to return whether features are enabled or not.
// TODO(@deansheather): This function and the related LicensesEntitlements
// function should be refactored into smaller functions that:
@@ -280,17 +229,15 @@ func LicensesEntitlements(
// licenses with the corresponding features actually set
// trump this default entitlement, even if they are set to a
// smaller value.
defaultManagedAgentsIsuedAt = time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC)
defaultManagedAgentsStart = defaultManagedAgentsIsuedAt
defaultManagedAgentsEnd = defaultManagedAgentsStart.AddDate(100, 0, 0)
defaultManagedAgentsSoftLimit int64 = 1000
defaultManagedAgentsHardLimit int64 = 1000
defaultManagedAgentsIsuedAt = time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC)
defaultManagedAgentsStart = defaultManagedAgentsIsuedAt
defaultManagedAgentsEnd = defaultManagedAgentsStart.AddDate(100, 0, 0)
defaultManagedAgentsLimit int64 = 1000
)
entitlements.AddFeature(codersdk.FeatureManagedAgentLimit, codersdk.Feature{
Enabled: true,
Entitlement: entitlement,
SoftLimit: &defaultManagedAgentsSoftLimit,
Limit: &defaultManagedAgentsHardLimit,
Limit: &defaultManagedAgentsLimit,
UsagePeriod: &codersdk.UsagePeriod{
IssuedAt: defaultManagedAgentsIsuedAt,
Start: defaultManagedAgentsStart,
@@ -310,15 +257,6 @@ func LicensesEntitlements(
// Add all features from the feature set.
for _, featureName := range claims.FeatureSet.Features() {
if _, ok := licenseForbiddenFeatures[featureName]; ok {
// Ignore any FeatureSet features that are forbidden to be set in a license.
continue
}
if _, ok := featureGrouping[featureName]; ok {
// These features need very special handling due to merging
// multiple feature values into a single SDK feature.
continue
}
if featureName.UsesLimit() || featureName.UsesUsagePeriod() {
// Limit and usage period features are handled below.
// They don't provide default values as they are always enabled
@@ -335,30 +273,24 @@ func LicensesEntitlements(
})
}
// A map of SDK feature name to the uncommitted usage feature.
uncommittedUsageFeatures := map[codersdk.FeatureName]usageLimit{}
// Features à la carte
for featureName, featureValue := range claims.Features {
if _, ok := licenseForbiddenFeatures[featureName]; ok {
entitlements.Errors = append(entitlements.Errors,
fmt.Sprintf("Feature %s is forbidden to be set in a license.", featureName))
continue
// Old-style licenses encode the managed agent limit as
// separate soft/hard features.
//
// This could be removed in a future release, but can only be
// done once all old licenses containing this are no longer in use.
if featureName == "managed_agent_limit_soft" {
// Maps the soft limit to the canonical feature name
featureName = codersdk.FeatureManagedAgentLimit
}
if featureValue < 0 {
// We currently don't use negative values for features.
if featureName == "managed_agent_limit_hard" {
// We can safely ignore the hard limit as it is no longer used.
continue
}
// Special handling for grouped (e.g. usage period) features.
if grouping, ok := featureGrouping[featureName]; ok {
ul := uncommittedUsageFeatures[grouping.sdkFeature]
if grouping.isSoft {
ul.Soft = &featureValue
} else {
ul.Hard = &featureValue
}
uncommittedUsageFeatures[grouping.sdkFeature] = ul
if featureValue < 0 {
// We currently don't use negative values for features.
continue
}
@@ -372,6 +304,17 @@ func LicensesEntitlements(
// Handling for limit features.
switch {
case featureName.UsesUsagePeriod():
entitlements.AddFeature(featureName, codersdk.Feature{
Enabled: featureValue > 0,
Entitlement: entitlement,
Limit: &featureValue,
UsagePeriod: &codersdk.UsagePeriod{
IssuedAt: claims.IssuedAt.Time,
Start: usagePeriodStart,
End: usagePeriodEnd,
},
})
case featureName.UsesLimit():
if featureValue <= 0 {
// 0 limit value or less doesn't make sense, so we skip it.
@@ -402,46 +345,6 @@ func LicensesEntitlements(
}
}
// Apply uncommitted usage features to the entitlements.
for featureName, ul := range uncommittedUsageFeatures {
if ul.Soft == nil || ul.Hard == nil {
// Invalid license.
entitlements.Errors = append(entitlements.Errors,
fmt.Sprintf("Invalid license (%s): feature %s has missing soft or hard limit values", license.UUID.String(), featureName))
continue
}
if *ul.Hard < *ul.Soft {
entitlements.Errors = append(entitlements.Errors,
fmt.Sprintf("Invalid license (%s): feature %s has a hard limit less than the soft limit", license.UUID.String(), featureName))
continue
}
if *ul.Hard < 0 || *ul.Soft < 0 {
entitlements.Errors = append(entitlements.Errors,
fmt.Sprintf("Invalid license (%s): feature %s has a soft or hard limit less than 0", license.UUID.String(), featureName))
continue
}
feature := codersdk.Feature{
Enabled: true,
Entitlement: entitlement,
SoftLimit: ul.Soft,
Limit: ul.Hard,
// `Actual` will be populated below when warnings are generated.
UsagePeriod: &codersdk.UsagePeriod{
IssuedAt: claims.IssuedAt.Time,
Start: usagePeriodStart,
End: usagePeriodEnd,
},
}
// If the hard limit is 0, the feature is disabled.
if *ul.Hard <= 0 {
feature.Enabled = false
feature.SoftLimit = ptr.Ref(int64(0))
feature.Limit = ptr.Ref(int64(0))
}
entitlements.AddFeature(featureName, feature)
}
addonFeatures := make(map[codersdk.FeatureName]codersdk.Feature)
// Finally, add all features from the addons. We do this last so that
@@ -557,32 +460,9 @@ func LicensesEntitlements(
entitlements.AddFeature(codersdk.FeatureManagedAgentLimit, agentLimit)
// Only issue warnings if the feature is enabled.
if agentLimit.Enabled {
var softLimit int64
if agentLimit.SoftLimit != nil {
softLimit = *agentLimit.SoftLimit
}
var hardLimit int64
if agentLimit.Limit != nil {
hardLimit = *agentLimit.Limit
}
// Issue a warning early:
// 1. If the soft limit and hard limit are equal, at 75% of the hard
// limit.
// 2. If the limit is greater than the soft limit, at 75% of the
// difference between the hard limit and the soft limit.
softWarningThreshold := int64(float64(hardLimit) * 0.75)
if hardLimit > softLimit && softLimit > 0 {
softWarningThreshold = softLimit + int64(float64(hardLimit-softLimit)*0.75)
}
if managedAgentCount >= *agentLimit.Limit {
entitlements.Warnings = append(entitlements.Warnings,
"You have built more workspaces with managed agents than your license allows. Further managed agent builds will be blocked.")
} else if managedAgentCount >= softWarningThreshold {
entitlements.Warnings = append(entitlements.Warnings,
"You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.")
}
if agentLimit.Enabled && agentLimit.Limit != nil && managedAgentCount >= *agentLimit.Limit {
entitlements.Warnings = append(entitlements.Warnings,
codersdk.LicenseManagedAgentLimitExceededWarningText)
}
}
}
@@ -683,11 +563,6 @@ var (
type Features map[codersdk.FeatureName]int64
type usageLimit struct {
Soft *int64
Hard *int64 // 0 means "disabled"
}
// Claims is the full set of claims in a license.
type Claims struct {
jwt.RegisteredClaims
+249 -251
@@ -76,8 +76,7 @@ func TestEntitlements(t *testing.T) {
f := make(license.Features)
for _, name := range codersdk.FeatureNames {
if name == codersdk.FeatureManagedAgentLimit {
f[codersdk.FeatureName("managed_agent_limit_soft")] = 100
f[codersdk.FeatureName("managed_agent_limit_hard")] = 200
f[codersdk.FeatureManagedAgentLimit] = 100
continue
}
f[name] = 1
@@ -533,8 +532,7 @@ func TestEntitlements(t *testing.T) {
t.Run("Premium", func(t *testing.T) {
t.Parallel()
const userLimit = 1
const expectedAgentSoftLimit = 1000
const expectedAgentHardLimit = 1000
const expectedAgentLimit = 1000
db, _ := dbtestutil.NewDB(t)
licenseOptions := coderdenttest.LicenseOptions{
@@ -566,8 +564,7 @@ func TestEntitlements(t *testing.T) {
agentEntitlement := entitlements.Features[featureName]
require.True(t, agentEntitlement.Enabled)
require.Equal(t, codersdk.EntitlementEntitled, agentEntitlement.Entitlement)
require.EqualValues(t, expectedAgentSoftLimit, *agentEntitlement.SoftLimit)
require.EqualValues(t, expectedAgentHardLimit, *agentEntitlement.Limit)
require.EqualValues(t, expectedAgentLimit, *agentEntitlement.Limit)
// This might be shocking, but there's a sound reason for this.
// See license.go for more details.
@@ -840,7 +837,7 @@ func TestEntitlements(t *testing.T) {
},
}).
UserLimit(100).
ManagedAgentLimit(100, 200)
ManagedAgentLimit(100)
lic := database.License{
ID: 1,
@@ -882,16 +879,15 @@ func TestEntitlements(t *testing.T) {
managedAgentLimit, ok := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.True(t, ok)
require.NotNil(t, managedAgentLimit.SoftLimit)
require.EqualValues(t, 100, *managedAgentLimit.SoftLimit)
require.NotNil(t, managedAgentLimit.Limit)
require.EqualValues(t, 200, *managedAgentLimit.Limit)
// The soft limit value (100) is used as the single Limit.
require.EqualValues(t, 100, *managedAgentLimit.Limit)
require.NotNil(t, managedAgentLimit.Actual)
require.EqualValues(t, 175, *managedAgentLimit.Actual)
// Should've also populated a warning.
// Usage exceeds the limit, so an exceeded warning should be present.
require.Len(t, entitlements.Warnings, 1)
require.Equal(t, "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.", entitlements.Warnings[0])
require.Equal(t, codersdk.LicenseManagedAgentLimitExceededWarningText, entitlements.Warnings[0])
})
}
@@ -1121,13 +1117,12 @@ func TestLicenseEntitlements(t *testing.T) {
{
Name: "ManagedAgentLimit",
Licenses: []*coderdenttest.LicenseOptions{
enterpriseLicense().UserLimit(100).ManagedAgentLimit(100, 200),
enterpriseLicense().UserLimit(100).ManagedAgentLimit(100),
},
Arguments: license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
// 175 will generate a warning as it's over 75% of the
// difference between the soft and hard limit.
return 174, nil
// 74 is below the limit (soft=100), so no warning.
return 74, nil
},
},
AssertEntitlements: func(t *testing.T, entitlements codersdk.Entitlements) {
@@ -1136,9 +1131,9 @@ func TestLicenseEntitlements(t *testing.T) {
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
assert.Equal(t, int64(200), *feature.Limit)
assert.Equal(t, int64(174), *feature.Actual)
// Soft limit value is used as the single Limit.
assert.Equal(t, int64(100), *feature.Limit)
assert.Equal(t, int64(74), *feature.Actual)
},
},
{
@@ -1151,7 +1146,7 @@ func TestLicenseEntitlements(t *testing.T) {
WithIssuedAt(time.Now().Add(-time.Hour * 2)),
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 100).
ManagedAgentLimit(100).
WithIssuedAt(time.Now().Add(-time.Hour * 1)).
GracePeriod(time.Now()),
},
@@ -1168,7 +1163,6 @@ func TestLicenseEntitlements(t *testing.T) {
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementGracePeriod, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
assert.Equal(t, int64(100), *feature.Limit)
assert.Equal(t, int64(74), *feature.Actual)
},
@@ -1183,7 +1177,7 @@ func TestLicenseEntitlements(t *testing.T) {
WithIssuedAt(time.Now().Add(-time.Hour * 2)),
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 200).
ManagedAgentLimit(100).
WithIssuedAt(time.Now().Add(-time.Hour * 1)).
Expired(time.Now()),
},
@@ -1196,84 +1190,33 @@ func TestLicenseEntitlements(t *testing.T) {
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
assert.False(t, feature.Enabled)
assert.Nil(t, feature.SoftLimit)
assert.Nil(t, feature.Limit)
assert.Nil(t, feature.Actual)
},
},
{
Name: "ManagedAgentLimitWarning/ApproachingLimit/DifferentSoftAndHardLimit",
Name: "ManagedAgentLimitWarning/ExceededLimit",
Licenses: []*coderdenttest.LicenseOptions{
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 200),
ManagedAgentLimit(100),
},
Arguments: license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 175, nil
return 150, nil
},
},
AssertEntitlements: func(t *testing.T, entitlements codersdk.Entitlements) {
assert.Len(t, entitlements.Warnings, 1)
assert.Equal(t, "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.", entitlements.Warnings[0])
assert.Equal(t, codersdk.LicenseManagedAgentLimitExceededWarningText, entitlements.Warnings[0])
assertNoErrors(t, entitlements)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
assert.Equal(t, int64(200), *feature.Limit)
assert.Equal(t, int64(175), *feature.Actual)
},
},
{
Name: "ManagedAgentLimitWarning/ApproachingLimit/EqualSoftAndHardLimit",
Licenses: []*coderdenttest.LicenseOptions{
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 100),
},
Arguments: license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 75, nil
},
},
AssertEntitlements: func(t *testing.T, entitlements codersdk.Entitlements) {
assert.Len(t, entitlements.Warnings, 1)
assert.Equal(t, "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.", entitlements.Warnings[0])
assertNoErrors(t, entitlements)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
// Soft limit (100) is used as the single Limit.
assert.Equal(t, int64(100), *feature.Limit)
assert.Equal(t, int64(75), *feature.Actual)
},
},
{
Name: "ManagedAgentLimitWarning/BreachedLimit",
Licenses: []*coderdenttest.LicenseOptions{
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 200),
},
Arguments: license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 200, nil
},
},
AssertEntitlements: func(t *testing.T, entitlements codersdk.Entitlements) {
assert.Len(t, entitlements.Warnings, 1)
assert.Equal(t, "You have built more workspaces with managed agents than your license allows. Further managed agent builds will be blocked.", entitlements.Warnings[0])
assertNoErrors(t, entitlements)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
assert.Equal(t, int64(200), *feature.Limit)
assert.Equal(t, int64(200), *feature.Actual)
assert.Equal(t, int64(150), *feature.Actual)
},
},
{
@@ -1472,173 +1415,240 @@ func TestAIBridgeSoftWarning(t *testing.T) {
func TestUsageLimitFeatures(t *testing.T) {
t.Parallel()
cases := []struct {
sdkFeatureName codersdk.FeatureName
softLimitFeatureName codersdk.FeatureName
hardLimitFeatureName codersdk.FeatureName
}{
{
sdkFeatureName: codersdk.FeatureManagedAgentLimit,
softLimitFeatureName: codersdk.FeatureName("managed_agent_limit_soft"),
hardLimitFeatureName: codersdk.FeatureName("managed_agent_limit_hard"),
},
}
// Ensures that usage limit features are ranked by issued at, not by
// values.
t.Run("IssuedAtRanking", func(t *testing.T) {
t.Parallel()
for _, c := range cases {
t.Run(string(c.sdkFeatureName), func(t *testing.T) {
t.Parallel()
// Generate 2 real licenses both with managed agent limit
// features. lic2 should trump lic1 even though it has a lower
// limit, because it was issued later.
lic1 := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
IssuedAt: time.Now().Add(-time.Minute * 2),
NotBefore: time.Now().Add(-time.Minute * 2),
ExpiresAt: time.Now().Add(time.Hour * 2),
Features: license.Features{
codersdk.FeatureManagedAgentLimit: 100,
},
}),
}
lic2Iat := time.Now().Add(-time.Minute * 1)
lic2Nbf := lic2Iat.Add(-time.Minute)
lic2Exp := lic2Iat.Add(time.Hour)
lic2 := database.License{
ID: 2,
UploadedAt: time.Now(),
Exp: lic2Exp,
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
IssuedAt: lic2Iat,
NotBefore: lic2Nbf,
ExpiresAt: lic2Exp,
Features: license.Features{
codersdk.FeatureManagedAgentLimit: 50,
},
}),
}
// Test for either a missing soft or hard limit feature value.
t.Run("MissingGroupedFeature", func(t *testing.T) {
t.Parallel()
const actualAgents = 10
arguments := license.FeatureArguments{
ActiveUserCount: 10,
ReplicaCount: 0,
ExternalAuthCount: 0,
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return actualAgents, nil
},
}
for _, feature := range []codersdk.FeatureName{
c.softLimitFeatureName,
c.hardLimitFeatureName,
} {
t.Run(string(feature), func(t *testing.T) {
t.Parallel()
// Load the licenses in both orders to ensure the correct
// behavior is observed no matter the order.
for _, order := range [][]database.License{
{lic1, lic2},
{lic2, lic1},
} {
entitlements, err := license.LicensesEntitlements(context.Background(), time.Now(), order, map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments)
require.NoError(t, err)
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
feature: 100,
},
}),
}
feature, ok := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.True(t, ok, "feature %s not found", codersdk.FeatureManagedAgentLimit)
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 50, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
require.WithinDuration(t, lic2Iat, feature.UsagePeriod.IssuedAt, 2*time.Second)
require.WithinDuration(t, lic2Nbf, feature.UsagePeriod.Start, 2*time.Second)
require.WithinDuration(t, lic2Exp, feature.UsagePeriod.End, 2*time.Second)
}
})
}
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 0, nil
},
}
entitlements, err := license.LicensesEntitlements(context.Background(), time.Now(), []database.License{lic}, map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments)
require.NoError(t, err)
// TestOldStyleManagedAgentLicenses ensures backward compatibility with
// older licenses that encode the managed agent limit using separate
// "managed_agent_limit_soft" and "managed_agent_limit_hard" feature keys
// instead of the canonical "managed_agent_limit" key.
func TestOldStyleManagedAgentLicenses(t *testing.T) {
t.Parallel()
feature, ok := entitlements.Features[c.sdkFeatureName]
require.True(t, ok, "feature %s not found", c.sdkFeatureName)
require.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
t.Run("SoftAndHard", func(t *testing.T) {
t.Parallel()
require.Len(t, entitlements.Errors, 1)
require.Equal(t, fmt.Sprintf("Invalid license (%v): feature %s has missing soft or hard limit values", lic.UUID, c.sdkFeatureName), entitlements.Errors[0])
})
}
})
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
codersdk.FeatureName("managed_agent_limit_soft"): 100,
codersdk.FeatureName("managed_agent_limit_hard"): 200,
},
}),
}
t.Run("HardBelowSoft", func(t *testing.T) {
t.Parallel()
const actualAgents = 42
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(_ context.Context, _, _ time.Time) (int64, error) {
return actualAgents, nil
},
}
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
c.softLimitFeatureName: 100,
c.hardLimitFeatureName: 50,
},
}),
}
entitlements, err := license.LicensesEntitlements(
context.Background(), time.Now(), []database.License{lic},
map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments,
)
require.NoError(t, err)
require.Empty(t, entitlements.Errors)
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 0, nil
},
}
entitlements, err := license.LicensesEntitlements(context.Background(), time.Now(), []database.License{lic}, map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments)
require.NoError(t, err)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.True(t, feature.Enabled)
require.NotNil(t, feature.Limit)
// The soft limit should be used as the canonical limit.
require.EqualValues(t, 100, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
})
feature, ok := entitlements.Features[c.sdkFeatureName]
require.True(t, ok, "feature %s not found", c.sdkFeatureName)
require.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
t.Run("OnlySoft", func(t *testing.T) {
t.Parallel()
require.Len(t, entitlements.Errors, 1)
require.Equal(t, fmt.Sprintf("Invalid license (%v): feature %s has a hard limit less than the soft limit", lic.UUID, c.sdkFeatureName), entitlements.Errors[0])
})
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
codersdk.FeatureName("managed_agent_limit_soft"): 75,
},
}),
}
// Ensures that these features are ranked by issued at, not by
// values.
t.Run("IssuedAtRanking", func(t *testing.T) {
t.Parallel()
const actualAgents = 10
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(_ context.Context, _, _ time.Time) (int64, error) {
return actualAgents, nil
},
}
// Generate 2 real licenses both with managed agent limit
// features. lic2 should trump lic1 even though it has a lower
// limit, because it was issued later.
lic1 := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
IssuedAt: time.Now().Add(-time.Minute * 2),
NotBefore: time.Now().Add(-time.Minute * 2),
ExpiresAt: time.Now().Add(time.Hour * 2),
Features: license.Features{
c.softLimitFeatureName: 100,
c.hardLimitFeatureName: 200,
},
}),
}
lic2Iat := time.Now().Add(-time.Minute * 1)
lic2Nbf := lic2Iat.Add(-time.Minute)
lic2Exp := lic2Iat.Add(time.Hour)
lic2 := database.License{
ID: 2,
UploadedAt: time.Now(),
Exp: lic2Exp,
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
IssuedAt: lic2Iat,
NotBefore: lic2Nbf,
ExpiresAt: lic2Exp,
Features: license.Features{
c.softLimitFeatureName: 50,
c.hardLimitFeatureName: 100,
},
}),
}
entitlements, err := license.LicensesEntitlements(
context.Background(), time.Now(), []database.License{lic},
map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments,
)
require.NoError(t, err)
require.Empty(t, entitlements.Errors)
const actualAgents = 10
arguments := license.FeatureArguments{
ActiveUserCount: 10,
ReplicaCount: 0,
ExternalAuthCount: 0,
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return actualAgents, nil
},
}
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.True(t, feature.Enabled)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 75, *feature.Limit)
})
// Load the licenses in both orders to ensure the correct
// behavior is observed no matter the order.
for _, order := range [][]database.License{
{lic1, lic2},
{lic2, lic1},
} {
entitlements, err := license.LicensesEntitlements(context.Background(), time.Now(), order, map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments)
require.NoError(t, err)
// A license with only the hard limit key should silently ignore it,
// leaving the feature unset (not entitled).
t.Run("OnlyHard", func(t *testing.T) {
t.Parallel()
feature, ok := entitlements.Features[c.sdkFeatureName]
require.True(t, ok, "feature %s not found", c.sdkFeatureName)
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 100, *feature.Limit)
require.NotNil(t, feature.SoftLimit)
require.EqualValues(t, 50, *feature.SoftLimit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
require.WithinDuration(t, lic2Iat, feature.UsagePeriod.IssuedAt, 2*time.Second)
require.WithinDuration(t, lic2Nbf, feature.UsagePeriod.Start, 2*time.Second)
require.WithinDuration(t, lic2Exp, feature.UsagePeriod.End, 2*time.Second)
}
})
})
}
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
codersdk.FeatureName("managed_agent_limit_hard"): 200,
},
}),
}
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(_ context.Context, _, _ time.Time) (int64, error) {
return 0, nil
},
}
entitlements, err := license.LicensesEntitlements(
context.Background(), time.Now(), []database.License{lic},
map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments,
)
require.NoError(t, err)
require.Empty(t, entitlements.Errors)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
})
// Old-style license with both soft and hard set to zero should
// explicitly disable the feature (and override any Premium default).
t.Run("ExplicitZero", func(t *testing.T) {
t.Parallel()
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
FeatureSet: codersdk.FeatureSetPremium,
Features: license.Features{
codersdk.FeatureUserLimit: 100,
codersdk.FeatureName("managed_agent_limit_soft"): 0,
codersdk.FeatureName("managed_agent_limit_hard"): 0,
},
}),
}
const actualAgents = 5
arguments := license.FeatureArguments{
ActiveUserCount: 10,
ManagedAgentCountFn: func(_ context.Context, _, _ time.Time) (int64, error) {
return actualAgents, nil
},
}
entitlements, err := license.LicensesEntitlements(
context.Background(), time.Now(), []database.License{lic},
map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments,
)
require.NoError(t, err)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.False(t, feature.Enabled)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 0, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
})
}
func TestManagedAgentLimitDefault(t *testing.T) {
@@ -1676,20 +1686,16 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.True(t, ok, "feature %s not found", codersdk.FeatureManagedAgentLimit)
require.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
require.Nil(t, feature.Limit)
require.Nil(t, feature.SoftLimit)
require.Nil(t, feature.Actual)
require.Nil(t, feature.UsagePeriod)
})
// "Premium" licenses should receive a default managed agent limit of:
// soft = 1000
// hard = 1000
// "Premium" licenses should receive a default managed agent limit of 1000.
t.Run("Premium", func(t *testing.T) {
t.Parallel()
const userLimit = 33
const softLimit = 1000
const hardLimit = 1000
const defaultLimit = 1000
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
@@ -1720,9 +1726,7 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.True(t, ok, "feature %s not found", codersdk.FeatureManagedAgentLimit)
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.NotNil(t, feature.Limit)
require.EqualValues(t, hardLimit, *feature.Limit)
require.NotNil(t, feature.SoftLimit)
require.EqualValues(t, softLimit, *feature.SoftLimit)
require.EqualValues(t, defaultLimit, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
@@ -1731,8 +1735,8 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.NotZero(t, feature.UsagePeriod.End)
})
// "Premium" licenses with an explicit managed agent limit should not
// receive a default managed agent limit.
// "Premium" licenses with an explicit managed agent limit should use
// that value instead of the default.
t.Run("PremiumExplicitValues", func(t *testing.T) {
t.Parallel()
@@ -1744,9 +1748,8 @@ func TestManagedAgentLimitDefault(t *testing.T) {
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
FeatureSet: codersdk.FeatureSetPremium,
Features: license.Features{
codersdk.FeatureUserLimit: 100,
codersdk.FeatureName("managed_agent_limit_soft"): 100,
codersdk.FeatureName("managed_agent_limit_hard"): 200,
codersdk.FeatureUserLimit: 100,
codersdk.FeatureManagedAgentLimit: 100,
},
}),
}
@@ -1768,9 +1771,7 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.True(t, ok, "feature %s not found", codersdk.FeatureManagedAgentLimit)
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 200, *feature.Limit)
require.NotNil(t, feature.SoftLimit)
require.EqualValues(t, 100, *feature.SoftLimit)
require.EqualValues(t, 100, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
@@ -1792,9 +1793,8 @@ func TestManagedAgentLimitDefault(t *testing.T) {
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
FeatureSet: codersdk.FeatureSetPremium,
Features: license.Features{
codersdk.FeatureUserLimit: 100,
codersdk.FeatureName("managed_agent_limit_soft"): 0,
codersdk.FeatureName("managed_agent_limit_hard"): 0,
codersdk.FeatureUserLimit: 100,
codersdk.FeatureManagedAgentLimit: 0,
},
}),
}
@@ -1818,8 +1818,6 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.False(t, feature.Enabled)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 0, *feature.Limit)
require.NotNil(t, feature.SoftLimit)
require.EqualValues(t, 0, *feature.SoftLimit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
@@ -60,14 +60,10 @@ func TestReconcileAll(t *testing.T) {
}
for _, tc := range tests {
tc := tc
includePreset := tc.includePreset
for _, preExistingOrgMembership := range tc.preExistingOrgMembership {
preExistingOrgMembership := preExistingOrgMembership
for _, preExistingGroup := range tc.preExistingGroup {
preExistingGroup := preExistingGroup
for _, preExistingGroupMembership := range tc.preExistingGroupMembership {
preExistingGroupMembership := preExistingGroupMembership
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
@@ -1268,7 +1268,6 @@ func TestTemplateUpdatePrebuilds(t *testing.T) {
}
for _, tc := range cases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
@@ -2750,7 +2750,6 @@ func TestPrebuildUpdateLifecycleParams(t *testing.T) {
}
for _, tc := range cases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
@@ -4729,7 +4728,7 @@ func TestWorkspaceAITask(t *testing.T) {
Features: license.Features{
codersdk.FeatureTemplateRBAC: 1,
},
}).ManagedAgentLimit(10, 20),
}).ManagedAgentLimit(10),
})
client, _ := coderdtest.CreateAnotherUser(t, owner, first.OrganizationID,
@@ -0,0 +1,162 @@
package derpmetrics
import (
"expvar"
"strconv"
"strings"
"github.com/prometheus/client_golang/prometheus"
"tailscale.com/derp"
)
// NewCollector returns a prometheus.Collector that bridges the
// derp.Server's expvar-based stats into Prometheus metrics.
func NewCollector(server *derp.Server) prometheus.Collector {
return &collector{server: server}
}
const (
namespace = "coder"
subsystem = "wsproxy_derp"
)
// Simple counter metrics keyed by their expvar name.
var counterMetrics = map[string]*prometheus.Desc{
"accepts": desc("accepts_total", "Total number of accepted connections."),
"bytes_received": desc("bytes_received_total", "Total bytes received."),
"bytes_sent": desc("bytes_sent_total", "Total bytes sent."),
"packets_sent": desc("packets_sent_total", "Total packets sent."),
"packets_received": desc("packets_received_total", "Total packets received."),
"packets_dropped": desc("packets_dropped_total_unlabeled", "Total packets dropped (unlabeled aggregate)."),
"packets_forwarded_out": desc("packets_forwarded_out_total", "Total packets forwarded out."),
"packets_forwarded_in": desc("packets_forwarded_in_total", "Total packets forwarded in."),
"home_moves_in": desc("home_moves_in_total", "Total home moves in."),
"home_moves_out": desc("home_moves_out_total", "Total home moves out."),
"got_ping": desc("got_ping_total", "Total pings received."),
"sent_pong": desc("sent_pong_total", "Total pongs sent."),
"unknown_frames": desc("unknown_frames_total", "Total unknown frames received."),
"peer_gone_disconnected_frames": desc("peer_gone_disconnected_frames_total", "Total peer-gone-disconnected frames sent."),
"peer_gone_not_here_frames": desc("peer_gone_not_here_frames_total", "Total peer-gone-not-here frames sent."),
"multiforwarder_created": desc("multiforwarder_created_total", "Total multiforwarders created."),
"multiforwarder_deleted": desc("multiforwarder_deleted_total", "Total multiforwarders deleted."),
"packet_forwarder_delete_other_value": desc("packet_forwarder_delete_other_value_total", "Total packet forwarder delete-other-value events."),
"counter_total_dup_client_conns": desc("duplicate_client_conns_total", "Total duplicate client connections."),
}
// Simple gauge metrics keyed by their expvar name.
var gaugeMetrics = map[string]*prometheus.Desc{
"gauge_current_connections": desc("current_connections", "Current number of connections."),
"gauge_current_home_connections": desc("current_home_connections", "Current number of home connections."),
"gauge_watchers": desc("watchers", "Current number of watchers."),
"gauge_current_file_descriptors": desc("current_file_descriptors", "Current number of file descriptors."),
"gauge_clients_total": desc("clients_total", "Current total number of clients."),
"gauge_clients_local": desc("clients_local", "Current number of local clients."),
"gauge_clients_remote": desc("clients_remote", "Current number of remote clients."),
"gauge_current_dup_client_keys": desc("current_duplicate_client_keys", "Current number of duplicate client keys."),
"gauge_current_dup_client_conns": desc("current_duplicate_client_conns", "Current number of duplicate client connections."),
}
// Labeled counter metrics (nested metrics.Set) with their label name.
var labeledCounterMetrics = map[string]struct {
desc *prometheus.Desc
labelName string
}{
"counter_packets_dropped_reason": {
desc: prometheus.NewDesc(prometheus.BuildFQName(namespace, subsystem, "packets_dropped_total"), "Total packets dropped by reason.", []string{"reason"}, nil),
labelName: "reason",
},
"counter_packets_dropped_type": {
desc: prometheus.NewDesc(prometheus.BuildFQName(namespace, subsystem, "packets_dropped_by_type_total"), "Total packets dropped by type.", []string{"type"}, nil),
labelName: "type",
},
"counter_packets_received_kind": {
desc: prometheus.NewDesc(prometheus.BuildFQName(namespace, subsystem, "packets_received_by_kind_total"), "Total packets received by kind.", []string{"kind"}, nil),
labelName: "kind",
},
"counter_tcp_rtt": {
desc: prometheus.NewDesc(prometheus.BuildFQName(namespace, subsystem, "tcp_rtt"), "TCP RTT measurements.", []string{"bucket"}, nil),
labelName: "bucket",
},
}
var avgQueueDurationDesc = desc("average_queue_duration_seconds", "Average queue duration in seconds.")
func desc(name, help string) *prometheus.Desc {
return prometheus.NewDesc(
prometheus.BuildFQName(namespace, subsystem, name),
help, nil, nil,
)
}
type collector struct {
server *derp.Server
}
var _ prometheus.Collector = (*collector)(nil)
func (c *collector) Describe(ch chan<- *prometheus.Desc) {
for _, d := range counterMetrics {
ch <- d
}
for _, d := range gaugeMetrics {
ch <- d
}
for _, m := range labeledCounterMetrics {
ch <- m.desc
}
ch <- avgQueueDurationDesc
}
func (c *collector) Collect(ch chan<- prometheus.Metric) {
statsVar := c.server.ExpVar()
// The returned expvar.Var is a *metrics.Set which supports Do().
type doer interface {
Do(func(expvar.KeyValue))
}
d, ok := statsVar.(doer)
if !ok {
return
}
d.Do(func(kv expvar.KeyValue) {
// Counter metrics.
if desc, ok := counterMetrics[kv.Key]; ok {
if v, err := strconv.ParseFloat(kv.Value.String(), 64); err == nil {
ch <- prometheus.MustNewConstMetric(desc, prometheus.CounterValue, v)
}
return
}
// Gauge metrics.
if desc, ok := gaugeMetrics[kv.Key]; ok {
if v, err := strconv.ParseFloat(kv.Value.String(), 64); err == nil {
ch <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, v)
}
return
}
// Labeled counter metrics (nested metrics.Set).
if lm, ok := labeledCounterMetrics[kv.Key]; ok {
if nested, ok := kv.Value.(doer); ok {
nested.Do(func(sub expvar.KeyValue) {
if v, err := strconv.ParseFloat(sub.Value.String(), 64); err == nil {
ch <- prometheus.MustNewConstMetric(lm.desc, prometheus.CounterValue, v, sub.Key)
}
})
}
return
}
// Average queue duration: convert ms → seconds.
if kv.Key == "average_queue_duration_ms" {
s := kv.Value.String()
// expvar.Func may return a quoted string or a number.
s = strings.Trim(s, "\"")
if v, err := strconv.ParseFloat(s, 64); err == nil {
ch <- prometheus.MustNewConstMetric(avgQueueDurationDesc, prometheus.GaugeValue, v/1000.0)
}
return
}
})
}
@@ -0,0 +1,62 @@
package derpmetrics_test
import (
"testing"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/require"
"go.uber.org/goleak"
"tailscale.com/derp"
"tailscale.com/types/key"
"github.com/coder/coder/v2/enterprise/wsproxy/derpmetrics"
"github.com/coder/coder/v2/testutil"
)
func TestMain(m *testing.M) {
goleak.VerifyTestMain(m)
}
func TestCollector(t *testing.T) {
t.Parallel()
logger := testutil.Logger(t)
_ = logger
srv := derp.NewServer(key.NewNode(), func(format string, args ...any) {
t.Logf(format, args...)
})
defer srv.Close()
c := derpmetrics.NewCollector(srv)
t.Run("ImplementsCollector", func(t *testing.T) {
t.Parallel()
var _ prometheus.Collector = c
})
t.Run("RegisterAndCollect", func(t *testing.T) {
t.Parallel()
reg := prometheus.NewRegistry()
err := reg.Register(c)
require.NoError(t, err)
// Gather metrics and ensure no errors.
families, err := reg.Gather()
require.NoError(t, err)
require.NotEmpty(t, families)
// Check that at least some expected metric names are present.
names := make(map[string]bool)
for _, f := range families {
names[f.GetName()] = true
}
// These gauges should always be present (even if zero).
require.True(t, names["coder_wsproxy_derp_current_connections"],
"expected current_connections metric, got: %v", names)
require.True(t, names["coder_wsproxy_derp_current_home_connections"],
"expected current_home_connections metric, got: %v", names)
})
}
@@ -4,6 +4,7 @@ import (
"context"
"crypto/tls"
"errors"
"expvar"
"fmt"
"net/http"
"net/url"
@@ -36,6 +37,7 @@ import (
"github.com/coder/coder/v2/coderd/tracing"
"github.com/coder/coder/v2/coderd/workspaceapps"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/enterprise/derpmesh"
"github.com/coder/coder/v2/enterprise/replicasync"
"github.com/coder/coder/v2/enterprise/wsproxy/derpmetrics"
"github.com/coder/coder/v2/enterprise/wsproxy/wsproxysdk"
@@ -44,6 +46,12 @@ import (
"github.com/coder/coder/v2/tailnet"
)
// expWsproxyDERPOnce guards the global expvar.Publish call for the wsproxy
// DERP server, similar to expDERPOnce in coderd. We use a different variable
// name ("wsproxy_derp") to avoid conflicts when both run in the same process
// during tests.
var expWsproxyDERPOnce sync.Once
type Options struct {
Logger slog.Logger
Experiments codersdk.Experiments
@@ -196,6 +204,13 @@ func New(ctx context.Context, opts *Options) (*Server, error) {
return nil, xerrors.Errorf("create DERP mesh tls config: %w", err)
}
derpServer := derp.NewServer(key.NewNode(), tailnet.Logger(opts.Logger.Named("net.derp")))
if opts.PrometheusRegistry != nil {
opts.PrometheusRegistry.MustRegister(derpmetrics.NewCollector(derpServer))
}
// Publish DERP server metrics via expvar, served at /debug/expvar.
expWsproxyDERPOnce.Do(func() {
expvar.Publish("wsproxy_derp", derpServer.ExpVar())
})
ctx, cancel := context.WithCancel(context.Background())
@@ -317,7 +332,31 @@ func New(ctx context.Context, opts *Options) (*Server, error) {
})
derpHandler := derphttp.Handler(derpServer)
derpHandler, s.derpCloseFunc = tailnet.WithWebsocketSupport(derpServer, derpHandler)
// Prometheus metrics for DERP websocket connections.
derpWSActiveConns := prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "coder_wsproxy",
Subsystem: "derp_websocket",
Name: "active_connections",
Help: "Number of active DERP websocket connections.",
})
derpWSBytesTotal := prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: "coder_wsproxy",
Subsystem: "derp_websocket",
Name: "bytes_total",
Help: "Total bytes flowing through DERP websocket connections.",
}, []string{"direction"})
if opts.PrometheusRegistry != nil {
opts.PrometheusRegistry.MustRegister(derpWSActiveConns, derpWSBytesTotal)
}
derpHandler, s.derpCloseFunc = tailnet.WithWebsocketSupportAndMetrics(
derpServer, derpHandler, &tailnet.DERPWebsocketMetrics{
OnConnOpen: func() { derpWSActiveConns.Inc() },
OnConnClose: func() { derpWSActiveConns.Dec() },
OnRead: func(n int) { derpWSBytesTotal.WithLabelValues("read").Add(float64(n)) },
OnWrite: func(n int) { derpWSBytesTotal.WithLabelValues("write").Add(float64(n)) },
})
// The primary coderd dashboard needs to make some GET requests to
// the workspace proxies to check latency.
@@ -332,6 +371,7 @@ func New(ctx context.Context, opts *Options) (*Server, error) {
sharedhttpmw.Recover(s.Logger),
httpmw.WithProfilingLabels,
tracing.StatusWriterMiddleware,
opts.CookieConfig.Middleware,
tracing.Middleware(s.TracerProvider),
httpmw.AttachRequestID,
httpmw.ExtractRealIP(s.Options.RealIPConfig),
@@ -419,6 +459,7 @@ func New(ctx context.Context, opts *Options) (*Server, error) {
r.Get("/healthz", func(w http.ResponseWriter, _ *http.Request) { _, _ = w.Write([]byte("OK")) })
// TODO: @emyrk should this be authenticated or debounced?
r.Get("/healthz-report", s.healthReport)
r.Method("GET", "/debug/expvar", expvar.Handler())
r.NotFound(func(rw http.ResponseWriter, r *http.Request) {
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Title: "Head to the Dashboard",
@@ -525,7 +525,6 @@ func TestDERPMesh(t *testing.T) {
require.Len(t, cases, (len(proxies)*(len(proxies)+1))/2) // triangle number
for i, c := range cases {
i, c := i, c
t.Run(fmt.Sprintf("Proxy%d", i), func(t *testing.T) {
t.Parallel()

Some files were not shown because too many files have changed in this diff.