Compare commits


16 Commits

Author SHA1 Message Date
Jakub Domeracki 8a097ee635 feat(site)!: add consent prompt for auto-creation with prefilled parameters (#22256)
Cherry-pick of 60e3ab7632 from main.

Workspaces created via mode=auto links now require explicit user
confirmation before provisioning. A warning dialog lists all prefilled
param.* values from the URL and blocks creation until the user clicks
`Confirm and Create`. Clicking `Cancel` falls back to the standard form
view.

### Breaking behavior change

Links using `mode=auto` (e.g., "Open in Coder" buttons) will no longer
silently create workspaces. Users will now see a consent dialog and must
explicitly confirm before the workspace is provisioned.

Original PR: #22011

Co-authored-by: Kacper Sawicki <kacper@coder.com>
Co-authored-by: Jake Howell <jacob@coder.com>
2026-02-23 17:37:57 -05:00
Danielle Maywood 2ca88b0f07 fix: avoid re-using AuthInstanceID for sub agents (#22196) (#22212)
Parent agents were re-using AuthInstanceID when spawning child agents.
This caused GetWorkspaceAgentByInstanceID to return the most recently
created sub agent instead of the parent when the parent tried to refetch
its own manifest.

Fix by not reusing AuthInstanceID for sub agents, and updating
GetWorkspaceAgentByInstanceID to filter them out entirely.

---

Cherry picked from 911d734df9
2026-02-23 17:37:41 -05:00
Jake Howell 79a0ff8249 feat: convert soft_limit to limit (cherry-pick/v2.29) (#22207)
Related [`internal#1281`](https://github.com/coder/internal/issues/1281)

Cherry-picks the following pull requests into `release/2.29`:

* https://github.com/coder/coder/pull/22048
* https://github.com/coder/coder/pull/21998
* https://github.com/coder/coder/pull/22210
2026-02-23 17:37:15 -05:00
Lukasz 7819c471f7 chore: bump bundled terraform to 1.14.5 for 2.29 (#22193)
Description:
This PR updates the bundled Terraform binary and related version pins
from 1.13.4 to 1.14.5 (base image, installer fallback, and CI/test
fixtures). Terraform is statically built with an embedded Go runtime.
Moving to 1.14.5 updates the embedded toolchain and is intended to
address Go stdlib CVEs reported by security scanning.

Notes:

- Change is version-only; no functional Coder logic changes.

- Backport-friendly: intended to be cherry-picked to release branches
after merge.

---------

Co-authored-by: Jakub Domeracki <jakub@coder.com>
Co-authored-by: Dean Sheather <dean@deansheather.com>
2026-02-23 15:32:41 +01:00
Lukasz 3aa8212aac chore: bump versions of gh actions for 2.29 (#22218)
Updates GitHub Actions:

- aquasecurity/trivy-action to v0.34.0
- step-security/harden-runner to v2.14.2
2026-02-20 12:49:36 +01:00
Jon Ayers 8b2f472f71 chore: use old slog (#21959) 2026-02-05 16:35:41 -06:00
Jon Ayers 13337a193c chore: fix go.mod (#21958) 2026-02-05 16:23:04 -06:00
Jon Ayers b275be2e7a chore: backport fixes (#21957) 2026-02-05 16:09:41 -06:00
Lukasz 72afd3677c chore: bump alpine to 3.23.3 in release/2.29 (#21879)
(cherry picked from commit 3d97f677e5)

Co-authored-by: Jon Ayers <jon@coder.com>
2026-02-03 09:12:15 -06:00
Dean Sheather 7dfaa606ee fix: fix various AI task usage accounting bugs (#21723)

---------

Co-authored-by: Cian Johnston <cian@coder.com>
Co-authored-by: Steven Masley <Emyrk@users.noreply.github.com>
2026-01-29 10:06:45 -06:00
Cian Johnston 0c3144fc32 fix(coderd): ensure inbox WebSocket is closed when client disconnects… (#21684)
… (#21652)

Relates to https://github.com/coder/coder/issues/19715

This is similar to https://github.com/coder/coder/pull/19711

This endpoint works by doing the following:
- Subscribes to database notifications via pubsub
- Accepts a WebSocket upgrade
- Starts a `httpapi.Heartbeat`
- Creates a JSON encoder
- **Loops forever waiting for notifications until the request context is
cancelled**

The critical issue here is that `httpapi.Heartbeat` silently fails when
the client has disconnected. This means we never cancel the request
context, leaving the WebSocket alive until we receive a notification
from the database and fail to write that down the pipe.

By replacing usage of `httpapi.Heartbeat` with `httpapi.HeartbeatClose`,
we cancel the context _when the heartbeat fails to write_ due to the
client disconnecting. This allows us to cleanup without waiting for a
notification to come through the pubsub channel.

(cherry picked from commit 409360c62d)


Co-authored-by: Danielle Maywood <danielle@themaywoods.com>
2026-01-26 09:28:04 -06:00
Cian Johnston b5360a9180 fix: backport migration fixes (#21611)
* https://github.com/coder/coder/pull/21493
* https://github.com/coder/coder/pull/21496
* https://github.com/coder/coder/pull/21530

NB: these commits were originally authored by Blink on behalf of
@dannykopping, so they were amended to reflect actual authorship.


**Repro/Verification Steps:**

* Created a Coder deployment with a non-public schema via Docker compose
on v2.28.6:
  
* Created a DB init script under `db-init/01-create-schema.sql` with the
following:
    ```sql
    CREATE SCHEMA IF NOT EXISTS coder AUTHORIZATION coder;
    GRANT ALL PRIVILEGES ON SCHEMA coder TO coder;
    ALTER ROLE coder SET search_path TO coder;
    ```
  * Mounted above inside the `postgres` container:
    ```diff
         volumes:
           - coder_data:/var/lib/postgresql/data
    +      - ./db-init:/docker-entrypoint-initdb.d:ro
    ```
  * Edited `CODER_PG_CONNECTION_URL` to update the search path:
    ```diff
     environment:
-      CODER_PG_CONNECTION_URL: "postgresql://${POSTGRES_USER:-username}:${POSTGRES_PASSWORD:-password}@database/${POSTGRES_DB:-coder}?sslmode=disable"
+      CODER_PG_CONNECTION_URL: "postgresql://${POSTGRES_USER:-username}:${POSTGRES_PASSWORD:-password}@database/${POSTGRES_DB:-coder}?sslmode=disable&search_path=coder"
  * Brought up the deployment:
    ```shell
CODER_VERSION=v2.28.6 CODER_ACCESS_URL=http://localhost:7080 \
  POSTGRES_USER=coder POSTGRES_PASSWORD=coder docker compose up
    ```
  * Created user / template / workspace

* Updated to `v2.29.1`:
  * ```shell
CODER_VERSION=v2.29.1 CODER_ACCESS_URL=http://localhost:7080 \
  POSTGRES_USER=coder POSTGRES_PASSWORD=coder docker compose up
    ```

  * Observed following error:
    ```
database-1 | 2026-01-21 15:07:17.629 UTC [102] ERROR: relation "public.workspace_agents" does not exist
coder-1    | Encountered an error running "coder server", see "coder server --help" for more information
database-1 | 2026-01-21 15:07:17.629 UTC [102] STATEMENT: CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON public.workspace_agents (auth_instance_id, deleted);
coder-1    | error: connect to postgres: connect to postgres: migrate up: up: 2 errors occurred:
coder-1    | * run statement: migration failed: relation "public.workspace_agents" does not exist in line 0: CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON public.workspace_agents (auth_instance_id, deleted);
coder-1    | (details: pq: relation "public.workspace_agents" does not exist)
coder-1    | * commit tx on unlock: pq: Could not complete operation in a failed transaction
coder-1 exited with code 1
    ```

  * Built image locally:
    ```console
    $ make build/coder_$(./scripts/version.sh)_linux_amd64.tag
    ...
    ghcr.io/coder/coder:v2.29.1-devel-e8c482a98a67-amd64
    ```

  * Started with new image:
    ```shell
CODER_VERSION=v2.29.1-devel-e8c482a98a67-amd64 CODER_ACCESS_URL=http://localhost:7080 \
  POSTGRES_USER=coder POSTGRES_PASSWORD=coder docker compose up
    ```

  * Observed migrations ran successfully and Coder came up successfully

---------

Signed-off-by: Danny Kopping <danny@coder.com>
Co-authored-by: Danny Kopping <danny@coder.com>
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-01-21 15:45:58 +00:00
Kacper Sawicki 2e2d0dde44 feat(cli): backport #21374 to 2.29 (#21561)
Backport of #21374 to 2.29:

feat(cli): add --no-build flag to `state push` for state-only updates
(#21374)
2026-01-20 15:46:46 -06:00
Kacper Sawicki 2314e4a94e fix: backport update boundary version to 2.29 (#21290) (#21575)
fix: update boundary version https://github.com/coder/coder/pull/21290

required by https://github.com/coder/coder/pull/21561

Co-authored-by: Yevhenii Shcherbina <evgeniy.shcherbina.es@gmail.com>
2026-01-20 11:19:53 +01:00
blinkagent[bot] bd76c602e4 chore: add antigravity to allowed protocols list (#20873) (#21122)
Co-authored-by: DevCats <christofer@coder.com>
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Atif Ali <atif@coder.com>
2025-12-29 13:29:28 +05:00
Jakub Domeracki 59cdd7e21f chore: update react to apply patch for CVE-2025-55182 (#21084) (#21168)
Reference:

https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components

> Please note that coder deployments aren't vulnerable since [React
Server Components](https://react.dev/reference/rsc/server-components)
aren't in use

---------

Co-authored-by: blinkagent[bot] <237617714+blinkagent[bot]@users.noreply.github.com>
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2025-12-09 09:34:16 -06:00
122 changed files with 3843 additions and 3083 deletions
+1 -1
@@ -7,5 +7,5 @@ runs:
- name: Install Terraform
uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
with:
-terraform_version: 1.13.4
+terraform_version: 1.14.5
terraform_wrapper: false
+16 -16
@@ -35,7 +35,7 @@ jobs:
tailnet-integration: ${{ steps.filter.outputs.tailnet-integration }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -157,7 +157,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -235,7 +235,7 @@ jobs:
if: ${{ !cancelled() }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -292,7 +292,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -343,7 +343,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -532,7 +532,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -581,7 +581,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -641,7 +641,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -668,7 +668,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -701,7 +701,7 @@ jobs:
name: ${{ matrix.variant.name }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -781,7 +781,7 @@ jobs:
if: needs.changes.outputs.site == 'true' || needs.changes.outputs.ci == 'true'
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -862,7 +862,7 @@ jobs:
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -933,7 +933,7 @@ jobs:
if: always()
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -1053,7 +1053,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -1108,7 +1108,7 @@ jobs:
IMAGE: ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -1505,7 +1505,7 @@ jobs:
if: needs.changes.outputs.db == 'true' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+3 -3
@@ -36,7 +36,7 @@ jobs:
verdict: ${{ steps.check.outputs.verdict }} # DEPLOY or NOOP
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -65,7 +65,7 @@ jobs:
packages: write # to retag image as dogfood
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -146,7 +146,7 @@ jobs:
needs: deploy
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -38,7 +38,7 @@ jobs:
if: github.repository_owner == 'coder'
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+2 -2
@@ -26,7 +26,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-4' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -125,7 +125,7 @@ jobs:
id-token: write
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -27,7 +27,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -19,7 +19,7 @@ jobs:
packages: write
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+5 -5
@@ -39,7 +39,7 @@ jobs:
PR_OPEN: ${{ steps.check_pr.outputs.pr_open }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -76,7 +76,7 @@ jobs:
runs-on: "ubuntu-latest"
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -184,7 +184,7 @@ jobs:
pull-requests: write # needed for commenting on PRs
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -228,7 +228,7 @@ jobs:
CODER_IMAGE_TAG: ${{ needs.get_info.outputs.CODER_IMAGE_TAG }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -288,7 +288,7 @@ jobs:
PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -14,7 +14,7 @@ jobs:
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+4 -4
@@ -164,7 +164,7 @@ jobs:
version: ${{ steps.version.outputs.version }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -802,7 +802,7 @@ jobs:
# TODO: skip this if it's not a new release (i.e. a backport). This is
# fine right now because it just makes a PR that we can close.
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -878,7 +878,7 @@ jobs:
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -971,7 +971,7 @@ jobs:
if: ${{ !inputs.dry_run }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -20,7 +20,7 @@ jobs:
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+3 -3
@@ -27,7 +27,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -69,7 +69,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -146,7 +146,7 @@ jobs:
echo "image=$(cat "$image_job")" >> "$GITHUB_OUTPUT"
- name: Run Trivy vulnerability scanner
-uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8
+uses: aquasecurity/trivy-action@c1824fd6edce30d7ab345a9989de00bbd46ef284 # v0.34.0
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
+3 -3
@@ -18,7 +18,7 @@ jobs:
pull-requests: write
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -96,7 +96,7 @@ jobs:
contents: write
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -120,7 +120,7 @@ jobs:
actions: write
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -21,7 +21,7 @@ jobs:
pull-requests: write # required to post PR review comments by the action
steps:
- name: Harden Runner
-uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
+uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+10 -1
@@ -69,6 +69,9 @@ MOST_GO_SRC_FILES := $(shell \
# All the shell files in the repo, excluding ignored files.
SHELL_SRC_FILES := $(shell find . $(FIND_EXCLUSIONS) -type f -name '*.sh')
MIGRATION_FILES := $(shell find ./coderd/database/migrations/ -maxdepth 1 $(FIND_EXCLUSIONS) -type f -name '*.sql')
FIXTURE_FILES := $(shell find ./coderd/database/migrations/testdata/fixtures/ $(FIND_EXCLUSIONS) -type f -name '*.sql')
# Ensure we don't use the user's git configs which might cause side-effects
GIT_FLAGS = GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_SYSTEM=/dev/null
@@ -561,7 +564,7 @@ endif
# Note: we don't run zizmor in the lint target because it takes a while. CI
# runs it explicitly.
-lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes
+lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes lint/migrations
.PHONY: lint
lint/site-icons:
@@ -619,6 +622,12 @@ lint/check-scopes: coderd/database/dump.sql
go run ./scripts/check-scopes
.PHONY: lint/check-scopes
# Verify migrations do not hardcode the public schema.
lint/migrations:
./scripts/check_pg_schema.sh "Migrations" $(MIGRATION_FILES)
./scripts/check_pg_schema.sh "Fixtures" $(FIXTURE_FILES)
.PHONY: lint/migrations
# All files generated by the database should be added here, and this can be used
# as a target for jobs that need to run after the database is generated.
DB_GEN_FILES := \
+4 -1
@@ -99,7 +99,10 @@ func (c *Client) SyncReady(ctx context.Context, unitName unit.ID) (bool, error)
resp, err := c.client.SyncReady(ctx, &proto.SyncReadyRequest{
Unit: string(unitName),
})
-	return resp.Ready, err
+	if err != nil {
+		return false, xerrors.Errorf("sync ready: %w", err)
+	}
+	return resp.Ready, nil
}
// SyncStatus gets the status of a unit and its dependencies.
+9
@@ -4,6 +4,8 @@ import (
"os"
"github.com/hashicorp/go-reap"
"cdr.dev/slog"
)
type Option func(o *options)
@@ -34,8 +36,15 @@ func WithCatchSignals(sigs ...os.Signal) Option {
}
}
func WithLogger(logger slog.Logger) Option {
return func(o *options) {
o.Logger = logger
}
}
type options struct {
ExecArgs []string
PIDs reap.PidCh
CatchSignals []os.Signal
Logger slog.Logger
}
+2 -2
@@ -7,6 +7,6 @@ func IsInitProcess() bool {
return false
}
-func ForkReap(_ ...Option) error {
-	return nil
+func ForkReap(_ ...Option) (int, error) {
+	return 0, nil
}
+37 -2
@@ -32,12 +32,13 @@ func TestReap(t *testing.T) {
}
pids := make(reap.PidCh, 1)
-	err := reaper.ForkReap(
+	exitCode, err := reaper.ForkReap(
reaper.WithPIDCallback(pids),
// Provide some argument that immediately exits.
reaper.WithExecArgs("/bin/sh", "-c", "exit 0"),
)
require.NoError(t, err)
+	require.Equal(t, 0, exitCode)
cmd := exec.Command("tail", "-f", "/dev/null")
err = cmd.Start()
@@ -65,6 +66,36 @@ func TestReap(t *testing.T) {
}
}
//nolint:paralleltest
func TestForkReapExitCodes(t *testing.T) {
if testutil.InCI() {
t.Skip("Detected CI, skipping reaper tests")
}
tests := []struct {
name string
command string
expectedCode int
}{
{"exit 0", "exit 0", 0},
{"exit 1", "exit 1", 1},
{"exit 42", "exit 42", 42},
{"exit 255", "exit 255", 255},
{"SIGKILL", "kill -9 $$", 128 + 9},
{"SIGTERM", "kill -15 $$", 128 + 15},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
exitCode, err := reaper.ForkReap(
reaper.WithExecArgs("/bin/sh", "-c", tt.command),
)
require.NoError(t, err)
require.Equal(t, tt.expectedCode, exitCode, "exit code mismatch for %q", tt.command)
})
}
}
//nolint:paralleltest // Signal handling.
func TestReapInterrupt(t *testing.T) {
// Don't run the reaper test in CI. It does weird
@@ -84,13 +115,17 @@ func TestReapInterrupt(t *testing.T) {
defer signal.Stop(usrSig)
go func() {
-		errC <- reaper.ForkReap(
+		exitCode, err := reaper.ForkReap(
reaper.WithPIDCallback(pids),
reaper.WithCatchSignals(os.Interrupt),
// Signal propagation does not extend to children of children, so
// we create a little bash script to ensure sleep is interrupted.
reaper.WithExecArgs("/bin/sh", "-c", fmt.Sprintf("pid=0; trap 'kill -USR2 %d; kill -TERM $pid' INT; sleep 10 &\npid=$!; kill -USR1 %d; wait", os.Getpid(), os.Getpid())),
)
+		// The child exits with 128 + SIGTERM (15) = 143, but the trap catches
+		// SIGINT and sends SIGTERM to the sleep process, so exit code varies.
+		_ = exitCode
+		errC <- err
}()
require.Equal(t, <-usrSig, syscall.SIGUSR1)
+34 -6
@@ -3,12 +3,15 @@
package reaper
import (
"context"
"os"
"os/signal"
"syscall"
"github.com/hashicorp/go-reap"
"golang.org/x/xerrors"
"cdr.dev/slog"
)
// IsInitProcess returns true if the current process's PID is 1.
@@ -16,7 +19,7 @@ func IsInitProcess() bool {
return os.Getpid() == 1
}
-func catchSignals(pid int, sigs []os.Signal) {
+func catchSignals(logger slog.Logger, pid int, sigs []os.Signal) {
if len(sigs) == 0 {
return
}
@@ -25,10 +28,19 @@ func catchSignals(pid int, sigs []os.Signal) {
signal.Notify(sc, sigs...)
defer signal.Stop(sc)
logger.Info(context.Background(), "reaper catching signals",
slog.F("signals", sigs),
slog.F("child_pid", pid),
)
for {
s := <-sc
sig, ok := s.(syscall.Signal)
if ok {
logger.Info(context.Background(), "reaper caught signal, killing child process",
slog.F("signal", sig.String()),
slog.F("child_pid", pid),
)
_ = syscall.Kill(pid, sig)
}
}
@@ -40,7 +52,10 @@ func catchSignals(pid int, sigs []os.Signal) {
// the reaper and an exec.Command waiting for its process to complete.
// The provided 'pids' channel may be nil if the caller does not care about the
// reaped children PIDs.
-func ForkReap(opt ...Option) error {
+//
+// Returns the child's exit code (using 128+signal for signal termination)
+// and any error from Wait4.
+func ForkReap(opt ...Option) (int, error) {
opts := &options{
ExecArgs: os.Args,
}
@@ -53,7 +68,7 @@ func ForkReap(opt ...Option) error {
pwd, err := os.Getwd()
if err != nil {
-		return xerrors.Errorf("get wd: %w", err)
+		return 1, xerrors.Errorf("get wd: %w", err)
}
pattrs := &syscall.ProcAttr{
@@ -72,15 +87,28 @@ func ForkReap(opt ...Option) error {
//#nosec G204
pid, err := syscall.ForkExec(opts.ExecArgs[0], opts.ExecArgs, pattrs)
if err != nil {
-		return xerrors.Errorf("fork exec: %w", err)
+		return 1, xerrors.Errorf("fork exec: %w", err)
}
-	go catchSignals(pid, opts.CatchSignals)
+	go catchSignals(opts.Logger, pid, opts.CatchSignals)
var wstatus syscall.WaitStatus
_, err = syscall.Wait4(pid, &wstatus, 0, nil)
for xerrors.Is(err, syscall.EINTR) {
_, err = syscall.Wait4(pid, &wstatus, 0, nil)
}
-	return err
+	// Convert wait status to exit code using standard Unix conventions:
+	//   - Normal exit: use the exit code
+	//   - Signal termination: use 128 + signal number
+	var exitCode int
+	switch {
+	case wstatus.Exited():
+		exitCode = wstatus.ExitStatus()
+	case wstatus.Signaled():
+		exitCode = 128 + int(wstatus.Signal())
+	default:
+		exitCode = 1
+	}
+	return exitCode, err
}
+46 -18
@@ -9,6 +9,7 @@ import (
"net/http/pprof"
"net/url"
"os"
"os/signal"
"path/filepath"
"runtime"
"slices"
@@ -130,40 +131,29 @@ func workspaceAgent() *serpent.Command {
sinks = append(sinks, sloghuman.Sink(logWriter))
logger := inv.Logger.AppendSinks(sinks...).Leveled(slog.LevelDebug)
logger = logger.Named("reaper")
logger.Info(ctx, "spawning reaper process")
// Do not start a reaper on the child process. It's important
// to do this else we fork bomb ourselves.
//nolint:gocritic
args := append(os.Args, "--no-reap")
-			err := reaper.ForkReap(
+			exitCode, err := reaper.ForkReap(
reaper.WithExecArgs(args...),
reaper.WithCatchSignals(StopSignals...),
+				reaper.WithLogger(logger),
)
if err != nil {
logger.Error(ctx, "agent process reaper unable to fork", slog.Error(err))
return xerrors.Errorf("fork reap: %w", err)
}
-			logger.Info(ctx, "reaper process exiting")
-			return nil
+			logger.Info(ctx, "child process exited, propagating exit code",
+				slog.F("exit_code", exitCode),
+			)
+			return ExitError(exitCode, nil)
}
// Handle interrupt signals to allow for graceful shutdown,
// note that calling stopNotify disables the signal handler
// and the next interrupt will terminate the program (you
// probably want cancel instead).
//
// Note that we don't want to handle these signals in the
// process that runs as PID 1, that's why we do this after
// the reaper forked.
ctx, stopNotify := inv.SignalNotifyContext(ctx, StopSignals...)
defer stopNotify()
// DumpHandler does signal handling, so we call it after the
// reaper.
go DumpHandler(ctx, "agent")
logWriter := &clilog.LumberjackWriteCloseFixer{Writer: &lumberjack.Logger{
Filename: filepath.Join(logDir, "coder-agent.log"),
MaxSize: 5, // MB
@@ -176,6 +166,21 @@ func workspaceAgent() *serpent.Command {
sinks = append(sinks, sloghuman.Sink(logWriter))
logger := inv.Logger.AppendSinks(sinks...).Leveled(slog.LevelDebug)
+// Handle interrupt signals to allow for graceful shutdown,
+// note that calling stopNotify disables the signal handler
+// and the next interrupt will terminate the program (you
+// probably want cancel instead).
+//
+// Note that we also handle these signals in the
+// process that runs as PID 1, mainly to forward it to the agent child
+// so that it can shutdown gracefully.
+ctx, stopNotify := logSignalNotifyContext(ctx, logger, StopSignals...)
+defer stopNotify()
+// DumpHandler does signal handling, so we call it after the
+// reaper.
+go DumpHandler(ctx, "agent")
version := buildinfo.Version()
logger.Info(ctx, "agent is starting now",
slog.F("url", agentAuth.agentURL),
@@ -557,3 +562,26 @@ func urlPort(u string) (int, error) {
}
return -1, xerrors.Errorf("invalid port: %s", u)
}
// logSignalNotifyContext is like signal.NotifyContext but logs the received
// signal before canceling the context.
func logSignalNotifyContext(parent context.Context, logger slog.Logger, signals ...os.Signal) (context.Context, context.CancelFunc) {
ctx, cancel := context.WithCancelCause(parent)
c := make(chan os.Signal, 1)
signal.Notify(c, signals...)
go func() {
select {
case sig := <-c:
logger.Info(ctx, "agent received signal", slog.F("signal", sig.String()))
cancel(xerrors.Errorf("signal: %s", sig.String()))
case <-ctx.Done():
logger.Info(ctx, "ctx canceled, stopping signal handler")
}
}()
return ctx, func() {
cancel(context.Canceled)
signal.Stop(c)
}
}
+17
@@ -87,6 +87,7 @@ func buildNumberOption(n *int64) serpent.Option {
func (r *RootCmd) statePush() *serpent.Command {
var buildNumber int64
var noBuild bool
cmd := &serpent.Command{
Use: "push <workspace> <file>",
Short: "Push a Terraform state file to a workspace.",
@@ -126,6 +127,16 @@ func (r *RootCmd) statePush() *serpent.Command {
return err
}
if noBuild {
// Update state directly without triggering a build.
err = client.UpdateWorkspaceBuildState(inv.Context(), build.ID, state)
if err != nil {
return err
}
_, _ = fmt.Fprintln(inv.Stdout, "State updated successfully.")
return nil
}
build, err = client.CreateWorkspaceBuild(inv.Context(), workspace.ID, codersdk.CreateWorkspaceBuildRequest{
TemplateVersionID: build.TemplateVersionID,
Transition: build.Transition,
@@ -139,6 +150,12 @@ func (r *RootCmd) statePush() *serpent.Command {
}
cmd.Options = serpent.OptionSet{
buildNumberOption(&buildNumber),
{
Flag: "no-build",
FlagShorthand: "n",
Description: "Update the state without triggering a workspace build. Useful for state-only migrations.",
Value: serpent.BoolOf(&noBuild),
},
}
return cmd
}
+47
@@ -2,6 +2,7 @@ package cli_test
import (
"bytes"
"context"
"fmt"
"os"
"path/filepath"
@@ -10,6 +11,7 @@ import (
"testing"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/stretchr/testify/require"
@@ -158,4 +160,49 @@ func TestStatePush(t *testing.T) {
err := inv.Run()
require.NoError(t, err)
})
t.Run("NoBuild", func(t *testing.T) {
t.Parallel()
client, store := coderdtest.NewWithDatabase(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
templateAdmin, taUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, rbac.RoleTemplateAdmin())
initialState := []byte("initial state")
r := dbfake.WorkspaceBuild(t, store, database.WorkspaceTable{
OrganizationID: owner.OrganizationID,
OwnerID: taUser.ID,
}).
Seed(database.WorkspaceBuild{ProvisionerState: initialState}).
Do()
wantState := []byte("updated state")
stateFile, err := os.CreateTemp(t.TempDir(), "")
require.NoError(t, err)
_, err = stateFile.Write(wantState)
require.NoError(t, err)
err = stateFile.Close()
require.NoError(t, err)
inv, root := clitest.New(t, "state", "push", "--no-build", r.Workspace.Name, stateFile.Name())
clitest.SetupConfig(t, templateAdmin, root)
var stdout bytes.Buffer
inv.Stdout = &stdout
err = inv.Run()
require.NoError(t, err)
require.Contains(t, stdout.String(), "State updated successfully")
// Verify the state was updated by pulling it.
inv, root = clitest.New(t, "state", "pull", r.Workspace.Name)
var gotState bytes.Buffer
inv.Stdout = &gotState
clitest.SetupConfig(t, templateAdmin, root)
err = inv.Run()
require.NoError(t, err)
require.Equal(t, wantState, bytes.TrimSpace(gotState.Bytes()))
// Verify no new build was created.
builds, err := store.GetWorkspaceBuildsByWorkspaceID(dbauthz.AsSystemRestricted(context.Background()), database.GetWorkspaceBuildsByWorkspaceIDParams{
WorkspaceID: r.Workspace.ID,
})
require.NoError(t, err)
require.Len(t, builds, 1, "expected only the initial build, no new build should be created")
})
}
+4
@@ -9,5 +9,9 @@ OPTIONS:
-b, --build int
Specify a workspace build to target by name. Defaults to latest.
-n, --no-build bool
Update the state without triggering a workspace build. Useful for
state-only migrations.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -92,7 +92,7 @@ func (a *SubAgentAPI) CreateSubAgent(ctx context.Context, req *agentproto.Create
Name: agentName,
ResourceID: parentAgent.ResourceID,
AuthToken: uuid.New(),
-AuthInstanceID: parentAgent.AuthInstanceID,
+AuthInstanceID: sql.NullString{},
Architecture: req.Architecture,
EnvironmentVariables: pqtype.NullRawMessage{},
OperatingSystem: req.OperatingSystem,
+46
@@ -175,6 +175,52 @@ func TestSubAgentAPI(t *testing.T) {
}
})
// Context: https://github.com/coder/coder/pull/22196
t.Run("CreateSubAgentDoesNotInheritAuthInstanceID", func(t *testing.T) {
t.Parallel()
var (
log = testutil.Logger(t)
clock = quartz.NewMock(t)
db, org = newDatabaseWithOrg(t)
user, agent = newUserWithWorkspaceAgent(t, db, org)
)
// Given: The parent agent has an AuthInstanceID set
ctx := testutil.Context(t, testutil.WaitShort)
parentAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), agent.ID)
require.NoError(t, err)
require.True(t, parentAgent.AuthInstanceID.Valid, "parent agent should have an AuthInstanceID")
require.NotEmpty(t, parentAgent.AuthInstanceID.String)
api := newAgentAPI(t, log, db, clock, user, org, agent)
// When: We create a sub agent
createResp, err := api.CreateSubAgent(ctx, &proto.CreateSubAgentRequest{
Name: "sub-agent",
Directory: "/workspaces/test",
Architecture: "amd64",
OperatingSystem: "linux",
})
require.NoError(t, err)
subAgentID, err := uuid.FromBytes(createResp.Agent.Id)
require.NoError(t, err)
// Then: The sub-agent must NOT re-use the parent's AuthInstanceID.
subAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), subAgentID)
require.NoError(t, err)
assert.False(t, subAgent.AuthInstanceID.Valid, "sub-agent should not have an AuthInstanceID")
assert.Empty(t, subAgent.AuthInstanceID.String, "sub-agent AuthInstanceID string should be empty")
// Double-check: looking up by the parent's instance ID must
// still return the parent, not the sub-agent.
lookedUp, err := db.GetWorkspaceAgentByInstanceID(dbauthz.AsSystemRestricted(ctx), parentAgent.AuthInstanceID.String)
require.NoError(t, err)
assert.Equal(t, parentAgent.ID, lookedUp.ID, "instance ID lookup should still return the parent agent")
})
type expectedAppError struct {
index int32
field string
+50 -4
@@ -10182,6 +10182,45 @@ const docTemplate = `{
}
}
}
},
"put": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": [
"application/json"
],
"tags": [
"Builds"
],
"summary": "Update workspace build state",
"operationId": "update-workspace-build-state",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Workspace build ID",
"name": "workspacebuild",
"in": "path",
"required": true
},
{
"description": "Request body",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/codersdk.UpdateWorkspaceBuildStateRequest"
}
}
],
"responses": {
"204": {
"description": "No Content"
}
}
}
},
"/workspacebuilds/{workspacebuild}/timings": {
@@ -14758,10 +14797,6 @@ const docTemplate = `{
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -19402,6 +19437,17 @@ const docTemplate = `{
}
}
},
"codersdk.UpdateWorkspaceBuildStateRequest": {
"type": "object",
"properties": {
"state": {
"type": "array",
"items": {
"type": "integer"
}
}
}
},
"codersdk.UpdateWorkspaceDormancy": {
"type": "object",
"properties": {
+46 -4
@@ -9014,6 +9014,41 @@
}
}
}
},
"put": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": ["application/json"],
"tags": ["Builds"],
"summary": "Update workspace build state",
"operationId": "update-workspace-build-state",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Workspace build ID",
"name": "workspacebuild",
"in": "path",
"required": true
},
{
"description": "Request body",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/codersdk.UpdateWorkspaceBuildStateRequest"
}
}
],
"responses": {
"204": {
"description": "No Content"
}
}
}
},
"/workspacebuilds/{workspacebuild}/timings": {
@@ -13335,10 +13370,6 @@
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -17794,6 +17825,17 @@
}
}
},
"codersdk.UpdateWorkspaceBuildStateRequest": {
"type": "object",
"properties": {
"state": {
"type": "array",
"items": {
"type": "integer"
}
}
}
},
"codersdk.UpdateWorkspaceDormancy": {
"type": "object",
"properties": {
+1
@@ -1501,6 +1501,7 @@ func New(options *Options) *API {
r.Get("/parameters", api.workspaceBuildParameters)
r.Get("/resources", api.workspaceBuildResourcesDeprecated)
r.Get("/state", api.workspaceBuildState)
r.Put("/state", api.workspaceBuildUpdateState)
r.Get("/timings", api.workspaceBuildTimings)
})
r.Route("/authcheck", func(r chi.Router) {
+8
@@ -83,6 +83,7 @@ import (
"github.com/coder/coder/v2/coderd/schedule"
"github.com/coder/coder/v2/coderd/telemetry"
"github.com/coder/coder/v2/coderd/updatecheck"
"github.com/coder/coder/v2/coderd/usage"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/coderd/webpush"
"github.com/coder/coder/v2/coderd/workspaceapps"
@@ -186,6 +187,7 @@ type Options struct {
TelemetryReporter telemetry.Reporter
ProvisionerdServerMetrics *provisionerdserver.Metrics
UsageInserter usage.Inserter
}
// New constructs a codersdk client connected to an in-memory API instance.
@@ -266,6 +268,11 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
}
}
var usageInserter *atomic.Pointer[usage.Inserter]
if options.UsageInserter != nil {
usageInserter = &atomic.Pointer[usage.Inserter]{}
usageInserter.Store(&options.UsageInserter)
}
if options.Database == nil {
options.Database, options.Pubsub = dbtestutil.NewDB(t)
}
@@ -559,6 +566,7 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
Database: options.Database,
Pubsub: options.Pubsub,
ExternalAuthConfigs: options.ExternalAuthConfigs,
UsageInserter: usageInserter,
Auditor: options.Auditor,
ConnectionLogger: options.ConnectionLogger,
+44
@@ -0,0 +1,44 @@
package coderdtest
import (
"context"
"sync"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/usage"
"github.com/coder/coder/v2/coderd/usage/usagetypes"
)
var _ usage.Inserter = (*UsageInserter)(nil)
type UsageInserter struct {
sync.Mutex
events []usagetypes.DiscreteEvent
}
func NewUsageInserter() *UsageInserter {
return &UsageInserter{
events: []usagetypes.DiscreteEvent{},
}
}
func (u *UsageInserter) InsertDiscreteUsageEvent(_ context.Context, _ database.Store, event usagetypes.DiscreteEvent) error {
u.Lock()
defer u.Unlock()
u.events = append(u.events, event)
return nil
}
func (u *UsageInserter) GetEvents() []usagetypes.DiscreteEvent {
u.Lock()
defer u.Unlock()
eventsCopy := make([]usagetypes.DiscreteEvent, len(u.events))
copy(eventsCopy, u.events)
return eventsCopy
}
func (u *UsageInserter) Reset() {
u.Lock()
defer u.Unlock()
u.events = []usagetypes.DiscreteEvent{}
}
@@ -1 +1 @@
-DROP INDEX IF EXISTS public.workspace_agents_auth_instance_id_deleted_idx;
+DROP INDEX IF EXISTS workspace_agents_auth_instance_id_deleted_idx;
@@ -1 +1 @@
-CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON public.workspace_agents (auth_instance_id, deleted);
+CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON workspace_agents (auth_instance_id, deleted);
File diff suppressed because one or more lines are too long
@@ -1,34 +1,34 @@
-- This is a deleted user that shares the same username and linked_id as the existing user below.
-- Any future migrations need to handle this case.
-INSERT INTO public.users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
+INSERT INTO users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
VALUES ('a0061a8e-7db7-4585-838c-3116a003dd21', 'githubuser@coder.com', 'githubuser', '\x', '2022-11-02 13:05:21.445455+02', '2022-11-02 13:05:21.445455+02', 'active', '{}', true) ON CONFLICT DO NOTHING;
-INSERT INTO public.organization_members VALUES ('a0061a8e-7db7-4585-838c-3116a003dd21', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
-INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
+INSERT INTO organization_members VALUES ('a0061a8e-7db7-4585-838c-3116a003dd21', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
+INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('a0061a8e-7db7-4585-838c-3116a003dd21', 'github', '100', '');
-INSERT INTO public.users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
+INSERT INTO users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
VALUES ('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'githubuser@coder.com', 'githubuser', '\x', '2022-11-02 13:05:21.445455+02', '2022-11-02 13:05:21.445455+02', 'active', '{}', false) ON CONFLICT DO NOTHING;
-INSERT INTO public.organization_members VALUES ('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
-INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
+INSERT INTO organization_members VALUES ('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
+INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'github', '100', '');
-- Additionally, there is no unique constraint on user_id. So also add another user_link for the same user.
-- This has happened on a production database.
-INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
+INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'oidc', 'foo', '');
-- Lastly, make 2 other users who have the same user link.
-INSERT INTO public.users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
+INSERT INTO users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
VALUES ('580ed397-727d-4aaf-950a-51f89f556c24', 'dup_link_a@coder.com', 'dupe_a', '\x', '2022-11-02 13:05:21.445455+02', '2022-11-02 13:05:21.445455+02', 'active', '{}', false) ON CONFLICT DO NOTHING;
-INSERT INTO public.organization_members VALUES ('580ed397-727d-4aaf-950a-51f89f556c24', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
-INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
+INSERT INTO organization_members VALUES ('580ed397-727d-4aaf-950a-51f89f556c24', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
+INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('580ed397-727d-4aaf-950a-51f89f556c24', 'github', '500', '');
-INSERT INTO public.users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
+INSERT INTO users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
VALUES ('c813366b-2fde-45ae-920c-101c3ad6a1e1', 'dup_link_b@coder.com', 'dupe_b', '\x', '2022-11-02 13:05:21.445455+02', '2022-11-02 13:05:21.445455+02', 'active', '{}', false) ON CONFLICT DO NOTHING;
-INSERT INTO public.organization_members VALUES ('c813366b-2fde-45ae-920c-101c3ad6a1e1', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
-INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
+INSERT INTO organization_members VALUES ('c813366b-2fde-45ae-920c-101c3ad6a1e1', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
+INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('c813366b-2fde-45ae-920c-101c3ad6a1e1', 'github', '500', '');
@@ -1,4 +1,4 @@
-INSERT INTO public.workspace_app_stats (
+INSERT INTO workspace_app_stats (
id,
user_id,
workspace_id,
@@ -1,5 +1,5 @@
INSERT INTO
-public.workspace_modules (
+workspace_modules (
id,
job_id,
transition,
@@ -1,15 +1,15 @@
-INSERT INTO public.organizations (id, name, description, created_at, updated_at, is_default, display_name, icon) VALUES ('20362772-802a-4a72-8e4f-3648b4bfd168', 'strange_hopper58', 'wizardly_stonebraker60', '2025-02-07 07:46:19.507551 +00:00', '2025-02-07 07:46:19.507552 +00:00', false, 'competent_rhodes59', '');
+INSERT INTO organizations (id, name, description, created_at, updated_at, is_default, display_name, icon) VALUES ('20362772-802a-4a72-8e4f-3648b4bfd168', 'strange_hopper58', 'wizardly_stonebraker60', '2025-02-07 07:46:19.507551 +00:00', '2025-02-07 07:46:19.507552 +00:00', false, 'competent_rhodes59', '');
-INSERT INTO public.users (id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, login_type, avatar_url, deleted, last_seen_at, quiet_hours_schedule, theme_preference, name, github_com_user_id, hashed_one_time_passcode, one_time_passcode_expires_at) VALUES ('6c353aac-20de-467b-bdfb-3c30a37adcd2', 'vigorous_murdock61', 'affectionate_hawking62', 'lqTu9C5363AwD7NVNH6noaGjp91XIuZJ', '2025-02-07 07:46:19.510861 +00:00', '2025-02-07 07:46:19.512949 +00:00', 'active', '{}', 'password', '', false, '0001-01-01 00:00:00.000000', '', '', 'vigilant_hugle63', null, null, null);
+INSERT INTO users (id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, login_type, avatar_url, deleted, last_seen_at, quiet_hours_schedule, theme_preference, name, github_com_user_id, hashed_one_time_passcode, one_time_passcode_expires_at) VALUES ('6c353aac-20de-467b-bdfb-3c30a37adcd2', 'vigorous_murdock61', 'affectionate_hawking62', 'lqTu9C5363AwD7NVNH6noaGjp91XIuZJ', '2025-02-07 07:46:19.510861 +00:00', '2025-02-07 07:46:19.512949 +00:00', 'active', '{}', 'password', '', false, '0001-01-01 00:00:00.000000', '', '', 'vigilant_hugle63', null, null, null);
-INSERT INTO public.templates (id, created_at, updated_at, organization_id, deleted, name, provisioner, active_version_id, description, default_ttl, created_by, icon, user_acl, group_acl, display_name, allow_user_cancel_workspace_jobs, allow_user_autostart, allow_user_autostop, failure_ttl, time_til_dormant, time_til_dormant_autodelete, autostop_requirement_days_of_week, autostop_requirement_weeks, autostart_block_days_of_week, require_active_version, deprecated, activity_bump, max_port_sharing_level) VALUES ('6b298946-7a4f-47ac-9158-b03b08740a41', '2025-02-07 07:46:19.513317 +00:00', '2025-02-07 07:46:19.513317 +00:00', '20362772-802a-4a72-8e4f-3648b4bfd168', false, 'modest_leakey64', 'echo', 'e6cfa2a4-e4cf-4182-9e19-08b975682a28', 'upbeat_wright65', 604800000000000, '6c353aac-20de-467b-bdfb-3c30a37adcd2', 'nervous_keller66', '{}', '{"20362772-802a-4a72-8e4f-3648b4bfd168": ["read", "use"]}', 'determined_aryabhata67', false, true, true, 0, 0, 0, 0, 0, 0, false, '', 3600000000000, 'owner');
-INSERT INTO public.template_versions (id, template_id, organization_id, created_at, updated_at, name, readme, job_id, created_by, external_auth_providers, message, archived, source_example_id) VALUES ('af58bd62-428c-4c33-849b-d43a3be07d93', '6b298946-7a4f-47ac-9158-b03b08740a41', '20362772-802a-4a72-8e4f-3648b4bfd168', '2025-02-07 07:46:19.514782 +00:00', '2025-02-07 07:46:19.514782 +00:00', 'distracted_shockley68', 'sleepy_turing69', 'f2e2ea1c-5aa3-4a1d-8778-2e5071efae59', '6c353aac-20de-467b-bdfb-3c30a37adcd2', '[]', '', false, null);
+INSERT INTO templates (id, created_at, updated_at, organization_id, deleted, name, provisioner, active_version_id, description, default_ttl, created_by, icon, user_acl, group_acl, display_name, allow_user_cancel_workspace_jobs, allow_user_autostart, allow_user_autostop, failure_ttl, time_til_dormant, time_til_dormant_autodelete, autostop_requirement_days_of_week, autostop_requirement_weeks, autostart_block_days_of_week, require_active_version, deprecated, activity_bump, max_port_sharing_level) VALUES ('6b298946-7a4f-47ac-9158-b03b08740a41', '2025-02-07 07:46:19.513317 +00:00', '2025-02-07 07:46:19.513317 +00:00', '20362772-802a-4a72-8e4f-3648b4bfd168', false, 'modest_leakey64', 'echo', 'e6cfa2a4-e4cf-4182-9e19-08b975682a28', 'upbeat_wright65', 604800000000000, '6c353aac-20de-467b-bdfb-3c30a37adcd2', 'nervous_keller66', '{}', '{"20362772-802a-4a72-8e4f-3648b4bfd168": ["read", "use"]}', 'determined_aryabhata67', false, true, true, 0, 0, 0, 0, 0, 0, false, '', 3600000000000, 'owner');
+INSERT INTO template_versions (id, template_id, organization_id, created_at, updated_at, name, readme, job_id, created_by, external_auth_providers, message, archived, source_example_id) VALUES ('af58bd62-428c-4c33-849b-d43a3be07d93', '6b298946-7a4f-47ac-9158-b03b08740a41', '20362772-802a-4a72-8e4f-3648b4bfd168', '2025-02-07 07:46:19.514782 +00:00', '2025-02-07 07:46:19.514782 +00:00', 'distracted_shockley68', 'sleepy_turing69', 'f2e2ea1c-5aa3-4a1d-8778-2e5071efae59', '6c353aac-20de-467b-bdfb-3c30a37adcd2', '[]', '', false, null);
-INSERT INTO public.template_version_presets (id, template_version_id, name, created_at) VALUES ('28b42cc0-c4fe-4907-a0fe-e4d20f1e9bfe', 'af58bd62-428c-4c33-849b-d43a3be07d93', 'test', '0001-01-01 00:00:00.000000 +00:00');
+INSERT INTO template_version_presets (id, template_version_id, name, created_at) VALUES ('28b42cc0-c4fe-4907-a0fe-e4d20f1e9bfe', 'af58bd62-428c-4c33-849b-d43a3be07d93', 'test', '0001-01-01 00:00:00.000000 +00:00');
-- Add presets with the same template version ID and name
-- to ensure they're correctly handled by the 00031*_preset_prebuilds migration.
-INSERT INTO public.template_version_presets (
+INSERT INTO template_version_presets (
id, template_version_id, name, created_at
)
VALUES (
@@ -19,7 +19,7 @@ VALUES (
'0001-01-01 00:00:00.000000 +00:00'
);
-INSERT INTO public.template_version_presets (
+INSERT INTO template_version_presets (
id, template_version_id, name, created_at
)
VALUES (
@@ -29,4 +29,4 @@ VALUES (
'0001-01-01 00:00:00.000000 +00:00'
);
-INSERT INTO public.template_version_preset_parameters (id, template_version_preset_id, name, value) VALUES ('ea90ccd2-5024-459e-87e4-879afd24de0f', '28b42cc0-c4fe-4907-a0fe-e4d20f1e9bfe', 'test', 'test');
+INSERT INTO template_version_preset_parameters (id, template_version_preset_id, name, value) VALUES ('ea90ccd2-5024-459e-87e4-879afd24de0f', '28b42cc0-c4fe-4907-a0fe-e4d20f1e9bfe', 'test', 'test');
@@ -1,4 +1,4 @@
-INSERT INTO public.tasks VALUES (
+INSERT INTO tasks VALUES (
'f5a1c3e4-8b2d-4f6a-9d7e-2a8b5c9e1f3d', -- id
'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', -- organization_id
'30095c71-380b-457a-8995-97b8ee6e5307', -- owner_id
@@ -11,7 +11,7 @@ INSERT INTO public.tasks VALUES (
NULL -- deleted_at
) ON CONFLICT DO NOTHING;
-INSERT INTO public.task_workspace_apps VALUES (
+INSERT INTO task_workspace_apps VALUES (
'f5a1c3e4-8b2d-4f6a-9d7e-2a8b5c9e1f3d', -- task_id
'a8c0b8c5-c9a8-4f33-93a4-8142e6858244', -- workspace_build_id
'8fa17bbd-c48c-44c7-91ae-d4acbc755fad', -- workspace_agent_id
@@ -1,4 +1,4 @@
-INSERT INTO public.task_workspace_apps VALUES (
+INSERT INTO task_workspace_apps VALUES (
'f5a1c3e4-8b2d-4f6a-9d7e-2a8b5c9e1f3d', -- task_id
NULL, -- workspace_agent_id
NULL, -- workspace_app_id
+50
@@ -6107,6 +6107,56 @@ func TestGetWorkspaceAgentsByParentID(t *testing.T) {
})
}
func TestGetWorkspaceAgentByInstanceID(t *testing.T) {
t.Parallel()
// Context: https://github.com/coder/coder/pull/22196
t.Run("DoesNotReturnSubAgents", func(t *testing.T) {
t.Parallel()
// Given: A parent workspace agent with an AuthInstanceID and a
// sub-agent that shares the same AuthInstanceID.
db, _ := dbtestutil.NewDB(t)
org := dbgen.Organization(t, db, database.Organization{})
job := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{
Type: database.ProvisionerJobTypeTemplateVersionImport,
OrganizationID: org.ID,
})
resource := dbgen.WorkspaceResource(t, db, database.WorkspaceResource{
JobID: job.ID,
})
authInstanceID := fmt.Sprintf("instance-%s-%d", t.Name(), time.Now().UnixNano())
parentAgent := dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ResourceID: resource.ID,
AuthInstanceID: sql.NullString{
String: authInstanceID,
Valid: true,
},
})
// Create a sub-agent with the same AuthInstanceID (simulating
// the old behavior before the fix).
_ = dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ParentID: uuid.NullUUID{UUID: parentAgent.ID, Valid: true},
ResourceID: resource.ID,
AuthInstanceID: sql.NullString{
String: authInstanceID,
Valid: true,
},
})
ctx := testutil.Context(t, testutil.WaitShort)
// When: We look up the agent by instance ID.
agent, err := db.GetWorkspaceAgentByInstanceID(ctx, authInstanceID)
require.NoError(t, err)
// Then: The result must be the parent agent, not the sub-agent.
assert.Equal(t, parentAgent.ID, agent.ID, "instance ID lookup should return the parent agent, not a sub-agent")
assert.False(t, agent.ParentID.Valid, "returned agent should not have a parent (should be the parent itself)")
})
}
func requireUsersMatch(t testing.TB, expected []database.User, found []database.GetUsersRow, msg string) {
t.Helper()
require.ElementsMatch(t, expected, database.ConvertUserRows(found), msg)
+2
@@ -18052,6 +18052,8 @@ WHERE
auth_instance_id = $1 :: TEXT
-- Filter out deleted sub agents.
AND deleted = FALSE
-- Filter out sub agents, they do not authenticate with auth_instance_id.
AND parent_id IS NULL
ORDER BY
created_at DESC
`
@@ -17,6 +17,8 @@ WHERE
auth_instance_id = @auth_instance_id :: TEXT
-- Filter out deleted sub agents.
AND deleted = FALSE
-- Filter out sub agents, they do not authenticate with auth_instance_id.
AND parent_id IS NULL
ORDER BY
created_at DESC;
+15 -5
@@ -21,7 +21,6 @@ import (
"github.com/coder/coder/v2/coderd/pubsub"
markdown "github.com/coder/coder/v2/coderd/render"
"github.com/coder/coder/v2/codersdk"
-"github.com/coder/coder/v2/codersdk/wsjson"
"github.com/coder/websocket"
)
@@ -127,6 +126,7 @@ func (api *API) watchInboxNotifications(rw http.ResponseWriter, r *http.Request)
templates = p.UUIDs(vals, []uuid.UUID{}, "templates")
readStatus = p.String(vals, "all", "read_status")
format = p.String(vals, notificationFormatMarkdown, "format")
logger = api.Logger.Named("inbox_notifications_watcher")
)
p.ErrorExcessParams(vals)
if len(p.Errors) > 0 {
@@ -214,11 +214,17 @@ func (api *API) watchInboxNotifications(rw http.ResponseWriter, r *http.Request)
return
}
go httpapi.Heartbeat(ctx, conn)
defer conn.Close(websocket.StatusNormalClosure, "connection closed")
ctx, cancel := context.WithCancel(ctx)
defer cancel()
encoder := wsjson.NewEncoder[codersdk.GetInboxNotificationResponse](conn, websocket.MessageText)
defer encoder.Close(websocket.StatusNormalClosure)
_ = conn.CloseRead(context.Background())
ctx, wsNetConn := codersdk.WebsocketNetConn(ctx, conn, websocket.MessageText)
defer wsNetConn.Close()
go httpapi.HeartbeatClose(ctx, logger, cancel, conn)
encoder := json.NewEncoder(wsNetConn)
// Log the request immediately instead of after it completes.
if rl := loggermw.RequestLoggerFromContext(ctx); rl != nil {
@@ -227,8 +233,12 @@ func (api *API) watchInboxNotifications(rw http.ResponseWriter, r *http.Request)
for {
select {
case <-api.ctx.Done():
return
case <-ctx.Done():
return
case notif := <-notificationCh:
unreadCount, err := api.Database.CountUnreadInboxNotificationsByUserID(ctx, apikey.UserID)
if err != nil {
+1
@@ -63,6 +63,7 @@ type StateSnapshotter interface {
type Claimer interface {
Claim(
ctx context.Context,
store database.Store,
now time.Time,
userID uuid.UUID,
name string,
+1 -1
@@ -34,7 +34,7 @@ var DefaultReconciler ReconciliationOrchestrator = NoopReconciler{}
type NoopClaimer struct{}
-func (NoopClaimer) Claim(context.Context, time.Time, uuid.UUID, string, uuid.UUID, sql.NullString, sql.NullTime, sql.NullInt64) (*uuid.UUID, error) {
+func (NoopClaimer) Claim(context.Context, database.Store, time.Time, uuid.UUID, string, uuid.UUID, sql.NullString, sql.NullTime, sql.NullInt64) (*uuid.UUID, error) {
// Not entitled to claim prebuilds in AGPL version.
return nil, ErrAGPLDoesNotSupportPrebuiltWorkspaces
}
+15 -16
@@ -2026,13 +2026,11 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
}
var (
hasAITask bool
unknownAppID string
taskAppID uuid.NullUUID
taskAgentID uuid.NullUUID
)
if tasks := jobType.WorkspaceBuild.GetAiTasks(); len(tasks) > 0 {
hasAITask = true
task := tasks[0]
if task == nil {
return xerrors.Errorf("update ai task: task is nil")
@@ -2048,7 +2046,6 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
if !slices.Contains(appIDs, appID) {
unknownAppID = appID
hasAITask = false
} else {
// Only parse for valid app and agent to avoid fk violation.
id, err := uuid.Parse(appID)
@@ -2083,7 +2080,7 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
Level: []database.LogLevel{database.LogLevelWarn, database.LogLevelWarn, database.LogLevelWarn, database.LogLevelWarn},
Stage: []string{"Cleaning Up", "Cleaning Up", "Cleaning Up", "Cleaning Up"},
Output: []string{
-fmt.Sprintf("Unknown ai_task_app_id %q. This workspace will be unable to run AI tasks. This may be due to a template configuration issue, please check with the template author.", taskAppID.UUID.String()),
+fmt.Sprintf("Unknown ai_task_app_id %q. This workspace will be unable to run AI tasks. This may be due to a template configuration issue, please check with the template author.", unknownAppID),
"Template author: double-check the following:",
" - You have associated the coder_ai_task with a valid coder_app in your template (ref: https://registry.terraform.io/providers/coder/coder/latest/docs/resources/ai_task).",
" - You have associated the coder_agent with at least one other compute resource. Agents with no other associated resources are not inserted into the database.",
@@ -2098,21 +2095,23 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
}
}
if hasAITask && workspaceBuild.Transition == database.WorkspaceTransitionStart {
// Insert usage event for managed agents.
usageInserter := s.UsageInserter.Load()
if usageInserter != nil {
event := usagetypes.DCManagedAgentsV1{
Count: 1,
}
err = (*usageInserter).InsertDiscreteUsageEvent(ctx, db, event)
if err != nil {
return xerrors.Errorf("insert %q event: %w", event.EventType(), err)
var hasAITask bool
if task, err := db.GetTaskByWorkspaceID(ctx, workspace.ID); err == nil {
hasAITask = true
if workspaceBuild.Transition == database.WorkspaceTransitionStart {
// Insert usage event for managed agents.
usageInserter := s.UsageInserter.Load()
if usageInserter != nil {
event := usagetypes.DCManagedAgentsV1{
Count: 1,
}
err = (*usageInserter).InsertDiscreteUsageEvent(ctx, db, event)
if err != nil {
return xerrors.Errorf("insert %q event: %w", event.EventType(), err)
}
}
}
}
if task, err := db.GetTaskByWorkspaceID(ctx, workspace.ID); err == nil {
// Irrespective of whether the agent or sidebar app is present,
// perform the upsert to ensure a link between the task and
// workspace build. Linking the task to the build is typically
@@ -2878,7 +2878,7 @@ func TestCompleteJob(t *testing.T) {
sidebarAppID := uuid.New()
for _, tc := range []testcase{
{
name: "has_ai_task is false by default",
name: "has_ai_task is false if task_id is nil",
transition: database.WorkspaceTransitionStart,
input: &proto.CompletedJob_WorkspaceBuild{
// No AiTasks defined.
@@ -2887,6 +2887,37 @@ func TestCompleteJob(t *testing.T) {
expectHasAiTask: false,
expectUsageEvent: false,
},
{
name: "has_ai_task is false even if there are coder_ai_task resources, but no task_id",
transition: database.WorkspaceTransitionStart,
input: &proto.CompletedJob_WorkspaceBuild{
AiTasks: []*sdkproto.AITask{
{
Id: uuid.NewString(),
AppId: sidebarAppID.String(),
},
},
Resources: []*sdkproto.Resource{
{
Agents: []*sdkproto.Agent{
{
Id: uuid.NewString(),
Name: "a",
Apps: []*sdkproto.App{
{
Id: sidebarAppID.String(),
Slug: "test-app",
},
},
},
},
},
},
},
isTask: false,
expectHasAiTask: false,
expectUsageEvent: false,
},
{
name: "has_ai_task is set to true",
transition: database.WorkspaceTransitionStart,
@@ -2964,15 +2995,17 @@ func TestCompleteJob(t *testing.T) {
{
Id: uuid.NewString(),
// Non-existing app ID would previously trigger a FK violation.
// Now it should just be ignored.
// Now it will trigger a warning instead in the provisioner logs.
AppId: sidebarAppID.String(),
},
},
},
isTask: true,
expectTaskStatus: database.TaskStatusInitializing,
expectHasAiTask: false,
expectUsageEvent: false,
// You can still "sort of" use a task in this state, but since we don't
// have the correct app ID, you won't be able to communicate with it via Coder.
expectHasAiTask: true,
expectUsageEvent: true,
},
{
name: "has_ai_task is set to true, but transition is not start",
@@ -3007,19 +3040,6 @@ func TestCompleteJob(t *testing.T) {
expectHasAiTask: true,
expectUsageEvent: false,
},
{
name: "current build does not have ai task but previous build did",
seedFunc: seedPreviousWorkspaceStartWithAITask,
transition: database.WorkspaceTransitionStop,
input: &proto.CompletedJob_WorkspaceBuild{
AiTasks: []*sdkproto.AITask{},
Resources: []*sdkproto.Resource{},
},
isTask: true,
expectTaskStatus: database.TaskStatusPaused,
expectHasAiTask: false, // We no longer inherit this from the previous build.
expectUsageEvent: false,
},
} {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
@@ -4410,62 +4430,3 @@ func (f *fakeUsageInserter) InsertDiscreteUsageEvent(_ context.Context, _ databa
f.collectedEvents = append(f.collectedEvents, event)
return nil
}
func seedPreviousWorkspaceStartWithAITask(ctx context.Context, t testing.TB, db database.Store) error {
t.Helper()
// If the below looks slightly convoluted, that's because it is.
// The workspace doesn't yet have a latest build, so querying all
// workspaces will fail.
tpls, err := db.GetTemplates(ctx)
if err != nil {
return xerrors.Errorf("seedFunc: get template: %w", err)
}
if len(tpls) != 1 {
return xerrors.Errorf("seedFunc: expected exactly one template, got %d", len(tpls))
}
ws, err := db.GetWorkspacesByTemplateID(ctx, tpls[0].ID)
if err != nil {
return xerrors.Errorf("seedFunc: get workspaces: %w", err)
}
if len(ws) != 1 {
return xerrors.Errorf("seedFunc: expected exactly one workspace, got %d", len(ws))
}
w := ws[0]
prevJob := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{
OrganizationID: w.OrganizationID,
InitiatorID: w.OwnerID,
Type: database.ProvisionerJobTypeWorkspaceBuild,
})
tvs, err := db.GetTemplateVersionsByTemplateID(ctx, database.GetTemplateVersionsByTemplateIDParams{
TemplateID: tpls[0].ID,
})
if err != nil {
return xerrors.Errorf("seedFunc: get template version: %w", err)
}
if len(tvs) != 1 {
return xerrors.Errorf("seedFunc: expected exactly one template version, got %d", len(tvs))
}
if tpls[0].ActiveVersionID == uuid.Nil {
return xerrors.Errorf("seedFunc: active version id is nil")
}
res := dbgen.WorkspaceResource(t, db, database.WorkspaceResource{
JobID: prevJob.ID,
})
agt := dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ResourceID: res.ID,
})
_ = dbgen.WorkspaceApp(t, db, database.WorkspaceApp{
AgentID: agt.ID,
})
_ = dbgen.WorkspaceBuild(t, db, database.WorkspaceBuild{
BuildNumber: 1,
HasAITask: sql.NullBool{Valid: true, Bool: true},
ID: w.ID,
InitiatorID: w.OwnerID,
JobID: prevJob.ID,
TemplateVersionID: tvs[0].ID,
Transition: database.WorkspaceTransitionStart,
WorkspaceID: w.ID,
})
return nil
}
+57
@@ -849,6 +849,63 @@ func (api *API) workspaceBuildState(rw http.ResponseWriter, r *http.Request) {
_, _ = rw.Write(workspaceBuild.ProvisionerState)
}
// @Summary Update workspace build state
// @ID update-workspace-build-state
// @Security CoderSessionToken
// @Accept json
// @Tags Builds
// @Param workspacebuild path string true "Workspace build ID" format(uuid)
// @Param request body codersdk.UpdateWorkspaceBuildStateRequest true "Request body"
// @Success 204
// @Router /workspacebuilds/{workspacebuild}/state [put]
func (api *API) workspaceBuildUpdateState(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
workspaceBuild := httpmw.WorkspaceBuildParam(r)
workspace, err := api.Database.GetWorkspaceByID(ctx, workspaceBuild.WorkspaceID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "No workspace exists for this job.",
})
return
}
template, err := api.Database.GetTemplateByID(ctx, workspace.TemplateID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to get template",
Detail: err.Error(),
})
return
}
// You must have update permissions on the template to update the state.
if !api.Authorize(r, policy.ActionUpdate, template.RBACObject()) {
httpapi.ResourceNotFound(rw)
return
}
var req codersdk.UpdateWorkspaceBuildStateRequest
if !httpapi.Read(ctx, rw, r, &req) {
return
}
// Use system context since we've already verified authorization via template permissions.
// nolint:gocritic // System access required for provisioner state update.
err = api.Database.UpdateWorkspaceBuildProvisionerStateByID(dbauthz.AsSystemRestricted(ctx), database.UpdateWorkspaceBuildProvisionerStateByIDParams{
ID: workspaceBuild.ID,
ProvisionerState: req.State,
UpdatedAt: dbtime.Now(),
})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to update workspace build state.",
Detail: err.Error(),
})
return
}
rw.WriteHeader(http.StatusNoContent)
}
// @Summary Get workspace build timings by ID
// @ID get-workspace-build-timings-by-id
// @Security CoderSessionToken
+1 -1
@@ -937,7 +937,7 @@ func claimPrebuild(
nextStartAt sql.NullTime,
ttl sql.NullInt64,
) (*database.Workspace, error) {
claimedID, err := claimer.Claim(ctx, now, owner.ID, name, templateVersionPresetID, autostartSchedule, nextStartAt, ttl)
claimedID, err := claimer.Claim(ctx, db, now, owner.ID, name, templateVersionPresetID, autostartSchedule, nextStartAt, ttl)
if err != nil {
// TODO: enhance this by clarifying whether this *specific* prebuild failed or whether there are none to claim.
return nil, xerrors.Errorf("claim prebuild: %w", err)
+36 -7
@@ -6,7 +6,6 @@ import (
"context"
"database/sql"
"encoding/json"
"errors"
"fmt"
"net/http"
"time"
@@ -87,13 +86,15 @@ type Builder struct {
templateVersionPresetParameterValues *[]database.TemplateVersionPresetParameter
parameterRender dynamicparameters.Renderer
workspaceTags *map[string]string
task *database.Task
hasTask *bool // A workspace without a task will have a nil `task` and false `hasTask`.
prebuiltWorkspaceBuildStage sdkproto.PrebuiltWorkspaceBuildStage
verifyNoLegacyParametersOnce bool
}
type UsageChecker interface {
CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (UsageCheckResponse, error)
CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, task *database.Task, transition database.WorkspaceTransition) (UsageCheckResponse, error)
}
type UsageCheckResponse struct {
@@ -105,7 +106,7 @@ type NoopUsageChecker struct{}
var _ UsageChecker = NoopUsageChecker{}
func (NoopUsageChecker) CheckBuildUsage(_ context.Context, _ database.Store, _ *database.TemplateVersion) (UsageCheckResponse, error) {
func (NoopUsageChecker) CheckBuildUsage(_ context.Context, _ database.Store, _ *database.TemplateVersion, _ *database.Task, _ database.WorkspaceTransition) (UsageCheckResponse, error) {
return UsageCheckResponse{
Permitted: true,
}, nil
@@ -489,8 +490,12 @@ func (b *Builder) buildTx(authFunc func(action policy.Action, object rbac.Object
return BuildError{code, "insert workspace build", err}
}
task, err := b.getWorkspaceTask()
if err != nil {
return BuildError{http.StatusInternalServerError, "get task by workspace id", err}
}
// If this is a task workspace, link it to the latest workspace build.
if task, err := store.GetTaskByWorkspaceID(b.ctx, b.workspace.ID); err == nil {
if task != nil {
_, err = store.UpsertTaskWorkspaceApp(b.ctx, database.UpsertTaskWorkspaceAppParams{
TaskID: task.ID,
WorkspaceBuildNumber: buildNum,
@@ -500,8 +505,6 @@ func (b *Builder) buildTx(authFunc func(action policy.Action, object rbac.Object
if err != nil {
return BuildError{http.StatusInternalServerError, "upsert task workspace app", err}
}
} else if !errors.Is(err, sql.ErrNoRows) {
return BuildError{http.StatusInternalServerError, "get task by workspace id", err}
}
err = store.InsertWorkspaceBuildParameters(b.ctx, database.InsertWorkspaceBuildParametersParams{
@@ -634,6 +637,27 @@ func (b *Builder) getTemplateVersionID() (uuid.UUID, error) {
return bld.TemplateVersionID, nil
}
// getWorkspaceTask returns the task associated with the workspace, if any.
// If no task exists, it returns (nil, nil).
func (b *Builder) getWorkspaceTask() (*database.Task, error) {
if b.hasTask != nil {
return b.task, nil
}
t, err := b.store.GetTaskByWorkspaceID(b.ctx, b.workspace.ID)
if err != nil {
if xerrors.Is(err, sql.ErrNoRows) {
b.hasTask = ptr.Ref(false)
//nolint:nilnil // No task exists.
return nil, nil
}
return nil, xerrors.Errorf("get task: %w", err)
}
b.task = &t
b.hasTask = ptr.Ref(true)
return b.task, nil
}
func (b *Builder) getTemplateTerraformValues() (*database.TemplateVersionTerraformValue, error) {
if b.terraformValues != nil {
return b.terraformValues, nil
@@ -1307,7 +1331,12 @@ func (b *Builder) checkUsage() error {
return BuildError{http.StatusInternalServerError, "Failed to fetch template version", err}
}
resp, err := b.usageChecker.CheckBuildUsage(b.ctx, b.store, templateVersion)
task, err := b.getWorkspaceTask()
if err != nil {
return BuildError{http.StatusInternalServerError, "Failed to fetch workspace task", err}
}
resp, err := b.usageChecker.CheckBuildUsage(b.ctx, b.store, templateVersion, task, b.trans)
if err != nil {
return BuildError{http.StatusInternalServerError, "Failed to check build usage", err}
}
+8 -5
@@ -570,6 +570,7 @@ func TestWorkspaceBuildWithRichParameters(t *testing.T) {
mDB := expectDB(t,
// Inputs
withTemplate,
withNoTask,
withInactiveVersionNoParams(),
withLastBuildFound,
withTemplateVersionVariables(inactiveVersionID, nil),
@@ -605,6 +606,7 @@ func TestWorkspaceBuildWithRichParameters(t *testing.T) {
withTemplate,
withInactiveVersion(richParameters),
withLastBuildFound,
withNoTask,
withTemplateVersionVariables(inactiveVersionID, nil),
withRichParameters(initialBuildParameters),
withParameterSchemas(inactiveJobID, nil),
@@ -1049,7 +1051,7 @@ func TestWorkspaceBuildUsageChecker(t *testing.T) {
var calls int64
fakeUsageChecker := &fakeUsageChecker{
checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
checkBuildUsageFunc: func(_ context.Context, _ database.Store, _ *database.TemplateVersion, _ *database.Task, _ database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
atomic.AddInt64(&calls, 1)
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
},
@@ -1126,7 +1128,7 @@ func TestWorkspaceBuildUsageChecker(t *testing.T) {
var calls int64
fakeUsageChecker := &fakeUsageChecker{
checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
checkBuildUsageFunc: func(_ context.Context, _ database.Store, _ *database.TemplateVersion, _ *database.Task, _ database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
atomic.AddInt64(&calls, 1)
return c.response, c.responseErr
},
@@ -1134,6 +1136,7 @@ func TestWorkspaceBuildUsageChecker(t *testing.T) {
mDB := expectDB(t,
withTemplate,
withNoTask,
withInactiveVersionNoParams(),
)
fc := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
@@ -1577,11 +1580,11 @@ func expectFindMatchingPresetID(id uuid.UUID, err error) func(mTx *dbmock.MockSt
}
type fakeUsageChecker struct {
checkBuildUsageFunc func(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error)
checkBuildUsageFunc func(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, task *database.Task, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error)
}
func (f *fakeUsageChecker) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
return f.checkBuildUsageFunc(ctx, store, templateVersion)
func (f *fakeUsageChecker) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, task *database.Task, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
return f.checkBuildUsageFunc(ctx, store, templateVersion, task, transition)
}
func withNoTask(mTx *dbmock.MockStore) {
-4
@@ -242,10 +242,6 @@ type Feature struct {
// Below is only for features that use usage periods.
// SoftLimit is the soft limit of the feature, and is only used for showing
// included limits in the dashboard. No license validation or warnings are
// generated from this value.
SoftLimit *int64 `json:"soft_limit,omitempty"`
// UsagePeriod denotes that the usage is a counter that accumulates over
// this period (and most likely resets with the issuance of the next
// license).
+3 -2
@@ -12,8 +12,9 @@ import (
)
const (
LicenseExpiryClaim = "license_expires"
LicenseTelemetryRequiredErrorText = "License requires telemetry but telemetry is disabled"
LicenseExpiryClaim = "license_expires"
LicenseTelemetryRequiredErrorText = "License requires telemetry but telemetry is disabled"
LicenseManagedAgentLimitExceededWarningText = "You have built more workspaces with managed agents than your license allows."
)
type AddLicenseRequest struct {
+22
@@ -188,6 +188,28 @@ func (c *Client) WorkspaceBuildState(ctx context.Context, build uuid.UUID) ([]by
return io.ReadAll(res.Body)
}
// UpdateWorkspaceBuildStateRequest is the request body for updating the
// provisioner state of a workspace build.
type UpdateWorkspaceBuildStateRequest struct {
State []byte `json:"state"`
}
// UpdateWorkspaceBuildState updates the provisioner state of the build without
// triggering a new build. This is useful for state-only migrations.
func (c *Client) UpdateWorkspaceBuildState(ctx context.Context, build uuid.UUID, state []byte) error {
res, err := c.Request(ctx, http.MethodPut, fmt.Sprintf("/api/v2/workspacebuilds/%s/state", build), UpdateWorkspaceBuildStateRequest{
State: state,
})
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusNoContent {
return ReadBodyAsError(res)
}
return nil
}
func (c *Client) WorkspaceBuildByUsernameAndWorkspaceNameAndBuildNumber(ctx context.Context, username string, workspaceName string, buildNumber string) (WorkspaceBuild, error) {
res, err := c.Request(ctx, http.MethodGet, fmt.Sprintf("/api/v2/users/%s/workspace/%s/builds/%s", username, workspaceName, buildNumber), nil)
if err != nil {
+19
@@ -115,6 +115,25 @@ specified in your template in the `disable_params` search params list
[![Open in Coder](https://YOUR_ACCESS_URL/open-in-coder.svg)](https://YOUR_ACCESS_URL/templates/YOUR_TEMPLATE/workspace?disable_params=first_parameter,second_parameter)
```
### Security: consent dialog for automatic creation
When using `mode=auto` with prefilled `param.*` values, Coder displays a
security consent dialog before creating the workspace. This protects users
from malicious links that could provision workspaces with untrusted
configurations, such as dotfiles or startup scripts from unknown sources.
The dialog shows:
- A warning that a workspace is about to be created automatically from a link
- All prefilled `param.*` values from the URL
- **Confirm and Create** and **Cancel** buttons
The workspace is only created if the user explicitly clicks **Confirm and
Create**. Clicking **Cancel** falls back to the standard creation form where
all parameters can be reviewed manually.
![Consent dialog for automatic workspace creation](../../images/templates/auto-create-consent-dialog.png)
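
The link construction above can be sketched with a small Go helper (the access URL, template name, and parameter names below are hypothetical; only `mode=auto` and the `param.` query prefix come from this page). Every value passed in `params` is exactly what the consent dialog will display before provisioning:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildAutoLink assembles a mode=auto workspace-creation link with
// prefilled parameters. url.Values.Encode percent-escapes values and
// emits keys in sorted order.
func buildAutoLink(accessURL, template string, params map[string]string) string {
	u, err := url.Parse(accessURL)
	if err != nil {
		return ""
	}
	u.Path = fmt.Sprintf("/templates/%s/workspace", template)
	q := url.Values{}
	q.Set("mode", "auto") // triggers the consent dialog instead of silent creation
	for k, v := range params {
		q.Set("param."+k, v)
	}
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(buildAutoLink("https://coder.example.com", "docker", map[string]string{
		"region": "eu-west-1",
	}))
}
```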
### Example: Kubernetes
For a full example of the Open in Coder flow in Kubernetes, check out
Binary file not shown (added image, 52 KiB).
+38
@@ -1213,6 +1213,44 @@ curl -X GET http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild}/sta
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Update workspace build state
### Code samples
```shell
# Example request using curl
curl -X PUT http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild}/state \
-H 'Content-Type: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`PUT /workspacebuilds/{workspacebuild}/state`
> Body parameter
```json
{
"state": [
0
]
}
```
### Parameters
| Name | In | Type | Required | Description |
|------------------|------|--------------------------------------------------------------------------------------------------|----------|--------------------|
| `workspacebuild` | path | string(uuid) | true | Workspace build ID |
| `body` | body | [codersdk.UpdateWorkspaceBuildStateRequest](schemas.md#codersdkupdateworkspacebuildstaterequest) | true | Request body |
### Responses
| Status | Meaning | Description | Schema |
|--------|-----------------------------------------------------------------|-------------|--------|
| 204 | [No Content](https://tools.ietf.org/html/rfc7231#section-6.3.5) | No Content | |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get workspace build timings by ID
### Code samples
-2
@@ -329,7 +329,6 @@ curl -X GET http://coder-server:8080/api/v2/entitlements \
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
@@ -341,7 +340,6 @@ curl -X GET http://coder-server:8080/api/v2/entitlements \
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
+22 -10
@@ -4011,7 +4011,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
@@ -4023,7 +4022,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
@@ -4309,7 +4307,6 @@ Git clone makes use of this by parsing the URL from: 'Username for "https://gith
"enabled": true,
"entitlement": "entitled",
"limit": 0,
"soft_limit": 0,
"usage_period": {
"end": "2019-08-24T14:15:22Z",
"issued_at": "2019-08-24T14:15:22Z",
@@ -4320,13 +4317,12 @@ Git clone makes use of this by parsing the URL from: 'Username for "https://gith
### Properties
| Name | Type | Required | Restrictions | Description |
|---------------|----------------------------------------------|----------|--------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `actual` | integer | false | | |
| `enabled` | boolean | false | | |
| `entitlement` | [codersdk.Entitlement](#codersdkentitlement) | false | | |
| `limit` | integer | false | | |
| `soft_limit` | integer | false | | Soft limit is the soft limit of the feature, and is only used for showing included limits in the dashboard. No license validation or warnings are generated from this value. |
| Name | Type | Required | Restrictions | Description |
|---------------|----------------------------------------------|----------|--------------|-------------|
| `actual` | integer | false | | |
| `enabled` | boolean | false | | |
| `entitlement` | [codersdk.Entitlement](#codersdkentitlement) | false | | |
| `limit` | integer | false | | |
|`usage_period`|[codersdk.UsagePeriod](#codersdkusageperiod)|false||Usage period denotes that the usage is a counter that accumulates over this period (and most likely resets with the issuance of the next license).
These dates are determined from the license that this entitlement comes from, see enterprise/coderd/license/license.go.
Only certain features set these fields: - FeatureManagedAgentLimit|
@@ -9456,6 +9452,22 @@ If the schedule is empty, the user will be updated to use the default schedule.|
|------------|--------|----------|--------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `schedule` | string | false | | Schedule is expected to be of the form `CRON_TZ=<IANA Timezone> <min> <hour> * * <dow>` Example: `CRON_TZ=US/Central 30 9 * * 1-5` represents 0930 in the timezone US/Central on weekdays (Mon-Fri). `CRON_TZ` defaults to UTC if not present. |
## codersdk.UpdateWorkspaceBuildStateRequest
```json
{
"state": [
0
]
}
```
### Properties
| Name | Type | Required | Restrictions | Description |
|---------|------------------|----------|--------------|-------------|
| `state` | array of integer | false | | |
## codersdk.UpdateWorkspaceDormancy
```json
+8
@@ -18,3 +18,11 @@ coder state push [flags] <workspace> <file>
| Type | <code>int</code> |
Specify a workspace build to target by name. Defaults to latest.
### -n, --no-build
| | |
|------|-------------------|
| Type | <code>bool</code> |
Update the state without triggering a workspace build. Useful for state-only migrations.
+2 -2
@@ -212,9 +212,9 @@ RUN sed -i 's|http://archive.ubuntu.com/ubuntu/|http://mirrors.edge.kernel.org/u
# Configure FIPS-compliant policies
update-crypto-policies --set FIPS
# NOTE: In scripts/Dockerfile.base we specifically install Terraform version 1.12.2.
# NOTE: In scripts/Dockerfile.base we specifically install Terraform version 1.14.5.
# Installing the same version here to match.
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.13.4/terraform_1.13.4_linux_amd64.zip" && \
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.14.5/terraform_1.14.5_linux_amd64.zip" && \
unzip /tmp/terraform.zip -d /usr/local/bin && \
rm -f /tmp/terraform.zip && \
chmod +x /usr/local/bin/terraform && \
+2 -2
@@ -371,7 +371,7 @@ func TestEnterpriseCreateWithPreset(t *testing.T) {
notifications.NewNoopEnqueuer(),
newNoopUsageCheckerPtr(),
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
// Given: a template and a template version where the preset defines values for all required parameters,
@@ -482,7 +482,7 @@ func TestEnterpriseCreateWithPreset(t *testing.T) {
notifications.NewNoopEnqueuer(),
newNoopUsageCheckerPtr(),
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
// Given: a template and a template version where the preset defines values for all required parameters,
+35 -42
@@ -971,7 +971,13 @@ func (api *API) updateEntitlements(ctx context.Context) error {
var _ wsbuilder.UsageChecker = &API{}
func (api *API) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
func (api *API) CheckBuildUsage(
_ context.Context,
_ database.Store,
templateVersion *database.TemplateVersion,
task *database.Task,
transition database.WorkspaceTransition,
) (wsbuilder.UsageCheckResponse, error) {
// If the template version has an external agent, we need to check that the
// license is entitled to this feature.
if templateVersion.HasExternalAgent.Valid && templateVersion.HasExternalAgent.Bool {
@@ -984,48 +990,26 @@ func (api *API) CheckBuildUsage(ctx context.Context, store database.Store, templ
}
}
// If the template version doesn't have an AI task, we don't need to check
// usage.
if !templateVersion.HasAITask.Valid || !templateVersion.HasAITask.Bool {
// Verify managed agent entitlement for AI task builds.
// The count/limit check is intentionally omitted — breaching the
// limit is advisory only and surfaced as a warning via entitlements.
if transition != database.WorkspaceTransitionStart || task == nil {
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
}
if !api.Entitlements.HasLicense() {
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
}
managedAgentLimit, ok := api.Entitlements.Feature(codersdk.FeatureManagedAgentLimit)
if !ok || !managedAgentLimit.Enabled {
return wsbuilder.UsageCheckResponse{
Permitted: true,
Permitted: false,
Message: "Your license is not entitled to managed agents. Please contact sales to continue using managed agents.",
}, nil
}
// When unlicensed, we need to check that we haven't breached the managed agent
// limit.
// Unlicensed deployments are allowed to use unlimited managed agents.
if api.Entitlements.HasLicense() {
managedAgentLimit, ok := api.Entitlements.Feature(codersdk.FeatureManagedAgentLimit)
if !ok || !managedAgentLimit.Enabled || managedAgentLimit.Limit == nil || managedAgentLimit.UsagePeriod == nil {
return wsbuilder.UsageCheckResponse{
Permitted: false,
Message: "Your license is not entitled to managed agents. Please contact sales to continue using managed agents.",
}, nil
}
// This check is intentionally not committed to the database. It's fine if
// it's not 100% accurate or allows for minor breaches due to build races.
// nolint:gocritic // Requires permission to read all usage events.
managedAgentCount, err := store.GetTotalUsageDCManagedAgentsV1(agpldbauthz.AsSystemRestricted(ctx), database.GetTotalUsageDCManagedAgentsV1Params{
StartDate: managedAgentLimit.UsagePeriod.Start,
EndDate: managedAgentLimit.UsagePeriod.End,
})
if err != nil {
return wsbuilder.UsageCheckResponse{}, xerrors.Errorf("get managed agent count: %w", err)
}
if managedAgentCount >= *managedAgentLimit.Limit {
return wsbuilder.UsageCheckResponse{
Permitted: false,
Message: "You have breached the managed agent limit in your license. Please contact sales to continue using managed agents.",
}, nil
}
}
return wsbuilder.UsageCheckResponse{
Permitted: true,
}, nil
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
}
// getProxyDERPStartingRegionID returns the starting region ID that should be
@@ -1293,7 +1277,16 @@ func (api *API) setupPrebuilds(featureEnabled bool) (agplprebuilds.Reconciliatio
return agplprebuilds.DefaultReconciler, agplprebuilds.DefaultClaimer
}
reconciler := prebuilds.NewStoreReconciler(api.Database, api.Pubsub, api.AGPL.FileCache, api.DeploymentValues.Prebuilds,
api.Logger.Named("prebuilds"), quartz.NewReal(), api.PrometheusRegistry, api.NotificationsEnqueuer, api.AGPL.BuildUsageChecker)
return reconciler, prebuilds.NewEnterpriseClaimer(api.Database)
reconciler := prebuilds.NewStoreReconciler(
api.Database,
api.Pubsub,
api.AGPL.FileCache,
api.DeploymentValues.Prebuilds,
api.Logger.Named("prebuilds"),
quartz.NewReal(),
api.PrometheusRegistry,
api.NotificationsEnqueuer,
api.AGPL.BuildUsageChecker,
)
return reconciler, prebuilds.NewEnterpriseClaimer()
}
+180 -18
@@ -3,6 +3,7 @@ package coderd_test
import (
"bytes"
"context"
"database/sql"
"encoding/json"
"fmt"
"io"
@@ -21,6 +22,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/goleak"
"go.uber.org/mock/gomock"
"cdr.dev/slog"
"cdr.dev/slog/sloggers/slogtest"
@@ -39,13 +41,16 @@ import (
"github.com/coder/retry"
"github.com/coder/serpent"
agplcoderd "github.com/coder/coder/v2/coderd"
agplaudit "github.com/coder/coder/v2/coderd/audit"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/database/dbmock"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/entitlements"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
@@ -621,7 +626,7 @@ func TestManagedAgentLimit(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
cli, _ := coderdenttest.New(t, &coderdenttest.Options{
cli, owner := coderdenttest.New(t, &coderdenttest.Options{
Options: &coderdtest.Options{
IncludeProvisionerDaemon: true,
},
@@ -631,22 +636,18 @@ func TestManagedAgentLimit(t *testing.T) {
// expiry warnings.
GraceAt: time.Now().Add(time.Hour * 24 * 60),
ExpiresAt: time.Now().Add(time.Hour * 24 * 90),
}).ManagedAgentLimit(1, 1),
}).ManagedAgentLimit(1),
})
// Get entitlements to check that the license is a-ok.
entitlements, err := cli.Entitlements(ctx) //nolint:gocritic // we're not testing authz on the entitlements endpoint, so using owner is fine
sdkEntitlements, err := cli.Entitlements(ctx) //nolint:gocritic // we're not testing authz on the entitlements endpoint, so using owner is fine
require.NoError(t, err)
require.True(t, entitlements.HasLicense)
agentLimit := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.True(t, sdkEntitlements.HasLicense)
agentLimit := sdkEntitlements.Features[codersdk.FeatureManagedAgentLimit]
require.True(t, agentLimit.Enabled)
require.NotNil(t, agentLimit.Limit)
require.EqualValues(t, 1, *agentLimit.Limit)
require.NotNil(t, agentLimit.SoftLimit)
require.EqualValues(t, 1, *agentLimit.SoftLimit)
require.Empty(t, entitlements.Errors)
// There should be a warning since we're really close to our agent limit.
require.Equal(t, entitlements.Warnings[0], "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.")
require.Empty(t, sdkEntitlements.Errors)
// Create a fake provision response that claims there are agents in the
// template and every built workspace.
@@ -706,23 +707,184 @@ func TestManagedAgentLimit(t *testing.T) {
noAiTemplate := coderdtest.CreateTemplate(t, cli, uuid.Nil, noAiVersion.ID)
// Create one AI workspace, which should succeed.
workspace := coderdtest.CreateWorkspace(t, cli, aiTemplate.ID)
task, err := cli.CreateTask(ctx, owner.UserID.String(), codersdk.CreateTaskRequest{
Name: "workspace-1",
TemplateVersionID: aiTemplate.ActiveVersionID,
TemplateVersionPresetID: uuid.Nil,
Input: "hi",
DisplayName: "cool task 1",
})
require.NoError(t, err, "creating task for AI workspace must succeed")
workspace, err := cli.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err, "fetching AI workspace must succeed")
coderdtest.AwaitWorkspaceBuildJobCompleted(t, cli, workspace.LatestBuild.ID)
// Create a second AI workspace, which should fail. This needs to be done
// manually because coderdtest.CreateWorkspace expects it to succeed.
_, err = cli.CreateUserWorkspace(ctx, codersdk.Me, codersdk.CreateWorkspaceRequest{ //nolint:gocritic // owners must still be subject to the limit
TemplateID: aiTemplate.ID,
Name: coderdtest.RandomUsername(t),
AutomaticUpdates: codersdk.AutomaticUpdatesNever,
// Create a second AI task, which should succeed even though the limit is
// breached. Managed agent limits are advisory only and should never block
// workspace creation.
task2, err := cli.CreateTask(ctx, owner.UserID.String(), codersdk.CreateTaskRequest{
Name: "workspace-2",
TemplateVersionID: aiTemplate.ActiveVersionID,
TemplateVersionPresetID: uuid.Nil,
Input: "hi",
DisplayName: "bad task 2",
})
require.ErrorContains(t, err, "You have breached the managed agent limit in your license")
require.NoError(t, err, "creating task beyond managed agent limit must succeed")
workspace2, err := cli.Workspace(ctx, task2.WorkspaceID.UUID)
require.NoError(t, err, "fetching AI workspace must succeed")
coderdtest.AwaitWorkspaceBuildJobCompleted(t, cli, workspace2.LatestBuild.ID)
// Create a third non-AI workspace, which should succeed.
workspace = coderdtest.CreateWorkspace(t, cli, noAiTemplate.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, cli, workspace.LatestBuild.ID)
}
func TestCheckBuildUsage_NeverBlocksOnManagedAgentLimit(t *testing.T) {
t.Parallel()
ctrl := gomock.NewController(t)
defer ctrl.Finish()
// Prepare entitlements with a managed agent limit.
entSet := entitlements.New()
entSet.Modify(func(e *codersdk.Entitlements) {
e.HasLicense = true
limit := int64(1)
issuedAt := time.Now().Add(-2 * time.Hour)
start := time.Now().Add(-time.Hour)
end := time.Now().Add(time.Hour)
e.Features[codersdk.FeatureManagedAgentLimit] = codersdk.Feature{
Enabled: true,
Limit: &limit,
UsagePeriod: &codersdk.UsagePeriod{IssuedAt: issuedAt, Start: start, End: end},
}
})
// Enterprise API instance with entitlements injected.
agpl := &agplcoderd.API{
Options: &agplcoderd.Options{
Entitlements: entSet,
},
}
eapi := &coderd.API{
AGPL: agpl,
Options: &coderd.Options{Options: agpl.Options},
}
// Template version that has an AI task.
tv := &database.TemplateVersion{
HasAITask: sql.NullBool{Valid: true, Bool: true},
HasExternalAgent: sql.NullBool{Valid: true, Bool: false},
}
task := &database.Task{
TemplateVersionID: tv.ID,
}
// Mock DB: no calls expected since managed agent limits are
// advisory only and no longer query the database at build time.
mDB := dbmock.NewMockStore(ctrl)
ctx := context.Background()
// Start transition: should be permitted even though the limit is
// breached. Managed agent limits are advisory only.
startResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionStart)
require.NoError(t, err)
require.True(t, startResp.Permitted)
// Stop transition: should also be permitted.
stopResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionStop)
require.NoError(t, err)
require.True(t, stopResp.Permitted)
// Delete transition: should also be permitted.
deleteResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionDelete)
require.NoError(t, err)
require.True(t, deleteResp.Permitted)
}
func TestCheckBuildUsage_BlocksWithoutManagedAgentEntitlement(t *testing.T) {
t.Parallel()
tv := &database.TemplateVersion{
HasAITask: sql.NullBool{Valid: true, Bool: true},
HasExternalAgent: sql.NullBool{Valid: true, Bool: false},
}
task := &database.Task{
TemplateVersionID: tv.ID,
}
// Both "feature absent" and "feature explicitly disabled" should
// block AI task builds on licensed deployments.
tests := []struct {
name string
setupEnts func(e *codersdk.Entitlements)
}{
{
name: "FeatureAbsent",
setupEnts: func(e *codersdk.Entitlements) {
e.HasLicense = true
},
},
{
name: "FeatureDisabled",
setupEnts: func(e *codersdk.Entitlements) {
e.HasLicense = true
e.Features[codersdk.FeatureManagedAgentLimit] = codersdk.Feature{
Enabled: false,
}
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctrl := gomock.NewController(t)
defer ctrl.Finish()
entSet := entitlements.New()
entSet.Modify(tc.setupEnts)
agpl := &agplcoderd.API{
Options: &agplcoderd.Options{
Entitlements: entSet,
},
}
eapi := &coderd.API{
AGPL: agpl,
Options: &coderd.Options{Options: agpl.Options},
}
mDB := dbmock.NewMockStore(ctrl)
ctx := context.Background()
// Start transition with a task: should be blocked because the
// license doesn't include the managed agent entitlement.
resp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionStart)
require.NoError(t, err)
require.False(t, resp.Permitted)
require.Contains(t, resp.Message, "not entitled to managed agents")
// Stop and delete transitions should still be permitted so
// that existing workspaces can be stopped/cleaned up.
stopResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionStop)
require.NoError(t, err)
require.True(t, stopResp.Permitted)
deleteResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, task, database.WorkspaceTransitionDelete)
require.NoError(t, err)
require.True(t, deleteResp.Permitted)
// Start transition without a task: should be permitted (not
// an AI task build, so the entitlement check doesn't apply).
noTaskResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, nil, database.WorkspaceTransitionStart)
require.NoError(t, err)
require.True(t, noTaskResp.Permitted)
})
}
}
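The entitlement gate the two tests above exercise reduces to a small decision table. A minimal self-contained sketch of that policy, assuming a hypothetical `permitBuild` helper (this is an illustration of the asserted behavior, not the real `CheckBuildUsage` implementation):

```go
package main

import "fmt"

type transition string

const (
	start transition = "start"
	stop  transition = "stop"
	del   transition = "delete"
)

// permitBuild sketches the policy exercised by the tests above:
//   - stop/delete transitions are always permitted so existing
//     workspaces can be torn down,
//   - builds without an AI task are unaffected by the entitlement,
//   - AI task starts are blocked only when the deployment is licensed
//     but lacks the managed agent entitlement; limit breaches merely
//     warn and never block.
func permitBuild(hasLicense, featureEnabled, isAITask bool, tr transition) bool {
	if tr == stop || tr == del {
		return true
	}
	if !isAITask {
		return true
	}
	if hasLicense && !featureEnabled {
		return false
	}
	return true
}

func main() {
	fmt.Println(permitBuild(true, false, true, start)) // licensed but not entitled: blocked
	fmt.Println(permitBuild(true, false, true, stop))  // teardown always allowed
	fmt.Println(permitBuild(true, true, true, start))  // entitled: allowed even over the limit
}
```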
// testDBAuthzRole returns a context with a subject that has a role
// with permissions required for test setup.
func testDBAuthzRole(ctx context.Context) context.Context {
@@ -226,12 +226,8 @@ func (opts *LicenseOptions) UserLimit(limit int64) *LicenseOptions {
return opts.Feature(codersdk.FeatureUserLimit, limit)
}
func (opts *LicenseOptions) ManagedAgentLimit(soft int64, hard int64) *LicenseOptions {
// These don't use named or exported feature names, see
// enterprise/coderd/license/license.go.
opts = opts.Feature(codersdk.FeatureName("managed_agent_limit_soft"), soft)
opts = opts.Feature(codersdk.FeatureName("managed_agent_limit_hard"), hard)
return opts
func (opts *LicenseOptions) ManagedAgentLimit(limit int64) *LicenseOptions {
return opts.Feature(codersdk.FeatureManagedAgentLimit, limit)
}
func (opts *LicenseOptions) Feature(name codersdk.FeatureName, value int64) *LicenseOptions {
+36 -162
@@ -14,60 +14,9 @@ import (
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/codersdk"
)
const (
// These features are only included in the license and are not actually
// entitlements after the licenses are processed. These values will be
// merged into the codersdk.FeatureManagedAgentLimit feature.
//
// The reason we need two separate features is because the License v3 format
// uses map[string]int64 for features, so we're unable to use a single value
// with a struct like `{"soft": 100, "hard": 200}`. This is unfortunate and
// we should fix this with a new license format v4 in the future.
//
// These are intentionally not exported as they should not be used outside
// of this package (except tests).
featureManagedAgentLimitHard codersdk.FeatureName = "managed_agent_limit_hard"
featureManagedAgentLimitSoft codersdk.FeatureName = "managed_agent_limit_soft"
)
var (
// Mapping of license feature names to the SDK feature name.
// This is used to map from multiple usage period features into a single SDK
// feature.
featureGrouping = map[codersdk.FeatureName]struct {
// The parent feature.
sdkFeature codersdk.FeatureName
// Whether the value of the license feature is the soft limit or the hard
// limit.
isSoft bool
}{
// Map featureManagedAgentLimitHard and featureManagedAgentLimitSoft to
// codersdk.FeatureManagedAgentLimit.
featureManagedAgentLimitHard: {
sdkFeature: codersdk.FeatureManagedAgentLimit,
isSoft: false,
},
featureManagedAgentLimitSoft: {
sdkFeature: codersdk.FeatureManagedAgentLimit,
isSoft: true,
},
}
// Features that are forbidden to be set in a license. These are the SDK
// features in the featureGrouping map.
licenseForbiddenFeatures = func() map[codersdk.FeatureName]struct{} {
features := make(map[codersdk.FeatureName]struct{})
for _, feature := range featureGrouping {
features[feature.sdkFeature] = struct{}{}
}
return features
}()
)
// Entitlements processes licenses to return whether features are enabled or not.
// TODO(@deansheather): This function and the related LicensesEntitlements
// function should be refactored into smaller functions that:
@@ -273,17 +222,15 @@ func LicensesEntitlements(
// licenses with the corresponding features actually set
// trump this default entitlement, even if they are set to a
// smaller value.
defaultManagedAgentsIsuedAt = time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC)
defaultManagedAgentsStart = defaultManagedAgentsIsuedAt
defaultManagedAgentsEnd = defaultManagedAgentsStart.AddDate(100, 0, 0)
defaultManagedAgentsSoftLimit int64 = 1000
defaultManagedAgentsHardLimit int64 = 1000
defaultManagedAgentsIsuedAt = time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC)
defaultManagedAgentsStart = defaultManagedAgentsIsuedAt
defaultManagedAgentsEnd = defaultManagedAgentsStart.AddDate(100, 0, 0)
defaultManagedAgentsLimit int64 = 1000
)
entitlements.AddFeature(codersdk.FeatureManagedAgentLimit, codersdk.Feature{
Enabled: true,
Entitlement: entitlement,
SoftLimit: &defaultManagedAgentsSoftLimit,
Limit: &defaultManagedAgentsHardLimit,
Limit: &defaultManagedAgentsLimit,
UsagePeriod: &codersdk.UsagePeriod{
IssuedAt: defaultManagedAgentsIsuedAt,
Start: defaultManagedAgentsStart,
@@ -294,18 +241,8 @@ func LicensesEntitlements(
// Add all features from the feature set defined.
for _, featureName := range claims.FeatureSet.Features() {
if _, ok := licenseForbiddenFeatures[featureName]; ok {
// Ignore any FeatureSet features that are forbidden to be set
// in a license.
continue
}
if _, ok := featureGrouping[featureName]; ok {
// These features need very special handling due to merging
// multiple feature values into a single SDK feature.
continue
}
if featureName == codersdk.FeatureUserLimit || featureName.UsesUsagePeriod() {
// FeatureUserLimit and usage period features are handled below.
if featureName.UsesLimit() || featureName.UsesUsagePeriod() {
// Limit and usage period features are handled below.
// They don't provide default values as they are always enabled
// and require a limit to be specified in the license to have
// any effect.
@@ -320,30 +257,24 @@ func LicensesEntitlements(
})
}
// A map of SDK feature name to the uncommitted usage feature.
uncommittedUsageFeatures := map[codersdk.FeatureName]usageLimit{}
// Features à la carte
for featureName, featureValue := range claims.Features {
if _, ok := licenseForbiddenFeatures[featureName]; ok {
entitlements.Errors = append(entitlements.Errors,
fmt.Sprintf("Feature %s is forbidden to be set in a license.", featureName))
continue
// Old-style licenses encode the managed agent limit as
// separate soft/hard features.
//
// This could be removed in a future release, but can only be
// done once all old licenses containing this are no longer in use.
if featureName == "managed_agent_limit_soft" {
// Maps the soft limit to the canonical feature name
featureName = codersdk.FeatureManagedAgentLimit
}
if featureValue < 0 {
// We currently don't use negative values for features.
if featureName == "managed_agent_limit_hard" {
// We can safely ignore the hard limit as it is no longer used.
continue
}
// Special handling for grouped (e.g. usage period) features.
if grouping, ok := featureGrouping[featureName]; ok {
ul := uncommittedUsageFeatures[grouping.sdkFeature]
if grouping.isSoft {
ul.Soft = &featureValue
} else {
ul.Hard = &featureValue
}
uncommittedUsageFeatures[grouping.sdkFeature] = ul
if featureValue < 0 {
// We currently don't use negative values for features.
continue
}
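The backward-compatibility branch above (soft key becomes the canonical feature, hard key is dropped) can be shown in isolation. A minimal sketch under stated assumptions: `normalizeLegacyFeature` and the local `FeatureName` type are hypothetical stand-ins, not part of the codebase:

```go
package main

import "fmt"

// FeatureName stands in for codersdk.FeatureName in this sketch.
type FeatureName string

const FeatureManagedAgentLimit FeatureName = "managed_agent_limit"

// normalizeLegacyFeature maps old-style license feature keys onto the
// canonical managed agent limit feature: the soft limit becomes the
// single Limit, and the hard limit is ignored entirely.
func normalizeLegacyFeature(name FeatureName) (FeatureName, bool) {
	switch name {
	case "managed_agent_limit_soft":
		return FeatureManagedAgentLimit, true
	case "managed_agent_limit_hard":
		return "", false // no longer used; safe to drop
	default:
		return name, true
	}
}

func main() {
	legacy := map[FeatureName]int64{
		"managed_agent_limit_soft": 100,
		"managed_agent_limit_hard": 200,
		"user_limit":               25,
	}
	merged := map[FeatureName]int64{}
	for name, value := range legacy {
		canonical, keep := normalizeLegacyFeature(name)
		if !keep {
			continue
		}
		merged[canonical] = value
	}
	// The soft value (100) survives as the canonical limit.
	fmt.Println(merged[FeatureManagedAgentLimit], merged["user_limit"])
}
```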
@@ -355,9 +286,20 @@ func LicensesEntitlements(
continue
}
// Handling for non-grouped features.
switch featureName {
case codersdk.FeatureUserLimit:
// Handling for limit features.
switch {
case featureName.UsesUsagePeriod():
entitlements.AddFeature(featureName, codersdk.Feature{
Enabled: featureValue > 0,
Entitlement: entitlement,
Limit: &featureValue,
UsagePeriod: &codersdk.UsagePeriod{
IssuedAt: claims.IssuedAt.Time,
Start: usagePeriodStart,
End: usagePeriodEnd,
},
})
case featureName.UsesLimit():
if featureValue <= 0 {
// 0 user count doesn't make sense, so we skip it.
continue
@@ -379,46 +321,6 @@ func LicensesEntitlements(
}
}
}
// Apply uncommitted usage features to the entitlements.
for featureName, ul := range uncommittedUsageFeatures {
if ul.Soft == nil || ul.Hard == nil {
// Invalid license.
entitlements.Errors = append(entitlements.Errors,
fmt.Sprintf("Invalid license (%s): feature %s has missing soft or hard limit values", license.UUID.String(), featureName))
continue
}
if *ul.Hard < *ul.Soft {
entitlements.Errors = append(entitlements.Errors,
fmt.Sprintf("Invalid license (%s): feature %s has a hard limit less than the soft limit", license.UUID.String(), featureName))
continue
}
if *ul.Hard < 0 || *ul.Soft < 0 {
entitlements.Errors = append(entitlements.Errors,
fmt.Sprintf("Invalid license (%s): feature %s has a soft or hard limit less than 0", license.UUID.String(), featureName))
continue
}
feature := codersdk.Feature{
Enabled: true,
Entitlement: entitlement,
SoftLimit: ul.Soft,
Limit: ul.Hard,
// `Actual` will be populated below when warnings are generated.
UsagePeriod: &codersdk.UsagePeriod{
IssuedAt: claims.IssuedAt.Time,
Start: usagePeriodStart,
End: usagePeriodEnd,
},
}
// If the hard limit is 0, the feature is disabled.
if *ul.Hard <= 0 {
feature.Enabled = false
feature.SoftLimit = ptr.Ref(int64(0))
feature.Limit = ptr.Ref(int64(0))
}
entitlements.AddFeature(featureName, feature)
}
}
// Now the license specific warnings and errors are added to the entitlements.
@@ -506,32 +408,9 @@ func LicensesEntitlements(
entitlements.AddFeature(codersdk.FeatureManagedAgentLimit, agentLimit)
// Only issue warnings if the feature is enabled.
if agentLimit.Enabled {
var softLimit int64
if agentLimit.SoftLimit != nil {
softLimit = *agentLimit.SoftLimit
}
var hardLimit int64
if agentLimit.Limit != nil {
hardLimit = *agentLimit.Limit
}
// Issue a warning early:
// 1. If the soft limit and hard limit are equal, at 75% of the hard
// limit.
// 2. If the limit is greater than the soft limit, at 75% of the
// difference between the hard limit and the soft limit.
softWarningThreshold := int64(float64(hardLimit) * 0.75)
if hardLimit > softLimit && softLimit > 0 {
softWarningThreshold = softLimit + int64(float64(hardLimit-softLimit)*0.75)
}
if managedAgentCount >= *agentLimit.Limit {
entitlements.Warnings = append(entitlements.Warnings,
"You have built more workspaces with managed agents than your license allows. Further managed agent builds will be blocked.")
} else if managedAgentCount >= softWarningThreshold {
entitlements.Warnings = append(entitlements.Warnings,
"You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.")
}
if agentLimit.Enabled && agentLimit.Limit != nil && managedAgentCount >= *agentLimit.Limit {
entitlements.Warnings = append(entitlements.Warnings,
codersdk.LicenseManagedAgentLimitExceededWarningText)
}
}
}
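The simplified warning logic above collapses the old soft/hard threshold math into one comparison against a single limit. A minimal sketch, with illustrative names rather than the real codersdk types:

```go
package main

import "fmt"

// exceededWarning mirrors the simplified check: warn only when actual
// usage meets or exceeds the single advisory limit. The old 75%
// "approaching" band between soft and hard limits no longer exists.
func exceededWarning(enabled bool, limit *int64, actual int64) (string, bool) {
	if !enabled || limit == nil || actual < *limit {
		return "", false
	}
	return "managed agent limit exceeded", true
}

func main() {
	lim := int64(100)
	_, warn := exceededWarning(true, &lim, 74)
	fmt.Println(warn) // below the limit: no warning of any kind
	_, warn = exceededWarning(true, &lim, 150)
	fmt.Println(warn) // at or above the limit: exceeded warning
}
```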
@@ -621,11 +500,6 @@ var (
type Features map[codersdk.FeatureName]int64
type usageLimit struct {
Soft *int64
Hard *int64 // 0 means "disabled"
}
// Claims is the full set of claims in a license.
type Claims struct {
jwt.RegisteredClaims
+250 -251
@@ -76,8 +76,7 @@ func TestEntitlements(t *testing.T) {
f := make(license.Features)
for _, name := range codersdk.FeatureNames {
if name == codersdk.FeatureManagedAgentLimit {
f[codersdk.FeatureName("managed_agent_limit_soft")] = 100
f[codersdk.FeatureName("managed_agent_limit_hard")] = 200
f[codersdk.FeatureManagedAgentLimit] = 100
continue
}
f[name] = 1
@@ -520,8 +519,7 @@ func TestEntitlements(t *testing.T) {
t.Run("Premium", func(t *testing.T) {
t.Parallel()
const userLimit = 1
const expectedAgentSoftLimit = 1000
const expectedAgentHardLimit = 1000
const expectedAgentLimit = 1000
db, _ := dbtestutil.NewDB(t)
licenseOptions := coderdenttest.LicenseOptions{
@@ -553,8 +551,7 @@ func TestEntitlements(t *testing.T) {
agentEntitlement := entitlements.Features[featureName]
require.True(t, agentEntitlement.Enabled)
require.Equal(t, codersdk.EntitlementEntitled, agentEntitlement.Entitlement)
require.EqualValues(t, expectedAgentSoftLimit, *agentEntitlement.SoftLimit)
require.EqualValues(t, expectedAgentHardLimit, *agentEntitlement.Limit)
require.EqualValues(t, expectedAgentLimit, *agentEntitlement.Limit)
// This might be shocking, but there's a sound reason for this.
// See license.go for more details.
@@ -814,7 +811,7 @@ func TestEntitlements(t *testing.T) {
ExpiresAt: dbtime.Now().Add(time.Hour * 24 * 90).Truncate(time.Second), // 90 days to remove warning
}).
UserLimit(100).
ManagedAgentLimit(100, 200)
ManagedAgentLimit(100)
lic := database.License{
ID: 1,
@@ -855,16 +852,16 @@ func TestEntitlements(t *testing.T) {
managedAgentLimit, ok := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.True(t, ok)
require.NotNil(t, managedAgentLimit.SoftLimit)
require.EqualValues(t, 100, *managedAgentLimit.SoftLimit)
require.NotNil(t, managedAgentLimit.Limit)
require.EqualValues(t, 200, *managedAgentLimit.Limit)
// The soft limit value (100) is used as the single Limit.
require.EqualValues(t, 100, *managedAgentLimit.Limit)
require.NotNil(t, managedAgentLimit.Actual)
require.EqualValues(t, 175, *managedAgentLimit.Actual)
// Should've also populated a warning.
// Usage exceeds the limit, so an exceeded warning should be present.
require.Len(t, entitlements.Warnings, 1)
require.Equal(t, "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.", entitlements.Warnings[0])
require.Equal(t, codersdk.LicenseManagedAgentLimitExceededWarningText, entitlements.Warnings[0])
})
}
@@ -1081,13 +1078,12 @@ func TestLicenseEntitlements(t *testing.T) {
{
Name: "ManagedAgentLimit",
Licenses: []*coderdenttest.LicenseOptions{
enterpriseLicense().UserLimit(100).ManagedAgentLimit(100, 200),
enterpriseLicense().UserLimit(100).ManagedAgentLimit(100),
},
Arguments: license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
// 175 will generate a warning as it's over 75% of the
// difference between the soft and hard limit.
return 174, nil
// 74 is below the limit (soft=100), so no warning.
return 74, nil
},
},
AssertEntitlements: func(t *testing.T, entitlements codersdk.Entitlements) {
@@ -1096,9 +1092,9 @@ func TestLicenseEntitlements(t *testing.T) {
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
assert.Equal(t, int64(200), *feature.Limit)
assert.Equal(t, int64(174), *feature.Actual)
// Soft limit value is used as the single Limit.
assert.Equal(t, int64(100), *feature.Limit)
assert.Equal(t, int64(74), *feature.Actual)
},
},
{
@@ -1111,7 +1107,7 @@ func TestLicenseEntitlements(t *testing.T) {
WithIssuedAt(time.Now().Add(-time.Hour * 2)),
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 100).
ManagedAgentLimit(100).
WithIssuedAt(time.Now().Add(-time.Hour * 1)).
GracePeriod(time.Now()),
},
@@ -1128,7 +1124,6 @@ func TestLicenseEntitlements(t *testing.T) {
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementGracePeriod, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
assert.Equal(t, int64(100), *feature.Limit)
assert.Equal(t, int64(74), *feature.Actual)
},
@@ -1143,7 +1138,7 @@ func TestLicenseEntitlements(t *testing.T) {
WithIssuedAt(time.Now().Add(-time.Hour * 2)),
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 200).
ManagedAgentLimit(100).
WithIssuedAt(time.Now().Add(-time.Hour * 1)).
Expired(time.Now()),
},
@@ -1156,84 +1151,33 @@ func TestLicenseEntitlements(t *testing.T) {
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
assert.False(t, feature.Enabled)
assert.Nil(t, feature.SoftLimit)
assert.Nil(t, feature.Limit)
assert.Nil(t, feature.Actual)
},
},
{
Name: "ManagedAgentLimitWarning/ApproachingLimit/DifferentSoftAndHardLimit",
Name: "ManagedAgentLimitWarning/ExceededLimit",
Licenses: []*coderdenttest.LicenseOptions{
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 200),
ManagedAgentLimit(100),
},
Arguments: license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 175, nil
return 150, nil
},
},
AssertEntitlements: func(t *testing.T, entitlements codersdk.Entitlements) {
assert.Len(t, entitlements.Warnings, 1)
assert.Equal(t, "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.", entitlements.Warnings[0])
assert.Equal(t, codersdk.LicenseManagedAgentLimitExceededWarningText, entitlements.Warnings[0])
assertNoErrors(t, entitlements)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
assert.Equal(t, int64(200), *feature.Limit)
assert.Equal(t, int64(175), *feature.Actual)
},
},
{
Name: "ManagedAgentLimitWarning/ApproachingLimit/EqualSoftAndHardLimit",
Licenses: []*coderdenttest.LicenseOptions{
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 100),
},
Arguments: license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 75, nil
},
},
AssertEntitlements: func(t *testing.T, entitlements codersdk.Entitlements) {
assert.Len(t, entitlements.Warnings, 1)
assert.Equal(t, "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.", entitlements.Warnings[0])
assertNoErrors(t, entitlements)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
// Soft limit (100) is used as the single Limit.
assert.Equal(t, int64(100), *feature.Limit)
assert.Equal(t, int64(75), *feature.Actual)
},
},
{
Name: "ManagedAgentLimitWarning/BreachedLimit",
Licenses: []*coderdenttest.LicenseOptions{
enterpriseLicense().
UserLimit(100).
ManagedAgentLimit(100, 200),
},
Arguments: license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 200, nil
},
},
AssertEntitlements: func(t *testing.T, entitlements codersdk.Entitlements) {
assert.Len(t, entitlements.Warnings, 1)
assert.Equal(t, "You have built more workspaces with managed agents than your license allows. Further managed agent builds will be blocked.", entitlements.Warnings[0])
assertNoErrors(t, entitlements)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
assert.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
assert.True(t, feature.Enabled)
assert.Equal(t, int64(100), *feature.SoftLimit)
assert.Equal(t, int64(200), *feature.Limit)
assert.Equal(t, int64(200), *feature.Actual)
assert.Equal(t, int64(150), *feature.Actual)
},
},
{
@@ -1288,173 +1232,240 @@ func TestLicenseEntitlements(t *testing.T) {
func TestUsageLimitFeatures(t *testing.T) {
t.Parallel()
cases := []struct {
sdkFeatureName codersdk.FeatureName
softLimitFeatureName codersdk.FeatureName
hardLimitFeatureName codersdk.FeatureName
}{
{
sdkFeatureName: codersdk.FeatureManagedAgentLimit,
softLimitFeatureName: codersdk.FeatureName("managed_agent_limit_soft"),
hardLimitFeatureName: codersdk.FeatureName("managed_agent_limit_hard"),
},
}
// Ensures that usage limit features are ranked by issued at, not by
// values.
t.Run("IssuedAtRanking", func(t *testing.T) {
t.Parallel()
for _, c := range cases {
t.Run(string(c.sdkFeatureName), func(t *testing.T) {
t.Parallel()
// Generate 2 real licenses both with managed agent limit
// features. lic2 should trump lic1 even though it has a lower
// limit, because it was issued later.
lic1 := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
IssuedAt: time.Now().Add(-time.Minute * 2),
NotBefore: time.Now().Add(-time.Minute * 2),
ExpiresAt: time.Now().Add(time.Hour * 2),
Features: license.Features{
codersdk.FeatureManagedAgentLimit: 100,
},
}),
}
lic2Iat := time.Now().Add(-time.Minute * 1)
lic2Nbf := lic2Iat.Add(-time.Minute)
lic2Exp := lic2Iat.Add(time.Hour)
lic2 := database.License{
ID: 2,
UploadedAt: time.Now(),
Exp: lic2Exp,
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
IssuedAt: lic2Iat,
NotBefore: lic2Nbf,
ExpiresAt: lic2Exp,
Features: license.Features{
codersdk.FeatureManagedAgentLimit: 50,
},
}),
}
// Test for either a missing soft or hard limit feature value.
t.Run("MissingGroupedFeature", func(t *testing.T) {
t.Parallel()
const actualAgents = 10
arguments := license.FeatureArguments{
ActiveUserCount: 10,
ReplicaCount: 0,
ExternalAuthCount: 0,
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return actualAgents, nil
},
}
for _, feature := range []codersdk.FeatureName{
c.softLimitFeatureName,
c.hardLimitFeatureName,
} {
t.Run(string(feature), func(t *testing.T) {
t.Parallel()
// Load the licenses in both orders to ensure the correct
// behavior is observed no matter the order.
for _, order := range [][]database.License{
{lic1, lic2},
{lic2, lic1},
} {
entitlements, err := license.LicensesEntitlements(context.Background(), time.Now(), order, map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments)
require.NoError(t, err)
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
feature: 100,
},
}),
}
feature, ok := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.True(t, ok, "feature %s not found", codersdk.FeatureManagedAgentLimit)
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 50, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
require.WithinDuration(t, lic2Iat, feature.UsagePeriod.IssuedAt, 2*time.Second)
require.WithinDuration(t, lic2Nbf, feature.UsagePeriod.Start, 2*time.Second)
require.WithinDuration(t, lic2Exp, feature.UsagePeriod.End, 2*time.Second)
}
})
}
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 0, nil
},
}
entitlements, err := license.LicensesEntitlements(context.Background(), time.Now(), []database.License{lic}, map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments)
require.NoError(t, err)
// TestOldStyleManagedAgentLicenses ensures backward compatibility with
// older licenses that encode the managed agent limit using separate
// "managed_agent_limit_soft" and "managed_agent_limit_hard" feature keys
// instead of the canonical "managed_agent_limit" key.
func TestOldStyleManagedAgentLicenses(t *testing.T) {
t.Parallel()
feature, ok := entitlements.Features[c.sdkFeatureName]
require.True(t, ok, "feature %s not found", c.sdkFeatureName)
require.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
t.Run("SoftAndHard", func(t *testing.T) {
t.Parallel()
require.Len(t, entitlements.Errors, 1)
require.Equal(t, fmt.Sprintf("Invalid license (%v): feature %s has missing soft or hard limit values", lic.UUID, c.sdkFeatureName), entitlements.Errors[0])
})
}
})
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
codersdk.FeatureName("managed_agent_limit_soft"): 100,
codersdk.FeatureName("managed_agent_limit_hard"): 200,
},
}),
}
t.Run("HardBelowSoft", func(t *testing.T) {
t.Parallel()
const actualAgents = 42
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(_ context.Context, _, _ time.Time) (int64, error) {
return actualAgents, nil
},
}
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
c.softLimitFeatureName: 100,
c.hardLimitFeatureName: 50,
},
}),
}
entitlements, err := license.LicensesEntitlements(
context.Background(), time.Now(), []database.License{lic},
map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments,
)
require.NoError(t, err)
require.Empty(t, entitlements.Errors)
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return 0, nil
},
}
entitlements, err := license.LicensesEntitlements(context.Background(), time.Now(), []database.License{lic}, map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments)
require.NoError(t, err)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.True(t, feature.Enabled)
require.NotNil(t, feature.Limit)
// The soft limit should be used as the canonical limit.
require.EqualValues(t, 100, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
})
feature, ok := entitlements.Features[c.sdkFeatureName]
require.True(t, ok, "feature %s not found", c.sdkFeatureName)
require.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
t.Run("OnlySoft", func(t *testing.T) {
t.Parallel()
require.Len(t, entitlements.Errors, 1)
require.Equal(t, fmt.Sprintf("Invalid license (%v): feature %s has a hard limit less than the soft limit", lic.UUID, c.sdkFeatureName), entitlements.Errors[0])
})
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
codersdk.FeatureName("managed_agent_limit_soft"): 75,
},
}),
}
// Ensures that these features are ranked by issued-at time, not by
// their values.
t.Run("IssuedAtRanking", func(t *testing.T) {
t.Parallel()
const actualAgents = 10
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(_ context.Context, _, _ time.Time) (int64, error) {
return actualAgents, nil
},
}
// Generate 2 real licenses both with managed agent limit
// features. lic2 should trump lic1 even though it has a lower
// limit, because it was issued later.
lic1 := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
IssuedAt: time.Now().Add(-time.Minute * 2),
NotBefore: time.Now().Add(-time.Minute * 2),
ExpiresAt: time.Now().Add(time.Hour * 2),
Features: license.Features{
c.softLimitFeatureName: 100,
c.hardLimitFeatureName: 200,
},
}),
}
lic2Iat := time.Now().Add(-time.Minute * 1)
lic2Nbf := lic2Iat.Add(-time.Minute)
lic2Exp := lic2Iat.Add(time.Hour)
lic2 := database.License{
ID: 2,
UploadedAt: time.Now(),
Exp: lic2Exp,
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
IssuedAt: lic2Iat,
NotBefore: lic2Nbf,
ExpiresAt: lic2Exp,
Features: license.Features{
c.softLimitFeatureName: 50,
c.hardLimitFeatureName: 100,
},
}),
}
entitlements, err := license.LicensesEntitlements(
context.Background(), time.Now(), []database.License{lic},
map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments,
)
require.NoError(t, err)
require.Empty(t, entitlements.Errors)
const actualAgents = 10
arguments := license.FeatureArguments{
ActiveUserCount: 10,
ReplicaCount: 0,
ExternalAuthCount: 0,
ManagedAgentCountFn: func(ctx context.Context, from time.Time, to time.Time) (int64, error) {
return actualAgents, nil
},
}
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.True(t, feature.Enabled)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 75, *feature.Limit)
})
// Load the licenses in both orders to ensure the same behavior is
// observed regardless of order.
for _, order := range [][]database.License{
{lic1, lic2},
{lic2, lic1},
} {
entitlements, err := license.LicensesEntitlements(context.Background(), time.Now(), order, map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments)
require.NoError(t, err)
// A license with only the hard limit key should silently ignore it,
// leaving the feature unset (not entitled).
t.Run("OnlyHard", func(t *testing.T) {
t.Parallel()
feature, ok := entitlements.Features[c.sdkFeatureName]
require.True(t, ok, "feature %s not found", c.sdkFeatureName)
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 100, *feature.Limit)
require.NotNil(t, feature.SoftLimit)
require.EqualValues(t, 50, *feature.SoftLimit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
require.WithinDuration(t, lic2Iat, feature.UsagePeriod.IssuedAt, 2*time.Second)
require.WithinDuration(t, lic2Nbf, feature.UsagePeriod.Start, 2*time.Second)
require.WithinDuration(t, lic2Exp, feature.UsagePeriod.End, 2*time.Second)
}
})
})
}
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
Features: license.Features{
codersdk.FeatureName("managed_agent_limit_hard"): 200,
},
}),
}
arguments := license.FeatureArguments{
ManagedAgentCountFn: func(_ context.Context, _, _ time.Time) (int64, error) {
return 0, nil
},
}
entitlements, err := license.LicensesEntitlements(
context.Background(), time.Now(), []database.License{lic},
map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments,
)
require.NoError(t, err)
require.Empty(t, entitlements.Errors)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
})
// Old-style license with both soft and hard set to zero should
// explicitly disable the feature (and override any Premium default).
t.Run("ExplicitZero", func(t *testing.T) {
t.Parallel()
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
Exp: time.Now().Add(time.Hour),
UUID: uuid.New(),
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
FeatureSet: codersdk.FeatureSetPremium,
Features: license.Features{
codersdk.FeatureUserLimit: 100,
codersdk.FeatureName("managed_agent_limit_soft"): 0,
codersdk.FeatureName("managed_agent_limit_hard"): 0,
},
}),
}
const actualAgents = 5
arguments := license.FeatureArguments{
ActiveUserCount: 10,
ManagedAgentCountFn: func(_ context.Context, _, _ time.Time) (int64, error) {
return actualAgents, nil
},
}
entitlements, err := license.LicensesEntitlements(
context.Background(), time.Now(), []database.License{lic},
map[codersdk.FeatureName]bool{}, coderdenttest.Keys, arguments,
)
require.NoError(t, err)
feature := entitlements.Features[codersdk.FeatureManagedAgentLimit]
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.False(t, feature.Enabled)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 0, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
})
}
func TestManagedAgentLimitDefault(t *testing.T) {
@@ -1492,20 +1503,16 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.True(t, ok, "feature %s not found", codersdk.FeatureManagedAgentLimit)
require.Equal(t, codersdk.EntitlementNotEntitled, feature.Entitlement)
require.Nil(t, feature.Limit)
require.Nil(t, feature.SoftLimit)
require.Nil(t, feature.Actual)
require.Nil(t, feature.UsagePeriod)
})
// "Premium" licenses should receive a default managed agent limit of:
// soft = 1000
// hard = 1000
// "Premium" licenses should receive a default managed agent limit of 1000.
t.Run("Premium", func(t *testing.T) {
t.Parallel()
const userLimit = 33
const softLimit = 1000
const hardLimit = 1000
const defaultLimit = 1000
lic := database.License{
ID: 1,
UploadedAt: time.Now(),
@@ -1536,9 +1543,7 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.True(t, ok, "feature %s not found", codersdk.FeatureManagedAgentLimit)
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.NotNil(t, feature.Limit)
require.EqualValues(t, hardLimit, *feature.Limit)
require.NotNil(t, feature.SoftLimit)
require.EqualValues(t, softLimit, *feature.SoftLimit)
require.EqualValues(t, defaultLimit, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
@@ -1547,8 +1552,8 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.NotZero(t, feature.UsagePeriod.End)
})
// "Premium" licenses with an explicit managed agent limit should not
// receive a default managed agent limit.
// "Premium" licenses with an explicit managed agent limit should use
// that value instead of the default.
t.Run("PremiumExplicitValues", func(t *testing.T) {
t.Parallel()
@@ -1560,9 +1565,8 @@ func TestManagedAgentLimitDefault(t *testing.T) {
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
FeatureSet: codersdk.FeatureSetPremium,
Features: license.Features{
codersdk.FeatureUserLimit: 100,
codersdk.FeatureName("managed_agent_limit_soft"): 100,
codersdk.FeatureName("managed_agent_limit_hard"): 200,
codersdk.FeatureUserLimit: 100,
codersdk.FeatureManagedAgentLimit: 100,
},
}),
}
@@ -1584,9 +1588,7 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.True(t, ok, "feature %s not found", codersdk.FeatureManagedAgentLimit)
require.Equal(t, codersdk.EntitlementEntitled, feature.Entitlement)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 200, *feature.Limit)
require.NotNil(t, feature.SoftLimit)
require.EqualValues(t, 100, *feature.SoftLimit)
require.EqualValues(t, 100, *feature.Limit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
@@ -1608,9 +1610,8 @@ func TestManagedAgentLimitDefault(t *testing.T) {
JWT: coderdenttest.GenerateLicense(t, coderdenttest.LicenseOptions{
FeatureSet: codersdk.FeatureSetPremium,
Features: license.Features{
codersdk.FeatureUserLimit: 100,
codersdk.FeatureName("managed_agent_limit_soft"): 0,
codersdk.FeatureName("managed_agent_limit_hard"): 0,
codersdk.FeatureUserLimit: 100,
codersdk.FeatureManagedAgentLimit: 0,
},
}),
}
@@ -1634,8 +1635,6 @@ func TestManagedAgentLimitDefault(t *testing.T) {
require.False(t, feature.Enabled)
require.NotNil(t, feature.Limit)
require.EqualValues(t, 0, *feature.Limit)
require.NotNil(t, feature.SoftLimit)
require.EqualValues(t, 0, *feature.SoftLimit)
require.NotNil(t, feature.Actual)
require.EqualValues(t, actualAgents, *feature.Actual)
require.NotNil(t, feature.UsagePeriod)
@@ -13,18 +13,15 @@ import (
"github.com/coder/coder/v2/coderd/prebuilds"
)
type EnterpriseClaimer struct {
store database.Store
type EnterpriseClaimer struct{}
func NewEnterpriseClaimer() *EnterpriseClaimer {
return &EnterpriseClaimer{}
}
func NewEnterpriseClaimer(store database.Store) *EnterpriseClaimer {
return &EnterpriseClaimer{
store: store,
}
}
func (c EnterpriseClaimer) Claim(
func (EnterpriseClaimer) Claim(
ctx context.Context,
store database.Store,
now time.Time,
userID uuid.UUID,
name string,
@@ -33,7 +30,7 @@ func (c EnterpriseClaimer) Claim(
nextStartAt sql.NullTime,
ttl sql.NullInt64,
) (*uuid.UUID, error) {
result, err := c.store.ClaimPrebuiltWorkspace(ctx, database.ClaimPrebuiltWorkspaceParams{
result, err := store.ClaimPrebuiltWorkspace(ctx, database.ClaimPrebuiltWorkspaceParams{
NewUserID: userID,
NewName: name,
Now: now,
@@ -167,8 +167,14 @@ func TestClaimPrebuild(t *testing.T) {
defer provisionerCloser.Close()
cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
reconciler := prebuilds.NewStoreReconciler(spy, pubsub, cache, codersdk.PrebuildsConfig{}, logger, quartz.NewMock(t), prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(spy)
reconciler := prebuilds.NewStoreReconciler(
spy, pubsub, cache, codersdk.PrebuildsConfig{}, logger,
quartz.NewMock(t),
prometheus.NewRegistry(),
newNoopEnqueuer(),
newNoopUsageCheckerPtr(),
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
version := coderdtest.CreateTemplateVersion(t, client, orgID, templateWithAgentAndPresetsWithPrebuilds(desiredInstances))
@@ -1978,7 +1978,7 @@ func TestPrebuildsAutobuild(t *testing.T) {
notificationsNoop,
api.AGPL.BuildUsageChecker,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
// Setup user, template and template version with a preset with 1 prebuild instance
@@ -2100,7 +2100,7 @@ func TestPrebuildsAutobuild(t *testing.T) {
notificationsNoop,
api.AGPL.BuildUsageChecker,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
// Setup user, template and template version with a preset with 1 prebuild instance
@@ -2222,7 +2222,7 @@ func TestPrebuildsAutobuild(t *testing.T) {
notificationsNoop,
api.AGPL.BuildUsageChecker,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
// Setup user, template and template version with a preset with 1 prebuild instance
@@ -2366,7 +2366,7 @@ func TestPrebuildsAutobuild(t *testing.T) {
notificationsNoop,
api.AGPL.BuildUsageChecker,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
// Setup user, template and template version with a preset with 1 prebuild instance
@@ -2511,7 +2511,7 @@ func TestPrebuildsAutobuild(t *testing.T) {
notificationsNoop,
api.AGPL.BuildUsageChecker,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
// Setup user, template and template version with a preset with 1 prebuild instance
@@ -2957,7 +2957,7 @@ func TestWorkspaceProvisionerdServerMetrics(t *testing.T) {
notifications.NewNoopEnqueuer(),
api.AGPL.BuildUsageChecker,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
organizationName, err := client.Organization(ctx, owner.OrganizationID)
@@ -4477,3 +4477,124 @@ func TestDeleteWorkspaceACL(t *testing.T) {
require.Equal(t, acl.Groups[0].ID, group.ID)
})
}
// Unfortunately this test is incompatible with 2.29, so it's commented out in
// this backport PR.
/*
func TestWorkspaceAITask(t *testing.T) {
t.Parallel()
usage := coderdtest.NewUsageInserter()
owner, _, first := coderdenttest.NewWithDatabase(t, &coderdenttest.Options{
Options: &coderdtest.Options{
UsageInserter: usage,
IncludeProvisionerDaemon: true,
},
LicenseOptions: (&coderdenttest.LicenseOptions{
Features: license.Features{
codersdk.FeatureTemplateRBAC: 1,
},
}).ManagedAgentLimit(10),
})
client, _ := coderdtest.CreateAnotherUser(t, owner, first.OrganizationID,
rbac.RoleTemplateAdmin(), rbac.RoleUserAdmin())
graphWithTask := []*proto.Response{{
Type: &proto.Response_Graph{
Graph: &proto.GraphComplete{
Error: "",
Timings: nil,
Resources: nil,
Parameters: nil,
ExternalAuthProviders: nil,
Presets: nil,
HasAiTasks: true,
AiTasks: []*proto.AITask{
{
Id: "test",
SidebarApp: nil,
AppId: "test",
},
},
HasExternalAgents: false,
},
},
}}
planWithTask := []*proto.Response{{
Type: &proto.Response_Plan{
Plan: &proto.PlanComplete{
Plan: []byte("{}"),
AiTaskCount: 1,
},
},
}}
t.Run("CreateWorkspaceWithTaskNormally", func(t *testing.T) {
// Creating a workspace that has agentic tasks, but is not launched via a
// task, should not count towards usage.
t.Cleanup(usage.Reset)
version := coderdtest.CreateTemplateVersion(t, client, first.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionInit: echo.InitComplete,
ProvisionPlan: planWithTask,
ProvisionApply: echo.ApplyComplete,
ProvisionGraph: graphWithTask,
})
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, first.OrganizationID, version.ID)
wrk := coderdtest.CreateWorkspace(t, client, template.ID)
build := coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, wrk.LatestBuild.ID)
require.Equal(t, codersdk.WorkspaceStatusRunning, build.Status)
require.Len(t, usage.GetEvents(), 0)
})
t.Run("CreateTaskWorkspace", func(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitMedium)
t.Cleanup(usage.Reset)
version := coderdtest.CreateTemplateVersion(t, client, first.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionInit: echo.InitComplete,
ProvisionPlan: planWithTask,
ProvisionApply: echo.ApplyComplete,
ProvisionGraph: graphWithTask,
})
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, first.OrganizationID, version.ID)
task, err := client.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
TemplateVersionID: template.ActiveVersionID,
Name: "istask",
})
require.NoError(t, err)
wrk, err := client.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
build := coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, wrk.LatestBuild.ID)
require.Equal(t, codersdk.WorkspaceStatusRunning, build.Status)
require.Len(t, usage.GetEvents(), 1)
usage.Reset() // Clean slate for easy checks
// Stopping the workspace should not create additional usage.
build, err = client.CreateWorkspaceBuild(ctx, wrk.ID, codersdk.CreateWorkspaceBuildRequest{
TemplateVersionID: wrk.LatestBuild.TemplateVersionID,
Transition: codersdk.WorkspaceTransitionStop,
})
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, build.ID)
require.Len(t, usage.GetEvents(), 0)
usage.Reset() // Clean slate for easy checks
// Starting the workspace manually **WILL** create usage, as it's
// still a task workspace.
build, err = client.CreateWorkspaceBuild(ctx, wrk.ID, codersdk.CreateWorkspaceBuildRequest{
TemplateVersionID: wrk.LatestBuild.TemplateVersionID,
Transition: codersdk.WorkspaceTransitionStart,
})
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, build.ID)
require.Len(t, usage.GetEvents(), 1)
})
}
*/
@@ -478,7 +478,7 @@ require (
github.com/coder/agentapi-sdk-go v0.0.0-20250505131810-560d1d88d225
github.com/coder/aibridge v0.2.0
github.com/coder/aisdk-go v0.0.9
github.com/coder/boundary v1.0.1-0.20250925154134-55a44f2a7945
github.com/coder/boundary v0.0.1-alpha
github.com/coder/preview v1.0.4
github.com/danieljoos/wincred v1.2.3
github.com/dgraph-io/ristretto/v2 v2.3.0
@@ -515,7 +515,7 @@ require (
github.com/bahlo/generic-list-go v0.2.0 // indirect
github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect
github.com/buger/jsonparser v1.1.1 // indirect
github.com/cenkalti/backoff/v5 v5.0.2 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/charmbracelet/x/exp/slice v0.0.0-20250327172914-2fdc97757edf // indirect
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f // indirect
github.com/envoyproxy/go-control-plane/envoy v1.35.0 // indirect
@@ -854,8 +854,8 @@ github.com/cakturk/go-netstat v0.0.0-20200220111822-e5b49efee7a5 h1:BjkPE3785EwP
github.com/cakturk/go-netstat v0.0.0-20200220111822-e5b49efee7a5/go.mod h1:jtAfVaU/2cu1+wdSRPWE2c1N2qeAA3K4RH9pYgqwets=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.2 h1:rIfFVxEf1QsI7E1ZHfp/B4DF/6QBAUhmgkxc0H7Zss8=
github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.4.1/go.mod h1:4T9NM4+4Vw91VeyqjLS6ao50K5bOcLKN6Q42XnYaRYw=
@@ -923,8 +923,8 @@ github.com/coder/aibridge v0.2.0 h1:kAWhHD6fsmDLH1WxIwXPu9Ineijj+lVniko45C003Vo=
github.com/coder/aibridge v0.2.0/go.mod h1:2T0RSnIX1WTqFajzXsaNsoNe6mmNsNeCTxiHBWEsFnE=
github.com/coder/aisdk-go v0.0.9 h1:Vzo/k2qwVGLTR10ESDeP2Ecek1SdPfZlEjtTfMveiVo=
github.com/coder/aisdk-go v0.0.9/go.mod h1:KF6/Vkono0FJJOtWtveh5j7yfNrSctVTpwgweYWSp5M=
github.com/coder/boundary v1.0.1-0.20250925154134-55a44f2a7945 h1:hDUf02kTX8EGR3+5B+v5KdYvORs4YNfDPci0zCs+pC0=
github.com/coder/boundary v1.0.1-0.20250925154134-55a44f2a7945/go.mod h1:d1AMFw81rUgrGHuZzWdPNhkY0G8w7pvLNLYF0e3ceC4=
github.com/coder/boundary v0.0.1-alpha h1:6shUQ2zkrWrfbgVcqWvpV2ibljOQvPvYqTctWBkKoUA=
github.com/coder/boundary v0.0.1-alpha/go.mod h1:d1AMFw81rUgrGHuZzWdPNhkY0G8w7pvLNLYF0e3ceC4=
github.com/coder/bubbletea v1.2.2-0.20241212190825-007a1cdb2c41 h1:SBN/DA63+ZHwuWwPHPYoCZ/KLAjHv5g4h2MS4f2/MTI=
github.com/coder/bubbletea v1.2.2-0.20241212190825-007a1cdb2c41/go.mod h1:I9ULxr64UaOSUv7hcb3nX4kowodJCVS7vt7VVJk/kW4=
github.com/coder/clistat v1.1.2 h1:1WzCsEQ/VFBNyxu5ryy0Pdb6rrMh+byCp3aZMkn9k/E=
@@ -273,7 +273,7 @@ EOF
main() {
MAINLINE=1
STABLE=0
TERRAFORM_VERSION="1.13.4"
TERRAFORM_VERSION="1.14.5"
if [ "${TRACE-}" ]; then
set -x
@@ -22,10 +22,10 @@ var (
// when Terraform is not available on the system.
// NOTE: Keep this in sync with the version in scripts/Dockerfile.base.
// NOTE: Keep this in sync with the version in install.sh.
TerraformVersion = version.Must(version.NewVersion("1.13.4"))
TerraformVersion = version.Must(version.NewVersion("1.14.5"))
minTerraformVersion = version.Must(version.NewVersion("1.1.0"))
maxTerraformVersion = version.Must(version.NewVersion("1.13.9")) // use .9 to automatically allow patch releases
maxTerraformVersion = version.Must(version.NewVersion("1.14.9")) // use .9 to automatically allow patch releases
errTerraformMinorVersionMismatch = xerrors.New("Terraform binary minor version mismatch.")
)
@@ -102,7 +102,7 @@ func (p *terraformProxy) handleGet(w http.ResponseWriter, r *http.Request) {
require.NoError(p.t, err)
// update index.json so urls in it point to proxy by making them relative
// "https://releases.hashicorp.com/terraform/1.13.4/terraform_1.13.4_windows_amd64.zip" -> "/terraform/1.13.4/terraform_1.13.4_windows_amd64.zip"
// "https://releases.hashicorp.com/terraform/1.14.5/terraform_1.14.5_windows_amd64.zip" -> "/terraform/1.14.5/terraform_1.14.5_windows_amd64.zip"
if strings.HasSuffix(r.URL.Path, "index.json") {
body = []byte(strings.ReplaceAll(string(body), terraformURL, ""))
}
@@ -3,6 +3,11 @@
set -euo pipefail
cd "$(dirname "${BASH_SOURCE[0]}")/resources"
# These environment variables influence the coder provider.
for v in $(env | grep -E '^CODER_' | cut -d= -f1); do
unset "$v"
done
generate() {
local name="$1"
@@ -41,6 +41,7 @@
"sidebar_app": []
},
"after_unknown": {
"enabled": true,
"id": true,
"prompt": true,
"sidebar_app": []
@@ -81,11 +82,11 @@
"schema_version": 1,
"values": {
"access_port": 443,
"access_url": "https://dev.coder.com/",
"access_url": "https://mydeployment.coder.com",
"id": "5c06d6ea-101b-4069-8d14-7179df66ebcc",
"is_prebuild": false,
"is_prebuild_claim": false,
"name": "coder",
"name": "default",
"prebuild_count": 0,
"start_count": 1,
"template_id": "",
@@ -104,7 +105,7 @@
"schema_version": 0,
"values": {
"email": "default@example.com",
"full_name": "coder",
"full_name": "default",
"groups": [],
"id": "8796d8d7-88f1-445a-bea7-65f5cf530b95",
"login_type": null,
@@ -27,11 +27,11 @@
"schema_version": 1,
"values": {
"access_port": 443,
"access_url": "https://dev.coder.com/",
"access_url": "https://mydeployment.coder.com",
"id": "bca94359-107b-43c9-a272-99af4b239aad",
"is_prebuild": false,
"is_prebuild_claim": false,
"name": "coder",
"name": "default",
"prebuild_count": 0,
"start_count": 1,
"template_id": "",
@@ -50,7 +50,7 @@
"schema_version": 0,
"values": {
"email": "default@example.com",
"full_name": "coder",
"full_name": "default",
"groups": [],
"id": "cb8c55f2-7f66-4e69-a584-eb08f4a7cf04",
"login_type": null,
@@ -79,8 +79,9 @@
"schema_version": 1,
"values": {
"app_id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
"enabled": false,
"id": "c4f032b8-97e4-42b0-aa2f-30a9e698f8d4",
"prompt": "default",
"prompt": null,
"sidebar_app": []
},
"sensitive_values": {
@@ -66,6 +66,7 @@
},
"after_unknown": {
"app_id": true,
"enabled": true,
"id": true,
"prompt": true,
"sidebar_app": [
@@ -97,6 +98,7 @@
"sidebar_app": []
},
"after_unknown": {
"enabled": true,
"id": true,
"prompt": true,
"sidebar_app": []
@@ -137,11 +139,11 @@
"schema_version": 1,
"values": {
"access_port": 443,
"access_url": "https://dev.coder.com/",
"access_url": "https://mydeployment.coder.com",
"id": "344575c1-55b9-43bb-89b5-35f547e2cf08",
"is_prebuild": false,
"is_prebuild_claim": false,
"name": "sebenza-nonix",
"name": "default",
"prebuild_count": 0,
"start_count": 1,
"template_id": "",
@@ -173,7 +175,9 @@
},
"sensitive_values": {
"groups": [],
"oidc_access_token": true,
"rbac_roles": [],
"session_token": true,
"ssh_private_key": true
}
}
@@ -27,11 +27,11 @@
"schema_version": 1,
"values": {
"access_port": 443,
"access_url": "https://dev.coder.com/",
"access_url": "https://mydeployment.coder.com",
"id": "b6713709-6736-4d2f-b3da-7b5b242df5f4",
"is_prebuild": false,
"is_prebuild_claim": false,
"name": "sebenza-nonix",
"name": "default",
"prebuild_count": 0,
"start_count": 1,
"template_id": "",
@@ -63,7 +63,9 @@
},
"sensitive_values": {
"groups": [],
"oidc_access_token": true,
"rbac_roles": [],
"session_token": true,
"ssh_private_key": true
}
},
@@ -77,8 +79,9 @@
"schema_version": 1,
"values": {
"app_id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
"enabled": false,
"id": "89e6ab36-2e98-4d13-9b4c-69b7588b7e1d",
"prompt": "default",
"prompt": null,
"sidebar_app": [
{
"id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd"
@@ -101,8 +104,9 @@
"schema_version": 1,
"values": {
"app_id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
"enabled": false,
"id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
"prompt": "default",
"prompt": null,
"sidebar_app": []
},
"sensitive_values": {
@@ -50,6 +50,7 @@
},
"after_unknown": {
"app_id": true,
"enabled": true,
"id": true,
"prompt": true,
"sidebar_app": [
@@ -94,11 +95,11 @@
"schema_version": 1,
"values": {
"access_port": 443,
"access_url": "https://dev.coder.com/",
"access_url": "https://mydeployment.coder.com",
"id": "344575c1-55b9-43bb-89b5-35f547e2cf08",
"is_prebuild": false,
"is_prebuild_claim": false,
"name": "sebenza-nonix",
"name": "default",
"prebuild_count": 0,
"start_count": 1,
"template_id": "",
@@ -130,7 +131,9 @@
},
"sensitive_values": {
"groups": [],
"oidc_access_token": true,
"rbac_roles": [],
"session_token": true,
"ssh_private_key": true
}
}
@@ -27,11 +27,11 @@
"schema_version": 1,
"values": {
"access_port": 443,
"access_url": "https://dev.coder.com/",
"access_url": "https://mydeployment.coder.com",
"id": "b6713709-6736-4d2f-b3da-7b5b242df5f4",
"is_prebuild": false,
"is_prebuild_claim": false,
"name": "sebenza-nonix",
"name": "default",
"prebuild_count": 0,
"start_count": 1,
"template_id": "",
@@ -63,7 +63,9 @@
},
"sensitive_values": {
"groups": [],
"oidc_access_token": true,
"rbac_roles": [],
"session_token": true,
"ssh_private_key": true
}
},
@@ -77,8 +79,9 @@
"schema_version": 1,
"values": {
"app_id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
"enabled": false,
"id": "89e6ab36-2e98-4d13-9b4c-69b7588b7e1d",
"prompt": "default",
"prompt": null,
"sidebar_app": [
{
"id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd"
@@ -147,11 +147,11 @@
"schema_version": 1,
"values": {
"access_port": 443,
"access_url": "https://dev.coder.com/",
"access_url": "https://mydeployment.coder.com",
"id": "0b7fc772-5e27-4096-b8a3-9e6a8b914ebe",
"is_prebuild": false,
"is_prebuild_claim": false,
"name": "kacper",
"name": "default",
"prebuild_count": 0,
"start_count": 1,
"template_id": "",
@@ -170,7 +170,7 @@
"schema_version": 0,
"values": {
"email": "default@example.com",
"full_name": "kacpersaw",
"full_name": "default",
"groups": [],
"id": "1ebd1795-7cf2-47c5-8024-5d56e68f1681",
"login_type": null,
@@ -27,11 +27,11 @@
"schema_version": 1,
"values": {
"access_port": 443,
"access_url": "https://dev.coder.com/",
"access_url": "https://mydeployment.coder.com",
"id": "dfa1dbe8-ad31-410b-b201-a4ed4d884938",
"is_prebuild": false,
"is_prebuild_claim": false,
"name": "kacper",
"name": "default",
"prebuild_count": 0,
"start_count": 1,
"template_id": "",
@@ -50,7 +50,7 @@
"schema_version": 0,
"values": {
"email": "default@example.com",
"full_name": "kacpersaw",
"full_name": "default",
"groups": [],
"id": "f5e82b90-ea22-4288-8286-9cf7af651143",
"login_type": null,
@@ -136,7 +136,9 @@
"id": "github",
"optional": null
},
"sensitive_values": {}
"sensitive_values": {
"access_token": true
}
},
{
"address": "data.coder_external_auth.gitlab",
@@ -150,7 +152,9 @@
"id": "gitlab",
"optional": true
},
"sensitive_values": {}
"sensitive_values": {
"access_token": true
}
}
]
}
@@ -16,7 +16,9 @@
"id": "github",
"optional": null
},
"sensitive_values": {}
"sensitive_values": {
"access_token": true
}
},
{
"address": "data.coder_external_auth.gitlab",
@@ -30,7 +32,9 @@
"id": "gitlab",
"optional": true
},
"sensitive_values": {}
"sensitive_values": {
"access_token": true
}
},
{
"address": "coder_agent.main",
@@ -56,6 +56,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -83,6 +84,7 @@
"share": "owner",
"slug": "app2",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -175,6 +177,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -213,6 +216,7 @@
"share": "owner",
"slug": "app2",
"subdomain": null,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -72,6 +72,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -104,6 +105,7 @@
"share": "owner",
"slug": "app2",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -86,6 +86,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -118,6 +119,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -146,6 +148,7 @@
"share": "owner",
"slug": "app3",
"subdomain": false,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -294,6 +297,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -337,6 +341,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -378,6 +383,7 @@
"share": "owner",
"slug": "app3",
"subdomain": false,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -116,6 +116,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -153,6 +154,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -186,6 +188,7 @@
"share": "owner",
"slug": "app3",
"subdomain": false,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -2,7 +2,7 @@ terraform {
required_providers {
coder = {
source = "coder/coder"
version = "2.2.0-pre0"
version = ">=2.2.0"
}
}
}
@@ -134,6 +134,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -166,6 +167,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -369,6 +371,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -412,6 +415,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -456,7 +460,7 @@
"coder": {
"name": "coder",
"full_name": "registry.terraform.io/coder/coder",
"version_constraint": "2.2.0-pre0"
"version_constraint": ">= 2.2.0"
},
"null": {
"name": "null",
@@ -164,6 +164,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -201,6 +202,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -55,6 +55,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -87,6 +88,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -115,6 +117,7 @@
"share": "owner",
"slug": "app3",
"subdomain": false,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -206,6 +209,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -249,6 +253,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -290,6 +295,7 @@
"share": "owner",
"slug": "app3",
"subdomain": false,
"tooltip": null,
"url": null
},
"after_unknown": {
@@ -71,6 +71,7 @@
"share": "owner",
"slug": "app1",
"subdomain": null,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -108,6 +109,7 @@
"share": "owner",
"slug": "app2",
"subdomain": true,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -141,6 +143,7 @@
"share": "owner",
"slug": "app3",
"subdomain": false,
"tooltip": null,
"url": null
},
"sensitive_values": {
@@ -162,6 +162,8 @@
"schema_version": 1,
"values": {
"default": true,
"description": null,
"icon": null,
"id": "development",
"name": "development",
"parameters": {
@@ -194,6 +196,8 @@
"schema_version": 1,
"values": {
"default": true,
"description": null,
"icon": null,
"id": "production",
"name": "production",
"parameters": {
@@ -42,6 +42,8 @@
"schema_version": 1,
"values": {
"default": true,
"description": null,
"icon": null,
"id": "development",
"name": "development",
"parameters": {
@@ -74,6 +76,8 @@
"schema_version": 1,
"values": {
"default": true,
"description": null,
"icon": null,
"id": "production",
"name": "production",
"parameters": {
@@ -162,6 +162,8 @@
"schema_version": 1,
"values": {
"default": true,
"description": null,
"icon": null,
"id": "development",
"name": "development",
"parameters": {
@@ -194,6 +196,8 @@
"schema_version": 1,
"values": {
"default": false,
"description": null,
"icon": null,
"id": "production",
"name": "production",
"parameters": {
