Compare commits

..

32 Commits

Author SHA1 Message Date
Rowan Smith 3b2ded6985 chore: switch agent gone response from 502 to 404 (backport #23090) (#23636)
Backport of #23090 to `release/2.30`.

When a user creates a workspace, opens the web terminal, and the
workspace then stops while the web terminal remains open, the web
terminal retries the connection. Coder would issue an HTTP 502 Bad
Gateway response because coderd could not connect to the workspace
agent. This is problematic: any load balancer sitting in front of Coder
sees the 502 and assumes Coder itself is unhealthy.

This PR changes the response to an HTTP 404 after internal discussion.

Cherry-picked from merge commit
c33812a430.
2026-03-25 16:49:44 -04:00
blinkagent[bot] de64b63977 fix(coderd): add organization_name label to insights Prometheus metrics (cherry-pick #22296) (#23447)
Backport of #22296 to release/2.30.

When multiple organizations have templates with the same name, the
Prometheus `/metrics` endpoint returns HTTP 500 because Prometheus
rejects duplicate label combinations. The three `coderd_insights_*`
metrics (`coderd_insights_templates_active_users`,
`coderd_insights_applications_usage_seconds`,
`coderd_insights_parameters`) used only `template_name` as a
distinguishing label, so two templates named e.g. `"openstack-v1"` in
different orgs would produce duplicate metric series.

This adds `organization_name` as a label to all three insight metric
descriptors to disambiguate templates across organizations.
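Why the extra label fixes the 500 can be shown without the Prometheus client library: Prometheus identifies a series by its full label set, so two series with identical labels are duplicates and rejected at collection time. A minimal stdlib sketch of that identity rule (`seriesKey` is an illustrative helper, not Coder or Prometheus code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// seriesKey builds a canonical identity for a metric series from its
// label pairs, the way Prometheus distinguishes series: identical label
// sets mean a duplicate series.
func seriesKey(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, k+"="+labels[k])
	}
	return strings.Join(parts, ",")
}

func main() {
	// Two orgs, same template name: template_name alone collides.
	a := seriesKey(map[string]string{"template_name": "openstack-v1"})
	b := seriesKey(map[string]string{"template_name": "openstack-v1"})
	fmt.Println(a == b) // true: duplicate series, /metrics fails

	// Adding organization_name disambiguates.
	a = seriesKey(map[string]string{"organization_name": "org-a", "template_name": "openstack-v1"})
	b = seriesKey(map[string]string{"organization_name": "org-b", "template_name": "openstack-v1"})
	fmt.Println(a == b) // false: distinct series per organization
}
```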

(cherry picked from commit 4057363f78)

Fixes #21748

Co-authored-by: Garrett Delfosse <garrett@coder.com>
2026-03-25 15:43:12 -04:00
Charlie Voiselle 149e9f1dc0 fix: open coder_app links in new tab when open_in is tab (cherry-pick #23000) (#23621)
Cherry-pick of #23000 onto release/2.30.

Co-authored-by: Kayla はな <kayla@tree.camp>
2026-03-25 15:31:43 -04:00
Susana Ferreira 2970c54140 fix: bump aibridge to v1.0.9 to forward Anthropic-Beta header (#22936)
Bumps aibridge to v1.0.9, which forwards the `Anthropic-Beta` header
from client requests to the upstream Anthropic API:
https://github.com/coder/aibridge/pull/205

This fixes the `context_management: Extra inputs are not permitted`
error when using Claude Code with AI Bridge.

Note: v1.0.8 was retracted because the Go module proxy cached a copy
containing a merge conflict marker
(https://github.com/coder/aibridge/pull/208); v1.0.9 contains the same
fix.

Related to internal Slack thread:
https://codercom.slack.com/archives/C096PFVBZKN/p1773192289945009?thread_ts=1772811897.981709&cid=C096PFVBZKN
2026-03-16 13:20:40 -04:00
Ethan 26e3da1f17 fix(tailnet): retry after transport dial timeouts (#22977) (cherry-pick/v2.30) (#22993)
Backport of #22977 to 2.30
2026-03-16 13:20:19 -04:00
Rowan Smith b49c4b3257 fix: prevent ui error when last org member is removed (#23018)
Backport of #22975 to release/2.30.
2026-03-13 14:22:31 -04:00
Rowan Smith 55da992aeb fix: avoid derp-related panic during wsproxy registration (backport release/2.30) (#22343)
Backport of #22322.

- Cherry-picked 7f03bd7.

Co-authored-by: Dean Sheather <dean@deansheather.com>
2026-03-03 13:39:15 -05:00
Lukasz 613029cb21 chore: update Go from 1.25.6 to 1.25.7 (#22465)
chore: update Go from 1.25.6 to 1.25.7

Co-authored-by: Jon Ayers <jon@coder.com>
2026-03-03 13:38:06 -05:00
Cian Johnston 7e0cf53dd1 fix(stringutil): operate on runes instead of bytes in Truncate (#22388) (#22467)
Fixes https://github.com/coder/coder/issues/22375

Updates `stringutil.Truncate` to properly handle multi-byte UTF-8
characters, and adds tests for multi-byte truncation at word
boundaries.

Created by Mux using Opus 4.6

(cherry picked from commit 0cfa03718e)
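The core of the fix is truncating by runes rather than bytes, so a multi-byte UTF-8 character is never split mid-sequence. A minimal sketch of that approach (not the actual `stringutil.Truncate`, which also handles word boundaries):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncate shortens s to at most n runes. Slicing by bytes instead
// (s[:n]) could cut a multi-byte character in half, producing invalid
// UTF-8.
func truncate(s string, n int) string {
	if utf8.RuneCountInString(s) <= n {
		return s
	}
	return string([]rune(s)[:n])
}

func main() {
	fmt.Println(truncate("héllo wörld", 5)) // héllo
}
```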
2026-03-02 11:19:49 +00:00
Danny Kopping fa050ee0ab chore: backport aibridge fixes (#22266)
Backports https://github.com/coder/coder/pull/22264

Includes fixes https://github.com/coder/aibridge/pull/189 and
https://github.com/coder/aibridge/pull/185

Signed-off-by: Danny Kopping <danny@coder.com>
2026-02-23 17:18:32 -05:00
Jake Howell bfb6583ecc feat: convert soft_limit to limit (cherry-pick/v2.30) (#22209)
Related [`internal#1281`](https://github.com/coder/internal/issues/1281)

Cherry-picks the following pull requests into `release/2.30`:

* https://github.com/coder/coder/pull/22048
* https://github.com/coder/coder/pull/21998
* https://github.com/coder/coder/pull/22210
2026-02-23 17:18:14 -05:00
Jakub Domeracki 40b3970388 feat(site)!: add consent prompt for auto-creation with prefilled parameters (#22255)
Cherry-pick of 60e3ab7632 from main.

Workspaces created via `mode=auto` links now require explicit user
confirmation before provisioning. A warning dialog shows all prefilled
`param.*` values from the URL and blocks creation until the user clicks
`Confirm and Create`. Clicking `Cancel` falls back to the standard form
view.

### Breaking behavior change

Links using `mode=auto` (e.g., "Open in Coder" buttons) will no longer
silently create workspaces. Users will now see a consent dialog and must
explicitly confirm before the workspace is provisioned.

Original PR: #22011

Co-authored-by: Kacper Sawicki <kacper@coder.com>
Co-authored-by: Jake Howell <jacob@coder.com>
2026-02-23 17:17:40 -05:00
Danielle Maywood fa284dc149 fix: avoid re-using AuthInstanceID for sub agents (#22196) (#22211)
Parent agents were re-using AuthInstanceID when spawning child agents.
This caused GetWorkspaceAgentByInstanceID to return the most recently
created sub agent instead of the parent when the parent tried to refetch
its own manifest.

Fix by not reusing AuthInstanceID for sub agents, and updating
GetWorkspaceAgentByInstanceID to filter them out entirely.

---

Cherry picked from 911d734df9
2026-02-23 17:17:16 -05:00
Lukasz b89dc439b7 chore: bump bundled terraform to 1.14.5 for 2.30 (#22192)
Description:
This PR updates the bundled Terraform binary and related version pins
from 1.14.1 to 1.14.5 (base image, installer fallback, and CI/test
fixtures). Terraform is statically built with an embedded Go runtime.
Moving to 1.14.5 updates the embedded toolchain and is intended to
address Go stdlib CVEs reported by security scanning.

Notes:

- Change is version-only; no functional Coder logic changes.

- Backport-friendly: intended to be cherry-picked to release branches
after merge.
2026-02-20 13:19:18 +01:00
Lukasz d4ce9620d6 chore: bump versions of gh actions 2.30 (#22217)
Update gh actions:
- aquasecurity/trivy-action v0.34.0
- harden-runner v2.14.2
2026-02-20 12:49:48 +01:00
Cian Johnston 16408b157b fix(cli): revert #21583 (#22000) (#22002) 2026-02-10 11:11:03 -06:00
Sas Swart ef29702014 fix: update AI Bridge to preserve stream property in 'chat/completions' calls (#21953) 2026-02-10 11:10:44 -06:00
Sas Swart 43e67d12e2 perf: update AIBridge for improved memory use at scale (#21896) 2026-02-03 10:58:25 -06:00
ケイラ 94cf95a3e8 fix: disable task sharing (#21901) 2026-02-03 10:49:17 -06:00
Susana Ferreira 5e2f845272 fix: support authentication for upstream proxy (#21841) (#21849)
Related to PR: https://github.com/coder/coder/pull/21841

(cherry picked from commit 09453aa5a5)
2026-02-03 10:05:59 +00:00
blinkagent[bot] 3d5dc93060 docs: reorganize AI Bridge client documentation (#21873)
Co-authored-by: Danny Kopping <danny@coder.com>
Co-authored-by: Atif Ali <atif@coder.com>
2026-02-03 13:22:43 +05:00
Rowan Smith 6e1fe14d6c fix(helm): allow overriding CODER_PPROF_ADDRESS and CODER_PROMETHEUS_ADDRESS (#21871)
backport of #21714

cc @uzair-coder07
2026-02-02 23:09:23 -06:00
Jon Ayers c0b939f7e4 fix: use existing transaction to claim prebuild (#21862) (#21868)
- Claiming a prebuild was happening outside a transaction
2026-02-02 22:11:03 -06:00
Jon Ayers 1fd77bc459 chore: cherry-pick fixes (#21864) 2026-02-02 15:57:20 -06:00
Zach 37c3476ca7 fix: handle boundary usage across snapshots and prevent race (cherry-pick) (#21853) 2026-02-02 14:06:04 -06:00
Danny Kopping 26a3f82a39 chore(helm): disable liveness probes by default, allow all probe settings (#21847) 2026-02-02 14:05:18 -06:00
Zach ea6b11472c feat: add time window fields to telemetry boundary usage (cherry-pick) (#21775) 2026-02-02 14:04:58 -06:00
Danny Kopping a92dc3d5b3 chore: ensure consistent YAML names for aibridge flags (#21751) (#21756) 2026-02-02 14:03:09 -06:00
Jake Howell a69aea2c83 feat: implement ai governance consumption frontend (cherry-pick) (#21742) 2026-02-02 14:02:53 -06:00
Jake Howell c2db391019 chore: update paywall message to reference AI governance add-on (cherry-pick) (#21741) 2026-02-02 14:02:35 -06:00
Susana Ferreira 895cc07395 feat: add metrics to aibridgeproxy (#21709) (#21767)
Related to PR: https://github.com/coder/coder/pull/21709

(cherry picked from commit 9f6ce7542a)
2026-02-02 17:48:12 +00:00
Susana Ferreira 0377c985e4 feat: add provider to aibridgeproxy requestContext (#21710) (#21766)
Related to PR: https://github.com/coder/coder/pull/21710

(cherry picked from commit 327c885292)
2026-02-02 17:42:47 +00:00
299 changed files with 5034 additions and 9087 deletions
+2 -2
@@ -7,6 +7,6 @@ runs:
- name: go install tools
shell: bash
run: |
./.github/scripts/retry.sh -- go install tool
go install tool
# NOTE: protoc-gen-go cannot be installed with `go get`
./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
+4 -4
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.25.6"
default: "1.25.7"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
@@ -22,14 +22,14 @@ runs:
- name: Install gotestsum
shell: bash
run: ./.github/scripts/retry.sh -- go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
run: go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
- name: Install mtimehash
shell: bash
run: ./.github/scripts/retry.sh -- go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
run: go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
# It isn't necessary that we ever do this, but it helps
# separate the "setup" from the "run" times.
- name: go mod download
shell: bash
run: ./.github/scripts/retry.sh -- go mod download -x
run: go mod download -x
+1 -1
@@ -14,4 +14,4 @@ runs:
# - https://github.com/sqlc-dev/sqlc/pull/4159
shell: bash
run: |
./.github/scripts/retry.sh -- env CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
+1 -1
@@ -7,5 +7,5 @@ runs:
- name: Install Terraform
uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
with:
terraform_version: 1.14.1
terraform_version: 1.14.5
terraform_wrapper: false
-50
@@ -1,50 +0,0 @@
#!/usr/bin/env bash
# Retry a command with exponential backoff.
#
# Usage: retry.sh [--max-attempts N] -- <command...>
#
# Example:
# retry.sh --max-attempts 3 -- go install gotest.tools/gotestsum@latest
#
# This will retry the command up to 3 times with exponential backoff
# (2s, 4s, 8s delays between attempts).
set -euo pipefail
# shellcheck source=scripts/lib.sh
source "$(dirname "${BASH_SOURCE[0]}")/../../scripts/lib.sh"
max_attempts=3
args="$(getopt -o "" -l max-attempts: -- "$@")"
eval set -- "$args"
while true; do
case "$1" in
--max-attempts)
max_attempts="$2"
shift 2
;;
--)
shift
break
;;
*)
error "Unrecognized option: $1"
;;
esac
done
if [[ $# -lt 1 ]]; then
error "Usage: retry.sh [--max-attempts N] -- <command...>"
fi
attempt=1
until "$@"; do
if ((attempt >= max_attempts)); then
error "Command failed after $max_attempts attempts: $*"
fi
delay=$((2 ** attempt))
log "Attempt $attempt/$max_attempts failed, retrying in ${delay}s..."
sleep "$delay"
((attempt++))
done
+33 -66
@@ -35,7 +35,7 @@ jobs:
tailnet-integration: ${{ steps.filter.outputs.tailnet-integration }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -157,7 +157,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -176,7 +176,7 @@ jobs:
- name: Get golangci-lint cache dir
run: |
linter_ver=$(grep -Eo 'GOLANGCI_LINT_VERSION=\S+' dogfood/coder/Dockerfile | cut -d '=' -f 2)
./.github/scripts/retry.sh -- go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
dir=$(golangci-lint cache status | awk '/Dir/ { print $2 }')
echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"
@@ -225,7 +225,13 @@ jobs:
run: helm version --short
- name: make lint
run: make --output-sync=line -j lint
run: |
# zizmor isn't included in the lint target because it takes a while,
# but we explicitly want to run it in CI.
make --output-sync=line -j lint lint/actions/zizmor
env:
# Used by zizmor to lint third-party GitHub actions.
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Check workflow files
run: |
@@ -239,38 +245,13 @@ jobs:
./scripts/check_unstaged.sh
shell: bash
lint-actions:
needs: changes
if: needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
- name: Setup Go
uses: ./.github/actions/setup-go
- name: make lint/actions
run: make --output-sync=line -j lint/actions
env:
# Used by zizmor to lint third-party GitHub actions.
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
gen:
timeout-minutes: 20
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
if: ${{ !cancelled() }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -327,7 +308,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -348,7 +329,7 @@ jobs:
uses: ./.github/actions/setup-go
- name: Install shfmt
run: ./.github/scripts/retry.sh -- go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
run: go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
- name: make fmt
timeout-minutes: 7
@@ -379,7 +360,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -414,18 +395,6 @@ jobs:
id: go-paths
uses: ./.github/actions/setup-go-paths
# macOS default bash and coreutils are too old for our scripts
# (lib.sh requires bash 4+, GNU getopt, make 4+).
- name: Setup GNU tools (macOS)
if: runner.os == 'macOS'
run: |
brew install bash gnu-getopt make
{
echo "$(brew --prefix bash)/bin"
echo "$(brew --prefix gnu-getopt)/bin"
echo "$(brew --prefix make)/libexec/gnubin"
} >> "$GITHUB_PATH"
- name: Setup Go
uses: ./.github/actions/setup-go
with:
@@ -585,7 +554,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -647,7 +616,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -719,7 +688,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -746,7 +715,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -779,7 +748,7 @@ jobs:
name: ${{ matrix.variant.name }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -859,7 +828,7 @@ jobs:
if: needs.changes.outputs.site == 'true' || needs.changes.outputs.ci == 'true'
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -940,7 +909,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -997,7 +966,6 @@ jobs:
- changes
- fmt
- lint
- lint-actions
- gen
- test-go-pg
- test-go-pg-17
@@ -1012,7 +980,7 @@ jobs:
if: always()
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -1022,7 +990,6 @@ jobs:
echo "- changes: ${{ needs.changes.result }}"
echo "- fmt: ${{ needs.fmt.result }}"
echo "- lint: ${{ needs.lint.result }}"
echo "- lint-actions: ${{ needs.lint-actions.result }}"
echo "- gen: ${{ needs.gen.result }}"
echo "- test-go-pg: ${{ needs.test-go-pg.result }}"
echo "- test-go-pg-17: ${{ needs.test-go-pg-17.result }}"
@@ -1101,7 +1068,7 @@ jobs:
- name: Build dylibs
run: |
set -euxo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
make gen/mark-fresh
make build/coder-dylib
@@ -1133,7 +1100,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -1150,10 +1117,10 @@ jobs:
uses: ./.github/actions/setup-go
- name: Install go-winres
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nfpm
run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
- name: Install zstd
run: sudo apt-get install -y zstd
@@ -1161,7 +1128,7 @@ jobs:
- name: Build
run: |
set -euxo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
make gen/mark-fresh
make build
@@ -1188,7 +1155,7 @@ jobs:
IMAGE: ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -1234,16 +1201,16 @@ jobs:
# Necessary for signing Windows binaries.
- name: Setup Java
uses: actions/setup-java@be666c2fcd27ec809703dec50e508c2fdc7f6654 # v5.2.0
uses: actions/setup-java@f2beeb24e141e01a676f977032f5a29d81c9e27e # v5.1.0
with:
distribution: "zulu"
java-version: "11.0"
- name: Install go-winres
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nfpm
run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
- name: Install zstd
run: sudo apt-get install -y zstd
@@ -1291,7 +1258,7 @@ jobs:
- name: Build
run: |
set -euxo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
version="$(./scripts/version.sh)"
tag="main-${version//+/-}"
@@ -1585,7 +1552,7 @@ jobs:
if: needs.changes.outputs.db == 'true' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+3 -3
@@ -36,7 +36,7 @@ jobs:
verdict: ${{ steps.check.outputs.verdict }} # DEPLOY or NOOP
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -65,7 +65,7 @@ jobs:
packages: write # to retag image as dogfood
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -146,7 +146,7 @@ jobs:
needs: deploy
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+3 -2
@@ -186,8 +186,6 @@ jobs:
Use \`gh\` to get PR details, diff, and all comments. Check for previous doc-check comments (from coder-doc-check) and only post a new comment if it adds value.
**Do not comment if no documentation changes are needed.**
## Comment format
Use this structure (only include relevant sections):
@@ -204,6 +202,9 @@ jobs:
### New Documentation Needed
- [ ] \`docs/suggested/path.md\` - [what should be documented]
### No Changes Needed
[brief explanation - use this OR the above sections, not both]
---
*Automated review via [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
\`\`\`"
+1 -1
@@ -38,7 +38,7 @@ jobs:
if: github.repository_owner == 'coder'
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+2 -2
@@ -26,7 +26,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-4' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -125,7 +125,7 @@ jobs:
id-token: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -28,7 +28,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -19,7 +19,7 @@ jobs:
packages: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+5 -5
@@ -39,7 +39,7 @@ jobs:
PR_OPEN: ${{ steps.check_pr.outputs.pr_open }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -76,7 +76,7 @@ jobs:
runs-on: "ubuntu-latest"
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -184,7 +184,7 @@ jobs:
pull-requests: write # needed for commenting on PRs
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -228,7 +228,7 @@ jobs:
CODER_IMAGE_TAG: ${{ needs.get_info.outputs.CODER_IMAGE_TAG }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -288,7 +288,7 @@ jobs:
PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -14,7 +14,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+8 -8
@@ -121,7 +121,7 @@ jobs:
- name: Build dylibs
run: |
set -euxo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
make gen/mark-fresh
make build/coder-dylib
@@ -164,7 +164,7 @@ jobs:
version: ${{ steps.version.outputs.version }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -253,13 +253,13 @@ jobs:
# Necessary for signing Windows binaries.
- name: Setup Java
uses: actions/setup-java@be666c2fcd27ec809703dec50e508c2fdc7f6654 # v5.2.0
uses: actions/setup-java@f2beeb24e141e01a676f977032f5a29d81c9e27e # v5.1.0
with:
distribution: "zulu"
java-version: "11.0"
- name: Install go-winres
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nsis and zstd
run: sudo apt-get install -y nsis zstd
@@ -341,7 +341,7 @@ jobs:
- name: Build binaries
run: |
set -euo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
version="$(./scripts/version.sh)"
make gen/mark-fresh
@@ -802,7 +802,7 @@ jobs:
# TODO: skip this if it's not a new release (i.e. a backport). This is
# fine right now because it just makes a PR that we can close.
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -878,7 +878,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -971,7 +971,7 @@ jobs:
if: ${{ !inputs.dry_run }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -20,7 +20,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+6 -6
@@ -27,7 +27,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -69,7 +69,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -97,11 +97,11 @@ jobs:
- name: Install yq
run: go run github.com/mikefarah/yq/v4@v4.44.3
- name: Install mockgen
run: ./.github/scripts/retry.sh -- go install go.uber.org/mock/mockgen@v0.6.0
run: go install go.uber.org/mock/mockgen@v0.5.0
- name: Install protoc-gen-go
run: ./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
run: go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
- name: Install protoc-gen-go-drpc
run: ./.github/scripts/retry.sh -- go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
run: go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
- name: Install Protoc
run: |
# protoc must be in lockstep with our dogfood Dockerfile or the
@@ -146,7 +146,7 @@ jobs:
echo "image=$(cat "$image_job")" >> "$GITHUB_OUTPUT"
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8
uses: aquasecurity/trivy-action@c1824fd6edce30d7ab345a9989de00bbd46ef284 # v0.34.0
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
+3 -3
@@ -18,7 +18,7 @@ jobs:
pull-requests: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -96,7 +96,7 @@ jobs:
contents: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
@@ -120,7 +120,7 @@ jobs:
actions: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+1 -1
@@ -21,7 +21,7 @@ jobs:
pull-requests: write # required to post PR review comments by the action
steps:
- name: Harden Runner
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
with:
egress-policy: audit
+3 -5
@@ -562,11 +562,9 @@ else
endif
.PHONY: fmt/markdown
# Note: we don't run zizmor in the lint target because it takes a while.
# GitHub Actions linters are run in a separate CI job (lint-actions) that only
# triggers when workflow files change, so we skip them here when CI=true.
LINT_ACTIONS_TARGETS := $(if $(CI),,lint/actions/actionlint)
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/check-scopes lint/migrations $(LINT_ACTIONS_TARGETS)
# Note: we don't run zizmor in the lint target because it takes a while. CI
# runs it explicitly.
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes lint/migrations
.PHONY: lint
lint/site-icons:
-71
@@ -9,7 +9,6 @@ import (
"path/filepath"
"regexp"
"strings"
"sync"
"testing"
"github.com/google/go-cmp/cmp"
@@ -96,76 +95,6 @@ ExtractCommandPathsLoop:
}
}
// Output captures stdout and stderr from an invocation and formats them with
// prefixes for golden file testing, preserving their interleaved order.
type Output struct {
mu sync.Mutex
stdout bytes.Buffer
stderr bytes.Buffer
combined bytes.Buffer
}
// prefixWriter wraps a buffer and prefixes each line with a given prefix.
type prefixWriter struct {
mu *sync.Mutex
prefix string
raw *bytes.Buffer
combined *bytes.Buffer
line bytes.Buffer // buffer for incomplete lines
}
// Write implements io.Writer, adding a prefix to each complete line.
func (w *prefixWriter) Write(p []byte) (n int, err error) {
w.mu.Lock()
defer w.mu.Unlock()
// Write unprefixed to raw buffer.
_, _ = w.raw.Write(p)
// Append to line buffer.
_, _ = w.line.Write(p)
// Split on newlines.
lines := bytes.Split(w.line.Bytes(), []byte{'\n'})
// Write all complete lines (all but the last, which may be incomplete).
for i := 0; i < len(lines)-1; i++ {
_, _ = w.combined.WriteString(w.prefix)
_, _ = w.combined.Write(lines[i])
_ = w.combined.WriteByte('\n')
}
// Keep the last line (incomplete) in the buffer.
w.line.Reset()
_, _ = w.line.Write(lines[len(lines)-1])
return len(p), nil
}
// Capture sets up stdout and stderr writers on the invocation that prefix each
// line with "out: " or "err: " while preserving their order.
func Capture(inv *serpent.Invocation) *Output {
output := &Output{}
inv.Stdout = &prefixWriter{mu: &output.mu, prefix: "out: ", raw: &output.stdout, combined: &output.combined}
inv.Stderr = &prefixWriter{mu: &output.mu, prefix: "err: ", raw: &output.stderr, combined: &output.combined}
return output
}
// Golden returns the formatted output with lines prefixed by "err: " or "out: ".
func (o *Output) Golden() []byte {
return o.combined.Bytes()
}
// Stdout returns the unprefixed stdout content for parsing (e.g., JSON).
func (o *Output) Stdout() string {
return o.stdout.String()
}
// Stderr returns the unprefixed stderr content.
func (o *Output) Stderr() string {
return o.stderr.String()
}
// TestGoldenFile will test the given bytes slice input against the
// golden file with the given file name, optionally using the given replacements.
func TestGoldenFile(t *testing.T, fileName string, actual []byte, replacements map[string]string) {
-5
@@ -491,11 +491,6 @@ func (m multiSelectModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case tea.KeySpace:
options := m.filteredOptions()
if m.enableCustomInput && m.cursor == len(options) {
return m, nil
}
if len(options) != 0 {
options[m.cursor].chosen = !options[m.cursor].chosen
}
+1 -5
@@ -139,15 +139,12 @@ func TestCreate(t *testing.T) {
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Create a new version
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
@@ -519,7 +516,6 @@ func TestCreateWithRichParameters(t *testing.T) {
version2 := coderdtest.CreateTemplateVersion(t, tctx.client, tctx.owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
{Name: "another_parameter", Type: "string", DefaultValue: "not-relevant"},
}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = tctx.template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, tctx.client, version2.ID)
-13
@@ -174,19 +174,6 @@ func (RootCmd) promptExample() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
return multiSelectError
}, useThingsOption, enableCustomInputOption),
promptCmd("multi-select-no-defaults", func(inv *serpent.Invocation) error {
if len(multiSelectValues) == 0 {
multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{
Message: "Select some things:",
Options: []string{
"Code", "Chairs", "Whale",
},
EnableCustomInput: enableCustomInput,
})
}
_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
return multiSelectError
}, useThingsOption, enableCustomInputOption),
promptCmd("rich-multi-select", func(inv *serpent.Invocation) error {
if len(multiSelectValues) == 0 {
multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{
+3 -9
@@ -141,9 +141,7 @@ func TestGitSSH(t *testing.T) {
"-o", "IdentitiesOnly=yes",
"127.0.0.1",
)
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx := testutil.Context(t, testutil.WaitSuperLong)
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.EqualValues(t, 1, inc)
@@ -207,9 +205,7 @@ func TestGitSSH(t *testing.T) {
inv, _ := clitest.New(t, cmdArgs...)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx := testutil.Context(t, testutil.WaitSuperLong)
ctx := testutil.Context(t, testutil.WaitMedium)
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
select {
@@ -227,9 +223,7 @@ func TestGitSSH(t *testing.T) {
inv, _ = clitest.New(t, cmdArgs...)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx = testutil.Context(t, testutil.WaitSuperLong) // Reset context for second cmd test.
ctx = testutil.Context(t, testutil.WaitMedium) // Reset context for second cmd test.
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
select {
-29
@@ -462,38 +462,9 @@ func (r *RootCmd) login() *serpent.Command {
Value: serpent.BoolOf(&useTokenForSession),
},
}
cmd.Children = []*serpent.Command{
r.loginToken(),
}
return cmd
}
func (r *RootCmd) loginToken() *serpent.Command {
return &serpent.Command{
Use: "token",
Short: "Print the current session token",
Long: "Print the session token for use in scripts and automation.",
Middleware: serpent.RequireNArgs(0),
Handler: func(inv *serpent.Invocation) error {
tok, err := r.ensureTokenBackend().Read(r.clientURL)
if err != nil {
if xerrors.Is(err, os.ErrNotExist) {
return xerrors.New("no session token found - run 'coder login' first")
}
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
return errKeyringNotSupported
}
return xerrors.Errorf("read session token: %w", err)
}
if tok == "" {
return xerrors.New("no session token found - run 'coder login' first")
}
_, err = fmt.Fprintln(inv.Stdout, tok)
return err
},
}
}
// isWSL determines if coder-cli is running within Windows Subsystem for Linux
func isWSL() (bool, error) {
if runtime.GOOS == goosDarwin || runtime.GOOS == goosWindows {
-28
@@ -537,31 +537,3 @@ func TestLogin(t *testing.T) {
require.Equal(t, selected, first.OrganizationID.String())
})
}
func TestLoginToken(t *testing.T) {
t.Parallel()
t.Run("PrintsToken", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, "login", "token", "--url", client.URL.String())
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitShort)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
pty.ExpectMatch(client.SessionToken())
})
t.Run("NoTokenStored", func(t *testing.T) {
t.Parallel()
inv, _ := clitest.New(t, "login", "token")
ctx := testutil.Context(t, testutil.WaitShort)
err := inv.WithContext(ctx).Run()
require.Error(t, err)
require.Contains(t, err.Error(), "no session token found")
})
}
-58
@@ -24,7 +24,6 @@ import (
"github.com/gofrs/flock"
"github.com/google/uuid"
"github.com/mattn/go-isatty"
"github.com/shirou/gopsutil/v4/process"
"github.com/spf13/afero"
gossh "golang.org/x/crypto/ssh"
gosshagent "golang.org/x/crypto/ssh/agent"
@@ -85,9 +84,6 @@ func (r *RootCmd) ssh() *serpent.Command {
containerName string
containerUser string
// Used in tests to simulate the parent exiting.
testForcePPID int64
)
cmd := &serpent.Command{
Annotations: workspaceCommand,
@@ -179,24 +175,6 @@ func (r *RootCmd) ssh() *serpent.Command {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// When running as a ProxyCommand (stdio mode), monitor the parent process
// and exit if it dies to avoid leaving orphaned processes. This is
// particularly important when editors like VSCode/Cursor spawn SSH
// connections and then crash or are killed - we don't want zombie
// `coder ssh` processes accumulating.
// Note: using gopsutil to check the parent process as this handles
// windows processes as well in a standard way.
if stdio {
ppid := int32(os.Getppid()) // nolint:gosec
checkParentInterval := 10 * time.Second // Arbitrary interval to not be too frequent
if testForcePPID > 0 {
ppid = int32(testForcePPID) // nolint:gosec
checkParentInterval = 100 * time.Millisecond // Shorter interval for testing
}
ctx, cancel = watchParentContext(ctx, quartz.NewReal(), ppid, process.PidExistsWithContext, checkParentInterval)
defer cancel()
}
// Prevent unnecessary logs from the stdlib from messing up the TTY.
// See: https://github.com/coder/coder/issues/13144
log.SetOutput(io.Discard)
@@ -797,12 +775,6 @@ func (r *RootCmd) ssh() *serpent.Command {
Value: serpent.BoolOf(&forceNewTunnel),
Hidden: true,
},
{
Flag: "test.force-ppid",
Description: "Override the parent process ID to simulate a different parent process. ONLY USE THIS IN TESTS.",
Value: serpent.Int64Of(&testForcePPID),
Hidden: true,
},
sshDisableAutostartOption(serpent.BoolOf(&disableAutostart)),
}
return cmd
@@ -1690,33 +1662,3 @@ func normalizeWorkspaceInput(input string) string {
return input // Fallback
}
}
// watchParentContext returns a context that is canceled when the parent process
// dies. It polls using the provided clock and checks if the parent is alive
// using the provided pidExists function.
func watchParentContext(ctx context.Context, clock quartz.Clock, originalPPID int32, pidExists func(context.Context, int32) (bool, error), interval time.Duration) (context.Context, context.CancelFunc) {
ctx, cancel := context.WithCancel(ctx) // intentionally shadowed
go func() {
ticker := clock.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
alive, err := pidExists(ctx, originalPPID)
// If we get an error checking the parent process (e.g., permission
// denied, the process is in an unknown state), we assume the parent
// is still alive to avoid disrupting the SSH connection. We only
// cancel when we definitively know the parent is gone (alive=false, err=nil).
if !alive && err == nil {
cancel()
return
}
}
}
}()
return ctx, cancel
}
-96
@@ -312,102 +312,6 @@ type fakeCloser struct {
err error
}
func TestWatchParentContext(t *testing.T) {
t.Parallel()
t.Run("CancelsWhenParentDies", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
parentAlive := true
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return parentAlive, nil
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we simulate parent death and advance the clock
parentAlive = false
mClock.AdvanceNext()
// Then: The context should be canceled
_ = testutil.TryReceive(ctx, t, childCtx.Done())
})
t.Run("DoesNotCancelWhenParentAlive", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil // Parent always alive
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance the clock several times with the parent alive
for range 3 {
mClock.AdvanceNext()
}
// Then: context should not be canceled
require.NoError(t, childCtx.Err())
})
t.Run("RespectsParentContext", func(t *testing.T) {
t.Parallel()
ctx, cancelParent := context.WithCancel(context.Background())
mClock := quartz.NewMock(t)
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil
}, testutil.WaitShort)
defer cancel()
// When: we cancel the parent context
cancelParent()
// Then: The context should be canceled
require.ErrorIs(t, childCtx.Err(), context.Canceled)
})
t.Run("DoesNotCancelOnError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
// Simulate an error checking parent status (e.g., permission denied).
// We should not cancel the context in this case to avoid disrupting
// the SSH connection.
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return false, xerrors.New("permission denied")
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance clock several times
for range 3 {
mClock.AdvanceNext()
}
// Context should NOT be canceled since we got an error (not a definitive "not alive")
require.NoError(t, childCtx.Err(), "context was canceled even though pidExists returned an error")
})
}
func (c *fakeCloser) Close() error {
*c.closes = append(*c.closes, c)
return c.err
-101
@@ -1122,107 +1122,6 @@ func TestSSH(t *testing.T) {
}
})
// This test ensures that the SSH session exits when the parent process dies.
t.Run("StdioExitOnParentDeath", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitSuperLong)
defer cancel()
// sleepStart -> agentReady -> sessionStarted -> sleepKill -> sleepDone -> cmdDone
sleepStart := make(chan int)
agentReady := make(chan struct{})
sessionStarted := make(chan struct{})
sleepKill := make(chan struct{})
sleepDone := make(chan struct{})
// Start a sleep process which we will pretend is the parent.
go func() {
sleepCmd := exec.Command("sleep", "infinity")
if !assert.NoError(t, sleepCmd.Start(), "failed to start sleep command") {
return
}
sleepStart <- sleepCmd.Process.Pid
defer close(sleepDone)
<-sleepKill
sleepCmd.Process.Kill()
_ = sleepCmd.Wait()
}()
client, workspace, agentToken := setupWorkspaceForAgent(t)
go func() {
defer close(agentReady)
_ = agenttest.New(t, client.URL, agentToken)
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).WaitFor(coderdtest.AgentsReady)
}()
clientOutput, clientInput := io.Pipe()
serverOutput, serverInput := io.Pipe()
defer func() {
for _, c := range []io.Closer{clientOutput, clientInput, serverOutput, serverInput} {
_ = c.Close()
}
}()
// Start a connection to the agent once it's ready
go func() {
<-agentReady
conn, channels, requests, err := ssh.NewClientConn(&testutil.ReaderWriterConn{
Reader: serverOutput,
Writer: clientInput,
}, "", &ssh.ClientConfig{
// #nosec
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
})
if !assert.NoError(t, err, "failed to create SSH client connection") {
return
}
defer conn.Close()
sshClient := ssh.NewClient(conn, channels, requests)
defer sshClient.Close()
session, err := sshClient.NewSession()
if !assert.NoError(t, err, "failed to create SSH session") {
return
}
close(sessionStarted)
<-sleepDone
// Ref: https://github.com/coder/internal/issues/1289
// This may return either a nil error or io.EOF.
// There is an inherent race here:
// 1. Sleep process is killed -> sleepDone is closed.
// 2. watchParentContext detects parent death, cancels context,
// causing SSH session teardown.
// 3. We receive from sleepDone and attempt to call session.Close()
// Now either:
// a. Session teardown completes before we call Close(), resulting in io.EOF
// b. We call Close() first, resulting in a nil error.
_ = session.Close()
}()
// Wait for our "parent" process to start
sleepPid := testutil.RequireReceive(ctx, t, sleepStart)
// Wait for the agent to be ready
testutil.SoftTryReceive(ctx, t, agentReady)
inv, root := clitest.New(t, "ssh", "--stdio", workspace.Name, "--test.force-ppid", fmt.Sprintf("%d", sleepPid))
clitest.SetupConfig(t, client, root)
inv.Stdin = clientOutput
inv.Stdout = serverInput
inv.Stderr = io.Discard
// Start the command
clitest.Start(t, inv.WithContext(ctx))
// Wait for a session to be established
testutil.SoftTryReceive(ctx, t, sessionStarted)
// Now kill the fake "parent"
close(sleepKill)
// The sleep process should exit
testutil.SoftTryReceive(ctx, t, sleepDone)
// And then the command should exit. This is tracked by clitest.Start.
})
t.Run("ForwardAgent", func(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("Test not supported on windows")
+1 -4
@@ -367,9 +367,7 @@ func TestStartAutoUpdate(t *testing.T) {
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version1.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version1.ID)
workspace := coderdtest.CreateWorkspace(t, member, template.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
@@ -381,7 +379,6 @@ func TestStartAutoUpdate(t *testing.T) {
coderdtest.MustTransitionWorkspace(t, member, workspace.ID, codersdk.WorkspaceTransitionStart, codersdk.WorkspaceTransitionStop)
}
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(stringRichParameters), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
-26
@@ -54,38 +54,12 @@ func (r *RootCmd) taskLogs() *serpent.Command {
return xerrors.Errorf("get task logs: %w", err)
}
// Handle snapshot responses (paused/initializing/pending tasks).
if logs.Snapshot {
if logs.SnapshotAt == nil {
// No snapshot captured yet.
cliui.Warnf(inv.Stderr,
"Task is %s. No snapshot available (snapshot may have failed during pause, resume your task to view logs).\n",
task.Status)
}
// Snapshot exists with logs, show warning with count.
if len(logs.Logs) > 0 {
if len(logs.Logs) == 1 {
cliui.Warnf(inv.Stderr, "Task is %s. Showing last 1 message from snapshot.\n", task.Status)
} else {
cliui.Warnf(inv.Stderr, "Task is %s. Showing last %d messages from snapshot.\n", task.Status, len(logs.Logs))
}
}
}
// Handle empty logs for both snapshot/live, table/json.
if len(logs.Logs) == 0 {
cliui.Infof(inv.Stderr, "No task logs found.")
return nil
}
out, err := formatter.Format(ctx, logs.Logs)
if err != nil {
return xerrors.Errorf("format task logs: %w", err)
}
if out == "" {
// Defensive check (shouldn't happen given count check above).
cliui.Infof(inv.Stderr, "No task logs found.")
return nil
}
+24 -136
@@ -19,7 +19,7 @@ import (
"github.com/coder/coder/v2/testutil"
)
func Test_TaskLogs_Golden(t *testing.T) {
func Test_TaskLogs(t *testing.T) {
t.Parallel()
testMessages := []agentapisdk.Message{
@@ -44,20 +44,23 @@ func Test_TaskLogs_Golden(t *testing.T) {
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client // user already has access to their own workspace
var stdout strings.Builder
inv, root := clitest.New(t, "task", "logs", task.Name, "--output", "json")
output := clitest.Capture(inv)
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify JSON is valid.
var logs []codersdk.TaskLogEntry
err = json.NewDecoder(strings.NewReader(output.Stdout())).Decode(&logs)
err = json.NewDecoder(strings.NewReader(stdout.String())).Decode(&logs)
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
require.Len(t, logs, 2)
require.Equal(t, "What is 1 + 1?", logs[0].Content)
require.Equal(t, codersdk.TaskLogTypeInput, logs[0].Type)
require.Equal(t, "2", logs[1].Content)
require.Equal(t, codersdk.TaskLogTypeOutput, logs[1].Type)
})
t.Run("ByTaskID_JSON", func(t *testing.T) {
@@ -67,20 +70,23 @@ func Test_TaskLogs_Golden(t *testing.T) {
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "logs", task.ID.String(), "--output", "json")
output := clitest.Capture(inv)
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify JSON is valid.
var logs []codersdk.TaskLogEntry
err = json.NewDecoder(strings.NewReader(output.Stdout())).Decode(&logs)
err = json.NewDecoder(strings.NewReader(stdout.String())).Decode(&logs)
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
require.Len(t, logs, 2)
require.Equal(t, "What is 1 + 1?", logs[0].Content)
require.Equal(t, codersdk.TaskLogTypeInput, logs[0].Type)
require.Equal(t, "2", logs[1].Content)
require.Equal(t, codersdk.TaskLogTypeOutput, logs[1].Type)
})
t.Run("ByTaskID_Table", func(t *testing.T) {
@@ -90,15 +96,19 @@ func Test_TaskLogs_Golden(t *testing.T) {
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "logs", task.ID.String())
output := clitest.Capture(inv)
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
output := stdout.String()
require.Contains(t, output, "What is 1 + 1?")
require.Contains(t, output, "2")
require.Contains(t, output, "input")
require.Contains(t, output, "output")
})
t.Run("TaskNotFound_ByName", func(t *testing.T) {
@@ -150,128 +160,6 @@ func Test_TaskLogs_Golden(t *testing.T) {
err := inv.WithContext(ctx).Run()
require.ErrorContains(t, err, assert.AnError.Error())
})
t.Run("SnapshotWithLogs_Table", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTestWithSnapshot(ctx, t, codersdk.TaskStatusPaused, testMessages)
userClient := client
inv, root := clitest.New(t, "task", "logs", task.Name)
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
})
t.Run("SnapshotWithLogs_JSON", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTestWithSnapshot(ctx, t, codersdk.TaskStatusPaused, testMessages)
userClient := client
inv, root := clitest.New(t, "task", "logs", task.Name, "--output", "json")
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify JSON is valid.
var logs []codersdk.TaskLogEntry
err = json.NewDecoder(strings.NewReader(output.Stdout())).Decode(&logs)
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
})
t.Run("SnapshotWithoutLogs_NoSnapshotCaptured", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTestWithoutSnapshot(t, codersdk.TaskStatusPaused)
userClient := client
inv, root := clitest.New(t, "task", "logs", task.Name)
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
})
t.Run("SnapshotWithSingleMessage", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
singleMessage := []agentapisdk.Message{
{
Id: 0,
Role: agentapisdk.RoleUser,
Content: "Single message",
Time: time.Now(),
},
}
client, task := setupCLITaskTestWithSnapshot(ctx, t, codersdk.TaskStatusPending, singleMessage)
userClient := client
inv, root := clitest.New(t, "task", "logs", task.Name)
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
})
t.Run("SnapshotEmptyLogs", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTestWithSnapshot(ctx, t, codersdk.TaskStatusInitializing, []agentapisdk.Message{})
userClient := client
inv, root := clitest.New(t, "task", "logs", task.Name)
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
})
t.Run("InitializingTaskSnapshot", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTestWithSnapshot(ctx, t, codersdk.TaskStatusInitializing, testMessages)
userClient := client
inv, root := clitest.New(t, "task", "logs", task.Name)
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
// Verify output format with golden file.
clitest.TestGoldenFile(t, t.Name(), output.Golden(), nil)
})
}
func fakeAgentAPITaskLogsOK(messages []agentapisdk.Message) map[string]http.HandlerFunc {
-91
@@ -20,11 +20,7 @@ import (
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
@@ -275,93 +271,6 @@ func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[st
return userClient, task
}
// setupCLITaskTestWithSnapshot creates a task in the specified status with a log snapshot.
func setupCLITaskTestWithSnapshot(ctx context.Context, t *testing.T, status codersdk.TaskStatus, messages []agentapisdk.Message) (*codersdk.Client, codersdk.Task) {
t.Helper()
ownerClient, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, ownerClient)
userClient, user := coderdtest.CreateAnotherUser(t, ownerClient, owner.OrganizationID)
ownerUser, err := ownerClient.User(ctx, owner.UserID.String())
require.NoError(t, err)
ownerSubject := coderdtest.AuthzUserSubject(ownerUser)
task := createTaskInStatus(t, db, owner.OrganizationID, user.ID, status)
// Create snapshot envelope with agentapi format.
envelope := coderd.TaskLogSnapshotEnvelope{
Format: "agentapi",
Data: agentapisdk.GetMessagesResponse{
Messages: messages,
},
}
snapshotJSON, err := json.Marshal(envelope)
require.NoError(t, err)
// Insert snapshot into database.
snapshotTime := time.Now()
err = db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: task.ID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err)
return userClient, task
}
// setupCLITaskTestWithoutSnapshot creates a task in the specified status without a log snapshot.
func setupCLITaskTestWithoutSnapshot(t *testing.T, status codersdk.TaskStatus) (*codersdk.Client, codersdk.Task) {
t.Helper()
ownerClient, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, ownerClient)
userClient, user := coderdtest.CreateAnotherUser(t, ownerClient, owner.OrganizationID)
task := createTaskInStatus(t, db, owner.OrganizationID, user.ID, status)
return userClient, task
}
// createTaskInStatus creates a task in the specified status using dbfake.
func createTaskInStatus(t *testing.T, db database.Store, orgID, ownerID uuid.UUID, status codersdk.TaskStatus) codersdk.Task {
t.Helper()
builder := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: orgID,
OwnerID: ownerID,
}).
WithTask(database.TaskTable{
OrganizationID: orgID,
OwnerID: ownerID,
}, nil)
switch status {
case codersdk.TaskStatusPending:
builder = builder.Pending()
case codersdk.TaskStatusInitializing:
builder = builder.Starting()
case codersdk.TaskStatusPaused:
builder = builder.Seed(database.WorkspaceBuild{
Transition: database.WorkspaceTransitionStop,
})
default:
require.Fail(t, "unsupported task status in test helper", "status: %s", status)
}
resp := builder.Do()
return codersdk.Task{
ID: resp.Task.ID,
Name: resp.Task.Name,
OrganizationID: resp.Task.OrganizationID,
OwnerID: resp.Task.OwnerID,
WorkspaceID: resp.Task.WorkspaceID,
Status: status,
}
}
// createAITaskTemplate creates a template configured for AI tasks with a sidebar app.
func createAITaskTemplate(t *testing.T, client *codersdk.Client, orgID uuid.UUID, opts ...aiTemplateOpt) codersdk.Template {
t.Helper()
-14
@@ -1,14 +0,0 @@
out: [
out: {
out: "id": 0,
out: "content": "What is 1 + 1?",
out: "type": "input",
out: "time": "====[timestamp]====="
out: },
out: {
out: "id": 1,
out: "content": "2",
out: "type": "output",
out: "time": "====[timestamp]====="
out: }
out: ]
@@ -1,3 +0,0 @@
out: TYPE CONTENT
out: input What is 1 + 1?
out: output 2
@@ -1,14 +0,0 @@
out: [
out: {
out: "id": 0,
out: "content": "What is 1 + 1?",
out: "type": "input",
out: "time": "====[timestamp]====="
out: },
out: {
out: "id": 1,
out: "content": "2",
out: "type": "output",
out: "time": "====[timestamp]====="
out: }
out: ]
@@ -1,5 +0,0 @@
err: WARN: Task is initializing. Showing last 2 messages from snapshot.
err:
out: TYPE CONTENT
out: input What is 1 + 1?
out: output 2
@@ -1 +0,0 @@
err: No task logs found.
@@ -1,16 +0,0 @@
err: WARN: Task is paused. Showing last 2 messages from snapshot.
err:
out: [
out: {
out: "id": 0,
out: "content": "What is 1 + 1?",
out: "type": "input",
out: "time": "====[timestamp]====="
out: },
out: {
out: "id": 1,
out: "content": "2",
out: "type": "output",
out: "time": "====[timestamp]====="
out: }
out: ]
@@ -1,5 +0,0 @@
err: WARN: Task is paused. Showing last 2 messages from snapshot.
err:
out: TYPE CONTENT
out: input What is 1 + 1?
out: output 2
@@ -1,4 +0,0 @@
err: WARN: Task is initializing. Showing last 1 message from snapshot.
err:
out: TYPE CONTENT
out: input Single message
@@ -1,3 +0,0 @@
err: WARN: Task is paused. No snapshot available (snapshot may have failed during pause, resume your task to view logs).
err:
err: No task logs found.
-3
@@ -9,9 +9,6 @@ USAGE:
macOS and Windows and a plain text file on Linux. Use the --use-keyring flag
or CODER_USE_KEYRING environment variable to change the storage mechanism.
SUBCOMMANDS:
token Print the current session token
OPTIONS:
--first-user-email string, $CODER_FIRST_USER_EMAIL
Specifies an email address to use if creating the first user for the
-11
@@ -1,11 +0,0 @@
coder v0.0.0-devel
USAGE:
coder login token
Print the current session token
Print the session token for use in scripts and automation.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -7,7 +7,7 @@
"last_seen_at": "====[timestamp]=====",
"name": "test-daemon",
"version": "v0.0.0-devel",
"api_version": "1.15",
"api_version": "1.14",
"provisioners": [
"echo"
],
+3
@@ -215,6 +215,9 @@ Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.
commas. Using this incorrectly can break SSH to your deployment, use
cautiously.
--ssh-hostname-prefix string, $CODER_SSH_HOSTNAME_PREFIX (default: coder.)
The SSH deployment prefix is used in the Host of the ssh config.
--web-terminal-renderer string, $CODER_WEB_TERMINAL_RENDERER (default: canvas)
The renderer to use when opening a web terminal. Valid values are
'canvas', 'webgl', or 'dom'.
+1 -2
@@ -523,8 +523,7 @@ disableWorkspaceSharing: false
# These options change the behavior of how clients interact with the Coder.
# Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.
client:
# Deprecated: use workspace-hostname-suffix instead. The SSH deployment prefix is
# used in the Host of the ssh config.
# The SSH deployment prefix is used in the Host of the ssh config.
# (default: coder., type: string)
sshHostnamePrefix: coder.
# Workspace hostnames use this suffix in SSH config and Coder Connect on Coder
+27 -31
@@ -188,9 +188,9 @@ func isDigit(s string) bool {
// - d (days, interpreted as 24h)
// - y (years, interpreted as 8_760h)
//
// Fractional values are supported for all units (e.g., "1.5d" for 36 hours).
// FIXME: handle fractional values as discussed in https://github.com/coder/coder/pull/15040#discussion_r1799261736
func extendedParseDuration(raw string) (time.Duration, error) {
var d float64
var d int64
isPositive := true
// handle negative durations by checking for a leading '-'
@@ -203,52 +203,48 @@ func extendedParseDuration(raw string) (time.Duration, error) {
return 0, xerrors.Errorf("invalid duration: %q", raw)
}
// Regular expression to match any characters that do not match the expected
// duration format. Allows digits, decimal point, and unit characters.
invalidCharRe := regexp.MustCompile(`[^0-9.|nsuµhdym]+`)
// Regular expression to match any characters that do not match the expected duration format
invalidCharRe := regexp.MustCompile(`[^0-9|nsuµhdym]+`)
if invalidCharRe.MatchString(raw) {
return 0, xerrors.Errorf("invalid duration format: %q", raw)
}
// Regular expression to match numbers (including decimals) followed by time
// units. Captures the numeric part (with optional decimal) and the unit.
re := regexp.MustCompile(`(\d+\.?\d*)(ns|us|µs|ms|s|m|h|d|y)`)
// Regular expression to match numbers followed by 'd', 'y', or time units
re := regexp.MustCompile(`(-?\d+)(ns|us|µs|ms|s|m|h|d|y)`)
matches := re.FindAllStringSubmatch(raw, -1)
for _, match := range matches {
num, err := strconv.ParseFloat(match[1], 64)
var num int64
num, err := strconv.ParseInt(match[1], 10, 0)
if err != nil {
return 0, xerrors.Errorf("invalid duration: %q", match[1])
}
var add float64
switch match[2] {
case "d":
add = num * float64(24*time.Hour)
// we want to check if d + num * int64(24*time.Hour) would overflow
if d > (1<<63-1)-num*int64(24*time.Hour) {
return 0, xerrors.Errorf("invalid duration: %q", raw)
}
d += num * int64(24*time.Hour)
case "y":
add = num * float64(8760*time.Hour)
case "h":
add = num * float64(time.Hour)
case "m":
add = num * float64(time.Minute)
case "s":
add = num * float64(time.Second)
case "ms":
add = num * float64(time.Millisecond)
case "us", "µs":
add = num * float64(time.Microsecond)
case "ns":
add = num * float64(time.Nanosecond)
// we want to check if d + num * int64(8760*time.Hour) would overflow
if d > (1<<63-1)-num*int64(8760*time.Hour) {
return 0, xerrors.Errorf("invalid duration: %q", raw)
}
d += num * int64(8760*time.Hour)
case "h", "m", "s", "ns", "us", "µs", "ms":
partDuration, err := time.ParseDuration(match[0])
if err != nil {
return 0, xerrors.Errorf("invalid duration: %q", match[0])
}
if d > (1<<63-1)-int64(partDuration) {
return 0, xerrors.Errorf("invalid duration: %q", raw)
}
d += int64(partDuration)
default:
return 0, xerrors.Errorf("invalid duration unit: %q", match[2])
}
// Check for overflow before adding.
const maxDuration = float64(1<<63 - 1)
if d > maxDuration-add {
return 0, xerrors.Errorf("invalid duration: %q", raw)
}
d += add
}
if !isPositive {
-12
@@ -69,18 +69,6 @@ func TestExtendedParseDuration(t *testing.T) {
{"92233754775807y", 0, false},
{"200y200y200y200y200y", 0, false},
{"9223372036854775807s", 0, false},
// fractional values
{"1.5d", 36 * time.Hour, true},
{"0.5h", 30 * time.Minute, true},
{"2.5s", 2500 * time.Millisecond, true},
{"1.5y", time.Duration(float64(365*24*time.Hour) * 1.5), true},
{"0.5m", 30 * time.Second, true},
{"1.5h30m", 2 * time.Hour, true},
{"0.25d", 6 * time.Hour, true},
{"-1.5h", -90 * time.Minute, true},
{"100.5ms", 100*time.Millisecond + 500*time.Microsecond, true},
{"1.5us", 1500 * time.Nanosecond, true},
{"1.5µs", 1500 * time.Nanosecond, true},
} {
t.Run(testCase.Duration, func(t *testing.T) {
t.Parallel()
+1 -1
@@ -91,7 +91,7 @@ func (a *SubAgentAPI) CreateSubAgent(ctx context.Context, req *agentproto.Create
Name: agentName,
ResourceID: parentAgent.ResourceID,
AuthToken: uuid.New(),
AuthInstanceID: parentAgent.AuthInstanceID,
AuthInstanceID: sql.NullString{},
Architecture: req.Architecture,
EnvironmentVariables: pqtype.NullRawMessage{},
OperatingSystem: req.OperatingSystem,
+46
@@ -175,6 +175,52 @@ func TestSubAgentAPI(t *testing.T) {
}
})
// Context: https://github.com/coder/coder/pull/22196
t.Run("CreateSubAgentDoesNotInheritAuthInstanceID", func(t *testing.T) {
t.Parallel()
var (
log = testutil.Logger(t)
clock = quartz.NewMock(t)
db, org = newDatabaseWithOrg(t)
user, agent = newUserWithWorkspaceAgent(t, db, org)
)
// Given: The parent agent has an AuthInstanceID set
ctx := testutil.Context(t, testutil.WaitShort)
parentAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), agent.ID)
require.NoError(t, err)
require.True(t, parentAgent.AuthInstanceID.Valid, "parent agent should have an AuthInstanceID")
require.NotEmpty(t, parentAgent.AuthInstanceID.String)
api := newAgentAPI(t, log, db, clock, user, org, agent)
// When: We create a sub agent
createResp, err := api.CreateSubAgent(ctx, &proto.CreateSubAgentRequest{
Name: "sub-agent",
Directory: "/workspaces/test",
Architecture: "amd64",
OperatingSystem: "linux",
})
require.NoError(t, err)
subAgentID, err := uuid.FromBytes(createResp.Agent.Id)
require.NoError(t, err)
// Then: The sub-agent must NOT re-use the parent's AuthInstanceID.
subAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), subAgentID)
require.NoError(t, err)
assert.False(t, subAgent.AuthInstanceID.Valid, "sub-agent should not have an AuthInstanceID")
assert.Empty(t, subAgent.AuthInstanceID.String, "sub-agent AuthInstanceID string should be empty")
// Double-check: looking up by the parent's instance ID must
// still return the parent, not the sub-agent.
lookedUp, err := db.GetWorkspaceAgentByInstanceID(dbauthz.AsSystemRestricted(ctx), parentAgent.AuthInstanceID.String)
require.NoError(t, err)
assert.Equal(t, parentAgent.ID, lookedUp.ID, "instance ID lookup should still return the parent agent")
})
type expectedAppError struct {
index int32
field string
+25 -137
@@ -786,30 +786,6 @@ func (api *API) taskSend(rw http.ResponseWriter, r *http.Request) {
rw.WriteHeader(http.StatusNoContent)
}
// convertAgentAPIMessagesToLogEntries converts AgentAPI messages to
// TaskLogEntry format.
func convertAgentAPIMessagesToLogEntries(messages []agentapisdk.Message) ([]codersdk.TaskLogEntry, error) {
logs := make([]codersdk.TaskLogEntry, 0, len(messages))
for _, m := range messages {
var typ codersdk.TaskLogType
switch m.Role {
case agentapisdk.RoleUser:
typ = codersdk.TaskLogTypeInput
case agentapisdk.RoleAgent:
typ = codersdk.TaskLogTypeOutput
default:
return nil, xerrors.Errorf("invalid agentapi message role %q", m.Role)
}
logs = append(logs, codersdk.TaskLogEntry{
ID: int(m.Id),
Content: m.Content,
Type: typ,
Time: m.Time,
})
}
return logs, nil
}
// @Summary Get AI task logs
// @ID get-ai-task-logs
// @Security CoderSessionToken
@@ -823,42 +799,8 @@ func (api *API) taskLogs(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
task := httpmw.TaskParam(r)
switch task.Status {
case database.TaskStatusActive:
// Active tasks: fetch live logs from AgentAPI.
out, err := api.fetchLiveTaskLogs(r, task)
if err != nil {
httperror.WriteResponseError(ctx, rw, err)
return
}
httpapi.Write(ctx, rw, http.StatusOK, out)
case database.TaskStatusPaused, database.TaskStatusPending, database.TaskStatusInitializing:
// In pause, pending and initializing states, we attempt to fetch
// the snapshot from database to provide continuity.
out, err := api.fetchSnapshotTaskLogs(ctx, task.ID)
if err != nil {
httperror.WriteResponseError(ctx, rw, err)
return
}
httpapi.Write(ctx, rw, http.StatusOK, out)
default:
// Cases: database.TaskStatusError, database.TaskStatusUnknown.
// - Error: snapshot would be stale from previous pause.
// - Unknown: cannot determine reliable state.
httpapi.Write(ctx, rw, http.StatusConflict, codersdk.Response{
Message: "Cannot fetch logs for task in current state.",
Detail: fmt.Sprintf("Task status is %q.", task.Status),
})
}
}
func (api *API) fetchLiveTaskLogs(r *http.Request, task database.Task) (codersdk.TaskLogsResponse, error) {
var out codersdk.TaskLogsResponse
err := api.authAndDoWithTaskAppClient(r, task, func(ctx context.Context, client *http.Client, appURL *url.URL) error {
if err := api.authAndDoWithTaskAppClient(r, task, func(ctx context.Context, client *http.Client, appURL *url.URL) error {
agentAPIClient, err := agentapisdk.NewClient(appURL.String(), agentapisdk.WithHTTPClient(client))
if err != nil {
return httperror.NewResponseError(http.StatusBadGateway, codersdk.Response{
@@ -875,89 +817,35 @@ func (api *API) fetchLiveTaskLogs(r *http.Request, task database.Task) (codersdk
})
}
logs, err := convertAgentAPIMessagesToLogEntries(messagesResp.Messages)
if err != nil {
return httperror.NewResponseError(http.StatusBadGateway, codersdk.Response{
Message: "Invalid task app response.",
Detail: err.Error(),
logs := make([]codersdk.TaskLogEntry, 0, len(messagesResp.Messages))
for _, m := range messagesResp.Messages {
var typ codersdk.TaskLogType
switch m.Role {
case agentapisdk.RoleUser:
typ = codersdk.TaskLogTypeInput
case agentapisdk.RoleAgent:
typ = codersdk.TaskLogTypeOutput
default:
return httperror.NewResponseError(http.StatusBadGateway, codersdk.Response{
Message: "Invalid task app response message role.",
Detail: fmt.Sprintf(`Expected "user" or "agent", got %q.`, m.Role),
})
}
logs = append(logs, codersdk.TaskLogEntry{
ID: int(m.Id),
Content: m.Content,
Type: typ,
Time: m.Time,
})
}
out = codersdk.TaskLogsResponse{
Logs: logs,
}
out = codersdk.TaskLogsResponse{Logs: logs}
return nil
})
return out, err
}
func (api *API) fetchSnapshotTaskLogs(ctx context.Context, taskID uuid.UUID) (codersdk.TaskLogsResponse, error) {
snapshot, err := api.Database.GetTaskSnapshot(ctx, taskID)
if err != nil {
if httpapi.IsUnauthorizedError(err) {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusNotFound, codersdk.Response{
Message: "Resource not found.",
})
}
if errors.Is(err, sql.ErrNoRows) {
// No snapshot exists yet, return empty logs. Snapshot is true
// because this field indicates whether the data is from the
// live task app (false) or not (true). Since the task is
// paused/initializing/pending, we cannot fetch live logs, so
// snapshot must be true even with no snapshot data.
return codersdk.TaskLogsResponse{
Logs: []codersdk.TaskLogEntry{},
Snapshot: true,
}, nil
}
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task snapshot.",
Detail: err.Error(),
})
}); err != nil {
httperror.WriteResponseError(ctx, rw, err)
return
}
// Unmarshal envelope with pre-populated data field to decode once.
envelope := TaskLogSnapshotEnvelope{
Data: &agentapisdk.GetMessagesResponse{},
}
if err := json.Unmarshal(snapshot.LogSnapshot, &envelope); err != nil {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Internal error decoding task snapshot.",
Detail: err.Error(),
})
}
// Validate snapshot format.
if envelope.Format != "agentapi" {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Unsupported task snapshot format.",
Detail: fmt.Sprintf("Expected format %q, got %q.", "agentapi", envelope.Format),
})
}
// Extract agentapi data from envelope (already decoded into the correct type).
messagesResp, ok := envelope.Data.(*agentapisdk.GetMessagesResponse)
if !ok {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Internal error decoding snapshot data.",
Detail: "Unexpected data type in envelope.",
})
}
// Convert agentapi messages to log entries.
logs, err := convertAgentAPIMessagesToLogEntries(messagesResp.Messages)
if err != nil {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Invalid snapshot data.",
Detail: err.Error(),
})
}
return codersdk.TaskLogsResponse{
Logs: logs,
Snapshot: true,
SnapshotAt: ptr.Ref(snapshot.LogSnapshotCreatedAt),
}, nil
httpapi.Write(ctx, rw, http.StatusOK, out)
}
// authAndDoWithTaskAppClient centralizes the shared logic to:
-261
@@ -12,7 +12,6 @@ import (
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -724,266 +723,6 @@ func TestTasks(t *testing.T) {
})
})
t.Run("LogsWithSnapshot", func(t *testing.T) {
t.Parallel()
ownerClient, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{})
owner := coderdtest.CreateFirstUser(t, ownerClient)
ownerUser, err := ownerClient.User(testutil.Context(t, testutil.WaitMedium), owner.UserID.String())
require.NoError(t, err)
ownerSubject := coderdtest.AuthzUserSubject(ownerUser)
// Create a regular user to test snapshot access.
client, user := coderdtest.CreateAnotherUser(t, ownerClient, owner.OrganizationID)
// Helper to create a task in the desired state.
createTaskInState := func(ctx context.Context, t *testing.T, status database.TaskStatus) uuid.UUID {
ctx = dbauthz.As(ctx, ownerSubject)
builder := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: owner.OrganizationID,
OwnerID: user.ID,
}).
WithTask(database.TaskTable{
OrganizationID: owner.OrganizationID,
OwnerID: user.ID,
}, nil)
switch status {
case database.TaskStatusPending:
builder = builder.Pending()
case database.TaskStatusInitializing:
builder = builder.Starting()
case database.TaskStatusPaused:
builder = builder.Seed(database.WorkspaceBuild{
Transition: database.WorkspaceTransitionStop,
})
case database.TaskStatusError:
// For error state, create a completed build then manipulate app health.
default:
require.Fail(t, "unsupported task status in test helper", "status: %s", status)
}
resp := builder.Do()
taskID := resp.Task.ID
// Post-process by manipulating agent and app state.
if status == database.TaskStatusError {
// First, set agent to ready state so agent_status returns 'active'.
// This ensures the cascade reaches app_status.
err := db.UpdateWorkspaceAgentLifecycleStateByID(ctx, database.UpdateWorkspaceAgentLifecycleStateByIDParams{
ID: resp.Agents[0].ID,
LifecycleState: database.WorkspaceAgentLifecycleStateReady,
})
require.NoError(t, err)
// Then set workspace app health to unhealthy to trigger error state.
apps, err := db.GetWorkspaceAppsByAgentID(ctx, resp.Agents[0].ID)
require.NoError(t, err)
require.Len(t, apps, 1, "expected exactly one app for task")
err = db.UpdateWorkspaceAppHealthByID(ctx, database.UpdateWorkspaceAppHealthByIDParams{
ID: apps[0].ID,
Health: database.WorkspaceAppHealthUnhealthy,
})
require.NoError(t, err)
}
return taskID
}
// Prepare snapshot data used across tests.
snapshotMessages := []agentapisdk.Message{
{
Id: 0,
Content: "First message",
Role: agentapisdk.RoleAgent,
Time: time.Date(2025, 1, 1, 10, 0, 0, 0, time.UTC),
},
{
Id: 1,
Content: "Second message",
Role: agentapisdk.RoleUser,
Time: time.Date(2025, 1, 1, 10, 1, 0, 0, time.UTC),
},
}
snapshotData := agentapisdk.GetMessagesResponse{
Messages: snapshotMessages,
}
envelope := coderd.TaskLogSnapshotEnvelope{
Format: "agentapi",
Data: snapshotData,
}
snapshotJSON, err := json.Marshal(envelope)
require.NoError(t, err)
snapshotTime := time.Date(2025, 1, 1, 10, 5, 0, 0, time.UTC)
// Helper to verify snapshot logs content.
verifySnapshotLogs := func(t *testing.T, got codersdk.TaskLogsResponse) {
t.Helper()
want := codersdk.TaskLogsResponse{
Snapshot: true,
SnapshotAt: &snapshotTime,
Logs: []codersdk.TaskLogEntry{
{
ID: 0,
Type: codersdk.TaskLogTypeOutput,
Content: "First message",
Time: snapshotMessages[0].Time,
},
{
ID: 1,
Type: codersdk.TaskLogTypeInput,
Content: "Second message",
Time: snapshotMessages[1].Time,
},
},
}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("got bad response (-want +got):\n%s", diff)
}
}
t.Run("PendingTaskReturnsSnapshot", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPending)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
logsResp, err := client.TaskLogs(ctx, "me", taskID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
t.Run("InitializingTaskReturnsSnapshot", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusInitializing)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
logsResp, err := client.TaskLogs(ctx, "me", taskID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
t.Run("PausedTaskReturnsSnapshot", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPaused)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
logsResp, err := client.TaskLogs(ctx, "me", taskID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
t.Run("NoSnapshotReturnsEmpty", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPending)
logsResp, err := client.TaskLogs(ctx, "me", taskID)
require.NoError(t, err)
assert.True(t, logsResp.Snapshot)
assert.Nil(t, logsResp.SnapshotAt)
assert.Len(t, logsResp.Logs, 0)
})
t.Run("InvalidSnapshotFormat", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPending)
invalidEnvelope := coderd.TaskLogSnapshotEnvelope{
Format: "unknown-format",
Data: map[string]any{},
}
invalidJSON, err := json.Marshal(invalidEnvelope)
require.NoError(t, err)
err = db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(invalidJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err)
_, err = client.TaskLogs(ctx, "me", taskID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
assert.Equal(t, http.StatusInternalServerError, sdkErr.StatusCode())
assert.Contains(t, sdkErr.Message, "Unsupported task snapshot format")
})
t.Run("MalformedSnapshotData", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPending)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(`{"format":"agentapi","data":"not an object"}`),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err)
_, err = client.TaskLogs(ctx, "me", taskID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
assert.Equal(t, http.StatusInternalServerError, sdkErr.StatusCode())
})
t.Run("ErrorStateReturnsError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusError)
_, err := client.TaskLogs(ctx, "me", taskID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
assert.Equal(t, http.StatusConflict, sdkErr.StatusCode())
assert.Contains(t, sdkErr.Message, "Cannot fetch logs for task in current state")
assert.Contains(t, sdkErr.Detail, "error")
})
})
t.Run("UpdateInput", func(t *testing.T) {
tests := []struct {
name string
-10
@@ -15066,10 +15066,6 @@ const docTemplate = `{
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -18567,12 +18563,6 @@ const docTemplate = `{
"items": {
"$ref": "#/definitions/codersdk.TaskLogEntry"
}
},
"snapshot": {
"type": "boolean"
},
"snapshot_at": {
"type": "string"
}
}
},
-10
@@ -13623,10 +13623,6 @@
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -16983,12 +16979,6 @@
"items": {
"$ref": "#/definitions/codersdk.TaskLogEntry"
}
},
"snapshot": {
"type": "boolean"
},
"snapshot_at": {
"type": "string"
}
}
},
+2 -2
@@ -384,9 +384,9 @@ func TestCSRFExempt(t *testing.T) {
data, _ := io.ReadAll(resp.Body)
_ = resp.Body.Close()
// A StatusBadGateway means Coderd tried to proxy to the agent and failed because the agent
// A StatusNotFound means Coderd tried to proxy to the agent and failed because the agent
// was not there. This means CSRF did not block the app request, which is what we want.
require.Equal(t, http.StatusBadGateway, resp.StatusCode, "status code 500 is CSRF failure")
require.Equal(t, http.StatusNotFound, resp.StatusCode, "status code 500 is CSRF failure")
require.NotContains(t, string(data), "CSRF")
})
}
+14 -2
@@ -106,6 +106,8 @@ import (
"github.com/coder/quartz"
)
const DefaultDERPMeshKey = "test-key"
const defaultTestDaemonName = "test-daemon"
type Options struct {
@@ -510,8 +512,18 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
stunAddresses = options.DeploymentValues.DERP.Server.STUNAddresses.Value()
}
derpServer := derp.NewServer(key.NewNode(), tailnet.Logger(options.Logger.Named("derp").Leveled(slog.LevelDebug)))
derpServer.SetMeshKey("test-key")
const derpMeshKey = "test-key"
// Technically AGPL coderd servers don't set this value, but it doesn't
// change any behavior. It's useful for enterprise tests.
err = options.Database.InsertDERPMeshKey(dbauthz.AsSystemRestricted(ctx), derpMeshKey) //nolint:gocritic // test
if !database.IsUniqueViolation(err, database.UniqueSiteConfigsKeyKey) {
require.NoError(t, err, "insert DERP mesh key")
}
var derpServer *derp.Server
if options.DeploymentValues.DERP.Server.Enable.Value() {
derpServer = derp.NewServer(key.NewNode(), tailnet.Logger(options.Logger.Named("derp").Leveled(slog.LevelDebug)))
derpServer.SetMeshKey(derpMeshKey)
}
// match default with cli default
if options.SSHKeygenAlgorithm == "" {
+13 -4
@@ -31,14 +31,23 @@ import (
previewtypes "github.com/coder/preview/types"
)
// Deprecated: use slice.List
// List is a helper function to reduce boilerplate when converting slices of
// database types to slices of codersdk types.
// Only works if the function takes a single argument.
func List[F any, T any](list []F, convert func(F) T) []T {
return slice.List[F, T](list, convert)
return ListLazy(convert)(list)
}
// Deprecated: use slice.ListLazy
// ListLazy returns the converter function for a list, but does not eval
// the input. Helpful for combining the Map and the List functions.
func ListLazy[F any, T any](convert func(F) T) func(list []F) []T {
return slice.ListLazy[F, T](convert)
return func(list []F) []T {
into := make([]T, 0, len(list))
for _, item := range list {
into = append(into, convert(item))
}
return into
}
}
func APIAllowListTarget(entry rbac.AllowListElement) codersdk.APIAllowListTarget {
-1
@@ -394,7 +394,6 @@ func WorkspaceAgentDevcontainer(t testing.TB, db database.Store, orig database.W
Name: []string{takeFirst(orig.Name, testutil.GetRandomName(t))},
WorkspaceFolder: []string{takeFirst(orig.WorkspaceFolder, "/workspace")},
ConfigPath: []string{takeFirst(orig.ConfigPath, "")},
SubagentID: []uuid.UUID{orig.SubagentID.UUID},
})
require.NoError(t, err, "insert workspace agent devcontainer")
return devcontainers[0]
+1 -5
@@ -2457,8 +2457,7 @@ CREATE TABLE workspace_agent_devcontainers (
created_at timestamp with time zone DEFAULT now() NOT NULL,
workspace_folder text NOT NULL,
config_path text NOT NULL,
name text NOT NULL,
subagent_id uuid
name text NOT NULL
);
COMMENT ON TABLE workspace_agent_devcontainers IS 'Workspace agent devcontainer configuration';
@@ -3738,9 +3737,6 @@ ALTER TABLE ONLY user_status_changes
ALTER TABLE ONLY webpush_subscriptions
ADD CONSTRAINT webpush_subscriptions_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE ONLY workspace_agent_devcontainers
ADD CONSTRAINT workspace_agent_devcontainers_subagent_id_fkey FOREIGN KEY (subagent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ALTER TABLE ONLY workspace_agent_devcontainers
ADD CONSTRAINT workspace_agent_devcontainers_workspace_agent_id_fkey FOREIGN KEY (workspace_agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
@@ -72,7 +72,6 @@ const (
ForeignKeyUserSecretsUserID ForeignKeyConstraint = "user_secrets_user_id_fkey" // ALTER TABLE ONLY user_secrets ADD CONSTRAINT user_secrets_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyUserStatusChangesUserID ForeignKeyConstraint = "user_status_changes_user_id_fkey" // ALTER TABLE ONLY user_status_changes ADD CONSTRAINT user_status_changes_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id);
ForeignKeyWebpushSubscriptionsUserID ForeignKeyConstraint = "webpush_subscriptions_user_id_fkey" // ALTER TABLE ONLY webpush_subscriptions ADD CONSTRAINT webpush_subscriptions_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyWorkspaceAgentDevcontainersSubagentID ForeignKeyConstraint = "workspace_agent_devcontainers_subagent_id_fkey" // ALTER TABLE ONLY workspace_agent_devcontainers ADD CONSTRAINT workspace_agent_devcontainers_subagent_id_fkey FOREIGN KEY (subagent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ForeignKeyWorkspaceAgentDevcontainersWorkspaceAgentID ForeignKeyConstraint = "workspace_agent_devcontainers_workspace_agent_id_fkey" // ALTER TABLE ONLY workspace_agent_devcontainers ADD CONSTRAINT workspace_agent_devcontainers_workspace_agent_id_fkey FOREIGN KEY (workspace_agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ForeignKeyWorkspaceAgentLogSourcesWorkspaceAgentID ForeignKeyConstraint = "workspace_agent_log_sources_workspace_agent_id_fkey" // ALTER TABLE ONLY workspace_agent_log_sources ADD CONSTRAINT workspace_agent_log_sources_workspace_agent_id_fkey FOREIGN KEY (workspace_agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ForeignKeyWorkspaceAgentMemoryResourceMonitorsAgentID ForeignKeyConstraint = "workspace_agent_memory_resource_monitors_agent_id_fkey" // ALTER TABLE ONLY workspace_agent_memory_resource_monitors ADD CONSTRAINT workspace_agent_memory_resource_monitors_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
@@ -1,2 +0,0 @@
ALTER TABLE workspace_agent_devcontainers
DROP COLUMN subagent_id;
@@ -1,2 +0,0 @@
ALTER TABLE workspace_agent_devcontainers
ADD COLUMN subagent_id UUID REFERENCES workspace_agents(id) ON DELETE CASCADE;
-1
@@ -440,7 +440,6 @@ func (q *sqlQuerier) GetAuthorizedUsers(ctx context.Context, arg GetUsersParams,
rows, err := q.db.QueryContext(ctx, query,
arg.AfterID,
arg.Search,
arg.Name,
pq.Array(arg.Status),
pq.Array(arg.RbacRole),
arg.LastSeenBefore,
+1 -2
@@ -4771,8 +4771,7 @@ type WorkspaceAgentDevcontainer struct {
// Path to devcontainer.json.
ConfigPath string `db:"config_path" json:"config_path"`
// The name of the Dev Container.
Name string `db:"name" json:"name"`
SubagentID uuid.NullUUID `db:"subagent_id" json:"subagent_id"`
Name string `db:"name" json:"name"`
}
type WorkspaceAgentLog struct {
+50 -102
@@ -7,9 +7,7 @@ import (
"errors"
"fmt"
"net"
"slices"
"sort"
"strings"
"testing"
"time"
@@ -6273,6 +6271,56 @@ func TestGetWorkspaceAgentsByParentID(t *testing.T) {
})
}
func TestGetWorkspaceAgentByInstanceID(t *testing.T) {
t.Parallel()
// Context: https://github.com/coder/coder/pull/22196
t.Run("DoesNotReturnSubAgents", func(t *testing.T) {
t.Parallel()
// Given: A parent workspace agent with an AuthInstanceID and a
// sub-agent that shares the same AuthInstanceID.
db, _ := dbtestutil.NewDB(t)
org := dbgen.Organization(t, db, database.Organization{})
job := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{
Type: database.ProvisionerJobTypeTemplateVersionImport,
OrganizationID: org.ID,
})
resource := dbgen.WorkspaceResource(t, db, database.WorkspaceResource{
JobID: job.ID,
})
authInstanceID := fmt.Sprintf("instance-%s-%d", t.Name(), time.Now().UnixNano())
parentAgent := dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ResourceID: resource.ID,
AuthInstanceID: sql.NullString{
String: authInstanceID,
Valid: true,
},
})
// Create a sub-agent with the same AuthInstanceID (simulating
// the old behavior before the fix).
_ = dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ParentID: uuid.NullUUID{UUID: parentAgent.ID, Valid: true},
ResourceID: resource.ID,
AuthInstanceID: sql.NullString{
String: authInstanceID,
Valid: true,
},
})
ctx := testutil.Context(t, testutil.WaitShort)
// When: We look up the agent by instance ID.
agent, err := db.GetWorkspaceAgentByInstanceID(ctx, authInstanceID)
require.NoError(t, err)
// Then: The result must be the parent agent, not the sub-agent.
assert.Equal(t, parentAgent.ID, agent.ID, "instance ID lookup should return the parent agent, not a sub-agent")
assert.False(t, agent.ParentID.Valid, "returned agent should not have a parent (should be the parent itself)")
})
}
func requireUsersMatch(t testing.TB, expected []database.User, found []database.GetUsersRow, msg string) {
t.Helper()
require.ElementsMatch(t, expected, database.ConvertUserRows(found), msg)
@@ -8484,103 +8532,3 @@ func TestGetAuthenticatedWorkspaceAgentAndBuildByAuthToken_ShutdownScripts(t *te
require.ErrorIs(t, err, sql.ErrNoRows, "agent should not authenticate when latest build is not STOP")
})
}
// Our `InsertWorkspaceAgentDevcontainers` query's subagent_id parameter should ideally be
// `[]uuid.NullUUID`, but sqlc infers it as `[]uuid.UUID`. To ensure we never insert a `uuid.Nil`,
// the query inserts NULL when passed `uuid.Nil`. This test guards that behavior against regression.
func TestInsertWorkspaceAgentDevcontainers(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
validSubagent []bool
}{
{"BothValid", []bool{true, true}},
{"FirstValidSecondInvalid", []bool{true, false}},
{"FirstInvalidSecondValid", []bool{false, true}},
{"BothInvalid", []bool{false, false}},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var (
db, _ = dbtestutil.NewDB(t)
org = dbgen.Organization(t, db, database.Organization{})
job = dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{
Type: database.ProvisionerJobTypeTemplateVersionImport,
OrganizationID: org.ID,
})
resource = dbgen.WorkspaceResource(t, db, database.WorkspaceResource{JobID: job.ID})
agent = dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{ResourceID: resource.ID})
)
ids := make([]uuid.UUID, len(tc.validSubagent))
names := make([]string, len(tc.validSubagent))
workspaceFolders := make([]string, len(tc.validSubagent))
configPaths := make([]string, len(tc.validSubagent))
subagentIDs := make([]uuid.UUID, len(tc.validSubagent))
for i, valid := range tc.validSubagent {
ids[i] = uuid.New()
names[i] = fmt.Sprintf("test-devcontainer-%d", i)
workspaceFolders[i] = fmt.Sprintf("/workspace%d", i)
configPaths[i] = fmt.Sprintf("/workspace%d/.devcontainer/devcontainer.json", i)
if valid {
subagentIDs[i] = dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ResourceID: resource.ID,
ParentID: uuid.NullUUID{UUID: agent.ID, Valid: true},
}).ID
} else {
subagentIDs[i] = uuid.Nil
}
}
ctx := testutil.Context(t, testutil.WaitShort)
// Given: We insert multiple devcontainer records.
devcontainers, err := db.InsertWorkspaceAgentDevcontainers(ctx, database.InsertWorkspaceAgentDevcontainersParams{
WorkspaceAgentID: agent.ID,
CreatedAt: dbtime.Now(),
ID: ids,
Name: names,
WorkspaceFolder: workspaceFolders,
ConfigPath: configPaths,
SubagentID: subagentIDs,
})
require.NoError(t, err)
require.Len(t, devcontainers, len(tc.validSubagent))
// Then: Verify each devcontainer has the correct SubagentID validity.
// - When we pass `uuid.Nil`, we get a `uuid.NullUUID{Valid: false}`
// - When we pass a valid UUID, we get a `uuid.NullUUID{Valid: true}`
for i, valid := range tc.validSubagent {
require.Equal(t, valid, devcontainers[i].SubagentID.Valid, "devcontainer %d: subagent_id validity mismatch", i)
if valid {
require.Equal(t, subagentIDs[i], devcontainers[i].SubagentID.UUID, "devcontainer %d: subagent_id UUID mismatch", i)
}
}
// Perform the same check on data returned by
// `GetWorkspaceAgentDevcontainersByAgentID` to ensure the fix holds at
// the storage layer, not just in the insert query.
fetched, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, agent.ID)
require.NoError(t, err)
require.Len(t, fetched, len(tc.validSubagent))
// Sort fetched by name to ensure consistent ordering for comparison.
slices.SortFunc(fetched, func(a, b database.WorkspaceAgentDevcontainer) int {
return strings.Compare(a.Name, b.Name)
})
for i, valid := range tc.validSubagent {
require.Equal(t, valid, fetched[i].SubagentID.Valid, "fetched devcontainer %d: subagent_id validity mismatch", i)
if valid {
require.Equal(t, subagentIDs[i], fetched[i].SubagentID.UUID, "fetched devcontainer %d: subagent_id UUID mismatch", i)
}
}
})
}
}
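The `NULLIF` trick this test exercises can be sketched without a database. The following is a stdlib-only illustration of the mapping, assuming only that the zero-UUID literal in the query matches Postgres' textual rendering of `uuid.Nil`; it is not the actual query path:

```go
package main

import "fmt"

// zeroUUID is the textual form of uuid.Nil, which the query's
// NULLIF(..., '00000000-0000-0000-0000-000000000000') clause matches.
const zeroUUID = "00000000-0000-0000-0000-000000000000"

// nullifZero mimics the query's NULLIF: a zero UUID becomes "not valid"
// (SQL NULL), while any other value passes through as a valid reference.
func nullifZero(id string) (value string, valid bool) {
	if id == zeroUUID {
		return "", false
	}
	return id, true
}

func main() {
	for _, id := range []string{zeroUUID, "3b2ded69-0000-4000-8000-000000000001"} {
		v, ok := nullifZero(id)
		fmt.Printf("in=%s -> valid=%v value=%q\n", id, ok, v)
	}
}
```

This mirrors what the test asserts end to end: `uuid.Nil` in the input array comes back as `uuid.NullUUID{Valid: false}`, everything else as `Valid: true`.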
+26 -37
@@ -16403,7 +16403,7 @@ WHERE
ELSE true
END
-- Start filters
-- Filter by email or username
-- Filter by name, email or username
AND CASE
WHEN $2 :: text != '' THEN (
email ILIKE concat('%', $2, '%')
@@ -16411,64 +16411,58 @@ WHERE
)
ELSE true
END
-- Filter by name (display name)
AND CASE
WHEN $3 :: text != '' THEN
name ILIKE concat('%', $3, '%')
ELSE true
END
-- Filter by status
AND CASE
-- @status needs to be text because it can be empty; if it were a
-- user_status enum, it could not be.
WHEN cardinality($4 :: user_status[]) > 0 THEN
status = ANY($4 :: user_status[])
WHEN cardinality($3 :: user_status[]) > 0 THEN
status = ANY($3 :: user_status[])
ELSE true
END
-- Filter by rbac_roles
AND CASE
-- @rbac_role allows filtering by rbac roles. If 'member' is included, show everyone, as
-- everyone is a member.
WHEN cardinality($5 :: text[]) > 0 AND 'member' != ANY($5 :: text[]) THEN
rbac_roles && $5 :: text[]
WHEN cardinality($4 :: text[]) > 0 AND 'member' != ANY($4 :: text[]) THEN
rbac_roles && $4 :: text[]
ELSE true
END
-- Filter by last_seen
AND CASE
WHEN $6 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
last_seen_at <= $6
WHEN $5 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
last_seen_at <= $5
ELSE true
END
AND CASE
WHEN $7 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
last_seen_at >= $7
WHEN $6 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
last_seen_at >= $6
ELSE true
END
-- Filter by created_at
AND CASE
WHEN $8 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
created_at <= $8
WHEN $7 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
created_at <= $7
ELSE true
END
AND CASE
WHEN $9 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
created_at >= $9
WHEN $8 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
created_at >= $8
ELSE true
END
AND CASE
WHEN $10::bool THEN TRUE
WHEN $9::bool THEN TRUE
ELSE
is_system = false
END
AND CASE
WHEN $11 :: bigint != 0 THEN
github_com_user_id = $11
WHEN $10 :: bigint != 0 THEN
github_com_user_id = $10
ELSE true
END
-- Filter by login_type
AND CASE
WHEN cardinality($12 :: login_type[]) > 0 THEN
login_type = ANY($12 :: login_type[])
WHEN cardinality($11 :: login_type[]) > 0 THEN
login_type = ANY($11 :: login_type[])
ELSE true
END
-- End of filters
@@ -16477,16 +16471,15 @@ WHERE
-- @authorize_filter
ORDER BY
-- Deterministic and consistent ordering of all users. This is to ensure consistent pagination.
LOWER(username) ASC OFFSET $13
LOWER(username) ASC OFFSET $12
LIMIT
-- A null limit means "no limit", so 0 means return all
NULLIF($14 :: int, 0)
NULLIF($13 :: int, 0)
`
type GetUsersParams struct {
AfterID uuid.UUID `db:"after_id" json:"after_id"`
Search string `db:"search" json:"search"`
Name string `db:"name" json:"name"`
Status []UserStatus `db:"status" json:"status"`
RbacRole []string `db:"rbac_role" json:"rbac_role"`
LastSeenBefore time.Time `db:"last_seen_before" json:"last_seen_before"`
@@ -16527,7 +16520,6 @@ func (q *sqlQuerier) GetUsers(ctx context.Context, arg GetUsersParams) ([]GetUse
rows, err := q.db.QueryContext(ctx, getUsers,
arg.AfterID,
arg.Search,
arg.Name,
pq.Array(arg.Status),
pq.Array(arg.RbacRole),
arg.LastSeenBefore,
@@ -17226,7 +17218,7 @@ func (q *sqlQuerier) ValidateUserIDs(ctx context.Context, userIds []uuid.UUID) (
const getWorkspaceAgentDevcontainersByAgentID = `-- name: GetWorkspaceAgentDevcontainersByAgentID :many
SELECT
id, workspace_agent_id, created_at, workspace_folder, config_path, name, subagent_id
id, workspace_agent_id, created_at, workspace_folder, config_path, name
FROM
workspace_agent_devcontainers
WHERE
@@ -17251,7 +17243,6 @@ func (q *sqlQuerier) GetWorkspaceAgentDevcontainersByAgentID(ctx context.Context
&i.WorkspaceFolder,
&i.ConfigPath,
&i.Name,
&i.SubagentID,
); err != nil {
return nil, err
}
@@ -17268,16 +17259,15 @@ func (q *sqlQuerier) GetWorkspaceAgentDevcontainersByAgentID(ctx context.Context
const insertWorkspaceAgentDevcontainers = `-- name: InsertWorkspaceAgentDevcontainers :many
INSERT INTO
workspace_agent_devcontainers (workspace_agent_id, created_at, id, name, workspace_folder, config_path, subagent_id)
workspace_agent_devcontainers (workspace_agent_id, created_at, id, name, workspace_folder, config_path)
SELECT
$1::uuid AS workspace_agent_id,
$2::timestamptz AS created_at,
unnest($3::uuid[]) AS id,
unnest($4::text[]) AS name,
unnest($5::text[]) AS workspace_folder,
unnest($6::text[]) AS config_path,
NULLIF(unnest($7::uuid[]), '00000000-0000-0000-0000-000000000000')::uuid AS subagent_id
RETURNING workspace_agent_devcontainers.id, workspace_agent_devcontainers.workspace_agent_id, workspace_agent_devcontainers.created_at, workspace_agent_devcontainers.workspace_folder, workspace_agent_devcontainers.config_path, workspace_agent_devcontainers.name, workspace_agent_devcontainers.subagent_id
unnest($6::text[]) AS config_path
RETURNING workspace_agent_devcontainers.id, workspace_agent_devcontainers.workspace_agent_id, workspace_agent_devcontainers.created_at, workspace_agent_devcontainers.workspace_folder, workspace_agent_devcontainers.config_path, workspace_agent_devcontainers.name
`
type InsertWorkspaceAgentDevcontainersParams struct {
@@ -17287,7 +17277,6 @@ type InsertWorkspaceAgentDevcontainersParams struct {
Name []string `db:"name" json:"name"`
WorkspaceFolder []string `db:"workspace_folder" json:"workspace_folder"`
ConfigPath []string `db:"config_path" json:"config_path"`
SubagentID []uuid.UUID `db:"subagent_id" json:"subagent_id"`
}
func (q *sqlQuerier) InsertWorkspaceAgentDevcontainers(ctx context.Context, arg InsertWorkspaceAgentDevcontainersParams) ([]WorkspaceAgentDevcontainer, error) {
@@ -17298,7 +17287,6 @@ func (q *sqlQuerier) InsertWorkspaceAgentDevcontainers(ctx context.Context, arg
pq.Array(arg.Name),
pq.Array(arg.WorkspaceFolder),
pq.Array(arg.ConfigPath),
pq.Array(arg.SubagentID),
)
if err != nil {
return nil, err
@@ -17314,7 +17302,6 @@ func (q *sqlQuerier) InsertWorkspaceAgentDevcontainers(ctx context.Context, arg
&i.WorkspaceFolder,
&i.ConfigPath,
&i.Name,
&i.SubagentID,
); err != nil {
return nil, err
}
@@ -18239,6 +18226,8 @@ WHERE
auth_instance_id = $1 :: TEXT
-- Filter out deleted sub agents.
AND deleted = FALSE
-- Filter out sub agents, they do not authenticate with auth_instance_id.
AND parent_id IS NULL
ORDER BY
created_at DESC
`
+1 -7
@@ -247,7 +247,7 @@ WHERE
ELSE true
END
-- Start filters
-- Filter by email or username
-- Filter by name, email or username
AND CASE
WHEN @search :: text != '' THEN (
email ILIKE concat('%', @search, '%')
@@ -255,12 +255,6 @@ WHERE
)
ELSE true
END
-- Filter by name (display name)
AND CASE
WHEN @name :: text != '' THEN
name ILIKE concat('%', @name, '%')
ELSE true
END
-- Filter by status
AND CASE
-- @status needs to be text because it can be empty; if it were a
@@ -1,14 +1,13 @@
-- name: InsertWorkspaceAgentDevcontainers :many
INSERT INTO
workspace_agent_devcontainers (workspace_agent_id, created_at, id, name, workspace_folder, config_path, subagent_id)
workspace_agent_devcontainers (workspace_agent_id, created_at, id, name, workspace_folder, config_path)
SELECT
@workspace_agent_id::uuid AS workspace_agent_id,
@created_at::timestamptz AS created_at,
unnest(@id::uuid[]) AS id,
unnest(@name::text[]) AS name,
unnest(@workspace_folder::text[]) AS workspace_folder,
unnest(@config_path::text[]) AS config_path,
NULLIF(unnest(@subagent_id::uuid[]), '00000000-0000-0000-0000-000000000000')::uuid AS subagent_id
unnest(@config_path::text[]) AS config_path
RETURNING workspace_agent_devcontainers.*;
-- name: GetWorkspaceAgentDevcontainersByAgentID :many
@@ -17,6 +17,8 @@ WHERE
auth_instance_id = @auth_instance_id :: TEXT
-- Filter out deleted sub agents.
AND deleted = FALSE
-- Filter out sub agents, they do not authenticate with auth_instance_id.
AND parent_id IS NULL
ORDER BY
created_at DESC;
-6
@@ -162,12 +162,6 @@ func (l *Set) Errors() []string {
return slices.Clone(l.entitlements.Errors)
}
func (l *Set) Warnings() []string {
l.entitlementsMu.RLock()
defer l.entitlementsMu.RUnlock()
return slices.Clone(l.entitlements.Warnings)
}
func (l *Set) HasLicense() bool {
l.entitlementsMu.RLock()
defer l.entitlementsMu.RUnlock()
-6
@@ -503,12 +503,6 @@ func OneWayWebSocketEventSender(log slog.Logger) func(rw http.ResponseWriter, r
// WriteOAuth2Error writes an OAuth2-compliant error response per RFC 6749.
// This should be used for all OAuth2 endpoints (/oauth2/*) to ensure compliance.
func WriteOAuth2Error(ctx context.Context, rw http.ResponseWriter, status int, errorCode codersdk.OAuth2ErrorCode, description string) {
// RFC 6749 §5.2: invalid_client SHOULD use 401 and MUST include a
// WWW-Authenticate response header.
if status == http.StatusUnauthorized && errorCode == codersdk.OAuth2ErrorCodeInvalidClient {
rw.Header().Set("WWW-Authenticate", `Basic realm="coder"`)
}
Write(ctx, rw, status, codersdk.OAuth2Error{
Error: errorCode,
ErrorDescription: description,
-7
@@ -23,7 +23,6 @@ import (
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpmw/loggermw"
"github.com/coder/coder/v2/coderd/promoauth"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/rolestore"
@@ -245,12 +244,6 @@ func ExtractAPIKey(rw http.ResponseWriter, r *http.Request, cfg ExtractAPIKeyCon
return optionalWrite(http.StatusUnauthorized, resp)
}
// Log the API key ID for all requests that have a valid key format and secret,
// regardless of whether subsequent validation (expiry, user status, etc.) succeeds.
if rl := loggermw.RequestLoggerFromContext(ctx); rl != nil {
rl.WithFields(slog.F("api_key_id", key.ID))
}
now := dbtime.Now()
if key.ExpiresAt.Before(now) {
return optionalWrite(http.StatusUnauthorized, codersdk.Response{
-79
@@ -16,11 +16,9 @@ import (
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/mock/gomock"
"golang.org/x/exp/slices"
"golang.org/x/oauth2"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/apikey"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
@@ -29,8 +27,6 @@ import (
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpmw"
"github.com/coder/coder/v2/coderd/httpmw/loggermw"
"github.com/coder/coder/v2/coderd/httpmw/loggermw/loggermock"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/cryptorand"
@@ -995,79 +991,4 @@ func TestAPIKey(t *testing.T) {
defer res.Body.Close()
require.Equal(t, http.StatusOK, res.StatusCode)
})
t.Run("LogsAPIKeyID", func(t *testing.T) {
t.Parallel()
tests := []struct {
name string
expired bool
expectedStatus int
}{
{
name: "OnSuccess",
expired: false,
expectedStatus: http.StatusOK,
},
{
name: "OnFailure",
expired: true,
expectedStatus: http.StatusUnauthorized,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var (
db, _ = dbtestutil.NewDB(t)
user = dbgen.User(t, db, database.User{})
expiry = dbtime.Now().AddDate(0, 0, 1)
)
if tc.expired {
expiry = dbtime.Now().AddDate(0, 0, -1)
}
sentAPIKey, token := dbgen.APIKey(t, db, database.APIKey{
UserID: user.ID,
ExpiresAt: expiry,
})
var (
ctrl = gomock.NewController(t)
mockLogger = loggermock.NewMockRequestLogger(ctrl)
r = httptest.NewRequest("GET", "/", nil)
rw = httptest.NewRecorder()
)
r.Header.Set(codersdk.SessionTokenHeader, token)
// Expect WithAuthContext to be called (from dbauthz.As).
mockLogger.EXPECT().WithAuthContext(gomock.Any()).AnyTimes()
// Expect WithFields to be called with api_key_id field regardless of success/failure.
mockLogger.EXPECT().WithFields(
slog.F("api_key_id", sentAPIKey.ID),
).Times(1)
// Add the mock logger to the context.
ctx := loggermw.WithRequestLogger(r.Context(), mockLogger)
r = r.WithContext(ctx)
httpmw.ExtractAPIKeyMW(httpmw.ExtractAPIKeyConfig{
DB: db,
RedirectToLogin: false,
})(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
if tc.expired {
t.Error("handler should not be called on auth failure")
}
httpapi.Write(r.Context(), rw, http.StatusOK, codersdk.Response{
Message: "It worked!",
})
})).ServeHTTP(rw, r)
res := rw.Result()
defer res.Body.Close()
require.Equal(t, tc.expectedStatus, res.StatusCode)
})
}
})
}
-7
@@ -329,13 +329,6 @@ func extractOAuth2ProviderAppBase(db database.Store, errWriter errorWriter) func
paramAppID = r.Form.Get("client_id")
}
}
if paramAppID == "" {
// RFC 6749 §2.3.1: confidential clients may authenticate via
// HTTP Basic where the username is the client_id.
if user, _, ok := r.BasicAuth(); ok && user != "" {
paramAppID = user
}
}
if paramAppID == "" {
errWriter.writeMissingClientID(ctx, rw)
return
+9
@@ -238,9 +238,18 @@ func (api *API) paginatedMembers(rw http.ResponseWriter, r *http.Request) {
memberRows = append(memberRows, row)
}
if len(paginatedMemberRows) == 0 {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.PaginatedMembersResponse{
Members: []codersdk.OrganizationMemberWithUserData{},
Count: 0,
})
return
}
members, err := convertOrganizationMembersWithUserData(ctx, api.Database, memberRows)
if err != nil {
httpapi.InternalServerError(rw, err)
return
}
resp := codersdk.PaginatedMembersResponse{
+7 -17
@@ -156,11 +156,11 @@ func (s *SMTPHandler) dispatch(subject, htmlBody, plainBody, to string) Delivery
}
// Sender identification.
envelopeFrom, headerFrom, err := s.validateFromAddr(s.cfg.From.String())
from, err := s.validateFromAddr(s.cfg.From.String())
if err != nil {
return false, xerrors.Errorf("'from' validation: %w", err)
}
err = c.Mail(envelopeFrom, &smtp.MailOptions{})
err = c.Mail(from, &smtp.MailOptions{})
if err != nil {
// This is retryable because the server may be temporarily down.
return true, xerrors.Errorf("sender identification: %w", err)
@@ -200,7 +200,7 @@ func (s *SMTPHandler) dispatch(subject, htmlBody, plainBody, to string) Delivery
msg := &bytes.Buffer{}
multipartBuffer := &bytes.Buffer{}
multipartWriter := multipart.NewWriter(multipartBuffer)
_, _ = fmt.Fprintf(msg, "From: %s\r\n", headerFrom)
_, _ = fmt.Fprintf(msg, "From: %s\r\n", from)
_, _ = fmt.Fprintf(msg, "To: %s\r\n", strings.Join(recipients, ", "))
_, _ = fmt.Fprintf(msg, "Subject: %s\r\n", subject)
_, _ = fmt.Fprintf(msg, "Message-Id: %s@%s\r\n", msgID, s.hostname())
@@ -486,25 +486,15 @@ func (s *SMTPHandler) auth(ctx context.Context, mechs string) (sasl.Client, erro
return nil, errs
}
// validateFromAddr parses the "from" address and returns two values:
// 1. envelopeFrom: The bare email address for use in the SMTP MAIL FROM command.
// 2. headerFrom: The original address (possibly including display name) for use in the email header.
//
// This separation is necessary because SMTP envelope addresses (used in MAIL FROM
// and RCPT TO commands) must be bare email addresses, while email headers can
// include display names (e.g., "John Doe <john@example.com>").
func (*SMTPHandler) validateFromAddr(from string) (envelopeFrom, headerFrom string, err error) {
func (*SMTPHandler) validateFromAddr(from string) (string, error) {
addrs, err := mail.ParseAddressList(from)
if err != nil {
return "", "", xerrors.Errorf("parse 'from' address: %w", err)
return "", xerrors.Errorf("parse 'from' address: %w", err)
}
if len(addrs) != 1 {
return "", "", ErrValidationNoFromAddress
return "", ErrValidationNoFromAddress
}
// Use the parsed email address for the SMTP envelope (MAIL FROM command),
// but preserve the original string for the email header (which may include
// a display name).
return addrs[0].Address, from, nil
return from, nil
}
func (s *SMTPHandler) validateToAddrs(to string) ([]string, error) {
@@ -1,81 +0,0 @@
package dispatch
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestValidateFromAddr(t *testing.T) {
t.Parallel()
tests := []struct {
name string
input string
expectedEnvelope string
expectedHeader string
expectedErrContain string
}{
{
name: "bare email address",
input: "system@coder.com",
expectedEnvelope: "system@coder.com",
expectedHeader: "system@coder.com",
},
{
name: "email with display name",
input: "Coder System <system@coder.com>",
expectedEnvelope: "system@coder.com",
expectedHeader: "Coder System <system@coder.com>",
},
{
name: "email with quoted display name",
input: `"Coder Notifications" <notifications@coder.com>`,
expectedEnvelope: "notifications@coder.com",
expectedHeader: `"Coder Notifications" <notifications@coder.com>`,
},
{
name: "email with special characters in display name",
input: `"O'Brien, John" <john@example.com>`,
expectedEnvelope: "john@example.com",
expectedHeader: `"O'Brien, John" <john@example.com>`,
},
{
name: "invalid email address",
input: "not-an-email",
expectedErrContain: "parse 'from' address",
},
{
name: "empty string",
input: "",
expectedErrContain: "parse 'from' address",
},
{
name: "multiple addresses",
input: "a@example.com, b@example.com",
expectedErrContain: "'from' address not defined",
},
}
handler := &SMTPHandler{}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
envelope, header, err := handler.validateFromAddr(tc.input)
if tc.expectedErrContain != "" {
require.Error(t, err)
require.ErrorContains(t, err, tc.expectedErrContain)
return
}
require.NoError(t, err)
require.Equal(t, tc.expectedEnvelope, envelope,
"envelope address should be the bare email")
require.Equal(t, tc.expectedHeader, header,
"header address should preserve the original input")
})
}
}
-121
@@ -515,124 +515,3 @@ func TestSMTP(t *testing.T) {
})
}
}
// TestSMTPEnvelopeAndHeaders verifies that SMTP envelope addresses (used in
// MAIL FROM and RCPT TO commands) contain only bare email addresses, while
// email headers preserve the full address including display names.
//
// This is important because RFC 5321 requires envelope addresses to be bare
// emails, while RFC 5322 allows headers to include display names.
//
// See: https://github.com/coder/coder/issues/20727
func TestSMTPEnvelopeAndHeaders(t *testing.T) {
t.Parallel()
const (
hello = "localhost"
to = "bob@bob.com"
subject = "This is the subject"
body = "This is the body"
)
tests := []struct {
name string
fromConfig string // The configured From address (may include display name)
expectedEnvFrom string // Expected envelope MAIL FROM (bare email)
expectedHeaderFrom string // Expected From header (preserves display name)
}{
{
name: "bare email address",
fromConfig: "system@coder.com",
expectedEnvFrom: "system@coder.com",
expectedHeaderFrom: "system@coder.com",
},
{
name: "email with display name",
fromConfig: "Coder System <system@coder.com>",
expectedEnvFrom: "system@coder.com",
expectedHeaderFrom: "Coder System <system@coder.com>",
},
{
name: "email with quoted display name",
fromConfig: `"Coder Notifications" <notifications@coder.com>`,
expectedEnvFrom: "notifications@coder.com",
expectedHeaderFrom: `"Coder Notifications" <notifications@coder.com>`,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
cfg := codersdk.NotificationsEmailConfig{
Hello: serpent.String(hello),
From: serpent.String(tc.fromConfig),
}
backend := smtptest.NewBackend(smtptest.Config{
AuthMechanisms: []string{},
})
srv, listen, err := smtptest.CreateMockSMTPServer(backend, false)
require.NoError(t, err)
t.Cleanup(func() {
assert.ErrorIs(t, srv.Shutdown(ctx), smtp.ErrServerClosed)
})
var hp serpent.HostPort
require.NoError(t, hp.Set(listen.Addr().String()))
cfg.Smarthost = serpent.String(hp.String())
handler := dispatch.NewSMTPHandler(cfg, logger.Named("smtp"))
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
assert.NoError(t, srv.Serve(listen))
}()
require.Eventually(t, func() bool {
cl, err := smtptest.PingClient(listen, false, false)
if err != nil {
return false
}
_ = cl.Close()
return true
}, testutil.WaitShort, testutil.IntervalFast)
payload := types.MessagePayload{
Version: "1.0",
UserEmail: to,
Labels: make(map[string]string),
}
dispatchFn, err := handler.Dispatcher(payload, subject, body, helpers())
require.NoError(t, err)
msgID := uuid.New()
retryable, err := dispatchFn(ctx, msgID)
require.NoError(t, err)
require.False(t, retryable)
msg := backend.LastMessage()
require.NotNil(t, msg)
// Verify envelope address (MAIL FROM) contains only the bare email.
require.Equal(t, tc.expectedEnvFrom, msg.From,
"SMTP envelope MAIL FROM should contain only the bare email address")
// Verify header From preserves the display name.
require.Contains(t, msg.Contents, fmt.Sprintf("From: %s\r\n", tc.expectedHeaderFrom),
"Email From header should preserve the display name if present")
require.NoError(t, srv.Shutdown(ctx))
wg.Wait()
})
}
}
+1 -1
@@ -23,7 +23,7 @@ func GetAuthorizationServerMetadata(accessURL *url.URL) http.HandlerFunc {
GrantTypesSupported: []codersdk.OAuth2ProviderGrantType{codersdk.OAuth2ProviderGrantTypeAuthorizationCode, codersdk.OAuth2ProviderGrantTypeRefreshToken},
CodeChallengeMethodsSupported: []codersdk.OAuth2PKCECodeChallengeMethod{codersdk.OAuth2PKCECodeChallengeMethodS256},
ScopesSupported: rbac.ExternalScopeNames(),
TokenEndpointAuthMethodsSupported: []codersdk.OAuth2TokenEndpointAuthMethod{codersdk.OAuth2TokenEndpointAuthMethodClientSecretBasic, codersdk.OAuth2TokenEndpointAuthMethodClientSecretPost},
TokenEndpointAuthMethodsSupported: []codersdk.OAuth2TokenEndpointAuthMethod{codersdk.OAuth2TokenEndpointAuthMethodClientSecretPost},
}
httpapi.Write(ctx, rw, http.StatusOK, metadata)
}
@@ -1,20 +1,13 @@
package oauth2providertest_test
import (
"encoding/json"
"net/http"
"net/url"
"strings"
"testing"
"time"
"github.com/stretchr/testify/require"
"golang.org/x/oauth2"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/oauth2provider/oauth2providertest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
)
func TestOAuth2AuthorizationServerMetadata(t *testing.T) {
@@ -49,12 +42,6 @@ func TestOAuth2AuthorizationServerMetadata(t *testing.T) {
require.True(t, ok, "code_challenge_methods_supported should be an array")
require.Contains(t, challengeMethods, "S256", "should support S256 PKCE method")
// Verify token endpoint auth methods
authMethods, ok := metadata["token_endpoint_auth_methods_supported"].([]any)
require.True(t, ok, "token_endpoint_auth_methods_supported should be an array")
require.Contains(t, authMethods, "client_secret_basic", "should support client_secret_basic token auth")
require.Contains(t, authMethods, "client_secret_post", "should support client_secret_post token auth")
// Verify endpoints are proper URLs
authEndpoint, ok := metadata["authorization_endpoint"].(string)
require.True(t, ok, "authorization_endpoint should be a string")
@@ -199,109 +186,6 @@ func TestOAuth2WithoutPKCE(t *testing.T) {
require.NotEmpty(t, token.RefreshToken, "should receive refresh token")
}
func TestOAuth2TokenExchangeClientSecretBasic(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{
IncludeProvisionerDaemon: false,
})
_ = coderdtest.CreateFirstUser(t, client)
app, clientSecret := oauth2providertest.CreateTestOAuth2App(t, client)
t.Cleanup(func() {
oauth2providertest.CleanupOAuth2App(t, client, app.ID)
})
state := oauth2providertest.GenerateState(t)
authParams := oauth2providertest.AuthorizeParams{
ClientID: app.ID.String(),
ResponseType: "code",
RedirectURI: oauth2providertest.TestRedirectURI,
State: state,
}
code := oauth2providertest.AuthorizeOAuth2App(t, client, client.URL.String(), authParams)
require.NotEmpty(t, code, "should receive authorization code")
ctx := testutil.Context(t, testutil.WaitLong)
data := url.Values{}
data.Set("grant_type", "authorization_code")
data.Set("code", code)
data.Set("redirect_uri", oauth2providertest.TestRedirectURI)
req, err := http.NewRequestWithContext(ctx, "POST", client.URL.String()+"/oauth2/tokens", strings.NewReader(data.Encode()))
require.NoError(t, err, "failed to create token request")
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.SetBasicAuth(app.ID.String(), clientSecret)
httpClient := &http.Client{Timeout: 10 * time.Second}
resp, err := httpClient.Do(req)
require.NoError(t, err, "failed to perform token request")
defer resp.Body.Close()
require.Equal(t, http.StatusOK, resp.StatusCode, "unexpected status code")
var tokenResp oauth2.Token
err = json.NewDecoder(resp.Body).Decode(&tokenResp)
require.NoError(t, err, "failed to decode token response")
require.NotEmpty(t, tokenResp.AccessToken, "missing access token")
require.NotEmpty(t, tokenResp.RefreshToken, "missing refresh token")
require.Equal(t, "Bearer", tokenResp.TokenType, "unexpected token type")
}
func TestOAuth2TokenExchangeClientSecretBasicInvalidSecret(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{
IncludeProvisionerDaemon: false,
})
_ = coderdtest.CreateFirstUser(t, client)
app, clientSecret := oauth2providertest.CreateTestOAuth2App(t, client)
t.Cleanup(func() {
oauth2providertest.CleanupOAuth2App(t, client, app.ID)
})
state := oauth2providertest.GenerateState(t)
authParams := oauth2providertest.AuthorizeParams{
ClientID: app.ID.String(),
ResponseType: "code",
RedirectURI: oauth2providertest.TestRedirectURI,
State: state,
}
code := oauth2providertest.AuthorizeOAuth2App(t, client, client.URL.String(), authParams)
require.NotEmpty(t, code, "should receive authorization code")
ctx := testutil.Context(t, testutil.WaitLong)
data := url.Values{}
data.Set("grant_type", "authorization_code")
data.Set("code", code)
data.Set("redirect_uri", oauth2providertest.TestRedirectURI)
wrongSecret := clientSecret + "x"
req, err := http.NewRequestWithContext(ctx, "POST", client.URL.String()+"/oauth2/tokens", strings.NewReader(data.Encode()))
require.NoError(t, err, "failed to create token request")
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.SetBasicAuth(app.ID.String(), wrongSecret)
httpClient := &http.Client{Timeout: 10 * time.Second}
resp, err := httpClient.Do(req)
require.NoError(t, err, "failed to perform token request")
defer resp.Body.Close()
require.Equal(t, http.StatusUnauthorized, resp.StatusCode, "expected 401 status code")
require.Equal(t, `Basic realm="coder"`, resp.Header.Get("WWW-Authenticate"), "missing WWW-Authenticate header")
oauth2providertest.RequireOAuth2Error(t, resp, oauth2providertest.OAuth2ErrorTypes.InvalidClient)
}
func TestOAuth2PKCEPlainMethodRejected(t *testing.T) {
t.Parallel()
+1 -38
@@ -34,9 +34,6 @@ var (
errInvalidPKCE = xerrors.New("invalid code_verifier")
// errInvalidResource means the resource parameter validation failed.
errInvalidResource = xerrors.New("invalid resource parameter")
// errConflictingClientAuth means the client provided credentials in both the
// request body and HTTP Basic, but they did not match.
errConflictingClientAuth = xerrors.New("conflicting client authentication")
)
func extractTokenRequest(r *http.Request, callbackURL *url.URL) (codersdk.OAuth2TokenRequest, []codersdk.ValidationError, error) {
@@ -55,7 +52,7 @@ func extractTokenRequest(r *http.Request, callbackURL *url.URL) (codersdk.OAuth2
case codersdk.OAuth2ProviderGrantTypeRefreshToken:
p.RequiredNotEmpty("refresh_token")
case codersdk.OAuth2ProviderGrantTypeAuthorizationCode:
p.RequiredNotEmpty("code")
p.RequiredNotEmpty("client_secret", "client_id", "code")
}
req := codersdk.OAuth2TokenRequest{
@@ -70,35 +67,6 @@ func extractTokenRequest(r *http.Request, callbackURL *url.URL) (codersdk.OAuth2
Scope: p.String(vals, "", "scope"),
}
// RFC 6749 §2.3.1: confidential clients may authenticate via HTTP Basic.
if user, pass, ok := r.BasicAuth(); ok && user != "" {
if req.ClientID != "" && req.ClientID != user {
return codersdk.OAuth2TokenRequest{}, nil, errConflictingClientAuth
}
if req.ClientSecret != "" && req.ClientSecret != pass {
return codersdk.OAuth2TokenRequest{}, nil, errConflictingClientAuth
}
req.ClientID = user
req.ClientSecret = pass
}
// Grant-specific required checks that can be satisfied via HTTP Basic.
if req.GrantType == codersdk.OAuth2ProviderGrantTypeAuthorizationCode {
if req.ClientID == "" {
p.Errors = append(p.Errors, codersdk.ValidationError{
Field: "client_id",
Detail: "Parameter \"client_id\" is required and cannot be empty",
})
}
if req.ClientSecret == "" {
p.Errors = append(p.Errors, codersdk.ValidationError{
Field: "client_secret",
Detail: "Parameter \"client_secret\" is required and cannot be empty",
})
}
}
// Validate redirect URI - errors are added to p.Errors.
_ = p.RedirectURL(vals, callbackURL, "redirect_uri")
@@ -136,11 +104,6 @@ func Tokens(db database.Store, lifetimes codersdk.SessionLifetime) http.HandlerF
req, validationErrs, err := extractTokenRequest(r, callbackURL)
if err != nil {
if errors.Is(err, errConflictingClientAuth) {
httpapi.WriteOAuth2Error(ctx, rw, http.StatusBadRequest, codersdk.OAuth2ErrorCodeInvalidRequest, "Conflicting client credentials between Authorization header and request body")
return
}
// Check for specific validation errors in priority order
if slices.ContainsFunc(validationErrs, func(validationError codersdk.ValidationError) bool {
return validationError.Field == "grant_type"
+4 -8
@@ -83,9 +83,8 @@ func TestDynamicParametersWithTerraformValues(t *testing.T) {
dynamicParametersTerraformSource, err := os.ReadFile("testdata/parameters/modules/main.tf")
require.NoError(t, err)
modulesArchive, skipped, err := terraform.GetModulesArchive(os.DirFS("testdata/parameters/modules"))
modulesArchive, err := terraform.GetModulesArchive(os.DirFS("testdata/parameters/modules"))
require.NoError(t, err)
require.Len(t, skipped, 0)
setup := setupDynamicParamsTest(t, setupDynamicParamsTestParams{
provisionerDaemonVersion: provProto.CurrentVersion.String(),
@@ -199,9 +198,8 @@ func TestDynamicParametersWithTerraformValues(t *testing.T) {
dynamicParametersTerraformSource, err := os.ReadFile("testdata/parameters/modules/main.tf")
require.NoError(t, err)
modulesArchive, skipped, err := terraform.GetModulesArchive(os.DirFS("testdata/parameters/modules"))
modulesArchive, err := terraform.GetModulesArchive(os.DirFS("testdata/parameters/modules"))
require.NoError(t, err)
require.Len(t, skipped, 0)
c := atomic.NewInt32(0)
reject := &dbRejectGitSSHKey{Store: db, hook: func(d *dbRejectGitSSHKey) {
@@ -234,9 +232,8 @@ func TestDynamicParametersWithTerraformValues(t *testing.T) {
dynamicParametersTerraformSource, err := os.ReadFile("testdata/parameters/modules/main.tf")
require.NoError(t, err)
modulesArchive, skipped, err := terraform.GetModulesArchive(os.DirFS("testdata/parameters/modules"))
modulesArchive, err := terraform.GetModulesArchive(os.DirFS("testdata/parameters/modules"))
require.NoError(t, err)
require.Len(t, skipped, 0)
setup := setupDynamicParamsTest(t, setupDynamicParamsTestParams{
provisionerDaemonVersion: provProto.CurrentVersion.String(),
@@ -321,9 +318,8 @@ func TestDynamicParametersWithTerraformValues(t *testing.T) {
dynamicParametersTerraformSource, err := os.ReadFile("testdata/parameters/modules/main.tf")
require.NoError(t, err)
modulesArchive, skipped, err := terraform.GetModulesArchive(os.DirFS("testdata/parameters/modules"))
modulesArchive, err := terraform.GetModulesArchive(os.DirFS("testdata/parameters/modules"))
require.NoError(t, err)
require.Len(t, skipped, 0)
setup := setupDynamicParamsTest(t, setupDynamicParamsTestParams{
provisionerDaemonVersion: provProto.CurrentVersion.String(),
+1
@@ -65,6 +65,7 @@ type StateSnapshotter interface {
type Claimer interface {
Claim(
ctx context.Context,
store database.Store,
now time.Time,
userID uuid.UUID,
name string,
+1 -1
@@ -34,7 +34,7 @@ var DefaultReconciler ReconciliationOrchestrator = NoopReconciler{}
type NoopClaimer struct{}
func (NoopClaimer) Claim(context.Context, time.Time, uuid.UUID, string, uuid.UUID, sql.NullString, sql.NullTime, sql.NullInt64) (*uuid.UUID, error) {
func (NoopClaimer) Claim(context.Context, database.Store, time.Time, uuid.UUID, string, uuid.UUID, sql.NullString, sql.NullTime, sql.NullInt64) (*uuid.UUID, error) {
// Not entitled to claim prebuilds in AGPL version.
return nil, ErrAGPLDoesNotSupportPrebuiltWorkspaces
}
@@ -19,9 +19,9 @@ import (
)
var (
templatesActiveUsersDesc = prometheus.NewDesc("coderd_insights_templates_active_users", "The number of active users of the template.", []string{"template_name"}, nil)
applicationsUsageSecondsDesc = prometheus.NewDesc("coderd_insights_applications_usage_seconds", "The application usage per template.", []string{"template_name", "application_name", "slug"}, nil)
parametersDesc = prometheus.NewDesc("coderd_insights_parameters", "The parameter usage per template.", []string{"template_name", "parameter_name", "parameter_type", "parameter_value"}, nil)
templatesActiveUsersDesc = prometheus.NewDesc("coderd_insights_templates_active_users", "The number of active users of the template.", []string{"template_name", "organization_name"}, nil)
applicationsUsageSecondsDesc = prometheus.NewDesc("coderd_insights_applications_usage_seconds", "The application usage per template.", []string{"template_name", "application_name", "slug", "organization_name"}, nil)
parametersDesc = prometheus.NewDesc("coderd_insights_parameters", "The parameter usage per template.", []string{"template_name", "parameter_name", "parameter_type", "parameter_value", "organization_name"}, nil)
)
type MetricsCollector struct {
@@ -38,7 +38,8 @@ type insightsData struct {
apps []database.GetTemplateAppInsightsByTemplateRow
params []parameterRow
templateNames map[uuid.UUID]string
templateNames map[uuid.UUID]string
organizationNames map[uuid.UUID]string // template ID → org name
}
type parameterRow struct {
@@ -137,6 +138,7 @@ func (mc *MetricsCollector) Run(ctx context.Context) (func(), error) {
templateIDs := uniqueTemplateIDs(templateInsights, appInsights, paramInsights)
templateNames := make(map[uuid.UUID]string, len(templateIDs))
organizationNames := make(map[uuid.UUID]string, len(templateIDs))
if len(templateIDs) > 0 {
templates, err := mc.database.GetTemplatesWithFilter(ctx, database.GetTemplatesWithFilterParams{
IDs: templateIDs,
@@ -146,6 +148,31 @@ func (mc *MetricsCollector) Run(ctx context.Context) (func(), error) {
return
}
templateNames = onlyTemplateNames(templates)
// Build org name lookup so that metrics can
// distinguish templates with the same name across
// different organizations.
orgIDs := make([]uuid.UUID, 0, len(templates))
for _, t := range templates {
orgIDs = append(orgIDs, t.OrganizationID)
}
orgIDs = slice.Unique(orgIDs)
orgs, err := mc.database.GetOrganizations(ctx, database.GetOrganizationsParams{
IDs: orgIDs,
})
if err != nil {
mc.logger.Error(ctx, "unable to fetch organizations from database", slog.Error(err))
return
}
orgNameByID := make(map[uuid.UUID]string, len(orgs))
for _, o := range orgs {
orgNameByID[o.ID] = o.Name
}
organizationNames = make(map[uuid.UUID]string, len(templates))
for _, t := range templates {
organizationNames[t.ID] = orgNameByID[t.OrganizationID]
}
}
// Refresh the collector state
@@ -154,7 +181,8 @@ func (mc *MetricsCollector) Run(ctx context.Context) (func(), error) {
apps: appInsights,
params: paramInsights,
templateNames: templateNames,
templateNames: templateNames,
organizationNames: organizationNames,
})
}
@@ -194,44 +222,46 @@ func (mc *MetricsCollector) Collect(metricsCh chan<- prometheus.Metric) {
// Custom apps
for _, appRow := range data.apps {
metricsCh <- prometheus.MustNewConstMetric(applicationsUsageSecondsDesc, prometheus.GaugeValue, float64(appRow.UsageSeconds), data.templateNames[appRow.TemplateID],
appRow.DisplayName, appRow.SlugOrPort)
appRow.DisplayName, appRow.SlugOrPort, data.organizationNames[appRow.TemplateID])
}
// Built-in apps
for _, templateRow := range data.templates {
orgName := data.organizationNames[templateRow.TemplateID]
metricsCh <- prometheus.MustNewConstMetric(applicationsUsageSecondsDesc, prometheus.GaugeValue,
float64(templateRow.UsageVscodeSeconds),
data.templateNames[templateRow.TemplateID],
codersdk.TemplateBuiltinAppDisplayNameVSCode,
"")
"", orgName)
metricsCh <- prometheus.MustNewConstMetric(applicationsUsageSecondsDesc, prometheus.GaugeValue,
float64(templateRow.UsageJetbrainsSeconds),
data.templateNames[templateRow.TemplateID],
codersdk.TemplateBuiltinAppDisplayNameJetBrains,
"")
"", orgName)
metricsCh <- prometheus.MustNewConstMetric(applicationsUsageSecondsDesc, prometheus.GaugeValue,
float64(templateRow.UsageReconnectingPtySeconds),
data.templateNames[templateRow.TemplateID],
codersdk.TemplateBuiltinAppDisplayNameWebTerminal,
"")
"", orgName)
metricsCh <- prometheus.MustNewConstMetric(applicationsUsageSecondsDesc, prometheus.GaugeValue,
float64(templateRow.UsageSshSeconds),
data.templateNames[templateRow.TemplateID],
codersdk.TemplateBuiltinAppDisplayNameSSH,
"")
"", orgName)
}
// Templates
for _, templateRow := range data.templates {
metricsCh <- prometheus.MustNewConstMetric(templatesActiveUsersDesc, prometheus.GaugeValue, float64(templateRow.ActiveUsers), data.templateNames[templateRow.TemplateID])
metricsCh <- prometheus.MustNewConstMetric(templatesActiveUsersDesc, prometheus.GaugeValue, float64(templateRow.ActiveUsers), data.templateNames[templateRow.TemplateID], data.organizationNames[templateRow.TemplateID])
}
// Parameters
for _, parameterRow := range data.params {
metricsCh <- prometheus.MustNewConstMetric(parametersDesc, prometheus.GaugeValue, float64(parameterRow.count), data.templateNames[parameterRow.templateID], parameterRow.name, parameterRow.aType, parameterRow.value)
metricsCh <- prometheus.MustNewConstMetric(parametersDesc, prometheus.GaugeValue, float64(parameterRow.count), data.templateNames[parameterRow.templateID], parameterRow.name, parameterRow.aType, parameterRow.value, data.organizationNames[parameterRow.templateID])
}
}
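The diff above adds `organization_name` to each insights metric because Prometheus identifies a series by its full label set: two templates named `openstack-v1` in different orgs previously produced identical series and broke the scrape. A stdlib-only sketch of that fingerprinting behavior (the `seriesKey` helper and org names are illustrative, not coderd code):

```go
package main

import "fmt"

// seriesKey builds the label fingerprint Prometheus uses to tell series
// apart; duplicate fingerprints in one scrape make the /metrics handler
// fail with HTTP 500, which is the bug the diff above fixes.
func seriesKey(labels map[string]string, names ...string) string {
	key := ""
	for _, n := range names {
		key += n + "=" + labels[n] + ","
	}
	return key
}

func main() {
	// Two hypothetical templates with the same name in different orgs.
	a := map[string]string{"template_name": "openstack-v1", "organization_name": "org-a"}
	b := map[string]string{"template_name": "openstack-v1", "organization_name": "org-b"}

	// Old label set (template_name only): both series collide.
	fmt.Println(seriesKey(a, "template_name") == seriesKey(b, "template_name")) // true

	// New label set adds organization_name: distinct series.
	fmt.Println(seriesKey(a, "template_name", "organization_name") ==
		seriesKey(b, "template_name", "organization_name")) // false
}
```

This is why the fix is purely additive: existing dashboards keyed on `template_name` still work, while the extra label guarantees uniqueness across organizations.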
@@ -1,13 +1,13 @@
{
"coderd_insights_applications_usage_seconds[application_name=JetBrains,slug=,template_name=golden-template]": 60,
"coderd_insights_applications_usage_seconds[application_name=Visual Studio Code,slug=,template_name=golden-template]": 60,
"coderd_insights_applications_usage_seconds[application_name=Web Terminal,slug=,template_name=golden-template]": 0,
"coderd_insights_applications_usage_seconds[application_name=SSH,slug=,template_name=golden-template]": 60,
"coderd_insights_applications_usage_seconds[application_name=Golden Slug,slug=golden-slug,template_name=golden-template]": 180,
"coderd_insights_parameters[parameter_name=first_parameter,parameter_type=string,parameter_value=Foobar,template_name=golden-template]": 1,
"coderd_insights_parameters[parameter_name=first_parameter,parameter_type=string,parameter_value=Baz,template_name=golden-template]": 1,
"coderd_insights_parameters[parameter_name=second_parameter,parameter_type=bool,parameter_value=true,template_name=golden-template]": 2,
"coderd_insights_parameters[parameter_name=third_parameter,parameter_type=number,parameter_value=789,template_name=golden-template]": 1,
"coderd_insights_parameters[parameter_name=third_parameter,parameter_type=number,parameter_value=999,template_name=golden-template]": 1,
"coderd_insights_templates_active_users[template_name=golden-template]": 1
"coderd_insights_applications_usage_seconds[application_name=JetBrains,organization_name=coder,slug=,template_name=golden-template]": 60,
"coderd_insights_applications_usage_seconds[application_name=Visual Studio Code,organization_name=coder,slug=,template_name=golden-template]": 60,
"coderd_insights_applications_usage_seconds[application_name=Web Terminal,organization_name=coder,slug=,template_name=golden-template]": 0,
"coderd_insights_applications_usage_seconds[application_name=SSH,organization_name=coder,slug=,template_name=golden-template]": 60,
"coderd_insights_applications_usage_seconds[application_name=Golden Slug,organization_name=coder,slug=golden-slug,template_name=golden-template]": 180,
"coderd_insights_parameters[organization_name=coder,parameter_name=first_parameter,parameter_type=string,parameter_value=Foobar,template_name=golden-template]": 1,
"coderd_insights_parameters[organization_name=coder,parameter_name=first_parameter,parameter_type=string,parameter_value=Baz,template_name=golden-template]": 1,
"coderd_insights_parameters[organization_name=coder,parameter_name=second_parameter,parameter_type=bool,parameter_value=true,template_name=golden-template]": 2,
"coderd_insights_parameters[organization_name=coder,parameter_name=third_parameter,parameter_type=number,parameter_value=789,template_name=golden-template]": 1,
"coderd_insights_parameters[organization_name=coder,parameter_name=third_parameter,parameter_type=number,parameter_value=999,template_name=golden-template]": 1,
"coderd_insights_templates_active_users[organization_name=coder,template_name=golden-template]": 1
}
@@ -132,6 +132,19 @@ func Workspaces(ctx context.Context, logger slog.Logger, registerer prometheus.R
duration = defaultRefreshRate
}
// TODO: deprecated: remove in the future
// See: https://github.com/coder/coder/issues/12999
// Deprecation reason: gauge metrics should avoid suffix `_total`
workspaceLatestBuildTotalsDeprecated := prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "api",
Name: "workspace_latest_build_total",
Help: "DEPRECATED: use coderd_api_workspace_latest_build instead",
}, []string{"status"})
if err := registerer.Register(workspaceLatestBuildTotalsDeprecated); err != nil {
return nil, err
}
workspaceLatestBuildTotals := prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "api",
@@ -185,6 +198,8 @@ func Workspaces(ctx context.Context, logger slog.Logger, registerer prometheus.R
for _, w := range ws {
status := string(w.LatestBuildStatus)
workspaceLatestBuildTotals.WithLabelValues(status).Add(1)
// TODO: deprecated: remove in the future
workspaceLatestBuildTotalsDeprecated.WithLabelValues(status).Add(1)
workspaceLatestBuildStatuses.WithLabelValues(
status,
+19 -3
@@ -70,9 +70,11 @@ type metrics struct {
// if the oauth supports it, rate limit metrics.
// rateLimit is the defined limit per interval
rateLimit *prometheus.GaugeVec
rateLimitRemaining *prometheus.GaugeVec
rateLimitUsed *prometheus.GaugeVec
rateLimit *prometheus.GaugeVec
// TODO: remove deprecated metrics in the future release
rateLimitDeprecated *prometheus.GaugeVec
rateLimitRemaining *prometheus.GaugeVec
rateLimitUsed *prometheus.GaugeVec
// rateLimitReset is unix time of the next interval (when the rate limit resets).
rateLimitReset *prometheus.GaugeVec
// rateLimitResetIn is the time in seconds until the rate limit resets.
@@ -107,6 +109,18 @@ func NewFactory(registry prometheus.Registerer) *Factory {
// Some IDPs have different buckets for different rate limits.
"resource",
}),
// TODO: deprecated: remove in the future
// See: https://github.com/coder/coder/issues/12999
// Deprecation reason: gauge metrics should avoid suffix `_total`
rateLimitDeprecated: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "oauth2",
Name: "external_requests_rate_limit_total",
Help: "DEPRECATED: use coderd_oauth2_external_requests_rate_limit instead",
}, []string{
"name",
"resource",
}),
rateLimitRemaining: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "oauth2",
@@ -184,6 +198,8 @@ func (f *Factory) NewGithub(name string, under OAuth2Config) *Config {
}
}
// TODO: remove this metric in v3
f.metrics.rateLimitDeprecated.With(labels).Set(float64(limits.Limit))
f.metrics.rateLimit.With(labels).Set(float64(limits.Limit))
f.metrics.rateLimitRemaining.With(labels).Set(float64(limits.Remaining))
f.metrics.rateLimitUsed.With(labels).Set(float64(limits.Used))
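Both hunks above follow the same migration pattern: keep publishing the deprecated `_total`-suffixed gauge alongside the correctly named one until the old name can be dropped. A minimal stand-in sketch (a plain map instead of a real Prometheus registry, with hypothetical metric names):

```go
package main

import "fmt"

// gauges is a minimal stand-in for a Prometheus registry: name -> value.
type gauges map[string]float64

// setWithDeprecated writes the same value under both the new name and the
// deprecated `_total`-suffixed alias, mirroring the dual-write pattern in
// the diffs above (publish both until the grace period ends).
func setWithDeprecated(g gauges, name string, v float64) {
	g[name] = v
	g[name+"_total"] = v // DEPRECATED alias; remove in a future release
}

func main() {
	g := gauges{}
	setWithDeprecated(g, "coderd_api_workspace_latest_build", 3)
	fmt.Println(g["coderd_api_workspace_latest_build"], g["coderd_api_workspace_latest_build_total"]) // 3 3
}
```

Dual-writing lets dashboards and alerts migrate to the new name at their own pace instead of breaking on upgrade.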
+2 -2
@@ -209,7 +209,7 @@ func TestGithubRateLimits(t *testing.T) {
}
pass := true
if !c.ExpectNoMetrics {
pass = pass && assert.Equal(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit", labels), c.Limit, "limit")
pass = pass && assert.Equal(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit_total", labels), c.Limit, "limit")
pass = pass && assert.Equal(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit_remaining", labels), c.Remaining, "remaining")
pass = pass && assert.Equal(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit_used", labels), c.Used, "used")
if !c.at.IsZero() {
@@ -218,7 +218,7 @@ func TestGithubRateLimits(t *testing.T) {
pass = pass && assert.InDelta(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit_reset_in_seconds", labels), int(until.Seconds()), 2, "reset in")
}
} else {
pass = pass && assert.Nil(t, promhelp.MetricValue(t, reg, "coderd_oauth2_external_requests_rate_limit", labels), "not exists")
pass = pass && assert.Nil(t, promhelp.MetricValue(t, reg, "coderd_oauth2_external_requests_rate_limit_total", labels), "not exists")
}
// Helpful debugging
+159 -335
@@ -1652,6 +1652,7 @@ func (s *server) completeTemplateImportJob(ctx context.Context, job database.Pro
// Process modules
for transition, modules := range map[database.WorkspaceTransition][]*sdkproto.Module{
database.WorkspaceTransitionStart: jobType.TemplateImport.StartModules,
database.WorkspaceTransitionStop: jobType.TemplateImport.StopModules,
} {
for _, module := range modules {
s.Logger.Info(ctx, "inserting template import job module",
@@ -2031,23 +2032,6 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
appIDs = append(appIDs, app.GetId())
agentIDByAppID[app.GetId()] = agentID
}
// Subagents in devcontainers can also have apps that need
// tracking for task linking, just like the parent agent's
// apps above.
for _, dc := range protoAgent.GetDevcontainers() {
dc.Id = uuid.New().String()
if dc.GetSubagentId() != "" {
subAgentID := uuid.New()
dc.SubagentId = subAgentID.String()
for _, app := range dc.GetApps() {
appIDs = append(appIDs, app.GetId())
agentIDByAppID[app.GetId()] = subAgentID
}
}
}
}
err = InsertWorkspaceResource(
@@ -2876,7 +2860,33 @@ func InsertWorkspaceResource(ctx context.Context, db database.Store, jobID uuid.
}
}
scriptsParams := agentScriptsFromProto(prAgent.Scripts)
logSourceIDs := make([]uuid.UUID, 0, len(prAgent.Scripts))
logSourceDisplayNames := make([]string, 0, len(prAgent.Scripts))
logSourceIcons := make([]string, 0, len(prAgent.Scripts))
scriptIDs := make([]uuid.UUID, 0, len(prAgent.Scripts))
scriptDisplayName := make([]string, 0, len(prAgent.Scripts))
scriptLogPaths := make([]string, 0, len(prAgent.Scripts))
scriptSources := make([]string, 0, len(prAgent.Scripts))
scriptCron := make([]string, 0, len(prAgent.Scripts))
scriptTimeout := make([]int32, 0, len(prAgent.Scripts))
scriptStartBlocksLogin := make([]bool, 0, len(prAgent.Scripts))
scriptRunOnStart := make([]bool, 0, len(prAgent.Scripts))
scriptRunOnStop := make([]bool, 0, len(prAgent.Scripts))
for _, script := range prAgent.Scripts {
logSourceIDs = append(logSourceIDs, uuid.New())
logSourceDisplayNames = append(logSourceDisplayNames, script.DisplayName)
logSourceIcons = append(logSourceIcons, script.Icon)
scriptIDs = append(scriptIDs, uuid.New())
scriptDisplayName = append(scriptDisplayName, script.DisplayName)
scriptLogPaths = append(scriptLogPaths, script.LogPath)
scriptSources = append(scriptSources, script.Script)
scriptCron = append(scriptCron, script.Cron)
scriptTimeout = append(scriptTimeout, script.TimeoutSeconds)
scriptStartBlocksLogin = append(scriptStartBlocksLogin, script.StartBlocksLogin)
scriptRunOnStart = append(scriptRunOnStart, script.RunOnStart)
scriptRunOnStop = append(scriptRunOnStop, script.RunOnStop)
}
// Dev Containers require a script and log/source, so we do this before
// the logs insert below.
@@ -2886,46 +2896,32 @@ func InsertWorkspaceResource(ctx context.Context, db database.Store, jobID uuid.
devcontainerNames = make([]string, 0, len(devcontainers))
devcontainerWorkspaceFolders = make([]string, 0, len(devcontainers))
devcontainerConfigPaths = make([]string, 0, len(devcontainers))
devcontainerSubagentIDs = make([]uuid.UUID, 0, len(devcontainers))
)
for _, dc := range devcontainers {
id := uuid.New()
if opts.useAgentIDsFromProto {
id, err = uuid.Parse(dc.GetId())
if err != nil {
return xerrors.Errorf("invalid devcontainer ID format; must be uuid: %w", err)
}
}
subAgentID, err := insertDevcontainerSubagent(ctx, db, dc, dbAgent, resource.ID, appSlugs, snapshot, opts)
if err != nil {
return xerrors.Errorf("insert devcontainer %q subagent: %w", dc.GetName(), err)
}
devcontainerIDs = append(devcontainerIDs, id)
devcontainerNames = append(devcontainerNames, dc.GetName())
devcontainerWorkspaceFolders = append(devcontainerWorkspaceFolders, dc.GetWorkspaceFolder())
devcontainerConfigPaths = append(devcontainerConfigPaths, dc.GetConfigPath())
devcontainerSubagentIDs = append(devcontainerSubagentIDs, subAgentID)
devcontainerNames = append(devcontainerNames, dc.Name)
devcontainerWorkspaceFolders = append(devcontainerWorkspaceFolders, dc.WorkspaceFolder)
devcontainerConfigPaths = append(devcontainerConfigPaths, dc.ConfigPath)
// Add a log source and script for each devcontainer so we can
// track logs and timings for each devcontainer.
displayName := fmt.Sprintf("Dev Container (%s)", dc.GetName())
scriptsParams.LogSourceIDs = append(scriptsParams.LogSourceIDs, uuid.New())
scriptsParams.LogSourceDisplayNames = append(scriptsParams.LogSourceDisplayNames, displayName)
scriptsParams.LogSourceIcons = append(scriptsParams.LogSourceIcons, "/emojis/1f4e6.png") // Emoji package. Or perhaps /icon/container.svg?
scriptsParams.ScriptIDs = append(scriptsParams.ScriptIDs, id) // Re-use the devcontainer ID as the script ID for identification.
scriptsParams.ScriptDisplayNames = append(scriptsParams.ScriptDisplayNames, displayName)
scriptsParams.ScriptLogPaths = append(scriptsParams.ScriptLogPaths, "")
scriptsParams.ScriptSources = append(scriptsParams.ScriptSources, `echo "WARNING: Dev Containers are early access. If you're seeing this message then Dev Containers haven't been enabled for your workspace yet. To enable, the agent needs to run with the environment variable CODER_AGENT_DEVCONTAINERS_ENABLE=true set."`)
scriptsParams.ScriptCron = append(scriptsParams.ScriptCron, "")
scriptsParams.ScriptTimeout = append(scriptsParams.ScriptTimeout, 0)
scriptsParams.ScriptStartBlocksLogin = append(scriptsParams.ScriptStartBlocksLogin, false)
displayName := fmt.Sprintf("Dev Container (%s)", dc.Name)
logSourceIDs = append(logSourceIDs, uuid.New())
logSourceDisplayNames = append(logSourceDisplayNames, displayName)
logSourceIcons = append(logSourceIcons, "/emojis/1f4e6.png") // Emoji package. Or perhaps /icon/container.svg?
scriptIDs = append(scriptIDs, id) // Re-use the devcontainer ID as the script ID for identification.
scriptDisplayName = append(scriptDisplayName, displayName)
scriptLogPaths = append(scriptLogPaths, "")
scriptSources = append(scriptSources, `echo "WARNING: Dev Containers are early access. If you're seeing this message then Dev Containers haven't been enabled for your workspace yet. To enable, the agent needs to run with the environment variable CODER_AGENT_DEVCONTAINERS_ENABLE=true set."`)
scriptCron = append(scriptCron, "")
scriptTimeout = append(scriptTimeout, 0)
scriptStartBlocksLogin = append(scriptStartBlocksLogin, false)
// Run on start to surface the warning message in case the
// terraform resource is used, but the experiment hasn't
// been enabled.
scriptsParams.ScriptRunOnStart = append(scriptsParams.ScriptRunOnStart, true)
scriptsParams.ScriptRunOnStop = append(scriptsParams.ScriptRunOnStop, false)
scriptRunOnStart = append(scriptRunOnStart, true)
scriptRunOnStop = append(scriptRunOnStop, false)
}
_, err = db.InsertWorkspaceAgentDevcontainers(ctx, database.InsertWorkspaceAgentDevcontainersParams{
@@ -2935,21 +2931,131 @@ func InsertWorkspaceResource(ctx context.Context, db database.Store, jobID uuid.
Name: devcontainerNames,
WorkspaceFolder: devcontainerWorkspaceFolders,
ConfigPath: devcontainerConfigPaths,
SubagentID: devcontainerSubagentIDs,
})
if err != nil {
return xerrors.Errorf("insert agent devcontainer: %w", err)
}
}
if err := insertAgentScriptsAndLogSources(ctx, db, agentID, scriptsParams); err != nil {
return xerrors.Errorf("insert agent scripts and log sources: %w", err)
_, err = db.InsertWorkspaceAgentLogSources(ctx, database.InsertWorkspaceAgentLogSourcesParams{
WorkspaceAgentID: agentID,
ID: logSourceIDs,
CreatedAt: dbtime.Now(),
DisplayName: logSourceDisplayNames,
Icon: logSourceIcons,
})
if err != nil {
return xerrors.Errorf("insert agent log sources: %w", err)
}
_, err = db.InsertWorkspaceAgentScripts(ctx, database.InsertWorkspaceAgentScriptsParams{
WorkspaceAgentID: agentID,
LogSourceID: logSourceIDs,
LogPath: scriptLogPaths,
CreatedAt: dbtime.Now(),
Script: scriptSources,
Cron: scriptCron,
TimeoutSeconds: scriptTimeout,
StartBlocksLogin: scriptStartBlocksLogin,
RunOnStart: scriptRunOnStart,
RunOnStop: scriptRunOnStop,
DisplayName: scriptDisplayName,
ID: scriptIDs,
})
if err != nil {
return xerrors.Errorf("insert agent scripts: %w", err)
}
for _, app := range prAgent.Apps {
if err := insertAgentApp(ctx, db, dbAgent.ID, app, appSlugs, snapshot); err != nil {
return xerrors.Errorf("insert agent app: %w", err)
// Similar logic is duplicated in terraform/resources.go.
slug := app.Slug
if slug == "" {
return xerrors.Errorf("app must have a slug or name set")
}
// Contrary to agent names above, app slugs were never permitted to
// contain uppercase letters or underscores.
if !provisioner.AppSlugRegex.MatchString(slug) {
return xerrors.Errorf("app slug %q does not match regex %q", slug, provisioner.AppSlugRegex.String())
}
if _, exists := appSlugs[slug]; exists {
return xerrors.Errorf("duplicate app slug, must be unique per template: %q", slug)
}
appSlugs[slug] = struct{}{}
health := database.WorkspaceAppHealthDisabled
if app.Healthcheck == nil {
app.Healthcheck = &sdkproto.Healthcheck{}
}
if app.Healthcheck.Url != "" {
health = database.WorkspaceAppHealthInitializing
}
sharingLevel := database.AppSharingLevelOwner
switch app.SharingLevel {
case sdkproto.AppSharingLevel_AUTHENTICATED:
sharingLevel = database.AppSharingLevelAuthenticated
case sdkproto.AppSharingLevel_PUBLIC:
sharingLevel = database.AppSharingLevelPublic
}
displayGroup := sql.NullString{
Valid: app.Group != "",
String: app.Group,
}
openIn := database.WorkspaceAppOpenInSlimWindow
switch app.OpenIn {
case sdkproto.AppOpenIn_TAB:
openIn = database.WorkspaceAppOpenInTab
case sdkproto.AppOpenIn_SLIM_WINDOW:
openIn = database.WorkspaceAppOpenInSlimWindow
}
var appID string
if app.Id == "" || app.Id == uuid.Nil.String() {
appID = uuid.NewString()
} else {
appID = app.Id
}
id, err := uuid.Parse(appID)
if err != nil {
return xerrors.Errorf("parse app uuid: %w", err)
}
// If workspace apps are "persistent", the ID will not be regenerated across workspace builds, so we have to upsert.
dbApp, err := db.UpsertWorkspaceApp(ctx, database.UpsertWorkspaceAppParams{
ID: id,
CreatedAt: dbtime.Now(),
AgentID: dbAgent.ID,
Slug: slug,
DisplayName: app.DisplayName,
Icon: app.Icon,
Command: sql.NullString{
String: app.Command,
Valid: app.Command != "",
},
Url: sql.NullString{
String: app.Url,
Valid: app.Url != "",
},
External: app.External,
Subdomain: app.Subdomain,
SharingLevel: sharingLevel,
HealthcheckUrl: app.Healthcheck.Url,
HealthcheckInterval: app.Healthcheck.Interval,
HealthcheckThreshold: app.Healthcheck.Threshold,
Health: health,
// #nosec G115 - Order represents a display order value that's always small and fits in int32
DisplayOrder: int32(app.Order),
DisplayGroup: displayGroup,
Hidden: app.Hidden,
OpenIn: openIn,
Tooltip: app.Tooltip,
})
if err != nil {
return xerrors.Errorf("upsert app: %w", err)
}
snapshot.WorkspaceApps = append(snapshot.WorkspaceApps, telemetry.ConvertWorkspaceApp(dbApp))
}
}
@@ -3255,285 +3361,3 @@ func convertDisplayApps(apps *sdkproto.DisplayApps) []database.DisplayApp {
}
return dapps
}
// insertDevcontainerSubagent creates a workspace agent for a devcontainer's
// subagent if one is defined. It returns the subagent ID (zero UUID if no
// subagent is defined).
func insertDevcontainerSubagent(
ctx context.Context,
db database.Store,
dc *sdkproto.Devcontainer,
parentAgent database.WorkspaceAgent,
resourceID uuid.UUID,
appSlugs map[string]struct{},
snapshot *telemetry.Snapshot,
opts *insertWorkspaceResourceOptions,
) (uuid.UUID, error) {
// If there are no attached resources, we don't need to pre-create the
// subagent. This preserves backwards compatibility where devcontainers
// without resources can have their agents recreated dynamically.
if len(dc.GetApps()) == 0 && len(dc.GetScripts()) == 0 && len(dc.GetEnvs()) == 0 {
return uuid.UUID{}, nil
}
subAgentID := uuid.New()
if opts.useAgentIDsFromProto {
var err error
subAgentID, err = uuid.Parse(dc.GetSubagentId())
if err != nil {
return uuid.UUID{}, xerrors.Errorf("parse subagent id: %w", err)
}
}
envJSON, err := encodeSubagentEnvs(dc.GetEnvs())
if err != nil {
return uuid.UUID{}, err
}
_, err = db.InsertWorkspaceAgent(ctx, database.InsertWorkspaceAgentParams{
ID: subAgentID,
ParentID: uuid.NullUUID{Valid: true, UUID: parentAgent.ID},
CreatedAt: dbtime.Now(),
UpdatedAt: dbtime.Now(),
ResourceID: resourceID,
Name: dc.GetName(),
AuthToken: uuid.New(),
AuthInstanceID: parentAgent.AuthInstanceID,
Architecture: parentAgent.Architecture,
EnvironmentVariables: envJSON,
Directory: dc.GetWorkspaceFolder(),
InstanceMetadata: pqtype.NullRawMessage{},
ResourceMetadata: pqtype.NullRawMessage{},
OperatingSystem: parentAgent.OperatingSystem,
ConnectionTimeoutSeconds: parentAgent.ConnectionTimeoutSeconds,
TroubleshootingURL: parentAgent.TroubleshootingURL,
MOTDFile: "",
DisplayApps: []database.DisplayApp{},
DisplayOrder: 0,
APIKeyScope: parentAgent.APIKeyScope,
})
if err != nil {
return uuid.UUID{}, xerrors.Errorf("insert subagent: %w", err)
}
for _, app := range dc.GetApps() {
if err := insertAgentApp(ctx, db, subAgentID, app, appSlugs, snapshot); err != nil {
return uuid.UUID{}, xerrors.Errorf("insert agent app: %w", err)
}
}
if err := insertAgentScriptsAndLogSources(ctx, db, subAgentID, agentScriptsFromProto(dc.GetScripts())); err != nil {
return uuid.UUID{}, xerrors.Errorf("insert agent scripts and log sources: %w", err)
}
return subAgentID, nil
}
func encodeSubagentEnvs(envs []*sdkproto.Env) (pqtype.NullRawMessage, error) {
if len(envs) == 0 {
return pqtype.NullRawMessage{}, nil
}
subAgentEnvs := make(map[string]string, len(envs))
for _, env := range envs {
subAgentEnvs[env.GetName()] = env.GetValue()
}
data, err := json.Marshal(subAgentEnvs)
if err != nil {
return pqtype.NullRawMessage{}, xerrors.Errorf("marshal env: %w", err)
}
return pqtype.NullRawMessage{Valid: true, RawMessage: data}, nil
}
// agentScriptsParams holds the parameters for inserting agent scripts and
// their associated log sources.
type agentScriptsParams struct {
LogSourceIDs []uuid.UUID
LogSourceDisplayNames []string
LogSourceIcons []string
ScriptIDs []uuid.UUID
ScriptDisplayNames []string
ScriptLogPaths []string
ScriptSources []string
ScriptCron []string
ScriptTimeout []int32
ScriptStartBlocksLogin []bool
ScriptRunOnStart []bool
ScriptRunOnStop []bool
}
// agentScriptsFromProto converts a slice of proto scripts into the
// agentScriptsParams struct needed for database insertion.
func agentScriptsFromProto(scripts []*sdkproto.Script) agentScriptsParams {
params := agentScriptsParams{
LogSourceIDs: make([]uuid.UUID, 0, len(scripts)),
LogSourceDisplayNames: make([]string, 0, len(scripts)),
LogSourceIcons: make([]string, 0, len(scripts)),
ScriptIDs: make([]uuid.UUID, 0, len(scripts)),
ScriptDisplayNames: make([]string, 0, len(scripts)),
ScriptLogPaths: make([]string, 0, len(scripts)),
ScriptSources: make([]string, 0, len(scripts)),
ScriptCron: make([]string, 0, len(scripts)),
ScriptTimeout: make([]int32, 0, len(scripts)),
ScriptStartBlocksLogin: make([]bool, 0, len(scripts)),
ScriptRunOnStart: make([]bool, 0, len(scripts)),
ScriptRunOnStop: make([]bool, 0, len(scripts)),
}
for _, script := range scripts {
params.LogSourceIDs = append(params.LogSourceIDs, uuid.New())
params.LogSourceDisplayNames = append(params.LogSourceDisplayNames, script.GetDisplayName())
params.LogSourceIcons = append(params.LogSourceIcons, script.GetIcon())
params.ScriptIDs = append(params.ScriptIDs, uuid.New())
params.ScriptDisplayNames = append(params.ScriptDisplayNames, script.GetDisplayName())
params.ScriptLogPaths = append(params.ScriptLogPaths, script.GetLogPath())
params.ScriptSources = append(params.ScriptSources, script.GetScript())
params.ScriptCron = append(params.ScriptCron, script.GetCron())
params.ScriptTimeout = append(params.ScriptTimeout, script.GetTimeoutSeconds())
params.ScriptStartBlocksLogin = append(params.ScriptStartBlocksLogin, script.GetStartBlocksLogin())
params.ScriptRunOnStart = append(params.ScriptRunOnStart, script.GetRunOnStart())
params.ScriptRunOnStop = append(params.ScriptRunOnStop, script.GetRunOnStop())
}
return params
}
// insertAgentScriptsAndLogSources inserts log sources and scripts for an agent (or
// subagent). It expects the caller to have built the agentScriptsParams,
// allowing for additional entries to be appended before insertion (e.g. for
// devcontainers). Returns nil if there are no log sources to insert.
func insertAgentScriptsAndLogSources(ctx context.Context, db database.Store, agentID uuid.UUID, params agentScriptsParams) error {
if len(params.LogSourceIDs) == 0 {
return nil
}
_, err := db.InsertWorkspaceAgentLogSources(ctx, database.InsertWorkspaceAgentLogSourcesParams{
WorkspaceAgentID: agentID,
ID: params.LogSourceIDs,
CreatedAt: dbtime.Now(),
DisplayName: params.LogSourceDisplayNames,
Icon: params.LogSourceIcons,
})
if err != nil {
return xerrors.Errorf("insert log sources: %w", err)
}
_, err = db.InsertWorkspaceAgentScripts(ctx, database.InsertWorkspaceAgentScriptsParams{
WorkspaceAgentID: agentID,
LogSourceID: params.LogSourceIDs,
ID: params.ScriptIDs,
LogPath: params.ScriptLogPaths,
CreatedAt: dbtime.Now(),
Script: params.ScriptSources,
Cron: params.ScriptCron,
TimeoutSeconds: params.ScriptTimeout,
StartBlocksLogin: params.ScriptStartBlocksLogin,
RunOnStart: params.ScriptRunOnStart,
RunOnStop: params.ScriptRunOnStop,
DisplayName: params.ScriptDisplayNames,
})
if err != nil {
return xerrors.Errorf("insert scripts: %w", err)
}
return nil
}
func insertAgentApp(ctx context.Context, db database.Store, agentID uuid.UUID, app *sdkproto.App, appSlugs map[string]struct{}, snapshot *telemetry.Snapshot) error {
// Similar logic is duplicated in terraform/resources.go.
slug := app.Slug
if slug == "" {
return xerrors.Errorf("app must have a slug or name set")
}
// Unlike agent names, app slugs were never permitted to contain uppercase
// letters or underscores.
if !provisioner.AppSlugRegex.MatchString(slug) {
return xerrors.Errorf("app slug %q does not match regex %q", slug, provisioner.AppSlugRegex.String())
}
if _, exists := appSlugs[slug]; exists {
return xerrors.Errorf("duplicate app slug, must be unique per template: %q", slug)
}
appSlugs[slug] = struct{}{}
health := database.WorkspaceAppHealthDisabled
if app.Healthcheck == nil {
app.Healthcheck = &sdkproto.Healthcheck{}
}
if app.Healthcheck.Url != "" {
health = database.WorkspaceAppHealthInitializing
}
sharingLevel := database.AppSharingLevelOwner
switch app.SharingLevel {
case sdkproto.AppSharingLevel_AUTHENTICATED:
sharingLevel = database.AppSharingLevelAuthenticated
case sdkproto.AppSharingLevel_PUBLIC:
sharingLevel = database.AppSharingLevelPublic
}
displayGroup := sql.NullString{
Valid: app.Group != "",
String: app.Group,
}
openIn := database.WorkspaceAppOpenInSlimWindow
switch app.OpenIn {
case sdkproto.AppOpenIn_TAB:
openIn = database.WorkspaceAppOpenInTab
case sdkproto.AppOpenIn_SLIM_WINDOW:
openIn = database.WorkspaceAppOpenInSlimWindow
}
var appID string
if app.Id == "" || app.Id == uuid.Nil.String() {
appID = uuid.NewString()
} else {
appID = app.Id
}
id, err := uuid.Parse(appID)
if err != nil {
return xerrors.Errorf("parse app uuid: %w", err)
}
// If workspace apps are "persistent", the ID will not be regenerated across workspace builds, so we have to upsert.
dbApp, err := db.UpsertWorkspaceApp(ctx, database.UpsertWorkspaceAppParams{
ID: id,
CreatedAt: dbtime.Now(),
AgentID: agentID,
Slug: slug,
DisplayName: app.DisplayName,
Icon: app.Icon,
Command: sql.NullString{
String: app.Command,
Valid: app.Command != "",
},
Url: sql.NullString{
String: app.Url,
Valid: app.Url != "",
},
External: app.External,
Subdomain: app.Subdomain,
SharingLevel: sharingLevel,
HealthcheckUrl: app.Healthcheck.Url,
HealthcheckInterval: app.Healthcheck.Interval,
HealthcheckThreshold: app.Healthcheck.Threshold,
Health: health,
// #nosec G115 - Order represents a display order value that's always small and fits in int32
DisplayOrder: int32(app.Order),
DisplayGroup: displayGroup,
Hidden: app.Hidden,
OpenIn: openIn,
Tooltip: app.Tooltip,
})
if err != nil {
return xerrors.Errorf("upsert app: %w", err)
}
snapshot.WorkspaceApps = append(snapshot.WorkspaceApps, telemetry.ConvertWorkspaceApp(dbApp))
return nil
}
@@ -2309,17 +2309,19 @@ func TestCompleteJob(t *testing.T) {
Version: "1.0.0",
Source: "github.com/example/example",
},
{
Key: "test2",
Version: "2.0.0",
Source: "github.com/example2/example",
},
},
StopResources: []*sdkproto.Resource{{
Name: "something2",
Type: "aws_instance",
ModulePath: "module.test2",
}},
StopModules: []*sdkproto.Module{
{
Key: "test2",
Version: "2.0.0",
Source: "github.com/example2/example",
},
},
Plan: []byte("{}"),
},
},
@@ -2356,7 +2358,7 @@ func TestCompleteJob(t *testing.T) {
Key: "test2",
Version: "2.0.0",
Source: "github.com/example2/example",
-Transition: database.WorkspaceTransitionStart,
+Transition: database.WorkspaceTransitionStop,
}},
},
{
@@ -2981,46 +2983,6 @@ func TestCompleteJob(t *testing.T) {
expectHasAiTask: true,
expectUsageEvent: true,
},
{
name: "ai task linked to subagent app in devcontainer",
transition: database.WorkspaceTransitionStart,
input: &proto.CompletedJob_WorkspaceBuild{
AiTasks: []*sdkproto.AITask{
{
Id: uuid.NewString(),
AppId: sidebarAppID.String(),
},
},
Resources: []*sdkproto.Resource{
{
Agents: []*sdkproto.Agent{
{
Id: uuid.NewString(),
Name: "parent-agent",
Devcontainers: []*sdkproto.Devcontainer{
{
Name: "dev",
WorkspaceFolder: "/workspace",
SubagentId: uuid.NewString(),
Apps: []*sdkproto.App{
{
Id: sidebarAppID.String(),
Slug: "subagent-app",
},
},
},
},
},
},
},
},
},
isTask: true,
expectTaskStatus: database.TaskStatusInitializing,
expectAppID: uuid.NullUUID{UUID: sidebarAppID, Valid: true},
expectHasAiTask: true,
expectUsageEvent: true,
},
// Checks regression for https://github.com/coder/coder/issues/18776
{
name: "non-existing app",
@@ -3426,9 +3388,6 @@ func TestInsertWorkspaceResource(t *testing.T) {
insert := func(db database.Store, jobID uuid.UUID, resource *sdkproto.Resource) error {
return provisionerdserver.InsertWorkspaceResource(ctx, db, jobID, database.WorkspaceTransitionStart, resource, &telemetry.Snapshot{})
}
-insertWithProtoIDs := func(db database.Store, jobID uuid.UUID, resource *sdkproto.Resource) error {
-	return provisionerdserver.InsertWorkspaceResource(ctx, db, jobID, database.WorkspaceTransitionStart, resource, &telemetry.Snapshot{}, provisionerdserver.InsertWorkspaceResourceWithAgentIDsFromProto())
-}
t.Run("NoAgents", func(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
@@ -3765,400 +3724,39 @@ func TestInsertWorkspaceResource(t *testing.T) {
t.Run("Devcontainers", func(t *testing.T) {
t.Parallel()
agentID := uuid.New()
subAgentID := uuid.New()
devcontainerID := uuid.New()
devcontainerID2 := uuid.New()
tests := []struct {
name string
resource *sdkproto.Resource
wantErr string
protoIDsOnly bool // when true, only run with insertWithProtoIDs (e.g., for UUID parsing error tests)
expectSubAgentCount int
check func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, subAgents []database.WorkspaceAgent, useProtoIDs bool)
}{
{
name: "OK",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{
{Id: devcontainerID.String(), Name: "foo", WorkspaceFolder: "/workspace1"},
{Id: devcontainerID2.String(), Name: "bar", WorkspaceFolder: "/workspace2", ConfigPath: "/workspace2/.devcontainer/devcontainer.json"},
},
}},
db, _ := dbtestutil.NewDB(t)
job := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{})
err := insert(db, job.ID, &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{
{Name: "foo", WorkspaceFolder: "/workspace1"},
{Name: "bar", WorkspaceFolder: "/workspace2", ConfigPath: "/workspace2/.devcontainer/devcontainer.json"},
},
expectSubAgentCount: 0,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, _ []database.WorkspaceAgent, useProtoIDs bool) {
require.Equal(t, "dev", parentAgent.Name)
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, parentAgent.ID)
require.NoError(t, err)
sort.Slice(devcontainers, func(i, j int) bool {
return devcontainers[i].Name > devcontainers[j].Name
})
require.Len(t, devcontainers, 2)
if useProtoIDs {
assert.Equal(t, devcontainerID, devcontainers[0].ID)
assert.Equal(t, devcontainerID2, devcontainers[1].ID)
} else {
assert.NotEqual(t, uuid.Nil, devcontainers[0].ID)
assert.NotEqual(t, uuid.Nil, devcontainers[1].ID)
}
assert.Equal(t, "foo", devcontainers[0].Name)
assert.Equal(t, "/workspace1", devcontainers[0].WorkspaceFolder)
assert.Equal(t, "", devcontainers[0].ConfigPath)
assert.False(t, devcontainers[0].SubagentID.Valid)
assert.Equal(t, "bar", devcontainers[1].Name)
assert.Equal(t, "/workspace2", devcontainers[1].WorkspaceFolder)
assert.Equal(t, "/workspace2/.devcontainer/devcontainer.json", devcontainers[1].ConfigPath)
assert.False(t, devcontainers[1].SubagentID.Valid)
},
},
{
name: "SubAgentWithAllResources",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Architecture: "amd64",
OperatingSystem: "linux",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "full-subagent",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "code-server", DisplayName: "VS Code", Url: "http://localhost:8080"},
},
Scripts: []*sdkproto.Script{
{DisplayName: "Startup", Script: "echo start", RunOnStart: true},
},
Envs: []*sdkproto.Env{
{Name: "EDITOR", Value: "vim"},
},
}},
}},
},
expectSubAgentCount: 1,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, subAgents []database.WorkspaceAgent, useProtoIDs bool) {
require.Len(t, subAgents, 1)
subAgent := subAgents[0]
if useProtoIDs {
require.Equal(t, subAgentID, subAgent.ID)
} else {
require.NotEqual(t, uuid.Nil, subAgent.ID)
}
assert.Equal(t, parentAgent.ID, subAgent.ParentID.UUID)
assert.Equal(t, parentAgent.Architecture, subAgent.Architecture)
assert.Equal(t, parentAgent.OperatingSystem, subAgent.OperatingSystem)
apps, err := db.GetWorkspaceAppsByAgentID(ctx, subAgent.ID)
require.NoError(t, err)
require.Len(t, apps, 1)
assert.Equal(t, "code-server", apps[0].Slug)
scripts, err := db.GetWorkspaceAgentScriptsByAgentIDs(ctx, []uuid.UUID{subAgent.ID})
require.NoError(t, err)
require.Len(t, scripts, 1)
assert.Equal(t, "Startup", scripts[0].DisplayName)
var envVars map[string]string
err = json.Unmarshal(subAgent.EnvironmentVariables.RawMessage, &envVars)
require.NoError(t, err)
assert.Equal(t, "vim", envVars["EDITOR"])
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, parentAgent.ID)
require.NoError(t, err)
require.Len(t, devcontainers, 1)
assert.True(t, devcontainers[0].SubagentID.Valid)
if useProtoIDs {
assert.Equal(t, subAgentID, devcontainers[0].SubagentID.UUID)
} else {
assert.Equal(t, subAgent.ID, devcontainers[0].SubagentID.UUID)
}
},
},
{
name: "MultipleDevcontainersWithSubagents",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{
{
Id: devcontainerID.String(),
Name: "frontend",
WorkspaceFolder: "/workspace/frontend",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "frontend-app", DisplayName: "Frontend"},
},
},
{
Id: devcontainerID2.String(),
Name: "backend",
WorkspaceFolder: "/workspace/backend",
SubagentId: uuid.New().String(),
Apps: []*sdkproto.App{
{Slug: "backend-app", DisplayName: "Backend"},
},
},
},
}},
},
expectSubAgentCount: 2,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, subAgents []database.WorkspaceAgent, _ bool) {
for _, subAgent := range subAgents {
apps, err := db.GetWorkspaceAppsByAgentID(ctx, subAgent.ID)
require.NoError(t, err)
require.Len(t, apps, 1, "each subagent should have exactly one app")
}
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, parentAgent.ID)
require.NoError(t, err)
require.Len(t, devcontainers, 2)
for _, dc := range devcontainers {
assert.True(t, dc.SubagentID.Valid, "devcontainer %s should have subagent", dc.Name)
}
},
},
{
name: "SubAgentDuplicateAppSlugs",
wantErr: `duplicate app slug, must be unique per template: "my-app"`,
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "with-dup-apps",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "my-app", DisplayName: "App 1"},
{Slug: "my-app", DisplayName: "App 2"},
},
}},
}},
},
},
{
name: "SubAgentInvalidAppSlug",
wantErr: `app slug "Invalid_Slug" does not match regex`,
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "with-invalid-app",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "Invalid_Slug", DisplayName: "Bad App"},
},
}},
}},
},
},
{
name: "SubAgentAppSlugConflictsWithParentAgent",
wantErr: `duplicate app slug, must be unique per template: "shared-app"`,
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Apps: []*sdkproto.App{
{Slug: "shared-app", DisplayName: "Parent App"},
},
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "dc",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "shared-app", DisplayName: "Child App"},
},
}},
}},
},
},
{
name: "SubAgentAppSlugConflictsBetweenSubagents",
wantErr: `duplicate app slug, must be unique per template: "conflicting-app"`,
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{
{
Id: devcontainerID.String(),
Name: "dc1",
WorkspaceFolder: "/workspace1",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "conflicting-app", DisplayName: "App in DC1"},
},
},
{
Id: devcontainerID2.String(),
Name: "dc2",
WorkspaceFolder: "/workspace2",
SubagentId: uuid.New().String(),
Apps: []*sdkproto.App{
{Slug: "conflicting-app", DisplayName: "App in DC2"},
},
},
},
}},
},
},
{
name: "SubAgentInvalidSubagentID",
wantErr: "parse subagent id",
protoIDsOnly: true, // UUID parsing errors only occur with proto IDs
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "invalid-subagent",
WorkspaceFolder: "/workspace",
SubagentId: "not-a-valid-uuid",
Apps: []*sdkproto.App{{Slug: "app", DisplayName: "App"}},
}},
}},
},
},
{
name: "SubAgentInvalidAppID",
wantErr: "parse app uuid",
protoIDsOnly: true, // UUID parsing errors only occur with proto IDs
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "with-invalid-app-id",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{{Id: "not-a-uuid", Slug: "my-app", DisplayName: "App"}},
}},
}},
},
},
{
// This test verifies the backward-compatibility behavior where a
// devcontainer with a SubagentId but no apps, scripts, or envs does
// NOT create a subagent.
name: "SubAgentBackwardCompatNoResources",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "no-resources",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
// Intentionally no Apps, Scripts, or Envs.
}},
}},
},
expectSubAgentCount: 0,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, _ []database.WorkspaceAgent, _ bool) {
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, parentAgent.ID)
require.NoError(t, err)
require.Len(t, devcontainers, 1)
assert.Equal(t, "no-resources", devcontainers[0].Name)
assert.False(t, devcontainers[0].SubagentID.Valid,
"devcontainer with SubagentId but no apps/scripts/envs should not have a subagent (backward compatibility)")
},
},
}
for _, tt := range tests {
for _, useProtoIDs := range []bool{false, true} {
if tt.protoIDsOnly && !useProtoIDs {
continue
}
name := tt.name
if useProtoIDs {
name += "/WithProtoIDs"
} else {
name += "/WithoutProtoIDs"
}
t.Run(name, func(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
job := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{})
var err error
if useProtoIDs {
err = insertWithProtoIDs(db, job.ID, tt.resource)
} else {
err = insert(db, job.ID, tt.resource)
}
if tt.wantErr != "" {
require.ErrorContains(t, err, tt.wantErr)
return
}
require.NoError(t, err)
resources, err := db.GetWorkspaceResourcesByJobID(ctx, job.ID)
require.NoError(t, err)
require.Len(t, resources, 1)
agents, err := db.GetWorkspaceAgentsByResourceIDs(ctx, []uuid.UUID{resources[0].ID})
require.NoError(t, err)
var parentAgent database.WorkspaceAgent
var subAgents []database.WorkspaceAgent
for _, agent := range agents {
if agent.ParentID.Valid {
subAgents = append(subAgents, agent)
} else {
parentAgent = agent
}
}
require.NotEqual(t, uuid.Nil, parentAgent.ID)
require.Len(t, subAgents, tt.expectSubAgentCount, "expected %d subagents", tt.expectSubAgentCount)
tt.check(t, db, parentAgent, subAgents, useProtoIDs)
})
}
}
}},
})
require.NoError(t, err)
resources, err := db.GetWorkspaceResourcesByJobID(ctx, job.ID)
require.NoError(t, err)
require.Len(t, resources, 1)
agents, err := db.GetWorkspaceAgentsByResourceIDs(ctx, []uuid.UUID{resources[0].ID})
require.NoError(t, err)
require.Len(t, agents, 1)
agent := agents[0]
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, agent.ID)
sort.Slice(devcontainers, func(i, j int) bool {
return devcontainers[i].Name > devcontainers[j].Name
})
require.NoError(t, err)
require.Len(t, devcontainers, 2)
require.Equal(t, "foo", devcontainers[0].Name)
require.Equal(t, "/workspace1", devcontainers[0].WorkspaceFolder)
require.Equal(t, "", devcontainers[0].ConfigPath)
require.Equal(t, "bar", devcontainers[1].Name)
require.Equal(t, "/workspace2", devcontainers[1].WorkspaceFolder)
require.Equal(t, "/workspace2/.devcontainer/devcontainer.json", devcontainers[1].ConfigPath)
})
}
@@ -3,7 +3,6 @@ package rbac
import (
"fmt"
"strings"
-"sync/atomic"
"github.com/google/uuid"
"golang.org/x/xerrors"
@@ -240,16 +239,16 @@ func (z Object) WithGroupACL(groups map[string][]policy.Action) Object {
// TODO(geokat): similar to builtInRoles, this should ideally be
// scoped to a coderd rather than a global.
-var workspaceACLDisabled atomic.Bool
+var workspaceACLDisabled bool
// SetWorkspaceACLDisabled disables/enables workspace sharing for the
// deployment.
func SetWorkspaceACLDisabled(v bool) {
-	workspaceACLDisabled.Store(v)
+	workspaceACLDisabled = v
}
// WorkspaceACLDisabled returns true if workspace sharing is disabled
// for the deployment.
func WorkspaceACLDisabled() bool {
-	return workspaceACLDisabled.Load()
+	return workspaceACLDisabled
}
@@ -156,7 +156,6 @@ func Users(query string) (database.GetUsersParams, []codersdk.ValidationError) {
parser := httpapi.NewQueryParamParser()
filter := database.GetUsersParams{
Search: parser.String(values, "", "search"),
-Name: parser.String(values, "", "name"),
Status: httpapi.ParseCustomList(parser, values, []database.UserStatus{}, "status", httpapi.ParseEnum[database.UserStatus]),
RbacRole: parser.Strings(values, []string{}, "role"),
LastSeenAfter: parser.Time3339Nano(values, time.Time{}, "last_seen_after"),
@@ -754,49 +754,6 @@ func TestSearchUsers(t *testing.T) {
},
},
// Name filter tests
{
Name: "NameFilter",
Query: "name:John",
Expected: database.GetUsersParams{
Name: "john",
Status: []database.UserStatus{},
RbacRole: []string{},
LoginType: []database.LoginType{},
},
},
{
Name: "NameFilterQuoted",
Query: `name:"John Doe"`,
Expected: database.GetUsersParams{
Name: "john doe",
Status: []database.UserStatus{},
RbacRole: []string{},
LoginType: []database.LoginType{},
},
},
{
Name: "NameFilterWithSearch",
Query: "name:John search:johnd",
Expected: database.GetUsersParams{
Search: "johnd",
Name: "john",
Status: []database.UserStatus{},
RbacRole: []string{},
LoginType: []database.LoginType{},
},
},
{
Name: "NameFilterWithOtherParams",
Query: "name:John status:active role:owner",
Expected: database.GetUsersParams{
Name: "john",
Status: []database.UserStatus{database.UserStatusActive},
RbacRole: []string{codersdk.RoleOwner},
LoginType: []database.LoginType{},
},
},
// Failures
{
Name: "ExtraColon",
@@ -177,7 +177,7 @@ func generateFromPrompt(prompt string) (TaskName, error) {
// Ensure display name is never empty
displayName = strings.ReplaceAll(name, "-", " ")
}
-displayName = strings.ToUpper(displayName[:1]) + displayName[1:]
+displayName = strutil.Capitalize(displayName)
return TaskName{
Name: taskName,
@@ -269,7 +269,7 @@ func generateFromAnthropic(ctx context.Context, prompt string, apiKey string, mo
// Ensure display name is never empty
displayName = strings.ReplaceAll(taskNameResponse.Name, "-", " ")
}
-displayName = strings.ToUpper(displayName[:1]) + displayName[1:]
+displayName = strutil.Capitalize(displayName)
return TaskName{
Name: name,
@@ -49,6 +49,19 @@ func TestGenerate(t *testing.T) {
require.NotEmpty(t, taskName.DisplayName)
})
t.Run("FromPromptMultiByte", func(t *testing.T) {
t.Setenv("ANTHROPIC_API_KEY", "")
ctx := testutil.Context(t, testutil.WaitShort)
taskName := taskname.Generate(ctx, testutil.Logger(t), "über cool feature")
require.NoError(t, codersdk.NameValid(taskName.Name))
require.NotEmpty(t, taskName.DisplayName)
// The display name must start with "Ü", not corrupted bytes.
require.Equal(t, "Über cool feature", taskName.DisplayName)
})
t.Run("Fallback", func(t *testing.T) {
// Ensure no API key
t.Setenv("ANTHROPIC_API_KEY", "")
@@ -1977,13 +1977,10 @@ func TestTemplateVersionPatch(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
user := coderdtest.CreateFirstUser(t, client)
-version1 := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
-	ctvr.Name = "v1"
-})
+version1 := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version1.ID)
version2 := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
-	ctvr.Name = "v2"
	ctvr.TemplateID = template.ID
})
