Compare commits


1 Commit

Author SHA1 Message Date
Garrett Delfosse ce2aed9002 feat: move boundary code from coder/boundary to enterprise/cli/boundary 2026-01-14 13:07:05 -05:00
646 changed files with 15097 additions and 26982 deletions
-79
@@ -1,79 +0,0 @@
---
name: doc-check
description: Checks if code changes require documentation updates
---
# Documentation Check Skill
Review code changes and determine if documentation updates or new documentation
is needed.
## Workflow
1. **Get the code changes** - Use the method provided in the prompt, or if none
specified:
- For a PR: `gh pr diff <PR_NUMBER> --repo coder/coder`
- For local changes: `git diff main` or `git diff --staged`
- For a branch: `git diff main...<branch>`
2. **Understand the scope** - Consider what changed:
- Is this user-facing or internal?
- Does it change behavior, APIs, CLI flags, or configuration?
- Even for "internal" or "chore" changes, always verify the actual diff
3. **Search the docs** for related content in `docs/`
4. **Decide what's needed**:
- Do existing docs need updates to match the code?
- Is new documentation needed for undocumented features?
- Or is everything already covered?
5. **Report findings** - Use the method provided in the prompt, or if none
specified, summarize findings directly
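A minimal sketch of steps 1–3, assuming the diff has been saved to a file (the file names and paths below are illustrative stand-ins, not from a real PR):

```sh
# In practice the diff comes from step 1, e.g. `gh pr diff <PR_NUMBER> --repo coder/coder`;
# a two-file diff stands in here so the commands run anywhere.
printf '+++ b/cli/server.go\n+++ b/docs/admin/setup.md\n' > /tmp/pr.diff

# Step 2: list the files the PR touches.
grep '^+++ b/' /tmp/pr.diff | sed 's|^+++ b/||'

# Step 3: of those, see which already live under docs/ (a PR that edits
# docs alongside code may need no further documentation work).
grep '^+++ b/' /tmp/pr.diff | sed 's|^+++ b/||' | grep '^docs/'
```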
## What to Check
- **Accuracy**: Does documentation match current code behavior?
- **Completeness**: Are new features/options documented?
- **Examples**: Do code examples still work?
- **CLI/API changes**: Are new flags, endpoints, or options documented?
- **Configuration**: Are new environment variables or settings documented?
- **Breaking changes**: Are migration steps documented if needed?
- **Premium features**: Should docs indicate `(Premium)` in the title?
## Key Documentation Info
- **`docs/manifest.json`** - Navigation structure; new pages MUST be added here
- **`docs/reference/cli/*.md`** - Auto-generated from Go code, don't edit directly
- **Premium features** - H1 title should include `(Premium)` suffix
## Coder-Specific Patterns
### Callouts
Use GitHub-Flavored Markdown alerts:
```markdown
> [!NOTE]
> Additional helpful information.

> [!WARNING]
> Important warning about potential issues.

> [!TIP]
> Helpful tip for users.
```
### CLI Documentation
CLI docs in `docs/reference/cli/` are auto-generated. Don't suggest editing them
directly. Instead, make changes in the Go code that defines the CLI commands
(typically in the `cli/` directory).
### Code Examples
Use `sh` for shell commands:
```sh
coder server --flag-name value
```
+1 -1
@@ -1,4 +1,4 @@
#!/bin/sh
# Start Docker service if not already running.
sudo service docker status >/dev/null 2>&1 || sudo service docker start
sudo service docker start
+2 -2
@@ -7,6 +7,6 @@ runs:
- name: go install tools
shell: bash
run: |
./.github/scripts/retry.sh -- go install tool
go install tool
# NOTE: protoc-gen-go cannot be installed with `go get`
./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
+4 -4
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.25.6"
default: "1.24.10"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
@@ -22,14 +22,14 @@ runs:
- name: Install gotestsum
shell: bash
run: ./.github/scripts/retry.sh -- go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
run: go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
- name: Install mtimehash
shell: bash
run: ./.github/scripts/retry.sh -- go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
run: go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
# It isn't necessary that we ever do this, but it helps
# separate the "setup" from the "run" times.
- name: go mod download
shell: bash
run: ./.github/scripts/retry.sh -- go mod download -x
run: go mod download -x
+1 -1
@@ -14,4 +14,4 @@ runs:
# - https://github.com/sqlc-dev/sqlc/pull/4159
shell: bash
run: |
./.github/scripts/retry.sh -- env CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
-50
@@ -1,50 +0,0 @@
#!/usr/bin/env bash
# Retry a command with exponential backoff.
#
# Usage: retry.sh [--max-attempts N] -- <command...>
#
# Example:
# retry.sh --max-attempts 3 -- go install gotest.tools/gotestsum@latest
#
# This will retry the command up to 3 times with exponential backoff
# (2s, 4s, 8s delays between attempts).
set -euo pipefail
# shellcheck source=scripts/lib.sh
source "$(dirname "${BASH_SOURCE[0]}")/../../scripts/lib.sh"
max_attempts=3
args="$(getopt -o "" -l max-attempts: -- "$@")"
eval set -- "$args"
while true; do
case "$1" in
--max-attempts)
max_attempts="$2"
shift 2
;;
--)
shift
break
;;
*)
error "Unrecognized option: $1"
;;
esac
done
if [[ $# -lt 1 ]]; then
error "Usage: retry.sh [--max-attempts N] -- <command...>"
fi
attempt=1
until "$@"; do
if ((attempt >= max_attempts)); then
error "Command failed after $max_attempts attempts: $*"
fi
delay=$((2 ** attempt))
log "Attempt $attempt/$max_attempts failed, retrying in ${delay}s..."
sleep "$delay"
((attempt++))
done
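The backoff schedule the header comment describes (2s, 4s, 8s) follows directly from `delay=$((2 ** attempt))` in the loop above; checked in isolation:

```sh
# Delays the script sleeps after failed attempts 1, 2, and 3.
for attempt in 1 2 3; do
  echo "$((2 ** attempt))s"
done
# -> 2s 4s 8s
```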
+29 -41
@@ -40,7 +40,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -124,7 +124,7 @@ jobs:
# runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
# steps:
# - name: Checkout
# uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
# uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
# with:
# fetch-depth: 1
# # See: https://github.com/stefanzweifel/git-auto-commit-action?tab=readme-ov-file#commits-made-by-this-action-do-not-trigger-new-workflow-runs
@@ -162,7 +162,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -176,12 +176,12 @@ jobs:
- name: Get golangci-lint cache dir
run: |
linter_ver=$(grep -Eo 'GOLANGCI_LINT_VERSION=\S+' dogfood/coder/Dockerfile | cut -d '=' -f 2)
./.github/scripts/retry.sh -- go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
dir=$(golangci-lint cache status | awk '/Dir/ { print $2 }')
echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"
- name: golangci-lint cache
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
with:
path: |
${{ env.LINT_CACHE_DIR }}
@@ -256,7 +256,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -313,7 +313,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -329,7 +329,7 @@ jobs:
uses: ./.github/actions/setup-go
- name: Install shfmt
run: ./.github/scripts/retry.sh -- go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
run: go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
- name: make fmt
timeout-minutes: 7
@@ -386,7 +386,7 @@ jobs:
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -395,18 +395,6 @@ jobs:
id: go-paths
uses: ./.github/actions/setup-go-paths
# macOS default bash and coreutils are too old for our scripts
# (lib.sh requires bash 4+, GNU getopt, make 4+).
- name: Setup GNU tools (macOS)
if: runner.os == 'macOS'
run: |
brew install bash gnu-getopt make
{
echo "$(brew --prefix bash)/bin"
echo "$(brew --prefix gnu-getopt)/bin"
echo "$(brew --prefix make)/libexec/gnubin"
} >> "$GITHUB_PATH"
- name: Setup Go
uses: ./.github/actions/setup-go
with:
@@ -571,7 +559,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -633,7 +621,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -705,7 +693,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -732,7 +720,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -765,7 +753,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -845,7 +833,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# 👇 Ensures Chromatic can read your full git history
fetch-depth: 0
@@ -861,7 +849,7 @@ jobs:
# the check to pass. This is desired in PRs, but not in mainline.
- name: Publish to Chromatic (non-mainline)
if: github.ref != 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@07791f8243f4cb2698bf4d00426baf4b2d1cb7e0 # v13.3.5
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -893,7 +881,7 @@ jobs:
# infinitely "in progress" in mainline unless we re-review each build.
- name: Publish to Chromatic (mainline)
if: github.ref == 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@07791f8243f4cb2698bf4d00426baf4b2d1cb7e0 # v13.3.5
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -926,7 +914,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
# 0 is required here for version.sh to work.
fetch-depth: 0
@@ -1030,7 +1018,7 @@ jobs:
steps:
# Harden Runner doesn't work on macOS
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -1080,7 +1068,7 @@ jobs:
- name: Build dylibs
run: |
set -euxo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
make gen/mark-fresh
make build/coder-dylib
@@ -1117,7 +1105,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -1129,10 +1117,10 @@ jobs:
uses: ./.github/actions/setup-go
- name: Install go-winres
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nfpm
run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
- name: Install zstd
run: sudo apt-get install -y zstd
@@ -1140,7 +1128,7 @@ jobs:
- name: Build
run: |
set -euxo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
make gen/mark-fresh
make build
@@ -1172,7 +1160,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -1219,10 +1207,10 @@ jobs:
java-version: "11.0"
- name: Install go-winres
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nfpm
run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
- name: Install zstd
run: sudo apt-get install -y zstd
@@ -1270,7 +1258,7 @@ jobs:
- name: Build
run: |
set -euxo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
version="$(./scripts/version.sh)"
tag="main-${version//+/-}"
@@ -1569,7 +1557,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
@@ -215,7 +215,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
+1 -1
@@ -249,7 +249,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
+3 -3
@@ -41,7 +41,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -70,7 +70,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -151,7 +151,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
+73 -265
@@ -2,26 +2,14 @@
# It creates a Coder Task that uses AI to analyze the PR changes,
# search existing docs, and comment with recommendations.
#
# Triggers:
# - New PR opened: Initial documentation review
# - PR updated (synchronize): Re-review after changes
# - Label "doc-check" added: Manual trigger for review
# - PR marked ready for review: Review when draft is promoted
# - Workflow dispatch: Manual run with PR URL
#
# Note: This workflow requires access to secrets and will be skipped for:
# - Any PR where secrets are not available
# For these PRs, maintainers can manually trigger via workflow_dispatch.
# Triggered by: Adding the "doc-check" label to a PR, or manual dispatch.
name: AI Documentation Check
on:
pull_request:
types:
- opened
- synchronize
- labeled
- ready_for_review
workflow_dispatch:
inputs:
pr_url:
@@ -38,16 +26,8 @@ jobs:
doc-check:
name: Analyze PR for Documentation Updates Needed
runs-on: ubuntu-latest
# Run on: opened, synchronize, labeled (with doc-check label), ready_for_review, or workflow_dispatch
# Skip draft PRs unless manually triggered
if: |
(
github.event.action == 'opened' ||
github.event.action == 'synchronize' ||
github.event.label.name == 'doc-check' ||
github.event.action == 'ready_for_review' ||
github.event_name == 'workflow_dispatch'
) &&
(github.event.label.name == 'doc-check' || github.event_name == 'workflow_dispatch') &&
(github.event.pull_request.draft == false || github.event_name == 'workflow_dispatch')
timeout-minutes: 30
env:
@@ -59,154 +39,120 @@ jobs:
actions: write
steps:
- name: Check if secrets are available
id: check-secrets
env:
CODER_URL: ${{ secrets.DOC_CHECK_CODER_URL }}
CODER_TOKEN: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
run: |
if [[ -z "${CODER_URL}" || -z "${CODER_TOKEN}" ]]; then
echo "skip=true" >> "${GITHUB_OUTPUT}"
echo "Secrets not available - skipping doc-check."
echo "This is expected for PRs where secrets are not available."
echo "Maintainers can manually trigger via workflow_dispatch if needed."
{
echo "⚠️ Workflow skipped: Secrets not available"
echo ""
echo "This workflow requires secrets that are unavailable for this run."
echo "Maintainers can manually trigger via workflow_dispatch if needed."
} >> "${GITHUB_STEP_SUMMARY}"
else
echo "skip=false" >> "${GITHUB_OUTPUT}"
fi
- name: Setup Coder CLI
if: steps.check-secrets.outputs.skip != 'true'
uses: coder/setup-action@4a607a8113d4e676e2d7c34caa20a814bc88bfda # v1
with:
access_url: ${{ secrets.DOC_CHECK_CODER_URL }}
coder_session_token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
- name: Determine PR Context
if: steps.check-secrets.outputs.skip != 'true'
id: determine-context
env:
GITHUB_ACTOR: ${{ github.actor }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_ACTION: ${{ github.event.action }}
GITHUB_EVENT_PR_HTML_URL: ${{ github.event.pull_request.html_url }}
GITHUB_EVENT_PR_NUMBER: ${{ github.event.pull_request.number }}
GITHUB_EVENT_SENDER_ID: ${{ github.event.sender.id }}
GITHUB_EVENT_SENDER_LOGIN: ${{ github.event.sender.login }}
INPUTS_PR_URL: ${{ inputs.pr_url }}
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || '' }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"
# Determine trigger type for task context
# For workflow_dispatch, use the provided PR URL
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
echo "trigger_type=manual" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${INPUTS_PR_URL}"
# Validate PR URL format
if [[ ! "${INPUTS_PR_URL}" =~ ^https://github\.com/[^/]+/[^/]+/pull/[0-9]+$ ]]; then
echo "::error::Invalid PR URL format: ${INPUTS_PR_URL}"
echo "::error::Expected format: https://github.com/owner/repo/pull/NUMBER"
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
exit 1
fi
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${INPUTS_PR_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${INPUTS_PR_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
# Extract PR number from URL for later use
PR_NUMBER=$(echo "${INPUTS_PR_URL}" | grep -oP '(?<=pull/)\d+')
echo "pr_number=${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
elif [[ "${GITHUB_EVENT_NAME}" == "pull_request" ]]; then
GITHUB_USER_ID=${GITHUB_EVENT_SENDER_ID}
echo "Using label adder: ${GITHUB_EVENT_SENDER_LOGIN} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_EVENT_SENDER_LOGIN}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${GITHUB_EVENT_PR_HTML_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${GITHUB_EVENT_PR_HTML_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
echo "pr_number=${GITHUB_EVENT_PR_NUMBER}" >> "${GITHUB_OUTPUT}"
# Set trigger type based on action
case "${GITHUB_EVENT_ACTION}" in
opened)
echo "trigger_type=new_pr" >> "${GITHUB_OUTPUT}"
;;
synchronize)
echo "trigger_type=pr_updated" >> "${GITHUB_OUTPUT}"
;;
labeled)
echo "trigger_type=label_requested" >> "${GITHUB_OUTPUT}"
;;
ready_for_review)
echo "trigger_type=ready_for_review" >> "${GITHUB_OUTPUT}"
;;
*)
echo "trigger_type=unknown" >> "${GITHUB_OUTPUT}"
;;
esac
else
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
exit 1
fi
- name: Build task prompt
if: steps.check-secrets.outputs.skip != 'true'
- name: Extract changed files and build prompt
id: extract-context
env:
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
TRIGGER_TYPE: ${{ steps.determine-context.outputs.trigger_type }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Analyzing PR #${PR_NUMBER} (trigger: ${TRIGGER_TYPE})"
echo "Analyzing PR #${PR_NUMBER}"
# Build context based on trigger type
case "${TRIGGER_TYPE}" in
new_pr)
CONTEXT="This is a NEW PR. Perform a thorough documentation review."
;;
pr_updated)
CONTEXT="This PR was UPDATED with new commits. Only comment if the changes affect documentation needs or address previous feedback."
;;
label_requested)
CONTEXT="A documentation review was REQUESTED via label. Perform a thorough documentation review."
;;
ready_for_review)
CONTEXT="This PR was marked READY FOR REVIEW (converted from draft). Perform a thorough documentation review."
;;
manual)
CONTEXT="This is a MANUAL review request. Perform a thorough documentation review."
;;
*)
CONTEXT="Perform a thorough documentation review."
;;
esac
# Build task prompt - using unquoted heredoc so variables expand
TASK_PROMPT=$(cat <<EOF
Review PR #${PR_NUMBER} and determine if documentation needs updating or creating.
# Build task prompt with PR-specific context
TASK_PROMPT="Use the doc-check skill to review PR #${PR_NUMBER} in coder/coder.
PR URL: ${PR_URL}
${CONTEXT}
WORKFLOW:
1. Setup (repo is pre-cloned at ~/coder)
cd ~/coder
git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER}
git checkout pr-${PR_NUMBER}
Use \`gh\` to get PR details, diff, and all comments. Check for previous doc-check comments (from coder-doc-check) and only post a new comment if it adds value.
2. Get PR info
Use GitHub MCP tools to get PR title, body, and diff
Or use: git diff main...pr-${PR_NUMBER}
**Do not comment if no documentation changes are needed.**
3. Understand Changes
Read the diff and identify what changed
Ask: Is this user-facing? Does it change behavior? Is it a new feature?
## Comment format
4. Search for Related Docs
cat ~/coder/docs/manifest.json | jq '.routes[] | {title, path}' | head -50
grep -ri "relevant_term" ~/coder/docs/ --include="*.md"
Use this structure (only include relevant sections):
5. Decide
NEEDS DOCS if: New feature, API change, CLI change, behavior change, user-visible
NO DOCS if: Internal refactor, test-only, already documented, non-user-facing, dependency updates
FIRST check: Did this PR already update docs? If yes and complete, say "No Changes Needed"
\`\`\`
## Documentation Check
6. Comment on the PR using this format
### Previous Feedback
[For re-reviews only: Addressed | Partially addressed | Not yet addressed]
COMMENT FORMAT:
## 📚 Documentation Check
### Updates Needed
- [ ] \`docs/path/file.md\` - [what needs to change]
### Updates Needed
- **[docs/path/file.md](github_link)** - Brief what needs changing
### New Documentation Needed
- [ ] \`docs/suggested/path.md\` - [what should be documented]
### 📝 New Docs Needed
- **docs/suggested/location.md** - What should be documented
### ✨ No Changes Needed
[Reason: Documents already updated in PR | Internal changes only | Test-only | No user-facing impact]
---
*Automated review via [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
\`\`\`"
*This comment was generated by an AI Agent through [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
DOCS STRUCTURE:
Read ~/coder/docs/manifest.json for the complete documentation structure.
Common areas include: reference/, admin/, user-guides/, ai-coder/, install/, tutorials/
But check manifest.json - it has everything.
EOF
)
# Output the prompt
{
@@ -216,8 +162,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
if: steps.check-secrets.outputs.skip != 'true'
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
@@ -226,24 +171,22 @@ jobs:
repository: coder/create-task-action
- name: Create Coder Task for Documentation Check
if: steps.check-secrets.outputs.skip != 'true'
id: create_task
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.DOC_CHECK_CODER_URL }}
coder-token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
coder-organization: "default"
coder-template-name: coder-workflow-bot
coder-template-name: coder
coder-template-preset: ${{ steps.determine-context.outputs.template_preset }}
coder-task-name-prefix: doc-check
coder-task-prompt: ${{ steps.extract-context.outputs.task_prompt }}
coder-username: doc-check-bot
github-user-id: ${{ steps.determine-context.outputs.github_user_id }}
github-token: ${{ github.token }}
github-issue-url: ${{ steps.determine-context.outputs.pr_url }}
comment-on-issue: false
comment-on-issue: true
- name: Write Task Info
if: steps.check-secrets.outputs.skip != 'true'
- name: Write outputs
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
@@ -258,140 +201,5 @@ jobs:
echo "**Task name:** ${TASK_NAME}"
echo "**Task URL:** ${TASK_URL}"
echo ""
} >> "${GITHUB_STEP_SUMMARY}"
- name: Wait for Task Completion
if: steps.check-secrets.outputs.skip != 'true'
id: wait_task
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
echo "Waiting for task to complete..."
echo "Task name: ${TASK_NAME}"
if [[ -z "${TASK_NAME}" ]]; then
echo "::error::TASK_NAME is empty"
exit 1
fi
MAX_WAIT=600 # 10 minutes
WAITED=0
POLL_INTERVAL=3
LAST_STATUS=""
is_workspace_message() {
local msg="$1"
[[ -z "$msg" ]] && return 0 # Empty = treat as workspace/startup
[[ "$msg" =~ ^Workspace ]] && return 0
[[ "$msg" =~ ^Agent ]] && return 0
return 1
}
while [[ $WAITED -lt $MAX_WAIT ]]; do
# Get task status (|| true prevents set -e from exiting on non-zero)
RAW_OUTPUT=$(coder task status "${TASK_NAME}" -o json 2>&1) || true
STATUS_JSON=$(echo "$RAW_OUTPUT" | grep -v "^version mismatch\|^download v" || true)
# Debug: show first poll's raw output
if [[ $WAITED -eq 0 ]]; then
echo "Raw status output: ${RAW_OUTPUT:0:500}"
fi
if [[ -z "$STATUS_JSON" ]] || ! echo "$STATUS_JSON" | jq -e . >/dev/null 2>&1; then
if [[ "$LAST_STATUS" != "waiting" ]]; then
echo "[${WAITED}s] Waiting for task status..."
LAST_STATUS="waiting"
fi
sleep $POLL_INTERVAL
WAITED=$((WAITED + POLL_INTERVAL))
continue
fi
TASK_STATE=$(echo "$STATUS_JSON" | jq -r '.current_state.state // "unknown"')
TASK_MESSAGE=$(echo "$STATUS_JSON" | jq -r '.current_state.message // ""')
WORKSPACE_STATUS=$(echo "$STATUS_JSON" | jq -r '.workspace_status // "unknown"')
# Build current status string for comparison
CURRENT_STATUS="${TASK_STATE}|${WORKSPACE_STATUS}|${TASK_MESSAGE}"
# Only log if status changed
if [[ "$CURRENT_STATUS" != "$LAST_STATUS" ]]; then
if [[ "$TASK_STATE" == "idle" ]] && is_workspace_message "$TASK_MESSAGE"; then
echo "[${WAITED}s] Workspace ready, waiting for Agent..."
else
echo "[${WAITED}s] State: ${TASK_STATE} | Workspace: ${WORKSPACE_STATUS} | ${TASK_MESSAGE}"
fi
LAST_STATUS="$CURRENT_STATUS"
fi
if [[ "$WORKSPACE_STATUS" == "failed" || "$WORKSPACE_STATUS" == "canceled" ]]; then
echo "::error::Workspace failed: ${WORKSPACE_STATUS}"
exit 1
fi
if [[ "$TASK_STATE" == "idle" ]]; then
if ! is_workspace_message "$TASK_MESSAGE"; then
# Real completion message from Claude!
echo ""
echo "Task completed: ${TASK_MESSAGE}"
RESULT_URI=$(echo "$STATUS_JSON" | jq -r '.current_state.uri // ""')
echo "result_uri=${RESULT_URI}" >> "${GITHUB_OUTPUT}"
echo "task_message=${TASK_MESSAGE}" >> "${GITHUB_OUTPUT}"
break
fi
fi
sleep $POLL_INTERVAL
WAITED=$((WAITED + POLL_INTERVAL))
done
if [[ $WAITED -ge $MAX_WAIT ]]; then
echo "::error::Task monitoring timed out after ${MAX_WAIT}s"
exit 1
fi
- name: Fetch Task Logs
if: always() && steps.check-secrets.outputs.skip != 'true'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
echo "::group::Task Conversation Log"
if [[ -n "${TASK_NAME}" ]]; then
coder task logs "${TASK_NAME}" 2>&1 || echo "Failed to fetch logs"
else
echo "No task name, skipping log fetch"
fi
echo "::endgroup::"
- name: Cleanup Task
if: always() && steps.check-secrets.outputs.skip != 'true'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
run: |
if [[ -n "${TASK_NAME}" ]]; then
echo "Deleting task: ${TASK_NAME}"
coder task delete "${TASK_NAME}" -y 2>&1 || echo "Task deletion failed or already deleted"
else
echo "No task name, skipping cleanup"
fi
- name: Write Final Summary
if: always() && steps.check-secrets.outputs.skip != 'true'
env:
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_MESSAGE: ${{ steps.wait_task.outputs.task_message }}
RESULT_URI: ${{ steps.wait_task.outputs.result_uri }}
PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
run: |
{
echo ""
echo "---"
echo "### Result"
echo ""
echo "**Status:** ${TASK_MESSAGE:-Task completed}"
if [[ -n "${RESULT_URI}" ]]; then
echo "**Comment:** ${RESULT_URI}"
fi
echo ""
echo "Task \`${TASK_NAME}\` has been cleaned up."
echo "The Coder task is analyzing the PR changes and will comment with documentation recommendations."
} >> "${GITHUB_STEP_SUMMARY}"
+1 -1
@@ -43,7 +43,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
+1 -1
@@ -23,7 +23,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
+3 -3
@@ -31,7 +31,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
@@ -42,7 +42,7 @@ jobs:
# on version 2.29 and above.
nix_version: "2.28.5"
- uses: nix-community/cache-nix-action@106bba72ed8e29c8357661199511ef07790175e9 # v7.0.1
- uses: nix-community/cache-nix-action@b426b118b6dc86d6952988d396aa7c6b09776d08 # v7.0.0
with:
# restore and save a cache using this key
primary-key: nix-${{ runner.os }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
@@ -130,7 +130,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
+1 -1
@@ -54,7 +54,7 @@ jobs:
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
+4 -4
@@ -44,7 +44,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
@@ -81,7 +81,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -233,7 +233,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -337,7 +337,7 @@ jobs:
kubectl create namespace "pr${PR_NUMBER}"
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
+7 -7
@@ -65,7 +65,7 @@ jobs:
steps:
# Harden Runner doesn't work on macOS.
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -121,7 +121,7 @@ jobs:
- name: Build dylibs
run: |
set -euxo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
make gen/mark-fresh
make build/coder-dylib
@@ -169,7 +169,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -259,7 +259,7 @@ jobs:
java-version: "11.0"
- name: Install go-winres
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nsis and zstd
run: sudo apt-get install -y nsis zstd
@@ -341,7 +341,7 @@ jobs:
- name: Build binaries
run: |
set -euo pipefail
./.github/scripts/retry.sh -- go mod download
go mod download
version="$(./scripts/version.sh)"
make gen/mark-fresh
@@ -888,7 +888,7 @@ jobs:
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -976,7 +976,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
persist-credentials: false
+1 -1
@@ -25,7 +25,7 @@ jobs:
egress-policy: audit
- name: "Checkout code"
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
+5 -5
@@ -32,7 +32,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
@@ -74,7 +74,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: false
@@ -97,11 +97,11 @@ jobs:
- name: Install yq
run: go run github.com/mikefarah/yq/v4@v4.44.3
- name: Install mockgen
run: ./.github/scripts/retry.sh -- go install go.uber.org/mock/mockgen@v0.6.0
run: go install go.uber.org/mock/mockgen@v0.5.0
- name: Install protoc-gen-go
run: ./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
run: go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
- name: Install protoc-gen-go-drpc
run: ./.github/scripts/retry.sh -- go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
run: go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
- name: Install Protoc
run: |
# protoc must be in lockstep with our dogfood Dockerfile or the
+1 -1
@@ -101,7 +101,7 @@ jobs:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
- name: Run delete-old-branches-action
+1 -1
@@ -153,7 +153,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
+1 -1
@@ -26,7 +26,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
persist-credentials: false
+8
@@ -211,6 +211,14 @@ issues:
- path: scripts/rules.go
linters:
- ALL
# Boundary code is imported from github.com/coder/boundary and has different
# lint standards. Suppress lint issues in this imported code.
- path: enterprise/cli/boundary/
linters:
- revive
- gocritic
- gosec
- errorlint
fix: true
max-issues-per-linter: 0
+1 -10
@@ -69,9 +69,6 @@ MOST_GO_SRC_FILES := $(shell \
# All the shell files in the repo, excluding ignored files.
SHELL_SRC_FILES := $(shell find . $(FIND_EXCLUSIONS) -type f -name '*.sh')
MIGRATION_FILES := $(shell find ./coderd/database/migrations/ -maxdepth 1 $(FIND_EXCLUSIONS) -type f -name '*.sql')
FIXTURE_FILES := $(shell find ./coderd/database/migrations/testdata/fixtures/ $(FIND_EXCLUSIONS) -type f -name '*.sql')
# Ensure we don't use the user's git configs which might cause side-effects
GIT_FLAGS = GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_SYSTEM=/dev/null
@@ -564,7 +561,7 @@ endif
# Note: we don't run zizmor in the lint target because it takes a while. CI
# runs it explicitly.
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes lint/migrations
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes
.PHONY: lint
lint/site-icons:
@@ -622,12 +619,6 @@ lint/check-scopes: coderd/database/dump.sql
go run ./scripts/check-scopes
.PHONY: lint/check-scopes
# Verify migrations do not hardcode the public schema.
lint/migrations:
./scripts/check_pg_schema.sh "Migrations" $(MIGRATION_FILES)
./scripts/check_pg_schema.sh "Fixtures" $(FIXTURE_FILES)
.PHONY: lint/migrations
# All files generated by the database should be added here, and this can be used
# as a target for jobs that need to run after the database is generated.
DB_GEN_FILES := \
+6 -15
@@ -40,7 +40,6 @@ import (
"github.com/coder/clistat"
"github.com/coder/coder/v2/agent/agentcontainers"
"github.com/coder/coder/v2/agent/agentexec"
"github.com/coder/coder/v2/agent/agentfiles"
"github.com/coder/coder/v2/agent/agentscripts"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agentssh"
@@ -296,8 +295,6 @@ type agent struct {
containerAPIOptions []agentcontainers.Option
containerAPI *agentcontainers.API
filesAPI *agentfiles.API
socketServerEnabled bool
socketPath string
socketServer *agentsocket.Server
@@ -368,8 +365,6 @@ func (a *agent) init() {
a.containerAPI = agentcontainers.NewAPI(a.logger.Named("containers"), containerAPIOpts...)
a.filesAPI = agentfiles.NewAPI(a.logger.Named("files"), a.filesystem)
a.reconnectingPTYServer = reconnectingpty.NewServer(
a.logger.Named("reconnecting-pty"),
a.sshServer,
@@ -882,16 +877,12 @@ const (
)
func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_Type, ip string) (disconnected func(code int, reason string)) {
// A blank IP can unfortunately happen if the connection is broken in a data race before we get to introspect it. We
// still report it, and the recipient can handle a blank IP.
if ip != "" {
// Remove the port from the IP because ports are not supported in coderd.
if host, _, err := net.SplitHostPort(ip); err != nil {
a.logger.Error(a.hardCtx, "split host and port for connection report failed", slog.F("ip", ip), slog.Error(err))
} else {
// Best effort.
ip = host
}
// Remove the port from the IP because ports are not supported in coderd.
if host, _, err := net.SplitHostPort(ip); err != nil {
a.logger.Error(a.hardCtx, "split host and port for connection report failed", slog.F("ip", ip), slog.Error(err))
} else {
// Best effort.
ip = host
}
// If the IP is "localhost" (which it can be in some cases), set it to
+9 -55
@@ -121,8 +121,7 @@ func TestAgent_ImmediateClose(t *testing.T) {
require.NoError(t, err)
}
// NOTE(Cian): I noticed that these tests would fail when my default shell was zsh.
// Writing "exit 0" to stdin before closing fixed the issue for me.
// NOTE: These tests only work when your default shell is bash for some reason.
func TestAgent_Stats_SSH(t *testing.T) {
t.Parallel()
@@ -149,37 +148,16 @@ func TestAgent_Stats_SSH(t *testing.T) {
require.NoError(t, err)
var s *proto.Stats
// We are looking for four different stats to be reported. They might not all
// arrive at the same time, so we loop until we've seen them all.
var connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountSSHSeen bool
require.Eventuallyf(t, func() bool {
var ok bool
s, ok = <-stats
if !ok {
return false
}
if s.ConnectionCount > 0 {
connectionCountSeen = true
}
if s.RxBytes > 0 {
rxBytesSeen = true
}
if s.TxBytes > 0 {
txBytesSeen = true
}
if s.SessionCountSsh == 1 {
sessionCountSSHSeen = true
}
return connectionCountSeen && rxBytesSeen && txBytesSeen && sessionCountSSHSeen
return ok && s.ConnectionCount > 0 && s.RxBytes > 0 && s.TxBytes > 0 && s.SessionCountSsh == 1
}, testutil.WaitLong, testutil.IntervalFast,
"never saw all stats: %+v, saw connectionCount: %t, rxBytes: %t, txBytes: %t, sessionCountSsh: %t",
s, connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountSSHSeen,
"never saw stats: %+v", s,
)
_, err = stdin.Write([]byte("exit 0\n"))
require.NoError(t, err, "writing exit to stdin")
_ = stdin.Close()
err = session.Wait()
require.NoError(t, err, "waiting for session to exit")
require.NoError(t, err)
})
}
}
@@ -205,31 +183,12 @@ func TestAgent_Stats_ReconnectingPTY(t *testing.T) {
require.NoError(t, err)
var s *proto.Stats
// We are looking for four different stats to be reported. They might not all
// arrive at the same time, so we loop until we've seen them all.
var connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountReconnectingPTYSeen bool
require.Eventuallyf(t, func() bool {
var ok bool
s, ok = <-stats
if !ok {
return false
}
if s.ConnectionCount > 0 {
connectionCountSeen = true
}
if s.RxBytes > 0 {
rxBytesSeen = true
}
if s.TxBytes > 0 {
txBytesSeen = true
}
if s.SessionCountReconnectingPty == 1 {
sessionCountReconnectingPTYSeen = true
}
return connectionCountSeen && rxBytesSeen && txBytesSeen && sessionCountReconnectingPTYSeen
return ok && s.ConnectionCount > 0 && s.RxBytes > 0 && s.TxBytes > 0 && s.SessionCountReconnectingPty == 1
}, testutil.WaitLong, testutil.IntervalFast,
"never saw all stats: %+v, saw connectionCount: %t, rxBytes: %t, txBytes: %t, sessionCountReconnectingPTY: %t",
s, connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountReconnectingPTYSeen,
"never saw stats: %+v", s,
)
}
@@ -259,10 +218,9 @@ func TestAgent_Stats_Magic(t *testing.T) {
require.NoError(t, err)
require.Equal(t, expected, strings.TrimSpace(string(output)))
})
t.Run("TracksVSCode", func(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
if runtime.GOOS == "windows" {
t.Skip("Sleeping for infinity doesn't work on Windows")
}
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
@@ -294,9 +252,7 @@ func TestAgent_Stats_Magic(t *testing.T) {
}, testutil.WaitLong, testutil.IntervalFast,
"never saw stats",
)
_, err = stdin.Write([]byte("exit 0\n"))
require.NoError(t, err, "writing exit to stdin")
// The shell will automatically exit if there is no stdin!
_ = stdin.Close()
err = session.Wait()
require.NoError(t, err)
@@ -3677,11 +3633,9 @@ func TestAgent_Metrics_SSH(t *testing.T) {
}
}
_, err = stdin.Write([]byte("exit 0\n"))
require.NoError(t, err, "writing exit to stdin")
_ = stdin.Close()
err = session.Wait()
require.NoError(t, err, "waiting for session to exit")
require.NoError(t, err)
}
// echoOnce accepts a single connection, reads 4 bytes and echos them back
+1 -4
@@ -779,13 +779,10 @@ func (api *API) watchContainers(rw http.ResponseWriter, r *http.Request) {
// close frames.
_ = conn.CloseRead(context.Background())
ctx, cancel := context.WithCancel(ctx)
defer cancel()
ctx, wsNetConn := codersdk.WebsocketNetConn(ctx, conn, websocket.MessageText)
defer wsNetConn.Close()
go httpapi.HeartbeatClose(ctx, api.logger, cancel, conn)
go httpapi.Heartbeat(ctx, conn)
updateCh := make(chan struct{}, 1)
-36
@@ -1,36 +0,0 @@
package agentfiles
import (
"net/http"
"github.com/go-chi/chi/v5"
"github.com/spf13/afero"
"cdr.dev/slog/v3"
)
// API exposes file-related operations performed through the agent.
type API struct {
logger slog.Logger
filesystem afero.Fs
}
func NewAPI(logger slog.Logger, filesystem afero.Fs) *API {
api := &API{
logger: logger,
filesystem: filesystem,
}
return api
}
// Routes returns the HTTP handler for file-related routes.
func (api *API) Routes() http.Handler {
r := chi.NewRouter()
r.Post("/list-directory", api.HandleLS)
r.Get("/read-file", api.HandleReadFile)
r.Post("/write-file", api.HandleWriteFile)
r.Post("/edit-files", api.HandleEditFiles)
return r
}
+4 -2
@@ -27,8 +27,6 @@ func (a *agent) apiHandler() http.Handler {
})
})
r.Mount("/api/v0", a.filesAPI.Routes())
if a.devcontainers {
r.Mount("/api/v0/containers", a.containerAPI.Routes())
} else if manifest := a.manifest.Load(); manifest != nil && manifest.ParentID != uuid.Nil {
@@ -51,6 +49,10 @@ func (a *agent) apiHandler() http.Handler {
r.Get("/api/v0/listening-ports", a.listeningPortsHandler.handler)
r.Get("/api/v0/netcheck", a.HandleNetcheck)
r.Post("/api/v0/list-directory", a.HandleLS)
r.Get("/api/v0/read-file", a.HandleReadFile)
r.Post("/api/v0/write-file", a.HandleWriteFile)
r.Post("/api/v0/edit-files", a.HandleEditFiles)
r.Get("/debug/logs", a.HandleHTTPDebugLogs)
r.Get("/debug/magicsock", a.HandleHTTPDebugMagicsock)
r.Get("/debug/magicsock/debug-logging/{state}", a.HandleHTTPMagicsockDebugLoggingState)
+2 -10
@@ -78,13 +78,9 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
sink := &logSink{}
logger := slog.Make(sink)
workspaceID := uuid.New()
templateID := uuid.New()
templateVersionID := uuid.New()
reporter := &agentapi.BoundaryLogsAPI{
Log: logger,
WorkspaceID: workspaceID,
TemplateID: templateID,
TemplateVersionID: templateVersionID,
Log: logger,
WorkspaceID: workspaceID,
}
ctx, cancel := context.WithCancel(context.Background())
@@ -127,8 +123,6 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
require.Equal(t, "boundary_request", entry.Message)
require.Equal(t, "allow", getField(entry.Fields, "decision"))
require.Equal(t, workspaceID.String(), getField(entry.Fields, "workspace_id"))
require.Equal(t, templateID.String(), getField(entry.Fields, "template_id"))
require.Equal(t, templateVersionID.String(), getField(entry.Fields, "template_version_id"))
require.Equal(t, "GET", getField(entry.Fields, "http_method"))
require.Equal(t, "https://example.com/allowed", getField(entry.Fields, "http_url"))
require.Equal(t, "*.example.com", getField(entry.Fields, "matched_rule"))
@@ -161,8 +155,6 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
require.Equal(t, "boundary_request", entry.Message)
require.Equal(t, "deny", getField(entry.Fields, "decision"))
require.Equal(t, workspaceID.String(), getField(entry.Fields, "workspace_id"))
require.Equal(t, templateID.String(), getField(entry.Fields, "template_id"))
require.Equal(t, templateVersionID.String(), getField(entry.Fields, "template_version_id"))
require.Equal(t, "POST", getField(entry.Fields, "http_method"))
require.Equal(t, "https://blocked.com/denied", getField(entry.Fields, "http_url"))
require.Equal(t, nil, getField(entry.Fields, "matched_rule"))
+20 -20
@@ -1,4 +1,4 @@
package agentfiles
package agent
import (
"context"
@@ -25,7 +25,7 @@ import (
type HTTPResponseCode = int
func (api *API) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
query := r.URL.Query()
@@ -42,7 +42,7 @@ func (api *API) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
return
}
status, err := api.streamFile(ctx, rw, path, offset, limit)
status, err := a.streamFile(ctx, rw, path, offset, limit)
if err != nil {
httpapi.Write(ctx, rw, status, codersdk.Response{
Message: err.Error(),
@@ -51,12 +51,12 @@ func (api *API) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
}
}
func (api *API) streamFile(ctx context.Context, rw http.ResponseWriter, path string, offset, limit int64) (HTTPResponseCode, error) {
func (a *agent) streamFile(ctx context.Context, rw http.ResponseWriter, path string, offset, limit int64) (HTTPResponseCode, error) {
if !filepath.IsAbs(path) {
return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
}
f, err := api.filesystem.Open(path)
f, err := a.filesystem.Open(path)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -97,13 +97,13 @@ func (api *API) streamFile(ctx context.Context, rw http.ResponseWriter, path str
reader := io.NewSectionReader(f, offset, bytesToRead)
_, err = io.Copy(rw, reader)
if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
api.logger.Error(ctx, "workspace agent read file", slog.Error(err))
a.logger.Error(ctx, "workspace agent read file", slog.Error(err))
}
return 0, nil
}
func (api *API) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
query := r.URL.Query()
@@ -118,7 +118,7 @@ func (api *API) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
return
}
status, err := api.writeFile(ctx, r, path)
status, err := a.writeFile(ctx, r, path)
if err != nil {
httpapi.Write(ctx, rw, status, codersdk.Response{
Message: err.Error(),
@@ -131,13 +131,13 @@ func (api *API) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
})
}
func (api *API) writeFile(ctx context.Context, r *http.Request, path string) (HTTPResponseCode, error) {
func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HTTPResponseCode, error) {
if !filepath.IsAbs(path) {
return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
}
dir := filepath.Dir(path)
err := api.filesystem.MkdirAll(dir, 0o755)
err := a.filesystem.MkdirAll(dir, 0o755)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -149,7 +149,7 @@ func (api *API) writeFile(ctx context.Context, r *http.Request, path string) (HT
return status, err
}
f, err := api.filesystem.Create(path)
f, err := a.filesystem.Create(path)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -164,13 +164,13 @@ func (api *API) writeFile(ctx context.Context, r *http.Request, path string) (HT
_, err = io.Copy(f, r.Body)
if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
api.logger.Error(ctx, "workspace agent write file", slog.Error(err))
a.logger.Error(ctx, "workspace agent write file", slog.Error(err))
}
return 0, nil
}
func (api *API) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var req workspacesdk.FileEditRequest
@@ -188,7 +188,7 @@ func (api *API) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
var combinedErr error
status := http.StatusOK
for _, edit := range req.Files {
s, err := api.editFile(r.Context(), edit.Path, edit.Edits)
s, err := a.editFile(r.Context(), edit.Path, edit.Edits)
// Keep the highest response status, so 500 will be preferred over 400, etc.
if s > status {
status = s
@@ -210,7 +210,7 @@ func (api *API) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
})
}
func (api *API) editFile(ctx context.Context, path string, edits []workspacesdk.FileEdit) (int, error) {
func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.FileEdit) (int, error) {
if path == "" {
return http.StatusBadRequest, xerrors.New("\"path\" is required")
}
@@ -223,7 +223,7 @@ func (api *API) editFile(ctx context.Context, path string, edits []workspacesdk.
return http.StatusBadRequest, xerrors.New("must specify at least one edit")
}
f, err := api.filesystem.Open(path)
f, err := a.filesystem.Open(path)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -252,7 +252,7 @@ func (api *API) editFile(ctx context.Context, path string, edits []workspacesdk.
// Create an adjacent file to ensure it will be on the same device and can be
// moved atomically.
tmpfile, err := afero.TempFile(api.filesystem, filepath.Dir(path), filepath.Base(path))
tmpfile, err := afero.TempFile(a.filesystem, filepath.Dir(path), filepath.Base(path))
if err != nil {
return http.StatusInternalServerError, err
}
@@ -260,13 +260,13 @@ func (api *API) editFile(ctx context.Context, path string, edits []workspacesdk.
_, err = io.Copy(tmpfile, replace.Chain(f, transforms...))
if err != nil {
if rerr := api.filesystem.Remove(tmpfile.Name()); rerr != nil {
api.logger.Warn(ctx, "unable to clean up temp file", slog.Error(rerr))
if rerr := a.filesystem.Remove(tmpfile.Name()); rerr != nil {
a.logger.Warn(ctx, "unable to clean up temp file", slog.Error(rerr))
}
return http.StatusInternalServerError, xerrors.Errorf("edit %s: %w", path, err)
}
err = api.filesystem.Rename(tmpfile.Name(), path)
err = a.filesystem.Rename(tmpfile.Name(), path)
if err != nil {
return http.StatusInternalServerError, err
}
@@ -1,13 +1,11 @@
package agentfiles_test
package agent_test
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"runtime"
@@ -18,10 +16,10 @@ import (
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"cdr.dev/slog/v3/sloggers/slogtest"
"github.com/coder/coder/v2/agent/agentfiles"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/coder/v2/testutil"
)
@@ -108,15 +106,15 @@ func TestReadFile(t *testing.T) {
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms")
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
if file == noPermsFilePath {
return os.ErrPermission
}
return nil
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath {
return os.ErrPermission
}
return nil
})
})
api := agentfiles.NewAPI(logger, fs)
dirPath := filepath.Join(tmpdir, "a-directory")
err := fs.MkdirAll(dirPath, 0o755)
@@ -262,22 +260,19 @@ func TestReadFile(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
w := httptest.NewRecorder()
r := httptest.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("/read-file?path=%s&offset=%d&limit=%d", tt.path, tt.offset, tt.limit), nil)
api.Routes().ServeHTTP(w, r)
reader, mimeType, err := conn.ReadFile(ctx, tt.path, tt.offset, tt.limit)
if tt.errCode != 0 {
got := &codersdk.Error{}
err := json.NewDecoder(w.Body).Decode(got)
require.NoError(t, err)
require.ErrorContains(t, got, tt.error)
require.Equal(t, tt.errCode, w.Code)
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
require.Contains(t, cerr.Error(), tt.error)
require.Equal(t, tt.errCode, cerr.StatusCode())
} else {
bytes, err := io.ReadAll(w.Body)
require.NoError(t, err)
defer reader.Close()
bytes, err := io.ReadAll(reader)
require.NoError(t, err)
require.Equal(t, tt.bytes, bytes)
require.Equal(t, tt.mimeType, w.Header().Get("Content-Type"))
require.Equal(t, http.StatusOK, w.Code)
require.Equal(t, tt.mimeType, mimeType)
}
})
}
@@ -289,14 +284,15 @@ func TestWriteFile(t *testing.T) {
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
noPermsDirPath := filepath.Join(tmpdir, "no-perms-dir")
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
if file == noPermsFilePath || file == noPermsDirPath {
return os.ErrPermission
}
return nil
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath || file == noPermsDirPath {
return os.ErrPermission
}
return nil
})
})
api := agentfiles.NewAPI(logger, fs)
dirPath := filepath.Join(tmpdir, "directory")
err := fs.MkdirAll(dirPath, 0o755)
@@ -375,21 +371,17 @@ func TestWriteFile(t *testing.T) {
defer cancel()
reader := bytes.NewReader(tt.bytes)
w := httptest.NewRecorder()
r := httptest.NewRequestWithContext(ctx, http.MethodPost, fmt.Sprintf("/write-file?path=%s", tt.path), reader)
api.Routes().ServeHTTP(w, r)
err := conn.WriteFile(ctx, tt.path, reader)
if tt.errCode != 0 {
got := &codersdk.Error{}
err := json.NewDecoder(w.Body).Decode(got)
require.NoError(t, err)
require.ErrorContains(t, got, tt.error)
require.Equal(t, tt.errCode, w.Code)
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
require.Contains(t, cerr.Error(), tt.error)
require.Equal(t, tt.errCode, cerr.StatusCode())
} else {
bytes, err := afero.ReadFile(fs, tt.path)
require.NoError(t, err)
require.Equal(t, tt.bytes, bytes)
require.Equal(t, http.StatusOK, w.Code)
b, err := afero.ReadFile(fs, tt.path)
require.NoError(t, err)
require.Equal(t, tt.bytes, b)
}
})
}
@@ -401,20 +393,21 @@ func TestEditFiles(t *testing.T) {
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
failRenameFilePath := filepath.Join(tmpdir, "fail-rename")
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
if file == noPermsFilePath {
return &os.PathError{
Op: call,
Path: file,
Err: os.ErrPermission,
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath {
return &os.PathError{
Op: call,
Path: file,
Err: os.ErrPermission,
}
} else if file == failRenameFilePath && call == "rename" {
return xerrors.New("rename failed")
}
} else if file == failRenameFilePath && call == "rename" {
return xerrors.New("rename failed")
}
return nil
return nil
})
})
api := agentfiles.NewAPI(logger, fs)
dirPath := filepath.Join(tmpdir, "directory")
err := fs.MkdirAll(dirPath, 0o755)
@@ -708,26 +701,16 @@ func TestEditFiles(t *testing.T) {
require.NoError(t, err)
}
buf := bytes.NewBuffer(nil)
enc := json.NewEncoder(buf)
enc.SetEscapeHTML(false)
err := enc.Encode(workspacesdk.FileEditRequest{Files: tt.edits})
require.NoError(t, err)
w := httptest.NewRecorder()
r := httptest.NewRequestWithContext(ctx, http.MethodPost, "/edit-files", buf)
api.Routes().ServeHTTP(w, r)
err := conn.EditFiles(ctx, workspacesdk.FileEditRequest{Files: tt.edits})
if tt.errCode != 0 {
got := &codersdk.Error{}
err := json.NewDecoder(w.Body).Decode(got)
require.NoError(t, err)
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
for _, error := range tt.errors {
require.ErrorContains(t, got, error)
require.Contains(t, cerr.Error(), error)
}
require.Equal(t, tt.errCode, w.Code)
require.Equal(t, tt.errCode, cerr.StatusCode())
} else {
require.Equal(t, http.StatusOK, w.Code)
require.NoError(t, err)
}
for path, expect := range tt.expected {
b, err := afero.ReadFile(fs, path)
@@ -81,10 +81,6 @@ type BackedPipe struct {
// Unified error handling with generation filtering
errChan chan ErrorEvent
// forceReconnectHook is a test hook invoked after ForceReconnect registers
// with the singleflight group.
forceReconnectHook func()
// singleflight group to dedupe concurrent ForceReconnect calls
sf singleflight.Group
@@ -328,13 +324,6 @@ func (bp *BackedPipe) handleConnectionError(errorEvt ErrorEvent) {
}
}
// SetForceReconnectHookForTests sets a hook invoked after ForceReconnect
// registers with the singleflight group. It must be set before any
// concurrent ForceReconnect calls.
func (bp *BackedPipe) SetForceReconnectHookForTests(hook func()) {
bp.forceReconnectHook = hook
}
// ForceReconnect forces a reconnection attempt immediately.
// This can be used to force a reconnection if a new connection is established.
// It prevents duplicate reconnections when called concurrently.
@@ -342,7 +331,7 @@ func (bp *BackedPipe) ForceReconnect() error {
// Deduplicate concurrent ForceReconnect calls so only one reconnection
// attempt runs at a time from this API. Use the pipe's internal context
// to ensure Close() cancels any in-flight attempt.
resultChan := bp.sf.DoChan("force-reconnect", func() (interface{}, error) {
_, err, _ := bp.sf.Do("force-reconnect", func() (interface{}, error) {
bp.mu.Lock()
defer bp.mu.Unlock()
@@ -357,11 +346,5 @@ func (bp *BackedPipe) ForceReconnect() error {
return nil, bp.reconnectLocked()
})
if hook := bp.forceReconnectHook; hook != nil {
hook()
}
result := <-resultChan
return result.Err
return err
}
@@ -742,15 +742,12 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
const numConcurrent = 3
startSignals := make([]chan struct{}, numConcurrent)
startedSignals := make([]chan struct{}, numConcurrent)
for i := range startSignals {
startSignals[i] = make(chan struct{})
startedSignals[i] = make(chan struct{})
}
enteredSignals := make(chan struct{}, numConcurrent)
bp.SetForceReconnectHookForTests(func() {
enteredSignals <- struct{}{}
})
errors := make([]error, numConcurrent)
var wg sync.WaitGroup
@@ -761,12 +758,15 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
defer wg.Done()
// Wait for the signal to start
<-startSignals[idx]
// Signal that we're about to call ForceReconnect
close(startedSignals[idx])
errors[idx] = bp.ForceReconnect()
}(i)
}
// Start the first ForceReconnect and wait for it to block
close(startSignals[0])
<-startedSignals[0]
// Wait for the first reconnect to actually start and block
testutil.RequireReceive(testCtx, t, blockedChan)
@@ -777,9 +777,9 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
close(startSignals[i])
}
// Wait for all ForceReconnect calls to join the singleflight operation.
for i := 0; i < numConcurrent; i++ {
testutil.RequireReceive(testCtx, t, enteredSignals)
// Wait for all additional goroutines to have started their calls
for i := 1; i < numConcurrent; i++ {
<-startedSignals[i]
}
// At this point, one reconnect has started and is blocked,
+3 -3
@@ -1,4 +1,4 @@
package agentfiles
package agent
import (
"errors"
@@ -21,7 +21,7 @@ import (
var WindowsDriveRegex = regexp.MustCompile(`^[a-zA-Z]:\\$`)
func (api *API) HandleLS(rw http.ResponseWriter, r *http.Request) {
func (a *agent) HandleLS(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// An absolute path may be optionally provided, otherwise a path split into an
@@ -43,7 +43,7 @@ func (api *API) HandleLS(rw http.ResponseWriter, r *http.Request) {
return
}
resp, err := listFiles(api.filesystem, path, req)
resp, err := listFiles(a.filesystem, path, req)
if err != nil {
status := http.StatusInternalServerError
switch {
@@ -1,4 +1,4 @@
package agentfiles
package agent
import (
"os"
+2 -2
@@ -7,6 +7,6 @@ func IsInitProcess() bool {
return false
}
func ForkReap(_ ...Option) (int, error) {
return 0, nil
func ForkReap(_ ...Option) error {
return nil
}
+2 -37
@@ -32,13 +32,12 @@ func TestReap(t *testing.T) {
}
pids := make(reap.PidCh, 1)
exitCode, err := reaper.ForkReap(
err := reaper.ForkReap(
reaper.WithPIDCallback(pids),
// Provide some argument that immediately exits.
reaper.WithExecArgs("/bin/sh", "-c", "exit 0"),
)
require.NoError(t, err)
require.Equal(t, 0, exitCode)
cmd := exec.Command("tail", "-f", "/dev/null")
err = cmd.Start()
@@ -66,36 +65,6 @@ func TestReap(t *testing.T) {
}
}
//nolint:paralleltest
func TestForkReapExitCodes(t *testing.T) {
if testutil.InCI() {
t.Skip("Detected CI, skipping reaper tests")
}
tests := []struct {
name string
command string
expectedCode int
}{
{"exit 0", "exit 0", 0},
{"exit 1", "exit 1", 1},
{"exit 42", "exit 42", 42},
{"exit 255", "exit 255", 255},
{"SIGKILL", "kill -9 $$", 128 + 9},
{"SIGTERM", "kill -15 $$", 128 + 15},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
exitCode, err := reaper.ForkReap(
reaper.WithExecArgs("/bin/sh", "-c", tt.command),
)
require.NoError(t, err)
require.Equal(t, tt.expectedCode, exitCode, "exit code mismatch for %q", tt.command)
})
}
}
//nolint:paralleltest // Signal handling.
func TestReapInterrupt(t *testing.T) {
// Don't run the reaper test in CI. It does weird
@@ -115,17 +84,13 @@ func TestReapInterrupt(t *testing.T) {
defer signal.Stop(usrSig)
go func() {
exitCode, err := reaper.ForkReap(
errC <- reaper.ForkReap(
reaper.WithPIDCallback(pids),
reaper.WithCatchSignals(os.Interrupt),
// Signal propagation does not extend to children of children, so
// we create a little bash script to ensure sleep is interrupted.
reaper.WithExecArgs("/bin/sh", "-c", fmt.Sprintf("pid=0; trap 'kill -USR2 %d; kill -TERM $pid' INT; sleep 10 &\npid=$!; kill -USR1 %d; wait", os.Getpid(), os.Getpid())),
)
// The child exits with 128 + SIGTERM (15) = 143, but the trap catches
// SIGINT and sends SIGTERM to the sleep process, so exit code varies.
_ = exitCode
errC <- err
}()
require.Equal(t, <-usrSig, syscall.SIGUSR1)
+4 -20
@@ -40,10 +40,7 @@ func catchSignals(pid int, sigs []os.Signal) {
// the reaper and an exec.Command waiting for its process to complete.
// The provided 'pids' channel may be nil if the caller does not care about the
// reaped children PIDs.
//
// Returns the child's exit code (using 128+signal for signal termination)
// and any error from Wait4.
func ForkReap(opt ...Option) (int, error) {
func ForkReap(opt ...Option) error {
opts := &options{
ExecArgs: os.Args,
}
@@ -56,7 +53,7 @@ func ForkReap(opt ...Option) (int, error) {
pwd, err := os.Getwd()
if err != nil {
return 1, xerrors.Errorf("get wd: %w", err)
return xerrors.Errorf("get wd: %w", err)
}
pattrs := &syscall.ProcAttr{
@@ -75,7 +72,7 @@ func ForkReap(opt ...Option) (int, error) {
//#nosec G204
pid, err := syscall.ForkExec(opts.ExecArgs[0], opts.ExecArgs, pattrs)
if err != nil {
return 1, xerrors.Errorf("fork exec: %w", err)
return xerrors.Errorf("fork exec: %w", err)
}
go catchSignals(pid, opts.CatchSignals)
@@ -85,18 +82,5 @@ func ForkReap(opt ...Option) (int, error) {
for xerrors.Is(err, syscall.EINTR) {
_, err = syscall.Wait4(pid, &wstatus, 0, nil)
}
// Convert wait status to exit code using standard Unix conventions:
// - Normal exit: use the exit code
// - Signal termination: use 128 + signal number
var exitCode int
switch {
case wstatus.Exited():
exitCode = wstatus.ExitStatus()
case wstatus.Signaled():
exitCode = 128 + int(wstatus.Signal())
default:
exitCode = 1
}
return exitCode, err
return err
}
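The logic this hunk removes from `ForkReap` is the standard Unix wait-status conversion: a normally exited child keeps its exit status, while a signal-killed child maps to 128 plus the signal number. A hedged sketch of that conversion as a pure function (the helper name is hypothetical; the real code reads these predicates off `syscall.WaitStatus`):

```go
package main

import "fmt"

// exitCodeFrom maps a child's termination to the shell convention:
// normal exit keeps its status, death by signal becomes 128 + signal,
// and anything unrecognized falls back to a generic failure code.
func exitCodeFrom(exited bool, status int, signaled bool, sig int) int {
	switch {
	case exited:
		return status
	case signaled:
		return 128 + sig
	default:
		return 1
	}
}

func main() {
	fmt.Println(exitCodeFrom(true, 42, false, 0)) // 42
	fmt.Println(exitCodeFrom(false, 0, true, 9))  // 137 (SIGKILL)
	fmt.Println(exitCodeFrom(false, 0, true, 15)) // 143 (SIGTERM)
}
```

This is the same arithmetic the deleted `TestForkReapExitCodes` table exercised end to end (`kill -9 $$` expecting 137, `kill -15 $$` expecting 143).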
+3 -3
@@ -136,7 +136,7 @@ func workspaceAgent() *serpent.Command {
// to do this else we fork bomb ourselves.
//nolint:gocritic
args := append(os.Args, "--no-reap")
exitCode, err := reaper.ForkReap(
err := reaper.ForkReap(
reaper.WithExecArgs(args...),
reaper.WithCatchSignals(StopSignals...),
)
@@ -145,8 +145,8 @@ func workspaceAgent() *serpent.Command {
return xerrors.Errorf("fork reap: %w", err)
}
logger.Info(ctx, "reaper child process exited", slog.F("exit_code", exitCode))
return ExitError(exitCode, nil)
logger.Info(ctx, "reaper process exiting")
return nil
}
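The comment in this hunk ("else we fork bomb ourselves") is the crux: the agent re-executes its own binary as the reaper's child, so it must append a sentinel flag (`--no-reap` in the diff) that makes the child skip the reaper path. A minimal sketch of that argument-building step (the helper is hypothetical; only the `--no-reap` flag comes from the diff):

```go
package main

import "fmt"

// reexecArgs returns the argument vector for re-executing the current
// binary with a sentinel flag appended, so the child takes the non-reaper
// code path. Without the sentinel, every child would fork another
// reaper, which forks another child: a fork bomb.
func reexecArgs(args []string, sentinel string) []string {
	for _, a := range args {
		if a == sentinel {
			return args // already marked; do not append again
		}
	}
	out := make([]string, 0, len(args)+1)
	out = append(out, args...)
	return append(out, sentinel)
}

func main() {
	fmt.Println(reexecArgs([]string{"coder", "agent"}, "--no-reap"))
	fmt.Println(reexecArgs([]string{"coder", "agent", "--no-reap"}, "--no-reap"))
}
```

The idempotence check is a belt-and-suspenders addition in this sketch; the real code relies on the child parsing `--no-reap` and never reaching the fork call at all.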
// Handle interrupt signals to allow for graceful shutdown,
+11 -2
@@ -10,8 +10,12 @@ import (
"github.com/coder/serpent"
)
func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.TemplateVersionParameter, name, defaultValue string) (string, error) {
label := name
func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.TemplateVersionParameter, defaultOverrides map[string]string) (string, error) {
label := templateVersionParameter.Name
if templateVersionParameter.DisplayName != "" {
label = templateVersionParameter.DisplayName
}
if templateVersionParameter.Ephemeral {
label += pretty.Sprint(DefaultStyles.Warn, " (build option)")
}
@@ -22,6 +26,11 @@ func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.Te
_, _ = fmt.Fprintln(inv.Stdout, " "+strings.TrimSpace(strings.Join(strings.Split(templateVersionParameter.DescriptionPlaintext, "\n"), "\n "))+"\n")
}
defaultValue := templateVersionParameter.DefaultValue
if v, ok := defaultOverrides[templateVersionParameter.Name]; ok {
defaultValue = v
}
var err error
var value string
switch {
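The `RichParameter` change above replaces per-call `name`/`defaultValue` arguments with two lookups on the parameter itself: the prompt label prefers `DisplayName` over `Name`, and a CLI-supplied entry in `defaultOverrides` wins over the template's `DefaultValue`. A hedged sketch of those two lookups in isolation (the `parameter` struct here is a simplified stand-in for `codersdk.TemplateVersionParameter`):

```go
package main

import "fmt"

type parameter struct {
	Name, DisplayName, DefaultValue string
}

// promptLabel mirrors the label choice in the hunk above: use the
// human-friendly display name when set, else fall back to the name.
func promptLabel(p parameter) string {
	if p.DisplayName != "" {
		return p.DisplayName
	}
	return p.Name
}

// effectiveDefault lets a caller-provided override (e.g. from
// --parameter-default) win over the template's own default value.
func effectiveDefault(p parameter, overrides map[string]string) string {
	if v, ok := overrides[p.Name]; ok {
		return v
	}
	return p.DefaultValue
}

func main() {
	p := parameter{Name: "region", DisplayName: "Region", DefaultValue: "us-east"}
	fmt.Println(promptLabel(p))                                         // Region
	fmt.Println(effectiveDefault(p, map[string]string{"region": "eu"})) // eu
	fmt.Println(effectiveDefault(p, nil))                               // us-east
}
```

Passing the whole overrides map instead of a pre-resolved string is what lets one `RichParameter` signature serve every parameter in the template.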
+2 -2
@@ -32,12 +32,12 @@ type PromptOptions struct {
const skipPromptFlag = "yes"
// SkipPromptOption adds a "--yes/-y" flag to the cmd that can be used to skip
// confirmation prompts.
// prompts.
func SkipPromptOption() serpent.Option {
return serpent.Option{
Flag: skipPromptFlag,
FlagShorthand: "y",
Description: "Bypass confirmation prompts.",
Description: "Bypass prompts.",
// Discard
Value: serpent.BoolOf(new(bool)),
}
-5
@@ -491,11 +491,6 @@ func (m multiSelectModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case tea.KeySpace:
options := m.filteredOptions()
if m.enableCustomInput && m.cursor == len(options) {
return m, nil
}
if len(options) != 0 {
options[m.cursor].chosen = !options[m.cursor].chosen
}
+5 -17
@@ -42,10 +42,9 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
stopAfter time.Duration
workspaceName string
parameterFlags workspaceParameterFlags
autoUpdates string
copyParametersFrom string
useParameterDefaults bool
parameterFlags workspaceParameterFlags
autoUpdates string
copyParametersFrom string
// Organization context is only required if more than 1 template
// shares the same name across multiple organizations.
orgContext = NewOrganizationContext()
@@ -309,7 +308,7 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
displayAppliedPreset(inv, preset, presetParameters)
} else {
// Inform the user that no preset was applied
_, _ = fmt.Fprintf(inv.Stdout, "%s\n", cliui.Bold("No preset applied."))
_, _ = fmt.Fprintf(inv.Stdout, "%s", cliui.Bold("No preset applied."))
}
if opts.BeforeCreate != nil {
@@ -330,8 +329,6 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
RichParameterDefaults: cliBuildParameterDefaults,
SourceWorkspaceParameters: sourceWorkspaceParameters,
UseParameterDefaults: useParameterDefaults,
})
if err != nil {
return xerrors.Errorf("prepare build: %w", err)
@@ -438,12 +435,6 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
Description: "Specify the source workspace name to copy parameters from.",
Value: serpent.StringOf(&copyParametersFrom),
},
serpent.Option{
Flag: "use-parameter-defaults",
Env: "CODER_WORKSPACE_USE_PARAMETER_DEFAULTS",
Description: "Automatically accept parameter defaults when no value is provided.",
Value: serpent.BoolOf(&useParameterDefaults),
},
cliui.SkipPromptOption(),
)
cmd.Options = append(cmd.Options, parameterFlags.cliParameters()...)
@@ -468,8 +459,6 @@ type prepWorkspaceBuildArgs struct {
RichParameters []codersdk.WorkspaceBuildParameter
RichParameterFile string
RichParameterDefaults []codersdk.WorkspaceBuildParameter
UseParameterDefaults bool
}
// resolvePreset returns the preset matching the given presetName (if specified),
@@ -572,8 +561,7 @@ func prepWorkspaceBuild(inv *serpent.Invocation, client *codersdk.Client, args p
WithPromptRichParameters(args.PromptRichParameters).
WithRichParameters(args.RichParameters).
WithRichParametersFile(parameterFile).
WithRichParametersDefaults(args.RichParameterDefaults).
WithUseParameterDefaults(args.UseParameterDefaults)
WithRichParametersDefaults(args.RichParameterDefaults)
buildParameters, err := resolver.Resolve(inv, args.Action, templateVersionParameters)
if err != nil {
return nil, err
+340 -428
@@ -139,15 +139,12 @@ func TestCreate(t *testing.T) {
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Create a new version
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
@@ -321,438 +318,353 @@ func prepareEchoResponses(parameters []*proto.RichParameter, presets ...*proto.P
}
}
type param struct {
name string
ptype string
value string
mutable bool
}
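The rewrite below replaces several copy-pasted subtests with a table whose rows carry optional hook functions (`setup`, `handlePty`, `postRun`); a nil hook is simply skipped. Outside the `testing` package, the same shape looks like this (a sketch with illustrative names):

```go
package main

import "fmt"

type testCase struct {
	name  string
	setup func() []string          // optional: returns extra CLI args
	check func(args []string) bool // optional: verifies the outcome
}

// run walks the table, nil-checking each optional hook before calling
// it, exactly as the rewritten test does with setup/handlePty/postRun.
func run(cases []testCase) int {
	passed := 0
	for _, tc := range cases {
		args := []string{"create", "my-workspace"}
		if tc.setup != nil {
			args = append(args, tc.setup()...)
		}
		if tc.check == nil || tc.check(args) {
			passed++
		}
	}
	return passed
}

func main() {
	cases := []testCase{
		{name: "no hooks"},
		{
			name:  "with setup",
			setup: func() []string { return []string{"--yes"} },
			check: func(args []string) bool { return args[len(args)-1] == "--yes" },
		},
	}
	fmt.Println(run(cases), "of", len(cases), "cases passed")
}
```

Optional function fields keep the table flat: cases that need no special behavior stay one line, and the shared harness body is written once instead of once per subtest.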
func TestCreateWithRichParameters(t *testing.T) {
t.Parallel()
// Default parameters and their expected values.
params := []param{
{
name: "number_param",
ptype: "number",
value: "777",
mutable: true,
},
{
name: "string_param",
ptype: "string",
value: "qux",
mutable: true,
},
{
name: "bool_param",
// TODO: Setting the type breaks booleans. It claims the default is false
// but when you then accept this default it errors saying that the value
// must be true or false. For now, use a string.
ptype: "string",
value: "false",
mutable: true,
},
{
name: "immutable_string_param",
ptype: "string",
value: "i am eternal",
mutable: false,
},
}
const (
firstParameterName = "first_parameter"
firstParameterDescription = "This is first parameter"
firstParameterValue = "1"
type testContext struct {
client *codersdk.Client
member *codersdk.Client
owner codersdk.CreateFirstUserResponse
template codersdk.Template
workspaceName string
}
secondParameterName = "second_parameter"
secondParameterDisplayName = "Second Parameter"
secondParameterDescription = "This is second parameter"
secondParameterValue = "2"
tests := []struct {
name string
// setup runs before the command is started and return arguments that will
// be appended to the create command.
setup func() []string
// handlePty optionally runs after the command is started. It should handle
// all expected prompts from the pty.
handlePty func(pty *ptytest.PTY)
// postRun runs after the command has finished but before the workspace is
// verified. It must return the workspace name to check (used for the copy
// workspace tests).
postRun func(t *testing.T, args testContext) string
// errors contains expected errors. The workspace will not be verified if
// errors are expected.
errors []string
// inputParameters overrides the default parameters.
inputParameters []param
// expectedParameters defaults to inputParameters.
expectedParameters []param
// withDefaults sets DefaultValue to each parameter's value.
withDefaults bool
}{
{
name: "ValuesFromPrompt",
handlePty: func(pty *ptytest.PTY) {
// Enter the value for each parameter as prompted.
for _, param := range params {
pty.ExpectMatch(param.name)
pty.WriteLine(param.value)
}
// Confirm the creation.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
name: "ValuesFromDefaultFlags",
setup: func() []string {
// Provide the defaults on the command line.
args := []string{}
for _, param := range params {
args = append(args, "--parameter-default", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
handlePty: func(pty *ptytest.PTY) {
// Simply accept the defaults.
for _, param := range params {
pty.ExpectMatch(param.name)
pty.ExpectMatch(`Enter a value (default: "` + param.value + `")`)
pty.WriteLine("")
}
// Confirm the creation.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
name: "ValuesFromFile",
setup: func() []string {
// Create a file with the values.
tempDir := t.TempDir()
removeTmpDirUntilSuccessAfterTest(t, tempDir)
parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
for _, param := range params {
_, err := parameterFile.WriteString(fmt.Sprintf("%s: %s\n", param.name, param.value))
require.NoError(t, err)
}
immutableParameterName = "third_parameter"
immutableParameterDescription = "This is not mutable parameter"
immutableParameterValue = "4"
)
return []string{"--rich-parameter-file", parameterFile.Name()}
},
handlePty: func(pty *ptytest.PTY) {
// No prompts, we only need to confirm.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
name: "ValuesFromFlags",
setup: func() []string {
// Provide the values on the command line.
var args []string
for _, param := range params {
args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
handlePty: func(pty *ptytest.PTY) {
// No prompts, we only need to confirm.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
name: "MisspelledParameter",
setup: func() []string {
// Provide the values on the command line.
args := []string{}
for i, param := range params {
if i == 0 {
// Slightly misspell the first parameter with an extra character.
args = append(args, "--parameter", fmt.Sprintf("n%s=%s", param.name, param.value))
} else {
args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
}
}
return args
},
errors: []string{
"parameter \"n" + params[0].name + "\" is not present in the template",
"Did you mean: " + params[0].name,
},
},
{
name: "ValuesFromWorkspace",
setup: func() []string {
// Provide the values on the command line.
args := []string{"-y"}
for _, param := range params {
args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
postRun: func(t *testing.T, tctx testContext) string {
inv, root := clitest.New(t, "create", "--copy-parameters-from", tctx.workspaceName, "other-workspace", "-y")
clitest.SetupConfig(t, tctx.member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "failed to create a workspace based on the source workspace")
return "other-workspace"
},
},
{
name: "ValuesFromOutdatedWorkspace",
setup: func() []string {
// Provide the values on the command line.
args := []string{"-y"}
for _, param := range params {
args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
postRun: func(t *testing.T, tctx testContext) string {
// Update the template to a new version.
version2 := coderdtest.CreateTemplateVersion(t, tctx.client, tctx.owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
{Name: "another_parameter", Type: "string", DefaultValue: "not-relevant"},
}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = tctx.template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, tctx.client, version2.ID)
coderdtest.UpdateActiveTemplateVersion(t, tctx.client, tctx.template.ID, version2.ID)
// Then create the copy. It should use the old template version.
inv, root := clitest.New(t, "create", "--copy-parameters-from", tctx.workspaceName, "other-workspace", "-y")
clitest.SetupConfig(t, tctx.member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "failed to create a workspace based on the source workspace")
return "other-workspace"
},
},
{
name: "ValuesFromTemplateDefaults",
handlePty: func(pty *ptytest.PTY) {
// Simply accept the defaults.
for _, param := range params {
pty.ExpectMatch(param.name)
pty.ExpectMatch(`Enter a value (default: "` + param.value + `")`)
pty.WriteLine("")
}
// Confirm the creation.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
withDefaults: true,
},
{
name: "ValuesFromTemplateDefaultsNoPrompt",
setup: func() []string {
return []string{"--use-parameter-defaults"}
},
handlePty: func(pty *ptytest.PTY) {
// Default values should get printed.
for _, param := range params {
pty.ExpectMatch(fmt.Sprintf("%s: '%s'", param.name, param.value))
}
// No prompts, we only need to confirm.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
withDefaults: true,
},
{
name: "ValuesFromDefaultFlagsNoPrompt",
setup: func() []string {
// Provide the defaults on the command line.
args := []string{"--use-parameter-defaults"}
for _, param := range params {
args = append(args, "--parameter-default", fmt.Sprintf("%s=%s", param.name, param.value))
}
return args
},
handlePty: func(pty *ptytest.PTY) {
// Default values should get printed.
for _, param := range params {
pty.ExpectMatch(fmt.Sprintf("%s: '%s'", param.name, param.value))
}
// No prompts, we only need to confirm.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
},
{
// File and flags should override template defaults. Additionally, if a
// value has no default value we should still get a prompt for it.
name: "ValuesFromMultipleSources",
setup: func() []string {
tempDir := t.TempDir()
removeTmpDirUntilSuccessAfterTest(t, tempDir)
parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
_, err := parameterFile.WriteString(`
file_param: from file
cli_param: from file`)
require.NoError(t, err)
return []string{
"--use-parameter-defaults",
"--rich-parameter-file", parameterFile.Name(),
"--parameter-default", "file_param=from cli default",
"--parameter-default", "cli_param=from cli default",
"--parameter", "cli_param=from cli",
}
},
handlePty: func(pty *ptytest.PTY) {
// Should get prompted for the input param since it has no default.
pty.ExpectMatch("input_param")
pty.WriteLine("from input")
// Confirm the creation.
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
},
withDefaults: true,
inputParameters: []param{
{
name: "template_param",
value: "from template default",
},
{
name: "file_param",
value: "from template default",
},
{
name: "cli_param",
value: "from template default",
},
{
name: "input_param",
},
},
expectedParameters: []param{
{
name: "template_param",
value: "from template default",
},
{
name: "file_param",
value: "from file",
},
{
name: "cli_param",
value: "from cli",
},
{
name: "input_param",
value: "from input",
},
},
},
}
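The `ValuesFromMultipleSources` case above pins down the precedence the resolver must apply: an explicit `--parameter` flag beats the parameter file, which beats `--parameter-default`, which beats the template default, and a parameter with no value from any source still prompts. A hedged sketch of that resolution order (the `sources` struct and `resolve` helper are illustrative):

```go
package main

import "fmt"

// sources holds the candidate value from each place a parameter can
// come from, highest precedence first.
type sources struct {
	flag, file, cliDefault, templateDefault string
}

// resolve returns the first non-empty value in precedence order:
// --parameter > --rich-parameter-file > --parameter-default > template
// default. The boolean reports whether any source produced a value;
// when it is false, the CLI falls back to prompting the user.
func resolve(s sources) (string, bool) {
	for _, v := range []string{s.flag, s.file, s.cliDefault, s.templateDefault} {
		if v != "" {
			return v, true
		}
	}
	return "", false
}

func main() {
	v, _ := resolve(sources{flag: "from cli", file: "from file", cliDefault: "from cli default"})
	fmt.Println(v) // from cli
	v, _ = resolve(sources{file: "from file", cliDefault: "from cli default"})
	fmt.Println(v) // from file
	_, ok := resolve(sources{})
	fmt.Println(ok) // false: prompt the user
}
```

This matches the expected values in the test table: `cli_param` resolves to "from cli" despite appearing in the file and as a CLI default, `file_param` resolves to "from file" over its defaults, and `input_param` (no sources) is prompted for.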
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
parameters := params
if len(tt.inputParameters) > 0 {
parameters = tt.inputParameters
}
// Convert parameters for the echo provisioner response.
var rparams []*proto.RichParameter
for i, param := range parameters {
defaultValue := ""
if tt.withDefaults {
defaultValue = param.value
}
rparams = append(rparams, &proto.RichParameter{
Name: param.name,
Type: param.ptype,
Mutable: param.mutable,
DefaultValue: defaultValue,
Order: int32(i), //nolint:gosec
})
}
// Set up the template.
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(rparams))
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Run the command, possibly setting up values.
workspaceName := "my-workspace"
args := []string{"create", workspaceName, "--template", template.Name}
if tt.setup != nil {
args = append(args, tt.setup()...)
}
inv, root := clitest.New(t, args...)
clitest.SetupConfig(t, member, root)
doneChan := make(chan error)
pty := ptytest.New(t).Attach(inv)
go func() {
doneChan <- inv.Run()
}()
// The test may do something with the pty.
if tt.handlePty != nil {
tt.handlePty(pty)
}
// Wait for the command to exit.
err := <-doneChan
// The test may want to run additional setup like copying the workspace.
if tt.postRun != nil {
workspaceName = tt.postRun(t, testContext{
client: client,
member: member,
owner: owner,
template: template,
workspaceName: workspaceName,
})
}
if len(tt.errors) > 0 {
require.Error(t, err)
for _, errstr := range tt.errors {
assert.ErrorContains(t, err, errstr)
}
} else {
require.NoError(t, err)
// Verify the workspace was created and has the right template and values.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{Name: workspaceName})
require.NoError(t, err, "expected to find created workspace")
require.Len(t, workspaces.Workspaces, 1)
workspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
require.Equal(t, version.ID, workspaceLatestBuild.TemplateVersionID)
buildParameters, err := client.WorkspaceBuildParameters(ctx, workspaceLatestBuild.ID)
require.NoError(t, err)
if len(tt.expectedParameters) > 0 {
parameters = tt.expectedParameters
}
require.Len(t, buildParameters, len(parameters))
for _, param := range parameters {
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: param.name, Value: param.value})
}
}
echoResponses := func() *echo.Responses {
return prepareEchoResponses([]*proto.RichParameter{
{Name: firstParameterName, Description: firstParameterDescription, Mutable: true},
{Name: secondParameterName, DisplayName: secondParameterDisplayName, Description: secondParameterDescription, Mutable: true},
{Name: immutableParameterName, Description: immutableParameterDescription, Mutable: false},
})
}
t.Run("InputParameters", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name)
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
matches := []string{
firstParameterDescription, firstParameterValue,
secondParameterDisplayName, "",
secondParameterDescription, secondParameterValue,
immutableParameterDescription, immutableParameterValue,
"Confirm create?", "yes",
}
for i := 0; i < len(matches); i += 2 {
match := matches[i]
value := matches[i+1]
pty.ExpectMatch(match)
if value != "" {
pty.WriteLine(value)
}
}
<-doneChan
})
t.Run("ParametersDefaults", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
"--parameter-default", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter-default", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter-default", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
matches := []string{
firstParameterDescription, firstParameterValue,
secondParameterDescription, secondParameterValue,
immutableParameterDescription, immutableParameterValue,
}
for i := 0; i < len(matches); i += 2 {
match := matches[i]
defaultValue := matches[i+1]
pty.ExpectMatch(match)
pty.ExpectMatch(`Enter a value (default: "` + defaultValue + `")`)
pty.WriteLine("")
}
pty.ExpectMatch("Confirm create?")
pty.WriteLine("yes")
<-doneChan
// Verify that the expected default values were used.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
Name: "my-workspace",
})
require.NoError(t, err, "can't list available workspaces")
require.Len(t, workspaces.Workspaces, 1)
workspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
require.Equal(t, version.ID, workspaceLatestBuild.TemplateVersionID)
buildParameters, err := client.WorkspaceBuildParameters(ctx, workspaceLatestBuild.ID)
require.NoError(t, err)
require.Len(t, buildParameters, 3)
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
})
t.Run("RichParametersFile", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
tempDir := t.TempDir()
removeTmpDirUntilSuccessAfterTest(t, tempDir)
parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
_, _ = parameterFile.WriteString(
firstParameterName + ": " + firstParameterValue + "\n" +
secondParameterName + ": " + secondParameterValue + "\n" +
immutableParameterName + ": " + immutableParameterValue)
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "--rich-parameter-file", parameterFile.Name())
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
matches := []string{
"Confirm create?", "yes",
}
for i := 0; i < len(matches); i += 2 {
match := matches[i]
value := matches[i+1]
pty.ExpectMatch(match)
pty.WriteLine(value)
}
<-doneChan
})
t.Run("ParameterFlags", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
matches := []string{
"Confirm create?", "yes",
}
for i := 0; i < len(matches); i += 2 {
match := matches[i]
value := matches[i+1]
pty.ExpectMatch(match)
pty.WriteLine(value)
}
<-doneChan
})
t.Run("WrongParameterName/DidYouMean", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
wrongFirstParameterName := "frst-prameter"
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
"--parameter", fmt.Sprintf("%s=%s", wrongFirstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
assert.ErrorContains(t, err, "parameter \""+wrongFirstParameterName+"\" is not present in the template")
assert.ErrorContains(t, err, "Did you mean: "+firstParameterName)
})
t.Run("CopyParameters", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Firstly, create a regular workspace using template with parameters.
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "-y",
"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "can't create first workspace")
// Secondly, create a new workspace using parameters from the previous workspace.
const otherWorkspace = "other-workspace"
inv, root = clitest.New(t, "create", "--copy-parameters-from", "my-workspace", otherWorkspace, "-y")
clitest.SetupConfig(t, member, root)
pty = ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err = inv.Run()
require.NoError(t, err, "can't create a workspace based on the source workspace")
// Verify if the new workspace uses expected parameters.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
Name: otherWorkspace,
})
require.NoError(t, err, "can't list available workspaces")
require.Len(t, workspaces.Workspaces, 1)
otherWorkspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
buildParameters, err := client.WorkspaceBuildParameters(ctx, otherWorkspaceLatestBuild.ID)
require.NoError(t, err)
require.Len(t, buildParameters, 3)
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
})
t.Run("CopyParametersFromNotUpdatedWorkspace", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Firstly, create a regular workspace from a template with parameters.
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "-y",
"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "can't create first workspace")
// Secondly, update the template to the newer version.
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
{Name: "third_parameter", Type: "string", DefaultValue: "not-relevant"},
}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
coderdtest.UpdateActiveTemplateVersion(t, client, template.ID, version2.ID)
// Thirdly, create a new workspace using parameters from the previous workspace.
const otherWorkspace = "other-workspace"
inv, root = clitest.New(t, "create", "--copy-parameters-from", "my-workspace", otherWorkspace, "-y")
clitest.SetupConfig(t, member, root)
pty = ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err = inv.Run()
require.NoError(t, err, "can't create a workspace based on the source workspace")
// Verify that the new workspace uses the expected parameters.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
Name: otherWorkspace,
})
require.NoError(t, err, "can't list available workspaces")
require.Len(t, workspaces.Workspaces, 1)
otherWorkspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
require.Equal(t, version.ID, otherWorkspaceLatestBuild.TemplateVersionID)
buildParameters, err := client.WorkspaceBuildParameters(ctx, otherWorkspaceLatestBuild.ID)
require.NoError(t, err)
require.Len(t, buildParameters, 3)
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
})
}
func TestCreateWithPreset(t *testing.T) {
+18
@@ -0,0 +1,18 @@
package cli
import (
"golang.org/x/xerrors"
"github.com/coder/serpent"
)
func (*RootCmd) boundary() *serpent.Command {
return &serpent.Command{
Use: "boundary",
Short: "Network isolation tool for monitoring and restricting HTTP/HTTPS requests (enterprise)",
Long: `boundary creates an isolated network environment for target processes. This is an enterprise feature.`,
Handler: func(_ *serpent.Invocation) error {
return xerrors.New("boundary is an enterprise feature; upgrade to use this command")
},
}
}
+29
@@ -0,0 +1,29 @@
package cli_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
// Here we want to test that integrating boundary as a subcommand doesn't break anything.
// The full boundary functionality is tested in enterprise/cli.
func TestBoundarySubcommand(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
inv, _ := clitest.New(t, "exp", "boundary", "--help")
pty := ptytest.New(t).Attach(inv)
go func() {
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
}()
// Expect the --help output to include the short description.
pty.ExpectMatch("Network isolation tool")
}
-13
@@ -174,19 +174,6 @@ func (RootCmd) promptExample() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
return multiSelectError
}, useThingsOption, enableCustomInputOption),
promptCmd("multi-select-no-defaults", func(inv *serpent.Invocation) error {
if len(multiSelectValues) == 0 {
multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{
Message: "Select some things:",
Options: []string{
"Code", "Chairs", "Whale",
},
EnableCustomInput: enableCustomInput,
})
}
_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
return multiSelectError
}, useThingsOption, enableCustomInputOption),
promptCmd("rich-multi-select", func(inv *serpent.Invocation) error {
if len(multiSelectValues) == 0 {
multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{
-2
@@ -68,8 +68,6 @@ func (r *RootCmd) scaletestCmd() *serpent.Command {
r.scaletestTaskStatus(),
r.scaletestSMTP(),
r.scaletestPrebuilds(),
r.scaletestBridge(),
r.scaletestLLMMock(),
},
}
-278
@@ -1,278 +0,0 @@
//go:build !slim
package cli
import (
"fmt"
"net/http"
"os/signal"
"strconv"
"text/tabwriter"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/scaletest/bridge"
"github.com/coder/coder/v2/scaletest/createusers"
"github.com/coder/coder/v2/scaletest/harness"
"github.com/coder/serpent"
)
func (r *RootCmd) scaletestBridge() *serpent.Command {
var (
concurrentUsers int64
noCleanup bool
mode string
upstreamURL string
provider string
requestsPerUser int64
useStreamingAPI bool
requestPayloadSize int64
numMessages int64
httpTimeout time.Duration
timeoutStrategy = &timeoutFlags{}
cleanupStrategy = newScaletestCleanupStrategy()
output = &scaletestOutputFlags{}
prometheusFlags = &scaletestPrometheusFlags{}
)
cmd := &serpent.Command{
Use: "bridge",
Short: "Generate load on the AI Bridge service.",
Long: `Generate load for AI Bridge testing. Supports two modes: 'bridge' mode routes requests through the Coder AI Bridge; 'direct' mode makes requests directly to an upstream URL (useful for baseline comparisons).
Examples:
# Test OpenAI API through bridge
coder scaletest bridge --mode bridge --provider openai --concurrent-users 10 --request-count 5 --num-messages 10
# Test Anthropic API through bridge
coder scaletest bridge --mode bridge --provider anthropic --concurrent-users 10 --request-count 5 --num-messages 10
# Test directly against mock server
coder scaletest bridge --mode direct --provider openai --upstream-url http://localhost:8080/v1/chat/completions
`,
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
client, err := r.InitClient(inv)
if err != nil {
return err
}
client.HTTPClient = &http.Client{
Transport: &codersdk.HeaderTransport{
Transport: http.DefaultTransport,
Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
reg := prometheus.NewRegistry()
metrics := bridge.NewMetrics(reg)
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
notifyCtx, stop := signal.NotifyContext(ctx, StopSignals...)
defer stop()
ctx = notifyCtx
var userConfig createusers.Config
if bridge.RequestMode(mode) == bridge.RequestModeBridge {
me, err := requireAdmin(ctx, client)
if err != nil {
return err
}
if len(me.OrganizationIDs) == 0 {
return xerrors.Errorf("admin user must have at least one organization")
}
userConfig = createusers.Config{
OrganizationID: me.OrganizationIDs[0],
}
_, _ = fmt.Fprintln(inv.Stderr, "Bridge mode: creating users and making requests through AI Bridge...")
} else {
_, _ = fmt.Fprintf(inv.Stderr, "Direct mode: making requests directly to %s\n", upstreamURL)
}
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("parse output flags: %w", err)
}
config := bridge.Config{
Mode: bridge.RequestMode(mode),
Metrics: metrics,
Provider: provider,
RequestCount: int(requestsPerUser),
Stream: useStreamingAPI,
RequestPayloadSize: int(requestPayloadSize),
NumMessages: int(numMessages),
HTTPTimeout: httpTimeout,
UpstreamURL: upstreamURL,
User: userConfig,
}
if err := config.Validate(); err != nil {
return xerrors.Errorf("validate config: %w", err)
}
if err := config.PrepareRequestBody(); err != nil {
return xerrors.Errorf("prepare request body: %w", err)
}
th := harness.NewTestHarness(timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}), cleanupStrategy.toStrategy())
for i := range concurrentUsers {
id := strconv.Itoa(int(i))
name := fmt.Sprintf("bridge-%s", id)
var runner harness.Runnable = bridge.NewRunner(client, config)
th.AddRun(name, id, runner)
}
_, _ = fmt.Fprintln(inv.Stderr, "Bridge scaletest configuration:")
tw := tabwriter.NewWriter(inv.Stderr, 0, 0, 2, ' ', 0)
for _, opt := range inv.Command.Options {
if opt.Hidden || opt.ValueSource == serpent.ValueSourceNone {
continue
}
_, _ = fmt.Fprintf(tw, " %s:\t%s", opt.Name, opt.Value.String())
if opt.ValueSource != serpent.ValueSourceDefault {
_, _ = fmt.Fprintf(tw, "\t(from %s)", opt.ValueSource)
}
_, _ = fmt.Fprintln(tw)
}
_ = tw.Flush()
_, _ = fmt.Fprintln(inv.Stderr, "\nRunning bridge scaletest...")
testCtx, testCancel := timeoutStrategy.toContext(ctx)
defer testCancel()
err = th.Run(testCtx)
if err != nil {
return xerrors.Errorf("run test harness (harness failure, not a test failure): %w", err)
}
// If the command was interrupted, skip stats.
if notifyCtx.Err() != nil {
return notifyCtx.Err()
}
res := th.Results()
for _, o := range outputs {
err = o.write(res, inv.Stdout)
if err != nil {
return xerrors.Errorf("write output %q to %q: %w", o.format, o.path, err)
}
}
if !noCleanup {
_, _ = fmt.Fprintln(inv.Stderr, "\nCleaning up...")
cleanupCtx, cleanupCancel := cleanupStrategy.toContext(ctx)
defer cleanupCancel()
err = th.Cleanup(cleanupCtx)
if err != nil {
return xerrors.Errorf("cleanup tests: %w", err)
}
}
if res.TotalFail > 0 {
return xerrors.New("load test failed, see above for more details")
}
return nil
},
}
cmd.Options = serpent.OptionSet{
{
Flag: "concurrent-users",
FlagShorthand: "c",
Env: "CODER_SCALETEST_BRIDGE_CONCURRENT_USERS",
Description: "Required: Number of concurrent users.",
Value: serpent.Validate(serpent.Int64Of(&concurrentUsers), func(value *serpent.Int64) error {
if value == nil || value.Value() <= 0 {
return xerrors.Errorf("--concurrent-users must be greater than 0")
}
return nil
}),
Required: true,
},
{
Flag: "mode",
Env: "CODER_SCALETEST_BRIDGE_MODE",
Default: "direct",
Description: "Request mode: 'bridge' (create users and use AI Bridge) or 'direct' (make requests directly to upstream-url).",
Value: serpent.EnumOf(&mode, string(bridge.RequestModeBridge), string(bridge.RequestModeDirect)),
},
{
Flag: "upstream-url",
Env: "CODER_SCALETEST_BRIDGE_UPSTREAM_URL",
Description: "URL to make requests to directly (required in direct mode, e.g., http://localhost:8080/v1/chat/completions).",
Value: serpent.StringOf(&upstreamURL),
},
{
Flag: "provider",
Env: "CODER_SCALETEST_BRIDGE_PROVIDER",
Default: "openai",
Description: "API provider to use.",
Value: serpent.EnumOf(&provider, "openai", "anthropic"),
},
{
Flag: "request-count",
Env: "CODER_SCALETEST_BRIDGE_REQUEST_COUNT",
Default: "1",
Description: "Number of sequential requests to make per runner.",
Value: serpent.Validate(serpent.Int64Of(&requestsPerUser), func(value *serpent.Int64) error {
if value == nil || value.Value() <= 0 {
return xerrors.Errorf("--request-count must be greater than 0")
}
return nil
}),
},
{
Flag: "stream",
Env: "CODER_SCALETEST_BRIDGE_STREAM",
Description: "Enable streaming requests.",
Value: serpent.BoolOf(&useStreamingAPI),
},
{
Flag: "request-payload-size",
Env: "CODER_SCALETEST_BRIDGE_REQUEST_PAYLOAD_SIZE",
Default: "1024",
Description: "Size in bytes of the request payload (user message content). If 0, uses default message content.",
Value: serpent.Int64Of(&requestPayloadSize),
},
{
Flag: "num-messages",
Env: "CODER_SCALETEST_BRIDGE_NUM_MESSAGES",
Default: "1",
Description: "Number of messages to include in the conversation.",
Value: serpent.Int64Of(&numMessages),
},
{
Flag: "no-cleanup",
Env: "CODER_SCALETEST_NO_CLEANUP",
Description: "Do not clean up resources after the test completes.",
Value: serpent.BoolOf(&noCleanup),
},
{
Flag: "http-timeout",
Env: "CODER_SCALETEST_BRIDGE_HTTP_TIMEOUT",
Default: "30s",
Description: "Timeout for individual HTTP requests to the upstream provider.",
Value: serpent.DurationOf(&httpTimeout),
},
}
timeoutStrategy.attach(&cmd.Options)
cleanupStrategy.attach(&cmd.Options)
output.attach(&cmd.Options)
prometheusFlags.attach(&cmd.Options)
return cmd
}
-118
@@ -1,118 +0,0 @@
//go:build !slim
package cli
import (
"fmt"
"os/signal"
"time"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"cdr.dev/slog/v3/sloggers/sloghuman"
"github.com/coder/coder/v2/scaletest/llmmock"
"github.com/coder/serpent"
)
func (*RootCmd) scaletestLLMMock() *serpent.Command {
var (
address string
artificialLatency time.Duration
responsePayloadSize int64
pprofEnable bool
pprofAddress string
traceEnable bool
)
cmd := &serpent.Command{
Use: "llm-mock",
Short: "Start a mock LLM API server for testing",
Long: `Start a mock LLM API server that simulates OpenAI and Anthropic APIs`,
Handler: func(inv *serpent.Invocation) error {
ctx, stop := signal.NotifyContext(inv.Context(), StopSignals...)
defer stop()
logger := slog.Make(sloghuman.Sink(inv.Stderr)).Leveled(slog.LevelInfo)
if pprofEnable {
closePprof := ServeHandler(ctx, logger, nil, pprofAddress, "pprof")
defer closePprof()
logger.Info(ctx, "pprof server started", slog.F("address", pprofAddress))
}
config := llmmock.Config{
Address: address,
Logger: logger,
ArtificialLatency: artificialLatency,
ResponsePayloadSize: int(responsePayloadSize),
PprofEnable: pprofEnable,
PprofAddress: pprofAddress,
TraceEnable: traceEnable,
}
srv := new(llmmock.Server)
if err := srv.Start(ctx, config); err != nil {
return xerrors.Errorf("start mock LLM server: %w", err)
}
defer func() {
_ = srv.Stop()
}()
_, _ = fmt.Fprintf(inv.Stdout, "Mock LLM API server started on %s\n", srv.APIAddress())
_, _ = fmt.Fprintf(inv.Stdout, " OpenAI endpoint: %s/v1/chat/completions\n", srv.APIAddress())
_, _ = fmt.Fprintf(inv.Stdout, " Anthropic endpoint: %s/v1/messages\n", srv.APIAddress())
<-ctx.Done()
return nil
},
}
cmd.Options = []serpent.Option{
{
Flag: "address",
Env: "CODER_SCALETEST_LLM_MOCK_ADDRESS",
Default: "localhost",
Description: "Address to bind the mock LLM API server. Can include a port (e.g., 'localhost:8080' or ':8080'). Uses a random port if no port is specified.",
Value: serpent.StringOf(&address),
},
{
Flag: "artificial-latency",
Env: "CODER_SCALETEST_LLM_MOCK_ARTIFICIAL_LATENCY",
Default: "0s",
Description: "Artificial latency to add to each response (e.g., 100ms, 1s). Simulates slow upstream processing.",
Value: serpent.DurationOf(&artificialLatency),
},
{
Flag: "response-payload-size",
Env: "CODER_SCALETEST_LLM_MOCK_RESPONSE_PAYLOAD_SIZE",
Default: "0",
Description: "Size in bytes of the response payload. If 0, uses default context-aware responses.",
Value: serpent.Int64Of(&responsePayloadSize),
},
{
Flag: "pprof-enable",
Env: "CODER_SCALETEST_LLM_MOCK_PPROF_ENABLE",
Default: "false",
Description: "Serve pprof metrics on the address defined by pprof-address.",
Value: serpent.BoolOf(&pprofEnable),
},
{
Flag: "pprof-address",
Env: "CODER_SCALETEST_LLM_MOCK_PPROF_ADDRESS",
Default: "127.0.0.1:6060",
Description: "The bind address to serve pprof.",
Value: serpent.StringOf(&pprofAddress),
},
{
Flag: "trace-enable",
Env: "CODER_SCALETEST_LLM_MOCK_TRACE_ENABLE",
Default: "false",
Description: "Whether application tracing data is collected. It exports to a backend configured by environment variables. See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md.",
Value: serpent.BoolOf(&traceEnable),
},
}
return cmd
}
+3 -9
@@ -141,9 +141,7 @@ func TestGitSSH(t *testing.T) {
"-o", "IdentitiesOnly=yes",
"127.0.0.1",
)
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx := testutil.Context(t, testutil.WaitSuperLong)
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.EqualValues(t, 1, inc)
@@ -207,9 +205,7 @@ func TestGitSSH(t *testing.T) {
inv, _ := clitest.New(t, cmdArgs...)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx := testutil.Context(t, testutil.WaitSuperLong)
ctx := testutil.Context(t, testutil.WaitMedium)
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
select {
@@ -227,9 +223,7 @@ func TestGitSSH(t *testing.T) {
inv, _ = clitest.New(t, cmdArgs...)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx = testutil.Context(t, testutil.WaitSuperLong) // Reset context for second cmd test.
ctx = testutil.Context(t, testutil.WaitMedium) // Reset context for second cmd test.
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
select {
-29
@@ -462,38 +462,9 @@ func (r *RootCmd) login() *serpent.Command {
Value: serpent.BoolOf(&useTokenForSession),
},
}
cmd.Children = []*serpent.Command{
r.loginToken(),
}
return cmd
}
func (r *RootCmd) loginToken() *serpent.Command {
return &serpent.Command{
Use: "token",
Short: "Print the current session token",
Long: "Print the session token for use in scripts and automation.",
Middleware: serpent.RequireNArgs(0),
Handler: func(inv *serpent.Invocation) error {
tok, err := r.ensureTokenBackend().Read(r.clientURL)
if err != nil {
if xerrors.Is(err, os.ErrNotExist) {
return xerrors.New("no session token found - run 'coder login' first")
}
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
return errKeyringNotSupported
}
return xerrors.Errorf("read session token: %w", err)
}
if tok == "" {
return xerrors.New("no session token found - run 'coder login' first")
}
_, err = fmt.Fprintln(inv.Stdout, tok)
return err
},
}
}
// isWSL determines if coder-cli is running within Windows Subsystem for Linux
func isWSL() (bool, error) {
if runtime.GOOS == goosDarwin || runtime.GOOS == goosWindows {
-28
@@ -537,31 +537,3 @@ func TestLogin(t *testing.T) {
require.Equal(t, selected, first.OrganizationID.String())
})
}
func TestLoginToken(t *testing.T) {
t.Parallel()
t.Run("PrintsToken", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, "login", "token", "--url", client.URL.String())
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitShort)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
pty.ExpectMatch(client.SessionToken())
})
t.Run("NoTokenStored", func(t *testing.T) {
t.Parallel()
inv, _ := clitest.New(t, "login", "token")
ctx := testutil.Context(t, testutil.WaitShort)
err := inv.WithContext(ctx).Run()
require.Error(t, err)
require.Contains(t, err.Error(), "no session token found")
})
}
-16
@@ -65,22 +65,6 @@ func (r *RootCmd) organizationSettings(orgContext *OrganizationContext) *serpent
return cli.OrganizationIDPSyncSettings(ctx)
},
},
{
Name: "workspace-sharing",
Aliases: []string{"workspacesharing"},
Short: "Workspace sharing settings for the organization.",
Patch: func(ctx context.Context, cli *codersdk.Client, org uuid.UUID, input json.RawMessage) (any, error) {
var req codersdk.WorkspaceSharingSettings
err := json.Unmarshal(input, &req)
if err != nil {
return nil, xerrors.Errorf("unmarshalling workspace sharing settings: %w", err)
}
return cli.PatchWorkspaceSharingSettings(ctx, org.String(), req)
},
Fetch: func(ctx context.Context, cli *codersdk.Client, org uuid.UUID) (any, error) {
return cli.WorkspaceSharingSettings(ctx, org.String())
},
},
}
cmd := &serpent.Command{
Use: "settings",
+5 -35
@@ -34,7 +34,6 @@ type ParameterResolver struct {
promptRichParameters bool
promptEphemeralParameters bool
useParameterDefaults bool
}
func (pr *ParameterResolver) WithLastBuildParameters(params []codersdk.WorkspaceBuildParameter) *ParameterResolver {
@@ -87,21 +86,8 @@ func (pr *ParameterResolver) WithPromptEphemeralParameters(promptEphemeralParame
return pr
}
func (pr *ParameterResolver) WithUseParameterDefaults(useParameterDefaults bool) *ParameterResolver {
pr.useParameterDefaults = useParameterDefaults
return pr
}
// Resolve gathers workspace build parameters in a layered fashion, applying
// values from various sources in order of precedence:
// 1. template defaults (if auto-accepting defaults)
// 2. cli parameter defaults (if auto-accepting defaults)
// 3. parameter file
// 4. CLI/ENV
// 5. source build
// 6. last build
// 7. preset
// 8. user input (unless auto-accepting defaults)
// Resolve gathers workspace build parameters in a layered fashion, applying values from various sources
// in order of precedence: parameter file < CLI/ENV < source build < last build < preset < user input.
func (pr *ParameterResolver) Resolve(inv *serpent.Invocation, action WorkspaceCLIAction, templateVersionParameters []codersdk.TemplateVersionParameter) ([]codersdk.WorkspaceBuildParameter, error) {
var staged []codersdk.WorkspaceBuildParameter
var err error
@@ -276,25 +262,9 @@ func (pr *ParameterResolver) resolveWithInput(resolved []codersdk.WorkspaceBuild
(action == WorkspaceUpdate && tvp.Mutable && tvp.Required) ||
(action == WorkspaceUpdate && !tvp.Mutable && firstTimeUse) ||
(tvp.Mutable && !tvp.Ephemeral && pr.promptRichParameters) {
name := tvp.Name
if tvp.DisplayName != "" {
name = tvp.DisplayName
}
parameterValue := tvp.DefaultValue
if v, ok := pr.richParametersDefaults[tvp.Name]; ok {
parameterValue = v
}
// Auto-accept the default if there is one.
if pr.useParameterDefaults && parameterValue != "" {
_, _ = fmt.Fprintf(inv.Stdout, "Using default value for %s: '%s'\n", name, parameterValue)
} else {
var err error
parameterValue, err = cliui.RichParameter(inv, tvp, name, parameterValue)
if err != nil {
return nil, err
}
parameterValue, err := cliui.RichParameter(inv, tvp, pr.richParametersDefaults)
if err != nil {
return nil, err
}
resolved = append(resolved, codersdk.WorkspaceBuildParameter{
+1 -10
@@ -24,7 +24,6 @@ import (
"text/tabwriter"
"time"
"github.com/google/uuid"
"github.com/mattn/go-isatty"
"github.com/mitchellh/go-wordwrap"
"golang.org/x/mod/semver"
@@ -151,6 +150,7 @@ func (r *RootCmd) AGPLExperimental() []*serpent.Command {
r.promptExample(),
r.rptyCommand(),
r.syncCommand(),
r.boundary(),
}
}
@@ -332,12 +332,6 @@ func (r *RootCmd) Command(subcommands []*serpent.Command) (*serpent.Command, err
// support links.
return
}
if cmd.Name() == "boundary" {
// The boundary command is integrated from the boundary package
// and has YAML-only options (e.g., allowlist from config file)
// that don't have flags or env vars.
return
}
merr = errors.Join(
merr,
xerrors.Errorf("option %q in %q should have a flag or env", opt.Name, cmd.FullName()),
@@ -929,9 +923,6 @@ func splitNamedWorkspace(identifier string) (owner string, workspaceName string,
// a bare name (for a workspace owned by the current user) or a "user/workspace" combination,
// where user is either a username or UUID.
func namedWorkspace(ctx context.Context, client *codersdk.Client, identifier string) (codersdk.Workspace, error) {
if uid, err := uuid.Parse(identifier); err == nil {
return client.Workspace(ctx, uid)
}
owner, name, err := splitNamedWorkspace(identifier)
if err != nil {
return codersdk.Workspace{}, err
+3 -3
@@ -197,7 +197,7 @@ func TestSharingStatus(t *testing.T) {
ctx = testutil.Context(t, testutil.WaitMedium)
)
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toShareWithUser.ID.String(): codersdk.WorkspaceRoleUse,
},
@@ -248,7 +248,7 @@ func TestSharingRemove(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitMedium)
// Share the workspace with a user to later remove
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toShareWithUser.ID.String(): codersdk.WorkspaceRoleUse,
toRemoveUser.ID.String(): codersdk.WorkspaceRoleUse,
@@ -309,7 +309,7 @@ func TestSharingRemove(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitMedium)
// Share the workspace with a user to later remove
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toRemoveUser2.ID.String(): codersdk.WorkspaceRoleUse,
toRemoveUser1.ID.String(): codersdk.WorkspaceRoleUse,
+2 -3
@@ -1,10 +1,8 @@
package cli
import (
"fmt"
"sort"
"sync"
"time"
"github.com/google/uuid"
"golang.org/x/xerrors"
@@ -45,11 +43,11 @@ func (r *RootCmd) show() *serpent.Command {
if err != nil {
return xerrors.Errorf("get workspace: %w", err)
}
options := cliui.WorkspaceResourcesOptions{
WorkspaceName: workspace.Name,
ServerVersion: buildInfo.Version,
ShowDetails: details,
Title: fmt.Sprintf("%s/%s (%s since %s) %s:%s", workspace.OwnerName, workspace.Name, workspace.LatestBuild.Status, time.Since(workspace.LatestBuild.CreatedAt).Round(time.Second).String(), workspace.TemplateName, workspace.LatestBuild.TemplateVersionName),
}
if workspace.LatestBuild.Status == codersdk.WorkspaceStatusRunning {
// Get listening ports for each agent.
@@ -57,6 +55,7 @@ func (r *RootCmd) show() *serpent.Command {
options.ListeningPorts = ports
options.Devcontainers = devcontainers
}
return cliui.WorkspaceResources(inv.Stdout, workspace.LatestBuild.Resources, options)
},
}
+4 -10
@@ -2,7 +2,6 @@ package cli_test
import (
"bytes"
"fmt"
"testing"
"time"
@@ -16,7 +15,6 @@ import (
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
func TestShow(t *testing.T) {
@@ -30,7 +28,7 @@ func TestShow(t *testing.T) {
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, member, template.ID)
build := coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
args := []string{
"show",
@@ -40,30 +38,26 @@ func TestShow(t *testing.T) {
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitShort)
go func() {
defer close(doneChan)
err := inv.WithContext(ctx).Run()
err := inv.Run()
assert.NoError(t, err)
}()
matches := []struct {
match string
write string
}{
{match: fmt.Sprintf("%s/%s", workspace.OwnerName, workspace.Name)},
{match: fmt.Sprintf("(%s since ", build.Status)},
{match: fmt.Sprintf("%s:%s", workspace.TemplateName, workspace.LatestBuild.TemplateVersionName)},
{match: "compute.main"},
{match: "smith (linux, i386)"},
{match: "coder ssh " + workspace.Name},
}
for _, m := range matches {
pty.ExpectMatchContext(ctx, m.match)
pty.ExpectMatch(m.match)
if len(m.write) > 0 {
pty.WriteLine(m.write)
}
}
_ = testutil.TryReceive(ctx, t, doneChan)
<-doneChan
})
}
-58
@@ -24,7 +24,6 @@ import (
"github.com/gofrs/flock"
"github.com/google/uuid"
"github.com/mattn/go-isatty"
"github.com/shirou/gopsutil/v4/process"
"github.com/spf13/afero"
gossh "golang.org/x/crypto/ssh"
gosshagent "golang.org/x/crypto/ssh/agent"
@@ -85,9 +84,6 @@ func (r *RootCmd) ssh() *serpent.Command {
containerName string
containerUser string
// Used in tests to simulate the parent exiting.
testForcePPID int64
)
cmd := &serpent.Command{
Annotations: workspaceCommand,
@@ -179,24 +175,6 @@ func (r *RootCmd) ssh() *serpent.Command {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// When running as a ProxyCommand (stdio mode), monitor the parent process
// and exit if it dies to avoid leaving orphaned processes. This is
// particularly important when editors like VSCode/Cursor spawn SSH
// connections and then crash or are killed - we don't want zombie
// `coder ssh` processes accumulating.
// Note: using gopsutil to check the parent process as this handles
// windows processes as well in a standard way.
if stdio {
ppid := int32(os.Getppid()) // nolint:gosec
checkParentInterval := 10 * time.Second // Arbitrary interval to not be too frequent
if testForcePPID > 0 {
ppid = int32(testForcePPID) // nolint:gosec
checkParentInterval = 100 * time.Millisecond // Shorter interval for testing
}
ctx, cancel = watchParentContext(ctx, quartz.NewReal(), ppid, process.PidExistsWithContext, checkParentInterval)
defer cancel()
}
// Prevent unnecessary logs from the stdlib from messing up the TTY.
// See: https://github.com/coder/coder/issues/13144
log.SetOutput(io.Discard)
@@ -797,12 +775,6 @@ func (r *RootCmd) ssh() *serpent.Command {
Value: serpent.BoolOf(&forceNewTunnel),
Hidden: true,
},
{
Flag: "test.force-ppid",
Description: "Override the parent process ID to simulate a different parent process. ONLY USE THIS IN TESTS.",
Value: serpent.Int64Of(&testForcePPID),
Hidden: true,
},
sshDisableAutostartOption(serpent.BoolOf(&disableAutostart)),
}
return cmd
@@ -1690,33 +1662,3 @@ func normalizeWorkspaceInput(input string) string {
return input // Fallback
}
}
// watchParentContext returns a context that is canceled when the parent process
// dies. It polls using the provided clock and checks if the parent is alive
// using the provided pidExists function.
func watchParentContext(ctx context.Context, clock quartz.Clock, originalPPID int32, pidExists func(context.Context, int32) (bool, error), interval time.Duration) (context.Context, context.CancelFunc) {
ctx, cancel := context.WithCancel(ctx) // intentionally shadowed
go func() {
ticker := clock.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
alive, err := pidExists(ctx, originalPPID)
// If we get an error checking the parent process (e.g., permission
// denied, the process is in an unknown state), we assume the parent
// is still alive to avoid disrupting the SSH connection. We only
// cancel when we definitively know the parent is gone (alive=false, err=nil).
if !alive && err == nil {
cancel()
return
}
}
}
}()
return ctx, cancel
}
-96
@@ -312,102 +312,6 @@ type fakeCloser struct {
err error
}
func TestWatchParentContext(t *testing.T) {
t.Parallel()
t.Run("CancelsWhenParentDies", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
parentAlive := true
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return parentAlive, nil
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we simulate parent death and advance the clock
parentAlive = false
mClock.AdvanceNext()
// Then: The context should be canceled
_ = testutil.TryReceive(ctx, t, childCtx.Done())
})
t.Run("DoesNotCancelWhenParentAlive", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil // Parent always alive
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance the clock several times with the parent alive
for range 3 {
mClock.AdvanceNext()
}
// Then: context should not be canceled
require.NoError(t, childCtx.Err())
})
t.Run("RespectsParentContext", func(t *testing.T) {
t.Parallel()
ctx, cancelParent := context.WithCancel(context.Background())
mClock := quartz.NewMock(t)
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil
}, testutil.WaitShort)
defer cancel()
// When: we cancel the parent context
cancelParent()
// Then: The context should be canceled
require.ErrorIs(t, childCtx.Err(), context.Canceled)
})
t.Run("DoesNotCancelOnError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
// Simulate an error checking parent status (e.g., permission denied).
// We should not cancel the context in this case to avoid disrupting
// the SSH connection.
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return false, xerrors.New("permission denied")
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance clock several times
for range 3 {
mClock.AdvanceNext()
}
// Context should NOT be canceled since we got an error (not a definitive "not alive")
require.NoError(t, childCtx.Err(), "context was canceled even though pidExists returned an error")
})
}
func (c *fakeCloser) Close() error {
*c.closes = append(*c.closes, c)
return c.err
-91
@@ -1122,97 +1122,6 @@ func TestSSH(t *testing.T) {
}
})
// This test ensures that the SSH session exits when the parent process dies.
t.Run("StdioExitOnParentDeath", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitSuperLong)
defer cancel()
// sleepStart -> agentReady -> sessionStarted -> sleepKill -> sleepDone -> cmdDone
sleepStart := make(chan int)
agentReady := make(chan struct{})
sessionStarted := make(chan struct{})
sleepKill := make(chan struct{})
sleepDone := make(chan struct{})
// Start a sleep process which we will pretend is the parent.
go func() {
sleepCmd := exec.Command("sleep", "infinity")
if !assert.NoError(t, sleepCmd.Start(), "failed to start sleep command") {
return
}
sleepStart <- sleepCmd.Process.Pid
defer close(sleepDone)
<-sleepKill
		_ = sleepCmd.Process.Kill()
_ = sleepCmd.Wait()
}()
client, workspace, agentToken := setupWorkspaceForAgent(t)
go func() {
defer close(agentReady)
_ = agenttest.New(t, client.URL, agentToken)
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).WaitFor(coderdtest.AgentsReady)
}()
clientOutput, clientInput := io.Pipe()
serverOutput, serverInput := io.Pipe()
defer func() {
for _, c := range []io.Closer{clientOutput, clientInput, serverOutput, serverInput} {
_ = c.Close()
}
}()
// Start a connection to the agent once it's ready
go func() {
<-agentReady
conn, channels, requests, err := ssh.NewClientConn(&testutil.ReaderWriterConn{
Reader: serverOutput,
Writer: clientInput,
}, "", &ssh.ClientConfig{
// #nosec
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
})
if !assert.NoError(t, err, "failed to create SSH client connection") {
return
}
defer conn.Close()
sshClient := ssh.NewClient(conn, channels, requests)
defer sshClient.Close()
session, err := sshClient.NewSession()
if !assert.NoError(t, err, "failed to create SSH session") {
return
}
close(sessionStarted)
<-sleepDone
assert.NoError(t, session.Close())
}()
// Wait for our "parent" process to start
sleepPid := testutil.RequireReceive(ctx, t, sleepStart)
// Wait for the agent to be ready
testutil.SoftTryReceive(ctx, t, agentReady)
inv, root := clitest.New(t, "ssh", "--stdio", workspace.Name, "--test.force-ppid", fmt.Sprintf("%d", sleepPid))
clitest.SetupConfig(t, client, root)
inv.Stdin = clientOutput
inv.Stdout = serverInput
inv.Stderr = io.Discard
// Start the command
clitest.Start(t, inv.WithContext(ctx))
// Wait for a session to be established
testutil.SoftTryReceive(ctx, t, sessionStarted)
// Now kill the fake "parent"
close(sleepKill)
// The sleep process should exit
testutil.SoftTryReceive(ctx, t, sleepDone)
// And then the command should exit. This is tracked by clitest.Start.
})
t.Run("ForwardAgent", func(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("Test not supported on windows")
+1 -4
@@ -367,9 +367,7 @@ func TestStartAutoUpdate(t *testing.T) {
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version1.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version1.ID)
workspace := coderdtest.CreateWorkspace(t, member, template.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
@@ -381,7 +379,6 @@ func TestStartAutoUpdate(t *testing.T) {
coderdtest.MustTransitionWorkspace(t, member, workspace.ID, codersdk.WorkspaceTransitionStart, codersdk.WorkspaceTransitionStop)
}
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(stringRichParameters), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
+6 -255
@@ -7,7 +7,6 @@ import (
"encoding/base64"
"encoding/json"
"fmt"
"net/http"
"net/url"
"os"
"path/filepath"
@@ -45,18 +44,13 @@ var supportBundleBlurb = cliui.Bold("This will collect the following information
` - Coder deployment version
- Coder deployment Configuration (sanitized), including enabled experiments
- Coder deployment health snapshot
- Coder deployment stats (aggregated workspace/session metrics)
- Entitlements (if available)
- Health settings (dismissed healthchecks)
- Coder deployment Network troubleshooting information
- Workspace list accessible to the user (sanitized)
- Workspace configuration, parameters, and build logs
- Template version and source code for the given workspace
- Agent details (with environment variable sanitized)
- Agent network diagnostics
- Agent logs
- License status
- pprof profiling data (if --pprof is enabled)
` + cliui.Bold("Note: ") +
cliui.Wrap("While we try to sanitize sensitive data from support bundles, we cannot guarantee that they do not contain information that you or your organization may consider sensitive.\n") +
cliui.Bold("Please confirm that you will:\n") +
@@ -67,9 +61,6 @@ var supportBundleBlurb = cliui.Bold("This will collect the following information
func (r *RootCmd) supportBundle() *serpent.Command {
var outputPath string
var coderURLOverride string
var workspacesTotalCap64 int64 = 10
var templateName string
var pprof bool
cmd := &serpent.Command{
Use: "bundle <workspace> [<agent>]",
Short: "Generate a support bundle to troubleshoot issues connecting to a workspace.",
@@ -130,9 +121,8 @@ func (r *RootCmd) supportBundle() *serpent.Command {
}
var (
wsID uuid.UUID
agtID uuid.UUID
templateID uuid.UUID
wsID uuid.UUID
agtID uuid.UUID
)
if len(inv.Args) == 0 {
@@ -165,16 +155,6 @@ func (r *RootCmd) supportBundle() *serpent.Command {
}
}
// Resolve template by name if provided (captures active version)
// Fallback: if canonical name lookup fails, match DisplayName (case-insensitive).
if templateName != "" {
id, err := resolveTemplateID(inv.Context(), client, templateName)
if err != nil {
return err
}
templateID = id
}
if outputPath == "" {
cwd, err := filepath.Abs(".")
if err != nil {
@@ -196,25 +176,12 @@ func (r *RootCmd) supportBundle() *serpent.Command {
if r.verbose {
clientLog.AppendSinks(sloghuman.Sink(inv.Stderr))
}
if pprof {
_, _ = fmt.Fprintln(inv.Stderr, "pprof data collection will take approximately 30 seconds...")
}
// Bypass rate limiting for support bundle collection since it makes many API calls.
client.HTTPClient.Transport = &codersdk.HeaderTransport{
Transport: client.HTTPClient.Transport,
Header: http.Header{codersdk.BypassRatelimitHeader: {"true"}},
}
deps := support.Deps{
Client: client,
// Support adds a sink so we don't need to supply one ourselves.
Log: clientLog,
WorkspaceID: wsID,
AgentID: agtID,
WorkspacesTotalCap: int(workspacesTotalCap64),
TemplateID: templateID,
CollectPprof: pprof,
Log: clientLog,
WorkspaceID: wsID,
AgentID: agtID,
}
bun, err := support.Run(inv.Context(), &deps)
@@ -250,102 +217,11 @@ func (r *RootCmd) supportBundle() *serpent.Command {
Description: "Override the URL to your Coder deployment. This may be useful, for example, if you need to troubleshoot a specific Coder replica.",
Value: serpent.StringOf(&coderURLOverride),
},
{
Flag: "workspaces-total-cap",
Env: "CODER_SUPPORT_BUNDLE_WORKSPACES_TOTAL_CAP",
Description: "Maximum number of workspaces to include in the support bundle. Set to 0 or a negative value to disable the cap. Defaults to 10.",
Value: serpent.Int64Of(&workspacesTotalCap64),
},
{
Flag: "template",
Env: "CODER_SUPPORT_BUNDLE_TEMPLATE",
Description: "Template name to include in the support bundle. Use org_name/template_name if the template name is reused across multiple organizations.",
Value: serpent.StringOf(&templateName),
},
{
Flag: "pprof",
Env: "CODER_SUPPORT_BUNDLE_PPROF",
Description: "Collect pprof profiling data from the Coder server and agent. Requires Coder server version 2.28.0 or newer.",
Value: serpent.BoolOf(&pprof),
},
}
return cmd
}
// Resolve a template to its ID, supporting:
// - org/name form
// - slug or display name match (case-insensitive) across all memberships
func resolveTemplateID(ctx context.Context, client *codersdk.Client, templateArg string) (uuid.UUID, error) {
orgPart := ""
namePart := templateArg
if slash := strings.IndexByte(templateArg, '/'); slash > 0 && slash < len(templateArg)-1 {
orgPart = templateArg[:slash]
namePart = templateArg[slash+1:]
}
resolveInOrg := func(orgID uuid.UUID) (codersdk.Template, bool, error) {
if t, err := client.TemplateByName(ctx, orgID, namePart); err == nil {
return t, true, nil
}
tpls, err := client.TemplatesByOrganization(ctx, orgID)
if err != nil {
return codersdk.Template{}, false, nil
}
for _, t := range tpls {
if strings.EqualFold(t.Name, namePart) || strings.EqualFold(t.DisplayName, namePart) {
return t, true, nil
}
}
return codersdk.Template{}, false, nil
}
if orgPart != "" {
org, err := client.OrganizationByName(ctx, orgPart)
if err != nil {
return uuid.Nil, xerrors.Errorf("get organization %q: %w", orgPart, err)
}
t, found, err := resolveInOrg(org.ID)
if err != nil {
return uuid.Nil, err
}
if !found {
return uuid.Nil, xerrors.Errorf("template %q not found in organization %q", namePart, orgPart)
}
return t.ID, nil
}
orgs, err := client.OrganizationsByUser(ctx, codersdk.Me)
if err != nil {
return uuid.Nil, xerrors.Errorf("get organizations: %w", err)
}
var (
foundTpl codersdk.Template
foundOrgs []string
)
for _, org := range orgs {
if t, found, err := resolveInOrg(org.ID); err == nil && found {
if len(foundOrgs) == 0 {
foundTpl = t
}
foundOrgs = append(foundOrgs, org.Name)
}
}
switch len(foundOrgs) {
case 0:
return uuid.Nil, xerrors.Errorf("template %q not found in your organizations", namePart)
case 1:
return foundTpl.ID, nil
default:
return uuid.Nil, xerrors.Errorf(
"template %q found in multiple organizations (%s); use --template \"<org_name>/%s\" to target the desired template",
namePart,
strings.Join(foundOrgs, ", "),
namePart,
)
}
}
// summarizeBundle makes a best-effort attempt to write a short summary
// of the support bundle to the user's terminal.
func summarizeBundle(inv *serpent.Invocation, bun *support.Bundle) {
@@ -407,10 +283,6 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
"deployment/config.json": src.Deployment.Config,
"deployment/experiments.json": src.Deployment.Experiments,
"deployment/health.json": src.Deployment.HealthReport,
"deployment/stats.json": src.Deployment.Stats,
"deployment/entitlements.json": src.Deployment.Entitlements,
"deployment/health_settings.json": src.Deployment.HealthSettings,
"deployment/workspaces.json": src.Deployment.Workspaces,
"network/connection_info.json": src.Network.ConnectionInfo,
"network/netcheck.json": src.Network.Netcheck,
"network/interfaces.json": src.Network.Interfaces,
@@ -430,49 +302,6 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
}
}
// Include named template artifacts (if requested)
if src.NamedTemplate.Template.ID != uuid.Nil {
name := src.NamedTemplate.Template.Name
// JSON files
for k, v := range map[string]any{
"templates/" + name + "/template.json": src.NamedTemplate.Template,
"templates/" + name + "/template_version.json": src.NamedTemplate.TemplateVersion,
} {
f, err := dest.Create(k)
if err != nil {
return xerrors.Errorf("create file %q in archive: %w", k, err)
}
enc := json.NewEncoder(f)
enc.SetIndent("", " ")
if err := enc.Encode(v); err != nil {
return xerrors.Errorf("write json to %q: %w", k, err)
}
}
// Binary template file (zip)
if namedZipBytes, err := base64.StdEncoding.DecodeString(src.NamedTemplate.TemplateFileBase64); err == nil {
k := "templates/" + name + "/template_file.zip"
f, err := dest.Create(k)
if err != nil {
return xerrors.Errorf("create file %q in archive: %w", k, err)
}
if _, err := f.Write(namedZipBytes); err != nil {
return xerrors.Errorf("write file %q in archive: %w", k, err)
}
}
}
var buildInfoRef string
if src.Deployment.BuildInfo != nil {
if raw, err := json.Marshal(src.Deployment.BuildInfo); err == nil {
buildInfoRef = base64.StdEncoding.EncodeToString(raw)
}
}
tailnetHTML := src.Network.TailnetDebug
if buildInfoRef != "" {
tailnetHTML += "\n<!-- trace " + buildInfoRef + " -->"
}
templateVersionBytes, err := base64.StdEncoding.DecodeString(src.Workspace.TemplateFileBase64)
if err != nil {
return xerrors.Errorf("decode template zip from base64: %w", err)
@@ -490,11 +319,10 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
"agent/client_magicsock.html": string(src.Agent.ClientMagicsockHTML),
"agent/startup_logs.txt": humanizeAgentLogs(src.Agent.StartupLogs),
"agent/prometheus.txt": string(src.Agent.Prometheus),
"deployment/prometheus.txt": string(src.Deployment.Prometheus),
"cli_logs.txt": string(src.CLILogs),
"logs.txt": strings.Join(src.Logs, "\n"),
"network/coordinator_debug.html": src.Network.CoordinatorDebug,
"network/tailnet_debug.html": tailnetHTML,
"network/tailnet_debug.html": src.Network.TailnetDebug,
"workspace/build_logs.txt": humanizeBuildLogs(src.Workspace.BuildLogs),
"workspace/template_file.zip": string(templateVersionBytes),
"license-status.txt": licenseStatus,
@@ -507,89 +335,12 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
return xerrors.Errorf("write file %q in archive: %w", k, err)
}
}
// Write pprof binary data
if err := writePprofData(src.Pprof, dest); err != nil {
return xerrors.Errorf("write pprof data: %w", err)
}
if err := dest.Close(); err != nil {
return xerrors.Errorf("close zip file: %w", err)
}
return nil
}
func writePprofData(pprof support.Pprof, dest *zip.Writer) error {
// Write server pprof data directly to pprof directory
if pprof.Server != nil {
if err := writePprofCollection("pprof", pprof.Server, dest); err != nil {
return xerrors.Errorf("write server pprof data: %w", err)
}
}
// Write agent pprof data
if pprof.Agent != nil {
if err := writePprofCollection("pprof/agent", pprof.Agent, dest); err != nil {
return xerrors.Errorf("write agent pprof data: %w", err)
}
}
return nil
}
func writePprofCollection(basePath string, collection *support.PprofCollection, dest *zip.Writer) error {
// Define the pprof files to write with their extensions
files := map[string][]byte{
"allocs.prof.gz": collection.Allocs,
"heap.prof.gz": collection.Heap,
"profile.prof.gz": collection.Profile,
"block.prof.gz": collection.Block,
"mutex.prof.gz": collection.Mutex,
"goroutine.prof.gz": collection.Goroutine,
"threadcreate.prof.gz": collection.Threadcreate,
"trace.gz": collection.Trace,
}
// Write binary pprof files
for filename, data := range files {
if len(data) > 0 {
filePath := basePath + "/" + filename
f, err := dest.Create(filePath)
if err != nil {
return xerrors.Errorf("create pprof file %q: %w", filePath, err)
}
if _, err := f.Write(data); err != nil {
return xerrors.Errorf("write pprof file %q: %w", filePath, err)
}
}
}
// Write cmdline as text file
if collection.Cmdline != "" {
filePath := basePath + "/cmdline.txt"
f, err := dest.Create(filePath)
if err != nil {
return xerrors.Errorf("create cmdline file %q: %w", filePath, err)
}
if _, err := f.Write([]byte(collection.Cmdline)); err != nil {
return xerrors.Errorf("write cmdline file %q: %w", filePath, err)
}
}
if collection.Symbol != "" {
filePath := basePath + "/symbol.txt"
f, err := dest.Create(filePath)
if err != nil {
return xerrors.Errorf("create symbol file %q: %w", filePath, err)
}
if _, err := f.Write([]byte(collection.Symbol)); err != nil {
return xerrors.Errorf("write symbol file %q: %w", filePath, err)
}
}
return nil
}
func humanizeAgentLogs(ls []codersdk.WorkspaceAgentLog) string {
var buf bytes.Buffer
tw := tabwriter.NewWriter(&buf, 0, 2, 1, ' ', 0)
-22
@@ -46,8 +46,6 @@ func TestSupportBundle(t *testing.T) {
// Support bundle tests can share a single coderdtest instance.
var dc codersdk.DeploymentConfig
dc.Values = coderdtest.DeploymentValues(t)
dc.Values.Prometheus.Enable = true
secretValue := uuid.NewString()
seedSecretDeploymentOptions(t, &dc, secretValue)
client, closer, api := coderdtest.NewWithAPI(t, &coderdtest.Options{
@@ -205,10 +203,6 @@ func assertBundleContents(t *testing.T, path string, wantWorkspace bool, wantAge
var v codersdk.DeploymentConfig
decodeJSONFromZip(t, f, &v)
require.NotEmpty(t, v, "deployment config should not be empty")
case "deployment/entitlements.json":
var v codersdk.Entitlements
decodeJSONFromZip(t, f, &v)
require.NotNil(t, v, "entitlements should not be nil")
case "deployment/experiments.json":
var v codersdk.Experiments
decodeJSONFromZip(t, f, &v)
@@ -217,22 +211,6 @@ func assertBundleContents(t *testing.T, path string, wantWorkspace bool, wantAge
var v healthsdk.HealthcheckReport
decodeJSONFromZip(t, f, &v)
require.NotEmpty(t, v, "health report should not be empty")
case "deployment/health_settings.json":
var v healthsdk.HealthSettings
decodeJSONFromZip(t, f, &v)
require.NotEmpty(t, v, "health settings should not be empty")
case "deployment/stats.json":
var v codersdk.DeploymentStats
decodeJSONFromZip(t, f, &v)
require.NotNil(t, v, "deployment stats should not be nil")
case "deployment/workspaces.json":
var v codersdk.Workspace
decodeJSONFromZip(t, f, &v)
require.NotNil(t, v, "deployment workspaces should not be nil")
case "deployment/prometheus.txt":
bs := readBytesFromZip(t, f)
require.NotEmpty(t, bs, "prometheus metrics should not be empty")
require.Contains(t, string(bs), "go_goroutines", "prometheus metrics should contain go runtime metrics")
case "network/connection_info.json":
var v workspacesdk.AgentConnectionInfo
decodeJSONFromZip(t, f, &v)
+1 -1
@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -55,7 +55,7 @@ OPTIONS:
configured in the workspace template is used.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -4
@@ -49,11 +49,8 @@ OPTIONS:
--template-version string, $CODER_TEMPLATE_VERSION
Specify a template version name.
--use-parameter-defaults bool, $CODER_WORKSPACE_USE_PARAMETER_DEFAULTS
Automatically accept parameter defaults when no value is provided.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -18,7 +18,7 @@ OPTIONS:
resources.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -24,7 +24,7 @@ OPTIONS:
empty, will use $HOME.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
-3
@@ -9,9 +9,6 @@ USAGE:
macOS and Windows and a plain text file on Linux. Use the --use-keyring flag
or CODER_USE_KEYRING environment variable to change the storage mechanism.
SUBCOMMANDS:
token Print the current session token
OPTIONS:
--first-user-email string, $CODER_FIRST_USER_EMAIL
Specifies an email address to use if creating the first user for the
-11
@@ -1,11 +0,0 @@
coder v0.0.0-devel
USAGE:
coder login token
Print the current session token
Print the session token for use in scripts and automation.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
@@ -18,7 +18,7 @@ OPTIONS:
Reads stdin for the json role definition to upload.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
@@ -23,7 +23,7 @@ OPTIONS:
Reads stdin for the json role definition to upload.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
@@ -15,7 +15,6 @@ SUBCOMMANDS:
memberships from an IdP.
role-sync Role sync settings to sync organization roles from an
IdP.
workspace-sharing Workspace sharing settings for the organization.
———
Run `coder --help` for a list of global options.
@@ -15,7 +15,6 @@ SUBCOMMANDS:
memberships from an IdP.
role-sync Role sync settings to sync organization roles from an
IdP.
workspace-sharing Workspace sharing settings for the organization.
———
Run `coder --help` for a list of global options.
@@ -15,7 +15,6 @@ SUBCOMMANDS:
memberships from an IdP.
role-sync Role sync settings to sync organization roles from an
IdP.
workspace-sharing Workspace sharing settings for the organization.
———
Run `coder --help` for a list of global options.
@@ -15,7 +15,6 @@ SUBCOMMANDS:
memberships from an IdP.
role-sync Role sync settings to sync organization roles from an
IdP.
workspace-sharing Workspace sharing settings for the organization.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -7,7 +7,7 @@
"last_seen_at": "====[timestamp]=====",
"name": "test-daemon",
"version": "v0.0.0-devel",
"api_version": "1.15",
"api_version": "1.14",
"provisioners": [
"echo"
],
+1 -1
@@ -13,7 +13,7 @@ OPTIONS:
services it's registered with.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -39,7 +39,7 @@ OPTIONS:
pairs for the parameters.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+4 -40
@@ -15,11 +15,9 @@ SUBCOMMANDS:
OPTIONS:
--allow-workspace-renames bool, $CODER_ALLOW_WORKSPACE_RENAMES (default: false)
Allow users to rename their workspaces. WARNING: Renaming a workspace
can cause Terraform resources that depend on the workspace name to be
destroyed and recreated, potentially causing data loss. Only enable
this if your templates do not use workspace names in resource
identifiers, or if you understand the risks.
DEPRECATED: Allow users to rename their workspaces. Use only for
temporary compatibility reasons; this will be removed in a future
release.
--cache-dir string, $CODER_CACHE_DIRECTORY (default: [cache dir])
The directory to cache temporary files. If unspecified and
@@ -111,18 +109,11 @@ AI BRIDGE OPTIONS:
The access key secret to use with the access key to authenticate
against the AWS Bedrock API.
--aibridge-bedrock-base-url string, $CODER_AIBRIDGE_BEDROCK_BASE_URL
The base URL to use for the AWS Bedrock API. Use this setting to
specify an exact URL to use. Takes precedence over
CODER_AIBRIDGE_BEDROCK_REGION.
--aibridge-bedrock-model string, $CODER_AIBRIDGE_BEDROCK_MODEL (default: global.anthropic.claude-sonnet-4-5-20250929-v1:0)
The model to use when making requests to the AWS Bedrock API.
--aibridge-bedrock-region string, $CODER_AIBRIDGE_BEDROCK_REGION
The AWS Bedrock API region to use. Constructs a base URL to use for
the AWS Bedrock API in the form of
'https://bedrock-runtime.<region>.amazonaws.com'.
The AWS Bedrock API region.
--aibridge-bedrock-small-fastmodel string, $CODER_AIBRIDGE_BEDROCK_SMALL_FAST_MODEL (default: global.anthropic.claude-haiku-4-5-20251001-v1:0)
The small fast model to use when making requests to the AWS Bedrock
@@ -130,10 +121,6 @@ AI BRIDGE OPTIONS:
See
https://docs.claude.com/en/docs/claude-code/settings#environment-variables.
--aibridge-circuit-breaker-enabled bool, $CODER_AIBRIDGE_CIRCUIT_BREAKER_ENABLED (default: false)
Enable the circuit breaker to protect against cascading failures from
upstream AI provider rate limits (429, 503, 529 overloaded).
--aibridge-retention duration, $CODER_AIBRIDGE_RETENTION (default: 60d)
Length of time to retain data such as interceptions and all related
records (token, prompt, tool use).
@@ -160,18 +147,6 @@ AI BRIDGE OPTIONS:
Maximum number of AI Bridge requests per second per replica. Set to 0
to disable (unlimited).
--aibridge-send-actor-headers bool, $CODER_AIBRIDGE_SEND_ACTOR_HEADERS (default: false)
Once enabled, extra headers will be added to upstream requests to
identify the user (actor) making requests to AI Bridge. This is only
needed if you are using a proxy between AI Bridge and an upstream AI
provider. This will send X-Ai-Bridge-Actor-Id (the ID of the user
making the request) and X-Ai-Bridge-Actor-Metadata-Username (their
username).
--aibridge-structured-logging bool, $CODER_AIBRIDGE_STRUCTURED_LOGGING (default: false)
Emit structured logs for AI Bridge interception records. Use this for
exporting these records to external SIEM or observability systems.
AI BRIDGE PROXY OPTIONS:
--aibridge-proxy-cert-file string, $CODER_AIBRIDGE_PROXY_CERT_FILE
Path to the CA certificate file for AI Bridge Proxy.
@@ -186,17 +161,6 @@ AI BRIDGE PROXY OPTIONS:
--aibridge-proxy-listen-addr string, $CODER_AIBRIDGE_PROXY_LISTEN_ADDR (default: :8888)
The address the AI Bridge Proxy will listen on.
--aibridge-proxy-upstream string, $CODER_AIBRIDGE_PROXY_UPSTREAM
URL of an upstream HTTP proxy to chain tunneled (non-allowlisted)
requests through. Format: http://[user:pass@]host:port or
https://[user:pass@]host:port.
--aibridge-proxy-upstream-ca string, $CODER_AIBRIDGE_PROXY_UPSTREAM_CA
Path to a PEM-encoded CA certificate to trust for the upstream proxy's
TLS connection. Only needed for HTTPS upstream proxies with
certificates not trusted by the system. If not provided, the system
certificate pool is used.
CLIENT OPTIONS:
These options change the behavior of how clients interact with the Coder.
Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.
+1 -1
@@ -42,7 +42,7 @@ OPTIONS:
pairs for the parameters.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -7,7 +7,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -14
@@ -14,25 +14,12 @@ OPTIONS:
File path for writing the generated support bundle. Defaults to
coder-support-$(date +%s).zip.
--pprof bool, $CODER_SUPPORT_BUNDLE_PPROF
Collect pprof profiling data from the Coder server and agent. Requires
Coder server version 2.28.0 or newer.
--template string, $CODER_SUPPORT_BUNDLE_TEMPLATE
Template name to include in the support bundle. Use
org_name/template_name if the template name is reused across
multiple organizations.
--url-override string, $CODER_SUPPORT_BUNDLE_URL_OVERRIDE
Override the URL to your Coder deployment. This may be useful, for
example, if you need to troubleshoot a specific Coder replica.
--workspaces-total-cap int, $CODER_SUPPORT_BUNDLE_WORKSPACES_TOTAL_CAP
Maximum number of workspaces to include in the support bundle. Set to
0 or a negative value to disable the cap. Defaults to 10.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -21,7 +21,7 @@ USAGE:
OPTIONS:
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -14,7 +14,7 @@ OPTIONS:
versions are archived.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -68,7 +68,7 @@ OPTIONS:
Specify a file path with values for Terraform-managed variables.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -12,7 +12,7 @@ OPTIONS:
Select which organization (uuid or name) to use.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -91,7 +91,7 @@ OPTIONS:
for more details.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -18,7 +18,7 @@ OPTIONS:
the template version to pull.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
--zip bool
Output the template as a zip archive to stdout.
+1 -1
View File
@@ -48,7 +48,7 @@ OPTIONS:
Specify a file path with values for Terraform-managed variables.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
@@ -11,7 +11,7 @@ OPTIONS:
Select which organization (uuid or name) to use.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
@@ -11,7 +11,7 @@ OPTIONS:
Select which organization (uuid or name) to use.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -11,7 +11,7 @@ OPTIONS:
the user may have.
-y, --yes bool
Bypass confirmation prompts.
Bypass prompts.
———
Run `coder --help` for a list of global options.
+9 -56
@@ -575,10 +575,8 @@ userQuietHoursSchedule:
# change their quiet hours schedule and the site default is always used.
# (default: true, type: bool)
allowCustomQuietHours: true
-# Allow users to rename their workspaces. WARNING: Renaming a workspace can cause
-# Terraform resources that depend on the workspace name to be destroyed and
-# recreated, potentially causing data loss. Only enable this if your templates do
-# not use workspace names in resource identifiers, or if you understand the risks.
+# DEPRECATED: Allow users to rename their workspaces. Use only for temporary
+# compatibility reasons, this will be removed in a future release.
# (default: false, type: bool)
allowWorkspaceRenames: false
# Configure how emails are sent.
@@ -748,12 +746,7 @@ aibridge:
# The base URL of the Anthropic API.
# (default: https://api.anthropic.com/, type: string)
anthropic_base_url: https://api.anthropic.com/
-# The base URL to use for the AWS Bedrock API. Use this setting to specify an
-# exact URL to use. Takes precedence over CODER_AIBRIDGE_BEDROCK_REGION.
-# (default: <unset>, type: string)
-bedrock_base_url: ""
-# The AWS Bedrock API region to use. Constructs a base URL to use for the AWS
-# Bedrock API in the form of 'https://bedrock-runtime.<region>.amazonaws.com'.
+# The AWS Bedrock API region.
# (default: <unset>, type: string)
bedrock_region: ""
# The model to use when making requests to the AWS Bedrock API.
@@ -775,39 +768,11 @@ aibridge:
# Maximum number of concurrent AI Bridge requests per replica. Set to 0 to disable
# (unlimited).
# (default: 0, type: int)
-max_concurrency: 0
+maxConcurrency: 0
# Maximum number of AI Bridge requests per second per replica. Set to 0 to disable
# (unlimited).
# (default: 0, type: int)
-rate_limit: 0
-# Emit structured logs for AI Bridge interception records. Use this for exporting
-# these records to external SIEM or observability systems.
-# (default: false, type: bool)
-structured_logging: false
-# Once enabled, extra headers will be added to upstream requests to identify the
-# user (actor) making requests to AI Bridge. This is only needed if you are using
-# a proxy between AI Bridge and an upstream AI provider. This will send
-# X-Ai-Bridge-Actor-Id (the ID of the user making the request) and
-# X-Ai-Bridge-Actor-Metadata-Username (their username).
-# (default: false, type: bool)
-send_actor_headers: false
-# Enable the circuit breaker to protect against cascading failures from upstream
-# AI provider rate limits (429, 503, 529 overloaded).
-# (default: false, type: bool)
-circuit_breaker_enabled: false
-# Number of consecutive failures that triggers the circuit breaker to open.
-# (default: 5, type: int)
-circuit_breaker_failure_threshold: 5
-# Cyclic period of the closed state for clearing internal failure counts.
-# (default: 10s, type: duration)
-circuit_breaker_interval: 10s
-# How long the circuit breaker stays open before transitioning to half-open state.
-# (default: 30s, type: duration)
-circuit_breaker_timeout: 30s
-# Maximum number of requests allowed in half-open state before deciding to close
-# or re-open the circuit.
-# (default: 3, type: int)
-circuit_breaker_max_requests: 3
+rateLimit: 0
aibridgeproxy:
# Enable the AI Bridge MITM Proxy for intercepting and decrypting AI provider
# requests.
@@ -822,25 +787,13 @@ aibridgeproxy:
# Path to the CA private key file for AI Bridge Proxy.
# (default: <unset>, type: string)
key_file: ""
-# Comma-separated list of AI provider domains for which HTTPS traffic will be
-# decrypted and routed through AI Bridge. Requests to other domains will be
-# tunneled directly without decryption. Supported domains: api.anthropic.com,
-# api.openai.com, api.individual.githubcopilot.com.
-# (default: api.anthropic.com,api.openai.com,api.individual.githubcopilot.com,
-# type: string-array)
+# Comma-separated list of domains for which HTTPS traffic will be decrypted and
+# routed through AI Bridge. Requests to other domains will be tunneled directly
+# without decryption.
+# (default: api.anthropic.com,api.openai.com, type: string-array)
domain_allowlist:
- api.anthropic.com
- api.openai.com
-- api.individual.githubcopilot.com
-# URL of an upstream HTTP proxy to chain tunneled (non-allowlisted) requests
-# through. Format: http://[user:pass@]host:port or https://[user:pass@]host:port.
-# (default: <unset>, type: string)
-upstream_proxy: ""
-# Path to a PEM-encoded CA certificate to trust for the upstream proxy's TLS
-# connection. Only needed for HTTPS upstream proxies with certificates not trusted
-# by the system. If not provided, the system certificate pool is used.
-# (default: <unset>, type: string)
-upstream_proxy_ca: ""
# Configure data retention policies for various database tables. Retention
# policies automatically purge old data to reduce database size and improve
# performance. Setting a retention duration to 0 disables automatic purging for

Some files were not shown because too many files have changed in this diff.
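Read together, the aibridge hunks above leave the changed portion of the generated YAML config looking roughly like this. This is an illustrative reconstruction assembled only from the diff context shown (indentation is assumed, since the diff extraction flattened it, and keys outside the visible hunks are omitted):

```yaml
aibridge:
  # The base URL of the Anthropic API.
  # (default: https://api.anthropic.com/, type: string)
  anthropic_base_url: https://api.anthropic.com/
  # The AWS Bedrock API region.
  # (default: <unset>, type: string)
  bedrock_region: ""
  # Maximum number of concurrent AI Bridge requests per replica. Set to 0 to
  # disable (unlimited).
  # (default: 0, type: int)
  maxConcurrency: 0
  # Maximum number of AI Bridge requests per second per replica. Set to 0 to
  # disable (unlimited).
  # (default: 0, type: int)
  rateLimit: 0
aibridgeproxy:
  # Comma-separated list of domains for which HTTPS traffic will be decrypted
  # and routed through AI Bridge. Requests to other domains will be tunneled
  # directly without decryption.
  # (default: api.anthropic.com,api.openai.com, type: string-array)
  domain_allowlist:
    - api.anthropic.com
    - api.openai.com
```

Note the key-style change in this hunk: the concurrency and rate-limit keys move from snake_case (`max_concurrency`, `rate_limit`) to camelCase (`maxConcurrency`, `rateLimit`), while the removed keys (`structured_logging`, `send_actor_headers`, the `circuit_breaker_*` group, `bedrock_base_url`, `upstream_proxy`, `upstream_proxy_ca`) have no replacement in the visible diff.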