Compare commits: coder...jakehwll/e (164 commits)

(The commit table is omitted: the page export captured only bare SHA1 values, with empty author, date, and message columns.)
`.claude/skills/doc-check/SKILL.md` — new file, 79 lines

`@@ -0,0 +1,79 @@`

````markdown
---
name: doc-check
description: Checks if code changes require documentation updates
---

# Documentation Check Skill

Review code changes and determine if documentation updates or new documentation
is needed.

## Workflow

1. **Get the code changes** - Use the method provided in the prompt, or if none
   specified:
   - For a PR: `gh pr diff <PR_NUMBER> --repo coder/coder`
   - For local changes: `git diff main` or `git diff --staged`
   - For a branch: `git diff main...<branch>`

2. **Understand the scope** - Consider what changed:
   - Is this user-facing or internal?
   - Does it change behavior, APIs, CLI flags, or configuration?
   - Even for "internal" or "chore" changes, always verify the actual diff

3. **Search the docs** for related content in `docs/`

4. **Decide what's needed**:
   - Do existing docs need updates to match the code?
   - Is new documentation needed for undocumented features?
   - Or is everything already covered?

5. **Report findings** - Use the method provided in the prompt, or if none
   specified, summarize findings directly

## What to Check

- **Accuracy**: Does documentation match current code behavior?
- **Completeness**: Are new features/options documented?
- **Examples**: Do code examples still work?
- **CLI/API changes**: Are new flags, endpoints, or options documented?
- **Configuration**: Are new environment variables or settings documented?
- **Breaking changes**: Are migration steps documented if needed?
- **Premium features**: Should docs indicate `(Premium)` in the title?

## Key Documentation Info

- **`docs/manifest.json`** - Navigation structure; new pages MUST be added here
- **`docs/reference/cli/*.md`** - Auto-generated from Go code, don't edit directly
- **Premium features** - H1 title should include `(Premium)` suffix

## Coder-Specific Patterns

### Callouts

Use GitHub-Flavored Markdown alerts:

```markdown
> [!NOTE]
> Additional helpful information.

> [!WARNING]
> Important warning about potential issues.

> [!TIP]
> Helpful tip for users.
```

### CLI Documentation

CLI docs in `docs/reference/cli/` are auto-generated. Don't suggest editing them
directly. Instead, changes should be made in the Go code that defines the CLI
commands (typically in `cli/` directory).

### Code Examples

Use `sh` for shell commands:

```sh
coder server --flag-name value
```
````
(file path missing from the export)

```diff
@@ -1,4 +1,4 @@
 #!/bin/sh

 # Start Docker service if not already running.
-sudo service docker start
+sudo service docker status >/dev/null 2>&1 || sudo service docker start
```
`.github/actions/setup-go-tools/action.yaml` — 4 changed lines

```diff
@@ -7,6 +7,6 @@ runs:
    - name: go install tools
      shell: bash
      run: |
-       go install tool
+       ./.github/scripts/retry.sh -- go install tool
        # NOTE: protoc-gen-go cannot be installed with `go get`
-       go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
+       ./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
```
`.github/actions/setup-go/action.yaml` — 8 changed lines

```diff
@@ -4,7 +4,7 @@ description: |
 inputs:
   version:
     description: "The Go version to use."
-    default: "1.24.10"
+    default: "1.25.6"
   use-preinstalled-go:
     description: "Whether to use preinstalled Go."
     default: "false"
@@ -22,14 +22,14 @@ runs:

    - name: Install gotestsum
      shell: bash
-     run: go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
+     run: ./.github/scripts/retry.sh -- go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15

    - name: Install mtimehash
      shell: bash
-     run: go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
+     run: ./.github/scripts/retry.sh -- go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0

    # It isn't necessary that we ever do this, but it helps
    # separate the "setup" from the "run" times.
    - name: go mod download
      shell: bash
-     run: go mod download -x
+     run: ./.github/scripts/retry.sh -- go mod download -x
```
`.github/actions/setup-sqlc/action.yaml` — 2 changed lines

```diff
@@ -14,4 +14,4 @@ runs:
      # - https://github.com/sqlc-dev/sqlc/pull/4159
      shell: bash
      run: |
-       CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
+       ./.github/scripts/retry.sh -- env CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
```
`.github/scripts/retry.sh` — new executable file, 50 lines

`@@ -0,0 +1,50 @@`

```sh
#!/usr/bin/env bash
# Retry a command with exponential backoff.
#
# Usage: retry.sh [--max-attempts N] -- <command...>
#
# Example:
#   retry.sh --max-attempts 3 -- go install gotest.tools/gotestsum@latest
#
# This will retry the command up to 3 times with exponential backoff
# (2s, 4s, 8s delays between attempts).

set -euo pipefail

# shellcheck source=scripts/lib.sh
source "$(dirname "${BASH_SOURCE[0]}")/../../scripts/lib.sh"

max_attempts=3

args="$(getopt -o "" -l max-attempts: -- "$@")"
eval set -- "$args"
while true; do
	case "$1" in
	--max-attempts)
		max_attempts="$2"
		shift 2
		;;
	--)
		shift
		break
		;;
	*)
		error "Unrecognized option: $1"
		;;
	esac
done

if [[ $# -lt 1 ]]; then
	error "Usage: retry.sh [--max-attempts N] -- <command...>"
fi

attempt=1
until "$@"; do
	if ((attempt >= max_attempts)); then
		error "Command failed after $max_attempts attempts: $*"
	fi
	delay=$((2 ** attempt))
	log "Attempt $attempt/$max_attempts failed, retrying in ${delay}s..."
	sleep "$delay"
	((attempt++))
done
```
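The retry loop above depends on `error` and `log` helpers sourced from `scripts/lib.sh`, so it cannot run in isolation. Here is a minimal self-contained sketch of the same exponential-backoff logic; the `flaky` function is a hypothetical stand-in for the real command, and the `sleep` is skipped so the delays are only recorded:

```sh
#!/usr/bin/env bash
set -euo pipefail

max_attempts=3
attempt=1

# Hypothetical stand-in for the retried command: fails twice, then succeeds.
fails_left=2
flaky() {
	if ((fails_left > 0)); then
		fails_left=$((fails_left - 1))
		return 1
	fi
}

delays=""
until flaky; do
	if ((attempt >= max_attempts)); then
		echo "Command failed after $max_attempts attempts" >&2
		exit 1
	fi
	# Exponential backoff: 2s after attempt 1, 4s after attempt 2, ...
	delay=$((2 ** attempt))
	delays="$delays $delay"
	# (the real script runs: sleep "$delay")
	attempt=$((attempt + 1))
done

echo "succeeded on attempt $attempt, backoff delays:$delays"
```

With `max_attempts=3` a command that fails twice still succeeds on the third try, having waited 2s and then 4s in the real script.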
`.github/workflows/ci.yaml` — 70 changed lines

```diff
@@ -40,7 +40,7 @@ jobs:
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -124,7 +124,7 @@
   #     runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
   #     steps:
   #       - name: Checkout
-  #         uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+  #         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
   #         with:
   #           fetch-depth: 1
   #           # See: https://github.com/stefanzweifel/git-auto-commit-action?tab=readme-ov-file#commits-made-by-this-action-do-not-trigger-new-workflow-runs
@@ -162,7 +162,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -176,12 +176,12 @@
       - name: Get golangci-lint cache dir
         run: |
           linter_ver=$(grep -Eo 'GOLANGCI_LINT_VERSION=\S+' dogfood/coder/Dockerfile | cut -d '=' -f 2)
-          go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
+          ./.github/scripts/retry.sh -- go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
           dir=$(golangci-lint cache status | awk '/Dir/ { print $2 }')
           echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"

       - name: golangci-lint cache
-        uses: actions/cache@9255dc7a253b0ccc959486e2bca901246202afeb # v5.0.1
+        uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
         with:
           path: |
             ${{ env.LINT_CACHE_DIR }}
@@ -256,7 +256,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -313,7 +313,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -329,7 +329,7 @@
         uses: ./.github/actions/setup-go

       - name: Install shfmt
-        run: go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
+        run: ./.github/scripts/retry.sh -- go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0

       - name: make fmt
         timeout-minutes: 7
@@ -386,7 +386,7 @@
         uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -395,6 +395,18 @@
         id: go-paths
         uses: ./.github/actions/setup-go-paths

+      # macOS default bash and coreutils are too old for our scripts
+      # (lib.sh requires bash 4+, GNU getopt, make 4+).
+      - name: Setup GNU tools (macOS)
+        if: runner.os == 'macOS'
+        run: |
+          brew install bash gnu-getopt make
+          {
+            echo "$(brew --prefix bash)/bin"
+            echo "$(brew --prefix gnu-getopt)/bin"
+            echo "$(brew --prefix make)/libexec/gnubin"
+          } >> "$GITHUB_PATH"
+
       - name: Setup Go
         uses: ./.github/actions/setup-go
         with:
@@ -559,7 +571,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -621,7 +633,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -693,7 +705,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -720,7 +732,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -753,7 +765,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -833,7 +845,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           # 👇 Ensures Chromatic can read your full git history
           fetch-depth: 0
@@ -849,7 +861,7 @@
       # the check to pass. This is desired in PRs, but not in mainline.
       - name: Publish to Chromatic (non-mainline)
         if: github.ref != 'refs/heads/main' && github.repository_owner == 'coder'
-        uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
+        uses: chromaui/action@07791f8243f4cb2698bf4d00426baf4b2d1cb7e0 # v13.3.5
         env:
           NODE_OPTIONS: "--max_old_space_size=4096"
           STORYBOOK: true
@@ -881,7 +893,7 @@
       # infinitely "in progress" in mainline unless we re-review each build.
       - name: Publish to Chromatic (mainline)
         if: github.ref == 'refs/heads/main' && github.repository_owner == 'coder'
-        uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
+        uses: chromaui/action@07791f8243f4cb2698bf4d00426baf4b2d1cb7e0 # v13.3.5
         env:
           NODE_OPTIONS: "--max_old_space_size=4096"
           STORYBOOK: true
@@ -914,7 +926,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           # 0 is required here for version.sh to work.
           fetch-depth: 0
@@ -1018,7 +1030,7 @@
     steps:
       # Harden Runner doesn't work on macOS
       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 0
           persist-credentials: false
@@ -1068,7 +1080,7 @@
       - name: Build dylibs
         run: |
           set -euxo pipefail
-          go mod download
+          ./.github/scripts/retry.sh -- go mod download

           make gen/mark-fresh
           make build/coder-dylib
@@ -1105,7 +1117,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 0
           persist-credentials: false
@@ -1117,10 +1129,10 @@
         uses: ./.github/actions/setup-go

       - name: Install go-winres
-        run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
+        run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3

       - name: Install nfpm
-        run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
+        run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1

       - name: Install zstd
         run: sudo apt-get install -y zstd
@@ -1128,7 +1140,7 @@
       - name: Build
         run: |
           set -euxo pipefail
-          go mod download
+          ./.github/scripts/retry.sh -- go mod download
           make gen/mark-fresh
           make build

@@ -1160,7 +1172,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 0
           persist-credentials: false
@@ -1207,10 +1219,10 @@
           java-version: "11.0"

       - name: Install go-winres
-        run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
+        run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3

       - name: Install nfpm
-        run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
+        run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1

       - name: Install zstd
         run: sudo apt-get install -y zstd
@@ -1258,7 +1270,7 @@
       - name: Build
         run: |
           set -euxo pipefail
-          go mod download
+          ./.github/scripts/retry.sh -- go mod download

           version="$(./scripts/version.sh)"
           tag="main-${version//+/-}"
@@ -1557,7 +1569,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
```

(file path missing from the export)

```diff
@@ -215,7 +215,7 @@
           } >> "${GITHUB_OUTPUT}"

       - name: Checkout create-task-action
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           path: ./.github/actions/create-task-action
```
`.github/workflows/code-review.yaml` — 2 changed lines

```diff
@@ -249,7 +249,7 @@
           } >> "${GITHUB_OUTPUT}"

       - name: Checkout create-task-action
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           path: ./.github/actions/create-task-action
```
`.github/workflows/deploy.yaml` — 6 changed lines

```diff
@@ -41,7 +41,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 0
           persist-credentials: false
@@ -70,7 +70,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 0
           persist-credentials: false
@@ -151,7 +151,7 @@
           egress-policy: audit

       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 0
           persist-credentials: false
```
338
.github/workflows/doc-check.yaml
vendored
338
.github/workflows/doc-check.yaml
vendored
@@ -2,14 +2,26 @@
|
||||
# It creates a Coder Task that uses AI to analyze the PR changes,
|
||||
# search existing docs, and comment with recommendations.
|
||||
#
|
||||
# Triggered by: Adding the "doc-check" label to a PR, or manual dispatch.
|
||||
# Triggers:
|
||||
# - New PR opened: Initial documentation review
|
||||
# - PR updated (synchronize): Re-review after changes
|
||||
# - Label "doc-check" added: Manual trigger for review
|
||||
# - PR marked ready for review: Review when draft is promoted
|
||||
# - Workflow dispatch: Manual run with PR URL
|
||||
#
|
||||
# Note: This workflow requires access to secrets and will be skipped for:
|
||||
# - Any PR where secrets are not available
|
||||
# For these PRs, maintainers can manually trigger via workflow_dispatch.
|
||||
|
||||
name: AI Documentation Check
|
||||
|
||||
on:
|
||||
pull_request:
|
||||
types:
|
||||
- opened
|
||||
- synchronize
|
||||
- labeled
|
||||
- ready_for_review
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
pr_url:
|
||||
@@ -26,8 +38,16 @@ jobs:
|
||||
doc-check:
|
||||
name: Analyze PR for Documentation Updates Needed
|
||||
runs-on: ubuntu-latest
|
||||
# Run on: opened, synchronize, labeled (with doc-check label), ready_for_review, or workflow_dispatch
|
||||
# Skip draft PRs unless manually triggered
|
||||
if: |
|
||||
(github.event.label.name == 'doc-check' || github.event_name == 'workflow_dispatch') &&
|
||||
(
|
||||
github.event.action == 'opened' ||
|
||||
github.event.action == 'synchronize' ||
|
||||
github.event.label.name == 'doc-check' ||
|
||||
github.event.action == 'ready_for_review' ||
|
||||
github.event_name == 'workflow_dispatch'
|
||||
) &&
|
||||
(github.event.pull_request.draft == false || github.event_name == 'workflow_dispatch')
|
||||
timeout-minutes: 30
|
||||
env:
|
||||
@@ -39,120 +59,154 @@ jobs:
|
||||
actions: write
|
||||
|
||||
steps:
|
||||
- name: Check if secrets are available
|
||||
id: check-secrets
|
||||
env:
|
||||
CODER_URL: ${{ secrets.DOC_CHECK_CODER_URL }}
|
||||
CODER_TOKEN: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
|
||||
run: |
|
||||
if [[ -z "${CODER_URL}" || -z "${CODER_TOKEN}" ]]; then
|
||||
echo "skip=true" >> "${GITHUB_OUTPUT}"
|
||||
echo "Secrets not available - skipping doc-check."
|
||||
echo "This is expected for PRs where secrets are not available."
|
||||
echo "Maintainers can manually trigger via workflow_dispatch if needed."
|
||||
{
|
||||
echo "⚠️ Workflow skipped: Secrets not available"
|
||||
echo ""
|
||||
echo "This workflow requires secrets that are unavailable for this run."
|
||||
echo "Maintainers can manually trigger via workflow_dispatch if needed."
|
||||
} >> "${GITHUB_STEP_SUMMARY}"
|
||||
else
|
||||
echo "skip=false" >> "${GITHUB_OUTPUT}"
|
||||
fi
|
||||
|
||||
- name: Setup Coder CLI
|
||||
if: steps.check-secrets.outputs.skip != 'true'
|
||||
uses: coder/setup-action@4a607a8113d4e676e2d7c34caa20a814bc88bfda # v1
|
||||
with:
|
||||
access_url: ${{ secrets.DOC_CHECK_CODER_URL }}
|
||||
coder_session_token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
|
||||
|
||||
- name: Determine PR Context
|
||||
if: steps.check-secrets.outputs.skip != 'true'
|
||||
id: determine-context
|
||||
env:
|
||||
GITHUB_ACTOR: ${{ github.actor }}
|
||||
GITHUB_EVENT_NAME: ${{ github.event_name }}
|
||||
GITHUB_EVENT_ACTION: ${{ github.event.action }}
|
||||
GITHUB_EVENT_PR_HTML_URL: ${{ github.event.pull_request.html_url }}
|
||||
GITHUB_EVENT_PR_NUMBER: ${{ github.event.pull_request.number }}
|
||||
GITHUB_EVENT_SENDER_ID: ${{ github.event.sender.id }}
|
||||
GITHUB_EVENT_SENDER_LOGIN: ${{ github.event.sender.login }}
|
||||
INPUTS_PR_URL: ${{ inputs.pr_url }}
|
||||
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || '' }}
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
|
||||
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"
|
||||
|
||||
# For workflow_dispatch, use the provided PR URL
|
||||
# Determine trigger type for task context
|
||||
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
|
||||
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
|
||||
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
|
||||
echo "trigger_type=manual" >> "${GITHUB_OUTPUT}"
|
||||
echo "Using PR URL: ${INPUTS_PR_URL}"
|
||||
|
||||
# Validate PR URL format
|
||||
if [[ ! "${INPUTS_PR_URL}" =~ ^https://github\.com/[^/]+/[^/]+/pull/[0-9]+$ ]]; then
|
||||
echo "::error::Invalid PR URL format: ${INPUTS_PR_URL}"
|
||||
echo "::error::Expected format: https://github.com/owner/repo/pull/NUMBER"
|
||||
exit 1
|
||||
fi
|
||||
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
|
||||
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
|
||||
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"
|
||||
|
||||
echo "Using PR URL: ${INPUTS_PR_URL}"
|
||||
# Convert /pull/ to /issues/ for create-task-action compatibility
|
||||
ISSUE_URL="${INPUTS_PR_URL/\/pull\//\/issues\/}"
|
||||
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
|
||||
|
||||
# Extract PR number from URL for later use
|
||||
PR_NUMBER=$(echo "${INPUTS_PR_URL}" | grep -oP '(?<=pull/)\d+')
|
||||
echo "pr_number=${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
|
||||
|
||||
elif [[ "${GITHUB_EVENT_NAME}" == "pull_request" ]]; then
|
||||
GITHUB_USER_ID=${GITHUB_EVENT_SENDER_ID}
|
||||
echo "Using label adder: ${GITHUB_EVENT_SENDER_LOGIN} (ID: ${GITHUB_USER_ID})"
|
||||
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
|
||||
echo "github_username=${GITHUB_EVENT_SENDER_LOGIN}" >> "${GITHUB_OUTPUT}"
|
||||
|
||||
echo "Using PR URL: ${GITHUB_EVENT_PR_HTML_URL}"
|
||||
# Convert /pull/ to /issues/ for create-task-action compatibility
|
||||
ISSUE_URL="${GITHUB_EVENT_PR_HTML_URL/\/pull\//\/issues\/}"
|
||||
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
|
||||
echo "pr_number=${GITHUB_EVENT_PR_NUMBER}" >> "${GITHUB_OUTPUT}"
|
||||
|
||||
# Set trigger type based on action
|
||||
case "${GITHUB_EVENT_ACTION}" in
|
||||
opened)
|
||||
echo "trigger_type=new_pr" >> "${GITHUB_OUTPUT}"
|
||||
;;
|
||||
synchronize)
|
||||
echo "trigger_type=pr_updated" >> "${GITHUB_OUTPUT}"
|
||||
;;
|
||||
labeled)
|
||||
echo "trigger_type=label_requested" >> "${GITHUB_OUTPUT}"
|
||||
;;
|
||||
ready_for_review)
|
||||
echo "trigger_type=ready_for_review" >> "${GITHUB_OUTPUT}"
|
||||
;;
|
||||
*)
|
||||
echo "trigger_type=unknown" >> "${GITHUB_OUTPUT}"
|
||||
;;
|
||||
esac
|
||||
|
||||
else
|
||||
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Extract changed files and build prompt
|
||||
- name: Build task prompt
|
||||
if: steps.check-secrets.outputs.skip != 'true'
|
||||
id: extract-context
|
||||
env:
|
||||
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
|
||||
PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
TRIGGER_TYPE: ${{ steps.determine-context.outputs.trigger_type }}
|
||||
run: |
|
||||
echo "Analyzing PR #${PR_NUMBER}"
|
||||
echo "Analyzing PR #${PR_NUMBER} (trigger: ${TRIGGER_TYPE})"
|
||||
|
||||
# Build task prompt - using unquoted heredoc so variables expand
|
||||
TASK_PROMPT=$(cat <<EOF
|
||||
Review PR #${PR_NUMBER} and determine if documentation needs updating or creating.
|
||||
# Build context based on trigger type
|
||||
case "${TRIGGER_TYPE}" in
|
||||
new_pr)
|
||||
CONTEXT="This is a NEW PR. Perform a thorough documentation review."
|
||||
;;
|
||||
pr_updated)
|
||||
CONTEXT="This PR was UPDATED with new commits. Only comment if the changes affect documentation needs or address previous feedback."
|
||||
;;
|
||||
label_requested)
|
||||
CONTEXT="A documentation review was REQUESTED via label. Perform a thorough documentation review."
|
||||
;;
|
||||
ready_for_review)
|
||||
CONTEXT="This PR was marked READY FOR REVIEW (converted from draft). Perform a thorough documentation review."
|
||||
;;
|
||||
manual)
|
||||
CONTEXT="This is a MANUAL review request. Perform a thorough documentation review."
|
||||
;;
|
||||
*)
|
||||
CONTEXT="Perform a thorough documentation review."
|
||||
;;
|
||||
esac
|
||||
|
||||
PR URL: ${PR_URL}
# Build task prompt with PR-specific context
TASK_PROMPT="Use the doc-check skill to review PR #${PR_NUMBER} in coder/coder.

WORKFLOW:
1. Setup (repo is pre-cloned at ~/coder)
   cd ~/coder
   git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER}
   git checkout pr-${PR_NUMBER}
${CONTEXT}

2. Get PR info
   Use GitHub MCP tools to get PR title, body, and diff
   Or use: git diff main...pr-${PR_NUMBER}
   Use \`gh\` to get PR details, diff, and all comments. Check for previous doc-check comments (from coder-doc-check) and only post a new comment if it adds value.

3. Understand Changes
   Read the diff and identify what changed
   Ask: Is this user-facing? Does it change behavior? Is it a new feature?
   **Do not comment if no documentation changes are needed.**

4. Search for Related Docs
   cat ~/coder/docs/manifest.json | jq '.routes[] | {title, path}' | head -50
   grep -ri "relevant_term" ~/coder/docs/ --include="*.md"
## Comment format

5. Decide
   NEEDS DOCS if: New feature, API change, CLI change, behavior change, user-visible
   NO DOCS if: Internal refactor, test-only, already documented, non-user-facing, dependency updates
   FIRST check: Did this PR already update docs? If yes and complete, say "No Changes Needed"
Use this structure (only include relevant sections):

6. Comment on the PR using this format
\`\`\`
## Documentation Check

COMMENT FORMAT:
## 📚 Documentation Check
### Previous Feedback
[For re-reviews only: Addressed | Partially addressed | Not yet addressed]

### ✅ Updates Needed
- **[docs/path/file.md](github_link)** - Brief what needs changing
### Updates Needed
- [ ] \`docs/path/file.md\` - [what needs to change]

### 📝 New Docs Needed
- **docs/suggested/location.md** - What should be documented

### ✨ No Changes Needed
[Reason: Documents already updated in PR | Internal changes only | Test-only | No user-facing impact]
### New Documentation Needed
- [ ] \`docs/suggested/path.md\` - [what should be documented]

---
*This comment was generated by an AI Agent through [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*

DOCS STRUCTURE:
Read ~/coder/docs/manifest.json for the complete documentation structure.
Common areas include: reference/, admin/, user-guides/, ai-coder/, install/, tutorials/
But check manifest.json - it has everything.

EOF
)
*Automated review via [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
\`\`\`"

# Output the prompt
{
@@ -162,7 +216,8 @@ jobs:
          } >> "${GITHUB_OUTPUT}"

      - name: Checkout create-task-action
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        if: steps.check-secrets.outputs.skip != 'true'
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 1
          path: ./.github/actions/create-task-action
@@ -171,22 +226,24 @@ jobs:
          repository: coder/create-task-action

      - name: Create Coder Task for Documentation Check
        if: steps.check-secrets.outputs.skip != 'true'
        id: create_task
        uses: ./.github/actions/create-task-action
        with:
          coder-url: ${{ secrets.DOC_CHECK_CODER_URL }}
          coder-token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
          coder-organization: "default"
          coder-template-name: coder
          coder-template-name: coder-workflow-bot
          coder-template-preset: ${{ steps.determine-context.outputs.template_preset }}
          coder-task-name-prefix: doc-check
          coder-task-prompt: ${{ steps.extract-context.outputs.task_prompt }}
          github-user-id: ${{ steps.determine-context.outputs.github_user_id }}
          coder-username: doc-check-bot
          github-token: ${{ github.token }}
          github-issue-url: ${{ steps.determine-context.outputs.pr_url }}
          comment-on-issue: true
          comment-on-issue: false

      - name: Write outputs
      - name: Write Task Info
        if: steps.check-secrets.outputs.skip != 'true'
        env:
          TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
          TASK_NAME: ${{ steps.create_task.outputs.task-name }}
@@ -201,5 +258,140 @@ jobs:
          echo "**Task name:** ${TASK_NAME}"
          echo "**Task URL:** ${TASK_URL}"
          echo ""
          echo "The Coder task is analyzing the PR changes and will comment with documentation recommendations."
        } >> "${GITHUB_STEP_SUMMARY}"

      - name: Wait for Task Completion
        if: steps.check-secrets.outputs.skip != 'true'
        id: wait_task
        env:
          TASK_NAME: ${{ steps.create_task.outputs.task-name }}
        run: |
          echo "Waiting for task to complete..."
          echo "Task name: ${TASK_NAME}"

          if [[ -z "${TASK_NAME}" ]]; then
            echo "::error::TASK_NAME is empty"
            exit 1
          fi

          MAX_WAIT=600 # 10 minutes
          WAITED=0
          POLL_INTERVAL=3
          LAST_STATUS=""

          is_workspace_message() {
            local msg="$1"
            [[ -z "$msg" ]] && return 0 # Empty = treat as workspace/startup
            [[ "$msg" =~ ^Workspace ]] && return 0
            [[ "$msg" =~ ^Agent ]] && return 0
            return 1
          }

          while [[ $WAITED -lt $MAX_WAIT ]]; do
            # Get task status (|| true prevents set -e from exiting on non-zero)
            RAW_OUTPUT=$(coder task status "${TASK_NAME}" -o json 2>&1) || true
            STATUS_JSON=$(echo "$RAW_OUTPUT" | grep -v "^version mismatch\|^download v" || true)

            # Debug: show first poll's raw output
            if [[ $WAITED -eq 0 ]]; then
              echo "Raw status output: ${RAW_OUTPUT:0:500}"
            fi

            if [[ -z "$STATUS_JSON" ]] || ! echo "$STATUS_JSON" | jq -e . >/dev/null 2>&1; then
              if [[ "$LAST_STATUS" != "waiting" ]]; then
                echo "[${WAITED}s] Waiting for task status..."
                LAST_STATUS="waiting"
              fi
              sleep $POLL_INTERVAL
              WAITED=$((WAITED + POLL_INTERVAL))
              continue
            fi

            TASK_STATE=$(echo "$STATUS_JSON" | jq -r '.current_state.state // "unknown"')
            TASK_MESSAGE=$(echo "$STATUS_JSON" | jq -r '.current_state.message // ""')
            WORKSPACE_STATUS=$(echo "$STATUS_JSON" | jq -r '.workspace_status // "unknown"')

            # Build current status string for comparison
            CURRENT_STATUS="${TASK_STATE}|${WORKSPACE_STATUS}|${TASK_MESSAGE}"

            # Only log if status changed
            if [[ "$CURRENT_STATUS" != "$LAST_STATUS" ]]; then
              if [[ "$TASK_STATE" == "idle" ]] && is_workspace_message "$TASK_MESSAGE"; then
                echo "[${WAITED}s] Workspace ready, waiting for Agent..."
              else
                echo "[${WAITED}s] State: ${TASK_STATE} | Workspace: ${WORKSPACE_STATUS} | ${TASK_MESSAGE}"
              fi
              LAST_STATUS="$CURRENT_STATUS"
            fi

            if [[ "$WORKSPACE_STATUS" == "failed" || "$WORKSPACE_STATUS" == "canceled" ]]; then
              echo "::error::Workspace failed: ${WORKSPACE_STATUS}"
              exit 1
            fi

            if [[ "$TASK_STATE" == "idle" ]]; then
              if ! is_workspace_message "$TASK_MESSAGE"; then
                # Real completion message from Claude!
                echo ""
                echo "Task completed: ${TASK_MESSAGE}"
                RESULT_URI=$(echo "$STATUS_JSON" | jq -r '.current_state.uri // ""')
                echo "result_uri=${RESULT_URI}" >> "${GITHUB_OUTPUT}"
                echo "task_message=${TASK_MESSAGE}" >> "${GITHUB_OUTPUT}"
                break
              fi
            fi

            sleep $POLL_INTERVAL
            WAITED=$((WAITED + POLL_INTERVAL))
          done

          if [[ $WAITED -ge $MAX_WAIT ]]; then
            echo "::error::Task monitoring timed out after ${MAX_WAIT}s"
            exit 1
          fi

      - name: Fetch Task Logs
        if: always() && steps.check-secrets.outputs.skip != 'true'
        env:
          TASK_NAME: ${{ steps.create_task.outputs.task-name }}
        run: |
          echo "::group::Task Conversation Log"
          if [[ -n "${TASK_NAME}" ]]; then
            coder task logs "${TASK_NAME}" 2>&1 || echo "Failed to fetch logs"
          else
            echo "No task name, skipping log fetch"
          fi
          echo "::endgroup::"

      - name: Cleanup Task
        if: always() && steps.check-secrets.outputs.skip != 'true'
        env:
          TASK_NAME: ${{ steps.create_task.outputs.task-name }}
        run: |
          if [[ -n "${TASK_NAME}" ]]; then
            echo "Deleting task: ${TASK_NAME}"
            coder task delete "${TASK_NAME}" -y 2>&1 || echo "Task deletion failed or already deleted"
          else
            echo "No task name, skipping cleanup"
          fi

      - name: Write Final Summary
        if: always() && steps.check-secrets.outputs.skip != 'true'
        env:
          TASK_NAME: ${{ steps.create_task.outputs.task-name }}
          TASK_MESSAGE: ${{ steps.wait_task.outputs.task_message }}
          RESULT_URI: ${{ steps.wait_task.outputs.result_uri }}
          PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
        run: |
          {
            echo ""
            echo "---"
            echo "### Result"
            echo ""
            echo "**Status:** ${TASK_MESSAGE:-Task completed}"
            if [[ -n "${RESULT_URI}" ]]; then
              echo "**Comment:** ${RESULT_URI}"
            fi
            echo ""
            echo "Task \`${TASK_NAME}\` has been cleaned up."
          } >> "${GITHUB_STEP_SUMMARY}"

2 .github/workflows/docker-base.yaml vendored
@@ -43,7 +43,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

2 .github/workflows/docs-ci.yaml vendored
@@ -23,7 +23,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

6 .github/workflows/dogfood.yaml vendored
@@ -31,7 +31,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

@@ -42,7 +42,7 @@ jobs:
          # on version 2.29 and above.
          nix_version: "2.28.5"

      - uses: nix-community/cache-nix-action@b426b118b6dc86d6952988d396aa7c6b09776d08 # v7.0.0
      - uses: nix-community/cache-nix-action@106bba72ed8e29c8357661199511ef07790175e9 # v7.0.1
        with:
          # restore and save a cache using this key
          primary-key: nix-${{ runner.os }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
@@ -130,7 +130,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

2 .github/workflows/nightly-gauntlet.yaml vendored
@@ -54,7 +54,7 @@ jobs:
        uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 1
          persist-credentials: false

8 .github/workflows/pr-deploy.yaml vendored
@@ -44,7 +44,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

@@ -81,7 +81,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 0
          persist-credentials: false
@@ -233,7 +233,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 0
          persist-credentials: false
@@ -337,7 +337,7 @@ jobs:
          kubectl create namespace "pr${PR_NUMBER}"

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

14 .github/workflows/release.yaml vendored
@@ -65,7 +65,7 @@ jobs:
    steps:
      # Harden Runner doesn't work on macOS.
      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 0
          persist-credentials: false
@@ -121,7 +121,7 @@ jobs:
      - name: Build dylibs
        run: |
          set -euxo pipefail
          go mod download
          ./.github/scripts/retry.sh -- go mod download

          make gen/mark-fresh
          make build/coder-dylib
@@ -169,7 +169,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 0
          persist-credentials: false
@@ -259,7 +259,7 @@ jobs:
          java-version: "11.0"

      - name: Install go-winres
        run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
        run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3

      - name: Install nsis and zstd
        run: sudo apt-get install -y nsis zstd
@@ -341,7 +341,7 @@ jobs:
      - name: Build binaries
        run: |
          set -euo pipefail
          go mod download
          ./.github/scripts/retry.sh -- go mod download

          version="$(./scripts/version.sh)"
          make gen/mark-fresh
@@ -888,7 +888,7 @@ jobs:
          GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 0
          persist-credentials: false
@@ -976,7 +976,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 1
          persist-credentials: false

2 .github/workflows/scorecard.yml vendored
@@ -25,7 +25,7 @@ jobs:
          egress-policy: audit

      - name: "Checkout code"
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

10 .github/workflows/security.yaml vendored
@@ -32,7 +32,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

@@ -74,7 +74,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 0
          persist-credentials: false
@@ -97,11 +97,11 @@ jobs:
      - name: Install yq
        run: go run github.com/mikefarah/yq/v4@v4.44.3
      - name: Install mockgen
        run: go install go.uber.org/mock/mockgen@v0.5.0
        run: ./.github/scripts/retry.sh -- go install go.uber.org/mock/mockgen@v0.6.0
      - name: Install protoc-gen-go
        run: go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
        run: ./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
      - name: Install protoc-gen-go-drpc
        run: go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
        run: ./.github/scripts/retry.sh -- go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
      - name: Install Protoc
        run: |
          # protoc must be in lockstep with our dogfood Dockerfile or the

2 .github/workflows/stale.yaml vendored
@@ -101,7 +101,7 @@ jobs:
          egress-policy: audit

      - name: Checkout repository
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false
      - name: Run delete-old-branches-action

2 .github/workflows/traiage.yaml vendored
@@ -153,7 +153,7 @@ jobs:
          } >> "${GITHUB_OUTPUT}"

      - name: Checkout repository
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          fetch-depth: 1
          path: ./.github/actions/create-task-action

2 .github/workflows/weekly-docs.yaml vendored
@@ -26,7 +26,7 @@ jobs:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

11 Makefile
@@ -69,6 +69,9 @@ MOST_GO_SRC_FILES := $(shell \
# All the shell files in the repo, excluding ignored files.
SHELL_SRC_FILES := $(shell find . $(FIND_EXCLUSIONS) -type f -name '*.sh')

MIGRATION_FILES := $(shell find ./coderd/database/migrations/ -maxdepth 1 $(FIND_EXCLUSIONS) -type f -name '*.sql')
FIXTURE_FILES := $(shell find ./coderd/database/migrations/testdata/fixtures/ $(FIND_EXCLUSIONS) -type f -name '*.sql')

# Ensure we don't use the user's git configs which might cause side-effects
GIT_FLAGS = GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_SYSTEM=/dev/null

@@ -561,7 +564,7 @@ endif

# Note: we don't run zizmor in the lint target because it takes a while. CI
# runs it explicitly.
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes lint/migrations
.PHONY: lint

lint/site-icons:
@@ -619,6 +622,12 @@ lint/check-scopes: coderd/database/dump.sql
	go run ./scripts/check-scopes
.PHONY: lint/check-scopes

# Verify migrations do not hardcode the public schema.
lint/migrations:
	./scripts/check_pg_schema.sh "Migrations" $(MIGRATION_FILES)
	./scripts/check_pg_schema.sh "Fixtures" $(FIXTURE_FILES)
.PHONY: lint/migrations
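The contents of `./scripts/check_pg_schema.sh` are not shown in this diff; a hypothetical sketch of what such a "no hardcoded public schema" check could look like (the function name and messages are illustrative, not the real script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: flag any SQL file that hardcodes the "public." schema
# prefix, mirroring the shape of the lint/migrations target above.
check_pg_schema() {
  local label="$1"
  shift
  local failed=0
  local f
  for f in "$@"; do
    # Print offending lines with line numbers; any match marks the run failed.
    if grep -n 'public\.' "$f"; then
      echo "${label}: ${f} hardcodes the public schema" >&2
      failed=1
    fi
  done
  return "$failed"
}
```

A target like `lint/migrations` would then fail the build whenever a migration or fixture file references `public.` directly.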

# All files generated by the database should be added here, and this can be used
# as a target for jobs that need to run after the database is generated.
DB_GEN_FILES := \

@@ -40,6 +40,7 @@ import (
	"github.com/coder/clistat"
	"github.com/coder/coder/v2/agent/agentcontainers"
	"github.com/coder/coder/v2/agent/agentexec"
	"github.com/coder/coder/v2/agent/agentfiles"
	"github.com/coder/coder/v2/agent/agentscripts"
	"github.com/coder/coder/v2/agent/agentsocket"
	"github.com/coder/coder/v2/agent/agentssh"
@@ -295,6 +296,8 @@ type agent struct {
	containerAPIOptions []agentcontainers.Option
	containerAPI        *agentcontainers.API

	filesAPI *agentfiles.API

	socketServerEnabled bool
	socketPath          string
	socketServer        *agentsocket.Server
@@ -365,6 +368,8 @@ func (a *agent) init() {

	a.containerAPI = agentcontainers.NewAPI(a.logger.Named("containers"), containerAPIOpts...)

	a.filesAPI = agentfiles.NewAPI(a.logger.Named("files"), a.filesystem)

	a.reconnectingPTYServer = reconnectingpty.NewServer(
		a.logger.Named("reconnecting-pty"),
		a.sshServer,
@@ -877,12 +882,16 @@ const (
)

func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_Type, ip string) (disconnected func(code int, reason string)) {
	// Remove the port from the IP because ports are not supported in coderd.
	if host, _, err := net.SplitHostPort(ip); err != nil {
		a.logger.Error(a.hardCtx, "split host and port for connection report failed", slog.F("ip", ip), slog.Error(err))
	} else {
		// Best effort.
		ip = host
	// A blank IP can unfortunately happen if the connection is broken in a data race before we get to introspect it. We
	// still report it, and the recipient can handle a blank IP.
	if ip != "" {
		// Remove the port from the IP because ports are not supported in coderd.
		if host, _, err := net.SplitHostPort(ip); err != nil {
			a.logger.Error(a.hardCtx, "split host and port for connection report failed", slog.F("ip", ip), slog.Error(err))
		} else {
			// Best effort.
			ip = host
		}
	}

	// If the IP is "localhost" (which it can be in some cases), set it to
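The guarded port-stripping logic above can be sketched as a standalone function (`stripPort` is a hypothetical name; the real code does this inline and also logs split failures):

```go
package main

import (
	"fmt"
	"net"
)

// stripPort mirrors the guarded logic in reportConnection: skip blank IPs,
// strip the port when one is present, and fall back to the original value
// when net.SplitHostPort fails (e.g. no port at all).
func stripPort(ip string) string {
	if ip == "" {
		return ip // broken connections can report a blank IP; pass it through
	}
	if host, _, err := net.SplitHostPort(ip); err == nil {
		return host // best effort: drop the port
	}
	return ip
}

func main() {
	fmt.Println(stripPort("192.0.2.1:4242")) // 192.0.2.1
	fmt.Println(stripPort("192.0.2.1"))      // 192.0.2.1 (no port to strip)
}
```

The empty-string guard matters because `net.SplitHostPort("")` returns a "missing port" error, which would otherwise be logged on every broken connection.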
@@ -121,7 +121,8 @@ func TestAgent_ImmediateClose(t *testing.T) {
	require.NoError(t, err)
}

// NOTE: These tests only work when your default shell is bash for some reason.
// NOTE(Cian): I noticed that these tests would fail when my default shell was zsh.
// Writing "exit 0" to stdin before closing fixed the issue for me.

func TestAgent_Stats_SSH(t *testing.T) {
	t.Parallel()
@@ -148,16 +149,37 @@ func TestAgent_Stats_SSH(t *testing.T) {
	require.NoError(t, err)

	var s *proto.Stats
	// We are looking for four different stats to be reported. They might not all
	// arrive at the same time, so we loop until we've seen them all.
	var connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountSSHSeen bool
	require.Eventuallyf(t, func() bool {
		var ok bool
		s, ok = <-stats
		return ok && s.ConnectionCount > 0 && s.RxBytes > 0 && s.TxBytes > 0 && s.SessionCountSsh == 1
		if !ok {
			return false
		}
		if s.ConnectionCount > 0 {
			connectionCountSeen = true
		}
		if s.RxBytes > 0 {
			rxBytesSeen = true
		}
		if s.TxBytes > 0 {
			txBytesSeen = true
		}
		if s.SessionCountSsh == 1 {
			sessionCountSSHSeen = true
		}
		return connectionCountSeen && rxBytesSeen && txBytesSeen && sessionCountSSHSeen
	}, testutil.WaitLong, testutil.IntervalFast,
		"never saw stats: %+v", s,
		"never saw all stats: %+v, saw connectionCount: %t, rxBytes: %t, txBytes: %t, sessionCountSsh: %t",
		s, connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountSSHSeen,
	)
	_, err = stdin.Write([]byte("exit 0\n"))
	require.NoError(t, err, "writing exit to stdin")
	_ = stdin.Close()
	err = session.Wait()
	require.NoError(t, err)
	require.NoError(t, err, "waiting for session to exit")
	})
}
}
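The fix in the test above accumulates each condition across stats reports instead of requiring one report to satisfy all four at once. A minimal sketch of that accumulation pattern, with a hypothetical `stats` struct standing in for `proto.Stats`:

```go
package main

import "fmt"

// stats is a hypothetical stand-in for the proto.Stats fields the test checks.
type stats struct {
	ConnectionCount, RxBytes, TxBytes, SessionCountSSH int64
}

// seenAll sticks each flag on once its condition has held in any report, so
// the conditions need not all be true in the same report.
func seenAll(reports <-chan stats) bool {
	var conn, rx, tx, sess bool
	for s := range reports {
		conn = conn || s.ConnectionCount > 0
		rx = rx || s.RxBytes > 0
		tx = tx || s.TxBytes > 0
		sess = sess || s.SessionCountSSH == 1
		if conn && rx && tx && sess {
			return true
		}
	}
	return false
}

func main() {
	ch := make(chan stats, 3)
	ch <- stats{ConnectionCount: 1}
	ch <- stats{RxBytes: 10, TxBytes: 5}
	ch <- stats{SessionCountSSH: 1}
	close(ch)
	fmt.Println(seenAll(ch)) // true: all four conditions held, across reports
}
```

This avoids flakes when, say, byte counters and the SSH session count arrive in different reports.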
@@ -183,12 +205,31 @@ func TestAgent_Stats_ReconnectingPTY(t *testing.T) {
	require.NoError(t, err)

	var s *proto.Stats
	// We are looking for four different stats to be reported. They might not all
	// arrive at the same time, so we loop until we've seen them all.
	var connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountReconnectingPTYSeen bool
	require.Eventuallyf(t, func() bool {
		var ok bool
		s, ok = <-stats
		return ok && s.ConnectionCount > 0 && s.RxBytes > 0 && s.TxBytes > 0 && s.SessionCountReconnectingPty == 1
		if !ok {
			return false
		}
		if s.ConnectionCount > 0 {
			connectionCountSeen = true
		}
		if s.RxBytes > 0 {
			rxBytesSeen = true
		}
		if s.TxBytes > 0 {
			txBytesSeen = true
		}
		if s.SessionCountReconnectingPty == 1 {
			sessionCountReconnectingPTYSeen = true
		}
		return connectionCountSeen && rxBytesSeen && txBytesSeen && sessionCountReconnectingPTYSeen
	}, testutil.WaitLong, testutil.IntervalFast,
		"never saw stats: %+v", s,
		"never saw all stats: %+v, saw connectionCount: %t, rxBytes: %t, txBytes: %t, sessionCountReconnectingPTY: %t",
		s, connectionCountSeen, rxBytesSeen, txBytesSeen, sessionCountReconnectingPTYSeen,
	)
}

@@ -218,9 +259,10 @@ func TestAgent_Stats_Magic(t *testing.T) {
	require.NoError(t, err)
	require.Equal(t, expected, strings.TrimSpace(string(output)))
	})

	t.Run("TracksVSCode", func(t *testing.T) {
		t.Parallel()
		if runtime.GOOS == "window" {
		if runtime.GOOS == "windows" {
			t.Skip("Sleeping for infinity doesn't work on Windows")
		}
		ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
@@ -252,7 +294,9 @@ func TestAgent_Stats_Magic(t *testing.T) {
		}, testutil.WaitLong, testutil.IntervalFast,
			"never saw stats",
		)
		// The shell will automatically exit if there is no stdin!

		_, err = stdin.Write([]byte("exit 0\n"))
		require.NoError(t, err, "writing exit to stdin")
		_ = stdin.Close()
		err = session.Wait()
		require.NoError(t, err)
@@ -3633,9 +3677,11 @@ func TestAgent_Metrics_SSH(t *testing.T) {
		}
	}

	_, err = stdin.Write([]byte("exit 0\n"))
	require.NoError(t, err, "writing exit to stdin")
	_ = stdin.Close()
	err = session.Wait()
	require.NoError(t, err)
	require.NoError(t, err, "waiting for session to exit")
}

// echoOnce accepts a single connection, reads 4 bytes and echos them back

@@ -779,10 +779,13 @@ func (api *API) watchContainers(rw http.ResponseWriter, r *http.Request) {
	// close frames.
	_ = conn.CloseRead(context.Background())

	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	ctx, wsNetConn := codersdk.WebsocketNetConn(ctx, conn, websocket.MessageText)
	defer wsNetConn.Close()

	go httpapi.Heartbeat(ctx, conn)
	go httpapi.HeartbeatClose(ctx, api.logger, cancel, conn)

	updateCh := make(chan struct{}, 1)

36 agent/agentfiles/api.go Normal file
@@ -0,0 +1,36 @@
package agentfiles

import (
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/spf13/afero"

	"cdr.dev/slog/v3"
)

// API exposes file-related operations performed through the agent.
type API struct {
	logger     slog.Logger
	filesystem afero.Fs
}

func NewAPI(logger slog.Logger, filesystem afero.Fs) *API {
	api := &API{
		logger:     logger,
		filesystem: filesystem,
	}
	return api
}

// Routes returns the HTTP handler for file-related routes.
func (api *API) Routes() http.Handler {
	r := chi.NewRouter()

	r.Post("/list-directory", api.HandleLS)
	r.Get("/read-file", api.HandleReadFile)
	r.Post("/write-file", api.HandleWriteFile)
	r.Post("/edit-files", api.HandleEditFiles)

	return r
}
@@ -1,4 +1,4 @@
package agent
package agentfiles

import (
	"context"
@@ -25,7 +25,7 @@ import (

type HTTPResponseCode = int

func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
func (api *API) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	query := r.URL.Query()
@@ -42,7 +42,7 @@ func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
		return
	}

	status, err := a.streamFile(ctx, rw, path, offset, limit)
	status, err := api.streamFile(ctx, rw, path, offset, limit)
	if err != nil {
		httpapi.Write(ctx, rw, status, codersdk.Response{
			Message: err.Error(),
@@ -51,12 +51,12 @@ func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
	}
}

func (a *agent) streamFile(ctx context.Context, rw http.ResponseWriter, path string, offset, limit int64) (HTTPResponseCode, error) {
func (api *API) streamFile(ctx context.Context, rw http.ResponseWriter, path string, offset, limit int64) (HTTPResponseCode, error) {
	if !filepath.IsAbs(path) {
		return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
	}

	f, err := a.filesystem.Open(path)
	f, err := api.filesystem.Open(path)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
@@ -97,13 +97,13 @@ func (a *agent) streamFile(ctx context.Context, rw http.ResponseWriter, path str
	reader := io.NewSectionReader(f, offset, bytesToRead)
	_, err = io.Copy(rw, reader)
	if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
		a.logger.Error(ctx, "workspace agent read file", slog.Error(err))
		api.logger.Error(ctx, "workspace agent read file", slog.Error(err))
	}

	return 0, nil
}

func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
func (api *API) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	query := r.URL.Query()
@@ -118,7 +118,7 @@ func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
		return
	}

	status, err := a.writeFile(ctx, r, path)
	status, err := api.writeFile(ctx, r, path)
	if err != nil {
		httpapi.Write(ctx, rw, status, codersdk.Response{
			Message: err.Error(),
@@ -131,13 +131,13 @@ func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
	})
}

func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HTTPResponseCode, error) {
func (api *API) writeFile(ctx context.Context, r *http.Request, path string) (HTTPResponseCode, error) {
	if !filepath.IsAbs(path) {
		return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
	}

	dir := filepath.Dir(path)
	err := a.filesystem.MkdirAll(dir, 0o755)
	err := api.filesystem.MkdirAll(dir, 0o755)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
@@ -149,7 +149,7 @@ func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HT
		return status, err
	}

	f, err := a.filesystem.Create(path)
	f, err := api.filesystem.Create(path)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
@@ -164,13 +164,13 @@ func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HT

	_, err = io.Copy(f, r.Body)
	if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
		a.logger.Error(ctx, "workspace agent write file", slog.Error(err))
		api.logger.Error(ctx, "workspace agent write file", slog.Error(err))
	}

	return 0, nil
}

func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
func (api *API) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	var req workspacesdk.FileEditRequest
@@ -188,7 +188,7 @@ func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
	var combinedErr error
	status := http.StatusOK
	for _, edit := range req.Files {
		s, err := a.editFile(r.Context(), edit.Path, edit.Edits)
		s, err := api.editFile(r.Context(), edit.Path, edit.Edits)
		// Keep the highest response status, so 500 will be preferred over 400, etc.
		if s > status {
			status = s
@@ -210,7 +210,7 @@ func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
	})
}

func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.FileEdit) (int, error) {
func (api *API) editFile(ctx context.Context, path string, edits []workspacesdk.FileEdit) (int, error) {
	if path == "" {
		return http.StatusBadRequest, xerrors.New("\"path\" is required")
	}
@@ -223,7 +223,7 @@ func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.
		return http.StatusBadRequest, xerrors.New("must specify at least one edit")
	}

	f, err := a.filesystem.Open(path)
	f, err := api.filesystem.Open(path)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
@@ -252,7 +252,7 @@ func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.

	// Create an adjacent file to ensure it will be on the same device and can be
|
||||
// moved atomically.
|
||||
tmpfile, err := afero.TempFile(a.filesystem, filepath.Dir(path), filepath.Base(path))
|
||||
tmpfile, err := afero.TempFile(api.filesystem, filepath.Dir(path), filepath.Base(path))
|
||||
if err != nil {
|
||||
return http.StatusInternalServerError, err
|
||||
}
|
||||
@@ -260,13 +260,13 @@ func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.
|
||||
|
||||
_, err = io.Copy(tmpfile, replace.Chain(f, transforms...))
|
||||
if err != nil {
|
||||
if rerr := a.filesystem.Remove(tmpfile.Name()); rerr != nil {
|
||||
a.logger.Warn(ctx, "unable to clean up temp file", slog.Error(rerr))
|
||||
if rerr := api.filesystem.Remove(tmpfile.Name()); rerr != nil {
|
||||
api.logger.Warn(ctx, "unable to clean up temp file", slog.Error(rerr))
|
||||
}
|
||||
return http.StatusInternalServerError, xerrors.Errorf("edit %s: %w", path, err)
|
||||
}
|
||||
|
||||
err = a.filesystem.Rename(tmpfile.Name(), path)
|
||||
err = api.filesystem.Rename(tmpfile.Name(), path)
|
||||
if err != nil {
|
||||
return http.StatusInternalServerError, err
|
||||
}
|
||||
@@ -1,11 +1,13 @@
-package agent_test
+package agentfiles_test

 import (
 	"bytes"
 	"context"
+	"encoding/json"
 	"fmt"
 	"io"
 	"net/http"
+	"net/http/httptest"
 	"os"
 	"path/filepath"
 	"runtime"
@@ -16,10 +18,10 @@ import (
 	"github.com/stretchr/testify/require"
 	"golang.org/x/xerrors"

-	"github.com/coder/coder/v2/agent"
-	"github.com/coder/coder/v2/agent/agenttest"
-	"github.com/coder/coder/v2/coderd/coderdtest"
-	"github.com/coder/coder/v2/codersdk/agentsdk"
+	"cdr.dev/slog/v3"
+	"cdr.dev/slog/v3/sloggers/slogtest"
+	"github.com/coder/coder/v2/agent/agentfiles"
+	"github.com/coder/coder/v2/codersdk"
 	"github.com/coder/coder/v2/codersdk/workspacesdk"
 	"github.com/coder/coder/v2/testutil"
 )
@@ -106,15 +108,15 @@ func TestReadFile(t *testing.T) {

 	tmpdir := os.TempDir()
 	noPermsFilePath := filepath.Join(tmpdir, "no-perms")
-	//nolint:dogsled
-	conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
-		opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
-			if file == noPermsFilePath {
-				return os.ErrPermission
-			}
-			return nil
-		})
-	})
+	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
+	fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
+		if file == noPermsFilePath {
+			return os.ErrPermission
+		}
+		return nil
+	})
+	api := agentfiles.NewAPI(logger, fs)

 	dirPath := filepath.Join(tmpdir, "a-directory")
 	err := fs.MkdirAll(dirPath, 0o755)
@@ -260,19 +262,22 @@ func TestReadFile(t *testing.T) {
 			ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
 			defer cancel()

-			reader, mimeType, err := conn.ReadFile(ctx, tt.path, tt.offset, tt.limit)
+			w := httptest.NewRecorder()
+			r := httptest.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("/read-file?path=%s&offset=%d&limit=%d", tt.path, tt.offset, tt.limit), nil)
+			api.Routes().ServeHTTP(w, r)

 			if tt.errCode != 0 {
-				require.Error(t, err)
-				cerr := coderdtest.SDKError(t, err)
-				require.Contains(t, cerr.Error(), tt.error)
-				require.Equal(t, tt.errCode, cerr.StatusCode())
+				got := &codersdk.Error{}
+				err := json.NewDecoder(w.Body).Decode(got)
+				require.NoError(t, err)
+				require.ErrorContains(t, got, tt.error)
+				require.Equal(t, tt.errCode, w.Code)
 			} else {
-				defer reader.Close()
-				bytes, err := io.ReadAll(reader)
+				bytes, err := io.ReadAll(w.Body)
 				require.NoError(t, err)
 				require.Equal(t, tt.bytes, bytes)
-				require.Equal(t, tt.mimeType, mimeType)
+				require.Equal(t, tt.mimeType, w.Header().Get("Content-Type"))
+				require.Equal(t, http.StatusOK, w.Code)
 			}
 		})
 	}
@@ -284,15 +289,14 @@ func TestWriteFile(t *testing.T) {
 	tmpdir := os.TempDir()
 	noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
 	noPermsDirPath := filepath.Join(tmpdir, "no-perms-dir")
-	//nolint:dogsled
-	conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
-		opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
-			if file == noPermsFilePath || file == noPermsDirPath {
-				return os.ErrPermission
-			}
-			return nil
-		})
-	})
+	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
+	fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
+		if file == noPermsFilePath || file == noPermsDirPath {
+			return os.ErrPermission
+		}
+		return nil
+	})
+	api := agentfiles.NewAPI(logger, fs)

 	dirPath := filepath.Join(tmpdir, "directory")
 	err := fs.MkdirAll(dirPath, 0o755)
@@ -371,17 +375,21 @@ func TestWriteFile(t *testing.T) {
 			defer cancel()

 			reader := bytes.NewReader(tt.bytes)
-			err := conn.WriteFile(ctx, tt.path, reader)
+			w := httptest.NewRecorder()
+			r := httptest.NewRequestWithContext(ctx, http.MethodPost, fmt.Sprintf("/write-file?path=%s", tt.path), reader)
+			api.Routes().ServeHTTP(w, r)

 			if tt.errCode != 0 {
-				require.Error(t, err)
-				cerr := coderdtest.SDKError(t, err)
-				require.Contains(t, cerr.Error(), tt.error)
-				require.Equal(t, tt.errCode, cerr.StatusCode())
+				got := &codersdk.Error{}
+				err := json.NewDecoder(w.Body).Decode(got)
+				require.NoError(t, err)
+				require.ErrorContains(t, got, tt.error)
+				require.Equal(t, tt.errCode, w.Code)
 			} else {
-				bytes, err := afero.ReadFile(fs, tt.path)
-				require.NoError(t, err)
-				require.Equal(t, tt.bytes, bytes)
+				b, err := afero.ReadFile(fs, tt.path)
+				require.NoError(t, err)
+				require.Equal(t, tt.bytes, b)
+				require.Equal(t, http.StatusOK, w.Code)
 			}
 		})
 	}
@@ -393,21 +401,20 @@ func TestEditFiles(t *testing.T) {
 	tmpdir := os.TempDir()
 	noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
 	failRenameFilePath := filepath.Join(tmpdir, "fail-rename")
-	//nolint:dogsled
-	conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
-		opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
-			if file == noPermsFilePath {
-				return &os.PathError{
-					Op:   call,
-					Path: file,
-					Err:  os.ErrPermission,
-				}
-			} else if file == failRenameFilePath && call == "rename" {
-				return xerrors.New("rename failed")
-			}
-			return nil
-		})
-	})
+	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
+	fs := newTestFs(afero.NewMemMapFs(), func(call, file string) error {
+		if file == noPermsFilePath {
+			return &os.PathError{
+				Op:   call,
+				Path: file,
+				Err:  os.ErrPermission,
+			}
+		} else if file == failRenameFilePath && call == "rename" {
+			return xerrors.New("rename failed")
+		}
+		return nil
+	})
+	api := agentfiles.NewAPI(logger, fs)

 	dirPath := filepath.Join(tmpdir, "directory")
 	err := fs.MkdirAll(dirPath, 0o755)
@@ -701,16 +708,26 @@ func TestEditFiles(t *testing.T) {
 				require.NoError(t, err)
 			}

-			err := conn.EditFiles(ctx, workspacesdk.FileEditRequest{Files: tt.edits})
+			buf := bytes.NewBuffer(nil)
+			enc := json.NewEncoder(buf)
+			enc.SetEscapeHTML(false)
+			err := enc.Encode(workspacesdk.FileEditRequest{Files: tt.edits})
+			require.NoError(t, err)
+
+			w := httptest.NewRecorder()
+			r := httptest.NewRequestWithContext(ctx, http.MethodPost, "/edit-files", buf)
+			api.Routes().ServeHTTP(w, r)

 			if tt.errCode != 0 {
-				require.Error(t, err)
-				cerr := coderdtest.SDKError(t, err)
-				for _, error := range tt.errors {
-					require.Contains(t, cerr.Error(), error)
-				}
-				require.Equal(t, tt.errCode, cerr.StatusCode())
+				got := &codersdk.Error{}
+				err := json.NewDecoder(w.Body).Decode(got)
+				require.NoError(t, err)
+				for _, error := range tt.errors {
+					require.ErrorContains(t, got, error)
+				}
+				require.Equal(t, tt.errCode, w.Code)
 			} else {
+				require.Equal(t, http.StatusOK, w.Code)
 			}
 			for path, expect := range tt.expected {
 				b, err := afero.ReadFile(fs, path)
@@ -1,4 +1,4 @@
-package agent
+package agentfiles

 import (
 	"errors"
@@ -21,7 +21,7 @@ import (

 var WindowsDriveRegex = regexp.MustCompile(`^[a-zA-Z]:\\$`)

-func (a *agent) HandleLS(rw http.ResponseWriter, r *http.Request) {
+func (api *API) HandleLS(rw http.ResponseWriter, r *http.Request) {
 	ctx := r.Context()

 	// An absolute path may be optionally provided, otherwise a path split into an
@@ -43,7 +43,7 @@ func (a *agent) HandleLS(rw http.ResponseWriter, r *http.Request) {
 		return
 	}

-	resp, err := listFiles(a.filesystem, path, req)
+	resp, err := listFiles(api.filesystem, path, req)
 	if err != nil {
 		status := http.StatusInternalServerError
 		switch {
@@ -1,4 +1,4 @@
-package agent
+package agentfiles

 import (
 	"os"
@@ -27,6 +27,8 @@ func (a *agent) apiHandler() http.Handler {
 		})
 	})

+	r.Mount("/api/v0", a.filesAPI.Routes())
+
 	if a.devcontainers {
 		r.Mount("/api/v0/containers", a.containerAPI.Routes())
 	} else if manifest := a.manifest.Load(); manifest != nil && manifest.ParentID != uuid.Nil {
@@ -49,10 +51,6 @@ func (a *agent) apiHandler() http.Handler {

 	r.Get("/api/v0/listening-ports", a.listeningPortsHandler.handler)
 	r.Get("/api/v0/netcheck", a.HandleNetcheck)
-	r.Post("/api/v0/list-directory", a.HandleLS)
-	r.Get("/api/v0/read-file", a.HandleReadFile)
-	r.Post("/api/v0/write-file", a.HandleWriteFile)
-	r.Post("/api/v0/edit-files", a.HandleEditFiles)
 	r.Get("/debug/logs", a.HandleHTTPDebugLogs)
 	r.Get("/debug/magicsock", a.HandleHTTPDebugMagicsock)
 	r.Get("/debug/magicsock/debug-logging/{state}", a.HandleHTTPMagicsockDebugLoggingState)

@@ -78,9 +78,13 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
 	sink := &logSink{}
 	logger := slog.Make(sink)
 	workspaceID := uuid.New()
+	templateID := uuid.New()
+	templateVersionID := uuid.New()
 	reporter := &agentapi.BoundaryLogsAPI{
-		Log:         logger,
-		WorkspaceID: workspaceID,
+		Log:               logger,
+		WorkspaceID:       workspaceID,
+		TemplateID:        templateID,
+		TemplateVersionID: templateVersionID,
 	}

 	ctx, cancel := context.WithCancel(context.Background())
@@ -123,6 +127,8 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
 	require.Equal(t, "boundary_request", entry.Message)
 	require.Equal(t, "allow", getField(entry.Fields, "decision"))
 	require.Equal(t, workspaceID.String(), getField(entry.Fields, "workspace_id"))
+	require.Equal(t, templateID.String(), getField(entry.Fields, "template_id"))
+	require.Equal(t, templateVersionID.String(), getField(entry.Fields, "template_version_id"))
 	require.Equal(t, "GET", getField(entry.Fields, "http_method"))
 	require.Equal(t, "https://example.com/allowed", getField(entry.Fields, "http_url"))
 	require.Equal(t, "*.example.com", getField(entry.Fields, "matched_rule"))
@@ -155,6 +161,8 @@ func TestBoundaryLogs_EndToEnd(t *testing.T) {
 	require.Equal(t, "boundary_request", entry.Message)
 	require.Equal(t, "deny", getField(entry.Fields, "decision"))
 	require.Equal(t, workspaceID.String(), getField(entry.Fields, "workspace_id"))
+	require.Equal(t, templateID.String(), getField(entry.Fields, "template_id"))
+	require.Equal(t, templateVersionID.String(), getField(entry.Fields, "template_version_id"))
 	require.Equal(t, "POST", getField(entry.Fields, "http_method"))
 	require.Equal(t, "https://blocked.com/denied", getField(entry.Fields, "http_url"))
 	require.Equal(t, nil, getField(entry.Fields, "matched_rule"))

@@ -81,6 +81,10 @@ type BackedPipe struct {
 	// Unified error handling with generation filtering
 	errChan chan ErrorEvent

+	// forceReconnectHook is a test hook invoked after ForceReconnect registers
+	// with the singleflight group.
+	forceReconnectHook func()
+
 	// singleflight group to dedupe concurrent ForceReconnect calls
 	sf singleflight.Group

@@ -324,6 +328,13 @@ func (bp *BackedPipe) handleConnectionError(errorEvt ErrorEvent) {
 	}
 }

+// SetForceReconnectHookForTests sets a hook invoked after ForceReconnect
+// registers with the singleflight group. It must be set before any
+// concurrent ForceReconnect calls.
+func (bp *BackedPipe) SetForceReconnectHookForTests(hook func()) {
+	bp.forceReconnectHook = hook
+}
+
 // ForceReconnect forces a reconnection attempt immediately.
 // This can be used to force a reconnection if a new connection is established.
 // It prevents duplicate reconnections when called concurrently.
@@ -331,7 +342,7 @@ func (bp *BackedPipe) ForceReconnect() error {
 	// Deduplicate concurrent ForceReconnect calls so only one reconnection
 	// attempt runs at a time from this API. Use the pipe's internal context
 	// to ensure Close() cancels any in-flight attempt.
-	_, err, _ := bp.sf.Do("force-reconnect", func() (interface{}, error) {
+	resultChan := bp.sf.DoChan("force-reconnect", func() (interface{}, error) {
 		bp.mu.Lock()
 		defer bp.mu.Unlock()

@@ -346,5 +357,11 @@ func (bp *BackedPipe) ForceReconnect() error {
 		return nil, bp.reconnectLocked()
 	})
-	return err
+
+	if hook := bp.forceReconnectHook; hook != nil {
+		hook()
+	}
+
+	result := <-resultChan
+	return result.Err
 }

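The `BackedPipe` change above switches from singleflight's blocking `Do` to `DoChan` so a test hook can fire after a caller has joined the shared operation but before its result arrives. A stdlib-only sketch of that dedup-plus-hook idea (this is an assumption-laden miniature of `x/sync/singleflight`, not the library itself):

```go
package main

import (
	"fmt"
	"sync"
)

// call is one in-flight execution; waiters block on done and share err.
type call struct {
	done chan struct{}
	err  error
}

// group deduplicates concurrent do() calls for the same key. hook is a test
// seam playing the same role as the diff's SetForceReconnectHookForTests:
// it fires once a caller has either registered a new call or joined one.
type group struct {
	mu    sync.Mutex
	calls map[string]*call
	hook  func()
}

func (g *group) do(key string, fn func() error) error {
	g.mu.Lock()
	if g.calls == nil {
		g.calls = map[string]*call{}
	}
	if c, ok := g.calls[key]; ok {
		g.mu.Unlock()
		if g.hook != nil {
			g.hook()
		}
		<-c.done // share the in-flight call's result
		return c.err
	}
	c := &call{done: make(chan struct{})}
	g.calls[key] = c
	g.mu.Unlock()
	if g.hook != nil {
		g.hook()
	}

	c.err = fn()

	g.mu.Lock()
	delete(g.calls, key)
	g.mu.Unlock()
	close(c.done)
	return c.err
}

func main() {
	var (
		g       group
		runs    int
		entered = make(chan struct{}, 3)
		release = make(chan struct{})
	)
	g.hook = func() { entered <- struct{}{} }

	fn := func() error {
		runs++
		<-release // hold the call open so the other callers join it
		return nil
	}

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = g.do("force-reconnect", fn)
		}()
	}
	// Wait until all three callers have joined the operation, exactly as the
	// rewritten test does, then let the single execution finish.
	for i := 0; i < 3; i++ {
		<-entered
	}
	close(release)
	wg.Wait()
	fmt.Println("runs:", runs) // prints "runs: 1"
}
```

The hook is what makes the rewritten test deterministic: instead of guessing that the extra goroutines "have started", the test waits until every caller has actually registered with the shared operation before unblocking it.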
@@ -742,12 +742,15 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {

 	const numConcurrent = 3
 	startSignals := make([]chan struct{}, numConcurrent)
-	startedSignals := make([]chan struct{}, numConcurrent)
 	for i := range startSignals {
 		startSignals[i] = make(chan struct{})
-		startedSignals[i] = make(chan struct{})
 	}

+	enteredSignals := make(chan struct{}, numConcurrent)
+	bp.SetForceReconnectHookForTests(func() {
+		enteredSignals <- struct{}{}
+	})
+
 	errors := make([]error, numConcurrent)
 	var wg sync.WaitGroup

@@ -758,15 +761,12 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
 			defer wg.Done()
 			// Wait for the signal to start
 			<-startSignals[idx]
-			// Signal that we're about to call ForceReconnect
-			close(startedSignals[idx])
 			errors[idx] = bp.ForceReconnect()
 		}(i)
 	}

 	// Start the first ForceReconnect and wait for it to block
 	close(startSignals[0])
-	<-startedSignals[0]

 	// Wait for the first reconnect to actually start and block
 	testutil.RequireReceive(testCtx, t, blockedChan)
@@ -777,9 +777,9 @@ func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
 		close(startSignals[i])
 	}

-	// Wait for all additional goroutines to have started their calls
-	for i := 1; i < numConcurrent; i++ {
-		<-startedSignals[i]
+	// Wait for all ForceReconnect calls to join the singleflight operation.
+	for i := 0; i < numConcurrent; i++ {
+		testutil.RequireReceive(testCtx, t, enteredSignals)
 	}

 	// At this point, one reconnect has started and is blocked,

@@ -7,6 +7,6 @@ func IsInitProcess() bool {
 	return false
 }

-func ForkReap(_ ...Option) error {
-	return nil
+func ForkReap(_ ...Option) (int, error) {
+	return 0, nil
 }

@@ -32,12 +32,13 @@ func TestReap(t *testing.T) {
 	}

 	pids := make(reap.PidCh, 1)
-	err := reaper.ForkReap(
+	exitCode, err := reaper.ForkReap(
 		reaper.WithPIDCallback(pids),
 		// Provide some argument that immediately exits.
 		reaper.WithExecArgs("/bin/sh", "-c", "exit 0"),
 	)
 	require.NoError(t, err)
+	require.Equal(t, 0, exitCode)

 	cmd := exec.Command("tail", "-f", "/dev/null")
 	err = cmd.Start()
@@ -65,6 +66,36 @@ func TestReap(t *testing.T) {
 	}
 }

+//nolint:paralleltest
+func TestForkReapExitCodes(t *testing.T) {
+	if testutil.InCI() {
+		t.Skip("Detected CI, skipping reaper tests")
+	}
+
+	tests := []struct {
+		name         string
+		command      string
+		expectedCode int
+	}{
+		{"exit 0", "exit 0", 0},
+		{"exit 1", "exit 1", 1},
+		{"exit 42", "exit 42", 42},
+		{"exit 255", "exit 255", 255},
+		{"SIGKILL", "kill -9 $$", 128 + 9},
+		{"SIGTERM", "kill -15 $$", 128 + 15},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			exitCode, err := reaper.ForkReap(
+				reaper.WithExecArgs("/bin/sh", "-c", tt.command),
+			)
+			require.NoError(t, err)
+			require.Equal(t, tt.expectedCode, exitCode, "exit code mismatch for %q", tt.command)
+		})
+	}
+}
+
 //nolint:paralleltest // Signal handling.
 func TestReapInterrupt(t *testing.T) {
 	// Don't run the reaper test in CI. It does weird
@@ -84,13 +115,17 @@ func TestReapInterrupt(t *testing.T) {
 	defer signal.Stop(usrSig)

 	go func() {
-		errC <- reaper.ForkReap(
+		exitCode, err := reaper.ForkReap(
 			reaper.WithPIDCallback(pids),
 			reaper.WithCatchSignals(os.Interrupt),
 			// Signal propagation does not extend to children of children, so
 			// we create a little bash script to ensure sleep is interrupted.
 			reaper.WithExecArgs("/bin/sh", "-c", fmt.Sprintf("pid=0; trap 'kill -USR2 %d; kill -TERM $pid' INT; sleep 10 &\npid=$!; kill -USR1 %d; wait", os.Getpid(), os.Getpid())),
 		)
+		// The child exits with 128 + SIGTERM (15) = 143, but the trap catches
+		// SIGINT and sends SIGTERM to the sleep process, so exit code varies.
+		_ = exitCode
+		errC <- err
 	}()

 	require.Equal(t, <-usrSig, syscall.SIGUSR1)

@@ -40,7 +40,10 @@ func catchSignals(pid int, sigs []os.Signal) {
 // the reaper and an exec.Command waiting for its process to complete.
 // The provided 'pids' channel may be nil if the caller does not care about the
 // reaped children PIDs.
-func ForkReap(opt ...Option) error {
+//
+// Returns the child's exit code (using 128+signal for signal termination)
+// and any error from Wait4.
+func ForkReap(opt ...Option) (int, error) {
 	opts := &options{
 		ExecArgs: os.Args,
 	}
@@ -53,7 +56,7 @@ func ForkReap(opt ...Option) error {

 	pwd, err := os.Getwd()
 	if err != nil {
-		return xerrors.Errorf("get wd: %w", err)
+		return 1, xerrors.Errorf("get wd: %w", err)
 	}

 	pattrs := &syscall.ProcAttr{
@@ -72,7 +75,7 @@ func ForkReap(opt ...Option) error {
 	//#nosec G204
 	pid, err := syscall.ForkExec(opts.ExecArgs[0], opts.ExecArgs, pattrs)
 	if err != nil {
-		return xerrors.Errorf("fork exec: %w", err)
+		return 1, xerrors.Errorf("fork exec: %w", err)
 	}

 	go catchSignals(pid, opts.CatchSignals)
@@ -82,5 +85,18 @@ func ForkReap(opt ...Option) error {
 	for xerrors.Is(err, syscall.EINTR) {
 		_, err = syscall.Wait4(pid, &wstatus, 0, nil)
 	}
-	return err
+
+	// Convert wait status to exit code using standard Unix conventions:
+	// - Normal exit: use the exit code
+	// - Signal termination: use 128 + signal number
+	var exitCode int
+	switch {
+	case wstatus.Exited():
+		exitCode = wstatus.ExitStatus()
+	case wstatus.Signaled():
+		exitCode = 128 + int(wstatus.Signal())
+	default:
+		exitCode = 1
+	}
+	return exitCode, err
 }

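The reaper change above maps the child's wait status to an exit code with the common Unix convention: a normal exit keeps its status, and signal termination becomes 128 plus the signal number (so SIGKILL, signal 9, becomes 137). The same conversion in isolation, using plain booleans instead of `syscall.WaitStatus` so it runs on any platform (the function name is illustrative):

```go
package main

import "fmt"

// exitCodeFrom mirrors the diff's conversion: a normally exited child keeps
// its exit status, a signaled child maps to 128 + signal number, and any
// other state falls back to 1.
func exitCodeFrom(exited bool, status int, signaled bool, sig int) int {
	switch {
	case exited:
		return status
	case signaled:
		return 128 + sig
	default:
		return 1
	}
}

func main() {
	fmt.Println(exitCodeFrom(true, 42, false, 0)) // normal exit: 42
	fmt.Println(exitCodeFrom(false, 0, true, 9))  // SIGKILL: 128 + 9 = 137
	fmt.Println(exitCodeFrom(false, 0, true, 15)) // SIGTERM: 128 + 15 = 143
	fmt.Println(exitCodeFrom(false, 0, false, 0)) // neither: fallback 1
}
```

This is the same convention shells use, which is why the new `TestForkReapExitCodes` table expects `128 + 9` for `kill -9 $$` and `128 + 15` for `kill -15 $$`.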
@@ -136,7 +136,7 @@ func workspaceAgent() *serpent.Command {
 		// to do this else we fork bomb ourselves.
 		//nolint:gocritic
 		args := append(os.Args, "--no-reap")
-		err := reaper.ForkReap(
+		exitCode, err := reaper.ForkReap(
 			reaper.WithExecArgs(args...),
 			reaper.WithCatchSignals(StopSignals...),
 		)
@@ -145,8 +145,8 @@ func workspaceAgent() *serpent.Command {
 			return xerrors.Errorf("fork reap: %w", err)
 		}

-		logger.Info(ctx, "reaper process exiting")
-		return nil
+		logger.Info(ctx, "reaper child process exited", slog.F("exit_code", exitCode))
+		return ExitError(exitCode, nil)
 	}

 	// Handle interrupt signals to allow for graceful shutdown,

@@ -10,12 +10,8 @@ import (
 	"github.com/coder/serpent"
 )

-func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.TemplateVersionParameter, defaultOverrides map[string]string) (string, error) {
-	label := templateVersionParameter.Name
-	if templateVersionParameter.DisplayName != "" {
-		label = templateVersionParameter.DisplayName
-	}
-
+func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.TemplateVersionParameter, name, defaultValue string) (string, error) {
+	label := name
 	if templateVersionParameter.Ephemeral {
 		label += pretty.Sprint(DefaultStyles.Warn, " (build option)")
 	}
@@ -26,11 +22,6 @@ func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.Te
 		_, _ = fmt.Fprintln(inv.Stdout, " "+strings.TrimSpace(strings.Join(strings.Split(templateVersionParameter.DescriptionPlaintext, "\n"), "\n "))+"\n")
 	}

-	defaultValue := templateVersionParameter.DefaultValue
-	if v, ok := defaultOverrides[templateVersionParameter.Name]; ok {
-		defaultValue = v
-	}
-
 	var err error
 	var value string
 	switch {

@@ -32,12 +32,12 @@ type PromptOptions struct {
 const skipPromptFlag = "yes"

 // SkipPromptOption adds a "--yes/-y" flag to the cmd that can be used to skip
-// prompts.
+// confirmation prompts.
 func SkipPromptOption() serpent.Option {
 	return serpent.Option{
 		Flag:          skipPromptFlag,
 		FlagShorthand: "y",
-		Description:   "Bypass prompts.",
+		Description:   "Bypass confirmation prompts.",
 		// Discard
 		Value: serpent.BoolOf(new(bool)),
 	}

@@ -491,6 +491,11 @@ func (m multiSelectModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {

 		case tea.KeySpace:
 			options := m.filteredOptions()
+
+			if m.enableCustomInput && m.cursor == len(options) {
+				return m, nil
+			}
+
 			if len(options) != 0 {
 				options[m.cursor].chosen = !options[m.cursor].chosen
 			}

@@ -42,9 +42,10 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
 		stopAfter     time.Duration
 		workspaceName string

-		parameterFlags     workspaceParameterFlags
-		autoUpdates        string
-		copyParametersFrom string
+		parameterFlags       workspaceParameterFlags
+		autoUpdates          string
+		copyParametersFrom   string
+		useParameterDefaults bool
 		// Organization context is only required if more than 1 template
 		// shares the same name across multiple organizations.
 		orgContext = NewOrganizationContext()
@@ -308,7 +309,7 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
 				displayAppliedPreset(inv, preset, presetParameters)
 			} else {
 				// Inform the user that no preset was applied
-				_, _ = fmt.Fprintf(inv.Stdout, "%s", cliui.Bold("No preset applied."))
+				_, _ = fmt.Fprintf(inv.Stdout, "%s\n", cliui.Bold("No preset applied."))
 			}

 			if opts.BeforeCreate != nil {
@@ -329,6 +330,8 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
 				RichParameterDefaults: cliBuildParameterDefaults,

 				SourceWorkspaceParameters: sourceWorkspaceParameters,
+
+				UseParameterDefaults: useParameterDefaults,
 			})
 			if err != nil {
 				return xerrors.Errorf("prepare build: %w", err)
@@ -435,6 +438,12 @@ func (r *RootCmd) Create(opts CreateOptions) *serpent.Command {
 			Description: "Specify the source workspace name to copy parameters from.",
 			Value:       serpent.StringOf(&copyParametersFrom),
 		},
+		serpent.Option{
+			Flag:        "use-parameter-defaults",
+			Env:         "CODER_WORKSPACE_USE_PARAMETER_DEFAULTS",
+			Description: "Automatically accept parameter defaults when no value is provided.",
+			Value:       serpent.BoolOf(&useParameterDefaults),
+		},
 		cliui.SkipPromptOption(),
 	)
 	cmd.Options = append(cmd.Options, parameterFlags.cliParameters()...)
@@ -459,6 +468,8 type prepWorkspaceBuildArgs struct {
 	RichParameters        []codersdk.WorkspaceBuildParameter
 	RichParameterFile     string
 	RichParameterDefaults []codersdk.WorkspaceBuildParameter
+
+	UseParameterDefaults bool
 }

 // resolvePreset returns the preset matching the given presetName (if specified),
@@ -561,7 +572,8 @@ func prepWorkspaceBuild(inv *serpent.Invocation, client *codersdk.Client, args p
 		WithPromptRichParameters(args.PromptRichParameters).
 		WithRichParameters(args.RichParameters).
 		WithRichParametersFile(parameterFile).
-		WithRichParametersDefaults(args.RichParameterDefaults)
+		WithRichParametersDefaults(args.RichParameterDefaults).
+		WithUseParameterDefaults(args.UseParameterDefaults)
 	buildParameters, err := resolver.Resolve(inv, args.Action, templateVersionParameters)
 	if err != nil {
 		return nil, err

@@ -139,12 +139,15 @@ func TestCreate(t *testing.T) {
 		client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
 		owner := coderdtest.CreateFirstUser(t, client)
 		member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
-		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent())
+		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
+			ctvr.Name = "v1"
+		})
 		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
 		template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)

 		// Create a new version
 		version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
+			ctvr.Name = "v2"
 			ctvr.TemplateID = template.ID
 		})
 		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
@@ -318,353 +321,438 @@ func prepareEchoResponses(parameters []*proto.RichParameter, presets ...*proto.P
		}
	}
}

type param struct {
	name    string
	ptype   string
	value   string
	mutable bool
}

func TestCreateWithRichParameters(t *testing.T) {
	t.Parallel()

	const (
		firstParameterName        = "first_parameter"
		firstParameterDescription = "This is first parameter"
		firstParameterValue       = "1"

		secondParameterName        = "second_parameter"
		secondParameterDisplayName = "Second Parameter"
		secondParameterDescription = "This is second parameter"
		secondParameterValue       = "2"

		immutableParameterName        = "third_parameter"
		immutableParameterDescription = "This is not mutable parameter"
		immutableParameterValue       = "4"
	)

	echoResponses := func() *echo.Responses {
		return prepareEchoResponses([]*proto.RichParameter{
			{Name: firstParameterName, Description: firstParameterDescription, Mutable: true},
			{Name: secondParameterName, DisplayName: secondParameterDisplayName, Description: secondParameterDescription, Mutable: true},
			{Name: immutableParameterName, Description: immutableParameterDescription, Mutable: false},
		})
	// Default parameters and their expected values.
	params := []param{
		{
			name:    "number_param",
			ptype:   "number",
			value:   "777",
			mutable: true,
		},
		{
			name:    "string_param",
			ptype:   "string",
			value:   "qux",
			mutable: true,
		},
		{
			name: "bool_param",
			// TODO: Setting the type breaks booleans. It claims the default is false
			// but when you then accept this default it errors saying that the value
			// must be true or false. For now, use a string.
			ptype:   "string",
			value:   "false",
			mutable: true,
		},
		{
			name:    "immutable_string_param",
			ptype:   "string",
			value:   "i am eternal",
			mutable: false,
		},
	}

	t.Run("InputParameters", func(t *testing.T) {
		t.Parallel()
		type testContext struct {
			client        *codersdk.Client
			member        *codersdk.Client
			owner         codersdk.CreateFirstUserResponse
			template      codersdk.Template
			workspaceName string
		}

		client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
		owner := coderdtest.CreateFirstUser(t, client)
		member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
		tests := []struct {
			name string
			// setup runs before the command is started and returns arguments that will
			// be appended to the create command.
			setup func() []string
			// handlePty optionally runs after the command is started. It should handle
			// all expected prompts from the pty.
			handlePty func(pty *ptytest.PTY)
			// postRun runs after the command has finished but before the workspace is
			// verified. It must return the workspace name to check (used for the copy
			// workspace tests).
			postRun func(t *testing.T, args testContext) string
			// errors contains expected errors. The workspace will not be verified if
			// errors are expected.
			errors []string
			// inputParameters overrides the default parameters.
			inputParameters []param
			// expectedParameters defaults to inputParameters.
			expectedParameters []param
			// withDefaults sets DefaultValue to each parameter's value.
			withDefaults bool
		}{
			{
				name: "ValuesFromPrompt",
				handlePty: func(pty *ptytest.PTY) {
					// Enter the value for each parameter as prompted.
					for _, param := range params {
						pty.ExpectMatch(param.name)
						pty.WriteLine(param.value)
					}
					// Confirm the creation.
					pty.ExpectMatch("Confirm create?")
					pty.WriteLine("yes")
				},
			},
			{
				name: "ValuesFromDefaultFlags",
				setup: func() []string {
					// Provide the defaults on the command line.
					args := []string{}
					for _, param := range params {
						args = append(args, "--parameter-default", fmt.Sprintf("%s=%s", param.name, param.value))
					}
					return args
				},
				handlePty: func(pty *ptytest.PTY) {
					// Simply accept the defaults.
					for _, param := range params {
						pty.ExpectMatch(param.name)
						pty.ExpectMatch(`Enter a value (default: "` + param.value + `")`)
						pty.WriteLine("")
					}
					// Confirm the creation.
					pty.ExpectMatch("Confirm create?")
					pty.WriteLine("yes")
				},
			},
			{
				name: "ValuesFromFile",
				setup: func() []string {
					// Create a file with the values.
					tempDir := t.TempDir()
					removeTmpDirUntilSuccessAfterTest(t, tempDir)
					parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
					for _, param := range params {
						_, err := parameterFile.WriteString(fmt.Sprintf("%s: %s\n", param.name, param.value))
						require.NoError(t, err)
					}

					template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
					return []string{"--rich-parameter-file", parameterFile.Name()}
				},
				handlePty: func(pty *ptytest.PTY) {
					// No prompts, we only need to confirm.
					pty.ExpectMatch("Confirm create?")
					pty.WriteLine("yes")
				},
			},
			{
				name: "ValuesFromFlags",
				setup: func() []string {
					// Provide the values on the command line.
					var args []string
					for _, param := range params {
						args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
					}
					return args
				},
				handlePty: func(pty *ptytest.PTY) {
					// No prompts, we only need to confirm.
					pty.ExpectMatch("Confirm create?")
					pty.WriteLine("yes")
				},
			},
			{
				name: "MisspelledParameter",
				setup: func() []string {
					// Provide the values on the command line.
					args := []string{}
					for i, param := range params {
						if i == 0 {
							// Slightly misspell the first parameter with an extra character.
							args = append(args, "--parameter", fmt.Sprintf("n%s=%s", param.name, param.value))
						} else {
							args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
						}
					}
					return args
				},
				errors: []string{
					"parameter \"n" + params[0].name + "\" is not present in the template",
					"Did you mean: " + params[0].name,
				},
			},
			{
				name: "ValuesFromWorkspace",
				setup: func() []string {
					// Provide the values on the command line.
					args := []string{"-y"}
					for _, param := range params {
						args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
					}
					return args
				},
				postRun: func(t *testing.T, tctx testContext) string {
					inv, root := clitest.New(t, "create", "--copy-parameters-from", tctx.workspaceName, "other-workspace", "-y")
					clitest.SetupConfig(t, tctx.member, root)
					pty := ptytest.New(t).Attach(inv)
					inv.Stdout = pty.Output()
					inv.Stderr = pty.Output()
					err := inv.Run()
					require.NoError(t, err, "failed to create a workspace based on the source workspace")
					return "other-workspace"
				},
			},
			{
				name: "ValuesFromOutdatedWorkspace",
				setup: func() []string {
					// Provide the values on the command line.
					args := []string{"-y"}
					for _, param := range params {
						args = append(args, "--parameter", fmt.Sprintf("%s=%s", param.name, param.value))
					}
					return args
				},
				postRun: func(t *testing.T, tctx testContext) string {
					// Update the template to a new version.
					version2 := coderdtest.CreateTemplateVersion(t, tctx.client, tctx.owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
						{Name: "another_parameter", Type: "string", DefaultValue: "not-relevant"},
					}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
						ctvr.Name = "v2"
						ctvr.TemplateID = tctx.template.ID
					})
					coderdtest.AwaitTemplateVersionJobCompleted(t, tctx.client, version2.ID)
					coderdtest.UpdateActiveTemplateVersion(t, tctx.client, tctx.template.ID, version2.ID)

					inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name)
					clitest.SetupConfig(t, member, root)
					doneChan := make(chan struct{})
					pty := ptytest.New(t).Attach(inv)
					go func() {
						defer close(doneChan)
						err := inv.Run()
						assert.NoError(t, err)
					}()
					// Then create the copy. It should use the old template version.
					inv, root := clitest.New(t, "create", "--copy-parameters-from", tctx.workspaceName, "other-workspace", "-y")
					clitest.SetupConfig(t, tctx.member, root)
					pty := ptytest.New(t).Attach(inv)
					inv.Stdout = pty.Output()
					inv.Stderr = pty.Output()
					err := inv.Run()
					require.NoError(t, err, "failed to create a workspace based on the source workspace")
					return "other-workspace"
				},
			},
			{
				name: "ValuesFromTemplateDefaults",
				handlePty: func(pty *ptytest.PTY) {
					// Simply accept the defaults.
					for _, param := range params {
						pty.ExpectMatch(param.name)
						pty.ExpectMatch(`Enter a value (default: "` + param.value + `")`)
						pty.WriteLine("")
					}
					// Confirm the creation.
					pty.ExpectMatch("Confirm create?")
					pty.WriteLine("yes")
				},
				withDefaults: true,
			},
			{
				name: "ValuesFromTemplateDefaultsNoPrompt",
				setup: func() []string {
					return []string{"--use-parameter-defaults"}
				},
				handlePty: func(pty *ptytest.PTY) {
					// Default values should get printed.
					for _, param := range params {
						pty.ExpectMatch(fmt.Sprintf("%s: '%s'", param.name, param.value))
					}
					// No prompts, we only need to confirm.
					pty.ExpectMatch("Confirm create?")
					pty.WriteLine("yes")
				},
				withDefaults: true,
			},
			{
				name: "ValuesFromDefaultFlagsNoPrompt",
				setup: func() []string {
					// Provide the defaults on the command line.
					args := []string{"--use-parameter-defaults"}
					for _, param := range params {
						args = append(args, "--parameter-default", fmt.Sprintf("%s=%s", param.name, param.value))
					}
					return args
				},
				handlePty: func(pty *ptytest.PTY) {
					// Default values should get printed.
					for _, param := range params {
						pty.ExpectMatch(fmt.Sprintf("%s: '%s'", param.name, param.value))
					}
					// No prompts, we only need to confirm.
					pty.ExpectMatch("Confirm create?")
					pty.WriteLine("yes")
				},
			},
			{
				// File and flags should override template defaults. Additionally, if a
				// value has no default value we should still get a prompt for it.
				name: "ValuesFromMultipleSources",
				setup: func() []string {
					tempDir := t.TempDir()
					removeTmpDirUntilSuccessAfterTest(t, tempDir)
					parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
					_, err := parameterFile.WriteString(`
file_param: from file
cli_param: from file`)
					require.NoError(t, err)
					return []string{
						"--use-parameter-defaults",
						"--rich-parameter-file", parameterFile.Name(),
						"--parameter-default", "file_param=from cli default",
						"--parameter-default", "cli_param=from cli default",
						"--parameter", "cli_param=from cli",
					}
				},
				handlePty: func(pty *ptytest.PTY) {
					// Should get prompted for the input param since it has no default.
					pty.ExpectMatch("input_param")
					pty.WriteLine("from input")

					matches := []string{
						firstParameterDescription, firstParameterValue,
						secondParameterDisplayName, "",
						secondParameterDescription, secondParameterValue,
						immutableParameterDescription, immutableParameterValue,
						"Confirm create?", "yes",
					}
					for i := 0; i < len(matches); i += 2 {
						match := matches[i]
						value := matches[i+1]
						pty.ExpectMatch(match)
					// Confirm the creation.
					pty.ExpectMatch("Confirm create?")
					pty.WriteLine("yes")
				},
				withDefaults: true,
				inputParameters: []param{
					{
						name:  "template_param",
						value: "from template default",
					},
					{
						name:  "file_param",
						value: "from template default",
					},
					{
						name:  "cli_param",
						value: "from template default",
					},
					{
						name: "input_param",
					},
				},
				expectedParameters: []param{
					{
						name:  "template_param",
						value: "from template default",
					},
					{
						name:  "file_param",
						value: "from file",
					},
					{
						name:  "cli_param",
						value: "from cli",
					},
					{
						name:  "input_param",
						value: "from input",
					},
				},
			},
		}

				if value != "" {
					pty.WriteLine(value)
		for _, tt := range tests {
			t.Run(tt.name, func(t *testing.T) {
				t.Parallel()

				parameters := params
				if len(tt.inputParameters) > 0 {
					parameters = tt.inputParameters
				}
			}
			<-doneChan
		})

	t.Run("ParametersDefaults", func(t *testing.T) {
		t.Parallel()
				// Convert parameters for the echo provisioner response.
				var rparams []*proto.RichParameter
				for i, param := range parameters {
					defaultValue := ""
					if tt.withDefaults {
						defaultValue = param.value
					}
					rparams = append(rparams, &proto.RichParameter{
						Name:         param.name,
						Type:         param.ptype,
						Mutable:      param.mutable,
						DefaultValue: defaultValue,
						Order:        int32(i), //nolint:gosec
					})
				}

		client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
		owner := coderdtest.CreateFirstUser(t, client)
		member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
				// Set up the template.
				client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
				owner := coderdtest.CreateFirstUser(t, client)
				member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
				version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(rparams))

		template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
				coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
				template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)

		inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
			"--parameter-default", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
			"--parameter-default", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
			"--parameter-default", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
		clitest.SetupConfig(t, member, root)
		doneChan := make(chan struct{})
		pty := ptytest.New(t).Attach(inv)
		go func() {
			defer close(doneChan)
			err := inv.Run()
			assert.NoError(t, err)
		}()
				// Run the command, possibly setting up values.
				workspaceName := "my-workspace"
				args := []string{"create", workspaceName, "--template", template.Name}
				if tt.setup != nil {
					args = append(args, tt.setup()...)
				}
				inv, root := clitest.New(t, args...)
				clitest.SetupConfig(t, member, root)
				doneChan := make(chan error)
				pty := ptytest.New(t).Attach(inv)
				go func() {
					doneChan <- inv.Run()
				}()

		matches := []string{
			firstParameterDescription, firstParameterValue,
			secondParameterDescription, secondParameterValue,
			immutableParameterDescription, immutableParameterValue,
		}
		for i := 0; i < len(matches); i += 2 {
			match := matches[i]
			defaultValue := matches[i+1]
				// The test may do something with the pty.
				if tt.handlePty != nil {
					tt.handlePty(pty)
				}

			pty.ExpectMatch(match)
			pty.ExpectMatch(`Enter a value (default: "` + defaultValue + `")`)
			pty.WriteLine("")
		}
		pty.ExpectMatch("Confirm create?")
		pty.WriteLine("yes")
		<-doneChan
				// Wait for the command to exit.
				err := <-doneChan

		// Verify that the expected default values were used.
		ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
		defer cancel()
				// The test may want to run additional setup like copying the workspace.
				if tt.postRun != nil {
					workspaceName = tt.postRun(t, testContext{
						client:        client,
						member:        member,
						owner:         owner,
						template:      template,
						workspaceName: workspaceName,
					})
				}

		workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
			Name: "my-workspace",
				if len(tt.errors) > 0 {
					require.Error(t, err)
					for _, errstr := range tt.errors {
						assert.ErrorContains(t, err, errstr)
					}
				} else {
					require.NoError(t, err)

					// Verify the workspace was created and has the right template and values.
					ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
					defer cancel()

					workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{Name: workspaceName})
					require.NoError(t, err, "expected to find created workspace")
					require.Len(t, workspaces.Workspaces, 1)

					workspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
					require.Equal(t, version.ID, workspaceLatestBuild.TemplateVersionID)

					buildParameters, err := client.WorkspaceBuildParameters(ctx, workspaceLatestBuild.ID)
					require.NoError(t, err)
					if len(tt.expectedParameters) > 0 {
						parameters = tt.expectedParameters
					}
					require.Len(t, buildParameters, len(parameters))
					for _, param := range parameters {
						require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: param.name, Value: param.value})
					}
				}
			})
		require.NoError(t, err, "can't list available workspaces")
		require.Len(t, workspaces.Workspaces, 1)

		workspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
		require.Equal(t, version.ID, workspaceLatestBuild.TemplateVersionID)

		buildParameters, err := client.WorkspaceBuildParameters(ctx, workspaceLatestBuild.ID)
		require.NoError(t, err)
		require.Len(t, buildParameters, 3)
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
	})

	t.Run("RichParametersFile", func(t *testing.T) {
		t.Parallel()

		client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
		owner := coderdtest.CreateFirstUser(t, client)
		member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)

		template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)

		tempDir := t.TempDir()
		removeTmpDirUntilSuccessAfterTest(t, tempDir)
		parameterFile, _ := os.CreateTemp(tempDir, "testParameterFile*.yaml")
		_, _ = parameterFile.WriteString(
			firstParameterName + ": " + firstParameterValue + "\n" +
				secondParameterName + ": " + secondParameterValue + "\n" +
				immutableParameterName + ": " + immutableParameterValue)
		inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "--rich-parameter-file", parameterFile.Name())
		clitest.SetupConfig(t, member, root)

		doneChan := make(chan struct{})
		pty := ptytest.New(t).Attach(inv)
		go func() {
			defer close(doneChan)
			err := inv.Run()
			assert.NoError(t, err)
		}()

		matches := []string{
			"Confirm create?", "yes",
		}
		for i := 0; i < len(matches); i += 2 {
			match := matches[i]
			value := matches[i+1]
			pty.ExpectMatch(match)
			pty.WriteLine(value)
		}
		<-doneChan
	})

	t.Run("ParameterFlags", func(t *testing.T) {
		t.Parallel()

		client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
		owner := coderdtest.CreateFirstUser(t, client)
		member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)

		template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)

		inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
			"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
			"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
			"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
		clitest.SetupConfig(t, member, root)
		doneChan := make(chan struct{})
		pty := ptytest.New(t).Attach(inv)
		go func() {
			defer close(doneChan)
			err := inv.Run()
			assert.NoError(t, err)
		}()

		matches := []string{
			"Confirm create?", "yes",
		}
		for i := 0; i < len(matches); i += 2 {
			match := matches[i]
			value := matches[i+1]
			pty.ExpectMatch(match)
			pty.WriteLine(value)
		}
		<-doneChan
	})

	t.Run("WrongParameterName/DidYouMean", func(t *testing.T) {
		t.Parallel()

		client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
		owner := coderdtest.CreateFirstUser(t, client)
		member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)

		template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)

		wrongFirstParameterName := "frst-prameter"
		inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
			"--parameter", fmt.Sprintf("%s=%s", wrongFirstParameterName, firstParameterValue),
			"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
			"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
		clitest.SetupConfig(t, member, root)
		pty := ptytest.New(t).Attach(inv)
		inv.Stdout = pty.Output()
		inv.Stderr = pty.Output()
		err := inv.Run()
		assert.ErrorContains(t, err, "parameter \""+wrongFirstParameterName+"\" is not present in the template")
		assert.ErrorContains(t, err, "Did you mean: "+firstParameterName)
	})

	t.Run("CopyParameters", func(t *testing.T) {
		t.Parallel()

		client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
		owner := coderdtest.CreateFirstUser(t, client)
		member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)

		template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)

		// Firstly, create a regular workspace using template with parameters.
		inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "-y",
			"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
			"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
			"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
		clitest.SetupConfig(t, member, root)
		pty := ptytest.New(t).Attach(inv)
		inv.Stdout = pty.Output()
		inv.Stderr = pty.Output()
		err := inv.Run()
		require.NoError(t, err, "can't create first workspace")

		// Secondly, create a new workspace using parameters from the previous workspace.
		const otherWorkspace = "other-workspace"

		inv, root = clitest.New(t, "create", "--copy-parameters-from", "my-workspace", otherWorkspace, "-y")
		clitest.SetupConfig(t, member, root)
		pty = ptytest.New(t).Attach(inv)
		inv.Stdout = pty.Output()
		inv.Stderr = pty.Output()
		err = inv.Run()
		require.NoError(t, err, "can't create a workspace based on the source workspace")

		// Verify if the new workspace uses expected parameters.
		ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
		defer cancel()

		workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
			Name: otherWorkspace,
		})
		require.NoError(t, err, "can't list available workspaces")
		require.Len(t, workspaces.Workspaces, 1)

		otherWorkspaceLatestBuild := workspaces.Workspaces[0].LatestBuild

		buildParameters, err := client.WorkspaceBuildParameters(ctx, otherWorkspaceLatestBuild.ID)
		require.NoError(t, err)
		require.Len(t, buildParameters, 3)
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
	})

	t.Run("CopyParametersFromNotUpdatedWorkspace", func(t *testing.T) {
		t.Parallel()

		client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
		owner := coderdtest.CreateFirstUser(t, client)
		member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
		version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses())
		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)

		template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)

		// Firstly, create a regular workspace using template with parameters.
		inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "-y",
			"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
			"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
			"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
		clitest.SetupConfig(t, member, root)
		pty := ptytest.New(t).Attach(inv)
		inv.Stdout = pty.Output()
		inv.Stderr = pty.Output()
		err := inv.Run()
		require.NoError(t, err, "can't create first workspace")

		// Secondly, update the template to the newer version.
		version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
			{Name: "third_parameter", Type: "string", DefaultValue: "not-relevant"},
		}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
			ctvr.TemplateID = template.ID
		})
		coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
		coderdtest.UpdateActiveTemplateVersion(t, client, template.ID, version2.ID)

		// Thirdly, create a new workspace using parameters from the previous workspace.
		const otherWorkspace = "other-workspace"

		inv, root = clitest.New(t, "create", "--copy-parameters-from", "my-workspace", otherWorkspace, "-y")
		clitest.SetupConfig(t, member, root)
		pty = ptytest.New(t).Attach(inv)
		inv.Stdout = pty.Output()
		inv.Stderr = pty.Output()
		err = inv.Run()
		require.NoError(t, err, "can't create a workspace based on the source workspace")

		// Verify if the new workspace uses expected parameters.
		ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
		defer cancel()

		workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
			Name: otherWorkspace,
		})
		require.NoError(t, err, "can't list available workspaces")
		require.Len(t, workspaces.Workspaces, 1)

		otherWorkspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
		require.Equal(t, version.ID, otherWorkspaceLatestBuild.TemplateVersionID)

		buildParameters, err := client.WorkspaceBuildParameters(ctx, otherWorkspaceLatestBuild.ID)
		require.NoError(t, err)
		require.Len(t, buildParameters, 3)
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
		require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
	})
}
}

func TestCreateWithPreset(t *testing.T) {

@@ -1,12 +0,0 @@
package cli

import (
	boundarycli "github.com/coder/boundary/cli"
	"github.com/coder/serpent"
)

func (*RootCmd) boundary() *serpent.Command {
	cmd := boundarycli.BaseCommand() // Package coder/boundary/cli exports a "base command" designed to be integrated as a subcommand.
	cmd.Use += " [args...]"          // The base command looks like `boundary -- command`. Serpent adds the flags piece, but we need to add the args.
	return cmd
}
@@ -1,33 +0,0 @@
package cli_test

import (
	"testing"

	"github.com/stretchr/testify/assert"

	boundarycli "github.com/coder/boundary/cli"
	"github.com/coder/coder/v2/cli/clitest"
	"github.com/coder/coder/v2/pty/ptytest"
	"github.com/coder/coder/v2/testutil"
)

// Actually testing the functionality of coder/boundary takes place in the
// coder/boundary repo, since it's a dependency of coder.
// Here we basically want to test that integrating it as a subcommand doesn't break anything.
func TestBoundarySubcommand(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	inv, _ := clitest.New(t, "exp", "boundary", "--help")
	pty := ptytest.New(t).Attach(inv)

	go func() {
		err := inv.WithContext(ctx).Run()
		assert.NoError(t, err)
	}()

	// Expect the --help output to include the short description.
	// We're simply confirming that `coder boundary --help` ran without a runtime error, as
	// a good chunk of serpent's self-validation logic happens at runtime.
	pty.ExpectMatch(boundarycli.BaseCommand().Short)
}
@@ -174,6 +174,19 @@ func (RootCmd) promptExample() *serpent.Command {
			_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
			return multiSelectError
		}, useThingsOption, enableCustomInputOption),
		promptCmd("multi-select-no-defaults", func(inv *serpent.Invocation) error {
			if len(multiSelectValues) == 0 {
				multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{
					Message: "Select some things:",
					Options: []string{
						"Code", "Chairs", "Whale",
					},
					EnableCustomInput: enableCustomInput,
				})
			}
			_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
			return multiSelectError
		}, useThingsOption, enableCustomInputOption),
		promptCmd("rich-multi-select", func(inv *serpent.Invocation) error {
			if len(multiSelectValues) == 0 {
				multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{

@@ -68,6 +68,8 @@ func (r *RootCmd) scaletestCmd() *serpent.Command {
			r.scaletestTaskStatus(),
			r.scaletestSMTP(),
			r.scaletestPrebuilds(),
			r.scaletestBridge(),
			r.scaletestLLMMock(),
		},
	}
278 cli/exp_scaletest_bridge.go Normal file
@@ -0,0 +1,278 @@
//go:build !slim

package cli

import (
	"fmt"
	"net/http"
	"os/signal"
	"strconv"
	"text/tabwriter"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"golang.org/x/xerrors"

	"github.com/coder/coder/v2/codersdk"
	"github.com/coder/coder/v2/scaletest/bridge"
	"github.com/coder/coder/v2/scaletest/createusers"
	"github.com/coder/coder/v2/scaletest/harness"
	"github.com/coder/serpent"
)

func (r *RootCmd) scaletestBridge() *serpent.Command {
	var (
		concurrentUsers    int64
		noCleanup          bool
		mode               string
		upstreamURL        string
		provider           string
		requestsPerUser    int64
		useStreamingAPI    bool
		requestPayloadSize int64
		numMessages        int64
		httpTimeout        time.Duration

		timeoutStrategy = &timeoutFlags{}
		cleanupStrategy = newScaletestCleanupStrategy()
		output          = &scaletestOutputFlags{}
		prometheusFlags = &scaletestPrometheusFlags{}
	)

	cmd := &serpent.Command{
		Use:   "bridge",
		Short: "Generate load on the AI Bridge service.",
		Long: `Generate load for AI Bridge testing. Supports two modes: 'bridge' mode routes requests through the Coder AI Bridge, 'direct' mode makes requests directly to an upstream URL (useful for baseline comparisons).

Examples:

  # Test OpenAI API through bridge
  coder scaletest bridge --mode bridge --provider openai --concurrent-users 10 --request-count 5 --num-messages 10

  # Test Anthropic API through bridge
  coder scaletest bridge --mode bridge --provider anthropic --concurrent-users 10 --request-count 5 --num-messages 10

  # Test directly against mock server
  coder scaletest bridge --mode direct --provider openai --upstream-url http://localhost:8080/v1/chat/completions
`,
		Handler: func(inv *serpent.Invocation) error {
			ctx := inv.Context()
			client, err := r.InitClient(inv)
			if err != nil {
				return err
			}
			client.HTTPClient = &http.Client{
				Transport: &codersdk.HeaderTransport{
					Transport: http.DefaultTransport,
					Header: map[string][]string{
						codersdk.BypassRatelimitHeader: {"true"},
					},
				},
			}
			reg := prometheus.NewRegistry()
			metrics := bridge.NewMetrics(reg)

			logger := inv.Logger
			prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
			defer prometheusSrvClose()

			defer func() {
				_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
				<-time.After(prometheusFlags.Wait)
			}()

			notifyCtx, stop := signal.NotifyContext(ctx, StopSignals...)
			defer stop()
			ctx = notifyCtx

			var userConfig createusers.Config
			if bridge.RequestMode(mode) == bridge.RequestModeBridge {
				me, err := requireAdmin(ctx, client)
				if err != nil {
					return err
				}
				if len(me.OrganizationIDs) == 0 {
					return xerrors.Errorf("admin user must have at least one organization")
				}
				userConfig = createusers.Config{
					OrganizationID: me.OrganizationIDs[0],
				}
				_, _ = fmt.Fprintln(inv.Stderr, "Bridge mode: creating users and making requests through AI Bridge...")
			} else {
				_, _ = fmt.Fprintf(inv.Stderr, "Direct mode: making requests directly to %s\n", upstreamURL)
			}

			outputs, err := output.parse()
			if err != nil {
				return xerrors.Errorf("parse output flags: %w", err)
			}

			config := bridge.Config{
				Mode:               bridge.RequestMode(mode),
				Metrics:            metrics,
				Provider:           provider,
				RequestCount:       int(requestsPerUser),
				Stream:             useStreamingAPI,
				RequestPayloadSize: int(requestPayloadSize),
				NumMessages:        int(numMessages),
				HTTPTimeout:        httpTimeout,
				UpstreamURL:        upstreamURL,
				User:               userConfig,
			}
			if err := config.Validate(); err != nil {
				return xerrors.Errorf("validate config: %w", err)
			}
			if err := config.PrepareRequestBody(); err != nil {
				return xerrors.Errorf("prepare request body: %w", err)
			}

			th := harness.NewTestHarness(timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}), cleanupStrategy.toStrategy())

			for i := range concurrentUsers {
				id := strconv.Itoa(int(i))
				name := fmt.Sprintf("bridge-%s", id)
				var runner harness.Runnable = bridge.NewRunner(client, config)
				th.AddRun(name, id, runner)
			}

			_, _ = fmt.Fprintln(inv.Stderr, "Bridge scaletest configuration:")
			tw := tabwriter.NewWriter(inv.Stderr, 0, 0, 2, ' ', 0)
			for _, opt := range inv.Command.Options {
				if opt.Hidden || opt.ValueSource == serpent.ValueSourceNone {
					continue
				}
				_, _ = fmt.Fprintf(tw, " %s:\t%s", opt.Name, opt.Value.String())
				if opt.ValueSource != serpent.ValueSourceDefault {
					_, _ = fmt.Fprintf(tw, "\t(from %s)", opt.ValueSource)
				}
				_, _ = fmt.Fprintln(tw)
			}
			_ = tw.Flush()

			_, _ = fmt.Fprintln(inv.Stderr, "\nRunning bridge scaletest...")
			testCtx, testCancel := timeoutStrategy.toContext(ctx)
			defer testCancel()
			err = th.Run(testCtx)
			if err != nil {
				return xerrors.Errorf("run test harness (harness failure, not a test failure): %w", err)
			}

			// If the command was interrupted, skip stats.
			if notifyCtx.Err() != nil {
				return notifyCtx.Err()
			}

			res := th.Results()

			for _, o := range outputs {
				err = o.write(res, inv.Stdout)
				if err != nil {
					return xerrors.Errorf("write output %q to %q: %w", o.format, o.path, err)
				}
			}

			if !noCleanup {
				_, _ = fmt.Fprintln(inv.Stderr, "\nCleaning up...")
				cleanupCtx, cleanupCancel := cleanupStrategy.toContext(ctx)
				defer cleanupCancel()
				err = th.Cleanup(cleanupCtx)
				if err != nil {
					return xerrors.Errorf("cleanup tests: %w", err)
				}
			}

			if res.TotalFail > 0 {
				return xerrors.New("load test failed, see above for more details")
			}

			return nil
		},
	}

	cmd.Options = serpent.OptionSet{
		{
			Flag:          "concurrent-users",
			FlagShorthand: "c",
			Env:           "CODER_SCALETEST_BRIDGE_CONCURRENT_USERS",
			Description:   "Required: Number of concurrent users.",
			Value: serpent.Validate(serpent.Int64Of(&concurrentUsers), func(value *serpent.Int64) error {
				if value == nil || value.Value() <= 0 {
					return xerrors.Errorf("--concurrent-users must be greater than 0")
				}
				return nil
			}),
			Required: true,
		},
		{
			Flag:        "mode",
			Env:         "CODER_SCALETEST_BRIDGE_MODE",
			Default:     "direct",
			Description: "Request mode: 'bridge' (create users and use AI Bridge) or 'direct' (make requests directly to upstream-url).",
			Value:       serpent.EnumOf(&mode, string(bridge.RequestModeBridge), string(bridge.RequestModeDirect)),
		},
		{
			Flag:        "upstream-url",
			Env:         "CODER_SCALETEST_BRIDGE_UPSTREAM_URL",
			Description: "URL to make requests to directly (required in direct mode, e.g., http://localhost:8080/v1/chat/completions).",
			Value:       serpent.StringOf(&upstreamURL),
		},
		{
			Flag:        "provider",
			Env:         "CODER_SCALETEST_BRIDGE_PROVIDER",
			Default:     "openai",
			Description: "API provider to use.",
			Value:       serpent.EnumOf(&provider, "openai", "anthropic"),
		},
		{
			Flag:        "request-count",
			Env:         "CODER_SCALETEST_BRIDGE_REQUEST_COUNT",
			Default:     "1",
			Description: "Number of sequential requests to make per runner.",
			Value: serpent.Validate(serpent.Int64Of(&requestsPerUser), func(value *serpent.Int64) error {
				if value == nil || value.Value() <= 0 {
					return xerrors.Errorf("--request-count must be greater than 0")
				}
				return nil
			}),
		},
		{
			Flag:        "stream",
			Env:         "CODER_SCALETEST_BRIDGE_STREAM",
			Description: "Enable streaming requests.",
			Value:       serpent.BoolOf(&useStreamingAPI),
		},
		{
			Flag:        "request-payload-size",
			Env:         "CODER_SCALETEST_BRIDGE_REQUEST_PAYLOAD_SIZE",
			Default:     "1024",
			Description: "Size in bytes of the request payload (user message content). If 0, uses default message content.",
			Value:       serpent.Int64Of(&requestPayloadSize),
		},
		{
			Flag:        "num-messages",
			Env:         "CODER_SCALETEST_BRIDGE_NUM_MESSAGES",
			Default:     "1",
			Description: "Number of messages to include in the conversation.",
			Value:       serpent.Int64Of(&numMessages),
		},
		{
			Flag:        "no-cleanup",
			Env:         "CODER_SCALETEST_NO_CLEANUP",
			Description: "Do not clean up resources after the test completes.",
			Value:       serpent.BoolOf(&noCleanup),
		},
		{
			Flag:        "http-timeout",
			Env:         "CODER_SCALETEST_BRIDGE_HTTP_TIMEOUT",
			Default:     "30s",
			Description: "Timeout for individual HTTP requests to the upstream provider.",
			Value:       serpent.DurationOf(&httpTimeout),
		},
	}

	timeoutStrategy.attach(&cmd.Options)
	cleanupStrategy.attach(&cmd.Options)
	output.attach(&cmd.Options)
	prometheusFlags.attach(&cmd.Options)
	return cmd
}
118 cli/exp_scaletest_llmmock.go Normal file
@@ -0,0 +1,118 @@
//go:build !slim

package cli

import (
	"fmt"
	"os/signal"
	"time"

	"golang.org/x/xerrors"

	"cdr.dev/slog/v3"
	"cdr.dev/slog/v3/sloggers/sloghuman"
	"github.com/coder/coder/v2/scaletest/llmmock"
	"github.com/coder/serpent"
)

func (*RootCmd) scaletestLLMMock() *serpent.Command {
	var (
		address             string
		artificialLatency   time.Duration
		responsePayloadSize int64

		pprofEnable  bool
		pprofAddress string

		traceEnable bool
	)
	cmd := &serpent.Command{
		Use:   "llm-mock",
		Short: "Start a mock LLM API server for testing",
		Long:  `Start a mock LLM API server that simulates OpenAI and Anthropic APIs`,
		Handler: func(inv *serpent.Invocation) error {
			ctx, stop := signal.NotifyContext(inv.Context(), StopSignals...)
			defer stop()

			logger := slog.Make(sloghuman.Sink(inv.Stderr)).Leveled(slog.LevelInfo)

			if pprofEnable {
				closePprof := ServeHandler(ctx, logger, nil, pprofAddress, "pprof")
				defer closePprof()
				logger.Info(ctx, "pprof server started", slog.F("address", pprofAddress))
			}

			config := llmmock.Config{
				Address:             address,
				Logger:              logger,
				ArtificialLatency:   artificialLatency,
				ResponsePayloadSize: int(responsePayloadSize),
				PprofEnable:         pprofEnable,
				PprofAddress:        pprofAddress,
				TraceEnable:         traceEnable,
			}
			srv := new(llmmock.Server)

			if err := srv.Start(ctx, config); err != nil {
				return xerrors.Errorf("start mock LLM server: %w", err)
			}
			defer func() {
				_ = srv.Stop()
			}()

			_, _ = fmt.Fprintf(inv.Stdout, "Mock LLM API server started on %s\n", srv.APIAddress())
			_, _ = fmt.Fprintf(inv.Stdout, " OpenAI endpoint: %s/v1/chat/completions\n", srv.APIAddress())
			_, _ = fmt.Fprintf(inv.Stdout, " Anthropic endpoint: %s/v1/messages\n", srv.APIAddress())

			<-ctx.Done()
			return nil
		},
	}

	cmd.Options = []serpent.Option{
		{
			Flag:        "address",
			Env:         "CODER_SCALETEST_LLM_MOCK_ADDRESS",
			Default:     "localhost",
			Description: "Address to bind the mock LLM API server. Can include a port (e.g., 'localhost:8080' or ':8080'). Uses a random port if no port is specified.",
			Value:       serpent.StringOf(&address),
		},
		{
			Flag:        "artificial-latency",
			Env:         "CODER_SCALETEST_LLM_MOCK_ARTIFICIAL_LATENCY",
			Default:     "0s",
			Description: "Artificial latency to add to each response (e.g., 100ms, 1s). Simulates slow upstream processing.",
			Value:       serpent.DurationOf(&artificialLatency),
		},
		{
			Flag:        "response-payload-size",
			Env:         "CODER_SCALETEST_LLM_MOCK_RESPONSE_PAYLOAD_SIZE",
			Default:     "0",
			Description: "Size in bytes of the response payload. If 0, uses default context-aware responses.",
			Value:       serpent.Int64Of(&responsePayloadSize),
		},
		{
			Flag:        "pprof-enable",
			Env:         "CODER_SCALETEST_LLM_MOCK_PPROF_ENABLE",
			Default:     "false",
			Description: "Serve pprof metrics on the address defined by pprof-address.",
			Value:       serpent.BoolOf(&pprofEnable),
		},
		{
			Flag:        "pprof-address",
			Env:         "CODER_SCALETEST_LLM_MOCK_PPROF_ADDRESS",
			Default:     "127.0.0.1:6060",
			Description: "The bind address to serve pprof.",
			Value:       serpent.StringOf(&pprofAddress),
		},
		{
			Flag:        "trace-enable",
			Env:         "CODER_SCALETEST_LLM_MOCK_TRACE_ENABLE",
			Default:     "false",
			Description: "Whether application tracing data is collected. It exports to a backend configured by environment variables. See: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/exporter.md.",
			Value:       serpent.BoolOf(&traceEnable),
		},
	}

	return cmd
}
@@ -141,7 +141,9 @@ func TestGitSSH(t *testing.T) {
		"-o", "IdentitiesOnly=yes",
		"127.0.0.1",
	)
	ctx := testutil.Context(t, testutil.WaitMedium)
	// This occasionally times out at 15s on Windows CI runners. Use a
	// longer timeout to reduce flakes.
	ctx := testutil.Context(t, testutil.WaitSuperLong)
	err := inv.WithContext(ctx).Run()
	require.NoError(t, err)
	require.EqualValues(t, 1, inc)
@@ -205,7 +207,9 @@ func TestGitSSH(t *testing.T) {
	inv, _ := clitest.New(t, cmdArgs...)
	inv.Stdout = pty.Output()
	inv.Stderr = pty.Output()
	ctx := testutil.Context(t, testutil.WaitMedium)
	// This occasionally times out at 15s on Windows CI runners. Use a
	// longer timeout to reduce flakes.
	ctx := testutil.Context(t, testutil.WaitSuperLong)
	err = inv.WithContext(ctx).Run()
	require.NoError(t, err)
	select {
@@ -223,7 +227,9 @@ func TestGitSSH(t *testing.T) {
	inv, _ = clitest.New(t, cmdArgs...)
	inv.Stdout = pty.Output()
	inv.Stderr = pty.Output()
	ctx = testutil.Context(t, testutil.WaitMedium) // Reset context for second cmd test.
	// This occasionally times out at 15s on Windows CI runners. Use a
	// longer timeout to reduce flakes.
	ctx = testutil.Context(t, testutil.WaitSuperLong) // Reset context for second cmd test.
	err = inv.WithContext(ctx).Run()
	require.NoError(t, err)
	select {
29 cli/login.go
@@ -462,9 +462,38 @@ func (r *RootCmd) login() *serpent.Command {
			Value: serpent.BoolOf(&useTokenForSession),
		},
	}
	cmd.Children = []*serpent.Command{
		r.loginToken(),
	}
	return cmd
}

func (r *RootCmd) loginToken() *serpent.Command {
	return &serpent.Command{
		Use:        "token",
		Short:      "Print the current session token",
		Long:       "Print the session token for use in scripts and automation.",
		Middleware: serpent.RequireNArgs(0),
		Handler: func(inv *serpent.Invocation) error {
			tok, err := r.ensureTokenBackend().Read(r.clientURL)
			if err != nil {
				if xerrors.Is(err, os.ErrNotExist) {
					return xerrors.New("no session token found - run 'coder login' first")
				}
				if xerrors.Is(err, sessionstore.ErrNotImplemented) {
					return errKeyringNotSupported
				}
				return xerrors.Errorf("read session token: %w", err)
			}
			if tok == "" {
				return xerrors.New("no session token found - run 'coder login' first")
			}
			_, err = fmt.Fprintln(inv.Stdout, tok)
			return err
		},
	}
}

// isWSL determines if coder-cli is running within Windows Subsystem for Linux
func isWSL() (bool, error) {
	if runtime.GOOS == goosDarwin || runtime.GOOS == goosWindows {
@@ -537,3 +537,31 @@ func TestLogin(t *testing.T) {
		require.Equal(t, selected, first.OrganizationID.String())
	})
}

func TestLoginToken(t *testing.T) {
	t.Parallel()

	t.Run("PrintsToken", func(t *testing.T) {
		t.Parallel()
		client := coderdtest.New(t, nil)
		coderdtest.CreateFirstUser(t, client)

		inv, root := clitest.New(t, "login", "token", "--url", client.URL.String())
		clitest.SetupConfig(t, client, root)
		pty := ptytest.New(t).Attach(inv)
		ctx := testutil.Context(t, testutil.WaitShort)
		err := inv.WithContext(ctx).Run()
		require.NoError(t, err)

		pty.ExpectMatch(client.SessionToken())
	})

	t.Run("NoTokenStored", func(t *testing.T) {
		t.Parallel()
		inv, _ := clitest.New(t, "login", "token")
		ctx := testutil.Context(t, testutil.WaitShort)
		err := inv.WithContext(ctx).Run()
		require.Error(t, err)
		require.Contains(t, err.Error(), "no session token found")
	})
}
@@ -65,6 +65,22 @@ func (r *RootCmd) organizationSettings(orgContext *OrganizationContext) *serpent
			return cli.OrganizationIDPSyncSettings(ctx)
		},
	},
	{
		Name:    "workspace-sharing",
		Aliases: []string{"workspacesharing"},
		Short:   "Workspace sharing settings for the organization.",
		Patch: func(ctx context.Context, cli *codersdk.Client, org uuid.UUID, input json.RawMessage) (any, error) {
			var req codersdk.WorkspaceSharingSettings
			err := json.Unmarshal(input, &req)
			if err != nil {
				return nil, xerrors.Errorf("unmarshalling workspace sharing settings: %w", err)
			}
			return cli.PatchWorkspaceSharingSettings(ctx, org.String(), req)
		},
		Fetch: func(ctx context.Context, cli *codersdk.Client, org uuid.UUID) (any, error) {
			return cli.WorkspaceSharingSettings(ctx, org.String())
		},
	},
	}
	cmd := &serpent.Command{
		Use: "settings",
@@ -34,6 +34,7 @@ type ParameterResolver struct {

	promptRichParameters      bool
	promptEphemeralParameters bool
	useParameterDefaults      bool
}

func (pr *ParameterResolver) WithLastBuildParameters(params []codersdk.WorkspaceBuildParameter) *ParameterResolver {
@@ -86,8 +87,21 @@ func (pr *ParameterResolver) WithPromptEphemeralParameters(promptEphemeralParame
	return pr
}

// Resolve gathers workspace build parameters in a layered fashion, applying values from various sources
// in order of precedence: parameter file < CLI/ENV < source build < last build < preset < user input.
func (pr *ParameterResolver) WithUseParameterDefaults(useParameterDefaults bool) *ParameterResolver {
	pr.useParameterDefaults = useParameterDefaults
	return pr
}

// Resolve gathers workspace build parameters in a layered fashion, applying
// values from various sources in order of precedence:
//  1. template defaults (if auto-accepting defaults)
//  2. cli parameter defaults (if auto-accepting defaults)
//  3. parameter file
//  4. CLI/ENV
//  5. source build
//  6. last build
//  7. preset
//  8. user input (unless auto-accepting defaults)
func (pr *ParameterResolver) Resolve(inv *serpent.Invocation, action WorkspaceCLIAction, templateVersionParameters []codersdk.TemplateVersionParameter) ([]codersdk.WorkspaceBuildParameter, error) {
	var staged []codersdk.WorkspaceBuildParameter
	var err error
@@ -262,9 +276,25 @@ func (pr *ParameterResolver) resolveWithInput(resolved []codersdk.WorkspaceBuild
		(action == WorkspaceUpdate && tvp.Mutable && tvp.Required) ||
		(action == WorkspaceUpdate && !tvp.Mutable && firstTimeUse) ||
		(tvp.Mutable && !tvp.Ephemeral && pr.promptRichParameters) {
		parameterValue, err := cliui.RichParameter(inv, tvp, pr.richParametersDefaults)
		if err != nil {
			return nil, err
		name := tvp.Name
		if tvp.DisplayName != "" {
			name = tvp.DisplayName
		}

		parameterValue := tvp.DefaultValue
		if v, ok := pr.richParametersDefaults[tvp.Name]; ok {
			parameterValue = v
		}

		// Auto-accept the default if there is one.
		if pr.useParameterDefaults && parameterValue != "" {
			_, _ = fmt.Fprintf(inv.Stdout, "Using default value for %s: '%s'\n", name, parameterValue)
		} else {
			var err error
			parameterValue, err = cliui.RichParameter(inv, tvp, name, parameterValue)
			if err != nil {
				return nil, err
			}
		}

		resolved = append(resolved, codersdk.WorkspaceBuildParameter{
11 cli/root.go
@@ -24,6 +24,7 @@ import (
	"text/tabwriter"
	"time"

	"github.com/google/uuid"
	"github.com/mattn/go-isatty"
	"github.com/mitchellh/go-wordwrap"
	"golang.org/x/mod/semver"
@@ -150,7 +151,6 @@ func (r *RootCmd) AGPLExperimental() []*serpent.Command {
		r.promptExample(),
		r.rptyCommand(),
		r.syncCommand(),
		r.boundary(),
	}
}

@@ -332,6 +332,12 @@ func (r *RootCmd) Command(subcommands []*serpent.Command) (*serpent.Command, err
		// support links.
		return
	}
	if cmd.Name() == "boundary" {
		// The boundary command is integrated from the boundary package
		// and has YAML-only options (e.g., allowlist from config file)
		// that don't have flags or env vars.
		return
	}
	merr = errors.Join(
		merr,
		xerrors.Errorf("option %q in %q should have a flag or env", opt.Name, cmd.FullName()),
@@ -923,6 +929,9 @@ func splitNamedWorkspace(identifier string) (owner string, workspaceName string,
// a bare name (for a workspace owned by the current user) or a "user/workspace" combination,
// where user is either a username or UUID.
func namedWorkspace(ctx context.Context, client *codersdk.Client, identifier string) (codersdk.Workspace, error) {
	if uid, err := uuid.Parse(identifier); err == nil {
		return client.Workspace(ctx, uid)
	}
	owner, name, err := splitNamedWorkspace(identifier)
	if err != nil {
		return codersdk.Workspace{}, err
@@ -197,7 +197,7 @@ func TestSharingStatus(t *testing.T) {
		ctx = testutil.Context(t, testutil.WaitMedium)
	)

	err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
	err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
		UserRoles: map[string]codersdk.WorkspaceRole{
			toShareWithUser.ID.String(): codersdk.WorkspaceRoleUse,
		},
@@ -248,7 +248,7 @@ func TestSharingRemove(t *testing.T) {
	ctx := testutil.Context(t, testutil.WaitMedium)

	// Share the workspace with a user to later remove
	err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
	err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
		UserRoles: map[string]codersdk.WorkspaceRole{
			toShareWithUser.ID.String(): codersdk.WorkspaceRoleUse,
			toRemoveUser.ID.String():    codersdk.WorkspaceRoleUse,
@@ -309,7 +309,7 @@ func TestSharingRemove(t *testing.T) {
	ctx := testutil.Context(t, testutil.WaitMedium)

	// Share the workspace with a user to later remove
	err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
	err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
		UserRoles: map[string]codersdk.WorkspaceRole{
			toRemoveUser2.ID.String(): codersdk.WorkspaceRoleUse,
			toRemoveUser1.ID.String(): codersdk.WorkspaceRoleUse,
@@ -1,8 +1,10 @@
package cli

import (
	"fmt"
	"sort"
	"sync"
	"time"

	"github.com/google/uuid"
	"golang.org/x/xerrors"
@@ -43,11 +45,11 @@ func (r *RootCmd) show() *serpent.Command {
	if err != nil {
		return xerrors.Errorf("get workspace: %w", err)
	}

	options := cliui.WorkspaceResourcesOptions{
		WorkspaceName: workspace.Name,
		ServerVersion: buildInfo.Version,
		ShowDetails:   details,
		Title:         fmt.Sprintf("%s/%s (%s since %s) %s:%s", workspace.OwnerName, workspace.Name, workspace.LatestBuild.Status, time.Since(workspace.LatestBuild.CreatedAt).Round(time.Second).String(), workspace.TemplateName, workspace.LatestBuild.TemplateVersionName),
	}
	if workspace.LatestBuild.Status == codersdk.WorkspaceStatusRunning {
		// Get listening ports for each agent.
@@ -55,7 +57,6 @@ func (r *RootCmd) show() *serpent.Command {
		options.ListeningPorts = ports
		options.Devcontainers = devcontainers
	}

	return cliui.WorkspaceResources(inv.Stdout, workspace.LatestBuild.Resources, options)
	},
}
@@ -2,6 +2,7 @@ package cli_test

import (
	"bytes"
	"fmt"
	"testing"
	"time"

@@ -15,6 +16,7 @@ import (
	"github.com/coder/coder/v2/coderd/coderdtest"
	"github.com/coder/coder/v2/codersdk"
	"github.com/coder/coder/v2/pty/ptytest"
	"github.com/coder/coder/v2/testutil"
)

func TestShow(t *testing.T) {
@@ -28,7 +30,7 @@ func TestShow(t *testing.T) {
	coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
	template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
	workspace := coderdtest.CreateWorkspace(t, member, template.ID)
	coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
	build := coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)

	args := []string{
		"show",
@@ -38,26 +40,30 @@ func TestShow(t *testing.T) {
	clitest.SetupConfig(t, member, root)
	doneChan := make(chan struct{})
	pty := ptytest.New(t).Attach(inv)
	ctx := testutil.Context(t, testutil.WaitShort)
	go func() {
		defer close(doneChan)
		err := inv.Run()
		err := inv.WithContext(ctx).Run()
		assert.NoError(t, err)
	}()
	matches := []struct {
		match string
		write string
	}{
		{match: fmt.Sprintf("%s/%s", workspace.OwnerName, workspace.Name)},
		{match: fmt.Sprintf("(%s since ", build.Status)},
		{match: fmt.Sprintf("%s:%s", workspace.TemplateName, workspace.LatestBuild.TemplateVersionName)},
		{match: "compute.main"},
		{match: "smith (linux, i386)"},
		{match: "coder ssh " + workspace.Name},
	}
	for _, m := range matches {
		pty.ExpectMatch(m.match)
		pty.ExpectMatchContext(ctx, m.match)
		if len(m.write) > 0 {
			pty.WriteLine(m.write)
		}
	}
	<-doneChan
	_ = testutil.TryReceive(ctx, t, doneChan)
	})
}
58 cli/ssh.go
@@ -24,6 +24,7 @@ import (
|
||||
"github.com/gofrs/flock"
|
||||
"github.com/google/uuid"
|
||||
"github.com/mattn/go-isatty"
|
||||
"github.com/shirou/gopsutil/v4/process"
|
||||
"github.com/spf13/afero"
|
||||
gossh "golang.org/x/crypto/ssh"
|
||||
gosshagent "golang.org/x/crypto/ssh/agent"
|
||||
@@ -84,6 +85,9 @@ func (r *RootCmd) ssh() *serpent.Command {
|
||||
|
||||
containerName string
|
||||
containerUser string
|
||||
|
||||
// Used in tests to simulate the parent exiting.
|
||||
testForcePPID int64
|
||||
)
|
||||
cmd := &serpent.Command{
|
||||
Annotations: workspaceCommand,
|
||||
@@ -175,6 +179,24 @@ func (r *RootCmd) ssh() *serpent.Command {
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
defer cancel()
|
||||
|
||||
// When running as a ProxyCommand (stdio mode), monitor the parent process
|
||||
// and exit if it dies to avoid leaving orphaned processes. This is
|
||||
// particularly important when editors like VSCode/Cursor spawn SSH
|
||||
// connections and then crash or are killed - we don't want zombie
|
||||
// `coder ssh` processes accumulating.
|
||||
// Note: using gopsutil to check the parent process as this handles
|
||||
// windows processes as well in a standard way.
|
||||
if stdio {
|
||||
ppid := int32(os.Getppid()) // nolint:gosec
|
||||
checkParentInterval := 10 * time.Second // Arbitrary interval to not be too frequent
|
||||
if testForcePPID > 0 {
|
||||
ppid = int32(testForcePPID) // nolint:gosec
|
||||
checkParentInterval = 100 * time.Millisecond // Shorter interval for testing
|
||||
}
|
||||
	ctx, cancel = watchParentContext(ctx, quartz.NewReal(), ppid, process.PidExistsWithContext, checkParentInterval)
	defer cancel()
}

// Prevent unnecessary logs from the stdlib from messing up the TTY.
// See: https://github.com/coder/coder/issues/13144
log.SetOutput(io.Discard)
@@ -775,6 +797,12 @@ func (r *RootCmd) ssh() *serpent.Command {
			Value:  serpent.BoolOf(&forceNewTunnel),
			Hidden: true,
		},
		{
			Flag:        "test.force-ppid",
			Description: "Override the parent process ID to simulate a different parent process. ONLY USE THIS IN TESTS.",
			Value:       serpent.Int64Of(&testForcePPID),
			Hidden:      true,
		},
		sshDisableAutostartOption(serpent.BoolOf(&disableAutostart)),
	}
	return cmd
@@ -1662,3 +1690,33 @@ func normalizeWorkspaceInput(input string) string {
		return input // Fallback
	}
}

// watchParentContext returns a context that is canceled when the parent process
// dies. It polls using the provided clock and checks if the parent is alive
// using the provided pidExists function.
func watchParentContext(ctx context.Context, clock quartz.Clock, originalPPID int32, pidExists func(context.Context, int32) (bool, error), interval time.Duration) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(ctx) // intentionally shadowed

	go func() {
		ticker := clock.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				alive, err := pidExists(ctx, originalPPID)
				// If we get an error checking the parent process (e.g., permission
				// denied, the process is in an unknown state), we assume the parent
				// is still alive to avoid disrupting the SSH connection. We only
				// cancel when we definitively know the parent is gone (alive=false, err=nil).
				if !alive && err == nil {
					cancel()
					return
				}
			}
		}
	}()

	return ctx, cancel
}
@@ -312,6 +312,102 @@ type fakeCloser struct {
	err error
}

func TestWatchParentContext(t *testing.T) {
	t.Parallel()

	t.Run("CancelsWhenParentDies", func(t *testing.T) {
		t.Parallel()
		ctx := testutil.Context(t, testutil.WaitShort)
		mClock := quartz.NewMock(t)
		trap := mClock.Trap().NewTicker()
		defer trap.Close()

		parentAlive := true
		childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
			return parentAlive, nil
		}, testutil.WaitShort)
		defer cancel()

		// Wait for the ticker to be created
		trap.MustWait(ctx).MustRelease(ctx)

		// When: we simulate parent death and advance the clock
		parentAlive = false
		mClock.AdvanceNext()

		// Then: The context should be canceled
		_ = testutil.TryReceive(ctx, t, childCtx.Done())
	})

	t.Run("DoesNotCancelWhenParentAlive", func(t *testing.T) {
		t.Parallel()
		ctx := testutil.Context(t, testutil.WaitShort)
		mClock := quartz.NewMock(t)
		trap := mClock.Trap().NewTicker()
		defer trap.Close()

		childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
			return true, nil // Parent always alive
		}, testutil.WaitShort)
		defer cancel()

		// Wait for the ticker to be created
		trap.MustWait(ctx).MustRelease(ctx)

		// When: we advance the clock several times with the parent alive
		for range 3 {
			mClock.AdvanceNext()
		}

		// Then: context should not be canceled
		require.NoError(t, childCtx.Err())
	})

	t.Run("RespectsParentContext", func(t *testing.T) {
		t.Parallel()
		ctx, cancelParent := context.WithCancel(context.Background())
		mClock := quartz.NewMock(t)

		childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
			return true, nil
		}, testutil.WaitShort)
		defer cancel()

		// When: we cancel the parent context
		cancelParent()

		// Then: The context should be canceled
		require.ErrorIs(t, childCtx.Err(), context.Canceled)
	})

	t.Run("DoesNotCancelOnError", func(t *testing.T) {
		t.Parallel()
		ctx := testutil.Context(t, testutil.WaitShort)
		mClock := quartz.NewMock(t)
		trap := mClock.Trap().NewTicker()
		defer trap.Close()

		// Simulate an error checking parent status (e.g., permission denied).
		// We should not cancel the context in this case to avoid disrupting
		// the SSH connection.
		childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
			return false, xerrors.New("permission denied")
		}, testutil.WaitShort)
		defer cancel()

		// Wait for the ticker to be created
		trap.MustWait(ctx).MustRelease(ctx)

		// When: we advance clock several times
		for range 3 {
			mClock.AdvanceNext()
		}

		// Context should NOT be canceled since we got an error (not a definitive "not alive")
		require.NoError(t, childCtx.Err(), "context was canceled even though pidExists returned an error")
	})
}

func (c *fakeCloser) Close() error {
	*c.closes = append(*c.closes, c)
	return c.err
101
cli/ssh_test.go
@@ -1122,6 +1122,107 @@ func TestSSH(t *testing.T) {
		}
	})

	// This test ensures that the SSH session exits when the parent process dies.
	t.Run("StdioExitOnParentDeath", func(t *testing.T) {
		t.Parallel()

		ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitSuperLong)
		defer cancel()

		// sleepStart -> agentReady -> sessionStarted -> sleepKill -> sleepDone -> cmdDone
		sleepStart := make(chan int)
		agentReady := make(chan struct{})
		sessionStarted := make(chan struct{})
		sleepKill := make(chan struct{})
		sleepDone := make(chan struct{})

		// Start a sleep process which we will pretend is the parent.
		go func() {
			sleepCmd := exec.Command("sleep", "infinity")
			if !assert.NoError(t, sleepCmd.Start(), "failed to start sleep command") {
				return
			}
			sleepStart <- sleepCmd.Process.Pid
			defer close(sleepDone)
			<-sleepKill
			sleepCmd.Process.Kill()
			_ = sleepCmd.Wait()
		}()

		client, workspace, agentToken := setupWorkspaceForAgent(t)
		go func() {
			defer close(agentReady)
			_ = agenttest.New(t, client.URL, agentToken)
			coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).WaitFor(coderdtest.AgentsReady)
		}()

		clientOutput, clientInput := io.Pipe()
		serverOutput, serverInput := io.Pipe()
		defer func() {
			for _, c := range []io.Closer{clientOutput, clientInput, serverOutput, serverInput} {
				_ = c.Close()
			}
		}()

		// Start a connection to the agent once it's ready
		go func() {
			<-agentReady
			conn, channels, requests, err := ssh.NewClientConn(&testutil.ReaderWriterConn{
				Reader: serverOutput,
				Writer: clientInput,
			}, "", &ssh.ClientConfig{
				// #nosec
				HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			})
			if !assert.NoError(t, err, "failed to create SSH client connection") {
				return
			}
			defer conn.Close()

			sshClient := ssh.NewClient(conn, channels, requests)
			defer sshClient.Close()

			session, err := sshClient.NewSession()
			if !assert.NoError(t, err, "failed to create SSH session") {
				return
			}
			close(sessionStarted)
			<-sleepDone
			// Ref: https://github.com/coder/internal/issues/1289
			// This may return either a nil error or io.EOF.
			// There is an inherent race here:
			// 1. Sleep process is killed -> sleepDone is closed.
			// 2. watchParentContext detects parent death, cancels context,
			//    causing SSH session teardown.
			// 3. We receive from sleepDone and attempt to call session.Close()
			// Now either:
			// a. Session teardown completes before we call Close(), resulting in io.EOF
			// b. We call Close() first, resulting in a nil error.
			_ = session.Close()
		}()

		// Wait for our "parent" process to start
		sleepPid := testutil.RequireReceive(ctx, t, sleepStart)
		// Wait for the agent to be ready
		testutil.SoftTryReceive(ctx, t, agentReady)
		inv, root := clitest.New(t, "ssh", "--stdio", workspace.Name, "--test.force-ppid", fmt.Sprintf("%d", sleepPid))
		clitest.SetupConfig(t, client, root)
		inv.Stdin = clientOutput
		inv.Stdout = serverInput
		inv.Stderr = io.Discard

		// Start the command
		clitest.Start(t, inv.WithContext(ctx))

		// Wait for a session to be established
		testutil.SoftTryReceive(ctx, t, sessionStarted)
		// Now kill the fake "parent"
		close(sleepKill)
		// The sleep process should exit
		testutil.SoftTryReceive(ctx, t, sleepDone)
		// And then the command should exit. This is tracked by clitest.Start.
	})

	t.Run("ForwardAgent", func(t *testing.T) {
		if runtime.GOOS == "windows" {
			t.Skip("Test not supported on windows")
@@ -367,7 +367,9 @@ func TestStartAutoUpdate(t *testing.T) {
	client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
	owner := coderdtest.CreateFirstUser(t, client)
	member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
	version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
	version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
		ctvr.Name = "v1"
	})
	coderdtest.AwaitTemplateVersionJobCompleted(t, client, version1.ID)
	template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version1.ID)
	workspace := coderdtest.CreateWorkspace(t, member, template.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
@@ -379,6 +381,7 @@ func TestStartAutoUpdate(t *testing.T) {
		coderdtest.MustTransitionWorkspace(t, member, workspace.ID, codersdk.WorkspaceTransitionStart, codersdk.WorkspaceTransitionStop)
	}
	version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(stringRichParameters), func(ctvr *codersdk.CreateTemplateVersionRequest) {
		ctvr.Name = "v2"
		ctvr.TemplateID = template.ID
	})
	coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
261
cli/support.go
@@ -7,6 +7,7 @@ import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
@@ -44,13 +45,18 @@ var supportBundleBlurb = cliui.Bold("This will collect the following information
	` - Coder deployment version
  - Coder deployment Configuration (sanitized), including enabled experiments
  - Coder deployment health snapshot
  - Coder deployment stats (aggregated workspace/session metrics)
  - Entitlements (if available)
  - Health settings (dismissed healthchecks)
  - Coder deployment Network troubleshooting information
  - Workspace list accessible to the user (sanitized)
  - Workspace configuration, parameters, and build logs
  - Template version and source code for the given workspace
  - Agent details (with environment variable sanitized)
  - Agent network diagnostics
  - Agent logs
  - License status
  - pprof profiling data (if --pprof is enabled)
` + cliui.Bold("Note: ") +
	cliui.Wrap("While we try to sanitize sensitive data from support bundles, we cannot guarantee that they do not contain information that you or your organization may consider sensitive.\n") +
	cliui.Bold("Please confirm that you will:\n") +
@@ -61,6 +67,9 @@ var supportBundleBlurb = cliui.Bold("This will collect the following information
func (r *RootCmd) supportBundle() *serpent.Command {
	var outputPath string
	var coderURLOverride string
	var workspacesTotalCap64 int64 = 10
	var templateName string
	var pprof bool
	cmd := &serpent.Command{
		Use:   "bundle <workspace> [<agent>]",
		Short: "Generate a support bundle to troubleshoot issues connecting to a workspace.",
@@ -121,8 +130,9 @@ func (r *RootCmd) supportBundle() *serpent.Command {
			}

			var (
				wsID  uuid.UUID
				agtID uuid.UUID
				wsID       uuid.UUID
				agtID      uuid.UUID
				templateID uuid.UUID
			)

			if len(inv.Args) == 0 {
@@ -155,6 +165,16 @@ func (r *RootCmd) supportBundle() *serpent.Command {
				}
			}

			// Resolve template by name if provided (captures active version)
			// Fallback: if canonical name lookup fails, match DisplayName (case-insensitive).
			if templateName != "" {
				id, err := resolveTemplateID(inv.Context(), client, templateName)
				if err != nil {
					return err
				}
				templateID = id
			}

			if outputPath == "" {
				cwd, err := filepath.Abs(".")
				if err != nil {
@@ -176,12 +196,25 @@ func (r *RootCmd) supportBundle() *serpent.Command {
			if r.verbose {
				clientLog.AppendSinks(sloghuman.Sink(inv.Stderr))
			}
			if pprof {
				_, _ = fmt.Fprintln(inv.Stderr, "pprof data collection will take approximately 30 seconds...")
			}

			// Bypass rate limiting for support bundle collection since it makes many API calls.
			client.HTTPClient.Transport = &codersdk.HeaderTransport{
				Transport: client.HTTPClient.Transport,
				Header:    http.Header{codersdk.BypassRatelimitHeader: {"true"}},
			}

			deps := support.Deps{
				Client: client,
				// Support adds a sink so we don't need to supply one ourselves.
				Log:         clientLog,
				WorkspaceID: wsID,
				AgentID:     agtID,
				Log:                clientLog,
				WorkspaceID:        wsID,
				AgentID:            agtID,
				WorkspacesTotalCap: int(workspacesTotalCap64),
				TemplateID:         templateID,
				CollectPprof:       pprof,
			}

			bun, err := support.Run(inv.Context(), &deps)
@@ -217,11 +250,102 @@ func (r *RootCmd) supportBundle() *serpent.Command {
			Description: "Override the URL to your Coder deployment. This may be useful, for example, if you need to troubleshoot a specific Coder replica.",
			Value:       serpent.StringOf(&coderURLOverride),
		},
		{
			Flag:        "workspaces-total-cap",
			Env:         "CODER_SUPPORT_BUNDLE_WORKSPACES_TOTAL_CAP",
			Description: "Maximum number of workspaces to include in the support bundle. Set to 0 or negative value to disable the cap. Defaults to 10.",
			Value:       serpent.Int64Of(&workspacesTotalCap64),
		},
		{
			Flag:        "template",
			Env:         "CODER_SUPPORT_BUNDLE_TEMPLATE",
			Description: "Template name to include in the support bundle. Use org_name/template_name if template name is reused across multiple organizations.",
			Value:       serpent.StringOf(&templateName),
		},
		{
			Flag:        "pprof",
			Env:         "CODER_SUPPORT_BUNDLE_PPROF",
			Description: "Collect pprof profiling data from the Coder server and agent. Requires Coder server version 2.28.0 or newer.",
			Value:       serpent.BoolOf(&pprof),
		},
	}

	return cmd
}

// Resolve a template to its ID, supporting:
// - org/name form
// - slug or display name match (case-insensitive) across all memberships
func resolveTemplateID(ctx context.Context, client *codersdk.Client, templateArg string) (uuid.UUID, error) {
	orgPart := ""
	namePart := templateArg
	if slash := strings.IndexByte(templateArg, '/'); slash > 0 && slash < len(templateArg)-1 {
		orgPart = templateArg[:slash]
		namePart = templateArg[slash+1:]
	}

	resolveInOrg := func(orgID uuid.UUID) (codersdk.Template, bool, error) {
		if t, err := client.TemplateByName(ctx, orgID, namePart); err == nil {
			return t, true, nil
		}
		tpls, err := client.TemplatesByOrganization(ctx, orgID)
		if err != nil {
			return codersdk.Template{}, false, nil
		}
		for _, t := range tpls {
			if strings.EqualFold(t.Name, namePart) || strings.EqualFold(t.DisplayName, namePart) {
				return t, true, nil
			}
		}
		return codersdk.Template{}, false, nil
	}

	if orgPart != "" {
		org, err := client.OrganizationByName(ctx, orgPart)
		if err != nil {
			return uuid.Nil, xerrors.Errorf("get organization %q: %w", orgPart, err)
		}
		t, found, err := resolveInOrg(org.ID)
		if err != nil {
			return uuid.Nil, err
		}
		if !found {
			return uuid.Nil, xerrors.Errorf("template %q not found in organization %q", namePart, orgPart)
		}
		return t.ID, nil
	}

	orgs, err := client.OrganizationsByUser(ctx, codersdk.Me)
	if err != nil {
		return uuid.Nil, xerrors.Errorf("get organizations: %w", err)
	}
	var (
		foundTpl  codersdk.Template
		foundOrgs []string
	)
	for _, org := range orgs {
		if t, found, err := resolveInOrg(org.ID); err == nil && found {
			if len(foundOrgs) == 0 {
				foundTpl = t
			}
			foundOrgs = append(foundOrgs, org.Name)
		}
	}
	switch len(foundOrgs) {
	case 0:
		return uuid.Nil, xerrors.Errorf("template %q not found in your organizations", namePart)
	case 1:
		return foundTpl.ID, nil
	default:
		return uuid.Nil, xerrors.Errorf(
			"template %q found in multiple organizations (%s); use --template \"<org_name/%s>\" to target desired template.",
			namePart,
			strings.Join(foundOrgs, ", "),
			namePart,
		)
	}
}
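The `org/name` parsing at the top of `resolveTemplateID` can be isolated as a small helper. This sketch (a hypothetical `splitTemplateArg`, not part of the diff) mirrors the guard on the slash position: a slash at the very start or end, or no slash at all, leaves the whole argument as the name.

```go
package main

import (
	"fmt"
	"strings"
)

// splitTemplateArg splits "org/name" into its parts, mirroring the guard
// `slash > 0 && slash < len(arg)-1`: only an interior slash splits.
func splitTemplateArg(arg string) (org, name string) {
	if slash := strings.IndexByte(arg, '/'); slash > 0 && slash < len(arg)-1 {
		return arg[:slash], arg[slash+1:]
	}
	return "", arg
}

func main() {
	for _, arg := range []string{"acme/dev", "dev", "/dev", "acme/"} {
		org, name := splitTemplateArg(arg)
		fmt.Printf("%-10s -> org=%q name=%q\n", arg, org, name)
	}
}
```

Note that a leading or trailing slash is deliberately not treated as a separator, so malformed input falls through to the plain name lookup rather than producing an empty org or name.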

// summarizeBundle makes a best-effort attempt to write a short summary
// of the support bundle to the user's terminal.
func summarizeBundle(inv *serpent.Invocation, bun *support.Bundle) {
@@ -283,6 +407,10 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
		"deployment/config.json":          src.Deployment.Config,
		"deployment/experiments.json":     src.Deployment.Experiments,
		"deployment/health.json":          src.Deployment.HealthReport,
		"deployment/stats.json":           src.Deployment.Stats,
		"deployment/entitlements.json":    src.Deployment.Entitlements,
		"deployment/health_settings.json": src.Deployment.HealthSettings,
		"deployment/workspaces.json":      src.Deployment.Workspaces,
		"network/connection_info.json":    src.Network.ConnectionInfo,
		"network/netcheck.json":           src.Network.Netcheck,
		"network/interfaces.json":         src.Network.Interfaces,
@@ -302,6 +430,49 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
		}
	}

	// Include named template artifacts (if requested)
	if src.NamedTemplate.Template.ID != uuid.Nil {
		name := src.NamedTemplate.Template.Name
		// JSON files
		for k, v := range map[string]any{
			"templates/" + name + "/template.json":         src.NamedTemplate.Template,
			"templates/" + name + "/template_version.json": src.NamedTemplate.TemplateVersion,
		} {
			f, err := dest.Create(k)
			if err != nil {
				return xerrors.Errorf("create file %q in archive: %w", k, err)
			}
			enc := json.NewEncoder(f)
			enc.SetIndent("", " ")
			if err := enc.Encode(v); err != nil {
				return xerrors.Errorf("write json to %q: %w", k, err)
			}
		}
		// Binary template file (zip)
		if namedZipBytes, err := base64.StdEncoding.DecodeString(src.NamedTemplate.TemplateFileBase64); err == nil {
			k := "templates/" + name + "/template_file.zip"
			f, err := dest.Create(k)
			if err != nil {
				return xerrors.Errorf("create file %q in archive: %w", k, err)
			}
			if _, err := f.Write(namedZipBytes); err != nil {
				return xerrors.Errorf("write file %q in archive: %w", k, err)
			}
		}
	}

	var buildInfoRef string
	if src.Deployment.BuildInfo != nil {
		if raw, err := json.Marshal(src.Deployment.BuildInfo); err == nil {
			buildInfoRef = base64.StdEncoding.EncodeToString(raw)
		}
	}

	tailnetHTML := src.Network.TailnetDebug
	if buildInfoRef != "" {
		tailnetHTML += "\n<!-- trace " + buildInfoRef + " -->"
	}

	templateVersionBytes, err := base64.StdEncoding.DecodeString(src.Workspace.TemplateFileBase64)
	if err != nil {
		return xerrors.Errorf("decode template zip from base64")
@@ -319,10 +490,11 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
		"agent/client_magicsock.html":    string(src.Agent.ClientMagicsockHTML),
		"agent/startup_logs.txt":         humanizeAgentLogs(src.Agent.StartupLogs),
		"agent/prometheus.txt":           string(src.Agent.Prometheus),
		"deployment/prometheus.txt":      string(src.Deployment.Prometheus),
		"cli_logs.txt":                   string(src.CLILogs),
		"logs.txt":                       strings.Join(src.Logs, "\n"),
		"network/coordinator_debug.html": src.Network.CoordinatorDebug,
		"network/tailnet_debug.html":     src.Network.TailnetDebug,
		"network/tailnet_debug.html":     tailnetHTML,
		"workspace/build_logs.txt":       humanizeBuildLogs(src.Workspace.BuildLogs),
		"workspace/template_file.zip":    string(templateVersionBytes),
		"license-status.txt":             licenseStatus,
@@ -335,12 +507,89 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
			return xerrors.Errorf("write file %q in archive: %w", k, err)
		}
	}

	// Write pprof binary data
	if err := writePprofData(src.Pprof, dest); err != nil {
		return xerrors.Errorf("write pprof data: %w", err)
	}

	if err := dest.Close(); err != nil {
		return xerrors.Errorf("close zip file: %w", err)
	}
	return nil
}

func writePprofData(pprof support.Pprof, dest *zip.Writer) error {
	// Write server pprof data directly to pprof directory
	if pprof.Server != nil {
		if err := writePprofCollection("pprof", pprof.Server, dest); err != nil {
			return xerrors.Errorf("write server pprof data: %w", err)
		}
	}

	// Write agent pprof data
	if pprof.Agent != nil {
		if err := writePprofCollection("pprof/agent", pprof.Agent, dest); err != nil {
			return xerrors.Errorf("write agent pprof data: %w", err)
		}
	}

	return nil
}

func writePprofCollection(basePath string, collection *support.PprofCollection, dest *zip.Writer) error {
	// Define the pprof files to write with their extensions
	files := map[string][]byte{
		"allocs.prof.gz":       collection.Allocs,
		"heap.prof.gz":         collection.Heap,
		"profile.prof.gz":      collection.Profile,
		"block.prof.gz":        collection.Block,
		"mutex.prof.gz":        collection.Mutex,
		"goroutine.prof.gz":    collection.Goroutine,
		"threadcreate.prof.gz": collection.Threadcreate,
		"trace.gz":             collection.Trace,
	}

	// Write binary pprof files
	for filename, data := range files {
		if len(data) > 0 {
			filePath := basePath + "/" + filename
			f, err := dest.Create(filePath)
			if err != nil {
				return xerrors.Errorf("create pprof file %q: %w", filePath, err)
			}
			if _, err := f.Write(data); err != nil {
				return xerrors.Errorf("write pprof file %q: %w", filePath, err)
			}
		}
	}

	// Write cmdline as text file
	if collection.Cmdline != "" {
		filePath := basePath + "/cmdline.txt"
		f, err := dest.Create(filePath)
		if err != nil {
			return xerrors.Errorf("create cmdline file %q: %w", filePath, err)
		}
		if _, err := f.Write([]byte(collection.Cmdline)); err != nil {
			return xerrors.Errorf("write cmdline file %q: %w", filePath, err)
		}
	}

	if collection.Symbol != "" {
		filePath := basePath + "/symbol.txt"
		f, err := dest.Create(filePath)
		if err != nil {
			return xerrors.Errorf("create symbol file %q: %w", filePath, err)
		}
		if _, err := f.Write([]byte(collection.Symbol)); err != nil {
			return xerrors.Errorf("write symbol file %q: %w", filePath, err)
		}
	}

	return nil
}

func humanizeAgentLogs(ls []codersdk.WorkspaceAgentLog) string {
	var buf bytes.Buffer
	tw := tabwriter.NewWriter(&buf, 0, 2, 1, ' ', 0)
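`humanizeAgentLogs` aligns its columns with `text/tabwriter` using the settings shown above (minwidth 0, tabwidth 2, padding 1, space-padded). A small standalone sketch of the same configuration, with a hypothetical `tabulate` helper:

```go
package main

import (
	"bytes"
	"fmt"
	"text/tabwriter"
)

// tabulate renders rows into aligned columns with the same tabwriter
// settings humanizeAgentLogs uses: minwidth 0, tabwidth 2, padding 1,
// padded with spaces.
func tabulate(rows [][]string) string {
	var buf bytes.Buffer
	tw := tabwriter.NewWriter(&buf, 0, 2, 1, ' ', 0)
	for _, r := range rows {
		fmt.Fprintf(tw, "%s\t%s\t%s\n", r[0], r[1], r[2])
	}
	_ = tw.Flush() // Flush is what actually computes the column widths
	return buf.String()
}

func main() {
	fmt.Print(tabulate([][]string{
		{"2024-01-01T00:00:00Z", "info", "starting agent"},
		{"2024-01-01T00:00:01Z", "debug", "connected"},
	}))
}
```

Each column is padded to its widest cell plus one space of padding, so variable-width log levels still line up; the trailing column has no tab and is left unpadded.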
@@ -46,6 +46,8 @@ func TestSupportBundle(t *testing.T) {

	// Support bundle tests can share a single coderdtest instance.
	var dc codersdk.DeploymentConfig
	dc.Values = coderdtest.DeploymentValues(t)
	dc.Values.Prometheus.Enable = true
	secretValue := uuid.NewString()
	seedSecretDeploymentOptions(t, &dc, secretValue)
	client, closer, api := coderdtest.NewWithAPI(t, &coderdtest.Options{
@@ -203,6 +205,10 @@ func assertBundleContents(t *testing.T, path string, wantWorkspace bool, wantAge
		var v codersdk.DeploymentConfig
		decodeJSONFromZip(t, f, &v)
		require.NotEmpty(t, v, "deployment config should not be empty")
	case "deployment/entitlements.json":
		var v codersdk.Entitlements
		decodeJSONFromZip(t, f, &v)
		require.NotNil(t, v, "entitlements should not be nil")
	case "deployment/experiments.json":
		var v codersdk.Experiments
		decodeJSONFromZip(t, f, &v)
@@ -211,6 +217,22 @@ func assertBundleContents(t *testing.T, path string, wantWorkspace bool, wantAge
		var v healthsdk.HealthcheckReport
		decodeJSONFromZip(t, f, &v)
		require.NotEmpty(t, v, "health report should not be empty")
	case "deployment/health_settings.json":
		var v healthsdk.HealthSettings
		decodeJSONFromZip(t, f, &v)
		require.NotEmpty(t, v, "health settings should not be empty")
	case "deployment/stats.json":
		var v codersdk.DeploymentStats
		decodeJSONFromZip(t, f, &v)
		require.NotNil(t, v, "deployment stats should not be nil")
	case "deployment/workspaces.json":
		var v codersdk.Workspace
		decodeJSONFromZip(t, f, &v)
		require.NotNil(t, v, "deployment workspaces should not be nil")
	case "deployment/prometheus.txt":
		bs := readBytesFromZip(t, f)
		require.NotEmpty(t, bs, "prometheus metrics should not be empty")
		require.Contains(t, string(bs), "go_goroutines", "prometheus metrics should contain go runtime metrics")
	case "network/connection_info.json":
		var v workspacesdk.AgentConnectionInfo
		decodeJSONFromZip(t, f, &v)
2
cli/testdata/coder_autoupdate_--help.golden
vendored
@@ -7,7 +7,7 @@ USAGE:

OPTIONS:
  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
2
cli/testdata/coder_config-ssh_--help.golden
vendored
@@ -55,7 +55,7 @@ OPTIONS:
          configured in the workspace template is used.

  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
5
cli/testdata/coder_create_--help.golden
vendored
@@ -49,8 +49,11 @@ OPTIONS:
  --template-version string, $CODER_TEMPLATE_VERSION
          Specify a template version name.

  --use-parameter-defaults bool, $CODER_WORKSPACE_USE_PARAMETER_DEFAULTS
          Automatically accept parameter defaults when no value is provided.

  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
2
cli/testdata/coder_delete_--help.golden
vendored
@@ -18,7 +18,7 @@ OPTIONS:
          resources.

  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
2
cli/testdata/coder_dotfiles_--help.golden
vendored
@@ -24,7 +24,7 @@ OPTIONS:
          empty, will use $HOME.

  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
3
cli/testdata/coder_login_--help.golden
vendored
@@ -9,6 +9,9 @@ USAGE:
  macOS and Windows and a plain text file on Linux. Use the --use-keyring flag
  or CODER_USE_KEYRING environment variable to change the storage mechanism.

SUBCOMMANDS:
  token    Print the current session token

OPTIONS:
  --first-user-email string, $CODER_FIRST_USER_EMAIL
          Specifies an email address to use if creating the first user for the
11
cli/testdata/coder_login_token_--help.golden
vendored
Normal file
@@ -0,0 +1,11 @@
coder v0.0.0-devel

USAGE:
  coder login token

  Print the current session token

  Print the session token for use in scripts and automation.

———
Run `coder --help` for a list of global options.
2
cli/testdata/coder_logout_--help.golden
vendored
@@ -7,7 +7,7 @@ USAGE:

OPTIONS:
  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
@@ -7,7 +7,7 @@ USAGE:

OPTIONS:
  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.

@@ -18,7 +18,7 @@ OPTIONS:
          Reads stdin for the json role definition to upload.

  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.

@@ -23,7 +23,7 @@ OPTIONS:
          Reads stdin for the json role definition to upload.

  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.

@@ -15,6 +15,7 @@ SUBCOMMANDS:
                      memberships from an IdP.
   role-sync          Role sync settings to sync organization roles from an
                      IdP.
   workspace-sharing  Workspace sharing settings for the organization.

———
Run `coder --help` for a list of global options.

@@ -15,6 +15,7 @@ SUBCOMMANDS:
                      memberships from an IdP.
   role-sync          Role sync settings to sync organization roles from an
                      IdP.
   workspace-sharing  Workspace sharing settings for the organization.

———
Run `coder --help` for a list of global options.

@@ -15,6 +15,7 @@ SUBCOMMANDS:
                      memberships from an IdP.
   role-sync          Role sync settings to sync organization roles from an
                      IdP.
   workspace-sharing  Workspace sharing settings for the organization.

———
Run `coder --help` for a list of global options.

@@ -15,6 +15,7 @@ SUBCOMMANDS:
                      memberships from an IdP.
   role-sync          Role sync settings to sync organization roles from an
                      IdP.
   workspace-sharing  Workspace sharing settings for the organization.

———
Run `coder --help` for a list of global options.

@@ -7,7 +7,7 @@
    "last_seen_at": "====[timestamp]=====",
    "name": "test-daemon",
    "version": "v0.0.0-devel",
    "api_version": "1.14",
    "api_version": "1.15",
    "provisioners": [
      "echo"
    ],
2
cli/testdata/coder_publickey_--help.golden
vendored
@@ -13,7 +13,7 @@ OPTIONS:
          services it's registered with.

  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
2
cli/testdata/coder_rename_--help.golden
vendored
@@ -7,7 +7,7 @@ USAGE:

OPTIONS:
  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
2
cli/testdata/coder_restart_--help.golden
vendored
@@ -39,7 +39,7 @@ OPTIONS:
          pairs for the parameters.

  -y, --yes bool
          Bypass prompts.
          Bypass confirmation prompts.

———
Run `coder --help` for a list of global options.
cli/testdata/coder_server_--help.golden (vendored, 44 changes)
@@ -15,9 +15,11 @@ SUBCOMMANDS:

 OPTIONS:
  --allow-workspace-renames bool, $CODER_ALLOW_WORKSPACE_RENAMES (default: false)
-     DEPRECATED: Allow users to rename their workspaces. Use only for
-     temporary compatibility reasons, this will be removed in a future
-     release.
+     Allow users to rename their workspaces. WARNING: Renaming a workspace
+     can cause Terraform resources that depend on the workspace name to be
+     destroyed and recreated, potentially causing data loss. Only enable
+     this if your templates do not use workspace names in resource
+     identifiers, or if you understand the risks.

  --cache-dir string, $CODER_CACHE_DIRECTORY (default: [cache dir])
      The directory to cache temporary files. If unspecified and
@@ -109,11 +111,18 @@ AI BRIDGE OPTIONS:
      The access key secret to use with the access key to authenticate
      against the AWS Bedrock API.

+  --aibridge-bedrock-base-url string, $CODER_AIBRIDGE_BEDROCK_BASE_URL
+     The base URL to use for the AWS Bedrock API. Use this setting to
+     specify an exact URL to use. Takes precedence over
+     CODER_AIBRIDGE_BEDROCK_REGION.
+
  --aibridge-bedrock-model string, $CODER_AIBRIDGE_BEDROCK_MODEL (default: global.anthropic.claude-sonnet-4-5-20250929-v1:0)
      The model to use when making requests to the AWS Bedrock API.

  --aibridge-bedrock-region string, $CODER_AIBRIDGE_BEDROCK_REGION
-     The AWS Bedrock API region.
+     The AWS Bedrock API region to use. Constructs a base URL to use for
+     the AWS Bedrock API in the form of
+     'https://bedrock-runtime.<region>.amazonaws.com'.

  --aibridge-bedrock-small-fastmodel string, $CODER_AIBRIDGE_BEDROCK_SMALL_FAST_MODEL (default: global.anthropic.claude-haiku-4-5-20251001-v1:0)
      The small fast model to use when making requests to the AWS Bedrock
@@ -121,6 +130,10 @@ AI BRIDGE OPTIONS:
      See
      https://docs.claude.com/en/docs/claude-code/settings#environment-variables.

+  --aibridge-circuit-breaker-enabled bool, $CODER_AIBRIDGE_CIRCUIT_BREAKER_ENABLED (default: false)
+     Enable the circuit breaker to protect against cascading failures from
+     upstream AI provider rate limits (429, 503, 529 overloaded).
+
  --aibridge-retention duration, $CODER_AIBRIDGE_RETENTION (default: 60d)
      Length of time to retain data such as interceptions and all related
      records (token, prompt, tool use).
@@ -147,6 +160,18 @@ AI BRIDGE OPTIONS:
      Maximum number of AI Bridge requests per second per replica. Set to 0
      to disable (unlimited).

+  --aibridge-send-actor-headers bool, $CODER_AIBRIDGE_SEND_ACTOR_HEADERS (default: false)
+     Once enabled, extra headers will be added to upstream requests to
+     identify the user (actor) making requests to AI Bridge. This is only
+     needed if you are using a proxy between AI Bridge and an upstream AI
+     provider. This will send X-Ai-Bridge-Actor-Id (the ID of the user
+     making the request) and X-Ai-Bridge-Actor-Metadata-Username (their
+     username).
+
+  --aibridge-structured-logging bool, $CODER_AIBRIDGE_STRUCTURED_LOGGING (default: false)
+     Emit structured logs for AI Bridge interception records. Use this for
+     exporting these records to external SIEM or observability systems.
+
 AI BRIDGE PROXY OPTIONS:
  --aibridge-proxy-cert-file string, $CODER_AIBRIDGE_PROXY_CERT_FILE
      Path to the CA certificate file for AI Bridge Proxy.
@@ -161,6 +186,17 @@ AI BRIDGE PROXY OPTIONS:
  --aibridge-proxy-listen-addr string, $CODER_AIBRIDGE_PROXY_LISTEN_ADDR (default: :8888)
      The address the AI Bridge Proxy will listen on.

+  --aibridge-proxy-upstream string, $CODER_AIBRIDGE_PROXY_UPSTREAM
+     URL of an upstream HTTP proxy to chain tunneled (non-allowlisted)
+     requests through. Format: http://[user:pass@]host:port or
+     https://[user:pass@]host:port.
+
+  --aibridge-proxy-upstream-ca string, $CODER_AIBRIDGE_PROXY_UPSTREAM_CA
+     Path to a PEM-encoded CA certificate to trust for the upstream proxy's
+     TLS connection. Only needed for HTTPS upstream proxies with
+     certificates not trusted by the system. If not provided, the system
+     certificate pool is used.
+
 CLIENT OPTIONS:
 These options change the behavior of how clients interact with the Coder.
 Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.
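As a quick illustration of how the new Bedrock options interact, the following sketch derives the endpoint the way the `--aibridge-bedrock-region` description above states. This is illustrative only: the variable names come from the help text, and the region value `us-east-1` is a made-up example.

```shell
# Hypothetical example region; CODER_AIBRIDGE_BEDROCK_REGION is the env
# var named in the help text above.
CODER_AIBRIDGE_BEDROCK_REGION="us-east-1"

# Per the option description, the server constructs the Bedrock base URL
# from the region, unless CODER_AIBRIDGE_BEDROCK_BASE_URL is set, which
# takes precedence.
echo "https://bedrock-runtime.${CODER_AIBRIDGE_BEDROCK_REGION}.amazonaws.com"
```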
cli/testdata/coder_start_--help.golden (vendored, 2 changes)
@@ -42,7 +42,7 @@ OPTIONS:
      pairs for the parameters.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
cli/testdata/coder_stop_--help.golden (vendored, 2 changes)
@@ -7,7 +7,7 @@ USAGE:

 OPTIONS:
  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
cli/testdata/coder_support_bundle_--help.golden (vendored, 15 changes)
@@ -14,12 +14,25 @@ OPTIONS:
      File path for writing the generated support bundle. Defaults to
      coder-support-$(date +%s).zip.

  --pprof bool, $CODER_SUPPORT_BUNDLE_PPROF
      Collect pprof profiling data from the Coder server and agent. Requires
      Coder server version 2.28.0 or newer.

  --template string, $CODER_SUPPORT_BUNDLE_TEMPLATE
      Template name to include in the support bundle. Use
      org_name/template_name if template name is reused across multiple
      organizations.

  --url-override string, $CODER_SUPPORT_BUNDLE_URL_OVERRIDE
      Override the URL to your Coder deployment. This may be useful, for
      example, if you need to troubleshoot a specific Coder replica.

  --workspaces-total-cap int, $CODER_SUPPORT_BUNDLE_WORKSPACES_TOTAL_CAP
      Maximum number of workspaces to include in the support bundle. Set to
      0 or negative value to disable the cap. Defaults to 10.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
cli/testdata/coder_task_delete_--help.golden (vendored, 2 changes)
@@ -21,7 +21,7 @@ USAGE:

 OPTIONS:
  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
@@ -14,7 +14,7 @@ OPTIONS:
      versions are archived.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
@@ -68,7 +68,7 @@ OPTIONS:
      Specify a file path with values for Terraform-managed variables.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
@@ -12,7 +12,7 @@ OPTIONS:
      Select which organization (uuid or name) to use.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
@@ -91,7 +91,7 @@ OPTIONS:
      for more details.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
@@ -18,7 +18,7 @@ OPTIONS:
      the template version to pull.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

  --zip bool
      Output the template as a zip archive to stdout.
@@ -48,7 +48,7 @@ OPTIONS:
      Specify a file path with values for Terraform-managed variables.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
@@ -11,7 +11,7 @@ OPTIONS:
      Select which organization (uuid or name) to use.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
@@ -11,7 +11,7 @@ OPTIONS:
      Select which organization (uuid or name) to use.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
@@ -11,7 +11,7 @@ OPTIONS:
      the user may have.

  -y, --yes bool
-     Bypass prompts.
+     Bypass confirmation prompts.

 ———
 Run `coder --help` for a list of global options.
cli/testdata/server-config.yaml.golden (vendored, 65 changes)
@@ -575,8 +575,10 @@ userQuietHoursSchedule:
 # change their quiet hours schedule and the site default is always used.
 # (default: true, type: bool)
 allowCustomQuietHours: true
-# DEPRECATED: Allow users to rename their workspaces. Use only for temporary
-# compatibility reasons, this will be removed in a future release.
+# Allow users to rename their workspaces. WARNING: Renaming a workspace can cause
+# Terraform resources that depend on the workspace name to be destroyed and
+# recreated, potentially causing data loss. Only enable this if your templates do
+# not use workspace names in resource identifiers, or if you understand the risks.
 # (default: false, type: bool)
 allowWorkspaceRenames: false
 # Configure how emails are sent.
@@ -746,7 +748,12 @@ aibridge:
 # The base URL of the Anthropic API.
 # (default: https://api.anthropic.com/, type: string)
 anthropic_base_url: https://api.anthropic.com/
-# The AWS Bedrock API region.
+# The base URL to use for the AWS Bedrock API. Use this setting to specify an
+# exact URL to use. Takes precedence over CODER_AIBRIDGE_BEDROCK_REGION.
+# (default: <unset>, type: string)
+bedrock_base_url: ""
+# The AWS Bedrock API region to use. Constructs a base URL to use for the AWS
+# Bedrock API in the form of 'https://bedrock-runtime.<region>.amazonaws.com'.
 # (default: <unset>, type: string)
 bedrock_region: ""
 # The model to use when making requests to the AWS Bedrock API.
@@ -768,11 +775,39 @@ aibridge:
 # Maximum number of concurrent AI Bridge requests per replica. Set to 0 to disable
 # (unlimited).
 # (default: 0, type: int)
-maxConcurrency: 0
+max_concurrency: 0
 # Maximum number of AI Bridge requests per second per replica. Set to 0 to disable
 # (unlimited).
 # (default: 0, type: int)
-rateLimit: 0
+rate_limit: 0
+# Emit structured logs for AI Bridge interception records. Use this for exporting
+# these records to external SIEM or observability systems.
+# (default: false, type: bool)
+structured_logging: false
+# Once enabled, extra headers will be added to upstream requests to identify the
+# user (actor) making requests to AI Bridge. This is only needed if you are using
+# a proxy between AI Bridge and an upstream AI provider. This will send
+# X-Ai-Bridge-Actor-Id (the ID of the user making the request) and
+# X-Ai-Bridge-Actor-Metadata-Username (their username).
+# (default: false, type: bool)
+send_actor_headers: false
+# Enable the circuit breaker to protect against cascading failures from upstream
+# AI provider rate limits (429, 503, 529 overloaded).
+# (default: false, type: bool)
+circuit_breaker_enabled: false
+# Number of consecutive failures that triggers the circuit breaker to open.
+# (default: 5, type: int)
+circuit_breaker_failure_threshold: 5
+# Cyclic period of the closed state for clearing internal failure counts.
+# (default: 10s, type: duration)
+circuit_breaker_interval: 10s
+# How long the circuit breaker stays open before transitioning to half-open state.
+# (default: 30s, type: duration)
+circuit_breaker_timeout: 30s
+# Maximum number of requests allowed in half-open state before deciding to close
+# or re-open the circuit.
+# (default: 3, type: int)
+circuit_breaker_max_requests: 3
 aibridgeproxy:
 # Enable the AI Bridge MITM Proxy for intercepting and decrypting AI provider
 # requests.
@@ -787,13 +822,25 @@ aibridgeproxy:
 # Path to the CA private key file for AI Bridge Proxy.
 # (default: <unset>, type: string)
 key_file: ""
-# Comma-separated list of domains for which HTTPS traffic will be decrypted and
-# routed through AI Bridge. Requests to other domains will be tunneled directly
-# without decryption.
-# (default: api.anthropic.com,api.openai.com, type: string-array)
+# Comma-separated list of AI provider domains for which HTTPS traffic will be
+# decrypted and routed through AI Bridge. Requests to other domains will be
+# tunneled directly without decryption. Supported domains: api.anthropic.com,
+# api.openai.com, api.individual.githubcopilot.com.
+# (default: api.anthropic.com,api.openai.com,api.individual.githubcopilot.com,
+# type: string-array)
 domain_allowlist:
 - api.anthropic.com
 - api.openai.com
+- api.individual.githubcopilot.com
+# URL of an upstream HTTP proxy to chain tunneled (non-allowlisted) requests
+# through. Format: http://[user:pass@]host:port or https://[user:pass@]host:port.
+# (default: <unset>, type: string)
+upstream_proxy: ""
+# Path to a PEM-encoded CA certificate to trust for the upstream proxy's TLS
+# connection. Only needed for HTTPS upstream proxies with certificates not trusted
+# by the system. If not provided, the system certificate pool is used.
+# (default: <unset>, type: string)
+upstream_proxy_ca: ""
 # Configure data retention policies for various database tables. Retention
 # policies automatically purge old data to reduce database size and improve
 # performance. Setting a retention duration to 0 disables automatic purging for
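Pulled together, the new server-config keys from the hunks above would sit in the YAML roughly as follows. This is a sketch assembled from the defaults and descriptions shown in the diff, not a complete or authoritative config; the `us-east-1` region is an invented example value.

```yaml
aibridge:
  # Example region (invented); derives the Bedrock endpoint unless
  # bedrock_base_url is set, which takes precedence.
  bedrock_region: "us-east-1"
  circuit_breaker_enabled: true   # default: false
  circuit_breaker_failure_threshold: 5
  circuit_breaker_interval: 10s
  circuit_breaker_timeout: 30s
  circuit_breaker_max_requests: 3
aibridgeproxy:
  domain_allowlist:
    - api.anthropic.com
    - api.openai.com
    - api.individual.githubcopilot.com
  upstream_proxy: ""      # e.g. http://[user:pass@]host:port
  upstream_proxy_ca: ""   # PEM CA for an HTTPS upstream proxy
```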
@@ -17,8 +17,10 @@ import (
 	"cdr.dev/slog/v3"
 	agentproto "github.com/coder/coder/v2/agent/proto"
+	"github.com/coder/coder/v2/coderd/agentapi/metadatabatcher"
 	"github.com/coder/coder/v2/coderd/agentapi/resourcesmonitor"
 	"github.com/coder/coder/v2/coderd/appearance"
+	"github.com/coder/coder/v2/coderd/boundaryusage"
 	"github.com/coder/coder/v2/coderd/connectionlog"
 	"github.com/coder/coder/v2/coderd/database"
 	"github.com/coder/coder/v2/coderd/database/pubsub"
@@ -65,10 +67,11 @@ type API struct {
 var _ agentproto.DRPCAgentServer = &API{}

 type Options struct {
-	AgentID        uuid.UUID
-	OwnerID        uuid.UUID
-	WorkspaceID    uuid.UUID
-	OrganizationID uuid.UUID
+	AgentID           uuid.UUID
+	OwnerID           uuid.UUID
+	WorkspaceID       uuid.UUID
+	OrganizationID    uuid.UUID
+	TemplateVersionID uuid.UUID

 	AuthenticatedCtx context.Context
 	Log              slog.Logger
@@ -80,10 +83,12 @@ type Options struct {
 	DerpMapFn          func() *tailcfg.DERPMap
 	TailnetCoordinator *atomic.Pointer[tailnet.Coordinator]
 	StatsReporter      *workspacestats.Reporter
+	MetadataBatcher    *metadatabatcher.Batcher
 	AppearanceFetcher  *atomic.Pointer[appearance.Fetcher]
 	PublishWorkspaceUpdateFn          func(ctx context.Context, userID uuid.UUID, event wspubsub.WorkspaceEvent)
 	PublishWorkspaceAgentLogsUpdateFn func(ctx context.Context, workspaceAgentID uuid.UUID, msg agentsdk.LogsNotifyMessage)
 	NetworkTelemetryHandler           func(batch []*tailnetproto.TelemetryEvent)
+	BoundaryUsageTracker              *boundaryusage.Tracker

 	AccessURL   *url.URL
 	AppHostname string
@@ -178,8 +183,8 @@ func New(opts Options, workspace database.Workspace) *API {
 		AgentFn:   api.agent,
 		Workspace: api.cachedWorkspaceFields,
 		Database:  opts.Database,
 		Pubsub:    opts.Pubsub,
-		Log:       opts.Log,
+		Batcher:   opts.MetadataBatcher,
 	}

 	api.LogsAPI = &LogsAPI{
@@ -221,8 +226,12 @@ func New(opts Options, workspace database.Workspace) *API {
 	}

 	api.BoundaryLogsAPI = &BoundaryLogsAPI{
-		Log:         opts.Log,
-		WorkspaceID: opts.WorkspaceID,
+		Log:                  opts.Log,
+		WorkspaceID:          opts.WorkspaceID,
+		OwnerID:              opts.OwnerID,
+		TemplateID:           workspace.TemplateID,
+		TemplateVersionID:    opts.TemplateVersionID,
+		BoundaryUsageTracker: opts.BoundaryUsageTracker,
 	}

 	// Start background cache refresh loop to handle workspace changes
Some files were not shown because too many files have changed in this diff.