Compare commits


5 Commits

Author SHA1 Message Date
Danielle Maywood 230b55bfa0 chore: upgrade coder/clistat to v1.1.1 (#20322) (#20325)
coder/clistat has received a handful of bug fixes, so we're back-porting them to 2.27.

---

Cherry-picked from
https://github.com/coder/coder/commit/9bef5de30d8921bdeeec5ff02f14fdd7d1d932d7
2025-10-16 15:29:09 +01:00
Cian Johnston b2d6a18861 fix(coderd): truncate task prompt to 160 characters in notifications (#20147) (#20153)
Truncates the task prompt used in notifications to a maximum of 160
characters. The length of 160 characters was chosen arbitrarily.

(cherry picked from commit ffcb7a1693)
2025-10-09 07:38:34 +01:00
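A minimal sketch of the truncation described in the commit above, written in Go since the change lands in coderd; the function and constant names are illustrative rather than the actual identifiers, and appending an ellipsis is an assumption — the commit only specifies the 160-character cap:

```go
package main

import "fmt"

// maxPromptLength mirrors the arbitrary 160-character cap from the commit.
const maxPromptLength = 160

// truncatePrompt caps a task prompt before it is embedded in a notification.
// Converting to runes first avoids cutting a multi-byte character in half.
func truncatePrompt(prompt string) string {
	runes := []rune(prompt)
	if len(runes) <= maxPromptLength {
		return prompt
	}
	// The trailing "…" is an assumption; the commit only states the cap.
	return string(runes[:maxPromptLength-1]) + "…"
}

func main() {
	long := make([]rune, 500)
	for i := range long {
		long[i] = 'x'
	}
	fmt.Println(len([]rune(truncatePrompt(string(long))))) // prints 160
}
```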
Dean Sheather c0cd32c2c4 chore: fix missing variable in deploy workflow (#20135) 2025-10-02 14:15:52 +10:00
Dean Sheather c2414d5287 chore: backport release freeze workflow to 2.27 (#20132)
Relates to https://github.com/coder/dogfood/pull/189
Relates to https://github.com/coder/internal/issues/1021

- Adds new script `scripts/should_deploy.sh` which implements the
algorithm in the linked issue
- Changes the `ci.yaml` workflow to run on release branches
- Moves the deployment steps out of `ci.yaml` into a new workflow
`deploy.yaml` for concurrency limiting purposes
- Changes the behavior of image tag pushing slightly:
    - Versioned tags will no longer have a `main-` prefix
    - `main` branch will still push the `main` and `latest` tags
    - `release/x.y` branches will now push `release-x.y` tags
- The deploy job will exit early if `should_deploy.sh` returns false
- The deploy job will now retag whatever image it's about to deploy as
`dogfood`

(cherry picked from commit e5c8c9bdaf)
2025-10-02 12:45:22 +10:00
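The tag-pushing rules listed in the commit message above boil down to a small branch-based decision. Below is a rough shell sketch of that logic; it is illustrative only — the real behaviour lives in `deploy.yaml` and `scripts/should_deploy.sh` (whose gating algorithm is defined in the linked issue), and the exact versioned-tag format and variable names here are assumptions:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder inputs the workflow would normally supply.
BRANCH="${1:?usage: $0 <branch> <version>}"   # e.g. main or release/2.27
VERSION="${2:?usage: $0 <branch> <version>}"  # e.g. 2.27.3

tags=()
case "$BRANCH" in
  main)
    # main keeps pushing the moving tags.
    tags+=("main" "latest")
    ;;
  release/*)
    # release/x.y branches now push release-x.y instead of a main-* tag.
    tags+=("release-${BRANCH#release/}")
    ;;
esac

# Versioned tags no longer carry a "main-" prefix (exact format assumed here).
tags+=("v${VERSION}")

printf 'would push: ghcr.io/coder/coder-preview:%s\n' "${tags[@]}"
```

The deploy job itself would first consult `should_deploy.sh` and exit early on a false result, then retag whichever image it is about to deploy as `dogfood`, per the last two bullets above.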
Dean Sheather ff69ed69df chore: backport various downgrades from main (#20133)
JS downgrades + make `changes` a required job

---------

Co-authored-by: Bruno Quaresma <bruno@coder.com>
Co-authored-by: ケイラ <mckayla@hey.com>
Co-authored-by: Ethan <39577870+ethanndickson@users.noreply.github.com>
2025-10-02 12:44:43 +10:00
1000 changed files with 17359 additions and 61095 deletions
+2 -10
@@ -91,9 +91,6 @@
## Systematic Debugging Approach
YOU MUST ALWAYS find the root cause of any issue you are debugging
YOU MUST NEVER fix a symptom or add a workaround instead of finding a root cause, even if it is faster.
### Multi-Issue Problem Solving
When facing multiple failing tests or complex integration issues:
@@ -101,21 +98,16 @@ When facing multiple failing tests or complex integration issues:
1. **Identify Root Causes**:
- Run failing tests individually to isolate issues
- Use LSP tools to trace through call chains
- Read Error Messages Carefully: Check both compilation and runtime errors
- Reproduce Consistently: Ensure you can reliably reproduce the issue before investigating
- Check Recent Changes: What changed that could have caused this? Git diff, recent commits, etc.
- When You Don't Know: Say "I don't understand X" rather than pretending to know
- Check both compilation and runtime errors
2. **Fix in Logical Order**:
- Address compilation issues first (imports, syntax)
- Fix authorization and RBAC issues next
- Resolve business logic and validation issues
- Handle edge cases and race conditions last
- IF your first fix doesn't work, STOP and re-analyze rather than adding more fixes
3. **Verification Strategy**:
- Always Test each fix individually before moving to next issue
- Verify Before Continuing: Did your test work? If not, form new hypothesis - don't add more fixes
- Test each fix individually before moving to next issue
- Use `make lint` and `make gen` after database changes
- Verify RFC compliance with actual specifications
- Run comprehensive test suites before considering complete
+3 -7
@@ -40,15 +40,11 @@
- Use proper error types
- Pattern: `xerrors.Errorf("failed to X: %w", err)`
## Naming Conventions
### Naming Conventions
- Names MUST tell what code does, not how it's implemented or its history
- Follow Go and TypeScript naming conventions
- When changing code, never document the old behavior or the behavior change
- NEVER use implementation details in names (e.g., "ZodValidator", "MCPWrapper", "JSONParser")
- NEVER use temporal/historical context in names (e.g., "LegacyHandler", "UnifiedTool", "ImprovedInterface", "EnhancedParser")
- NEVER use pattern names unless they add clarity (e.g., prefer "Tool" over "ToolFactory")
- Use clear, descriptive names
- Abbreviate only when obvious
- Follow Go and TypeScript naming conventions
### Comments
+2 -6
@@ -10,12 +10,8 @@ install_devcontainer_cli() {
install_ssh_config() {
echo "🔑 Installing SSH configuration..."
if [ -d /mnt/home/coder/.ssh ]; then
rsync -a /mnt/home/coder/.ssh/ ~/.ssh/
chmod 0700 ~/.ssh
else
echo "⚠️ SSH directory not found."
fi
rsync -a /mnt/home/coder/.ssh/ ~/.ssh/
chmod 0700 ~/.ssh
}
install_git_config() {
-3
@@ -26,8 +26,5 @@ ignorePatterns:
- pattern: "claude.ai"
- pattern: "splunk.com"
- pattern: "stackoverflow.com/questions"
- pattern: "developer.hashicorp.com/terraform/language"
- pattern: "platform.openai.com/docs/api-reference"
- pattern: "api.openai.com"
aliveStatusCodes:
- 200
+1 -1
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.24.10"
default: "1.24.6"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
+3 -10
@@ -5,13 +5,6 @@ runs:
using: "composite"
steps:
- name: Setup sqlc
# uses: sqlc-dev/setup-sqlc@c0209b9199cd1cce6a14fc27cabcec491b651761 # v4.0.0
# with:
# sqlc-version: "1.30.0"
# Switched to coder/sqlc fork to fix ambiguous column bug, see:
# - https://github.com/coder/sqlc/pull/1
# - https://github.com/sqlc-dev/sqlc/pull/4159
shell: bash
run: |
CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
uses: sqlc-dev/setup-sqlc@c0209b9199cd1cce6a14fc27cabcec491b651761 # v4.0.0
with:
sqlc-version: "1.27.0"
+1 -1
@@ -7,5 +7,5 @@ runs:
- name: Install Terraform
uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
with:
terraform_version: 1.13.4
terraform_version: 1.13.0
terraform_wrapper: false
@@ -0,0 +1,34 @@
app = "sao-paulo-coder"
primary_region = "gru"
[experimental]
entrypoint = ["/bin/sh", "-c", "CODER_DERP_SERVER_RELAY_URL=\"http://[${FLY_PRIVATE_IP}]:3000\" /opt/coder wsproxy server"]
auto_rollback = true
[build]
image = "ghcr.io/coder/coder-preview:main"
[env]
CODER_ACCESS_URL = "https://sao-paulo.fly.dev.coder.com"
CODER_HTTP_ADDRESS = "0.0.0.0:3000"
CODER_PRIMARY_ACCESS_URL = "https://dev.coder.com"
CODER_WILDCARD_ACCESS_URL = "*--apps.sao-paulo.fly.dev.coder.com"
CODER_VERBOSE = "true"
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 0
# Ref: https://fly.io/docs/reference/configuration/#http_service-concurrency
[http_service.concurrency]
type = "requests"
soft_limit = 50
hard_limit = 100
[[vm]]
cpu_kind = "shared"
cpus = 2
memory_mb = 512
+45 -41
@@ -35,7 +35,7 @@ jobs:
tailnet-integration: ${{ steps.filter.outputs.tailnet-integration }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -157,7 +157,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -181,7 +181,7 @@ jobs:
echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"
- name: golangci-lint cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
with:
path: |
${{ env.LINT_CACHE_DIR }}
@@ -191,7 +191,7 @@ jobs:
# Check for any typos
- name: Check for typos
uses: crate-ci/typos@626c4bedb751ce0b7f03262ca97ddda9a076ae1c # v1.39.2
uses: crate-ci/typos@85f62a8a84f939ae994ab3763f01a0296d61a7ee # v1.36.2
with:
config: .github/workflows/typos.toml
@@ -230,12 +230,12 @@ jobs:
shell: bash
gen:
timeout-minutes: 20
timeout-minutes: 8
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
if: ${{ !cancelled() }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -271,7 +271,6 @@ jobs:
popd
- name: make gen
timeout-minutes: 8
run: |
# Remove golden files to detect discrepancy in generated files.
make clean/golden-files
@@ -289,10 +288,10 @@ jobs:
needs: changes
if: needs.changes.outputs.offlinedocs-only == 'false' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
timeout-minutes: 20
timeout-minutes: 7
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -316,7 +315,6 @@ jobs:
run: go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
- name: make fmt
timeout-minutes: 7
run: |
PATH="${PATH}:$(go env GOPATH)/bin" \
make --output-sync -j -B fmt
@@ -343,7 +341,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -378,6 +376,13 @@ jobs:
id: go-paths
uses: ./.github/actions/setup-go-paths
- name: Download Go Build Cache
id: download-go-build-cache
uses: ./.github/actions/test-cache/download
with:
key-prefix: test-go-build-${{ runner.os }}-${{ runner.arch }}
cache-path: ${{ steps.go-paths.outputs.cached-dirs }}
- name: Setup Go
uses: ./.github/actions/setup-go
with:
@@ -385,7 +390,8 @@ jobs:
# download the toolchain configured in go.mod, so we don't
# need to reinstall it. It's faster on Windows runners.
use-preinstalled-go: ${{ runner.os == 'Windows' }}
use-cache: true
# Cache is already downloaded above
use-cache: false
- name: Setup Terraform
uses: ./.github/actions/setup-tf
@@ -494,11 +500,17 @@ jobs:
make test
- name: Upload failed test db dumps
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: failed-test-db-dump-${{matrix.os}}
path: "**/*.test.sql"
- name: Upload Go Build Cache
uses: ./.github/actions/test-cache/upload
with:
cache-key: ${{ steps.download-go-build-cache.outputs.cache-key }}
cache-path: ${{ steps.go-paths.outputs.cached-dirs }}
- name: Upload Test Cache
uses: ./.github/actions/test-cache/upload
with:
@@ -532,7 +544,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -581,7 +593,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -641,7 +653,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -668,7 +680,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -701,7 +713,7 @@ jobs:
name: ${{ matrix.variant.name }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -750,23 +762,15 @@ jobs:
- name: Upload Playwright Failed Tests
if: always() && github.actor != 'dependabot[bot]' && runner.os == 'Linux' && !github.event.pull_request.head.repo.fork
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: failed-test-videos${{ matrix.variant.premium && '-premium' || '' }}
path: ./site/test-results/**/*.webm
retention-days: 7
- name: Upload debug log
if: always() && github.actor != 'dependabot[bot]' && runner.os == 'Linux' && !github.event.pull_request.head.repo.fork
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: coderd-debug-logs${{ matrix.variant.premium && '-premium' || '' }}
path: ./site/e2e/test-results/debug.log
retention-days: 7
- name: Upload pprof dumps
if: always() && github.actor != 'dependabot[bot]' && runner.os == 'Linux' && !github.event.pull_request.head.repo.fork
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: debug-pprof-dumps${{ matrix.variant.premium && '-premium' || '' }}
path: ./site/test-results/**/debug-pprof-*.txt
@@ -781,7 +785,7 @@ jobs:
if: needs.changes.outputs.site == 'true' || needs.changes.outputs.ci == 'true'
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -802,7 +806,7 @@ jobs:
# the check to pass. This is desired in PRs, but not in mainline.
- name: Publish to Chromatic (non-mainline)
if: github.ref != 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@ac86f2ff0a458ffbce7b40698abd44c0fa34d4b6 # v13.3.3
uses: chromaui/action@20c7e42e1b2f6becd5d188df9acb02f3e2f51519 # v13.2.0
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -834,7 +838,7 @@ jobs:
# infinitely "in progress" in mainline unless we re-review each build.
- name: Publish to Chromatic (mainline)
if: github.ref == 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@ac86f2ff0a458ffbce7b40698abd44c0fa34d4b6 # v13.3.3
uses: chromaui/action@20c7e42e1b2f6becd5d188df9acb02f3e2f51519 # v13.2.0
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -862,7 +866,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -933,7 +937,7 @@ jobs:
if: always()
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -1032,7 +1036,7 @@ jobs:
- name: Upload build artifacts
if: ${{ github.repository_owner == 'coder' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/')) }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: dylibs
path: |
@@ -1053,7 +1057,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -1108,7 +1112,7 @@ jobs:
IMAGE: ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -1119,7 +1123,7 @@ jobs:
persist-credentials: false
- name: GHCR Login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -1197,7 +1201,7 @@ jobs:
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
- name: Download dylibs
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: dylibs
path: ./build
@@ -1464,7 +1468,7 @@ jobs:
- name: Upload build artifacts
if: github.ref == 'refs/heads/main'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: coder
path: |
@@ -1505,7 +1509,7 @@ jobs:
if: needs.changes.outputs.db == 'true' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -1533,7 +1537,7 @@ jobs:
steps:
- name: Send Slack notification
run: |
ESCAPED_PROMPT=$(printf "%s" "<@U09LQ75AHKR> $BLINK_CI_FAILURE_PROMPT" | jq -Rsa .)
ESCAPED_PROMPT=$(printf "%s" "<@U08TJ4YNCA3> $BLINK_CI_FAILURE_PROMPT" | jq -Rsa .)
curl -X POST -H 'Content-type: application/json' \
--data '{
"blocks": [
+7 -9
@@ -36,7 +36,7 @@ jobs:
verdict: ${{ steps.check.outputs.verdict }} # DEPLOY or NOOP
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -65,7 +65,7 @@ jobs:
packages: write # to retag image as dogfood
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -76,7 +76,7 @@ jobs:
persist-credentials: false
- name: GHCR Login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -92,7 +92,7 @@ jobs:
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
- name: Set up Flux CLI
uses: fluxcd/flux2/action@b6e76ca2534f76dcb8dd94fb057cdfa923c3b641 # v2.7.3
uses: fluxcd/flux2/action@6bf37f6a560fd84982d67f853162e4b3c2235edb # v2.6.4
with:
# Keep this and the github action up to date with the version of flux installed in dogfood cluster
version: "2.7.0"
@@ -121,8 +121,6 @@ jobs:
flux --namespace flux-system reconcile source chart coder-coder-provisioner
flux --namespace coder reconcile helmrelease coder
flux --namespace coder reconcile helmrelease coder-provisioner
flux --namespace coder reconcile helmrelease coder-provisioner-tagged
flux --namespace coder reconcile helmrelease coder-provisioner-tagged-prebuilds
# Just updating Flux is usually not enough. The Helm release may get
# redeployed, but unless something causes the Deployment to update the
@@ -138,15 +136,13 @@ jobs:
kubectl --namespace coder rollout status deployment/coder-provisioner
kubectl --namespace coder rollout restart deployment/coder-provisioner-tagged
kubectl --namespace coder rollout status deployment/coder-provisioner-tagged
kubectl --namespace coder rollout restart deployment/coder-provisioner-tagged-prebuilds
kubectl --namespace coder rollout status deployment/coder-provisioner-tagged-prebuilds
deploy-wsproxies:
runs-on: ubuntu-latest
needs: deploy
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -163,10 +159,12 @@ jobs:
run: |
flyctl deploy --image "$IMAGE" --app paris-coder --config ./.github/fly-wsproxies/paris-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_PARIS" --yes
flyctl deploy --image "$IMAGE" --app sydney-coder --config ./.github/fly-wsproxies/sydney-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_SYDNEY" --yes
flyctl deploy --image "$IMAGE" --app sao-paulo-coder --config ./.github/fly-wsproxies/sao-paulo-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_SAO_PAULO" --yes
flyctl deploy --image "$IMAGE" --app jnb-coder --config ./.github/fly-wsproxies/jnb-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_JNB" --yes
env:
FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
IMAGE: ${{ inputs.image }}
TOKEN_PARIS: ${{ secrets.FLY_PARIS_CODER_PROXY_SESSION_TOKEN }}
TOKEN_SYDNEY: ${{ secrets.FLY_SYDNEY_CODER_PROXY_SESSION_TOKEN }}
TOKEN_SAO_PAULO: ${{ secrets.FLY_SAO_PAULO_CODER_PROXY_SESSION_TOKEN }}
TOKEN_JNB: ${{ secrets.FLY_JNB_CODER_PROXY_SESSION_TOKEN }}
-205
@@ -1,205 +0,0 @@
# This workflow checks if a PR requires documentation updates.
# It creates a Coder Task that uses AI to analyze the PR changes,
# search existing docs, and comment with recommendations.
#
# Triggered by: Adding the "doc-check" label to a PR, or manual dispatch.
name: AI Documentation Check
on:
pull_request:
types:
- labeled
workflow_dispatch:
inputs:
pr_url:
description: "Pull Request URL to check"
required: true
type: string
template_preset:
description: "Template preset to use"
required: false
default: ""
type: string
jobs:
doc-check:
name: Analyze PR for Documentation Updates Needed
runs-on: ubuntu-latest
if: |
(github.event.label.name == 'doc-check' || github.event_name == 'workflow_dispatch') &&
(github.event.pull_request.draft == false || github.event_name == 'workflow_dispatch')
timeout-minutes: 30
env:
CODER_URL: ${{ secrets.DOC_CHECK_CODER_URL }}
CODER_SESSION_TOKEN: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
permissions:
contents: read
pull-requests: write
actions: write
steps:
- name: Determine PR Context
id: determine-context
env:
GITHUB_ACTOR: ${{ github.actor }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_PR_HTML_URL: ${{ github.event.pull_request.html_url }}
GITHUB_EVENT_PR_NUMBER: ${{ github.event.pull_request.number }}
GITHUB_EVENT_SENDER_ID: ${{ github.event.sender.id }}
GITHUB_EVENT_SENDER_LOGIN: ${{ github.event.sender.login }}
INPUTS_PR_URL: ${{ inputs.pr_url }}
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || '' }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"
# For workflow_dispatch, use the provided PR URL
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
exit 1
fi
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${INPUTS_PR_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${INPUTS_PR_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
# Extract PR number from URL for later use
PR_NUMBER=$(echo "${INPUTS_PR_URL}" | grep -oP '(?<=pull/)\d+')
echo "pr_number=${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
elif [[ "${GITHUB_EVENT_NAME}" == "pull_request" ]]; then
GITHUB_USER_ID=${GITHUB_EVENT_SENDER_ID}
echo "Using label adder: ${GITHUB_EVENT_SENDER_LOGIN} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_EVENT_SENDER_LOGIN}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${GITHUB_EVENT_PR_HTML_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${GITHUB_EVENT_PR_HTML_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
echo "pr_number=${GITHUB_EVENT_PR_NUMBER}" >> "${GITHUB_OUTPUT}"
else
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
exit 1
fi
- name: Extract changed files and build prompt
id: extract-context
env:
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Analyzing PR #${PR_NUMBER}"
# Build task prompt - using unquoted heredoc so variables expand
TASK_PROMPT=$(cat <<EOF
Review PR #${PR_NUMBER} and determine if documentation needs updating or creating.
PR URL: ${PR_URL}
WORKFLOW:
1. Setup (repo is pre-cloned at ~/coder)
cd ~/coder
git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER}
git checkout pr-${PR_NUMBER}
2. Get PR info
Use GitHub MCP tools to get PR title, body, and diff
Or use: git diff main...pr-${PR_NUMBER}
3. Understand Changes
Read the diff and identify what changed
Ask: Is this user-facing? Does it change behavior? Is it a new feature?
4. Search for Related Docs
cat ~/coder/docs/manifest.json | jq '.routes[] | {title, path}' | head -50
grep -ri "relevant_term" ~/coder/docs/ --include="*.md"
5. Decide
NEEDS DOCS if: New feature, API change, CLI change, behavior change, user-visible
NO DOCS if: Internal refactor, test-only, already documented, non-user-facing, dependency updates
FIRST check: Did this PR already update docs? If yes and complete, say "No Changes Needed"
6. Comment on the PR using this format
COMMENT FORMAT:
## 📚 Documentation Check
### ✅ Updates Needed
- **[docs/path/file.md](github_link)** - Brief what needs changing
### 📝 New Docs Needed
- **docs/suggested/location.md** - What should be documented
### ✨ No Changes Needed
[Reason: Documents already updated in PR | Internal changes only | Test-only | No user-facing impact]
---
*This comment was generated by an AI Agent through [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
DOCS STRUCTURE:
Read ~/coder/docs/manifest.json for the complete documentation structure.
Common areas include: reference/, admin/, user-guides/, ai-coder/, install/, tutorials/
But check manifest.json - it has everything.
EOF
)
# Output the prompt
{
echo "task_prompt<<EOFOUTPUT"
echo "${TASK_PROMPT}"
echo "EOFOUTPUT"
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
persist-credentials: false
ref: main
repository: coder/create-task-action
- name: Create Coder Task for Documentation Check
id: create_task
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.DOC_CHECK_CODER_URL }}
coder-token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
coder-organization: "default"
coder-template-name: coder
coder-template-preset: ${{ steps.determine-context.outputs.template_preset }}
coder-task-name-prefix: doc-check
coder-task-prompt: ${{ steps.extract-context.outputs.task_prompt }}
github-user-id: ${{ steps.determine-context.outputs.github_user_id }}
github-token: ${{ github.token }}
github-issue-url: ${{ steps.determine-context.outputs.pr_url }}
comment-on-issue: true
- name: Write outputs
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_URL: ${{ steps.create_task.outputs.task-url }}
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
run: |
{
echo "## Documentation Check Task"
echo ""
echo "**PR:** ${PR_URL}"
echo "**Task created:** ${TASK_CREATED}"
echo "**Task name:** ${TASK_NAME}"
echo "**Task URL:** ${TASK_URL}"
echo ""
echo "The Coder task is analyzing the PR changes and will comment with documentation recommendations."
} >> "${GITHUB_STEP_SUMMARY}"
+2 -2
@@ -38,7 +38,7 @@ jobs:
if: github.repository_owner == 'coder'
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -48,7 +48,7 @@ jobs:
persist-credentials: false
- name: Docker login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.actor }}
+1 -1
@@ -30,7 +30,7 @@ jobs:
- name: Setup Node
uses: ./.github/actions/setup-node
- uses: tj-actions/changed-files@70069877f29101175ed2b055d210fe8b1d54d7d7 # v45.0.7
- uses: tj-actions/changed-files@4563c729c555b4141fac99c80f699f571219b836 # v45.0.7
id: changed-files
with:
files: |
+5 -5
@@ -26,7 +26,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-4' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -36,11 +36,11 @@ jobs:
persist-credentials: false
- name: Setup Nix
uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
uses: nixbuild/nix-quick-install-action@1f095fee853b33114486cfdeae62fa099cda35a9 # v33
with:
# Pinning to 2.28 here, as Nix gets a "error: [json.exception.type_error.302] type must be array, but is string"
# on version 2.29 and above.
nix_version: "2.28.5"
nix_version: "2.28.4"
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
@@ -82,7 +82,7 @@ jobs:
- name: Login to DockerHub
if: github.ref == 'refs/heads/main'
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
@@ -125,7 +125,7 @@ jobs:
id-token: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
+2 -2
@@ -27,7 +27,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -170,7 +170,7 @@ jobs:
steps:
- name: Send Slack notification
run: |
ESCAPED_PROMPT=$(printf "%s" "<@U09LQ75AHKR> $BLINK_CI_FAILURE_PROMPT" | jq -Rsa .)
ESCAPED_PROMPT=$(printf "%s" "<@U08TJ4YNCA3> $BLINK_CI_FAILURE_PROMPT" | jq -Rsa .)
curl -X POST -H 'Content-type: application/json' \
--data '{
"blocks": [
+1 -1
@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
+1 -1
@@ -19,7 +19,7 @@ jobs:
packages: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
+10 -10
@@ -39,7 +39,7 @@ jobs:
PR_OPEN: ${{ steps.check_pr.outputs.pr_open }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -76,7 +76,7 @@ jobs:
runs-on: "ubuntu-latest"
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -184,12 +184,12 @@ jobs:
pull-requests: write # needed for commenting on PRs
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
- name: Find Comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: fc
with:
issue-number: ${{ needs.get_info.outputs.PR_NUMBER }}
@@ -199,7 +199,7 @@ jobs:
- name: Comment on PR
id: comment_id
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.fc.outputs.comment-id }}
issue-number: ${{ needs.get_info.outputs.PR_NUMBER }}
@@ -228,7 +228,7 @@ jobs:
CODER_IMAGE_TAG: ${{ needs.get_info.outputs.CODER_IMAGE_TAG }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -248,7 +248,7 @@ jobs:
uses: ./.github/actions/setup-sqlc
- name: GHCR Login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -288,7 +288,7 @@ jobs:
PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -491,7 +491,7 @@ jobs:
PASSWORD: ${{ steps.setup_deployment.outputs.password }}
- name: Find Comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: fc
with:
issue-number: ${{ env.PR_NUMBER }}
@@ -500,7 +500,7 @@ jobs:
direction: last
- name: Comment on PR
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
env:
STATUS: ${{ needs.get_info.outputs.NEW == 'true' && 'Created' || 'Updated' }}
with:
+1 -1
@@ -14,7 +14,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
+10 -10
@@ -131,7 +131,7 @@ jobs:
AC_CERTIFICATE_PASSWORD_FILE: /tmp/apple_cert_password.txt
- name: Upload build artifacts
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: dylibs
path: |
@@ -164,7 +164,7 @@ jobs:
version: ${{ steps.version.outputs.version }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -239,7 +239,7 @@ jobs:
cat "$CODER_RELEASE_NOTES_FILE"
- name: Docker Login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -327,7 +327,7 @@ jobs:
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
- name: Download dylibs
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
with:
name: dylibs
path: ./build
@@ -761,7 +761,7 @@ jobs:
- name: Upload artifacts to actions (if dry-run)
if: ${{ inputs.dry_run }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: release-artifacts
path: |
@@ -777,7 +777,7 @@ jobs:
- name: Upload latest sbom artifact to actions (if dry-run)
if: inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: latest-sbom-artifact
path: ./coder_latest_sbom.spdx.json
@@ -785,7 +785,7 @@ jobs:
- name: Send repository-dispatch event
if: ${{ !inputs.dry_run }}
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
with:
token: ${{ secrets.CDRCI_GITHUB_TOKEN }}
repository: coder/packages
@@ -802,7 +802,7 @@ jobs:
# TODO: skip this if it's not a new release (i.e. a backport). This is
# fine right now because it just makes a PR that we can close.
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -878,7 +878,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -971,7 +971,7 @@ jobs:
if: ${{ !inputs.dry_run }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
+4 -4
@@ -20,7 +20,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -30,7 +30,7 @@ jobs:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
@@ -39,7 +39,7 @@ jobs:
# Upload the results as artifacts.
- name: "Upload artifact"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: SARIF file
path: results.sarif
@@ -47,6 +47,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
uses: github/codeql-action/upload-sarif@192325c86100d080feab897ff886c34abd4c83a3 # v3.29.5
with:
sarif_file: results.sarif
+6 -6
@@ -27,7 +27,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -40,7 +40,7 @@ jobs:
uses: ./.github/actions/setup-go
- name: Initialize CodeQL
uses: github/codeql-action/init@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
uses: github/codeql-action/init@192325c86100d080feab897ff886c34abd4c83a3 # v3.29.5
with:
languages: go, javascript
@@ -50,7 +50,7 @@ jobs:
rm Makefile
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
uses: github/codeql-action/analyze@192325c86100d080feab897ff886c34abd4c83a3 # v3.29.5
- name: Send Slack notification on failure
if: ${{ failure() }}
@@ -69,7 +69,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -154,13 +154,13 @@ jobs:
severity: "CRITICAL,HIGH"
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
uses: github/codeql-action/upload-sarif@192325c86100d080feab897ff886c34abd4c83a3 # v3.29.5
with:
sarif_file: trivy-results.sarif
category: "Trivy"
- name: Upload Trivy scan results as an artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: trivy
path: trivy-results.sarif
+6 -6
@@ -18,12 +18,12 @@ jobs:
pull-requests: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
- name: stale
uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
with:
stale-issue-label: "stale"
stale-pr-label: "stale"
@@ -96,7 +96,7 @@ jobs:
contents: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -120,12 +120,12 @@ jobs:
actions: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
- name: Delete PR Cleanup workflow runs
uses: Mattraks/delete-workflow-runs@5bf9a1dac5c4d041c029f0a8370ddf0c5cb5aeb7 # v2.1.0
uses: Mattraks/delete-workflow-runs@39f0bbed25d76b34de5594dceab824811479e5de # v2.0.6
with:
token: ${{ github.token }}
repository: ${{ github.repository }}
@@ -134,7 +134,7 @@ jobs:
delete_workflow_pattern: pr-cleanup.yaml
- name: Delete PR Deploy workflow skipped runs
uses: Mattraks/delete-workflow-runs@5bf9a1dac5c4d041c029f0a8370ddf0c5cb5aeb7 # v2.1.0
uses: Mattraks/delete-workflow-runs@39f0bbed25d76b34de5594dceab824811479e5de # v2.0.6
with:
token: ${{ github.token }}
repository: ${{ github.repository }}
+133 -138
@@ -1,9 +1,6 @@
name: AI Triage Automation
on:
issues:
types:
- labeled
workflow_dispatch:
inputs:
issue_url:
@@ -13,178 +10,176 @@ on:
template_name:
description: "Coder template to use for workspace"
required: true
default: "coder"
default: "traiage"
type: string
template_preset:
description: "Template preset to use"
required: false
default: ""
required: true
default: "Default"
type: string
prefix:
description: "Prefix for workspace name"
required: false
default: "traiage"
type: string
cleanup:
description: "Cleanup workspace after triage."
required: false
default: false
type: boolean
jobs:
traiage:
name: Triage GitHub Issue with Claude Code
runs-on: ubuntu-latest
if: github.event.label.name == 'traiage' || github.event_name == 'workflow_dispatch'
timeout-minutes: 30
env:
CODER_URL: ${{ secrets.TRAIAGE_CODER_URL }}
CODER_SESSION_TOKEN: ${{ secrets.TRAIAGE_CODER_SESSION_TOKEN }}
TEMPLATE_NAME: ${{ inputs.template_name }}
permissions:
contents: read
issues: write
actions: write
steps:
# This is only required for testing locally using nektos/act, so leaving commented out.
# An alternative is to use a larger or custom image.
# - name: Install Github CLI
# id: install-gh
# run: |
# (type -p wget >/dev/null || (sudo apt update && sudo apt install wget -y)) \
# && sudo mkdir -p -m 755 /etc/apt/keyrings \
# && out=$(mktemp) && wget -nv -O$out https://cli.github.com/packages/githubcli-archive-keyring.gpg \
# && cat $out | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
# && sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
# && sudo mkdir -p -m 755 /etc/apt/sources.list.d \
# && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
# && sudo apt update \
# && sudo apt install gh -y
- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
fetch-depth: 0
- name: Determine Inputs
id: determine-inputs
if: always()
env:
GITHUB_ACTOR: ${{ github.actor }}
GITHUB_EVENT_ISSUE_HTML_URL: ${{ github.event.issue.html_url }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_USER_ID: ${{ github.event.sender.id }}
GITHUB_EVENT_USER_LOGIN: ${{ github.event.sender.login }}
INPUTS_ISSUE_URL: ${{ inputs.issue_url }}
INPUTS_TEMPLATE_NAME: ${{ inputs.template_name || 'coder' }}
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || ''}}
INPUTS_PREFIX: ${{ inputs.prefix || 'traiage' }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Using template name: ${INPUTS_TEMPLATE_NAME}"
echo "template_name=${INPUTS_TEMPLATE_NAME}" >> "${GITHUB_OUTPUT}"
echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"
echo "Using prefix: ${INPUTS_PREFIX}"
echo "prefix=${INPUTS_PREFIX}" >> "${GITHUB_OUTPUT}"
# For workflow_dispatch, use the actor who triggered it
# For issues events, use the issue author.
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
exit 1
fi
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"
echo "Using issue URL: ${INPUTS_ISSUE_URL}"
echo "issue_url=${INPUTS_ISSUE_URL}" >> "${GITHUB_OUTPUT}"
exit 0
elif [[ "${GITHUB_EVENT_NAME}" == "issues" ]]; then
GITHUB_USER_ID=${GITHUB_EVENT_USER_ID}
echo "Using issue author: ${GITHUB_EVENT_USER_LOGIN} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_EVENT_USER_LOGIN}" >> "${GITHUB_OUTPUT}"
echo "Using issue URL: ${GITHUB_EVENT_ISSUE_HTML_URL}"
echo "issue_url=${GITHUB_EVENT_ISSUE_HTML_URL}" >> "${GITHUB_OUTPUT}"
exit 0
else
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
exit 1
fi
- name: Verify push access
env:
GITHUB_REPOSITORY: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
GITHUB_USERNAME: ${{ steps.determine-inputs.outputs.github_username }}
GITHUB_USER_ID: ${{ steps.determine-inputs.outputs.github_user_id }}
run: |
# Query the actors permission on this repo
can_push="$(gh api "/repos/${GITHUB_REPOSITORY}/collaborators/${GITHUB_USERNAME}/permission" --jq '.user.permissions.push')"
if [[ "${can_push}" != "true" ]]; then
echo "::error title=Access Denied::${GITHUB_USERNAME} does not have push access to ${GITHUB_REPOSITORY}"
exit 1
fi
- name: Extract context key and description from issue
- name: Extract context key from issue
id: extract-context
env:
ISSUE_URL: ${{ steps.determine-inputs.outputs.issue_url }}
GH_TOKEN: ${{ github.token }}
ISSUE_URL: ${{ inputs.issue_url }}
GITHUB_TOKEN: ${{ github.token }}
run: |
issue_number="$(gh issue view "${ISSUE_URL}" --json number --jq '.number')"
context_key="gh-${issue_number}"
echo "context_key=${context_key}" >> "${GITHUB_OUTPUT}"
echo "CONTEXT_KEY=${context_key}" >> "${GITHUB_ENV}"
TASK_PROMPT=$(cat <<EOF
Fix ${ISSUE_URL}
- name: Download and install Coder binary
shell: bash
env:
CODER_URL: ${{ secrets.TRAIAGE_CODER_URL }}
run: |
if [ "${{ runner.arch }}" == "ARM64" ]; then
ARCH="arm64"
else
ARCH="amd64"
fi
mkdir -p "${HOME}/.local/bin"
curl -fsSL --compressed "$CODER_URL/bin/coder-linux-${ARCH}" -o "${HOME}/.local/bin/coder"
chmod +x "${HOME}/.local/bin/coder"
export PATH="$HOME/.local/bin:$PATH"
coder version
coder whoami
echo "$HOME/.local/bin" >> "${GITHUB_PATH}"
1. Use the gh CLI to read the issue description and comments.
2. Think carefully and try to understand the root cause. If the issue is unclear or not well defined, ask me to clarify and provide more information.
3. Write a proposed implementation plan to PLAN.md for me to review before starting implementation. Your plan should use TDD and only make the minimal changes necessary to fix the root cause.
4. When I approve your plan, start working on it. If you encounter issues with the plan, ask me for clarification and update the plan as required.
5. When you have finished implementation according to the plan, commit and push your changes, and create a PR using the gh CLI for me to review.
- name: Get Coder username from GitHub actor
id: get-coder-username
env:
CODER_SESSION_TOKEN: ${{ secrets.TRAIAGE_CODER_SESSION_TOKEN }}
GITHUB_USER_ID: ${{
(github.event_name == 'workflow_dispatch' && github.actor_id)
}}
run: |
[[ -z "${GITHUB_USER_ID}" || "${GITHUB_USER_ID}" == "null" ]] && echo "No GitHub actor ID found" && exit 1
user_json=$(
coder users list --github-user-id="${GITHUB_USER_ID}" --output=json
)
coder_username=$(jq -r 'first | .username' <<< "$user_json")
[[ -z "${coder_username}" || "${coder_username}" == "null" ]] && echo "No Coder user with GitHub user ID ${GITHUB_USER_ID} found" && exit 1
echo "coder_username=${coder_username}" >> "${GITHUB_OUTPUT}"
# TODO(Cian): this is a good use-case for 'recipes'
- name: Create Coder task
id: create-task
env:
CODER_USERNAME: ${{ steps.get-coder-username.outputs.coder_username }}
CONTEXT_KEY: ${{ steps.extract-context.outputs.context_key }}
GITHUB_TOKEN: ${{ github.token }}
ISSUE_URL: ${{ inputs.issue_url }}
PREFIX: ${{ inputs.prefix }}
RUN_ID: ${{ github.run_id }}
TEMPLATE_PARAMETERS: ${{ secrets.TRAIAGE_TEMPLATE_PARAMETERS }}
TEMPLATE_PRESET: ${{ inputs.template_preset }}
run: |
# Fetch issue description using `gh` CLI
issue_description=$(gh issue view "${ISSUE_URL}")
# Write a prompt to PROMPT_FILE
PROMPT=$(cat <<EOF
Analyze the below GitHub issue description, understand the root cause, and make appropriate changes to resolve the issue.
ISSUE URL: ${ISSUE_URL}
ISSUE DESCRIPTION BELOW:
${issue_description}
EOF
)
export PROMPT
echo "context_key=${context_key}" >> "${GITHUB_OUTPUT}"
{
echo "TASK_PROMPT<<EOF"
echo "${TASK_PROMPT}"
echo "EOF"
} >> "${GITHUB_OUTPUT}"
export TASK_NAME="${PREFIX}-${CONTEXT_KEY}-${RUN_ID}"
echo "Creating task: $TASK_NAME"
./scripts/traiage.sh create
coder exp task status "${CODER_USERNAME}/$TASK_NAME" --watch
echo "TASK_NAME=${CODER_USERNAME}/${TASK_NAME}" >> "${GITHUB_OUTPUT}"
echo "TASK_NAME=${CODER_USERNAME}/${TASK_NAME}" >> "${GITHUB_ENV}"
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
persist-credentials: false
ref: main
repository: coder/create-task-action
- name: Create Coder Task
id: create_task
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.TRAIAGE_CODER_URL }}
coder-token: ${{ secrets.TRAIAGE_CODER_SESSION_TOKEN }}
coder-organization: "default"
coder-template-name: coder
coder-template-preset: ${{ steps.determine-inputs.outputs.template_preset }}
coder-task-name-prefix: gh-coder
coder-task-prompt: ${{ steps.extract-context.outputs.task_prompt }}
github-user-id: ${{ steps.determine-inputs.outputs.github_user_id }}
github-token: ${{ github.token }}
github-issue-url: ${{ steps.determine-inputs.outputs.issue_url }}
comment-on-issue: ${{ startsWith(steps.determine-inputs.outputs.issue_url, format('{0}/{1}', github.server_url, github.repository)) }}
- name: Write outputs
- name: Create and upload archive
id: create-archive
if: inputs.cleanup
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_URL: ${{ steps.create_task.outputs.task-url }}
BUCKET_PREFIX: "gs://coder-traiage-outputs/traiage"
run: |
echo "Creating archive for workspace: $TASK_NAME"
./scripts/traiage.sh archive
echo "archive_url=${BUCKET_PREFIX%%/}/$TASK_NAME.tar.gz" >> "${GITHUB_OUTPUT}"
- name: Generate a summary of the changes and post a comment on GitHub.
id: generate-summary
if: inputs.cleanup
env:
ARCHIVE_URL: ${{ steps.create-archive.outputs.archive_url }}
BUCKET_PREFIX: "gs://coder-traiage-outputs/traiage"
CONTEXT_KEY: ${{ steps.extract-context.outputs.context_key }}
GITHUB_TOKEN: ${{ github.token }}
GITHUB_REPOSITORY: ${{ github.repository }}
ISSUE_URL: ${{ inputs.issue_url }}
TASK_NAME: ${{ steps.create-task.outputs.TASK_NAME }}
run: |
SUMMARY_FILE=$(mktemp)
trap 'rm -f "${SUMMARY_FILE}"' EXIT
AUTO_SUMMARY=$(./scripts/traiage.sh summary)
{
echo "**Task created:** ${TASK_CREATED}"
echo "**Task name:** ${TASK_NAME}"
echo "**Task URL**: ${TASK_URL}"
} >> "${GITHUB_STEP_SUMMARY}"
echo "## TrAIage Results"
echo "- **Issue URL:** ${ISSUE_URL}"
echo "- **Context Key:** ${CONTEXT_KEY}"
echo "- **Workspace:** ${TASK_NAME}"
echo "- **Archive URL:** ${ARCHIVE_URL}"
echo
echo "${AUTO_SUMMARY}"
echo
echo "To fetch the output to your own workspace:"
echo
echo '```bash'
echo "BUCKET_PREFIX=${BUCKET_PREFIX} TASK_NAME=${TASK_NAME} ./scripts/traiage.sh resume"
echo '```'
echo
} >> "${SUMMARY_FILE}"
if [[ "${ISSUE_URL}" == "https://github.com/${GITHUB_REPOSITORY}"* ]]; then
gh issue comment "${ISSUE_URL}" --body-file "${SUMMARY_FILE}" --create-if-none --edit-last
else
echo "Skipping comment on other repo."
fi
cat "${SUMMARY_FILE}" >> "${GITHUB_STEP_SUMMARY}"
- name: Cleanup task
if: inputs.cleanup && steps.create-task.outputs.TASK_NAME != '' && steps.create-archive.outputs.archive_url != ''
run: |
echo "Cleaning up task: $TASK_NAME"
./scripts/traiage.sh delete || true
-1
@@ -9,7 +9,6 @@ IST = "IST"
MacOS = "macOS"
AKS = "AKS"
O_WRONLY = "O_WRONLY"
AIBridge = "AI Bridge"
[default.extend-words]
AKS = "AKS"
+2 -2
@@ -21,7 +21,7 @@ jobs:
pull-requests: write # required to post PR review comments by the action
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
with:
egress-policy: audit
@@ -31,7 +31,7 @@ jobs:
persist-credentials: false
- name: Check Markdown links
uses: umbrelladocs/action-linkspector@652f85bc57bb1e7d4327260decc10aa68f7694c3 # v1.4.0
uses: umbrelladocs/action-linkspector@874d01cae9fd488e3077b08952093235bd626977 # v1.3.7
id: markdown-link-check
# checks all markdown files from /docs including all subfolders
with:
-8
@@ -12,9 +12,6 @@ node_modules/
vendor/
yarn-error.log
# Test output files
test-output/
# VSCode settings.
**/.vscode/*
# Allow VSCode recommendations and default settings in project root.
@@ -89,8 +86,3 @@ result
__debug_bin*
**/.claude/settings.local.json
/.env
# Ignore plans written by AI agents.
PLAN.md
+1 -11
@@ -169,16 +169,6 @@ linters-settings:
- name: var-declaration
- name: var-naming
- name: waitgroup-by-value
usetesting:
# Only os-setenv is enabled because we migrated to usetesting from another linter that
# only covered os-setenv.
os-setenv: true
os-create-temp: false
os-mkdir-temp: false
os-temp-dir: false
os-chdir: false
context-background: false
context-todo: false
# irrelevant as of Go v1.22: https://go.dev/blog/loopvar-preview
govet:
@@ -262,6 +252,7 @@ linters:
# - wastedassign
- staticcheck
- tenv
# In Go, it's possible for a package to test its internal functionality
# without testing any exported functions. This is enabled to promote
# decomposing a package before testing its internals. A function caller
@@ -274,5 +265,4 @@ linters:
- typecheck
- unconvert
- unused
- usetesting
- dupl
+1 -2
@@ -61,6 +61,5 @@
"typos.config": ".github/workflows/typos.toml",
"[markdown]": {
"editor.defaultFormatter": "DavidAnson.vscode-markdownlint"
},
"biome.lsp.bin": "site/node_modules/.bin/biome"
}
}
+19 -40
@@ -1,41 +1,11 @@
# Coder Development Guidelines
You are an experienced, pragmatic software engineer. You don't over-engineer a solution when a simple one is possible.
Rule #1: If you want exception to ANY rule, YOU MUST STOP and get explicit permission first. BREAKING THE LETTER OR SPIRIT OF THE RULES IS FAILURE.
## Foundational rules
- Doing it right is better than doing it fast. You are not in a rush. NEVER skip steps or take shortcuts.
- Tedious, systematic work is often the correct solution. Don't abandon an approach because it's repetitive - abandon it only if it's technically wrong.
- Honesty is a core value.
## Our relationship
- Act as a critical peer reviewer. Your job is to disagree with me when I'm wrong, not to please me. Prioritize accuracy and reasoning over agreement.
- YOU MUST speak up immediately when you don't know something or we're in over our heads
- YOU MUST call out bad ideas, unreasonable expectations, and mistakes - I depend on this
- NEVER be agreeable just to be nice - I NEED your HONEST technical judgment
- NEVER write the phrase "You're absolutely right!" You are not a sycophant. We're working together because I value your opinion. Do not agree with me unless you can justify it with evidence or reasoning.
- YOU MUST ALWAYS STOP and ask for clarification rather than making assumptions.
- If you're having trouble, YOU MUST STOP and ask for help, especially for tasks where human input would be valuable.
- When you disagree with my approach, YOU MUST push back. Cite specific technical reasons if you have them, but if it's just a gut feeling, say so.
- If you're uncomfortable pushing back out loud, just say "Houston, we have a problem". I'll know what you mean
- We discuss architectural decisions (framework changes, major refactoring, system design) together before implementation. Routine fixes and clear implementations don't need discussion.
## Proactiveness
When asked to do something, just do it - including obvious follow-up actions needed to complete the task properly.
Only pause to ask for confirmation when:
- Multiple valid approaches exist and the choice matters
- The action would delete or significantly restructure existing code
- You genuinely don't understand what's being asked
- Your partner asked a question (answer the question, don't jump to implementation)
@.claude/docs/WORKFLOWS.md
@.cursorrules
@README.md
@package.json
## Essential Commands
## 🚀 Essential Commands
| Task | Command | Notes |
|-------------------|--------------------------|----------------------------------|
@@ -51,13 +21,22 @@ Only pause to ask for confirmation when:
| **Format** | `make fmt` | Auto-format code |
| **Clean** | `make clean` | Clean build artifacts |
### Frontend Commands (site directory)
- `pnpm build` - Build frontend
- `pnpm dev` - Run development server
- `pnpm check` - Run code checks
- `pnpm format` - Format frontend code
- `pnpm lint` - Lint frontend code
- `pnpm test` - Run frontend tests
### Documentation Commands
- `pnpm run format-docs` - Format markdown tables in docs
- `pnpm run lint-docs` - Lint and fix markdown files
- `pnpm run storybook` - Run Storybook (from site directory)
## Critical Patterns
## 🔧 Critical Patterns
### Database Changes (ALWAYS FOLLOW)
@@ -99,7 +78,7 @@ app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestrict
app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)
```
## Quick Reference
## 📋 Quick Reference
### Full workflows available in imported WORKFLOWS.md
@@ -109,14 +88,14 @@ app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)
- [ ] Check if feature touches database - you'll need migrations
- [ ] Check if feature touches audit logs - update `enterprise/audit/table.go`
## Architecture
## 🏗️ Architecture
- **coderd**: Main API service
- **provisionerd**: Infrastructure provisioning
- **Agents**: Workspace services (SSH, port forwarding)
- **Database**: PostgreSQL with `dbauthz` authorization
## Testing
## 🧪 Testing
### Race Condition Prevention
@@ -133,21 +112,21 @@ app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)
NEVER use `time.Sleep` to mitigate timing issues. If an issue
seems to call for `time.Sleep`, read through https://github.com/coder/quartz, and specifically the [README](https://github.com/coder/quartz/blob/main/README.md), to better understand how to handle timing issues.
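For illustration, a minimal, self-contained sketch of the clock-injection idea behind that rule is below. The `Clock` interface and `fakeClock` here are hypothetical stand-ins, not the real coder/quartz API; see the quartz README for the actual types.

```go
package main

import (
	"fmt"
	"time"
)

// Clock is a hypothetical stand-in for the interface a mockable clock library
// provides; code under test depends on it instead of calling time.Now or
// time.Sleep directly.
type Clock interface {
	Now() time.Time
}

// fakeClock is a test double whose time only moves when Advance is called.
type fakeClock struct{ now time.Time }

func (f *fakeClock) Now() time.Time          { return f.now }
func (f *fakeClock) Advance(d time.Duration) { f.now = f.now.Add(d) }

// expired reports whether deadline has passed according to the injected clock.
func expired(c Clock, deadline time.Time) bool { return c.Now().After(deadline) }

func main() {
	c := &fakeClock{now: time.Unix(0, 0)}
	deadline := c.Now().Add(time.Second)
	fmt.Println(expired(c, deadline)) // false
	c.Advance(2 * time.Second)        // deterministic, no time.Sleep
	fmt.Println(expired(c, deadline)) // true
}
```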
## Code Style
## 🎯 Code Style
### Detailed guidelines in imported WORKFLOWS.md
- Follow [Uber Go Style Guide](https://github.com/uber-go/guide/blob/master/style.md)
- Commit format: `type(scope): message`
## Detailed Development Guides
## 📚 Detailed Development Guides
@.claude/docs/OAUTH2.md
@.claude/docs/TESTING.md
@.claude/docs/TROUBLESHOOTING.md
@.claude/docs/DATABASE.md
## Common Pitfalls
## 🚨 Common Pitfalls
1. **Audit table errors** → Update `enterprise/audit/table.go`
2. **OAuth2 errors** → Return RFC-compliant format
+12
View File
@@ -18,6 +18,18 @@ coderd/rbac/ @Emyrk
scripts/apitypings/ @Emyrk
scripts/gensite/ @aslilac
site/ @aslilac @Parkreiner
site/src/hooks/ @Parkreiner
# These rules intentionally do not specify any owners. More specific rules
# override less specific rules, so these files are "ignored" by the site/ rule.
site/e2e/google/protobuf/timestampGenerated.ts
site/e2e/provisionerGenerated.ts
site/src/api/countriesGenerated.ts
site/src/api/rbacresourcesGenerated.ts
site/src/api/typesGenerated.ts
site/src/testHelpers/entities.ts
site/CLAUDE.md
# The blood and guts of the autostop algorithm, which is quite complex and
# requires elite ball knowledge of most of the scheduling code to make changes
# without inadvertently affecting other parts of the codebase.
+18 -30
View File
@@ -636,17 +636,16 @@ TAILNETTEST_MOCKS := \
tailnet/tailnettest/subscriptionmock.go
AIBRIDGED_MOCKS := \
enterprise/aibridged/aibridgedmock/clientmock.go \
enterprise/aibridged/aibridgedmock/poolmock.go
enterprise/x/aibridged/aibridgedmock/clientmock.go \
enterprise/x/aibridged/aibridgedmock/poolmock.go
GEN_FILES := \
tailnet/proto/tailnet.pb.go \
agent/proto/agent.pb.go \
agent/agentsocket/proto/agentsocket.pb.go \
provisionersdk/proto/provisioner.pb.go \
provisionerd/proto/provisionerd.pb.go \
vpn/vpn.pb.go \
enterprise/aibridged/proto/aibridged.pb.go \
enterprise/x/aibridged/proto/aibridged.pb.go \
$(DB_GEN_FILES) \
$(SITE_GEN_FILES) \
coderd/rbac/object_gen.go \
@@ -677,7 +676,6 @@ gen/db: $(DB_GEN_FILES)
.PHONY: gen/db
gen/golden-files: \
agent/unit/testdata/.gen-golden \
cli/testdata/.gen-golden \
coderd/.gen-golden \
coderd/notifications/.gen-golden \
@@ -697,9 +695,8 @@ gen/mark-fresh:
agent/proto/agent.pb.go \
provisionersdk/proto/provisioner.pb.go \
provisionerd/proto/provisionerd.pb.go \
agent/agentsocket/proto/agentsocket.pb.go \
vpn/vpn.pb.go \
enterprise/aibridged/proto/aibridged.pb.go \
enterprise/x/aibridged/proto/aibridged.pb.go \
coderd/database/dump.sql \
$(DB_GEN_FILES) \
site/src/api/typesGenerated.ts \
@@ -770,8 +767,8 @@ codersdk/workspacesdk/agentconnmock/agentconnmock.go: codersdk/workspacesdk/agen
go generate ./codersdk/workspacesdk/agentconnmock/
touch "$@"
$(AIBRIDGED_MOCKS): enterprise/aibridged/client.go enterprise/aibridged/pool.go
go generate ./enterprise/aibridged/aibridgedmock/
$(AIBRIDGED_MOCKS): enterprise/x/aibridged/client.go enterprise/x/aibridged/pool.go
go generate ./enterprise/x/aibridged/aibridgedmock/
touch "$@"
agent/agentcontainers/dcspec/dcspec_gen.go: \
@@ -802,14 +799,6 @@ agent/proto/agent.pb.go: agent/proto/agent.proto
--go-drpc_opt=paths=source_relative \
./agent/proto/agent.proto
agent/agentsocket/proto/agentsocket.pb.go: agent/agentsocket/proto/agentsocket.proto
protoc \
--go_out=. \
--go_opt=paths=source_relative \
--go-drpc_out=. \
--go-drpc_opt=paths=source_relative \
./agent/agentsocket/proto/agentsocket.proto
provisionersdk/proto/provisioner.pb.go: provisionersdk/proto/provisioner.proto
protoc \
--go_out=. \
@@ -832,13 +821,13 @@ vpn/vpn.pb.go: vpn/vpn.proto
--go_opt=paths=source_relative \
./vpn/vpn.proto
enterprise/aibridged/proto/aibridged.pb.go: enterprise/aibridged/proto/aibridged.proto
enterprise/x/aibridged/proto/aibridged.pb.go: enterprise/x/aibridged/proto/aibridged.proto
protoc \
--go_out=. \
--go_opt=paths=source_relative \
--go-drpc_out=. \
--go-drpc_opt=paths=source_relative \
./enterprise/aibridged/proto/aibridged.proto
./enterprise/x/aibridged/proto/aibridged.proto
site/src/api/typesGenerated.ts: site/node_modules/.installed $(wildcard scripts/apitypings/*) $(shell find ./codersdk $(FIND_EXCLUSIONS) -type f -name '*.go')
# -C sets the directory for the go run command
@@ -963,10 +952,6 @@ clean/golden-files:
-type f -name '*.golden' -delete
.PHONY: clean/golden-files
agent/unit/testdata/.gen-golden: $(wildcard agent/unit/testdata/*.golden) $(GO_SRC_FILES) $(wildcard agent/unit/*_test.go)
TZ=UTC go test ./agent/unit -run="TestGraph" -update
touch "$@"
cli/testdata/.gen-golden: $(wildcard cli/testdata/*.golden) $(wildcard cli/*.tpl) $(GO_SRC_FILES) $(wildcard cli/*_test.go)
TZ=UTC go test ./cli -run="Test(CommandHelp|ServerYAML|ErrorExamples|.*Golden)" -update
touch "$@"
@@ -1035,11 +1020,19 @@ endif
TEST_PACKAGES ?= ./...
test:
warm-go-cache-db-cleaner:
# Ensure Go's build cache for the cleanercmd is fresh so that tests don't have to build it from scratch.
# Building from scratch could take some time and counts against the test's timeout, which can lead to flakes.
# cf. https://github.com/coder/internal/issues/1026
mkdir -p build
$(GIT_FLAGS) go build -o ./build/cleaner github.com/coder/coder/v2/coderd/database/dbtestutil/cleanercmd
.PHONY: warm-go-cache-db-cleaner
test: warm-go-cache-db-cleaner
$(GIT_FLAGS) gotestsum --format standard-quiet $(GOTESTSUM_RETRY_FLAGS) --packages="$(TEST_PACKAGES)" -- $(GOTEST_FLAGS)
.PHONY: test
test-cli:
test-cli: warm-go-cache-db-cleaner
$(MAKE) test TEST_PACKAGES="./cli..."
.PHONY: test-cli
@@ -1192,8 +1185,3 @@ endif
dogfood/coder/nix.hash: flake.nix flake.lock
sha256sum flake.nix flake.lock >./dogfood/coder/nix.hash
# Count the number of test databases created per test package.
count-test-databases:
PGPASSWORD=postgres psql -h localhost -U postgres -d coder_testing -P pager=off -c 'SELECT test_package, count(*) as count from test_databases GROUP BY test_package ORDER BY count DESC'
.PHONY: count-test-databases
+3 -14
View File
@@ -781,15 +781,11 @@ func (a *agent) reportConnectionsLoop(ctx context.Context, aAPI proto.DRPCAgentC
logger.Debug(ctx, "reporting connection")
_, err := aAPI.ReportConnection(ctx, payload)
if err != nil {
// Do not fail the loop if we fail to report a connection, just
// log a warning.
// Related to https://github.com/coder/coder/issues/20194
logger.Warn(ctx, "failed to report connection to server", slog.Error(err))
// keep going, we still need to remove it from the slice
} else {
logger.Debug(ctx, "successfully reported connection")
return xerrors.Errorf("failed to report connection: %w", err)
}
logger.Debug(ctx, "successfully reported connection")
// Remove the payload we sent.
a.reportConnectionsMu.Lock()
a.reportConnections[0] = nil // Release the pointer from the underlying array.
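The pattern described in the new comments (warn on failure but still dequeue the payload) can be sketched generically as below; `drainReports`, `pending`, and `report` are hypothetical names, not the agent's actual identifiers.

```go
package main

import (
	"context"
	"errors"
	"log"
)

// drainReports removes every queued item whether or not reporting succeeds,
// so a single failed report cannot stall the loop.
func drainReports(ctx context.Context, pending []*string, report func(context.Context, *string) error) []*string {
	for len(pending) > 0 {
		item := pending[0]
		if err := report(ctx, item); err != nil {
			// Log instead of returning so one failure does not block the queue.
			log.Printf("failed to report connection, continuing: %v", err)
		}
		pending[0] = nil      // release the pointer held by the backing array
		pending = pending[1:] // dequeue regardless of the outcome
	}
	return pending
}

func main() {
	a, b := "ssh", "reconnecting-pty"
	pending := []*string{&a, &b}
	_ = drainReports(context.Background(), pending, func(context.Context, *string) error {
		return errors.New("server unreachable")
	})
}
```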
@@ -820,13 +816,6 @@ func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_T
ip = host
}
// If the IP is "localhost" (which it can be in some cases), set it to
// 127.0.0.1 instead.
// Related to https://github.com/coder/coder/issues/20194
if ip == "localhost" {
ip = "127.0.0.1"
}
a.reportConnectionsMu.Lock()
defer a.reportConnectionsMu.Unlock()
+35 -80
View File
@@ -1807,12 +1807,11 @@ func TestAgent_ReconnectingPTY(t *testing.T) {
//nolint:dogsled
conn, agentClient, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0)
idConnectionReport := uuid.New()
id := uuid.New()
// Test that the connection is reported. This must be tested in the
// first connection because we care about verifying all of these.
netConn0, err := conn.ReconnectingPTY(ctx, idConnectionReport, 80, 80, "bash --norc")
netConn0, err := conn.ReconnectingPTY(ctx, id, 80, 80, "bash --norc")
require.NoError(t, err)
_ = netConn0.Close()
assertConnectionReport(t, agentClient, proto.Connection_RECONNECTING_PTY, 0, "")
@@ -2028,8 +2027,7 @@ func runSubAgentMain() int {
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
req = req.WithContext(ctx)
client := &http.Client{}
resp, err := client.Do(req)
resp, err := http.DefaultClient.Do(req)
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "agent connection failed: %v\n", err)
return 11
@@ -3462,7 +3460,11 @@ func TestAgent_Metrics_SSH(t *testing.T) {
registry := prometheus.NewRegistry()
//nolint:dogsled
conn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, o *agent.Options) {
conn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{
// Make sure we always get a DERP connection for
// currently_reachable_peers.
DisableDirectConnections: true,
}, 0, func(_ *agenttest.Client, o *agent.Options) {
o.PrometheusRegistry = registry
})
@@ -3477,31 +3479,16 @@ func TestAgent_Metrics_SSH(t *testing.T) {
err = session.Shell()
require.NoError(t, err)
expected := []struct {
Name string
Type proto.Stats_Metric_Type
CheckFn func(float64) error
Labels []*proto.Stats_Metric_Label
}{
expected := []*proto.Stats_Metric{
{
Name: "agent_reconnecting_pty_connections_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 0 {
return nil
}
return xerrors.Errorf("expected 0, got %f", v)
},
Name: "agent_reconnecting_pty_connections_total",
Type: proto.Stats_Metric_COUNTER,
Value: 0,
},
{
Name: "agent_sessions_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 1 {
return nil
}
return xerrors.Errorf("expected 1, got %f", v)
},
Name: "agent_sessions_total",
Type: proto.Stats_Metric_COUNTER,
Value: 1,
Labels: []*proto.Stats_Metric_Label{
{
Name: "magic_type",
@@ -3514,44 +3501,24 @@ func TestAgent_Metrics_SSH(t *testing.T) {
},
},
{
Name: "agent_ssh_server_failed_connections_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 0 {
return nil
}
return xerrors.Errorf("expected 0, got %f", v)
},
Name: "agent_ssh_server_failed_connections_total",
Type: proto.Stats_Metric_COUNTER,
Value: 0,
},
{
Name: "agent_ssh_server_sftp_connections_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 0 {
return nil
}
return xerrors.Errorf("expected 0, got %f", v)
},
Name: "agent_ssh_server_sftp_connections_total",
Type: proto.Stats_Metric_COUNTER,
Value: 0,
},
{
Name: "agent_ssh_server_sftp_server_errors_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 0 {
return nil
}
return xerrors.Errorf("expected 0, got %f", v)
},
Name: "agent_ssh_server_sftp_server_errors_total",
Type: proto.Stats_Metric_COUNTER,
Value: 0,
},
{
Name: "coderd_agentstats_currently_reachable_peers",
Type: proto.Stats_Metric_GAUGE,
CheckFn: func(float64) error {
// We can't reliably ping a peer here, and networking is out of
// scope of this test, so we just test that the metric exists
// with the correct labels.
return nil
},
Name: "coderd_agentstats_currently_reachable_peers",
Type: proto.Stats_Metric_GAUGE,
Value: 1,
Labels: []*proto.Stats_Metric_Label{
{
Name: "connection_type",
@@ -3560,11 +3527,9 @@ func TestAgent_Metrics_SSH(t *testing.T) {
},
},
{
Name: "coderd_agentstats_currently_reachable_peers",
Type: proto.Stats_Metric_GAUGE,
CheckFn: func(float64) error {
return nil
},
Name: "coderd_agentstats_currently_reachable_peers",
Type: proto.Stats_Metric_GAUGE,
Value: 0,
Labels: []*proto.Stats_Metric_Label{
{
Name: "connection_type",
@@ -3573,20 +3538,9 @@ func TestAgent_Metrics_SSH(t *testing.T) {
},
},
{
Name: "coderd_agentstats_startup_script_seconds",
Type: proto.Stats_Metric_GAUGE,
CheckFn: func(f float64) error {
if f >= 0 {
return nil
}
return xerrors.Errorf("expected >= 0, got %f", f)
},
Labels: []*proto.Stats_Metric_Label{
{
Name: "success",
Value: "true",
},
},
Name: "coderd_agentstats_startup_script_seconds",
Type: proto.Stats_Metric_GAUGE,
Value: 1,
},
}
@@ -3608,10 +3562,11 @@ func TestAgent_Metrics_SSH(t *testing.T) {
for _, m := range mf.GetMetric() {
assert.Equal(t, expected[i].Name, mf.GetName())
assert.Equal(t, expected[i].Type.String(), mf.GetType().String())
// Value is the maximum expected value; the actual metric may be lower.
if expected[i].Type == proto.Stats_Metric_GAUGE {
assert.NoError(t, expected[i].CheckFn(m.GetGauge().GetValue()), "check fn for %s failed", expected[i].Name)
assert.GreaterOrEqualf(t, expected[i].Value, m.GetGauge().GetValue(), "expected %s to be greater than or equal to %f, got %f", expected[i].Name, expected[i].Value, m.GetGauge().GetValue())
} else if expected[i].Type == proto.Stats_Metric_COUNTER {
assert.NoError(t, expected[i].CheckFn(m.GetCounter().GetValue()), "check fn for %s failed", expected[i].Name)
assert.GreaterOrEqualf(t, expected[i].Value, m.GetCounter().GetValue(), "expected %s to be greater than or equal to %f, got %f", expected[i].Name, expected[i].Value, m.GetCounter().GetValue())
}
for j, lbl := range expected[i].Labels {
assert.Equal(t, m.GetLabel()[j], &promgo.LabelPair{
+2
View File
@@ -682,6 +682,8 @@ func (api *API) updaterLoop() {
} else {
prevErr = nil
}
default:
api.logger.Debug(api.ctx, "updater loop ticker skipped, update in progress")
}
return nil // Always nil to keep the ticker going.
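The added `default:` branch is the standard non-blocking `select` idiom for dropping a tick while earlier work is still in flight. A self-contained sketch of that idiom, with illustrative names rather than the real updaterLoop internals, is below.

```go
package main

import (
	"log"
	"time"
)

func main() {
	inFlight := make(chan struct{}, 1) // capacity 1: at most one update at a time
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()

	for i := 0; i < 5; i++ {
		<-ticker.C
		select {
		case inFlight <- struct{}{}:
			go func() {
				defer func() { <-inFlight }() // free the slot when the update finishes
				time.Sleep(250 * time.Millisecond) // stand-in for the real update work
			}()
		default:
			log.Println("updater loop ticker skipped, update in progress")
		}
	}
}
```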
-968
View File
@@ -1,968 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.30.0
// protoc v4.23.4
// source: agent/agentsocket/proto/agentsocket.proto
package proto
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type PingRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *PingRequest) Reset() {
*x = PingRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PingRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PingRequest) ProtoMessage() {}
func (x *PingRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PingRequest.ProtoReflect.Descriptor instead.
func (*PingRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{0}
}
type PingResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *PingResponse) Reset() {
*x = PingResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PingResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PingResponse) ProtoMessage() {}
func (x *PingResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PingResponse.ProtoReflect.Descriptor instead.
func (*PingResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{1}
}
type SyncStartRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncStartRequest) Reset() {
*x = SyncStartRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStartRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStartRequest) ProtoMessage() {}
func (x *SyncStartRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStartRequest.ProtoReflect.Descriptor instead.
func (*SyncStartRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{2}
}
func (x *SyncStartRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncStartResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncStartResponse) Reset() {
*x = SyncStartResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStartResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStartResponse) ProtoMessage() {}
func (x *SyncStartResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStartResponse.ProtoReflect.Descriptor instead.
func (*SyncStartResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{3}
}
type SyncWantRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
}
func (x *SyncWantRequest) Reset() {
*x = SyncWantRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncWantRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncWantRequest) ProtoMessage() {}
func (x *SyncWantRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncWantRequest.ProtoReflect.Descriptor instead.
func (*SyncWantRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{4}
}
func (x *SyncWantRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
func (x *SyncWantRequest) GetDependsOn() string {
if x != nil {
return x.DependsOn
}
return ""
}
type SyncWantResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncWantResponse) Reset() {
*x = SyncWantResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncWantResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncWantResponse) ProtoMessage() {}
func (x *SyncWantResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncWantResponse.ProtoReflect.Descriptor instead.
func (*SyncWantResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{5}
}
type SyncCompleteRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncCompleteRequest) Reset() {
*x = SyncCompleteRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncCompleteRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncCompleteRequest) ProtoMessage() {}
func (x *SyncCompleteRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncCompleteRequest.ProtoReflect.Descriptor instead.
func (*SyncCompleteRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{6}
}
func (x *SyncCompleteRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncCompleteResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncCompleteResponse) Reset() {
*x = SyncCompleteResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncCompleteResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncCompleteResponse) ProtoMessage() {}
func (x *SyncCompleteResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncCompleteResponse.ProtoReflect.Descriptor instead.
func (*SyncCompleteResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{7}
}
type SyncReadyRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncReadyRequest) Reset() {
*x = SyncReadyRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncReadyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncReadyRequest) ProtoMessage() {}
func (x *SyncReadyRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncReadyRequest.ProtoReflect.Descriptor instead.
func (*SyncReadyRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{8}
}
func (x *SyncReadyRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncReadyResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Ready bool `protobuf:"varint,1,opt,name=ready,proto3" json:"ready,omitempty"`
}
func (x *SyncReadyResponse) Reset() {
*x = SyncReadyResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncReadyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncReadyResponse) ProtoMessage() {}
func (x *SyncReadyResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncReadyResponse.ProtoReflect.Descriptor instead.
func (*SyncReadyResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{9}
}
func (x *SyncReadyResponse) GetReady() bool {
if x != nil {
return x.Ready
}
return false
}
type SyncStatusRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncStatusRequest) Reset() {
*x = SyncStatusRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStatusRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStatusRequest) ProtoMessage() {}
func (x *SyncStatusRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStatusRequest.ProtoReflect.Descriptor instead.
func (*SyncStatusRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{10}
}
func (x *SyncStatusRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type DependencyInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
RequiredStatus string `protobuf:"bytes,3,opt,name=required_status,json=requiredStatus,proto3" json:"required_status,omitempty"`
CurrentStatus string `protobuf:"bytes,4,opt,name=current_status,json=currentStatus,proto3" json:"current_status,omitempty"`
IsSatisfied bool `protobuf:"varint,5,opt,name=is_satisfied,json=isSatisfied,proto3" json:"is_satisfied,omitempty"`
}
func (x *DependencyInfo) Reset() {
*x = DependencyInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DependencyInfo) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DependencyInfo) ProtoMessage() {}
func (x *DependencyInfo) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DependencyInfo.ProtoReflect.Descriptor instead.
func (*DependencyInfo) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{11}
}
func (x *DependencyInfo) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
func (x *DependencyInfo) GetDependsOn() string {
if x != nil {
return x.DependsOn
}
return ""
}
func (x *DependencyInfo) GetRequiredStatus() string {
if x != nil {
return x.RequiredStatus
}
return ""
}
func (x *DependencyInfo) GetCurrentStatus() string {
if x != nil {
return x.CurrentStatus
}
return ""
}
func (x *DependencyInfo) GetIsSatisfied() bool {
if x != nil {
return x.IsSatisfied
}
return false
}
type SyncStatusResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Status string `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"`
IsReady bool `protobuf:"varint,2,opt,name=is_ready,json=isReady,proto3" json:"is_ready,omitempty"`
Dependencies []*DependencyInfo `protobuf:"bytes,3,rep,name=dependencies,proto3" json:"dependencies,omitempty"`
}
func (x *SyncStatusResponse) Reset() {
*x = SyncStatusResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStatusResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStatusResponse) ProtoMessage() {}
func (x *SyncStatusResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStatusResponse.ProtoReflect.Descriptor instead.
func (*SyncStatusResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{12}
}
func (x *SyncStatusResponse) GetStatus() string {
if x != nil {
return x.Status
}
return ""
}
func (x *SyncStatusResponse) GetIsReady() bool {
if x != nil {
return x.IsReady
}
return false
}
func (x *SyncStatusResponse) GetDependencies() []*DependencyInfo {
if x != nil {
return x.Dependencies
}
return nil
}
var File_agent_agentsocket_proto_agentsocket_proto protoreflect.FileDescriptor
var file_agent_agentsocket_proto_agentsocket_proto_rawDesc = []byte{
0x0a, 0x29, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73,
0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x14, 0x63, 0x6f, 0x64,
0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76,
0x31, 0x22, 0x0d, 0x0a, 0x0b, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x22, 0x0e, 0x0a, 0x0c, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x22, 0x26, 0x0a, 0x10, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x22, 0x13, 0x0a, 0x11, 0x53, 0x79, 0x6e, 0x63,
0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x44, 0x0a,
0x0f, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
0x75, 0x6e, 0x69, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x5f,
0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64,
0x73, 0x4f, 0x6e, 0x22, 0x12, 0x0a, 0x10, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x29, 0x0a, 0x13, 0x53, 0x79, 0x6e, 0x63, 0x43,
0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12,
0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e,
0x69, 0x74, 0x22, 0x16, 0x0a, 0x14, 0x53, 0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65,
0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x26, 0x0a, 0x10, 0x53, 0x79,
0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12,
0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e,
0x69, 0x74, 0x22, 0x29, 0x0a, 0x11, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x72, 0x65, 0x61, 0x64, 0x79,
0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x05, 0x72, 0x65, 0x61, 0x64, 0x79, 0x22, 0x27, 0x0a,
0x11, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x22, 0xb6, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x70, 0x65, 0x6e,
0x64, 0x65, 0x6e, 0x63, 0x79, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69,
0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x12, 0x1d, 0x0a,
0x0a, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x5f, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28,
0x09, 0x52, 0x09, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x4f, 0x6e, 0x12, 0x27, 0x0a, 0x0f,
0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18,
0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x53,
0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x25, 0x0a, 0x0e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74,
0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x63,
0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x21, 0x0a, 0x0c,
0x69, 0x73, 0x5f, 0x73, 0x61, 0x74, 0x69, 0x73, 0x66, 0x69, 0x65, 0x64, 0x18, 0x05, 0x20, 0x01,
0x28, 0x08, 0x52, 0x0b, 0x69, 0x73, 0x53, 0x61, 0x74, 0x69, 0x73, 0x66, 0x69, 0x65, 0x64, 0x22,
0x91, 0x01, 0x0a, 0x12, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x19,
0x0a, 0x08, 0x69, 0x73, 0x5f, 0x72, 0x65, 0x61, 0x64, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08,
0x52, 0x07, 0x69, 0x73, 0x52, 0x65, 0x61, 0x64, 0x79, 0x12, 0x48, 0x0a, 0x0c, 0x64, 0x65, 0x70,
0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63, 0x69, 0x65, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x24, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63,
0x79, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x0c, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63,
0x69, 0x65, 0x73, 0x32, 0xbb, 0x04, 0x0a, 0x0b, 0x41, 0x67, 0x65, 0x6e, 0x74, 0x53, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x12, 0x4d, 0x0a, 0x04, 0x50, 0x69, 0x6e, 0x67, 0x12, 0x21, 0x2e, 0x63, 0x6f,
0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e,
0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22,
0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b,
0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x5c, 0x0a, 0x09, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12,
0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x59, 0x0a, 0x08, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x12, 0x25, 0x2e, 0x63,
0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74,
0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e,
0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x57,
0x61, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x65, 0x0a, 0x0c, 0x53,
0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x12, 0x29, 0x2e, 0x63, 0x6f,
0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e,
0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2a, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61,
0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79,
0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x5c, 0x0a, 0x09, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x12,
0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x5f, 0x0a, 0x0a, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x27,
0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b,
0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x28, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
0x65, 0x42, 0x33, 0x5a, 0x31, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x63, 0x6f, 0x64, 0x65, 0x72, 0x2f, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2f, 0x76, 0x32, 0x2f, 0x61,
0x67, 0x65, 0x6e, 0x74, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74,
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_agent_agentsocket_proto_agentsocket_proto_rawDescOnce sync.Once
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = file_agent_agentsocket_proto_agentsocket_proto_rawDesc
)
func file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP() []byte {
file_agent_agentsocket_proto_agentsocket_proto_rawDescOnce.Do(func() {
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = protoimpl.X.CompressGZIP(file_agent_agentsocket_proto_agentsocket_proto_rawDescData)
})
return file_agent_agentsocket_proto_agentsocket_proto_rawDescData
}
var file_agent_agentsocket_proto_agentsocket_proto_msgTypes = make([]protoimpl.MessageInfo, 13)
var file_agent_agentsocket_proto_agentsocket_proto_goTypes = []interface{}{
(*PingRequest)(nil), // 0: coder.agentsocket.v1.PingRequest
(*PingResponse)(nil), // 1: coder.agentsocket.v1.PingResponse
(*SyncStartRequest)(nil), // 2: coder.agentsocket.v1.SyncStartRequest
(*SyncStartResponse)(nil), // 3: coder.agentsocket.v1.SyncStartResponse
(*SyncWantRequest)(nil), // 4: coder.agentsocket.v1.SyncWantRequest
(*SyncWantResponse)(nil), // 5: coder.agentsocket.v1.SyncWantResponse
(*SyncCompleteRequest)(nil), // 6: coder.agentsocket.v1.SyncCompleteRequest
(*SyncCompleteResponse)(nil), // 7: coder.agentsocket.v1.SyncCompleteResponse
(*SyncReadyRequest)(nil), // 8: coder.agentsocket.v1.SyncReadyRequest
(*SyncReadyResponse)(nil), // 9: coder.agentsocket.v1.SyncReadyResponse
(*SyncStatusRequest)(nil), // 10: coder.agentsocket.v1.SyncStatusRequest
(*DependencyInfo)(nil), // 11: coder.agentsocket.v1.DependencyInfo
(*SyncStatusResponse)(nil), // 12: coder.agentsocket.v1.SyncStatusResponse
}
var file_agent_agentsocket_proto_agentsocket_proto_depIdxs = []int32{
11, // 0: coder.agentsocket.v1.SyncStatusResponse.dependencies:type_name -> coder.agentsocket.v1.DependencyInfo
0, // 1: coder.agentsocket.v1.AgentSocket.Ping:input_type -> coder.agentsocket.v1.PingRequest
2, // 2: coder.agentsocket.v1.AgentSocket.SyncStart:input_type -> coder.agentsocket.v1.SyncStartRequest
4, // 3: coder.agentsocket.v1.AgentSocket.SyncWant:input_type -> coder.agentsocket.v1.SyncWantRequest
6, // 4: coder.agentsocket.v1.AgentSocket.SyncComplete:input_type -> coder.agentsocket.v1.SyncCompleteRequest
8, // 5: coder.agentsocket.v1.AgentSocket.SyncReady:input_type -> coder.agentsocket.v1.SyncReadyRequest
10, // 6: coder.agentsocket.v1.AgentSocket.SyncStatus:input_type -> coder.agentsocket.v1.SyncStatusRequest
1, // 7: coder.agentsocket.v1.AgentSocket.Ping:output_type -> coder.agentsocket.v1.PingResponse
3, // 8: coder.agentsocket.v1.AgentSocket.SyncStart:output_type -> coder.agentsocket.v1.SyncStartResponse
5, // 9: coder.agentsocket.v1.AgentSocket.SyncWant:output_type -> coder.agentsocket.v1.SyncWantResponse
7, // 10: coder.agentsocket.v1.AgentSocket.SyncComplete:output_type -> coder.agentsocket.v1.SyncCompleteResponse
9, // 11: coder.agentsocket.v1.AgentSocket.SyncReady:output_type -> coder.agentsocket.v1.SyncReadyResponse
12, // 12: coder.agentsocket.v1.AgentSocket.SyncStatus:output_type -> coder.agentsocket.v1.SyncStatusResponse
7, // [7:13] is the sub-list for method output_type
1, // [1:7] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_agent_agentsocket_proto_agentsocket_proto_init() }
func file_agent_agentsocket_proto_agentsocket_proto_init() {
if File_agent_agentsocket_proto_agentsocket_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PingRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PingResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStartRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStartResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncWantRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncWantResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncCompleteRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncCompleteResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncReadyRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncReadyResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStatusRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DependencyInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStatusResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_agent_agentsocket_proto_agentsocket_proto_rawDesc,
NumEnums: 0,
NumMessages: 13,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_agent_agentsocket_proto_agentsocket_proto_goTypes,
DependencyIndexes: file_agent_agentsocket_proto_agentsocket_proto_depIdxs,
MessageInfos: file_agent_agentsocket_proto_agentsocket_proto_msgTypes,
}.Build()
File_agent_agentsocket_proto_agentsocket_proto = out.File
file_agent_agentsocket_proto_agentsocket_proto_rawDesc = nil
file_agent_agentsocket_proto_agentsocket_proto_goTypes = nil
file_agent_agentsocket_proto_agentsocket_proto_depIdxs = nil
}
-69
View File
@@ -1,69 +0,0 @@
syntax = "proto3";
option go_package = "github.com/coder/coder/v2/agent/agentsocket/proto";
package coder.agentsocket.v1;
message PingRequest {}
message PingResponse {}
message SyncStartRequest {
string unit = 1;
}
message SyncStartResponse {}
message SyncWantRequest {
string unit = 1;
string depends_on = 2;
}
message SyncWantResponse {}
message SyncCompleteRequest {
string unit = 1;
}
message SyncCompleteResponse {}
message SyncReadyRequest {
string unit = 1;
}
message SyncReadyResponse {
bool ready = 1;
}
message SyncStatusRequest {
string unit = 1;
}
message DependencyInfo {
string unit = 1;
string depends_on = 2;
string required_status = 3;
string current_status = 4;
bool is_satisfied = 5;
}
message SyncStatusResponse {
string status = 1;
bool is_ready = 2;
repeated DependencyInfo dependencies = 3;
}
// AgentSocket provides direct access to the agent over local IPC.
service AgentSocket {
// Ping the agent to check if it is alive.
rpc Ping(PingRequest) returns (PingResponse);
// Report the start of a unit.
rpc SyncStart(SyncStartRequest) returns (SyncStartResponse);
// Declare a dependency between units.
rpc SyncWant(SyncWantRequest) returns (SyncWantResponse);
// Report the completion of a unit.
rpc SyncComplete(SyncCompleteRequest) returns (SyncCompleteResponse);
// Request whether a unit is ready to be started. That is, all dependencies are satisfied.
rpc SyncReady(SyncReadyRequest) returns (SyncReadyResponse);
// Get the status of a unit and list its dependencies.
rpc SyncStatus(SyncStatusRequest) returns (SyncStatusResponse);
}
@@ -1,311 +0,0 @@
// Code generated by protoc-gen-go-drpc. DO NOT EDIT.
// protoc-gen-go-drpc version: v0.0.34
// source: agent/agentsocket/proto/agentsocket.proto
package proto
import (
context "context"
errors "errors"
protojson "google.golang.org/protobuf/encoding/protojson"
proto "google.golang.org/protobuf/proto"
drpc "storj.io/drpc"
drpcerr "storj.io/drpc/drpcerr"
)
type drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto struct{}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) Marshal(msg drpc.Message) ([]byte, error) {
return proto.Marshal(msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) MarshalAppend(buf []byte, msg drpc.Message) ([]byte, error) {
return proto.MarshalOptions{}.MarshalAppend(buf, msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) Unmarshal(buf []byte, msg drpc.Message) error {
return proto.Unmarshal(buf, msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) JSONMarshal(msg drpc.Message) ([]byte, error) {
return protojson.Marshal(msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) JSONUnmarshal(buf []byte, msg drpc.Message) error {
return protojson.Unmarshal(buf, msg.(proto.Message))
}
type DRPCAgentSocketClient interface {
DRPCConn() drpc.Conn
Ping(ctx context.Context, in *PingRequest) (*PingResponse, error)
SyncStart(ctx context.Context, in *SyncStartRequest) (*SyncStartResponse, error)
SyncWant(ctx context.Context, in *SyncWantRequest) (*SyncWantResponse, error)
SyncComplete(ctx context.Context, in *SyncCompleteRequest) (*SyncCompleteResponse, error)
SyncReady(ctx context.Context, in *SyncReadyRequest) (*SyncReadyResponse, error)
SyncStatus(ctx context.Context, in *SyncStatusRequest) (*SyncStatusResponse, error)
}
type drpcAgentSocketClient struct {
cc drpc.Conn
}
func NewDRPCAgentSocketClient(cc drpc.Conn) DRPCAgentSocketClient {
return &drpcAgentSocketClient{cc}
}
func (c *drpcAgentSocketClient) DRPCConn() drpc.Conn { return c.cc }
func (c *drpcAgentSocketClient) Ping(ctx context.Context, in *PingRequest) (*PingResponse, error) {
out := new(PingResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/Ping", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncStart(ctx context.Context, in *SyncStartRequest) (*SyncStartResponse, error) {
out := new(SyncStartResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncStart", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncWant(ctx context.Context, in *SyncWantRequest) (*SyncWantResponse, error) {
out := new(SyncWantResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncWant", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncComplete(ctx context.Context, in *SyncCompleteRequest) (*SyncCompleteResponse, error) {
out := new(SyncCompleteResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncComplete", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncReady(ctx context.Context, in *SyncReadyRequest) (*SyncReadyResponse, error) {
out := new(SyncReadyResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncReady", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncStatus(ctx context.Context, in *SyncStatusRequest) (*SyncStatusResponse, error) {
out := new(SyncStatusResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncStatus", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
type DRPCAgentSocketServer interface {
Ping(context.Context, *PingRequest) (*PingResponse, error)
SyncStart(context.Context, *SyncStartRequest) (*SyncStartResponse, error)
SyncWant(context.Context, *SyncWantRequest) (*SyncWantResponse, error)
SyncComplete(context.Context, *SyncCompleteRequest) (*SyncCompleteResponse, error)
SyncReady(context.Context, *SyncReadyRequest) (*SyncReadyResponse, error)
SyncStatus(context.Context, *SyncStatusRequest) (*SyncStatusResponse, error)
}
type DRPCAgentSocketUnimplementedServer struct{}
func (s *DRPCAgentSocketUnimplementedServer) Ping(context.Context, *PingRequest) (*PingResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncStart(context.Context, *SyncStartRequest) (*SyncStartResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncWant(context.Context, *SyncWantRequest) (*SyncWantResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncComplete(context.Context, *SyncCompleteRequest) (*SyncCompleteResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncReady(context.Context, *SyncReadyRequest) (*SyncReadyResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncStatus(context.Context, *SyncStatusRequest) (*SyncStatusResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
type DRPCAgentSocketDescription struct{}
func (DRPCAgentSocketDescription) NumMethods() int { return 6 }
func (DRPCAgentSocketDescription) Method(n int) (string, drpc.Encoding, drpc.Receiver, interface{}, bool) {
switch n {
case 0:
return "/coder.agentsocket.v1.AgentSocket/Ping", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
Ping(
ctx,
in1.(*PingRequest),
)
}, DRPCAgentSocketServer.Ping, true
case 1:
return "/coder.agentsocket.v1.AgentSocket/SyncStart", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncStart(
ctx,
in1.(*SyncStartRequest),
)
}, DRPCAgentSocketServer.SyncStart, true
case 2:
return "/coder.agentsocket.v1.AgentSocket/SyncWant", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncWant(
ctx,
in1.(*SyncWantRequest),
)
}, DRPCAgentSocketServer.SyncWant, true
case 3:
return "/coder.agentsocket.v1.AgentSocket/SyncComplete", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncComplete(
ctx,
in1.(*SyncCompleteRequest),
)
}, DRPCAgentSocketServer.SyncComplete, true
case 4:
return "/coder.agentsocket.v1.AgentSocket/SyncReady", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncReady(
ctx,
in1.(*SyncReadyRequest),
)
}, DRPCAgentSocketServer.SyncReady, true
case 5:
return "/coder.agentsocket.v1.AgentSocket/SyncStatus", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncStatus(
ctx,
in1.(*SyncStatusRequest),
)
}, DRPCAgentSocketServer.SyncStatus, true
default:
return "", nil, nil, nil, false
}
}
func DRPCRegisterAgentSocket(mux drpc.Mux, impl DRPCAgentSocketServer) error {
return mux.Register(impl, DRPCAgentSocketDescription{})
}
type DRPCAgentSocket_PingStream interface {
drpc.Stream
SendAndClose(*PingResponse) error
}
type drpcAgentSocket_PingStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_PingStream) SendAndClose(m *PingResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncStartStream interface {
drpc.Stream
SendAndClose(*SyncStartResponse) error
}
type drpcAgentSocket_SyncStartStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncStartStream) SendAndClose(m *SyncStartResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncWantStream interface {
drpc.Stream
SendAndClose(*SyncWantResponse) error
}
type drpcAgentSocket_SyncWantStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncWantStream) SendAndClose(m *SyncWantResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncCompleteStream interface {
drpc.Stream
SendAndClose(*SyncCompleteResponse) error
}
type drpcAgentSocket_SyncCompleteStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncCompleteStream) SendAndClose(m *SyncCompleteResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncReadyStream interface {
drpc.Stream
SendAndClose(*SyncReadyResponse) error
}
type drpcAgentSocket_SyncReadyStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncReadyStream) SendAndClose(m *SyncReadyResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncStatusStream interface {
drpc.Stream
SendAndClose(*SyncStatusResponse) error
}
type drpcAgentSocket_SyncStatusStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncStatusStream) SendAndClose(m *SyncStatusResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
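A minimal sketch (not part of this change) of how a client might dial the generated service above over the agent's Unix domain socket, mirroring the test helper further down in this diff. The socket path is a hypothetical placeholder; yamux and drpcsdk.MultiplexedConn are used exactly as in the tests.
package main

import (
	"context"
	"fmt"
	"net"

	"github.com/hashicorp/yamux"

	"github.com/coder/coder/v2/agent/agentsocket/proto"
	"github.com/coder/coder/v2/codersdk/drpcsdk"
)

func main() {
	// Dial the agent's Unix domain socket (placeholder path).
	conn, err := net.Dial("unix", "/tmp/coder-agent-example.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The server wraps each connection in a yamux session, so the client must too.
	session, err := yamux.Client(conn, yamux.DefaultConfig())
	if err != nil {
		panic(err)
	}
	defer session.Close()

	client := proto.NewDRPCAgentSocketClient(drpcsdk.MultiplexedConn(session))
	if _, err := client.Ping(context.Background(), &proto.PingRequest{}); err != nil {
		panic(err)
	}
	fmt.Println("agent socket is reachable")
}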
-17
@@ -1,17 +0,0 @@
package proto
import "github.com/coder/coder/v2/apiversion"
// Version history:
//
// API v1.0:
// - Initial release
// - Ping
// - Sync operations: SyncStart, SyncWant, SyncComplete, SyncReady, SyncStatus
const (
CurrentMajor = 1
CurrentMinor = 0
)
var CurrentVersion = apiversion.New(CurrentMajor, CurrentMinor)
-185
@@ -1,185 +0,0 @@
package agentsocket
import (
"context"
"errors"
"net"
"sync"
"golang.org/x/xerrors"
"github.com/hashicorp/yamux"
"storj.io/drpc/drpcmux"
"storj.io/drpc/drpcserver"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/codersdk/drpcsdk"
)
// Server provides access to the DRPCAgentSocketService via a Unix domain socket.
// Do not construct a Server{} directly. Use NewServer() instead.
type Server struct {
logger slog.Logger
path string
drpcServer *drpcserver.Server
service *DRPCAgentSocketService
mu sync.Mutex
listener net.Listener
ctx context.Context
cancel context.CancelFunc
wg sync.WaitGroup
}
func NewServer(path string, logger slog.Logger) (*Server, error) {
logger = logger.Named("agentsocket-server")
server := &Server{
logger: logger,
path: path,
service: &DRPCAgentSocketService{
logger: logger,
unitManager: unit.NewManager(),
},
}
mux := drpcmux.New()
err := proto.DRPCRegisterAgentSocket(mux, server.service)
if err != nil {
return nil, xerrors.Errorf("failed to register drpc service: %w", err)
}
server.drpcServer = drpcserver.NewWithOptions(mux, drpcserver.Options{
Manager: drpcsdk.DefaultDRPCOptions(nil),
Log: func(err error) {
if errors.Is(err, context.Canceled) ||
errors.Is(err, context.DeadlineExceeded) {
return
}
logger.Debug(context.Background(), "drpc server error", slog.Error(err))
},
})
if server.path == "" {
var err error
server.path, err = getDefaultSocketPath()
if err != nil {
return nil, xerrors.Errorf("get default socket path: %w", err)
}
}
listener, err := createSocket(server.path)
if err != nil {
return nil, xerrors.Errorf("create socket: %w", err)
}
server.listener = listener
// This context is canceled by server.Close().
// Canceling it will close all connections.
server.ctx, server.cancel = context.WithCancel(context.Background())
server.logger.Info(server.ctx, "agent socket server started", slog.F("path", server.path))
server.wg.Add(1)
go func() {
defer server.wg.Done()
server.acceptConnections()
}()
return server, nil
}
func (s *Server) Close() error {
s.mu.Lock()
if s.listener == nil {
s.mu.Unlock()
return nil
}
s.logger.Info(s.ctx, "stopping agent socket server")
s.cancel()
if err := s.listener.Close(); err != nil {
s.logger.Warn(s.ctx, "error closing socket listener", slog.Error(err))
}
s.listener = nil
s.mu.Unlock()
// Wait for all connections to finish
s.wg.Wait()
if err := cleanupSocket(s.path); err != nil {
s.logger.Warn(s.ctx, "error cleaning up socket file", slog.Error(err))
}
s.logger.Info(s.ctx, "agent socket server stopped")
return nil
}
func (s *Server) acceptConnections() {
// In an edge case, Close() might race with acceptConnections() and set s.listener to nil.
// Therefore, we grab a copy of the listener under a lock. We might still get a nil listener,
// but then we know close has already run and we can return early.
s.mu.Lock()
listener := s.listener
s.mu.Unlock()
if listener == nil {
return
}
for {
select {
case <-s.ctx.Done():
return
default:
}
conn, err := listener.Accept()
if err != nil {
s.logger.Warn(s.ctx, "error accepting connection", slog.Error(err))
continue
}
s.mu.Lock()
if s.listener == nil {
s.mu.Unlock()
_ = conn.Close()
return
}
s.wg.Add(1)
s.mu.Unlock()
go func() {
defer s.wg.Done()
s.handleConnection(conn)
}()
}
}
func (s *Server) handleConnection(conn net.Conn) {
defer conn.Close()
s.logger.Debug(s.ctx, "new connection accepted", slog.F("remote_addr", conn.RemoteAddr()))
config := yamux.DefaultConfig()
config.LogOutput = nil
config.Logger = slog.Stdlib(s.ctx, s.logger.Named("agentsocket-yamux"), slog.LevelInfo)
session, err := yamux.Server(conn, config)
if err != nil {
s.logger.Warn(s.ctx, "failed to create yamux session", slog.Error(err))
return
}
defer session.Close()
err = s.drpcServer.Serve(s.ctx, session)
if err != nil {
s.logger.Debug(s.ctx, "drpc server finished", slog.Error(err))
}
}
-52
@@ -1,52 +0,0 @@
package agentsocket_test
import (
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket"
)
func TestServer(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("StartStop", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(socketPath, logger)
require.NoError(t, err)
require.NoError(t, server.Close())
})
t.Run("AlreadyStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server1, err := agentsocket.NewServer(socketPath, logger)
require.NoError(t, err)
defer server1.Close()
_, err = agentsocket.NewServer(socketPath, logger)
require.ErrorContains(t, err, "create socket")
})
t.Run("AutoSocketPath", func(t *testing.T) {
t.Parallel()
logger := slog.Make().Leveled(slog.LevelDebug)
// Pass an empty path so that NewServer falls back to the default socket path.
server, err := agentsocket.NewServer("", logger)
require.NoError(t, err)
require.NoError(t, server.Close())
})
}
-142
@@ -1,142 +0,0 @@
package agentsocket
import (
"context"
"errors"
"golang.org/x/xerrors"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
)
var _ proto.DRPCAgentSocketServer = (*DRPCAgentSocketService)(nil)
var ErrUnitManagerNotAvailable = xerrors.New("unit manager not available")
type DRPCAgentSocketService struct {
unitManager *unit.Manager
logger slog.Logger
}
func (*DRPCAgentSocketService) Ping(_ context.Context, _ *proto.PingRequest) (*proto.PingResponse, error) {
return &proto.PingResponse{}, nil
}
func (s *DRPCAgentSocketService) SyncStart(_ context.Context, req *proto.SyncStartRequest) (*proto.SyncStartResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("SyncStart: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
if err := s.unitManager.Register(unitID); err != nil {
if !errors.Is(err, unit.ErrUnitAlreadyRegistered) {
return nil, xerrors.Errorf("SyncStart: %w", err)
}
}
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
if !isReady {
return nil, xerrors.Errorf("cannot start unit %q: unit not ready", req.Unit)
}
err = s.unitManager.UpdateStatus(unitID, unit.StatusStarted)
if err != nil {
return nil, xerrors.Errorf("cannot start unit %q: %w", req.Unit, err)
}
return &proto.SyncStartResponse{}, nil
}
func (s *DRPCAgentSocketService) SyncWant(_ context.Context, req *proto.SyncWantRequest) (*proto.SyncWantResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot add dependency: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
dependsOnID := unit.ID(req.DependsOn)
if err := s.unitManager.Register(unitID); err != nil && !errors.Is(err, unit.ErrUnitAlreadyRegistered) {
return nil, xerrors.Errorf("cannot add dependency: %w", err)
}
if err := s.unitManager.AddDependency(unitID, dependsOnID, unit.StatusComplete); err != nil {
return nil, xerrors.Errorf("cannot add dependency: %w", err)
}
return &proto.SyncWantResponse{}, nil
}
func (s *DRPCAgentSocketService) SyncComplete(_ context.Context, req *proto.SyncCompleteRequest) (*proto.SyncCompleteResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot complete unit: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
if err := s.unitManager.UpdateStatus(unitID, unit.StatusComplete); err != nil {
return nil, xerrors.Errorf("cannot complete unit %q: %w", req.Unit, err)
}
return &proto.SyncCompleteResponse{}, nil
}
func (s *DRPCAgentSocketService) SyncReady(_ context.Context, req *proto.SyncReadyRequest) (*proto.SyncReadyResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot check readiness: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
return &proto.SyncReadyResponse{
Ready: isReady,
}, nil
}
func (s *DRPCAgentSocketService) SyncStatus(_ context.Context, req *proto.SyncStatusRequest) (*proto.SyncStatusResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot get status for unit %q: %w", req.Unit, ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
dependencies, err := s.unitManager.GetAllDependencies(unitID)
if err != nil {
return nil, xerrors.Errorf("failed to get dependencies: %w", err)
}
var depInfos []*proto.DependencyInfo
for _, dep := range dependencies {
depInfos = append(depInfos, &proto.DependencyInfo{
Unit: string(dep.Unit),
DependsOn: string(dep.DependsOn),
RequiredStatus: string(dep.RequiredStatus),
CurrentStatus: string(dep.CurrentStatus),
IsSatisfied: dep.IsSatisfied,
})
}
u, err := s.unitManager.Unit(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot get status for unit %q: %w", req.Unit, err)
}
return &proto.SyncStatusResponse{
Status: string(u.Status()),
IsReady: isReady,
Dependencies: depInfos,
}, nil
}
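Taken together, the handlers above implement a small ordering protocol: SyncWant records the dependency (auto-registering the dependent unit), SyncStart refuses to run a unit whose dependencies are unmet, and SyncComplete unblocks it. A hedged end-to-end sketch follows, assuming the context and proto packages are imported as in the tests below; "app-server" and "db-migrations" are hypothetical unit names.
// syncOrderingSketch is illustrative only and not part of the original change.
func syncOrderingSketch(ctx context.Context, client proto.DRPCAgentSocketClient) error {
	// Declare that app-server must wait for db-migrations to complete.
	if _, err := client.SyncWant(ctx, &proto.SyncWantRequest{
		Unit:      "app-server",
		DependsOn: "db-migrations",
	}); err != nil {
		return err
	}
	// At this point SyncStart for app-server would fail with "unit not ready".
	// Start and complete the dependency first.
	if _, err := client.SyncStart(ctx, &proto.SyncStartRequest{Unit: "db-migrations"}); err != nil {
		return err
	}
	if _, err := client.SyncComplete(ctx, &proto.SyncCompleteRequest{Unit: "db-migrations"}); err != nil {
		return err
	}
	// The dependency is satisfied, so app-server is now allowed to start.
	_, err := client.SyncStart(ctx, &proto.SyncStartRequest{Unit: "app-server"})
	return err
}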
-470
@@ -1,470 +0,0 @@
package agentsocket_test
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"net"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/hashicorp/yamux"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/codersdk/drpcsdk"
)
// tempDirUnixSocket returns a temporary directory that can safely hold unix
// sockets (probably).
//
// During tests on darwin we hit the max path length limit for unix sockets
// pretty easily in the default location, so this function uses /tmp instead to
// get shorter paths. To keep paths short, we use a hash of the test name
// instead of the full test name.
func tempDirUnixSocket(t *testing.T) string {
t.Helper()
if runtime.GOOS == "darwin" {
// Use a short hash of the test name to keep the path under 104 chars
hash := sha256.Sum256([]byte(t.Name()))
hashStr := hex.EncodeToString(hash[:])[:8] // Use first 8 chars of hash
dir, err := os.MkdirTemp("/tmp", fmt.Sprintf("c-%s-", hashStr))
require.NoError(t, err, "create temp dir for unix socket test")
t.Cleanup(func() {
err := os.RemoveAll(dir)
assert.NoError(t, err, "remove temp dir", dir)
})
return dir
}
return t.TempDir()
}
// newSocketClient creates a DRPC client connected to the Unix socket at the given path.
func newSocketClient(t *testing.T, socketPath string) proto.DRPCAgentSocketClient {
t.Helper()
conn, err := net.Dial("unix", socketPath)
require.NoError(t, err)
config := yamux.DefaultConfig()
config.Logger = nil
session, err := yamux.Client(conn, config)
require.NoError(t, err)
client := proto.NewDRPCAgentSocketClient(drpcsdk.MultiplexedConn(session))
t.Cleanup(func() {
_ = session.Close()
_ = conn.Close()
})
return client
}
func TestDRPCAgentSocketService(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("Ping", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
_, err = client.Ping(context.Background(), &proto.PingRequest{})
require.NoError(t, err)
})
t.Run("SyncStart", func(t *testing.T) {
t.Parallel()
t.Run("NewUnit", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "test-unit",
})
require.NoError(t, err)
status, err := client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "started", status.Status)
})
t.Run("UnitAlreadyStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
// First Start
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "test-unit",
})
require.NoError(t, err)
status, err := client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "started", status.Status)
// Second Start
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "test-unit",
})
require.ErrorContains(t, err, unit.ErrSameStatusAlreadySet.Error())
status, err = client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "started", status.Status)
})
t.Run("UnitAlreadyCompleted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
// First start
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "test-unit",
})
require.NoError(t, err)
status, err := client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "started", status.Status)
// Complete the unit
_, err = client.SyncComplete(context.Background(), &proto.SyncCompleteRequest{
Unit: "test-unit",
})
require.NoError(t, err)
status, err = client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "completed", status.Status)
// Second start
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "test-unit",
})
require.NoError(t, err)
status, err = client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "started", status.Status)
})
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
_, err = client.SyncWant(context.Background(), &proto.SyncWantRequest{
Unit: "test-unit",
DependsOn: "dependency-unit",
})
require.NoError(t, err)
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "test-unit",
})
require.ErrorContains(t, err, "unit not ready")
status, err := client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, string(unit.StatusPending), status.Status)
require.False(t, status.IsReady)
})
})
t.Run("SyncWant", func(t *testing.T) {
t.Parallel()
t.Run("NewUnits", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
// SyncWant automatically registers the dependent unit if it is not already registered
_, err = client.SyncWant(context.Background(), &proto.SyncWantRequest{
Unit: "test-unit",
DependsOn: "dependency-unit",
})
require.NoError(t, err)
status, err := client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Len(t, status.Dependencies, 1)
require.Equal(t, "dependency-unit", status.Dependencies[0].DependsOn)
require.Equal(t, "completed", status.Dependencies[0].RequiredStatus)
})
t.Run("DependencyAlreadyRegistered", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
// Start the dependency unit
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "dependency-unit",
})
require.NoError(t, err)
status, err := client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "dependency-unit",
})
require.NoError(t, err)
require.Equal(t, "started", status.Status)
// Add the dependency after the dependency unit has already started
_, err = client.SyncWant(context.Background(), &proto.SyncWantRequest{
Unit: "test-unit",
DependsOn: "dependency-unit",
})
// Dependencies can be added even if the dependency unit has already started
require.NoError(t, err)
// The dependency is now reflected in the test unit's status
status, err = client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "dependency-unit", status.Dependencies[0].DependsOn)
require.Equal(t, "completed", status.Dependencies[0].RequiredStatus)
})
t.Run("DependencyAddedAfterDependentStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
// Start the dependent unit
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "test-unit",
})
require.NoError(t, err)
status, err := client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "started", status.Status)
// Add the dependency after the dependent unit has already started
_, err = client.SyncWant(context.Background(), &proto.SyncWantRequest{
Unit: "test-unit",
DependsOn: "dependency-unit",
})
// Dependencies can be added even if the dependent unit has already started.
// The dependency applies the next time a unit is started. The current status is not updated.
// This is to allow flexible dependency management. It does mean that users of this API should
// take care to add dependencies before they start their dependent units.
require.NoError(t, err)
// The dependency is now reflected in the test unit's status
status, err = client.SyncStatus(context.Background(), &proto.SyncStatusRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.Equal(t, "dependency-unit", status.Dependencies[0].DependsOn)
require.Equal(t, "completed", status.Dependencies[0].RequiredStatus)
})
})
t.Run("SyncReady", func(t *testing.T) {
t.Parallel()
t.Run("UnregisteredUnit", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
response, err := client.SyncReady(context.Background(), &proto.SyncReadyRequest{
Unit: "unregistered-unit",
})
require.NoError(t, err)
require.False(t, response.Ready)
})
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
// Register a unit with an unsatisfied dependency
_, err = client.SyncWant(context.Background(), &proto.SyncWantRequest{
Unit: "test-unit",
DependsOn: "dependency-unit",
})
require.NoError(t, err)
// Check readiness - should be false because dependency is not satisfied
response, err := client.SyncReady(context.Background(), &proto.SyncReadyRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.False(t, response.Ready)
})
t.Run("UnitReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
server, err := agentsocket.NewServer(
socketPath,
slog.Make().Leveled(slog.LevelDebug),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(t, socketPath)
// Register a unit with no dependencies - should be ready immediately
_, err = client.SyncStart(context.Background(), &proto.SyncStartRequest{
Unit: "test-unit",
})
require.NoError(t, err)
// Check readiness - should be true
readyResp, err := client.SyncReady(context.Background(), &proto.SyncReadyRequest{
Unit: "test-unit",
})
require.NoError(t, err)
require.True(t, readyResp.Ready)
// Also test a unit with satisfied dependencies
_, err = client.SyncWant(context.Background(), &proto.SyncWantRequest{
Unit: "dependent-unit",
DependsOn: "test-unit",
})
require.NoError(t, err)
// Complete the dependency
_, err = client.SyncComplete(context.Background(), &proto.SyncCompleteRequest{
Unit: "test-unit",
})
require.NoError(t, err)
// Now dependent-unit should be ready
depReady, err := client.SyncReady(context.Background(), &proto.SyncReadyRequest{
Unit: "dependent-unit",
})
require.NoError(t, err)
require.True(t, depReady.Ready)
})
})
}
-83
@@ -1,83 +0,0 @@
//go:build !windows
package agentsocket
import (
"crypto/rand"
"encoding/hex"
"net"
"os"
"path/filepath"
"time"
"golang.org/x/xerrors"
)
// createSocket creates a Unix domain socket listener
func createSocket(path string) (net.Listener, error) {
if !isSocketAvailable(path) {
return nil, xerrors.Errorf("socket path %s is not available", path)
}
if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
return nil, xerrors.Errorf("remove existing socket: %w", err)
}
// Create parent directory if it doesn't exist
parentDir := filepath.Dir(path)
if err := os.MkdirAll(parentDir, 0o700); err != nil {
return nil, xerrors.Errorf("create socket directory: %w", err)
}
listener, err := net.Listen("unix", path)
if err != nil {
return nil, xerrors.Errorf("listen on unix socket: %w", err)
}
if err := os.Chmod(path, 0o600); err != nil {
_ = listener.Close()
return nil, xerrors.Errorf("set socket permissions: %w", err)
}
return listener, nil
}
// getDefaultSocketPath returns the default socket path for Unix-like systems
func getDefaultSocketPath() (string, error) {
randomBytes := make([]byte, 4)
if _, err := rand.Read(randomBytes); err != nil {
return "", xerrors.Errorf("generate random socket name: %w", err)
}
randomSuffix := hex.EncodeToString(randomBytes)
// Try XDG_RUNTIME_DIR first
if runtimeDir := os.Getenv("XDG_RUNTIME_DIR"); runtimeDir != "" {
return filepath.Join(runtimeDir, "coder-agent-"+randomSuffix+".sock"), nil
}
return filepath.Join("/tmp", "coder-agent-"+randomSuffix+".sock"), nil
}
// cleanupSocket removes the socket file
func cleanupSocket(path string) error {
return os.Remove(path)
}
// isSocketAvailable checks if a socket path is available for use
func isSocketAvailable(path string) bool {
// Check if file exists
if _, err := os.Stat(path); os.IsNotExist(err) {
return true
}
// Try to connect to see if it's actually listening
dialer := net.Dialer{Timeout: 10 * time.Second}
conn, err := dialer.Dial("unix", path)
if err != nil {
// If we can't connect, nothing is listening, so the socket is stale and available for reuse
return true
}
_ = conn.Close()
// Socket is in use
return false
}
-27
@@ -1,27 +0,0 @@
//go:build windows
package agentsocket
import (
"net"
"golang.org/x/xerrors"
)
// createSocket returns an error indicating that agentsocket is not supported on Windows.
// This feature is unix-only in its current experimental state.
func createSocket(_ string) (net.Listener, error) {
return nil, xerrors.New("agentsocket is not supported on Windows")
}
// getDefaultSocketPath returns an error indicating that agentsocket is not supported on Windows.
// This feature is unix-only in its current experimental state.
func getDefaultSocketPath() (string, error) {
return "", xerrors.New("agentsocket is not supported on Windows")
}
// cleanupSocket is a no-op on Windows since agentsocket is not supported.
func cleanupSocket(_ string) error {
// No-op since agentsocket is not supported on Windows
return nil
}
+1 -2
@@ -63,7 +63,6 @@ func NewAppHealthReporterWithClock(
// run a ticker for each app health check.
var mu sync.RWMutex
failures := make(map[uuid.UUID]int, 0)
client := &http.Client{}
for _, nextApp := range apps {
if !shouldStartTicker(nextApp) {
continue
@@ -92,7 +91,7 @@ func NewAppHealthReporterWithClock(
if err != nil {
return err
}
res, err := client.Do(req)
res, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
+1 -3
@@ -250,9 +250,7 @@ func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.
transforms[i] = replace.String(edit.Search, edit.Replace)
}
// Create an adjacent file to ensure it will be on the same device and can be
// moved atomically.
tmpfile, err := afero.TempFile(a.filesystem, filepath.Dir(path), filepath.Base(path))
tmpfile, err := afero.TempFile(a.filesystem, "", filepath.Base(path))
if err != nil {
return http.StatusInternalServerError, err
}
-5
@@ -25,7 +25,6 @@ import (
// screenReconnectingPTY provides a reconnectable PTY via `screen`.
type screenReconnectingPTY struct {
logger slog.Logger
execer agentexec.Execer
command *pty.Cmd
@@ -63,7 +62,6 @@ type screenReconnectingPTY struct {
// own which causes it to spawn with the specified size.
func newScreen(ctx context.Context, logger slog.Logger, execer agentexec.Execer, cmd *pty.Cmd, options *Options) *screenReconnectingPTY {
rpty := &screenReconnectingPTY{
logger: logger,
execer: execer,
command: cmd,
metrics: options.Metrics,
@@ -175,7 +173,6 @@ func (rpty *screenReconnectingPTY) Attach(ctx context.Context, _ string, conn ne
ptty, process, err := rpty.doAttach(ctx, conn, height, width, logger)
if err != nil {
logger.Debug(ctx, "unable to attach to screen reconnecting pty", slog.Error(err))
if errors.Is(err, context.Canceled) {
// Likely the process was too short-lived and canceled the version command.
// TODO: Is it worth distinguishing between that and a cancel from the
@@ -185,7 +182,6 @@ func (rpty *screenReconnectingPTY) Attach(ctx context.Context, _ string, conn ne
}
return err
}
logger.Debug(ctx, "attached to screen reconnecting pty")
defer func() {
// Log only for debugging since the process might have already exited on its
@@ -407,7 +403,6 @@ func (rpty *screenReconnectingPTY) Wait() {
}
func (rpty *screenReconnectingPTY) Close(err error) {
rpty.logger.Debug(context.Background(), "closing screen reconnecting pty", slog.Error(err))
// The closing state change will be handled by the lifecycle.
rpty.state.setState(StateClosing, err)
}
-174
@@ -1,174 +0,0 @@
package unit
import (
"fmt"
"sync"
"golang.org/x/xerrors"
"gonum.org/v1/gonum/graph/encoding/dot"
"gonum.org/v1/gonum/graph/simple"
"gonum.org/v1/gonum/graph/topo"
)
// Graph provides a bidirectional interface over gonum's directed graph implementation.
// While the underlying gonum graph is directed, we overlay bidirectional semantics
// by distinguishing between forward and reverse edges. Wanting and being wanted by
// other units are related but different concepts that have different graph traversal
// implications when Units update their status.
//
// The graph stores edge types to represent different relationships between units,
// allowing for domain-specific semantics beyond simple connectivity.
type Graph[EdgeType, VertexType comparable] struct {
mu sync.RWMutex
// The underlying gonum graph. It stores vertices and edges without knowing about the types of the vertices and edges.
gonumGraph *simple.DirectedGraph
// Maps vertices to their gonum node IDs so that a vertex can be used to look up its ID in the gonum graph.
vertexToID map[VertexType]int64
// Maps gonum node IDs back to their vertices so that an ID from the gonum graph can be used to look up the vertex.
idToVertex map[int64]VertexType
// The next ID to assign to a vertex.
nextID int64
// Store edge types by "fromID->toID" key. This is used to lookup the edge type for a given edge.
edgeTypes map[string]EdgeType
}
// Edge is a convenience type for representing an edge in the graph.
// It encapsulates the from and to vertices and the edge type itself.
type Edge[EdgeType, VertexType comparable] struct {
From VertexType
To VertexType
Edge EdgeType
}
// AddEdge adds an edge to the graph. It initializes the graph and metadata on first use,
// checks for cycles, and adds the edge to the gonum graph.
func (g *Graph[EdgeType, VertexType]) AddEdge(from, to VertexType, edge EdgeType) error {
g.mu.Lock()
defer g.mu.Unlock()
if g.gonumGraph == nil {
g.gonumGraph = simple.NewDirectedGraph()
g.vertexToID = make(map[VertexType]int64)
g.idToVertex = make(map[int64]VertexType)
g.edgeTypes = make(map[string]EdgeType)
g.nextID = 1
}
fromID := g.getOrCreateVertexID(from)
toID := g.getOrCreateVertexID(to)
if g.canReach(to, from) {
return xerrors.Errorf("adding edge (%v -> %v): %w", from, to, ErrCycleDetected)
}
g.gonumGraph.SetEdge(simple.Edge{F: simple.Node(fromID), T: simple.Node(toID)})
edgeKey := fmt.Sprintf("%d->%d", fromID, toID)
g.edgeTypes[edgeKey] = edge
return nil
}
// GetForwardAdjacentVertices returns all the edges that originate from the given vertex.
func (g *Graph[EdgeType, VertexType]) GetForwardAdjacentVertices(from VertexType) []Edge[EdgeType, VertexType] {
g.mu.RLock()
defer g.mu.RUnlock()
fromID, exists := g.vertexToID[from]
if !exists {
return []Edge[EdgeType, VertexType]{}
}
edges := []Edge[EdgeType, VertexType]{}
toNodes := g.gonumGraph.From(fromID)
for toNodes.Next() {
toID := toNodes.Node().ID()
to := g.idToVertex[toID]
// Get the edge type
edgeKey := fmt.Sprintf("%d->%d", fromID, toID)
edgeType := g.edgeTypes[edgeKey]
edges = append(edges, Edge[EdgeType, VertexType]{From: from, To: to, Edge: edgeType})
}
return edges
}
// GetReverseAdjacentVertices returns all the edges that terminate at the given vertex.
func (g *Graph[EdgeType, VertexType]) GetReverseAdjacentVertices(to VertexType) []Edge[EdgeType, VertexType] {
g.mu.RLock()
defer g.mu.RUnlock()
toID, exists := g.vertexToID[to]
if !exists {
return []Edge[EdgeType, VertexType]{}
}
edges := []Edge[EdgeType, VertexType]{}
fromNodes := g.gonumGraph.To(toID)
for fromNodes.Next() {
fromID := fromNodes.Node().ID()
from := g.idToVertex[fromID]
// Get the edge type
edgeKey := fmt.Sprintf("%d->%d", fromID, toID)
edgeType := g.edgeTypes[edgeKey]
edges = append(edges, Edge[EdgeType, VertexType]{From: from, To: to, Edge: edgeType})
}
return edges
}
// getOrCreateVertexID returns the ID for a vertex, creating it if it doesn't exist.
func (g *Graph[EdgeType, VertexType]) getOrCreateVertexID(vertex VertexType) int64 {
if id, exists := g.vertexToID[vertex]; exists {
return id
}
id := g.nextID
g.nextID++
g.vertexToID[vertex] = id
g.idToVertex[id] = vertex
// Add the node to the gonum graph
g.gonumGraph.AddNode(simple.Node(id))
return id
}
// canReach checks if there is a path from the start vertex to the end vertex.
func (g *Graph[EdgeType, VertexType]) canReach(start, end VertexType) bool {
if start == end {
return true
}
startID, startExists := g.vertexToID[start]
endID, endExists := g.vertexToID[end]
if !startExists || !endExists {
return false
}
// Use gonum's built-in path existence check
return topo.PathExistsIn(g.gonumGraph, simple.Node(startID), simple.Node(endID))
}
// ToDOT exports the graph to DOT format for visualization
func (g *Graph[EdgeType, VertexType]) ToDOT(name string) (string, error) {
g.mu.RLock()
defer g.mu.RUnlock()
if g.gonumGraph == nil {
return "", xerrors.New("graph is not initialized")
}
// Marshal the graph to DOT format
dotBytes, err := dot.Marshal(g.gonumGraph, name, "", " ")
if err != nil {
return "", xerrors.Errorf("failed to marshal graph to DOT: %w", err)
}
return string(dotBytes), nil
}
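A minimal usage sketch of the Graph above (illustrative only, not part of this change). "web" and "db" are hypothetical unit IDs; the edge and vertex types are the same ones the unit package uses.
package main

import (
	"fmt"

	"github.com/coder/coder/v2/agent/unit"
)

func main() {
	g := &unit.Graph[unit.Status, unit.ID]{}
	// "web" wants "db" to reach the completed status.
	if err := g.AddEdge("web", "db", unit.StatusComplete); err != nil {
		panic(err)
	}
	// Forward edges answer "what does web depend on?".
	for _, e := range g.GetForwardAdjacentVertices("web") {
		fmt.Printf("%s -> %s (requires %q)\n", e.From, e.To, e.Edge)
	}
	// Reverse edges answer "who is waiting on db?".
	for _, e := range g.GetReverseAdjacentVertices("db") {
		fmt.Printf("%s is wanted by %s\n", e.To, e.From)
	}
	// Adding the opposite edge db -> web at this point would fail with ErrCycleDetected.
}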
-452
@@ -1,452 +0,0 @@
// Package unit_test provides tests for the unit package.
//
// DOT Graph Testing:
// The graph tests use golden files for DOT representation verification.
// To update the golden files:
// make gen/golden-files
//
// The golden files contain the expected DOT representation and can be easily
// inspected, version controlled, and updated when the graph structure changes.
package unit_test
import (
"bytes"
"flag"
"fmt"
"os"
"path/filepath"
"sync"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/cryptorand"
)
type testGraphEdge string
const (
testEdgeStarted testGraphEdge = "started"
testEdgeCompleted testGraphEdge = "completed"
)
type testGraphVertex struct {
Name string
}
type (
testGraph = unit.Graph[testGraphEdge, *testGraphVertex]
testEdge = unit.Edge[testGraphEdge, *testGraphVertex]
)
// randInt generates a random integer in the range [0, limit).
func randInt(limit int) int {
if limit <= 0 {
return 0
}
n, err := cryptorand.Int63n(int64(limit))
if err != nil {
return 0
}
return int(n)
}
// UpdateGoldenFiles indicates golden files should be updated.
// To update the golden files:
// make gen/golden-files
var UpdateGoldenFiles = flag.Bool("update", false, "update .golden files")
// assertDOTGraph requires that the graph's DOT representation matches the golden file
func assertDOTGraph(t *testing.T, graph *testGraph, goldenName string) {
t.Helper()
dot, err := graph.ToDOT(goldenName)
require.NoError(t, err)
goldenFile := filepath.Join("testdata", goldenName+".golden")
if *UpdateGoldenFiles {
t.Logf("update golden file for: %q: %s", goldenName, goldenFile)
err := os.MkdirAll(filepath.Dir(goldenFile), 0o755)
require.NoError(t, err, "want no error creating golden file directory")
err = os.WriteFile(goldenFile, []byte(dot), 0o600)
require.NoError(t, err, "update golden file")
}
expected, err := os.ReadFile(goldenFile)
require.NoError(t, err, "read golden file, run \"make gen/golden-files\" and commit the changes")
// Normalize line endings for cross-platform compatibility
expected = normalizeLineEndings(expected)
normalizedDot := normalizeLineEndings([]byte(dot))
assert.Empty(t, cmp.Diff(string(expected), string(normalizedDot)), "golden file mismatch (-want +got): %s, run \"make gen/golden-files\", verify and commit the changes", goldenFile)
}
// normalizeLineEndings ensures that all line endings are normalized to \n.
// Required for Windows compatibility.
func normalizeLineEndings(content []byte) []byte {
content = bytes.ReplaceAll(content, []byte("\r\n"), []byte("\n"))
content = bytes.ReplaceAll(content, []byte("\r"), []byte("\n"))
return content
}
func TestGraph(t *testing.T) {
t.Parallel()
testFuncs := map[string]func(t *testing.T) *unit.Graph[testGraphEdge, *testGraphVertex]{
"ForwardAndReverseEdges": func(t *testing.T) *unit.Graph[testGraphEdge, *testGraphVertex] {
graph := &unit.Graph[testGraphEdge, *testGraphVertex]{}
unit1 := &testGraphVertex{Name: "unit1"}
unit2 := &testGraphVertex{Name: "unit2"}
unit3 := &testGraphVertex{Name: "unit3"}
err := graph.AddEdge(unit1, unit2, testEdgeCompleted)
require.NoError(t, err)
err = graph.AddEdge(unit1, unit3, testEdgeStarted)
require.NoError(t, err)
// Check for forward edge
vertices := graph.GetForwardAdjacentVertices(unit1)
require.Len(t, vertices, 2)
// Unit 1 depends on the completion of Unit2
require.Contains(t, vertices, testEdge{
From: unit1,
To: unit2,
Edge: testEdgeCompleted,
})
// Unit 1 depends on the start of Unit3
require.Contains(t, vertices, testEdge{
From: unit1,
To: unit3,
Edge: testEdgeStarted,
})
// Check for reverse edges
unit2ReverseEdges := graph.GetReverseAdjacentVertices(unit2)
require.Len(t, unit2ReverseEdges, 1)
// Unit 2 must be completed before Unit 1 can start
require.Contains(t, unit2ReverseEdges, testEdge{
From: unit1,
To: unit2,
Edge: testEdgeCompleted,
})
unit3ReverseEdges := graph.GetReverseAdjacentVertices(unit3)
require.Len(t, unit3ReverseEdges, 1)
// Unit 3 must be started before Unit 1 can complete
require.Contains(t, unit3ReverseEdges, testEdge{
From: unit1,
To: unit3,
Edge: testEdgeStarted,
})
return graph
},
"SelfReference": func(t *testing.T) *testGraph {
graph := &testGraph{}
unit1 := &testGraphVertex{Name: "unit1"}
err := graph.AddEdge(unit1, unit1, testEdgeCompleted)
require.ErrorIs(t, err, unit.ErrCycleDetected)
return graph
},
"Cycle": func(t *testing.T) *testGraph {
graph := &testGraph{}
unit1 := &testGraphVertex{Name: "unit1"}
unit2 := &testGraphVertex{Name: "unit2"}
err := graph.AddEdge(unit1, unit2, testEdgeCompleted)
require.NoError(t, err)
err = graph.AddEdge(unit2, unit1, testEdgeStarted)
require.ErrorIs(t, err, unit.ErrCycleDetected)
return graph
},
"MultipleDependenciesSameStatus": func(t *testing.T) *testGraph {
graph := &testGraph{}
unit1 := &testGraphVertex{Name: "unit1"}
unit2 := &testGraphVertex{Name: "unit2"}
unit3 := &testGraphVertex{Name: "unit3"}
unit4 := &testGraphVertex{Name: "unit4"}
// Unit1 depends on completion of both unit2 and unit3 (same status type)
err := graph.AddEdge(unit1, unit2, testEdgeCompleted)
require.NoError(t, err)
err = graph.AddEdge(unit1, unit3, testEdgeCompleted)
require.NoError(t, err)
// Unit1 also depends on starting of unit4 (different status type)
err = graph.AddEdge(unit1, unit4, testEdgeStarted)
require.NoError(t, err)
// Check that unit1 has 3 forward dependencies
forwardEdges := graph.GetForwardAdjacentVertices(unit1)
require.Len(t, forwardEdges, 3)
// Verify all expected dependencies exist
expectedDependencies := []testEdge{
{From: unit1, To: unit2, Edge: testEdgeCompleted},
{From: unit1, To: unit3, Edge: testEdgeCompleted},
{From: unit1, To: unit4, Edge: testEdgeStarted},
}
for _, expected := range expectedDependencies {
require.Contains(t, forwardEdges, expected)
}
// Check reverse dependencies
unit2ReverseEdges := graph.GetReverseAdjacentVertices(unit2)
require.Len(t, unit2ReverseEdges, 1)
require.Contains(t, unit2ReverseEdges, testEdge{
From: unit1, To: unit2, Edge: testEdgeCompleted,
})
unit3ReverseEdges := graph.GetReverseAdjacentVertices(unit3)
require.Len(t, unit3ReverseEdges, 1)
require.Contains(t, unit3ReverseEdges, testEdge{
From: unit1, To: unit3, Edge: testEdgeCompleted,
})
unit4ReverseEdges := graph.GetReverseAdjacentVertices(unit4)
require.Len(t, unit4ReverseEdges, 1)
require.Contains(t, unit4ReverseEdges, testEdge{
From: unit1, To: unit4, Edge: testEdgeStarted,
})
return graph
},
}
for testName, testFunc := range testFuncs {
var graph *testGraph
t.Run(testName, func(t *testing.T) {
t.Parallel()
graph = testFunc(t)
assertDOTGraph(t, graph, testName)
})
}
}
func TestGraphThreadSafety(t *testing.T) {
t.Parallel()
t.Run("ConcurrentReadWrite", func(t *testing.T) {
t.Parallel()
graph := &testGraph{}
var wg sync.WaitGroup
const numWriters = 50
const numReaders = 100
const operationsPerWriter = 1000
const operationsPerReader = 2000
barrier := make(chan struct{})
// Launch writers
for i := 0; i < numWriters; i++ {
wg.Add(1)
go func(writerID int) {
defer wg.Done()
<-barrier
for j := 0; j < operationsPerWriter; j++ {
from := &testGraphVertex{Name: fmt.Sprintf("writer-%d-%d", writerID, j)}
to := &testGraphVertex{Name: fmt.Sprintf("writer-%d-%d", writerID, j+1)}
graph.AddEdge(from, to, testEdgeCompleted)
}
}(i)
}
// Launch readers
readerResults := make([]struct {
panicked bool
readCount int
}, numReaders)
for i := 0; i < numReaders; i++ {
wg.Add(1)
go func(readerID int) {
defer wg.Done()
<-barrier
defer func() {
if r := recover(); r != nil {
readerResults[readerID].panicked = true
}
}()
readCount := 0
for j := 0; j < operationsPerReader; j++ {
// Create a test vertex and read
testUnit := &testGraphVertex{Name: fmt.Sprintf("test-reader-%d-%d", readerID, j)}
forwardEdges := graph.GetForwardAdjacentVertices(testUnit)
reverseEdges := graph.GetReverseAdjacentVertices(testUnit)
// Just verify no panics (results may be nil for non-existent vertices)
_ = forwardEdges
_ = reverseEdges
readCount++
}
readerResults[readerID].readCount = readCount
}(i)
}
close(barrier)
wg.Wait()
// Verify no panics occurred in readers
for i, result := range readerResults {
require.False(t, result.panicked, "reader %d panicked", i)
require.Equal(t, operationsPerReader, result.readCount, "reader %d should have performed expected reads", i)
}
})
t.Run("ConcurrentCycleDetection", func(t *testing.T) {
t.Parallel()
graph := &testGraph{}
// Pre-create chain: A→B→C→D
unitA := &testGraphVertex{Name: "A"}
unitB := &testGraphVertex{Name: "B"}
unitC := &testGraphVertex{Name: "C"}
unitD := &testGraphVertex{Name: "D"}
err := graph.AddEdge(unitA, unitB, testEdgeCompleted)
require.NoError(t, err)
err = graph.AddEdge(unitB, unitC, testEdgeCompleted)
require.NoError(t, err)
err = graph.AddEdge(unitC, unitD, testEdgeCompleted)
require.NoError(t, err)
barrier := make(chan struct{})
var wg sync.WaitGroup
const numGoroutines = 50
cycleErrors := make([]error, numGoroutines)
// Launch goroutines trying to add D→A (creates cycle)
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func(goroutineID int) {
defer wg.Done()
<-barrier
err := graph.AddEdge(unitD, unitA, testEdgeCompleted)
cycleErrors[goroutineID] = err
}(i)
}
close(barrier)
wg.Wait()
// Verify all attempts correctly returned cycle error
for i, err := range cycleErrors {
require.Error(t, err, "goroutine %d should have detected cycle", i)
require.ErrorIs(t, err, unit.ErrCycleDetected)
}
// Verify graph remains valid (original chain intact)
dot, err := graph.ToDOT("test")
require.NoError(t, err)
require.NotEmpty(t, dot)
})
t.Run("ConcurrentToDOT", func(t *testing.T) {
t.Parallel()
graph := &testGraph{}
// Pre-populate graph
for i := 0; i < 20; i++ {
from := &testGraphVertex{Name: fmt.Sprintf("dot-unit-%d", i)}
to := &testGraphVertex{Name: fmt.Sprintf("dot-unit-%d", i+1)}
err := graph.AddEdge(from, to, testEdgeCompleted)
require.NoError(t, err)
}
barrier := make(chan struct{})
var wg sync.WaitGroup
const numReaders = 100
const numWriters = 20
dotResults := make([]string, numReaders)
// Launch readers calling ToDOT
dotErrors := make([]error, numReaders)
for i := 0; i < numReaders; i++ {
wg.Add(1)
go func(readerID int) {
defer wg.Done()
<-barrier
dot, err := graph.ToDOT(fmt.Sprintf("test-%d", readerID))
dotErrors[readerID] = err
if err == nil {
dotResults[readerID] = dot
}
}(i)
}
// Launch writers adding edges
for i := 0; i < numWriters; i++ {
wg.Add(1)
go func(writerID int) {
defer wg.Done()
<-barrier
from := &testGraphVertex{Name: fmt.Sprintf("writer-dot-%d", writerID)}
to := &testGraphVertex{Name: fmt.Sprintf("writer-dot-target-%d", writerID)}
graph.AddEdge(from, to, testEdgeCompleted)
}(i)
}
close(barrier)
wg.Wait()
// Verify no errors occurred during DOT generation
for i, err := range dotErrors {
require.NoError(t, err, "DOT generation error at index %d", i)
}
// Verify all DOT results are valid
for i, dot := range dotResults {
require.NotEmpty(t, dot, "DOT result %d should not be empty", i)
}
})
}
func BenchmarkGraph_ConcurrentMixedOperations(b *testing.B) {
graph := &testGraph{}
var wg sync.WaitGroup
const numGoroutines = 200
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Launch goroutines performing random operations
for j := 0; j < numGoroutines; j++ {
wg.Add(1)
go func(goroutineID int) {
defer wg.Done()
operationCount := 0
for operationCount < 50 {
operation := float32(randInt(100)) / 100.0
if operation < 0.6 { // 60% reads
// Read operation
testUnit := &testGraphVertex{Name: fmt.Sprintf("bench-read-%d-%d", goroutineID, operationCount)}
forwardEdges := graph.GetForwardAdjacentVertices(testUnit)
reverseEdges := graph.GetReverseAdjacentVertices(testUnit)
// Just verify no panics (results may be nil for non-existent vertices)
_ = forwardEdges
_ = reverseEdges
} else { // 40% writes
// Write operation
from := &testGraphVertex{Name: fmt.Sprintf("bench-write-%d-%d", goroutineID, operationCount)}
to := &testGraphVertex{Name: fmt.Sprintf("bench-write-target-%d-%d", goroutineID, operationCount)}
graph.AddEdge(from, to, testEdgeCompleted)
}
operationCount++
}
}(j)
}
wg.Wait()
}
}
-280
@@ -1,280 +0,0 @@
package unit
import (
"errors"
"sync"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/coderd/util/slice"
)
var (
ErrUnitIDRequired = xerrors.New("unit name is required")
ErrUnitNotFound = xerrors.New("unit not found")
ErrUnitAlreadyRegistered = xerrors.New("unit already registered")
ErrCannotUpdateOtherUnit = xerrors.New("cannot update other unit's status")
ErrDependenciesNotSatisfied = xerrors.New("unit dependencies not satisfied")
ErrSameStatusAlreadySet = xerrors.New("same status already set")
ErrCycleDetected = xerrors.New("cycle detected")
ErrFailedToAddDependency = xerrors.New("failed to add dependency")
)
// Status represents the status of a unit.
type Status string
// Status constants for dependency tracking.
const (
StatusNotRegistered Status = ""
StatusPending Status = "pending"
StatusStarted Status = "started"
StatusComplete Status = "completed"
)
// ID provides a type narrowed representation of the unique identifier of a unit.
type ID string
// Unit represents a point-in-time snapshot of a vertex in the dependency graph.
// Units may depend on other units, or be depended on by other units. The unit struct
// is not aware of updates made to the dependency graph after it is initialized and should
// not be cached.
type Unit struct {
id ID
status Status
// ready is true if all dependencies are satisfied.
// It does not have an accessor method on Unit, because a unit cannot know whether it is ready.
// Only the Manager can calculate whether a unit is ready based on knowledge of the dependency graph.
// To discourage use of an outdated readiness value, only the Manager should set and return this field.
ready bool
}
func (u Unit) ID() ID {
return u.id
}
func (u Unit) Status() Status {
return u.status
}
// Dependency represents a dependency relationship between units.
type Dependency struct {
Unit ID
DependsOn ID
RequiredStatus Status
CurrentStatus Status
IsSatisfied bool
}
// Manager provides reactive dependency tracking over a Graph.
// It manages Unit registration, dependency relationships, and status updates
// with automatic recalculation of readiness when dependencies are satisfied.
type Manager struct {
mu sync.RWMutex
// The underlying graph that stores dependency relationships
graph *Graph[Status, ID]
// Store vertex instances for each unit to ensure consistent references
units map[ID]Unit
}
// NewManager creates a new Manager instance.
func NewManager() *Manager {
return &Manager{
graph: &Graph[Status, ID]{},
units: make(map[ID]Unit),
}
}
// Register adds a unit to the manager if it is not already registered.
// If a Unit is already registered (per the ID field), it is left unchanged and ErrUnitAlreadyRegistered is returned.
func (m *Manager) Register(id ID) error {
m.mu.Lock()
defer m.mu.Unlock()
if id == "" {
return xerrors.Errorf("registering unit %q: %w", id, ErrUnitIDRequired)
}
if m.registered(id) {
return xerrors.Errorf("registering unit %q: %w", id, ErrUnitAlreadyRegistered)
}
m.units[id] = Unit{
id: id,
status: StatusPending,
ready: true,
}
return nil
}
// registered checks if a unit is registered in the manager.
func (m *Manager) registered(id ID) bool {
return m.units[id].status != StatusNotRegistered
}
// Unit fetches a unit from the manager. If the unit does not exist,
// it returns the Unit zero-value as a placeholder unit, because
// units may depend on other units that have not yet been created.
func (m *Manager) Unit(id ID) (Unit, error) {
if id == "" {
return Unit{}, xerrors.Errorf("unit ID cannot be empty: %w", ErrUnitIDRequired)
}
m.mu.RLock()
defer m.mu.RUnlock()
return m.units[id], nil
}
func (m *Manager) IsReady(id ID) (bool, error) {
if id == "" {
return false, xerrors.Errorf("unit ID cannot be empty: %w", ErrUnitIDRequired)
}
m.mu.RLock()
defer m.mu.RUnlock()
if !m.registered(id) {
return false, nil
}
return m.units[id].ready, nil
}
// AddDependency adds a dependency relationship between units.
// The unit depends on the dependsOn unit reaching the requiredStatus.
func (m *Manager) AddDependency(unit ID, dependsOn ID, requiredStatus Status) error {
m.mu.Lock()
defer m.mu.Unlock()
switch {
case unit == "":
return xerrors.Errorf("dependent name cannot be empty: %w", ErrUnitIDRequired)
case dependsOn == "":
return xerrors.Errorf("dependency name cannot be empty: %w", ErrUnitIDRequired)
case !m.registered(unit):
return xerrors.Errorf("dependent unit %q must be registered first: %w", unit, ErrUnitNotFound)
}
// Add the dependency edge to the graph
// The edge goes from unit to dependsOn, representing the dependency
err := m.graph.AddEdge(unit, dependsOn, requiredStatus)
if err != nil {
return xerrors.Errorf("adding edge for unit %q: %w", unit, errors.Join(ErrFailedToAddDependency, err))
}
// Recalculate readiness for the unit since it now has a new dependency
m.recalculateReadinessUnsafe(unit)
return nil
}
// UpdateStatus updates a unit's status and recalculates readiness for affected dependents.
func (m *Manager) UpdateStatus(unit ID, newStatus Status) error {
m.mu.Lock()
defer m.mu.Unlock()
switch {
case unit == "":
return xerrors.Errorf("updating status for unit %q: %w", unit, ErrUnitIDRequired)
case !m.registered(unit):
return xerrors.Errorf("unit %q must be registered first: %w", unit, ErrUnitNotFound)
}
u := m.units[unit]
if u.status == newStatus {
return xerrors.Errorf("checking status for unit %q: %w", unit, ErrSameStatusAlreadySet)
}
u.status = newStatus
m.units[unit] = u
// Get all units that depend on this one (reverse adjacent vertices)
dependents := m.graph.GetReverseAdjacentVertices(unit)
// Recalculate readiness for all dependents
for _, dependent := range dependents {
m.recalculateReadinessUnsafe(dependent.From)
}
return nil
}
// recalculateReadinessUnsafe recalculates the readiness state for a unit.
// This method assumes the caller holds the write lock.
func (m *Manager) recalculateReadinessUnsafe(unit ID) {
u := m.units[unit]
dependencies := m.graph.GetForwardAdjacentVertices(unit)
allSatisfied := true
for _, dependency := range dependencies {
requiredStatus := dependency.Edge
dependsOnUnit := m.units[dependency.To]
if dependsOnUnit.status != requiredStatus {
allSatisfied = false
break
}
}
u.ready = allSatisfied
m.units[unit] = u
}
// GetGraph returns the underlying graph for visualization and debugging.
// This should be used carefully as it exposes the internal graph structure.
func (m *Manager) GetGraph() *Graph[Status, ID] {
return m.graph
}
// GetAllDependencies returns all dependencies for a unit, both satisfied and unsatisfied.
func (m *Manager) GetAllDependencies(unit ID) ([]Dependency, error) {
m.mu.RLock()
defer m.mu.RUnlock()
if unit == "" {
return nil, xerrors.Errorf("unit ID cannot be empty: %w", ErrUnitIDRequired)
}
if !m.registered(unit) {
return nil, xerrors.Errorf("checking registration for unit %q: %w", unit, ErrUnitNotFound)
}
dependencies := m.graph.GetForwardAdjacentVertices(unit)
var allDependencies []Dependency
for _, dependency := range dependencies {
dependsOnUnit := m.units[dependency.To]
requiredStatus := dependency.Edge
allDependencies = append(allDependencies, Dependency{
Unit: unit,
DependsOn: dependency.To,
RequiredStatus: requiredStatus,
CurrentStatus: dependsOnUnit.status,
IsSatisfied: dependsOnUnit.status == requiredStatus,
})
}
return allDependencies, nil
}
// GetUnmetDependencies returns a list of unsatisfied dependencies for a unit.
func (m *Manager) GetUnmetDependencies(unit ID) ([]Dependency, error) {
allDependencies, err := m.GetAllDependencies(unit)
if err != nil {
return nil, err
}
var unmetDependencies []Dependency = slice.Filter(allDependencies, func(dependency Dependency) bool {
return !dependency.IsSatisfied
})
return unmetDependencies, nil
}
// ExportDOT exports the dependency graph to DOT format for visualization.
func (m *Manager) ExportDOT(name string) (string, error) {
return m.graph.ToDOT(name)
}
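A minimal usage sketch of the Manager API above, assuming only the functions and statuses that appear in this package and its tests; the "api" and "db" unit IDs are illustrative:
package main

import (
	"fmt"

	"github.com/coder/coder/v2/agent/unit"
)

func main() {
	m := unit.NewManager()
	// Freshly registered units have no dependencies and are ready immediately.
	_ = m.Register(unit.ID("api"))
	_ = m.Register(unit.ID("db"))
	// "api" becomes ready only once "db" reaches StatusStarted.
	_ = m.AddDependency(unit.ID("api"), unit.ID("db"), unit.StatusStarted)

	ready, _ := m.IsReady(unit.ID("api"))
	fmt.Println(ready) // false: "db" is still pending

	_ = m.UpdateStatus(unit.ID("db"), unit.StatusStarted)
	ready, _ = m.IsReady(unit.ID("api"))
	fmt.Println(ready) // true: the dependency is now satisfied
}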
-743
@@ -1,743 +0,0 @@
package unit_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/agent/unit"
)
const (
unitA unit.ID = "serviceA"
unitB unit.ID = "serviceB"
unitC unit.ID = "serviceC"
unitD unit.ID = "serviceD"
)
func TestManager_UnitValidation(t *testing.T) {
t.Parallel()
t.Run("Empty Unit Name", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
err := manager.Register("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
err = manager.AddDependency("", unitA, unit.StatusStarted)
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
err = manager.AddDependency(unitA, "", unit.StatusStarted)
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
dependencies, err := manager.GetAllDependencies("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
require.Len(t, dependencies, 0)
unmetDependencies, err := manager.GetUnmetDependencies("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
require.Len(t, unmetDependencies, 0)
err = manager.UpdateStatus("", unit.StatusStarted)
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
isReady, err := manager.IsReady("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
require.False(t, isReady)
u, err := manager.Unit("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
assert.Equal(t, unit.Unit{}, u)
})
}
func TestManager_Register(t *testing.T) {
t.Parallel()
t.Run("RegisterNewUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: a unit is registered
err := manager.Register(unitA)
require.NoError(t, err)
// Then: the unit should be ready (no dependencies)
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unitA, u.ID())
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("RegisterDuplicateUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: a unit is registered
err := manager.Register(unitA)
require.NoError(t, err)
// Newly registered units have StatusPending. We update the unit status to StatusStarted,
// so we can later assert that it is not overwritten back to StatusPending by the second
// register call
manager.UpdateStatus(unitA, unit.StatusStarted)
// When: the unit is registered again
err = manager.Register(unitA)
// Then: a descriptive error should be returned
require.ErrorIs(t, err, unit.ErrUnitAlreadyRegistered)
// Then: the unit status should not be overwritten
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusStarted, u.Status())
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("RegisterMultipleUnits", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: multiple units are registered
unitIDs := []unit.ID{unitA, unitB, unitC}
for _, unit := range unitIDs {
err := manager.Register(unit)
require.NoError(t, err)
}
// Then: all units should be ready initially
for _, unitID := range unitIDs {
u, err := manager.Unit(unitID)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitID)
require.NoError(t, err)
assert.True(t, isReady)
}
})
}
func TestManager_AddDependency(t *testing.T) {
t.Parallel()
t.Run("AddDependencyBetweenRegisteredUnits", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: units A and B are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given: Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should not be ready (depends on B)
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Then: Unit B should still be ready (no dependencies)
u, err = manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.True(t, isReady)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should be ready, because its dependency is now in the desired state.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
// When: Unit B is stopped
err = manager.UpdateStatus(unitB, unit.StatusPending)
require.NoError(t, err)
// Then: Unit A should no longer be ready, because its dependency is not in the desired state.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
})
t.Run("AddDependencyByAnUnregisteredDependentUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given Unit B is registered
err := manager.Register(unitB)
require.NoError(t, err)
// Given Unit A depends on Unit B being started
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
// Then: a descriptive error communicates that the dependency cannot be added
// because the dependent unit must be registered first.
require.ErrorIs(t, err, unit.ErrUnitNotFound)
})
t.Run("AddDependencyOnAnUnregisteredUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given unit A is registered
err := manager.Register(unitA)
require.NoError(t, err)
// Given Unit B is not yet registered
// And Unit A depends on Unit B being started
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: The dependency should be visible in Unit A's status
dependencies, err := manager.GetAllDependencies(unitA)
require.NoError(t, err)
require.Len(t, dependencies, 1)
assert.Equal(t, unitB, dependencies[0].DependsOn)
assert.Equal(t, unit.StatusStarted, dependencies[0].RequiredStatus)
assert.False(t, dependencies[0].IsSatisfied)
u, err := manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusNotRegistered, u.Status())
// Then: Unit A should not be ready, because it depends on Unit B
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit B is registered
err = manager.Register(unitB)
require.NoError(t, err)
// Then: Unit A should still not be ready.
// Unit B is now registered, but it has not yet been started as required by the dependency.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should be ready, because its dependency is now in the desired state.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("AddDependencyCreatesACyclicDependency", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register units
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
err = manager.Register(unitC)
require.NoError(t, err)
err = manager.Register(unitD)
require.NoError(t, err)
// A depends on B
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// B depends on C
err = manager.AddDependency(unitB, unitC, unit.StatusStarted)
require.NoError(t, err)
// C depends on D
err = manager.AddDependency(unitC, unitD, unit.StatusStarted)
require.NoError(t, err)
// Try to make D depend on A (creates indirect cycle)
err = manager.AddDependency(unitD, unitA, unit.StatusStarted)
require.ErrorIs(t, err, unit.ErrCycleDetected)
})
t.Run("UpdatingADependency", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given units A and B are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// When: The dependency is updated to unit.StatusComplete
err = manager.AddDependency(unitA, unitB, unit.StatusComplete)
require.NoError(t, err)
// Then: Unit A should only have one dependency, and it should be unit.StatusComplete
dependencies, err := manager.GetAllDependencies(unitA)
require.NoError(t, err)
require.Len(t, dependencies, 1)
assert.Equal(t, unit.StatusComplete, dependencies[0].RequiredStatus)
})
}
func TestManager_UpdateStatus(t *testing.T) {
t.Parallel()
t.Run("UpdateStatusTriggersReadinessRecalculation", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given units A and B are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should not be ready (depends on B)
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should be ready, because its dependency is now in the desired state.
u, err = manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("UpdateStatusWithUnregisteredUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given Unit A is not registered
// When: Unit A is updated to unit.StatusStarted
err := manager.UpdateStatus(unitA, unit.StatusStarted)
// Then: a descriptive error communicates that the unit must be registered first.
require.ErrorIs(t, err, unit.ErrUnitNotFound)
})
t.Run("LinearChainDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given units A, B, and C are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
err = manager.Register(unitC)
require.NoError(t, err)
// Create chain: A depends on B being "started", B depends on C being "completed"
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitB, unitC, unit.StatusComplete)
require.NoError(t, err)
// Then: only Unit C should be ready (no dependencies)
u, err := manager.Unit(unitC)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitC)
require.NoError(t, err)
assert.True(t, isReady)
u, err = manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.False(t, isReady)
u, err = manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit C is completed
err = manager.UpdateStatus(unitC, unit.StatusComplete)
require.NoError(t, err)
// Then: Unit B should be ready, because its dependency is now in the desired state.
u, err = manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.True(t, isReady)
u, err = manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
u, err = manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.True(t, isReady)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should be ready, because its dependency is now in the desired state.
u, err = manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
}
func TestManager_GetUnmetDependencies(t *testing.T) {
t.Parallel()
t.Run("GetUnmetDependenciesForUnitWithNoDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: Unit A is registered
err := manager.Register(unitA)
require.NoError(t, err)
// Given: Unit A has no dependencies
// Then: Unit A should have no unmet dependencies
unmet, err := manager.GetUnmetDependencies(unitA)
require.NoError(t, err)
assert.Empty(t, unmet)
})
t.Run("GetUnmetDependenciesForUnitWithUnsatisfiedDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given: Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
unmet, err := manager.GetUnmetDependencies(unitA)
require.NoError(t, err)
require.Len(t, unmet, 1)
assert.Equal(t, unitA, unmet[0].Unit)
assert.Equal(t, unitB, unmet[0].DependsOn)
assert.Equal(t, unit.StatusStarted, unmet[0].RequiredStatus)
assert.False(t, unmet[0].IsSatisfied)
})
t.Run("GetUnmetDependenciesForUnitWithSatisfiedDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: Unit A and Unit B are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given: Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should have no unmet dependencies
unmet, err := manager.GetUnmetDependencies(unitA)
require.NoError(t, err)
assert.Empty(t, unmet)
})
t.Run("GetUnmetDependenciesForUnregisteredUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// When: Unit A is requested
unmet, err := manager.GetUnmetDependencies(unitA)
// Then: a descriptive error communicates that the unit must be registered first.
require.ErrorIs(t, err, unit.ErrUnitNotFound)
assert.Nil(t, unmet)
})
}
func TestManager_MultipleDependencies(t *testing.T) {
t.Parallel()
t.Run("UnitWithMultipleDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register all units
units := []unit.ID{unitA, unitB, unitC, unitD}
for _, unit := range units {
err := manager.Register(unit)
require.NoError(t, err)
}
// A depends on B being unit.StatusStarted AND C being "started"
err := manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitA, unitC, unit.StatusStarted)
require.NoError(t, err)
// A should not be ready (depends on both B and C)
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update B to unit.StatusStarted - A should still not be ready (needs C too)
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update C to "started" - A should now be ready
err = manager.UpdateStatus(unitC, unit.StatusStarted)
require.NoError(t, err)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("ComplexDependencyChain", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register all units
units := []unit.ID{unitA, unitB, unitC, unitD}
for _, unit := range units {
err := manager.Register(unit)
require.NoError(t, err)
}
// Create complex dependency graph:
// A depends on B being unit.StatusStarted AND C being "started"
err := manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitA, unitC, unit.StatusStarted)
require.NoError(t, err)
// B depends on D being "completed"
err = manager.AddDependency(unitB, unitD, unit.StatusComplete)
require.NoError(t, err)
// C depends on D being "completed"
err = manager.AddDependency(unitC, unitD, unit.StatusComplete)
require.NoError(t, err)
// Initially only D is ready
isReady, err := manager.IsReady(unitD)
require.NoError(t, err)
assert.True(t, isReady)
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.False(t, isReady)
isReady, err = manager.IsReady(unitC)
require.NoError(t, err)
assert.False(t, isReady)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update D to "completed" - B and C should become ready
err = manager.UpdateStatus(unitD, unit.StatusComplete)
require.NoError(t, err)
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.True(t, isReady)
isReady, err = manager.IsReady(unitC)
require.NoError(t, err)
assert.True(t, isReady)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update B to unit.StatusStarted - A should still not be ready (needs C)
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update C to "started" - A should now be ready
err = manager.UpdateStatus(unitC, unit.StatusStarted)
require.NoError(t, err)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("DifferentStatusTypes", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register units
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
err = manager.Register(unitC)
require.NoError(t, err)
// Given: Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// Given: Unit A depends on Unit C being "completed"
err = manager.AddDependency(unitA, unitC, unit.StatusComplete)
require.NoError(t, err)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should not be ready, because only one of its dependencies is in the desired state.
// It still requires Unit C to be completed.
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit C is completed
err = manager.UpdateStatus(unitC, unit.StatusComplete)
require.NoError(t, err)
// Then: Unit A should be ready, because both of its dependencies are in the desired state.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
}
func TestManager_IsReady(t *testing.T) {
t.Parallel()
t.Run("IsReadyWithUnregisteredUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: a unit is not registered
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusNotRegistered, u.Status())
// Then: the unit is not ready
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
})
}
func TestManager_ToDOT(t *testing.T) {
t.Parallel()
t.Run("ExportSimpleGraph", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register units
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Add dependency
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
dot, err := manager.ExportDOT("test")
require.NoError(t, err)
assert.NotEmpty(t, dot)
assert.Contains(t, dot, "digraph")
})
t.Run("ExportComplexGraph", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register all units
units := []unit.ID{unitA, unitB, unitC, unitD}
for _, unit := range units {
err := manager.Register(unit)
require.NoError(t, err)
}
// Create complex dependency graph
// A depends on B and C, B depends on D, C depends on D
err := manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitA, unitC, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitB, unitD, unit.StatusComplete)
require.NoError(t, err)
err = manager.AddDependency(unitC, unitD, unit.StatusComplete)
require.NoError(t, err)
dot, err := manager.ExportDOT("complex")
require.NoError(t, err)
assert.NotEmpty(t, dot)
assert.Contains(t, dot, "digraph")
})
}
-8
@@ -1,8 +0,0 @@
strict digraph Cycle {
// Node definitions.
1;
2;
// Edge definitions.
1 -> 2;
}
-10
@@ -1,10 +0,0 @@
strict digraph ForwardAndReverseEdges {
// Node definitions.
1;
2;
3;
// Edge definitions.
1 -> 2;
1 -> 3;
}
@@ -1,12 +0,0 @@
strict digraph MultipleDependenciesSameStatus {
// Node definitions.
1;
2;
3;
4;
// Edge definitions.
1 -> 2;
1 -> 3;
1 -> 4;
}
-4
@@ -1,4 +0,0 @@
strict digraph SelfReference {
// Node definitions.
1;
}
+11 -5
@@ -6,7 +6,10 @@
"defaultBranch": "main"
},
"files": {
"includes": ["**", "!**/pnpm-lock.yaml"],
"includes": [
"**",
"!**/pnpm-lock.yaml"
],
"ignoreUnknown": true
},
"linter": {
@@ -45,14 +48,13 @@
"options": {
"paths": {
"@mui/material": "Use @mui/material/<name> instead. See: https://material-ui.com/guides/minimizing-bundle-size/.",
"@mui/icons-material": "Use @mui/icons-material/<name> instead. See: https://material-ui.com/guides/minimizing-bundle-size/.",
"@mui/material/Avatar": "Use components/Avatar/Avatar instead.",
"@mui/material/Alert": "Use components/Alert/Alert instead.",
"@mui/material/Popover": "Use components/Popover/Popover instead.",
"@mui/material/Typography": "Use native HTML elements instead. Eg: <span>, <p>, <h1>, etc.",
"@mui/material/Box": "Use a <div> instead.",
"@mui/material/Button": "Use a components/Button/Button instead.",
"@mui/material/styles": "Import from @emotion/react instead.",
"@mui/material/Table*": "Import from components/Table/Table instead.",
"lodash": "Use lodash/<name> instead."
}
}
@@ -67,7 +69,11 @@
"noConsole": {
"level": "error",
"options": {
"allow": ["error", "info", "warn"]
"allow": [
"error",
"info",
"warn"
]
}
}
},
@@ -76,5 +82,5 @@
}
}
},
"$schema": "./node_modules/@biomejs/biome/configuration_schema.json"
"$schema": "https://biomejs.dev/schemas/2.2.0/schema.json"
}
+17 -45
@@ -11,7 +11,6 @@ import (
"os"
"path/filepath"
"runtime"
"slices"
"strconv"
"strings"
"time"
@@ -202,15 +201,18 @@ func workspaceAgent() *serpent.Command {
// Enable pprof handler
// This prevents the pprof import from being accidentally deleted.
_ = pprof.Handler
if pprofAddress != "" {
pprofSrvClose := ServeHandler(ctx, logger, nil, pprofAddress, "pprof")
defer pprofSrvClose()
pprofSrvClose := ServeHandler(ctx, logger, nil, pprofAddress, "pprof")
defer pprofSrvClose()
if port, err := extractPort(pprofAddress); err == nil {
ignorePorts[port] = "pprof"
}
if port, err := extractPort(pprofAddress); err == nil {
ignorePorts[port] = "pprof"
}
} else {
logger.Debug(ctx, "pprof address is empty, disabling pprof server")
if port, err := extractPort(prometheusAddress); err == nil {
ignorePorts[port] = "prometheus"
}
if port, err := extractPort(debugAddress); err == nil {
ignorePorts[port] = "debug"
}
executablePath, err := os.Executable()
@@ -274,28 +276,6 @@ func workspaceAgent() *serpent.Command {
for {
prometheusRegistry := prometheus.NewRegistry()
promHandler := agent.PrometheusMetricsHandler(prometheusRegistry, logger)
var serverClose []func()
if prometheusAddress != "" {
prometheusSrvClose := ServeHandler(ctx, logger, promHandler, prometheusAddress, "prometheus")
serverClose = append(serverClose, prometheusSrvClose)
if port, err := extractPort(prometheusAddress); err == nil {
ignorePorts[port] = "prometheus"
}
} else {
logger.Debug(ctx, "prometheus address is empty, disabling prometheus server")
}
if debugAddress != "" {
// ServerHandle depends on `agnt.HTTPDebug()`, but `agnt`
// depends on `ignorePorts`. Keep this if statement in sync
// with below.
if port, err := extractPort(debugAddress); err == nil {
ignorePorts[port] = "debug"
}
}
agnt := agent.New(agent.Options{
Client: client,
Logger: logger,
@@ -319,15 +299,10 @@ func workspaceAgent() *serpent.Command {
},
})
if debugAddress != "" {
// ServerHandle depends on `agnt.HTTPDebug()`, but `agnt`
// depends on `ignorePorts`. Keep this if statement in sync
// with above.
debugSrvClose := ServeHandler(ctx, logger, agnt.HTTPDebug(), debugAddress, "debug")
serverClose = append(serverClose, debugSrvClose)
} else {
logger.Debug(ctx, "debug address is empty, disabling debug server")
}
promHandler := agent.PrometheusMetricsHandler(prometheusRegistry, logger)
prometheusSrvClose := ServeHandler(ctx, logger, promHandler, prometheusAddress, "prometheus")
debugSrvClose := ServeHandler(ctx, logger, agnt.HTTPDebug(), debugAddress, "debug")
select {
case <-ctx.Done():
@@ -339,11 +314,8 @@ func workspaceAgent() *serpent.Command {
}
lastErr = agnt.Close()
slices.Reverse(serverClose)
for _, closeFunc := range serverClose {
closeFunc()
}
debugSrvClose()
prometheusSrvClose()
if mustExit {
break
-45
@@ -178,51 +178,6 @@ func TestWorkspaceAgent(t *testing.T) {
require.Greater(t, atomic.LoadInt64(&called), int64(0), "expected coderd to be reached with custom headers")
require.Greater(t, atomic.LoadInt64(&derpCalled), int64(0), "expected /derp to be called with custom headers")
})
t.Run("DisabledServers", func(t *testing.T) {
t.Parallel()
client, db := coderdtest.NewWithDatabase(t, nil)
user := coderdtest.CreateFirstUser(t, client)
r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
logDir := t.TempDir()
inv, _ := clitest.New(t,
"agent",
"--auth", "token",
"--agent-token", r.AgentToken,
"--agent-url", client.URL.String(),
"--log-dir", logDir,
"--pprof-address", "",
"--prometheus-address", "",
"--debug-address", "",
)
clitest.Start(t, inv)
// Verify the agent is connected and working.
resources := coderdtest.NewWorkspaceAgentWaiter(t, client, r.Workspace.ID).
MatchResources(matchAgentWithVersion).Wait()
require.Len(t, resources, 1)
require.Len(t, resources[0].Agents, 1)
require.NotEmpty(t, resources[0].Agents[0].Version)
// Verify the servers are not listening by checking the log for disabled
// messages.
require.Eventually(t, func() bool {
logContent, err := os.ReadFile(filepath.Join(logDir, "coder-agent.log"))
if err != nil {
return false
}
logStr := string(logContent)
return strings.Contains(logStr, "pprof address is empty, disabling pprof server") &&
strings.Contains(logStr, "prometheus address is empty, disabling prometheus server") &&
strings.Contains(logStr, "debug address is empty, disabling debug server")
}, testutil.WaitLong, testutil.IntervalMedium)
})
}
func matchAgentWithVersion(rs []codersdk.WorkspaceResource) bool {
-78
@@ -1,78 +0,0 @@
package cli
import (
"encoding/csv"
"strings"
"github.com/spf13/pflag"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/codersdk"
)
var (
_ pflag.SliceValue = &AllowListFlag{}
_ pflag.Value = &AllowListFlag{}
)
// AllowListFlag implements pflag.SliceValue for codersdk.APIAllowListTarget entries.
type AllowListFlag []codersdk.APIAllowListTarget
func AllowListFlagOf(al *[]codersdk.APIAllowListTarget) *AllowListFlag {
return (*AllowListFlag)(al)
}
func (a AllowListFlag) String() string {
return strings.Join(a.GetSlice(), ",")
}
func (a AllowListFlag) Value() []codersdk.APIAllowListTarget {
return []codersdk.APIAllowListTarget(a)
}
func (AllowListFlag) Type() string { return "allow-list" }
func (a *AllowListFlag) Set(set string) error {
values, err := csv.NewReader(strings.NewReader(set)).Read()
if err != nil {
return xerrors.Errorf("parse allow list entries as csv: %w", err)
}
for _, v := range values {
if err := a.Append(v); err != nil {
return err
}
}
return nil
}
func (a *AllowListFlag) Append(value string) error {
value = strings.TrimSpace(value)
if value == "" {
return xerrors.New("allow list entry cannot be empty")
}
var target codersdk.APIAllowListTarget
if err := target.UnmarshalText([]byte(value)); err != nil {
return err
}
*a = append(*a, target)
return nil
}
func (a *AllowListFlag) Replace(items []string) error {
*a = []codersdk.APIAllowListTarget{}
for _, item := range items {
if err := a.Append(item); err != nil {
return err
}
}
return nil
}
func (a *AllowListFlag) GetSlice() []string {
out := make([]string, len(*a))
for i, entry := range *a {
out[i] = entry.String()
}
return out
}
+1 -26
@@ -53,9 +53,6 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
t := time.NewTimer(0)
defer t.Stop()
startTime := time.Now()
baseInterval := opts.FetchInterval
for {
select {
case <-ctx.Done():
@@ -71,11 +68,7 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
return
}
fetchedAgent <- fetchAgent{agent: agent}
// Adjust the interval based on how long we've been waiting.
elapsed := time.Since(startTime)
currentInterval := GetProgressiveInterval(baseInterval, elapsed)
t.Reset(currentInterval)
t.Reset(opts.FetchInterval)
}
}
}()
@@ -300,24 +293,6 @@ func safeDuration(sw *stageWriter, a, b *time.Time) time.Duration {
return a.Sub(*b)
}
// GetProgressiveInterval returns an interval that increases over time.
// The interval starts at baseInterval and increases to
// a maximum of baseInterval * 16 over time.
func GetProgressiveInterval(baseInterval time.Duration, elapsed time.Duration) time.Duration {
switch {
case elapsed < 60*time.Second:
return baseInterval // 500ms for first 60 seconds
case elapsed < 2*time.Minute:
return baseInterval * 2 // 1s for next 1 minute
case elapsed < 5*time.Minute:
return baseInterval * 4 // 2s for next 3 minutes
case elapsed < 10*time.Minute:
return baseInterval * 8 // 4s for next 5 minutes
default:
return baseInterval * 16 // 8s after 10 minutes
}
}
type closeFunc func() error
func (c closeFunc) Close() error {
-28
@@ -866,31 +866,3 @@ func TestConnDiagnostics(t *testing.T) {
})
}
}
func TestGetProgressiveInterval(t *testing.T) {
t.Parallel()
baseInterval := 500 * time.Millisecond
testCases := []struct {
name string
elapsed time.Duration
expected time.Duration
}{
{"first_minute", 30 * time.Second, baseInterval},
{"second_minute", 90 * time.Second, baseInterval * 2},
{"third_to_fifth_minute", 3 * time.Minute, baseInterval * 4},
{"sixth_to_tenth_minute", 7 * time.Minute, baseInterval * 8},
{"after_ten_minutes", 15 * time.Minute, baseInterval * 16},
{"boundary_first_minute", 59 * time.Second, baseInterval},
{"boundary_second_minute", 61 * time.Second, baseInterval * 2},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
result := cliui.GetProgressiveInterval(baseInterval, tc.elapsed)
require.Equal(t, tc.expected, result)
})
}
}
+9 -18
@@ -296,23 +296,22 @@ func renderTable(out any, sort string, headers table.Row, filterColumns []string
// returned. If the table tag is malformed, an error is returned.
//
// The returned name is transformed from "snake_case" to "normal text".
func parseTableStructTag(field reflect.StructField) (name string, defaultSort, noSortOpt, recursive, skipParentName, emptyNil bool, err error) {
func parseTableStructTag(field reflect.StructField) (name string, defaultSort, noSortOpt, recursive, skipParentName bool, err error) {
tags, err := structtag.Parse(string(field.Tag))
if err != nil {
return "", false, false, false, false, false, xerrors.Errorf("parse struct field tag %q: %w", string(field.Tag), err)
return "", false, false, false, false, xerrors.Errorf("parse struct field tag %q: %w", string(field.Tag), err)
}
tag, err := tags.Get("table")
if err != nil || tag.Name == "-" {
// tags.Get only returns an error if the tag is not found.
return "", false, false, false, false, false, nil
return "", false, false, false, false, nil
}
defaultSortOpt := false
noSortOpt = false
recursiveOpt := false
skipParentNameOpt := false
emptyNilOpt := false
for _, opt := range tag.Options {
switch opt {
case "default_sort":
@@ -327,14 +326,12 @@ func parseTableStructTag(field reflect.StructField) (name string, defaultSort, n
// make sure the child name is unique across all nested structs in the parent.
recursiveOpt = true
skipParentNameOpt = true
case "empty_nil":
emptyNilOpt = true
default:
return "", false, false, false, false, false, xerrors.Errorf("unknown option %q in struct field tag", opt)
return "", false, false, false, false, xerrors.Errorf("unknown option %q in struct field tag", opt)
}
}
return strings.ReplaceAll(tag.Name, "_", " "), defaultSortOpt, noSortOpt, recursiveOpt, skipParentNameOpt, emptyNilOpt, nil
return strings.ReplaceAll(tag.Name, "_", " "), defaultSortOpt, noSortOpt, recursiveOpt, skipParentNameOpt, nil
}
func isStructOrStructPointer(t reflect.Type) bool {
@@ -361,7 +358,7 @@ func typeToTableHeaders(t reflect.Type, requireDefault bool) ([]string, string,
noSortOpt := false
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
name, defaultSort, noSort, recursive, skip, _, err := parseTableStructTag(field)
name, defaultSort, noSort, recursive, skip, err := parseTableStructTag(field)
if err != nil {
return nil, "", xerrors.Errorf("parse struct tags for field %q in type %q: %w", field.Name, t.String(), err)
}
@@ -438,7 +435,7 @@ func valueToTableMap(val reflect.Value) (map[string]any, error) {
for i := 0; i < val.NumField(); i++ {
field := val.Type().Field(i)
fieldVal := val.Field(i)
name, _, _, recursive, skip, emptyNil, err := parseTableStructTag(field)
name, _, _, recursive, skip, err := parseTableStructTag(field)
if err != nil {
return nil, xerrors.Errorf("parse struct tags for field %q in type %T: %w", field.Name, val, err)
}
@@ -446,14 +443,8 @@ func valueToTableMap(val reflect.Value) (map[string]any, error) {
continue
}
fieldType := field.Type
// If empty_nil is set and this is a nil pointer, use a zero value.
if emptyNil && fieldVal.Kind() == reflect.Pointer && fieldVal.IsNil() {
fieldVal = reflect.New(fieldType.Elem())
}
// Recurse if it's a struct.
fieldType := field.Type
if recursive {
if !isStructOrStructPointer(fieldType) {
return nil, xerrors.Errorf("field %q in type %q is marked as recursive but does not contain a struct or a pointer to a struct", field.Name, fieldType.String())
@@ -476,7 +467,7 @@ func valueToTableMap(val reflect.Value) (map[string]any, error) {
}
// Otherwise, we just use the field value.
row[name] = fieldVal.Interface()
row[name] = val.Field(i).Interface()
}
return row, nil
-72
@@ -400,78 +400,6 @@ foo <nil> 10 [a, b, c] foo1 11 foo2 12 fo
})
})
})
t.Run("EmptyNil", func(t *testing.T) {
t.Parallel()
type emptyNilTest struct {
Name string `table:"name,default_sort"`
EmptyOnNil *string `table:"empty_on_nil,empty_nil"`
NormalBehavior *string `table:"normal_behavior"`
}
value := "value"
in := []emptyNilTest{
{
Name: "has_value",
EmptyOnNil: &value,
NormalBehavior: &value,
},
{
Name: "has_nil",
EmptyOnNil: nil,
NormalBehavior: nil,
},
}
expected := `
NAME EMPTY ON NIL NORMAL BEHAVIOR
has_nil <nil>
has_value value value
`
out, err := cliui.DisplayTable(in, "", nil)
log.Println("rendered table:\n" + out)
require.NoError(t, err)
compareTables(t, expected, out)
})
t.Run("EmptyNilWithRecursiveInline", func(t *testing.T) {
t.Parallel()
type nestedData struct {
Name string `table:"name"`
}
type inlineTest struct {
Nested *nestedData `table:"ignored,recursive_inline,empty_nil"`
Count int `table:"count,default_sort"`
}
in := []inlineTest{
{
Nested: &nestedData{
Name: "alice",
},
Count: 1,
},
{
Nested: nil,
Count: 2,
},
}
expected := `
NAME COUNT
alice 1
2
`
out, err := cliui.DisplayTable(in, "", nil)
log.Println("rendered table:\n" + out)
require.NoError(t, err)
compareTables(t, expected, out)
})
}
// compareTables normalizes the incoming table lines
+44 -48
@@ -577,57 +577,53 @@ func prepWorkspaceBuild(inv *serpent.Invocation, client *codersdk.Client, args p
return nil, xerrors.Errorf("template version git auth: %w", err)
}
// Only perform dry-run for workspace creation and updates
// Skip for start and restart to avoid unnecessary delays
if args.Action == WorkspaceCreate || args.Action == WorkspaceUpdate {
// Run a dry-run with the given parameters to check correctness
dryRun, err := client.CreateTemplateVersionDryRun(inv.Context(), templateVersion.ID, codersdk.CreateTemplateVersionDryRunRequest{
WorkspaceName: args.NewWorkspaceName,
RichParameterValues: buildParameters,
})
if err != nil {
return nil, xerrors.Errorf("begin workspace dry-run: %w", err)
}
// Run a dry-run with the given parameters to check correctness
dryRun, err := client.CreateTemplateVersionDryRun(inv.Context(), templateVersion.ID, codersdk.CreateTemplateVersionDryRunRequest{
WorkspaceName: args.NewWorkspaceName,
RichParameterValues: buildParameters,
})
if err != nil {
return nil, xerrors.Errorf("begin workspace dry-run: %w", err)
}
matchedProvisioners, err := client.TemplateVersionDryRunMatchedProvisioners(inv.Context(), templateVersion.ID, dryRun.ID)
if err != nil {
return nil, xerrors.Errorf("get matched provisioners: %w", err)
}
cliutil.WarnMatchedProvisioners(inv.Stdout, &matchedProvisioners, dryRun)
_, _ = fmt.Fprintln(inv.Stdout, "Planning workspace...")
err = cliui.ProvisionerJob(inv.Context(), inv.Stdout, cliui.ProvisionerJobOptions{
Fetch: func() (codersdk.ProvisionerJob, error) {
return client.TemplateVersionDryRun(inv.Context(), templateVersion.ID, dryRun.ID)
},
Cancel: func() error {
return client.CancelTemplateVersionDryRun(inv.Context(), templateVersion.ID, dryRun.ID)
},
Logs: func() (<-chan codersdk.ProvisionerJobLog, io.Closer, error) {
return client.TemplateVersionDryRunLogsAfter(inv.Context(), templateVersion.ID, dryRun.ID, 0)
},
// Don't show log output for the dry-run unless there's an error.
Silent: true,
})
if err != nil {
// TODO (Dean): reprompt for parameter values if we deem it to
// be a validation error
return nil, xerrors.Errorf("dry-run workspace: %w", err)
}
matchedProvisioners, err := client.TemplateVersionDryRunMatchedProvisioners(inv.Context(), templateVersion.ID, dryRun.ID)
if err != nil {
return nil, xerrors.Errorf("get matched provisioners: %w", err)
}
cliutil.WarnMatchedProvisioners(inv.Stdout, &matchedProvisioners, dryRun)
_, _ = fmt.Fprintln(inv.Stdout, "Planning workspace...")
err = cliui.ProvisionerJob(inv.Context(), inv.Stdout, cliui.ProvisionerJobOptions{
Fetch: func() (codersdk.ProvisionerJob, error) {
return client.TemplateVersionDryRun(inv.Context(), templateVersion.ID, dryRun.ID)
},
Cancel: func() error {
return client.CancelTemplateVersionDryRun(inv.Context(), templateVersion.ID, dryRun.ID)
},
Logs: func() (<-chan codersdk.ProvisionerJobLog, io.Closer, error) {
return client.TemplateVersionDryRunLogsAfter(inv.Context(), templateVersion.ID, dryRun.ID, 0)
},
// Don't show log output for the dry-run unless there's an error.
Silent: true,
})
if err != nil {
// TODO (Dean): reprompt for parameter values if we deem it to
// be a validation error
return nil, xerrors.Errorf("dry-run workspace: %w", err)
}
resources, err := client.TemplateVersionDryRunResources(inv.Context(), templateVersion.ID, dryRun.ID)
if err != nil {
return nil, xerrors.Errorf("get workspace dry-run resources: %w", err)
}
resources, err := client.TemplateVersionDryRunResources(inv.Context(), templateVersion.ID, dryRun.ID)
if err != nil {
return nil, xerrors.Errorf("get workspace dry-run resources: %w", err)
}
err = cliui.WorkspaceResources(inv.Stdout, resources, cliui.WorkspaceResourcesOptions{
WorkspaceName: args.NewWorkspaceName,
// Since agents haven't connected yet, hiding this makes more sense.
HideAgentState: true,
Title: "Workspace Preview",
})
if err != nil {
return nil, xerrors.Errorf("get resources: %w", err)
}
err = cliui.WorkspaceResources(inv.Stdout, resources, cliui.WorkspaceResourcesOptions{
WorkspaceName: args.NewWorkspaceName,
// Since agents haven't connected yet, hiding this makes more sense.
HideAgentState: true,
Title: "Workspace Preview",
})
if err != nil {
return nil, xerrors.Errorf("get resources: %w", err)
}
return buildParameters, nil
+6
@@ -185,6 +185,9 @@ func TestDelete(t *testing.T) {
t.Run("WarnNoProvisioners", func(t *testing.T) {
t.Parallel()
if !dbtestutil.WillUsePostgres() {
t.Skip("this test requires postgres")
}
store, ps, db := dbtestutil.NewDBWithSQLDB(t)
client, closeDaemon := coderdtest.NewWithProvisionerCloser(t, &coderdtest.Options{
@@ -225,6 +228,9 @@ func TestDelete(t *testing.T) {
t.Run("Prebuilt workspace delete permissions", func(t *testing.T) {
t.Parallel()
if !dbtestutil.WillUsePostgres() {
t.Skip("this test requires postgres")
}
// Setup
db, pb := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
+61 -337
@@ -33,7 +33,6 @@ import (
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/coder/v2/scaletest/agentconn"
"github.com/coder/coder/v2/scaletest/autostart"
"github.com/coder/coder/v2/scaletest/createusers"
"github.com/coder/coder/v2/scaletest/createworkspaces"
"github.com/coder/coder/v2/scaletest/dashboard"
@@ -58,15 +57,9 @@ func (r *RootCmd) scaletestCmd() *serpent.Command {
Children: []*serpent.Command{
r.scaletestCleanup(),
r.scaletestDashboard(),
r.scaletestDynamicParameters(),
r.scaletestCreateWorkspaces(),
r.scaletestWorkspaceUpdates(),
r.scaletestWorkspaceTraffic(),
r.scaletestAutostart(),
r.scaletestNotifications(),
r.scaletestTaskStatus(),
r.scaletestSMTP(),
r.scaletestPrebuilds(),
},
}
@@ -386,88 +379,6 @@ func (s *scaletestPrometheusFlags) attach(opts *serpent.OptionSet) {
)
}
// workspaceTargetFlags holds common flags for targeting specific workspaces in scale tests.
type workspaceTargetFlags struct {
template string
targetWorkspaces string
useHostLogin bool
}
// attach adds the workspace target flags to the given options set.
func (f *workspaceTargetFlags) attach(opts *serpent.OptionSet) {
*opts = append(*opts,
serpent.Option{
Flag: "template",
FlagShorthand: "t",
Env: "CODER_SCALETEST_TEMPLATE",
Description: "Name or ID of the template. Traffic generation will be limited to workspaces created from this template.",
Value: serpent.StringOf(&f.template),
},
serpent.Option{
Flag: "target-workspaces",
Env: "CODER_SCALETEST_TARGET_WORKSPACES",
Description: "Target a specific range of workspaces in the format [START]:[END] (exclusive). Example: 0:10 will target the 10 first alphabetically sorted workspaces (0-9).",
Value: serpent.StringOf(&f.targetWorkspaces),
},
serpent.Option{
Flag: "use-host-login",
Env: "CODER_SCALETEST_USE_HOST_LOGIN",
Default: "false",
Description: "Connect as the currently logged in user.",
Value: serpent.BoolOf(&f.useHostLogin),
},
)
}
// getTargetedWorkspaces retrieves the workspaces based on the template filter and target range. warnWriter is where to
// write a warning message if any workspaces were skipped due to ownership mismatch.
func (f *workspaceTargetFlags) getTargetedWorkspaces(ctx context.Context, client *codersdk.Client, organizationIDs []uuid.UUID, warnWriter io.Writer) ([]codersdk.Workspace, error) {
// Validate template if provided
if f.template != "" {
_, err := parseTemplate(ctx, client, organizationIDs, f.template)
if err != nil {
return nil, xerrors.Errorf("parse template: %w", err)
}
}
// Parse target range
targetStart, targetEnd, err := parseTargetRange("workspaces", f.targetWorkspaces)
if err != nil {
return nil, xerrors.Errorf("parse target workspaces: %w", err)
}
// Determine owner based on useHostLogin
var owner string
if f.useHostLogin {
owner = codersdk.Me
}
// Get workspaces
workspaces, numSkipped, err := getScaletestWorkspaces(ctx, client, owner, f.template)
if err != nil {
return nil, err
}
if numSkipped > 0 {
cliui.Warnf(warnWriter, "CODER_DISABLE_OWNER_WORKSPACE_ACCESS is set on the deployment.\n\t%d workspace(s) were skipped due to ownership mismatch.\n\tSet --use-host-login to only target workspaces you own.", numSkipped)
}
// Adjust targetEnd if not specified
if targetEnd == 0 {
targetEnd = len(workspaces)
}
// Validate range
if len(workspaces) == 0 {
return nil, xerrors.Errorf("no scaletest workspaces exist")
}
if targetEnd > len(workspaces) {
return nil, xerrors.Errorf("target workspace end %d is greater than the number of workspaces %d", targetEnd, len(workspaces))
}
// Return the sliced workspaces
return workspaces[targetStart:targetEnd], nil
}
func requireAdmin(ctx context.Context, client *codersdk.Client) (codersdk.User, error) {
me, err := client.User(ctx, codersdk.Me)
if err != nil {
@@ -1277,10 +1188,12 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *serpent.Command {
bytesPerTick int64
ssh bool
disableDirect bool
useHostLogin bool
app string
template string
targetWorkspaces string
workspaceProxyURL string
targetFlags = &workspaceTargetFlags{}
tracingFlags = &scaletestTracingFlags{}
strategy = &scaletestStrategyFlags{}
cleanupStrategy = newScaletestCleanupStrategy()
@@ -1325,9 +1238,15 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *serpent.Command {
},
}
workspaces, err := targetFlags.getTargetedWorkspaces(ctx, client, me.OrganizationIDs, inv.Stdout)
if template != "" {
_, err := parseTemplate(ctx, client, me.OrganizationIDs, template)
if err != nil {
return xerrors.Errorf("parse template: %w", err)
}
}
targetWorkspaceStart, targetWorkspaceEnd, err := parseTargetRange("workspaces", targetWorkspaces)
if err != nil {
return err
return xerrors.Errorf("parse target workspaces: %w", err)
}
appHost, err := client.AppHost(ctx)
@@ -1335,6 +1254,30 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *serpent.Command {
return xerrors.Errorf("get app host: %w", err)
}
var owner string
if useHostLogin {
owner = codersdk.Me
}
workspaces, numSkipped, err := getScaletestWorkspaces(inv.Context(), client, owner, template)
if err != nil {
return err
}
if numSkipped > 0 {
cliui.Warnf(inv.Stdout, "CODER_DISABLE_OWNER_WORKSPACE_ACCESS is set on the deployment.\n\t%d workspace(s) were skipped due to ownership mismatch.\n\tSet --use-host-login to only target workspaces you own.", numSkipped)
}
if targetWorkspaceEnd == 0 {
targetWorkspaceEnd = len(workspaces)
}
if len(workspaces) == 0 {
return xerrors.Errorf("no scaletest workspaces exist")
}
if targetWorkspaceEnd > len(workspaces) {
return xerrors.Errorf("target workspace end %d is greater than the number of workspaces %d", targetWorkspaceEnd, len(workspaces))
}
tracerProvider, closeTracing, tracingEnabled, err := tracingFlags.provider(ctx)
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
@@ -1359,6 +1302,10 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *serpent.Command {
th := harness.NewTestHarness(strategy.toStrategy(), cleanupStrategy.toStrategy())
for idx, ws := range workspaces {
if idx < targetWorkspaceStart || idx >= targetWorkspaceEnd {
continue
}
var (
agent codersdk.WorkspaceAgent
name = "workspace-traffic"
@@ -1463,6 +1410,19 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *serpent.Command {
}
cmd.Options = []serpent.Option{
{
Flag: "template",
FlagShorthand: "t",
Env: "CODER_SCALETEST_TEMPLATE",
Description: "Name or ID of the template. Traffic generation will be limited to workspaces created from this template.",
Value: serpent.StringOf(&template),
},
{
Flag: "target-workspaces",
Env: "CODER_SCALETEST_TARGET_WORKSPACES",
Description: "Target a specific range of workspaces in the format [START]:[END] (exclusive). Example: 0:10 will target the 10 first alphabetically sorted workspaces (0-9).",
Value: serpent.StringOf(&targetWorkspaces),
},
{
Flag: "bytes-per-tick",
Env: "CODER_SCALETEST_WORKSPACE_TRAFFIC_BYTES_PER_TICK",
@@ -1498,6 +1458,13 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *serpent.Command {
Description: "Send WebSocket traffic to a workspace app (proxied via coderd), cannot be used with --ssh.",
Value: serpent.StringOf(&app),
},
{
Flag: "use-host-login",
Env: "CODER_SCALETEST_USE_HOST_LOGIN",
Default: "false",
Description: "Connect as the currently logged in user.",
Value: serpent.BoolOf(&useHostLogin),
},
{
Flag: "workspace-proxy-url",
Env: "CODER_SCALETEST_WORKSPACE_PROXY_URL",
@@ -1507,7 +1474,6 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *serpent.Command {
},
}
targetFlags.attach(&cmd.Options)
tracingFlags.attach(&cmd.Options)
strategy.attach(&cmd.Options)
cleanupStrategy.attach(&cmd.Options)
@@ -1716,239 +1682,6 @@ func (r *RootCmd) scaletestDashboard() *serpent.Command {
return cmd
}
const (
autostartTestName = "autostart"
)
func (r *RootCmd) scaletestAutostart() *serpent.Command {
var (
workspaceCount int64
workspaceJobTimeout time.Duration
autostartDelay time.Duration
autostartTimeout time.Duration
template string
noCleanup bool
parameterFlags workspaceParameterFlags
tracingFlags = &scaletestTracingFlags{}
timeoutStrategy = &timeoutFlags{}
cleanupStrategy = newScaletestCleanupStrategy()
output = &scaletestOutputFlags{}
prometheusFlags = &scaletestPrometheusFlags{}
)
cmd := &serpent.Command{
Use: "autostart",
Short: "Replicate a thundering herd of autostarting workspaces",
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
client, err := r.InitClient(inv)
if err != nil {
return err
}
notifyCtx, stop := signal.NotifyContext(ctx, StopSignals...) // Checked later.
defer stop()
ctx = notifyCtx
me, err := requireAdmin(ctx, client)
if err != nil {
return err
}
client.HTTPClient = &http.Client{
Transport: &codersdk.HeaderTransport{
Transport: http.DefaultTransport,
Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
if workspaceCount <= 0 {
return xerrors.Errorf("--workspace-count must be greater than zero")
}
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("could not parse --output flags")
}
tpl, err := parseTemplate(ctx, client, me.OrganizationIDs, template)
if err != nil {
return xerrors.Errorf("parse template: %w", err)
}
cliRichParameters, err := asWorkspaceBuildParameters(parameterFlags.richParameters)
if err != nil {
return xerrors.Errorf("can't parse given parameter values: %w", err)
}
richParameters, err := prepWorkspaceBuild(inv, client, prepWorkspaceBuildArgs{
Action: WorkspaceCreate,
TemplateVersionID: tpl.ActiveVersionID,
RichParameterFile: parameterFlags.richParameterFile,
RichParameters: cliRichParameters,
})
if err != nil {
return xerrors.Errorf("prepare build: %w", err)
}
tracerProvider, closeTracing, tracingEnabled, err := tracingFlags.provider(ctx)
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
tracer := tracerProvider.Tracer(scaletestTracerName)
reg := prometheus.NewRegistry()
metrics := autostart.NewMetrics(reg)
setupBarrier := new(sync.WaitGroup)
setupBarrier.Add(int(workspaceCount))
th := harness.NewTestHarness(timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}), cleanupStrategy.toStrategy())
for i := range workspaceCount {
id := strconv.Itoa(int(i))
config := autostart.Config{
User: createusers.Config{
OrganizationID: me.OrganizationIDs[0],
},
Workspace: workspacebuild.Config{
OrganizationID: me.OrganizationIDs[0],
Request: codersdk.CreateWorkspaceRequest{
TemplateID: tpl.ID,
RichParameterValues: richParameters,
},
},
WorkspaceJobTimeout: workspaceJobTimeout,
AutostartDelay: autostartDelay,
AutostartTimeout: autostartTimeout,
Metrics: metrics,
SetupBarrier: setupBarrier,
}
if err := config.Validate(); err != nil {
return xerrors.Errorf("validate config: %w", err)
}
var runner harness.Runnable = autostart.NewRunner(client, config)
if tracingEnabled {
runner = &runnableTraceWrapper{
tracer: tracer,
spanName: fmt.Sprintf("%s/%s", autostartTestName, id),
runner: runner,
}
}
th.AddRun(autostartTestName, id, runner)
}
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
_, _ = fmt.Fprintln(inv.Stderr, "\nUploading traces...")
if err := closeTracing(ctx); err != nil {
_, _ = fmt.Fprintf(inv.Stderr, "\nError uploading traces: %+v\n", err)
}
// Wait for prometheus metrics to be scraped
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
_, _ = fmt.Fprintln(inv.Stderr, "Running autostart load test...")
testCtx, testCancel := timeoutStrategy.toContext(ctx)
defer testCancel()
err = th.Run(testCtx)
if err != nil {
return xerrors.Errorf("run test harness (harness failure, not a test failure): %w", err)
}
// If the command was interrupted, skip stats.
if notifyCtx.Err() != nil {
return notifyCtx.Err()
}
res := th.Results()
for _, o := range outputs {
err = o.write(res, inv.Stdout)
if err != nil {
return xerrors.Errorf("write output %q to %q: %w", o.format, o.path, err)
}
}
if !noCleanup {
_, _ = fmt.Fprintln(inv.Stderr, "\nCleaning up...")
cleanupCtx, cleanupCancel := cleanupStrategy.toContext(ctx)
defer cleanupCancel()
err = th.Cleanup(cleanupCtx)
if err != nil {
return xerrors.Errorf("cleanup tests: %w", err)
}
}
if res.TotalFail > 0 {
return xerrors.New("load test failed, see above for more details")
}
return nil
},
}
cmd.Options = serpent.OptionSet{
{
Flag: "workspace-count",
FlagShorthand: "c",
Env: "CODER_SCALETEST_WORKSPACE_COUNT",
Description: "Required: Total number of workspaces to create.",
Value: serpent.Int64Of(&workspaceCount),
Required: true,
},
{
Flag: "workspace-job-timeout",
Env: "CODER_SCALETEST_WORKSPACE_JOB_TIMEOUT",
Default: "5m",
Description: "Timeout for workspace jobs (e.g. build, start).",
Value: serpent.DurationOf(&workspaceJobTimeout),
},
{
Flag: "autostart-delay",
Env: "CODER_SCALETEST_AUTOSTART_DELAY",
Default: "2m",
Description: "How long after all the workspaces have been stopped to schedule them to be started again.",
Value: serpent.DurationOf(&autostartDelay),
},
{
Flag: "autostart-timeout",
Env: "CODER_SCALETEST_AUTOSTART_TIMEOUT",
Default: "5m",
Description: "Timeout for the autostart build to be initiated after the scheduled start time.",
Value: serpent.DurationOf(&autostartTimeout),
},
{
Flag: "template",
FlagShorthand: "t",
Env: "CODER_SCALETEST_TEMPLATE",
Description: "Required: Name or ID of the template to use for workspaces.",
Value: serpent.StringOf(&template),
Required: true,
},
{
Flag: "no-cleanup",
Env: "CODER_SCALETEST_NO_CLEANUP",
Description: "Do not clean up resources after the test completes.",
Value: serpent.BoolOf(&noCleanup),
},
}
cmd.Options = append(cmd.Options, parameterFlags.cliParameters()...)
tracingFlags.attach(&cmd.Options)
timeoutStrategy.attach(&cmd.Options)
cleanupStrategy.attach(&cmd.Options)
output.attach(&cmd.Options)
prometheusFlags.attach(&cmd.Options)
return cmd
}
type runnableTraceWrapper struct {
tracer trace.Tracer
spanName string
@@ -1958,9 +1691,8 @@ type runnableTraceWrapper struct {
}
var (
_ harness.Runnable = &runnableTraceWrapper{}
_ harness.Cleanable = &runnableTraceWrapper{}
_ harness.Collectable = &runnableTraceWrapper{}
_ harness.Runnable = &runnableTraceWrapper{}
_ harness.Cleanable = &runnableTraceWrapper{}
)
func (r *runnableTraceWrapper) Run(ctx context.Context, id string, logs io.Writer) error {
@@ -2002,14 +1734,6 @@ func (r *runnableTraceWrapper) Cleanup(ctx context.Context, id string, logs io.W
return c.Cleanup(ctx, id, logs)
}
func (r *runnableTraceWrapper) GetMetrics() map[string]any {
c, ok := r.runner.(harness.Collectable)
if !ok {
return nil
}
return c.GetMetrics()
}
func getScaletestWorkspaces(ctx context.Context, client *codersdk.Client, owner, template string) ([]codersdk.Workspace, int, error) {
var (
pageNumber = 0
-181
@@ -1,181 +0,0 @@
//go:build !slim
package cli
import (
"fmt"
"net/http"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/xerrors"
"cdr.dev/slog"
"cdr.dev/slog/sloggers/sloghuman"
"github.com/coder/serpent"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/scaletest/dynamicparameters"
"github.com/coder/coder/v2/scaletest/harness"
)
const (
dynamicParametersTestName = "dynamic-parameters"
)
func (r *RootCmd) scaletestDynamicParameters() *serpent.Command {
var (
templateName string
provisionerTags []string
numEvals int64
tracingFlags = &scaletestTracingFlags{}
prometheusFlags = &scaletestPrometheusFlags{}
// This test requires unlimited concurrency
timeoutStrategy = &timeoutFlags{}
)
orgContext := NewOrganizationContext()
output := &scaletestOutputFlags{}
cmd := &serpent.Command{
Use: "dynamic-parameters",
Short: "Generates load on the Coder server evaluating dynamic parameters",
Long: `It is recommended that all rate limits be disabled on the server before running this scaletest. This test generates many login events which will be rate limited against the (most likely single) IP.`,
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("could not parse --output flags")
}
client, err := r.InitClient(inv)
if err != nil {
return err
}
if templateName == "" {
return xerrors.Errorf("template cannot be empty")
}
tags, err := ParseProvisionerTags(provisionerTags)
if err != nil {
return err
}
org, err := orgContext.Selected(inv, client)
if err != nil {
return err
}
_, err = requireAdmin(ctx, client)
if err != nil {
return err
}
client.HTTPClient = &http.Client{
Transport: &codersdk.HeaderTransport{
Transport: http.DefaultTransport,
Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
reg := prometheus.NewRegistry()
metrics := dynamicparameters.NewMetrics(reg, "concurrent_evaluations")
logger := slog.Make(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelDebug)
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
tracerProvider, closeTracing, tracingEnabled, err := tracingFlags.provider(ctx)
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
defer func() {
// Allow time for traces to flush even if command context is
// canceled. This is a no-op if tracing is not enabled.
_, _ = fmt.Fprintln(inv.Stderr, "\nUploading traces...")
if err := closeTracing(ctx); err != nil {
_, _ = fmt.Fprintf(inv.Stderr, "\nError uploading traces: %+v\n", err)
}
// Wait for prometheus metrics to be scraped
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
tracer := tracerProvider.Tracer(scaletestTracerName)
partitions, err := dynamicparameters.SetupPartitions(ctx, client, org.ID, templateName, tags, numEvals, logger)
if err != nil {
return xerrors.Errorf("setup dynamic parameters partitions: %w", err)
}
th := harness.NewTestHarness(
timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}),
// there is no cleanup since it's just a connection that we sever.
nil)
for i, part := range partitions {
for j := range part.ConcurrentEvaluations {
cfg := dynamicparameters.Config{
TemplateVersion: part.TemplateVersion.ID,
Metrics: metrics,
MetricLabelValues: []string{fmt.Sprintf("%d", part.ConcurrentEvaluations)},
}
var runner harness.Runnable = dynamicparameters.NewRunner(client, cfg)
if tracingEnabled {
runner = &runnableTraceWrapper{
tracer: tracer,
spanName: fmt.Sprintf("%s/%d/%d", dynamicParametersTestName, i, j),
runner: runner,
}
}
th.AddRun(dynamicParametersTestName, fmt.Sprintf("%d/%d", j, i), runner)
}
}
testCtx, testCancel := timeoutStrategy.toContext(ctx)
defer testCancel()
err = th.Run(testCtx)
if err != nil {
return xerrors.Errorf("run test harness: %w", err)
}
res := th.Results()
for _, o := range outputs {
err = o.write(res, inv.Stdout)
if err != nil {
return xerrors.Errorf("write output %q to %q: %w", o.format, o.path, err)
}
}
return nil
},
}
cmd.Options = serpent.OptionSet{
{
Flag: "template",
Description: "Name of the template to use. If it does not exist, it will be created.",
Default: "scaletest-dynamic-parameters",
Value: serpent.StringOf(&templateName),
},
{
Flag: "concurrent-evaluations",
Description: "Number of concurrent dynamic parameter evaluations to perform.",
Default: "100",
Value: serpent.Int64Of(&numEvals),
},
{
Flag: "provisioner-tag",
Description: "Specify a set of tags to target provisioner daemons.",
Value: serpent.StringArrayOf(&provisionerTags),
},
}
orgContext.AttachOptions(cmd)
output.attach(&cmd.Options)
tracingFlags.attach(&cmd.Options)
prometheusFlags.attach(&cmd.Options)
timeoutStrategy.attach(&cmd.Options)
return cmd
}
-480
@@ -1,480 +0,0 @@
//go:build !slim
package cli
import (
"bytes"
"context"
"fmt"
"net/http"
"os/signal"
"strconv"
"strings"
"sync"
"time"
"github.com/google/uuid"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/xerrors"
"cdr.dev/slog"
notificationsLib "github.com/coder/coder/v2/coderd/notifications"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/scaletest/createusers"
"github.com/coder/coder/v2/scaletest/harness"
"github.com/coder/coder/v2/scaletest/notifications"
"github.com/coder/serpent"
)
func (r *RootCmd) scaletestNotifications() *serpent.Command {
var (
userCount int64
templateAdminPercentage float64
notificationTimeout time.Duration
smtpRequestTimeout time.Duration
dialTimeout time.Duration
noCleanup bool
smtpAPIURL string
tracingFlags = &scaletestTracingFlags{}
// This test requires unlimited concurrency.
timeoutStrategy = &timeoutFlags{}
cleanupStrategy = newScaletestCleanupStrategy()
output = &scaletestOutputFlags{}
prometheusFlags = &scaletestPrometheusFlags{}
)
cmd := &serpent.Command{
Use: "notifications",
Short: "Simulate notification delivery by creating many users listening to notifications.",
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
client, err := r.InitClient(inv)
if err != nil {
return err
}
notifyCtx, stop := signal.NotifyContext(ctx, StopSignals...)
defer stop()
ctx = notifyCtx
me, err := requireAdmin(ctx, client)
if err != nil {
return err
}
client.HTTPClient = &http.Client{
Transport: &codersdk.HeaderTransport{
Transport: http.DefaultTransport,
Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
if userCount <= 0 {
return xerrors.Errorf("--user-count must be greater than 0")
}
if templateAdminPercentage < 0 || templateAdminPercentage > 100 {
return xerrors.Errorf("--template-admin-percentage must be between 0 and 100")
}
if smtpAPIURL != "" && !strings.HasPrefix(smtpAPIURL, "http://") && !strings.HasPrefix(smtpAPIURL, "https://") {
return xerrors.Errorf("--smtp-api-url must start with http:// or https://")
}
templateAdminCount := int64(float64(userCount) * templateAdminPercentage / 100)
if templateAdminCount == 0 && templateAdminPercentage > 0 {
templateAdminCount = 1
}
regularUserCount := userCount - templateAdminCount
_, _ = fmt.Fprintf(inv.Stderr, "Distribution plan:\n")
_, _ = fmt.Fprintf(inv.Stderr, " Total users: %d\n", userCount)
_, _ = fmt.Fprintf(inv.Stderr, " Template admins: %d (%.1f%%)\n", templateAdminCount, templateAdminPercentage)
_, _ = fmt.Fprintf(inv.Stderr, " Regular users: %d (%.1f%%)\n", regularUserCount, 100.0-templateAdminPercentage)
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("could not parse --output flags")
}
tracerProvider, closeTracing, tracingEnabled, err := tracingFlags.provider(ctx)
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
tracer := tracerProvider.Tracer(scaletestTracerName)
reg := prometheus.NewRegistry()
metrics := notifications.NewMetrics(reg)
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
_, _ = fmt.Fprintln(inv.Stderr, "\nUploading traces...")
if err := closeTracing(ctx); err != nil {
_, _ = fmt.Fprintf(inv.Stderr, "\nError uploading traces: %+v\n", err)
}
// Wait for prometheus metrics to be scraped
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
_, _ = fmt.Fprintln(inv.Stderr, "Creating users...")
dialBarrier := &sync.WaitGroup{}
templateAdminWatchBarrier := &sync.WaitGroup{}
dialBarrier.Add(int(userCount))
templateAdminWatchBarrier.Add(int(templateAdminCount))
expectedNotificationIDs := map[uuid.UUID]struct{}{
notificationsLib.TemplateTemplateDeleted: {},
}
triggerTimes := make(map[uuid.UUID]chan time.Time, len(expectedNotificationIDs))
for id := range expectedNotificationIDs {
triggerTimes[id] = make(chan time.Time, 1)
}
smtpHTTPTransport := &http.Transport{
MaxConnsPerHost: 512,
MaxIdleConnsPerHost: 512,
IdleConnTimeout: 60 * time.Second,
}
smtpHTTPClient := &http.Client{
Transport: smtpHTTPTransport,
}
configs := make([]notifications.Config, 0, userCount)
for range templateAdminCount {
config := notifications.Config{
User: createusers.Config{
OrganizationID: me.OrganizationIDs[0],
},
Roles: []string{codersdk.RoleTemplateAdmin},
NotificationTimeout: notificationTimeout,
DialTimeout: dialTimeout,
DialBarrier: dialBarrier,
ReceivingWatchBarrier: templateAdminWatchBarrier,
ExpectedNotificationsIDs: expectedNotificationIDs,
Metrics: metrics,
SMTPApiURL: smtpAPIURL,
SMTPRequestTimeout: smtpRequestTimeout,
SMTPHttpClient: smtpHTTPClient,
}
if err := config.Validate(); err != nil {
return xerrors.Errorf("validate config: %w", err)
}
configs = append(configs, config)
}
for range regularUserCount {
config := notifications.Config{
User: createusers.Config{
OrganizationID: me.OrganizationIDs[0],
},
Roles: []string{},
NotificationTimeout: notificationTimeout,
DialTimeout: dialTimeout,
DialBarrier: dialBarrier,
ReceivingWatchBarrier: templateAdminWatchBarrier,
Metrics: metrics,
}
if err := config.Validate(); err != nil {
return xerrors.Errorf("validate config: %w", err)
}
configs = append(configs, config)
}
go triggerNotifications(
ctx,
logger,
client,
me.OrganizationIDs[0],
dialBarrier,
dialTimeout,
triggerTimes,
)
th := harness.NewTestHarness(timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}), cleanupStrategy.toStrategy())
for i, config := range configs {
id := strconv.Itoa(i)
name := fmt.Sprintf("notifications-%s", id)
var runner harness.Runnable = notifications.NewRunner(client, config)
if tracingEnabled {
runner = &runnableTraceWrapper{
tracer: tracer,
spanName: name,
runner: runner,
}
}
th.AddRun(name, id, runner)
}
_, _ = fmt.Fprintln(inv.Stderr, "Running notification delivery scaletest...")
testCtx, testCancel := timeoutStrategy.toContext(ctx)
defer testCancel()
err = th.Run(testCtx)
if err != nil {
return xerrors.Errorf("run test harness (harness failure, not a test failure): %w", err)
}
// If the command was interrupted, skip stats.
if notifyCtx.Err() != nil {
return notifyCtx.Err()
}
res := th.Results()
if err := computeNotificationLatencies(ctx, logger, triggerTimes, res, metrics); err != nil {
return xerrors.Errorf("compute notification latencies: %w", err)
}
for _, o := range outputs {
err = o.write(res, inv.Stdout)
if err != nil {
return xerrors.Errorf("write output %q to %q: %w", o.format, o.path, err)
}
}
if !noCleanup {
_, _ = fmt.Fprintln(inv.Stderr, "\nCleaning up...")
cleanupCtx, cleanupCancel := cleanupStrategy.toContext(ctx)
defer cleanupCancel()
err = th.Cleanup(cleanupCtx)
if err != nil {
return xerrors.Errorf("cleanup tests: %w", err)
}
}
if res.TotalFail > 0 {
return xerrors.New("load test failed, see above for more details")
}
return nil
},
}
cmd.Options = serpent.OptionSet{
{
Flag: "user-count",
FlagShorthand: "c",
Env: "CODER_SCALETEST_NOTIFICATION_USER_COUNT",
Description: "Required: Total number of users to create.",
Value: serpent.Int64Of(&userCount),
Required: true,
},
{
Flag: "template-admin-percentage",
Env: "CODER_SCALETEST_NOTIFICATION_TEMPLATE_ADMIN_PERCENTAGE",
Default: "20.0",
Description: "Percentage of users to assign Template Admin role to (0-100).",
Value: serpent.Float64Of(&templateAdminPercentage),
},
{
Flag: "notification-timeout",
Env: "CODER_SCALETEST_NOTIFICATION_TIMEOUT",
Default: "10m",
Description: "How long to wait for notifications after triggering.",
Value: serpent.DurationOf(&notificationTimeout),
},
{
Flag: "smtp-request-timeout",
Env: "CODER_SCALETEST_SMTP_REQUEST_TIMEOUT",
Default: "5m",
Description: "Timeout for SMTP requests.",
Value: serpent.DurationOf(&smtpRequestTimeout),
},
{
Flag: "dial-timeout",
Env: "CODER_SCALETEST_DIAL_TIMEOUT",
Default: "10m",
Description: "Timeout for dialing the notification websocket endpoint.",
Value: serpent.DurationOf(&dialTimeout),
},
{
Flag: "no-cleanup",
Env: "CODER_SCALETEST_NO_CLEANUP",
Description: "Do not clean up resources after the test completes.",
Value: serpent.BoolOf(&noCleanup),
},
{
Flag: "smtp-api-url",
Env: "CODER_SCALETEST_SMTP_API_URL",
Description: "SMTP mock HTTP API address.",
Value: serpent.StringOf(&smtpAPIURL),
},
}
tracingFlags.attach(&cmd.Options)
timeoutStrategy.attach(&cmd.Options)
cleanupStrategy.attach(&cmd.Options)
output.attach(&cmd.Options)
prometheusFlags.attach(&cmd.Options)
return cmd
}
func computeNotificationLatencies(
ctx context.Context,
logger slog.Logger,
expectedNotifications map[uuid.UUID]chan time.Time,
results harness.Results,
metrics *notifications.Metrics,
) error {
triggerTimes := make(map[uuid.UUID]time.Time)
for notificationID, triggerTimeChan := range expectedNotifications {
select {
case triggerTime := <-triggerTimeChan:
triggerTimes[notificationID] = triggerTime
logger.Info(ctx, "received trigger time",
slog.F("notification_id", notificationID),
slog.F("trigger_time", triggerTime))
default:
logger.Warn(ctx, "no trigger time received for notification",
slog.F("notification_id", notificationID))
}
}
if len(triggerTimes) == 0 {
logger.Warn(ctx, "no trigger times available, skipping latency computation")
return nil
}
var totalLatencies int
for runID, runResult := range results.Runs {
if runResult.Error != nil {
logger.Debug(ctx, "skipping failed run for latency computation",
slog.F("run_id", runID))
continue
}
if runResult.Metrics == nil {
continue
}
// Process websocket notifications.
if wsReceiptTimes, ok := runResult.Metrics[notifications.WebsocketNotificationReceiptTimeMetric].(map[uuid.UUID]time.Time); ok {
for notificationID, receiptTime := range wsReceiptTimes {
if triggerTime, ok := triggerTimes[notificationID]; ok {
latency := receiptTime.Sub(triggerTime)
metrics.RecordLatency(latency, notificationID.String(), notifications.NotificationTypeWebsocket)
totalLatencies++
logger.Debug(ctx, "computed websocket latency",
slog.F("run_id", runID),
slog.F("notification_id", notificationID),
slog.F("latency", latency))
}
}
}
// Process SMTP notifications
if smtpReceiptTimes, ok := runResult.Metrics[notifications.SMTPNotificationReceiptTimeMetric].(map[uuid.UUID]time.Time); ok {
for notificationID, receiptTime := range smtpReceiptTimes {
if triggerTime, ok := triggerTimes[notificationID]; ok {
latency := receiptTime.Sub(triggerTime)
metrics.RecordLatency(latency, notificationID.String(), notifications.NotificationTypeSMTP)
totalLatencies++
logger.Debug(ctx, "computed SMTP latency",
slog.F("run_id", runID),
slog.F("notification_id", notificationID),
slog.F("latency", latency))
}
}
}
}
logger.Info(ctx, "finished computing notification latencies",
slog.F("total_runs", results.TotalRuns),
slog.F("total_latencies_computed", totalLatencies))
return nil
}
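// Note on the latency model above: a single trigger time per notification
// template is shared by every runner, so each per-run latency is simply
// receiptTime.Sub(triggerTime) for whichever delivery types (websocket, SMTP)
// that run actually observed.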
// triggerNotifications waits for all test users to connect,
// then creates and deletes a test template to trigger notification events for testing.
func triggerNotifications(
ctx context.Context,
logger slog.Logger,
client *codersdk.Client,
orgID uuid.UUID,
dialBarrier *sync.WaitGroup,
dialTimeout time.Duration,
expectedNotifications map[uuid.UUID]chan time.Time,
) {
logger.Info(ctx, "waiting for all users to connect")
// Wait for all users to connect
waitCtx, cancel := context.WithTimeout(ctx, dialTimeout+30*time.Second)
defer cancel()
done := make(chan struct{})
go func() {
dialBarrier.Wait()
close(done)
}()
select {
case <-done:
logger.Info(ctx, "all users connected")
case <-waitCtx.Done():
if waitCtx.Err() == context.DeadlineExceeded {
logger.Error(ctx, "timeout waiting for users to connect")
} else {
logger.Info(ctx, "context canceled while waiting for users")
}
return
}
logger.Info(ctx, "creating test template to test notifications")
// Upload empty template file.
file, err := client.Upload(ctx, codersdk.ContentTypeTar, bytes.NewReader([]byte{}))
if err != nil {
logger.Error(ctx, "upload test template", slog.Error(err))
return
}
logger.Info(ctx, "test template uploaded", slog.F("file_id", file.ID))
// Create template version.
version, err := client.CreateTemplateVersion(ctx, orgID, codersdk.CreateTemplateVersionRequest{
StorageMethod: codersdk.ProvisionerStorageMethodFile,
FileID: file.ID,
Provisioner: codersdk.ProvisionerTypeEcho,
})
if err != nil {
logger.Error(ctx, "create test template version", slog.Error(err))
return
}
logger.Info(ctx, "test template version created", slog.F("template_version_id", version.ID))
// Create template.
testTemplate, err := client.CreateTemplate(ctx, orgID, codersdk.CreateTemplateRequest{
Name: "scaletest-test-template",
Description: "scaletest-test-template",
VersionID: version.ID,
})
if err != nil {
logger.Error(ctx, "create test template", slog.Error(err))
return
}
logger.Info(ctx, "test template created", slog.F("template_id", testTemplate.ID))
// Delete template to trigger notification.
err = client.DeleteTemplate(ctx, testTemplate.ID)
if err != nil {
logger.Error(ctx, "delete test template", slog.Error(err))
return
}
logger.Info(ctx, "test template deleted", slog.F("template_id", testTemplate.ID))
// Record expected notification.
expectedNotifications[notificationsLib.TemplateTemplateDeleted] <- time.Now()
close(expectedNotifications[notificationsLib.TemplateTemplateDeleted])
}
-297
@@ -1,297 +0,0 @@
//go:build !slim
package cli
import (
"fmt"
"net/http"
"os/signal"
"strconv"
"sync"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/scaletest/harness"
"github.com/coder/coder/v2/scaletest/prebuilds"
"github.com/coder/quartz"
"github.com/coder/serpent"
)
func (r *RootCmd) scaletestPrebuilds() *serpent.Command {
var (
numTemplates int64
numPresets int64
numPresetPrebuilds int64
templateVersionJobTimeout time.Duration
prebuildWorkspaceTimeout time.Duration
noCleanup bool
tracingFlags = &scaletestTracingFlags{}
timeoutStrategy = &timeoutFlags{}
cleanupStrategy = newScaletestCleanupStrategy()
output = &scaletestOutputFlags{}
prometheusFlags = &scaletestPrometheusFlags{}
)
cmd := &serpent.Command{
Use: "prebuilds",
Short: "Creates prebuild workspaces on the Coder server.",
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
client, err := r.InitClient(inv)
if err != nil {
return err
}
notifyCtx, stop := signal.NotifyContext(ctx, StopSignals...)
defer stop()
ctx = notifyCtx
me, err := requireAdmin(ctx, client)
if err != nil {
return err
}
client.HTTPClient = &http.Client{
Transport: &codersdk.HeaderTransport{
Transport: http.DefaultTransport,
Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
if numTemplates <= 0 {
return xerrors.Errorf("--num-templates must be greater than 0")
}
if numPresets <= 0 {
return xerrors.Errorf("--num-presets must be greater than 0")
}
if numPresetPrebuilds <= 0 {
return xerrors.Errorf("--num-preset-prebuilds must be greater than 0")
}
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("parse output flags: %w", err)
}
tracerProvider, closeTracing, tracingEnabled, err := tracingFlags.provider(ctx)
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
defer func() {
_, _ = fmt.Fprintln(inv.Stderr, "\nUploading traces...")
if err := closeTracing(ctx); err != nil {
_, _ = fmt.Fprintf(inv.Stderr, "\nError uploading traces: %+v\n", err)
}
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
tracer := tracerProvider.Tracer(scaletestTracerName)
reg := prometheus.NewRegistry()
metrics := prebuilds.NewMetrics(reg)
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
err = client.PutPrebuildsSettings(ctx, codersdk.PrebuildsSettings{
ReconciliationPaused: true,
})
if err != nil {
return xerrors.Errorf("pause prebuilds: %w", err)
}
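// Barrier choreography, one phase per barrier: runners create their templates
// (setupBarrier), reconciliation is resumed so prebuilds get created
// (creationBarrier), reconciliation is paused again and runners are released to
// update their templates to zero prebuilds (deletionSetupBarrier,
// deletionBarrier), and finally reconciliation is resumed so the prebuilds are
// deleted. The handler below drives these phases in order.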
setupBarrier := new(sync.WaitGroup)
setupBarrier.Add(int(numTemplates))
creationBarrier := new(sync.WaitGroup)
creationBarrier.Add(int(numTemplates))
deletionSetupBarrier := new(sync.WaitGroup)
deletionSetupBarrier.Add(1)
deletionBarrier := new(sync.WaitGroup)
deletionBarrier.Add(int(numTemplates))
th := harness.NewTestHarness(timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}), cleanupStrategy.toStrategy())
for i := range numTemplates {
id := strconv.Itoa(int(i))
cfg := prebuilds.Config{
OrganizationID: me.OrganizationIDs[0],
NumPresets: int(numPresets),
NumPresetPrebuilds: int(numPresetPrebuilds),
TemplateVersionJobTimeout: templateVersionJobTimeout,
PrebuildWorkspaceTimeout: prebuildWorkspaceTimeout,
Metrics: metrics,
SetupBarrier: setupBarrier,
CreationBarrier: creationBarrier,
DeletionSetupBarrier: deletionSetupBarrier,
DeletionBarrier: deletionBarrier,
Clock: quartz.NewReal(),
}
err := cfg.Validate()
if err != nil {
return xerrors.Errorf("validate config: %w", err)
}
var runner harness.Runnable = prebuilds.NewRunner(client, cfg)
if tracingEnabled {
runner = &runnableTraceWrapper{
tracer: tracer,
spanName: fmt.Sprintf("prebuilds/%s", id),
runner: runner,
}
}
th.AddRun("prebuilds", id, runner)
}
_, _ = fmt.Fprintf(inv.Stderr, "Creating %d templates with %d presets and %d prebuilds per preset...\n",
numTemplates, numPresets, numPresetPrebuilds)
_, _ = fmt.Fprintf(inv.Stderr, "Total expected prebuilds: %d\n", numTemplates*numPresets*numPresetPrebuilds)
testCtx, testCancel := timeoutStrategy.toContext(ctx)
defer testCancel()
runErrCh := make(chan error, 1)
go func() {
runErrCh <- th.Run(testCtx)
}()
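// th.Run blocks until every runner finishes, so it runs in its own goroutine
// while this handler drives the pause/resume and barrier phases below.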
_, _ = fmt.Fprintln(inv.Stderr, "Waiting for all templates to be created...")
setupBarrier.Wait()
_, _ = fmt.Fprintln(inv.Stderr, "All templates created")
err = client.PutPrebuildsSettings(ctx, codersdk.PrebuildsSettings{
ReconciliationPaused: false,
})
if err != nil {
return xerrors.Errorf("resume prebuilds: %w", err)
}
_, _ = fmt.Fprintln(inv.Stderr, "Waiting for all prebuilds to be created...")
creationBarrier.Wait()
_, _ = fmt.Fprintln(inv.Stderr, "All prebuilds created")
err = client.PutPrebuildsSettings(ctx, codersdk.PrebuildsSettings{
ReconciliationPaused: true,
})
if err != nil {
return xerrors.Errorf("pause prebuilds before deletion: %w", err)
}
_, _ = fmt.Fprintln(inv.Stderr, "Prebuilds paused, signaling runners to prepare for deletion")
deletionSetupBarrier.Done()
_, _ = fmt.Fprintln(inv.Stderr, "Waiting for all templates to be updated with 0 prebuilds...")
deletionBarrier.Wait()
_, _ = fmt.Fprintln(inv.Stderr, "All templates updated")
err = client.PutPrebuildsSettings(ctx, codersdk.PrebuildsSettings{
ReconciliationPaused: false,
})
if err != nil {
return xerrors.Errorf("resume prebuilds for deletion: %w", err)
}
_, _ = fmt.Fprintln(inv.Stderr, "Waiting for all prebuilds to be deleted...")
err = <-runErrCh
if err != nil {
return xerrors.Errorf("run test harness (harness failure, not a test failure): %w", err)
}
// If the command was interrupted, skip cleanup & stats
if notifyCtx.Err() != nil {
return notifyCtx.Err()
}
res := th.Results()
for _, o := range outputs {
err = o.write(res, inv.Stdout)
if err != nil {
return xerrors.Errorf("write output %q to %q: %w", o.format, o.path, err)
}
}
if !noCleanup {
_, _ = fmt.Fprintln(inv.Stderr, "\nStarting cleanup (deleting templates)...")
cleanupCtx, cleanupCancel := cleanupStrategy.toContext(ctx)
defer cleanupCancel()
err = th.Cleanup(cleanupCtx)
if err != nil {
return xerrors.Errorf("cleanup tests: %w", err)
}
// If the cleanup was interrupted, skip stats
if notifyCtx.Err() != nil {
return notifyCtx.Err()
}
}
if res.TotalFail > 0 {
return xerrors.New("prebuild creation test failed, see above for more details")
}
return nil
},
}
cmd.Options = serpent.OptionSet{
{
Flag: "num-templates",
Env: "CODER_SCALETEST_PREBUILDS_NUM_TEMPLATES",
Default: "1",
Description: "Number of templates to create for the test.",
Value: serpent.Int64Of(&numTemplates),
},
{
Flag: "num-presets",
Env: "CODER_SCALETEST_PREBUILDS_NUM_PRESETS",
Default: "1",
Description: "Number of presets per template.",
Value: serpent.Int64Of(&numPresets),
},
{
Flag: "num-preset-prebuilds",
Env: "CODER_SCALETEST_PREBUILDS_NUM_PRESET_PREBUILDS",
Default: "1",
Description: "Number of prebuilds per preset.",
Value: serpent.Int64Of(&numPresetPrebuilds),
},
{
Flag: "template-version-job-timeout",
Env: "CODER_SCALETEST_PREBUILDS_TEMPLATE_VERSION_JOB_TIMEOUT",
Default: "5m",
Description: "Timeout for template version provisioning jobs.",
Value: serpent.DurationOf(&templateVersionJobTimeout),
},
{
Flag: "prebuild-workspace-timeout",
Env: "CODER_SCALETEST_PREBUILDS_WORKSPACE_TIMEOUT",
Default: "10m",
Description: "Timeout for all prebuild workspaces to be created/deleted.",
Value: serpent.DurationOf(&prebuildWorkspaceTimeout),
},
{
Flag: "skip-cleanup",
Env: "CODER_SCALETEST_PREBUILDS_SKIP_CLEANUP",
Description: "Skip cleanup (deletion test) and leave resources intact.",
Value: serpent.BoolOf(&noCleanup),
},
}
tracingFlags.attach(&cmd.Options)
timeoutStrategy.attach(&cmd.Options)
cleanupStrategy.attach(&cmd.Options)
output.attach(&cmd.Options)
prometheusFlags.attach(&cmd.Options)
return cmd
}
-112
@@ -1,112 +0,0 @@
//go:build !slim
package cli
import (
"fmt"
"os/signal"
"time"
"golang.org/x/xerrors"
"cdr.dev/slog"
"cdr.dev/slog/sloggers/sloghuman"
"github.com/coder/coder/v2/scaletest/smtpmock"
"github.com/coder/serpent"
)
func (*RootCmd) scaletestSMTP() *serpent.Command {
var (
hostAddress string
smtpPort int64
apiPort int64
purgeAtCount int64
)
cmd := &serpent.Command{
Use: "smtp",
Short: "Start a mock SMTP server for testing",
Long: `Start a mock SMTP server with an HTTP API server that can be used to purge
messages and get messages by email.`,
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
notifyCtx, stop := signal.NotifyContext(ctx, StopSignals...)
defer stop()
ctx = notifyCtx
logger := slog.Make(sloghuman.Sink(inv.Stderr)).Leveled(slog.LevelInfo)
config := smtpmock.Config{
HostAddress: hostAddress,
SMTPPort: int(smtpPort),
APIPort: int(apiPort),
Logger: logger,
}
srv := new(smtpmock.Server)
if err := srv.Start(ctx, config); err != nil {
return xerrors.Errorf("start mock SMTP server: %w", err)
}
defer func() {
_ = srv.Stop()
}()
_, _ = fmt.Fprintf(inv.Stdout, "Mock SMTP server started on %s\n", srv.SMTPAddress())
_, _ = fmt.Fprintf(inv.Stdout, "HTTP API server started on %s\n", srv.APIAddress())
if purgeAtCount > 0 {
_, _ = fmt.Fprintf(inv.Stdout, " Auto-purge when message count reaches %d\n", purgeAtCount)
}
ticker := time.NewTicker(10 * time.Second)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
_, _ = fmt.Fprintf(inv.Stdout, "\nTotal messages received since last purge: %d\n", srv.MessageCount())
return nil
case <-ticker.C:
count := srv.MessageCount()
if count > 0 {
_, _ = fmt.Fprintf(inv.Stdout, "Messages received: %d\n", count)
}
if purgeAtCount > 0 && int64(count) >= purgeAtCount {
_, _ = fmt.Fprintf(inv.Stdout, "Message count (%d) reached threshold (%d). Purging...\n", count, purgeAtCount)
srv.Purge()
continue
}
}
}
},
}
cmd.Options = []serpent.Option{
{
Flag: "host-address",
Env: "CODER_SCALETEST_SMTP_HOST_ADDRESS",
Default: "localhost",
Description: "Host address to bind the mock SMTP and API servers.",
Value: serpent.StringOf(&hostAddress),
},
{
Flag: "smtp-port",
Env: "CODER_SCALETEST_SMTP_PORT",
Description: "Port for the mock SMTP server. Uses a random port if not specified.",
Value: serpent.Int64Of(&smtpPort),
},
{
Flag: "api-port",
Env: "CODER_SCALETEST_SMTP_API_PORT",
Description: "Port for the HTTP API server. Uses a random port if not specified.",
Value: serpent.Int64Of(&apiPort),
},
{
Flag: "purge-at-count",
Env: "CODER_SCALETEST_SMTP_PURGE_AT_COUNT",
Default: "100000",
Description: "Maximum number of messages to keep before auto-purging. Set to 0 to disable.",
Value: serpent.Int64Of(&purgeAtCount),
},
}
return cmd
}
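// A minimal sketch of driving the mock server programmatically (for example
// from a Go test), using only the smtpmock API exercised above; any other
// configuration fields or HTTP endpoints are not assumed here:
//
//	srv := new(smtpmock.Server)
//	if err := srv.Start(ctx, smtpmock.Config{HostAddress: "localhost", Logger: logger}); err != nil {
//		return xerrors.Errorf("start mock SMTP server: %w", err)
//	}
//	defer func() { _ = srv.Stop() }()
//	fmt.Printf("smtp=%s api=%s messages=%d\n", srv.SMTPAddress(), srv.APIAddress(), srv.MessageCount())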
-275
@@ -1,275 +0,0 @@
//go:build !slim
package cli
import (
"context"
"fmt"
"net/http"
"sync"
"time"
"github.com/google/uuid"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/xerrors"
"cdr.dev/slog"
"cdr.dev/slog/sloggers/sloghuman"
"github.com/coder/serpent"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/scaletest/harness"
"github.com/coder/coder/v2/scaletest/taskstatus"
)
const (
taskStatusTestName = "task-status"
)
func (r *RootCmd) scaletestTaskStatus() *serpent.Command {
var (
count int64
template string
workspaceNamePrefix string
appSlug string
reportStatusPeriod time.Duration
reportStatusDuration time.Duration
baselineDuration time.Duration
tracingFlags = &scaletestTracingFlags{}
prometheusFlags = &scaletestPrometheusFlags{}
timeoutStrategy = &timeoutFlags{}
cleanupStrategy = newScaletestCleanupStrategy()
output = &scaletestOutputFlags{}
)
orgContext := NewOrganizationContext()
cmd := &serpent.Command{
Use: "task-status",
Short: "Generates load on the Coder server by simulating task status reporting",
Long: `This test creates external workspaces and simulates AI agents reporting task status.
After all runners connect, it waits for the baseline duration before triggering status reporting.`,
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("could not parse --output flags: %w", err)
}
client, err := r.InitClient(inv)
if err != nil {
return err
}
org, err := orgContext.Selected(inv, client)
if err != nil {
return err
}
_, err = requireAdmin(ctx, client)
if err != nil {
return err
}
// Disable rate limits for this test
client.HTTPClient = &http.Client{
Transport: &codersdk.HeaderTransport{
Transport: http.DefaultTransport,
Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
// Find the template
tpl, err := parseTemplate(ctx, client, []uuid.UUID{org.ID}, template)
if err != nil {
return xerrors.Errorf("parse template %q: %w", template, err)
}
templateID := tpl.ID
reg := prometheus.NewRegistry()
metrics := taskstatus.NewMetrics(reg)
logger := slog.Make(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelDebug)
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
tracerProvider, closeTracing, tracingEnabled, err := tracingFlags.provider(ctx)
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
defer func() {
// Allow time for traces to flush even if command context is
// canceled. This is a no-op if tracing is not enabled.
_, _ = fmt.Fprintln(inv.Stderr, "\nUploading traces...")
if err := closeTracing(ctx); err != nil {
_, _ = fmt.Fprintf(inv.Stderr, "\nError uploading traces: %+v\n", err)
}
// Wait for prometheus metrics to be scraped
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
tracer := tracerProvider.Tracer(scaletestTracerName)
// Setup shared resources for coordination
connectedWaitGroup := &sync.WaitGroup{}
connectedWaitGroup.Add(int(count))
startReporting := make(chan struct{})
// Create the test harness
th := harness.NewTestHarness(
timeoutStrategy.wrapStrategy(harness.ConcurrentExecutionStrategy{}),
cleanupStrategy.toStrategy(),
)
// Create runners
for i := range count {
workspaceName := fmt.Sprintf("%s-%d", workspaceNamePrefix, i)
cfg := taskstatus.Config{
TemplateID: templateID,
WorkspaceName: workspaceName,
AppSlug: appSlug,
ConnectedWaitGroup: connectedWaitGroup,
StartReporting: startReporting,
ReportStatusPeriod: reportStatusPeriod,
ReportStatusDuration: reportStatusDuration,
Metrics: metrics,
MetricLabelValues: []string{},
}
if err := cfg.Validate(); err != nil {
return xerrors.Errorf("validate config for runner %d: %w", i, err)
}
var runner harness.Runnable = taskstatus.NewRunner(client, cfg)
if tracingEnabled {
runner = &runnableTraceWrapper{
tracer: tracer,
spanName: fmt.Sprintf("%s/%d", taskStatusTestName, i),
runner: runner,
}
}
th.AddRun(taskStatusTestName, workspaceName, runner)
}
// Start the test in a separate goroutine so we can coordinate timing
testCtx, testCancel := timeoutStrategy.toContext(ctx)
defer testCancel()
testDone := make(chan error)
go func() {
testDone <- th.Run(testCtx)
}()
// Wait for all runners to connect
logger.Info(ctx, "waiting for all runners to connect")
waitCtx, waitCancel := context.WithTimeout(ctx, 5*time.Minute)
defer waitCancel()
connectDone := make(chan struct{})
go func() {
connectedWaitGroup.Wait()
close(connectDone)
}()
select {
case <-waitCtx.Done():
return xerrors.Errorf("timeout waiting for runners to connect")
case <-connectDone:
logger.Info(ctx, "all runners connected")
}
// Wait for baseline duration
logger.Info(ctx, "waiting for baseline duration", slog.F("duration", baselineDuration))
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(baselineDuration):
}
// Trigger all runners to start reporting
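// Closing startReporting releases every runner blocked on <-startReporting at
// once, acting as a broadcast signal.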
logger.Info(ctx, "triggering runners to start reporting task status")
close(startReporting)
// Wait for the test to complete
err = <-testDone
if err != nil {
return xerrors.Errorf("run test harness: %w", err)
}
res := th.Results()
for _, o := range outputs {
err = o.write(res, inv.Stdout)
if err != nil {
return xerrors.Errorf("write output %q to %q: %w", o.format, o.path, err)
}
}
cleanupCtx, cleanupCancel := cleanupStrategy.toContext(ctx)
defer cleanupCancel()
err = th.Cleanup(cleanupCtx)
if err != nil {
return xerrors.Errorf("cleanup tests: %w", err)
}
if res.TotalFail > 0 {
return xerrors.New("load test failed, see above for more details")
}
return nil
},
}
cmd.Options = serpent.OptionSet{
{
Flag: "count",
Description: "Number of concurrent runners to create.",
Default: "10",
Value: serpent.Int64Of(&count),
},
{
Flag: "template",
Description: "Name or UUID of the template to use for the scale test. The template MUST include a coder_external_agent and a coder_app.",
Default: "scaletest-task-status",
Value: serpent.StringOf(&template),
},
{
Flag: "workspace-name-prefix",
Description: "Prefix for workspace names (will be suffixed with index).",
Default: "scaletest-task-status",
Value: serpent.StringOf(&workspaceNamePrefix),
},
{
Flag: "app-slug",
Description: "Slug of the app designated as the AI Agent.",
Default: "ai-agent",
Value: serpent.StringOf(&appSlug),
},
{
Flag: "report-status-period",
Description: "Time between reporting task statuses.",
Default: "10s",
Value: serpent.DurationOf(&reportStatusPeriod),
},
{
Flag: "report-status-duration",
Description: "Total time to report task statuses after baseline.",
Default: "15m",
Value: serpent.DurationOf(&reportStatusDuration),
},
{
Flag: "baseline-duration",
Description: "Duration to wait after all runners connect before starting to report status.",
Default: "10m",
Value: serpent.DurationOf(&baselineDuration),
},
}
orgContext.AttachOptions(cmd)
output.attach(&cmd.Options)
tracingFlags.attach(&cmd.Options)
prometheusFlags.attach(&cmd.Options)
timeoutStrategy.attach(&cmd.Options)
cleanupStrategy.attach(&cmd.Options)
return cmd
}
-22
@@ -29,28 +29,6 @@ func (r *RootCmd) taskCreate() *serpent.Command {
cmd := &serpent.Command{
Use: "create [input]",
Short: "Create an experimental task",
Long: FormatExamples(
Example{
Description: "Create a task with direct input",
Command: "coder exp task create \"Add authentication to the user service\"",
},
Example{
Description: "Create a task with stdin input",
Command: "echo \"Add authentication to the user service\" | coder exp task create",
},
Example{
Description: "Create a task with a specific name",
Command: "coder exp task create --name task1 \"Add authentication to the user service\"",
},
Example{
Description: "Create a task from a specific template / preset",
Command: "coder exp task create --template backend-dev --preset \"My Preset\" \"Add authentication to the user service\"",
},
Example{
Description: "Create a task for another user (requires appropriate permissions)",
Command: "coder exp task create --owner user@example.com \"Add authentication to the user service\"",
},
),
Middleware: serpent.Chain(
serpent.RequireRangeArgs(0, 1),
),
+34 -24
@@ -5,6 +5,7 @@ import (
"strings"
"time"
"github.com/google/uuid"
"golang.org/x/xerrors"
"github.com/coder/pretty"
@@ -18,20 +19,6 @@ func (r *RootCmd) taskDelete() *serpent.Command {
cmd := &serpent.Command{
Use: "delete <task> [<task> ...]",
Short: "Delete experimental tasks",
Long: FormatExamples(
Example{
Description: "Delete a single task.",
Command: "$ coder exp task delete task1",
},
Example{
Description: "Delete multiple tasks.",
Command: "$ coder exp task delete task1 task2 task3",
},
Example{
Description: "Delete a task without confirmation.",
Command: "$ coder exp task delete task4 --yes",
},
),
Middleware: serpent.Chain(
serpent.RequireRangeArgs(1, -1),
),
@@ -46,19 +33,43 @@ func (r *RootCmd) taskDelete() *serpent.Command {
}
exp := codersdk.NewExperimentalClient(client)
var tasks []codersdk.Task
type toDelete struct {
ID uuid.UUID
Owner string
Display string
}
var items []toDelete
for _, identifier := range inv.Args {
task, err := exp.TaskByIdentifier(ctx, identifier)
identifier = strings.TrimSpace(identifier)
if identifier == "" {
return xerrors.New("task identifier cannot be empty or whitespace")
}
// Check task identifier, try UUID first.
if id, err := uuid.Parse(identifier); err == nil {
task, err := exp.TaskByID(ctx, id)
if err != nil {
return xerrors.Errorf("resolve task %q: %w", identifier, err)
}
display := fmt.Sprintf("%s/%s", task.OwnerName, task.Name)
items = append(items, toDelete{ID: id, Display: display, Owner: task.OwnerName})
continue
}
// Non-UUID, treat as a workspace identifier (name or owner/name).
ws, err := namedWorkspace(ctx, client, identifier)
if err != nil {
return xerrors.Errorf("resolve task %q: %w", identifier, err)
}
tasks = append(tasks, task)
display := ws.FullName()
items = append(items, toDelete{ID: ws.ID, Display: display, Owner: ws.OwnerName})
}
// Confirm deletion of the tasks.
var displayList []string
for _, task := range tasks {
displayList = append(displayList, fmt.Sprintf("%s/%s", task.OwnerName, task.Name))
for _, it := range items {
displayList = append(displayList, it.Display)
}
_, err = cliui.Prompt(inv, cliui.PromptOptions{
Text: fmt.Sprintf("Delete these tasks: %s?", pretty.Sprint(cliui.DefaultStyles.Code, strings.Join(displayList, ", "))),
@@ -69,13 +80,12 @@ func (r *RootCmd) taskDelete() *serpent.Command {
return err
}
for i, task := range tasks {
display := displayList[i]
if err := exp.DeleteTask(ctx, task.OwnerName, task.ID); err != nil {
return xerrors.Errorf("delete task %q: %w", display, err)
for _, item := range items {
if err := exp.DeleteTask(ctx, item.Owner, item.ID); err != nil {
return xerrors.Errorf("delete task %q: %w", item.Display, err)
}
_, _ = fmt.Fprintln(
inv.Stdout, "Deleted task "+pretty.Sprint(cliui.DefaultStyles.Keyword, display)+" at "+cliui.Timestamp(time.Now()),
inv.Stdout, "Deleted task "+pretty.Sprint(cliui.DefaultStyles.Keyword, item.Display)+" at "+cliui.Timestamp(time.Now()),
)
}
+17 -24
@@ -56,14 +56,13 @@ func TestExpTaskDelete(t *testing.T) {
taskID := uuid.MustParse(id1)
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/exists":
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/users/me/workspace/exists":
c.nameResolves.Add(1)
httpapi.Write(r.Context(), w, http.StatusOK,
codersdk.Task{
ID: taskID,
Name: "exists",
OwnerName: "me",
})
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Workspace{
ID: taskID,
Name: "exists",
OwnerName: "me",
})
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/"+id1:
c.deleteCalls.Add(1)
w.WriteHeader(http.StatusAccepted)
@@ -102,21 +101,21 @@ func TestExpTaskDelete(t *testing.T) {
name: "Multiple_YesFlag",
args: []string{"--yes", "first", id4},
buildHandler: func(c *testCounters) http.HandlerFunc {
firstID := uuid.MustParse(id3)
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/first":
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/users/me/workspace/first":
c.nameResolves.Add(1)
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse(id3),
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Workspace{
ID: firstID,
Name: "first",
OwnerName: "me",
})
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/"+id4:
c.nameResolves.Add(1)
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse(id4),
OwnerName: "me",
Name: "uuid-task-4",
Name: "uuid-task-2",
})
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/"+id3:
c.deleteCalls.Add(1)
@@ -130,7 +129,7 @@ func TestExpTaskDelete(t *testing.T) {
}
},
wantDeleteCalls: 2,
wantNameResolves: 2,
wantNameResolves: 1,
wantDeletedMessage: 2,
},
{
@@ -140,14 +139,8 @@ func TestExpTaskDelete(t *testing.T) {
buildHandler: func(_ *testCounters) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks" && r.URL.Query().Get("q") == "owner:\"me\"":
httpapi.Write(r.Context(), w, http.StatusOK, struct {
Tasks []codersdk.Task `json:"tasks"`
Count int `json:"count"`
}{
Tasks: []codersdk.Task{},
Count: 0,
})
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/users/me/workspace/doesnotexist":
httpapi.ResourceNotFound(w)
default:
httpapi.InternalServerError(w, xerrors.New("unwanted path: "+r.Method+" "+r.URL.Path))
}
@@ -163,14 +156,14 @@ func TestExpTaskDelete(t *testing.T) {
taskID := uuid.MustParse(id5)
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/bad":
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/users/me/workspace/bad":
c.nameResolves.Add(1)
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Task{
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Workspace{
ID: taskID,
Name: "bad",
OwnerName: "me",
})
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/bad":
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/"+id5:
httpapi.InternalServerError(w, xerrors.New("boom"))
default:
httpapi.InternalServerError(w, xerrors.New("unwanted path: "+r.Method+" "+r.URL.Path))
+5 -28
@@ -8,7 +8,6 @@ import (
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/coderd/util/slice"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/serpent"
)
@@ -68,30 +67,8 @@ func (r *RootCmd) taskList() *serpent.Command {
)
cmd := &serpent.Command{
Use: "list",
Short: "List experimental tasks",
Long: FormatExamples(
Example{
Description: "List tasks for the current user.",
Command: "coder exp task list",
},
Example{
Description: "List tasks for a specific user.",
Command: "coder exp task list --user someone-else",
},
Example{
Description: "List all tasks you can view.",
Command: "coder exp task list --all",
},
Example{
Description: "List all your running tasks.",
Command: "coder exp task list --status running",
},
Example{
Description: "As above, but only show IDs.",
Command: "coder exp task list --status running --quiet",
},
),
Use: "list",
Short: "List experimental tasks",
Aliases: []string{"ls"},
Middleware: serpent.Chain(
serpent.RequireNArgs(0),
@@ -99,10 +76,10 @@ func (r *RootCmd) taskList() *serpent.Command {
Options: serpent.OptionSet{
{
Name: "status",
Description: "Filter by task status.",
Description: "Filter by task status (e.g. running, failed, etc).",
Flag: "status",
Default: "",
Value: serpent.EnumOf(&statusFilter, slice.ToStrings(codersdk.AllTaskStatuses())...),
Value: serpent.StringOf(&statusFilter),
},
{
Name: "all",
@@ -144,7 +121,7 @@ func (r *RootCmd) taskList() *serpent.Command {
tasks, err := exp.Tasks(ctx, &codersdk.TasksFilter{
Owner: targetUser,
Status: codersdk.TaskStatus(statusFilter),
Status: statusFilter,
})
if err != nil {
return xerrors.Errorf("list tasks: %w", err)
+60 -23
@@ -2,6 +2,7 @@ package cli_test
import (
"bytes"
"context"
"database/sql"
"encoding/json"
"io"
@@ -18,7 +19,9 @@ import (
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/util/slice"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
@@ -26,7 +29,7 @@ import (
)
// makeAITask creates an AI-task workspace.
func makeAITask(t *testing.T, db database.Store, orgID, adminID, ownerID uuid.UUID, transition database.WorkspaceTransition, prompt string) database.Task {
func makeAITask(t *testing.T, db database.Store, orgID, adminID, ownerID uuid.UUID, transition database.WorkspaceTransition, prompt string) (workspace database.WorkspaceTable) {
t.Helper()
tv := dbfake.TemplateVersion(t, db).
@@ -39,22 +42,56 @@ func makeAITask(t *testing.T, db database.Store, orgID, adminID, ownerID uuid.UU
},
}).Do()
build := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
ws := database.WorkspaceTable{
OrganizationID: orgID,
OwnerID: ownerID,
TemplateID: tv.Template.ID,
}).
}
build := dbfake.WorkspaceBuild(t, db, ws).
Seed(database.WorkspaceBuild{
TemplateVersionID: tv.TemplateVersion.ID,
Transition: transition,
}).
WithAgent().
WithTask(database.TaskTable{
Prompt: prompt,
}, nil).
Do()
}).WithAgent().Do()
dbgen.WorkspaceBuildParameters(t, db, []database.WorkspaceBuildParameter{
{
WorkspaceBuildID: build.Build.ID,
Name: codersdk.AITaskPromptParameterName,
Value: prompt,
},
})
agents, err := db.GetWorkspaceAgentsByWorkspaceAndBuildNumber(
dbauthz.AsSystemRestricted(context.Background()),
database.GetWorkspaceAgentsByWorkspaceAndBuildNumberParams{
WorkspaceID: build.Workspace.ID,
BuildNumber: build.Build.BuildNumber,
},
)
require.NoError(t, err)
require.NotEmpty(t, agents)
agentID := agents[0].ID
return build.Task
// Create a workspace app and set it as the sidebar app.
app := dbgen.WorkspaceApp(t, db, database.WorkspaceApp{
AgentID: agentID,
Slug: "task-sidebar",
DisplayName: "Task Sidebar",
External: false,
})
// Update build flags to reference the sidebar app and HasAITask=true.
err = db.UpdateWorkspaceBuildFlagsByID(
dbauthz.AsSystemRestricted(context.Background()),
database.UpdateWorkspaceBuildFlagsByIDParams{
ID: build.Build.ID,
HasAITask: sql.NullBool{Bool: true, Valid: true},
HasExternalAgent: sql.NullBool{Bool: false, Valid: false},
SidebarAppID: uuid.NullUUID{UUID: app.ID, Valid: true},
UpdatedAt: build.Build.UpdatedAt,
},
)
require.NoError(t, err)
return build.Workspace
}
func TestExpTaskList(t *testing.T) {
@@ -91,7 +128,7 @@ func TestExpTaskList(t *testing.T) {
memberClient, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
wantPrompt := "build me a web app"
task := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, wantPrompt)
ws := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, wantPrompt)
inv, root := clitest.New(t, "exp", "task", "list", "--column", "id,name,status,initial prompt")
clitest.SetupConfig(t, memberClient, root)
@@ -103,8 +140,8 @@ func TestExpTaskList(t *testing.T) {
require.NoError(t, err)
// Validate the table includes the task and status.
pty.ExpectMatch(task.Name)
pty.ExpectMatch("initializing")
pty.ExpectMatch(ws.Name)
pty.ExpectMatch("running")
pty.ExpectMatch(wantPrompt)
})
@@ -117,12 +154,12 @@ func TestExpTaskList(t *testing.T) {
owner := coderdtest.CreateFirstUser(t, client)
memberClient, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
// Create two AI tasks: one initializing, one paused.
initializingTask := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, "keep me initializing")
pausedTask := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStop, "stop me please")
// Create two AI tasks: one running, one stopped.
running := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, "keep me running")
stopped := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStop, "stop me please")
// Use JSON output to reliably validate filtering.
inv, root := clitest.New(t, "exp", "task", "list", "--status=paused", "--output=json")
inv, root := clitest.New(t, "exp", "task", "list", "--status=stopped", "--output=json")
clitest.SetupConfig(t, memberClient, root)
ctx := testutil.Context(t, testutil.WaitShort)
@@ -136,10 +173,10 @@ func TestExpTaskList(t *testing.T) {
var tasks []codersdk.Task
require.NoError(t, json.Unmarshal(stdout.Bytes(), &tasks))
// Only the paused task is returned.
// Only the stopped task is returned.
require.Len(t, tasks, 1, "expected one task after filtering")
require.Equal(t, pausedTask.ID, tasks[0].ID)
require.NotEqual(t, initializingTask.ID, tasks[0].ID)
require.Equal(t, stopped.ID, tasks[0].ID)
require.NotEqual(t, running.ID, tasks[0].ID)
})
t.Run("UserFlag_Me_Table", func(t *testing.T) {
@@ -151,7 +188,7 @@ func TestExpTaskList(t *testing.T) {
_, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
_ = makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, "other-task")
task := makeAITask(t, db, owner.OrganizationID, owner.UserID, owner.UserID, database.WorkspaceTransitionStart, "me-task")
ws := makeAITask(t, db, owner.OrganizationID, owner.UserID, owner.UserID, database.WorkspaceTransitionStart, "me-task")
inv, root := clitest.New(t, "exp", "task", "list", "--user", "me")
//nolint:gocritic // Owner client is intended here smoke test the member task not showing up.
@@ -163,7 +200,7 @@ func TestExpTaskList(t *testing.T) {
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
pty.ExpectMatch(task.Name)
pty.ExpectMatch(ws.Name)
})
t.Run("Quiet", func(t *testing.T) {
@@ -176,7 +213,7 @@ func TestExpTaskList(t *testing.T) {
memberClient, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
// Given: We have two tasks
task1 := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, "keep me active")
task1 := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, "keep me running")
task2 := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStop, "stop me please")
// Given: We add the `--quiet` flag
+15 -12
@@ -3,6 +3,7 @@ package cli
import (
"fmt"
"github.com/google/uuid"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliui"
@@ -25,11 +26,6 @@ func (r *RootCmd) taskLogs() *serpent.Command {
cmd := &serpent.Command{
Use: "logs <task>",
Short: "Show a task's logs",
Long: FormatExamples(
Example{
Description: "Show logs for a given task.",
Command: "coder exp task logs task1",
}),
Middleware: serpent.Chain(
serpent.RequireNArgs(1),
),
@@ -40,17 +36,24 @@ func (r *RootCmd) taskLogs() *serpent.Command {
}
var (
ctx = inv.Context()
exp = codersdk.NewExperimentalClient(client)
identifier = inv.Args[0]
ctx = inv.Context()
exp = codersdk.NewExperimentalClient(client)
task = inv.Args[0]
taskID uuid.UUID
)
task, err := exp.TaskByIdentifier(ctx, identifier)
if err != nil {
return xerrors.Errorf("resolve task %q: %w", identifier, err)
if id, err := uuid.Parse(task); err == nil {
taskID = id
} else {
ws, err := namedWorkspace(ctx, client, task)
if err != nil {
return xerrors.Errorf("resolve task %q: %w", task, err)
}
taskID = ws.ID
}
logs, err := exp.TaskLogs(ctx, codersdk.Me, task.ID)
logs, err := exp.TaskLogs(ctx, codersdk.Me, taskID)
if err != nil {
return xerrors.Errorf("get task logs: %w", err)
}
+163 -150
@@ -1,8 +1,11 @@
package cli_test
import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
@@ -11,10 +14,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
agentapisdk "github.com/coder/agentapi-sdk-go"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
@@ -23,165 +23,178 @@ import (
func Test_TaskLogs(t *testing.T) {
t.Parallel()
testMessages := []agentapisdk.Message{
var (
clock = time.Date(2025, 8, 26, 12, 34, 56, 0, time.UTC)
taskID = uuid.MustParse("11111111-1111-1111-1111-111111111111")
taskName = "task-workspace"
taskLogs = []codersdk.TaskLogEntry{
{
ID: 0,
Content: "What is 1 + 1?",
Type: codersdk.TaskLogTypeInput,
Time: clock,
},
{
ID: 1,
Content: "2",
Type: codersdk.TaskLogTypeOutput,
Time: clock.Add(1 * time.Second),
},
}
)
tests := []struct {
args []string
expectTable string
expectLogs []codersdk.TaskLogEntry
expectError string
handler func(t *testing.T, ctx context.Context) http.HandlerFunc
}{
{
Id: 0,
Role: agentapisdk.RoleUser,
Content: "What is 1 + 1?",
Time: time.Now().Add(-2 * time.Minute),
args: []string{taskName, "--output", "json"},
expectLogs: taskLogs,
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/v2/users/me/workspace/%s", taskName):
httpapi.Write(ctx, w, http.StatusOK, codersdk.Workspace{
ID: taskID,
})
case fmt.Sprintf("/api/experimental/tasks/me/%s/logs", taskID.String()):
httpapi.Write(ctx, w, http.StatusOK, codersdk.TaskLogsResponse{
Logs: taskLogs,
})
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
Id: 1,
Role: agentapisdk.RoleAgent,
Content: "2",
Time: time.Now().Add(-1 * time.Minute),
args: []string{taskID.String(), "--output", "json"},
expectLogs: taskLogs,
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/experimental/tasks/me/%s/logs", taskID.String()):
httpapi.Write(ctx, w, http.StatusOK, codersdk.TaskLogsResponse{
Logs: taskLogs,
})
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args: []string{taskID.String()},
expectTable: `
TYPE CONTENT
input What is 1 + 1?
output 2`,
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/experimental/tasks/me/%s/logs", taskID.String()):
httpapi.Write(ctx, w, http.StatusOK, codersdk.TaskLogsResponse{
Logs: taskLogs,
})
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args: []string{"doesnotexist"},
expectError: httpapi.ResourceNotFoundResponse.Message,
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/v2/users/me/workspace/doesnotexist":
httpapi.ResourceNotFound(w)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args: []string{uuid.Nil.String()}, // uuid does not exist
expectError: httpapi.ResourceNotFoundResponse.Message,
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/experimental/tasks/me/%s/logs", uuid.Nil.String()):
httpapi.ResourceNotFound(w)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args: []string{"err-fetching-logs"},
expectError: assert.AnError.Error(),
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/v2/users/me/workspace/err-fetching-logs":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Workspace{
ID: taskID,
})
case fmt.Sprintf("/api/experimental/tasks/me/%s/logs", taskID.String()):
httpapi.InternalServerError(w, assert.AnError)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
}
t.Run("ByTaskName_JSON", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
for _, tt := range tests {
t.Run(strings.Join(tt.args, ","), func(t *testing.T) {
t.Parallel()
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client // user already has access to their own workspace
var (
ctx = testutil.Context(t, testutil.WaitShort)
srv = httptest.NewServer(tt.handler(t, ctx))
client = codersdk.New(testutil.MustURL(t, srv.URL))
args = []string{"exp", "task", "logs"}
stdout strings.Builder
err error
)
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "logs", task.Name, "--output", "json")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
t.Cleanup(srv.Close)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
inv, root := clitest.New(t, append(args, tt.args...)...)
inv.Stdout = &stdout
inv.Stderr = &stdout
clitest.SetupConfig(t, client, root)
var logs []codersdk.TaskLogEntry
err = json.NewDecoder(strings.NewReader(stdout.String())).Decode(&logs)
require.NoError(t, err)
err = inv.WithContext(ctx).Run()
if tt.expectError == "" {
assert.NoError(t, err)
} else {
assert.ErrorContains(t, err, tt.expectError)
}
require.Len(t, logs, 2)
require.Equal(t, "What is 1 + 1?", logs[0].Content)
require.Equal(t, codersdk.TaskLogTypeInput, logs[0].Type)
require.Equal(t, "2", logs[1].Content)
require.Equal(t, codersdk.TaskLogTypeOutput, logs[1].Type)
})
if tt.expectTable != "" {
if diff := tableDiff(tt.expectTable, stdout.String()); diff != "" {
t.Errorf("unexpected output diff (-want +got):\n%s", diff)
}
}
t.Run("ByTaskID_JSON", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
if tt.expectLogs != nil {
var logs []codersdk.TaskLogEntry
err = json.NewDecoder(strings.NewReader(stdout.String())).Decode(&logs)
require.NoError(t, err)
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "logs", task.ID.String(), "--output", "json")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
var logs []codersdk.TaskLogEntry
err = json.NewDecoder(strings.NewReader(stdout.String())).Decode(&logs)
require.NoError(t, err)
require.Len(t, logs, 2)
require.Equal(t, "What is 1 + 1?", logs[0].Content)
require.Equal(t, codersdk.TaskLogTypeInput, logs[0].Type)
require.Equal(t, "2", logs[1].Content)
require.Equal(t, codersdk.TaskLogTypeOutput, logs[1].Type)
})
t.Run("ByTaskID_Table", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "logs", task.ID.String())
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
output := stdout.String()
require.Contains(t, output, "What is 1 + 1?")
require.Contains(t, output, "2")
require.Contains(t, output, "input")
require.Contains(t, output, "output")
})
t.Run("TaskNotFound_ByName", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "logs", "doesnotexist")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.Error(t, err)
require.ErrorContains(t, err, httpapi.ResourceNotFoundResponse.Message)
})
t.Run("TaskNotFound_ByID", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "logs", uuid.Nil.String())
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.Error(t, err)
require.ErrorContains(t, err, httpapi.ResourceNotFoundResponse.Message)
})
t.Run("ErrorFetchingLogs", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskLogsErr(assert.AnError))
userClient := client
inv, root := clitest.New(t, "exp", "task", "logs", task.ID.String())
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.ErrorContains(t, err, assert.AnError.Error())
})
}
func fakeAgentAPITaskLogsOK(messages []agentapisdk.Message) map[string]http.HandlerFunc {
return map[string]http.HandlerFunc{
"/messages": func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]interface{}{
"messages": messages,
})
},
}
}
func fakeAgentAPITaskLogsErr(err error) map[string]http.HandlerFunc {
return map[string]http.HandlerFunc{
"/messages": func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusInternalServerError)
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]interface{}{
"error": err.Error(),
})
},
}
}
+17 -16
View File
@@ -3,6 +3,7 @@ package cli
import (
"io"
"github.com/google/uuid"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/codersdk"
@@ -13,15 +14,8 @@ func (r *RootCmd) taskSend() *serpent.Command {
var stdin bool
cmd := &serpent.Command{
Use: "send <task> [<input> | --stdin]",
Short: "Send input to a task",
Long: FormatExamples(Example{
Description: "Send direct input to a task.",
Command: "coder exp task send task1 \"Please also add unit tests\"",
}, Example{
Description: "Send input from stdin to a task.",
Command: "echo \"Please also add unit tests\" | coder exp task send task1 --stdin",
}),
Use: "send <task> [<input> | --stdin]",
Short: "Send input to a task",
Middleware: serpent.RequireRangeArgs(1, 2),
Options: serpent.OptionSet{
{
@@ -38,11 +32,12 @@ func (r *RootCmd) taskSend() *serpent.Command {
}
var (
ctx = inv.Context()
exp = codersdk.NewExperimentalClient(client)
identifier = inv.Args[0]
ctx = inv.Context()
exp = codersdk.NewExperimentalClient(client)
task = inv.Args[0]
taskInput string
taskID uuid.UUID
)
if stdin {
@@ -60,12 +55,18 @@ func (r *RootCmd) taskSend() *serpent.Command {
taskInput = inv.Args[1]
}
task, err := exp.TaskByIdentifier(ctx, identifier)
if err != nil {
return xerrors.Errorf("resolve task: %w", err)
if id, err := uuid.Parse(task); err == nil {
taskID = id
} else {
ws, err := namedWorkspace(ctx, client, task)
if err != nil {
return xerrors.Errorf("resolve task: %w", err)
}
taskID = ws.ID
}
if err = exp.TaskSend(ctx, codersdk.Me, task.ID, codersdk.TaskSendRequest{Input: taskInput}); err != nil {
if err = exp.TaskSend(ctx, codersdk.Me, taskID, codersdk.TaskSendRequest{Input: taskInput}); err != nil {
return xerrors.Errorf("send input to task: %w", err)
}
+145 -143
View File
@@ -1,171 +1,173 @@
package cli_test
import (
"encoding/json"
"context"
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
agentapisdk "github.com/coder/agentapi-sdk-go"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
)
func Test_TaskSend(t *testing.T) {
t.Parallel()
var (
taskName = "task-workspace"
taskID   = uuid.MustParse("11111111-1111-1111-1111-111111111111")
)
tests := []struct {
args        []string
stdin       string
expectError string
handler     func(t *testing.T, ctx context.Context) http.HandlerFunc
}{
{
args: []string{taskName, "carry on with the task"},
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/v2/users/me/workspace/%s", taskName):
httpapi.Write(ctx, w, http.StatusOK, codersdk.Workspace{
ID: taskID,
})
case fmt.Sprintf("/api/experimental/tasks/me/%s/send", taskID.String()):
var req codersdk.TaskSendRequest
if !httpapi.Read(ctx, w, r, &req) {
return
}
assert.Equal(t, "carry on with the task", req.Input)
httpapi.Write(ctx, w, http.StatusNoContent, nil)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args: []string{taskID.String(), "carry on with the task"},
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/experimental/tasks/me/%s/send", taskID.String()):
var req codersdk.TaskSendRequest
if !httpapi.Read(ctx, w, r, &req) {
return
}
assert.Equal(t, "carry on with the task", req.Input)
httpapi.Write(ctx, w, http.StatusNoContent, nil)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args:  []string{taskName, "--stdin"},
stdin: "carry on with the task",
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/v2/users/me/workspace/%s", taskName):
httpapi.Write(ctx, w, http.StatusOK, codersdk.Workspace{
ID: taskID,
})
case fmt.Sprintf("/api/experimental/tasks/me/%s/send", taskID.String()):
var req codersdk.TaskSendRequest
if !httpapi.Read(ctx, w, r, &req) {
return
}
assert.Equal(t, "carry on with the task", req.Input)
httpapi.Write(ctx, w, http.StatusNoContent, nil)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args:        []string{"doesnotexist", "some task input"},
expectError: httpapi.ResourceNotFoundResponse.Message,
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/v2/users/me/workspace/doesnotexist":
httpapi.ResourceNotFound(w)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args:        []string{uuid.Nil.String(), "some task input"},
expectError: httpapi.ResourceNotFoundResponse.Message,
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/experimental/tasks/me/%s/send", uuid.Nil.String()):
httpapi.ResourceNotFound(w)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args:        []string{uuid.Nil.String(), "some task input"},
expectError: assert.AnError.Error(),
handler: func(t *testing.T, ctx context.Context) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case fmt.Sprintf("/api/experimental/tasks/me/%s/send", uuid.Nil.String()):
httpapi.InternalServerError(w, assert.AnError)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
}
for _, tt := range tests {
t.Run(strings.Join(tt.args, ","), func(t *testing.T) {
t.Parallel()
var (
ctx    = testutil.Context(t, testutil.WaitShort)
srv    = httptest.NewServer(tt.handler(t, ctx))
client = codersdk.New(testutil.MustURL(t, srv.URL))
args   = []string{"exp", "task", "send"}
err    error
)
t.Cleanup(srv.Close)
inv, root := clitest.New(t, append(args, tt.args...)...)
inv.Stdin = strings.NewReader(tt.stdin)
clitest.SetupConfig(t, client, root)
err = inv.WithContext(ctx).Run()
if tt.expectError == "" {
assert.NoError(t, err)
} else {
assert.ErrorContains(t, err, tt.expectError)
}
})
}
}
t.Run("ByTaskName_WithArgument", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "send", task.Name, "carry on with the task")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
})
t.Run("ByTaskID_WithArgument", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "send", task.ID.String(), "carry on with the task")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
})
t.Run("ByTaskName_WithStdin", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "send", task.Name, "--stdin")
inv.Stdout = &stdout
inv.Stdin = strings.NewReader("carry on with the task")
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
})
t.Run("TaskNotFound_ByName", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "send", "doesnotexist", "some task input")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.Error(t, err)
require.ErrorContains(t, err, httpapi.ResourceNotFoundResponse.Message)
})
t.Run("TaskNotFound_ByID", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "send", uuid.Nil.String(), "some task input")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.Error(t, err)
require.ErrorContains(t, err, httpapi.ResourceNotFoundResponse.Message)
})
t.Run("SendError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
userClient, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskSendErr(t, assert.AnError))
var stdout strings.Builder
inv, root := clitest.New(t, "exp", "task", "send", task.Name, "some task input")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
require.ErrorContains(t, err, assert.AnError.Error())
})
}
func fakeAgentAPITaskSendOK(t *testing.T, expectMessage, returnMessage string) map[string]http.HandlerFunc {
return map[string]http.HandlerFunc{
"/status": func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]string{
"status": "stable",
})
},
"/message": func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
var msg agentapisdk.PostMessageParams
if err := json.NewDecoder(r.Body).Decode(&msg); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
assert.Equal(t, expectMessage, msg.Content)
message := agentapisdk.Message{
Id:      999,
Role:    agentapisdk.RoleAgent,
Content: returnMessage,
Time:    time.Now(),
}
_ = json.NewEncoder(w).Encode(message)
},
}
}
func fakeAgentAPITaskSendErr(t *testing.T, returnErr error) map[string]http.HandlerFunc {
return map[string]http.HandlerFunc{
"/status": func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]string{
"status": "stable",
})
},
"/message": func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
w.WriteHeader(http.StatusInternalServerError)
_, _ = w.Write([]byte(returnErr.Error()))
},
}
}
+32 -33
View File
@@ -5,6 +5,7 @@ import (
"strings"
"time"
"github.com/google/uuid"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliui"
@@ -43,17 +44,7 @@ func (r *RootCmd) taskStatus() *serpent.Command {
watchIntervalArg time.Duration
)
cmd := &serpent.Command{
Short: "Show the status of a task.",
Long: FormatExamples(
Example{
Description: "Show the status of a given task.",
Command: "coder exp task status task1",
},
Example{
Description: "Watch the status of a given task until it completes (idle or stopped).",
Command: "coder exp task status task1 --watch",
},
),
Short: "Show the status of a task.",
Use: "status",
Aliases: []string{"stat"},
Options: serpent.OptionSet{
@@ -83,10 +74,21 @@ func (r *RootCmd) taskStatus() *serpent.Command {
}
ctx := i.Context()
exp := codersdk.NewExperimentalClient(client)
ec := codersdk.NewExperimentalClient(client)
identifier := i.Args[0]
task, err := exp.TaskByIdentifier(ctx, identifier)
taskID, err := uuid.Parse(identifier)
if err != nil {
// Try to resolve the task as a named workspace
// TODO: right now tasks are still "workspaces" under the hood.
// We should update this once we have a proper task model.
ws, err := namedWorkspace(ctx, client, identifier)
if err != nil {
return err
}
taskID = ws.ID
}
task, err := ec.TaskByID(ctx, taskID)
if err != nil {
return err
}
@@ -107,7 +109,7 @@ func (r *RootCmd) taskStatus() *serpent.Command {
// TODO: implement streaming updates instead of polling
lastStatusRow := tsr
for range t.C {
task, err := exp.TaskByID(ctx, task.ID)
task, err := ec.TaskByID(ctx, taskID)
if err != nil {
return err
}
@@ -140,7 +142,7 @@ func (r *RootCmd) taskStatus() *serpent.Command {
}
func taskWatchIsEnded(task codersdk.Task) bool {
if task.WorkspaceStatus == codersdk.WorkspaceStatusStopped {
if task.Status == codersdk.WorkspaceStatusStopped {
return true
}
if task.WorkspaceAgentHealth == nil || !task.WorkspaceAgentHealth.Healthy {
@@ -156,21 +158,28 @@ func taskWatchIsEnded(task codersdk.Task) bool {
}
type taskStatusRow struct {
codersdk.Task `table:"r,recursive_inline"`
ChangedAgo string `json:"-" table:"state changed"`
Healthy bool `json:"-" table:"healthy"`
codersdk.Task `table:"-"`
ChangedAgo string `json:"-" table:"state changed,default_sort"`
Timestamp time.Time `json:"-" table:"-"`
TaskStatus string `json:"-" table:"status"`
Healthy bool `json:"-" table:"healthy"`
TaskState string `json:"-" table:"state"`
Message string `json:"-" table:"message"`
}
func taskStatusRowEqual(r1, r2 taskStatusRow) bool {
return r1.Status == r2.Status &&
return r1.TaskStatus == r2.TaskStatus &&
r1.Healthy == r2.Healthy &&
taskStateEqual(r1.CurrentState, r2.CurrentState)
r1.TaskState == r2.TaskState &&
r1.Message == r2.Message
}
func toStatusRow(task codersdk.Task) taskStatusRow {
tsr := taskStatusRow{
Task: task,
ChangedAgo: time.Since(task.UpdatedAt).Truncate(time.Second).String() + " ago",
Timestamp: task.UpdatedAt,
TaskStatus: string(task.Status),
}
tsr.Healthy = task.WorkspaceAgentHealth != nil &&
task.WorkspaceAgentHealth.Healthy &&
@@ -180,19 +189,9 @@ func toStatusRow(task codersdk.Task) taskStatusRow {
if task.CurrentState != nil {
tsr.ChangedAgo = time.Since(task.CurrentState.Timestamp).Truncate(time.Second).String() + " ago"
tsr.Timestamp = task.CurrentState.Timestamp
tsr.TaskState = string(task.CurrentState.State)
tsr.Message = task.CurrentState.Message
}
return tsr
}
func taskStateEqual(se1, se2 *codersdk.TaskStateEntry) bool {
var s1, m1, s2, m2 string
if se1 != nil {
s1 = string(se1.State)
m1 = se1.Message
}
if se2 != nil {
s2 = string(se2.State)
m2 = se2.Message
}
return s1 == s2 && m1 == m2
}
+68 -82
View File
@@ -36,9 +36,26 @@ func Test_TaskStatus(t *testing.T) {
hf: func(ctx context.Context, _ time.Time) func(w http.ResponseWriter, r *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/experimental/tasks/me/doesnotexist":
case "/api/v2/users/me/workspace/doesnotexist":
httpapi.ResourceNotFound(w)
return
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
}
},
},
{
args: []string{"err-fetching-workspace"},
expectError: assert.AnError.Error(),
hf: func(ctx context.Context, _ time.Time) func(w http.ResponseWriter, r *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/v2/users/me/workspace/err-fetching-workspace":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Workspace{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
})
case "/api/experimental/tasks/me/11111111-1111-1111-1111-111111111111":
httpapi.InternalServerError(w, assert.AnError)
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
@@ -47,17 +64,21 @@ func Test_TaskStatus(t *testing.T) {
},
{
args: []string{"exists"},
expectOutput: `STATE CHANGED STATUS HEALTHY STATE MESSAGE
0s ago active true working Thinking furiously...`,
expectOutput: `STATE CHANGED STATUS HEALTHY STATE MESSAGE
0s ago running true working Thinking furiously...`,
hf: func(ctx context.Context, now time.Time) func(w http.ResponseWriter, r *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/experimental/tasks/me/exists":
case "/api/v2/users/me/workspace/exists":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Workspace{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
})
case "/api/experimental/tasks/me/11111111-1111-1111-1111-111111111111":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
WorkspaceStatus: codersdk.WorkspaceStatusRunning,
CreatedAt: now,
UpdatedAt: now,
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Status: codersdk.WorkspaceStatusRunning,
CreatedAt: now,
UpdatedAt: now,
CurrentState: &codersdk.TaskStateEntry{
State: codersdk.TaskStateWorking,
Timestamp: now,
@@ -67,9 +88,7 @@ func Test_TaskStatus(t *testing.T) {
Healthy: true,
},
WorkspaceAgentLifecycle: ptr.Ref(codersdk.WorkspaceAgentLifecycleReady),
Status: codersdk.TaskStatusActive,
})
return
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
@@ -78,68 +97,50 @@ func Test_TaskStatus(t *testing.T) {
},
{
args: []string{"exists", "--watch"},
expectOutput: `STATE CHANGED STATUS HEALTHY STATE MESSAGE
5s ago pending true
4s ago initializing true
4s ago active true
3s ago active true working Reticulating splines...
2s ago active true complete Splines reticulated successfully!`,
expectOutput: `
STATE CHANGED STATUS HEALTHY STATE MESSAGE
4s ago running true
3s ago running true working Reticulating splines...
2s ago running true complete Splines reticulated successfully!`,
hf: func(ctx context.Context, now time.Time) func(http.ResponseWriter, *http.Request) {
var calls atomic.Int64
return func(w http.ResponseWriter, r *http.Request) {
defer calls.Add(1)
switch r.URL.Path {
case "/api/experimental/tasks/me/exists":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Name: "exists",
OwnerName: "me",
WorkspaceStatus: codersdk.WorkspaceStatusPending,
CreatedAt: now.Add(-5 * time.Second),
UpdatedAt: now.Add(-5 * time.Second),
WorkspaceAgentHealth: &codersdk.WorkspaceAgentHealth{
Healthy: true,
},
WorkspaceAgentLifecycle: ptr.Ref(codersdk.WorkspaceAgentLifecycleReady),
Status: codersdk.TaskStatusPending,
case "/api/v2/users/me/workspace/exists":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Workspace{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
})
return
case "/api/experimental/tasks/me/11111111-1111-1111-1111-111111111111":
defer calls.Add(1)
switch calls.Load() {
case 0:
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Name: "exists",
OwnerName: "me",
WorkspaceStatus: codersdk.WorkspaceStatusRunning,
CreatedAt: now.Add(-5 * time.Second),
UpdatedAt: now.Add(-4 * time.Second),
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Status: codersdk.WorkspaceStatusPending,
CreatedAt: now.Add(-5 * time.Second),
UpdatedAt: now.Add(-5 * time.Second),
WorkspaceAgentHealth: &codersdk.WorkspaceAgentHealth{
Healthy: true,
},
WorkspaceAgentLifecycle: ptr.Ref(codersdk.WorkspaceAgentLifecycleReady),
Status: codersdk.TaskStatusInitializing,
})
return
case 1:
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
WorkspaceStatus: codersdk.WorkspaceStatusRunning,
CreatedAt: now.Add(-5 * time.Second),
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Status: codersdk.WorkspaceStatusRunning,
CreatedAt: now.Add(-5 * time.Second),
WorkspaceAgentHealth: &codersdk.WorkspaceAgentHealth{
Healthy: true,
},
WorkspaceAgentLifecycle: ptr.Ref(codersdk.WorkspaceAgentLifecycleReady),
UpdatedAt: now.Add(-4 * time.Second),
Status: codersdk.TaskStatusActive,
})
return
case 2:
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
WorkspaceStatus: codersdk.WorkspaceStatusRunning,
CreatedAt: now.Add(-5 * time.Second),
UpdatedAt: now.Add(-4 * time.Second),
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Status: codersdk.WorkspaceStatusRunning,
CreatedAt: now.Add(-5 * time.Second),
UpdatedAt: now.Add(-4 * time.Second),
WorkspaceAgentHealth: &codersdk.WorkspaceAgentHealth{
Healthy: true,
},
@@ -149,15 +150,13 @@ func Test_TaskStatus(t *testing.T) {
Timestamp: now.Add(-3 * time.Second),
Message: "Reticulating splines...",
},
Status: codersdk.TaskStatusActive,
})
return
case 3:
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
WorkspaceStatus: codersdk.WorkspaceStatusRunning,
CreatedAt: now.Add(-5 * time.Second),
UpdatedAt: now.Add(-4 * time.Second),
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Status: codersdk.WorkspaceStatusRunning,
CreatedAt: now.Add(-5 * time.Second),
UpdatedAt: now.Add(-4 * time.Second),
WorkspaceAgentHealth: &codersdk.WorkspaceAgentHealth{
Healthy: true,
},
@@ -167,16 +166,13 @@ func Test_TaskStatus(t *testing.T) {
Timestamp: now.Add(-2 * time.Second),
Message: "Splines reticulated successfully!",
},
Status: codersdk.TaskStatusActive,
})
return
default:
httpapi.InternalServerError(w, xerrors.New("too many calls!"))
return
}
default:
httpapi.InternalServerError(w, xerrors.Errorf("unexpected path: %q", r.URL.Path))
return
}
}
},
@@ -187,24 +183,18 @@ func Test_TaskStatus(t *testing.T) {
"id": "11111111-1111-1111-1111-111111111111",
"organization_id": "00000000-0000-0000-0000-000000000000",
"owner_id": "00000000-0000-0000-0000-000000000000",
"owner_name": "me",
"name": "exists",
"owner_name": "",
"name": "",
"template_id": "00000000-0000-0000-0000-000000000000",
"template_version_id": "00000000-0000-0000-0000-000000000000",
"template_name": "",
"template_display_name": "",
"template_icon": "",
"workspace_id": null,
"workspace_name": "",
"workspace_status": "running",
"workspace_agent_id": null,
"workspace_agent_lifecycle": "ready",
"workspace_agent_health": {
"healthy": true
},
"workspace_app_id": null,
"workspace_agent_lifecycle": null,
"workspace_agent_health": null,
"initial_prompt": "",
"status": "active",
"status": "running",
"current_state": {
"timestamp": "2025-08-26T12:34:57Z",
"state": "working",
@@ -214,30 +204,26 @@ func Test_TaskStatus(t *testing.T) {
"created_at": "2025-08-26T12:34:56Z",
"updated_at": "2025-08-26T12:34:56Z"
}`,
hf: func(ctx context.Context, now time.Time) func(http.ResponseWriter, *http.Request) {
hf: func(ctx context.Context, _ time.Time) func(w http.ResponseWriter, r *http.Request) {
ts := time.Date(2025, 8, 26, 12, 34, 56, 0, time.UTC)
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/experimental/tasks/me/exists":
case "/api/v2/users/me/workspace/exists":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Workspace{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
})
case "/api/experimental/tasks/me/11111111-1111-1111-1111-111111111111":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Name: "exists",
OwnerName: "me",
WorkspaceAgentHealth: &codersdk.WorkspaceAgentHealth{
Healthy: true,
},
WorkspaceAgentLifecycle: ptr.Ref(codersdk.WorkspaceAgentLifecycleReady),
WorkspaceStatus: codersdk.WorkspaceStatusRunning,
CreatedAt: ts,
UpdatedAt: ts,
Status: codersdk.WorkspaceStatusRunning,
CreatedAt: ts,
UpdatedAt: ts,
CurrentState: &codersdk.TaskStateEntry{
State: codersdk.TaskStateWorking,
Timestamp: ts.Add(time.Second),
Message: "Thinking furiously...",
},
Status: codersdk.TaskStatusActive,
})
return
default:
t.Errorf("unexpected path: %s", r.URL.Path)
}
-420
View File
@@ -1,420 +0,0 @@
package cli_test
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"slices"
"strings"
"sync"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
agentapisdk "github.com/coder/agentapi-sdk-go"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/provisioner/echo"
"github.com/coder/coder/v2/provisionersdk/proto"
"github.com/coder/coder/v2/testutil"
)
// This test performs an integration-style test for tasks functionality.
//
//nolint:tparallel // The sub-tests of this test must be run sequentially.
func Test_Tasks(t *testing.T) {
t.Parallel()
// Given: a template configured for tasks
var (
ctx = testutil.Context(t, testutil.WaitLong)
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner = coderdtest.CreateFirstUser(t, client)
userClient, _ = coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
initMsg = agentapisdk.Message{
Content: "test task input for " + t.Name(),
Id: 0,
Role: "user",
Time: time.Now().UTC(),
}
authToken = uuid.NewString()
echoAgentAPI = startFakeAgentAPI(t, fakeAgentAPIEcho(ctx, t, initMsg, "hello"))
taskTpl = createAITaskTemplate(t, client, owner.OrganizationID, withAgentToken(authToken), withSidebarURL(echoAgentAPI.URL()))
taskName = strings.ReplaceAll(testutil.GetRandomName(t), "_", "-")
)
for _, tc := range []struct {
name string
cmdArgs []string
assertFn func(stdout string, userClient *codersdk.Client)
}{
{
name: "create task",
cmdArgs: []string{"exp", "task", "create", "test task input for " + t.Name(), "--name", taskName, "--template", taskTpl.Name},
assertFn: func(stdout string, userClient *codersdk.Client) {
require.Contains(t, stdout, taskName, "task name should be in output")
},
},
{
name: "list tasks after create",
cmdArgs: []string{"exp", "task", "list", "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var tasks []codersdk.Task
err := json.NewDecoder(strings.NewReader(stdout)).Decode(&tasks)
require.NoError(t, err, "list output should unmarshal properly")
require.Len(t, tasks, 1, "expected one task")
require.Equal(t, taskName, tasks[0].Name, "task name should match")
require.Equal(t, initMsg.Content, tasks[0].InitialPrompt, "initial prompt should match")
require.True(t, tasks[0].WorkspaceID.Valid, "workspace should be created")
// For the next test, we need to wait for the workspace to be healthy
ws := coderdtest.MustWorkspace(t, userClient, tasks[0].WorkspaceID.UUID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, ws.LatestBuild.ID)
agentClient := agentsdk.New(client.URL, agentsdk.WithFixedToken(authToken))
_ = agenttest.New(t, client.URL, authToken, func(o *agent.Options) {
o.Client = agentClient
})
coderdtest.NewWorkspaceAgentWaiter(t, userClient, tasks[0].WorkspaceID.UUID).WithContext(ctx).WaitFor(coderdtest.AgentsReady)
},
},
{
name: "get task status after create",
cmdArgs: []string{"exp", "task", "status", taskName, "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var task codersdk.Task
require.NoError(t, json.NewDecoder(strings.NewReader(stdout)).Decode(&task), "should unmarshal task status")
require.Equal(t, task.Name, taskName, "task name should match")
require.Equal(t, codersdk.TaskStatusActive, task.Status, "task should be active")
},
},
{
name: "send task message",
cmdArgs: []string{"exp", "task", "send", taskName, "hello"},
// Assertions for this happen in the fake agent API handler.
},
{
name: "read task logs",
cmdArgs: []string{"exp", "task", "logs", taskName, "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var logs []codersdk.TaskLogEntry
require.NoError(t, json.NewDecoder(strings.NewReader(stdout)).Decode(&logs), "should unmarshal task logs")
require.Len(t, logs, 3, "should have 3 logs")
require.Equal(t, logs[0].Content, initMsg.Content, "first message should be the init message")
require.Equal(t, logs[0].Type, codersdk.TaskLogTypeInput, "first message should be an input")
require.Equal(t, logs[1].Content, "hello", "second message should be the sent message")
require.Equal(t, logs[1].Type, codersdk.TaskLogTypeInput, "second message should be an input")
require.Equal(t, logs[2].Content, "hello", "third message should be the echoed message")
require.Equal(t, logs[2].Type, codersdk.TaskLogTypeOutput, "third message should be an output")
},
},
{
name: "delete task",
cmdArgs: []string{"exp", "task", "delete", taskName, "--yes"},
assertFn: func(stdout string, userClient *codersdk.Client) {
// The task should eventually no longer show up in the list of tasks
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
expClient := codersdk.NewExperimentalClient(userClient)
tasks, err := expClient.Tasks(ctx, &codersdk.TasksFilter{})
if !assert.NoError(t, err) {
return false
}
return slices.IndexFunc(tasks, func(task codersdk.Task) bool {
return task.Name == taskName
}) == -1
}, testutil.IntervalMedium)
},
},
} {
t.Logf("test case: %q", tc.name)
var stdout strings.Builder
inv, root := clitest.New(t, tc.cmdArgs...)
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
require.NoError(t, inv.WithContext(ctx).Run(), tc.name)
if tc.assertFn != nil {
tc.assertFn(stdout.String(), userClient)
}
}
}
func fakeAgentAPIEcho(ctx context.Context, t testing.TB, initMsg agentapisdk.Message, want ...string) map[string]http.HandlerFunc {
t.Helper()
var mmu sync.RWMutex
msgs := []agentapisdk.Message{initMsg}
wantCpy := make([]string, len(want))
copy(wantCpy, want)
t.Cleanup(func() {
mmu.Lock()
defer mmu.Unlock()
if !t.Failed() {
assert.Empty(t, wantCpy, "not all expected messages received: missing %v", wantCpy)
}
})
writeAgentAPIError := func(w http.ResponseWriter, err error, status int) {
w.WriteHeader(status)
_ = json.NewEncoder(w).Encode(agentapisdk.ErrorModel{
Errors: ptr.Ref([]agentapisdk.ErrorDetail{
{
Message: ptr.Ref(err.Error()),
},
}),
})
}
return map[string]http.HandlerFunc{
"/status": func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(agentapisdk.GetStatusResponse{
Status: "stable",
})
},
"/messages": func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
mmu.RLock()
defer mmu.RUnlock()
bs, err := json.Marshal(agentapisdk.GetMessagesResponse{
Messages: msgs,
})
if err != nil {
writeAgentAPIError(w, err, http.StatusBadRequest)
return
}
_, _ = w.Write(bs)
},
"/message": func(w http.ResponseWriter, r *http.Request) {
mmu.Lock()
defer mmu.Unlock()
var params agentapisdk.PostMessageParams
w.Header().Set("Content-Type", "application/json")
err := json.NewDecoder(r.Body).Decode(&params)
if !assert.NoError(t, err, "decode message") {
writeAgentAPIError(w, err, http.StatusBadRequest)
return
}
if len(wantCpy) == 0 {
assert.Fail(t, "unexpected message", "received message %v, but no more expected messages", params)
writeAgentAPIError(w, xerrors.New("no more expected messages"), http.StatusBadRequest)
return
}
exp := wantCpy[0]
wantCpy = wantCpy[1:]
if !assert.Equal(t, exp, params.Content, "message content mismatch") {
writeAgentAPIError(w, xerrors.New("unexpected message content: expected "+exp+", got "+params.Content), http.StatusBadRequest)
return
}
msgs = append(msgs, agentapisdk.Message{
Id: int64(len(msgs) + 1),
Content: params.Content,
Role: agentapisdk.RoleUser,
Time: time.Now().UTC(),
})
msgs = append(msgs, agentapisdk.Message{
Id: int64(len(msgs) + 1),
Content: params.Content,
Role: agentapisdk.RoleAgent,
Time: time.Now().UTC(),
})
assert.NoError(t, json.NewEncoder(w).Encode(agentapisdk.PostMessageResponse{
Ok: true,
}))
},
}
}
// setupCLITaskTest creates a test workspace with an AI task template and agent,
// with a fake agent API configured with the provided set of handlers.
// Returns the user client and the created task.
func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[string]http.HandlerFunc) (*codersdk.Client, codersdk.Task) {
t.Helper()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
fakeAPI := startFakeAgentAPI(t, agentAPIHandlers)
authToken := uuid.NewString()
template := createAITaskTemplate(t, client, owner.OrganizationID, withSidebarURL(fakeAPI.URL()), withAgentToken(authToken))
wantPrompt := "test prompt"
exp := codersdk.NewExperimentalClient(userClient)
task, err := exp.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
TemplateVersionID: template.ActiveVersionID,
Input: wantPrompt,
Name: "test-task",
})
require.NoError(t, err)
// Wait for the task's underlying workspace to be built
require.True(t, task.WorkspaceID.Valid, "task should have a workspace ID")
workspace, err := userClient.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
agentClient := agentsdk.New(client.URL, agentsdk.WithFixedToken(authToken))
_ = agenttest.New(t, client.URL, authToken, func(o *agent.Options) {
o.Client = agentClient
})
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).
WaitFor(coderdtest.AgentsReady)
return userClient, task
}
// createAITaskTemplate creates a template configured for AI tasks with a sidebar app.
func createAITaskTemplate(t *testing.T, client *codersdk.Client, orgID uuid.UUID, opts ...aiTemplateOpt) codersdk.Template {
t.Helper()
opt := aiTemplateOpts{
authToken: uuid.NewString(),
}
for _, o := range opts {
o(&opt)
}
taskAppID := uuid.New()
version := coderdtest.CreateTemplateVersion(t, client, orgID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionPlan: []*proto.Response{
{
Type: &proto.Response_Plan{
Plan: &proto.PlanComplete{
HasAiTasks: true,
},
},
},
},
ProvisionApply: []*proto.Response{
{
Type: &proto.Response_Apply{
Apply: &proto.ApplyComplete{
Resources: []*proto.Resource{
{
Name: "example",
Type: "aws_instance",
Agents: []*proto.Agent{
{
Id: uuid.NewString(),
Name: "example",
Auth: &proto.Agent_Token{
Token: opt.authToken,
},
Apps: []*proto.App{
{
Id: taskAppID.String(),
Slug: "task-sidebar",
DisplayName: "Task Sidebar",
Url: opt.appURL,
},
},
},
},
},
},
AiTasks: []*proto.AITask{
{
AppId: taskAppID.String(),
},
},
},
},
},
},
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, orgID, version.ID)
return template
}
// fakeAgentAPI implements a fake AgentAPI HTTP server for testing.
type fakeAgentAPI struct {
t *testing.T
server *httptest.Server
handlers map[string]http.HandlerFunc
called map[string]bool
mu sync.Mutex
}
// startFakeAgentAPI starts an HTTP server that implements the AgentAPI endpoints.
// handlers is a map of path -> handler function.
func startFakeAgentAPI(t *testing.T, handlers map[string]http.HandlerFunc) *fakeAgentAPI {
t.Helper()
fake := &fakeAgentAPI{
t: t,
handlers: handlers,
called: make(map[string]bool),
}
mux := http.NewServeMux()
// Register all provided handlers with call tracking
for path, handler := range handlers {
mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
fake.mu.Lock()
fake.called[path] = true
fake.mu.Unlock()
handler(w, r)
})
}
knownEndpoints := []string{"/status", "/messages", "/message"}
for _, endpoint := range knownEndpoints {
if handlers[endpoint] == nil {
endpoint := endpoint // capture loop variable
mux.HandleFunc(endpoint, func(w http.ResponseWriter, r *http.Request) {
t.Fatalf("unexpected call to %s %s - no handler defined", r.Method, endpoint)
})
}
}
// Default handler for unknown endpoints should cause the test to fail.
mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
t.Fatalf("unexpected call to %s %s - no handler defined", r.Method, r.URL.Path)
})
fake.server = httptest.NewServer(mux)
// Register cleanup to check that all defined handlers were called
t.Cleanup(func() {
fake.server.Close()
fake.mu.Lock()
for path := range handlers {
if !fake.called[path] {
t.Errorf("handler for %s was defined but never called", path)
}
}
})
return fake
}
func (f *fakeAgentAPI) URL() string {
return f.server.URL
}
type aiTemplateOpts struct {
appURL string
authToken string
}
type aiTemplateOpt func(*aiTemplateOpts)
func withSidebarURL(url string) aiTemplateOpt {
return func(o *aiTemplateOpts) { o.appURL = url }
}
func withAgentToken(token string) aiTemplateOpt {
return func(o *aiTemplateOpts) { o.authToken = token }
}
-355
View File
@@ -1,355 +0,0 @@
package cli_test
import (
"bytes"
"net/url"
"os"
"path"
"runtime"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/pty/ptytest"
)
// mockKeyring is a mock sessionstore.Backend implementation.
type mockKeyring struct {
credentials map[string]string // service name -> credential
}
const mockServiceName = "mock-service-name"
func newMockKeyring() *mockKeyring {
return &mockKeyring{credentials: make(map[string]string)}
}
func (m *mockKeyring) Read(_ *url.URL) (string, error) {
cred, ok := m.credentials[mockServiceName]
if !ok {
return "", os.ErrNotExist
}
return cred, nil
}
func (m *mockKeyring) Write(_ *url.URL, token string) error {
m.credentials[mockServiceName] = token
return nil
}
func (m *mockKeyring) Delete(_ *url.URL) error {
_, ok := m.credentials[mockServiceName]
if !ok {
return os.ErrNotExist
}
delete(m.credentials, mockServiceName)
return nil
}
func TestUseKeyring(t *testing.T) {
// Verify that the --use-keyring flag opts into using a keyring backend for
// storing session tokens instead of plain text files.
t.Parallel()
t.Run("Login", func(t *testing.T) {
t.Parallel()
// Create a test server
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
// Create a pty for interactive prompts
pty := ptytest.New(t)
// Create CLI invocation with --use-keyring flag
inv, cfg := clitest.New(t,
"login",
"--force-tty",
"--use-keyring",
"--no-open",
client.URL.String(),
)
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
// Inject the mock backend before running the command
var root cli.RootCmd
cmd, err := root.Command(root.AGPL())
require.NoError(t, err)
mockBackend := newMockKeyring()
root.WithSessionStorageBackend(mockBackend)
inv.Command = cmd
// Run login in background
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
// Provide the token when prompted
pty.ExpectMatch("Paste your token here:")
pty.WriteLine(client.SessionToken())
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify that session file was NOT created (using keyring instead)
sessionFile := path.Join(string(cfg), "session")
_, err = os.Stat(sessionFile)
require.True(t, os.IsNotExist(err), "session file should not exist when using keyring")
// Verify that the credential IS stored in mock keyring
cred, err := mockBackend.Read(nil)
require.NoError(t, err, "credential should be stored in mock keyring")
require.Equal(t, client.SessionToken(), cred, "stored token should match login token")
})
t.Run("Logout", func(t *testing.T) {
t.Parallel()
// Create a test server
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
// Create a pty for interactive prompts
pty := ptytest.New(t)
// First, login with --use-keyring
loginInv, cfg := clitest.New(t,
"login",
"--force-tty",
"--use-keyring",
"--no-open",
client.URL.String(),
)
loginInv.Stdin = pty.Input()
loginInv.Stdout = pty.Output()
// Inject the mock backend
var loginRoot cli.RootCmd
loginCmd, err := loginRoot.Command(loginRoot.AGPL())
require.NoError(t, err)
mockBackend := newMockKeyring()
loginRoot.WithSessionStorageBackend(mockBackend)
loginInv.Command = loginCmd
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := loginInv.Run()
assert.NoError(t, err)
}()
pty.ExpectMatch("Paste your token here:")
pty.WriteLine(client.SessionToken())
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify credential exists in mock keyring
cred, err := mockBackend.Read(nil)
require.NoError(t, err, "read credential should succeed before logout")
require.NotEmpty(t, cred, "credential should exist after logout")
// Now run logout with --use-keyring
logoutInv, _ := clitest.New(t,
"logout",
"--use-keyring",
"--yes",
"--global-config", string(cfg),
)
// Inject the same mock backend
var logoutRoot cli.RootCmd
logoutCmd, err := logoutRoot.Command(logoutRoot.AGPL())
require.NoError(t, err)
logoutRoot.WithSessionStorageBackend(mockBackend)
logoutInv.Command = logoutCmd
var logoutOut bytes.Buffer
logoutInv.Stdout = &logoutOut
err = logoutInv.Run()
require.NoError(t, err, "logout should succeed")
// Verify the credential was deleted from mock keyring
_, err = mockBackend.Read(nil)
require.ErrorIs(t, err, os.ErrNotExist, "credential should be deleted from keyring after logout")
})
t.Run("OmitFlag", func(t *testing.T) {
t.Parallel()
// Create a test server
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
// Create a pty for interactive prompts
pty := ptytest.New(t)
// --use-keyring flag omitted (should use file-based storage)
inv, cfg := clitest.New(t,
"login",
"--force-tty",
"--no-open",
client.URL.String(),
)
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
pty.ExpectMatch("Paste your token here:")
pty.WriteLine(client.SessionToken())
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify that session file WAS created (not using keyring)
sessionFile := path.Join(string(cfg), "session")
_, err := os.Stat(sessionFile)
require.NoError(t, err, "session file should exist when NOT using --use-keyring")
// Read and verify the token from file
content, err := os.ReadFile(sessionFile)
require.NoError(t, err, "should be able to read session file")
require.Equal(t, client.SessionToken(), string(content), "file should contain the session token")
})
t.Run("EnvironmentVariable", func(t *testing.T) {
t.Parallel()
// Create a test server
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
// Create a pty for interactive prompts
pty := ptytest.New(t)
// Login using CODER_USE_KEYRING environment variable instead of flag
inv, cfg := clitest.New(t,
"login",
"--force-tty",
"--no-open",
client.URL.String(),
)
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
inv.Environ.Set("CODER_USE_KEYRING", "true")
// Inject the mock backend
var root cli.RootCmd
cmd, err := root.Command(root.AGPL())
require.NoError(t, err)
mockBackend := newMockKeyring()
root.WithSessionStorageBackend(mockBackend)
inv.Command = cmd
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
pty.ExpectMatch("Paste your token here:")
pty.WriteLine(client.SessionToken())
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify that session file was NOT created (using keyring via env var)
sessionFile := path.Join(string(cfg), "session")
_, err = os.Stat(sessionFile)
require.True(t, os.IsNotExist(err), "session file should not exist when using keyring via env var")
// Verify credential is in mock keyring
cred, err := mockBackend.Read(nil)
require.NoError(t, err, "credential should be stored in keyring when CODER_USE_KEYRING=true")
require.NotEmpty(t, cred)
})
}
func TestUseKeyringUnsupportedOS(t *testing.T) {
// Verify that trying to use --use-keyring on an unsupported operating system produces
// a helpful error message.
t.Parallel()
// Only run this on an unsupported OS.
if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
t.Skipf("Skipping unsupported OS test on %s where keyring is supported", runtime.GOOS)
}
const expMessage = "keyring storage is not supported on this operating system; remove the --use-keyring flag"
t.Run("LoginWithUnsupportedKeyring", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
// Try to login with --use-keyring on an unsupported OS
inv, _ := clitest.New(t,
"login",
"--use-keyring",
client.URL.String(),
)
// The error should occur immediately, before any prompts
loginErr := inv.Run()
// Verify we got an error about unsupported OS
require.Error(t, loginErr)
require.Contains(t, loginErr.Error(), expMessage)
})
t.Run("LogoutWithUnsupportedKeyring", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
pty := ptytest.New(t)
// First login without keyring to create a session
loginInv, cfg := clitest.New(t,
"login",
"--force-tty",
"--no-open",
client.URL.String(),
)
loginInv.Stdin = pty.Input()
loginInv.Stdout = pty.Output()
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := loginInv.Run()
assert.NoError(t, err)
}()
pty.ExpectMatch("Paste your token here:")
pty.WriteLine(client.SessionToken())
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Now try to logout with --use-keyring on an unsupported OS
logoutInv, _ := clitest.New(t,
"logout",
"--use-keyring",
"--yes",
"--global-config", string(cfg),
)
err := logoutInv.Run()
// Verify we got an error about unsupported OS
require.Error(t, err)
require.Contains(t, err.Error(), expMessage)
})
}
+5 -24
View File
@@ -19,7 +19,6 @@ import (
"github.com/coder/pretty"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/cli/sessionstore"
"github.com/coder/coder/v2/coderd/userpassword"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/serpent"
@@ -115,11 +114,9 @@ func (r *RootCmd) loginWithPassword(
}
sessionToken := resp.SessionToken
err = r.ensureTokenBackend().Write(client.URL, sessionToken)
config := r.createConfig()
err = config.Session().Write(sessionToken)
if err != nil {
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
return errKeyringNotSupported
}
return xerrors.Errorf("write session token: %w", err)
}
@@ -152,15 +149,11 @@ func (r *RootCmd) login() *serpent.Command {
useTokenForSession bool
)
cmd := &serpent.Command{
Use: "login [<url>]",
Short: "Authenticate with Coder deployment",
Long: "By default, the session token is stored in a plain text file. Use the " +
"--use-keyring flag or set CODER_USE_KEYRING=true to store the token in " +
"the operating system keyring instead.",
Use: "login [<url>]",
Short: "Authenticate with Coder deployment",
Middleware: serpent.RequireRangeArgs(0, 1),
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
rawURL := ""
var urlSource string
@@ -205,15 +198,6 @@ func (r *RootCmd) login() *serpent.Command {
return err
}
// Check keyring availability before prompting the user for a token to fail fast.
if r.useKeyring {
backend := r.ensureTokenBackend()
_, err := backend.Read(client.URL)
if err != nil && xerrors.Is(err, sessionstore.ErrNotImplemented) {
return errKeyringNotSupported
}
}
hasFirstUser, err := client.HasFirstUser(ctx)
if err != nil {
return xerrors.Errorf("Failed to check server %q for first user, is the URL correct and is coder accessible from your browser? Error - has initial user: %w", serverURL.String(), err)
@@ -410,11 +394,8 @@ func (r *RootCmd) login() *serpent.Command {
}
config := r.createConfig()
err = r.ensureTokenBackend().Write(client.URL, sessionToken)
err = config.Session().Write(sessionToken)
if err != nil {
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
return errKeyringNotSupported
}
return xerrors.Errorf("write session token: %w", err)
}
err = config.URL().Write(serverURL.String())
+3 -8
View File
@@ -8,7 +8,6 @@ import (
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/cli/sessionstore"
"github.com/coder/serpent"
)
@@ -47,15 +46,11 @@ func (r *RootCmd) logout() *serpent.Command {
errors = append(errors, xerrors.Errorf("remove URL file: %w", err))
}
err = r.ensureTokenBackend().Delete(client.URL)
err = config.Session().Delete()
// Only throw error if the session configuration file is present,
// otherwise the user is already logged out, and we proceed
if err != nil && !xerrors.Is(err, os.ErrNotExist) {
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
errors = append(errors, errKeyringNotSupported)
} else {
errors = append(errors, xerrors.Errorf("remove session token: %w", err))
}
if err != nil && !os.IsNotExist(err) {
errors = append(errors, xerrors.Errorf("remove session file: %w", err))
}
err = config.Organization().Delete()
+4 -21
View File
@@ -43,9 +43,8 @@ func (r *RootCmd) provisionerJobsList() *serpent.Command {
cliui.TableFormat([]provisionerJobRow{}, []string{"created at", "id", "type", "template display name", "status", "queue", "tags"}),
cliui.JSONFormat(),
)
status []string
limit int64
initiator string
status []string
limit int64
)
cmd := &serpent.Command{
@@ -66,18 +65,9 @@ func (r *RootCmd) provisionerJobsList() *serpent.Command {
return xerrors.Errorf("current organization: %w", err)
}
if initiator != "" {
user, err := client.User(ctx, initiator)
if err != nil {
return xerrors.Errorf("initiator not found: %s", initiator)
}
initiator = user.ID.String()
}
jobs, err := client.OrganizationProvisionerJobs(ctx, org.ID, &codersdk.OrganizationProvisionerJobsOptions{
Status: slice.StringEnums[codersdk.ProvisionerJobStatus](status),
Limit: int(limit),
Initiator: initiator,
Status: slice.StringEnums[codersdk.ProvisionerJobStatus](status),
Limit: int(limit),
})
if err != nil {
return xerrors.Errorf("list provisioner jobs: %w", err)
@@ -132,13 +122,6 @@ func (r *RootCmd) provisionerJobsList() *serpent.Command {
Default: "50",
Value: serpent.Int64Of(&limit),
},
{
Flag: "initiator",
FlagShorthand: "i",
Env: "CODER_PROVISIONER_JOB_LIST_INITIATOR",
Description: "Filter by initiator (user ID or username).",
Value: serpent.StringOf(&initiator),
},
}...)
orgContext.AttachOptions(cmd)
+24 -168
View File
@@ -5,7 +5,6 @@ import (
"database/sql"
"encoding/json"
"fmt"
"strings"
"testing"
"time"
@@ -27,32 +26,33 @@ import (
func TestProvisionerJobs(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
client, _, coderdAPI := coderdtest.NewWithAPI(t, &coderdtest.Options{
IncludeProvisionerDaemon: false,
Database: db,
Pubsub: ps,
})
owner := coderdtest.CreateFirstUser(t, client)
templateAdminClient, templateAdmin := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, rbac.ScopedRoleOrgTemplateAdmin(owner.OrganizationID))
memberClient, member := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
// These CLI tests are related to provisioner job CRUD operations and as such
// do not require the overhead of starting a provisioner. Other provisioner job
// functionalities (acquisition etc.) are tested elsewhere.
template := dbgen.Template(t, db, database.Template{
OrganizationID: owner.OrganizationID,
CreatedBy: owner.UserID,
AllowUserCancelWorkspaceJobs: true,
})
version := dbgen.TemplateVersion(t, db, database.TemplateVersion{
OrganizationID: owner.OrganizationID,
CreatedBy: owner.UserID,
TemplateID: uuid.NullUUID{UUID: template.ID, Valid: true},
})
t.Run("Cancel", func(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
client, _, coderdAPI := coderdtest.NewWithAPI(t, &coderdtest.Options{
IncludeProvisionerDaemon: false,
Database: db,
Pubsub: ps,
})
owner := coderdtest.CreateFirstUser(t, client)
templateAdminClient, templateAdmin := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, rbac.ScopedRoleOrgTemplateAdmin(owner.OrganizationID))
memberClient, member := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
// These CLI tests are related to provisioner job CRUD operations and as such
// do not require the overhead of starting a provisioner. Other provisioner job
// functionalities (acquisition etc.) are tested elsewhere.
template := dbgen.Template(t, db, database.Template{
OrganizationID: owner.OrganizationID,
CreatedBy: owner.UserID,
AllowUserCancelWorkspaceJobs: true,
})
version := dbgen.TemplateVersion(t, db, database.TemplateVersion{
OrganizationID: owner.OrganizationID,
CreatedBy: owner.UserID,
TemplateID: uuid.NullUUID{UUID: template.ID, Valid: true},
})
// Test helper to create a provisioner job of a given type with a given input.
prepareJob := func(t *testing.T, jobType database.ProvisionerJobType, input json.RawMessage) database.ProvisionerJob {
t.Helper()
@@ -178,148 +178,4 @@ func TestProvisionerJobs(t *testing.T) {
})
}
})
t.Run("List", func(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
client, _, coderdAPI := coderdtest.NewWithAPI(t, &coderdtest.Options{
IncludeProvisionerDaemon: false,
Database: db,
Pubsub: ps,
})
owner := coderdtest.CreateFirstUser(t, client)
_, member := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
// These CLI tests are related to provisioner job CRUD operations and as such
// do not require the overhead of starting a provisioner. Other provisioner job
// functionalities (acquisition etc.) are tested elsewhere.
template := dbgen.Template(t, db, database.Template{
OrganizationID: owner.OrganizationID,
CreatedBy: owner.UserID,
AllowUserCancelWorkspaceJobs: true,
})
version := dbgen.TemplateVersion(t, db, database.TemplateVersion{
OrganizationID: owner.OrganizationID,
CreatedBy: owner.UserID,
TemplateID: uuid.NullUUID{UUID: template.ID, Valid: true},
})
// Create some test jobs
job1 := dbgen.ProvisionerJob(t, db, coderdAPI.Pubsub, database.ProvisionerJob{
OrganizationID: owner.OrganizationID,
InitiatorID: owner.UserID,
Type: database.ProvisionerJobTypeTemplateVersionImport,
Input: []byte(`{"template_version_id":"` + version.ID.String() + `"}`),
Tags: database.StringMap{provisionersdk.TagScope: provisionersdk.ScopeOrganization},
})
job2 := dbgen.ProvisionerJob(t, db, coderdAPI.Pubsub, database.ProvisionerJob{
OrganizationID: owner.OrganizationID,
InitiatorID: member.ID,
Type: database.ProvisionerJobTypeWorkspaceBuild,
Input: []byte(`{"workspace_build_id":"` + uuid.New().String() + `"}`),
Tags: database.StringMap{provisionersdk.TagScope: provisionersdk.ScopeOrganization},
})
// Test basic list command
t.Run("Basic", func(t *testing.T) {
t.Parallel()
inv, root := clitest.New(t, "provisioner", "jobs", "list")
clitest.SetupConfig(t, client, root)
var buf bytes.Buffer
inv.Stdout = &buf
err := inv.Run()
require.NoError(t, err)
// Should contain both jobs
output := buf.String()
assert.Contains(t, output, job1.ID.String())
assert.Contains(t, output, job2.ID.String())
})
// Test list with JSON output
t.Run("JSON", func(t *testing.T) {
t.Parallel()
inv, root := clitest.New(t, "provisioner", "jobs", "list", "--output", "json")
clitest.SetupConfig(t, client, root)
var buf bytes.Buffer
inv.Stdout = &buf
err := inv.Run()
require.NoError(t, err)
// Parse JSON output
var jobs []codersdk.ProvisionerJob
err = json.Unmarshal(buf.Bytes(), &jobs)
require.NoError(t, err)
// Should contain both jobs
jobIDs := make([]uuid.UUID, len(jobs))
for i, job := range jobs {
jobIDs[i] = job.ID
}
assert.Contains(t, jobIDs, job1.ID)
assert.Contains(t, jobIDs, job2.ID)
})
// Test list with limit
t.Run("Limit", func(t *testing.T) {
t.Parallel()
inv, root := clitest.New(t, "provisioner", "jobs", "list", "--limit", "1")
clitest.SetupConfig(t, client, root)
var buf bytes.Buffer
inv.Stdout = &buf
err := inv.Run()
require.NoError(t, err)
// Should contain at most 1 job
output := buf.String()
jobCount := 0
if strings.Contains(output, job1.ID.String()) {
jobCount++
}
if strings.Contains(output, job2.ID.String()) {
jobCount++
}
assert.LessOrEqual(t, jobCount, 1)
})
// Test list with initiator filter
t.Run("InitiatorFilter", func(t *testing.T) {
t.Parallel()
// Get owner user details to access username
ctx := testutil.Context(t, testutil.WaitShort)
ownerUser, err := client.User(ctx, owner.UserID.String())
require.NoError(t, err)
// Test filtering by initiator (using username)
inv, root := clitest.New(t, "provisioner", "jobs", "list", "--initiator", ownerUser.Username)
clitest.SetupConfig(t, client, root)
var buf bytes.Buffer
inv.Stdout = &buf
err = inv.Run()
require.NoError(t, err)
// Should only contain job1 (initiated by owner)
output := buf.String()
assert.Contains(t, output, job1.ID.String())
assert.NotContains(t, output, job2.ID.String())
})
// Test list with invalid user
t.Run("InvalidUser", func(t *testing.T) {
t.Parallel()
// Test with non-existent user
inv, root := clitest.New(t, "provisioner", "jobs", "list", "--initiator", "nonexistent-user")
clitest.SetupConfig(t, client, root)
var buf bytes.Buffer
inv.Stdout = &buf
err := inv.Run()
require.Error(t, err)
assert.Contains(t, err.Error(), "initiator not found: nonexistent-user")
})
})
}
+7 -50
View File
@@ -37,7 +37,6 @@ import (
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/cli/config"
"github.com/coder/coder/v2/cli/gitauth"
"github.com/coder/coder/v2/cli/sessionstore"
"github.com/coder/coder/v2/cli/telemetry"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
@@ -55,8 +54,6 @@ var (
// ErrSilent is a sentinel error that tells the command handler to just exit with a non-zero error, but not print
// anything.
ErrSilent = xerrors.New("silent error")
errKeyringNotSupported = xerrors.New("keyring storage is not supported on this operating system; remove the --use-keyring flag to use file-based storage")
)
const (
@@ -71,14 +68,12 @@ const (
varVerbose = "verbose"
varDisableDirect = "disable-direct-connections"
varDisableNetworkTelemetry = "disable-network-telemetry"
varUseKeyring = "use-keyring"
notLoggedInMessage = "You are not logged in. Try logging in using '%s login <url>'."
envNoVersionCheck = "CODER_NO_VERSION_WARNING"
envNoFeatureWarning = "CODER_NO_FEATURE_WARNING"
envSessionToken = "CODER_SESSION_TOKEN"
envUseKeyring = "CODER_USE_KEYRING"
//nolint:gosec
envAgentToken = "CODER_AGENT_TOKEN"
//nolint:gosec
@@ -479,15 +474,6 @@ func (r *RootCmd) Command(subcommands []*serpent.Command) (*serpent.Command, err
Value: serpent.BoolOf(&r.disableNetworkTelemetry),
Group: globalGroup,
},
{
Flag: varUseKeyring,
Env: envUseKeyring,
Description: "Store and retrieve session tokens using the operating system " +
"keyring. Currently only supported on Windows. By default, tokens are " +
"stored in plain text files.",
Value: serpent.BoolOf(&r.useKeyring),
Group: globalGroup,
},
{
Flag: "debug-http",
Description: "Debug codersdk HTTP requests.",
@@ -522,7 +508,6 @@ func (r *RootCmd) Command(subcommands []*serpent.Command) (*serpent.Command, err
type RootCmd struct {
clientURL *url.URL
token string
tokenBackend sessionstore.Backend
globalConfig string
header []string
headerCommand string
@@ -537,7 +522,6 @@ type RootCmd struct {
disableNetworkTelemetry bool
noVersionCheck bool
noFeatureWarning bool
useKeyring bool
}
// InitClient creates and configures a new client with authentication, telemetry,
@@ -565,19 +549,14 @@ func (r *RootCmd) InitClient(inv *serpent.Invocation) (*codersdk.Client, error)
return nil, err
}
}
// Read the token stored on disk.
if r.token == "" {
tok, err := r.ensureTokenBackend().Read(r.clientURL)
r.token, err = conf.Session().Read()
// Even if there isn't a token, we don't care.
// Some API routes can be unauthenticated.
if err != nil && !xerrors.Is(err, os.ErrNotExist) {
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
return nil, errKeyringNotSupported
}
if err != nil && !os.IsNotExist(err) {
return nil, err
}
if tok != "" {
r.token = tok
}
}
// Configure HTTP client with transport wrappers
@@ -609,6 +588,7 @@ func (r *RootCmd) InitClient(inv *serpent.Invocation) (*codersdk.Client, error)
// This allows commands to run without requiring authentication, but still use auth if available.
func (r *RootCmd) TryInitClient(inv *serpent.Invocation) (*codersdk.Client, error) {
conf := r.createConfig()
var err error
// Read the client URL stored on disk.
if r.clientURL == nil || r.clientURL.String() == "" {
rawURL, err := conf.URL().Read()
@@ -625,19 +605,14 @@ func (r *RootCmd) TryInitClient(inv *serpent.Invocation) (*codersdk.Client, erro
}
}
}
// Read the token stored on disk.
if r.token == "" {
tok, err := r.ensureTokenBackend().Read(r.clientURL)
r.token, err = conf.Session().Read()
// Even if there isn't a token, we don't care.
// Some API routes can be unauthenticated.
if err != nil && !xerrors.Is(err, os.ErrNotExist) {
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
return nil, errKeyringNotSupported
}
if err != nil && !os.IsNotExist(err) {
return nil, err
}
if tok != "" {
r.token = tok
}
}
// Only configure the client if we have a URL
@@ -713,24 +688,6 @@ func (r *RootCmd) createUnauthenticatedClient(ctx context.Context, serverURL *ur
return client, nil
}
// ensureTokenBackend returns the session token storage backend, creating it if necessary.
// This must be called after flags are parsed so we can respect the value of the --use-keyring
// flag.
func (r *RootCmd) ensureTokenBackend() sessionstore.Backend {
if r.tokenBackend == nil {
if r.useKeyring {
r.tokenBackend = sessionstore.NewKeyring()
} else {
r.tokenBackend = sessionstore.NewFile(r.createConfig)
}
}
return r.tokenBackend
}
func (r *RootCmd) WithSessionStorageBackend(backend sessionstore.Backend) {
r.tokenBackend = backend
}
type AgentAuth struct {
// Agent Client config
agentToken string
-16
View File
@@ -176,22 +176,6 @@ func (r *RootCmd) scheduleStart() *serpent.Command {
}
schedStr = ptr.Ref(sched.String())
// Check if the template has autostart requirements that may conflict
// with the user's schedule.
template, err := client.Template(inv.Context(), workspace.TemplateID)
if err != nil {
return xerrors.Errorf("get template: %w", err)
}
if len(template.AutostartRequirement.DaysOfWeek) > 0 {
_, _ = fmt.Fprintf(
inv.Stderr,
"Warning: your workspace template restricts autostart to the following days: %s.\n"+
"Your workspace may only autostart on these days.\n",
strings.Join(template.AutostartRequirement.DaysOfWeek, ", "),
)
}
}
err = client.UpdateWorkspaceAutostart(inv.Context(), workspace.ID, codersdk.UpdateWorkspaceAutostartRequest{
-64
View File
@@ -373,67 +373,3 @@ func TestScheduleOverride(t *testing.T) {
})
}
}
//nolint:paralleltest // t.Setenv
func TestScheduleStart_TemplateAutostartRequirement(t *testing.T) {
t.Setenv("TZ", "UTC")
loc, err := tz.TimezoneIANA()
require.NoError(t, err)
require.Equal(t, "UTC", loc.String())
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
// Update template to have autostart requirement
// Note: In AGPL, this will be ignored and all days will be allowed (enterprise feature).
template, err = client.UpdateTemplateMeta(context.Background(), template.ID, codersdk.UpdateTemplateMeta{
AutostartRequirement: &codersdk.TemplateAutostartRequirement{
DaysOfWeek: []string{"monday", "wednesday", "friday"},
},
})
require.NoError(t, err)
// Verify the template - in AGPL, AutostartRequirement will have all days (enterprise feature)
template, err = client.Template(context.Background(), template.ID)
require.NoError(t, err)
require.NotEmpty(t, template.AutostartRequirement.DaysOfWeek, "template should have autostart requirement days")
workspace := coderdtest.CreateWorkspace(t, client, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
t.Run("ShowsWarning", func(t *testing.T) {
// When: user sets autostart schedule
inv, root := clitest.New(t,
"schedule", "start", workspace.Name, "9:30AM", "Mon-Fri",
)
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Then: warning should be shown
// In AGPL, this will show all days (enterprise feature defaults to all days allowed)
pty.ExpectMatch("Warning")
pty.ExpectMatch("may only autostart")
})
t.Run("NoWarningWhenManual", func(t *testing.T) {
// When: user sets manual schedule
inv, root := clitest.New(t,
"schedule", "start", workspace.Name, "manual",
)
clitest.SetupConfig(t, client, root)
var stderrBuf bytes.Buffer
inv.Stderr = &stderrBuf
require.NoError(t, inv.Run())
// Then: no warning should be shown on stderr
stderrOutput := stderrBuf.String()
require.NotContains(t, stderrOutput, "Warning")
})
}
+40 -84
View File
@@ -29,7 +29,6 @@ import (
"strings"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/charmbracelet/lipgloss"
@@ -1029,7 +1028,7 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
defer shutdownConns()
// Ensures that old database entries are cleaned up over time!
purger := dbpurge.New(ctx, logger.Named("dbpurge"), options.Database, options.DeploymentValues, quartz.NewReal())
purger := dbpurge.New(ctx, logger.Named("dbpurge"), options.Database, quartz.NewReal())
defer purger.Close()
// Updates workspace usage
@@ -1378,7 +1377,6 @@ func IsLocalURL(ctx context.Context, u *url.URL) (bool, error) {
}
func shutdownWithTimeout(shutdown func(context.Context) error, timeout time.Duration) error {
// nolint:gocritic // The magic number is parameterized.
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
return shutdown(ctx)
@@ -1476,7 +1474,6 @@ func newProvisionerDaemon(
Listener: terraformServer,
Logger: provisionerLogger,
WorkDirectory: workDir,
Experiments: coderAPI.Experiments,
},
CachePath: tfDir,
Tracer: tracer,
@@ -2137,90 +2134,50 @@ func startBuiltinPostgres(ctx context.Context, cfg config.Root, logger slog.Logg
return "", nil, xerrors.New("The built-in PostgreSQL cannot run as the root user. Create a non-root user and run again!")
}
// Ensure a password and port have been generated!
connectionURL, err := embeddedPostgresURL(cfg)
if err != nil {
return "", nil, err
}
pgPassword, err := cfg.PostgresPassword().Read()
if err != nil {
return "", nil, xerrors.Errorf("read postgres password: %w", err)
}
pgPortRaw, err := cfg.PostgresPort().Read()
if err != nil {
return "", nil, xerrors.Errorf("read postgres port: %w", err)
}
pgPort, err := strconv.ParseUint(pgPortRaw, 10, 16)
if err != nil {
return "", nil, xerrors.Errorf("parse postgres port: %w", err)
}
cachePath := filepath.Join(cfg.PostgresPath(), "cache")
if customCacheDir != "" {
cachePath = filepath.Join(customCacheDir, "postgres")
}
stdlibLogger := slog.Stdlib(ctx, logger.Named("postgres"), slog.LevelDebug)
// If the port is not defined, an available port will be found dynamically. This has
// implications in CI because there is no way to tell Postgres to use an ephemeral
// port, so to avoid flaky tests in CI we need to retry EmbeddedPostgres.Start in
// case of a race condition where the port we quickly listen on and close in
// embeddedPostgresURL() is not free by the time the embedded postgres starts up.
// The maximum retry attempts _should_ cover most cases where port conflicts occur
// in CI and cause flaky tests.
maxAttempts := 1
_, err = cfg.PostgresPort().Read()
// Important: if retryPortDiscovery is changed to not include testing.Testing(),
// the retry logic below also needs to be updated to ensure we don't delete an
// existing database
retryPortDiscovery := errors.Is(err, os.ErrNotExist) && testing.Testing()
if retryPortDiscovery {
maxAttempts = 3
ep := embeddedpostgres.NewDatabase(
embeddedpostgres.DefaultConfig().
Version(embeddedpostgres.V13).
BinariesPath(filepath.Join(cfg.PostgresPath(), "bin")).
// Default BinaryRepositoryURL repo1.maven.org is flaky.
BinaryRepositoryURL("https://repo.maven.apache.org/maven2").
DataPath(filepath.Join(cfg.PostgresPath(), "data")).
RuntimePath(filepath.Join(cfg.PostgresPath(), "runtime")).
CachePath(cachePath).
Username("coder").
Password(pgPassword).
Database("coder").
Encoding("UTF8").
Port(uint32(pgPort)).
Logger(stdlibLogger.Writer()),
)
err = ep.Start()
if err != nil {
return "", nil, xerrors.Errorf("Failed to start built-in PostgreSQL. Optionally, specify an external deployment with `--postgres-url`: %w", err)
}
var startErr error
for attempt := 0; attempt < maxAttempts; attempt++ {
if retryPortDiscovery && attempt > 0 {
// Clean up the data and runtime directories and the port file from the
// previous failed attempt to ensure a clean slate for the next attempt.
_ = os.RemoveAll(filepath.Join(cfg.PostgresPath(), "data"))
_ = os.RemoveAll(filepath.Join(cfg.PostgresPath(), "runtime"))
_ = cfg.PostgresPort().Delete()
}
// Ensure a password and port have been generated.
connectionURL, err := embeddedPostgresURL(cfg)
if err != nil {
return "", nil, err
}
pgPassword, err := cfg.PostgresPassword().Read()
if err != nil {
return "", nil, xerrors.Errorf("read postgres password: %w", err)
}
pgPortRaw, err := cfg.PostgresPort().Read()
if err != nil {
return "", nil, xerrors.Errorf("read postgres port: %w", err)
}
pgPort, err := strconv.ParseUint(pgPortRaw, 10, 16)
if err != nil {
return "", nil, xerrors.Errorf("parse postgres port: %w", err)
}
ep := embeddedpostgres.NewDatabase(
embeddedpostgres.DefaultConfig().
Version(embeddedpostgres.V13).
BinariesPath(filepath.Join(cfg.PostgresPath(), "bin")).
// Default BinaryRepositoryURL repo1.maven.org is flaky.
BinaryRepositoryURL("https://repo.maven.apache.org/maven2").
DataPath(filepath.Join(cfg.PostgresPath(), "data")).
RuntimePath(filepath.Join(cfg.PostgresPath(), "runtime")).
CachePath(cachePath).
Username("coder").
Password(pgPassword).
Database("coder").
Encoding("UTF8").
Port(uint32(pgPort)).
Logger(stdlibLogger.Writer()),
)
startErr = ep.Start()
if startErr == nil {
return connectionURL, ep.Stop, nil
}
logger.Warn(ctx, "failed to start embedded postgres",
slog.F("attempt", attempt+1),
slog.F("max_attempts", maxAttempts),
slog.F("port", pgPort),
slog.Error(startErr),
)
}
return "", nil, xerrors.Errorf("failed to start built-in PostgreSQL after %d attempts. "+
"Optionally, specify an external deployment. See https://coder.com/docs/tutorials/external-database "+
"for more details: %w", maxAttempts, startErr)
return connectionURL, ep.Stop, nil
}
func ConfigureHTTPClient(ctx context.Context, clientCertFile, clientKeyFile string, tlsClientCAFile string) (context.Context, *http.Client, error) {
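Without the usual +/- markers the hunk above is hard to follow; per the line counts it swaps a retry loop around starting embedded PostgreSQL for a single start attempt. For orientation only, the retry-with-cleanup shape that the code comment describes is sketched below. startOnce and cleanupStateDirs are hypothetical stand-ins for the real embedded-postgres calls; this is an illustration, not the actual implementation.

package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for starting the embedded database and wiping its
// data/runtime directories between attempts.
func startOnce() error  { return errors.New("port already in use") }
func cleanupStateDirs() {}

// startWithRetry mirrors the retry-with-cleanup shape described above: retry a
// bounded number of times, cleaning up leftover state before each new attempt.
func startWithRetry(maxAttempts int) error {
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if attempt > 0 {
			cleanupStateDirs()
		}
		if lastErr = startOnce(); lastErr == nil {
			return nil
		}
	}
	return fmt.Errorf("failed after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	fmt.Println(startWithRetry(3))
}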
@@ -2329,7 +2286,7 @@ func ConnectToPostgres(ctx context.Context, logger slog.Logger, driver string, d
var err error
var sqlDB *sql.DB
dbNeedsClosing := true
// nolint:gocritic // Try to connect for 30 seconds.
// Try to connect for 30 seconds.
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
@@ -2425,7 +2382,6 @@ func ConnectToPostgres(ctx context.Context, logger slog.Logger, driver string, d
}
func pingPostgres(ctx context.Context, db *sql.DB) error {
// nolint:gocritic // This is a reasonable magic number for a ping timeout.
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
return db.PingContext(ctx)
@@ -17,6 +17,9 @@ import (
func TestRegenerateVapidKeypair(t *testing.T) {
t.Parallel()
if !dbtestutil.WillUsePostgres() {
t.Skip("this test is only supported on postgres")
}
t.Run("NoExistingVAPIDKeys", func(t *testing.T) {
t.Parallel()
+13 -4
View File
@@ -348,6 +348,9 @@ func TestServer(t *testing.T) {
runGitHubProviderTest := func(t *testing.T, tc testCase) {
t.Parallel()
if !dbtestutil.WillUsePostgres() {
t.Skip("test requires postgres")
}
ctx, cancelFunc := context.WithCancel(testutil.Context(t, testutil.WaitLong))
defer cancelFunc()
@@ -1251,9 +1254,8 @@ func TestServer(t *testing.T) {
t.Logf("error creating request: %s", err.Error())
return false
}
client := &http.Client{}
// nolint:bodyclose
res, err := client.Do(req)
res, err := http.DefaultClient.Do(req)
if err != nil {
t.Logf("error hitting prometheus endpoint: %s", err.Error())
return false
@@ -1314,9 +1316,8 @@ func TestServer(t *testing.T) {
t.Logf("error creating request: %s", err.Error())
return false
}
client := &http.Client{}
// nolint:bodyclose
res, err := client.Do(req)
res, err := http.DefaultClient.Do(req)
if err != nil {
t.Logf("error hitting prometheus endpoint: %s", err.Error())
return false
@@ -2139,6 +2140,10 @@ func TestServerYAMLConfig(t *testing.T) {
func TestConnectToPostgres(t *testing.T) {
t.Parallel()
if !dbtestutil.WillUsePostgres() {
t.Skip("this test does not make sense without postgres")
}
t.Run("Migrate", func(t *testing.T) {
t.Parallel()
@@ -2249,6 +2254,10 @@ type runServerOpts struct {
func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
t.Parallel()
if !dbtestutil.WillUsePostgres() {
t.Skip("this test requires postgres")
}
telemetryServerURL, deployment, snapshot := mockTelemetryServer(t)
dbConnURL, err := dbtestutil.Open(t)
require.NoError(t, err)
-245
View File
@@ -1,245 +0,0 @@
// Package sessionstore provides CLI session token storage mechanisms.
// Operating system keyring storage is intended to have compatibility with other Coder
// applications (e.g. Coder Desktop, Coder provider for JetBrains Toolbox, etc) so that
// applications can read/write the same credential stored in the keyring.
//
// Note that we aren't using an existing Go package zalando/go-keyring here for a few
// reasons. 1) It prescribes the format of the target credential name in the OS keyrings,
// which makes our life difficult for compatibility with other Coder applications. 2)
// It uses init functions that make it difficult to test with. As a result, the OS
// keyring implementations may be adapted from zalando/go-keyring source (i.e. Windows).
package sessionstore
import (
"encoding/json"
"errors"
"net/url"
"os"
"strings"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/config"
)
// Backend is a storage backend for session tokens.
type Backend interface {
// Read returns the session token for the given server URL or an error, if any. It
// will return os.ErrNotExist if no token exists for the given URL.
Read(serverURL *url.URL) (string, error)
// Write stores the session token for the given server URL.
Write(serverURL *url.URL, token string) error
// Delete removes the session token for the given server URL or an error, if any.
// It will return os.ErrNotExist error if no token exists to delete.
Delete(serverURL *url.URL) error
}
var (
// ErrSetDataTooBig is returned if `keyringProvider.Set` was called with too much data.
// On macOS: The combination of service, username & password should not exceed ~3000 bytes
// On Windows: The service is limited to 32KiB while the password is limited to 2560 bytes
ErrSetDataTooBig = xerrors.New("data passed to Set was too big")
// ErrNotImplemented represents when keyring usage is not implemented on the current
// operating system.
ErrNotImplemented = xerrors.New("not implemented")
)
const (
// defaultServiceName is the service name used in keyrings for storing Coder CLI session
// tokens.
defaultServiceName = "coder-v2-credentials"
)
// keyringProvider represents an operating system keyring. The expectation
// is these methods operate on the user/login keyring.
type keyringProvider interface {
// Set stores the given credential for a service name in the operating system
// keyring.
Set(service, credential string) error
// Get retrieves the credential from the keyring. It must return os.ErrNotExist
// if the credential is not found.
Get(service string) ([]byte, error)
// Delete deletes the credential from the keyring. It must return os.ErrNotExist
// if the credential is not found.
Delete(service string) error
}
// credential represents a single credential entry.
type credential struct {
CoderURL string `json:"coder_url"`
APIToken string `json:"api_token"`
}
// credentialsMap represents the JSON structure stored in the operating system keyring.
// It supports storing multiple credentials for different server URLs.
type credentialsMap map[string]credential
// normalizeHost returns a normalized version of the URL host for use as a map key.
func normalizeHost(u *url.URL) (string, error) {
if u == nil || u.Host == "" {
return "", xerrors.New("nil server URL")
}
return strings.TrimSpace(strings.ToLower(u.Host)), nil
}
// parseCredentialsJSON parses the JSON from the keyring into a credentialsMap.
func parseCredentialsJSON(jsonData []byte) (credentialsMap, error) {
if len(jsonData) == 0 {
return make(credentialsMap), nil
}
var creds credentialsMap
if err := json.Unmarshal(jsonData, &creds); err != nil {
return nil, xerrors.Errorf("unmarshal credentials: %w", err)
}
return creds, nil
}
// Keyring is a Backend that exclusively stores the session token in the operating
// system keyring. Happy path usage of this type should start with NewKeyring.
// It stores a JSON object in the keyring that supports multiple credentials for
// different server URLs, providing compatibility with Coder Desktop and other Coder
// applications.
type Keyring struct {
provider keyringProvider
serviceName string
}
// NewKeyring creates a Keyring with the default service name for production use.
func NewKeyring() Keyring {
return Keyring{
provider: operatingSystemKeyring{},
serviceName: defaultServiceName,
}
}
// NewKeyringWithService creates a Keyring Backend that stores credentials under the
// specified service name. This is primarily intended for testing to avoid conflicts
// with production credentials and collisions between tests.
func NewKeyringWithService(serviceName string) Keyring {
return Keyring{
provider: operatingSystemKeyring{},
serviceName: serviceName,
}
}
func (o Keyring) Read(serverURL *url.URL) (string, error) {
host, err := normalizeHost(serverURL)
if err != nil {
return "", err
}
credJSON, err := o.provider.Get(o.serviceName)
if err != nil {
return "", err
}
if len(credJSON) == 0 {
return "", os.ErrNotExist
}
creds, err := parseCredentialsJSON(credJSON)
if err != nil {
return "", xerrors.Errorf("read: parse existing credentials: %w", err)
}
// Return the credential for the specified URL
cred, ok := creds[host]
if !ok {
return "", os.ErrNotExist
}
return cred.APIToken, nil
}
func (o Keyring) Write(serverURL *url.URL, token string) error {
host, err := normalizeHost(serverURL)
if err != nil {
return err
}
existingJSON, err := o.provider.Get(o.serviceName)
if err != nil && !errors.Is(err, os.ErrNotExist) {
return xerrors.Errorf("read existing credentials: %w", err)
}
creds, err := parseCredentialsJSON(existingJSON)
if err != nil {
return xerrors.Errorf("write: parse existing credentials: %w", err)
}
// Upsert the credential for this URL.
creds[host] = credential{
CoderURL: host,
APIToken: token,
}
credsJSON, err := json.Marshal(creds)
if err != nil {
return xerrors.Errorf("marshal credentials: %w", err)
}
err = o.provider.Set(o.serviceName, string(credsJSON))
if err != nil {
return xerrors.Errorf("write credentials to keyring: %w", err)
}
return nil
}
func (o Keyring) Delete(serverURL *url.URL) error {
host, err := normalizeHost(serverURL)
if err != nil {
return err
}
existingJSON, err := o.provider.Get(o.serviceName)
if err != nil {
return err
}
creds, err := parseCredentialsJSON(existingJSON)
if err != nil {
return xerrors.Errorf("failed to parse existing credentials: %w", err)
}
if _, ok := creds[host]; !ok {
return os.ErrNotExist
}
delete(creds, host)
// Delete the entire keyring entry when no credentials remain.
if len(creds) == 0 {
return o.provider.Delete(o.serviceName)
}
// Write back the updated credentials map.
credsJSON, err := json.Marshal(creds)
if err != nil {
return xerrors.Errorf("failed to marshal credentials: %w", err)
}
return o.provider.Set(o.serviceName, string(credsJSON))
}
// File is a Backend that exclusively stores the session token in a file on disk.
type File struct {
config func() config.Root
}
func NewFile(f func() config.Root) *File {
return &File{config: f}
}
func (f *File) Read(_ *url.URL) (string, error) {
return f.config().Session().Read()
}
func (f *File) Write(_ *url.URL, token string) error {
return f.config().Session().Write(token)
}
func (f *File) Delete(_ *url.URL) error {
return f.config().Session().Delete()
}
-105
View File
@@ -1,105 +0,0 @@
//go:build darwin
package sessionstore
import (
"encoding/base64"
"fmt"
"io"
"os"
"os/exec"
"regexp"
"strings"
)
const (
// fixedUsername is the fixed username used for all keychain entries.
// Since our interface only uses service names, we use a constant username.
fixedUsername = "coder-login-credentials"
execPathKeychain = "/usr/bin/security"
notFoundStr = "could not be found"
)
// operatingSystemKeyring implements keyringProvider for macOS.
// It is largely adapted from the zalando/go-keyring package.
type operatingSystemKeyring struct{}
func (operatingSystemKeyring) Set(service, credential string) error {
// If the added secret has multiple lines or contains non-ASCII characters,
// macOS will hex-encode it on return. To avoid getting garbage back, we
// base64-encode all passwords before storing them.
password := base64.StdEncoding.EncodeToString([]byte(credential))
cmd := exec.Command(execPathKeychain, "-i")
stdIn, err := cmd.StdinPipe()
if err != nil {
return err
}
if err = cmd.Start(); err != nil {
return err
}
command := fmt.Sprintf("add-generic-password -U -s %s -a %s -w %s\n",
shellEscape(service),
shellEscape(fixedUsername),
shellEscape(password))
if len(command) > 4096 {
return ErrSetDataTooBig
}
if _, err := io.WriteString(stdIn, command); err != nil {
return err
}
if err = stdIn.Close(); err != nil {
return err
}
return cmd.Wait()
}
func (operatingSystemKeyring) Get(service string) ([]byte, error) {
out, err := exec.Command(
execPathKeychain,
"find-generic-password",
"-s", service,
"-wa", fixedUsername).CombinedOutput()
if err != nil {
if strings.Contains(string(out), notFoundStr) {
return nil, os.ErrNotExist
}
return nil, err
}
trimStr := strings.TrimSpace(string(out))
return base64.StdEncoding.DecodeString(trimStr)
}
func (operatingSystemKeyring) Delete(service string) error {
out, err := exec.Command(
execPathKeychain,
"delete-generic-password",
"-s", service,
"-a", fixedUsername).CombinedOutput()
if strings.Contains(string(out), notFoundStr) {
return os.ErrNotExist
}
return err
}
// shellEscape returns a shell-escaped version of the string s.
// This is adapted from github.com/zalando/go-keyring/internal/shellescape.
func shellEscape(s string) string {
if len(s) == 0 {
return "''"
}
pattern := regexp.MustCompile(`[^\w@%+=:,./-]`)
if pattern.MatchString(s) {
return "'" + strings.ReplaceAll(s, "'", "'\"'\"'") + "'"
}
return s
}
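As noted in the comment on Set above, credentials are base64-encoded before being handed to the macOS security tool and decoded again by Get, so multi-line or non-ASCII secrets survive unchanged. A minimal, standalone sketch of that round trip (not tied to the keychain) follows:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Encode on write, decode on read; the keychain only ever sees base64 text.
	secret := "line1\nline2 with non-ASCII: résumé"
	stored := base64.StdEncoding.EncodeToString([]byte(secret))
	decoded, err := base64.StdEncoding.DecodeString(stored)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(decoded) == secret) // true
}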
@@ -1,34 +0,0 @@
//go:build darwin
package sessionstore_test
import (
"encoding/base64"
"os/exec"
"testing"
)
const (
execPathKeychain = "/usr/bin/security"
fixedUsername = "coder-login-credentials"
)
func readRawKeychainCredential(t *testing.T, service string) []byte {
t.Helper()
out, err := exec.Command(
execPathKeychain,
"find-generic-password",
"-s", service,
"-wa", fixedUsername).CombinedOutput()
if err != nil {
t.Fatal(err)
}
dst := make([]byte, base64.StdEncoding.DecodedLen(len(out)))
n, err := base64.StdEncoding.Decode(dst, out)
if err != nil {
t.Fatal(err)
}
return dst[:n]
}
@@ -1,121 +0,0 @@
package sessionstore
import (
"encoding/json"
"net/url"
"testing"
"github.com/stretchr/testify/require"
)
func TestNormalizeHost(t *testing.T) {
t.Parallel()
tests := []struct {
name string
url *url.URL
want string
wantErr bool
}{
{
name: "StandardHost",
url: &url.URL{Host: "coder.example.com"},
want: "coder.example.com",
},
{
name: "HostWithPort",
url: &url.URL{Host: "coder.example.com:8080"},
want: "coder.example.com:8080",
},
{
name: "UppercaseHost",
url: &url.URL{Host: "CODER.EXAMPLE.COM"},
want: "coder.example.com",
},
{
name: "HostWithWhitespace",
url: &url.URL{Host: " coder.example.com "},
want: "coder.example.com",
},
{
name: "NilURL",
url: nil,
want: "",
wantErr: true,
},
{
name: "EmptyHost",
url: &url.URL{Host: ""},
want: "",
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
got, err := normalizeHost(tt.url)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tt.want, got)
})
}
}
func TestParseCredentialsJSON(t *testing.T) {
t.Parallel()
t.Run("Empty", func(t *testing.T) {
t.Parallel()
creds, err := parseCredentialsJSON(nil)
require.NoError(t, err)
require.NotNil(t, creds)
require.Empty(t, creds)
})
t.Run("NewFormat", func(t *testing.T) {
t.Parallel()
jsonData := []byte(`{
"coder1.example.com": {"coder_url": "coder1.example.com", "api_token": "token1"},
"coder2.example.com": {"coder_url": "coder2.example.com", "api_token": "token2"}
}`)
creds, err := parseCredentialsJSON(jsonData)
require.NoError(t, err)
require.Len(t, creds, 2)
require.Equal(t, "token1", creds["coder1.example.com"].APIToken)
require.Equal(t, "token2", creds["coder2.example.com"].APIToken)
})
t.Run("InvalidJSON", func(t *testing.T) {
t.Parallel()
jsonData := []byte(`{invalid json}`)
_, err := parseCredentialsJSON(jsonData)
require.Error(t, err)
})
}
func TestCredentialsMap_RoundTrip(t *testing.T) {
t.Parallel()
creds := credentialsMap{
"coder1.example.com": {
CoderURL: "coder1.example.com",
APIToken: "token1",
},
"coder2.example.com:8080": {
CoderURL: "coder2.example.com:8080",
APIToken: "token2",
},
}
jsonData, err := json.Marshal(creds)
require.NoError(t, err)
parsed, err := parseCredentialsJSON(jsonData)
require.NoError(t, err)
require.Equal(t, creds, parsed)
}
-17
View File
@@ -1,17 +0,0 @@
//go:build !windows && !darwin
package sessionstore
type operatingSystemKeyring struct{}
func (operatingSystemKeyring) Set(_, _ string) error {
return ErrNotImplemented
}
func (operatingSystemKeyring) Get(_ string) ([]byte, error) {
return nil, ErrNotImplemented
}
func (operatingSystemKeyring) Delete(_ string) error {
return ErrNotImplemented
}

Some files were not shown because too many files have changed in this diff.