Compare commits


16 Commits

Author SHA1 Message Date
Mathias Fredriksson f9b38be2f3 feat: add TunnelPeers to coordinator and populate workspace connections
Add a TunnelPeers method to the CoordinatorV2 interface that returns
active tunnel peers for a given agent. The in-memory coordinator reads
tunnels.byDst under RLock; the enterprise pgCoord queries
tailnet_tunnels joined with tailnet_peers filtered by dst_id, applies
heartbeat filtering, and resolves the best mapping per peer (NODE beats
LOST).

Replace the hardcoded placeholder connection data in
db2sdk.WorkspaceAgent with real coordinator tunnel data. IP addresses
are parsed from the peer node addresses, and status is mapped from the
coordinator peer update kind (NODE -> ongoing, LOST -> control_lost).

Thread the authenticated user ID into the client peer name at both
coordination entry points so tunnel peer data carries user identity.

The Type field on WorkspaceConnection is left empty since the
coordinator operates at the tunnel layer and does not know the
application type (SSH, VS Code, etc.). That information comes from
connection logs (Ethan's task) and can be merged later.
2026-02-09 12:01:40 +00:00
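
A minimal sketch of the new surface, assuming field names taken from the
unit-test commit below (ID, Name, Node, Status, Start) and the generated
tailnet proto types; the exact shapes in the repo may differ:

package tailnet

import (
	"time"

	"github.com/google/uuid"

	"github.com/coder/coder/v2/tailnet/proto"
)

// TunnelPeerInfo describes a peer holding an active tunnel to an agent.
// Field names are inferred from the unit-test commit below; illustrative only.
type TunnelPeerInfo struct {
	ID     uuid.UUID                                // peer (client) ID
	Name   string                                   // carries the authenticated user ID
	Node   *proto.Node                              // last node update from the peer
	Status proto.CoordinateResponse_PeerUpdate_Kind // NODE or LOST
	Start  time.Time                                // when the tunnel was established
}

// CoordinatorV2 gains one method (existing methods elided).
type CoordinatorV2 interface {
	// TunnelPeers returns peers with active tunnels to the given agent,
	// or nil if the agent is unknown to this coordinator.
	TunnelPeers(agentID uuid.UUID) []TunnelPeerInfo
}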
Ethan Dickson 270e52537d feat: populate WorkspaceAgent.connections from connection_logs (partial) 2026-02-09 11:52:58 +00:00
Spike Curtis e409f3d656 feat: add short description to tailnet connections 2026-02-09 11:42:55 +00:00
Mathias Fredriksson 3d506178ed Revert "feat(tailnet): add TunnelPeers method to CoordinatorV2 interface"
This reverts commit 89aef9f5d1.
2026-02-09 10:47:31 +00:00
Mathias Fredriksson d67c8e49e6 Revert "feat(enterprise/tailnet): implement TunnelPeers on pgCoord"
This reverts commit 5bab1f33ec.
2026-02-09 10:47:30 +00:00
Mathias Fredriksson 205c7204ef Revert "fix(coderd): use real coordinator tunnel data for workspace connections"
This reverts commit ec9bdf126e.
2026-02-09 10:47:28 +00:00
Mathias Fredriksson 6125f01e7d Revert "test(tailnet): add unit tests for TunnelPeers on in-memory coordinator"
This reverts commit 5625d4fcf5.
2026-02-09 10:47:24 +00:00
Mathias Fredriksson 5625d4fcf5 test(tailnet): add unit tests for TunnelPeers on in-memory coordinator
Cover four cases: nil return for unknown agent, connected client with
full field assertions (ID, Name, Node, Status, Start), client
disconnect removing the peer, and multiple concurrent clients.
2026-02-09 10:39:14 +00:00
Mathias Fredriksson ec9bdf126e fix(coderd): use real coordinator tunnel data for workspace connections
Previously WorkspaceAgent() returned hardcoded fake connections for
connection testing. Replace this with real data from the coordinator's
TunnelPeers method, mapping tunnel peer info to WorkspaceConnection
structs with IP addresses parsed from node addresses and status
derived from the peer update kind.

Also thread the authenticated user's ID into the client peer name
at both coordination entry points (workspaceAgentClientCoordinate
and tailnetRPCConn) so tunnel peer data includes user identity
instead of a generic "client" name.
2026-02-09 10:33:34 +00:00
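
The status mapping described here reduces to a small switch; a hedged
sketch, assuming codersdk exposes a string-backed WorkspaceConnectionStatus
with "ongoing" and "control_lost" values (the constant names are
hypothetical):

// mapPeerStatus converts a coordinator peer update kind into an SDK
// connection status. Constant names are illustrative assumptions.
func mapPeerStatus(kind proto.CoordinateResponse_PeerUpdate_Kind) codersdk.WorkspaceConnectionStatus {
	switch kind {
	case proto.CoordinateResponse_PeerUpdate_NODE:
		// The coordinator has a live node mapping for this peer.
		return codersdk.WorkspaceConnectionStatusOngoing
	case proto.CoordinateResponse_PeerUpdate_LOST:
		// The peer's coordinator missed its heartbeat.
		return codersdk.WorkspaceConnectionStatusControlLost
	default:
		// Other kinds (e.g. DISCONNECTED) are not surfaced as connections.
		return ""
	}
}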
Mathias Fredriksson 5bab1f33ec feat(enterprise/tailnet): implement TunnelPeers on pgCoord
Previously TunnelPeers returned nil with a TODO comment. This
implements it by querying GetTailnetTunnelPeerBindingsByDstID,
unmarshalling proto nodes, filtering by coordinator heartbeats,
and selecting the best mapping per peer using the same NODE-beats-LOST
logic from bestMappings.

The Name field on TunnelPeerInfo is left empty since tailnet_peers
does not store peer names.
2026-02-09 10:33:31 +00:00
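
The NODE-beats-LOST selection amounts to keeping one mapping per peer and
upgrading LOST to NODE when both exist. A simplified sketch of that idea
(the mapping struct here is a stand-in for pgCoord's internal type, not
the literal bestMappings code):

type mapping struct {
	peer uuid.UUID
	kind proto.CoordinateResponse_PeerUpdate_Kind
	node *proto.Node
}

// bestMappingPerPeer keeps the best mapping per peer ID, preferring NODE
// over LOST when multiple coordinators report the same peer.
func bestMappingPerPeer(mappings []mapping) map[uuid.UUID]mapping {
	best := make(map[uuid.UUID]mapping)
	for _, m := range mappings {
		prev, ok := best[m.peer]
		if !ok || (prev.kind == proto.CoordinateResponse_PeerUpdate_LOST &&
			m.kind == proto.CoordinateResponse_PeerUpdate_NODE) {
			best[m.peer] = m
		}
	}
	return best
}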
Mathias Fredriksson 89aef9f5d1 feat(tailnet): add TunnelPeers method to CoordinatorV2 interface
Add TunnelPeerInfo struct and TunnelPeers method for querying peers
with active tunnels to a given agent. The in-memory coordinator
implements this by reading tunnels.byDst under RLock. The enterprise
pgCoord has a temporary stub returning nil (real implementation in a
follow-up task).

Also adds GetTailnetTunnelPeerBindingsByDstID SQL query that joins
tailnet_peers with tailnet_tunnels filtered by dst_id, for use by
the pgCoord implementation.
2026-02-09 10:29:55 +00:00
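
The query joins from the tunnel's destination back to its source peers; a
hedged SQL sketch embedded as a Go constant (column selection beyond
src_id/dst_id is an assumption, and the real SQL lives in the repo's
sqlc query files):

// getTailnetTunnelPeerBindingsByDstID is a hypothetical sqlc-style query
// matching the description above; illustrative only.
const getTailnetTunnelPeerBindingsByDstID = `
SELECT tailnet_tunnels.dst_id, tailnet_peers.*
FROM tailnet_tunnels
JOIN tailnet_peers ON tailnet_peers.id = tailnet_tunnels.src_id
WHERE tailnet_tunnels.dst_id = $1
`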
Spike Curtis 40b555238f chore: add hostname and short description to tailnet proto 2026-02-09 10:04:59 +00:00
Spike Curtis 5af4118e7a feat: show connection logs on Workspace 2026-02-09 09:35:47 +00:00
Spike Curtis fab998c6e0 WIP: add temporary example connection data 2026-02-09 08:11:32 +00:00
Spike Curtis 9e8539eae2 chore: renamed to WorkspaceAgentStatus and make gen 2026-02-09 07:52:42 +00:00
Spike Curtis 44ea2e63b8 chore: add basic connection info to SDK response 2026-02-09 04:10:31 +00:00
562 changed files with 11148 additions and 20228 deletions
-4
@@ -1,4 +0,0 @@
# All artifacts of the build processed are dumped here.
# Ignore it for docker context, as all Dockerfiles should build their own
# binaries.
build
+1 -1
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.25.7"
default: "1.25.6"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
+1 -1
@@ -7,5 +7,5 @@ runs:
- name: Install Terraform
uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
with:
terraform_version: 1.14.5
terraform_version: 1.14.1
terraform_wrapper: false
+23 -33
@@ -35,7 +35,7 @@ jobs:
tailnet-integration: ${{ steps.filter.outputs.tailnet-integration }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -157,7 +157,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -181,7 +181,7 @@ jobs:
echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"
- name: golangci-lint cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
with:
path: |
${{ env.LINT_CACHE_DIR }}
@@ -241,13 +241,11 @@ jobs:
lint-actions:
needs: changes
# Only run this job if changes to CI workflow files are detected. This job
# can flake as it reaches out to GitHub to check referenced actions.
if: needs.changes.outputs.ci == 'true'
if: needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -272,7 +270,7 @@ jobs:
if: ${{ !cancelled() }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -329,7 +327,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -381,7 +379,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -489,14 +487,6 @@ jobs:
# macOS will output "The default interactive shell is now zsh" intermittently in CI.
touch ~/.bash_profile && echo "export BASH_SILENCE_DEPRECATION_WARNING=1" >> ~/.bash_profile
- name: Increase PTY limit (macOS)
if: runner.os == 'macOS'
shell: bash
run: |
# Increase PTY limit to avoid exhaustion during tests.
# Default is 511; 999 is the maximum value on CI runner.
sudo sysctl -w kern.tty.ptmx_max=999
- name: Test with PostgreSQL Database (Linux)
if: runner.os == 'Linux'
uses: ./.github/actions/test-go-pg
@@ -586,7 +576,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -648,7 +638,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -720,7 +710,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -747,7 +737,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -780,7 +770,7 @@ jobs:
name: ${{ matrix.variant.name }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -860,7 +850,7 @@ jobs:
if: needs.changes.outputs.site == 'true' || needs.changes.outputs.ci == 'true'
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -941,7 +931,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -1013,7 +1003,7 @@ jobs:
if: always()
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -1128,7 +1118,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -1183,7 +1173,7 @@ jobs:
IMAGE: ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -1194,7 +1184,7 @@ jobs:
persist-credentials: false
- name: GHCR Login
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -1401,7 +1391,7 @@ jobs:
id: attest_main
if: github.ref == 'refs/heads/main'
continue-on-error: true
uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
with:
subject-name: "ghcr.io/coder/coder-preview:main"
predicate-type: "https://slsa.dev/provenance/v1"
@@ -1438,7 +1428,7 @@ jobs:
id: attest_latest
if: github.ref == 'refs/heads/main'
continue-on-error: true
uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
with:
subject-name: "ghcr.io/coder/coder-preview:latest"
predicate-type: "https://slsa.dev/provenance/v1"
@@ -1475,7 +1465,7 @@ jobs:
id: attest_version
if: github.ref == 'refs/heads/main'
continue-on-error: true
uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
with:
subject-name: "ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}"
predicate-type: "https://slsa.dev/provenance/v1"
@@ -1580,7 +1570,7 @@ jobs:
if: needs.changes.outputs.db == 'true' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+4 -4
@@ -36,7 +36,7 @@ jobs:
verdict: ${{ steps.check.outputs.verdict }} # DEPLOY or NOOP
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -65,7 +65,7 @@ jobs:
packages: write # to retag image as dogfood
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -76,7 +76,7 @@ jobs:
persist-credentials: false
- name: GHCR Login
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -146,7 +146,7 @@ jobs:
needs: deploy
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+4 -4
@@ -38,7 +38,7 @@ jobs:
if: github.repository_owner == 'coder'
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -48,7 +48,7 @@ jobs:
persist-credentials: false
- name: Docker login
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -58,11 +58,11 @@ jobs:
run: mkdir base-build-context
- name: Install depot.dev CLI
uses: depot/setup-action@15c09a5f77a0840ad4bce955686522a257853461 # v1.7.1
uses: depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1.6.0
# This uses OIDC authentication, so no auth variables are required.
- name: Build base Docker image via depot.dev
uses: depot/build-push-action@5f3b3c2e5a00f0093de47f657aeaefcedff27d18 # v1.17.0
uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
with:
project: wl5hnrrkns
context: base-build-context
+6 -6
@@ -26,7 +26,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-4' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -42,7 +42,7 @@ jobs:
# on version 2.29 and above.
nix_version: "2.28.5"
- uses: nix-community/cache-nix-action@7df957e333c1e5da7721f60227dbba6d06080569 # v7.0.2
- uses: nix-community/cache-nix-action@106bba72ed8e29c8357661199511ef07790175e9 # v7.0.1
with:
# restore and save a cache using this key
primary-key: nix-${{ runner.os }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
@@ -75,20 +75,20 @@ jobs:
BRANCH_NAME: ${{ steps.branch-name.outputs.current_branch }}
- name: Set up Depot CLI
uses: depot/setup-action@15c09a5f77a0840ad4bce955686522a257853461 # v1.7.1
uses: depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1.6.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
- name: Login to DockerHub
if: github.ref == 'refs/heads/main'
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and push Non-Nix image
uses: depot/build-push-action@5f3b3c2e5a00f0093de47f657aeaefcedff27d18 # v1.17.0
uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
with:
project: b4q6ltmpzh
token: ${{ secrets.DEPOT_TOKEN }}
@@ -125,7 +125,7 @@ jobs:
id-token: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+1 -1
@@ -28,7 +28,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+1 -1
@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+1 -1
@@ -19,7 +19,7 @@ jobs:
packages: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+6 -6
@@ -39,7 +39,7 @@ jobs:
PR_OPEN: ${{ steps.check_pr.outputs.pr_open }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -76,7 +76,7 @@ jobs:
runs-on: "ubuntu-latest"
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -184,7 +184,7 @@ jobs:
pull-requests: write # needed for commenting on PRs
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -228,7 +228,7 @@ jobs:
CODER_IMAGE_TAG: ${{ needs.get_info.outputs.CODER_IMAGE_TAG }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -248,7 +248,7 @@ jobs:
uses: ./.github/actions/setup-sqlc
- name: GHCR Login
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -288,7 +288,7 @@ jobs:
PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+1 -1
@@ -14,7 +14,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+10 -10
@@ -158,7 +158,7 @@ jobs:
version: ${{ steps.version.outputs.version }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -233,7 +233,7 @@ jobs:
cat "$CODER_RELEASE_NOTES_FILE"
- name: Docker Login
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -386,12 +386,12 @@ jobs:
- name: Install depot.dev CLI
if: steps.image-base-tag.outputs.tag != ''
uses: depot/setup-action@15c09a5f77a0840ad4bce955686522a257853461 # v1.7.1
uses: depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1.6.0
# This uses OIDC authentication, so no auth variables are required.
- name: Build base Docker image via depot.dev
if: steps.image-base-tag.outputs.tag != ''
uses: depot/build-push-action@5f3b3c2e5a00f0093de47f657aeaefcedff27d18 # v1.17.0
uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
with:
project: wl5hnrrkns
context: base-build-context
@@ -448,7 +448,7 @@ jobs:
id: attest_base
if: ${{ !inputs.dry_run && steps.image-base-tag.outputs.tag != '' }}
continue-on-error: true
uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
with:
subject-name: ${{ steps.image-base-tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -564,7 +564,7 @@ jobs:
id: attest_main
if: ${{ !inputs.dry_run }}
continue-on-error: true
uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
with:
subject-name: ${{ steps.build_docker.outputs.multiarch_image }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -608,7 +608,7 @@ jobs:
id: attest_latest
if: ${{ !inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true' }}
continue-on-error: true
uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
with:
subject-name: ${{ steps.latest_tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -796,7 +796,7 @@ jobs:
# TODO: skip this if it's not a new release (i.e. a backport). This is
# fine right now because it just makes a PR that we can close.
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -872,7 +872,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -965,7 +965,7 @@ jobs:
if: ${{ !inputs.dry_run }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+1 -1
@@ -20,7 +20,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+3 -3
@@ -27,7 +27,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -69,7 +69,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -146,7 +146,7 @@ jobs:
echo "image=$(cat "$image_job")" >> "$GITHUB_OUTPUT"
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@c1824fd6edce30d7ab345a9989de00bbd46ef284 # v0.34.0
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
+4 -4
@@ -18,12 +18,12 @@ jobs:
pull-requests: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
- name: stale
uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
with:
stale-issue-label: "stale"
stale-pr-label: "stale"
@@ -96,7 +96,7 @@ jobs:
contents: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
@@ -120,7 +120,7 @@ jobs:
actions: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
+1 -1
@@ -21,7 +21,7 @@ jobs:
pull-requests: write # required to post PR review comments by the action
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@e3f713f2d8f53843e71c69a996d56f51aa9adfb9 # v2.14.1
with:
egress-policy: audit
-3
@@ -98,6 +98,3 @@ AGENTS.local.md
# Ignore plans written by AI agents.
PLAN.md
# Ignore any dev licenses
license.txt
+1 -4
@@ -909,10 +909,7 @@ site/src/api/countriesGenerated.ts: site/node_modules/.installed scripts/typegen
(cd site/ && pnpm exec biome format --write src/api/countriesGenerated.ts)
touch "$@"
scripts/metricsdocgen/generated_metrics: $(GO_SRC_FILES)
go run ./scripts/metricsdocgen/scanner > $@
docs/admin/integrations/prometheus.md: node_modules/.installed scripts/metricsdocgen/main.go scripts/metricsdocgen/metrics scripts/metricsdocgen/generated_metrics
docs/admin/integrations/prometheus.md: node_modules/.installed scripts/metricsdocgen/main.go scripts/metricsdocgen/metrics
go run scripts/metricsdocgen/main.go
pnpm exec markdownlint-cli2 --fix ./docs/admin/integrations/prometheus.md
pnpm exec markdown-table-formatter ./docs/admin/integrations/prometheus.md
+6 -10
@@ -111,12 +111,6 @@ type Client interface {
ConnectRPC28(ctx context.Context) (
proto.DRPCAgentClient28, tailnetproto.DRPCTailnetClient28, error,
)
// ConnectRPC28WithRole is like ConnectRPC28 but sends an explicit
// role query parameter to the server. The workspace agent should
// use role "agent" to enable connection monitoring.
ConnectRPC28WithRole(ctx context.Context, role string) (
proto.DRPCAgentClient28, tailnetproto.DRPCTailnetClient28, error,
)
tailnet.DERPMapRewriter
agentsdk.RefreshableSessionTokenProvider
}
@@ -1003,10 +997,8 @@ func (a *agent) run() (retErr error) {
return xerrors.Errorf("refresh token: %w", err)
}
// ConnectRPC returns the dRPC connection we use for the Agent and Tailnet v2+ APIs.
// We pass role "agent" to enable connection monitoring on the server, which tracks
// the agent's connectivity state (first_connected_at, last_connected_at, disconnected_at).
aAPI, tAPI, err := a.client.ConnectRPC28WithRole(a.hardCtx, "agent")
// ConnectRPC returns the dRPC connection we use for the Agent and Tailnet v2+ APIs
aAPI, tAPI, err := a.client.ConnectRPC28(a.hardCtx)
if err != nil {
return err
}
@@ -1385,6 +1377,7 @@ func (a *agent) createOrUpdateNetwork(manifestOK, networkOK *checkpoint) func(co
manifest.DERPForceWebSockets,
manifest.DisableDirectConnections,
keySeed,
manifest.WorkspaceName,
)
if err != nil {
return xerrors.Errorf("create tailnet: %w", err)
@@ -1539,6 +1532,7 @@ func (a *agent) createTailnet(
derpMap *tailcfg.DERPMap,
derpForceWebSockets, disableDirectConnections bool,
keySeed int64,
workspaceName string,
) (_ *tailnet.Conn, err error) {
// Inject `CODER_AGENT_HEADER` into the DERP header.
var header http.Header
@@ -1556,6 +1550,8 @@ func (a *agent) createTailnet(
Logger: a.logger.Named("net.tailnet"),
ListenPort: a.tailnetListenPort,
BlockEndpoints: disableDirectConnections,
ShortDescription: "Workspace Agent",
Hostname: workspaceName,
})
if err != nil {
return nil, xerrors.Errorf("create tailnet: %w", err)
+103 -2
@@ -1,22 +1,37 @@
package agentsocket_test
import (
"context"
"path/filepath"
"runtime"
"testing"
"github.com/google/uuid"
"github.com/spf13/afero"
"github.com/stretchr/testify/require"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agenttest"
agentproto "github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/tailnet"
"github.com/coder/coder/v2/tailnet/tailnettest"
"github.com/coder/coder/v2/testutil"
)
func TestServer(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("StartStop", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
@@ -26,7 +41,7 @@ func TestServer(t *testing.T) {
t.Run("AlreadyStarted", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server1, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
@@ -34,4 +49,90 @@ func TestServer(t *testing.T) {
_, err = agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.ErrorContains(t, err, "create socket")
})
t.Run("AutoSocketPath", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
require.NoError(t, server.Close())
})
}
func TestServerWindowsNotSupported(t *testing.T) {
t.Parallel()
if runtime.GOOS != "windows" {
t.Skip("this test only runs on Windows")
}
t.Run("NewServer", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
_, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.ErrorContains(t, err, "agentsocket is not supported on Windows")
})
t.Run("NewClient", func(t *testing.T) {
t.Parallel()
_, err := agentsocket.NewClient(context.Background(), agentsocket.WithPath("test.sock"))
require.ErrorContains(t, err, "agentsocket is not supported on Windows")
})
}
func TestAgentInitializesOnWindowsWithoutSocketServer(t *testing.T) {
t.Parallel()
if runtime.GOOS != "windows" {
t.Skip("this test only runs on Windows")
}
ctx := testutil.Context(t, testutil.WaitShort)
logger := testutil.Logger(t).Named("agent")
derpMap, _ := tailnettest.RunDERPAndSTUN(t)
coordinator := tailnet.NewCoordinator(logger)
t.Cleanup(func() {
_ = coordinator.Close()
})
statsCh := make(chan *agentproto.Stats, 50)
agentID := uuid.New()
manifest := agentsdk.Manifest{
AgentID: agentID,
AgentName: "test-agent",
WorkspaceName: "test-workspace",
OwnerName: "test-user",
WorkspaceID: uuid.New(),
DERPMap: derpMap,
}
client := agenttest.NewClient(t, logger.Named("agenttest"), agentID, manifest, statsCh, coordinator)
t.Cleanup(client.Close)
options := agent.Options{
Client: client,
Filesystem: afero.NewMemMapFs(),
Logger: logger.Named("agent"),
ReconnectingPTYTimeout: testutil.WaitShort,
EnvironmentVariables: map[string]string{},
SocketPath: "",
}
agnt := agent.New(options)
t.Cleanup(func() {
_ = agnt.Close()
})
startup := testutil.TryReceive(ctx, t, client.GetStartup())
require.NotNil(t, startup, "agent should send startup message")
err := agnt.Close()
require.NoError(t, err, "agent should close cleanly")
}
+17 -11
@@ -2,6 +2,8 @@ package agentsocket_test
import (
"context"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/require"
@@ -28,10 +30,14 @@ func newSocketClient(ctx context.Context, t *testing.T, socketPath string) *agen
func TestDRPCAgentSocketService(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("Ping", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -51,7 +57,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("NewUnit", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -73,7 +79,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitAlreadyStarted", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -103,7 +109,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitAlreadyCompleted", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -142,7 +148,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -172,7 +178,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("NewUnits", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -197,7 +203,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("DependencyAlreadyRegistered", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -232,7 +238,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("DependencyAddedAfterDependentStarted", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -274,7 +280,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnregisteredUnit", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -293,7 +299,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
@@ -317,7 +323,7 @@ func TestDRPCAgentSocketService(t *testing.T) {
t.Run("UnitReady", func(t *testing.T) {
t.Parallel()
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
+6 -47
@@ -4,60 +4,19 @@ package agentsocket
import (
"context"
"fmt"
"net"
"os"
"os/user"
"strings"
"github.com/Microsoft/go-winio"
"golang.org/x/xerrors"
)
const defaultSocketPath = `\\.\pipe\com.coder.agentsocket`
func createSocket(path string) (net.Listener, error) {
if path == "" {
path = defaultSocketPath
}
if !strings.HasPrefix(path, `\\.\pipe\`) {
return nil, xerrors.Errorf("%q is not a valid local socket path", path)
}
user, err := user.Current()
if err != nil {
return nil, fmt.Errorf("unable to look up current user: %w", err)
}
sid := user.Uid
// SecurityDescriptor is in SDDL format. c.f.
// https://learn.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format for full details.
// D: indicates this is a Discretionary Access Control List (DACL), which is Windows-speak for ACLs that allow or
// deny access (as opposed to SACL which controls audit logging).
// P indicates that this DACL is "protected" from being modified thru inheritance
// () delimit access control entries (ACEs), here we only have one, which, allows (A) generic all (GA) access to our
// specific user's security ID (SID).
//
// Note that although Microsoft docs at https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipes warns that
// named pipes are accessible from remote machines in the general case, the `winio` package sets the flag
// windows.FILE_PIPE_REJECT_REMOTE_CLIENTS when creating pipes, so connections from remote machines are always
// denied. This is important because we sort of expect customers to run the Coder agent under a generic user
// account unless they are very sophisticated. We don't want this socket to cross the boundary of the local machine.
configuration := &winio.PipeConfig{
SecurityDescriptor: fmt.Sprintf("D:P(A;;GA;;;%s)", sid),
}
listener, err := winio.ListenPipe(path, configuration)
if err != nil {
return nil, xerrors.Errorf("failed to open named pipe: %w", err)
}
return listener, nil
func createSocket(_ string) (net.Listener, error) {
return nil, xerrors.New("agentsocket is not supported on Windows")
}
func cleanupSocket(path string) error {
return os.Remove(path)
func cleanupSocket(_ string) error {
return nil
}
func dialSocket(ctx context.Context, path string) (net.Conn, error) {
return winio.DialPipeContext(ctx, path)
func dialSocket(_ context.Context, _ string) (net.Conn, error) {
return nil, xerrors.New("agentsocket is not supported on Windows")
}
-10
@@ -124,12 +124,6 @@ func (c *Client) Close() {
c.derpMapOnce.Do(func() { close(c.derpMapUpdates) })
}
func (c *Client) ConnectRPC28WithRole(ctx context.Context, _ string) (
agentproto.DRPCAgentClient28, proto.DRPCTailnetClient28, error,
) {
return c.ConnectRPC28(ctx)
}
func (c *Client) ConnectRPC28(ctx context.Context) (
agentproto.DRPCAgentClient28, proto.DRPCTailnetClient28, error,
) {
@@ -235,10 +229,6 @@ type FakeAgentAPI struct {
pushResourcesMonitoringUsageFunc func(*agentproto.PushResourcesMonitoringUsageRequest) (*agentproto.PushResourcesMonitoringUsageResponse, error)
}
func (*FakeAgentAPI) UpdateAppStatus(context.Context, *agentproto.UpdateAppStatusRequest) (*agentproto.UpdateAppStatusResponse, error) {
panic("unimplemented")
}
func (f *FakeAgentAPI) GetManifest(context.Context, *agentproto.GetManifestRequest) (*agentproto.Manifest, error) {
return f.manifest, nil
}
+330 -544
File diff suppressed because it is too large.
+1 -20
@@ -436,7 +436,7 @@ message CreateSubAgentRequest {
}
repeated DisplayApp display_apps = 6;
optional bytes id = 7;
}
@@ -494,24 +494,6 @@ message ReportBoundaryLogsRequest {
message ReportBoundaryLogsResponse {}
// UpdateAppStatusRequest updates the given Workspace App's status. c.f. agentsdk.PatchAppStatus
message UpdateAppStatusRequest {
string slug = 1;
enum AppStatusState {
WORKING = 0;
IDLE = 1;
COMPLETE = 2;
FAILURE = 3;
}
AppStatusState state = 2;
string message = 3;
string uri = 4;
}
message UpdateAppStatusResponse {}
service Agent {
rpc GetManifest(GetManifestRequest) returns (Manifest);
rpc GetServiceBanner(GetServiceBannerRequest) returns (ServiceBanner);
@@ -530,5 +512,4 @@ service Agent {
rpc DeleteSubAgent(DeleteSubAgentRequest) returns (DeleteSubAgentResponse);
rpc ListSubAgents(ListSubAgentsRequest) returns (ListSubAgentsResponse);
rpc ReportBoundaryLogs(ReportBoundaryLogsRequest) returns (ReportBoundaryLogsResponse);
rpc UpdateAppStatus(UpdateAppStatusRequest) returns (UpdateAppStatusResponse);
}
+1 -41
@@ -56,7 +56,6 @@ type DRPCAgentClient interface {
DeleteSubAgent(ctx context.Context, in *DeleteSubAgentRequest) (*DeleteSubAgentResponse, error)
ListSubAgents(ctx context.Context, in *ListSubAgentsRequest) (*ListSubAgentsResponse, error)
ReportBoundaryLogs(ctx context.Context, in *ReportBoundaryLogsRequest) (*ReportBoundaryLogsResponse, error)
UpdateAppStatus(ctx context.Context, in *UpdateAppStatusRequest) (*UpdateAppStatusResponse, error)
}
type drpcAgentClient struct {
@@ -222,15 +221,6 @@ func (c *drpcAgentClient) ReportBoundaryLogs(ctx context.Context, in *ReportBoun
return out, nil
}
func (c *drpcAgentClient) UpdateAppStatus(ctx context.Context, in *UpdateAppStatusRequest) (*UpdateAppStatusResponse, error) {
out := new(UpdateAppStatusResponse)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/UpdateAppStatus", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
type DRPCAgentServer interface {
GetManifest(context.Context, *GetManifestRequest) (*Manifest, error)
GetServiceBanner(context.Context, *GetServiceBannerRequest) (*ServiceBanner, error)
@@ -249,7 +239,6 @@ type DRPCAgentServer interface {
DeleteSubAgent(context.Context, *DeleteSubAgentRequest) (*DeleteSubAgentResponse, error)
ListSubAgents(context.Context, *ListSubAgentsRequest) (*ListSubAgentsResponse, error)
ReportBoundaryLogs(context.Context, *ReportBoundaryLogsRequest) (*ReportBoundaryLogsResponse, error)
UpdateAppStatus(context.Context, *UpdateAppStatusRequest) (*UpdateAppStatusResponse, error)
}
type DRPCAgentUnimplementedServer struct{}
@@ -322,13 +311,9 @@ func (s *DRPCAgentUnimplementedServer) ReportBoundaryLogs(context.Context, *Repo
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) UpdateAppStatus(context.Context, *UpdateAppStatusRequest) (*UpdateAppStatusResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
type DRPCAgentDescription struct{}
func (DRPCAgentDescription) NumMethods() int { return 18 }
func (DRPCAgentDescription) NumMethods() int { return 17 }
func (DRPCAgentDescription) Method(n int) (string, drpc.Encoding, drpc.Receiver, interface{}, bool) {
switch n {
@@ -485,15 +470,6 @@ func (DRPCAgentDescription) Method(n int) (string, drpc.Encoding, drpc.Receiver,
in1.(*ReportBoundaryLogsRequest),
)
}, DRPCAgentServer.ReportBoundaryLogs, true
case 17:
return "/coder.agent.v2.Agent/UpdateAppStatus", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
UpdateAppStatus(
ctx,
in1.(*UpdateAppStatusRequest),
)
}, DRPCAgentServer.UpdateAppStatus, true
default:
return "", nil, nil, nil, false
}
@@ -774,19 +750,3 @@ func (x *drpcAgent_ReportBoundaryLogsStream) SendAndClose(m *ReportBoundaryLogsR
}
return x.CloseSend()
}
type DRPCAgent_UpdateAppStatusStream interface {
drpc.Stream
SendAndClose(*UpdateAppStatusResponse) error
}
type drpcAgent_UpdateAppStatusStream struct {
drpc.Stream
}
func (x *drpcAgent_UpdateAppStatusStream) SendAndClose(m *UpdateAppStatusResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
+3 -7
@@ -73,13 +73,9 @@ type DRPCAgentClient27 interface {
ReportBoundaryLogs(ctx context.Context, in *ReportBoundaryLogsRequest) (*ReportBoundaryLogsResponse, error)
}
// DRPCAgentClient28 is the Agent API at v2.8. It adds
// - a SubagentId field to the WorkspaceAgentDevcontainer message
// - an Id field to the CreateSubAgentRequest message.
// - UpdateAppStatus RPC.
//
// Compatible with Coder v2.31+
// DRPCAgentClient28 is the Agent API at v2.8. It adds a SubagentId field to the
// WorkspaceAgentDevcontainer message, and a Id field to the CreateSubAgentRequest
// message. Compatible with Coder v2.31+
type DRPCAgentClient28 interface {
DRPCAgentClient27
UpdateAppStatus(ctx context.Context, in *UpdateAppStatusRequest) (*UpdateAppStatusResponse, error)
}
+21 -25
@@ -3,11 +3,11 @@
"enabled": true,
"clientKind": "git",
"useIgnoreFile": true,
"defaultBranch": "main",
"defaultBranch": "main"
},
"files": {
"includes": ["**", "!**/pnpm-lock.yaml"],
"ignoreUnknown": true,
"ignoreUnknown": true
},
"linter": {
"rules": {
@@ -15,18 +15,18 @@
"noSvgWithoutTitle": "off",
"useButtonType": "off",
"useSemanticElements": "off",
"noStaticElementInteractions": "off",
"noStaticElementInteractions": "off"
},
"correctness": {
"noUnusedImports": "warn",
"correctness": {
"noUnusedImports": "warn",
"useUniqueElementIds": "off", // TODO: This is new but we want to fix it
"noNestedComponentDefinitions": "off", // TODO: Investigate, since it is used by shadcn components
"noUnusedVariables": {
"level": "warn",
"noUnusedVariables": {
"level": "warn",
"options": {
"ignoreRestSiblings": true,
},
},
"ignoreRestSiblings": true
}
}
},
"style": {
"noNonNullAssertion": "off",
@@ -45,10 +45,6 @@
"level": "error",
"options": {
"paths": {
"react": {
"message": "React 19 no longer requires forwardRef. Use ref as a prop instead.",
"importNames": ["forwardRef"],
},
// "@mui/material/Alert": "Use components/Alert/Alert instead.",
// "@mui/material/AlertTitle": "Use components/Alert/Alert instead.",
// "@mui/material/Autocomplete": "Use shadcn/ui Combobox instead.",
@@ -115,10 +111,10 @@
"@emotion/styled": "Use Tailwind CSS instead.",
// "@emotion/cache": "Use Tailwind CSS instead.",
// "components/Stack/Stack": "Use Tailwind flex utilities instead (e.g., <div className='flex flex-col gap-4'>).",
"lodash": "Use lodash/<name> instead.",
},
},
},
"lodash": "Use lodash/<name> instead."
}
}
}
},
"suspicious": {
"noArrayIndexKey": "off",
@@ -129,14 +125,14 @@
"noConsole": {
"level": "error",
"options": {
"allow": ["error", "info", "warn"],
},
},
"allow": ["error", "info", "warn"]
}
}
},
"complexity": {
"noImportantStyles": "off", // TODO: check and fix !important styles
},
},
"noImportantStyles": "off" // TODO: check and fix !important styles
}
}
},
"$schema": "./node_modules/@biomejs/biome/configuration_schema.json",
"$schema": "./node_modules/@biomejs/biome/configuration_schema.json"
}
@@ -1,4 +1,4 @@
package cliutil
package hostname
import (
"os"
+45 -50
@@ -10,7 +10,6 @@ import (
"path/filepath"
"slices"
"strings"
"time"
"github.com/mark3labs/mcp-go/mcp"
"github.com/mark3labs/mcp-go/server"
@@ -24,7 +23,6 @@ import (
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/codersdk/toolsdk"
"github.com/coder/retry"
"github.com/coder/serpent"
)
@@ -541,6 +539,7 @@ func (r *RootCmd) mcpServer() *serpent.Command {
defer cancel()
defer srv.queue.Close()
cliui.Infof(inv.Stderr, "Failed to watch screen events")
// Start the reporter, watcher, and server. These are all tied to the
// lifetime of the MCP server, which is itself tied to the lifetime of the
// AI agent.
@@ -614,51 +613,48 @@ func (s *mcpServer) startReporter(ctx context.Context, inv *serpent.Invocation)
}
func (s *mcpServer) startWatcher(ctx context.Context, inv *serpent.Invocation) {
eventsCh, errCh, err := s.aiAgentAPIClient.SubscribeEvents(ctx)
if err != nil {
cliui.Warnf(inv.Stderr, "Failed to watch screen events: %s", err)
return
}
go func() {
for retrier := retry.New(time.Second, 30*time.Second); retrier.Wait(ctx); {
eventsCh, errCh, err := s.aiAgentAPIClient.SubscribeEvents(ctx)
if err == nil {
retrier.Reset()
loop:
for {
select {
case <-ctx.Done():
for {
select {
case <-ctx.Done():
return
case event := <-eventsCh:
switch ev := event.(type) {
case agentapi.EventStatusChange:
// If the screen is stable, report idle.
state := codersdk.WorkspaceAppStatusStateWorking
if ev.Status == agentapi.StatusStable {
state = codersdk.WorkspaceAppStatusStateIdle
}
err := s.queue.Push(taskReport{
state: state,
})
if err != nil {
cliui.Warnf(inv.Stderr, "Failed to queue update: %s", err)
return
case event := <-eventsCh:
switch ev := event.(type) {
case agentapi.EventStatusChange:
state := codersdk.WorkspaceAppStatusStateWorking
if ev.Status == agentapi.StatusStable {
state = codersdk.WorkspaceAppStatusStateIdle
}
err := s.queue.Push(taskReport{
state: state,
})
if err != nil {
cliui.Warnf(inv.Stderr, "Failed to queue update: %s", err)
return
}
case agentapi.EventMessageUpdate:
if ev.Role == agentapi.RoleUser {
err := s.queue.Push(taskReport{
messageID: &ev.Id,
state: codersdk.WorkspaceAppStatusStateWorking,
})
if err != nil {
cliui.Warnf(inv.Stderr, "Failed to queue update: %s", err)
return
}
}
}
case agentapi.EventMessageUpdate:
if ev.Role == agentapi.RoleUser {
err := s.queue.Push(taskReport{
messageID: &ev.Id,
state: codersdk.WorkspaceAppStatusStateWorking,
})
if err != nil {
cliui.Warnf(inv.Stderr, "Failed to queue update: %s", err)
return
}
case err := <-errCh:
if !errors.Is(err, context.Canceled) {
cliui.Warnf(inv.Stderr, "Received error from screen event watcher: %s", err)
}
break loop
}
}
} else {
cliui.Warnf(inv.Stderr, "Failed to watch screen events: %s", err)
case err := <-errCh:
if !errors.Is(err, context.Canceled) {
cliui.Warnf(inv.Stderr, "Received error from screen event watcher: %s", err)
}
return
}
}
}()
@@ -696,14 +692,13 @@ func (s *mcpServer) startServer(ctx context.Context, inv *serpent.Invocation, in
// Add tool dependencies.
toolOpts := []func(*toolsdk.Deps){
toolsdk.WithTaskReporter(func(args toolsdk.ReportTaskArgs) error {
state := codersdk.WorkspaceAppStatusState(args.State)
// The agent does not reliably report idle, so when AgentAPI is
// enabled we override idle to working and let the screen watcher
// detect the real idle via StatusStable. Final states (failure,
// complete) are trusted from the agent since the screen watcher
// cannot produce them.
if s.aiAgentAPIClient != nil && state == codersdk.WorkspaceAppStatusStateIdle {
state = codersdk.WorkspaceAppStatusStateWorking
// The agent does not reliably report its status correctly. If AgentAPI
// is enabled, we will always set the status to "working" when we get an
// MCP message, and rely on the screen watcher to eventually catch the
// idle state.
state := codersdk.WorkspaceAppStatusStateWorking
if s.aiAgentAPIClient == nil {
state = codersdk.WorkspaceAppStatusState(args.State)
}
return s.queue.Push(taskReport{
link: args.Link,
+1 -185
@@ -921,7 +921,7 @@ func TestExpMcpReporter(t *testing.T) {
},
},
},
// We override idle from the agent to working, but trust final states.
// We ignore the state from the agent and assume "working".
{
name: "IgnoreAgentState",
// AI agent reports that it is finished but the summary says it is doing
@@ -953,46 +953,6 @@ func TestExpMcpReporter(t *testing.T) {
Message: "finished",
},
},
// Agent reports failure; trusted even with AgentAPI enabled.
{
state: codersdk.WorkspaceAppStatusStateFailure,
summary: "something broke",
expected: &codersdk.WorkspaceAppStatus{
State: codersdk.WorkspaceAppStatusStateFailure,
Message: "something broke",
},
},
// After failure, watcher reports stable -> idle.
{
event: makeStatusEvent(agentapi.StatusStable),
expected: &codersdk.WorkspaceAppStatus{
State: codersdk.WorkspaceAppStatusStateIdle,
Message: "something broke",
},
},
},
},
// Final states pass through with AgentAPI enabled.
{
name: "AllowFinalStates",
tests: []test{
{
state: codersdk.WorkspaceAppStatusStateWorking,
summary: "doing work",
expected: &codersdk.WorkspaceAppStatus{
State: codersdk.WorkspaceAppStatusStateWorking,
Message: "doing work",
},
},
// Agent reports complete; not overridden.
{
state: codersdk.WorkspaceAppStatusStateComplete,
summary: "all done",
expected: &codersdk.WorkspaceAppStatus{
State: codersdk.WorkspaceAppStatusStateComplete,
Message: "all done",
},
},
},
},
// When AgentAPI is not being used, we accept agent state updates as-is.
@@ -1150,148 +1110,4 @@ func TestExpMcpReporter(t *testing.T) {
<-cmdDone
})
}
t.Run("Reconnect", func(t *testing.T) {
t.Parallel()
// Create a test deployment and workspace.
client, db := coderdtest.NewWithDatabase(t, nil)
user := coderdtest.CreateFirstUser(t, client)
client, user2 := coderdtest.CreateAnotherUser(t, client, user.OrganizationID)
r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user2.ID,
}).WithAgent(func(a []*proto.Agent) []*proto.Agent {
a[0].Apps = []*proto.App{
{
Slug: "vscode",
},
}
return a
}).Do()
ctx, cancel := context.WithCancel(testutil.Context(t, testutil.WaitLong))
// Watch the workspace for changes.
watcher, err := client.WatchWorkspace(ctx, r.Workspace.ID)
require.NoError(t, err)
var lastAppStatus codersdk.WorkspaceAppStatus
nextUpdate := func() codersdk.WorkspaceAppStatus {
for {
select {
case <-ctx.Done():
require.FailNow(t, "timed out waiting for status update")
case w, ok := <-watcher:
require.True(t, ok, "watch channel closed")
if w.LatestAppStatus != nil && w.LatestAppStatus.ID != lastAppStatus.ID {
t.Logf("Got status update: %s > %s", lastAppStatus.State, w.LatestAppStatus.State)
lastAppStatus = *w.LatestAppStatus
return lastAppStatus
}
}
}
}
// Mock AI AgentAPI server that supports disconnect/reconnect.
disconnect := make(chan struct{})
listening := make(chan func(sse codersdk.ServerSentEvent) error)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Create a cancelable context so we can stop the SSE sender
// goroutine on disconnect without waiting for the HTTP
// serve loop to cancel r.Context().
sseCtx, sseCancel := context.WithCancel(r.Context())
defer sseCancel()
r = r.WithContext(sseCtx)
send, closed, err := httpapi.ServerSentEventSender(w, r)
if err != nil {
httpapi.Write(sseCtx, w, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error setting up server-sent events.",
Detail: err.Error(),
})
return
}
// Send initial message so the watcher knows the agent is active.
send(*makeMessageEvent(0, agentapi.RoleAgent))
select {
case listening <- send:
case <-r.Context().Done():
return
}
select {
case <-closed:
case <-disconnect:
sseCancel()
<-closed
}
}))
t.Cleanup(srv.Close)
inv, _ := clitest.New(t,
"exp", "mcp", "server",
"--agent-url", client.URL.String(),
"--agent-token", r.AgentToken,
"--app-status-slug", "vscode",
"--allowed-tools=coder_report_task",
"--ai-agentapi-url", srv.URL,
)
inv = inv.WithContext(ctx)
pty := ptytest.New(t)
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
stderr := ptytest.New(t)
inv.Stderr = stderr.Output()
// Run the MCP server.
clitest.Start(t, inv)
// Initialize.
payload := `{"jsonrpc":"2.0","id":1,"method":"initialize"}`
pty.WriteLine(payload)
_ = pty.ReadLine(ctx) // ignore echo
_ = pty.ReadLine(ctx) // ignore init response
// Get first sender from the initial SSE connection.
sender := testutil.RequireReceive(ctx, t, listening)
// Self-report a working status via tool call.
toolPayload := `{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"coder_report_task","arguments":{"state":"working","summary":"doing work","link":""}}}`
pty.WriteLine(toolPayload)
_ = pty.ReadLine(ctx) // ignore echo
_ = pty.ReadLine(ctx) // ignore response
got := nextUpdate()
require.Equal(t, codersdk.WorkspaceAppStatusStateWorking, got.State)
require.Equal(t, "doing work", got.Message)
// Watcher sends stable, verify idle is reported.
err = sender(*makeStatusEvent(agentapi.StatusStable))
require.NoError(t, err)
got = nextUpdate()
require.Equal(t, codersdk.WorkspaceAppStatusStateIdle, got.State)
// Disconnect the SSE connection by signaling the handler to return.
testutil.RequireSend(ctx, t, disconnect, struct{}{})
// Wait for the watcher to reconnect and get the new sender.
sender = testutil.RequireReceive(ctx, t, listening)
// After reconnect, self-report a working status again.
toolPayload = `{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"coder_report_task","arguments":{"state":"working","summary":"reconnected","link":""}}}`
pty.WriteLine(toolPayload)
_ = pty.ReadLine(ctx) // ignore echo
_ = pty.ReadLine(ctx) // ignore response
got = nextUpdate()
require.Equal(t, codersdk.WorkspaceAppStatusStateWorking, got.State)
require.Equal(t, "reconnected", got.Message)
// Verify the watcher still processes events after reconnect.
err = sender(*makeStatusEvent(agentapi.StatusStable))
require.NoError(t, err)
got = nextUpdate()
require.Equal(t, codersdk.WorkspaceAppStatusStateIdle, got.State)
cancel()
})
}
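For reference, a minimal sketch of the SSE handler contract the mock server above relies on. This is a sketch under assumptions, not taken verbatim from this diff: it assumes httpapi.ServerSentEventSender returns a send function and a closed channel as used in the test, and the "data" event type constant is an assumption based on codersdk conventions.

package main

import (
	"net/http"

	"github.com/coder/coder/v2/coderd/httpapi"
	"github.com/coder/coder/v2/codersdk"
)

func sseSketch(w http.ResponseWriter, r *http.Request) {
	// Obtain a sender plus a channel that closes when the client goes away.
	send, closed, err := httpapi.ServerSentEventSender(w, r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Emit one event; send reports write failures as an error.
	_ = send(codersdk.ServerSentEvent{
		Type: codersdk.ServerSentEventTypeData,
		Data: "hello",
	})
	// Block until the connection is torn down, mirroring the mock above.
	<-closed
}

func main() {
	http.HandleFunc("/events", sseSketch)
	_ = http.ListenAndServe("127.0.0.1:0", nil)
}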
+5 -1
@@ -106,7 +106,11 @@ func TestList(t *testing.T) {
t.Parallel()
var (
client, db = coderdtest.NewWithDatabase(t, nil)
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
}),
})
orgOwner = coderdtest.CreateFirstUser(t, client)
memberClient, member = coderdtest.CreateAnotherUser(t, client, orgOwner.OrganizationID, rbac.ScopedRoleOrgAuditor(orgOwner.OrganizationID))
sharedWorkspace = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
+1 -1
@@ -297,7 +297,7 @@ func (pr *ParameterResolver) verifyConstraints(resolved []codersdk.WorkspaceBuil
return xerrors.Errorf("ephemeral parameter %q can be used only with --prompt-ephemeral-parameters or --ephemeral-parameter flag", r.Name)
}
if !tvp.Mutable && action != WorkspaceCreate && !pr.isFirstTimeUse(r.Name) {
if !tvp.Mutable && action != WorkspaceCreate {
return xerrors.Errorf("parameter %q is immutable and cannot be updated", r.Name)
}
}
+3 -1
@@ -123,7 +123,9 @@ func (r *RootCmd) ping() *serpent.Command {
spin.Start()
}
opts := &workspacesdk.DialAgentOptions{}
opts := &workspacesdk.DialAgentOptions{
ShortDescription: "CLI ping",
}
if r.verbose {
opts.Logger = inv.Logger.AppendSinks(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelDebug)
+3 -1
@@ -107,7 +107,9 @@ func (r *RootCmd) portForward() *serpent.Command {
return xerrors.Errorf("await agent: %w", err)
}
opts := &workspacesdk.DialAgentOptions{}
opts := &workspacesdk.DialAgentOptions{
ShortDescription: "CLI port-forward",
}
logger := inv.Logger
if r.verbose {
+4 -15
@@ -884,27 +884,16 @@ func (o *OrganizationContext) Selected(inv *serpent.Invocation, client *codersdk
index := slices.IndexFunc(orgs, func(org codersdk.Organization) bool {
return org.Name == o.FlagSelect || org.ID.String() == o.FlagSelect
})
if index >= 0 {
return orgs[index], nil
}
// Not in membership list - try direct fetch.
// This allows site-wide admins (e.g., Owners) to use orgs they aren't
// members of.
org, err := client.OrganizationByName(inv.Context(), o.FlagSelect)
if err != nil {
if index < 0 {
var names []string
for _, org := range orgs {
names = append(names, org.Name)
}
var sdkErr *codersdk.Error
if errors.As(err, &sdkErr) && sdkErr.StatusCode() == http.StatusNotFound {
return codersdk.Organization{}, xerrors.Errorf("organization %q not found, are you sure you are a member of this organization? "+
"Valid options for '--org=' are [%s].", o.FlagSelect, strings.Join(names, ", "))
}
return codersdk.Organization{}, xerrors.Errorf("get organization %q: %w", o.FlagSelect, err)
return codersdk.Organization{}, xerrors.Errorf("organization %q not found, are you sure you are a member of this organization? "+
"Valid options for '--org=' are [%s].", o.FlagSelect, strings.Join(names, ", "))
}
return org, nil
return orgs[index], nil
}
if len(orgs) == 1 {
+3 -19
@@ -59,7 +59,7 @@ import (
"github.com/coder/coder/v2/buildinfo"
"github.com/coder/coder/v2/cli/clilog"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/cli/cliutil"
"github.com/coder/coder/v2/cli/cliutil/hostname"
"github.com/coder/coder/v2/cli/config"
"github.com/coder/coder/v2/coderd"
"github.com/coder/coder/v2/coderd/autobuild"
@@ -95,7 +95,6 @@ import (
"github.com/coder/coder/v2/coderd/webpush"
"github.com/coder/coder/v2/coderd/workspaceapps/appurl"
"github.com/coder/coder/v2/coderd/workspacestats"
"github.com/coder/coder/v2/coderd/wsbuilder"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/drpcsdk"
"github.com/coder/coder/v2/cryptorand"
@@ -137,15 +136,6 @@ func createOIDCConfig(ctx context.Context, logger slog.Logger, vals *codersdk.De
if err != nil {
return nil, xerrors.Errorf("parse oidc oauth callback url: %w", err)
}
if vals.OIDC.RedirectURL.String() != "" {
redirectURL, err = vals.OIDC.RedirectURL.Value().Parse("/api/v2/users/oidc/callback")
if err != nil {
return nil, xerrors.Errorf("parse oidc redirect url %q", err)
}
logger.Warn(ctx, "custom OIDC redirect URL used instead of 'access_url', ensure this matches the value configured in your OIDC provider")
}
// If the scopes contain 'groups', we enable group support.
// Do not override any custom value set by the user.
if slice.Contains(vals.OIDC.Scopes, "groups") && vals.OIDC.GroupField == "" {
@@ -945,12 +935,6 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
options.StatsBatcher = batcher
defer closeBatcher()
wsBuilderMetrics, err := wsbuilder.NewMetrics(options.PrometheusRegistry)
if err != nil {
return xerrors.Errorf("failed to register workspace builder metrics: %w", err)
}
options.WorkspaceBuilderMetrics = wsBuilderMetrics
// Manage notifications.
var (
notificationsCfg = options.DeploymentValues.Notifications
@@ -1045,7 +1029,7 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
suffix := fmt.Sprintf("%d", i)
// The suffix is added to the hostname, so we may need to trim to fit into
// the 64 character limit.
hostname := stringutil.Truncate(cliutil.Hostname(), 63-len(suffix))
hostname := stringutil.Truncate(hostname.Hostname(), 63-len(suffix))
name := fmt.Sprintf("%s-%s", hostname, suffix)
daemonCacheDir := filepath.Join(cacheDir, fmt.Sprintf("provisioner-%d", i))
daemon, err := newProvisionerDaemon(
@@ -1134,7 +1118,7 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
autobuildTicker := time.NewTicker(vals.AutobuildPollInterval.Value())
defer autobuildTicker.Stop()
autobuildExecutor := autobuild.NewExecutor(
ctx, options.Database, options.Pubsub, coderAPI.FileCache, options.PrometheusRegistry, coderAPI.TemplateScheduleStore, &coderAPI.Auditor, coderAPI.AccessControlStore, coderAPI.BuildUsageChecker, logger, autobuildTicker.C, options.NotificationsEnqueuer, coderAPI.Experiments, coderAPI.WorkspaceBuilderMetrics)
ctx, options.Database, options.Pubsub, coderAPI.FileCache, options.PrometheusRegistry, coderAPI.TemplateScheduleStore, &coderAPI.Auditor, coderAPI.AccessControlStore, coderAPI.BuildUsageChecker, logger, autobuildTicker.C, options.NotificationsEnqueuer, coderAPI.Experiments)
autobuildExecutor.Run()
jobReaperTicker := time.NewTicker(vals.JobReaperDetectorInterval.Value())
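As an aside on the truncation arithmetic above: the daemon name is "<hostname>-<suffix>", so trimming the hostname to 63-len(suffix) leaves exactly one character for the dash and keeps the full name within the 64-character limit: (63-len(suffix)) + 1 + len(suffix) = 64. A self-contained sketch (truncName is a hypothetical stand-in for the stringutil.Truncate call, not part of the codebase):

package main

import (
	"fmt"
	"strings"
)

// truncName mirrors the naming scheme above: "<hostname>-<suffix>" capped
// at 64 characters by trimming the hostname to 63-len(suffix).
func truncName(host, suffix string) string {
	if max := 63 - len(suffix); len(host) > max {
		host = host[:max]
	}
	return host + "-" + suffix
}

func main() {
	host := strings.Repeat("h", 80)
	fmt.Println(len(truncName(host, "3"))) // prints 64
}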
+19 -36
@@ -1740,18 +1740,6 @@ func TestServer(t *testing.T) {
// Next, we instruct the same server to display the YAML config
// and then save it.
// Because this is literally the same invocation, DefaultFn sets the
// value of 'Default'. Which triggers a mutually exclusive error
// on the next parse.
// Usually we only parse flags once, so this is not an issue
for _, c := range inv.Command.Children {
if c.Name() == "server" {
for i := range c.Options {
c.Options[i].DefaultFn = nil
}
break
}
}
inv = inv.WithContext(testutil.Context(t, testutil.WaitMedium))
//nolint:gocritic
inv.Args = append(args, "--write-config")
@@ -2256,7 +2244,6 @@ type runServerOpts struct {
waitForSnapshot bool
telemetryDisabled bool
waitForTelemetryDisabledCheck bool
name string
}
func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
@@ -2279,23 +2266,25 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
"--cache-dir", cacheDir,
"--log-filter", ".*",
)
inv.Logger = inv.Logger.Named(opts.name)
finished := make(chan bool, 2)
errChan := make(chan error, 1)
pty := ptytest.New(t).Named(opts.name).Attach(inv)
pty := ptytest.New(t).Attach(inv)
go func() {
errChan <- inv.WithContext(ctx).Run()
// close the pty here so that we can start tearing down resources. This test creates multiple servers with
// associated ptys. There is a `t.Cleanup()` that does this, but it waits until the whole test is complete.
_ = pty.Close()
finished <- true
}()
if opts.waitForSnapshot {
pty.ExpectMatchContext(testutil.Context(t, testutil.WaitLong), "submitted snapshot")
}
if opts.waitForTelemetryDisabledCheck {
pty.ExpectMatchContext(testutil.Context(t, testutil.WaitLong), "finished telemetry status check")
}
go func() {
defer func() {
finished <- true
}()
if opts.waitForSnapshot {
pty.ExpectMatchContext(testutil.Context(t, testutil.WaitLong), "submitted snapshot")
}
if opts.waitForTelemetryDisabledCheck {
pty.ExpectMatchContext(testutil.Context(t, testutil.WaitLong), "finished telemetry status check")
}
}()
<-finished
return errChan, cancelFunc
}
waitForShutdown := func(t *testing.T, errChan chan error) error {
@@ -2309,9 +2298,7 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
return nil
}
errChan, cancelFunc := runServer(t, runServerOpts{
telemetryDisabled: true, waitForTelemetryDisabledCheck: true, name: "0disabled",
})
errChan, cancelFunc := runServer(t, runServerOpts{telemetryDisabled: true, waitForTelemetryDisabledCheck: true})
cancelFunc()
require.NoError(t, waitForShutdown(t, errChan))
@@ -2319,7 +2306,7 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
require.Empty(t, deployment)
require.Empty(t, snapshot)
errChan, cancelFunc = runServer(t, runServerOpts{waitForSnapshot: true, name: "1enabled"})
errChan, cancelFunc = runServer(t, runServerOpts{waitForSnapshot: true})
cancelFunc()
require.NoError(t, waitForShutdown(t, errChan))
// we expect to see a deployment and a snapshot twice:
@@ -2338,9 +2325,7 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
}
}
errChan, cancelFunc = runServer(t, runServerOpts{
telemetryDisabled: true, waitForTelemetryDisabledCheck: true, name: "2disabled",
})
errChan, cancelFunc = runServer(t, runServerOpts{telemetryDisabled: true, waitForTelemetryDisabledCheck: true})
cancelFunc()
require.NoError(t, waitForShutdown(t, errChan))
@@ -2356,9 +2341,7 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
t.Fatalf("timed out waiting for snapshot")
}
errChan, cancelFunc = runServer(t, runServerOpts{
telemetryDisabled: true, waitForTelemetryDisabledCheck: true, name: "3disabled",
})
errChan, cancelFunc = runServer(t, runServerOpts{telemetryDisabled: true, waitForTelemetryDisabledCheck: true})
cancelFunc()
require.NoError(t, waitForShutdown(t, errChan))
// Since telemetry is disabled and we've already sent a snapshot, we expect no
+31 -7
@@ -25,7 +25,11 @@ func TestSharingShare(t *testing.T) {
t.Parallel()
var (
client, db = coderdtest.NewWithDatabase(t, nil)
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
}),
})
orgOwner = coderdtest.CreateFirstUser(t, client)
workspaceOwnerClient, workspaceOwner = coderdtest.CreateAnotherUser(t, client, orgOwner.OrganizationID, rbac.ScopedRoleOrgAuditor(orgOwner.OrganizationID))
workspace = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
@@ -64,8 +68,12 @@ func TestSharingShare(t *testing.T) {
t.Parallel()
var (
client, db = coderdtest.NewWithDatabase(t, nil)
orgOwner = coderdtest.CreateFirstUser(t, client)
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
}),
})
orgOwner = coderdtest.CreateFirstUser(t, client)
workspaceOwnerClient, workspaceOwner = coderdtest.CreateAnotherUser(t, client, orgOwner.OrganizationID, rbac.ScopedRoleOrgAuditor(orgOwner.OrganizationID))
workspace = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
@@ -119,7 +127,11 @@ func TestSharingShare(t *testing.T) {
t.Parallel()
var (
client, db = coderdtest.NewWithDatabase(t, nil)
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
}),
})
orgOwner = coderdtest.CreateFirstUser(t, client)
workspaceOwnerClient, workspaceOwner = coderdtest.CreateAnotherUser(t, client, orgOwner.OrganizationID, rbac.ScopedRoleOrgAuditor(orgOwner.OrganizationID))
workspace = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
@@ -170,7 +182,11 @@ func TestSharingStatus(t *testing.T) {
t.Parallel()
var (
client, db = coderdtest.NewWithDatabase(t, nil)
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
}),
})
orgOwner = coderdtest.CreateFirstUser(t, client)
workspaceOwnerClient, workspaceOwner = coderdtest.CreateAnotherUser(t, client, orgOwner.OrganizationID, rbac.ScopedRoleOrgAuditor(orgOwner.OrganizationID))
workspace = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
@@ -214,7 +230,11 @@ func TestSharingRemove(t *testing.T) {
t.Parallel()
var (
client, db = coderdtest.NewWithDatabase(t, nil)
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
}),
})
orgOwner = coderdtest.CreateFirstUser(t, client)
workspaceOwnerClient, workspaceOwner = coderdtest.CreateAnotherUser(t, client, orgOwner.OrganizationID, rbac.ScopedRoleOrgAuditor(orgOwner.OrganizationID))
workspace = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
@@ -271,7 +291,11 @@ func TestSharingRemove(t *testing.T) {
t.Parallel()
var (
client, db = coderdtest.NewWithDatabase(t, nil)
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
}),
})
orgOwner = coderdtest.CreateFirstUser(t, client)
workspaceOwnerClient, workspaceOwner = coderdtest.CreateAnotherUser(t, client, orgOwner.OrganizationID, rbac.ScopedRoleOrgAuditor(orgOwner.OrganizationID))
workspace = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
+3 -1
@@ -97,7 +97,9 @@ func (r *RootCmd) speedtest() *serpent.Command {
return xerrors.Errorf("await agent: %w", err)
}
opts := &workspacesdk.DialAgentOptions{}
opts := &workspacesdk.DialAgentOptions{
ShortDescription: "CLI speedtest",
}
if r.verbose {
opts.Logger = inv.Logger.AppendSinks(sloghuman.Sink(inv.Stderr)).Leveled(slog.LevelDebug)
}
+66 -3
@@ -24,6 +24,7 @@ import (
"github.com/gofrs/flock"
"github.com/google/uuid"
"github.com/mattn/go-isatty"
"github.com/shirou/gopsutil/v4/process"
"github.com/spf13/afero"
gossh "golang.org/x/crypto/ssh"
gosshagent "golang.org/x/crypto/ssh/agent"
@@ -84,6 +85,9 @@ func (r *RootCmd) ssh() *serpent.Command {
containerName string
containerUser string
// Used in tests to simulate the parent exiting.
testForcePPID int64
)
cmd := &serpent.Command{
Annotations: workspaceCommand,
@@ -175,6 +179,24 @@ func (r *RootCmd) ssh() *serpent.Command {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// When running as a ProxyCommand (stdio mode), monitor the parent process
// and exit if it dies to avoid leaving orphaned processes. This is
// particularly important when editors like VSCode/Cursor spawn SSH
// connections and then crash or are killed - we don't want zombie
// `coder ssh` processes accumulating.
// Note: we use gopsutil to check the parent process since it also
// handles Windows processes in a standard way.
if stdio {
ppid := int32(os.Getppid()) // nolint:gosec
checkParentInterval := 10 * time.Second // Arbitrary interval; avoid polling too frequently
if testForcePPID > 0 {
ppid = int32(testForcePPID) // nolint:gosec
checkParentInterval = 100 * time.Millisecond // Shorter interval for testing
}
ctx, cancel = watchParentContext(ctx, quartz.NewReal(), ppid, process.PidExistsWithContext, checkParentInterval)
defer cancel()
}
// Prevent unnecessary logs from the stdlib from messing up the TTY.
// See: https://github.com/coder/coder/issues/13144
log.SetOutput(io.Discard)
@@ -343,6 +365,10 @@ func (r *RootCmd) ssh() *serpent.Command {
}
return err
}
shortDescription := "CLI ssh"
if stdio {
shortDescription = "CLI ssh (stdio)"
}
// If we're in stdio mode, check to see if we can use Coder Connect.
// We don't support Coder Connect over non-stdio coder ssh yet.
@@ -383,9 +409,10 @@ func (r *RootCmd) ssh() *serpent.Command {
}
conn, err := wsClient.
DialAgent(ctx, workspaceAgent.ID, &workspacesdk.DialAgentOptions{
Logger: logger,
BlockEndpoints: r.disableDirect,
EnableTelemetry: !r.disableNetworkTelemetry,
Logger: logger,
BlockEndpoints: r.disableDirect,
EnableTelemetry: !r.disableNetworkTelemetry,
ShortDescription: shortDescription,
})
if err != nil {
return xerrors.Errorf("dial agent: %w", err)
@@ -775,6 +802,12 @@ func (r *RootCmd) ssh() *serpent.Command {
Value: serpent.BoolOf(&forceNewTunnel),
Hidden: true,
},
{
Flag: "test.force-ppid",
Description: "Override the parent process ID to simulate a different parent process. ONLY USE THIS IN TESTS.",
Value: serpent.Int64Of(&testForcePPID),
Hidden: true,
},
sshDisableAutostartOption(serpent.BoolOf(&disableAutostart)),
}
return cmd
@@ -1662,3 +1695,33 @@ func normalizeWorkspaceInput(input string) string {
return input // Fallback
}
}
// watchParentContext returns a context that is canceled when the parent process
// dies. It polls using the provided clock and checks if the parent is alive
// using the provided pidExists function.
func watchParentContext(ctx context.Context, clock quartz.Clock, originalPPID int32, pidExists func(context.Context, int32) (bool, error), interval time.Duration) (context.Context, context.CancelFunc) {
ctx, cancel := context.WithCancel(ctx) // intentionally shadowed
go func() {
ticker := clock.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
alive, err := pidExists(ctx, originalPPID)
// If we get an error checking the parent process (e.g., permission
// denied, the process is in an unknown state), we assume the parent
// is still alive to avoid disrupting the SSH connection. We only
// cancel when we definitively know the parent is gone (alive=false, err=nil).
if !alive && err == nil {
cancel()
return
}
}
}
}()
return ctx, cancel
}
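For reference, a usage sketch mirroring the stdio call site above. This is a fragment, assuming the same gopsutil and quartz imports used in this file and an existing ctx:

	// Poll the real parent PID on a real clock; cancel ctx if it exits.
	ppid := int32(os.Getppid()) // nolint:gosec
	ctx, cancel := watchParentContext(ctx, quartz.NewReal(), ppid,
		process.PidExistsWithContext, 10*time.Second)
	defer cancel() // stops the polling goroutine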
+96
@@ -312,6 +312,102 @@ type fakeCloser struct {
err error
}
func TestWatchParentContext(t *testing.T) {
t.Parallel()
t.Run("CancelsWhenParentDies", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
parentAlive := true
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return parentAlive, nil
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we simulate parent death and advance the clock
parentAlive = false
mClock.AdvanceNext()
// Then: The context should be canceled
_ = testutil.TryReceive(ctx, t, childCtx.Done())
})
t.Run("DoesNotCancelWhenParentAlive", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil // Parent always alive
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance the clock several times with the parent alive
for range 3 {
mClock.AdvanceNext()
}
// Then: context should not be canceled
require.NoError(t, childCtx.Err())
})
t.Run("RespectsParentContext", func(t *testing.T) {
t.Parallel()
ctx, cancelParent := context.WithCancel(context.Background())
mClock := quartz.NewMock(t)
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil
}, testutil.WaitShort)
defer cancel()
// When: we cancel the parent context
cancelParent()
// Then: The context should be canceled
require.ErrorIs(t, childCtx.Err(), context.Canceled)
})
t.Run("DoesNotCancelOnError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
// Simulate an error checking parent status (e.g., permission denied).
// We should not cancel the context in this case to avoid disrupting
// the SSH connection.
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return false, xerrors.New("permission denied")
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance clock several times
for range 3 {
mClock.AdvanceNext()
}
// Context should NOT be canceled since we got an error (not a definitive "not alive")
require.NoError(t, childCtx.Err(), "context was canceled even though pidExists returned an error")
})
}
func (c *fakeCloser) Close() error {
*c.closes = append(*c.closes, c)
return c.err
+101
@@ -1122,6 +1122,107 @@ func TestSSH(t *testing.T) {
}
})
// This test ensures that the SSH session exits when the parent process dies.
t.Run("StdioExitOnParentDeath", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitSuperLong)
defer cancel()
// sleepStart -> agentReady -> sessionStarted -> sleepKill -> sleepDone -> cmdDone
sleepStart := make(chan int)
agentReady := make(chan struct{})
sessionStarted := make(chan struct{})
sleepKill := make(chan struct{})
sleepDone := make(chan struct{})
// Start a sleep process which we will pretend is the parent.
go func() {
sleepCmd := exec.Command("sleep", "infinity")
if !assert.NoError(t, sleepCmd.Start(), "failed to start sleep command") {
return
}
sleepStart <- sleepCmd.Process.Pid
defer close(sleepDone)
<-sleepKill
sleepCmd.Process.Kill()
_ = sleepCmd.Wait()
}()
client, workspace, agentToken := setupWorkspaceForAgent(t)
go func() {
defer close(agentReady)
_ = agenttest.New(t, client.URL, agentToken)
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).WaitFor(coderdtest.AgentsReady)
}()
clientOutput, clientInput := io.Pipe()
serverOutput, serverInput := io.Pipe()
defer func() {
for _, c := range []io.Closer{clientOutput, clientInput, serverOutput, serverInput} {
_ = c.Close()
}
}()
// Start a connection to the agent once it's ready
go func() {
<-agentReady
conn, channels, requests, err := ssh.NewClientConn(&testutil.ReaderWriterConn{
Reader: serverOutput,
Writer: clientInput,
}, "", &ssh.ClientConfig{
// #nosec
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
})
if !assert.NoError(t, err, "failed to create SSH client connection") {
return
}
defer conn.Close()
sshClient := ssh.NewClient(conn, channels, requests)
defer sshClient.Close()
session, err := sshClient.NewSession()
if !assert.NoError(t, err, "failed to create SSH session") {
return
}
close(sessionStarted)
<-sleepDone
// Ref: https://github.com/coder/internal/issues/1289
// This may return either a nil error or io.EOF.
// There is an inherent race here:
// 1. Sleep process is killed -> sleepDone is closed.
// 2. watchParentContext detects parent death, cancels context,
// causing SSH session teardown.
// 3. We receive from sleepDone and attempt to call session.Close().
// Now either:
// a. Session teardown completes before we call Close(), resulting in io.EOF
// b. We call Close() first, resulting in a nil error.
_ = session.Close()
}()
// Wait for our "parent" process to start
sleepPid := testutil.RequireReceive(ctx, t, sleepStart)
// Wait for the agent to be ready
testutil.SoftTryReceive(ctx, t, agentReady)
inv, root := clitest.New(t, "ssh", "--stdio", workspace.Name, "--test.force-ppid", fmt.Sprintf("%d", sleepPid))
clitest.SetupConfig(t, client, root)
inv.Stdin = clientOutput
inv.Stdout = serverInput
inv.Stderr = io.Discard
// Start the command
clitest.Start(t, inv.WithContext(ctx))
// Wait for a session to be established
testutil.SoftTryReceive(ctx, t, sessionStarted)
// Now kill the fake "parent"
close(sleepKill)
// The sleep process should exit
testutil.SoftTryReceive(ctx, t, sleepDone)
// And then the command should exit. This is tracked by clitest.Start.
})
t.Run("ForwardAgent", func(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("Test not supported on windows")
+1 -1
@@ -120,7 +120,7 @@ func (r *RootCmd) start() *serpent.Command {
func buildWorkspaceStartRequest(inv *serpent.Invocation, client *codersdk.Client, workspace codersdk.Workspace, parameterFlags workspaceParameterFlags, buildFlags buildFlags, action WorkspaceCLIAction) (codersdk.CreateWorkspaceBuildRequest, error) {
version := workspace.LatestBuild.TemplateVersionID
if workspace.AutomaticUpdates == codersdk.AutomaticUpdatesAlways || workspace.TemplateRequireActiveVersion || action == WorkspaceUpdate {
if workspace.AutomaticUpdates == codersdk.AutomaticUpdatesAlways || action == WorkspaceUpdate {
version = workspace.TemplateActiveVersionID
if version != workspace.LatestBuild.TemplateVersionID {
action = WorkspaceUpdate
+4 -4
@@ -33,7 +33,7 @@ func TestStatePull(t *testing.T) {
OrganizationID: owner.OrganizationID,
OwnerID: taUser.ID,
}).
Seed(database.WorkspaceBuild{}).ProvisionerState(wantState).
Seed(database.WorkspaceBuild{ProvisionerState: wantState}).
Do()
statefilePath := filepath.Join(t.TempDir(), "state")
inv, root := clitest.New(t, "state", "pull", r.Workspace.Name, statefilePath)
@@ -54,7 +54,7 @@ func TestStatePull(t *testing.T) {
OrganizationID: owner.OrganizationID,
OwnerID: taUser.ID,
}).
Seed(database.WorkspaceBuild{}).ProvisionerState(wantState).
Seed(database.WorkspaceBuild{ProvisionerState: wantState}).
Do()
inv, root := clitest.New(t, "state", "pull", r.Workspace.Name)
var gotState bytes.Buffer
@@ -74,7 +74,7 @@ func TestStatePull(t *testing.T) {
OrganizationID: owner.OrganizationID,
OwnerID: taUser.ID,
}).
Seed(database.WorkspaceBuild{}).ProvisionerState(wantState).
Seed(database.WorkspaceBuild{ProvisionerState: wantState}).
Do()
inv, root := clitest.New(t, "state", "pull", taUser.Username+"/"+r.Workspace.Name,
"--build", fmt.Sprintf("%d", r.Build.BuildNumber))
@@ -170,7 +170,7 @@ func TestStatePush(t *testing.T) {
OrganizationID: owner.OrganizationID,
OwnerID: taUser.ID,
}).
Seed(database.WorkspaceBuild{}).ProvisionerState(initialState).
Seed(database.WorkspaceBuild{ProvisionerState: initialState}).
Do()
wantState := []byte("updated state")
stateFile, err := os.CreateTemp(t.TempDir(), "")
+7 -9
@@ -1,3 +1,5 @@
//go:build !windows
package cli_test
import (
@@ -5,7 +7,6 @@ import (
"context"
"os"
"path/filepath"
"runtime"
"testing"
"time"
@@ -24,15 +25,12 @@ func setupSocketServer(t *testing.T) (path string, cleanup func()) {
t.Helper()
// Use a temporary socket path for each test
socketPath := testutil.AgentSocketPath(t)
socketPath := filepath.Join(testutil.TempDirUnixSocket(t), "test.sock")
// Create parent directory if needed. Not necessary on Windows because named pipes live in an abstract namespace
// not tied to any real files.
if runtime.GOOS != "windows" {
parentDir := filepath.Dir(socketPath)
err := os.MkdirAll(parentDir, 0o700)
require.NoError(t, err, "create socket directory")
}
// Create parent directory if needed
parentDir := filepath.Dir(socketPath)
err := os.MkdirAll(parentDir, 0o700)
require.NoError(t, err, "create socket directory")
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
-2
@@ -17,8 +17,6 @@ func (r *RootCmd) tasksCommand() *serpent.Command {
r.taskDelete(),
r.taskList(),
r.taskLogs(),
r.taskPause(),
r.taskResume(),
r.taskSend(),
r.taskStatus(),
},
+10 -5
@@ -41,7 +41,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client // user already has access to their own workspace
inv, root := clitest.New(t, "task", "logs", task.Name, "--output", "json")
output := clitest.Capture(inv)
@@ -64,7 +65,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client
inv, root := clitest.New(t, "task", "logs", task.ID.String(), "--output", "json")
output := clitest.Capture(inv)
@@ -87,7 +89,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client
inv, root := clitest.New(t, "task", "logs", task.ID.String())
output := clitest.Capture(inv)
@@ -141,7 +144,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsErr(assert.AnError))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsErr(assert.AnError))
userClient := client
inv, root := clitest.New(t, "task", "logs", task.ID.String())
clitest.SetupConfig(t, userClient, root)
@@ -197,7 +201,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Run("SnapshotWithoutLogs_NoSnapshotCaptured", func(t *testing.T) {
t.Parallel()
userClient, task := setupCLITaskTestWithoutSnapshot(t, codersdk.TaskStatusPaused)
client, task := setupCLITaskTestWithoutSnapshot(t, codersdk.TaskStatusPaused)
userClient := client
inv, root := clitest.New(t, "task", "logs", task.Name)
output := clitest.Capture(inv)
-90
@@ -1,90 +0,0 @@
package cli
import (
"fmt"
"time"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/pretty"
"github.com/coder/serpent"
)
func (r *RootCmd) taskPause() *serpent.Command {
cmd := &serpent.Command{
Use: "pause <task>",
Short: "Pause a task",
Long: FormatExamples(
Example{
Description: "Pause a task by name",
Command: "coder task pause my-task",
},
Example{
Description: "Pause another user's task",
Command: "coder task pause alice/my-task",
},
Example{
Description: "Pause a task without confirmation",
Command: "coder task pause my-task --yes",
},
),
Middleware: serpent.Chain(
serpent.RequireNArgs(1),
),
Options: serpent.OptionSet{
cliui.SkipPromptOption(),
},
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
client, err := r.InitClient(inv)
if err != nil {
return err
}
task, err := client.TaskByIdentifier(ctx, inv.Args[0])
if err != nil {
return xerrors.Errorf("resolve task %q: %w", inv.Args[0], err)
}
display := fmt.Sprintf("%s/%s", task.OwnerName, task.Name)
if task.Status == codersdk.TaskStatusPaused {
return xerrors.Errorf("task %q is already paused", display)
}
_, err = cliui.Prompt(inv, cliui.PromptOptions{
Text: fmt.Sprintf("Pause task %s?", pretty.Sprint(cliui.DefaultStyles.Code, display)),
IsConfirm: true,
Default: cliui.ConfirmNo,
})
if err != nil {
return err
}
resp, err := client.PauseTask(ctx, task.OwnerName, task.ID)
if err != nil {
return xerrors.Errorf("pause task %q: %w", display, err)
}
if resp.WorkspaceBuild == nil {
return xerrors.Errorf("pause task %q: no workspace build returned", display)
}
err = cliui.WorkspaceBuild(ctx, inv.Stdout, client, resp.WorkspaceBuild.ID)
if err != nil {
return xerrors.Errorf("watch pause build for task %q: %w", display, err)
}
_, _ = fmt.Fprintf(
inv.Stdout,
"\nThe %s task has been paused at %s!\n",
cliui.Keyword(task.Name),
cliui.Timestamp(time.Now()),
)
return nil
},
}
return cmd
}
-144
@@ -1,144 +0,0 @@
package cli_test
import (
"fmt"
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
func TestExpTaskPause(t *testing.T) {
t.Parallel()
t.Run("WithYesFlag", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to pause the task
inv, root := clitest.New(t, "task", "pause", task.Name, "--yes")
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
// Then: Expect the task to be paused
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.Contains(t, output.Stdout(), "has been paused")
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusPaused, updated.Status)
})
// OtherUserTask verifies that an admin can pause a task owned by
// another user using the "owner/name" identifier format.
t.Run("OtherUserTask", func(t *testing.T) {
t.Parallel()
// Given: A different user's running task
setupCtx := testutil.Context(t, testutil.WaitLong)
adminClient, _, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to pause their task
identifier := fmt.Sprintf("%s/%s", task.OwnerName, task.Name)
inv, root := clitest.New(t, "task", "pause", identifier, "--yes")
output := clitest.Capture(inv)
clitest.SetupConfig(t, adminClient, root)
// Then: We expect the task to be paused
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.Contains(t, output.Stdout(), "has been paused")
updated, err := adminClient.TaskByIdentifier(ctx, identifier)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusPaused, updated.Status)
})
t.Run("PromptConfirm", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to pause the task
inv, root := clitest.New(t, "task", "pause", task.Name)
clitest.SetupConfig(t, userClient, root)
// And: We confirm we want to pause the task
ctx := testutil.Context(t, testutil.WaitMedium)
inv = inv.WithContext(ctx)
pty := ptytest.New(t).Attach(inv)
w := clitest.StartWithWaiter(t, inv)
pty.ExpectMatchContext(ctx, "Pause task")
pty.WriteLine("yes")
// Then: We expect the task to be paused
pty.ExpectMatchContext(ctx, "has been paused")
require.NoError(t, w.Wait())
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusPaused, updated.Status)
})
t.Run("PromptDecline", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to pause the task
inv, root := clitest.New(t, "task", "pause", task.Name)
clitest.SetupConfig(t, userClient, root)
// But: We say no at the confirmation screen
ctx := testutil.Context(t, testutil.WaitMedium)
inv = inv.WithContext(ctx)
pty := ptytest.New(t).Attach(inv)
w := clitest.StartWithWaiter(t, inv)
pty.ExpectMatchContext(ctx, "Pause task")
pty.WriteLine("no")
require.Error(t, w.Wait())
// Then: We expect the task to not be paused
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.NotEqual(t, codersdk.TaskStatusPaused, updated.Status)
})
t.Run("TaskAlreadyPaused", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// And: We paused the running task
ctx := testutil.Context(t, testutil.WaitMedium)
resp, err := userClient.PauseTask(ctx, task.OwnerName, task.ID)
require.NoError(t, err)
require.NotNil(t, resp.WorkspaceBuild)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, userClient, resp.WorkspaceBuild.ID)
// When: We attempt to pause the task again
inv, root := clitest.New(t, "task", "pause", task.Name, "--yes")
clitest.SetupConfig(t, userClient, root)
// Then: We expect to get an error that the task is already paused
err = inv.WithContext(ctx).Run()
require.ErrorContains(t, err, "is already paused")
})
}
-95
@@ -1,95 +0,0 @@
package cli
import (
"fmt"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/pretty"
"github.com/coder/serpent"
)
func (r *RootCmd) taskResume() *serpent.Command {
var noWait bool
cmd := &serpent.Command{
Use: "resume <task>",
Short: "Resume a task",
Long: FormatExamples(
Example{
Description: "Resume a task by name",
Command: "coder task resume my-task",
},
Example{
Description: "Resume another user's task",
Command: "coder task resume alice/my-task",
},
Example{
Description: "Resume a task without confirmation",
Command: "coder task resume my-task --yes",
},
),
Middleware: serpent.Chain(
serpent.RequireNArgs(1),
),
Options: serpent.OptionSet{
{
Flag: "no-wait",
Description: "Return immediately after resuming the task.",
Value: serpent.BoolOf(&noWait),
},
cliui.SkipPromptOption(),
},
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
client, err := r.InitClient(inv)
if err != nil {
return err
}
task, err := client.TaskByIdentifier(ctx, inv.Args[0])
if err != nil {
return xerrors.Errorf("resolve task %q: %w", inv.Args[0], err)
}
display := fmt.Sprintf("%s/%s", task.OwnerName, task.Name)
if task.Status == codersdk.TaskStatusError || task.Status == codersdk.TaskStatusUnknown {
return xerrors.Errorf("task %q is in %s state and cannot be resumed; check the workspace build logs and agent status for details", display, task.Status)
} else if task.Status != codersdk.TaskStatusPaused {
return xerrors.Errorf("task %q cannot be resumed (current status: %s)", display, task.Status)
}
_, err = cliui.Prompt(inv, cliui.PromptOptions{
Text: fmt.Sprintf("Resume task %s?", pretty.Sprint(cliui.DefaultStyles.Code, display)),
IsConfirm: true,
Default: cliui.ConfirmNo,
})
if err != nil {
return err
}
resp, err := client.ResumeTask(ctx, task.OwnerName, task.ID)
if err != nil {
return xerrors.Errorf("resume task %q: %w", display, err)
} else if resp.WorkspaceBuild == nil {
return xerrors.Errorf("resume task %q: no workspace build returned", display)
}
if noWait {
_, _ = fmt.Fprintf(inv.Stdout, "Resuming task %q in the background.\n", cliui.Keyword(display))
return nil
}
if err = cliui.WorkspaceBuild(ctx, inv.Stdout, client, resp.WorkspaceBuild.ID); err != nil {
return xerrors.Errorf("watch resume build for task %q: %w", display, err)
}
_, _ = fmt.Fprintf(inv.Stdout, "\nThe %s task has been resumed.\n", cliui.Keyword(display))
return nil
},
}
return cmd
}
-183
@@ -1,183 +0,0 @@
package cli_test
import (
"context"
"fmt"
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
func TestExpTaskResume(t *testing.T) {
t.Parallel()
// pauseTask is a helper that pauses a task and waits for the stop
// build to complete.
pauseTask := func(ctx context.Context, t *testing.T, client *codersdk.Client, task codersdk.Task) {
t.Helper()
pauseResp, err := client.PauseTask(ctx, task.OwnerName, task.ID)
require.NoError(t, err)
require.NotNil(t, pauseResp.WorkspaceBuild)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, pauseResp.WorkspaceBuild.ID)
}
t.Run("WithYesFlag", func(t *testing.T) {
t.Parallel()
// Given: A paused task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
pauseTask(setupCtx, t, userClient, task)
// When: We attempt to resume the task
inv, root := clitest.New(t, "task", "resume", task.Name, "--yes")
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
// Then: We expect the task to be resumed
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.Contains(t, output.Stdout(), "has been resumed")
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusInitializing, updated.Status)
})
// OtherUserTask verifies that an admin can resume a task owned by
// another user using the "owner/name" identifier format.
t.Run("OtherUserTask", func(t *testing.T) {
t.Parallel()
// Given: A different user's paused task
setupCtx := testutil.Context(t, testutil.WaitLong)
adminClient, userClient, task := setupCLITaskTest(setupCtx, t, nil)
pauseTask(setupCtx, t, userClient, task)
// When: We attempt to resume their task
identifier := fmt.Sprintf("%s/%s", task.OwnerName, task.Name)
inv, root := clitest.New(t, "task", "resume", identifier, "--yes")
output := clitest.Capture(inv)
clitest.SetupConfig(t, adminClient, root)
// Then: We expect the task to be resumed
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.Contains(t, output.Stdout(), "has been resumed")
updated, err := adminClient.TaskByIdentifier(ctx, identifier)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusInitializing, updated.Status)
})
t.Run("NoWait", func(t *testing.T) {
t.Parallel()
// Given: A paused task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
pauseTask(setupCtx, t, userClient, task)
// When: We attempt to resume the task (and specify no wait)
inv, root := clitest.New(t, "task", "resume", task.Name, "--yes", "--no-wait")
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
// Then: We expect the task to be resumed in the background
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.Contains(t, output.Stdout(), "in the background")
// And: The task to eventually be resumed
require.True(t, task.WorkspaceID.Valid, "task should have a workspace ID")
ws := coderdtest.MustWorkspace(t, userClient, task.WorkspaceID.UUID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, userClient, ws.LatestBuild.ID)
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusInitializing, updated.Status)
})
t.Run("PromptConfirm", func(t *testing.T) {
t.Parallel()
// Given: A paused task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
pauseTask(setupCtx, t, userClient, task)
// When: We attempt to resume the task
inv, root := clitest.New(t, "task", "resume", task.Name)
clitest.SetupConfig(t, userClient, root)
// And: We confirm we want to resume the task
ctx := testutil.Context(t, testutil.WaitMedium)
inv = inv.WithContext(ctx)
pty := ptytest.New(t).Attach(inv)
w := clitest.StartWithWaiter(t, inv)
pty.ExpectMatchContext(ctx, "Resume task")
pty.WriteLine("yes")
// Then: We expect the task to be resumed
pty.ExpectMatchContext(ctx, "has been resumed")
require.NoError(t, w.Wait())
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusInitializing, updated.Status)
})
t.Run("PromptDecline", func(t *testing.T) {
t.Parallel()
// Given: A paused task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
pauseTask(setupCtx, t, userClient, task)
// When: We attempt to resume the task
inv, root := clitest.New(t, "task", "resume", task.Name)
clitest.SetupConfig(t, userClient, root)
// But: Say no at the confirmation screen
ctx := testutil.Context(t, testutil.WaitMedium)
inv = inv.WithContext(ctx)
pty := ptytest.New(t).Attach(inv)
w := clitest.StartWithWaiter(t, inv)
pty.ExpectMatchContext(ctx, "Resume task")
pty.WriteLine("no")
require.Error(t, w.Wait())
// Then: We expect the task to still be paused
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusPaused, updated.Status)
})
t.Run("TaskNotPaused", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to resume the task that is not paused
inv, root := clitest.New(t, "task", "resume", task.Name, "--yes")
clitest.SetupConfig(t, userClient, root)
// Then: We expect to get an error that the task is not paused
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.ErrorContains(t, err, "cannot be resumed")
})
}
+7 -4
@@ -25,7 +25,8 @@ func Test_TaskSend(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "carry on with the task")
@@ -41,7 +42,8 @@ func Test_TaskSend(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.ID.String(), "carry on with the task")
@@ -57,7 +59,8 @@ func Test_TaskSend(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "--stdin")
@@ -110,7 +113,7 @@ func Test_TaskSend(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendErr(t, assert.AnError))
userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendErr(t, assert.AnError))
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "some task input")
+10 -44
@@ -120,40 +120,6 @@ func Test_Tasks(t *testing.T) {
require.Equal(t, logs[2].Type, codersdk.TaskLogTypeOutput, "third message should be an output")
},
},
{
name: "pause task",
cmdArgs: []string{"task", "pause", taskName, "--yes"},
assertFn: func(stdout string, userClient *codersdk.Client) {
require.Contains(t, stdout, "has been paused", "pause output should confirm task was paused")
},
},
{
name: "get task status after pause",
cmdArgs: []string{"task", "status", taskName, "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var task codersdk.Task
require.NoError(t, json.NewDecoder(strings.NewReader(stdout)).Decode(&task), "should unmarshal task status")
require.Equal(t, taskName, task.Name, "task name should match")
require.Equal(t, codersdk.TaskStatusPaused, task.Status, "task should be paused")
},
},
{
name: "resume task",
cmdArgs: []string{"task", "resume", taskName, "--yes"},
assertFn: func(stdout string, userClient *codersdk.Client) {
require.Contains(t, stdout, "has been resumed", "resume output should confirm task was resumed")
},
},
{
name: "get task status after resume",
cmdArgs: []string{"task", "status", taskName, "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var task codersdk.Task
require.NoError(t, json.NewDecoder(strings.NewReader(stdout)).Decode(&task), "should unmarshal task status")
require.Equal(t, taskName, task.Name, "task name should match")
require.Equal(t, codersdk.TaskStatusInitializing, task.Status, "task should be initializing after resume")
},
},
{
name: "delete task",
cmdArgs: []string{"task", "delete", taskName, "--yes"},
@@ -272,17 +238,17 @@ func fakeAgentAPIEcho(ctx context.Context, t testing.TB, initMsg agentapisdk.Mes
// setupCLITaskTest creates a test workspace with an AI task template and agent,
// with a fake agent API configured with the provided set of handlers.
// Returns the user client and the created task.
func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[string]http.HandlerFunc) (ownerClient *codersdk.Client, memberClient *codersdk.Client, task codersdk.Task) {
func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[string]http.HandlerFunc) (*codersdk.Client, codersdk.Task) {
t.Helper()
ownerClient = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, ownerClient)
userClient, _ := coderdtest.CreateAnotherUser(t, ownerClient, owner.OrganizationID)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
fakeAPI := startFakeAgentAPI(t, agentAPIHandlers)
authToken := uuid.NewString()
template := createAITaskTemplate(t, ownerClient, owner.OrganizationID, withSidebarURL(fakeAPI.URL()), withAgentToken(authToken))
template := createAITaskTemplate(t, client, owner.OrganizationID, withSidebarURL(fakeAPI.URL()), withAgentToken(authToken))
wantPrompt := "test prompt"
task, err := userClient.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
@@ -296,17 +262,17 @@ func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[st
require.True(t, task.WorkspaceID.Valid, "task should have a workspace ID")
workspace, err := userClient.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, userClient, workspace.LatestBuild.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
agentClient := agentsdk.New(userClient.URL, agentsdk.WithFixedToken(authToken))
_ = agenttest.New(t, userClient.URL, authToken, func(o *agent.Options) {
agentClient := agentsdk.New(client.URL, agentsdk.WithFixedToken(authToken))
_ = agenttest.New(t, client.URL, authToken, func(o *agent.Options) {
o.Client = agentClient
})
coderdtest.NewWorkspaceAgentWaiter(t, userClient, workspace.ID).
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).
WaitFor(coderdtest.AgentsReady)
return ownerClient, userClient, task
return userClient, task
}
// setupCLITaskTestWithSnapshot creates a task in the specified status with a log snapshot.
-4
@@ -139,10 +139,8 @@ func (r *RootCmd) templateVersionsList() *serpent.Command {
type templateVersionRow struct {
// For json format:
TemplateVersion codersdk.TemplateVersion `table:"-"`
ActiveJSON bool `json:"active" table:"-"`
// For table format:
ID string `json:"-" table:"id"`
Name string `json:"-" table:"name,default_sort"`
CreatedAt time.Time `json:"-" table:"created at"`
CreatedBy string `json:"-" table:"created by"`
@@ -168,8 +166,6 @@ func templateVersionsToRows(activeVersionID uuid.UUID, templateVersions ...coder
rows[i] = templateVersionRow{
TemplateVersion: templateVersion,
ActiveJSON: templateVersion.ID == activeVersionID,
ID: templateVersion.ID.String(),
Name: templateVersion.Name,
CreatedAt: templateVersion.CreatedAt,
CreatedBy: templateVersion.CreatedBy.Username,
-29
@@ -1,9 +1,7 @@
package cli_test
import (
"bytes"
"context"
"encoding/json"
"testing"
"github.com/stretchr/testify/assert"
@@ -42,33 +40,6 @@ func TestTemplateVersions(t *testing.T) {
pty.ExpectMatch(version.CreatedBy.Username)
pty.ExpectMatch("Active")
})
t.Run("ListVersionsJSON", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
inv, root := clitest.New(t, "templates", "versions", "list", template.Name, "--output", "json")
clitest.SetupConfig(t, member, root)
var stdout bytes.Buffer
inv.Stdout = &stdout
require.NoError(t, inv.Run())
var rows []struct {
TemplateVersion codersdk.TemplateVersion `json:"TemplateVersion"`
Active bool `json:"active"`
}
require.NoError(t, json.Unmarshal(stdout.Bytes(), &rows))
require.Len(t, rows, 1)
assert.Equal(t, version.ID, rows[0].TemplateVersion.ID)
assert.True(t, rows[0].Active)
})
}
func TestTemplateVersionsPromote(t *testing.T) {
+5 -8
@@ -49,9 +49,10 @@ OPTIONS:
security purposes if a --wildcard-access-url is configured.
--disable-workspace-sharing bool, $CODER_DISABLE_WORKSPACE_SHARING
Disable workspace sharing. Workspace ACL checking is disabled and only
owners can have ssh, apps and terminal access to workspaces. Access
based on the 'owner' role is also allowed unless disabled via
Disable workspace sharing (requires the "workspace-sharing" experiment
to be enabled). Workspace ACL checking is disabled and only owners can
have ssh, apps and terminal access to workspaces. Access based on the
'owner' role is also allowed unless disabled via
--disable-owner-workspace-access.
--swagger-enable bool, $CODER_SWAGGER_ENABLE
@@ -382,17 +383,13 @@ NETWORKING OPTIONS:
--samesite-auth-cookie lax|none, $CODER_SAMESITE_AUTH_COOKIE (default: lax)
Controls how the 'SameSite' property is set on browser session cookies.
--secure-auth-cookie bool, $CODER_SECURE_AUTH_COOKIE (default: false)
--secure-auth-cookie bool, $CODER_SECURE_AUTH_COOKIE
Controls if the 'Secure' property is set on browser session cookies.
--wildcard-access-url string, $CODER_WILDCARD_ACCESS_URL
Specifies the wildcard hostname to use for workspace applications in
the form "*.example.com".
--host-prefix-cookie bool, $CODER_HOST_PREFIX_COOKIE (default: false)
Recommended to be enabled. Enables `__Host-` prefix for cookies to
guarantee they are only set by the right domain.
NETWORKING / DERP OPTIONS:
Most Coder deployments never have to think about DERP because all connections
between workspaces and users are peer-to-peer. However, when Coder cannot
-2
@@ -12,8 +12,6 @@ SUBCOMMANDS:
delete Delete tasks
list List tasks
logs Show a task's logs
pause Pause a task
resume Resume a task
send Send input to a task
status Show the status of a task.
-25
@@ -1,25 +0,0 @@
coder v0.0.0-devel
USAGE:
coder task pause [flags] <task>
Pause a task
- Pause a task by name:
$ coder task pause my-task
- Pause another user's task:
$ coder task pause alice/my-task
- Pause a task without confirmation:
$ coder task pause my-task --yes
OPTIONS:
-y, --yes bool
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.
-28
@@ -1,28 +0,0 @@
coder v0.0.0-devel
USAGE:
coder task resume [flags] <task>
Resume a task
- Resume a task by name:
$ coder task resume my-task
- Resume another user's task:
$ coder task resume alice/my-task
- Resume a task without confirmation:
$ coder task resume my-task --yes
OPTIONS:
--no-wait bool
Return immediately after resuming the task.
-y, --yes bool
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -9,7 +9,7 @@ OPTIONS:
-O, --org string, $CODER_ORGANIZATION
Select which organization (uuid or name) to use.
-c, --column [id|name|created at|created by|status|active|archived] (default: name,created at,created by,status,active)
-c, --column [name|created at|created by|status|active|archived] (default: name,created at,created by,status,active)
Columns to display in table output.
--include-archived bool
+1 -1
@@ -27,7 +27,7 @@ USAGE:
SUBCOMMANDS:
create Create a token
list List tokens
remove Expire or delete a token
remove Delete a token
view Display detailed information about a token
———
-4
@@ -15,10 +15,6 @@ OPTIONS:
-c, --column [id|name|scopes|allow list|last used|expires at|created at|owner] (default: id,name,scopes,allow list,last used,expires at,created at)
Columns to display in table output.
--include-expired bool
Include expired tokens in the output. By default, expired tokens are
hidden.
-o, --output table|json (default: table)
Output format.
+2 -10
@@ -1,19 +1,11 @@
coder v0.0.0-devel
USAGE:
coder tokens remove [flags] <name|id|token>
coder tokens remove <name|id|token>
Expire or delete a token
Delete a token
Aliases: delete, rm
Remove a token by expiring it. Use --delete to permanently hard-delete the
token instead.
OPTIONS:
--delete bool
Permanently delete the token instead of expiring it. This removes the
audit trail.
———
Run `coder --help` for a list of global options.
+5 -14
@@ -176,15 +176,11 @@ networking:
# (default: <unset>, type: string-array)
proxyTrustedOrigins: []
# Controls if the 'Secure' property is set on browser session cookies.
# (default: false, type: bool)
# (default: <unset>, type: bool)
secureAuthCookie: false
# Controls the 'SameSite' property is set on browser session cookies.
# (default: lax, type: enum[lax\|none])
sameSiteAuthCookie: lax
# Recommended to be enabled. Enables `__Host-` prefix for cookies to guarantee
# they are only set by the right domain.
# (default: false, type: bool)
hostPrefixCookie: false
# Whether Coder only allows connections to workspaces via the browser.
# (default: <unset>, type: bool)
browserOnly: false
@@ -421,11 +417,6 @@ oidc:
# an insecure OIDC configuration. It is not recommended to use this flag.
# (default: <unset>, type: bool)
dangerousSkipIssuerChecks: false
# Optional override of the default redirect url which uses the deployment's access
# url. Useful in situations where a deployment has more than 1 domain. Using this
# setting can also break OIDC, so use with caution.
# (default: <unset>, type: url)
oidc-redirect-url:
# Telemetry is critical to our ability to improve Coder. We strip all personal
# information before sending data to our servers. Please only disable telemetry
# when required by your organization's security policy.
@@ -523,10 +514,10 @@ disablePathApps: false
# workspaces.
# (default: <unset>, type: bool)
disableOwnerWorkspaceAccess: false
# Disable workspace sharing. Workspace ACL checking is disabled and only owners
# can have ssh, apps and terminal access to workspaces. Access based on the
# 'owner' role is also allowed unless disabled via
# --disable-owner-workspace-access.
# Disable workspace sharing (requires the "workspace-sharing" experiment to be
# enabled). Workspace ACL checking is disabled and only owners can have ssh, apps
# and terminal access to workspaces. Access based on the 'owner' role is also
# allowed unless disabled via --disable-owner-workspace-access.
# (default: <unset>, type: bool)
disableWorkspaceSharing: false
# These options change the behavior of how clients interact with the Coder.
+14 -37
@@ -218,10 +218,9 @@ func (r *RootCmd) listTokens() *serpent.Command {
}
var (
all bool
includeExpired bool
displayTokens []tokenListRow
formatter = cliui.NewOutputFormatter(
all bool
displayTokens []tokenListRow
formatter = cliui.NewOutputFormatter(
cliui.TableFormat([]tokenListRow{}, defaultCols),
cliui.JSONFormat(),
)
@@ -241,8 +240,7 @@ func (r *RootCmd) listTokens() *serpent.Command {
}
tokens, err := client.Tokens(inv.Context(), codersdk.Me, codersdk.TokensFilter{
IncludeAll: all,
IncludeExpired: includeExpired,
IncludeAll: all,
})
if err != nil {
return xerrors.Errorf("list tokens: %w", err)
@@ -276,12 +274,6 @@ func (r *RootCmd) listTokens() *serpent.Command {
Description: "Specifies whether all users' tokens will be listed or not (must have Owner role to see all tokens).",
Value: serpent.BoolOf(&all),
},
{
Name: "include-expired",
Flag: "include-expired",
Description: "Include expired tokens in the output. By default, expired tokens are hidden.",
Value: serpent.BoolOf(&includeExpired),
},
}
formatter.AttachOptions(&cmd.Options)
@@ -331,13 +323,10 @@ func (r *RootCmd) viewToken() *serpent.Command {
}
func (r *RootCmd) removeToken() *serpent.Command {
var deleteToken bool
cmd := &serpent.Command{
Use: "remove <name|id|token>",
Aliases: []string{"delete"},
Short: "Expire or delete a token",
Long: "Remove a token by expiring it. Use --delete to permanently hard-" +
"delete the token instead.",
Short: "Delete a token",
Middleware: serpent.Chain(
serpent.RequireNArgs(1),
),
@@ -349,7 +338,7 @@ func (r *RootCmd) removeToken() *serpent.Command {
token, err := client.APIKeyByName(inv.Context(), codersdk.Me, inv.Args[0])
if err != nil {
// If it's a token, we need to extract the ID.
// If it's a token, we need to extract the ID
maybeID := strings.Split(inv.Args[0], "-")[0]
token, err = client.APIKeyByID(inv.Context(), codersdk.Me, maybeID)
if err != nil {
@@ -357,29 +346,17 @@ func (r *RootCmd) removeToken() *serpent.Command {
}
}
if deleteToken {
err = client.DeleteAPIKey(inv.Context(), codersdk.Me, token.ID)
if err != nil {
return xerrors.Errorf("delete api key: %w", err)
}
cliui.Infof(inv.Stdout, "Token has been deleted.")
return nil
}
err = client.ExpireAPIKey(inv.Context(), codersdk.Me, token.ID)
err = client.DeleteAPIKey(inv.Context(), codersdk.Me, token.ID)
if err != nil {
return xerrors.Errorf("expire api key: %w", err)
return xerrors.Errorf("delete api key: %w", err)
}
cliui.Infof(inv.Stdout, "Token has been expired.")
return nil
},
}
cmd.Options = serpent.OptionSet{
{
Flag: "delete",
Description: "Permanently delete the token instead of expiring it. This removes the audit trail.",
Value: serpent.BoolOf(&deleteToken),
cliui.Infof(
inv.Stdout,
"Token has been deleted.",
)
return nil
},
}
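Put differently, the remove flow now resolves the key and hard-deletes it in one step. A minimal sketch using only the SDK calls shown above (assumes an authenticated `*codersdk.Client` named `client` and a `ctx`; the name "token-one" is illustrative):

```go
// Resolve by token name first; if that fails, treat the argument as a
// raw token and look it up by its ID prefix (the part before the first
// "-"). Then hard-delete: there is no longer an expire step.
token, err := client.APIKeyByName(ctx, codersdk.Me, "token-one")
if err != nil {
	maybeID := strings.Split("token-one", "-")[0]
	if token, err = client.APIKeyByID(ctx, codersdk.Me, maybeID); err != nil {
		return xerrors.Errorf("token not found: %w", err)
	}
}
if err := client.DeleteAPIKey(ctx, codersdk.Me, token.ID); err != nil {
	return xerrors.Errorf("delete api key: %w", err)
}
```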
+17 -153
@@ -6,16 +6,12 @@ import (
"encoding/json"
"fmt"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
)
@@ -26,7 +22,7 @@ func TestTokens(t *testing.T) {
adminUser := coderdtest.CreateFirstUser(t, client)
secondUserClient, secondUser := coderdtest.CreateAnotherUser(t, client, adminUser.OrganizationID)
thirdUserClient, thirdUser := coderdtest.CreateAnotherUser(t, client, adminUser.OrganizationID)
_, thirdUser := coderdtest.CreateAnotherUser(t, client, adminUser.OrganizationID)
ctx, cancelFunc := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancelFunc()
@@ -159,7 +155,7 @@ func TestTokens(t *testing.T) {
require.Len(t, scopedToken.AllowList, 1)
require.Equal(t, allowSpec, scopedToken.AllowList[0].String())
// Delete by name (default behavior is now expire)
// Delete by name
inv, root = clitest.New(t, "tokens", "rm", "token-one")
clitest.SetupConfig(t, client, root)
buf = new(bytes.Buffer)
@@ -168,42 +164,21 @@ func TestTokens(t *testing.T) {
require.NoError(t, err)
res = buf.String()
require.NotEmpty(t, res)
require.Contains(t, res, "expired")
// Regular users cannot expire other users' tokens (expire is default now).
inv, root = clitest.New(t, "tokens", "rm", secondTokenID)
clitest.SetupConfig(t, thirdUserClient, root)
buf = new(bytes.Buffer)
inv.Stdout = buf
err = inv.WithContext(ctx).Run()
require.Error(t, err)
require.Contains(t, err.Error(), "not found")
// Only admin users can expire other users' tokens (expire is default now).
inv, root = clitest.New(t, "tokens", "rm", secondTokenID)
clitest.SetupConfig(t, client, root)
buf = new(bytes.Buffer)
inv.Stdout = buf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
// Validate that token was expired
if token, err := client.APIKeyByName(ctx, secondUser.ID.String(), "token-two"); assert.NoError(t, err) {
require.True(t, token.ExpiresAt.Before(time.Now()))
}
// Delete by ID (explicit delete flag)
inv, root = clitest.New(t, "tokens", "rm", "--delete", secondTokenID)
clitest.SetupConfig(t, client, root)
buf = new(bytes.Buffer)
inv.Stdout = buf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
res = buf.String()
require.NotEmpty(t, res)
require.Contains(t, res, "deleted")
// Delete scoped token by ID (explicit delete flag)
inv, root = clitest.New(t, "tokens", "rm", "--delete", scopedTokenID)
// Delete by ID
inv, root = clitest.New(t, "tokens", "rm", secondTokenID)
clitest.SetupConfig(t, client, root)
buf = new(bytes.Buffer)
inv.Stdout = buf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
res = buf.String()
require.NotEmpty(t, res)
require.Contains(t, res, "deleted")
// Delete scoped token by ID
inv, root = clitest.New(t, "tokens", "rm", scopedTokenID)
clitest.SetupConfig(t, client, root)
buf = new(bytes.Buffer)
inv.Stdout = buf
@@ -224,8 +199,8 @@ func TestTokens(t *testing.T) {
require.NotEmpty(t, res)
fourthToken := res
// Delete by token (explicit delete flag)
inv, root = clitest.New(t, "tokens", "rm", "--delete", fourthToken)
// Delete by token
inv, root = clitest.New(t, "tokens", "rm", fourthToken)
clitest.SetupConfig(t, client, root)
buf = new(bytes.Buffer)
inv.Stdout = buf
@@ -235,114 +210,3 @@ func TestTokens(t *testing.T) {
require.NotEmpty(t, res)
require.Contains(t, res, "deleted")
}
func TestTokensListExpiredFiltering(t *testing.T) {
t.Parallel()
client, _, api := coderdtest.NewWithAPI(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
// Create a valid (non-expired) token
validToken, _ := dbgen.APIKey(t, api.Database, database.APIKey{
UserID: owner.UserID,
ExpiresAt: time.Now().Add(24 * time.Hour),
LoginType: database.LoginTypeToken,
TokenName: "valid-token",
})
// Create an expired token
expiredToken, _ := dbgen.APIKey(t, api.Database, database.APIKey{
UserID: owner.UserID,
ExpiresAt: time.Now().Add(-24 * time.Hour),
LoginType: database.LoginTypeToken,
TokenName: "expired-token",
})
t.Run("HidesExpiredByDefault", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
inv, root := clitest.New(t, "tokens", "ls")
clitest.SetupConfig(t, client, root)
buf := new(bytes.Buffer)
inv.Stdout = buf
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
res := buf.String()
require.Contains(t, res, validToken.ID)
require.Contains(t, res, "valid-token")
require.NotContains(t, res, expiredToken.ID)
require.NotContains(t, res, "expired-token")
})
t.Run("ShowsExpiredWithFlag", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
inv, root := clitest.New(t, "tokens", "ls", "--include-expired")
clitest.SetupConfig(t, client, root)
buf := new(bytes.Buffer)
inv.Stdout = buf
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
res := buf.String()
require.Contains(t, res, validToken.ID)
require.Contains(t, res, "valid-token")
require.Contains(t, res, expiredToken.ID)
require.Contains(t, res, "expired-token")
})
t.Run("JSONOutputRespectsFilter", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
// Default (no expired)
inv, root := clitest.New(t, "tokens", "ls", "--output=json")
clitest.SetupConfig(t, client, root)
buf := new(bytes.Buffer)
inv.Stdout = buf
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
res := buf.String()
require.Contains(t, res, "valid-token")
require.NotContains(t, res, "expired-token")
// With --include-expired
inv, root = clitest.New(t, "tokens", "ls", "--output=json", "--include-expired")
clitest.SetupConfig(t, client, root)
buf = new(bytes.Buffer)
inv.Stdout = buf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
res = buf.String()
require.Contains(t, res, "valid-token")
require.Contains(t, res, "expired-token")
})
t.Run("AllUsersWithIncludeExpired", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
inv, root := clitest.New(t, "tokens", "ls", "--all", "--include-expired")
clitest.SetupConfig(t, client, root)
buf := new(bytes.Buffer)
inv.Stdout = buf
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
res := buf.String()
// Should show both valid and expired tokens
require.Contains(t, res, validToken.ID)
require.Contains(t, res, "valid-token")
require.Contains(t, res, expiredToken.ID)
require.Contains(t, res, "expired-token")
})
}
-70
@@ -990,74 +990,4 @@ func TestUpdateValidateRichParameters(t *testing.T) {
_ = testutil.TryReceive(ctx, t, doneChan)
})
t.Run("NewImmutableParameterViaFlag", func(t *testing.T) {
t.Parallel()
// Create template and workspace with only a mutable parameter.
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
templateParameters := []*proto.RichParameter{
{Name: stringParameterName, Type: "string", Mutable: true, Required: true, Options: []*proto.RichParameterOption{
{Name: "First option", Description: "This is first option", Value: "1st"},
{Name: "Second option", Description: "This is second option", Value: "2nd"},
}},
}
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(templateParameters))
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
inv, root := clitest.New(t, "create", "my-workspace", "--yes", "--template", template.Name, "--parameter", fmt.Sprintf("%s=%s", stringParameterName, "1st"))
clitest.SetupConfig(t, member, root)
err := inv.Run()
require.NoError(t, err)
// Update template: add a new immutable parameter.
updatedTemplateParameters := []*proto.RichParameter{
templateParameters[0],
{Name: immutableParameterName, Type: "string", Mutable: false, Required: true, Options: []*proto.RichParameterOption{
{Name: "fir", Description: "First option for immutable parameter", Value: "I"},
{Name: "sec", Description: "Second option for immutable parameter", Value: "II"},
}},
}
updatedVersion := coderdtest.UpdateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(updatedTemplateParameters), template.ID)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, updatedVersion.ID)
err = client.UpdateActiveTemplateVersion(context.Background(), template.ID, codersdk.UpdateActiveTemplateVersion{
ID: updatedVersion.ID,
})
require.NoError(t, err)
// Update workspace, supplying the new immutable parameter via
// the --parameter flag. This should succeed because it's the
// first time this parameter is being set.
inv, root = clitest.New(t, "update", "my-workspace",
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, "II"))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
pty.ExpectMatch("Planning workspace")
ctx := testutil.Context(t, testutil.WaitLong)
_ = testutil.TryReceive(ctx, t, doneChan)
// Verify the immutable parameter was set correctly.
workspace, err := client.WorkspaceByOwnerAndName(ctx, memberUser.ID.String(), "my-workspace", codersdk.WorkspaceOptions{})
require.NoError(t, err)
actualParameters, err := client.WorkspaceBuildParameters(ctx, workspace.LatestBuild.ID)
require.NoError(t, err)
require.Contains(t, actualParameters, codersdk.WorkspaceBuildParameter{
Name: immutableParameterName,
Value: "II",
})
})
}
+24
@@ -0,0 +1,24 @@
//go:build !windows && !darwin
package cli
import (
"golang.org/x/xerrors"
"github.com/coder/serpent"
)
func (*RootCmd) vpnDaemonRun() *serpent.Command {
cmd := &serpent.Command{
Use: "run",
Short: "Run the VPN daemon on Windows.",
Middleware: serpent.Chain(
serpent.RequireNArgs(0),
),
Handler: func(_ *serpent.Invocation) error {
return xerrors.New("vpn-daemon subcommand is not supported on this platform")
},
}
return cmd
}
@@ -1,4 +1,4 @@
//go:build windows || linux
//go:build windows
package cli
@@ -11,7 +11,7 @@ import (
"github.com/coder/serpent"
)
func (*RootCmd) vpnDaemonRun() *serpent.Command {
func (r *RootCmd) vpnDaemonRun() *serpent.Command {
var (
rpcReadHandleInt int64
rpcWriteHandleInt int64
@@ -19,7 +19,7 @@ func (*RootCmd) vpnDaemonRun() *serpent.Command {
cmd := &serpent.Command{
Use: "run",
Short: "Run the VPN daemon on Windows and Linux.",
Short: "Run the VPN daemon on Windows.",
Middleware: serpent.Chain(
serpent.RequireNArgs(0),
),
@@ -53,8 +53,8 @@ func (*RootCmd) vpnDaemonRun() *serpent.Command {
return xerrors.Errorf("rpc-read-handle (%v) and rpc-write-handle (%v) must be different", rpcReadHandleInt, rpcWriteHandleInt)
}
// The manager passes the read and write descriptors directly to the
// daemon, so we can open the RPC pipe from the raw values.
// We don't need to worry about duplicating the handles on Windows,
// which is different from Unix.
logger.Info(ctx, "opening bidirectional RPC pipe", slog.F("rpc_read_handle", rpcReadHandleInt), slog.F("rpc_write_handle", rpcWriteHandleInt))
pipe, err := vpn.NewBidirectionalPipe(uintptr(rpcReadHandleInt), uintptr(rpcWriteHandleInt))
if err != nil {
@@ -62,7 +62,7 @@ func (*RootCmd) vpnDaemonRun() *serpent.Command {
}
defer pipe.Close()
logger.Info(ctx, "starting VPN tunnel")
logger.Info(ctx, "starting tunnel")
tunnel, err := vpn.NewTunnel(ctx, logger, pipe, vpn.NewClient(), vpn.UseOSNetworkingStack())
if err != nil {
return xerrors.Errorf("create new tunnel for client: %w", err)
@@ -1,19 +0,0 @@
//go:build linux
package cli_test
import (
"os"
"testing"
"github.com/stretchr/testify/require"
"golang.org/x/sys/unix"
)
func dupHandle(t *testing.T, f *os.File) uintptr {
t.Helper()
dupFD, err := unix.Dup(int(f.Fd()))
require.NoError(t, err)
return uintptr(dupFD)
}
@@ -1,33 +0,0 @@
//go:build windows
package cli_test
import (
"os"
"syscall"
"testing"
"github.com/stretchr/testify/require"
)
func dupHandle(t *testing.T, f *os.File) uintptr {
t.Helper()
src := syscall.Handle(f.Fd())
var dup syscall.Handle
proc, err := syscall.GetCurrentProcess()
require.NoError(t, err)
err = syscall.DuplicateHandle(
proc,
src,
proc,
&dup,
0,
false,
syscall.DUPLICATE_SAME_ACCESS,
)
require.NoError(t, err)
return uintptr(dup)
}
@@ -1,4 +1,4 @@
//go:build windows || linux
//go:build windows
package cli_test
@@ -67,35 +67,22 @@ func TestVPNDaemonRun(t *testing.T) {
r1, w1, err := os.Pipe()
require.NoError(t, err)
defer r1.Close()
defer w1.Close()
r2, w2, err := os.Pipe()
require.NoError(t, err)
defer r2.Close()
// The daemon closes the handles passed via NewBidirectionalPipe. Since our
// CLI tests run in-process, pass duplicated handles so we can close the
// originals without risking a double-close on FD reuse.
rpcReadHandle := dupHandle(t, r1)
rpcWriteHandle := dupHandle(t, w2)
require.NoError(t, r1.Close())
require.NoError(t, w2.Close())
defer w2.Close()
ctx := testutil.Context(t, testutil.WaitLong)
inv, _ := clitest.New(t,
"vpn-daemon",
"run",
"--rpc-read-handle",
fmt.Sprint(rpcReadHandle),
"--rpc-write-handle",
fmt.Sprint(rpcWriteHandle),
)
inv, _ := clitest.New(t, "vpn-daemon", "run", "--rpc-read-handle", fmt.Sprint(r1.Fd()), "--rpc-write-handle", fmt.Sprint(w2.Fd()))
waiter := clitest.StartWithWaiter(t, inv.WithContext(ctx))
// Send an invalid header, including a newline delimiter, so the handshake
// fails without requiring context cancellation.
_, err = w1.Write([]byte("garbage\n"))
// Send garbage which should cause the handshake to fail and the daemon
// to exit.
_, err = w1.Write([]byte("garbage"))
require.NoError(t, err)
waiter.Cancel()
err = waiter.Wait()
require.ErrorContains(t, err, "handshake failed")
})
+3 -2
@@ -166,8 +166,9 @@ func (r *RootCmd) vscodeSSH() *serpent.Command {
}
agentConn, err := workspacesdk.New(client).
DialAgent(ctx, workspaceAgent.ID, &workspacesdk.DialAgentOptions{
Logger: logger,
BlockEndpoints: r.disableDirect,
Logger: logger,
BlockEndpoints: r.disableDirect,
ShortDescription: "VSCode SSH",
})
if err != nil {
return xerrors.Errorf("dial workspace agent: %w", err)
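Any other dialer can attach the same label; a hedged sketch (assuming `DialAgentOptions` carries `ShortDescription` exactly as in the hunk above, and an existing `client`, `logger`, and `agentID`):

```go
conn, err := workspacesdk.New(client).DialAgent(ctx, agentID,
	&workspacesdk.DialAgentOptions{
		Logger:           logger,
		ShortDescription: "port-forward", // free-form, human-readable connection label
	})
if err != nil {
	return xerrors.Errorf("dial workspace agent: %w", err)
}
defer conn.Close()
```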
-2
@@ -179,8 +179,6 @@ func New(opts Options, workspace database.Workspace) *API {
Database: opts.Database,
Log: opts.Log,
PublishWorkspaceUpdateFn: api.publishWorkspaceUpdate,
Clock: opts.Clock,
NotificationsEnqueuer: opts.NotificationsEnqueuer,
}
api.MetadataAPI = &MetadataAPI{
-240
@@ -2,10 +2,6 @@ package agentapi
import (
"context"
"database/sql"
"fmt"
"net/http"
"time"
"github.com/google/uuid"
"golang.org/x/xerrors"
@@ -13,14 +9,7 @@ import (
"cdr.dev/slog/v3"
agentproto "github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/notifications"
strutil "github.com/coder/coder/v2/coderd/util/strings"
"github.com/coder/coder/v2/coderd/workspacestats"
"github.com/coder/coder/v2/coderd/wspubsub"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/quartz"
)
type AppsAPI struct {
@@ -28,8 +17,6 @@ type AppsAPI struct {
Database database.Store
Log slog.Logger
PublishWorkspaceUpdateFn func(context.Context, *database.WorkspaceAgent, wspubsub.WorkspaceEventKind) error
NotificationsEnqueuer notifications.Enqueuer
Clock quartz.Clock
}
func (a *AppsAPI) BatchUpdateAppHealths(ctx context.Context, req *agentproto.BatchUpdateAppHealthRequest) (*agentproto.BatchUpdateAppHealthResponse, error) {
@@ -117,230 +104,3 @@ func (a *AppsAPI) BatchUpdateAppHealths(ctx context.Context, req *agentproto.Bat
}
return &agentproto.BatchUpdateAppHealthResponse{}, nil
}
func (a *AppsAPI) UpdateAppStatus(ctx context.Context, req *agentproto.UpdateAppStatusRequest) (*agentproto.UpdateAppStatusResponse, error) {
if len(req.Message) > 160 {
return nil, codersdk.NewError(http.StatusBadRequest, codersdk.Response{
Message: "Message is too long.",
Detail: "Message must be less than 160 characters.",
Validations: []codersdk.ValidationError{
{Field: "message", Detail: "Message must be less than 160 characters."},
},
})
}
var dbState database.WorkspaceAppStatusState
switch req.State {
case agentproto.UpdateAppStatusRequest_COMPLETE:
dbState = database.WorkspaceAppStatusStateComplete
case agentproto.UpdateAppStatusRequest_FAILURE:
dbState = database.WorkspaceAppStatusStateFailure
case agentproto.UpdateAppStatusRequest_WORKING:
dbState = database.WorkspaceAppStatusStateWorking
case agentproto.UpdateAppStatusRequest_IDLE:
dbState = database.WorkspaceAppStatusStateIdle
default:
return nil, codersdk.NewError(http.StatusBadRequest, codersdk.Response{
Message: "Invalid state provided.",
Detail: fmt.Sprintf("invalid state: %q", req.State),
Validations: []codersdk.ValidationError{
{Field: "state", Detail: "State must be one of: complete, failure, working, idle."},
},
})
}
workspaceAgent, err := a.AgentFn(ctx)
if err != nil {
return nil, err
}
app, err := a.Database.GetWorkspaceAppByAgentIDAndSlug(ctx, database.GetWorkspaceAppByAgentIDAndSlugParams{
AgentID: workspaceAgent.ID,
Slug: req.Slug,
})
if err != nil {
return nil, codersdk.NewError(http.StatusBadRequest, codersdk.Response{
Message: "Failed to get workspace app.",
Detail: fmt.Sprintf("No app found with slug %q", req.Slug),
})
}
workspace, err := a.Database.GetWorkspaceByAgentID(ctx, workspaceAgent.ID)
if err != nil {
return nil, codersdk.NewError(http.StatusBadRequest, codersdk.Response{
Message: "Failed to get workspace.",
Detail: err.Error(),
})
}
// Treat the message as untrusted input.
cleaned := strutil.UISanitize(req.Message)
// Get the latest status for the workspace app to detect no-op updates
// nolint:gocritic // This is a system restricted operation.
latestAppStatus, err := a.Database.GetLatestWorkspaceAppStatusByAppID(dbauthz.AsSystemRestricted(ctx), app.ID)
if err != nil && !xerrors.Is(err, sql.ErrNoRows) {
return nil, codersdk.NewError(http.StatusInternalServerError, codersdk.Response{
Message: "Failed to get latest workspace app status.",
Detail: err.Error(),
})
}
// If no rows found, latestAppStatus will be a zero-value struct (ID == uuid.Nil)
// nolint:gocritic // This is a system restricted operation.
_, err = a.Database.InsertWorkspaceAppStatus(dbauthz.AsSystemRestricted(ctx), database.InsertWorkspaceAppStatusParams{
ID: uuid.New(),
CreatedAt: dbtime.Now(),
WorkspaceID: workspace.ID,
AgentID: workspaceAgent.ID,
AppID: app.ID,
State: dbState,
Message: cleaned,
Uri: sql.NullString{
String: req.Uri,
Valid: req.Uri != "",
},
})
if err != nil {
return nil, codersdk.NewError(http.StatusInternalServerError, codersdk.Response{
Message: "Failed to insert workspace app status.",
Detail: err.Error(),
})
}
if a.PublishWorkspaceUpdateFn != nil {
err = a.PublishWorkspaceUpdateFn(ctx, &workspaceAgent, wspubsub.WorkspaceEventKindAgentAppStatusUpdate)
if err != nil {
return nil, codersdk.NewError(http.StatusInternalServerError, codersdk.Response{
Message: "Failed to publish workspace update.",
Detail: err.Error(),
})
}
}
// Notify on state change to Working/Idle for AI tasks
a.enqueueAITaskStateNotification(ctx, app.ID, latestAppStatus, dbState, workspace, workspaceAgent)
if shouldBump(dbState, latestAppStatus) {
// We pass time.Time{} for nextAutostart since we don't have access to
// TemplateScheduleStore here. The activity bump logic handles this by
// defaulting to the template's activity_bump duration (typically 1 hour).
workspacestats.ActivityBumpWorkspace(ctx, a.Log, a.Database, workspace.ID, time.Time{})
}
// just return a blank response because it doesn't contain any settable fields at present.
return new(agentproto.UpdateAppStatusResponse), nil
}
func shouldBump(dbState database.WorkspaceAppStatusState, latestAppStatus database.WorkspaceAppStatus) bool {
// Bump deadline when agent reports working or transitions away from working.
// This prevents auto-pause during active work and gives users time to interact
// after work completes.
// Bump if reporting working state.
if dbState == database.WorkspaceAppStatusStateWorking {
return true
}
// Bump if transitioning away from working state.
if latestAppStatus.ID != uuid.Nil {
prevState := latestAppStatus.State
if prevState == database.WorkspaceAppStatusStateWorking {
return true
}
}
return false
}
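As an illustrative sketch (not part of the diff), the rule resolves a few representative transitions like so, matching the test table further down:

```go
working := database.WorkspaceAppStatus{ID: uuid.UUID{1}, State: database.WorkspaceAppStatusStateWorking}
complete := database.WorkspaceAppStatus{ID: uuid.UUID{1}, State: database.WorkspaceAppStatusStateComplete}
none := database.WorkspaceAppStatus{} // zero value: no previous status recorded

shouldBump(database.WorkspaceAppStatusStateWorking, none)  // true: reporting working always bumps
shouldBump(database.WorkspaceAppStatusStateIdle, working)  // true: transitioning away from working
shouldBump(database.WorkspaceAppStatusStateIdle, complete) // false: neither side is working
shouldBump(database.WorkspaceAppStatusStateIdle, none)     // false: first status and not working
```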
// enqueueAITaskStateNotification enqueues a notification when an AI task's app
// transitions to Working or Idle.
// No-op if:
// - the workspace agent app isn't configured as an AI task,
// - the new state equals the latest persisted state,
// - the workspace agent is not ready (still starting up).
func (a *AppsAPI) enqueueAITaskStateNotification(
ctx context.Context,
appID uuid.UUID,
latestAppStatus database.WorkspaceAppStatus,
newAppStatus database.WorkspaceAppStatusState,
workspace database.Workspace,
agent database.WorkspaceAgent,
) {
var notificationTemplate uuid.UUID
switch newAppStatus {
case database.WorkspaceAppStatusStateWorking:
notificationTemplate = notifications.TemplateTaskWorking
case database.WorkspaceAppStatusStateIdle:
notificationTemplate = notifications.TemplateTaskIdle
case database.WorkspaceAppStatusStateComplete:
notificationTemplate = notifications.TemplateTaskCompleted
case database.WorkspaceAppStatusStateFailure:
notificationTemplate = notifications.TemplateTaskFailed
default:
// Not a notifiable state, do nothing
return
}
if !workspace.TaskID.Valid {
// Workspace has no task ID, do nothing.
return
}
// Only send notifications when the agent is ready. We want to skip
// any state transitions that occur whilst the workspace is starting
// up as it doesn't make sense to receive them.
if agent.LifecycleState != database.WorkspaceAgentLifecycleStateReady {
a.Log.Debug(ctx, "skipping AI task notification because agent is not ready",
slog.F("agent_id", agent.ID),
slog.F("lifecycle_state", agent.LifecycleState),
slog.F("new_app_status", newAppStatus),
)
return
}
task, err := a.Database.GetTaskByID(ctx, workspace.TaskID.UUID)
if err != nil {
a.Log.Warn(ctx, "failed to get task", slog.Error(err))
return
}
if !task.WorkspaceAppID.Valid || task.WorkspaceAppID.UUID != appID {
// Non-task app, do nothing.
return
}
// Skip if the latest persisted state equals the new state (no new transition)
// Note: uuid.Nil check is valid here. If no previous status exists,
// GetLatestWorkspaceAppStatusByAppID returns sql.ErrNoRows and we get a zero-value struct.
if latestAppStatus.ID != uuid.Nil && latestAppStatus.State == newAppStatus {
return
}
// Skip the initial "Working" notification when the task first starts.
// This is obvious to the user since they just created the task.
// We still notify on the first "Idle" status and all subsequent transitions.
if latestAppStatus.ID == uuid.Nil && newAppStatus == database.WorkspaceAppStatusStateWorking {
return
}
if _, err := a.NotificationsEnqueuer.EnqueueWithData(
// nolint:gocritic // Need notifier actor to enqueue notifications
dbauthz.AsNotifier(ctx),
workspace.OwnerID,
notificationTemplate,
map[string]string{
"task": task.Name,
"workspace": workspace.Name,
},
map[string]any{
// Use a 1-minute bucketed timestamp to bypass per-day dedupe,
// allowing identical content to resend within the same day
// (but not more than once every 10s).
"dedupe_bypass_ts": a.Clock.Now().UTC().Truncate(time.Minute),
},
"api-workspace-agent-app-status",
// Associate this notification with related entities
workspace.ID, workspace.OwnerID, workspace.OrganizationID, appID,
); err != nil {
a.Log.Warn(ctx, "failed to notify of task state", slog.Error(err))
return
}
}
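The minute-granularity truncation above is what lets identical task notifications re-send on the same day: each new minute yields a distinct dedupe key. A standalone sketch of that bucketing:

```go
t1 := time.Date(2026, 2, 9, 12, 34, 5, 0, time.UTC)
t2 := time.Date(2026, 2, 9, 12, 34, 55, 0, time.UTC)
t3 := time.Date(2026, 2, 9, 12, 35, 1, 0, time.UTC)
fmt.Println(t1.Truncate(time.Minute).Equal(t2.Truncate(time.Minute))) // true: same bucket, deduped
fmt.Println(t2.Truncate(time.Minute).Equal(t3.Truncate(time.Minute))) // false: new bucket, re-sent
```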
-115
@@ -1,115 +0,0 @@
package agentapi
import (
"testing"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/util/ptr"
)
func TestShouldBump(t *testing.T) {
t.Parallel()
tests := []struct {
name string
prevState *database.WorkspaceAppStatusState // nil means no previous state
newState database.WorkspaceAppStatusState
shouldBump bool
}{
{
name: "FirstStatusBumps",
prevState: nil,
newState: database.WorkspaceAppStatusStateWorking,
shouldBump: true,
},
{
name: "WorkingToIdleBumps",
prevState: ptr.Ref(database.WorkspaceAppStatusStateWorking),
newState: database.WorkspaceAppStatusStateIdle,
shouldBump: true,
},
{
name: "WorkingToCompleteBumps",
prevState: ptr.Ref(database.WorkspaceAppStatusStateWorking),
newState: database.WorkspaceAppStatusStateComplete,
shouldBump: true,
},
{
name: "CompleteToIdleNoBump",
prevState: ptr.Ref(database.WorkspaceAppStatusStateComplete),
newState: database.WorkspaceAppStatusStateIdle,
shouldBump: false,
},
{
name: "CompleteToCompleteNoBump",
prevState: ptr.Ref(database.WorkspaceAppStatusStateComplete),
newState: database.WorkspaceAppStatusStateComplete,
shouldBump: false,
},
{
name: "FailureToIdleNoBump",
prevState: ptr.Ref(database.WorkspaceAppStatusStateFailure),
newState: database.WorkspaceAppStatusStateIdle,
shouldBump: false,
},
{
name: "FailureToFailureNoBump",
prevState: ptr.Ref(database.WorkspaceAppStatusStateFailure),
newState: database.WorkspaceAppStatusStateFailure,
shouldBump: false,
},
{
name: "CompleteToWorkingBumps",
prevState: ptr.Ref(database.WorkspaceAppStatusStateComplete),
newState: database.WorkspaceAppStatusStateWorking,
shouldBump: true,
},
{
name: "FailureToCompleteNoBump",
prevState: ptr.Ref(database.WorkspaceAppStatusStateFailure),
newState: database.WorkspaceAppStatusStateComplete,
shouldBump: false,
},
{
name: "WorkingToFailureBumps",
prevState: ptr.Ref(database.WorkspaceAppStatusStateWorking),
newState: database.WorkspaceAppStatusStateFailure,
shouldBump: true,
},
{
name: "IdleToIdleNoBump",
prevState: ptr.Ref(database.WorkspaceAppStatusStateIdle),
newState: database.WorkspaceAppStatusStateIdle,
shouldBump: false,
},
{
name: "IdleToWorkingBumps",
prevState: ptr.Ref(database.WorkspaceAppStatusStateIdle),
newState: database.WorkspaceAppStatusStateWorking,
shouldBump: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
var prevAppStatus database.WorkspaceAppStatus
// If there's a previous state, report it first.
if tt.prevState != nil {
prevAppStatus.ID = uuid.UUID{1}
prevAppStatus.State = *tt.prevState
}
didBump := shouldBump(tt.newState, prevAppStatus)
if tt.shouldBump {
require.True(t, didBump, "wanted deadline to bump but it didn't")
} else {
require.False(t, didBump, "wanted deadline not to bump but it did")
}
})
}
}
-188
@@ -2,13 +2,9 @@ package agentapi_test
import (
"context"
"database/sql"
"net/http"
"strings"
"testing"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/mock/gomock"
@@ -16,12 +12,8 @@ import (
"github.com/coder/coder/v2/coderd/agentapi"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbmock"
"github.com/coder/coder/v2/coderd/notifications"
"github.com/coder/coder/v2/coderd/notifications/notificationstest"
"github.com/coder/coder/v2/coderd/wspubsub"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
"github.com/coder/quartz"
)
func TestBatchUpdateAppHealths(t *testing.T) {
@@ -261,183 +253,3 @@ func TestBatchUpdateAppHealths(t *testing.T) {
require.Nil(t, resp)
})
}
func TestWorkspaceAgentAppStatus(t *testing.T) {
t.Parallel()
t.Run("Success", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
ctrl := gomock.NewController(t)
mDB := dbmock.NewMockStore(ctrl)
fEnq := &notificationstest.FakeEnqueuer{}
mClock := quartz.NewMock(t)
agent := database.WorkspaceAgent{
ID: uuid.UUID{2},
LifecycleState: database.WorkspaceAgentLifecycleStateReady,
}
workspaceUpdates := make(chan wspubsub.WorkspaceEventKind, 100)
api := &agentapi.AppsAPI{
AgentFn: func(context.Context) (database.WorkspaceAgent, error) {
return agent, nil
},
Database: mDB,
Log: testutil.Logger(t),
PublishWorkspaceUpdateFn: func(_ context.Context, agnt *database.WorkspaceAgent, kind wspubsub.WorkspaceEventKind) error {
assert.Equal(t, *agnt, agent)
testutil.AssertSend(ctx, t, workspaceUpdates, kind)
return nil
},
NotificationsEnqueuer: fEnq,
Clock: mClock,
}
app := database.WorkspaceApp{
ID: uuid.UUID{8},
}
mDB.EXPECT().GetWorkspaceAppByAgentIDAndSlug(gomock.Any(), database.GetWorkspaceAppByAgentIDAndSlugParams{
AgentID: agent.ID,
Slug: "vscode",
}).Times(1).Return(app, nil)
task := database.Task{
ID: uuid.UUID{7},
WorkspaceAppID: uuid.NullUUID{
Valid: true,
UUID: app.ID,
},
}
mDB.EXPECT().GetTaskByID(gomock.Any(), task.ID).Times(1).Return(task, nil)
workspace := database.Workspace{
ID: uuid.UUID{9},
TaskID: uuid.NullUUID{
Valid: true,
UUID: task.ID,
},
}
mDB.EXPECT().GetWorkspaceByAgentID(gomock.Any(), agent.ID).Times(1).Return(workspace, nil)
appStatus := database.WorkspaceAppStatus{
ID: uuid.UUID{6},
}
mDB.EXPECT().GetLatestWorkspaceAppStatusByAppID(gomock.Any(), app.ID).Times(1).Return(appStatus, nil)
mDB.EXPECT().InsertWorkspaceAppStatus(
gomock.Any(),
gomock.Cond(func(params database.InsertWorkspaceAppStatusParams) bool {
if params.AgentID == agent.ID && params.AppID == app.ID {
assert.Equal(t, "testing", params.Message)
assert.Equal(t, database.WorkspaceAppStatusStateComplete, params.State)
assert.True(t, params.Uri.Valid)
assert.Equal(t, "https://example.com", params.Uri.String)
return true
}
return false
})).Times(1).Return(database.WorkspaceAppStatus{}, nil)
_, err := api.UpdateAppStatus(ctx, &agentproto.UpdateAppStatusRequest{
Slug: "vscode",
Message: "testing",
Uri: "https://example.com",
State: agentproto.UpdateAppStatusRequest_COMPLETE,
})
require.NoError(t, err)
kind := testutil.RequireReceive(ctx, t, workspaceUpdates)
require.Equal(t, wspubsub.WorkspaceEventKindAgentAppStatusUpdate, kind)
sent := fEnq.Sent(notificationstest.WithTemplateID(notifications.TemplateTaskCompleted))
require.Len(t, sent, 1)
})
t.Run("FailUnknownApp", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
ctrl := gomock.NewController(t)
mDB := dbmock.NewMockStore(ctrl)
agent := database.WorkspaceAgent{
ID: uuid.UUID{2},
LifecycleState: database.WorkspaceAgentLifecycleStateReady,
}
mDB.EXPECT().GetWorkspaceAppByAgentIDAndSlug(gomock.Any(), gomock.Any()).
Times(1).
Return(database.WorkspaceApp{}, sql.ErrNoRows)
api := &agentapi.AppsAPI{
AgentFn: func(context.Context) (database.WorkspaceAgent, error) {
return agent, nil
},
Database: mDB,
Log: testutil.Logger(t),
}
_, err := api.UpdateAppStatus(ctx, &agentproto.UpdateAppStatusRequest{
Slug: "unknown",
Message: "testing",
Uri: "https://example.com",
State: agentproto.UpdateAppStatusRequest_COMPLETE,
})
require.ErrorContains(t, err, "No app found with slug")
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
require.Equal(t, http.StatusBadRequest, sdkErr.StatusCode())
})
t.Run("FailUnknownState", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
ctrl := gomock.NewController(t)
mDB := dbmock.NewMockStore(ctrl)
agent := database.WorkspaceAgent{
ID: uuid.UUID{2},
LifecycleState: database.WorkspaceAgentLifecycleStateReady,
}
api := &agentapi.AppsAPI{
AgentFn: func(context.Context) (database.WorkspaceAgent, error) {
return agent, nil
},
Database: mDB,
Log: testutil.Logger(t),
}
_, err := api.UpdateAppStatus(ctx, &agentproto.UpdateAppStatusRequest{
Slug: "vscode",
Message: "testing",
Uri: "https://example.com",
State: 77,
})
require.ErrorContains(t, err, "Invalid state")
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
require.Equal(t, http.StatusBadRequest, sdkErr.StatusCode())
})
t.Run("FailTooLong", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
ctrl := gomock.NewController(t)
mDB := dbmock.NewMockStore(ctrl)
agent := database.WorkspaceAgent{
ID: uuid.UUID{2},
LifecycleState: database.WorkspaceAgentLifecycleStateReady,
}
api := &agentapi.AppsAPI{
AgentFn: func(context.Context) (database.WorkspaceAgent, error) {
return agent, nil
},
Database: mDB,
Log: testutil.Logger(t),
}
_, err := api.UpdateAppStatus(ctx, &agentproto.UpdateAppStatusRequest{
Slug: "vscode",
Message: strings.Repeat("a", 161),
Uri: "https://example.com",
State: agentproto.UpdateAppStatusRequest_COMPLETE,
})
require.ErrorContains(t, err, "Message is too long")
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
require.Equal(t, http.StatusBadRequest, sdkErr.StatusCode())
})
}
+1 -1
View File
@@ -128,7 +128,7 @@ func (a *SubAgentAPI) CreateSubAgent(ctx context.Context, req *agentproto.Create
Name: agentName,
ResourceID: parentAgent.ResourceID,
AuthToken: uuid.New(),
AuthInstanceID: sql.NullString{},
AuthInstanceID: parentAgent.AuthInstanceID,
Architecture: req.Architecture,
EnvironmentVariables: pqtype.NullRawMessage{},
OperatingSystem: req.OperatingSystem,
+1 -46
@@ -175,52 +175,6 @@ func TestSubAgentAPI(t *testing.T) {
}
})
// Context: https://github.com/coder/coder/pull/22196
t.Run("CreateSubAgentDoesNotInheritAuthInstanceID", func(t *testing.T) {
t.Parallel()
var (
log = testutil.Logger(t)
clock = quartz.NewMock(t)
db, org = newDatabaseWithOrg(t)
user, agent = newUserWithWorkspaceAgent(t, db, org)
)
// Given: The parent agent has an AuthInstanceID set
ctx := testutil.Context(t, testutil.WaitShort)
parentAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), agent.ID)
require.NoError(t, err)
require.True(t, parentAgent.AuthInstanceID.Valid, "parent agent should have an AuthInstanceID")
require.NotEmpty(t, parentAgent.AuthInstanceID.String)
api := newAgentAPI(t, log, db, clock, user, org, agent)
// When: We create a sub agent
createResp, err := api.CreateSubAgent(ctx, &proto.CreateSubAgentRequest{
Name: "sub-agent",
Directory: "/workspaces/test",
Architecture: "amd64",
OperatingSystem: "linux",
})
require.NoError(t, err)
subAgentID, err := uuid.FromBytes(createResp.Agent.Id)
require.NoError(t, err)
// Then: The sub-agent must NOT re-use the parent's AuthInstanceID.
subAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), subAgentID)
require.NoError(t, err)
assert.False(t, subAgent.AuthInstanceID.Valid, "sub-agent should not have an AuthInstanceID")
assert.Empty(t, subAgent.AuthInstanceID.String, "sub-agent AuthInstanceID string should be empty")
// Double-check: looking up by the parent's instance ID must
// still return the parent, not the sub-agent.
lookedUp, err := db.GetWorkspaceAgentByInstanceID(dbauthz.AsSystemRestricted(ctx), parentAgent.AuthInstanceID.String)
require.NoError(t, err)
assert.Equal(t, parentAgent.ID, lookedUp.ID, "instance ID lookup should still return the parent agent")
})
type expectedAppError struct {
index int32
field string
@@ -1366,6 +1320,7 @@ func TestSubAgentAPI(t *testing.T) {
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
+2 -182
@@ -21,12 +21,10 @@ import (
agentapisdk "github.com/coder/agentapi-sdk-go"
"github.com/coder/coder/v2/coderd/audit"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpapi/httperror"
"github.com/coder/coder/v2/coderd/httpmw"
"github.com/coder/coder/v2/coderd/notifications"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/searchquery"
@@ -466,6 +464,7 @@ func (api *API) convertTasks(ctx context.Context, requesterID uuid.UUID, dbTasks
apiWorkspaces, err := convertWorkspaces(
ctx,
api.Experiments,
api.Logger,
requesterID,
workspaces,
@@ -545,6 +544,7 @@ func (api *API) taskGet(rw http.ResponseWriter, r *http.Request) {
ws, err := convertWorkspace(
ctx,
api.Experiments,
api.Logger,
apiKey.UserID,
workspace,
@@ -1244,183 +1244,3 @@ func (api *API) postWorkspaceAgentTaskLogSnapshot(rw http.ResponseWriter, r *htt
rw.WriteHeader(http.StatusNoContent)
}
// @Summary Pause task
// @ID pause-task
// @Security CoderSessionToken
// @Accept json
// @Tags Tasks
// @Param user path string true "Username, user ID, or 'me' for the authenticated user"
// @Param task path string true "Task ID" format(uuid)
// @Success 202 {object} codersdk.PauseTaskResponse
// @Router /tasks/{user}/{task}/pause [post]
func (api *API) pauseTask(rw http.ResponseWriter, r *http.Request) {
var (
ctx = r.Context()
apiKey = httpmw.APIKey(r)
task = httpmw.TaskParam(r)
)
if !task.WorkspaceID.Valid {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Task does not have a workspace.",
})
return
}
workspace, err := api.Database.GetWorkspaceByID(ctx, task.WorkspaceID.UUID)
if err != nil {
if httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return
}
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task workspace.",
Detail: err.Error(),
})
return
}
buildReq := codersdk.CreateWorkspaceBuildRequest{
Transition: codersdk.WorkspaceTransitionStop,
Reason: codersdk.CreateWorkspaceBuildReasonTaskManualPause,
}
build, err := api.postWorkspaceBuildsInternal(
ctx,
apiKey,
workspace,
buildReq,
func(action policy.Action, object rbac.Objecter) bool {
return api.Authorize(r, action, object)
},
audit.WorkspaceBuildBaggageFromRequest(r),
)
if err != nil {
httperror.WriteWorkspaceBuildError(ctx, rw, err)
return
}
if _, err := api.NotificationsEnqueuer.Enqueue(
// nolint:gocritic // Need notifier actor to enqueue notifications.
dbauthz.AsNotifier(ctx),
workspace.OwnerID,
notifications.TemplateTaskPaused,
map[string]string{
"task": task.Name,
"task_id": task.ID.String(),
"workspace": workspace.Name,
"pause_reason": "manual",
},
"api-task-pause",
workspace.ID, workspace.OwnerID, workspace.OrganizationID,
); err != nil {
api.Logger.Warn(ctx, "failed to notify of task paused", slog.Error(err), slog.F("task_id", task.ID), slog.F("workspace_id", workspace.ID))
}
httpapi.Write(ctx, rw, http.StatusAccepted, codersdk.PauseTaskResponse{
WorkspaceBuild: &build,
})
}
// @Summary Resume task
// @ID resume-task
// @Security CoderSessionToken
// @Accept json
// @Tags Tasks
// @Param user path string true "Username, user ID, or 'me' for the authenticated user"
// @Param task path string true "Task ID" format(uuid)
// @Success 202 {object} codersdk.ResumeTaskResponse
// @Router /tasks/{user}/{task}/resume [post]
func (api *API) resumeTask(rw http.ResponseWriter, r *http.Request) {
var (
ctx = r.Context()
apiKey = httpmw.APIKey(r)
task = httpmw.TaskParam(r)
)
if !task.WorkspaceID.Valid {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Task does not have a workspace.",
})
return
}
workspace, err := api.Database.GetWorkspaceByID(ctx, task.WorkspaceID.UUID)
if err != nil {
if httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return
}
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task workspace.",
Detail: err.Error(),
})
return
}
latestBuild, err := api.Database.GetLatestWorkspaceBuildByWorkspaceID(ctx, workspace.ID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task workspace build.",
Detail: err.Error(),
})
return
}
job, err := api.Database.GetProvisionerJobByID(ctx, latestBuild.JobID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task workspace build job.",
Detail: err.Error(),
})
return
}
workspaceStatus := codersdk.ConvertWorkspaceStatus(
codersdk.ProvisionerJobStatus(job.JobStatus),
codersdk.WorkspaceTransition(latestBuild.Transition),
)
if workspaceStatus == codersdk.WorkspaceStatusRunning {
httpapi.Write(ctx, rw, http.StatusConflict, codersdk.Response{
Message: "Task workspace is already running.",
Detail: fmt.Sprintf("Workspace status is %q.", workspaceStatus),
})
return
}
buildReq := codersdk.CreateWorkspaceBuildRequest{
Transition: codersdk.WorkspaceTransitionStart,
Reason: codersdk.CreateWorkspaceBuildReasonTaskResume,
}
build, err := api.postWorkspaceBuildsInternal(
ctx,
apiKey,
workspace,
buildReq,
func(action policy.Action, object rbac.Objecter) bool {
return api.Authorize(r, action, object)
},
audit.WorkspaceBuildBaggageFromRequest(r),
)
if err != nil {
httperror.WriteWorkspaceBuildError(ctx, rw, err)
return
}
if _, err := api.NotificationsEnqueuer.Enqueue(
// nolint:gocritic // Need notifier actor to enqueue notifications.
dbauthz.AsNotifier(ctx),
workspace.OwnerID,
notifications.TemplateTaskResumed,
map[string]string{
"task": task.Name,
"task_id": task.ID.String(),
"workspace": workspace.Name,
},
"api-task-resume",
workspace.ID, workspace.OwnerID, workspace.OrganizationID,
); err != nil {
api.Logger.Warn(ctx, "failed to notify of task resumed", slog.Error(err), slog.F("task_id", task.ID), slog.F("workspace_id", workspace.ID))
}
httpapi.Write(ctx, rw, http.StatusAccepted, codersdk.ResumeTaskResponse{
WorkspaceBuild: &build,
})
}
+37 -804
File diff suppressed because it is too large.
+64 -255
@@ -3745,69 +3745,6 @@ const docTemplate = `{
}
}
},
"/organizations/{organization}/members/{user}/workspaces/available-users": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": [
"application/json"
],
"tags": [
"Workspaces"
],
"summary": "Get users available for workspace creation",
"operationId": "get-users-available-for-workspace-creation",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Organization ID",
"name": "organization",
"in": "path",
"required": true
},
{
"type": "string",
"description": "User ID, name, or me",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"description": "Search query",
"name": "q",
"in": "query"
},
{
"type": "integer",
"description": "Limit results",
"name": "limit",
"in": "query"
},
{
"type": "integer",
"description": "Offset for pagination",
"name": "offset",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.MinimalUser"
}
}
}
}
}
},
"/organizations/{organization}/paginated-members": {
"get": {
"security": [
@@ -5887,90 +5824,6 @@ const docTemplate = `{
}
}
},
"/tasks/{user}/{task}/pause": {
"post": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": [
"application/json"
],
"tags": [
"Tasks"
],
"summary": "Pause task",
"operationId": "pause-task",
"parameters": [
{
"type": "string",
"description": "Username, user ID, or 'me' for the authenticated user",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "uuid",
"description": "Task ID",
"name": "task",
"in": "path",
"required": true
}
],
"responses": {
"202": {
"description": "Accepted",
"schema": {
"$ref": "#/definitions/codersdk.PauseTaskResponse"
}
}
}
}
},
"/tasks/{user}/{task}/resume": {
"post": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": [
"application/json"
],
"tags": [
"Tasks"
],
"summary": "Resume task",
"operationId": "resume-task",
"parameters": [
{
"type": "string",
"description": "Username, user ID, or 'me' for the authenticated user",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "uuid",
"description": "Task ID",
"name": "task",
"in": "path",
"required": true
}
],
"responses": {
"202": {
"description": "Accepted",
"schema": {
"$ref": "#/definitions/codersdk.ResumeTaskResponse"
}
}
}
}
},
"/tasks/{user}/{task}/send": {
"post": {
"security": [
@@ -8238,12 +8091,6 @@ const docTemplate = `{
"name": "user",
"in": "path",
"required": true
},
{
"type": "boolean",
"description": "Include expired tokens in the list",
"name": "include_expired",
"in": "query"
}
],
"responses": {
@@ -8455,54 +8302,6 @@ const docTemplate = `{
}
}
},
"/users/{user}/keys/{keyid}/expire": {
"put": {
"security": [
{
"CoderSessionToken": []
}
],
"tags": [
"Users"
],
"summary": "Expire API key",
"operationId": "expire-api-key",
"parameters": [
{
"type": "string",
"description": "User ID, name, or me",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "string",
"description": "Key ID",
"name": "keyid",
"in": "path",
"required": true
}
],
"responses": {
"204": {
"description": "No Content"
},
"404": {
"description": "Not Found",
"schema": {
"$ref": "#/definitions/codersdk.Response"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"$ref": "#/definitions/codersdk.Response"
}
}
}
}
},
"/users/{user}/login-type": {
"get": {
"security": [
@@ -9551,7 +9350,6 @@ const docTemplate = `{
],
"summary": "Patch workspace agent app status",
"operationId": "patch-workspace-agent-app-status",
"deprecated": true,
"parameters": [
{
"description": "app status",
@@ -12421,9 +12219,6 @@ const docTemplate = `{
"api_key_id": {
"type": "string"
},
"client": {
"type": "string"
},
"ended_at": {
"type": "string",
"format": "date-time"
@@ -13623,10 +13418,7 @@ const docTemplate = `{
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection",
"task_auto_pause",
"task_manual_pause",
"task_resume"
"jetbrains_connection"
],
"x-enum-varnames": [
"BuildReasonInitiator",
@@ -13637,10 +13429,7 @@ const docTemplate = `{
"BuildReasonCLI",
"BuildReasonSSHConnection",
"BuildReasonVSCodeConnection",
"BuildReasonJetbrainsConnection",
"BuildReasonTaskAutoPause",
"BuildReasonTaskManualPause",
"BuildReasonTaskResume"
"BuildReasonJetbrainsConnection"
]
},
"codersdk.CORSBehavior": {
@@ -14313,18 +14102,14 @@ const docTemplate = `{
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection",
"task_manual_pause",
"task_resume"
"jetbrains_connection"
],
"x-enum-varnames": [
"CreateWorkspaceBuildReasonDashboard",
"CreateWorkspaceBuildReasonCLI",
"CreateWorkspaceBuildReasonSSHConnection",
"CreateWorkspaceBuildReasonVSCodeConnection",
"CreateWorkspaceBuildReasonJetbrainsConnection",
"CreateWorkspaceBuildReasonTaskManualPause",
"CreateWorkspaceBuildReasonTaskResume"
"CreateWorkspaceBuildReasonJetbrainsConnection"
]
},
"codersdk.CreateWorkspaceBuildRequest": {
@@ -14358,8 +14143,7 @@ const docTemplate = `{
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection",
"task_manual_pause"
"jetbrains_connection"
],
"allOf": [
{
@@ -15104,7 +14888,8 @@ const docTemplate = `{
"workspace-usage",
"web-push",
"oauth2",
"mcp-server-http"
"mcp-server-http",
"workspace-sharing"
],
"x-enum-comments": {
"ExperimentAutoFillParameters": "This should not be taken out of experiments until we have redesigned the feature.",
@@ -15113,6 +14898,7 @@ const docTemplate = `{
"ExperimentNotifications": "Sends notifications via SMTP and webhooks following certain events.",
"ExperimentOAuth2": "Enables OAuth2 provider functionality.",
"ExperimentWebPush": "Enables web push notifications through the browser.",
"ExperimentWorkspaceSharing": "Enables updating workspace ACLs for sharing with users and groups.",
"ExperimentWorkspaceUsage": "Enables the new workspace usage tracking."
},
"x-enum-descriptions": [
@@ -15122,7 +14908,8 @@ const docTemplate = `{
"Enables the new workspace usage tracking.",
"Enables web push notifications through the browser.",
"Enables OAuth2 provider functionality.",
"Enables the MCP HTTP server functionality."
"Enables the MCP HTTP server functionality.",
"Enables updating workspace ACLs for sharing with users and groups."
],
"x-enum-varnames": [
"ExperimentExample",
@@ -15131,7 +14918,8 @@ const docTemplate = `{
"ExperimentWorkspaceUsage",
"ExperimentWebPush",
"ExperimentOAuth2",
"ExperimentMCPServerHTTP"
"ExperimentMCPServerHTTP",
"ExperimentWorkspaceSharing"
]
},
"codersdk.ExternalAPIKeyScopes": {
@@ -15371,6 +15159,10 @@ const docTemplate = `{
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -15574,9 +15366,6 @@ const docTemplate = `{
"codersdk.HTTPCookieConfig": {
"type": "object",
"properties": {
"host_prefix": {
"type": "boolean"
},
"same_site": {
"type": "string"
},
@@ -16787,14 +16576,6 @@ const docTemplate = `{
"organization_mapping": {
"type": "object"
},
"redirect_url": {
"description": "RedirectURL is optional, defaulting to 'ACCESS_URL'. Only useful in niche\nsituations where the OIDC callback domain is different from the ACCESS_URL\ndomain.",
"allOf": [
{
"$ref": "#/definitions/serpent.URL"
}
]
},
"scopes": {
"type": "array",
"items": {
@@ -17233,14 +17014,6 @@ const docTemplate = `{
}
}
},
"codersdk.PauseTaskResponse": {
"type": "object",
"properties": {
"workspace_build": {
"$ref": "#/definitions/codersdk.WorkspaceBuild"
}
}
},
"codersdk.Permission": {
"type": "object",
"properties": {
@@ -18409,14 +18182,6 @@ const docTemplate = `{
}
}
},
"codersdk.ResumeTaskResponse": {
"type": "object",
"properties": {
"workspace_build": {
"$ref": "#/definitions/codersdk.WorkspaceBuild"
}
}
},
"codersdk.RetentionConfig": {
"type": "object",
"properties": {
@@ -19049,9 +18814,6 @@ const docTemplate = `{
"default_ttl_ms": {
"type": "integer"
},
"deleted": {
"type": "boolean"
},
"deprecated": {
"type": "boolean"
},
@@ -20838,6 +20600,12 @@ const docTemplate = `{
"connection_timeout_seconds": {
"type": "integer"
},
"connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceConnection"
}
},
"created_at": {
"type": "string",
"format": "date-time"
@@ -21747,6 +21515,32 @@ const docTemplate = `{
}
}
},
"codersdk.WorkspaceConnection": {
"type": "object",
"properties": {
"connected_at": {
"type": "string",
"format": "date-time"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"ip": {
"type": "string"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"type": {
"$ref": "#/definitions/codersdk.ConnectionType"
}
}
},
"codersdk.WorkspaceConnectionLatencyMS": {
"type": "object",
"properties": {
@@ -21760,6 +21554,21 @@ const docTemplate = `{
}
}
},
"codersdk.WorkspaceConnectionStatus": {
"type": "string",
"enum": [
"ongoing",
"control_lost",
"client_disconnected",
"clean_disconnected"
],
"x-enum-varnames": [
"ConnectionStatusOngoing",
"ConnectionStatusControlLost",
"ConnectionStatusClientDisconnected",
"ConnectionStatusCleanDisconnected"
]
},
"codersdk.WorkspaceDeploymentStats": {
"type": "object",
"properties": {
@@ -22878,7 +22687,7 @@ const docTemplate = `{
]
},
"default": {
"description": "Default is parsed into Value if set.\nMust be ` + "`" + `\"\"` + "`" + ` if ` + "`" + `DefaultFn` + "`" + ` != nil",
"description": "Default is parsed into Value if set.",
"type": "string"
},
"description": {
+64 -241
@@ -3296,65 +3296,6 @@
}
}
},
"/organizations/{organization}/members/{user}/workspaces/available-users": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": ["application/json"],
"tags": ["Workspaces"],
"summary": "Get users available for workspace creation",
"operationId": "get-users-available-for-workspace-creation",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Organization ID",
"name": "organization",
"in": "path",
"required": true
},
{
"type": "string",
"description": "User ID, name, or me",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"description": "Search query",
"name": "q",
"in": "query"
},
{
"type": "integer",
"description": "Limit results",
"name": "limit",
"in": "query"
},
{
"type": "integer",
"description": "Offset for pagination",
"name": "offset",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.MinimalUser"
}
}
}
}
}
},
"/organizations/{organization}/paginated-members": {
"get": {
"security": [
@@ -5206,82 +5147,6 @@
}
}
},
"/tasks/{user}/{task}/pause": {
"post": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": ["application/json"],
"tags": ["Tasks"],
"summary": "Pause task",
"operationId": "pause-task",
"parameters": [
{
"type": "string",
"description": "Username, user ID, or 'me' for the authenticated user",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "uuid",
"description": "Task ID",
"name": "task",
"in": "path",
"required": true
}
],
"responses": {
"202": {
"description": "Accepted",
"schema": {
"$ref": "#/definitions/codersdk.PauseTaskResponse"
}
}
}
}
},
"/tasks/{user}/{task}/resume": {
"post": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": ["application/json"],
"tags": ["Tasks"],
"summary": "Resume task",
"operationId": "resume-task",
"parameters": [
{
"type": "string",
"description": "Username, user ID, or 'me' for the authenticated user",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "uuid",
"description": "Task ID",
"name": "task",
"in": "path",
"required": true
}
],
"responses": {
"202": {
"description": "Accepted",
"schema": {
"$ref": "#/definitions/codersdk.ResumeTaskResponse"
}
}
}
}
},
"/tasks/{user}/{task}/send": {
"post": {
"security": [
@@ -7285,12 +7150,6 @@
"name": "user",
"in": "path",
"required": true
},
{
"type": "boolean",
"description": "Include expired tokens in the list",
"name": "include_expired",
"in": "query"
}
],
"responses": {
@@ -7482,52 +7341,6 @@
}
}
},
"/users/{user}/keys/{keyid}/expire": {
"put": {
"security": [
{
"CoderSessionToken": []
}
],
"tags": ["Users"],
"summary": "Expire API key",
"operationId": "expire-api-key",
"parameters": [
{
"type": "string",
"description": "User ID, name, or me",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "string",
"description": "Key ID",
"name": "keyid",
"in": "path",
"required": true
}
],
"responses": {
"204": {
"description": "No Content"
},
"404": {
"description": "Not Found",
"schema": {
"$ref": "#/definitions/codersdk.Response"
}
},
"500": {
"description": "Internal Server Error",
"schema": {
"$ref": "#/definitions/codersdk.Response"
}
}
}
}
},
"/users/{user}/login-type": {
"get": {
"security": [
@@ -8450,7 +8263,6 @@
"tags": ["Agents"],
"summary": "Patch workspace agent app status",
"operationId": "patch-workspace-agent-app-status",
"deprecated": true,
"parameters": [
{
"description": "app status",
@@ -11037,9 +10849,6 @@
"api_key_id": {
"type": "string"
},
"client": {
"type": "string"
},
"ended_at": {
"type": "string",
"format": "date-time"
@@ -12214,10 +12023,7 @@
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection",
"task_auto_pause",
"task_manual_pause",
"task_resume"
"jetbrains_connection"
],
"x-enum-varnames": [
"BuildReasonInitiator",
@@ -12228,10 +12034,7 @@
"BuildReasonCLI",
"BuildReasonSSHConnection",
"BuildReasonVSCodeConnection",
"BuildReasonJetbrainsConnection",
"BuildReasonTaskAutoPause",
"BuildReasonTaskManualPause",
"BuildReasonTaskResume"
"BuildReasonJetbrainsConnection"
]
},
"codersdk.CORSBehavior": {
@@ -12859,18 +12662,14 @@
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection",
"task_manual_pause",
"task_resume"
"jetbrains_connection"
],
"x-enum-varnames": [
"CreateWorkspaceBuildReasonDashboard",
"CreateWorkspaceBuildReasonCLI",
"CreateWorkspaceBuildReasonSSHConnection",
"CreateWorkspaceBuildReasonVSCodeConnection",
"CreateWorkspaceBuildReasonJetbrainsConnection",
"CreateWorkspaceBuildReasonTaskManualPause",
"CreateWorkspaceBuildReasonTaskResume"
"CreateWorkspaceBuildReasonJetbrainsConnection"
]
},
"codersdk.CreateWorkspaceBuildRequest": {
@@ -12900,8 +12699,7 @@
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection",
"task_manual_pause"
"jetbrains_connection"
],
"allOf": [
{
@@ -13631,7 +13429,8 @@
"workspace-usage",
"web-push",
"oauth2",
"mcp-server-http"
"mcp-server-http",
"workspace-sharing"
],
"x-enum-comments": {
"ExperimentAutoFillParameters": "This should not be taken out of experiments until we have redesigned the feature.",
@@ -13640,6 +13439,7 @@
"ExperimentNotifications": "Sends notifications via SMTP and webhooks following certain events.",
"ExperimentOAuth2": "Enables OAuth2 provider functionality.",
"ExperimentWebPush": "Enables web push notifications through the browser.",
"ExperimentWorkspaceSharing": "Enables updating workspace ACLs for sharing with users and groups.",
"ExperimentWorkspaceUsage": "Enables the new workspace usage tracking."
},
"x-enum-descriptions": [
@@ -13649,7 +13449,8 @@
"Enables the new workspace usage tracking.",
"Enables web push notifications through the browser.",
"Enables OAuth2 provider functionality.",
"Enables the MCP HTTP server functionality."
"Enables the MCP HTTP server functionality.",
"Enables updating workspace ACLs for sharing with users and groups."
],
"x-enum-varnames": [
"ExperimentExample",
@@ -13658,7 +13459,8 @@
"ExperimentWorkspaceUsage",
"ExperimentWebPush",
"ExperimentOAuth2",
"ExperimentMCPServerHTTP"
"ExperimentMCPServerHTTP",
"ExperimentWorkspaceSharing"
]
},
"codersdk.ExternalAPIKeyScopes": {
@@ -13898,6 +13700,10 @@
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -14095,9 +13901,6 @@
"codersdk.HTTPCookieConfig": {
"type": "object",
"properties": {
"host_prefix": {
"type": "boolean"
},
"same_site": {
"type": "string"
},
@@ -15251,14 +15054,6 @@
"organization_mapping": {
"type": "object"
},
"redirect_url": {
"description": "RedirectURL is optional, defaulting to 'ACCESS_URL'. Only useful in niche\nsituations where the OIDC callback domain is different from the ACCESS_URL\ndomain.",
"allOf": [
{
"$ref": "#/definitions/serpent.URL"
}
]
},
"scopes": {
"type": "array",
"items": {
@@ -15682,14 +15477,6 @@
}
}
},
"codersdk.PauseTaskResponse": {
"type": "object",
"properties": {
"workspace_build": {
"$ref": "#/definitions/codersdk.WorkspaceBuild"
}
}
},
"codersdk.Permission": {
"type": "object",
"properties": {
@@ -16811,14 +16598,6 @@
}
}
},
"codersdk.ResumeTaskResponse": {
"type": "object",
"properties": {
"workspace_build": {
"$ref": "#/definitions/codersdk.WorkspaceBuild"
}
}
},
"codersdk.RetentionConfig": {
"type": "object",
"properties": {
@@ -17430,9 +17209,6 @@
"default_ttl_ms": {
"type": "integer"
},
"deleted": {
"type": "boolean"
},
"deprecated": {
"type": "boolean"
},
@@ -19128,6 +18904,12 @@
"connection_timeout_seconds": {
"type": "integer"
},
"connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceConnection"
}
},
"created_at": {
"type": "string",
"format": "date-time"
@@ -19982,6 +19764,32 @@
}
}
},
"codersdk.WorkspaceConnection": {
"type": "object",
"properties": {
"connected_at": {
"type": "string",
"format": "date-time"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"ip": {
"type": "string"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"type": {
"$ref": "#/definitions/codersdk.ConnectionType"
}
}
},
"codersdk.WorkspaceConnectionLatencyMS": {
"type": "object",
"properties": {
@@ -19995,6 +19803,21 @@
}
}
},
"codersdk.WorkspaceConnectionStatus": {
"type": "string",
"enum": [
"ongoing",
"control_lost",
"client_disconnected",
"clean_disconnected"
],
"x-enum-varnames": [
"ConnectionStatusOngoing",
"ConnectionStatusControlLost",
"ConnectionStatusClientDisconnected",
"ConnectionStatusCleanDisconnected"
]
},
"codersdk.WorkspaceDeploymentStats": {
"type": "object",
"properties": {
@@ -21048,7 +20871,7 @@
]
},
"default": {
"description": "Default is parsed into Value if set.\nMust be `\"\"` if `DefaultFn` != nil",
"description": "Default is parsed into Value if set.",
"type": "string"
},
"description": {
+8 -77
@@ -307,26 +307,20 @@ func (api *API) apiKeyByName(rw http.ResponseWriter, r *http.Request) {
// @Tags Users
// @Param user path string true "User ID, name, or me"
// @Success 200 {array} codersdk.APIKey
// @Param include_expired query bool false "Include expired tokens in the list"
// @Router /users/{user}/keys/tokens [get]
func (api *API) tokens(rw http.ResponseWriter, r *http.Request) {
var (
ctx = r.Context()
user = httpmw.UserParam(r)
keys []database.APIKey
err error
queryStr = r.URL.Query().Get("include_all")
includeAll, _ = strconv.ParseBool(queryStr)
expiredStr = r.URL.Query().Get("include_expired")
includeExpired, _ = strconv.ParseBool(expiredStr)
ctx = r.Context()
user = httpmw.UserParam(r)
keys []database.APIKey
err error
queryStr = r.URL.Query().Get("include_all")
includeAll, _ = strconv.ParseBool(queryStr)
)
if includeAll {
// get tokens for all users
keys, err = api.Database.GetAPIKeysByLoginType(ctx, database.GetAPIKeysByLoginTypeParams{
LoginType: database.LoginTypeToken,
IncludeExpired: includeExpired,
})
keys, err = api.Database.GetAPIKeysByLoginType(ctx, database.LoginTypeToken)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching API keys.",
@@ -336,7 +330,7 @@ func (api *API) tokens(rw http.ResponseWriter, r *http.Request) {
}
} else {
// get user's tokens only
keys, err = api.Database.GetAPIKeysByUserID(ctx, database.GetAPIKeysByUserIDParams{LoginType: database.LoginTypeToken, UserID: user.ID, IncludeExpired: includeExpired})
keys, err = api.Database.GetAPIKeysByUserID(ctx, database.GetAPIKeysByUserIDParams{LoginType: database.LoginTypeToken, UserID: user.ID})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching API keys.",
@@ -427,69 +421,6 @@ func (api *API) deleteAPIKey(rw http.ResponseWriter, r *http.Request) {
rw.WriteHeader(http.StatusNoContent)
}
// @Summary Expire API key
// @ID expire-api-key
// @Security CoderSessionToken
// @Tags Users
// @Param user path string true "User ID, name, or me"
// @Param keyid path string true "Key ID" format(string)
// @Success 204
// @Failure 404 {object} codersdk.Response
// @Failure 500 {object} codersdk.Response
// @Router /users/{user}/keys/{keyid}/expire [put]
func (api *API) expireAPIKey(rw http.ResponseWriter, r *http.Request) {
var (
ctx = r.Context()
keyID = chi.URLParam(r, "keyid")
auditor = api.Auditor.Load()
aReq, commitAudit = audit.InitRequest[database.APIKey](rw, &audit.RequestParams{
Audit: *auditor,
Log: api.Logger,
Request: r,
Action: database.AuditActionWrite,
})
)
defer commitAudit()
if err := api.Database.InTx(func(db database.Store) error {
key, err := db.GetAPIKeyByID(ctx, keyID)
if err != nil {
return xerrors.Errorf("fetch API key: %w", err)
}
if !key.ExpiresAt.After(api.Clock.Now()) {
return nil // Already expired
}
aReq.Old = key
if err := db.UpdateAPIKeyByID(ctx, database.UpdateAPIKeyByIDParams{
ID: key.ID,
LastUsed: key.LastUsed,
ExpiresAt: dbtime.Now(),
IPAddress: key.IPAddress,
}); err != nil {
return xerrors.Errorf("expire API key: %w", err)
}
// Fetch the updated key for audit log.
newKey, err := db.GetAPIKeyByID(ctx, keyID)
if err != nil {
api.Logger.Warn(ctx, "failed to fetch updated API key for audit log", slog.Error(err))
} else {
aReq.New = newKey
}
return nil
}, nil); httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return
} else if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error expiring API key.",
Detail: err.Error(),
})
return
}
rw.WriteHeader(http.StatusNoContent)
}
// @Summary Get token config
// @ID get-token-config
// @Security CoderSessionToken
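With the expire endpoint and the include_expired filter reverted, only the include_all query parameter remains on the tokens listing. A minimal sketch of exercising it through codersdk, assuming TokensFilter still exposes an IncludeAll field; the deployment URL and session token are placeholders:

package main

import (
	"context"
	"fmt"
	"log"
	"net/url"

	"github.com/coder/coder/v2/codersdk"
)

func main() {
	u, err := url.Parse("https://coder.example.com") // hypothetical deployment URL
	if err != nil {
		log.Fatal(err)
	}
	client := codersdk.New(u)
	client.SetSessionToken("ses-...") // hypothetical session token

	// IncludeAll maps onto the ?include_all query parameter that the
	// tokens() handler above still parses.
	keys, err := client.Tokens(context.Background(), codersdk.Me, codersdk.TokensFilter{IncludeAll: true})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("token count:", len(keys))
}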
+4 -197
@@ -69,44 +69,6 @@ func TestTokenCRUD(t *testing.T) {
require.Equal(t, database.AuditActionDelete, auditor.AuditLogs()[numLogs-1].Action)
}
func TestTokensFilterExpired(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
adminClient := coderdtest.New(t, nil)
_ = coderdtest.CreateFirstUser(t, adminClient)
// Create a token.
res, err := adminClient.CreateToken(ctx, codersdk.Me, codersdk.CreateTokenRequest{
Lifetime: time.Hour * 24 * 7,
})
require.NoError(t, err)
keyID := strings.Split(res.Key, "-")[0]
// List tokens without including expired - should see the token.
keys, err := adminClient.Tokens(ctx, codersdk.Me, codersdk.TokensFilter{})
require.NoError(t, err)
require.Len(t, keys, 1)
// Expire the token.
err = adminClient.ExpireAPIKey(ctx, codersdk.Me, keyID)
require.NoError(t, err)
// List tokens without including expired - should NOT see expired token.
keys, err = adminClient.Tokens(ctx, codersdk.Me, codersdk.TokensFilter{})
require.NoError(t, err)
require.Empty(t, keys)
// List tokens WITH including expired - should see expired token.
keys, err = adminClient.Tokens(ctx, codersdk.Me, codersdk.TokensFilter{
IncludeExpired: true,
})
require.NoError(t, err)
require.Len(t, keys, 1)
require.Equal(t, keyID, keys[0].ID)
}
func TestTokenScoped(t *testing.T) {
t.Parallel()
@@ -438,7 +400,7 @@ func TestAPIKey_Deleted(t *testing.T) {
require.Error(t, err)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
require.Equal(t, http.StatusBadRequest, apiErr.StatusCode())
}
func TestAPIKey_SetDefault(t *testing.T) {
@@ -477,7 +439,7 @@ func TestAPIKey_PrebuildsNotAllowed(t *testing.T) {
DeploymentValues: dc,
})
setupCtx := testutil.Context(t, testutil.WaitLong)
ctx := testutil.Context(t, testutil.WaitLong)
// Given: an existing api token for the prebuilds user
_, prebuildsToken := dbgen.APIKey(t, db, database.APIKey{
@@ -486,167 +448,12 @@ func TestAPIKey_PrebuildsNotAllowed(t *testing.T) {
client.SetSessionToken(prebuildsToken)
// When: the prebuilds user tries to create an API key
_, err := client.CreateAPIKey(setupCtx, database.PrebuildsSystemUserID.String())
_, err := client.CreateAPIKey(ctx, database.PrebuildsSystemUserID.String())
// Then: denied.
require.ErrorContains(t, err, httpapi.ResourceForbiddenResponse.Message)
// When: the prebuilds user tries to create a token
_, err = client.CreateToken(setupCtx, database.PrebuildsSystemUserID.String(), codersdk.CreateTokenRequest{})
_, err = client.CreateToken(ctx, database.PrebuildsSystemUserID.String(), codersdk.CreateTokenRequest{})
// Then: also denied.
require.ErrorContains(t, err, httpapi.ResourceForbiddenResponse.Message)
}
//nolint:tparallel,paralleltest // Subtests share the same coderdtest instance and auditor.
func TestExpireAPIKey(t *testing.T) {
t.Parallel()
auditor := audit.NewMock()
adminClient := coderdtest.New(t, &coderdtest.Options{Auditor: auditor})
admin := coderdtest.CreateFirstUser(t, adminClient)
memberClient, member := coderdtest.CreateAnotherUser(t, adminClient, admin.OrganizationID)
t.Run("OwnerCanExpireOwnToken", func(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
// Create a token.
res, err := adminClient.CreateToken(ctx, codersdk.Me, codersdk.CreateTokenRequest{
Lifetime: time.Hour * 24 * 7,
})
require.NoError(t, err)
keyID := strings.Split(res.Key, "-")[0]
// Verify the token is not expired.
key, err := adminClient.APIKeyByID(ctx, codersdk.Me, keyID)
require.NoError(t, err)
require.True(t, key.ExpiresAt.After(time.Now()))
auditor.ResetLogs()
// Expire the token.
err = adminClient.ExpireAPIKey(ctx, codersdk.Me, keyID)
require.NoError(t, err)
// Verify the token is expired.
key, err = adminClient.APIKeyByID(ctx, codersdk.Me, keyID)
require.NoError(t, err)
require.True(t, key.ExpiresAt.Before(time.Now()))
// Verify audit log.
als := auditor.AuditLogs()
require.Len(t, als, 1)
require.Equal(t, database.AuditActionWrite, als[0].Action)
require.Equal(t, database.ResourceTypeApiKey, als[0].ResourceType)
require.Equal(t, admin.UserID.String(), als[0].UserID.String())
})
t.Run("AdminCanExpireOtherUsersToken", func(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
// Create a token for the member.
res, err := memberClient.CreateToken(ctx, codersdk.Me, codersdk.CreateTokenRequest{
Lifetime: time.Hour * 24 * 7,
})
require.NoError(t, err)
keyID := strings.Split(res.Key, "-")[0]
// Admin expires the member's token.
err = adminClient.ExpireAPIKey(ctx, member.ID.String(), keyID)
require.NoError(t, err)
// Verify the token is expired.
key, err := memberClient.APIKeyByID(ctx, codersdk.Me, keyID)
require.NoError(t, err)
require.True(t, key.ExpiresAt.Before(time.Now()))
})
t.Run("MemberCannotExpireOtherUsersToken", func(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
// Create a token for the admin.
res, err := adminClient.CreateToken(ctx, codersdk.Me, codersdk.CreateTokenRequest{
Lifetime: time.Hour * 24 * 7,
})
require.NoError(t, err)
keyID := strings.Split(res.Key, "-")[0]
// Member attempts to expire admin's token.
err = memberClient.ExpireAPIKey(ctx, admin.UserID.String(), keyID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
// Members cannot read other users, so they get a 404 Not Found
// from the authorization layer.
require.Equal(t, http.StatusNotFound, sdkErr.StatusCode())
})
t.Run("NotFound", func(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
// Try to expire a non-existent token.
err := adminClient.ExpireAPIKey(ctx, codersdk.Me, "nonexistent")
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
require.Equal(t, http.StatusNotFound, sdkErr.StatusCode())
})
t.Run("ExpiringAlreadyExpiredTokenSucceeds", func(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
// Create and expire a token.
res, err := adminClient.CreateToken(ctx, codersdk.Me, codersdk.CreateTokenRequest{
Lifetime: time.Hour * 24 * 7,
})
require.NoError(t, err)
keyID := strings.Split(res.Key, "-")[0]
// Expire it once.
err = adminClient.ExpireAPIKey(ctx, codersdk.Me, keyID)
require.NoError(t, err)
// Invariant: make sure it's actually expired
key, err := adminClient.APIKeyByID(ctx, codersdk.Me, keyID)
require.NoError(t, err)
require.LessOrEqual(t, key.ExpiresAt, time.Now(), "key should be expired")
// Expire it again - should succeed (idempotent).
err = adminClient.ExpireAPIKey(ctx, codersdk.Me, keyID)
require.NoError(t, err)
// Token should still be just as expired as before. No more, no less.
keyAgain, err := adminClient.APIKeyByID(ctx, codersdk.Me, keyID)
require.NoError(t, err)
require.Equal(t, key.ExpiresAt, keyAgain.ExpiresAt, "expiration should be idempotent")
})
t.Run("DeletingExpiredTokenSucceeds", func(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
// Create a token.
res, err := adminClient.CreateToken(ctx, codersdk.Me, codersdk.CreateTokenRequest{
Lifetime: time.Hour * 24 * 7,
})
require.NoError(t, err)
keyID := strings.Split(res.Key, "-")[0]
// Expire it first.
err = adminClient.ExpireAPIKey(ctx, codersdk.Me, keyID)
require.NoError(t, err)
// Verify it's expired.
key, err := adminClient.APIKeyByID(ctx, codersdk.Me, keyID)
require.NoError(t, err)
require.True(t, key.ExpiresAt.Before(time.Now()))
// Delete the expired token - should succeed.
err = adminClient.DeleteAPIKey(ctx, codersdk.Me, keyID)
require.NoError(t, err)
// Verify it's gone.
_, err = adminClient.APIKeyByID(ctx, codersdk.Me, keyID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
require.Equal(t, http.StatusNotFound, sdkErr.StatusCode())
})
}
+18 -56
@@ -48,10 +48,9 @@ type Executor struct {
tick <-chan time.Time
statsCh chan<- Stats
// NotificationsEnqueuer handles enqueueing notifications for delivery by SMTP, webhook, etc.
notificationsEnqueuer notifications.Enqueuer
reg prometheus.Registerer
experiments codersdk.Experiments
workspaceBuilderMetrics *wsbuilder.Metrics
notificationsEnqueuer notifications.Enqueuer
reg prometheus.Registerer
experiments codersdk.Experiments
metrics executorMetrics
}
@@ -68,24 +67,23 @@ type Stats struct {
}
// NewExecutor returns a new autobuild executor.
func NewExecutor(ctx context.Context, db database.Store, ps pubsub.Pubsub, fc *files.Cache, reg prometheus.Registerer, tss *atomic.Pointer[schedule.TemplateScheduleStore], auditor *atomic.Pointer[audit.Auditor], acs *atomic.Pointer[dbauthz.AccessControlStore], buildUsageChecker *atomic.Pointer[wsbuilder.UsageChecker], log slog.Logger, tick <-chan time.Time, enqueuer notifications.Enqueuer, exp codersdk.Experiments, workspaceBuilderMetrics *wsbuilder.Metrics) *Executor {
func NewExecutor(ctx context.Context, db database.Store, ps pubsub.Pubsub, fc *files.Cache, reg prometheus.Registerer, tss *atomic.Pointer[schedule.TemplateScheduleStore], auditor *atomic.Pointer[audit.Auditor], acs *atomic.Pointer[dbauthz.AccessControlStore], buildUsageChecker *atomic.Pointer[wsbuilder.UsageChecker], log slog.Logger, tick <-chan time.Time, enqueuer notifications.Enqueuer, exp codersdk.Experiments) *Executor {
factory := promauto.With(reg)
le := &Executor{
//nolint:gocritic // Autostart has a limited set of permissions.
ctx: dbauthz.AsAutostart(ctx),
db: db,
ps: ps,
fileCache: fc,
templateScheduleStore: tss,
tick: tick,
log: log.Named("autobuild"),
auditor: auditor,
accessControlStore: acs,
buildUsageChecker: buildUsageChecker,
notificationsEnqueuer: enqueuer,
reg: reg,
experiments: exp,
workspaceBuilderMetrics: workspaceBuilderMetrics,
ctx: dbauthz.AsAutostart(ctx),
db: db,
ps: ps,
fileCache: fc,
templateScheduleStore: tss,
tick: tick,
log: log.Named("autobuild"),
auditor: auditor,
accessControlStore: acs,
buildUsageChecker: buildUsageChecker,
notificationsEnqueuer: enqueuer,
reg: reg,
experiments: exp,
metrics: executorMetrics{
autobuildExecutionDuration: factory.NewHistogram(prometheus.HistogramOpts{
Namespace: "coderd",
@@ -231,7 +229,6 @@ func (e *Executor) runOnce(t time.Time) Stats {
job *database.ProvisionerJob
auditLog *auditParams
shouldNotifyDormancy bool
shouldNotifyTaskPause bool
nextBuild *database.WorkspaceBuild
activeTemplateVersion database.TemplateVersion
ws database.Workspace
@@ -317,10 +314,6 @@ func (e *Executor) runOnce(t time.Time) Stats {
return nil
}
if reason == database.BuildReasonTaskAutoPause {
shouldNotifyTaskPause = true
}
// Get the template version job to access tags
templateVersionJob, err := tx.GetProvisionerJobByID(e.ctx, activeTemplateVersion.JobID)
if err != nil {
@@ -342,8 +335,7 @@ func (e *Executor) runOnce(t time.Time) Stats {
SetLastWorkspaceBuildInTx(&latestBuild).
SetLastWorkspaceBuildJobInTx(&latestJob).
Experiments(e.experiments).
Reason(reason).
BuildMetrics(e.workspaceBuilderMetrics)
Reason(reason)
log.Debug(e.ctx, "auto building workspace", slog.F("transition", nextTransition))
if nextTransition == database.WorkspaceTransitionStart &&
useActiveVersion(accessControl, ws) {
@@ -487,28 +479,6 @@ func (e *Executor) runOnce(t time.Time) Stats {
log.Warn(e.ctx, "failed to notify of workspace marked as dormant", slog.Error(err), slog.F("workspace_id", ws.ID))
}
}
if shouldNotifyTaskPause {
task, err := e.db.GetTaskByID(e.ctx, ws.TaskID.UUID)
if err != nil {
log.Warn(e.ctx, "failed to get task for pause notification", slog.Error(err), slog.F("task_id", ws.TaskID.UUID), slog.F("workspace_id", ws.ID))
} else {
if _, err := e.notificationsEnqueuer.Enqueue(
e.ctx,
ws.OwnerID,
notifications.TemplateTaskPaused,
map[string]string{
"task": task.Name,
"task_id": task.ID.String(),
"workspace": ws.Name,
"pause_reason": "idle timeout",
},
"lifecycle_executor",
ws.ID, ws.OwnerID, ws.OrganizationID,
); err != nil {
log.Warn(e.ctx, "failed to notify of task paused", slog.Error(err), slog.F("task_id", ws.TaskID.UUID), slog.F("workspace_id", ws.ID))
}
}
}
return nil
}()
if err != nil && !xerrors.Is(err, context.Canceled) {
@@ -552,18 +522,10 @@ func getNextTransition(
) {
switch {
case isEligibleForAutostop(user, ws, latestBuild, latestJob, currentTick):
// Use task-specific reason for AI task workspaces.
if ws.TaskID.Valid {
return database.WorkspaceTransitionStop, database.BuildReasonTaskAutoPause, nil
}
return database.WorkspaceTransitionStop, database.BuildReasonAutostop, nil
case isEligibleForAutostart(user, ws, latestBuild, latestJob, templateSchedule, currentTick):
return database.WorkspaceTransitionStart, database.BuildReasonAutostart, nil
case isEligibleForFailedStop(latestBuild, latestJob, templateSchedule, currentTick):
// Use task-specific reason for AI task workspaces.
if ws.TaskID.Valid {
return database.WorkspaceTransitionStop, database.BuildReasonTaskAutoPause, nil
}
return database.WorkspaceTransitionStop, database.BuildReasonAutostop, nil
case isEligibleForDormantStop(ws, templateSchedule, currentTick):
// Only stop started workspaces.
@@ -5,113 +5,12 @@ import (
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/schedule"
)
func Test_getNextTransition_TaskAutoPause(t *testing.T) {
t.Parallel()
// Set up a workspace that is eligible for autostop (past deadline).
now := time.Now()
pastDeadline := now.Add(-time.Hour)
okUser := database.User{Status: database.UserStatusActive}
okBuild := database.WorkspaceBuild{
Transition: database.WorkspaceTransitionStart,
Deadline: pastDeadline,
}
okJob := database.ProvisionerJob{
JobStatus: database.ProvisionerJobStatusSucceeded,
}
okTemplateSchedule := schedule.TemplateScheduleOptions{}
// Failed build setup for failedstop tests.
failedBuild := database.WorkspaceBuild{
Transition: database.WorkspaceTransitionStart,
}
failedJob := database.ProvisionerJob{
JobStatus: database.ProvisionerJobStatusFailed,
CompletedAt: sql.NullTime{Time: now.Add(-time.Hour), Valid: true},
}
failedTemplateSchedule := schedule.TemplateScheduleOptions{
FailureTTL: time.Minute, // TTL already elapsed since job completed an hour ago.
}
testCases := []struct {
Name string
Workspace database.Workspace
Build database.WorkspaceBuild
Job database.ProvisionerJob
TemplateSchedule schedule.TemplateScheduleOptions
ExpectedReason database.BuildReason
}{
{
Name: "RegularWorkspace_Autostop",
Workspace: database.Workspace{
DormantAt: sql.NullTime{Valid: false},
},
Build: okBuild,
Job: okJob,
TemplateSchedule: okTemplateSchedule,
ExpectedReason: database.BuildReasonAutostop,
},
{
Name: "TaskWorkspace_Autostop_UsesTaskAutoPause",
Workspace: database.Workspace{
DormantAt: sql.NullTime{Valid: false},
TaskID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
},
Build: okBuild,
Job: okJob,
TemplateSchedule: okTemplateSchedule,
ExpectedReason: database.BuildReasonTaskAutoPause,
},
{
Name: "RegularWorkspace_FailedStop",
Workspace: database.Workspace{
DormantAt: sql.NullTime{Valid: false},
},
Build: failedBuild,
Job: failedJob,
TemplateSchedule: failedTemplateSchedule,
ExpectedReason: database.BuildReasonAutostop,
},
{
Name: "TaskWorkspace_FailedStop_UsesTaskAutoPause",
Workspace: database.Workspace{
DormantAt: sql.NullTime{Valid: false},
TaskID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
},
Build: failedBuild,
Job: failedJob,
TemplateSchedule: failedTemplateSchedule,
ExpectedReason: database.BuildReasonTaskAutoPause,
},
}
for _, tc := range testCases {
t.Run(tc.Name, func(t *testing.T) {
t.Parallel()
transition, reason, err := getNextTransition(
okUser,
tc.Workspace,
tc.Build,
tc.Job,
tc.TemplateSchedule,
now,
)
require.NoError(t, err)
require.Equal(t, database.WorkspaceTransitionStop, transition)
require.Equal(t, tc.ExpectedReason, reason)
})
}
}
func Test_isEligibleForAutostart(t *testing.T) {
t.Parallel()
@@ -2019,69 +2019,5 @@ func TestExecutorTaskWorkspace(t *testing.T) {
assert.Contains(t, stats.Transitions, workspace.ID, "task workspace should be in transitions")
assert.Equal(t, database.WorkspaceTransitionStop, stats.Transitions[workspace.ID], "should autostop the workspace")
require.Empty(t, stats.Errors, "should have no errors when managing task workspaces")
// Then: The build reason should be TaskAutoPause (not regular Autostop)
workspace = coderdtest.MustWorkspace(t, client, workspace.ID)
_ = coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
workspace = coderdtest.MustWorkspace(t, client, workspace.ID)
assert.Equal(t, codersdk.BuildReasonTaskAutoPause, workspace.LatestBuild.Reason, "task workspace should use TaskAutoPause build reason")
})
t.Run("AutostopNotification", func(t *testing.T) {
t.Parallel()
var (
tickCh = make(chan time.Time)
statsCh = make(chan autobuild.Stats)
notifyEnq = notificationstest.FakeEnqueuer{}
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
AutobuildTicker: tickCh,
IncludeProvisionerDaemon: true,
AutobuildStats: statsCh,
NotificationsEnqueuer: &notifyEnq,
})
admin = coderdtest.CreateFirstUser(t, client)
)
// Given: A task workspace with an 8 hour deadline
ctx := testutil.Context(t, testutil.WaitShort)
template := createTaskTemplate(t, client, admin.OrganizationID, ctx, 8*time.Hour)
workspace := createTaskWorkspace(t, client, template, ctx, "test task for autostop notification")
// Given: The workspace is currently running
workspace = coderdtest.MustWorkspace(t, client, workspace.ID)
require.Equal(t, codersdk.WorkspaceTransitionStart, workspace.LatestBuild.Transition)
require.NotZero(t, workspace.LatestBuild.Deadline, "workspace should have a deadline for autostop")
p, err := coderdtest.GetProvisionerForTags(db, time.Now(), workspace.OrganizationID, map[string]string{})
require.NoError(t, err)
// When: the autobuild executor ticks after the deadline
go func() {
tickTime := workspace.LatestBuild.Deadline.Time.Add(time.Minute)
coderdtest.UpdateProvisionerLastSeenAt(t, db, p.ID, tickTime)
tickCh <- tickTime
close(tickCh)
}()
// Then: We expect to see a stop transition
stats := <-statsCh
require.Len(t, stats.Transitions, 1, "lifecycle executor should transition the task workspace")
assert.Contains(t, stats.Transitions, workspace.ID, "task workspace should be in transitions")
assert.Equal(t, database.WorkspaceTransitionStop, stats.Transitions[workspace.ID], "should autostop the workspace")
require.Empty(t, stats.Errors, "should have no errors when managing task workspaces")
// Then: A task paused notification was sent with "idle timeout" reason
require.True(t, workspace.TaskID.Valid, "workspace should have a task ID")
task, err := db.GetTaskByID(dbauthz.AsSystemRestricted(ctx), workspace.TaskID.UUID)
require.NoError(t, err)
sent := notifyEnq.Sent(notificationstest.WithTemplateID(notifications.TemplateTaskPaused))
require.Len(t, sent, 1)
require.Equal(t, workspace.OwnerID, sent[0].UserID)
require.Equal(t, task.Name, sent[0].Labels["task"])
require.Equal(t, task.ID.String(), sent[0].Labels["task_id"])
require.Equal(t, workspace.Name, sent[0].Labels["workspace"])
require.Equal(t, "idle timeout", sent[0].Labels["pause_reason"])
})
}
+6 -9
@@ -245,7 +245,6 @@ type Options struct {
MetadataBatcherOptions []metadatabatcher.Option
ProvisionerdServerMetrics *provisionerdserver.Metrics
WorkspaceBuilderMetrics *wsbuilder.Metrics
// WorkspaceAppAuditSessionTimeout allows changing the timeout for audit
// sessions. Raising or lowering this value will directly affect the write
@@ -749,6 +748,7 @@ func New(options *Options) *API {
options.DeploymentValues.DERP.Config.ForceWebSockets.Value(),
options.DeploymentValues.DERP.Config.BlockDirect.Value(),
api.TracerProvider,
"Coder Server",
)
if err != nil {
panic("failed to setup server tailnet: " + err.Error())
@@ -900,7 +900,6 @@ func New(options *Options) *API {
sharedhttpmw.Recover(api.Logger),
httpmw.WithProfilingLabels,
tracing.StatusWriterMiddleware,
options.DeploymentValues.HTTPCookies.Middleware,
tracing.Middleware(api.TracerProvider),
httpmw.AttachRequestID,
httpmw.ExtractRealIP(api.RealIPConfig),
@@ -1080,8 +1079,6 @@ func New(options *Options) *API {
r.Patch("/input", api.taskUpdateInput)
r.Post("/send", api.taskSend)
r.Get("/logs", api.taskLogs)
r.Post("/pause", api.pauseTask)
r.Post("/resume", api.resumeTask)
})
})
})
@@ -1233,10 +1230,7 @@ func New(options *Options) *API {
r.Get("/", api.organizationMember)
r.Delete("/", api.deleteOrganizationMember)
r.Put("/roles", api.putMemberRoles)
r.Route("/workspaces", func(r chi.Router) {
r.Post("/", api.postWorkspacesByOrganization)
r.Get("/available-users", api.workspaceAvailableUsers)
})
r.Post("/workspaces", api.postWorkspacesByOrganization)
})
})
})
@@ -1403,7 +1397,6 @@ func New(options *Options) *API {
r.Route("/{keyid}", func(r chi.Router) {
r.Get("/", api.apiKeyByID)
r.Delete("/", api.deleteAPIKey)
r.Put("/expire", api.expireAPIKey)
})
})
@@ -1526,6 +1519,10 @@ func New(options *Options) *API {
})
r.Get("/timings", api.workspaceTimings)
r.Route("/acl", func(r chi.Router) {
r.Use(
httpmw.RequireExperiment(api.Experiments, codersdk.ExperimentWorkspaceSharing),
)
r.Get("/", api.workspaceACL)
r.Patch("/", api.patchWorkspaceACL)
r.Delete("/", api.deleteWorkspaceACL)
-3
@@ -191,7 +191,6 @@ type Options struct {
TelemetryReporter telemetry.Reporter
ProvisionerdServerMetrics *provisionerdserver.Metrics
WorkspaceBuilderMetrics *wsbuilder.Metrics
UsageInserter usage.Inserter
}
@@ -400,7 +399,6 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
options.AutobuildTicker,
options.NotificationsEnqueuer,
experiments,
options.WorkspaceBuilderMetrics,
).WithStatsChannel(options.AutobuildStats)
lifecycleExecutor.Run()
@@ -622,7 +620,6 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
AppEncryptionKeyCache: options.APIKeyEncryptionCache,
OIDCConvertKeyCache: options.OIDCConvertKeyCache,
ProvisionerdServerMetrics: options.ProvisionerdServerMetrics,
WorkspaceBuilderMetrics: options.WorkspaceBuilderMetrics,
}
}
-2
@@ -17,6 +17,4 @@ const (
CheckTelemetryLockEventTypeConstraint CheckConstraint = "telemetry_lock_event_type_constraint" // telemetry_locks
CheckValidationMonotonicOrder CheckConstraint = "validation_monotonic_order" // template_version_parameters
CheckUsageEventTypeCheck CheckConstraint = "usage_event_type_check" // usage_events
CheckGroupAclIsObject CheckConstraint = "group_acl_is_object" // workspaces
CheckUserAclIsObject CheckConstraint = "user_acl_is_object" // workspaces
)
@@ -0,0 +1,147 @@
package database_test
import (
"context"
"database/sql"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
)
func TestGetOngoingAgentConnectionsLast24h(t *testing.T) {
t.Parallel()
ctx := context.Background()
db, _ := dbtestutil.NewDB(t)
org := dbfake.Organization(t, db).Do()
user := dbgen.User(t, db, database.User{})
tpl := dbgen.Template(t, db, database.Template{OrganizationID: org.Org.ID, CreatedBy: user.ID})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
OwnerID: user.ID,
TemplateID: tpl.ID,
Name: "ws",
})
now := dbtime.Now()
since := now.Add(-24 * time.Hour)
const (
agent1 = "agent1"
agent2 = "agent2"
)
// Insert a disconnected log that should be excluded.
disconnectedConnID := uuid.New()
disconnected := dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-30 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent1,
Type: database.ConnectionTypeSsh,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: disconnectedConnID, Valid: true},
})
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-20 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
AgentName: disconnected.AgentName,
ConnectionStatus: database.ConnectionStatusDisconnected,
ConnectionID: disconnected.ConnectionID,
DisconnectReason: sql.NullString{String: "closed", Valid: true},
})
// Insert an old log that should be excluded by the 24h window.
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-25 * time.Hour),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent1,
Type: database.ConnectionTypeSsh,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
})
// Insert a web log that should be excluded by the types filter.
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-10 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent1,
Type: database.ConnectionTypeWorkspaceApp,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
})
// Insert 55 active logs for agent1 (should be capped to 50).
for i := 0; i < 55; i++ {
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-time.Duration(i) * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent1,
Type: database.ConnectionTypeVscode,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
})
}
// Insert one active log for agent2.
agent2Log := dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-5 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent2,
Type: database.ConnectionTypeJetbrains,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
})
logs, err := db.GetOngoingAgentConnectionsLast24h(ctx, database.GetOngoingAgentConnectionsLast24hParams{
WorkspaceIds: []uuid.UUID{ws.ID},
AgentNames: []string{agent1, agent2},
Types: []database.ConnectionType{database.ConnectionTypeSsh, database.ConnectionTypeVscode, database.ConnectionTypeJetbrains, database.ConnectionTypeReconnectingPty},
Since: since,
PerAgentLimit: 50,
})
require.NoError(t, err)
byAgent := map[string][]database.GetOngoingAgentConnectionsLast24hRow{}
for _, l := range logs {
byAgent[l.AgentName] = append(byAgent[l.AgentName], l)
}
// Agent1 should be capped at 50 and contain only active logs within the window.
require.Len(t, byAgent[agent1], 50)
for i, l := range byAgent[agent1] {
require.False(t, l.DisconnectTime.Valid, "expected log to be ongoing")
require.True(t, l.ConnectTime.After(since) || l.ConnectTime.Equal(since), "expected log to be within window")
if i > 0 {
require.True(t, byAgent[agent1][i-1].ConnectTime.After(l.ConnectTime) || byAgent[agent1][i-1].ConnectTime.Equal(l.ConnectTime), "expected logs to be ordered by connect_time desc")
}
}
// Agent2 should include its single active log.
require.Equal(t, []uuid.UUID{agent2Log.ID}, []uuid.UUID{byAgent[agent2][0].ID})
}
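The test above exercises the raw store; going through dbauthz requires a system actor, since the querier gates this method on a ResourceSystem read (see the dbauthz change later in this diff). A sketch, with the agent name as a placeholder:

package example

import (
	"context"
	"time"

	"github.com/google/uuid"

	"github.com/coder/coder/v2/coderd/database"
	"github.com/coder/coder/v2/coderd/database/dbauthz"
)

func ongoingConnections(db database.Store, workspaceID uuid.UUID) ([]database.GetOngoingAgentConnectionsLast24hRow, error) {
	// The querier checks policy.ActionRead on rbac.ResourceSystem, so
	// the context must carry a system actor.
	ctx := dbauthz.AsSystemRestricted(context.Background())
	return db.GetOngoingAgentConnectionsLast24h(ctx, database.GetOngoingAgentConnectionsLast24hParams{
		WorkspaceIds:  []uuid.UUID{workspaceID},
		AgentNames:    []string{"main"}, // hypothetical agent name
		Types:         []database.ConnectionType{database.ConnectionTypeSsh, database.ConnectionTypeVscode, database.ConnectionTypeJetbrains, database.ConnectionTypeReconnectingPty},
		Since:         time.Now().Add(-24 * time.Hour),
		PerAgentLimit: 50,
	})
}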
+1
@@ -93,6 +93,7 @@ type TxOptions struct {
// IncrementExecutionCount is a helper function for external packages
// to increment the unexported count.
// Mainly for `dbmem`.
func IncrementExecutionCount(opts *TxOptions) {
opts.executionCount++
}
+26 -3
@@ -4,6 +4,7 @@ package db2sdk
import (
"encoding/json"
"fmt"
"net/netip"
"net/url"
"slices"
"sort"
@@ -28,6 +29,7 @@ import (
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/provisionersdk/proto"
"github.com/coder/coder/v2/tailnet"
tailnetproto "github.com/coder/coder/v2/tailnet/proto"
previewtypes "github.com/coder/preview/types"
)
@@ -519,6 +521,30 @@ func WorkspaceAgent(derpMap *tailcfg.DERPMap, coordinator tailnet.Coordinator,
workspaceAgent.Health.Healthy = true
}
if tunnelPeers := coordinator.TunnelPeers(dbAgent.ID); len(tunnelPeers) > 0 {
conns := make([]codersdk.WorkspaceConnection, 0, len(tunnelPeers))
for _, tp := range tunnelPeers {
var ip *netip.Addr
if tp.Node != nil && len(tp.Node.Addresses) > 0 {
if prefix, err := netip.ParsePrefix(tp.Node.Addresses[0]); err == nil {
addr := prefix.Addr()
ip = &addr
}
}
status := codersdk.ConnectionStatusOngoing
if tp.Status == tailnetproto.CoordinateResponse_PeerUpdate_LOST {
status = codersdk.ConnectionStatusControlLost
}
conns = append(conns, codersdk.WorkspaceConnection{
IP: ip,
Status: status,
CreatedAt: tp.Start,
ConnectedAt: &tp.Start,
})
}
workspaceAgent.Connections = conns
}
return workspaceAgent, nil
}
@@ -981,9 +1007,6 @@ func AIBridgeInterception(interception database.AIBridgeInterception, initiator
if interception.EndedAt.Valid {
intc.EndedAt = &interception.EndedAt.Time
}
if interception.Client.Valid {
intc.Client = &interception.Client.String
}
return intc
}
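The mapping above leans on net/netip to turn a coordinator address like fd60:.../128 into a bare address. A self-contained sketch of that parsing step:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Coordinator node addresses arrive as single-address prefixes;
	// ParsePrefix followed by Addr() strips the /128, matching the
	// db2sdk mapping above.
	prefix, err := netip.ParsePrefix("fd60:627a:a42b:0102:0304:0506:0708:090a/128")
	if err != nil {
		panic(err)
	}
	fmt.Println(prefix.Addr()) // fd60:627a:a42b:102:304:506:708:90a (canonical form)
}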
+148 -220
@@ -9,8 +9,10 @@ import (
"time"
"github.com/google/uuid"
"github.com/sqlc-dev/pqtype"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
gomock "go.uber.org/mock/gomock"
"tailscale.com/tailcfg"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/db2sdk"
@@ -19,6 +21,9 @@ import (
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/provisionersdk/proto"
"github.com/coder/coder/v2/tailnet"
tailnetproto "github.com/coder/coder/v2/tailnet/proto"
"github.com/coder/coder/v2/tailnet/tailnettest"
)
func TestProvisionerJobStatus(t *testing.T) {
@@ -208,230 +213,153 @@ func TestTemplateVersionParameter_BadDescription(t *testing.T) {
req.NotEmpty(sdk.DescriptionPlaintext, "broke the markdown parser with %v", desc)
}
func TestAIBridgeInterception(t *testing.T) {
func TestWorkspaceAgent_TunnelPeers(t *testing.T) {
t.Parallel()
now := dbtime.Now()
interceptionID := uuid.New()
initiatorID := uuid.New()
t.Run("MultiplePeers", func(t *testing.T) {
t.Parallel()
cases := []struct {
name string
interception database.AIBridgeInterception
initiator database.VisibleUser
tokenUsages []database.AIBridgeTokenUsage
userPrompts []database.AIBridgeUserPrompt
toolUsages []database.AIBridgeToolUsage
expected codersdk.AIBridgeInterception
}{
{
name: "all_optional_values_set",
interception: database.AIBridgeInterception{
ID: interceptionID,
InitiatorID: initiatorID,
Provider: "anthropic",
Model: "claude-3-opus",
StartedAt: now,
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"key":"value"}`),
Valid: true,
agentID := uuid.New()
now := dbtime.Now()
peerID1 := uuid.New()
peerID2 := uuid.New()
ctrl := gomock.NewController(t)
mockCoord := tailnettest.NewMockCoordinator(ctrl)
mockCoord.EXPECT().Node(agentID).Return(nil)
mockCoord.EXPECT().TunnelPeers(agentID).Return([]*tailnet.TunnelPeerInfo{
{
ID: peerID1,
Name: "active-user",
Node: &tailnetproto.Node{
Addresses: []string{"fd60:627a:a42b:0102:0304:0506:0708:090a/128"},
PreferredDerp: 1,
},
EndedAt: sql.NullTime{
Time: now.Add(time.Minute),
Valid: true,
},
APIKeyID: sql.NullString{
String: "api-key-123",
Valid: true,
},
Client: sql.NullString{
String: "claude-code/1.0.0",
Valid: true,
Status: tailnetproto.CoordinateResponse_PeerUpdate_NODE,
Start: now,
},
{
ID: peerID2,
Name: "lost-user",
Node: &tailnetproto.Node{
Addresses: []string{"fd60:627a:a42b:aaaa:bbbb:cccc:dddd:eeee/128"},
PreferredDerp: 2,
},
Status: tailnetproto.CoordinateResponse_PeerUpdate_LOST,
Start: now.Add(-time.Hour),
},
initiator: database.VisibleUser{
ID: initiatorID,
Username: "testuser",
Name: "Test User",
AvatarURL: "https://example.com/avatar.png",
},
tokenUsages: []database.AIBridgeTokenUsage{
{
ID: uuid.New(),
InterceptionID: interceptionID,
ProviderResponseID: "resp-123",
InputTokens: 100,
OutputTokens: 200,
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"cache":"hit"}`),
Valid: true,
},
CreatedAt: now.Add(10 * time.Second),
},
},
userPrompts: []database.AIBridgeUserPrompt{
{
ID: uuid.New(),
InterceptionID: interceptionID,
ProviderResponseID: "resp-123",
Prompt: "Hello, world!",
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"role":"user"}`),
Valid: true,
},
CreatedAt: now.Add(5 * time.Second),
},
},
toolUsages: []database.AIBridgeToolUsage{
{
ID: uuid.New(),
InterceptionID: interceptionID,
ProviderResponseID: "resp-123",
ServerUrl: sql.NullString{
String: "https://mcp.example.com",
Valid: true,
},
Tool: "read_file",
Input: `{"path":"/tmp/test.txt"}`,
Injected: true,
InvocationError: sql.NullString{
String: "file not found",
Valid: true,
},
Metadata: pqtype.NullRawMessage{
RawMessage: json.RawMessage(`{"duration_ms":50}`),
Valid: true,
},
CreatedAt: now.Add(15 * time.Second),
},
},
expected: codersdk.AIBridgeInterception{
ID: interceptionID,
Initiator: codersdk.MinimalUser{
ID: initiatorID,
Username: "testuser",
Name: "Test User",
AvatarURL: "https://example.com/avatar.png",
},
Provider: "anthropic",
Model: "claude-3-opus",
Metadata: map[string]any{"key": "value"},
StartedAt: now,
},
},
{
name: "no_optional_values_set",
interception: database.AIBridgeInterception{
ID: interceptionID,
InitiatorID: initiatorID,
Provider: "openai",
Model: "gpt-4",
StartedAt: now,
Metadata: pqtype.NullRawMessage{Valid: false},
EndedAt: sql.NullTime{Valid: false},
APIKeyID: sql.NullString{Valid: false},
Client: sql.NullString{Valid: false},
},
initiator: database.VisibleUser{
ID: initiatorID,
Username: "minimaluser",
Name: "",
AvatarURL: "",
},
tokenUsages: nil,
userPrompts: nil,
toolUsages: nil,
expected: codersdk.AIBridgeInterception{
ID: interceptionID,
Initiator: codersdk.MinimalUser{
ID: initiatorID,
Username: "minimaluser",
Name: "",
AvatarURL: "",
},
Provider: "openai",
Model: "gpt-4",
Metadata: nil,
StartedAt: now,
},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
result := db2sdk.AIBridgeInterception(
tc.interception,
tc.initiator,
tc.tokenUsages,
tc.userPrompts,
tc.toolUsages,
)
// Check basic fields.
require.Equal(t, tc.expected.ID, result.ID)
require.Equal(t, tc.expected.Initiator, result.Initiator)
require.Equal(t, tc.expected.Provider, result.Provider)
require.Equal(t, tc.expected.Model, result.Model)
require.Equal(t, tc.expected.StartedAt.UTC(), result.StartedAt.UTC())
require.Equal(t, tc.expected.Metadata, result.Metadata)
// Check optional pointer fields.
if tc.interception.APIKeyID.Valid {
require.NotNil(t, result.APIKeyID)
require.Equal(t, tc.interception.APIKeyID.String, *result.APIKeyID)
} else {
require.Nil(t, result.APIKeyID)
}
if tc.interception.EndedAt.Valid {
require.NotNil(t, result.EndedAt)
require.Equal(t, tc.interception.EndedAt.Time.UTC(), result.EndedAt.UTC())
} else {
require.Nil(t, result.EndedAt)
}
if tc.interception.Client.Valid {
require.NotNil(t, result.Client)
require.Equal(t, tc.interception.Client.String, *result.Client)
} else {
require.Nil(t, result.Client)
}
// Check slices.
require.Len(t, result.TokenUsages, len(tc.tokenUsages))
require.Len(t, result.UserPrompts, len(tc.userPrompts))
require.Len(t, result.ToolUsages, len(tc.toolUsages))
// Verify token usages are converted correctly.
for i, tu := range tc.tokenUsages {
require.Equal(t, tu.ID, result.TokenUsages[i].ID)
require.Equal(t, tu.InterceptionID, result.TokenUsages[i].InterceptionID)
require.Equal(t, tu.ProviderResponseID, result.TokenUsages[i].ProviderResponseID)
require.Equal(t, tu.InputTokens, result.TokenUsages[i].InputTokens)
require.Equal(t, tu.OutputTokens, result.TokenUsages[i].OutputTokens)
}
// Verify user prompts are converted correctly.
for i, up := range tc.userPrompts {
require.Equal(t, up.ID, result.UserPrompts[i].ID)
require.Equal(t, up.InterceptionID, result.UserPrompts[i].InterceptionID)
require.Equal(t, up.ProviderResponseID, result.UserPrompts[i].ProviderResponseID)
require.Equal(t, up.Prompt, result.UserPrompts[i].Prompt)
}
// Verify tool usages are converted correctly.
for i, toolUsage := range tc.toolUsages {
require.Equal(t, toolUsage.ID, result.ToolUsages[i].ID)
require.Equal(t, toolUsage.InterceptionID, result.ToolUsages[i].InterceptionID)
require.Equal(t, toolUsage.ProviderResponseID, result.ToolUsages[i].ProviderResponseID)
require.Equal(t, toolUsage.ServerUrl.String, result.ToolUsages[i].ServerURL)
require.Equal(t, toolUsage.Tool, result.ToolUsages[i].Tool)
require.Equal(t, toolUsage.Input, result.ToolUsages[i].Input)
require.Equal(t, toolUsage.Injected, result.ToolUsages[i].Injected)
require.Equal(t, toolUsage.InvocationError.String, result.ToolUsages[i].InvocationError)
}
})
}
dbAgent := database.WorkspaceAgent{
ID: agentID,
CreatedAt: now,
Name: "test-agent",
}
agent, err := db2sdk.WorkspaceAgent(
&tailcfg.DERPMap{},
mockCoord,
dbAgent,
nil, nil, nil,
time.Minute,
"",
)
require.NoError(t, err)
require.Len(t, agent.Connections, 2)
// Find connections by status since order is not guaranteed.
var ongoing, lost *codersdk.WorkspaceConnection
for i := range agent.Connections {
switch agent.Connections[i].Status {
case codersdk.ConnectionStatusOngoing:
ongoing = &agent.Connections[i]
case codersdk.ConnectionStatusControlLost:
lost = &agent.Connections[i]
}
}
require.NotNil(t, ongoing, "expected an ongoing connection")
require.NotNil(t, ongoing.IP, "expected ongoing IP to be set")
assert.Equal(t, "fd60:627a:a42b:102:304:506:708:90a", ongoing.IP.String())
assert.Equal(t, now, ongoing.CreatedAt)
require.NotNil(t, ongoing.ConnectedAt)
assert.Equal(t, now, *ongoing.ConnectedAt)
assert.Nil(t, ongoing.EndedAt)
require.NotNil(t, lost, "expected a control_lost connection")
require.NotNil(t, lost.IP, "expected lost IP to be set")
assert.Equal(t, "fd60:627a:a42b:aaaa:bbbb:cccc:dddd:eeee", lost.IP.String())
assert.Equal(t, now.Add(-time.Hour), lost.CreatedAt)
require.NotNil(t, lost.ConnectedAt)
assert.Equal(t, now.Add(-time.Hour), *lost.ConnectedAt)
assert.Nil(t, lost.EndedAt)
})
t.Run("NilNode", func(t *testing.T) {
t.Parallel()
agentID := uuid.New()
now := dbtime.Now()
ctrl := gomock.NewController(t)
mockCoord := tailnettest.NewMockCoordinator(ctrl)
mockCoord.EXPECT().Node(agentID).Return(nil)
mockCoord.EXPECT().TunnelPeers(agentID).Return([]*tailnet.TunnelPeerInfo{
{
ID: uuid.New(),
Name: "no-node-user",
Node: nil,
Status: tailnetproto.CoordinateResponse_PeerUpdate_NODE,
Start: now,
},
})
dbAgent := database.WorkspaceAgent{
ID: agentID,
CreatedAt: now,
Name: "test-agent",
}
agent, err := db2sdk.WorkspaceAgent(
&tailcfg.DERPMap{},
mockCoord,
dbAgent,
nil, nil, nil,
time.Minute,
"",
)
require.NoError(t, err)
require.Len(t, agent.Connections, 1)
assert.Nil(t, agent.Connections[0].IP, "IP should be nil when Node is nil")
assert.Equal(t, codersdk.ConnectionStatusOngoing, agent.Connections[0].Status)
})
t.Run("NoPeers", func(t *testing.T) {
t.Parallel()
agentID := uuid.New()
now := dbtime.Now()
ctrl := gomock.NewController(t)
mockCoord := tailnettest.NewMockCoordinator(ctrl)
mockCoord.EXPECT().Node(agentID).Return(nil)
mockCoord.EXPECT().TunnelPeers(agentID).Return(nil)
dbAgent := database.WorkspaceAgent{
ID: agentID,
CreatedAt: now,
Name: "test-agent",
}
agent, err := db2sdk.WorkspaceAgent(
&tailcfg.DERPMap{},
mockCoord,
dbAgent,
nil, nil, nil,
time.Minute,
"",
)
require.NoError(t, err)
assert.Empty(t, agent.Connections)
})
}
+19 -48
@@ -668,31 +668,6 @@ var (
}),
Scope: rbac.ScopeAll,
}.WithCachedASTValue()
subjectWorkspaceBuilder = rbac.Subject{
Type: rbac.SubjectTypeWorkspaceBuilder,
FriendlyName: "Workspace Builder",
ID: uuid.Nil.String(),
Roles: rbac.Roles([]rbac.Role{
{
Identifier: rbac.RoleIdentifier{Name: "workspace-builder"},
DisplayName: "Workspace Builder",
Site: rbac.Permissions(map[string][]policy.Action{
// Reading provisioner daemons to check eligibility.
rbac.ResourceProvisionerDaemon.Type: {policy.ActionRead},
// Updating provisioner jobs (e.g. marking prebuild
// jobs complete).
rbac.ResourceProvisionerJobs.Type: {policy.ActionUpdate},
// Reading provisioner state requires template update
// permission.
rbac.ResourceTemplate.Type: {policy.ActionUpdate},
}),
User: []rbac.Permission{},
ByOrgID: map[string]rbac.OrgPermissions{},
},
}),
Scope: rbac.ScopeAll,
}.WithCachedASTValue()
)
// AsProvisionerd returns a context with an actor that has permissions required
@@ -799,14 +774,6 @@ func AsBoundaryUsageTracker(ctx context.Context) context.Context {
return As(ctx, subjectBoundaryUsageTracker)
}
// AsWorkspaceBuilder returns a context with an actor that has permissions
// required for the workspace builder to prepare workspace builds. This
// includes reading provisioner daemons, updating provisioner jobs, and
// reading provisioner state (which requires template update permission).
func AsWorkspaceBuilder(ctx context.Context) context.Context {
return As(ctx, subjectWorkspaceBuilder)
}
var AsRemoveActor = rbac.Subject{
ID: "remove-actor",
}
@@ -2194,12 +2161,12 @@ func (q *querier) GetAPIKeyByName(ctx context.Context, arg database.GetAPIKeyByN
return fetch(q.log, q.auth, q.db.GetAPIKeyByName)(ctx, arg)
}
func (q *querier) GetAPIKeysByLoginType(ctx context.Context, loginType database.GetAPIKeysByLoginTypeParams) ([]database.APIKey, error) {
func (q *querier) GetAPIKeysByLoginType(ctx context.Context, loginType database.LoginType) ([]database.APIKey, error) {
return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.GetAPIKeysByLoginType)(ctx, loginType)
}
func (q *querier) GetAPIKeysByUserID(ctx context.Context, params database.GetAPIKeysByUserIDParams) ([]database.APIKey, error) {
return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.GetAPIKeysByUserID)(ctx, params)
return fetchWithPostFilter(q.auth, policy.ActionRead, q.db.GetAPIKeysByUserID)(ctx, database.GetAPIKeysByUserIDParams{LoginType: params.LoginType, UserID: params.UserID})
}
func (q *querier) GetAPIKeysLastUsedAfter(ctx context.Context, lastUsed time.Time) ([]database.APIKey, error) {
@@ -2290,7 +2257,7 @@ func (q *querier) GetAuditLogsOffset(ctx context.Context, arg database.GetAuditL
}
func (q *querier) GetAuthenticatedWorkspaceAgentAndBuildByAuthToken(ctx context.Context, authToken uuid.UUID) (database.GetAuthenticatedWorkspaceAgentAndBuildByAuthTokenRow, error) {
// This is a system function.
// This is a system function
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceSystem); err != nil {
return database.GetAuthenticatedWorkspaceAgentAndBuildByAuthTokenRow{}, err
}
@@ -2745,6 +2712,15 @@ func (q *querier) GetOAuthSigningKey(ctx context.Context) (string, error) {
return q.db.GetOAuthSigningKey(ctx)
}
func (q *querier) GetOngoingAgentConnectionsLast24h(ctx context.Context, arg database.GetOngoingAgentConnectionsLast24hParams) ([]database.GetOngoingAgentConnectionsLast24hRow, error) {
// This is a system-level read; authorization comes from the
// caller using dbauthz.AsSystemRestricted(ctx).
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceSystem); err != nil {
return nil, err
}
return q.db.GetOngoingAgentConnectionsLast24h(ctx, arg)
}
func (q *querier) GetOrganizationByID(ctx context.Context, id uuid.UUID) (database.Organization, error) {
return fetch(q.log, q.auth, q.db.GetOrganizationByID)(ctx, id)
}
@@ -3114,6 +3090,13 @@ func (q *querier) GetTailnetTunnelPeerBindings(ctx context.Context, srcID uuid.U
return q.db.GetTailnetTunnelPeerBindings(ctx, srcID)
}
+func (q *querier) GetTailnetTunnelPeerBindingsByDstID(ctx context.Context, dstID uuid.UUID) ([]database.GetTailnetTunnelPeerBindingsByDstIDRow, error) {
+if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTailnetCoordinator); err != nil {
+return nil, err
+}
+return q.db.GetTailnetTunnelPeerBindingsByDstID(ctx, dstID)
+}
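Per the commit messages, the enterprise pgCoord consumes this query when assembling TunnelPeers results: fetch bindings for the agent, unmarshal each stored proto node, then apply heartbeat filtering and NODE-beats-LOST selection. A hedged sketch of that shape; the function name, the row's Node field being raw proto bytes, and gproto aliasing google.golang.org/protobuf/proto are all assumptions, not code from this diff:

// Sketch only: fetch and decode tunnel peer bindings for one agent.
func tunnelPeerNodes(ctx context.Context, store database.Store, agentID uuid.UUID) ([]*proto.Node, error) {
	rows, err := store.GetTailnetTunnelPeerBindingsByDstID(dbauthz.AsSystemRestricted(ctx), agentID)
	if err != nil {
		return nil, xerrors.Errorf("get tunnel peer bindings by dst: %w", err)
	}
	nodes := make([]*proto.Node, 0, len(rows))
	for _, row := range rows {
		node := new(proto.Node)
		if err := gproto.Unmarshal(row.Node, node); err != nil {
			return nil, xerrors.Errorf("unmarshal peer node: %w", err)
		}
		// Heartbeat filtering and best-mapping selection (NODE beats
		// LOST) would happen here, per the pgCoord commit message.
		nodes = append(nodes, node)
	}
	return nodes, nil
}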
func (q *querier) GetTailnetTunnelPeerIDs(ctx context.Context, srcID uuid.UUID) ([]database.GetTailnetTunnelPeerIDsRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTailnetCoordinator); err != nil {
return nil, err
@@ -3166,13 +3149,6 @@ func (q *querier) GetTelemetryItems(ctx context.Context) ([]database.TelemetryIt
return q.db.GetTelemetryItems(ctx)
}
-func (q *querier) GetTelemetryTaskEvents(ctx context.Context, arg database.GetTelemetryTaskEventsParams) ([]database.GetTelemetryTaskEventsRow, error) {
-if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTask.All()); err != nil {
-return nil, err
-}
-return q.db.GetTelemetryTaskEvents(ctx, arg)
-}
func (q *querier) GetTemplateAppInsights(ctx context.Context, arg database.GetTemplateAppInsightsParams) ([]database.GetTemplateAppInsightsRow, error) {
if err := q.authorizeTemplateInsights(ctx, arg.TemplateIDs); err != nil {
return nil, err
@@ -3954,11 +3930,6 @@ func (q *querier) GetWorkspaceBuildParametersByBuildIDs(ctx context.Context, wor
return q.db.GetAuthorizedWorkspaceBuildParametersByBuildIDs(ctx, workspaceBuildIDs, prep)
}
-func (q *querier) GetWorkspaceBuildProvisionerStateByID(ctx context.Context, buildID uuid.UUID) (database.GetWorkspaceBuildProvisionerStateByIDRow, error) {
-// Fetching the provisioner state requires Update permission on the template.
-return fetchWithAction(q.log, q.auth, policy.ActionUpdate, q.db.GetWorkspaceBuildProvisionerStateByID)(ctx, buildID)
-}
func (q *querier) GetWorkspaceBuildStatsByTemplates(ctx context.Context, since time.Time) ([]database.GetWorkspaceBuildStatsByTemplatesRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceSystem); err != nil {
return nil, err
@@ -237,8 +237,8 @@ func (s *MethodTestSuite) TestAPIKey() {
s.Run("GetAPIKeysByLoginType", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
a := testutil.Fake(s.T(), faker, database.APIKey{LoginType: database.LoginTypePassword})
b := testutil.Fake(s.T(), faker, database.APIKey{LoginType: database.LoginTypePassword})
-dbm.EXPECT().GetAPIKeysByLoginType(gomock.Any(), database.GetAPIKeysByLoginTypeParams{LoginType: database.LoginTypePassword}).Return([]database.APIKey{a, b}, nil).AnyTimes()
-check.Args(database.GetAPIKeysByLoginTypeParams{LoginType: database.LoginTypePassword}).Asserts(a, policy.ActionRead, b, policy.ActionRead).Returns(slice.New(a, b))
+dbm.EXPECT().GetAPIKeysByLoginType(gomock.Any(), database.LoginTypePassword).Return([]database.APIKey{a, b}, nil).AnyTimes()
+check.Args(database.LoginTypePassword).Asserts(a, policy.ActionRead, b, policy.ActionRead).Returns(slice.New(a, b))
}))
s.Run("GetAPIKeysByUserID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
u1 := testutil.Fake(s.T(), faker, database.User{})
@@ -1326,11 +1326,6 @@ func (s *MethodTestSuite) TestTemplate() {
dbm.EXPECT().GetTemplateInsightsByTemplate(gomock.Any(), arg).Return([]database.GetTemplateInsightsByTemplateRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceTemplate, policy.ActionViewInsights)
}))
s.Run("GetTelemetryTaskEvents", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetTelemetryTaskEventsParams{}
dbm.EXPECT().GetTelemetryTaskEvents(gomock.Any(), arg).Return([]database.GetTelemetryTaskEventsRow{}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceTask.All(), policy.ActionRead)
}))
s.Run("GetTemplateAppInsights", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.GetTemplateAppInsightsParams{}
dbm.EXPECT().GetTemplateAppInsights(gomock.Any(), arg).Return([]database.GetTemplateAppInsightsRow{}, nil).AnyTimes()
@@ -1974,15 +1969,6 @@ func (s *MethodTestSuite) TestWorkspace() {
dbm.EXPECT().GetWorkspaceByID(gomock.Any(), ws.ID).Return(ws, nil).AnyTimes()
check.Args(build.ID).Asserts(ws, policy.ActionRead).Returns(build)
}))
s.Run("GetWorkspaceBuildProvisionerStateByID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
row := database.GetWorkspaceBuildProvisionerStateByIDRow{
ProvisionerState: []byte("state"),
TemplateID: uuid.New(),
TemplateOrganizationID: uuid.New(),
}
dbm.EXPECT().GetWorkspaceBuildProvisionerStateByID(gomock.Any(), gomock.Any()).Return(row, nil).AnyTimes()
check.Args(uuid.New()).Asserts(row, policy.ActionUpdate).Returns(row)
}))
s.Run("GetWorkspaceBuildByJobID", s.Mocked(func(dbm *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
ws := testutil.Fake(s.T(), faker, database.Workspace{})
build := testutil.Fake(s.T(), faker, database.WorkspaceBuild{WorkspaceID: ws.ID})
@@ -2855,6 +2841,10 @@ func (s *MethodTestSuite) TestTailnetFunctions() {
check.Args(uuid.New()).
Asserts(rbac.ResourceTailnetCoordinator, policy.ActionRead)
}))
s.Run("GetTailnetTunnelPeerBindingsByDstID", s.Subtest(func(_ database.Store, check *expects) {
check.Args(uuid.New()).
Asserts(rbac.ResourceTailnetCoordinator, policy.ActionRead)
}))
s.Run("GetTailnetTunnelPeerIDs", s.Subtest(func(_ database.Store, check *expects) {
check.Args(uuid.New()).
Asserts(rbac.ResourceTailnetCoordinator, policy.ActionRead)
@@ -67,8 +67,6 @@ type WorkspaceBuildBuilder struct {
jobError string // Error message for failed jobs
jobErrorCode string // Error code for failed jobs
-provisionerState []byte
}
// BuilderOption is a functional option for customizing job timestamps
@@ -140,15 +138,6 @@ func (b WorkspaceBuildBuilder) Seed(seed database.WorkspaceBuild) WorkspaceBuild
return b
}
-// ProvisionerState sets the provisioner state for the workspace build.
-// This is stored separately from the seed because ProvisionerState is
-// not part of the WorkspaceBuild view struct.
-func (b WorkspaceBuildBuilder) ProvisionerState(state []byte) WorkspaceBuildBuilder {
-//nolint: revive // returns modified struct
-b.provisionerState = state
-return b
-}
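For reference, the removed option slotted into the existing dbfake builder chain. A sketch of how a test used it; WorkspaceBuild, Seed, and Do are the builder's existing entry points, while the ws variable and the state payload are illustrative:

// Sketch of the removed option in use inside a test; the seeded
// workspace `ws` is assumed to exist already.
resp := dbfake.WorkspaceBuild(t, db).
	Seed(database.WorkspaceBuild{WorkspaceID: ws.ID}).
	ProvisionerState([]byte("fake-terraform-state")).
	Do()
_ = resp.Build.ID // the persisted state is attached to this build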
func (b WorkspaceBuildBuilder) Resource(resource ...*sdkproto.Resource) WorkspaceBuildBuilder {
//nolint: revive // returns modified struct
b.resources = append(b.resources, resource...)
@@ -475,14 +464,6 @@ func (b WorkspaceBuildBuilder) doInTX() WorkspaceResponse {
}
resp.Build = dbgen.WorkspaceBuild(b.t, b.db, b.seed)
-if len(b.provisionerState) > 0 {
-err = b.db.UpdateWorkspaceBuildProvisionerStateByID(ownerCtx, database.UpdateWorkspaceBuildProvisionerStateByIDParams{
-ID: resp.Build.ID,
-UpdatedAt: dbtime.Now(),
-ProvisionerState: b.provisionerState,
-})
-require.NoError(b.t, err, "update provisioner state")
-}
b.logger.Debug(context.Background(), "created workspace build",
slog.F("build_id", resp.Build.ID),
slog.F("workspace_id", resp.Workspace.ID),
