Compare commits


4 Commits

Author SHA1 Message Date
Danielle Maywood 1fcd2a2ed9 chore: fix formatting 2025-11-21 19:03:11 +00:00
Danielle Maywood b42b136360 fix: bad formatting 2025-11-21 16:13:31 +00:00
Danielle Maywood 5edf08f8a4 test(agent): add delete and recreate devcontainer test
Add comprehensive test that validates the delete and recreate flow:
1. Verifies devcontainer exists initially
2. Deletes the devcontainer via DELETE endpoint
3. Confirms it's removed from the list
4. Simulates container reappearing (e.g., manually recreated)
5. Triggers updater loop to rediscover the container
6. Verifies devcontainer is recreated with new ID

Also fixes existing delete tests to properly control the updater loop
using ticker traps to prevent race conditions.
2025-11-21 15:56:37 +00:00
Danielle Maywood 5fdac48e37 feat(agent): add devcontainer delete endpoint
Adds DELETE /api/v0/containers/devcontainers/{devcontainer} endpoint
to the agent API for removing devcontainers.

Changes:
- Add handleDevcontainerDelete handler that:
  - Validates devcontainer ID and prevents deletion while starting
  - Stops sub-agent process and cleans up all internal state
  - Broadcasts update to connected clients
- Update AgentConn interface with DeleteDevcontainer method
- Add comprehensive tests covering all scenarios

This is the agent-side foundation for the devcontainer delete feature.
The coderd API layer will be added in a follow-up change.
2025-11-21 15:16:50 +00:00
559 changed files with 10259 additions and 27798 deletions
-126
View File
@@ -1,126 +0,0 @@
# Coder Architecture
This document provides an overview of Coder's architecture and core systems.
## What is Coder?
Coder is a platform for creating, managing, and using remote development environments (also known as Cloud Development Environments or CDEs). It leverages Terraform to define and provision these environments, which are referred to as "workspaces" within the project. The system is designed to be extensible, secure, and provide developers with a seamless remote development experience.
## Core Architecture
The heart of Coder is a control plane that orchestrates the creation and management of workspaces. This control plane interacts with separate Provisioner processes over gRPC to handle workspace builds. The Provisioners consume workspace definitions and use Terraform to create the actual infrastructure.
The CLI package serves dual purposes: it can launch the control plane itself, and it provides client functionality for users to interact with an existing control plane instance. All user-facing frontend code is developed in TypeScript using React and lives in the `site/` directory.
The database layer uses PostgreSQL with SQLC for generating type-safe database code. Database migrations are carefully managed to ensure both forward and backward compatibility through paired `.up.sql` and `.down.sql` files.
## API Design
Coder's API architecture combines REST and gRPC approaches. The REST API is defined in `coderd/coderd.go` and uses Chi for HTTP routing. This provides the primary interface for the frontend and external integrations.
Internal communication with Provisioners occurs over gRPC, with service definitions maintained in `.proto` files. This separation allows for efficient binary communication with the components responsible for infrastructure management while providing a standard REST interface for human-facing applications.
## Network Architecture
Coder implements a secure networking layer based on Tailscale's WireGuard implementation. The `tailnet` package provides connectivity between workspace agents and clients through DERP (Designated Encrypted Relay for Packets) servers when direct connections aren't possible. This creates a secure overlay network allowing access to workspaces regardless of network topology, firewalls, or NAT configurations.
### Tailnet and DERP System
The networking system has three key components:
1. **Tailnet**: An overlay network implemented in the `tailnet` package that provides secure, end-to-end encrypted connections between clients, the Coder server, and workspace agents.
2. **DERP Servers**: These relay traffic when direct connections aren't possible. Coder provides several options:
- A built-in DERP server that runs on the Coder control plane
- Integration with Tailscale's global DERP infrastructure
- Support for custom DERP servers for lower latency or offline deployments
3. **Direct Connections**: When possible, the system establishes peer-to-peer connections between clients and workspaces using STUN for NAT traversal. This requires both endpoints to send UDP traffic on ephemeral ports.
### Workspace Proxies
Workspace proxies (in the Enterprise edition) provide regional relay points for browser-based connections, reducing latency for geo-distributed teams. Key characteristics:
- Deployed as independent servers that authenticate with the Coder control plane
- Relay connections for SSH, workspace apps, port forwarding, and web terminals
- Do not make direct database connections
- Managed through the `coder wsproxy` commands
- Implemented primarily in the `enterprise/wsproxy/` package
## Agent System
The workspace agent runs within each provisioned workspace and provides core functionality including:
- SSH access to workspaces via the `agentssh` package
- Port forwarding
- Terminal connectivity via the `pty` package for pseudo-terminal support
- Application serving
- Healthcheck monitoring
- Resource usage reporting
Agents communicate with the control plane using the tailnet system and authenticate using secure tokens.
## Workspace Applications
Workspace applications (or "apps") provide browser-based access to services running within workspaces. The system supports:
- HTTP(S) and WebSocket connections
- Path-based or subdomain-based access URLs
- Health checks to monitor application availability
- Different sharing levels (owner-only, authenticated users, or public)
- Custom icons and display settings
The implementation is primarily in the `coderd/workspaceapps/` directory with components for URL generation, proxying connections, and managing application state.
## Implementation Details
The project structure separates frontend and backend concerns. React components and pages are organized in the `site/src/` directory, with Jest used for testing. The backend is primarily written in Go, with a strong emphasis on error handling patterns and test coverage.
Database interactions are carefully managed through migrations in `coderd/database/migrations/` and queries in `coderd/database/queries/`. All new queries require proper database authorization (dbauthz) implementation to ensure that only users with appropriate permissions can access specific resources.
## Authorization System
The database authorization (dbauthz) system enforces fine-grained access control across all database operations. It uses role-based access control (RBAC) to validate user permissions before executing database operations. The `dbauthz` package wraps the database store and performs authorization checks before returning data. All database operations must pass through this layer to ensure security.
## Testing Framework
The codebase has a comprehensive testing approach with several key components:
1. **Parallel Testing**: All tests must use `t.Parallel()` to run concurrently, which improves test suite performance and helps identify race conditions.
2. **coderdtest Package**: This package in `coderd/coderdtest/` provides utilities for creating test instances of the Coder server, setting up test users and workspaces, and mocking external components.
3. **Integration Tests**: Tests often span multiple components to verify system behavior, such as template creation, workspace provisioning, and agent connectivity.
4. **Enterprise Testing**: Enterprise features have dedicated test utilities in the `coderdenttest` package.
## Open Source and Enterprise Components
The repository contains both open source and enterprise components:
- Enterprise code lives primarily in the `enterprise/` directory
- Enterprise features focus on governance, scalability (high availability), and advanced deployment options like workspace proxies
- The boundary between open source and enterprise is managed through a licensing system
- The same core codebase supports both editions, with enterprise features conditionally enabled
## Development Philosophy
Coder emphasizes clear error handling, with specific patterns required:
- Concise error messages that avoid phrases like "failed to"
- Wrapping errors with `%w` to maintain error chains
- Using sentinel errors with the "err" prefix (e.g., `errNotFound`)
All tests should run in parallel using `t.Parallel()` to ensure efficient testing and expose potential race conditions. The codebase is rigorously linted with golangci-lint to maintain consistent code quality.
Git contributions follow a standard format with commit messages structured as `type: <message>`, where type is one of `feat`, `fix`, or `chore`.
## Development Workflow
Development can be initiated using `scripts/develop.sh` to start the application after making changes. Database schema updates should be performed through the migration system using `create_migration.sh <name>` to generate migration files, with each `.up.sql` migration paired with a corresponding `.down.sql` that properly reverts all changes.
If the development database gets into a bad state, it can be completely reset by removing the PostgreSQL data directory with `rm -rf .coderv2/postgres`. This will destroy all data in the development database, requiring you to recreate any test users, templates, or workspaces after restarting the application.
Code generation for the database layer uses `coderd/database/generate.sh`, and developers should refer to `sqlc.yaml` for the appropriate style and patterns to follow when creating new queries or tables.
The focus should always be on maintaining security through proper database authorization, clean error handling, and comprehensive test coverage to ensure the platform remains robust and reliable.
-1
View File
@@ -1 +0,0 @@
AGENTS.md
+124
View File
@@ -0,0 +1,124 @@
# Cursor Rules
This project is called "Coder" - an application for managing remote development environments.
Coder provides a platform for creating, managing, and using remote development environments (also known as Cloud Development Environments or CDEs). It leverages Terraform to define and provision these environments, which are referred to as "workspaces" within the project. The system is designed to be extensible, secure, and provide developers with a seamless remote development experience.
## Core Architecture
The heart of Coder is a control plane that orchestrates the creation and management of workspaces. This control plane interacts with separate Provisioner processes over gRPC to handle workspace builds. The Provisioners consume workspace definitions and use Terraform to create the actual infrastructure.
The CLI package serves dual purposes: it can launch the control plane itself, and it provides client functionality for users to interact with an existing control plane instance. All user-facing frontend code is developed in TypeScript using React and lives in the `site/` directory.
The database layer uses PostgreSQL with SQLC for generating type-safe database code. Database migrations are carefully managed to ensure both forward and backward compatibility through paired `.up.sql` and `.down.sql` files.
## API Design
Coder's API architecture combines REST and gRPC approaches. The REST API is defined in `coderd/coderd.go` and uses Chi for HTTP routing. This provides the primary interface for the frontend and external integrations.
Internal communication with Provisioners occurs over gRPC, with service definitions maintained in `.proto` files. This separation allows for efficient binary communication with the components responsible for infrastructure management while providing a standard REST interface for human-facing applications.
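As a rough sketch of the Chi routing style described above, assuming placeholder paths and handlers rather than Coder's real API surface:
```go
package example

import (
	"net/http"

	"github.com/go-chi/chi/v5"
)

// newRouter shows the general shape of Chi route registration; the
// route and handler below are illustrative placeholders only.
func newRouter() http.Handler {
	r := chi.NewRouter()
	r.Route("/api/v2", func(r chi.Router) {
		r.Get("/healthz", func(w http.ResponseWriter, _ *http.Request) {
			w.WriteHeader(http.StatusOK)
			_, _ = w.Write([]byte("ok"))
		})
	})
	return r
}
```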
## Network Architecture
Coder implements a secure networking layer based on Tailscale's WireGuard implementation. The `tailnet` package provides connectivity between workspace agents and clients through DERP (Designated Encrypted Relay for Packets) servers when direct connections aren't possible. This creates a secure overlay network allowing access to workspaces regardless of network topology, firewalls, or NAT configurations.
### Tailnet and DERP System
The networking system has three key components:
1. **Tailnet**: An overlay network implemented in the `tailnet` package that provides secure, end-to-end encrypted connections between clients, the Coder server, and workspace agents.
2. **DERP Servers**: These relay traffic when direct connections aren't possible. Coder provides several options:
- A built-in DERP server that runs on the Coder control plane
- Integration with Tailscale's global DERP infrastructure
- Support for custom DERP servers for lower latency or offline deployments
3. **Direct Connections**: When possible, the system establishes peer-to-peer connections between clients and workspaces using STUN for NAT traversal. This requires both endpoints to send UDP traffic on ephemeral ports.
### Workspace Proxies
Workspace proxies (in the Enterprise edition) provide regional relay points for browser-based connections, reducing latency for geo-distributed teams. Key characteristics:
- Deployed as independent servers that authenticate with the Coder control plane
- Relay connections for SSH, workspace apps, port forwarding, and web terminals
- Do not make direct database connections
- Managed through the `coder wsproxy` commands
- Implemented primarily in the `enterprise/wsproxy/` package
## Agent System
The workspace agent runs within each provisioned workspace and provides core functionality including:
- SSH access to workspaces via the `agentssh` package
- Port forwarding
- Terminal connectivity via the `pty` package for pseudo-terminal support
- Application serving
- Healthcheck monitoring
- Resource usage reporting
Agents communicate with the control plane using the tailnet system and authenticate using secure tokens.
## Workspace Applications
Workspace applications (or "apps") provide browser-based access to services running within workspaces. The system supports:
- HTTP(S) and WebSocket connections
- Path-based or subdomain-based access URLs
- Health checks to monitor application availability
- Different sharing levels (owner-only, authenticated users, or public)
- Custom icons and display settings
The implementation is primarily in the `coderd/workspaceapps/` directory with components for URL generation, proxying connections, and managing application state.
## Implementation Details
The project structure separates frontend and backend concerns. React components and pages are organized in the `site/src/` directory, with Jest used for testing. The backend is primarily written in Go, with a strong emphasis on error handling patterns and test coverage.
Database interactions are carefully managed through migrations in `coderd/database/migrations/` and queries in `coderd/database/queries/`. All new queries require proper database authorization (dbauthz) implementation to ensure that only users with appropriate permissions can access specific resources.
## Authorization System
The database authorization (dbauthz) system enforces fine-grained access control across all database operations. It uses role-based access control (RBAC) to validate user permissions before executing database operations. The `dbauthz` package wraps the database store and performs authorization checks before returning data. All database operations must pass through this layer to ensure security.
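A minimal sketch of the wrapping pattern described here, assuming a simplified store and authorizer interface; the real `dbauthz` layer works against the generated database store and Coder's RBAC types, which are not reproduced in this example.
```go
package example

import (
	"context"
	"errors"
)

// Workspace and Store are simplified stand-ins for the generated
// database types, used only to illustrate the wrapping pattern.
type Workspace struct{ ID, OwnerID string }

type Store interface {
	GetWorkspaceByID(ctx context.Context, id string) (Workspace, error)
}

// Authorizer is a placeholder for the RBAC authorizer.
type Authorizer interface {
	Authorize(ctx context.Context, action, object string) error
}

var errUnauthorized = errors.New("unauthorized")

// authzStore checks permissions before delegating to the wrapped store.
type authzStore struct {
	db   Store
	auth Authorizer
}

func (s *authzStore) GetWorkspaceByID(ctx context.Context, id string) (Workspace, error) {
	if err := s.auth.Authorize(ctx, "read", "workspace/"+id); err != nil {
		return Workspace{}, errUnauthorized
	}
	return s.db.GetWorkspaceByID(ctx, id)
}
```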
## Testing Framework
The codebase has a comprehensive testing approach with several key components:
1. **Parallel Testing**: All tests must use `t.Parallel()` to run concurrently, which improves test suite performance and helps identify race conditions.
2. **coderdtest Package**: This package in `coderd/coderdtest/` provides utilities for creating test instances of the Coder server, setting up test users and workspaces, and mocking external components.
3. **Integration Tests**: Tests often span multiple components to verify system behavior, such as template creation, workspace provisioning, and agent connectivity.
4. **Enterprise Testing**: Enterprise features have dedicated test utilities in the `coderdenttest` package.
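A skeletal example of the parallel-testing convention with table-driven subtests; the setup and assertions are placeholders, since real tests would use the `coderdtest` helpers to stand up a server and workspace.
```go
package example_test

import "testing"

func TestWorkspaceLifecycle(t *testing.T) {
	t.Parallel() // Required for every test in the codebase.

	cases := []struct{ name string }{
		{name: "Create"},
		{name: "Stop"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // Subtests run concurrently as well.
			// Real tests would create a test server, a user, and a
			// workspace here, then assert on the behavior under test.
		})
	}
}
```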
## Open Source and Enterprise Components
The repository contains both open source and enterprise components:
- Enterprise code lives primarily in the `enterprise/` directory
- Enterprise features focus on governance, scalability (high availability), and advanced deployment options like workspace proxies
- The boundary between open source and enterprise is managed through a licensing system
- The same core codebase supports both editions, with enterprise features conditionally enabled
## Development Philosophy
Coder emphasizes clear error handling, with specific patterns required:
- Concise error messages that avoid phrases like "failed to"
- Wrapping errors with `%w` to maintain error chains
- Using sentinel errors with the "err" prefix (e.g., `errNotFound`)
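A small sketch of these error-handling conventions, using `golang.org/x/xerrors` for wrapping (which is what the Go code elsewhere in this diff uses); the function and file path are illustrative.
```go
package example

import (
	"os"

	"golang.org/x/xerrors"
)

// errNotFound is a sentinel error using the "err" prefix convention.
var errNotFound = xerrors.New("config not found")

// readConfig keeps messages concise (no "failed to" phrasing) and
// wraps the underlying error with %w to preserve the chain.
func readConfig(path string) ([]byte, error) {
	b, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil, errNotFound
	}
	if err != nil {
		return nil, xerrors.Errorf("read config %q: %w", path, err)
	}
	return b, nil
}
```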
All tests should run in parallel using `t.Parallel()` to ensure efficient testing and expose potential race conditions. The codebase is rigorously linted with golangci-lint to maintain consistent code quality.
Git contributions follow a standard format with commit messages structured as `type: <message>`, where type is one of `feat`, `fix`, or `chore`.
## Development Workflow
Development can be initiated using `scripts/develop.sh` to start the application after making changes. Database schema updates should be performed through the migration system using `create_migration.sh <name>` to generate migration files, with each `.up.sql` migration paired with a corresponding `.down.sql` that properly reverts all changes.
If the development database gets into a bad state, it can be completely reset by removing the PostgreSQL data directory with `rm -rf .coderv2/postgres`. This will destroy all data in the development database, requiring you to recreate any test users, templates, or workspaces after restarting the application.
Code generation for the database layer uses `coderd/database/generate.sh`, and developers should refer to `sqlc.yaml` for the appropriate style and patterns to follow when creating new queries or tables.
The focus should always be on maintaining security through proper database authorization, clean error handling, and comprehensive test coverage to ensure the platform remains robust and reliable.
+1 -1
View File
@@ -27,7 +27,7 @@ ignorePatterns:
- pattern: "splunk.com"
- pattern: "stackoverflow.com/questions"
- pattern: "developer.hashicorp.com/terraform/language"
- pattern: "platform.openai.com"
- pattern: "platform.openai.com/docs/api-reference"
- pattern: "api.openai.com"
aliveStatusCodes:
- 200
+4 -6
View File
@@ -6,8 +6,6 @@ updates:
interval: "weekly"
time: "06:00"
timezone: "America/Chicago"
cooldown:
default-days: 7
labels: []
commit-message:
prefix: "ci"
@@ -70,8 +68,8 @@ updates:
interval: "monthly"
time: "06:00"
timezone: "America/Chicago"
cooldown:
default-days: 7
reviewers:
- "coder/ts"
commit-message:
prefix: "chore"
labels: []
@@ -121,9 +119,9 @@ updates:
commit-message:
prefix: "chore"
groups:
coder-modules:
coder:
patterns:
- "coder/*/coder"
- "registry.coder.com/coder/*/coder"
labels: []
ignore:
- dependency-name: "*"
+20 -28
View File
@@ -40,7 +40,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -124,7 +124,7 @@ jobs:
# runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
# steps:
# - name: Checkout
# uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
# uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# with:
# fetch-depth: 1
# # See: https://github.com/stefanzweifel/git-auto-commit-action?tab=readme-ov-file#commits-made-by-this-action-do-not-trigger-new-workflow-runs
@@ -162,7 +162,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -191,7 +191,7 @@ jobs:
# Check for any typos
- name: Check for typos
uses: crate-ci/typos@2d0ce569feab1f8752f1dde43cc2f2aa53236e06 # v1.40.0
uses: crate-ci/typos@626c4bedb751ce0b7f03262ca97ddda9a076ae1c # v1.39.2
with:
config: .github/workflows/typos.toml
@@ -240,7 +240,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -297,7 +297,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -369,7 +369,7 @@ jobs:
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -537,7 +537,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -586,7 +586,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -646,7 +646,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -673,7 +673,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -706,7 +706,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -756,14 +756,6 @@ jobs:
path: ./site/test-results/**/*.webm
retention-days: 7
- name: Upload debug log
if: always() && github.actor != 'dependabot[bot]' && runner.os == 'Linux' && !github.event.pull_request.head.repo.fork
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: coderd-debug-logs${{ matrix.variant.premium && '-premium' || '' }}
path: ./site/e2e/test-results/debug.log
retention-days: 7
- name: Upload pprof dumps
if: always() && github.actor != 'dependabot[bot]' && runner.os == 'Linux' && !github.event.pull_request.head.repo.fork
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
@@ -786,7 +778,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
# 👇 Ensures Chromatic can read your full git history
fetch-depth: 0
@@ -802,7 +794,7 @@ jobs:
# the check to pass. This is desired in PRs, but not in mainline.
- name: Publish to Chromatic (non-mainline)
if: github.ref != 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
uses: chromaui/action@ac86f2ff0a458ffbce7b40698abd44c0fa34d4b6 # v13.3.3
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -834,7 +826,7 @@ jobs:
# infinitely "in progress" in mainline unless we re-review each build.
- name: Publish to Chromatic (mainline)
if: github.ref == 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
uses: chromaui/action@ac86f2ff0a458ffbce7b40698abd44c0fa34d4b6 # v13.3.3
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -867,7 +859,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
# 0 is required here for version.sh to work.
fetch-depth: 0
@@ -971,7 +963,7 @@ jobs:
steps:
# Harden Runner doesn't work on macOS
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -1058,7 +1050,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -1113,7 +1105,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -1510,7 +1502,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
+1 -9
View File
@@ -28,7 +28,6 @@ jobs:
github-token: "${{ secrets.GITHUB_TOKEN }}"
- name: Approve the PR
if: steps.metadata.outputs.package-ecosystem != 'github-actions'
run: |
echo "Approving $PR_URL"
gh pr review --approve "$PR_URL"
@@ -37,7 +36,6 @@ jobs:
GH_TOKEN: ${{secrets.GITHUB_TOKEN}}
- name: Enable auto-merge
if: steps.metadata.outputs.package-ecosystem != 'github-actions'
run: |
echo "Enabling auto-merge for $PR_URL"
gh pr merge --auto --squash "$PR_URL"
@@ -47,11 +45,6 @@ jobs:
- name: Send Slack notification
run: |
if [ "$PACKAGE_ECOSYSTEM" = "github-actions" ]; then
STATUS_TEXT=":pr-opened: Dependabot opened PR #${PR_NUMBER} (GitHub Actions changes are not auto-merged)"
else
STATUS_TEXT=":pr-merged: Auto merge enabled for Dependabot PR #${PR_NUMBER}"
fi
curl -X POST -H 'Content-type: application/json' \
--data '{
"username": "dependabot",
@@ -61,7 +54,7 @@ jobs:
"type": "header",
"text": {
"type": "plain_text",
"text": "'"${STATUS_TEXT}"'",
"text": ":pr-merged: Auto merge enabled for Dependabot PR #'"${PR_NUMBER}"'",
"emoji": true
}
},
@@ -91,7 +84,6 @@ jobs:
}' "${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}"
env:
SLACK_WEBHOOK: ${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}
PACKAGE_ECOSYSTEM: ${{ steps.metadata.outputs.package-ecosystem }}
PR_NUMBER: ${{ github.event.pull_request.number }}
PR_TITLE: ${{ github.event.pull_request.title }}
PR_URL: ${{ github.event.pull_request.html_url }}
+4 -4
View File
@@ -41,7 +41,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -70,7 +70,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -92,7 +92,7 @@ jobs:
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
- name: Set up Flux CLI
uses: fluxcd/flux2/action@8454b02a32e48d775b9f563cb51fdcb1787b5b93 # v2.7.5
uses: fluxcd/flux2/action@b6e76ca2534f76dcb8dd94fb057cdfa923c3b641 # v2.7.3
with:
# Keep this and the github action up to date with the version of flux installed in dogfood cluster
version: "2.7.0"
@@ -151,7 +151,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
-205
View File
@@ -1,205 +0,0 @@
# This workflow checks if a PR requires documentation updates.
# It creates a Coder Task that uses AI to analyze the PR changes,
# search existing docs, and comment with recommendations.
#
# Triggered by: Adding the "doc-check" label to a PR, or manual dispatch.
name: AI Documentation Check
on:
pull_request:
types:
- labeled
workflow_dispatch:
inputs:
pr_url:
description: "Pull Request URL to check"
required: true
type: string
template_preset:
description: "Template preset to use"
required: false
default: ""
type: string
jobs:
doc-check:
name: Analyze PR for Documentation Updates Needed
runs-on: ubuntu-latest
if: |
(github.event.label.name == 'doc-check' || github.event_name == 'workflow_dispatch') &&
(github.event.pull_request.draft == false || github.event_name == 'workflow_dispatch')
timeout-minutes: 30
env:
CODER_URL: ${{ secrets.DOC_CHECK_CODER_URL }}
CODER_SESSION_TOKEN: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
permissions:
contents: read
pull-requests: write
actions: write
steps:
- name: Determine PR Context
id: determine-context
env:
GITHUB_ACTOR: ${{ github.actor }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_PR_HTML_URL: ${{ github.event.pull_request.html_url }}
GITHUB_EVENT_PR_NUMBER: ${{ github.event.pull_request.number }}
GITHUB_EVENT_SENDER_ID: ${{ github.event.sender.id }}
GITHUB_EVENT_SENDER_LOGIN: ${{ github.event.sender.login }}
INPUTS_PR_URL: ${{ inputs.pr_url }}
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || '' }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"
# For workflow_dispatch, use the provided PR URL
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
exit 1
fi
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${INPUTS_PR_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${INPUTS_PR_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
# Extract PR number from URL for later use
PR_NUMBER=$(echo "${INPUTS_PR_URL}" | grep -oP '(?<=pull/)\d+')
echo "pr_number=${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
elif [[ "${GITHUB_EVENT_NAME}" == "pull_request" ]]; then
GITHUB_USER_ID=${GITHUB_EVENT_SENDER_ID}
echo "Using label adder: ${GITHUB_EVENT_SENDER_LOGIN} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_EVENT_SENDER_LOGIN}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${GITHUB_EVENT_PR_HTML_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${GITHUB_EVENT_PR_HTML_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
echo "pr_number=${GITHUB_EVENT_PR_NUMBER}" >> "${GITHUB_OUTPUT}"
else
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
exit 1
fi
- name: Extract changed files and build prompt
id: extract-context
env:
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Analyzing PR #${PR_NUMBER}"
# Build task prompt - using unquoted heredoc so variables expand
TASK_PROMPT=$(cat <<EOF
Review PR #${PR_NUMBER} and determine if documentation needs updating or creating.
PR URL: ${PR_URL}
WORKFLOW:
1. Setup (repo is pre-cloned at ~/coder)
cd ~/coder
git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER}
git checkout pr-${PR_NUMBER}
2. Get PR info
Use GitHub MCP tools to get PR title, body, and diff
Or use: git diff main...pr-${PR_NUMBER}
3. Understand Changes
Read the diff and identify what changed
Ask: Is this user-facing? Does it change behavior? Is it a new feature?
4. Search for Related Docs
cat ~/coder/docs/manifest.json | jq '.routes[] | {title, path}' | head -50
grep -ri "relevant_term" ~/coder/docs/ --include="*.md"
5. Decide
NEEDS DOCS if: New feature, API change, CLI change, behavior change, user-visible
NO DOCS if: Internal refactor, test-only, already documented, non-user-facing, dependency updates
FIRST check: Did this PR already update docs? If yes and complete, say "No Changes Needed"
6. Comment on the PR using this format
COMMENT FORMAT:
## 📚 Documentation Check
### ✅ Updates Needed
- **[docs/path/file.md](github_link)** - Brief what needs changing
### 📝 New Docs Needed
- **docs/suggested/location.md** - What should be documented
### ✨ No Changes Needed
[Reason: Documents already updated in PR | Internal changes only | Test-only | No user-facing impact]
---
*This comment was generated by an AI Agent through [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
DOCS STRUCTURE:
Read ~/coder/docs/manifest.json for the complete documentation structure.
Common areas include: reference/, admin/, user-guides/, ai-coder/, install/, tutorials/
But check manifest.json - it has everything.
EOF
)
# Output the prompt
{
echo "task_prompt<<EOFOUTPUT"
echo "${TASK_PROMPT}"
echo "EOFOUTPUT"
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
persist-credentials: false
ref: main
repository: coder/create-task-action
- name: Create Coder Task for Documentation Check
id: create_task
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.DOC_CHECK_CODER_URL }}
coder-token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
coder-organization: "default"
coder-template-name: coder
coder-template-preset: ${{ steps.determine-context.outputs.template_preset }}
coder-task-name-prefix: doc-check
coder-task-prompt: ${{ steps.extract-context.outputs.task_prompt }}
github-user-id: ${{ steps.determine-context.outputs.github_user_id }}
github-token: ${{ github.token }}
github-issue-url: ${{ steps.determine-context.outputs.pr_url }}
comment-on-issue: true
- name: Write outputs
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_URL: ${{ steps.create_task.outputs.task-url }}
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
run: |
{
echo "## Documentation Check Task"
echo ""
echo "**PR:** ${PR_URL}"
echo "**Task created:** ${TASK_CREATED}"
echo "**Task name:** ${TASK_NAME}"
echo "**Task URL:** ${TASK_URL}"
echo ""
echo "The Coder task is analyzing the PR changes and will comment with documentation recommendations."
} >> "${GITHUB_STEP_SUMMARY}"
+1 -1
View File
@@ -43,7 +43,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
+2 -2
View File
@@ -23,14 +23,14 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- name: Setup Node
uses: ./.github/actions/setup-node
- uses: tj-actions/changed-files@abdd2f68ea150cee8f236d4a9fb4e0f2491abf1b # v45.0.7
- uses: tj-actions/changed-files@70069877f29101175ed2b055d210fe8b1d54d7d7 # v45.0.7
id: changed-files
with:
files: |
+2 -2
View File
@@ -31,7 +31,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
@@ -130,7 +130,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
+1 -1
View File
@@ -53,7 +53,7 @@ jobs:
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
+4 -4
View File
@@ -44,7 +44,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
@@ -81,7 +81,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -233,7 +233,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -337,7 +337,7 @@ jobs:
kubectl create namespace "pr${PR_NUMBER}"
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
+4 -4
View File
@@ -65,7 +65,7 @@ jobs:
steps:
# Harden Runner doesn't work on macOS.
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -169,7 +169,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -888,7 +888,7 @@ jobs:
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -976,7 +976,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
+2 -2
View File
@@ -25,7 +25,7 @@ jobs:
egress-policy: audit
- name: "Checkout code"
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
@@ -47,6 +47,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@fe4161a26a8629af62121b670040955b330f9af2 # v3.29.5
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
with:
sarif_file: results.sarif
+5 -5
View File
@@ -32,7 +32,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
@@ -40,7 +40,7 @@ jobs:
uses: ./.github/actions/setup-go
- name: Initialize CodeQL
uses: github/codeql-action/init@fe4161a26a8629af62121b670040955b330f9af2 # v3.29.5
uses: github/codeql-action/init@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
with:
languages: go, javascript
@@ -50,7 +50,7 @@ jobs:
rm Makefile
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@fe4161a26a8629af62121b670040955b330f9af2 # v3.29.5
uses: github/codeql-action/analyze@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
- name: Send Slack notification on failure
if: ${{ failure() }}
@@ -74,7 +74,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -154,7 +154,7 @@ jobs:
severity: "CRITICAL,HIGH"
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@fe4161a26a8629af62121b670040955b330f9af2 # v3.29.5
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
with:
sarif_file: trivy-results.sarif
category: "Trivy"
+1 -1
View File
@@ -101,7 +101,7 @@ jobs:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- name: Run delete-old-branches-action
+1 -1
View File
@@ -153,7 +153,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
-1
View File
@@ -9,7 +9,6 @@ IST = "IST"
MacOS = "macOS"
AKS = "AKS"
O_WRONLY = "O_WRONLY"
AIBridge = "AI Bridge"
[default.extend-words]
AKS = "AKS"
+1 -1
View File
@@ -26,7 +26,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
-3
View File
@@ -90,9 +90,6 @@ __debug_bin*
**/.claude/settings.local.json
# Local agent configuration
AGENTS.local.md
/.env
# Ignore plans written by AI agents.
-3
View File
@@ -1,3 +0,0 @@
{
"ignores": ["PLAN.md"],
}
-57
View File
@@ -140,70 +140,13 @@ seems like it should use `time.Sleep`, read through https://github.com/coder/qua
- Follow [Uber Go Style Guide](https://github.com/uber-go/guide/blob/master/style.md)
- Commit format: `type(scope): message`
### Writing Comments
Code comments should be clear, well-formatted, and add meaningful context.
**Proper sentence structure**: Comments are sentences and should end with
periods or other appropriate punctuation. This improves readability and
maintains professional code standards.
**Explain why, not what**: Good comments explain the reasoning behind code
rather than describing what the code does. The code itself should be
self-documenting through clear naming and structure. Focus your comments on
non-obvious decisions, edge cases, or business logic that isn't immediately
apparent from reading the implementation.
**Line length and wrapping**: Keep comment lines to 80 characters wide
(including the comment prefix like `//` or `#`). When a comment spans multiple
lines, wrap it naturally at word boundaries rather than writing one sentence
per line. This creates more readable, paragraph-like blocks of documentation.
```go
// Good: Explains the rationale with proper sentence structure.
// We need a custom timeout here because workspace builds can take several
// minutes on slow networks, and the default 30s timeout causes false
// failures during initial template imports.
ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
// Bad: Describes what the code does without punctuation or wrapping
// Set a custom timeout
// Workspace builds can take a long time
// Default timeout is too short
ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
```
### Avoid Unnecessary Changes
When fixing a bug or adding a feature, don't modify code unrelated to your
task. Unnecessary changes make PRs harder to review and can introduce
regressions.
**Don't reword existing comments or code** unless the change is directly
motivated by your task. Rewording comments to be shorter or "cleaner" wastes
reviewer time and clutters the diff.
**Don't delete existing comments** that explain non-obvious behavior. These
comments preserve important context about why code works a certain way.
**When adding tests for new behavior**, add new test cases instead of modifying
existing ones. This preserves coverage for the original behavior and makes it
clear what the new test covers.
## Detailed Development Guides
@.claude/docs/ARCHITECTURE.md
@.claude/docs/OAUTH2.md
@.claude/docs/TESTING.md
@.claude/docs/TROUBLESHOOTING.md
@.claude/docs/DATABASE.md
## Local Configuration
These files may be gitignored, read manually if not auto-loaded.
@AGENTS.local.md
## Common Pitfalls
1. **Audit table errors** → Update `enterprise/audit/table.go`
-2
View File
@@ -27,5 +27,3 @@ coderd/schedule/autostop.go @deansheather @DanielleMaywood
# well as guidance from revenue.
coderd/usage/ @deansheather @spikecurtis
enterprise/coderd/usage/ @deansheather @spikecurtis
.github/ @jdomeracki-coder
-10
View File
@@ -642,7 +642,6 @@ AIBRIDGED_MOCKS := \
GEN_FILES := \
tailnet/proto/tailnet.pb.go \
agent/proto/agent.pb.go \
agent/agentsocket/proto/agentsocket.pb.go \
provisionersdk/proto/provisioner.pb.go \
provisionerd/proto/provisionerd.pb.go \
vpn/vpn.pb.go \
@@ -697,7 +696,6 @@ gen/mark-fresh:
agent/proto/agent.pb.go \
provisionersdk/proto/provisioner.pb.go \
provisionerd/proto/provisionerd.pb.go \
agent/agentsocket/proto/agentsocket.pb.go \
vpn/vpn.pb.go \
enterprise/aibridged/proto/aibridged.pb.go \
coderd/database/dump.sql \
@@ -802,14 +800,6 @@ agent/proto/agent.pb.go: agent/proto/agent.proto
--go-drpc_opt=paths=source_relative \
./agent/proto/agent.proto
agent/agentsocket/proto/agentsocket.pb.go: agent/agentsocket/proto/agentsocket.proto
protoc \
--go_out=. \
--go_opt=paths=source_relative \
--go-drpc_out=. \
--go-drpc_opt=paths=source_relative \
./agent/agentsocket/proto/agentsocket.proto
provisionersdk/proto/provisioner.pb.go: provisionersdk/proto/provisioner.proto
protoc \
--go_out=. \
+51 -98
View File
@@ -8,7 +8,6 @@ import (
"fmt"
"hash/fnv"
"io"
"maps"
"net"
"net/http"
"net/netip"
@@ -41,7 +40,6 @@ import (
"github.com/coder/coder/v2/agent/agentcontainers"
"github.com/coder/coder/v2/agent/agentexec"
"github.com/coder/coder/v2/agent/agentscripts"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agentssh"
"github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/agent/proto/resourcesmonitor"
@@ -72,21 +70,16 @@ const (
)
type Options struct {
Filesystem afero.Fs
LogDir string
TempDir string
ScriptDataDir string
Client Client
ReconnectingPTYTimeout time.Duration
EnvironmentVariables map[string]string
Logger slog.Logger
// IgnorePorts tells the api handler which ports to ignore when
// listing all listening ports. This is helpful to hide ports that
// are used by the agent, that the user does not care about.
IgnorePorts map[int]string
// ListeningPortsGetter is used to get the list of listening ports. Only
// tests should set this. If unset, a default that queries the OS will be used.
ListeningPortsGetter ListeningPortsGetter
Filesystem afero.Fs
LogDir string
TempDir string
ScriptDataDir string
Client Client
ReconnectingPTYTimeout time.Duration
EnvironmentVariables map[string]string
Logger slog.Logger
IgnorePorts map[int]string
PortCacheDuration time.Duration
SSHMaxTimeout time.Duration
TailnetListenPort uint16
Subsystems []codersdk.AgentSubsystem
@@ -98,8 +91,6 @@ type Options struct {
Devcontainers bool
DevcontainerAPIOptions []agentcontainers.Option // Enable Devcontainers for these to be effective.
Clock quartz.Clock
SocketServerEnabled bool
SocketPath string // Path for the agent socket server socket
}
type Client interface {
@@ -146,7 +137,9 @@ func New(options Options) Agent {
if options.ServiceBannerRefreshInterval == 0 {
options.ServiceBannerRefreshInterval = 2 * time.Minute
}
if options.PortCacheDuration == 0 {
options.PortCacheDuration = 1 * time.Second
}
if options.Clock == nil {
options.Clock = quartz.NewReal()
}
@@ -160,38 +153,30 @@ func New(options Options) Agent {
options.Execer = agentexec.DefaultExecer
}
if options.ListeningPortsGetter == nil {
options.ListeningPortsGetter = &osListeningPortsGetter{
cacheDuration: 1 * time.Second,
}
}
hardCtx, hardCancel := context.WithCancel(context.Background())
gracefulCtx, gracefulCancel := context.WithCancel(hardCtx)
a := &agent{
clock: options.Clock,
tailnetListenPort: options.TailnetListenPort,
reconnectingPTYTimeout: options.ReconnectingPTYTimeout,
logger: options.Logger,
gracefulCtx: gracefulCtx,
gracefulCancel: gracefulCancel,
hardCtx: hardCtx,
hardCancel: hardCancel,
coordDisconnected: make(chan struct{}),
environmentVariables: options.EnvironmentVariables,
client: options.Client,
filesystem: options.Filesystem,
logDir: options.LogDir,
tempDir: options.TempDir,
scriptDataDir: options.ScriptDataDir,
lifecycleUpdate: make(chan struct{}, 1),
lifecycleReported: make(chan codersdk.WorkspaceAgentLifecycle, 1),
lifecycleStates: []agentsdk.PostLifecycleRequest{{State: codersdk.WorkspaceAgentLifecycleCreated}},
reportConnectionsUpdate: make(chan struct{}, 1),
listeningPortsHandler: listeningPortsHandler{
getter: options.ListeningPortsGetter,
ignorePorts: maps.Clone(options.IgnorePorts),
},
clock: options.Clock,
tailnetListenPort: options.TailnetListenPort,
reconnectingPTYTimeout: options.ReconnectingPTYTimeout,
logger: options.Logger,
gracefulCtx: gracefulCtx,
gracefulCancel: gracefulCancel,
hardCtx: hardCtx,
hardCancel: hardCancel,
coordDisconnected: make(chan struct{}),
environmentVariables: options.EnvironmentVariables,
client: options.Client,
filesystem: options.Filesystem,
logDir: options.LogDir,
tempDir: options.TempDir,
scriptDataDir: options.ScriptDataDir,
lifecycleUpdate: make(chan struct{}, 1),
lifecycleReported: make(chan codersdk.WorkspaceAgentLifecycle, 1),
lifecycleStates: []agentsdk.PostLifecycleRequest{{State: codersdk.WorkspaceAgentLifecycleCreated}},
reportConnectionsUpdate: make(chan struct{}, 1),
ignorePorts: options.IgnorePorts,
portCacheDuration: options.PortCacheDuration,
reportMetadataInterval: options.ReportMetadataInterval,
announcementBannersRefreshInterval: options.ServiceBannerRefreshInterval,
sshMaxTimeout: options.SSHMaxTimeout,
@@ -205,8 +190,6 @@ func New(options Options) Agent {
devcontainers: options.Devcontainers,
containerAPIOptions: options.DevcontainerAPIOptions,
socketPath: options.SocketPath,
socketServerEnabled: options.SocketServerEnabled,
}
// Initially, we have a closed channel, reflecting the fact that we are not initially connected.
// Each time we connect we replace the channel (while holding the closeMutex) with a new one
@@ -219,16 +202,20 @@ func New(options Options) Agent {
}
type agent struct {
clock quartz.Clock
logger slog.Logger
client Client
tailnetListenPort uint16
filesystem afero.Fs
logDir string
tempDir string
scriptDataDir string
listeningPortsHandler listeningPortsHandler
subsystems []codersdk.AgentSubsystem
clock quartz.Clock
logger slog.Logger
client Client
tailnetListenPort uint16
filesystem afero.Fs
logDir string
tempDir string
scriptDataDir string
// ignorePorts tells the api handler which ports to ignore when
// listing all listening ports. This is helpful to hide ports that
// are used by the agent, that the user does not care about.
ignorePorts map[int]string
portCacheDuration time.Duration
subsystems []codersdk.AgentSubsystem
reconnectingPTYTimeout time.Duration
reconnectingPTYServer *reconnectingpty.Server
@@ -284,10 +271,6 @@ type agent struct {
devcontainers bool
containerAPIOptions []agentcontainers.Option
containerAPI *agentcontainers.API
socketServerEnabled bool
socketPath string
socketServer *agentsocket.Server
}
func (a *agent) TailnetConn() *tailnet.Conn {
@@ -367,32 +350,9 @@ func (a *agent) init() {
s.ExperimentalContainers = a.devcontainers
},
)
a.initSocketServer()
go a.runLoop()
}
// initSocketServer initializes server that allows direct communication with a workspace agent using IPC.
func (a *agent) initSocketServer() {
if !a.socketServerEnabled {
a.logger.Info(a.hardCtx, "socket server is disabled")
return
}
server, err := agentsocket.NewServer(
a.logger.Named("socket"),
agentsocket.WithPath(a.socketPath),
)
if err != nil {
a.logger.Warn(a.hardCtx, "failed to create socket server", slog.Error(err), slog.F("path", a.socketPath))
return
}
a.socketServer = server
a.logger.Debug(a.hardCtx, "socket server started", slog.F("path", a.socketPath))
}
// runLoop attempts to start the agent in a retry loop.
// Coder may be offline temporarily, a connection issue
// may be happening, but regardless after the intermittent
@@ -1127,7 +1087,7 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
if err != nil {
return xerrors.Errorf("fetch metadata: %w", err)
}
a.logger.Info(ctx, "fetched manifest")
a.logger.Info(ctx, "fetched manifest", slog.F("manifest", mp))
manifest, err := agentsdk.ManifestFromProto(mp)
if err != nil {
a.logger.Critical(ctx, "failed to convert manifest", slog.F("manifest", mp), slog.Error(err))
@@ -1576,8 +1536,8 @@ func (a *agent) createTailnet(
break
}
clog := a.logger.Named("speedtest").With(
slog.F("remote", conn.RemoteAddr()),
slog.F("local", conn.LocalAddr()))
slog.F("remote", conn.RemoteAddr().String()),
slog.F("local", conn.LocalAddr().String()))
clog.Info(ctx, "accepted conn")
wg.Add(1)
closed := make(chan struct{})
@@ -1960,7 +1920,6 @@ func (a *agent) Close() error {
lifecycleState = codersdk.WorkspaceAgentLifecycleShutdownError
}
}
a.setLifecycle(lifecycleState)
err = a.scriptRunner.Close()
@@ -1968,12 +1927,6 @@ func (a *agent) Close() error {
a.logger.Error(a.hardCtx, "script runner close", slog.Error(err))
}
if a.socketServer != nil {
if err := a.socketServer.Close(); err != nil {
a.logger.Error(a.hardCtx, "socket server close", slog.Error(err))
}
}
if err := a.containerAPI.Close(); err != nil {
a.logger.Error(a.hardCtx, "container API close", slog.Error(err))
}
-45
View File
@@ -1,45 +0,0 @@
package agent
import (
"testing"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"cdr.dev/slog/sloggers/slogtest"
"github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/testutil"
)
// TestReportConnectionEmpty tests that reportConnection() doesn't choke if given an empty IP string, which is what we
// send if we cannot get the remote address.
func TestReportConnectionEmpty(t *testing.T) {
t.Parallel()
connID := uuid.UUID{1}
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
ctx := testutil.Context(t, testutil.WaitShort)
uut := &agent{
hardCtx: ctx,
logger: logger,
}
disconnected := uut.reportConnection(connID, proto.Connection_TYPE_UNSPECIFIED, "")
require.Len(t, uut.reportConnections, 1)
req0 := uut.reportConnections[0]
require.Equal(t, proto.Connection_TYPE_UNSPECIFIED, req0.GetConnection().GetType())
require.Equal(t, "", req0.GetConnection().Ip)
require.Equal(t, connID[:], req0.GetConnection().GetId())
require.Equal(t, proto.Connection_CONNECT, req0.GetConnection().GetAction())
disconnected(0, "because")
require.Len(t, uut.reportConnections, 2)
req1 := uut.reportConnections[1]
require.Equal(t, proto.Connection_TYPE_UNSPECIFIED, req1.GetConnection().GetType())
require.Equal(t, "", req1.GetConnection().Ip)
require.Equal(t, connID[:], req1.GetConnection().GetId())
require.Equal(t, proto.Connection_DISCONNECT, req1.GetConnection().GetAction())
require.Equal(t, "because", req1.GetConnection().GetReason())
}
+121 -44
View File
@@ -743,6 +743,7 @@ func (api *API) Routes() http.Handler {
// /-route was dropped. We can drop the /devcontainers prefix here too.
r.Route("/devcontainers/{devcontainer}", func(r chi.Router) {
r.Post("/recreate", api.handleDevcontainerRecreate)
r.Delete("/", api.handleDevcontainerDelete)
})
return r
@@ -1039,10 +1040,6 @@ func (api *API) processUpdatedContainersLocked(ctx context.Context, updated code
logger.Error(ctx, "inject subagent into container failed", slog.Error(err))
dc.Error = err.Error()
} else {
// TODO(mafredri): Preserve the error from devcontainer
// up if it was a lifecycle script error. Currently
// this results in a brief flicker for the user if
// injection is fast, as the error is shown then erased.
dc.Error = ""
}
}
@@ -1286,6 +1283,107 @@ func (api *API) handleDevcontainerRecreate(w http.ResponseWriter, r *http.Reques
})
}
// handleDevcontainerDelete handles the HTTP request to delete a
// devcontainer by stopping its sub-agent and removing the devcontainer
// from the agent's tracked state.
func (api *API) handleDevcontainerDelete(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
devcontainerID := chi.URLParam(r, "devcontainer")
if devcontainerID == "" {
httpapi.Write(ctx, w, http.StatusBadRequest, codersdk.Response{
Message: "Missing devcontainer ID",
Detail: "Devcontainer ID is required to delete a devcontainer.",
})
return
}
api.mu.Lock()
defer api.mu.Unlock()
// Find the devcontainer by ID
var dc codersdk.WorkspaceAgentDevcontainer
var workspaceFolder string
for folder, knownDC := range api.knownDevcontainers {
if knownDC.ID.String() == devcontainerID {
dc = knownDC
workspaceFolder = folder
break
}
}
if dc.ID == uuid.Nil {
httpapi.Write(ctx, w, http.StatusNotFound, codersdk.Response{
Message: "Devcontainer not found.",
Detail: fmt.Sprintf("Could not find devcontainer with ID: %q", devcontainerID),
})
return
}
// Cannot delete while starting
if dc.Status == codersdk.WorkspaceAgentDevcontainerStatusStarting {
httpapi.Write(ctx, w, http.StatusConflict, codersdk.Response{
Message: "Cannot delete devcontainer while starting",
Detail: fmt.Sprintf("Devcontainer %q is currently starting. Wait for it to finish before deleting.", dc.Name),
})
return
}
logger := api.logger.With(
slog.F("devcontainer_id", dc.ID),
slog.F("devcontainer_name", dc.Name),
slog.F("workspace_folder", dc.WorkspaceFolder),
)
logger.Info(ctx, "deleting devcontainer")
// Stop the sub-agent if it's running
if proc, ok := api.injectedSubAgentProcs[workspaceFolder]; ok {
logger.Debug(ctx, "stopping sub-agent process")
proc.stop()
// Delete the sub-agent from the backend if it was registered
if proc.agent.ID != uuid.Nil {
// Unlock while performing the backend delete so we don't hold the mutex during a network call.
api.mu.Unlock()
client := *api.subAgentClient.Load()
if err := client.Delete(ctx, proc.agent.ID); err != nil {
api.mu.Lock()
logger.Error(ctx, "failed to delete sub-agent", slog.Error(err))
httpapi.Write(ctx, w, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to delete sub-agent.",
Detail: err.Error(),
})
return
}
api.mu.Lock()
logger.Debug(ctx, "sub-agent deleted successfully")
}
// Clean up the sub-agent process from the map
delete(api.injectedSubAgentProcs, workspaceFolder)
}
// Remove the devcontainer from all tracking maps
delete(api.knownDevcontainers, workspaceFolder)
delete(api.devcontainerLogSourceIDs, workspaceFolder)
delete(api.configFileModifiedTimes, dc.ConfigPath)
delete(api.recreateSuccessTimes, workspaceFolder)
delete(api.recreateErrorTimes, workspaceFolder)
delete(api.ignoredDevcontainers, workspaceFolder)
delete(api.usingWorkspaceFolderName, workspaceFolder)
delete(api.devcontainerNames, dc.Name)
// Broadcast the update so clients know the devcontainer is gone
api.broadcastUpdatesLocked()
logger.Info(ctx, "devcontainer deleted successfully")
httpapi.Write(ctx, w, http.StatusOK, codersdk.Response{
Message: "Devcontainer deleted successfully",
Detail: fmt.Sprintf("Devcontainer %q has been deleted. The container and sub-agent have been stopped and removed.", dc.Name),
})
}
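// A successful delete responds with http.StatusOK and a codersdk.Response;
// the JSON body looks roughly like the following (illustrative only: field
// names assume codersdk.Response's standard "message"/"detail" tags, and
// "my-project" is a placeholder devcontainer name):
//
//	{
//	    "message": "Devcontainer deleted successfully",
//	    "detail": "Devcontainer \"my-project\" has been deleted. ..."
//	}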
// createDevcontainer should run in its own goroutine and is responsible for
// recreating a devcontainer based on the provided devcontainer configuration.
// It updates the devcontainer status and logs the process. The configPath is
@@ -1351,41 +1449,27 @@ func (api *API) CreateDevcontainer(workspaceFolder, configPath string, opts ...D
upOptions := []DevcontainerCLIUpOptions{WithUpOutput(infoW, errW)}
upOptions = append(upOptions, opts...)
containerID, upErr := api.dccli.Up(ctx, dc.WorkspaceFolder, configPath, upOptions...)
if upErr != nil {
_, err := api.dccli.Up(ctx, dc.WorkspaceFolder, configPath, upOptions...)
if err != nil {
// No need to log if the API is closing (context canceled), as this
// is expected behavior when the API is shutting down.
if !errors.Is(upErr, context.Canceled) {
logger.Error(ctx, "devcontainer creation failed", slog.Error(upErr))
if !errors.Is(err, context.Canceled) {
logger.Error(ctx, "devcontainer creation failed", slog.Error(err))
}
// If we don't have a container ID, the error is fatal, so we
// should mark the devcontainer as errored and return.
if containerID == "" {
api.mu.Lock()
dc = api.knownDevcontainers[dc.WorkspaceFolder]
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusError
dc.Error = upErr.Error()
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.recreateErrorTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "errorTimes")
api.broadcastUpdatesLocked()
api.mu.Unlock()
api.mu.Lock()
dc = api.knownDevcontainers[dc.WorkspaceFolder]
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusError
dc.Error = err.Error()
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.recreateErrorTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "errorTimes")
api.mu.Unlock()
return xerrors.Errorf("start devcontainer: %w", upErr)
}
// If we have a container ID, it means the container was created
// but a lifecycle script (e.g. postCreateCommand) failed. In this
// case, we still want to refresh containers to pick up the new
// container, inject the agent, and allow the user to debug the
// issue. We store the error to surface it to the user.
logger.Warn(ctx, "devcontainer created with errors (e.g. lifecycle script failure), container is available",
slog.F("container_id", containerID),
)
} else {
logger.Info(ctx, "devcontainer created successfully")
return xerrors.Errorf("start devcontainer: %w", err)
}
logger.Info(ctx, "devcontainer created successfully")
api.mu.Lock()
dc = api.knownDevcontainers[dc.WorkspaceFolder]
// Update the devcontainer status to Running or Stopped based on the
@@ -1394,18 +1478,13 @@ func (api *API) CreateDevcontainer(workspaceFolder, configPath string, opts ...D
// to minimize the time between API consistency, we guess the status
// based on the container state.
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusStopped
if dc.Container != nil && dc.Container.Running {
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusRunning
if dc.Container != nil {
if dc.Container.Running {
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusRunning
}
}
dc.Dirty = false
if upErr != nil {
// If there was a lifecycle script error but we have a container ID,
// the container is running so we should set the status to Running.
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusRunning
dc.Error = upErr.Error()
} else {
dc.Error = ""
}
dc.Error = ""
api.recreateSuccessTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "successTimes")
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.broadcastUpdatesLocked()
@@ -1457,8 +1536,6 @@ func (api *API) markDevcontainerDirty(configPath string, modifiedAt time.Time) {
api.knownDevcontainers[dc.WorkspaceFolder] = dc
}
api.broadcastUpdatesLocked()
}
// cleanupSubAgents removes subagents that are no longer managed by
+279 -196
View File
@@ -234,8 +234,6 @@ func (w *fakeWatcher) sendEventWaitNextCalled(ctx context.Context, event fsnotif
// fakeSubAgentClient implements SubAgentClient for testing purposes.
type fakeSubAgentClient struct {
logger slog.Logger
mu sync.Mutex // Protects following.
agents map[uuid.UUID]agentcontainers.SubAgent
listErrC chan error // If set, send to return error, close to return nil.
@@ -256,8 +254,6 @@ func (m *fakeSubAgentClient) List(ctx context.Context) ([]agentcontainers.SubAge
}
}
}
m.mu.Lock()
defer m.mu.Unlock()
var agents []agentcontainers.SubAgent
for _, agent := range m.agents {
agents = append(agents, agent)
@@ -287,9 +283,6 @@ func (m *fakeSubAgentClient) Create(ctx context.Context, agent agentcontainers.S
return agentcontainers.SubAgent{}, xerrors.New("operating system must be set")
}
m.mu.Lock()
defer m.mu.Unlock()
for _, a := range m.agents {
if a.Name == agent.Name {
return agentcontainers.SubAgent{}, &pq.Error{
@@ -321,8 +314,6 @@ func (m *fakeSubAgentClient) Delete(ctx context.Context, id uuid.UUID) error {
}
}
}
m.mu.Lock()
defer m.mu.Unlock()
if m.agents == nil {
m.agents = make(map[uuid.UUID]agentcontainers.SubAgent)
}
@@ -1641,77 +1632,6 @@ func TestAPI(t *testing.T) {
require.NotNil(t, response.Devcontainers[0].Container, "container should not be nil")
})
// Verify that modifying a config file broadcasts the dirty status
// over websocket immediately.
t.Run("FileWatcherDirtyBroadcast", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
configPath := "/workspace/project/.devcontainer/devcontainer.json"
fWatcher := newFakeWatcher(t)
fLister := &fakeContainerCLI{
containers: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{
{
ID: "container-id",
FriendlyName: "container-name",
Running: true,
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/workspace/project",
agentcontainers.DevcontainerConfigFileLabel: configPath,
},
},
},
},
}
mClock := quartz.NewMock(t)
tickerTrap := mClock.Trap().TickerFunc("updaterLoop")
api := agentcontainers.NewAPI(
slogtest.Make(t, nil).Leveled(slog.LevelDebug),
agentcontainers.WithContainerCLI(fLister),
agentcontainers.WithWatcher(fWatcher),
agentcontainers.WithClock(mClock),
)
api.Start()
defer api.Close()
srv := httptest.NewServer(api.Routes())
defer srv.Close()
tickerTrap.MustWait(ctx).MustRelease(ctx)
tickerTrap.Close()
wsConn, resp, err := websocket.Dial(ctx, "ws"+strings.TrimPrefix(srv.URL, "http")+"/watch", nil)
require.NoError(t, err)
if resp != nil && resp.Body != nil {
defer resp.Body.Close()
}
defer wsConn.Close(websocket.StatusNormalClosure, "")
// Read and discard initial state.
_, _, err = wsConn.Read(ctx)
require.NoError(t, err)
fWatcher.waitNext(ctx)
fWatcher.sendEventWaitNextCalled(ctx, fsnotify.Event{
Name: configPath,
Op: fsnotify.Write,
})
// Verify dirty status is broadcast without advancing the clock.
_, msg, err := wsConn.Read(ctx)
require.NoError(t, err)
var response codersdk.WorkspaceAgentListContainersResponse
err = json.Unmarshal(msg, &response)
require.NoError(t, err)
require.Len(t, response.Devcontainers, 1)
assert.True(t, response.Devcontainers[0].Dirty,
"devcontainer should be marked as dirty after config file modification")
})
t.Run("SubAgentLifecycle", func(t *testing.T) {
t.Parallel()
@@ -2150,122 +2070,6 @@ func TestAPI(t *testing.T) {
require.Equal(t, "", response.Devcontainers[0].Error)
})
// This test verifies that when devcontainer up fails due to a
// lifecycle script error (such as postCreateCommand failing) but the
// container was successfully created, we still proceed with the
// devcontainer. The container should be available for use and the
// agent should be injected.
t.Run("DuringUpWithContainerID", func(t *testing.T) {
t.Parallel()
var (
ctx = testutil.Context(t, testutil.WaitMedium)
logger = slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
mClock = quartz.NewMock(t)
testContainer = codersdk.WorkspaceAgentContainer{
ID: "test-container-id",
FriendlyName: "test-container",
Image: "test-image",
Running: true,
CreatedAt: time.Now(),
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/workspaces/project",
agentcontainers.DevcontainerConfigFileLabel: "/workspaces/project/.devcontainer/devcontainer.json",
},
}
fCCLI = &fakeContainerCLI{
containers: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{testContainer},
},
arch: "amd64",
}
fDCCLI = &fakeDevcontainerCLI{
upID: testContainer.ID,
upErrC: make(chan func() error, 1),
}
fSAC = &fakeSubAgentClient{
logger: logger.Named("fakeSubAgentClient"),
}
testDevcontainer = codersdk.WorkspaceAgentDevcontainer{
ID: uuid.New(),
Name: "test-devcontainer",
WorkspaceFolder: "/workspaces/project",
ConfigPath: "/workspaces/project/.devcontainer/devcontainer.json",
Status: codersdk.WorkspaceAgentDevcontainerStatusStopped,
}
)
mClock.Set(time.Now()).MustWait(ctx)
tickerTrap := mClock.Trap().TickerFunc("updaterLoop")
nowRecreateSuccessTrap := mClock.Trap().Now("recreate", "successTimes")
api := agentcontainers.NewAPI(logger,
agentcontainers.WithClock(mClock),
agentcontainers.WithContainerCLI(fCCLI),
agentcontainers.WithDevcontainerCLI(fDCCLI),
agentcontainers.WithDevcontainers(
[]codersdk.WorkspaceAgentDevcontainer{testDevcontainer},
[]codersdk.WorkspaceAgentScript{{ID: testDevcontainer.ID, LogSourceID: uuid.New()}},
),
agentcontainers.WithSubAgentClient(fSAC),
agentcontainers.WithSubAgentURL("test-subagent-url"),
agentcontainers.WithWatcher(watcher.NewNoop()),
)
api.Start()
defer func() {
close(fDCCLI.upErrC)
api.Close()
}()
r := chi.NewRouter()
r.Mount("/", api.Routes())
tickerTrap.MustWait(ctx).MustRelease(ctx)
tickerTrap.Close()
// Send a recreate request to trigger devcontainer up.
req := httptest.NewRequest(http.MethodPost, "/devcontainers/"+testDevcontainer.ID.String()+"/recreate", nil)
rec := httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusAccepted, rec.Code)
// Simulate a lifecycle script failure. The devcontainer CLI
// will return an error but also provide a container ID since
// the container was created before the script failed.
simulatedError := xerrors.New("postCreateCommand failed with exit code 1")
testutil.RequireSend(ctx, t, fDCCLI.upErrC, func() error { return simulatedError })
// Wait for the recreate operation to complete. We expect it to
// record a success time because the container was created.
nowRecreateSuccessTrap.MustWait(ctx).MustRelease(ctx)
nowRecreateSuccessTrap.Close()
// Advance the clock to run the devcontainer state update routine.
_, aw := mClock.AdvanceNext()
aw.MustWait(ctx)
req = httptest.NewRequest(http.MethodGet, "/", nil)
rec = httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
var response codersdk.WorkspaceAgentListContainersResponse
err := json.NewDecoder(rec.Body).Decode(&response)
require.NoError(t, err)
// Verify that the devcontainer is running and has the container
// associated with it despite the lifecycle script error. The
// error may be cleared during refresh if agent injection
// succeeds, but the important thing is that the container is
// available for use.
require.Len(t, response.Devcontainers, 1)
assert.Equal(t, codersdk.WorkspaceAgentDevcontainerStatusRunning, response.Devcontainers[0].Status)
require.NotNil(t, response.Devcontainers[0].Container)
assert.Equal(t, testContainer.ID, response.Devcontainers[0].Container.ID)
})
t.Run("DuringInjection", func(t *testing.T) {
t.Parallel()
@@ -4228,3 +4032,282 @@ func TestDevcontainerPrebuildSupport(t *testing.T) {
// And: We expect this app to have the post-claim URL.
require.Equal(t, userAppURL, secondApp.URL)
}
func TestHandleDevcontainerDeleteAndRecreate(t *testing.T) {
t.Parallel()
// This test validates that after deleting a devcontainer, if the container
// reappears (e.g., manually recreated or discovered), the updater loop will
// detect it and recreate the devcontainer entry.
devcontainerID := uuid.New()
workspaceFolder := "/workspace/test-recreate"
configPath := workspaceFolder + "/.devcontainer/devcontainer.json"
// Create a container that represents a devcontainer
devContainer := codersdk.WorkspaceAgentContainer{
ID: "container-recreate-1",
FriendlyName: "test-container-recreate",
Running: true,
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: workspaceFolder,
agentcontainers.DevcontainerConfigFileLabel: configPath,
},
}
ctx := testutil.Context(t, testutil.WaitMedium)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
mClock := quartz.NewMock(t)
mClock.Set(time.Now()).MustWait(ctx)
// Set up updater ticker trap to control when the updater loop runs
updaterTickerTrap := mClock.Trap().TickerFunc("updaterLoop")
defer updaterTickerTrap.Close()
// Create a fake container CLI that initially has the devcontainer
fakeLister := &fakeContainerCLI{
containers: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{devContainer},
},
arch: "<none>", // Unsupported architecture, don't inject subagent.
}
// Setup router and API with the initial devcontainer
r := chi.NewRouter()
api := agentcontainers.NewAPI(
logger,
agentcontainers.WithClock(mClock),
agentcontainers.WithContainerCLI(fakeLister),
agentcontainers.WithDevcontainerCLI(&fakeDevcontainerCLI{}),
agentcontainers.WithWatcher(watcher.NewNoop()),
agentcontainers.WithDevcontainers([]codersdk.WorkspaceAgentDevcontainer{
{
ID: devcontainerID,
Name: "test-devcontainer",
WorkspaceFolder: workspaceFolder,
ConfigPath: configPath,
Status: codersdk.WorkspaceAgentDevcontainerStatusRunning,
Container: &devContainer,
},
}, nil),
)
api.Start()
defer api.Close()
r.Mount("/", api.Routes())
// Wait for the updater loop to be ready
updaterTickerTrap.MustWait(ctx).MustRelease(ctx)
// STEP 1: Verify the devcontainer exists
req := httptest.NewRequest(http.MethodGet, "/", nil).WithContext(ctx)
rec := httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code, "initial list request should succeed")
var initialResp codersdk.WorkspaceAgentListContainersResponse
err := json.NewDecoder(rec.Body).Decode(&initialResp)
require.NoError(t, err, "unmarshal initial response failed")
require.Len(t, initialResp.Devcontainers, 1, "should have one devcontainer initially")
require.Equal(t, devcontainerID, initialResp.Devcontainers[0].ID, "devcontainer ID should match")
// STEP 2: Delete the devcontainer
req = httptest.NewRequest(http.MethodDelete, "/devcontainers/"+devcontainerID.String(), nil).
WithContext(ctx)
rec = httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code, "delete request should succeed")
assert.Contains(t, rec.Body.String(), "Devcontainer deleted successfully", "delete response body mismatch")
// STEP 3: Verify the devcontainer is gone
req = httptest.NewRequest(http.MethodGet, "/", nil).WithContext(ctx)
rec = httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code, "list after delete should succeed")
var afterDeleteResp codersdk.WorkspaceAgentListContainersResponse
err = json.NewDecoder(rec.Body).Decode(&afterDeleteResp)
require.NoError(t, err, "unmarshal after delete response failed")
require.Len(t, afterDeleteResp.Devcontainers, 0, "devcontainer should be removed after deletion")
// STEP 4: Simulate the container reappearing (e.g., manually recreated)
// Update the fake lister to return the container again
fakeLister.containers = codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{devContainer},
}
// STEP 5: Trigger the updater loop to discover the container again
_, aw := mClock.AdvanceNext()
aw.MustWait(ctx)
// STEP 6: Verify the devcontainer has been recreated
req = httptest.NewRequest(http.MethodGet, "/", nil).WithContext(ctx)
rec = httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code, "list after recreate should succeed")
var afterRecreateResp codersdk.WorkspaceAgentListContainersResponse
err = json.NewDecoder(rec.Body).Decode(&afterRecreateResp)
require.NoError(t, err, "unmarshal after recreate response failed")
require.Len(t, afterRecreateResp.Devcontainers, 1, "devcontainer should be rediscovered after updater loop")
// The ID will be different since it's a new devcontainer entry
require.NotEqual(t, devcontainerID, afterRecreateResp.Devcontainers[0].ID,
"recreated devcontainer should have a new ID")
require.Equal(t, workspaceFolder, afterRecreateResp.Devcontainers[0].WorkspaceFolder,
"workspace folder should match")
require.Equal(t, configPath, afterRecreateResp.Devcontainers[0].ConfigPath,
"config path should match")
require.Equal(t, codersdk.WorkspaceAgentDevcontainerStatusRunning, afterRecreateResp.Devcontainers[0].Status,
"recreated devcontainer should be running")
}
func TestHandleDevcontainerDelete(t *testing.T) {
t.Parallel()
devcontainerID1 := uuid.New()
devcontainerID2 := uuid.New()
workspaceFolder1 := "/workspace/test1"
workspaceFolder2 := "/workspace/test2"
configPath1 := "/workspace/test1/.devcontainer/devcontainer.json"
configPath2 := "/workspace/test2/.devcontainer/devcontainer.json"
// Create a container that represents an existing devcontainer
devContainer1 := codersdk.WorkspaceAgentContainer{
ID: "container-1",
FriendlyName: "test-container-1",
Running: true,
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: workspaceFolder1,
agentcontainers.DevcontainerConfigFileLabel: configPath1,
},
}
tests := []struct {
name string
devcontainerID string
setupDevcontainers []codersdk.WorkspaceAgentDevcontainer
lister *fakeContainerCLI
wantStatus int
wantBody string
}{
{
name: "Missing devcontainer ID",
devcontainerID: "",
lister: &fakeContainerCLI{},
wantStatus: http.StatusNotFound, // Chi router returns 404 for empty path parameter
wantBody: "404 page not found",
},
{
name: "Devcontainer not found",
devcontainerID: uuid.NewString(),
lister: &fakeContainerCLI{
arch: "<none>", // Unsupported architecture, don't inject subagent.
},
wantStatus: http.StatusNotFound,
wantBody: "Devcontainer not found",
},
{
name: "Cannot delete while starting",
devcontainerID: devcontainerID2.String(),
setupDevcontainers: []codersdk.WorkspaceAgentDevcontainer{
{
ID: devcontainerID2,
Name: "test-devcontainer-2",
WorkspaceFolder: workspaceFolder2,
ConfigPath: configPath2,
Status: codersdk.WorkspaceAgentDevcontainerStatusStarting,
Container: nil,
},
},
lister: &fakeContainerCLI{
arch: "<none>",
},
wantStatus: http.StatusConflict,
wantBody: "Cannot delete devcontainer while starting",
},
{
name: "OK - Delete existing devcontainer",
devcontainerID: devcontainerID1.String(),
setupDevcontainers: []codersdk.WorkspaceAgentDevcontainer{
{
ID: devcontainerID1,
Name: "test-devcontainer-1",
WorkspaceFolder: workspaceFolder1,
ConfigPath: configPath1,
Status: codersdk.WorkspaceAgentDevcontainerStatusRunning,
Container: &devContainer1,
},
},
lister: &fakeContainerCLI{
containers: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{devContainer1},
},
arch: "<none>", // Unsupported architecture, don't inject subagent.
},
wantStatus: http.StatusOK,
wantBody: "Devcontainer deleted successfully",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
mClock := quartz.NewMock(t)
mClock.Set(time.Now()).MustWait(ctx)
// Set up updater ticker trap to prevent automatic updates
updaterTickerTrap := mClock.Trap().TickerFunc("updaterLoop")
defer updaterTickerTrap.Close()
// Setup router with the handler under test.
r := chi.NewRouter()
api := agentcontainers.NewAPI(
logger,
agentcontainers.WithClock(mClock),
agentcontainers.WithContainerCLI(tt.lister),
agentcontainers.WithDevcontainerCLI(&fakeDevcontainerCLI{}),
agentcontainers.WithWatcher(watcher.NewNoop()),
agentcontainers.WithDevcontainers(tt.setupDevcontainers, nil),
)
api.Start()
defer api.Close()
r.Mount("/", api.Routes())
// Wait for the updater loop to be ready, then hold it so it doesn't interfere
updaterTickerTrap.MustWait(ctx).MustRelease(ctx)
// Simulate HTTP DELETE request to the delete endpoint.
req := httptest.NewRequest(http.MethodDelete, "/devcontainers/"+tt.devcontainerID, nil).
WithContext(ctx)
rec := httptest.NewRecorder()
r.ServeHTTP(rec, req)
// Check the response status code and body.
require.Equal(t, tt.wantStatus, rec.Code, "status code mismatch")
if tt.wantBody != "" {
assert.Contains(t, rec.Body.String(), tt.wantBody, "response body mismatch")
}
// For successful deletion, verify the devcontainer is no longer in the list
if tt.wantStatus == http.StatusOK {
req = httptest.NewRequest(http.MethodGet, "/", nil).
WithContext(ctx)
rec = httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code, "status code mismatch after delete")
var resp codersdk.WorkspaceAgentListContainersResponse
err := json.NewDecoder(rec.Body).Decode(&resp)
require.NoError(t, err, "unmarshal response failed after delete")
require.Len(t, resp.Devcontainers, 0, "devcontainer should be removed from list after deletion")
}
})
}
}
+15 -16
View File
@@ -263,14 +263,11 @@ func (d *devcontainerCLI) Up(ctx context.Context, workspaceFolder, configPath st
}
if err := cmd.Run(); err != nil {
result, err2 := parseDevcontainerCLILastLine[devcontainerCLIResult](ctx, logger, stdoutBuf.Bytes())
_, err2 := parseDevcontainerCLILastLine[devcontainerCLIResult](ctx, logger, stdoutBuf.Bytes())
if err2 != nil {
err = errors.Join(err, err2)
}
// Return the container ID if available, even if there was an error.
// This can happen if the container was created successfully but a
// lifecycle script (e.g. postCreateCommand) failed.
return result.ContainerID, err
return "", err
}
result, err := parseDevcontainerCLILastLine[devcontainerCLIResult](ctx, logger, stdoutBuf.Bytes())
@@ -278,13 +275,6 @@ func (d *devcontainerCLI) Up(ctx context.Context, workspaceFolder, configPath st
return "", err
}
// Check if the result indicates an error (e.g. lifecycle script failure)
// but still has a container ID, allowing the caller to potentially
// continue with the container that was created.
if err := result.Err(); err != nil {
return result.ContainerID, err
}
return result.ContainerID, nil
}
@@ -404,10 +394,7 @@ func parseDevcontainerCLILastLine[T any](ctx context.Context, logger slog.Logger
type devcontainerCLIResult struct {
Outcome string `json:"outcome"` // "error", "success".
// The following fields are typically set if outcome is success, but
// ContainerID may also be present when outcome is error if the
// container was created but a lifecycle script (e.g. postCreateCommand)
// failed.
// The following fields are set if outcome is success.
ContainerID string `json:"containerId"`
RemoteUser string `json:"remoteUser"`
RemoteWorkspaceFolder string `json:"remoteWorkspaceFolder"`
@@ -417,6 +404,18 @@ type devcontainerCLIResult struct {
Description string `json:"description"`
}
func (r *devcontainerCLIResult) UnmarshalJSON(data []byte) error {
type wrapperResult devcontainerCLIResult
var wrappedResult wrapperResult
if err := json.Unmarshal(data, &wrappedResult); err != nil {
return err
}
*r = devcontainerCLIResult(wrappedResult)
return r.Err()
}
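// With the UnmarshalJSON hook above, a result whose outcome is not "success"
// fails to decode and surfaces the outcome error directly. For example, a
// hypothetical last log line such as
//
//	{"outcome": "error", "description": "postCreateCommand exited with 1"}
//
// causes parseDevcontainerCLILastLine to return an error, so Up reports an
// empty container ID for any failed run.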
func (r devcontainerCLIResult) Err() error {
if r.Outcome == "success" {
return nil
+41 -64
View File
@@ -42,63 +42,56 @@ func TestDevcontainerCLI_ArgsAndParsing(t *testing.T) {
t.Parallel()
tests := []struct {
name string
logFile string
workspace string
config string
opts []agentcontainers.DevcontainerCLIUpOptions
wantArgs string
wantError bool
wantContainerID bool // If true, expect a container ID even when wantError is true.
name string
logFile string
workspace string
config string
opts []agentcontainers.DevcontainerCLIUpOptions
wantArgs string
wantError bool
}{
{
name: "success",
logFile: "up.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: false,
wantContainerID: true,
name: "success",
logFile: "up.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: false,
},
{
name: "success with config",
logFile: "up.log",
workspace: "/test/workspace",
config: "/test/config.json",
wantArgs: "up --log-format json --workspace-folder /test/workspace --config /test/config.json",
wantError: false,
wantContainerID: true,
name: "success with config",
logFile: "up.log",
workspace: "/test/workspace",
config: "/test/config.json",
wantArgs: "up --log-format json --workspace-folder /test/workspace --config /test/config.json",
wantError: false,
},
{
name: "already exists",
logFile: "up-already-exists.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: false,
wantContainerID: true,
name: "already exists",
logFile: "up-already-exists.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: false,
},
{
name: "docker error",
logFile: "up-error-docker.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
wantContainerID: false,
name: "docker error",
logFile: "up-error-docker.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
},
{
name: "bad outcome",
logFile: "up-error-bad-outcome.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
wantContainerID: false,
name: "bad outcome",
logFile: "up-error-bad-outcome.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
},
{
name: "does not exist",
logFile: "up-error-does-not-exist.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
wantContainerID: false,
name: "does not exist",
logFile: "up-error-does-not-exist.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
},
{
name: "with remove existing container",
@@ -107,21 +100,8 @@ func TestDevcontainerCLI_ArgsAndParsing(t *testing.T) {
opts: []agentcontainers.DevcontainerCLIUpOptions{
agentcontainers.WithRemoveExistingContainer(),
},
wantArgs: "up --log-format json --workspace-folder /test/workspace --remove-existing-container",
wantError: false,
wantContainerID: true,
},
{
// This test verifies that when a lifecycle script like
// postCreateCommand fails, the CLI returns both an error
// and a container ID. The caller can then proceed with
// agent injection into the created container.
name: "lifecycle script failure with container",
logFile: "up-error-lifecycle-script.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
wantContainerID: true,
wantArgs: "up --log-format json --workspace-folder /test/workspace --remove-existing-container",
wantError: false,
},
}
@@ -142,13 +122,10 @@ func TestDevcontainerCLI_ArgsAndParsing(t *testing.T) {
containerID, err := dccli.Up(ctx, tt.workspace, tt.config, tt.opts...)
if tt.wantError {
assert.Error(t, err, "want error")
assert.Empty(t, containerID, "expected empty container ID")
} else {
assert.NoError(t, err, "want no error")
}
if tt.wantContainerID {
assert.NotEmpty(t, containerID, "expected non-empty container ID")
} else {
assert.Empty(t, containerID, "expected empty container ID")
}
})
}
File diff suppressed because one or more lines are too long
-146
View File
@@ -1,146 +0,0 @@
package agentsocket
import (
"context"
"golang.org/x/xerrors"
"storj.io/drpc"
"storj.io/drpc/drpcconn"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
)
// Option represents a configuration option for NewClient.
type Option func(*options)
type options struct {
path string
}
// WithPath sets the socket path. If not provided or empty, the client will
// auto-discover the default socket path.
func WithPath(path string) Option {
return func(opts *options) {
if path == "" {
return
}
opts.path = path
}
}
// Client provides a client for communicating with the workspace agentsocket API.
type Client struct {
client proto.DRPCAgentSocketClient
conn drpc.Conn
}
// NewClient creates a new socket client and opens a connection to the socket.
// If path is not provided via WithPath or is empty, it will auto-discover the
// default socket path.
func NewClient(ctx context.Context, opts ...Option) (*Client, error) {
options := &options{}
for _, opt := range opts {
opt(options)
}
conn, err := dialSocket(ctx, options.path)
if err != nil {
return nil, xerrors.Errorf("connect to socket: %w", err)
}
drpcConn := drpcconn.New(conn)
client := proto.NewDRPCAgentSocketClient(drpcConn)
return &Client{
client: client,
conn: drpcConn,
}, nil
}
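// A minimal usage sketch, assuming an agent is listening on the default
// socket path (error handling elided):
//
//	client, err := agentsocket.NewClient(ctx)
//	if err != nil {
//		// handle error
//	}
//	defer client.Close()
//	if err := client.Ping(ctx); err != nil {
//		// handle error
//	}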
// Close closes the socket connection.
func (c *Client) Close() error {
return c.conn.Close()
}
// Ping sends a ping request to the agent.
func (c *Client) Ping(ctx context.Context) error {
_, err := c.client.Ping(ctx, &proto.PingRequest{})
return err
}
// SyncStart starts a unit in the dependency graph.
func (c *Client) SyncStart(ctx context.Context, unitName unit.ID) error {
_, err := c.client.SyncStart(ctx, &proto.SyncStartRequest{
Unit: string(unitName),
})
return err
}
// SyncWant declares a dependency between units.
func (c *Client) SyncWant(ctx context.Context, unitName, dependsOn unit.ID) error {
_, err := c.client.SyncWant(ctx, &proto.SyncWantRequest{
Unit: string(unitName),
DependsOn: string(dependsOn),
})
return err
}
// SyncComplete marks a unit as complete in the dependency graph.
func (c *Client) SyncComplete(ctx context.Context, unitName unit.ID) error {
_, err := c.client.SyncComplete(ctx, &proto.SyncCompleteRequest{
Unit: string(unitName),
})
return err
}
// SyncReady reports whether a unit is ready to be started, i.e. all of its dependencies are satisfied.
func (c *Client) SyncReady(ctx context.Context, unitName unit.ID) (bool, error) {
	resp, err := c.client.SyncReady(ctx, &proto.SyncReadyRequest{
		Unit: string(unitName),
	})
	if err != nil {
		return false, err
	}
	return resp.Ready, nil
}
// SyncStatus gets the status of a unit and its dependencies.
func (c *Client) SyncStatus(ctx context.Context, unitName unit.ID) (SyncStatusResponse, error) {
resp, err := c.client.SyncStatus(ctx, &proto.SyncStatusRequest{
Unit: string(unitName),
})
if err != nil {
return SyncStatusResponse{}, err
}
var dependencies []DependencyInfo
for _, dep := range resp.Dependencies {
dependencies = append(dependencies, DependencyInfo{
DependsOn: unit.ID(dep.DependsOn),
RequiredStatus: unit.Status(dep.RequiredStatus),
CurrentStatus: unit.Status(dep.CurrentStatus),
IsSatisfied: dep.IsSatisfied,
})
}
return SyncStatusResponse{
UnitName: unitName,
Status: unit.Status(resp.Status),
IsReady: resp.IsReady,
Dependencies: dependencies,
}, nil
}
// SyncStatusResponse contains the status information for a unit.
type SyncStatusResponse struct {
UnitName unit.ID `table:"unit,default_sort" json:"unit_name"`
Status unit.Status `table:"status" json:"status"`
IsReady bool `table:"ready" json:"is_ready"`
Dependencies []DependencyInfo `table:"dependencies" json:"dependencies"`
}
// DependencyInfo contains information about a unit dependency.
type DependencyInfo struct {
DependsOn unit.ID `table:"depends on,default_sort" json:"depends_on"`
RequiredStatus unit.Status `table:"required status" json:"required_status"`
CurrentStatus unit.Status `table:"current status" json:"current_status"`
IsSatisfied bool `table:"satisfied" json:"is_satisfied"`
}
-968
View File
@@ -1,968 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.30.0
// protoc v4.23.4
// source: agent/agentsocket/proto/agentsocket.proto
package proto
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type PingRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *PingRequest) Reset() {
*x = PingRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PingRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PingRequest) ProtoMessage() {}
func (x *PingRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PingRequest.ProtoReflect.Descriptor instead.
func (*PingRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{0}
}
type PingResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *PingResponse) Reset() {
*x = PingResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PingResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PingResponse) ProtoMessage() {}
func (x *PingResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PingResponse.ProtoReflect.Descriptor instead.
func (*PingResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{1}
}
type SyncStartRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncStartRequest) Reset() {
*x = SyncStartRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStartRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStartRequest) ProtoMessage() {}
func (x *SyncStartRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStartRequest.ProtoReflect.Descriptor instead.
func (*SyncStartRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{2}
}
func (x *SyncStartRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncStartResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncStartResponse) Reset() {
*x = SyncStartResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStartResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStartResponse) ProtoMessage() {}
func (x *SyncStartResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStartResponse.ProtoReflect.Descriptor instead.
func (*SyncStartResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{3}
}
type SyncWantRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
}
func (x *SyncWantRequest) Reset() {
*x = SyncWantRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncWantRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncWantRequest) ProtoMessage() {}
func (x *SyncWantRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncWantRequest.ProtoReflect.Descriptor instead.
func (*SyncWantRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{4}
}
func (x *SyncWantRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
func (x *SyncWantRequest) GetDependsOn() string {
if x != nil {
return x.DependsOn
}
return ""
}
type SyncWantResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncWantResponse) Reset() {
*x = SyncWantResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncWantResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncWantResponse) ProtoMessage() {}
func (x *SyncWantResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncWantResponse.ProtoReflect.Descriptor instead.
func (*SyncWantResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{5}
}
type SyncCompleteRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncCompleteRequest) Reset() {
*x = SyncCompleteRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncCompleteRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncCompleteRequest) ProtoMessage() {}
func (x *SyncCompleteRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncCompleteRequest.ProtoReflect.Descriptor instead.
func (*SyncCompleteRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{6}
}
func (x *SyncCompleteRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncCompleteResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncCompleteResponse) Reset() {
*x = SyncCompleteResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncCompleteResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncCompleteResponse) ProtoMessage() {}
func (x *SyncCompleteResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncCompleteResponse.ProtoReflect.Descriptor instead.
func (*SyncCompleteResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{7}
}
type SyncReadyRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncReadyRequest) Reset() {
*x = SyncReadyRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncReadyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncReadyRequest) ProtoMessage() {}
func (x *SyncReadyRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncReadyRequest.ProtoReflect.Descriptor instead.
func (*SyncReadyRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{8}
}
func (x *SyncReadyRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncReadyResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Ready bool `protobuf:"varint,1,opt,name=ready,proto3" json:"ready,omitempty"`
}
func (x *SyncReadyResponse) Reset() {
*x = SyncReadyResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncReadyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncReadyResponse) ProtoMessage() {}
func (x *SyncReadyResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncReadyResponse.ProtoReflect.Descriptor instead.
func (*SyncReadyResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{9}
}
func (x *SyncReadyResponse) GetReady() bool {
if x != nil {
return x.Ready
}
return false
}
type SyncStatusRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncStatusRequest) Reset() {
*x = SyncStatusRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStatusRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStatusRequest) ProtoMessage() {}
func (x *SyncStatusRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStatusRequest.ProtoReflect.Descriptor instead.
func (*SyncStatusRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{10}
}
func (x *SyncStatusRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type DependencyInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
RequiredStatus string `protobuf:"bytes,3,opt,name=required_status,json=requiredStatus,proto3" json:"required_status,omitempty"`
CurrentStatus string `protobuf:"bytes,4,opt,name=current_status,json=currentStatus,proto3" json:"current_status,omitempty"`
IsSatisfied bool `protobuf:"varint,5,opt,name=is_satisfied,json=isSatisfied,proto3" json:"is_satisfied,omitempty"`
}
func (x *DependencyInfo) Reset() {
*x = DependencyInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DependencyInfo) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DependencyInfo) ProtoMessage() {}
func (x *DependencyInfo) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DependencyInfo.ProtoReflect.Descriptor instead.
func (*DependencyInfo) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{11}
}
func (x *DependencyInfo) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
func (x *DependencyInfo) GetDependsOn() string {
if x != nil {
return x.DependsOn
}
return ""
}
func (x *DependencyInfo) GetRequiredStatus() string {
if x != nil {
return x.RequiredStatus
}
return ""
}
func (x *DependencyInfo) GetCurrentStatus() string {
if x != nil {
return x.CurrentStatus
}
return ""
}
func (x *DependencyInfo) GetIsSatisfied() bool {
if x != nil {
return x.IsSatisfied
}
return false
}
type SyncStatusResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Status string `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"`
IsReady bool `protobuf:"varint,2,opt,name=is_ready,json=isReady,proto3" json:"is_ready,omitempty"`
Dependencies []*DependencyInfo `protobuf:"bytes,3,rep,name=dependencies,proto3" json:"dependencies,omitempty"`
}
func (x *SyncStatusResponse) Reset() {
*x = SyncStatusResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStatusResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStatusResponse) ProtoMessage() {}
func (x *SyncStatusResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStatusResponse.ProtoReflect.Descriptor instead.
func (*SyncStatusResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{12}
}
func (x *SyncStatusResponse) GetStatus() string {
if x != nil {
return x.Status
}
return ""
}
func (x *SyncStatusResponse) GetIsReady() bool {
if x != nil {
return x.IsReady
}
return false
}
func (x *SyncStatusResponse) GetDependencies() []*DependencyInfo {
if x != nil {
return x.Dependencies
}
return nil
}
var File_agent_agentsocket_proto_agentsocket_proto protoreflect.FileDescriptor
var file_agent_agentsocket_proto_agentsocket_proto_rawDesc = []byte{
0x0a, 0x29, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73,
0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x14, 0x63, 0x6f, 0x64,
0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76,
0x31, 0x22, 0x0d, 0x0a, 0x0b, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x22, 0x0e, 0x0a, 0x0c, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x22, 0x26, 0x0a, 0x10, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x22, 0x13, 0x0a, 0x11, 0x53, 0x79, 0x6e, 0x63,
0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x44, 0x0a,
0x0f, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
0x75, 0x6e, 0x69, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x5f,
0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64,
0x73, 0x4f, 0x6e, 0x22, 0x12, 0x0a, 0x10, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x29, 0x0a, 0x13, 0x53, 0x79, 0x6e, 0x63, 0x43,
0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12,
0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e,
0x69, 0x74, 0x22, 0x16, 0x0a, 0x14, 0x53, 0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65,
0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x26, 0x0a, 0x10, 0x53, 0x79,
0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12,
0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e,
0x69, 0x74, 0x22, 0x29, 0x0a, 0x11, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x72, 0x65, 0x61, 0x64, 0x79,
0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x05, 0x72, 0x65, 0x61, 0x64, 0x79, 0x22, 0x27, 0x0a,
0x11, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x22, 0xb6, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x70, 0x65, 0x6e,
0x64, 0x65, 0x6e, 0x63, 0x79, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69,
0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x12, 0x1d, 0x0a,
0x0a, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x5f, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28,
0x09, 0x52, 0x09, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x4f, 0x6e, 0x12, 0x27, 0x0a, 0x0f,
0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18,
0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x53,
0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x25, 0x0a, 0x0e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74,
0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x63,
0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x21, 0x0a, 0x0c,
0x69, 0x73, 0x5f, 0x73, 0x61, 0x74, 0x69, 0x73, 0x66, 0x69, 0x65, 0x64, 0x18, 0x05, 0x20, 0x01,
0x28, 0x08, 0x52, 0x0b, 0x69, 0x73, 0x53, 0x61, 0x74, 0x69, 0x73, 0x66, 0x69, 0x65, 0x64, 0x22,
0x91, 0x01, 0x0a, 0x12, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x19,
0x0a, 0x08, 0x69, 0x73, 0x5f, 0x72, 0x65, 0x61, 0x64, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08,
0x52, 0x07, 0x69, 0x73, 0x52, 0x65, 0x61, 0x64, 0x79, 0x12, 0x48, 0x0a, 0x0c, 0x64, 0x65, 0x70,
0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63, 0x69, 0x65, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x24, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63,
0x79, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x0c, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63,
0x69, 0x65, 0x73, 0x32, 0xbb, 0x04, 0x0a, 0x0b, 0x41, 0x67, 0x65, 0x6e, 0x74, 0x53, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x12, 0x4d, 0x0a, 0x04, 0x50, 0x69, 0x6e, 0x67, 0x12, 0x21, 0x2e, 0x63, 0x6f,
0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e,
0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22,
0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b,
0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x5c, 0x0a, 0x09, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12,
0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x59, 0x0a, 0x08, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x12, 0x25, 0x2e, 0x63,
0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74,
0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e,
0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x57,
0x61, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x65, 0x0a, 0x0c, 0x53,
0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x12, 0x29, 0x2e, 0x63, 0x6f,
0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e,
0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2a, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61,
0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79,
0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x5c, 0x0a, 0x09, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x12,
0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x5f, 0x0a, 0x0a, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x27,
0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b,
0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x28, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
0x65, 0x42, 0x33, 0x5a, 0x31, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x63, 0x6f, 0x64, 0x65, 0x72, 0x2f, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2f, 0x76, 0x32, 0x2f, 0x61,
0x67, 0x65, 0x6e, 0x74, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74,
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_agent_agentsocket_proto_agentsocket_proto_rawDescOnce sync.Once
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = file_agent_agentsocket_proto_agentsocket_proto_rawDesc
)
func file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP() []byte {
file_agent_agentsocket_proto_agentsocket_proto_rawDescOnce.Do(func() {
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = protoimpl.X.CompressGZIP(file_agent_agentsocket_proto_agentsocket_proto_rawDescData)
})
return file_agent_agentsocket_proto_agentsocket_proto_rawDescData
}
var file_agent_agentsocket_proto_agentsocket_proto_msgTypes = make([]protoimpl.MessageInfo, 13)
var file_agent_agentsocket_proto_agentsocket_proto_goTypes = []interface{}{
(*PingRequest)(nil), // 0: coder.agentsocket.v1.PingRequest
(*PingResponse)(nil), // 1: coder.agentsocket.v1.PingResponse
(*SyncStartRequest)(nil), // 2: coder.agentsocket.v1.SyncStartRequest
(*SyncStartResponse)(nil), // 3: coder.agentsocket.v1.SyncStartResponse
(*SyncWantRequest)(nil), // 4: coder.agentsocket.v1.SyncWantRequest
(*SyncWantResponse)(nil), // 5: coder.agentsocket.v1.SyncWantResponse
(*SyncCompleteRequest)(nil), // 6: coder.agentsocket.v1.SyncCompleteRequest
(*SyncCompleteResponse)(nil), // 7: coder.agentsocket.v1.SyncCompleteResponse
(*SyncReadyRequest)(nil), // 8: coder.agentsocket.v1.SyncReadyRequest
(*SyncReadyResponse)(nil), // 9: coder.agentsocket.v1.SyncReadyResponse
(*SyncStatusRequest)(nil), // 10: coder.agentsocket.v1.SyncStatusRequest
(*DependencyInfo)(nil), // 11: coder.agentsocket.v1.DependencyInfo
(*SyncStatusResponse)(nil), // 12: coder.agentsocket.v1.SyncStatusResponse
}
var file_agent_agentsocket_proto_agentsocket_proto_depIdxs = []int32{
11, // 0: coder.agentsocket.v1.SyncStatusResponse.dependencies:type_name -> coder.agentsocket.v1.DependencyInfo
0, // 1: coder.agentsocket.v1.AgentSocket.Ping:input_type -> coder.agentsocket.v1.PingRequest
2, // 2: coder.agentsocket.v1.AgentSocket.SyncStart:input_type -> coder.agentsocket.v1.SyncStartRequest
4, // 3: coder.agentsocket.v1.AgentSocket.SyncWant:input_type -> coder.agentsocket.v1.SyncWantRequest
6, // 4: coder.agentsocket.v1.AgentSocket.SyncComplete:input_type -> coder.agentsocket.v1.SyncCompleteRequest
8, // 5: coder.agentsocket.v1.AgentSocket.SyncReady:input_type -> coder.agentsocket.v1.SyncReadyRequest
10, // 6: coder.agentsocket.v1.AgentSocket.SyncStatus:input_type -> coder.agentsocket.v1.SyncStatusRequest
1, // 7: coder.agentsocket.v1.AgentSocket.Ping:output_type -> coder.agentsocket.v1.PingResponse
3, // 8: coder.agentsocket.v1.AgentSocket.SyncStart:output_type -> coder.agentsocket.v1.SyncStartResponse
5, // 9: coder.agentsocket.v1.AgentSocket.SyncWant:output_type -> coder.agentsocket.v1.SyncWantResponse
7, // 10: coder.agentsocket.v1.AgentSocket.SyncComplete:output_type -> coder.agentsocket.v1.SyncCompleteResponse
9, // 11: coder.agentsocket.v1.AgentSocket.SyncReady:output_type -> coder.agentsocket.v1.SyncReadyResponse
12, // 12: coder.agentsocket.v1.AgentSocket.SyncStatus:output_type -> coder.agentsocket.v1.SyncStatusResponse
7, // [7:13] is the sub-list for method output_type
1, // [1:7] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_agent_agentsocket_proto_agentsocket_proto_init() }
func file_agent_agentsocket_proto_agentsocket_proto_init() {
if File_agent_agentsocket_proto_agentsocket_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PingRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PingResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStartRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStartResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncWantRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncWantResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncCompleteRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncCompleteResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncReadyRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncReadyResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStatusRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DependencyInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStatusResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_agent_agentsocket_proto_agentsocket_proto_rawDesc,
NumEnums: 0,
NumMessages: 13,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_agent_agentsocket_proto_agentsocket_proto_goTypes,
DependencyIndexes: file_agent_agentsocket_proto_agentsocket_proto_depIdxs,
MessageInfos: file_agent_agentsocket_proto_agentsocket_proto_msgTypes,
}.Build()
File_agent_agentsocket_proto_agentsocket_proto = out.File
file_agent_agentsocket_proto_agentsocket_proto_rawDesc = nil
file_agent_agentsocket_proto_agentsocket_proto_goTypes = nil
file_agent_agentsocket_proto_agentsocket_proto_depIdxs = nil
}
-69
@@ -1,69 +0,0 @@
syntax = "proto3";
option go_package = "github.com/coder/coder/v2/agent/agentsocket/proto";
package coder.agentsocket.v1;
message PingRequest {}
message PingResponse {}
message SyncStartRequest {
string unit = 1;
}
message SyncStartResponse {}
message SyncWantRequest {
string unit = 1;
string depends_on = 2;
}
message SyncWantResponse {}
message SyncCompleteRequest {
string unit = 1;
}
message SyncCompleteResponse {}
message SyncReadyRequest {
string unit = 1;
}
message SyncReadyResponse {
bool ready = 1;
}
message SyncStatusRequest {
string unit = 1;
}
message DependencyInfo {
string unit = 1;
string depends_on = 2;
string required_status = 3;
string current_status = 4;
bool is_satisfied = 5;
}
message SyncStatusResponse {
string status = 1;
bool is_ready = 2;
repeated DependencyInfo dependencies = 3;
}
// AgentSocket provides direct access to the agent over local IPC.
service AgentSocket {
// Ping the agent to check if it is alive.
rpc Ping(PingRequest) returns (PingResponse);
// Report the start of a unit.
rpc SyncStart(SyncStartRequest) returns (SyncStartResponse);
// Declare a dependency between units.
rpc SyncWant(SyncWantRequest) returns (SyncWantResponse);
// Report the completion of a unit.
rpc SyncComplete(SyncCompleteRequest) returns (SyncCompleteResponse);
// Request whether a unit is ready to be started. That is, all dependencies are satisfied.
rpc SyncReady(SyncReadyRequest) returns (SyncReadyResponse);
// Get the status of a unit and list its dependencies.
rpc SyncStatus(SyncStatusRequest) returns (SyncStatusResponse);
}
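Taken together, these RPCs let one workspace process gate its start on another. As a rough sketch (not part of this change set; the socket path is the default used by the Unix socket helper further down, and the unit names are invented for illustration), the Go client helpers exercised in the tests below could drive the flow like this:

package main

import (
	"context"
	"log"

	"github.com/coder/coder/v2/agent/agentsocket"
)

func main() {
	ctx := context.Background()

	// Connect to the agent's local socket (default path; see the socket helpers below).
	client, err := agentsocket.NewClient(ctx, agentsocket.WithPath("/tmp/coder-agent.sock"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// "app" must wait until "db-migrations" reports completion.
	if err := client.SyncWant(ctx, "app", "db-migrations"); err != nil {
		log.Fatal(err)
	}

	// The migration job reports its own lifecycle.
	if err := client.SyncStart(ctx, "db-migrations"); err != nil {
		log.Fatal(err)
	}
	if err := client.SyncComplete(ctx, "db-migrations"); err != nil {
		log.Fatal(err)
	}

	// With the dependency satisfied, "app" becomes ready to start.
	ready, err := client.SyncReady(ctx, "app")
	if err != nil {
		log.Fatal(err)
	}
	if ready {
		if err := client.SyncStart(ctx, "app"); err != nil {
			log.Fatal(err)
		}
	}
}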
@@ -1,311 +0,0 @@
// Code generated by protoc-gen-go-drpc. DO NOT EDIT.
// protoc-gen-go-drpc version: v0.0.34
// source: agent/agentsocket/proto/agentsocket.proto
package proto
import (
context "context"
errors "errors"
protojson "google.golang.org/protobuf/encoding/protojson"
proto "google.golang.org/protobuf/proto"
drpc "storj.io/drpc"
drpcerr "storj.io/drpc/drpcerr"
)
type drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto struct{}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) Marshal(msg drpc.Message) ([]byte, error) {
return proto.Marshal(msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) MarshalAppend(buf []byte, msg drpc.Message) ([]byte, error) {
return proto.MarshalOptions{}.MarshalAppend(buf, msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) Unmarshal(buf []byte, msg drpc.Message) error {
return proto.Unmarshal(buf, msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) JSONMarshal(msg drpc.Message) ([]byte, error) {
return protojson.Marshal(msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) JSONUnmarshal(buf []byte, msg drpc.Message) error {
return protojson.Unmarshal(buf, msg.(proto.Message))
}
type DRPCAgentSocketClient interface {
DRPCConn() drpc.Conn
Ping(ctx context.Context, in *PingRequest) (*PingResponse, error)
SyncStart(ctx context.Context, in *SyncStartRequest) (*SyncStartResponse, error)
SyncWant(ctx context.Context, in *SyncWantRequest) (*SyncWantResponse, error)
SyncComplete(ctx context.Context, in *SyncCompleteRequest) (*SyncCompleteResponse, error)
SyncReady(ctx context.Context, in *SyncReadyRequest) (*SyncReadyResponse, error)
SyncStatus(ctx context.Context, in *SyncStatusRequest) (*SyncStatusResponse, error)
}
type drpcAgentSocketClient struct {
cc drpc.Conn
}
func NewDRPCAgentSocketClient(cc drpc.Conn) DRPCAgentSocketClient {
return &drpcAgentSocketClient{cc}
}
func (c *drpcAgentSocketClient) DRPCConn() drpc.Conn { return c.cc }
func (c *drpcAgentSocketClient) Ping(ctx context.Context, in *PingRequest) (*PingResponse, error) {
out := new(PingResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/Ping", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncStart(ctx context.Context, in *SyncStartRequest) (*SyncStartResponse, error) {
out := new(SyncStartResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncStart", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncWant(ctx context.Context, in *SyncWantRequest) (*SyncWantResponse, error) {
out := new(SyncWantResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncWant", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncComplete(ctx context.Context, in *SyncCompleteRequest) (*SyncCompleteResponse, error) {
out := new(SyncCompleteResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncComplete", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncReady(ctx context.Context, in *SyncReadyRequest) (*SyncReadyResponse, error) {
out := new(SyncReadyResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncReady", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncStatus(ctx context.Context, in *SyncStatusRequest) (*SyncStatusResponse, error) {
out := new(SyncStatusResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncStatus", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
type DRPCAgentSocketServer interface {
Ping(context.Context, *PingRequest) (*PingResponse, error)
SyncStart(context.Context, *SyncStartRequest) (*SyncStartResponse, error)
SyncWant(context.Context, *SyncWantRequest) (*SyncWantResponse, error)
SyncComplete(context.Context, *SyncCompleteRequest) (*SyncCompleteResponse, error)
SyncReady(context.Context, *SyncReadyRequest) (*SyncReadyResponse, error)
SyncStatus(context.Context, *SyncStatusRequest) (*SyncStatusResponse, error)
}
type DRPCAgentSocketUnimplementedServer struct{}
func (s *DRPCAgentSocketUnimplementedServer) Ping(context.Context, *PingRequest) (*PingResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncStart(context.Context, *SyncStartRequest) (*SyncStartResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncWant(context.Context, *SyncWantRequest) (*SyncWantResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncComplete(context.Context, *SyncCompleteRequest) (*SyncCompleteResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncReady(context.Context, *SyncReadyRequest) (*SyncReadyResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncStatus(context.Context, *SyncStatusRequest) (*SyncStatusResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
type DRPCAgentSocketDescription struct{}
func (DRPCAgentSocketDescription) NumMethods() int { return 6 }
func (DRPCAgentSocketDescription) Method(n int) (string, drpc.Encoding, drpc.Receiver, interface{}, bool) {
switch n {
case 0:
return "/coder.agentsocket.v1.AgentSocket/Ping", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
Ping(
ctx,
in1.(*PingRequest),
)
}, DRPCAgentSocketServer.Ping, true
case 1:
return "/coder.agentsocket.v1.AgentSocket/SyncStart", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncStart(
ctx,
in1.(*SyncStartRequest),
)
}, DRPCAgentSocketServer.SyncStart, true
case 2:
return "/coder.agentsocket.v1.AgentSocket/SyncWant", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncWant(
ctx,
in1.(*SyncWantRequest),
)
}, DRPCAgentSocketServer.SyncWant, true
case 3:
return "/coder.agentsocket.v1.AgentSocket/SyncComplete", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncComplete(
ctx,
in1.(*SyncCompleteRequest),
)
}, DRPCAgentSocketServer.SyncComplete, true
case 4:
return "/coder.agentsocket.v1.AgentSocket/SyncReady", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncReady(
ctx,
in1.(*SyncReadyRequest),
)
}, DRPCAgentSocketServer.SyncReady, true
case 5:
return "/coder.agentsocket.v1.AgentSocket/SyncStatus", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncStatus(
ctx,
in1.(*SyncStatusRequest),
)
}, DRPCAgentSocketServer.SyncStatus, true
default:
return "", nil, nil, nil, false
}
}
func DRPCRegisterAgentSocket(mux drpc.Mux, impl DRPCAgentSocketServer) error {
return mux.Register(impl, DRPCAgentSocketDescription{})
}
type DRPCAgentSocket_PingStream interface {
drpc.Stream
SendAndClose(*PingResponse) error
}
type drpcAgentSocket_PingStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_PingStream) SendAndClose(m *PingResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncStartStream interface {
drpc.Stream
SendAndClose(*SyncStartResponse) error
}
type drpcAgentSocket_SyncStartStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncStartStream) SendAndClose(m *SyncStartResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncWantStream interface {
drpc.Stream
SendAndClose(*SyncWantResponse) error
}
type drpcAgentSocket_SyncWantStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncWantStream) SendAndClose(m *SyncWantResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncCompleteStream interface {
drpc.Stream
SendAndClose(*SyncCompleteResponse) error
}
type drpcAgentSocket_SyncCompleteStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncCompleteStream) SendAndClose(m *SyncCompleteResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncReadyStream interface {
drpc.Stream
SendAndClose(*SyncReadyResponse) error
}
type drpcAgentSocket_SyncReadyStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncReadyStream) SendAndClose(m *SyncReadyResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncStatusStream interface {
drpc.Stream
SendAndClose(*SyncStatusResponse) error
}
type drpcAgentSocket_SyncStatusStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncStatusStream) SendAndClose(m *SyncStatusResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
-17
@@ -1,17 +0,0 @@
package proto
import "github.com/coder/coder/v2/apiversion"
// Version history:
//
// API v1.0:
// - Initial release
// - Ping
// - Sync operations: SyncStart, SyncWant, SyncComplete, SyncReady, SyncStatus
const (
CurrentMajor = 1
CurrentMinor = 0
)
var CurrentVersion = apiversion.New(CurrentMajor, CurrentMinor)
-138
@@ -1,138 +0,0 @@
package agentsocket
import (
"context"
"errors"
"net"
"sync"
"golang.org/x/xerrors"
"storj.io/drpc/drpcmux"
"storj.io/drpc/drpcserver"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/codersdk/drpcsdk"
)
// Server provides access to the DRPCAgentSocketService via a Unix domain socket.
// Do not invoke Server{} directly. Use NewServer() instead.
type Server struct {
logger slog.Logger
path string
drpcServer *drpcserver.Server
service *DRPCAgentSocketService
mu sync.Mutex
listener net.Listener
ctx context.Context
cancel context.CancelFunc
wg sync.WaitGroup
}
// NewServer creates a new agent socket server.
func NewServer(logger slog.Logger, opts ...Option) (*Server, error) {
options := &options{}
for _, opt := range opts {
opt(options)
}
logger = logger.Named("agentsocket-server")
server := &Server{
logger: logger,
path: options.path,
service: &DRPCAgentSocketService{
logger: logger,
unitManager: unit.NewManager(),
},
}
mux := drpcmux.New()
err := proto.DRPCRegisterAgentSocket(mux, server.service)
if err != nil {
return nil, xerrors.Errorf("failed to register drpc service: %w", err)
}
server.drpcServer = drpcserver.NewWithOptions(mux, drpcserver.Options{
Manager: drpcsdk.DefaultDRPCOptions(nil),
Log: func(err error) {
if errors.Is(err, context.Canceled) ||
errors.Is(err, context.DeadlineExceeded) {
return
}
logger.Debug(context.Background(), "drpc server error", slog.Error(err))
},
})
listener, err := createSocket(server.path)
if err != nil {
return nil, xerrors.Errorf("create socket: %w", err)
}
server.listener = listener
// This context is canceled by server.Close().
// Canceling it will close all connections.
server.ctx, server.cancel = context.WithCancel(context.Background())
server.logger.Info(server.ctx, "agent socket server started", slog.F("path", server.path))
server.wg.Add(1)
go func() {
defer server.wg.Done()
server.acceptConnections()
}()
return server, nil
}
// Close stops the server and cleans up resources.
func (s *Server) Close() error {
s.mu.Lock()
if s.listener == nil {
s.mu.Unlock()
return nil
}
s.logger.Info(s.ctx, "stopping agent socket server")
s.cancel()
if err := s.listener.Close(); err != nil {
s.logger.Warn(s.ctx, "error closing socket listener", slog.Error(err))
}
s.listener = nil
s.mu.Unlock()
// Wait for all connections to finish
s.wg.Wait()
if err := cleanupSocket(s.path); err != nil {
s.logger.Warn(s.ctx, "error cleaning up socket file", slog.Error(err))
}
s.logger.Info(s.ctx, "agent socket server stopped")
return nil
}
func (s *Server) acceptConnections() {
// In an edge case, Close() might race with acceptConnections() and set s.listener to nil.
// Therefore, we grab a copy of the listener under a lock. We might still get a nil listener,
// but then we know close has already run and we can return early.
s.mu.Lock()
listener := s.listener
s.mu.Unlock()
if listener == nil {
return
}
err := s.drpcServer.Serve(s.ctx, listener)
if err != nil {
s.logger.Warn(s.ctx, "error serving drpc server", slog.Error(err))
}
}
-138
@@ -1,138 +0,0 @@
package agentsocket_test
import (
"context"
"path/filepath"
"runtime"
"testing"
"github.com/google/uuid"
"github.com/spf13/afero"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agenttest"
agentproto "github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/tailnet"
"github.com/coder/coder/v2/tailnet/tailnettest"
"github.com/coder/coder/v2/testutil"
)
func TestServer(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("StartStop", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
require.NoError(t, server.Close())
})
t.Run("AlreadyStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server1, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
defer server1.Close()
_, err = agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.ErrorContains(t, err, "create socket")
})
t.Run("AutoSocketPath", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
require.NoError(t, server.Close())
})
}
func TestServerWindowsNotSupported(t *testing.T) {
t.Parallel()
if runtime.GOOS != "windows" {
t.Skip("this test only runs on Windows")
}
t.Run("NewServer", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
_, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.ErrorContains(t, err, "agentsocket is not supported on Windows")
})
t.Run("NewClient", func(t *testing.T) {
t.Parallel()
_, err := agentsocket.NewClient(context.Background(), agentsocket.WithPath("test.sock"))
require.ErrorContains(t, err, "agentsocket is not supported on Windows")
})
}
func TestAgentInitializesOnWindowsWithoutSocketServer(t *testing.T) {
t.Parallel()
if runtime.GOOS != "windows" {
t.Skip("this test only runs on Windows")
}
ctx := testutil.Context(t, testutil.WaitShort)
logger := testutil.Logger(t).Named("agent")
derpMap, _ := tailnettest.RunDERPAndSTUN(t)
coordinator := tailnet.NewCoordinator(logger)
t.Cleanup(func() {
_ = coordinator.Close()
})
statsCh := make(chan *agentproto.Stats, 50)
agentID := uuid.New()
manifest := agentsdk.Manifest{
AgentID: agentID,
AgentName: "test-agent",
WorkspaceName: "test-workspace",
OwnerName: "test-user",
WorkspaceID: uuid.New(),
DERPMap: derpMap,
}
client := agenttest.NewClient(t, logger.Named("agenttest"), agentID, manifest, statsCh, coordinator)
t.Cleanup(client.Close)
options := agent.Options{
Client: client,
Filesystem: afero.NewMemMapFs(),
Logger: logger.Named("agent"),
ReconnectingPTYTimeout: testutil.WaitShort,
EnvironmentVariables: map[string]string{},
SocketPath: "",
}
agnt := agent.New(options)
t.Cleanup(func() {
_ = agnt.Close()
})
startup := testutil.TryReceive(ctx, t, client.GetStartup())
require.NotNil(t, startup, "agent should send startup message")
err := agnt.Close()
require.NoError(t, err, "agent should close cleanly")
}
-152
@@ -1,152 +0,0 @@
package agentsocket
import (
"context"
"errors"
"golang.org/x/xerrors"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
)
var _ proto.DRPCAgentSocketServer = (*DRPCAgentSocketService)(nil)
var ErrUnitManagerNotAvailable = xerrors.New("unit manager not available")
// DRPCAgentSocketService implements the DRPC agent socket service.
type DRPCAgentSocketService struct {
unitManager *unit.Manager
logger slog.Logger
}
// Ping responds to a ping request to check if the service is alive.
func (*DRPCAgentSocketService) Ping(_ context.Context, _ *proto.PingRequest) (*proto.PingResponse, error) {
return &proto.PingResponse{}, nil
}
// SyncStart starts a unit in the dependency graph.
func (s *DRPCAgentSocketService) SyncStart(_ context.Context, req *proto.SyncStartRequest) (*proto.SyncStartResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("SyncStart: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
if err := s.unitManager.Register(unitID); err != nil {
if !errors.Is(err, unit.ErrUnitAlreadyRegistered) {
return nil, xerrors.Errorf("SyncStart: %w", err)
}
}
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
if !isReady {
return nil, xerrors.Errorf("cannot start unit %q: unit not ready", req.Unit)
}
err = s.unitManager.UpdateStatus(unitID, unit.StatusStarted)
if err != nil {
return nil, xerrors.Errorf("cannot start unit %q: %w", req.Unit, err)
}
return &proto.SyncStartResponse{}, nil
}
// SyncWant declares a dependency between units.
func (s *DRPCAgentSocketService) SyncWant(_ context.Context, req *proto.SyncWantRequest) (*proto.SyncWantResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot add dependency: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
dependsOnID := unit.ID(req.DependsOn)
if err := s.unitManager.Register(unitID); err != nil && !errors.Is(err, unit.ErrUnitAlreadyRegistered) {
return nil, xerrors.Errorf("cannot add dependency: %w", err)
}
if err := s.unitManager.AddDependency(unitID, dependsOnID, unit.StatusComplete); err != nil {
return nil, xerrors.Errorf("cannot add dependency: %w", err)
}
return &proto.SyncWantResponse{}, nil
}
// SyncComplete marks a unit as complete in the dependency graph.
func (s *DRPCAgentSocketService) SyncComplete(_ context.Context, req *proto.SyncCompleteRequest) (*proto.SyncCompleteResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot complete unit: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
if err := s.unitManager.UpdateStatus(unitID, unit.StatusComplete); err != nil {
return nil, xerrors.Errorf("cannot complete unit %q: %w", req.Unit, err)
}
return &proto.SyncCompleteResponse{}, nil
}
// SyncReady checks whether a unit is ready to be started. That is, all dependencies are satisfied.
func (s *DRPCAgentSocketService) SyncReady(_ context.Context, req *proto.SyncReadyRequest) (*proto.SyncReadyResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot check readiness: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
return &proto.SyncReadyResponse{
Ready: isReady,
}, nil
}
// SyncStatus gets the status of a unit and lists its dependencies.
func (s *DRPCAgentSocketService) SyncStatus(_ context.Context, req *proto.SyncStatusRequest) (*proto.SyncStatusResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot get status for unit %q: %w", req.Unit, ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
dependencies, err := s.unitManager.GetAllDependencies(unitID)
switch {
case errors.Is(err, unit.ErrUnitNotFound):
dependencies = []unit.Dependency{}
case err != nil:
return nil, xerrors.Errorf("cannot get dependencies: %w", err)
}
var depInfos []*proto.DependencyInfo
for _, dep := range dependencies {
depInfos = append(depInfos, &proto.DependencyInfo{
Unit: string(dep.Unit),
DependsOn: string(dep.DependsOn),
RequiredStatus: string(dep.RequiredStatus),
CurrentStatus: string(dep.CurrentStatus),
IsSatisfied: dep.IsSatisfied,
})
}
u, err := s.unitManager.Unit(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot get status for unit %q: %w", req.Unit, err)
}
return &proto.SyncStatusResponse{
Status: string(u.Status()),
IsReady: isReady,
Dependencies: depInfos,
}, nil
}
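The handlers above are a thin RPC layer over unit.Manager. A minimal sketch of driving the Manager directly, assuming only the methods those handlers call (Register, AddDependency, UpdateStatus, IsReady) and illustrative unit names:

package main

import (
	"fmt"
	"log"

	"github.com/coder/coder/v2/agent/unit"
)

func main() {
	m := unit.NewManager()

	// Register both units; "app" requires "db" to reach StatusComplete.
	if err := m.Register(unit.ID("db")); err != nil {
		log.Fatal(err)
	}
	if err := m.Register(unit.ID("app")); err != nil {
		log.Fatal(err)
	}
	if err := m.AddDependency(unit.ID("app"), unit.ID("db"), unit.StatusComplete); err != nil {
		log.Fatal(err)
	}

	ready, err := m.IsReady(unit.ID("app"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("app ready before db completes:", ready) // false

	// Walk "db" through its lifecycle the same way the handlers do.
	if err := m.UpdateStatus(unit.ID("db"), unit.StatusStarted); err != nil {
		log.Fatal(err)
	}
	if err := m.UpdateStatus(unit.ID("db"), unit.StatusComplete); err != nil {
		log.Fatal(err)
	}

	ready, err = m.IsReady(unit.ID("app"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("app ready after db completes:", ready) // true
}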
-389
@@ -1,389 +0,0 @@
package agentsocket_test
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/testutil"
)
// tempDirUnixSocket returns a temporary directory that can safely hold unix
// sockets (probably).
//
// During tests on darwin we hit the max path length limit for unix sockets
// pretty easily in the default location, so this function uses /tmp instead to
// get shorter paths. To keep paths short, we use a hash of the test name
// instead of the full test name.
func tempDirUnixSocket(t *testing.T) string {
t.Helper()
if runtime.GOOS == "darwin" {
// Use a short hash of the test name to keep the path under 104 chars
hash := sha256.Sum256([]byte(t.Name()))
hashStr := hex.EncodeToString(hash[:])[:8] // Use first 8 chars of hash
dir, err := os.MkdirTemp("/tmp", fmt.Sprintf("c-%s-", hashStr))
require.NoError(t, err, "create temp dir for unix socket test")
t.Cleanup(func() {
err := os.RemoveAll(dir)
assert.NoError(t, err, "remove temp dir", dir)
})
return dir
}
return t.TempDir()
}
// newSocketClient creates a DRPC client connected to the Unix socket at the given path.
func newSocketClient(ctx context.Context, t *testing.T, socketPath string) *agentsocket.Client {
t.Helper()
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(socketPath))
t.Cleanup(func() {
_ = client.Close()
})
require.NoError(t, err)
return client
}
func TestDRPCAgentSocketService(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("Ping", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
err = client.Ping(ctx)
require.NoError(t, err)
})
t.Run("SyncStart", func(t *testing.T) {
t.Parallel()
t.Run("NewUnit", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
})
t.Run("UnitAlreadyStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// First Start
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
// Second Start
err = client.SyncStart(ctx, "test-unit")
require.ErrorContains(t, err, unit.ErrSameStatusAlreadySet.Error())
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
})
t.Run("UnitAlreadyCompleted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// First start
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
// Complete the unit
err = client.SyncComplete(ctx, "test-unit")
require.NoError(t, err)
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusComplete, status.Status)
// Second start
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
})
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
require.NoError(t, err)
err = client.SyncStart(ctx, "test-unit")
require.ErrorContains(t, err, "unit not ready")
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusPending, status.Status)
require.False(t, status.IsReady)
})
})
t.Run("SyncWant", func(t *testing.T) {
t.Parallel()
t.Run("NewUnits", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// If dependency units are not registered, they are registered automatically
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Len(t, status.Dependencies, 1)
require.Equal(t, unit.ID("dependency-unit"), status.Dependencies[0].DependsOn)
require.Equal(t, unit.StatusComplete, status.Dependencies[0].RequiredStatus)
})
t.Run("DependencyAlreadyRegistered", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// Start the dependency unit
err = client.SyncStart(ctx, "dependency-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "dependency-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
// Add the dependency after the dependency unit has already started
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
// Dependencies can be added even if the dependency unit has already started
require.NoError(t, err)
// The dependency is now reflected in the test unit's status
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.ID("dependency-unit"), status.Dependencies[0].DependsOn)
require.Equal(t, unit.StatusComplete, status.Dependencies[0].RequiredStatus)
})
t.Run("DependencyAddedAfterDependentStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// Start the dependent unit
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
// Add the dependency after the dependent unit has already started
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
// Dependencies can be added even if the dependent unit has already started.
// The dependency applies the next time a unit is started. The current status is not updated.
// This is to allow flexible dependency management. It does mean that users of this API should
// take care to add dependencies before they start their dependent units.
require.NoError(t, err)
// The dependency is now reflected in the test unit's status
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.ID("dependency-unit"), status.Dependencies[0].DependsOn)
require.Equal(t, unit.StatusComplete, status.Dependencies[0].RequiredStatus)
})
})
t.Run("SyncReady", func(t *testing.T) {
t.Parallel()
t.Run("UnregisteredUnit", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
ready, err := client.SyncReady(ctx, "unregistered-unit")
require.NoError(t, err)
require.True(t, ready)
})
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// Register a unit with an unsatisfied dependency
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
require.NoError(t, err)
// Check readiness - should be false because dependency is not satisfied
ready, err := client.SyncReady(ctx, "test-unit")
require.NoError(t, err)
require.False(t, ready)
})
t.Run("UnitReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// Register a unit with no dependencies - should be ready immediately
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
// Check readiness - should be true
ready, err := client.SyncReady(ctx, "test-unit")
require.NoError(t, err)
require.True(t, ready)
// Also test a unit with satisfied dependencies
err = client.SyncWant(ctx, "dependent-unit", "test-unit")
require.NoError(t, err)
// Complete the dependency
err = client.SyncComplete(ctx, "test-unit")
require.NoError(t, err)
// Now dependent-unit should be ready
ready, err = client.SyncReady(ctx, "dependent-unit")
require.NoError(t, err)
require.True(t, ready)
})
})
}
-73
@@ -1,73 +0,0 @@
//go:build !windows
package agentsocket
import (
"context"
"net"
"os"
"path/filepath"
"time"
"golang.org/x/xerrors"
)
const defaultSocketPath = "/tmp/coder-agent.sock"
func createSocket(path string) (net.Listener, error) {
if path == "" {
path = defaultSocketPath
}
if !isSocketAvailable(path) {
return nil, xerrors.Errorf("socket path %s is not available", path)
}
if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
return nil, xerrors.Errorf("remove existing socket: %w", err)
}
parentDir := filepath.Dir(path)
if err := os.MkdirAll(parentDir, 0o700); err != nil {
return nil, xerrors.Errorf("create socket directory: %w", err)
}
listener, err := net.Listen("unix", path)
if err != nil {
return nil, xerrors.Errorf("listen on unix socket: %w", err)
}
if err := os.Chmod(path, 0o600); err != nil {
_ = listener.Close()
return nil, xerrors.Errorf("set socket permissions: %w", err)
}
return listener, nil
}
func cleanupSocket(path string) error {
return os.Remove(path)
}
func isSocketAvailable(path string) bool {
if _, err := os.Stat(path); os.IsNotExist(err) {
return true
}
// Try to connect to see if it's actually listening.
dialer := net.Dialer{Timeout: 10 * time.Second}
conn, err := dialer.Dial("unix", path)
if err != nil {
return true
}
_ = conn.Close()
return false
}
func dialSocket(ctx context.Context, path string) (net.Conn, error) {
if path == "" {
path = defaultSocketPath
}
dialer := net.Dialer{}
return dialer.DialContext(ctx, "unix", path)
}
-22
@@ -1,22 +0,0 @@
//go:build windows
package agentsocket
import (
"context"
"net"
"golang.org/x/xerrors"
)
func createSocket(_ string) (net.Listener, error) {
return nil, xerrors.New("agentsocket is not supported on Windows")
}
func cleanupSocket(_ string) error {
return nil
}
func dialSocket(_ context.Context, _ string) (net.Conn, error) {
return nil, xerrors.New("agentsocket is not supported on Windows")
}
+2 -11
@@ -391,19 +391,10 @@ func (s *Server) sessionHandler(session ssh.Session) {
env := session.Environ()
magicType, magicTypeRaw, env := extractMagicSessionType(env)
// It's not safe to assume RemoteAddr() returns a non-nil value. slog.F usage is fine because it correctly
// handles nil.
// c.f. https://github.com/coder/internal/issues/1143
remoteAddr := session.RemoteAddr()
remoteAddrString := ""
if remoteAddr != nil {
remoteAddrString = remoteAddr.String()
}
if !s.trackSession(session, true) {
reason := "unable to accept new session, server is closing"
// Report connection attempt even if we couldn't accept it.
disconnected := s.config.ReportConnection(id, magicType, remoteAddrString)
disconnected := s.config.ReportConnection(id, magicType, session.RemoteAddr().String())
defer disconnected(1, reason)
logger.Info(ctx, reason)
@@ -438,7 +429,7 @@ func (s *Server) sessionHandler(session ssh.Session) {
scr := &sessionCloseTracker{Session: session}
session = scr
disconnected := s.config.ReportConnection(id, magicType, remoteAddrString)
disconnected := s.config.ReportConnection(id, magicType, session.RemoteAddr().String())
defer func() {
disconnected(scr.exitCode(), reason)
}()
+1 -1
@@ -176,7 +176,7 @@ func (x *x11Forwarder) listenForConnections(
var originPort uint32
if tcpConn, ok := conn.(*net.TCPConn); ok {
if tcpAddr, ok := tcpConn.LocalAddr().(*net.TCPAddr); ok && tcpAddr != nil {
if tcpAddr, ok := tcpConn.LocalAddr().(*net.TCPAddr); ok {
originAddr = tcpAddr.IP.String()
// #nosec G115 - Safe conversion as TCP port numbers are within uint32 range (0-65535)
originPort = uint32(tcpAddr.Port)
+31 -33
@@ -2,31 +2,41 @@ package agent
import (
"net/http"
"sync"
"time"
"github.com/go-chi/chi/v5"
"github.com/google/uuid"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpmw/loggermw"
"github.com/coder/coder/v2/coderd/tracing"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/coder/v2/httpmw"
)
func (a *agent) apiHandler() http.Handler {
r := chi.NewRouter()
r.Use(
httpmw.Recover(a.logger),
tracing.StatusWriterMiddleware,
loggermw.Logger(a.logger),
)
r.Get("/", func(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(r.Context(), rw, http.StatusOK, codersdk.Response{
Message: "Hello from the agent!",
})
})
// Make a copy to ensure the map is not modified after the handler is
// created.
cpy := make(map[int]string)
for k, b := range a.ignorePorts {
cpy[k] = b
}
cacheDuration := 1 * time.Second
if a.portCacheDuration > 0 {
cacheDuration = a.portCacheDuration
}
lp := &listeningPortsHandler{
ignorePorts: cpy,
cacheDuration: cacheDuration,
}
if a.devcontainers {
r.Mount("/api/v0/containers", a.containerAPI.Routes())
} else if manifest := a.manifest.Load(); manifest != nil && manifest.ParentID != uuid.Nil {
@@ -47,7 +57,7 @@ func (a *agent) apiHandler() http.Handler {
promHandler := PrometheusMetricsHandler(a.prometheusRegistry, a.logger)
r.Get("/api/v0/listening-ports", a.listeningPortsHandler.handler)
r.Get("/api/v0/listening-ports", lp.handler)
r.Get("/api/v0/netcheck", a.HandleNetcheck)
r.Post("/api/v0/list-directory", a.HandleLS)
r.Get("/api/v0/read-file", a.HandleReadFile)
@@ -62,21 +72,22 @@ func (a *agent) apiHandler() http.Handler {
return r
}
type ListeningPortsGetter interface {
GetListeningPorts() ([]codersdk.WorkspaceAgentListeningPort, error)
}
type listeningPortsHandler struct {
// In production code, this is set to an osListeningPortsGetter, but it can be overridden for
// testing.
getter ListeningPortsGetter
ignorePorts map[int]string
ignorePorts map[int]string
cacheDuration time.Duration
//nolint: unused // used on some but not all platforms
mut sync.Mutex
//nolint: unused // used on some but not all platforms
ports []codersdk.WorkspaceAgentListeningPort
//nolint: unused // used on some but not all platforms
mtime time.Time
}
// handler returns a list of listening ports. This is tested by coderd's
// TestWorkspaceAgentListeningPorts test.
func (lp *listeningPortsHandler) handler(rw http.ResponseWriter, r *http.Request) {
ports, err := lp.getter.GetListeningPorts()
ports, err := lp.getListeningPorts()
if err != nil {
httpapi.Write(r.Context(), rw, http.StatusInternalServerError, codersdk.Response{
Message: "Could not scan for listening ports.",
@@ -85,20 +96,7 @@ func (lp *listeningPortsHandler) handler(rw http.ResponseWriter, r *http.Request
return
}
filteredPorts := make([]codersdk.WorkspaceAgentListeningPort, 0, len(ports))
for _, port := range ports {
if port.Port < workspacesdk.AgentMinimumListeningPort {
continue
}
// Ignore ports that we've been told to ignore.
if _, ok := lp.ignorePorts[int(port.Port)]; ok {
continue
}
filteredPorts = append(filteredPorts, port)
}
httpapi.Write(r.Context(), rw, http.StatusOK, codersdk.WorkspaceAgentListeningPortsResponse{
Ports: filteredPorts,
Ports: ports,
})
}
+8 -10
@@ -3,23 +3,16 @@
package agent
import (
"sync"
"time"
"github.com/cakturk/go-netstat/netstat"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
)
type osListeningPortsGetter struct {
cacheDuration time.Duration
mut sync.Mutex
ports []codersdk.WorkspaceAgentListeningPort
mtime time.Time
}
func (lp *osListeningPortsGetter) GetListeningPorts() ([]codersdk.WorkspaceAgentListeningPort, error) {
func (lp *listeningPortsHandler) getListeningPorts() ([]codersdk.WorkspaceAgentListeningPort, error) {
lp.mut.Lock()
defer lp.mut.Unlock()
@@ -40,7 +33,12 @@ func (lp *osListeningPortsGetter) GetListeningPorts() ([]codersdk.WorkspaceAgent
seen := make(map[uint16]struct{}, len(tabs))
ports := []codersdk.WorkspaceAgentListeningPort{}
for _, tab := range tabs {
if tab.LocalAddr == nil {
if tab.LocalAddr == nil || tab.LocalAddr.Port < workspacesdk.AgentMinimumListeningPort {
continue
}
// Ignore ports that we've been told to ignore.
if _, ok := lp.ignorePorts[int(tab.LocalAddr.Port)]; ok {
continue
}
-45
@@ -1,45 +0,0 @@
//go:build linux || (windows && amd64)
package agent
import (
"net"
"testing"
"time"
"github.com/stretchr/testify/require"
)
func TestOSListeningPortsGetter(t *testing.T) {
t.Parallel()
uut := &osListeningPortsGetter{
cacheDuration: 1 * time.Hour,
}
l, err := net.Listen("tcp", "localhost:0")
require.NoError(t, err)
defer l.Close()
ports, err := uut.GetListeningPorts()
require.NoError(t, err)
found := false
for _, port := range ports {
// #nosec G115 - Safe conversion as TCP port numbers are within uint16 range (0-65535)
if port.Port == uint16(l.Addr().(*net.TCPAddr).Port) {
found = true
break
}
}
require.True(t, found)
// check that we cache the ports
err = l.Close()
require.NoError(t, err)
portsNew, err := uut.GetListeningPorts()
require.NoError(t, err)
require.Equal(t, ports, portsNew)
// note that it's unsafe to try to assert that a port does not exist in the response
// because the OS may reallocate the port very quickly.
}
+2 -10
@@ -2,17 +2,9 @@
package agent
import (
"time"
import "github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk"
)
type osListeningPortsGetter struct {
cacheDuration time.Duration
}
func (*osListeningPortsGetter) GetListeningPorts() ([]codersdk.WorkspaceAgentListeningPort, error) {
func (*listeningPortsHandler) getListeningPorts() ([]codersdk.WorkspaceAgentListeningPort, error) {
// Can't scan for ports on non-linux or non-windows_amd64 systems at the
// moment. The UI will not show any "no ports found" message to the user, so
// the user won't suspect a thing.
+3 -13
@@ -74,21 +74,11 @@ func (s *Server) Serve(ctx, hardCtx context.Context, l net.Listener) (retErr err
break
}
clog := s.logger.With(
slog.F("remote", conn.RemoteAddr()),
slog.F("local", conn.LocalAddr()))
slog.F("remote", conn.RemoteAddr().String()),
slog.F("local", conn.LocalAddr().String()))
clog.Info(ctx, "accepted conn")
// It's not safe to assume RemoteAddr() returns a non-nil value. slog.F usage is fine because it correctly
// handles nil.
// c.f. https://github.com/coder/internal/issues/1143
remoteAddr := conn.RemoteAddr()
remoteAddrString := ""
if remoteAddr != nil {
remoteAddrString = remoteAddr.String()
}
wg.Add(1)
disconnected := s.reportConnection(uuid.New(), remoteAddrString)
disconnected := s.reportConnection(uuid.New(), conn.RemoteAddr().String())
closed := make(chan struct{})
go func() {
defer wg.Done()
+1 -1
@@ -58,7 +58,7 @@ func (g *Graph[EdgeType, VertexType]) AddEdge(from, to VertexType, edge EdgeType
toID := g.getOrCreateVertexID(to)
if g.canReach(to, from) {
return xerrors.Errorf("adding edge (%v -> %v): %w", from, to, ErrCycleDetected)
return xerrors.Errorf("adding edge (%v -> %v) would create a cycle", from, to)
}
g.gonumGraph.SetEdge(simple.Edge{F: simple.Node(fromID), T: simple.Node(toID)})
+5 -3
@@ -148,7 +148,8 @@ func TestGraph(t *testing.T) {
graph := &testGraph{}
unit1 := &testGraphVertex{Name: "unit1"}
err := graph.AddEdge(unit1, unit1, testEdgeCompleted)
require.ErrorIs(t, err, unit.ErrCycleDetected)
require.Error(t, err)
require.ErrorContains(t, err, fmt.Sprintf("adding edge (%v -> %v) would create a cycle", unit1, unit1))
return graph
},
@@ -159,7 +160,8 @@ func TestGraph(t *testing.T) {
err := graph.AddEdge(unit1, unit2, testEdgeCompleted)
require.NoError(t, err)
err = graph.AddEdge(unit2, unit1, testEdgeStarted)
require.ErrorIs(t, err, unit.ErrCycleDetected)
require.Error(t, err)
require.ErrorContains(t, err, fmt.Sprintf("adding edge (%v -> %v) would create a cycle", unit2, unit1))
return graph
},
@@ -339,7 +341,7 @@ func TestGraphThreadSafety(t *testing.T) {
// Verify all attempts correctly returned cycle error
for i, err := range cycleErrors {
require.Error(t, err, "goroutine %d should have detected cycle", i)
require.ErrorIs(t, err, unit.ErrCycleDetected)
require.Contains(t, err.Error(), "would create a cycle")
}
// Verify graph remains valid (original chain intact)
-290
View File
@@ -1,290 +0,0 @@
package unit
import (
"errors"
"fmt"
"sync"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/coderd/util/slice"
)
var (
ErrUnitIDRequired = xerrors.New("unit name is required")
ErrUnitNotFound = xerrors.New("unit not found")
ErrUnitAlreadyRegistered = xerrors.New("unit already registered")
ErrCannotUpdateOtherUnit = xerrors.New("cannot update other unit's status")
ErrDependenciesNotSatisfied = xerrors.New("unit dependencies not satisfied")
ErrSameStatusAlreadySet = xerrors.New("same status already set")
ErrCycleDetected = xerrors.New("cycle detected")
ErrFailedToAddDependency = xerrors.New("failed to add dependency")
)
// Status represents the status of a unit.
type Status string
var _ fmt.Stringer = Status("")
func (s Status) String() string {
if s == StatusNotRegistered {
return "not registered"
}
return string(s)
}
// Status constants for dependency tracking.
const (
StatusNotRegistered Status = ""
StatusPending Status = "pending"
StatusStarted Status = "started"
StatusComplete Status = "completed"
)
// ID provides a type narrowed representation of the unique identifier of a unit.
type ID string
// Unit represents a point-in-time snapshot of a vertex in the dependency graph.
// Units may depend on other units, or be depended on by other units. The unit struct
// is not aware of updates made to the dependency graph after it is initialized and should
// not be cached.
type Unit struct {
id ID
status Status
// ready is true if all dependencies are satisfied.
// It does not have an accessor method on Unit, because a unit cannot know whether it is ready.
// Only the Manager can calculate whether a unit is ready based on knowledge of the dependency graph.
// To discourage use of an outdated readiness value, only the Manager should set and return this field.
ready bool
}
func (u Unit) ID() ID {
return u.id
}
func (u Unit) Status() Status {
return u.status
}
// Dependency represents a dependency relationship between units.
type Dependency struct {
Unit ID
DependsOn ID
RequiredStatus Status
CurrentStatus Status
IsSatisfied bool
}
// Manager provides reactive dependency tracking over a Graph.
// It manages Unit registration, dependency relationships, and status updates
// with automatic recalculation of readiness when dependencies are satisfied.
type Manager struct {
mu sync.RWMutex
// The underlying graph that stores dependency relationships
graph *Graph[Status, ID]
// Store vertex instances for each unit to ensure consistent references
units map[ID]Unit
}
// NewManager creates a new Manager instance.
func NewManager() *Manager {
return &Manager{
graph: &Graph[Status, ID]{},
units: make(map[ID]Unit),
}
}
// Register adds a unit to the manager if it is not already registered.
// If a Unit is already registered (per the ID field), it is not updated.
func (m *Manager) Register(id ID) error {
m.mu.Lock()
defer m.mu.Unlock()
if id == "" {
return xerrors.Errorf("registering unit %q: %w", id, ErrUnitIDRequired)
}
if m.registered(id) {
return xerrors.Errorf("registering unit %q: %w", id, ErrUnitAlreadyRegistered)
}
m.units[id] = Unit{
id: id,
status: StatusPending,
ready: true,
}
return nil
}
// registered checks if a unit is registered in the manager.
func (m *Manager) registered(id ID) bool {
return m.units[id].status != StatusNotRegistered
}
// Unit fetches a unit from the manager. If the unit does not exist,
// it returns the Unit zero-value as a placeholder unit, because
// units may depend on other units that have not yet been created.
func (m *Manager) Unit(id ID) (Unit, error) {
if id == "" {
return Unit{}, xerrors.Errorf("unit ID cannot be empty: %w", ErrUnitIDRequired)
}
m.mu.RLock()
defer m.mu.RUnlock()
return m.units[id], nil
}
func (m *Manager) IsReady(id ID) (bool, error) {
if id == "" {
return false, xerrors.Errorf("unit ID cannot be empty: %w", ErrUnitIDRequired)
}
m.mu.RLock()
defer m.mu.RUnlock()
if !m.registered(id) {
return true, nil
}
return m.units[id].ready, nil
}
// AddDependency adds a dependency relationship between units.
// The unit depends on the dependsOn unit reaching the requiredStatus.
func (m *Manager) AddDependency(unit ID, dependsOn ID, requiredStatus Status) error {
m.mu.Lock()
defer m.mu.Unlock()
switch {
case unit == "":
return xerrors.Errorf("dependent name cannot be empty: %w", ErrUnitIDRequired)
case dependsOn == "":
return xerrors.Errorf("dependency name cannot be empty: %w", ErrUnitIDRequired)
case !m.registered(unit):
return xerrors.Errorf("dependent unit %q must be registered first: %w", unit, ErrUnitNotFound)
}
// Add the dependency edge to the graph
// The edge goes from unit to dependsOn, representing the dependency
err := m.graph.AddEdge(unit, dependsOn, requiredStatus)
if err != nil {
return xerrors.Errorf("adding edge for unit %q: %w", unit, errors.Join(ErrFailedToAddDependency, err))
}
// Recalculate readiness for the unit since it now has a new dependency
m.recalculateReadinessUnsafe(unit)
return nil
}
// UpdateStatus updates a unit's status and recalculates readiness for affected dependents.
func (m *Manager) UpdateStatus(unit ID, newStatus Status) error {
m.mu.Lock()
defer m.mu.Unlock()
switch {
case unit == "":
return xerrors.Errorf("updating status for unit %q: %w", unit, ErrUnitIDRequired)
case !m.registered(unit):
return xerrors.Errorf("unit %q must be registered first: %w", unit, ErrUnitNotFound)
}
u := m.units[unit]
if u.status == newStatus {
return xerrors.Errorf("checking status for unit %q: %w", unit, ErrSameStatusAlreadySet)
}
u.status = newStatus
m.units[unit] = u
// Get all units that depend on this one (reverse adjacent vertices)
dependents := m.graph.GetReverseAdjacentVertices(unit)
// Recalculate readiness for all dependents
for _, dependent := range dependents {
m.recalculateReadinessUnsafe(dependent.From)
}
return nil
}
// recalculateReadinessUnsafe recalculates the readiness state for a unit.
// This method assumes the caller holds the write lock.
func (m *Manager) recalculateReadinessUnsafe(unit ID) {
u := m.units[unit]
dependencies := m.graph.GetForwardAdjacentVertices(unit)
allSatisfied := true
for _, dependency := range dependencies {
requiredStatus := dependency.Edge
dependsOnUnit := m.units[dependency.To]
if dependsOnUnit.status != requiredStatus {
allSatisfied = false
break
}
}
u.ready = allSatisfied
m.units[unit] = u
}
// GetGraph returns the underlying graph for visualization and debugging.
// This should be used carefully as it exposes the internal graph structure.
func (m *Manager) GetGraph() *Graph[Status, ID] {
return m.graph
}
// GetAllDependencies returns all dependencies for a unit, both satisfied and unsatisfied.
func (m *Manager) GetAllDependencies(unit ID) ([]Dependency, error) {
m.mu.RLock()
defer m.mu.RUnlock()
if unit == "" {
return nil, xerrors.Errorf("unit ID cannot be empty: %w", ErrUnitIDRequired)
}
if !m.registered(unit) {
return nil, xerrors.Errorf("checking registration for unit %q: %w", unit, ErrUnitNotFound)
}
dependencies := m.graph.GetForwardAdjacentVertices(unit)
var allDependencies []Dependency
for _, dependency := range dependencies {
dependsOnUnit := m.units[dependency.To]
requiredStatus := dependency.Edge
allDependencies = append(allDependencies, Dependency{
Unit: unit,
DependsOn: dependency.To,
RequiredStatus: requiredStatus,
CurrentStatus: dependsOnUnit.status,
IsSatisfied: dependsOnUnit.status == requiredStatus,
})
}
return allDependencies, nil
}
// GetUnmetDependencies returns a list of unsatisfied dependencies for a unit.
func (m *Manager) GetUnmetDependencies(unit ID) ([]Dependency, error) {
allDependencies, err := m.GetAllDependencies(unit)
if err != nil {
return nil, err
}
var unmetDependencies []Dependency = slice.Filter(allDependencies, func(dependency Dependency) bool {
return !dependency.IsSatisfied
})
return unmetDependencies, nil
}
// ExportDOT exports the dependency graph to DOT format for visualization.
func (m *Manager) ExportDOT(name string) (string, error) {
return m.graph.ToDOT(name)
}
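
Taken together, the deleted Manager exposes a small reactive API: Register units, declare dependencies with AddDependency and a required Status, push UpdateStatus, and poll IsReady. A hedged usage sketch against that API as written above (assuming the import path github.com/coder/coder/v2/agent/unit used by the deleted tests below):

```go
package main

import (
	"fmt"

	"github.com/coder/coder/v2/agent/unit"
)

func main() {
	m := unit.NewManager()

	// Register two units; "api" will wait for "db" to start.
	_ = m.Register("api")
	_ = m.Register("db")
	_ = m.AddDependency("api", "db", unit.StatusStarted)

	ready, _ := m.IsReady("api")
	fmt.Println("api ready before db starts:", ready) // false

	// Once the dependency reaches its required status, readiness is recalculated.
	_ = m.UpdateStatus("db", unit.StatusStarted)
	ready, _ = m.IsReady("api")
	fmt.Println("api ready after db starts:", ready) // true
}
```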
-743
View File
@@ -1,743 +0,0 @@
package unit_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/agent/unit"
)
const (
unitA unit.ID = "serviceA"
unitB unit.ID = "serviceB"
unitC unit.ID = "serviceC"
unitD unit.ID = "serviceD"
)
func TestManager_UnitValidation(t *testing.T) {
t.Parallel()
t.Run("Empty Unit Name", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
err := manager.Register("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
err = manager.AddDependency("", unitA, unit.StatusStarted)
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
err = manager.AddDependency(unitA, "", unit.StatusStarted)
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
dependencies, err := manager.GetAllDependencies("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
require.Len(t, dependencies, 0)
unmetDependencies, err := manager.GetUnmetDependencies("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
require.Len(t, unmetDependencies, 0)
err = manager.UpdateStatus("", unit.StatusStarted)
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
isReady, err := manager.IsReady("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
require.False(t, isReady)
u, err := manager.Unit("")
require.ErrorIs(t, err, unit.ErrUnitIDRequired)
assert.Equal(t, unit.Unit{}, u)
})
}
func TestManager_Register(t *testing.T) {
t.Parallel()
t.Run("RegisterNewUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: a unit is registered
err := manager.Register(unitA)
require.NoError(t, err)
// Then: the unit should be ready (no dependencies)
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unitA, u.ID())
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("RegisterDuplicateUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: a unit is registered
err := manager.Register(unitA)
require.NoError(t, err)
// Newly registered units have StatusPending. We update the unit status to StatusStarted,
// so we can later assert that it is not overwritten back to StatusPending by the second
// register call
manager.UpdateStatus(unitA, unit.StatusStarted)
// When: the unit is registered again
err = manager.Register(unitA)
// Then: a descriptive error should be returned
require.ErrorIs(t, err, unit.ErrUnitAlreadyRegistered)
// Then: the unit status should not be overwritten
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusStarted, u.Status())
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("RegisterMultipleUnits", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: multiple units are registered
unitIDs := []unit.ID{unitA, unitB, unitC}
for _, unit := range unitIDs {
err := manager.Register(unit)
require.NoError(t, err)
}
// Then: all units should be ready initially
for _, unitID := range unitIDs {
u, err := manager.Unit(unitID)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitID)
require.NoError(t, err)
assert.True(t, isReady)
}
})
}
func TestManager_AddDependency(t *testing.T) {
t.Parallel()
t.Run("AddDependencyBetweenRegisteredUnits", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: units A and B are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given: Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should not be ready (depends on B)
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Then: Unit B should still be ready (no dependencies)
u, err = manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.True(t, isReady)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should be ready, because its dependency is now in the desired state.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
// When: Unit B is stopped
err = manager.UpdateStatus(unitB, unit.StatusPending)
require.NoError(t, err)
// Then: Unit A should no longer be ready, because its dependency is not in the desired state.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
})
t.Run("AddDependencyByAnUnregisteredDependentUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given Unit B is registered
err := manager.Register(unitB)
require.NoError(t, err)
// Given Unit A depends on Unit B being started
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
// Then: a descriptive error communicates that the dependency cannot be added
// because the dependent unit must be registered first.
require.ErrorIs(t, err, unit.ErrUnitNotFound)
})
t.Run("AddDependencyOnAnUnregisteredUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given unit A is registered
err := manager.Register(unitA)
require.NoError(t, err)
// Given Unit B is not yet registered
// And Unit A depends on Unit B being started
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: The dependency should be visible in Unit A's status
dependencies, err := manager.GetAllDependencies(unitA)
require.NoError(t, err)
require.Len(t, dependencies, 1)
assert.Equal(t, unitB, dependencies[0].DependsOn)
assert.Equal(t, unit.StatusStarted, dependencies[0].RequiredStatus)
assert.False(t, dependencies[0].IsSatisfied)
u, err := manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusNotRegistered, u.Status())
// Then: Unit A should not be ready, because it depends on Unit B
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit B is registered
err = manager.Register(unitB)
require.NoError(t, err)
// Then: Unit A should still not be ready.
// Unit B is now registered, but it has not been started as required by the dependency.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should be ready, because its dependency is now in the desired state.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("AddDependencyCreatesACyclicDependency", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register units
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
err = manager.Register(unitC)
require.NoError(t, err)
err = manager.Register(unitD)
require.NoError(t, err)
// A depends on B
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// B depends on C
err = manager.AddDependency(unitB, unitC, unit.StatusStarted)
require.NoError(t, err)
// C depends on D
err = manager.AddDependency(unitC, unitD, unit.StatusStarted)
require.NoError(t, err)
// Try to make D depend on A (creates indirect cycle)
err = manager.AddDependency(unitD, unitA, unit.StatusStarted)
require.ErrorIs(t, err, unit.ErrCycleDetected)
})
t.Run("UpdatingADependency", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given units A and B are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// When: The dependency is updated to unit.StatusComplete
err = manager.AddDependency(unitA, unitB, unit.StatusComplete)
require.NoError(t, err)
// Then: Unit A should only have one dependency, and it should be unit.StatusComplete
dependencies, err := manager.GetAllDependencies(unitA)
require.NoError(t, err)
require.Len(t, dependencies, 1)
assert.Equal(t, unit.StatusComplete, dependencies[0].RequiredStatus)
})
}
func TestManager_UpdateStatus(t *testing.T) {
t.Parallel()
t.Run("UpdateStatusTriggersReadinessRecalculation", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given units A and B are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should not be ready (depends on B)
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should be ready, because its dependency is now in the desired state.
u, err = manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("UpdateStatusWithUnregisteredUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given Unit A is not registered
// When: Unit A is updated to unit.StatusStarted
err := manager.UpdateStatus(unitA, unit.StatusStarted)
// Then: a descriptive error communicates that the unit must be registered first.
require.ErrorIs(t, err, unit.ErrUnitNotFound)
})
t.Run("LinearChainDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given units A, B, and C are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
err = manager.Register(unitC)
require.NoError(t, err)
// Create chain: A depends on B being "started", B depends on C being "completed"
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitB, unitC, unit.StatusComplete)
require.NoError(t, err)
// Then: only Unit C should be ready (no dependencies)
u, err := manager.Unit(unitC)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err := manager.IsReady(unitC)
require.NoError(t, err)
assert.True(t, isReady)
u, err = manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.False(t, isReady)
u, err = manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit C is completed
err = manager.UpdateStatus(unitC, unit.StatusComplete)
require.NoError(t, err)
// Then: Unit B should be ready, because its dependency is now in the desired state.
u, err = manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.True(t, isReady)
u, err = manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
u, err = manager.Unit(unitB)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.True(t, isReady)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should be ready, because its dependency is now in the desired state.
u, err = manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusPending, u.Status())
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
}
func TestManager_GetUnmetDependencies(t *testing.T) {
t.Parallel()
t.Run("GetUnmetDependenciesForUnitWithNoDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: Unit A is registered
err := manager.Register(unitA)
require.NoError(t, err)
// Given: Unit A has no dependencies
// Then: Unit A should have no unmet dependencies
unmet, err := manager.GetUnmetDependencies(unitA)
require.NoError(t, err)
assert.Empty(t, unmet)
})
t.Run("GetUnmetDependenciesForUnitWithUnsatisfiedDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given: Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
unmet, err := manager.GetUnmetDependencies(unitA)
require.NoError(t, err)
require.Len(t, unmet, 1)
assert.Equal(t, unitA, unmet[0].Unit)
assert.Equal(t, unitB, unmet[0].DependsOn)
assert.Equal(t, unit.StatusStarted, unmet[0].RequiredStatus)
assert.False(t, unmet[0].IsSatisfied)
})
t.Run("GetUnmetDependenciesForUnitWithSatisfiedDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: Unit A and Unit B are registered
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Given: Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should have no unmet dependencies
unmet, err := manager.GetUnmetDependencies(unitA)
require.NoError(t, err)
assert.Empty(t, unmet)
})
t.Run("GetUnmetDependenciesForUnregisteredUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// When: Unit A is requested
unmet, err := manager.GetUnmetDependencies(unitA)
// Then: a descriptive error communicates that the unit must be registered first.
require.ErrorIs(t, err, unit.ErrUnitNotFound)
assert.Nil(t, unmet)
})
}
func TestManager_MultipleDependencies(t *testing.T) {
t.Parallel()
t.Run("UnitWithMultipleDependencies", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register all units
units := []unit.ID{unitA, unitB, unitC, unitD}
for _, unit := range units {
err := manager.Register(unit)
require.NoError(t, err)
}
// A depends on B being unit.StatusStarted AND C being "started"
err := manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitA, unitC, unit.StatusStarted)
require.NoError(t, err)
// A should not be ready (depends on both B and C)
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update B to unit.StatusStarted - A should still not be ready (needs C too)
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update C to "started" - A should now be ready
err = manager.UpdateStatus(unitC, unit.StatusStarted)
require.NoError(t, err)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("ComplexDependencyChain", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register all units
units := []unit.ID{unitA, unitB, unitC, unitD}
for _, unit := range units {
err := manager.Register(unit)
require.NoError(t, err)
}
// Create complex dependency graph:
// A depends on B being unit.StatusStarted AND C being "started"
err := manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitA, unitC, unit.StatusStarted)
require.NoError(t, err)
// B depends on D being "completed"
err = manager.AddDependency(unitB, unitD, unit.StatusComplete)
require.NoError(t, err)
// C depends on D being "completed"
err = manager.AddDependency(unitC, unitD, unit.StatusComplete)
require.NoError(t, err)
// Initially only D is ready
isReady, err := manager.IsReady(unitD)
require.NoError(t, err)
assert.True(t, isReady)
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.False(t, isReady)
isReady, err = manager.IsReady(unitC)
require.NoError(t, err)
assert.False(t, isReady)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update D to "completed" - B and C should become ready
err = manager.UpdateStatus(unitD, unit.StatusComplete)
require.NoError(t, err)
isReady, err = manager.IsReady(unitB)
require.NoError(t, err)
assert.True(t, isReady)
isReady, err = manager.IsReady(unitC)
require.NoError(t, err)
assert.True(t, isReady)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update B to unit.StatusStarted - A should still not be ready (needs C)
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// Update C to "started" - A should now be ready
err = manager.UpdateStatus(unitC, unit.StatusStarted)
require.NoError(t, err)
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
t.Run("DifferentStatusTypes", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register units
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
err = manager.Register(unitC)
require.NoError(t, err)
// Given: Unit A depends on Unit B being unit.StatusStarted
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
// Given: Unit A depends on Unit C being "completed"
err = manager.AddDependency(unitA, unitC, unit.StatusComplete)
require.NoError(t, err)
// When: Unit B is started
err = manager.UpdateStatus(unitB, unit.StatusStarted)
require.NoError(t, err)
// Then: Unit A should not be ready, because only one of its dependencies is in the desired state.
// It still requires Unit C to be completed.
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.False(t, isReady)
// When: Unit C is completed
err = manager.UpdateStatus(unitC, unit.StatusComplete)
require.NoError(t, err)
// Then: Unit A should be ready, because both of its dependencies are in the desired state.
isReady, err = manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
}
func TestManager_IsReady(t *testing.T) {
t.Parallel()
t.Run("IsReadyWithUnregisteredUnit", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Given: a unit is not registered
u, err := manager.Unit(unitA)
require.NoError(t, err)
assert.Equal(t, unit.StatusNotRegistered, u.Status())
// Then: the unit is reported as ready (an unregistered unit has no unmet dependencies)
isReady, err := manager.IsReady(unitA)
require.NoError(t, err)
assert.True(t, isReady)
})
}
func TestManager_ToDOT(t *testing.T) {
t.Parallel()
t.Run("ExportSimpleGraph", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register units
err := manager.Register(unitA)
require.NoError(t, err)
err = manager.Register(unitB)
require.NoError(t, err)
// Add dependency
err = manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
dot, err := manager.ExportDOT("test")
require.NoError(t, err)
assert.NotEmpty(t, dot)
assert.Contains(t, dot, "digraph")
})
t.Run("ExportComplexGraph", func(t *testing.T) {
t.Parallel()
manager := unit.NewManager()
// Register all units
units := []unit.ID{unitA, unitB, unitC, unitD}
for _, unit := range units {
err := manager.Register(unit)
require.NoError(t, err)
}
// Create complex dependency graph
// A depends on B and C, B depends on D, C depends on D
err := manager.AddDependency(unitA, unitB, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitA, unitC, unit.StatusStarted)
require.NoError(t, err)
err = manager.AddDependency(unitB, unitD, unit.StatusComplete)
require.NoError(t, err)
err = manager.AddDependency(unitC, unitD, unit.StatusComplete)
require.NoError(t, err)
dot, err := manager.ExportDOT("complex")
require.NoError(t, err)
assert.NotEmpty(t, dot)
assert.Contains(t, dot, "digraph")
})
}
-17
View File
@@ -57,8 +57,6 @@ func workspaceAgent() *serpent.Command {
devcontainers bool
devcontainerProjectDiscovery bool
devcontainerDiscoveryAutostart bool
socketServerEnabled bool
socketPath string
)
agentAuth := &AgentAuth{}
cmd := &serpent.Command{
@@ -319,8 +317,6 @@ func workspaceAgent() *serpent.Command {
agentcontainers.WithProjectDiscovery(devcontainerProjectDiscovery),
agentcontainers.WithDiscoveryAutostart(devcontainerDiscoveryAutostart),
},
SocketPath: socketPath,
SocketServerEnabled: socketServerEnabled,
})
if debugAddress != "" {
@@ -481,19 +477,6 @@ func workspaceAgent() *serpent.Command {
Description: "Allow the agent to autostart devcontainer projects it discovers based on their configuration.",
Value: serpent.BoolOf(&devcontainerDiscoveryAutostart),
},
{
Flag: "socket-server-enabled",
Default: "false",
Env: "CODER_AGENT_SOCKET_SERVER_ENABLED",
Description: "Enable the agent socket server.",
Value: serpent.BoolOf(&socketServerEnabled),
},
{
Flag: "socket-path",
Env: "CODER_AGENT_SOCKET_PATH",
Description: "Specify the path for the agent socket.",
Value: serpent.StringOf(&socketPath),
},
}
agentAuth.AttachOptions(cmd, false)
return cmd
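
The removed flags follow the usual serpent option shape (Flag, Default, Env, Description, and a Value bound via BoolOf/StringOf). A self-contained sketch of wiring equivalent options on a throwaway command; the demo flag names and environment variables here are placeholders:

```go
package main

import (
	"fmt"

	"github.com/coder/serpent"
)

func main() {
	var (
		enabled bool
		path    string
	)
	cmd := &serpent.Command{
		Use: "demo",
		Options: serpent.OptionSet{
			{
				Flag:        "socket-server-enabled",
				Default:     "false",
				Env:         "DEMO_SOCKET_SERVER_ENABLED",
				Description: "Enable the demo socket server.",
				Value:       serpent.BoolOf(&enabled),
			},
			{
				Flag:        "socket-path",
				Env:         "DEMO_SOCKET_PATH",
				Description: "Specify the path for the demo socket.",
				Value:       serpent.StringOf(&path),
			},
		},
		Handler: func(*serpent.Invocation) error {
			fmt.Println("enabled:", enabled, "path:", path)
			return nil
		},
	}
	_ = cmd // in a real program this is attached to a root command and invoked
}
```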
+4 -20
View File
@@ -28,9 +28,7 @@ import (
)
// New creates a CLI instance with a configuration pointed to a
// temporary testing directory. The invocation is set up to use a
// global config directory for the given testing.TB, and keyring
// usage disabled.
// temporary testing directory.
func New(t testing.TB, args ...string) (*serpent.Invocation, config.Root) {
var root cli.RootCmd
@@ -61,15 +59,6 @@ func NewWithCommand(
t testing.TB, cmd *serpent.Command, args ...string,
) (*serpent.Invocation, config.Root) {
configDir := config.Root(t.TempDir())
// Keyring usage is disabled here when --global-config is set because many existing
// tests expect the session token to be stored on disk and are not properly instrumented
// for parallel testing against the actual operating system keyring.
invArgs := append([]string{"--global-config", string(configDir)}, args...)
return setupInvocation(t, cmd, invArgs...), configDir
}
func setupInvocation(t testing.TB, cmd *serpent.Command, args ...string,
) *serpent.Invocation {
// I really would like to fail tests on error logs, but realistically, turning this on by default
// in all our CLI tests is going to create a lot of flaky noise.
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).
@@ -77,21 +66,16 @@ func setupInvocation(t testing.TB, cmd *serpent.Command, args ...string,
Named("cli")
i := &serpent.Invocation{
Command: cmd,
Args: args,
Args: append([]string{"--global-config", string(configDir)}, args...),
Stdin: io.LimitReader(nil, 0),
Stdout: (&logWriter{prefix: "stdout", log: logger}),
Stderr: (&logWriter{prefix: "stderr", log: logger}),
Logger: logger,
}
t.Logf("invoking command: %s %s", cmd.Name(), strings.Join(i.Args, " "))
return i
}
func NewWithDefaultKeyringCommand(t testing.TB, cmd *serpent.Command, args ...string,
) (*serpent.Invocation, config.Root) {
configDir := config.Root(t.TempDir())
invArgs := append([]string{"--global-config", string(configDir)}, args...)
return setupInvocation(t, cmd, invArgs...), configDir
// These can be overridden by the test.
return i, configDir
}
// SetupConfig applies the URL and SessionToken of the client to the config.
-3
View File
@@ -106,9 +106,6 @@ var _ OutputFormat = &tableFormat{}
//
// defaultColumns is optional and specifies the default columns to display. If
// not specified, all columns are displayed by default.
//
// If the data is empty, an empty string is returned. Callers should check for
// this and provide an appropriate message to the user.
func TableFormat(out any, defaultColumns []string) OutputFormat {
v := reflect.Indirect(reflect.ValueOf(out))
if v.Kind() != reflect.Slice {
-6
View File
@@ -180,12 +180,6 @@ func DisplayTable(out any, sort string, filterColumns []string) (string, error)
func renderTable(out any, sort string, headers table.Row, filterColumns []string) (string, error) {
v := reflect.Indirect(reflect.ValueOf(out))
// Return empty string for empty data. Callers should check for this
// and provide an appropriate message to the user.
if v.Kind() == reflect.Slice && v.Len() == 0 {
return "", nil
}
headers = filterHeaders(headers, filterColumns)
columnConfigs := createColumnConfigs(headers, filterColumns)
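
The removed early return detected an empty slice via reflection before rendering. A tiny standalone sketch of that check, for reference:

```go
package main

import (
	"fmt"
	"reflect"
)

// isEmptySlice reports whether out is a slice (possibly behind a pointer) with no elements.
func isEmptySlice(out any) bool {
	v := reflect.Indirect(reflect.ValueOf(out))
	return v.Kind() == reflect.Slice && v.Len() == 0
}

func main() {
	var rows []string
	fmt.Println(isEmptySlice(rows))     // true
	fmt.Println(isEmptySlice(&rows))    // true
	fmt.Println(isEmptySlice([]int{1})) // false
}
```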
-9
View File
@@ -472,15 +472,6 @@ alice 1
require.NoError(t, err)
compareTables(t, expected, out)
})
t.Run("Empty", func(t *testing.T) {
t.Parallel()
var in []tableTest4
out, err := cliui.DisplayTable(in, "", nil)
require.NoError(t, err)
require.Empty(t, out)
})
}
// compareTables normalizes the incoming table lines
+3 -3
View File
@@ -90,6 +90,7 @@ func TestExpRpty(t *testing.T) {
wantLabel := "coder.devcontainers.TestExpRpty.Container"
client, workspace, agentToken := setupWorkspaceForAgent(t)
ctx := testutil.Context(t, testutil.WaitLong)
pool, err := dockertest.NewPool("")
require.NoError(t, err, "Could not connect to docker")
ct, err := pool.RunWithOptions(&dockertest.RunOptions{
@@ -127,15 +128,14 @@ func TestExpRpty(t *testing.T) {
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitLong)
cmdDone := tGo(t, func() {
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
})
pty.ExpectMatchContext(ctx, " #")
pty.ExpectMatch(" #")
pty.WriteLine("hostname")
pty.ExpectMatchContext(ctx, ct.Container.Config.Hostname)
pty.ExpectMatch(ct.Container.Config.Hostname)
pty.WriteLine("exit")
<-cmdDone
})
+8 -10
View File
@@ -1559,15 +1559,6 @@ func (r *RootCmd) scaletestDashboard() *serpent.Command {
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
tracer := tracerProvider.Tracer(scaletestTracerName)
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("could not parse --output flags")
}
reg := prometheus.NewRegistry()
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
// Allow time for traces to flush even if command context is
// canceled. This is a no-op if tracing is not enabled.
@@ -1579,7 +1570,14 @@ func (r *RootCmd) scaletestDashboard() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
tracer := tracerProvider.Tracer(scaletestTracerName)
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("could not parse --output flags")
}
reg := prometheus.NewRegistry()
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
metrics := dashboard.NewMetrics(reg)
th := harness.NewTestHarness(strategy.toStrategy(), cleanupStrategy.toStrategy())
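
The reshuffle changes which statements run before the deferred trace-flush/metrics-wait block is registered; since deferred calls run last-in, first-out, registration order determines cleanup order. A minimal illustration of that ordering, unrelated to the scaletest specifics:

```go
package main

import "fmt"

func main() {
	defer fmt.Println("registered first, runs last")
	defer fmt.Println("registered second, runs second")
	defer fmt.Println("registered third, runs first")
	fmt.Println("function body")
}
```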
-10
View File
@@ -142,15 +142,6 @@ func (r *RootCmd) scaletestNotifications() *serpent.Command {
triggerTimes[id] = make(chan time.Time, 1)
}
smtpHTTPTransport := &http.Transport{
MaxConnsPerHost: 512,
MaxIdleConnsPerHost: 512,
IdleConnTimeout: 60 * time.Second,
}
smtpHTTPClient := &http.Client{
Transport: smtpHTTPTransport,
}
configs := make([]notifications.Config, 0, userCount)
for range templateAdminCount {
config := notifications.Config{
@@ -166,7 +157,6 @@ func (r *RootCmd) scaletestNotifications() *serpent.Command {
Metrics: metrics,
SMTPApiURL: smtpAPIURL,
SMTPRequestTimeout: smtpRequestTimeout,
SMTPHttpClient: smtpHTTPClient,
}
if err := config.Validate(); err != nil {
return xerrors.Errorf("validate config: %w", err)
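
The removed block configured a shared HTTP client with bounded connection pools for the SMTP API. A standalone sketch using the same limits the hunk shows:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A shared client with bounded connection pools, mirroring the removed
	// smtpHTTPClient above; the limits are the values shown in the hunk.
	transport := &http.Transport{
		MaxConnsPerHost:     512,
		MaxIdleConnsPerHost: 512,
		IdleConnTimeout:     60 * time.Second,
	}
	client := &http.Client{Transport: transport}
	fmt.Printf("%T ready\n", client)
}
```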
+8 -9
View File
@@ -84,15 +84,6 @@ func (r *RootCmd) scaletestPrebuilds() *serpent.Command {
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
tracer := tracerProvider.Tracer(scaletestTracerName)
reg := prometheus.NewRegistry()
metrics := prebuilds.NewMetrics(reg)
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
_, _ = fmt.Fprintln(inv.Stderr, "\nUploading traces...")
if err := closeTracing(ctx); err != nil {
@@ -101,6 +92,14 @@ func (r *RootCmd) scaletestPrebuilds() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
tracer := tracerProvider.Tracer(scaletestTracerName)
reg := prometheus.NewRegistry()
metrics := prebuilds.NewMetrics(reg)
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
err = client.PutPrebuildsSettings(ctx, codersdk.PrebuildsSettings{
ReconciliationPaused: true,
+1 -1
View File
@@ -8,7 +8,7 @@ func (r *RootCmd) tasksCommand() *serpent.Command {
cmd := &serpent.Command{
Use: "task",
Aliases: []string{"tasks"},
Short: "Manage tasks",
Short: "Experimental task commands.",
Handler: func(i *serpent.Invocation) error {
return i.Command.HelpHandler(i)
},
@@ -28,27 +28,27 @@ func (r *RootCmd) taskCreate() *serpent.Command {
cmd := &serpent.Command{
Use: "create [input]",
Short: "Create a task",
Short: "Create an experimental task",
Long: FormatExamples(
Example{
Description: "Create a task with direct input",
Command: "coder task create \"Add authentication to the user service\"",
Command: "coder exp task create \"Add authentication to the user service\"",
},
Example{
Description: "Create a task with stdin input",
Command: "echo \"Add authentication to the user service\" | coder task create",
Command: "echo \"Add authentication to the user service\" | coder exp task create",
},
Example{
Description: "Create a task with a specific name",
Command: "coder task create --name task1 \"Add authentication to the user service\"",
Command: "coder exp task create --name task1 \"Add authentication to the user service\"",
},
Example{
Description: "Create a task from a specific template / preset",
Command: "coder task create --template backend-dev --preset \"My Preset\" \"Add authentication to the user service\"",
Command: "coder exp task create --template backend-dev --preset \"My Preset\" \"Add authentication to the user service\"",
},
Example{
Description: "Create a task for another user (requires appropriate permissions)",
Command: "coder task create --owner user@example.com \"Add authentication to the user service\"",
Command: "coder exp task create --owner user@example.com \"Add authentication to the user service\"",
},
),
Middleware: serpent.Chain(
@@ -111,7 +111,8 @@ func (r *RootCmd) taskCreate() *serpent.Command {
}
var (
ctx = inv.Context()
ctx = inv.Context()
expClient = codersdk.NewExperimentalClient(client)
taskInput string
templateVersionID uuid.UUID
@@ -207,7 +208,7 @@ func (r *RootCmd) taskCreate() *serpent.Command {
templateVersionPresetID = preset.ID
}
task, err := client.CreateTask(ctx, ownerArg, codersdk.CreateTaskRequest{
task, err := expClient.CreateTask(ctx, ownerArg, codersdk.CreateTaskRequest{
Name: taskName,
TemplateVersionID: templateVersionID,
TemplateVersionPresetID: templateVersionPresetID,
@@ -69,7 +69,7 @@ func TestTaskCreate(t *testing.T) {
ActiveVersionID: templateVersionID,
},
})
case fmt.Sprintf("/api/v2/tasks/%s", username):
case fmt.Sprintf("/api/experimental/tasks/%s", username):
var req codersdk.CreateTaskRequest
if !httpapi.Read(ctx, w, r, &req) {
return
@@ -329,7 +329,7 @@ func TestTaskCreate(t *testing.T) {
ctx = testutil.Context(t, testutil.WaitShort)
srv = httptest.NewServer(tt.handler(t, ctx))
client = codersdk.New(testutil.MustURL(t, srv.URL))
args = []string{"task", "create"}
args = []string{"exp", "task", "create"}
sb strings.Builder
err error
)
@@ -17,19 +17,19 @@ import (
func (r *RootCmd) taskDelete() *serpent.Command {
cmd := &serpent.Command{
Use: "delete <task> [<task> ...]",
Short: "Delete tasks",
Short: "Delete experimental tasks",
Long: FormatExamples(
Example{
Description: "Delete a single task.",
Command: "$ coder task delete task1",
Command: "$ coder exp task delete task1",
},
Example{
Description: "Delete multiple tasks.",
Command: "$ coder task delete task1 task2 task3",
Command: "$ coder exp task delete task1 task2 task3",
},
Example{
Description: "Delete a task without confirmation.",
Command: "$ coder task delete task4 --yes",
Command: "$ coder exp task delete task4 --yes",
},
),
Middleware: serpent.Chain(
@@ -44,10 +44,11 @@ func (r *RootCmd) taskDelete() *serpent.Command {
if err != nil {
return err
}
exp := codersdk.NewExperimentalClient(client)
var tasks []codersdk.Task
for _, identifier := range inv.Args {
task, err := client.TaskByIdentifier(ctx, identifier)
task, err := exp.TaskByIdentifier(ctx, identifier)
if err != nil {
return xerrors.Errorf("resolve task %q: %w", identifier, err)
}
@@ -70,7 +71,7 @@ func (r *RootCmd) taskDelete() *serpent.Command {
for i, task := range tasks {
display := displayList[i]
if err := client.DeleteTask(ctx, task.OwnerName, task.ID); err != nil {
if err := exp.DeleteTask(ctx, task.OwnerName, task.ID); err != nil {
return xerrors.Errorf("delete task %q: %w", display, err)
}
_, _ = fmt.Fprintln(
@@ -56,7 +56,7 @@ func TestExpTaskDelete(t *testing.T) {
taskID := uuid.MustParse(id1)
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/tasks/me/exists":
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/exists":
c.nameResolves.Add(1)
httpapi.Write(r.Context(), w, http.StatusOK,
codersdk.Task{
@@ -64,7 +64,7 @@ func TestExpTaskDelete(t *testing.T) {
Name: "exists",
OwnerName: "me",
})
case r.Method == http.MethodDelete && r.URL.Path == "/api/v2/tasks/me/"+id1:
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/"+id1:
c.deleteCalls.Add(1)
w.WriteHeader(http.StatusAccepted)
default:
@@ -82,13 +82,13 @@ func TestExpTaskDelete(t *testing.T) {
buildHandler: func(c *testCounters) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/tasks/me/"+id2:
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/"+id2:
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse(id2),
OwnerName: "me",
Name: "uuid-task",
})
case r.Method == http.MethodDelete && r.URL.Path == "/api/v2/tasks/me/"+id2:
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/"+id2:
c.deleteCalls.Add(1)
w.WriteHeader(http.StatusAccepted)
default:
@@ -104,24 +104,24 @@ func TestExpTaskDelete(t *testing.T) {
buildHandler: func(c *testCounters) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/tasks/me/first":
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/first":
c.nameResolves.Add(1)
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse(id3),
Name: "first",
OwnerName: "me",
})
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/tasks/me/"+id4:
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/"+id4:
c.nameResolves.Add(1)
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse(id4),
OwnerName: "me",
Name: "uuid-task-4",
})
case r.Method == http.MethodDelete && r.URL.Path == "/api/v2/tasks/me/"+id3:
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/"+id3:
c.deleteCalls.Add(1)
w.WriteHeader(http.StatusAccepted)
case r.Method == http.MethodDelete && r.URL.Path == "/api/v2/tasks/me/"+id4:
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/"+id4:
c.deleteCalls.Add(1)
w.WriteHeader(http.StatusAccepted)
default:
@@ -140,7 +140,7 @@ func TestExpTaskDelete(t *testing.T) {
buildHandler: func(_ *testCounters) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/tasks" && r.URL.Query().Get("q") == "owner:\"me\"":
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks" && r.URL.Query().Get("q") == "owner:\"me\"":
httpapi.Write(r.Context(), w, http.StatusOK, struct {
Tasks []codersdk.Task `json:"tasks"`
Count int `json:"count"`
@@ -163,14 +163,14 @@ func TestExpTaskDelete(t *testing.T) {
taskID := uuid.MustParse(id5)
return func(w http.ResponseWriter, r *http.Request) {
switch {
case r.Method == http.MethodGet && r.URL.Path == "/api/v2/tasks/me/bad":
case r.Method == http.MethodGet && r.URL.Path == "/api/experimental/tasks/me/bad":
c.nameResolves.Add(1)
httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Task{
ID: taskID,
Name: "bad",
OwnerName: "me",
})
case r.Method == http.MethodDelete && r.URL.Path == "/api/v2/tasks/me/bad":
case r.Method == http.MethodDelete && r.URL.Path == "/api/experimental/tasks/me/bad":
httpapi.InternalServerError(w, xerrors.New("boom"))
default:
httpapi.InternalServerError(w, xerrors.New("unwanted path: "+r.Method+" "+r.URL.Path))
@@ -193,7 +193,7 @@ func TestExpTaskDelete(t *testing.T) {
client := codersdk.New(testutil.MustURL(t, srv.URL))
args := append([]string{"task", "delete"}, tc.args...)
args := append([]string{"exp", "task", "delete"}, tc.args...)
inv, root := clitest.New(t, args...)
inv = inv.WithContext(ctx)
clitest.SetupConfig(t, client, root)
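
The recurring pattern in these task commands is to wrap the standard client with codersdk.NewExperimentalClient and call the task methods on the wrapper. A hedged sketch of that flow in isolation, using only the calls visible in the hunks above (TaskByIdentifier and DeleteTask); the server URL is a placeholder:

```go
package main

import (
	"context"
	"fmt"
	"net/url"

	"golang.org/x/xerrors"

	"github.com/coder/coder/v2/codersdk"
)

// deleteTaskByIdentifier resolves a task by name or UUID via the experimental
// API and then deletes it, mirroring the command flow shown above.
func deleteTaskByIdentifier(ctx context.Context, client *codersdk.Client, identifier string) error {
	exp := codersdk.NewExperimentalClient(client)
	task, err := exp.TaskByIdentifier(ctx, identifier)
	if err != nil {
		return xerrors.Errorf("resolve task %q: %w", identifier, err)
	}
	if err := exp.DeleteTask(ctx, task.OwnerName, task.ID); err != nil {
		return xerrors.Errorf("delete task %q: %w", identifier, err)
	}
	return nil
}

func main() {
	u, _ := url.Parse("https://coder.example.com") // placeholder deployment URL
	client := codersdk.New(u)
	_ = client
	_ = deleteTaskByIdentifier // wired into a CLI handler in a real program
	fmt.Println("sketch only")
}
```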
+14 -11
View File
@@ -69,27 +69,27 @@ func (r *RootCmd) taskList() *serpent.Command {
cmd := &serpent.Command{
Use: "list",
Short: "List tasks",
Short: "List experimental tasks",
Long: FormatExamples(
Example{
Description: "List tasks for the current user.",
Command: "coder task list",
Command: "coder exp task list",
},
Example{
Description: "List tasks for a specific user.",
Command: "coder task list --user someone-else",
Command: "coder exp task list --user someone-else",
},
Example{
Description: "List all tasks you can view.",
Command: "coder task list --all",
Command: "coder exp task list --all",
},
Example{
Description: "List all your running tasks.",
Command: "coder task list --status running",
Command: "coder exp task list --status running",
},
Example{
Description: "As above, but only show IDs.",
Command: "coder task list --status running --quiet",
Command: "coder exp task list --status running --quiet",
},
),
Aliases: []string{"ls"},
@@ -135,13 +135,14 @@ func (r *RootCmd) taskList() *serpent.Command {
}
ctx := inv.Context()
exp := codersdk.NewExperimentalClient(client)
targetUser := strings.TrimSpace(user)
if targetUser == "" && !all {
targetUser = codersdk.Me
}
tasks, err := client.Tasks(ctx, &codersdk.TasksFilter{
tasks, err := exp.Tasks(ctx, &codersdk.TasksFilter{
Owner: targetUser,
Status: codersdk.TaskStatus(statusFilter),
})
@@ -157,6 +158,12 @@ func (r *RootCmd) taskList() *serpent.Command {
return nil
}
// If no rows and not JSON, show a friendly message.
if len(tasks) == 0 && formatter.FormatID() != cliui.JSONFormat().ID() {
_, _ = fmt.Fprintln(inv.Stderr, "No tasks found.")
return nil
}
rows := make([]taskListRow, len(tasks))
now := time.Now()
for i := range tasks {
@@ -167,10 +174,6 @@ func (r *RootCmd) taskList() *serpent.Command {
if err != nil {
return xerrors.Errorf("format tasks: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No tasks found.")
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, out)
return nil
},
@@ -69,7 +69,7 @@ func TestExpTaskList(t *testing.T) {
owner := coderdtest.CreateFirstUser(t, client)
memberClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
inv, root := clitest.New(t, "task", "list")
inv, root := clitest.New(t, "exp", "task", "list")
clitest.SetupConfig(t, memberClient, root)
pty := ptytest.New(t).Attach(inv)
@@ -93,7 +93,7 @@ func TestExpTaskList(t *testing.T) {
wantPrompt := "build me a web app"
task := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, wantPrompt)
inv, root := clitest.New(t, "task", "list", "--column", "id,name,status,initial prompt")
inv, root := clitest.New(t, "exp", "task", "list", "--column", "id,name,status,initial prompt")
clitest.SetupConfig(t, memberClient, root)
pty := ptytest.New(t).Attach(inv)
@@ -122,7 +122,7 @@ func TestExpTaskList(t *testing.T) {
pausedTask := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStop, "stop me please")
// Use JSON output to reliably validate filtering.
inv, root := clitest.New(t, "task", "list", "--status=paused", "--output=json")
inv, root := clitest.New(t, "exp", "task", "list", "--status=paused", "--output=json")
clitest.SetupConfig(t, memberClient, root)
ctx := testutil.Context(t, testutil.WaitShort)
@@ -153,7 +153,7 @@ func TestExpTaskList(t *testing.T) {
_ = makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStart, "other-task")
task := makeAITask(t, db, owner.OrganizationID, owner.UserID, owner.UserID, database.WorkspaceTransitionStart, "me-task")
inv, root := clitest.New(t, "task", "list", "--user", "me")
inv, root := clitest.New(t, "exp", "task", "list", "--user", "me")
//nolint:gocritic // Owner client is intended here to smoke test the member task not showing up.
clitest.SetupConfig(t, client, root)
@@ -180,7 +180,7 @@ func TestExpTaskList(t *testing.T) {
task2 := makeAITask(t, db, owner.OrganizationID, owner.UserID, memberUser.ID, database.WorkspaceTransitionStop, "stop me please")
// Given: We add the `--quiet` flag
inv, root := clitest.New(t, "task", "list", "--quiet")
inv, root := clitest.New(t, "exp", "task", "list", "--quiet")
clitest.SetupConfig(t, memberClient, root)
ctx := testutil.Context(t, testutil.WaitShort)
@@ -224,7 +224,7 @@ func TestExpTaskList_OwnerCanListOthers(t *testing.T) {
t.Parallel()
// As the owner, list only member A tasks.
inv, root := clitest.New(t, "task", "list", "--user", memberAUser.Username, "--output=json")
inv, root := clitest.New(t, "exp", "task", "list", "--user", memberAUser.Username, "--output=json")
//nolint:gocritic // Owner client is intended here to allow member tasks to be listed.
clitest.SetupConfig(t, ownerClient, root)
@@ -252,7 +252,7 @@ func TestExpTaskList_OwnerCanListOthers(t *testing.T) {
// As the owner, list all tasks to verify both member tasks are present.
// Use JSON output to reliably validate filtering.
inv, root := clitest.New(t, "task", "list", "--all", "--output=json")
inv, root := clitest.New(t, "exp", "task", "list", "--all", "--output=json")
//nolint:gocritic // Owner client is intended here to allow all tasks to be listed.
clitest.SetupConfig(t, ownerClient, root)
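
These list hunks move empty-result handling from an after-the-fact check on the formatted string to an explicit length check before formatting, and skip the friendly message when JSON output is requested. A small sketch of that caller-side pattern with a hypothetical row type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type taskRow struct {
	Name string `json:"name"`
}

// printTasks mirrors the caller-side empty check: for human-readable output an
// empty list gets a friendly message on stderr, while JSON output still emits
// valid JSON.
func printTasks(rows []taskRow, asJSON bool) error {
	if len(rows) == 0 && !asJSON {
		fmt.Fprintln(os.Stderr, "No tasks found.")
		return nil
	}
	if asJSON {
		return json.NewEncoder(os.Stdout).Encode(rows)
	}
	for _, r := range rows {
		fmt.Println(r.Name)
	}
	return nil
}

func main() {
	_ = printTasks(nil, false) // prints "No tasks found." to stderr
	_ = printTasks(nil, true)  // prints "null" (or "[]" for an empty non-nil slice)
}
```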
+4 -8
View File
@@ -28,7 +28,7 @@ func (r *RootCmd) taskLogs() *serpent.Command {
Long: FormatExamples(
Example{
Description: "Show logs for a given task.",
Command: "coder task logs task1",
Command: "coder exp task logs task1",
}),
Middleware: serpent.Chain(
serpent.RequireNArgs(1),
@@ -41,15 +41,16 @@ func (r *RootCmd) taskLogs() *serpent.Command {
var (
ctx = inv.Context()
exp = codersdk.NewExperimentalClient(client)
identifier = inv.Args[0]
)
task, err := client.TaskByIdentifier(ctx, identifier)
task, err := exp.TaskByIdentifier(ctx, identifier)
if err != nil {
return xerrors.Errorf("resolve task %q: %w", identifier, err)
}
logs, err := client.TaskLogs(ctx, codersdk.Me, task.ID)
logs, err := exp.TaskLogs(ctx, codersdk.Me, task.ID)
if err != nil {
return xerrors.Errorf("get task logs: %w", err)
}
@@ -59,11 +60,6 @@ func (r *RootCmd) taskLogs() *serpent.Command {
return xerrors.Errorf("format task logs: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No task logs found.")
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, out)
return nil
},
@@ -46,7 +46,7 @@ func Test_TaskLogs(t *testing.T) {
userClient := client // user already has access to their own workspace
var stdout strings.Builder
inv, root := clitest.New(t, "task", "logs", task.Name, "--output", "json")
inv, root := clitest.New(t, "exp", "task", "logs", task.Name, "--output", "json")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -72,7 +72,7 @@ func Test_TaskLogs(t *testing.T) {
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "logs", task.ID.String(), "--output", "json")
inv, root := clitest.New(t, "exp", "task", "logs", task.ID.String(), "--output", "json")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -98,7 +98,7 @@ func Test_TaskLogs(t *testing.T) {
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "logs", task.ID.String())
inv, root := clitest.New(t, "exp", "task", "logs", task.ID.String())
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -121,7 +121,7 @@ func Test_TaskLogs(t *testing.T) {
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
var stdout strings.Builder
inv, root := clitest.New(t, "task", "logs", "doesnotexist")
inv, root := clitest.New(t, "exp", "task", "logs", "doesnotexist")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -139,7 +139,7 @@ func Test_TaskLogs(t *testing.T) {
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
var stdout strings.Builder
inv, root := clitest.New(t, "task", "logs", uuid.Nil.String())
inv, root := clitest.New(t, "exp", "task", "logs", uuid.Nil.String())
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -155,7 +155,7 @@ func Test_TaskLogs(t *testing.T) {
client, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskLogsErr(assert.AnError))
userClient := client
inv, root := clitest.New(t, "task", "logs", task.ID.String())
inv, root := clitest.New(t, "exp", "task", "logs", task.ID.String())
clitest.SetupConfig(t, userClient, root)
err := inv.WithContext(ctx).Run()
+5 -4
View File
@@ -17,10 +17,10 @@ func (r *RootCmd) taskSend() *serpent.Command {
Short: "Send input to a task",
Long: FormatExamples(Example{
Description: "Send direct input to a task.",
Command: "coder task send task1 \"Please also add unit tests\"",
Command: "coder exp task send task1 \"Please also add unit tests\"",
}, Example{
Description: "Send input from stdin to a task.",
Command: "echo \"Please also add unit tests\" | coder task send task1 --stdin",
Command: "echo \"Please also add unit tests\" | coder exp task send task1 --stdin",
}),
Middleware: serpent.RequireRangeArgs(1, 2),
Options: serpent.OptionSet{
@@ -39,6 +39,7 @@ func (r *RootCmd) taskSend() *serpent.Command {
var (
ctx = inv.Context()
exp = codersdk.NewExperimentalClient(client)
identifier = inv.Args[0]
taskInput string
@@ -59,12 +60,12 @@ func (r *RootCmd) taskSend() *serpent.Command {
taskInput = inv.Args[1]
}
task, err := client.TaskByIdentifier(ctx, identifier)
task, err := exp.TaskByIdentifier(ctx, identifier)
if err != nil {
return xerrors.Errorf("resolve task: %w", err)
}
if err = client.TaskSend(ctx, codersdk.Me, task.ID, codersdk.TaskSendRequest{Input: taskInput}); err != nil {
if err = exp.TaskSend(ctx, codersdk.Me, task.ID, codersdk.TaskSendRequest{Input: taskInput}); err != nil {
return xerrors.Errorf("send input to task: %w", err)
}
@@ -30,7 +30,7 @@ func Test_TaskSend(t *testing.T) {
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "carry on with the task")
inv, root := clitest.New(t, "exp", "task", "send", task.Name, "carry on with the task")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -46,7 +46,7 @@ func Test_TaskSend(t *testing.T) {
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.ID.String(), "carry on with the task")
inv, root := clitest.New(t, "exp", "task", "send", task.ID.String(), "carry on with the task")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -62,7 +62,7 @@ func Test_TaskSend(t *testing.T) {
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "--stdin")
inv, root := clitest.New(t, "exp", "task", "send", task.Name, "--stdin")
inv.Stdout = &stdout
inv.Stdin = strings.NewReader("carry on with the task")
clitest.SetupConfig(t, userClient, root)
@@ -80,7 +80,7 @@ func Test_TaskSend(t *testing.T) {
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", "doesnotexist", "some task input")
inv, root := clitest.New(t, "exp", "task", "send", "doesnotexist", "some task input")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -98,7 +98,7 @@ func Test_TaskSend(t *testing.T) {
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", uuid.Nil.String(), "some task input")
inv, root := clitest.New(t, "exp", "task", "send", uuid.Nil.String(), "some task input")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -114,7 +114,7 @@ func Test_TaskSend(t *testing.T) {
userClient, task := setupCLITaskTest(ctx, t, fakeAgentAPITaskSendErr(t, assert.AnError))
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "some task input")
inv, root := clitest.New(t, "exp", "task", "send", task.Name, "some task input")
inv.Stdout = &stdout
clitest.SetupConfig(t, userClient, root)
@@ -47,11 +47,11 @@ func (r *RootCmd) taskStatus() *serpent.Command {
Long: FormatExamples(
Example{
Description: "Show the status of a given task.",
Command: "coder task status task1",
Command: "coder exp task status task1",
},
Example{
Description: "Watch the status of a given task until it completes (idle or stopped).",
Command: "coder task status task1 --watch",
Command: "coder exp task status task1 --watch",
},
),
Use: "status",
@@ -83,9 +83,10 @@ func (r *RootCmd) taskStatus() *serpent.Command {
}
ctx := i.Context()
exp := codersdk.NewExperimentalClient(client)
identifier := i.Args[0]
task, err := client.TaskByIdentifier(ctx, identifier)
task, err := exp.TaskByIdentifier(ctx, identifier)
if err != nil {
return err
}
@@ -106,7 +107,7 @@ func (r *RootCmd) taskStatus() *serpent.Command {
// TODO: implement streaming updates instead of polling
lastStatusRow := tsr
for range t.C {
task, err := client.TaskByID(ctx, task.ID)
task, err := exp.TaskByID(ctx, task.ID)
if err != nil {
return err
}
@@ -36,7 +36,7 @@ func Test_TaskStatus(t *testing.T) {
hf: func(ctx context.Context, _ time.Time) func(w http.ResponseWriter, r *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/v2/tasks/me/doesnotexist":
case "/api/experimental/tasks/me/doesnotexist":
httpapi.ResourceNotFound(w)
return
default:
@@ -52,7 +52,7 @@ func Test_TaskStatus(t *testing.T) {
hf: func(ctx context.Context, now time.Time) func(w http.ResponseWriter, r *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/v2/tasks/me/exists":
case "/api/experimental/tasks/me/exists":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
WorkspaceStatus: codersdk.WorkspaceStatusRunning,
@@ -88,7 +88,7 @@ func Test_TaskStatus(t *testing.T) {
var calls atomic.Int64
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/v2/tasks/me/exists":
case "/api/experimental/tasks/me/exists":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Name: "exists",
@@ -103,7 +103,7 @@ func Test_TaskStatus(t *testing.T) {
Status: codersdk.TaskStatusPending,
})
return
case "/api/v2/tasks/me/11111111-1111-1111-1111-111111111111":
case "/api/experimental/tasks/me/11111111-1111-1111-1111-111111111111":
defer calls.Add(1)
switch calls.Load() {
case 0:
@@ -189,7 +189,6 @@ func Test_TaskStatus(t *testing.T) {
"owner_id": "00000000-0000-0000-0000-000000000000",
"owner_name": "me",
"name": "exists",
"display_name": "Task exists",
"template_id": "00000000-0000-0000-0000-000000000000",
"template_version_id": "00000000-0000-0000-0000-000000000000",
"template_name": "",
@@ -219,12 +218,11 @@ func Test_TaskStatus(t *testing.T) {
ts := time.Date(2025, 8, 26, 12, 34, 56, 0, time.UTC)
return func(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/api/v2/tasks/me/exists":
case "/api/experimental/tasks/me/exists":
httpapi.Write(ctx, w, http.StatusOK, codersdk.Task{
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Name: "exists",
DisplayName: "Task exists",
OwnerName: "me",
ID: uuid.MustParse("11111111-1111-1111-1111-111111111111"),
Name: "exists",
OwnerName: "me",
WorkspaceAgentHealth: &codersdk.WorkspaceAgentHealth{
Healthy: true,
},
@@ -256,7 +254,7 @@ func Test_TaskStatus(t *testing.T) {
srv = httptest.NewServer(http.HandlerFunc(tc.hf(ctx, now)))
client = codersdk.New(testutil.MustURL(t, srv.URL))
sb = strings.Builder{}
args = []string{"task", "status", "--watch-interval", testutil.IntervalFast.String()}
args = []string{"exp", "task", "status", "--watch-interval", testutil.IntervalFast.String()}
)
t.Cleanup(srv.Close)
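The fake servers in these cases only need their routes moved from /api/v2/tasks to /api/experimental/tasks. A rough sketch of such a handler, assuming it sits inside a test and reuses helpers this file already imports (the response body is illustrative, not the exact fixture above):

srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	switch r.URL.Path {
	case "/api/experimental/tasks/me/exists":
		// Task lookups are now served under the experimental prefix.
		httpapi.Write(r.Context(), w, http.StatusOK, codersdk.Task{
			ID:   uuid.MustParse("11111111-1111-1111-1111-111111111111"),
			Name: "exists",
		})
	default:
		httpapi.ResourceNotFound(w)
	}
}))
t.Cleanup(srv.Close)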
+10 -8
@@ -60,14 +60,14 @@ func Test_Tasks(t *testing.T) {
}{
{
name: "create task",
cmdArgs: []string{"task", "create", "test task input for " + t.Name(), "--name", taskName, "--template", taskTpl.Name},
cmdArgs: []string{"exp", "task", "create", "test task input for " + t.Name(), "--name", taskName, "--template", taskTpl.Name},
assertFn: func(stdout string, userClient *codersdk.Client) {
require.Contains(t, stdout, taskName, "task name should be in output")
},
},
{
name: "list tasks after create",
cmdArgs: []string{"task", "list", "--output", "json"},
cmdArgs: []string{"exp", "task", "list", "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var tasks []codersdk.Task
err := json.NewDecoder(strings.NewReader(stdout)).Decode(&tasks)
@@ -88,7 +88,7 @@ func Test_Tasks(t *testing.T) {
},
{
name: "get task status after create",
cmdArgs: []string{"task", "status", taskName, "--output", "json"},
cmdArgs: []string{"exp", "task", "status", taskName, "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var task codersdk.Task
require.NoError(t, json.NewDecoder(strings.NewReader(stdout)).Decode(&task), "should unmarshal task status")
@@ -98,12 +98,12 @@ func Test_Tasks(t *testing.T) {
},
{
name: "send task message",
cmdArgs: []string{"task", "send", taskName, "hello"},
cmdArgs: []string{"exp", "task", "send", taskName, "hello"},
// Assertions for this happen in the fake agent API handler.
},
{
name: "read task logs",
cmdArgs: []string{"task", "logs", taskName, "--output", "json"},
cmdArgs: []string{"exp", "task", "logs", taskName, "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var logs []codersdk.TaskLogEntry
require.NoError(t, json.NewDecoder(strings.NewReader(stdout)).Decode(&logs), "should unmarshal task logs")
@@ -118,11 +118,12 @@ func Test_Tasks(t *testing.T) {
},
{
name: "delete task",
cmdArgs: []string{"task", "delete", taskName, "--yes"},
cmdArgs: []string{"exp", "task", "delete", taskName, "--yes"},
assertFn: func(stdout string, userClient *codersdk.Client) {
// The task should eventually no longer show up in the list of tasks
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
tasks, err := userClient.Tasks(ctx, &codersdk.TasksFilter{})
expClient := codersdk.NewExperimentalClient(userClient)
tasks, err := expClient.Tasks(ctx, &codersdk.TasksFilter{})
if !assert.NoError(t, err) {
return false
}
@@ -247,7 +248,8 @@ func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[st
template := createAITaskTemplate(t, client, owner.OrganizationID, withSidebarURL(fakeAPI.URL()), withAgentToken(authToken))
wantPrompt := "test prompt"
task, err := userClient.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
exp := codersdk.NewExperimentalClient(userClient)
task, err := exp.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
TemplateVersionID: template.ActiveVersionID,
Input: wantPrompt,
Name: "test-task",
+121 -192
@@ -2,85 +2,62 @@ package cli_test
import (
"bytes"
"crypto/rand"
"encoding/binary"
"fmt"
"net/url"
"os"
"path"
"runtime"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/cli/config"
"github.com/coder/coder/v2/cli/sessionstore"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/serpent"
)
// keyringTestServiceName generates a unique service name for keyring tests
// using the test name and a nanosecond timestamp to prevent collisions.
func keyringTestServiceName(t *testing.T) string {
t.Helper()
var n uint32
err := binary.Read(rand.Reader, binary.BigEndian, &n)
if err != nil {
t.Fatal(err)
// mockKeyring is a mock sessionstore.Backend implementation.
type mockKeyring struct {
credentials map[string]string // service name -> credential
}
const mockServiceName = "mock-service-name"
func newMockKeyring() *mockKeyring {
return &mockKeyring{credentials: make(map[string]string)}
}
func (m *mockKeyring) Read(_ *url.URL) (string, error) {
cred, ok := m.credentials[mockServiceName]
if !ok {
return "", os.ErrNotExist
}
return fmt.Sprintf("%s_%v_%d", t.Name(), time.Now().UnixNano(), n)
return cred, nil
}
type keyringTestEnv struct {
serviceName string
keyring sessionstore.Keyring
inv *serpent.Invocation
cfg config.Root
clientURL *url.URL
func (m *mockKeyring) Write(_ *url.URL, token string) error {
m.credentials[mockServiceName] = token
return nil
}
func setupKeyringTestEnv(t *testing.T, clientURL string, args ...string) keyringTestEnv {
t.Helper()
var root cli.RootCmd
cmd, err := root.Command(root.AGPL())
require.NoError(t, err)
serviceName := keyringTestServiceName(t)
root.WithKeyringServiceName(serviceName)
root.UseKeyringWithGlobalConfig()
inv, cfg := clitest.NewWithDefaultKeyringCommand(t, cmd, args...)
parsedURL, err := url.Parse(clientURL)
require.NoError(t, err)
backend := sessionstore.NewKeyringWithService(serviceName)
t.Cleanup(func() {
_ = backend.Delete(parsedURL)
})
return keyringTestEnv{serviceName, backend, inv, cfg, parsedURL}
func (m *mockKeyring) Delete(_ *url.URL) error {
_, ok := m.credentials[mockServiceName]
if !ok {
return os.ErrNotExist
}
delete(m.credentials, mockServiceName)
return nil
}
func TestUseKeyring(t *testing.T) {
// Verify that the --use-keyring flag default opts into using a keyring backend
// for storing session tokens instead of plain text files.
// Verify that the --use-keyring flag opts into using a keyring backend for
// storing session tokens instead of plain text files.
t.Parallel()
t.Run("Login", func(t *testing.T) {
t.Parallel()
if runtime.GOOS != "windows" && runtime.GOOS != "darwin" {
t.Skip("keyring is not supported on this OS")
}
// Create a test server
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
@@ -88,16 +65,25 @@ func TestUseKeyring(t *testing.T) {
// Create a pty for interactive prompts
pty := ptytest.New(t)
// Create CLI invocation which defaults to using the keyring
env := setupKeyringTestEnv(t, client.URL.String(),
// Create CLI invocation with --use-keyring flag
inv, cfg := clitest.New(t,
"login",
"--force-tty",
"--use-keyring",
"--no-open",
client.URL.String())
inv := env.inv
client.URL.String(),
)
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
// Inject the mock backend before running the command
var root cli.RootCmd
cmd, err := root.Command(root.AGPL())
require.NoError(t, err)
mockBackend := newMockKeyring()
root.WithSessionStorageBackend(mockBackend)
inv.Command = cmd
// Run login in background
doneChan := make(chan struct{})
go func() {
@@ -113,23 +99,19 @@ func TestUseKeyring(t *testing.T) {
<-doneChan
// Verify that session file was NOT created (using keyring instead)
sessionFile := path.Join(string(env.cfg), "session")
_, err := os.Stat(sessionFile)
sessionFile := path.Join(string(cfg), "session")
_, err = os.Stat(sessionFile)
require.True(t, os.IsNotExist(err), "session file should not exist when using keyring")
// Verify that the credential IS stored in OS keyring
cred, err := env.keyring.Read(env.clientURL)
require.NoError(t, err, "credential should be stored in OS keyring")
// Verify that the credential IS stored in mock keyring
cred, err := mockBackend.Read(nil)
require.NoError(t, err, "credential should be stored in mock keyring")
require.Equal(t, client.SessionToken(), cred, "stored token should match login token")
})
t.Run("Logout", func(t *testing.T) {
t.Parallel()
if runtime.GOOS != "windows" && runtime.GOOS != "darwin" {
t.Skip("keyring is not supported on this OS")
}
// Create a test server
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
@@ -137,17 +119,25 @@ func TestUseKeyring(t *testing.T) {
// Create a pty for interactive prompts
pty := ptytest.New(t)
// First, login with the keyring (default)
env := setupKeyringTestEnv(t, client.URL.String(),
// First, login with --use-keyring
loginInv, cfg := clitest.New(t,
"login",
"--force-tty",
"--use-keyring",
"--no-open",
client.URL.String(),
)
loginInv := env.inv
loginInv.Stdin = pty.Input()
loginInv.Stdout = pty.Output()
// Inject the mock backend
var loginRoot cli.RootCmd
loginCmd, err := loginRoot.Command(loginRoot.AGPL())
require.NoError(t, err)
mockBackend := newMockKeyring()
loginRoot.WithSessionStorageBackend(mockBackend)
loginInv.Command = loginCmd
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
@@ -160,23 +150,25 @@ func TestUseKeyring(t *testing.T) {
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify credential exists in OS keyring
cred, err := env.keyring.Read(env.clientURL)
// Verify credential exists in mock keyring
cred, err := mockBackend.Read(nil)
require.NoError(t, err, "read credential should succeed before logout")
require.NotEmpty(t, cred, "credential should exist before logout")
require.NotEmpty(t, cred, "credential should exist after logout")
// Now logout using the same keyring service name
// Now run logout with --use-keyring
logoutInv, _ := clitest.New(t,
"logout",
"--use-keyring",
"--yes",
"--global-config", string(cfg),
)
// Inject the same mock backend
var logoutRoot cli.RootCmd
logoutCmd, err := logoutRoot.Command(logoutRoot.AGPL())
require.NoError(t, err)
logoutRoot.WithKeyringServiceName(env.serviceName)
logoutRoot.UseKeyringWithGlobalConfig()
logoutInv, _ := clitest.NewWithDefaultKeyringCommand(t, logoutCmd,
"logout",
"--yes",
"--global-config", string(env.cfg),
)
logoutRoot.WithSessionStorageBackend(mockBackend)
logoutInv.Command = logoutCmd
var logoutOut bytes.Buffer
logoutInv.Stdout = &logoutOut
@@ -184,18 +176,14 @@ func TestUseKeyring(t *testing.T) {
err = logoutInv.Run()
require.NoError(t, err, "logout should succeed")
// Verify the credential was deleted from OS keyring
_, err = env.keyring.Read(env.clientURL)
// Verify the credential was deleted from mock keyring
_, err = mockBackend.Read(nil)
require.ErrorIs(t, err, os.ErrNotExist, "credential should be deleted from keyring after logout")
})
t.Run("DefaultFileStorage", func(t *testing.T) {
t.Run("OmitFlag", func(t *testing.T) {
t.Parallel()
if runtime.GOOS != "linux" {
t.Skip("file storage is the default for Linux")
}
// Create a test server
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
@@ -203,13 +191,13 @@ func TestUseKeyring(t *testing.T) {
// Create a pty for interactive prompts
pty := ptytest.New(t)
env := setupKeyringTestEnv(t, client.URL.String(),
// --use-keyring flag omitted (should use file-based storage)
inv, cfg := clitest.New(t,
"login",
"--force-tty",
"--no-open",
client.URL.String(),
)
inv := env.inv
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
@@ -226,9 +214,9 @@ func TestUseKeyring(t *testing.T) {
<-doneChan
// Verify that session file WAS created (not using keyring)
sessionFile := path.Join(string(env.cfg), "session")
sessionFile := path.Join(string(cfg), "session")
_, err := os.Stat(sessionFile)
require.NoError(t, err, "session file should exist when NOT using --use-keyring on Linux")
require.NoError(t, err, "session file should exist when NOT using --use-keyring")
// Read and verify the token from file
content, err := os.ReadFile(sessionFile)
@@ -246,18 +234,24 @@ func TestUseKeyring(t *testing.T) {
// Create a pty for interactive prompts
pty := ptytest.New(t)
// Login using CODER_USE_KEYRING environment variable set to disable keyring usage,
// which should have the same behavior on all platforms.
env := setupKeyringTestEnv(t, client.URL.String(),
// Login using CODER_USE_KEYRING environment variable instead of flag
inv, cfg := clitest.New(t,
"login",
"--force-tty",
"--no-open",
client.URL.String(),
)
inv := env.inv
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
inv.Environ.Set("CODER_USE_KEYRING", "false")
inv.Environ.Set("CODER_USE_KEYRING", "true")
// Inject the mock backend
var root cli.RootCmd
cmd, err := root.Command(root.AGPL())
require.NoError(t, err)
mockBackend := newMockKeyring()
root.WithSessionStorageBackend(mockBackend)
inv.Command = cmd
doneChan := make(chan struct{})
go func() {
@@ -271,64 +265,21 @@ func TestUseKeyring(t *testing.T) {
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify that session file WAS created (not using keyring)
sessionFile := path.Join(string(env.cfg), "session")
_, err := os.Stat(sessionFile)
require.NoError(t, err, "session file should exist when CODER_USE_KEYRING set to false")
// Verify that session file was NOT created (using keyring via env var)
sessionFile := path.Join(string(cfg), "session")
_, err = os.Stat(sessionFile)
require.True(t, os.IsNotExist(err), "session file should not exist when using keyring via env var")
// Read and verify the token from file
content, err := os.ReadFile(sessionFile)
require.NoError(t, err, "should be able to read session file")
require.Equal(t, client.SessionToken(), string(content), "file should contain the session token")
})
t.Run("DisableKeyringWithFlag", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
pty := ptytest.New(t)
// Login with --use-keyring=false to explicitly disable keyring usage, which
// should have the same behavior on all platforms.
env := setupKeyringTestEnv(t, client.URL.String(),
"login",
"--use-keyring=false",
"--force-tty",
"--no-open",
client.URL.String(),
)
inv := env.inv
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
pty.ExpectMatch("Paste your token here:")
pty.WriteLine(client.SessionToken())
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify that session file WAS created (not using keyring)
sessionFile := path.Join(string(env.cfg), "session")
_, err := os.Stat(sessionFile)
require.NoError(t, err, "session file should exist when --use-keyring=false is specified")
// Read and verify the token from file
content, err := os.ReadFile(sessionFile)
require.NoError(t, err, "should be able to read session file")
require.Equal(t, client.SessionToken(), string(content), "file should contain the session token")
// Verify credential is in mock keyring
cred, err := mockBackend.Read(nil)
require.NoError(t, err, "credential should be stored in keyring when CODER_USE_KEYRING=true")
require.NotEmpty(t, cred)
})
}
func TestUseKeyringUnsupportedOS(t *testing.T) {
// Verify that on unsupported operating systems, file-based storage is used
// automatically even when --use-keyring is set to true (the default).
// Verify that trying to use --use-keyring on an unsupported operating system produces
// a helpful error message.
t.Parallel()
// Only run this on an unsupported OS.
@@ -336,60 +287,43 @@ func TestUseKeyringUnsupportedOS(t *testing.T) {
t.Skipf("Skipping unsupported OS test on %s where keyring is supported", runtime.GOOS)
}
t.Run("LoginWithDefaultKeyring", func(t *testing.T) {
const expMessage = "keyring storage is not supported on this operating system; remove the --use-keyring flag"
t.Run("LoginWithUnsupportedKeyring", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
pty := ptytest.New(t)
env := setupKeyringTestEnv(t, client.URL.String(),
// Try to login with --use-keyring on an unsupported OS
inv, _ := clitest.New(t,
"login",
"--force-tty",
"--no-open",
"--use-keyring",
client.URL.String(),
)
inv := env.inv
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
doneChan := make(chan struct{})
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
// The error should occur immediately, before any prompts
loginErr := inv.Run()
pty.ExpectMatch("Paste your token here:")
pty.WriteLine(client.SessionToken())
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify that session file WAS created (automatic fallback to file storage)
sessionFile := path.Join(string(env.cfg), "session")
_, err := os.Stat(sessionFile)
require.NoError(t, err, "session file should exist due to automatic fallback to file storage")
content, err := os.ReadFile(sessionFile)
require.NoError(t, err, "should be able to read session file")
require.Equal(t, client.SessionToken(), string(content), "file should contain the session token")
// Verify we got an error about unsupported OS
require.Error(t, loginErr)
require.Contains(t, loginErr.Error(), expMessage)
})
t.Run("LogoutWithDefaultKeyring", func(t *testing.T) {
t.Run("LogoutWithUnsupportedKeyring", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
pty := ptytest.New(t)
// First login to create a session (will use file storage due to automatic fallback)
env := setupKeyringTestEnv(t, client.URL.String(),
// First login without keyring to create a session
loginInv, cfg := clitest.New(t,
"login",
"--force-tty",
"--no-open",
client.URL.String(),
)
loginInv := env.inv
loginInv.Stdin = pty.Input()
loginInv.Stdout = pty.Output()
@@ -405,22 +339,17 @@ func TestUseKeyringUnsupportedOS(t *testing.T) {
pty.ExpectMatch("Welcome to Coder")
<-doneChan
// Verify session file exists
sessionFile := path.Join(string(env.cfg), "session")
_, err := os.Stat(sessionFile)
require.NoError(t, err, "session file should exist before logout")
// Now logout - should succeed and delete the file
logoutEnv := setupKeyringTestEnv(t, client.URL.String(),
// Now try to logout with --use-keyring on an unsupported OS
logoutInv, _ := clitest.New(t,
"logout",
"--use-keyring",
"--yes",
"--global-config", string(env.cfg),
"--global-config", string(cfg),
)
err = logoutEnv.inv.Run()
require.NoError(t, err, "logout should succeed with automatic file storage fallback")
_, err = os.Stat(sessionFile)
require.True(t, os.IsNotExist(err), "session file should be deleted after logout")
err := logoutInv.Run()
// Verify we got an error about unsupported OS
require.Error(t, err)
require.Contains(t, err.Error(), expMessage)
})
}
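Condensed, the setup these tests now repeat is: build the CLI invocation, construct a fresh root command, and swap its session storage for the in-memory mock before running, so no real OS keyring is touched. A rough sketch, assuming it lives in this test file and can reuse mockKeyring plus the existing imports (buildKeyringLoginInvocation is a hypothetical helper, not part of the change):

func buildKeyringLoginInvocation(t *testing.T, serverURL string) (*serpent.Invocation, *mockKeyring) {
	t.Helper()

	inv, _ := clitest.New(t,
		"login",
		"--force-tty",
		"--use-keyring",
		"--no-open",
		serverURL,
	)

	// Build a fresh root command and inject the mock backend before running.
	var root cli.RootCmd
	cmd, err := root.Command(root.AGPL())
	require.NoError(t, err)

	backend := newMockKeyring()
	root.WithSessionStorageBackend(backend)
	inv.Command = cmd

	return inv, backend
}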
+6 -6
@@ -139,12 +139,7 @@ func (r *RootCmd) list() *serpent.Command {
return err
}
out, err := formatter.Format(inv.Context(), res)
if err != nil {
return err
}
if out == "" {
if len(res) == 0 && formatter.FormatID() != cliui.JSONFormat().ID() {
pretty.Fprintf(inv.Stderr, cliui.DefaultStyles.Prompt, "No workspaces found! Create one:\n")
_, _ = fmt.Fprintln(inv.Stderr)
_, _ = fmt.Fprintln(inv.Stderr, " "+pretty.Sprint(cliui.DefaultStyles.Code, "coder create <name>"))
@@ -152,6 +147,11 @@ func (r *RootCmd) list() *serpent.Command {
return nil
}
out, err := formatter.Format(inv.Context(), res)
if err != nil {
return err
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
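The same empty-result handling recurs in the provisioner and template list commands further down: length-check the rows before formatting so table users get a human-readable hint while JSON output stays machine-readable. A hypothetical helper capturing that shape (printList is not part of this change; only the calls shown in the diff are assumed to exist):

package cli

import (
	"fmt"

	"github.com/coder/coder/v2/cli/cliui"
	"github.com/coder/serpent"
)

// printList skips formatting and prints a hint when there are no rows and the
// selected output format is not JSON; otherwise it formats and prints the rows.
func printList[T any](inv *serpent.Invocation, formatter *cliui.OutputFormatter, rows []T, emptyHint string) error {
	if len(rows) == 0 && formatter.FormatID() != cliui.JSONFormat().ID() {
		_, _ = fmt.Fprintln(inv.Stderr, emptyHint)
		return nil
	}

	out, err := formatter.Format(inv.Context(), rows)
	if err != nil {
		return err
	}

	_, err = fmt.Fprintln(inv.Stdout, out)
	return err
}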
+3 -3
@@ -154,9 +154,9 @@ func (r *RootCmd) login() *serpent.Command {
cmd := &serpent.Command{
Use: "login [<url>]",
Short: "Authenticate with Coder deployment",
Long: "By default, the session token is stored in the operating system keyring on " +
"macOS and Windows and a plain text file on Linux. Use the --use-keyring flag " +
"or CODER_USE_KEYRING environment variable to change the storage mechanism.",
Long: "By default, the session token is stored in a plain text file. Use the " +
"--use-keyring flag or set CODER_USE_KEYRING=true to store the token in " +
"the operating system keyring instead.",
Middleware: serpent.RequireRangeArgs(0, 1),
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
-5
@@ -170,11 +170,6 @@ func (r *RootCmd) listOrganizationMembers(orgContext *OrganizationContext) *serp
return err
}
if out == "" {
cliui.Infof(inv.Stderr, "No organization members found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
-5
@@ -92,11 +92,6 @@ func (r *RootCmd) showOrganizationRoles(orgContext *OrganizationContext) *serpen
return err
}
if out == "" {
cliui.Infof(inv.Stderr, "No organization roles found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
-5
@@ -110,11 +110,6 @@ func (r *RootCmd) provisionerJobsList() *serpent.Command {
return xerrors.Errorf("display provisioner daemons: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No provisioner jobs found.")
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, out)
return nil
+5 -5
@@ -74,6 +74,11 @@ func (r *RootCmd) provisionerList() *serpent.Command {
return xerrors.Errorf("list provisioner daemons: %w", err)
}
if len(daemons) == 0 {
_, _ = fmt.Fprintln(inv.Stdout, "No provisioner daemons found")
return nil
}
var rows []provisionerDaemonRow
for _, daemon := range daemons {
rows = append(rows, provisionerDaemonRow{
@@ -87,11 +92,6 @@ func (r *RootCmd) provisionerList() *serpent.Command {
return xerrors.Errorf("display provisioner daemons: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No provisioner daemons found.")
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, out)
return nil
+14 -40
@@ -56,7 +56,7 @@ var (
// anything.
ErrSilent = xerrors.New("silent error")
errKeyringNotSupported = xerrors.New("keyring storage is not supported on this operating system; omit --use-keyring to use file-based storage")
errKeyringNotSupported = xerrors.New("keyring storage is not supported on this operating system; remove the --use-keyring flag to use file-based storage")
)
const (
@@ -104,7 +104,6 @@ func (r *RootCmd) CoreSubcommands() []*serpent.Command {
r.resetPassword(),
r.sharing(),
r.state(),
r.tasksCommand(),
r.templates(),
r.tokens(),
r.users(),
@@ -150,7 +149,7 @@ func (r *RootCmd) AGPLExperimental() []*serpent.Command {
r.mcpCommand(),
r.promptExample(),
r.rptyCommand(),
r.syncCommand(),
r.tasksCommand(),
r.boundary(),
}
}
@@ -484,12 +483,10 @@ func (r *RootCmd) Command(subcommands []*serpent.Command) (*serpent.Command, err
Flag: varUseKeyring,
Env: envUseKeyring,
Description: "Store and retrieve session tokens using the operating system " +
"keyring. This flag is ignored and file-based storage is used when " +
"--global-config is set or keyring usage is not supported on the current " +
"platform. Set to false to force file-based storage on supported platforms.",
Default: "true",
Value: serpent.BoolOf(&r.useKeyring),
Group: globalGroup,
"keyring. Currently only supported on Windows. By default, tokens are " +
"stored in plain text files.",
Value: serpent.BoolOf(&r.useKeyring),
Group: globalGroup,
},
{
Flag: "debug-http",
@@ -537,12 +534,10 @@ type RootCmd struct {
disableDirect bool
debugHTTP bool
disableNetworkTelemetry bool
noVersionCheck bool
noFeatureWarning bool
useKeyring bool
keyringServiceName string
useKeyringWithGlobalConfig bool
disableNetworkTelemetry bool
noVersionCheck bool
noFeatureWarning bool
useKeyring bool
}
// InitClient creates and configures a new client with authentication, telemetry,
@@ -723,19 +718,8 @@ func (r *RootCmd) createUnauthenticatedClient(ctx context.Context, serverURL *ur
// flag.
func (r *RootCmd) ensureTokenBackend() sessionstore.Backend {
if r.tokenBackend == nil {
// Checking for the --global-config directory being set is a bit wonky but necessary
// to allow extensions that invoke the CLI with this flag (e.g. VS code) to continue
// working without modification. In the future we should modify these extensions to
// either access the credential in the keyring (like Coder Desktop) or some other
// approach that doesn't rely on the session token being stored on disk.
assumeExtensionInUse := r.globalConfig != config.DefaultDir() && !r.useKeyringWithGlobalConfig
keyringSupported := runtime.GOOS == "windows" || runtime.GOOS == "darwin"
if r.useKeyring && !assumeExtensionInUse && keyringSupported {
serviceName := sessionstore.DefaultServiceName
if r.keyringServiceName != "" {
serviceName = r.keyringServiceName
}
r.tokenBackend = sessionstore.NewKeyringWithService(serviceName)
if r.useKeyring {
r.tokenBackend = sessionstore.NewKeyring()
} else {
r.tokenBackend = sessionstore.NewFile(r.createConfig)
}
@@ -743,18 +727,8 @@ func (r *RootCmd) ensureTokenBackend() sessionstore.Backend {
return r.tokenBackend
}
// WithKeyringServiceName sets a custom keyring service name for testing purposes.
// This allows tests to use isolated keyring storage while still exercising the
// genuine storage backend selection logic in ensureTokenBackend().
func (r *RootCmd) WithKeyringServiceName(serviceName string) {
r.keyringServiceName = serviceName
}
// UseKeyringWithGlobalConfig enables the use of the keyring storage backend
// when the --global-config directory is set. This is only intended as an override
// for tests, which require specifying the global config directory for test isolation.
func (r *RootCmd) UseKeyringWithGlobalConfig() {
r.useKeyringWithGlobalConfig = true
func (r *RootCmd) WithSessionStorageBackend(backend sessionstore.Backend) {
r.tokenBackend = backend
}
type AgentAuth struct {
-25
@@ -72,31 +72,6 @@ func TestCommandHelp(t *testing.T) {
Name: "coder provisioner jobs list --output json",
Cmd: []string{"provisioner", "jobs", "list", "--output", "json"},
},
// TODO (SasSwart): Remove these once the sync commands are promoted out of experimental.
clitest.CommandHelpCase{
Name: "coder exp sync --help",
Cmd: []string{"exp", "sync", "--help"},
},
clitest.CommandHelpCase{
Name: "coder exp sync ping --help",
Cmd: []string{"exp", "sync", "ping", "--help"},
},
clitest.CommandHelpCase{
Name: "coder exp sync start --help",
Cmd: []string{"exp", "sync", "start", "--help"},
},
clitest.CommandHelpCase{
Name: "coder exp sync want --help",
Cmd: []string{"exp", "sync", "want", "--help"},
},
clitest.CommandHelpCase{
Name: "coder exp sync complete --help",
Cmd: []string{"exp", "sync", "complete", "--help"},
},
clitest.CommandHelpCase{
Name: "coder exp sync status --help",
Cmd: []string{"exp", "sync", "status", "--help"},
},
))
}
-5
@@ -129,11 +129,6 @@ func (r *RootCmd) scheduleShow() *serpent.Command {
return err
}
if out == "" {
cliui.Infof(inv.Stderr, "No schedules found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
+12 -19
@@ -1029,7 +1029,7 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
defer shutdownConns()
// Ensures that old database entries are cleaned up over time!
purger := dbpurge.New(ctx, logger.Named("dbpurge"), options.Database, options.DeploymentValues, quartz.NewReal())
purger := dbpurge.New(ctx, logger.Named("dbpurge"), options.Database, quartz.NewReal())
defer purger.Close()
// Updates workspace usage
@@ -2143,33 +2143,21 @@ func startBuiltinPostgres(ctx context.Context, cfg config.Root, logger slog.Logg
}
stdlibLogger := slog.Stdlib(ctx, logger.Named("postgres"), slog.LevelDebug)
// If the port is not defined, an available port will be found dynamically. This has
// implications in CI because here is no way to tell Postgres to use an ephemeral
// port, so to avoid flaky tests in CI we need to retry EmbeddedPostgres.Start in
// case of a race condition where the port we quickly listen on and close in
// embeddedPostgresURL() is not free by the time the embedded postgres starts up.
// The maximum retry attempts _should_ cover most cases where port conflicts occur
// in CI and cause flaky tests.
// If the port is not defined, an available port will be found dynamically.
maxAttempts := 1
_, err = cfg.PostgresPort().Read()
// Important: if retryPortDiscovery is changed to not include testing.Testing(),
// the retry logic below also needs to be updated to ensure we don't delete an
// existing database
retryPortDiscovery := errors.Is(err, os.ErrNotExist) && testing.Testing()
if retryPortDiscovery {
// There is no way to tell Postgres to use an ephemeral port, so in order to avoid
// flaky tests in CI we need to retry EmbeddedPostgres.Start in case of a race
// condition where the port we quickly listen on and close in embeddedPostgresURL()
// is not free by the time the embedded postgres starts up. This maximum _should_
// cover most cases where port conflicts occur in CI and cause flaky tests.
maxAttempts = 3
}
var startErr error
for attempt := 0; attempt < maxAttempts; attempt++ {
if retryPortDiscovery && attempt > 0 {
// Clean up the data and runtime directories and the port file from the
// previous failed attempt to ensure a clean slate for the next attempt.
_ = os.RemoveAll(filepath.Join(cfg.PostgresPath(), "data"))
_ = os.RemoveAll(filepath.Join(cfg.PostgresPath(), "runtime"))
_ = cfg.PostgresPort().Delete()
}
// Ensure a password and port have been generated.
connectionURL, err := embeddedPostgresURL(cfg)
if err != nil {
@@ -2216,6 +2204,11 @@ func startBuiltinPostgres(ctx context.Context, cfg config.Root, logger slog.Logg
slog.F("port", pgPort),
slog.Error(startErr),
)
if retryPortDiscovery {
// Since a retry is needed, we wipe the port stored here at the beginning of the loop.
_ = cfg.PostgresPort().Delete()
}
}
return "", nil, xerrors.Errorf("failed to start built-in PostgreSQL after %d attempts. "+
+12 -4
@@ -47,9 +47,9 @@ var (
)
const (
// DefaultServiceName is the service name used in keyrings for storing Coder CLI session
// defaultServiceName is the service name used in keyrings for storing Coder CLI session
// tokens.
DefaultServiceName = "coder-v2-credentials"
defaultServiceName = "coder-v2-credentials"
)
// keyringProvider represents an operating system keyring. The expectation
@@ -108,9 +108,17 @@ type Keyring struct {
serviceName string
}
// NewKeyring creates a Keyring with the default service name for production use.
func NewKeyring() Keyring {
return Keyring{
provider: operatingSystemKeyring{},
serviceName: defaultServiceName,
}
}
// NewKeyringWithService creates a Keyring Backend that stores credentials under the
// specified service name. Generally, DefaultServiceName should be provided as the service
// name except in tests which may need parameterization to avoid conflicting keyring use.
// specified service name. This is primarily intended for testing to avoid conflicts
// with production credentials and collisions between tests.
func NewKeyringWithService(serviceName string) Keyring {
return Keyring{
provider: operatingSystemKeyring{},
+3 -3
@@ -2052,6 +2052,7 @@ func TestSSH_Container(t *testing.T) {
t.Parallel()
client, workspace, agentToken := setupWorkspaceForAgent(t)
ctx := testutil.Context(t, testutil.WaitLong)
pool, err := dockertest.NewPool("")
require.NoError(t, err, "Could not connect to docker")
ct, err := pool.RunWithOptions(&dockertest.RunOptions{
@@ -2086,15 +2087,14 @@ func TestSSH_Container(t *testing.T) {
clitest.SetupConfig(t, client, root)
ptty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitLong)
cmdDone := tGo(t, func() {
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
})
ptty.ExpectMatchContext(ctx, " #")
ptty.ExpectMatch(" #")
ptty.WriteLine("hostname")
ptty.ExpectMatchContext(ctx, ct.Container.Config.Hostname)
ptty.ExpectMatch(ct.Container.Config.Hostname)
ptty.WriteLine("exit")
<-cmdDone
})
-35
@@ -1,35 +0,0 @@
package cli
import (
"github.com/coder/serpent"
)
func (r *RootCmd) syncCommand() *serpent.Command {
var socketPath string
cmd := &serpent.Command{
Use: "sync",
Short: "Manage unit dependencies for coordinated startup",
Long: "Commands for orchestrating unit startup order in workspaces. Units are most commonly coder scripts. Use these commands to declare dependencies between units, coordinate their startup sequence, and ensure units start only after their dependencies are ready. This helps prevent race conditions and startup failures.",
Handler: func(i *serpent.Invocation) error {
return i.Command.HelpHandler(i)
},
Children: []*serpent.Command{
r.syncPing(&socketPath),
r.syncStart(&socketPath),
r.syncWant(&socketPath),
r.syncComplete(&socketPath),
r.syncStatus(&socketPath),
},
Options: serpent.OptionSet{
{
Flag: "socket-path",
Env: "CODER_AGENT_SOCKET_PATH",
Description: "Specify the path for the agent socket.",
Value: serpent.StringOf(&socketPath),
},
},
}
return cmd
}
-47
@@ -1,47 +0,0 @@
package cli
import (
"golang.org/x/xerrors"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/serpent"
)
func (*RootCmd) syncComplete(socketPath *string) *serpent.Command {
cmd := &serpent.Command{
Use: "complete <unit>",
Short: "Mark a unit as complete",
Long: "Mark a unit as complete. Indicating to other units that it has completed its work. This allows units that depend on it to proceed with their startup.",
Handler: func(i *serpent.Invocation) error {
ctx := i.Context()
if len(i.Args) != 1 {
return xerrors.New("exactly one unit name is required")
}
unit := unit.ID(i.Args[0])
opts := []agentsocket.Option{}
if *socketPath != "" {
opts = append(opts, agentsocket.WithPath(*socketPath))
}
client, err := agentsocket.NewClient(ctx, opts...)
if err != nil {
return xerrors.Errorf("connect to agent socket: %w", err)
}
defer client.Close()
if err := client.SyncComplete(ctx, unit); err != nil {
return xerrors.Errorf("complete unit failed: %w", err)
}
cliui.Info(i.Stdout, "Success")
return nil
},
}
return cmd
}
-42
@@ -1,42 +0,0 @@
package cli
import (
"golang.org/x/xerrors"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/serpent"
)
func (*RootCmd) syncPing(socketPath *string) *serpent.Command {
cmd := &serpent.Command{
Use: "ping",
Short: "Test agent socket connectivity and health",
Long: "Test connectivity to the local Coder agent socket to verify the agent is running and responsive. Useful for troubleshooting startup issues or verifying the agent is accessible before running other sync commands.",
Handler: func(i *serpent.Invocation) error {
ctx := i.Context()
opts := []agentsocket.Option{}
if *socketPath != "" {
opts = append(opts, agentsocket.WithPath(*socketPath))
}
client, err := agentsocket.NewClient(ctx, opts...)
if err != nil {
return xerrors.Errorf("connect to agent socket: %w", err)
}
defer client.Close()
err = client.Ping(ctx)
if err != nil {
return xerrors.Errorf("ping failed: %w", err)
}
cliui.Info(i.Stdout, "Success")
return nil
},
}
return cmd
}
-101
@@ -1,101 +0,0 @@
package cli
import (
"context"
"time"
"golang.org/x/xerrors"
"github.com/coder/serpent"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/cli/cliui"
)
const (
syncPollInterval = 1 * time.Second
)
func (*RootCmd) syncStart(socketPath *string) *serpent.Command {
var timeout time.Duration
cmd := &serpent.Command{
Use: "start <unit>",
Short: "Wait until all unit dependencies are satisfied",
Long: "Wait until all dependencies are satisfied, consider the unit to have started, then allow it to proceed. This command polls until dependencies are ready, then marks the unit as started.",
Handler: func(i *serpent.Invocation) error {
ctx := i.Context()
if len(i.Args) != 1 {
return xerrors.New("exactly one unit name is required")
}
unitName := unit.ID(i.Args[0])
if timeout > 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, timeout)
defer cancel()
}
opts := []agentsocket.Option{}
if *socketPath != "" {
opts = append(opts, agentsocket.WithPath(*socketPath))
}
client, err := agentsocket.NewClient(ctx, opts...)
if err != nil {
return xerrors.Errorf("connect to agent socket: %w", err)
}
defer client.Close()
ready, err := client.SyncReady(ctx, unitName)
if err != nil {
return xerrors.Errorf("error checking dependencies: %w", err)
}
if !ready {
cliui.Infof(i.Stdout, "Waiting for dependencies of unit '%s' to be satisfied...", unitName)
ticker := time.NewTicker(syncPollInterval)
defer ticker.Stop()
pollLoop:
for {
select {
case <-ctx.Done():
if ctx.Err() == context.DeadlineExceeded {
return xerrors.Errorf("timeout waiting for dependencies of unit '%s'", unitName)
}
return ctx.Err()
case <-ticker.C:
ready, err := client.SyncReady(ctx, unitName)
if err != nil {
return xerrors.Errorf("error checking dependencies: %w", err)
}
if ready {
break pollLoop
}
}
}
}
if err := client.SyncStart(ctx, unitName); err != nil {
return xerrors.Errorf("start unit failed: %w", err)
}
cliui.Info(i.Stdout, "Success")
return nil
},
}
cmd.Options = append(cmd.Options, serpent.Option{
Flag: "timeout",
Description: "Maximum time to wait for dependencies (e.g., 30s, 5m). 5m by default.",
Value: serpent.DurationOf(&timeout),
Default: "5m",
})
return cmd
}
-88
@@ -1,88 +0,0 @@
package cli
import (
"fmt"
"golang.org/x/xerrors"
"github.com/coder/serpent"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/cli/cliui"
)
func (*RootCmd) syncStatus(socketPath *string) *serpent.Command {
formatter := cliui.NewOutputFormatter(
cliui.ChangeFormatterData(
cliui.TableFormat(
[]agentsocket.DependencyInfo{},
[]string{
"depends on",
"required status",
"current status",
"satisfied",
},
),
func(data any) (any, error) {
resp, ok := data.(agentsocket.SyncStatusResponse)
if !ok {
return nil, xerrors.Errorf("expected agentsocket.SyncStatusResponse, got %T", data)
}
return resp.Dependencies, nil
}),
cliui.JSONFormat(),
)
cmd := &serpent.Command{
Use: "status <unit>",
Short: "Show unit status and dependency state",
Long: "Show the current status of a unit, whether it is ready to start, and lists its dependencies. Shows which dependencies are satisfied and which are still pending. Supports multiple output formats.",
Handler: func(i *serpent.Invocation) error {
ctx := i.Context()
if len(i.Args) != 1 {
return xerrors.New("exactly one unit name is required")
}
unit := unit.ID(i.Args[0])
opts := []agentsocket.Option{}
if *socketPath != "" {
opts = append(opts, agentsocket.WithPath(*socketPath))
}
client, err := agentsocket.NewClient(ctx, opts...)
if err != nil {
return xerrors.Errorf("connect to agent socket: %w", err)
}
defer client.Close()
statusResp, err := client.SyncStatus(ctx, unit)
if err != nil {
return xerrors.Errorf("get status failed: %w", err)
}
var out string
header := fmt.Sprintf("Unit: %s\nStatus: %s\nReady: %t\n\nDependencies:\n", unit, statusResp.Status, statusResp.IsReady)
if formatter.FormatID() == "table" && len(statusResp.Dependencies) == 0 {
out = header + "No dependencies found"
} else {
out, err = formatter.Format(ctx, statusResp)
if err != nil {
return xerrors.Errorf("format status: %w", err)
}
if formatter.FormatID() == "table" {
out = header + out
}
}
_, _ = fmt.Fprintln(i.Stdout, out)
return nil
},
}
formatter.AttachOptions(&cmd.Options)
return cmd
}
-330
@@ -1,330 +0,0 @@
//go:build !windows
package cli_test
import (
"bytes"
"context"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/testutil"
)
// setupSocketServer creates an agentsocket server at a temporary path for testing.
// Returns the socket path and a cleanup function. The path should be passed to
// sync commands via the --socket-path flag.
func setupSocketServer(t *testing.T) (path string, cleanup func()) {
t.Helper()
// Use a temporary socket path for each test
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
// Create parent directory if needed
parentDir := filepath.Dir(socketPath)
err := os.MkdirAll(parentDir, 0o700)
require.NoError(t, err, "create socket directory")
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err, "create socket server")
// Return cleanup function
return socketPath, func() {
err := server.Close()
require.NoError(t, err, "close socket server")
_ = os.Remove(socketPath)
}
}
func TestSyncCommands_Golden(t *testing.T) {
t.Parallel()
t.Run("ping", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "ping", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/ping_success", outBuf.Bytes(), nil)
})
t.Run("start_no_dependencies", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "start", "test-unit", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/start_no_dependencies", outBuf.Bytes(), nil)
})
t.Run("start_with_dependencies", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
// Set up dependency: test-unit depends on dep-unit
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(path))
require.NoError(t, err)
// Declare dependency
err = client.SyncWant(ctx, "test-unit", "dep-unit")
require.NoError(t, err)
client.Close()
// Start a goroutine to complete the dependency after a short delay
// This simulates the dependency being satisfied while start is waiting
// The delay ensures the "Waiting..." message appears in the output
done := make(chan error, 1)
go func() {
// Wait a moment to let the start command begin waiting and print the message
time.Sleep(100 * time.Millisecond)
compCtx := context.Background()
compClient, err := agentsocket.NewClient(compCtx, agentsocket.WithPath(path))
if err != nil {
done <- err
return
}
defer compClient.Close()
// Start and complete the dependency unit
err = compClient.SyncStart(compCtx, "dep-unit")
if err != nil {
done <- err
return
}
err = compClient.SyncComplete(compCtx, "dep-unit")
done <- err
}()
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "start", "test-unit", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
// Run the start command - it should wait for the dependency
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
// Ensure the completion goroutine finished
select {
case err := <-done:
require.NoError(t, err, "complete dependency")
case <-time.After(time.Second):
// Goroutine should have finished by now
}
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/start_with_dependencies", outBuf.Bytes(), nil)
})
t.Run("want", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "want", "test-unit", "dep-unit", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/want_success", outBuf.Bytes(), nil)
})
t.Run("complete", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
// First start the unit
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(path))
require.NoError(t, err)
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
client.Close()
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "complete", "test-unit", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/complete_success", outBuf.Bytes(), nil)
})
t.Run("status_pending", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
// Set up a unit with unsatisfied dependency
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(path))
require.NoError(t, err)
err = client.SyncWant(ctx, "test-unit", "dep-unit")
require.NoError(t, err)
client.Close()
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "status", "test-unit", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/status_pending", outBuf.Bytes(), nil)
})
t.Run("status_started", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
// Start a unit
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(path))
require.NoError(t, err)
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
client.Close()
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "status", "test-unit", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/status_started", outBuf.Bytes(), nil)
})
t.Run("status_completed", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
// Start and complete a unit
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(path))
require.NoError(t, err)
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
err = client.SyncComplete(ctx, "test-unit")
require.NoError(t, err)
client.Close()
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "status", "test-unit", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/status_completed", outBuf.Bytes(), nil)
})
t.Run("status_with_dependencies", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
// Set up a unit with dependencies, some satisfied, some not
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(path))
require.NoError(t, err)
err = client.SyncWant(ctx, "test-unit", "dep-1")
require.NoError(t, err)
err = client.SyncWant(ctx, "test-unit", "dep-2")
require.NoError(t, err)
// Complete dep-1, leave dep-2 incomplete
err = client.SyncStart(ctx, "dep-1")
require.NoError(t, err)
err = client.SyncComplete(ctx, "dep-1")
require.NoError(t, err)
client.Close()
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "status", "test-unit", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/status_with_dependencies", outBuf.Bytes(), nil)
})
t.Run("status_json_format", func(t *testing.T) {
t.Parallel()
path, cleanup := setupSocketServer(t)
defer cleanup()
ctx := testutil.Context(t, testutil.WaitShort)
// Set up a unit with dependencies
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(path))
require.NoError(t, err)
err = client.SyncWant(ctx, "test-unit", "dep-unit")
require.NoError(t, err)
err = client.SyncStart(ctx, "dep-unit")
require.NoError(t, err)
err = client.SyncComplete(ctx, "dep-unit")
require.NoError(t, err)
client.Close()
var outBuf bytes.Buffer
inv, _ := clitest.New(t, "exp", "sync", "status", "test-unit", "--output", "json", "--socket-path", path)
inv.Stdout = &outBuf
inv.Stderr = &outBuf
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
clitest.TestGoldenFile(t, "TestSyncCommands_Golden/status_json_format", outBuf.Bytes(), nil)
})
}
-49
@@ -1,49 +0,0 @@
package cli
import (
"golang.org/x/xerrors"
"github.com/coder/serpent"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/cli/cliui"
)
func (*RootCmd) syncWant(socketPath *string) *serpent.Command {
cmd := &serpent.Command{
Use: "want <unit> <depends-on>",
Short: "Declare that a unit depends on another unit completing before it can start",
Long: "Declare that a unit depends on another unit completing before it can start. The unit specified first will not start until the second has signaled that it has completed.",
Handler: func(i *serpent.Invocation) error {
ctx := i.Context()
if len(i.Args) != 2 {
return xerrors.New("exactly two arguments are required: unit and depends-on")
}
dependentUnit := unit.ID(i.Args[0])
dependsOn := unit.ID(i.Args[1])
opts := []agentsocket.Option{}
if *socketPath != "" {
opts = append(opts, agentsocket.WithPath(*socketPath))
}
client, err := agentsocket.NewClient(ctx, opts...)
if err != nil {
return xerrors.Errorf("connect to agent socket: %w", err)
}
defer client.Close()
if err := client.SyncWant(ctx, dependentUnit, dependsOn); err != nil {
return xerrors.Errorf("declare dependency failed: %w", err)
}
cliui.Info(i.Stdout, "Success")
return nil
},
}
return cmd
}
+6 -6
@@ -30,18 +30,18 @@ func (r *RootCmd) templateList() *serpent.Command {
return err
}
if len(templates) == 0 {
_, _ = fmt.Fprintf(inv.Stderr, "%s No templates found! Create one:\n\n", Caret)
_, _ = fmt.Fprintln(inv.Stderr, color.HiMagentaString(" $ coder templates push <directory>\n"))
return nil
}
rows := templatesToRows(templates...)
out, err := formatter.Format(inv.Context(), rows)
if err != nil {
return err
}
if out == "" {
_, _ = fmt.Fprintf(inv.Stderr, "%s No templates found! Create one:\n\n", Caret)
_, _ = fmt.Fprintln(inv.Stderr, color.HiMagentaString(" $ coder templates push <directory>\n"))
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
+2 -7
@@ -106,7 +106,7 @@ func (r *RootCmd) templatePresetsList() *serpent.Command {
if len(presets) == 0 {
cliui.Infof(
inv.Stdout,
"No presets found for template %q and template-version %q.", template.Name, version.Name,
"No presets found for template %q and template-version %q.\n", template.Name, version.Name,
)
return nil
}
@@ -115,7 +115,7 @@ func (r *RootCmd) templatePresetsList() *serpent.Command {
if formatter.FormatID() == "table" {
cliui.Infof(
inv.Stdout,
"Showing presets for template %q and template version %q.", template.Name, version.Name,
"Showing presets for template %q and template version %q.\n", template.Name, version.Name,
)
}
rows := templatePresetsToRows(presets...)
@@ -124,11 +124,6 @@ func (r *RootCmd) templatePresetsList() *serpent.Command {
return xerrors.Errorf("render table: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No template presets found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},

Some files were not shown because too many files have changed in this diff.