Compare commits

..

17 Commits

Author SHA1 Message Date
Lukasz 72afd3677c chore: bump alpine to 3.23.3 in release/2.29 (#21879)
(cherry picked from commit 3d97f677e5)

Co-authored-by: Jon Ayers <jon@coder.com>
2026-02-03 09:12:15 -06:00
Dean Sheather 7dfaa606ee fix: fix various AI task usage accounting bugs (#21723)

---------

Co-authored-by: Cian Johnston <cian@coder.com>
Co-authored-by: Steven Masley <Emyrk@users.noreply.github.com>
2026-01-29 10:06:45 -06:00
Cian Johnston 0c3144fc32 fix(coderd): ensure inbox WebSocket is closed when client disconnects (#21652) (#21684)

Relates to https://github.com/coder/coder/issues/19715

This is similar to https://github.com/coder/coder/pull/19711

This endpoint works by doing the following:
- Subscribes to database notifications via pubsub
- Accepts a WebSocket upgrade
- Starts a `httpapi.Heartbeat`
- Creates a JSON encoder
- **Loops indefinitely, waiting for notifications until the request context is cancelled**

The critical issue here is that `httpapi.Heartbeat` silently fails when
the client has disconnected. This means we never cancel the request
context, leaving the WebSocket alive until we receive a notification
from the database and fail to write that down the pipe.

By replacing usage of `httpapi.Heartbeat` with `httpapi.HeartbeatClose`,
we cancel the context _when the heartbeat fails to write_ due to the
client disconnecting. This allows us to clean up without waiting for a
notification to come through the pubsub channel.
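For context, a minimal Go sketch of the fixed pattern. The signatures are assumed for illustration (the real helpers live in `coderd/httpapi`), and the inline heartbeat goroutine stands in for `httpapi.HeartbeatClose`: cancel the request context on the first failed write so the notification loop can exit.

```go
package inbox

import (
	"context"
	"time"

	"github.com/coder/websocket"
)

// watchInbox relays pubsub notifications to the client until the request
// context is cancelled. Without the heartbeat cancelling the context on a
// failed ping, a disconnected client would keep this loop (and the
// WebSocket) alive until the next notification arrived.
func watchInbox(ctx context.Context, conn *websocket.Conn, notifyCh <-chan []byte) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	// Stand-in for httpapi.HeartbeatClose: ping periodically and cancel
	// the context on the first failure.
	go func() {
		ticker := time.NewTicker(15 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				if err := conn.Ping(ctx); err != nil {
					cancel() // Client disconnected; unblock the loop below.
					return
				}
			}
		}
	}()

	for {
		select {
		case <-ctx.Done():
			return ctx.Err() // Now reached on disconnect, not only on shutdown.
		case msg := <-notifyCh:
			if err := conn.Write(ctx, websocket.MessageText, msg); err != nil {
				return err
			}
		}
	}
}
```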

(cherry picked from commit 409360c62d)


Co-authored-by: Danielle Maywood <danielle@themaywoods.com>
2026-01-26 09:28:04 -06:00
Cian Johnston b5360a9180 fix: backport migration fixes (#21611)
* https://github.com/coder/coder/pull/21493
* https://github.com/coder/coder/pull/21496
* https://github.com/coder/coder/pull/21530

NB: these commits were originally authored by Blink on behalf of
@dannykopping, and have been amended to reflect actual authorship.


**Repro/Verification Steps:**

* Created a Coder deployment with a non-public schema via Docker compose
on v2.28.6:
  
* Created a DB init script under `db-init/01-create-schema.sql` with the
following:
    ```sql
    CREATE SCHEMA IF NOT EXISTS coder AUTHORIZATION coder;
    GRANT ALL PRIVILEGES ON SCHEMA coder TO coder;
    ALTER ROLE coder SET search_path TO coder;
    ```
  * Mounted the above inside the `postgres` container:
    ```diff
         volumes:
           - coder_data:/var/lib/postgresql/data
    +      - ./db-init:/docker-entrypoint-initdb.d:ro
    ```
  * Edited `CODER_PG_CONNECTION_URL` to update the search path:
    ```diff
         environment:
    -      CODER_PG_CONNECTION_URL: "postgresql://${POSTGRES_USER:-username}:${POSTGRES_PASSWORD:-password}@database/${POSTGRES_DB:-coder}?sslmode=disable"
    +      CODER_PG_CONNECTION_URL: "postgresql://${POSTGRES_USER:-username}:${POSTGRES_PASSWORD:-password}@database/${POSTGRES_DB:-coder}?sslmode=disable&search_path=coder"
    ```
  * Brought up the deployment:
    ```shell
    CODER_VERSION=v2.28.6 CODER_ACCESS_URL=http://localhost:7080 POSTGRES_USER=coder POSTGRES_PASSWORD=coder docker compose up
    ```
  * Created user / template / workspace

* Updated to `v2.29.1`:
  * ```shell
    CODER_VERSION=v2.29.1 CODER_ACCESS_URL=http://localhost:7080 POSTGRES_USER=coder POSTGRES_PASSWORD=coder docker compose up
    ```

  * Observed the following error:
    ```
    database-1 | 2026-01-21 15:07:17.629 UTC [102] ERROR: relation "public.workspace_agents" does not exist
    coder-1 | Encountered an error running "coder server", see "coder server --help" for more information
    database-1 | 2026-01-21 15:07:17.629 UTC [102] STATEMENT: CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON public.workspace_agents (auth_instance_id, deleted);
    coder-1 | error: connect to postgres: connect to postgres: migrate up: up: 2 errors occurred:
    coder-1 | * run statement: migration failed: relation "public.workspace_agents" does not exist in line 0: CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON public.workspace_agents (auth_instance_id, deleted);
    coder-1 | (details: pq: relation "public.workspace_agents" does not exist)
    coder-1 | * commit tx on unlock: pq: Could not complete operation in a failed transaction
    coder-1 exited with code 1
    ```

  * Built image locally:
    ```console
    $ make build/coder_$(./scripts/version.sh)_linux_amd64.tag
    ...
    ghcr.io/coder/coder:v2.29.1-devel-e8c482a98a67-amd64
    ```

  * Started with new image:
    ```shell
    CODER_VERSION=v2.29.1-devel-e8c482a98a67-amd64 CODER_ACCESS_URL=http://localhost:7080 POSTGRES_USER=coder POSTGRES_PASSWORD=coder docker compose up
    ```

  * Observed that the migrations ran and Coder came up successfully
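
The root cause is migrations and fixtures that hardcode the `public.` schema, which breaks deployments whose `search_path` points at another schema. This release guards against regressions with a `lint/migrations` target calling `scripts/check_pg_schema.sh` (visible in the Makefile diff below); a rough Go sketch of the same idea follows, where the glob path and the simple substring match are assumptions rather than the script's exact logic:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Scan migration files for a hardcoded "public." schema qualifier.
	files, err := filepath.Glob("coderd/database/migrations/*.sql")
	if err != nil {
		panic(err)
	}
	bad := 0
	for _, f := range files {
		b, err := os.ReadFile(f)
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(b), "public.") {
			fmt.Printf("%s: hardcodes the public schema\n", f)
			bad++
		}
	}
	if bad > 0 {
		os.Exit(1)
	}
}
```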

---------

Signed-off-by: Danny Kopping <danny@coder.com>
Co-authored-by: Danny Kopping <danny@coder.com>
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-01-21 15:45:58 +00:00
Kacper Sawicki 2e2d0dde44 feat(cli): backport #21374 to 2.29 (#21561)
Backports #21374 to 2.29: feat(cli): add `--no-build` flag to `state push` for state-only updates.
2026-01-20 15:46:46 -06:00
Kacper Sawicki 2314e4a94e fix: backport update boundary version to 2.29 (#21290) (#21575)
fix: update boundary version https://github.com/coder/coder/pull/21290

required by https://github.com/coder/coder/pull/21561

Co-authored-by: Yevhenii Shcherbina <evgeniy.shcherbina.es@gmail.com>
2026-01-20 11:19:53 +01:00
blinkagent[bot] bd76c602e4 chore: add antigravity to allowed protocols list (#20873) (#21122)
Co-authored-by: DevCats <christofer@coder.com>
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Atif Ali <atif@coder.com>
2025-12-29 13:29:28 +05:00
Jakub Domeracki 59cdd7e21f chore: update react to apply patch for CVE-2025-55182 (#21084) (#21168)
Reference:

https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components

> Please note that coder deployments aren't vulnerable since [React
Server Components](https://react.dev/reference/rsc/server-components)
aren't in use

---------

Co-authored-by: blinkagent[bot] <237617714+blinkagent[bot]@users.noreply.github.com>
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2025-12-09 09:34:16 -06:00
blinkagent[bot] ba71b321bc fix: remove a sensitive field from an agent log line (#20968) (#21063)
This PR removes a log field that could expose sensitive information in
agent logs for workspaces that pass such information to the agent via
its manifest.

(cherry picked from commit 1d726c81bb)

Co-authored-by: Sas Swart <sas.swart.cdk@gmail.com>
2025-12-02 11:33:50 -06:00
Callum Styan c94c470aae fix: pass context with authorization to agentapi (#21045)
Cherry-pick of #20959 into the release branch.

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2025-12-01 14:16:30 -06:00
Zach 8430dd648a fix(cli): remove defaulting to keyring when --global-config set (cherry-pick) (#20953)
(cherry picked from commit bbf7b137da)
2025-12-01 14:09:54 -06:00
Susana Ferreira 0bd0990e14 feat: add notification warning alert to Tasks page (#20900) (#20981)
Related to PR: https://github.com/coder/coder/pull/20900

(cherry picked from commit f8d9a8046f)
2025-12-01 14:08:50 -06:00
Susana Ferreira 10e70f8c51 fix: show task display name in task topbar (#20957) (#20980)
Related to PR: https://github.com/coder/coder/pull/20957

(cherry picked from commit 21efebeadc)
2025-12-01 14:08:01 -06:00
Sas Swart abe66a38eb feat: implement agent socket api, client and cli (#20758) (#20976) 2025-12-01 14:07:40 -06:00
Jake Howell cd9d3ef46f fix: ensure we check if the user can actually see ai bridge (#20964) 2025-12-01 14:07:18 -06:00
Mathias Fredriksson c0a2522bd6 fix(site): only show active tasks in waiting for input tab (#20933) (#20955) 2025-12-01 14:06:44 -06:00
Danny Kopping dfa25d5f00 chore: document bedrock setup process for aibridge (#20956) (#20966)
Cherry-pick of https://github.com/coder/coder/pull/20956

Signed-off-by: Danny Kopping <danny@coder.com>
2025-11-28 13:09:10 +05:00
223 changed files with 4202 additions and 8637 deletions
-126
@@ -1,126 +0,0 @@
# Coder Architecture
This document provides an overview of Coder's architecture and core systems.
## What is Coder?
Coder is a platform for creating, managing, and using remote development environments (also known as Cloud Development Environments or CDEs). It leverages Terraform to define and provision these environments, which are referred to as "workspaces" within the project. The system is designed to be extensible, secure, and provide developers with a seamless remote development experience.
## Core Architecture
The heart of Coder is a control plane that orchestrates the creation and management of workspaces. This control plane interacts with separate Provisioner processes over gRPC to handle workspace builds. The Provisioners consume workspace definitions and use Terraform to create the actual infrastructure.
The CLI package serves dual purposes - it can be used to launch the control plane itself and also provides client functionality for users to interact with an existing control plane instance. All user-facing frontend code is developed in TypeScript using React and lives in the `site/` directory.
The database layer uses PostgreSQL with SQLC for generating type-safe database code. Database migrations are carefully managed to ensure both forward and backward compatibility through paired `.up.sql` and `.down.sql` files.
## API Design
Coder's API architecture combines REST and gRPC approaches. The REST API is defined in `coderd/coderd.go` and uses Chi for HTTP routing. This provides the primary interface for the frontend and external integrations.
Internal communication with Provisioners occurs over gRPC, with service definitions maintained in `.proto` files. This separation allows for efficient binary communication with the components responsible for infrastructure management while providing a standard REST interface for human-facing applications.
## Network Architecture
Coder implements a secure networking layer based on Tailscale's Wireguard implementation. The `tailnet` package provides connectivity between workspace agents and clients through DERP (Designated Encrypted Relay for Packets) servers when direct connections aren't possible. This creates a secure overlay network allowing access to workspaces regardless of network topology, firewalls, or NAT configurations.
### Tailnet and DERP System
The networking system has three key components:
1. **Tailnet**: An overlay network implemented in the `tailnet` package that provides secure, end-to-end encrypted connections between clients, the Coder server, and workspace agents.
2. **DERP Servers**: These relay traffic when direct connections aren't possible. Coder provides several options:
- A built-in DERP server that runs on the Coder control plane
- Integration with Tailscale's global DERP infrastructure
- Support for custom DERP servers for lower latency or offline deployments
3. **Direct Connections**: When possible, the system establishes peer-to-peer connections between clients and workspaces using STUN for NAT traversal. This requires both endpoints to send UDP traffic on ephemeral ports.
### Workspace Proxies
Workspace proxies (in the Enterprise edition) provide regional relay points for browser-based connections, reducing latency for geo-distributed teams. Key characteristics:
- Deployed as independent servers that authenticate with the Coder control plane
- Relay connections for SSH, workspace apps, port forwarding, and web terminals
- Do not make direct database connections
- Managed through the `coder wsproxy` commands
- Implemented primarily in the `enterprise/wsproxy/` package
## Agent System
The workspace agent runs within each provisioned workspace and provides core functionality including:
- SSH access to workspaces via the `agentssh` package
- Port forwarding
- Terminal connectivity via the `pty` package for pseudo-terminal support
- Application serving
- Healthcheck monitoring
- Resource usage reporting
Agents communicate with the control plane using the tailnet system and authenticate using secure tokens.
## Workspace Applications
Workspace applications (or "apps") provide browser-based access to services running within workspaces. The system supports:
- HTTP(S) and WebSocket connections
- Path-based or subdomain-based access URLs
- Health checks to monitor application availability
- Different sharing levels (owner-only, authenticated users, or public)
- Custom icons and display settings
The implementation is primarily in the `coderd/workspaceapps/` directory with components for URL generation, proxying connections, and managing application state.
## Implementation Details
The project structure separates frontend and backend concerns. React components and pages are organized in the `site/src/` directory, with Jest used for testing. The backend is primarily written in Go, with a strong emphasis on error handling patterns and test coverage.
Database interactions are carefully managed through migrations in `coderd/database/migrations/` and queries in `coderd/database/queries/`. All new queries require proper database authorization (dbauthz) implementation to ensure that only users with appropriate permissions can access specific resources.
## Authorization System
The database authorization (dbauthz) system enforces fine-grained access control across all database operations. It uses role-based access control (RBAC) to validate user permissions before executing database operations. The `dbauthz` package wraps the database store and performs authorization checks before returning data. All database operations must pass through this layer to ensure security.
## Testing Framework
The codebase has a comprehensive testing approach with several key components:
1. **Parallel Testing**: All tests must use `t.Parallel()` to run concurrently, which improves test suite performance and helps identify race conditions.
2. **coderdtest Package**: This package in `coderd/coderdtest/` provides utilities for creating test instances of the Coder server, setting up test users and workspaces, and mocking external components.
3. **Integration Tests**: Tests often span multiple components to verify system behavior, such as template creation, workspace provisioning, and agent connectivity.
4. **Enterprise Testing**: Enterprise features have dedicated test utilities in the `coderdenttest` package.
## Open Source and Enterprise Components
The repository contains both open source and enterprise components:
- Enterprise code lives primarily in the `enterprise/` directory
- Enterprise features focus on governance, scalability (high availability), and advanced deployment options like workspace proxies
- The boundary between open source and enterprise is managed through a licensing system
- The same core codebase supports both editions, with enterprise features conditionally enabled
## Development Philosophy
Coder emphasizes clear error handling, with specific patterns required:
- Concise error messages that avoid phrases like "failed to"
- Wrapping errors with `%w` to maintain error chains
- Using sentinel errors with the "err" prefix (e.g., `errNotFound`)
All tests should run in parallel using `t.Parallel()` to ensure efficient testing and expose potential race conditions. The codebase is rigorously linted with golangci-lint to maintain consistent code quality.
Git contributions follow a standard format with commit messages structured as `type: <message>`, where type is one of `feat`, `fix`, or `chore`.
## Development Workflow
Development can be initiated using `scripts/develop.sh` to start the application after making changes. Database schema updates should be performed through the migration system using `create_migration.sh <name>` to generate migration files, with each `.up.sql` migration paired with a corresponding `.down.sql` that properly reverts all changes.
If the development database gets into a bad state, it can be completely reset by removing the PostgreSQL data directory with `rm -rf .coderv2/postgres`. This will destroy all data in the development database, requiring you to recreate any test users, templates, or workspaces after restarting the application.
Code generation for the database layer uses `coderd/database/generate.sh`, and developers should refer to `sqlc.yaml` for the appropriate style and patterns to follow when creating new queries or tables.
The focus should always be on maintaining security through proper database authorization, clean error handling, and comprehensive test coverage to ensure the platform remains robust and reliable.
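The error-handling conventions described above (wrap with `%w`, sentinel errors prefixed with "err", no "failed to" phrasing) are easiest to see in a few lines. A hedged Go sketch with hypothetical names, not code from the repository:

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel errors use the "err" prefix, per the convention above.
var errNotFound = errors.New("workspace not found")

func lookupWorkspace(name string) error {
	_ = name // Database lookup elided for brevity.
	return errNotFound
}

func startWorkspace(name string) error {
	if err := lookupWorkspace(name); err != nil {
		// The message names the operation ("start workspace") rather than
		// saying "failed to", and wraps with %w to keep the error chain.
		return fmt.Errorf("start workspace %q: %w", name, err)
	}
	return nil
}

func main() {
	err := startWorkspace("dev")
	fmt.Println(errors.Is(err, errNotFound)) // true: the chain is preserved
}
```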
-1
@@ -1 +0,0 @@
AGENTS.md
+124
@@ -0,0 +1,124 @@
# Cursor Rules
This project is called "Coder" - an application for managing remote development environments.
Coder provides a platform for creating, managing, and using remote development environments (also known as Cloud Development Environments or CDEs). It leverages Terraform to define and provision these environments, which are referred to as "workspaces" within the project. The system is designed to be extensible, secure, and provide developers with a seamless remote development experience.
## Core Architecture
The heart of Coder is a control plane that orchestrates the creation and management of workspaces. This control plane interacts with separate Provisioner processes over gRPC to handle workspace builds. The Provisioners consume workspace definitions and use Terraform to create the actual infrastructure.
The CLI package serves dual purposes - it can be used to launch the control plane itself and also provides client functionality for users to interact with an existing control plane instance. All user-facing frontend code is developed in TypeScript using React and lives in the `site/` directory.
The database layer uses PostgreSQL with SQLC for generating type-safe database code. Database migrations are carefully managed to ensure both forward and backward compatibility through paired `.up.sql` and `.down.sql` files.
## API Design
Coder's API architecture combines REST and gRPC approaches. The REST API is defined in `coderd/coderd.go` and uses Chi for HTTP routing. This provides the primary interface for the frontend and external integrations.
Internal communication with Provisioners occurs over gRPC, with service definitions maintained in `.proto` files. This separation allows for efficient binary communication with the components responsible for infrastructure management while providing a standard REST interface for human-facing applications.
## Network Architecture
Coder implements a secure networking layer based on Tailscale's Wireguard implementation. The `tailnet` package provides connectivity between workspace agents and clients through DERP (Designated Encrypted Relay for Packets) servers when direct connections aren't possible. This creates a secure overlay network allowing access to workspaces regardless of network topology, firewalls, or NAT configurations.
### Tailnet and DERP System
The networking system has three key components:
1. **Tailnet**: An overlay network implemented in the `tailnet` package that provides secure, end-to-end encrypted connections between clients, the Coder server, and workspace agents.
2. **DERP Servers**: These relay traffic when direct connections aren't possible. Coder provides several options:
- A built-in DERP server that runs on the Coder control plane
- Integration with Tailscale's global DERP infrastructure
- Support for custom DERP servers for lower latency or offline deployments
3. **Direct Connections**: When possible, the system establishes peer-to-peer connections between clients and workspaces using STUN for NAT traversal. This requires both endpoints to send UDP traffic on ephemeral ports.
### Workspace Proxies
Workspace proxies (in the Enterprise edition) provide regional relay points for browser-based connections, reducing latency for geo-distributed teams. Key characteristics:
- Deployed as independent servers that authenticate with the Coder control plane
- Relay connections for SSH, workspace apps, port forwarding, and web terminals
- Do not make direct database connections
- Managed through the `coder wsproxy` commands
- Implemented primarily in the `enterprise/wsproxy/` package
## Agent System
The workspace agent runs within each provisioned workspace and provides core functionality including:
- SSH access to workspaces via the `agentssh` package
- Port forwarding
- Terminal connectivity via the `pty` package for pseudo-terminal support
- Application serving
- Healthcheck monitoring
- Resource usage reporting
Agents communicate with the control plane using the tailnet system and authenticate using secure tokens.
## Workspace Applications
Workspace applications (or "apps") provide browser-based access to services running within workspaces. The system supports:
- HTTP(S) and WebSocket connections
- Path-based or subdomain-based access URLs
- Health checks to monitor application availability
- Different sharing levels (owner-only, authenticated users, or public)
- Custom icons and display settings
The implementation is primarily in the `coderd/workspaceapps/` directory with components for URL generation, proxying connections, and managing application state.
## Implementation Details
The project structure separates frontend and backend concerns. React components and pages are organized in the `site/src/` directory, with Jest used for testing. The backend is primarily written in Go, with a strong emphasis on error handling patterns and test coverage.
Database interactions are carefully managed through migrations in `coderd/database/migrations/` and queries in `coderd/database/queries/`. All new queries require proper database authorization (dbauthz) implementation to ensure that only users with appropriate permissions can access specific resources.
## Authorization System
The database authorization (dbauthz) system enforces fine-grained access control across all database operations. It uses role-based access control (RBAC) to validate user permissions before executing database operations. The `dbauthz` package wraps the database store and performs authorization checks before returning data. All database operations must pass through this layer to ensure security.
## Testing Framework
The codebase has a comprehensive testing approach with several key components:
1. **Parallel Testing**: All tests must use `t.Parallel()` to run concurrently, which improves test suite performance and helps identify race conditions.
2. **coderdtest Package**: This package in `coderd/coderdtest/` provides utilities for creating test instances of the Coder server, setting up test users and workspaces, and mocking external components.
3. **Integration Tests**: Tests often span multiple components to verify system behavior, such as template creation, workspace provisioning, and agent connectivity.
4. **Enterprise Testing**: Enterprise features have dedicated test utilities in the `coderdenttest` package.
## Open Source and Enterprise Components
The repository contains both open source and enterprise components:
- Enterprise code lives primarily in the `enterprise/` directory
- Enterprise features focus on governance, scalability (high availability), and advanced deployment options like workspace proxies
- The boundary between open source and enterprise is managed through a licensing system
- The same core codebase supports both editions, with enterprise features conditionally enabled
## Development Philosophy
Coder emphasizes clear error handling, with specific patterns required:
- Concise error messages that avoid phrases like "failed to"
- Wrapping errors with `%w` to maintain error chains
- Using sentinel errors with the "err" prefix (e.g., `errNotFound`)
All tests should run in parallel using `t.Parallel()` to ensure efficient testing and expose potential race conditions. The codebase is rigorously linted with golangci-lint to maintain consistent code quality.
Git contributions follow a standard format with commit messages structured as `type: <message>`, where type is one of `feat`, `fix`, or `chore`.
## Development Workflow
Development can be initiated using `scripts/develop.sh` to start the application after making changes. Database schema updates should be performed through the migration system using `create_migration.sh <name>` to generate migration files, with each `.up.sql` migration paired with a corresponding `.down.sql` that properly reverts all changes.
If the development database gets into a bad state, it can be completely reset by removing the PostgreSQL data directory with `rm -rf .coderv2/postgres`. This will destroy all data in the development database, requiring you to recreate any test users, templates, or workspaces after restarting the application.
Code generation for the database layer uses `coderd/database/generate.sh`, and developers should refer to `sqlc.yaml` for the appropriate style and patterns to follow when creating new queries or tables.
The focus should always be on maintaining security through proper database authorization, clean error handling, and comprehensive test coverage to ensure the platform remains robust and reliable.
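Similarly, the dbauthz wrapper described in the Authorization System section lends itself to a small sketch. This shows only the generic store-wrapping pattern; all names below are hypothetical rather than the real `dbauthz` types:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

type Workspace struct{ Owner string }

// Store is the raw data layer.
type Store interface {
	GetWorkspace(ctx context.Context, id string) (Workspace, error)
}

var errUnauthorized = errors.New("unauthorized")

// authzStore wraps a Store and checks permissions before touching it,
// mirroring how dbauthz wraps the database store.
type authzStore struct {
	inner Store
	can   func(ctx context.Context, action, object string) bool
}

func (s *authzStore) GetWorkspace(ctx context.Context, id string) (Workspace, error) {
	if !s.can(ctx, "read", "workspace/"+id) {
		return Workspace{}, errUnauthorized
	}
	return s.inner.GetWorkspace(ctx, id)
}

type memStore map[string]Workspace

func (m memStore) GetWorkspace(_ context.Context, id string) (Workspace, error) {
	return m[id], nil
}

func main() {
	db := &authzStore{
		inner: memStore{"ws1": {Owner: "alice"}},
		can:   func(context.Context, string, string) bool { return true }, // Toy policy.
	}
	fmt.Println(db.GetWorkspace(context.Background(), "ws1"))
}
```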
+4 -6
@@ -6,8 +6,6 @@ updates:
interval: "weekly"
time: "06:00"
timezone: "America/Chicago"
cooldown:
default-days: 7
labels: []
commit-message:
prefix: "ci"
@@ -70,8 +68,8 @@ updates:
interval: "monthly"
time: "06:00"
timezone: "America/Chicago"
cooldown:
default-days: 7
reviewers:
- "coder/ts"
commit-message:
prefix: "chore"
labels: []
@@ -121,9 +119,9 @@ updates:
commit-message:
prefix: "chore"
groups:
coder-modules:
coder:
patterns:
- "coder/*/coder"
- "registry.coder.com/coder/*/coder"
labels: []
ignore:
- dependency-name: "*"
+20 -20
@@ -40,7 +40,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -124,7 +124,7 @@ jobs:
# runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
# steps:
# - name: Checkout
# uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
# uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# with:
# fetch-depth: 1
# # See: https://github.com/stefanzweifel/git-auto-commit-action?tab=readme-ov-file#commits-made-by-this-action-do-not-trigger-new-workflow-runs
@@ -162,7 +162,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -191,7 +191,7 @@ jobs:
# Check for any typos
- name: Check for typos
uses: crate-ci/typos@2d0ce569feab1f8752f1dde43cc2f2aa53236e06 # v1.40.0
uses: crate-ci/typos@626c4bedb751ce0b7f03262ca97ddda9a076ae1c # v1.39.2
with:
config: .github/workflows/typos.toml
@@ -240,7 +240,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -297,7 +297,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -369,7 +369,7 @@ jobs:
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -537,7 +537,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -586,7 +586,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -646,7 +646,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -673,7 +673,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -706,7 +706,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
@@ -786,7 +786,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
# 👇 Ensures Chromatic can read your full git history
fetch-depth: 0
@@ -802,7 +802,7 @@ jobs:
# the check to pass. This is desired in PRs, but not in mainline.
- name: Publish to Chromatic (non-mainline)
if: github.ref != 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
uses: chromaui/action@ac86f2ff0a458ffbce7b40698abd44c0fa34d4b6 # v13.3.3
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -834,7 +834,7 @@ jobs:
# infinitely "in progress" in mainline unless we re-review each build.
- name: Publish to Chromatic (mainline)
if: github.ref == 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
uses: chromaui/action@ac86f2ff0a458ffbce7b40698abd44c0fa34d4b6 # v13.3.3
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -867,7 +867,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
# 0 is required here for version.sh to work.
fetch-depth: 0
@@ -971,7 +971,7 @@ jobs:
steps:
# Harden Runner doesn't work on macOS
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -1058,7 +1058,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -1113,7 +1113,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -1510,7 +1510,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
+1 -9
@@ -28,7 +28,6 @@ jobs:
github-token: "${{ secrets.GITHUB_TOKEN }}"
- name: Approve the PR
if: steps.metadata.outputs.package-ecosystem != 'github-actions'
run: |
echo "Approving $PR_URL"
gh pr review --approve "$PR_URL"
@@ -37,7 +36,6 @@ jobs:
GH_TOKEN: ${{secrets.GITHUB_TOKEN}}
- name: Enable auto-merge
if: steps.metadata.outputs.package-ecosystem != 'github-actions'
run: |
echo "Enabling auto-merge for $PR_URL"
gh pr merge --auto --squash "$PR_URL"
@@ -47,11 +45,6 @@ jobs:
- name: Send Slack notification
run: |
if [ "$PACKAGE_ECOSYSTEM" = "github-actions" ]; then
STATUS_TEXT=":pr-opened: Dependabot opened PR #${PR_NUMBER} (GitHub Actions changes are not auto-merged)"
else
STATUS_TEXT=":pr-merged: Auto merge enabled for Dependabot PR #${PR_NUMBER}"
fi
curl -X POST -H 'Content-type: application/json' \
--data '{
"username": "dependabot",
@@ -61,7 +54,7 @@ jobs:
"type": "header",
"text": {
"type": "plain_text",
"text": "'"${STATUS_TEXT}"'",
"text": ":pr-merged: Auto merge enabled for Dependabot PR #'"${PR_NUMBER}"'",
"emoji": true
}
},
@@ -91,7 +84,6 @@ jobs:
}' "${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}"
env:
SLACK_WEBHOOK: ${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}
PACKAGE_ECOSYSTEM: ${{ steps.metadata.outputs.package-ecosystem }}
PR_NUMBER: ${{ github.event.pull_request.number }}
PR_TITLE: ${{ github.event.pull_request.title }}
PR_URL: ${{ github.event.pull_request.html_url }}
+4 -4
@@ -41,7 +41,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -70,7 +70,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -92,7 +92,7 @@ jobs:
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
- name: Set up Flux CLI
uses: fluxcd/flux2/action@8454b02a32e48d775b9f563cb51fdcb1787b5b93 # v2.7.5
uses: fluxcd/flux2/action@b6e76ca2534f76dcb8dd94fb057cdfa923c3b641 # v2.7.3
with:
# Keep this and the github action up to date with the version of flux installed in dogfood cluster
version: "2.7.0"
@@ -151,7 +151,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
+1 -1
@@ -162,7 +162,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
+1 -1
@@ -43,7 +43,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
+2 -2
View File
@@ -23,14 +23,14 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- name: Setup Node
uses: ./.github/actions/setup-node
- uses: tj-actions/changed-files@abdd2f68ea150cee8f236d4a9fb4e0f2491abf1b # v45.0.7
- uses: tj-actions/changed-files@70069877f29101175ed2b055d210fe8b1d54d7d7 # v45.0.7
id: changed-files
with:
files: |
+2 -2
View File
@@ -31,7 +31,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
@@ -130,7 +130,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
+1 -1
@@ -53,7 +53,7 @@ jobs:
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
+4 -4
@@ -44,7 +44,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
@@ -81,7 +81,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -233,7 +233,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -337,7 +337,7 @@ jobs:
kubectl create namespace "pr${PR_NUMBER}"
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
+4 -4
@@ -65,7 +65,7 @@ jobs:
steps:
# Harden Runner doesn't work on macOS.
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -169,7 +169,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -888,7 +888,7 @@ jobs:
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -976,7 +976,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
+2 -2
@@ -25,7 +25,7 @@ jobs:
egress-policy: audit
- name: "Checkout code"
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
@@ -47,6 +47,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@fe4161a26a8629af62121b670040955b330f9af2 # v3.29.5
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
with:
sarif_file: results.sarif
+5 -5
@@ -32,7 +32,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
@@ -40,7 +40,7 @@ jobs:
uses: ./.github/actions/setup-go
- name: Initialize CodeQL
uses: github/codeql-action/init@fe4161a26a8629af62121b670040955b330f9af2 # v3.29.5
uses: github/codeql-action/init@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
with:
languages: go, javascript
@@ -50,7 +50,7 @@ jobs:
rm Makefile
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@fe4161a26a8629af62121b670040955b330f9af2 # v3.29.5
uses: github/codeql-action/analyze@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
- name: Send Slack notification on failure
if: ${{ failure() }}
@@ -74,7 +74,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
@@ -154,7 +154,7 @@ jobs:
severity: "CRITICAL,HIGH"
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@fe4161a26a8629af62121b670040955b330f9af2 # v3.29.5
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
with:
sarif_file: trivy-results.sarif
category: "Trivy"
+1 -1
@@ -101,7 +101,7 @@ jobs:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- name: Run delete-old-branches-action
+1 -1
@@ -153,7 +153,7 @@ jobs:
} >> "${GITHUB_OUTPUT}"
- name: Checkout repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
+1 -1
@@ -26,7 +26,7 @@ jobs:
egress-policy: audit
- name: Checkout
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
-3
@@ -90,9 +90,6 @@ __debug_bin*
**/.claude/settings.local.json
# Local agent configuration
AGENTS.local.md
/.env
# Ignore plans written by AI agents.
-3
@@ -1,3 +0,0 @@
{
"ignores": ["PLAN.md"],
}
-57
@@ -140,70 +140,13 @@ seems like it should use `time.Sleep`, read through https://github.com/coder/qua
- Follow [Uber Go Style Guide](https://github.com/uber-go/guide/blob/master/style.md)
- Commit format: `type(scope): message`
### Writing Comments
Code comments should be clear, well-formatted, and add meaningful context.
**Proper sentence structure**: Comments are sentences and should end with
periods or other appropriate punctuation. This improves readability and
maintains professional code standards.
**Explain why, not what**: Good comments explain the reasoning behind code
rather than describing what the code does. The code itself should be
self-documenting through clear naming and structure. Focus your comments on
non-obvious decisions, edge cases, or business logic that isn't immediately
apparent from reading the implementation.
**Line length and wrapping**: Keep comment lines to 80 characters wide
(including the comment prefix like `//` or `#`). When a comment spans multiple
lines, wrap it naturally at word boundaries rather than writing one sentence
per line. This creates more readable, paragraph-like blocks of documentation.
```go
// Good: Explains the rationale with proper sentence structure.
// We need a custom timeout here because workspace builds can take several
// minutes on slow networks, and the default 30s timeout causes false
// failures during initial template imports.
ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
// Bad: Describes what the code does without punctuation or wrapping
// Set a custom timeout
// Workspace builds can take a long time
// Default timeout is too short
ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
```
### Avoid Unnecessary Changes
When fixing a bug or adding a feature, don't modify code unrelated to your
task. Unnecessary changes make PRs harder to review and can introduce
regressions.
**Don't reword existing comments or code** unless the change is directly
motivated by your task. Rewording comments to be shorter or "cleaner" wastes
reviewer time and clutters the diff.
**Don't delete existing comments** that explain non-obvious behavior. These
comments preserve important context about why code works a certain way.
**When adding tests for new behavior**, add new test cases instead of modifying
existing ones. This preserves coverage for the original behavior and makes it
clear what the new test covers.
## Detailed Development Guides
@.claude/docs/ARCHITECTURE.md
@.claude/docs/OAUTH2.md
@.claude/docs/TESTING.md
@.claude/docs/TROUBLESHOOTING.md
@.claude/docs/DATABASE.md
## Local Configuration
These files may be gitignored, read manually if not auto-loaded.
@AGENTS.local.md
## Common Pitfalls
1. **Audit table errors** → Update `enterprise/audit/table.go`
+10 -1
@@ -69,6 +69,9 @@ MOST_GO_SRC_FILES := $(shell \
# All the shell files in the repo, excluding ignored files.
SHELL_SRC_FILES := $(shell find . $(FIND_EXCLUSIONS) -type f -name '*.sh')
MIGRATION_FILES := $(shell find ./coderd/database/migrations/ -maxdepth 1 $(FIND_EXCLUSIONS) -type f -name '*.sql')
FIXTURE_FILES := $(shell find ./coderd/database/migrations/testdata/fixtures/ $(FIND_EXCLUSIONS) -type f -name '*.sql')
# Ensure we don't use the user's git configs which might cause side-effects
GIT_FLAGS = GIT_CONFIG_GLOBAL=/dev/null GIT_CONFIG_SYSTEM=/dev/null
@@ -561,7 +564,7 @@ endif
# Note: we don't run zizmor in the lint target because it takes a while. CI
# runs it explicitly.
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes lint/migrations
.PHONY: lint
lint/site-icons:
@@ -619,6 +622,12 @@ lint/check-scopes: coderd/database/dump.sql
go run ./scripts/check-scopes
.PHONY: lint/check-scopes
# Verify migrations do not hardcode the public schema.
lint/migrations:
./scripts/check_pg_schema.sh "Migrations" $(MIGRATION_FILES)
./scripts/check_pg_schema.sh "Fixtures" $(FIXTURE_FILES)
.PHONY: lint/migrations
# All files generated by the database should be added here, and this can be used
# as a target for jobs that need to run after the database is generated.
DB_GEN_FILES := \
+2 -2
@@ -1576,8 +1576,8 @@ func (a *agent) createTailnet(
break
}
clog := a.logger.Named("speedtest").With(
slog.F("remote", conn.RemoteAddr()),
slog.F("local", conn.LocalAddr()))
slog.F("remote", conn.RemoteAddr().String()),
slog.F("local", conn.LocalAddr().String()))
clog.Info(ctx, "accepted conn")
wg.Add(1)
closed := make(chan struct{})
-45
@@ -1,45 +0,0 @@
package agent
import (
"testing"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"cdr.dev/slog/sloggers/slogtest"
"github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/testutil"
)
// TestReportConnectionEmpty tests that reportConnection() doesn't choke if given an empty IP string, which is what we
// send if we cannot get the remote address.
func TestReportConnectionEmpty(t *testing.T) {
t.Parallel()
connID := uuid.UUID{1}
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
ctx := testutil.Context(t, testutil.WaitShort)
uut := &agent{
hardCtx: ctx,
logger: logger,
}
disconnected := uut.reportConnection(connID, proto.Connection_TYPE_UNSPECIFIED, "")
require.Len(t, uut.reportConnections, 1)
req0 := uut.reportConnections[0]
require.Equal(t, proto.Connection_TYPE_UNSPECIFIED, req0.GetConnection().GetType())
require.Equal(t, "", req0.GetConnection().Ip)
require.Equal(t, connID[:], req0.GetConnection().GetId())
require.Equal(t, proto.Connection_CONNECT, req0.GetConnection().GetAction())
disconnected(0, "because")
require.Len(t, uut.reportConnections, 2)
req1 := uut.reportConnections[1]
require.Equal(t, proto.Connection_TYPE_UNSPECIFIED, req1.GetConnection().GetType())
require.Equal(t, "", req1.GetConnection().Ip)
require.Equal(t, connID[:], req1.GetConnection().GetId())
require.Equal(t, proto.Connection_DISCONNECT, req1.GetConnection().GetAction())
require.Equal(t, "because", req1.GetConnection().GetReason())
}
+19 -44
@@ -1039,10 +1039,6 @@ func (api *API) processUpdatedContainersLocked(ctx context.Context, updated code
logger.Error(ctx, "inject subagent into container failed", slog.Error(err))
dc.Error = err.Error()
} else {
// TODO(mafredri): Preserve the error from devcontainer
// up if it was a lifecycle script error. Currently
// this results in a brief flicker for the user if
// injection is fast, as the error is shown then erased.
dc.Error = ""
}
}
@@ -1351,41 +1347,27 @@ func (api *API) CreateDevcontainer(workspaceFolder, configPath string, opts ...D
upOptions := []DevcontainerCLIUpOptions{WithUpOutput(infoW, errW)}
upOptions = append(upOptions, opts...)
containerID, upErr := api.dccli.Up(ctx, dc.WorkspaceFolder, configPath, upOptions...)
if upErr != nil {
_, err := api.dccli.Up(ctx, dc.WorkspaceFolder, configPath, upOptions...)
if err != nil {
// No need to log if the API is closing (context canceled), as this
// is expected behavior when the API is shutting down.
if !errors.Is(upErr, context.Canceled) {
logger.Error(ctx, "devcontainer creation failed", slog.Error(upErr))
if !errors.Is(err, context.Canceled) {
logger.Error(ctx, "devcontainer creation failed", slog.Error(err))
}
// If we don't have a container ID, the error is fatal, so we
// should mark the devcontainer as errored and return.
if containerID == "" {
api.mu.Lock()
dc = api.knownDevcontainers[dc.WorkspaceFolder]
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusError
dc.Error = upErr.Error()
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.recreateErrorTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "errorTimes")
api.broadcastUpdatesLocked()
api.mu.Unlock()
api.mu.Lock()
dc = api.knownDevcontainers[dc.WorkspaceFolder]
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusError
dc.Error = err.Error()
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.recreateErrorTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "errorTimes")
api.mu.Unlock()
return xerrors.Errorf("start devcontainer: %w", upErr)
}
// If we have a container ID, it means the container was created
// but a lifecycle script (e.g. postCreateCommand) failed. In this
// case, we still want to refresh containers to pick up the new
// container, inject the agent, and allow the user to debug the
// issue. We store the error to surface it to the user.
logger.Warn(ctx, "devcontainer created with errors (e.g. lifecycle script failure), container is available",
slog.F("container_id", containerID),
)
} else {
logger.Info(ctx, "devcontainer created successfully")
return xerrors.Errorf("start devcontainer: %w", err)
}
logger.Info(ctx, "devcontainer created successfully")
api.mu.Lock()
dc = api.knownDevcontainers[dc.WorkspaceFolder]
// Update the devcontainer status to Running or Stopped based on the
@@ -1394,18 +1376,13 @@ func (api *API) CreateDevcontainer(workspaceFolder, configPath string, opts ...D
// to minimize the time between API consistency, we guess the status
// based on the container state.
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusStopped
if dc.Container != nil && dc.Container.Running {
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusRunning
if dc.Container != nil {
if dc.Container.Running {
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusRunning
}
}
dc.Dirty = false
if upErr != nil {
// If there was a lifecycle script error but we have a container ID,
// the container is running so we should set the status to Running.
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusRunning
dc.Error = upErr.Error()
} else {
dc.Error = ""
}
dc.Error = ""
api.recreateSuccessTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "successTimes")
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.broadcastUpdatesLocked()
@@ -1457,8 +1434,6 @@ func (api *API) markDevcontainerDirty(configPath string, modifiedAt time.Time) {
api.knownDevcontainers[dc.WorkspaceFolder] = dc
}
api.broadcastUpdatesLocked()
}
// cleanupSubAgents removes subagents that are no longer managed by
-196
@@ -234,8 +234,6 @@ func (w *fakeWatcher) sendEventWaitNextCalled(ctx context.Context, event fsnotif
// fakeSubAgentClient implements SubAgentClient for testing purposes.
type fakeSubAgentClient struct {
logger slog.Logger
mu sync.Mutex // Protects following.
agents map[uuid.UUID]agentcontainers.SubAgent
listErrC chan error // If set, send to return error, close to return nil.
@@ -256,8 +254,6 @@ func (m *fakeSubAgentClient) List(ctx context.Context) ([]agentcontainers.SubAge
}
}
}
m.mu.Lock()
defer m.mu.Unlock()
var agents []agentcontainers.SubAgent
for _, agent := range m.agents {
agents = append(agents, agent)
@@ -287,9 +283,6 @@ func (m *fakeSubAgentClient) Create(ctx context.Context, agent agentcontainers.S
return agentcontainers.SubAgent{}, xerrors.New("operating system must be set")
}
m.mu.Lock()
defer m.mu.Unlock()
for _, a := range m.agents {
if a.Name == agent.Name {
return agentcontainers.SubAgent{}, &pq.Error{
@@ -321,8 +314,6 @@ func (m *fakeSubAgentClient) Delete(ctx context.Context, id uuid.UUID) error {
}
}
}
m.mu.Lock()
defer m.mu.Unlock()
if m.agents == nil {
m.agents = make(map[uuid.UUID]agentcontainers.SubAgent)
}
@@ -1641,77 +1632,6 @@ func TestAPI(t *testing.T) {
require.NotNil(t, response.Devcontainers[0].Container, "container should not be nil")
})
// Verify that modifying a config file broadcasts the dirty status
// over websocket immediately.
t.Run("FileWatcherDirtyBroadcast", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
configPath := "/workspace/project/.devcontainer/devcontainer.json"
fWatcher := newFakeWatcher(t)
fLister := &fakeContainerCLI{
containers: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{
{
ID: "container-id",
FriendlyName: "container-name",
Running: true,
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/workspace/project",
agentcontainers.DevcontainerConfigFileLabel: configPath,
},
},
},
},
}
mClock := quartz.NewMock(t)
tickerTrap := mClock.Trap().TickerFunc("updaterLoop")
api := agentcontainers.NewAPI(
slogtest.Make(t, nil).Leveled(slog.LevelDebug),
agentcontainers.WithContainerCLI(fLister),
agentcontainers.WithWatcher(fWatcher),
agentcontainers.WithClock(mClock),
)
api.Start()
defer api.Close()
srv := httptest.NewServer(api.Routes())
defer srv.Close()
tickerTrap.MustWait(ctx).MustRelease(ctx)
tickerTrap.Close()
wsConn, resp, err := websocket.Dial(ctx, "ws"+strings.TrimPrefix(srv.URL, "http")+"/watch", nil)
require.NoError(t, err)
if resp != nil && resp.Body != nil {
defer resp.Body.Close()
}
defer wsConn.Close(websocket.StatusNormalClosure, "")
// Read and discard initial state.
_, _, err = wsConn.Read(ctx)
require.NoError(t, err)
fWatcher.waitNext(ctx)
fWatcher.sendEventWaitNextCalled(ctx, fsnotify.Event{
Name: configPath,
Op: fsnotify.Write,
})
// Verify dirty status is broadcast without advancing the clock.
_, msg, err := wsConn.Read(ctx)
require.NoError(t, err)
var response codersdk.WorkspaceAgentListContainersResponse
err = json.Unmarshal(msg, &response)
require.NoError(t, err)
require.Len(t, response.Devcontainers, 1)
assert.True(t, response.Devcontainers[0].Dirty,
"devcontainer should be marked as dirty after config file modification")
})
t.Run("SubAgentLifecycle", func(t *testing.T) {
t.Parallel()
@@ -2150,122 +2070,6 @@ func TestAPI(t *testing.T) {
require.Equal(t, "", response.Devcontainers[0].Error)
})
// This test verifies that when devcontainer up fails due to a
// lifecycle script error (such as postCreateCommand failing) but the
// container was successfully created, we still proceed with the
// devcontainer. The container should be available for use and the
// agent should be injected.
t.Run("DuringUpWithContainerID", func(t *testing.T) {
t.Parallel()
var (
ctx = testutil.Context(t, testutil.WaitMedium)
logger = slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
mClock = quartz.NewMock(t)
testContainer = codersdk.WorkspaceAgentContainer{
ID: "test-container-id",
FriendlyName: "test-container",
Image: "test-image",
Running: true,
CreatedAt: time.Now(),
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/workspaces/project",
agentcontainers.DevcontainerConfigFileLabel: "/workspaces/project/.devcontainer/devcontainer.json",
},
}
fCCLI = &fakeContainerCLI{
containers: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{testContainer},
},
arch: "amd64",
}
fDCCLI = &fakeDevcontainerCLI{
upID: testContainer.ID,
upErrC: make(chan func() error, 1),
}
fSAC = &fakeSubAgentClient{
logger: logger.Named("fakeSubAgentClient"),
}
testDevcontainer = codersdk.WorkspaceAgentDevcontainer{
ID: uuid.New(),
Name: "test-devcontainer",
WorkspaceFolder: "/workspaces/project",
ConfigPath: "/workspaces/project/.devcontainer/devcontainer.json",
Status: codersdk.WorkspaceAgentDevcontainerStatusStopped,
}
)
mClock.Set(time.Now()).MustWait(ctx)
tickerTrap := mClock.Trap().TickerFunc("updaterLoop")
nowRecreateSuccessTrap := mClock.Trap().Now("recreate", "successTimes")
api := agentcontainers.NewAPI(logger,
agentcontainers.WithClock(mClock),
agentcontainers.WithContainerCLI(fCCLI),
agentcontainers.WithDevcontainerCLI(fDCCLI),
agentcontainers.WithDevcontainers(
[]codersdk.WorkspaceAgentDevcontainer{testDevcontainer},
[]codersdk.WorkspaceAgentScript{{ID: testDevcontainer.ID, LogSourceID: uuid.New()}},
),
agentcontainers.WithSubAgentClient(fSAC),
agentcontainers.WithSubAgentURL("test-subagent-url"),
agentcontainers.WithWatcher(watcher.NewNoop()),
)
api.Start()
defer func() {
close(fDCCLI.upErrC)
api.Close()
}()
r := chi.NewRouter()
r.Mount("/", api.Routes())
tickerTrap.MustWait(ctx).MustRelease(ctx)
tickerTrap.Close()
// Send a recreate request to trigger devcontainer up.
req := httptest.NewRequest(http.MethodPost, "/devcontainers/"+testDevcontainer.ID.String()+"/recreate", nil)
rec := httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusAccepted, rec.Code)
// Simulate a lifecycle script failure. The devcontainer CLI
// will return an error but also provide a container ID since
// the container was created before the script failed.
simulatedError := xerrors.New("postCreateCommand failed with exit code 1")
testutil.RequireSend(ctx, t, fDCCLI.upErrC, func() error { return simulatedError })
// Wait for the recreate operation to complete. We expect it to
// record a success time because the container was created.
nowRecreateSuccessTrap.MustWait(ctx).MustRelease(ctx)
nowRecreateSuccessTrap.Close()
// Advance the clock to run the devcontainer state update routine.
_, aw := mClock.AdvanceNext()
aw.MustWait(ctx)
req = httptest.NewRequest(http.MethodGet, "/", nil)
rec = httptest.NewRecorder()
r.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
var response codersdk.WorkspaceAgentListContainersResponse
err := json.NewDecoder(rec.Body).Decode(&response)
require.NoError(t, err)
// Verify that the devcontainer is running and has the container
// associated with it despite the lifecycle script error. The
// error may be cleared during refresh if agent injection
// succeeds, but the important thing is that the container is
// available for use.
require.Len(t, response.Devcontainers, 1)
assert.Equal(t, codersdk.WorkspaceAgentDevcontainerStatusRunning, response.Devcontainers[0].Status)
require.NotNil(t, response.Devcontainers[0].Container)
assert.Equal(t, testContainer.ID, response.Devcontainers[0].Container.ID)
})
t.Run("DuringInjection", func(t *testing.T) {
t.Parallel()
+15 -16
@@ -263,14 +263,11 @@ func (d *devcontainerCLI) Up(ctx context.Context, workspaceFolder, configPath st
}
if err := cmd.Run(); err != nil {
result, err2 := parseDevcontainerCLILastLine[devcontainerCLIResult](ctx, logger, stdoutBuf.Bytes())
_, err2 := parseDevcontainerCLILastLine[devcontainerCLIResult](ctx, logger, stdoutBuf.Bytes())
if err2 != nil {
err = errors.Join(err, err2)
}
// Return the container ID if available, even if there was an error.
// This can happen if the container was created successfully but a
// lifecycle script (e.g. postCreateCommand) failed.
return result.ContainerID, err
return "", err
}
result, err := parseDevcontainerCLILastLine[devcontainerCLIResult](ctx, logger, stdoutBuf.Bytes())
@@ -278,13 +275,6 @@ func (d *devcontainerCLI) Up(ctx context.Context, workspaceFolder, configPath st
return "", err
}
// Check if the result indicates an error (e.g. lifecycle script failure)
// but still has a container ID, allowing the caller to potentially
// continue with the container that was created.
if err := result.Err(); err != nil {
return result.ContainerID, err
}
return result.ContainerID, nil
}
@@ -404,10 +394,7 @@ func parseDevcontainerCLILastLine[T any](ctx context.Context, logger slog.Logger
type devcontainerCLIResult struct {
Outcome string `json:"outcome"` // "error", "success".
// The following fields are typically set if outcome is success, but
// ContainerID may also be present when outcome is error if the
// container was created but a lifecycle script (e.g. postCreateCommand)
// failed.
// The following fields are set if outcome is success.
ContainerID string `json:"containerId"`
RemoteUser string `json:"remoteUser"`
RemoteWorkspaceFolder string `json:"remoteWorkspaceFolder"`
@@ -417,6 +404,18 @@ type devcontainerCLIResult struct {
Description string `json:"description"`
}
func (r *devcontainerCLIResult) UnmarshalJSON(data []byte) error {
type wrapperResult devcontainerCLIResult
var wrappedResult wrapperResult
if err := json.Unmarshal(data, &wrappedResult); err != nil {
return err
}
*r = devcontainerCLIResult(wrappedResult)
return r.Err()
}
func (r devcontainerCLIResult) Err() error {
if r.Outcome == "success" {
return nil
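The `wrapperResult` alias in the new `UnmarshalJSON` above is the standard Go trick for decoding a type that defines its own `UnmarshalJSON`: the alias type has no methods, so the inner `json.Unmarshal` uses the default decoder instead of recursing. Returning `r.Err()` then surfaces a non-success outcome at decode time. A minimal standalone sketch of the same pattern, with illustrative names:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type result struct {
	Outcome string `json:"outcome"`
}

// UnmarshalJSON decodes through an alias type so the json.Unmarshal call
// below does not recurse back into this method, then rejects any
// non-success outcome eagerly.
func (r *result) UnmarshalJSON(data []byte) error {
	type alias result
	var a alias
	if err := json.Unmarshal(data, &a); err != nil {
		return err
	}
	*r = result(a)
	if r.Outcome != "success" {
		return fmt.Errorf("unexpected outcome: %q", r.Outcome)
	}
	return nil
}

func main() {
	var r result
	if err := json.Unmarshal([]byte(`{"outcome":"error"}`), &r); err != nil {
		fmt.Fprintln(os.Stderr, err) // unexpected outcome: "error"
	}
}
```

This is also why the test table below expects an error and an empty container ID for the docker-error and bad-outcome logs: with this change, a failed outcome never yields a container ID.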
+41 -64
@@ -42,63 +42,56 @@ func TestDevcontainerCLI_ArgsAndParsing(t *testing.T) {
t.Parallel()
tests := []struct {
name string
logFile string
workspace string
config string
opts []agentcontainers.DevcontainerCLIUpOptions
wantArgs string
wantError bool
wantContainerID bool // If true, expect a container ID even when wantError is true.
name string
logFile string
workspace string
config string
opts []agentcontainers.DevcontainerCLIUpOptions
wantArgs string
wantError bool
}{
{
name: "success",
logFile: "up.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: false,
wantContainerID: true,
name: "success",
logFile: "up.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: false,
},
{
name: "success with config",
logFile: "up.log",
workspace: "/test/workspace",
config: "/test/config.json",
wantArgs: "up --log-format json --workspace-folder /test/workspace --config /test/config.json",
wantError: false,
wantContainerID: true,
name: "success with config",
logFile: "up.log",
workspace: "/test/workspace",
config: "/test/config.json",
wantArgs: "up --log-format json --workspace-folder /test/workspace --config /test/config.json",
wantError: false,
},
{
name: "already exists",
logFile: "up-already-exists.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: false,
wantContainerID: true,
name: "already exists",
logFile: "up-already-exists.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: false,
},
{
name: "docker error",
logFile: "up-error-docker.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
wantContainerID: false,
name: "docker error",
logFile: "up-error-docker.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
},
{
name: "bad outcome",
logFile: "up-error-bad-outcome.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
wantContainerID: false,
name: "bad outcome",
logFile: "up-error-bad-outcome.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
},
{
name: "does not exist",
logFile: "up-error-does-not-exist.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
wantContainerID: false,
name: "does not exist",
logFile: "up-error-does-not-exist.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
},
{
name: "with remove existing container",
@@ -107,21 +100,8 @@ func TestDevcontainerCLI_ArgsAndParsing(t *testing.T) {
opts: []agentcontainers.DevcontainerCLIUpOptions{
agentcontainers.WithRemoveExistingContainer(),
},
wantArgs: "up --log-format json --workspace-folder /test/workspace --remove-existing-container",
wantError: false,
wantContainerID: true,
},
{
// This test verifies that when a lifecycle script like
// postCreateCommand fails, the CLI returns both an error
// and a container ID. The caller can then proceed with
// agent injection into the created container.
name: "lifecycle script failure with container",
logFile: "up-error-lifecycle-script.log",
workspace: "/test/workspace",
wantArgs: "up --log-format json --workspace-folder /test/workspace",
wantError: true,
wantContainerID: true,
wantArgs: "up --log-format json --workspace-folder /test/workspace --remove-existing-container",
wantError: false,
},
}
@@ -142,13 +122,10 @@ func TestDevcontainerCLI_ArgsAndParsing(t *testing.T) {
containerID, err := dccli.Up(ctx, tt.workspace, tt.config, tt.opts...)
if tt.wantError {
assert.Error(t, err, "want error")
assert.Empty(t, containerID, "expected empty container ID")
} else {
assert.NoError(t, err, "want no error")
}
if tt.wantContainerID {
assert.NotEmpty(t, containerID, "expected non-empty container ID")
} else {
assert.Empty(t, containerID, "expected empty container ID")
}
})
}
File diff suppressed because one or more lines are too long
+2 -11
@@ -391,19 +391,10 @@ func (s *Server) sessionHandler(session ssh.Session) {
env := session.Environ()
magicType, magicTypeRaw, env := extractMagicSessionType(env)
// It's not safe to assume RemoteAddr() returns a non-nil value. slog.F usage is fine because it correctly
// handles nil.
// c.f. https://github.com/coder/internal/issues/1143
remoteAddr := session.RemoteAddr()
remoteAddrString := ""
if remoteAddr != nil {
remoteAddrString = remoteAddr.String()
}
if !s.trackSession(session, true) {
reason := "unable to accept new session, server is closing"
// Report connection attempt even if we couldn't accept it.
disconnected := s.config.ReportConnection(id, magicType, remoteAddrString)
disconnected := s.config.ReportConnection(id, magicType, session.RemoteAddr().String())
defer disconnected(1, reason)
logger.Info(ctx, reason)
@@ -438,7 +429,7 @@ func (s *Server) sessionHandler(session ssh.Session) {
scr := &sessionCloseTracker{Session: session}
session = scr
disconnected := s.config.ReportConnection(id, magicType, remoteAddrString)
disconnected := s.config.ReportConnection(id, magicType, session.RemoteAddr().String())
defer func() {
disconnected(scr.exitCode(), reason)
}()
+1 -1
@@ -176,7 +176,7 @@ func (x *x11Forwarder) listenForConnections(
var originPort uint32
if tcpConn, ok := conn.(*net.TCPConn); ok {
if tcpAddr, ok := tcpConn.LocalAddr().(*net.TCPAddr); ok && tcpAddr != nil {
if tcpAddr, ok := tcpConn.LocalAddr().(*net.TCPAddr); ok {
originAddr = tcpAddr.IP.String()
// #nosec G115 - Safe conversion as TCP port numbers are within uint32 range (0-65535)
originPort = uint32(tcpAddr.Port)
+3 -13
@@ -74,21 +74,11 @@ func (s *Server) Serve(ctx, hardCtx context.Context, l net.Listener) (retErr err
break
}
clog := s.logger.With(
slog.F("remote", conn.RemoteAddr()),
slog.F("local", conn.LocalAddr()))
slog.F("remote", conn.RemoteAddr().String()),
slog.F("local", conn.LocalAddr().String()))
clog.Info(ctx, "accepted conn")
// It's not safe to assume RemoteAddr() returns a non-nil value. slog.F usage is fine because it correctly
// handles nil.
// c.f. https://github.com/coder/internal/issues/1143
remoteAddr := conn.RemoteAddr()
remoteAddrString := ""
if remoteAddr != nil {
remoteAddrString = remoteAddr.String()
}
wg.Add(1)
disconnected := s.reportConnection(uuid.New(), remoteAddrString)
disconnected := s.reportConnection(uuid.New(), conn.RemoteAddr().String())
closed := make(chan struct{})
go func() {
defer wg.Done()
-3
@@ -106,9 +106,6 @@ var _ OutputFormat = &tableFormat{}
//
// defaultColumns is optional and specifies the default columns to display. If
// not specified, all columns are displayed by default.
//
// If the data is empty, an empty string is returned. Callers should check for
// this and provide an appropriate message to the user.
func TableFormat(out any, defaultColumns []string) OutputFormat {
v := reflect.Indirect(reflect.ValueOf(out))
if v.Kind() != reflect.Slice {
-6
@@ -180,12 +180,6 @@ func DisplayTable(out any, sort string, filterColumns []string) (string, error)
func renderTable(out any, sort string, headers table.Row, filterColumns []string) (string, error) {
v := reflect.Indirect(reflect.ValueOf(out))
// Return empty string for empty data. Callers should check for this
// and provide an appropriate message to the user.
if v.Kind() == reflect.Slice && v.Len() == 0 {
return "", nil
}
headers = filterHeaders(headers, filterColumns)
columnConfigs := createColumnConfigs(headers, filterColumns)
-9
@@ -472,15 +472,6 @@ alice 1
require.NoError(t, err)
compareTables(t, expected, out)
})
t.Run("Empty", func(t *testing.T) {
t.Parallel()
var in []tableTest4
out, err := cliui.DisplayTable(in, "", nil)
require.NoError(t, err)
require.Empty(t, out)
})
}
// compareTables normalizes the incoming table lines
+3 -3
@@ -90,6 +90,7 @@ func TestExpRpty(t *testing.T) {
wantLabel := "coder.devcontainers.TestExpRpty.Container"
client, workspace, agentToken := setupWorkspaceForAgent(t)
ctx := testutil.Context(t, testutil.WaitLong)
pool, err := dockertest.NewPool("")
require.NoError(t, err, "Could not connect to docker")
ct, err := pool.RunWithOptions(&dockertest.RunOptions{
@@ -127,15 +128,14 @@ func TestExpRpty(t *testing.T) {
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitLong)
cmdDone := tGo(t, func() {
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
})
pty.ExpectMatchContext(ctx, " #")
pty.ExpectMatch(" #")
pty.WriteLine("hostname")
pty.ExpectMatchContext(ctx, ct.Container.Config.Hostname)
pty.ExpectMatch(ct.Container.Config.Hostname)
pty.WriteLine("exit")
<-cmdDone
})
+8 -10
@@ -1559,15 +1559,6 @@ func (r *RootCmd) scaletestDashboard() *serpent.Command {
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
tracer := tracerProvider.Tracer(scaletestTracerName)
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("could not parse --output flags")
}
reg := prometheus.NewRegistry()
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
// Allow time for traces to flush even if command context is
// canceled. This is a no-op if tracing is not enabled.
@@ -1579,7 +1570,14 @@ func (r *RootCmd) scaletestDashboard() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
tracer := tracerProvider.Tracer(scaletestTracerName)
outputs, err := output.parse()
if err != nil {
return xerrors.Errorf("could not parse --output flags")
}
reg := prometheus.NewRegistry()
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
metrics := dashboard.NewMetrics(reg)
th := harness.NewTestHarness(strategy.toStrategy(), cleanupStrategy.toStrategy())
+8 -9
@@ -84,15 +84,6 @@ func (r *RootCmd) scaletestPrebuilds() *serpent.Command {
if err != nil {
return xerrors.Errorf("create tracer provider: %w", err)
}
tracer := tracerProvider.Tracer(scaletestTracerName)
reg := prometheus.NewRegistry()
metrics := prebuilds.NewMetrics(reg)
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
defer func() {
_, _ = fmt.Fprintln(inv.Stderr, "\nUploading traces...")
if err := closeTracing(ctx); err != nil {
@@ -101,6 +92,14 @@ func (r *RootCmd) scaletestPrebuilds() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stderr, "Waiting %s for prometheus metrics to be scraped\n", prometheusFlags.Wait)
<-time.After(prometheusFlags.Wait)
}()
tracer := tracerProvider.Tracer(scaletestTracerName)
reg := prometheus.NewRegistry()
metrics := prebuilds.NewMetrics(reg)
logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
err = client.PutPrebuildsSettings(ctx, codersdk.PrebuildsSettings{
ReconciliationPaused: true,
+6 -6
@@ -139,12 +139,7 @@ func (r *RootCmd) list() *serpent.Command {
return err
}
out, err := formatter.Format(inv.Context(), res)
if err != nil {
return err
}
if out == "" {
if len(res) == 0 && formatter.FormatID() != cliui.JSONFormat().ID() {
pretty.Fprintf(inv.Stderr, cliui.DefaultStyles.Prompt, "No workspaces found! Create one:\n")
_, _ = fmt.Fprintln(inv.Stderr)
_, _ = fmt.Fprintln(inv.Stderr, " "+pretty.Sprint(cliui.DefaultStyles.Code, "coder create <name>"))
@@ -152,6 +147,11 @@ func (r *RootCmd) list() *serpent.Command {
return nil
}
out, err := formatter.Format(inv.Context(), res)
if err != nil {
return err
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
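The reordering here (and in the list-style commands that follow) is needed because `renderTable` no longer returns an empty string for empty data, so emptiness must be checked before formatting; JSON output is exempted so scripts still receive a well-formed empty array. A toy sketch of the resulting control flow, independent of the repo's `cliui` types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// printList mirrors the reordered flow: empty results short-circuit with a
// stderr hint for human-readable formats, while JSON still emits a valid
// empty array for scripts to consume.
func printList(format string, rows []string) error {
	if len(rows) == 0 && format != "json" {
		fmt.Fprintln(os.Stderr, "No workspaces found!")
		return nil
	}
	if format == "json" {
		out, err := json.Marshal(rows)
		if err != nil {
			return err
		}
		fmt.Println(string(out))
		return nil
	}
	for _, row := range rows {
		fmt.Println(row)
	}
	return nil
}

func main() {
	_ = printList("table", nil)       // stderr hint only
	_ = printList("json", []string{}) // prints []
}
```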
-5
@@ -170,11 +170,6 @@ func (r *RootCmd) listOrganizationMembers(orgContext *OrganizationContext) *serp
return err
}
if out == "" {
cliui.Infof(inv.Stderr, "No organization members found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
-5
@@ -92,11 +92,6 @@ func (r *RootCmd) showOrganizationRoles(orgContext *OrganizationContext) *serpen
return err
}
if out == "" {
cliui.Infof(inv.Stderr, "No organization roles found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
-5
@@ -110,11 +110,6 @@ func (r *RootCmd) provisionerJobsList() *serpent.Command {
return xerrors.Errorf("display provisioner daemons: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No provisioner jobs found.")
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, out)
return nil
+5 -5
@@ -74,6 +74,11 @@ func (r *RootCmd) provisionerList() *serpent.Command {
return xerrors.Errorf("list provisioner daemons: %w", err)
}
if len(daemons) == 0 {
_, _ = fmt.Fprintln(inv.Stdout, "No provisioner daemons found")
return nil
}
var rows []provisionerDaemonRow
for _, daemon := range daemons {
rows = append(rows, provisionerDaemonRow{
@@ -87,11 +92,6 @@ func (r *RootCmd) provisionerList() *serpent.Command {
return xerrors.Errorf("display provisioner daemons: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No provisioner daemons found.")
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, out)
return nil
-5
@@ -129,11 +129,6 @@ func (r *RootCmd) scheduleShow() *serpent.Command {
return err
}
if out == "" {
cliui.Infof(inv.Stderr, "No schedules found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
+3 -3
@@ -2052,6 +2052,7 @@ func TestSSH_Container(t *testing.T) {
t.Parallel()
client, workspace, agentToken := setupWorkspaceForAgent(t)
ctx := testutil.Context(t, testutil.WaitLong)
pool, err := dockertest.NewPool("")
require.NoError(t, err, "Could not connect to docker")
ct, err := pool.RunWithOptions(&dockertest.RunOptions{
@@ -2086,15 +2087,14 @@ func TestSSH_Container(t *testing.T) {
clitest.SetupConfig(t, client, root)
ptty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitLong)
cmdDone := tGo(t, func() {
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
})
ptty.ExpectMatchContext(ctx, " #")
ptty.ExpectMatch(" #")
ptty.WriteLine("hostname")
ptty.ExpectMatchContext(ctx, ct.Container.Config.Hostname)
ptty.ExpectMatch(ct.Container.Config.Hostname)
ptty.WriteLine("exit")
<-cmdDone
})
+17
@@ -87,6 +87,7 @@ func buildNumberOption(n *int64) serpent.Option {
func (r *RootCmd) statePush() *serpent.Command {
var buildNumber int64
var noBuild bool
cmd := &serpent.Command{
Use: "push <workspace> <file>",
Short: "Push a Terraform state file to a workspace.",
@@ -126,6 +127,16 @@ func (r *RootCmd) statePush() *serpent.Command {
return err
}
if noBuild {
// Update state directly without triggering a build.
err = client.UpdateWorkspaceBuildState(inv.Context(), build.ID, state)
if err != nil {
return err
}
_, _ = fmt.Fprintln(inv.Stdout, "State updated successfully.")
return nil
}
build, err = client.CreateWorkspaceBuild(inv.Context(), workspace.ID, codersdk.CreateWorkspaceBuildRequest{
TemplateVersionID: build.TemplateVersionID,
Transition: build.Transition,
@@ -139,6 +150,12 @@ func (r *RootCmd) statePush() *serpent.Command {
}
cmd.Options = serpent.OptionSet{
buildNumberOption(&buildNumber),
{
Flag: "no-build",
FlagShorthand: "n",
Description: "Update the state without triggering a workspace build. Useful for state-only migrations.",
Value: serpent.BoolOf(&noBuild),
},
}
return cmd
}
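The CLI entry point is `coder state push --no-build <workspace> <file>`, which calls the `UpdateWorkspaceBuildState` client method introduced by this change. A hedged sketch of driving the same path from the SDK directly; the deployment URL, token source, state file name, and build ID below are all placeholders:

```go
package main

import (
	"context"
	"log"
	"net/url"
	"os"

	"github.com/coder/coder/v2/codersdk"
	"github.com/google/uuid"
)

func main() {
	// Placeholder deployment URL and token; adjust for a real deployment.
	u, err := url.Parse("https://coder.example.com")
	if err != nil {
		log.Fatal(err)
	}
	client := codersdk.New(u)
	client.SetSessionToken(os.Getenv("CODER_SESSION_TOKEN"))

	state, err := os.ReadFile("terraform.tfstate") // placeholder file
	if err != nil {
		log.Fatal(err)
	}

	// In practice buildID comes from the workspace's latest build; a nil
	// UUID keeps this sketch self-contained.
	buildID := uuid.Nil

	// Mirrors the --no-build path above: write state directly without
	// creating a new workspace build.
	if err := client.UpdateWorkspaceBuildState(context.Background(), buildID, state); err != nil {
		log.Fatal(err)
	}
}
```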
+47
@@ -2,6 +2,7 @@ package cli_test
import (
"bytes"
"context"
"fmt"
"os"
"path/filepath"
@@ -10,6 +11,7 @@ import (
"testing"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/stretchr/testify/require"
@@ -158,4 +160,49 @@ func TestStatePush(t *testing.T) {
err := inv.Run()
require.NoError(t, err)
})
t.Run("NoBuild", func(t *testing.T) {
t.Parallel()
client, store := coderdtest.NewWithDatabase(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
templateAdmin, taUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, rbac.RoleTemplateAdmin())
initialState := []byte("initial state")
r := dbfake.WorkspaceBuild(t, store, database.WorkspaceTable{
OrganizationID: owner.OrganizationID,
OwnerID: taUser.ID,
}).
Seed(database.WorkspaceBuild{ProvisionerState: initialState}).
Do()
wantState := []byte("updated state")
stateFile, err := os.CreateTemp(t.TempDir(), "")
require.NoError(t, err)
_, err = stateFile.Write(wantState)
require.NoError(t, err)
err = stateFile.Close()
require.NoError(t, err)
inv, root := clitest.New(t, "state", "push", "--no-build", r.Workspace.Name, stateFile.Name())
clitest.SetupConfig(t, templateAdmin, root)
var stdout bytes.Buffer
inv.Stdout = &stdout
err = inv.Run()
require.NoError(t, err)
require.Contains(t, stdout.String(), "State updated successfully")
// Verify the state was updated by pulling it.
inv, root = clitest.New(t, "state", "pull", r.Workspace.Name)
var gotState bytes.Buffer
inv.Stdout = &gotState
clitest.SetupConfig(t, templateAdmin, root)
err = inv.Run()
require.NoError(t, err)
require.Equal(t, wantState, bytes.TrimSpace(gotState.Bytes()))
// Verify no new build was created.
builds, err := store.GetWorkspaceBuildsByWorkspaceID(dbauthz.AsSystemRestricted(context.Background()), database.GetWorkspaceBuildsByWorkspaceIDParams{
WorkspaceID: r.Workspace.ID,
})
require.NoError(t, err)
require.Len(t, builds, 1, "expected only the initial build, no new build should be created")
})
}
+6 -4
@@ -157,6 +157,12 @@ func (r *RootCmd) taskList() *serpent.Command {
return nil
}
// If no rows and not JSON, show a friendly message.
if len(tasks) == 0 && formatter.FormatID() != cliui.JSONFormat().ID() {
_, _ = fmt.Fprintln(inv.Stderr, "No tasks found.")
return nil
}
rows := make([]taskListRow, len(tasks))
now := time.Now()
for i := range tasks {
@@ -167,10 +173,6 @@ func (r *RootCmd) taskList() *serpent.Command {
if err != nil {
return xerrors.Errorf("format tasks: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No tasks found.")
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, out)
return nil
},
-5
@@ -59,11 +59,6 @@ func (r *RootCmd) taskLogs() *serpent.Command {
return xerrors.Errorf("format task logs: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No task logs found.")
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, out)
return nil
},
+6 -6
@@ -30,18 +30,18 @@ func (r *RootCmd) templateList() *serpent.Command {
return err
}
if len(templates) == 0 {
_, _ = fmt.Fprintf(inv.Stderr, "%s No templates found! Create one:\n\n", Caret)
_, _ = fmt.Fprintln(inv.Stderr, color.HiMagentaString(" $ coder templates push <directory>\n"))
return nil
}
rows := templatesToRows(templates...)
out, err := formatter.Format(inv.Context(), rows)
if err != nil {
return err
}
if out == "" {
_, _ = fmt.Fprintf(inv.Stderr, "%s No templates found! Create one:\n\n", Caret)
_, _ = fmt.Fprintln(inv.Stderr, color.HiMagentaString(" $ coder templates push <directory>\n"))
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
+2 -7
@@ -106,7 +106,7 @@ func (r *RootCmd) templatePresetsList() *serpent.Command {
if len(presets) == 0 {
cliui.Infof(
inv.Stdout,
"No presets found for template %q and template-version %q.", template.Name, version.Name,
"No presets found for template %q and template-version %q.\n", template.Name, version.Name,
)
return nil
}
@@ -115,7 +115,7 @@ func (r *RootCmd) templatePresetsList() *serpent.Command {
if formatter.FormatID() == "table" {
cliui.Infof(
inv.Stdout,
"Showing presets for template %q and template version %q.", template.Name, version.Name,
"Showing presets for template %q and template version %q.\n", template.Name, version.Name,
)
}
rows := templatePresetsToRows(presets...)
@@ -124,11 +124,6 @@ func (r *RootCmd) templatePresetsList() *serpent.Command {
return xerrors.Errorf("render table: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No template presets found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
-5
@@ -121,11 +121,6 @@ func (r *RootCmd) templateVersionsList() *serpent.Command {
return xerrors.Errorf("render table: %w", err)
}
if out == "" {
cliui.Infof(inv.Stderr, "No template versions found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
-38
@@ -118,23 +118,12 @@ AI BRIDGE OPTIONS:
requests (requires the "oauth2" and "mcp-server-http" experiments to
be enabled).
--aibridge-max-concurrency int, $CODER_AIBRIDGE_MAX_CONCURRENCY (default: 0)
Maximum number of concurrent AI Bridge requests. Set to 0 to disable
(unlimited).
--aibridge-openai-base-url string, $CODER_AIBRIDGE_OPENAI_BASE_URL (default: https://api.openai.com/v1/)
The base URL of the OpenAI API.
--aibridge-openai-key string, $CODER_AIBRIDGE_OPENAI_KEY
The key to authenticate against the OpenAI API.
--aibridge-rate-limit int, $CODER_AIBRIDGE_RATE_LIMIT (default: 0)
Maximum number of AI Bridge requests per rate window. Set to 0 to
disable rate limiting.
--aibridge-rate-window duration, $CODER_AIBRIDGE_RATE_WINDOW (default: 1m)
Duration of the rate limiting window for AI Bridge requests.
CLIENT OPTIONS:
These options change the behavior of how clients interact with the Coder.
Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.
@@ -707,33 +696,6 @@ updating, and deleting workspace resources.
Number of provisioner daemons to create on start. If builds are stuck
in queued state for a long time, consider increasing this.
RETENTION OPTIONS:
Configure data retention policies for various database tables. Retention
policies automatically purge old data to reduce database size and improve
performance. Setting a retention duration to 0 disables automatic purging for
that data type.
--api-keys-retention duration, $CODER_API_KEYS_RETENTION (default: 7d)
How long expired API keys are retained before being deleted. Keeping
expired keys allows the backend to return a more helpful error when a
user tries to use an expired key. Set to 0 to disable automatic
deletion of expired keys.
--audit-logs-retention duration, $CODER_AUDIT_LOGS_RETENTION (default: 0)
How long audit log entries are retained. Set to 0 to disable (keep
indefinitely). We advise keeping audit logs for at least a year, and
in accordance with your compliance requirements.
--connection-logs-retention duration, $CODER_CONNECTION_LOGS_RETENTION (default: 0)
How long connection log entries are retained. Set to 0 to disable
(keep indefinitely).
--workspace-agent-logs-retention duration, $CODER_WORKSPACE_AGENT_LOGS_RETENTION (default: 7d)
How long workspace agent logs are retained. Logs from non-latest
builds are deleted if the agent hasn't connected within this period.
Logs from the latest build are always retained. Set to 0 to disable
automatic deletion.
TELEMETRY OPTIONS:
Telemetry is critical to our ability to improve Coder. We strip all personal
information before sending data to our servers. Please only disable telemetry
+4
@@ -9,5 +9,9 @@ OPTIONS:
-b, --build int
Specify a workspace build to target by name. Defaults to latest.
-n, --no-build bool
Update the state without triggering a workspace build. Useful for
state-only migrations.
———
Run `coder --help` for a list of global options.
+13 -35
@@ -720,12 +720,25 @@ aibridge:
# The base URL of the OpenAI API.
# (default: https://api.openai.com/v1/, type: string)
openai_base_url: https://api.openai.com/v1/
# The key to authenticate against the OpenAI API.
# (default: <unset>, type: string)
openai_key: ""
# The base URL of the Anthropic API.
# (default: https://api.anthropic.com/, type: string)
anthropic_base_url: https://api.anthropic.com/
# The key to authenticate against the Anthropic API.
# (default: <unset>, type: string)
anthropic_key: ""
# The AWS Bedrock API region.
# (default: <unset>, type: string)
bedrock_region: ""
# The access key to authenticate against the AWS Bedrock API.
# (default: <unset>, type: string)
bedrock_access_key: ""
# The access key secret to use with the access key to authenticate against the AWS
# Bedrock API.
# (default: <unset>, type: string)
bedrock_access_key_secret: ""
# The model to use when making requests to the AWS Bedrock API.
# (default: global.anthropic.claude-sonnet-4-5-20250929-v1:0, type: string)
bedrock_model: global.anthropic.claude-sonnet-4-5-20250929-v1:0
@@ -742,38 +755,3 @@ aibridge:
# (token, prompt, tool use).
# (default: 60d, type: duration)
retention: 1440h0m0s
# Maximum number of concurrent AI Bridge requests. Set to 0 to disable
# (unlimited).
# (default: 0, type: int)
max_concurrency: 0
# Maximum number of AI Bridge requests per rate window. Set to 0 to disable rate
# limiting.
# (default: 0, type: int)
rate_limit: 0
# Duration of the rate limiting window for AI Bridge requests.
# (default: 1m, type: duration)
rate_window: 1m0s
# Configure data retention policies for various database tables. Retention
# policies automatically purge old data to reduce database size and improve
# performance. Setting a retention duration to 0 disables automatic purging for
# that data type.
retention:
# How long audit log entries are retained. Set to 0 to disable (keep
# indefinitely). We advise keeping audit logs for at least a year, and in
# accordance with your compliance requirements.
# (default: 0, type: duration)
audit_logs: 0s
# How long connection log entries are retained. Set to 0 to disable (keep
# indefinitely).
# (default: 0, type: duration)
connection_logs: 0s
# How long expired API keys are retained before being deleted. Keeping expired
# keys allows the backend to return a more helpful error when a user tries to use
# an expired key. Set to 0 to disable automatic deletion of expired keys.
# (default: 7d, type: duration)
api_keys: 168h0m0s
# How long workspace agent logs are retained. Logs from non-latest builds are
# deleted if the agent hasn't connected within this period. Logs from the latest
# build are always retained. Set to 0 to disable automatic deletion.
# (default: 7d, type: duration)
workspace_agent_logs: 168h0m0s
+7 -5
@@ -246,6 +246,13 @@ func (r *RootCmd) listTokens() *serpent.Command {
return xerrors.Errorf("list tokens: %w", err)
}
if len(tokens) == 0 {
cliui.Infof(
inv.Stdout,
"No tokens found.\n",
)
}
displayTokens = make([]tokenListRow, len(tokens))
for i, token := range tokens {
@@ -257,11 +264,6 @@ func (r *RootCmd) listTokens() *serpent.Command {
return err
}
if out == "" {
cliui.Info(inv.Stderr, "No tokens found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
-1
@@ -34,7 +34,6 @@ func TestTokens(t *testing.T) {
clitest.SetupConfig(t, client, root)
buf := new(bytes.Buffer)
inv.Stdout = buf
inv.Stderr = buf
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
res := buf.String()
-5
@@ -58,11 +58,6 @@ func (r *RootCmd) userList() *serpent.Command {
return err
}
if out == "" {
cliui.Infof(inv.Stderr, "No users found.")
return nil
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
+2 -2
@@ -1680,8 +1680,8 @@ func TestTasksNotification(t *testing.T) {
require.NoError(t, err)
require.Len(t, workspaceAgent.Apps, 1)
require.GreaterOrEqual(t, len(workspaceAgent.Apps[0].Statuses), 1)
// Statuses are ordered by created_at DESC, so the first element is the latest.
require.Equal(t, tc.newAppStatus, workspaceAgent.Apps[0].Statuses[0].State)
latestStatusIndex := len(workspaceAgent.Apps[0].Statuses) - 1
require.Equal(t, tc.newAppStatus, workspaceAgent.Apps[0].Statuses[latestStatusIndex].State)
if tc.isNotificationSent {
// Then: A notification is sent to the workspace owner (memberUser)
+52 -36
@@ -1800,7 +1800,7 @@ const docTemplate = `{
"application/json"
],
"tags": [
"Enterprise"
"Organizations"
],
"summary": "Add new license",
"operationId": "add-new-license",
@@ -1836,7 +1836,7 @@ const docTemplate = `{
"application/json"
],
"tags": [
"Enterprise"
"Organizations"
],
"summary": "Update license entitlements",
"operationId": "update-license-entitlements",
@@ -10182,6 +10182,45 @@ const docTemplate = `{
}
}
}
},
"put": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": [
"application/json"
],
"tags": [
"Builds"
],
"summary": "Update workspace build state",
"operationId": "update-workspace-build-state",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Workspace build ID",
"name": "workspacebuild",
"in": "path",
"required": true
},
{
"description": "Request body",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/codersdk.UpdateWorkspaceBuildStateRequest"
}
}
],
"responses": {
"204": {
"description": "No Content"
}
}
}
},
"/workspacebuilds/{workspacebuild}/timings": {
@@ -11877,19 +11916,9 @@ const docTemplate = `{
"inject_coder_mcp_tools": {
"type": "boolean"
},
"max_concurrency": {
"description": "Overload protection settings.",
"type": "integer"
},
"openai": {
"$ref": "#/definitions/codersdk.AIBridgeOpenAIConfig"
},
"rate_limit": {
"type": "integer"
},
"rate_window": {
"type": "integer"
},
"retention": {
"type": "integer"
}
@@ -14312,9 +14341,6 @@ const docTemplate = `{
"redirect_to_access_url": {
"type": "boolean"
},
"retention": {
"$ref": "#/definitions/codersdk.RetentionConfig"
},
"scim_api_key": {
"type": "string"
},
@@ -17741,27 +17767,6 @@ const docTemplate = `{
}
}
},
"codersdk.RetentionConfig": {
"type": "object",
"properties": {
"api_keys": {
"description": "APIKeys controls how long expired API keys are retained before being deleted.\nKeys are only deleted if they have been expired for at least this duration.\nDefaults to 7 days to preserve existing behavior.",
"type": "integer"
},
"audit_logs": {
"description": "AuditLogs controls how long audit log entries are retained.\nSet to 0 to disable (keep indefinitely).",
"type": "integer"
},
"connection_logs": {
"description": "ConnectionLogs controls how long connection log entries are retained.\nSet to 0 to disable (keep indefinitely).",
"type": "integer"
},
"workspace_agent_logs": {
"description": "WorkspaceAgentLogs controls how long workspace agent logs are retained.\nLogs are deleted if the agent hasn't connected within this period.\nLogs from the latest build are always retained regardless of age.\nDefaults to 7 days to preserve existing behavior.",
"type": "integer"
}
}
},
"codersdk.Role": {
"type": "object",
"properties": {
@@ -19436,6 +19441,17 @@ const docTemplate = `{
}
}
},
"codersdk.UpdateWorkspaceBuildStateRequest": {
"type": "object",
"properties": {
"state": {
"type": "array",
"items": {
"type": "integer"
}
}
}
},
"codersdk.UpdateWorkspaceDormancy": {
"type": "object",
"properties": {
+48 -36
@@ -1570,7 +1570,7 @@
],
"consumes": ["application/json"],
"produces": ["application/json"],
"tags": ["Enterprise"],
"tags": ["Organizations"],
"summary": "Add new license",
"operationId": "add-new-license",
"parameters": [
@@ -1602,7 +1602,7 @@
}
],
"produces": ["application/json"],
"tags": ["Enterprise"],
"tags": ["Organizations"],
"summary": "Update license entitlements",
"operationId": "update-license-entitlements",
"responses": {
@@ -9014,6 +9014,41 @@
}
}
}
},
"put": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": ["application/json"],
"tags": ["Builds"],
"summary": "Update workspace build state",
"operationId": "update-workspace-build-state",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Workspace build ID",
"name": "workspacebuild",
"in": "path",
"required": true
},
{
"description": "Request body",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/codersdk.UpdateWorkspaceBuildStateRequest"
}
}
],
"responses": {
"204": {
"description": "No Content"
}
}
}
},
"/workspacebuilds/{workspacebuild}/timings": {
@@ -10543,19 +10578,9 @@
"inject_coder_mcp_tools": {
"type": "boolean"
},
"max_concurrency": {
"description": "Overload protection settings.",
"type": "integer"
},
"openai": {
"$ref": "#/definitions/codersdk.AIBridgeOpenAIConfig"
},
"rate_limit": {
"type": "integer"
},
"rate_window": {
"type": "integer"
},
"retention": {
"type": "integer"
}
@@ -12896,9 +12921,6 @@
"redirect_to_access_url": {
"type": "boolean"
},
"retention": {
"$ref": "#/definitions/codersdk.RetentionConfig"
},
"scim_api_key": {
"type": "string"
},
@@ -16203,27 +16225,6 @@
}
}
},
"codersdk.RetentionConfig": {
"type": "object",
"properties": {
"api_keys": {
"description": "APIKeys controls how long expired API keys are retained before being deleted.\nKeys are only deleted if they have been expired for at least this duration.\nDefaults to 7 days to preserve existing behavior.",
"type": "integer"
},
"audit_logs": {
"description": "AuditLogs controls how long audit log entries are retained.\nSet to 0 to disable (keep indefinitely).",
"type": "integer"
},
"connection_logs": {
"description": "ConnectionLogs controls how long connection log entries are retained.\nSet to 0 to disable (keep indefinitely).",
"type": "integer"
},
"workspace_agent_logs": {
"description": "WorkspaceAgentLogs controls how long workspace agent logs are retained.\nLogs are deleted if the agent hasn't connected within this period.\nLogs from the latest build are always retained regardless of age.\nDefaults to 7 days to preserve existing behavior.",
"type": "integer"
}
}
},
"codersdk.Role": {
"type": "object",
"properties": {
@@ -17828,6 +17829,17 @@
}
}
},
"codersdk.UpdateWorkspaceBuildStateRequest": {
"type": "object",
"properties": {
"state": {
"type": "array",
"items": {
"type": "integer"
}
}
}
},
"codersdk.UpdateWorkspaceDormancy": {
"type": "object",
"properties": {
+1
@@ -1501,6 +1501,7 @@ func New(options *Options) *API {
r.Get("/parameters", api.workspaceBuildParameters)
r.Get("/resources", api.workspaceBuildResourcesDeprecated)
r.Get("/state", api.workspaceBuildState)
r.Put("/state", api.workspaceBuildUpdateState)
r.Get("/timings", api.workspaceBuildTimings)
})
r.Route("/authcheck", func(r chi.Router) {
+8
@@ -83,6 +83,7 @@ import (
"github.com/coder/coder/v2/coderd/schedule"
"github.com/coder/coder/v2/coderd/telemetry"
"github.com/coder/coder/v2/coderd/updatecheck"
"github.com/coder/coder/v2/coderd/usage"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/coderd/webpush"
"github.com/coder/coder/v2/coderd/workspaceapps"
@@ -186,6 +187,7 @@ type Options struct {
TelemetryReporter telemetry.Reporter
ProvisionerdServerMetrics *provisionerdserver.Metrics
UsageInserter usage.Inserter
}
// New constructs a codersdk client connected to an in-memory API instance.
@@ -266,6 +268,11 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
}
}
var usageInserter *atomic.Pointer[usage.Inserter]
if options.UsageInserter != nil {
usageInserter = &atomic.Pointer[usage.Inserter]{}
usageInserter.Store(&options.UsageInserter)
}
if options.Database == nil {
options.Database, options.Pubsub = dbtestutil.NewDB(t)
}
@@ -559,6 +566,7 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
Database: options.Database,
Pubsub: options.Pubsub,
ExternalAuthConfigs: options.ExternalAuthConfigs,
UsageInserter: usageInserter,
Auditor: options.Auditor,
ConnectionLogger: options.ConnectionLogger,
+44
@@ -0,0 +1,44 @@
package coderdtest
import (
"context"
"sync"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/usage"
"github.com/coder/coder/v2/coderd/usage/usagetypes"
)
var _ usage.Inserter = (*UsageInserter)(nil)
type UsageInserter struct {
sync.Mutex
events []usagetypes.DiscreteEvent
}
func NewUsageInserter() *UsageInserter {
return &UsageInserter{
events: []usagetypes.DiscreteEvent{},
}
}
func (u *UsageInserter) InsertDiscreteUsageEvent(_ context.Context, _ database.Store, event usagetypes.DiscreteEvent) error {
u.Lock()
defer u.Unlock()
u.events = append(u.events, event)
return nil
}
func (u *UsageInserter) GetEvents() []usagetypes.DiscreteEvent {
u.Lock()
defer u.Unlock()
eventsCopy := make([]usagetypes.DiscreteEvent, len(u.events))
copy(eventsCopy, u.events)
return eventsCopy
}
func (u *UsageInserter) Reset() {
u.Lock()
defer u.Unlock()
u.events = []usagetypes.DiscreteEvent{}
}
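A sketch of how a test might wire this fake in through the new `coderdtest.Options.UsageInserter` field; the test name and assertion are illustrative, not from the repo:

```go
package coderdtest_test

import (
	"testing"

	"github.com/coder/coder/v2/coderd/coderdtest"
	"github.com/stretchr/testify/require"
)

func TestUsageEventsCaptured(t *testing.T) {
	t.Parallel()

	// Route usage events into the in-memory fake instead of a real inserter.
	inserter := coderdtest.NewUsageInserter()
	client := coderdtest.New(t, &coderdtest.Options{
		UsageInserter: inserter,
	})
	_ = coderdtest.CreateFirstUser(t, client)

	// ... exercise code paths that record usage, then inspect the copy
	// returned by the mutex-guarded accessor.
	require.Empty(t, inserter.GetEvents(), "no usage events recorded yet")
}
```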
+3 -17
@@ -1732,7 +1732,7 @@ func (q *querier) DeleteOAuth2ProviderAppTokensByAppAndUserID(ctx context.Contex
return q.db.DeleteOAuth2ProviderAppTokensByAppAndUserID(ctx, arg)
}
func (q *querier) DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int64, error) {
func (q *querier) DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int32, error) {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceAibridgeInterception); err != nil {
return -1, err
}
@@ -1749,20 +1749,6 @@ func (q *querier) DeleteOldAuditLogConnectionEvents(ctx context.Context, thresho
return q.db.DeleteOldAuditLogConnectionEvents(ctx, threshold)
}
func (q *querier) DeleteOldAuditLogs(ctx context.Context, arg database.DeleteOldAuditLogsParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceSystem); err != nil {
return 0, err
}
return q.db.DeleteOldAuditLogs(ctx, arg)
}
func (q *querier) DeleteOldConnectionLogs(ctx context.Context, arg database.DeleteOldConnectionLogsParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceSystem); err != nil {
return 0, err
}
return q.db.DeleteOldConnectionLogs(ctx, arg)
}
func (q *querier) DeleteOldNotificationMessages(ctx context.Context) error {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceNotificationMessage); err != nil {
return err
@@ -1784,9 +1770,9 @@ func (q *querier) DeleteOldTelemetryLocks(ctx context.Context, beforeTime time.T
return q.db.DeleteOldTelemetryLocks(ctx, beforeTime)
}
func (q *querier) DeleteOldWorkspaceAgentLogs(ctx context.Context, threshold time.Time) (int64, error) {
func (q *querier) DeleteOldWorkspaceAgentLogs(ctx context.Context, threshold time.Time) error {
if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceSystem); err != nil {
return 0, err
return err
}
return q.db.DeleteOldWorkspaceAgentLogs(ctx, threshold)
}
+2 -10
@@ -324,10 +324,6 @@ func (s *MethodTestSuite) TestAuditLogs() {
dbm.EXPECT().DeleteOldAuditLogConnectionEvents(gomock.Any(), database.DeleteOldAuditLogConnectionEventsParams{}).Return(nil).AnyTimes()
check.Args(database.DeleteOldAuditLogConnectionEventsParams{}).Asserts(rbac.ResourceSystem, policy.ActionDelete)
}))
s.Run("DeleteOldAuditLogs", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().DeleteOldAuditLogs(gomock.Any(), database.DeleteOldAuditLogsParams{}).Return(int64(0), nil).AnyTimes()
check.Args(database.DeleteOldAuditLogsParams{}).Asserts(rbac.ResourceSystem, policy.ActionDelete)
}))
}
func (s *MethodTestSuite) TestConnectionLogs() {
@@ -359,10 +355,6 @@ func (s *MethodTestSuite) TestConnectionLogs() {
dbm.EXPECT().CountConnectionLogs(gomock.Any(), database.CountConnectionLogsParams{}).Return(int64(0), nil).AnyTimes()
check.Args(database.CountConnectionLogsParams{}, emptyPreparedAuthorized{}).Asserts(rbac.ResourceConnectionLog, policy.ActionRead)
}))
s.Run("DeleteOldConnectionLogs", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
dbm.EXPECT().DeleteOldConnectionLogs(gomock.Any(), database.DeleteOldConnectionLogsParams{}).Return(int64(0), nil).AnyTimes()
check.Args(database.DeleteOldConnectionLogsParams{}).Asserts(rbac.ResourceSystem, policy.ActionDelete)
}))
}
func (s *MethodTestSuite) TestFile() {
@@ -3227,7 +3219,7 @@ func (s *MethodTestSuite) TestSystemFunctions() {
}))
s.Run("DeleteOldWorkspaceAgentLogs", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
t := time.Time{}
dbm.EXPECT().DeleteOldWorkspaceAgentLogs(gomock.Any(), t).Return(int64(0), nil).AnyTimes()
dbm.EXPECT().DeleteOldWorkspaceAgentLogs(gomock.Any(), t).Return(nil).AnyTimes()
check.Args(t).Asserts(rbac.ResourceSystem, policy.ActionDelete)
}))
s.Run("InsertWorkspaceAgentStats", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
@@ -4705,7 +4697,7 @@ func (s *MethodTestSuite) TestAIBridge() {
s.Run("DeleteOldAIBridgeRecords", s.Mocked(func(db *dbmock.MockStore, faker *gofakeit.Faker, check *expects) {
t := dbtime.Now()
db.EXPECT().DeleteOldAIBridgeRecords(gomock.Any(), t).Return(int64(0), nil).AnyTimes()
db.EXPECT().DeleteOldAIBridgeRecords(gomock.Any(), t).Return(int32(0), nil).AnyTimes()
check.Args(t).Asserts(rbac.ResourceAibridgeInterception, policy.ActionDelete)
}))
}
+4 -18
@@ -396,7 +396,7 @@ func (m queryMetricsStore) DeleteOAuth2ProviderAppTokensByAppAndUserID(ctx conte
return r0
}
func (m queryMetricsStore) DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int64, error) {
func (m queryMetricsStore) DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int32, error) {
start := time.Now()
r0, r1 := m.s.DeleteOldAIBridgeRecords(ctx, beforeTime)
m.queryLatencies.WithLabelValues("DeleteOldAIBridgeRecords").Observe(time.Since(start).Seconds())
@@ -410,20 +410,6 @@ func (m queryMetricsStore) DeleteOldAuditLogConnectionEvents(ctx context.Context
return r0
}
func (m queryMetricsStore) DeleteOldAuditLogs(ctx context.Context, arg database.DeleteOldAuditLogsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.DeleteOldAuditLogs(ctx, arg)
m.queryLatencies.WithLabelValues("DeleteOldAuditLogs").Observe(time.Since(start).Seconds())
return r0, r1
}
func (m queryMetricsStore) DeleteOldConnectionLogs(ctx context.Context, arg database.DeleteOldConnectionLogsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.DeleteOldConnectionLogs(ctx, arg)
m.queryLatencies.WithLabelValues("DeleteOldConnectionLogs").Observe(time.Since(start).Seconds())
return r0, r1
}
func (m queryMetricsStore) DeleteOldNotificationMessages(ctx context.Context) error {
start := time.Now()
r0 := m.s.DeleteOldNotificationMessages(ctx)
@@ -445,11 +431,11 @@ func (m queryMetricsStore) DeleteOldTelemetryLocks(ctx context.Context, periodEn
return r0
}
func (m queryMetricsStore) DeleteOldWorkspaceAgentLogs(ctx context.Context, arg time.Time) (int64, error) {
func (m queryMetricsStore) DeleteOldWorkspaceAgentLogs(ctx context.Context, arg time.Time) error {
start := time.Now()
r0, r1 := m.s.DeleteOldWorkspaceAgentLogs(ctx, arg)
r0 := m.s.DeleteOldWorkspaceAgentLogs(ctx, arg)
m.queryLatencies.WithLabelValues("DeleteOldWorkspaceAgentLogs").Observe(time.Since(start).Seconds())
return r0, r1
return r0
}
func (m queryMetricsStore) DeleteOldWorkspaceAgentStats(ctx context.Context) error {
+5 -36
@@ -725,10 +725,10 @@ func (mr *MockStoreMockRecorder) DeleteOAuth2ProviderAppTokensByAppAndUserID(ctx
}
// DeleteOldAIBridgeRecords mocks base method.
func (m *MockStore) DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int64, error) {
func (m *MockStore) DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int32, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteOldAIBridgeRecords", ctx, beforeTime)
ret0, _ := ret[0].(int64)
ret0, _ := ret[0].(int32)
ret1, _ := ret[1].(error)
return ret0, ret1
}
@@ -753,36 +753,6 @@ func (mr *MockStoreMockRecorder) DeleteOldAuditLogConnectionEvents(ctx, arg any)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteOldAuditLogConnectionEvents", reflect.TypeOf((*MockStore)(nil).DeleteOldAuditLogConnectionEvents), ctx, arg)
}
// DeleteOldAuditLogs mocks base method.
func (m *MockStore) DeleteOldAuditLogs(ctx context.Context, arg database.DeleteOldAuditLogsParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteOldAuditLogs", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// DeleteOldAuditLogs indicates an expected call of DeleteOldAuditLogs.
func (mr *MockStoreMockRecorder) DeleteOldAuditLogs(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteOldAuditLogs", reflect.TypeOf((*MockStore)(nil).DeleteOldAuditLogs), ctx, arg)
}
// DeleteOldConnectionLogs mocks base method.
func (m *MockStore) DeleteOldConnectionLogs(ctx context.Context, arg database.DeleteOldConnectionLogsParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteOldConnectionLogs", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// DeleteOldConnectionLogs indicates an expected call of DeleteOldConnectionLogs.
func (mr *MockStoreMockRecorder) DeleteOldConnectionLogs(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeleteOldConnectionLogs", reflect.TypeOf((*MockStore)(nil).DeleteOldConnectionLogs), ctx, arg)
}
// DeleteOldNotificationMessages mocks base method.
func (m *MockStore) DeleteOldNotificationMessages(ctx context.Context) error {
m.ctrl.T.Helper()
@@ -826,12 +796,11 @@ func (mr *MockStoreMockRecorder) DeleteOldTelemetryLocks(ctx, periodEndingAtBefo
}
// DeleteOldWorkspaceAgentLogs mocks base method.
func (m *MockStore) DeleteOldWorkspaceAgentLogs(ctx context.Context, threshold time.Time) (int64, error) {
func (m *MockStore) DeleteOldWorkspaceAgentLogs(ctx context.Context, threshold time.Time) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeleteOldWorkspaceAgentLogs", ctx, threshold)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
ret0, _ := ret[0].(error)
return ret0
}
// DeleteOldWorkspaceAgentLogs indicates an expected call of DeleteOldWorkspaceAgentLogs.
+24 -71
@@ -18,16 +18,13 @@ import (
)
const (
delay = 10 * time.Minute
delay = 10 * time.Minute
maxAgentLogAge = 7 * 24 * time.Hour
// Connection events are now inserted into the `connection_logs` table.
// We'll slowly remove old connection events from the `audit_logs` table.
// The `connection_logs` table is purged based on the configured retention.
// We'll slowly remove old connection events from the `audit_logs` table,
// but we won't touch the `connection_logs` table.
maxAuditLogConnectionEventAge = 90 * 24 * time.Hour // 90 days
auditLogConnectionEventBatchSize = 1000
// Batch size for connection log deletion.
connectionLogsBatchSize = 10000
// Batch size for audit log deletion.
auditLogsBatchSize = 10000
// Telemetry heartbeats are used to deduplicate events across replicas. We
// don't need to persist heartbeat rows for longer than 24 hours, as they
// are only used for deduplication across replicas. The time needs to be
@@ -65,14 +62,9 @@ func New(ctx context.Context, logger slog.Logger, db database.Store, vals *coder
return nil
}
var purgedWorkspaceAgentLogs int64
workspaceAgentLogsRetention := vals.Retention.WorkspaceAgentLogs.Value()
if workspaceAgentLogsRetention > 0 {
deleteOldWorkspaceAgentLogsBefore := start.Add(-workspaceAgentLogsRetention)
purgedWorkspaceAgentLogs, err = tx.DeleteOldWorkspaceAgentLogs(ctx, deleteOldWorkspaceAgentLogsBefore)
if err != nil {
return xerrors.Errorf("failed to delete old workspace agent logs: %w", err)
}
deleteOldWorkspaceAgentLogsBefore := start.Add(-maxAgentLogAge)
if err := tx.DeleteOldWorkspaceAgentLogs(ctx, deleteOldWorkspaceAgentLogsBefore); err != nil {
return xerrors.Errorf("failed to delete old workspace agent logs: %w", err)
}
if err := tx.DeleteOldWorkspaceAgentStats(ctx); err != nil {
return xerrors.Errorf("failed to delete old workspace agent stats: %w", err)
@@ -86,24 +78,18 @@ func New(ctx context.Context, logger slog.Logger, db database.Store, vals *coder
if err := tx.ExpirePrebuildsAPIKeys(ctx, dbtime.Time(start)); err != nil {
return xerrors.Errorf("failed to expire prebuilds user api keys: %w", err)
}
var expiredAPIKeys int64
apiKeysRetention := vals.Retention.APIKeys.Value()
if apiKeysRetention > 0 {
// Delete keys that have been expired for at least the retention period.
// A higher retention period allows the backend to return a more helpful
// error message when a user tries to use an expired key.
deleteExpiredKeysBefore := start.Add(-apiKeysRetention)
expiredAPIKeys, err = tx.DeleteExpiredAPIKeys(ctx, database.DeleteExpiredAPIKeysParams{
Before: dbtime.Time(deleteExpiredKeysBefore),
// There could be a lot of expired keys here, so set a limit to prevent
// this taking too long. This runs every 10 minutes, so it deletes
// ~1.5m keys per day at most.
LimitCount: 10000,
})
if err != nil {
return xerrors.Errorf("failed to delete expired api keys: %w", err)
}
expiredAPIKeys, err := tx.DeleteExpiredAPIKeys(ctx, database.DeleteExpiredAPIKeysParams{
// Leave expired keys for a week to allow the backend to know the difference
// between a 404 and an expired key. This purge code is just to bound the size of
// the table to something more reasonable.
Before: dbtime.Time(start.Add(time.Hour * 24 * 7 * -1)),
// There could be a lot of expired keys here, so set a limit to prevent this
// taking too long.
// This runs every 10 minutes, so it deletes ~1.5m keys per day at most.
LimitCount: 10000,
})
if err != nil {
return xerrors.Errorf("failed to delete expired api keys: %w", err)
}
deleteOldTelemetryLocksBefore := start.Add(-maxTelemetryHeartbeatAge)
if err := tx.DeleteOldTelemetryLocks(ctx, deleteOldTelemetryLocksBefore); err != nil {
@@ -118,49 +104,16 @@ func New(ctx context.Context, logger slog.Logger, db database.Store, vals *coder
return xerrors.Errorf("failed to delete old audit log connection events: %w", err)
}
var purgedAIBridgeRecords int64
aibridgeRetention := vals.AI.BridgeConfig.Retention.Value()
if aibridgeRetention > 0 {
deleteAIBridgeRecordsBefore := start.Add(-aibridgeRetention)
// nolint:gocritic // Needs to run as aibridge context.
purgedAIBridgeRecords, err = tx.DeleteOldAIBridgeRecords(dbauthz.AsAIBridged(ctx), deleteAIBridgeRecordsBefore)
if err != nil {
return xerrors.Errorf("failed to delete old aibridge records: %w", err)
}
}
var purgedConnectionLogs int64
connectionLogsRetention := vals.Retention.ConnectionLogs.Value()
if connectionLogsRetention > 0 {
deleteConnectionLogsBefore := start.Add(-connectionLogsRetention)
purgedConnectionLogs, err = tx.DeleteOldConnectionLogs(ctx, database.DeleteOldConnectionLogsParams{
BeforeTime: deleteConnectionLogsBefore,
LimitCount: connectionLogsBatchSize,
})
if err != nil {
return xerrors.Errorf("failed to delete old connection logs: %w", err)
}
}
var purgedAuditLogs int64
auditLogsRetention := vals.Retention.AuditLogs.Value()
if auditLogsRetention > 0 {
deleteAuditLogsBefore := start.Add(-auditLogsRetention)
purgedAuditLogs, err = tx.DeleteOldAuditLogs(ctx, database.DeleteOldAuditLogsParams{
BeforeTime: deleteAuditLogsBefore,
LimitCount: auditLogsBatchSize,
})
if err != nil {
return xerrors.Errorf("failed to delete old audit logs: %w", err)
}
deleteAIBridgeRecordsBefore := start.Add(-vals.AI.BridgeConfig.Retention.Value())
// nolint:gocritic // Needs to run as aibridge context.
purgedAIBridgeRecords, err := tx.DeleteOldAIBridgeRecords(dbauthz.AsAIBridged(ctx), deleteAIBridgeRecordsBefore)
if err != nil {
return xerrors.Errorf("failed to delete old aibridge records: %w", err)
}
logger.Debug(ctx, "purged old database entries",
slog.F("workspace_agent_logs", purgedWorkspaceAgentLogs),
slog.F("expired_api_keys", expiredAPIKeys),
slog.F("aibridge_records", purgedAIBridgeRecords),
slog.F("connection_logs", purgedConnectionLogs),
slog.F("audit_logs", purgedAuditLogs),
slog.F("duration", clk.Since(start)),
)
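The hunk above shows the shape of every purge step: compute a cutoff as the tick time minus the configured retention, skip the delete entirely when retention is zero (disabled), and log the per-table counts. A minimal standalone sketch of that gating pattern — the `deleteOlderThan` callback is illustrative, standing in for a `tx.DeleteOld*` query, not the actual dbpurge API:

```go
package main

import (
	"fmt"
	"time"
)

// purgeWithRetention mirrors the pattern above: a zero retention means
// "keep forever", otherwise delete rows older than start minus retention.
func purgeWithRetention(start time.Time, retention time.Duration,
	deleteOlderThan func(cutoff time.Time) (int64, error),
) (int64, error) {
	if retention <= 0 {
		return 0, nil // retention disabled; nothing to purge
	}
	return deleteOlderThan(start.Add(-retention))
}

func main() {
	n, _ := purgeWithRetention(time.Now(), 30*24*time.Hour,
		func(cutoff time.Time) (int64, error) {
			fmt.Println("would delete rows older than", cutoff.UTC())
			return 0, nil
		})
	fmt.Println("purged", n)
}
```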
+140 -741

@@ -246,11 +246,7 @@ func TestDeleteOldWorkspaceAgentLogs(t *testing.T) {
// After dbpurge completes, the ticker is reset. Trap this call.
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: codersdk.RetentionConfig{
WorkspaceAgentLogs: serpent.Duration(7 * 24 * time.Hour),
},
}, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{}, clk)
defer closer.Close()
<-done // doTick() has now run.
@@ -396,90 +392,6 @@ func mustCreateAgentLogs(ctx context.Context, t *testing.T, db database.Store, a
require.NotEmpty(t, agentLogs, "agent logs must be present")
}
func TestDeleteOldWorkspaceAgentLogsRetention(t *testing.T) {
t.Parallel()
now := time.Date(2025, 1, 15, 7, 30, 0, 0, time.UTC)
testCases := []struct {
name string
retentionConfig codersdk.RetentionConfig
logsAge time.Duration
expectDeleted bool
}{
{
name: "RetentionEnabled",
retentionConfig: codersdk.RetentionConfig{
WorkspaceAgentLogs: serpent.Duration(7 * 24 * time.Hour), // 7 days
},
logsAge: 8 * 24 * time.Hour, // 8 days ago
expectDeleted: true,
},
{
name: "RetentionDisabled",
retentionConfig: codersdk.RetentionConfig{
WorkspaceAgentLogs: serpent.Duration(0),
},
logsAge: 60 * 24 * time.Hour, // 60 days ago
expectDeleted: false,
},
{
name: "CustomRetention30Days",
retentionConfig: codersdk.RetentionConfig{
WorkspaceAgentLogs: serpent.Duration(30 * 24 * time.Hour), // 30 days
},
logsAge: 31 * 24 * time.Hour, // 31 days ago
expectDeleted: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
clk := quartz.NewMock(t)
clk.Set(now).MustWait(ctx)
oldTime := now.Add(-tc.logsAge)
db, _ := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
org := dbgen.Organization(t, db, database.Organization{})
user := dbgen.User(t, db, database.User{})
_ = dbgen.OrganizationMember(t, db, database.OrganizationMember{UserID: user.ID, OrganizationID: org.ID})
tv := dbgen.TemplateVersion(t, db, database.TemplateVersion{OrganizationID: org.ID, CreatedBy: user.ID})
tmpl := dbgen.Template(t, db, database.Template{OrganizationID: org.ID, ActiveVersionID: tv.ID, CreatedBy: user.ID})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{Name: "test-ws", OwnerID: user.ID, OrganizationID: org.ID, TemplateID: tmpl.ID})
wb1 := mustCreateWorkspaceBuild(t, db, org, tv, ws.ID, oldTime, 1)
wb2 := mustCreateWorkspaceBuild(t, db, org, tv, ws.ID, oldTime, 2)
agent1 := mustCreateAgent(t, db, wb1)
agent2 := mustCreateAgent(t, db, wb2)
mustCreateAgentLogs(ctx, t, db, agent1, &oldTime, "agent 1 logs")
mustCreateAgentLogs(ctx, t, db, agent2, &oldTime, "agent 2 logs")
// Run the purge.
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: tc.retentionConfig,
}, clk)
defer closer.Close()
testutil.TryReceive(ctx, t, done)
// Verify results.
if tc.expectDeleted {
assertNoWorkspaceAgentLogs(ctx, t, db, agent1.ID)
} else {
assertWorkspaceAgentLogs(ctx, t, db, agent1.ID, "agent 1 logs")
}
// Latest build logs are always retained.
assertWorkspaceAgentLogs(ctx, t, db, agent2.ID, "agent 2 logs")
})
}
}
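The retention tests above pin wall-clock time with a mock clock so cutoff arithmetic is deterministic. A self-contained sketch of the underlying idea, using a hand-rolled clock interface rather than coder/quartz (whose mock the tests drive via `quartz.NewMock` and `clk.Set`):

```go
package main

import (
	"fmt"
	"time"
)

// Clock abstracts time.Now so tests can substitute a fixed instant.
type Clock interface {
	Now() time.Time
}

type fixedClock struct{ now time.Time }

func (c fixedClock) Now() time.Time { return c.now }

// cutoffFor computes the deletion threshold the way the purge does.
func cutoffFor(c Clock, retention time.Duration) time.Time {
	return c.Now().Add(-retention)
}

func main() {
	clk := fixedClock{now: time.Date(2025, 1, 15, 7, 30, 0, 0, time.UTC)}
	// Logs 8 days old fall before a 7-day cutoff and would be purged.
	fmt.Println(cutoffFor(clk, 7*24*time.Hour)) // 2025-01-08 07:30:00 +0000 UTC
}
```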
//nolint:paralleltest // It uses LockIDDBPurge.
func TestDeleteOldProvisionerDaemons(t *testing.T) {
// TODO: must refactor DeleteOldProvisionerDaemons to allow passing in cutoff
@@ -847,684 +759,171 @@ func TestDeleteOldTelemetryHeartbeats(t *testing.T) {
}, testutil.WaitShort, testutil.IntervalFast, "it should delete old telemetry heartbeats")
}
func TestDeleteOldConnectionLogs(t *testing.T) {
t.Parallel()
now := time.Date(2025, 1, 15, 7, 30, 0, 0, time.UTC)
retentionPeriod := 30 * 24 * time.Hour
afterThreshold := now.Add(-retentionPeriod).Add(-24 * time.Hour) // 31 days ago (older than threshold)
beforeThreshold := now.Add(-15 * 24 * time.Hour) // 15 days ago (newer than threshold)
testCases := []struct {
name string
retentionConfig codersdk.RetentionConfig
oldLogTime time.Time
recentLogTime *time.Time // nil means no recent log created
expectOldDeleted bool
expectedLogsRemaining int
}{
{
name: "RetentionEnabled",
retentionConfig: codersdk.RetentionConfig{
ConnectionLogs: serpent.Duration(retentionPeriod),
},
oldLogTime: afterThreshold,
recentLogTime: &beforeThreshold,
expectOldDeleted: true,
expectedLogsRemaining: 1, // only recent log remains
},
{
name: "RetentionDisabled",
retentionConfig: codersdk.RetentionConfig{
ConnectionLogs: serpent.Duration(0),
},
oldLogTime: now.Add(-365 * 24 * time.Hour), // 1 year ago
recentLogTime: nil,
expectOldDeleted: false,
expectedLogsRemaining: 1, // old log is kept
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
clk := quartz.NewMock(t)
clk.Set(now).MustWait(ctx)
db, _ := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
// Setup test fixtures.
user := dbgen.User(t, db, database.User{})
org := dbgen.Organization(t, db, database.Organization{})
_ = dbgen.OrganizationMember(t, db, database.OrganizationMember{UserID: user.ID, OrganizationID: org.ID})
tv := dbgen.TemplateVersion(t, db, database.TemplateVersion{OrganizationID: org.ID, CreatedBy: user.ID})
tmpl := dbgen.Template(t, db, database.Template{OrganizationID: org.ID, ActiveVersionID: tv.ID, CreatedBy: user.ID})
workspace := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: user.ID,
OrganizationID: org.ID,
TemplateID: tmpl.ID,
})
// Create old connection log.
oldLog := dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: tc.oldLogTime,
OrganizationID: org.ID,
WorkspaceOwnerID: user.ID,
WorkspaceID: workspace.ID,
WorkspaceName: workspace.Name,
AgentName: "agent1",
Type: database.ConnectionTypeSsh,
ConnectionStatus: database.ConnectionStatusConnected,
})
// Create recent connection log if specified.
var recentLog database.ConnectionLog
if tc.recentLogTime != nil {
recentLog = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: *tc.recentLogTime,
OrganizationID: org.ID,
WorkspaceOwnerID: user.ID,
WorkspaceID: workspace.ID,
WorkspaceName: workspace.Name,
AgentName: "agent2",
Type: database.ConnectionTypeSsh,
ConnectionStatus: database.ConnectionStatusConnected,
})
}
// Run the purge.
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: tc.retentionConfig,
}, clk)
defer closer.Close()
testutil.TryReceive(ctx, t, done)
// Verify results.
logs, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
LimitOpt: 100,
})
require.NoError(t, err)
require.Len(t, logs, tc.expectedLogsRemaining, "unexpected number of logs remaining")
logIDs := make([]uuid.UUID, len(logs))
for i, log := range logs {
logIDs[i] = log.ConnectionLog.ID
}
if tc.expectOldDeleted {
require.NotContains(t, logIDs, oldLog.ID, "old connection log should be deleted")
} else {
require.Contains(t, logIDs, oldLog.ID, "old connection log should NOT be deleted")
}
if tc.recentLogTime != nil {
require.Contains(t, logIDs, recentLog.ID, "recent connection log should be kept")
}
})
}
}
func TestDeleteOldAIBridgeRecords(t *testing.T) {
t.Parallel()
	now := time.Date(2025, 1, 15, 7, 30, 0, 0, time.UTC)
	retentionPeriod := 30 * 24 * time.Hour // 30 days
	afterThreshold := now.Add(-retentionPeriod).Add(-24 * time.Hour) // 31 days ago (older than threshold)
	beforeThreshold := now.Add(-15 * 24 * time.Hour) // 15 days ago (newer than threshold)
	closeBeforeThreshold := now.Add(-retentionPeriod).Add(24 * time.Hour) // 29 days ago
	type testFixtures struct {
		oldInterception            database.AIBridgeInterception
		oldInterceptionWithRelated database.AIBridgeInterception
		recentInterception         database.AIBridgeInterception
		nearThresholdInterception  database.AIBridgeInterception
	}
testCases := []struct {
name string
retention time.Duration
verify func(t *testing.T, ctx context.Context, db database.Store, fixtures testFixtures)
}{
{
name: "RetentionEnabled",
retention: retentionPeriod,
verify: func(t *testing.T, ctx context.Context, db database.Store, fixtures testFixtures) {
t.Helper()
				interceptions, err := db.GetAIBridgeInterceptions(ctx)
				require.NoError(t, err)
				require.Len(t, interceptions, 2, "expected 2 interceptions remaining")
interceptionIDs := make([]uuid.UUID, len(interceptions))
for i, interception := range interceptions {
interceptionIDs[i] = interception.ID
}
require.NotContains(t, interceptionIDs, fixtures.oldInterception.ID, "old interception should be deleted")
require.NotContains(t, interceptionIDs, fixtures.oldInterceptionWithRelated.ID, "old interception with related records should be deleted")
require.Contains(t, interceptionIDs, fixtures.recentInterception.ID, "recent interception should be kept")
require.Contains(t, interceptionIDs, fixtures.nearThresholdInterception.ID, "near threshold interception should be kept")
// Verify related records were deleted for old interception.
oldTokenUsages, err := db.GetAIBridgeTokenUsagesByInterceptionID(ctx, fixtures.oldInterceptionWithRelated.ID)
require.NoError(t, err)
require.Empty(t, oldTokenUsages, "old token usages should be deleted")
oldUserPrompts, err := db.GetAIBridgeUserPromptsByInterceptionID(ctx, fixtures.oldInterceptionWithRelated.ID)
require.NoError(t, err)
require.Empty(t, oldUserPrompts, "old user prompts should be deleted")
oldToolUsages, err := db.GetAIBridgeToolUsagesByInterceptionID(ctx, fixtures.oldInterceptionWithRelated.ID)
require.NoError(t, err)
require.Empty(t, oldToolUsages, "old tool usages should be deleted")
// Verify related records were NOT deleted for near-threshold interception.
newTokenUsages, err := db.GetAIBridgeTokenUsagesByInterceptionID(ctx, fixtures.nearThresholdInterception.ID)
require.NoError(t, err)
require.Len(t, newTokenUsages, 1, "near threshold token usages should not be deleted")
newUserPrompts, err := db.GetAIBridgeUserPromptsByInterceptionID(ctx, fixtures.nearThresholdInterception.ID)
require.NoError(t, err)
require.Len(t, newUserPrompts, 1, "near threshold user prompts should not be deleted")
newToolUsages, err := db.GetAIBridgeToolUsagesByInterceptionID(ctx, fixtures.nearThresholdInterception.ID)
require.NoError(t, err)
require.Len(t, newToolUsages, 1, "near threshold tool usages should not be deleted")
},
},
{
name: "RetentionDisabled",
retention: 0,
verify: func(t *testing.T, ctx context.Context, db database.Store, fixtures testFixtures) {
t.Helper()
interceptions, err := db.GetAIBridgeInterceptions(ctx)
require.NoError(t, err)
require.Len(t, interceptions, 4, "expected all 4 interceptions to be retained")
interceptionIDs := make([]uuid.UUID, len(interceptions))
for i, interception := range interceptions {
interceptionIDs[i] = interception.ID
}
require.Contains(t, interceptionIDs, fixtures.oldInterception.ID, "old interception should be kept")
require.Contains(t, interceptionIDs, fixtures.oldInterceptionWithRelated.ID, "old interception with related records should be kept")
require.Contains(t, interceptionIDs, fixtures.recentInterception.ID, "recent interception should be kept")
require.Contains(t, interceptionIDs, fixtures.nearThresholdInterception.ID, "near threshold interception should be kept")
// Verify all related records were kept.
oldTokenUsages, err := db.GetAIBridgeTokenUsagesByInterceptionID(ctx, fixtures.oldInterceptionWithRelated.ID)
require.NoError(t, err)
require.Len(t, oldTokenUsages, 1, "old token usages should be kept")
oldUserPrompts, err := db.GetAIBridgeUserPromptsByInterceptionID(ctx, fixtures.oldInterceptionWithRelated.ID)
require.NoError(t, err)
require.Len(t, oldUserPrompts, 1, "old user prompts should be kept")
oldToolUsages, err := db.GetAIBridgeToolUsagesByInterceptionID(ctx, fixtures.oldInterceptionWithRelated.ID)
require.NoError(t, err)
require.Len(t, oldToolUsages, 1, "old tool usages should be kept")
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
clk := quartz.NewMock(t)
clk.Set(now).MustWait(ctx)
db, _ := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
user := dbgen.User(t, db, database.User{})
// Create old AI Bridge interception (should be deleted when retention enabled).
oldInterception := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
ID: uuid.New(),
APIKeyID: sql.NullString{},
InitiatorID: user.ID,
Provider: "anthropic",
Model: "claude-3-5-sonnet",
StartedAt: afterThreshold,
}, &afterThreshold)
// Create old interception with related records (should all be deleted when retention enabled).
oldInterceptionWithRelated := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
ID: uuid.New(),
APIKeyID: sql.NullString{},
InitiatorID: user.ID,
Provider: "openai",
Model: "gpt-4",
StartedAt: afterThreshold,
}, &afterThreshold)
_ = dbgen.AIBridgeTokenUsage(t, db, database.InsertAIBridgeTokenUsageParams{
ID: uuid.New(),
InterceptionID: oldInterceptionWithRelated.ID,
ProviderResponseID: "resp-1",
InputTokens: 100,
OutputTokens: 50,
CreatedAt: afterThreshold,
})
_ = dbgen.AIBridgeUserPrompt(t, db, database.InsertAIBridgeUserPromptParams{
ID: uuid.New(),
InterceptionID: oldInterceptionWithRelated.ID,
ProviderResponseID: "resp-1",
Prompt: "test prompt",
CreatedAt: afterThreshold,
})
_ = dbgen.AIBridgeToolUsage(t, db, database.InsertAIBridgeToolUsageParams{
ID: uuid.New(),
InterceptionID: oldInterceptionWithRelated.ID,
ProviderResponseID: "resp-1",
Tool: "test-tool",
ServerUrl: sql.NullString{String: "http://test", Valid: true},
Input: "{}",
Injected: true,
CreatedAt: afterThreshold,
})
// Create recent AI Bridge interception (should be kept).
recentInterception := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
ID: uuid.New(),
APIKeyID: sql.NullString{},
InitiatorID: user.ID,
Provider: "anthropic",
Model: "claude-3-5-sonnet",
StartedAt: beforeThreshold,
}, &beforeThreshold)
// Create interception close to threshold (should be kept).
nearThresholdInterception := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
ID: uuid.New(),
APIKeyID: sql.NullString{},
InitiatorID: user.ID,
Provider: "anthropic",
Model: "claude-3-5-sonnet",
StartedAt: closeBeforeThreshold,
}, &closeBeforeThreshold)
_ = dbgen.AIBridgeTokenUsage(t, db, database.InsertAIBridgeTokenUsageParams{
ID: uuid.New(),
InterceptionID: nearThresholdInterception.ID,
ProviderResponseID: "resp-1",
InputTokens: 100,
OutputTokens: 50,
CreatedAt: closeBeforeThreshold,
})
_ = dbgen.AIBridgeUserPrompt(t, db, database.InsertAIBridgeUserPromptParams{
ID: uuid.New(),
InterceptionID: nearThresholdInterception.ID,
ProviderResponseID: "resp-1",
Prompt: "test prompt",
CreatedAt: closeBeforeThreshold,
})
_ = dbgen.AIBridgeToolUsage(t, db, database.InsertAIBridgeToolUsageParams{
ID: uuid.New(),
InterceptionID: nearThresholdInterception.ID,
ProviderResponseID: "resp-1",
Tool: "test-tool",
ServerUrl: sql.NullString{String: "http://test", Valid: true},
Input: "{}",
Injected: true,
CreatedAt: closeBeforeThreshold,
})
fixtures := testFixtures{
oldInterception: oldInterception,
oldInterceptionWithRelated: oldInterceptionWithRelated,
recentInterception: recentInterception,
nearThresholdInterception: nearThresholdInterception,
}
// Run the purge with configured retention period.
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
AI: codersdk.AIConfig{
BridgeConfig: codersdk.AIBridgeConfig{
Retention: serpent.Duration(tc.retention),
},
},
}, clk)
defer closer.Close()
testutil.TryReceive(ctx, t, done)
tc.verify(t, ctx, db, fixtures)
})
}
}
func TestDeleteOldAuditLogs(t *testing.T) {
t.Parallel()
now := time.Date(2025, 1, 15, 7, 30, 0, 0, time.UTC)
retentionPeriod := 30 * 24 * time.Hour
afterThreshold := now.Add(-retentionPeriod).Add(-24 * time.Hour) // 31 days ago (older than threshold)
beforeThreshold := now.Add(-15 * 24 * time.Hour) // 15 days ago (newer than threshold)
testCases := []struct {
name string
retentionConfig codersdk.RetentionConfig
oldLogTime time.Time
recentLogTime *time.Time // nil means no recent log created
expectOldDeleted bool
expectedLogsRemaining int
}{
{
name: "RetentionEnabled",
retentionConfig: codersdk.RetentionConfig{
AuditLogs: serpent.Duration(retentionPeriod),
},
oldLogTime: afterThreshold,
recentLogTime: &beforeThreshold,
expectOldDeleted: true,
expectedLogsRemaining: 1, // only recent log remains
},
{
name: "RetentionDisabled",
retentionConfig: codersdk.RetentionConfig{
AuditLogs: serpent.Duration(0),
},
oldLogTime: now.Add(-365 * 24 * time.Hour), // 1 year ago
recentLogTime: nil,
expectOldDeleted: false,
expectedLogsRemaining: 1, // old log is kept
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
clk := quartz.NewMock(t)
clk.Set(now).MustWait(ctx)
db, _ := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
// Setup test fixtures.
user := dbgen.User(t, db, database.User{})
org := dbgen.Organization(t, db, database.Organization{})
// Create old audit log.
oldLog := dbgen.AuditLog(t, db, database.AuditLog{
UserID: user.ID,
OrganizationID: org.ID,
Time: tc.oldLogTime,
Action: database.AuditActionCreate,
ResourceType: database.ResourceTypeWorkspace,
})
// Create recent audit log if specified.
var recentLog database.AuditLog
if tc.recentLogTime != nil {
recentLog = dbgen.AuditLog(t, db, database.AuditLog{
UserID: user.ID,
OrganizationID: org.ID,
Time: *tc.recentLogTime,
Action: database.AuditActionCreate,
ResourceType: database.ResourceTypeWorkspace,
})
}
// Run the purge.
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: tc.retentionConfig,
}, clk)
defer closer.Close()
testutil.TryReceive(ctx, t, done)
// Verify results.
logs, err := db.GetAuditLogsOffset(ctx, database.GetAuditLogsOffsetParams{
LimitOpt: 100,
})
require.NoError(t, err)
require.Len(t, logs, tc.expectedLogsRemaining, "unexpected number of logs remaining")
logIDs := make([]uuid.UUID, len(logs))
for i, log := range logs {
logIDs[i] = log.AuditLog.ID
}
if tc.expectOldDeleted {
require.NotContains(t, logIDs, oldLog.ID, "old audit log should be deleted")
} else {
require.Contains(t, logIDs, oldLog.ID, "old audit log should NOT be deleted")
}
if tc.recentLogTime != nil {
require.Contains(t, logIDs, recentLog.ID, "recent audit log should be kept")
}
})
}
// ConnectionEventsNotDeleted is a special case that tests multiple audit
// action types, so it's kept as a separate subtest.
t.Run("ConnectionEventsNotDeleted", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
clk := quartz.NewMock(t)
clk.Set(now).MustWait(ctx)
db, _ := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
user := dbgen.User(t, db, database.User{})
org := dbgen.Organization(t, db, database.Organization{})
// Create old connection events (should NOT be deleted by audit logs retention).
oldConnectLog := dbgen.AuditLog(t, db, database.AuditLog{
UserID: user.ID,
OrganizationID: org.ID,
Time: afterThreshold,
Action: database.AuditActionConnect,
ResourceType: database.ResourceTypeWorkspace,
})
oldDisconnectLog := dbgen.AuditLog(t, db, database.AuditLog{
UserID: user.ID,
OrganizationID: org.ID,
Time: afterThreshold,
Action: database.AuditActionDisconnect,
ResourceType: database.ResourceTypeWorkspace,
})
oldOpenLog := dbgen.AuditLog(t, db, database.AuditLog{
UserID: user.ID,
OrganizationID: org.ID,
Time: afterThreshold,
Action: database.AuditActionOpen,
ResourceType: database.ResourceTypeWorkspace,
})
oldCloseLog := dbgen.AuditLog(t, db, database.AuditLog{
UserID: user.ID,
OrganizationID: org.ID,
Time: afterThreshold,
Action: database.AuditActionClose,
ResourceType: database.ResourceTypeWorkspace,
})
// Create old non-connection audit log (should be deleted).
oldCreateLog := dbgen.AuditLog(t, db, database.AuditLog{
UserID: user.ID,
OrganizationID: org.ID,
Time: afterThreshold,
Action: database.AuditActionCreate,
ResourceType: database.ResourceTypeWorkspace,
})
// Run the purge with audit logs retention enabled.
done := awaitDoTick(ctx, t, clk)
closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
Retention: codersdk.RetentionConfig{
AuditLogs: serpent.Duration(retentionPeriod),
},
}, clk)
defer closer.Close()
testutil.TryReceive(ctx, t, done)
// Verify results.
logs, err := db.GetAuditLogsOffset(ctx, database.GetAuditLogsOffsetParams{
LimitOpt: 100,
})
require.NoError(t, err)
require.Len(t, logs, 4, "should have 4 connection event logs remaining")
logIDs := make([]uuid.UUID, len(logs))
for i, log := range logs {
logIDs[i] = log.AuditLog.ID
}
// Connection events should NOT be deleted by audit logs retention.
require.Contains(t, logIDs, oldConnectLog.ID, "old connect log should NOT be deleted by audit logs retention")
require.Contains(t, logIDs, oldDisconnectLog.ID, "old disconnect log should NOT be deleted by audit logs retention")
require.Contains(t, logIDs, oldOpenLog.ID, "old open log should NOT be deleted by audit logs retention")
require.Contains(t, logIDs, oldCloseLog.ID, "old close log should NOT be deleted by audit logs retention")
// Non-connection event should be deleted.
require.NotContains(t, logIDs, oldCreateLog.ID, "old create log should be deleted by audit logs retention")
	})
}

func TestDeleteExpiredAPIKeys(t *testing.T) {
	t.Parallel()
	now := time.Date(2025, 1, 15, 7, 30, 0, 0, time.UTC)
	testCases := []struct {
		name                    string
		retentionConfig         codersdk.RetentionConfig
		oldExpiredTime          time.Time
		recentExpiredTime       *time.Time // nil means no recent expired key created
		activeTime              *time.Time // nil means no active key created
		expectOldExpiredDeleted bool
		expectedKeysRemaining   int
	}{
		{
			name: "RetentionEnabled",
			retentionConfig: codersdk.RetentionConfig{
				APIKeys: serpent.Duration(7 * 24 * time.Hour), // 7 days
			},
			oldExpiredTime:          now.Add(-8 * 24 * time.Hour),      // Expired 8 days ago
			recentExpiredTime:       ptr(now.Add(-6 * 24 * time.Hour)), // Expired 6 days ago
			activeTime:              ptr(now.Add(24 * time.Hour)),      // Expires tomorrow
			expectOldExpiredDeleted: true,
			expectedKeysRemaining:   2, // recent expired + active
		},
		{
			name: "RetentionDisabled",
			retentionConfig: codersdk.RetentionConfig{
				APIKeys: serpent.Duration(0),
			},
			oldExpiredTime:          now.Add(-365 * 24 * time.Hour), // Expired 1 year ago
			recentExpiredTime:       nil,
			activeTime:              nil,
			expectOldExpiredDeleted: false,
			expectedKeysRemaining:   1, // old expired is kept
		},
		{
			name: "CustomRetention30Days",
			retentionConfig: codersdk.RetentionConfig{
				APIKeys: serpent.Duration(30 * 24 * time.Hour), // 30 days
			},
			oldExpiredTime:          now.Add(-31 * 24 * time.Hour),      // Expired 31 days ago
			recentExpiredTime:       ptr(now.Add(-29 * 24 * time.Hour)), // Expired 29 days ago
			activeTime:              nil,
			expectOldExpiredDeleted: true,
			expectedKeysRemaining:   1, // only recent expired remains
		},
	}
	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()
			ctx := testutil.Context(t, testutil.WaitShort)
			clk := quartz.NewMock(t)
			clk.Set(now).MustWait(ctx)
			db, _ := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
			logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
			user := dbgen.User(t, db, database.User{})
			// Create API key that expired long ago.
			oldExpiredKey, _ := dbgen.APIKey(t, db, database.APIKey{
				UserID:    user.ID,
				ExpiresAt: tc.oldExpiredTime,
				TokenName: "old-expired-key",
			})
			// Create API key that expired recently if specified.
			var recentExpiredKey database.APIKey
			if tc.recentExpiredTime != nil {
				recentExpiredKey, _ = dbgen.APIKey(t, db, database.APIKey{
					UserID:    user.ID,
					ExpiresAt: *tc.recentExpiredTime,
					TokenName: "recent-expired-key",
				})
			}
			// Create API key that hasn't expired yet if specified.
			var activeKey database.APIKey
			if tc.activeTime != nil {
				activeKey, _ = dbgen.APIKey(t, db, database.APIKey{
					UserID:    user.ID,
					ExpiresAt: *tc.activeTime,
					TokenName: "active-key",
				})
			}
			// Run the purge.
			done := awaitDoTick(ctx, t, clk)
			closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
				Retention: tc.retentionConfig,
			}, clk)
			defer closer.Close()
			testutil.TryReceive(ctx, t, done)
			// Verify total keys remaining.
			keys, err := db.GetAPIKeysLastUsedAfter(ctx, time.Time{})
			require.NoError(t, err)
			require.Len(t, keys, tc.expectedKeysRemaining, "unexpected number of keys remaining")
			// Verify results.
			_, err = db.GetAPIKeyByID(ctx, oldExpiredKey.ID)
			if tc.expectOldExpiredDeleted {
				require.Error(t, err, "old expired key should be deleted")
			} else {
				require.NoError(t, err, "old expired key should NOT be deleted")
			}
			if tc.recentExpiredTime != nil {
				_, err = db.GetAPIKeyByID(ctx, recentExpiredKey.ID)
				require.NoError(t, err, "recently expired key should be kept")
			}
			if tc.activeTime != nil {
				_, err = db.GetAPIKeyByID(ctx, activeKey.ID)
				require.NoError(t, err, "active key should be kept")
			}
		})
	}
}

// ptr is a helper to create a pointer to a value.
func ptr[T any](v T) *T {
	return &v
}

func TestDeleteOldAIBridgeRecords(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)
	clk := quartz.NewMock(t)
	now := time.Date(2025, 1, 15, 7, 30, 0, 0, time.UTC)
	retentionPeriod := 30 * 24 * time.Hour // 30 days
	afterThreshold := now.Add(-retentionPeriod).Add(-24 * time.Hour) // 31 days ago (older than threshold)
	beforeThreshold := now.Add(-15 * 24 * time.Hour) // 15 days ago (newer than threshold)
	closeBeforeThreshold := now.Add(-retentionPeriod).Add(24 * time.Hour) // 29 days ago
	clk.Set(now).MustWait(ctx)
	db, _ := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
	user := dbgen.User(t, db, database.User{})
	// Create old AI Bridge interception (should be deleted)
	oldInterception := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
		ID:          uuid.New(),
		APIKeyID:    sql.NullString{},
		InitiatorID: user.ID,
		Provider:    "anthropic",
		Model:       "claude-3-5-sonnet",
		StartedAt:   afterThreshold,
	}, &afterThreshold)
	// Create old interception with related records (should all be deleted)
	oldInterceptionWithRelated := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
		ID:          uuid.New(),
		APIKeyID:    sql.NullString{},
		InitiatorID: user.ID,
		Provider:    "openai",
		Model:       "gpt-4",
		StartedAt:   afterThreshold,
	}, &afterThreshold)
	_ = dbgen.AIBridgeTokenUsage(t, db, database.InsertAIBridgeTokenUsageParams{
		ID:                 uuid.New(),
		InterceptionID:     oldInterceptionWithRelated.ID,
		ProviderResponseID: "resp-1",
		InputTokens:        100,
		OutputTokens:       50,
		CreatedAt:          afterThreshold,
	})
	_ = dbgen.AIBridgeUserPrompt(t, db, database.InsertAIBridgeUserPromptParams{
		ID:                 uuid.New(),
		InterceptionID:     oldInterceptionWithRelated.ID,
		ProviderResponseID: "resp-1",
		Prompt:             "test prompt",
		CreatedAt:          afterThreshold,
	})
	_ = dbgen.AIBridgeToolUsage(t, db, database.InsertAIBridgeToolUsageParams{
		ID:                 uuid.New(),
		InterceptionID:     oldInterceptionWithRelated.ID,
		ProviderResponseID: "resp-1",
		Tool:               "test-tool",
		ServerUrl:          sql.NullString{String: "http://test", Valid: true},
		Input:              "{}",
		Injected:           true,
		CreatedAt:          afterThreshold,
	})
	// Create recent AI Bridge interception (should be kept)
	recentInterception := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
		ID:          uuid.New(),
		APIKeyID:    sql.NullString{},
		InitiatorID: user.ID,
		Provider:    "anthropic",
		Model:       "claude-3-5-sonnet",
		StartedAt:   beforeThreshold,
	}, &beforeThreshold)
	// Create interception close to threshold (should be kept)
	nearThresholdInterception := dbgen.AIBridgeInterception(t, db, database.InsertAIBridgeInterceptionParams{
		ID:          uuid.New(),
		APIKeyID:    sql.NullString{},
		InitiatorID: user.ID,
		Provider:    "anthropic",
		Model:       "claude-3-5-sonnet",
		StartedAt:   closeBeforeThreshold,
	}, &closeBeforeThreshold)
	_ = dbgen.AIBridgeTokenUsage(t, db, database.InsertAIBridgeTokenUsageParams{
		ID:                 uuid.New(),
		InterceptionID:     nearThresholdInterception.ID,
		ProviderResponseID: "resp-1",
		InputTokens:        100,
		OutputTokens:       50,
		CreatedAt:          closeBeforeThreshold,
	})
	_ = dbgen.AIBridgeUserPrompt(t, db, database.InsertAIBridgeUserPromptParams{
		ID:                 uuid.New(),
		InterceptionID:     nearThresholdInterception.ID,
		ProviderResponseID: "resp-1",
		Prompt:             "test prompt",
		CreatedAt:          closeBeforeThreshold,
	})
	_ = dbgen.AIBridgeToolUsage(t, db, database.InsertAIBridgeToolUsageParams{
		ID:                 uuid.New(),
		InterceptionID:     nearThresholdInterception.ID,
		ProviderResponseID: "resp-1",
		Tool:               "test-tool",
		ServerUrl:          sql.NullString{String: "http://test", Valid: true},
		Input:              "{}",
		Injected:           true,
		CreatedAt:          closeBeforeThreshold,
	})
	// Run the purge with configured retention period
	done := awaitDoTick(ctx, t, clk)
	closer := dbpurge.New(ctx, logger, db, &codersdk.DeploymentValues{
		AI: codersdk.AIConfig{
			BridgeConfig: codersdk.AIBridgeConfig{
				Retention: serpent.Duration(retentionPeriod),
			},
		},
	}, clk)
	defer closer.Close()
	// Wait for tick
	testutil.TryReceive(ctx, t, done)
	// Verify results by querying all AI Bridge records
	interceptions, err := db.GetAIBridgeInterceptions(ctx)
	require.NoError(t, err)
	// Extract interception IDs for comparison
	interceptionIDs := make([]uuid.UUID, len(interceptions))
	for i, interception := range interceptions {
		interceptionIDs[i] = interception.ID
	}
	require.NotContains(t, interceptionIDs, oldInterception.ID, "old interception should be deleted")
	require.NotContains(t, interceptionIDs, oldInterceptionWithRelated.ID, "old interception with related records should be deleted")
	// Verify related records were also deleted
	oldTokenUsages, err := db.GetAIBridgeTokenUsagesByInterceptionID(ctx, oldInterceptionWithRelated.ID)
	require.NoError(t, err)
	require.Empty(t, oldTokenUsages, "old token usages should be deleted")
	oldUserPrompts, err := db.GetAIBridgeUserPromptsByInterceptionID(ctx, oldInterceptionWithRelated.ID)
	require.NoError(t, err)
	require.Empty(t, oldUserPrompts, "old user prompts should be deleted")
	oldToolUsages, err := db.GetAIBridgeToolUsagesByInterceptionID(ctx, oldInterceptionWithRelated.ID)
	require.NoError(t, err)
	require.Empty(t, oldToolUsages, "old tool usages should be deleted")
	require.Contains(t, interceptionIDs, recentInterception.ID, "recent interception should be kept")
	require.Contains(t, interceptionIDs, nearThresholdInterception.ID, "near threshold interception should be kept")
	// Verify related records were NOT deleted
	newTokenUsages, err := db.GetAIBridgeTokenUsagesByInterceptionID(ctx, nearThresholdInterception.ID)
	require.NoError(t, err)
	require.Len(t, newTokenUsages, 1, "near threshold token usages should not be deleted")
	newUserPrompts, err := db.GetAIBridgeUserPromptsByInterceptionID(ctx, nearThresholdInterception.ID)
	require.NoError(t, err)
	require.Len(t, newUserPrompts, 1, "near threshold user prompts should not be deleted")
	newToolUsages, err := db.GetAIBridgeToolUsagesByInterceptionID(ctx, nearThresholdInterception.ID)
	require.NoError(t, err)
	require.Len(t, newToolUsages, 1, "near threshold tool usages should not be deleted")
}
-2
@@ -3449,8 +3449,6 @@ COMMENT ON INDEX workspace_app_audit_sessions_unique_index IS 'Unique index to e
CREATE INDEX workspace_app_stats_workspace_id_idx ON workspace_app_stats USING btree (workspace_id);
CREATE INDEX workspace_app_statuses_app_id_idx ON workspace_app_statuses USING btree (app_id, created_at DESC);
CREATE INDEX workspace_modules_created_at_idx ON workspace_modules USING btree (created_at);
CREATE INDEX workspace_next_start_at_idx ON workspaces USING btree (next_start_at) WHERE (deleted = false);
@@ -1 +1 @@
DROP INDEX IF EXISTS public.workspace_agents_auth_instance_id_deleted_idx;
DROP INDEX IF EXISTS workspace_agents_auth_instance_id_deleted_idx;
@@ -1 +1 @@
CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON public.workspace_agents (auth_instance_id, deleted);
CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON workspace_agents (auth_instance_id, deleted);
@@ -1 +0,0 @@
DROP INDEX IF EXISTS workspace_app_statuses_app_id_idx;
@@ -1 +0,0 @@
CREATE INDEX workspace_app_statuses_app_id_idx ON workspace_app_statuses (app_id, created_at DESC);
File diff suppressed because one or more lines are too long
@@ -1,34 +1,34 @@
-- This is a deleted user that shares the same username and linked_id as the existing user below.
-- Any future migrations need to handle this case.
INSERT INTO public.users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
INSERT INTO users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
VALUES ('a0061a8e-7db7-4585-838c-3116a003dd21', 'githubuser@coder.com', 'githubuser', '\x', '2022-11-02 13:05:21.445455+02', '2022-11-02 13:05:21.445455+02', 'active', '{}', true) ON CONFLICT DO NOTHING;
INSERT INTO public.organization_members VALUES ('a0061a8e-7db7-4585-838c-3116a003dd21', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
INSERT INTO organization_members VALUES ('a0061a8e-7db7-4585-838c-3116a003dd21', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('a0061a8e-7db7-4585-838c-3116a003dd21', 'github', '100', '');
INSERT INTO public.users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
INSERT INTO users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
VALUES ('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'githubuser@coder.com', 'githubuser', '\x', '2022-11-02 13:05:21.445455+02', '2022-11-02 13:05:21.445455+02', 'active', '{}', false) ON CONFLICT DO NOTHING;
INSERT INTO public.organization_members VALUES ('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
INSERT INTO organization_members VALUES ('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'github', '100', '');
-- Additionally, there is no unique constraint on user_id. So also add another user_link for the same user.
-- This has happened on a production database.
INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('fc1511ef-4fcf-4a3b-98a1-8df64160e35a', 'oidc', 'foo', '');
-- Lastly, make 2 other users who have the same user link.
INSERT INTO public.users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
INSERT INTO users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
VALUES ('580ed397-727d-4aaf-950a-51f89f556c24', 'dup_link_a@coder.com', 'dupe_a', '\x', '2022-11-02 13:05:21.445455+02', '2022-11-02 13:05:21.445455+02', 'active', '{}', false) ON CONFLICT DO NOTHING;
INSERT INTO public.organization_members VALUES ('580ed397-727d-4aaf-950a-51f89f556c24', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
INSERT INTO organization_members VALUES ('580ed397-727d-4aaf-950a-51f89f556c24', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('580ed397-727d-4aaf-950a-51f89f556c24', 'github', '500', '');
INSERT INTO public.users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
INSERT INTO users(id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, deleted)
VALUES ('c813366b-2fde-45ae-920c-101c3ad6a1e1', 'dup_link_b@coder.com', 'dupe_b', '\x', '2022-11-02 13:05:21.445455+02', '2022-11-02 13:05:21.445455+02', 'active', '{}', false) ON CONFLICT DO NOTHING;
INSERT INTO public.organization_members VALUES ('c813366b-2fde-45ae-920c-101c3ad6a1e1', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
INSERT INTO public.user_links(user_id, login_type, linked_id, oauth_access_token)
INSERT INTO organization_members VALUES ('c813366b-2fde-45ae-920c-101c3ad6a1e1', 'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', '2022-11-02 13:05:21.447595+02', '2022-11-02 13:05:21.447595+02', '{}') ON CONFLICT DO NOTHING;
INSERT INTO user_links(user_id, login_type, linked_id, oauth_access_token)
VALUES('c813366b-2fde-45ae-920c-101c3ad6a1e1', 'github', '500', '');
@@ -1,4 +1,4 @@
INSERT INTO public.workspace_app_stats (
INSERT INTO workspace_app_stats (
id,
user_id,
workspace_id,
@@ -1,5 +1,5 @@
INSERT INTO
public.workspace_modules (
workspace_modules (
id,
job_id,
transition,
@@ -1,15 +1,15 @@
INSERT INTO public.organizations (id, name, description, created_at, updated_at, is_default, display_name, icon) VALUES ('20362772-802a-4a72-8e4f-3648b4bfd168', 'strange_hopper58', 'wizardly_stonebraker60', '2025-02-07 07:46:19.507551 +00:00', '2025-02-07 07:46:19.507552 +00:00', false, 'competent_rhodes59', '');
INSERT INTO organizations (id, name, description, created_at, updated_at, is_default, display_name, icon) VALUES ('20362772-802a-4a72-8e4f-3648b4bfd168', 'strange_hopper58', 'wizardly_stonebraker60', '2025-02-07 07:46:19.507551 +00:00', '2025-02-07 07:46:19.507552 +00:00', false, 'competent_rhodes59', '');
INSERT INTO public.users (id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, login_type, avatar_url, deleted, last_seen_at, quiet_hours_schedule, theme_preference, name, github_com_user_id, hashed_one_time_passcode, one_time_passcode_expires_at) VALUES ('6c353aac-20de-467b-bdfb-3c30a37adcd2', 'vigorous_murdock61', 'affectionate_hawking62', 'lqTu9C5363AwD7NVNH6noaGjp91XIuZJ', '2025-02-07 07:46:19.510861 +00:00', '2025-02-07 07:46:19.512949 +00:00', 'active', '{}', 'password', '', false, '0001-01-01 00:00:00.000000', '', '', 'vigilant_hugle63', null, null, null);
INSERT INTO users (id, email, username, hashed_password, created_at, updated_at, status, rbac_roles, login_type, avatar_url, deleted, last_seen_at, quiet_hours_schedule, theme_preference, name, github_com_user_id, hashed_one_time_passcode, one_time_passcode_expires_at) VALUES ('6c353aac-20de-467b-bdfb-3c30a37adcd2', 'vigorous_murdock61', 'affectionate_hawking62', 'lqTu9C5363AwD7NVNH6noaGjp91XIuZJ', '2025-02-07 07:46:19.510861 +00:00', '2025-02-07 07:46:19.512949 +00:00', 'active', '{}', 'password', '', false, '0001-01-01 00:00:00.000000', '', '', 'vigilant_hugle63', null, null, null);
INSERT INTO public.templates (id, created_at, updated_at, organization_id, deleted, name, provisioner, active_version_id, description, default_ttl, created_by, icon, user_acl, group_acl, display_name, allow_user_cancel_workspace_jobs, allow_user_autostart, allow_user_autostop, failure_ttl, time_til_dormant, time_til_dormant_autodelete, autostop_requirement_days_of_week, autostop_requirement_weeks, autostart_block_days_of_week, require_active_version, deprecated, activity_bump, max_port_sharing_level) VALUES ('6b298946-7a4f-47ac-9158-b03b08740a41', '2025-02-07 07:46:19.513317 +00:00', '2025-02-07 07:46:19.513317 +00:00', '20362772-802a-4a72-8e4f-3648b4bfd168', false, 'modest_leakey64', 'echo', 'e6cfa2a4-e4cf-4182-9e19-08b975682a28', 'upbeat_wright65', 604800000000000, '6c353aac-20de-467b-bdfb-3c30a37adcd2', 'nervous_keller66', '{}', '{"20362772-802a-4a72-8e4f-3648b4bfd168": ["read", "use"]}', 'determined_aryabhata67', false, true, true, 0, 0, 0, 0, 0, 0, false, '', 3600000000000, 'owner');
INSERT INTO public.template_versions (id, template_id, organization_id, created_at, updated_at, name, readme, job_id, created_by, external_auth_providers, message, archived, source_example_id) VALUES ('af58bd62-428c-4c33-849b-d43a3be07d93', '6b298946-7a4f-47ac-9158-b03b08740a41', '20362772-802a-4a72-8e4f-3648b4bfd168', '2025-02-07 07:46:19.514782 +00:00', '2025-02-07 07:46:19.514782 +00:00', 'distracted_shockley68', 'sleepy_turing69', 'f2e2ea1c-5aa3-4a1d-8778-2e5071efae59', '6c353aac-20de-467b-bdfb-3c30a37adcd2', '[]', '', false, null);
INSERT INTO templates (id, created_at, updated_at, organization_id, deleted, name, provisioner, active_version_id, description, default_ttl, created_by, icon, user_acl, group_acl, display_name, allow_user_cancel_workspace_jobs, allow_user_autostart, allow_user_autostop, failure_ttl, time_til_dormant, time_til_dormant_autodelete, autostop_requirement_days_of_week, autostop_requirement_weeks, autostart_block_days_of_week, require_active_version, deprecated, activity_bump, max_port_sharing_level) VALUES ('6b298946-7a4f-47ac-9158-b03b08740a41', '2025-02-07 07:46:19.513317 +00:00', '2025-02-07 07:46:19.513317 +00:00', '20362772-802a-4a72-8e4f-3648b4bfd168', false, 'modest_leakey64', 'echo', 'e6cfa2a4-e4cf-4182-9e19-08b975682a28', 'upbeat_wright65', 604800000000000, '6c353aac-20de-467b-bdfb-3c30a37adcd2', 'nervous_keller66', '{}', '{"20362772-802a-4a72-8e4f-3648b4bfd168": ["read", "use"]}', 'determined_aryabhata67', false, true, true, 0, 0, 0, 0, 0, 0, false, '', 3600000000000, 'owner');
INSERT INTO template_versions (id, template_id, organization_id, created_at, updated_at, name, readme, job_id, created_by, external_auth_providers, message, archived, source_example_id) VALUES ('af58bd62-428c-4c33-849b-d43a3be07d93', '6b298946-7a4f-47ac-9158-b03b08740a41', '20362772-802a-4a72-8e4f-3648b4bfd168', '2025-02-07 07:46:19.514782 +00:00', '2025-02-07 07:46:19.514782 +00:00', 'distracted_shockley68', 'sleepy_turing69', 'f2e2ea1c-5aa3-4a1d-8778-2e5071efae59', '6c353aac-20de-467b-bdfb-3c30a37adcd2', '[]', '', false, null);
INSERT INTO public.template_version_presets (id, template_version_id, name, created_at) VALUES ('28b42cc0-c4fe-4907-a0fe-e4d20f1e9bfe', 'af58bd62-428c-4c33-849b-d43a3be07d93', 'test', '0001-01-01 00:00:00.000000 +00:00');
INSERT INTO template_version_presets (id, template_version_id, name, created_at) VALUES ('28b42cc0-c4fe-4907-a0fe-e4d20f1e9bfe', 'af58bd62-428c-4c33-849b-d43a3be07d93', 'test', '0001-01-01 00:00:00.000000 +00:00');
-- Add presets with the same template version ID and name
-- to ensure they're correctly handled by the 00031*_preset_prebuilds migration.
INSERT INTO public.template_version_presets (
INSERT INTO template_version_presets (
id, template_version_id, name, created_at
)
VALUES (
@@ -19,7 +19,7 @@ VALUES (
'0001-01-01 00:00:00.000000 +00:00'
);
INSERT INTO public.template_version_presets (
INSERT INTO template_version_presets (
id, template_version_id, name, created_at
)
VALUES (
@@ -29,4 +29,4 @@ VALUES (
'0001-01-01 00:00:00.000000 +00:00'
);
INSERT INTO public.template_version_preset_parameters (id, template_version_preset_id, name, value) VALUES ('ea90ccd2-5024-459e-87e4-879afd24de0f', '28b42cc0-c4fe-4907-a0fe-e4d20f1e9bfe', 'test', 'test');
INSERT INTO template_version_preset_parameters (id, template_version_preset_id, name, value) VALUES ('ea90ccd2-5024-459e-87e4-879afd24de0f', '28b42cc0-c4fe-4907-a0fe-e4d20f1e9bfe', 'test', 'test');
@@ -1,4 +1,4 @@
INSERT INTO public.tasks VALUES (
INSERT INTO tasks VALUES (
'f5a1c3e4-8b2d-4f6a-9d7e-2a8b5c9e1f3d', -- id
'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1', -- organization_id
'30095c71-380b-457a-8995-97b8ee6e5307', -- owner_id
@@ -11,7 +11,7 @@ INSERT INTO public.tasks VALUES (
NULL -- deleted_at
) ON CONFLICT DO NOTHING;
INSERT INTO public.task_workspace_apps VALUES (
INSERT INTO task_workspace_apps VALUES (
'f5a1c3e4-8b2d-4f6a-9d7e-2a8b5c9e1f3d', -- task_id
'a8c0b8c5-c9a8-4f33-93a4-8142e6858244', -- workspace_build_id
'8fa17bbd-c48c-44c7-91ae-d4acbc755fad', -- workspace_agent_id
@@ -1,4 +1,4 @@
INSERT INTO public.task_workspace_apps VALUES (
INSERT INTO task_workspace_apps VALUES (
'f5a1c3e4-8b2d-4f6a-9d7e-2a8b5c9e1f3d', -- task_id
NULL, -- workspace_agent_id
NULL, -- workspace_app_id
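The migration-fixture hunks above all make the same change: hard-coded `public.` schema qualifiers are dropped so every object resolves through the connection's `search_path`, which is what allows Coder to run against a non-public schema. A hedged Go sketch of how a search_path-scoped connection behaves — the DSN, credentials, and schema name are illustrative, and a reachable Postgres server is assumed:

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	// With search_path=coder in the DSN, the unqualified table name
	// "users" resolves to coder.users rather than public.users.
	db, err := sql.Open("postgres",
		"postgres://coder:coder@localhost/coder?sslmode=disable&search_path=coder")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	var schema string
	// current_schema() reports the first resolvable entry of search_path.
	if err := db.QueryRow(`SELECT current_schema()`).Scan(&schema); err != nil {
		panic(err)
	}
	fmt.Println("unqualified names resolve in schema:", schema) // coder
}
```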
+3 -8
@@ -104,13 +104,8 @@ type sqlcQuerier interface {
DeleteOAuth2ProviderAppSecretByID(ctx context.Context, id uuid.UUID) error
DeleteOAuth2ProviderAppTokensByAppAndUserID(ctx context.Context, arg DeleteOAuth2ProviderAppTokensByAppAndUserIDParams) error
// Cumulative count.
DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int64, error)
DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int32, error)
DeleteOldAuditLogConnectionEvents(ctx context.Context, arg DeleteOldAuditLogConnectionEventsParams) error
// Deletes old audit logs based on retention policy, excluding deprecated
// connection events (connect, disconnect, open, close) which are handled
// separately by DeleteOldAuditLogConnectionEvents.
DeleteOldAuditLogs(ctx context.Context, arg DeleteOldAuditLogsParams) (int64, error)
DeleteOldConnectionLogs(ctx context.Context, arg DeleteOldConnectionLogsParams) (int64, error)
// Delete all notification messages which have not been updated for over a week.
DeleteOldNotificationMessages(ctx context.Context) error
// Delete provisioner daemons that have been created at least a week ago
@@ -120,10 +115,10 @@ type sqlcQuerier interface {
DeleteOldProvisionerDaemons(ctx context.Context) error
// Deletes old telemetry locks from the telemetry_locks table.
DeleteOldTelemetryLocks(ctx context.Context, periodEndingAtBefore time.Time) error
// If an agent hasn't connected within the retention period, we purge its logs.
// If an agent hasn't connected in the last 7 days, we purge it's logs.
// Exception: if the logs are related to the latest build, we keep those around.
// Logs can take up a lot of space, so it's important we clean up frequently.
DeleteOldWorkspaceAgentLogs(ctx context.Context, threshold time.Time) (int64, error)
DeleteOldWorkspaceAgentLogs(ctx context.Context, threshold time.Time) error
DeleteOldWorkspaceAgentStats(ctx context.Context) error
DeleteOrganizationMember(ctx context.Context, arg DeleteOrganizationMemberParams) error
DeleteProvisionerKey(ctx context.Context, id uuid.UUID) error
+25 -85
@@ -354,18 +354,17 @@ WITH
WHERE id IN (SELECT id FROM to_delete)
RETURNING 1
)
SELECT (
SELECT
(SELECT COUNT(*) FROM tool_usages) +
(SELECT COUNT(*) FROM token_usages) +
(SELECT COUNT(*) FROM user_prompts) +
(SELECT COUNT(*) FROM interceptions)
)::bigint as total_deleted
(SELECT COUNT(*) FROM interceptions) as total_deleted
`
// Cumulative count.
func (q *sqlQuerier) DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int64, error) {
func (q *sqlQuerier) DeleteOldAIBridgeRecords(ctx context.Context, beforeTime time.Time) (int32, error) {
row := q.db.QueryRowContext(ctx, deleteOldAIBridgeRecords, beforeTime)
var total_deleted int64
var total_deleted int32
err := row.Scan(&total_deleted)
return total_deleted, err
}
@@ -1108,20 +1107,24 @@ func (q *sqlQuerier) DeleteApplicationConnectAPIKeysByUserID(ctx context.Context
return err
}
const deleteExpiredAPIKeys = `-- name: DeleteExpiredAPIKeys :execrows
const deleteExpiredAPIKeys = `-- name: DeleteExpiredAPIKeys :one
WITH expired_keys AS (
SELECT id
FROM api_keys
-- expired keys only
WHERE expires_at < $1::timestamptz
LIMIT $2
)
DELETE FROM
api_keys
USING
expired_keys
WHERE
api_keys.id = expired_keys.id
),
deleted_rows AS (
DELETE FROM
api_keys
USING
expired_keys
WHERE
api_keys.id = expired_keys.id
RETURNING api_keys.id
)
SELECT COUNT(deleted_rows.id) AS deleted_count FROM deleted_rows
`
type DeleteExpiredAPIKeysParams struct {
@@ -1130,11 +1133,10 @@ type DeleteExpiredAPIKeysParams struct {
}
func (q *sqlQuerier) DeleteExpiredAPIKeys(ctx context.Context, arg DeleteExpiredAPIKeysParams) (int64, error) {
result, err := q.db.ExecContext(ctx, deleteExpiredAPIKeys, arg.Before, arg.LimitCount)
if err != nil {
return 0, err
}
return result.RowsAffected()
row := q.db.QueryRowContext(ctx, deleteExpiredAPIKeys, arg.Before, arg.LimitCount)
var deleted_count int64
err := row.Scan(&deleted_count)
return deleted_count, err
}
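The two generated bodies above illustrate the difference between sqlc's `:execrows` and `:one` shapes: one runs a plain `DELETE` and asks the driver for `RowsAffected`, the other has the statement itself return a count by collecting the deleted ids in a CTE. A sketch of both shapes with plain `database/sql`; the table and column names are illustrative:

```go
package main

import (
	"context"
	"database/sql"
	"time"
)

// :execrows shape: run the DELETE, then ask the driver how many rows it hit.
func deleteViaRowsAffected(ctx context.Context, db *sql.DB, before time.Time) (int64, error) {
	res, err := db.ExecContext(ctx,
		`DELETE FROM api_keys WHERE expires_at < $1`, before)
	if err != nil {
		return 0, err
	}
	return res.RowsAffected()
}

// :one shape: the statement returns the count itself, so the caller scans
// a single row instead of relying on the driver's RowsAffected.
func deleteViaReturningCount(ctx context.Context, db *sql.DB, before time.Time) (int64, error) {
	var n int64
	err := db.QueryRowContext(ctx, `
WITH deleted AS (
    DELETE FROM api_keys WHERE expires_at < $1 RETURNING id
)
SELECT COUNT(*) FROM deleted`, before).Scan(&n)
	return n, err
}

func main() {} // both helpers would be called inside the purge transaction
```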
const expirePrebuildsAPIKeys = `-- name: ExpirePrebuildsAPIKeys :exec
@@ -1635,37 +1637,6 @@ func (q *sqlQuerier) DeleteOldAuditLogConnectionEvents(ctx context.Context, arg
return err
}
const deleteOldAuditLogs = `-- name: DeleteOldAuditLogs :execrows
WITH old_logs AS (
SELECT id
FROM audit_logs
WHERE
"time" < $1::timestamp with time zone
AND action NOT IN ('connect', 'disconnect', 'open', 'close')
ORDER BY "time" ASC
LIMIT $2
)
DELETE FROM audit_logs
USING old_logs
WHERE audit_logs.id = old_logs.id
`
type DeleteOldAuditLogsParams struct {
BeforeTime time.Time `db:"before_time" json:"before_time"`
LimitCount int32 `db:"limit_count" json:"limit_count"`
}
// Deletes old audit logs based on retention policy, excluding deprecated
// connection events (connect, disconnect, open, close) which are handled
// separately by DeleteOldAuditLogConnectionEvents.
func (q *sqlQuerier) DeleteOldAuditLogs(ctx context.Context, arg DeleteOldAuditLogsParams) (int64, error) {
result, err := q.db.ExecContext(ctx, deleteOldAuditLogs, arg.BeforeTime, arg.LimitCount)
if err != nil {
return 0, err
}
return result.RowsAffected()
}
const getAuditLogsOffset = `-- name: GetAuditLogsOffset :many
SELECT audit_logs.id, audit_logs.time, audit_logs.user_id, audit_logs.organization_id, audit_logs.ip, audit_logs.user_agent, audit_logs.resource_type, audit_logs.resource_id, audit_logs.resource_target, audit_logs.action, audit_logs.diff, audit_logs.status_code, audit_logs.additional_fields, audit_logs.request_id, audit_logs.resource_icon,
-- sqlc.embed(users) would be nice but it does not seem to play well with
@@ -2124,32 +2095,6 @@ func (q *sqlQuerier) CountConnectionLogs(ctx context.Context, arg CountConnectio
return count, err
}
const deleteOldConnectionLogs = `-- name: DeleteOldConnectionLogs :execrows
WITH old_logs AS (
SELECT id
FROM connection_logs
WHERE connect_time < $1::timestamp with time zone
ORDER BY connect_time ASC
LIMIT $2
)
DELETE FROM connection_logs
USING old_logs
WHERE connection_logs.id = old_logs.id
`
type DeleteOldConnectionLogsParams struct {
BeforeTime time.Time `db:"before_time" json:"before_time"`
LimitCount int32 `db:"limit_count" json:"limit_count"`
}
func (q *sqlQuerier) DeleteOldConnectionLogs(ctx context.Context, arg DeleteOldConnectionLogsParams) (int64, error) {
result, err := q.db.ExecContext(ctx, deleteOldConnectionLogs, arg.BeforeTime, arg.LimitCount)
if err != nil {
return 0, err
}
return result.RowsAffected()
}
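Both removed queries bound each pass with a CTE that selects at most `limit_count` ids, oldest first, before deleting, so a single tick never holds a long-running delete. A caller that needs a full sweep can loop until a batch comes back short, sketched below; the batch function stands in for `DeleteOldConnectionLogs`/`DeleteOldAuditLogs` and is not the actual API:

```go
package main

import (
	"context"
	"fmt"
)

// purgeToCompletion repeatedly runs a LIMIT-bounded delete until a batch
// returns fewer rows than the limit, meaning nothing old remains.
func purgeToCompletion(ctx context.Context, limit int64,
	deleteBatch func(ctx context.Context, limit int64) (int64, error),
) (total int64, err error) {
	for {
		n, err := deleteBatch(ctx, limit)
		if err != nil {
			return total, err
		}
		total += n
		if n < limit {
			return total, nil
		}
	}
}

func main() {
	remaining := int64(2500)
	total, _ := purgeToCompletion(context.Background(), 1000,
		func(ctx context.Context, limit int64) (int64, error) {
			n := min(remaining, limit) // Go 1.21 builtin min
			remaining -= n
			return n, nil
		})
	fmt.Println("deleted", total) // deleted 2500
}
```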
const getConnectionLogsOffset = `-- name: GetConnectionLogsOffset :many
SELECT
connection_logs.id, connection_logs.connect_time, connection_logs.organization_id, connection_logs.workspace_owner_id, connection_logs.workspace_id, connection_logs.workspace_name, connection_logs.agent_name, connection_logs.type, connection_logs.ip, connection_logs.code, connection_logs.user_agent, connection_logs.user_id, connection_logs.slug_or_port, connection_logs.connection_id, connection_logs.disconnect_time, connection_logs.disconnect_reason,
@@ -17847,7 +17792,7 @@ func (q *sqlQuerier) UpdateVolumeResourceMonitor(ctx context.Context, arg Update
return err
}
const deleteOldWorkspaceAgentLogs = `-- name: DeleteOldWorkspaceAgentLogs :execrows
const deleteOldWorkspaceAgentLogs = `-- name: DeleteOldWorkspaceAgentLogs :exec
WITH
latest_builds AS (
SELECT
@@ -17890,15 +17835,12 @@ WITH
DELETE FROM workspace_agent_logs WHERE agent_id IN (SELECT id FROM old_agents)
`
// If an agent hasn't connected within the retention period, we purge its logs.
// If an agent hasn't connected in the last 7 days, we purge it's logs.
// Exception: if the logs are related to the latest build, we keep those around.
// Logs can take up a lot of space, so it's important we clean up frequently.
func (q *sqlQuerier) DeleteOldWorkspaceAgentLogs(ctx context.Context, threshold time.Time) (int64, error) {
result, err := q.db.ExecContext(ctx, deleteOldWorkspaceAgentLogs, threshold)
if err != nil {
return 0, err
}
return result.RowsAffected()
func (q *sqlQuerier) DeleteOldWorkspaceAgentLogs(ctx context.Context, threshold time.Time) error {
_, err := q.db.ExecContext(ctx, deleteOldWorkspaceAgentLogs, threshold)
return err
}
const deleteWorkspaceSubAgentByID = `-- name: DeleteWorkspaceSubAgentByID :exec
@@ -20246,7 +20188,6 @@ func (q *sqlQuerier) GetWorkspaceAppByAgentIDAndSlug(ctx context.Context, arg Ge
const getWorkspaceAppStatusesByAppIDs = `-- name: GetWorkspaceAppStatusesByAppIDs :many
SELECT id, created_at, agent_id, app_id, workspace_id, state, message, uri FROM workspace_app_statuses WHERE app_id = ANY($1 :: uuid [ ])
ORDER BY created_at DESC, id DESC
`
func (q *sqlQuerier) GetWorkspaceAppStatusesByAppIDs(ctx context.Context, ids []uuid.UUID) ([]WorkspaceAppStatus, error) {
@@ -23854,7 +23795,6 @@ SET
WHERE
template_id = $3
AND dormant_at IS NOT NULL
AND deleted = false
-- Prebuilt workspaces (identified by having the prebuilds system user as owner_id)
-- should not have their dormant or deleting at set, as these are handled by the
-- prebuilds reconciliation loop.
+2 -3
@@ -360,9 +360,8 @@ WITH
RETURNING 1
)
-- Cumulative count.
SELECT (
SELECT
(SELECT COUNT(*) FROM tool_usages) +
(SELECT COUNT(*) FROM token_usages) +
(SELECT COUNT(*) FROM user_prompts) +
(SELECT COUNT(*) FROM interceptions)
)::bigint as total_deleted;
(SELECT COUNT(*) FROM interceptions) as total_deleted;
+13 -8
@@ -85,20 +85,25 @@ DELETE FROM
WHERE
user_id = $1;
-- name: DeleteExpiredAPIKeys :execrows
-- name: DeleteExpiredAPIKeys :one
WITH expired_keys AS (
SELECT id
FROM api_keys
-- expired keys only
WHERE expires_at < @before::timestamptz
LIMIT @limit_count
)
DELETE FROM
api_keys
USING
expired_keys
WHERE
api_keys.id = expired_keys.id;
),
deleted_rows AS (
DELETE FROM
api_keys
USING
expired_keys
WHERE
api_keys.id = expired_keys.id
RETURNING api_keys.id
)
SELECT COUNT(deleted_rows.id) AS deleted_count FROM deleted_rows;
;
-- name: ExpirePrebuildsAPIKeys :exec
-- Firstly, collect api_keys owned by the prebuilds user that correlate
-17
@@ -253,20 +253,3 @@ WHERE id IN (
ORDER BY "time" ASC
LIMIT @limit_count
);
-- name: DeleteOldAuditLogs :execrows
-- Deletes old audit logs based on retention policy, excluding deprecated
-- connection events (connect, disconnect, open, close) which are handled
-- separately by DeleteOldAuditLogConnectionEvents.
WITH old_logs AS (
SELECT id
FROM audit_logs
WHERE
"time" < @before_time::timestamp with time zone
AND action NOT IN ('connect', 'disconnect', 'open', 'close')
ORDER BY "time" ASC
LIMIT @limit_count
)
DELETE FROM audit_logs
USING old_logs
WHERE audit_logs.id = old_logs.id;
@@ -239,18 +239,6 @@ WHERE
-- @authorize_filter
;
-- name: DeleteOldConnectionLogs :execrows
WITH old_logs AS (
SELECT id
FROM connection_logs
WHERE connect_time < @before_time::timestamp with time zone
ORDER BY connect_time ASC
LIMIT @limit_count
)
DELETE FROM connection_logs
USING old_logs
WHERE connection_logs.id = old_logs.id;
-- name: UpsertConnectionLog :one
INSERT INTO connection_logs (
id,
+2 -2
@@ -199,10 +199,10 @@ INSERT INTO
-- name: GetWorkspaceAgentLogSourcesByAgentIDs :many
SELECT * FROM workspace_agent_log_sources WHERE workspace_agent_id = ANY(@ids :: uuid [ ]);
-- If an agent hasn't connected within the retention period, we purge its logs.
-- If an agent hasn't connected in the last 7 days, we purge it's logs.
-- Exception: if the logs are related to the latest build, we keep those around.
-- Logs can take up a lot of space, so it's important we clean up frequently.
-- name: DeleteOldWorkspaceAgentLogs :execrows
-- name: DeleteOldWorkspaceAgentLogs :exec
WITH
latest_builds AS (
SELECT
+1 -2
@@ -71,8 +71,7 @@ VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
RETURNING *;
-- name: GetWorkspaceAppStatusesByAppIDs :many
SELECT * FROM workspace_app_statuses WHERE app_id = ANY(@ids :: uuid [ ])
ORDER BY created_at DESC, id DESC;
SELECT * FROM workspace_app_statuses WHERE app_id = ANY(@ids :: uuid [ ]);
-- name: GetLatestWorkspaceAppStatusByAppID :one
SELECT *
-1
@@ -846,7 +846,6 @@ SET
WHERE
template_id = @template_id
AND dormant_at IS NOT NULL
AND deleted = false
-- Prebuilt workspaces (identified by having the prebuilds system user as owner_id)
-- should not have their dormant or deleting at set, as these are handled by the
-- prebuilds reconciliation loop.
+1 -3
@@ -153,9 +153,7 @@ func freeUDPPort(t *testing.T) uint16 {
     })
     require.NoError(t, err, "listen on random UDP port")
-    localAddr := l.LocalAddr()
-    require.NotNil(t, localAddr, "local address is nil")
-    _, port, err := net.SplitHostPort(localAddr.String())
+    _, port, err := net.SplitHostPort(l.LocalAddr().String())
     require.NoError(t, err, "split host port")
     portUint, err := strconv.ParseUint(port, 10, 16)
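The helper being simplified here relies on a common trick: binding UDP port 0 asks the kernel for any free port, and `LocalAddr()` reports which one it picked. A standalone sketch of the same pattern outside the test harness:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// freeUDPPort binds UDP port 0 so the kernel picks an unused port, reads
// the chosen port back from LocalAddr, then releases the socket. There is
// an inherent race: the port may be taken again before the caller binds it.
func freeUDPPort() (uint16, error) {
	l, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1), Port: 0})
	if err != nil {
		return 0, err
	}
	defer l.Close()

	_, port, err := net.SplitHostPort(l.LocalAddr().String())
	if err != nil {
		return 0, err
	}
	p, err := strconv.ParseUint(port, 10, 16)
	if err != nil {
		return 0, err
	}
	return uint16(p), nil
}

func main() {
	p, err := freeUDPPort()
	if err != nil {
		panic(err)
	}
	fmt.Println("free UDP port:", p)
}
```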
+15 -5
@@ -21,7 +21,6 @@ import (
     "github.com/coder/coder/v2/coderd/pubsub"
     markdown "github.com/coder/coder/v2/coderd/render"
     "github.com/coder/coder/v2/codersdk"
-    "github.com/coder/coder/v2/codersdk/wsjson"
     "github.com/coder/websocket"
 )
@@ -127,6 +126,7 @@ func (api *API) watchInboxNotifications(rw http.ResponseWriter, r *http.Request)
         templates  = p.UUIDs(vals, []uuid.UUID{}, "templates")
         readStatus = p.String(vals, "all", "read_status")
         format     = p.String(vals, notificationFormatMarkdown, "format")
+        logger     = api.Logger.Named("inbox_notifications_watcher")
     )
     p.ErrorExcessParams(vals)
     if len(p.Errors) > 0 {
@@ -214,11 +214,17 @@ func (api *API) watchInboxNotifications(rw http.ResponseWriter, r *http.Request)
         return
     }
-    go httpapi.Heartbeat(ctx, conn)
-    defer conn.Close(websocket.StatusNormalClosure, "connection closed")
+    ctx, cancel := context.WithCancel(ctx)
+    defer cancel()
-    encoder := wsjson.NewEncoder[codersdk.GetInboxNotificationResponse](conn, websocket.MessageText)
-    defer encoder.Close(websocket.StatusNormalClosure)
+    _ = conn.CloseRead(context.Background())
+    ctx, wsNetConn := codersdk.WebsocketNetConn(ctx, conn, websocket.MessageText)
+    defer wsNetConn.Close()
+    go httpapi.HeartbeatClose(ctx, logger, cancel, conn)
+    encoder := json.NewEncoder(wsNetConn)
     // Log the request immediately instead of after it completes.
     if rl := loggermw.RequestLoggerFromContext(ctx); rl != nil {
@@ -227,8 +233,12 @@ func (api *API) watchInboxNotifications(rw http.ResponseWriter, r *http.Request)
     for {
         select {
+        case <-api.ctx.Done():
+            return
         case <-ctx.Done():
             return
         case notif := <-notificationCh:
             unreadCount, err := api.Database.CountUnreadInboxNotificationsByUserID(ctx, apikey.UserID)
             if err != nil {
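The `HeartbeatClose` call wires the request's `cancel` function into the heartbeat goroutine, so a failed ping tears down the `select` loop above instead of leaving it parked on `notificationCh` after the client has gone away. A minimal sketch of what a helper like this can look like; it illustrates the pattern only and is not coder's actual `httpapi` implementation (the standard-library `slog` stands in for coder's logger, and the 15-second interval is an assumption):

```go
package wsutil

import (
	"context"
	"log/slog"
	"time"

	"github.com/coder/websocket"
)

// HeartbeatClose pings conn on an interval and, on the first failed ping,
// cancels the request context and closes the connection, so any goroutine
// blocked on ctx.Done() (like the notification loop) unwinds promptly.
// The 15-second interval is an assumption, not coder's tuned value.
func HeartbeatClose(ctx context.Context, logger *slog.Logger, cancel context.CancelFunc, conn *websocket.Conn) {
	ticker := time.NewTicker(15 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
		if err := conn.Ping(ctx); err != nil {
			logger.Info("websocket ping failed, canceling request", "error", err)
			cancel()
			_ = conn.Close(websocket.StatusGoingAway, "ping failed")
			return
		}
	}
}
```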
+1 -3
@@ -87,9 +87,7 @@ func (c *Cache) refreshTemplateBuildTimes(ctx context.Context) error {
     //nolint:gocritic // This is a system service.
     ctx = dbauthz.AsSystemRestricted(ctx)
-    templates, err := c.database.GetTemplatesWithFilter(ctx, database.GetTemplatesWithFilterParams{
-        Deleted: false,
-    })
+    templates, err := c.database.GetTemplates(ctx)
     if err != nil {
         return err
     }
+8 -24
@@ -16,8 +16,6 @@ import (
     agentproto "github.com/coder/coder/v2/agent/proto"
     "github.com/coder/coder/v2/coderd/agentmetrics"
     "github.com/coder/coder/v2/coderd/pproflabel"
-    "github.com/coder/quartz"
 )
 const (
@@ -49,7 +47,6 @@ type MetricsAggregator struct {
     log                    slog.Logger
     metricsCleanupInterval time.Duration
-    clock                  quartz.Clock
     collectCh chan (chan []prometheus.Metric)
     updateCh  chan updateRequest
@@ -154,7 +151,7 @@ func (am *annotatedMetric) shallowCopy() annotatedMetric {
     }
 }
-func NewMetricsAggregator(logger slog.Logger, registerer prometheus.Registerer, duration time.Duration, aggregateByLabels []string, options ...func(*MetricsAggregator)) (*MetricsAggregator, error) {
+func NewMetricsAggregator(logger slog.Logger, registerer prometheus.Registerer, duration time.Duration, aggregateByLabels []string) (*MetricsAggregator, error) {
     metricsCleanupInterval := defaultMetricsCleanupInterval
     if duration > 0 {
         metricsCleanupInterval = duration
@@ -195,10 +192,9 @@ func NewMetricsAggregator(logger slog.Logger, registerer prometheus.Registerer,
         return nil, err
     }
-    ma := &MetricsAggregator{
+    return &MetricsAggregator{
         log:                    logger.Named(loggerName),
         metricsCleanupInterval: metricsCleanupInterval,
-        clock:                  quartz.NewReal(),
         store: map[metricKey]annotatedMetric{},
@@ -210,19 +206,7 @@ func NewMetricsAggregator(logger slog.Logger, registerer prometheus.Registerer,
         cleanupHistogram:  cleanupHistogram,
         aggregateByLabels: aggregateByLabels,
-    }
-    for _, option := range options {
-        option(ma)
-    }
-    return ma, nil
-}
-func WithClock(clock quartz.Clock) func(*MetricsAggregator) {
-    return func(ma *MetricsAggregator) {
-        ma.clock = clock
-    }
+    }, nil
 }
 // labelAggregator is used to control cardinality of collected Prometheus metrics by pre-aggregating series based on given labels.
@@ -365,7 +349,7 @@ func (ma *MetricsAggregator) Run(ctx context.Context) func() {
     ma.log.Debug(ctx, "clean expired metrics")
     timer := prometheus.NewTimer(ma.cleanupHistogram)
-    now := ma.clock.Now()
+    now := time.Now()
     for key, val := range ma.store {
         if now.After(val.expiryDate) {
@@ -423,7 +407,7 @@ func (ma *MetricsAggregator) getOrCreateDesc(name string, help string, baseLabel
     }
     key := cacheKeyForDesc(name, baseLabelNames, extraLabels)
     if d, ok := ma.descCache[key]; ok {
-        d.lastUsed = ma.clock.Now()
+        d.lastUsed = time.Now()
         ma.descCache[key] = d
         return d.desc
     }
@@ -435,7 +419,7 @@ func (ma *MetricsAggregator) getOrCreateDesc(name string, help string, baseLabel
         labels[nBase+i] = l.Name
     }
     d := prometheus.NewDesc(name, help, labels, nil)
-    ma.descCache[key] = descCacheEntry{d, ma.clock.Now()}
+    ma.descCache[key] = descCacheEntry{d, time.Now()}
     return d
 }
@@ -513,7 +497,7 @@ func (ma *MetricsAggregator) Update(ctx context.Context, labels AgentMetricLabel
         templateName: labels.TemplateName,
         metrics:      metrics,
-        timestamp:    ma.clock.Now(),
+        timestamp:    time.Now(),
     }:
     case <-ctx.Done():
         ma.log.Debug(ctx, "update request is canceled")
@@ -524,7 +508,7 @@ func (ma *MetricsAggregator) Update(ctx context.Context, labels AgentMetricLabel
 // Move to a function for testability
 func (ma *MetricsAggregator) cleanupDescCache() {
-    now := ma.clock.Now()
+    now := time.Now()
     for key, entry := range ma.descCache {
         if now.Sub(entry.lastUsed) > ma.metricsCleanupInterval {
             delete(ma.descCache, key)
@@ -11,7 +11,6 @@ import (
     agentproto "github.com/coder/coder/v2/agent/proto"
     "github.com/coder/coder/v2/coderd/agentmetrics"
     "github.com/coder/coder/v2/testutil"
-    "github.com/coder/quartz"
 )
 func TestDescCache_DescExpire(t *testing.T) {
@@ -63,9 +62,8 @@ func TestDescCache_DescExpire(t *testing.T) {
 func TestDescCacheTimestampUpdate(t *testing.T) {
     t.Parallel()
-    mClock := quartz.NewMock(t)
     registry := prometheus.NewRegistry()
-    ma, err := NewMetricsAggregator(slogtest.Make(t, nil), registry, time.Hour, nil, WithClock(mClock))
+    ma, err := NewMetricsAggregator(slogtest.Make(t, nil), registry, time.Hour, nil)
     require.NoError(t, err)
     baseLabelNames := []string{"label1", "label2"}
@@ -80,9 +78,6 @@ func TestDescCacheTimestampUpdate(t *testing.T) {
     initialEntry := ma.descCache[key]
     initialTime := initialEntry.lastUsed
-    // Advance the mock clock to ensure a different timestamp
-    mClock.Advance(time.Second)
     desc2 := ma.getOrCreateDesc("test_metric", "help text", baseLabelNames, extraLabels)
     require.NotNil(t, desc2)
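One side of this diff uses the functional-options pattern: variadic `func(*MetricsAggregator)` arguments let tests swap the real clock for a `quartz` mock without widening the constructor for production callers. A distilled sketch of the pattern, assuming the `quartz` API behaves as it is used in the hunks above:

```go
package main

import (
	"fmt"
	"time"

	"github.com/coder/quartz"
)

// Aggregator keeps a timestamp per key; the clock is injectable so tests
// can advance time deterministically.
type Aggregator struct {
	clock    quartz.Clock
	lastUsed map[string]time.Time
}

// Option mutates the Aggregator during construction.
type Option func(*Aggregator)

// WithClock overrides the default real clock, mirroring the option in the
// diff above.
func WithClock(c quartz.Clock) Option {
	return func(a *Aggregator) { a.clock = c }
}

func NewAggregator(opts ...Option) *Aggregator {
	a := &Aggregator{
		clock:    quartz.NewReal(),
		lastUsed: map[string]time.Time{},
	}
	for _, opt := range opts {
		opt(a)
	}
	return a
}

func (a *Aggregator) Touch(key string) { a.lastUsed[key] = a.clock.Now() }

func main() {
	a := NewAggregator() // production path: real clock, no options
	a.Touch("metric")
	fmt.Println(a.lastUsed["metric"])
}
```

In a test one would instead pass `WithClock(quartz.NewMock(t))` and call `Advance` to step time deterministically, exactly as `TestDescCacheTimestampUpdate` does in the hunk above.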
+15 -16
@@ -2026,13 +2026,11 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
     }
     var (
-        hasAITask    bool
         unknownAppID string
         taskAppID    uuid.NullUUID
         taskAgentID  uuid.NullUUID
     )
     if tasks := jobType.WorkspaceBuild.GetAiTasks(); len(tasks) > 0 {
-        hasAITask = true
         task := tasks[0]
         if task == nil {
             return xerrors.Errorf("update ai task: task is nil")
@@ -2048,7 +2046,6 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
         if !slices.Contains(appIDs, appID) {
             unknownAppID = appID
-            hasAITask = false
         } else {
             // Only parse for valid app and agent to avoid fk violation.
             id, err := uuid.Parse(appID)
@@ -2083,7 +2080,7 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
             Level: []database.LogLevel{database.LogLevelWarn, database.LogLevelWarn, database.LogLevelWarn, database.LogLevelWarn},
             Stage: []string{"Cleaning Up", "Cleaning Up", "Cleaning Up", "Cleaning Up"},
             Output: []string{
-                fmt.Sprintf("Unknown ai_task_app_id %q. This workspace will be unable to run AI tasks. This may be due to a template configuration issue, please check with the template author.", taskAppID.UUID.String()),
+                fmt.Sprintf("Unknown ai_task_app_id %q. This workspace will be unable to run AI tasks. This may be due to a template configuration issue, please check with the template author.", unknownAppID),
                 "Template author: double-check the following:",
                 " - You have associated the coder_ai_task with a valid coder_app in your template (ref: https://registry.terraform.io/providers/coder/coder/latest/docs/resources/ai_task).",
                 " - You have associated the coder_agent with at least one other compute resource. Agents with no other associated resources are not inserted into the database.",
@@ -2098,21 +2095,23 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
         }
     }
-    if hasAITask && workspaceBuild.Transition == database.WorkspaceTransitionStart {
-        // Insert usage event for managed agents.
-        usageInserter := s.UsageInserter.Load()
-        if usageInserter != nil {
-            event := usagetypes.DCManagedAgentsV1{
-                Count: 1,
-            }
-            err = (*usageInserter).InsertDiscreteUsageEvent(ctx, db, event)
-            if err != nil {
-                return xerrors.Errorf("insert %q event: %w", event.EventType(), err)
+    var hasAITask bool
+    if task, err := db.GetTaskByWorkspaceID(ctx, workspace.ID); err == nil {
+        hasAITask = true
+        if workspaceBuild.Transition == database.WorkspaceTransitionStart {
+            // Insert usage event for managed agents.
+            usageInserter := s.UsageInserter.Load()
+            if usageInserter != nil {
+                event := usagetypes.DCManagedAgentsV1{
+                    Count: 1,
+                }
+                err = (*usageInserter).InsertDiscreteUsageEvent(ctx, db, event)
+                if err != nil {
+                    return xerrors.Errorf("insert %q event: %w", event.EventType(), err)
+                }
             }
         }
     }
     if task, err := db.GetTaskByWorkspaceID(ctx, workspace.ID); err == nil {
         // Irrespective of whether the agent or sidebar app is present,
         // perform the upsert to ensure a link between the task and
         // workspace build. Linking the task to the build is typically
@@ -2878,7 +2878,7 @@
     sidebarAppID := uuid.New()
     for _, tc := range []testcase{
         {
-            name:       "has_ai_task is false by default",
+            name:       "has_ai_task is false if task_id is nil",
             transition: database.WorkspaceTransitionStart,
             input: &proto.CompletedJob_WorkspaceBuild{
                 // No AiTasks defined.
@@ -2887,6 +2887,37 @@
             expectHasAiTask:  false,
             expectUsageEvent: false,
         },
+        {
+            name:       "has_ai_task is false even if there are coder_ai_task resources, but no task_id",
+            transition: database.WorkspaceTransitionStart,
+            input: &proto.CompletedJob_WorkspaceBuild{
+                AiTasks: []*sdkproto.AITask{
+                    {
+                        Id:    uuid.NewString(),
+                        AppId: sidebarAppID.String(),
+                    },
+                },
+                Resources: []*sdkproto.Resource{
+                    {
+                        Agents: []*sdkproto.Agent{
+                            {
+                                Id:   uuid.NewString(),
+                                Name: "a",
+                                Apps: []*sdkproto.App{
+                                    {
+                                        Id:   sidebarAppID.String(),
+                                        Slug: "test-app",
+                                    },
+                                },
+                            },
+                        },
+                    },
+                },
+            },
+            isTask:           false,
+            expectHasAiTask:  false,
+            expectUsageEvent: false,
+        },
         {
             name:       "has_ai_task is set to true",
             transition: database.WorkspaceTransitionStart,
@@ -2964,15 +2995,17 @@
                     {
                         Id: uuid.NewString(),
                         // Non-existing app ID would previously trigger a FK violation.
-                        // Now it should just be ignored.
+                        // Now it will trigger a warning instead in the provisioner logs.
                         AppId: sidebarAppID.String(),
                     },
                 },
             },
             isTask:           true,
             expectTaskStatus: database.TaskStatusInitializing,
-            expectHasAiTask:  false,
-            expectUsageEvent: false,
+            // You can still "sort of" use a task in this state, but as we don't have
+            // the correct app ID you won't be able to communicate with it via Coder.
+            expectHasAiTask:  true,
+            expectUsageEvent: true,
         },
         {
             name: "has_ai_task is set to true, but transition is not start",
@@ -3007,19 +3040,6 @@
             expectHasAiTask:  true,
             expectUsageEvent: false,
         },
-        {
-            name:       "current build does not have ai task but previous build did",
-            seedFunc:   seedPreviousWorkspaceStartWithAITask,
-            transition: database.WorkspaceTransitionStop,
-            input: &proto.CompletedJob_WorkspaceBuild{
-                AiTasks:   []*sdkproto.AITask{},
-                Resources: []*sdkproto.Resource{},
-            },
-            isTask:           true,
-            expectTaskStatus: database.TaskStatusPaused,
-            expectHasAiTask:  false, // We no longer inherit this from the previous build.
-            expectUsageEvent: false,
-        },
     } {
         t.Run(tc.name, func(t *testing.T) {
             t.Parallel()
@@ -4410,62 +4430,3 @@ func (f *fakeUsageInserter) InsertDiscreteUsageEvent(_ context.Context, _ databa
     f.collectedEvents = append(f.collectedEvents, event)
     return nil
 }
-func seedPreviousWorkspaceStartWithAITask(ctx context.Context, t testing.TB, db database.Store) error {
-    t.Helper()
-    // If the below looks slightly convoluted, that's because it is.
-    // The workspace doesn't yet have a latest build, so querying all
-    // workspaces will fail.
-    tpls, err := db.GetTemplates(ctx)
-    if err != nil {
-        return xerrors.Errorf("seedFunc: get template: %w", err)
-    }
-    if len(tpls) != 1 {
-        return xerrors.Errorf("seedFunc: expected exactly one template, got %d", len(tpls))
-    }
-    ws, err := db.GetWorkspacesByTemplateID(ctx, tpls[0].ID)
-    if err != nil {
-        return xerrors.Errorf("seedFunc: get workspaces: %w", err)
-    }
-    if len(ws) != 1 {
-        return xerrors.Errorf("seedFunc: expected exactly one workspace, got %d", len(ws))
-    }
-    w := ws[0]
-    prevJob := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{
-        OrganizationID: w.OrganizationID,
-        InitiatorID:    w.OwnerID,
-        Type:           database.ProvisionerJobTypeWorkspaceBuild,
-    })
-    tvs, err := db.GetTemplateVersionsByTemplateID(ctx, database.GetTemplateVersionsByTemplateIDParams{
-        TemplateID: tpls[0].ID,
-    })
-    if err != nil {
-        return xerrors.Errorf("seedFunc: get template version: %w", err)
-    }
-    if len(tvs) != 1 {
-        return xerrors.Errorf("seedFunc: expected exactly one template version, got %d", len(tvs))
-    }
-    if tpls[0].ActiveVersionID == uuid.Nil {
-        return xerrors.Errorf("seedFunc: active version id is nil")
-    }
-    res := dbgen.WorkspaceResource(t, db, database.WorkspaceResource{
-        JobID: prevJob.ID,
-    })
-    agt := dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
-        ResourceID: res.ID,
-    })
-    _ = dbgen.WorkspaceApp(t, db, database.WorkspaceApp{
-        AgentID: agt.ID,
-    })
-    _ = dbgen.WorkspaceBuild(t, db, database.WorkspaceBuild{
-        BuildNumber:       1,
-        HasAITask:         sql.NullBool{Valid: true, Bool: true},
-        ID:                w.ID,
-        InitiatorID:       w.OwnerID,
-        JobID:             prevJob.ID,
-        TemplateVersionID: tvs[0].ID,
-        Transition:        database.WorkspaceTransitionStart,
-        WorkspaceID:       w.ID,
-    })
-    return nil
-}
+57
@@ -849,6 +849,63 @@ func (api *API) workspaceBuildState(rw http.ResponseWriter, r *http.Request) {
     _, _ = rw.Write(workspaceBuild.ProvisionerState)
 }
+// @Summary Update workspace build state
+// @ID update-workspace-build-state
+// @Security CoderSessionToken
+// @Accept json
+// @Tags Builds
+// @Param workspacebuild path string true "Workspace build ID" format(uuid)
+// @Param request body codersdk.UpdateWorkspaceBuildStateRequest true "Request body"
+// @Success 204
+// @Router /workspacebuilds/{workspacebuild}/state [put]
+func (api *API) workspaceBuildUpdateState(rw http.ResponseWriter, r *http.Request) {
+    ctx := r.Context()
+    workspaceBuild := httpmw.WorkspaceBuildParam(r)
+    workspace, err := api.Database.GetWorkspaceByID(ctx, workspaceBuild.WorkspaceID)
+    if err != nil {
+        httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
+            Message: "No workspace exists for this job.",
+        })
+        return
+    }
+    template, err := api.Database.GetTemplateByID(ctx, workspace.TemplateID)
+    if err != nil {
+        httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
+            Message: "Failed to get template",
+            Detail:  err.Error(),
+        })
+        return
+    }
+    // You must have update permissions on the template to update the state.
+    if !api.Authorize(r, policy.ActionUpdate, template.RBACObject()) {
+        httpapi.ResourceNotFound(rw)
+        return
+    }
+    var req codersdk.UpdateWorkspaceBuildStateRequest
+    if !httpapi.Read(ctx, rw, r, &req) {
+        return
+    }
+    // Use system context since we've already verified authorization via template permissions.
+    // nolint:gocritic // System access required for provisioner state update.
+    err = api.Database.UpdateWorkspaceBuildProvisionerStateByID(dbauthz.AsSystemRestricted(ctx), database.UpdateWorkspaceBuildProvisionerStateByIDParams{
+        ID:               workspaceBuild.ID,
+        ProvisionerState: req.State,
+        UpdatedAt:        dbtime.Now(),
+    })
+    if err != nil {
+        httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
+            Message: "Failed to update workspace build state.",
+            Detail:  err.Error(),
+        })
+        return
+    }
+    rw.WriteHeader(http.StatusNoContent)
+}
 // @Summary Get workspace build timings by ID
 // @ID get-workspace-build-timings-by-id
 // @Security CoderSessionToken
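For reference, calling the new endpoint from outside could look roughly like the sketch below. The route, method, and 204 response come from the Swagger annotations above; the `Coder-Session-Token` header matches codersdk's usual session auth, but the "state" JSON field name for `codersdk.UpdateWorkspaceBuildStateRequest` is an assumption:

```go
package client

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// UpdateBuildState PUTs new provisioner state for a workspace build.
// The "state" JSON key is an assumed wire format for
// codersdk.UpdateWorkspaceBuildStateRequest.
func UpdateBuildState(ctx context.Context, coderURL, token, buildID string, state []byte) error {
	body, err := json.Marshal(map[string][]byte{"state": state})
	if err != nil {
		return err
	}
	url := fmt.Sprintf("%s/api/v2/workspacebuilds/%s/state", coderURL, buildID)
	req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Coder-Session-Token", token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}
```

Note that a token lacking `ActionUpdate` on the template gets a 404 via `httpapi.ResourceNotFound` rather than a 403, which avoids confirming the build's existence.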

Some files were not shown because too many files have changed in this diff.