Compare commits


54 Commits

Author SHA1 Message Date
Ben Potter 031e1c2d10 devsh 2026-02-01 03:53:07 +00:00
Ben Potter ad4d5ed70c feat(site): use workspace name matching for task icons and views
Previously, workspace-to-icon and workspace-to-view mapping relied on
index-based logic, which broke when the sort order changed. This
updates both TasksSidebar and TaskPage to use case-insensitive
workspace name matching instead.

Changes:
- TasksSidebar: Check workspace names for "headless", "la de de", or
  "second" to determine icons
- TaskPage: Check workspace name for "headless" to show headless agent
  view
- Both use case-insensitive matching for reliability

This ensures correct mapping regardless of workspace list sort order.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 03:52:52 +00:00
Ben Potter 7b0461a99b feat(site): simplify headless session header text
Update the header to be more concise and direct:

**Changes**:
- "Autonomous Session" → "Headless Session" (clearer terminology)
- Removed separate callout banner
- Moved description inline under title: "No IDE, just pure agent
  execution. Watch the Mux agent work autonomously."
- More compact header layout with all info in one place

The new layout is cleaner and explains the headless nature immediately
without needing a separate callout banner.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 03:13:38 +00:00
Ben Potter 75a41e6f8d feat(site): add headless workspace callout and improve button text
Update the headless agent view with clearer messaging:

**Button Text**:
- Changed "Continue" → "Continue in a new session"
- Makes it explicit that duplication creates a new interactive session

**Headless Callout**:
- Added info banner below header explaining the workspace type
- "This is a headless workspace—no IDE, just pure agent execution.
  Watch the Mux agent work autonomously, then continue the
  conversation in a new session when ready."
- Uses InfoIcon for friendly, informative tone
- Subtle styling with secondary background and border

This makes the headless nature of the workspace immediately clear
while explaining the value proposition (watch the agent work, then
continue when ready).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 03:12:25 +00:00
Ben Potter 4f9cbe6db0 feat(site): redesign headless view with chat interface and diff viewer
Transform the headless agent view into a more subtle, chat-like interface
with split-panel design:

**Left Panel - Chat Interface**:
- Compact header with "Autonomous Session" title
- Chat-style messages showing agent reasoning and task breakdown
- Agent avatar (Mux icon) with each message
- "Task Breakdown" cards showing how agent plans work into subtasks
- Inline tool call indicators (Read, Edit, Bash commands)
- Security boundary alerts for blocked operations
- "Thinking..." state with spinner
- Footer with metadata (branch, PVC) and "Mux Agent" label

**Right Panel - Diff Viewer**:
- File tabs showing edited files with +/- line counts
- Click to switch between files
- Readonly diff view with syntax-highlighted changes
- Shows actual code changes made by the agent
- Green/red indicators for additions/deletions

**Agent Workflow**:
- Shows how agent breaks work into discrete tasks
- Tracks task completion (✓ Completed task N)
- Demonstrates autonomous decision-making
- Makes boundary security visible without being alarming

This creates a transparent view into headless agent operations while
maintaining a clean, professional aesthetic.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 03:11:28 +00:00
Ben Potter 3fc2ab38bf feat(site): add headless agent view for third task workspace
Implement special handling for the third workspace (index 2) in the tasks
sidebar to display a headless Mux agent session:

**Sidebar**:
- Use /icon/tasks.svg for third workspace icon
- Consistent with existing special handling for first two workspaces

**Headless View** (full-width, no IDE panels):
- Prominent metadata display: workspace, branch, repository, PVC
- Real-time agent activity feed showing:
  - Prompts being executed
  - Tool calls with arguments (code_search, read_file, grep, bash, etc.)
  - Boundary security blocks (access attempts, dangerous commands)
- "Continue Conversation" button to duplicate and extend the session
- Autonomous Mux agent branding and messaging
- Loading indicator for ongoing agent processing

This provides transparency into headless agent operations while making
it clear the conversation can be continued via duplication.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 03:06:37 +00:00
Ben Potter 800922edec feat(site): use k8s-style PVC naming and add conversation history
Update metadata displays to use Kubernetes-style PersistentVolumeClaim
naming:
- Changed "Disk (PVC ID)" to "PersistentVolumeClaim"
- Format PVC name as "coder-{workspace-id}-pvc" following k8s conventions
- Applies to both Metadata dropdown and Duplicate banner

Added conversation history link to the duplicate metadata banner,
allowing users to view the workspace's terminal/session history before
creating a follow-up task.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 03:02:27 +00:00
Ben Potter 868b654439 feat(site): show duplicate metadata banner in task dialog
Update the Duplicate button functionality to display workspace metadata
in an info banner instead of pre-filling the prompt. When duplicating a
task, users now see:

- Source workspace name highlighted in brand color
- Branch (template version)
- Repository (template name)
- PVC ID (workspace ID prefix)

The banner uses improved contrast with bg-surface-secondary, a branded
border (border-content-link/20), and semibold text for better readability.

This allows users to see the context of what they're duplicating while
providing a clean slate to add their follow-up prompt or select skills.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 02:58:56 +00:00
Ben Potter 07911dd7c7 feat(site): add metadata dropdown and duplicate task button
Add two new buttons to TaskTopbar:

- **Metadata button**: Displays readonly workspace attributes in a tooltip
  dropdown including branch (template version name), repository (template
  name), and disk PVC ID. Also includes a link to view session history.

- **Duplicate button**: Opens NewTaskDialog with the current task's initial
  prompt pre-filled, allowing users to create follow-up tasks or modify
  existing task prompts.

Updated NewTaskDialog to accept an optional initialPrompt prop that
pre-fills the free-form prompt field when duplicating tasks.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 02:51:50 +00:00
Ben Potter 577c3aee9b feat(site): refine NewTaskDialog UI and add GitHub Actions API example
Improve dialog layout and styling:
- Move API button to header next to 'Show Advanced', matching Button style
- Restructure sidebar with 'Free Form Task' button at top
- Add 'or select a skill:' headline below button for better hierarchy
- Reduce spacing throughout sidebar for cleaner look
- Move '.agent/skills' reference to bottom footer

Add GitHub Actions as API automation option:
- Extend API code examples to include curl, CLI, SDK, and GitHub Actions
- GitHub Actions example includes full workflow YAML with secrets
- Widen Select dropdown to accommodate 'GitHub Actions' text
- Dynamic code generation updates based on form input

The dialog now provides a clean, progressive interface from simple
free-form tasks to advanced skill-based workflows, with comprehensive
automation examples for CI/CD integration.
2026-02-01 02:44:11 +00:00
Ben Potter ad5957c646 feat(site): add API/CLI/SDK examples to NewTaskDialog
Add collapsible API code examples section in sidebar showing:
- curl command with proper JSON payload
- CLI command with coder binary
- TypeScript SDK example with async/await

Code examples dynamically update based on form input, showing
users how to automate task creation programmatically.
2026-02-01 02:37:44 +00:00
Ben Potter f03be7da29 feat(site): add two-column layout to NewTaskDialog with auto-focus
- Switch to two-column layout: main form left, skills sidebar right
- Add auto-focus to text inputs when dialog opens and skill changes
- Add Cmd+Enter keyboard shortcut to close modal (prototyping)
- Update skills link text to "See your organization's .agent/skills"
- Make design more compact and quick-fire focused
- Add focus rings on text inputs for better visual feedback

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 02:33:23 +00:00
Ben Potter 4c7d6403c8 feat(site): improve NewTaskDialog UX with skill selection flow
- Add GitHub link to skills subtext ("Learn more? See .claude/skills")
- Change default agent to Claude Code
- Hide free-form input when skill is selected
- Move follow-up prompt directly below skill selection
- Add Clear button to reset to free-form mode
- Improve visual hierarchy for quick task creation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-01 02:28:01 +00:00
Jake Howell b14a709adb fix: resolve <Badges /> to use <Badge /> (#21747)
Continuing the work from #21740 

This pull-request updates all of our badges to use the `<Badge />`
component. This is in line with our Figma design guidelines, so going
forward we're standardised across the application. I've added
`<EnterpriseBadge />` and `<DeprecatedBadge />` to the
`Badges.stories.tsx` so we can track these in future (they were missing
previously).

In `site/src/components/Form/Form.tsx` we were using these components
within a `<h2 />` which would cause invalid semantic HTML. I chose the
easy route around this and made them sit in their own `<header>` with a
flex.

### Preview

| Old | New |
| --- | --- |
| <img width="512" height="288" alt="BADGES_OLD"
src="https://github.com/user-attachments/assets/196b0a53-37b2-4aee-b66e-454ac0ff1271"
/> | <img width="512" height="288" alt="BADGES_OLD-1"
src="https://github.com/user-attachments/assets/f0fb2871-40e2-4f0d-972c-cbf4249cf2d7"
/> |
| <img width="512" height="288" alt="DEPRECATED_OLD"
src="https://github.com/user-attachments/assets/cce36b6c-e91a-47f6-8d20-02b9f40ea44e"
/> | <img width="512" height="289" alt="DEPRECATED_NEW"
src="https://github.com/user-attachments/assets/8a1f5168-d128-4733-819e-c1cb6641b83b"
/> |
| <img width="512" height="288" alt="ENTERPRISE_OLD"
src="https://github.com/user-attachments/assets/aba677ce-23c7-4820-913b-886d049f81ef"
/> | <img width="512" height="288" alt="ENTERPRISE_NEW"
src="https://github.com/user-attachments/assets/eca9729d-c98a-4848-9f10-28e42e2c3cd3"
/> |

---------

Co-authored-by: Ben Potter <me@bpmct.net>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 12:22:58 +11:00
Jon Ayers 3d97f677e5 chore: bump alpine to 3.23.3 (#21804) 2026-01-30 22:18:54 +00:00
dependabot[bot] 8985120c36 chore(examples/templates/tasks-docker): bump claude-code module from 4.3.0 to 4.4.2 (#21551)
[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=coder/claude-code/coder&package-manager=terraform&previous-version=4.3.0&new-version=4.4.2)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 20:47:42 +00:00
George K c60f802580 fix(coderd/rbac): make workspace ACL disabled flag atomic (#21799)
The flag is a package-global that was only meant to be set once on
startup. This was a bad assumption since the lack of sync caused test
flakes.

Related to:
https://github.com/coder/internal/issues/1317
https://github.com/coder/internal/issues/1318
2026-01-30 11:21:27 -08:00
Danielle Maywood 37aecda165 feat(coderd/provisionerdserver): insert sub agent resource (#21699)
Update provisionerdserver to handle the changes introduced to
provisionerd in https://github.com/coder/coder/pull/21602

We now create a relationship between `workspace_agent_devcontainers` and
`workspace_agents` with the newly created `subagent_id`.
2026-01-30 17:19:19 +00:00
Cian Johnston 14b4650d6c chore: fix flakiness in TestSSH/StdioExitOnParentDeath (#21792)
Relates to https://github.com/coder/internal/issues/1289
2026-01-30 15:46:38 +00:00
blinkagent[bot] b035843484 docs: clarify that only Coder tokens work with AI Bridge authentication (#21791)
## Summary

Clarifies the [AI Bridge client config authentication
section](https://coder.com/docs/ai-coder/ai-bridge/client-config#authentication)
to explicitly state that only **Coder-issued tokens** are accepted.

## Changes

- Changed "API key" to "Coder API key" throughout the Authentication
section
- Added a note clarifying that provider-specific API keys (OpenAI,
Anthropic, etc.) will not work with AI Bridge

Fixes #21790

---

Created on behalf of @dannykopping

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2026-01-30 14:49:06 +00:00
Mathias Fredriksson 21eabb1d73 feat(coderd): return log snapshot for paused tasks (#21771)
Previously the task logs endpoint only worked when the workspace was
running, leaving users unable to view task history after pausing.

This change adds snapshot retrieval with state-based branching: active
tasks fetch live logs from AgentAPI, paused/initializing/pending tasks
return stored snapshots (providing continuity during pause/resume), and
error/unknown states return HTTP 409 Conflict.

The response includes snapshot metadata (snapshot, snapshot_at) to
indicate whether logs are live or historical.

Closes coder/internal#1254
2026-01-30 16:09:45 +02:00
Danny Kopping 536bca7ea9 chore: log api key on each HTTP API request (#21785)
Operators need to know which API key was used in HTTP requests.

For example, if a key is leaking and a DDoS is underway using that key, operators need a way to identify the key in use and take steps to expire the key (see https://github.com/coder/coder/issues/21782).

_Disclaimer: created using Claude Opus 4.5_
2026-01-30 14:48:10 +02:00
Jake Howell e45635aab6 fix: refactor <Paywall /> component to be universal (#21740)
During development of #21659 I approved some `<Paywall />` code that had
an extensive props system; however, I wasn't a huge fan of this. This
approach takes it further, like something `shadcn` would do, wherein we
define the `<Paywall />` (and its subset of components) and wrap around
those when needed for `<PaywallAIGovernance />` and `<PaywallPremium />`.

Theoretically there are no real CSS/design changes here; screenshots
included for posterity.

| Previously | Now |
| --- | --- |
| <img width="2306" height="614" alt="CleanShot 2026-01-29 at 10 56
05@2x"
src="https://github.com/user-attachments/assets/83a4aa1b-da74-459d-ae11-fae06c1a8371"
/> | <img width="2308" height="622" alt="CleanShot 2026-01-29 at 10 55
05@2x"
src="https://github.com/user-attachments/assets/4aa43b09-6705-4af3-86cc-edc0c08e53b1"
/> |

---------

Co-authored-by: Ben Potter <me@bpmct.net>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 23:44:07 +11:00
Marcin Tojek 036ed5672f fix!: remove deprecated prometheus metrics (#21788)
## Description

Removes the following deprecated Prometheus metrics:

- `coderd_api_workspace_latest_build_total` → use
`coderd_api_workspace_latest_build` instead
- `coderd_oauth2_external_requests_rate_limit_total` → use
`coderd_oauth2_external_requests_rate_limit` instead

These metrics were deprecated in #12976 because gauge metrics should
avoid the `_total` suffix per [Prometheus naming
conventions](https://prometheus.io/docs/practices/naming/).

## Changes

- Removed deprecated metric `coderd_api_workspace_latest_build_total`
from `coderd/prometheusmetrics/prometheusmetrics.go`
- Removed deprecated metric
`coderd_oauth2_external_requests_rate_limit_total` from
`coderd/promoauth/oauth2.go`
- Updated tests to use the non-deprecated metric name

Fixes #12999
2026-01-30 13:30:06 +01:00
Marcin Tojek 90cf4809ec fix(site): use version name instead of ID in View source button URL (#21784)
Fixes #19921

The "View source" button was using `versionId` (UUID) instead of version
name in the URL, causing broken links.
2026-01-30 12:43:09 +01:00
Jaayden Halko 4847920407 fix: don't allow sharing admins to change own role (#21634)
resolve coder/internal#1280
2026-01-30 06:27:30 -05:00
Ethan a464ab67c6 test: use explicit names in TestStartAutoUpdate to prevent flake (#21745)
The test was creating two template versions without explicit names,
relying on `namesgenerator.NameDigitWith()` which can produce
collisions. When both versions got the same random name, the test failed
with a 409 Conflict error.

Fix by giving each version an explicit name (`v1`, `v2`).

Closes https://github.com/coder/internal/issues/1309

---

*Generated by [mux](https://mux.coder.com)*
2026-01-30 13:24:06 +11:00
Zach 0611e90dd3 feat: add time window fields to telemetry boundary usage (#21772)
Add PeriodStart and PeriodDurationMilliseconds fields to BoundaryUsageSummary
so consumers of telemetry data can understand usage within a particular time window.
2026-01-29 13:40:55 -07:00
blinkagent[bot] 5da28ff72f docs: clarify Tasks limit and AI Governance relationship (#21774)
## Summary

This PR updates the note on the Tasks documentation page to more clearly
explain the relationship between Premium task limits and the AI
Governance Add-On.

## Problem

The previous wording:
> "Premium Coder deployments are limited to running 1,000 tasks. Contact
us for pricing options or learn more about our AI Governance Add-On to
evaluate all of Coder's AI features."

The "or" in this sentence could be interpreted as two separate paths:
(1) contact sales for custom pricing that might not require the add-on,
OR (2) get AI Governance. This led to confusion about whether higher
task limits could be obtained without the AI Governance Add-On.

## Solution

Updated the note to be explicit about the scaling path:
> "Premium deployments include 1,000 Agent Workspace Builds for
proof-of-concept use. To scale beyond this limit, the AI Governance
Add-On provides expanded usage pools that grow with your user count.
Contact us to discuss pricing."

This makes it clear that:
1. Premium includes 1,000 builds for POC use
2. Scaling beyond that requires the AI Governance Add-On
3. Contact sales to discuss pricing for the add-on

Created on behalf of @mattvollmer

---------

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Matt Vollmer <matthewjvollmer@outlook.com>
2026-01-29 14:17:06 -06:00
George K f5d4926bc1 fix(site): use total_member_count for group subtitles when sharing (#21744)
Justification:

- Populating `members` is authorized with `group_member.read` which is
not required to be able to share a workspace

- Populating `total_member_count` is authorized with `group.read` which
is required to be able to share

- The updated helper is only used in template/workspace sharing UIs, so
other pages that might need counts of readable members are unaffected

Related to: https://github.com/coder/internal/issues/1302
2026-01-29 08:33:02 -08:00
Susana Ferreira 9f6ce7542a feat: add metrics to aibridgeproxy (#21709)
## Description

Adds Prometheus metrics to the AI Bridge Proxy for observability into
proxy traffic and performance.

## Changes
* Add `Metrics` struct with the following metrics:
  * `connect_sessions_total`: counts CONNECT sessions by type (mitm/tunneled)
  * `mitm_requests_total`: counts MITM requests by provider
  * `inflight_mitm_requests`: gauge tracking in-flight requests by provider
  * `mitm_request_duration_seconds`: histogram of request latencies by provider
  * `mitm_responses_total`: counts responses by status code class (2XX/3XX/4XX/5XX) and provider
* Register metrics with `coder_aibridgeproxyd_` prefix in CLI
* Unregister metrics on server close to prevent registry leaks
* Add `tunneledMiddleware` to track non-allowlisted CONNECT sessions
* Add tests for metric recording in both MITM and tunneled paths

Closes: https://github.com/coder/internal/issues/1185
2026-01-29 15:11:36 +00:00
Kacper Sawicki d09300eadf feat(cli): add 'coder login token' command to print session token (#21627)
Adds a new subcommand to print the current session token for use in
scripts and automation, similar to `gh auth token`.

## Usage

```bash
CODER_SESSION_TOKEN=$(coder login token)
```

Fixes #21515
2026-01-29 16:06:17 +01:00
Kacper Sawicki 9a417df940 ci: add retry logic for Go module operations (#21609)
## Description

Add exponential backoff retries to all `go install` and `go mod
download` commands across CI workflows and actions.

## Why

Fixes
[coder/internal#1276](https://github.com/coder/internal/issues/1276) -
CI fails when `sum.golang.org` returns 500 errors during Go module
verification. This is an infrastructure-level flake that can't be
controlled.

## Changes

- Created `.github/scripts/retry.sh` - reusable retry helper with
exponential backoff (2s, 4s, 8s delays, max 3 attempts), using
`scripts/lib.sh` helpers
- Wrapped all `go install` and `go mod download` commands with retry in:
  - `.github/actions/setup-go/action.yaml`
  - `.github/actions/setup-sqlc/action.yaml`
  - `.github/actions/setup-go-tools/action.yaml`
  - `.github/workflows/ci.yaml`
  - `.github/workflows/release.yaml`
  - `.github/workflows/security.yaml`
- Added GNU tools setup (bash 4+, GNU getopt, make 4+) for macOS in
`test-go-pg` job, since `retry.sh` uses `lib.sh` which requires these
tools
2026-01-29 16:05:49 +01:00
Yevhenii Shcherbina 8ee4f594d5 chore: update boundary policy (#21738)
Relates to https://github.com/coder/coder/pull/21548
2026-01-29 08:46:30 -05:00
Kacper Sawicki 9eda6569b8 docs: fix broken Kilo Code link in AI Bridge client-config (#21754)
## Summary

Fixes the broken Kilo Code documentation link in the AI Bridge
client-config page.

## Changes

- Updated the Kilo Code link from the old
`/docs/features/api-configuration-profiles` (returns 404) to the current
`/docs/ai-providers/openai-compatible` page

The Kilo Code documentation was restructured and the old URL no longer
exists.

Fixes #21750
2026-01-29 13:43:08 +00:00
Marcin Tojek bb7b49de6a fix(cli): ignore space in custom input mode (#21752)
Fixes: https://github.com/coder/internal/issues/560

"Select" CLI UI component should ignore "space" when `+Add custom value`
is highlighted. Otherwise it interprets that as a potential option...
and panics.
2026-01-29 14:40:02 +01:00
Danny Kopping 5ae0e08494 chore: ensure consistent YAML names for aibridge flags (#21751)
Closes https://github.com/coder/internal/issues/1205

_Implemented by Claude Opus 4.5_

Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-29 13:03:58 +00:00
Marcin Tojek 04b0253e8a feat: add Prometheus metrics for license warnings and errors (#21749)
Fixes: coder/internal#767

Adds two new Prometheus metrics for license health monitoring:

- `coderd_license_warnings` - count of active license warnings
- `coderd_license_errors` - count of active license errors

Metrics endpoint after startup of a deployment with license enabled:

```
...
# HELP coderd_license_errors The number of active license errors.
# TYPE coderd_license_errors gauge
coderd_license_errors 0
...
# HELP coderd_license_warnings The number of active license warnings.
# TYPE coderd_license_warnings gauge
coderd_license_warnings 0
...
```
2026-01-29 13:50:15 +01:00
Spike Curtis 06e396188f test: subscribe to heartbeats synchronously on PGCoord startup (#21746)
fixes: https://github.com/coder/internal/issues/1304

Subscribe to heartbeats synchronously on startup of PGCoordinator. This ensures tests that send heartbeats don't race with this subscription.
2026-01-29 13:34:34 +04:00
Jake Howell 62704eb858 feat: implement ai governance consumption frontend (#21595)
Closes [#1246](https://github.com/coder/internal/issues/1246)

This PR adds a new component to display AI Governance user entitlements
in the Licenses Settings page. The implementation includes:

- New `AIGovernanceUsersConsumptionChart` component that shows the
number of entitled users for AI Governance features
- Storybook stories for various states (default, disabled, error states)
- Integration with the existing license settings page
- Collapsible "Learn more" section with links to relevant documentation
- Updated the ManagedAgentsConsumption component with clearer
terminology ("Agent Workspace Builds" instead of "Managed AI Agents")

The chart displays the number of users entitled to use AI features like
AI Bridge, Boundary, and Tasks, with a note that additional analytics
are coming soon.

### Preview

<img width="3516" height="2390" alt="CleanShot 2026-01-27 at 22 44
25@2x"
src="https://github.com/user-attachments/assets/cb97a215-f054-45cb-a3e7-3055c249ef04"
/>

<img width="3516" height="2390" alt="CleanShot 2026-01-27 at 22 45
04@2x"
src="https://github.com/user-attachments/assets/d2534189-cffb-4ad2-b2e2-67eb045572e8"
/>

---------

Co-authored-by: Jaayden Halko <jaayden.halko@gmail.com>
2026-01-29 11:22:11 +11:00
Danielle Maywood 1a94aa67a3 feat(provisioner): associate resources with coder_devcontainer (#21602)
Closes https://github.com/coder/internal/issues/1239

Allow associating `coder_env`, `coder_script` and `coder_app` with
`coder_devcontainer` resource. To do this we make use of the newly added
`subagent_id` field in the `coder_devcontainer` resource added in
https://github.com/coder/terraform-provider-coder/pull/474
2026-01-29 00:01:30 +00:00
Matt Vollmer 7473b57e54 feat(docs): add use cases section to AI Governance docs (#21717)
- Added use cases
- Moved GA section after use cases
2026-01-28 17:51:32 -06:00
Ben Potter 57ab991a95 chore: update paywall to mention AI governance-add on (#21659)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 17:37:15 -06:00
DevCats 1b31279506 chore: update doc-check workflow to prevent unnecessary comments (#21737)
This pull request makes a minor update to the documentation check
workflow. It clarifies that a comment should not be posted if there are
no documentation changes needed and simplifies the comment format
instructions.
2026-01-28 22:02:16 +00:00
Jon Ayers 4f1fd82ed7 fix: propagate correct agent exit code (#21718)
The reaper (PID 1) now returns the child's exit code instead of always
exiting 0. Signal termination uses the standard Unix convention of 128 +
signal number.

fixes #21661
2026-01-28 15:56:04 -06:00
Jon Ayers 4ce4b5ef9f chore: fix trivy dependency (#21736) 2026-01-28 22:36:42 +01:00
Steven Masley dfbd541cee chore: move List util out of db2sdk to avoid circular imports (#21733) 2026-01-28 13:07:53 -06:00
Steven Masley 921fad098b chore: make corrupted directories non-fatal (#21707)
From https://github.com/coder/coder/pull/20563#discussion_r2513135196
Closes https://github.com/coder/coder/issues/20751
2026-01-28 11:35:17 -06:00
George K 264ae77458 chore(docs): update workspace sharing docs to reflect current state (#21662)
This PR updates the workspace sharing documentation to reflect
the current behavior.
2026-01-28 08:58:29 -08:00
Cian Johnston c2c225052a chore(enterprise/coderd): ensure TestManagedAgentLimit differentiates between tasks and workspaces (#21731)
My previous change to this test did not create another **workspace**
using the template containing `coder_ai_task` resources, meaning that
this test was not actually testing the right thing. This PR addresses
this oversight.
2026-01-28 16:30:56 +00:00
Steven Masley e13f2a9869 chore: remove extra stop_modules from provisionerd proto (#21706)
Was a duplicate of start_modules

Closes https://github.com/coder/coder/issues/21206
2026-01-28 09:25:47 -06:00
Mathias Fredriksson d06b21df45 test(cli): increase timeout in TestGitSSH to reduce flakes (#21725)
The test occasionally times out at 15s on Windows CI runners.
Investigation of CI logs shows the HTTP request to the agent's
gitsshkey endpoint never appears in server logs, suggesting it
hangs before the request completes (possibly in connection setup,
middleware, or database queries). Increase to 60s to reduce flake
rate.

Fixes coder/internal#770
2026-01-28 14:01:07 +02:00
Susana Ferreira 327c885292 feat: add provider to aibridgeproxy requestContext (#21710)
## Description

Moves the provider lookup from `handleRequest` to `authMiddleware` so
that the provider is determined during the `CONNECT` handshake and
stored in the request context. This enables provider information to be
available earlier in the request lifecycle.

## Changes

* Move `aibridgeProviderFromHost` call from `handleRequest` to
`authMiddleware`
* Store `Provider` in `requestContext` during `CONNECT` handshake
* Add provider validation in `authMiddleware` (reject if no provider
mapping)
* Keep defensive provider check in `handleRequest` for safety

Follow-up from: https://github.com/coder/coder/pull/21617
2026-01-28 08:44:17 +00:00
Jake Howell 7a8d8d2f86 feat: add icon and description to preset dropdown (#21694)
Closes #20598 

This pull-request implements a very basic change to also render the
`icon` of the `Preset` when we've specifically defined one within the
template. Furthermore, there's a `ⓘ` icon with a description.

### Preview

<img width="984" height="442" alt="CleanShot 2026-01-27 at 20 15 29@2x"
src="https://github.com/user-attachments/assets/d4ceebf9-a5fe-4df4-a8b2-a8355d6bb25e"
/>
2026-01-28 18:51:22 +11:00
299 changed files with 8930 additions and 6019 deletions
+3
View File
@@ -1,4 +1,7 @@
{
"permissions": {
"defaultMode": "bypassPermissions"
},
"hooks": {
"PostToolUse": [
{
+2 -2
View File
@@ -7,6 +7,6 @@ runs:
- name: go install tools
shell: bash
run: |
go install tool
./.github/scripts/retry.sh -- go install tool
# NOTE: protoc-gen-go cannot be installed with `go get`
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
+4 -4
View File
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.25.7"
default: "1.25.6"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
@@ -22,14 +22,14 @@ runs:
- name: Install gotestsum
shell: bash
run: go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
run: ./.github/scripts/retry.sh -- go install gotest.tools/gotestsum@0d9599e513d70e5792bb9334869f82f6e8b53d4d # main as of 2025-05-15
- name: Install mtimehash
shell: bash
run: go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
run: ./.github/scripts/retry.sh -- go install github.com/slsyy/mtimehash/cmd/mtimehash@a6b5da4ed2c4a40e7b805534b004e9fde7b53ce0 # v1.0.0
# It isn't necessary that we ever do this, but it helps
# separate the "setup" from the "run" times.
- name: go mod download
shell: bash
run: go mod download -x
run: ./.github/scripts/retry.sh -- go mod download -x
+1 -1
View File
@@ -14,4 +14,4 @@ runs:
# - https://github.com/sqlc-dev/sqlc/pull/4159
shell: bash
run: |
CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
./.github/scripts/retry.sh -- env CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
+1 -1
View File
@@ -7,5 +7,5 @@ runs:
- name: Install Terraform
uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
with:
terraform_version: 1.14.5
terraform_version: 1.14.1
terraform_wrapper: false
+50
View File
@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Retry a command with exponential backoff.
#
# Usage: retry.sh [--max-attempts N] -- <command...>
#
# Example:
# retry.sh --max-attempts 3 -- go install gotest.tools/gotestsum@latest
#
# This will retry the command up to 3 times with exponential backoff
# (2s, 4s, 8s delays between attempts).
set -euo pipefail
# shellcheck source=scripts/lib.sh
source "$(dirname "${BASH_SOURCE[0]}")/../../scripts/lib.sh"
max_attempts=3
args="$(getopt -o "" -l max-attempts: -- "$@")"
eval set -- "$args"
while true; do
case "$1" in
--max-attempts)
max_attempts="$2"
shift 2
;;
--)
shift
break
;;
*)
error "Unrecognized option: $1"
;;
esac
done
if [[ $# -lt 1 ]]; then
error "Usage: retry.sh [--max-attempts N] -- <command...>"
fi
attempt=1
until "$@"; do
if ((attempt >= max_attempts)); then
error "Command failed after $max_attempts attempts: $*"
fi
delay=$((2 ** attempt))
log "Attempt $attempt/$max_attempts failed, retrying in ${delay}s..."
sleep "$delay"
((attempt++))
done
+37 -25
View File
@@ -35,7 +35,7 @@ jobs:
tailnet-integration: ${{ steps.filter.outputs.tailnet-integration }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -157,7 +157,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -176,7 +176,7 @@ jobs:
- name: Get golangci-lint cache dir
run: |
linter_ver=$(grep -Eo 'GOLANGCI_LINT_VERSION=\S+' dogfood/coder/Dockerfile | cut -d '=' -f 2)
go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
./.github/scripts/retry.sh -- go install "github.com/golangci/golangci-lint/cmd/golangci-lint@v$linter_ver"
dir=$(golangci-lint cache status | awk '/Dir/ { print $2 }')
echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"
@@ -251,7 +251,7 @@ jobs:
if: ${{ !cancelled() }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -308,7 +308,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -329,7 +329,7 @@ jobs:
uses: ./.github/actions/setup-go
- name: Install shfmt
run: go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
run: ./.github/scripts/retry.sh -- go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
- name: make fmt
timeout-minutes: 7
@@ -360,7 +360,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -395,6 +395,18 @@ jobs:
id: go-paths
uses: ./.github/actions/setup-go-paths
# macOS default bash and coreutils are too old for our scripts
# (lib.sh requires bash 4+, GNU getopt, make 4+).
- name: Setup GNU tools (macOS)
if: runner.os == 'macOS'
run: |
brew install bash gnu-getopt make
{
echo "$(brew --prefix bash)/bin"
echo "$(brew --prefix gnu-getopt)/bin"
echo "$(brew --prefix make)/libexec/gnubin"
} >> "$GITHUB_PATH"
- name: Setup Go
uses: ./.github/actions/setup-go
with:
@@ -554,7 +566,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -616,7 +628,7 @@ jobs:
timeout-minutes: 25
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -688,7 +700,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -715,7 +727,7 @@ jobs:
timeout-minutes: 20
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -748,7 +760,7 @@ jobs:
name: ${{ matrix.variant.name }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -828,7 +840,7 @@ jobs:
if: needs.changes.outputs.site == 'true' || needs.changes.outputs.ci == 'true'
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -909,7 +921,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -980,7 +992,7 @@ jobs:
if: always()
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -1068,7 +1080,7 @@ jobs:
- name: Build dylibs
run: |
set -euxo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
make gen/mark-fresh
make build/coder-dylib
@@ -1100,7 +1112,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -1117,10 +1129,10 @@ jobs:
uses: ./.github/actions/setup-go
- name: Install go-winres
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nfpm
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
- name: Install zstd
run: sudo apt-get install -y zstd
@@ -1128,7 +1140,7 @@ jobs:
- name: Build
run: |
set -euxo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
make gen/mark-fresh
make build
@@ -1155,7 +1167,7 @@ jobs:
IMAGE: ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -1207,10 +1219,10 @@ jobs:
java-version: "11.0"
- name: Install go-winres
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nfpm
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
run: ./.github/scripts/retry.sh -- go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.35.1
- name: Install zstd
run: sudo apt-get install -y zstd
@@ -1258,7 +1270,7 @@ jobs:
- name: Build
run: |
set -euxo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
version="$(./scripts/version.sh)"
tag="main-${version//+/-}"
@@ -1552,7 +1564,7 @@ jobs:
if: needs.changes.outputs.db == 'true' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+3 -3
View File
@@ -36,7 +36,7 @@ jobs:
verdict: ${{ steps.check.outputs.verdict }} # DEPLOY or NOOP
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -65,7 +65,7 @@ jobs:
packages: write # to retag image as dogfood
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -146,7 +146,7 @@ jobs:
needs: deploy
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+2 -3
View File
@@ -186,6 +186,8 @@ jobs:
Use \`gh\` to get PR details, diff, and all comments. Check for previous doc-check comments (from coder-doc-check) and only post a new comment if it adds value.
**Do not comment if no documentation changes are needed.**
## Comment format
Use this structure (only include relevant sections):
@@ -202,9 +204,6 @@ jobs:
### New Documentation Needed
- [ ] \`docs/suggested/path.md\` - [what should be documented]
### No Changes Needed
[brief explanation - use this OR the above sections, not both]
---
*Automated review via [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
\`\`\`"
+1 -1
View File
@@ -38,7 +38,7 @@ jobs:
if: github.repository_owner == 'coder'
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+2 -2
View File
@@ -26,7 +26,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-4' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -125,7 +125,7 @@ jobs:
id-token: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+1 -1
View File
@@ -28,7 +28,7 @@ jobs:
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+1 -1
View File
@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+1 -1
View File
@@ -19,7 +19,7 @@ jobs:
packages: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+5 -5
View File
@@ -39,7 +39,7 @@ jobs:
PR_OPEN: ${{ steps.check_pr.outputs.pr_open }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -76,7 +76,7 @@ jobs:
runs-on: "ubuntu-latest"
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -184,7 +184,7 @@ jobs:
pull-requests: write # needed for commenting on PRs
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -228,7 +228,7 @@ jobs:
CODER_IMAGE_TAG: ${{ needs.get_info.outputs.CODER_IMAGE_TAG }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -288,7 +288,7 @@ jobs:
PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+1 -1
View File
@@ -14,7 +14,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+7 -7
View File
@@ -121,7 +121,7 @@ jobs:
- name: Build dylibs
run: |
set -euxo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
make gen/mark-fresh
make build/coder-dylib
@@ -164,7 +164,7 @@ jobs:
version: ${{ steps.version.outputs.version }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -259,7 +259,7 @@ jobs:
java-version: "11.0"
- name: Install go-winres
run: go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
run: ./.github/scripts/retry.sh -- go install github.com/tc-hib/go-winres@d743268d7ea168077ddd443c4240562d4f5e8c3e # v0.3.3
- name: Install nsis and zstd
run: sudo apt-get install -y nsis zstd
@@ -341,7 +341,7 @@ jobs:
- name: Build binaries
run: |
set -euo pipefail
go mod download
./.github/scripts/retry.sh -- go mod download
version="$(./scripts/version.sh)"
make gen/mark-fresh
@@ -802,7 +802,7 @@ jobs:
# TODO: skip this if it's not a new release (i.e. a backport). This is
# fine right now because it just makes a PR that we can close.
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -878,7 +878,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -971,7 +971,7 @@ jobs:
if: ${{ !inputs.dry_run }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+1 -1
View File
@@ -20,7 +20,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+6 -6
View File
@@ -27,7 +27,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -69,7 +69,7 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -97,11 +97,11 @@ jobs:
- name: Install yq
run: go run github.com/mikefarah/yq/v4@v4.44.3
- name: Install mockgen
run: go install go.uber.org/mock/mockgen@v0.5.0
run: ./.github/scripts/retry.sh -- go install go.uber.org/mock/mockgen@v0.6.0
- name: Install protoc-gen-go
run: go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
run: ./.github/scripts/retry.sh -- go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
- name: Install protoc-gen-go-drpc
run: go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
run: ./.github/scripts/retry.sh -- go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
- name: Install Protoc
run: |
# protoc must be in lockstep with our dogfood Dockerfile or the
@@ -146,7 +146,7 @@ jobs:
echo "image=$(cat "$image_job")" >> "$GITHUB_OUTPUT"
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@c1824fd6edce30d7ab345a9989de00bbd46ef284 # v0.34.0
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
+3 -3
View File
@@ -18,7 +18,7 @@ jobs:
pull-requests: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -96,7 +96,7 @@ jobs:
contents: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
@@ -120,7 +120,7 @@ jobs:
actions: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+1 -1
View File
@@ -21,7 +21,7 @@ jobs:
pull-requests: write # required to post PR review comments by the action
steps:
- name: Harden Runner
uses: step-security/harden-runner@5ef0c079ce82195b2a36a210272d6b661572d83e # v2.14.2
uses: step-security/harden-runner@20cf305ff2072d973412fa9b1e3a4f227bda3c76 # v2.14.0
with:
egress-policy: audit
+5
View File
@@ -491,6 +491,11 @@ func (m multiSelectModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
case tea.KeySpace:
options := m.filteredOptions()
if m.enableCustomInput && m.cursor == len(options) {
return m, nil
}
if len(options) != 0 {
options[m.cursor].chosen = !options[m.cursor].chosen
}
+5 -1
View File
@@ -139,12 +139,15 @@ func TestCreate(t *testing.T) {
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent())
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Create a new version
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, completeWithAgent(), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
@@ -516,6 +519,7 @@ func TestCreateWithRichParameters(t *testing.T) {
version2 := coderdtest.CreateTemplateVersion(t, tctx.client, tctx.owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
{Name: "another_parameter", Type: "string", DefaultValue: "not-relevant"},
}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = tctx.template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, tctx.client, version2.ID)
+13
View File
@@ -174,6 +174,19 @@ func (RootCmd) promptExample() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
return multiSelectError
}, useThingsOption, enableCustomInputOption),
promptCmd("multi-select-no-defaults", func(inv *serpent.Invocation) error {
if len(multiSelectValues) == 0 {
multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{
Message: "Select some things:",
Options: []string{
"Code", "Chairs", "Whale",
},
EnableCustomInput: enableCustomInput,
})
}
_, _ = fmt.Fprintf(inv.Stdout, "%q are nice choices.\n", strings.Join(multiSelectValues, ", "))
return multiSelectError
}, useThingsOption, enableCustomInputOption),
promptCmd("rich-multi-select", func(inv *serpent.Invocation) error {
if len(multiSelectValues) == 0 {
multiSelectValues, multiSelectError = cliui.MultiSelect(inv, cliui.MultiSelectOptions{
+2 -5
View File
@@ -49,9 +49,6 @@ Examples:
# Test OpenAI API through bridge
coder scaletest bridge --mode bridge --provider openai --concurrent-users 10 --request-count 5 --num-messages 10
# Test OpenAI Responses API through bridge
coder scaletest bridge --mode bridge --provider responses --concurrent-users 10 --request-count 5 --num-messages 10
# Test Anthropic API through bridge
coder scaletest bridge --mode bridge --provider anthropic --concurrent-users 10 --request-count 5 --num-messages 10
@@ -222,9 +219,9 @@ Examples:
{
Flag: "provider",
Env: "CODER_SCALETEST_BRIDGE_PROVIDER",
Required: true,
Default: "openai",
Description: "API provider to use.",
Value: serpent.EnumOf(&provider, "completions", "messages", "responses"),
Value: serpent.EnumOf(&provider, "openai", "anthropic"),
},
{
Flag: "request-count",
-1
View File
@@ -62,7 +62,6 @@ func (*RootCmd) scaletestLLMMock() *serpent.Command {
_, _ = fmt.Fprintf(inv.Stdout, "Mock LLM API server started on %s\n", srv.APIAddress())
_, _ = fmt.Fprintf(inv.Stdout, " OpenAI endpoint: %s/v1/chat/completions\n", srv.APIAddress())
_, _ = fmt.Fprintf(inv.Stdout, " OpenAI responses endpoint: %s/v1/responses\n", srv.APIAddress())
_, _ = fmt.Fprintf(inv.Stdout, " Anthropic endpoint: %s/v1/messages\n", srv.APIAddress())
<-ctx.Done()
+9 -3
View File
@@ -141,7 +141,9 @@ func TestGitSSH(t *testing.T) {
"-o", "IdentitiesOnly=yes",
"127.0.0.1",
)
ctx := testutil.Context(t, testutil.WaitMedium)
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx := testutil.Context(t, testutil.WaitSuperLong)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.EqualValues(t, 1, inc)
@@ -205,7 +207,9 @@ func TestGitSSH(t *testing.T) {
inv, _ := clitest.New(t, cmdArgs...)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
ctx := testutil.Context(t, testutil.WaitMedium)
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx := testutil.Context(t, testutil.WaitSuperLong)
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
select {
@@ -223,7 +227,9 @@ func TestGitSSH(t *testing.T) {
inv, _ = clitest.New(t, cmdArgs...)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
ctx = testutil.Context(t, testutil.WaitMedium) // Reset context for second cmd test.
// This occasionally times out at 15s on Windows CI runners. Use a
// longer timeout to reduce flakes.
ctx = testutil.Context(t, testutil.WaitSuperLong) // Reset context for second cmd test.
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
select {
+29
View File
@@ -462,9 +462,38 @@ func (r *RootCmd) login() *serpent.Command {
Value: serpent.BoolOf(&useTokenForSession),
},
}
cmd.Children = []*serpent.Command{
r.loginToken(),
}
return cmd
}
func (r *RootCmd) loginToken() *serpent.Command {
return &serpent.Command{
Use: "token",
Short: "Print the current session token",
Long: "Print the session token for use in scripts and automation.",
Middleware: serpent.RequireNArgs(0),
Handler: func(inv *serpent.Invocation) error {
tok, err := r.ensureTokenBackend().Read(r.clientURL)
if err != nil {
if xerrors.Is(err, os.ErrNotExist) {
return xerrors.New("no session token found - run 'coder login' first")
}
if xerrors.Is(err, sessionstore.ErrNotImplemented) {
return errKeyringNotSupported
}
return xerrors.Errorf("read session token: %w", err)
}
if tok == "" {
return xerrors.New("no session token found - run 'coder login' first")
}
_, err = fmt.Fprintln(inv.Stdout, tok)
return err
},
}
}
// isWSL determines if coder-cli is running within Windows Subsystem for Linux
func isWSL() (bool, error) {
if runtime.GOOS == goosDarwin || runtime.GOOS == goosWindows {
+28
View File
@@ -537,3 +537,31 @@ func TestLogin(t *testing.T) {
require.Equal(t, selected, first.OrganizationID.String())
})
}
func TestLoginToken(t *testing.T) {
t.Parallel()
t.Run("PrintsToken", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, "login", "token", "--url", client.URL.String())
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitShort)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
pty.ExpectMatch(client.SessionToken())
})
t.Run("NoTokenStored", func(t *testing.T) {
t.Parallel()
inv, _ := clitest.New(t, "login", "token")
ctx := testutil.Context(t, testutil.WaitShort)
err := inv.WithContext(ctx).Run()
require.Error(t, err)
require.Contains(t, err.Error(), "no session token found")
})
}
+58
View File
@@ -24,6 +24,7 @@ import (
"github.com/gofrs/flock"
"github.com/google/uuid"
"github.com/mattn/go-isatty"
"github.com/shirou/gopsutil/v4/process"
"github.com/spf13/afero"
gossh "golang.org/x/crypto/ssh"
gosshagent "golang.org/x/crypto/ssh/agent"
@@ -84,6 +85,9 @@ func (r *RootCmd) ssh() *serpent.Command {
containerName string
containerUser string
// Used in tests to simulate the parent exiting.
testForcePPID int64
)
cmd := &serpent.Command{
Annotations: workspaceCommand,
@@ -175,6 +179,24 @@ func (r *RootCmd) ssh() *serpent.Command {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// When running as a ProxyCommand (stdio mode), monitor the parent process
// and exit if it dies to avoid leaving orphaned processes. This is
// particularly important when editors like VSCode/Cursor spawn SSH
// connections and then crash or are killed - we don't want zombie
// `coder ssh` processes accumulating.
// Note: using gopsutil to check the parent process as this handles
// windows processes as well in a standard way.
if stdio {
ppid := int32(os.Getppid()) // nolint:gosec
checkParentInterval := 10 * time.Second // Arbitrary interval to not be too frequent
if testForcePPID > 0 {
ppid = int32(testForcePPID) // nolint:gosec
checkParentInterval = 100 * time.Millisecond // Shorter interval for testing
}
ctx, cancel = watchParentContext(ctx, quartz.NewReal(), ppid, process.PidExistsWithContext, checkParentInterval)
defer cancel()
}
// Prevent unnecessary logs from the stdlib from messing up the TTY.
// See: https://github.com/coder/coder/issues/13144
log.SetOutput(io.Discard)
@@ -775,6 +797,12 @@ func (r *RootCmd) ssh() *serpent.Command {
Value: serpent.BoolOf(&forceNewTunnel),
Hidden: true,
},
{
Flag: "test.force-ppid",
Description: "Override the parent process ID to simulate a different parent process. ONLY USE THIS IN TESTS.",
Value: serpent.Int64Of(&testForcePPID),
Hidden: true,
},
sshDisableAutostartOption(serpent.BoolOf(&disableAutostart)),
}
return cmd
@@ -1662,3 +1690,33 @@ func normalizeWorkspaceInput(input string) string {
return input // Fallback
}
}
// watchParentContext returns a context that is canceled when the parent process
// dies. It polls using the provided clock and checks if the parent is alive
// using the provided pidExists function.
func watchParentContext(ctx context.Context, clock quartz.Clock, originalPPID int32, pidExists func(context.Context, int32) (bool, error), interval time.Duration) (context.Context, context.CancelFunc) {
ctx, cancel := context.WithCancel(ctx) // intentionally shadowed
go func() {
ticker := clock.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
alive, err := pidExists(ctx, originalPPID)
// If we get an error checking the parent process (e.g., permission
// denied, the process is in an unknown state), we assume the parent
// is still alive to avoid disrupting the SSH connection. We only
// cancel when we definitively know the parent is gone (alive=false, err=nil).
if !alive && err == nil {
cancel()
return
}
}
}
}()
return ctx, cancel
}
+96
View File
@@ -312,6 +312,102 @@ type fakeCloser struct {
err error
}
func TestWatchParentContext(t *testing.T) {
t.Parallel()
t.Run("CancelsWhenParentDies", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
parentAlive := true
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return parentAlive, nil
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we simulate parent death and advance the clock
parentAlive = false
mClock.AdvanceNext()
// Then: The context should be canceled
_ = testutil.TryReceive(ctx, t, childCtx.Done())
})
t.Run("DoesNotCancelWhenParentAlive", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil // Parent always alive
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance the clock several times with the parent alive
for range 3 {
mClock.AdvanceNext()
}
// Then: context should not be canceled
require.NoError(t, childCtx.Err())
})
t.Run("RespectsParentContext", func(t *testing.T) {
t.Parallel()
ctx, cancelParent := context.WithCancel(context.Background())
mClock := quartz.NewMock(t)
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil
}, testutil.WaitShort)
defer cancel()
// When: we cancel the parent context
cancelParent()
// Then: The context should be canceled
require.ErrorIs(t, childCtx.Err(), context.Canceled)
})
t.Run("DoesNotCancelOnError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
// Simulate an error checking parent status (e.g., permission denied).
// We should not cancel the context in this case to avoid disrupting
// the SSH connection.
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return false, xerrors.New("permission denied")
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance clock several times
for range 3 {
mClock.AdvanceNext()
}
// Context should NOT be canceled since we got an error (not a definitive "not alive")
require.NoError(t, childCtx.Err(), "context was canceled even though pidExists returned an error")
})
}
func (c *fakeCloser) Close() error {
*c.closes = append(*c.closes, c)
return c.err
+101
@@ -1122,6 +1122,107 @@ func TestSSH(t *testing.T) {
}
})
// This test ensures that the SSH session exits when the parent process dies.
t.Run("StdioExitOnParentDeath", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitSuperLong)
defer cancel()
// sleepStart -> agentReady -> sessionStarted -> sleepKill -> sleepDone -> cmdDone
sleepStart := make(chan int)
agentReady := make(chan struct{})
sessionStarted := make(chan struct{})
sleepKill := make(chan struct{})
sleepDone := make(chan struct{})
// Start a sleep process which we will pretend is the parent.
go func() {
sleepCmd := exec.Command("sleep", "infinity")
if !assert.NoError(t, sleepCmd.Start(), "failed to start sleep command") {
return
}
sleepStart <- sleepCmd.Process.Pid
defer close(sleepDone)
<-sleepKill
sleepCmd.Process.Kill()
_ = sleepCmd.Wait()
}()
client, workspace, agentToken := setupWorkspaceForAgent(t)
go func() {
defer close(agentReady)
_ = agenttest.New(t, client.URL, agentToken)
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).WaitFor(coderdtest.AgentsReady)
}()
clientOutput, clientInput := io.Pipe()
serverOutput, serverInput := io.Pipe()
defer func() {
for _, c := range []io.Closer{clientOutput, clientInput, serverOutput, serverInput} {
_ = c.Close()
}
}()
// Start a connection to the agent once it's ready
go func() {
<-agentReady
conn, channels, requests, err := ssh.NewClientConn(&testutil.ReaderWriterConn{
Reader: serverOutput,
Writer: clientInput,
}, "", &ssh.ClientConfig{
// #nosec
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
})
if !assert.NoError(t, err, "failed to create SSH client connection") {
return
}
defer conn.Close()
sshClient := ssh.NewClient(conn, channels, requests)
defer sshClient.Close()
session, err := sshClient.NewSession()
if !assert.NoError(t, err, "failed to create SSH session") {
return
}
close(sessionStarted)
<-sleepDone
// Ref: https://github.com/coder/internal/issues/1289
// There is an inherent race here:
// 1. Sleep process is killed -> sleepDone is closed.
// 2. watchParentContext detects parent death, cancels context,
// causing SSH session teardown.
// 3. We receive from sleepDone and attempt to call session.Close().
// Now either:
// a. Session teardown completes before we call Close(), resulting in io.EOF.
// b. We call Close() first, resulting in a nil error.
// Either way is fine, so the result of Close() is ignored.
_ = session.Close()
}()
// Wait for our "parent" process to start
sleepPid := testutil.RequireReceive(ctx, t, sleepStart)
// Wait for the agent to be ready
testutil.SoftTryReceive(ctx, t, agentReady)
inv, root := clitest.New(t, "ssh", "--stdio", workspace.Name, "--test.force-ppid", fmt.Sprintf("%d", sleepPid))
clitest.SetupConfig(t, client, root)
inv.Stdin = clientOutput
inv.Stdout = serverInput
inv.Stderr = io.Discard
// Start the command
clitest.Start(t, inv.WithContext(ctx))
// Wait for a session to be established
testutil.SoftTryReceive(ctx, t, sessionStarted)
// Now kill the fake "parent"
close(sleepKill)
// The sleep process should exit
testutil.SoftTryReceive(ctx, t, sleepDone)
// And then the command should exit. This is tracked by clitest.Start.
})
t.Run("ForwardAgent", func(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("Test not supported on windows")
+4 -1
@@ -367,7 +367,9 @@ func TestStartAutoUpdate(t *testing.T) {
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version1 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version1.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version1.ID)
workspace := coderdtest.CreateWorkspace(t, member, template.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
@@ -379,6 +381,7 @@ func TestStartAutoUpdate(t *testing.T) {
coderdtest.MustTransitionWorkspace(t, member, workspace.ID, codersdk.WorkspaceTransitionStart, codersdk.WorkspaceTransitionStop)
}
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses(stringRichParameters), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
+3
@@ -9,6 +9,9 @@ USAGE:
macOS and Windows and a plain text file on Linux. Use the --use-keyring flag
or CODER_USE_KEYRING environment variable to change the storage mechanism.
SUBCOMMANDS:
token Print the current session token
OPTIONS:
--first-user-email string, $CODER_FIRST_USER_EMAIL
Specifies an email address to use if creating the first user for the
+11
@@ -0,0 +1,11 @@
coder v0.0.0-devel
USAGE:
coder login token
Print the current session token
Print the session token for use in scripts and automation.
———
Run `coder --help` for a list of global options.
+1 -1
@@ -7,7 +7,7 @@
"last_seen_at": "====[timestamp]=====",
"name": "test-daemon",
"version": "v0.0.0-devel",
"api_version": "1.15",
"provisioners": [
"echo"
],
+1 -1
@@ -91,7 +91,7 @@ func (a *SubAgentAPI) CreateSubAgent(ctx context.Context, req *agentproto.Create
Name: agentName,
ResourceID: parentAgent.ResourceID,
AuthToken: uuid.New(),
AuthInstanceID: parentAgent.AuthInstanceID,
Architecture: req.Architecture,
EnvironmentVariables: pqtype.NullRawMessage{},
OperatingSystem: req.OperatingSystem,
-46
@@ -175,52 +175,6 @@ func TestSubAgentAPI(t *testing.T) {
}
})
// Context: https://github.com/coder/coder/pull/22196
t.Run("CreateSubAgentDoesNotInheritAuthInstanceID", func(t *testing.T) {
t.Parallel()
var (
log = testutil.Logger(t)
clock = quartz.NewMock(t)
db, org = newDatabaseWithOrg(t)
user, agent = newUserWithWorkspaceAgent(t, db, org)
)
// Given: The parent agent has an AuthInstanceID set
ctx := testutil.Context(t, testutil.WaitShort)
parentAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), agent.ID)
require.NoError(t, err)
require.True(t, parentAgent.AuthInstanceID.Valid, "parent agent should have an AuthInstanceID")
require.NotEmpty(t, parentAgent.AuthInstanceID.String)
api := newAgentAPI(t, log, db, clock, user, org, agent)
// When: We create a sub agent
createResp, err := api.CreateSubAgent(ctx, &proto.CreateSubAgentRequest{
Name: "sub-agent",
Directory: "/workspaces/test",
Architecture: "amd64",
OperatingSystem: "linux",
})
require.NoError(t, err)
subAgentID, err := uuid.FromBytes(createResp.Agent.Id)
require.NoError(t, err)
// Then: The sub-agent must NOT re-use the parent's AuthInstanceID.
subAgent, err := db.GetWorkspaceAgentByID(dbauthz.AsSystemRestricted(ctx), subAgentID)
require.NoError(t, err)
assert.False(t, subAgent.AuthInstanceID.Valid, "sub-agent should not have an AuthInstanceID")
assert.Empty(t, subAgent.AuthInstanceID.String, "sub-agent AuthInstanceID string should be empty")
// Double-check: looking up by the parent's instance ID must
// still return the parent, not the sub-agent.
lookedUp, err := db.GetWorkspaceAgentByInstanceID(dbauthz.AsSystemRestricted(ctx), parentAgent.AuthInstanceID.String)
require.NoError(t, err)
assert.Equal(t, parentAgent.ID, lookedUp.ID, "instance ID lookup should still return the parent agent")
})
type expectedAppError struct {
index int32
field string
+137 -25
@@ -786,6 +786,30 @@ func (api *API) taskSend(rw http.ResponseWriter, r *http.Request) {
rw.WriteHeader(http.StatusNoContent)
}
// convertAgentAPIMessagesToLogEntries converts AgentAPI messages to
// TaskLogEntry format.
func convertAgentAPIMessagesToLogEntries(messages []agentapisdk.Message) ([]codersdk.TaskLogEntry, error) {
logs := make([]codersdk.TaskLogEntry, 0, len(messages))
for _, m := range messages {
var typ codersdk.TaskLogType
switch m.Role {
case agentapisdk.RoleUser:
typ = codersdk.TaskLogTypeInput
case agentapisdk.RoleAgent:
typ = codersdk.TaskLogTypeOutput
default:
return nil, xerrors.Errorf("invalid agentapi message role %q", m.Role)
}
logs = append(logs, codersdk.TaskLogEntry{
ID: int(m.Id),
Content: m.Content,
Type: typ,
Time: m.Time,
})
}
return logs, nil
}
// @Summary Get AI task logs
// @ID get-ai-task-logs
// @Security CoderSessionToken
@@ -799,8 +823,42 @@ func (api *API) taskLogs(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
task := httpmw.TaskParam(r)
switch task.Status {
case database.TaskStatusActive:
// Active tasks: fetch live logs from AgentAPI.
out, err := api.fetchLiveTaskLogs(r, task)
if err != nil {
httperror.WriteResponseError(ctx, rw, err)
return
}
httpapi.Write(ctx, rw, http.StatusOK, out)
case database.TaskStatusPaused, database.TaskStatusPending, database.TaskStatusInitializing:
// In the paused, pending, and initializing states, we attempt to
// fetch the snapshot from the database to provide continuity.
out, err := api.fetchSnapshotTaskLogs(ctx, task.ID)
if err != nil {
httperror.WriteResponseError(ctx, rw, err)
return
}
httpapi.Write(ctx, rw, http.StatusOK, out)
default:
// Cases: database.TaskStatusError, database.TaskStatusUnknown.
// - Error: snapshot would be stale from previous pause.
// - Unknown: cannot determine reliable state.
httpapi.Write(ctx, rw, http.StatusConflict, codersdk.Response{
Message: "Cannot fetch logs for task in current state.",
Detail: fmt.Sprintf("Task status is %q.", task.Status),
})
}
}
func (api *API) fetchLiveTaskLogs(r *http.Request, task database.Task) (codersdk.TaskLogsResponse, error) {
var out codersdk.TaskLogsResponse
err := api.authAndDoWithTaskAppClient(r, task, func(ctx context.Context, client *http.Client, appURL *url.URL) error {
agentAPIClient, err := agentapisdk.NewClient(appURL.String(), agentapisdk.WithHTTPClient(client))
if err != nil {
return httperror.NewResponseError(http.StatusBadGateway, codersdk.Response{
@@ -817,35 +875,89 @@ func (api *API) taskLogs(rw http.ResponseWriter, r *http.Request) {
})
}
logs, err := convertAgentAPIMessagesToLogEntries(messagesResp.Messages)
if err != nil {
return httperror.NewResponseError(http.StatusBadGateway, codersdk.Response{
Message: "Invalid task app response.",
Detail: err.Error(),
})
}
out = codersdk.TaskLogsResponse{
Logs: logs,
}
return nil
})
return out, err
}
func (api *API) fetchSnapshotTaskLogs(ctx context.Context, taskID uuid.UUID) (codersdk.TaskLogsResponse, error) {
snapshot, err := api.Database.GetTaskSnapshot(ctx, taskID)
if err != nil {
if httpapi.IsUnauthorizedError(err) {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusNotFound, codersdk.Response{
Message: "Resource not found.",
})
}
if errors.Is(err, sql.ErrNoRows) {
// No snapshot exists yet; return empty logs. Snapshot is set to true
// because the field indicates whether the data came from the live
// task app (false) or from storage (true). Since the task is
// paused/initializing/pending, live logs are unavailable, so
// snapshot must be true even with no snapshot data.
return codersdk.TaskLogsResponse{
Logs: []codersdk.TaskLogEntry{},
Snapshot: true,
}, nil
}
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task snapshot.",
Detail: err.Error(),
})
}
// Unmarshal envelope with pre-populated data field to decode once.
envelope := TaskLogSnapshotEnvelope{
Data: &agentapisdk.GetMessagesResponse{},
}
if err := json.Unmarshal(snapshot.LogSnapshot, &envelope); err != nil {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Internal error decoding task snapshot.",
Detail: err.Error(),
})
}
// Validate snapshot format.
if envelope.Format != "agentapi" {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Unsupported task snapshot format.",
Detail: fmt.Sprintf("Expected format %q, got %q.", "agentapi", envelope.Format),
})
}
// Extract agentapi data from envelope (already decoded into the correct type).
messagesResp, ok := envelope.Data.(*agentapisdk.GetMessagesResponse)
if !ok {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Internal error decoding snapshot data.",
Detail: "Unexpected data type in envelope.",
})
}
// Convert agentapi messages to log entries.
logs, err := convertAgentAPIMessagesToLogEntries(messagesResp.Messages)
if err != nil {
return codersdk.TaskLogsResponse{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
Message: "Invalid snapshot data.",
Detail: err.Error(),
})
}
return codersdk.TaskLogsResponse{
Logs: logs,
Snapshot: true,
SnapshotAt: ptr.Ref(snapshot.LogSnapshotCreatedAt),
}, nil
}
// authAndDoWithTaskAppClient centralizes the shared logic to:
+261
@@ -12,6 +12,7 @@ import (
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -723,6 +724,266 @@ func TestTasks(t *testing.T) {
})
})
t.Run("LogsWithSnapshot", func(t *testing.T) {
t.Parallel()
ownerClient, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{})
owner := coderdtest.CreateFirstUser(t, ownerClient)
ownerUser, err := ownerClient.User(testutil.Context(t, testutil.WaitMedium), owner.UserID.String())
require.NoError(t, err)
ownerSubject := coderdtest.AuthzUserSubject(ownerUser)
// Create a regular user to test snapshot access.
client, user := coderdtest.CreateAnotherUser(t, ownerClient, owner.OrganizationID)
// Helper to create a task in the desired state.
createTaskInState := func(ctx context.Context, t *testing.T, status database.TaskStatus) uuid.UUID {
ctx = dbauthz.As(ctx, ownerSubject)
builder := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: owner.OrganizationID,
OwnerID: user.ID,
}).
WithTask(database.TaskTable{
OrganizationID: owner.OrganizationID,
OwnerID: user.ID,
}, nil)
switch status {
case database.TaskStatusPending:
builder = builder.Pending()
case database.TaskStatusInitializing:
builder = builder.Starting()
case database.TaskStatusPaused:
builder = builder.Seed(database.WorkspaceBuild{
Transition: database.WorkspaceTransitionStop,
})
case database.TaskStatusError:
// For error state, create a completed build then manipulate app health.
default:
require.Fail(t, "unsupported task status in test helper", "status: %s", status)
}
resp := builder.Do()
taskID := resp.Task.ID
// Post-process by manipulating agent and app state.
if status == database.TaskStatusError {
// First, set agent to ready state so agent_status returns 'active'.
// This ensures the cascade reaches app_status.
err := db.UpdateWorkspaceAgentLifecycleStateByID(ctx, database.UpdateWorkspaceAgentLifecycleStateByIDParams{
ID: resp.Agents[0].ID,
LifecycleState: database.WorkspaceAgentLifecycleStateReady,
})
require.NoError(t, err)
// Then set workspace app health to unhealthy to trigger error state.
apps, err := db.GetWorkspaceAppsByAgentID(ctx, resp.Agents[0].ID)
require.NoError(t, err)
require.Len(t, apps, 1, "expected exactly one app for task")
err = db.UpdateWorkspaceAppHealthByID(ctx, database.UpdateWorkspaceAppHealthByIDParams{
ID: apps[0].ID,
Health: database.WorkspaceAppHealthUnhealthy,
})
require.NoError(t, err)
}
return taskID
}
// Prepare snapshot data used across tests.
snapshotMessages := []agentapisdk.Message{
{
Id: 0,
Content: "First message",
Role: agentapisdk.RoleAgent,
Time: time.Date(2025, 1, 1, 10, 0, 0, 0, time.UTC),
},
{
Id: 1,
Content: "Second message",
Role: agentapisdk.RoleUser,
Time: time.Date(2025, 1, 1, 10, 1, 0, 0, time.UTC),
},
}
snapshotData := agentapisdk.GetMessagesResponse{
Messages: snapshotMessages,
}
envelope := coderd.TaskLogSnapshotEnvelope{
Format: "agentapi",
Data: snapshotData,
}
snapshotJSON, err := json.Marshal(envelope)
require.NoError(t, err)
snapshotTime := time.Date(2025, 1, 1, 10, 5, 0, 0, time.UTC)
// Helper to verify snapshot logs content.
verifySnapshotLogs := func(t *testing.T, got codersdk.TaskLogsResponse) {
t.Helper()
want := codersdk.TaskLogsResponse{
Snapshot: true,
SnapshotAt: &snapshotTime,
Logs: []codersdk.TaskLogEntry{
{
ID: 0,
Type: codersdk.TaskLogTypeOutput,
Content: "First message",
Time: snapshotMessages[0].Time,
},
{
ID: 1,
Type: codersdk.TaskLogTypeInput,
Content: "Second message",
Time: snapshotMessages[1].Time,
},
},
}
if diff := cmp.Diff(want, got); diff != "" {
t.Errorf("got bad response (-want +got):\n%s", diff)
}
}
t.Run("PendingTaskReturnsSnapshot", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPending)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
logsResp, err := client.TaskLogs(ctx, "me", taskID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
t.Run("InitializingTaskReturnsSnapshot", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusInitializing)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
logsResp, err := client.TaskLogs(ctx, "me", taskID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
t.Run("PausedTaskReturnsSnapshot", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPaused)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(snapshotJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err, "upserting task snapshot")
logsResp, err := client.TaskLogs(ctx, "me", taskID)
require.NoError(t, err, "fetching task logs")
verifySnapshotLogs(t, logsResp)
})
t.Run("NoSnapshotReturnsEmpty", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPending)
logsResp, err := client.TaskLogs(ctx, "me", taskID)
require.NoError(t, err)
assert.True(t, logsResp.Snapshot)
assert.Nil(t, logsResp.SnapshotAt)
assert.Len(t, logsResp.Logs, 0)
})
t.Run("InvalidSnapshotFormat", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPending)
invalidEnvelope := coderd.TaskLogSnapshotEnvelope{
Format: "unknown-format",
Data: map[string]any{},
}
invalidJSON, err := json.Marshal(invalidEnvelope)
require.NoError(t, err)
err = db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(invalidJSON),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err)
_, err = client.TaskLogs(ctx, "me", taskID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
assert.Equal(t, http.StatusInternalServerError, sdkErr.StatusCode())
assert.Contains(t, sdkErr.Message, "Unsupported task snapshot format")
})
t.Run("MalformedSnapshotData", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusPending)
err := db.UpsertTaskSnapshot(dbauthz.As(ctx, ownerSubject), database.UpsertTaskSnapshotParams{
TaskID: taskID,
LogSnapshot: json.RawMessage(`{"format":"agentapi","data":"not an object"}`),
LogSnapshotCreatedAt: snapshotTime,
})
require.NoError(t, err)
_, err = client.TaskLogs(ctx, "me", taskID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
assert.Equal(t, http.StatusInternalServerError, sdkErr.StatusCode())
})
t.Run("ErrorStateReturnsError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
taskID := createTaskInState(ctx, t, database.TaskStatusError)
_, err := client.TaskLogs(ctx, "me", taskID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
assert.Equal(t, http.StatusConflict, sdkErr.StatusCode())
assert.Contains(t, sdkErr.Message, "Cannot fetch logs for task in current state")
assert.Contains(t, sdkErr.Detail, "error")
})
})
t.Run("UpdateInput", func(t *testing.T) {
tests := []struct {
name string
+10
@@ -15066,6 +15066,10 @@ const docTemplate = `{
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -18563,6 +18567,12 @@ const docTemplate = `{
"items": {
"$ref": "#/definitions/codersdk.TaskLogEntry"
}
},
"snapshot": {
"type": "boolean"
},
"snapshot_at": {
"type": "string"
}
}
},
+10
@@ -13623,6 +13623,10 @@
"limit": {
"type": "integer"
},
"soft_limit": {
"description": "SoftLimit is the soft limit of the feature, and is only used for showing\nincluded limits in the dashboard. No license validation or warnings are\ngenerated from this value.",
"type": "integer"
},
"usage_period": {
"description": "UsagePeriod denotes that the usage is a counter that accumulates over\nthis period (and most likely resets with the issuance of the next\nlicense).\n\nThese dates are determined from the license that this entitlement comes\nfrom, see enterprise/coderd/license/license.go.\n\nOnly certain features set these fields:\n- FeatureManagedAgentLimit",
"allOf": [
@@ -16979,6 +16983,12 @@
"items": {
"$ref": "#/definitions/codersdk.TaskLogEntry"
}
},
"snapshot": {
"type": "boolean"
},
"snapshot_at": {
"type": "string"
}
}
},
+2 -4
@@ -40,10 +40,8 @@
// counters. When boundary logs are reported, Track() adds the IDs to the sets
// and increments request counters.
//
// FlushToDB() writes stats to the database, replacing all values with the current
// in-memory state. Stats accumulate in memory throughout the telemetry period.
//
// A new period is detected when the upsert results in an INSERT (meaning
// telemetry deleted the replica's row). At that point, all in-memory stats are
+35 -72
@@ -14,40 +14,21 @@ import (
// Tracker tracks boundary usage for telemetry reporting.
//
// All stats accumulate in memory throughout a telemetry period and are only
// reset when a new period begins.
type Tracker struct {
mu sync.Mutex
workspaces map[uuid.UUID]struct{}
users map[uuid.UUID]struct{}
allowedRequests int64
deniedRequests int64
}
// NewTracker creates a new boundary usage tracker.
func NewTracker() *Tracker {
return &Tracker{
workspaces: make(map[uuid.UUID]struct{}),
users: make(map[uuid.UUID]struct{}),
}
}
@@ -58,68 +39,50 @@ func (t *Tracker) Track(workspaceID, ownerID uuid.UUID, allowed, denied int64) {
t.workspaces[workspaceID] = struct{}{}
t.users[ownerID] = struct{}{}
t.allowedRequests += allowed
t.deniedRequests += denied
}
// FlushToDB writes the accumulated stats to the database. All values are
// replaced in the database (they represent the current in-memory state). If the
// database row was deleted (new telemetry period), all in-memory stats are reset.
func (t *Tracker) FlushToDB(ctx context.Context, db database.Store, replicaID uuid.UUID) error {
t.mu.Lock()
workspaceCount := int64(len(t.workspaces))
userCount := int64(len(t.users))
allowed := t.allowedRequests
denied := t.deniedRequests
t.mu.Unlock()
// Don't flush if there's no activity.
if workspaceCount == 0 && userCount == 0 && allowed == 0 && denied == 0 {
return nil
}
//nolint:gocritic // This is the actual package doing boundary usage tracking.
newPeriod, err := db.UpsertBoundaryUsageStats(dbauthz.AsBoundaryUsageTracker(ctx), database.UpsertBoundaryUsageStatsParams{
ReplicaID: replicaID,
UniqueWorkspacesCount: workspaceCount,
UniqueUsersCount: userCount,
AllowedRequests: allowed,
DeniedRequests: denied,
})
if err != nil {
return err
}
// If this was an insert (new period), reset all stats. Any Track() calls
// that occurred during the DB operation will be counted in the next period.
if newPeriod {
t.mu.Lock()
t.workspaces = make(map[uuid.UUID]struct{})
t.users = make(map[uuid.UUID]struct{})
t.allowedRequests = 0
t.deniedRequests = 0
t.mu.Unlock()
}
return nil
}
// StartFlushLoop begins the periodic flush loop that writes accumulated stats
+36 -137
@@ -159,18 +159,23 @@ func TestTracker_FlushToDB_Accumulates(t *testing.T) {
workspaceID := uuid.New()
ownerID := uuid.New()
tracker.Track(workspaceID, ownerID, 5, 3)
// First flush is an insert, which resets in-memory stats.
err := tracker.FlushToDB(ctx, db, replicaID)
require.NoError(t, err)
// Track & flush more data. Same workspace/user, so unique counts stay at 1.
// Track more data after the reset.
tracker.Track(workspaceID, ownerID, 2, 1)
// Second flush is an update so stats should accumulate.
err = tracker.FlushToDB(ctx, db, replicaID)
require.NoError(t, err)
// Track & flush even more data to continue accumulation.
// Track even more data.
tracker.Track(workspaceID, ownerID, 3, 2)
// Third flush stats should continue accumulating.
err = tracker.FlushToDB(ctx, db, replicaID)
require.NoError(t, err)
@@ -179,8 +184,8 @@ func TestTracker_FlushToDB_Accumulates(t *testing.T) {
require.NoError(t, err)
require.Equal(t, int64(1), summary.UniqueWorkspaces)
require.Equal(t, int64(1), summary.UniqueUsers)
require.Equal(t, int64(5+2+3), summary.AllowedRequests)
require.Equal(t, int64(3+1+2), summary.DeniedRequests)
require.Equal(t, int64(5), summary.AllowedRequests, "should accumulate after first reset: 2+3=5")
require.Equal(t, int64(3), summary.DeniedRequests, "should accumulate after first reset: 1+2=3")
}
func TestTracker_FlushToDB_NewPeriod(t *testing.T) {
@@ -251,24 +256,15 @@ func TestUpsertBoundaryUsageStats_Insert(t *testing.T) {
replicaID := uuid.New()
// Set different values for delta vs cumulative to verify INSERT uses delta.
newPeriod, err := db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: replicaID,
UniqueWorkspacesDelta: 5,
UniqueUsersDelta: 3,
UniqueWorkspacesCount: 999, // should be ignored on INSERT
UniqueUsersCount: 999, // should be ignored on INSERT
UniqueWorkspacesCount: 5,
UniqueUsersCount: 3,
AllowedRequests: 100,
DeniedRequests: 10,
})
require.NoError(t, err)
require.True(t, newPeriod, "should return true for insert")
// Verify INSERT used the delta values, not cumulative.
summary, err := db.GetBoundaryUsageSummary(ctx, 60000)
require.NoError(t, err)
require.Equal(t, int64(5), summary.UniqueWorkspaces)
require.Equal(t, int64(3), summary.UniqueUsers)
}
func TestUpsertBoundaryUsageStats_Update(t *testing.T) {
@@ -279,34 +275,34 @@ func TestUpsertBoundaryUsageStats_Update(t *testing.T) {
replicaID := uuid.New()
// First insert uses delta fields.
// First insert.
_, err := db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: replicaID,
UniqueWorkspacesDelta: 5,
UniqueUsersDelta: 3,
UniqueWorkspacesCount: 5,
UniqueUsersCount: 3,
AllowedRequests: 100,
DeniedRequests: 10,
})
require.NoError(t, err)
// Second upsert (update). Set different delta vs cumulative to verify UPDATE uses cumulative.
// Second upsert (update).
newPeriod, err := db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: replicaID,
UniqueWorkspacesCount: 8, // cumulative, should be used
UniqueUsersCount: 5, // cumulative, should be used
UniqueWorkspacesCount: 8,
UniqueUsersCount: 5,
AllowedRequests: 200,
DeniedRequests: 20,
})
require.NoError(t, err)
require.False(t, newPeriod, "should return false for update")
// Verify UPDATE used cumulative values.
// Verify the update took effect.
summary, err := db.GetBoundaryUsageSummary(ctx, 60000)
require.NoError(t, err)
require.Equal(t, int64(8), summary.UniqueWorkspaces)
require.Equal(t, int64(5), summary.UniqueUsers)
require.Equal(t, int64(100+200), summary.AllowedRequests)
require.Equal(t, int64(10+20), summary.DeniedRequests)
require.Equal(t, int64(200), summary.AllowedRequests)
require.Equal(t, int64(20), summary.DeniedRequests)
}
func TestGetBoundaryUsageSummary_MultipleReplicas(t *testing.T) {
@@ -319,11 +315,11 @@ func TestGetBoundaryUsageSummary_MultipleReplicas(t *testing.T) {
replica2 := uuid.New()
replica3 := uuid.New()
// Insert stats for 3 replicas. Delta fields are used for INSERT.
// Insert stats for 3 replicas.
_, err := db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: replica1,
UniqueWorkspacesDelta: 10,
UniqueUsersDelta: 5,
UniqueWorkspacesCount: 10,
UniqueUsersCount: 5,
AllowedRequests: 100,
DeniedRequests: 10,
})
@@ -331,8 +327,8 @@ func TestGetBoundaryUsageSummary_MultipleReplicas(t *testing.T) {
_, err = db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: replica2,
UniqueWorkspacesDelta: 15,
UniqueUsersDelta: 8,
UniqueWorkspacesCount: 15,
UniqueUsersCount: 8,
AllowedRequests: 150,
DeniedRequests: 15,
})
@@ -340,8 +336,8 @@ func TestGetBoundaryUsageSummary_MultipleReplicas(t *testing.T) {
_, err = db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: replica3,
UniqueWorkspacesDelta: 20,
UniqueUsersDelta: 12,
UniqueWorkspacesCount: 20,
UniqueUsersCount: 12,
AllowedRequests: 200,
DeniedRequests: 20,
})
@@ -379,12 +375,12 @@ func TestResetBoundaryUsageStats(t *testing.T) {
db, _ := dbtestutil.NewDB(t)
ctx := dbauthz.AsBoundaryUsageTracker(context.Background())
// Insert stats for multiple replicas. Delta fields are used for INSERT.
// Insert stats for multiple replicas.
for i := 0; i < 5; i++ {
_, err := db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: uuid.New(),
UniqueWorkspacesDelta: int64(i + 1),
UniqueUsersDelta: int64(i + 1),
UniqueWorkspacesCount: int64(i + 1),
UniqueUsersCount: int64(i + 1),
AllowedRequests: int64((i + 1) * 10),
DeniedRequests: int64(i + 1),
})
@@ -416,11 +412,11 @@ func TestDeleteBoundaryUsageStatsByReplicaID(t *testing.T) {
replica1 := uuid.New()
replica2 := uuid.New()
// Insert stats for 2 replicas. Delta fields are used for INSERT.
// Insert stats for 2 replicas.
_, err := db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: replica1,
UniqueWorkspacesDelta: 10,
UniqueUsersDelta: 5,
UniqueWorkspacesCount: 10,
UniqueUsersCount: 5,
AllowedRequests: 100,
DeniedRequests: 10,
})
@@ -428,8 +424,8 @@ func TestDeleteBoundaryUsageStatsByReplicaID(t *testing.T) {
_, err = db.UpsertBoundaryUsageStats(ctx, database.UpsertBoundaryUsageStatsParams{
ReplicaID: replica2,
UniqueWorkspacesDelta: 20,
UniqueUsersDelta: 10,
UniqueWorkspacesCount: 20,
UniqueUsersCount: 10,
AllowedRequests: 200,
DeniedRequests: 20,
})
@@ -501,49 +497,6 @@ func TestTracker_TelemetryCycle(t *testing.T) {
require.Equal(t, int64(1), summary.AllowedRequests)
}
func TestTracker_FlushToDB_NoStaleDataAfterReset(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := testutil.Context(t, testutil.WaitShort)
boundaryCtx := dbauthz.AsBoundaryUsageTracker(ctx)
tracker := boundaryusage.NewTracker()
replicaID := uuid.New()
workspaceID := uuid.New()
ownerID := uuid.New()
// Track some data, flush, and verify.
tracker.Track(workspaceID, ownerID, 10, 5)
err := tracker.FlushToDB(ctx, db, replicaID)
require.NoError(t, err)
summary, err := db.GetBoundaryUsageSummary(boundaryCtx, 60000)
require.NoError(t, err)
require.Equal(t, int64(1), summary.UniqueWorkspaces)
require.Equal(t, int64(10), summary.AllowedRequests)
// Simulate telemetry reset (new period).
err = db.ResetBoundaryUsageStats(boundaryCtx)
require.NoError(t, err)
summary, err = db.GetBoundaryUsageSummary(boundaryCtx, 60000)
require.NoError(t, err)
require.Equal(t, int64(0), summary.AllowedRequests)
// Flush again without any new Track() calls. This should not write stale
// data back to the DB.
err = tracker.FlushToDB(ctx, db, replicaID)
require.NoError(t, err)
// Summary should be empty (no stale data written).
summary, err = db.GetBoundaryUsageSummary(boundaryCtx, 60000)
require.NoError(t, err)
require.Equal(t, int64(0), summary.UniqueWorkspaces)
require.Equal(t, int64(0), summary.UniqueUsers)
require.Equal(t, int64(0), summary.AllowedRequests)
require.Equal(t, int64(0), summary.DeniedRequests)
}
func TestTracker_ConcurrentFlushAndTrack(t *testing.T) {
t.Parallel()
@@ -587,57 +540,3 @@ func TestTracker_ConcurrentFlushAndTrack(t *testing.T) {
require.GreaterOrEqual(t, summary.AllowedRequests, int64(0))
require.GreaterOrEqual(t, summary.DeniedRequests, int64(0))
}
// trackDuringUpsertDB wraps a database.Store to call Track() during the
// UpsertBoundaryUsageStats operation, simulating a concurrent Track() call.
type trackDuringUpsertDB struct {
database.Store
tracker *boundaryusage.Tracker
workspaceID uuid.UUID
userID uuid.UUID
}
func (s *trackDuringUpsertDB) UpsertBoundaryUsageStats(ctx context.Context, arg database.UpsertBoundaryUsageStatsParams) (bool, error) {
s.tracker.Track(s.workspaceID, s.userID, 20, 10)
return s.Store.UpsertBoundaryUsageStats(ctx, arg)
}
func TestTracker_TrackDuringFlush(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := testutil.Context(t, testutil.WaitShort)
boundaryCtx := dbauthz.AsBoundaryUsageTracker(ctx)
tracker := boundaryusage.NewTracker()
replicaID := uuid.New()
// Track some initial data.
tracker.Track(uuid.New(), uuid.New(), 10, 5)
trackingDB := &trackDuringUpsertDB{
Store: db,
tracker: tracker,
workspaceID: uuid.New(),
userID: uuid.New(),
}
// Flush will call Track() during the DB operation.
err := tracker.FlushToDB(ctx, trackingDB, replicaID)
require.NoError(t, err)
// Verify first flush only wrote the initial data.
summary, err := db.GetBoundaryUsageSummary(boundaryCtx, 60000)
require.NoError(t, err)
require.Equal(t, int64(10), summary.AllowedRequests)
// The second flush should include the Track() call that happened during the
// first flush's DB operation.
err = tracker.FlushToDB(ctx, db, replicaID)
require.NoError(t, err)
summary, err = db.GetBoundaryUsageSummary(boundaryCtx, 60000)
require.NoError(t, err)
require.Equal(t, int64(10+20), summary.AllowedRequests)
require.Equal(t, int64(5+10), summary.DeniedRequests)
}
+2 -14
@@ -106,8 +106,6 @@ import (
"github.com/coder/quartz"
)
const DefaultDERPMeshKey = "test-key"
const defaultTestDaemonName = "test-daemon"
type Options struct {
@@ -512,18 +510,8 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
stunAddresses = options.DeploymentValues.DERP.Server.STUNAddresses.Value()
}
const derpMeshKey = "test-key"
// Technically AGPL coderd servers don't set this value, but it doesn't
// change any behavior. It's useful for enterprise tests.
err = options.Database.InsertDERPMeshKey(dbauthz.AsSystemRestricted(ctx), derpMeshKey) //nolint:gocritic // test
if !database.IsUniqueViolation(err, database.UniqueSiteConfigsKeyKey) {
require.NoError(t, err, "insert DERP mesh key")
}
var derpServer *derp.Server
if options.DeploymentValues.DERP.Server.Enable.Value() {
derpServer = derp.NewServer(key.NewNode(), tailnet.Logger(options.Logger.Named("derp").Leveled(slog.LevelDebug)))
derpServer.SetMeshKey(derpMeshKey)
}
derpServer := derp.NewServer(key.NewNode(), tailnet.Logger(options.Logger.Named("derp").Leveled(slog.LevelDebug)))
derpServer.SetMeshKey("test-key")
// match default with cli default
if options.SSHKeygenAlgorithm == "" {
+4 -13
@@ -31,23 +31,14 @@ import (
previewtypes "github.com/coder/preview/types"
)
// List is a helper function to reduce boilerplate when converting slices of
// database types to slices of codersdk types.
// Only works if the function takes a single argument.
// Deprecated: use slice.List
func List[F any, T any](list []F, convert func(F) T) []T {
return ListLazy(convert)(list)
return slice.List[F, T](list, convert)
}
// ListLazy returns the converter function for a list, but does not eval
// the input. Helpful for combining the Map and the List functions.
// Deprecated: use slice.ListLazy
func ListLazy[F any, T any](convert func(F) T) func(list []F) []T {
return func(list []F) []T {
into := make([]T, 0, len(list))
for _, item := range list {
into = append(into, convert(item))
}
return into
}
return slice.ListLazy[F, T](convert)
}
func APIAllowListTarget(entry rbac.AllowListElement) codersdk.APIAllowListTarget {
+1
@@ -394,6 +394,7 @@ func WorkspaceAgentDevcontainer(t testing.TB, db database.Store, orig database.W
Name: []string{takeFirst(orig.Name, testutil.GetRandomName(t))},
WorkspaceFolder: []string{takeFirst(orig.WorkspaceFolder, "/workspace")},
ConfigPath: []string{takeFirst(orig.ConfigPath, "")},
SubagentID: []uuid.UUID{orig.SubagentID.UUID},
})
require.NoError(t, err, "insert workspace agent devcontainer")
return devcontainers[0]
+5 -1
@@ -2457,7 +2457,8 @@ CREATE TABLE workspace_agent_devcontainers (
created_at timestamp with time zone DEFAULT now() NOT NULL,
workspace_folder text NOT NULL,
config_path text NOT NULL,
name text NOT NULL
name text NOT NULL,
subagent_id uuid
);
COMMENT ON TABLE workspace_agent_devcontainers IS 'Workspace agent devcontainer configuration';
@@ -3737,6 +3738,9 @@ ALTER TABLE ONLY user_status_changes
ALTER TABLE ONLY webpush_subscriptions
ADD CONSTRAINT webpush_subscriptions_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE ONLY workspace_agent_devcontainers
ADD CONSTRAINT workspace_agent_devcontainers_subagent_id_fkey FOREIGN KEY (subagent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ALTER TABLE ONLY workspace_agent_devcontainers
ADD CONSTRAINT workspace_agent_devcontainers_workspace_agent_id_fkey FOREIGN KEY (workspace_agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
@@ -72,6 +72,7 @@ const (
ForeignKeyUserSecretsUserID ForeignKeyConstraint = "user_secrets_user_id_fkey" // ALTER TABLE ONLY user_secrets ADD CONSTRAINT user_secrets_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyUserStatusChangesUserID ForeignKeyConstraint = "user_status_changes_user_id_fkey" // ALTER TABLE ONLY user_status_changes ADD CONSTRAINT user_status_changes_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id);
ForeignKeyWebpushSubscriptionsUserID ForeignKeyConstraint = "webpush_subscriptions_user_id_fkey" // ALTER TABLE ONLY webpush_subscriptions ADD CONSTRAINT webpush_subscriptions_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyWorkspaceAgentDevcontainersSubagentID ForeignKeyConstraint = "workspace_agent_devcontainers_subagent_id_fkey" // ALTER TABLE ONLY workspace_agent_devcontainers ADD CONSTRAINT workspace_agent_devcontainers_subagent_id_fkey FOREIGN KEY (subagent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ForeignKeyWorkspaceAgentDevcontainersWorkspaceAgentID ForeignKeyConstraint = "workspace_agent_devcontainers_workspace_agent_id_fkey" // ALTER TABLE ONLY workspace_agent_devcontainers ADD CONSTRAINT workspace_agent_devcontainers_workspace_agent_id_fkey FOREIGN KEY (workspace_agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ForeignKeyWorkspaceAgentLogSourcesWorkspaceAgentID ForeignKeyConstraint = "workspace_agent_log_sources_workspace_agent_id_fkey" // ALTER TABLE ONLY workspace_agent_log_sources ADD CONSTRAINT workspace_agent_log_sources_workspace_agent_id_fkey FOREIGN KEY (workspace_agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
ForeignKeyWorkspaceAgentMemoryResourceMonitorsAgentID ForeignKeyConstraint = "workspace_agent_memory_resource_monitors_agent_id_fkey" // ALTER TABLE ONLY workspace_agent_memory_resource_monitors ADD CONSTRAINT workspace_agent_memory_resource_monitors_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE CASCADE;
@@ -0,0 +1,2 @@
ALTER TABLE workspace_agent_devcontainers
DROP COLUMN subagent_id;
@@ -0,0 +1,2 @@
ALTER TABLE workspace_agent_devcontainers
ADD COLUMN subagent_id UUID REFERENCES workspace_agents(id) ON DELETE CASCADE;
+2 -1
@@ -4771,7 +4771,8 @@ type WorkspaceAgentDevcontainer struct {
// Path to devcontainer.json.
ConfigPath string `db:"config_path" json:"config_path"`
// The name of the Dev Container.
Name string `db:"name" json:"name"`
Name string `db:"name" json:"name"`
SubagentID uuid.NullUUID `db:"subagent_id" json:"subagent_id"`
}
type WorkspaceAgentLog struct {
+3 -4
@@ -762,10 +762,9 @@ type sqlcQuerier interface {
UpsertAnnouncementBanners(ctx context.Context, value string) error
UpsertAppSecurityKey(ctx context.Context, value string) error
UpsertApplicationName(ctx context.Context, value string) error
// Upserts boundary usage statistics for a replica. On INSERT (new period), uses
// delta values for unique counts (only data since last flush). On UPDATE, uses
// cumulative values for unique counts (accurate period totals). Request counts
// are always deltas, accumulated in DB. Returns true if insert, false if update.
// Upserts boundary usage statistics for a replica. All values are replaced with
// the current in-memory state. Returns true if this was an insert (new period),
// false if update.
UpsertBoundaryUsageStats(ctx context.Context, arg UpsertBoundaryUsageStatsParams) (bool, error)
UpsertConnectionLog(ctx context.Context, arg UpsertConnectionLogParams) (ConnectionLog, error)
UpsertCoordinatorResumeTokenSigningKey(ctx context.Context, value string) error
+102 -50
@@ -7,7 +7,9 @@ import (
"errors"
"fmt"
"net"
"slices"
"sort"
"strings"
"testing"
"time"
@@ -6271,56 +6273,6 @@ func TestGetWorkspaceAgentsByParentID(t *testing.T) {
})
}
func TestGetWorkspaceAgentByInstanceID(t *testing.T) {
t.Parallel()
// Context: https://github.com/coder/coder/pull/22196
t.Run("DoesNotReturnSubAgents", func(t *testing.T) {
t.Parallel()
// Given: A parent workspace agent with an AuthInstanceID and a
// sub-agent that shares the same AuthInstanceID.
db, _ := dbtestutil.NewDB(t)
org := dbgen.Organization(t, db, database.Organization{})
job := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{
Type: database.ProvisionerJobTypeTemplateVersionImport,
OrganizationID: org.ID,
})
resource := dbgen.WorkspaceResource(t, db, database.WorkspaceResource{
JobID: job.ID,
})
authInstanceID := fmt.Sprintf("instance-%s-%d", t.Name(), time.Now().UnixNano())
parentAgent := dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ResourceID: resource.ID,
AuthInstanceID: sql.NullString{
String: authInstanceID,
Valid: true,
},
})
// Create a sub-agent with the same AuthInstanceID (simulating
// the old behavior before the fix).
_ = dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ParentID: uuid.NullUUID{UUID: parentAgent.ID, Valid: true},
ResourceID: resource.ID,
AuthInstanceID: sql.NullString{
String: authInstanceID,
Valid: true,
},
})
ctx := testutil.Context(t, testutil.WaitShort)
// When: We look up the agent by instance ID.
agent, err := db.GetWorkspaceAgentByInstanceID(ctx, authInstanceID)
require.NoError(t, err)
// Then: The result must be the parent agent, not the sub-agent.
assert.Equal(t, parentAgent.ID, agent.ID, "instance ID lookup should return the parent agent, not a sub-agent")
assert.False(t, agent.ParentID.Valid, "returned agent should not have a parent (should be the parent itself)")
})
}
func requireUsersMatch(t testing.TB, expected []database.User, found []database.GetUsersRow, msg string) {
t.Helper()
require.ElementsMatch(t, expected, database.ConvertUserRows(found), msg)
@@ -8532,3 +8484,103 @@ func TestGetAuthenticatedWorkspaceAgentAndBuildByAuthToken_ShutdownScripts(t *te
require.ErrorIs(t, err, sql.ErrNoRows, "agent should not authenticate when latest build is not STOP")
})
}
// Our `InsertWorkspaceAgentDevcontainers` query's `subagent_id` parameter should ideally be
// `[]uuid.NullUUID`, but sqlc unfortunately infers it as `[]uuid.UUID`. To ensure we never
// insert a `uuid.Nil`, the query converts `uuid.Nil` to NULL. This test guards that behavior
// against regression.
func TestInsertWorkspaceAgentDevcontainers(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
validSubagent []bool
}{
{"BothValid", []bool{true, true}},
{"FirstValidSecondInvalid", []bool{true, false}},
{"FirstInvalidSecondValid", []bool{false, true}},
{"BothInvalid", []bool{false, false}},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var (
db, _ = dbtestutil.NewDB(t)
org = dbgen.Organization(t, db, database.Organization{})
job = dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{
Type: database.ProvisionerJobTypeTemplateVersionImport,
OrganizationID: org.ID,
})
resource = dbgen.WorkspaceResource(t, db, database.WorkspaceResource{JobID: job.ID})
agent = dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{ResourceID: resource.ID})
)
ids := make([]uuid.UUID, len(tc.validSubagent))
names := make([]string, len(tc.validSubagent))
workspaceFolders := make([]string, len(tc.validSubagent))
configPaths := make([]string, len(tc.validSubagent))
subagentIDs := make([]uuid.UUID, len(tc.validSubagent))
for i, valid := range tc.validSubagent {
ids[i] = uuid.New()
names[i] = fmt.Sprintf("test-devcontainer-%d", i)
workspaceFolders[i] = fmt.Sprintf("/workspace%d", i)
configPaths[i] = fmt.Sprintf("/workspace%d/.devcontainer/devcontainer.json", i)
if valid {
subagentIDs[i] = dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ResourceID: resource.ID,
ParentID: uuid.NullUUID{UUID: agent.ID, Valid: true},
}).ID
} else {
subagentIDs[i] = uuid.Nil
}
}
ctx := testutil.Context(t, testutil.WaitShort)
// Given: We insert multiple devcontainer records.
devcontainers, err := db.InsertWorkspaceAgentDevcontainers(ctx, database.InsertWorkspaceAgentDevcontainersParams{
WorkspaceAgentID: agent.ID,
CreatedAt: dbtime.Now(),
ID: ids,
Name: names,
WorkspaceFolder: workspaceFolders,
ConfigPath: configPaths,
SubagentID: subagentIDs,
})
require.NoError(t, err)
require.Len(t, devcontainers, len(tc.validSubagent))
// Then: Verify each devcontainer has the correct SubagentID validity.
// - When we pass `uuid.Nil`, we get a `uuid.NullUUID{Valid: false}`
// - When we pass a valid UUID, we get a `uuid.NullUUID{Valid: true}`
for i, valid := range tc.validSubagent {
require.Equal(t, valid, devcontainers[i].SubagentID.Valid, "devcontainer %d: subagent_id validity mismatch", i)
if valid {
require.Equal(t, subagentIDs[i], devcontainers[i].SubagentID.UUID, "devcontainer %d: subagent_id UUID mismatch", i)
}
}
// Perform the same check on data returned by
// `GetWorkspaceAgentDevcontainersByAgentID` to ensure the behavior holds at
// the storage layer, not just in the insert query.
fetched, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, agent.ID)
require.NoError(t, err)
require.Len(t, fetched, len(tc.validSubagent))
// Sort fetched by name to ensure consistent ordering for comparison.
slices.SortFunc(fetched, func(a, b database.WorkspaceAgentDevcontainer) int {
return strings.Compare(a.Name, b.Name)
})
for i, valid := range tc.validSubagent {
require.Equal(t, valid, fetched[i].SubagentID.Valid, "fetched devcontainer %d: subagent_id validity mismatch", i)
if valid {
require.Equal(t, subagentIDs[i], fetched[i].SubagentID.UUID, "fetched devcontainer %d: subagent_id UUID mismatch", i)
}
}
})
}
}
+20 -22
@@ -2051,37 +2051,32 @@ INSERT INTO boundary_usage_stats (
NOW(),
NOW()
) ON CONFLICT (replica_id) DO UPDATE SET
unique_workspaces_count = $6,
unique_users_count = $7,
allowed_requests = boundary_usage_stats.allowed_requests + EXCLUDED.allowed_requests,
denied_requests = boundary_usage_stats.denied_requests + EXCLUDED.denied_requests,
unique_workspaces_count = EXCLUDED.unique_workspaces_count,
unique_users_count = EXCLUDED.unique_users_count,
allowed_requests = EXCLUDED.allowed_requests,
denied_requests = EXCLUDED.denied_requests,
updated_at = NOW()
RETURNING (xmax = 0) AS new_period
`
type UpsertBoundaryUsageStatsParams struct {
ReplicaID uuid.UUID `db:"replica_id" json:"replica_id"`
UniqueWorkspacesDelta int64 `db:"unique_workspaces_delta" json:"unique_workspaces_delta"`
UniqueUsersDelta int64 `db:"unique_users_delta" json:"unique_users_delta"`
AllowedRequests int64 `db:"allowed_requests" json:"allowed_requests"`
DeniedRequests int64 `db:"denied_requests" json:"denied_requests"`
UniqueWorkspacesCount int64 `db:"unique_workspaces_count" json:"unique_workspaces_count"`
UniqueUsersCount int64 `db:"unique_users_count" json:"unique_users_count"`
AllowedRequests int64 `db:"allowed_requests" json:"allowed_requests"`
DeniedRequests int64 `db:"denied_requests" json:"denied_requests"`
}
// Upserts boundary usage statistics for a replica. On INSERT (new period), uses
// delta values for unique counts (only data since last flush). On UPDATE, uses
// cumulative values for unique counts (accurate period totals). Request counts
// are always deltas, accumulated in DB. Returns true if insert, false if update.
// Upserts boundary usage statistics for a replica. All values are replaced with
// the current in-memory state. Returns true if this was an insert (new period),
// false if update.
func (q *sqlQuerier) UpsertBoundaryUsageStats(ctx context.Context, arg UpsertBoundaryUsageStatsParams) (bool, error) {
row := q.db.QueryRowContext(ctx, upsertBoundaryUsageStats,
arg.ReplicaID,
arg.UniqueWorkspacesDelta,
arg.UniqueUsersDelta,
arg.AllowedRequests,
arg.DeniedRequests,
arg.UniqueWorkspacesCount,
arg.UniqueUsersCount,
arg.AllowedRequests,
arg.DeniedRequests,
)
var new_period bool
err := row.Scan(&new_period)
@@ -17218,7 +17213,7 @@ func (q *sqlQuerier) ValidateUserIDs(ctx context.Context, userIds []uuid.UUID) (
const getWorkspaceAgentDevcontainersByAgentID = `-- name: GetWorkspaceAgentDevcontainersByAgentID :many
SELECT
id, workspace_agent_id, created_at, workspace_folder, config_path, name
id, workspace_agent_id, created_at, workspace_folder, config_path, name, subagent_id
FROM
workspace_agent_devcontainers
WHERE
@@ -17243,6 +17238,7 @@ func (q *sqlQuerier) GetWorkspaceAgentDevcontainersByAgentID(ctx context.Context
&i.WorkspaceFolder,
&i.ConfigPath,
&i.Name,
&i.SubagentID,
); err != nil {
return nil, err
}
@@ -17259,15 +17255,16 @@ func (q *sqlQuerier) GetWorkspaceAgentDevcontainersByAgentID(ctx context.Context
const insertWorkspaceAgentDevcontainers = `-- name: InsertWorkspaceAgentDevcontainers :many
INSERT INTO
workspace_agent_devcontainers (workspace_agent_id, created_at, id, name, workspace_folder, config_path)
workspace_agent_devcontainers (workspace_agent_id, created_at, id, name, workspace_folder, config_path, subagent_id)
SELECT
$1::uuid AS workspace_agent_id,
$2::timestamptz AS created_at,
unnest($3::uuid[]) AS id,
unnest($4::text[]) AS name,
unnest($5::text[]) AS workspace_folder,
unnest($6::text[]) AS config_path
RETURNING workspace_agent_devcontainers.id, workspace_agent_devcontainers.workspace_agent_id, workspace_agent_devcontainers.created_at, workspace_agent_devcontainers.workspace_folder, workspace_agent_devcontainers.config_path, workspace_agent_devcontainers.name
unnest($6::text[]) AS config_path,
NULLIF(unnest($7::uuid[]), '00000000-0000-0000-0000-000000000000')::uuid AS subagent_id
RETURNING workspace_agent_devcontainers.id, workspace_agent_devcontainers.workspace_agent_id, workspace_agent_devcontainers.created_at, workspace_agent_devcontainers.workspace_folder, workspace_agent_devcontainers.config_path, workspace_agent_devcontainers.name, workspace_agent_devcontainers.subagent_id
`
type InsertWorkspaceAgentDevcontainersParams struct {
@@ -17277,6 +17274,7 @@ type InsertWorkspaceAgentDevcontainersParams struct {
Name []string `db:"name" json:"name"`
WorkspaceFolder []string `db:"workspace_folder" json:"workspace_folder"`
ConfigPath []string `db:"config_path" json:"config_path"`
SubagentID []uuid.UUID `db:"subagent_id" json:"subagent_id"`
}
func (q *sqlQuerier) InsertWorkspaceAgentDevcontainers(ctx context.Context, arg InsertWorkspaceAgentDevcontainersParams) ([]WorkspaceAgentDevcontainer, error) {
@@ -17287,6 +17285,7 @@ func (q *sqlQuerier) InsertWorkspaceAgentDevcontainers(ctx context.Context, arg
pq.Array(arg.Name),
pq.Array(arg.WorkspaceFolder),
pq.Array(arg.ConfigPath),
pq.Array(arg.SubagentID),
)
if err != nil {
return nil, err
@@ -17302,6 +17301,7 @@ func (q *sqlQuerier) InsertWorkspaceAgentDevcontainers(ctx context.Context, arg
&i.WorkspaceFolder,
&i.ConfigPath,
&i.Name,
&i.SubagentID,
); err != nil {
return nil, err
}
@@ -18226,8 +18226,6 @@ WHERE
auth_instance_id = $1 :: TEXT
-- Filter out deleted sub agents.
AND deleted = FALSE
-- Filter out sub agents, they do not authenticate with auth_instance_id.
AND parent_id IS NULL
ORDER BY
created_at DESC
`
+9 -10
@@ -1,8 +1,7 @@
-- name: UpsertBoundaryUsageStats :one
-- Upserts boundary usage statistics for a replica. On INSERT (new period), uses
-- delta values for unique counts (only data since last flush). On UPDATE, uses
-- cumulative values for unique counts (accurate period totals). Request counts
-- are always deltas, accumulated in DB. Returns true if insert, false if update.
-- Upserts boundary usage statistics for a replica. All values are replaced with
-- the current in-memory state. Returns true if this was an insert (new period),
-- false if update.
INSERT INTO boundary_usage_stats (
replica_id,
unique_workspaces_count,
@@ -13,17 +12,17 @@ INSERT INTO boundary_usage_stats (
updated_at
) VALUES (
@replica_id,
@unique_workspaces_delta,
@unique_users_delta,
@unique_workspaces_count,
@unique_users_count,
@allowed_requests,
@denied_requests,
NOW(),
NOW()
) ON CONFLICT (replica_id) DO UPDATE SET
unique_workspaces_count = @unique_workspaces_count,
unique_users_count = @unique_users_count,
allowed_requests = boundary_usage_stats.allowed_requests + EXCLUDED.allowed_requests,
denied_requests = boundary_usage_stats.denied_requests + EXCLUDED.denied_requests,
unique_workspaces_count = EXCLUDED.unique_workspaces_count,
unique_users_count = EXCLUDED.unique_users_count,
allowed_requests = EXCLUDED.allowed_requests,
denied_requests = EXCLUDED.denied_requests,
updated_at = NOW()
RETURNING (xmax = 0) AS new_period;
@@ -1,13 +1,14 @@
-- name: InsertWorkspaceAgentDevcontainers :many
INSERT INTO
workspace_agent_devcontainers (workspace_agent_id, created_at, id, name, workspace_folder, config_path)
workspace_agent_devcontainers (workspace_agent_id, created_at, id, name, workspace_folder, config_path, subagent_id)
SELECT
@workspace_agent_id::uuid AS workspace_agent_id,
@created_at::timestamptz AS created_at,
unnest(@id::uuid[]) AS id,
unnest(@name::text[]) AS name,
unnest(@workspace_folder::text[]) AS workspace_folder,
unnest(@config_path::text[]) AS config_path
unnest(@config_path::text[]) AS config_path,
NULLIF(unnest(@subagent_id::uuid[]), '00000000-0000-0000-0000-000000000000')::uuid AS subagent_id
RETURNING workspace_agent_devcontainers.*;
-- name: GetWorkspaceAgentDevcontainersByAgentID :many
@@ -17,8 +17,6 @@ WHERE
auth_instance_id = @auth_instance_id :: TEXT
-- Filter out deleted sub agents.
AND deleted = FALSE
-- Filter out sub agents, they do not authenticate with auth_instance_id.
AND parent_id IS NULL
ORDER BY
created_at DESC;
+6
@@ -162,6 +162,12 @@ func (l *Set) Errors() []string {
return slices.Clone(l.entitlements.Errors)
}
func (l *Set) Warnings() []string {
l.entitlementsMu.RLock()
defer l.entitlementsMu.RUnlock()
return slices.Clone(l.entitlements.Warnings)
}
func (l *Set) HasLicense() bool {
l.entitlementsMu.RLock()
defer l.entitlementsMu.RUnlock()
+7
@@ -23,6 +23,7 @@ import (
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpmw/loggermw"
"github.com/coder/coder/v2/coderd/promoauth"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/rolestore"
@@ -244,6 +245,12 @@ func ExtractAPIKey(rw http.ResponseWriter, r *http.Request, cfg ExtractAPIKeyCon
return optionalWrite(http.StatusUnauthorized, resp)
}
// Log the API key ID for all requests that have a valid key format and secret,
// regardless of whether subsequent validation (expiry, user status, etc.) succeeds.
if rl := loggermw.RequestLoggerFromContext(ctx); rl != nil {
rl.WithFields(slog.F("api_key_id", key.ID))
}
now := dbtime.Now()
if key.ExpiresAt.Before(now) {
return optionalWrite(http.StatusUnauthorized, codersdk.Response{
+79
@@ -16,9 +16,11 @@ import (
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.uber.org/mock/gomock"
"golang.org/x/exp/slices"
"golang.org/x/oauth2"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/apikey"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
@@ -27,6 +29,8 @@ import (
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpmw"
"github.com/coder/coder/v2/coderd/httpmw/loggermw"
"github.com/coder/coder/v2/coderd/httpmw/loggermw/loggermock"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/cryptorand"
@@ -991,4 +995,79 @@ func TestAPIKey(t *testing.T) {
defer res.Body.Close()
require.Equal(t, http.StatusOK, res.StatusCode)
})
t.Run("LogsAPIKeyID", func(t *testing.T) {
t.Parallel()
tests := []struct {
name string
expired bool
expectedStatus int
}{
{
name: "OnSuccess",
expired: false,
expectedStatus: http.StatusOK,
},
{
name: "OnFailure",
expired: true,
expectedStatus: http.StatusUnauthorized,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var (
db, _ = dbtestutil.NewDB(t)
user = dbgen.User(t, db, database.User{})
expiry = dbtime.Now().AddDate(0, 0, 1)
)
if tc.expired {
expiry = dbtime.Now().AddDate(0, 0, -1)
}
sentAPIKey, token := dbgen.APIKey(t, db, database.APIKey{
UserID: user.ID,
ExpiresAt: expiry,
})
var (
ctrl = gomock.NewController(t)
mockLogger = loggermock.NewMockRequestLogger(ctrl)
r = httptest.NewRequest("GET", "/", nil)
rw = httptest.NewRecorder()
)
r.Header.Set(codersdk.SessionTokenHeader, token)
// Expect WithAuthContext to be called (from dbauthz.As).
mockLogger.EXPECT().WithAuthContext(gomock.Any()).AnyTimes()
// Expect WithFields to be called with api_key_id field regardless of success/failure.
mockLogger.EXPECT().WithFields(
slog.F("api_key_id", sentAPIKey.ID),
).Times(1)
// Add the mock logger to the context.
ctx := loggermw.WithRequestLogger(r.Context(), mockLogger)
r = r.WithContext(ctx)
httpmw.ExtractAPIKeyMW(httpmw.ExtractAPIKeyConfig{
DB: db,
RedirectToLogin: false,
})(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
if tc.expired {
t.Error("handler should not be called on auth failure")
}
httpapi.Write(r.Context(), rw, http.StatusOK, codersdk.Response{
Message: "It worked!",
})
})).ServeHTTP(rw, r)
res := rw.Result()
defer res.Body.Close()
require.Equal(t, tc.expectedStatus, res.StatusCode)
})
}
})
}
-1
@@ -65,7 +65,6 @@ type StateSnapshotter interface {
type Claimer interface {
Claim(
ctx context.Context,
store database.Store,
now time.Time,
userID uuid.UUID,
name string,
+1 -1
@@ -34,7 +34,7 @@ var DefaultReconciler ReconciliationOrchestrator = NoopReconciler{}
type NoopClaimer struct{}
func (NoopClaimer) Claim(context.Context, database.Store, time.Time, uuid.UUID, string, uuid.UUID, sql.NullString, sql.NullTime, sql.NullInt64) (*uuid.UUID, error) {
func (NoopClaimer) Claim(context.Context, time.Time, uuid.UUID, string, uuid.UUID, sql.NullString, sql.NullTime, sql.NullInt64) (*uuid.UUID, error) {
// Not entitled to claim prebuilds in AGPL version.
return nil, ErrAGPLDoesNotSupportPrebuiltWorkspaces
}
@@ -132,19 +132,6 @@ func Workspaces(ctx context.Context, logger slog.Logger, registerer prometheus.R
duration = defaultRefreshRate
}
// TODO: deprecated: remove in the future
// See: https://github.com/coder/coder/issues/12999
// Deprecation reason: gauge metrics should avoid the suffix `_total`
workspaceLatestBuildTotalsDeprecated := prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "api",
Name: "workspace_latest_build_total",
Help: "DEPRECATED: use coderd_api_workspace_latest_build instead",
}, []string{"status"})
if err := registerer.Register(workspaceLatestBuildTotalsDeprecated); err != nil {
return nil, err
}
workspaceLatestBuildTotals := prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "api",
@@ -198,8 +185,6 @@ func Workspaces(ctx context.Context, logger slog.Logger, registerer prometheus.R
for _, w := range ws {
status := string(w.LatestBuildStatus)
workspaceLatestBuildTotals.WithLabelValues(status).Add(1)
// TODO: deprecated: remove in the future
workspaceLatestBuildTotalsDeprecated.WithLabelValues(status).Add(1)
workspaceLatestBuildStatuses.WithLabelValues(
status,
+3 -19
@@ -70,11 +70,9 @@ type metrics struct {
// if the oauth supports it, rate limit metrics.
// rateLimit is the defined limit per interval
rateLimit *prometheus.GaugeVec
// TODO: remove deprecated metrics in the future release
rateLimitDeprecated *prometheus.GaugeVec
rateLimitRemaining *prometheus.GaugeVec
rateLimitUsed *prometheus.GaugeVec
rateLimit *prometheus.GaugeVec
rateLimitRemaining *prometheus.GaugeVec
rateLimitUsed *prometheus.GaugeVec
// rateLimitReset is unix time of the next interval (when the rate limit resets).
rateLimitReset *prometheus.GaugeVec
// rateLimitResetIn is the time in seconds until the rate limit resets.
@@ -109,18 +107,6 @@ func NewFactory(registry prometheus.Registerer) *Factory {
// Some IDPs have different buckets for different rate limits.
"resource",
}),
// TODO: deprecated: remove in the future
// See: https://github.com/coder/coder/issues/12999
// Deprecation reason: gauge metrics should avoid the suffix `_total`
rateLimitDeprecated: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "oauth2",
Name: "external_requests_rate_limit_total",
Help: "DEPRECATED: use coderd_oauth2_external_requests_rate_limit instead",
}, []string{
"name",
"resource",
}),
rateLimitRemaining: factory.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "oauth2",
@@ -198,8 +184,6 @@ func (f *Factory) NewGithub(name string, under OAuth2Config) *Config {
}
}
// TODO: remove this metric in v3
f.metrics.rateLimitDeprecated.With(labels).Set(float64(limits.Limit))
f.metrics.rateLimit.With(labels).Set(float64(limits.Limit))
f.metrics.rateLimitRemaining.With(labels).Set(float64(limits.Remaining))
f.metrics.rateLimitUsed.With(labels).Set(float64(limits.Used))
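The deprecations removed in the hunks above all follow one Prometheus naming rule: the `_total` suffix is conventionally reserved for counters, so gauges like `coderd_oauth2_external_requests_rate_limit_total` were renamed to the unsuffixed form. A dependency-free sketch of that naming check (illustrative only, not part of the Coder codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// gaugeNameOK reports whether a fully qualified gauge name follows the
// Prometheus convention that `_total` is reserved for counter metrics.
func gaugeNameOK(name string) bool {
	return !strings.HasSuffix(name, "_total")
}

func main() {
	// The deprecated name removed in the diff vs. its replacement.
	fmt.Println(gaugeNameOK("coderd_oauth2_external_requests_rate_limit_total")) // false
	fmt.Println(gaugeNameOK("coderd_oauth2_external_requests_rate_limit"))       // true
}
```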
+2 -2
@@ -209,7 +209,7 @@ func TestGithubRateLimits(t *testing.T) {
}
pass := true
if !c.ExpectNoMetrics {
pass = pass && assert.Equal(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit_total", labels), c.Limit, "limit")
pass = pass && assert.Equal(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit", labels), c.Limit, "limit")
pass = pass && assert.Equal(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit_remaining", labels), c.Remaining, "remaining")
pass = pass && assert.Equal(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit_used", labels), c.Used, "used")
if !c.at.IsZero() {
@@ -218,7 +218,7 @@ func TestGithubRateLimits(t *testing.T) {
pass = pass && assert.InDelta(t, promhelp.GaugeValue(t, reg, "coderd_oauth2_external_requests_rate_limit_reset_in_seconds", labels), int(until.Seconds()), 2, "reset in")
}
} else {
pass = pass && assert.Nil(t, promhelp.MetricValue(t, reg, "coderd_oauth2_external_requests_rate_limit_total", labels), "not exists")
pass = pass && assert.Nil(t, promhelp.MetricValue(t, reg, "coderd_oauth2_external_requests_rate_limit", labels), "not exists")
}
// Helpful debugging
+335 -159
@@ -1652,7 +1652,6 @@ func (s *server) completeTemplateImportJob(ctx context.Context, job database.Pro
// Process modules
for transition, modules := range map[database.WorkspaceTransition][]*sdkproto.Module{
database.WorkspaceTransitionStart: jobType.TemplateImport.StartModules,
database.WorkspaceTransitionStop: jobType.TemplateImport.StopModules,
} {
for _, module := range modules {
s.Logger.Info(ctx, "inserting template import job module",
@@ -2032,6 +2031,23 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
appIDs = append(appIDs, app.GetId())
agentIDByAppID[app.GetId()] = agentID
}
// Subagents in devcontainers can also have apps that need
// tracking for task linking, just like the parent agent's
// apps above.
for _, dc := range protoAgent.GetDevcontainers() {
dc.Id = uuid.New().String()
if dc.GetSubagentId() != "" {
subAgentID := uuid.New()
dc.SubagentId = subAgentID.String()
for _, app := range dc.GetApps() {
appIDs = append(appIDs, app.GetId())
agentIDByAppID[app.GetId()] = subAgentID
}
}
}
}
err = InsertWorkspaceResource(
@@ -2860,33 +2876,7 @@ func InsertWorkspaceResource(ctx context.Context, db database.Store, jobID uuid.
}
}
logSourceIDs := make([]uuid.UUID, 0, len(prAgent.Scripts))
logSourceDisplayNames := make([]string, 0, len(prAgent.Scripts))
logSourceIcons := make([]string, 0, len(prAgent.Scripts))
scriptIDs := make([]uuid.UUID, 0, len(prAgent.Scripts))
scriptDisplayName := make([]string, 0, len(prAgent.Scripts))
scriptLogPaths := make([]string, 0, len(prAgent.Scripts))
scriptSources := make([]string, 0, len(prAgent.Scripts))
scriptCron := make([]string, 0, len(prAgent.Scripts))
scriptTimeout := make([]int32, 0, len(prAgent.Scripts))
scriptStartBlocksLogin := make([]bool, 0, len(prAgent.Scripts))
scriptRunOnStart := make([]bool, 0, len(prAgent.Scripts))
scriptRunOnStop := make([]bool, 0, len(prAgent.Scripts))
for _, script := range prAgent.Scripts {
logSourceIDs = append(logSourceIDs, uuid.New())
logSourceDisplayNames = append(logSourceDisplayNames, script.DisplayName)
logSourceIcons = append(logSourceIcons, script.Icon)
scriptIDs = append(scriptIDs, uuid.New())
scriptDisplayName = append(scriptDisplayName, script.DisplayName)
scriptLogPaths = append(scriptLogPaths, script.LogPath)
scriptSources = append(scriptSources, script.Script)
scriptCron = append(scriptCron, script.Cron)
scriptTimeout = append(scriptTimeout, script.TimeoutSeconds)
scriptStartBlocksLogin = append(scriptStartBlocksLogin, script.StartBlocksLogin)
scriptRunOnStart = append(scriptRunOnStart, script.RunOnStart)
scriptRunOnStop = append(scriptRunOnStop, script.RunOnStop)
}
scriptsParams := agentScriptsFromProto(prAgent.Scripts)
// Dev Containers require a script and log/source, so we do this before
// the logs insert below.
@@ -2896,32 +2886,46 @@ func InsertWorkspaceResource(ctx context.Context, db database.Store, jobID uuid.
devcontainerNames = make([]string, 0, len(devcontainers))
devcontainerWorkspaceFolders = make([]string, 0, len(devcontainers))
devcontainerConfigPaths = make([]string, 0, len(devcontainers))
devcontainerSubagentIDs = make([]uuid.UUID, 0, len(devcontainers))
)
for _, dc := range devcontainers {
id := uuid.New()
if opts.useAgentIDsFromProto {
id, err = uuid.Parse(dc.GetId())
if err != nil {
return xerrors.Errorf("invalid devcontainer ID format; must be uuid: %w", err)
}
}
subAgentID, err := insertDevcontainerSubagent(ctx, db, dc, dbAgent, resource.ID, appSlugs, snapshot, opts)
if err != nil {
return xerrors.Errorf("insert devcontainer %q subagent: %w", dc.GetName(), err)
}
devcontainerIDs = append(devcontainerIDs, id)
devcontainerNames = append(devcontainerNames, dc.Name)
devcontainerWorkspaceFolders = append(devcontainerWorkspaceFolders, dc.WorkspaceFolder)
devcontainerConfigPaths = append(devcontainerConfigPaths, dc.ConfigPath)
devcontainerNames = append(devcontainerNames, dc.GetName())
devcontainerWorkspaceFolders = append(devcontainerWorkspaceFolders, dc.GetWorkspaceFolder())
devcontainerConfigPaths = append(devcontainerConfigPaths, dc.GetConfigPath())
devcontainerSubagentIDs = append(devcontainerSubagentIDs, subAgentID)
// Add a log source and script for each devcontainer so we can
// track logs and timings for each devcontainer.
displayName := fmt.Sprintf("Dev Container (%s)", dc.Name)
logSourceIDs = append(logSourceIDs, uuid.New())
logSourceDisplayNames = append(logSourceDisplayNames, displayName)
logSourceIcons = append(logSourceIcons, "/emojis/1f4e6.png") // Emoji package. Or perhaps /icon/container.svg?
scriptIDs = append(scriptIDs, id) // Re-use the devcontainer ID as the script ID for identification.
scriptDisplayName = append(scriptDisplayName, displayName)
scriptLogPaths = append(scriptLogPaths, "")
scriptSources = append(scriptSources, `echo "WARNING: Dev Containers are early access. If you're seeing this message then Dev Containers haven't been enabled for your workspace yet. To enable, the agent needs to run with the environment variable CODER_AGENT_DEVCONTAINERS_ENABLE=true set."`)
scriptCron = append(scriptCron, "")
scriptTimeout = append(scriptTimeout, 0)
scriptStartBlocksLogin = append(scriptStartBlocksLogin, false)
displayName := fmt.Sprintf("Dev Container (%s)", dc.GetName())
scriptsParams.LogSourceIDs = append(scriptsParams.LogSourceIDs, uuid.New())
scriptsParams.LogSourceDisplayNames = append(scriptsParams.LogSourceDisplayNames, displayName)
scriptsParams.LogSourceIcons = append(scriptsParams.LogSourceIcons, "/emojis/1f4e6.png") // Emoji package. Or perhaps /icon/container.svg?
scriptsParams.ScriptIDs = append(scriptsParams.ScriptIDs, id) // Re-use the devcontainer ID as the script ID for identification.
scriptsParams.ScriptDisplayNames = append(scriptsParams.ScriptDisplayNames, displayName)
scriptsParams.ScriptLogPaths = append(scriptsParams.ScriptLogPaths, "")
scriptsParams.ScriptSources = append(scriptsParams.ScriptSources, `echo "WARNING: Dev Containers are early access. If you're seeing this message then Dev Containers haven't been enabled for your workspace yet. To enable, the agent needs to run with the environment variable CODER_AGENT_DEVCONTAINERS_ENABLE=true set."`)
scriptsParams.ScriptCron = append(scriptsParams.ScriptCron, "")
scriptsParams.ScriptTimeout = append(scriptsParams.ScriptTimeout, 0)
scriptsParams.ScriptStartBlocksLogin = append(scriptsParams.ScriptStartBlocksLogin, false)
// Run on start to surface the warning message in case the
// terraform resource is used, but the experiment hasn't
// been enabled.
scriptRunOnStart = append(scriptRunOnStart, true)
scriptRunOnStop = append(scriptRunOnStop, false)
scriptsParams.ScriptRunOnStart = append(scriptsParams.ScriptRunOnStart, true)
scriptsParams.ScriptRunOnStop = append(scriptsParams.ScriptRunOnStop, false)
}
_, err = db.InsertWorkspaceAgentDevcontainers(ctx, database.InsertWorkspaceAgentDevcontainersParams{
@@ -2931,131 +2935,21 @@ func InsertWorkspaceResource(ctx context.Context, db database.Store, jobID uuid.
Name: devcontainerNames,
WorkspaceFolder: devcontainerWorkspaceFolders,
ConfigPath: devcontainerConfigPaths,
SubagentID: devcontainerSubagentIDs,
})
if err != nil {
return xerrors.Errorf("insert agent devcontainer: %w", err)
}
}
_, err = db.InsertWorkspaceAgentLogSources(ctx, database.InsertWorkspaceAgentLogSourcesParams{
WorkspaceAgentID: agentID,
ID: logSourceIDs,
CreatedAt: dbtime.Now(),
DisplayName: logSourceDisplayNames,
Icon: logSourceIcons,
})
if err != nil {
return xerrors.Errorf("insert agent log sources: %w", err)
}
_, err = db.InsertWorkspaceAgentScripts(ctx, database.InsertWorkspaceAgentScriptsParams{
WorkspaceAgentID: agentID,
LogSourceID: logSourceIDs,
LogPath: scriptLogPaths,
CreatedAt: dbtime.Now(),
Script: scriptSources,
Cron: scriptCron,
TimeoutSeconds: scriptTimeout,
StartBlocksLogin: scriptStartBlocksLogin,
RunOnStart: scriptRunOnStart,
RunOnStop: scriptRunOnStop,
DisplayName: scriptDisplayName,
ID: scriptIDs,
})
if err != nil {
return xerrors.Errorf("insert agent scripts: %w", err)
if err := insertAgentScriptsAndLogSources(ctx, db, agentID, scriptsParams); err != nil {
return xerrors.Errorf("insert agent scripts and log sources: %w", err)
}
for _, app := range prAgent.Apps {
// Similar logic is duplicated in terraform/resources.go.
slug := app.Slug
if slug == "" {
return xerrors.Errorf("app must have a slug or name set")
if err := insertAgentApp(ctx, db, dbAgent.ID, app, appSlugs, snapshot); err != nil {
return xerrors.Errorf("insert agent app: %w", err)
}
// Contrary to agent names above, app slugs were never permitted to
// contain uppercase letters or underscores.
if !provisioner.AppSlugRegex.MatchString(slug) {
return xerrors.Errorf("app slug %q does not match regex %q", slug, provisioner.AppSlugRegex.String())
}
if _, exists := appSlugs[slug]; exists {
return xerrors.Errorf("duplicate app slug, must be unique per template: %q", slug)
}
appSlugs[slug] = struct{}{}
health := database.WorkspaceAppHealthDisabled
if app.Healthcheck == nil {
app.Healthcheck = &sdkproto.Healthcheck{}
}
if app.Healthcheck.Url != "" {
health = database.WorkspaceAppHealthInitializing
}
sharingLevel := database.AppSharingLevelOwner
switch app.SharingLevel {
case sdkproto.AppSharingLevel_AUTHENTICATED:
sharingLevel = database.AppSharingLevelAuthenticated
case sdkproto.AppSharingLevel_PUBLIC:
sharingLevel = database.AppSharingLevelPublic
}
displayGroup := sql.NullString{
Valid: app.Group != "",
String: app.Group,
}
openIn := database.WorkspaceAppOpenInSlimWindow
switch app.OpenIn {
case sdkproto.AppOpenIn_TAB:
openIn = database.WorkspaceAppOpenInTab
case sdkproto.AppOpenIn_SLIM_WINDOW:
openIn = database.WorkspaceAppOpenInSlimWindow
}
var appID string
if app.Id == "" || app.Id == uuid.Nil.String() {
appID = uuid.NewString()
} else {
appID = app.Id
}
id, err := uuid.Parse(appID)
if err != nil {
return xerrors.Errorf("parse app uuid: %w", err)
}
// If workspace apps are "persistent", the ID will not be regenerated across workspace builds, so we have to upsert.
dbApp, err := db.UpsertWorkspaceApp(ctx, database.UpsertWorkspaceAppParams{
ID: id,
CreatedAt: dbtime.Now(),
AgentID: dbAgent.ID,
Slug: slug,
DisplayName: app.DisplayName,
Icon: app.Icon,
Command: sql.NullString{
String: app.Command,
Valid: app.Command != "",
},
Url: sql.NullString{
String: app.Url,
Valid: app.Url != "",
},
External: app.External,
Subdomain: app.Subdomain,
SharingLevel: sharingLevel,
HealthcheckUrl: app.Healthcheck.Url,
HealthcheckInterval: app.Healthcheck.Interval,
HealthcheckThreshold: app.Healthcheck.Threshold,
Health: health,
// #nosec G115 - Order represents a display order value that's always small and fits in int32
DisplayOrder: int32(app.Order),
DisplayGroup: displayGroup,
Hidden: app.Hidden,
OpenIn: openIn,
Tooltip: app.Tooltip,
})
if err != nil {
return xerrors.Errorf("upsert app: %w", err)
}
snapshot.WorkspaceApps = append(snapshot.WorkspaceApps, telemetry.ConvertWorkspaceApp(dbApp))
}
}
@@ -3361,3 +3255,285 @@ func convertDisplayApps(apps *sdkproto.DisplayApps) []database.DisplayApp {
}
return dapps
}
// insertDevcontainerSubagent creates a workspace agent for a devcontainer's
// subagent if one is defined. It returns the subagent ID (zero UUID if no
// subagent is defined).
func insertDevcontainerSubagent(
ctx context.Context,
db database.Store,
dc *sdkproto.Devcontainer,
parentAgent database.WorkspaceAgent,
resourceID uuid.UUID,
appSlugs map[string]struct{},
snapshot *telemetry.Snapshot,
opts *insertWorkspaceResourceOptions,
) (uuid.UUID, error) {
// If there are no attached resources, we don't need to pre-create the
// subagent. This preserves backwards compatibility where devcontainers
// without resources can have their agents recreated dynamically.
if len(dc.GetApps()) == 0 && len(dc.GetScripts()) == 0 && len(dc.GetEnvs()) == 0 {
return uuid.UUID{}, nil
}
subAgentID := uuid.New()
if opts.useAgentIDsFromProto {
var err error
subAgentID, err = uuid.Parse(dc.GetSubagentId())
if err != nil {
return uuid.UUID{}, xerrors.Errorf("parse subagent id: %w", err)
}
}
envJSON, err := encodeSubagentEnvs(dc.GetEnvs())
if err != nil {
return uuid.UUID{}, err
}
_, err = db.InsertWorkspaceAgent(ctx, database.InsertWorkspaceAgentParams{
ID: subAgentID,
ParentID: uuid.NullUUID{Valid: true, UUID: parentAgent.ID},
CreatedAt: dbtime.Now(),
UpdatedAt: dbtime.Now(),
ResourceID: resourceID,
Name: dc.GetName(),
AuthToken: uuid.New(),
AuthInstanceID: parentAgent.AuthInstanceID,
Architecture: parentAgent.Architecture,
EnvironmentVariables: envJSON,
Directory: dc.GetWorkspaceFolder(),
InstanceMetadata: pqtype.NullRawMessage{},
ResourceMetadata: pqtype.NullRawMessage{},
OperatingSystem: parentAgent.OperatingSystem,
ConnectionTimeoutSeconds: parentAgent.ConnectionTimeoutSeconds,
TroubleshootingURL: parentAgent.TroubleshootingURL,
MOTDFile: "",
DisplayApps: []database.DisplayApp{},
DisplayOrder: 0,
APIKeyScope: parentAgent.APIKeyScope,
})
if err != nil {
return uuid.UUID{}, xerrors.Errorf("insert subagent: %w", err)
}
for _, app := range dc.GetApps() {
if err := insertAgentApp(ctx, db, subAgentID, app, appSlugs, snapshot); err != nil {
return uuid.UUID{}, xerrors.Errorf("insert agent app: %w", err)
}
}
if err := insertAgentScriptsAndLogSources(ctx, db, subAgentID, agentScriptsFromProto(dc.GetScripts())); err != nil {
return uuid.UUID{}, xerrors.Errorf("insert agent scripts and log sources: %w", err)
}
return subAgentID, nil
}
func encodeSubagentEnvs(envs []*sdkproto.Env) (pqtype.NullRawMessage, error) {
if len(envs) == 0 {
return pqtype.NullRawMessage{}, nil
}
subAgentEnvs := make(map[string]string, len(envs))
for _, env := range envs {
subAgentEnvs[env.GetName()] = env.GetValue()
}
data, err := json.Marshal(subAgentEnvs)
if err != nil {
return pqtype.NullRawMessage{}, xerrors.Errorf("marshal env: %w", err)
}
return pqtype.NullRawMessage{Valid: true, RawMessage: data}, nil
}
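`encodeSubagentEnvs` above collapses the proto env list into a single JSON object, with an empty list mapping to the NULL case. Stripped of the `pqtype`/`sdkproto` wrappers, the core transformation is just the following (a stdlib-only sketch; `envPair` and `encodeEnvs` are illustrative names):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envPair stands in for the proto Env message's name/value fields.
type envPair struct{ Name, Value string }

// encodeEnvs mirrors the shape of encodeSubagentEnvs: an empty list
// yields no payload (the NULL case), otherwise a JSON object keyed by
// env name, with later duplicates overwriting earlier ones.
func encodeEnvs(envs []envPair) ([]byte, bool, error) {
	if len(envs) == 0 {
		return nil, false, nil
	}
	m := make(map[string]string, len(envs))
	for _, e := range envs {
		m[e.Name] = e.Value
	}
	data, err := json.Marshal(m)
	if err != nil {
		return nil, false, err
	}
	return data, true, nil
}

func main() {
	data, valid, err := encodeEnvs([]envPair{{"EDITOR", "vim"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(valid, string(data)) // true {"EDITOR":"vim"}
}
```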
// agentScriptsParams holds the parameters for inserting agent scripts and
// their associated log sources.
type agentScriptsParams struct {
LogSourceIDs []uuid.UUID
LogSourceDisplayNames []string
LogSourceIcons []string
ScriptIDs []uuid.UUID
ScriptDisplayNames []string
ScriptLogPaths []string
ScriptSources []string
ScriptCron []string
ScriptTimeout []int32
ScriptStartBlocksLogin []bool
ScriptRunOnStart []bool
ScriptRunOnStop []bool
}
// agentScriptsFromProto converts a slice of proto scripts into the
// agentScriptsParams struct needed for database insertion.
func agentScriptsFromProto(scripts []*sdkproto.Script) agentScriptsParams {
params := agentScriptsParams{
LogSourceIDs: make([]uuid.UUID, 0, len(scripts)),
LogSourceDisplayNames: make([]string, 0, len(scripts)),
LogSourceIcons: make([]string, 0, len(scripts)),
ScriptIDs: make([]uuid.UUID, 0, len(scripts)),
ScriptDisplayNames: make([]string, 0, len(scripts)),
ScriptLogPaths: make([]string, 0, len(scripts)),
ScriptSources: make([]string, 0, len(scripts)),
ScriptCron: make([]string, 0, len(scripts)),
ScriptTimeout: make([]int32, 0, len(scripts)),
ScriptStartBlocksLogin: make([]bool, 0, len(scripts)),
ScriptRunOnStart: make([]bool, 0, len(scripts)),
ScriptRunOnStop: make([]bool, 0, len(scripts)),
}
for _, script := range scripts {
params.LogSourceIDs = append(params.LogSourceIDs, uuid.New())
params.LogSourceDisplayNames = append(params.LogSourceDisplayNames, script.GetDisplayName())
params.LogSourceIcons = append(params.LogSourceIcons, script.GetIcon())
params.ScriptIDs = append(params.ScriptIDs, uuid.New())
params.ScriptDisplayNames = append(params.ScriptDisplayNames, script.GetDisplayName())
params.ScriptLogPaths = append(params.ScriptLogPaths, script.GetLogPath())
params.ScriptSources = append(params.ScriptSources, script.GetScript())
params.ScriptCron = append(params.ScriptCron, script.GetCron())
params.ScriptTimeout = append(params.ScriptTimeout, script.GetTimeoutSeconds())
params.ScriptStartBlocksLogin = append(params.ScriptStartBlocksLogin, script.GetStartBlocksLogin())
params.ScriptRunOnStart = append(params.ScriptRunOnStart, script.GetRunOnStart())
params.ScriptRunOnStop = append(params.ScriptRunOnStop, script.GetRunOnStop())
}
return params
}
// insertAgentScriptsAndLogSources inserts log sources and scripts for an agent (or
// subagent). It expects the caller to have built the agentScriptsParams,
// allowing for additional entries to be appended before insertion (e.g. for
// devcontainers). Returns nil if there are no log sources to insert.
func insertAgentScriptsAndLogSources(ctx context.Context, db database.Store, agentID uuid.UUID, params agentScriptsParams) error {
if len(params.LogSourceIDs) == 0 {
return nil
}
_, err := db.InsertWorkspaceAgentLogSources(ctx, database.InsertWorkspaceAgentLogSourcesParams{
WorkspaceAgentID: agentID,
ID: params.LogSourceIDs,
CreatedAt: dbtime.Now(),
DisplayName: params.LogSourceDisplayNames,
Icon: params.LogSourceIcons,
})
if err != nil {
return xerrors.Errorf("insert log sources: %w", err)
}
_, err = db.InsertWorkspaceAgentScripts(ctx, database.InsertWorkspaceAgentScriptsParams{
WorkspaceAgentID: agentID,
LogSourceID: params.LogSourceIDs,
ID: params.ScriptIDs,
LogPath: params.ScriptLogPaths,
CreatedAt: dbtime.Now(),
Script: params.ScriptSources,
Cron: params.ScriptCron,
TimeoutSeconds: params.ScriptTimeout,
StartBlocksLogin: params.ScriptStartBlocksLogin,
RunOnStart: params.ScriptRunOnStart,
RunOnStop: params.ScriptRunOnStop,
DisplayName: params.ScriptDisplayNames,
})
if err != nil {
return xerrors.Errorf("insert scripts: %w", err)
}
return nil
}
func insertAgentApp(ctx context.Context, db database.Store, agentID uuid.UUID, app *sdkproto.App, appSlugs map[string]struct{}, snapshot *telemetry.Snapshot) error {
// Similar logic is duplicated in terraform/resources.go.
slug := app.Slug
if slug == "" {
return xerrors.Errorf("app must have a slug or name set")
}
// Unlike agent names, app slugs were never permitted to contain uppercase
// letters or underscores.
if !provisioner.AppSlugRegex.MatchString(slug) {
return xerrors.Errorf("app slug %q does not match regex %q", slug, provisioner.AppSlugRegex.String())
}
if _, exists := appSlugs[slug]; exists {
return xerrors.Errorf("duplicate app slug, must be unique per template: %q", slug)
}
appSlugs[slug] = struct{}{}
health := database.WorkspaceAppHealthDisabled
if app.Healthcheck == nil {
app.Healthcheck = &sdkproto.Healthcheck{}
}
if app.Healthcheck.Url != "" {
health = database.WorkspaceAppHealthInitializing
}
sharingLevel := database.AppSharingLevelOwner
switch app.SharingLevel {
case sdkproto.AppSharingLevel_AUTHENTICATED:
sharingLevel = database.AppSharingLevelAuthenticated
case sdkproto.AppSharingLevel_PUBLIC:
sharingLevel = database.AppSharingLevelPublic
}
displayGroup := sql.NullString{
Valid: app.Group != "",
String: app.Group,
}
openIn := database.WorkspaceAppOpenInSlimWindow
switch app.OpenIn {
case sdkproto.AppOpenIn_TAB:
openIn = database.WorkspaceAppOpenInTab
case sdkproto.AppOpenIn_SLIM_WINDOW:
openIn = database.WorkspaceAppOpenInSlimWindow
}
var appID string
if app.Id == "" || app.Id == uuid.Nil.String() {
appID = uuid.NewString()
} else {
appID = app.Id
}
id, err := uuid.Parse(appID)
if err != nil {
return xerrors.Errorf("parse app uuid: %w", err)
}
// If workspace apps are "persistent", the ID will not be regenerated across workspace builds, so we have to upsert.
dbApp, err := db.UpsertWorkspaceApp(ctx, database.UpsertWorkspaceAppParams{
ID: id,
CreatedAt: dbtime.Now(),
AgentID: agentID,
Slug: slug,
DisplayName: app.DisplayName,
Icon: app.Icon,
Command: sql.NullString{
String: app.Command,
Valid: app.Command != "",
},
Url: sql.NullString{
String: app.Url,
Valid: app.Url != "",
},
External: app.External,
Subdomain: app.Subdomain,
SharingLevel: sharingLevel,
HealthcheckUrl: app.Healthcheck.Url,
HealthcheckInterval: app.Healthcheck.Interval,
HealthcheckThreshold: app.Healthcheck.Threshold,
Health: health,
// #nosec G115 - Order represents a display order value that's always small and fits in int32
DisplayOrder: int32(app.Order),
DisplayGroup: displayGroup,
Hidden: app.Hidden,
OpenIn: openIn,
Tooltip: app.Tooltip,
})
if err != nil {
return xerrors.Errorf("upsert app: %w", err)
}
snapshot.WorkspaceApps = append(snapshot.WorkspaceApps, telemetry.ConvertWorkspaceApp(dbApp))
return nil
}
@@ -2309,19 +2309,17 @@ func TestCompleteJob(t *testing.T) {
Version: "1.0.0",
Source: "github.com/example/example",
},
},
StopResources: []*sdkproto.Resource{{
Name: "something2",
Type: "aws_instance",
ModulePath: "module.test2",
}},
StopModules: []*sdkproto.Module{
{
Key: "test2",
Version: "2.0.0",
Source: "github.com/example2/example",
},
},
StopResources: []*sdkproto.Resource{{
Name: "something2",
Type: "aws_instance",
ModulePath: "module.test2",
}},
Plan: []byte("{}"),
},
},
@@ -2358,7 +2356,7 @@ func TestCompleteJob(t *testing.T) {
Key: "test2",
Version: "2.0.0",
Source: "github.com/example2/example",
Transition: database.WorkspaceTransitionStop,
Transition: database.WorkspaceTransitionStart,
}},
},
{
@@ -2983,6 +2981,46 @@ func TestCompleteJob(t *testing.T) {
expectHasAiTask: true,
expectUsageEvent: true,
},
{
name: "ai task linked to subagent app in devcontainer",
transition: database.WorkspaceTransitionStart,
input: &proto.CompletedJob_WorkspaceBuild{
AiTasks: []*sdkproto.AITask{
{
Id: uuid.NewString(),
AppId: sidebarAppID.String(),
},
},
Resources: []*sdkproto.Resource{
{
Agents: []*sdkproto.Agent{
{
Id: uuid.NewString(),
Name: "parent-agent",
Devcontainers: []*sdkproto.Devcontainer{
{
Name: "dev",
WorkspaceFolder: "/workspace",
SubagentId: uuid.NewString(),
Apps: []*sdkproto.App{
{
Id: sidebarAppID.String(),
Slug: "subagent-app",
},
},
},
},
},
},
},
},
},
isTask: true,
expectTaskStatus: database.TaskStatusInitializing,
expectAppID: uuid.NullUUID{UUID: sidebarAppID, Valid: true},
expectHasAiTask: true,
expectUsageEvent: true,
},
// Checks regression for https://github.com/coder/coder/issues/18776
{
name: "non-existing app",
@@ -3388,6 +3426,9 @@ func TestInsertWorkspaceResource(t *testing.T) {
insert := func(db database.Store, jobID uuid.UUID, resource *sdkproto.Resource) error {
return provisionerdserver.InsertWorkspaceResource(ctx, db, jobID, database.WorkspaceTransitionStart, resource, &telemetry.Snapshot{})
}
insertWithProtoIDs := func(db database.Store, jobID uuid.UUID, resource *sdkproto.Resource) error {
return provisionerdserver.InsertWorkspaceResource(ctx, db, jobID, database.WorkspaceTransitionStart, resource, &telemetry.Snapshot{}, provisionerdserver.InsertWorkspaceResourceWithAgentIDsFromProto())
}
t.Run("NoAgents", func(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
@@ -3724,39 +3765,400 @@ func TestInsertWorkspaceResource(t *testing.T) {
t.Run("Devcontainers", func(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
job := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{})
err := insert(db, job.ID, &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{
{Name: "foo", WorkspaceFolder: "/workspace1"},
{Name: "bar", WorkspaceFolder: "/workspace2", ConfigPath: "/workspace2/.devcontainer/devcontainer.json"},
agentID := uuid.New()
subAgentID := uuid.New()
devcontainerID := uuid.New()
devcontainerID2 := uuid.New()
tests := []struct {
name string
resource *sdkproto.Resource
wantErr string
protoIDsOnly bool // when true, only run with insertWithProtoIDs (e.g., for UUID parsing error tests)
expectSubAgentCount int
check func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, subAgents []database.WorkspaceAgent, useProtoIDs bool)
}{
{
name: "OK",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{
{Id: devcontainerID.String(), Name: "foo", WorkspaceFolder: "/workspace1"},
{Id: devcontainerID2.String(), Name: "bar", WorkspaceFolder: "/workspace2", ConfigPath: "/workspace2/.devcontainer/devcontainer.json"},
},
}},
},
}},
})
require.NoError(t, err)
resources, err := db.GetWorkspaceResourcesByJobID(ctx, job.ID)
require.NoError(t, err)
require.Len(t, resources, 1)
agents, err := db.GetWorkspaceAgentsByResourceIDs(ctx, []uuid.UUID{resources[0].ID})
require.NoError(t, err)
require.Len(t, agents, 1)
agent := agents[0]
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, agent.ID)
sort.Slice(devcontainers, func(i, j int) bool {
return devcontainers[i].Name > devcontainers[j].Name
})
require.NoError(t, err)
require.Len(t, devcontainers, 2)
require.Equal(t, "foo", devcontainers[0].Name)
require.Equal(t, "/workspace1", devcontainers[0].WorkspaceFolder)
require.Equal(t, "", devcontainers[0].ConfigPath)
require.Equal(t, "bar", devcontainers[1].Name)
require.Equal(t, "/workspace2", devcontainers[1].WorkspaceFolder)
require.Equal(t, "/workspace2/.devcontainer/devcontainer.json", devcontainers[1].ConfigPath)
expectSubAgentCount: 0,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, _ []database.WorkspaceAgent, useProtoIDs bool) {
require.Equal(t, "dev", parentAgent.Name)
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, parentAgent.ID)
require.NoError(t, err)
sort.Slice(devcontainers, func(i, j int) bool {
return devcontainers[i].Name > devcontainers[j].Name
})
require.Len(t, devcontainers, 2)
if useProtoIDs {
assert.Equal(t, devcontainerID, devcontainers[0].ID)
assert.Equal(t, devcontainerID2, devcontainers[1].ID)
} else {
assert.NotEqual(t, uuid.Nil, devcontainers[0].ID)
assert.NotEqual(t, uuid.Nil, devcontainers[1].ID)
}
assert.Equal(t, "foo", devcontainers[0].Name)
assert.Equal(t, "/workspace1", devcontainers[0].WorkspaceFolder)
assert.Equal(t, "", devcontainers[0].ConfigPath)
assert.False(t, devcontainers[0].SubagentID.Valid)
assert.Equal(t, "bar", devcontainers[1].Name)
assert.Equal(t, "/workspace2", devcontainers[1].WorkspaceFolder)
assert.Equal(t, "/workspace2/.devcontainer/devcontainer.json", devcontainers[1].ConfigPath)
assert.False(t, devcontainers[1].SubagentID.Valid)
},
},
{
name: "SubAgentWithAllResources",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Architecture: "amd64",
OperatingSystem: "linux",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "full-subagent",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "code-server", DisplayName: "VS Code", Url: "http://localhost:8080"},
},
Scripts: []*sdkproto.Script{
{DisplayName: "Startup", Script: "echo start", RunOnStart: true},
},
Envs: []*sdkproto.Env{
{Name: "EDITOR", Value: "vim"},
},
}},
}},
},
expectSubAgentCount: 1,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, subAgents []database.WorkspaceAgent, useProtoIDs bool) {
require.Len(t, subAgents, 1)
subAgent := subAgents[0]
if useProtoIDs {
require.Equal(t, subAgentID, subAgent.ID)
} else {
require.NotEqual(t, uuid.Nil, subAgent.ID)
}
assert.Equal(t, parentAgent.ID, subAgent.ParentID.UUID)
assert.Equal(t, parentAgent.Architecture, subAgent.Architecture)
assert.Equal(t, parentAgent.OperatingSystem, subAgent.OperatingSystem)
apps, err := db.GetWorkspaceAppsByAgentID(ctx, subAgent.ID)
require.NoError(t, err)
require.Len(t, apps, 1)
assert.Equal(t, "code-server", apps[0].Slug)
scripts, err := db.GetWorkspaceAgentScriptsByAgentIDs(ctx, []uuid.UUID{subAgent.ID})
require.NoError(t, err)
require.Len(t, scripts, 1)
assert.Equal(t, "Startup", scripts[0].DisplayName)
var envVars map[string]string
err = json.Unmarshal(subAgent.EnvironmentVariables.RawMessage, &envVars)
require.NoError(t, err)
assert.Equal(t, "vim", envVars["EDITOR"])
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, parentAgent.ID)
require.NoError(t, err)
require.Len(t, devcontainers, 1)
assert.True(t, devcontainers[0].SubagentID.Valid)
if useProtoIDs {
assert.Equal(t, subAgentID, devcontainers[0].SubagentID.UUID)
} else {
assert.Equal(t, subAgent.ID, devcontainers[0].SubagentID.UUID)
}
},
},
{
name: "MultipleDevcontainersWithSubagents",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{
{
Id: devcontainerID.String(),
Name: "frontend",
WorkspaceFolder: "/workspace/frontend",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "frontend-app", DisplayName: "Frontend"},
},
},
{
Id: devcontainerID2.String(),
Name: "backend",
WorkspaceFolder: "/workspace/backend",
SubagentId: uuid.New().String(),
Apps: []*sdkproto.App{
{Slug: "backend-app", DisplayName: "Backend"},
},
},
},
}},
},
expectSubAgentCount: 2,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, subAgents []database.WorkspaceAgent, _ bool) {
for _, subAgent := range subAgents {
apps, err := db.GetWorkspaceAppsByAgentID(ctx, subAgent.ID)
require.NoError(t, err)
require.Len(t, apps, 1, "each subagent should have exactly one app")
}
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, parentAgent.ID)
require.NoError(t, err)
require.Len(t, devcontainers, 2)
for _, dc := range devcontainers {
assert.True(t, dc.SubagentID.Valid, "devcontainer %s should have subagent", dc.Name)
}
},
},
{
name: "SubAgentDuplicateAppSlugs",
wantErr: `duplicate app slug, must be unique per template: "my-app"`,
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "with-dup-apps",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "my-app", DisplayName: "App 1"},
{Slug: "my-app", DisplayName: "App 2"},
},
}},
}},
},
},
{
name: "SubAgentInvalidAppSlug",
wantErr: `app slug "Invalid_Slug" does not match regex`,
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "with-invalid-app",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "Invalid_Slug", DisplayName: "Bad App"},
},
}},
}},
},
},
{
name: "SubAgentAppSlugConflictsWithParentAgent",
wantErr: `duplicate app slug, must be unique per template: "shared-app"`,
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Apps: []*sdkproto.App{
{Slug: "shared-app", DisplayName: "Parent App"},
},
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "dc",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "shared-app", DisplayName: "Child App"},
},
}},
}},
},
},
{
name: "SubAgentAppSlugConflictsBetweenSubagents",
wantErr: `duplicate app slug, must be unique per template: "conflicting-app"`,
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{
{
Id: devcontainerID.String(),
Name: "dc1",
WorkspaceFolder: "/workspace1",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{
{Slug: "conflicting-app", DisplayName: "App in DC1"},
},
},
{
Id: devcontainerID2.String(),
Name: "dc2",
WorkspaceFolder: "/workspace2",
SubagentId: uuid.New().String(),
Apps: []*sdkproto.App{
{Slug: "conflicting-app", DisplayName: "App in DC2"},
},
},
},
}},
},
},
{
name: "SubAgentInvalidSubagentID",
wantErr: "parse subagent id",
protoIDsOnly: true, // UUID parsing errors only occur with proto IDs
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "invalid-subagent",
WorkspaceFolder: "/workspace",
SubagentId: "not-a-valid-uuid",
Apps: []*sdkproto.App{{Slug: "app", DisplayName: "App"}},
}},
}},
},
},
{
name: "SubAgentInvalidAppID",
wantErr: "parse app uuid",
protoIDsOnly: true, // UUID parsing errors only occur with proto IDs
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "with-invalid-app-id",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
Apps: []*sdkproto.App{{Id: "not-a-uuid", Slug: "my-app", DisplayName: "App"}},
}},
}},
},
},
{
// This test verifies the backward-compatibility behavior where a
// devcontainer with a SubagentId but no apps, scripts, or envs does
// NOT create a subagent.
name: "SubAgentBackwardCompatNoResources",
resource: &sdkproto.Resource{
Name: "something",
Type: "aws_instance",
Agents: []*sdkproto.Agent{{
Id: agentID.String(),
Name: "dev",
Devcontainers: []*sdkproto.Devcontainer{{
Id: devcontainerID.String(),
Name: "no-resources",
WorkspaceFolder: "/workspace",
SubagentId: subAgentID.String(),
// Intentionally no Apps, Scripts, or Envs.
}},
}},
},
expectSubAgentCount: 0,
check: func(t *testing.T, db database.Store, parentAgent database.WorkspaceAgent, _ []database.WorkspaceAgent, _ bool) {
devcontainers, err := db.GetWorkspaceAgentDevcontainersByAgentID(ctx, parentAgent.ID)
require.NoError(t, err)
require.Len(t, devcontainers, 1)
assert.Equal(t, "no-resources", devcontainers[0].Name)
assert.False(t, devcontainers[0].SubagentID.Valid,
"devcontainer with SubagentId but no apps/scripts/envs should not have a subagent (backward compatibility)")
},
},
}
for _, tt := range tests {
for _, useProtoIDs := range []bool{false, true} {
if tt.protoIDsOnly && !useProtoIDs {
continue
}
name := tt.name
if useProtoIDs {
name += "/WithProtoIDs"
} else {
name += "/WithoutProtoIDs"
}
t.Run(name, func(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
job := dbgen.ProvisionerJob(t, db, nil, database.ProvisionerJob{})
var err error
if useProtoIDs {
err = insertWithProtoIDs(db, job.ID, tt.resource)
} else {
err = insert(db, job.ID, tt.resource)
}
if tt.wantErr != "" {
require.ErrorContains(t, err, tt.wantErr)
return
}
require.NoError(t, err)
resources, err := db.GetWorkspaceResourcesByJobID(ctx, job.ID)
require.NoError(t, err)
require.Len(t, resources, 1)
agents, err := db.GetWorkspaceAgentsByResourceIDs(ctx, []uuid.UUID{resources[0].ID})
require.NoError(t, err)
var parentAgent database.WorkspaceAgent
var subAgents []database.WorkspaceAgent
for _, agent := range agents {
if agent.ParentID.Valid {
subAgents = append(subAgents, agent)
} else {
parentAgent = agent
}
}
require.NotEqual(t, uuid.Nil, parentAgent.ID)
require.Len(t, subAgents, tt.expectSubAgentCount, "expected %d subagents", tt.expectSubAgentCount)
tt.check(t, db, parentAgent, subAgents, useProtoIDs)
})
}
}
})
}
+4 -3
@@ -3,6 +3,7 @@ package rbac
import (
"fmt"
"strings"
"sync/atomic"
"github.com/google/uuid"
"golang.org/x/xerrors"
@@ -239,16 +240,16 @@ func (z Object) WithGroupACL(groups map[string][]policy.Action) Object {
// TODO(geokat): similar to builtInRoles, this should ideally be
// scoped to a coderd rather than a global.
var workspaceACLDisabled bool
var workspaceACLDisabled atomic.Bool
// SetWorkspaceACLDisabled disables/enables workspace sharing for the
// deployment.
func SetWorkspaceACLDisabled(v bool) {
workspaceACLDisabled = v
workspaceACLDisabled.Store(v)
}
// WorkspaceACLDisabled returns true if workspace sharing is disabled
// for the deployment.
func WorkspaceACLDisabled() bool {
return workspaceACLDisabled
return workspaceACLDisabled.Load()
}
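The move from a plain `bool` to `sync/atomic.Bool` makes the setter and getter safe to call from concurrent goroutines. A minimal standalone sketch of the pattern (the demo `main` and goroutine loop are illustrative, not part of the change):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// workspaceACLDisabled mirrors the package-level flag above: an
// atomic.Bool can be Store'd and Load'ed concurrently without a data race.
var workspaceACLDisabled atomic.Bool

// SetWorkspaceACLDisabled disables/enables workspace sharing.
func SetWorkspaceACLDisabled(v bool) { workspaceACLDisabled.Store(v) }

// WorkspaceACLDisabled reports whether workspace sharing is disabled.
func WorkspaceACLDisabled() bool { return workspaceACLDisabled.Load() }

func main() {
	var wg sync.WaitGroup
	// Concurrent writers and readers: clean under `go run -race`,
	// which the previous plain-bool version was not.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			SetWorkspaceACLDisabled(i%2 == 0)
			_ = WorkspaceACLDisabled()
		}(i)
	}
	wg.Wait()
	SetWorkspaceACLDisabled(true)
	fmt.Println(WorkspaceACLDisabled()) // true
}
```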
+2 -2
@@ -177,7 +177,7 @@ func generateFromPrompt(prompt string) (TaskName, error) {
// Ensure display name is never empty
displayName = strings.ReplaceAll(name, "-", " ")
}
displayName = strutil.Capitalize(displayName)
displayName = strings.ToUpper(displayName[:1]) + displayName[1:]
return TaskName{
Name: taskName,
@@ -269,7 +269,7 @@ func generateFromAnthropic(ctx context.Context, prompt string, apiKey string, mo
// Ensure display name is never empty
displayName = strings.ReplaceAll(taskNameResponse.Name, "-", " ")
}
displayName = strutil.Capitalize(displayName)
displayName = strings.ToUpper(displayName[:1]) + displayName[1:]
return TaskName{
Name: name,
-13
@@ -49,19 +49,6 @@ func TestGenerate(t *testing.T) {
require.NotEmpty(t, taskName.DisplayName)
})
t.Run("FromPromptMultiByte", func(t *testing.T) {
t.Setenv("ANTHROPIC_API_KEY", "")
ctx := testutil.Context(t, testutil.WaitShort)
taskName := taskname.Generate(ctx, testutil.Logger(t), "über cool feature")
require.NoError(t, codersdk.NameValid(taskName.Name))
require.True(t, len(taskName.DisplayName) > 0)
// The display name must start with "Ü", not corrupted bytes.
require.Equal(t, "Über cool feature", taskName.DisplayName)
})
t.Run("Fallback", func(t *testing.T) {
// Ensure no API key
t.Setenv("ANTHROPIC_API_KEY", "")
+4 -1
@@ -1977,10 +1977,13 @@ func TestTemplateVersionPatch(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
user := coderdtest.CreateFirstUser(t, client)
version1 := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
version1 := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v1"
})
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version1.ID)
version2 := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil, func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.Name = "v2"
ctvr.TemplateID = template.ID
})
+19
@@ -4,6 +4,25 @@ import (
"golang.org/x/exp/constraints"
)
// List is a helper function to reduce boilerplate when converting slices of
// database types to slices of codersdk types.
// Only works if the function takes a single argument.
func List[F any, T any](list []F, convert func(F) T) []T {
return ListLazy(convert)(list)
}
// ListLazy returns the converter function for a list, but does not eval
// the input. Helpful for combining the Map and the List functions.
func ListLazy[F any, T any](convert func(F) T) func(list []F) []T {
return func(list []F) []T {
into := make([]T, 0, len(list))
for _, item := range list {
into = append(into, convert(item))
}
return into
}
}
// ToStrings works for any type where the base type is a string.
func ToStrings[T ~string](a []T) []string {
tmp := make([]string, 0, len(a))
+10 -22
@@ -5,7 +5,6 @@ import (
"strconv"
"strings"
"unicode"
"unicode/utf8"
"github.com/acarl005/stripansi"
"github.com/microcosm-cc/bluemonday"
@@ -54,7 +53,7 @@ const (
TruncateWithFullWords TruncateOption = 1 << 1
)
// Truncate truncates s to n runes.
// Truncate truncates s to n characters.
// Additional behaviors can be specified using TruncateOptions.
func Truncate(s string, n int, opts ...TruncateOption) string {
var options TruncateOption
@@ -64,8 +63,7 @@ func Truncate(s string, n int, opts ...TruncateOption) string {
if n < 1 {
return ""
}
runes := []rune(s)
if len(runes) <= n {
if len(s) <= n {
return s
}
@@ -74,18 +72,18 @@ func Truncate(s string, n int, opts ...TruncateOption) string {
maxLen--
}
var sb strings.Builder
// If we need to truncate to full words, find the last word boundary before n.
if options&TruncateWithFullWords != 0 {
// Convert the rune-safe prefix to a string, then find
// the last word boundary (byte offset within that prefix).
truncated := string(runes[:maxLen])
lastWordBoundary := strings.LastIndexFunc(truncated, unicode.IsSpace)
lastWordBoundary := strings.LastIndexFunc(s[:maxLen], unicode.IsSpace)
if lastWordBoundary < 0 {
_, _ = sb.WriteString(truncated)
} else {
_, _ = sb.WriteString(truncated[:lastWordBoundary])
// We cannot find a word boundary. At this point, we'll truncate the string.
// It's better than nothing.
_, _ = sb.WriteString(s[:maxLen])
} else { // lastWordBoundary <= maxLen
_, _ = sb.WriteString(s[:lastWordBoundary])
}
} else {
_, _ = sb.WriteString(string(runes[:maxLen]))
_, _ = sb.WriteString(s[:maxLen])
}
if options&TruncateWithEllipsis != 0 {
@@ -128,13 +126,3 @@ func UISanitize(in string) string {
}
return strings.TrimSpace(b.String())
}
// Capitalize returns s with its first rune upper-cased. It is safe for
// multi-byte UTF-8 characters, unlike naive byte-slicing approaches.
func Capitalize(s string) string {
r, size := utf8.DecodeRuneInString(s)
if size == 0 {
return s
}
return string(unicode.ToUpper(r)) + s[size:]
}
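The byte-based full-word path above can be sketched in isolation; `truncateFullWords` is a simplified stand-in (no ellipsis or option handling), cutting at the last space before `maxLen` and falling back to a hard cut when no boundary exists:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// truncateFullWords truncates s to at most maxLen bytes, preferring to cut
// at the last word boundary within that prefix.
func truncateFullWords(s string, maxLen int) string {
	if len(s) <= maxLen {
		return s
	}
	// Find the last space (byte offset) within the allowed prefix.
	i := strings.LastIndexFunc(s[:maxLen], unicode.IsSpace)
	if i < 0 {
		// No word boundary: hard-cut at maxLen bytes.
		return s[:maxLen]
	}
	return s[:i]
}

func main() {
	fmt.Println(truncateFullWords("foo bar baz", 6)) // foo
}
```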
-32
@@ -57,17 +57,6 @@ func TestTruncate(t *testing.T) {
{"foo bar", 1, "…", []strings.TruncateOption{strings.TruncateWithFullWords, strings.TruncateWithEllipsis}},
{"foo bar", 0, "", []strings.TruncateOption{strings.TruncateWithFullWords, strings.TruncateWithEllipsis}},
{"This is a very long task prompt that should be truncated to 160 characters. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.", 160, "This is a very long task prompt that should be truncated to 160 characters. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed do eiusmod tempor…", []strings.TruncateOption{strings.TruncateWithFullWords, strings.TruncateWithEllipsis}},
// Multi-byte rune handling.
{"日本語テスト", 3, "日本語", nil},
{"日本語テスト", 4, "日本語テ", nil},
{"日本語テスト", 6, "日本語テスト", nil},
{"日本語テスト", 4, "日本語…", []strings.TruncateOption{strings.TruncateWithEllipsis}},
{"🎉🎊🎈🎁", 2, "🎉🎊", nil},
{"🎉🎊🎈🎁", 3, "🎉🎊…", []strings.TruncateOption{strings.TruncateWithEllipsis}},
// Multi-byte with full-word truncation.
{"hello 日本語", 7, "hello…", []strings.TruncateOption{strings.TruncateWithFullWords, strings.TruncateWithEllipsis}},
{"hello 日本語", 8, "hello 日…", []strings.TruncateOption{strings.TruncateWithEllipsis}},
{"日本語 テスト", 4, "日本語", []strings.TruncateOption{strings.TruncateWithFullWords}},
} {
tName := fmt.Sprintf("%s_%d", tt.s, tt.n)
for _, opt := range tt.options {
@@ -118,24 +107,3 @@ func TestUISanitize(t *testing.T) {
})
}
}
func TestCapitalize(t *testing.T) {
t.Parallel()
tests := []struct {
input string
expected string
}{
{"", ""},
{"hello", "Hello"},
{"über", "Über"},
{"Hello", "Hello"},
{"a", "A"},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("%q", tt.input), func(t *testing.T) {
t.Parallel()
assert.Equal(t, tt.expected, strings.Capitalize(tt.input))
})
}
}
+4 -7
@@ -959,7 +959,7 @@ func claimPrebuild(
nextStartAt sql.NullTime,
ttl sql.NullInt64,
) (*database.Workspace, error) {
claimedID, err := claimer.Claim(ctx, db, now, owner.ID, name, templateVersionPresetID, autostartSchedule, nextStartAt, ttl)
claimedID, err := claimer.Claim(ctx, now, owner.ID, name, templateVersionPresetID, autostartSchedule, nextStartAt, ttl)
if err != nil {
// TODO: enhance this by clarifying whether this *specific* prebuild failed or whether there are none to claim.
return nil, xerrors.Errorf("claim prebuild: %w", err)
@@ -2353,13 +2353,10 @@ func (api *API) patchWorkspaceACL(rw http.ResponseWriter, r *http.Request) {
return
}
// Don't allow adding new groups or users to a workspace associated with a
// task. Sharing a task workspace without sharing the task itself is a broken
// half measure that we don't want to support right now. To be fixed!
if workspace.TaskID.Valid {
apiKey := httpmw.APIKey(r)
if _, ok := req.UserRoles[apiKey.UserID.String()]; ok {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "Task workspaces cannot be shared.",
Detail: "This workspace is managed by a task. Task sharing has not yet been implemented.",
Message: "You cannot change your own workspace sharing role.",
})
return
}
+68
@@ -5190,6 +5190,74 @@ func TestUpdateWorkspaceACL(t *testing.T) {
require.Len(t, cerr.Validations, 1)
require.Equal(t, cerr.Validations[0].Field, "user_roles")
})
//nolint:tparallel,paralleltest // Modifies package global rbac.workspaceACLDisabled.
t.Run("CannotChangeOwnRole", func(t *testing.T) {
// Save and restore the global to avoid affecting other tests.
prevWorkspaceACLDisabled := rbac.WorkspaceACLDisabled()
rbac.SetWorkspaceACLDisabled(false)
t.Cleanup(func() { rbac.SetWorkspaceACLDisabled(prevWorkspaceACLDisabled) })
dv := coderdtest.DeploymentValues(t)
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
adminClient := coderdtest.New(t, &coderdtest.Options{
IncludeProvisionerDaemon: true,
DeploymentValues: dv,
})
adminUser := coderdtest.CreateFirstUser(t, adminClient)
orgID := adminUser.OrganizationID
workspaceOwnerClient, workspaceOwner := coderdtest.CreateAnotherUser(t, adminClient, orgID)
sharedAdminClient, sharedAdminUser := coderdtest.CreateAnotherUser(t, adminClient, orgID)
tv := coderdtest.CreateTemplateVersion(t, adminClient, orgID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, adminClient, tv.ID)
template := coderdtest.CreateTemplate(t, adminClient, orgID, tv.ID)
ws := coderdtest.CreateWorkspace(t, workspaceOwnerClient, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, workspaceOwnerClient, ws.LatestBuild.ID)
ctx := testutil.Context(t, testutil.WaitMedium)
// Share the workspace with another user as admin.
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, ws.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
sharedAdminUser.ID.String(): codersdk.WorkspaceRoleAdmin,
},
})
require.NoError(t, err)
// The shared admin user should not be able to change their own role.
err = sharedAdminClient.UpdateWorkspaceACL(ctx, ws.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
sharedAdminUser.ID.String(): codersdk.WorkspaceRoleUse,
},
})
require.Error(t, err)
cerr, ok := codersdk.AsError(err)
require.True(t, ok)
require.Equal(t, http.StatusBadRequest, cerr.StatusCode())
require.Contains(t, cerr.Message, "You cannot change your own workspace sharing role")
// The workspace owner should also not be able to change their own role.
err = workspaceOwnerClient.UpdateWorkspaceACL(ctx, ws.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
workspaceOwner.ID.String(): codersdk.WorkspaceRoleUse,
},
})
require.Error(t, err)
cerr, ok = codersdk.AsError(err)
require.True(t, ok)
require.Equal(t, http.StatusBadRequest, cerr.StatusCode())
require.Contains(t, cerr.Message, "You cannot change your own workspace sharing role")
// But the workspace owner should still be able to change the shared admin's role.
err = workspaceOwnerClient.UpdateWorkspaceACL(ctx, ws.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
sharedAdminUser.ID.String(): codersdk.WorkspaceRoleUse,
},
})
require.NoError(t, err)
})
}
func TestDeleteWorkspaceACL(t *testing.T) {
+6 -2
@@ -346,9 +346,13 @@ type TaskLogEntry struct {
Time time.Time `json:"time" format:"date-time" table:"time,default_sort"`
}
// TaskLogsResponse contains the logs for a task.
// TaskLogsResponse contains task logs and metadata. When snapshot is false,
// logs are fetched live from the task app. When snapshot is true, logs are
// fetched from a stored snapshot captured during pause.
type TaskLogsResponse struct {
Logs []TaskLogEntry `json:"logs"`
Logs []TaskLogEntry `json:"logs"`
Snapshot bool `json:"snapshot,omitempty"`
SnapshotAt *time.Time `json:"snapshot_at,omitempty"`
}
// TaskLogs retrieves logs from the task app.
+4
@@ -372,6 +372,10 @@ type Feature struct {
// Below is only for features that use usage periods.
// SoftLimit is the soft limit of the feature, and is only used for showing
// included limits in the dashboard. No license validation or warnings are
// generated from this value.
SoftLimit *int64 `json:"soft_limit,omitempty"`
// UsagePeriod denotes that the usage is a counter that accumulates over
// this period (and most likely resets with the issuance of the next
// license).
+2 -3
@@ -12,9 +12,8 @@ import (
)
const (
LicenseExpiryClaim = "license_expires"
LicenseTelemetryRequiredErrorText = "License requires telemetry but telemetry is disabled"
LicenseManagedAgentLimitExceededWarningText = "You have built more workspaces with managed agents than your license allows."
LicenseExpiryClaim = "license_expires"
LicenseTelemetryRequiredErrorText = "License requires telemetry but telemetry is disabled"
)
type AddLicenseRequest struct {
+2 -2
@@ -142,19 +142,19 @@ deployment. They will always be available from the agent.
| `coderd_api_requests_processed_total` | counter | The total number of processed API requests | `code` `method` `path` |
| `coderd_api_websocket_durations_seconds` | histogram | Websocket duration distribution of requests in seconds. | `path` |
| `coderd_api_workspace_latest_build` | gauge | The latest workspace builds with a status. | `status` |
| `coderd_api_workspace_latest_build_total` | gauge | DEPRECATED: use coderd_api_workspace_latest_build instead | `status` |
| `coderd_insights_applications_usage_seconds` | gauge | The application usage per template. | `application_name` `slug` `template_name` |
| `coderd_insights_parameters` | gauge | The parameter usage per template. | `parameter_name` `parameter_type` `parameter_value` `template_name` |
| `coderd_insights_templates_active_users` | gauge | The number of active users of the template. | `template_name` |
| `coderd_license_active_users` | gauge | The number of active users. | |
| `coderd_license_errors` | gauge | The number of active license errors. | |
| `coderd_license_limit_users` | gauge | The user seats limit based on the active Coder license. | |
| `coderd_license_user_limit_enabled` | gauge | Returns 1 if the current license enforces the user limit. | |
| `coderd_license_warnings` | gauge | The number of active license warnings. | |
| `coderd_metrics_collector_agents_execution_seconds` | histogram | Histogram for duration of agents metrics collection in seconds. | |
| `coderd_oauth2_external_requests_rate_limit` | gauge | The total number of allowed requests per interval. | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_next_reset_unix` | gauge | Unix timestamp of the next interval | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_remaining` | gauge | The remaining number of allowed requests in this interval. | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_reset_in_seconds` | gauge | Seconds until the next interval | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_total` | gauge | DEPRECATED: use coderd_oauth2_external_requests_rate_limit instead | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_used` | gauge | The number of requests made in this interval. | `name` `resource` |
| `coderd_oauth2_external_requests_total` | counter | The total number of api calls made to external oauth2 providers. 'status_code' will be 0 if the request failed with no response. | `name` `source` `status_code` |
| `coderd_prebuilt_workspace_claim_duration_seconds` | histogram | Time to claim a prebuilt workspace by organization, template, and preset. | `organization_name` `preset_name` `template_name` |
-19
@@ -115,25 +115,6 @@ specified in your template in the `disable_params` search params list
[![Open in Coder](https://YOUR_ACCESS_URL/open-in-coder.svg)](https://YOUR_ACCESS_URL/templates/YOUR_TEMPLATE/workspace?disable_params=first_parameter,second_parameter)
```
### Security: consent dialog for automatic creation
When using `mode=auto` with prefilled `param.*` values, Coder displays a
security consent dialog before creating the workspace. This protects users
from malicious links that could provision workspaces with untrusted
configurations, such as dotfiles or startup scripts from unknown sources.
The dialog shows:
- A warning that a workspace is about to be created automatically from a link
- All prefilled `param.*` values from the URL
- **Confirm and Create** and **Cancel** buttons
The workspace is only created if the user explicitly clicks **Confirm and
Create**. Clicking **Cancel** falls back to the standard creation form where
all parameters can be reviewed manually.
![Consent dialog for automatic workspace creation](../../images/templates/auto-create-consent-dialog.png)
### Example: Kubernetes
For a full example of the Open in Coder flow in Kubernetes, check out
+15
@@ -9,6 +9,21 @@ The [Coder CLI](../../install/cli.md) and
token to authenticate. To generate a short-lived session token on behalf of your
account, visit the following URL: `https://coder.example.com/cli-auth`
### Retrieve the current session token
If you're already logged in with the CLI, you can retrieve your current session
token for use in scripts and automation:
```sh
coder login token
```
This is useful for passing your session token to other tools:
```sh
export CODER_SESSION_TOKEN=$(coder login token)
```
### Session Durations
By default, sessions last 24 hours and are automatically refreshed. You can
+147
@@ -0,0 +1,147 @@
# Client Configuration
Once AI Bridge is set up on your deployment, the AI coding tools used by your users will need to be configured to route requests via AI Bridge.
## Base URLs
Most AI coding tools allow the "base URL" to be customized. In other words, when a request is made to OpenAI's API from your coding tool, the API endpoint such as [`/v1/chat/completions`](https://platform.openai.com/docs/api-reference/chat) will be appended to the configured base. Therefore, instead of the default base URL of `https://api.openai.com/v1`, you'll need to set it to `https://coder.example.com/api/v2/aibridge/openai/v1`.
The exact configuration method varies by client — some use environment variables, others use configuration files or UI settings:
- **OpenAI-compatible clients**: Set the base URL (commonly via the `OPENAI_BASE_URL` environment variable) to `https://coder.example.com/api/v2/aibridge/openai/v1`
- **Anthropic-compatible clients**: Set the base URL (commonly via the `ANTHROPIC_BASE_URL` environment variable) to `https://coder.example.com/api/v2/aibridge/anthropic`
Replace `coder.example.com` with your actual Coder deployment URL.
## Authentication
Instead of distributing provider-specific API keys (OpenAI/Anthropic keys) to users, they authenticate to AI Bridge using their **Coder session token** or **Coder API key**:
- **OpenAI clients**: Users set `OPENAI_API_KEY` to their Coder session token or Coder API key
- **Anthropic clients**: Users set `ANTHROPIC_API_KEY` to their Coder session token or Coder API key
> [!NOTE]
> Only Coder-issued tokens are accepted at this time.
> Provider-specific API keys (such as OpenAI or Anthropic keys) will not work with AI Bridge.
Again, the exact environment variable or setting name may differ from tool to tool; consult your tool's documentation.
### Retrieving your session token
If you're logged in with the Coder CLI, you can retrieve your current session
token using [`coder login token`](../../reference/cli/login_token.md):
```sh
export ANTHROPIC_API_KEY=$(coder login token)
export ANTHROPIC_BASE_URL="https://coder.example.com/api/v2/aibridge/anthropic"
```
## Configuring In-Workspace Tools
AI coding tools running inside a Coder workspace, such as IDE extensions, can be configured to use AI Bridge.
While users can manually configure these tools with a long-lived API key, template admins can provide a more seamless experience by pre-configuring them. Admins can automatically inject the user's session token with `data.coder_workspace_owner.me.session_token` and the AI Bridge base URL into the workspace environment.
In the example below, Claude Code respects these environment variables and will route all requests via AI Bridge.
This is the fastest way to bring existing agents like Roo Code, Cursor, or Claude Code into compliance without adopting Coder Tasks.
```hcl
data "coder_workspace_owner" "me" {}
data "coder_workspace" "me" {}
resource "coder_agent" "dev" {
arch = "amd64"
os = "linux"
dir = local.repo_dir
env = {
ANTHROPIC_BASE_URL : "${data.coder_workspace.me.access_url}/api/v2/aibridge/anthropic",
ANTHROPIC_AUTH_TOKEN : data.coder_workspace_owner.me.session_token
}
... # other agent configuration
}
```
### Using Coder Tasks
Agents like Claude Code can be configured to route through AI Bridge in any template by pre-configuring the agent with the session token. [Coder Tasks](../tasks.md) is particularly useful for this pattern, providing a framework for agents to complete background development operations autonomously. To route agents through AI Bridge in a Coder Tasks template, pre-configure it to install Claude Code and configure it with the session token:
```hcl
data "coder_workspace_owner" "me" {}
data "coder_workspace" "me" {}
data "coder_task" "me" {}
resource "coder_agent" "dev" {
arch = "amd64"
os = "linux"
dir = local.repo_dir
env = {
ANTHROPIC_BASE_URL : "${data.coder_workspace.me.access_url}/api/v2/aibridge/anthropic",
ANTHROPIC_AUTH_TOKEN : data.coder_workspace_owner.me.session_token
}
... # other agent configuration
}
# See https://registry.coder.com/modules/coder/claude-code for more information
module "claude-code" {
count = data.coder_task.me.enabled ? data.coder_workspace.me.start_count : 0
source = "dev.registry.coder.com/coder/claude-code/coder"
version = ">= 4.0.0"
agent_id = coder_agent.dev.id
workdir = "/home/coder/project"
claude_api_key = data.coder_workspace_owner.me.session_token # Use the Coder session token to authenticate with AI Bridge
ai_prompt = data.coder_task.me.prompt
... # other claude-code configuration
}
# The coder_ai_task resource associates the task to the app.
resource "coder_ai_task" "task" {
count = data.coder_task.me.enabled ? data.coder_workspace.me.start_count : 0
app_id = module.claude-code[0].task_app_id
}
```
## External and Desktop Clients
You can also configure AI tools running outside of a Coder workspace, such as local IDE extensions or desktop applications, to connect to AI Bridge.
The configuration is the same: point the tool to the AI Bridge [base URL](#base-urls) and use a Coder API key for authentication.
Users can generate a long-lived API key from the Coder UI or CLI. Follow the instructions at [Sessions and API tokens](../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself) to create one.
## Compatibility
The table below shows tested AI clients and their compatibility with AI Bridge. Click each client name for vendor-specific configuration instructions. Report issues or share compatibility updates in the [aibridge](https://github.com/coder/aibridge) issue tracker.
| Client | OpenAI support | Anthropic support | Notes |
|-------------------------------------------------------------------------------------------------------------------------------------|----------------|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Claude Code](https://docs.claude.com/en/docs/claude-code/settings#environment-variables) | - | ✅ | Works out of the box and can be preconfigured in templates. |
| Claude Code (VS Code) | - | ✅ | May require signing in once; afterwards respects workspace environment variables. |
| Cursor | ❌ | ❌ | Support dropped for `v1/chat/completions` endpoints; `v1/responses` support is in progress [#16](https://github.com/coder/aibridge/issues/16) |
| [Roo Code](https://docs.roocode.com/features/api-configuration-profiles#creating-and-managing-profiles) | ✅ | ✅ | Use the **OpenAI Compatible** provider with the legacy format to avoid `/v1/responses`. |
| [Codex CLI](https://github.com/openai/codex/blob/main/docs/config.md#model_providers) | ⚠️ | N/A | • Use v0.58.0 (`npm install -g @openai/codex@0.58.0`). Newer versions have a [bug](https://github.com/openai/codex/issues/8107) breaking the request payload. <br/>• `gpt-5-codex` support is [in progress](https://github.com/coder/aibridge/issues/16). |
| [GitHub Copilot (VS Code)](https://code.visualstudio.com/docs/copilot/customization/language-models#_add-an-openaicompatible-model) | ✅ | ❌ | Requires the pre-release extension. Anthropic endpoints are not supported. |
| [Goose](https://block.github.io/goose/docs/getting-started/providers/#available-providers) | ❓ | ❓ | |
| [Goose Desktop](https://block.github.io/goose/docs/getting-started/providers/#available-providers) | ❓ | ✅ | |
| WindSurf | ❌ | ❌ | No option to override the base URL. |
| Sourcegraph Amp | ❌ | ❌ | No option to override the base URL. |
| Kiro | ❌ | ❌ | No option to override the base URL. |
| [Copilot CLI](https://github.com/github/copilot-cli/issues/104) | ❌ | ❌ | No support for custom base URLs and uses a `GITHUB_TOKEN` for authentication. |
| [Kilo Code](https://kilocode.ai/docs/ai-providers/openai-compatible) | ✅ | ✅ | Similar to Roo Code. |
| Gemini CLI | ❌ | ❌ | Not supported yet. |
| [Amazon Q CLI](https://aws.amazon.com/q/) | ❌ | ❌ | Limited to Amazon Q subscriptions; no custom endpoint support. |
Legend: ✅ works, ⚠️ limited support, ❌ not supported, ❓ not yet verified, - not applicable.
### Compatibility Overview
Most AI coding assistants can use AI Bridge, provided they support custom base URLs. Client-specific requirements vary:
- Some clients require specific URL formats (for example, removing the `/v1` suffix).
- Some clients proxy requests through their own servers, which limits compatibility.
- Some clients do not support custom base URLs.
See the table in the [compatibility](#compatibility) section above for the combinations we have verified and any known issues.
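The URL rules above can be sketched in shell; `coder.example.com` is a placeholder for your deployment's access URL:

```sh
# Derive AI Bridge base URLs from a Coder deployment URL.
# "coder.example.com" is a placeholder; substitute your access URL.
CODER_URL="https://coder.example.com"

OPENAI_BASE_URL="${CODER_URL}/api/v2/aibridge/openai/v1"
ANTHROPIC_BASE_URL="${CODER_URL}/api/v2/aibridge/anthropic"

# Some clients expect the OpenAI base without the /v1 suffix; strip it.
OPENAI_BASE_NO_V1="${OPENAI_BASE_URL%/v1}"

echo "$OPENAI_BASE_URL"
echo "$ANTHROPIC_BASE_URL"
```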
# Claude Code
## Configuration
Claude Code can be configured using environment variables.
* **Base URL**: `ANTHROPIC_BASE_URL` should point to `https://coder.example.com/api/v2/aibridge/anthropic`
* **API Key**: `ANTHROPIC_API_KEY` should be your [Coder session token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself).
### Pre-configuring in Templates
Template admins can pre-configure Claude Code for a seamless experience. Admins can automatically inject the user's Coder session token and the AI Bridge base URL into the workspace environment.
```hcl
module "claude-code" {
source = "registry.coder.com/coder/claude-code/coder"
version = "4.7.3"
agent_id = coder_agent.main.id
workdir = "/path/to/project" # Set to your project directory
enable_aibridge = true
}
```
### Coder Tasks
[Coder Tasks](../../tasks.md) provides a framework for agents to complete background development operations autonomously. Claude Code can be configured in your Tasks automatically:
```hcl
resource "coder_ai_task" "task" {
count = data.coder_workspace.me.start_count
app_id = module.claude-code.task_app_id
}
data "coder_task" "me" {}
module "claude-code" {
source = "registry.coder.com/coder/claude-code/coder"
version = "4.7.3"
agent_id = coder_agent.main.id
workdir = "/path/to/project" # Set to your project directory
ai_prompt = data.coder_task.me.prompt
# Route through AI Bridge (Premium feature)
enable_aibridge = true
}
```
## VS Code Extension
The Claude Code VS Code extension is also supported.
1. If pre-configured in the workspace environment variables (as shown above), it typically respects them.
2. You may need to sign in once; afterwards, it respects the workspace environment variables.
**References:** [Claude Code Settings](https://docs.claude.com/en/docs/claude-code/settings#environment-variables)
# Cline
Cline supports both OpenAI and Anthropic models and can be configured to use AI Bridge by setting up its API providers.
## Configuration
To configure Cline to use AI Bridge, follow these steps:
![Cline Settings](../../../images/aibridge/clients/cline-setup.png)
<div class="tabs">
### OpenAI Compatible
1. Open Cline in VS Code.
1. Go to **Settings**.
1. **API Provider**: Select **OpenAI Compatible**.
1. **Base URL**: Enter `https://coder.example.com/api/v2/aibridge/openai/v1`.
1. **OpenAI Compatible API Key**: Enter your **[Coder Session Token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)**.
1. **Model ID** (Optional): Enter the model you wish to use (e.g., `gpt-5.2-codex`).
![Cline OpenAI Settings](../../../images/aibridge/clients/cline-openai.png)
### Anthropic
1. Open Cline in VS Code.
1. Go to **Settings**.
1. **API Provider**: Select **Anthropic**.
1. **Anthropic API Key**: Enter your **Coder Session Token**.
1. **Base URL**: Enter `https://coder.example.com/api/v2/aibridge/anthropic` after checking **_Use custom base URL_**.
1. **Model ID** (Optional): Select your desired Claude model.
![Cline Anthropic Settings](../../../images/aibridge/clients/cline-anthropic.png)
</div>
**References:** [Cline Configuration](https://github.com/cline/cline)
# Codex CLI
Codex CLI can be configured to use AI Bridge by setting up a custom model provider.
## Configuration
> [!NOTE]
> When running Codex CLI inside a Coder workspace, use the configuration below to route requests through AI Bridge.
To configure Codex CLI to use AI Bridge, set the following configuration options in your Codex configuration file (e.g., `~/.codex/config.toml`):
```toml
[model_providers.aibridge]
name = "AI Bridge"
base_url = "${data.coder_workspace.me.access_url}/api/v2/aibridge/openai/v1"
env_key = "OPENAI_API_KEY"
wire_api = "responses"
[profiles.aibridge]
model_provider = "aibridge"
model = "gpt-5.2-codex"
```
Run Codex with the `aibridge` profile:
```bash
codex --profile aibridge
```
If configuring within a Coder workspace, you can also use the [Codex CLI](https://registry.coder.com/modules/coder-labs/codex) module and set the following variables:
```tf
module "codex" {
source = "registry.coder.com/coder-labs/codex/coder"
version = "~> 4.1"
agent_id = coder_agent.main.id
workdir = "/path/to/project" # Set to your project directory
enable_aibridge = true
}
```
## Authentication
To authenticate with AI Bridge, get your **[Coder session token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)** and set it in your environment:
```bash
export OPENAI_API_KEY="<your-coder-session-token>"
```
**References:** [Codex CLI Configuration](https://developers.openai.com/codex/config-advanced)
# Factory
Factory's Droid agent can be configured to use AI Bridge by setting up custom models for OpenAI and Anthropic.
## Configuration
1. Open `~/.factory/settings.json` (create it if it does not exist).
2. Add a `customModels` entry for each provider you want to use with AI Bridge.
3. Replace `coder.example.com` with your Coder deployment URL.
4. Use a **[Coder session token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)** for `apiKey`.
```json
{
"customModels": [
{
"model": "claude-4-5-opus",
"displayName": "Claude (Coder AI Bridge)",
"baseUrl": "https://coder.example.com/api/v2/aibridge/anthropic",
"apiKey": "<your-coder-session-token>",
"provider": "anthropic",
"maxOutputTokens": 8192
},
{
"model": "gpt-5.2-codex",
"displayName": "GPT (Coder AI Bridge)",
"baseUrl": "https://coder.example.com/api/v2/aibridge/openai/v1",
"apiKey": "<your-coder-session-token>",
"provider": "openai",
"maxOutputTokens": 16384
}
]
}
```
**References:** [Factory BYOK OpenAI & Anthropic](https://docs.factory.ai/cli/byok/openai-anthropic)
# Client Configuration
Once AI Bridge is set up on your deployment, the AI coding tools used by your users need to be configured to route requests via AI Bridge.
## Base URLs
Most AI coding tools allow the "base URL" to be customized. In other words, when a request is made to OpenAI's API from your coding tool, the API endpoint such as [`/v1/chat/completions`](https://platform.openai.com/docs/api-reference/chat) will be appended to the configured base. Therefore, instead of the default base URL of `https://api.openai.com/v1`, you'll need to set it to `https://coder.example.com/api/v2/aibridge/openai/v1`.
The exact configuration method varies by client — some use environment variables, others use configuration files or UI settings:
- **OpenAI-compatible clients**: Set the base URL (commonly via the `OPENAI_BASE_URL` environment variable) to `https://coder.example.com/api/v2/aibridge/openai/v1`
- **Anthropic-compatible clients**: Set the base URL (commonly via the `ANTHROPIC_BASE_URL` environment variable) to `https://coder.example.com/api/v2/aibridge/anthropic`
Replace `coder.example.com` with your actual Coder deployment URL.
## Authentication
Instead of distributing provider-specific API keys (OpenAI/Anthropic keys) to users, they authenticate to AI Bridge using their **Coder session token** or **API key**:
- **OpenAI clients**: Users set `OPENAI_API_KEY` to their Coder session token or API key
- **Anthropic clients**: Users set `ANTHROPIC_API_KEY` to their Coder session token or API key
> [!NOTE]
> Only Coder-issued tokens can authenticate users against AI Bridge.
> AI Bridge will use provider-specific API keys to [authenticate against upstream AI services](https://coder.com/docs/ai-coder/ai-bridge/setup#configure-providers).
Again, the exact environment variable or setting name may differ from tool to tool. See the list of [supported clients](#all-supported-clients) below and consult your tool's documentation for details.
### Retrieving your session token
[Generate a long-lived API token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself) via the Coder dashboard and use it to configure your AI coding tool:
```sh
export ANTHROPIC_API_KEY="your-coder-session-token"
export ANTHROPIC_BASE_URL="https://coder.example.com/api/v2/aibridge/anthropic"
```
## Compatibility
The table below shows tested AI clients and their compatibility with AI Bridge.
| Client | OpenAI | Anthropic | Notes |
|----------------------------------|--------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Claude Code](./claude-code.md) | - | ✅ | |
| [Codex CLI](./codex.md) | ✅ | - | |
| [OpenCode](./opencode.md) | ✅ | ✅ | |
| [Factory](./factory.md) | ✅ | ✅ | |
| [Cline](./cline.md) | ✅ | ✅ | |
| [Kilo Code](./kilo-code.md) | ✅ | ✅ | |
| [Roo Code](./roo-code.md) | ✅ | ✅ | |
| [VS Code](./vscode.md) | ✅ | ❌ | Only supports Custom Base URL for OpenAI. |
| [JetBrains IDEs](./jetbrains.md) | ✅ | ❌ | Works in Chat mode via "Bring Your Own Key". |
| [Zed](./zed.md) | ✅ | ✅ | |
| WindSurf | ❌ | ❌ | No option to override base URL. |
| Cursor | ❌ | ❌ | Override for OpenAI broken ([upstream issue](https://forum.cursor.com/t/requests-are-sent-to-incorrect-endpoint-when-using-base-url-override/144894)). |
| Sourcegraph Amp | ❌ | ❌ | No option to override base URL. |
| Kiro | ❌ | ❌ | No option to override base URL. |
| Gemini CLI | ❌ | ❌ | No Gemini API support. Upvote [this issue](https://github.com/coder/aibridge/issues/27). |
| Antigravity | ❌ | ❌ | No option to override base URL. |
*Legend: ✅ supported, ❌ not supported, - not applicable.*
## Configuring In-Workspace Tools
AI coding tools running inside a Coder workspace, such as IDE extensions, can be configured to use AI Bridge.
While users can manually configure these tools with a long-lived API key, template admins can provide a more seamless experience by pre-configuring them. Admins can automatically inject the user's session token with `data.coder_workspace_owner.me.session_token` and the AI Bridge base URL into the workspace environment.
In this example, Claude Code respects these environment variables and will route all requests via AI Bridge.
```hcl
data "coder_workspace_owner" "me" {}
data "coder_workspace" "me" {}
resource "coder_agent" "dev" {
arch = "amd64"
os = "linux"
dir = local.repo_dir
env = {
ANTHROPIC_BASE_URL : "${data.coder_workspace.me.access_url}/api/v2/aibridge/anthropic",
ANTHROPIC_AUTH_TOKEN : data.coder_workspace_owner.me.session_token
}
... # other agent configuration
}
```
## External and Desktop Clients
You can also configure AI tools running outside of a Coder workspace, such as local IDE extensions or desktop applications, to connect to AI Bridge.
The configuration is the same: point the tool to the AI Bridge [base URL](#base-urls) and use a Coder API key for authentication.
Users can generate a long-lived API key from the Coder UI or CLI. Follow the instructions at [Sessions and API tokens](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself) to create one.
## All Supported Clients
<children></children>
# JetBrains IDEs
JetBrains IDEs (IntelliJ IDEA, PyCharm, WebStorm, etc.) support AI Bridge via the ["Bring Your Own Key" (BYOK)](https://www.jetbrains.com/help/ai-assistant/use-custom-models.html#provide-your-own-api-key) feature.
## Prerequisites
* [**JetBrains AI Assistant**](https://www.jetbrains.com/help/ai-assistant/installation-guide-ai-assistant.html): Installed and enabled.
* **Authentication**: Your **[Coder session token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)**.
## Configuration
1. **Open Settings**: Go to **Settings** > **Tools** > **AI Assistant** > **Models & API Keys**.
1. **Configure Provider**: Go to **Third-party AI providers**.
1. **Choose Provider**: Choose **OpenAI-compatible**.
1. **URL**: `https://coder.example.com/api/v2/aibridge/openai/v1`
1. **API Key**: Paste your **[Coder session token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)**.
1. **Apply**: Click **Apply** and **OK**.
![JetBrains AI Assistant Settings](../../../images/aibridge/clients/jetbrains-ai-settings.png)
## Using the AI Assistant
1. Go back to **AI Chat** in the left sidebar and choose **Chat**.
1. In the Model dropdown, select the desired model (e.g., `gpt-5.2`).
![JetBrains AI Assistant Chat](../../../images/aibridge/clients/jetbrains-ai-chat.png)
You can now use the AI Assistant chat with the configured provider.
> [!NOTE]
>
> * JetBrains AI Assistant currently only supports OpenAI-compatible endpoints. There is an open [issue](https://youtrack.jetbrains.com/issue/LLM-22740) tracking support for Anthropic.
> * JetBrains AI Assistant may not support all models available via OpenAI's `/chat/completions` endpoint in Chat mode.
**References:** [Use custom models with JetBrains AI Assistant](https://www.jetbrains.com/help/ai-assistant/use-custom-models.html#provide-your-own-api-key)
# Kilo Code
Kilo Code allows you to configure providers via the UI and can be set up to use AI Bridge.
## Configuration
<div class="tabs">
### OpenAI Compatible
1. Open Kilo Code in VS Code.
1. Go to **Settings**.
1. **Provider**: Select **OpenAI**.
1. **Base URL**: Enter `https://coder.example.com/api/v2/aibridge/openai/v1`.
1. **API Key**: Enter your **[Coder Session Token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)**.
1. **Model ID**: Enter the model you wish to use (e.g., `gpt-5.2-codex`).
![Kilo Code OpenAI Settings](../../../images/aibridge/clients/kilo-code-openai.png)
### Anthropic
1. Open Kilo Code in VS Code.
1. Go to **Settings**.
1. **Provider**: Select **Anthropic**.
1. **Base URL**: Enter `https://coder.example.com/api/v2/aibridge/anthropic`.
1. **API Key**: Enter your **Coder Session Token**.
1. **Model ID**: Select your desired Claude model.
![Kilo Code Anthropic Settings](../../../images/aibridge/clients/kilo-code-anthropic.png)
</div>
**References:** [Kilo Code Configuration](https://kilocode.ai/docs/ai-providers/openai-compatible)
# OpenCode
OpenCode supports both OpenAI and Anthropic models and can be configured to use AI Bridge by setting custom base URLs for each provider.
## Configuration
You can configure OpenCode to connect to AI Bridge by setting the following configuration options in your OpenCode configuration file (e.g., `~/.config/opencode/opencode.json`):
```json
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"anthropic": {
"options": {
"baseURL": "https://coder.example.com/api/v2/aibridge/anthropic/v1"
}
},
"openai": {
"options": {
"baseURL": "https://coder.example.com/api/v2/aibridge/openai/v1"
}
}
}
}
```
## Authentication
To authenticate with AI Bridge, get your **[Coder session token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)** and replace `<your-coder-session-token>` in `~/.local/share/opencode/auth.json`:
```json
{
"anthropic": {
"type": "api",
"key": "<your-coder-session-token>"
},
"openai": {
"type": "api",
"key": "<your-coder-session-token>"
}
}
```
**References:** [OpenCode Documentation](https://opencode.ai/docs/providers/#config)
# Roo Code
Roo Code allows you to configure providers via the UI and can be set up to use AI Bridge.
## Configuration
<div class="tabs">
### OpenAI Compatible
1. Open Roo Code in VS Code.
1. Go to **Settings**.
1. **Provider**: Select **OpenAI**.
1. **Base URL**: Enter `https://coder.example.com/api/v2/aibridge/openai/v1`.
1. **API Key**: Enter your **[Coder Session Token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)**.
1. **Model ID**: Enter the model you wish to use (e.g., `gpt-5.2-codex`).
![Roo Code OpenAI Settings](../../../images/aibridge/clients/roo-code-openai.png)
### Anthropic
1. Open Roo Code in VS Code.
1. Go to **Settings**.
1. **Provider**: Select **Anthropic**.
1. **Base URL**: Enter `https://coder.example.com/api/v2/aibridge/anthropic`.
1. **API Key**: Enter your **Coder Session Token**.
1. **Model ID**: Select your desired Claude model.
![Roo Code Anthropic Settings](../../../images/aibridge/clients/roo-code-anthropic.png)
</div>
### Notes
* If you encounter issues with the **OpenAI** provider type, use **OpenAI Compatible** to ensure correct endpoint routing.
* Ensure your Coder deployment URL is reachable from your VS Code environment.
**References:** [Roo Code Configuration Profiles](https://docs.roocode.com/features/api-configuration-profiles#creating-and-managing-profiles)
# VS Code
VS Code's native chat can be configured to use AI Bridge with the GitHub Copilot Chat extension's custom language model support.
## Configuration
> [!IMPORTANT]
> You need the **Pre-release** version of the [GitHub Copilot Chat extension](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat) and [VS Code Insiders](https://code.visualstudio.com/insiders/).
1. Open command palette (`Ctrl+Shift+P` or `Cmd+Shift+P` on Mac) and search for _Chat: Open Language Models (JSON)_.
1. Paste the following JSON configuration, replacing `<your-coder-session-token>` with your **[Coder Session Token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)**:
```json
[
{
"name": "Coder",
"vendor": "customoai",
"apiKey": "<your-coder-session-token>",
"models": [
{
"name": "GPT 5.2",
"url": "https://coder.example.com/api/v2/aibridge/openai/v1/chat/completions",
"toolCalling": true,
"vision": true,
"thinking": true,
"maxInputTokens": 272000,
"maxOutputTokens": 128000,
"id": "gpt-5.2"
},
{
"name": "GPT 5.2 Codex",
"url": "https://coder.example.com/api/v2/aibridge/openai/v1/responses",
"toolCalling": true,
"vision": true,
"thinking": true,
"maxInputTokens": 272000,
"maxOutputTokens": 128000,
"id": "gpt-5.2-codex"
}
]
}
]
```
_Replace `coder.example.com` with your Coder deployment URL._
> [!NOTE]
> The setting names may change as the feature moves from pre-release to stable. Refer to the official documentation for the latest setting keys.
**References:** [GitHub Copilot - Bring your own language model](https://code.visualstudio.com/docs/copilot/customization/language-models#_add-an-openaicompatible-model)
# Zed
Zed IDE supports AI Bridge via its `language_models` configuration in `settings.json`.
## Configuration
To configure Zed to use AI Bridge, you need to edit your `settings.json` file. You can access this by pressing `Cmd/Ctrl + ,` or opening the command palette and searching for "Open Settings".
You can configure both Anthropic and OpenAI providers to point to AI Bridge.
```json
{
"language_models": {
"anthropic": {
"api_url": "https://coder.example.com/api/v2/aibridge/anthropic",
},
"openai": {
"api_url": "https://coder.example.com/api/v2/aibridge/openai/v1",
},
},
// optional settings to set favorite models for the AI
"agent": {
"favorite_models": [
{
"provider": "anthropic",
"model": "claude-sonnet-4-5-thinking-latest"
},
{
"provider": "openai",
"model": "gpt-5.2-codex"
}
],
},
}
```
*Replace `coder.example.com` with your Coder deployment URL.*
> [!NOTE]
> These settings and environment variables must be configured on the client side. Zed currently does not support reading these settings from remote configuration. See this [feature request](https://github.com/zed-industries/zed/discussions/47058) for more details.
## Authentication
Zed requires an API key for these providers. For AI Bridge, this key is your **[Coder Session Token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)**.
You can set this in two ways:
<div class="tabs">
### Zed UI
1. Open the **Assistant Panel** (right sidebar).
1. Click **Configuration** or the settings icon.
1. Select your provider ("Anthropic" or "OpenAI").
1. Paste your **[Coder Session Token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)** for the API Key.
### Environment Variables
1. Set `ANTHROPIC_API_KEY` and `OPENAI_API_KEY` to your **[Coder Session Token](../../../admin/users/sessions-tokens.md#generate-a-long-lived-api-token-on-behalf-of-yourself)** in the environment where you launch Zed.
</div>
**References:** [Configuring Zed - Language Models](https://zed.dev/docs/reference/all-settings#language-models)
## Next steps
- [Set up AI Bridge](./setup.md) on your Coder deployment
- [Configure AI clients](./client-config.md) to use AI Bridge
- [Configure MCP servers](./mcp.md) for tool access
- [Monitor usage and metrics](./monitoring.md) and [configure data retention](./setup.md#data-retention)
- [Reference documentation](./reference.md)
<children></children>
Where relevant, both streaming and non-streaming requests are supported.
#### Intercepted
- [`/v1/chat/completions`](https://platform.openai.com/docs/api-reference/chat/create)
- [`/v1/responses`](https://platform.openai.com/docs/api-reference/responses/create)
#### Passthrough
- [`/v1/models(/*)`](https://platform.openai.com/docs/api-reference/models/list)
### Anthropic
Coder's AI Governance Add-On for Premium licenses includes a set of features:
- [Boundaries](./boundary/agent-boundary.md): Process-level firewalls for agents, restricting which domains can be accessed by AI agents
- [Additional Tasks Use (via Agent Workspace Builds)](#how-coder-tasks-usage-is-measured): Additional allowance of Agent Workspace Builds for continued use of Coder Tasks.
## Who should use the AI Governance Add-On
The AI Governance Add-On is for teams that want to extend the platform to support AI-powered IDEs and coding agents in a controlled, observable way.
It's a good fit if you're:
- Rolling out AI-powered IDEs like Cursor and AI coding agents like Claude Code across teams
- Looking to centrally observe, audit, and govern AI activity in Coder Workspaces
If you already use other AI governance tools, such as third-party LLM gateways or vendor-managed policies, you can continue using them. Coder Workspaces can still serve as the backend for development environments and AI workflows, with or without the AI Governance Add-On.
## Use cases for AI Governance
Organizations adopting AI coding tools at scale often encounter operational and security challenges that traditional developer tooling doesn't address.
### Auditing AI activity across teams
Without centralized monitoring, teams have no way to understand how AI tools are being used across the organization. AI Bridge provides audit trails of prompts, token usage, and tool invocations, giving administrators insight into AI adoption patterns and potential issues.
### Restricting agent network and command access
AI agents can make arbitrary network requests, potentially accessing unauthorized services or exfiltrating data. They can also execute destructive commands within a workspace. Agent Boundaries enforce process-level policies that restrict which domains agents can reach and what actions they can perform, preventing unintended data exposure and destructive operations like `rm -rf`.
### Centralizing API key management
Managing individual API keys for AI providers across hundreds of developers creates security risks and administrative overhead. AI Bridge centralizes authentication so users authenticate through Coder, eliminating the need to distribute and rotate provider API keys.
### Standardizing MCP tools and servers
Different teams may use different MCP servers and tools with varying security postures. AI Bridge enables centralized MCP administration, allowing organizations to define approved tools and servers that all users can access.
### Measuring AI adoption and spend
Without usage data, it's hard to justify AI tooling investments or identify high-leverage use cases. AI Bridge captures metrics on token spend, adoption rates, and usage patterns to inform decisions about AI strategy.
## GA status and availability
Starting with Coder v2.30 (February 2026), AI Bridge and Agent Boundaries are generally available as part of the AI Governance Add-On.
If you've been experimenting with these features in earlier releases, you'll see a notification banner in your deployment in v2.30. This banner is a reminder that these features have moved out of beta and are now included with the AI Governance Add-On.
In v2.30, this notification is informational only. A future Coder release will require the add-on to continue using AI Bridge and Agent Boundaries.
To learn more about enabling the AI Governance Add-On, pricing, or trial options, reach out to your [Coder account team](https://coder.com/contact/sales).
## How Coder Tasks usage is measured
The usage metric used to measure Coder Tasks consumption is called **Agent Workspace Builds** (previously "managed agents").
Coder Tasks is an interface for running & managing coding agents such as Claude Code.
Coder Tasks is best for cases where the IDE is secondary, such as prototyping or running long-running background jobs. However, tasks run inside full workspaces so developers can [connect via an IDE](../user-guides/workspace-access) to take a task to completion.
> [!NOTE]
> Premium deployments include 1,000 Agent Workspace Builds for proof-of-concept use. To scale beyond this limit, the [AI Governance Add-On](./ai-governance.md) provides expanded usage pools that grow with your user count. [Contact us](https://coder.com/contact) to discuss pricing.
## Supported Agents (and Models)