* provisionerdserver: Expires prebuild user token for workspace, if it
exists, when regenerating session token.
* dbauthz: disallow the prebuilds user from creating API keys
* dbpurge: added functionality to expire stale API keys owned by the
prebuilds user
(cherry picked from commit 06cbb2890f)
THIS IS A SECURITY FIX - cherry picked from #19214
upgrade to go 1.24.6 to avoid https://github.com/golang/go/issues/74831
(CVE-2025-47907)
Also points to a new version of our `lib/pq` fork that works around the
Go issue, which should restore performance.
Continues to address https://github.com/coder/coder-desktop-macos/issues/201
Identical to the Windows command, except we don't write to stdio. We're retaining the existing logging system on macOS, where we push logs over the tunnel and use the OS logger.
I've tested that a build with this command works end-to-end with my new version of Coder Desktop macOS.
Also brings in the soft net isolation changes from `main` of coder/tailscale.
## Description
This PR improves the `coder templates presets` and `coder create` CLI
commands to include preset descriptions.
## Changes
* Added a `description` column to the `coder templates presets list` CLI
command.
* Fixed the `-o json` output for `coder templates presets list` to
correctly include and format data.
* Updated the `coder create` CLI command to display the preset's
description in the selection menu.
Follow-up from:
* https://github.com/coder/coder/pull/18910
* https://github.com/coder/coder/pull/18912
* https://github.com/coder/coder/pull/18977
**Add pod-level securityContext support to Coder Helm chart**
Adds `coder.podSecurityContext` field to enable pod-level security
settings, primarily to solve TLS certificate mounting permission issues.
**Problem**: When mounting TLS certificates from Kubernetes secrets, the
Coder process (UID 1000) cannot read the files due to restrictive
permissions.
**Solution**: Setting `podSecurityContext.fsGroup: 1000` ensures
Kubernetes sets group ownership of mounted volumes to GID 1000, allowing
the Coder process to read certificate files.
**Changes**:
- Added `podSecurityContext` field to values.yaml with documentation
- Updated `_coder.yaml` template to include pod-level security context
- Added test case and golden files
- Maintains backward compatibility (opt-in feature)
**Usage**:
```yaml
coder:
  podSecurityContext:
    fsGroup: 1000 # Enables TLS cert access
```
Fixes #19038
I just added support for rendering GFM alerts inside of numbered lists
in coder.com (see https://github.com/coder/coder.com/pull/328), and
noticed that these plain blockquotes should probably be alerts.
This should cover all the missing alerts. I found them by searching for
the regex `^\s*>\s` within `docs/**/*.md`.
Is `[!NOTE]` the correct type for these? Or do we want to use
tip/important/etc?
- @mtojek CONTRIBUTING.md
- @johnstcn support-bundle.md
- @matifali gateway.md
## Description
This PR adds support for `description` and `icon` fields to
`template_version_presets`. These fields will allow displaying richer
information for presets in the UI, improving the user experience when
creating a workspace.
Both fields are optional, non-nullable, and default to empty strings.
## Changes
* Database migration with the addition of `description VARCHAR(128)` and
`icon VARCHAR(256)` columns to the `template_version_presets` table.
* Updated the `CreateWorkspacePageView` in the UI
Note: UI changes will be addressed in a separate PR
## Description
This PR introduces a `--preset` flag for the `create` command to allow
users to apply a predefined preset to their workspace build.
## Changes
- The `--preset` flag on the `create` command integrates with the
parameter resolution logic and takes precedence over other sources
(e.g., CLI/env vars, last build, etc.).
* Added internal logic to ensure that preset parameters override other
parameter values during resolution.
- Updated tests and added new ones to cover these flows.
## Implementation logic
* If a template has presets and includes a default, the CLI will
automatically use the default when `--preset` is not specified.
* If a template has presets but no default, the CLI will prompt the user
to select one when `--preset` is not specified.
* If a template does not have presets, the CLI will not prompt the user
for a preset.
* If the user specifies a preset using the `--preset` flag, that preset
will be used.
* If the user passes `--preset None`, no preset will be applied.
This logic aligns with the behavior in the UI for consistency.
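The resolution rules above can be sketched as a small Go helper. This is a minimal illustration, not the actual CLI code; the type and function names are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// Preset is a minimal stand-in for a template version preset.
type Preset struct {
	Name    string
	Default bool
}

// resolvePreset mirrors the rules above: an explicit --preset flag wins,
// "none" (case-insensitive) disables presets, a default preset is used
// automatically when no flag is given, and otherwise the user is prompted.
func resolvePreset(flag string, presets []Preset, prompt func([]Preset) Preset) (*Preset, error) {
	if strings.EqualFold(flag, "none") {
		return nil, nil // --preset None: apply no preset
	}
	if flag != "" {
		for i := range presets {
			if presets[i].Name == flag {
				return &presets[i], nil
			}
		}
		return nil, fmt.Errorf("preset %q not found", flag)
	}
	if len(presets) == 0 {
		return nil, nil // template has no presets: never prompt
	}
	for i := range presets {
		if presets[i].Default {
			return &presets[i], nil // default preset used automatically
		}
	}
	p := prompt(presets) // presets exist but no default: ask the user
	return &p, nil
}

func main() {
	presets := []Preset{{Name: "small"}, {Name: "large", Default: true}}
	p, _ := resolvePreset("", presets, nil)
	fmt.Println(p.Name) // prints "large": the default wins when no flag is given
}
```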
```
> coder create --help
USAGE:
  coder create [flags] [workspace]

  Create a workspace

  - Create a workspace for another user (if you have permission):

     $ coder create <username>/<workspace_name>

OPTIONS:
  (...)
      --preset string, $CODER_PRESET_NAME
          Specify the name of a template version preset. Use 'none' to explicitly indicate that no preset should be used.
  (...)
  -y, --yes bool
          Bypass prompts.
```
## Breaking change
**Note:** This is a breaking change to the `create` CLI command. If a
template includes presets and the user does not provide a `--preset`
flag, the CLI will now prompt the user to select one. This behavior may
break non-interactive scripts or automated workflows.
Relates to PR: https://github.com/coder/coder/pull/18910 - please
consider both PRs together as they’re part of the same workflow
Relates to issue: https://github.com/coder/coder/issues/16594
Closes https://github.com/coder/internal/issues/711
When a `devcontainer.json` has been found and it has `.customizations.coder.autoStart = true`, we will now auto start this dev container.
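For reference, a minimal `devcontainer.json` opting into auto start would look like this (sketch derived from the `.customizations.coder.autoStart` path above):

```json
{
  "customizations": {
    "coder": {
      "autoStart": true
    }
  }
}
```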
# Add timeout support to workspace bash tool
This PR adds a timeout feature to the workspace bash tool, allowing
users to specify a maximum execution time for commands. Key changes
include:
- Added a `timeout_ms` parameter to control command execution time
(defaults to 60 seconds, with a maximum of 5 minutes)
- Implemented a new `executeCommandWithTimeout` function that properly
handles command timeouts
- Added proper output capturing during timeout scenarios, returning all
output collected before the timeout
- Updated documentation to explain the timeout feature and provide usage
examples
- Added comprehensive tests for the timeout functionality, including
integration tests
When a command times out, the tool now returns all captured output up to
that point along with a cancellation message, making it clear to users
what happened.
Signed-off-by: Thomas Kosiewski <tk@coder.com>
This pull request addresses a bug related to a nil pointer dereference
in the task reporting functionality.
### Bug Fixes and Error Handling:
* Updated `RegisterTools` in `mcp.go` to skip registering the
`ReportTask` tool in the remote MCP context when a task reporter is not
configured, preventing potential nil pointer dereference panics.
* Added a check in `toolsdk.go` to ensure task reporting dependencies
are available before invoking the reporter, returning an appropriate
error if not.
### Test Coverage:
* Added `TestReportTaskNilPointerDeref` in `toolsdk_test.go` to verify
that the system does not panic when task reporting dependencies are
missing and instead returns a clear error message.
* Added `TestReportTaskWithReporter` in `toolsdk_test.go` to validate
correct behavior when a task reporter is configured, ensuring the
handler processes the request as expected.
Signed-off-by: Thomas Kosiewski <tk@coder.com>
[//]: # (dependabot-start)
⚠️ **Dependabot is rebasing this PR** ⚠️
Rebasing might not happen immediately, so don't worry if this takes some
time.
Note: if you make any changes to this PR yourself, they will take
precedence over the rebase.
---
[//]: # (dependabot-end)
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Adds a preset selector in TasksPage with the default preset pre-selected and at the top of the list.
* If no default preset exists, the user is prompted to select one.
* If a preset defines an AI Prompt, its prompt will override the contents of the textarea.
## Description
This PR introduces a new `list presets` command to display the presets
associated with a given template.
By default, it displays the presets for the template's active version,
unless a `--template-version` flag is provided.
## Changes
* Added a new `list presets` command under `coder templates presets` to
display presets associated with a template.
* By default, the command lists presets from the template’s active
version.
* Users can override the default behavior by providing the
`--template-version` flag to target a specific version.
```
> coder templates presets list --help
USAGE:
  coder templates presets list [flags] <template>

  List all presets of the specified template. Defaults to the active template version.

OPTIONS:
  -O, --org string, $CODER_ORGANIZATION
          Select which organization (uuid or name) to use.

  -c, --column [name|parameters|default|desired prebuild instances] (default: name,parameters,default,desired prebuild instances)
          Columns to display in table output.

  -o, --output table|json (default: table)
          Output format.

      --template-version string
          Specify a template version to list presets for. Defaults to the active version.
```
Related PR: https://github.com/coder/coder/pull/18912 - please consider
both PRs together as they’re part of the same workflow
Relates to issue: https://github.com/coder/coder/issues/16594
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Added CLI commands to manage and list presets for specific template
versions, supporting tabular and JSON output.
* Introduced a new CLI subcommand group for template version presets,
including detailed help and documentation.
* Added support for displaying and managing the desired number of
prebuild instances for presets in CLI, API, and UI.
* **Documentation**
* Updated and expanded CLI and API documentation to describe new
commands, options, and the desired prebuild instances field in presets.
* Added new help output and reference files for template version presets
commands.
* **Bug Fixes**
* Ensured correct handling and display of the desired prebuild instances
property for presets across CLI, API, and UI.
* **Tests**
* Introduced end-to-end tests for listing template version presets,
covering scenarios with and without presets.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Updates `develop.sh`, `coder-dev.sh`, and `build_go.sh` to conditionally override `codersdk.SessionTokenCookie` for use in nested development scenarios.
Relates to https://github.com/coder/internal/issues/711
This PR implements a project discovery mechanism that searches for any
dev container projects and makes them visible in the UI so that they can
be started. To make the wording on the site more clear, "Rebuild" has
been changed to "Start" when there is no container associated with a
known dev container configuration. I've also made it so that the site
will show the dev container config path when there is no other name
available.
### Design decisions
Just want to ensure my reasoning for a few design decisions is noted
down:
- We only search for dev container configurations inside git
repositories
- We only search for these git repositories if they're at the top level
or a direct child of the agent directory.
This limited approach is to reduce the number of files we ultimately
walk when trying to find these projects. It makes sense to limit it to
only the agent directory, although I'm open to expanding how deep we
search.
- Refactors the bash tool to use `io.Discard` instead of nil to avoid panics.
- Enhances panic recovery in `codersdk/toolsdk/toolsdk.go` by adding stack trace information in development builds. When a panic occurs in a tool handler:
- In development builds: The error includes the full stack trace for easier debugging
- In production builds: A simpler error message is shown without the stack trace
This PR introduces new build reason values to identify what type of
connection triggered a workspace build, helping to troubleshoot
workspace-related issues.
## Database Migration
Added migration `000349_extend_workspace_build_reason.up.sql`, which
extends the `build_reason` enum with new values:
```
dashboard, cli, ssh_connection, vscode_connection, jetbrains_connection
```
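A migration of this shape plausibly looks like the sketch below. The enum and value names come from the description above; `IF NOT EXISTS` is a PostgreSQL convenience, not necessarily present in the actual file:

```sql
ALTER TYPE build_reason ADD VALUE IF NOT EXISTS 'dashboard';
ALTER TYPE build_reason ADD VALUE IF NOT EXISTS 'cli';
ALTER TYPE build_reason ADD VALUE IF NOT EXISTS 'ssh_connection';
ALTER TYPE build_reason ADD VALUE IF NOT EXISTS 'vscode_connection';
ALTER TYPE build_reason ADD VALUE IF NOT EXISTS 'jetbrains_connection';
```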
## Implementation
The build reason is specified through the API when creating new
workspace builds:
- Dashboard: Automatically sets reason to `dashboard` when users start
workspaces via the web interface
- CLI `start` command: Sets reason to `cli` when workspaces are started
via the command line
- CLI `ssh` command: Sets reason to `ssh_connection` when workspaces are
started due to SSH connections
- VS Code connections: Will be set to `vscode_connection` by the VS Code
extension through CLI hidden flag
(https://github.com/coder/vscode-coder/pull/550)
- JetBrains connections: Will be set to `jetbrains_connection` by the
JetBrains Toolbox plugin
(https://github.com/coder/coder-jetbrains-toolbox/pull/150) and the
JetBrains Gateway extension
(https://github.com/coder/jetbrains-coder/pull/561)
## UI Changes
* Tooltip with reason in Build history
<img width="309" height="457" alt="image"
src="https://github.com/user-attachments/assets/bde8440b-bf3b-49a1-a244-ed7e8eb9763c"
/>
* Reason in Audit Logs Row tooltip
<img width="906" height="237" alt="image"
src="https://github.com/user-attachments/assets/ebbb62c7-cf07-4398-afbf-323c83fb6426"
/>
<img width="909" height="188" alt="image"
src="https://github.com/user-attachments/assets/1ddbab07-44bf-4dee-8867-b4e2cd56ae96"
/>
- Adds a query for counting managed agent workspace builds between two
timestamps
- The "Actual" field in the feature entitlement for managed agents is
now populated with the value read from the database
- The wsbuilder package now validates AI agent usage against the limit
when a license is installed
Closes coder/internal#777
Simplifies the title to reduce customer confusion as requested by
@kylejaggi.
The DX platform covers all products, not just Data Cloud. This change
makes the documentation clearer for customers who might get confused
about which DX product the integration refers to.
**Changes:**
- Updated page title from "DX Data Cloud" to "DX" in
`docs/admin/integrations/dx-data-cloud.md`
**Testing:**
- Verified the markdown renders correctly
- No functional changes, documentation-only update
---------
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: bpmct <22407953+bpmct@users.noreply.github.com>
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Style**
* Removed the "beta" badge from various workspace and template settings
pages. The "Dynamic parameters" feature no longer displays a beta label
in the interface.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
# Add SSH Command Execution Tool for Coder Workspaces
This PR adds a new AI tool `coder_workspace_ssh_exec` that allows executing commands in Coder workspaces via SSH. The tool provides functionality similar to the `coder ssh <workspace> <command>` CLI command.
Key features:
- Executes commands in workspaces via SSH and returns the output and exit code
- Automatically starts workspaces if they're stopped
- Waits for the agent to be ready before executing commands
- Trims leading and trailing whitespace from command output
- Supports various workspace identifier formats:
- `workspace` (uses current user)
- `owner/workspace`
- `owner--workspace`
- `workspace.agent` (specific agent)
- `owner/workspace.agent`
The implementation includes:
- A new tool definition with schema and handler
- Helper functions for workspace and agent discovery
- Workspace name normalization to handle different input formats
- Comprehensive test coverage including integration tests
This tool enables AI assistants to execute commands in user workspaces, making it possible to automate tasks and provide more interactive assistance.
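The identifier formats listed above suggest a normalization step along these lines. This is an illustrative sketch, not the tool's actual helper:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeWorkspaceInput accepts "workspace", "owner/workspace",
// "owner--workspace", and an optional ".agent" suffix, returning
// owner, workspace, and agent ("" when absent). The current user is
// assumed when no owner is given.
func normalizeWorkspaceInput(input, currentUser string) (owner, workspace, agent string) {
	// Split off an agent suffix first: "workspace.agent".
	if i := strings.LastIndex(input, "."); i >= 0 {
		input, agent = input[:i], input[i+1:]
	}
	// Treat "owner--workspace" like "owner/workspace".
	input = strings.Replace(input, "--", "/", 1)
	if i := strings.Index(input, "/"); i >= 0 {
		return input[:i], input[i+1:], agent
	}
	return currentUser, input, agent
}

func main() {
	o, w, a := normalizeWorkspaceInput("alice--dev.main", "me")
	fmt.Println(o, w, a) // prints "alice dev main"
}
```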
<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit
* **New Features**
* Introduced the ability to execute bash commands inside a Coder workspace via SSH, supporting multiple workspace identification formats.
* **Tests**
* Added comprehensive unit and integration tests for executing bash commands in workspaces, including input validation, output handling, and error scenarios.
* **Chores**
* Registered the new bash execution tool in the global tools list.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Updates the dogfood envbuilder template to pull modules from
`dev.registry.coder.com` instead of `registry.coder.com` to match the
regular dogfood template.
This ensures consistency between both dogfood templates and uses the
development registry for testing new module versions.
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: matifali <10648092+matifali@users.noreply.github.com>
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This PR starts running the full test suite on Windows and macOS in the
nightly gauntlet, since the regular CI only runs agent and cli tests.
The full suite is too slow to be run on every PR.
# Replace SVG with external logo file in OAuth2 authorization page
This PR replaces the inline SVG logo in the OAuth2 authorization page with a reference to an external SVG file. The change:
1. Adds a new `logo.svg` file in the static directory with the Coder logo
2. Updates the OAuth2 authorization page to use this external file instead of embedding the SVG directly
This approach improves maintainability by centralizing the logo in a single file and reduces duplication in the codebase.
# Refactor OAuth2 Provider Authorization Flow
This PR refactors the OAuth2 provider authorization flow by:
1. Removing the `authorizeMW` middleware and directly implementing its functionality in the `ShowAuthorizePage` handler
2. Simplifying function signatures by removing unnecessary parameters:
- Removed `db` parameter from `ShowAuthorizePage`
- Removed `accessURL` parameter from `ProcessAuthorize`
3. Changing the redirect status code in `ProcessAuthorize` from 307 (Temporary Redirect) to 302 (Found) to improve compatibility with external OAuth2 apps and browsers. (Technical explanation: we previously replied to the POST request with a 307, so the browser repeated the redirect as a POST; the redirect back to the `redirect_uri` must be a GET, and a 302 makes browsers switch the method to GET.)
The changes maintain the same functionality while simplifying the code and improving compatibility with external systems.
# Enhanced OAuth2 and MCP Compliance for API Authentication
This PR improves OAuth2 and MCP (Model Context Protocol) compliance by:
1. Adding RFC 9728 compliant `WWW-Authenticate` headers with resource
metadata URLs
2. Passing the configured `AccessURL` to API key middleware for proper
audience validation
3. Creating specialized CORS handling for OAuth2 and MCP endpoints with
appropriate headers
4. Making the `state` parameter optional in OAuth2 authorization
requests
These changes ensure proper OAuth2 token audience validation against the
configured access URL and improve interoperability with OAuth2 clients
by providing better error responses and metadata discovery.
Signed-off-by: Thomas Kosiewski <tk@coder.com>
No issue to link – I'm basically pushing some updates upstream from the
version of the hook I copied over for the Registry website.
## Changes made
- Updated debounce functions to have input validation for timeouts
- Updated `useDebouncedValue` to flush state syncs immediately if
timeout value is `0`
- Updated tests to reflect changes
- Cleaned up some comments and parameter names to make things more clear
draft: add contribution docs for modules and templates individually to
be referenced in coder docs manifest.
---------
Co-authored-by: Atif Ali <atif@coder.com>
Note that enforcement and checking usage will come in a future PR.
This feature is implemented differently from existing features in a few ways.
It's highly recommended that reviewers read:
- This document which outlines the methods we could've used for license
enforcement:
https://www.notion.so/coderhq/AI-Agent-License-Enforcement-21ed579be59280c088b9c1dc5e364ee8
- Phase 0 of the actual RFC document:
https://www.notion.so/coderhq/Usage-based-Billing-AI-b-210d579be592800eb257de7eecd2d26d
### Multiple features in the license, a single feature in codersdk
Firstly, the feature is represented as a single feature in the codersdk
world, but is represented with multiple features in the license.
E.g. in the license you may have:
```
{
  "features": {
    "managed_agent_limit_soft": 100,
    "managed_agent_limit_hard": 200
  }
}
```
But the entitlements endpoint will return a single feature:
```
{
  "features": {
    "managed_agent_limit": {
      "limit": 200,
      "soft_limit": 100
    }
  }
}
```
This is required because our rigid license parsing uses a `map[string]int64` for features. To avoid requiring all customers to upgrade to new licenses, the decision was made to use two features and merge them into one. Older Coder deployments will parse this feature (from new licenses) as two separate features, which is harmless because those feature names are not used anywhere.
The reason we want to differentiate between a "soft" and "hard" limit is
so we can show admins how much of the usage is "included" vs. how much
they can use before they get hard cut-off.
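The merge described above can be sketched as follows; the struct and function names are illustrative, not Coder's actual types:

```go
package main

import "fmt"

// AgentLimit is a hypothetical stand-in for the single codersdk-side
// feature built from the two license-side int64 features.
type AgentLimit struct {
	Limit     int64 // hard limit: usage is cut off here
	SoftLimit int64 // "included" usage shown to admins
}

// mergeAgentLimit combines the two license features into one entitlement.
func mergeAgentLimit(features map[string]int64) AgentLimit {
	return AgentLimit{
		Limit:     features["managed_agent_limit_hard"],
		SoftLimit: features["managed_agent_limit_soft"],
	}
}

func main() {
	lic := map[string]int64{
		"managed_agent_limit_soft": 100,
		"managed_agent_limit_hard": 200,
	}
	fmt.Printf("%+v\n", mergeAgentLimit(lic)) // {Limit:200 SoftLimit:100}
}
```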
### Usage period features take precedence based on license issuance time
The second major difference to other features is that "usage period"
features such as `managed_agent_limit` will now be primarily compared by
the `iat` (issued at) claim of the license they come from. This differs
from previous features. The reason this was done was so we could reduce
limits with newer licenses, which the current comparison code does not
allow for.
This effectively means if you have two active licenses:
- `iat`: 2025-07-14, `managed_agent_limit_soft`: 100,
`managed_agent_limit_hard`: 200
- `iat`: 2025-07-15, `managed_agent_limit_soft`: 50,
`managed_agent_limit_hard`: 100
Then the resulting `managed_agent_limit` entitlement will come from the
second license, even though the values are smaller than another valid
license. The existing comparison code would prefer the first license
even though it was issued earlier.
### Usage period features will count usage between the start and end
dates of the license
Existing limit features, like the user limit, just measure the current
usage value of the feature. The active user count is a gauge that goes
up and down, whereas agent usage can only be incremented, so it doesn't
make sense to use a continually incrementing counter forever and ever
for managed agents.
For managed agent limit, we count the usage between `nbf` (not before)
and `exp` (expires at) of the license that the entitlement comes from.
In the example above, we'd use the not-before date and expiry of the second license as this date range.
This essentially means, when you get a new license, the usage resets to
zero.
The actual usage counting code will be implemented in a follow-up PR.
### Managed agent limit has a default entitlement value
Temporarily (until further notice), licenses with `feature_set` set to `premium` will be given a default limit:
- Soft limit: `800 * user_limit`
- Hard limit: `1000 * user_limit`
"Enterprise" licenses do not get any default limit and are not entitled
to use the feature.
Unlicensed customers (e.g. OSS) will be permitted to use the feature as
much as they want without limits. This will be implemented when the
counting code is implemented in a follow-up PR.
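The default can be sketched as a small helper (assumed, not Coder's actual code), derived from the multipliers above:

```go
package main

import "fmt"

// defaultAgentLimits is a hypothetical helper: premium licenses without
// explicit agent-limit features receive limits derived from the user
// limit (soft = 800x, hard = 1000x).
func defaultAgentLimits(userLimit int64) (soft, hard int64) {
	return 800 * userLimit, 1000 * userLimit
}

func main() {
	soft, hard := defaultAgentLimits(50)
	fmt.Println(soft, hard) // 40000 50000
}
```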
Closes https://github.com/coder/internal/issues/760
The agentsdk currently does a remap of the DERP map to change the
EmbeddedRelay node's URL to match the agent's access URL.
This PR makes changes to the `workspacesdk` (used by clients like the
CLI) and `vpn` (used by Coder Desktop) to match this behavior.
This enables us to try Coder clients in dogfood over a VPN without changing the global access URL.
## Description
This PR updates the UI to avoid rendering workspace schedule settings
(autostop, autostart, etc.) for prebuilt workspaces. Instead, it
displays an informational message with a link to the relevant
documentation.
## Changes
* Introduce `IsPrebuild` parameter to `convertWorkspace` to indicate
whether the workspace is a prebuild.
* Prevent the Workspace Schedule settings form from rendering in the UI
for prebuilt workspaces.
* Display an info alert with a link to documentation when viewing a
prebuilt workspace.
<img width="2980" height="864" alt="Screenshot 2025-07-10 at 13 16 13"
src="https://github.com/user-attachments/assets/5f831c21-50bb-4e05-beea-dbeb930ddff8"
/>
Relates with: https://github.com/coder/coder/pull/18762
---------
Co-authored-by: BrunoQuaresma <bruno_nonato_quaresma@hotmail.com>
## Description
This PR fixes a flaky test in
`TestDelete/Prebuilt_workspace_delete_permissions`:
https://github.com/coder/internal/issues/764
Previously, all subtests used the same context created at the top level.
Since the subtests run in parallel, they could run for too long and
cause the shared context to expire. This sometimes led to context
deadline exceeded errors, especially during the `testutil.Eventually`
check for running prebuilt workspaces.
The fix is to create a fresh context per subtest, ensuring they are
isolated and not prematurely cancelled due to other subtests' durations.
Enhances the Performance efficiency section in the validated
architectures documentation with specific instance type recommendations
for AWS, Azure, and GCP.
**Changes:**
- Added recommended instance types for small, medium, and large
deployments across all three major cloud providers
- Included guidance on avoiding burstable instances (t-family, B-series)
for production workloads
- Added note about CPU baseline limitations for burstable instances
This addresses customer questions about appropriate database instance
sizing.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: dannykopping <373762+dannykopping@users.noreply.github.com>
Co-authored-by: Danny Kopping <dannykopping@gmail.com>
Many of the issues with the copy on #18739 were because I blindly copied from the audit logs page. This PR adds Edward's copy suggestions from that PR to the audit logs page.
[preview](https://coder.com/docs/@ethan-improve-audit-logs-copy/admin/security/audit-logs)
I've included this in the PR stack, as the previous PR modifies the auto-gen docs for audit logs.
The main goal of this PR is to remove Workspace Apps and Workspace Agents from the auto-generated audit log documentation, that incorrectly claims they are audited resources (no longer true with the addition of the connection log).
Though I believe we haven't touched any codepaths for returning audit logs, this PR also adds a test that ensures we continue to return *existing* connection, disconnect and open events correctly from the audit log API.
### Breaking change (changelog note):
>With new connection events appearing in the Connection Log, connection events older than 90 days will now be deleted from the Audit Log. If you require this legacy data, we recommend querying it from the REST API or making a backup of the database/these events before upgrading your Coder deployment. Please see the PR for details on what exactly will be deleted.
Of note is that there are currently no plans to delete connection events from the Connection Log.
### Context
This is the fifth PR for moving connection events out of the audit log.
In previous PRs:
- **New** connection logs have been routed to the `connection_logs` table. They will *not* appear in the audit log.
- These new connection logs are served from the new `/api/v2/connectionlog` endpoint.
In this PR:
- We'll now clean existing connection events out of the audit log if they are older than 90 days. We do this in batches of 1000, every 10 minutes.
The criteria for deletion is simple:
```
WHERE
(
action = 'connect'
OR action = 'disconnect'
OR action = 'open'
OR action = 'close'
)
AND "time" < @before_time::timestamp with time zone
```
where `@before_time` is currently configured to 90 days in the past.
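The batching behavior can be sketched as follows. This is an illustrative simulation, not the actual dbpurge code: `deleteBatch` stands in for the SQL `DELETE` above, and the real job spaces batches 10 minutes apart rather than running them back-to-back.

```go
package main

import "fmt"

const batchSize = 1000

// purge repeatedly deletes matching rows in batches of batchSize until
// a batch comes back short, meaning no more rows match. deleteBatch
// stands in for the SQL DELETE shown above, limited to `limit` rows.
func purge(deleteBatch func(limit int) int) int {
	total := 0
	for {
		n := deleteBatch(batchSize)
		total += n
		if n < batchSize {
			return total
		}
	}
}

func main() {
	remaining := 2500 // hypothetical number of matching audit log rows
	deleteBatch := func(limit int) int {
		n := limit
		if remaining < limit {
			n = remaining
		}
		remaining -= n
		return n
	}
	fmt.Println(purge(deleteBatch)) // 2500: batches of 1000, 1000, 500
}
```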
Future PRs:
- Write documentation for the endpoint / feature
This is the fourth PR for moving connection events out of the audit log.
This PR adds `/connectionlog` to the frontend. This page is identical in structure to the audit log, but with different filters and contents.
The connection log lists sessions, and the time they start. If we support tracking the end time of a session, and we've received a disconnect event for that session, the end timestamp is also included.
Demo:
https://github.com/user-attachments/assets/e0fff799-0ed6-45f7-a8c0-237839659ef9
<img width="346" alt="image" src="https://github.com/user-attachments/assets/6de29945-55c2-4fe5-9a4f-d42e476ded25" />
<img width="184" alt="image" src="https://github.com/user-attachments/assets/e83234bc-4d9d-4f71-b668-9256a600659c" />
Since the styling is identical to that of the audit log, I've continued to use MUI table components. When the audit log is migrated off MUI/restyled, this table can be too, relatively easily.
Future PRs:
- Write a query to delete old events from the audit log, call it from dbpurge.
- Write documentation for the endpoint / feature
This is the third PR for moving connection events out of the audit log.
This PR populates `count` on `ConnectionLogResponse` using a separate query, to preemptively mitigate the issue described in #17689. It's structurally identical to a portion of https://github.com/coder/coder/pull/18600, but for the connection log instead of the audit log.
Future PRs:
- Implement a table in the Web UI for viewing connection logs.
- Write a query to delete old events from the audit log, call it from dbpurge.
- Write documentation for the endpoint / feature
This is the second PR for moving connection events out of the audit log.
This PR:
- Adds the `/api/v2/connectionlog` endpoint
- Adds filtering for `GetAuthorizedConnectionLogsOffset` and thus the endpoint.
There are quite a few filters, but I was aiming for feature parity with the audit log.
1. `organization:<id|name>`
2. `workspace_owner:<username>`
3. `workspace_owner_email:<email>`
4. `type:<ssh|vscode|jetbrains|reconnecting_pty|workspace_app|port_forwarding>`
5. `username:<username>`
- Only includes web-based connection events (workspace apps, web port forwarding) as only those include user metadata.
6. `user_email:<email>`
7. `connected_after:<time>`
8. `connected_before:<time>`
9. `workspace_id:<id>`
10. `connection_id:<id>`
- If you have one snapshot of the connection log, and some sessions are ongoing in that snapshot, you could use this filter to check if they've been closed since.
11. `status:<connected|disconnected>`
- If `connected`, only sessions with a null `close_time` are returned; if `disconnected`, only those with a non-null `close_time`. If the filter is omitted, both are returned.
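For illustration, the filters above combine into a single space-separated `q` parameter. This sketch assumes the connection log uses the same search syntax as the existing audit log endpoint, and the username is hypothetical:

```go
package main

import (
	"fmt"
	"net/url"
)

// connectionLogQuery builds a request path for /api/v2/connectionlog.
// The space-separated `filter:value` syntax is assumed to mirror the
// audit log's `q` parameter.
func connectionLogQuery(filters string) string {
	q := url.Values{}
	q.Set("q", filters)
	return "/api/v2/connectionlog?" + q.Encode()
}

func main() {
	// "alice" is a hypothetical workspace owner.
	fmt.Println(connectionLogQuery("type:workspace_app status:connected workspace_owner:alice"))
}
```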
Future PRs:
- Populate `count` on `ConnectionLogResponse` using a separate query (to preemptively mitigate the issue described in #17689)
- Implement a table in the Web UI for viewing connection logs.
- Write a query to delete old events from the audit log, call it from dbpurge.
- Write documentation for the endpoint / feature (including these filters)
### Breaking Change (changelog note):
> User connections to workspaces, and the opening of workspace apps or ports will no longer create entries in the audit log. Those events will now be included in the 'Connection Log'.
Please see the 'Connection Log' page in the dashboard, and the Connection Log [documentation](https://coder.com/docs/admin/monitoring/connection-logs) for details. Those with permission to view the Audit Log will also be able to view the Connection Log. The new Connection Log has the same licensing restrictions as the Audit Log, and requires a Premium Coder deployment.
### Context
This is the first PR of a few for moving connection events out of the audit log, and into a new database table and web UI page called the 'Connection Log'.
This PR:
- Creates the new table
- Adds and tests queries for inserting and reading, including reading with an RBAC filter.
- Implements the corresponding RBAC changes, such that anyone who can view the audit log can read from the table
- Implements, under the enterprise package, a `ConnectionLogger` abstraction to replace the `Auditor` abstraction for these logs. (No-op'd in AGPL, like the `Auditor`)
- Routes SSH connection and Workspace App events into the new `ConnectionLogger`
- Updates all existing tests to check the values of the `ConnectionLogger` instead of the `Auditor`.
Future PRs:
- Add filtering to the query
- Add an enterprise endpoint to query the new table
- Write a query to delete old events from the audit log, call it from dbpurge.
- Implement a table in the Web UI for viewing connection logs.
> [!NOTE]
> The PRs in this stack obviously won't be (completely) atomic. Whilst they'll each pass CI, the stack is designed to be merged all at once. I'm splitting them up for the sake of those reviewing, and so changes can be reviewed as early as possible. Despite this, it's really hard to make this PR any smaller than it already is. I'll be keeping it in draft until it's actually ready to merge.
Removes retries / reruns from our CI as they are masking flaky tests
that don't get fixed.
Also limits the Windows and macOS postgresql tests to the CLI and Agent
for now, since we don't officially support coderd on these platforms and
they are particularly flaky.
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This (week-old) test was failing in my workspace because I use fish shell.
I really do not like that Fish shell does not support `$?`, but I do like Fish shell! We have a few people at Coder who use it and who would appreciate this change.
**Changes:**
- Use [websocket-ts](https://www.npmjs.com/package/websocket-ts) to have
auto reconnection out of the box 🙏
- Update the disconnected alert message to "Trying to connect..." since
the connection is always trying to reconnect
- Remove `useWithRetry` because it is not necessary anymore
**Other topics:**
- The disconnected alert is displaying a simple message, but we could
include more info such as the number of attempts
- The reconnection feature is in a good state and adding value. IMO, any
improvement can be done after getting this merged
Closes https://github.com/coder/internal/issues/659
### Description
This PR introduces GPG signing for all Coder *slim-binaries*.
Detached signatures will allow users to verify the integrity and
authenticity of the binaries they download.
### Changes
* `scripts/sign_with_gpg.sh`: New script to sign a given binary
using GPG. It imports the release key, signs the binary, and
verifies the signature.
* `scripts/build_go.sh`: Updated to call `sign_with_gpg.sh` when the
`CODER_SIGN_GPG` environment variable is set to 1.
* `.github/workflows/release.yaml`: The `CODER_SIGN_GPG` environment
variable is now set to 1 during the release build, enabling GPG
signing for all release binaries.
* `.github/workflows/ci.yaml`: The `CODER_SIGN_GPG` environment
variable is now set to 1 during the CI build, enabling GPG
signing for all CI binaries.
* `Makefile`: Detached signatures are moved to the `/site/out/bin/`
directory
Fixes #18751
Use `postgresql` as the Helm release name instead of `coder-db` to make
the service name more intuitive and eliminate confusion entirely.
## Changes
- Changed `helm install coder-db bitnami/postgresql` to `helm install
postgresql bitnami/postgresql`
- Updated PostgreSQL URLs from
`coder-db-postgresql.coder.svc.cluster.local` to
`postgresql.coder.svc.cluster.local`
- Removed explanatory notes about service naming (no longer needed)
## Benefits
✅ Makes examples work out-of-the-box for most users
✅ Uses the most straightforward and intuitive release name
✅ Eliminates confusion about service naming entirely
✅ Simpler documentation without complex explanations
## Testing
- Verified that `helm install postgresql bitnami/postgresql` creates
service named `postgresql`
- Confirmed this approach works with the connection URL
`postgresql.coder.svc.cluster.local`
Suggested by @EdwardAngert as a cleaner solution than explaining the
service naming dependency.
---------
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: matifali <10648092+matifali@users.noreply.github.com>
Co-authored-by: EdwardAngert <17991901+EdwardAngert@users.noreply.github.com>
Co-authored-by: Edward Angert <EdwardAngert@users.noreply.github.com>
This change allows a devcontainer to be opened via the agent syntax,
`coder open vscode <workspace>.<agent>` and removes the `--container`
option to simplify the subcommand. Accessing the subagent will behave
similarly to how the `--container` option behaved.
Fixes coder/internal#748
Workspaces with `Write Coder on Coder` template are failing with an
error in the agent related to File Browser:
```
2025/07/08 14:00:29 Using database: /home/coder/filebrowser.db
2025/07/08 14:00:29 password is too short, minimum length is 12
```
Updating filebrowser module version to 1.1.1:
https://github.com/coder/registry/pull/173
Fixes https://github.com/coder/internal/issues/695
PostgreSQL tests are getting run in a non-postgres CI job because the tests don't get skipped if the `DB=` env is unset. This PR adds a skip for them.
They are flaking in the `test-go-race` CI job. They run fine in the `test-go-race-pg` job, which pre-creates the postgres server, so the flakiness is almost certainly related to spinning up the database server.
## Description
This PR updates the lifecycle executor to explicitly exclude prebuilt
workspaces from being considered for lifecycle operations such as
`autostart`, `autostop`, `dormancy`, `default TTL` and `failure TTL`.
Prebuilt workspaces (i.e., those owned by the prebuild system user) are
handled separately by the prebuild reconciliation loop. Including them
in the lifecycle executor could lead to unintended behavior such as
incorrect scheduling or state transitions.
## Changes
* Updated the lifecycle executor query
`GetWorkspacesEligibleForTransition` to exclude workspaces with
`owner_id = 'c42fdf75-3097-471c-8c33-fb52454d81c0'` (prebuilds).
* Added tests to verify prebuilt workspaces are not considered in:
* Autostop
* Autostart
* Default TTL
* Dormancy
* Failure TTL
Fixes: https://github.com/coder/coder/issues/18740
Related to: https://github.com/coder/coder/issues/18658
Relates to https://github.com/coder/internal/issues/720
* Reduces workspaces data refetch interval if no builds are pending
* Sets `refetchOnWindowFocus: always` to mitigate impact of reduced polling duration
Closes #17791
This PR adds ability to cancel workspace builds that are in "pending"
status.
Breaking changes:
- CancelWorkspaceBuild method in codersdk now accepts an optional
request parameter
API:
- Added `expect_status` query parameter to the cancel workspace build
endpoint
- This parameter ensures the job hasn't changed state before canceling
- API returns `412 Precondition Failed` if the job is not in the
expected status
- Valid values: `running` or `pending`
- Wrapped the entire cancel method in a database transaction
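A sketch of how a client might target the endpoint. The route shape is assumed from the existing `/api/v2/workspacebuilds` endpoints, and the build ID is hypothetical:

```go
package main

import (
	"fmt"
	"net/url"
)

// cancelBuildURL builds the cancel request path with the new
// expect_status query parameter. Passing an empty expectStatus omits
// the parameter, preserving the old unconditional-cancel behavior.
func cancelBuildURL(buildID, expectStatus string) string {
	u := url.URL{Path: "/api/v2/workspacebuilds/" + buildID + "/cancel"}
	if expectStatus != "" {
		u.RawQuery = url.Values{"expect_status": {expectStatus}}.Encode()
	}
	return u.String()
}

func main() {
	// Hypothetical build ID; real IDs are UUIDs.
	fmt.Println(cancelBuildURL("my-build-id", "pending"))
}
```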
UI:
- Added confirmation dialog to the `Cancel` button, since it's a
destructive operation


- Enabled cancel action for pending workspaces (`expect_status=pending`
is sent if workspace is in pending status)

---------
Co-authored-by: Dean Sheather <dean@deansheather.com>
Fixes#18767
This PR restores the missing `landing.png` and `duplicate.png` images
that were accidentally deleted in commit
b26c9e2432.
## Problem
The images were deleted during a documentation restructure, but external
links and cached website content are still referencing these image URLs,
causing 404 errors:
-
`https://raw.githubusercontent.com/coder/coder/main/docs/images/guides/ai-agents/landing.png`
-
`https://raw.githubusercontent.com/coder/coder/main/docs/images/guides/ai-agents/duplicate.png`
## Solution
Restore the original images from the git history to maintain backward
compatibility for external references while preserving the current
documentation structure.
## Testing
✅ Verified images are restored to correct location
✅ Confirmed file sizes match original images
✅ No conflicts with current documentation structure
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: kylecarbs <7122116+kylecarbs@users.noreply.github.com>
## Problem
Users were being automatically logged out when deleting OAuth2
applications.
## Root Cause
1. User deletes OAuth2 app successfully
2. React Query automatically refetches the app data
3. Management API incorrectly returned **401 Unauthorized** for the
missing app
4. Frontend axios interceptor sees 401 and calls `signOut()`
5. User gets logged out unexpectedly
## Solution
- Change management API to return **404 Not Found** for missing OAuth2
apps
- OAuth2 protocol endpoints continue returning 401 per RFC 6749
- Rename `writeInvalidClient` to `writeClientNotFound` for clarity
## Additional Changes
- Add conditional OAuth2 navigation when experiment is enabled or in dev
builds
- Add `isDevBuild()` utility and `buildInfo` to dashboard context
- Minor improvements to format script and warning dialogs
Signed-off-by: Thomas Kosiewski <tk@coder.com>
- Add `format:"uri"` to `Group.AvatarURL` (matches `User.AvatarURL`
field)
- `<user_id>` and `<group_id>` were backwards in the `example:` tags
- The `@Success` annotation for `/acl [get]` had an incorrect type
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Makes `initialDelaySeconds` configurable for both `readinessProbe` and
`livenessProbe` in the Helm chart.
**Changes:**
- Added `coder.readinessProbe.initialDelaySeconds` and
`coder.livenessProbe.initialDelaySeconds` to `values.yaml`
- Updated `_coder.tpl` template to use these configurable values
- Defaults to 0 seconds to maintain existing behavior
**Testing:**
- Verified template rendering with default values (0)
- Verified template rendering with custom values (30, 60)
- Both probes correctly use the configured `initialDelaySeconds`
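A values override using the new fields might look like this (the delay values are illustrative, not recommendations):

```yaml
coder:
  readinessProbe:
    initialDelaySeconds: 30
  livenessProbe:
    initialDelaySeconds: 60
```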
---------
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: kylecarbs <7122116+kylecarbs@users.noreply.github.com>
Add a confirmation dialog to the release script that prompts the user to
manually update the release calendar documentation before proceeding
with the release.
## Changes
- Added a confirmation prompt that asks users to update the release
calendar documentation
- Provides the URL to the documentation
(https://coder.com/docs/install/releases#release-schedule)
- Suggests running the `./scripts/update-release-calendar.sh` script
- Requires explicit confirmation before proceeding with the release
- Exits the script if the user hasn't updated the documentation
## Testing
- [x] Script syntax validation passes (`bash -n scripts/release.sh`)
- [x] Changes are placed at the appropriate point in the release flow
(after release notes editing, before actual release creation)
This addresses the issue where the release calendar documentation was
getting out of date. While automation can be added later, this ensures
users manually confirm the documentation is updated before each release.
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: bpmct <22407953+bpmct@users.noreply.github.com>
# OAuth2 Provider Code Reorganization
This PR reorganizes the OAuth2 provider code to improve separation of concerns and maintainability. The changes include:
1. Migrating OAuth2 provider app validation tests from `coderd/oauth2_test.go` to `oauth2provider/provider_test.go`
2. Moving OAuth2 client registration validation tests to `oauth2provider/validation_test.go`
3. Adding new comprehensive test files for metadata and validation edge cases
4. Renaming `OAuth2ProviderAppSecret` to `AppSecret` for better naming consistency
5. Simplifying the main integration test in `oauth2_test.go` to focus on core functionality
The PR maintains all existing test coverage while organizing the code more logically, making it easier to understand and maintain the OAuth2 provider implementation. This reorganization will help with future enhancements to the OAuth2 provider functionality.
# Refactor OAuth2 Provider Code into Dedicated Package
This PR refactors the OAuth2 provider functionality by moving it from the main `coderd` package into a dedicated `oauth2provider` package. The change improves code organization and maintainability without changing functionality.
Key changes:
- Created a new `oauth2provider` package to house all OAuth2 provider-related code
- Moved existing OAuth2 provider functionality from `coderd/identityprovider` to the new package
- Refactored handler functions to follow a consistent pattern of returning `http.HandlerFunc` instead of being handlers directly
- Split large files into smaller, more focused files organized by functionality:
- `app_secrets.go` - Manages OAuth2 application secrets
- `apps.go` - Handles OAuth2 application CRUD operations
- `authorize.go` - Implements the authorization flow
- `metadata.go` - Provides OAuth2 metadata endpoints
- `registration.go` - Handles dynamic client registration
- `revoke.go` - Implements token revocation
- `secrets.go` - Manages secret generation and validation
- `tokens.go` - Handles token issuance and validation
This refactoring improves code organization and makes the OAuth2 provider functionality more maintainable while preserving all existing behavior.
# Add MCP HTTP Server Experiment
This PR adds a new experiment flag `mcp-server-http` to enable the MCP HTTP server functionality. The changes include:
1. Added a new experiment constant `ExperimentMCPServerHTTP` with the value "mcp-server-http"
2. Added display name and documentation for the new experiment
3. Improved the experiment middleware to:
- Support requiring multiple experiments
- Provide better error messages with experiment display names
- Add a development mode bypass option
4. Applied the new experiment requirement to the MCP HTTP endpoint
5. Replaced the custom OAuth2 middleware with the standard experiment middleware
The PR also improves the `Enabled()` method on the `Experiments` type by using `slices.Contains()` for better readability.
# Add OAuth2 Provider Functionality as an Experiment
This PR adds a new experiment flag `oauth2` that enables OAuth2 provider functionality in Coder. When enabled, this experiment allows Coder to act as an OAuth2 provider.
The changes include:
- Added the new `ExperimentOAuth2` constant with appropriate documentation
- Updated the OAuth2 provider middleware to check for the experiment flag
- Modified the error message to indicate that the OAuth2 provider requires enabling the experiment
- Added the new experiment to the known experiments list in the SDK
Previously, OAuth2 provider functionality was only available in development mode. With this change, it can be enabled in production environments by activating the experiment.
# Add MCP HTTP server with streamable transport support
- Add MCP HTTP server with streamable transport support
- Integrate with existing toolsdk for Coder workspace operations
- Add comprehensive E2E tests with OAuth2 bearer token support
- Register MCP endpoint at /api/experimental/mcp/http with authentication
- Support RFC 6750 Bearer token authentication for MCP clients
Change-Id: Ib9024569ae452729908797c42155006aa04330af
Signed-off-by: Thomas Kosiewski <tk@coder.com>
# Remove unique constraint on OAuth2 provider app names
This PR removes the unique constraint on the `name` field in the `oauth2_provider_apps` table to comply with RFC 7591, which only requires unique client IDs, not unique client names.
Changes include:
- Removing the unique constraint from the database schema
- Adding migration files for both up and down migrations
- Removing the name uniqueness check in the in-memory database implementation
- Updating the unique constraint constants
Change-Id: Iae7a1a06546fbc8de541a52e291f8a4510d57e8a
Signed-off-by: Thomas Kosiewski <tk@coder.com>
# Organize Development Documentation into Separate Files
This PR reorganizes the development documentation by splitting the monolithic CLAUDE.md file into multiple focused documents. The main file now provides a concise overview with essential commands and critical patterns, while importing detailed content from specialized guides.
Key improvements:
- Created separate documentation files for specific domains:
- Database development patterns
- OAuth2 implementation guidelines
- Testing best practices
- Troubleshooting common issues
- Development workflows and guidelines
- Restructured the main CLAUDE.md to be more scannable with improved formatting
- Added quick-reference tables for common commands
- Maintained all existing content while making it more accessible
- Highlighted critical patterns that must be followed
This organization makes the documentation more maintainable and easier to navigate, allowing developers to quickly find relevant information for their specific tasks.
# Implement OAuth2 Dynamic Client Registration (RFC 7591/7592)
This PR implements OAuth2 Dynamic Client Registration according to RFC 7591 and Client Configuration Management according to RFC 7592. These standards allow OAuth2 clients to register themselves programmatically with Coder as an authorization server.
Key changes include:
1. Added database schema extensions to support RFC 7591/7592 fields in the `oauth2_provider_apps` table
2. Implemented `/oauth2/register` endpoint for dynamic client registration (RFC 7591)
3. Added client configuration management endpoints (RFC 7592):
- GET/PUT/DELETE `/oauth2/clients/{client_id}`
- Registration access token validation middleware
4. Added comprehensive validation for OAuth2 client metadata:
- URI validation with support for custom schemes for native apps
- Grant type and response type validation
- Token endpoint authentication method validation
5. Enhanced developer documentation with:
- RFC compliance guidelines
- Testing best practices to avoid race conditions
- Systematic debugging approaches for OAuth2 implementations
The implementation follows security best practices from the RFCs, including proper token handling, secure defaults, and appropriate error responses. This enables third-party applications to integrate with Coder's OAuth2 provider capabilities programmatically.
This change adds a new `docker-devcontainer` template which allows you
to provision a workspace running in Docker that also creates workspaces
via Docker running inside it (Docker-in-Docker).
- **chore(examples/templates): rename `docker-devcontainer` to
`docker-envbuilder`**
- **feat(examples/templates): add `docker-devcontainer` example
template**
# Add RFC 6750 Bearer Token Authentication Support
This PR implements RFC 6750 Bearer Token authentication as an additional authentication method for Coder's API. This allows clients to authenticate using standard OAuth 2.0 Bearer tokens in two ways:
1. Using the `Authorization: Bearer <token>` header
2. Using the `access_token` query parameter
Key changes:
- Added support for extracting tokens from both Bearer headers and access_token query parameters
- Implemented proper WWW-Authenticate headers for 401/403 responses with appropriate error descriptions
- Added comprehensive test coverage for the new authentication methods
- Updated the OAuth2 protected resource metadata endpoint to advertise Bearer token support
- Enhanced the OAuth2 testing script to verify Bearer token functionality
These authentication methods are added as fallback options, maintaining backward compatibility with Coder's existing authentication mechanisms. The existing authentication methods (cookies, session token header, etc.) still take precedence.
This implementation follows the OAuth 2.0 Bearer Token specification (RFC 6750) and improves interoperability with standard OAuth 2.0 clients.
# Add OAuth2 Protected Resource Metadata Endpoint
This PR implements the OAuth2 Protected Resource Metadata endpoint according to RFC 9728. The endpoint is available at `/.well-known/oauth-protected-resource` and provides information about Coder as an OAuth2 protected resource.
Key changes:
- Added a new endpoint at `/.well-known/oauth-protected-resource` that returns metadata about Coder as an OAuth2 protected resource
- Created a new `OAuth2ProtectedResourceMetadata` struct in the SDK
- Added tests to verify the endpoint functionality
- Updated API documentation to include the new endpoint
The implementation currently returns basic metadata including the resource identifier and authorization server URL. The `scopes_supported` field is empty until a scope system based on RBAC permissions is implemented. The `bearer_methods_supported` field is omitted as Coder uses custom authentication methods rather than standard RFC 6750 bearer tokens.
A TODO has been added to implement RFC 6750 bearer token support in the future.
# Add Code Navigation and Investigation Guide for Go LSP Tools
Added a new section to the CLAUDE.md documentation that explains how to use Go Language Server Protocol (LSP) tools when working with the Coder codebase. The guide includes:
- Commands for finding function definitions, symbol references, and getting symbol information
- Examples of LSP usage with specific commands
- Guidance on when to use LSP versus other tools like grep or bash
- A structured investigation strategy for navigating the codebase, starting with route registration and tracing through to implementations
This documentation helps developers more efficiently explore and understand the codebase structure.
This pull request implements RFC 8707, Resource Indicators for OAuth 2.0 (https://datatracker.ietf.org/doc/html/rfc8707), to enhance the security of our OAuth 2.0 provider.
This change enables proper audience validation and binds access tokens to their intended resource, which is crucial
for preventing token misuse in multi-tenant environments or deployments with multiple resource servers.
## Key Changes:
* Resource Parameter Support: Adds support for the resource parameter in both the authorization (`/oauth2/authorize`) and token (`/oauth2/token`) endpoints, allowing clients to specify the intended resource server.
* Audience Validation: Implements server-side validation to ensure that the resource parameter provided during the token exchange matches the one from the authorization request.
* API Middleware Enforcement: Introduces a new validation step in the API authentication middleware (`coderd/httpmw/apikey.go`) to verify that the audience of the access token matches the resource server being accessed.
* Database Schema Updates:
* Adds a `resource_uri` column to the `oauth2_provider_app_codes` table to store the resource requested during authorization.
* Adds an `audience` column to the `oauth2_provider_app_tokens` table to bind the issued token to a specific audience.
* Enhanced PKCE: Includes a minor enhancement to the PKCE implementation to protect against timing attacks.
* Comprehensive Testing: Adds extensive new tests to `coderd/oauth2_test.go` to cover various RFC 8707 scenarios, including valid flows, mismatched resources, and refresh token validation.
## How it Works:
1. An OAuth2 client specifies the target resource (e.g., https://coder.example.com) using the resource parameter in the authorization request.
2. The authorization server stores this resource URI with the authorization code.
3. During the token exchange, the server validates that the client provides the same resource parameter.
4. The server issues an access token with an audience claim set to the validated resource URI.
5. When the client uses the access token to call an API endpoint, the middleware verifies that the token's audience matches the URL of the Coder deployment, rejecting any tokens intended for a different resource.
This ensures that a token issued for one Coder deployment cannot be used to access another, significantly strengthening our authentication security.
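The audience check in step 5 can be sketched roughly as follows. This is a minimal illustration, not Coder's actual middleware; the function names and the normalization rules (lowercased host, trimmed trailing slash) are assumptions:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// normalizeAudience canonicalizes a resource URI so that trivially
// different spellings (trailing slash, uppercase host) compare equal.
func normalizeAudience(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	u.Host = strings.ToLower(u.Host)
	u.Path = strings.TrimSuffix(u.Path, "/")
	u.Fragment = ""
	return u.String(), nil
}

// audienceMatches reports whether a token's audience claim matches the
// deployment URL being accessed; the middleware would reject on false.
func audienceMatches(tokenAudience, deploymentURL string) bool {
	a, err := normalizeAudience(tokenAudience)
	if err != nil {
		return false
	}
	b, err := normalizeAudience(deploymentURL)
	if err != nil {
		return false
	}
	return a == b
}

func main() {
	fmt.Println(audienceMatches("https://coder.example.com/", "https://CODER.example.com")) // true
	fmt.Println(audienceMatches("https://coder.example.com", "https://other.example.com"))  // false
}
```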
---
Change-Id: I3924cb2139e837e3ac0b0bd40a5aeb59637ebc1b
Signed-off-by: Thomas Kosiewski <tk@coder.com>
This PR provides two commands:
* `coder prebuilds pause`
* `coder prebuilds resume`
These allow the suspension of all prebuilds activity, intended for use
if prebuilds are misbehaving.
add a new section specifically about how to disable path-based apps to
the security best practices doc
## todo
- [x] copy review
- [x] cross-linking
---------
Co-authored-by: EdwardAngert <17991901+EdwardAngert@users.noreply.github.com>
Co-authored-by: Dean Sheather <dean@deansheather.com>
The Go build cache has a tendency to accumulate and waste space
(typically in the realm of 10-70 GB). This change automatically cleans
up the cache on shutdown to prevent accumulation.
## Summary
This PR implements critical MCP OAuth2 compliance features for Coder's authorization server, adding PKCE support, resource parameter handling, and OAuth2 server metadata discovery. This brings Coder's OAuth2 implementation significantly closer to production readiness for MCP (Model Context Protocol)
integrations.
## What's Added
### OAuth2 Authorization Server Metadata (RFC 8414)
- Add `/.well-known/oauth-authorization-server` endpoint for automatic client discovery
- Returns standardized metadata including supported grant types, response types, and PKCE methods
- Essential for MCP client compatibility and OAuth2 standards compliance
### PKCE Support (RFC 7636)
- Implement Proof Key for Code Exchange with S256 challenge method
- Add `code_challenge` and `code_challenge_method` parameters to authorization flow
- Add `code_verifier` validation in token exchange
- Provides enhanced security for public clients (mobile apps, CLIs)
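For reference, the S256 method above boils down to hashing the verifier and comparing in constant time; a minimal sketch (not Coder's actual code) using the test vector from RFC 7636, appendix B:

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/base64"
	"fmt"
)

// s256Challenge computes the PKCE code_challenge for a verifier using
// the S256 method: unpadded base64url of the SHA-256 hash (RFC 7636 §4.2).
func s256Challenge(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

// verifyPKCE recomputes the challenge from code_verifier and compares it
// to the stored code_challenge in constant time to avoid timing leaks.
func verifyPKCE(verifier, challenge string) bool {
	return subtle.ConstantTimeCompare([]byte(s256Challenge(verifier)), []byte(challenge)) == 1
}

func main() {
	// Test vector from RFC 7636, appendix B.
	verifier := "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
	fmt.Println(s256Challenge(verifier))
	// -> E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
}
```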
### Resource Parameter Support (RFC 8707)
- Add `resource` parameter to authorization and token endpoints
- Store resource URI and bind tokens to specific audiences
- Critical for MCP's resource-bound token model
### Enhanced OAuth2 Error Handling
- Add OAuth2-compliant error responses with proper error codes
- Use standard error format: `{"error": "code", "error_description": "details"}`
- Improve error consistency across OAuth2 endpoints
### Authorization UI Improvements
- Fix authorization flow to use POST-based consent instead of GET redirects
- Remove dependency on referer headers for security decisions
- Improve CSRF protection with proper state parameter validation
## Why This Matters
**For MCP Integration:** MCP requires OAuth2 authorization servers to support PKCE, resource parameters, and metadata discovery. Without these features, MCP clients cannot securely authenticate with Coder.
**For Security:** PKCE prevents authorization code interception attacks, especially critical for public clients. Resource binding ensures tokens are only valid for intended services.
**For Standards Compliance:** These are widely adopted OAuth2 extensions that improve interoperability with modern OAuth2 clients.
## Database Changes
- **Migration 000343:** Adds `code_challenge`, `code_challenge_method`, `resource_uri` to `oauth2_provider_app_codes`
- **Migration 000343:** Adds `audience` field to `oauth2_provider_app_tokens` for resource binding
- **Audit Updates:** New OAuth2 fields properly tracked in audit system
- **Backward Compatibility:** All changes maintain compatibility with existing OAuth2 flows
## Test Coverage
- Comprehensive PKCE test suite in `coderd/identityprovider/pkce_test.go`
- OAuth2 metadata endpoint tests in `coderd/oauth2_metadata_test.go`
- Integration tests covering PKCE + resource parameter combinations
- Negative tests for invalid PKCE verifiers and malformed requests
## Testing Instructions
```bash
# Run the comprehensive OAuth2 test suite
./scripts/oauth2/test-mcp-oauth2.sh
# --- Manual testing with the interactive server ---
# Start Coder in development mode
./scripts/develop.sh
# In another terminal, set up test app and run interactive flow
eval $(./scripts/oauth2/setup-test-app.sh)
./scripts/oauth2/test-manual-flow.sh
# Opens browser with OAuth2 flow, handles callback automatically
# Clean up when done
./scripts/oauth2/cleanup-test-app.sh
# --- Individual component testing ---
# Test metadata endpoint
curl -s http://localhost:3000/.well-known/oauth-authorization-server | jq .
# Test PKCE generation
./scripts/oauth2/generate-pkce.sh
# Run specific test suites
go test -v ./coderd/identityprovider -run TestVerifyPKCE
go test -v ./coderd -run TestOAuth2AuthorizationServerMetadata
```
### Breaking Changes
None. All changes maintain backward compatibility with existing OAuth2 flows.
---
Change-Id: Ifbd0d9a543d545f9f56ecaa77ff2238542ff954a
Signed-off-by: Thomas Kosiewski <tk@coder.com>
## Description
This PR adds a warning to the prebuilds documentation about
incompatibility with Workspace schedule (autostart/autostop), dormancy,
and DevContainers. These configurations can interfere with prebuild
behavior and should be avoided for now.
Preview:

Closes #17689
This PR optimizes the audit logs query performance by extracting the
count operation into a separate query and replacing the OR-based
workspace_builds with conditional joins.
## Query changes
* Extracted the count into a separate query
* Replaced the single `workspace_builds` join (which used OR conditions) with two conditional joins:
* `wb_build` for workspace_build audit logs (a direct lookup)
* `wb_workspace` for workspace create audit logs (via workspace)
Optimized AuditLogsOffset query:
https://explain.dalibo.com/plan/4g1hbedg4a564bg8
New CountAuditLogs query:
https://explain.dalibo.com/plan/ga2fbcecb9efbce3
We were discarding all "working" updates from the screen watcher because
we cannot tell whether the agent or the user changed the screen. However,
it makes sense to accept the very first update, because the agent could
be working but have neglected to report that fact; otherwise you would
never get an initial "working" update (it would just eventually go
straight to "idle").
Also, messages can start at zero, so I fixed that as well, although the
first message will be from the LLM and we ignore those anyway, so this
probably has no practical effect; it just seems more technically correct.
It also seems I forgot to actually update the last message ID. This too
does not actually matter for user messages (since I think the SSE
endpoint will not re-emit a user message it has already emitted), but it
seems more technically correct to check.
Lastly, if we have the screen watcher, ignore the agent's self-reported
state and always use "working" since it is unreliable. The idle state will
eventually be caught by the watcher.
Previously, we displayed apps in iframes on the task page without
waiting for them to initialize. This would result in 502 errors shown to
the user. This PR makes sure that we only display the app after it
initializes.
### Before
<img width="1920" alt="Screenshot 2025-06-30 at 14 59 07 (2)"
src="https://github.com/user-attachments/assets/63564ac9-abce-4a0c-b58e-b988772fae82"
/>
(possibly temporary) fix for #18519
Matches OpenSSH for non-tty sessions, where we don't actively terminate
the process.
Adds explicit tracking to the SSH server for these processes so that if
we are shutting down we terminate them: this ensures that we can shut
down quickly to allow shutdown scripts to run. It also ensures our tests
don't leak system resources.
When creating a new task, the following error was getting returned:
**Error:**
```json
{
"message": "Validation failed.",
"validations": [
{
"field": "template_id",
"detail": "Validation failed for tag \"excluded_with\" with value: \"42205a38-845c-4186-8475-f002e0936d53\""
},
{
"field": "template_version_id",
"detail": "Validation failed for tag \"excluded_with\" with value: \"22b1c4b7-432d-4eb5-9341-cd8efacb8f46\""
}
]
}
```
Caused by https://github.com/coder/coder/pull/18623
Previously in #18635 we delayed the containers API `Init` to avoid producing
errors due to Docker and `@devcontainers/cli` not yet being installed by startup
scripts. This had an adverse effect on UX, as the detection of devcontainers
was greatly delayed, hurting UI responsiveness.
This change splits `Init` into `Init` and `Start` so that we can immediately
after `Init` start serving known devcontainers (defined in Terraform), improving
the UX.
Related #18635
Related #18640
Fixes https://github.com/coder/internal/issues/695
Retries initial connection to postgres in testing up to 3 seconds if we
see "reset by peer", which probably means that some other test proc just
started the container.
---------
Co-authored-by: Hugo Dutka <hugo@coder.com>
fixes #18263
Adds support to bump `usedAt` for X11 forwarding sessions whenever an application connects over the TCP socket. This should help avoid evicting sessions that are actually in use.
## Description
This PR improves the RBAC package by refactoring the policy, enhancing
documentation, and adding utility scripts.
## Changes
* Refactored `policy.rego` for clarity and readability
* Updated README with OPA section
* Added `benchmark_authz.sh` script for authz performance testing and
comparison
* Added `gen_input.go` to generate input for `opa eval` testing
relates to #18263
Refactors the x11Forwarder to accept a networking `interface` that we can fake out for testing. This isolates the unit tests from other processes listening in the port range used by X11 forwarding. This will become extremely important in up-stack PRs where we listen on every port in the range and need to control which ports have conflicts.
partial for #18263
Caps the X11 forwarding sessions at a maximum port of 6200, and evicts the oldest session if we create new sessions while at the max.
Unit tests included higher in the stack.
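The cap-and-evict behavior might look roughly like this. The types and field names are illustrative, not the actual implementation:

```go
package main

import "fmt"

// x11Session tracks one forwarding session; lastUsed is bumped whenever
// a client connects over the TCP socket.
type x11Session struct {
	port     int
	lastUsed int64 // unix timestamp
}

// addSession appends a session, evicting the least recently used one
// when the session limit has been reached.
func addSession(sessions []x11Session, s x11Session, maxSessions int) []x11Session {
	if len(sessions) >= maxSessions {
		oldest := 0
		for i, sess := range sessions {
			if sess.lastUsed < sessions[oldest].lastUsed {
				oldest = i
			}
		}
		sessions = append(sessions[:oldest], sessions[oldest+1:]...)
	}
	return append(sessions, s)
}

func main() {
	sessions := []x11Session{{port: 6010, lastUsed: 100}, {port: 6011, lastUsed: 50}}
	sessions = addSession(sessions, x11Session{port: 6012, lastUsed: 200}, 2)
	for _, s := range sessions {
		fmt.Println(s.port) // 6010 then 6012: port 6011 (the oldest) was evicted
	}
}
```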
This lets you browse storybook using a Coder Desktop hostname (i.e. `workspace.coder:6006`). The default configuration (including `localhost`) will still work.
The previous method of refreshing after we change the devcontainer
status introduced an intermediary state where the devcontainer might not
yet have been assigned a container and will flicker as stopped before
going into running.
Pin Nix version to 2.28.4 in dogfood workflow
Pins the Nix version in the dogfood workflow to 2.28.4 to avoid a JSON type error that occurs with Nix 2.29 and above.
Change-Id: Ie024d5070dbe5901952fc52463c6602363ef8886
Signed-off-by: Thomas Kosiewski <tk@coder.com>
This PR replaces the use of the **container** ID with the
**devcontainer** ID. This is a breaking change. This allows rebuilding a
devcontainer when there is no valid container ID.
The subagent API was implemented under the incorrect assumption that
slugs were unique per agent. While this PR doesn't completely enforce
uniqueness, we now compute a stable hash to prefix the slug, which
should make collisions reasonably improbable.
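One way to picture the stable-hash prefix; a sketch only, the actual hash function and format in the PR may differ:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// prefixedSlug derives a short, stable prefix from the parent agent ID,
// so the same slug under different agents is unlikely to collide.
func prefixedSlug(agentID, slug string) string {
	sum := sha256.Sum256([]byte(agentID))
	return hex.EncodeToString(sum[:])[:8] + "-" + slug
}

func main() {
	fmt.Println(prefixedSlug("agent-a", "code-server"))
	fmt.Println(prefixedSlug("agent-b", "code-server")) // different prefix, same slug
}
```

Because the hash depends only on the agent ID, the prefix is deterministic across restarts.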
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## PR Summary
Commit 5df70a613d mistakenly added the
following old line to `CLAUDE.md`:
```
For building Frontend refer to [this document](docs/contributing/frontend.md)
```
This PR removes it.
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
resolves #18361
It's possible for a dynamic parameter option value to be an empty
string, which causes the following error in the Radix Select component.
The solution is to handle empty strings so that they are not set
directly in the component.
`Uncaught Error: A <Select.Item /> must have a value prop that is not an
empty string. This is because the Select value can be set to an empty
string to clear the selection and show the placeholder.`
```
data "coder_parameter" "radio" {
name = "radio"
display_name = "An example of a radio input"
description = "The next parameter supports a single value."
type = "string"
form_type = "dropdown"
order = 1
default = ""
option {
name = "Empty"
value = ""
}
}
```
Updates icons in WorkspacesTable to better differentiate between "start"
and "update and start".
Note: the logic I'm currently using is as follows:
* Workspace does not require active version and is outdated -> cloud
icon
* Workspace requires active version and is outdated -> circle play icon
I also, on a whim, updated the stories for the component to make the
workspace names more identifiably reflect their content.

Closes https://github.com/coder/internal/issues/732
We now retry (up to 5 times) when attempting to create an agent using
the workspace folder as the name.
It is important to note that this flow is only ever run when creating an
agent using the workspace folder as the name. If a deployment uses
terraform or the devcontainer customization, we do not fall back to this
approach.
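A retry loop of this shape is the essence of the change; the helper name and the failure mode are illustrative, not taken from the PR:

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for a hypothetical failure, e.g. an agent name
// that is already taken.
var errConflict = errors.New("name conflict")

// createWithRetry attempts create up to maxAttempts times, returning
// nil on the first success or the last error otherwise.
func createWithRetry(create func() error, maxAttempts int) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = create(); err == nil {
			return nil
		}
	}
	return err
}

func main() {
	attempts := 0
	err := createWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errConflict
		}
		return nil
	}, 5)
	fmt.Println(attempts, err) // 3 <nil>
}
```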
## Description
Follow-up from PR https://github.com/coder/coder/pull/18333
Related with:
https://github.com/coder/coder/pull/18333#discussion_r2159300881
This changes the authorization logic to first try the normal workspace
authorization check, and only if the resource is a prebuilt workspace,
fall back to the prebuilt workspace authorization check. Since prebuilt
workspaces are a subset of workspaces, the normal workspace check is
more likely to succeed. This is a small optimization to reduce
unnecessary prebuilt authorization calls.
This Pull request allows dynamic parameters to list system users in its
search for workspace owners. This is necessary to allow prebuilds to
reconcile prebuilt workspaces and to delete them.
Use the `/workspaces?q=has-ai-task=true`,
`/templates?q=has-ai-task=true` and `/aitasks/prompts` endpoints to
fetch Task templates and workspaces on the `/tasks` page.
Also:
- remove documentation link placeholders: the documentation is not in
place yet and is not going to be available before the June 24th code
freeze
- load workspaces and templates in parallel
- replace loading spinners with content skeletons
Related to https://github.com/coder/coder/issues/18454 and
https://github.com/coder/internal/issues/660.
Relates to https://github.com/coder/internal/issues/674
Currently, we send notifications to **all template admins** for **every
failed and hard-limited preset**. This can generate excessive
noise—especially when someone is debugging a template and creates
multiple broken versions in quick succession.
For now, we've decided to remove hard-limited preset notifications to
reduce excessive noise.
In the long term, we plan to aggregate failure information and deliver
it on a daily or weekly basis.
`wsbuilder` hits the file cache when running validation. This solution is imperfect, but by first sorting workspaces by their template version id, the cache hit rate should improve.
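Conceptually, the sort groups workspaces sharing a template version so consecutive builds hit a warm cache; a minimal sketch with illustrative types:

```go
package main

import (
	"fmt"
	"sort"
)

type workspace struct {
	name              string
	templateVersionID string
}

// sortByTemplateVersion groups workspaces that share a template version,
// so sequential processing reuses cached terraform files.
func sortByTemplateVersion(ws []workspace) {
	sort.SliceStable(ws, func(i, j int) bool {
		return ws[i].templateVersionID < ws[j].templateVersionID
	})
}

func main() {
	ws := []workspace{{"a", "v2"}, {"b", "v1"}, {"c", "v2"}, {"d", "v1"}}
	sortByTemplateVersion(ws)
	for _, w := range ws {
		fmt.Println(w.name, w.templateVersionID)
	}
	// b v1, d v1, a v2, c v2
}
```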
Add an endpoint to fetch AI task prompts for multiple workspace builds
at the same time. A prompt is the value of the "AI Prompt" workspace
build parameter. On main, the only way our API allows fetching workspace
build parameters is by using the `/workspacebuilds/$build_id/parameters`
endpoint, requiring a separate API call for every build.
The Tasks dashboard fetches Task workspaces in order to show them in a
list, and then needs to fetch the value of the `AI Prompt` parameter for
every task workspace (using its latest build id), requiring an
additional API call for each list item. This endpoint will allow the
dashboard to make just 2 calls to render the list: one to fetch task
workspaces, the other to fetch prompts.
<img width="1512" alt="Screenshot 2025-06-20 at 11 33 11"
src="https://github.com/user-attachments/assets/92899999-e922-44c5-8325-b4b23a0d2bff"
/>
Related to https://github.com/coder/internal/issues/660.
Closes #18207
This PR adds license status to support bundle to help with
troubleshooting license-related issues.
- `license-status.txt`, is added to the support bundle.
- it contains the same output as the `coder license list` command.
- license output formatter logic has been extracted into a separate
function.
- this allows it to be reused both in the `coder license list` cmd and
in the support bundle generation.
Fixes https://github.com/coder/coder/issues/18024
* drive-by: renames `handleExperimentsSafe` to
`handleExperimentsAvailable` to better match semantics
* defines list of `codersdk.ExperimentsKnown` and updates
`ReadExperiments` to log on invalid experiments
* typescript-ignores `codersdk.Experiments` so apitypings generates a
valid enum list of possible values of experiment
* updates OverviewPageView to distinguish between known 'hidden'
experiments and unknown 'invalid' experiments
closes https://github.com/coder/coder/issues/18430.
Selecting a preset, and then selecting the "None" preset used to result in a validation error because an invalid preset id ("") was sent to the backend.
---------
Co-authored-by: Jaayden Halko <jaayden@coder.com>
Co-authored-by: Susana Ferreira <susana@coder.com>
`BuildError` response from `wsbuilder` does not support rich errors from validation. Changed this to use the `Validations` block of codersdk responses to return all errors for invalid parameters.
While this feature was experimental, this was used as an escape hatch.
It has been removed to be consistent with the template author's
intentions.
This is backwards compatible: it removes an experimental API field that
is no longer used.
# What does this do?
This does parameter validation for dynamic parameters in `wsbuilder`. All input parameters are validated in `coder/coder` before being sent to terraform.
The heart of this PR is [`ResolveParameters`](https://github.com/coder/coder/blob/b65001e89c0577199a8e470c138c51e91cf2350c/coderd/dynamicparameters/resolver.go#L30-L30).
# What else changes?
`wsbuilder` now needs to load the terraform files into memory to succeed. This does add a larger memory requirement to workspace builds.
# Future work
- Sort workspaces handled by autostart by template version id, so workspaces with the same template version only load the terraform files once from the db and store them in the cache.
Use richer `previewtypes.Parameter` for `wsbuilder`. This is a pre-requirement to adding dynamic parameter validation.
The richer type contains more information than the `db` parameter, so the conversion is lossless.
Alternate fix for https://github.com/coder/coder/issues/18080
Modifies wsbuilder to complete the provisioner job and mark the
workspace as deleted if it is clear that no provisioner will be able to
pick up the delete build.
This has the significant advantage of not deviating too much from the
current semantics of `POST /api/v2/workspacebuilds`.
https://github.com/coder/coder/pull/18460 ends up returning a 204 on
orphan delete due to no build being created.
Downside is that we have to duplicate some responsibilities of
provisionerdserver in wsbuilder.
There is a slight gotcha to this approach, though: if you stop a
provisioner and then immediately try to orphan-delete, the job will
still be created because of the provisioner heartbeat interval. However,
you can cancel it and try again.
This PR changes the logic for how we decide on an agent name.
Previously it followed these steps:
1. Use a name from `customizations.coder.name`
2. Use a name from the terraform resource `coder_devcontainer`
3. Use the dev container's friendly name
With this change it now does:
1. Use a name from `customizations.coder.name`
2. Use a name from the terraform resource `coder_devcontainer`
3. Use a name from the workspace folder
4. Use the dev container's friendly name
We now attempt to construct a valid agent name from the workspace
folder. Should we fail to construct a valid agent name from the
workspace folder, we will fall back to the dev container's friendly
name.
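Constructing a valid agent name from the workspace folder might look like the following sketch; the exact validity rules in the PR may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// agentNameFromFolder lowercases the last path segment and replaces
// invalid characters with hyphens. It returns "" when nothing valid
// remains, signalling the caller to fall back to the friendly name.
func agentNameFromFolder(folder string) string {
	base := folder[strings.LastIndex(folder, "/")+1:]
	var b strings.Builder
	for _, r := range strings.ToLower(base) {
		switch {
		case r >= 'a' && r <= 'z', r >= '0' && r <= '9':
			b.WriteRune(r)
		default:
			b.WriteRune('-')
		}
	}
	return strings.Trim(b.String(), "-")
}

func main() {
	fmt.Println(agentNameFromFolder("/workspaces/My Project!")) // my-project
	fmt.Println(agentNameFromFolder("/workspaces/???"))         // "" -> fall back to friendly name
}
```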
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Currently, the prebuilds documentation states:
```
### Managing resource quotas
Prebuilt workspaces can be used in conjunction with [resource quotas](../../users/quotas.md).
Because unclaimed prebuilt workspaces are owned by the `prebuilds` user, you can:
1. Configure quotas for any group that includes this user.
1. Set appropriate limits to balance prebuilt workspace availability with resource constraints.
If a quota is exceeded, the prebuilt workspace will fail provisioning the same way other workspaces do.
```
If you need to have a separate quota for prebuilds as opposed to regular
users, you are required to create a separate group, as quotas are
applied to groups.
Currently it is not possible to create a separate 'prebuilds' group with
only the prebuilds user to add a quota. This PR skips the org membership
check specifically for the prebuilds user when patching a group.

Fixes https://github.com/coder/coder/issues/17840
NOTE: calling this out as a breaking change so that it is highly visible
in the changelog.
* CLI: Modifies `coder update` to stop the workspace if already running.
* UI: Modifies "update" button to always stop the workspace if already
running.
"Idle" is more accurate than "complete" since:
1. AgentAPI only knows if the screen is active; it has no way of knowing
if the task is complete.
2. The LLM might be done with its current prompt, but that does not mean
the task is complete either (it likely needs refinement).
The "complete" state will be reserved for future definition.
Additionally, in the case where the screen goes idle but the LLM never
reported a status update, we can get an idle icon without a message, and
it looks kinda janky in the UI so if there is no message I display the
state text.
Closes https://github.com/coder/internal/issues/699
resolves #17709
FYI, blink created a first draft which was heavily modified.
## Summary
This PR implements ephemeral parameter handling for workspace
start/restart operations when templates use dynamic parameters
(`use_classic_parameter_flow = false`).
<img width="522" alt="Screenshot 2025-06-18 at 14 35 54"
src="https://github.com/user-attachments/assets/450527c0-cc88-4fc3-b0fa-170bdeb5ea51"
/>
<img width="327" alt="Screenshot 2025-06-18 at 14 35 43"
src="https://github.com/user-attachments/assets/ea74bf8e-d127-489d-b406-edfc5ec1e9a8"
/>

## Changes
### 1. EphemeralParametersDialog Component
- **New**: `site/src/components/EphemeralParametersDialog/`
- Shows a dialog when starting/restarting workspaces with ephemeral
parameters
- Lists ephemeral parameters with names and descriptions
- Provides options to continue without setting values or navigate to
parameters page
### 2. WorkspaceReadyPage Updates
- Added `checkEphemeralParameters()` function using
`API.getDynamicParameters`
- Modified `handleStart` and `handleRestart` to check for ephemeral
parameters
- Only triggers for templates with `use_classic_parameter_flow = false`
- Shows dialog if ephemeral parameters exist, otherwise proceeds
normally
### 3. BuildParametersPopover Updates
- Added special UI for non-classic parameter flow templates with
ephemeral parameters
- Lists ephemeral parameters with descriptions
- Explains that users must use the workspace parameters page
- Provides direct link to `WorkspaceParametersPageExperimental`
---------
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: jaaydenh <1858163+jaaydenh@users.noreply.github.com>
Co-authored-by: Jaayden Halko <jaayden@coder.com>
This PR extracts dynamic parameter rendering logic from
coderd/parameters.go into a new coderd/dynamicparameters package. This
is partly for organization and maintainability, but primarily so the
logic can be reused in `wsbuilder` for validation.
## Description
This PR adds support for deleting prebuilt workspaces via the
authorization layer. It introduces special-case handling to ensure that
`prebuilt_workspace` permissions are evaluated when attempting to delete
a prebuilt workspace, falling back to the standard `workspace` resource
as needed.
Prebuilt workspaces are a subset of workspaces, identified by having
`owner_id` set to `PREBUILD_SYSTEM_USER`.
This means:
* A user with `prebuilt_workspace.delete` permission is allowed to
**delete only prebuilt workspaces**.
* A user with `workspace.delete` permission can **delete both normal and
prebuilt workspaces**.
⚠️ This implementation is scoped to **deletion operations only**. No
other operations are currently supported for the `prebuilt_workspace`
resource.
To delete a workspace, users must have the following permissions:
* `workspace.read`: to read the current workspace state
* `update`: to modify workspace metadata and related resources during
deletion (e.g., updating the `deleted` field in the database)
* `delete`: to perform the actual deletion of the workspace
## Changes
* Introduced `authorizeWorkspace()` helper to handle prebuilt workspace
authorization logic.
* Ensured both `prebuilt_workspace` and `workspace` permissions are
checked.
* Added comments to clarify the current behavior and limitations.
* Moved `SystemUserID` constant from the `prebuilds` package to the
`database` package `PrebuildsSystemUserID` to resolve an import cycle
(commit
https://github.com/coder/coder/pull/18333/commits/f24e4ab4b6f0a56726fd04be2d7302c9fdb52d53).
* Update middleware `ExtractOrganizationMember` to include system user
members.
In the past, we randomly selected a workspace agent if there were
multiple. Unless all agents are running on the same machine with the
same configuration, this is very confusing behavior for a user.
With the introduction of sub agents (dev container agents), we have now
made this an error state and require specifying the agent when there is
more than one (either a normal agent or a sub agent).
This aligns with the behavior of e.g. Coder Desktop.
Fixes coder/internal#696
Add a home and "open in new tab" button. Other controls are not
possible due to cross-origin restrictions.
Closes #18178
---------
Co-authored-by: BrunoQuaresma <bruno_nonato_quaresma@hotmail.com>
Previously, `CODER_WORKSPACE_AGENT_NAME` would always be passed as the
dev container name.
This is invalid for the following scenarios:
- The dev container is specified in terraform
- The dev container has a name customization
This change now runs `ReadConfig` twice. The first read is to extract a
name (if present), from the `devcontainer.json`. The second read will
then use the name we have stored for the dev container (so this could be
either the customization, terraform resource name, or container name).
Closes https://github.com/coder/internal/issues/312
Depends on https://github.com/coder/terraform-provider-coder/pull/408
This PR adds support for defining an **autoscaling block** for
prebuilds, allowing the number of desired instances to scale dynamically
based on a schedule.
Example usage:
```terraform
data "coder_workspace_preset" "us-nix" {
  ...
  prebuilds = {
    instances = 0 # default to 0 instances
    scheduling = {
      timezone = "UTC" # a single timezone is used for simplicity

      # Scale to 3 instances during the work week
      schedule {
        cron      = "* 8-18 * * 1-5" # from 8AM–6:59PM, Mon–Fri, UTC
        instances = 3                # scale to 3 instances
      }

      # Scale to 1 instance on Saturdays for urgent support queries
      schedule {
        cron      = "* 8-14 * * 6" # from 8AM–2:59PM, Sat, UTC
        instances = 1              # scale to 1 instance
      }
    }
  }
}
```
### Behavior
- Multiple `schedule` blocks per `prebuilds` block are supported.
- If the current time matches any defined autoscaling schedule, the
corresponding number of instances is used.
- If no schedule matches, the **default instance count**
(`prebuilds.instances`) is used as a fallback.
### Why
This feature allows prebuild instance capacity to adapt to predictable
usage patterns, such as:
- Scaling up during business hours or high-demand periods
- Reducing capacity during off-hours to save resources
### Cron specification
The cron specification is interpreted as a **continuous time range.**
For example, the expression:
```
* 9-18 * * 1-5
```
is intended to represent a continuous range from **09:00 to 18:59**,
Monday through Friday.
However, due to minor implementation imprecision, it is currently
interpreted as a range from **08:59:00 to 18:58:59**, Monday through
Friday.
This slight discrepancy arises because the evaluation is based on
whether a specific **point in time** falls within the range, using the
`github.com/coder/coder/v2/coderd/schedule/cron` library, which performs
per-minute matching rather than strict range evaluation.
---------
Co-authored-by: Danny Kopping <danny@coder.com>
Deletion of data is uncommon in our database, so the introduction of sub agents
and the deletion of them introduced issues with foreign key assumptions, as can
be seen in coder/internal#685. We could have only addressed the specific case by
allowing cascade deletion of stats as well as handling in the stats collector,
but it's unclear how many more such edge-cases we could run into.
In this change, we mark the rows as deleted via boolean instead, and filter them
out in all relevant queries.
Fixes coder/internal#685
This change adds the `devcontainers-cli` module to ensure the command
has been installed.
Its presence will not change how workspaces behave currently without
additional changes to the terraform.
Updates coder/internal#463
Relates to https://github.com/coder/internal/issues/732
This PR supports specifying a name that will be used for the
devcontainer agent in the customizations section of the
devcontainer.json configuration file.
Addresses feedback that was missed in
https://github.com/coder/coder/pull/18346
- Adds `CODER_WORKSPACE_OWNER_NAME` into the agent environment.
- Logs warnings for when dev container app creation fails.
I'll be honest: I'm not even really sure of the point of this test, but
it was failing due to
```
2025-06-16T15:01:54.0863251Z Error: Received unexpected error:
2025-06-16T15:01:54.0863554Z acquire job:
2025-06-16T15:01:54.0864230Z github.com/coder/coder/v2/coderd/provisionerdserver.(*server).AcquireJob
2025-06-16T15:01:54.0865173Z /home/runner/work/coder/coder/coderd/provisionerdserver/provisionerdserver.go:329
2025-06-16T15:01:54.0865683Z - failed to acquire job:
2025-06-16T15:01:54.0866374Z github.com/coder/coder/v2/coderd/provisionerdserver.(*Acquirer).AcquireJob
2025-06-16T15:01:54.0867262Z /home/runner/work/coder/coder/coderd/provisionerdserver/acquirer.go:148
2025-06-16T15:01:54.0867819Z - pq: canceling statement due to user request
```
which is certainly unintended.
* use `ctx` instead of `session.Context()` for consistency
* log SSH connection start with the phrase `ssh connection` for symmetry
with the stop log and ease of `grep`'ing.
Add apps to the sub agent based on the dev container customization.
The implementation also provides the following env variables for use in
the devcontainer json
- `CODER_WORKSPACE_AGENT_NAME`
- `CODER_WORKSPACE_USER_NAME`
- `CODER_WORKSPACE_NAME`
- `CODER_DEPLOYMENT_URL`
Adds a custom marshaler to handle some cases where nils were being
marshaled to nulls, causing the web UI to throw an error.
---------
Co-authored-by: Steven Masley <stevenmasley@gmail.com>
I modified the proxy host cache we already had and were using for
websocket csp headers to also include the wildcard app host, then used
those for frame-src policies.
I did not add frame-ancestors, since if I understand correctly, those
would go on the app, and this middleware does not come into play there.
Maybe we will want to add it on workspace apps like we do with cors, if
we find apps are setting it to `none` or something.
Closes https://github.com/coder/internal/issues/684
Updates Terraform from 1.11.4 to 1.12.2 across all relevant files.
Changes include:
- GitHub Actions setup-tf configuration
- Dockerfile configurations (dogfood and base)
- Install script
- Provisioner install.go with version constants
- Test data files (tfstate.json, tfplan.json, version.txt)
Follows the same pattern as PR #17323 which updated to 1.11.4.
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: sreya <4856196+sreya@users.noreply.github.com>
Updates all Go version references in the codebase to use Go 1.24.4.
## Changes
- Update `go.mod` to use Go 1.24.4
- Update `dogfood/coder/Dockerfile` GO_VERSION to 1.24.4
- Update `.github/actions/setup-go/action.yaml` default version to
1.24.4
- Update `examples/parameters-dynamic-options/variables.yml` to use
golang:1.24
## Testing
- ✅ All Go version references are consistent (verified with
`scripts/check_go_versions.sh`)
- ✅ Build tested successfully with Go 1.24.4
- ✅ Binary runs correctly
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: sreya <4856196+sreya@users.noreply.github.com>
The fields must be nullable because there’s a period of time between
inserting a row into the database and finishing the “plan” provisioner
job when the final value of the field is unknown.
This commit consolidates two container endpoints on the backend and improves the
frontend devcontainer support by showing names and displaying apps as
appropriate.
With this change, the frontend now has knowledge of the subagent and we can also
display things like port forwards.
The frontend was updated to show dev container labels on the border as well as
subagent connection status. The recreation flow was also adjusted a bit to show
placeholder app icons when relevant.
Support for apps was also added, although these are still WIP on the backend.
And the port forwarding utility was added in since the sub agents now provide
the necessary info.
Fixes coder/internal#666
## Description
Adds tests for `ReconcileAll` to verify the full reconciliation flow
when handling expired prebuilds. This complements existing lower-level
tests by checking multiple reconciliation actions (delete + create) at
the higher reconciliation cycle level.
Related to this comment:
https://github.com/coder/coder/pull/17996#issuecomment-2910516489
Large modules can potentially break or slow down template behaviors. Our
primary dogfood template should experience this if it becomes an issue.
Just trying to catch things in dogfood before we experience them in the
wild.
The file cache was caching `Unauthorized` errors if a user without the
right perms opened the file first, so all future opens would fail. Now
the cache always opens with a subject that can read files, and authz is
checked on each Acquire, per user.
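A minimal sketch of the idea — the cache stores successfully fetched content only, and each acquire runs the caller's authorization check before touching the cache, so an unauthorized first caller cannot poison the entry for everyone else. All names here (`fileCache`, `Acquire`, the `canRead` flag) are hypothetical stand-ins, not the actual coderd types:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errUnauthorized = errors.New("unauthorized")

// fileCache caches file contents only; authorization failures are never
// cached, so one unauthorized caller cannot break the entry for others.
type fileCache struct {
	mu    sync.Mutex
	files map[string][]byte
	fetch func(id string) []byte // privileged fetch using a subject that can always read files
}

// Acquire checks the caller's permission first, then serves from cache.
func (c *fileCache) Acquire(id string, canRead bool) ([]byte, error) {
	if !canRead { // per-user authz check on every Acquire
		return nil, errUnauthorized
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	if b, ok := c.files[id]; ok {
		return b, nil
	}
	b := c.fetch(id)
	c.files[id] = b
	return b, nil
}

func main() {
	c := &fileCache{
		files: map[string][]byte{},
		fetch: func(id string) []byte { return []byte("tf source for " + id) },
	}
	_, err := c.Acquire("tpl1", false) // unauthorized user goes first
	fmt.Println(err)                   // unauthorized, but nothing bad is cached
	b, _ := c.Acquire("tpl1", true)    // authorized user still succeeds
	fmt.Println(string(b))
}
```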
Bumps [github.com/gen2brain/beeep](https://github.com/gen2brain/beeep)
from 0.0.0-20220402123239-6a3042f4b71a to 0.11.1.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/gen2brain/beeep/commits/v0.11.1">compare
view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This is meant to complement the existing task reporter since the LLM
does not call it reliably.
It also includes refactoring to use the common agent flags/env vars.
This PR implements protobuf streaming to handle large module files by:
1. **Streaming large payloads**: When module files exceed the 4MB limit,
they're streamed in chunks using a new UploadFile RPC method
2. **Database storage**: Streamed files are stored in the database and
referenced by hash for deduplication
3. **Backward compatibility**: Small module files continue using the
existing direct payload method
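The chunk-then-hash flow in points 1 and 2 can be sketched as below. This is an illustrative stand-in (the chunk size, `store`, and `put` are assumptions, not the real RPC or database code); the key property is that identical content hashes to the same key, so re-uploads deduplicate:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

const chunkSize = 4 << 20 // mirrors the 4MB message limit (illustrative)

// chunk splits a payload into pieces no larger than size, the way a
// streaming RPC would send them.
func chunk(data []byte, size int) [][]byte {
	var out [][]byte
	for len(data) > 0 {
		n := size
		if len(data) < n {
			n = len(data)
		}
		out = append(out, data[:n])
		data = data[n:]
	}
	return out
}

// store deduplicates by content hash: re-uploading identical bytes
// references the same stored row.
type store struct{ byHash map[string][]byte }

func (s *store) put(chunks [][]byte) string {
	h := sha256.New()
	var all []byte
	for _, c := range chunks {
		h.Write(c) // hash is computed over the reassembled stream
		all = append(all, c...)
	}
	key := hex.EncodeToString(h.Sum(nil))
	if _, ok := s.byHash[key]; !ok {
		s.byHash[key] = all
	}
	return key
}

func main() {
	s := &store{byHash: map[string][]byte{}}
	payload := make([]byte, 10<<20) // 10MB, exceeds the 4MB limit
	k1 := s.put(chunk(payload, chunkSize))
	k2 := s.put(chunk(payload, chunkSize)) // identical content -> same hash
	fmt.Println(k1 == k2, len(s.byHash))   // deduplicated to one stored entry
}
```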
Adds database migrations required for the Tasks feature.
There's a slight difference between the migrations in this PR and the
RFC: this PR adds `NOT NULL` constraints to the `has_ai_task` columns.
It was an oversight on my part when I wrote the RFC - I assumed the
`DEFAULT FALSE` value would make the columns implicitly NOT NULL, but
that's not the case with Postgres. We have no use for the NULL value.
The `DEFAULT FALSE` statement ensures that the migration will pass even
when there are existing rows in the template version and workspace
builds tables, so there's no danger in adding the `NOT NULL`
constraints.
This PR fixes a mistake from the previous PR
(https://github.com/coder/coder/pull/18342): merged configuration
results in the customization being an array, not an object.
This PR also moves `displayApps` from being an array to being an object,
like the terraform provider has.
It seems we do not validate external auth in the backend currently, so I
opted to do this in the frontend to match the create workspace page.
This adds a new section underneath the task prompt for external auth
that only shows when there is non-optional missing auth.
Closes #18166
As part of an information architecture overhaul, this PR reorganizes the
About section and adds a Support section (but not content to it yet)
[preview](https://coder.com/docs/@docs-ia-about/about)
this PR is intentionally limited in scope so that we can ship meaningful
changes faster and followup PRs should include:
- [ ] edit + overhaul the About page
- [ ] decide on the `start` directory
- [ ] ~screenshots page updates~ (this should happen July or later)
redirects PR: https://github.com/coder/coder.com/pull/944
---------
Co-authored-by: EdwardAngert <17991901+EdwardAngert@users.noreply.github.com>
This removes the opt-in and opt-out buttons for dynamic parameters on
the create workspace page and the workspace parameters settings page.
---------
Co-authored-by: Steven Masley <stevenmasley@gmail.com>
Following some issues we discovered on dogfood after merging #17878, we
think `prompt=consent` is required for refresh tokens to be sent by
Google every time you sign in.
Fixes #15523
Uses latest https://github.com/coder/tailscale which includes https://github.com/coder/tailscale/pull/85 to stop selecting paths with small MTU for direct connections.
Also updates the tailnet integration test to reproduce the issue. The previous version had the 2 peers connected by a single veth, but this allows the OS to fragment the packet. In the new version, the 2 peers (and server) are all connected by a central router. The link between peer 1 and the router has an adjustable MTU. IPv6 does not allow packets to be fragmented by intermediate routers, so sending a too-large packet in this scenario forces the router to drop packets and reproduce the issue (without the tailscale changes).
It unfortunately doesn't seem possible, even with a custom ruleguard rule, to mark a function as requiring its return value be used; it looks like you have to go all in on a linter that rejects *any* unused return values.
## Problem
When creating a workspace from a template with dynamic parameter
ordering, parameter values are not displaying correctly when the order
changes. This occurs when a parameter's `order` value depends on another
parameter's value.
**Example scenario:**
```terraform
data "coder_parameter" "reorder" {
name = "reorder"
type = "bool"
default = false
order = 1
}
data "coder_parameter" "cpu" {
order = data.coder_parameter.reorder.value ? 0 : 2
name = "cpu"
type = "number"
default = 4
}
```
When the user toggles `reorder` from `false` to `true`, the `cpu`
parameter moves from position 2 to position 0, but its value gets mixed
up with the `reorder` parameter's value.
## Root Cause
The issue was in `CreateWorkspacePageViewExperimental.tsx` where
parameters were rendered using array indices instead of parameter names:
```typescript
// Problematic code
const parameterField = `rich_parameter_values.${index}`;
const formValue = form.values?.rich_parameter_values?.[index]?.value || "";
```
When parameters are reordered:
1. The `parameters` array order changes based on the new `order` values
2. The `form.values.rich_parameter_values` array maintains the original
order
3. Array index-based lookup causes values to be mismatched
## Solution
Implemented name-based lookup to ensure parameter values stay with their
correct parameters:
```typescript
// Find parameter value by name instead of index
const currentParameterValueIndex =
  form.values.rich_parameter_values?.findIndex(
    (p) => p.name === parameter.name,
  ) ?? -1;

// Use the found index for form field mapping
const parameterFieldIndex =
  currentParameterValueIndex !== -1 ? currentParameterValueIndex : index;
const parameterField = `rich_parameter_values.${parameterFieldIndex}`;

// Get form value by name to ensure correct mapping
const formValue =
  currentParameterValueIndex !== -1
    ? form.values?.rich_parameter_values?.[currentParameterValueIndex]?.value || ""
    : "";
```
## Testing
- ✅ Created test script that validates the fix works correctly
- ✅ Tested with the provided template showing dynamic parameter ordering
- ✅ Verified parameter values persist correctly during reordering
- ✅ Confirmed no TypeScript compilation issues
## Impact
This fix ensures that users can reliably use dynamic parameter ordering
in their templates without losing parameter values when the order
changes. This is particularly important for templates that use
conditional parameter visibility and ordering based on user selections.
---------
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Jaayden Halko <jaayden@coder.com>
Bumps gopkg.in/DataDog/dd-trace-go.v1 from 1.73.0 to 1.74.0.
<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>
| Dependency Name | Ignore Conditions |
| --- | --- |
| gopkg.in/DataDog/dd-trace-go.v1 | [>= 1.58.a, < 1.59] |
</details>
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Completely missed this in the original PR for adding support for
creating sub agents. This now allows specifying a list of display apps
to be added to the agent.
# Add separate token lifetime limits for administrators
This PR introduces a new configuration option `--max-admin-token-lifetime` that allows administrators to create API tokens with longer lifetimes than regular users. By default, administrators can create tokens with a lifetime of up to 7 days (168 hours), while the existing `--max-token-lifetime` setting continues to apply to regular users.
The implementation:
- Adds a new `MaximumAdminTokenDuration` field to the session configuration
- Modifies the token validation logic to check the user's role and apply the appropriate lifetime limit
- Updates the token configuration endpoint to return the correct maximum lifetime based on the user's role
- Adds tests to verify that administrators can create tokens with longer and shorter lifetimes
- Updates documentation and help text to reflect the new option
This change allows organizations to grant administrators extended token lifetimes while maintaining tighter security controls for regular users.
Fixes #17395
Fixes #18199
Corrects handling of paths with spaces in the `Match !exec` clause we
use to determine whether Coder Connect is running. This is handled
differently than the ProxyCommand, so we have a different escape
routine, which also varies by OS.
On Windows, we resort to a pretty gnarly hack, but it does work and I
feel the only other option would be to reduce functionality such that we
could not detect the Coder Connect state.
Closes https://github.com/coder/internal/issues/677
Resolves flakes such as:
```
$ go test -race -run "TestRunStopRace" github.com/coder/coder/v2/coderd/notifications -count=10000 -parallel $(nproc)
--- FAIL: TestRunStopRace (0.00s)
t.go:106: 2025-06-06 02:44:39.348 [debu] notifications-manager: notification manager started
t.go:106: 2025-06-06 02:44:39.348 [debu] notifications-manager: graceful stop requested
t.go:106: 2025-06-06 02:44:39.348 [debu] notifications-manager: notification manager stopped
t.go:115: 2025-06-06 02:44:39.348 [erro] notifications-manager: notification manager stopped with error ...
error= manager already closed:
github.com/coder/coder/v2/coderd/notifications.(*Manager).loop
/home/coder/coder/coderd/notifications/manager.go:166
*** slogtest: log detected at level ERROR; TEST FAILURE ***
--- FAIL: TestRunStopRace (0.00s)
t.go:106: 2025-06-06 02:44:41.632 [debu] notifications-manager: notification manager started
t.go:106: 2025-06-06 02:44:41.632 [debu] notifications-manager: graceful stop requested
t.go:106: 2025-06-06 02:44:41.632 [debu] notifications-manager: notification manager stopped
t.go:115: 2025-06-06 02:44:41.633 [erro] notifications-manager: notification manager stopped with error ...
error= manager already closed:
github.com/coder/coder/v2/coderd/notifications.(*Manager).loop
/home/coder/coder/coderd/notifications/manager.go:166
*** slogtest: log detected at level ERROR; TEST FAILURE ***
FAIL
FAIL github.com/coder/coder/v2/coderd/notifications 6.847s
FAIL
```
These error logs are caused as a result of the `Manager` `Run` start operation being asynchronous. In the flaking test case we immediately call `Stop` after `Run`. It's possible for `Stop` to be scheduled to completion before the goroutine spawned by `Run` calls `loop` and checks `closed`. If this happens, `loop` returns an error and produces the error log.
We'll address this by replacing this specific error log with a warning log.
```
$ go test -run "TestRunStopRace" github.com/coder/coder/v2/coderd/notifications -count=10000 -parallel $(nproc)
ok github.com/coder/coder/v2/coderd/notifications 1.294s
$ go test -race github.com/coder/coder/v2/coderd/notifications -count=100 -parallel $(nproc)
ok github.com/coder/coder/v2/coderd/notifications 26.525s
```
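The shape of the race can be shown deterministically by calling the stand-in `Stop` before `Run`'s goroutine gets to `loop` — the closed check then fires and produces the "already closed" condition that should be logged as a warning, not an error. This is a schematic reproduction, not the actual notifications `Manager`:

```go
package main

import (
	"fmt"
	"sync"
)

// manager mimics the race-prone shape: Run spawns loop asynchronously,
// so Stop can win and mark the manager closed before loop checks the flag.
type manager struct {
	mu     sync.Mutex
	closed bool
	done   chan error
}

func newManager() *manager { return &manager{done: make(chan error, 1)} }

func (m *manager) Run() {
	go func() { m.done <- m.loop() }()
}

func (m *manager) loop() error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.closed {
		// Benign when Stop simply raced ahead of us: callers should treat
		// this as warning-level, not an error.
		return fmt.Errorf("manager already closed")
	}
	return nil // a real manager would dispatch notifications here
}

func (m *manager) Stop() {
	m.mu.Lock()
	m.closed = true
	m.mu.Unlock()
}

func main() {
	m := newManager()
	m.Stop() // deterministic stand-in for Stop winning the race
	m.Run()
	fmt.Println(<-m.done) // expected when Stop wins; log as a warning
}
```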
My understanding is that `io.EOF` is eventually expected, so logging it
as an error may be confusing. For other errors we should definitely
WARN.
```
[info] provisionerd-ip-172-31-12-44-14: recv done on Session session_id=22b9ef8a-9cd6-4188-98e0-573a50d724cc error=EOF
```
Refactors tailnet integration test and adds UDP echo tests with different MTU related to #15523
I still haven't gotten to the bottom of what's causing the issue (the added test case I expected to fail actually succeeds), but these integration test improvements are generally useful.
also:
* consolidates networking setup with easy and hard NAT
* consolidates client setup
* makes Client2 act like an agent at the tailnet layer, so it will send ReadyForHandshake and speed up the tunnel establishment
* adds support for logging tunneled packets
* adds support for dumping outer (underlay) IP traffic
* adds support for adjusting veth MTU
* adds support for IPv6 in the outer (underlay) network topology
Closes #17982.
The purpose of this PR is to expose network latency via the API used by Coder Desktop.
This PR has the tunnel ping all known agents every 5 seconds, in order to produce an instance of:
```proto
message LastPing {
// latency is the RTT of the ping to the agent.
google.protobuf.Duration latency = 1;
// did_p2p indicates whether the ping was sent P2P, or over DERP.
bool did_p2p = 2;
// preferred_derp is the human readable name of the preferred DERP region,
// or the region used for the last ping, if it was sent over DERP.
string preferred_derp = 3;
// preferred_derp_latency is the last known latency to the preferred DERP
// region. Unset if the region does not appear in the DERP map.
optional google.protobuf.Duration preferred_derp_latency = 4;
}
```
The contents of this message are stored and included on all subsequent upsertions of the agent.
Note that we upsert existing agents every 5 seconds to update the `last_handshake` value.
On the desktop apps, this message will be used to produce a tooltip similar to that of the VS Code extension:
<img width="495" alt="image" src="https://github.com/user-attachments/assets/d8b65f3d-f536-4c64-9af9-35c1a42c92d2" />
(wording not final)
Unlike the VS Code extension, we omit:
- The Latency of *all* available DERP regions. It seems not ideal to send a copy of this entire map for every online agent, and it certainly doesn't make sense for it to be on the `Agent` or `LastPing` message.
If we do want to expose this info on Coder Desktop, we should consider how best to do so; maybe we want to include it on a more generic `Netcheck` message.
- The current throughput (Bytes up/down). This is out of scope of the linked issue, and is non-trivial to implement. I'm also not sure of the value given the frequency we're doing these status updates (every 5 seconds).
If we want to expose it, it'll be in a separate PR.
<img width="343" alt="image" src="https://github.com/user-attachments/assets/8447d03b-9721-4111-8ac1-332d70a1e8f1" />
Always show preset parameters in CreateWorkspacePageViewExperimental if
the preset parameter has any diagnostics, regardless of the
showPresetParameters toggle state.
This ensures that users can see and address errors in preset parameters
even when the "Show preset parameters" toggle is disabled.
Fixes coder/internal#651
---------
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Jaayden Halko <jaayden@coder.com>
`.git` directories were causing identical modules to have different
hashes. This adds unnecessary bloat to the database, and the `.git`
directory is not needed for dynamic params.
Update the copy on the task starting page to be more user-friendly:
- Change "Building the workspace" to "Starting your workspace"
- Change "Your task will run as soon as the workspace is ready" to "This
should take a few minutes"
The new copy provides clearer expectations about timing and uses more
user-friendly language.
Fixes#18164
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Closes #18071
- [x] move `## Accessing web apps in a secure browser context` to the
troubleshooting section
- [x] use a compacted view for troubleshooting topics to prevent them
from occupying a significant space on page
- [x] remove `Issues updating Coder Desktop`
- [x] Update screenshots
---------
Co-authored-by: EdwardAngert <17991901+EdwardAngert@users.noreply.github.com>
- hardcode a custom pathname (`/chat/embed`) to use in the sidebar
iframe. this is a temporary fix so that the agentapi chat displays
properly
- make the sidebar a bit wider so that the chat fits without line
wrapping
<img width="1512" alt="Screenshot 2025-06-04 at 15 32 30"
src="https://github.com/user-attachments/assets/8be5d053-d7b3-40da-8b62-a6151975527d"
/>
Instead of using `ResourceSystem` as the resource for
`InsertWorkspaceApp`, we instead use the associated workspace (if it
exists), with the action `ActionUpdate`.
**Demo:**
<img width="1512" alt="Screenshot 2025-06-03 at 14 36 25"
src="https://github.com/user-attachments/assets/e4a61bd3-2182-4593-991d-5db9573a5b7f"
/>
- Extract components so they can be reused and are easier to reason
about
- When claude-code-web is present, embed the chat in the sidebar
- Make the sidebar wider when the chat is shown, so it fits better
**Does not include:**
- Sidebar width drag and drop control. The width is static but would be
nice to have a control to customize it.
## Summary
This PR adds template export functionality to the Coder UI, addressing
issue #17859. Users can now export templates directly from the web
interface without requiring CLI access.
## Changes
### Frontend API
- Added `downloadTemplateVersion` function to `site/src/api/api.ts`
- Supports both TAR (default) and ZIP formats
- Uses existing `/api/v2/files/{fileId}` endpoint with format parameter
### UI Enhancement
- Added "Export as TAR" and "Export as ZIP" options to template dropdown
menu
- Positioned logically between "Duplicate" and "Delete" actions
- Uses download icon from Lucide React for consistency
### User Experience
- Files automatically named as
`{templateName}-{templateVersion}.{extension}`
- Immediate download trigger on click
- Proper error handling with console logging
- Clean blob URL management to prevent memory leaks
## Testing
The implementation has been tested for:
- ✅ TypeScript compilation
- ✅ Proper function signatures and types
- ✅ UI component integration
- ✅ Error handling structure
## Screenshots
The export options appear in the template dropdown menu:
- Export as TAR (default format, compatible with `coder template pull`)
- Export as ZIP (compressed format for easier handling)
## Fixes
Closes #17859
## Notes
This enhancement makes template management more accessible for users
who:
- Don't have CLI access
- Manage deployments on devices without Coder CLI
- Prefer web-based workflows
- Need to transfer templates between environments
The implementation follows existing patterns in the codebase and
maintains consistency with the current UI design.
---------
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
Co-authored-by: Kyle Carberry <kyle@coder.com>
Relates to https://github.com/coder/coder/issues/17818
Note: due to limitations in `cobra/serpent` I ended up having to use `-`
to signify absence of provisioner tags. This value is not a valid
key-value pair and thus not a valid tag.
Refactor the workspace SSH command syntax across the project to use the
"workspace.coder" format instead of "coder.workspace". This standardizes
the SSH host entries for better consistency and clarity.
This is a follow-up from #17445 and recommends using the suffix-based
format for all new Coder versions.
<img width="418" alt="image"
src="https://github.com/user-attachments/assets/3893f840-9ce1-4803-a013-736068feb328"
/>
PR #18039, which updated react-query to 5.77.0, introduced an issue on
the create workspace page where the error dialog would be briefly
displayed while the page is loading:
https://github.com/coder/coder/pull/18039
The issue is that there is a moment when `optOutQuery.isLoading` is
false and `optOutQuery.data` is undefined, causing the ErrorAlert to
display.
Subdomain labels have a maximum of 63 characters, so we don't want a
long default workspace name that could overflow this limit. With that in
mind, I'm removing 3 characters from the default name.
PS: I've been facing issues with that already. Eg:
```
claude-code-web--dev--ai-task-1748889021126--brunoquaresma--apps.sao-paulo.fly.dev.coder.com
```
Updates the placeholder text in the task prompt box to be consistent
with the "task" terminology used throughout the UI.
**Changes:**
- Changed placeholder from "Write an action for your AI agent to
perform..." to "Prompt your AI agent to start a task..."
- This aligns with the "Run task" button text and overall task-focused
language
**Testing:**
- Verified the text change renders correctly in the UI
- No functional changes, only text update
Co-authored-by: blink-so <blink-so@users.noreply.github.com>
Related to https://github.com/coder/coder/issues/15109.
Running postgres tests used to create a new postgres docker container
every time. I believe the slowdown might've been caused by that and was
misattributed to postgres performance.
```
coder@main ~/coder ((0e90ac29))> DB=ci gotestsum --packages="./coderd/idpsync" -- -count=1
✓ coderd/idpsync (1.471s)
DONE 91 tests in 4.766s
```
## Summary
- Fixes image build failure by adding fallback to kernel.org mirrors
when Ubuntu/Debian repositories fail
- Ensures unzip is available during Bun installation process
- Improves apt repository configuration to prevent 403 errors in CI
## Root Cause
The build was failing for two reasons:
1. Network issues with Ubuntu/Debian package repositories returning 403
Forbidden errors
2. Unzip package not being reliably available in the image layer where
Bun installation happens
## Fix
- Added fallback mirrors for apt repositories using kernel.org mirrors
- Explicitly installed unzip before using it in the Bun installation
- Added proper cleanup after package installations to keep image size
down
## Test plan
- The CI workflow that was previously failing should now succeed
- Build the dogfood image locally with `cd dogfood/coder && docker build
-t codercom/oss-dogfood:test .`
- Verify Bun is correctly installed and can be used
Fixes build failure from PR #18154 (original PR that added Bun)
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
- Adds Bun JavaScript runtime (v1.2.15) to the dogfood image
- Installs Bun to /usr/local/bin to ensure persistence when /home/coder
is mounted
- Verified that Bun works correctly in the built container
## Test plan
1. Build the dogfood image with `cd dogfood/coder && docker build -t
codercom/oss-dogfood:test .`
2. Run the container with `docker run --rm -it codercom/oss-dogfood:test
bash`
3. Test Bun in the container with:
- `bun --version` (should output 1.2.15)
- `cd /tmp && echo "console.log('Hello from Bun\!');" > test.js && bun
run test.js`
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
Auto select the proxy on first load (stored in local storage, so per
browser), then defer to user selection. The auto selected proxy will not
update again once set.
Discovered an unhelpful error when running a CLI command without internet (I didn't know I didn't have internet!):
```
$ coder ls
Encountered an error running "coder list", see "coder list --help" for more information
error: <nil>
```
The source of this was that calling `Unwrap()` on `net.DNSError` can return nil, causing the whole error trace to get replaced by it. Instead, we'll just treat a nil `Unwrap()` return value as if there was nothing to unwrap.
The result is:
```
$ coder ls
Encountered an error running "coder list", see "coder list --help" for more information
error: query workspaces: Get "https://dev.coder.com/api/v2/workspaces?q=owner%3Ame": dial tcp: lookup dev.coder.com: no such host
```
Set the form parameters using autofill parameters based on the workspace
build parameters for the latest build
---------
Co-authored-by: Steven Masley <stevenmasley@gmail.com>
Relates to https://github.com/coder/coder/issues/15432
Ensures that no workspace build timings with zero values for started_at or ended_at are inserted into the DB or returned from the API.
Closes https://github.com/coder/internal/issues/619
Implement the `coderd` side of the AgentAPI for the upcoming
dev-container agents work.
`agent/agenttest/client.go` is left unimplemented for a future PR
working to implement the agent side of this feature.
Adds telemetry for a _global_ account of prebuilt workspaces created,
failed to build, and claimed.
Partitioning this data by template/preset tuple is not currently in
scope.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Closes #18088.
The linked issue is misleading -- `coder config-ssh` continues to support the `coder.` prefix. The reason the command
`ssh coder.workspace.agent` fails is that `coder ssh workspace.agent` wasn't supported. This PR fixes that.
We know we used to support `workspace.agent`, as this is what we recommend in the Web UI:

This PR also adds support for `coder ssh agent.workspace.owner`, such that after running `coder config-ssh`, a command like
```
ssh agent.workspace.owner.coder
```
works, even without Coder Connect running. This is done for parity with an existing workflow that uses `ssh workspace.coder`, which either uses Coder Connect if available, or the CLI.
The existing code persists all static parameters and their values, using
the previous build as the source if no new inputs are found.
Dynamic params do not have their parameter state saved to disk. So
instead, all previous values are always persisted, and new inputs
override them.
Adds a database trigger that runs on insert and update of the
`workspace_agents` table. The trigger ensures that the agent name is
unique within the context of the workspace build it is being inserted
into.
Parameters normally render error diagnostics as red text. The goal here
is to make errors more obvious when the form_type is `error`, meaning
the parameter could not be processed correctly.
<img width="543" alt="Screenshot 2025-05-27 at 18 35 50"
src="https://github.com/user-attachments/assets/2265553e-34a3-4526-8209-6253d541f784"
/>
Logs emitted by dynamic params did not have any additional scope or
context, and are not helpful in the current state. A future change can
capture these logs for display somewhere.
Does this by using the latest `preview`.
Resolves coder/preview#137
This hides the `Use classic workspace creation form` checkbox on the
template settings page if the dynamic-parameters experiment is not
enabled
Add mention of "workspace parameters settings form" in the checkbox
description as this is also affected.
Relates to https://github.com/coder/coder/issues/15432
* Adds a storybook entry for zero values in provisioner timings.
* Coerces a 'zero' start time to an 'instant'.
* Improves timing chart handling for large timeframes. Previously, this
would have caused the tab to run out of memory when encountering a
`time.Time{}`.
* Render 'instants' as 'invalid' in timing chart.
```
// Report a metric only if the preset uses the latest version of the template and the template is not deleted.
// This avoids conflicts between metrics from old and new template versions.
//
// NOTE: Multiple versions of a preset can exist with the same orgName, templateName, and presetName,
// because templates can have multiple versions — or deleted templates can share the same name.
//
// The safest approach is to report the metric only for the latest version of the preset.
// When a new template version is released, the metric for the new preset should overwrite
// the old value in Prometheus.
//
// However, there’s one edge case: if an admin creates a template, it becomes hard-limited,
// then deletes the template and never creates another with the same name,
// the old preset will continue to be reported as hard-limited —
// even though it’s deleted. This will persist until `coderd` is restarted.
```
## Summary
This PR updates the terraform/testdata by running
`provisioner/terraform/testdata/generate.sh` script. These changes occur
from `terraform-provider-coder`
[v2.4.2](https://github.com/coder/terraform-provider-coder/releases/tag/v2.4.2)
and are associated with the introduction of an `api_key_scope` optional
field with a default value:
https://github.com/coder/terraform-provider-coder/pull/391
## Changes
* Run `provisioner/terraform/testdata/generate.sh` script.
* Update `resource_test.go` to include `api_key_scope`
## Summary
This PR introduces support for expiration policies in prebuilds. The TTL
(time-to-live) is retrieved from the Terraform configuration
([terraform-provider-coder
PR](https://github.com/coder/terraform-provider-coder/pull/404)):
```
prebuilds = {
instances = 2
expiration_policy {
ttl = 86400
}
}
```
**Note**: Since there is no need for precise TTL enforcement down to the
second, in this implementation expired prebuilds are handled in a single
reconciliation cycle: they are deleted, and new instances are created
only if needed to match the desired count.
## Changes
* The outcome of a reconciliation cycle is now expressed as a slice of
reconciliation actions, instead of a single aggregated action.
* Adjusted reconciliation logic to delete expired prebuilds and
guarantee that the number of desired instances is correct.
* Updated relevant data structures and methods to support expiration
policies parameters.
* Added documentation to `Prebuilt workspaces` page
* Updated `terraform-provider-coder` to version 2.5.0:
https://github.com/coder/terraform-provider-coder/releases/tag/v2.5.0
Depends on: https://github.com/coder/terraform-provider-coder/pull/404
Fixes: https://github.com/coder/coder/issues/17916
Closes https://github.com/coder/coder/issues/17988
Define `preset_hard_limited` metric which for every preset indicates
whether a given preset has reached the hard failure limit (1 for
hard-limited, 0 otherwise).
CLI example:
```
curl -X GET localhost:2118/metrics | grep preset_hard_limited
# HELP coderd_prebuilt_workspaces_preset_hard_limited Indicates whether a given preset has reached the hard failure limit (1 for hard-limited, 0 otherwise).
# TYPE coderd_prebuilt_workspaces_preset_hard_limited gauge
coderd_prebuilt_workspaces_preset_hard_limited{organization_name="coder",preset_name="GoLand: Large",template_name="Test7"} 1
coderd_prebuilt_workspaces_preset_hard_limited{organization_name="coder",preset_name="GoLand: Large",template_name="ValidTemplate"} 0
coderd_prebuilt_workspaces_preset_hard_limited{organization_name="coder",preset_name="IU: Medium",template_name="Test7"} 1
coderd_prebuilt_workspaces_preset_hard_limited{organization_name="coder",preset_name="IU: Medium",template_name="ValidTemplate"} 0
coderd_prebuilt_workspaces_preset_hard_limited{organization_name="coder",preset_name="WS: Small",template_name="Test7"} 1
```
NOTE:
```go
if !ps.Preset.Deleted && ps.Preset.UsingActiveVersion {
c.metrics.trackHardLimitedStatus(ps.Preset.OrganizationName, ps.Preset.TemplateName, ps.Preset.Name, ps.IsHardLimited)
}
```
Only the active template version is tracked. If an admin creates a new
template version, the old metric value (for the previous template
version) will be overwritten with the new value (for the active template
version), because `template_version` is not part of the metric labels:
```go
labels = []string{"template_name", "preset_name", "organization_name"}
```
The implementation is similar to that of the
`MetricResourceReplacementsCount` metric.
---------
Co-authored-by: Susana Ferreira <ssncferreira@gmail.com>
This change introduces a refactor of the devcontainers recreation logic
which is now handled asynchronously rather than being request scoped.
The response was consequently changed from "No Content" to "Accepted" to
reflect this.
A new `Status` field was introduced to the devcontainer struct which
replaces `Running` (bool). This reflects that the devcontainer can now
be in various states (starting, running, stopped or errored).
The status field also protects against multiple concurrent recreations,
as long as they are initiated via the API.
Updates #16424
This change replaces date-fns with dayjs throughout the codebase for
more consistent date/time handling and to reduce bundle size. It also
tries to make the formatting and usage consistent.
**Why dayjs over date-fns?**
Just because we were already using dayjs more broadly. Its formatting
capabilities were also easier to extend.
User name, avatar URL, and last seen at are not required fields, so they
can be empty. Instead of returning the Go zero values, we want to make
the API more agnostic and omit them when they are empty. This makes the
docs and usage much clearer for consumers.
The goal is to better integrate the activity column data with the
existent data:
- Make the message one line, the full message is in the tooltip, and
display the state at the bottom. This way, it is visually consistent
with the other columns like status, name and template.
- Moved the app, and uri, to the actions column, instead of showing them
together with the message in the activity column.
**Previous:**
<img width="1512" alt="Screenshot 2025-05-21 at 17 28 46"
src="https://github.com/user-attachments/assets/ea9188a5-d82e-416c-b961-edf0104f66c6"
/>
**After:**
<img width="1512" alt="Screenshot 2025-05-21 at 17 28 57"
src="https://github.com/user-attachments/assets/f50dbe82-cd3e-4448-9fa2-bde9193166d6"
/>
This change introduces a significant refactor to the agentcontainers API
and enables periodic updates of Docker containers rather than on-demand.
Consequently this change also allows us to move away from using a
locking channel and replace it with a mutex, which simplifies usage.
Additionally a previous oversight was fixed, and testing added, to clear
devcontainer running/dirty status when the container has been removed.
Updates coder/coder#16424
Updates coder/internal#621
This PR starts running test-go-pg on macOS and Windows in regular CI.
Previously this suite was only run in the nightly gauntlet for 2
reasons:
- it was flaky
- it was slow (took 17 minutes)
We've since stabilized the flakiness by switching to depot runners,
using ram disks, optimizing the number of tests run in parallel, and
automatically re-running failing tests. We've also [brought
down](https://github.com/coder/coder/pull/17756) the time to run the
suite to 9 minutes. Additionally, this PR allows test-go-pg to use cache
from previous runs, which speeds it up further. The cache is only used
on PRs, `main` will still run tests without it.
This PR also:
- removes the nightly gauntlet since all tests now run in regular CI
- removes the `test-cli` job for the same reason
- removes the `setup-imdisk` action which is now fully replaced by
[coder/setup-ramdisk-action](https://github.com/coder/setup-ramdisk-action)
- makes 2 minor changes which could be separate PRs, but I rolled them
into this because they were helpful when iterating on it:
- replace the `if: always()` condition on the `gen` job with a `if: ${{
!cancelled() }}` to allow the job to be cancelled. Previously the job
would run to completion even if the entire workflow was cancelled. See
[the GitHub
docs](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/evaluate-expressions-in-workflows-and-actions#always)
for more details.
- disable the recently added `TestReinitializeAgent` since it does not
pass on Windows with Postgres. There's an open issue to fix it:
https://github.com/coder/internal/issues/642
This PR will:
- unblock https://github.com/coder/coder/issues/15109
- alleviate https://github.com/coder/internal/issues/647
I tested caching by temporarily enabling cache upload on this PR: here's
[a
run](https://github.com/coder/coder/actions/runs/15119046903/job/42496939341?pr=17853#step:13:1296)
showing cache being used.
This PR ensures that waits on channels will time out according to the
test context, rather than waiting indefinitely. This should alleviate
the panic seen in https://github.com/coder/internal/issues/645 and, if
the deadlock recurs, allow the test to be retried automatically in CI.
Relates to https://github.com/coder/coder/issues/17432
### Part 1:
Notes:
- `GetPresetsAtFailureLimit` SQL query is added, which is similar to
`GetPresetsBackoff`, they use same CTEs: `filtered_builds`,
`time_sorted_builds`, but they are still different.
- The query is executed on every loop iteration. As an optimization, we
could mark a specific preset as permanently failed to avoid executing
the query on every iteration, but I decided not to do that for now.
- By default, `FailureHardLimit` is set to 3.
- `FailureHardLimit` is configurable. Setting it to zero disables the
hard limit.
### Part 2
Notes:
- `PrebuildFailureLimitReached` notification is added.
- Notification is sent to template admins.
- The notification is sent only the first time the hard limit is
reached, but it will `log.Warn` on every loop iteration.
- I introduced this enum:
```sql
CREATE TYPE prebuild_status AS ENUM (
'normal', -- Prebuilds are working as expected; this is the default, healthy state.
'hard_limited', -- Prebuilds have failed repeatedly and hit the configured hard failure limit; won't be retried anymore.
'validation_failed' -- Prebuilds failed due to a non-retryable validation error (e.g. template misconfiguration); won't be retried.
);
```
`validation_failed` is not used in this PR, but I think it will be used
in the next one, so I wanted to save us an extra migration.
- Notification looks like this:
<img width="472" alt="image"
src="https://github.com/user-attachments/assets/e10efea0-1790-4e7f-a65c-f94c40fced27"
/>
### Latest notification views:
<img width="463" alt="image"
src="https://github.com/user-attachments/assets/11310c58-68d1-4075-a497-f76d854633fe"
/>
<img width="725" alt="image"
src="https://github.com/user-attachments/assets/6bbfe21a-91ac-47c3-a9d1-21807bb0c53a"
/>
Replaced MUI Button with custom Button in 5 components:
- Filter.tsx - Changed import and updated Button props
(variant="outline", size="sm")
- ChatLayout.tsx - Changed import and updated Button
props for the "New Chat" button
- StarterTemplatePageView.tsx - Changed import and
implemented asChild pattern for links
- Notifications.tsx - Changed import and updated
NotificationActionButton to use variant="subtle"
- DateRange.tsx - Changed import and updated Button
styling
We prefer to show top-level diagnostics inside the parameters section
for context, but this adds a case to show diagnostics when there are no
parameters.
Normally, the entire parameters section is hidden if there are no
parameters.
This PR refactors the CompleteJob function to use database transactions
more consistently for better atomicity guarantees. The large function
was broken down into three specialized handlers:
- completeTemplateImportJob
- completeWorkspaceBuildJob
- completeTemplateDryRunJob
Each handler now uses the Database.InTx wrapper to ensure all database
operations for a job completion are performed within a single
transaction, preventing partial updates in case of failures.
Added comprehensive tests for transaction behavior for each job type.
Fixes #17694
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
fixes https://github.com/coder/internal/issues/627
Adds docs for `coder://` URLs for Windows Remote Desktop (RDP).
Note that we might want to hold off merging since the URI handling is
unreleased in Coder Desktop for Windows.
We probably shouldn't be suggesting `ignore_changes = all`. Only the
attributes which cause drift in prebuilds should be ignored; everything
else can behave as normal.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: Edward Angert <EdwardAngert@users.noreply.github.com>
Fixed the environment variable name for the app status slug in the Claude MCP configuration from `CODER_MCP_CLAUDE_APP_STATUS_SLUG` to `CODER_MCP_APP_STATUS_SLUG` to maintain consistency with other MCP environment variables.
The bug also caused the user-level Claude.md to lack instructions to report progress, so no status reports were received.
Dynamic params skip parameter validation in coder/coder.
This is because conditional parameters cannot be validated
with the static parameters in the database.
# Use workspace.OwnerUsername instead of fetching the owner
This PR optimizes the agent API by using the `workspace.OwnerUsername` field directly instead of making an additional database query to fetch the owner's username. The change removes the need to call `GetUserByID` in the manifest API and workspace agent RPC endpoints.
An issue arose when the agent token was scoped without access to user data (`api_key_scope = "no_user_data"`), causing the agent to fail to fetch the manifest due to an RBAC issue.
Change-Id: I3b6e7581134e2374b364ee059e3b18ece3d98b41
Signed-off-by: Thomas Kosiewski <tk@coder.com>
This PR adds a preset with prebuilds for each region to our dogfood
template. Creating a workspace based on a preset should now save time
compared to creating a workspace from scratch
This adds a few fixes to get presets working correctly with dynamic
params
1. Changes to preset params need to be rendered and displayed correctly
2. Changes to preset params need to be sent to the websocket
3. Changes to preset params need to be marked as touched so they won't
be automatically changed later because of dynamic defaults. Dynamic
defaults means any default parameter value can be changed by the
websocket response unless edited by the user, set by autofill or set by
a preset.
Pass through the user input as is. The previous code only passed through
parameters that existed in the db (static params). This would omit
conditional params.
Validation is enforced by the dynamic params websocket, so validation at
this point is not required.
Closes https://github.com/coder/internal/issues/648
This change introduces a new `ParentId` field to the agent's manifest.
This will allow an agent to know if it is a child or not, as well as
knowing who the owner is.
This is part of the Dev Container Agents work
# Description
This PR adds the `worker_name` field to the provisioner jobs endpoint.
To achieve this, the following SQL query was updated:
-
`GetProvisionerJobsByOrganizationAndStatusWithQueuePositionAndProvisioner`
As a result, the `codersdk.ProvisionerJob` type, which represents the
provisioner job API response, was modified to include the new field.
**Notes:**
* As mentioned in
[comment](https://github.com/coder/coder/pull/17877#discussion_r2093218206),
the `GetProvisionerJobsByIDsWithQueuePosition` query was not changed due
to load concerns. This means that for template and template version
endpoints, `worker_id` will still be returned, but `worker_name` will
not.
* Similar to `worker_id`, the `worker_name` is only present once a job
is assigned to a provisioner daemon. For jobs in a pending state (not
yet assigned), neither `worker_id` nor `worker_name` will be returned.
---
# Affected Endpoints
- `/organizations/{organization}/provisionerjobs`
- `/organizations/{organization}/provisionerjobs/{job}`
---
# Testing
- Added new tests verifying that both `worker_id` and `worker_name` are
returned once a provisioner job reaches the **succeeded** state.
- Existing tests covering state transitions and other logic remain
unchanged, as they test different scenarios.
---
# Front-end Changes
Admin provisioner jobs dashboard:
<img width="1088" alt="Screenshot 2025-05-16 at 11 51 33"
src="https://github.com/user-attachments/assets/0e20e360-c615-4497-84b7-693777c5443e"
/>
Fixes: https://github.com/coder/coder/issues/16982
Publishing inside a db transaction can lead to database connection
starvation/contention since it requires its own connection.
This ruleguard rule (one-shotted by Claude Sonnet 3.7 and finalized by
@Emyrk) will detect two of the following 3 instances:
```go
type Nested struct {
ps pubsub.Pubsub
}
func TestFail(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
nested := &Nested{
ps: ps,
}
// will catch this
_ = db.InTx(func(_ database.Store) error {
_, _ = fmt.Printf("")
_ = ps.Publish("", []byte{})
return nil
}, nil)
// will catch this
_ = db.InTx(func(_ database.Store) error {
_ = nested.ps.Publish("", []byte{})
return nil
}, nil)
// will NOT catch this
_ = db.InTx(func(_ database.Store) error {
blah(ps)
return nil
}, nil)
}
func blah(ps pubsub.Pubsub) {
ps.Publish("", []byte{})
}
```
The ruleguard doesn't recursively introspect function calls so only the
first two cases will be guarded against, but it's better than nothing.
<img width="1444" alt="image"
src="https://github.com/user-attachments/assets/8ffa0d88-16a0-41a9-9521-21211910dec9"
/>
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: Steven Masley <stevenmasley@gmail.com>
The current issue is that when multiple parameters are added to or
removed from a form because of a user change to a conditional parameter
value, the websocket parameters response gets out of sync with the state
of the parameters in the form.
The form state needs to be maintained because this is what gets
submitted when the user attempts to create a workspace.
Fixes:
1. When autofill params are set from the url, mark these params as
touched in the form. This is necessary as only touched params are sent
in the request to the websocket. These params should technically count
as being touched because they were preset from the url params.
2. Create a hook to synchronize the parameters from the websocket
response with the current state of the parameters stored in the form.
This change adds docker stop and docker system prune to the shutdown script so
that it doesn't need to be done by the Docker host, which takes a lot longer.
This change greatly speeds up workspace destruction:
```
2025-05-19 12:26:57.046+03:00 docker_container.workspace[0]: Destroying... [id=2685e2f456ba7b280c420219f19ef15384faa52c61ba7c087c7f109ffa6b1bda]
2025-05-19 12:27:07.046+03:00 docker_container.workspace[0]: Still destroying... [10s elapsed]
2025-05-19 12:27:16.734+03:00 docker_container.workspace[0]: Destruction complete after 20s
```
Follow-up for #17110
closes https://github.com/coder/internal/issues/632
`pubsubReinitSpy` used to signal that a subscription had happened before
it actually had.
This created a slight opportunity for the main goroutine to publish
before the actual subscription was listening. The published event was
then dropped, leading to a failed test.
Fixes #17070
Cleans up our handling of APIKey expiration and OIDC to keep them separate concepts. For an OIDC-login APIKey, both the APIKey and OIDC link must be valid to login. If the OIDC link is expired and we have a refresh token, we will attempt to refresh.
OIDC refreshes do not have any effect on APIKey expiry.
https://github.com/coder/coder/issues/17070#issuecomment-2886183613 explains why this is the correct behavior.
The local storage key is only set when a user presses the opt-in or
opt-out buttons.
Overall, this feels less annoying than making users opt in or opt out on
every visit to the create workspace page. Maybe less of a concern for
end users, but more of a concern while dogfooding.
Pros:
- User gets the admin setting value for the template as long as they
didn't opt in or opt out
- User can opt in or opt out at will, and their preference is saved
These items came up in an internal "bug bash" session yesterday.
@EdwardAngert note: I've reverted to the "transparent" phrasing; the
current docs confused a couple folks yesterday, and I feel that
"transparent" is clearly understood in this context.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: Edward Angert <EdwardAngert@users.noreply.github.com>
Existing template versions do not have the metadata (modules + plan) in
the db. So revert to using static parameter information from the
original template import.
This data will still be served over the websocket.
`v1.5` is going out with release `v2.22`
I had to reorder `module_files` and `resource_replacements` because of
this.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
This will be used in the extensions and desktop apps to enable
compression AND progress reporting for the download by comparing the
original content length to the amount of bytes written to disk.
Closes #16340
Fixes a couple agent tests so that they work correctly on Windows.
`HOME` is not a standard Windows environment variable, and we don't have any specific code in Coder to set it on SSH, so I've removed the test case. Amazingly/bizarrely the Windows test runners set this variable, but this is not standard Windows behavior, so we shouldn't be including it in our tests.
Also, the command `true` is not valid on a default Windows install.
```
true: The term 'true' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
```
I'm not really sure how the CI runners are allowing this test to pass, but again, it's not standard so we shouldn't be doing it.
We've been continuously polling the containers endpoint even when the
agent does not support containers. To reduce the requests, we can check
whether it is returning an error and stop if it is a 403 status code.
Also add some clarification about the lack of database constraints for
soft template deletion.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: Danny Kopping <dannykopping@gmail.com>
Bumps [github.com/justinas/nosurf](https://github.com/justinas/nosurf)
from 1.1.1 to 1.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/justinas/nosurf/releases">github.com/justinas/nosurf's
releases</a>.</em></p>
<blockquote>
<h2>v1.2.0</h2>
<p>This is a <em>security</em> release for nosurf. It mainly addresses
<a
href="https://github.com/justinas/nosurf-cve-2025-46721">CVE-2025-46721</a>.</p>
<p>This release technically includes breaking changes, as nosurf starts
applying same-origin checks that were not previously enforced. In most
cases, users will not need to make any changes to their code. However,
it is recommended to read <a
href="https://github.com/justinas/nosurf/blob/master/docs/origin-checks.md">the
documentation on nosurf's trusted origin checks</a> before
upgrading.</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/justinas/nosurf/commit/ec9bb776d8e5ba9e906b6eb70428f4e7b009feee"><code>ec9bb77</code></a>
Rework origin checks (<a
href="https://redirect.github.com/justinas/nosurf/issues/74">#74</a>)</li>
<li><a
href="https://github.com/justinas/nosurf/commit/e5c9c1fe2d4f69668ff78f872abf3b396a08673a"><code>e5c9c1f</code></a>
Add GitHub Actions CI, fix lints and tests</li>
<li>See full diff in <a
href="https://github.com/justinas/nosurf/compare/v1.1.1...v1.2.0">compare
view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts page](https://github.com/coder/coder/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Closes https://github.com/coder/internal/issues/369
We can't know whether a replacement (i.e. drift of terraform state
leading to a resource needing to be deleted/recreated) will take place
a priori; we can only detect it at `plan` time, because the provider
decides whether a resource must be replaced and it cannot be inferred
through static analysis of the template.
**This is likely to be the most common gotcha with using prebuilds,
since it requires a slight template modification to use prebuilds
effectively**, so let's head this off before it's an issue for
customers.
Drift details will now be logged in the workspace build logs:

Plus a notification will be sent to template admins when this situation
arises:

A new metric - `coderd_prebuilt_workspaces_resource_replacements_total`
- will also increment each time a workspace encounters replacements.
We only track _that_ a resource replacement occurred, not how many. Just
one is enough to ruin a prebuild, but we can't know a priori which
replacement would cause this.
For example, say we have 2 replacements: a `docker_container` and a
`null_resource`; we don't know which one might
cause an issue (or indeed if either would), so we just track the
replacement.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
This pull request allows coder workspace agents to be reinitialized when
a prebuilt workspace is claimed by a user. This facilitates the transfer
of ownership between the anonymous prebuilds system user and the new
owner of the workspace.
Only a single agent per prebuilt workspace is supported for now, but
plumbing has already been done to facilitate the seamless transition to
multi-agent support.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: Danny Kopping <dannykopping@gmail.com>
Avoids two sequential scans of massive tables (`workspace_builds`,
`provisioner_jobs`) and uses index scans instead. This new view largely
replicates our already optimized query `GetWorkspaces` to fetch the
latest build.
The original query and the new query were compared against the dogfood
database to ensure they return the exact same data in the exact same
order (minus the new `workspaces.deleted = false` filter). Performance
is massively improved even without that filter, but adding it improves
it further.
Note: these query times are probably inflated due to high database load
on our dogfood environment that this intends to partially resolve.
Before: 2,139ms
([explain](https://explain.dalibo.com/plan/997e4fch241b46e6))
After: 33ms
([explain](https://explain.dalibo.com/plan/c888dc223870f181))
Co-authored-by: Cian Johnston <cian@coder.com>
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: Mathias Fredriksson <mafredri@gmail.com>
Co-authored-by: Danny Kopping <dannykopping@gmail.com>
`Collect()` is called whenever the `/metrics` endpoint is hit to
retrieve metrics.
The queries used in prebuilds metrics collection are quite heavy, and we
want to avoid running them concurrently or too often, to keep db load
down.
Here I'm moving towards a background retrieval of the state required to
set the metrics, which gets invalidated every interval.
Also introduces `coderd_prebuilt_workspaces_metrics_last_updated` which
operators can use to determine when these metrics go stale.
See https://github.com/coder/coder/pull/17789 as well.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Builds on https://github.com/coder/coder/pull/17570
Frontend portion of https://github.com/coder/coder/tree/chat originally
authored by @kylecarbs
Additional changes:
- Addresses linter complaints
- Brings `ChatToolInvocation` argument definitions in line with those
defined in `codersdk/toolsdk`
- Ensures chat-related features are not shown unless
`ExperimentAgenticChat` is enabled.
Co-authored-by: Kyle Carberry <kyle@carberry.com>
## Description
Modifies the behaviour of the "list templates" API endpoints to return
non-deprecated templates by default. Users can still query for
deprecated templates by specifying the `deprecated=true` query
parameter.
**Note:** The deprecation feature is an enterprise-level feature.
## Affected Endpoints
* /api/v2/organizations/{organization}/templates
* /api/v2/templates
Fixes #17565
Bumps gopkg.in/DataDog/dd-trace-go.v1 from 1.72.1 to 1.73.0.
<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>
| Dependency Name | Ignore Conditions |
| --- | --- |
| gopkg.in/DataDog/dd-trace-go.v1 | [>= 1.58.a, < 1.59] |
</details>
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
To prevent malicious apps and vendors from using the Coder session
token, we are adding an allowlist of safe protocols/schemes we want to
support:
- vscode:
- vscode-insiders:
- windsurf:
- cursor:
- jetbrains-gateway:
- jetbrains:
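A hedged sketch of what such an allowlist check could look like (the helper name and matching logic are illustrative, not the actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// allowedSchemes mirrors the list above: only these non-HTTP protocols may
// receive the session token.
var allowedSchemes = []string{
	"vscode", "vscode-insiders", "windsurf", "cursor",
	"jetbrains-gateway", "jetbrains",
}

// isAllowedScheme reports whether rawURL uses one of the safe schemes.
// (Illustrative helper; the real code lives in the app-link handling.)
func isAllowedScheme(rawURL string) bool {
	scheme, _, ok := strings.Cut(rawURL, ":")
	if !ok {
		return false
	}
	for _, s := range allowedSchemes {
		if strings.EqualFold(scheme, s) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isAllowedScheme("vscode://open?workspace=dev")) // allowed IDE scheme
	fmt.Println(isAllowedScheme("javascript:alert(1)"))         // rejected
}
```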
Fix https://github.com/coder/security/issues/77
closes #17706
Clarify that:
1. URL query parameters work without experiment flag
2. The 'populate recently used parameters' feature still requires the
auto-fill-parameters experiment flag
Co-authored-by: EdwardAngert <17991901+EdwardAngert@users.noreply.github.com>
The changes in `coder/preview` necessitated the changes in
`codersdk/richparameters.go` & `provisioner/terraform/resources.go`.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Co-authored-by: Steven Masley <stevenmasley@gmail.com>
We are starting to add app links in many places in the UI. To keep them
consistent, this PR extracts the core logic into `modules/apps` for
reuse.
Related to https://github.com/coder/coder/issues/17311
Closes https://github.com/coder/coder/issues/17691
`ExtractOrganizationMembersParam` will allow fetching a user with only
organization permissions. If the user belongs to 0 orgs, then the user "does not exist"
from an org perspective. But if you are a site-wide admin, then the user does exist.
It's a security issue to share the API token, and the protocols that we
actually want to share it with are not HTTP and are handled locally on
the same machine.
Security issue introduced by https://github.com/coder/coder/pull/17708
We've been using an unnecessary abstraction to fetch workspaces data. I
also took some time to use the new useWorkspaceUpdate hook in the update
workspace tooltip, which was missing some important steps like
confirmation.
Fix https://github.com/coder/coder/issues/17704
During the [refactoring of WorkspaceApp response
type](https://github.com/coder/coder/pull/17700/files#diff-a7e67944708c3c914a24a02d515a89ecd414bfe61890468dac08abde55ba8e96R112),
I updated the logic to check if the session token should be injected
causing external apps to not load correctly.
To avoid future confusion, we are only going to rely on the
`app.external` prop to open apps externally, instead of verifying that
the URL does not use the HTTP protocol. I did some research and didn't
find a use case where this would be a problem.
I'm going to refactor this code very soon to allow opening apps from the
workspaces page, so I will write the tests to cover this use case there.
**Not included:**
During my next refactoring I'm also going to change the code to support
token injections directly in the HREF instead of making it happen during
the click event.
Part of #17649
---
# Allow MCP server to run without authentication
This PR enhances the MCP server to operate without requiring authentication, making it more flexible for environments where authentication isn't available or necessary. Key changes:
- Replaced `InitClient` with `TryInitClient` to allow the MCP server to start without credentials
- Added graceful handling when URL or authentication is missing
- Made authentication status visible in server logs
- Added logic to skip user-dependent tools when no authenticated user is present
- Made the `coder_report_task` tool available with just an agent token (no user token required)
- Added comprehensive tests to verify operation without authentication
These changes allow the MCP server to function in more environments while still using authentication when available, improving flexibility for CI/CD and other automated environments.
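A rough sketch of the fallback behaviour (all types and tool names besides `coder_report_task` are illustrative):

```go
package main

import "fmt"

// client is a stand-in for the Coder SDK client.
type client struct{ url, token string }

// tryInitClient mirrors the idea behind TryInitClient: return a usable
// client when credentials exist, otherwise report that the server should
// start unauthenticated rather than fail outright.
func tryInitClient(url, token string) (*client, bool) {
	if url == "" || token == "" {
		return nil, false
	}
	return &client{url: url, token: token}, true
}

// toolsFor filters the tool list: user-dependent tools are skipped without
// an authenticated user, while agent-token tools like coder_report_task
// stay available.
func toolsFor(authenticated bool) []string {
	tools := []string{"coder_report_task"}
	if authenticated {
		// Hypothetical user-dependent tool, for illustration only.
		tools = append(tools, "coder_list_workspaces")
	}
	return tools
}

func main() {
	_, ok := tryInitClient("", "")
	fmt.Println(ok, toolsFor(ok))
}
```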
This PR introduces failing test retries in CI for e2e tests, Go tests
with the in-memory database, Go tests with Postgres, and the CLI tests.
Retries are not enabled for race tests.
The goal is to reduce how often flakes disrupt developers' workflows.
Closes https://github.com/coder/internal/issues/563
The [Coder Connect
tunnel](https://github.com/coder/coder/blob/main/vpn/tunnel.go) receives
workspace state from the Coder server over a [dRPC
stream.](https://github.com/coder/coder/blob/114ba4593b2a82dfd41cdcb7fd6eb70d866e7b86/tailnet/controllers.go#L1029)
When first connecting to this stream, the current state of the user's
workspaces is received, with subsequent messages being diffs on top of
that state.
However, if the client disconnects from this stream, such as when the
user's device is suspended, and then reconnects later, no mechanism
exists for the tunnel to differentiate that message containing the
entire initial state from another diff, and so that state is incorrectly
applied as a diff.
In practice:
- Tunnel connects, receives a workspace update containing all the
existing workspaces & agents.
- Tunnel loses connection, but isn't completely stopped.
- All the user's workspaces are restarted, producing a new set of
agents.
- Tunnel regains connection, and receives a workspace update containing
all the existing workspaces & agents.
- This initial update is incorrectly applied as a diff, with the
Tunnel's state containing both the old & new agents.
This PR introduces a solution in which tunnelUpdater, when created,
sends a FreshState flag with the WorkspaceUpdate type. This flag is
handled in the vpn tunnel in the following fashion:
- Preserve existing Agents
- Remove current Agents in the tunnel that are not present in the
WorkspaceUpdate
- Remove unreferenced Workspaces
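The merge logic can be sketched as follows (simplified, hypothetical types; the real tunnel tracks richer agent and workspace state):

```go
package main

import "fmt"

// workspaceUpdate is a simplified version of the VPN tunnel's update
// message; FreshState marks a full snapshot rather than a diff, as
// described above.
type workspaceUpdate struct {
	FreshState bool
	Agents     map[string]bool // set of agent IDs present upstream
}

// applyUpdate merges an update into the tunnel's known agents. On a fresh
// state, agents absent from the update are dropped instead of accumulating
// alongside their replacements.
func applyUpdate(known map[string]bool, u workspaceUpdate) map[string]bool {
	if u.FreshState {
		for id := range known {
			if !u.Agents[id] {
				delete(known, id) // stale agent from before the reconnect
			}
		}
	}
	for id := range u.Agents {
		known[id] = true
	}
	return known
}

func main() {
	// After a reconnect: the old agent is gone upstream, a new one exists.
	known := map[string]bool{"old-agent": true}
	known = applyUpdate(known, workspaceUpdate{
		FreshState: true,
		Agents:     map[string]bool{"new-agent": true},
	})
	fmt.Println(len(known), known["new-agent"], known["old-agent"])
}
```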
This fixes a test issue where we were waiting on a channel indefinitely,
so the test timed out instead of failing due to the earlier error.
Updates coder/internal#558
This PR speeds up the "Upload tests to datadog" step by downloading the
`datadog-ci` binary directly from GitHub releases. Most of the time used
to be spent in `npm install`, which consistently timed out on Windows
after a minute. [Now it takes 3
seconds](https://github.com/coder/coder/actions/runs/14834976784/job/41644230049?pr=17668#step:10:1).
I updated it to version v2.48.0 because v2.21.0 didn't have the
artifacts for arm64 macOS.
Closes https://github.com/coder/internal/issues/609.
As seen in the below logs, the `last_used_at` time was updating, but just to the same value that it was on creation; `dbtime.Now` was called in quick succession.
```
t.go:106: 2025-05-05 12:11:54.166 [info] coderd.workspace_usage_tracker: updated workspaces last_used_at count=1 now="2025-05-05T12:11:54.161329Z"
t.go:106: 2025-05-05 12:11:54.172 [debu] coderd: GET host=localhost:50422 path=/api/v2/workspaces/745b7ff3-47f2-4e1a-9452-85ea48ba5c46 proto=HTTP/1.1 remote_addr=127.0.0.1 start="2025-05-05T12:11:54.1669073Z" workspace_name=peaceful_faraday34 requestor_id=b2cf02ae-2181-480b-bb1f-95dc6acb6497 requestor_name=testuser requestor_email="" took=5.2105ms status_code=200 latency_ms=5 params_workspace=745b7ff3-47f2-4e1a-9452-85ea48ba5c46 request_id=7fd5ea90-af7b-4104-91c5-9ca64bc2d5e6
workspaceagentsrpc_test.go:70:
Error Trace: C:/actions-runner/coder/coder/coderd/workspaceagentsrpc_test.go:70
Error: Should be true
Test: TestWorkspaceAgentReportStats
Messages: 2025-05-05 12:11:54.161329 +0000 UTC is not after 2025-05-05 12:11:54.161329 +0000 UTC
```
If we change the initial `LastUsedAt` time to be a time in the past, ticking with a `dbtime.Now` will always update it to a later value. If it never updates, the condition will still fail.
This PR focuses on optimizing go-test CI times on Windows. It:
- backs the `$RUNNER_TEMP` directory with a RAM disk. This directory is
used by actions like cache, setup-go, and setup-terraform as a staging
area
- backs `GOCACHE`, `GOMODCACHE`, and `GOPATH` with a RAM disk
- backs `$GITHUB_WORKSPACE` with a RAM disk - that's where the
repository is checked out
- uses preinstalled Go on Windows runners
- starts using the depot Windows runner
From what I've seen, these changes bring test times down to be on par
with Linux and macOS. The biggest improvement comes from backing
frequently accessed paths with RAM disks. The C drive is surprisingly
slow - I ran some performance tests with
[fio](https://fio.readthedocs.io/en/latest/fio_doc.html#) where I tested
IOPS on many small files, and the RAM disk was 100x faster.
Additionally, the depot runners seem to have more consistent performance
than the ones provided by GitHub.
Database transactions hold onto connections, and `pubsub.Publish` tries
to acquire a connection of its own. If the latter is called within a
transaction, this can lead to connection exhaustion.
I plan two follow-ups to this PR:
1. Make connection counts tuneable
https://github.com/coder/coder/blob/main/cli/server.go#L2360-L2376
We will then be able to write tests showing how connection exhaustion
occurs.
2. Write a linter/ruleguard to prevent `pubsub.Publish` from being
called within a transaction.
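The exhaustion scenario can be illustrated with a toy fixed-size pool (purely illustrative; the real pool belongs to `database/sql`):

```go
package main

import "fmt"

// connPool is a toy fixed-size pool standing in for the Postgres pool.
type connPool struct{ free int }

func (p *connPool) tryAcquire() bool {
	if p.free == 0 {
		return false
	}
	p.free--
	return true
}

func (p *connPool) release() { p.free++ }

func main() {
	pool := &connPool{free: 1}

	// A transaction holds its connection for its whole lifetime.
	pool.tryAcquire()

	// pubsub.Publish inside the transaction needs a second connection; with
	// the pool exhausted it would block (here: fail) until the tx ends.
	fmt.Println("publish acquired conn:", pool.tryAcquire())

	pool.release() // transaction commits, connection returned
	fmt.Println("publish after commit:", pool.tryAcquire())
}
```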
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Currently we don't have a way to get insight into Postgres connections
being exhausted.
By using the prometheus' [`DBStats`
collector](https://github.com/prometheus/client_golang/blob/main/prometheus/collectors/dbstats_collector.go),
we get some insight out-of-the-box.
```
# HELP go_sql_idle_connections The number of idle connections.
# TYPE go_sql_idle_connections gauge
go_sql_idle_connections{db_name="coder"} 1
# HELP go_sql_in_use_connections The number of connections currently in use.
# TYPE go_sql_in_use_connections gauge
go_sql_in_use_connections{db_name="coder"} 2
# HELP go_sql_max_idle_closed_total The total number of connections closed due to SetMaxIdleConns.
# TYPE go_sql_max_idle_closed_total counter
go_sql_max_idle_closed_total{db_name="coder"} 112
# HELP go_sql_max_idle_time_closed_total The total number of connections closed due to SetConnMaxIdleTime.
# TYPE go_sql_max_idle_time_closed_total counter
go_sql_max_idle_time_closed_total{db_name="coder"} 0
# HELP go_sql_max_lifetime_closed_total The total number of connections closed due to SetConnMaxLifetime.
# TYPE go_sql_max_lifetime_closed_total counter
go_sql_max_lifetime_closed_total{db_name="coder"} 0
# HELP go_sql_max_open_connections Maximum number of open connections to the database.
# TYPE go_sql_max_open_connections gauge
go_sql_max_open_connections{db_name="coder"} 10
# HELP go_sql_open_connections The number of established connections both in use and idle.
# TYPE go_sql_open_connections gauge
go_sql_open_connections{db_name="coder"} 3
# HELP go_sql_wait_count_total The total number of connections waited for.
# TYPE go_sql_wait_count_total counter
go_sql_wait_count_total{db_name="coder"} 28
# HELP go_sql_wait_duration_seconds_total The total time blocked waiting for a new connection.
# TYPE go_sql_wait_duration_seconds_total counter
go_sql_wait_duration_seconds_total{db_name="coder"} 0.086936235
```
`go_sql_wait_count_total` is the metric I'm most interested in gaining,
but the others are also very useful.
Changing the prefix is easy (`prometheus.WrapRegistererWithPrefix`), but
getting rid of the `go_` segment is not quite so easy. I've kept the
changeset small for now.
**NOTE:** I imported a library to determine the database name from the
given conn string. It's [not as
simple](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING)
as one might hope. The database name is used for the `db_name` label.
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
Refactor WebSocket error handling to ensure that errors are only set
when the current socket ref matches the active one. This prevents
unnecessary error messages when the WebSocket connection closes
unexpectedly.
This solves the problem of showing error messages because of React
Strict mode rendering the page twice and opening 2 websocket
connections.
This change documents the early access dev containers integration and
how to enable it, what features are available and what limitations exist
at the time of writing.
---------
Co-authored-by: EdwardAngert <17991901+EdwardAngert@users.noreply.github.com>
Don't specify the template version for a delete transition, because the
prebuilt workspace may have been created using an older template
version.
If the template version isn't explicitly set, the builder will
automatically use the version from the last workspace build - which is
the desired behavior.
Fixes https://github.com/coder/internal/issues/604
Fixes a data race in `agentscripts.Runner` where a concurrent `Execute()` call races with `Init()`. We hit this race during shut down, which is not synchronized against starting up.
In this PR I've chosen to add synchronization to the `Runner` rather than try to synchronize the calls in the agent. When we close down the agent, it's OK to just throw an error if we were never initialized with a startup script; we don't want to wait for it, since that requires an active connection to the control plane.
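The synchronization can be sketched like this (a simplified stand-in, not the actual `agentscripts.Runner`):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// runner is a simplified stand-in for agentscripts.Runner: Execute must not
// race with Init, and executing before Init is an error rather than a wait.
type runner struct {
	mu     sync.Mutex
	inited bool
}

func (r *runner) Init() {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.inited = true
}

func (r *runner) Execute() error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if !r.inited {
		// Shutdown should fail fast instead of waiting on a control plane
		// connection that may never come.
		return errors.New("execute: runner not initialized")
	}
	return nil
}

func main() {
	r := &runner{}
	fmt.Println(r.Execute()) // error: shut down before Init
	r.Init()
	fmt.Println(r.Execute()) // nil: normal path
}
```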
Adds @spikecurtis and @johnstcn as CODEOWNERS of the provisioner protocol files. These need to be versioned, so we need some human review over changes.
PR contains:
- fix for claiming & deleting prebuilds with immutable params
- unit test for claiming scenario
- unit test for deletion scenario
The parameter resolver was failing when deleting/claiming prebuilds
because a value for a previously-used parameter was provided to the
resolver, but since the value was unchanged (it's coming from the
preset) it failed in the resolver. The resolver was missing a check to
see if the old value != new value; if the values match then there's no
mutation of an immutable parameter.
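The missing check amounts to comparing old and new values before rejecting (illustrative helper, not the real resolver):

```go
package main

import (
	"errors"
	"fmt"
)

// resolveImmutable sketches the fix described above: re-supplying an
// unchanged value for an immutable parameter is not a mutation.
func resolveImmutable(old, proposed string, immutable bool) (string, error) {
	if immutable && proposed != old {
		return "", errors.New("cannot change immutable parameter")
	}
	return proposed, nil
}

func main() {
	// Claiming a prebuild re-supplies the preset's value unchanged: allowed.
	v, err := resolveImmutable("us-east-1", "us-east-1", true)
	fmt.Println(v, err)
	// An actual change to an immutable parameter is still rejected.
	_, err = resolveImmutable("us-east-1", "eu-west-1", true)
	fmt.Println(err != nil)
}
```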
---------
Signed-off-by: Danny Kopping <dannykopping@gmail.com>
The regular network info file creation code also calls `MkdirAll`.
This wasn't picked up in manual testing, as I already had the `/net`
folder in my VS Code workspace. It also wasn't caught in automated
testing, because we use an in-memory FS, which for some reason creates
the directory implicitly.
Fix https://github.com/coder/internal/issues/594
**Notice:**
This is a temporary solution to get the devcontainers feature released.
Maybe a better solution, to avoid polling the API every 10 seconds, is
to implement a websocket connection to get updates on containers.
Closes https://github.com/coder/vscode-coder/issues/447
Closes https://github.com/coder/jetbrains-coder/issues/543
Closes https://github.com/coder/coder-jetbrains-toolbox/issues/21
This PR adds Coder Connect support to `coder ssh --stdio`.
When connecting to a workspace, if `--force-new-tunnel` is not passed, the CLI will first do a DNS lookup for `<agent>.<workspace>.<owner>.<hostname-suffix>`. If an IP address is returned, and it's within the Coder service prefix, the CLI will not create a new tailnet connection to the workspace, and instead dial the SSH server running on port 22 on the workspace directly over TCP.
This allows IDE extensions to use the Coder Connect tunnel, without requiring any modifications to the extensions themselves.
Additionally, `using_coder_connect` is added to the `sshNetworkStats` file, which the VS Code extension (and maybe Jetbrains?) will be able to read, and indicate to the user that they are using Coder Connect.
One advantage of this approach is that running `coder ssh --stdio` on an offline workspace with Coder Connect enabled will have the CLI wait for the workspace to build, the agent to connect (and optionally, for the startup scripts to finish), before finally connecting using the Coder Connect tunnel.
As a result, `coder ssh --stdio` has the overhead of looking up the workspace and agent, and checking if they are running. On my device, this meant `coder ssh --stdio <workspace>` was approximately a second slower than just connecting to the workspace directly using `ssh <workspace>.coder` (I would assume anyone serious about their Coder Connect usage would know to just do the latter anyway).
To ensure this doesn't come at a significant performance cost, I've also benchmarked this PR.
<details>
<summary>Benchmark</summary>
## Methodology
All tests were completed on `dev.coder.com`, where a Linux workspace running in AWS `us-west1` was created.
The machine running Coder Desktop (the 'client') was a Windows VM running in the same AWS region and VPC as the workspace.
To test the performance of specifically the SSH connection, a port was forwarded between the client and workspace using:
```
ssh -p 22 -L7001:localhost:7001 <host>
```
where `host` was either an alias for an SSH ProxyCommand that called `coder ssh`, or a Coder Connect hostname.
For latency, [`tcping`](https://www.elifulkerson.com/projects/tcping.php) was used against the forwarded port:
```
tcping -n 100 localhost 7001
```
For throughput, [`iperf3`](https://iperf.fr/iperf-download.php) was used:
```
iperf3 -c localhost -p 7001
```
where an `iperf3` server was running on the workspace on port 7001.
## Test Cases
### Testcase 1: `coder ssh` `ProxyCommand` that bicopies from Coder Connect
This case tests the implementation in this PR, such that we can write a config like:
```
Host codercliconnect
ProxyCommand /path/to/coder ssh --stdio workspace
```
With Coder Connect enabled, `ssh -p 22 -L7001:localhost:7001 codercliconnect` will use the Coder Connect tunnel. The results were as follows:
**Throughput, 10 tests, back to back:**
- Average throughput across all tests: 788.20 Mbits/sec
- Minimum average throughput: 731 Mbits/sec
- Maximum average throughput: 871 Mbits/sec
- Standard Deviation: 38.88 Mbits/sec
**Latency, 100 RTTs:**
- Average: 0.369ms
- Minimum: 0.290ms
- Maximum: 0.473ms
### Testcase 2: `ssh` dialing Coder Connect directly without a `ProxyCommand`
This is what we assume to be the 'best' way to use Coder Connect
**Throughput, 10 tests, back to back:**
- Average throughput across all tests: 789.50 Mbits/sec
- Minimum average throughput: 708 Mbits/sec
- Maximum average throughput: 839 Mbits/sec
- Standard Deviation: 39.98 Mbits/sec
**Latency, 100 RTTs:**
- Average: 0.369ms
- Minimum: 0.267ms
- Maximum: 0.440ms
### Testcase 3: `coder ssh` `ProxyCommand` that creates its own Tailnet connection in-process
This is what normally happens when you run `coder ssh`:
**Throughput, 10 tests, back to back:**
- Average throughput across all tests: 610.20 Mbits/sec
- Minimum average throughput: 569 Mbits/sec
- Maximum average throughput: 664 Mbits/sec
- Standard Deviation: 27.29 Mbits/sec
**Latency, 100 RTTs:**
- Average: 0.335ms
- Minimum: 0.262ms
- Maximum: 0.452ms
## Analysis
Performing a two-tailed, unpaired t-test against the throughput of testcases 1 and 2, we find a P value of `0.9450`, far above any conventional significance threshold. This means we have no evidence that the two cases differ; the observed difference is entirely consistent with chance.
## Conclusion
From the t-test, and by comparison to the status quo (regular `coder ssh`, which uses gvisor, and is noticeably slower), I think it's safe to say any impact on throughput or latency by the `ProxyCommand` performing a bicopy against Coder Connect is negligible. Users are very much unlikely to run into performance issues as a result of using Coder Connect via `coder ssh`, as implemented in this PR.
Less scientifically, I ran these same tests on my home network with my Sydney workspace, and both throughput and latency were consistent across testcases 1 and 2.
</details>
**IMPORTANT**: Always use LSP tools for code navigation and understanding. These tools provide accurate, real-time analysis of the codebase and should be your first choice for code investigation.
#### Go LSP Tools (for backend code)
1. **Find function definitions** (USE THIS FREQUENTLY):
@@ -4,7 +4,7 @@ This project is called "Coder" - an application for managing remote development
Coder provides a platform for creating, managing, and using remote development environments (also known as Cloud Development Environments or CDEs). It leverages Terraform to define and provision these environments, which are referred to as "workspaces" within the project. The system is designed to be extensible, secure, and provide developers with a seamless remote development experience.
# Core Architecture
## Core Architecture
The heart of Coder is a control plane that orchestrates the creation and management of workspaces. This control plane interacts with separate Provisioner processes over gRPC to handle workspace builds. The Provisioners consume workspace definitions and use Terraform to create the actual infrastructure.
@@ -12,17 +12,17 @@ The CLI package serves dual purposes - it can be used to launch the control plan
The database layer uses PostgreSQL with SQLC for generating type-safe database code. Database migrations are carefully managed to ensure both forward and backward compatibility through paired `.up.sql` and `.down.sql` files.
# API Design
## API Design
Coder's API architecture combines REST and gRPC approaches. The REST API is defined in `coderd/coderd.go` and uses Chi for HTTP routing. This provides the primary interface for the frontend and external integrations.
Internal communication with Provisioners occurs over gRPC, with service definitions maintained in `.proto` files. This separation allows for efficient binary communication with the components responsible for infrastructure management while providing a standard REST interface for human-facing applications.
# Network Architecture
## Network Architecture
Coder implements a secure networking layer based on Tailscale's Wireguard implementation. The `tailnet` package provides connectivity between workspace agents and clients through DERP (Designated Encrypted Relay for Packets) servers when direct connections aren't possible. This creates a secure overlay network allowing access to workspaces regardless of network topology, firewalls, or NAT configurations.
## Tailnet and DERP System
### Tailnet and DERP System
The networking system has three key components:
@@ -35,7 +35,7 @@ The networking system has three key components:
3. **Direct Connections**: When possible, the system establishes peer-to-peer connections between clients and workspaces using STUN for NAT traversal. This requires both endpoints to send UDP traffic on ephemeral ports.
## Workspace Proxies
### Workspace Proxies
Workspace proxies (in the Enterprise edition) provide regional relay points for browser-based connections, reducing latency for geo-distributed teams. Key characteristics:
@@ -45,9 +45,10 @@ Workspace proxies (in the Enterprise edition) provide regional relay points for
- Managed through the `coder wsproxy` commands
- Implemented primarily in the `enterprise/wsproxy/` package
# Agent System
## Agent System
The workspace agent runs within each provisioned workspace and provides core functionality including:
- SSH access to workspaces via the `agentssh` package
- Port forwarding
- Terminal connectivity via the `pty` package for pseudo-terminal support
@@ -57,7 +58,7 @@ The workspace agent runs within each provisioned workspace and provides core fun
Agents communicate with the control plane using the tailnet system and authenticate using secure tokens.
# Workspace Applications
## Workspace Applications
Workspace applications (or "apps") provide browser-based access to services running within workspaces. The system supports:
@@ -69,17 +70,17 @@ Workspace applications (or "apps") provide browser-based access to services runn
The implementation is primarily in the `coderd/workspaceapps/` directory with components for URL generation, proxying connections, and managing application state.
# Implementation Details
## Implementation Details
The project structure separates frontend and backend concerns. React components and pages are organized in the `site/src/` directory, with Jest used for testing. The backend is primarily written in Go, with a strong emphasis on error handling patterns and test coverage.
Database interactions are carefully managed through migrations in `coderd/database/migrations/` and queries in `coderd/database/queries/`. All new queries require proper database authorization (dbauthz) implementation to ensure that only users with appropriate permissions can access specific resources.
## Authorization System
The database authorization (dbauthz) system enforces fine-grained access control across all database operations. It uses role-based access control (RBAC) to validate user permissions before executing database operations. The `dbauthz` package wraps the database store and performs authorization checks before returning data. All database operations must pass through this layer to ensure security.
## Testing Framework
The codebase has a comprehensive testing approach with several key components:
4. **Enterprise Testing**: Enterprise features have dedicated test utilities in the `coderdenttest` package.
## Open Source and Enterprise Components
The repository contains both open source and enterprise components:
- The boundary between open source and enterprise is managed through a licensing system
- The same core codebase supports both editions, with enterprise features conditionally enabled
## Development Philosophy
Coder emphasizes clear error handling, with specific patterns required:
- Concise error messages that avoid phrases like "failed to"
- Wrapping errors with `%w` to maintain error chains
- Using sentinel errors with the "err" prefix (e.g., `errNotFound`)
All tests should run in parallel using `t.Parallel()` to ensure efficient testing.
Git contributions follow a standard format with commit messages structured as `type: <message>`, where type is one of `feat`, `fix`, or `chore`.
## Development Workflow
Development can be initiated using `scripts/develop.sh` to start the application after making changes. Database schema updates should be performed through the migration system using `create_migration.sh <name>` to generate migration files, with each `.up.sql` migration paired with a corresponding `.down.sql` that properly reverts all changes.
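As a sketch of what a paired migration might contain (table and column names are invented for illustration; the `.down.sql` must exactly revert everything the `.up.sql` did):

```sql
-- 000123_add_widget_color.up.sql (hypothetical example)
ALTER TABLE widgets ADD COLUMN color text NOT NULL DEFAULT 'blue';

-- 000123_add_widget_color.down.sql
ALTER TABLE widgets DROP COLUMN color;
```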
NEVER use `time.Sleep` to mitigate timing issues. If an issue
seems like it should use `time.Sleep`, read through https://github.com/coder/quartz and specifically the [README](https://github.com/coder/quartz/blob/main/README.md) to better understand how to handle timing issues.
## 🎯 Code Style
### Detailed guidelines in imported WORKFLOWS.md
- Follow [Uber Go Style Guide](https://github.com/uber-go/guide/blob/master/style.md)
We are always working on new integrations. Please feel free to open an issue and suggest one.
### Official
- [**VS Code Extension**](https://marketplace.visualstudio.com/items?itemName=coder.coder-remote): Open any Coder workspace in VS Code with a single click
- [**JetBrains Toolbox Plugin**](https://plugins.jetbrains.com/plugin/26968-coder): Open any Coder workspace from JetBrains Toolbox with a single click
- [**JetBrains Gateway Plugin**](https://plugins.jetbrains.com/plugin/19620-coder): Open any Coder workspace in JetBrains Gateway with a single click
- [**Dev Container Builder**](https://github.com/coder/envbuilder): Build development environments using `devcontainer.json` on Docker, Kubernetes, and OpenShift
- [**Coder Registry**](https://registry.coder.com): Build and extend development environments with common use-cases
- [**Kubernetes Log Stream**](https://github.com/coder/coder-logstream-kube): Stream Kubernetes Pod events to the Coder startup logs
- [**Self-Hosted VS Code Extension Marketplace**](https://github.com/coder/code-marketplace): A private extension marketplace that works in restricted or airgapped networks, integrating with [code-server](https://github.com/coder/code-server)
- [**Setup Coder**](https://github.com/marketplace/actions/setup-coder): An action to set up the Coder CLI in GitHub Actions workflows