Compare commits

68 Commits

Author SHA1 Message Date
Callum Styan b816191609 perf(coderd/dynamicparameters): skip caching introspection render calls
The first render in ResolveParameters() is purely for introspection to
identify ephemeral parameters. The second render validates the final
values after ephemeral processing.

Previously, both renders were cached, causing unnecessary cache entries.
For prebuilds with constant parameters, this doubled cache size and
created entries that were never reused.

This optimization:
- Adds RenderWithoutCache() method to Renderer interface
- First render (introspection) skips cache operations
- Second render (final validation) still uses cache
- Reduces cache size by 50% for prebuild scenarios
- Improves cache hit ratio (was 4 misses/11 hits, now 1 miss/2+ hits)

For 3 prebuilds with identical parameters:
- Before: 2 misses, 4 hits, 2 entries (introspection + final)
- After: 1 miss, 2 hits, 1 entry (final values only)
2025-12-11 20:58:43 +00:00
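The dual-render flow above can be sketched as follows. This is an illustrative stand-in, not the actual coderd code: the `renderer`, `output`, and `resolveParameters` names are assumptions, and only the cache-skipping shape mirrors the commit.

```go
// Sketch of the dual-render optimization: the first (introspection) render
// bypasses the cache entirely, while the second (final validation) render
// reads and populates it. Names are illustrative, not the coderd API.
package main

import "fmt"

type output struct{ value string }

type renderer struct {
	cache  map[string]output
	hits   int
	misses int
}

// renderExpensive stands in for the costly Terraform parsing work.
func (r *renderer) renderExpensive(key string) output {
	return output{value: "rendered:" + key}
}

// RenderWithoutCache skips all cache operations — used for introspection.
func (r *renderer) RenderWithoutCache(key string) output {
	return r.renderExpensive(key)
}

// Render consults the cache first — used for final validation.
func (r *renderer) Render(key string) output {
	if out, ok := r.cache[key]; ok {
		r.hits++
		return out
	}
	r.misses++
	out := r.renderExpensive(key)
	r.cache[key] = out
	return out
}

func resolveParameters(r *renderer, key string) output {
	_ = r.RenderWithoutCache(key) // first render: introspection only
	return r.Render(key)          // second render: final values, cached
}

func main() {
	r := &renderer{cache: map[string]output{}}
	for i := 0; i < 3; i++ { // three prebuilds with identical parameters
		resolveParameters(r, "preset-a")
	}
	// Matches the commit's numbers: 1 miss, 2 hits, 1 cache entry.
	fmt.Println(r.misses, r.hits, len(r.cache))
}
```

Running the loop reproduces the "after" metrics from the message: only the validation render ever touches the cache, so identical prebuilds share a single entry.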
Callum Styan 1c4b645b43 test: add prebuild cache test to verify dual-render behavior
This test confirms that the cache correctly handles the prebuild flow where
ResolveParameters() calls Render() twice per build. With identical preset
parameters across multiple prebuilds, we get the expected metrics:
- First prebuild: 2 misses (one for empty params, one for preset params)
- Subsequent prebuilds: 2 hits each (both Render calls hit the same cache entries)

The user's actual deployment shows more cache misses and entries than expected,
indicating that parameter values differ between prebuild instances. This suggests
parameters are being modified or injected somewhere in the prebuild creation flow.
2025-12-11 19:42:37 +00:00
Callum Styan 4f9b23a11c fix: close render cache even when reconciler never ran
The reconciler's Stop() method was returning early if the reconciler
was never started (running == false), which meant the render cache
cleanup goroutine was never stopped.

This happened in tests that create reconcilers but never call Run() on
them. When Stop() was called in t.Cleanup(), it would skip closing the
render cache, causing goroutine leaks.

Fix: Move renderCache.Close() to execute before the running check, so
the cleanup goroutine is always stopped regardless of whether the
reconciler was ever started.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-11 06:38:12 +00:00
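The ordering fix above can be sketched as a minimal `Stop()` method, assuming simplified types (the real reconciler and cache live in coderd's prebuilds package; everything here is illustrative):

```go
// Sketch of the Stop() ordering fix: Close() the render cache before the
// early "never started" return, so the cleanup goroutine stops even if
// Run() was never called.
package main

import (
	"fmt"
	"sync"
)

type renderCache struct {
	closeOnce sync.Once
	done      chan struct{}
}

func newRenderCache() *renderCache {
	c := &renderCache{done: make(chan struct{})}
	go func() { <-c.done }() // stand-in for the periodic cleanup goroutine
	return c
}

func (c *renderCache) Close() { c.closeOnce.Do(func() { close(c.done) }) }

type reconciler struct {
	mu      sync.Mutex
	running bool
	cache   *renderCache
}

func (r *reconciler) Stop() {
	// Close the cache first: this must happen even when the reconciler
	// never ran, otherwise the cleanup goroutine leaks.
	r.cache.Close()

	r.mu.Lock()
	defer r.mu.Unlock()
	if !r.running {
		return // previously this early return skipped cache cleanup
	}
	r.running = false
	// ... stop the reconciliation loop ...
}

func main() {
	r := &reconciler{cache: newRenderCache()}
	r.Stop() // never started, but the cache goroutine still shuts down
	select {
	case <-r.cache.done:
		fmt.Println("cache closed")
	default:
		fmt.Println("goroutine leaked")
	}
}
```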
Callum Styan 85bbae697a fix: add Close() to RenderCache interface for proper cleanup
The renderCache field in StoreReconciler was using the concrete type
*RenderCacheImpl instead of the RenderCache interface, preventing
proper cleanup of cache goroutines.

Changes:
- Add Close() method to RenderCache interface
- Implement Close() as no-op in noopRenderCache
- Update StoreReconciler.renderCache to use interface type
- Update wsbuilder.Builder.renderCache to use interface type
- Remove unnecessary nil check in reconciler Stop() method

This ensures all render cache cleanup goroutines are properly stopped
when reconcilers are stopped, fixing goleak test failures.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-11 04:02:35 +00:00
Callum Styan bea92ad397 fix: add missing context import and remaining cleanup calls
- Add missing context import to metricscollector_test.go
- Add cleanup calls for all remaining NewStoreReconciler instances
  in reconcile_test.go, including those in goroutines

This completes the fix for goroutine leaks in render cache tests.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-11 03:44:24 +00:00
Callum Styan ba4e5085d6 fix: cleanup render cache goroutines in tests
Previously, tests creating StoreReconciler instances would leak cleanup
goroutines because they never called Stop() to close the render cache.
This caused goleak test failures.

Add t.Cleanup() calls to all tests creating reconcilers to ensure the
render cache cleanup goroutines are properly stopped.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-10 22:39:19 +00:00
Callum Styan f7daab4da3 fix missing Close calls in tests
Signed-off-by: Callum Styan <callumstyan@gmail.com>
2025-12-10 22:07:49 +00:00
Callum Styan 890a058443 fix lint and fmt, required refactoring of the metrics testing
Signed-off-by: Callum Styan <callumstyan@gmail.com>
2025-12-10 18:24:38 +00:00
Callum Styan a54f1ae3ed refactor: make render cache non-nullable with interface
Changes renderCache from a pointer to an interface to eliminate nil checks:

- Created RenderCache interface with get/put methods
- Renamed concrete type to RenderCacheImpl
- Added noopRenderCache for default no-op behavior
- Removed all nil checks in Render() method

This simplifies the code by ensuring renderCache is always present,
either as a real cache with metrics or as a no-op implementation.
The nil checks were inconsistent with other non-lazy-loaded fields
and added unnecessary complexity at each usage site.
2025-12-10 00:18:50 +00:00
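The interface-plus-noop pattern above can be sketched like this. The interface and type names mirror the commit message, but the method shapes are assumptions:

```go
// Sketch of the non-nullable cache refactor: callers always hold a
// RenderCache interface, so the noop implementation replaces nil checks.
package main

import "fmt"

type RenderCache interface {
	get(key string) (string, bool)
	put(key, value string)
}

// RenderCacheImpl is the real cache backed by a map.
type RenderCacheImpl struct{ entries map[string]string }

func (c *RenderCacheImpl) get(key string) (string, bool) {
	v, ok := c.entries[key]
	return v, ok
}
func (c *RenderCacheImpl) put(key, value string) { c.entries[key] = value }

// noopRenderCache is the default: every get misses, every put is dropped.
type noopRenderCache struct{}

func (noopRenderCache) get(string) (string, bool) { return "", false }
func (noopRenderCache) put(string, string)        {}

// render no longer needs `if cache != nil` at each call site.
func render(cache RenderCache, key string) string {
	if v, ok := cache.get(key); ok {
		return v
	}
	v := "rendered:" + key
	cache.put(key, v)
	return v
}

func main() {
	impl := &RenderCacheImpl{entries: map[string]string{}}
	fmt.Println(render(impl, "a"), render(impl, "a")) // second call is a hit
	fmt.Println(render(noopRenderCache{}, "a"))       // always a miss
}
```

The design choice is the usual null-object pattern: the zero-behavior implementation satisfies the same interface, so call sites stay branch-free.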
Callum Styan 318f692837 feat: refresh cache entry timestamp on hits
Updates the cache entry timestamp on every successful cache hit,
implementing LRU-like behavior where frequently accessed entries
stay in the cache longer.

This ensures that actively used template versions remain cached
even if they were initially added long ago, while rarely accessed
entries expire and get cleaned up.

Adds TestRenderCache_TimestampRefreshOnHit to verify that:
- Cache hits refresh the entry timestamp
- Refreshed entries remain valid beyond their original TTL
- Entries eventually expire after no access for the full TTL period
2025-12-09 23:12:54 +00:00
Callum Styan 353adc78c8 feat: add TTL-based cache cleanup for render cache
Implements automatic expiration and cleanup of cache entries:
- 1 hour TTL for cached entries
- Periodic cleanup every 15 minutes to remove expired entries
- Expired entries are treated as cache misses on get()
- Cache properly closed when reconciler stops to clean up goroutine

Uses quartz.Clock instead of time package for testability, allowing
tests to advance the clock without time.Sleep().

This prevents unbounded cache growth while maintaining high hit rates
for frequently rendered template versions.
2025-12-09 22:47:44 +00:00
Callum Styan acd9bed98e feat: add Prometheus metrics for render cache observability
Adds cache hits, misses, and size metrics to track render cache
performance and enable monitoring of the cache's effectiveness
in reducing expensive terraform parsing operations.

Metrics added:
- coderd_prebuilds_render_cache_hits_total: Counter for cache hits
- coderd_prebuilds_render_cache_misses_total: Counter for cache misses
- coderd_prebuilds_render_cache_size_entries: Gauge for current cache size

The metrics are optional and only created when a Prometheus registerer
is provided to the reconciler.
2025-12-09 22:21:56 +00:00
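The "metrics are optional" pattern above can be sketched with the standard library only. The real code wires a prometheus.Registerer; here a plain map stands in for it, and the nil-receiver counters make metric calls safe no-ops when no registerer is provided:

```go
// Stdlib-only sketch of optional metrics: counters are created and
// registered only when a registry is supplied; otherwise every Inc() is
// a no-op instead of a nil-pointer panic.
package main

import "fmt"

type counter struct{ n int }

// Inc is safe on a nil receiver, so unregistered metrics are no-ops.
func (c *counter) Inc() {
	if c != nil {
		c.n++
	}
}

type metrics struct {
	hits   *counter
	misses *counter
}

func newMetrics(registry map[string]*counter) *metrics {
	m := &metrics{}
	if registry != nil {
		m.hits = &counter{}
		m.misses = &counter{}
		// Metric names from the commit message.
		registry["coderd_prebuilds_render_cache_hits_total"] = m.hits
		registry["coderd_prebuilds_render_cache_misses_total"] = m.misses
	}
	return m
}

func main() {
	reg := map[string]*counter{}
	m := newMetrics(reg)
	m.hits.Inc()
	m.hits.Inc()
	m.misses.Inc()
	fmt.Println(reg["coderd_prebuilds_render_cache_hits_total"].n,
		reg["coderd_prebuilds_render_cache_misses_total"].n)

	// Without a registry, metric calls are no-ops rather than panics.
	newMetrics(nil).hits.Inc()
}
```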
Callum Styan 72d711dde2 feat: wire render cache into prebuilds reconciler
This commit integrates the render cache into the prebuilds
reconciliation flow, enabling cache sharing across all prebuild
operations.

Changes:

1. StoreReconciler:
   - Add renderCache field (*dynamicparameters.RenderCache)
   - Initialize cache in NewStoreReconciler()
   - Pass cache to builder in provision() method

2. wsbuilder.Builder:
   - Add renderCache field
   - Add RenderCache(cache) builder method
   - Pass cache to dynamicparameters.Prepare() when available

How it works:
- Single RenderCache instance shared across all prebuilds
- Each workspace build passes the cache through the builder chain
- Cache keyed by (templateVersionID, ownerID, parameterHash)
- All prebuilds use PrebuildsSystemUserID, maximizing cache hits

Expected behavior for a preset with 5 instances:
- First instance: cache miss → full render → cache stored
- Instances 2-5: cache hit → instant return
- Result: ~80% reduction in render operations per cycle

This addresses the resource cost identified in profiling where the
prebuilds path consumed 25% CPU and 50% memory allocations, primarily
from repeated Terraform file parsing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-09 21:47:55 +00:00
Callum Styan 22f5008d3a feat: add render cache for dynamic parameters
This commit introduces an in-memory cache for preview.Preview results
to avoid expensive Terraform file parsing on repeated renders with
identical inputs.

Key components:

1. RenderCache: Thread-safe cache keyed by (templateVersionID, ownerID,
   parameterHash) that stores preview.Output results

2. Integration with dynamicRenderer.Render():
   - Check cache before calling preview.Preview()
   - Store successful renders in cache
   - Return cached results on cache hit

3. Comprehensive tests:
   - Basic cache operations (get/put)
   - Cache key separation (different versions/owners/params)
   - Parameter hash consistency (order-independent)
   - Prebuild scenario simulation

The cache enables significant resource savings for prebuilds where
multiple instances share the same template version and parameters.
All prebuilds use database.PrebuildsSystemUserID, ensuring perfect
cache sharing across instances of the same preset.

Expected impact: ~80-90% cache hit rate for steady-state prebuild
reconciliation cycles, reducing CPU/memory from Terraform parsing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-09 21:47:30 +00:00
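The cache key described in these two commits — (templateVersionID, ownerID, parameterHash) with an order-independent hash — can be sketched as below. The hashing scheme (sort keys, then SHA-256) is an assumption; the commits only state that the hash is order-independent:

```go
// Sketch of the render-cache key: identical parameter sets hash to the
// same value regardless of input order, so prebuilds of the same preset
// share one cache entry.
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

func parameterHash(params map[string]string) string {
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys) // sorting makes the hash independent of input order
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s;", k, params[k])
	}
	return fmt.Sprintf("%x", sha256.Sum256([]byte(b.String())))
}

func cacheKey(templateVersionID, ownerID string, params map[string]string) string {
	return templateVersionID + "/" + ownerID + "/" + parameterHash(params)
}

func main() {
	a := map[string]string{"region": "us", "cpu": "4"}
	b := map[string]string{"cpu": "4", "region": "us"} // same values, different order
	k1 := cacheKey("tv-1", "prebuilds-system-user", a)
	k2 := cacheKey("tv-1", "prebuilds-system-user", b)
	fmt.Println(k1 == k2) // identical presets share one cache entry
}
```

Because all prebuilds run as the same system user, the ownerID component is constant across instances, which is what makes the cache sharing in the commit message possible.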
Mathias Fredriksson 8f15caad22 docs: add dev container screenshots (#21191)
Add screenshots to the dev containers user guide:

- Running dev containers with sub-agents (index.md, working-with-dev-containers.md)
- Discovered dev containers with Start button (index.md)
- Outdated status with rebuild option (working-with-dev-containers.md)
- Display apps disabled (customizing-dev-containers.md)

Also deletes the outdated devcontainer-agent-ports.png.

Refs #21157
2025-12-09 18:08:58 +00:00
david-fraley 84760f4a8f chore(docs): update release cal for patch (#21193) 2025-12-09 17:57:17 +00:00
Atif Ali d92649ae3b fix(site): use FolderIcon instead of TextAlignStartIcon for folders in template editor (#21169)
## Summary

Fixes folder icons in the template editor file tree.


## Changes

- Import `FolderIcon` instead of `TextAlignStartIcon` from lucide-react
- Use `FolderIcon` for folder entries in the file tree

_Generated with `mux`_
2025-12-09 22:32:02 +05:00
George K 4379230a27 feat: add deployment-wide option to disable workspace sharing (#21172)
Adds `--disable-workspace-sharing` option.

Workspace sharing is disabled by not including user and group ACLs in
the workspace RBAC object, which prevents ACL-based authz.

Closes https://github.com/coder/internal/issues/1072

The commit also adds saving of workspace user/group ACLs in the test DB
data generator.
2025-12-09 08:13:09 -08:00
Atif Ali e31578da4b chore(site): replace stop icon with pause icon (#21173) 2025-12-09 20:57:13 +05:00
blinkagent[bot] 4844c978d8 fix: improve task naming prompt to avoid URL content guessing (#21151)
Previously, when a user created a task with a URL-only prompt (e.g.,
`Let's work on https://github.com/coder/coder/issues/21138`), the LLM
would hallucinate what the URL content might be about - generating names
like "Fix GitHub Actions workflow issue" when the actual issue was
unrelated.

Add examples to the task naming system prompt showing expected behavior
for GitHub issue and PR URLs, teaching the model to use visible URL
parts (repo name, issue/PR number) rather than guessing content.

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2025-12-09 09:10:54 -06:00
Ben Potter f6b025ec7b fix(site): show command apps in Tasks view (#21185)
Command apps (terminal apps with `command` field) were being filtered
out from the Tasks view because they have health "disabled" when no
healthcheck is defined.

Updated the filter in `TaskApps.tsx` to allow command apps through
regardless of health status so they appear as tabs alongside web apps.

<img width="1378" height="790" alt="Screenshot 2025-12-08 at 8 10 47 PM"
src="https://github.com/user-attachments/assets/90a73bb2-d49c-4241-a4d5-798e2dd8118b"
/>

Closes https://github.com/coder/coder/issues/21183

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-09 09:04:53 -06:00
blinkagent[bot] 779f1571a6 docs: remove unused _redirects file and document redirect process (#21144)
The _redirects file format is used by Netlify and Cloudflare Pages, but
coder.com runs on Vercel with Next.js. Redirects for coder.com/docs must
be configured in the coder/coder.com repository redirects.json file.

This file was never functional and caused confusion when renaming docs.

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2025-12-09 14:57:55 +00:00
Mathias Fredriksson ea9f003cdd docs: clarify dev containers entry point and reduce callouts (#21188)
The user guide jumped straight into integration details without explaining
what dev containers are. Now it opens with a brief orientation linking to
the spec, then explains this guide covers the Docker-based approach.

Converted several NOTE callouts to prose where they were just cross-references
or stacked unnecessarily. The Envbuilder index note was reframed to lead with
its strengths rather than "we recommend the other thing."

Also updates platform support to Linux only per current status.

Refs #21157
2025-12-09 16:37:19 +02:00
Mathias Fredriksson f3e26ca557 docs: add guidance on when to use Project Discovery for Dev Containers (#21190)
Refs #21157
2025-12-09 16:36:19 +02:00
Jakub Domeracki b04e6870ea chore: upgrade bundled terraform binary version (#21092)
Closes:
https://github.com/coder/internal/issues/1172

---------

Co-authored-by: Dean Sheather <dean@deansheather.com>
2025-12-09 14:49:40 +01:00
Spike Curtis ce9e7ad909 fix(agent): ignore EOF errors during shutdown (#21187)
fixes: https://github.com/coder/internal/issues/1179

The problem in that flake is that dRPC doesn't consistently return
`context.Canceled` if you make an RPC call and then cancel it: sometimes
it returns EOF.

Without this PR, if we get an EOF on one of the routines that uses the
agentapi connection, we tear down the whole connection and reconnect to
coderd --- even if we are in the middle of a graceful shutdown.

What happened in the linked flake is that writing stats failed with EOF,
which then caused us to reconnect and write the lifecycle "SHUTTING
DOWN" twice.
2025-12-09 17:32:38 +04:00
Dean Sheather b199eb1c38 fix: allow stops and deletes after breaching AI limit (#21186)
Fixes a bug a customer encountered once they breached their limit. Adds
a test.
2025-12-09 11:05:12 +00:00
Mathias Fredriksson 97bc7eb9e5 docs: restructure dev container documentation (#21157)
Dev container admin docs were scattered across two locations: the Docker-based
integration under extending-templates/ and Envbuilder under managing-templates/.
There was no landing page explaining that two approaches exist or helping admins
choose between them.

This moves everything under admin/integrations/devcontainers/ with a decision
guide at the top. Dev containers are an integration with the dev container
specification, so integrations/ is a natural fit alongside JFrog, Vault, etc.

Stub pages remain at the original locations for discoverability.

New structure:

  admin/integrations/devcontainers/
  ├── index.md                                # Landing page + decision guide
  ├── integration.md                          # Docker-based dev containers
  └── envbuilder/
      ├── index.md
      ├── add-envbuilder.md
      ├── envbuilder-security-caching.md
      └── envbuilder-releases-known-issues.md

Refs #21080
2025-12-09 13:03:02 +02:00
Ben Potter 244e6ca027 docs: address review comments for DOCS_STYLE_GUIDE.md (#21178)
## Summary

This PR addresses David's review comments from PR #21153 to improve the
Documentation Style Guide.

## Changes

- **Research section**: Updated to focus on reading "recent
documentation" instead of "10+ similar pages" - more practical guidance
- **Premium Feature Callout**: Clarified that the manifest.json badge
addition should happen in `docs/manifest.json`
- **Screenshot Guidelines**: Added context that this is for when
screenshots "don't exist yet", making it clearer this is a temporary
measure
- **Tabs documentation**: Expanded explanation to clarify when tabs are
appropriate (parallel content paths)
- **Coder registry**: Added mention of referencing Coder registry URLs
for cross-linking to external Coder resources

All changes maintain the existing documentation structure while
improving clarity and specificity based on review feedback.

Refs #21153

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-08 20:16:15 -06:00
Asher 3a0e8af6e3 feat: add view workspace button to app error page (#20960)
Closes #19984 

As part of this, I refactored the error template to take in a slice of
actions rather than using individual booleans and strings to control the
behavior.

We decided a link resolves the issue for now so that is what I added,
although we may want to consider a way to start the workspace and follow
the logs dynamically on that page and then show the app when finished
(similar to the tasks page), or at least make the link automatically
start the workspace instead of only taking you to the dashboard where
you have to then start the workspace.
2025-12-08 14:16:00 -09:00
blinkagent[bot] 50d42ab0b9 docs: document 200 OK response for upload file API when file exists (#21071)
Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2025-12-08 16:04:56 -06:00
Jaayden Halko a6285dde5e chore(site): mark MUI components and Stack as deprecated (#20973)
Adds deprecation markers for MUI components and the custom Stack
component to guide migration to shadcn/ui and Tailwind CSS.

Changes:
- Added JSDoc @deprecated tags to Stack component and type definitions
- Added deprecation comments to MUI imports in theme files
- Expanded Biome noRestrictedImports rules to flag all MUI component
imports

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-08 21:11:32 +00:00
Ehab Younes ac1d51aeca fix: export site public API to be used in the VS Code extension (#21165) 2025-12-08 23:02:36 +03:00
Ben Potter 493f77120c docs: add PR description style guide for agents (#21148)
Adds a style guide documenting PR description patterns observed in the
Coder repository. This guide is intended for AI agents to reference when
creating PRs, ensuring consistency with project conventions.

The guide covers title format (Conventional Commits), description
structure (default concise vs. complex structured), what to include
(links, performance context, warnings), and what to avoid (test plans,
benefits sections). Includes examples from recent merged PRs
demonstrating each pattern.

Placed in `.claude/docs/` alongside other agent-specific documentation
(WORKFLOWS.md, ARCHITECTURE.md, etc.) rather than in the main
contributing docs, as this is primarily for automated tooling reference.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-08 19:55:32 +00:00
Ben Potter b228c6b135 feat(dogfood): add gh CLI workflow and GitHub auth for tasks (#21146)
Adds automatic GitHub CLI authentication and workflow instructions to
the dogfood template's Claude Code tasks.

The startup script now authenticates gh CLI using `coder external-auth
access-token github`, eliminating manual authentication. The system
prompt instructs tasks to read GitHub issue details with `gh issue view`
and create feature branches with descriptive names before
implementation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-08 11:12:44 -06:00
Ben Potter 3ae0600174 docs: add documentation style guide for AI agents (#21153)
Adds a comprehensive documentation style guide in
`.claude/docs/DOCS_STYLE_GUIDE.md` documenting patterns observed across
Coder's existing documentation. This guide is intended for AI agents to
reference when writing documentation, ensuring consistency with project
conventions.

The guide covers research requirements (code verification, permissions
model, UI thresholds), document structure (titles, premium callouts,
overview sections), image usage (placement, captions, screenshot-driven
organization), content organization, writing style, code examples,
accuracy standards (specific numbers, permission actions, API
endpoints), manifest requirements, and proactive documentation
approaches.

Placed in `.claude/docs/` alongside other agent-specific documentation
(WORKFLOWS.md, ARCHITECTURE.md, etc.) and imported in CLAUDE.md to
ensure it's automatically loaded into context for documentation work.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-08 11:12:25 -06:00
Jaayden Halko 731683ab4f chore: display the starting date when the license becomes active (#21162)
Display the "not before" date from the licenses endpoint as a "Valid
from" license date in the UI.
2025-12-08 15:21:02 +00:00
Mathias Fredriksson 7fc8ee4c60 test(cli/cliui): add test for context cancellation during log streaming (#21125)
Verifies that streamLogs properly returns ctx.Err() when the context is
cancelled while waiting for logs. This covers the case where a user
interrupts an SSH connection (e.g., Ctrl+C) during startup script
execution.

Refs #21104
2025-12-08 14:17:25 +00:00
Mathias Fredriksson d351821ec3 fix(cli/cliui): skip startup script logs when Wait=false (#21105)
When users pass --wait=no or set CODER_SSH_WAIT=no, startup logs are no
longer dumped to stderr. The stage indicator is still shown, just not
the log content.

Fixes #13580
2025-12-08 14:11:47 +00:00
Mathias Fredriksson 0c453d7f8e refactor(cli/cliui): extract agentWaiter struct from agent connection state machine (#21104)
The Agent function had complex nested control flow and cross-case state sharing
via the showStartupLogs flag. This made the code hard to follow and maintain.

This change extracts an agentWaiter struct with self-contained methods:

- wait: main state machine loop
- waitForConnection: handles Connecting/Timeout states
- handleConnected: handles Connected state and startup scripts
- streamLogs: handles log streaming/fetching
- waitForReconnection: handles Disconnected state
- pollWhile: helper to consolidate polling loops

Each handler is now self-contained with no cross-method state sharing and the 
showStartupLogs flag is replaced by return values and the waitedForConnection
tracking variable.
2025-12-08 14:00:25 +00:00
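The refactor's shape can be sketched as a small state machine. All names are illustrative; the real agentWaiter lives in cli/cliui and handles many more transitions:

```go
// Rough sketch of the agentWaiter pattern: a wait loop dispatches to
// self-contained handlers, and each handler returns the next state
// instead of mutating a shared showStartupLogs-style flag.
package main

import "fmt"

type state int

const (
	connecting state = iota
	connected
	disconnected
	done
)

type agentWaiter struct{ transitions []state }

// waitForConnection handles the Connecting/Timeout states.
func (w *agentWaiter) waitForConnection() state { return connected }

// handleConnected handles the Connected state and startup scripts,
// communicating the outcome via its return value.
func (w *agentWaiter) handleConnected() state {
	w.streamLogs()
	return done
}

// streamLogs handles log streaming/fetching.
func (w *agentWaiter) streamLogs() {}

// wait is the main state machine loop.
func (w *agentWaiter) wait() {
	s := connecting
	for s != done {
		w.transitions = append(w.transitions, s)
		switch s {
		case connecting, disconnected:
			s = w.waitForConnection()
		case connected:
			s = w.handleConnected()
		}
	}
}

func main() {
	w := &agentWaiter{}
	w.wait()
	fmt.Println(w.transitions) // visited states: connecting, then connected
}
```

Returning the next state from each handler is what removes the cross-case flag sharing the commit describes: the loop is the only place that holds control-flow state.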
Ethan 04d5ff88e4 test: bump TestAgent_SessionTTYShell timeout (#21155)
## Problem

The `TestAgent_SessionTTYShell` test was flaking on macOS CI runners
with:

```
match deadline exceeded: context deadline exceeded (wanted 1 bytes; got 0: "")
```

The test uses `WaitShort` (10s) for the context timeout when waiting for
shell prompt output via `Peek(ctx, 1)`. On slow macOS CI runners, the
shell startup can exceed this timeout due to resource contention.

This is evidenced in the failure logs, the SSH session was not reported
by the agent until the 10s timeout is nearly up - it took a while to
connect.

## Solution

Increase the timeout from `WaitShort` (10s) to `WaitMedium` (30s). This
matches the timeout used by `ExpectMatch` internally and gives the shell
more time to initialize on slow CI machines.

---

This PR was entirely generated by [mux](https://github.com/coder/mux)
but reviewed by a human.

Closes https://github.com/coder/internal/issues/1177
2025-12-09 00:48:47 +11:00
Ehab Younes 52243557a2 fix: do not log CSRF error in Electron environments (#21054)
Closes #20914
2025-12-08 16:24:59 +03:00
dependabot[bot] 4d15b30a63 chore: bump golang.org/x/sync from 0.18.0 to 0.19.0 in the x group (#21159)
Bumps the x group with 1 update:
[golang.org/x/sync](https://github.com/golang/sync).

Updates `golang.org/x/sync` from 0.18.0 to 0.19.0
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/golang/sync/commits">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/sync&package-manager=go_modules&previous-version=0.18.0&new-version=0.19.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 12:21:03 +00:00
dependabot[bot] 8a3f8373e8 chore: bump alpine from 3.22.2 to 3.23.0 in /scripts (#21160)
Bumps alpine from 3.22.2 to 3.23.0.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=alpine&package-manager=docker&previous-version=3.22.2&new-version=3.23.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 12:19:16 +00:00
Jake Howell 25400fedca fix: update colors and theme in error.html (#20941)
### Description

This pull request ensures that we're using the right colors (and theming
things within the actual Coder brand) on the `error.html` page.
Furthermore, I cleaned up the CSS variables and converted all `px` units
to a standard `rem` unit (16px base).

### Preview

<img width="3516" height="2388" alt="CleanShot 2025-12-02 at 11 09
55@2x"
src="https://github.com/user-attachments/assets/781623ea-a487-4a2e-a08e-bec86d6de6f5"
/>
2025-12-08 14:32:18 +11:00
Andrew Aquino 82bb833099 docs: update AI Bridge description for H2 2025 (#21126)
@suman-bisht This file is the source for docs pages' SEO description +
automatically generated preview image on coder.com
2025-12-05 14:36:39 -08:00
Mathias Fredriksson 61beb7bfa8 docs: rewrite dev containers documentation for GA (#21080)
docs: rewrite dev containers documentation for GA

Corrects inaccuracies in SSH examples (deprecated `--container` flag),
port forwarding (native sub-agent forwarding is primary), and
prerequisites (dev containers are on by default). Fixes template
descriptions: docker-devcontainer uses native Dev Containers while
AWS/Kubernetes templates use Envbuilder.

Renames admin docs folder from `devcontainers/` to `envbuilder/` to
reflect actual content. Adds customization guide documenting agent
naming, display apps, custom apps, and variable interpolation. Documents
multi-repo workspace support and adds note about Terraform module
limitations with sub-agents. Fixes module registry URLs.

Refs #18907
2025-12-05 19:42:16 +02:00
blinkagent[bot] b4be5bcfed docs: fix swagger tags for license endpoints (#21101)
## Summary

Change `@Tags` from `Organizations` to `Enterprise` for `POST /licenses`
and `POST /licenses/refresh-entitlements` to match the `GET` and
`DELETE` license endpoints which are already tagged as `Enterprise`.

## Problem

The license API endpoints were inconsistently tagged in the swagger
annotations:
- `GET /licenses` → `Enterprise` ✓
- `DELETE /licenses/{id}` → `Enterprise` ✓
- `POST /licenses` → `Organizations` ✗
- `POST /licenses/refresh-entitlements` → `Organizations` ✗

This caused the POST endpoints to be documented in the [Organizations
API docs](https://coder.com/docs/reference/api/organizations) instead of
the [Enterprise API
docs](https://coder.com/docs/reference/api/enterprise) where the other
license endpoints live.

## Fix

Updated the `@Tags` annotation from `Organizations` to
`Enterprise` for both `POST` endpoints.

This was an oversight from the original swagger docs addition in #5625
(January 2023).

Co-authored-by: blink-so[bot] <211532188+blink-so[bot]@users.noreply.github.com>
2025-12-05 15:27:22 +00:00
dependabot[bot] ceaba0778e chore: bump github.com/brianvoe/gofakeit/v7 from 7.9.0 to 7.12.1 (#21096)
Bumps
[github.com/brianvoe/gofakeit/v7](https://github.com/brianvoe/gofakeit)
from 7.9.0 to 7.12.1.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/6cb292655fc27bb9c5d81cb2bf2c5596313740e3"><code>6cb2926</code></a>
go mod - change back to 1.22</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/5a2657afecd681df23c4dd8664216db5fe26c805"><code>5a2657a</code></a>
lookup - updated keywords to not have the display name in the keywords
list</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/67d31c1b536d7bfe3e8a34d954afc6e9583b15ff"><code>67d31c1</code></a>
person - added ethnicity</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/ae0414ce24587529911ca8de6b886d7d9810c0ef"><code>ae0414c</code></a>
lookup - added test to make sure display is not in keywords</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/6f01995ab4278531359da913268f2e058becbb92"><code>6f01995</code></a>
user agent - added api user agent</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/e4c26beca835cc4262566104db199c6061ae02d2"><code>e4c26be</code></a>
person - added age</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/dcf08aee21636c27ef87ca9b4b5424af85fa4991"><code>dcf08ae</code></a>
funclookup concurrency - cleaned up focused verbage more on concurrency.
also...</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/1a45942dacb4b0b98d3e7e563033f87f1d34c364"><code>1a45942</code></a>
Merge pull request <a
href="https://redirect.github.com/brianvoe/gofakeit/issues/390">#390</a>
from AdamDrewsTR/race-conditions-fix</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/c03d3c83a271e9d543eab20387263adfd639037b"><code>c03d3c8</code></a>
fix: resolve concurrent map read/write race conditions by implementing
RWMute...</li>
<li><a
href="https://github.com/brianvoe/gofakeit/commit/a0380da0d95ea14b94e21a7a1d953e595eb5226f"><code>a0380da</code></a>
refactor: consolidate race condition tests into concurrency_test.go and
remov...</li>
<li>Additional commits viewable in <a
href="https://github.com/brianvoe/gofakeit/compare/v7.9.0...v7.12.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/brianvoe/gofakeit/v7&package-manager=go_modules&previous-version=7.9.0&new-version=7.12.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-05 15:13:33 +00:00
Paweł Banaszewski e24cc5e6da feat: add tracing to aibridge (#21106)
Adds tracing for AIBridge.
Updates the github.com/coder/aibridge version from `v0.2.2` to `v0.3.0`.

Depends on: https://github.com/coder/aibridge/pull/63
Fixes: https://github.com/coder/aibridge/issues/26

---------

Co-authored-by: Danny Kopping <danny@coder.com>
2025-12-05 15:59:52 +01:00
Danny Kopping 259dee2ea8 fix: move contexts to appropriate locations (#21121)
Closes https://github.com/coder/internal/issues/1173,
https://github.com/coder/internal/issues/1174

Currently these two tests are flaky because the contexts were created
before a potentially long-running process. By the time a context was
actually used, it may already have timed out, leading to confusing failures.
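The failure mode can be sketched as follows (a Python stand-in using a monotonic deadline; the actual tests are in Go, where the fix moves `context.WithTimeout` after the long-running step):

```python
import time

TIMEOUT = 0.05  # seconds; stand-in for a test context timeout


def remaining_budget(create_deadline_first: bool, setup_seconds: float = 0.1) -> float:
    """Return the time budget left when the context is first used.

    If the deadline is created before slow setup, the budget is already
    spent by the time the test gets to use it.
    """
    if create_deadline_first:
        # Anti-pattern: deadline starts ticking before the slow setup runs.
        deadline = time.monotonic() + TIMEOUT
        time.sleep(setup_seconds)  # potentially long-running process
    else:
        # Fixed pattern: run setup first, then derive the deadline.
        time.sleep(setup_seconds)
        deadline = time.monotonic() + TIMEOUT
    return deadline - time.monotonic()


# Deadline created first: the budget is already exhausted (negative).
assert remaining_budget(True) < 0
# Deadline created after setup: the full budget remains.
assert remaining_budget(False) > 0
```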

Additionally, the `ExpectMatch` calls were using a background context
rather than the test context. I've marked that func as deprecated
because we should always tie these to the test context.

Special thanks to @mafredri for the brain probe 🧠

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2025-12-05 13:14:35 +00:00
DevCats 8e0516a19c chore: add support for antigravity external app protocol (#20873)
Adds `antigravity` to the allowed protocols list.

Related PR:
https://github.com/coder/registry/pull/558
2025-12-05 13:09:58 +00:00
Jakub Domeracki 770fdb377c chore: update react to apply patch for CVE-2025-55182 (#21084)
Reference:

https://react.dev/blog/2025/12/03/critical-security-vulnerability-in-react-server-components
2025-12-05 09:49:59 +01:00
Callum Styan 83dbf73dde perf: don't calculate build times for deleted templates (#21072)
The metrics cache to calculate and expose build time metrics for
templates currently calls `GetTemplates`, which returns all templates
even if they are deleted. We can use the `GetTemplatesWithFilter` query
to easily filter out deleted templates from the results, and thus not
call `GetTemplateAverageBuildTime` for those deleted templates. Delete
time for workspaces for non-deleted templates is still calculated.
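The filtering amounts to the following (an illustrative Python sketch; the real change pushes a deleted-equals-false filter into the `GetTemplatesWithFilter` database query rather than filtering in application code):

```python
def active_templates(templates: list[dict]) -> list[dict]:
    """Keep only non-deleted templates before computing build-time metrics."""
    return [t for t in templates if not t.get("deleted", False)]


rows = [
    {"name": "docker", "deleted": False},
    {"name": "legacy", "deleted": True},
]
print([t["name"] for t in active_templates(rows)])  # ['docker']
```

Filtering in the query avoids fetching deleted rows at all, so the per-template average build time lookups are skipped for them entirely.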

Signed-off-by: Callum Styan <callumstyan@gmail.com>
2025-12-04 10:27:56 -08:00
Mathias Fredriksson 0ab23abb19 refactor(site): convert workspace batch delete dialog to Tailwind CSS (#20946)
Converts from Emotion to Tailwind CSS, based on the tasks batch delete
dialog implementation.

Also propagates simplifications back to the tasks dialog:
- Use `border-border` instead of hardcoded color variants
- Use `max-h-48` instead of specific `max-h-[184px]`
- Add cancel button to workspaces dialog

Refs #20905
2025-12-04 20:10:21 +02:00
david-fraley c4bf5a2d81 docs: add ESR to Release Channels (#21060) 2025-12-04 11:43:32 -06:00
Mathias Fredriksson 5cb02a6cc0 refactor(site): remove redundant client-side sorting of app statuses (#21102)
Depends on #21099
2025-12-04 18:55:45 +02:00
Mathias Fredriksson cfdd4a9b88 perf(coderd/database): add index on workspace_app_statuses.app_id (#21099) 2025-12-04 17:56:13 +02:00
Mathias Fredriksson d9159103cd fix(agent/agentcontainers): broadcast devcontainer dirty status over websocket (#21100) 2025-12-04 16:11:03 +02:00
Mathias Fredriksson 532a1f3054 fix(coderd): exclude sub-agents from workspace health calculation (#21098) 2025-12-04 15:38:24 +02:00
dependabot[bot] 6aeb144a98 chore: bump google.golang.org/api from 0.256.0 to 0.257.0 (#21094)
Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.256.0 to 0.257.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/releases">google.golang.org/api's
releases</a>.</em></p>
<blockquote>
<h2>v0.257.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.256.0...v0.257.0">0.257.0</a>
(2025-12-02)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3376">#3376</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b0c07d2f5cc4aa2cf974c2938508626f8430855e">b0c07d2</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3380">#3380</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/47fcc39088f806c4202ca47159416ce99a0a0c72">47fcc39</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3381">#3381</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/cf5cf20d07fac3acc66c1f9ade705bb99701519a">cf5cf20</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3382">#3382</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/2931d4b217c6934f85bdc378ebbbbe4fa54db96d">2931d4b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3383">#3383</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/446402e7d6aedbe169505c07aafcf45e96563a8e">446402e</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3384">#3384</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/d82a5d02f83b3455f747cbb1fb14930703dad60e">d82a5d0</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3386">#3386</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/6a0b46d49312d528dab4dce8daee48866f38ba25">6a0b46d</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3387">#3387</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/f3dc8f4bd57ade8c6ffb37cda8d55289228ebcd1">f3dc8f4</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3388">#3388</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/e3ca7fd5738afd1a8aa046431ef005c48e701358">e3ca7fd</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3389">#3389</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b78dd96b2c603926daca6c30baae9c4843bf5664">b78dd96</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md">google.golang.org/api's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.256.0...v0.257.0">0.257.0</a>
(2025-12-02)</h2>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3376">#3376</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b0c07d2f5cc4aa2cf974c2938508626f8430855e">b0c07d2</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3380">#3380</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/47fcc39088f806c4202ca47159416ce99a0a0c72">47fcc39</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3381">#3381</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/cf5cf20d07fac3acc66c1f9ade705bb99701519a">cf5cf20</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3382">#3382</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/2931d4b217c6934f85bdc378ebbbbe4fa54db96d">2931d4b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3383">#3383</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/446402e7d6aedbe169505c07aafcf45e96563a8e">446402e</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3384">#3384</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/d82a5d02f83b3455f747cbb1fb14930703dad60e">d82a5d0</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3386">#3386</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/6a0b46d49312d528dab4dce8daee48866f38ba25">6a0b46d</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3387">#3387</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/f3dc8f4bd57ade8c6ffb37cda8d55289228ebcd1">f3dc8f4</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3388">#3388</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/e3ca7fd5738afd1a8aa046431ef005c48e701358">e3ca7fd</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3389">#3389</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b78dd96b2c603926daca6c30baae9c4843bf5664">b78dd96</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/3e8573d81048b9abb10fe28e0cbe9623fdb4668a"><code>3e8573d</code></a>
chore(main): release 0.257.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3377">#3377</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/61592252846bfa757ebd4a68e461bd96381984cf"><code>6159225</code></a>
chore(all): update all to 79d6a2a (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3390">#3390</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/b78dd96b2c603926daca6c30baae9c4843bf5664"><code>b78dd96</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3389">#3389</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/e3ca7fd5738afd1a8aa046431ef005c48e701358"><code>e3ca7fd</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3388">#3388</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/f3dc8f4bd57ade8c6ffb37cda8d55289228ebcd1"><code>f3dc8f4</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3387">#3387</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/6a0b46d49312d528dab4dce8daee48866f38ba25"><code>6a0b46d</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3386">#3386</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/a07da21fe2e45130d2e9cb1494f98a7fe2420ba1"><code>a07da21</code></a>
chore(deps): bump golang.org/x/crypto from 0.43.0 to 0.45.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3379">#3379</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/35b0966d9ce6b797c6e63730066ecec3292ade34"><code>35b0966</code></a>
chore(deps): bump golang.org/x/crypto from 0.37.0 to 0.45.0 in
/internal/koko...</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/d82a5d02f83b3455f747cbb1fb14930703dad60e"><code>d82a5d0</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3384">#3384</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/8c1e205890063d558cb41bfc1bf806a9385c2126"><code>8c1e205</code></a>
chore(all): update all (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3374">#3374</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/googleapis/google-api-go-client/compare/v0.256.0...v0.257.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/api&package-manager=go_modules&previous-version=0.256.0&new-version=0.257.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-04 10:27:14 +00:00
dependabot[bot] f94d8fc019 chore: bump github.com/aws/smithy-go from 1.23.2 to 1.24.0 (#21095)
Bumps [github.com/aws/smithy-go](https://github.com/aws/smithy-go) from
1.23.2 to 1.24.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/aws/smithy-go/blob/main/CHANGELOG.md">github.com/aws/smithy-go's
changelog</a>.</em></p>
<blockquote>
<h1>Release (2025-12-01)</h1>
<h2>General Highlights</h2>
<ul>
<li><strong>Dependency Update</strong>: Updated to the latest SDK module
versions</li>
</ul>
<h2>Module Highlights</h2>
<ul>
<li><code>github.com/aws/smithy-go</code>: v1.24.0
<ul>
<li><strong>Feature</strong>: Improve allocation footprint of the
middleware stack. This should convey a ~10% reduction in allocations per
SDK request.</li>
</ul>
</li>
</ul>
<h1>Release (2025-11-03)</h1>
<h2>General Highlights</h2>
<ul>
<li><strong>Dependency Update</strong>: Updated to the latest SDK module
versions</li>
</ul>
<h2>Module Highlights</h2>
<ul>
<li><code>github.com/aws/smithy-go</code>: v1.23.2
<ul>
<li><strong>Bug Fix</strong>: Adjust the initial sizes of each
middleware phase to avoid some unnecessary reallocation.</li>
<li><strong>Bug Fix</strong>: Avoid unnecessary allocation overhead from
the metrics system when not in use.</li>
</ul>
</li>
</ul>
<h1>Release (2025-10-15)</h1>
<h2>General Highlights</h2>
<ul>
<li><strong>Dependency Update</strong>: Bump minimum go version to
1.23.</li>
<li><strong>Dependency Update</strong>: Updated to the latest SDK module
versions</li>
</ul>
<h1>Release (2025-09-18)</h1>
<h2>Module Highlights</h2>
<ul>
<li><code>github.com/aws/smithy-go/aws-http-auth</code>: <a
href="https://github.com/aws/smithy-go/blob/main/aws-http-auth/CHANGELOG.md#v110-2025-09-18">v1.1.0</a>
<ul>
<li><strong>Feature</strong>: Added support for SIG4/SIGV4A querystring
authentication.</li>
</ul>
</li>
</ul>
<h1>Release (2025-08-27)</h1>
<h2>General Highlights</h2>
<ul>
<li><strong>Dependency Update</strong>: Updated to the latest SDK module
versions</li>
</ul>
<h2>Module Highlights</h2>
<ul>
<li><code>github.com/aws/smithy-go</code>: v1.23.0
<ul>
<li><strong>Feature</strong>: Sort map keys in JSON Document types.</li>
</ul>
</li>
</ul>
<h1>Release (2025-07-24)</h1>
<h2>General Highlights</h2>
<ul>
<li><strong>Dependency Update</strong>: Updated to the latest SDK module
versions</li>
</ul>
<h2>Module Highlights</h2>
<ul>
<li><code>github.com/aws/smithy-go</code>: v1.22.5
<ul>
<li><strong>Feature</strong>: Add HTTP interceptors.</li>
</ul>
</li>
</ul>
<h1>Release (2025-06-16)</h1>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/aws/smithy-go/commit/71f5bff362491399f8a2cca586c5802eb5a66d70"><code>71f5bff</code></a>
Release 2025-12-01</li>
<li><a
href="https://github.com/aws/smithy-go/commit/c94c177cfcf46095d48a88253899242f5971ae1b"><code>c94c177</code></a>
changelog</li>
<li><a
href="https://github.com/aws/smithy-go/commit/0cc0b1c115aede116e0a5b901f195fef2ea2567a"><code>0cc0b1c</code></a>
convert middleware steps to linked lists (<a
href="https://redirect.github.com/aws/smithy-go/issues/617">#617</a>)</li>
<li><a
href="https://github.com/aws/smithy-go/commit/ed49224f03828a26529458a48ff56b9b0b4db45e"><code>ed49224</code></a>
Add param binding error check in auth scheme resolution to avoid panic
(<a
href="https://redirect.github.com/aws/smithy-go/issues/619">#619</a>)</li>
<li><a
href="https://github.com/aws/smithy-go/commit/0e0b20cb123137d985083894df55fdbdbe3ce332"><code>0e0b20c</code></a>
add discrete tests for initialize step (<a
href="https://redirect.github.com/aws/smithy-go/issues/618">#618</a>)</li>
<li><a
href="https://github.com/aws/smithy-go/commit/ddbac1c617ac6bea513c16923e7883b1439b2a34"><code>ddbac1c</code></a>
only add interceptors if configured (<a
href="https://redirect.github.com/aws/smithy-go/issues/616">#616</a>)</li>
<li><a
href="https://github.com/aws/smithy-go/commit/798bf4fa874b68b13350766bf270d3b868e8abcf"><code>798bf4f</code></a>
remove pointless trace spans (<a
href="https://redirect.github.com/aws/smithy-go/issues/615">#615</a>)</li>
<li><a
href="https://github.com/aws/smithy-go/commit/dc545a769d214b08bd69e93fffc40a962b815c56"><code>dc545a7</code></a>
don't create op metrics context if not in use (<a
href="https://redirect.github.com/aws/smithy-go/issues/613">#613</a>)</li>
<li><a
href="https://github.com/aws/smithy-go/commit/6f12c095f5277d7e682217bcfd50bab607b193ab"><code>6f12c09</code></a>
add host label validation for region before ep resolution (<a
href="https://redirect.github.com/aws/smithy-go/issues/612">#612</a>)</li>
<li>See full diff in <a
href="https://github.com/aws/smithy-go/compare/v1.23.2...v1.24.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/aws/smithy-go&package-manager=go_modules&previous-version=1.23.2&new-version=1.24.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-04 10:26:32 +00:00
dependabot[bot] e93a917c2f chore: bump the coder-modules group across 3 directories with 7 updates (#21093)

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-04 10:25:01 +00:00
Jakub Domeracki 0f054096e4 chore: enforce cooldown period (#21079)
Closes:
https://github.com/coder/internal/issues/1170

Changes in scope:
- [enforce cooldown
period](https://github.com/coder/coder/commit/829792e37fbe25b119fa658efd6d310dd507d7d6)
- [remove obsolete reviewers
entry](https://github.com/coder/coder/commit/4875992bc6323249353518d4e361b3f7244c8a71)

Reference:

https://github.blog/changelog/2025-04-29-dependabot-reviewers-configuration-option-being-replaced-by-code-owners/
2025-12-04 11:13:19 +01:00
Mathias Fredriksson 2f829286f2 fix(site): simplify bulk task delete confirmation UI (#20979)
Reduce from 3 confirmation stages to 2 by removing the redundant
"resources" stage. The final button now shows "Delete N tasks and M
workspaces" directly, so users still see what will be deleted.

Also add a Cancel button to match the single task delete dialog UX.

Refs #20905
2025-12-04 10:46:02 +02:00
dependabot[bot] 6acfcd5736 chore: bump next from 15.5.6 to 15.5.7 in /offlinedocs (#21086)
Bumps [next](https://github.com/vercel/next.js) from 15.5.6 to 15.5.7.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">next's
releases</a>.</em></p>
<blockquote>
<h2>v15.5.7</h2>
<p>Please see <a
href="https://nextjs.org/blog/CVE-2025-66478">CVE-2025-66478</a> for
additional details about this release.</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/vercel/next.js/commit/3eaf68b09b2b6b8c0c8e080a9713e131a78dc529"><code>3eaf68b</code></a>
v15.5.7</li>
<li><a
href="https://github.com/vercel/next.js/commit/8367ce592ad0190ec941dac1ce6d0b5a44606593"><code>8367ce5</code></a>
update version script</li>
<li><a
href="https://github.com/vercel/next.js/commit/9115040008baf255499136933a50084b76f4bfd8"><code>9115040</code></a>
Update React Version for Next.js 15.5.7 (<a
href="https://redirect.github.com/vercel/next.js/issues/10">#10</a>)</li>
<li><a
href="https://github.com/vercel/next.js/commit/96f699902a5c57293e312591f843080a4d68ee1b"><code>96f6999</code></a>
update tag</li>
<li>See full diff in <a
href="https://github.com/vercel/next.js/compare/v15.5.6...v15.5.7">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=next&package-manager=npm_and_yarn&previous-version=15.5.6&new-version=15.5.7)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts page](https://github.com/coder/coder/network/alerts).

</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-03 20:03:02 +00:00
Danny Kopping 9e021f7b57 chore: upgrade to aibridge@v0.2.2 (#21085)
Changes from https://github.com/coder/aibridge/pull/71/

Signed-off-by: Danny Kopping <danny@coder.com>
2025-12-03 19:56:50 +00:00
Mathias Fredriksson aa306f2262 docs: add rules to avoid unnecessary changes to CLAUDE.md (#21083) 2025-12-03 20:49:18 +02:00
179 changed files with 5347 additions and 2059 deletions
+321
@@ -0,0 +1,321 @@
# Documentation Style Guide
This guide documents documentation patterns observed in the Coder repository, based on analysis of existing admin guides, tutorials, and reference documentation. This is specifically for documentation files in the `docs/` directory - see [CONTRIBUTING.md](../../docs/about/contributing/CONTRIBUTING.md) for general contribution guidelines.
## Research Before Writing
Before documenting a feature:
1. **Research similar documentation** - Read recent documentation pages in `docs/` to understand writing style, structure, and conventions for your content type (admin guides, tutorials, reference docs, etc.)
2. **Read the code implementation** - Check backend endpoints, frontend components, database queries
3. **Verify permissions model** - Look up RBAC actions in `coderd/rbac/` (e.g., `view_insights` for Template Insights)
4. **Check UI thresholds and defaults** - Review frontend code for color thresholds, time intervals, display logic
5. **Cross-reference with tests** - Test files document expected behavior and edge cases
6. **Verify API endpoints** - Check `coderd/coderd.go` for route registration
### Code Verification Checklist
When documenting features, always verify these implementation details:
- Read handler implementation in `coderd/`
- Check permission requirements in `coderd/rbac/`
- Review frontend components in `site/src/pages/` or `site/src/modules/`
- Verify display thresholds and intervals (e.g., color codes, time defaults)
- Confirm API endpoint paths and parameters
- Check for server flags in serpent configuration
## Document Structure
### Title and Introduction Pattern
**H1 heading**: Single clear title without prefix
```markdown
# Template Insights
```
**Introduction**: 1-2 sentences describing what the feature does, concise and actionable
```markdown
Template Insights provides detailed analytics and usage metrics for your Coder templates.
```
### Premium Feature Callout
For Premium-only features, add `(Premium)` suffix to the H1 heading. The documentation system automatically links these to premium pricing information. You should also add a premium badge in the `docs/manifest.json` file with `"state": ["premium"]`.
```markdown
# Template Insights (Premium)
```
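The manifest badge mentioned above might look like this in `docs/manifest.json` (the `title` and `path` values are illustrative; only the `"state": ["premium"]` field is prescribed by this guide):

```json
{
  "title": "Template Insights",
  "path": "./admin/templates/insights.md",
  "state": ["premium"]
}
```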
### Overview Section Pattern
Common pattern after introduction:
```markdown
## Overview
Template Insights offers visibility into:
- **Active Users**: Track the number of users actively using workspaces
- **Application Usage**: See which applications users are accessing
```
Use bold labels for capabilities; this gives readers a high-level understanding before the details.
## Image Usage
### Placement and Format
**Place images after descriptive text**, then add caption:
```markdown
![Template Insights page](../../images/admin/templates/template-insights.png)
<small>Template Insights showing weekly active users and connection latency metrics.</small>
```
- Image format: `![Descriptive alt text](../../path/to/image.png)`
- Caption: Use `<small>` tag below images
- Alt text: Describe what's shown; don't just repeat the heading
### Image-Driven Documentation
When you have multiple screenshots showing different aspects of a feature:
1. **Structure sections around images** - Each major screenshot gets its own section
2. **Describe what's visible** - Reference specific UI elements, data values shown in the screenshot
3. **Flow naturally** - Let screenshots guide the reader through the feature
**Example**: Template Insights documentation has 3 screenshots that define the 3 main content sections.
### Screenshot Guidelines
**When screenshots are not yet available**: If you're documenting a feature before screenshots exist, you can use image placeholders with descriptive alt text and ask the user to provide screenshots:
```markdown
![Placeholder: Template Insights page showing weekly active users chart](../../images/admin/templates/template-insights.png)
```
Then ask: "Could you provide a screenshot of the Template Insights page? I've added a placeholder at [location]."
**When documenting with screenshots**:
- Illustrate features being discussed in preceding text
- Show actual UI/data, not abstract concepts
- Reference specific values shown when explaining features
- Organize documentation around key screenshots
## Content Organization
### Section Hierarchy
1. **H2 (##)**: Major sections - "Overview", "Accessing [Feature]", "Use Cases"
2. **H3 (###)**: Subsections within major sections
3. **H4 (####)**: Rare, only for deeply nested content
### Common Section Patterns
- **Accessing [Feature]**: How to navigate to/use the feature
- **Use Cases**: Practical applications
- **Permissions**: Access control information
- **API Access**: Programmatic access details
- **Related Documentation**: Links to related content
### Lists and Callouts
- **Unordered lists**: Non-sequential items, features, capabilities
- **Ordered lists**: Step-by-step instructions
- **Tables**: Comparing options, showing permissions, listing parameters
- **Callouts**:
- `> [!NOTE]` for additional information
- `> [!WARNING]` for important warnings
- `> [!TIP]` for helpful tips
- **Tabs**: Use tabs for presenting related but parallel content, such as different installation methods or platform-specific instructions. Tabs work well when readers need to choose one path that applies to their specific situation.
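Rendered in markdown, a callout from the list above looks like (the warning text is illustrative):

```markdown
> [!WARNING]
> Deleting a template also deletes all workspaces built from it.
```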
## Writing Style
### Tone and Voice
- **Direct and concise**: Avoid unnecessary words
- **Active voice**: "Template Insights tracks users" not "Users are tracked"
- **Present tense**: "The chart displays..." not "The chart will display..."
- **Second person**: "You can view..." for instructions
### Terminology
- **Consistent terms**: Use same term throughout (e.g., "workspace" not "workspace environment")
- **Bold for UI elements**: "Navigate to the **Templates** page"
- **Code formatting**: Use backticks for commands, file paths, code
- Inline: `` `coder server` ``
- Blocks: Use triple backticks with language identifier
### Instructions
- **Numbered lists** for sequential steps
- **Start with verb**: "Navigate to", "Click", "Select", "Run"
- **Be specific**: Include exact button/menu names in bold
## Code Examples
### Command Examples
````markdown
```sh
coder server --disable-template-insights
```
````
### Environment Variables
````markdown
```sh
CODER_DISABLE_TEMPLATE_INSIGHTS=true
```
````
### Code Comments
- Keep minimal
- Explain non-obvious parameters
- Use `# Comment` for shell, `// Comment` for other languages
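Applied to a command example, these rules look like the following (the flag shown is hypothetical):

````markdown
```sh
# --cache-dir: where rendered templates are stored; the default is non-obvious
coder server --cache-dir /tmp/coder-cache
```
````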
## Links and References
### Internal Links
Use relative paths from current file location:
- `[Template Permissions](./template-permissions.md)`
- `[API documentation](../../reference/api/insights.md)`
For cross-linking to Coder registry templates or other external Coder resources, reference the appropriate registry URLs.
### Cross-References
- Link to related documentation at the end
- Use descriptive text: "Learn about [template access control](./template-permissions.md)"
- Not just: "[Click here](./template-permissions.md)"
### API References
Link to specific endpoints:
```markdown
- `/api/v2/insights/templates` - Template usage metrics
```
## Accuracy Standards
### Specific Numbers Matter
Document exact values from code:
- **Thresholds**: "green < 150ms, yellow 150-300ms, red ≥300ms"
- **Time intervals**: "daily for templates < 5 weeks old, weekly for 5+ weeks"
- **Counts and limits**: Use precise numbers, not approximations
### Permission Actions
- Use exact RBAC action names from code (e.g., `view_insights` not "view insights")
- Reference permission system correctly (`template:view_insights` scope)
- Specify which roles have permissions by default
### API Endpoints
- Use full, correct paths (e.g., `/api/v2/insights/templates` not `/insights/templates`)
- Link to generated API documentation in `docs/reference/api/`
## Documentation Manifest
**CRITICAL**: All documentation pages must be added to `docs/manifest.json` to appear in navigation. Read the manifest file to understand the structure and find the appropriate section for your documentation. Place new pages in logical sections matching the existing hierarchy.
## Proactive Documentation
When documenting features that depend on upcoming PRs:
1. **Reference the PR explicitly** - Mention PR number and what it adds
2. **Document the feature anyway** - Write as if feature exists
3. **Link to auto-generated docs** - Point to CLI reference sections that will be created
4. **Update PR description** - Note documentation is included proactively
**Example**: Template Insights docs include `--disable-template-insights` flag from PR #20940 before it merged, with link to `../../reference/cli/server.md#--disable-template-insights` that will exist when the PR lands.
## Special Sections
### Troubleshooting
- **H3 subheadings** for each issue
- Format: Issue description followed by solution steps
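A sketch of that shape (the issue and solution steps are illustrative):

```markdown
### Workspace fails to start

The build logs show a provisioner error for the latest template version.

1. Identify the last working template version from the **Templates** page.
2. Promote that version as the active version and restart the workspace.
```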
### Prerequisites
- Bullet or numbered list
- Include version requirements, dependencies, permissions
## Formatting and Linting
**Always run these commands before submitting documentation:**
```sh
make fmt/markdown # Format markdown tables and content
make lint/markdown # Lint and fix markdown issues
```
These ensure consistent formatting and catch common documentation errors.
## Formatting Conventions
### Text Formatting
- **Bold** (`**text**`): UI elements, important concepts, labels
- *Italic* (`*text*`): Rare, mainly for emphasis
- `Code` (`` `text` ``): Commands, file paths, parameter names
### Tables
- Use for comparing options, listing parameters, showing permissions
- Left-align text, right-align numbers
- Keep simple - avoid nested formatting when possible
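For example, a parameter table with the numeric column right-aligned (the rows are illustrative):

```markdown
| Parameter     | Default | Description               |
|---------------|--------:|---------------------------|
| `retry-limit` |       3 | Attempts before giving up |
| `timeout`     |      30 | Per-request timeout (s)   |
```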
### Code Blocks
- **Always specify language**: `` ```sh ``, `` ```yaml ``, `` ```go ``
- Include comments for complex examples
- Keep minimal - show only relevant configuration
## Document Length
- **Comprehensive but scannable**: Cover all aspects but use clear headings
- **Break up long sections**: Use H3 subheadings for logical chunks
- **Visual hierarchy**: Images and code blocks break up text
## Auto-Generated Content
Some content is auto-generated with comments:
```markdown
<!-- Code generated by 'make docs/...' DO NOT EDIT -->
```
Don't manually edit auto-generated sections.
## URL Redirects
When renaming or moving documentation pages, redirects must be added to prevent broken links.
**Important**: Redirects are NOT configured in this repository. The coder.com website runs on Vercel with Next.js and reads redirects from a separate repository:
- **Redirect configuration**: https://github.com/coder/coder.com/blob/master/redirects.json
- **Do NOT create** a `docs/_redirects` file - this format (used by Netlify/Cloudflare Pages) is not processed by coder.com
When you rename or move a doc page, create a PR in coder/coder.com to add the redirect.
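An entry in that `redirects.json` might look like the following (the paths and field names are illustrative; confirm the exact schema against the file itself):

```json
{
  "source": "/docs/admin/old-page",
  "destination": "/docs/admin/new-page",
  "permanent": true
}
```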
## Key Principles
1. **Research first** - Verify against actual code implementation
2. **Be precise** - Use exact numbers, permission names, API paths
3. **Visual structure** - Organize around screenshots when available
4. **Link everything** - Related docs, API endpoints, CLI references
5. **Manifest inclusion** - Add to manifest.json for navigation
6. **Add redirects** - When moving/renaming pages, add redirects in coder/coder.com repo
+256
@@ -0,0 +1,256 @@
# Pull Request Description Style Guide
This guide documents the PR description style used in the Coder repository, based on analysis of recent merged PRs.
## PR Title Format
Follow [Conventional Commits 1.0.0](https://www.conventionalcommits.org/en/v1.0.0/) format:
```text
type(scope): brief description
```
**Common types:**
- `feat`: New features
- `fix`: Bug fixes
- `refactor`: Code refactoring without behavior change
- `perf`: Performance improvements
- `docs`: Documentation changes
- `chore`: Dependency updates, tooling changes
**Examples:**
- `feat: add tracing to aibridge`
- `fix: move contexts to appropriate locations`
- `perf(coderd/database): add index on workspace_app_statuses.app_id`
- `docs: fix swagger tags for license endpoints`
- `refactor(site): remove redundant client-side sorting of app statuses`
## PR Description Structure
### Default Pattern: Keep It Concise
Most PRs use a simple 1-2 paragraph format:
```markdown
[Brief statement of what changed]
[One sentence explaining technical details or context if needed]
```
**Example (bugfix):**
```markdown
Previously, when a devcontainer config file was modified, the dirty
status was updated internally but not broadcast to websocket listeners.
Add `broadcastUpdatesLocked()` call in `markDevcontainerDirty` to notify
websocket listeners immediately when a config file changes.
```
**Example (dependency update):**
```markdown
Changes from https://github.com/upstream/repo/pull/XXX/
```
**Example (docs correction):**
```markdown
Removes incorrect references to database replicas from the scaling documentation.
Coder only supports a single database connection URL.
```
### For Complex Changes: Use "Summary", "Problem", "Fix"
Only use structured sections when the change requires significant explanation:
```markdown
## Summary
Brief overview of the change
## Problem
Detailed explanation of the issue being addressed
## Fix
How the solution works
```
**Example (API documentation fix):**
```markdown
## Summary
Change `@Tags` from `Organizations` to `Enterprise` for POST /licenses...
## Problem
The license API endpoints were inconsistently tagged...
## Fix
Simply updated the `@Tags` annotation from `Organizations` to `Enterprise`...
```
### For Large Refactors: Lead with Context
When rewriting significant documentation or code, start with the problems being fixed:
```markdown
This PR rewrites [component] for [reason].
The previous [component] had [specific issues]: [details].
[What changed]: [specific improvements made].
[Additional changes]: [context].
Refs #[issue-number]
```
**Example (major documentation rewrite):**
- Started with "This PR rewrites the dev containers documentation for GA readiness"
- Listed specific inaccuracies being fixed
- Explained organizational changes
- Referenced related issue
## What to Include
### Always Include
1. **Link Related Work**
- `Closes https://github.com/coder/internal/issues/XXX`
- `Depends on #XXX`
- `Fixes: https://github.com/coder/aibridge/issues/XX`
- `Refs #XXX` (for general reference)
2. **Performance Context** (when relevant)
```markdown
Each query took ~30ms on average with 80 requests/second to the cluster,
resulting in ~5.2 query-seconds every second.
```
3. **Migration Warnings** (when relevant)
```markdown
**NOTE**: This migration creates an index on `workspace_app_statuses`.
For deployments with heavy task usage, this may take a moment to complete.
```
4. **Visual Evidence** (for UI changes)
```markdown
<img width="1281" height="425" alt="image" src="..." />
```
### Never Include
- ❌ **Test plans** - Testing is handled through code review and CI
- ❌ **"Benefits" sections** - Benefits should be clear from the description
- ❌ **Implementation details** - Keep it high-level
- ❌ **Marketing language** - Stay technical and factual
- ❌ **Bullet lists of features** (unless it's a large refactor that needs enumeration)
## Special Patterns
### Simple Chore PRs
For straightforward updates (dependency bumps, minor fixes):
```markdown
Changes from [link to upstream PR/issue]
```
Or:
```markdown
Reference:
[link explaining why this change is needed]
```
### Bug Fixes
Start with the problem, then explain the fix:
```markdown
[What was broken and why it matters]
[What you changed to fix it]
```
### Dependency Updates
Dependabot PRs are auto-generated - don't try to match their verbose style for manual updates. Instead use:
```markdown
Changes from https://github.com/upstream/repo/pull/XXX/
```
## Attribution Footer
For AI-generated PRs, end with:
```markdown
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
```
## Creating PRs as Draft
**IMPORTANT**: Unless explicitly told otherwise, always create PRs as drafts using the `--draft` flag:
```bash
gh pr create --draft --title "..." --body "..."
```
After creating the PR, encourage the user to review it before marking as ready:
```text
I've created draft PR #XXXX. Please review the changes and mark it as ready for review when you're satisfied.
```
This allows the user to:
- Review the code changes before requesting reviews from maintainers
- Make additional adjustments if needed
- Ensure CI passes before notifying reviewers
- Control when the PR enters the review queue
Only create non-draft PRs when the user explicitly requests it or when following up on an existing draft.
## Key Principles
1. **Always create draft PRs** - Unless explicitly told otherwise
2. **Be concise** - Default to 1-2 paragraphs unless complexity demands more
3. **Be technical** - Explain what and why, not detailed how
4. **Link everything** - Issues, PRs, upstream changes, Notion docs
5. **Show impact** - Metrics for performance, screenshots for UI, warnings for migrations
6. **No test plans** - Code review and CI handle testing
7. **No benefits sections** - Benefits should be obvious from the technical description
## Examples by Category
### Performance Improvements
Includes query timing metrics and explains the index solution
### Bug Fixes
Describes the broken behavior, then the fix, in two sentences
### Documentation
- **Major rewrite**: Long form explaining inaccuracies and improvements
- **Simple correction**: a single sentence stating the fix
### Features
Simple statement of what was added and dependencies
### Refactoring
Explains why the old approach is now redundant (e.g., the client-side sorting example above)
### Configuration
Adds guidelines with issue reference
+1 -1
@@ -7,5 +7,5 @@ runs:
- name: Install Terraform
uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
with:
terraform_version: 1.13.4
terraform_version: 1.14.1
terraform_wrapper: false
+4 -2
@@ -6,6 +6,8 @@ updates:
interval: "weekly"
time: "06:00"
timezone: "America/Chicago"
cooldown:
default-days: 7
labels: []
commit-message:
prefix: "ci"
@@ -68,8 +70,8 @@ updates:
interval: "monthly"
time: "06:00"
timezone: "America/Chicago"
reviewers:
- "coder/ts"
cooldown:
default-days: 7
commit-message:
prefix: "chore"
labels: []
+3
@@ -0,0 +1,3 @@
{
"ignores": ["PLAN.md"],
}
+19
@@ -173,6 +173,23 @@ ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
```
### Avoid Unnecessary Changes
When fixing a bug or adding a feature, don't modify code unrelated to your
task. Unnecessary changes make PRs harder to review and can introduce
regressions.
**Don't reword existing comments or code** unless the change is directly
motivated by your task. Rewording comments to be shorter or "cleaner" wastes
reviewer time and clutters the diff.
**Don't delete existing comments** that explain non-obvious behavior. These
comments preserve important context about why code works a certain way.
**When adding tests for new behavior**, add new test cases instead of modifying
existing ones. This preserves coverage for the original behavior and makes it
clear what the new test covers.
## Detailed Development Guides
@.claude/docs/ARCHITECTURE.md
@@ -180,6 +197,8 @@ ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
@.claude/docs/TESTING.md
@.claude/docs/TROUBLESHOOTING.md
@.claude/docs/DATABASE.md
@.claude/docs/PR_STYLE_GUIDE.md
@.claude/docs/DOCS_STYLE_GUIDE.md
## Local Configuration
+11 -6
@@ -2189,14 +2189,19 @@ func (a *apiConnRoutineManager) startTailnetAPI(
a.eg.Go(func() error {
logger.Debug(ctx, "starting tailnet routine")
err := f(ctx, a.tAPI)
if xerrors.Is(err, context.Canceled) && ctx.Err() != nil {
logger.Debug(ctx, "swallowing context canceled")
if (xerrors.Is(err, context.Canceled) ||
xerrors.Is(err, io.EOF)) &&
ctx.Err() != nil {
logger.Debug(ctx, "swallowing error because context is canceled", slog.Error(err))
// Don't propagate context canceled errors to the error group, because we don't want the
// graceful context being canceled to halt the work of routines with
// gracefulShutdownBehaviorRemain. Note that we check both that the error is
// context.Canceled and that *our* context is currently canceled, because when Coderd
// unilaterally closes the API connection (for example if the build is outdated), it can
// sometimes show up as context.Canceled in our RPC calls.
// gracefulShutdownBehaviorRemain. Unfortunately, the dRPC library closes the stream
// when context is canceled on an RPC, so canceling the context can also show up as
// io.EOF. Also, when Coderd unilaterally closes the API connection (for example if the
// build is outdated), it can sometimes show up as context.Canceled in our RPC calls.
// We can't reliably distinguish between a context cancelation and a legit EOF, so we
// also check that *our* context is currently canceled. If it is, we can safely ignore
// the error.
return nil
}
logger.Debug(ctx, "routine exited", slog.Error(err))
+1 -1
@@ -465,7 +465,7 @@ func TestAgent_SessionTTYShell(t *testing.T) {
for _, port := range sshPorts {
t.Run(fmt.Sprintf("(%d)", port), func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
ctx := testutil.Context(t, testutil.WaitMedium)
session := setupSSHSessionOnPort(t, agentsdk.Manifest{}, codersdk.ServiceBannerConfig{}, nil, port)
command := "sh"
+2
@@ -1457,6 +1457,8 @@ func (api *API) markDevcontainerDirty(configPath string, modifiedAt time.Time) {
api.knownDevcontainers[dc.WorkspaceFolder] = dc
}
api.broadcastUpdatesLocked()
}
// cleanupSubAgents removes subagents that are no longer managed by
+71
@@ -1641,6 +1641,77 @@ func TestAPI(t *testing.T) {
require.NotNil(t, response.Devcontainers[0].Container, "container should not be nil")
})
// Verify that modifying a config file broadcasts the dirty status
// over websocket immediately.
t.Run("FileWatcherDirtyBroadcast", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
configPath := "/workspace/project/.devcontainer/devcontainer.json"
fWatcher := newFakeWatcher(t)
fLister := &fakeContainerCLI{
containers: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{
{
ID: "container-id",
FriendlyName: "container-name",
Running: true,
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/workspace/project",
agentcontainers.DevcontainerConfigFileLabel: configPath,
},
},
},
},
}
mClock := quartz.NewMock(t)
tickerTrap := mClock.Trap().TickerFunc("updaterLoop")
api := agentcontainers.NewAPI(
slogtest.Make(t, nil).Leveled(slog.LevelDebug),
agentcontainers.WithContainerCLI(fLister),
agentcontainers.WithWatcher(fWatcher),
agentcontainers.WithClock(mClock),
)
api.Start()
defer api.Close()
srv := httptest.NewServer(api.Routes())
defer srv.Close()
tickerTrap.MustWait(ctx).MustRelease(ctx)
tickerTrap.Close()
wsConn, resp, err := websocket.Dial(ctx, "ws"+strings.TrimPrefix(srv.URL, "http")+"/watch", nil)
require.NoError(t, err)
if resp != nil && resp.Body != nil {
defer resp.Body.Close()
}
defer wsConn.Close(websocket.StatusNormalClosure, "")
// Read and discard initial state.
_, _, err = wsConn.Read(ctx)
require.NoError(t, err)
fWatcher.waitNext(ctx)
fWatcher.sendEventWaitNextCalled(ctx, fsnotify.Event{
Name: configPath,
Op: fsnotify.Write,
})
// Verify dirty status is broadcast without advancing the clock.
_, msg, err := wsConn.Read(ctx)
require.NoError(t, err)
var response codersdk.WorkspaceAgentListContainersResponse
err = json.Unmarshal(msg, &response)
require.NoError(t, err)
require.Len(t, response.Devcontainers, 1)
assert.True(t, response.Devcontainers[0].Dirty,
"devcontainer should be marked as dirty after config file modification")
})
t.Run("SubAgentLifecycle", func(t *testing.T) {
t.Parallel()
+69 -12
@@ -17,12 +17,12 @@
"useSemanticElements": "off",
"noStaticElementInteractions": "off"
},
"correctness": {
"noUnusedImports": "warn",
"correctness": {
"noUnusedImports": "warn",
"useUniqueElementIds": "off", // TODO: This is new but we want to fix it
"noNestedComponentDefinitions": "off", // TODO: Investigate, since it is used by shadcn components
"noUnusedVariables": {
"level": "warn",
"noUnusedVariables": {
"level": "warn",
"options": {
"ignoreRestSiblings": true
}
@@ -40,19 +40,76 @@
"useNumberNamespace": "error",
"noInferrableTypes": "error",
"noUselessElse": "error",
"noRestrictedImports": {
"level": "error",
"noRestrictedImports": {
"level": "error",
"options": {
"paths": {
"@mui/material": "Use @mui/material/<name> instead. See: https://material-ui.com/guides/minimizing-bundle-size/.",
// "@mui/material/Alert": "Use components/Alert/Alert instead.",
// "@mui/material/AlertTitle": "Use components/Alert/Alert instead.",
// "@mui/material/Autocomplete": "Use shadcn/ui Combobox instead.",
"@mui/material/Avatar": "Use components/Avatar/Avatar instead.",
"@mui/material/Alert": "Use components/Alert/Alert instead.",
"@mui/material/Box": "Use a <div> with Tailwind classes instead.",
"@mui/material/Button": "Use components/Button/Button instead.",
// "@mui/material/Card": "Use shadcn/ui Card component instead.",
// "@mui/material/CardActionArea": "Use shadcn/ui Card component instead.",
// "@mui/material/CardContent": "Use shadcn/ui Card component instead.",
// "@mui/material/Checkbox": "Use shadcn/ui Checkbox component instead.",
// "@mui/material/Chip": "Use components/Badge or Tailwind styles instead.",
// "@mui/material/CircularProgress": "Use components/Spinner/Spinner instead.",
// "@mui/material/Collapse": "Use shadcn/ui Collapsible instead.",
// "@mui/material/CssBaseline": "Use Tailwind CSS base styles instead.",
// "@mui/material/Dialog": "Use shadcn/ui Dialog component instead.",
// "@mui/material/DialogActions": "Use shadcn/ui Dialog component instead.",
// "@mui/material/DialogContent": "Use shadcn/ui Dialog component instead.",
// "@mui/material/DialogContentText": "Use shadcn/ui Dialog component instead.",
// "@mui/material/DialogTitle": "Use shadcn/ui Dialog component instead.",
// "@mui/material/Divider": "Use shadcn/ui Separator or <hr> with Tailwind instead.",
// "@mui/material/Drawer": "Use shadcn/ui Sheet component instead.",
// "@mui/material/FormControl": "Use native form elements with Tailwind instead.",
// "@mui/material/FormControlLabel": "Use shadcn/ui Label with form components instead.",
// "@mui/material/FormGroup": "Use a <div> with Tailwind classes instead.",
// "@mui/material/FormHelperText": "Use a <p> with Tailwind classes instead.",
// "@mui/material/FormLabel": "Use shadcn/ui Label component instead.",
// "@mui/material/Grid": "Use Tailwind grid utilities instead.",
// "@mui/material/IconButton": "Use components/Button/Button with variant='icon' instead.",
// "@mui/material/InputAdornment": "Use Tailwind positioning in input wrapper instead.",
// "@mui/material/InputBase": "Use shadcn/ui Input component instead.",
// "@mui/material/LinearProgress": "Use a progress bar with Tailwind instead.",
// "@mui/material/Link": "Use React Router Link or native <a> tags instead.",
// "@mui/material/List": "Use native <ul> with Tailwind instead.",
// "@mui/material/ListItem": "Use native <li> with Tailwind instead.",
// "@mui/material/ListItemIcon": "Use lucide-react icons in list items instead.",
// "@mui/material/ListItemText": "Use native elements with Tailwind instead.",
// "@mui/material/Menu": "Use shadcn/ui DropdownMenu instead.",
// "@mui/material/MenuItem": "Use shadcn/ui DropdownMenu components instead.",
// "@mui/material/MenuList": "Use shadcn/ui DropdownMenu components instead.",
// "@mui/material/Paper": "Use a <div> with Tailwind shadow/border classes instead.",
"@mui/material/Popover": "Use components/Popover/Popover instead.",
// "@mui/material/Radio": "Use shadcn/ui RadioGroup instead.",
// "@mui/material/RadioGroup": "Use shadcn/ui RadioGroup instead.",
// "@mui/material/Select": "Use shadcn/ui Select component instead.",
// "@mui/material/Skeleton": "Use shadcn/ui Skeleton component instead.",
// "@mui/material/Snackbar": "Use components/GlobalSnackbar instead.",
// "@mui/material/Stack": "Use Tailwind flex utilities instead (e.g., <div className='flex flex-col gap-4'>).",
// "@mui/material/styles": "Use Tailwind CSS instead.",
// "@mui/material/SvgIcon": "Use lucide-react icons instead.",
// "@mui/material/Switch": "Use shadcn/ui Switch component instead.",
"@mui/material/Table": "Import from components/Table/Table instead.",
// "@mui/material/TableRow": "Import from components/Table/Table instead.",
// "@mui/material/TextField": "Use shadcn/ui Input component instead.",
// "@mui/material/ToggleButton": "Use shadcn/ui Toggle or custom component instead.",
// "@mui/material/ToggleButtonGroup": "Use shadcn/ui Toggle or custom component instead.",
// "@mui/material/Tooltip": "Use shadcn/ui Tooltip component instead.",
"@mui/material/Typography": "Use native HTML elements instead. Eg: <span>, <p>, <h1>, etc.",
"@mui/material/Box": "Use a <div> instead.",
"@mui/material/Button": "Use a components/Button/Button instead.",
"@mui/material/styles": "Import from @emotion/react instead.",
"@mui/material/Table*": "Import from components/Table/Table instead.",
// "@mui/material/useMediaQuery": "Use Tailwind responsive classes or custom hook instead.",
// "@mui/system": "Use Tailwind CSS instead.",
// "@mui/utils": "Use native alternatives or utility libraries instead.",
// "@mui/x-tree-view": "Use a Tailwind-compatible alternative.",
// "@emotion/css": "Use Tailwind CSS instead.",
// "@emotion/react": "Use Tailwind CSS instead.",
"@emotion/styled": "Use Tailwind CSS instead.",
// "@emotion/cache": "Use Tailwind CSS instead.",
// "components/Stack/Stack": "Use Tailwind flex utilities instead (e.g., <div className='flex flex-col gap-4'>).",
"lodash": "Use lodash/<name> instead."
}
}
+254 -167
@@ -20,6 +20,12 @@ import (
var errAgentShuttingDown = xerrors.New("agent is shutting down")
// fetchAgentResult is used to pass agent fetch results through channels.
type fetchAgentResult struct {
agent codersdk.WorkspaceAgent
err error
}
type AgentOptions struct {
FetchInterval time.Duration
Fetch func(ctx context.Context, agentID uuid.UUID) (codersdk.WorkspaceAgent, error)
@@ -28,6 +34,14 @@ type AgentOptions struct {
DocsURL string
}
// agentWaiter encapsulates the state machine for waiting on a workspace agent.
type agentWaiter struct {
opts AgentOptions
sw *stageWriter
logSources map[uuid.UUID]codersdk.WorkspaceAgentLogSource
fetchAgent func(context.Context) (codersdk.WorkspaceAgent, error)
}
// Agent displays a spinning indicator that waits for a workspace agent to connect.
func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentOptions) error {
ctx, cancel := context.WithCancel(ctx)
@@ -44,11 +58,7 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
}
}
type fetchAgent struct {
agent codersdk.WorkspaceAgent
err error
}
fetchedAgent := make(chan fetchAgent, 1)
fetchedAgent := make(chan fetchAgentResult, 1)
go func() {
t := time.NewTimer(0)
defer t.Stop()
@@ -67,10 +77,10 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
default:
}
if err != nil {
fetchedAgent <- fetchAgent{err: xerrors.Errorf("fetch workspace agent: %w", err)}
fetchedAgent <- fetchAgentResult{err: xerrors.Errorf("fetch workspace agent: %w", err)}
return
}
fetchedAgent <- fetchAgent{agent: agent}
fetchedAgent <- fetchAgentResult{agent: agent}
// Adjust the interval based on how long we've been waiting.
elapsed := time.Since(startTime)
@@ -79,7 +89,7 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
}
}
}()
fetch := func() (codersdk.WorkspaceAgent, error) {
fetch := func(ctx context.Context) (codersdk.WorkspaceAgent, error) {
select {
case <-ctx.Done():
return codersdk.WorkspaceAgent{}, ctx.Err()
@@ -91,7 +101,7 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
}
}
agent, err := fetch()
agent, err := fetch(ctx)
if err != nil {
return xerrors.Errorf("fetch: %w", err)
}
@@ -100,9 +110,23 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
logSources[source.ID] = source
}
sw := &stageWriter{w: writer}
w := &agentWaiter{
opts: opts,
sw: &stageWriter{w: writer},
logSources: logSources,
fetchAgent: fetch,
}
return w.wait(ctx, agent, fetchedAgent)
}
// wait runs the main state machine loop.
func (aw *agentWaiter) wait(ctx context.Context, agent codersdk.WorkspaceAgent, fetchedAgent chan fetchAgentResult) error {
var err error
// Track whether we've gone through a wait state, which determines if we
// should show startup logs when connected.
waitedForConnection := false
showStartupLogs := false
for {
// It doesn't matter if we're connected or not, if the agent is
// shutting down, we don't know if it's coming back.
@@ -112,173 +136,236 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
switch agent.Status {
case codersdk.WorkspaceAgentConnecting, codersdk.WorkspaceAgentTimeout:
// Since we were waiting for the agent to connect, also show
// startup logs if applicable.
showStartupLogs = true
stage := "Waiting for the workspace agent to connect"
sw.Start(stage)
for agent.Status == codersdk.WorkspaceAgentConnecting {
if agent, err = fetch(); err != nil {
return xerrors.Errorf("fetch: %w", err)
}
}
if agent.Status == codersdk.WorkspaceAgentTimeout {
now := time.Now()
sw.Log(now, codersdk.LogLevelInfo, "The workspace agent is having trouble connecting, wait for it to connect or restart your workspace.")
sw.Log(now, codersdk.LogLevelInfo, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#agent-connection-issues", opts.DocsURL)))
for agent.Status == codersdk.WorkspaceAgentTimeout {
if agent, err = fetch(); err != nil {
return xerrors.Errorf("fetch: %w", err)
}
}
}
sw.Complete(stage, agent.FirstConnectedAt.Sub(agent.CreatedAt))
case codersdk.WorkspaceAgentConnected:
if !showStartupLogs && agent.LifecycleState == codersdk.WorkspaceAgentLifecycleReady {
// The workspace is ready, there's nothing to do but connect.
return nil
}
stage := "Running workspace agent startup scripts"
follow := opts.Wait && agent.LifecycleState.Starting()
if !follow {
stage += " (non-blocking)"
}
sw.Start(stage)
if follow {
sw.Log(time.Time{}, codersdk.LogLevelInfo, "==> ︎ To connect immediately, reconnect with --wait=no or CODER_SSH_WAIT=no, see --help for more information.")
}
err = func() error {
// In non-blocking mode, skip streaming logs.
// See: https://github.com/coder/coder/issues/13580
if !opts.Wait {
return nil
}
logStream, logsCloser, err := opts.FetchLogs(ctx, agent.ID, 0, follow)
if err != nil {
return xerrors.Errorf("fetch workspace agent startup logs: %w", err)
}
defer logsCloser.Close()
var lastLog codersdk.WorkspaceAgentLog
fetchedAgentWhileFollowing := fetchedAgent
if !follow {
fetchedAgentWhileFollowing = nil
}
for {
// This select is essentially an inline `fetch()`.
select {
case <-ctx.Done():
return ctx.Err()
case f := <-fetchedAgentWhileFollowing:
if f.err != nil {
return xerrors.Errorf("fetch: %w", f.err)
}
agent = f.agent
// If the agent is no longer starting, stop following
// logs because FetchLogs will keep streaming forever.
// We do one last non-follow request to ensure we have
// fetched all logs.
if !agent.LifecycleState.Starting() {
_ = logsCloser.Close()
fetchedAgentWhileFollowing = nil
logStream, logsCloser, err = opts.FetchLogs(ctx, agent.ID, lastLog.ID, false)
if err != nil {
return xerrors.Errorf("fetch workspace agent startup logs: %w", err)
}
// Logs are already primed, so we can call close.
_ = logsCloser.Close()
}
case logs, ok := <-logStream:
if !ok {
return nil
}
for _, log := range logs {
source, hasSource := logSources[log.SourceID]
output := log.Output
if hasSource && source.DisplayName != "" {
output = source.DisplayName + ": " + output
}
sw.Log(log.CreatedAt, log.Level, output)
lastLog = log
}
}
}
}()
agent, err = aw.waitForConnection(ctx, agent)
if err != nil {
return err
}
// Since we were waiting for the agent to connect, also show
// startup logs if applicable.
waitedForConnection = true
for follow && agent.LifecycleState.Starting() {
if agent, err = fetch(); err != nil {
return xerrors.Errorf("fetch: %w", err)
}
}
switch agent.LifecycleState {
case codersdk.WorkspaceAgentLifecycleReady:
sw.Complete(stage, safeDuration(sw, agent.ReadyAt, agent.StartedAt))
case codersdk.WorkspaceAgentLifecycleStartTimeout:
// Backwards compatibility: Avoid printing warning if
// coderd is old and doesn't set ReadyAt for timeouts.
if agent.ReadyAt == nil {
sw.Fail(stage, 0)
} else {
sw.Fail(stage, safeDuration(sw, agent.ReadyAt, agent.StartedAt))
}
sw.Log(time.Time{}, codersdk.LogLevelWarn, "Warning: A startup script timed out and your workspace may be incomplete.")
case codersdk.WorkspaceAgentLifecycleStartError:
sw.Fail(stage, safeDuration(sw, agent.ReadyAt, agent.StartedAt))
// Use zero time (omitted) to separate these from the startup logs.
sw.Log(time.Time{}, codersdk.LogLevelWarn, "Warning: A startup script exited with an error and your workspace may be incomplete.")
sw.Log(time.Time{}, codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#startup-script-exited-with-an-error", opts.DocsURL)))
default:
switch {
case agent.LifecycleState.Starting():
// Use zero time (omitted) to separate these from the startup logs.
sw.Log(time.Time{}, codersdk.LogLevelWarn, "Notice: The startup scripts are still running and your workspace may be incomplete.")
sw.Log(time.Time{}, codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#your-workspace-may-be-incomplete", opts.DocsURL)))
// Note: We don't complete or fail the stage here, it's
// intentionally left open to indicate this stage didn't
// complete.
case agent.LifecycleState.ShuttingDown():
// We no longer know if the startup script failed or not,
// but we need to tell the user something.
sw.Complete(stage, safeDuration(sw, agent.ReadyAt, agent.StartedAt))
return errAgentShuttingDown
}
}
return nil
case codersdk.WorkspaceAgentConnected:
return aw.handleConnected(ctx, agent, waitedForConnection, fetchedAgent)
case codersdk.WorkspaceAgentDisconnected:
// If the agent was still starting during disconnect, we'll
// show startup logs.
showStartupLogs = agent.LifecycleState.Starting()
stage := "The workspace agent lost connection"
sw.Start(stage)
sw.Log(time.Now(), codersdk.LogLevelWarn, "Wait for it to reconnect or restart your workspace.")
sw.Log(time.Now(), codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#agent-connection-issues", opts.DocsURL)))
disconnectedAt := agent.DisconnectedAt
for agent.Status == codersdk.WorkspaceAgentDisconnected {
if agent, err = fetch(); err != nil {
return xerrors.Errorf("fetch: %w", err)
}
agent, waitedForConnection, err = aw.waitForReconnection(ctx, agent)
if err != nil {
return err
}
sw.Complete(stage, safeDuration(sw, agent.LastConnectedAt, disconnectedAt))
}
}
}
// waitForConnection handles the Connecting/Timeout states.
// Returns when agent transitions to Connected or Disconnected.
func (aw *agentWaiter) waitForConnection(ctx context.Context, agent codersdk.WorkspaceAgent) (codersdk.WorkspaceAgent, error) {
stage := "Waiting for the workspace agent to connect"
aw.sw.Start(stage)
agent, err := aw.pollWhile(ctx, agent, func(agent codersdk.WorkspaceAgent) bool {
return agent.Status == codersdk.WorkspaceAgentConnecting
})
if err != nil {
return agent, err
}
if agent.Status == codersdk.WorkspaceAgentTimeout {
now := time.Now()
aw.sw.Log(now, codersdk.LogLevelInfo, "The workspace agent is having trouble connecting, wait for it to connect or restart your workspace.")
aw.sw.Log(now, codersdk.LogLevelInfo, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#agent-connection-issues", aw.opts.DocsURL)))
agent, err = aw.pollWhile(ctx, agent, func(agent codersdk.WorkspaceAgent) bool {
return agent.Status == codersdk.WorkspaceAgentTimeout
})
if err != nil {
return agent, err
}
}
aw.sw.Complete(stage, agent.FirstConnectedAt.Sub(agent.CreatedAt))
return agent, nil
}
// handleConnected handles the Connected state and startup script logic.
// This is a terminal state, returns nil on success or error on failure.
//
//nolint:revive // Control flag is acceptable for internal method.
func (aw *agentWaiter) handleConnected(ctx context.Context, agent codersdk.WorkspaceAgent, showStartupLogs bool, fetchedAgent chan fetchAgentResult) error {
if !showStartupLogs && agent.LifecycleState == codersdk.WorkspaceAgentLifecycleReady {
// The workspace is ready, there's nothing to do but connect.
return nil
}
// Determine if we should follow/stream logs (blocking mode).
follow := aw.opts.Wait && agent.LifecycleState.Starting()
stage := "Running workspace agent startup scripts"
if !follow {
stage += " (non-blocking)"
}
aw.sw.Start(stage)
if follow {
aw.sw.Log(time.Time{}, codersdk.LogLevelInfo, "==> ︎ To connect immediately, reconnect with --wait=no or CODER_SSH_WAIT=no, see --help for more information.")
}
// In non-blocking mode (Wait=false), we don't stream logs. This prevents
// dumping a wall of logs on users who explicitly pass --wait=no. The stage
// indicator is still shown, just not the log content. See issue #13580.
if aw.opts.Wait {
var err error
agent, err = aw.streamLogs(ctx, agent, follow, fetchedAgent)
if err != nil {
return err
}
// If we were following, wait until startup completes.
if follow {
agent, err = aw.pollWhile(ctx, agent, func(agent codersdk.WorkspaceAgent) bool {
return agent.LifecycleState.Starting()
})
if err != nil {
return err
}
}
}
// Handle final lifecycle state.
switch agent.LifecycleState {
case codersdk.WorkspaceAgentLifecycleReady:
aw.sw.Complete(stage, safeDuration(aw.sw, agent.ReadyAt, agent.StartedAt))
case codersdk.WorkspaceAgentLifecycleStartTimeout:
// Backwards compatibility: Avoid printing warning if
// coderd is old and doesn't set ReadyAt for timeouts.
if agent.ReadyAt == nil {
aw.sw.Fail(stage, 0)
} else {
aw.sw.Fail(stage, safeDuration(aw.sw, agent.ReadyAt, agent.StartedAt))
}
aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, "Warning: A startup script timed out and your workspace may be incomplete.")
case codersdk.WorkspaceAgentLifecycleStartError:
aw.sw.Fail(stage, safeDuration(aw.sw, agent.ReadyAt, agent.StartedAt))
aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, "Warning: A startup script exited with an error and your workspace may be incomplete.")
aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#startup-script-exited-with-an-error", aw.opts.DocsURL)))
default:
switch {
case agent.LifecycleState.Starting():
aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, "Notice: The startup scripts are still running and your workspace may be incomplete.")
aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#your-workspace-may-be-incomplete", aw.opts.DocsURL)))
// Note: We don't complete or fail the stage here, it's
// intentionally left open to indicate this stage didn't
// complete.
case agent.LifecycleState.ShuttingDown():
// We no longer know if the startup script failed or not,
// but we need to tell the user something.
aw.sw.Complete(stage, safeDuration(aw.sw, agent.ReadyAt, agent.StartedAt))
return errAgentShuttingDown
}
}
return nil
}
// streamLogs handles streaming or fetching startup logs.
//
//nolint:revive // Control flag is acceptable for internal method.
func (aw *agentWaiter) streamLogs(ctx context.Context, agent codersdk.WorkspaceAgent, follow bool, fetchedAgent chan fetchAgentResult) (codersdk.WorkspaceAgent, error) {
logStream, logsCloser, err := aw.opts.FetchLogs(ctx, agent.ID, 0, follow)
if err != nil {
return agent, xerrors.Errorf("fetch workspace agent startup logs: %w", err)
}
defer logsCloser.Close()
var lastLog codersdk.WorkspaceAgentLog
// If not following, we don't need to watch for agent state changes.
var fetchedAgentWhileFollowing chan fetchAgentResult
if follow {
fetchedAgentWhileFollowing = fetchedAgent
}
for {
select {
case <-ctx.Done():
return agent, ctx.Err()
case f := <-fetchedAgentWhileFollowing:
if f.err != nil {
return agent, xerrors.Errorf("fetch: %w", f.err)
}
agent = f.agent
// If the agent is no longer starting, stop following
// logs because FetchLogs will keep streaming forever.
// We do one last non-follow request to ensure we have
// fetched all logs.
if !agent.LifecycleState.Starting() {
_ = logsCloser.Close()
fetchedAgentWhileFollowing = nil
logStream, logsCloser, err = aw.opts.FetchLogs(ctx, agent.ID, lastLog.ID, false)
if err != nil {
return agent, xerrors.Errorf("fetch workspace agent startup logs: %w", err)
}
// Logs are already primed, so we can call close.
_ = logsCloser.Close()
}
case logs, ok := <-logStream:
if !ok {
return agent, nil
}
for _, log := range logs {
source, hasSource := aw.logSources[log.SourceID]
output := log.Output
if hasSource && source.DisplayName != "" {
output = source.DisplayName + ": " + output
}
aw.sw.Log(log.CreatedAt, log.Level, output)
lastLog = log
}
}
}
}
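The channel-nil'ing in streamLogs above relies on a general Go property: a receive from a nil channel blocks forever, so assigning nil to a channel variable permanently disables that select case without restructuring the loop. A minimal standalone sketch of the same pattern (hypothetical names, not the code above):

```go
package main

import "fmt"

// drain reads from events until it is closed, but stops watching ctrl
// after the first control message by setting it to nil: a receive on a
// nil channel blocks forever, so that select case can never fire again.
func drain(events <-chan int, ctrl <-chan struct{}) []int {
	var got []int
	for {
		select {
		case <-ctrl:
			ctrl = nil // disable this case for all later iterations
		case v, ok := <-events:
			if !ok {
				return got
			}
			got = append(got, v)
		}
	}
}

func main() {
	events := make(chan int, 3)
	ctrl := make(chan struct{}, 1)
	events <- 1
	events <- 2
	events <- 3
	close(events)
	ctrl <- struct{}{}
	fmt.Println(drain(events, ctrl)) // prints "[1 2 3]"
}
```

This is why streamLogs can keep a single select loop: once the agent stops starting, the fetched-agent case is switched off and only the log channel remains live.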
// waitForReconnection handles the Disconnected state.
// Returns when agent reconnects along with whether to show startup logs.
func (aw *agentWaiter) waitForReconnection(ctx context.Context, agent codersdk.WorkspaceAgent) (codersdk.WorkspaceAgent, bool, error) {
// If the agent was still starting during disconnect, we'll
// show startup logs.
showStartupLogs := agent.LifecycleState.Starting()
stage := "The workspace agent lost connection"
aw.sw.Start(stage)
aw.sw.Log(time.Now(), codersdk.LogLevelWarn, "Wait for it to reconnect or restart your workspace.")
aw.sw.Log(time.Now(), codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#agent-connection-issues", aw.opts.DocsURL)))
disconnectedAt := agent.DisconnectedAt
agent, err := aw.pollWhile(ctx, agent, func(agent codersdk.WorkspaceAgent) bool {
return agent.Status == codersdk.WorkspaceAgentDisconnected
})
if err != nil {
return agent, showStartupLogs, err
}
aw.sw.Complete(stage, safeDuration(aw.sw, agent.LastConnectedAt, disconnectedAt))
return agent, showStartupLogs, nil
}
// pollWhile polls the agent while the condition is true. It fetches the agent
// on each iteration and returns the updated agent when the condition is false,
// the context is canceled, or an error occurs.
func (aw *agentWaiter) pollWhile(ctx context.Context, agent codersdk.WorkspaceAgent, cond func(agent codersdk.WorkspaceAgent) bool) (codersdk.WorkspaceAgent, error) {
var err error
for cond(agent) {
agent, err = aw.fetchAgent(ctx)
if err != nil {
return agent, xerrors.Errorf("fetch: %w", err)
}
}
if err = ctx.Err(); err != nil {
return agent, err
}
return agent, nil
}
func troubleshootingMessage(agent codersdk.WorkspaceAgent, url string) string {
m := "For more information and troubleshooting, see " + url
if agent.TroubleshootingURL != "" {
+145

@@ -268,6 +268,87 @@ func TestAgent(t *testing.T) {
"For more information and troubleshooting, see",
},
},
{
// Verify that in non-blocking mode (Wait=false), startup script
// logs are suppressed. This prevents dumping a wall of logs on
// users who explicitly pass --wait=no. See issue #13580.
name: "No logs in non-blocking mode",
opts: cliui.AgentOptions{
FetchInterval: time.Millisecond,
Wait: false,
},
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.FirstConnectedAt = ptr.Ref(time.Now())
agent.StartedAt = ptr.Ref(time.Now())
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleStartError
agent.ReadyAt = ptr.Ref(time.Now())
// These logs should NOT be shown in non-blocking mode.
logs <- []codersdk.WorkspaceAgentLog{
{
CreatedAt: time.Now(),
Output: "Startup script log 1",
},
{
CreatedAt: time.Now(),
Output: "Startup script log 2",
},
}
return nil
},
},
// Note: Log content like "Startup script log 1" should NOT appear here.
want: []string{
"⧗ Running workspace agent startup scripts (non-blocking)",
"✘ Running workspace agent startup scripts (non-blocking)",
"Warning: A startup script exited with an error and your workspace may be incomplete.",
"For more information and troubleshooting, see",
},
},
{
// Verify that even after waiting for the agent to connect, logs
// are still suppressed in non-blocking mode. See issue #13580.
name: "No logs after connection wait in non-blocking mode",
opts: cliui.AgentOptions{
FetchInterval: time.Millisecond,
Wait: false,
},
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnecting
return nil
},
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "⧗ Waiting for the workspace agent to connect")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.FirstConnectedAt = ptr.Ref(time.Now())
agent.StartedAt = ptr.Ref(time.Now())
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleStartError
agent.ReadyAt = ptr.Ref(time.Now())
// These logs should NOT be shown in non-blocking mode,
// even though we waited for connection.
logs <- []codersdk.WorkspaceAgentLog{
{
CreatedAt: time.Now(),
Output: "Startup script log 1",
},
}
return nil
},
},
// Note: Log content should NOT appear here despite waiting for connection.
want: []string{
"⧗ Waiting for the workspace agent to connect",
"✔ Waiting for the workspace agent to connect",
"⧗ Running workspace agent startup scripts (non-blocking)",
"✘ Running workspace agent startup scripts (non-blocking)",
"Warning: A startup script exited with an error and your workspace may be incomplete.",
"For more information and troubleshooting, see",
},
},
{
name: "Error when shutting down",
opts: cliui.AgentOptions{
@@ -485,6 +566,70 @@ func TestAgent(t *testing.T) {
}
require.NoError(t, cmd.Invoke().Run())
})
t.Run("ContextCancelDuringLogStreaming", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
agent := codersdk.WorkspaceAgent{
ID: uuid.New(),
Status: codersdk.WorkspaceAgentConnected,
FirstConnectedAt: ptr.Ref(time.Now()),
CreatedAt: time.Now(),
LifecycleState: codersdk.WorkspaceAgentLifecycleStarting,
StartedAt: ptr.Ref(time.Now()),
}
logs := make(chan []codersdk.WorkspaceAgentLog, 1)
logStreamStarted := make(chan struct{})
cmd := &serpent.Command{
Handler: func(inv *serpent.Invocation) error {
return cliui.Agent(inv.Context(), io.Discard, agent.ID, cliui.AgentOptions{
FetchInterval: time.Millisecond,
Wait: true,
Fetch: func(_ context.Context, _ uuid.UUID) (codersdk.WorkspaceAgent, error) {
return agent, nil
},
FetchLogs: func(_ context.Context, _ uuid.UUID, _ int64, follow bool) (<-chan []codersdk.WorkspaceAgentLog, io.Closer, error) {
// Signal that log streaming has started.
select {
case <-logStreamStarted:
default:
close(logStreamStarted)
}
return logs, closeFunc(func() error { return nil }), nil
},
})
},
}
inv := cmd.Invoke().WithContext(ctx)
done := make(chan error, 1)
go func() {
done <- inv.Run()
}()
// Wait for log streaming to start.
select {
case <-logStreamStarted:
case <-time.After(testutil.WaitShort):
t.Fatal("timed out waiting for log streaming to start")
}
// Cancel the context while streaming logs.
cancel()
// Verify that the agent function returns with a context error.
select {
case err := <-done:
require.ErrorIs(t, err, context.Canceled)
case <-time.After(testutil.WaitShort):
t.Fatal("timed out waiting for agent to return after context cancellation")
}
})
}
func TestPeerDiagnostics(t *testing.T) {
+3 -3
@@ -90,7 +90,6 @@ func TestExpRpty(t *testing.T) {
wantLabel := "coder.devcontainers.TestExpRpty.Container"
client, workspace, agentToken := setupWorkspaceForAgent(t)
ctx := testutil.Context(t, testutil.WaitLong)
pool, err := dockertest.NewPool("")
require.NoError(t, err, "Could not connect to docker")
ct, err := pool.RunWithOptions(&dockertest.RunOptions{
@@ -128,14 +127,15 @@ func TestExpRpty(t *testing.T) {
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitLong)
cmdDone := tGo(t, func() {
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
})
pty.ExpectMatch(" #")
pty.ExpectMatchContext(ctx, " #")
pty.WriteLine("hostname")
pty.ExpectMatch(ct.Container.Config.Hostname)
pty.ExpectMatchContext(ctx, ct.Container.Config.Hostname)
pty.WriteLine("exit")
<-cmdDone
})
+3 -3
@@ -2052,7 +2052,6 @@ func TestSSH_Container(t *testing.T) {
t.Parallel()
client, workspace, agentToken := setupWorkspaceForAgent(t)
ctx := testutil.Context(t, testutil.WaitLong)
pool, err := dockertest.NewPool("")
require.NoError(t, err, "Could not connect to docker")
ct, err := pool.RunWithOptions(&dockertest.RunOptions{
@@ -2087,14 +2086,15 @@ func TestSSH_Container(t *testing.T) {
clitest.SetupConfig(t, client, root)
ptty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitLong)
cmdDone := tGo(t, func() {
err := inv.WithContext(ctx).Run()
assert.NoError(t, err)
})
ptty.ExpectMatch(" #")
ptty.ExpectMatchContext(ctx, " #")
ptty.WriteLine("hostname")
ptty.ExpectMatch(ct.Container.Config.Hostname)
ptty.ExpectMatchContext(ctx, ct.Container.Config.Hostname)
ptty.WriteLine("exit")
<-cmdDone
})
+7
@@ -46,6 +46,13 @@ OPTIONS:
the workspace serves malicious JavaScript. This is recommended for
security purposes if a --wildcard-access-url is configured.
--disable-workspace-sharing bool, $CODER_DISABLE_WORKSPACE_SHARING
Disable workspace sharing (requires the "workspace-sharing" experiment
to be enabled). Workspace ACL checking is disabled and only owners can
have ssh, apps and terminal access to workspaces. Access based on the
'owner' role is also allowed unless disabled via
--disable-owner-workspace-access.
--swagger-enable bool, $CODER_SWAGGER_ENABLE
Expose the swagger endpoint via /swagger.
+6
@@ -497,6 +497,12 @@ disablePathApps: false
# workspaces.
# (default: <unset>, type: bool)
disableOwnerWorkspaceAccess: false
# Disable workspace sharing (requires the "workspace-sharing" experiment to be
# enabled). Workspace ACL checking is disabled and only owners can have ssh, apps
# and terminal access to workspaces. Access based on the 'owner' role is also
# allowed unless disabled via --disable-owner-workspace-access.
# (default: <unset>, type: bool)
disableWorkspaceSharing: false
# These options change the behavior of how clients interact with the Coder.
# Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.
client:
+2 -2
@@ -1680,8 +1680,8 @@ func TestTasksNotification(t *testing.T) {
require.NoError(t, err)
require.Len(t, workspaceAgent.Apps, 1)
require.GreaterOrEqual(t, len(workspaceAgent.Apps[0].Statuses), 1)
latestStatusIndex := len(workspaceAgent.Apps[0].Statuses) - 1
require.Equal(t, tc.newAppStatus, workspaceAgent.Apps[0].Statuses[latestStatusIndex].State)
// Statuses are ordered by created_at DESC, so the first element is the latest.
require.Equal(t, tc.newAppStatus, workspaceAgent.Apps[0].Statuses[0].State)
if tc.isNotificationSent {
// Then: A notification is sent to the workspace owner (memberUser)
+12 -3
@@ -1290,8 +1290,14 @@ const docTemplate = `{
}
],
"responses": {
"200": {
"description": "Returns existing file if duplicate",
"schema": {
"$ref": "#/definitions/codersdk.UploadResponse"
}
},
"201": {
"description": "Created",
"description": "Returns newly created file",
"schema": {
"$ref": "#/definitions/codersdk.UploadResponse"
}
@@ -1800,7 +1806,7 @@ const docTemplate = `{
"application/json"
],
"tags": [
"Organizations"
"Enterprise"
],
"summary": "Add new license",
"operationId": "add-new-license",
@@ -1836,7 +1842,7 @@ const docTemplate = `{
"application/json"
],
"tags": [
"Organizations"
"Enterprise"
],
"summary": "Update license entitlements",
"operationId": "update-license-entitlements",
@@ -14208,6 +14214,9 @@ const docTemplate = `{
"disable_path_apps": {
"type": "boolean"
},
"disable_workspace_sharing": {
"type": "boolean"
},
"docs_url": {
"$ref": "#/definitions/serpent.URL"
},
+12 -3
@@ -1116,8 +1116,14 @@
}
],
"responses": {
"200": {
"description": "Returns existing file if duplicate",
"schema": {
"$ref": "#/definitions/codersdk.UploadResponse"
}
},
"201": {
"description": "Created",
"description": "Returns newly created file",
"schema": {
"$ref": "#/definitions/codersdk.UploadResponse"
}
@@ -1570,7 +1576,7 @@
],
"consumes": ["application/json"],
"produces": ["application/json"],
"tags": ["Organizations"],
"tags": ["Enterprise"],
"summary": "Add new license",
"operationId": "add-new-license",
"parameters": [
@@ -1602,7 +1608,7 @@
}
],
"produces": ["application/json"],
"tags": ["Organizations"],
"tags": ["Enterprise"],
"summary": "Update license entitlements",
"operationId": "update-license-entitlements",
"responses": {
@@ -12792,6 +12798,9 @@
"disable_path_apps": {
"type": "boolean"
},
"disable_workspace_sharing": {
"type": "boolean"
},
"docs_url": {
"$ref": "#/definitions/serpent.URL"
},
+4
@@ -333,6 +333,10 @@ func New(options *Options) *API {
})
}
if options.DeploymentValues.DisableWorkspaceSharing {
rbac.SetWorkspaceACLDisabled(true)
}
if options.PrometheusRegistry == nil {
options.PrometheusRegistry = prometheus.NewRegistry()
}
+10
@@ -439,6 +439,16 @@ func Workspace(t testing.TB, db database.Store, orig database.WorkspaceTable) da
require.NoError(t, err, "set workspace as dormant")
workspace.DormantAt = orig.DormantAt
}
if len(orig.UserACL) > 0 || len(orig.GroupACL) > 0 {
err = db.UpdateWorkspaceACLByID(genCtx, database.UpdateWorkspaceACLByIDParams{
ID: workspace.ID,
UserACL: orig.UserACL,
GroupACL: orig.GroupACL,
})
require.NoError(t, err, "set workspace ACL")
workspace.UserACL = orig.UserACL
workspace.GroupACL = orig.GroupACL
}
return workspace
}
+2
@@ -3449,6 +3449,8 @@ COMMENT ON INDEX workspace_app_audit_sessions_unique_index IS 'Unique index to e
CREATE INDEX workspace_app_stats_workspace_id_idx ON workspace_app_stats USING btree (workspace_id);
CREATE INDEX workspace_app_statuses_app_id_idx ON workspace_app_statuses USING btree (app_id, created_at DESC);
CREATE INDEX workspace_modules_created_at_idx ON workspace_modules USING btree (created_at);
CREATE INDEX workspace_next_start_at_idx ON workspaces USING btree (next_start_at) WHERE (deleted = false);
@@ -0,0 +1 @@
DROP INDEX IF EXISTS workspace_app_statuses_app_id_idx;
@@ -0,0 +1 @@
CREATE INDEX workspace_app_statuses_app_id_idx ON workspace_app_statuses (app_id, created_at DESC);
+9 -2
@@ -430,9 +430,16 @@ func (w WorkspaceTable) RBACObject() rbac.Object {
return w.DormantRBAC()
}
return rbac.ResourceWorkspace.WithID(w.ID).
obj := rbac.ResourceWorkspace.
WithID(w.ID).
InOrg(w.OrganizationID).
WithOwner(w.OwnerID.String()).
WithOwner(w.OwnerID.String())
if rbac.WorkspaceACLDisabled() {
return obj
}
return obj.
WithGroupACL(w.GroupACL.RBACACL()).
WithACLUserList(w.UserACL.RBACACL())
}
@@ -143,6 +143,45 @@ func TestAPIKeyScopesExpand(t *testing.T) {
})
}
//nolint:tparallel,paralleltest
func TestWorkspaceACLDisabled(t *testing.T) {
uid := uuid.NewString()
gid := uuid.NewString()
ws := WorkspaceTable{
ID: uuid.New(),
OrganizationID: uuid.New(),
OwnerID: uuid.New(),
UserACL: WorkspaceACL{
uid: WorkspaceACLEntry{Permissions: []policy.Action{policy.ActionSSH}},
},
GroupACL: WorkspaceACL{
gid: WorkspaceACLEntry{Permissions: []policy.Action{policy.ActionSSH}},
},
}
t.Run("ACLsOmittedWhenDisabled", func(t *testing.T) {
rbac.SetWorkspaceACLDisabled(true)
t.Cleanup(func() { rbac.SetWorkspaceACLDisabled(false) })
obj := ws.RBACObject()
require.Empty(t, obj.ACLUserList, "user ACLs should be empty when disabled")
require.Empty(t, obj.ACLGroupList, "group ACLs should be empty when disabled")
})
t.Run("ACLsIncludedWhenEnabled", func(t *testing.T) {
rbac.SetWorkspaceACLDisabled(false)
obj := ws.RBACObject()
require.NotEmpty(t, obj.ACLUserList, "user ACLs should be present when enabled")
require.NotEmpty(t, obj.ACLGroupList, "group ACLs should be present when enabled")
require.Contains(t, obj.ACLUserList, uid)
require.Contains(t, obj.ACLGroupList, gid)
})
}
// Helpers
func requirePermission(t *testing.T, s rbac.Scope, resource string, action policy.Action) {
t.Helper()
+1
@@ -20246,6 +20246,7 @@ func (q *sqlQuerier) GetWorkspaceAppByAgentIDAndSlug(ctx context.Context, arg Ge
const getWorkspaceAppStatusesByAppIDs = `-- name: GetWorkspaceAppStatusesByAppIDs :many
SELECT id, created_at, agent_id, app_id, workspace_id, state, message, uri FROM workspace_app_statuses WHERE app_id = ANY($1 :: uuid [ ])
ORDER BY created_at DESC, id DESC
`
func (q *sqlQuerier) GetWorkspaceAppStatusesByAppIDs(ctx context.Context, ids []uuid.UUID) ([]WorkspaceAppStatus, error) {
+2 -1
@@ -71,7 +71,8 @@ VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
RETURNING *;
-- name: GetWorkspaceAppStatusesByAppIDs :many
SELECT * FROM workspace_app_statuses WHERE app_id = ANY(@ids :: uuid [ ]);
SELECT * FROM workspace_app_statuses WHERE app_id = ANY(@ids :: uuid [ ])
ORDER BY created_at DESC, id DESC;
-- name: GetLatestWorkspaceAppStatusByAppID :one
SELECT *
@@ -30,11 +30,34 @@ import (
// Forgetting to do so will result in a memory leak.
type Renderer interface {
Render(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics)
RenderWithoutCache(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics)
Close()
}
var ErrTemplateVersionNotReady = xerrors.New("template version job not finished")
// RenderCache is an interface for caching preview.Preview results.
type RenderCache interface {
get(templateVersionID, ownerID uuid.UUID, parameters map[string]string) (*preview.Output, bool)
put(templateVersionID, ownerID uuid.UUID, parameters map[string]string, output *preview.Output)
Close()
}
// noopRenderCache is a no-op implementation of RenderCache that doesn't cache anything.
type noopRenderCache struct{}
func (noopRenderCache) get(uuid.UUID, uuid.UUID, map[string]string) (*preview.Output, bool) {
return nil, false
}
func (noopRenderCache) put(uuid.UUID, uuid.UUID, map[string]string, *preview.Output) {
// no-op
}
func (noopRenderCache) Close() {
// no-op
}
// loader is used to load the necessary coder objects for rendering a template
// version's parameters. The output is a Renderer, which is the object that uses
// the cached objects to render the template version's parameters.
@@ -46,6 +69,9 @@ type loader struct {
job *database.ProvisionerJob
terraformValues *database.TemplateVersionTerraformValue
templateVariableValues *[]database.TemplateVersionVariable
// renderCache caches preview.Preview results
renderCache RenderCache
}
// Prepare is the entrypoint for this package. It loads the necessary objects &
@@ -54,6 +80,7 @@ type loader struct {
func Prepare(ctx context.Context, db database.Store, cache files.FileAcquirer, versionID uuid.UUID, options ...func(r *loader)) (Renderer, error) {
l := &loader{
templateVersionID: versionID,
renderCache: noopRenderCache{},
}
for _, opt := range options {
@@ -91,6 +118,12 @@ func WithTerraformValues(values database.TemplateVersionTerraformValue) func(r *
}
}
func WithRenderCache(cache RenderCache) func(r *loader) {
return func(r *loader) {
r.renderCache = cache
}
}
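WithRenderCache above follows Go's functional-options pattern: Prepare sets a safe default (the no-op cache) and lets callers override it. A minimal, self-contained sketch of that wiring (the types here are illustrative stand-ins, not the real coderd loader):

```go
package main

import "fmt"

// renderCache is a stand-in for the RenderCache interface.
type renderCache interface{ name() string }

type noopCache struct{}

func (noopCache) name() string { return "noop" }

type realCache struct{}

func (realCache) name() string { return "real" }

// loader mirrors the shape of the dynamicparameters loader: options
// mutate it before any data is loaded.
type loader struct {
	cache renderCache
}

// option is a functional option, like WithRenderCache in the diff.
type option func(*loader)

func withCache(c renderCache) option {
	return func(l *loader) { l.cache = c }
}

// prepare defaults to the no-op cache, then applies options in order,
// so later options win.
func prepare(opts ...option) *loader {
	l := &loader{cache: noopCache{}}
	for _, opt := range opts {
		opt(l)
	}
	return l
}

func main() {
	fmt.Println(prepare().cache.name())                       // noop
	fmt.Println(prepare(withCache(realCache{})).cache.name()) // real
}
```

Defaulting to a no-op implementation keeps every existing Prepare call site working unchanged; only callers that opt in pay for caching.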
func (r *loader) loadData(ctx context.Context, db database.Store) error {
if r.templateVersion == nil {
tv, err := db.GetTemplateVersionByID(ctx, r.templateVersionID)
@@ -227,6 +260,21 @@ type dynamicRenderer struct {
}
func (r *dynamicRenderer) Render(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
return r.render(ctx, ownerID, values, true)
}
func (r *dynamicRenderer) RenderWithoutCache(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
return r.render(ctx, ownerID, values, false)
}
func (r *dynamicRenderer) render(ctx context.Context, ownerID uuid.UUID, values map[string]string, useCache bool) (*preview.Output, hcl.Diagnostics) {
// Check cache first if enabled
if useCache {
if cached, ok := r.data.renderCache.get(r.data.templateVersionID, ownerID, values); ok {
return cached, nil
}
}
// Always start with the cached error, if we have one.
ownerErr := r.ownerErrors[ownerID]
if ownerErr == nil {
@@ -258,7 +306,14 @@ func (r *dynamicRenderer) Render(ctx context.Context, ownerID uuid.UUID, values
Logger: slog.New(slog.DiscardHandler),
}
return preview.Preview(ctx, input, r.templateFS)
output, diags := preview.Preview(ctx, input, r.templateFS)
// Store in cache if successful and caching is enabled
if useCache && !diags.HasErrors() {
r.data.renderCache.put(r.data.templateVersionID, ownerID, values, output)
}
return output, diags
}
func (r *dynamicRenderer) getWorkspaceOwnerData(ctx context.Context, ownerID uuid.UUID) error {
@@ -0,0 +1,214 @@
package dynamicparameters
import (
"context"
"fmt"
"sort"
"sync"
"time"
"github.com/cespare/xxhash/v2"
"github.com/google/uuid"
"github.com/prometheus/client_golang/prometheus"
"github.com/coder/preview"
"github.com/coder/quartz"
)
// RenderCacheImpl is a simple in-memory cache for preview.Preview results.
// It caches based on (templateVersionID, ownerID, parameterValues).
type RenderCacheImpl struct {
mu sync.RWMutex
entries map[cacheKey]*cacheEntry
// Metrics (optional)
cacheHits prometheus.Counter
cacheMisses prometheus.Counter
cacheSize prometheus.Gauge
// TTL cleanup
clock quartz.Clock
ttl time.Duration
stopOnce sync.Once
stopCh chan struct{}
doneCh chan struct{}
}
type cacheEntry struct {
output *preview.Output
timestamp time.Time
}
type cacheKey struct {
templateVersionID uuid.UUID
ownerID uuid.UUID
parameterHash uint64
}
// NewRenderCache creates a new render cache with a default TTL of 1 hour.
func NewRenderCache() *RenderCacheImpl {
return newCache(quartz.NewReal(), time.Hour, nil, nil, nil)
}
// NewRenderCacheWithMetrics creates a new render cache with Prometheus metrics.
func NewRenderCacheWithMetrics(cacheHits, cacheMisses prometheus.Counter, cacheSize prometheus.Gauge) *RenderCacheImpl {
return newCache(quartz.NewReal(), time.Hour, cacheHits, cacheMisses, cacheSize)
}
func newCache(clock quartz.Clock, ttl time.Duration, cacheHits, cacheMisses prometheus.Counter, cacheSize prometheus.Gauge) *RenderCacheImpl {
c := &RenderCacheImpl{
entries: make(map[cacheKey]*cacheEntry),
clock: clock,
cacheHits: cacheHits,
cacheMisses: cacheMisses,
cacheSize: cacheSize,
ttl: ttl,
stopCh: make(chan struct{}),
doneCh: make(chan struct{}),
}
// Start cleanup goroutine
go c.cleanupLoop(context.Background())
return c
}
// NewRenderCacheForTest creates a new render cache for testing purposes.
func NewRenderCacheForTest() *RenderCacheImpl {
return NewRenderCache()
}
// Close stops the cleanup goroutine and waits for it to finish.
func (c *RenderCacheImpl) Close() {
c.stopOnce.Do(func() {
close(c.stopCh)
<-c.doneCh
})
}
func (c *RenderCacheImpl) get(templateVersionID, ownerID uuid.UUID, parameters map[string]string) (*preview.Output, bool) {
	key := makeKey(templateVersionID, ownerID, parameters)
	c.mu.RLock()
	entry, ok := c.entries[key]
	var lastSeen time.Time
	if ok {
		lastSeen = entry.timestamp
	}
	c.mu.RUnlock()
	if !ok {
		// Record miss
		if c.cacheMisses != nil {
			c.cacheMisses.Inc()
		}
		return nil, false
	}
	// Check expiry against the timestamp copied while holding the lock;
	// reading entry.timestamp here without the lock would race with the
	// refresh below.
	if c.clock.Since(lastSeen) > c.ttl {
		// Expired entry, treat as miss
		if c.cacheMisses != nil {
			c.cacheMisses.Inc()
		}
		return nil, false
	}
	// Record hit
	if c.cacheHits != nil {
		c.cacheHits.Inc()
	}
	// Refresh timestamp on hit to keep frequently accessed entries alive
	c.mu.Lock()
	entry.timestamp = c.clock.Now()
	c.mu.Unlock()
	return entry.output, true
}
func (c *RenderCacheImpl) put(templateVersionID, ownerID uuid.UUID, parameters map[string]string, output *preview.Output) {
key := makeKey(templateVersionID, ownerID, parameters)
c.mu.Lock()
defer c.mu.Unlock()
c.entries[key] = &cacheEntry{
output: output,
timestamp: c.clock.Now(),
}
// Update cache size metric
if c.cacheSize != nil {
c.cacheSize.Set(float64(len(c.entries)))
}
}
func makeKey(templateVersionID, ownerID uuid.UUID, parameters map[string]string) cacheKey {
return cacheKey{
templateVersionID: templateVersionID,
ownerID: ownerID,
parameterHash: hashParameters(parameters),
}
}
// hashParameters creates a deterministic hash of the parameter map.
func hashParameters(params map[string]string) uint64 {
if len(params) == 0 {
return 0
}
// Sort keys for deterministic hashing
keys := make([]string, 0, len(params))
for k := range params {
keys = append(keys, k)
}
sort.Strings(keys)
	// Hash the sorted key/value pairs. A NUL byte separates every key
	// and value so that concatenation cannot produce collisions
	// (e.g. {"a": "1,b:2"} vs {"a": "1", "b": "2"}).
	h := xxhash.New()
	for _, k := range keys {
		_, _ = h.WriteString(k)
		_, _ = h.Write([]byte{0})
		_, _ = h.WriteString(params[k])
		_, _ = h.Write([]byte{0})
	}
	return h.Sum64()
}
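hashParameters must be order-independent because Go map iteration order is randomized. A self-contained sketch of the same idea using the stdlib hash/fnv in place of xxhash (the two are interchangeable here; only the hash values differ), with NUL separators so adjacent key/value pairs cannot collide:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// hashParams deterministically hashes a string map: keys are sorted
// first, then each key/value pair is fed to the hash with NUL
// separators so concatenation cannot produce collisions.
func hashParams(params map[string]string) uint64 {
	if len(params) == 0 {
		return 0
	}
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := fnv.New64a()
	for _, k := range keys {
		_, _ = h.Write([]byte(k))
		_, _ = h.Write([]byte{0})
		_, _ = h.Write([]byte(params[k]))
		_, _ = h.Write([]byte{0})
	}
	return h.Sum64()
}

func main() {
	a := hashParams(map[string]string{"a": "1", "b": "2"})
	b := hashParams(map[string]string{"b": "2", "a": "1"})
	fmt.Println(a == b) // true: insertion order is irrelevant
	c := hashParams(map[string]string{"a": "1", "b": "3"})
	fmt.Println(a == c) // false: values participate in the hash
}
```

This order-independence is what TestRenderCache_ParameterHashConsistency below asserts against the real cache key.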
// cleanupLoop runs periodically to remove expired cache entries.
func (c *RenderCacheImpl) cleanupLoop(ctx context.Context) {
defer close(c.doneCh)
// Run cleanup every 15 minutes
cleanupFunc := func() error {
c.cleanup()
return nil
}
// Run once immediately
_ = cleanupFunc()
// Create a cancellable context for the ticker
tickerCtx, cancel := context.WithCancel(ctx)
defer cancel()
// Create ticker for periodic cleanup
tkr := c.clock.TickerFunc(tickerCtx, 15*time.Minute, cleanupFunc, "render-cache-cleanup")
// Wait for stop signal
<-c.stopCh
cancel()
_ = tkr.Wait()
}
// cleanup removes expired entries from the cache.
func (c *RenderCacheImpl) cleanup() {
c.mu.Lock()
defer c.mu.Unlock()
now := c.clock.Now()
for key, entry := range c.entries {
if now.Sub(entry.timestamp) > c.ttl {
delete(c.entries, key)
}
}
// Update cache size metric after cleanup
if c.cacheSize != nil {
c.cacheSize.Set(float64(len(c.entries)))
}
}
@@ -0,0 +1,354 @@
package dynamicparameters
import (
"testing"
"time"
"github.com/google/uuid"
"github.com/prometheus/client_golang/prometheus"
promtestutil "github.com/prometheus/client_golang/prometheus/testutil"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/testutil"
"github.com/coder/preview"
previewtypes "github.com/coder/preview/types"
"github.com/coder/quartz"
)
func TestRenderCache_BasicOperations(t *testing.T) {
t.Parallel()
cache := NewRenderCache()
defer cache.Close()
templateVersionID := uuid.New()
ownerID := uuid.New()
params := map[string]string{"region": "us-west-2"}
// Cache should be empty initially
_, ok := cache.get(templateVersionID, ownerID, params)
require.False(t, ok, "cache should be empty initially")
// Put an entry in the cache
expectedOutput := &preview.Output{
Parameters: []previewtypes.Parameter{
{
ParameterData: previewtypes.ParameterData{
Name: "region",
Type: previewtypes.ParameterTypeString,
},
},
},
}
cache.put(templateVersionID, ownerID, params, expectedOutput)
// Get should now return the cached value
cachedOutput, ok := cache.get(templateVersionID, ownerID, params)
require.True(t, ok, "cache should contain the entry")
require.Same(t, expectedOutput, cachedOutput, "should return same pointer")
}
func TestRenderCache_DifferentKeysAreSeparate(t *testing.T) {
t.Parallel()
cache := NewRenderCache()
defer cache.Close()
templateVersion1 := uuid.New()
templateVersion2 := uuid.New()
owner1 := uuid.New()
owner2 := uuid.New()
params := map[string]string{"region": "us-west-2"}
output1 := &preview.Output{}
output2 := &preview.Output{}
output3 := &preview.Output{}
// Put different entries for different keys
cache.put(templateVersion1, owner1, params, output1)
cache.put(templateVersion2, owner1, params, output2)
cache.put(templateVersion1, owner2, params, output3)
// Verify each key returns its own entry
cached1, ok1 := cache.get(templateVersion1, owner1, params)
require.True(t, ok1)
require.Same(t, output1, cached1)
cached2, ok2 := cache.get(templateVersion2, owner1, params)
require.True(t, ok2)
require.Same(t, output2, cached2)
cached3, ok3 := cache.get(templateVersion1, owner2, params)
require.True(t, ok3)
require.Same(t, output3, cached3)
}
func TestRenderCache_ParameterHashConsistency(t *testing.T) {
t.Parallel()
cache := NewRenderCache()
defer cache.Close()
templateVersionID := uuid.New()
ownerID := uuid.New()
// Parameters in different order should produce same cache key
params1 := map[string]string{"a": "1", "b": "2", "c": "3"}
params2 := map[string]string{"c": "3", "a": "1", "b": "2"}
output := &preview.Output{}
cache.put(templateVersionID, ownerID, params1, output)
// Should hit cache even with different parameter order
cached, ok := cache.get(templateVersionID, ownerID, params2)
require.True(t, ok, "different parameter order should still hit cache")
require.Same(t, output, cached)
}
func TestRenderCache_EmptyParameters(t *testing.T) {
t.Parallel()
cache := NewRenderCache()
defer cache.Close()
templateVersionID := uuid.New()
ownerID := uuid.New()
// Test with empty parameters
emptyParams := map[string]string{}
output := &preview.Output{}
cache.put(templateVersionID, ownerID, emptyParams, output)
cached, ok := cache.get(templateVersionID, ownerID, emptyParams)
require.True(t, ok)
require.Same(t, output, cached)
}
func TestRenderCache_PrebuildScenario(t *testing.T) {
t.Parallel()
// This test simulates the prebuild scenario where multiple prebuilds
// are created from the same template version with the same preset parameters.
cache := NewRenderCache()
defer cache.Close()
// In prebuilds, all instances use the same fixed ownerID
prebuildOwnerID := uuid.MustParse("c42fdf75-3097-471c-8c33-fb52454d81c0") // database.PrebuildsSystemUserID
templateVersionID := uuid.New()
presetParams := map[string]string{
"instance_type": "t3.micro",
"region": "us-west-2",
}
output := &preview.Output{}
// First prebuild - cache miss
_, ok := cache.get(templateVersionID, prebuildOwnerID, presetParams)
require.False(t, ok, "first prebuild should miss cache")
cache.put(templateVersionID, prebuildOwnerID, presetParams, output)
// Second prebuild with same template version and preset - cache hit
cached2, ok2 := cache.get(templateVersionID, prebuildOwnerID, presetParams)
require.True(t, ok2, "second prebuild should hit cache")
require.Same(t, output, cached2, "should return cached output")
// Third prebuild - also cache hit
cached3, ok3 := cache.get(templateVersionID, prebuildOwnerID, presetParams)
require.True(t, ok3, "third prebuild should hit cache")
require.Same(t, output, cached3, "should return cached output")
// All three prebuilds shared the same cache entry
}
func TestRenderCache_Metrics(t *testing.T) {
t.Parallel()
// Create test metrics
cacheHits := prometheus.NewCounter(prometheus.CounterOpts{
Name: "test_cache_hits_total",
})
cacheMisses := prometheus.NewCounter(prometheus.CounterOpts{
Name: "test_cache_misses_total",
})
cacheSize := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "test_cache_entries",
})
cache := NewRenderCacheWithMetrics(cacheHits, cacheMisses, cacheSize)
defer cache.Close()
templateVersionID := uuid.New()
ownerID := uuid.New()
params := map[string]string{"region": "us-west-2"}
// Initially: 0 hits, 0 misses, 0 size
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheHits), "initial hits should be 0")
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheMisses), "initial misses should be 0")
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "initial size should be 0")
// First get - should be a miss
_, ok := cache.get(templateVersionID, ownerID, params)
require.False(t, ok)
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheHits), "hits should still be 0")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should be 1")
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "size should still be 0")
// Put an entry
output := &preview.Output{}
cache.put(templateVersionID, ownerID, params, output)
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "size should be 1 after put")
// Second get - should be a hit
_, ok = cache.get(templateVersionID, ownerID, params)
require.True(t, ok)
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheHits), "hits should be 1")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "size should still be 1")
// Third get - another hit
_, ok = cache.get(templateVersionID, ownerID, params)
require.True(t, ok)
require.Equal(t, float64(2), promtestutil.ToFloat64(cacheHits), "hits should be 2")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
// Put another entry with different params
params2 := map[string]string{"region": "us-east-1"}
cache.put(templateVersionID, ownerID, params2, output)
require.Equal(t, float64(2), promtestutil.ToFloat64(cacheSize), "size should be 2 after second put")
// Get with different params - should be a hit
_, ok = cache.get(templateVersionID, ownerID, params2)
require.True(t, ok)
require.Equal(t, float64(3), promtestutil.ToFloat64(cacheHits), "hits should be 3")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
}
func TestRenderCache_TTL(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
clock := quartz.NewMock(t)
trapTickerFunc := clock.Trap().TickerFunc("render-cache-cleanup")
defer trapTickerFunc.Close()
// Create cache with short TTL for testing
cache := newCache(clock, 100*time.Millisecond, nil, nil, nil)
defer cache.Close()
// Wait for the initial cleanup and ticker to be created
trapTickerFunc.MustWait(ctx).Release(ctx)
templateVersionID := uuid.New()
ownerID := uuid.New()
params := map[string]string{"region": "us-west-2"}
output := &preview.Output{}
// Put an entry
cache.put(templateVersionID, ownerID, params, output)
// Should be a hit immediately
cached, ok := cache.get(templateVersionID, ownerID, params)
require.True(t, ok, "should hit cache immediately")
require.Same(t, output, cached)
// Advance time beyond TTL
clock.Advance(150 * time.Millisecond)
// Should be a miss after TTL
_, ok = cache.get(templateVersionID, ownerID, params)
require.False(t, ok, "should miss cache after TTL expiration")
}
func TestRenderCache_CleanupRemovesExpiredEntries(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
clock := quartz.NewMock(t)
trapTickerFunc := clock.Trap().TickerFunc("render-cache-cleanup")
defer trapTickerFunc.Close()
cacheSize := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "test_cache_entries",
})
cache := newCache(clock, 100*time.Millisecond, nil, nil, cacheSize)
defer cache.Close()
// Wait for the initial cleanup and ticker to be created
trapTickerFunc.MustWait(ctx).Release(ctx)
// Initial size should be 0 after first cleanup
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "should have 0 entries initially")
templateVersionID := uuid.New()
ownerID := uuid.New()
// Add 3 entries
for i := 0; i < 3; i++ {
params := map[string]string{"index": string(rune('0' + i))}
cache.put(templateVersionID, ownerID, params, &preview.Output{})
}
require.Equal(t, float64(3), promtestutil.ToFloat64(cacheSize), "should have 3 entries")
// Advance time beyond TTL
clock.Advance(150 * time.Millisecond)
// Trigger cleanup by advancing to the next ticker event (15 minutes from start minus what we already advanced)
clock.Advance(15*time.Minute - 150*time.Millisecond).MustWait(ctx)
// All entries should be removed
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "all entries should be removed after cleanup")
}
func TestRenderCache_TimestampRefreshOnHit(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
clock := quartz.NewMock(t)
trapTickerFunc := clock.Trap().TickerFunc("render-cache-cleanup")
defer trapTickerFunc.Close()
// Create cache with 1 second TTL for testing
cache := newCache(clock, time.Second, nil, nil, nil)
defer cache.Close()
// Wait for the initial cleanup and ticker to be created
trapTickerFunc.MustWait(ctx).Release(ctx)
templateVersionID := uuid.New()
ownerID := uuid.New()
params := map[string]string{"region": "us-west-2"}
output := &preview.Output{}
// Put an entry at T=0
cache.put(templateVersionID, ownerID, params, output)
// Advance time to 600ms (still within TTL)
clock.Advance(600 * time.Millisecond)
// Access the entry - should hit and refresh timestamp to T=600ms
cached, ok := cache.get(templateVersionID, ownerID, params)
require.True(t, ok, "should hit cache")
require.Same(t, output, cached)
// Advance another 600ms (now at T=1200ms)
// Entry was created at T=0 but refreshed at T=600ms, so it should still be valid
clock.Advance(600 * time.Millisecond)
// Should still hit because timestamp was refreshed at T=600ms
cached, ok = cache.get(templateVersionID, ownerID, params)
require.True(t, ok, "should still hit cache because timestamp was refreshed")
require.Same(t, output, cached)
// Now advance another 1.1 seconds (to T=2300ms)
// Last refresh was at T=1200ms, so now it should be expired
clock.Advance(1100 * time.Millisecond)
// Should miss because more than 1 second since last access
_, ok = cache.get(templateVersionID, ownerID, params)
require.False(t, ok, "should miss cache after TTL from last access")
}
@@ -0,0 +1,197 @@
package dynamicparameters
import (
"context"
"testing"
"github.com/google/uuid"
"github.com/hashicorp/hcl/v2"
"github.com/prometheus/client_golang/prometheus"
promtestutil "github.com/prometheus/client_golang/prometheus/testutil"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
"github.com/coder/preview"
previewtypes "github.com/coder/preview/types"
"github.com/coder/terraform-provider-coder/v2/provider"
)
// TestRenderCache_PrebuildWithResolveParameters simulates the actual prebuild flow
// where ResolveParameters calls Render() twice - once with previous values and once
// with the final computed values.
func TestRenderCache_PrebuildWithResolveParameters(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
// Create test metrics
cacheHits := prometheus.NewCounter(prometheus.CounterOpts{
Name: "test_prebuild_cache_hits_total",
})
cacheMisses := prometheus.NewCounter(prometheus.CounterOpts{
Name: "test_prebuild_cache_misses_total",
})
cacheSize := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "test_prebuild_cache_entries",
})
cache := NewRenderCacheWithMetrics(cacheHits, cacheMisses, cacheSize)
defer cache.Close()
// Simulate prebuild scenario
prebuildOwnerID := uuid.MustParse("c42fdf75-3097-471c-8c33-fb52454d81c0") // database.PrebuildsSystemUserID
templateVersionID := uuid.New()
// Preset parameters that all prebuilds share
presetParams := []database.TemplateVersionPresetParameter{
{Name: "instance_type", Value: "t3.micro"},
{Name: "region", Value: "us-west-2"},
}
// Create a mock renderer that returns consistent parameter definitions
mockRenderer := &mockRenderer{
cache: cache,
templateVersionID: templateVersionID,
output: &preview.Output{
Parameters: []previewtypes.Parameter{
{
ParameterData: previewtypes.ParameterData{
Name: "instance_type",
Type: previewtypes.ParameterTypeString,
FormType: provider.ParameterFormTypeInput,
Mutable: true,
DefaultValue: previewtypes.StringLiteral("t3.micro"),
Required: true,
},
Value: previewtypes.StringLiteral("t3.micro"),
Diagnostics: nil,
},
{
ParameterData: previewtypes.ParameterData{
Name: "region",
Type: previewtypes.ParameterTypeString,
FormType: provider.ParameterFormTypeInput,
Mutable: true,
DefaultValue: previewtypes.StringLiteral("us-west-2"),
Required: true,
},
Value: previewtypes.StringLiteral("us-west-2"),
Diagnostics: nil,
},
},
},
}
// Initial metrics should be 0
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheHits), "initial hits should be 0")
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheMisses), "initial misses should be 0")
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "initial size should be 0")
// === FIRST PREBUILD ===
// First build: no previous values, preset values provided
values1, err := ResolveParameters(ctx, prebuildOwnerID, mockRenderer, true,
[]database.WorkspaceBuildParameter{}, // No previous values (first build)
[]codersdk.WorkspaceBuildParameter{}, // No build-specific values
presetParams, // Preset values from template
)
require.NoError(t, err)
require.NotNil(t, values1)
// After first prebuild:
// - ResolveParameters calls Render() twice:
// 1. RenderWithoutCache with previousValuesMap (empty {}) → no cache operation
// 2. Render with values.ValuesMap() ({preset}) → miss, creates cache entry
// Expected: 0 hits, 1 miss, 1 cache entry
t.Logf("After first prebuild: hits=%v, misses=%v, size=%v",
promtestutil.ToFloat64(cacheHits),
promtestutil.ToFloat64(cacheMisses),
promtestutil.ToFloat64(cacheSize))
require.Equal(t, float64(0), promtestutil.ToFloat64(cacheHits), "first prebuild should have 0 hits")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "first prebuild should have 1 miss")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "should have 1 cache entry after first prebuild")
// === SECOND PREBUILD ===
// Second build: previous values now set to preset values
previousValues := []database.WorkspaceBuildParameter{
{Name: "instance_type", Value: "t3.micro"},
{Name: "region", Value: "us-west-2"},
}
values2, err := ResolveParameters(ctx, prebuildOwnerID, mockRenderer, false,
previousValues, // Previous values from first build
[]codersdk.WorkspaceBuildParameter{},
presetParams,
)
require.NoError(t, err)
require.NotNil(t, values2)
// After second prebuild:
// - ResolveParameters calls Render() twice:
// 1. RenderWithoutCache with previousValuesMap ({preset}) → no cache operation
// 2. Render with values.ValuesMap() ({preset}) → HIT (cache entry from first prebuild's 2nd render)
// Expected: 1 hit, 1 miss (still), 1 cache entry (still)
t.Logf("After second prebuild: hits=%v, misses=%v, size=%v",
promtestutil.ToFloat64(cacheHits),
promtestutil.ToFloat64(cacheMisses),
promtestutil.ToFloat64(cacheSize))
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheHits), "second prebuild should have 1 hit")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "should still have 1 cache entry")
// === THIRD PREBUILD ===
values3, err := ResolveParameters(ctx, prebuildOwnerID, mockRenderer, false,
previousValues,
[]codersdk.WorkspaceBuildParameter{},
presetParams,
)
require.NoError(t, err)
require.NotNil(t, values3)
// After third prebuild:
// - ResolveParameters calls Render() twice:
// 1. RenderWithoutCache with previousValuesMap ({preset}) → no cache operation
// 2. Render with values.ValuesMap() ({preset}) → HIT
// Expected: 2 hits, 1 miss (still), 1 cache entry (still)
t.Logf("After third prebuild: hits=%v, misses=%v, size=%v",
promtestutil.ToFloat64(cacheHits),
promtestutil.ToFloat64(cacheMisses),
promtestutil.ToFloat64(cacheSize))
require.Equal(t, float64(2), promtestutil.ToFloat64(cacheHits), "third prebuild should have 2 total hits")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "should still have 1 cache entry")
// Summary: With 3 prebuilds, we should have:
// - 2 cache hits (1 from 2nd prebuild, 1 from 3rd prebuild)
// - 1 cache miss (1 from 1st prebuild)
// - 1 cache entry (for preset params only - introspection renders are not cached)
}
// mockRenderer is a simple renderer that uses the cache for testing
type mockRenderer struct {
cache RenderCache
templateVersionID uuid.UUID
output *preview.Output
}
func (m *mockRenderer) Render(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
// This simulates what dynamicRenderer does - check cache first
if cached, ok := m.cache.get(m.templateVersionID, ownerID, values); ok {
return cached, nil
}
// Not in cache, "render" (just return our mock output) and cache it
m.cache.put(m.templateVersionID, ownerID, values, m.output)
return m.output, nil
}
func (m *mockRenderer) RenderWithoutCache(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
// For test purposes, just return output without caching
return m.output, nil
}
func (m *mockRenderer) Close() {}
@@ -69,3 +69,18 @@ func (mr *MockRendererMockRecorder) Render(ctx, ownerID, values any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Render", reflect.TypeOf((*MockRenderer)(nil).Render), ctx, ownerID, values)
}
// RenderWithoutCache mocks base method.
func (m *MockRenderer) RenderWithoutCache(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "RenderWithoutCache", ctx, ownerID, values)
ret0, _ := ret[0].(*preview.Output)
ret1, _ := ret[1].(hcl.Diagnostics)
return ret0, ret1
}
// RenderWithoutCache indicates an expected call of RenderWithoutCache.
func (mr *MockRendererMockRecorder) RenderWithoutCache(ctx, ownerID, values any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RenderWithoutCache", reflect.TypeOf((*MockRenderer)(nil).RenderWithoutCache), ctx, ownerID, values)
}
@@ -69,7 +69,10 @@ func ResolveParameters(
//
// This is how the form should look to the user on their workspace settings page.
// This is the original form truth that our validations should initially be based on.
output, diags := renderer.Render(ctx, ownerID, previousValuesMap)
//
// Skip caching this render - it's only used for introspection to identify ephemeral
// parameters. The actual values used for the build will be rendered and cached below.
output, diags := renderer.RenderWithoutCache(ctx, ownerID, previousValuesMap)
if diags.HasErrors() {
// Top level diagnostics should break the build. Previous values (and new) should
// always be valid. If there is a case where this is not true, then this has to
@@ -28,6 +28,25 @@ func TestResolveParameters(t *testing.T) {
render := rendermock.NewMockRenderer(ctrl)
// A single immutable parameter with no previous value.
render.EXPECT().
RenderWithoutCache(gomock.Any(), gomock.Any(), gomock.Any()).
AnyTimes().
Return(&preview.Output{
Parameters: []previewtypes.Parameter{
{
ParameterData: previewtypes.ParameterData{
Name: "immutable",
Type: previewtypes.ParameterTypeString,
FormType: provider.ParameterFormTypeInput,
Mutable: false,
DefaultValue: previewtypes.StringLiteral("foo"),
Required: true,
},
Value: previewtypes.StringLiteral("foo"),
Diagnostics: nil,
},
},
}, nil)
render.EXPECT().
Render(gomock.Any(), gomock.Any(), gomock.Any()).
AnyTimes().
@@ -78,7 +97,7 @@ func TestResolveParameters(t *testing.T) {
// A single immutable parameter with no previous value.
render.EXPECT().
Render(gomock.Any(), gomock.Any(), gomock.Any()).
RenderWithoutCache(gomock.Any(), gomock.Any(), gomock.Any()).
// Return the mutable param first
Return(&preview.Output{
Parameters: []previewtypes.Parameter{
@@ -40,6 +40,15 @@ func (r *loader) staticRender(ctx context.Context, db database.Store) (*staticRe
}
func (r *staticRender) Render(_ context.Context, _ uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
return r.render(values)
}
func (r *staticRender) RenderWithoutCache(_ context.Context, _ uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
// Static renderer doesn't use cache, so this is the same as Render
return r.render(values)
}
func (r *staticRender) render(values map[string]string) (*preview.Output, hcl.Diagnostics) {
params := r.staticParams
for i := range params {
param := &params[i]
@@ -41,7 +41,8 @@ const (
// @Tags Files
// @Param Content-Type header string true "Content-Type must be `application/x-tar` or `application/zip`" default(application/x-tar)
// @Param file formData file true "File to be uploaded. If using tar format, file must conform to ustar (pax may cause problems)."
// @Success 201 {object} codersdk.UploadResponse
// @Success 200 {object} codersdk.UploadResponse "Returns existing file if duplicate"
// @Success 201 {object} codersdk.UploadResponse "Returns newly created file"
// @Router /files [post]
func (api *API) postFile(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
@@ -251,13 +251,16 @@ type HTTPError struct {
func (e HTTPError) Write(rw http.ResponseWriter, r *http.Request) {
if e.RenderStaticPage {
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: e.Code,
HideStatus: true,
Title: e.Msg,
Description: e.Detail,
RetryEnabled: false,
DashboardURL: "/login",
Status: e.Code,
HideStatus: true,
Title: e.Msg,
Description: e.Detail,
Actions: []site.Action{
{
URL: "/login",
Text: "Back to site",
},
},
RenderDescriptionMarkdown: e.RenderDetailMarkdown,
})
return
@@ -87,7 +87,9 @@ func (c *Cache) refreshTemplateBuildTimes(ctx context.Context) error {
//nolint:gocritic // This is a system service.
ctx = dbauthz.AsSystemRestricted(ctx)
templates, err := c.database.GetTemplates(ctx)
templates, err := c.database.GetTemplatesWithFilter(ctx, database.GetTemplatesWithFilterParams{
Deleted: false,
})
if err != nil {
return err
}
+25 -2
@@ -75,7 +75,18 @@ func ShowAuthorizePage(accessURL *url.URL) http.HandlerFunc {
callbackURL, err := url.Parse(app.CallbackURL)
if err != nil {
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{Status: http.StatusInternalServerError, HideStatus: false, Title: "Internal Server Error", Description: err.Error(), RetryEnabled: false, DashboardURL: accessURL.String(), Warnings: nil})
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusInternalServerError,
HideStatus: false,
Title: "Internal Server Error",
Description: err.Error(),
Actions: []site.Action{
{
URL: accessURL.String(),
Text: "Back to site",
},
},
})
return
}
@@ -85,7 +96,19 @@ func ShowAuthorizePage(accessURL *url.URL) http.HandlerFunc {
for i, err := range validationErrs {
errStr[i] = err.Detail
}
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{Status: http.StatusBadRequest, HideStatus: false, Title: "Invalid Query Parameters", Description: "One or more query parameters are missing or invalid.", RetryEnabled: false, DashboardURL: accessURL.String(), Warnings: errStr})
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusBadRequest,
HideStatus: false,
Title: "Invalid Query Parameters",
Description: "One or more query parameters are missing or invalid.",
Warnings: errStr,
Actions: []site.Action{
{
URL: accessURL.String(),
Text: "Back to site",
},
},
})
return
}
+16
@@ -236,3 +236,19 @@ func (z Object) WithGroupACL(groups map[string][]policy.Action) Object {
AnyOrgOwner: z.AnyOrgOwner,
}
}
// TODO(geokat): similar to builtInRoles, this should ideally be
// scoped to a coderd rather than a global.
var workspaceACLDisabled bool
// SetWorkspaceACLDisabled disables/enables workspace sharing for the
// deployment.
func SetWorkspaceACLDisabled(v bool) {
workspaceACLDisabled = v
}
// WorkspaceACLDisabled returns true if workspace sharing is disabled
// for the deployment.
func WorkspaceACLDisabled() bool {
return workspaceACLDisabled
}
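A consumer of these accessors would gate ACL evaluation on the flag; the helper below is hypothetical (the real authorizer lives elsewhere in coderd), but it shows the intended short-circuit:

```go
package main

import "fmt"

// Mirrors the package-global flag and accessors from the diff above.
var workspaceACLDisabled bool

func SetWorkspaceACLDisabled(v bool) { workspaceACLDisabled = v }
func WorkspaceACLDisabled() bool     { return workspaceACLDisabled }

// canAccessViaACL is a hypothetical policy helper: ownership always
// grants access, and ACL grants only count while sharing is enabled.
func canAccessViaACL(isOwner, hasACLGrant bool) bool {
	if isOwner {
		return true
	}
	if WorkspaceACLDisabled() {
		return false
	}
	return hasACLGrant
}

func main() {
	SetWorkspaceACLDisabled(true)
	fmt.Println(canAccessViaACL(false, true)) // grant ignored: false
	SetWorkspaceACLDisabled(false)
	fmt.Println(canAccessViaACL(false, true)) // grant honored: true
}
```

This matches the TestWorkspaceSharingDisabled behavior later in this diff: with the flag set, an ACL grant yields a 404 rather than access.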
+20 -14
@@ -199,10 +199,9 @@ func (s *ServerTailnet) ReverseProxy(targetURL, dashboardURL *url.URL, agentID u
proxy := httputil.NewSingleHostReverseProxy(&tgt)
proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, theErr error) {
var (
desc = "Failed to proxy request to application: " + theErr.Error()
additionalInfo = ""
additionalButtonLink = ""
additionalButtonText = ""
desc = "Failed to proxy request to application: " + theErr.Error()
additionalInfo = ""
actions = []site.Action{}
)
var tlsError tls.RecordHeaderError
@@ -222,21 +221,28 @@ func (s *ServerTailnet) ReverseProxy(targetURL, dashboardURL *url.URL, agentID u
app = app.ChangePortProtocol(targetProtocol)
switchURL.Host = fmt.Sprintf("%s%s", app.String(), strings.TrimPrefix(wildcardHostname, "*"))
additionalButtonLink = switchURL.String()
additionalButtonText = fmt.Sprintf("Switch to %s", strings.ToUpper(targetProtocol))
actions = append(actions, site.Action{
URL: switchURL.String(),
Text: fmt.Sprintf("Switch to %s", strings.ToUpper(targetProtocol)),
})
additionalInfo += fmt.Sprintf("This error seems to be due to an app protocol mismatch, try switching to %s.", strings.ToUpper(targetProtocol))
}
}
site.RenderStaticErrorPage(w, r, site.ErrorPageData{
Status: http.StatusBadGateway,
Title: "Bad Gateway",
Description: desc,
RetryEnabled: true,
DashboardURL: dashboardURL.String(),
AdditionalInfo: additionalInfo,
AdditionalButtonLink: additionalButtonLink,
AdditionalButtonText: additionalButtonText,
Status: http.StatusBadGateway,
Title: "Bad Gateway",
Description: desc,
Actions: append(actions, []site.Action{
{
Text: "Retry",
},
{
URL: dashboardURL.String(),
Text: "Back to site",
},
}...),
AdditionalInfo: additionalInfo,
})
}
proxy.Director = s.director(agentID, proxy.Director)
+12
@@ -71,6 +71,18 @@ Prompt: "Set up CI/CD pipeline" →
"task_name": "setup-cicd"
}
Prompt: "Work on https://github.com/coder/coder/issues/1234"
{
"display_name": "Work on coder/coder #1234",
"task_name": "coder-1234"
}
Prompt: "Fix https://github.com/org/repo/pull/567"
{
"display_name": "Fix org/repo PR #567",
"task_name": "repo-pr-567"
}
If a suitable name cannot be created, output exactly:
{
"display_name": "Task Unnamed",
+52 -21
@@ -4,6 +4,7 @@ import (
"fmt"
"net/http"
"net/url"
"path"
"cdr.dev/slog"
"github.com/coder/coder/v2/codersdk"
@@ -30,12 +31,16 @@ func WriteWorkspaceApp404(log slog.Logger, accessURL *url.URL, rw http.ResponseW
}
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusNotFound,
Title: "Application Not Found",
Description: "The application or workspace you are trying to access does not exist or you do not have permission to access it.",
RetryEnabled: false,
DashboardURL: accessURL.String(),
Warnings: warnings,
Status: http.StatusNotFound,
Title: "Application Not Found",
Description: "The application or workspace you are trying to access does not exist or you do not have permission to access it.",
Warnings: warnings,
Actions: []site.Action{
{
URL: accessURL.String(),
Text: "Back to site",
},
},
})
}
@@ -60,11 +65,15 @@ func WriteWorkspaceApp500(log slog.Logger, accessURL *url.URL, rw http.ResponseW
)
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusInternalServerError,
Title: "Internal Server Error",
Description: "An internal server error occurred.",
RetryEnabled: false,
DashboardURL: accessURL.String(),
Status: http.StatusInternalServerError,
Title: "Internal Server Error",
Description: "An internal server error occurred.",
Actions: []site.Action{
{
URL: accessURL.String(),
Text: "Back to site",
},
},
})
}
@@ -85,11 +94,18 @@ func WriteWorkspaceAppOffline(log slog.Logger, accessURL *url.URL, rw http.Respo
}
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusBadGateway,
Title: "Application Unavailable",
Description: msg,
RetryEnabled: true,
DashboardURL: accessURL.String(),
Status: http.StatusBadGateway,
Title: "Application Unavailable",
Description: msg,
Actions: []site.Action{
{
Text: "Retry",
},
{
URL: accessURL.String(),
Text: "Back to site",
},
},
})
}
@@ -109,11 +125,26 @@ func WriteWorkspaceOffline(log slog.Logger, accessURL *url.URL, rw http.Response
)
}
actions := []site.Action{
{
URL: accessURL.String(),
Text: "Back to site",
},
}
workspaceURL, err := url.Parse(accessURL.String())
if err == nil {
workspaceURL.Path = path.Join(accessURL.Path, "@"+appReq.UsernameOrID, appReq.WorkspaceNameOrID)
actions = append(actions, site.Action{
URL: workspaceURL.String(),
Text: "View workspace",
})
}
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusBadRequest,
Title: "Workspace Offline",
Description: fmt.Sprintf("Last workspace transition was to the %q state. Start the workspace to access its applications.", codersdk.WorkspaceTransitionStop),
RetryEnabled: false,
DashboardURL: accessURL.String(),
Status: http.StatusBadRequest,
Title: "Workspace Offline",
Description: fmt.Sprintf("Last workspace transition was to the %q state. Start the workspace to access its applications.", codersdk.WorkspaceTransitionStop),
Actions: actions,
})
}
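The "View workspace" link above is built with `path.Join`, which normalizes slashes; a standalone sketch of that construction (hostname and names are illustrative):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// workspaceLink rebuilds the "View workspace" URL; the arguments stand
// in for accessURL and the fields of the app request.
func workspaceLink(accessURL, username, workspace string) (string, error) {
	u, err := url.Parse(accessURL)
	if err != nil {
		return "", err
	}
	// path.Join cleans duplicate slashes, so a trailing slash on the
	// access URL cannot produce "//@user/workspace".
	u.Path = path.Join(u.Path, "@"+username, workspace)
	return u.String(), nil
}

func main() {
	link, _ := workspaceLink("https://coder.example.com/", "alice", "dev")
	fmt.Println(link) // https://coder.example.com/@alice/dev
}
```

Because the parse error is checked, a malformed access URL simply drops the extra action rather than rendering a broken link.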
+62 -31
@@ -185,10 +185,14 @@ func (s *Server) handleAPIKeySmuggling(rw http.ResponseWriter, r *http.Request,
Status: http.StatusBadRequest,
Title: "Bad Request",
Description: "Could not decrypt API key. Workspace app API key smuggling is not permitted on the primary access URL. Please remove the query parameter and try again.",
// Retry is disabled because the user needs to remove the query
// No retry is included because the user needs to remove the query
// parameter before they try again.
RetryEnabled: false,
DashboardURL: s.DashboardURL.String(),
Actions: []site.Action{
{
URL: s.DashboardURL.String(),
Text: "Back to site",
},
},
})
return false
}
@@ -204,10 +208,14 @@ func (s *Server) handleAPIKeySmuggling(rw http.ResponseWriter, r *http.Request,
Status: http.StatusBadRequest,
Title: "Bad Request",
Description: "Could not decrypt API key. Please remove the query parameter and try again.",
// Retry is disabled because the user needs to remove the query
// No retry is included because the user needs to remove the query
// parameter before they try again.
RetryEnabled: false,
DashboardURL: s.DashboardURL.String(),
Actions: []site.Action{
{
URL: s.DashboardURL.String(),
Text: "Back to site",
},
},
})
return false
}
@@ -224,11 +232,15 @@ func (s *Server) handleAPIKeySmuggling(rw http.ResponseWriter, r *http.Request,
// startup, but we'll check anyways.
s.Logger.Error(r.Context(), "could not split invalid app hostname", slog.F("hostname", s.Hostname))
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusInternalServerError,
Title: "Internal Server Error",
Description: "The app is configured with an invalid app wildcard hostname. Please contact an administrator.",
RetryEnabled: false,
DashboardURL: s.DashboardURL.String(),
Status: http.StatusInternalServerError,
Title: "Internal Server Error",
Description: "The app is configured with an invalid app wildcard hostname. Please contact an administrator.",
Actions: []site.Action{
{
URL: s.DashboardURL.String(),
Text: "Back to site",
},
},
})
return false
}
@@ -274,11 +286,15 @@ func (s *Server) handleAPIKeySmuggling(rw http.ResponseWriter, r *http.Request,
func (s *Server) workspaceAppsProxyPath(rw http.ResponseWriter, r *http.Request) {
if s.DisablePathApps {
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusForbidden,
Title: "Forbidden",
Description: "Path-based applications are disabled on this Coder deployment by the administrator.",
RetryEnabled: false,
DashboardURL: s.DashboardURL.String(),
Status: http.StatusForbidden,
Title: "Forbidden",
Description: "Path-based applications are disabled on this Coder deployment by the administrator.",
Actions: []site.Action{
{
URL: s.DashboardURL.String(),
Text: "Back to site",
},
},
})
return
}
@@ -287,11 +303,15 @@ func (s *Server) workspaceAppsProxyPath(rw http.ResponseWriter, r *http.Request)
// lookup the username from token. We used to redirect by doing this lookup.
if chi.URLParam(r, "user") == codersdk.Me {
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusNotFound,
Title: "Application Not Found",
Description: "Applications must be accessed with the full username, not @me.",
RetryEnabled: false,
DashboardURL: s.DashboardURL.String(),
Status: http.StatusNotFound,
Title: "Application Not Found",
Description: "Applications must be accessed with the full username, not @me.",
Actions: []site.Action{
{
URL: s.DashboardURL.String(),
Text: "Back to site",
},
},
})
return
}
@@ -519,11 +539,15 @@ func (s *Server) parseHostname(rw http.ResponseWriter, r *http.Request, next htt
app, err := appurl.ParseSubdomainAppURL(subdomain)
if err != nil {
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusBadRequest,
Title: "Invalid Application URL",
Description: fmt.Sprintf("Could not parse subdomain application URL %q: %s", subdomain, err.Error()),
RetryEnabled: false,
DashboardURL: s.DashboardURL.String(),
Status: http.StatusBadRequest,
Title: "Invalid Application URL",
Description: fmt.Sprintf("Could not parse subdomain application URL %q: %s", subdomain, err.Error()),
Actions: []site.Action{
{
URL: s.DashboardURL.String(),
Text: "Back to site",
},
},
})
return appurl.ApplicationURL{}, false
}
@@ -547,11 +571,18 @@ func (s *Server) proxyWorkspaceApp(rw http.ResponseWriter, r *http.Request, appT
appURL, err := url.Parse(appToken.AppURL)
if err != nil {
site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
Status: http.StatusBadRequest,
Title: "Bad Request",
Description: fmt.Sprintf("Application has an invalid URL %q: %s", appToken.AppURL, err.Error()),
RetryEnabled: true,
DashboardURL: s.DashboardURL.String(),
Status: http.StatusBadRequest,
Title: "Bad Request",
Description: fmt.Sprintf("Application has an invalid URL %q: %s", appToken.AppURL, err.Error()),
Actions: []site.Action{
{
Text: "Retry",
},
{
URL: s.DashboardURL.String(),
Text: "Back to site",
},
},
})
return
}
+7
@@ -2598,6 +2598,13 @@ func convertWorkspace(
failingAgents := []uuid.UUID{}
for _, resource := range workspaceBuild.Resources {
for _, agent := range resource.Agents {
// Sub-agents (e.g., devcontainer agents) are excluded from the
// workspace health calculation. Their health is managed by
// their parent agent, and temporary disconnections during
// devcontainer rebuilds should not affect workspace health.
if agent.ParentID.Valid {
continue
}
if !agent.Health.Healthy {
failingAgents = append(failingAgents, agent.ID)
}
+148
@@ -346,6 +346,81 @@ func TestWorkspace(t *testing.T) {
assert.False(t, agent2.Health.Healthy)
assert.NotEmpty(t, agent2.Health.Reason)
})
t.Run("Sub-agent excluded", func(t *testing.T) {
t.Parallel()
// This test verifies that sub-agents (e.g., devcontainer agents)
// are excluded from the workspace health calculation. When a
// devcontainer is rebuilding, the sub-agent may be temporarily
// disconnected, but this should not make the workspace unhealthy.
client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: []*proto.Response{{
Type: &proto.Response_Apply{
Apply: &proto.ApplyComplete{
Resources: []*proto.Resource{{
Name: "some",
Type: "example",
Agents: []*proto.Agent{{
Id: uuid.NewString(),
Name: "parent",
Auth: &proto.Agent_Token{},
}},
}},
},
},
}},
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, client, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
// Get the workspace and parent agent.
workspace, err := client.Workspace(ctx, workspace.ID)
require.NoError(t, err)
parentAgent := workspace.LatestBuild.Resources[0].Agents[0]
require.True(t, parentAgent.Health.Healthy, "parent agent should be healthy initially")
// Create a sub-agent with a short connection timeout so it becomes
// unhealthy quickly (simulating a devcontainer rebuild scenario).
subAgent := dbgen.WorkspaceAgent(t, db, database.WorkspaceAgent{
ParentID: uuid.NullUUID{Valid: true, UUID: parentAgent.ID},
ResourceID: parentAgent.ResourceID,
Name: "subagent",
ConnectionTimeoutSeconds: 1,
})
// Wait for the sub-agent to become unhealthy due to timeout.
var subAgentUnhealthy bool
require.Eventually(t, func() bool {
workspace, err = client.Workspace(ctx, workspace.ID)
if err != nil {
return false
}
for _, res := range workspace.LatestBuild.Resources {
for _, agent := range res.Agents {
if agent.ID == subAgent.ID && !agent.Health.Healthy {
subAgentUnhealthy = true
return true
}
}
}
return false
}, testutil.WaitShort, testutil.IntervalFast, "sub-agent should become unhealthy")
require.True(t, subAgentUnhealthy, "sub-agent should be unhealthy")
// Verify that the workspace is still healthy because sub-agents
// are excluded from the health calculation.
assert.True(t, workspace.Health.Healthy, "workspace should be healthy despite unhealthy sub-agent")
assert.Empty(t, workspace.Health.FailingAgents, "failing agents should not include sub-agent")
})
})
t.Run("Archived", func(t *testing.T) {
@@ -5165,6 +5240,79 @@ func TestDeleteWorkspaceACL(t *testing.T) {
})
}
// nolint:tparallel,paralleltest // Subtests modify package global.
func TestWorkspaceSharingDisabled(t *testing.T) {
t.Run("CanAccessWhenEnabled", func(t *testing.T) {
var (
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
// DisableWorkspaceSharing is false (default)
}),
})
admin = coderdtest.CreateFirstUser(t, client)
_, wsOwner = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
userClient, user = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
)
ctx := testutil.Context(t, testutil.WaitMedium)
// Create workspace with ACL granting access to user
ws := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OwnerID: wsOwner.ID,
OrganizationID: admin.OrganizationID,
UserACL: database.WorkspaceACL{
user.ID.String(): database.WorkspaceACLEntry{
Permissions: []policy.Action{
policy.ActionRead, policy.ActionSSH, policy.ActionApplicationConnect,
},
},
},
}).Do().Workspace
// User SHOULD be able to access workspace when sharing is enabled
fetchedWs, err := userClient.Workspace(ctx, ws.ID)
require.NoError(t, err)
require.Equal(t, ws.ID, fetchedWs.ID)
})
t.Run("NoAccessWhenDisabled", func(t *testing.T) {
var (
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
dv.DisableWorkspaceSharing = true
}),
})
admin = coderdtest.CreateFirstUser(t, client)
_, wsOwner = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
userClient, user = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
)
ctx := testutil.Context(t, testutil.WaitMedium)
// Create workspace with ACL granting access to user directly in DB
ws := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OwnerID: wsOwner.ID,
OrganizationID: admin.OrganizationID,
UserACL: database.WorkspaceACL{
user.ID.String(): database.WorkspaceACLEntry{
Permissions: []policy.Action{
policy.ActionRead, policy.ActionSSH, policy.ActionApplicationConnect,
},
},
},
}).Do().Workspace
// User should NOT be able to access workspace when sharing is disabled
_, err := userClient.Workspace(ctx, ws.ID)
require.Error(t, err)
var sdkErr *codersdk.Error
require.ErrorAs(t, err, &sdkErr)
require.Equal(t, http.StatusNotFound, sdkErr.StatusCode())
})
}
func TestWorkspaceCreateWithImplicitPreset(t *testing.T) {
t.Parallel()
+30 -3
@@ -88,12 +88,15 @@ type Builder struct {
parameterRender dynamicparameters.Renderer
workspaceTags *map[string]string
// renderCache caches template rendering results
renderCache dynamicparameters.RenderCache
prebuiltWorkspaceBuildStage sdkproto.PrebuiltWorkspaceBuildStage
verifyNoLegacyParametersOnce bool
}
type UsageChecker interface {
CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (UsageCheckResponse, error)
CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, transition database.WorkspaceTransition) (UsageCheckResponse, error)
}
type UsageCheckResponse struct {
@@ -105,7 +108,7 @@ type NoopUsageChecker struct{}
var _ UsageChecker = NoopUsageChecker{}
func (NoopUsageChecker) CheckBuildUsage(_ context.Context, _ database.Store, _ *database.TemplateVersion) (UsageCheckResponse, error) {
func (NoopUsageChecker) CheckBuildUsage(_ context.Context, _ database.Store, _ *database.TemplateVersion, _ database.WorkspaceTransition) (UsageCheckResponse, error) {
return UsageCheckResponse{
Permitted: true,
}, nil
@@ -253,6 +256,14 @@ func (b Builder) TemplateVersionPresetID(id uuid.UUID) Builder {
return b
}
// RenderCache sets the render cache to use for template rendering.
// This allows multiple workspace builds to share cached render results.
func (b Builder) RenderCache(cache dynamicparameters.RenderCache) Builder {
// nolint: revive
b.renderCache = cache
return b
}
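Like the other `Builder` setters, `RenderCache` uses a value receiver and returns the modified copy, so chained configuration never mutates the original. A reduced sketch of the idiom (string fields stand in for the real `dynamicparameters.RenderCache` and `uuid.UUID` types):

```go
package main

import "fmt"

// Builder reduces the wsbuilder type to two fields for illustration.
type Builder struct {
	renderCache string
	presetID    string
}

// Setters take a value receiver and return the modified copy, so a
// chain configures a new Builder without touching the original.
func (b Builder) RenderCache(c string) Builder {
	b.renderCache = c
	return b
}

func (b Builder) TemplateVersionPresetID(id string) Builder {
	b.presetID = id
	return b
}

func main() {
	base := Builder{}
	configured := base.RenderCache("shared-cache").TemplateVersionPresetID("preset-1")
	fmt.Println(base.renderCache == "", configured.renderCache)
}
```

This is why the `// nolint: revive` is needed: revive flags modifying a value receiver, but here the copy-and-return is the point.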
type BuildError struct {
// Status is a suitable HTTP status code
Status int
@@ -686,6 +697,22 @@ func (b *Builder) getDynamicParameterRenderer() (dynamicparameters.Renderer, err
return nil, xerrors.Errorf("get template version variables: %w", err)
}
// Pass render cache if available
if b.renderCache != nil {
renderer, err := dynamicparameters.Prepare(b.ctx, b.store, b.fileCache, tv.ID,
dynamicparameters.WithTemplateVersion(*tv),
dynamicparameters.WithProvisionerJob(*job),
dynamicparameters.WithTerraformValues(*tfVals),
dynamicparameters.WithTemplateVariableValues(variableValues),
dynamicparameters.WithRenderCache(b.renderCache),
)
if err != nil {
return nil, xerrors.Errorf("get template version renderer: %w", err)
}
b.parameterRender = renderer
return renderer, nil
}
renderer, err := dynamicparameters.Prepare(b.ctx, b.store, b.fileCache, tv.ID,
dynamicparameters.WithTemplateVersion(*tv),
dynamicparameters.WithProvisionerJob(*job),
@@ -1307,7 +1334,7 @@ func (b *Builder) checkUsage() error {
return BuildError{http.StatusInternalServerError, "Failed to fetch template version", err}
}
resp, err := b.usageChecker.CheckBuildUsage(b.ctx, b.store, templateVersion)
resp, err := b.usageChecker.CheckBuildUsage(b.ctx, b.store, templateVersion, b.trans)
if err != nil {
return BuildError{http.StatusInternalServerError, "Failed to check build usage", err}
}
+5 -5
@@ -1049,7 +1049,7 @@ func TestWorkspaceBuildUsageChecker(t *testing.T) {
var calls int64
fakeUsageChecker := &fakeUsageChecker{
checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion, _ database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
atomic.AddInt64(&calls, 1)
return wsbuilder.UsageCheckResponse{Permitted: true}, nil
},
@@ -1126,7 +1126,7 @@ func TestWorkspaceBuildUsageChecker(t *testing.T) {
var calls int64
fakeUsageChecker := &fakeUsageChecker{
checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion, _ database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
atomic.AddInt64(&calls, 1)
return c.response, c.responseErr
},
@@ -1577,11 +1577,11 @@ func expectFindMatchingPresetID(id uuid.UUID, err error) func(mTx *dbmock.MockSt
}
type fakeUsageChecker struct {
checkBuildUsageFunc func(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error)
checkBuildUsageFunc func(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error)
}
func (f *fakeUsageChecker) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
return f.checkBuildUsageFunc(ctx, store, templateVersion)
func (f *fakeUsageChecker) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
return f.checkBuildUsageFunc(ctx, store, templateVersion, transition)
}
func withNoTask(mTx *dbmock.MockStore) {
+10
@@ -495,6 +495,7 @@ type DeploymentValues struct {
SSHConfig SSHConfig `json:"config_ssh,omitempty" typescript:",notnull"`
WgtunnelHost serpent.String `json:"wgtunnel_host,omitempty" typescript:",notnull"`
DisableOwnerWorkspaceExec serpent.Bool `json:"disable_owner_workspace_exec,omitempty" typescript:",notnull"`
DisableWorkspaceSharing serpent.Bool `json:"disable_workspace_sharing,omitempty" typescript:",notnull"`
ProxyHealthStatusInterval serpent.Duration `json:"proxy_health_status_interval,omitempty" typescript:",notnull"`
EnableTerraformDebugMode serpent.Bool `json:"enable_terraform_debug_mode,omitempty" typescript:",notnull"`
UserQuietHoursSchedule UserQuietHoursScheduleConfig `json:"user_quiet_hours_schedule,omitempty" typescript:",notnull"`
@@ -2728,6 +2729,15 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
YAML: "disableOwnerWorkspaceAccess",
Annotations: serpent.Annotations{}.Mark(annotationExternalProxies, "true"),
},
{
Name: "Disable Workspace Sharing",
Description: `Disable workspace sharing (requires the "workspace-sharing" experiment to be enabled). When set, workspace ACL checks are skipped and only workspace owners have SSH, app, and terminal access to their workspaces. Access based on the 'owner' role is still allowed unless disabled via --disable-owner-workspace-access.`,
Flag: "disable-workspace-sharing",
Env: "CODER_DISABLE_WORKSPACE_SHARING",
Value: &c.DisableWorkspaceSharing,
YAML: "disableWorkspaceSharing",
},
{
Name: "Session Duration",
Description: "The token expiry duration for browser sessions. Sessions may last longer if they are actively making requests, but this functionality can be disabled via --disable-session-expiry-refresh.",
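Given the option definition above, the new setting should be reachable through the usual three routes — a sketch to verify against `coder server --help` on your build:

```
# CLI flag
coder server --disable-workspace-sharing

# environment variable
CODER_DISABLE_WORKSPACE_SHARING=true coder server

# server YAML configuration
disableWorkspaceSharing: true
```

Note the option has no `annotationExternalProxies` mark, unlike its `disable-owner-workspace-access` neighbor, so it applies to the primary coderd only.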
-6
@@ -1,6 +0,0 @@
# Redirect old offline deployments URL to new airgap URL
/install/offline /install/airgap 301
# Redirect old offline anchor fragments to new airgap anchors
/install/offline#offline-docs /install/airgap#airgap-docs 301
/install/offline#offline-container-images /install/airgap#airgap-container-images 301
+3 -3
@@ -510,9 +510,9 @@ resource "kubernetes_pod" "workspace" {
## Get help
- **Examples**: Review real-world examples from the [official Coder templates](https://registry.coder.com/contributors/coder?tab=templates):
- [AWS EC2 (Devcontainer)](https://registry.coder.com/templates/aws-devcontainer) - AWS EC2 VMs with devcontainer support
- [Docker (Devcontainer)](https://registry.coder.com/templates/docker-devcontainer) - Envbuilder containers with dev container support
- [Kubernetes (Devcontainer)](https://registry.coder.com/templates/kubernetes-devcontainer) - Envbuilder pods on Kubernetes
- [AWS EC2 (Devcontainer)](https://registry.coder.com/templates/aws-devcontainer) - AWS EC2 VMs with Envbuilder
- [Docker (Devcontainer)](https://registry.coder.com/templates/docker-devcontainer) - Docker-in-Docker with Dev Containers integration
- [Kubernetes (Devcontainer)](https://registry.coder.com/templates/kubernetes-devcontainer) - Kubernetes pods with Envbuilder
- [Docker Containers](https://registry.coder.com/templates/docker) - Basic Docker container workspaces
- [AWS EC2 (Linux)](https://registry.coder.com/templates/aws-linux) - AWS EC2 VMs for Linux development
- [Google Compute Engine (Linux)](https://registry.coder.com/templates/gcp-vm-container) - GCP VM instances
+1 -1
@@ -52,7 +52,7 @@ For any information not strictly contained in these sections, check out our
### Development containers (dev containers)
- A
[Development Container](./templates/managing-templates/devcontainers/index.md)
[Development Container](./integrations/devcontainers/index.md)
is an open-source specification for defining development environments (called
dev containers). It is generally stored in VCS alongside associated source
code. It can reference an existing base image, or a custom Dockerfile that
@@ -1,10 +1,9 @@
# Add a dev container template to Coder
# Add an Envbuilder template
A Coder administrator adds a dev container-compatible template to Coder
(Envbuilder). This allows the template to prompt for the developer for their dev
container repository's URL as a
[parameter](../../extending-templates/parameters.md) when they create their
workspace. Envbuilder clones the repo and builds a container from the
A Coder administrator adds an Envbuilder-compatible template to Coder. This
allows the template to prompt the developer for their dev container repository's
URL as a [parameter](../../../templates/extending-templates/parameters.md) when they create
their workspace. Envbuilder clones the repo and builds a container from the
`devcontainer.json` specified in the repo.
You can create template files through the Coder dashboard, CLI, or you can
@@ -128,7 +127,7 @@ their development environments:
| [AWS EC2 dev container](https://github.com/coder/coder/tree/main/examples/templates/aws-devcontainer) | Runs a development container inside a single EC2 instance. It also mounts the Docker socket from the VM inside the container to enable Docker inside the workspace. |
Your template can prompt the user for a repo URL with
[parameters](../../extending-templates/parameters.md):
[parameters](../../../templates/extending-templates/parameters.md):
![Dev container parameter screen](../../../../images/templates/devcontainers.png)
@@ -143,4 +142,4 @@ Lifecycle scripts are managed by project developers.
## Next steps
- [Dev container security and caching](./devcontainer-security-caching.md)
- [Envbuilder security and caching](./envbuilder-security-caching.md)
@@ -1,4 +1,4 @@
# Dev container releases and known issues
# Envbuilder releases and known issues
## Release channels
@@ -1,4 +1,4 @@
# Dev container security and caching
# Envbuilder security and caching
Ensure Envbuilder can only pull pre-approved images and artifacts by configuring
it with your existing HTTP proxies, firewalls, and artifact managers.
@@ -26,7 +26,7 @@ of caching:
- Caches the entire image, skipping the build process completely (except for
post-build
[lifecycle scripts](./add-devcontainer.md#dev-container-lifecycle-scripts)).
[lifecycle scripts](./add-envbuilder.md#dev-container-lifecycle-scripts)).
Note that caching requires push access to a registry, and may require approval
from relevant infrastructure team(s).
@@ -62,5 +62,5 @@ You may also wish to consult a
## Next steps
- [Dev container releases and known issues](./devcontainer-releases-known-issues.md)
- [Envbuilder releases and known issues](./envbuilder-releases-known-issues.md)
- [Dotfiles](../../../../user-guides/workspace-dotfiles.md)
@@ -0,0 +1,52 @@
# Envbuilder
Envbuilder is an open-source tool that builds development environments from
[dev container](https://containers.dev/implementors/spec/) configuration files.
Unlike the [Dev Containers integration](../integration.md),
Envbuilder transforms the workspace image itself rather than running containers
inside the workspace.
Envbuilder is well-suited for Kubernetes-native deployments without privileged
containers, environments where Docker is unavailable or restricted, and
workflows where administrators need infrastructure-level control over image
builds, caching, and security scanning. For workspaces with Docker available,
the [Dev Containers Integration](../integration.md) offers container management
with dashboard visibility and multi-container support.
Dev containers provide developers with increased autonomy and control over their
Coder cloud development environments.
By using dev containers, developers can customize their workspaces with tools
pre-approved by platform teams in registries like
[JFrog Artifactory](../../jfrog-artifactory.md). This simplifies
workflows, reduces the need for tickets and approvals, and promotes greater
independence for developers.
## Prerequisites
An administrator should construct or choose a base image and create a template
that includes a `devcontainer_builder` image before a developer team configures
dev containers.
## Dev Container Features
[Dev container Features](https://containers.dev/implementors/features/) allow
owners of a project to specify self-contained units of code and runtime
configuration that can be composed together on top of an existing base image.
This is a good place to install project-specific tools, such as
language-specific runtimes and compilers.
## Coder Envbuilder
[Envbuilder](https://github.com/coder/envbuilder/) is an open-source project
maintained by Coder that runs dev containers via Coder templates and your
underlying infrastructure. Envbuilder can run on Docker or Kubernetes.
It is independently packaged and versioned from the centralized Coder
open-source project. This means that Envbuilder can be used with Coder, but it
is not required. It also means that dev container builds can scale independently
of the Coder control plane and even run within a CI/CD pipeline.
## Next steps
- [Add an Envbuilder template](./add-envbuilder.md)
@@ -0,0 +1,49 @@
# Dev Containers
Dev containers allow developers to define their development environment
as code using the [Dev Container specification](https://containers.dev/).
Configuration lives in a `devcontainer.json` file alongside source code,
enabling consistent, reproducible environments.
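For example, a minimal `devcontainer.json` committed at the repository root might look like the following (the name and image are illustrative):

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}
```

Because the file lives in the repository, every workspace that clones the project builds the same environment.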
By adopting dev containers, organizations can:
- **Standardize environments**: Eliminate "works on my machine" issues while
still allowing developers to customize their tools within approved boundaries.
- **Scale efficiently**: Let developers maintain their own environment
definitions, reducing the burden on platform teams.
- **Improve security**: Use hardened base images and controlled package
registries to enforce security policies while enabling developer self-service.
Coder supports two approaches for running dev containers. Choose based on your
infrastructure and workflow requirements.
## Dev Containers Integration
The Dev Containers Integration uses the standard `@devcontainers/cli` and Docker
to run containers inside your workspace. This is the recommended approach for
most use cases.
**Best for:**
- Workspaces with Docker available (Docker-in-Docker or mounted socket)
- Dev container management in the Coder dashboard (discovery, status, rebuild)
- Multiple dev containers per workspace
[Configure Dev Containers Integration](./integration.md)
For user documentation, see the
[Dev Containers user guide](../../../user-guides/devcontainers/index.md).
## Envbuilder
Envbuilder transforms the workspace image itself from a `devcontainer.json`,
rather than running containers inside the workspace. It does not require
a Docker daemon.
**Best for:**
- Environments where Docker is unavailable or restricted
- Infrastructure-level control over image builds, caching, and security scanning
- Kubernetes-native deployments without privileged containers
[Configure Envbuilder](./envbuilder/index.md)
@@ -0,0 +1,259 @@
# Configure a template for Dev Containers
This guide covers the Dev Containers Integration, which uses Docker. For
environments without Docker, see [Envbuilder](./envbuilder/index.md) as an
alternative.
To enable Dev Containers in workspaces, configure your template with the Dev Containers
modules and configurations outlined in this doc.
Dev Containers are currently not supported in Windows or macOS workspaces.
## Configuration Modes
There are two approaches to configuring Dev Containers in Coder:
### Manual Configuration
Use the [`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer) Terraform resource to explicitly define which Dev
Containers should be started in your workspace. This approach provides:
- Predictable behavior and explicit control
- Clear template configuration
- Easier troubleshooting
- Better for production environments
This is the recommended approach for most use cases.
### Project Discovery
Alternatively, enable automatic discovery of Dev Containers in Git repositories.
The agent scans for `devcontainer.json` files and surfaces them in the Coder UI.
See [Environment Variables](#environment-variables) for configuration options.
This approach is useful when developers frequently switch between repositories
or work with many projects, as it reduces template maintenance overhead.
## Install the Dev Containers CLI
Use the
[devcontainers-cli](https://registry.coder.com/modules/devcontainers-cli) module
to ensure the `@devcontainers/cli` is installed in your workspace:
```terraform
module "devcontainers-cli" {
count = data.coder_workspace.me.start_count
source = "registry.coder.com/coder/devcontainers-cli/coder"
agent_id = coder_agent.dev.id
}
```
Alternatively, install the devcontainer CLI manually in your base image.
## Configure Automatic Dev Container Startup
The
[`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer)
resource automatically starts a Dev Container in your workspace, ensuring it's
ready when you access the workspace:
```terraform
resource "coder_devcontainer" "my-repository" {
count = data.coder_workspace.me.start_count
agent_id = coder_agent.dev.id
workspace_folder = "/home/coder/my-repository"
}
```
The `workspace_folder` attribute must point to a valid project folder containing
a `devcontainer.json` file. Consider using the
[`git-clone`](https://registry.coder.com/modules/git-clone) module to ensure
your repository is cloned and ready for automatic startup.
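A sketch of pairing the `git-clone` module with automatic startup, so the workspace folder exists before the dev container builds (the repository URL and paths are hypothetical; check the module's documented inputs before use):

```terraform
module "git-clone" {
  count    = data.coder_workspace.me.start_count
  source   = "registry.coder.com/coder/git-clone/coder"
  agent_id = coder_agent.dev.id
  url      = "https://github.com/example-org/my-repository"
  # Clone under the same path that the coder_devcontainer resource's
  # workspace_folder points at.
  base_dir = "/home/coder"
}
```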
For multi-repo workspaces, define multiple `coder_devcontainer` resources, each
pointing to a different repository. Each one runs as a separate sub-agent with
its own terminal and apps in the dashboard.
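For instance, a multi-repo workspace might declare one resource per repository (the folder names are illustrative):

```terraform
resource "coder_devcontainer" "frontend" {
  count            = data.coder_workspace.me.start_count
  agent_id         = coder_agent.dev.id
  workspace_folder = "/home/coder/frontend"
}

resource "coder_devcontainer" "backend" {
  count            = data.coder_workspace.me.start_count
  agent_id         = coder_agent.dev.id
  workspace_folder = "/home/coder/backend"
}
```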
## Enable Dev Containers Integration
Dev Containers integration is **enabled by default** in Coder 2.24.0 and later.
You don't need to set any environment variables unless you want to change the
default behavior.
If you need to explicitly disable Dev Containers, set the
`CODER_AGENT_DEVCONTAINERS_ENABLE` environment variable to `false`:
```terraform
resource "docker_container" "workspace" {
count = data.coder_workspace.me.start_count
image = "codercom/oss-dogfood:latest"
env = [
"CODER_AGENT_DEVCONTAINERS_ENABLE=false", # Explicitly disable
# ... Other environment variables.
]
# ... Other container configuration.
}
```
See the [Environment Variables](#environment-variables) section below for more
details on available configuration options.
## Environment Variables
The following environment variables control Dev Container behavior in your
workspace. Both `CODER_AGENT_DEVCONTAINERS_ENABLE` and
`CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE` are **enabled by default**,
so you typically don't need to set them unless you want to explicitly disable
the feature.
### CODER_AGENT_DEVCONTAINERS_ENABLE
**Default: `true`** • **Added in: v2.24.0**
Enables the Dev Containers integration in the Coder agent.
The Dev Containers feature is enabled by default. You can explicitly disable it
by setting this to `false`.
### CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE
**Default: `true`** • **Added in: v2.25.0**
Enables automatic discovery of Dev Containers in Git repositories.
When enabled, the agent scans the configured working directory (set via the
`directory` attribute in `coder_agent`, typically the user's home directory) for
Git repositories. If the directory itself is a Git repository, it searches that
project. Otherwise, it searches immediate subdirectories for Git repositories.
For each repository found, the agent looks for `devcontainer.json` files in the
[standard locations](../../../user-guides/devcontainers/index.md#add-a-devcontainerjson)
and surfaces discovered Dev Containers in the Coder UI. Discovery respects
`.gitignore` patterns.
Set to `false` if you prefer explicit configuration via `coder_devcontainer`.
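The scanned root is controlled by the agent's `directory` attribute. A minimal sketch (the path is illustrative):

```terraform
resource "coder_agent" "dev" {
  arch = "amd64"
  os   = "linux"
  # Project discovery scans this directory (or, if it is not itself a Git
  # repository, its immediate subdirectories) for devcontainer.json files.
  directory = "/home/coder"
}
```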
### CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE
**Default: `false`** • **Added in: v2.25.0**
Automatically starts Dev Containers discovered via project discovery.
When enabled, discovered Dev Containers will be automatically built and started
during workspace initialization. This only applies to Dev Containers found via
project discovery. Dev Containers defined with the `coder_devcontainer` resource
always auto-start regardless of this setting.
## Per-Container Customizations
> [!NOTE]
>
> Dev container sub-agents are created dynamically after workspace provisioning,
> so Terraform resources like
> [`coder_script`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/script)
> and [`coder_app`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/app)
> cannot currently be attached to them. Modules from the
> [Coder registry](https://registry.coder.com) that depend on these resources
> are also not currently supported for sub-agents.
>
> To add tools to dev containers, use
> [dev container features](../../../user-guides/devcontainers/working-with-dev-containers.md#dev-container-features).
> For Coder-specific apps, use the
> [`apps` customization](../../../user-guides/devcontainers/customizing-dev-containers.md#custom-apps).
Developers can customize individual dev containers using the `customizations.coder`
block in their `devcontainer.json` file. Available options include:
- `ignore` — Hide a dev container from Coder completely
- `autoStart` — Control whether the container starts automatically (requires
`CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE` to be enabled)
- `name` — Set a custom agent name
- `displayApps` — Control which built-in apps appear
- `apps` — Define custom applications
For the full reference, see
[Customizing dev containers](../../../user-guides/devcontainers/customizing-dev-containers.md).
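A sketch combining several of these options in `devcontainer.json` (the values are illustrative; consult the linked reference for the exact schema of each key):

```json
{
  "name": "My Dev Container",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "customizations": {
    "coder": {
      "name": "backend-api",
      "autoStart": true,
      "displayApps": {
        "web_terminal": true,
        "vscode": true
      }
    }
  }
}
```

Because these settings live in the repository's `devcontainer.json`, developers can adjust per-container behavior without template changes.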
## Complete Template Example
Here's a simplified template example that uses Dev Containers with manual
configuration:
```terraform
terraform {
required_providers {
coder = { source = "coder/coder" }
docker = { source = "kreuzwerker/docker" }
}
}
provider "coder" {}
data "coder_workspace" "me" {}
data "coder_workspace_owner" "me" {}
resource "coder_agent" "dev" {
arch = "amd64"
os = "linux"
startup_script_behavior = "blocking"
startup_script = "sudo service docker start"
shutdown_script = "sudo service docker stop"
# ...
}
module "devcontainers-cli" {
count = data.coder_workspace.me.start_count
source = "registry.coder.com/coder/devcontainers-cli/coder"
agent_id = coder_agent.dev.id
}
resource "coder_devcontainer" "my-repository" {
count = data.coder_workspace.me.start_count
agent_id = coder_agent.dev.id
workspace_folder = "/home/coder/my-repository"
}
```
### Alternative: Project Discovery with Autostart
By default, discovered containers appear in the dashboard but developers must
manually start them. To have them start automatically, enable autostart:
```terraform
resource "docker_container" "workspace" {
count = data.coder_workspace.me.start_count
image = "codercom/oss-dogfood:latest"
env = [
# Project discovery is enabled by default, but autostart is not.
# Enable autostart to automatically build and start discovered containers:
"CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=true",
# ... Other environment variables.
]
# ... Other container configuration.
}
```
With autostart enabled:
- Discovered containers automatically build and start during workspace
initialization
- The `coder_devcontainer` resource is not required
- Developers can work with multiple projects seamlessly
> [!NOTE]
>
> When using project discovery, you still need to install the devcontainers CLI
> using the module or in your base image.
## Example Template
The [Docker (Dev Containers)](https://github.com/coder/coder/tree/main/examples/templates/docker-devcontainer)
starter template demonstrates Dev Containers integration using Docker-in-Docker.
It includes the `devcontainers-cli` module, `git-clone` module, and the
`coder_devcontainer` resource.
## Next Steps
- [Dev Containers Integration](../../../user-guides/devcontainers/index.md)
- [Customizing Dev Containers](../../../user-guides/devcontainers/customizing-dev-containers.md)
- [Working with Dev Containers](../../../user-guides/devcontainers/working-with-dev-containers.md)
- [Troubleshooting Dev Containers](../../../user-guides/devcontainers/troubleshooting-dev-containers.md)
@@ -1,280 +1,14 @@
# Configure a template for Dev Containers
# Dev Containers
To enable Dev Containers in workspaces, configure your template with the Dev Containers
modules and configurations outlined in this doc.
Dev containers extend your template with containerized development environments,
allowing developers to work in consistent, reproducible setups defined by
`devcontainer.json` files.
> [!NOTE]
>
> Dev Containers require a **Linux or macOS workspace**. Windows is not supported.
Coder's Dev Containers Integration uses the standard `@devcontainers/cli` and
Docker to run containers inside workspaces.
## Configuration Modes
For setup instructions, see
[Dev Containers Integration](../../integrations/devcontainers/integration.md).
There are two approaches to configuring Dev Containers in Coder:
### Manual Configuration
Use the [`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer) Terraform resource to explicitly define which Dev
Containers should be started in your workspace. This approach provides:
- Predictable behavior and explicit control
- Clear template configuration
- Easier troubleshooting
- Better for production environments
This is the recommended approach for most use cases.
### Project Discovery
Enable automatic discovery of Dev Containers in Git repositories. Project discovery automatically scans Git repositories for `.devcontainer/devcontainer.json` or `.devcontainer.json` files and surfaces them in the Coder UI. See the [Environment Variables](#environment-variables) section for detailed configuration options.
## Install the Dev Containers CLI
Use the
[devcontainers-cli](https://registry.coder.com/modules/devcontainers-cli) module
to ensure the `@devcontainers/cli` is installed in your workspace:
```terraform
module "devcontainers-cli" {
count = data.coder_workspace.me.start_count
source = "dev.registry.coder.com/modules/devcontainers-cli/coder"
agent_id = coder_agent.dev.id
}
```
Alternatively, install the devcontainer CLI manually in your base image.
## Configure Automatic Dev Container Startup
The
[`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer)
resource automatically starts a Dev Container in your workspace, ensuring it's
ready when you access the workspace:
```terraform
resource "coder_devcontainer" "my-repository" {
count = data.coder_workspace.me.start_count
agent_id = coder_agent.dev.id
workspace_folder = "/home/coder/my-repository"
}
```
> [!NOTE]
>
> The `workspace_folder` attribute must specify the location of the dev
> container's workspace and should point to a valid project folder containing a
> `devcontainer.json` file.
<!-- nolint:MD028/no-blanks-blockquote -->
> [!TIP]
>
> Consider using the [`git-clone`](https://registry.coder.com/modules/git-clone)
> module to ensure your repository is cloned into the workspace folder and ready
> for automatic startup.
## Enable Dev Containers Integration
Dev Containers integration is **enabled by default** in Coder 2.24.0 and later.
You don't need to set any environment variables unless you want to change the
default behavior.
If you need to explicitly disable Dev Containers, set the
`CODER_AGENT_DEVCONTAINERS_ENABLE` environment variable to `false`:
```terraform
resource "docker_container" "workspace" {
count = data.coder_workspace.me.start_count
image = "codercom/oss-dogfood:latest"
env = [
"CODER_AGENT_DEVCONTAINERS_ENABLE=false", # Explicitly disable
# ... Other environment variables.
]
# ... Other container configuration.
}
```
See the [Environment Variables](#environment-variables) section below for more
details on available configuration options.
## Environment Variables
The following environment variables control Dev Container behavior in your
workspace. Both `CODER_AGENT_DEVCONTAINERS_ENABLE` and
`CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE` are **enabled by default**,
so you typically don't need to set them unless you want to explicitly disable
the feature.
### CODER_AGENT_DEVCONTAINERS_ENABLE
**Default: `true`** • **Added in: v2.24.0**
Enables the Dev Containers integration in the Coder agent.
The Dev Containers feature is enabled by default. You can explicitly disable it
by setting this to `false`.
### CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE
**Default: `true`** • **Added in: v2.25.0**
Enables automatic discovery of Dev Containers in Git repositories.
When enabled, the agent will:
- Scan the agent directory for Git repositories
- Look for `.devcontainer/devcontainer.json` or `.devcontainer.json` files
- Surface discovered Dev Containers automatically in the Coder UI
- Respect `.gitignore` patterns during discovery
You can disable automatic discovery by setting this to `false` if you prefer to
use only the `coder_devcontainer` resource for explicit configuration.
### CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE
**Default: `false`** • **Added in: v2.25.0**
Automatically starts Dev Containers discovered via project discovery.
When enabled, discovered Dev Containers will be automatically built and started
during workspace initialization. This only applies to Dev Containers found via
project discovery. Dev Containers defined with the `coder_devcontainer` resource
always auto-start regardless of this setting.
## Per-Container Customizations
Individual Dev Containers can be customized using the `customizations.coder` block
in your `devcontainer.json` file. These customizations allow you to control
container-specific behavior without modifying your template.
### Ignore Specific Containers
Use the `ignore` option to hide a Dev Container from Coder completely:
```json
{
"name": "My Dev Container",
"image": "mcr.microsoft.com/devcontainers/base:ubuntu",
"customizations": {
"coder": {
"ignore": true
}
}
}
```
When `ignore` is set to `true`:
- The Dev Container won't appear in the Coder UI
- Coder won't manage or monitor the container
This is useful when you have Dev Containers in your repository that you don't
want Coder to manage.
### Per-Container Auto-Start
Control whether individual Dev Containers should auto-start using the
`autoStart` option:
```json
{
"name": "My Dev Container",
"image": "mcr.microsoft.com/devcontainers/base:ubuntu",
"customizations": {
"coder": {
"autoStart": true
}
}
}
```
**Important**: The `autoStart` option only applies when global auto-start is
enabled via `CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=true`. If
the global setting is disabled, containers won't auto-start regardless of this
setting.
When `autoStart` is set to `true`:
- The Dev Container automatically builds and starts during workspace
initialization
- Works on a per-container basis (you can enable it for some containers but not
others)
When `autoStart` is set to `false` or omitted:
- The Dev Container is discovered and shown in the UI
- Users must manually start it via the UI
## Complete Template Example
Here's a simplified template example that uses Dev Containers with manual
configuration:
```terraform
terraform {
required_providers {
coder = { source = "coder/coder" }
docker = { source = "kreuzwerker/docker" }
}
}
provider "coder" {}
data "coder_workspace" "me" {}
data "coder_workspace_owner" "me" {}
resource "coder_agent" "dev" {
arch = "amd64"
os = "linux"
startup_script_behavior = "blocking"
startup_script = "sudo service docker start"
shutdown_script = "sudo service docker stop"
# ...
}
module "devcontainers-cli" {
count = data.coder_workspace.me.start_count
source = "dev.registry.coder.com/modules/devcontainers-cli/coder"
agent_id = coder_agent.dev.id
}
resource "coder_devcontainer" "my-repository" {
count = data.coder_workspace.me.start_count
agent_id = coder_agent.dev.id
workspace_folder = "/home/coder/my-repository"
}
```
### Alternative: Project Discovery Mode
You can enable automatic starting of discovered Dev Containers:
```terraform
resource "docker_container" "workspace" {
count = data.coder_workspace.me.start_count
image = "codercom/oss-dogfood:latest"
env = [
# Project discovery is enabled by default, but autostart is not.
# Enable autostart to automatically build and start discovered containers:
"CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=true",
# ... Other environment variables.
]
# ... Other container configuration.
}
```
With this configuration:
- Project discovery is enabled (default behavior)
- Discovered containers are automatically started (via the env var)
- The `coder_devcontainer` resource is **not** required
- Developers can work with multiple projects seamlessly
> [!NOTE]
>
> When using project discovery, you still need to install the devcontainers CLI
> using the module or in your base image.
## Next Steps
- [Dev Containers Integration](../../../user-guides/devcontainers/index.md)
- [Working with Dev Containers](../../../user-guides/devcontainers/working-with-dev-containers.md)
- [Troubleshooting Dev Containers](../../../user-guides/devcontainers/troubleshooting-dev-containers.md)
For an alternative approach that doesn't require Docker, see
[Envbuilder](../../integrations/devcontainers/envbuilder/index.md).
@@ -48,11 +48,10 @@ needs of different teams.
- [Image management](./managing-templates/image-management.md): Learn how to
create and publish images for use within Coder workspaces & templates.
- [Dev Container support](./managing-templates/devcontainers/index.md): Enable
dev containers to allow teams to bring their own tools into Coder workspaces.
- [Early Access Dev Containers](../../user-guides/devcontainers/index.md): Try our
new direct devcontainers integration (distinct from Envbuilder-based
approach).
- [Dev Containers integration](../integrations/devcontainers/integration.md): Enable
native dev containers support using `@devcontainers/cli` and Docker.
- [Envbuilder](../integrations/devcontainers/envbuilder/index.md): Alternative approach
for environments without Docker access.
- [Template hardening](./extending-templates/resource-persistence.md#-bulletproofing):
Configure your template to prevent certain resources from being destroyed
(e.g. user disks).
@@ -1,122 +0,0 @@
# Dev containers
A Development Container is an
[open-source specification](https://containers.dev/implementors/spec/) for
defining containerized development environments which are also called
development containers (dev containers).
Dev containers provide developers with increased autonomy and control over their
Coder cloud development environments.
By using dev containers, developers can customize their workspaces with tools
pre-approved by platform teams in registries like
[JFrog Artifactory](../../../integrations/jfrog-artifactory.md). This simplifies
workflows, reduces the need for tickets and approvals, and promotes greater
independence for developers.
## Prerequisites
An administrator should construct or choose a base image and create a template
that includes a `devcontainer_builder` image before a developer team configures
dev containers.
## Benefits of devcontainers
There are several benefits to adding a dev container-compatible template to
Coder:
- Reliability through standardization
- Scalability for growing teams
- Improved security
- Performance efficiency
- Cost Optimization
### Reliability through standardization
Use dev containers to empower development teams to personalize their own
environments while maintaining consistency and security through an approved and
hardened base image.
Standardized environments ensure uniform behavior across machines and team
members, eliminating "it works on my machine" issues and creating a stable
foundation for development and testing. Containerized setups reduce dependency
conflicts and misconfigurations, enhancing build stability.
### Scalability for growing teams
Dev containers allow organizations to handle multiple projects and teams
efficiently.
You can leverage platforms like Kubernetes to allocate resources on demand,
optimizing costs and ensuring fair distribution of quotas. Developer teams can
use efficient custom images and independently configure the contents of their
version-controlled dev containers.
This approach allows organizations to scale seamlessly, reducing the maintenance
burden on the administrators that support diverse projects while allowing
development teams to maintain their own images and onboard new users quickly.
### Improved security
Since Coder and Envbuilder run on your own infrastructure, you can use firewalls
and cluster-level policies to ensure Envbuilder only downloads packages from
your secure registry powered by JFrog Artifactory or Sonatype Nexus.
Additionally, Envbuilder can be configured to push the full image back to your
registry for additional security scanning.
This means that Coder admins can require hardened base images and packages,
while still allowing developer self-service.
Envbuilder runs inside a small container image but does not require a Docker
daemon in order to build a dev container. This is useful in environments where
you may not have access to a Docker socket for security reasons, but still need
to work with a container.
### Performance efficiency
Create a unique image for each project to reduce the dependency size of any
given project.
Envbuilder has various caching modes to ensure workspaces start as fast as
possible, such as layer caching and even full image caching and fetching via the
[Envbuilder Terraform provider](https://registry.terraform.io/providers/coder/envbuilder/latest/docs).
### Cost optimization
By creating unique images per-project, you remove unnecessary dependencies and
reduce the workspace size and resource consumption of any given project. Full
image caching ensures optimal start and stop times.
## When to use a dev container
Dev containers are a good fit for developer teams who are familiar with Docker
and are already using containerized development environments. If you have a
large number of projects with different toolchains, dependencies, or that depend
on a particular Linux distribution, dev containers make it easier to quickly
switch between projects.
They may also be a great fit for more restricted environments where you may not
have access to a Docker daemon since it doesn't need one to work.
## Devcontainer Features
[Dev container Features](https://containers.dev/implementors/features/) allow
owners of a project to specify self-contained units of code and runtime
configuration that can be composed together on top of an existing base image.
This is a good place to install project-specific tools, such as
language-specific runtimes and compilers.
## Coder Envbuilder
[Envbuilder](https://github.com/coder/envbuilder/) is an open-source project
maintained by Coder that runs dev containers via Coder templates and your
underlying infrastructure. Envbuilder can run on Docker or Kubernetes.
It is independently packaged and versioned from the centralized Coder
open-source project. This means that Envbuilder can be used with Coder, but it
is not required. It also means that dev container builds can scale independently
of the Coder control plane and even run within a CI/CD pipeline.
## Next steps
- [Add a dev container template](./add-devcontainer.md)
@@ -0,0 +1,14 @@
# Envbuilder
Envbuilder shifts environment definition from template administrators to
developers. Instead of baking tools into template images, developers define
their environments via `devcontainer.json` files in their repositories.
Envbuilder transforms the workspace image itself from a dev container
configuration, without requiring a Docker daemon.
For setup instructions, see
[Envbuilder documentation](../../integrations/devcontainers/envbuilder/index.md).
For an alternative that uses Docker inside workspaces, see
[Dev Containers Integration](../../integrations/devcontainers/integration.md).
@@ -70,4 +70,5 @@ specific tooling for their projects. The [Dev Container](https://containers.dev)
specification allows developers to define their project's dependencies within a
`devcontainer.json` in their Git repository.
- [Learn how to integrate Dev Containers with Coder](./devcontainers/index.md)
- [Configure a template for Dev Containers](../../integrations/devcontainers/integration.md) (recommended)
- [Learn about Envbuilder](../../integrations/devcontainers/envbuilder/index.md) (alternative for environments without Docker)
@@ -96,5 +96,6 @@ coder templates delete <template-name>
## Next steps
- [Image management](./image-management.md)
- [Devcontainer templates](./devcontainers/index.md)
- [Dev Containers integration](../../integrations/devcontainers/integration.md) (recommended)
- [Envbuilder](../../integrations/devcontainers/envbuilder/index.md) (alternative for environments without Docker)
- [Change management](./change-management.md)
@@ -60,3 +60,65 @@ needs.
For configuration options and details, see [Data Retention](./setup.md#data-retention)
in the AI Bridge setup guide.
## Tracing
AI Bridge supports tracing via [OpenTelemetry](https://opentelemetry.io/),
providing visibility into request processing, upstream API calls, and MCP server
interactions.
### Enabling Tracing
AI Bridge tracing is enabled when tracing is enabled for the Coder server.
To enable tracing, set the `CODER_TRACE_ENABLE` environment variable or pass the
[--trace](https://coder.com/docs/reference/cli/server#--trace) CLI flag:
```sh
export CODER_TRACE_ENABLE=true
```
```sh
coder server --trace
```
### What is Traced
AI Bridge creates spans for the following operations:
| Span Name | Description |
|---------------------------------------------|------------------------------------------------------|
| `CachedBridgePool.Acquire` | Acquiring a request bridge instance from the pool |
| `Intercept` | Top-level span for processing an intercepted request |
| `Intercept.CreateInterceptor` | Creating the request interceptor |
| `Intercept.ProcessRequest` | Processing the request through the bridge |
| `Intercept.ProcessRequest.Upstream` | Forwarding the request to the upstream AI provider |
| `Intercept.ProcessRequest.ToolCall` | Executing a tool call requested by the AI model |
| `Intercept.RecordInterception`              | Recording the creation of the interception record    |
| `Intercept.RecordPromptUsage` | Recording prompt/message data |
| `Intercept.RecordTokenUsage` | Recording token consumption |
| `Intercept.RecordToolUsage` | Recording tool/function calls |
| `Intercept.RecordInterceptionEnded` | Recording the interception as completed |
| `ServerProxyManager.Init` | Initializing MCP server proxy connections |
| `StreamableHTTPServerProxy.Init` | Setting up HTTP-based MCP server proxies |
| `StreamableHTTPServerProxy.Init.fetchTools` | Fetching available tools from MCP servers |
Example trace of an interception using Jaeger backend:
![Trace of interception](../../images/aibridge/jaeger_interception_trace.png)
### Capturing Logs in Traces
> **Note:** Enabling log capture may generate a large volume of trace events.
To include log messages as trace events, enable trace log capture
by setting the `CODER_TRACE_LOGS` environment variable or using the
[--trace-logs](https://coder.com/docs/reference/cli/server#--trace-logs) flag:
```sh
export CODER_TRACE_ENABLE=true
export CODER_TRACE_LOGS=true
```
```sh
coder server --trace --trace-logs
```
@@ -129,14 +129,13 @@ We support two release channels: mainline and stable - read the
- **Mainline** Coder release:
- **Chart Registry**
<!-- autoversion(mainline): "--version [version]" -->
```shell
helm install coder coder-v2/coder \
--namespace coder \
--values values.yaml \
--version 2.29.0
--version 2.29.1
```
- **OCI Registry**
@@ -147,7 +146,7 @@ We support two release channels: mainline and stable - read the
helm install coder oci://ghcr.io/coder/chart/coder \
--namespace coder \
--values values.yaml \
--version 2.29.0
--version 2.29.1
```
- **Stable** Coder release:
@@ -160,7 +159,7 @@ We support two release channels: mainline and stable - read the
helm install coder coder-v2/coder \
--namespace coder \
--values values.yaml \
--version 2.28.5
--version 2.28.6
```
- **OCI Registry**
@@ -171,7 +170,7 @@ We support two release channels: mainline and stable - read the
helm install coder oci://ghcr.io/coder/chart/coder \
--namespace coder \
--values values.yaml \
--version 2.28.5
--version 2.28.6
```
You can watch Coder start up by running `kubectl get pods -n coder`. Once Coder
@@ -134,8 +134,8 @@ kubectl create secret generic coder-db-url -n coder \
1. Select a Coder version:
- **Mainline**: `2.29.0`
- **Stable**: `2.28.5`
- **Mainline**: `2.29.1`
- **Stable**: `2.28.6`
Learn more about release channels in the [Releases documentation](./releases/index.md).
@@ -0,0 +1,69 @@
# Upgrading from ESR 2.24 to 2.29
## Guide Overview
Coder provides Extended Support Releases (ESR) biannually. This guide walks through upgrading from the initial Coder 2.24 ESR to our new 2.29 ESR. It summarizes key changes, highlights breaking updates, and provides a recommended upgrade process.
Read more about the ESR release process, and how Coder supports it, [here](./index.md#extended-support-release).
## What's New in Coder 2.29
### Coder Tasks
Coder Tasks is an interface for running and interacting with terminal-based coding agents like Claude Code and Codex, powered by Coder workspaces. Beginning in Coder 2.24, Tasks were introduced as an experimental feature that allowed administrators and developers to run long-lived or automated operations from templates. Over subsequent releases, Tasks matured significantly through UI refinement, improved reliability, and underlying task-status improvements in the server and database layers. By 2.29, Tasks were formally promoted to general availability, with full CLI support, a task-specific UI, and consistent visibility of task states across the dashboard. This transition establishes Tasks as a stable automation and job-execution primitive within Coder, particularly suited for long-running background operations like bug fixes, documentation generation, PR reviews, and testing/QA. For more information, read our documentation [here](https://coder.com/docs/ai-coder/tasks).
### AI Bridge
AI Bridge was introduced in 2.26, and is a smart gateway that acts as an intermediary between users' coding agents/IDEs and AI providers like OpenAI and Anthropic. It solves three key problems:
- Centralized authentication/authorization management (users authenticate via Coder instead of managing individual API tokens)
- Auditing and attribution of all AI interactions (whether autonomous or human-initiated)
- Secure communication between the Coder control plane and upstream AI APIs
This is a Premium/Beta feature that intercepts AI traffic to record prompts, token usage, and tool invocations. For more information, read our documentation [here](https://coder.com/docs/ai-coder/ai-bridge).
### Agent Boundaries
Agent Boundaries was introduced in 2.27 and is currently in Early Access. Agent Boundaries are process-level firewalls in Coder that restrict and audit what autonomous programs (like AI agents) can access and do within a workspace. They provide network policy enforcement—blocking specific domains and HTTP verbs to prevent data exfiltration—and write logs to the workspace for auditability. Boundaries support any terminal-based agent, including custom ones, and can be easily configured through existing Coder modules like the Claude Code module. For more information, read our documentation [here](https://coder.com/docs/ai-coder/agent-boundary).
### Performance Enhancements
Performance, particularly at scale, improved across nearly every system layer. Database queries were optimized, several new indexes were added, and expensive migrations—such as migration 371—were reworked to complete faster on large deployments. Caching was introduced for Terraform installer files and workspace/agent lookups, reducing repeated calls. Notification performance improved through more efficient connection pooling. These changes collectively enable deployments with hundreds or thousands of workspaces to operate more smoothly and with lower resource contention.
### Server and API Updates
Core server capabilities expanded significantly across the releases. Prebuild workflows gained timestamp-driven invalidation via `last_invalidated_at`, expired API keys began being automatically purged, and new API key-scope documentation was introduced to help administrators understand authorization boundaries. New API endpoints were added, including the ability to modify a task prompt or look up tasks by name. Template developers benefited from new Terraform directory-persistence capabilities (opt-in on a per-template basis) and improved `protobuf` configuration metadata.
### CLI Enhancements
The CLI gained substantial improvements between the two versions. Most notably, beginning in 2.29, Coder's CLI now stores session tokens in the operating system keyring by default on macOS and Windows, enhancing credential security and reducing exposure from plaintext token storage. Users who rely on directly accessing the token file can opt out using `--use-keyring=false`. The CLI also introduced cross-platform support for keyring storage, gained support for GA Task commands, and integrated experimental functionality for the new Agent Socket API.
## Changes to be Aware of
The following changes, introduced after 2.24.X, might break workflows or require manual effort to address:
| Initial State (2.24 & before) | New State (2.25–2.29) | Change Required |
|--------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Workspace updates occur in place without stopping | Workspace updates now forcibly stop workspaces before updating | Expect downtime during updates; update any scripted update flows that rely on seamless updates. See [`coder update` CLI reference](https://coder.com/docs/reference/cli/update). |
| Connection events (SSH, port-forward, browser) logged in Audit Log | Connection events moved to Connection Log; historical entries older than 90 days pruned | Update compliance, audit, or ingestion pipelines to use the new [Connection Log](https://coder.com/docs/admin/monitoring/connection-logs) instead of [Audit Logs](https://coder.com/docs/admin/security/audit-logs) for connection events. |
| CLI session tokens stored in plaintext file | CLI session tokens stored in OS keyring (macOS/Windows) | Update scripts, automation, or SSO flows that read/modify the token file, or use `--use-keyring=false`. See [Sessions & API Tokens](https://coder.com/docs/admin/users/sessions-tokens) and [`coder login` CLI reference](https://coder.com/docs/reference/cli/login). |
| `task_app_id` field available in `codersdk.WorkspaceBuild` | `task_app_id` removed from `codersdk.WorkspaceBuild` | Migrate integrations to use `Task.WorkspaceAppID` instead. See [REST API reference](https://coder.com/docs/reference/api). |
| OIDC session handling more permissive | Sessions expire when access tokens expire (typically 1 hour) unless refresh tokens are configured | Add `offline_access` to `CODER_OIDC_SCOPES` (e.g., `openid,profile,email,offline_access`); Google requires `CODER_OIDC_AUTH_URL_PARAMS='{"access_type":"offline","prompt":"consent"}'`. See [OIDC Refresh Tokens](https://coder.com/docs/admin/users/oidc-auth/refresh-tokens). |
| Devcontainer agent selection is random when multiple agents exist | Devcontainer agent selection requires explicit choice | Update automated workflows to explicitly specify agent selection. See [Dev Containers Integration](https://coder.com/docs/user-guides/devcontainers) and [Configure a template for dev containers](https://coder.com/docs/admin/templates/extending-templates/devcontainers). |
| Terraform execution uses clean directories per build | Terraform workflows use persistent or cached directories when enabled | Update templates that rely on clean execution directories or per-build isolation. See [External Provisioners](https://coder.com/docs/admin/provisioners) and [Template Dependencies](https://coder.com/docs/admin/templates/managing-templates/dependencies). |
| Agent and task lifecycle behaviors more permissive | Agent and task lifecycle behaviors enforce stricter permission checks, readiness gating, and ordering | Review workflows for compatibility with stricter readiness and permission requirements. See [Workspace Lifecycle](https://coder.com/docs/user-guides/workspace-lifecycle) and [Extending Templates](https://coder.com/docs/admin/templates/extending-templates). |
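For the OIDC change above, a minimal sketch of the server environment follows; the values are illustrative, so adjust them for your identity provider:

```shell
# Request refresh tokens so sessions outlive the access token (typically 1 hour)
export CODER_OIDC_SCOPES="openid,profile,email,offline_access"

# Google additionally requires offline access and a consent prompt:
export CODER_OIDC_AUTH_URL_PARAMS='{"access_type":"offline","prompt":"consent"}'
```

Set these in the environment of the `coder server` process (for example, in your Helm values or systemd unit) before upgrading, so users are not logged out when access tokens expire.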
## Upgrading
The Coder team recommends the following when performing the upgrade:
- **Perform the upgrade in a staging environment first:** The cumulative changes between 2.24 and 2.29 introduce new subsystems and lifecycle behaviors, so validating templates, authentication flows, and workspace operations in staging helps avoid production issues
- **Audit scripts or tools that rely on the CLI token file:** Since 2.29 uses the OS keyring for session tokens on macOS and Windows, update any tooling that reads the plaintext token file or plan to use `--use-keyring=false`
- **Review templates using devcontainers or Terraform:** Explicit agent selection, optional persistent/cached Terraform directories, and updated metadata handling mean template authors should retest builds and startup behavior
- **Check and update OIDC provider configuration:** Stricter refresh-token requirements in later releases can cause unexpected logouts or failed CLI authentication if providers are not configured according to updated docs
- **Update integrations referencing deprecated API fields:** Code relying on `WorkspaceBuild.task_app_id` must migrate to `Task.WorkspaceAppID`, and any custom integrations built against 2.24 APIs should be validated against the new SDK
- **Communicate audit-logging changes to security/compliance teams:** From 2.25 onward, connection events moved into the Connection Log, and older audit entries may be pruned, which can affect SIEM pipelines or compliance workflows
- **Validate workspace lifecycle automation:** Since updates now require stopping the workspace first, confirm that automated update jobs, scripts, or scheduled tasks still function correctly in this new model
- **Retest agent and task automation built on early experimental features:** Updates to agent readiness, permission checks, and lifecycle ordering may affect workflows developed against 2.24's looser behaviors
- **Monitor workspace, template, and Terraform build performance:** New caching, indexes, and DB optimizations may change build times; observing performance post-upgrade helps catch regressions early
- **Prepare user communications around Tasks and UI changes:** Tasks are now GA and more visible in the dashboard, and many UI improvements will be new to users coming from 2.24, so a brief internal announcement can smooth the transition
@@ -9,12 +9,14 @@ deployment.
## Release channels
We support two release channels:
[mainline](https://github.com/coder/coder/releases/tag/v2.29.0) for the bleeding
edge version of Coder and
[stable](https://github.com/coder/coder/releases/latest) for those with lower
tolerance for fault. We field our mainline releases publicly for one month
before promoting them to stable. The version prior to stable receives patches
We support four release channels:
- **Mainline:** The bleeding edge version of Coder
- **Stable:** N-1 of the mainline release
- **Security Support:** N-2 of the mainline release
- **Extended Support Release:** Biannually released version of Coder
We field our mainline releases publicly for one month before promoting them to stable. The security support version (N-2 from mainline) receives patches
only for security issues or CVEs.
### Mainline releases
@@ -37,6 +39,16 @@ only for security issues or CVEs.
For more information on feature rollout, see our
[feature stages documentation](../releases/feature-stages.md).
### Extended Support Release
- Designed for organizations that prioritize long-term stability
- Receives only critical bugfixes and security patches
- Ideal for regulated environments or large deployments with strict upgrade cycles
ESR releases are updated with critical bugfixes and security patches and are available to paying customers. This extended support model provides predictable, long-term maintenance for organizations that require enhanced stability. Because ESR forgoes new features in favor of maintenance and stability, it is best suited for teams with strict upgrade constraints. The latest ESR version is [Coder 2.29](https://github.com/coder/coder/releases/tag/v2.29.0).
For more information, see the [Coder ESR announcement](https://coder.com/blog/esr) or our [ESR Upgrade Guide](./esr-2.24-2.29-upgrade.md).
## Installing stable
When installing Coder, we generally advise specifying the desired version from
@@ -55,15 +67,15 @@ pages.
## Release schedule
<!-- Autogenerated release calendar from scripts/update-release-calendar.sh -->
<!-- RELEASE_CALENDAR_START -->
| Release name | Release Date | Status | Latest Release |
|------------------------------------------------|--------------------|------------------|----------------------------------------------------------------|
| [2.24](https://coder.com/changelog/coder-2-24) | July 01, 2025 | Not Supported | [v2.24.4](https://github.com/coder/coder/releases/tag/v2.24.4) |
| [2.25](https://coder.com/changelog/coder-2-25) | August 05, 2025 | Not Supported | [v2.25.3](https://github.com/coder/coder/releases/tag/v2.25.3) |
| [2.26](https://coder.com/changelog/coder-2-26) | September 03, 2025 | Not Supported | [v2.26.6](https://github.com/coder/coder/releases/tag/v2.26.6) |
| [2.27](https://coder.com/changelog/coder-2-27) | October 02, 2025 | Security Support | [v2.27.8](https://github.com/coder/coder/releases/tag/v2.27.8) |
| [2.28](https://coder.com/changelog/coder-2-28) | November 04, 2025 | Stable | [v2.28.5](https://github.com/coder/coder/releases/tag/v2.28.5) |
| [2.29](https://coder.com/changelog/coder-2-29) | December 02, 2025 | Mainline | [v2.29.0](https://github.com/coder/coder/releases/tag/v2.29.0) |
| 2.30 | | Not Released | N/A |
| Release name | Release Date | Status | Latest Release |
|------------------------------------------------|--------------------|--------------------------|----------------------------------------------------------------|
| [2.24](https://coder.com/changelog/coder-2-24) | July 01, 2025 | Extended Support Release | [v2.24.4](https://github.com/coder/coder/releases/tag/v2.24.4) |
| [2.25](https://coder.com/changelog/coder-2-25) | August 05, 2025 | Not Supported | [v2.25.3](https://github.com/coder/coder/releases/tag/v2.25.3) |
| [2.26](https://coder.com/changelog/coder-2-26) | September 03, 2025 | Not Supported | [v2.26.6](https://github.com/coder/coder/releases/tag/v2.26.6) |
| [2.27](https://coder.com/changelog/coder-2-27) | October 02, 2025 | Security Support | [v2.27.9](https://github.com/coder/coder/releases/tag/v2.27.9) |
| [2.28](https://coder.com/changelog/coder-2-28) | November 04, 2025 | Stable | [v2.28.6](https://github.com/coder/coder/releases/tag/v2.28.6) |
| [2.29](https://coder.com/changelog/coder-2-29) | December 02, 2025 | Mainline + ESR | [v2.29.1](https://github.com/coder/coder/releases/tag/v2.29.1) |
| 2.30 | | Not Released | N/A |
<!-- RELEASE_CALENDAR_END -->
> [!TIP]
@@ -75,6 +87,6 @@ pages.
>
> The `preview` image is not intended for production use.
### A note about January releases
### January Releases
As of January, 2025 we skip the January release each year because most of our engineering team is out for the December holiday period.
Releases on the first Tuesday of January **are not guaranteed to occur** because most of our team is out for the December holiday period. That said, an ad-hoc release might still occur. We advise against relying on a January release; reach out to Coder directly closer to the release date to determine whether one will occur.
@@ -187,6 +187,11 @@
"title": "Feature stages",
"description": "Information about pre-GA stages.",
"path": "./install/releases/feature-stages.md"
},
{
"title": "Upgrading from ESR 2.24 to 2.29",
"description": "Upgrade Guide for ESR Releases",
"path": "./install/releases/esr-2.24-2.29-upgrade.md"
}
]
}
@@ -316,7 +321,7 @@
"icon_path": "./images/icons/circle-dot.svg"
},
{
"title": "Dev Containers Integration",
"title": "Dev Containers",
"description": "Run containerized development environments in your Coder workspace using the dev containers specification.",
"path": "./user-guides/devcontainers/index.md",
"icon_path": "./images/icons/container.svg",
@@ -326,6 +331,11 @@
"description": "Access dev containers via SSH, your IDE, or web terminal.",
"path": "./user-guides/devcontainers/working-with-dev-containers.md"
},
{
"title": "Customizing dev containers",
"description": "Configure custom agent names, apps, and display options in devcontainer.json.",
"path": "./user-guides/devcontainers/customizing-dev-containers.md"
},
{
"title": "Troubleshooting dev containers",
"description": "Diagnose and resolve common issues with dev containers in your Coder workspace.",
@@ -522,26 +532,9 @@
"path": "./admin/templates/managing-templates/change-management.md"
},
{
"title": "Dev containers",
"description": "Learn about using development containers in templates",
"path": "./admin/templates/managing-templates/devcontainers/index.md",
"children": [
{
"title": "Add a dev container template",
"description": "How to add a dev container template to Coder",
"path": "./admin/templates/managing-templates/devcontainers/add-devcontainer.md"
},
{
"title": "Dev container security and caching",
"description": "Configure dev container authentication and caching",
"path": "./admin/templates/managing-templates/devcontainers/devcontainer-security-caching.md"
},
{
"title": "Dev container releases and known issues",
"description": "Dev container releases and known issues",
"path": "./admin/templates/managing-templates/devcontainers/devcontainer-releases-known-issues.md"
}
]
"title": "Envbuilder",
"description": "Shift environment definition to repositories",
"path": "./admin/templates/managing-templates/envbuilder.md"
},
{
"title": "Template Dependencies",
@@ -653,8 +646,8 @@
"path": "./admin/templates/extending-templates/provider-authentication.md"
},
{
"title": "Configure a template for dev containers",
"description": "How to use configure your template for dev containers",
"title": "Dev Containers",
"description": "Extend templates with containerized dev environments",
"path": "./admin/templates/extending-templates/devcontainers.md"
},
{
@@ -754,6 +747,40 @@
"title": "OAuth2 Provider",
"description": "Use Coder as an OAuth2 provider",
"path": "./admin/integrations/oauth2-provider.md"
},
{
"title": "Dev Containers",
"description": "Configure dev container support using Docker or Envbuilder",
"path": "./admin/integrations/devcontainers/index.md",
"children": [
{
"title": "Dev Containers Integration",
"description": "Configure native dev containers with Docker",
"path": "./admin/integrations/devcontainers/integration.md"
},
{
"title": "Envbuilder",
"description": "Build dev containers without Docker",
"path": "./admin/integrations/devcontainers/envbuilder/index.md",
"children": [
{
"title": "Add an Envbuilder template",
"description": "How to add an Envbuilder template",
"path": "./admin/integrations/devcontainers/envbuilder/add-envbuilder.md"
},
{
"title": "Security and caching",
"description": "Configure authentication and caching",
"path": "./admin/integrations/devcontainers/envbuilder/envbuilder-security-caching.md"
},
{
"title": "Releases and known issues",
"description": "Release channels and known issues",
"path": "./admin/integrations/devcontainers/envbuilder/envbuilder-releases-known-issues.md"
}
]
}
]
}
]
},
@@ -938,7 +965,7 @@
},
{
"title": "AI Bridge",
"description": "Centralized LLM and MCP proxy for platform teams",
"description": "AI Gateway for Enterprise Governance \u0026 Observability",
"path": "./ai-coder/ai-bridge/index.md",
"icon_path": "./images/icons/api.svg",
"state": ["premium", "beta"],
@@ -727,6 +727,93 @@ Status Code **200**
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Add new license
### Code samples
```shell
# Example request using curl
curl -X POST http://coder-server:8080/api/v2/licenses \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`POST /licenses`
> Body parameter
```json
{
"license": "string"
}
```
### Parameters
| Name | In | Type | Required | Description |
|--------|------|--------------------------------------------------------------------|----------|---------------------|
| `body` | body | [codersdk.AddLicenseRequest](schemas.md#codersdkaddlicenserequest) | true | Add license request |
### Example responses
> 201 Response
```json
{
"claims": {},
"id": 0,
"uploaded_at": "2019-08-24T14:15:22Z",
"uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f"
}
```
### Responses
| Status | Meaning | Description | Schema |
|--------|--------------------------------------------------------------|-------------|------------------------------------------------|
| 201 | [Created](https://tools.ietf.org/html/rfc7231#section-6.3.2) | Created | [codersdk.License](schemas.md#codersdklicense) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Update license entitlements
### Code samples
```shell
# Example request using curl
curl -X POST http://coder-server:8080/api/v2/licenses/refresh-entitlements \
-H 'Accept: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`POST /licenses/refresh-entitlements`
### Example responses
> 201 Response
```json
{
"detail": "string",
"message": "string",
"validations": [
{
"detail": "string",
"field": "string"
}
]
}
```
### Responses
| Status | Meaning | Description | Schema |
|--------|--------------------------------------------------------------|-------------|--------------------------------------------------|
| 201 | [Created](https://tools.ietf.org/html/rfc7231#section-6.3.2) | Created | [codersdk.Response](schemas.md#codersdkresponse) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Delete license
### Code samples
@@ -31,7 +31,7 @@ file: string
### Example responses
> 201 Response
> 200 Response
```json
{
@@ -41,9 +41,10 @@ file: string
### Responses
| Status | Meaning | Description | Schema |
|--------|--------------------------------------------------------------|-------------|--------------------------------------------------------------|
| 201 | [Created](https://tools.ietf.org/html/rfc7231#section-6.3.2) | Created | [codersdk.UploadResponse](schemas.md#codersdkuploadresponse) |
| Status | Meaning | Description | Schema |
|--------|--------------------------------------------------------------|------------------------------------|--------------------------------------------------------------|
| 200 | [OK](https://tools.ietf.org/html/rfc7231#section-6.3.1) | Returns existing file if duplicate | [codersdk.UploadResponse](schemas.md#codersdkuploadresponse) |
| 201 | [Created](https://tools.ietf.org/html/rfc7231#section-6.3.2) | Returns newly created file | [codersdk.UploadResponse](schemas.md#codersdkuploadresponse) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -233,6 +233,7 @@ curl -X GET http://coder-server:8080/api/v2/deployment/config \
"disable_owner_workspace_exec": true,
"disable_password_auth": true,
"disable_path_apps": true,
"disable_workspace_sharing": true,
"docs_url": {
"forceQuery": true,
"fragment": "string",
@@ -1,92 +1,5 @@
# Organizations
## Add new license
### Code samples
```shell
# Example request using curl
curl -X POST http://coder-server:8080/api/v2/licenses \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`POST /licenses`
> Body parameter
```json
{
"license": "string"
}
```
### Parameters
| Name | In | Type | Required | Description |
|--------|------|--------------------------------------------------------------------|----------|---------------------|
| `body` | body | [codersdk.AddLicenseRequest](schemas.md#codersdkaddlicenserequest) | true | Add license request |
### Example responses
> 201 Response
```json
{
"claims": {},
"id": 0,
"uploaded_at": "2019-08-24T14:15:22Z",
"uuid": "095be615-a8ad-4c33-8e9c-c7612fbf6c9f"
}
```
### Responses
| Status | Meaning | Description | Schema |
|--------|--------------------------------------------------------------|-------------|------------------------------------------------|
| 201 | [Created](https://tools.ietf.org/html/rfc7231#section-6.3.2) | Created | [codersdk.License](schemas.md#codersdklicense) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Update license entitlements
### Code samples
```shell
# Example request using curl
curl -X POST http://coder-server:8080/api/v2/licenses/refresh-entitlements \
-H 'Accept: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`POST /licenses/refresh-entitlements`
### Example responses
> 201 Response
```json
{
"detail": "string",
"message": "string",
"validations": [
{
"detail": "string",
"field": "string"
}
]
}
```
### Responses
| Status | Meaning | Description | Schema |
|--------|--------------------------------------------------------------|-------------|--------------------------------------------------|
| 201 | [Created](https://tools.ietf.org/html/rfc7231#section-6.3.2) | Created | [codersdk.Response](schemas.md#codersdkresponse) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get organizations
### Code samples
@@ -2917,6 +2917,7 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"disable_owner_workspace_exec": true,
"disable_password_auth": true,
"disable_path_apps": true,
"disable_workspace_sharing": true,
"docs_url": {
"forceQuery": true,
"fragment": "string",
@@ -3439,6 +3440,7 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"disable_owner_workspace_exec": true,
"disable_password_auth": true,
"disable_path_apps": true,
"disable_workspace_sharing": true,
"docs_url": {
"forceQuery": true,
"fragment": "string",
@@ -3793,6 +3795,7 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
| `disable_owner_workspace_exec` | boolean | false | | |
| `disable_password_auth` | boolean | false | | |
| `disable_path_apps` | boolean | false | | |
| `disable_workspace_sharing` | boolean | false | | |
| `docs_url` | [serpent.URL](#serpenturl) | false | | |
| `enable_authz_recording` | boolean | false | | |
| `enable_terraform_debug_mode` | boolean | false | | |
@@ -1115,6 +1115,16 @@ Disable workspace apps that are not served from subdomains. Path-based apps can
Remove the permission for the 'owner' role to have workspace execution on all workspaces. This prevents the 'owner' from ssh, apps, and terminal access based on the 'owner' role. They still have their user permissions to access their own workspaces.
### --disable-workspace-sharing
| | |
|-------------|-----------------------------------------------|
| Type | <code>bool</code> |
| Environment | <code>$CODER_DISABLE_WORKSPACE_SHARING</code> |
| YAML | <code>disableWorkspaceSharing</code> |
Disable workspace sharing (requires the "workspace-sharing" experiment to be enabled). Workspace ACL checking is disabled and only owners can have ssh, apps, and terminal access to workspaces. Access based on the 'owner' role is also allowed unless disabled via --disable-owner-workspace-access.
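Any of the three forms above enables the restriction; for example, via the environment before starting the server (a minimal sketch):

```shell
# Equivalent to passing --disable-workspace-sharing to `coder server`,
# or setting disableWorkspaceSharing: true in the YAML config
export CODER_DISABLE_WORKSPACE_SHARING=true
```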
### --session-duration
| | |
@@ -0,0 +1,311 @@
# Customizing dev containers
Coder supports custom configuration in your `devcontainer.json` file through the
`customizations.coder` block. These options let you control how Coder interacts
with your dev container without requiring template changes.
## Ignore a dev container
Use the `ignore` option to hide a dev container from Coder completely:
```json
{
"name": "My Dev Container",
"image": "mcr.microsoft.com/devcontainers/base:ubuntu",
"customizations": {
"coder": {
"ignore": true
}
}
}
```
When `ignore` is set to `true`:
- The dev container won't appear in the Coder UI
- Coder won't manage or monitor the container
This is useful for dev containers in your repository that you don't want Coder
to manage.
## Auto-start
Control whether your dev container should auto-start using the `autoStart`
option:
```json
{
"name": "My Dev Container",
"image": "mcr.microsoft.com/devcontainers/base:ubuntu",
"customizations": {
"coder": {
"autoStart": true
}
}
}
```
When `autoStart` is set to `true`, the dev container automatically builds and
starts during workspace initialization.
When `autoStart` is set to `false` or omitted, the dev container is discovered
and shown in the UI, but users must manually start it.
> [!NOTE]
>
> The `autoStart` option only takes effect when your template administrator has
> enabled [`CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE`](../../admin/integrations/devcontainers/integration.md#coder_agent_devcontainers_discovery_autostart_enable).
> If this setting is disabled at the template level, containers won't auto-start
> regardless of this option.
## Custom agent name
Each dev container gets an agent name derived from the workspace folder path by
default. You can set a custom name using the `name` option:
```json
{
"name": "My Dev Container",
"image": "mcr.microsoft.com/devcontainers/base:ubuntu",
"customizations": {
"coder": {
"name": "my-custom-agent"
}
}
}
```
The name must contain only lowercase letters, numbers, and hyphens. This name
appears in `coder ssh` commands and the dashboard (e.g.,
`coder ssh my-workspace.my-custom-agent`).
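A quick way to check a candidate name locally; the regular expression here is an assumption derived from the rule above (lowercase letters, numbers, and hyphens only):

```shell
# Reject names containing uppercase letters, underscores, or other characters
name="my-custom-agent"
if printf '%s' "$name" | grep -Eq '^[a-z0-9-]+$'; then
  echo "valid name"
else
  echo "invalid name"
fi
```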
## Display apps
Control which built-in Coder apps appear for your dev container using
`displayApps`:
![Dev container with all display apps disabled](../../images/user-guides/devcontainers/devcontainer-apps-bar.png)_Disable built-in apps to reduce clutter or guide developers toward preferred tools_
```json
{
"name": "My Dev Container",
"image": "mcr.microsoft.com/devcontainers/base:ubuntu",
"customizations": {
"coder": {
"displayApps": {
"web_terminal": true,
"ssh_helper": true,
"port_forwarding_helper": true,
"vscode": true,
"vscode_insiders": false
}
}
}
}
```
Available display apps:
| App | Description | Default |
|--------------------------|------------------------------|---------|
| `web_terminal` | Web-based terminal access | `true` |
| `ssh_helper` | SSH connection helper | `true` |
| `port_forwarding_helper` | Port forwarding interface | `true` |
| `vscode` | VS Code Desktop integration | `true` |
| `vscode_insiders` | VS Code Insiders integration | `false` |
## Custom apps
Define custom applications for your dev container using the `apps` array:
```json
{
"name": "My Dev Container",
"image": "mcr.microsoft.com/devcontainers/base:ubuntu",
"customizations": {
"coder": {
"apps": [
{
"slug": "zed",
"displayName": "Zed Editor",
"url": "zed://ssh/${localEnv:CODER_WORKSPACE_AGENT_NAME}.${localEnv:CODER_WORKSPACE_NAME}.${localEnv:CODER_WORKSPACE_OWNER_NAME}.coder${containerWorkspaceFolder}",
"external": true,
"icon": "/icon/zed.svg",
"order": 1
}
]
}
}
}
```
This example adds a Zed Editor button that opens the dev container directly in
the Zed desktop app via its SSH remote feature.
Each app supports the following properties:
| Property | Type | Description |
|---------------|---------|---------------------------------------------------------------|
| `slug` | string | Unique identifier for the app (required) |
| `displayName` | string | Human-readable name shown in the UI |
| `url` | string | URL to open (supports variable interpolation) |
| `command` | string | Command to run instead of opening a URL |
| `icon` | string | Path to an icon (e.g., `/icon/code.svg`) |
| `openIn` | string | `"tab"` or `"slim-window"` (default: `"slim-window"`) |
| `share` | string | `"owner"`, `"authenticated"`, `"organization"`, or `"public"` |
| `external` | boolean | Open as external URL (e.g., for desktop apps) |
| `group` | string | Group name for organizing apps in the UI |
| `order` | number | Sort order for display |
| `hidden` | boolean | Hide the app from the UI |
| `subdomain` | boolean | Use subdomain-based access |
| `healthCheck` | object | Health check configuration (see below) |
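Apps don't have to open a URL: a `command`-based app runs a command instead. Here's a minimal sketch combining `command`, `group`, and `hidden` from the table above — the slug, command, and group name are illustrative, not part of any real template:

```json
{
  "customizations": {
    "coder": {
      "apps": [
        {
          "slug": "tail-logs",
          "displayName": "Tail Logs",
          "command": "tail -f /tmp/app.log",
          "group": "Diagnostics",
          "hidden": false
        }
      ]
    }
  }
}
```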
### Health checks
Configure health checks to monitor app availability:
```json
{
"customizations": {
"coder": {
"apps": [
{
"slug": "web-server",
"displayName": "Web Server",
"url": "http://localhost:8080",
"healthCheck": {
"url": "http://localhost:8080/healthz",
"interval": 5,
"threshold": 2
}
}
]
}
}
}
```
Health check properties:
| Property | Type | Description |
|-------------|--------|-------------------------------------------------|
| `url` | string | URL to check for health status |
| `interval` | number | Seconds between health checks |
| `threshold` | number | Number of failures before marking app unhealthy |
## Variable interpolation
App URLs and other string values support variable interpolation for dynamic
configuration.
### Environment variables
Use `${localEnv:VAR_NAME}` to reference environment variables, with optional
default values:
```json
{
"customizations": {
"coder": {
"apps": [
{
"slug": "my-app",
"url": "http://${localEnv:HOST:127.0.0.1}:${localEnv:PORT:8080}"
}
]
}
}
}
```
### Coder-provided variables
Coder provides these environment variables automatically:
| Variable | Description |
|-------------------------------------|------------------------------------|
| `CODER_WORKSPACE_NAME` | Name of the workspace |
| `CODER_WORKSPACE_OWNER_NAME` | Username of the workspace owner |
| `CODER_WORKSPACE_AGENT_NAME` | Name of the dev container agent |
| `CODER_WORKSPACE_PARENT_AGENT_NAME` | Name of the parent workspace agent |
| `CODER_URL` | URL of the Coder deployment |
| `CONTAINER_ID` | Docker container ID |
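These variables can be combined in app URLs via `${localEnv:...}` interpolation. As a hedged sketch (the slug and URL path are illustrative; adjust to your deployment's actual routes), an external app linking back to the workspace page on the Coder deployment might look like:

```json
{
  "customizations": {
    "coder": {
      "apps": [
        {
          "slug": "workspace-page",
          "displayName": "Workspace Page",
          "url": "${localEnv:CODER_URL}/@${localEnv:CODER_WORKSPACE_OWNER_NAME}/${localEnv:CODER_WORKSPACE_NAME}",
          "external": true
        }
      ]
    }
  }
}
```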
### Dev container variables
Standard dev container variables are also available:
| Variable | Description |
|-------------------------------|--------------------------------------------|
| `${containerWorkspaceFolder}` | Workspace folder path inside the container |
| `${localWorkspaceFolder}` | Workspace folder path on the host |
### Session token
Use `$SESSION_TOKEN` in external app URLs to include the user's session token:
```json
{
"customizations": {
"coder": {
"apps": [
{
"slug": "custom-ide",
"displayName": "Custom IDE",
"url": "custom-ide://open?token=$SESSION_TOKEN&folder=${containerWorkspaceFolder}",
"external": true
}
]
}
}
}
```
## Feature options as environment variables
When your dev container uses features, Coder exposes feature options as
environment variables. The format is `FEATURE_<FEATURE_NAME>_OPTION_<OPTION_NAME>`.
For example, with this feature configuration:
```json
{
"features": {
"ghcr.io/coder/devcontainer-features/code-server:1": {
"port": 9090
}
}
}
```
Coder creates `FEATURE_CODE_SERVER_OPTION_PORT=9090`, which you can reference in
your apps:
```json
{
"features": {
"ghcr.io/coder/devcontainer-features/code-server:1": {
"port": 9090
}
},
"customizations": {
"coder": {
"apps": [
{
"slug": "code-server",
"displayName": "Code Server",
"url": "http://localhost:${localEnv:FEATURE_CODE_SERVER_OPTION_PORT:8080}",
"icon": "/icon/code.svg"
}
]
}
}
}
```
## Next steps
- [Working with dev containers](./working-with-dev-containers.md) — SSH, IDE
integration, and port forwarding
- [Troubleshooting dev containers](./troubleshooting-dev-containers.md) —
Diagnose common issues
@@ -1,87 +1,142 @@
# Dev Containers Integration
# Dev Containers
The Dev Containers integration enables seamless creation and management of Dev
Containers in Coder workspaces. This feature leverages the
[`@devcontainers/cli`](https://github.com/devcontainers/cli) and
[Docker](https://www.docker.com) to provide a streamlined development
experience.
[Dev containers](https://containers.dev/) define your development environment
as code using a `devcontainer.json` file. Coder's Dev Containers integration
uses the [`@devcontainers/cli`](https://github.com/devcontainers/cli) and
[Docker](https://www.docker.com) to seamlessly build and run these containers,
with management in your dashboard.
This implementation is different from the existing
[Envbuilder-based Dev Containers](../../admin/templates/managing-templates/devcontainers/index.md)
offering.
This guide covers the Dev Containers integration. For workspaces without Docker,
administrators can configure
[Envbuilder](../../admin/integrations/devcontainers/envbuilder/index.md) instead,
which builds the workspace image itself from your dev container configuration.
![Two dev containers running as sub-agents in a Coder workspace](../../images/user-guides/devcontainers/devcontainer-running.png)_Dev containers appear as sub-agents with their own apps, SSH access, and port forwarding_
## Prerequisites
- Coder version 2.24.0 or later
- Coder CLI version 2.24.0 or later
- **Linux or macOS workspace** (Dev Containers are not supported on Windows)
- A template with:
- Dev Containers integration enabled
- A Docker-compatible workspace image
- Appropriate permissions to execute Docker commands inside your workspace
- Docker available inside your workspace
- The `@devcontainers/cli` installed in your workspace
## How It Works
The Dev Containers integration utilizes the `devcontainer` command from
[`@devcontainers/cli`](https://github.com/devcontainers/cli) to manage Dev
Containers within your Coder workspace.
This command provides comprehensive functionality for creating, starting, and managing Dev Containers.
Dev environments are configured through a standard `devcontainer.json` file,
which allows for extensive customization of your development setup.
When a workspace with the Dev Containers integration starts:
1. The workspace initializes the Docker environment.
1. The integration detects repositories with a `.devcontainer` directory or a
`devcontainer.json` file.
1. The integration builds and starts the Dev Container based on the
configuration.
1. Your workspace automatically detects the running Dev Container.
Dev Containers integration is enabled by default. Your workspace needs Docker
(via Docker-in-Docker or a mounted socket) and the devcontainers CLI. Most
templates with Dev Containers support include both. See
[Configure a template for dev containers](../../admin/integrations/devcontainers/integration.md)
for setup details.
## Features
### Available Now
- Automatic dev container detection from repositories
- Seamless container startup during workspace initialization
- Change detection with outdated status indicator
- On-demand container rebuild via dashboard button
- Integrated IDE experience with VS Code
- Direct SSH access to containers
- Automatic port detection
- Automatic Dev Container detection from repositories
- Seamless Dev Container startup during workspace initialization
- Dev Container change detection and dirty state indicators
- On-demand Dev Container recreation via rebuild button
- Integrated IDE experience in Dev Containers with VS Code
- Direct service access in Dev Containers
- SSH access to Dev Containers
- Automatic port detection for container ports
## Getting started
### Add a devcontainer.json
Add a `devcontainer.json` file to your repository. This file defines your
development environment. You can place it in:
- `.devcontainer/devcontainer.json` (recommended)
- `.devcontainer.json` (root of repository)
- `.devcontainer/<folder>/devcontainer.json` (for multiple configurations)
The third option allows monorepos to define multiple dev container
configurations in separate sub-folders. See the
[Dev Container specification](https://containers.dev/implementors/spec/#devcontainerjson)
for details.
Here's a minimal example:
```json
{
"name": "My Dev Container",
"image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}
```
For more configuration options, see the
[Dev Container specification](https://containers.dev/).
### Start your dev container
Coder automatically discovers dev container configurations in your repositories
and displays them in your workspace dashboard. From there, you can start a dev
container with a single click.
![Discovered dev containers with Start buttons](../../images/user-guides/devcontainers/devcontainer-discovery.png)_Coder detects dev container configurations and displays them with a Start button_
If your template administrator has configured automatic startup (via the
`coder_devcontainer` Terraform resource or autostart settings), your dev
container will build and start automatically when the workspace starts.
### Connect to your dev container
Once running, your dev container appears as a sub-agent in your workspace
dashboard. You can connect via:
- **Web terminal** in the Coder dashboard
- **SSH** using `coder ssh <workspace>.<agent>`
- **VS Code** using the "Open in VS Code Desktop" button
See [Working with dev containers](./working-with-dev-containers.md) for detailed
connection instructions.
## How it works
The Dev Containers integration uses the `devcontainer` command from
[`@devcontainers/cli`](https://github.com/devcontainers/cli) to manage
containers within your Coder workspace.
When a workspace with Dev Containers integration starts:
1. The workspace initializes the Docker environment.
1. The integration detects repositories with dev container configurations.
1. Detected dev containers appear in the Coder dashboard.
1. If auto-start is configured (via `coder_devcontainer` or autostart settings),
the integration builds and starts the dev container automatically.
1. Coder creates a sub-agent for the running container, enabling direct access.
Without auto-start, users can manually start discovered dev containers from the
dashboard.
### Agent naming
Each dev container gets its own agent name, derived from the workspace folder
path. For example, a dev container with workspace folder `/home/coder/my-app`
will have an agent named `my-app`.
Agent names are sanitized to contain only lowercase alphanumeric characters and
hyphens. You can also set a
[custom agent name](./customizing-dev-containers.md#custom-agent-name)
in your `devcontainer.json`.
## Limitations
The Dev Containers integration has the following limitations:
- **Linux only**: Dev Containers are currently not supported in Windows or
macOS workspaces
- Changes to `devcontainer.json` require manual rebuild using the dashboard
button
- The `forwardPorts` property in `devcontainer.json` with `host:port` syntax
(e.g., `"db:5432"`) for Docker Compose sidecar containers is not yet
supported. For single-container dev containers, use `coder port-forward` to
access ports directly on the sub-agent.
- Some advanced dev container features may have limited support
- **Not supported on Windows**
- Changes to the `devcontainer.json` file require manual container recreation
using the rebuild button
- Some Dev Container features may not work as expected
## Next steps
## Comparison with Envbuilder-based Dev Containers
| Feature | Dev Containers Integration | Envbuilder Dev Containers |
|----------------|----------------------------------------|----------------------------------------------|
| Implementation | Direct `@devcontainers/cli` and Docker | Coder's Envbuilder |
| Target users | Individual developers | Platform teams and administrators |
| Configuration | Standard `devcontainer.json` | Terraform templates with Envbuilder |
| Management | User-controlled | Admin-controlled |
| Requirements | Docker access in workspace | Compatible with more restricted environments |
Choose the appropriate solution based on your team's needs and infrastructure
constraints. For additional details on Envbuilder's Dev Container support, see
the
[Envbuilder Dev Container spec support documentation](https://github.com/coder/envbuilder/blob/main/docs/devcontainer-spec-support.md).
## Next Steps
- Explore the [Dev Container specification](https://containers.dev/) to learn
more about advanced configuration options
- Read about [Dev Container features](https://containers.dev/features) to
enhance your development environment
- Check the
[VS Code dev containers documentation](https://code.visualstudio.com/docs/devcontainers/containers)
for IDE-specific features
- [Working with dev containers](./working-with-dev-containers.md) — SSH, IDE
integration, and port forwarding
- [Customizing dev containers](./customizing-dev-containers.md) — Custom agent
names, apps, and display options
- [Troubleshooting dev containers](./troubleshooting-dev-containers.md) —
Diagnose common issues
- [Dev Container specification](https://containers.dev/) — Advanced
configuration options
- [Dev Container features](https://containers.dev/features) — Enhance your
environment with pre-built tools
@@ -1,6 +1,6 @@
# Troubleshooting dev containers
## Dev Container Not Starting
## Dev container not starting
If your dev container fails to start:
@@ -10,7 +10,108 @@ If your dev container fails to start:
- `/tmp/coder-startup-script.log`
- `/tmp/coder-script-[script_id].log`
1. Verify that Docker is running in your workspace.
1. Ensure the `devcontainer.json` file is valid.
1. Verify Docker is available in your workspace (see below).
1. Ensure the `devcontainer.json` file is valid JSON.
1. Check that the repository has been cloned correctly.
1. Verify the resource limits in your workspace are sufficient.
## Docker not available
Dev containers require Docker, either via a running daemon (Docker-in-Docker) or
a mounted socket from the host. Your template determines which approach is used.
**If using Docker-in-Docker**, check that the daemon is running:
```console
sudo service docker status
sudo service docker start # if not running
```
**If using a mounted socket**, verify the socket exists and is accessible:
```console
ls -la /var/run/docker.sock
docker ps # test access
```
If you get permission errors, your user may need to be in the `docker` group.
## Finding your dev container agent
Use `coder show` to list all agents in your workspace, including dev container
sub-agents:
```console
coder show <workspace>
```
The agent name is derived from the workspace folder path. For details on how
names are generated, see [Agent naming](./index.md#agent-naming).
## SSH connection issues
If `coder ssh <workspace>.<agent>` fails:
1. Verify the agent name using `coder show <workspace>`.
1. Check that the dev container is running:
```console
docker ps
```
1. Check the workspace agent logs for container-related errors:
```console
grep -i container /tmp/coder-agent.log
```
## VS Code connection issues
VS Code connects to dev containers through the Coder extension. The extension
uses the sub-agent information to route connections through the parent workspace
agent to the dev container. If VS Code fails to connect:
1. Ensure you have the latest Coder VS Code extension.
1. Verify the dev container is running in the Coder dashboard.
1. Check the parent workspace agent is healthy.
1. Try restarting the dev container from the dashboard.
## Dev container features not working
If features from your `devcontainer.json` aren't being applied:
1. Rebuild the container to ensure features are installed fresh.
1. Check the container build output for feature installation errors.
1. Verify the feature reference format is correct:
```json
{
"features": {
"ghcr.io/devcontainers/features/node:1": {}
}
}
```
## Slow container startup
If your dev container takes a long time to start:
1. **Use a pre-built image** instead of building from a Dockerfile. This avoids
the image build step, though features and lifecycle scripts still run.
1. **Minimize features**. Each feature executes as a separate Docker layer
during the image build, which is typically the slowest part. Changing
`devcontainer.json` invalidates the layer cache, causing features to
reinstall on rebuild.
1. **Check lifecycle scripts**. Commands in `postStartCommand` run on every
container start. Commands in `postCreateCommand` run once per build, so
they execute again after each rebuild.
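To illustrate the lifecycle-script guidance above, a sketch of where commands might live (the commands themselves are placeholders): put one-time setup such as dependency installation in `postCreateCommand`, and keep `postStartCommand` limited to things that must run on every start.

```json
{
  "postCreateCommand": "npm ci",
  "postStartCommand": "npm run start:services"
}
```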
## Getting more help
If you continue to experience issues:
1. Collect logs from `/tmp/coder-agent.log` (both workspace and container).
1. Note the exact error messages.
1. Check [Coder GitHub issues](https://github.com/coder/coder/issues) for
similar problems.
1. Contact your Coder administrator for template-specific issues.
@@ -3,95 +3,155 @@
The dev container integration appears in your Coder dashboard, providing a
visual representation of the running environment:
![Dev container integration in Coder dashboard](../../images/user-guides/devcontainers/devcontainer-agent-ports.png)
![Two dev containers running as sub-agents in a Coder workspace](../../images/user-guides/devcontainers/devcontainer-running.png)_Dev containers appear as sub-agents with their own apps, SSH access, and port forwarding_
## SSH Access
## SSH access
You can SSH into your dev container directly using the Coder CLI:
Each dev container has its own agent name, derived from the workspace folder
(e.g., `/home/coder/my-project` becomes `my-project`). You can find agent names
in your workspace dashboard, or see
[Agent naming](./index.md#agent-naming) for details on how names are generated.
### Using the Coder CLI
The simplest way to SSH into a dev container is using `coder ssh` with the
workspace and agent name:
```console
coder ssh --container keen_dijkstra my-workspace
coder ssh <workspace>.<agent>
```
> [!NOTE]
>
> SSH access is not yet compatible with the `coder config-ssh` command for use
> with OpenSSH. You would need to manually modify your SSH config to include the
> `--container` flag in the `ProxyCommand`.
For example, to connect to a dev container with agent name `my-project` in
workspace `my-workspace`:
## Web Terminal Access
```console
coder ssh my-workspace.my-project
```
To SSH into the main workspace agent instead of the dev container:
```console
coder ssh my-workspace
```
### Using OpenSSH (config-ssh)
You can also use standard OpenSSH tools after generating SSH config entries with
`coder config-ssh`:
```console
coder config-ssh
```
This creates a wildcard SSH host entry that matches all your workspaces and
their agents, including dev container sub-agents. You can then connect using:
```console
ssh my-project.my-workspace.me.coder
```
The default hostname suffix is `.coder`. If your organization uses a different
suffix, adjust the hostname accordingly. The suffix can be configured via
[`coder config-ssh --hostname-suffix`](../../reference/cli/config-ssh.md) or
by your deployment administrator.
This method works with any SSH client, IDE remote extensions, `rsync`, `scp`,
and other tools that use SSH.
## Web terminal access
Once your workspace and dev container are running, you can use the web terminal
in the Coder interface to execute commands directly inside the dev container.
![Coder web terminal with dev container](../../images/user-guides/devcontainers/devcontainer-web-terminal.png)
## IDE Integration (VS Code)
## IDE integration (VS Code)
You can open your dev container directly in VS Code by:
1. Selecting "Open in VS Code Desktop" from the Coder web interface
2. Using the Coder CLI with the container flag:
1. Selecting **Open in VS Code Desktop** from the dev container agent in the
Coder web interface.
1. Using the Coder CLI:
```console
coder open vscode --container keen_dijkstra my-workspace
```
```console
coder open vscode <workspace>.<agent>
```
While optimized for VS Code, other IDEs with dev containers support may also
For example:
```console
coder open vscode my-workspace.my-project
```
VS Code will automatically detect the dev container environment and connect
appropriately.
While optimized for VS Code, other IDEs with dev container support may also
work.
## Port Forwarding
## Port forwarding
During the early access phase, port forwarding is limited to ports defined via
Since dev containers run as sub-agents, you can forward ports directly to them
using standard Coder port forwarding:
```console
coder port-forward <workspace>.<agent> --tcp 8080
```
For example, to forward port 8080 from a dev container with agent name
`my-project`:
```console
coder port-forward my-workspace.my-project --tcp 8080
```
This forwards port 8080 on your local machine directly to port 8080 in the dev
container. Coder also automatically detects ports opened inside the container.
### Exposing ports on the parent workspace
If you need to expose dev container ports through the parent workspace agent
(rather than the sub-agent), you can use the
[`appPort`](https://containers.dev/implementors/json_reference/#image-specific)
in your `devcontainer.json` file.
> [!NOTE]
>
> Support for automatic port forwarding via the `forwardPorts` property in
> `devcontainer.json` is planned for a future release.
For example, with this `devcontainer.json` configuration:
property in your `devcontainer.json`:
```json
{
"appPort": ["8080:8080", "4000:3000"]
"appPort": ["8080:8080", "4000:3000"]
}
```
You can forward these ports to your local machine using:
This maps container ports to the parent workspace, which can then be forwarded
using the main workspace agent.
```console
coder port-forward my-workspace --tcp 8080,4000
```
## Dev container features
This forwards port 8080 (local) -> 8080 (agent) -> 8080 (dev container) and port
4000 (local) -> 4000 (agent) -> 3000 (dev container).
## Dev Container Features
You can use standard dev container features in your `devcontainer.json` file.
Coder also maintains a
You can use standard [dev container features](https://containers.dev/features)
in your `devcontainer.json` file. Coder also maintains a
[repository of features](https://github.com/coder/devcontainer-features) to
enhance your development experience.
Currently available features include [code-server](https://github.com/coder/devcontainer-features/blob/main/src/code-server).
To use the code-server feature, add the following to your `devcontainer.json`:
For example, the
[code-server](https://github.com/coder/devcontainer-features/blob/main/src/code-server)
feature from the [Coder features repository](https://github.com/coder/devcontainer-features):
```json
{
"features": {
"ghcr.io/coder/devcontainer-features/code-server:1": {
"port": 13337,
"host": "0.0.0.0"
}
},
"appPort": ["13337:13337"]
"features": {
"ghcr.io/coder/devcontainer-features/code-server:1": {
"port": 13337,
"host": "0.0.0.0"
}
}
}
```
> [!NOTE]
>
> Remember to include the port in the `appPort` section to ensure proper port
> forwarding.
## Rebuilding dev containers
When you modify your `devcontainer.json`, you need to rebuild the container for
changes to take effect. Coder detects changes and shows an **Outdated** status
next to the dev container.
![Dev container showing Outdated status with rebuild option](../../images/user-guides/devcontainers/devcontainer-outdated.png)_The Outdated indicator appears when changes to devcontainer.json are detected_
Click **Rebuild** to recreate your dev container with the updated configuration.
@@ -7,7 +7,7 @@ These are intended for end-user flows only. If you are an administrator, please
refer to our docs on configuring [templates](../admin/index.md) or the
[control plane](../admin/index.md).
Check out our [early access features](../install/releases/feature-stages.md) for upcoming
functionality, including [Dev Containers integration](../user-guides/devcontainers/index.md).
Check out [Dev Containers integration](./devcontainers/index.md) for running
containerized development environments in your Coder workspace.
<children></children>
@@ -104,14 +104,14 @@ data "coder_workspace_owner" "me" {}
module "slackme" {
source = "dev.registry.coder.com/coder/slackme/coder"
version = "1.0.32"
version = "1.0.33"
agent_id = coder_agent.dev.id
auth_provider_id = "slack"
}
module "dotfiles" {
source = "dev.registry.coder.com/coder/dotfiles/coder"
version = "1.2.2"
version = "1.2.3"
agent_id = coder_agent.dev.id
}
@@ -214,7 +214,7 @@ RUN sed -i 's|http://archive.ubuntu.com/ubuntu/|http://mirrors.edge.kernel.org/u
# NOTE: In scripts/Dockerfile.base we specifically install Terraform version 1.12.2.
# Installing the same version here to match.
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.13.4/terraform_1.13.4_linux_amd64.zip" && \
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.14.1/terraform_1.14.1_linux_amd64.zip" && \
unzip /tmp/terraform.zip -d /usr/local/bin && \
rm -f /tmp/terraform.zip && \
chmod +x /usr/local/bin/terraform && \
@@ -333,7 +333,7 @@ data "coder_parameter" "vscode_channel" {
module "slackme" {
count = data.coder_workspace.me.start_count
source = "dev.registry.coder.com/coder/slackme/coder"
version = "1.0.32"
version = "1.0.33"
agent_id = coder_agent.dev.id
auth_provider_id = "slack"
}
@@ -341,7 +341,7 @@ module "slackme" {
module "dotfiles" {
count = data.coder_workspace.me.start_count
source = "dev.registry.coder.com/coder/dotfiles/coder"
version = "1.2.2"
version = "1.2.3"
agent_id = coder_agent.dev.id
}
@@ -357,7 +357,7 @@ module "git-config" {
module "git-clone" {
count = data.coder_workspace.me.start_count
source = "dev.registry.coder.com/coder/git-clone/coder"
version = "1.2.1"
version = "1.2.2"
agent_id = coder_agent.dev.id
url = "https://github.com/coder/coder"
base_dir = local.repo_base_dir
@@ -373,7 +373,7 @@ module "personalize" {
module "mux" {
count = data.coder_workspace.me.start_count
source = "registry.coder.com/coder/mux/coder"
version = "1.0.2"
version = "1.0.4"
agent_id = coder_agent.dev.id
subdomain = true
}
@@ -391,7 +391,7 @@ module "code-server" {
module "vscode-web" {
count = contains(jsondecode(data.coder_parameter.ide_choices.value), "vscode-web") ? data.coder_workspace.me.start_count : 0
source = "dev.registry.coder.com/coder/vscode-web/coder"
version = "1.4.2"
version = "1.4.3"
agent_id = coder_agent.dev.id
folder = local.repo_dir
extensions = ["github.copilot"]
@@ -429,7 +429,7 @@ module "coder-login" {
module "cursor" {
count = contains(jsondecode(data.coder_parameter.ide_choices.value), "cursor") ? data.coder_workspace.me.start_count : 0
source = "dev.registry.coder.com/coder/cursor/coder"
version = "1.3.3"
version = "1.4.0"
agent_id = coder_agent.dev.id
folder = local.repo_dir
}
@@ -437,7 +437,7 @@ module "cursor" {
module "windsurf" {
count = contains(jsondecode(data.coder_parameter.ide_choices.value), "windsurf") ? data.coder_workspace.me.start_count : 0
source = "dev.registry.coder.com/coder/windsurf/coder"
version = "1.2.1"
version = "1.3.0"
agent_id = coder_agent.dev.id
folder = local.repo_dir
}
@@ -596,6 +596,14 @@ resource "coder_agent" "dev" {
# Allow synchronization between scripts.
trap 'touch /tmp/.coder-startup-script.done' EXIT
# Authenticate GitHub CLI
if ! gh auth status >/dev/null 2>&1; then
echo "Logging into GitHub CLI…"
coder external-auth access-token github | gh auth login --hostname github.com --with-token
else
echo "Already logged into GitHub CLI."
fi
# Increase the shutdown timeout of the docker service for improved cleanup.
# The 240 was picked as it's lower than the 300 seconds we set for the
# container shutdown grace period.
@@ -831,6 +839,13 @@ locals {
- Built-in tools - use for everything else:
(file operations, git commands, builds & installs, one-off shell commands)
-- Workflow --
When starting new work:
1. If given a GitHub issue URL, use the `gh` CLI to read the full issue details with `gh issue view <issue-number>`.
2. Create a feature branch for the work using a descriptive name based on the issue or task.
Example: `git checkout -b fix/issue-123-oauth-error` or `git checkout -b feat/add-dark-mode`
3. Proceed with implementation following the CLAUDE.md guidelines.
-- Context --
There is an existing application in the current directory.
Be sure to read CLAUDE.md before making any changes.
@@ -8,6 +8,7 @@ import (
"sync"
"time"
"go.opentelemetry.io/otel/trace"
"golang.org/x/xerrors"
"cdr.dev/slog"
@@ -33,6 +34,7 @@ type Server struct {
requestBridgePool Pooler
logger slog.Logger
tracer trace.Tracer
wg sync.WaitGroup
// initConnectionCh will receive when the daemon connects to coderd for the
@@ -48,7 +50,7 @@ type Server struct {
shutdownOnce sync.Once
}
func New(ctx context.Context, pool Pooler, rpcDialer Dialer, logger slog.Logger) (*Server, error) {
func New(ctx context.Context, pool Pooler, rpcDialer Dialer, logger slog.Logger, tracer trace.Tracer) (*Server, error) {
if rpcDialer == nil {
return nil, xerrors.Errorf("nil rpcDialer given")
}
@@ -56,6 +58,7 @@ func New(ctx context.Context, pool Pooler, rpcDialer Dialer, logger slog.Logger)
ctx, cancel := context.WithCancel(ctx)
daemon := &Server{
logger: logger,
tracer: tracer,
clientDialer: rpcDialer,
clientCh: make(chan DRPCClient),
lifecycleCtx: ctx,
@@ -143,7 +146,7 @@ func (s *Server) GetRequestHandler(ctx context.Context, req Request) (http.Handl
return nil, xerrors.New("nil requestBridgePool")
}
reqBridge, err := s.requestBridgePool.Acquire(ctx, req, s.Client, NewMCPProxyFactory(s.logger, s.Client))
reqBridge, err := s.requestBridgePool.Acquire(ctx, req, s.Client, NewMCPProxyFactory(s.logger, s.tracer, s.Client))
if err != nil {
return nil, xerrors.Errorf("acquire request bridge: %w", err)
}
@@ -6,6 +6,7 @@ import (
"fmt"
"net/http"
"net/http/httptest"
"slices"
"testing"
"time"
@@ -13,8 +14,13 @@ import (
promtest "github.com/prometheus/client_golang/prometheus/testutil"
"github.com/stretchr/testify/require"
"github.com/tidwall/gjson"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
"go.opentelemetry.io/otel/sdk/trace/tracetest"
"github.com/coder/aibridge"
aibtracing "github.com/coder/aibridge/tracing"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
@@ -28,6 +34,8 @@ import (
"github.com/coder/coder/v2/testutil"
)
var testTracer = otel.Tracer("aibridged_test")
// TestIntegration is not an exhaustive test against the upstream AI providers' SDKs (see coder/aibridge for those).
// This test validates that:
// - intercepted requests can be authenticated/authorized
@@ -35,11 +43,17 @@ import (
// - responses can be returned as expected
// - interceptions are logged, as well as their related prompt, token, and tool calls
// - MCP server configurations are returned as expected
// - tracing spans are properly recorded
func TestIntegration(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
sr := tracetest.NewSpanRecorder()
tp := sdktrace.NewTracerProvider(sdktrace.WithSpanProcessor(sr))
tracer := tp.Tracer(t.Name())
defer func() { _ = tp.Shutdown(t.Context()) }()
// Create mock MCP server.
var mcpTokenReceived string
mockMCPServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -169,13 +183,13 @@ func TestIntegration(t *testing.T) {
logger := testutil.Logger(t)
providers := []aibridge.Provider{aibridge.NewOpenAIProvider(aibridge.OpenAIConfig{BaseURL: mockOpenAI.URL})}
pool, err := aibridged.NewCachedBridgePool(aibridged.DefaultPoolOptions, providers, nil, logger)
pool, err := aibridged.NewCachedBridgePool(aibridged.DefaultPoolOptions, providers, logger, nil, tracer)
require.NoError(t, err)
// Given: aibridged is started.
srv, err := aibridged.New(t.Context(), pool, func(ctx context.Context) (aibridged.DRPCClient, error) {
return aiBridgeClient, nil
}, logger)
}, logger, tracer)
require.NoError(t, err, "create new aibridged")
t.Cleanup(func() {
_ = srv.Shutdown(ctx)
@@ -256,6 +270,44 @@ func TestIntegration(t *testing.T) {
// Then: the MCP server was initialized.
require.Contains(t, mcpTokenReceived, authLink.OAuthAccessToken, "mock MCP server not requested")
+// Then: verify tracing spans were recorded.
+spans := sr.Ended()
+require.NotEmpty(t, spans)
+i := slices.IndexFunc(spans, func(s sdktrace.ReadOnlySpan) bool { return s.Name() == "CachedBridgePool.Acquire" })
+require.NotEqual(t, -1, i, "span named 'CachedBridgePool.Acquire' not found")
+expectAttrs := []attribute.KeyValue{
+attribute.String(aibtracing.InitiatorID, user.ID.String()),
+attribute.String(aibtracing.APIKeyID, keyID),
+}
+require.Equal(t, expectAttrs, spans[i].Attributes())
+// Check for aibridge spans.
+spanNames := make(map[string]bool)
+for _, span := range spans {
+spanNames[span.Name()] = true
+}
+expectedAibridgeSpans := []string{
+"CachedBridgePool.Acquire",
+"ServerProxyManager.Init",
+"StreamableHTTPServerProxy.Init",
+"StreamableHTTPServerProxy.Init.fetchTools",
+"Intercept",
+"Intercept.CreateInterceptor",
+"Intercept.RecordInterception",
+"Intercept.ProcessRequest",
+"Intercept.ProcessRequest.Upstream",
+"Intercept.RecordPromptUsage",
+"Intercept.RecordTokenUsage",
+"Intercept.RecordToolUsage",
+"Intercept.RecordInterceptionEnded",
+}
+for _, expectedSpan := range expectedAibridgeSpans {
+require.Contains(t, spanNames, expectedSpan)
+}
}
// TestIntegrationWithMetrics validates that Prometheus metrics are correctly incremented
@@ -324,13 +376,13 @@ func TestIntegrationWithMetrics(t *testing.T) {
providers := []aibridge.Provider{aibridge.NewOpenAIProvider(aibridge.OpenAIConfig{BaseURL: mockOpenAI.URL})}
// Create pool with metrics.
-pool, err := aibridged.NewCachedBridgePool(aibridged.DefaultPoolOptions, providers, metrics, logger)
+pool, err := aibridged.NewCachedBridgePool(aibridged.DefaultPoolOptions, providers, logger, metrics, testTracer)
require.NoError(t, err)
// Given: aibridged is started.
srv, err := aibridged.New(ctx, pool, func(ctx context.Context) (aibridged.DRPCClient, error) {
return aiBridgeClient, nil
-}, logger)
+}, logger, testTracer)
require.NoError(t, err, "create new aibridged")
t.Cleanup(func() {
_ = srv.Shutdown(ctx)
@@ -41,7 +41,7 @@ func newTestServer(t *testing.T) (*aibridged.Server, *mock.MockDRPCClient, *mock
pool,
func(ctx context.Context) (aibridged.DRPCClient, error) {
return client, nil
-}, logger)
+}, logger, testTracer)
require.NoError(t, err, "create new aibridged")
t.Cleanup(func() {
srv.Shutdown(context.Background())
@@ -290,7 +290,7 @@ func TestRouting(t *testing.T) {
aibridge.NewOpenAIProvider(aibridge.OpenAIConfig{BaseURL: openaiSrv.URL}),
aibridge.NewAnthropicProvider(aibridge.AnthropicConfig{BaseURL: antSrv.URL}, nil),
}
-pool, err := aibridged.NewCachedBridgePool(aibridged.DefaultPoolOptions, providers, nil, logger)
+pool, err := aibridged.NewCachedBridgePool(aibridged.DefaultPoolOptions, providers, logger, nil, testTracer)
require.NoError(t, err)
conn := &mockDRPCConn{}
client.EXPECT().DRPCConn().AnyTimes().Return(conn)
@@ -309,7 +309,7 @@ func TestRouting(t *testing.T) {
// Given: aibridged is started.
srv, err := aibridged.New(t.Context(), pool, func(ctx context.Context) (aibridged.DRPCClient, error) {
return client, nil
-}, logger)
+}, logger, testTracer)
require.NoError(t, err, "create new aibridged")
t.Cleanup(func() {
_ = srv.Shutdown(testutil.Context(t, testutil.WaitShort))
@@ -6,6 +6,7 @@ import (
"regexp"
"time"
"go.opentelemetry.io/otel/trace"
"golang.org/x/xerrors"
"cdr.dev/slog"
@@ -28,30 +29,32 @@ type MCPProxyBuilder interface {
// The SessionKey from [Request] is used to authenticate against the Coder MCP server.
//
// NOTE: the [mcp.ServerProxier] instance may be proxying one or more MCP servers.
-Build(ctx context.Context, req Request) (mcp.ServerProxier, error)
+Build(ctx context.Context, req Request, tracer trace.Tracer) (mcp.ServerProxier, error)
}
var _ MCPProxyBuilder = &MCPProxyFactory{}
type MCPProxyFactory struct {
logger slog.Logger
+tracer trace.Tracer
clientFn ClientFunc
}
-func NewMCPProxyFactory(logger slog.Logger, clientFn ClientFunc) *MCPProxyFactory {
+func NewMCPProxyFactory(logger slog.Logger, tracer trace.Tracer, clientFn ClientFunc) *MCPProxyFactory {
return &MCPProxyFactory{
logger: logger,
+tracer: tracer,
clientFn: clientFn,
}
}
-func (m *MCPProxyFactory) Build(ctx context.Context, req Request) (mcp.ServerProxier, error) {
+func (m *MCPProxyFactory) Build(ctx context.Context, req Request, tracer trace.Tracer) (mcp.ServerProxier, error) {
proxiers, err := m.retrieveMCPServerConfigs(ctx, req)
if err != nil {
return nil, xerrors.Errorf("resolve configs: %w", err)
}
-return mcp.NewServerProxyManager(proxiers), nil
+return mcp.NewServerProxyManager(proxiers, tracer), nil
}
func (m *MCPProxyFactory) retrieveMCPServerConfigs(ctx context.Context, req Request) (map[string]mcp.ServerProxier, error) {
@@ -173,7 +176,6 @@ func (m *MCPProxyFactory) newStreamableHTTPServerProxy(cfg *proto.MCPServerConfi
// The proxy could then use its interface to retrieve a new access token and re-establish a connection.
// For now though, the short TTL of this cache should mostly mask this problem.
srv, err := mcp.NewStreamableHTTPServerProxy(
-m.logger.Named(fmt.Sprintf("mcp-server-proxy-%s", cfg.GetId())),
cfg.GetId(),
cfg.GetUrl(),
// See https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization#token-requirements.
@@ -182,6 +184,8 @@ func (m *MCPProxyFactory) newStreamableHTTPServerProxy(cfg *proto.MCPServerConfi
},
allowlist,
denylist,
+m.logger.Named(fmt.Sprintf("mcp-server-proxy-%s", cfg.GetId())),
+m.tracer,
)
if err != nil {
return nil, xerrors.Errorf("create streamable HTTP MCP server proxy: %w", err)
@@ -4,6 +4,7 @@ import (
"testing"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/otel"
"github.com/coder/coder/v2/enterprise/aibridged/proto"
"github.com/coder/coder/v2/testutil"
@@ -42,7 +43,7 @@ func TestMCPRegex(t *testing.T) {
t.Parallel()
logger := testutil.Logger(t)
-f := NewMCPProxyFactory(logger, nil)
+f := NewMCPProxyFactory(logger, otel.Tracer("aibridged_test"), nil)
_, err := f.newStreamableHTTPServerProxy(&proto.MCPServerConfig{
Id: "mock",
@@ -7,13 +7,15 @@ import (
"time"
"github.com/dgraph-io/ristretto/v2"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
"golang.org/x/xerrors"
"tailscale.com/util/singleflight"
"cdr.dev/slog"
"github.com/coder/aibridge"
"github.com/coder/aibridge/mcp"
"github.com/coder/aibridge/tracing"
)
const (
@@ -52,12 +54,13 @@ type CachedBridgePool struct {
singleflight *singleflight.Group[string, *aibridge.RequestBridge]
metrics *aibridge.Metrics
+tracer trace.Tracer
shutDownOnce sync.Once
shuttingDownCh chan struct{}
}
-func NewCachedBridgePool(options PoolOptions, providers []aibridge.Provider, metrics *aibridge.Metrics, logger slog.Logger) (*CachedBridgePool, error) {
+func NewCachedBridgePool(options PoolOptions, providers []aibridge.Provider, logger slog.Logger, metrics *aibridge.Metrics, tracer trace.Tracer) (*CachedBridgePool, error) {
cache, err := ristretto.NewCache(&ristretto.Config[string, *aibridge.RequestBridge]{
NumCounters: options.MaxItems * 10, // Docs suggest setting this 10x number of keys.
MaxCost: options.MaxItems * cacheCost, // Up to n instances.
@@ -85,13 +88,13 @@ func NewCachedBridgePool(options PoolOptions, providers []aibridge.Provider, met
return &CachedBridgePool{
cache: cache,
providers: providers,
-logger: logger,
options: options,
-metrics: metrics,
+tracer: tracer,
+logger: logger,
singleflight: &singleflight.Group[string, *aibridge.RequestBridge]{},
+metrics: metrics,
shuttingDownCh: make(chan struct{}),
}, nil
}
@@ -100,7 +103,15 @@ func NewCachedBridgePool(options PoolOptions, providers []aibridge.Provider, met
//
// Each returned [*aibridge.RequestBridge] is safe for concurrent use.
// Each [*aibridge.RequestBridge] is stateful because it has MCP clients which maintain sessions to the configured MCP server.
-func (p *CachedBridgePool) Acquire(ctx context.Context, req Request, clientFn ClientFunc, mcpProxyFactory MCPProxyBuilder) (http.Handler, error) {
+func (p *CachedBridgePool) Acquire(ctx context.Context, req Request, clientFn ClientFunc, mcpProxyFactory MCPProxyBuilder) (_ http.Handler, outErr error) {
+spanAttrs := []attribute.KeyValue{
+attribute.String(tracing.InitiatorID, req.InitiatorID.String()),
+attribute.String(tracing.APIKeyID, req.APIKeyID),
+}
+ctx, span := p.tracer.Start(ctx, "CachedBridgePool.Acquire", trace.WithAttributes(spanAttrs...))
+defer tracing.EndSpanErr(span, &outErr)
+ctx = tracing.WithRequestBridgeAttributesInContext(ctx, spanAttrs)
if err := ctx.Err(); err != nil {
return nil, xerrors.Errorf("acquire: %w", err)
}
@@ -124,10 +135,12 @@ func (p *CachedBridgePool) Acquire(ctx context.Context, req Request, clientFn Cl
// expire after the original TTL; we can extend the TTL on each Acquire() call.
// For now, we need to let the instance expire to keep the MCP connections fresh.
span.AddEvent("cache_hit")
return bridge, nil
}
recorder := aibridge.NewRecorder(p.logger.Named("recorder"), func() (aibridge.Recorder, error) {
span.AddEvent("cache_miss")
recorder := aibridge.NewRecorder(p.logger.Named("recorder"), p.tracer, func() (aibridge.Recorder, error) {
client, err := clientFn()
if err != nil {
return nil, xerrors.Errorf("acquire client: %w", err)
@@ -145,7 +158,7 @@ func (p *CachedBridgePool) Acquire(ctx context.Context, req Request, clientFn Cl
err error
)
-mcpServers, err = mcpProxyFactory.Build(ctx, req)
+mcpServers, err = mcpProxyFactory.Build(ctx, req, p.tracer)
if err != nil {
p.logger.Warn(ctx, "failed to create MCP server proxiers", slog.Error(err))
// Don't fail here; MCP server injection can gracefully degrade.
@@ -158,7 +171,7 @@ func (p *CachedBridgePool) Acquire(ctx context.Context, req Request, clientFn Cl
}
}
-bridge, err := aibridge.NewRequestBridge(ctx, p.providers, recorder, mcpServers, p.metrics, p.logger)
+bridge, err := aibridge.NewRequestBridge(ctx, p.providers, recorder, mcpServers, p.logger, p.metrics, p.tracer)
if err != nil {
return nil, xerrors.Errorf("create new request bridge: %w", err)
}
@@ -8,6 +8,7 @@ import (
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/otel/trace"
"go.uber.org/mock/gomock"
"cdr.dev/slog/sloggers/slogtest"
@@ -30,7 +31,7 @@ func TestPool(t *testing.T) {
mcpProxy := mcpmock.NewMockServerProxier(ctrl)
opts := aibridged.PoolOptions{MaxItems: 1, TTL: time.Second}
-pool, err := aibridged.NewCachedBridgePool(opts, nil, nil, logger)
+pool, err := aibridged.NewCachedBridgePool(opts, nil, logger, nil, testTracer)
require.NoError(t, err)
t.Cleanup(func() { pool.Shutdown(context.Background()) })
@@ -120,6 +121,6 @@ func newMockMCPFactory(proxy *mcpmock.MockServerProxier) *mockMCPFactory {
return &mockMCPFactory{proxy: proxy}
}
-func (m *mockMCPFactory) Build(ctx context.Context, req aibridged.Request) (mcp.ServerProxier, error) {
+func (m *mockMCPFactory) Build(ctx context.Context, req aibridged.Request, tracer trace.Tracer) (mcp.ServerProxier, error) {
return m.proxy, nil
}
@@ -10,6 +10,7 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/coder/aibridge"
"github.com/coder/coder/v2/coderd/tracing"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/enterprise/aibridged"
"github.com/coder/coder/v2/enterprise/coderd"
@@ -35,9 +36,10 @@ func newAIBridgeDaemon(coderAPI *coderd.API) (*aibridged.Server, error) {
reg := prometheus.WrapRegistererWithPrefix("coder_aibridged_", coderAPI.PrometheusRegistry)
metrics := aibridge.NewMetrics(reg)
+tracer := coderAPI.TracerProvider.Tracer(tracing.TracerName)
// Create pool for reusable stateful [aibridge.RequestBridge] instances (one per user).
-pool, err := aibridged.NewCachedBridgePool(aibridged.DefaultPoolOptions, providers, metrics, logger.Named("pool")) // TODO: configurable size.
+pool, err := aibridged.NewCachedBridgePool(aibridged.DefaultPoolOptions, providers, logger.Named("pool"), metrics, tracer) // TODO: configurable size.
if err != nil {
return nil, xerrors.Errorf("create request pool: %w", err)
}
@@ -45,7 +47,7 @@ func newAIBridgeDaemon(coderAPI *coderd.API) (*aibridged.Server, error) {
// Create daemon.
srv, err := aibridged.New(ctx, pool, func(dialCtx context.Context) (aibridged.DRPCClient, error) {
return coderAPI.CreateInMemoryAIBridgeServer(dialCtx)
-}, logger)
+}, logger, tracer)
if err != nil {
return nil, xerrors.Errorf("start in-memory aibridge daemon: %w", err)
}