Compare commits

1 commit

| Author | SHA1 | Date |
|---|---|---|
| | d4c7cb2021 | |
@@ -1,321 +0,0 @@
# Documentation Style Guide

This guide documents the documentation patterns used in the Coder repository, based on analysis of existing admin guides, tutorials, and reference documentation. It applies specifically to files in the `docs/` directory; see [CONTRIBUTING.md](../../docs/about/contributing/CONTRIBUTING.md) for general contribution guidelines.

## Research Before Writing

Before documenting a feature:

1. **Research similar documentation** - Read recent documentation pages in `docs/` to understand the writing style, structure, and conventions for your content type (admin guides, tutorials, reference docs, etc.)
2. **Read the code implementation** - Check backend endpoints, frontend components, and database queries
3. **Verify the permissions model** - Look up RBAC actions in `coderd/rbac/` (e.g., `view_insights` for Template Insights)
4. **Check UI thresholds and defaults** - Review frontend code for color thresholds, time intervals, and display logic
5. **Cross-reference with tests** - Test files document expected behavior and edge cases
6. **Verify API endpoints** - Check `coderd/coderd.go` for route registration
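Step 3 above, for example, usually comes down to a repository search. The snippet below is a self-contained illustration: the `coderd/rbac` file and the action name are mocked here, so real paths and names must be confirmed in the actual codebase.

```sh
# Illustration of the "verify the permissions model" research step.
# The repository layout is mocked; only the search pattern is the point.
set -eu
repo=$(mktemp -d)
mkdir -p "$repo/coderd/rbac"
cat > "$repo/coderd/rbac/policy.go" <<'EOF'
// ActionViewInsights gates access to Template Insights.
const ActionViewInsights = "view_insights"
EOF
# Look up the exact RBAC action name before documenting it.
grep -rn 'view_insights' "$repo/coderd/rbac"
```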
### Code Verification Checklist

When documenting features, always verify these implementation details:

- Read the handler implementation in `coderd/`
- Check permission requirements in `coderd/rbac/`
- Review frontend components in `site/src/pages/` or `site/src/modules/`
- Verify display thresholds and intervals (e.g., color codes, time defaults)
- Confirm API endpoint paths and parameters
- Check for server flags in the serpent configuration
## Document Structure

### Title and Introduction Pattern

**H1 heading**: A single clear title without a prefix:

```markdown
# Template Insights
```

**Introduction**: 1-2 sentences describing what the feature does, concise and actionable:

```markdown
Template Insights provides detailed analytics and usage metrics for your Coder templates.
```

### Premium Feature Callout

For Premium-only features, add a `(Premium)` suffix to the H1 heading. The documentation system automatically links these to premium pricing information. Also add a premium badge in `docs/manifest.json` with `"state": ["premium"]`.

```markdown
# Template Insights (Premium)
```
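For reference, a manifest entry carrying the premium badge might look like this; every field except `"state"` is an assumption about the manifest structure and should be checked against the real `docs/manifest.json`:

```json
{
  "title": "Template Insights",
  "path": "./admin/templates/insights.md",
  "state": ["premium"]
}
```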
### Overview Section Pattern

A common pattern after the introduction:

```markdown
## Overview

Template Insights offers visibility into:

- **Active Users**: Track the number of users actively using workspaces
- **Application Usage**: See which applications users are accessing
```

Use bold labels for capabilities; this gives readers a high-level understanding before the details.
## Image Usage

### Placement and Format

**Place images after descriptive text**, then add a caption:

```markdown


<small>Template Insights showing weekly active users and connection latency metrics.</small>
```

- Image format: ``
- Caption: Use a `<small>` tag below the image
- Alt text: Describe what's shown rather than repeating the heading

### Image-Driven Documentation

When you have multiple screenshots showing different aspects of a feature:

1. **Structure sections around images** - Each major screenshot gets its own section
2. **Describe what's visible** - Reference specific UI elements and data values shown in the screenshot
3. **Flow naturally** - Let screenshots guide the reader through the feature

**Example**: The Template Insights documentation has 3 screenshots that define its 3 main content sections.
### Screenshot Guidelines

**When screenshots are not yet available**: If you're documenting a feature before screenshots exist, use image placeholders with descriptive alt text and ask the user to provide screenshots:

```markdown

```

Then ask: "Could you provide a screenshot of the Template Insights page? I've added a placeholder at [location]."

**When documenting with screenshots**:

- Illustrate the features being discussed in the preceding text
- Show actual UI/data, not abstract concepts
- Reference specific values shown when explaining features
- Organize documentation around key screenshots
## Content Organization

### Section Hierarchy

1. **H2 (##)**: Major sections - "Overview", "Accessing [Feature]", "Use Cases"
2. **H3 (###)**: Subsections within major sections
3. **H4 (####)**: Rare; only for deeply nested content
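Put together, the hierarchy above produces a skeleton like the following (section names are illustrative):

```markdown
# Template Insights

## Overview

## Accessing Template Insights

### Permissions

## Use Cases

## Related Documentation
```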
### Common Section Patterns

- **Accessing [Feature]**: How to navigate to/use the feature
- **Use Cases**: Practical applications
- **Permissions**: Access control information
- **API Access**: Programmatic access details
- **Related Documentation**: Links to related content

### Lists and Callouts

- **Unordered lists**: Non-sequential items, features, capabilities
- **Ordered lists**: Step-by-step instructions
- **Tables**: Comparing options, showing permissions, listing parameters
- **Callouts**:
  - `> [!NOTE]` for additional information
  - `> [!WARNING]` for important warnings
  - `> [!TIP]` for helpful tips
- **Tabs**: Use tabs for related but parallel content, such as different installation methods or platform-specific instructions. Tabs work well when readers need to choose the one path that applies to their situation.
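The callout syntaxes above render as GitHub-style alerts. An illustrative use (the text is invented for the example):

```markdown
> [!NOTE]
> Insights data may take up to an hour to appear for new workspaces.

> [!WARNING]
> Disabling Template Insights also disables the corresponding API endpoints.
```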
## Writing Style

### Tone and Voice

- **Direct and concise**: Avoid unnecessary words
- **Active voice**: "Template Insights tracks users", not "Users are tracked"
- **Present tense**: "The chart displays...", not "The chart will display..."
- **Second person**: "You can view..." for instructions

### Terminology

- **Consistent terms**: Use the same term throughout (e.g., "workspace", not "workspace environment")
- **Bold for UI elements**: "Navigate to the **Templates** page"
- **Code formatting**: Use backticks for commands, file paths, and code
  - Inline: `` `coder server` ``
  - Blocks: Use triple backticks with a language identifier

### Instructions

- **Numbered lists** for sequential steps
- **Start with a verb**: "Navigate to", "Click", "Select", "Run"
- **Be specific**: Include exact button/menu names in bold
## Code Examples

### Command Examples

````markdown
```sh
coder server --disable-template-insights
```
````

### Environment Variables

````markdown
```sh
CODER_DISABLE_TEMPLATE_INSIGHTS=true
```
````

### Code Comments

- Keep comments minimal
- Explain non-obvious parameters
- Use `# Comment` for shell, `// Comment` for other languages
## Links and References

### Internal Links

Use relative paths from the current file location:

- `[Template Permissions](./template-permissions.md)`
- `[API documentation](../../reference/api/insights.md)`

For cross-linking to Coder registry templates or other external Coder resources, reference the appropriate registry URLs.

### Cross-References

- Link to related documentation at the end
- Use descriptive text: "Learn about [template access control](./template-permissions.md)"
- Not just: "[Click here](./template-permissions.md)"

### API References

Link to specific endpoints:

```markdown
- `/api/v2/insights/templates` - Template usage metrics
```
## Accuracy Standards

### Specific Numbers Matter

Document exact values from the code:

- **Thresholds**: "green < 150ms, yellow 150-300ms, red ≥ 300ms"
- **Time intervals**: "daily for templates < 5 weeks old, weekly for 5+ weeks"
- **Counts and limits**: Use precise numbers, not approximations

### Permission Actions

- Use exact RBAC action names from the code (e.g., `view_insights`, not "view insights")
- Reference the permission system correctly (the `template:view_insights` scope)
- Specify which roles have the permission by default

### API Endpoints

- Use full, correct paths (e.g., `/api/v2/insights/templates`, not `/insights/templates`)
- Link to the generated API documentation in `docs/reference/api/`
## Documentation Manifest

**CRITICAL**: All documentation pages must be added to `docs/manifest.json` to appear in navigation. Read the manifest file to understand its structure and find the appropriate section for your documentation. Place new pages in logical sections matching the existing hierarchy.
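A quick pre-submit sanity check for this requirement might look like the following self-contained sketch; the manifest structure and page path are mocked, not taken from the real `docs/manifest.json`:

```sh
# Verify a new doc page is referenced in docs/manifest.json before opening a PR.
set -eu
docs=$(mktemp -d)
cat > "$docs/manifest.json" <<'EOF'
{
  "routes": [
    { "title": "Template Insights", "path": "./admin/templates/insights.md" }
  ]
}
EOF
page="./admin/templates/insights.md"
if grep -q "\"path\": \"$page\"" "$docs/manifest.json"; then
  echo "manifest: ok"
else
  echo "manifest: $page is missing from docs/manifest.json" >&2
  exit 1
fi
```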
## Proactive Documentation

When documenting features that depend on upcoming PRs:

1. **Reference the PR explicitly** - Mention the PR number and what it adds
2. **Document the feature anyway** - Write as if the feature exists
3. **Link to auto-generated docs** - Point to CLI reference sections that will be created
4. **Update the PR description** - Note that documentation is included proactively

**Example**: The Template Insights docs include the `--disable-template-insights` flag from PR #20940 before it merged, with a link to `../../reference/cli/server.md#--disable-template-insights` that will exist when the PR lands.
## Special Sections

### Troubleshooting

- **H3 subheadings** for each issue
- Format: Issue description followed by solution steps
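Following that format, a troubleshooting entry might look like this (the issue and steps are invented for illustration):

```markdown
### Insights page shows no data

The selected time range may predate the template's first workspace.

1. Widen the date range filter.
2. Confirm at least one workspace has been started from the template.
```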
### Prerequisites

- Bullet or numbered list
- Include version requirements, dependencies, and permissions

## Formatting and Linting

**Always run these commands before submitting documentation:**

```sh
make fmt/markdown  # Format markdown tables and content
make lint/markdown # Lint and fix markdown issues
```

These ensure consistent formatting and catch common documentation errors.
## Formatting Conventions

### Text Formatting

- **Bold** (`**text**`): UI elements, important concepts, labels
- *Italic* (`*text*`): Rare; mainly for emphasis
- `Code` (`` `text` ``): Commands, file paths, parameter names

### Tables

- Use for comparing options, listing parameters, and showing permissions
- Left-align text, right-align numbers
- Keep tables simple - avoid nested formatting when possible
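A table following these conventions might look like this (the roles and values are illustrative, not taken from the permission system):

```markdown
| Role           | Default access | Workspaces |
|----------------|----------------|-----------:|
| Template Admin | Yes            |         12 |
| Member         | No             |          3 |
```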
### Code Blocks

- **Always specify a language**: `` ```sh ``, `` ```yaml ``, `` ```go ``
- Include comments for complex examples
- Keep blocks minimal - show only the relevant configuration

## Document Length

- **Comprehensive but scannable**: Cover all aspects, but use clear headings
- **Break up long sections**: Use H3 subheadings for logical chunks
- **Visual hierarchy**: Images and code blocks break up text

## Auto-Generated Content

Some content is auto-generated and marked with comments:

```markdown
<!-- Code generated by 'make docs/...' DO NOT EDIT -->
```

Don't manually edit auto-generated sections.

## URL Redirects

When renaming or moving documentation pages, add redirects to prevent broken links.

**Important**: Redirects are NOT configured in this repository. The coder.com website runs on Vercel with Next.js and reads redirects from a separate repository:

- **Redirect configuration**: https://github.com/coder/coder.com/blob/master/redirects.json
- **Do NOT create** a `docs/_redirects` file - that format (used by Netlify/Cloudflare Pages) is not processed by coder.com

When you rename or move a doc page, open a PR in coder/coder.com to add the redirect.
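An entry in that file might look like the following Next.js-style redirect; the exact schema should be confirmed in coder/coder.com, and the paths here are invented:

```json
{
  "source": "/docs/admin/old-page",
  "destination": "/docs/admin/new-page",
  "permanent": true
}
```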
## Key Principles

1. **Research first** - Verify against the actual code implementation
2. **Be precise** - Use exact numbers, permission names, and API paths
3. **Visual structure** - Organize around screenshots when available
4. **Link everything** - Related docs, API endpoints, CLI references
5. **Manifest inclusion** - Add pages to `docs/manifest.json` for navigation
6. **Add redirects** - When moving/renaming pages, add redirects in the coder/coder.com repo
@@ -1,256 +0,0 @@
# Pull Request Description Style Guide

This guide documents the PR description style used in the Coder repository, based on analysis of recent merged PRs.

## PR Title Format

Follow the [Conventional Commits 1.0.0](https://www.conventionalcommits.org/en/v1.0.0/) format:

```text
type(scope): brief description
```

**Common types:**

- `feat`: New features
- `fix`: Bug fixes
- `refactor`: Code refactoring without behavior change
- `perf`: Performance improvements
- `docs`: Documentation changes
- `chore`: Dependency updates, tooling changes

**Examples:**

- `feat: add tracing to aibridge`
- `fix: move contexts to appropriate locations`
- `perf(coderd/database): add index on workspace_app_statuses.app_id`
- `docs: fix swagger tags for license endpoints`
- `refactor(site): remove redundant client-side sorting of app statuses`
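A rough local check of the title format can be sketched with a regex. This is a simplification of the Conventional Commits grammar, not a substitute for it:

```sh
# Validate a PR title against the common type(scope): description shape.
check_title() {
  printf '%s' "$1" | grep -Eq '^(feat|fix|refactor|perf|docs|chore)(\([a-z0-9/_.-]+\))?(!)?: .+'
}
check_title 'feat: add tracing to aibridge' && echo "ok"
check_title 'Added some stuff' || echo "invalid: use type(scope): description"
```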
## PR Description Structure

### Default Pattern: Keep It Concise

Most PRs use a simple 1-2 paragraph format:

```markdown
[Brief statement of what changed]

[One sentence explaining technical details or context if needed]
```

**Example (bugfix):**

```markdown
Previously, when a devcontainer config file was modified, the dirty
status was updated internally but not broadcast to websocket listeners.

Add `broadcastUpdatesLocked()` call in `markDevcontainerDirty` to notify
websocket listeners immediately when a config file changes.
```

**Example (dependency update):**

```markdown
Changes from https://github.com/upstream/repo/pull/XXX/
```

**Example (docs correction):**

```markdown
Removes incorrect references to database replicas from the scaling documentation.
Coder only supports a single database connection URL.
```
### For Complex Changes: Use "Summary", "Problem", "Fix"

Only use structured sections when the change requires significant explanation:

```markdown
## Summary
Brief overview of the change

## Problem
Detailed explanation of the issue being addressed

## Fix
How the solution works
```

**Example (API documentation fix):**

```markdown
## Summary
Change `@Tags` from `Organizations` to `Enterprise` for POST /licenses...

## Problem
The license API endpoints were inconsistently tagged...

## Fix
Simply updated the `@Tags` annotation from `Organizations` to `Enterprise`...
```
### For Large Refactors: Lead with Context

When rewriting significant documentation or code, start with the problems being fixed:

```markdown
This PR rewrites [component] for [reason].

The previous [component] had [specific issues]: [details].

[What changed]: [specific improvements made].

[Additional changes]: [context].

Refs #[issue-number]
```

**Example (major documentation rewrite):**

- Started with "This PR rewrites the dev containers documentation for GA readiness"
- Listed the specific inaccuracies being fixed
- Explained the organizational changes
- Referenced the related issue
## What to Include

### Always Include

1. **Link Related Work**
   - `Closes https://github.com/coder/internal/issues/XXX`
   - `Depends on #XXX`
   - `Fixes: https://github.com/coder/aibridge/issues/XX`
   - `Refs #XXX` (for general reference)

2. **Performance Context** (when relevant)

   ```markdown
   Each query took ~30ms on average with 80 requests/second to the cluster,
   resulting in ~5.2 query-seconds every second.
   ```

3. **Migration Warnings** (when relevant)

   ```markdown
   **NOTE**: This migration creates an index on `workspace_app_statuses`.
   For deployments with heavy task usage, this may take a moment to complete.
   ```

4. **Visual Evidence** (for UI changes)

   ```markdown
   <img width="1281" height="425" alt="image" src="..." />
   ```

### Never Include

- ❌ **Test plans** - Testing is handled through code review and CI
- ❌ **"Benefits" sections** - Benefits should be clear from the description
- ❌ **Implementation details** - Keep it high-level
- ❌ **Marketing language** - Stay technical and factual
- ❌ **Bullet lists of features** (unless it's a large refactor that needs enumeration)
## Special Patterns

### Simple Chore PRs

For straightforward updates (dependency bumps, minor fixes):

```markdown
Changes from [link to upstream PR/issue]
```

Or:

```markdown
Reference:
[link explaining why this change is needed]
```

### Bug Fixes

Start with the problem, then explain the fix:

```markdown
[What was broken and why it matters]

[What you changed to fix it]
```

### Dependency Updates

Dependabot PRs are auto-generated - don't try to match their verbose style for manual updates. Instead, use:

```markdown
Changes from https://github.com/upstream/repo/pull/XXX/
```
## Attribution Footer

For AI-generated PRs, end with:

```markdown
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
```

## Creating PRs as Draft

**IMPORTANT**: Unless explicitly told otherwise, always create PRs as drafts using the `--draft` flag:

```bash
gh pr create --draft --title "..." --body "..."
```

After creating the PR, encourage the user to review it before marking it as ready:

```text
I've created draft PR #XXXX. Please review the changes and mark it as ready for review when you're satisfied.
```

This allows the user to:

- Review the code changes before requesting reviews from maintainers
- Make additional adjustments if needed
- Ensure CI passes before notifying reviewers
- Control when the PR enters the review queue

Only create non-draft PRs when the user explicitly requests it or when following up on an existing draft.
## Key Principles

1. **Always create draft PRs** - Unless explicitly told otherwise
2. **Be concise** - Default to 1-2 paragraphs unless complexity demands more
3. **Be technical** - Explain what and why, not the detailed how
4. **Link everything** - Issues, PRs, upstream changes, Notion docs
5. **Show impact** - Metrics for performance, screenshots for UI, warnings for migrations
6. **No test plans** - Code review and CI handle testing
7. **No benefits sections** - Benefits should be obvious from the technical description

## Examples by Category

### Performance Improvements

Include query timing metrics and explain the index solution.

### Bug Fixes

Describe the broken behavior, then the fix, in two sentences.

### Documentation

- **Major rewrite**: Long form explaining the inaccuracies and improvements
- **Simple correction**: A single sentence

### Features

A simple statement of what was added and its dependencies.

### Refactoring

Explain why the removed code is now redundant (e.g., client-side sorting of app statuses).

### Configuration

Add guidelines with an issue reference.
@@ -7,5 +7,5 @@ runs:
  - name: Install Terraform
    uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
    with:
      terraform_version: 1.14.1
      terraform_version: 1.13.4
      terraform_wrapper: false
@@ -197,8 +197,6 @@ clear what the new test covers.
@.claude/docs/TESTING.md
@.claude/docs/TROUBLESHOOTING.md
@.claude/docs/DATABASE.md
@.claude/docs/PR_STYLE_GUIDE.md
@.claude/docs/DOCS_STYLE_GUIDE.md

## Local Configuration
+6
-11
@@ -2189,19 +2189,14 @@ func (a *apiConnRoutineManager) startTailnetAPI(
	a.eg.Go(func() error {
		logger.Debug(ctx, "starting tailnet routine")
		err := f(ctx, a.tAPI)
		if (xerrors.Is(err, context.Canceled) ||
			xerrors.Is(err, io.EOF)) &&
			ctx.Err() != nil {
			logger.Debug(ctx, "swallowing error because context is canceled", slog.Error(err))
		if xerrors.Is(err, context.Canceled) && ctx.Err() != nil {
			logger.Debug(ctx, "swallowing context canceled")
			// Don't propagate context canceled errors to the error group, because we don't want the
			// graceful context being canceled to halt the work of routines with
			// gracefulShutdownBehaviorRemain. Unfortunately, the dRPC library closes the stream
			// when context is canceled on an RPC, so canceling the context can also show up as
			// io.EOF. Also, when Coderd unilaterally closes the API connection (for example if the
			// build is outdated), it can sometimes show up as context.Canceled in our RPC calls.
			// We can't reliably distinguish between a context cancelation and a legit EOF, so we
			// also check that *our* context is currently canceled. If it is, we can safely ignore
			// the error.
			// gracefulShutdownBehaviorRemain. Note that we check both that the error is
			// context.Canceled and that *our* context is currently canceled, because when Coderd
			// unilaterally closes the API connection (for example if the build is outdated), it can
			// sometimes show up as context.Canceled in our RPC calls.
			return nil
		}
		logger.Debug(ctx, "routine exited", slog.Error(err))
+1
-1
@@ -465,7 +465,7 @@ func TestAgent_SessionTTYShell(t *testing.T) {
	for _, port := range sshPorts {
		t.Run(fmt.Sprintf("(%d)", port), func(t *testing.T) {
			t.Parallel()
			ctx := testutil.Context(t, testutil.WaitMedium)
			ctx := testutil.Context(t, testutil.WaitShort)

			session := setupSSHSessionOnPort(t, agentsdk.Manifest{}, codersdk.ServiceBannerConfig{}, nil, port)
			command := "sh"
+12
-69
@@ -17,12 +17,12 @@
      "useSemanticElements": "off",
      "noStaticElementInteractions": "off"
    },
    "correctness": {
      "noUnusedImports": "warn",
    "correctness": {
      "noUnusedImports": "warn",
      "useUniqueElementIds": "off", // TODO: This is new but we want to fix it
      "noNestedComponentDefinitions": "off", // TODO: Investigate, since it is used by shadcn components
      "noUnusedVariables": {
        "level": "warn",
      "noUnusedVariables": {
        "level": "warn",
        "options": {
          "ignoreRestSiblings": true
        }
@@ -40,76 +40,19 @@
      "useNumberNamespace": "error",
      "noInferrableTypes": "error",
      "noUselessElse": "error",
      "noRestrictedImports": {
        "level": "error",
      "noRestrictedImports": {
        "level": "error",
        "options": {
          "paths": {
            // "@mui/material/Alert": "Use components/Alert/Alert instead.",
            // "@mui/material/AlertTitle": "Use components/Alert/Alert instead.",
            // "@mui/material/Autocomplete": "Use shadcn/ui Combobox instead.",
            "@mui/material": "Use @mui/material/<name> instead. See: https://material-ui.com/guides/minimizing-bundle-size/.",
            "@mui/material/Avatar": "Use components/Avatar/Avatar instead.",
            "@mui/material/Box": "Use a <div> with Tailwind classes instead.",
            "@mui/material/Button": "Use components/Button/Button instead.",
            // "@mui/material/Card": "Use shadcn/ui Card component instead.",
            // "@mui/material/CardActionArea": "Use shadcn/ui Card component instead.",
            // "@mui/material/CardContent": "Use shadcn/ui Card component instead.",
            // "@mui/material/Checkbox": "Use shadcn/ui Checkbox component instead.",
            // "@mui/material/Chip": "Use components/Badge or Tailwind styles instead.",
            // "@mui/material/CircularProgress": "Use components/Spinner/Spinner instead.",
            // "@mui/material/Collapse": "Use shadcn/ui Collapsible instead.",
            // "@mui/material/CssBaseline": "Use Tailwind CSS base styles instead.",
            // "@mui/material/Dialog": "Use shadcn/ui Dialog component instead.",
            // "@mui/material/DialogActions": "Use shadcn/ui Dialog component instead.",
            // "@mui/material/DialogContent": "Use shadcn/ui Dialog component instead.",
            // "@mui/material/DialogContentText": "Use shadcn/ui Dialog component instead.",
            // "@mui/material/DialogTitle": "Use shadcn/ui Dialog component instead.",
            // "@mui/material/Divider": "Use shadcn/ui Separator or <hr> with Tailwind instead.",
            // "@mui/material/Drawer": "Use shadcn/ui Sheet component instead.",
            // "@mui/material/FormControl": "Use native form elements with Tailwind instead.",
            // "@mui/material/FormControlLabel": "Use shadcn/ui Label with form components instead.",
            // "@mui/material/FormGroup": "Use a <div> with Tailwind classes instead.",
            // "@mui/material/FormHelperText": "Use a <p> with Tailwind classes instead.",
            // "@mui/material/FormLabel": "Use shadcn/ui Label component instead.",
            // "@mui/material/Grid": "Use Tailwind grid utilities instead.",
            // "@mui/material/IconButton": "Use components/Button/Button with variant='icon' instead.",
            // "@mui/material/InputAdornment": "Use Tailwind positioning in input wrapper instead.",
            // "@mui/material/InputBase": "Use shadcn/ui Input component instead.",
            // "@mui/material/LinearProgress": "Use a progress bar with Tailwind instead.",
            // "@mui/material/Link": "Use React Router Link or native <a> tags instead.",
            // "@mui/material/List": "Use native <ul> with Tailwind instead.",
            // "@mui/material/ListItem": "Use native <li> with Tailwind instead.",
            // "@mui/material/ListItemIcon": "Use lucide-react icons in list items instead.",
            // "@mui/material/ListItemText": "Use native elements with Tailwind instead.",
            // "@mui/material/Menu": "Use shadcn/ui DropdownMenu instead.",
            // "@mui/material/MenuItem": "Use shadcn/ui DropdownMenu components instead.",
            // "@mui/material/MenuList": "Use shadcn/ui DropdownMenu components instead.",
            // "@mui/material/Paper": "Use a <div> with Tailwind shadow/border classes instead.",
            "@mui/material/Alert": "Use components/Alert/Alert instead.",
            "@mui/material/Popover": "Use components/Popover/Popover instead.",
            // "@mui/material/Radio": "Use shadcn/ui RadioGroup instead.",
            // "@mui/material/RadioGroup": "Use shadcn/ui RadioGroup instead.",
            // "@mui/material/Select": "Use shadcn/ui Select component instead.",
            // "@mui/material/Skeleton": "Use shadcn/ui Skeleton component instead.",
            // "@mui/material/Snackbar": "Use components/GlobalSnackbar instead.",
            // "@mui/material/Stack": "Use Tailwind flex utilities instead (e.g., <div className='flex flex-col gap-4'>).",
            // "@mui/material/styles": "Use Tailwind CSS instead.",
            // "@mui/material/SvgIcon": "Use lucide-react icons instead.",
            // "@mui/material/Switch": "Use shadcn/ui Switch component instead.",
            "@mui/material/Table": "Import from components/Table/Table instead.",
            // "@mui/material/TableRow": "Import from components/Table/Table instead.",
            // "@mui/material/TextField": "Use shadcn/ui Input component instead.",
            // "@mui/material/ToggleButton": "Use shadcn/ui Toggle or custom component instead.",
            // "@mui/material/ToggleButtonGroup": "Use shadcn/ui Toggle or custom component instead.",
            // "@mui/material/Tooltip": "Use shadcn/ui Tooltip component instead.",
            "@mui/material/Typography": "Use native HTML elements instead. Eg: <span>, <p>, <h1>, etc.",
            // "@mui/material/useMediaQuery": "Use Tailwind responsive classes or custom hook instead.",
            // "@mui/system": "Use Tailwind CSS instead.",
            // "@mui/utils": "Use native alternatives or utility libraries instead.",
            // "@mui/x-tree-view": "Use a Tailwind-compatible alternative.",
            // "@emotion/css": "Use Tailwind CSS instead.",
            // "@emotion/react": "Use Tailwind CSS instead.",
            "@emotion/styled": "Use Tailwind CSS instead.",
            // "@emotion/cache": "Use Tailwind CSS instead.",
            // "components/Stack/Stack": "Use Tailwind flex utilities instead (e.g., <div className='flex flex-col gap-4'>).",
            "@mui/material/Box": "Use a <div> instead.",
            "@mui/material/Button": "Use a components/Button/Button instead.",
            "@mui/material/styles": "Import from @emotion/react instead.",
            "@mui/material/Table*": "Import from components/Table/Table instead.",
            "lodash": "Use lodash/<name> instead."
          }
        }
+158
-251
@@ -20,12 +20,6 @@ import (

var errAgentShuttingDown = xerrors.New("agent is shutting down")

// fetchAgentResult is used to pass agent fetch results through channels.
type fetchAgentResult struct {
	agent codersdk.WorkspaceAgent
	err   error
}

type AgentOptions struct {
	FetchInterval time.Duration
	Fetch         func(ctx context.Context, agentID uuid.UUID) (codersdk.WorkspaceAgent, error)
@@ -34,14 +28,6 @@ type AgentOptions struct {
	DocsURL string
}

// agentWaiter encapsulates the state machine for waiting on a workspace agent.
type agentWaiter struct {
	opts       AgentOptions
	sw         *stageWriter
	logSources map[uuid.UUID]codersdk.WorkspaceAgentLogSource
	fetchAgent func(context.Context) (codersdk.WorkspaceAgent, error)
}

// Agent displays a spinning indicator that waits for a workspace agent to connect.
func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentOptions) error {
	ctx, cancel := context.WithCancel(ctx)
@@ -58,7 +44,11 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
		}
	}

	fetchedAgent := make(chan fetchAgentResult, 1)
	type fetchAgent struct {
		agent codersdk.WorkspaceAgent
		err   error
	}
	fetchedAgent := make(chan fetchAgent, 1)
	go func() {
		t := time.NewTimer(0)
		defer t.Stop()
@@ -77,10 +67,10 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
			default:
			}
			if err != nil {
				fetchedAgent <- fetchAgentResult{err: xerrors.Errorf("fetch workspace agent: %w", err)}
				fetchedAgent <- fetchAgent{err: xerrors.Errorf("fetch workspace agent: %w", err)}
				return
			}
			fetchedAgent <- fetchAgentResult{agent: agent}
			fetchedAgent <- fetchAgent{agent: agent}

			// Adjust the interval based on how long we've been waiting.
			elapsed := time.Since(startTime)
@@ -89,7 +79,7 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
			}
		}
	}()
	fetch := func(ctx context.Context) (codersdk.WorkspaceAgent, error) {
	fetch := func() (codersdk.WorkspaceAgent, error) {
		select {
		case <-ctx.Done():
			return codersdk.WorkspaceAgent{}, ctx.Err()
@@ -101,7 +91,7 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
		}
	}

	agent, err := fetch(ctx)
	agent, err := fetch()
	if err != nil {
		return xerrors.Errorf("fetch: %w", err)
	}
@@ -110,23 +100,9 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
		logSources[source.ID] = source
	}

	w := &agentWaiter{
		opts:       opts,
		sw:         &stageWriter{w: writer},
		logSources: logSources,
		fetchAgent: fetch,
	}

	return w.wait(ctx, agent, fetchedAgent)
}

// wait runs the main state machine loop.
func (aw *agentWaiter) wait(ctx context.Context, agent codersdk.WorkspaceAgent, fetchedAgent chan fetchAgentResult) error {
	var err error
	// Track whether we've gone through a wait state, which determines if we
	// should show startup logs when connected.
	waitedForConnection := false
	sw := &stageWriter{w: writer}

	showStartupLogs := false
	for {
		// It doesn't matter if we're connected or not, if the agent is
		// shutting down, we don't know if it's coming back.
@@ -136,236 +112,167 @@ func (aw *agentWaiter) wait(ctx context.Context, agent codersdk.WorkspaceAgent,
		switch agent.Status {
		case codersdk.WorkspaceAgentConnecting, codersdk.WorkspaceAgentTimeout:
			agent, err = aw.waitForConnection(ctx, agent)
			if err != nil {
				return err
			}
			// Since we were waiting for the agent to connect, also show
			// startup logs if applicable.
			waitedForConnection = true
			showStartupLogs = true

			stage := "Waiting for the workspace agent to connect"
			sw.Start(stage)
			for agent.Status == codersdk.WorkspaceAgentConnecting {
				if agent, err = fetch(); err != nil {
					return xerrors.Errorf("fetch: %w", err)
				}
			}

			if agent.Status == codersdk.WorkspaceAgentTimeout {
				now := time.Now()
				sw.Log(now, codersdk.LogLevelInfo, "The workspace agent is having trouble connecting, wait for it to connect or restart your workspace.")
				sw.Log(now, codersdk.LogLevelInfo, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#agent-connection-issues", opts.DocsURL)))
				for agent.Status == codersdk.WorkspaceAgentTimeout {
					if agent, err = fetch(); err != nil {
						return xerrors.Errorf("fetch: %w", err)
					}
				}
			}
			sw.Complete(stage, agent.FirstConnectedAt.Sub(agent.CreatedAt))

		case codersdk.WorkspaceAgentConnected:
			return aw.handleConnected(ctx, agent, waitedForConnection, fetchedAgent)
			if !showStartupLogs && agent.LifecycleState == codersdk.WorkspaceAgentLifecycleReady {
				// The workspace is ready, there's nothing to do but connect.
				return nil
			}

			stage := "Running workspace agent startup scripts"
			follow := opts.Wait && agent.LifecycleState.Starting()
			if !follow {
				stage += " (non-blocking)"
			}
			sw.Start(stage)
			if follow {
				sw.Log(time.Time{}, codersdk.LogLevelInfo, "==> ℹ︎ To connect immediately, reconnect with --wait=no or CODER_SSH_WAIT=no, see --help for more information.")
			}

			err = func() error { // Use func because of defer in for loop.
				logStream, logsCloser, err := opts.FetchLogs(ctx, agent.ID, 0, follow)
				if err != nil {
					return xerrors.Errorf("fetch workspace agent startup logs: %w", err)
				}
				defer logsCloser.Close()

				var lastLog codersdk.WorkspaceAgentLog
				fetchedAgentWhileFollowing := fetchedAgent
				if !follow {
					fetchedAgentWhileFollowing = nil
				}
				for {
					// This select is essentially an inline `fetch()`.
					select {
					case <-ctx.Done():
						return ctx.Err()
					case f := <-fetchedAgentWhileFollowing:
						if f.err != nil {
							return xerrors.Errorf("fetch: %w", f.err)
						}
						agent = f.agent

						// If the agent is no longer starting, stop following
						// logs because FetchLogs will keep streaming forever.
						// We do one last non-follow request to ensure we have
						// fetched all logs.
						if !agent.LifecycleState.Starting() {
							_ = logsCloser.Close()
							fetchedAgentWhileFollowing = nil

							logStream, logsCloser, err = opts.FetchLogs(ctx, agent.ID, lastLog.ID, false)
							if err != nil {
								return xerrors.Errorf("fetch workspace agent startup logs: %w", err)
							}
							// Logs are already primed, so we can call close.
							_ = logsCloser.Close()
						}
					case logs, ok := <-logStream:
						if !ok {
							return nil
						}
						for _, log := range logs {
							source, hasSource := logSources[log.SourceID]
							output := log.Output
							if hasSource && source.DisplayName != "" {
								output = source.DisplayName + ": " + output
							}
							sw.Log(log.CreatedAt, log.Level, output)
							lastLog = log
						}
					}
				}
			}()
			if err != nil {
				return err
			}

			for follow && agent.LifecycleState.Starting() {
				if agent, err = fetch(); err != nil {
					return xerrors.Errorf("fetch: %w", err)
				}
			}

			switch agent.LifecycleState {
			case codersdk.WorkspaceAgentLifecycleReady:
				sw.Complete(stage, safeDuration(sw, agent.ReadyAt, agent.StartedAt))
			case codersdk.WorkspaceAgentLifecycleStartTimeout:
				// Backwards compatibility: Avoid printing warning if
				// coderd is old and doesn't set ReadyAt for timeouts.
				if agent.ReadyAt == nil {
					sw.Fail(stage, 0)
				} else {
					sw.Fail(stage, safeDuration(sw, agent.ReadyAt, agent.StartedAt))
				}
				sw.Log(time.Time{}, codersdk.LogLevelWarn, "Warning: A startup script timed out and your workspace may be incomplete.")
			case codersdk.WorkspaceAgentLifecycleStartError:
				sw.Fail(stage, safeDuration(sw, agent.ReadyAt, agent.StartedAt))
				// Use zero time (omitted) to separate these from the startup logs.
				sw.Log(time.Time{}, codersdk.LogLevelWarn, "Warning: A startup script exited with an error and your workspace may be incomplete.")
				sw.Log(time.Time{}, codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#startup-script-exited-with-an-error", opts.DocsURL)))
			default:
				switch {
				case agent.LifecycleState.Starting():
					// Use zero time (omitted) to separate these from the startup logs.
					sw.Log(time.Time{}, codersdk.LogLevelWarn, "Notice: The startup scripts are still running and your workspace may be incomplete.")
					sw.Log(time.Time{}, codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#your-workspace-may-be-incomplete", opts.DocsURL)))
					// Note: We don't complete or fail the stage here, it's
					// intentionally left open to indicate this stage didn't
					// complete.
				case agent.LifecycleState.ShuttingDown():
					// We no longer know if the startup script failed or not,
					// but we need to tell the user something.
					sw.Complete(stage, safeDuration(sw, agent.ReadyAt, agent.StartedAt))
					return errAgentShuttingDown
				}
			}

			return nil

		case codersdk.WorkspaceAgentDisconnected:
			agent, waitedForConnection, err = aw.waitForReconnection(ctx, agent)
			if err != nil {
				return err
			}
		}
	}
}
			// If the agent was still starting during disconnect, we'll
			// show startup logs.
			showStartupLogs = agent.LifecycleState.Starting()

// waitForConnection handles the Connecting/Timeout states.
// Returns when agent transitions to Connected or Disconnected.
func (aw *agentWaiter) waitForConnection(ctx context.Context, agent codersdk.WorkspaceAgent) (codersdk.WorkspaceAgent, error) {
	stage := "Waiting for the workspace agent to connect"
	aw.sw.Start(stage)
			stage := "The workspace agent lost connection"
			sw.Start(stage)
			sw.Log(time.Now(), codersdk.LogLevelWarn, "Wait for it to reconnect or restart your workspace.")
			sw.Log(time.Now(), codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#agent-connection-issues", opts.DocsURL)))

	agent, err := aw.pollWhile(ctx, agent, func(agent codersdk.WorkspaceAgent) bool {
		return agent.Status == codersdk.WorkspaceAgentConnecting
	})
	if err != nil {
		return agent, err
	}

	if agent.Status == codersdk.WorkspaceAgentTimeout {
		now := time.Now()
		aw.sw.Log(now, codersdk.LogLevelInfo, "The workspace agent is having trouble connecting, wait for it to connect or restart your workspace.")
		aw.sw.Log(now, codersdk.LogLevelInfo, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#agent-connection-issues", aw.opts.DocsURL)))
		agent, err = aw.pollWhile(ctx, agent, func(agent codersdk.WorkspaceAgent) bool {
			return agent.Status == codersdk.WorkspaceAgentTimeout
		})
		if err != nil {
			return agent, err
		}
	}

	aw.sw.Complete(stage, agent.FirstConnectedAt.Sub(agent.CreatedAt))
	return agent, nil
}

// handleConnected handles the Connected state and startup script logic.
// This is a terminal state, returns nil on success or error on failure.
//
//nolint:revive // Control flag is acceptable for internal method.
func (aw *agentWaiter) handleConnected(ctx context.Context, agent codersdk.WorkspaceAgent, showStartupLogs bool, fetchedAgent chan fetchAgentResult) error {
	if !showStartupLogs && agent.LifecycleState == codersdk.WorkspaceAgentLifecycleReady {
		// The workspace is ready, there's nothing to do but connect.
		return nil
	}

	// Determine if we should follow/stream logs (blocking mode).
	follow := aw.opts.Wait && agent.LifecycleState.Starting()

	stage := "Running workspace agent startup scripts"
	if !follow {
		stage += " (non-blocking)"
	}
	aw.sw.Start(stage)

	if follow {
		aw.sw.Log(time.Time{}, codersdk.LogLevelInfo, "==> ℹ︎ To connect immediately, reconnect with --wait=no or CODER_SSH_WAIT=no, see --help for more information.")
	}

	// In non-blocking mode (Wait=false), we don't stream logs. This prevents
	// dumping a wall of logs on users who explicitly pass --wait=no. The stage
	// indicator is still shown, just not the log content. See issue #13580.
	if aw.opts.Wait {
		var err error
		agent, err = aw.streamLogs(ctx, agent, follow, fetchedAgent)
		if err != nil {
			return err
		}

		// If we were following, wait until startup completes.
		if follow {
			agent, err = aw.pollWhile(ctx, agent, func(agent codersdk.WorkspaceAgent) bool {
				return agent.LifecycleState.Starting()
			})
			if err != nil {
				return err
			}
		}
	}

	// Handle final lifecycle state.
	switch agent.LifecycleState {
	case codersdk.WorkspaceAgentLifecycleReady:
		aw.sw.Complete(stage, safeDuration(aw.sw, agent.ReadyAt, agent.StartedAt))
	case codersdk.WorkspaceAgentLifecycleStartTimeout:
		// Backwards compatibility: Avoid printing warning if
		// coderd is old and doesn't set ReadyAt for timeouts.
		if agent.ReadyAt == nil {
			aw.sw.Fail(stage, 0)
		} else {
			aw.sw.Fail(stage, safeDuration(aw.sw, agent.ReadyAt, agent.StartedAt))
		}
		aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, "Warning: A startup script timed out and your workspace may be incomplete.")
	case codersdk.WorkspaceAgentLifecycleStartError:
		aw.sw.Fail(stage, safeDuration(aw.sw, agent.ReadyAt, agent.StartedAt))
		aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, "Warning: A startup script exited with an error and your workspace may be incomplete.")
		aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#startup-script-exited-with-an-error", aw.opts.DocsURL)))
	default:
		switch {
		case agent.LifecycleState.Starting():
			aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, "Notice: The startup scripts are still running and your workspace may be incomplete.")
			aw.sw.Log(time.Time{}, codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#your-workspace-may-be-incomplete", aw.opts.DocsURL)))
			// Note: We don't complete or fail the stage here, it's
			// intentionally left open to indicate this stage didn't
			// complete.
		case agent.LifecycleState.ShuttingDown():
			// We no longer know if the startup script failed or not,
			// but we need to tell the user something.
			aw.sw.Complete(stage, safeDuration(aw.sw, agent.ReadyAt, agent.StartedAt))
			return errAgentShuttingDown
		}
	}

	return nil
}
// streamLogs handles streaming or fetching startup logs.
//
//nolint:revive // Control flag is acceptable for internal method.
func (aw *agentWaiter) streamLogs(ctx context.Context, agent codersdk.WorkspaceAgent, follow bool, fetchedAgent chan fetchAgentResult) (codersdk.WorkspaceAgent, error) {
	logStream, logsCloser, err := aw.opts.FetchLogs(ctx, agent.ID, 0, follow)
	if err != nil {
		return agent, xerrors.Errorf("fetch workspace agent startup logs: %w", err)
	}
	defer logsCloser.Close()

	var lastLog codersdk.WorkspaceAgentLog

	// If not following, we don't need to watch for agent state changes.
	var fetchedAgentWhileFollowing chan fetchAgentResult
	if follow {
		fetchedAgentWhileFollowing = fetchedAgent
	}

	for {
		select {
		case <-ctx.Done():
			return agent, ctx.Err()
		case f := <-fetchedAgentWhileFollowing:
			if f.err != nil {
				return agent, xerrors.Errorf("fetch: %w", f.err)
			}
			agent = f.agent

			// If the agent is no longer starting, stop following
			// logs because FetchLogs will keep streaming forever.
			// We do one last non-follow request to ensure we have
			// fetched all logs.
			if !agent.LifecycleState.Starting() {
				_ = logsCloser.Close()
				fetchedAgentWhileFollowing = nil

				logStream, logsCloser, err = aw.opts.FetchLogs(ctx, agent.ID, lastLog.ID, false)
				if err != nil {
					return agent, xerrors.Errorf("fetch workspace agent startup logs: %w", err)
				}
			disconnectedAt := agent.DisconnectedAt
			for agent.Status == codersdk.WorkspaceAgentDisconnected {
				if agent, err = fetch(); err != nil {
					return xerrors.Errorf("fetch: %w", err)
				}
			}
				// Logs are already primed, so we can call close.
				_ = logsCloser.Close()
			}
		case logs, ok := <-logStream:
			if !ok {
				return agent, nil
			}
			for _, log := range logs {
				source, hasSource := aw.logSources[log.SourceID]
				output := log.Output
				if hasSource && source.DisplayName != "" {
					output = source.DisplayName + ": " + output
				}
				aw.sw.Log(log.CreatedAt, log.Level, output)
				lastLog = log
			}
			sw.Complete(stage, safeDuration(sw, agent.LastConnectedAt, disconnectedAt))
		}
	}
}

// waitForReconnection handles the Disconnected state.
// Returns when agent reconnects along with whether to show startup logs.
func (aw *agentWaiter) waitForReconnection(ctx context.Context, agent codersdk.WorkspaceAgent) (codersdk.WorkspaceAgent, bool, error) {
	// If the agent was still starting during disconnect, we'll
	// show startup logs.
	showStartupLogs := agent.LifecycleState.Starting()

	stage := "The workspace agent lost connection"
	aw.sw.Start(stage)
	aw.sw.Log(time.Now(), codersdk.LogLevelWarn, "Wait for it to reconnect or restart your workspace.")
	aw.sw.Log(time.Now(), codersdk.LogLevelWarn, troubleshootingMessage(agent, fmt.Sprintf("%s/admin/templates/troubleshooting#agent-connection-issues", aw.opts.DocsURL)))

	disconnectedAt := agent.DisconnectedAt
	agent, err := aw.pollWhile(ctx, agent, func(agent codersdk.WorkspaceAgent) bool {
		return agent.Status == codersdk.WorkspaceAgentDisconnected
	})
	if err != nil {
		return agent, showStartupLogs, err
	}
	aw.sw.Complete(stage, safeDuration(aw.sw, agent.LastConnectedAt, disconnectedAt))

	return agent, showStartupLogs, nil
}

// pollWhile polls the agent while the condition is true. It fetches the agent
// on each iteration and returns the updated agent when the condition is false,
// the context is canceled, or an error occurs.
func (aw *agentWaiter) pollWhile(ctx context.Context, agent codersdk.WorkspaceAgent, cond func(agent codersdk.WorkspaceAgent) bool) (codersdk.WorkspaceAgent, error) {
	var err error
	for cond(agent) {
		agent, err = aw.fetchAgent(ctx)
		if err != nil {
			return agent, xerrors.Errorf("fetch: %w", err)
		}
	}
	if err = ctx.Err(); err != nil {
		return agent, err
	}
	return agent, nil
}

func troubleshootingMessage(agent codersdk.WorkspaceAgent, url string) string {
	m := "For more information and troubleshooting, see " + url
	if agent.TroubleshootingURL != "" {
@@ -268,87 +268,6 @@ func TestAgent(t *testing.T) {
				"For more information and troubleshooting, see",
			},
		},
		{
			// Verify that in non-blocking mode (Wait=false), startup script
			// logs are suppressed. This prevents dumping a wall of logs on
			// users who explicitly pass --wait=no. See issue #13580.
			name: "No logs in non-blocking mode",
			opts: cliui.AgentOptions{
				FetchInterval: time.Millisecond,
				Wait:          false,
			},
			iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
				func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
					agent.Status = codersdk.WorkspaceAgentConnected
					agent.FirstConnectedAt = ptr.Ref(time.Now())
					agent.StartedAt = ptr.Ref(time.Now())
					agent.LifecycleState = codersdk.WorkspaceAgentLifecycleStartError
					agent.ReadyAt = ptr.Ref(time.Now())
					// These logs should NOT be shown in non-blocking mode.
					logs <- []codersdk.WorkspaceAgentLog{
						{
							CreatedAt: time.Now(),
							Output:    "Startup script log 1",
						},
						{
							CreatedAt: time.Now(),
							Output:    "Startup script log 2",
						},
					}
					return nil
				},
			},
			// Note: Log content like "Startup script log 1" should NOT appear here.
			want: []string{
				"⧗ Running workspace agent startup scripts (non-blocking)",
				"✘ Running workspace agent startup scripts (non-blocking)",
				"Warning: A startup script exited with an error and your workspace may be incomplete.",
				"For more information and troubleshooting, see",
			},
		},
		{
			// Verify that even after waiting for the agent to connect, logs
			// are still suppressed in non-blocking mode. See issue #13580.
			name: "No logs after connection wait in non-blocking mode",
			opts: cliui.AgentOptions{
				FetchInterval: time.Millisecond,
				Wait:          false,
			},
			iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
				func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
					agent.Status = codersdk.WorkspaceAgentConnecting
					return nil
				},
				func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
					return waitLines(t, output, "⧗ Waiting for the workspace agent to connect")
				},
				func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
					agent.Status = codersdk.WorkspaceAgentConnected
					agent.FirstConnectedAt = ptr.Ref(time.Now())
					agent.StartedAt = ptr.Ref(time.Now())
					agent.LifecycleState = codersdk.WorkspaceAgentLifecycleStartError
					agent.ReadyAt = ptr.Ref(time.Now())
					// These logs should NOT be shown in non-blocking mode,
					// even though we waited for connection.
					logs <- []codersdk.WorkspaceAgentLog{
						{
							CreatedAt: time.Now(),
							Output:    "Startup script log 1",
						},
					}
					return nil
				},
			},
			// Note: Log content should NOT appear here despite waiting for connection.
			want: []string{
				"⧗ Waiting for the workspace agent to connect",
				"✔ Waiting for the workspace agent to connect",
				"⧗ Running workspace agent startup scripts (non-blocking)",
				"✘ Running workspace agent startup scripts (non-blocking)",
				"Warning: A startup script exited with an error and your workspace may be incomplete.",
				"For more information and troubleshooting, see",
			},
		},
		{
			name: "Error when shutting down",
			opts: cliui.AgentOptions{
@@ -566,70 +485,6 @@ func TestAgent(t *testing.T) {
		}
		require.NoError(t, cmd.Invoke().Run())
	})

	t.Run("ContextCancelDuringLogStreaming", func(t *testing.T) {
		t.Parallel()

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()

		agent := codersdk.WorkspaceAgent{
			ID:               uuid.New(),
			Status:           codersdk.WorkspaceAgentConnected,
			FirstConnectedAt: ptr.Ref(time.Now()),
			CreatedAt:        time.Now(),
			LifecycleState:   codersdk.WorkspaceAgentLifecycleStarting,
			StartedAt:        ptr.Ref(time.Now()),
		}

		logs := make(chan []codersdk.WorkspaceAgentLog, 1)
		logStreamStarted := make(chan struct{})

		cmd := &serpent.Command{
			Handler: func(inv *serpent.Invocation) error {
				return cliui.Agent(inv.Context(), io.Discard, agent.ID, cliui.AgentOptions{
					FetchInterval: time.Millisecond,
					Wait:          true,
					Fetch: func(_ context.Context, _ uuid.UUID) (codersdk.WorkspaceAgent, error) {
						return agent, nil
					},
					FetchLogs: func(_ context.Context, _ uuid.UUID, _ int64, follow bool) (<-chan []codersdk.WorkspaceAgentLog, io.Closer, error) {
						// Signal that log streaming has started.
						select {
						case <-logStreamStarted:
						default:
							close(logStreamStarted)
						}
						return logs, closeFunc(func() error { return nil }), nil
					},
				})
			},
		}

		inv := cmd.Invoke().WithContext(ctx)
		done := make(chan error, 1)
		go func() {
			done <- inv.Run()
		}()

		// Wait for log streaming to start.
		select {
		case <-logStreamStarted:
		case <-time.After(testutil.WaitShort):
			t.Fatal("timed out waiting for log streaming to start")
		}

		// Cancel the context while streaming logs.
		cancel()

		// Verify that the agent function returns with a context error.
		select {
		case err := <-done:
			require.ErrorIs(t, err, context.Canceled)
		case <-time.After(testutil.WaitShort):
			t.Fatal("timed out waiting for agent to return after context cancellation")
		}
	})
}

func TestPeerDiagnostics(t *testing.T) {
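The `ContextCancelDuringLogStreaming` test above signals "streaming has started" by closing a channel at most once, guarding the `close` with a `select`/`default` so a second call to `FetchLogs` doesn't panic on a double close. A standalone sketch of that idiom (only safe when all callers run in one goroutine; for concurrent callers `sync.Once` is the usual choice):

```go
package main

import "fmt"

// signalOnce closes ch the first time it is called; later calls are no-ops.
// Receiving from a closed channel succeeds immediately, so the first select
// case fires once ch is closed and the default (close) branch is skipped.
func signalOnce(ch chan struct{}) {
	select {
	case <-ch:
		// Already closed: nothing to do.
	default:
		close(ch)
	}
}

func main() {
	started := make(chan struct{})
	signalOnce(started) // closes the channel
	signalOnce(started) // no-op instead of a double-close panic
	select {
	case <-started:
		fmt.Println("started") // prints "started"
	default:
		fmt.Println("not started")
	}
}
```

Note the guard is racy if two goroutines call it simultaneously (both can take the `default` branch); the test only calls it from the single `FetchLogs` callback, where that race cannot occur.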
+11
-7
@@ -46,13 +46,6 @@ OPTIONS:
          the workspace serves malicious JavaScript. This is recommended for
          security purposes if a --wildcard-access-url is configured.

      --disable-workspace-sharing bool, $CODER_DISABLE_WORKSPACE_SHARING
          Disable workspace sharing (requires the "workspace-sharing" experiment
          to be enabled). Workspace ACL checking is disabled and only owners can
          have ssh, apps and terminal access to workspaces. Access based on the
          'owner' role is also allowed unless disabled via
          --disable-owner-workspace-access.

      --swagger-enable bool, $CODER_SWAGGER_ENABLE
          Expose the swagger endpoint via /swagger.

@@ -125,12 +118,23 @@ AI BRIDGE OPTIONS:
          requests (requires the "oauth2" and "mcp-server-http" experiments to
          be enabled).

      --aibridge-max-concurrency int, $CODER_AIBRIDGE_MAX_CONCURRENCY (default: 0)
          Maximum number of concurrent AI Bridge requests. Set to 0 to disable
          (unlimited).

      --aibridge-openai-base-url string, $CODER_AIBRIDGE_OPENAI_BASE_URL (default: https://api.openai.com/v1/)
          The base URL of the OpenAI API.

      --aibridge-openai-key string, $CODER_AIBRIDGE_OPENAI_KEY
          The key to authenticate against the OpenAI API.

      --aibridge-rate-limit int, $CODER_AIBRIDGE_RATE_LIMIT (default: 0)
          Maximum number of AI Bridge requests per rate window. Set to 0 to
          disable rate limiting.

      --aibridge-rate-window duration, $CODER_AIBRIDGE_RATE_WINDOW (default: 1m)
          Duration of the rate limiting window for AI Bridge requests.

CLIENT OPTIONS:
These options change the behavior of how clients interact with the Coder.
Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.
+11
-6
@@ -497,12 +497,6 @@ disablePathApps: false
# workspaces.
# (default: <unset>, type: bool)
disableOwnerWorkspaceAccess: false
# Disable workspace sharing (requires the "workspace-sharing" experiment to be
# enabled). Workspace ACL checking is disabled and only owners can have ssh, apps
# and terminal access to workspaces. Access based on the 'owner' role is also
# allowed unless disabled via --disable-owner-workspace-access.
# (default: <unset>, type: bool)
disableWorkspaceSharing: false
# These options change the behavior of how clients interact with the Coder.
# Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.
client:
@@ -748,6 +742,17 @@ aibridge:
  # (token, prompt, tool use).
  # (default: 60d, type: duration)
  retention: 1440h0m0s
  # Maximum number of concurrent AI Bridge requests. Set to 0 to disable
  # (unlimited).
  # (default: 0, type: int)
  max_concurrency: 0
  # Maximum number of AI Bridge requests per rate window. Set to 0 to disable rate
  # limiting.
  # (default: 0, type: int)
  rate_limit: 0
  # Duration of the rate limiting window for AI Bridge requests.
  # (default: 1m, type: duration)
  rate_window: 1m0s
# Configure data retention policies for various database tables. Retention
# policies automatically purge old data to reduce database size and improve
# performance. Setting a retention duration to 0 disables automatic purging for
Generated
+11
-10
@@ -1290,14 +1290,8 @@ const docTemplate = `{
                }
            ],
            "responses": {
                "200": {
                    "description": "Returns existing file if duplicate",
                    "schema": {
                        "$ref": "#/definitions/codersdk.UploadResponse"
                    }
                },
                "201": {
                    "description": "Returns newly created file",
                    "description": "Created",
                    "schema": {
                        "$ref": "#/definitions/codersdk.UploadResponse"
                    }
@@ -11883,9 +11877,19 @@ const docTemplate = `{
                "inject_coder_mcp_tools": {
                    "type": "boolean"
                },
                "max_concurrency": {
                    "description": "Overload protection settings.",
                    "type": "integer"
                },
                "openai": {
                    "$ref": "#/definitions/codersdk.AIBridgeOpenAIConfig"
                },
                "rate_limit": {
                    "type": "integer"
                },
                "rate_window": {
                    "type": "integer"
                },
                "retention": {
                    "type": "integer"
                }
@@ -14214,9 +14218,6 @@ const docTemplate = `{
                "disable_path_apps": {
                    "type": "boolean"
                },
                "disable_workspace_sharing": {
                    "type": "boolean"
                },
                "docs_url": {
                    "$ref": "#/definitions/serpent.URL"
                },
Generated
+11
-10
@@ -1116,14 +1116,8 @@
                }
            ],
            "responses": {
                "200": {
                    "description": "Returns existing file if duplicate",
                    "schema": {
                        "$ref": "#/definitions/codersdk.UploadResponse"
                    }
                },
                "201": {
                    "description": "Returns newly created file",
                    "description": "Created",
                    "schema": {
                        "$ref": "#/definitions/codersdk.UploadResponse"
                    }
@@ -10549,9 +10543,19 @@
                "inject_coder_mcp_tools": {
                    "type": "boolean"
                },
                "max_concurrency": {
                    "description": "Overload protection settings.",
                    "type": "integer"
                },
                "openai": {
                    "$ref": "#/definitions/codersdk.AIBridgeOpenAIConfig"
                },
                "rate_limit": {
                    "type": "integer"
                },
                "rate_window": {
                    "type": "integer"
                },
                "retention": {
                    "type": "integer"
                }
@@ -12798,9 +12802,6 @@
                "disable_path_apps": {
                    "type": "boolean"
                },
                "disable_workspace_sharing": {
                    "type": "boolean"
                },
                "docs_url": {
                    "$ref": "#/definitions/serpent.URL"
                },
@@ -333,10 +333,6 @@ func New(options *Options) *API {
		})
	}

	if options.DeploymentValues.DisableWorkspaceSharing {
		rbac.SetWorkspaceACLDisabled(true)
	}

	if options.PrometheusRegistry == nil {
		options.PrometheusRegistry = prometheus.NewRegistry()
	}
@@ -439,16 +439,6 @@ func Workspace(t testing.TB, db database.Store, orig database.WorkspaceTable) da
		require.NoError(t, err, "set workspace as dormant")
		workspace.DormantAt = orig.DormantAt
	}
	if len(orig.UserACL) > 0 || len(orig.GroupACL) > 0 {
		err = db.UpdateWorkspaceACLByID(genCtx, database.UpdateWorkspaceACLByIDParams{
			ID:       workspace.ID,
			UserACL:  orig.UserACL,
			GroupACL: orig.GroupACL,
		})
		require.NoError(t, err, "set workspace ACL")
		workspace.UserACL = orig.UserACL
		workspace.GroupACL = orig.GroupACL
	}
	return workspace
}
@@ -430,16 +430,9 @@ func (w WorkspaceTable) RBACObject() rbac.Object {
		return w.DormantRBAC()
	}

	obj := rbac.ResourceWorkspace.
		WithID(w.ID).
	return rbac.ResourceWorkspace.WithID(w.ID).
		InOrg(w.OrganizationID).
		WithOwner(w.OwnerID.String())

	if rbac.WorkspaceACLDisabled() {
		return obj
	}

	return obj.
		WithOwner(w.OwnerID.String()).
		WithGroupACL(w.GroupACL.RBACACL()).
		WithACLUserList(w.UserACL.RBACACL())
}
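The `RBACObject` change above consults a package-level switch (`rbac.WorkspaceACLDisabled`) before attaching ACLs. The rbac package internals are not part of this diff, so the following is only a minimal sketch of that pattern under the assumption that the flag is a process-wide atomic boolean; type and field names here are simplified stand-ins, not the real `rbac.Object`.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Hypothetical stand-in for the rbac package's global switch.
var workspaceACLDisabled atomic.Bool

func SetWorkspaceACLDisabled(v bool) { workspaceACLDisabled.Store(v) }
func WorkspaceACLDisabled() bool     { return workspaceACLDisabled.Load() }

// Object is a simplified stand-in for an RBAC object.
type Object struct {
	ACLUserList map[string][]string
}

// withACLs attaches ACLs only when the feature is enabled,
// mirroring the early return in RBACObject above.
func (o Object) withACLs(users map[string][]string) Object {
	if WorkspaceACLDisabled() {
		return o // ACLs omitted entirely
	}
	o.ACLUserList = users
	return o
}

func main() {
	acl := map[string][]string{"user-1": {"ssh"}}

	SetWorkspaceACLDisabled(true)
	if obj := (Object{}).withACLs(acl); obj.ACLUserList != nil {
		panic("ACLs should be omitted when disabled")
	}

	SetWorkspaceACLDisabled(false)
	obj := (Object{}).withACLs(acl)
	fmt.Println(len(obj.ACLUserList)) // 1
}
```

Note that a process-wide flag like this is only safe to toggle in tests with cleanup, which is why the test below pairs `SetWorkspaceACLDisabled(true)` with a `t.Cleanup` reset.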
@@ -143,45 +143,6 @@ func TestAPIKeyScopesExpand(t *testing.T) {
	})
}

//nolint:tparallel,paralleltest
func TestWorkspaceACLDisabled(t *testing.T) {
	uid := uuid.NewString()
	gid := uuid.NewString()

	ws := WorkspaceTable{
		ID:             uuid.New(),
		OrganizationID: uuid.New(),
		OwnerID:        uuid.New(),
		UserACL: WorkspaceACL{
			uid: WorkspaceACLEntry{Permissions: []policy.Action{policy.ActionSSH}},
		},
		GroupACL: WorkspaceACL{
			gid: WorkspaceACLEntry{Permissions: []policy.Action{policy.ActionSSH}},
		},
	}

	t.Run("ACLsOmittedWhenDisabled", func(t *testing.T) {
		rbac.SetWorkspaceACLDisabled(true)
		t.Cleanup(func() { rbac.SetWorkspaceACLDisabled(false) })

		obj := ws.RBACObject()

		require.Empty(t, obj.ACLUserList, "user ACLs should be empty when disabled")
		require.Empty(t, obj.ACLGroupList, "group ACLs should be empty when disabled")
	})

	t.Run("ACLsIncludedWhenEnabled", func(t *testing.T) {
		rbac.SetWorkspaceACLDisabled(false)

		obj := ws.RBACObject()

		require.NotEmpty(t, obj.ACLUserList, "user ACLs should be present when enabled")
		require.NotEmpty(t, obj.ACLGroupList, "group ACLs should be present when enabled")
		require.Contains(t, obj.ACLUserList, uid)
		require.Contains(t, obj.ACLGroupList, gid)
	})
}

// Helpers
func requirePermission(t *testing.T, s rbac.Scope, resource string, action policy.Action) {
	t.Helper()
@@ -30,34 +30,11 @@ import (
// Forgetting to do so will result in a memory leak.
type Renderer interface {
	Render(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics)
	RenderWithoutCache(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics)
	Close()
}

var ErrTemplateVersionNotReady = xerrors.New("template version job not finished")

// RenderCache is an interface for caching preview.Preview results.
type RenderCache interface {
	get(templateVersionID, ownerID uuid.UUID, parameters map[string]string) (*preview.Output, bool)
	put(templateVersionID, ownerID uuid.UUID, parameters map[string]string, output *preview.Output)
	Close()
}

// noopRenderCache is a no-op implementation of RenderCache that doesn't cache anything.
type noopRenderCache struct{}

func (noopRenderCache) get(uuid.UUID, uuid.UUID, map[string]string) (*preview.Output, bool) {
	return nil, false
}

func (noopRenderCache) put(uuid.UUID, uuid.UUID, map[string]string, *preview.Output) {
	// no-op
}

func (noopRenderCache) Close() {
	// no-op
}

// loader is used to load the necessary coder objects for rendering a template
// version's parameters. The output is a Renderer, which is the object that uses
// the cached objects to render the template version's parameters.
@@ -69,9 +46,6 @@ type loader struct {
	job                    *database.ProvisionerJob
	terraformValues        *database.TemplateVersionTerraformValue
	templateVariableValues *[]database.TemplateVersionVariable

	// renderCache caches preview.Preview results
	renderCache RenderCache
}

// Prepare is the entrypoint for this package. It loads the necessary objects &
@@ -80,7 +54,6 @@ type loader struct {
func Prepare(ctx context.Context, db database.Store, cache files.FileAcquirer, versionID uuid.UUID, options ...func(r *loader)) (Renderer, error) {
	l := &loader{
		templateVersionID: versionID,
		renderCache:       noopRenderCache{},
	}

	for _, opt := range options {
@@ -118,12 +91,6 @@ func WithTerraformValues(values database.TemplateVersionTerraformValue) func(r *
	}
}

func WithRenderCache(cache RenderCache) func(r *loader) {
	return func(r *loader) {
		r.renderCache = cache
	}
}

func (r *loader) loadData(ctx context.Context, db database.Store) error {
	if r.templateVersion == nil {
		tv, err := db.GetTemplateVersionByID(ctx, r.templateVersionID)
@@ -260,21 +227,6 @@ type dynamicRenderer struct {
}

func (r *dynamicRenderer) Render(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
	return r.render(ctx, ownerID, values, true)
}

func (r *dynamicRenderer) RenderWithoutCache(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
	return r.render(ctx, ownerID, values, false)
}

func (r *dynamicRenderer) render(ctx context.Context, ownerID uuid.UUID, values map[string]string, useCache bool) (*preview.Output, hcl.Diagnostics) {
	// Check cache first if enabled
	if useCache {
		if cached, ok := r.data.renderCache.get(r.data.templateVersionID, ownerID, values); ok {
			return cached, nil
		}
	}

	// Always start with the cached error, if we have one.
	ownerErr := r.ownerErrors[ownerID]
	if ownerErr == nil {
@@ -306,14 +258,7 @@ func (r *dynamicRenderer) render(ctx context.Context, ownerID uuid.UUID, values
		Logger: slog.New(slog.DiscardHandler),
	}

	output, diags := preview.Preview(ctx, input, r.templateFS)

	// Store in cache if successful and caching is enabled
	if useCache && !diags.HasErrors() {
		r.data.renderCache.put(r.data.templateVersionID, ownerID, values, output)
	}

	return output, diags
	return preview.Preview(ctx, input, r.templateFS)
}

func (r *dynamicRenderer) getWorkspaceOwnerData(ctx context.Context, ownerID uuid.UUID) error {
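The hunks above wire caching into the loader through a functional option: `Prepare` defaults to `noopRenderCache{}`, and `WithRenderCache` swaps in a real implementation. A minimal, self-contained sketch of that option pattern (with simplified stand-in types — the real `RenderCache` keys on template version, owner, and parameters):

```go
package main

import "fmt"

// renderCache is a simplified stand-in for the RenderCache interface above.
type renderCache interface {
	get(key string) (string, bool)
	put(key, val string)
}

// noopCache mirrors noopRenderCache: every lookup misses, puts are dropped.
type noopCache struct{}

func (noopCache) get(string) (string, bool) { return "", false }
func (noopCache) put(string, string)        {}

// mapCache is a trivial real implementation for the sketch.
type mapCache map[string]string

func (m mapCache) get(k string) (string, bool) { v, ok := m[k]; return v, ok }
func (m mapCache) put(k, v string)             { m[k] = v }

type loader struct{ cache renderCache }

// withRenderCache overrides the default no-op cache, like WithRenderCache.
func withRenderCache(c renderCache) func(*loader) {
	return func(l *loader) { l.cache = c }
}

// prepare mirrors Prepare: safe default (caching disabled), options applied last.
func prepare(opts ...func(*loader)) *loader {
	l := &loader{cache: noopCache{}}
	for _, opt := range opts {
		opt(l)
	}
	return l
}

func main() {
	l := prepare() // no options → no-op cache, every lookup misses
	l.cache.put("k", "v")
	_, ok := l.cache.get("k")
	fmt.Println(ok) // false

	l = prepare(withRenderCache(mapCache{}))
	l.cache.put("k", "v")
	v, _ := l.cache.get("k")
	fmt.Println(v) // v
}
```

The no-op default is what lets this commit delete the cache cleanly: callers that never pass `WithRenderCache` behave identically whether or not the cache package exists.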
@@ -1,214 +0,0 @@
package dynamicparameters

import (
	"context"
	"fmt"
	"sort"
	"sync"
	"time"

	"github.com/cespare/xxhash/v2"
	"github.com/google/uuid"
	"github.com/prometheus/client_golang/prometheus"

	"github.com/coder/preview"
	"github.com/coder/quartz"
)

// RenderCacheImpl is a simple in-memory cache for preview.Preview results.
// It caches based on (templateVersionID, ownerID, parameterValues).
type RenderCacheImpl struct {
	mu      sync.RWMutex
	entries map[cacheKey]*cacheEntry

	// Metrics (optional)
	cacheHits   prometheus.Counter
	cacheMisses prometheus.Counter
	cacheSize   prometheus.Gauge

	// TTL cleanup
	clock    quartz.Clock
	ttl      time.Duration
	stopOnce sync.Once
	stopCh   chan struct{}
	doneCh   chan struct{}
}

type cacheEntry struct {
	output    *preview.Output
	timestamp time.Time
}

type cacheKey struct {
	templateVersionID uuid.UUID
	ownerID           uuid.UUID
	parameterHash     uint64
}

// NewRenderCache creates a new render cache with a default TTL of 1 hour.
func NewRenderCache() *RenderCacheImpl {
	return newCache(quartz.NewReal(), time.Hour, nil, nil, nil)
}

// NewRenderCacheWithMetrics creates a new render cache with Prometheus metrics.
func NewRenderCacheWithMetrics(cacheHits, cacheMisses prometheus.Counter, cacheSize prometheus.Gauge) *RenderCacheImpl {
	return newCache(quartz.NewReal(), time.Hour, cacheHits, cacheMisses, cacheSize)
}

func newCache(clock quartz.Clock, ttl time.Duration, cacheHits, cacheMisses prometheus.Counter, cacheSize prometheus.Gauge) *RenderCacheImpl {
	c := &RenderCacheImpl{
		entries:     make(map[cacheKey]*cacheEntry),
		clock:       clock,
		cacheHits:   cacheHits,
		cacheMisses: cacheMisses,
		cacheSize:   cacheSize,
		ttl:         ttl,
		stopCh:      make(chan struct{}),
		doneCh:      make(chan struct{}),
	}

	// Start cleanup goroutine
	go c.cleanupLoop(context.Background())

	return c
}

// NewRenderCacheForTest creates a new render cache for testing purposes.
func NewRenderCacheForTest() *RenderCacheImpl {
	return NewRenderCache()
}

// Close stops the cleanup goroutine and waits for it to finish.
func (c *RenderCacheImpl) Close() {
	c.stopOnce.Do(func() {
		close(c.stopCh)
		<-c.doneCh
	})
}

func (c *RenderCacheImpl) get(templateVersionID, ownerID uuid.UUID, parameters map[string]string) (*preview.Output, bool) {
	key := makeKey(templateVersionID, ownerID, parameters)
	c.mu.RLock()
	entry, ok := c.entries[key]
	c.mu.RUnlock()

	if !ok {
		// Record miss
		if c.cacheMisses != nil {
			c.cacheMisses.Inc()
		}
		return nil, false
	}

	// Check if entry has expired
	if c.clock.Since(entry.timestamp) > c.ttl {
		// Expired entry, treat as miss
		if c.cacheMisses != nil {
			c.cacheMisses.Inc()
		}
		return nil, false
	}

	// Record hit and refresh timestamp
	if c.cacheHits != nil {
		c.cacheHits.Inc()
	}

	// Refresh timestamp on hit to keep frequently accessed entries alive
	c.mu.Lock()
	entry.timestamp = c.clock.Now()
	c.mu.Unlock()

	return entry.output, true
}

func (c *RenderCacheImpl) put(templateVersionID, ownerID uuid.UUID, parameters map[string]string, output *preview.Output) {
	key := makeKey(templateVersionID, ownerID, parameters)
	c.mu.Lock()
	defer c.mu.Unlock()

	c.entries[key] = &cacheEntry{
		output:    output,
		timestamp: c.clock.Now(),
	}

	// Update cache size metric
	if c.cacheSize != nil {
		c.cacheSize.Set(float64(len(c.entries)))
	}
}

func makeKey(templateVersionID, ownerID uuid.UUID, parameters map[string]string) cacheKey {
	return cacheKey{
		templateVersionID: templateVersionID,
		ownerID:           ownerID,
		parameterHash:     hashParameters(parameters),
	}
}

// hashParameters creates a deterministic hash of the parameter map.
func hashParameters(params map[string]string) uint64 {
	if len(params) == 0 {
		return 0
	}

	// Sort keys for deterministic hashing
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	// Hash the sorted key-value pairs
	var b string
	for _, k := range keys {
		b += fmt.Sprintf("%s:%s,", k, params[k])
	}

	return xxhash.Sum64String(b)
}

// cleanupLoop runs periodically to remove expired cache entries.
func (c *RenderCacheImpl) cleanupLoop(ctx context.Context) {
	defer close(c.doneCh)

	// Run cleanup every 15 minutes
	cleanupFunc := func() error {
		c.cleanup()
		return nil
	}

	// Run once immediately
	_ = cleanupFunc()

	// Create a cancellable context for the ticker
	tickerCtx, cancel := context.WithCancel(ctx)
	defer cancel()

	// Create ticker for periodic cleanup
	tkr := c.clock.TickerFunc(tickerCtx, 15*time.Minute, cleanupFunc, "render-cache-cleanup")

	// Wait for stop signal
	<-c.stopCh
	cancel()

	_ = tkr.Wait()
}

// cleanup removes expired entries from the cache.
func (c *RenderCacheImpl) cleanup() {
	c.mu.Lock()
	defer c.mu.Unlock()

	now := c.clock.Now()
	for key, entry := range c.entries {
		if now.Sub(entry.timestamp) > c.ttl {
			delete(c.entries, key)
		}
	}

	// Update cache size metric after cleanup
	if c.cacheSize != nil {
		c.cacheSize.Set(float64(len(c.entries)))
	}
}
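The `hashParameters` function in the deleted file above relies on one detail worth calling out: Go map iteration order is randomized, so the keys must be sorted before hashing or identical maps would hash differently. A self-contained sketch of the same idea, with the stdlib's `hash/fnv` substituted for the `xxhash` dependency used in the original:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// hashParams deterministically hashes a parameter map, as hashParameters
// does above. fnv is a stand-in for xxhash so the sketch needs no deps.
func hashParams(params map[string]string) uint64 {
	if len(params) == 0 {
		return 0
	}

	// Sort keys: Go map iteration order is random per run.
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	// Hash the key:value pairs in sorted order.
	h := fnv.New64a()
	for _, k := range keys {
		fmt.Fprintf(h, "%s:%s,", k, params[k])
	}
	return h.Sum64()
}

func main() {
	a := map[string]string{"region": "us-west-2", "size": "t3.micro"}
	b := map[string]string{"size": "t3.micro", "region": "us-west-2"}
	fmt.Println(hashParams(a) == hashParams(b)) // true: order-independent
	fmt.Println(hashParams(a) == hashParams(map[string]string{"region": "eu"})) // false
}
```

This order-independence is exactly what `TestRenderCache_ParameterHashConsistency` below asserts: the same parameters in a different literal order must hit the same cache entry. Note the `key:value,` concatenation is ambiguous for pathological keys containing `:` or `,`, which is acceptable for a best-effort cache key but not for a security-sensitive one.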
@@ -1,354 +0,0 @@
package dynamicparameters

import (
	"testing"
	"time"

	"github.com/google/uuid"
	"github.com/prometheus/client_golang/prometheus"
	promtestutil "github.com/prometheus/client_golang/prometheus/testutil"
	"github.com/stretchr/testify/require"

	"github.com/coder/coder/v2/testutil"
	"github.com/coder/preview"
	previewtypes "github.com/coder/preview/types"
	"github.com/coder/quartz"
)

func TestRenderCache_BasicOperations(t *testing.T) {
	t.Parallel()

	cache := NewRenderCache()
	defer cache.Close()
	templateVersionID := uuid.New()
	ownerID := uuid.New()
	params := map[string]string{"region": "us-west-2"}

	// Cache should be empty initially
	_, ok := cache.get(templateVersionID, ownerID, params)
	require.False(t, ok, "cache should be empty initially")

	// Put an entry in the cache
	expectedOutput := &preview.Output{
		Parameters: []previewtypes.Parameter{
			{
				ParameterData: previewtypes.ParameterData{
					Name: "region",
					Type: previewtypes.ParameterTypeString,
				},
			},
		},
	}
	cache.put(templateVersionID, ownerID, params, expectedOutput)

	// Get should now return the cached value
	cachedOutput, ok := cache.get(templateVersionID, ownerID, params)
	require.True(t, ok, "cache should contain the entry")
	require.Same(t, expectedOutput, cachedOutput, "should return same pointer")
}

func TestRenderCache_DifferentKeysAreSeparate(t *testing.T) {
	t.Parallel()

	cache := NewRenderCache()
	defer cache.Close()

	templateVersion1 := uuid.New()
	templateVersion2 := uuid.New()
	owner1 := uuid.New()
	owner2 := uuid.New()
	params := map[string]string{"region": "us-west-2"}

	output1 := &preview.Output{}
	output2 := &preview.Output{}
	output3 := &preview.Output{}

	// Put different entries for different keys
	cache.put(templateVersion1, owner1, params, output1)
	cache.put(templateVersion2, owner1, params, output2)
	cache.put(templateVersion1, owner2, params, output3)

	// Verify each key returns its own entry
	cached1, ok1 := cache.get(templateVersion1, owner1, params)
	require.True(t, ok1)
	require.Same(t, output1, cached1)

	cached2, ok2 := cache.get(templateVersion2, owner1, params)
	require.True(t, ok2)
	require.Same(t, output2, cached2)

	cached3, ok3 := cache.get(templateVersion1, owner2, params)
	require.True(t, ok3)
	require.Same(t, output3, cached3)
}

func TestRenderCache_ParameterHashConsistency(t *testing.T) {
	t.Parallel()

	cache := NewRenderCache()
	defer cache.Close()

	templateVersionID := uuid.New()
	ownerID := uuid.New()

	// Parameters in different order should produce same cache key
	params1 := map[string]string{"a": "1", "b": "2", "c": "3"}
	params2 := map[string]string{"c": "3", "a": "1", "b": "2"}

	output := &preview.Output{}
	cache.put(templateVersionID, ownerID, params1, output)

	// Should hit cache even with different parameter order
	cached, ok := cache.get(templateVersionID, ownerID, params2)
	require.True(t, ok, "different parameter order should still hit cache")
	require.Same(t, output, cached)
}

func TestRenderCache_EmptyParameters(t *testing.T) {
	t.Parallel()

	cache := NewRenderCache()
	defer cache.Close()

	templateVersionID := uuid.New()
	ownerID := uuid.New()

	// Test with empty parameters
	emptyParams := map[string]string{}
	output := &preview.Output{}

	cache.put(templateVersionID, ownerID, emptyParams, output)

	cached, ok := cache.get(templateVersionID, ownerID, emptyParams)
	require.True(t, ok)
	require.Same(t, output, cached)
}

func TestRenderCache_PrebuildScenario(t *testing.T) {
	t.Parallel()

	// This test simulates the prebuild scenario where multiple prebuilds
	// are created from the same template version with the same preset parameters.
	cache := NewRenderCache()
	defer cache.Close()

	// In prebuilds, all instances use the same fixed ownerID
	prebuildOwnerID := uuid.MustParse("c42fdf75-3097-471c-8c33-fb52454d81c0") // database.PrebuildsSystemUserID
	templateVersionID := uuid.New()
	presetParams := map[string]string{
		"instance_type": "t3.micro",
		"region":        "us-west-2",
	}

	output := &preview.Output{}

	// First prebuild - cache miss
	_, ok := cache.get(templateVersionID, prebuildOwnerID, presetParams)
	require.False(t, ok, "first prebuild should miss cache")

	cache.put(templateVersionID, prebuildOwnerID, presetParams, output)

	// Second prebuild with same template version and preset - cache hit
	cached2, ok2 := cache.get(templateVersionID, prebuildOwnerID, presetParams)
	require.True(t, ok2, "second prebuild should hit cache")
	require.Same(t, output, cached2, "should return cached output")

	// Third prebuild - also cache hit
	cached3, ok3 := cache.get(templateVersionID, prebuildOwnerID, presetParams)
	require.True(t, ok3, "third prebuild should hit cache")
	require.Same(t, output, cached3, "should return cached output")

	// All three prebuilds shared the same cache entry
}

func TestRenderCache_Metrics(t *testing.T) {
	t.Parallel()

	// Create test metrics
	cacheHits := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "test_cache_hits_total",
	})
	cacheMisses := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "test_cache_misses_total",
	})
	cacheSize := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "test_cache_entries",
	})

	cache := NewRenderCacheWithMetrics(cacheHits, cacheMisses, cacheSize)
	defer cache.Close()

	templateVersionID := uuid.New()
	ownerID := uuid.New()
	params := map[string]string{"region": "us-west-2"}

	// Initially: 0 hits, 0 misses, 0 size
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheHits), "initial hits should be 0")
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheMisses), "initial misses should be 0")
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "initial size should be 0")

	// First get - should be a miss
	_, ok := cache.get(templateVersionID, ownerID, params)
	require.False(t, ok)
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheHits), "hits should still be 0")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should be 1")
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "size should still be 0")

	// Put an entry
	output := &preview.Output{}
	cache.put(templateVersionID, ownerID, params, output)
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "size should be 1 after put")

	// Second get - should be a hit
	_, ok = cache.get(templateVersionID, ownerID, params)
	require.True(t, ok)
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheHits), "hits should be 1")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "size should still be 1")

	// Third get - another hit
	_, ok = cache.get(templateVersionID, ownerID, params)
	require.True(t, ok)
	require.Equal(t, float64(2), promtestutil.ToFloat64(cacheHits), "hits should be 2")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")

	// Put another entry with different params
	params2 := map[string]string{"region": "us-east-1"}
	cache.put(templateVersionID, ownerID, params2, output)
	require.Equal(t, float64(2), promtestutil.ToFloat64(cacheSize), "size should be 2 after second put")

	// Get with different params - should be a hit
	_, ok = cache.get(templateVersionID, ownerID, params2)
	require.True(t, ok)
	require.Equal(t, float64(3), promtestutil.ToFloat64(cacheHits), "hits should be 3")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
}

func TestRenderCache_TTL(t *testing.T) {
	t.Parallel()

	ctx := testutil.Context(t, testutil.WaitShort)
	clock := quartz.NewMock(t)

	trapTickerFunc := clock.Trap().TickerFunc("render-cache-cleanup")
	defer trapTickerFunc.Close()

	// Create cache with short TTL for testing
	cache := newCache(clock, 100*time.Millisecond, nil, nil, nil)
	defer cache.Close()

	// Wait for the initial cleanup and ticker to be created
	trapTickerFunc.MustWait(ctx).Release(ctx)

	templateVersionID := uuid.New()
	ownerID := uuid.New()
	params := map[string]string{"region": "us-west-2"}
	output := &preview.Output{}

	// Put an entry
	cache.put(templateVersionID, ownerID, params, output)

	// Should be a hit immediately
	cached, ok := cache.get(templateVersionID, ownerID, params)
	require.True(t, ok, "should hit cache immediately")
	require.Same(t, output, cached)

	// Advance time beyond TTL
	clock.Advance(150 * time.Millisecond)

	// Should be a miss after TTL
	_, ok = cache.get(templateVersionID, ownerID, params)
	require.False(t, ok, "should miss cache after TTL expiration")
}

func TestRenderCache_CleanupRemovesExpiredEntries(t *testing.T) {
	t.Parallel()

	ctx := testutil.Context(t, testutil.WaitShort)
	clock := quartz.NewMock(t)

	trapTickerFunc := clock.Trap().TickerFunc("render-cache-cleanup")
	defer trapTickerFunc.Close()

	cacheSize := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "test_cache_entries",
	})
	cache := newCache(clock, 100*time.Millisecond, nil, nil, cacheSize)
	defer cache.Close()

	// Wait for the initial cleanup and ticker to be created
	trapTickerFunc.MustWait(ctx).Release(ctx)

	// Initial size should be 0 after first cleanup
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "should have 0 entries initially")

	templateVersionID := uuid.New()
	ownerID := uuid.New()

	// Add 3 entries
	for i := 0; i < 3; i++ {
		params := map[string]string{"index": string(rune(i))}
		cache.put(templateVersionID, ownerID, params, &preview.Output{})
	}

	require.Equal(t, float64(3), promtestutil.ToFloat64(cacheSize), "should have 3 entries")

	// Advance time beyond TTL
	clock.Advance(150 * time.Millisecond)

	// Trigger cleanup by advancing to the next ticker event (15 minutes from start minus what we already advanced)
	clock.Advance(15*time.Minute - 150*time.Millisecond).MustWait(ctx)

	// All entries should be removed
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "all entries should be removed after cleanup")
}

func TestRenderCache_TimestampRefreshOnHit(t *testing.T) {
	t.Parallel()

	ctx := testutil.Context(t, testutil.WaitShort)
	clock := quartz.NewMock(t)

	trapTickerFunc := clock.Trap().TickerFunc("render-cache-cleanup")
	defer trapTickerFunc.Close()

	// Create cache with 1 second TTL for testing
	cache := newCache(clock, time.Second, nil, nil, nil)
	defer cache.Close()

	// Wait for the initial cleanup and ticker to be created
	trapTickerFunc.MustWait(ctx).Release(ctx)

	templateVersionID := uuid.New()
	ownerID := uuid.New()
	params := map[string]string{"region": "us-west-2"}
	output := &preview.Output{}

	// Put an entry at T=0
	cache.put(templateVersionID, ownerID, params, output)

	// Advance time to 600ms (still within TTL)
	clock.Advance(600 * time.Millisecond)

	// Access the entry - should hit and refresh timestamp to T=600ms
	cached, ok := cache.get(templateVersionID, ownerID, params)
	require.True(t, ok, "should hit cache")
	require.Same(t, output, cached)

	// Advance another 600ms (now at T=1200ms)
	// Entry was created at T=0 but refreshed at T=600ms, so it should still be valid
	clock.Advance(600 * time.Millisecond)

	// Should still hit because timestamp was refreshed at T=600ms
	cached, ok = cache.get(templateVersionID, ownerID, params)
	require.True(t, ok, "should still hit cache because timestamp was refreshed")
	require.Same(t, output, cached)

	// Now advance another 1.1 seconds (to T=2300ms)
	// Last refresh was at T=1200ms, so now it should be expired
	clock.Advance(1100 * time.Millisecond)

	// Should miss because more than 1 second since last access
	_, ok = cache.get(templateVersionID, ownerID, params)
	require.False(t, ok, "should miss cache after TTL from last access")
}
@@ -1,197 +0,0 @@
package dynamicparameters

import (
	"context"
	"testing"

	"github.com/google/uuid"
	"github.com/hashicorp/hcl/v2"
	"github.com/prometheus/client_golang/prometheus"
	promtestutil "github.com/prometheus/client_golang/prometheus/testutil"
	"github.com/stretchr/testify/require"

	"github.com/coder/coder/v2/coderd/database"
	"github.com/coder/coder/v2/codersdk"
	"github.com/coder/coder/v2/testutil"
	"github.com/coder/preview"
	previewtypes "github.com/coder/preview/types"
	"github.com/coder/terraform-provider-coder/v2/provider"
)

// TestRenderCache_PrebuildWithResolveParameters simulates the actual prebuild flow
// where ResolveParameters calls Render() twice - once with previous values and once
// with the final computed values.
func TestRenderCache_PrebuildWithResolveParameters(t *testing.T) {
	t.Parallel()

	ctx := testutil.Context(t, testutil.WaitShort)

	// Create test metrics
	cacheHits := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "test_prebuild_cache_hits_total",
	})
	cacheMisses := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "test_prebuild_cache_misses_total",
	})
	cacheSize := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "test_prebuild_cache_entries",
	})

	cache := NewRenderCacheWithMetrics(cacheHits, cacheMisses, cacheSize)
	defer cache.Close()

	// Simulate prebuild scenario
	prebuildOwnerID := uuid.MustParse("c42fdf75-3097-471c-8c33-fb52454d81c0") // database.PrebuildsSystemUserID
	templateVersionID := uuid.New()

	// Preset parameters that all prebuilds share
	presetParams := []database.TemplateVersionPresetParameter{
		{Name: "instance_type", Value: "t3.micro"},
		{Name: "region", Value: "us-west-2"},
	}

	// Create a mock renderer that returns consistent parameter definitions
	mockRenderer := &mockRenderer{
		cache:             cache,
		templateVersionID: templateVersionID,
		output: &preview.Output{
			Parameters: []previewtypes.Parameter{
				{
					ParameterData: previewtypes.ParameterData{
						Name:         "instance_type",
						Type:         previewtypes.ParameterTypeString,
						FormType:     provider.ParameterFormTypeInput,
						Mutable:      true,
						DefaultValue: previewtypes.StringLiteral("t3.micro"),
						Required:     true,
					},
					Value:       previewtypes.StringLiteral("t3.micro"),
					Diagnostics: nil,
				},
				{
					ParameterData: previewtypes.ParameterData{
						Name:         "region",
						Type:         previewtypes.ParameterTypeString,
						FormType:     provider.ParameterFormTypeInput,
						Mutable:      true,
						DefaultValue: previewtypes.StringLiteral("us-west-2"),
						Required:     true,
					},
					Value:       previewtypes.StringLiteral("us-west-2"),
					Diagnostics: nil,
				},
			},
		},
	}

	// Initial metrics should be 0
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheHits), "initial hits should be 0")
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheMisses), "initial misses should be 0")
	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheSize), "initial size should be 0")

	// === FIRST PREBUILD ===
	// First build: no previous values, preset values provided
	values1, err := ResolveParameters(ctx, prebuildOwnerID, mockRenderer, true,
		[]database.WorkspaceBuildParameter{}, // No previous values (first build)
		[]codersdk.WorkspaceBuildParameter{}, // No build-specific values
		presetParams,                         // Preset values from template
	)
	require.NoError(t, err)
	require.NotNil(t, values1)

	// After first prebuild:
	// - ResolveParameters calls Render() twice:
	//   1. RenderWithoutCache with previousValuesMap (empty {}) → no cache operation
	//   2. Render with values.ValuesMap() ({preset}) → miss, creates cache entry
	// Expected: 0 hits, 1 miss, 1 cache entry
	t.Logf("After first prebuild: hits=%v, misses=%v, size=%v",
		promtestutil.ToFloat64(cacheHits),
		promtestutil.ToFloat64(cacheMisses),
		promtestutil.ToFloat64(cacheSize))

	require.Equal(t, float64(0), promtestutil.ToFloat64(cacheHits), "first prebuild should have 0 hits")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "first prebuild should have 1 miss")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "should have 1 cache entry after first prebuild")

	// === SECOND PREBUILD ===
	// Second build: previous values now set to preset values
	previousValues := []database.WorkspaceBuildParameter{
		{Name: "instance_type", Value: "t3.micro"},
		{Name: "region", Value: "us-west-2"},
	}

	values2, err := ResolveParameters(ctx, prebuildOwnerID, mockRenderer, false,
		previousValues, // Previous values from first build
		[]codersdk.WorkspaceBuildParameter{},
		presetParams,
	)
	require.NoError(t, err)
	require.NotNil(t, values2)

	// After second prebuild:
	// - ResolveParameters calls Render() twice:
	//   1. RenderWithoutCache with previousValuesMap ({preset}) → no cache operation
	//   2. Render with values.ValuesMap() ({preset}) → HIT (cache entry from first prebuild's 2nd render)
	// Expected: 1 hit, 1 miss (still), 1 cache entry (still)
	t.Logf("After second prebuild: hits=%v, misses=%v, size=%v",
		promtestutil.ToFloat64(cacheHits),
		promtestutil.ToFloat64(cacheMisses),
		promtestutil.ToFloat64(cacheSize))

	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheHits), "second prebuild should have 1 hit")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
	require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "should still have 1 cache entry")

	// === THIRD PREBUILD ===
	values3, err := ResolveParameters(ctx, prebuildOwnerID, mockRenderer, false,
		previousValues,
		[]codersdk.WorkspaceBuildParameter{},
		presetParams,
	)
	require.NoError(t, err)
	require.NotNil(t, values3)

	// After third prebuild:
	// - ResolveParameters calls Render() twice:
	//   1. RenderWithoutCache with previousValuesMap ({preset}) → no cache operation
|
||||
// 2. Render with values.ValuesMap() ({preset}) → HIT
|
||||
// Expected: 2 hits, 1 miss (still), 1 cache entry (still)
|
||||
t.Logf("After third prebuild: hits=%v, misses=%v, size=%v",
|
||||
promtestutil.ToFloat64(cacheHits),
|
||||
promtestutil.ToFloat64(cacheMisses),
|
||||
promtestutil.ToFloat64(cacheSize))
|
||||
|
||||
require.Equal(t, float64(2), promtestutil.ToFloat64(cacheHits), "third prebuild should have 2 total hits")
|
||||
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheMisses), "misses should still be 1")
|
||||
require.Equal(t, float64(1), promtestutil.ToFloat64(cacheSize), "should still have 1 cache entry")
|
||||
|
||||
// Summary: With 3 prebuilds, we should have:
|
||||
// - 2 cache hits (1 from 2nd prebuild, 1 from 3rd prebuild)
|
||||
// - 1 cache miss (1 from 1st prebuild)
|
||||
// - 1 cache entry (for preset params only - introspection renders are not cached)
|
||||
}
|
||||
|
||||
// mockRenderer is a simple renderer that uses the cache for testing
|
||||
type mockRenderer struct {
|
||||
cache RenderCache
|
||||
templateVersionID uuid.UUID
|
||||
output *preview.Output
|
||||
}
|
||||
|
||||
func (m *mockRenderer) Render(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
|
||||
// This simulates what dynamicRenderer does - check cache first
|
||||
if cached, ok := m.cache.get(m.templateVersionID, ownerID, values); ok {
|
||||
return cached, nil
|
||||
}
|
||||
|
||||
// Not in cache, "render" (just return our mock output) and cache it
|
||||
m.cache.put(m.templateVersionID, ownerID, values, m.output)
|
||||
return m.output, nil
|
||||
}
|
||||
|
||||
func (m *mockRenderer) RenderWithoutCache(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
|
||||
// For test purposes, just return output without caching
|
||||
return m.output, nil
|
||||
}
|
||||
|
||||
func (m *mockRenderer) Close() {}
|
||||
@@ -69,18 +69,3 @@ func (mr *MockRendererMockRecorder) Render(ctx, ownerID, values any) *gomock.Cal
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Render", reflect.TypeOf((*MockRenderer)(nil).Render), ctx, ownerID, values)
}

// RenderWithoutCache mocks base method.
func (m *MockRenderer) RenderWithoutCache(ctx context.Context, ownerID uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "RenderWithoutCache", ctx, ownerID, values)
	ret0, _ := ret[0].(*preview.Output)
	ret1, _ := ret[1].(hcl.Diagnostics)
	return ret0, ret1
}

// RenderWithoutCache indicates an expected call of RenderWithoutCache.
func (mr *MockRendererMockRecorder) RenderWithoutCache(ctx, ownerID, values any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "RenderWithoutCache", reflect.TypeOf((*MockRenderer)(nil).RenderWithoutCache), ctx, ownerID, values)
}
@@ -69,10 +69,7 @@ func ResolveParameters(
	//
	// This is how the form should look to the user on their workspace settings page.
	// This is the original form truth that our validations should initially be based on.
	//
	// Skip caching this render - it's only used for introspection to identify ephemeral
	// parameters. The actual values used for the build will be rendered and cached below.
	output, diags := renderer.RenderWithoutCache(ctx, ownerID, previousValuesMap)
	output, diags := renderer.Render(ctx, ownerID, previousValuesMap)
	if diags.HasErrors() {
		// Top level diagnostics should break the build. Previous values (and new) should
		// always be valid. If there is a case where this is not true, then this has to
@@ -28,25 +28,6 @@ func TestResolveParameters(t *testing.T) {
		render := rendermock.NewMockRenderer(ctrl)

		// A single immutable parameter with no previous value.
		render.EXPECT().
			RenderWithoutCache(gomock.Any(), gomock.Any(), gomock.Any()).
			AnyTimes().
			Return(&preview.Output{
				Parameters: []previewtypes.Parameter{
					{
						ParameterData: previewtypes.ParameterData{
							Name:         "immutable",
							Type:         previewtypes.ParameterTypeString,
							FormType:     provider.ParameterFormTypeInput,
							Mutable:      false,
							DefaultValue: previewtypes.StringLiteral("foo"),
							Required:     true,
						},
						Value:       previewtypes.StringLiteral("foo"),
						Diagnostics: nil,
					},
				},
			}, nil)
		render.EXPECT().
			Render(gomock.Any(), gomock.Any(), gomock.Any()).
			AnyTimes().
@@ -97,7 +78,7 @@ func TestResolveParameters(t *testing.T) {

		// A single immutable parameter with no previous value.
		render.EXPECT().
			RenderWithoutCache(gomock.Any(), gomock.Any(), gomock.Any()).
			Render(gomock.Any(), gomock.Any(), gomock.Any()).
			// Return the mutable param first
			Return(&preview.Output{
				Parameters: []previewtypes.Parameter{
@@ -40,15 +40,6 @@ func (r *loader) staticRender(ctx context.Context, db database.Store) (*staticRe
	}

func (r *staticRender) Render(_ context.Context, _ uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
	return r.render(values)
}

func (r *staticRender) RenderWithoutCache(_ context.Context, _ uuid.UUID, values map[string]string) (*preview.Output, hcl.Diagnostics) {
	// Static renderer doesn't use cache, so this is the same as Render
	return r.render(values)
}

func (r *staticRender) render(values map[string]string) (*preview.Output, hcl.Diagnostics) {
	params := r.staticParams
	for i := range params {
		param := &params[i]
+1
-2
@@ -41,8 +41,7 @@ const (
// @Tags Files
// @Param Content-Type header string true "Content-Type must be `application/x-tar` or `application/zip`" default(application/x-tar)
// @Param file formData file true "File to be uploaded. If using tar format, file must conform to ustar (pax may cause problems)."
// @Success 200 {object} codersdk.UploadResponse "Returns existing file if duplicate"
// @Success 201 {object} codersdk.UploadResponse "Returns newly created file"
// @Success 201 {object} codersdk.UploadResponse
// @Router /files [post]
func (api *API) postFile(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
@@ -251,16 +251,13 @@ type HTTPError struct {
func (e HTTPError) Write(rw http.ResponseWriter, r *http.Request) {
	if e.RenderStaticPage {
		site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
			Status:      e.Code,
			HideStatus:  true,
			Title:       e.Msg,
			Description: e.Detail,
			Actions: []site.Action{
				{
					URL:  "/login",
					Text: "Back to site",
				},
			},
			Status:       e.Code,
			HideStatus:   true,
			Title:        e.Msg,
			Description:  e.Detail,
			RetryEnabled: false,
			DashboardURL: "/login",

			RenderDescriptionMarkdown: e.RenderDetailMarkdown,
		})
		return
@@ -75,18 +75,7 @@ func ShowAuthorizePage(accessURL *url.URL) http.HandlerFunc {

		callbackURL, err := url.Parse(app.CallbackURL)
		if err != nil {
			site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
				Status:      http.StatusInternalServerError,
				HideStatus:  false,
				Title:       "Internal Server Error",
				Description: err.Error(),
				Actions: []site.Action{
					{
						URL:  accessURL.String(),
						Text: "Back to site",
					},
				},
			})
			site.RenderStaticErrorPage(rw, r, site.ErrorPageData{Status: http.StatusInternalServerError, HideStatus: false, Title: "Internal Server Error", Description: err.Error(), RetryEnabled: false, DashboardURL: accessURL.String(), Warnings: nil})
			return
		}

@@ -96,19 +85,7 @@ func ShowAuthorizePage(accessURL *url.URL) http.HandlerFunc {
		for i, err := range validationErrs {
			errStr[i] = err.Detail
		}
		site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
			Status:      http.StatusBadRequest,
			HideStatus:  false,
			Title:       "Invalid Query Parameters",
			Description: "One or more query parameters are missing or invalid.",
			Warnings:    errStr,
			Actions: []site.Action{
				{
					URL:  accessURL.String(),
					Text: "Back to site",
				},
			},
		})
		site.RenderStaticErrorPage(rw, r, site.ErrorPageData{Status: http.StatusBadRequest, HideStatus: false, Title: "Invalid Query Parameters", Description: "One or more query parameters are missing or invalid.", RetryEnabled: false, DashboardURL: accessURL.String(), Warnings: errStr})
		return
	}
@@ -236,19 +236,3 @@ func (z Object) WithGroupACL(groups map[string][]policy.Action) Object {
		AnyOrgOwner: z.AnyOrgOwner,
	}
}

// TODO(geokat): similar to builtInRoles, this should ideally be
// scoped to a coderd rather than a global.
var workspaceACLDisabled bool

// SetWorkspaceACLDisabled disables/enables workspace sharing for the
// deployment.
func SetWorkspaceACLDisabled(v bool) {
	workspaceACLDisabled = v
}

// WorkspaceACLDisabled returns true if workspace sharing is disabled
// for the deployment.
func WorkspaceACLDisabled() bool {
	return workspaceACLDisabled
}
+14
-20
@@ -199,9 +199,10 @@ func (s *ServerTailnet) ReverseProxy(targetURL, dashboardURL *url.URL, agentID u
	proxy := httputil.NewSingleHostReverseProxy(&tgt)
	proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, theErr error) {
		var (
			desc           = "Failed to proxy request to application: " + theErr.Error()
			additionalInfo = ""
			actions        = []site.Action{}
			desc                 = "Failed to proxy request to application: " + theErr.Error()
			additionalInfo       = ""
			additionalButtonLink = ""
			additionalButtonText = ""
		)

		var tlsError tls.RecordHeaderError
@@ -221,28 +222,21 @@ func (s *ServerTailnet) ReverseProxy(targetURL, dashboardURL *url.URL, agentID u
				app = app.ChangePortProtocol(targetProtocol)

				switchURL.Host = fmt.Sprintf("%s%s", app.String(), strings.TrimPrefix(wildcardHostname, "*"))
				actions = append(actions, site.Action{
					URL:  switchURL.String(),
					Text: fmt.Sprintf("Switch to %s", strings.ToUpper(targetProtocol)),
				})
				additionalButtonLink = switchURL.String()
				additionalButtonText = fmt.Sprintf("Switch to %s", strings.ToUpper(targetProtocol))
				additionalInfo += fmt.Sprintf("This error seems to be due to an app protocol mismatch, try switching to %s.", strings.ToUpper(targetProtocol))
			}
		}

		site.RenderStaticErrorPage(w, r, site.ErrorPageData{
			Status:      http.StatusBadGateway,
			Title:       "Bad Gateway",
			Description: desc,
			Actions: append(actions, []site.Action{
				{
					Text: "Retry",
				},
				{
					URL:  dashboardURL.String(),
					Text: "Back to site",
				},
			}...),
			AdditionalInfo: additionalInfo,
			Status:               http.StatusBadGateway,
			Title:                "Bad Gateway",
			Description:          desc,
			RetryEnabled:         true,
			DashboardURL:         dashboardURL.String(),
			AdditionalInfo:       additionalInfo,
			AdditionalButtonLink: additionalButtonLink,
			AdditionalButtonText: additionalButtonText,
		})
	}
	proxy.Director = s.director(agentID, proxy.Director)
@@ -71,18 +71,6 @@ Prompt: "Set up CI/CD pipeline" →
  "task_name": "setup-cicd"
}

Prompt: "Work on https://github.com/coder/coder/issues/1234" →
{
  "display_name": "Work on coder/coder #1234",
  "task_name": "coder-1234"
}

Prompt: "Fix https://github.com/org/repo/pull/567" →
{
  "display_name": "Fix org/repo PR #567",
  "task_name": "repo-pr-567"
}

If a suitable name cannot be created, output exactly:
{
  "display_name": "Task Unnamed",
@@ -4,7 +4,6 @@ import (
	"fmt"
	"net/http"
	"net/url"
	"path"

	"cdr.dev/slog"
	"github.com/coder/coder/v2/codersdk"
@@ -31,16 +30,12 @@ func WriteWorkspaceApp404(log slog.Logger, accessURL *url.URL, rw http.ResponseW
	}

	site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
		Status:      http.StatusNotFound,
		Title:       "Application Not Found",
		Description: "The application or workspace you are trying to access does not exist or you do not have permission to access it.",
		Warnings:    warnings,
		Actions: []site.Action{
			{
				URL:  accessURL.String(),
				Text: "Back to site",
			},
		},
		Status:       http.StatusNotFound,
		Title:        "Application Not Found",
		Description:  "The application or workspace you are trying to access does not exist or you do not have permission to access it.",
		RetryEnabled: false,
		DashboardURL: accessURL.String(),
		Warnings:     warnings,
	})
}

@@ -65,15 +60,11 @@ func WriteWorkspaceApp500(log slog.Logger, accessURL *url.URL, rw http.ResponseW
	)

	site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
		Status:      http.StatusInternalServerError,
		Title:       "Internal Server Error",
		Description: "An internal server error occurred.",
		Actions: []site.Action{
			{
				URL:  accessURL.String(),
				Text: "Back to site",
			},
		},
		Status:       http.StatusInternalServerError,
		Title:        "Internal Server Error",
		Description:  "An internal server error occurred.",
		RetryEnabled: false,
		DashboardURL: accessURL.String(),
	})
}

@@ -94,18 +85,11 @@ func WriteWorkspaceAppOffline(log slog.Logger, accessURL *url.URL, rw http.Respo
	}

	site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
		Status:      http.StatusBadGateway,
		Title:       "Application Unavailable",
		Description: msg,
		Actions: []site.Action{
			{
				Text: "Retry",
			},
			{
				URL:  accessURL.String(),
				Text: "Back to site",
			},
		},
		Status:       http.StatusBadGateway,
		Title:        "Application Unavailable",
		Description:  msg,
		RetryEnabled: true,
		DashboardURL: accessURL.String(),
	})
}

@@ -125,26 +109,11 @@ func WriteWorkspaceOffline(log slog.Logger, accessURL *url.URL, rw http.Response
		)
	}

	actions := []site.Action{
		{
			URL:  accessURL.String(),
			Text: "Back to site",
		},
	}

	workspaceURL, err := url.Parse(accessURL.String())
	if err == nil {
		workspaceURL.Path = path.Join(accessURL.Path, "@"+appReq.UsernameOrID, appReq.WorkspaceNameOrID)
		actions = append(actions, site.Action{
			URL:  workspaceURL.String(),
			Text: "View workspace",
		})
	}

	site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
		Status:      http.StatusBadRequest,
		Title:       "Workspace Offline",
		Description: fmt.Sprintf("Last workspace transition was to the %q state. Start the workspace to access its applications.", codersdk.WorkspaceTransitionStop),
		Actions:     actions,
		Status:       http.StatusBadRequest,
		Title:        "Workspace Offline",
		Description:  fmt.Sprintf("Last workspace transition was to the %q state. Start the workspace to access its applications.", codersdk.WorkspaceTransitionStop),
		RetryEnabled: false,
		DashboardURL: accessURL.String(),
	})
}
@@ -185,14 +185,10 @@ func (s *Server) handleAPIKeySmuggling(rw http.ResponseWriter, r *http.Request,
			Status:      http.StatusBadRequest,
			Title:       "Bad Request",
			Description: "Could not decrypt API key. Workspace app API key smuggling is not permitted on the primary access URL. Please remove the query parameter and try again.",
			// No retry is included because the user needs to remove the query
			// Retry is disabled because the user needs to remove the query
			// parameter before they try again.
			Actions: []site.Action{
				{
					URL:  s.DashboardURL.String(),
					Text: "Back to site",
				},
			},
			RetryEnabled: false,
			DashboardURL: s.DashboardURL.String(),
		})
		return false
	}
@@ -208,14 +204,10 @@ func (s *Server) handleAPIKeySmuggling(rw http.ResponseWriter, r *http.Request,
			Status:      http.StatusBadRequest,
			Title:       "Bad Request",
			Description: "Could not decrypt API key. Please remove the query parameter and try again.",
			// No retry is included because the user needs to remove the query
			// Retry is disabled because the user needs to remove the query
			// parameter before they try again.
			Actions: []site.Action{
				{
					URL:  s.DashboardURL.String(),
					Text: "Back to site",
				},
			},
			RetryEnabled: false,
			DashboardURL: s.DashboardURL.String(),
		})
		return false
	}
@@ -232,15 +224,11 @@ func (s *Server) handleAPIKeySmuggling(rw http.ResponseWriter, r *http.Request,
		// startup, but we'll check anyways.
		s.Logger.Error(r.Context(), "could not split invalid app hostname", slog.F("hostname", s.Hostname))
		site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
			Status:      http.StatusInternalServerError,
			Title:       "Internal Server Error",
			Description: "The app is configured with an invalid app wildcard hostname. Please contact an administrator.",
			Actions: []site.Action{
				{
					URL:  s.DashboardURL.String(),
					Text: "Back to site",
				},
			},
			Status:       http.StatusInternalServerError,
			Title:        "Internal Server Error",
			Description:  "The app is configured with an invalid app wildcard hostname. Please contact an administrator.",
			RetryEnabled: false,
			DashboardURL: s.DashboardURL.String(),
		})
		return false
	}
@@ -286,15 +274,11 @@ func (s *Server) handleAPIKeySmuggling(rw http.ResponseWriter, r *http.Request,
func (s *Server) workspaceAppsProxyPath(rw http.ResponseWriter, r *http.Request) {
	if s.DisablePathApps {
		site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
			Status:      http.StatusForbidden,
			Title:       "Forbidden",
			Description: "Path-based applications are disabled on this Coder deployment by the administrator.",
			Actions: []site.Action{
				{
					URL:  s.DashboardURL.String(),
					Text: "Back to site",
				},
			},
			Status:       http.StatusForbidden,
			Title:        "Forbidden",
			Description:  "Path-based applications are disabled on this Coder deployment by the administrator.",
			RetryEnabled: false,
			DashboardURL: s.DashboardURL.String(),
		})
		return
	}
@@ -303,15 +287,11 @@ func (s *Server) workspaceAppsProxyPath(rw http.ResponseWriter, r *http.Request)
	// lookup the username from token. We used to redirect by doing this lookup.
	if chi.URLParam(r, "user") == codersdk.Me {
		site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
			Status:      http.StatusNotFound,
			Title:       "Application Not Found",
			Description: "Applications must be accessed with the full username, not @me.",
			Actions: []site.Action{
				{
					URL:  s.DashboardURL.String(),
					Text: "Back to site",
				},
			},
			Status:       http.StatusNotFound,
			Title:        "Application Not Found",
			Description:  "Applications must be accessed with the full username, not @me.",
			RetryEnabled: false,
			DashboardURL: s.DashboardURL.String(),
		})
		return
	}
@@ -539,15 +519,11 @@ func (s *Server) parseHostname(rw http.ResponseWriter, r *http.Request, next htt
	app, err := appurl.ParseSubdomainAppURL(subdomain)
	if err != nil {
		site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
			Status:      http.StatusBadRequest,
			Title:       "Invalid Application URL",
			Description: fmt.Sprintf("Could not parse subdomain application URL %q: %s", subdomain, err.Error()),
			Actions: []site.Action{
				{
					URL:  s.DashboardURL.String(),
					Text: "Back to site",
				},
			},
			Status:       http.StatusBadRequest,
			Title:        "Invalid Application URL",
			Description:  fmt.Sprintf("Could not parse subdomain application URL %q: %s", subdomain, err.Error()),
			RetryEnabled: false,
			DashboardURL: s.DashboardURL.String(),
		})
		return appurl.ApplicationURL{}, false
	}
@@ -571,18 +547,11 @@ func (s *Server) proxyWorkspaceApp(rw http.ResponseWriter, r *http.Request, appT
	appURL, err := url.Parse(appToken.AppURL)
	if err != nil {
		site.RenderStaticErrorPage(rw, r, site.ErrorPageData{
			Status:      http.StatusBadRequest,
			Title:       "Bad Request",
			Description: fmt.Sprintf("Application has an invalid URL %q: %s", appToken.AppURL, err.Error()),
			Actions: []site.Action{
				{
					Text: "Retry",
				},
				{
					URL:  s.DashboardURL.String(),
					Text: "Back to site",
				},
			},
			Status:       http.StatusBadRequest,
			Title:        "Bad Request",
			Description:  fmt.Sprintf("Application has an invalid URL %q: %s", appToken.AppURL, err.Error()),
			RetryEnabled: true,
			DashboardURL: s.DashboardURL.String(),
		})
		return
	}
@@ -5240,79 +5240,6 @@ func TestDeleteWorkspaceACL(t *testing.T) {
	})
}

// nolint:tparallel,paralleltest // Subtests modify package global.
func TestWorkspaceSharingDisabled(t *testing.T) {
	t.Run("CanAccessWhenEnabled", func(t *testing.T) {
		var (
			client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
				DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
					dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
					// DisableWorkspaceSharing is false (default)
				}),
			})
			admin            = coderdtest.CreateFirstUser(t, client)
			_, wsOwner       = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
			userClient, user = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
		)

		ctx := testutil.Context(t, testutil.WaitMedium)

		// Create workspace with ACL granting access to user
		ws := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
			OwnerID:        wsOwner.ID,
			OrganizationID: admin.OrganizationID,
			UserACL: database.WorkspaceACL{
				user.ID.String(): database.WorkspaceACLEntry{
					Permissions: []policy.Action{
						policy.ActionRead, policy.ActionSSH, policy.ActionApplicationConnect,
					},
				},
			},
		}).Do().Workspace

		// User SHOULD be able to access workspace when sharing is enabled
		fetchedWs, err := userClient.Workspace(ctx, ws.ID)
		require.NoError(t, err)
		require.Equal(t, ws.ID, fetchedWs.ID)
	})

	t.Run("NoAccessWhenDisabled", func(t *testing.T) {
		var (
			client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{
				DeploymentValues: coderdtest.DeploymentValues(t, func(dv *codersdk.DeploymentValues) {
					dv.Experiments = []string{string(codersdk.ExperimentWorkspaceSharing)}
					dv.DisableWorkspaceSharing = true
				}),
			})
			admin            = coderdtest.CreateFirstUser(t, client)
			_, wsOwner       = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
			userClient, user = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
		)

		ctx := testutil.Context(t, testutil.WaitMedium)

		// Create workspace with ACL granting access to user directly in DB
		ws := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
			OwnerID:        wsOwner.ID,
			OrganizationID: admin.OrganizationID,
			UserACL: database.WorkspaceACL{
				user.ID.String(): database.WorkspaceACLEntry{
					Permissions: []policy.Action{
						policy.ActionRead, policy.ActionSSH, policy.ActionApplicationConnect,
					},
				},
			},
		}).Do().Workspace

		// User should NOT be able to access workspace when sharing is disabled
		_, err := userClient.Workspace(ctx, ws.ID)
		require.Error(t, err)
		var sdkErr *codersdk.Error
		require.ErrorAs(t, err, &sdkErr)
		require.Equal(t, http.StatusNotFound, sdkErr.StatusCode())
	})
}

func TestWorkspaceCreateWithImplicitPreset(t *testing.T) {
	t.Parallel()

@@ -88,15 +88,12 @@ type Builder struct {
	parameterRender dynamicparameters.Renderer
	workspaceTags   *map[string]string

	// renderCache caches template rendering results
	renderCache dynamicparameters.RenderCache

	prebuiltWorkspaceBuildStage  sdkproto.PrebuiltWorkspaceBuildStage
	verifyNoLegacyParametersOnce bool
}

type UsageChecker interface {
	CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, transition database.WorkspaceTransition) (UsageCheckResponse, error)
	CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (UsageCheckResponse, error)
}

type UsageCheckResponse struct {
@@ -108,7 +105,7 @@ type NoopUsageChecker struct{}

var _ UsageChecker = NoopUsageChecker{}

func (NoopUsageChecker) CheckBuildUsage(_ context.Context, _ database.Store, _ *database.TemplateVersion, _ database.WorkspaceTransition) (UsageCheckResponse, error) {
func (NoopUsageChecker) CheckBuildUsage(_ context.Context, _ database.Store, _ *database.TemplateVersion) (UsageCheckResponse, error) {
	return UsageCheckResponse{
		Permitted: true,
	}, nil
@@ -256,14 +253,6 @@ func (b Builder) TemplateVersionPresetID(id uuid.UUID) Builder {
	return b
}

// RenderCache sets the render cache to use for template rendering.
// This allows multiple workspace builds to share cached render results.
func (b Builder) RenderCache(cache dynamicparameters.RenderCache) Builder {
	// nolint: revive
	b.renderCache = cache
	return b
}

type BuildError struct {
	// Status is a suitable HTTP status code
	Status int
@@ -697,22 +686,6 @@ func (b *Builder) getDynamicParameterRenderer() (dynamicparameters.Renderer, err
		return nil, xerrors.Errorf("get template version variables: %w", err)
	}

	// Pass render cache if available
	if b.renderCache != nil {
		renderer, err := dynamicparameters.Prepare(b.ctx, b.store, b.fileCache, tv.ID,
			dynamicparameters.WithTemplateVersion(*tv),
			dynamicparameters.WithProvisionerJob(*job),
			dynamicparameters.WithTerraformValues(*tfVals),
			dynamicparameters.WithTemplateVariableValues(variableValues),
			dynamicparameters.WithRenderCache(b.renderCache),
		)
		if err != nil {
			return nil, xerrors.Errorf("get template version renderer: %w", err)
		}
		b.parameterRender = renderer
		return renderer, nil
	}

	renderer, err := dynamicparameters.Prepare(b.ctx, b.store, b.fileCache, tv.ID,
		dynamicparameters.WithTemplateVersion(*tv),
		dynamicparameters.WithProvisionerJob(*job),
@@ -1334,7 +1307,7 @@ func (b *Builder) checkUsage() error {
		return BuildError{http.StatusInternalServerError, "Failed to fetch template version", err}
	}

	resp, err := b.usageChecker.CheckBuildUsage(b.ctx, b.store, templateVersion, b.trans)
	resp, err := b.usageChecker.CheckBuildUsage(b.ctx, b.store, templateVersion)
	if err != nil {
		return BuildError{http.StatusInternalServerError, "Failed to check build usage", err}
	}

@@ -1049,7 +1049,7 @@ func TestWorkspaceBuildUsageChecker(t *testing.T) {

	var calls int64
	fakeUsageChecker := &fakeUsageChecker{
		checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion, _ database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
		checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
			atomic.AddInt64(&calls, 1)
			return wsbuilder.UsageCheckResponse{Permitted: true}, nil
		},
@@ -1126,7 +1126,7 @@ func TestWorkspaceBuildUsageChecker(t *testing.T) {

	var calls int64
	fakeUsageChecker := &fakeUsageChecker{
		checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion, _ database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
		checkBuildUsageFunc: func(_ context.Context, _ database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
			atomic.AddInt64(&calls, 1)
			return c.response, c.responseErr
		},
@@ -1577,11 +1577,11 @@ func expectFindMatchingPresetID(id uuid.UUID, err error) func(mTx *dbmock.MockSt
}

type fakeUsageChecker struct {
	checkBuildUsageFunc func(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error)
	checkBuildUsageFunc func(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error)
}

func (f *fakeUsageChecker) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
	return f.checkBuildUsageFunc(ctx, store, templateVersion, transition)
func (f *fakeUsageChecker) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
	return f.checkBuildUsageFunc(ctx, store, templateVersion)
}

func withNoTask(mTx *dbmock.MockStore) {
+35 −10

@@ -495,7 +494,6 @@ type DeploymentValues struct {
 	SSHConfig                 SSHConfig        `json:"config_ssh,omitempty" typescript:",notnull"`
 	WgtunnelHost              serpent.String   `json:"wgtunnel_host,omitempty" typescript:",notnull"`
 	DisableOwnerWorkspaceExec serpent.Bool     `json:"disable_owner_workspace_exec,omitempty" typescript:",notnull"`
-	DisableWorkspaceSharing   serpent.Bool     `json:"disable_workspace_sharing,omitempty" typescript:",notnull"`
 	ProxyHealthStatusInterval serpent.Duration `json:"proxy_health_status_interval,omitempty" typescript:",notnull"`
 	EnableTerraformDebugMode  serpent.Bool     `json:"enable_terraform_debug_mode,omitempty" typescript:",notnull"`
 	UserQuietHoursSchedule    UserQuietHoursScheduleConfig `json:"user_quiet_hours_schedule,omitempty" typescript:",notnull"`

@@ -2729,15 +2728,6 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
 			YAML:        "disableOwnerWorkspaceAccess",
 			Annotations: serpent.Annotations{}.Mark(annotationExternalProxies, "true"),
 		},
-		{
-			Name:        "Disable Workspace Sharing",
-			Description: `Disable workspace sharing (requires the "workspace-sharing" experiment to be enabled). Workspace ACL checking is disabled and only owners can have ssh, apps and terminal access to workspaces. Access based on the 'owner' role is also allowed unless disabled via --disable-owner-workspace-access.`,
-			Flag:        "disable-workspace-sharing",
-			Env:         "CODER_DISABLE_WORKSPACE_SHARING",
-
-			Value: &c.DisableWorkspaceSharing,
-			YAML:  "disableWorkspaceSharing",
-		},
 		{
 			Name:        "Session Duration",
 			Description: "The token expiry duration for browser sessions. Sessions may last longer if they are actively making requests, but this functionality can be disabled via --disable-session-expiry-refresh.",

@@ -3401,6 +3391,37 @@ Write out the current server config as YAML to stdout.`,
 			YAML:        "retention",
 			Annotations: serpent.Annotations{}.Mark(annotationFormatDuration, "true"),
 		},
+		{
+			Name:        "AI Bridge Max Concurrency",
+			Description: "Maximum number of concurrent AI Bridge requests. Set to 0 to disable (unlimited).",
+			Flag:        "aibridge-max-concurrency",
+			Env:         "CODER_AIBRIDGE_MAX_CONCURRENCY",
+			Value:       &c.AI.BridgeConfig.MaxConcurrency,
+			Default:     "0",
+			Group:       &deploymentGroupAIBridge,
+			YAML:        "max_concurrency",
+		},
+		{
+			Name:        "AI Bridge Rate Limit",
+			Description: "Maximum number of AI Bridge requests per rate window. Set to 0 to disable rate limiting.",
+			Flag:        "aibridge-rate-limit",
+			Env:         "CODER_AIBRIDGE_RATE_LIMIT",
+			Value:       &c.AI.BridgeConfig.RateLimit,
+			Default:     "0",
+			Group:       &deploymentGroupAIBridge,
+			YAML:        "rate_limit",
+		},
+		{
+			Name:        "AI Bridge Rate Window",
+			Description: "Duration of the rate limiting window for AI Bridge requests.",
+			Flag:        "aibridge-rate-window",
+			Env:         "CODER_AIBRIDGE_RATE_WINDOW",
+			Value:       &c.AI.BridgeConfig.RateWindow,
+			Default:     "1m",
+			Group:       &deploymentGroupAIBridge,
+			YAML:        "rate_window",
+			Annotations: serpent.Annotations{}.Mark(annotationFormatDuration, "true"),
+		},
 		// Retention settings
 		{
 			Name: "Audit Logs Retention",

@@ -3471,6 +3492,10 @@ type AIBridgeConfig struct {
 	Bedrock             AIBridgeBedrockConfig `json:"bedrock" typescript:",notnull"`
 	InjectCoderMCPTools serpent.Bool          `json:"inject_coder_mcp_tools" typescript:",notnull"`
 	Retention           serpent.Duration      `json:"retention" typescript:",notnull"`
+	// Overload protection settings.
+	MaxConcurrency serpent.Int64    `json:"max_concurrency" typescript:",notnull"`
+	RateLimit      serpent.Int64    `json:"rate_limit" typescript:",notnull"`
+	RateWindow     serpent.Duration `json:"rate_window" typescript:",notnull"`
 }

 type AIBridgeOpenAIConfig struct {
@@ -0,0 +1,13 @@
# Redirect old offline deployments URL to new airgap URL
/install/offline /install/airgap 301

# Redirect old offline anchor fragments to new airgap anchors
/install/offline#offline-docs /install/airgap#airgap-docs 301
/install/offline#offline-container-images /install/airgap#airgap-container-images 301

# Redirect old devcontainers folder to envbuilder
/admin/templates/managing-templates/devcontainers /admin/templates/managing-templates/envbuilder 301
/admin/templates/managing-templates/devcontainers/index /admin/templates/managing-templates/envbuilder 301
/admin/templates/managing-templates/devcontainers/add-devcontainer /admin/templates/managing-templates/envbuilder/add-envbuilder 301
/admin/templates/managing-templates/devcontainers/devcontainer-security-caching /admin/templates/managing-templates/envbuilder/envbuilder-security-caching 301
/admin/templates/managing-templates/devcontainers/devcontainer-releases-known-issues /admin/templates/managing-templates/envbuilder/envbuilder-releases-known-issues 301
+1 −1

@@ -52,7 +52,7 @@ For any information not strictly contained in these sections, check out our
 ### Development containers (dev containers)

 - A
-  [Development Container](./integrations/devcontainers/index.md)
+  [Development Container](./templates/extending-templates/devcontainers.md)
   is an open-source specification for defining development environments (called
   dev containers). It is generally stored in VCS alongside associated source
   code. It can reference an existing base image, or a custom Dockerfile that
@@ -1,52 +0,0 @@
|
||||
# Envbuilder
|
||||
|
||||
Envbuilder is an open-source tool that builds development environments from
|
||||
[dev container](https://containers.dev/implementors/spec/) configuration files.
|
||||
Unlike the [Dev Containers integration](../integration.md),
|
||||
Envbuilder transforms the workspace image itself rather than running containers
|
||||
inside the workspace.
|
||||
|
||||
Envbuilder is well-suited for Kubernetes-native deployments without privileged
|
||||
containers, environments where Docker is unavailable or restricted, and
|
||||
workflows where administrators need infrastructure-level control over image
|
||||
builds, caching, and security scanning. For workspaces with Docker available,
|
||||
the [Dev Containers Integration](../integration.md) offers container management
|
||||
with dashboard visibility and multi-container support.
|
||||
|
||||
Dev containers provide developers with increased autonomy and control over their
|
||||
Coder cloud development environments.
|
||||
|
||||
By using dev containers, developers can customize their workspaces with tools
|
||||
pre-approved by platform teams in registries like
|
||||
[JFrog Artifactory](../../jfrog-artifactory.md). This simplifies
|
||||
workflows, reduces the need for tickets and approvals, and promotes greater
|
||||
independence for developers.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
An administrator should construct or choose a base image and create a template
|
||||
that includes a `devcontainer_builder` image before a developer team configures
|
||||
dev containers.
|
||||
|
||||
## Devcontainer Features
|
||||
|
||||
[Dev container Features](https://containers.dev/implementors/features/) allow
|
||||
owners of a project to specify self-contained units of code and runtime
|
||||
configuration that can be composed together on top of an existing base image.
|
||||
This is a good place to install project-specific tools, such as
|
||||
language-specific runtimes and compilers.
|
||||
|
||||
## Coder Envbuilder
|
||||
|
||||
[Envbuilder](https://github.com/coder/envbuilder/) is an open-source project
|
||||
maintained by Coder that runs dev containers via Coder templates and your
|
||||
underlying infrastructure. Envbuilder can run on Docker or Kubernetes.
|
||||
|
||||
It is independently packaged and versioned from the centralized Coder
|
||||
open-source project. This means that Envbuilder can be used with Coder, but it
|
||||
is not required. It also means that dev container builds can scale independently
|
||||
of the Coder control plane and even run within a CI/CD pipeline.
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Add an Envbuilder template](./add-envbuilder.md)
|
||||
@@ -1,49 +0,0 @@
|
||||
# Dev Containers
|
||||
|
||||
Dev containers allow developers to define their development environment
|
||||
as code using the [Dev Container specification](https://containers.dev/).
|
||||
Configuration lives in a `devcontainer.json` file alongside source code,
|
||||
enabling consistent, reproducible environments.
|
||||
|
||||
By adopting dev containers, organizations can:
|
||||
|
||||
- **Standardize environments**: Eliminate "works on my machine" issues while
|
||||
still allowing developers to customize their tools within approved boundaries.
|
||||
- **Scale efficiently**: Let developers maintain their own environment
|
||||
definitions, reducing the burden on platform teams.
|
||||
- **Improve security**: Use hardened base images and controlled package
|
||||
registries to enforce security policies while enabling developer self-service.
|
||||
|
||||
Coder supports two approaches for running dev containers. Choose based on your
|
||||
infrastructure and workflow requirements.
|
||||
|
||||
## Dev Containers Integration
|
||||
|
||||
The Dev Containers Integration uses the standard `@devcontainers/cli` and Docker
|
||||
to run containers inside your workspace. This is the recommended approach for
|
||||
most use cases.
|
||||
|
||||
**Best for:**
|
||||
|
||||
- Workspaces with Docker available (Docker-in-Docker or mounted socket)
|
||||
- Dev container management in the Coder dashboard (discovery, status, rebuild)
|
||||
- Multiple dev containers per workspace
|
||||
|
||||
[Configure Dev Containers Integration](./integration.md)
|
||||
|
||||
For user documentation, see the
|
||||
[Dev Containers user guide](../../../user-guides/devcontainers/index.md).
|
||||
|
||||
## Envbuilder
|
||||
|
||||
Envbuilder transforms the workspace image itself from a `devcontainer.json`,
|
||||
rather than running containers inside the workspace. It does not require
|
||||
a Docker daemon.
|
||||
|
||||
**Best for:**
|
||||
|
||||
- Environments where Docker is unavailable or restricted
|
||||
- Infrastructure-level control over image builds, caching, and security scanning
|
||||
- Kubernetes-native deployments without privileged containers
|
||||
|
||||
[Configure Envbuilder](./envbuilder/index.md)
|
||||
@@ -1,259 +0,0 @@
|
||||
# Configure a template for Dev Containers
|
||||
|
||||
This guide covers the Dev Containers Integration, which uses Docker. For
|
||||
environments without Docker, see [Envbuilder](./envbuilder/index.md) as an
|
||||
alternative.
|
||||
|
||||
To enable Dev Containers in workspaces, configure your template with the Dev Containers
|
||||
modules and configurations outlined in this doc.
|
||||
|
||||
Dev Containers are currently not supported in Windows or macOS workspaces.
|
||||
|
||||
## Configuration Modes
|
||||
|
||||
There are two approaches to configuring Dev Containers in Coder:
|
||||
|
||||
### Manual Configuration
|
||||
|
||||
Use the [`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer) Terraform resource to explicitly define which Dev
|
||||
Containers should be started in your workspace. This approach provides:
|
||||
|
||||
- Predictable behavior and explicit control
|
||||
- Clear template configuration
|
||||
- Easier troubleshooting
|
||||
- Better for production environments
|
||||
|
||||
This is the recommended approach for most use cases.
|
||||
|
||||
### Project Discovery
|
||||
|
||||
Alternatively, enable automatic discovery of Dev Containers in Git repositories.
|
||||
The agent scans for `devcontainer.json` files and surfaces them in the Coder UI.
|
||||
See [Environment Variables](#environment-variables) for configuration options.
|
||||
|
||||
This approach is useful when developers frequently switch between repositories
|
||||
or work with many projects, as it reduces template maintenance overhead.
|
||||
|
||||
## Install the Dev Containers CLI
|
||||
|
||||
Use the
|
||||
[devcontainers-cli](https://registry.coder.com/modules/devcontainers-cli) module
|
||||
to ensure the `@devcontainers/cli` is installed in your workspace:
|
||||
|
||||
```terraform
|
||||
module "devcontainers-cli" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
source = "registry.coder.com/coder/devcontainers-cli/coder"
|
||||
agent_id = coder_agent.dev.id
|
||||
}
|
||||
```
|
||||
|
||||
Alternatively, install the devcontainer CLI manually in your base image.
|
||||
|
||||
## Configure Automatic Dev Container Startup
|
||||
|
||||
The
|
||||
[`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer)
|
||||
resource automatically starts a Dev Container in your workspace, ensuring it's
|
||||
ready when you access the workspace:
|
||||
|
||||
```terraform
|
||||
resource "coder_devcontainer" "my-repository" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
agent_id = coder_agent.dev.id
|
||||
workspace_folder = "/home/coder/my-repository"
|
||||
}
|
||||
```
|
||||
|
||||
The `workspace_folder` attribute must point to a valid project folder containing
|
||||
a `devcontainer.json` file. Consider using the
|
||||
[`git-clone`](https://registry.coder.com/modules/git-clone) module to ensure
|
||||
your repository is cloned and ready for automatic startup.
|
||||
|
||||
For multi-repo workspaces, define multiple `coder_devcontainer` resources, each
|
||||
pointing to a different repository. Each one runs as a separate sub-agent with
|
||||
its own terminal and apps in the dashboard.
|
||||
|
||||
## Enable Dev Containers Integration
|
||||
|
||||
Dev Containers integration is **enabled by default** in Coder 2.24.0 and later.
|
||||
You don't need to set any environment variables unless you want to change the
|
||||
default behavior.
|
||||
|
||||
If you need to explicitly disable Dev Containers, set the
|
||||
`CODER_AGENT_DEVCONTAINERS_ENABLE` environment variable to `false`:
|
||||
|
||||
```terraform
|
||||
resource "docker_container" "workspace" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
image = "codercom/oss-dogfood:latest"
|
||||
env = [
|
||||
"CODER_AGENT_DEVCONTAINERS_ENABLE=false", # Explicitly disable
|
||||
# ... Other environment variables.
|
||||
]
|
||||
# ... Other container configuration.
|
||||
}
|
||||
```
|
||||
|
||||
See the [Environment Variables](#environment-variables) section below for more
|
||||
details on available configuration options.
|
||||
|
||||
## Environment Variables
|
||||
|
||||
The following environment variables control Dev Container behavior in your
|
||||
workspace. Both `CODER_AGENT_DEVCONTAINERS_ENABLE` and
|
||||
`CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE` are **enabled by default**,
|
||||
so you typically don't need to set them unless you want to explicitly disable
|
||||
the feature.
|
||||
|
||||
### CODER_AGENT_DEVCONTAINERS_ENABLE
|
||||
|
||||
**Default: `true`** • **Added in: v2.24.0**
|
||||
|
||||
Enables the Dev Containers integration in the Coder agent.
|
||||
|
||||
The Dev Containers feature is enabled by default. You can explicitly disable it
|
||||
by setting this to `false`.
|
||||
|
||||
### CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE
|
||||
|
||||
**Default: `true`** • **Added in: v2.25.0**
|
||||
|
||||
Enables automatic discovery of Dev Containers in Git repositories.
|
||||
|
||||
When enabled, the agent scans the configured working directory (set via the
|
||||
`directory` attribute in `coder_agent`, typically the user's home directory) for
|
||||
Git repositories. If the directory itself is a Git repository, it searches that
|
||||
project. Otherwise, it searches immediate subdirectories for Git repositories.
|
||||
|
||||
For each repository found, the agent looks for `devcontainer.json` files in the
|
||||
[standard locations](../../../user-guides/devcontainers/index.md#add-a-devcontainerjson)
|
||||
and surfaces discovered Dev Containers in the Coder UI. Discovery respects
|
||||
`.gitignore` patterns.
|
||||
|
||||
Set to `false` if you prefer explicit configuration via `coder_devcontainer`.
|
||||
|
||||
### CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE
|
||||
|
||||
**Default: `false`** • **Added in: v2.25.0**
|
||||
|
||||
Automatically starts Dev Containers discovered via project discovery.
|
||||
|
||||
When enabled, discovered Dev Containers will be automatically built and started
|
||||
during workspace initialization. This only applies to Dev Containers found via
|
||||
project discovery. Dev Containers defined with the `coder_devcontainer` resource
|
||||
always auto-start regardless of this setting.
|
||||
|
||||
## Per-Container Customizations
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> Dev container sub-agents are created dynamically after workspace provisioning,
|
||||
> so Terraform resources like
|
||||
> [`coder_script`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/script)
|
||||
> and [`coder_app`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/app)
|
||||
> cannot currently be attached to them. Modules from the
|
||||
> [Coder registry](https://registry.coder.com) that depend on these resources
|
||||
> are also not currently supported for sub-agents.
|
||||
>
|
||||
> To add tools to dev containers, use
|
||||
> [dev container features](../../../user-guides/devcontainers/working-with-dev-containers.md#dev-container-features).
|
||||
> For Coder-specific apps, use the
|
||||
> [`apps` customization](../../../user-guides/devcontainers/customizing-dev-containers.md#custom-apps).
|
||||
|
||||
Developers can customize individual dev containers using the `customizations.coder`
|
||||
block in their `devcontainer.json` file. Available options include:
|
||||
|
||||
- `ignore` — Hide a dev container from Coder completely
|
||||
- `autoStart` — Control whether the container starts automatically (requires
|
||||
`CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE` to be enabled)
|
||||
- `name` — Set a custom agent name
|
||||
- `displayApps` — Control which built-in apps appear
|
||||
- `apps` — Define custom applications
|
||||
|
||||
For the full reference, see
|
||||
[Customizing dev containers](../../../user-guides/devcontainers/customizing-dev-containers.md).
|
||||
|
||||
## Complete Template Example
|
||||
|
||||
Here's a simplified template example that uses Dev Containers with manual
|
||||
configuration:
|
||||
|
||||
```terraform
|
||||
terraform {
|
||||
required_providers {
|
||||
coder = { source = "coder/coder" }
|
||||
docker = { source = "kreuzwerker/docker" }
|
||||
}
|
||||
}
|
||||
|
||||
provider "coder" {}
|
||||
data "coder_workspace" "me" {}
|
||||
data "coder_workspace_owner" "me" {}
|
||||
|
||||
resource "coder_agent" "dev" {
|
||||
arch = "amd64"
|
||||
os = "linux"
|
||||
startup_script_behavior = "blocking"
|
||||
startup_script = "sudo service docker start"
|
||||
shutdown_script = "sudo service docker stop"
|
||||
# ...
|
||||
}
|
||||
|
||||
module "devcontainers-cli" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
source = "registry.coder.com/coder/devcontainers-cli/coder"
|
||||
agent_id = coder_agent.dev.id
|
||||
}
|
||||
|
||||
resource "coder_devcontainer" "my-repository" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
agent_id = coder_agent.dev.id
|
||||
workspace_folder = "/home/coder/my-repository"
|
||||
}
|
||||
```
|
||||
|
||||
### Alternative: Project Discovery with Autostart
|
||||
|
||||
By default, discovered containers appear in the dashboard but developers must
|
||||
manually start them. To have them start automatically, enable autostart:
|
||||
|
||||
```terraform
|
||||
resource "docker_container" "workspace" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
image = "codercom/oss-dogfood:latest"
|
||||
env = [
|
||||
# Project discovery is enabled by default, but autostart is not.
|
||||
# Enable autostart to automatically build and start discovered containers:
|
||||
"CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=true",
|
||||
# ... Other environment variables.
|
||||
]
|
||||
# ... Other container configuration.
|
||||
}
|
||||
```
|
||||
|
||||
With autostart enabled:
|
||||
|
||||
- Discovered containers automatically build and start during workspace
|
||||
initialization
|
||||
- The `coder_devcontainer` resource is not required
|
||||
- Developers can work with multiple projects seamlessly
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> When using project discovery, you still need to install the devcontainers CLI
|
||||
> using the module or in your base image.
|
||||
|
||||
## Example Template
|
||||
|
||||
The [Docker (Dev Containers)](https://github.com/coder/coder/tree/main/examples/templates/docker-devcontainer)
|
||||
starter template demonstrates Dev Containers integration using Docker-in-Docker.
|
||||
It includes the `devcontainers-cli` module, `git-clone` module, and the
|
||||
`coder_devcontainer` resource.
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [Dev Containers Integration](../../../user-guides/devcontainers/index.md)
|
||||
- [Customizing Dev Containers](../../../user-guides/devcontainers/customizing-dev-containers.md)
|
||||
- [Working with Dev Containers](../../../user-guides/devcontainers/working-with-dev-containers.md)
|
||||
- [Troubleshooting Dev Containers](../../../user-guides/devcontainers/troubleshooting-dev-containers.md)
|
||||
@@ -1,14 +1,263 @@
|
||||
# Dev Containers
|
||||
# Configure a template for Dev Containers
|
||||
|
||||
Dev containers extend your template with containerized development environments,
|
||||
allowing developers to work in consistent, reproducible setups defined by
|
||||
`devcontainer.json` files.
|
||||
To enable Dev Containers in workspaces, configure your template with the Dev Containers
|
||||
modules and configurations outlined in this doc.
|
||||
|
||||
Coder's Dev Containers Integration uses the standard `@devcontainers/cli` and
|
||||
Docker to run containers inside workspaces.
|
||||
> [!NOTE]
|
||||
>
|
||||
> Dev Containers require a **Linux or macOS workspace**. Windows is not supported.
|
||||
|
||||
For setup instructions, see
|
||||
[Dev Containers Integration](../../integrations/devcontainers/integration.md).
|
||||
## Configuration Modes
|
||||
|
||||
For an alternative approach that doesn't require Docker, see
|
||||
[Envbuilder](../../integrations/devcontainers/envbuilder/index.md).
|
||||
There are two approaches to configuring Dev Containers in Coder:
|
||||
|
||||
### Manual Configuration
|
||||
|
||||
Use the [`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer) Terraform resource to explicitly define which Dev
|
||||
Containers should be started in your workspace. This approach provides:
|
||||
|
||||
- Predictable behavior and explicit control
|
||||
- Clear template configuration
|
||||
- Easier troubleshooting
|
||||
- Better for production environments
|
||||
|
||||
This is the recommended approach for most use cases.
|
||||
|
||||
### Project Discovery
|
||||
|
||||
Alternatively, enable automatic discovery of Dev Containers in Git repositories.
|
||||
The agent scans for `devcontainer.json` files and surfaces them in the Coder UI.
|
||||
See [Environment Variables](#environment-variables) for configuration options.
|
||||
|
||||
## Install the Dev Containers CLI
|
||||
|
||||
Use the
|
||||
[devcontainers-cli](https://registry.coder.com/modules/devcontainers-cli) module
|
||||
to ensure the `@devcontainers/cli` is installed in your workspace:
|
||||
|
||||
```terraform
|
||||
module "devcontainers-cli" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
source = "registry.coder.com/coder/devcontainers-cli/coder"
|
||||
agent_id = coder_agent.dev.id
|
||||
}
|
||||
```
|
||||
|
||||
Alternatively, install the devcontainer CLI manually in your base image.
|
||||
|
||||
## Configure Automatic Dev Container Startup
|
||||
|
||||
The
|
||||
[`coder_devcontainer`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/devcontainer)
|
||||
resource automatically starts a Dev Container in your workspace, ensuring it's
|
||||
ready when you access the workspace:
|
||||
|
||||
```terraform
|
||||
resource "coder_devcontainer" "my-repository" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
agent_id = coder_agent.dev.id
|
||||
workspace_folder = "/home/coder/my-repository"
|
||||
}
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> The `workspace_folder` attribute must specify the location of the dev
|
||||
> container's workspace and should point to a valid project folder containing a
|
||||
> `devcontainer.json` file.
|
||||
|
||||
<!-- nolint:MD028/no-blanks-blockquote -->
|
||||
|
||||
> [!TIP]
|
||||
>
|
||||
> Consider using the [`git-clone`](https://registry.coder.com/modules/git-clone)
|
||||
> module to ensure your repository is cloned into the workspace folder and ready
|
||||
> for automatic startup.
|
||||
|
||||
For multi-repo workspaces, define multiple `coder_devcontainer` resources, each
|
||||
pointing to a different repository. Each one runs as a separate sub-agent with
|
||||
its own terminal and apps in the dashboard.
|
||||
|
||||
## Enable Dev Containers Integration
|
||||
|
||||
Dev Containers integration is **enabled by default** in Coder 2.24.0 and later.
|
||||
You don't need to set any environment variables unless you want to change the
|
||||
default behavior.
|
||||
|
||||
If you need to explicitly disable Dev Containers, set the
|
||||
`CODER_AGENT_DEVCONTAINERS_ENABLE` environment variable to `false`:
|
||||
|
||||
```terraform
|
||||
resource "docker_container" "workspace" {
|
||||
count = data.coder_workspace.me.start_count
|
||||
image = "codercom/oss-dogfood:latest"
|
||||
env = [
|
||||
"CODER_AGENT_DEVCONTAINERS_ENABLE=false", # Explicitly disable
|
||||
# ... Other environment variables.
|
||||
]
|
||||
# ... Other container configuration.
|
||||
}
|
||||
```
|
||||
|
||||
See the [Environment Variables](#environment-variables) section below for more
|
||||
details on available configuration options.
|
||||
|
||||
## Environment Variables
|
||||
|
||||
The following environment variables control Dev Container behavior in your
|
||||
workspace. Both `CODER_AGENT_DEVCONTAINERS_ENABLE` and
|
||||
`CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE` are **enabled by default**,
|
||||
so you typically don't need to set them unless you want to explicitly disable
|
||||
the feature.
|
||||
|
||||
### CODER_AGENT_DEVCONTAINERS_ENABLE
|
||||
|
||||
**Default: `true`** • **Added in: v2.24.0**
|
||||
|
||||
Enables the Dev Containers integration in the Coder agent.
|
||||
|
||||
The Dev Containers feature is enabled by default. You can explicitly disable it
|
||||
by setting this to `false`.
|
||||
|
||||
### CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE
|
||||
|
||||
**Default: `true`** • **Added in: v2.25.0**
|
||||
|
||||
Enables automatic discovery of Dev Containers in Git repositories.
|
||||
|
||||
When enabled, the agent scans the configured working directory (set via the
|
||||
`directory` attribute in `coder_agent`, typically the user's home directory) for
|
||||
Git repositories. If the directory itself is a Git repository, it searches that
|
||||
project. Otherwise, it searches immediate subdirectories for Git repositories.
|
||||
|
||||
For each repository found, the agent looks for `devcontainer.json` files in the
|
||||
[standard locations](../../../user-guides/devcontainers/index.md#add-a-devcontainerjson)
|
||||
and surfaces discovered Dev Containers in the Coder UI. Discovery respects
|
||||
`.gitignore` patterns.
|
||||
|
||||
Set to `false` if you prefer explicit configuration via `coder_devcontainer`.
|
||||
|
||||
### CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE
|
||||
|
||||
**Default: `false`** • **Added in: v2.25.0**
|
||||
|
||||
Automatically starts Dev Containers discovered via project discovery.
|
||||
|
||||
When enabled, discovered Dev Containers will be automatically built and started
|
||||
during workspace initialization. This only applies to Dev Containers found via
|
||||
project discovery. Dev Containers defined with the `coder_devcontainer` resource
|
||||
always auto-start regardless of this setting.
|
||||
|
||||
## Per-Container Customizations
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> Dev container sub-agents are created dynamically after workspace provisioning,
|
||||
> so Terraform resources like
|
||||
> [`coder_script`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/script)
|
||||
> and [`coder_app`](https://registry.terraform.io/providers/coder/coder/latest/docs/resources/app)
|
||||
> cannot currently be attached to them. Modules from the
|
||||
> [Coder registry](https://registry.coder.com) that depend on these resources
|
||||
> are also not currently supported for sub-agents.
|
||||
>
|
||||
> To add tools to dev containers, use
|
||||
> [dev container features](../../../user-guides/devcontainers/working-with-dev-containers.md#dev-container-features).
|
||||
> For Coder-specific apps, use the
|
||||
> [`apps` customization](../../../user-guides/devcontainers/customizing-dev-containers.md#custom-apps).
|
||||
|
||||
Developers can customize individual dev containers using the `customizations.coder`
|
||||
block in their `devcontainer.json` file. Available options include:
|
||||
|
||||
- `ignore` — Hide a dev container from Coder completely
|
||||
- `autoStart` — Control whether the container starts automatically (requires
|
||||
`CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE` to be enabled)
|
||||
- `name` — Set a custom agent name
|
||||
- `displayApps` — Control which built-in apps appear
|
||||
- `apps` — Define custom applications
|
||||
|
||||
For the full reference, see
|
||||
[Customizing dev containers](../../../user-guides/devcontainers/customizing-dev-containers.md).

## Complete Template Example

Here's a simplified template example that uses Dev Containers with manual
configuration:

```terraform
terraform {
  required_providers {
    coder  = { source = "coder/coder" }
    docker = { source = "kreuzwerker/docker" }
  }
}

provider "coder" {}
data "coder_workspace" "me" {}
data "coder_workspace_owner" "me" {}

resource "coder_agent" "dev" {
  arch                    = "amd64"
  os                      = "linux"
  startup_script_behavior = "blocking"
  startup_script          = "sudo service docker start"
  shutdown_script         = "sudo service docker stop"
  # ...
}

module "devcontainers-cli" {
  count    = data.coder_workspace.me.start_count
  source   = "registry.coder.com/coder/devcontainers-cli/coder"
  agent_id = coder_agent.dev.id
}

resource "coder_devcontainer" "my-repository" {
  count            = data.coder_workspace.me.start_count
  agent_id         = coder_agent.dev.id
  workspace_folder = "/home/coder/my-repository"
}
```

### Alternative: Project Discovery with Autostart

By default, discovered containers appear in the dashboard but developers must
manually start them. To have them start automatically, enable autostart:

```terraform
resource "docker_container" "workspace" {
  count = data.coder_workspace.me.start_count
  image = "codercom/oss-dogfood:latest"
  env = [
    # Project discovery is enabled by default, but autostart is not.
    # Enable autostart to automatically build and start discovered containers:
    "CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=true",
    # ... Other environment variables.
  ]
  # ... Other container configuration.
}
```

With autostart enabled:

- Discovered containers automatically build and start during workspace
  initialization
- The `coder_devcontainer` resource is not required
- Developers can work with multiple projects seamlessly

> [!NOTE]
>
> When using project discovery, you still need to install the devcontainers CLI
> using the module or in your base image.
## Example Template

The [Docker (Dev Containers)](https://github.com/coder/coder/tree/main/examples/templates/docker-devcontainer)
starter template demonstrates Dev Containers integration using Docker-in-Docker.
It includes the `devcontainers-cli` module, `git-clone` module, and the
`coder_devcontainer` resource.

## Next Steps

- [Dev Containers Integration](../../../user-guides/devcontainers/index.md)
- [Customizing Dev Containers](../../../user-guides/devcontainers/customizing-dev-containers.md)
- [Working with Dev Containers](../../../user-guides/devcontainers/working-with-dev-containers.md)
- [Troubleshooting Dev Containers](../../../user-guides/devcontainers/troubleshooting-dev-containers.md)
@@ -48,9 +48,9 @@ needs of different teams.

- [Image management](./managing-templates/image-management.md): Learn how to
  create and publish images for use within Coder workspaces & templates.
- [Dev Containers integration](../integrations/devcontainers/integration.md): Enable
- [Dev Containers integration](./extending-templates/devcontainers.md): Enable
  native dev containers support using `@devcontainers/cli` and Docker.
- [Envbuilder](../integrations/devcontainers/envbuilder/index.md): Alternative approach
- [Envbuilder](./managing-templates/envbuilder/index.md): Alternative approach
  for environments without Docker access.
- [Template hardening](./extending-templates/resource-persistence.md#-bulletproofing):
  Configure your template to prevent certain resources from being destroyed
@@ -1,14 +0,0 @@

# Envbuilder

Envbuilder shifts environment definition from template administrators to
developers. Instead of baking tools into template images, developers define
their environments via `devcontainer.json` files in their repositories.

Envbuilder transforms the workspace image itself from a dev container
configuration, without requiring a Docker daemon.

For setup instructions, see
[Envbuilder documentation](../../integrations/devcontainers/envbuilder/index.md).

For an alternative that uses Docker inside workspaces, see
[Dev Containers Integration](../../integrations/devcontainers/integration.md).
@@ -2,7 +2,7 @@

A Coder administrator adds an Envbuilder-compatible template to Coder. This
allows the template to prompt the developer for their dev container repository's
URL as a [parameter](../../../templates/extending-templates/parameters.md) when they create
URL as a [parameter](../../extending-templates/parameters.md) when they create
their workspace. Envbuilder clones the repo and builds a container from the
`devcontainer.json` specified in the repo.

@@ -127,7 +127,7 @@ their development environments:

| [AWS EC2 dev container](https://github.com/coder/coder/tree/main/examples/templates/aws-devcontainer) | Runs a development container inside a single EC2 instance. It also mounts the Docker socket from the VM inside the container to enable Docker inside the workspace. |

Your template can prompt the user for a repo URL with
[parameters](../../../templates/extending-templates/parameters.md):
[parameters](../../extending-templates/parameters.md):

![Renders a parameter form](../../../images/templates/parameters.png)
@@ -0,0 +1,131 @@

# Envbuilder

Envbuilder is an open-source tool that builds development environments from
[dev container](https://containers.dev/implementors/spec/) configuration files.
Unlike the [native Dev Containers integration](../../extending-templates/devcontainers.md),
Envbuilder transforms the workspace image itself rather than running containers
inside the workspace.

> [!NOTE]
>
> For most use cases, we recommend the
> [native Dev Containers integration](../../extending-templates/devcontainers.md),
> which uses the standard `@devcontainers/cli` and Docker. Envbuilder is an
> alternative for environments where Docker is not available or for
> administrator-controlled dev container workflows.

Dev containers provide developers with increased autonomy and control over their
Coder cloud development environments.

By using dev containers, developers can customize their workspaces with tools
pre-approved by platform teams in registries like
[JFrog Artifactory](../../../integrations/jfrog-artifactory.md). This simplifies
workflows, reduces the need for tickets and approvals, and promotes greater
independence for developers.

## Prerequisites

An administrator should construct or choose a base image and create a template
that includes a `devcontainer_builder` image before a developer team configures
dev containers.

## Benefits of dev containers

There are several benefits to adding a dev container-compatible template to
Coder:

- Reliability through standardization
- Scalability for growing teams
- Improved security
- Performance efficiency
- Cost optimization

### Reliability through standardization

Use dev containers to empower development teams to personalize their own
environments while maintaining consistency and security through an approved and
hardened base image.

Standardized environments ensure uniform behavior across machines and team
members, eliminating "it works on my machine" issues and creating a stable
foundation for development and testing. Containerized setups reduce dependency
conflicts and misconfigurations, enhancing build stability.

### Scalability for growing teams

Dev containers allow organizations to handle multiple projects and teams
efficiently.

You can leverage platforms like Kubernetes to allocate resources on demand,
optimizing costs and ensuring fair distribution of quotas. Developer teams can
use efficient custom images and independently configure the contents of their
version-controlled dev containers.

This approach allows organizations to scale seamlessly, reducing the maintenance
burden on the administrators who support diverse projects while allowing
development teams to maintain their own images and onboard new users quickly.

### Improved security

Since Coder and Envbuilder run on your own infrastructure, you can use firewalls
and cluster-level policies to ensure Envbuilder only downloads packages from
your secure registry powered by JFrog Artifactory or Sonatype Nexus.
Additionally, Envbuilder can be configured to push the full image back to your
registry for additional security scanning.

This means that Coder admins can require hardened base images and packages,
while still allowing developer self-service.

Envbuilder runs inside a small container image but does not require a Docker
daemon in order to build a dev container. This is useful in environments where
you may not have access to a Docker socket for security reasons, but still need
to work with a container.

### Performance efficiency

Create a unique image for each project to reduce the dependency size of any
given project.

Envbuilder has various caching modes to ensure workspaces start as fast as
possible, such as layer caching and even full image caching and fetching via the
[Envbuilder Terraform provider](https://registry.terraform.io/providers/coder/envbuilder/latest/docs).

### Cost optimization

By creating unique images per project, you remove unnecessary dependencies and
reduce the workspace size and resource consumption of any given project. Full
image caching ensures optimal start and stop times.

## When to use a dev container

Dev containers are a good fit for developer teams who are familiar with Docker
and are already using containerized development environments. If you have a
large number of projects with different toolchains, dependencies, or that depend
on a particular Linux distribution, dev containers make it easier to quickly
switch between projects.

They may also be a great fit for more restricted environments where you may not
have access to a Docker daemon, since Envbuilder doesn't need one to work.

## Dev Container Features

[Dev container Features](https://containers.dev/implementors/features/) allow
owners of a project to specify self-contained units of code and runtime
configuration that can be composed together on top of an existing base image.
This is a good place to install project-specific tools, such as
language-specific runtimes and compilers.
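The composition described above can be sketched in a `devcontainer.json` like this. The base image and the Feature identifiers are illustrative examples from the public Features index, not requirements:

```json
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/go:1": {},
    "ghcr.io/devcontainers/features/node:1": { "version": "20" }
  }
}
```

Each Feature is applied on top of the base image at build time, so project-specific runtimes stay out of the hardened base image itself.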

## Coder Envbuilder

[Envbuilder](https://github.com/coder/envbuilder/) is an open-source project
maintained by Coder that runs dev containers via Coder templates and your
underlying infrastructure. Envbuilder can run on Docker or Kubernetes.

It is packaged and versioned independently of the centralized Coder
open-source project. This means that Envbuilder can be used with Coder, but it
is not required. It also means that dev container builds can scale independently
of the Coder control plane and even run within a CI/CD pipeline.

## Next steps

- [Add an Envbuilder template](./add-envbuilder.md)
@@ -70,5 +70,5 @@ specific tooling for their projects. The [Dev Container](https://containers.dev)

specification allows developers to define their project's dependencies within a
`devcontainer.json` in their Git repository.

- [Configure a template for Dev Containers](../../integrations/devcontainers/integration.md) (recommended)
- [Learn about Envbuilder](../../integrations/devcontainers/envbuilder/index.md) (alternative for environments without Docker)
- [Configure a template for Dev Containers](../extending-templates/devcontainers.md) (recommended)
- [Learn about Envbuilder](./envbuilder/index.md) (alternative for environments without Docker)

@@ -96,6 +96,6 @@ coder templates delete <template-name>

## Next steps

- [Image management](./image-management.md)
- [Dev Containers integration](../../integrations/devcontainers/integration.md) (recommended)
- [Envbuilder](../../integrations/devcontainers/envbuilder/index.md) (alternative for environments without Docker)
- [Dev Containers integration](../extending-templates/devcontainers.md) (recommended)
- [Envbuilder](./envbuilder/index.md) (alternative for environments without Docker)
- [Change management](./change-management.md)
Binary file not shown. After Width: | Height: | Size: 133 KiB
Binary file not shown. Before Width: | Height: | Size: 94 KiB
Binary file not shown. Before Width: | Height: | Size: 107 KiB
Binary file not shown. Before Width: | Height: | Size: 194 KiB
Binary file not shown. Before Width: | Height: | Size: 187 KiB
@@ -129,13 +129,14 @@ We support two release channels: mainline and stable - read the

- **Mainline** Coder release:

  - **Chart Registry**

    <!-- autoversion(mainline): "--version [version]" -->

    ```shell
    helm install coder coder-v2/coder \
      --namespace coder \
      --values values.yaml \
      --version 2.29.1
      --version 2.29.0
    ```

  - **OCI Registry**

@@ -146,7 +147,7 @@ We support two release channels: mainline and stable - read the

    helm install coder oci://ghcr.io/coder/chart/coder \
      --namespace coder \
      --values values.yaml \
      --version 2.29.1
      --version 2.29.0
    ```

- **Stable** Coder release:

@@ -159,7 +160,7 @@ We support two release channels: mainline and stable - read the

    helm install coder coder-v2/coder \
      --namespace coder \
      --values values.yaml \
      --version 2.28.6
      --version 2.28.5
    ```

  - **OCI Registry**

@@ -170,7 +171,7 @@ We support two release channels: mainline and stable - read the

    helm install coder oci://ghcr.io/coder/chart/coder \
      --namespace coder \
      --values values.yaml \
      --version 2.28.6
      --version 2.28.5
    ```

You can watch Coder start up by running `kubectl get pods -n coder`. Once Coder

@@ -134,8 +134,8 @@ kubectl create secret generic coder-db-url -n coder \

1. Select a Coder version:

   - **Mainline**: `2.29.1`
   - **Stable**: `2.28.6`
   - **Mainline**: `2.29.0`
   - **Stable**: `2.28.5`

Learn more about release channels in the [Releases documentation](./releases/index.md).

@@ -72,9 +72,9 @@ pages.

| [2.24](https://coder.com/changelog/coder-2-24) | July 01, 2025 | Extended Support Release | [v2.24.4](https://github.com/coder/coder/releases/tag/v2.24.4) |
| [2.25](https://coder.com/changelog/coder-2-25) | August 05, 2025 | Not Supported | [v2.25.3](https://github.com/coder/coder/releases/tag/v2.25.3) |
| [2.26](https://coder.com/changelog/coder-2-26) | September 03, 2025 | Not Supported | [v2.26.6](https://github.com/coder/coder/releases/tag/v2.26.6) |
| [2.27](https://coder.com/changelog/coder-2-27) | October 02, 2025 | Security Support | [v2.27.9](https://github.com/coder/coder/releases/tag/v2.27.9) |
| [2.28](https://coder.com/changelog/coder-2-28) | November 04, 2025 | Stable | [v2.28.6](https://github.com/coder/coder/releases/tag/v2.28.6) |
| [2.29](https://coder.com/changelog/coder-2-29) | December 02, 2025 | Mainline + ESR | [v2.29.1](https://github.com/coder/coder/releases/tag/v2.29.1) |
| [2.27](https://coder.com/changelog/coder-2-27) | October 02, 2025 | Security Support | [v2.27.8](https://github.com/coder/coder/releases/tag/v2.27.8) |
| [2.28](https://coder.com/changelog/coder-2-28) | November 04, 2025 | Stable | [v2.28.5](https://github.com/coder/coder/releases/tag/v2.28.5) |
| [2.29](https://coder.com/changelog/coder-2-29) | December 02, 2025 | Mainline + ESR | [v2.29.0](https://github.com/coder/coder/releases/tag/v2.29.0) |
| 2.30 | | Not Released | N/A |
<!-- RELEASE_CALENDAR_END -->
@@ -321,7 +321,7 @@

      "icon_path": "./images/icons/circle-dot.svg"
    },
    {
      "title": "Dev Containers",
      "title": "Dev Containers Integration",
      "description": "Run containerized development environments in your Coder workspace using the dev containers specification.",
      "path": "./user-guides/devcontainers/index.md",
      "icon_path": "./images/icons/container.svg",

@@ -533,8 +533,25 @@

    },
    {
      "title": "Envbuilder",
      "description": "Shift environment definition to repositories",
      "path": "./admin/templates/managing-templates/envbuilder.md"
      "description": "Build dev containers using Envbuilder for environments without Docker",
      "path": "./admin/templates/managing-templates/envbuilder/index.md",
      "children": [
        {
          "title": "Add an Envbuilder template",
          "description": "How to add an Envbuilder dev container template to Coder",
          "path": "./admin/templates/managing-templates/envbuilder/add-envbuilder.md"
        },
        {
          "title": "Envbuilder security and caching",
          "description": "Configure Envbuilder authentication and caching",
          "path": "./admin/templates/managing-templates/envbuilder/envbuilder-security-caching.md"
        },
        {
          "title": "Envbuilder releases and known issues",
          "description": "Envbuilder releases and known issues",
          "path": "./admin/templates/managing-templates/envbuilder/envbuilder-releases-known-issues.md"
        }
      ]
    },
    {
      "title": "Template Dependencies",

@@ -646,8 +663,8 @@

      "path": "./admin/templates/extending-templates/provider-authentication.md"
    },
    {
      "title": "Dev Containers",
      "description": "Extend templates with containerized dev environments",
      "title": "Configure a template for dev containers",
      "description": "How to configure your template for dev containers",
      "path": "./admin/templates/extending-templates/devcontainers.md"
    },
    {

@@ -747,40 +764,6 @@

      "title": "OAuth2 Provider",
      "description": "Use Coder as an OAuth2 provider",
      "path": "./admin/integrations/oauth2-provider.md"
    },
    {
      "title": "Dev Containers",
      "description": "Configure dev container support using Docker or Envbuilder",
      "path": "./admin/integrations/devcontainers/index.md",
      "children": [
        {
          "title": "Dev Containers Integration",
          "description": "Configure native dev containers with Docker",
          "path": "./admin/integrations/devcontainers/integration.md"
        },
        {
          "title": "Envbuilder",
          "description": "Build dev containers without Docker",
          "path": "./admin/integrations/devcontainers/envbuilder/index.md",
          "children": [
            {
              "title": "Add an Envbuilder template",
              "description": "How to add an Envbuilder template",
              "path": "./admin/integrations/devcontainers/envbuilder/add-envbuilder.md"
            },
            {
              "title": "Security and caching",
              "description": "Configure authentication and caching",
              "path": "./admin/integrations/devcontainers/envbuilder/envbuilder-security-caching.md"
            },
            {
              "title": "Releases and known issues",
              "description": "Release channels and known issues",
              "path": "./admin/integrations/devcontainers/envbuilder/envbuilder-releases-known-issues.md"
            }
          ]
        }
      ]
    }
  ]
},
Generated

@@ -31,7 +31,7 @@ file: string

### Example responses

> 200 Response
> 201 Response

```json
{

@@ -41,10 +41,9 @@ file: string

### Responses

| Status | Meaning | Description | Schema |
|--------|--------------------------------------------------------------|------------------------------------|--------------------------------------------------------------|
| 200 | [OK](https://tools.ietf.org/html/rfc7231#section-6.3.1) | Returns existing file if duplicate | [codersdk.UploadResponse](schemas.md#codersdkuploadresponse) |
| 201 | [Created](https://tools.ietf.org/html/rfc7231#section-6.3.2) | Returns newly created file | [codersdk.UploadResponse](schemas.md#codersdkuploadresponse) |

| Status | Meaning | Description | Schema |
|--------|--------------------------------------------------------------|-------------|--------------------------------------------------------------|
| 201 | [Created](https://tools.ietf.org/html/rfc7231#section-6.3.2) | Created | [codersdk.UploadResponse](schemas.md#codersdkuploadresponse) |

To perform this operation, you must be authenticated. [Learn more](authentication.md).
Generated

@@ -176,10 +176,13 @@ curl -X GET http://coder-server:8080/api/v2/deployment/config \

    },
    "enabled": true,
    "inject_coder_mcp_tools": true,
    "max_concurrency": 0,
    "openai": {
      "base_url": "string",
      "key": "string"
    },
    "rate_limit": 0,
    "rate_window": 0,
    "retention": 0
  }
},

@@ -233,7 +236,6 @@ curl -X GET http://coder-server:8080/api/v2/deployment/config \

  "disable_owner_workspace_exec": true,
  "disable_password_auth": true,
  "disable_path_apps": true,
  "disable_workspace_sharing": true,
  "docs_url": {
    "forceQuery": true,
    "fragment": "string",
Generated

@@ -390,24 +390,30 @@

  },
  "enabled": true,
  "inject_coder_mcp_tools": true,
  "max_concurrency": 0,
  "openai": {
    "base_url": "string",
    "key": "string"
  },
  "rate_limit": 0,
  "rate_window": 0,
  "retention": 0
}
```

### Properties

| Name | Type | Required | Restrictions | Description |
|--------------------------|----------------------------------------------------------------------|----------|--------------|-------------|
| `anthropic` | [codersdk.AIBridgeAnthropicConfig](#codersdkaibridgeanthropicconfig) | false | | |
| `bedrock` | [codersdk.AIBridgeBedrockConfig](#codersdkaibridgebedrockconfig) | false | | |
| `enabled` | boolean | false | | |
| `inject_coder_mcp_tools` | boolean | false | | |
| `openai` | [codersdk.AIBridgeOpenAIConfig](#codersdkaibridgeopenaiconfig) | false | | |
| `retention` | integer | false | | |

| Name | Type | Required | Restrictions | Description |
|--------------------------|----------------------------------------------------------------------|----------|--------------|-------------------------------|
| `anthropic` | [codersdk.AIBridgeAnthropicConfig](#codersdkaibridgeanthropicconfig) | false | | |
| `bedrock` | [codersdk.AIBridgeBedrockConfig](#codersdkaibridgebedrockconfig) | false | | |
| `enabled` | boolean | false | | |
| `inject_coder_mcp_tools` | boolean | false | | |
| `max_concurrency` | integer | false | | Overload protection settings. |
| `openai` | [codersdk.AIBridgeOpenAIConfig](#codersdkaibridgeopenaiconfig) | false | | |
| `rate_limit` | integer | false | | |
| `rate_window` | integer | false | | |
| `retention` | integer | false | | |

## codersdk.AIBridgeInterception

@@ -700,10 +706,13 @@

    },
    "enabled": true,
    "inject_coder_mcp_tools": true,
    "max_concurrency": 0,
    "openai": {
      "base_url": "string",
      "key": "string"
    },
    "rate_limit": 0,
    "rate_window": 0,
    "retention": 0
  }
}

@@ -2860,10 +2869,13 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o

    },
    "enabled": true,
    "inject_coder_mcp_tools": true,
    "max_concurrency": 0,
    "openai": {
      "base_url": "string",
      "key": "string"
    },
    "rate_limit": 0,
    "rate_window": 0,
    "retention": 0
  }
},

@@ -2917,7 +2929,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o

  "disable_owner_workspace_exec": true,
  "disable_password_auth": true,
  "disable_path_apps": true,
  "disable_workspace_sharing": true,
  "docs_url": {
    "forceQuery": true,
    "fragment": "string",

@@ -3383,10 +3394,13 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o

    },
    "enabled": true,
    "inject_coder_mcp_tools": true,
    "max_concurrency": 0,
    "openai": {
      "base_url": "string",
      "key": "string"
    },
    "rate_limit": 0,
    "rate_window": 0,
    "retention": 0
  }
},

@@ -3440,7 +3454,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o

  "disable_owner_workspace_exec": true,
  "disable_password_auth": true,
  "disable_path_apps": true,
  "disable_workspace_sharing": true,
  "docs_url": {
    "forceQuery": true,
    "fragment": "string",

@@ -3795,7 +3808,6 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o

| `disable_owner_workspace_exec` | boolean | false | | |
| `disable_password_auth` | boolean | false | | |
| `disable_path_apps` | boolean | false | | |
| `disable_workspace_sharing` | boolean | false | | |
| `docs_url` | [serpent.URL](#serpenturl) | false | | |
| `enable_authz_recording` | boolean | false | | |
| `enable_terraform_debug_mode` | boolean | false | | |
Generated

@@ -1115,16 +1115,6 @@ Disable workspace apps that are not served from subdomains. Path-based apps can

Remove the permission for the 'owner' role to have workspace execution on all workspaces. This prevents the 'owner' from ssh, apps, and terminal access based on the 'owner' role. They still have their user permissions to access their own workspaces.

### --disable-workspace-sharing

| | |
|-------------|-----------------------------------------------|
| Type | <code>bool</code> |
| Environment | <code>$CODER_DISABLE_WORKSPACE_SHARING</code> |
| YAML | <code>disableWorkspaceSharing</code> |

Disable workspace sharing (requires the "workspace-sharing" experiment to be enabled). Workspace ACL checking is disabled and only owners can have ssh, apps and terminal access to workspaces. Access based on the 'owner' role is also allowed unless disabled via --disable-owner-workspace-access.

### --session-duration

| | |

@@ -1781,6 +1771,39 @@ Whether to inject Coder's MCP tools into intercepted AI Bridge requests (require

Length of time to retain data such as interceptions and all related records (token, prompt, tool use).

### --aibridge-max-concurrency

| | |
|-------------|----------------------------------------------|
| Type | <code>int</code> |
| Environment | <code>$CODER_AIBRIDGE_MAX_CONCURRENCY</code> |
| YAML | <code>aibridge.max_concurrency</code> |
| Default | <code>0</code> |

Maximum number of concurrent AI Bridge requests. Set to 0 to disable (unlimited).

### --aibridge-rate-limit

| | |
|-------------|-----------------------------------------|
| Type | <code>int</code> |
| Environment | <code>$CODER_AIBRIDGE_RATE_LIMIT</code> |
| YAML | <code>aibridge.rate_limit</code> |
| Default | <code>0</code> |

Maximum number of AI Bridge requests per rate window. Set to 0 to disable rate limiting.

### --aibridge-rate-window

| | |
|-------------|------------------------------------------|
| Type | <code>duration</code> |
| Environment | <code>$CODER_AIBRIDGE_RATE_WINDOW</code> |
| YAML | <code>aibridge.rate_window</code> |
| Default | <code>1m</code> |

Duration of the rate limiting window for AI Bridge requests.
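Taken together, the three AI Bridge overload-protection flags map onto the YAML keys documented above. A sketch of a server config fragment with illustrative, non-default values:

```yaml
# Sketch only; values are illustrative. 0 disables a limit.
aibridge:
  max_concurrency: 32 # at most 32 in-flight AI Bridge requests
  rate_limit: 120     # at most 120 requests per rate window
  rate_window: 1m     # the window that rate_limit is measured over
```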

### --audit-logs-retention

| | |
@@ -54,7 +54,7 @@ and shown in the UI, but users must manually start it.

> [!NOTE]
>
> The `autoStart` option only takes effect when your template administrator has
> enabled [`CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE`](../../admin/integrations/devcontainers/integration.md#coder_agent_devcontainers_discovery_autostart_enable).
> enabled [`CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE`](../../admin/templates/extending-templates/devcontainers.md#coder_agent_devcontainers_discovery_autostart_enable).
> If this setting is disabled at the template level, containers won't auto-start
> regardless of this option.

@@ -84,8 +84,6 @@ appears in `coder ssh` commands and the dashboard (e.g.,

Control which built-in Coder apps appear for your dev container using
`displayApps`:

_Disable built-in apps to reduce clutter or guide developers toward preferred tools_

```json
{
  "name": "My Dev Container",
@@ -1,17 +1,10 @@

# Dev Containers
# Dev Containers Integration

[Dev containers](https://containers.dev/) define your development environment
as code using a `devcontainer.json` file. Coder's Dev Containers integration
uses the [`@devcontainers/cli`](https://github.com/devcontainers/cli) and
[Docker](https://www.docker.com) to seamlessly build and run these containers,
with management in your dashboard.

This guide covers the Dev Containers integration. For workspaces without Docker,
administrators can configure
[Envbuilder](../../admin/integrations/devcontainers/envbuilder/index.md) instead,
which builds the workspace image itself from your dev container configuration.

_Dev containers appear as sub-agents with their own apps, SSH access, and port forwarding_

The Dev Containers integration enables seamless creation and management of dev
containers in Coder workspaces. This feature leverages the
[`@devcontainers/cli`](https://github.com/devcontainers/cli) and
[Docker](https://www.docker.com) to provide a streamlined development
experience.

## Prerequisites

@@ -21,8 +14,8 @@ which builds the workspace image itself from your dev container configuration.

Dev Containers integration is enabled by default. Your workspace needs Docker
(via Docker-in-Docker or a mounted socket) and the devcontainers CLI. Most
templates with Dev Containers support include both. See
[Configure a template for dev containers](../../admin/integrations/devcontainers/integration.md)
templates with Dev Containers support include both—see
[Configure a template for dev containers](../../admin/templates/extending-templates/devcontainers.md)
for setup details.

## Features

@@ -69,8 +62,6 @@ Coder automatically discovers dev container configurations in your repositories

and displays them in your workspace dashboard. From there, you can start a dev
container with a single click.

_Coder detects dev container configurations and displays them with a Start button_

If your template administrator has configured automatic startup (via the
`coder_devcontainer` Terraform resource or autostart settings), your dev
container will build and start automatically when the workspace starts.

@@ -118,8 +109,8 @@ in your `devcontainer.json`.

## Limitations

- **Linux only**: Dev Containers are currently not supported in Windows or
  macOS workspaces
- **Linux and macOS only** — Dev Containers are not supported on Windows
  workspaces
- Changes to `devcontainer.json` require manual rebuild using the dashboard
  button
- The `forwardPorts` property in `devcontainer.json` with `host:port` syntax

@@ -128,6 +119,10 @@ in your `devcontainer.json`.

  access ports directly on the sub-agent.
- Some advanced dev container features may have limited support

> [!NOTE]
> If your template uses Envbuilder rather than Docker-based dev containers, see
> the [Envbuilder documentation](../../admin/templates/managing-templates/envbuilder/index.md).

## Next steps

- [Working with dev containers](./working-with-dev-containers.md) — SSH, IDE
@@ -3,7 +3,7 @@
The dev container integration appears in your Coder dashboard, providing a
visual representation of the running environment:

_Dev containers appear as sub-agents with their own apps, SSH access, and port forwarding_


## SSH access

@@ -152,6 +152,4 @@ When you modify your `devcontainer.json`, you need to rebuild the container for
changes to take effect. Coder detects changes and shows an **Outdated** status
next to the dev container.

_The Outdated indicator appears when changes to devcontainer.json are detected_

Click **Rebuild** to recreate your dev container with the updated configuration.

@@ -214,7 +214,7 @@ RUN sed -i 's|http://archive.ubuntu.com/ubuntu/|http://mirrors.edge.kernel.org/u

# NOTE: In scripts/Dockerfile.base we specifically install Terraform version 1.12.2.
# Installing the same version here to match.
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.14.1/terraform_1.14.1_linux_amd64.zip" && \
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.13.4/terraform_1.13.4_linux_amd64.zip" && \
    unzip /tmp/terraform.zip -d /usr/local/bin && \
    rm -f /tmp/terraform.zip && \
    chmod +x /usr/local/bin/terraform && \

@@ -596,14 +596,6 @@ resource "coder_agent" "dev" {
    # Allow synchronization between scripts.
    trap 'touch /tmp/.coder-startup-script.done' EXIT

    # Authenticate GitHub CLI
    if ! gh auth status >/dev/null 2>&1; then
      echo "Logging into GitHub CLI…"
      coder external-auth access-token github | gh auth login --hostname github.com --with-token
    else
      echo "Already logged into GitHub CLI."
    fi

    # Increase the shutdown timeout of the docker service for improved cleanup.
    # The 240 was picked as it's lower than the 300 seconds we set for the
    # container shutdown grace period.
@@ -839,13 +831,6 @@ locals {
- Built-in tools - use for everything else:
  (file operations, git commands, builds & installs, one-off shell commands)

-- Workflow --
When starting new work:
1. If given a GitHub issue URL, use the `gh` CLI to read the full issue details with `gh issue view <issue-number>`.
2. Create a feature branch for the work using a descriptive name based on the issue or task.
   Example: `git checkout -b fix/issue-123-oauth-error` or `git checkout -b feat/add-dark-mode`
3. Proceed with implementation following the CLAUDE.md guidelines.

-- Context --
There is an existing application in the current directory.
Be sure to read CLAUDE.md before making any changes.

@@ -33,6 +33,9 @@ type Server struct {
	// A pool of [aibridge.RequestBridge] instances, which service incoming requests.
	requestBridgePool Pooler

	// overloadProtection provides rate limiting and concurrency control.
	overloadProtection *OverloadProtection

	logger slog.Logger
	tracer trace.Tracer
	wg     sync.WaitGroup
@@ -50,7 +53,7 @@ type Server struct {
	shutdownOnce sync.Once
}

func New(ctx context.Context, pool Pooler, rpcDialer Dialer, logger slog.Logger, tracer trace.Tracer) (*Server, error) {
func New(ctx context.Context, pool Pooler, rpcDialer Dialer, logger slog.Logger, tracer trace.Tracer, overloadCfg *OverloadConfig) (*Server, error) {
	if rpcDialer == nil {
		return nil, xerrors.Errorf("nil rpcDialer given")
	}
@@ -68,6 +71,16 @@ func New(ctx context.Context, pool Pooler, rpcDialer Dialer, logger slog.Logger,
		requestBridgePool: pool,
	}

	// Initialize overload protection if configured.
	if overloadCfg != nil {
		daemon.overloadProtection = NewOverloadProtection(*overloadCfg, logger)
		logger.Info(ctx, "overload protection enabled",
			slog.F("max_concurrency", overloadCfg.MaxConcurrency),
			slog.F("rate_limit", overloadCfg.RateLimit),
			slog.F("rate_window", overloadCfg.RateWindow),
		)
	}

	daemon.wg.Add(1)
	go daemon.connect()


@@ -189,7 +189,7 @@ func TestIntegration(t *testing.T) {
	// Given: aibridged is started.
	srv, err := aibridged.New(t.Context(), pool, func(ctx context.Context) (aibridged.DRPCClient, error) {
		return aiBridgeClient, nil
	}, logger, tracer)
	}, logger, tracer, nil)
	require.NoError(t, err, "create new aibridged")
	t.Cleanup(func() {
		_ = srv.Shutdown(ctx)
@@ -382,7 +382,7 @@ func TestIntegrationWithMetrics(t *testing.T) {
	// Given: aibridged is started.
	srv, err := aibridged.New(ctx, pool, func(ctx context.Context) (aibridged.DRPCClient, error) {
		return aiBridgeClient, nil
	}, logger, testTracer)
	}, logger, testTracer, nil)
	require.NoError(t, err, "create new aibridged")
	t.Cleanup(func() {
		_ = srv.Shutdown(ctx)

@@ -41,7 +41,7 @@ func newTestServer(t *testing.T) (*aibridged.Server, *mock.MockDRPCClient, *mock
		pool,
		func(ctx context.Context) (aibridged.DRPCClient, error) {
			return client, nil
		}, logger, testTracer)
		}, logger, testTracer, nil)
	require.NoError(t, err, "create new aibridged")
	t.Cleanup(func() {
		srv.Shutdown(context.Background())
@@ -309,7 +309,7 @@ func TestRouting(t *testing.T) {
	// Given: aibridged is started.
	srv, err := aibridged.New(t.Context(), pool, func(ctx context.Context) (aibridged.DRPCClient, error) {
		return client, nil
	}, logger, testTracer)
	}, logger, testTracer, nil)
	require.NoError(t, err, "create new aibridged")
	t.Cleanup(func() {
		_ = srv.Shutdown(testutil.Context(t, testutil.WaitShort))

@@ -19,8 +19,19 @@ var (
	ErrConnect               = xerrors.New("could not connect to coderd")
	ErrUnauthorized          = xerrors.New("unauthorized")
	ErrAcquireRequestHandler = xerrors.New("failed to acquire request handler")
	ErrOverloaded            = xerrors.New("server is overloaded")
)

// Handler returns an http.Handler that wraps the server with any configured
// overload protection (rate limiting and concurrency control).
func (s *Server) Handler() http.Handler {
	var handler http.Handler = s
	if s.overloadProtection != nil {
		handler = s.overloadProtection.WrapHandler(handler)
	}
	return handler
}

// ServeHTTP is the entrypoint for requests which will be intercepted by AI Bridge.
// This function will validate that the given API key may be used to perform the request.
//

@@ -0,0 +1,119 @@
package aibridged

import (
	"net/http"
	"sync/atomic"
	"time"

	"github.com/go-chi/httprate"

	"cdr.dev/slog"
	"github.com/coder/coder/v2/coderd/httpapi"
	"github.com/coder/coder/v2/codersdk"
)

// OverloadConfig configures overload protection for the AI Bridge server.
type OverloadConfig struct {
	// MaxConcurrency is the maximum number of concurrent requests allowed.
	// Set to 0 to disable concurrency limiting.
	MaxConcurrency int64

	// RateLimit is the maximum number of requests per RateWindow.
	// Set to 0 to disable rate limiting.
	RateLimit int64

	// RateWindow is the duration of the rate limiting window.
	RateWindow time.Duration
}

// OverloadProtection provides middleware for protecting the AI Bridge server
// from overload conditions.
type OverloadProtection struct {
	config OverloadConfig
	logger slog.Logger

	// currentConcurrency tracks the number of concurrent requests.
	currentConcurrency atomic.Int64

	// rateLimiter is the rate limiting middleware.
	rateLimiter func(http.Handler) http.Handler
}

// NewOverloadProtection creates a new OverloadProtection instance.
func NewOverloadProtection(config OverloadConfig, logger slog.Logger) *OverloadProtection {
	op := &OverloadProtection{
		config: config,
		logger: logger.Named("overload"),
	}

	// Initialize rate limiter if configured.
	if config.RateLimit > 0 && config.RateWindow > 0 {
		op.rateLimiter = httprate.Limit(
			int(config.RateLimit),
			config.RateWindow,
			httprate.WithKeyFuncs(httprate.KeyByIP),
			httprate.WithLimitHandler(func(w http.ResponseWriter, r *http.Request) {
				httpapi.Write(r.Context(), w, http.StatusTooManyRequests, codersdk.Response{
					Message: "AI Bridge rate limit exceeded. Please try again later.",
				})
			}),
		)
	}

	return op
}

// ConcurrencyMiddleware returns a middleware that limits concurrent requests.
// Returns nil if concurrency limiting is disabled.
func (op *OverloadProtection) ConcurrencyMiddleware() func(http.Handler) http.Handler {
	if op.config.MaxConcurrency <= 0 {
		return nil
	}

	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			current := op.currentConcurrency.Add(1)
			defer op.currentConcurrency.Add(-1)

			if current > op.config.MaxConcurrency {
				op.logger.Warn(r.Context(), "ai bridge concurrency limit exceeded",
					slog.F("current", current),
					slog.F("max", op.config.MaxConcurrency),
				)
				httpapi.Write(r.Context(), w, http.StatusServiceUnavailable, codersdk.Response{
					Message: "AI Bridge is currently at capacity. Please try again later.",
				})
				return
			}

			next.ServeHTTP(w, r)
		})
	}
}

// RateLimitMiddleware returns a middleware that limits the rate of requests.
// Returns nil if rate limiting is disabled.
func (op *OverloadProtection) RateLimitMiddleware() func(http.Handler) http.Handler {
	return op.rateLimiter
}

// CurrentConcurrency returns the current number of concurrent requests.
func (op *OverloadProtection) CurrentConcurrency() int64 {
	return op.currentConcurrency.Load()
}

// WrapHandler wraps the given handler with all enabled overload protection
// middleware.
func (op *OverloadProtection) WrapHandler(handler http.Handler) http.Handler {
	// Apply rate limiting first (cheaper check).
	if op.rateLimiter != nil {
		handler = op.rateLimiter(handler)
	}

	// Then apply concurrency limiting.
	if concurrencyMW := op.ConcurrencyMiddleware(); concurrencyMW != nil {
		handler = concurrencyMW(handler)
	}

	return handler
}
@@ -0,0 +1,226 @@
package aibridged_test

import (
	"net/http"
	"net/http/httptest"
	"sync"
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"cdr.dev/slog"
	"cdr.dev/slog/sloggers/slogtest"
	"github.com/coder/coder/v2/enterprise/aibridged"
	"github.com/coder/coder/v2/testutil"
)

func TestOverloadProtection_ConcurrencyLimit(t *testing.T) {
	t.Parallel()

	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)

	t.Run("allows_requests_within_limit", func(t *testing.T) {
		t.Parallel()

		op := aibridged.NewOverloadProtection(aibridged.OverloadConfig{
			MaxConcurrency: 5,
		}, logger)

		var handlerCalls atomic.Int32
		handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			handlerCalls.Add(1)
			w.WriteHeader(http.StatusOK)
		})

		wrapped := op.WrapHandler(handler)

		// Make 5 requests in sequence - all should succeed.
		for i := 0; i < 5; i++ {
			req := httptest.NewRequest(http.MethodGet, "/", nil)
			rec := httptest.NewRecorder()
			wrapped.ServeHTTP(rec, req)
			assert.Equal(t, http.StatusOK, rec.Code)
		}

		assert.Equal(t, int32(5), handlerCalls.Load())
	})

	t.Run("rejects_requests_over_limit", func(t *testing.T) {
		t.Parallel()

		op := aibridged.NewOverloadProtection(aibridged.OverloadConfig{
			MaxConcurrency: 2,
		}, logger)

		// Create a handler that blocks until we release it.
		blocked := make(chan struct{})
		var handlerCalls atomic.Int32
		handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			handlerCalls.Add(1)
			<-blocked
			w.WriteHeader(http.StatusOK)
		})

		wrapped := op.WrapHandler(handler)

		// Start 2 requests that will block.
		var wg sync.WaitGroup
		for i := 0; i < 2; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				req := httptest.NewRequest(http.MethodGet, "/", nil)
				rec := httptest.NewRecorder()
				wrapped.ServeHTTP(rec, req)
			}()
		}

		// Wait for the handlers to be called.
		require.Eventually(t, func() bool {
			return handlerCalls.Load() == 2
		}, testutil.WaitShort, testutil.IntervalFast)

		// Make a third request - it should be rejected.
		req := httptest.NewRequest(http.MethodGet, "/", nil)
		rec := httptest.NewRecorder()
		wrapped.ServeHTTP(rec, req)
		assert.Equal(t, http.StatusServiceUnavailable, rec.Code)

		// Verify current concurrency is 2.
		assert.Equal(t, int64(2), op.CurrentConcurrency())

		// Unblock the handlers.
		close(blocked)
		wg.Wait()

		// Verify concurrency is back to 0.
		assert.Equal(t, int64(0), op.CurrentConcurrency())
	})

	t.Run("disabled_when_zero", func(t *testing.T) {
		t.Parallel()

		op := aibridged.NewOverloadProtection(aibridged.OverloadConfig{
			MaxConcurrency: 0, // Disabled.
		}, logger)

		assert.Nil(t, op.ConcurrencyMiddleware())
	})
}

func TestOverloadProtection_RateLimit(t *testing.T) {
	t.Parallel()

	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)

	t.Run("allows_requests_within_limit", func(t *testing.T) {
		t.Parallel()

		op := aibridged.NewOverloadProtection(aibridged.OverloadConfig{
			RateLimit:  5,
			RateWindow: time.Minute,
		}, logger)

		var handlerCalls atomic.Int32
		handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			handlerCalls.Add(1)
			w.WriteHeader(http.StatusOK)
		})

		wrapped := op.WrapHandler(handler)

		// Make 5 requests - all should succeed.
		for i := 0; i < 5; i++ {
			req := httptest.NewRequest(http.MethodGet, "/", nil)
			rec := httptest.NewRecorder()
			wrapped.ServeHTTP(rec, req)
			assert.Equal(t, http.StatusOK, rec.Code)
		}

		assert.Equal(t, int32(5), handlerCalls.Load())
	})

	t.Run("rejects_requests_over_limit", func(t *testing.T) {
		t.Parallel()

		op := aibridged.NewOverloadProtection(aibridged.OverloadConfig{
			RateLimit:  2,
			RateWindow: time.Minute,
		}, logger)

		var handlerCalls atomic.Int32
		handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			handlerCalls.Add(1)
			w.WriteHeader(http.StatusOK)
		})

		wrapped := op.WrapHandler(handler)

		// Make 3 requests - first 2 should succeed, 3rd should be rate limited.
		for i := 0; i < 3; i++ {
			req := httptest.NewRequest(http.MethodGet, "/", nil)
			rec := httptest.NewRecorder()
			wrapped.ServeHTTP(rec, req)

			if i < 2 {
				assert.Equal(t, http.StatusOK, rec.Code)
			} else {
				assert.Equal(t, http.StatusTooManyRequests, rec.Code)
			}
		}

		assert.Equal(t, int32(2), handlerCalls.Load())
	})

	t.Run("disabled_when_zero", func(t *testing.T) {
		t.Parallel()

		op := aibridged.NewOverloadProtection(aibridged.OverloadConfig{
			RateLimit: 0, // Disabled.
		}, logger)

		assert.Nil(t, op.RateLimitMiddleware())
	})
}

func TestOverloadProtection_Combined(t *testing.T) {
	t.Parallel()

	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)

	t.Run("both_limits_applied", func(t *testing.T) {
		t.Parallel()

		op := aibridged.NewOverloadProtection(aibridged.OverloadConfig{
			MaxConcurrency: 10,
			RateLimit:      3,
			RateWindow:     time.Minute,
		}, logger)

		var handlerCalls atomic.Int32
		handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			handlerCalls.Add(1)
			w.WriteHeader(http.StatusOK)
		})

		wrapped := op.WrapHandler(handler)

		// Make 4 requests - first 3 should succeed, 4th should be rate limited.
		for i := 0; i < 4; i++ {
			req := httptest.NewRequest(http.MethodGet, "/", nil)
			rec := httptest.NewRecorder()
			wrapped.ServeHTTP(rec, req)

			if i < 3 {
				assert.Equal(t, http.StatusOK, rec.Code)
			} else {
				assert.Equal(t, http.StatusTooManyRequests, rec.Code)
			}
		}

		assert.Equal(t, int32(3), handlerCalls.Load())
	})
}
@@ -44,10 +44,21 @@ func newAIBridgeDaemon(coderAPI *coderd.API) (*aibridged.Server, error) {
		return nil, xerrors.Errorf("create request pool: %w", err)
	}

	// Configure overload protection if any limits are set.
	var overloadCfg *aibridged.OverloadConfig
	bridgeCfg := coderAPI.DeploymentValues.AI.BridgeConfig
	if bridgeCfg.MaxConcurrency.Value() > 0 || bridgeCfg.RateLimit.Value() > 0 {
		overloadCfg = &aibridged.OverloadConfig{
			MaxConcurrency: bridgeCfg.MaxConcurrency.Value(),
			RateLimit:      bridgeCfg.RateLimit.Value(),
			RateWindow:     bridgeCfg.RateWindow.Value(),
		}
	}

	// Create daemon.
	srv, err := aibridged.New(ctx, pool, func(dialCtx context.Context) (aibridged.DRPCClient, error) {
		return coderAPI.CreateInMemoryAIBridgeServer(dialCtx)
	}, logger, tracer)
	}, logger, tracer, overloadCfg)
	if err != nil {
		return nil, xerrors.Errorf("start in-memory aibridge daemon: %w", err)
	}

@@ -371,9 +371,6 @@ func TestEnterpriseCreateWithPreset(t *testing.T) {
			notifications.NewNoopEnqueuer(),
			newNoopUsageCheckerPtr(),
		)
		t.Cleanup(func() {
			reconciler.Stop(context.Background(), nil)
		})
		var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
		api.AGPL.PrebuildsClaimer.Store(&claimer)

@@ -485,9 +482,6 @@ func TestEnterpriseCreateWithPreset(t *testing.T) {
			notifications.NewNoopEnqueuer(),
			newNoopUsageCheckerPtr(),
		)
		t.Cleanup(func() {
			reconciler.Stop(context.Background(), nil)
		})
		var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
		api.AGPL.PrebuildsClaimer.Store(&claimer)

@@ -47,13 +47,6 @@ OPTIONS:
          the workspace serves malicious JavaScript. This is recommended for
          security purposes if a --wildcard-access-url is configured.

      --disable-workspace-sharing bool, $CODER_DISABLE_WORKSPACE_SHARING
          Disable workspace sharing (requires the "workspace-sharing" experiment
          to be enabled). Workspace ACL checking is disabled and only owners can
          have ssh, apps and terminal access to workspaces. Access based on the
          'owner' role is also allowed unless disabled via
          --disable-owner-workspace-access.

      --swagger-enable bool, $CODER_SWAGGER_ENABLE
          Expose the swagger endpoint via /swagger.

@@ -126,12 +119,23 @@ AI BRIDGE OPTIONS:
          requests (requires the "oauth2" and "mcp-server-http" experiments to
          be enabled).

      --aibridge-max-concurrency int, $CODER_AIBRIDGE_MAX_CONCURRENCY (default: 0)
          Maximum number of concurrent AI Bridge requests. Set to 0 to disable
          (unlimited).

      --aibridge-openai-base-url string, $CODER_AIBRIDGE_OPENAI_BASE_URL (default: https://api.openai.com/v1/)
          The base URL of the OpenAI API.

      --aibridge-openai-key string, $CODER_AIBRIDGE_OPENAI_KEY
          The key to authenticate against the OpenAI API.

      --aibridge-rate-limit int, $CODER_AIBRIDGE_RATE_LIMIT (default: 0)
          Maximum number of AI Bridge requests per rate window. Set to 0 to
          disable rate limiting.

      --aibridge-rate-window duration, $CODER_AIBRIDGE_RATE_WINDOW (default: 1m)
          Duration of the rate limiting window for AI Bridge requests.

CLIENT OPTIONS:
These options change the behavior of how clients interact with the Coder.
Clients include the Coder CLI, Coder Desktop, IDE extensions, and the web UI.

@@ -971,7 +971,7 @@ func (api *API) updateEntitlements(ctx context.Context) error {

var _ wsbuilder.UsageChecker = &API{}

func (api *API) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
func (api *API) CheckBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion) (wsbuilder.UsageCheckResponse, error) {
	// If the template version has an external agent, we need to check that the
	// license is entitled to this feature.
	if templateVersion.HasExternalAgent.Valid && templateVersion.HasExternalAgent.Bool {
@@ -984,31 +984,16 @@ func (api *API) CheckBuildUsage(ctx context.Context, store database.Store, templ
		}
	}

	resp, err := api.checkAIBuildUsage(ctx, store, templateVersion, transition)
	if err != nil {
		return wsbuilder.UsageCheckResponse{}, err
	}
	if !resp.Permitted {
		return resp, nil
	}

	return wsbuilder.UsageCheckResponse{Permitted: true}, nil
}

// checkAIBuildUsage validates AI-related usage constraints. It is a no-op
// unless the transition is "start" and the template version has an AI task.
func (api *API) checkAIBuildUsage(ctx context.Context, store database.Store, templateVersion *database.TemplateVersion, transition database.WorkspaceTransition) (wsbuilder.UsageCheckResponse, error) {
	// Only check AI usage rules for start transitions.
	if transition != database.WorkspaceTransitionStart {
		return wsbuilder.UsageCheckResponse{Permitted: true}, nil
	}

	// If the template version doesn't have an AI task, we don't need to check usage.
	// If the template version doesn't have an AI task, we don't need to check
	// usage.
	if !templateVersion.HasAITask.Valid || !templateVersion.HasAITask.Bool {
		return wsbuilder.UsageCheckResponse{Permitted: true}, nil
		return wsbuilder.UsageCheckResponse{
			Permitted: true,
		}, nil
	}

	// When licensed, ensure we haven't breached the managed agent limit.
	// When unlicensed, we need to check that we haven't breached the managed agent
	// limit.
	// Unlicensed deployments are allowed to use unlimited managed agents.
	if api.Entitlements.HasLicense() {
		managedAgentLimit, ok := api.Entitlements.Feature(codersdk.FeatureManagedAgentLimit)
@@ -1019,9 +1004,8 @@ func (api *API) checkAIBuildUsage(ctx context.Context, store database.Store, tem
			}, nil
		}

		// This check is intentionally not committed to the database. It's fine
		// if it's not 100% accurate or allows for minor breaches due to build
		// races.
		// This check is intentionally not committed to the database. It's fine if
		// it's not 100% accurate or allows for minor breaches due to build races.
		// nolint:gocritic // Requires permission to read all usage events.
		managedAgentCount, err := store.GetTotalUsageDCManagedAgentsV1(agpldbauthz.AsSystemRestricted(ctx), database.GetTotalUsageDCManagedAgentsV1Params{
			StartDate: managedAgentLimit.UsagePeriod.Start,
@@ -1039,7 +1023,9 @@ func (api *API) checkAIBuildUsage(ctx context.Context, store database.Store, tem
		}
	}

	return wsbuilder.UsageCheckResponse{Permitted: true}, nil
	return wsbuilder.UsageCheckResponse{
		Permitted: true,
	}, nil
}

// getProxyDERPStartingRegionID returns the starting region ID that should be

@@ -3,7 +3,6 @@ package coderd_test
import (
	"bytes"
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"io"
@@ -22,7 +21,6 @@ import (
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"go.uber.org/goleak"
	"go.uber.org/mock/gomock"

	"cdr.dev/slog"
	"cdr.dev/slog/sloggers/slogtest"
@@ -41,16 +39,13 @@ import (
	"github.com/coder/retry"
	"github.com/coder/serpent"

	agplcoderd "github.com/coder/coder/v2/coderd"
	agplaudit "github.com/coder/coder/v2/coderd/audit"
	"github.com/coder/coder/v2/coderd/coderdtest"
	"github.com/coder/coder/v2/coderd/database"
	"github.com/coder/coder/v2/coderd/database/dbauthz"
	"github.com/coder/coder/v2/coderd/database/dbfake"
	"github.com/coder/coder/v2/coderd/database/dbmock"
	"github.com/coder/coder/v2/coderd/database/dbtestutil"
	"github.com/coder/coder/v2/coderd/database/dbtime"
	"github.com/coder/coder/v2/coderd/entitlements"
	"github.com/coder/coder/v2/coderd/rbac"
	"github.com/coder/coder/v2/codersdk"
	"github.com/coder/coder/v2/codersdk/workspacesdk"
@@ -640,18 +635,18 @@ func TestManagedAgentLimit(t *testing.T) {
|
||||
})
|
||||
|
||||
// Get entitlements to check that the license is a-ok.
|
||||
sdkEntitlements, err := cli.Entitlements(ctx) //nolint:gocritic // we're not testing authz on the entitlements endpoint, so using owner is fine
|
||||
entitlements, err := cli.Entitlements(ctx) //nolint:gocritic // we're not testing authz on the entitlements endpoint, so using owner is fine
|
||||
require.NoError(t, err)
|
||||
require.True(t, sdkEntitlements.HasLicense)
|
||||
agentLimit := sdkEntitlements.Features[codersdk.FeatureManagedAgentLimit]
|
||||
require.True(t, entitlements.HasLicense)
|
||||
agentLimit := entitlements.Features[codersdk.FeatureManagedAgentLimit]
|
||||
require.True(t, agentLimit.Enabled)
|
||||
require.NotNil(t, agentLimit.Limit)
|
||||
require.EqualValues(t, 1, *agentLimit.Limit)
|
||||
require.NotNil(t, agentLimit.SoftLimit)
|
||||
require.EqualValues(t, 1, *agentLimit.SoftLimit)
|
||||
require.Empty(t, sdkEntitlements.Errors)
|
||||
require.Empty(t, entitlements.Errors)
|
||||
// There should be a warning since we're really close to our agent limit.
|
||||
require.Equal(t, sdkEntitlements.Warnings[0], "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.")
|
||||
require.Equal(t, entitlements.Warnings[0], "You are approaching the managed agent limit in your license. Please refer to the Deployment Licenses page for more information.")
|
||||
|
||||
// Create a fake provision response that claims there are agents in the
|
||||
// template and every built workspace.
|
||||
@@ -728,69 +723,6 @@ func TestManagedAgentLimit(t *testing.T) {
|
||||
coderdtest.AwaitWorkspaceBuildJobCompleted(t, cli, workspace.LatestBuild.ID)
|
||||
}
|
||||
|
||||
func TestCheckBuildUsage_SkipsAIForNonStartTransitions(t *testing.T) {
	t.Parallel()

	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	// Prepare entitlements with a managed agent limit to enforce.
	entSet := entitlements.New()
	entSet.Modify(func(e *codersdk.Entitlements) {
		e.HasLicense = true
		limit := int64(1)
		issuedAt := time.Now().Add(-2 * time.Hour)
		start := time.Now().Add(-time.Hour)
		end := time.Now().Add(time.Hour)
		e.Features[codersdk.FeatureManagedAgentLimit] = codersdk.Feature{
			Enabled:     true,
			Limit:       &limit,
			UsagePeriod: &codersdk.UsagePeriod{IssuedAt: issuedAt, Start: start, End: end},
		}
	})

	// Enterprise API instance with entitlements injected.
	agpl := &agplcoderd.API{
		Options: &agplcoderd.Options{
			Entitlements: entSet,
		},
	}
	eapi := &coderd.API{
		AGPL:    agpl,
		Options: &coderd.Options{Options: agpl.Options},
	}

	// Template version that has an AI task.
	tv := &database.TemplateVersion{
		HasAITask:        sql.NullBool{Valid: true, Bool: true},
		HasExternalAgent: sql.NullBool{Valid: true, Bool: false},
	}

	// Mock DB: expect exactly one count call for the "start" transition.
	mDB := dbmock.NewMockStore(ctrl)
	mDB.EXPECT().
		GetTotalUsageDCManagedAgentsV1(gomock.Any(), gomock.Any()).
		Times(1).
		Return(int64(1), nil) // equal to limit -> should breach

	ctx := context.Background()

	// Start transition: should not be permitted due to limit breach.
	startResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, database.WorkspaceTransitionStart)
	require.NoError(t, err)
	require.False(t, startResp.Permitted)
	require.Contains(t, startResp.Message, "breached the managed agent limit")

	// Stop transition: should be permitted and must not trigger additional DB calls.
	stopResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, database.WorkspaceTransitionStop)
	require.NoError(t, err)
	require.True(t, stopResp.Permitted)

	// Delete transition: should be permitted and must not trigger additional DB calls.
	deleteResp, err := eapi.CheckBuildUsage(ctx, mDB, tv, database.WorkspaceTransitionDelete)
	require.NoError(t, err)
	require.True(t, deleteResp.Permitted)
}

// testDBAuthzRole returns a context with a subject that has a role
// with permissions required for test setup.
func testDBAuthzRole(ctx context.Context) context.Context {
@@ -168,9 +168,6 @@ func TestClaimPrebuild(t *testing.T) {

	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	reconciler := prebuilds.NewStoreReconciler(spy, pubsub, cache, codersdk.PrebuildsConfig{}, logger, quartz.NewMock(t), prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(spy)
	api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -1,7 +1,6 @@
package prebuilds_test

import (
	"context"
	"fmt"
	"slices"
	"testing"

@@ -199,9 +198,6 @@ func TestMetricsCollector(t *testing.T) {
	db, pubsub := dbtestutil.NewDB(t)
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	reconciler := prebuilds.NewStoreReconciler(db, pubsub, cache, codersdk.PrebuildsConfig{}, logger, quartz.NewMock(t), prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	ctx := testutil.Context(t, testutil.WaitLong)

	createdUsers := []uuid.UUID{database.PrebuildsSystemUserID}

@@ -334,9 +330,6 @@ func TestMetricsCollector_DuplicateTemplateNames(t *testing.T) {
	db, pubsub := dbtestutil.NewDB(t)
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	reconciler := prebuilds.NewStoreReconciler(db, pubsub, cache, codersdk.PrebuildsConfig{}, logger, quartz.NewMock(t), prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	ctx := testutil.Context(t, testutil.WaitLong)

	collector := prebuilds.NewMetricsCollector(db, logger, reconciler)

@@ -485,9 +478,6 @@ func TestMetricsCollector_ReconciliationPausedMetric(t *testing.T) {
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	registry := prometheus.NewPedanticRegistry()
	reconciler := prebuilds.NewStoreReconciler(db, pubsub, cache, codersdk.PrebuildsConfig{}, logger, quartz.NewMock(t), registry, newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	ctx := testutil.Context(t, testutil.WaitLong)

	// Ensure no pause setting is set (default state)

@@ -517,9 +507,6 @@ func TestMetricsCollector_ReconciliationPausedMetric(t *testing.T) {
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	registry := prometheus.NewPedanticRegistry()
	reconciler := prebuilds.NewStoreReconciler(db, pubsub, cache, codersdk.PrebuildsConfig{}, logger, quartz.NewMock(t), registry, newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	ctx := testutil.Context(t, testutil.WaitLong)

	// Set reconciliation to paused

@@ -549,9 +536,6 @@ func TestMetricsCollector_ReconciliationPausedMetric(t *testing.T) {
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	registry := prometheus.NewPedanticRegistry()
	reconciler := prebuilds.NewStoreReconciler(db, pubsub, cache, codersdk.PrebuildsConfig{}, logger, quartz.NewMock(t), registry, newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	ctx := testutil.Context(t, testutil.WaitLong)

	// Set reconciliation back to not paused
@@ -26,7 +26,6 @@ import (
	"github.com/coder/coder/v2/coderd/database/dbauthz"
	"github.com/coder/coder/v2/coderd/database/provisionerjobs"
	"github.com/coder/coder/v2/coderd/database/pubsub"
	"github.com/coder/coder/v2/coderd/dynamicparameters"
	"github.com/coder/coder/v2/coderd/files"
	"github.com/coder/coder/v2/coderd/notifications"
	"github.com/coder/coder/v2/coderd/prebuilds"

@@ -59,9 +58,6 @@ type StoreReconciler struct {
	metrics *MetricsCollector
	// Operational metrics
	reconciliationDuration prometheus.Histogram

	// renderCache caches template rendering results to avoid expensive re-parsing
	renderCache dynamicparameters.RenderCache
}

var _ prebuilds.ReconciliationOrchestrator = &StoreReconciler{}

@@ -123,30 +119,6 @@ func NewStoreReconciler(store database.Store,
		Help:    "Duration of each prebuilds reconciliation cycle.",
		Buckets: prometheus.DefBuckets,
	})

	// Create metrics for the render cache
	renderCacheHits := factory.NewCounter(prometheus.CounterOpts{
		Namespace: "coderd",
		Subsystem: "prebuilds",
		Name:      "render_cache_hits_total",
		Help:      "Total number of render cache hits.",
	})
	renderCacheMisses := factory.NewCounter(prometheus.CounterOpts{
		Namespace: "coderd",
		Subsystem: "prebuilds",
		Name:      "render_cache_misses_total",
		Help:      "Total number of render cache misses.",
	})
	renderCacheSize := factory.NewGauge(prometheus.GaugeOpts{
		Namespace: "coderd",
		Subsystem: "prebuilds",
		Name:      "render_cache_size_entries",
		Help:      "Current number of entries in the render cache.",
	})

	reconciler.renderCache = dynamicparameters.NewRenderCacheWithMetrics(renderCacheHits, renderCacheMisses, renderCacheSize)
	} else {
		reconciler.renderCache = dynamicparameters.NewRenderCache()
	}

	return reconciler

@@ -268,10 +240,6 @@ func (c *StoreReconciler) Stop(ctx context.Context, cause error) {
		}
	}

	// Close the render cache to stop its cleanup goroutine
	// This must be done regardless of whether the reconciler is running
	c.renderCache.Close()

	// If the reconciler is not running, there's nothing else to do.
	if !c.running.Load() {
		return
@@ -932,8 +900,7 @@ func (c *StoreReconciler) provision(
	builder := wsbuilder.New(workspace, transition, *c.buildUsageChecker.Load()).
		Reason(database.BuildReasonInitiator).
		Initiator(database.PrebuildsSystemUserID).
		MarkPrebuild().
		RenderCache(c.renderCache)
		MarkPrebuild()

	if transition != database.WorkspaceTransitionDelete {
		// We don't specify the version for a delete transition,
@@ -53,9 +53,6 @@ func TestNoReconciliationActionsIfNoPresets(t *testing.T) {
	logger := testutil.Logger(t)
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	controller := prebuilds.NewStoreReconciler(db, ps, cache, cfg, logger, quartz.NewMock(t), prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		controller.Stop(context.Background(), nil)
	})

	// given a template version with no presets
	org := dbgen.Organization(t, db, database.Organization{})

@@ -99,9 +96,6 @@ func TestNoReconciliationActionsIfNoPrebuilds(t *testing.T) {
	logger := testutil.Logger(t)
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	controller := prebuilds.NewStoreReconciler(db, ps, cache, cfg, logger, quartz.NewMock(t), prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		controller.Stop(context.Background(), nil)
	})

	// given there are presets, but no prebuilds
	org := dbgen.Organization(t, db, database.Organization{})

@@ -432,9 +426,6 @@ func (tc testCase) run(t *testing.T) {
	}
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	controller := prebuilds.NewStoreReconciler(db, pubSub, cache, cfg, logger, quartz.NewMock(t), prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		controller.Stop(context.Background(), nil)
	})

	// Run the reconciliation multiple times to ensure idempotency
	// 8 was arbitrary, but large enough to reasonably trust the result

@@ -1221,9 +1212,6 @@ func TestRunLoop(t *testing.T) {
	db, pubSub := dbtestutil.NewDB(t)
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	reconciler := prebuilds.NewStoreReconciler(db, pubSub, cache, cfg, logger, clock, prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})

	ownerID := uuid.New()
	dbgen.User(t, db, database.User{

@@ -1352,9 +1340,6 @@ func TestFailedBuildBackoff(t *testing.T) {
	db, ps := dbtestutil.NewDB(t)
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	reconciler := prebuilds.NewStoreReconciler(db, ps, cache, cfg, logger, clock, prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})

	// Given: an active template version with presets and prebuilds configured.
	const desiredInstances = 2

@@ -1477,7 +1462,6 @@ func TestReconciliationLock(t *testing.T) {
		prometheus.NewRegistry(),
		newNoopEnqueuer(),
		newNoopUsageCheckerPtr())
	defer reconciler.Stop(context.Background(), nil)
	reconciler.WithReconciliationLock(ctx, logger, func(_ context.Context, _ database.Store) error {
		lockObtained := mutex.TryLock()
		// As long as the postgres lock is held, this mutex should always be unlocked when we get here.

@@ -1508,9 +1492,6 @@ func TestTrackResourceReplacement(t *testing.T) {
	registry := prometheus.NewRegistry()
	cache := files.New(registry, &coderdtest.FakeAuthorizer{})
	reconciler := prebuilds.NewStoreReconciler(db, ps, cache, codersdk.PrebuildsConfig{}, logger, clock, registry, fakeEnqueuer, newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})

	// Given: a template admin to receive a notification.
	templateAdmin := dbgen.User(t, db, database.User{

@@ -2118,9 +2099,6 @@ func TestCancelPendingPrebuilds(t *testing.T) {
	cache := files.New(registry, &coderdtest.FakeAuthorizer{})
	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: false}).Leveled(slog.LevelDebug)
	reconciler := prebuilds.NewStoreReconciler(db, ps, cache, codersdk.PrebuildsConfig{}, logger, clock, registry, fakeEnqueuer, newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	owner := coderdtest.CreateFirstUser(t, client)

	// Given: a template with a version containing a preset with 1 prebuild instance

@@ -2358,9 +2336,6 @@ func TestCancelPendingPrebuilds(t *testing.T) {
	cache := files.New(registry, &coderdtest.FakeAuthorizer{})
	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: false}).Leveled(slog.LevelDebug)
	reconciler := prebuilds.NewStoreReconciler(db, ps, cache, codersdk.PrebuildsConfig{}, logger, clock, registry, fakeEnqueuer, newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	owner := coderdtest.CreateFirstUser(t, client)

	// Given: template A with 2 versions

@@ -2426,9 +2401,6 @@ func TestReconciliationStats(t *testing.T) {
	cache := files.New(registry, &coderdtest.FakeAuthorizer{})
	logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: false}).Leveled(slog.LevelDebug)
	reconciler := prebuilds.NewStoreReconciler(db, ps, cache, codersdk.PrebuildsConfig{}, logger, clock, registry, fakeEnqueuer, newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	owner := coderdtest.CreateFirstUser(t, client)

	ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)

@@ -2940,9 +2912,6 @@ func TestReconciliationRespectsPauseSetting(t *testing.T) {
	logger := testutil.Logger(t)
	cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
	reconciler := prebuilds.NewStoreReconciler(db, ps, cache, cfg, logger, clock, prometheus.NewRegistry(), newNoopEnqueuer(), newNoopUsageCheckerPtr())
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})

	// Setup a template with a preset that should create prebuilds
	org := dbgen.Organization(t, db, database.Organization{})
@@ -1978,9 +1978,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
		notificationsNoop,
		api.AGPL.BuildUsageChecker,
	)
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
	api.AGPL.PrebuildsClaimer.Store(&claimer)

@@ -2103,9 +2100,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
		notificationsNoop,
		api.AGPL.BuildUsageChecker,
	)
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
	api.AGPL.PrebuildsClaimer.Store(&claimer)

@@ -2228,9 +2222,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
		notificationsNoop,
		api.AGPL.BuildUsageChecker,
	)
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
	api.AGPL.PrebuildsClaimer.Store(&claimer)

@@ -2375,9 +2366,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
		notificationsNoop,
		api.AGPL.BuildUsageChecker,
	)
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
	api.AGPL.PrebuildsClaimer.Store(&claimer)

@@ -2523,9 +2511,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
		notificationsNoop,
		api.AGPL.BuildUsageChecker,
	)
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
	api.AGPL.PrebuildsClaimer.Store(&claimer)

@@ -2972,9 +2957,6 @@ func TestWorkspaceProvisionerdServerMetrics(t *testing.T) {
		notifications.NewNoopEnqueuer(),
		api.AGPL.BuildUsageChecker,
	)
	t.Cleanup(func() {
		reconciler.Stop(context.Background(), nil)
	})
	var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer(db)
	api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -379,12 +379,8 @@ func New(ctx context.Context, opts *Options) (*Server, error) {
	HideStatus: true,
	Description: "This workspace proxy is DERP-only and cannot be used for browser connections. " +
		"Please use a different region directly from the dashboard. Click to be redirected!",
	Actions: []site.Action{
		{
			URL:  opts.DashboardURL.String(),
			Text: "Back to site",
		},
	},
	RetryEnabled: false,
	DashboardURL: opts.DashboardURL.String(),
	})
}
serveDerpOnlyHandler := func(r chi.Router) {

@@ -426,12 +422,8 @@ func New(ctx context.Context, opts *Options) (*Server, error) {
	HideStatus: true,
	Description: "Workspace Proxies route traffic in terminals and apps directly to your workspace. " +
		"This page must be loaded from the dashboard. Click to be redirected!",
	Actions: []site.Action{
		{
			URL:  opts.DashboardURL.String(),
			Text: "Back to site",
		},
	},
	RetryEnabled: false,
	DashboardURL: opts.DashboardURL.String(),
	})
})
@@ -200,7 +200,7 @@ require (
	golang.org/x/mod v0.30.0
	golang.org/x/net v0.47.0
	golang.org/x/oauth2 v0.33.0
	golang.org/x/sync v0.19.0
	golang.org/x/sync v0.18.0
	golang.org/x/sys v0.38.0
	golang.org/x/term v0.37.0
	golang.org/x/text v0.31.0

@@ -2245,8 +2245,8 @@ golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -273,7 +273,7 @@ EOF
main() {
	MAINLINE=1
	STABLE=0
	TERRAFORM_VERSION="1.14.1"
	TERRAFORM_VERSION="1.13.4"

	if [ "${TRACE-}" ]; then
		set -x
@@ -22,10 +22,10 @@ var (
	// when Terraform is not available on the system.
	// NOTE: Keep this in sync with the version in scripts/Dockerfile.base.
	// NOTE: Keep this in sync with the version in install.sh.
	TerraformVersion = version.Must(version.NewVersion("1.14.1"))
	TerraformVersion = version.Must(version.NewVersion("1.13.4"))

	minTerraformVersion = version.Must(version.NewVersion("1.1.0"))
	maxTerraformVersion = version.Must(version.NewVersion("1.14.9")) // use .9 to automatically allow patch releases
	maxTerraformVersion = version.Must(version.NewVersion("1.13.9")) // use .9 to automatically allow patch releases

	errTerraformMinorVersionMismatch = xerrors.New("Terraform binary minor version mismatch.")
)
@@ -102,7 +102,7 @@ func (p *terraformProxy) handleGet(w http.ResponseWriter, r *http.Request) {
	require.NoError(p.t, err)

	// update index.json so urls in it point to proxy by making them relative
	// "https://releases.hashicorp.com/terraform/1.14.1/terraform_1.14.1_windows_amd64.zip" -> "/terraform/1.14.1/terraform_1.14.1_windows_amd64.zip"
	// "https://releases.hashicorp.com/terraform/1.13.4/terraform_1.13.4_windows_amd64.zip" -> "/terraform/1.13.4/terraform_1.13.4_windows_amd64.zip"
	if strings.HasSuffix(r.URL.Path, "index.json") {
		body = []byte(strings.ReplaceAll(string(body), terraformURL, ""))
	}
@@ -3,11 +3,6 @@
set -euo pipefail
cd "$(dirname "${BASH_SOURCE[0]}")/resources"

# These environment variables influence the coder provider.
for v in $(env | grep -E '^CODER_' | cut -d= -f1); do
	unset "$v"
done

generate() {
	local name="$1"
@@ -41,7 +41,6 @@
	"sidebar_app": []
},
"after_unknown": {
	"enabled": true,
	"id": true,
	"prompt": true,
	"sidebar_app": []

@@ -82,11 +81,11 @@
"schema_version": 1,
"values": {
	"access_port": 443,
	"access_url": "https://mydeployment.coder.com",
	"access_url": "https://dev.coder.com/",
	"id": "5c06d6ea-101b-4069-8d14-7179df66ebcc",
	"is_prebuild": false,
	"is_prebuild_claim": false,
	"name": "default",
	"name": "coder",
	"prebuild_count": 0,
	"start_count": 1,
	"template_id": "",

@@ -105,7 +104,7 @@
"schema_version": 0,
"values": {
	"email": "default@example.com",
	"full_name": "default",
	"full_name": "coder",
	"groups": [],
	"id": "8796d8d7-88f1-445a-bea7-65f5cf530b95",
	"login_type": null,

@@ -27,11 +27,11 @@
"schema_version": 1,
"values": {
	"access_port": 443,
	"access_url": "https://mydeployment.coder.com",
	"access_url": "https://dev.coder.com/",
	"id": "bca94359-107b-43c9-a272-99af4b239aad",
	"is_prebuild": false,
	"is_prebuild_claim": false,
	"name": "default",
	"name": "coder",
	"prebuild_count": 0,
	"start_count": 1,
	"template_id": "",

@@ -50,7 +50,7 @@
"schema_version": 0,
"values": {
	"email": "default@example.com",
	"full_name": "default",
	"full_name": "coder",
	"groups": [],
	"id": "cb8c55f2-7f66-4e69-a584-eb08f4a7cf04",
	"login_type": null,

@@ -79,9 +79,8 @@
"schema_version": 1,
"values": {
	"app_id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
	"enabled": false,
	"id": "c4f032b8-97e4-42b0-aa2f-30a9e698f8d4",
	"prompt": null,
	"prompt": "default",
	"sidebar_app": []
},
"sensitive_values": {
@@ -66,7 +66,6 @@
},
"after_unknown": {
	"app_id": true,
	"enabled": true,
	"id": true,
	"prompt": true,
	"sidebar_app": [

@@ -98,7 +97,6 @@
	"sidebar_app": []
},
"after_unknown": {
	"enabled": true,
	"id": true,
	"prompt": true,
	"sidebar_app": []

@@ -139,11 +137,11 @@
"schema_version": 1,
"values": {
	"access_port": 443,
	"access_url": "https://mydeployment.coder.com",
	"access_url": "https://dev.coder.com/",
	"id": "344575c1-55b9-43bb-89b5-35f547e2cf08",
	"is_prebuild": false,
	"is_prebuild_claim": false,
	"name": "default",
	"name": "sebenza-nonix",
	"prebuild_count": 0,
	"start_count": 1,
	"template_id": "",

@@ -175,9 +173,7 @@
},
"sensitive_values": {
	"groups": [],
	"oidc_access_token": true,
	"rbac_roles": [],
	"session_token": true,
	"ssh_private_key": true
}
}

@@ -27,11 +27,11 @@
"schema_version": 1,
"values": {
	"access_port": 443,
	"access_url": "https://mydeployment.coder.com",
	"access_url": "https://dev.coder.com/",
	"id": "b6713709-6736-4d2f-b3da-7b5b242df5f4",
	"is_prebuild": false,
	"is_prebuild_claim": false,
	"name": "default",
	"name": "sebenza-nonix",
	"prebuild_count": 0,
	"start_count": 1,
	"template_id": "",

@@ -63,9 +63,7 @@
},
"sensitive_values": {
	"groups": [],
	"oidc_access_token": true,
	"rbac_roles": [],
	"session_token": true,
	"ssh_private_key": true
}
},

@@ -79,9 +77,8 @@
"schema_version": 1,
"values": {
	"app_id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
	"enabled": false,
	"id": "89e6ab36-2e98-4d13-9b4c-69b7588b7e1d",
	"prompt": null,
	"prompt": "default",
	"sidebar_app": [
		{
			"id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd"

@@ -104,9 +101,8 @@
"schema_version": 1,
"values": {
	"app_id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
	"enabled": false,
	"id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
	"prompt": null,
	"prompt": "default",
	"sidebar_app": []
},
"sensitive_values": {

@@ -50,7 +50,6 @@
},
"after_unknown": {
	"app_id": true,
	"enabled": true,
	"id": true,
	"prompt": true,
	"sidebar_app": [

@@ -95,11 +94,11 @@
"schema_version": 1,
"values": {
	"access_port": 443,
	"access_url": "https://mydeployment.coder.com",
	"access_url": "https://dev.coder.com/",
	"id": "344575c1-55b9-43bb-89b5-35f547e2cf08",
	"is_prebuild": false,
	"is_prebuild_claim": false,
	"name": "default",
	"name": "sebenza-nonix",
	"prebuild_count": 0,
	"start_count": 1,
	"template_id": "",

@@ -131,9 +130,7 @@
},
"sensitive_values": {
	"groups": [],
	"oidc_access_token": true,
	"rbac_roles": [],
	"session_token": true,
	"ssh_private_key": true
}
}

@@ -27,11 +27,11 @@
"schema_version": 1,
"values": {
	"access_port": 443,
	"access_url": "https://mydeployment.coder.com",
	"access_url": "https://dev.coder.com/",
	"id": "b6713709-6736-4d2f-b3da-7b5b242df5f4",
	"is_prebuild": false,
	"is_prebuild_claim": false,
	"name": "default",
	"name": "sebenza-nonix",
	"prebuild_count": 0,
	"start_count": 1,
	"template_id": "",

@@ -63,9 +63,7 @@
},
"sensitive_values": {
	"groups": [],
	"oidc_access_token": true,
	"rbac_roles": [],
	"session_token": true,
	"ssh_private_key": true
}
},

@@ -79,9 +77,8 @@
"schema_version": 1,
"values": {
	"app_id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd",
	"enabled": false,
	"id": "89e6ab36-2e98-4d13-9b4c-69b7588b7e1d",
	"prompt": null,
	"prompt": "default",
	"sidebar_app": [
		{
			"id": "5ece4674-dd35-4f16-88c8-82e40e72e2fd"

@@ -147,11 +147,11 @@
"schema_version": 1,
"values": {
	"access_port": 443,
	"access_url": "https://mydeployment.coder.com",
	"access_url": "https://dev.coder.com/",
	"id": "0b7fc772-5e27-4096-b8a3-9e6a8b914ebe",
	"is_prebuild": false,
	"is_prebuild_claim": false,
	"name": "default",
	"name": "kacper",
	"prebuild_count": 0,
	"start_count": 1,
	"template_id": "",

@@ -170,7 +170,7 @@
"schema_version": 0,
"values": {
	"email": "default@example.com",
	"full_name": "default",
	"full_name": "kacpersaw",
	"groups": [],
	"id": "1ebd1795-7cf2-47c5-8024-5d56e68f1681",
	"login_type": null,

@@ -27,11 +27,11 @@
"schema_version": 1,
"values": {
	"access_port": 443,
	"access_url": "https://mydeployment.coder.com",
	"access_url": "https://dev.coder.com/",
	"id": "dfa1dbe8-ad31-410b-b201-a4ed4d884938",
	"is_prebuild": false,
	"is_prebuild_claim": false,
	"name": "default",
	"name": "kacper",
	"prebuild_count": 0,
	"start_count": 1,
	"template_id": "",

@@ -50,7 +50,7 @@
"schema_version": 0,
"values": {
	"email": "default@example.com",
	"full_name": "default",
	"full_name": "kacpersaw",
	"groups": [],
	"id": "f5e82b90-ea22-4288-8286-9cf7af651143",
	"login_type": null,
Some files were not shown because too many files have changed in this diff.