Compare commits


50 Commits

Author SHA1 Message Date
Charlie Voiselle eee13c42a4 docs(cli): reference --oidc-group-mapping flag name instead of 'legacy'
Issue: Used internal nomenclature instead of user-facing flag name

The previous fix referenced 'legacy group name mapping' but users don't
know what that means - it's an internal implementation detail. Users
configure this via the --oidc-group-mapping flag.

Changed to: 'This filter is applied after the oidc-group-mapping.'

This directly references the flag name users would actually use, making
the relationship clear and actionable. Users can now understand:
- The regex filter applies to group names
- Group names may have been transformed by --oidc-group-mapping first
- They need to write regex patterns that match the mapped names

Example: If --oidc-group-mapping transforms 'developers' to 'dev-team',
the regex in --oidc-group-regex-filter will match against 'dev-team'.
2026-02-09 16:14:00 -05:00
Charlie Voiselle 65b48c0f84 docs(cli): fix help text for --oidc-group-regex-filter (clarify mapping order)
Issue: Removed ordering information when it was actually helpful

The previous correction removed the sentence about filter order to avoid
confusion, but this actually made the description LESS clear. Users need
to understand that the regex filter operates on group names AFTER any
legacy name mapping has been applied.

Example: If IdP sends 'developers' and LegacyNameMapping renames it to
'dev-team', the regex filter will match against 'dev-team', not 'developers'.

Changed to: 'This filter is applied after legacy group name mapping.'

This clarifies:
1. It's the LEGACY mapping (name→name) not the new Mapping (name→IDs)
2. The regex operates on potentially-renamed group names
3. The filter happens before the final ID mapping

Code reference: coderd/idpsync/group.go lines 379-398
- Line 380: LegacyNameMapping (name → name)
- Line 386: RegexFilter (on the potentially renamed name)
- Line 392: Mapping (name → []uuid.UUID)
2026-02-09 16:13:59 -05:00
Charlie Voiselle 30cdf29e52 docs(cli): fix help text for --oidc-group-regex-filter (final correction)
Issue: Previous description incorrectly stated filter order

The correction commit stated 'This filter is applied after the group mapping'
but the actual code order in coderd/idpsync/group.go lines 379-398 shows:
1. Legacy group mappings
2. Regex filter
3. (New) group mapping

Since the filter order is complex and the description was causing confusion,
removed the last sentence entirely. The first two sentences clearly explain
what the flag does without introducing incorrect ordering claims.

This follows the verification report's recommendation to remove the
confusing last sentence.
2026-02-09 16:13:59 -05:00
Charlie Voiselle b1d2bb6d71 docs(cli): fix help text for --external-auth-providers
Issue: Clarity - vague description

Changed 'External Authentication providers.' to 'Configure external authentication providers for Git and other services.' to explain what these providers are actually used for.
2026-02-09 16:13:59 -05:00
Charlie Voiselle 94bad2a956 docs(cli): fix help text for --workspace-prebuilds-reconciliation-backoff-lookback-period
Issue: Clarity - unclear purpose

Changed 'Interval to look back to determine number of failed prebuilds, which influences backoff' to 'Time period to look back when counting failed prebuilds to calculate the backoff delay' to clarify this determines the time window for counting failures.
2026-02-09 16:13:59 -05:00
Charlie Voiselle 111714c7ed docs(cli): fix help text for --workspace-prebuilds-reconciliation-backoff-interval
Issue: Clarity - confusing wording about backoff behavior

Changed 'Interval to increase reconciliation backoff by when prebuilds fail, after which a retry attempt is made' to 'Amount of time to add to the reconciliation backoff delay after each prebuild failure, before the next retry attempt is made' to clarify this is an incremental addition to the backoff delay.
2026-02-09 16:13:58 -05:00
Charlie Voiselle 1f9c516c5c docs(cli): fix help text for --workspace-prebuilds-failure-hard-limit
Issue: Clarity - unclear what 'hits the hard limit' means

Changed 'before a preset hits the hard limit' to 'before a preset is considered hard-limited and stops automatic prebuild creation' to explain what actually happens when the limit is reached.
2026-02-09 16:13:58 -05:00
Charlie Voiselle 3645c65bb2 docs(cli): fix help text for --workspace-hostname-suffix
Issue: Clarity - incomplete example hostname

Changed 'in SSH config and Coder Connect on Coder Desktop' to 'for SSH connections and Coder Connect' for conciseness. Updated the example from 'myworkspace.coder' to the full format 'agent.workspace.owner.coder' to show the complete hostname structure.
2026-02-09 16:13:58 -05:00
Charlie Voiselle d3d2d2fb1e docs(cli): fix help text for --workspace-agent-logs-retention
Issue: Clarity - ambiguous scope

Changed 'Logs from the latest build are always retained' to 'Logs from the latest build for each workspace are always retained' to clarify that this applies per-workspace, not just one latest build globally.
2026-02-09 16:13:58 -05:00
Charlie Voiselle 086fb1f5d5 docs(cli): fix help text for --block-direct-connections
Issue: Clarity - imprecise wording about STUN behavior

Clarified that 'Workspace agents' (not 'Workspaces') reach out to STUN servers, changed 'get their address' to 'discover their address', and simplified 'until they are restarted after this change has been made' to just 'until they are restarted'.
2026-02-09 16:13:58 -05:00
Charlie Voiselle a73a535a5b docs(cli): fix help text for --proxy-health-interval
Issue: Clarity - awkward phrasing

Changed 'in which coderd should be checking' to 'at which coderd checks' for more concise, natural phrasing.
2026-02-09 16:13:57 -05:00
Charlie Voiselle 96e01c3018 docs(cli): fix help text for --email-tls-cert-key-file
Issue: Clarity - vague description

Changed 'Certificate key file to use' to 'Private key file for the client certificate' to clarify this is the private key that pairs with --email-tls-cert-file.
2026-02-09 16:13:57 -05:00
Charlie Voiselle 6b10a0359b docs(cli): fix help text for --email-tls-cert-file
Issue: Clarity - vague description

Changed 'Certificate file to use' to 'Client certificate file for mutual TLS authentication' to clarify what this certificate is for and when it's needed.
2026-02-09 16:13:57 -05:00
Charlie Voiselle b62583ad4b docs(cli): fix help text for --oidc-user-role-default
Issue: Clarity - ambiguous relationship between defaults and synced roles

Added 'in addition to synced roles' to clarify that these defaults don't replace synced roles. Also clarified that 'member' is always assigned 'regardless of this setting' to avoid confusion about whether this setting affects the member role.
2026-02-09 16:13:57 -05:00
Charlie Voiselle 3d6727a2cb docs(cli): fix help text for --oidc-group-field
Issue: Clarity - unclear structure

Reordered to put the primary purpose first: 'OIDC claim field to use as the user's groups' before the conditional requirement. This makes the description more scannable and understandable.
2026-02-09 16:13:56 -05:00
Charlie Voiselle b163962a14 docs(cli): fix help text for --aibridge-circuit-breaker-interval
Issue: Clarity - confusing technical jargon

Changed 'Cyclic period of the closed state for clearing internal failure counts' to 'Time window for counting failures before resetting the failure count in the closed state' to explain what the interval actually does in clearer terms.
2026-02-09 16:13:56 -05:00
Charlie Voiselle 9aca4ea27c docs(cli): fix help text for --aibridge-circuit-breaker-enabled
Issue: Clarity - ambiguous error code description

Changed '(429, 503, 529 overloaded)' to '(HTTP 429, 503, 529)' and added 'and overload errors' to clarify that these are HTTP status codes and what they represent.
2026-02-09 16:13:56 -05:00
Charlie Voiselle b0c10131ea docs(cli): fix help text for --aibridge-retention
Issue: Clarity - wordy phrasing

Simplified 'Length of time to retain data such as interceptions and all related records (token, prompt, tool use)' to 'How long to retain AI Bridge data including interceptions, tokens, prompts, and tool usage records' for more natural, clearer phrasing.
2026-02-09 16:13:56 -05:00
Charlie Voiselle c8c7e13e96 docs(cli): fix help text for --aibridge-inject-coder-mcp-tools
Issue: Clarity - awkward phrasing and formatting

Changed 'Whether to inject' to 'Enable injection of' for consistency with other boolean flags. Simplified the requirements clause and changed double quotes to single quotes for consistency.
2026-02-09 16:13:55 -05:00
Charlie Voiselle 249b7ea38e docs(cli): fix help text for --aibridge-enabled
Issue: Clarity - unclear technical jargon

Changed 'Whether to start an in-memory aibridged instance' to 'Enable the embedded AI Bridge service to intercept and record AI provider requests' to explain what the feature actually does in user-friendly terms.
2026-02-09 16:13:55 -05:00
Charlie Voiselle 1333096e25 docs(cli): fix help text for --oidc-group-regex-filter (correction)
Issue: Previous fix introduced confusing circular wording

The previous commit incorrectly changed the ending to 'after the group mapping and regex filter' which is nonsensical since this flag configures THE regex filter itself. Reverted to the correct wording: 'after the group mapping'.

The only valid changes from the original are:
- Added comma after 'If provided'
- Simplified 'allows for filtering' to 'allows filtering'
2026-02-09 16:13:55 -05:00
Charlie Voiselle 54bc9324dd docs(cli): fix help text for --samesite-auth-cookie
Issue: Grammar - missing word

Added missing 'if' to read 'Controls if the SameSite property is set' instead of 'Controls the SameSite property is set'.
2026-02-09 16:13:55 -05:00
Charlie Voiselle 109e5f2b19 docs(cli): fix help text for --enable-authz-recordings
Issue: Grammar - acronym capitalization

Capitalized 'API' (Application Programming Interface), which should always be uppercase.
2026-02-09 16:13:55 -05:00
Charlie Voiselle ee176b4207 docs(cli): fix help text for --ssh-config-options
Issue: Grammar - missing space after period

Added missing space after period between sentences: 'commas.' + 'Using' → 'commas. ' + 'Using'.
2026-02-09 16:13:54 -05:00
Charlie Voiselle 7e1e16be33 docs(cli): fix help text for --prometheus-address
Issue: Grammar - proper noun capitalization

Capitalized 'Prometheus' as it's a proper noun.
2026-02-09 16:13:54 -05:00
Charlie Voiselle 5cfe8082ce docs(cli): fix help text for --prometheus-enable
Issue: Grammar - proper noun capitalization

Capitalized 'Prometheus' as it's a proper noun (the name of the monitoring system).
2026-02-09 16:13:54 -05:00
Charlie Voiselle 6b7f672834 docs(cli): fix help text for --allow-custom-quiet-hours
Issue: Grammar - awkward phrasing

Changed 'for workspaces to stop in' to 'for when workspaces are stopped' for more natural phrasing.
2026-02-09 16:13:53 -05:00
Charlie Voiselle c55f6252a1 docs(cli): fix help text for --tls-client-ca-file
Issue: Grammar - missing article

Added missing article 'the' before 'client' to read 'authenticity of the client'.
2026-02-09 16:13:53 -05:00
Charlie Voiselle 842553b677 docs(cli): fix help text for --tls-ciphers
Issue: Grammar - missing verb

Fixed missing 'are' in 'that allowed to be used' → 'that are allowed to be used'.
2026-02-09 16:13:53 -05:00
Charlie Voiselle 05a771ba77 docs(cli): fix help text for --derp-server-stun-addresses
Issue: Grammar - incorrect possessive

Fixed "it's" (contraction of "it is") → "its" (possessive). Should be 'Each STUN server will get its own DERP region'.
2026-02-09 16:13:53 -05:00
Charlie Voiselle 70a0d42e65 docs(cli): fix help text for --derp-server-region-name
Issue: Grammar - malformed sentence

Fixed malformed sentence 'Region name that for' → 'Region name to use for'. The original was missing a verb.
2026-02-09 16:13:52 -05:00
Charlie Voiselle 6b1d73b466 docs(cli): fix help text for --notifications-store-sync-buffer-size
Issue: Grammar - typo

Fixed typo: 'change' → 'chance'. Same typo as in --notifications-store-sync-interval.
2026-02-09 16:13:52 -05:00
Charlie Voiselle d7b9596145 docs(cli): fix help text for --notifications-store-sync-interval
Issue: Grammar - typo

Fixed typo: 'change' → 'chance'. The sentence should read 'the lower the chance of state inconsistency'.
2026-02-09 16:13:52 -05:00
Charlie Voiselle 7a0aa1a40a docs(cli): fix help text for --oidc-signups-disabled-text
Issue: Grammar - awkward phrasing

Changed 'The custom text to show on the error page informing about disabled OIDC signups' to 'Custom text to show on the error page when OIDC signups are disabled' for clearer, more direct phrasing. Removed unnecessary 'The' article.
2026-02-09 16:13:52 -05:00
Charlie Voiselle 4d8ea43e11 docs(cli): fix help text for --oidc-icon-url
Issue: Grammar - redundant phrasing

Changed 'URL pointing to the icon' to 'URL of the icon'. The phrase 'pointing to' is redundant since a URL inherently points to a resource.
2026-02-09 16:13:52 -05:00
Charlie Voiselle 6fddae98f6 docs(cli): fix help text for --oidc-group-regex-filter
Issue: Grammar - missing comma + simplification + filter order clarification

Added missing comma after 'If provided'. Simplified 'allows for filtering' to 'allows filtering'. Clarified filter order to match the actual implementation.
2026-02-09 16:13:51 -05:00
Charlie Voiselle e33fbb6087 docs(cli): fix help text for --oidc-group-mapping
Issue: Grammar - subject-verb agreement + awkward phrasing

Changed 'the group in Coder it should map to' to 'the groups in Coder they should map to' for proper plural agreement. Also simplified 'for when' to 'when'.
2026-02-09 16:13:51 -05:00
Charlie Voiselle 2337393e13 docs(cli): fix help text for --oidc-client-cert-file
Issue: Grammar - incorrect acronym capitalization

Changed 'Pem' to 'PEM', 'oauth2' to 'OAuth2', and 'x509' to 'X.509'. These are standard capitalizations for these acronyms and standards.
2026-02-09 16:13:51 -05:00
Charlie Voiselle d7357a1b0a docs(cli): fix help text for --oidc-client-key-file
Issue: Grammar - incorrect acronym capitalization

Changed 'Pem' to 'PEM' (Privacy Enhanced Mail), 'oauth2' to 'OAuth2', and 'IDP' to 'IdP' (Identity Provider). These are standard capitalizations for these acronyms.
2026-02-09 16:13:51 -05:00
Charlie Voiselle afbf1af29c docs(cli): fix help text for --oauth2-github-allow-everyone
Issue: Grammar - unclear and run-on sentence

Changed 'Allow all logins, setting this option means...' to 'Allow all GitHub users to authenticate. When enabled, allowed orgs and teams must be empty.' This separates the run-on sentence and clarifies what 'all logins' means (all GitHub users).
2026-02-09 16:13:50 -05:00
Charlie Voiselle 1d834c747c docs(cli): fix help text for --aibridge-circuit-breaker-failure-threshold
Issue: Grammar - subject-verb agreement

Changed 'triggers' to 'trigger' for correct subject-verb agreement. The relative clause 'that trigger...' modifies the plural head noun 'failures', not 'Number', so the plural form 'trigger' is correct.
2026-02-09 16:13:50 -05:00
Charlie Voiselle a80edec752 docs(cli): fix help text for --aibridge-bedrock-access-key-secret
Issue: Grammar - wordy and redundant phrasing

Simplified from 'The access key secret to use with the access key to authenticate against' to 'AWS secret access key for authenticating with'. Uses standard AWS terminology and eliminates redundancy.
2026-02-09 16:13:50 -05:00
Charlie Voiselle 2a6473e8c6 docs(cli): fix help text for --aibridge-bedrock-access-key
Issue: Grammar - awkward phrasing

Changed 'The access key to authenticate against' to 'AWS access key for authenticating with' for consistency and clarity. Uses standard AWS terminology.
2026-02-09 16:13:50 -05:00
Charlie Voiselle 1f9c0b9b7f docs(cli): fix help text for --aibridge-anthropic-key
Issue: Grammar - awkward phrasing

Changed 'The key to authenticate against' to 'API key for authenticating with' for consistency with --aibridge-openai-key and more natural phrasing.
2026-02-09 16:13:49 -05:00
Charlie Voiselle 5494afabd8 docs(cli): fix help text for --aibridge-openai-key
Issue: Grammar - awkward phrasing

Changed 'The key to authenticate against' to 'API key for authenticating with' for more natural, concise phrasing. This matches standard API documentation conventions.
2026-02-09 16:13:49 -05:00
Charlie Voiselle 07c6e86a50 docs(cli): fix help text for --notifications-email-hello
Issue: Factually incorrect description of SMTP HELO/EHLO (deprecated alias)

Same issue as --email-hello. This is a deprecated alias but still needs the correct description. The HELO/EHLO command identifies the client to the server, not the server itself.

Fix: Clarified this identifies 'this client to the SMTP server'.
2026-02-09 16:13:49 -05:00
Charlie Voiselle b543821a1c docs(cli): fix help text for --email-hello
Issue: Factually incorrect description of SMTP HELO/EHLO

The description incorrectly stated this identifies 'the SMTP server' when it actually identifies the CLIENT to the server. The HELO/EHLO command is how the client introduces itself to the SMTP server during connection.

Fix: Clarified this identifies 'this client to the SMTP server' which accurately reflects the SMTP protocol.
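A tiny sketch of why the direction matters: the EHLO (or HELO) line is sent by the client and carries the client's own name, so the --email-hello value identifies this deployment rather than the server it connects to. The hostname below is a made-up example:

```go
package main

import "fmt"

// ehloLine builds the greeting a client sends when it connects: the
// argument is the CLIENT's hostname, not the SMTP server's.
func ehloLine(clientHostname string) string {
	return "EHLO " + clientHostname + "\r\n"
}

func main() {
	fmt.Printf("%q\n", ehloLine("coder.example.com"))
}
```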
2026-02-09 16:13:49 -05:00
Charlie Voiselle e8b7045a9b docs(cli): fix help text for --pprof-enable
Issue: Factually incorrect terminology

The description incorrectly stated pprof serves 'metrics' when it actually serves profiling data (CPU profiles, memory profiles, goroutines, etc.). Metrics are Prometheus's domain, not pprof's.

Fix: Changed 'metrics' to 'profiling endpoints' to accurately describe what pprof provides.
2026-02-09 16:13:48 -05:00
Charlie Voiselle 2571089528 docs(cli): fix help text for --oidc-user-role-mapping
Issue: Factually incorrect (confuses roles with groups) + grammar error

The description incorrectly stated this maps to 'groups in Coder' when it actually maps to site ROLES (member, admin, etc.). Also had a grammar error: 'will ignored' should be 'will be ignored'.

Fix: Corrected to clarify this maps OIDC role names to Coder role names, and fixed the grammar error.
2026-02-09 16:13:48 -05:00
Charlie Voiselle 1fb733fe1e docs(cli): fix help text for --oidc-allowed-groups
Issue: Factually incorrect filter order

The description incorrectly stated that the check is applied 'after the group mapping and before the regex filter'. This is wrong.

Fix: Updated to reflect actual behavior where the check is applied BEFORE any group mapping or filtering. Also clarified the positive case (users WITH at least one matching group are allowed) instead of the confusing double-negative phrasing.
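The corrected behavior (an allow-list check on raw claims, phrased positively) can be sketched as below. The helper is a hypothetical illustration of the described semantics, not the actual coderd code:

```go
package main

import "fmt"

// allowedLogin models the corrected description: the check runs on the
// RAW group claims from the IdP, before any mapping or regex filtering,
// and a user with at least one matching group is allowed.
func allowedLogin(rawClaims []string, allowed map[string]bool) bool {
	if len(allowed) == 0 {
		return true // no allow-list configured: everyone passes
	}
	for _, g := range rawClaims {
		if allowed[g] {
			return true // at least one matching group
		}
	}
	return false
}

func main() {
	allow := map[string]bool{"engineering": true}
	fmt.Println(allowedLogin([]string{"engineering", "sales"}, allow))
	fmt.Println(allowedLogin([]string{"sales"}, allow))
}
```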
2026-02-09 16:13:48 -05:00
206 changed files with 4793 additions and 7176 deletions
-4
@@ -1,4 +0,0 @@
# All artifacts of the build processed are dumped here.
# Ignore it for docker context, as all Dockerfiles should build their own
# binaries.
build
+1 -4
@@ -909,10 +909,7 @@ site/src/api/countriesGenerated.ts: site/node_modules/.installed scripts/typegen
(cd site/ && pnpm exec biome format --write src/api/countriesGenerated.ts)
touch "$@"
scripts/metricsdocgen/generated_metrics: $(GO_SRC_FILES)
go run ./scripts/metricsdocgen/scanner > $@
docs/admin/integrations/prometheus.md: node_modules/.installed scripts/metricsdocgen/main.go scripts/metricsdocgen/metrics scripts/metricsdocgen/generated_metrics
docs/admin/integrations/prometheus.md: node_modules/.installed scripts/metricsdocgen/main.go scripts/metricsdocgen/metrics
go run scripts/metricsdocgen/main.go
pnpm exec markdownlint-cli2 --fix ./docs/admin/integrations/prometheus.md
pnpm exec markdown-table-formatter ./docs/admin/integrations/prometheus.md
+21 -25
@@ -3,11 +3,11 @@
"enabled": true,
"clientKind": "git",
"useIgnoreFile": true,
"defaultBranch": "main",
"defaultBranch": "main"
},
"files": {
"includes": ["**", "!**/pnpm-lock.yaml"],
"ignoreUnknown": true,
"ignoreUnknown": true
},
"linter": {
"rules": {
@@ -15,18 +15,18 @@
"noSvgWithoutTitle": "off",
"useButtonType": "off",
"useSemanticElements": "off",
"noStaticElementInteractions": "off",
"noStaticElementInteractions": "off"
},
"correctness": {
"noUnusedImports": "warn",
"correctness": {
"noUnusedImports": "warn",
"useUniqueElementIds": "off", // TODO: This is new but we want to fix it
"noNestedComponentDefinitions": "off", // TODO: Investigate, since it is used by shadcn components
"noUnusedVariables": {
"level": "warn",
"noUnusedVariables": {
"level": "warn",
"options": {
"ignoreRestSiblings": true,
},
},
"ignoreRestSiblings": true
}
}
},
"style": {
"noNonNullAssertion": "off",
@@ -45,10 +45,6 @@
"level": "error",
"options": {
"paths": {
"react": {
"message": "React 19 no longer requires forwardRef. Use ref as a prop instead.",
"importNames": ["forwardRef"],
},
// "@mui/material/Alert": "Use components/Alert/Alert instead.",
// "@mui/material/AlertTitle": "Use components/Alert/Alert instead.",
// "@mui/material/Autocomplete": "Use shadcn/ui Combobox instead.",
@@ -115,10 +111,10 @@
"@emotion/styled": "Use Tailwind CSS instead.",
// "@emotion/cache": "Use Tailwind CSS instead.",
// "components/Stack/Stack": "Use Tailwind flex utilities instead (e.g., <div className='flex flex-col gap-4'>).",
"lodash": "Use lodash/<name> instead.",
},
},
},
"lodash": "Use lodash/<name> instead."
}
}
}
},
"suspicious": {
"noArrayIndexKey": "off",
@@ -129,14 +125,14 @@
"noConsole": {
"level": "error",
"options": {
"allow": ["error", "info", "warn"],
},
},
"allow": ["error", "info", "warn"]
}
}
},
"complexity": {
"noImportantStyles": "off", // TODO: check and fix !important styles
},
},
"noImportantStyles": "off" // TODO: check and fix !important styles
}
}
},
"$schema": "./node_modules/@biomejs/biome/configuration_schema.json",
"$schema": "./node_modules/@biomejs/biome/configuration_schema.json"
}
+1 -8
@@ -95,7 +95,6 @@ import (
"github.com/coder/coder/v2/coderd/webpush"
"github.com/coder/coder/v2/coderd/workspaceapps/appurl"
"github.com/coder/coder/v2/coderd/workspacestats"
"github.com/coder/coder/v2/coderd/wsbuilder"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/drpcsdk"
"github.com/coder/coder/v2/cryptorand"
@@ -936,12 +935,6 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
options.StatsBatcher = batcher
defer closeBatcher()
wsBuilderMetrics, err := wsbuilder.NewMetrics(options.PrometheusRegistry)
if err != nil {
return xerrors.Errorf("failed to register workspace builder metrics: %w", err)
}
options.WorkspaceBuilderMetrics = wsBuilderMetrics
// Manage notifications.
var (
notificationsCfg = options.DeploymentValues.Notifications
@@ -1125,7 +1118,7 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
autobuildTicker := time.NewTicker(vals.AutobuildPollInterval.Value())
defer autobuildTicker.Stop()
autobuildExecutor := autobuild.NewExecutor(
ctx, options.Database, options.Pubsub, coderAPI.FileCache, options.PrometheusRegistry, coderAPI.TemplateScheduleStore, &coderAPI.Auditor, coderAPI.AccessControlStore, coderAPI.BuildUsageChecker, logger, autobuildTicker.C, options.NotificationsEnqueuer, coderAPI.Experiments, coderAPI.WorkspaceBuilderMetrics)
ctx, options.Database, options.Pubsub, coderAPI.FileCache, options.PrometheusRegistry, coderAPI.TemplateScheduleStore, &coderAPI.Auditor, coderAPI.AccessControlStore, coderAPI.BuildUsageChecker, logger, autobuildTicker.C, options.NotificationsEnqueuer, coderAPI.Experiments)
autobuildExecutor.Run()
jobReaperTicker := time.NewTicker(vals.JobReaperDetectorInterval.Value())
-1
@@ -17,7 +17,6 @@ func (r *RootCmd) tasksCommand() *serpent.Command {
r.taskDelete(),
r.taskList(),
r.taskLogs(),
r.taskPause(),
r.taskSend(),
r.taskStatus(),
},
+10 -5
@@ -41,7 +41,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client // user already has access to their own workspace
inv, root := clitest.New(t, "task", "logs", task.Name, "--output", "json")
output := clitest.Capture(inv)
@@ -64,7 +65,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client
inv, root := clitest.New(t, "task", "logs", task.ID.String(), "--output", "json")
output := clitest.Capture(inv)
@@ -87,7 +89,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsOK(testMessages))
userClient := client
inv, root := clitest.New(t, "task", "logs", task.ID.String())
output := clitest.Capture(inv)
@@ -141,7 +144,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsErr(assert.AnError))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskLogsErr(assert.AnError))
userClient := client
inv, root := clitest.New(t, "task", "logs", task.ID.String())
clitest.SetupConfig(t, userClient, root)
@@ -197,7 +201,8 @@ func Test_TaskLogs_Golden(t *testing.T) {
t.Run("SnapshotWithoutLogs_NoSnapshotCaptured", func(t *testing.T) {
t.Parallel()
userClient, task := setupCLITaskTestWithoutSnapshot(t, codersdk.TaskStatusPaused)
client, task := setupCLITaskTestWithoutSnapshot(t, codersdk.TaskStatusPaused)
userClient := client
inv, root := clitest.New(t, "task", "logs", task.Name)
output := clitest.Capture(inv)
-90
@@ -1,90 +0,0 @@
package cli
import (
"fmt"
"time"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/pretty"
"github.com/coder/serpent"
)
func (r *RootCmd) taskPause() *serpent.Command {
cmd := &serpent.Command{
Use: "pause <task>",
Short: "Pause a task",
Long: FormatExamples(
Example{
Description: "Pause a task by name",
Command: "coder task pause my-task",
},
Example{
Description: "Pause another user's task",
Command: "coder task pause alice/my-task",
},
Example{
Description: "Pause a task without confirmation",
Command: "coder task pause my-task --yes",
},
),
Middleware: serpent.Chain(
serpent.RequireNArgs(1),
),
Options: serpent.OptionSet{
cliui.SkipPromptOption(),
},
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
client, err := r.InitClient(inv)
if err != nil {
return err
}
task, err := client.TaskByIdentifier(ctx, inv.Args[0])
if err != nil {
return xerrors.Errorf("resolve task %q: %w", inv.Args[0], err)
}
display := fmt.Sprintf("%s/%s", task.OwnerName, task.Name)
if task.Status == codersdk.TaskStatusPaused {
return xerrors.Errorf("task %q is already paused", display)
}
_, err = cliui.Prompt(inv, cliui.PromptOptions{
Text: fmt.Sprintf("Pause task %s?", pretty.Sprint(cliui.DefaultStyles.Code, display)),
IsConfirm: true,
Default: cliui.ConfirmNo,
})
if err != nil {
return err
}
resp, err := client.PauseTask(ctx, task.OwnerName, task.ID)
if err != nil {
return xerrors.Errorf("pause task %q: %w", display, err)
}
if resp.WorkspaceBuild == nil {
return xerrors.Errorf("pause task %q: no workspace build returned", display)
}
err = cliui.WorkspaceBuild(ctx, inv.Stdout, client, resp.WorkspaceBuild.ID)
if err != nil {
return xerrors.Errorf("watch pause build for task %q: %w", display, err)
}
_, _ = fmt.Fprintf(
inv.Stdout,
"\nThe %s task has been paused at %s!\n",
cliui.Keyword(task.Name),
cliui.Timestamp(time.Now()),
)
return nil
},
}
return cmd
}
-144
@@ -1,144 +0,0 @@
package cli_test
import (
"fmt"
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
func TestExpTaskPause(t *testing.T) {
t.Parallel()
t.Run("WithYesFlag", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to pause the task
inv, root := clitest.New(t, "task", "pause", task.Name, "--yes")
output := clitest.Capture(inv)
clitest.SetupConfig(t, userClient, root)
// Then: Expect the task to be paused
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.Contains(t, output.Stdout(), "has been paused")
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusPaused, updated.Status)
})
// OtherUserTask verifies that an admin can pause a task owned by
// another user using the "owner/name" identifier format.
t.Run("OtherUserTask", func(t *testing.T) {
t.Parallel()
// Given: A different user's running task
setupCtx := testutil.Context(t, testutil.WaitLong)
adminClient, _, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to pause their task
identifier := fmt.Sprintf("%s/%s", task.OwnerName, task.Name)
inv, root := clitest.New(t, "task", "pause", identifier, "--yes")
output := clitest.Capture(inv)
clitest.SetupConfig(t, adminClient, root)
// Then: We expect the task to be paused
ctx := testutil.Context(t, testutil.WaitMedium)
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
require.Contains(t, output.Stdout(), "has been paused")
updated, err := adminClient.TaskByIdentifier(ctx, identifier)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusPaused, updated.Status)
})
t.Run("PromptConfirm", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to pause the task
inv, root := clitest.New(t, "task", "pause", task.Name)
clitest.SetupConfig(t, userClient, root)
// And: We confirm we want to pause the task
ctx := testutil.Context(t, testutil.WaitMedium)
inv = inv.WithContext(ctx)
pty := ptytest.New(t).Attach(inv)
w := clitest.StartWithWaiter(t, inv)
pty.ExpectMatchContext(ctx, "Pause task")
pty.WriteLine("yes")
// Then: We expect the task to be paused
pty.ExpectMatchContext(ctx, "has been paused")
require.NoError(t, w.Wait())
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.Equal(t, codersdk.TaskStatusPaused, updated.Status)
})
t.Run("PromptDecline", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// When: We attempt to pause the task
inv, root := clitest.New(t, "task", "pause", task.Name)
clitest.SetupConfig(t, userClient, root)
// But: We say no at the confirmation screen
ctx := testutil.Context(t, testutil.WaitMedium)
inv = inv.WithContext(ctx)
pty := ptytest.New(t).Attach(inv)
w := clitest.StartWithWaiter(t, inv)
pty.ExpectMatchContext(ctx, "Pause task")
pty.WriteLine("no")
require.Error(t, w.Wait())
// Then: We expect the task to not be paused
updated, err := userClient.TaskByIdentifier(ctx, task.Name)
require.NoError(t, err)
require.NotEqual(t, codersdk.TaskStatusPaused, updated.Status)
})
t.Run("TaskAlreadyPaused", func(t *testing.T) {
t.Parallel()
// Given: A running task
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, nil)
// And: We paused the running task
ctx := testutil.Context(t, testutil.WaitMedium)
resp, err := userClient.PauseTask(ctx, task.OwnerName, task.ID)
require.NoError(t, err)
require.NotNil(t, resp.WorkspaceBuild)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, userClient, resp.WorkspaceBuild.ID)
// When: We attempt to pause the task again
inv, root := clitest.New(t, "task", "pause", task.Name, "--yes")
clitest.SetupConfig(t, userClient, root)
// Then: We expect to get an error that the task is already paused
err = inv.WithContext(ctx).Run()
require.ErrorContains(t, err, "is already paused")
})
}
+7 -4
@@ -25,7 +25,8 @@ func Test_TaskSend(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "carry on with the task")
@@ -41,7 +42,8 @@ func Test_TaskSend(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.ID.String(), "carry on with the task")
@@ -57,7 +59,8 @@ func Test_TaskSend(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
client, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendOK(t, "carry on with the task", "you got it"))
userClient := client
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "--stdin")
@@ -110,7 +113,7 @@ func Test_TaskSend(t *testing.T) {
t.Parallel()
setupCtx := testutil.Context(t, testutil.WaitLong)
_, userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendErr(t, assert.AnError))
userClient, task := setupCLITaskTest(setupCtx, t, fakeAgentAPITaskSendErr(t, assert.AnError))
var stdout strings.Builder
inv, root := clitest.New(t, "task", "send", task.Name, "some task input")
+10 -27
@@ -120,23 +120,6 @@ func Test_Tasks(t *testing.T) {
require.Equal(t, logs[2].Type, codersdk.TaskLogTypeOutput, "third message should be an output")
},
},
{
name: "pause task",
cmdArgs: []string{"task", "pause", taskName, "--yes"},
assertFn: func(stdout string, userClient *codersdk.Client) {
require.Contains(t, stdout, "has been paused", "pause output should confirm task was paused")
},
},
{
name: "get task status after pause",
cmdArgs: []string{"task", "status", taskName, "--output", "json"},
assertFn: func(stdout string, userClient *codersdk.Client) {
var task codersdk.Task
require.NoError(t, json.NewDecoder(strings.NewReader(stdout)).Decode(&task), "should unmarshal task status")
require.Equal(t, taskName, task.Name, "task name should match")
require.Equal(t, codersdk.TaskStatusPaused, task.Status, "task should be paused")
},
},
{
name: "delete task",
cmdArgs: []string{"task", "delete", taskName, "--yes"},
@@ -255,17 +238,17 @@ func fakeAgentAPIEcho(ctx context.Context, t testing.TB, initMsg agentapisdk.Mes
// setupCLITaskTest creates a test workspace with an AI task template and agent,
// with a fake agent API configured with the provided set of handlers.
// Returns the user client and task.
func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[string]http.HandlerFunc) (ownerClient *codersdk.Client, memberClient *codersdk.Client, task codersdk.Task) {
func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[string]http.HandlerFunc) (*codersdk.Client, codersdk.Task) {
t.Helper()
ownerClient = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, ownerClient)
userClient, _ := coderdtest.CreateAnotherUser(t, ownerClient, owner.OrganizationID)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
fakeAPI := startFakeAgentAPI(t, agentAPIHandlers)
authToken := uuid.NewString()
template := createAITaskTemplate(t, ownerClient, owner.OrganizationID, withSidebarURL(fakeAPI.URL()), withAgentToken(authToken))
template := createAITaskTemplate(t, client, owner.OrganizationID, withSidebarURL(fakeAPI.URL()), withAgentToken(authToken))
wantPrompt := "test prompt"
task, err := userClient.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
@@ -279,17 +262,17 @@ func setupCLITaskTest(ctx context.Context, t *testing.T, agentAPIHandlers map[st
require.True(t, task.WorkspaceID.Valid, "task should have a workspace ID")
workspace, err := userClient.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, userClient, workspace.LatestBuild.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
agentClient := agentsdk.New(userClient.URL, agentsdk.WithFixedToken(authToken))
_ = agenttest.New(t, userClient.URL, authToken, func(o *agent.Options) {
agentClient := agentsdk.New(client.URL, agentsdk.WithFixedToken(authToken))
_ = agenttest.New(t, client.URL, authToken, func(o *agent.Options) {
o.Client = agentClient
})
coderdtest.NewWorkspaceAgentWaiter(t, userClient, workspace.ID).
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).
WaitFor(coderdtest.AgentsReady)
return ownerClient, userClient, task
return userClient, task
}
// setupCLITaskTestWithSnapshot creates a task in the specified status with a log snapshot.
-1
@@ -12,7 +12,6 @@ SUBCOMMANDS:
delete Delete tasks
list List tasks
logs Show a task's logs
pause Pause a task
send Send input to a task
status Show the status of a task.
-25
@@ -1,25 +0,0 @@
coder v0.0.0-devel
USAGE:
coder task pause [flags] <task>
Pause a task
- Pause a task by name:
$ coder task pause my-task
- Pause another user's task:
$ coder task pause alice/my-task
- Pause a task without confirmation:
$ coder task pause my-task --yes
OPTIONS:
-y, --yes bool
Bypass confirmation prompts.
———
Run `coder --help` for a list of global options.
+24
@@ -0,0 +1,24 @@
//go:build !windows && !darwin
package cli
import (
"golang.org/x/xerrors"
"github.com/coder/serpent"
)
func (*RootCmd) vpnDaemonRun() *serpent.Command {
cmd := &serpent.Command{
Use: "run",
Short: "Run the VPN daemon on Windows.",
Middleware: serpent.Chain(
serpent.RequireNArgs(0),
),
Handler: func(_ *serpent.Invocation) error {
return xerrors.New("vpn-daemon subcommand is not supported on this platform")
},
}
return cmd
}
@@ -1,4 +1,4 @@
//go:build windows || linux
//go:build windows
package cli
@@ -11,7 +11,7 @@ import (
"github.com/coder/serpent"
)
func (*RootCmd) vpnDaemonRun() *serpent.Command {
func (r *RootCmd) vpnDaemonRun() *serpent.Command {
var (
rpcReadHandleInt int64
rpcWriteHandleInt int64
@@ -19,7 +19,7 @@ func (*RootCmd) vpnDaemonRun() *serpent.Command {
cmd := &serpent.Command{
Use: "run",
Short: "Run the VPN daemon on Windows and Linux.",
Short: "Run the VPN daemon on Windows.",
Middleware: serpent.Chain(
serpent.RequireNArgs(0),
),
@@ -53,8 +53,8 @@ func (*RootCmd) vpnDaemonRun() *serpent.Command {
return xerrors.Errorf("rpc-read-handle (%v) and rpc-write-handle (%v) must be different", rpcReadHandleInt, rpcWriteHandleInt)
}
// The manager passes the read and write descriptors directly to the
// daemon, so we can open the RPC pipe from the raw values.
// We don't need to worry about duplicating the handles on Windows,
// which is different from Unix.
logger.Info(ctx, "opening bidirectional RPC pipe", slog.F("rpc_read_handle", rpcReadHandleInt), slog.F("rpc_write_handle", rpcWriteHandleInt))
pipe, err := vpn.NewBidirectionalPipe(uintptr(rpcReadHandleInt), uintptr(rpcWriteHandleInt))
if err != nil {
@@ -62,7 +62,7 @@ func (*RootCmd) vpnDaemonRun() *serpent.Command {
}
defer pipe.Close()
logger.Info(ctx, "starting VPN tunnel")
logger.Info(ctx, "starting tunnel")
tunnel, err := vpn.NewTunnel(ctx, logger, pipe, vpn.NewClient(), vpn.UseOSNetworkingStack())
if err != nil {
return xerrors.Errorf("create new tunnel for client: %w", err)
@@ -1,19 +0,0 @@
//go:build linux
package cli_test
import (
"os"
"testing"
"github.com/stretchr/testify/require"
"golang.org/x/sys/unix"
)
func dupHandle(t *testing.T, f *os.File) uintptr {
t.Helper()
dupFD, err := unix.Dup(int(f.Fd()))
require.NoError(t, err)
return uintptr(dupFD)
}
@@ -1,33 +0,0 @@
//go:build windows
package cli_test
import (
"os"
"syscall"
"testing"
"github.com/stretchr/testify/require"
)
func dupHandle(t *testing.T, f *os.File) uintptr {
t.Helper()
src := syscall.Handle(f.Fd())
var dup syscall.Handle
proc, err := syscall.GetCurrentProcess()
require.NoError(t, err)
err = syscall.DuplicateHandle(
proc,
src,
proc,
&dup,
0,
false,
syscall.DUPLICATE_SAME_ACCESS,
)
require.NoError(t, err)
return uintptr(dup)
}
@@ -1,4 +1,4 @@
//go:build windows || linux
//go:build windows
package cli_test
@@ -67,35 +67,22 @@ func TestVPNDaemonRun(t *testing.T) {
r1, w1, err := os.Pipe()
require.NoError(t, err)
defer r1.Close()
defer w1.Close()
r2, w2, err := os.Pipe()
require.NoError(t, err)
defer r2.Close()
// The daemon closes the handles passed via NewBidirectionalPipe. Since our
// CLI tests run in-process, pass duplicated handles so we can close the
// originals without risking a double-close on FD reuse.
rpcReadHandle := dupHandle(t, r1)
rpcWriteHandle := dupHandle(t, w2)
require.NoError(t, r1.Close())
require.NoError(t, w2.Close())
defer w2.Close()
ctx := testutil.Context(t, testutil.WaitLong)
inv, _ := clitest.New(t,
"vpn-daemon",
"run",
"--rpc-read-handle",
fmt.Sprint(rpcReadHandle),
"--rpc-write-handle",
fmt.Sprint(rpcWriteHandle),
)
inv, _ := clitest.New(t, "vpn-daemon", "run", "--rpc-read-handle", fmt.Sprint(r1.Fd()), "--rpc-write-handle", fmt.Sprint(w2.Fd()))
waiter := clitest.StartWithWaiter(t, inv.WithContext(ctx))
// Send an invalid header, including a newline delimiter, so the handshake
// fails without requiring context cancellation.
_, err = w1.Write([]byte("garbage\n"))
// Send garbage which should cause the handshake to fail and the daemon
// to exit.
_, err = w1.Write([]byte("garbage"))
require.NoError(t, err)
waiter.Cancel()
err = waiter.Wait()
require.ErrorContains(t, err, "handshake failed")
})
-87
@@ -1304,90 +1304,3 @@ func (api *API) pauseTask(rw http.ResponseWriter, r *http.Request) {
WorkspaceBuild: &build,
})
}
// @Summary Resume task
// @ID resume-task
// @Security CoderSessionToken
// @Accept json
// @Tags Tasks
// @Param user path string true "Username, user ID, or 'me' for the authenticated user"
// @Param task path string true "Task ID" format(uuid)
// @Success 202 {object} codersdk.ResumeTaskResponse
// @Router /tasks/{user}/{task}/resume [post]
func (api *API) resumeTask(rw http.ResponseWriter, r *http.Request) {
var (
ctx = r.Context()
apiKey = httpmw.APIKey(r)
task = httpmw.TaskParam(r)
)
if !task.WorkspaceID.Valid {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Task does not have a workspace.",
})
return
}
workspace, err := api.Database.GetWorkspaceByID(ctx, task.WorkspaceID.UUID)
if err != nil {
if httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return
}
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task workspace.",
Detail: err.Error(),
})
return
}
latestBuild, err := api.Database.GetLatestWorkspaceBuildByWorkspaceID(ctx, workspace.ID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task workspace build.",
Detail: err.Error(),
})
return
}
job, err := api.Database.GetProvisionerJobByID(ctx, latestBuild.JobID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task workspace build job.",
Detail: err.Error(),
})
return
}
workspaceStatus := codersdk.ConvertWorkspaceStatus(
codersdk.ProvisionerJobStatus(job.JobStatus),
codersdk.WorkspaceTransition(latestBuild.Transition),
)
if workspaceStatus == codersdk.WorkspaceStatusRunning {
httpapi.Write(ctx, rw, http.StatusConflict, codersdk.Response{
Message: "Task workspace is already running.",
Detail: fmt.Sprintf("Workspace status is %q.", workspaceStatus),
})
return
}
buildReq := codersdk.CreateWorkspaceBuildRequest{
Transition: codersdk.WorkspaceTransitionStart,
Reason: codersdk.CreateWorkspaceBuildReasonTaskResume,
}
build, err := api.postWorkspaceBuildsInternal(
ctx,
apiKey,
workspace,
buildReq,
func(action policy.Action, object rbac.Objecter) bool {
return api.Authorize(r, action, object)
},
audit.WorkspaceBuildBaggageFromRequest(r),
)
if err != nil {
httperror.WriteWorkspaceBuildError(ctx, rw, err)
return
}
httpapi.Write(ctx, rw, http.StatusAccepted, codersdk.ResumeTaskResponse{
WorkspaceBuild: &build,
})
}
+1 -337
@@ -2512,20 +2512,13 @@ func TestPauseTask(t *testing.T) {
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
resp, err := client.PauseTask(ctx, codersdk.Me, task.ID)
// Verify that the request was accepted correctly:
require.NoError(t, err)
build := *resp.WorkspaceBuild
require.NotNil(t, build)
require.Equal(t, codersdk.WorkspaceTransitionStop, build.Transition)
require.Equal(t, task.WorkspaceID.UUID, build.WorkspaceID)
require.Equal(t, workspace.LatestBuild.BuildNumber+1, build.BuildNumber)
require.Equal(t, string(codersdk.CreateWorkspaceBuildReasonTaskManualPause), string(build.Reason))
// Verify that the accepted request was processed correctly:
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, build.ID)
workspace, err = client.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
require.Equal(t, codersdk.WorkspaceStatusStopped, workspace.LatestBuild.Status)
})
t.Run("Non-owner role access", func(t *testing.T) {
@@ -2788,332 +2781,3 @@ func TestPauseTask(t *testing.T) {
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
})
}
func TestResumeTask(t *testing.T) {
t.Parallel()
setupClient := func(t *testing.T, db database.Store, ps pubsub.Pubsub, authorizer rbac.Authorizer) *codersdk.Client {
t.Helper()
client, _, _ := coderdtest.NewWithAPI(t, &coderdtest.Options{
Database: db,
Pubsub: ps,
Authorizer: authorizer,
IncludeProvisionerDaemon: true,
})
return client
}
setupWorkspaceTask := func(t *testing.T, db database.Store, user codersdk.CreateFirstUserResponse) (database.Task, uuid.UUID) {
t.Helper()
workspaceBuild := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithTask(database.TaskTable{
Prompt: "resume me",
}, nil).Do()
return workspaceBuild.Task, workspaceBuild.Workspace.ID
}
t.Run("OK", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: echo.ApplyComplete,
ProvisionGraph: []*proto.Response{
{Type: &proto.Response_Graph{Graph: &proto.GraphComplete{
HasAiTasks: true,
}}},
},
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
task, err := client.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
TemplateVersionID: template.ActiveVersionID,
Input: "resume me",
})
require.NoError(t, err)
workspace, err := client.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
pauseResp, err := client.PauseTask(ctx, codersdk.Me, task.ID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, pauseResp.WorkspaceBuild.ID)
resumeResp, err := client.ResumeTask(ctx, codersdk.Me, task.ID)
require.NoError(t, err)
build := *resumeResp.WorkspaceBuild
require.Equal(t, codersdk.WorkspaceTransitionStart, build.Transition)
require.Equal(t, task.WorkspaceID.UUID, build.WorkspaceID)
require.Equal(t, workspace.LatestBuild.BuildNumber+2, build.BuildNumber)
require.Equal(t, string(codersdk.CreateWorkspaceBuildReasonTaskResume), string(build.Reason))
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, build.ID)
workspace, err = client.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
require.Equal(t, codersdk.WorkspaceStatusRunning, workspace.LatestBuild.Status)
})
t.Run("Resume a task that is not paused", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
db, ps := dbtestutil.NewDB(t)
client := setupClient(t, db, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
workspaceBuild := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).
WithTask(database.TaskTable{
Prompt: "pause me",
}, nil).
Succeeded().
Do()
_, err := client.ResumeTask(ctx, codersdk.Me, workspaceBuild.Task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusConflict, apiErr.StatusCode())
})
t.Run("Task not found", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
_ = coderdtest.CreateFirstUser(t, client)
_, err := client.ResumeTask(ctx, codersdk.Me, uuid.New())
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
})
t.Run("Task lookup forbidden", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
auth := &coderdtest.FakeAuthorizer{
ConditionalReturn: func(_ context.Context, _ rbac.Subject, action policy.Action, object rbac.Object) error {
if action == policy.ActionRead && object.Type == rbac.ResourceTask.Type {
return rbac.UnauthorizedError{}
}
return nil
},
}
client := setupClient(t, db, ps, auth)
user := coderdtest.CreateFirstUser(t, client)
task, _ := setupWorkspaceTask(t, db, user)
_, err := client.ResumeTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
})
t.Run("Workspace lookup forbidden", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
auth := &coderdtest.FakeAuthorizer{
ConditionalReturn: func(_ context.Context, _ rbac.Subject, action policy.Action, object rbac.Object) error {
if action == policy.ActionRead && object.Type == rbac.ResourceWorkspace.Type {
return rbac.UnauthorizedError{}
}
return nil
},
}
client := setupClient(t, db, ps, auth)
user := coderdtest.CreateFirstUser(t, client)
task, _ := setupWorkspaceTask(t, db, user)
_, err := client.ResumeTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
})
t.Run("No Workspace for Task", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
client := setupClient(t, db, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
workspaceBuild := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).Do()
task := dbgen.Task(t, db, database.TaskTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
TemplateVersionID: workspaceBuild.Build.TemplateVersionID,
Prompt: "no workspace",
})
_, err := client.ResumeTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
require.Equal(t, "Task does not have a workspace.", apiErr.Message)
})
t.Run("Workspace not found", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
var workspaceID uuid.UUID
wrapped := aiTaskStoreWrapper{
Store: db,
getWorkspaceByID: func(ctx context.Context, id uuid.UUID) (database.Workspace, error) {
if id == workspaceID && id != uuid.Nil {
return database.Workspace{}, sql.ErrNoRows
}
return db.GetWorkspaceByID(ctx, id)
},
}
client := setupClient(t, wrapped, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
task, workspaceIDValue := setupWorkspaceTask(t, db, user)
workspaceID = workspaceIDValue
_, err := client.ResumeTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
})
t.Run("Workspace lookup internal error", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
var workspaceID uuid.UUID
wrapped := aiTaskStoreWrapper{
Store: db,
getWorkspaceByID: func(ctx context.Context, id uuid.UUID) (database.Workspace, error) {
if id == workspaceID && id != uuid.Nil {
return database.Workspace{}, xerrors.New("boom")
}
return db.GetWorkspaceByID(ctx, id)
},
}
client := setupClient(t, wrapped, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
task, workspaceIDValue := setupWorkspaceTask(t, db, user)
workspaceID = workspaceIDValue
_, err := client.ResumeTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
require.Equal(t, "Internal error fetching task workspace.", apiErr.Message)
})
t.Run("Build Forbidden", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
auth := &coderdtest.FakeAuthorizer{
ConditionalReturn: func(_ context.Context, _ rbac.Subject, action policy.Action, object rbac.Object) error {
if action == policy.ActionWorkspaceStart && object.Type == rbac.ResourceWorkspace.Type {
return rbac.UnauthorizedError{}
}
return nil
},
}
client := setupClient(t, db, ps, auth)
user := coderdtest.CreateFirstUser(t, client)
task, _ := setupWorkspaceTask(t, db, user)
pauseResp, err := client.PauseTask(ctx, codersdk.Me, task.ID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, pauseResp.WorkspaceBuild.ID)
_, err = client.ResumeTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusForbidden, apiErr.StatusCode())
})
t.Run("Job already in progress", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
client := setupClient(t, db, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
workspaceBuild := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).
WithTask(database.TaskTable{
Prompt: "resume me",
}, nil).
Starting().
Do()
_, err := client.ResumeTask(ctx, codersdk.Me, workspaceBuild.Task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusConflict, apiErr.StatusCode())
})
t.Run("Build Internal Error", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
wrapped := aiTaskStoreWrapper{
Store: db,
}
client := setupClient(t, &wrapped, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: echo.ApplyComplete,
ProvisionGraph: []*proto.Response{
{Type: &proto.Response_Graph{Graph: &proto.GraphComplete{
HasAiTasks: true,
}}},
},
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
task, err := client.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
TemplateVersionID: template.ActiveVersionID,
Input: "resume me",
})
require.NoError(t, err)
workspace, err := client.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
pauseResp, err := client.PauseTask(ctx, codersdk.Me, task.ID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, pauseResp.WorkspaceBuild.ID)
// Induce a transient failure in the database after the task has been paused.
wrapped.insertWorkspaceBuild = func(ctx context.Context, arg database.InsertWorkspaceBuildParams) error {
return xerrors.New("insert failed")
}
_, err = client.ResumeTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
})
}
+2 -54
@@ -5866,48 +5866,6 @@ const docTemplate = `{
}
}
},
"/tasks/{user}/{task}/resume": {
"post": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": [
"application/json"
],
"tags": [
"Tasks"
],
"summary": "Resume task",
"operationId": "resume-task",
"parameters": [
{
"type": "string",
"description": "Username, user ID, or 'me' for the authenticated user",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "uuid",
"description": "Task ID",
"name": "task",
"in": "path",
"required": true
}
],
"responses": {
"202": {
"description": "Accepted",
"schema": {
"$ref": "#/definitions/codersdk.ResumeTaskResponse"
}
}
}
}
},
"/tasks/{user}/{task}/send": {
"post": {
"security": [
@@ -14187,8 +14145,7 @@ const docTemplate = `{
"ssh_connection",
"vscode_connection",
"jetbrains_connection",
"task_manual_pause",
"task_resume"
"task_manual_pause"
],
"x-enum-varnames": [
"CreateWorkspaceBuildReasonDashboard",
@@ -14196,8 +14153,7 @@ const docTemplate = `{
"CreateWorkspaceBuildReasonSSHConnection",
"CreateWorkspaceBuildReasonVSCodeConnection",
"CreateWorkspaceBuildReasonJetbrainsConnection",
"CreateWorkspaceBuildReasonTaskManualPause",
"CreateWorkspaceBuildReasonTaskResume"
"CreateWorkspaceBuildReasonTaskManualPause"
]
},
"codersdk.CreateWorkspaceBuildRequest": {
@@ -18279,14 +18235,6 @@ const docTemplate = `{
}
}
},
"codersdk.ResumeTaskResponse": {
"type": "object",
"properties": {
"workspace_build": {
"$ref": "#/definitions/codersdk.WorkspaceBuild"
}
}
},
"codersdk.RetentionConfig": {
"type": "object",
"properties": {
+2 -50
@@ -5185,44 +5185,6 @@
}
}
},
"/tasks/{user}/{task}/resume": {
"post": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": ["application/json"],
"tags": ["Tasks"],
"summary": "Resume task",
"operationId": "resume-task",
"parameters": [
{
"type": "string",
"description": "Username, user ID, or 'me' for the authenticated user",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "uuid",
"description": "Task ID",
"name": "task",
"in": "path",
"required": true
}
],
"responses": {
"202": {
"description": "Accepted",
"schema": {
"$ref": "#/definitions/codersdk.ResumeTaskResponse"
}
}
}
}
},
"/tasks/{user}/{task}/send": {
"post": {
"security": [
@@ -12739,8 +12701,7 @@
"ssh_connection",
"vscode_connection",
"jetbrains_connection",
"task_manual_pause",
"task_resume"
"task_manual_pause"
],
"x-enum-varnames": [
"CreateWorkspaceBuildReasonDashboard",
@@ -12748,8 +12709,7 @@
"CreateWorkspaceBuildReasonSSHConnection",
"CreateWorkspaceBuildReasonVSCodeConnection",
"CreateWorkspaceBuildReasonJetbrainsConnection",
"CreateWorkspaceBuildReasonTaskManualPause",
"CreateWorkspaceBuildReasonTaskResume"
"CreateWorkspaceBuildReasonTaskManualPause"
]
},
"codersdk.CreateWorkspaceBuildRequest": {
@@ -16687,14 +16647,6 @@
}
}
},
"codersdk.ResumeTaskResponse": {
"type": "object",
"properties": {
"workspace_build": {
"$ref": "#/definitions/codersdk.WorkspaceBuild"
}
}
},
"codersdk.RetentionConfig": {
"type": "object",
"properties": {
+1 -1
@@ -400,7 +400,7 @@ func TestAPIKey_Deleted(t *testing.T) {
require.Error(t, err)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
require.Equal(t, http.StatusBadRequest, apiErr.StatusCode())
}
func TestAPIKey_SetDefault(t *testing.T) {
+18 -21
@@ -48,10 +48,9 @@ type Executor struct {
tick <-chan time.Time
statsCh chan<- Stats
// NotificationsEnqueuer handles enqueueing notifications for delivery by SMTP, webhook, etc.
notificationsEnqueuer notifications.Enqueuer
reg prometheus.Registerer
experiments codersdk.Experiments
workspaceBuilderMetrics *wsbuilder.Metrics
notificationsEnqueuer notifications.Enqueuer
reg prometheus.Registerer
experiments codersdk.Experiments
metrics executorMetrics
}
@@ -68,24 +67,23 @@ type Stats struct {
}
// NewExecutor returns a new autobuild executor.
func NewExecutor(ctx context.Context, db database.Store, ps pubsub.Pubsub, fc *files.Cache, reg prometheus.Registerer, tss *atomic.Pointer[schedule.TemplateScheduleStore], auditor *atomic.Pointer[audit.Auditor], acs *atomic.Pointer[dbauthz.AccessControlStore], buildUsageChecker *atomic.Pointer[wsbuilder.UsageChecker], log slog.Logger, tick <-chan time.Time, enqueuer notifications.Enqueuer, exp codersdk.Experiments, workspaceBuilderMetrics *wsbuilder.Metrics) *Executor {
func NewExecutor(ctx context.Context, db database.Store, ps pubsub.Pubsub, fc *files.Cache, reg prometheus.Registerer, tss *atomic.Pointer[schedule.TemplateScheduleStore], auditor *atomic.Pointer[audit.Auditor], acs *atomic.Pointer[dbauthz.AccessControlStore], buildUsageChecker *atomic.Pointer[wsbuilder.UsageChecker], log slog.Logger, tick <-chan time.Time, enqueuer notifications.Enqueuer, exp codersdk.Experiments) *Executor {
factory := promauto.With(reg)
le := &Executor{
//nolint:gocritic // Autostart has a limited set of permissions.
ctx: dbauthz.AsAutostart(ctx),
db: db,
ps: ps,
fileCache: fc,
templateScheduleStore: tss,
tick: tick,
log: log.Named("autobuild"),
auditor: auditor,
accessControlStore: acs,
buildUsageChecker: buildUsageChecker,
notificationsEnqueuer: enqueuer,
reg: reg,
experiments: exp,
workspaceBuilderMetrics: workspaceBuilderMetrics,
ctx: dbauthz.AsAutostart(ctx),
db: db,
ps: ps,
fileCache: fc,
templateScheduleStore: tss,
tick: tick,
log: log.Named("autobuild"),
auditor: auditor,
accessControlStore: acs,
buildUsageChecker: buildUsageChecker,
notificationsEnqueuer: enqueuer,
reg: reg,
experiments: exp,
metrics: executorMetrics{
autobuildExecutionDuration: factory.NewHistogram(prometheus.HistogramOpts{
Namespace: "coderd",
@@ -337,8 +335,7 @@ func (e *Executor) runOnce(t time.Time) Stats {
SetLastWorkspaceBuildInTx(&latestBuild).
SetLastWorkspaceBuildJobInTx(&latestJob).
Experiments(e.experiments).
Reason(reason).
BuildMetrics(e.workspaceBuilderMetrics)
Reason(reason)
log.Debug(e.ctx, "auto building workspace", slog.F("transition", nextTransition))
if nextTransition == database.WorkspaceTransitionStart &&
useActiveVersion(accessControl, ws) {
-2
@@ -245,7 +245,6 @@ type Options struct {
MetadataBatcherOptions []metadatabatcher.Option
ProvisionerdServerMetrics *provisionerdserver.Metrics
WorkspaceBuilderMetrics *wsbuilder.Metrics
// WorkspaceAppAuditSessionTimeout allows changing the timeout for audit
// sessions. Raising or lowering this value will directly affect the write
@@ -1080,7 +1079,6 @@ func New(options *Options) *API {
r.Post("/send", api.taskSend)
r.Get("/logs", api.taskLogs)
r.Post("/pause", api.pauseTask)
r.Post("/resume", api.resumeTask)
})
})
})
-3
@@ -191,7 +191,6 @@ type Options struct {
TelemetryReporter telemetry.Reporter
ProvisionerdServerMetrics *provisionerdserver.Metrics
WorkspaceBuilderMetrics *wsbuilder.Metrics
UsageInserter usage.Inserter
}
@@ -400,7 +399,6 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
options.AutobuildTicker,
options.NotificationsEnqueuer,
experiments,
options.WorkspaceBuilderMetrics,
).WithStatsChannel(options.AutobuildStats)
lifecycleExecutor.Run()
@@ -622,7 +620,6 @@ func NewOptions(t testing.TB, options *Options) (func(http.Handler), context.Can
AppEncryptionKeyCache: options.APIKeyEncryptionCache,
OIDCConvertKeyCache: options.OIDCConvertKeyCache,
ProvisionerdServerMetrics: options.ProvisionerdServerMetrics,
WorkspaceBuilderMetrics: options.WorkspaceBuilderMetrics,
}
}
-2
@@ -17,6 +17,4 @@ const (
CheckTelemetryLockEventTypeConstraint CheckConstraint = "telemetry_lock_event_type_constraint" // telemetry_locks
CheckValidationMonotonicOrder CheckConstraint = "validation_monotonic_order" // template_version_parameters
CheckUsageEventTypeCheck CheckConstraint = "usage_event_type_check" // usage_events
CheckGroupAclIsObject CheckConstraint = "group_acl_is_object" // workspaces
CheckUserAclIsObject CheckConstraint = "user_acl_is_object" // workspaces
)
+1
@@ -93,6 +93,7 @@ type TxOptions struct {
// IncrementExecutionCount is a helper function for external packages
// to increment the unexported count.
// Mainly for `dbmem`.
func IncrementExecutionCount(opts *TxOptions) {
opts.executionCount++
}
+5 -2
@@ -19,6 +19,7 @@ import (
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/apikey"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/db2sdk"
@@ -29,6 +30,7 @@ import (
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/rbac/rolestore"
"github.com/coder/coder/v2/coderd/taskname"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/cryptorand"
"github.com/coder/coder/v2/provisionerd/proto"
@@ -1662,12 +1664,13 @@ func Task(t testing.TB, db database.Store, orig database.TaskTable) database.Tas
parameters = json.RawMessage([]byte("{}"))
}
taskName := taskname.Generate(genCtx, slog.Make(), orig.Prompt)
task, err := db.InsertTask(genCtx, database.InsertTaskParams{
ID: takeFirst(orig.ID, uuid.New()),
OrganizationID: orig.OrganizationID,
OwnerID: orig.OwnerID,
Name: takeFirst(orig.Name, testutil.GetRandomNameHyphenated(t)),
DisplayName: takeFirst(orig.DisplayName, testutil.GetRandomNameHyphenated(t)),
Name: takeFirst(orig.Name, taskName.Name),
DisplayName: takeFirst(orig.DisplayName, taskName.DisplayName),
WorkspaceID: orig.WorkspaceID,
TemplateVersionID: orig.TemplateVersionID,
TemplateParameters: parameters,
+1 -3
@@ -2736,9 +2736,7 @@ CREATE TABLE workspaces (
favorite boolean DEFAULT false NOT NULL,
next_start_at timestamp with time zone,
group_acl jsonb DEFAULT '{}'::jsonb NOT NULL,
user_acl jsonb DEFAULT '{}'::jsonb NOT NULL,
CONSTRAINT group_acl_is_object CHECK ((jsonb_typeof(group_acl) = 'object'::text)),
CONSTRAINT user_acl_is_object CHECK ((jsonb_typeof(user_acl) = 'object'::text))
user_acl jsonb DEFAULT '{}'::jsonb NOT NULL
);
COMMENT ON COLUMN workspaces.favorite IS 'Favorite is true if the workspace owner has favorited the workspace.';
@@ -1,3 +0,0 @@
ALTER TABLE workspaces
DROP CONSTRAINT IF EXISTS group_acl_is_object,
DROP CONSTRAINT IF EXISTS user_acl_is_object;
@@ -1,9 +0,0 @@
-- Add constraints that reject 'null'::jsonb for group and user ACLs
-- because they would break the new workspace_expanded view.
UPDATE workspaces SET group_acl = '{}'::jsonb WHERE group_acl = 'null'::jsonb;
UPDATE workspaces SET user_acl = '{}'::jsonb WHERE user_acl = 'null'::jsonb;
ALTER TABLE workspaces
ADD CONSTRAINT group_acl_is_object CHECK (jsonb_typeof(group_acl) = 'object'),
ADD CONSTRAINT user_acl_is_object CHECK (jsonb_typeof(user_acl) = 'object');
@@ -1,35 +0,0 @@
-- Fixture for migration 000417_workspace_acl_object_constraint.
-- Inserts a workspace with 'null'::jsonb ACLs to ensure the migration
-- correctly normalizes such values.
INSERT INTO workspaces (
id,
created_at,
updated_at,
owner_id,
organization_id,
template_id,
deleted,
name,
last_used_at,
automatic_updates,
favorite,
group_acl,
user_acl
)
VALUES (
'6f6fdbee-4c18-4a5c-8a8d-9b811c9f0a28',
'2024-02-10 00:00:00+00',
'2024-02-10 00:00:00+00',
'30095c71-380b-457a-8995-97b8ee6e5307',
'bb640d07-ca8a-4869-b6bc-ae61ebb2fda1',
'4cc1f466-f326-477e-8762-9d0c6781fc56',
false,
'acl-null-workspace',
'0001-01-01 00:00:00+00',
'never',
false,
'null'::jsonb,
'null'::jsonb
)
ON CONFLICT DO NOTHING;
-59
@@ -6765,65 +6765,6 @@ func TestWorkspaceBuildDeadlineConstraint(t *testing.T) {
}
}
func TestWorkspaceACLObjectConstraint(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
org := dbgen.Organization(t, db, database.Organization{})
user := dbgen.User(t, db, database.User{})
template := dbgen.Template(t, db, database.Template{
CreatedBy: user.ID,
OrganizationID: org.ID,
})
workspace := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: user.ID,
TemplateID: template.ID,
Deleted: false,
})
t.Run("GroupACLNull", func(t *testing.T) {
t.Parallel()
var nilACL database.WorkspaceACL
ctx := testutil.Context(t, testutil.WaitLong)
err := db.UpdateWorkspaceACLByID(ctx, database.UpdateWorkspaceACLByIDParams{
ID: workspace.ID,
GroupACL: nilACL,
UserACL: database.WorkspaceACL{},
})
require.Error(t, err)
require.True(t, database.IsCheckViolation(err, database.CheckGroupAclIsObject))
})
t.Run("UserACLNull", func(t *testing.T) {
t.Parallel()
var nilACL database.WorkspaceACL
ctx := testutil.Context(t, testutil.WaitLong)
err := db.UpdateWorkspaceACLByID(ctx, database.UpdateWorkspaceACLByIDParams{
ID: workspace.ID,
GroupACL: database.WorkspaceACL{},
UserACL: nilACL,
})
require.Error(t, err)
require.True(t, database.IsCheckViolation(err, database.CheckUserAclIsObject))
})
t.Run("ValidEmptyObjects", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
err := db.UpdateWorkspaceACLByID(ctx, database.UpdateWorkspaceACLByIDParams{
ID: workspace.ID,
GroupACL: database.WorkspaceACL{},
UserACL: database.WorkspaceACL{},
})
require.NoError(t, err)
})
}
// TestGetLatestWorkspaceBuildsByWorkspaceIDs populates the database with
// workspaces and builds. It then tests that
// GetLatestWorkspaceBuildsByWorkspaceIDs returns the latest build for some
-8
@@ -106,10 +106,6 @@ func ExtractUserContext(ctx context.Context, db database.Store, rw http.Response
if userID, err := uuid.Parse(userQuery); err == nil {
user, err = db.GetUserByID(ctx, userID)
if err != nil {
if httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return database.User{}, false
}
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: userErrorMessage,
Detail: fmt.Sprintf("queried user=%q", userQuery),
@@ -124,10 +120,6 @@ func ExtractUserContext(ctx context.Context, db database.Store, rw http.Response
Username: userQuery,
})
if err != nil {
if httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return database.User{}, false
}
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: userErrorMessage,
Detail: fmt.Sprintf("queried user=%q", userQuery),
+1 -47
@@ -71,53 +71,7 @@ func TestUserParam(t *testing.T) {
})).ServeHTTP(rw, r)
res := rw.Result()
defer res.Body.Close()
// User "ben" doesn't exist, so expect 404.
require.Equal(t, http.StatusNotFound, res.StatusCode)
})
t.Run("NotFoundByUsername", func(t *testing.T) {
t.Parallel()
db, rw, r := setup(t)
httpmw.ExtractAPIKeyMW(httpmw.ExtractAPIKeyConfig{
DB: db,
RedirectToLogin: false,
})(http.HandlerFunc(func(rw http.ResponseWriter, returnedRequest *http.Request) {
r = returnedRequest
})).ServeHTTP(rw, r)
routeContext := chi.NewRouteContext()
routeContext.URLParams.Add("user", "nonexistent-user")
r = r.WithContext(context.WithValue(r.Context(), chi.RouteCtxKey, routeContext))
httpmw.ExtractUserParam(db)(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
rw.WriteHeader(http.StatusOK)
})).ServeHTTP(rw, r)
res := rw.Result()
defer res.Body.Close()
require.Equal(t, http.StatusNotFound, res.StatusCode)
})
t.Run("NotFoundByUUID", func(t *testing.T) {
t.Parallel()
db, rw, r := setup(t)
httpmw.ExtractAPIKeyMW(httpmw.ExtractAPIKeyConfig{
DB: db,
RedirectToLogin: false,
})(http.HandlerFunc(func(rw http.ResponseWriter, returnedRequest *http.Request) {
r = returnedRequest
})).ServeHTTP(rw, r)
routeContext := chi.NewRouteContext()
// Use a valid UUID that doesn't exist in the database.
routeContext.URLParams.Add("user", "88888888-4444-4444-4444-121212121212")
r = r.WithContext(context.WithValue(r.Context(), chi.RouteCtxKey, routeContext))
httpmw.ExtractUserParam(db)(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
rw.WriteHeader(http.StatusOK)
})).ServeHTTP(rw, r)
res := rw.Result()
defer res.Body.Close()
require.Equal(t, http.StatusNotFound, res.StatusCode)
require.Equal(t, http.StatusBadRequest, res.StatusCode)
})
t.Run("me", func(t *testing.T) {
@@ -262,6 +262,8 @@ func TestWebhookDispatch(t *testing.T) {
// This is not strictly necessary for this test, but it's testing some side logic which is too small for its own test.
require.Equal(t, payload.Payload.UserName, name)
require.Equal(t, payload.Payload.UserUsername, username)
// Right now we don't have a way to query notification templates by ID in dbmem, and it's not necessary to add this
		// just to satisfy this test. We can safely assume that as long as this value is not empty, the given value was delivered.
require.NotEmpty(t, payload.Payload.NotificationName)
}
+1 -1
@@ -150,7 +150,7 @@ func TestNotificationPreferences(t *testing.T) {
require.ErrorAsf(t, err, &sdkError, "error should be of type *codersdk.Error")
// NOTE: ExtractUserParam gets in the way here, and returns a 400 Bad Request instead of a 403 Forbidden.
// This is not ideal, and we should probably change this behavior.
require.Equal(t, http.StatusNotFound, sdkError.StatusCode())
require.Equal(t, http.StatusBadRequest, sdkError.StatusCode())
})
t.Run("Admin may read any users' preferences", func(t *testing.T) {
+1 -41
@@ -13,7 +13,6 @@ type Metrics struct {
logger slog.Logger
workspaceCreationTimings *prometheus.HistogramVec
workspaceClaimTimings *prometheus.HistogramVec
jobQueueWait *prometheus.HistogramVec
}
type WorkspaceTimingType int
@@ -30,12 +29,6 @@ const (
workspaceTypePrebuild = "prebuild"
)
// BuildReasonPrebuild is the build_reason metric label value for prebuild
// operations. This is distinct from database.BuildReason values since prebuilds
// use BuildReasonInitiator in the database but we want to track them separately
// in metrics. This is also used as a label value by the metrics in wsbuilder.
const BuildReasonPrebuild = workspaceTypePrebuild
type WorkspaceTimingFlags struct {
IsPrebuild bool
IsClaim bool
@@ -97,30 +90,6 @@ func NewMetrics(logger slog.Logger) *Metrics {
NativeHistogramZeroThreshold: 0,
NativeHistogramMaxZeroThreshold: 0,
}, []string{"organization_name", "template_name", "preset_name"}),
jobQueueWait: prometheus.NewHistogramVec(prometheus.HistogramOpts{
Namespace: "coderd",
Name: "provisioner_job_queue_wait_seconds",
Help: "Time from job creation to acquisition by a provisioner daemon.",
Buckets: []float64{
0.1, // 100ms
0.5, // 500ms
1, // 1s
5, // 5s
10, // 10s
30, // 30s
60, // 1m
120, // 2m
300, // 5m
600, // 10m
900, // 15m
1800, // 30m
},
NativeHistogramBucketFactor: 1.1,
NativeHistogramMaxBucketNumber: 100,
NativeHistogramMinResetDuration: time.Hour,
NativeHistogramZeroThreshold: 0,
NativeHistogramMaxZeroThreshold: 0,
}, []string{"provisioner_type", "job_type", "transition", "build_reason"}),
}
}
@@ -128,10 +97,7 @@ func (m *Metrics) Register(reg prometheus.Registerer) error {
if err := reg.Register(m.workspaceCreationTimings); err != nil {
return err
}
if err := reg.Register(m.workspaceClaimTimings); err != nil {
return err
}
return reg.Register(m.jobQueueWait)
return reg.Register(m.workspaceClaimTimings)
}
// IsTrackable returns true if the workspace build should be tracked in metrics.
@@ -196,9 +162,3 @@ func (m *Metrics) UpdateWorkspaceTimingsMetrics(
// Not a trackable build type (e.g. restart, stop, subsequent builds)
}
}
// ObserveJobQueueWait records the time a provisioner job spent waiting in the queue.
// For non-workspace-build jobs, transition and buildReason should be empty strings.
func (m *Metrics) ObserveJobQueueWait(provisionerType, jobType, transition, buildReason string, waitSeconds float64) {
m.jobQueueWait.WithLabelValues(provisionerType, jobType, transition, buildReason).Observe(waitSeconds)
}
@@ -478,10 +478,6 @@ func (s *server) acquireProtoJob(ctx context.Context, job database.ProvisionerJo
TraceMetadata: jobTraceMetadata,
}
// jobTransition and jobBuildReason are used for metrics; only set for workspace builds.
var jobTransition string
var jobBuildReason string
switch job.Type {
case database.ProvisionerJobTypeWorkspaceBuild:
var input WorkspaceProvisionJob
@@ -588,15 +584,6 @@ func (s *server) acquireProtoJob(ctx context.Context, job database.ProvisionerJo
if err != nil {
return nil, failJob(fmt.Sprintf("convert workspace transition: %s", err))
}
jobTransition = string(workspaceBuild.Transition)
// Prebuilds use BuildReasonInitiator in the database but we want to
// track them separately in metrics. Check the initiator ID to detect
// prebuild jobs.
if job.InitiatorID == database.PrebuildsSystemUserID {
jobBuildReason = BuildReasonPrebuild
} else {
jobBuildReason = string(workspaceBuild.Reason)
}
// A previous workspace build exists
var lastWorkspaceBuildParameters []database.WorkspaceBuildParameter
@@ -838,12 +825,6 @@ func (s *server) acquireProtoJob(ctx context.Context, job database.ProvisionerJo
return nil, failJob(fmt.Sprintf("payload was too big: %d > %d", protobuf.Size(protoJob), drpcsdk.MaxMessageSize))
}
// Record the time the job spent waiting in the queue.
if s.metrics != nil && job.StartedAt.Valid && job.Provisioner.Valid() {
queueWaitSeconds := job.StartedAt.Time.Sub(job.CreatedAt).Seconds()
s.metrics.ObserveJobQueueWait(string(job.Provisioner), string(job.Type), jobTransition, jobBuildReason, queueWaitSeconds)
}
return protoJob, err
}
+3 -3
@@ -349,7 +349,7 @@ func TestDeleteUser(t *testing.T) {
err := client.DeleteUser(context.Background(), firstUser.UserID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
require.Equal(t, http.StatusBadRequest, apiErr.StatusCode())
})
t.Run("HasWorkspaces", func(t *testing.T) {
t.Parallel()
@@ -1010,7 +1010,7 @@ func TestUpdateUserProfile(t *testing.T) {
require.ErrorAs(t, err, &apiErr)
// Right now, we are raising a BAD request error because we don't support a
	// user accessing other users' info
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
require.Equal(t, http.StatusBadRequest, apiErr.StatusCode())
})
t.Run("ConflictingUsername", func(t *testing.T) {
@@ -2602,7 +2602,7 @@ func TestUserAutofillParameters(t *testing.T) {
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
require.Equal(t, http.StatusBadRequest, apiErr.StatusCode())
// u1 should be able to read u2's parameters as u1 is site admin.
_, err = client1.UserAutofillParameters(
+14 -17
@@ -68,30 +68,27 @@ func SubdomainAppSessionTokenCookie(hostname string) string {
// the wrong value.
//
// We use different cookie names for:
// - path apps: coder_path_app_session_token
// - path apps on primary access URL: coder_session_token
// - path apps on proxies: coder_path_app_session_token
// - subdomain apps: coder_subdomain_app_session_token_{unique_hash}
//
// We prefer the access-method-specific cookie first, then fall back to standard
// Coder token extraction (query parameters, Coder-Session-Token header, etc.).
// First we try the default function to get a token from request, which supports
// query parameters, the Coder-Session-Token header and the coder_session_token
// cookie.
//
// Then we try the specific cookie name for the access method.
func (c AppCookies) TokenFromRequest(r *http.Request, accessMethod AccessMethod) string {
// Prefer the access-method-specific cookie first.
//
// Workspace app requests commonly include an `Authorization` header intended
// for the upstream app (e.g. API calls). `httpmw.APITokenFromRequest` supports
// RFC 6750 bearer tokens, so if we consult it first we'd incorrectly treat
// that upstream header as a Coder session token and ignore the app session
// cookie, breaking token renewal for subdomain apps.
cookie, err := r.Cookie(c.CookieNameForAccessMethod(accessMethod))
if err == nil && cookie.Value != "" {
return cookie.Value
}
// Fall back to standard Coder token extraction (session cookie, query param,
// Coder-Session-Token header, and then Authorization: Bearer).
// Try the default function first.
token := httpmw.APITokenFromRequest(r)
if token != "" {
return token
}
// Then try the specific cookie name for the access method.
cookie, err := r.Cookie(c.CookieNameForAccessMethod(accessMethod))
if err == nil && cookie.Value != "" {
return cookie.Value
}
return ""
}
-18
@@ -1,8 +1,6 @@
package workspaceapps_test
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/stretchr/testify/require"
@@ -34,19 +32,3 @@ func TestAppCookies(t *testing.T) {
newCookies := workspaceapps.NewAppCookies("different.com")
require.NotEqual(t, cookies.SubdomainAppSessionToken, newCookies.SubdomainAppSessionToken)
}
func TestAppCookies_TokenFromRequest_PrefersAppCookieOverAuthorizationBearer(t *testing.T) {
t.Parallel()
cookies := workspaceapps.NewAppCookies("apps.example.com")
req := httptest.NewRequest("GET", "https://8081--agent--workspace--user.apps.example.com/", nil)
req.Header.Set("Authorization", "Bearer whatever")
req.AddCookie(&http.Cookie{
Name: cookies.CookieNameForAccessMethod(workspaceapps.AccessMethodSubdomain),
Value: "subdomain-session-token",
})
got := cookies.TokenFromRequest(req, workspaceapps.AccessMethodSubdomain)
require.Equal(t, "subdomain-session-token", got)
}
+1 -2
@@ -382,8 +382,7 @@ func (api *API) postWorkspaceBuildsInternal(
LogLevel(string(createBuild.LogLevel)).
DeploymentValues(api.Options.DeploymentValues).
Experiments(api.Experiments).
TemplateVersionPresetID(createBuild.TemplateVersionPresetID).
BuildMetrics(api.WorkspaceBuilderMetrics)
TemplateVersionPresetID(createBuild.TemplateVersionPresetID)
if (transition == database.WorkspaceTransitionStart || transition == database.WorkspaceTransitionStop) && createBuild.Reason != "" {
builder = builder.Reason(database.BuildReason(createBuild.Reason))
+1 -2
@@ -787,8 +787,7 @@ func createWorkspace(
ActiveVersion().
Experiments(api.Experiments).
DeploymentValues(api.DeploymentValues).
RichParameterValues(req.RichParameterValues).
BuildMetrics(api.WorkspaceBuilderMetrics)
RichParameterValues(req.RichParameterValues)
if req.TemplateVersionID != uuid.Nil {
builder = builder.VersionID(req.TemplateVersionID)
}
-137
@@ -14,7 +14,6 @@ import (
"time"
"github.com/google/uuid"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -22,9 +21,7 @@ import (
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/coderd"
"github.com/coder/coder/v2/coderd/audit"
"github.com/coder/coder/v2/coderd/autobuild"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/coderdtest/promhelp"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
@@ -33,7 +30,6 @@ import (
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/notifications"
"github.com/coder/coder/v2/coderd/notifications/notificationstest"
"github.com/coder/coder/v2/coderd/provisionerdserver"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/render"
@@ -41,7 +37,6 @@ import (
"github.com/coder/coder/v2/coderd/schedule/cron"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/coderd/util/slice"
"github.com/coder/coder/v2/coderd/wsbuilder"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/cryptorand"
"github.com/coder/coder/v2/provisioner/echo"
@@ -5906,135 +5901,3 @@ func TestWorkspaceCreateWithImplicitPreset(t *testing.T) {
require.Equal(t, preset2ID, *ws2.LatestBuild.TemplateVersionPresetID)
})
}
func TestProvisionerJobQueueWaitMetric(t *testing.T) {
t.Parallel()
logger := testutil.Logger(t)
reg := prometheus.NewRegistry()
metrics := provisionerdserver.NewMetrics(logger)
err := metrics.Register(reg)
require.NoError(t, err)
client := coderdtest.New(t, &coderdtest.Options{
IncludeProvisionerDaemon: true,
ProvisionerdServerMetrics: metrics,
})
user := coderdtest.CreateFirstUser(t, client)
// Create a template version - this triggers a template_version_import job.
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
// Check that the queue wait metric was recorded for the template_version_import job.
importMetric := promhelp.MetricValue(t, reg, "coderd_provisioner_job_queue_wait_seconds", prometheus.Labels{
"provisioner_type": string(database.ProvisionerTypeEcho),
"job_type": string(database.ProvisionerJobTypeTemplateVersionImport),
"transition": "",
"build_reason": "",
})
require.NotNil(t, importMetric, "import job metric should be recorded")
importHistogram := importMetric.GetHistogram()
require.NotNil(t, importHistogram)
require.Equal(t, uint64(1), importHistogram.GetSampleCount(), "import job should have 1 sample")
require.Greater(t, importHistogram.GetSampleSum(), 0.0, "import job queue wait should be non-zero")
// Create a template and workspace - this triggers a workspace_build job.
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, client, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
// Check that the queue wait metric was recorded for the workspace_build job.
buildMetric := promhelp.MetricValue(t, reg, "coderd_provisioner_job_queue_wait_seconds", prometheus.Labels{
"provisioner_type": string(database.ProvisionerTypeEcho),
"job_type": string(database.ProvisionerJobTypeWorkspaceBuild),
"transition": string(database.WorkspaceTransitionStart),
"build_reason": string(database.BuildReasonInitiator),
})
require.NotNil(t, buildMetric, "workspace build job metric should be recorded")
buildHistogram := buildMetric.GetHistogram()
require.NotNil(t, buildHistogram)
require.Equal(t, uint64(1), buildHistogram.GetSampleCount(), "workspace build job should have 1 sample")
require.Greater(t, buildHistogram.GetSampleSum(), 0.0, "workspace build job queue wait should be non-zero")
}
func TestWorkspaceBuildsEnqueuedMetric(t *testing.T) {
t.Parallel()
var (
logger = testutil.Logger(t)
reg = prometheus.NewRegistry()
metrics = provisionerdserver.NewMetrics(logger)
sched = mustSchedule(t, "CRON_TZ=UTC 0 * * * *")
tickCh = make(chan time.Time)
statsCh = make(chan autobuild.Stats)
)
err := metrics.Register(reg)
require.NoError(t, err)
wsBuilderMetrics, err := wsbuilder.NewMetrics(reg)
require.NoError(t, err)
client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{
IncludeProvisionerDaemon: true,
ProvisionerdServerMetrics: metrics,
WorkspaceBuilderMetrics: wsBuilderMetrics,
AutobuildTicker: tickCh,
AutobuildStats: statsCh,
})
user := coderdtest.CreateFirstUser(t, client)
// Create a template and workspace with autostart schedule.
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, client, template.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
cwr.AutostartSchedule = ptr.Ref(sched.String())
})
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
// Stop the workspace to prepare for autostart.
workspace = coderdtest.MustTransitionWorkspace(t, client, workspace.ID, codersdk.WorkspaceTransitionStart, codersdk.WorkspaceTransitionStop)
// Trigger an autostart build via the autobuild ticker. This verifies that
// autostart builds are recorded with build_reason="autostart".
p, err := coderdtest.GetProvisionerForTags(db, time.Now(), workspace.OrganizationID, map[string]string{})
require.NoError(t, err)
go func() {
tickTime := sched.Next(workspace.LatestBuild.CreatedAt)
coderdtest.UpdateProvisionerLastSeenAt(t, db, p.ID, tickTime)
tickCh <- tickTime
close(tickCh)
}()
// Wait for the autostart to complete.
stats := <-statsCh
require.Len(t, stats.Errors, 0)
require.Len(t, stats.Transitions, 1)
require.Contains(t, stats.Transitions, workspace.ID)
require.Equal(t, database.WorkspaceTransitionStart, stats.Transitions[workspace.ID])
// Verify the workspace was autostarted.
workspace = coderdtest.MustWorkspace(t, client, workspace.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
require.Equal(t, codersdk.BuildReasonAutostart, workspace.LatestBuild.Reason)
// Now check the autostart metric was recorded.
autostartCount := promhelp.CounterValue(t, reg, "coderd_workspace_builds_enqueued_total", prometheus.Labels{
"provisioner_type": string(database.ProvisionerTypeEcho),
"build_reason": string(database.BuildReasonAutostart),
"transition": string(database.WorkspaceTransitionStart),
"status": wsbuilder.BuildStatusSuccess,
})
require.Equal(t, 1, autostartCount, "autostart should record 1 enqueue with build_reason=autostart")
}
func mustSchedule(t *testing.T, s string) *cron.Schedule {
t.Helper()
sched, err := cron.Weekly(s)
require.NoError(t, err)
return sched
}
-42
@@ -1,42 +0,0 @@
package wsbuilder
import "github.com/prometheus/client_golang/prometheus"
// Metrics holds metrics related to workspace build creation.
type Metrics struct {
workspaceBuildsEnqueued *prometheus.CounterVec
}
// Metric label values for build status.
const (
BuildStatusSuccess = "success"
BuildStatusFailed = "failed"
)
func NewMetrics(reg prometheus.Registerer) (*Metrics, error) {
m := &Metrics{
workspaceBuildsEnqueued: prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: "coderd",
Name: "workspace_builds_enqueued_total",
Help: "Total number of workspace build enqueue attempts.",
}, []string{"provisioner_type", "build_reason", "transition", "status"}),
}
if reg != nil {
if err := reg.Register(m.workspaceBuildsEnqueued); err != nil {
return nil, err
}
}
return m, nil
}
// RecordBuildEnqueued records a workspace build enqueue attempt. It determines
// the status based on whether an error occurred and increments the counter.
func (m *Metrics) RecordBuildEnqueued(provisionerType, buildReason, transition string, err error) {
status := BuildStatusSuccess
if err != nil {
status = BuildStatusFailed
}
m.workspaceBuildsEnqueued.WithLabelValues(provisionerType, buildReason, transition, status).Inc()
}
-31
@@ -90,8 +90,6 @@ type Builder struct {
prebuiltWorkspaceBuildStage sdkproto.PrebuiltWorkspaceBuildStage
verifyNoLegacyParametersOnce bool
buildMetrics *Metrics
}
type UsageChecker interface {
@@ -255,12 +253,6 @@ func (b Builder) TemplateVersionPresetID(id uuid.UUID) Builder {
return b
}
func (b Builder) BuildMetrics(m *Metrics) Builder {
// nolint: revive
b.buildMetrics = m
return b
}
type BuildError struct {
// Status is a suitable HTTP status code
Status int
@@ -321,34 +313,11 @@ func (b *Builder) Build(
return err
})
if err != nil {
b.recordBuildMetrics(provisionerJob, err)
return nil, nil, nil, xerrors.Errorf("build tx: %w", err)
}
b.recordBuildMetrics(provisionerJob, nil)
return workspaceBuild, provisionerJob, provisionerDaemons, nil
}
// recordBuildMetrics records the workspace build enqueue metric if metrics are
// configured. It determines the appropriate build reason label, using "prebuild"
// for prebuild operations instead of the database reason.
func (b *Builder) recordBuildMetrics(job *database.ProvisionerJob, err error) {
if b.buildMetrics == nil {
return
}
if job == nil || !job.Provisioner.Valid() {
return
}
// Determine the build reason for metrics. Prebuilds use BuildReasonInitiator
// in the database but we want to track them separately in metrics.
buildReason := string(b.reason)
if b.prebuiltWorkspaceBuildStage == sdkproto.PrebuiltWorkspaceBuildStage_CREATE {
buildReason = provisionerdserver.BuildReasonPrebuild
}
b.buildMetrics.RecordBuildEnqueued(string(job.Provisioner), buildReason, string(b.trans), err)
}
// buildTx contains the business logic of computing a new build. Attributes of the new database objects are computed
// in a functional style, rather than imperative, to emphasize the logic of how they are defined. A simple cache
// of database-fetched objects is stored on the struct to ensure we only fetch things once, even if they are used in
-23
@@ -354,29 +354,6 @@ func (c *Client) PauseTask(ctx context.Context, user string, id uuid.UUID) (Paus
return resp, nil
}
// ResumeTaskResponse represents the response from resuming a task.
type ResumeTaskResponse struct {
WorkspaceBuild *WorkspaceBuild `json:"workspace_build"`
}
func (c *Client) ResumeTask(ctx context.Context, user string, id uuid.UUID) (ResumeTaskResponse, error) {
res, err := c.Request(ctx, http.MethodPost, fmt.Sprintf("/api/experimental/tasks/%s/%s/resume", user, id.String()), nil)
if err != nil {
return ResumeTaskResponse{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
return ResumeTaskResponse{}, ReadBodyAsError(res)
}
var resp ResumeTaskResponse
if err := json.NewDecoder(res.Body).Decode(&resp); err != nil {
return ResumeTaskResponse{}, err
}
return resp, nil
}
// TaskLogType indicates the source of a task log entry.
type TaskLogType string
+48 -48
@@ -1431,7 +1431,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
}
emailHello := serpent.Option{
Name: "Email: Hello",
Description: "The hostname identifying the SMTP server.",
Description: "The hostname identifying this client to the SMTP server.",
Flag: "email-hello",
Env: "CODER_EMAIL_HELLO",
Default: "localhost",
@@ -1523,7 +1523,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
}
emailTLSCertFile := serpent.Option{
Name: "Email TLS: Certificate File",
Description: "Certificate file to use.",
Description: "Client certificate file for mutual TLS authentication.",
Flag: "email-tls-cert-file",
Env: "CODER_EMAIL_TLS_CERTFILE",
Value: &c.Notifications.SMTP.TLS.CertFile,
@@ -1532,7 +1532,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
}
emailTLSCertKeyFile := serpent.Option{
Name: "Email TLS: Certificate Key File",
Description: "Certificate key file to use.",
Description: "Private key file for the client certificate.",
Flag: "email-tls-cert-key-file",
Env: "CODER_EMAIL_TLS_CERTKEYFILE",
Value: &c.Notifications.SMTP.TLS.KeyFile,
@@ -1551,7 +1551,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
}
workspaceHostnameSuffix := serpent.Option{
Name: "Workspace Hostname Suffix",
Description: "Workspace hostnames use this suffix in SSH config and Coder Connect on Coder Desktop. By default it is coder, resulting in names like myworkspace.coder.",
Description: "Workspace hostnames use this suffix for SSH connections and Coder Connect. By default it is coder, resulting in hostnames like agent.workspace.owner.coder.",
Flag: "workspace-hostname-suffix",
Env: "CODER_WORKSPACE_HOSTNAME_SUFFIX",
YAML: "workspaceHostnameSuffix",
@@ -1680,7 +1680,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "TLS Client CA Files",
Description: "PEM-encoded Certificate Authority file used for checking the authenticity of client.",
Description: "PEM-encoded Certificate Authority file used for checking the authenticity of the client.",
Flag: "tls-client-ca-file",
Env: "CODER_TLS_CLIENT_CA_FILE",
Value: &c.TLS.ClientCAFile,
@@ -1742,7 +1742,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "TLS Ciphers",
Description: "Specify specific TLS ciphers that allowed to be used. See https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L53-L75.",
Description: "Specify specific TLS ciphers that are allowed to be used. See https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L53-L75.",
Flag: "tls-ciphers",
Env: "CODER_TLS_CIPHERS",
Default: "",
@@ -1800,7 +1800,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "DERP Server Region Name",
Description: "Region name that for the embedded DERP server.",
Description: "Region name to use for the embedded DERP server.",
Flag: "derp-server-region-name",
Env: "CODER_DERP_SERVER_REGION_NAME",
Default: "Coder Embedded Relay",
@@ -1811,7 +1811,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "DERP Server STUN Addresses",
Description: "Addresses for STUN servers to establish P2P connections. It's recommended to have at least two STUN servers to give users the best chance of connecting P2P to workspaces. Each STUN server will get it's own DERP region, with region IDs starting at `--derp-server-region-id + 1`. Use special value 'disable' to turn off STUN completely.",
Description: "Addresses for STUN servers to establish P2P connections. It's recommended to have at least two STUN servers to give users the best chance of connecting P2P to workspaces. Each STUN server will get its own DERP region, with region IDs starting at `--derp-server-region-id + 1`. Use special value 'disable' to turn off STUN completely.",
Flag: "derp-server-stun-addresses",
Env: "CODER_DERP_SERVER_STUN_ADDRESSES",
Default: "stun.l.google.com:19302,stun1.l.google.com:19302,stun2.l.google.com:19302,stun3.l.google.com:19302,stun4.l.google.com:19302",
@@ -1833,7 +1833,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "Block Direct Connections",
Description: "Block peer-to-peer (aka. direct) workspace connections. All workspace connections from the CLI will be proxied through Coder (or custom configured DERP servers) and will never be peer-to-peer when enabled. Workspaces may still reach out to STUN servers to get their address until they are restarted after this change has been made, but new connections will still be proxied regardless.",
Description: "Block peer-to-peer (aka. direct) workspace connections. All workspace connections from the CLI will be proxied through Coder (or custom configured DERP servers) and will never be peer-to-peer when enabled. Workspace agents may still reach out to STUN servers to discover their address until they are restarted, but all new connections will be proxied regardless.",
// This cannot be called `disable-direct-connections` because that's
// already a global CLI flag for CLI connections. This is a
// deployment-wide flag.
@@ -1884,7 +1884,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
// Prometheus settings
{
Name: "Prometheus Enable",
Description: "Serve prometheus metrics on the address defined by prometheus address.",
Description: "Serve Prometheus metrics on the address defined by prometheus address.",
Flag: "prometheus-enable",
Env: "CODER_PROMETHEUS_ENABLE",
Value: &c.Prometheus.Enable,
@@ -1894,7 +1894,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "Prometheus Address",
Description: "The bind address to serve prometheus metrics.",
Description: "The bind address to serve Prometheus metrics.",
Flag: "prometheus-address",
Env: "CODER_PROMETHEUS_ADDRESS",
Default: "127.0.0.1:2112",
@@ -1945,7 +1945,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
// Pprof settings
{
Name: "pprof Enable",
Description: "Serve pprof metrics on the address defined by pprof address.",
Description: "Serve pprof profiling endpoints on the address defined by pprof address.",
Flag: "pprof-enable",
Env: "CODER_PPROF_ENABLE",
Value: &c.Pprof.Enable,
@@ -2032,7 +2032,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OAuth2 GitHub Allow Everyone",
Description: "Allow all logins, setting this option means allowed orgs and teams must be empty.",
Description: "Allow all GitHub users to authenticate. When enabled, allowed orgs and teams must be empty.",
Flag: "oauth2-github-allow-everyone",
Env: "CODER_OAUTH2_GITHUB_ALLOW_EVERYONE",
Value: &c.OAuth2.Github.AllowEveryone,
@@ -2079,8 +2079,8 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OIDC Client Key File",
Description: "Pem encoded RSA private key to use for oauth2 PKI/JWT authorization. " +
"This can be used instead of oidc-client-secret if your IDP supports it.",
Description: "PEM encoded RSA private key to use for OAuth2 PKI/JWT authorization. " +
"This can be used instead of oidc-client-secret if your IdP supports it.",
Flag: "oidc-client-key-file",
Env: "CODER_OIDC_CLIENT_KEY_FILE",
YAML: "oidcClientKeyFile",
@@ -2089,8 +2089,8 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OIDC Client Cert File",
Description: "Pem encoded certificate file to use for oauth2 PKI/JWT authorization. " +
"The public certificate that accompanies oidc-client-key-file. A standard x509 certificate is expected.",
Description: "PEM encoded certificate file to use for OAuth2 PKI/JWT authorization. " +
"The public certificate that accompanies oidc-client-key-file. A standard X.509 certificate is expected.",
Flag: "oidc-client-cert-file",
Env: "CODER_OIDC_CLIENT_CERT_FILE",
YAML: "oidcClientCertFile",
@@ -2242,7 +2242,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OIDC Group Field",
Description: "This field must be set if using the group sync feature and the scope name is not 'groups'. Set to the claim to be used for groups.",
Description: "OIDC claim field to use as the user's groups. This field must be set if using the group sync feature and the scope name is not 'groups'.",
Flag: "oidc-group-field",
Env: "CODER_OIDC_GROUP_FIELD",
// This value is intentionally blank. If this is empty, then OIDC group
@@ -2257,7 +2257,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OIDC Group Mapping",
Description: "A map of OIDC group IDs and the group in Coder it should map to. This is useful for when OIDC providers only return group IDs.",
Description: "A map of OIDC group IDs and the groups in Coder they should map to. This is useful when OIDC providers only return group IDs.",
Flag: "oidc-group-mapping",
Env: "CODER_OIDC_GROUP_MAPPING",
Default: "{}",
@@ -2277,7 +2277,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OIDC Regex Group Filter",
Description: "If provided any group name not matching the regex is ignored. This allows for filtering out groups that are not needed. This filter is applied after the group mapping.",
Description: "If provided, any group name not matching the regex is ignored. This allows filtering out groups that are not needed. This filter is applied after the oidc-group-mapping.",
Flag: "oidc-group-regex-filter",
Env: "CODER_OIDC_GROUP_REGEX_FILTER",
Default: ".*",
@@ -2287,7 +2287,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OIDC Allowed Groups",
Description: "If provided any group name not in the list will not be allowed to authenticate. This allows for restricting access to a specific set of groups. This filter is applied after the group mapping and before the regex filter.",
Description: "If provided, only users with at least one group in this list will be allowed to authenticate. This restricts access to a specific set of groups. This check is applied before any group mapping or filtering.",
Flag: "oidc-allowed-groups",
Env: "CODER_OIDC_ALLOWED_GROUPS",
Default: "",
@@ -2309,7 +2309,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OIDC User Role Mapping",
Description: "A map of the OIDC passed in user roles and the groups in Coder it should map to. This is useful if the group names do not match. If mapped to the empty string, the role will ignored.",
Description: "A map of OIDC user role names to Coder role names. This is useful if the role names do not match between systems. If mapped to the empty string, the role will be ignored.",
Flag: "oidc-user-role-mapping",
Env: "CODER_OIDC_USER_ROLE_MAPPING",
Default: "{}",
@@ -2319,7 +2319,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OIDC User Role Default",
Description: "If user role sync is enabled, these roles are always included for all authenticated users. The 'member' role is always assigned.",
Description: "If user role sync is enabled, these roles are always included for all authenticated users in addition to synced roles. The 'member' role is always assigned regardless of this setting.",
Flag: "oidc-user-role-default",
Env: "CODER_OIDC_USER_ROLE_DEFAULT",
Default: "",
@@ -2339,7 +2339,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "OpenID connect icon URL",
Description: "URL pointing to the icon to use on the OpenID Connect login button.",
Description: "URL of the icon to use on the OpenID Connect login button.",
Flag: "oidc-icon-url",
Env: "CODER_OIDC_ICON_URL",
Value: &c.OIDC.IconURL,
@@ -2348,7 +2348,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "Signups disabled text",
Description: "The custom text to show on the error page informing about disabled OIDC signups. Markdown format is supported.",
Description: "Custom text to show on the error page when OIDC signups are disabled. Markdown format is supported.",
Flag: "oidc-signups-disabled-text",
Env: "CODER_OIDC_SIGNUPS_DISABLED_TEXT",
Value: &c.OIDC.SignupsDisabledText,
@@ -2807,7 +2807,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
},
{
Name: "SameSite Auth Cookie",
Description: "Controls the 'SameSite' property is set on browser session cookies.",
Description: "Controls how the 'SameSite' property is set on browser session cookies.",
Flag: "samesite-auth-cookie",
Env: "CODER_SAMESITE_AUTH_COOKIE",
// Do not allow "strict" same-site cookies. That would potentially break workspace apps.
@@ -3000,7 +3000,7 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
{
Name: "SSH Config Options",
Description: "These SSH config options will override the default SSH config options. " +
"Provide options in \"key=value\" or \"key value\" format separated by commas." +
"Provide options in \"key=value\" or \"key value\" format separated by commas. " +
"Using this incorrectly can break SSH to your deployment, use cautiously.",
Flag: "ssh-config-options",
Env: "CODER_SSH_CONFIG_OPTIONS",
@@ -3041,7 +3041,7 @@ Write out the current server config as YAML to stdout.`,
{
// Env handling is done in cli.ReadGitAuthFromEnvironment
Name: "External Auth Providers",
Description: "External Authentication providers.",
Description: "Configure external authentication providers for Git and other services.",
YAML: "externalAuthProviders",
Flag: "external-auth-providers",
Value: &c.ExternalAuthConfigs,
@@ -3059,7 +3059,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "Proxy Health Check Interval",
Description: "The interval in which coderd should be checking the status of workspace proxies.",
Description: "The interval at which coderd checks the status of workspace proxies.",
Flag: "proxy-health-interval",
Env: "CODER_PROXY_HEALTH_INTERVAL",
Default: (time.Minute).String(),
@@ -3080,7 +3080,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "Allow Custom Quiet Hours",
Description: "Allow users to set their own quiet hours schedule for workspaces to stop in (depending on template autostop requirement settings). If false, users can't change their quiet hours schedule and the site default is always used.",
Description: "Allow users to set their own quiet hours schedule for when workspaces are stopped (depending on template autostop requirement settings). If false, users can't change their quiet hours schedule and the site default is always used.",
Flag: "allow-custom-quiet-hours",
Env: "CODER_ALLOW_CUSTOM_QUIET_HOURS",
Default: "true",
@@ -3192,7 +3192,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "Notifications: Email: Hello",
Description: "The hostname identifying the SMTP server.",
Description: "The hostname identifying this client to the SMTP server.",
Flag: "notifications-email-hello",
Env: "CODER_NOTIFICATIONS_EMAIL_HELLO",
Value: &c.Notifications.SMTP.Hello,
@@ -3355,7 +3355,7 @@ Write out the current server config as YAML to stdout.`,
Name: "Notifications: Store Sync Interval",
Description: "The notifications system buffers message updates in memory to ease pressure on the database. " +
"This option controls how often it synchronizes its state with the database. The shorter this value the " +
"lower the change of state inconsistency in a non-graceful shutdown - but it also increases load on the " +
"lower the chance of state inconsistency in a non-graceful shutdown - but it also increases load on the " +
"database. It is recommended to keep this option at its default value.",
Flag: "notifications-store-sync-interval",
Env: "CODER_NOTIFICATIONS_STORE_SYNC_INTERVAL",
@@ -3370,7 +3370,7 @@ Write out the current server config as YAML to stdout.`,
Name: "Notifications: Store Sync Buffer Size",
Description: "The notifications system buffers message updates in memory to ease pressure on the database. " +
"This option controls how many updates are kept in memory. The lower this value the " +
"lower the change of state inconsistency in a non-graceful shutdown - but it also increases load on the " +
"lower the chance of state inconsistency in a non-graceful shutdown - but it also increases load on the " +
"database. It is recommended to keep this option at its default value.",
Flag: "notifications-store-sync-buffer-size",
Env: "CODER_NOTIFICATIONS_STORE_SYNC_BUFFER_SIZE",
@@ -3434,7 +3434,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "Reconciliation Backoff Interval",
Description: "Interval to increase reconciliation backoff by when prebuilds fail, after which a retry attempt is made.",
Description: "Amount of time to add to the reconciliation backoff delay after each prebuild failure, before the next retry attempt is made.",
Flag: "workspace-prebuilds-reconciliation-backoff-interval",
Env: "CODER_WORKSPACE_PREBUILDS_RECONCILIATION_BACKOFF_INTERVAL",
Value: &c.Prebuilds.ReconciliationBackoffInterval,
@@ -3446,7 +3446,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "Reconciliation Backoff Lookback Period",
Description: "Interval to look back to determine number of failed prebuilds, which influences backoff.",
Description: "Time period to look back when counting failed prebuilds to calculate the backoff delay.",
Flag: "workspace-prebuilds-reconciliation-backoff-lookback-period",
Env: "CODER_WORKSPACE_PREBUILDS_RECONCILIATION_BACKOFF_LOOKBACK_PERIOD",
Value: &c.Prebuilds.ReconciliationBackoffLookback,
@@ -3458,7 +3458,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "Failure Hard Limit",
Description: "Maximum number of consecutive failed prebuilds before a preset hits the hard limit; disabled when set to zero.",
Description: "Maximum number of consecutive failed prebuilds before a preset is considered hard-limited and stops automatic prebuild creation. Disabled when set to zero.",
Flag: "workspace-prebuilds-failure-hard-limit",
Env: "CODER_WORKSPACE_PREBUILDS_FAILURE_HARD_LIMIT",
Value: &c.Prebuilds.FailureHardLimit,
@@ -3481,7 +3481,7 @@ Write out the current server config as YAML to stdout.`,
// AI Bridge Options
{
Name: "AI Bridge Enabled",
Description: "Whether to start an in-memory aibridged instance.",
Description: "Enable the embedded AI Bridge service to intercept and record AI provider requests.",
Flag: "aibridge-enabled",
Env: "CODER_AIBRIDGE_ENABLED",
Value: &c.AI.BridgeConfig.Enabled,
@@ -3501,7 +3501,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge OpenAI Key",
Description: "The key to authenticate against the OpenAI API.",
Description: "API key for authenticating with the OpenAI API.",
Flag: "aibridge-openai-key",
Env: "CODER_AIBRIDGE_OPENAI_KEY",
Value: &c.AI.BridgeConfig.OpenAI.Key,
@@ -3521,7 +3521,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge Anthropic Key",
Description: "The key to authenticate against the Anthropic API.",
Description: "API key for authenticating with the Anthropic API.",
Flag: "aibridge-anthropic-key",
Env: "CODER_AIBRIDGE_ANTHROPIC_KEY",
Value: &c.AI.BridgeConfig.Anthropic.Key,
@@ -3553,7 +3553,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge Bedrock Access Key",
Description: "The access key to authenticate against the AWS Bedrock API.",
Description: "AWS access key for authenticating with the AWS Bedrock API.",
Flag: "aibridge-bedrock-access-key",
Env: "CODER_AIBRIDGE_BEDROCK_ACCESS_KEY",
Value: &c.AI.BridgeConfig.Bedrock.AccessKey,
@@ -3563,7 +3563,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge Bedrock Access Key Secret",
Description: "The access key secret to use with the access key to authenticate against the AWS Bedrock API.",
Description: "AWS secret access key for authenticating with the AWS Bedrock API.",
Flag: "aibridge-bedrock-access-key-secret",
Env: "CODER_AIBRIDGE_BEDROCK_ACCESS_KEY_SECRET",
Value: &c.AI.BridgeConfig.Bedrock.AccessKeySecret,
@@ -3593,7 +3593,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge Inject Coder MCP tools",
Description: "Whether to inject Coder's MCP tools into intercepted AI Bridge requests (requires the \"oauth2\" and \"mcp-server-http\" experiments to be enabled).",
Description: "Enable injection of Coder's MCP tools into intercepted AI Bridge requests. Requires the 'oauth2' and 'mcp-server-http' experiments.",
Flag: "aibridge-inject-coder-mcp-tools",
Env: "CODER_AIBRIDGE_INJECT_CODER_MCP_TOOLS",
Value: &c.AI.BridgeConfig.InjectCoderMCPTools,
@@ -3603,7 +3603,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge Data Retention Duration",
Description: "Length of time to retain data such as interceptions and all related records (token, prompt, tool use).",
Description: "How long to retain AI Bridge data including interceptions, tokens, prompts, and tool usage records.",
Flag: "aibridge-retention",
Env: "CODER_AIBRIDGE_RETENTION",
Value: &c.AI.BridgeConfig.Retention,
@@ -3656,7 +3656,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge Circuit Breaker Enabled",
Description: "Enable the circuit breaker to protect against cascading failures from upstream AI provider rate limits (429, 503, 529 overloaded).",
Description: "Enable the circuit breaker to protect against cascading failures from upstream AI provider rate limits and overload errors (HTTP 429, 503, 529).",
Flag: "aibridge-circuit-breaker-enabled",
Env: "CODER_AIBRIDGE_CIRCUIT_BREAKER_ENABLED",
Value: &c.AI.BridgeConfig.CircuitBreakerEnabled,
@@ -3666,7 +3666,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge Circuit Breaker Failure Threshold",
Description: "Number of consecutive failures that triggers the circuit breaker to open.",
Description: "Number of consecutive failures that trigger the circuit breaker to open.",
Flag: "aibridge-circuit-breaker-failure-threshold",
Env: "CODER_AIBRIDGE_CIRCUIT_BREAKER_FAILURE_THRESHOLD",
Value: serpent.Validate(&c.AI.BridgeConfig.CircuitBreakerFailureThreshold, func(value *serpent.Int64) error {
@@ -3682,7 +3682,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "AI Bridge Circuit Breaker Interval",
Description: "Cyclic period of the closed state for clearing internal failure counts.",
Description: "Time window for counting failures before resetting the failure count in the closed state.",
Flag: "aibridge-circuit-breaker-interval",
Env: "CODER_AIBRIDGE_CIRCUIT_BREAKER_INTERVAL",
Value: &c.AI.BridgeConfig.CircuitBreakerInterval,
@@ -3830,7 +3830,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "Workspace Agent Logs Retention",
Description: "How long workspace agent logs are retained. Logs from non-latest builds are deleted if the agent hasn't connected within this period. Logs from the latest build are always retained. Set to 0 to disable automatic deletion.",
Description: "How long workspace agent logs are retained. Logs from non-latest builds are deleted if the agent hasn't connected within this period. Logs from the latest build for each workspace are always retained. Set to 0 to disable automatic deletion.",
Flag: "workspace-agent-logs-retention",
Env: "CODER_WORKSPACE_AGENT_LOGS_RETENTION",
Value: &c.Retention.WorkspaceAgentLogs,
@@ -3841,7 +3841,7 @@ Write out the current server config as YAML to stdout.`,
},
{
Name: "Enable Authorization Recordings",
Description: "All api requests will have a header including all authorization calls made during the request. " +
Description: "All API requests will have a header including all authorization calls made during the request. " +
"This is used for debugging purposes and only available for dev builds.",
Required: false,
Flag: "enable-authz-recordings",
@@ -110,7 +110,6 @@ const (
CreateWorkspaceBuildReasonVSCodeConnection CreateWorkspaceBuildReason = "vscode_connection"
CreateWorkspaceBuildReasonJetbrainsConnection CreateWorkspaceBuildReason = "jetbrains_connection"
CreateWorkspaceBuildReasonTaskManualPause CreateWorkspaceBuildReason = "task_manual_pause"
CreateWorkspaceBuildReasonTaskResume CreateWorkspaceBuildReason = "task_resume"
)
// CreateWorkspaceBuildRequest provides options to update the latest workspace build.
@@ -104,170 +104,105 @@ deployment. They will always be available from the agent.
<!-- Code generated by 'make docs/admin/integrations/prometheus.md'. DO NOT EDIT -->
| Name | Type | Description | Labels |
|-------------------------------------------------------------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| `agent_scripts_executed_total` | counter | Total number of scripts executed by the Coder agent. Includes cron scheduled scripts. | `agent_name` `success` `template_name` `username` `workspace_name` |
| `coder_aibridged_circuit_breaker_rejects_total` | counter | Total number of requests rejected due to open circuit breaker. | `endpoint` `model` `provider` |
| `coder_aibridged_circuit_breaker_state` | gauge | Current state of the circuit breaker (0=closed, 0.5=half-open, 1=open). | `endpoint` `model` `provider` |
| `coder_aibridged_circuit_breaker_trips_total` | counter | Total number of times the circuit breaker transitioned to open state. | `endpoint` `model` `provider` |
| `coder_aibridged_injected_tool_invocations_total` | counter | The number of times an injected MCP tool was invoked by aibridge. | `model` `name` `provider` `server` |
| `coder_aibridged_interceptions_duration_seconds` | histogram | The total duration of intercepted requests, in seconds. The majority of this time will be the upstream processing of the request. aibridge has no control over upstream processing time, so it's just an illustrative metric. | `model` `provider` |
| `coder_aibridged_interceptions_inflight` | gauge | The number of intercepted requests which are being processed. | `model` `provider` `route` |
| `coder_aibridged_interceptions_total` | counter | The count of intercepted requests. | `initiator_id` `method` `model` `provider` `route` `status` |
| `coder_aibridged_non_injected_tool_selections_total` | counter | The number of times an AI model selected a tool to be invoked by the client. | `model` `name` `provider` |
| `coder_aibridged_passthrough_total` | counter | The count of requests which were not intercepted but passed through to the upstream. | `method` `provider` `route` |
| `coder_aibridged_prompts_total` | counter | The number of prompts issued by users (initiators). | `initiator_id` `model` `provider` |
| `coder_aibridged_tokens_total` | counter | The number of tokens used by intercepted requests. | `initiator_id` `model` `provider` `type` |
| `coder_aibridgeproxyd_connect_sessions_total` | counter | Total number of CONNECT sessions established. | `type` |
| `coder_aibridgeproxyd_inflight_mitm_requests` | gauge | Number of MITM requests currently being processed. | `provider` |
| `coder_aibridgeproxyd_mitm_requests_total` | counter | Total number of MITM requests handled by the proxy. | `provider` |
| `coder_aibridgeproxyd_mitm_responses_total` | counter | Total number of MITM responses by HTTP status code class. | `code` `provider` |
| `coder_pubsub_connected` | gauge | Whether we are connected (1) or not connected (0) to postgres | |
| `coder_pubsub_current_events` | gauge | The current number of pubsub event channels listened for | |
| `coder_pubsub_current_subscribers` | gauge | The current number of active pubsub subscribers | |
| `coder_pubsub_disconnections_total` | counter | Total number of times we disconnected unexpectedly from postgres | |
| `coder_pubsub_latency_measure_errs_total` | counter | The number of pubsub latency measurement failures | |
| `coder_pubsub_latency_measures_total` | counter | The number of pubsub latency measurements | |
| `coder_pubsub_messages_total` | counter | Total number of messages received from postgres | `size` |
| `coder_pubsub_published_bytes_total` | counter | Total number of bytes successfully published across all publishes | |
| `coder_pubsub_publishes_total` | counter | Total number of calls to Publish | `success` |
| `coder_pubsub_receive_latency_seconds` | gauge | The time taken to receive a message from a pubsub event channel | |
| `coder_pubsub_received_bytes_total` | counter | Total number of bytes received across all messages | |
| `coder_pubsub_send_latency_seconds` | gauge | The time taken to send a message into a pubsub event channel | |
| `coder_pubsub_subscribes_total` | counter | Total number of calls to Subscribe/SubscribeWithErr | `success` |
| `coder_servertailnet_connections_total` | counter | Total number of TCP connections made to workspace agents. | `network` |
| `coder_servertailnet_open_connections` | gauge | Total number of TCP connections currently open to workspace agents. | `network` |
| `coderd_agentapi_metadata_batch_size` | histogram | Total number of metadata entries in each batch, updated before flushes. | |
| `coderd_agentapi_metadata_batch_utilization` | histogram | Number of metadata keys per agent in each batch, updated before flushes. | |
| `coderd_agentapi_metadata_batches_total` | counter | Total number of metadata batches flushed. | `reason` |
| `coderd_agentapi_metadata_dropped_keys_total` | counter | Total number of metadata keys dropped due to capacity limits. | |
| `coderd_agentapi_metadata_flush_duration_seconds` | histogram | Time taken to flush metadata batch to database and pubsub. | `reason` |
| `coderd_agentapi_metadata_flushed_total` | counter | Total number of unique metadatas flushed. | |
| `coderd_agentapi_metadata_publish_errors_total` | counter | Total number of metadata batch pubsub publish calls that have resulted in an error. | |
| `coderd_agents_apps` | gauge | Agent applications with statuses. | `agent_name` `app_name` `health` `username` `workspace_name` |
| `coderd_agents_connection_latencies_seconds` | gauge | Agent connection latencies in seconds. | `agent_name` `derp_region` `preferred` `username` `workspace_name` |
| `coderd_agents_connections` | gauge | Agent connections with statuses. | `agent_name` `lifecycle_state` `status` `tailnet_node` `username` `workspace_name` |
| `coderd_agents_up` | gauge | The number of active agents per workspace. | `template_name` `template_version` `username` `workspace_name` |
| `coderd_agentstats_connection_count` | gauge | The number of established connections by agent | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_connection_median_latency_seconds` | gauge | The median agent connection latency | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_currently_reachable_peers` | gauge | The number of peers (e.g. clients) that are currently reachable over the encrypted network. | `agent_name` `connection_type` `template_name` `username` `workspace_name` |
| `coderd_agentstats_rx_bytes` | gauge | Agent Rx bytes | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_session_count_jetbrains`                             | gauge     | The number of sessions established by JetBrains | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_session_count_reconnecting_pty`                      | gauge     | The number of sessions established by reconnecting PTY | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_session_count_ssh`                                   | gauge     | The number of sessions established by SSH | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_session_count_vscode`                                | gauge     | The number of sessions established by VSCode | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_startup_script_seconds` | gauge | The number of seconds the startup script took to execute. | `agent_name` `success` `template_name` `username` `workspace_name` |
| `coderd_agentstats_tx_bytes` | gauge | Agent Tx bytes | `agent_name` `username` `workspace_name` |
| `coderd_api_active_users_duration_hour` | gauge | The number of users that have been active within the last hour. | |
| `coderd_api_concurrent_requests` | gauge | The number of concurrent API requests. | `method` `path` |
| `coderd_api_concurrent_websockets` | gauge | The total number of concurrent API websockets. | `path` |
| `coderd_api_request_latencies_seconds` | histogram | Latency distribution of requests in seconds. | `method` `path` |
| `coderd_api_requests_processed_total` | counter | The total number of processed API requests | `code` `method` `path` |
| `coderd_api_total_user_count` | gauge | The total number of registered users, partitioned by status. | `status` |
| `coderd_api_websocket_durations_seconds` | histogram | Websocket duration distribution of requests in seconds. | `path` |
| `coderd_api_workspace_latest_build` | gauge | The current number of workspace builds by status for all non-deleted workspaces. | `status` |
| `coderd_authz_authorize_duration_seconds` | histogram | Duration of the 'Authorize' call in seconds. Only counts calls that succeed. | `allowed` |
| `coderd_authz_prepare_authorize_duration_seconds` | histogram | Duration of the 'PrepareAuthorize' call in seconds. | |
| `coderd_db_query_counts_total` | counter | Total number of queries labelled by HTTP route, method, and query name. | `method` `query` `route` |
| `coderd_db_query_latencies_seconds` | histogram | Latency distribution of queries in seconds. | `query` |
| `coderd_db_tx_duration_seconds` | histogram | Duration of transactions in seconds. | `success` `tx_id` |
| `coderd_db_tx_executions_count` | counter | Total count of transactions executed. 'retries' is expected to be 0 for a successful transaction. | `retries` `success` `tx_id` |
| `coderd_dbpurge_iteration_duration_seconds` | histogram | Duration of each dbpurge iteration in seconds. | `success` |
| `coderd_dbpurge_records_purged_total` | counter | Total number of records purged by type. | `record_type` |
| `coderd_experiments` | gauge | Indicates whether each experiment is enabled (1) or not (0) | `experiment` |
| `coderd_insights_applications_usage_seconds` | gauge | The application usage per template. | `application_name` `slug` `template_name` |
| `coderd_insights_parameters` | gauge | The parameter usage per template. | `parameter_name` `parameter_type` `parameter_value` `template_name` |
| `coderd_insights_templates_active_users` | gauge | The number of active users of the template. | `template_name` |
| `coderd_license_active_users` | gauge | The number of active users. | |
| `coderd_license_errors` | gauge | The number of active license errors. | |
| `coderd_license_limit_users` | gauge | The user seats limit based on the active Coder license. | |
| `coderd_license_user_limit_enabled` | gauge | Returns 1 if the current license enforces the user limit. | |
| `coderd_license_warnings` | gauge | The number of active license warnings. | |
| `coderd_lifecycle_autobuild_execution_duration_seconds` | histogram | Duration of each autobuild execution. | |
| `coderd_notifications_dispatcher_send_seconds` | histogram | The time taken to dispatch notifications. | `method` |
| `coderd_notifications_inflight_dispatches` | gauge | The number of dispatch attempts which are currently in progress. | `method` `notification_template_id` |
| `coderd_notifications_pending_updates` | gauge | The number of dispatch attempt results waiting to be flushed to the store. | |
| `coderd_notifications_queued_seconds` | histogram | The time elapsed between a notification being enqueued in the store and retrieved for dispatching (measures the latency of the notifications system). This should generally be within CODER_NOTIFICATIONS_FETCH_INTERVAL seconds; higher values for a sustained period indicates delayed processing and CODER_NOTIFICATIONS_LEASE_COUNT can be increased to accommodate this. | `method` |
| `coderd_notifications_retry_count` | counter | The count of notification dispatch retry attempts. | `method` `notification_template_id` |
| `coderd_notifications_synced_updates_total` | counter | The number of dispatch attempt results flushed to the store. | |
| `coderd_oauth2_external_requests_rate_limit` | gauge | The total number of allowed requests per interval. | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_next_reset_unix` | gauge | Unix timestamp for when the next interval starts | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_remaining` | gauge | The remaining number of allowed requests in this interval. | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_reset_in_seconds` | gauge | Seconds until the next interval | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_used` | gauge | The number of requests made in this interval. | `name` `resource` |
| `coderd_oauth2_external_requests_total` | counter | The total number of api calls made to external oauth2 providers. 'status_code' will be 0 if the request failed with no response. | `name` `source` `status_code` |
| `coderd_open_file_refs_current` | gauge | The count of file references currently open in the file cache. Multiple references can be held for the same file. | |
| `coderd_open_file_refs_total` | counter | The total number of file references ever opened in the file cache. The 'hit' label indicates if the file was loaded from the cache. | `hit` |
| `coderd_open_files_current` | gauge | The count of unique files currently open in the file cache. | |
| `coderd_open_files_size_bytes_current` | gauge | The current amount of memory of all files currently open in the file cache. | |
| `coderd_open_files_size_bytes_total` | counter | The total amount of memory ever opened in the file cache. This number never decrements. | |
| `coderd_open_files_total` | counter | The total count of unique files ever opened in the file cache. | |
| `coderd_prebuilds_reconciliation_duration_seconds` | histogram | Duration of each prebuilds reconciliation cycle. | |
| `coderd_prebuilt_workspace_claim_duration_seconds` | histogram | Time to claim a prebuilt workspace by organization, template, and preset. | `organization_name` `preset_name` `template_name` |
| `coderd_prebuilt_workspaces_claimed_total` | counter | Total number of prebuilt workspaces which were claimed by users. Claiming refers to creating a workspace with a preset selected for which eligible prebuilt workspaces are available and one is reassigned to a user. | `organization_name` `preset_name` `template_name` |
| `coderd_prebuilt_workspaces_created_total` | counter | Total number of prebuilt workspaces that have been created to meet the desired instance count of each template preset. | `organization_name` `preset_name` `template_name` |
| `coderd_prebuilt_workspaces_desired` | gauge | Target number of prebuilt workspaces that should be available for each template preset. | `organization_name` `preset_name` `template_name` |
| `coderd_prebuilt_workspaces_eligible` | gauge | Current number of prebuilt workspaces that are eligible to be claimed by users. These are workspaces that have completed their build process with their agent reporting 'ready' status. | `organization_name` `preset_name` `template_name` |
| `coderd_prebuilt_workspaces_failed_total` | counter | Total number of prebuilt workspaces that failed to build. | `organization_name` `preset_name` `template_name` |
| `coderd_prebuilt_workspaces_metrics_last_updated` | gauge | The unix timestamp when the metrics related to prebuilt workspaces were last updated; these metrics are cached. | |
| `coderd_prebuilt_workspaces_preset_hard_limited` | gauge | Indicates whether a given preset has reached the hard failure limit (1 = hard-limited). Metric is omitted otherwise. | `organization_name` `preset_name` `template_name` |
| `coderd_prebuilt_workspaces_reconciliation_paused` | gauge | Indicates whether prebuilds reconciliation is currently paused (1 = paused, 0 = not paused). | |
| `coderd_prebuilt_workspaces_resource_replacements_total` | counter | Total number of prebuilt workspaces whose resource(s) got replaced upon being claimed. In Terraform, drift on immutable attributes results in resource replacement. This represents a worst-case scenario for prebuilt workspaces because the pre-provisioned resource would have been recreated when claiming, thus obviating the point of pre-provisioning. See https://coder.com/docs/admin/templates/extending-templates/prebuilt-workspaces#preventing-resource-replacement | `organization_name` `preset_name` `template_name` |
| `coderd_prebuilt_workspaces_running` | gauge | Current number of prebuilt workspaces that are in a running state. These workspaces have started successfully but may not yet be claimable by users (see coderd_prebuilt_workspaces_eligible). | `organization_name` `preset_name` `template_name` |
| `coderd_prometheusmetrics_agents_execution_seconds` | histogram | Histogram for duration of agents metrics collection in seconds. | |
| `coderd_prometheusmetrics_agentstats_execution_seconds` | histogram | Histogram for duration of agent stats metrics collection in seconds. | |
| `coderd_prometheusmetrics_metrics_aggregator_execution_cleanup_seconds` | histogram | Histogram for duration of metrics aggregator cleanup in seconds. | |
| `coderd_prometheusmetrics_metrics_aggregator_execution_update_seconds` | histogram | Histogram for duration of metrics aggregator update in seconds. | |
| `coderd_prometheusmetrics_metrics_aggregator_store_size` | gauge | The number of metrics stored in the aggregator | |
| `coderd_provisioner_job_queue_wait_seconds` | histogram | Time from job creation to acquisition by a provisioner daemon. | `build_reason` `job_type` `provisioner_type` `transition` |
| `coderd_provisionerd_job_timings_seconds` | histogram | The provisioner job time duration in seconds. | `provisioner` `status` |
| `coderd_provisionerd_jobs_current` | gauge | The number of currently running provisioner jobs. | `provisioner` |
| `coderd_provisionerd_num_daemons` | gauge | The number of provisioner daemons. | |
| `coderd_provisionerd_workspace_build_timings_seconds` | histogram | The time taken for a workspace to build. | `status` `template_name` `template_version` `workspace_transition` |
| `coderd_proxyhealth_health_check_duration_seconds` | histogram | Histogram for duration of proxy health collection in seconds. | |
| `coderd_proxyhealth_health_check_results` | gauge | This endpoint returns a number to indicate the health status. -3 (unknown), -2 (Unreachable), -1 (Unhealthy), 0 (Unregistered), 1 (Healthy) | `proxy_id` |
| `coderd_template_workspace_build_duration_seconds` | histogram | Duration from workspace build creation to agent ready, by template. | `is_prebuild` `organization_name` `status` `template_name` `transition` |
| `coderd_workspace_builds_enqueued_total` | counter | Total number of workspace build enqueue attempts. | `build_reason` `provisioner_type` `status` `transition` |
| `coderd_workspace_builds_total` | counter | The number of workspaces started, updated, or deleted. | `status` `template_name` `template_version` `workspace_name` `workspace_owner` `workspace_transition` |
| `coderd_workspace_creation_duration_seconds` | histogram | Time to create a workspace by organization, template, preset, and type (regular or prebuild). | `organization_name` `preset_name` `template_name` `type` |
| `coderd_workspace_creation_total` | counter | Total regular (non-prebuilt) workspace creations by organization, template, and preset. | `organization_name` `preset_name` `template_name` |
| `coderd_workspace_latest_build_status` | gauge | The current workspace statuses by template, transition, and owner for all non-deleted workspaces. | `status` `template_name` `template_version` `workspace_owner` `workspace_transition` |
| `go_gc_duration_seconds` | summary | A summary of the pause duration of garbage collection cycles. | |
| `go_goroutines` | gauge | Number of goroutines that currently exist. | |
| `go_info` | gauge | Information about the Go environment. | `version` |
| `go_memstats_alloc_bytes` | gauge | Number of bytes allocated and still in use. | |
| `go_memstats_alloc_bytes_total` | counter | Total number of bytes allocated, even if freed. | |
| `go_memstats_buck_hash_sys_bytes` | gauge | Number of bytes used by the profiling bucket hash table. | |
| `go_memstats_frees_total` | counter | Total number of frees. | |
| `go_memstats_gc_sys_bytes` | gauge | Number of bytes used for garbage collection system metadata. | |
| `go_memstats_heap_alloc_bytes` | gauge | Number of heap bytes allocated and still in use. | |
| `go_memstats_heap_idle_bytes` | gauge | Number of heap bytes waiting to be used. | |
| `go_memstats_heap_inuse_bytes` | gauge | Number of heap bytes that are in use. | |
| `go_memstats_heap_objects` | gauge | Number of allocated objects. | |
| `go_memstats_heap_released_bytes` | gauge | Number of heap bytes released to OS. | |
| `go_memstats_heap_sys_bytes` | gauge | Number of heap bytes obtained from system. | |
| `go_memstats_last_gc_time_seconds` | gauge | Number of seconds since 1970 of last garbage collection. | |
| `go_memstats_lookups_total` | counter | Total number of pointer lookups. | |
| `go_memstats_mallocs_total` | counter | Total number of mallocs. | |
| `go_memstats_mcache_inuse_bytes` | gauge | Number of bytes in use by mcache structures. | |
| `go_memstats_mcache_sys_bytes` | gauge | Number of bytes used for mcache structures obtained from system. | |
| `go_memstats_mspan_inuse_bytes` | gauge | Number of bytes in use by mspan structures. | |
| `go_memstats_mspan_sys_bytes` | gauge | Number of bytes used for mspan structures obtained from system. | |
| `go_memstats_next_gc_bytes` | gauge | Number of heap bytes when next garbage collection will take place. | |
| `go_memstats_other_sys_bytes` | gauge | Number of bytes used for other system allocations. | |
| `go_memstats_stack_inuse_bytes` | gauge | Number of bytes in use by the stack allocator. | |
| `go_memstats_stack_sys_bytes` | gauge | Number of bytes obtained from system for stack allocator. | |
| `go_memstats_sys_bytes` | gauge | Number of bytes obtained from system. | |
| `go_threads` | gauge | Number of OS threads created. | |
| `process_cpu_seconds_total` | counter | Total user and system CPU time spent in seconds. | |
| `process_max_fds` | gauge | Maximum number of open file descriptors. | |
| `process_open_fds` | gauge | Number of open file descriptors. | |
| `process_resident_memory_bytes` | gauge | Resident memory size in bytes. | |
| `process_start_time_seconds` | gauge | Start time of the process since unix epoch in seconds. | |
| `process_virtual_memory_bytes` | gauge | Virtual memory size in bytes. | |
| `process_virtual_memory_max_bytes` | gauge | Maximum amount of virtual memory available in bytes. | |
| `promhttp_metric_handler_requests_in_flight` | gauge | Current number of scrapes being served. | |
| `promhttp_metric_handler_requests_total` | counter | Total number of scrapes by HTTP status code. | `code` |
| Name | Type | Description | Labels |
|---------------------------------------------------------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|
| `agent_scripts_executed_total` | counter | Total number of scripts executed by the Coder agent. Includes cron scheduled scripts. | `agent_name` `success` `template_name` `username` `workspace_name` |
| `coder_aibridged_injected_tool_invocations_total` | counter | The number of times an injected MCP tool was invoked by aibridge. | `model` `name` `provider` `server` |
| `coder_aibridged_interceptions_duration_seconds` | histogram | The total duration of intercepted requests, in seconds. The majority of this time will be the upstream processing of the request. aibridge has no control over upstream processing time, so it's just an illustrative metric. | `model` `provider` |
| `coder_aibridged_interceptions_inflight` | gauge | The number of intercepted requests which are being processed. | `model` `provider` `route` |
| `coder_aibridged_interceptions_total` | counter | The count of intercepted requests. | `initiator_id` `method` `model` `provider` `route` `status` |
| `coder_aibridged_non_injected_tool_selections_total` | counter | The number of times an AI model selected a tool to be invoked by the client. | `model` `name` `provider` |
| `coder_aibridged_prompts_total` | counter | The number of prompts issued by users (initiators). | `initiator_id` `model` `provider` |
| `coder_aibridged_tokens_total` | counter | The number of tokens used by intercepted requests. | `initiator_id` `model` `provider` `type` |
| `coderd_agentapi_metadata_batch_size` | histogram | Total number of metadata entries in each batch, updated before flushes. | |
| `coderd_agentapi_metadata_batch_utilization` | histogram | Number of metadata keys per agent in each batch, updated before flushes. | |
| `coderd_agentapi_metadata_batches_total` | counter | Total number of metadata batches flushed. | `reason` |
| `coderd_agentapi_metadata_dropped_keys_total` | counter | Total number of metadata keys dropped due to capacity limits. | |
| `coderd_agentapi_metadata_flush_duration_seconds` | histogram | Time taken to flush metadata batch to database and pubsub. | `reason` |
| `coderd_agentapi_metadata_flushed_total` | counter | Total number of unique metadatas flushed. | |
| `coderd_agentapi_metadata_publish_errors_total` | counter | Total number of metadata batch pubsub publish calls that have resulted in an error. | |
| `coderd_agents_apps` | gauge | Agent applications with statuses. | `agent_name` `app_name` `health` `username` `workspace_name` |
| `coderd_agents_connection_latencies_seconds` | gauge | Agent connection latencies in seconds. | `agent_name` `derp_region` `preferred` `username` `workspace_name` |
| `coderd_agents_connections` | gauge | Agent connections with statuses. | `agent_name` `lifecycle_state` `status` `tailnet_node` `username` `workspace_name` |
| `coderd_agents_up` | gauge | The number of active agents per workspace. | `template_name` `username` `workspace_name` |
| `coderd_agentstats_connection_count` | gauge | The number of established connections by agent | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_connection_median_latency_seconds` | gauge | The median agent connection latency | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_currently_reachable_peers` | gauge | The number of peers (e.g. clients) that are currently reachable over the encrypted network. | `agent_name` `connection_type` `template_name` `username` `workspace_name` |
| `coderd_agentstats_rx_bytes` | gauge | Agent Rx bytes | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_session_count_jetbrains` | gauge | The number of sessions established by JetBrains | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_session_count_reconnecting_pty` | gauge | The number of sessions established by reconnecting PTY | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_session_count_ssh` | gauge | The number of sessions established by SSH | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_session_count_vscode` | gauge | The number of sessions established by VS Code | `agent_name` `username` `workspace_name` |
| `coderd_agentstats_startup_script_seconds` | gauge | The number of seconds the startup script took to execute. | `agent_name` `success` `template_name` `username` `workspace_name` |
| `coderd_agentstats_tx_bytes` | gauge | Agent Tx bytes | `agent_name` `username` `workspace_name` |
| `coderd_api_active_users_duration_hour` | gauge | The number of users that have been active within the last hour. | |
| `coderd_api_concurrent_requests` | gauge | The number of concurrent API requests. | |
| `coderd_api_concurrent_websockets` | gauge | The total number of concurrent API websockets. | |
| `coderd_api_request_latencies_seconds` | histogram | Latency distribution of requests in seconds. | `method` `path` |
| `coderd_api_requests_processed_total` | counter | The total number of processed API requests | `code` `method` `path` |
| `coderd_api_websocket_durations_seconds` | histogram | Websocket duration distribution of requests in seconds. | `path` |
| `coderd_api_workspace_latest_build` | gauge | The latest workspace builds with a status. | `status` |
| `coderd_insights_applications_usage_seconds` | gauge | The application usage per template. | `application_name` `slug` `template_name` |
| `coderd_insights_parameters` | gauge | The parameter usage per template. | `parameter_name` `parameter_type` `parameter_value` `template_name` |
| `coderd_insights_templates_active_users` | gauge | The number of active users of the template. | `template_name` |
| `coderd_license_active_users` | gauge | The number of active users. | |
| `coderd_license_errors` | gauge | The number of active license errors. | |
| `coderd_license_limit_users` | gauge | The user seats limit based on the active Coder license. | |
| `coderd_license_user_limit_enabled` | gauge | Returns 1 if the current license enforces the user limit. | |
| `coderd_license_warnings` | gauge | The number of active license warnings. | |
| `coderd_metrics_collector_agents_execution_seconds` | histogram | Histogram for duration of agents metrics collection in seconds. | |
| `coderd_oauth2_external_requests_rate_limit` | gauge | The total number of allowed requests per interval. | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_next_reset_unix` | gauge | Unix timestamp of the next interval | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_remaining` | gauge | The remaining number of allowed requests in this interval. | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_reset_in_seconds` | gauge | Seconds until the next interval | `name` `resource` |
| `coderd_oauth2_external_requests_rate_limit_used` | gauge | The number of requests made in this interval. | `name` `resource` |
| `coderd_oauth2_external_requests_total` | counter | The total number of api calls made to external oauth2 providers. 'status_code' will be 0 if the request failed with no response. | `name` `source` `status_code` |
| `coderd_prebuilt_workspace_claim_duration_seconds` | histogram | Time to claim a prebuilt workspace by organization, template, and preset. | `organization_name` `preset_name` `template_name` |
| `coderd_provisionerd_job_timings_seconds` | histogram | The provisioner job time duration in seconds. | `provisioner` `status` |
| `coderd_provisionerd_jobs_current` | gauge | The number of currently running provisioner jobs. | `provisioner` |
| `coderd_provisionerd_num_daemons` | gauge | The number of provisioner daemons. | |
| `coderd_provisionerd_workspace_build_timings_seconds` | histogram | The time taken for a workspace to build. | `status` `template_name` `template_version` `workspace_transition` |
| `coderd_template_workspace_build_duration_seconds` | histogram | Duration from workspace build creation to agent ready, by template. | `is_prebuild` `organization_name` `status` `template_name` `transition` |
| `coderd_workspace_builds_total` | counter | The number of workspaces started, updated, or deleted. | `action` `owner_email` `status` `template_name` `template_version` `workspace_name` |
| `coderd_workspace_creation_duration_seconds` | histogram | Time to create a workspace by organization, template, preset, and type (regular or prebuild). | `organization_name` `preset_name` `template_name` `type` |
| `coderd_workspace_creation_total` | counter | Total regular (non-prebuilt) workspace creations by organization, template, and preset. | `organization_name` `preset_name` `template_name` |
| `coderd_workspace_latest_build_status` | gauge | The current workspace statuses by template, transition, and owner. | `status` `template_name` `template_version` `workspace_owner` `workspace_transition` |
| `go_gc_duration_seconds` | summary | A summary of the pause duration of garbage collection cycles. | |
| `go_goroutines` | gauge | Number of goroutines that currently exist. | |
| `go_info` | gauge | Information about the Go environment. | `version` |
| `go_memstats_alloc_bytes` | gauge | Number of bytes allocated and still in use. | |
| `go_memstats_alloc_bytes_total` | counter | Total number of bytes allocated, even if freed. | |
| `go_memstats_buck_hash_sys_bytes` | gauge | Number of bytes used by the profiling bucket hash table. | |
| `go_memstats_frees_total` | counter | Total number of frees. | |
| `go_memstats_gc_sys_bytes` | gauge | Number of bytes used for garbage collection system metadata. | |
| `go_memstats_heap_alloc_bytes` | gauge | Number of heap bytes allocated and still in use. | |
| `go_memstats_heap_idle_bytes` | gauge | Number of heap bytes waiting to be used. | |
| `go_memstats_heap_inuse_bytes` | gauge | Number of heap bytes that are in use. | |
| `go_memstats_heap_objects` | gauge | Number of allocated objects. | |
| `go_memstats_heap_released_bytes` | gauge | Number of heap bytes released to OS. | |
| `go_memstats_heap_sys_bytes` | gauge | Number of heap bytes obtained from system. | |
| `go_memstats_last_gc_time_seconds` | gauge | Number of seconds since 1970 of last garbage collection. | |
| `go_memstats_lookups_total` | counter | Total number of pointer lookups. | |
| `go_memstats_mallocs_total` | counter | Total number of mallocs. | |
| `go_memstats_mcache_inuse_bytes` | gauge | Number of bytes in use by mcache structures. | |
| `go_memstats_mcache_sys_bytes` | gauge | Number of bytes used for mcache structures obtained from system. | |
| `go_memstats_mspan_inuse_bytes` | gauge | Number of bytes in use by mspan structures. | |
| `go_memstats_mspan_sys_bytes` | gauge | Number of bytes used for mspan structures obtained from system. | |
| `go_memstats_next_gc_bytes` | gauge | Number of heap bytes when next garbage collection will take place. | |
| `go_memstats_other_sys_bytes` | gauge | Number of bytes used for other system allocations. | |
| `go_memstats_stack_inuse_bytes` | gauge | Number of bytes in use by the stack allocator. | |
| `go_memstats_stack_sys_bytes` | gauge | Number of bytes obtained from system for stack allocator. | |
| `go_memstats_sys_bytes` | gauge | Number of bytes obtained from system. | |
| `go_threads` | gauge | Number of OS threads created. | |
| `process_cpu_seconds_total` | counter | Total user and system CPU time spent in seconds. | |
| `process_max_fds` | gauge | Maximum number of open file descriptors. | |
| `process_open_fds` | gauge | Number of open file descriptors. | |
| `process_resident_memory_bytes` | gauge | Resident memory size in bytes. | |
| `process_start_time_seconds` | gauge | Start time of the process since unix epoch in seconds. | |
| `process_virtual_memory_bytes` | gauge | Virtual memory size in bytes. | |
| `process_virtual_memory_max_bytes` | gauge | Maximum amount of virtual memory available in bytes. | |
| `promhttp_metric_handler_requests_in_flight` | gauge | Current number of scrapes being served. | |
| `promhttp_metric_handler_requests_total` | counter | Total number of scrapes by HTTP status code. | `code` |
<!-- End generated by 'make docs/admin/integrations/prometheus.md'. -->
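The metrics above are exposed by `coderd` for a standard Prometheus scrape. A minimal scrape config might look like the following sketch; the target host and port `2112` are assumptions and should match whatever your deployment's `--prometheus-address` flag (or `CODER_PROMETHEUS_ADDRESS`) exposes:

```yaml
scrape_configs:
  - job_name: "coderd"
    # Default metrics path; coderd serves Prometheus metrics here.
    metrics_path: /metrics
    static_configs:
      # Assumed host:port — replace with your --prometheus-address value.
      - targets: ["coder-server:2112"]
```

With this in place, queries such as `coderd_api_active_users_duration_hour` or `rate(coderd_api_requests_processed_total[5m])` can be run against the scraped series.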
@@ -115,25 +115,6 @@ specified in your template in the `disable_params` search params list
[![Open in Coder](https://YOUR_ACCESS_URL/open-in-coder.svg)](https://YOUR_ACCESS_URL/templates/YOUR_TEMPLATE/workspace?disable_params=first_parameter,second_parameter)
```
### Security: consent dialog for automatic creation
When using `mode=auto` with prefilled `param.*` values, Coder displays a
security consent dialog before creating the workspace. This protects users
from malicious links that could provision workspaces with untrusted
configurations, such as dotfiles or startup scripts from unknown sources.
The dialog shows:
- A warning that a workspace is about to be created automatically from a link
- All prefilled `param.*` values from the URL
- **Confirm and Create** and **Cancel** buttons
The workspace is only created if the user explicitly clicks **Confirm and
Create**. Clicking **Cancel** falls back to the standard creation form where
all parameters can be reviewed manually.
![Consent dialog for automatic workspace creation](../../images/templates/auto-create-consent-dialog.png)
### Example: Kubernetes
For a full example of the Open in Coder flow in Kubernetes, check out
@@ -13,8 +13,7 @@ AI Bridge runs inside the Coder control plane (`coderd`), requiring no separate
You will need to enable AI Bridge explicitly:
```sh
export CODER_AIBRIDGE_ENABLED=true
coder server
CODER_AIBRIDGE_ENABLED=true coder server
# or
coder server --aibridge-enabled=true
```
@@ -2009,11 +2009,6 @@
"description": "Show a task's logs",
"path": "reference/cli/task_logs.md"
},
{
"title": "task pause",
"description": "Pause a task",
"path": "reference/cli/task_pause.md"
},
{
"title": "task send",
"description": "Send input to a task",
@@ -2184,9 +2184,9 @@ This is required on creation to enable a user-flow of validating a template work
#### Enumerated Values
| Value(s) |
|-----------------------------------------------------------------------------------------------------------------------|
| `cli`, `dashboard`, `jetbrains_connection`, `ssh_connection`, `task_manual_pause`, `task_resume`, `vscode_connection` |
| Value(s) |
|--------------------------------------------------------------------------------------------------------|
| `cli`, `dashboard`, `jetbrains_connection`, `ssh_connection`, `task_manual_pause`, `vscode_connection` |
## codersdk.CreateWorkspaceBuildRequest
@@ -7522,225 +7522,6 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
| `message` | string | false | | Message is an actionable message that depicts actions the request took. These messages should be fully formed sentences with proper punctuation. Examples: - "A user has been created." - "Failed to create a user." |
| `validations` | array of [codersdk.ValidationError](#codersdkvalidationerror) | false | | Validations are form field-specific friendly error messages. They will be shown on a form field in the UI. These can also be used to add additional context if there is a set of errors in the primary 'Message'. |
## codersdk.ResumeTaskResponse
```json
{
"workspace_build": {
"build_number": 0,
"created_at": "2019-08-24T14:15:22Z",
"daily_cost": 0,
"deadline": "2019-08-24T14:15:22Z",
"has_ai_task": true,
"has_external_agent": true,
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"initiator_id": "06588898-9a84-4b35-ba8f-f9cbd64946f3",
"initiator_name": "string",
"job": {
"available_workers": [
"497f6eca-6276-4993-bfeb-53cbbbba6f08"
],
"canceled_at": "2019-08-24T14:15:22Z",
"completed_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"error": "string",
"error_code": "REQUIRED_TEMPLATE_VARIABLES",
"file_id": "8a0cfb4f-ddc9-436d-91bb-75133c583767",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"initiator_id": "06588898-9a84-4b35-ba8f-f9cbd64946f3",
"input": {
"error": "string",
"template_version_id": "0ba39c92-1f1b-4c32-aa3e-9925d7713eb1",
"workspace_build_id": "badaf2eb-96c5-4050-9f1d-db2d39ca5478"
},
"logs_overflowed": true,
"metadata": {
"template_display_name": "string",
"template_icon": "string",
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
"organization_id": "7c60d51f-b44e-4682-87d6-449835ea4de6",
"queue_position": 0,
"queue_size": 0,
"started_at": "2019-08-24T14:15:22Z",
"status": "pending",
"tags": {
"property1": "string",
"property2": "string"
},
"type": "template_version_import",
"worker_id": "ae5fa6f7-c55b-40c1-b40a-b36ac467652b",
"worker_name": "string"
},
"matched_provisioners": {
"available": 0,
"count": 0,
"most_recently_seen": "2019-08-24T14:15:22Z"
},
"max_deadline": "2019-08-24T14:15:22Z",
"reason": "initiator",
"resources": [
{
"agents": [
{
"api_version": "string",
"apps": [
{
"command": "string",
"display_name": "string",
"external": true,
"group": "string",
"health": "disabled",
"healthcheck": {
"interval": 0,
"threshold": 0,
"url": "string"
},
"hidden": true,
"icon": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"open_in": "slim-window",
"sharing_level": "owner",
"slug": "string",
"statuses": [
{
"agent_id": "2b1e3b65-2c04-4fa2-a2d7-467901e98978",
"app_id": "affd1d10-9538-4fc8-9e0b-4594a28c1335",
"created_at": "2019-08-24T14:15:22Z",
"icon": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"message": "string",
"needs_user_attention": true,
"state": "working",
"uri": "string",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9"
}
],
"subdomain": true,
"subdomain_name": "string",
"tooltip": "string",
"url": "string"
}
],
"architecture": "string",
"connection_timeout_seconds": 0,
"created_at": "2019-08-24T14:15:22Z",
"directory": "string",
"disconnected_at": "2019-08-24T14:15:22Z",
"display_apps": [
"vscode"
],
"environment_variables": {
"property1": "string",
"property2": "string"
},
"expanded_directory": "string",
"first_connected_at": "2019-08-24T14:15:22Z",
"health": {
"healthy": false,
"reason": "agent has lost connection"
},
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"instance_id": "string",
"last_connected_at": "2019-08-24T14:15:22Z",
"latency": {
"property1": {
"latency_ms": 0,
"preferred": true
},
"property2": {
"latency_ms": 0,
"preferred": true
}
},
"lifecycle_state": "created",
"log_sources": [
{
"created_at": "2019-08-24T14:15:22Z",
"display_name": "string",
"icon": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"workspace_agent_id": "7ad2e618-fea7-4c1a-b70a-f501566a72f1"
}
],
"logs_length": 0,
"logs_overflowed": true,
"name": "string",
"operating_system": "string",
"parent_id": {
"uuid": "string",
"valid": true
},
"ready_at": "2019-08-24T14:15:22Z",
"resource_id": "4d5215ed-38bb-48ed-879a-fdb9ca58522f",
"scripts": [
{
"cron": "string",
"display_name": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"log_path": "string",
"log_source_id": "4197ab25-95cf-4b91-9c78-f7f2af5d353a",
"run_on_start": true,
"run_on_stop": true,
"script": "string",
"start_blocks_login": true,
"timeout": 0
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
"subsystems": [
"envbox"
],
"troubleshooting_url": "string",
"updated_at": "2019-08-24T14:15:22Z",
"version": "string"
}
],
"created_at": "2019-08-24T14:15:22Z",
"daily_cost": 0,
"hide": true,
"icon": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"job_id": "453bd7d7-5355-4d6d-a38e-d9e7eb218c3f",
"metadata": [
{
"key": "string",
"sensitive": true,
"value": "string"
}
],
"name": "string",
"type": "string",
"workspace_transition": "start"
}
],
"status": "pending",
"template_version_id": "0ba39c92-1f1b-4c32-aa3e-9925d7713eb1",
"template_version_name": "string",
"template_version_preset_id": "512a53a7-30da-446e-a1fc-713c630baff1",
"transition": "start",
"updated_at": "2019-08-24T14:15:22Z",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string",
"workspace_owner_avatar_url": "string",
"workspace_owner_id": "e7078695-5279-4c86-8774-3ac2367a2fc7",
"workspace_owner_name": "string"
}
}
```
### Properties
| Name | Type | Required | Restrictions | Description |
|-------------------|----------------------------------------------------|----------|--------------|-------------|
| `workspace_build` | [codersdk.WorkspaceBuild](#codersdkworkspacebuild) | false | | |
## codersdk.RetentionConfig
-32
@@ -397,38 +397,6 @@ curl -X POST http://coder-server:8080/api/v2/tasks/{user}/{task}/pause \
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Resume task
### Code samples
```shell
# Example request using curl
curl -X POST http://coder-server:8080/api/v2/tasks/{user}/{task}/resume \
-H 'Accept: */*' \
-H 'Coder-Session-Token: API_KEY'
```
`POST /tasks/{user}/{task}/resume`
### Parameters
| Name | In | Type | Required | Description |
|--------|------|--------------|----------|-------------------------------------------------------|
| `user` | path | string | true | Username, user ID, or 'me' for the authenticated user |
| `task` | path | string(uuid) | true | Task ID |
### Example responses
> 202 Response
### Responses
| Status | Meaning | Description | Schema |
|--------|---------------------------------------------------------------|-------------|----------------------------------------------------------------------|
| 202 | [Accepted](https://tools.ietf.org/html/rfc7231#section-6.3.3) | Accepted | [codersdk.ResumeTaskResponse](schemas.md#codersdkresumetaskresponse) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Send input to AI task
### Code samples
-1
@@ -21,6 +21,5 @@ coder task
| [<code>delete</code>](./task_delete.md) | Delete tasks |
| [<code>list</code>](./task_list.md) | List tasks |
| [<code>logs</code>](./task_logs.md) | Show a task's logs |
| [<code>pause</code>](./task_pause.md) | Pause a task |
| [<code>send</code>](./task_send.md) | Send input to a task |
| [<code>status</code>](./task_status.md) | Show the status of a task. |
-36
@@ -1,36 +0,0 @@
<!-- DO NOT EDIT | GENERATED CONTENT -->
# task pause
Pause a task
## Usage
```console
coder task pause [flags] <task>
```
## Description
```console
- Pause a task by name:
$ coder task pause my-task
- Pause another user's task:
$ coder task pause alice/my-task
- Pause a task without confirmation:
$ coder task pause my-task --yes
```
## Options
### -y, --yes
| | |
|------|-------------------|
| Type | <code>bool</code> |
Bypass confirmation prompts.
+5 -1
@@ -102,7 +102,11 @@ manually updated the workspace.
## Bulk operations
Admins may apply bulk operations (update, delete, start, stop) in the
> [!NOTE]
> Bulk operations are a Premium feature.
> [Learn more](https://coder.com/pricing#compare-plans).
Licensed admins may apply bulk operations (update, delete, start, stop) in the
**Workspaces** tab. Select the workspaces you'd like to modify with the
checkboxes on the left, then use the top-right **Actions** dropdown to apply the
operation.
+77
@@ -0,0 +1,77 @@
# AI Bridge Proxy
A MITM (Man-in-the-Middle) proxy server for intercepting and decrypting HTTPS requests to AI providers.
## Overview
The AI Bridge Proxy intercepts HTTPS traffic, decrypts it using the configured CA certificate and key, and forwards the decrypted requests to AI Bridge for processing.
## Configuration
### Certificate Setup
Generate a CA key pair for MITM:
#### 1. Generate a new private key
```sh
openssl genrsa -out mitm.key 2048
chmod 400 mitm.key
```
#### 2. Create a self-signed CA certificate
```sh
openssl req -new -x509 -days 365 \
-key mitm.key \
-out mitm.crt \
-subj "/CN=Coder AI Bridge Proxy CA"
```
### Configuration options
| Environment Variable | Description | Default |
|------------------------------------|---------------------------------|---------|
| `CODER_AIBRIDGE_PROXY_ENABLED` | Enable the AI Bridge Proxy | `false` |
| `CODER_AIBRIDGE_PROXY_LISTEN_ADDR` | Address the proxy listens on | `:8888` |
| `CODER_AIBRIDGE_PROXY_CERT_FILE` | Path to the CA certificate file | - |
| `CODER_AIBRIDGE_PROXY_KEY_FILE` | Path to the CA private key file | - |
### Client Configuration
Clients must trust the proxy's CA certificate and authenticate with their Coder session token.
#### CA Certificate
Clients need to trust the MITM CA certificate:
```sh
# Node.js
export NODE_EXTRA_CA_CERTS="/path/to/mitm.crt"
# Python (requests, httpx)
export REQUESTS_CA_BUNDLE="/path/to/mitm.crt"
export SSL_CERT_FILE="/path/to/mitm.crt"
# Go
export SSL_CERT_FILE="/path/to/mitm.crt"
```
#### Proxy Authentication
Clients authenticate with the proxy using their Coder session token in the `Proxy-Authorization` header via HTTP Basic Auth.
The token is passed as the password (username is ignored):
```sh
export HTTP_PROXY="http://ignored:<coder-session-token>@<proxy-host>:<proxy-port>"
export HTTPS_PROXY="http://ignored:<coder-session-token>@<proxy-host>:<proxy-port>"
```
For example:
```sh
export HTTP_PROXY="http://coder:${CODER_SESSION_TOKEN}@localhost:8888"
export HTTPS_PROXY="http://coder:${CODER_SESSION_TOKEN}@localhost:8888"
```
Most HTTP clients and AI SDKs will automatically use these environment variables.
-2
@@ -370,7 +370,6 @@ func TestEnterpriseCreateWithPreset(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -484,7 +483,6 @@ func TestEnterpriseCreateWithPreset(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
+1 -1
@@ -64,7 +64,7 @@ func TestRemoveOrganizationMembers(t *testing.T) {
buf := new(bytes.Buffer)
inv.Stdout = buf
err := inv.WithContext(ctx).Run()
require.ErrorContains(t, err, "Resource not found or you do not have access to this resource")
require.ErrorContains(t, err, "must be an existing uuid or username")
})
}
-1
@@ -1331,7 +1331,6 @@ func (api *API) setupPrebuilds(featureEnabled bool) (agplprebuilds.Reconciliatio
api.AGPL.BuildUsageChecker,
api.TracerProvider,
int(api.DeploymentValues.PostgresConnMaxOpen.Value()),
api.AGPL.WorkspaceBuilderMetrics,
)
return reconciler, prebuilds.NewEnterpriseClaimer()
}
@@ -174,7 +174,6 @@ func TestClaimPrebuild(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -204,7 +204,6 @@ func TestMetricsCollector(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ctx := testutil.Context(t, testutil.WaitLong)
@@ -345,7 +344,6 @@ func TestMetricsCollector_DuplicateTemplateNames(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ctx := testutil.Context(t, testutil.WaitLong)
@@ -502,7 +500,6 @@ func TestMetricsCollector_ReconciliationPausedMetric(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ctx := testutil.Context(t, testutil.WaitLong)
@@ -540,7 +537,6 @@ func TestMetricsCollector_ReconciliationPausedMetric(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ctx := testutil.Context(t, testutil.WaitLong)
@@ -578,7 +574,6 @@ func TestMetricsCollector_ReconciliationPausedMetric(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ctx := testutil.Context(t, testutil.WaitLong)
+25 -52
@@ -51,12 +51,9 @@ type StoreReconciler struct {
buildUsageChecker *atomic.Pointer[wsbuilder.UsageChecker]
tracer trace.Tracer
// mu protects the reconciler's lifecycle state.
mu sync.Mutex
running bool
stopped bool
cancelFn context.CancelCauseFunc
cancelFn context.CancelCauseFunc
running atomic.Bool
stopped atomic.Bool
done chan struct{}
provisionNotifyCh chan database.ProvisionerJob
@@ -65,8 +62,7 @@ type StoreReconciler struct {
// Prebuild state metrics
metrics *MetricsCollector
// Operational metrics
reconciliationDuration prometheus.Histogram
workspaceBuilderMetrics *wsbuilder.Metrics
reconciliationDuration prometheus.Histogram
}
var _ prebuilds.ReconciliationOrchestrator = &StoreReconciler{}
@@ -100,7 +96,6 @@ func NewStoreReconciler(store database.Store,
buildUsageChecker *atomic.Pointer[wsbuilder.UsageChecker],
tracerProvider trace.TracerProvider,
maxDBConnections int,
workspaceBuilderMetrics *wsbuilder.Metrics,
) *StoreReconciler {
reconciliationConcurrency := calculateReconciliationConcurrency(maxDBConnections)
@@ -122,7 +117,6 @@ func NewStoreReconciler(store database.Store,
done: make(chan struct{}, 1),
provisionNotifyCh: make(chan database.ProvisionerJob, 10),
reconciliationConcurrency: reconciliationConcurrency,
workspaceBuilderMetrics: workspaceBuilderMetrics,
}
if registerer != nil {
@@ -180,33 +174,18 @@ func (c *StoreReconciler) Run(ctx context.Context) {
slog.F("backoff_lookback", c.cfg.ReconciliationBackoffLookback.String()),
slog.F("preset_concurrency", c.reconciliationConcurrency))
// Create a child context that will be canceled when:
// 1. The parent context is canceled, OR
// 2. c.cancelFn() is called to trigger shutdown
// nolint:gocritic // Reconciliation Loop needs Prebuilds Orchestrator permissions.
ctx, cancel := context.WithCancelCause(dbauthz.AsPrebuildsOrchestrator(ctx))
// If the reconciler was already stopped, exit early and release the context.
// Otherwise, mark it as running and store the cancel function for shutdown.
c.mu.Lock()
if c.stopped || c.running {
c.mu.Unlock()
cancel(nil)
return
}
c.running = true
c.cancelFn = cancel
c.mu.Unlock()
var wg sync.WaitGroup
ticker := c.clock.NewTicker(reconciliationInterval)
defer ticker.Stop()
// Wait for all background goroutines to exit before signaling completion.
var wg sync.WaitGroup
defer func() {
wg.Wait()
c.done <- struct{}{}
}()
// nolint:gocritic // Reconciliation Loop needs Prebuilds Orchestrator permissions.
ctx, cancel := context.WithCancelCause(dbauthz.AsPrebuildsOrchestrator(ctx))
c.cancelFn = cancel
// Start updating metrics in the background.
if c.metrics != nil {
wg.Add(1)
@@ -216,6 +195,11 @@ func (c *StoreReconciler) Run(ctx context.Context) {
}()
}
// Everything is in place, reconciler can now be considered as running.
//
// NOTE: without this atomic bool, Stop might race with Run for the c.cancelFn above.
c.running.Store(true)
// Publish provisioning jobs outside of database transactions.
// A connection is held while a database transaction is active; PGPubsub also tries to acquire a new connection on
// Publish, so we can exhaust available connections.
@@ -223,11 +207,11 @@ func (c *StoreReconciler) Run(ctx context.Context) {
// A single worker dequeues from the channel, which should be sufficient.
// If any messages are missed due to congestion or errors, provisionerdserver has a backup polling mechanism which
// will periodically pick up any queued jobs (see poll(time.Duration) in coderd/provisionerdserver/acquirer.go).
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-c.done:
return
case <-ctx.Done():
return
case job := <-c.provisionNotifyCh:
@@ -272,29 +256,21 @@ func (c *StoreReconciler) Run(ctx context.Context) {
}
}
// Stop triggers reconciler shutdown and waits for it to complete.
// The ctx parameter provides a timeout, if cleanup doesn't finish within
// this timeout, Stop() logs an error and returns.
func (c *StoreReconciler) Stop(ctx context.Context, cause error) {
defer c.running.Store(false)
if cause != nil {
c.logger.Info(context.Background(), "stopping reconciler", slog.F("cause", cause.Error()))
} else {
c.logger.Info(context.Background(), "stopping reconciler")
}
// Mark the reconciler as stopped. If it was already stopped, return early.
// If the reconciler is running, we'll proceed to shut it down.
// If previously stopped (Swap returns previous value), then short-circuit.
//
// NOTE: we need to *prospectively* mark this as stopped to prevent the
// reconciler from being stopped multiple times and causing problems.
c.mu.Lock()
if c.stopped {
c.mu.Unlock()
// NOTE: we need to *prospectively* mark this as stopped to prevent Stop being called multiple times and causing problems.
if c.stopped.Swap(true) {
return
}
c.stopped = true
running := c.running
c.mu.Unlock()
// Unregister prebuilds state and operational metrics.
if c.metrics != nil && c.registerer != nil {
@@ -313,18 +289,16 @@ func (c *StoreReconciler) Stop(ctx context.Context, cause error) {
}
// If the reconciler is not running, there's nothing else to do.
if !running {
if !c.running.Load() {
return
}
// Trigger reconciler shutdown by canceling its internal context.
if c.cancelFn != nil {
c.cancelFn(cause)
}
// Wait for the reconciler to signal that it has fully exited and cleaned up.
select {
// Timeout: reconciler didn't finish cleanup within the timeout period.
// Give up waiting for control loop to exit.
case <-ctx.Done():
// nolint:gocritic // it's okay to use slog.F() for an error in this case
// because we want to differentiate two different types of errors: ctx.Err() and context.Cause()
@@ -334,7 +308,7 @@ func (c *StoreReconciler) Stop(ctx context.Context, cause error) {
slog.Error(ctx.Err()),
slog.F("cause", context.Cause(ctx)),
)
// Happy path: reconciler has successfully exited.
// Wait for the control loop to exit.
case <-c.done:
c.logger.Info(context.Background(), "reconciler stopped")
}
@@ -1055,8 +1029,7 @@ func (c *StoreReconciler) provision(
builder := wsbuilder.New(workspace, transition, *c.buildUsageChecker.Load()).
Reason(database.BuildReasonInitiator).
Initiator(database.PrebuildsSystemUserID).
MarkPrebuild().
BuildMetrics(c.workspaceBuilderMetrics)
MarkPrebuild()
if transition != database.WorkspaceTransitionDelete {
// We don't specify the version for a delete transition,
+3 -25
@@ -61,7 +61,6 @@ func TestNoReconciliationActionsIfNoPresets(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// given a template version with no presets
@@ -113,7 +112,6 @@ func TestNoReconciliationActionsIfNoPrebuilds(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// given there are presets, but no prebuilds
@@ -452,7 +450,6 @@ func (tc testCase) run(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// Run the reconciliation multiple times to ensure idempotency
@@ -530,7 +527,6 @@ func TestMultiplePresetsPerTemplateVersion(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ownerID := uuid.New()
@@ -662,7 +658,6 @@ func TestPrebuildScheduling(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ownerID := uuid.New()
@@ -772,7 +767,6 @@ func TestInvalidPreset(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ownerID := uuid.New()
@@ -843,7 +837,6 @@ func TestDeletionOfPrebuiltWorkspaceWithInvalidPreset(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ownerID := uuid.New()
@@ -946,7 +939,6 @@ func TestSkippingHardLimitedPresets(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// Set up test environment with a template, version, and preset.
@@ -1098,7 +1090,6 @@ func TestHardLimitedPresetShouldNotBlockDeletion(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// Set up test environment with a template, version, and preset.
@@ -1288,8 +1279,9 @@ func TestRunLoop(t *testing.T) {
ReconciliationBackoffInterval: serpent.Duration(backoffInterval),
ReconciliationInterval: serpent.Duration(time.Second),
}
// Do not ignore errors as we want a graceful stop
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: false}).Leveled(slog.LevelDebug)
logger := slogtest.Make(
t, &slogtest.Options{IgnoreErrors: true},
).Leveled(slog.LevelDebug)
db, pubSub := dbtestutil.NewDB(t)
cache := files.New(prometheus.NewRegistry(), &coderdtest.FakeAuthorizer{})
reconciler := prebuilds.NewStoreReconciler(
@@ -1300,7 +1292,6 @@ func TestRunLoop(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
ownerID := uuid.New()
@@ -1433,7 +1424,6 @@ func TestReconcilerLifecycle(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// When: the reconciler is stopped (simulating the prebuilds feature being disabled)
@@ -1449,7 +1439,6 @@ func TestReconcilerLifecycle(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// Gracefully stop the reconciliation loop
@@ -1483,7 +1472,6 @@ func TestFailedBuildBackoff(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// Given: an active template version with presets and prebuilds configured.
@@ -1608,7 +1596,6 @@ func TestReconciliationLock(t *testing.T) {
newNoopEnqueuer(),
newNoopUsageCheckerPtr(), noop.NewTracerProvider(),
10,
nil,
)
reconciler.WithReconciliationLock(ctx, logger, func(_ context.Context, _ database.Store) error {
lockObtained := mutex.TryLock()
@@ -1647,7 +1634,6 @@ func TestTrackResourceReplacement(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// Given: a template admin to receive a notification.
@@ -1808,7 +1794,6 @@ func TestExpiredPrebuildsMultipleActions(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// Set up test environment with a template, version, and preset
@@ -2274,7 +2259,6 @@ func TestCancelPendingPrebuilds(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
owner := coderdtest.CreateFirstUser(t, client)
@@ -2520,7 +2504,6 @@ func TestCancelPendingPrebuilds(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
owner := coderdtest.CreateFirstUser(t, client)
@@ -2594,7 +2577,6 @@ func TestReconciliationStats(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
owner := coderdtest.CreateFirstUser(t, client)
@@ -3085,7 +3067,6 @@ func TestReconciliationRespectsPauseSetting(t *testing.T) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
10,
nil,
)
// Setup a template with a preset that should create prebuilds
@@ -3192,7 +3173,6 @@ func BenchmarkReconcileAll_NoOps(b *testing.B) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
maxOpenConns,
nil,
)
org := dbgen.Organization(b, db, database.Organization{})
@@ -3304,7 +3284,6 @@ func BenchmarkReconcileAll_ConnectionContention(b *testing.B) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
maxOpenConns,
nil,
)
// Create presets from active template versions that need reconciliation actions
@@ -3424,7 +3403,6 @@ func BenchmarkReconcileAll_Mix(b *testing.B) {
newNoopUsageCheckerPtr(),
noop.NewTracerProvider(),
maxOpenConns,
nil,
)
org := dbgen.Organization(b, db, database.Organization{})
+2 -2
@@ -356,7 +356,7 @@ func TestGrantSiteRoles(t *testing.T) {
AssignToUser: uuid.NewString(),
Roles: []string{codersdk.RoleOwner},
Error: true,
StatusCode: http.StatusNotFound,
StatusCode: http.StatusBadRequest,
},
{
Name: "MemberCannotUpdateRoles",
@@ -364,7 +364,7 @@ func TestGrantSiteRoles(t *testing.T) {
AssignToUser: first.UserID.String(),
Roles: []string{},
Error: true,
StatusCode: http.StatusNotFound,
StatusCode: http.StatusBadRequest,
},
{
// Cannot update your own roles
-6
@@ -1991,7 +1991,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
api.AGPL.BuildUsageChecker,
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -2116,7 +2115,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
api.AGPL.BuildUsageChecker,
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -2241,7 +2239,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
api.AGPL.BuildUsageChecker,
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -2388,7 +2385,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
api.AGPL.BuildUsageChecker,
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -2536,7 +2532,6 @@ func TestPrebuildsAutobuild(t *testing.T) {
api.AGPL.BuildUsageChecker,
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
@@ -2984,7 +2979,6 @@ func TestWorkspaceProvisionerdServerMetrics(t *testing.T) {
api.AGPL.BuildUsageChecker,
noop.NewTracerProvider(),
10,
nil,
)
var claimer agplprebuilds.Claimer = prebuilds.NewEnterpriseClaimer()
api.AGPL.PrebuildsClaimer.Store(&claimer)
+1 -1
@@ -152,7 +152,7 @@ func TestEnterpriseMembers(t *testing.T) {
require.Error(t, err)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Contains(t, apiErr.Message, "Resource not found or you do not have access to this resource")
require.Contains(t, apiErr.Message, "must be an existing")
})
// Calling it from a user without the org access.
+2 -2
@@ -473,7 +473,7 @@ require (
github.com/anthropics/anthropic-sdk-go v1.19.0
github.com/brianvoe/gofakeit/v7 v7.14.0
github.com/coder/agentapi-sdk-go v0.0.0-20250505131810-560d1d88d225
github.com/coder/aibridge v1.0.3
github.com/coder/aibridge v1.0.2
github.com/coder/aisdk-go v0.0.9
github.com/coder/boundary v0.8.0
github.com/coder/preview v1.0.4
@@ -481,7 +481,7 @@ require (
github.com/dgraph-io/ristretto/v2 v2.4.0
github.com/elazarl/goproxy v1.8.0
github.com/fsnotify/fsnotify v1.9.0
github.com/go-git/go-git/v5 v5.16.5
github.com/go-git/go-git/v5 v5.16.2
github.com/icholy/replace v0.6.0
github.com/mark3labs/mcp-go v0.38.0
gonum.org/v1/gonum v0.17.0
+4 -4
@@ -927,8 +927,8 @@ github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f h1:Y8xYupdHxryycyPlc9Y
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f/go.mod h1:HlzOvOjVBOfTGSRXRyY0OiCS/3J1akRGQQpRO/7zyF4=
github.com/coder/agentapi-sdk-go v0.0.0-20250505131810-560d1d88d225 h1:tRIViZ5JRmzdOEo5wUWngaGEFBG8OaE1o2GIHN5ujJ8=
github.com/coder/agentapi-sdk-go v0.0.0-20250505131810-560d1d88d225/go.mod h1:rNLVpYgEVeu1Zk29K64z6Od8RBP9DwqCu9OfCzh8MR4=
github.com/coder/aibridge v1.0.3 h1:gt3XKbnFBJ/jyls/yanU/iWZO5yhd6LVYuTQbEZ/SxQ=
github.com/coder/aibridge v1.0.3/go.mod h1:c7Of2xfAksZUrPWN180Eh60fiKgzs7dyOjniTjft6AE=
github.com/coder/aibridge v1.0.2 h1:cVPr9+TFLIzULpKPGI/1lnL14+DruedR7KnjZHklIEU=
github.com/coder/aibridge v1.0.2/go.mod h1:c7Of2xfAksZUrPWN180Eh60fiKgzs7dyOjniTjft6AE=
github.com/coder/aisdk-go v0.0.9 h1:Vzo/k2qwVGLTR10ESDeP2Ecek1SdPfZlEjtTfMveiVo=
github.com/coder/aisdk-go v0.0.9/go.mod h1:KF6/Vkono0FJJOtWtveh5j7yfNrSctVTpwgweYWSp5M=
github.com/coder/boundary v0.8.0 h1:g/H6VIGY4IoWeKkbvao7zhO1BAQe7upSHfHzoAZxdik=
@@ -1149,8 +1149,8 @@ github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66D
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UNbRM=
github.com/go-git/go-billy/v5 v5.6.2/go.mod h1:rcFC2rAsp/erv7CMz9GczHcuD0D32fWzH+MJAU+jaUU=
github.com/go-git/go-git/v5 v5.16.5 h1:mdkuqblwr57kVfXri5TTH+nMFLNUxIj9Z7F5ykFbw5s=
github.com/go-git/go-git/v5 v5.16.5/go.mod h1:QOMLpNf1qxuSY4StA/ArOdfFR2TrKEjJiye2kel2m+M=
github.com/go-git/go-git/v5 v5.16.2 h1:fT6ZIOjE5iEnkzKyxTHK1W4HGAsPhqEqiSAssSO77hM=
github.com/go-git/go-git/v5 v5.16.2/go.mod h1:4Ge4alE/5gPs30F2H1esi2gPd69R0C39lolkucHBOp8=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
-52
@@ -1,52 +0,0 @@
# Metrics Documentation Generator
This tool generates the Prometheus metrics documentation at [`docs/admin/integrations/prometheus.md`](https://coder.com/docs/admin/integrations/prometheus#available-metrics).
## How It Works
The documentation is generated from two metrics files:
1. `metrics` (static, manually maintained)
2. `generated_metrics` (auto-generated, do not edit)
These files are merged and used to produce the final documentation.
### `metrics` (static)
Contains metrics that are **not** directly defined in the coder source code:
- `go_*`: Go runtime metrics
- `process_*`: Process metrics from prometheus/client_golang
- `promhttp_*`: Prometheus HTTP handler metrics
- `coder_aibridged_*`: Metrics from external dependencies
> [!NOTE]
> This file also contains edge cases where metric metadata cannot be accurately extracted by the scanner (e.g., labels determined by runtime logic).
> Static metrics take priority over generated metrics when both files contain the same metric name.
**Edit this file** to add metrics that should appear in the documentation but are not scanned from the coder codebase,
or to manually override metrics where the scanner generates incorrect metadata (e.g., missing runtime-determined labels like in `agent_scripts_executed_total`).
### `generated_metrics` (auto-generated)
Contains metrics extracted from the coder source code by the AST scanner (`scanner/scanner.go`).
**Do not edit this file directly.** It is regenerated by running:
```bash
make scripts/metricsdocgen/generated_metrics
```
## Updating Metrics Documentation
To regenerate the documentation after code changes:
```bash
make docs/admin/integrations/prometheus.md
```
This will:
- Run the scanner to update `generated_metrics`
- Merge `metrics` and `generated_metrics` metric files
- Update the documentation file
-330
@@ -1,330 +0,0 @@
# HELP coder_pubsub_connected Whether we are connected (1) or not connected (0) to postgres
# TYPE coder_pubsub_connected gauge
coder_pubsub_connected 0
# HELP coder_pubsub_current_events The current number of pubsub event channels listened for
# TYPE coder_pubsub_current_events gauge
coder_pubsub_current_events 0
# HELP coder_pubsub_current_subscribers The current number of active pubsub subscribers
# TYPE coder_pubsub_current_subscribers gauge
coder_pubsub_current_subscribers 0
# HELP coder_pubsub_disconnections_total Total number of times we disconnected unexpectedly from postgres
# TYPE coder_pubsub_disconnections_total counter
coder_pubsub_disconnections_total 0
# HELP coder_pubsub_latency_measure_errs_total The number of pubsub latency measurement failures
# TYPE coder_pubsub_latency_measure_errs_total counter
coder_pubsub_latency_measure_errs_total 0
# HELP coder_pubsub_latency_measures_total The number of pubsub latency measurements
# TYPE coder_pubsub_latency_measures_total counter
coder_pubsub_latency_measures_total 0
# HELP coder_pubsub_messages_total Total number of messages received from postgres
# TYPE coder_pubsub_messages_total counter
coder_pubsub_messages_total{size=""} 0
# HELP coder_pubsub_published_bytes_total Total number of bytes successfully published across all publishes
# TYPE coder_pubsub_published_bytes_total counter
coder_pubsub_published_bytes_total 0
# HELP coder_pubsub_publishes_total Total number of calls to Publish
# TYPE coder_pubsub_publishes_total counter
coder_pubsub_publishes_total{success=""} 0
# HELP coder_pubsub_receive_latency_seconds The time taken to receive a message from a pubsub event channel
# TYPE coder_pubsub_receive_latency_seconds gauge
coder_pubsub_receive_latency_seconds 0
# HELP coder_pubsub_received_bytes_total Total number of bytes received across all messages
# TYPE coder_pubsub_received_bytes_total counter
coder_pubsub_received_bytes_total 0
# HELP coder_pubsub_send_latency_seconds The time taken to send a message into a pubsub event channel
# TYPE coder_pubsub_send_latency_seconds gauge
coder_pubsub_send_latency_seconds 0
# HELP coder_pubsub_subscribes_total Total number of calls to Subscribe/SubscribeWithErr
# TYPE coder_pubsub_subscribes_total counter
coder_pubsub_subscribes_total{success=""} 0
# HELP coder_servertailnet_connections_total Total number of TCP connections made to workspace agents.
# TYPE coder_servertailnet_connections_total counter
coder_servertailnet_connections_total{network=""} 0
# HELP coder_servertailnet_open_connections Total number of TCP connections currently open to workspace agents.
# TYPE coder_servertailnet_open_connections gauge
coder_servertailnet_open_connections{network=""} 0
# HELP coderd_agentapi_metadata_batch_size Total number of metadata entries in each batch, updated before flushes.
# TYPE coderd_agentapi_metadata_batch_size histogram
coderd_agentapi_metadata_batch_size 0
# HELP coderd_agentapi_metadata_batch_utilization Number of metadata keys per agent in each batch, updated before flushes.
# TYPE coderd_agentapi_metadata_batch_utilization histogram
coderd_agentapi_metadata_batch_utilization 0
# HELP coderd_agentapi_metadata_batches_total Total number of metadata batches flushed.
# TYPE coderd_agentapi_metadata_batches_total counter
coderd_agentapi_metadata_batches_total{reason=""} 0
# HELP coderd_agentapi_metadata_dropped_keys_total Total number of metadata keys dropped due to capacity limits.
# TYPE coderd_agentapi_metadata_dropped_keys_total counter
coderd_agentapi_metadata_dropped_keys_total 0
# HELP coderd_agentapi_metadata_flush_duration_seconds Time taken to flush metadata batch to database and pubsub.
# TYPE coderd_agentapi_metadata_flush_duration_seconds histogram
coderd_agentapi_metadata_flush_duration_seconds{reason=""} 0
# HELP coderd_agentapi_metadata_flushed_total Total number of unique metadatas flushed.
# TYPE coderd_agentapi_metadata_flushed_total counter
coderd_agentapi_metadata_flushed_total 0
# HELP coderd_agentapi_metadata_publish_errors_total Total number of metadata batch pubsub publish calls that have resulted in an error.
# TYPE coderd_agentapi_metadata_publish_errors_total counter
coderd_agentapi_metadata_publish_errors_total 0
# HELP coderd_agents_apps Agent applications with statuses.
# TYPE coderd_agents_apps gauge
coderd_agents_apps{agent_name="",username="",workspace_name="",app_name="",health=""} 0
# HELP coderd_agents_connection_latencies_seconds Agent connection latencies in seconds.
# TYPE coderd_agents_connection_latencies_seconds gauge
coderd_agents_connection_latencies_seconds{agent_name="",username="",workspace_name="",derp_region="",preferred=""} 0
# HELP coderd_agents_connections Agent connections with statuses.
# TYPE coderd_agents_connections gauge
coderd_agents_connections{agent_name="",username="",workspace_name="",status="",lifecycle_state="",tailnet_node=""} 0
# HELP coderd_agents_up The number of active agents per workspace.
# TYPE coderd_agents_up gauge
coderd_agents_up{username="",workspace_name="",template_name="",template_version=""} 0
# HELP coderd_agentstats_connection_count The number of established connections by agent
# TYPE coderd_agentstats_connection_count gauge
coderd_agentstats_connection_count 0
# HELP coderd_agentstats_connection_median_latency_seconds The median agent connection latency in seconds
# TYPE coderd_agentstats_connection_median_latency_seconds gauge
coderd_agentstats_connection_median_latency_seconds 0
# HELP coderd_agentstats_currently_reachable_peers The number of peers (e.g. clients) that are currently reachable over the encrypted network.
# TYPE coderd_agentstats_currently_reachable_peers gauge
coderd_agentstats_currently_reachable_peers{connection_type=""} 0
# HELP coderd_agentstats_rx_bytes Agent Rx bytes
# TYPE coderd_agentstats_rx_bytes gauge
coderd_agentstats_rx_bytes 0
# HELP coderd_agentstats_session_count_jetbrains The number of session established by JetBrains
# TYPE coderd_agentstats_session_count_jetbrains gauge
coderd_agentstats_session_count_jetbrains 0
# HELP coderd_agentstats_session_count_reconnecting_pty The number of session established by reconnecting PTY
# TYPE coderd_agentstats_session_count_reconnecting_pty gauge
coderd_agentstats_session_count_reconnecting_pty 0
# HELP coderd_agentstats_session_count_ssh The number of session established by SSH
# TYPE coderd_agentstats_session_count_ssh gauge
coderd_agentstats_session_count_ssh 0
# HELP coderd_agentstats_session_count_vscode The number of session established by VSCode
# TYPE coderd_agentstats_session_count_vscode gauge
coderd_agentstats_session_count_vscode 0
# HELP coderd_agentstats_startup_script_seconds Amount of time taken to run the startup script in seconds.
# TYPE coderd_agentstats_startup_script_seconds gauge
coderd_agentstats_startup_script_seconds{success=""} 0
# HELP coderd_agentstats_tx_bytes Agent Tx bytes
# TYPE coderd_agentstats_tx_bytes gauge
coderd_agentstats_tx_bytes 0
# HELP coderd_api_active_users_duration_hour The number of users that have been active within the last hour.
# TYPE coderd_api_active_users_duration_hour gauge
coderd_api_active_users_duration_hour 0
# HELP coderd_api_concurrent_requests The number of concurrent API requests.
# TYPE coderd_api_concurrent_requests gauge
coderd_api_concurrent_requests{method="",path=""} 0
# HELP coderd_api_concurrent_websockets The total number of concurrent API websockets.
# TYPE coderd_api_concurrent_websockets gauge
coderd_api_concurrent_websockets{path=""} 0
# HELP coderd_api_request_latencies_seconds Latency distribution of requests in seconds.
# TYPE coderd_api_request_latencies_seconds histogram
coderd_api_request_latencies_seconds{method="",path=""} 0
# HELP coderd_api_requests_processed_total The total number of processed API requests
# TYPE coderd_api_requests_processed_total counter
coderd_api_requests_processed_total{code="",method="",path=""} 0
# HELP coderd_api_total_user_count The total number of registered users, partitioned by status.
# TYPE coderd_api_total_user_count gauge
coderd_api_total_user_count{status=""} 0
# HELP coderd_api_websocket_durations_seconds Websocket duration distribution of requests in seconds.
# TYPE coderd_api_websocket_durations_seconds histogram
coderd_api_websocket_durations_seconds{path=""} 0
# HELP coderd_api_workspace_latest_build The current number of workspace builds by status for all non-deleted workspaces.
# TYPE coderd_api_workspace_latest_build gauge
coderd_api_workspace_latest_build{status=""} 0
# HELP coderd_authz_authorize_duration_seconds Duration of the 'Authorize' call in seconds. Only counts calls that succeed.
# TYPE coderd_authz_authorize_duration_seconds histogram
coderd_authz_authorize_duration_seconds{allowed=""} 0
# HELP coderd_authz_prepare_authorize_duration_seconds Duration of the 'PrepareAuthorize' call in seconds.
# TYPE coderd_authz_prepare_authorize_duration_seconds histogram
coderd_authz_prepare_authorize_duration_seconds 0
# HELP coderd_db_query_counts_total Total number of queries labelled by HTTP route, method, and query name.
# TYPE coderd_db_query_counts_total counter
coderd_db_query_counts_total{route="",method="",query=""} 0
# HELP coderd_db_query_latencies_seconds Latency distribution of queries in seconds.
# TYPE coderd_db_query_latencies_seconds histogram
coderd_db_query_latencies_seconds{query=""} 0
# HELP coderd_db_tx_duration_seconds Duration of transactions in seconds.
# TYPE coderd_db_tx_duration_seconds histogram
coderd_db_tx_duration_seconds{success="",tx_id=""} 0
# HELP coderd_db_tx_executions_count Total count of transactions executed. 'retries' is expected to be 0 for a successful transaction.
# TYPE coderd_db_tx_executions_count counter
coderd_db_tx_executions_count{success="",retries="",tx_id=""} 0
# HELP coderd_dbpurge_iteration_duration_seconds Duration of each dbpurge iteration in seconds.
# TYPE coderd_dbpurge_iteration_duration_seconds histogram
coderd_dbpurge_iteration_duration_seconds{success=""} 0
# HELP coderd_dbpurge_records_purged_total Total number of records purged by type.
# TYPE coderd_dbpurge_records_purged_total counter
coderd_dbpurge_records_purged_total{record_type=""} 0
# HELP coderd_experiments Indicates whether each experiment is enabled (1) or not (0)
# TYPE coderd_experiments gauge
coderd_experiments{experiment=""} 0
# HELP coderd_insights_applications_usage_seconds The application usage per template.
# TYPE coderd_insights_applications_usage_seconds gauge
coderd_insights_applications_usage_seconds{template_name="",application_name="",slug=""} 0
# HELP coderd_insights_parameters The parameter usage per template.
# TYPE coderd_insights_parameters gauge
coderd_insights_parameters{template_name="",parameter_name="",parameter_type="",parameter_value=""} 0
# HELP coderd_insights_templates_active_users The number of active users of the template.
# TYPE coderd_insights_templates_active_users gauge
coderd_insights_templates_active_users{template_name=""} 0
# HELP coderd_license_active_users The number of active users.
# TYPE coderd_license_active_users gauge
coderd_license_active_users 0
# HELP coderd_license_errors The number of active license errors.
# TYPE coderd_license_errors gauge
coderd_license_errors 0
# HELP coderd_license_limit_users The user seats limit based on the active Coder license.
# TYPE coderd_license_limit_users gauge
coderd_license_limit_users 0
# HELP coderd_license_user_limit_enabled Returns 1 if the current license enforces the user limit.
# TYPE coderd_license_user_limit_enabled gauge
coderd_license_user_limit_enabled 0
# HELP coderd_license_warnings The number of active license warnings.
# TYPE coderd_license_warnings gauge
coderd_license_warnings 0
# HELP coderd_lifecycle_autobuild_execution_duration_seconds Duration of each autobuild execution.
# TYPE coderd_lifecycle_autobuild_execution_duration_seconds histogram
coderd_lifecycle_autobuild_execution_duration_seconds 0
# HELP coderd_notifications_dispatcher_send_seconds The time taken to dispatch notifications.
# TYPE coderd_notifications_dispatcher_send_seconds histogram
coderd_notifications_dispatcher_send_seconds{method=""} 0
# HELP coderd_notifications_inflight_dispatches The number of dispatch attempts which are currently in progress.
# TYPE coderd_notifications_inflight_dispatches gauge
coderd_notifications_inflight_dispatches{method="",notification_template_id=""} 0
# HELP coderd_notifications_pending_updates The number of dispatch attempt results waiting to be flushed to the store.
# TYPE coderd_notifications_pending_updates gauge
coderd_notifications_pending_updates 0
# HELP coderd_notifications_queued_seconds The time elapsed between a notification being enqueued in the store and retrieved for dispatching (measures the latency of the notifications system). This should generally be within CODER_NOTIFICATIONS_FETCH_INTERVAL seconds; higher values for a sustained period indicates delayed processing and CODER_NOTIFICATIONS_LEASE_COUNT can be increased to accommodate this.
# TYPE coderd_notifications_queued_seconds histogram
coderd_notifications_queued_seconds{method=""} 0
# HELP coderd_notifications_retry_count The count of notification dispatch retry attempts.
# TYPE coderd_notifications_retry_count counter
coderd_notifications_retry_count{method="",notification_template_id=""} 0
# HELP coderd_notifications_synced_updates_total The number of dispatch attempt results flushed to the store.
# TYPE coderd_notifications_synced_updates_total counter
coderd_notifications_synced_updates_total 0
# HELP coderd_oauth2_external_requests_rate_limit The total number of allowed requests per interval.
# TYPE coderd_oauth2_external_requests_rate_limit gauge
coderd_oauth2_external_requests_rate_limit{name="",resource=""} 0
# HELP coderd_oauth2_external_requests_rate_limit_next_reset_unix Unix timestamp for when the next interval starts
# TYPE coderd_oauth2_external_requests_rate_limit_next_reset_unix gauge
coderd_oauth2_external_requests_rate_limit_next_reset_unix{name="",resource=""} 0
# HELP coderd_oauth2_external_requests_rate_limit_remaining The remaining number of allowed requests in this interval.
# TYPE coderd_oauth2_external_requests_rate_limit_remaining gauge
coderd_oauth2_external_requests_rate_limit_remaining{name="",resource=""} 0
# HELP coderd_oauth2_external_requests_rate_limit_reset_in_seconds Seconds until the next interval
# TYPE coderd_oauth2_external_requests_rate_limit_reset_in_seconds gauge
coderd_oauth2_external_requests_rate_limit_reset_in_seconds{name="",resource=""} 0
# HELP coderd_oauth2_external_requests_rate_limit_used The number of requests made in this interval.
# TYPE coderd_oauth2_external_requests_rate_limit_used gauge
coderd_oauth2_external_requests_rate_limit_used{name="",resource=""} 0
# HELP coderd_oauth2_external_requests_total The total number of api calls made to external oauth2 providers. 'status_code' will be 0 if the request failed with no response.
# TYPE coderd_oauth2_external_requests_total counter
coderd_oauth2_external_requests_total{name="",source="",status_code=""} 0
# HELP coderd_open_file_refs_current The count of file references currently open in the file cache. Multiple references can be held for the same file.
# TYPE coderd_open_file_refs_current gauge
coderd_open_file_refs_current 0
# HELP coderd_open_file_refs_total The total number of file references ever opened in the file cache. The 'hit' label indicates if the file was loaded from the cache.
# TYPE coderd_open_file_refs_total counter
coderd_open_file_refs_total{hit=""} 0
# HELP coderd_open_files_current The count of unique files currently open in the file cache.
# TYPE coderd_open_files_current gauge
coderd_open_files_current 0
# HELP coderd_open_files_size_bytes_current The current amount of memory of all files currently open in the file cache.
# TYPE coderd_open_files_size_bytes_current gauge
coderd_open_files_size_bytes_current 0
# HELP coderd_open_files_size_bytes_total The total amount of memory ever opened in the file cache. This number never decrements.
# TYPE coderd_open_files_size_bytes_total counter
coderd_open_files_size_bytes_total 0
# HELP coderd_open_files_total The total count of unique files ever opened in the file cache.
# TYPE coderd_open_files_total counter
coderd_open_files_total 0
# HELP coderd_prebuilds_reconciliation_duration_seconds Duration of each prebuilds reconciliation cycle.
# TYPE coderd_prebuilds_reconciliation_duration_seconds histogram
coderd_prebuilds_reconciliation_duration_seconds 0
# HELP coderd_prebuilt_workspace_claim_duration_seconds Time to claim a prebuilt workspace by organization, template, and preset.
# TYPE coderd_prebuilt_workspace_claim_duration_seconds histogram
coderd_prebuilt_workspace_claim_duration_seconds{organization_name="",template_name="",preset_name=""} 0
# HELP coderd_prebuilt_workspaces_claimed_total Total number of prebuilt workspaces which were claimed by users. Claiming refers to creating a workspace with a preset selected for which eligible prebuilt workspaces are available and one is reassigned to a user.
# TYPE coderd_prebuilt_workspaces_claimed_total counter
coderd_prebuilt_workspaces_claimed_total{template_name="",preset_name="",organization_name=""} 0
# HELP coderd_prebuilt_workspaces_created_total Total number of prebuilt workspaces that have been created to meet the desired instance count of each template preset.
# TYPE coderd_prebuilt_workspaces_created_total counter
coderd_prebuilt_workspaces_created_total{template_name="",preset_name="",organization_name=""} 0
# HELP coderd_prebuilt_workspaces_desired Target number of prebuilt workspaces that should be available for each template preset.
# TYPE coderd_prebuilt_workspaces_desired gauge
coderd_prebuilt_workspaces_desired{template_name="",preset_name="",organization_name=""} 0
# HELP coderd_prebuilt_workspaces_eligible Current number of prebuilt workspaces that are eligible to be claimed by users. These are workspaces that have completed their build process with their agent reporting 'ready' status.
# TYPE coderd_prebuilt_workspaces_eligible gauge
coderd_prebuilt_workspaces_eligible{template_name="",preset_name="",organization_name=""} 0
# HELP coderd_prebuilt_workspaces_failed_total Total number of prebuilt workspaces that failed to build.
# TYPE coderd_prebuilt_workspaces_failed_total counter
coderd_prebuilt_workspaces_failed_total{template_name="",preset_name="",organization_name=""} 0
# HELP coderd_prebuilt_workspaces_metrics_last_updated The unix timestamp when the metrics related to prebuilt workspaces were last updated; these metrics are cached.
# TYPE coderd_prebuilt_workspaces_metrics_last_updated gauge
coderd_prebuilt_workspaces_metrics_last_updated 0
# HELP coderd_prebuilt_workspaces_preset_hard_limited Indicates whether a given preset has reached the hard failure limit (1 = hard-limited). Metric is omitted otherwise.
# TYPE coderd_prebuilt_workspaces_preset_hard_limited gauge
coderd_prebuilt_workspaces_preset_hard_limited{template_name="",preset_name="",organization_name=""} 0
# HELP coderd_prebuilt_workspaces_reconciliation_paused Indicates whether prebuilds reconciliation is currently paused (1 = paused, 0 = not paused).
# TYPE coderd_prebuilt_workspaces_reconciliation_paused gauge
coderd_prebuilt_workspaces_reconciliation_paused 0
# HELP coderd_prebuilt_workspaces_resource_replacements_total Total number of prebuilt workspaces whose resource(s) got replaced upon being claimed. In Terraform, drift on immutable attributes results in resource replacement. This represents a worst-case scenario for prebuilt workspaces because the pre-provisioned resource would have been recreated when claiming, thus obviating the point of pre-provisioning. See https://coder.com/docs/admin/templates/extending-templates/prebuilt-workspaces#preventing-resource-replacement
# TYPE coderd_prebuilt_workspaces_resource_replacements_total counter
coderd_prebuilt_workspaces_resource_replacements_total{template_name="",preset_name="",organization_name=""} 0
# HELP coderd_prebuilt_workspaces_running Current number of prebuilt workspaces that are in a running state. These workspaces have started successfully but may not yet be claimable by users (see coderd_prebuilt_workspaces_eligible).
# TYPE coderd_prebuilt_workspaces_running gauge
coderd_prebuilt_workspaces_running{template_name="",preset_name="",organization_name=""} 0
# HELP coderd_prometheusmetrics_agents_execution_seconds Histogram for duration of agents metrics collection in seconds.
# TYPE coderd_prometheusmetrics_agents_execution_seconds histogram
coderd_prometheusmetrics_agents_execution_seconds 0
# HELP coderd_prometheusmetrics_agentstats_execution_seconds Histogram for duration of agent stats metrics collection in seconds.
# TYPE coderd_prometheusmetrics_agentstats_execution_seconds histogram
coderd_prometheusmetrics_agentstats_execution_seconds 0
# HELP coderd_prometheusmetrics_metrics_aggregator_execution_cleanup_seconds Histogram for duration of metrics aggregator cleanup in seconds.
# TYPE coderd_prometheusmetrics_metrics_aggregator_execution_cleanup_seconds histogram
coderd_prometheusmetrics_metrics_aggregator_execution_cleanup_seconds 0
# HELP coderd_prometheusmetrics_metrics_aggregator_execution_update_seconds Histogram for duration of metrics aggregator update in seconds.
# TYPE coderd_prometheusmetrics_metrics_aggregator_execution_update_seconds histogram
coderd_prometheusmetrics_metrics_aggregator_execution_update_seconds 0
# HELP coderd_prometheusmetrics_metrics_aggregator_store_size The number of metrics stored in the aggregator
# TYPE coderd_prometheusmetrics_metrics_aggregator_store_size gauge
coderd_prometheusmetrics_metrics_aggregator_store_size 0
# HELP coderd_provisioner_job_queue_wait_seconds Time from job creation to acquisition by a provisioner daemon.
# TYPE coderd_provisioner_job_queue_wait_seconds histogram
coderd_provisioner_job_queue_wait_seconds{provisioner_type="",job_type="",transition="",build_reason=""} 0
# HELP coderd_provisionerd_job_timings_seconds The provisioner job time duration in seconds.
# TYPE coderd_provisionerd_job_timings_seconds histogram
coderd_provisionerd_job_timings_seconds{provisioner="",status=""} 0
# HELP coderd_provisionerd_jobs_current The number of currently running provisioner jobs.
# TYPE coderd_provisionerd_jobs_current gauge
coderd_provisionerd_jobs_current{provisioner=""} 0
# HELP coderd_provisionerd_num_daemons The number of provisioner daemons.
# TYPE coderd_provisionerd_num_daemons gauge
coderd_provisionerd_num_daemons 0
# HELP coderd_provisionerd_workspace_build_timings_seconds The time taken for a workspace to build.
# TYPE coderd_provisionerd_workspace_build_timings_seconds histogram
coderd_provisionerd_workspace_build_timings_seconds{template_name="",template_version="",workspace_transition="",status=""} 0
# HELP coderd_proxyhealth_health_check_duration_seconds Histogram for duration of proxy health collection in seconds.
# TYPE coderd_proxyhealth_health_check_duration_seconds histogram
coderd_proxyhealth_health_check_duration_seconds 0
# HELP coderd_proxyhealth_health_check_results This endpoint returns a number to indicate the health status. -3 (unknown), -2 (Unreachable), -1 (Unhealthy), 0 (Unregistered), 1 (Healthy)
# TYPE coderd_proxyhealth_health_check_results gauge
coderd_proxyhealth_health_check_results{proxy_id=""} 0
# HELP coderd_template_workspace_build_duration_seconds Duration from workspace build creation to agent ready, by template.
# TYPE coderd_template_workspace_build_duration_seconds histogram
coderd_template_workspace_build_duration_seconds{template_name="",organization_name="",transition="",status="",is_prebuild=""} 0
# HELP coderd_workspace_builds_enqueued_total Total number of workspace build enqueue attempts.
# TYPE coderd_workspace_builds_enqueued_total counter
coderd_workspace_builds_enqueued_total{provisioner_type="",build_reason="",transition="",status=""} 0
# HELP coderd_workspace_builds_total The number of workspaces started, updated, or deleted.
# TYPE coderd_workspace_builds_total counter
coderd_workspace_builds_total{workspace_owner="",workspace_name="",template_name="",template_version="",workspace_transition="",status=""} 0
# HELP coderd_workspace_creation_duration_seconds Time to create a workspace by organization, template, preset, and type (regular or prebuild).
# TYPE coderd_workspace_creation_duration_seconds histogram
coderd_workspace_creation_duration_seconds{organization_name="",template_name="",preset_name="",type=""} 0
# HELP coderd_workspace_creation_total Total regular (non-prebuilt) workspace creations by organization, template, and preset.
# TYPE coderd_workspace_creation_total counter
coderd_workspace_creation_total{organization_name="",template_name="",preset_name=""} 0
# HELP coderd_workspace_latest_build_status The current workspace statuses by template, transition, and owner for all non-deleted workspaces.
# TYPE coderd_workspace_latest_build_status gauge
coderd_workspace_latest_build_status{status="",template_name="",template_version="",workspace_owner="",workspace_transition=""} 0
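The listing above is Prometheus text exposition format: each metric family is a `# HELP` line, a `# TYPE` line, then one sample per label combination. As an illustration only (the real metricsdocgen tooling decodes the file into `dto.MetricFamily` values with a Prometheus decoder; this sketch is not that parser), a minimal Go program that extracts the HELP/TYPE pairs from such a listing:

```go
package main

import (
	"fmt"
	"strings"
)

// metricDoc pairs a metric name with its HELP and TYPE text, the
// three-line pattern each entry in the listing above follows.
type metricDoc struct {
	Name, Help, Type string
}

// parseMetricDocs pulls HELP/TYPE comment lines out of Prometheus text
// exposition format. Sample lines and blanks are skipped; escapes in HELP
// text are not handled.
func parseMetricDocs(text string) []metricDoc {
	byName := map[string]*metricDoc{}
	var order []string
	for _, line := range strings.Split(text, "\n") {
		// "# HELP name text..." / "# TYPE name kind" split into 4 fields.
		f := strings.SplitN(line, " ", 4)
		if len(f) != 4 || f[0] != "#" {
			continue
		}
		d := byName[f[2]]
		if d == nil {
			d = &metricDoc{Name: f[2]}
			byName[f[2]] = d
			order = append(order, f[2])
		}
		switch f[1] {
		case "HELP":
			d.Help = f[3]
		case "TYPE":
			d.Type = f[3]
		}
	}
	out := make([]metricDoc, 0, len(order))
	for _, n := range order {
		out = append(out, *byName[n])
	}
	return out
}

func main() {
	sample := `# HELP coderd_api_concurrent_requests The number of concurrent API requests.
# TYPE coderd_api_concurrent_requests gauge
coderd_api_concurrent_requests{method="",path=""} 0`
	for _, d := range parseMetricDocs(sample) {
		fmt.Printf("%s (%s): %s\n", d.Name, d.Type, d.Help)
	}
}
```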
+10 -55
@@ -16,23 +16,21 @@ import (
 )
 
 var (
-	staticMetricsFile    string
-	prometheusDocFile    string
-	generatedMetricsFile string
-	dryRun               bool
+	metricsFile       string
+	prometheusDocFile string
+	dryRun            bool
 
 	generatorPrefix = []byte("<!-- Code generated by 'make docs/admin/integrations/prometheus.md'. DO NOT EDIT -->")
 	generatorSuffix = []byte("<!-- End generated by 'make docs/admin/integrations/prometheus.md'. -->")
 )
 
 func main() {
-	flag.StringVar(&staticMetricsFile, "static-metrics", "scripts/metricsdocgen/metrics", "Path to static metrics file (manually maintained)")
-	flag.StringVar(&generatedMetricsFile, "generated-metrics", "scripts/metricsdocgen/generated_metrics", "Path to generated metrics file (from scanner)")
+	flag.StringVar(&metricsFile, "metrics-file", "scripts/metricsdocgen/metrics", "Path to Prometheus metrics file")
 	flag.StringVar(&prometheusDocFile, "prometheus-doc-file", "docs/admin/integrations/prometheus.md", "Path to Prometheus doc file")
 	flag.BoolVar(&dryRun, "dry-run", false, "Dry run")
 	flag.Parse()
 
-	metrics, err := readAndMergeMetrics()
+	metrics, err := readMetrics()
 	if err != nil {
 		log.Fatal("can't read metrics: ", err)
 	}
@@ -58,13 +56,11 @@ func main() {
 	}
 }
 
-// readMetricsFromFile reads metrics from a single Prometheus text format file.
-func readMetricsFromFile(path string) ([]*dto.MetricFamily, error) {
-	f, err := os.Open(path)
+func readMetrics() ([]*dto.MetricFamily, error) {
+	f, err := os.Open(metricsFile)
 	if err != nil {
-		return nil, xerrors.Errorf("can't open metrics file %s: %w", path, err)
+		return nil, xerrors.New("can't open metrics file")
 	}
 	defer f.Close()
 
 	var metrics []*dto.MetricFamily
@@ -75,55 +71,14 @@ func readMetricsFromFile(path string) ([]*dto.MetricFamily, error) {
 		if errors.Is(err, io.EOF) {
 			break
 		} else if err != nil {
-			return nil, xerrors.Errorf("decoding metrics from %s: %w", path, err)
+			return nil, err
 		}
 		metrics = append(metrics, &m)
 	}
-	return metrics, nil
-}
-
-// readAndMergeMetrics reads metrics from both generated and static files,
-// merges them, and returns a sorted list. Generated metrics are produced
-// by the AST scanner that extracts metric definitions from the coder source
-// code while static metrics are manually maintained (e.g., go_*, process_*,
-// external dependencies).
-// Note: Static metrics take priority over generated metrics, allowing manual
-// overrides for metrics that can't be accurately extracted by the scanner.
-func readAndMergeMetrics() ([]*dto.MetricFamily, error) {
-	generatedMetrics, err := readMetricsFromFile(generatedMetricsFile)
-	if err != nil {
-		return nil, xerrors.Errorf("reading generated metrics: %w", err)
-	}
-	staticMetrics, err := readMetricsFromFile(staticMetricsFile)
-	if err != nil {
-		return nil, xerrors.Errorf("reading static metrics: %w", err)
-	}
-
-	// Merge metrics, using a map to deduplicate by name.
-	metricsByName := make(map[string]*dto.MetricFamily)
-
-	// Add generated metrics first.
-	for _, m := range generatedMetrics {
-		metricsByName[*m.Name] = m
-	}
-
-	// Static metrics overwrite generated metrics if they exist.
-	for _, m := range staticMetrics {
-		metricsByName[*m.Name] = m
-	}
-
-	// Convert back to slice and sort.
-	var metrics []*dto.MetricFamily
-	for _, m := range metricsByName {
-		metrics = append(metrics, m)
-	}
 
 	sort.Slice(metrics, func(i, j int) bool {
-		return *metrics[i].Name < *metrics[j].Name
+		return sort.StringsAreSorted([]string{*metrics[i].Name, *metrics[j].Name})
 	})
 	return metrics, nil
 }
+809 -27
@@ -1,9 +1,58 @@
-# HELP agent_scripts_executed_total Total number of scripts executed by the Coder agent. Includes cron scheduled scripts.
-# TYPE agent_scripts_executed_total counter
-agent_scripts_executed_total{agent_name="main",success="true",template_name="docker",username="admin",workspace_name="workspace-1"} 1
+# HELP coderd_oauth2_external_requests_rate_limit_next_reset_unix Unix timestamp of the next interval
+# TYPE coderd_oauth2_external_requests_rate_limit_next_reset_unix gauge
+coderd_oauth2_external_requests_rate_limit_next_reset_unix{name="primary-github",resource="core"} 1.704835507e+09
+coderd_oauth2_external_requests_rate_limit_next_reset_unix{name="secondary-github",resource="core"} 1.704835507e+09
+# HELP coderd_oauth2_external_requests_rate_limit_remaining The remaining number of allowed requests in this interval.
+# TYPE coderd_oauth2_external_requests_rate_limit_remaining gauge
+coderd_oauth2_external_requests_rate_limit_remaining{name="primary-github",resource="core"} 4852
+coderd_oauth2_external_requests_rate_limit_remaining{name="secondary-github",resource="core"} 4867
+# HELP coderd_oauth2_external_requests_rate_limit_reset_in_seconds Seconds until the next interval
+# TYPE coderd_oauth2_external_requests_rate_limit_reset_in_seconds gauge
+coderd_oauth2_external_requests_rate_limit_reset_in_seconds{name="primary-github",resource="core"} 63.617162731
+coderd_oauth2_external_requests_rate_limit_reset_in_seconds{name="secondary-github",resource="core"} 121.82186601
+# HELP coderd_oauth2_external_requests_rate_limit The total number of allowed requests per interval.
+# TYPE coderd_oauth2_external_requests_rate_limit gauge
+coderd_oauth2_external_requests_rate_limit{name="primary-github",resource="core-unauthorized"} 5000
+coderd_oauth2_external_requests_rate_limit{name="secondary-github",resource="core-unauthorized"} 5000
+# HELP coderd_oauth2_external_requests_rate_limit_used The number of requests made in this interval.
+# TYPE coderd_oauth2_external_requests_rate_limit_used gauge
+coderd_oauth2_external_requests_rate_limit_used{name="primary-github",resource="core"} 148
+coderd_oauth2_external_requests_rate_limit_used{name="secondary-github",resource="core"} 133
+# HELP coderd_oauth2_external_requests_total The total number of api calls made to external oauth2 providers. 'status_code' will be 0 if the request failed with no response.
+# TYPE coderd_oauth2_external_requests_total counter
+coderd_oauth2_external_requests_total{name="primary-github",source="AppInstallations",status_code="200"} 12
+coderd_oauth2_external_requests_total{name="primary-github",source="Exchange",status_code="200"} 1
+coderd_oauth2_external_requests_total{name="primary-github",source="TokenSource",status_code="200"} 1
+coderd_oauth2_external_requests_total{name="primary-github",source="ValidateToken",status_code="200"} 16
+coderd_oauth2_external_requests_total{name="secondary-github",source="AppInstallations",status_code="403"} 4
+coderd_oauth2_external_requests_total{name="secondary-github",source="Exchange",status_code="200"} 2
+coderd_oauth2_external_requests_total{name="secondary-github",source="ValidateToken",status_code="200"} 5
+# HELP coderd_agents_apps Agent applications with statuses.
+# TYPE coderd_agents_apps gauge
+coderd_agents_apps{agent_name="main",app_name="code-server",health="healthy",username="admin",workspace_name="workspace-1"} 1
+coderd_agents_apps{agent_name="main",app_name="code-server",health="healthy",username="admin",workspace_name="workspace-2"} 1
+coderd_agents_apps{agent_name="main",app_name="code-server",health="healthy",username="admin",workspace_name="workspace-3"} 1
+# HELP coderd_agents_connection_latencies_seconds Agent connection latencies in seconds.
+# TYPE coderd_agents_connection_latencies_seconds gauge
+coderd_agents_connection_latencies_seconds{agent_name="main",derp_region="Coder Embedded Relay",preferred="true",username="admin",workspace_name="workspace-1"} 0.03018125
+coderd_agents_connection_latencies_seconds{agent_name="main",derp_region="Coder Embedded Relay",preferred="true",username="admin",workspace_name="workspace-2"} 0.028658416
+coderd_agents_connection_latencies_seconds{agent_name="main",derp_region="Coder Embedded Relay",preferred="true",username="admin",workspace_name="workspace-3"} 0.028041416
+# HELP coderd_agents_connections Agent connections with statuses.
+# TYPE coderd_agents_connections gauge
+coderd_agents_connections{agent_name="main",lifecycle_state="ready",status="connected",tailnet_node="nodeid:16966f7df70d8cc5",username="admin",workspace_name="workspace-3"} 1
+coderd_agents_connections{agent_name="main",lifecycle_state="start_timeout",status="connected",tailnet_node="nodeid:3237d00938be23e3",username="admin",workspace_name="workspace-2"} 1
+coderd_agents_connections{agent_name="main",lifecycle_state="start_timeout",status="connected",tailnet_node="nodeid:3779bd45d00be0eb",username="admin",workspace_name="workspace-1"} 1
+# HELP coderd_agents_up The number of active agents per workspace.
+# TYPE coderd_agents_up gauge
+coderd_agents_up{template_name="docker", username="admin",workspace_name="workspace-1"} 1
+coderd_agents_up{template_name="docker", username="admin",workspace_name="workspace-2"} 1
+coderd_agents_up{template_name="gcp", username="admin",workspace_name="workspace-3"} 1
 # HELP coderd_agentstats_startup_script_seconds The number of seconds the startup script took to execute.
 # TYPE coderd_agentstats_startup_script_seconds gauge
 coderd_agentstats_startup_script_seconds{agent_name="main",success="true",template_name="docker",username="admin",workspace_name="workspace-1"} 1.969900304
+# HELP agent_scripts_executed_total Total number of scripts executed by the Coder agent. Includes cron scheduled scripts.
+# TYPE agent_scripts_executed_total counter
+agent_scripts_executed_total{agent_name="main",success="true",template_name="docker",username="admin",workspace_name="workspace-1"} 1
 # HELP coderd_agentstats_connection_count The number of established connections by agent
 # TYPE coderd_agentstats_connection_count gauge
 coderd_agentstats_connection_count{agent_name="main",username="admin",workspace_name="workspace1"} 2
@@ -31,6 +80,694 @@ coderd_agentstats_session_count_vscode{agent_name="main",username="admin",worksp
 # HELP coderd_agentstats_tx_bytes Agent Tx bytes
 # TYPE coderd_agentstats_tx_bytes gauge
 coderd_agentstats_tx_bytes{agent_name="main",username="admin",workspace_name="workspace1"} 6643
+# HELP coderd_api_websocket_durations_seconds Websocket duration distribution of requests in seconds.
+# TYPE coderd_api_websocket_durations_seconds histogram
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/me/coordinate",le="0.001"} 0
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/me/coordinate",le="1"} 3
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/me/coordinate",le="60"} 3
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/me/coordinate",le="3600"} 4
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/me/coordinate",le="54000"} 4
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/me/coordinate",le="108000"} 4
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/me/coordinate",le="+Inf"} 4
+coderd_api_websocket_durations_seconds_sum{path="/api/v2/workspaceagents/me/coordinate"} 156.042058706
+coderd_api_websocket_durations_seconds_count{path="/api/v2/workspaceagents/me/coordinate"} 4
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/{workspaceagent}/pty",le="0.001"} 0
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/{workspaceagent}/pty",le="1"} 0
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/{workspaceagent}/pty",le="60"} 0
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/{workspaceagent}/pty",le="3600"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/{workspaceagent}/pty",le="54000"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/{workspaceagent}/pty",le="108000"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspaceagents/{workspaceagent}/pty",le="+Inf"} 1
+coderd_api_websocket_durations_seconds_sum{path="/api/v2/workspaceagents/{workspaceagent}/pty"} 119.810027963
+coderd_api_websocket_durations_seconds_count{path="/api/v2/workspaceagents/{workspaceagent}/pty"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspacebuilds/{workspacebuild}/logs",le="0.001"} 0
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspacebuilds/{workspacebuild}/logs",le="1"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspacebuilds/{workspacebuild}/logs",le="60"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspacebuilds/{workspacebuild}/logs",le="3600"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspacebuilds/{workspacebuild}/logs",le="54000"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspacebuilds/{workspacebuild}/logs",le="108000"} 1
+coderd_api_websocket_durations_seconds_bucket{path="/api/v2/workspacebuilds/{workspacebuild}/logs",le="+Inf"} 1
+coderd_api_websocket_durations_seconds_sum{path="/api/v2/workspacebuilds/{workspacebuild}/logs"} 0.015562347
+coderd_api_websocket_durations_seconds_count{path="/api/v2/workspacebuilds/{workspacebuild}/logs"} 1
+# HELP coderd_api_active_users_duration_hour The number of users that have been active within the last hour.
+# TYPE coderd_api_active_users_duration_hour gauge
+coderd_api_active_users_duration_hour 0
+# HELP coderd_api_concurrent_requests The number of concurrent API requests.
+# TYPE coderd_api_concurrent_requests gauge
+coderd_api_concurrent_requests 3
+# HELP coderd_api_concurrent_websockets The total number of concurrent API websockets.
+# TYPE coderd_api_concurrent_websockets gauge
coderd_api_concurrent_websockets 2
# HELP coderd_api_request_latencies_seconds Latency distribution of requests in seconds.
# TYPE coderd_api_request_latencies_seconds histogram
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="0.025"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="0.05"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="0.1"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="0.5"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="1"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="5"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path=""} 6.687792526
coderd_api_request_latencies_seconds_count{method="GET",path=""} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="0.005"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="0.01"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="0.025"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="0.05"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="0.1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="0.5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="10"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="30"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/appearance/",le="+Inf"} 2
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/appearance/"} 0.005080632
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/appearance/"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/applications/host/",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/applications/host/"} 0.001333428
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/applications/host/"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="0.001"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="0.005"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="0.01"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="0.025"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="0.05"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="0.1"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="0.5"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="1"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="5"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="10"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="30"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/buildinfo",le="+Inf"} 5
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/buildinfo"} 0.000471086
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/buildinfo"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="0.001"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="0.005"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="0.01"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="0.025"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="0.05"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="0.1"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="0.5"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="1"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="5"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="10"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="30"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/entitlements",le="+Inf"} 5
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/entitlements"} 0.0007040899999999999
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/entitlements"} 5
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="0.001"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="0.005"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="0.01"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="0.025"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="0.05"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="0.1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="0.5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="10"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="30"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/*",le="+Inf"} 2
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/organizations/*"} 0.000904424
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/organizations/*"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="0.05"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="0.1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="0.5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="10"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="30"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/",le="+Inf"} 2
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/organizations/{organization}/templates/"} 0.045776814
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/organizations/{organization}/templates/"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/examples",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/organizations/{organization}/templates/examples"} 0.015829003
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/organizations/{organization}/templates/examples"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}"} 0.004708487
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/templates/{template}/"} 0.004230499
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/templates/{template}/"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/daus",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/templates/{template}/daus"} 0.004370203
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/templates/{template}/daus"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templates/{template}/versions/",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/templates/{template}/versions/"} 0.00656286
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/templates/{template}/versions/"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/templateversions/{templateversion}/"} 0.010606176
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/templateversions/{templateversion}/"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/resources",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/templateversions/{templateversion}/resources"} 0.007596192
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/templateversions/{templateversion}/resources"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/templateversions/{templateversion}/schema",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/templateversions/{templateversion}/schema"} 0.00339007
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/templateversions/{templateversion}/schema"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="0.001"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/updatecheck",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/updatecheck"} 0.000390431
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/updatecheck"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/users/"} 0.003569641
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/users/"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="0.001"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/authmethods",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/users/authmethods"} 0.000148719
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/users/authmethods"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="0.001"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="0.005"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="0.01"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="0.025"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="0.05"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="0.1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="0.5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="10"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="30"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/first",le="+Inf"} 2
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/users/first"} 0.002299768
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/users/first"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="0.001"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/users/{user}"} 0.000131803
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/users/{user}"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="0.01"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="0.025"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="0.05"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="0.1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="0.5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="10"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="30"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/",le="+Inf"} 2
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/users/{user}/"} 0.012900051
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/users/{user}/"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="0.001"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="0.005"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="0.01"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="0.025"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="0.05"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="0.1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="0.5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="10"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="30"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/*",le="+Inf"} 2
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/users/{user}/*"} 0.0017976070000000001
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/users/{user}/*"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="0.01"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="0.025"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="0.05"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="0.1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="0.5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="10"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="30"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/",le="+Inf"} 2
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/"} 0.014837208000000001
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspace-quota/{user}/",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/workspace-quota/{user}/"} 0.01856146
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/workspace-quota/{user}/"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaceagents/me/metadata",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/workspaceagents/me/metadata"} 0.005921315
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/workspaceagents/me/metadata"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="0.001"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/workspaces"} 0.000824226
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/workspaces"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/workspaces/"} 0.016112682
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/workspaces/"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="0.025"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="0.05"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="0.1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="0.5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="1"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="5"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="10"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="30"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/api/v2/workspaces/{workspace}/builds/",le="+Inf"} 2
coderd_api_request_latencies_seconds_sum{method="GET",path="/api/v2/workspaces/{workspace}/builds/"} 0.022512011000000002
coderd_api_request_latencies_seconds_count{method="GET",path="/api/v2/workspaces/{workspace}/builds/"} 2
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="0.001"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="GET",path="/healthz",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="GET",path="/healthz"} 0.000109226
coderd_api_request_latencies_seconds_count{method="GET",path="/healthz"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="0.005"} 4
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="0.01"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="0.025"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="0.05"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="0.1"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="0.5"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="1"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="5"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="10"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="30"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/authcheck/",le="+Inf"} 6
coderd_api_request_latencies_seconds_sum{method="POST",path="/api/v2/authcheck/"} 0.027684736
coderd_api_request_latencies_seconds_count{method="POST",path="/api/v2/authcheck/"} 6
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="0.001"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/files",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="POST",path="/api/v2/files"} 0.000426037
coderd_api_request_latencies_seconds_count{method="POST",path="/api/v2/files"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces"} 0.014369701
coderd_api_request_latencies_seconds_count{method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="0.025"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="0.05"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/users/login",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="POST",path="/api/v2/users/login"} 0.079973393
coderd_api_request_latencies_seconds_count{method="POST",path="/api/v2/users/login"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="0.005"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="0.01"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/report-stats",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="POST",path="/api/v2/workspaceagents/me/report-stats"} 0.001123106
coderd_api_request_latencies_seconds_count{method="POST",path="/api/v2/workspaceagents/me/report-stats"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="0.001"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="0.005"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="0.01"} 0
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="0.025"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="0.05"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="0.1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="0.5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="1"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="5"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="10"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="30"} 1
coderd_api_request_latencies_seconds_bucket{method="POST",path="/api/v2/workspaceagents/me/version",le="+Inf"} 1
coderd_api_request_latencies_seconds_sum{method="POST",path="/api/v2/workspaceagents/me/version"} 0.012078959
coderd_api_request_latencies_seconds_count{method="POST",path="/api/v2/workspaceagents/me/version"} 1
# HELP coderd_api_requests_processed_total The total number of processed API requests
# TYPE coderd_api_requests_processed_total counter
coderd_api_requests_processed_total{code="200",method="GET",path=""} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/appearance/"} 2
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/applications/host/"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/buildinfo"} 5
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/entitlements"} 5
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/organizations/{organization}/templates/"} 2
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/organizations/{organization}/templates/examples"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/organizations/{organization}/templates/{templatename}"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/templates/{template}/"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/templates/{template}/daus"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/templates/{template}/versions/"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/templateversions/{templateversion}/"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/templateversions/{templateversion}/resources"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/templateversions/{templateversion}/schema"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/updatecheck"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/users/"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/users/authmethods"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/users/first"} 2
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/users/{user}/"} 2
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/users/{user}/workspace/{workspacename}/"} 2
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/workspace-quota/{user}/"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/workspaceagents/me/metadata"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/workspaces/"} 1
coderd_api_requests_processed_total{code="200",method="GET",path="/api/v2/workspaces/{workspace}/builds/"} 2
coderd_api_requests_processed_total{code="200",method="GET",path="/healthz"} 1
coderd_api_requests_processed_total{code="200",method="POST",path="/api/v2/authcheck/"} 6
coderd_api_requests_processed_total{code="200",method="POST",path="/api/v2/workspaceagents/me/report-stats"} 1
coderd_api_requests_processed_total{code="200",method="POST",path="/api/v2/workspaceagents/me/version"} 1
coderd_api_requests_processed_total{code="201",method="POST",path="/api/v2/organizations/{organization}/members/{user}/workspaces"} 1
coderd_api_requests_processed_total{code="201",method="POST",path="/api/v2/users/login"} 1
coderd_api_requests_processed_total{code="401",method="GET",path="/api/v2/organizations/*"} 2
coderd_api_requests_processed_total{code="401",method="GET",path="/api/v2/users/{user}"} 1
coderd_api_requests_processed_total{code="401",method="GET",path="/api/v2/users/{user}/*"} 2
coderd_api_requests_processed_total{code="401",method="GET",path="/api/v2/workspaces"} 1
coderd_api_requests_processed_total{code="401",method="POST",path="/api/v2/files"} 1
# HELP coderd_api_workspace_latest_build The latest workspace builds with a status.
# TYPE coderd_api_workspace_latest_build gauge
coderd_api_workspace_latest_build{status="succeeded"} 1
# HELP coderd_insights_applications_usage_seconds The application usage per template.
# TYPE coderd_insights_applications_usage_seconds gauge
coderd_insights_applications_usage_seconds{application_name="JetBrains",slug="",template_name="code-server-pod"} 1
# HELP coderd_insights_parameters The parameter usage per template.
# TYPE coderd_insights_parameters gauge
coderd_insights_parameters{parameter_name="cpu",parameter_type="string",parameter_value="8",template_name="code-server-pod"} 1
# HELP coderd_insights_templates_active_users The number of active users of the template.
# TYPE coderd_insights_templates_active_users gauge
coderd_insights_templates_active_users{template_name="code-server-pod"} 1
# HELP coderd_license_active_users The number of active users.
# TYPE coderd_license_active_users gauge
coderd_license_active_users 1
# HELP coderd_license_limit_users The user seats limit based on the active Coder license.
# TYPE coderd_license_limit_users gauge
coderd_license_limit_users 25
# HELP coderd_license_user_limit_enabled Returns 1 if the current license enforces the user limit.
# TYPE coderd_license_user_limit_enabled gauge
coderd_license_user_limit_enabled 1
# HELP coderd_metrics_collector_agents_execution_seconds Histogram for duration of agents metrics collection in seconds.
# TYPE coderd_metrics_collector_agents_execution_seconds histogram
coderd_metrics_collector_agents_execution_seconds_bucket{le="0.001"} 0
coderd_metrics_collector_agents_execution_seconds_bucket{le="0.005"} 0
coderd_metrics_collector_agents_execution_seconds_bucket{le="0.01"} 0
coderd_metrics_collector_agents_execution_seconds_bucket{le="0.025"} 0
coderd_metrics_collector_agents_execution_seconds_bucket{le="0.05"} 2
coderd_metrics_collector_agents_execution_seconds_bucket{le="0.1"} 2
coderd_metrics_collector_agents_execution_seconds_bucket{le="0.5"} 2
coderd_metrics_collector_agents_execution_seconds_bucket{le="1"} 2
coderd_metrics_collector_agents_execution_seconds_bucket{le="5"} 2
coderd_metrics_collector_agents_execution_seconds_bucket{le="10"} 2
coderd_metrics_collector_agents_execution_seconds_bucket{le="30"} 2
coderd_metrics_collector_agents_execution_seconds_bucket{le="+Inf"} 2
coderd_metrics_collector_agents_execution_seconds_sum 0.0592915
coderd_metrics_collector_agents_execution_seconds_count 2
# HELP coderd_provisionerd_job_timings_seconds The provisioner job time duration in seconds.
# TYPE coderd_provisionerd_job_timings_seconds histogram
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="1"} 0
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="10"} 0
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="30"} 1
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="60"} 1
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="300"} 1
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="600"} 1
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="1800"} 1
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="3600"} 1
coderd_provisionerd_job_timings_seconds_bucket{provisioner="terraform",status="success",le="+Inf"} 1
coderd_provisionerd_job_timings_seconds_sum{provisioner="terraform",status="success"} 14.739479476
coderd_provisionerd_job_timings_seconds_count{provisioner="terraform",status="success"} 1
# HELP coderd_provisionerd_jobs_current The number of currently running provisioner jobs.
# TYPE coderd_provisionerd_jobs_current gauge
coderd_provisionerd_jobs_current{provisioner="terraform"} 0
# HELP coderd_provisionerd_num_daemons The number of provisioner daemons.
# TYPE coderd_provisionerd_num_daemons gauge
coderd_provisionerd_num_daemons 3
# HELP coderd_provisionerd_workspace_build_timings_seconds The time taken for a workspace to build.
# TYPE coderd_provisionerd_workspace_build_timings_seconds histogram
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="1"} 0
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="10"} 0
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="30"} 0
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="60"} 1
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="300"} 1
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="600"} 1
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="1800"} 1
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="3600"} 1
coderd_provisionerd_workspace_build_timings_seconds_bucket{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START",le="+Inf"} 1
coderd_provisionerd_workspace_build_timings_seconds_sum{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START"} 31.042659852
coderd_provisionerd_workspace_build_timings_seconds_count{status="success",template_name="docker",template_version="gallant_wright0",workspace_transition="START"} 1
# HELP coderd_workspace_latest_build_status The current workspace statuses by template, transition, and owner.
# TYPE coderd_workspace_latest_build_status gauge
coderd_workspace_latest_build_status{status="failed",template_name="docker",template_version="sweet_gould9",workspace_owner="admin",workspace_transition="stop"} 1
# HELP coderd_workspace_builds_total The number of workspaces started, updated, or deleted.
# TYPE coderd_workspace_builds_total counter
coderd_workspace_builds_total{action="START",owner_email="admin@coder.com",status="failed",template_name="docker",template_version="gallant_wright0",workspace_name="test1"} 1
coderd_workspace_builds_total{action="START",owner_email="admin@coder.com",status="success",template_name="docker",template_version="gallant_wright0",workspace_name="test1"} 1
coderd_workspace_builds_total{action="STOP",owner_email="admin@coder.com",status="success",template_name="docker",template_version="gallant_wright0",workspace_name="test1"} 1
# HELP coderd_workspace_creation_total Total regular (non-prebuilt) workspace creations by organization, template, and preset.
# TYPE coderd_workspace_creation_total counter
coderd_workspace_creation_total{organization_name="{organization}",preset_name="",template_name="docker"} 1
# HELP coderd_workspace_creation_duration_seconds Time to create a workspace by organization, template, preset, and type (regular or prebuild).
# TYPE coderd_workspace_creation_duration_seconds histogram
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="1"} 0
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="10"} 1
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="30"} 1
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="60"} 1
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="300"} 1
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="600"} 1
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="1800"} 1
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="3600"} 1
coderd_workspace_creation_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild",le="+Inf"} 1
coderd_workspace_creation_duration_seconds_sum{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild"} 4.406214
coderd_workspace_creation_duration_seconds_count{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",type="prebuild"} 1
# HELP coderd_template_workspace_build_duration_seconds Duration from workspace build creation to agent ready, by template.
# TYPE coderd_template_workspace_build_duration_seconds histogram
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="1"} 0
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="10"} 1
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="30"} 1
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="60"} 1
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="300"} 1
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="600"} 1
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="1800"} 1
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="3600"} 1
coderd_template_workspace_build_duration_seconds_bucket{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start",le="+Inf"} 1
coderd_template_workspace_build_duration_seconds_sum{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start"} 7.241532
coderd_template_workspace_build_duration_seconds_count{is_prebuild="false",organization_name="{organization}",status="success",template_name="docker",transition="start"} 1
# HELP coderd_prebuilt_workspace_claim_duration_seconds Time to claim a prebuilt workspace by organization, template, and preset.
# TYPE coderd_prebuilt_workspace_claim_duration_seconds histogram
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="1"} 0
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="5"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="10"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="20"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="30"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="60"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="120"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="180"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="240"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="300"} 1
coderd_prebuilt_workspace_claim_duration_seconds_bucket{organization_name="{organization}",preset_name="Falkenstein",template_name="docker",le="+Inf"} 1
coderd_prebuilt_workspace_claim_duration_seconds_sum{organization_name="{organization}",preset_name="Falkenstein",template_name="docker"} 4.860075
coderd_prebuilt_workspace_claim_duration_seconds_count{organization_name="{organization}",preset_name="Falkenstein",template_name="docker"} 1
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 2.4056e-05
@@ -184,27 +921,72 @@ coder_aibridged_tokens_total{initiator_id="95f6752b-08cc-4cf1-97f7-c2165e3519c5"
coder_aibridged_tokens_total{initiator_id="95f6752b-08cc-4cf1-97f7-c2165e3519c5",model="gpt-5-nano",provider="openai",type="output"} 2014
coder_aibridged_tokens_total{initiator_id="95f6752b-08cc-4cf1-97f7-c2165e3519c5",model="gpt-5-nano",provider="openai",type="prompt_audio"} 0
coder_aibridged_tokens_total{initiator_id="95f6752b-08cc-4cf1-97f7-c2165e3519c5",model="gpt-5-nano",provider="openai",type="prompt_cached"} 31872
# HELP coder_aibridged_circuit_breaker_rejects_total Total number of requests rejected due to open circuit breaker.
# TYPE coder_aibridged_circuit_breaker_rejects_total counter
coder_aibridged_circuit_breaker_rejects_total{provider="",endpoint="",model=""} 0
# HELP coder_aibridged_circuit_breaker_state Current state of the circuit breaker (0=closed, 0.5=half-open, 1=open).
# TYPE coder_aibridged_circuit_breaker_state gauge
coder_aibridged_circuit_breaker_state{provider="",endpoint="",model=""} 0
# HELP coder_aibridged_circuit_breaker_trips_total Total number of times the circuit breaker transitioned to open state.
# TYPE coder_aibridged_circuit_breaker_trips_total counter
coder_aibridged_circuit_breaker_trips_total{provider="",endpoint="",model=""} 0
# HELP coder_aibridged_passthrough_total The count of requests which were not intercepted but passed through to the upstream.
# TYPE coder_aibridged_passthrough_total counter
coder_aibridged_passthrough_total{provider="",route="",method=""} 0
# HELP coder_aibridgeproxyd_connect_sessions_total Total number of CONNECT sessions established.
# TYPE coder_aibridgeproxyd_connect_sessions_total counter
coder_aibridgeproxyd_connect_sessions_total{type=""} 0
# HELP coder_aibridgeproxyd_inflight_mitm_requests Number of MITM requests currently being processed.
# TYPE coder_aibridgeproxyd_inflight_mitm_requests gauge
coder_aibridgeproxyd_inflight_mitm_requests{provider=""} 0
# HELP coder_aibridgeproxyd_mitm_requests_total Total number of MITM requests handled by the proxy.
# TYPE coder_aibridgeproxyd_mitm_requests_total counter
coder_aibridgeproxyd_mitm_requests_total{provider=""} 0
# HELP coder_aibridgeproxyd_mitm_responses_total Total number of MITM responses by HTTP status code class.
# TYPE coder_aibridgeproxyd_mitm_responses_total counter
coder_aibridgeproxyd_mitm_responses_total{code="",provider=""} 0
# HELP coderd_agentapi_metadata_batch_size Total number of metadata entries in each batch, updated before flushes.
# TYPE coderd_agentapi_metadata_batch_size histogram
coderd_agentapi_metadata_batch_size_bucket{le="10"} 11
coderd_agentapi_metadata_batch_size_bucket{le="25"} 12
coderd_agentapi_metadata_batch_size_bucket{le="50"} 12
coderd_agentapi_metadata_batch_size_bucket{le="100"} 12
coderd_agentapi_metadata_batch_size_bucket{le="150"} 12
coderd_agentapi_metadata_batch_size_bucket{le="200"} 12
coderd_agentapi_metadata_batch_size_bucket{le="250"} 12
coderd_agentapi_metadata_batch_size_bucket{le="300"} 12
coderd_agentapi_metadata_batch_size_bucket{le="350"} 12
coderd_agentapi_metadata_batch_size_bucket{le="400"} 12
coderd_agentapi_metadata_batch_size_bucket{le="450"} 12
coderd_agentapi_metadata_batch_size_bucket{le="500"} 12
coderd_agentapi_metadata_batch_size_bucket{le="+Inf"} 12
coderd_agentapi_metadata_batch_size_sum 71
coderd_agentapi_metadata_batch_size_count 12
# HELP coderd_agentapi_metadata_batch_utilization Number of metadata keys per agent in each batch, updated before flushes.
# TYPE coderd_agentapi_metadata_batch_utilization histogram
coderd_agentapi_metadata_batch_utilization_bucket{le="1"} 0
coderd_agentapi_metadata_batch_utilization_bucket{le="2"} 0
coderd_agentapi_metadata_batch_utilization_bucket{le="3"} 0
coderd_agentapi_metadata_batch_utilization_bucket{le="4"} 0
coderd_agentapi_metadata_batch_utilization_bucket{le="5"} 10
coderd_agentapi_metadata_batch_utilization_bucket{le="6"} 10
coderd_agentapi_metadata_batch_utilization_bucket{le="7"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="8"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="9"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="10"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="15"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="20"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="40"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="80"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="160"} 13
coderd_agentapi_metadata_batch_utilization_bucket{le="+Inf"} 13
coderd_agentapi_metadata_batch_utilization_sum 71
coderd_agentapi_metadata_batch_utilization_count 13
# HELP coderd_agentapi_metadata_batches_total Total number of metadata batches flushed.
# TYPE coderd_agentapi_metadata_batches_total counter
coderd_agentapi_metadata_batches_total{reason="scheduled"} 12
# HELP coderd_agentapi_metadata_dropped_keys_total Total number of metadata keys dropped due to capacity limits.
# TYPE coderd_agentapi_metadata_dropped_keys_total counter
coderd_agentapi_metadata_dropped_keys_total 0
# HELP coderd_agentapi_metadata_flush_duration_seconds Time taken to flush metadata batch to database and pubsub.
# TYPE coderd_agentapi_metadata_flush_duration_seconds histogram
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="0.01"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="0.025"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="0.05"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="0.1"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="0.25"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="0.5"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="1"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="2.5"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="5"} 12
coderd_agentapi_metadata_flush_duration_seconds_bucket{reason="scheduled",le="+Inf"} 12
coderd_agentapi_metadata_flush_duration_seconds_sum{reason="scheduled"} 0.008704553
coderd_agentapi_metadata_flush_duration_seconds_count{reason="scheduled"} 12
# HELP coderd_agentapi_metadata_flushed_total Total number of unique metadatas flushed.
# TYPE coderd_agentapi_metadata_flushed_total counter
coderd_agentapi_metadata_flushed_total 71
# HELP coderd_agentapi_metadata_publish_errors_total Total number of metadata batch pubsub publish calls that have resulted in an error.
# TYPE coderd_agentapi_metadata_publish_errors_total counter
coderd_agentapi_metadata_publish_errors_total 0
# HELP coderd_license_warnings The number of active license warnings.
# TYPE coderd_license_warnings gauge
coderd_license_warnings 0
# HELP coderd_license_errors The number of active license errors.
# TYPE coderd_license_errors gauge
coderd_license_errors 0
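The dump above is Prometheus text exposition format, where each series is preceded by `# HELP` and `# TYPE` comments. Real consumers should use `github.com/prometheus/common/expfmt`; as an illustration only, a stdlib-only sketch that recovers just that metadata (the same fields the scanner below emits):

```go
package main

import (
	"fmt"
	"strings"
)

// metricMeta holds the metadata recorded for one metric name.
type metricMeta struct {
	Help string
	Type string
}

// parseMeta extracts # HELP and # TYPE comments from Prometheus text
// exposition format, keyed by metric name. Sample lines are ignored.
func parseMeta(text string) map[string]metricMeta {
	metas := make(map[string]metricMeta)
	for _, line := range strings.Split(text, "\n") {
		// "# HELP <name> <help...>" / "# TYPE <name> <type>"
		fields := strings.SplitN(line, " ", 4)
		if len(fields) < 4 || fields[0] != "#" {
			continue
		}
		m := metas[fields[2]]
		switch fields[1] {
		case "HELP":
			m.Help = fields[3]
		case "TYPE":
			m.Type = fields[3]
		}
		metas[fields[2]] = m
	}
	return metas
}

func main() {
	sample := "# HELP coderd_license_warnings The number of active license warnings.\n" +
		"# TYPE coderd_license_warnings gauge\n" +
		"coderd_license_warnings 0\n"
	for name, meta := range parseMeta(sample) {
		fmt.Printf("%s (%s): %s\n", name, meta.Type, meta.Help)
	}
}
```

This is essentially the inverse of the scanner's `String()` output below: the scanner writes HELP/TYPE pairs, and the documentation generator only needs to read them back.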
-712
@@ -1,712 +0,0 @@
// Package main provides a tool to scan Go source files and extract Prometheus
// metric definitions. It outputs metrics in Prometheus text exposition format
// to stdout for use by the documentation generator.
//
// Usage:
//
// go run ./scripts/metricsdocgen/scanner > scripts/metricsdocgen/generated_metrics
package main
import (
"fmt"
"go/ast"
"go/parser"
"go/token"
"io"
"io/fs"
"log"
"os"
"path/filepath"
"sort"
"strings"
"golang.org/x/xerrors"
)
// Directories to scan for metric definitions, relative to the repository root.
// Add or remove directories here to control the scanner's scope.
var scanDirs = []string{
"agent",
"coderd",
"enterprise",
"provisionerd",
}
// skipPaths lists files that should be excluded from scanning. Their metrics
// must be maintained in the static metrics file instead.
// TODO(ssncferreira): Add support for resolving WrapRegistererWithPrefix to
// eliminate the need for this skip list.
var skipPaths = []string{
"enterprise/aibridgeproxyd/metrics.go",
}
// MetricType represents the type of Prometheus metric.
type MetricType string
const (
MetricTypeCounter MetricType = "counter"
MetricTypeGauge MetricType = "gauge"
MetricTypeHistogram MetricType = "histogram"
MetricTypeSummary MetricType = "summary"
)
// Metric represents a single Prometheus metric definition extracted from source code.
type Metric struct {
Name string // Full metric name (namespace_subsystem_name)
Type MetricType // counter, gauge, histogram, or summary
Help string // Description of the metric
Labels []string // Label names for this metric
}
// metricOpts holds the fields extracted from a prometheus.*Opts struct.
type metricOpts struct {
Namespace string
Subsystem string
Name string
Help string
}
// declarations holds const/var values collected from a file for resolving references.
type declarations struct {
strings map[string]string // string constants/variables
stringSlices map[string][]string // []string variables
}
// packageDeclarations holds exported string constants collected from all scanned files,
// keyed by package name. This allows resolving cross-file references.
// Note: resolution depends on directory scan order in scanDirs, i.e.,
// constants from later directories won't be available when scanning earlier ones.
var packageDeclarations = make(map[string]map[string]string)
func main() {
metrics, err := scanAllDirs()
if err != nil {
log.Fatalf("Failed to scan directories: %v", err)
}
// Deduplicate defensively; duplicates are not expected since Prometheus enforces unique metric names at registration.
uniqueMetrics := make(map[string]Metric)
for _, m := range metrics {
uniqueMetrics[m.Name] = m
}
metrics = make([]Metric, 0, len(uniqueMetrics))
for _, m := range uniqueMetrics {
metrics = append(metrics, m)
}
// Sort metrics by name for consistent output across runs.
sort.Slice(metrics, func(i, j int) bool {
return metrics[i].Name < metrics[j].Name
})
writeMetrics(metrics, os.Stdout)
log.Printf("Successfully parsed %d metrics", len(metrics))
}
// scanAllDirs scans all configured directories for metric definitions.
func scanAllDirs() ([]Metric, error) {
var allMetrics []Metric
for _, dir := range scanDirs {
metrics, err := scanDirectory(dir)
if err != nil {
return nil, xerrors.Errorf("scanning %s: %w", dir, err)
}
log.Printf("scanning %s: found %d metrics", dir, len(metrics))
allMetrics = append(allMetrics, metrics...)
}
return allMetrics, nil
}
// scanDirectory recursively walks a directory and extracts metrics from all Go files.
func scanDirectory(root string) ([]Metric, error) {
var metrics []Metric
err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
// Skip non-Go files.
if d.IsDir() || !strings.HasSuffix(path, ".go") {
return nil
}
// Skip test files.
if strings.HasSuffix(path, "_test.go") {
return nil
}
// Skip files listed in skipPaths. Compare with forward slashes so the
// skip list matches regardless of the OS path separator.
for _, sp := range skipPaths {
if filepath.ToSlash(path) == sp {
return nil
}
}
fileMetrics, err := scanFile(path)
if err != nil {
return xerrors.Errorf("scanning %s: %w", path, err)
}
if len(fileMetrics) > 0 {
log.Printf("scanning %s: found %d metrics", path, len(fileMetrics))
}
metrics = append(metrics, fileMetrics...)
return nil
})
return metrics, err
}
// scanFile parses a single Go file and extracts all Prometheus metric definitions.
func scanFile(path string) ([]Metric, error) {
fset := token.NewFileSet()
file, err := parser.ParseFile(fset, path, nil, parser.SkipObjectResolution)
if err != nil {
return nil, xerrors.Errorf("parsing file: %w", err)
}
// Collect exported constants into the global package declarations map.
collectPackageConsts(file)
// Collect file-local const and var declarations for resolving references.
decls := collectDecls(file)
var metrics []Metric
// Walk the AST looking for metric registration calls.
ast.Inspect(file, func(n ast.Node) bool {
call, ok := n.(*ast.CallExpr)
if !ok {
return true
}
metric, ok := extractMetricFromCall(call, decls)
if ok {
if metric.Help == "" {
log.Printf("WARNING: metric %q has no HELP description, skipping", metric.Name)
// Skip metrics without descriptions, they should be fixed in the source code
// or added to the static metrics file with a manual description.
return true
}
metrics = append(metrics, metric)
}
return true
})
return metrics, nil
}
// collectPackageConsts collects exported string constants from a file into
// the global packageDeclarations map, keyed by package name.
func collectPackageConsts(file *ast.File) {
pkgName := file.Name.Name
if packageDeclarations[pkgName] == nil {
packageDeclarations[pkgName] = make(map[string]string)
}
for _, decl := range file.Decls {
genDecl, ok := decl.(*ast.GenDecl)
if !ok || genDecl.Tok != token.CONST {
continue
}
for _, spec := range genDecl.Specs {
valueSpec, ok := spec.(*ast.ValueSpec)
if !ok {
continue
}
for i, name := range valueSpec.Names {
if !ast.IsExported(name.Name) {
continue
}
if i >= len(valueSpec.Values) {
continue
}
if lit, ok := valueSpec.Values[i].(*ast.BasicLit); ok {
if lit.Kind == token.STRING {
packageDeclarations[pkgName][name.Name] = strings.Trim(lit.Value, `"`)
}
}
}
}
}
}
// resolveStringExpr attempts to resolve an expression to a string value.
// Examples:
// - "my_metric": "my_metric" (string literal)
// - metricName: resolved value of metricName constant (identifier)
// - agentmetrics.LabelUsername: resolved from package constants (selector)
func resolveStringExpr(expr ast.Expr, decls declarations) string {
switch e := expr.(type) {
case *ast.BasicLit:
return strings.Trim(e.Value, `"`)
case *ast.Ident:
return decls.strings[e.Name]
case *ast.BinaryExpr:
return resolveBinaryExpr(e, decls)
case *ast.SelectorExpr:
// Handle pkg.Const syntax.
if ident, ok := e.X.(*ast.Ident); ok {
if pkgConsts, ok := packageDeclarations[ident.Name]; ok {
return pkgConsts[e.Sel.Name]
}
}
}
return ""
}
// resolveBinaryExpr resolves a binary expression (string concatenation) to a string.
// It recursively resolves the left and right operands.
// Example:
// - "coderd_" + "api_" + "requests": "coderd_api_requests"
// - namespace + "_" + metricName: resolved concatenation
func resolveBinaryExpr(expr *ast.BinaryExpr, decls declarations) string {
left := resolveStringExpr(expr.X, decls)
right := resolveStringExpr(expr.Y, decls)
if left != "" && right != "" {
return left + right
}
return ""
}
// extractStringSlice extracts a []string from a composite literal.
// Example:
// - []string{"a", "b", myConst}: ["a", "b", <resolved value of myConst>]
func extractStringSlice(lit *ast.CompositeLit, decls declarations) []string {
var labels []string
for _, elt := range lit.Elts {
if label := resolveStringExpr(elt, decls); label != "" {
labels = append(labels, label)
}
}
return labels
}
// collectDecls collects const and var declarations from a file.
// This is used to resolve constant and variable references in metric definitions.
func collectDecls(file *ast.File) declarations {
decls := declarations{
strings: make(map[string]string),
stringSlices: make(map[string][]string),
}
for _, decl := range file.Decls {
genDecl, ok := decl.(*ast.GenDecl)
if !ok {
continue
}
for _, spec := range genDecl.Specs {
valueSpec, ok := spec.(*ast.ValueSpec)
if !ok {
continue
}
for i, name := range valueSpec.Names {
if i >= len(valueSpec.Values) {
continue
}
switch v := valueSpec.Values[i].(type) {
case *ast.BasicLit:
// String literal: const name = "value"
decls.strings[name.Name] = strings.Trim(v.Value, `"`)
case *ast.BinaryExpr:
// Concatenation: const name = prefix + "suffix"
if resolved := resolveBinaryExpr(v, decls); resolved != "" {
decls.strings[name.Name] = resolved
}
case *ast.CompositeLit:
// Slice literal: var labels = []string{"a", "b"}
if resolved := extractStringSlice(v, decls); resolved != nil {
decls.stringSlices[name.Name] = resolved
}
}
}
}
}
return decls
}
// extractLabels extracts label names from an expression passed as an argument
// to a metric constructor. Handles both inline []string literals and
// variable references from decls.
// Examples:
// - []string{"label1", "label2"}: ["label1", "label2"] (inline literal)
// - myLabels: resolved value of myLabels variable (variable reference)
func extractLabels(expr ast.Expr, decls declarations) []string {
switch e := expr.(type) {
case *ast.CompositeLit:
// []string{"label1", "label2"}
return extractStringSlice(e, decls)
case *ast.Ident:
// Variable reference like 'labels'.
if labels, ok := decls.stringSlices[e.Name]; ok {
return labels
}
return nil
}
return nil
}
// extractNewDescMetric extracts a metric from a prometheus.NewDesc() call.
// Pattern: prometheus.NewDesc(name, help, variableLabels, constLabels)
// Currently, coder only uses MustNewConstMetric with NewDesc.
// TODO(ssncferreira): Add support for other MustNewConst* functions if needed.
func extractNewDescMetric(call *ast.CallExpr, decls declarations) (Metric, bool) {
// Check if this is a prometheus.NewDesc call.
sel, ok := call.Fun.(*ast.SelectorExpr)
if !ok {
return Metric{}, false
}
// Match calls that are exactly "prometheus.NewDesc()". This checks the local
// package identifier, not the resolved import path. If the prometheus package
// is imported with an alias, this will not match.
ident, ok := sel.X.(*ast.Ident)
if !ok || ident.Name != "prometheus" || sel.Sel.Name != "NewDesc" {
return Metric{}, false
}
// NewDesc requires at least 4 arguments: name, help, variableLabels, constLabels
if len(call.Args) < 4 {
return Metric{}, false
}
// Extract name (first argument).
name := resolveStringExpr(call.Args[0], decls)
if name == "" {
log.Printf("extractNewDescMetric: skipping prometheus.NewDesc() call: could not resolve metric name")
return Metric{}, false
}
// Extract help (second argument).
help := resolveStringExpr(call.Args[1], decls)
// Extract labels (third argument).
labels := extractLabels(call.Args[2], decls)
// Infer metric type from name suffix.
// TODO(ssncferreira): The actual type is determined by the MustNewConst* function
// that uses this descriptor (e.g., MustNewConstMetric with prometheus.CounterValue or
// prometheus.GaugeValue). Currently, coder only uses MustNewConstMetric, so we
// infer the type from naming conventions.
metricType := MetricTypeGauge
if strings.HasSuffix(name, "_total") || strings.HasSuffix(name, "_count") {
metricType = MetricTypeCounter
}
return Metric{
Name: name,
Type: metricType,
Help: help,
Labels: labels,
}, true
}
// parseMetricFuncName parses a prometheus function name and returns the metric type
// and whether it's a Vec type. Returns empty string if not a recognized metric function.
func parseMetricFuncName(funcName string) (MetricType, bool) {
isVec := strings.HasSuffix(funcName, "Vec")
baseName := strings.TrimSuffix(funcName, "Vec")
switch baseName {
case "NewGauge":
return MetricTypeGauge, isVec
case "NewCounter":
return MetricTypeCounter, isVec
case "NewHistogram":
return MetricTypeHistogram, isVec
case "NewSummary":
return MetricTypeSummary, isVec
}
return "", false
}
// extractOpts extracts fields from a prometheus.*Opts composite literal.
func extractOpts(expr ast.Expr, decls declarations) (metricOpts, bool) {
// Handle both direct composite literals and calls that return opts.
var lit *ast.CompositeLit
switch e := expr.(type) {
case *ast.CompositeLit:
lit = e
case *ast.UnaryExpr:
// Handle &prometheus.GaugeOpts{...}
if l, ok := e.X.(*ast.CompositeLit); ok {
lit = l
}
}
if lit == nil {
return metricOpts{}, false
}
var opts metricOpts
for _, elt := range lit.Elts {
kv, ok := elt.(*ast.KeyValueExpr)
if !ok {
continue
}
key, ok := kv.Key.(*ast.Ident)
if !ok {
continue
}
value := resolveStringExpr(kv.Value, decls)
switch key.Name {
case "Namespace":
opts.Namespace = value
case "Subsystem":
opts.Subsystem = value
case "Name":
opts.Name = value
case "Help":
opts.Help = value
}
}
return opts, opts.Name != ""
}
// buildMetricName constructs the full metric name from namespace, subsystem, and name.
func buildMetricName(namespace, subsystem, name string) string {
metricNameParts := make([]string, 0, 3)
if namespace != "" {
metricNameParts = append(metricNameParts, namespace)
}
if subsystem != "" {
metricNameParts = append(metricNameParts, subsystem)
}
if name != "" {
metricNameParts = append(metricNameParts, name)
}
// Join non-empty parts with "_" to handle optional namespace/subsystem.
// e.g., ("coderd", "", "agents_up"): "coderd_agents_up"
return strings.Join(metricNameParts, "_")
}
// extractOptsMetric extracts a metric from prometheus.New*() or prometheus.New*Vec() calls.
// Supported patterns:
// - prometheus.NewGauge(prometheus.GaugeOpts{...})
// - prometheus.NewCounter(prometheus.CounterOpts{...})
// - prometheus.NewHistogram(prometheus.HistogramOpts{...})
// - prometheus.NewSummary(prometheus.SummaryOpts{...})
// - prometheus.NewGaugeVec(prometheus.GaugeOpts{...}, labels)
// - prometheus.NewCounterVec(prometheus.CounterOpts{...}, labels)
// - prometheus.NewHistogramVec(prometheus.HistogramOpts{...}, labels)
// - prometheus.NewSummaryVec(prometheus.SummaryOpts{...}, labels)
func extractOptsMetric(call *ast.CallExpr, decls declarations) (Metric, bool) {
sel, ok := call.Fun.(*ast.SelectorExpr)
if !ok {
return Metric{}, false
}
// Match calls that are exactly "prometheus.New*(...)". This checks the local
// package identifier, not the resolved import path. If the prometheus package
// is imported with an alias, this will not match.
ident, ok := sel.X.(*ast.Ident)
if !ok || ident.Name != "prometheus" {
return Metric{}, false
}
funcName := sel.Sel.Name
metricType, isVec := parseMetricFuncName(funcName)
if metricType == "" {
return Metric{}, false
}
// Need at least one argument (the Opts struct).
if len(call.Args) < 1 {
return Metric{}, false
}
// Extract metric info from the Opts struct.
opts, ok := extractOpts(call.Args[0], decls)
if !ok {
log.Printf("extractOptsMetric: skipping prometheus.%s() call: could not extract opts", funcName)
return Metric{}, false
}
// Extract labels for Vec types.
var labels []string
if isVec && len(call.Args) >= 2 {
labels = extractLabels(call.Args[1], decls)
}
// Build the full metric name.
name := buildMetricName(opts.Namespace, opts.Subsystem, opts.Name)
if name == "" {
log.Printf("extractOptsMetric: skipping prometheus.%s() call: could not build metric name", funcName)
return Metric{}, false
}
return Metric{
Name: name,
Type: metricType,
Help: opts.Help,
Labels: labels,
}, true
}
// isPromautoCall checks if an expression is a promauto factory call.
// Matches:
// - promauto.With(reg): direct chained call
// - factory: variable that was assigned from promauto.With()
func isPromautoCall(expr ast.Expr) bool {
switch e := expr.(type) {
case *ast.CallExpr:
// Check for promauto.With(reg).New*()
sel, ok := e.Fun.(*ast.SelectorExpr)
if !ok {
return false
}
ident, ok := sel.X.(*ast.Ident)
if !ok {
return false
}
// Match calls that are exactly "promauto.With(...)". This checks the local
// package identifier, not the resolved import path. If the promauto package
// is imported with an alias, this will not match.
return ident.Name == "promauto" && sel.Sel.Name == "With"
case *ast.Ident:
// Heuristic: assume any identifier that isn't "prometheus" used as a
// receiver for New*() methods is a promauto factory variable.
// This works for the codebase patterns (e.g., factory.NewGaugeVec(...))
// but could false-positive on other receivers. Downstream extractOpts
// validation prevents incorrect metrics from being emitted.
return e.Name != "prometheus"
}
return false
}
// extractPromautoMetric extracts a metric from promauto.With().New*() or factory.New*() calls.
// Supported patterns:
// - promauto.With(reg).NewCounterVec(prometheus.CounterOpts{...}, labels)
// - factory.NewGaugeVec(prometheus.GaugeOpts{...}, labels) where factory := promauto.With(reg)
func extractPromautoMetric(call *ast.CallExpr, decls declarations) (Metric, bool) {
sel, ok := call.Fun.(*ast.SelectorExpr)
if !ok {
return Metric{}, false
}
funcName := sel.Sel.Name
metricType, isVec := parseMetricFuncName(funcName)
if metricType == "" {
return Metric{}, false
}
// Check if this is a promauto call by examining the receiver.
if !isPromautoCall(sel.X) {
return Metric{}, false
}
// Need at least one argument (the Opts struct).
if len(call.Args) < 1 {
return Metric{}, false
}
// Extract metric info from the Opts struct.
opts, ok := extractOpts(call.Args[0], decls)
if !ok {
log.Printf("extractPromautoMetric: skipping promauto.%s() call: could not extract opts", funcName)
return Metric{}, false
}
// Extract labels for Vec types.
var labels []string
if isVec && len(call.Args) >= 2 {
labels = extractLabels(call.Args[1], decls)
}
// Build the full metric name.
name := buildMetricName(opts.Namespace, opts.Subsystem, opts.Name)
if name == "" {
log.Printf("extractPromautoMetric: skipping promauto.%s() call: could not build metric name", funcName)
return Metric{}, false
}
return Metric{
Name: name,
Type: metricType,
Help: opts.Help,
Labels: labels,
}, true
}
// extractMetricFromCall attempts to extract a Metric from a function call expression.
// It returns the metric and true if successful, or an empty metric and false if
// the call is not a metric registration.
//
// Supported patterns:
// - prometheus.NewDesc() calls
// - prometheus.New*() and prometheus.New*Vec() with *Opts{}
// - promauto.With(reg).New*() and factory.New*() patterns
func extractMetricFromCall(call *ast.CallExpr, decls declarations) (Metric, bool) {
// Check for prometheus.NewDesc() pattern.
if metric, ok := extractNewDescMetric(call, decls); ok {
return metric, true
}
// Check for prometheus.New*() and prometheus.New*Vec() patterns.
if metric, ok := extractOptsMetric(call, decls); ok {
return metric, true
}
// Check for promauto.With(reg).New*() pattern.
if metric, ok := extractPromautoMetric(call, decls); ok {
return metric, true
}
return Metric{}, false
}
// String returns the metric in Prometheus text exposition format.
// Label values are empty strings and metric values are 0 since only
// metadata (name, type, help, label names) is used for documentation generation.
func (m Metric) String() string {
var buf strings.Builder
// Write HELP line.
_, _ = fmt.Fprintf(&buf, "# HELP %s %s\n", m.Name, m.Help)
// Write TYPE line.
_, _ = fmt.Fprintf(&buf, "# TYPE %s %s\n", m.Name, m.Type)
// Write a sample metric line with empty label values and zero metric value.
if len(m.Labels) > 0 {
labelPairs := make([]string, len(m.Labels))
for i, l := range m.Labels {
labelPairs[i] = fmt.Sprintf("%s=\"\"", l)
}
_, _ = fmt.Fprintf(&buf, "%s{%s} 0\n", m.Name, strings.Join(labelPairs, ","))
} else {
_, _ = fmt.Fprintf(&buf, "%s 0\n", m.Name)
}
return buf.String()
}
// writeMetrics writes all metrics in Prometheus text exposition format.
func writeMetrics(metrics []Metric, w io.Writer) {
for _, m := range metrics {
_, _ = fmt.Fprint(w, m.String())
}
}
+1 -1
@@ -1,4 +1,4 @@
<link rel="preload" href="/node_modules/@fontsource-variable/geist/files/geist-latin-wght-normal.woff2" as="font" type="font/woff2" crossorigin />
<link rel="preload" href="/node_modules/@fontsource-variable/inter/files/inter-latin-wght-normal.woff2" as="font" type="font/woff2" crossorigin />
<!-- Web terminal fonts -->
<link rel="preload" href="/node_modules/@fontsource/ibm-plex-mono/files/ibm-plex-mono-latin-400-normal.woff2" as="font" type="font/woff2" crossorigin />
-130
@@ -1,130 +0,0 @@
# Frontend Development Guidelines
## TypeScript LSP Navigation (USE FIRST)
When investigating or editing TypeScript/React code, always use the TypeScript language server tools for accurate navigation:
- **Find component/function definitions**: `mcp__typescript-language-server__definition ComponentName`
- Example: `mcp__typescript-language-server__definition LoginPage`
- **Find all usages**: `mcp__typescript-language-server__references ComponentName`
- Example: `mcp__typescript-language-server__references useAuthenticate`
- **Get type information**: `mcp__typescript-language-server__hover site/src/pages/LoginPage.tsx 42 15`
- **Check for errors**: `mcp__typescript-language-server__diagnostics site/src/pages/LoginPage.tsx`
- **Rename symbols**: `mcp__typescript-language-server__rename_symbol site/src/components/Button.tsx 10 5 PrimaryButton`
- **Edit files**: `mcp__typescript-language-server__edit_file` for multi-line edits
## Bash commands
- `pnpm dev` - Start Vite development server
- `pnpm storybook --no-open` - Run storybook tests
- `pnpm test` - Run jest unit tests
- `pnpm test -- path/to/specific.test.ts` - Run a single test file
- `pnpm lint` - Run complete linting suite (Biome + TypeScript + circular deps + knip)
- `pnpm lint:fix` - Auto-fix linting issues where possible
- `pnpm playwright:test` - Run playwright e2e tests. When running e2e tests, remind the user that a license is required to run all the tests
- `pnpm format` - Format frontend code. Always run before creating a PR
## Components
- MUI components are deprecated - migrate away from these when encountered
- Use shadcn/ui components first - check `site/src/components` for existing implementations.
- Do not use shadcn CLI - manually add components to maintain consistency
- The modules folder should contain components with business logic specific to the codebase.
- Create custom components only when shadcn alternatives don't exist
## Styling
- Emotion CSS is deprecated. Use Tailwind CSS instead.
- Use custom Tailwind classes in tailwind.config.js.
- Tailwind CSS reset is currently not used to maintain compatibility with MUI
- Responsive design - use Tailwind's responsive prefixes (sm:, md:, lg:, xl:)
- Do not use `dark:` prefix for dark mode
## Tailwind Best Practices
- Group related classes
- Use semantic color names from the theme inside `tailwind.config.js` including `content`, `surface`, `border`, `highlight` semantic tokens
- Prefer Tailwind utilities over custom CSS when possible
## General Code style
- Use ES modules (import/export) syntax, not CommonJS (require)
- Destructure imports when possible (e.g. import { foo } from 'bar')
- Prefer `for...of` over `forEach` for iteration
- **Biome** handles both linting and formatting (not ESLint/Prettier)
- Always use react-query for data fetching. Do not attempt to manage any data life cycle manually. Do not ever call an `API` function directly within a component.
## Workflow
- Be sure to typecheck when you're done making a series of code changes
- Prefer running single tests, and not the whole test suite, for performance
- Some e2e tests require a license from the user to execute
- Use pnpm format before creating a PR
- **ALWAYS use TypeScript LSP tools first** when investigating code - don't manually search files
## Pre-PR Checklist
1. `pnpm check` - Ensure no TypeScript errors
2. `pnpm lint` - Fix linting issues
3. `pnpm format` - Format code consistently
4. `pnpm test` - Run affected unit tests
5. Visual check in Storybook if component changes
## Migration (MUI → shadcn, Emotion → Tailwind)
### Migration Strategy
- Identify MUI components in current feature
- Find shadcn equivalent in existing components
- Create wrapper if needed for missing functionality
- Update tests to reflect new component structure
- Remove MUI imports once migration complete
### Migration Guidelines
- Use Tailwind classes for all new styling
- Replace Emotion `css` prop with Tailwind classes
- Leverage custom color tokens: `content-primary`, `surface-secondary`, etc.
- Use `className` with `clsx` for conditional styling
## React Rules
### 1. Purity & Immutability
- **Components and custom Hooks must be pure and idempotent**—same inputs → same output; move side-effects to event handlers or Effects.
- **Never mutate props, state, or values returned by Hooks.** Always create new objects or use the setter from useState.
### 2. Rules of Hooks
- **Only call Hooks at the top level** of a function component or another custom Hook—never in loops, conditions, nested functions, or try / catch.
- **Only call Hooks from React functions.** Regular JS functions, classes, event handlers, useMemo, etc. are off-limits.
### 3. React orchestrates execution
- **Don't call component functions directly; render them via JSX.** This keeps Hook rules intact and lets React optimize reconciliation.
- **Never pass Hooks around as values or mutate them dynamically.** Keep Hook usage static and local to each component.
### 4. State Management
- After calling a setter you'll still read the **previous** state during the same event; updates are queued and batched.
- Use **functional updates** (setX(prev => …)) whenever the next state depends on the previous state.
- Pass a function to useState(initialFn) for **lazy initialization**—it runs only on the first render.
- If the next state is Object.is-equal to the current one, React skips the re-render.
### 5. Effects
- An Effect takes a **setup** function and optional **cleanup**; React runs setup after commit, cleanup before the next setup or on unmount.
- The **dependency array must list every reactive value** referenced inside the Effect, and its length must stay constant.
- Effects run **only on the client**, never during server rendering.
- Use Effects solely to **synchronize with external systems**; if you're not “escaping React,” you probably don't need one.
### 6. Lists & Keys
- Every sibling element in a list **needs a stable, unique key prop**. Never use array indexes or Math.random(); prefer data-driven IDs.
- Keys aren't passed to children and **must not change between renders**; if you return multiple nodes per item, use `<Fragment key={id}>`
### 7. Refs & DOM Access
- useRef stores a mutable .current **without causing re-renders**.
- **Don't call Hooks (including useRef) inside loops, conditions, or map().** Extract a child component instead.
- **Avoid reading or mutating refs during render;** access them in event handlers or Effects after commit.
-1
@@ -1 +0,0 @@
AGENTS.md
+129
@@ -0,0 +1,129 @@
# Frontend Development Guidelines
## TypeScript LSP Navigation (USE FIRST)
When investigating or editing TypeScript/React code, always use the TypeScript language server tools for accurate navigation:
- **Find component/function definitions**: `mcp__typescript-language-server__definition ComponentName`
- Example: `mcp__typescript-language-server__definition LoginPage`
- **Find all usages**: `mcp__typescript-language-server__references ComponentName`
- Example: `mcp__typescript-language-server__references useAuthenticate`
- **Get type information**: `mcp__typescript-language-server__hover site/src/pages/LoginPage.tsx 42 15`
- **Check for errors**: `mcp__typescript-language-server__diagnostics site/src/pages/LoginPage.tsx`
- **Rename symbols**: `mcp__typescript-language-server__rename_symbol site/src/components/Button.tsx 10 5 PrimaryButton`
- **Edit files**: `mcp__typescript-language-server__edit_file` for multi-line edits
## Bash commands
- `pnpm dev` - Start Vite development server
- `pnpm storybook --no-open` - Run storybook tests
- `pnpm test` - Run jest unit tests
- `pnpm test -- path/to/specific.test.ts` - Run a single test file
- `pnpm lint` - Run complete linting suite (Biome + TypeScript + circular deps + knip)
- `pnpm lint:fix` - Auto-fix linting issues where possible
- `pnpm playwright:test` - Run playwright e2e tests. When running e2e tests, remind the user that a license is required to run all the tests
- `pnpm format` - Format frontend code. Always run before creating a PR
## Components
- MUI components are deprecated - migrate away from these when encountered
- Use shadcn/ui components first - check `site/src/components` for existing implementations.
- Do not use shadcn CLI - manually add components to maintain consistency
- The modules folder should contain components with business logic specific to the codebase.
- Create custom components only when shadcn alternatives don't exist
## Styling
- Emotion CSS is deprecated. Use Tailwind CSS instead.
- Use custom Tailwind classes in tailwind.config.js.
- Tailwind CSS reset is currently not used to maintain compatibility with MUI
- Responsive design - use Tailwind's responsive prefixes (sm:, md:, lg:, xl:)
- Do not use `dark:` prefix for dark mode
## Tailwind Best Practices
- Group related classes
- Use semantic color names from the theme inside `tailwind.config.js` including `content`, `surface`, `border`, `highlight` semantic tokens
- Prefer Tailwind utilities over custom CSS when possible
## General Code style
- Use ES modules (import/export) syntax, not CommonJS (require)
- Destructure imports when possible (e.g. import { foo } from 'bar')
- Prefer `for...of` over `forEach` for iteration
- **Biome** handles both linting and formatting (not ESLint/Prettier)
## Workflow
- Be sure to typecheck when you're done making a series of code changes
- Prefer running single tests, and not the whole test suite, for performance
- Some e2e tests require a license from the user to execute
- Use pnpm format before creating a PR
- **ALWAYS use TypeScript LSP tools first** when investigating code - don't manually search files
## Pre-PR Checklist
1. `pnpm check` - Ensure no TypeScript errors
2. `pnpm lint` - Fix linting issues
3. `pnpm format` - Format code consistently
4. `pnpm test` - Run affected unit tests
5. Visual check in Storybook if component changes
## Migration (MUI → shadcn, Emotion → Tailwind)
### Migration Strategy
- Identify MUI components in current feature
- Find shadcn equivalent in existing components
- Create wrapper if needed for missing functionality
- Update tests to reflect new component structure
- Remove MUI imports once migration complete
### Migration Guidelines
- Use Tailwind classes for all new styling
- Replace Emotion `css` prop with Tailwind classes
- Leverage custom color tokens: `content-primary`, `surface-secondary`, etc.
- Use `className` with `clsx` for conditional styling
## React Rules
### 1. Purity & Immutability
- **Components and custom Hooks must be pure and idempotent**—same inputs → same output; move side-effects to event handlers or Effects.
- **Never mutate props, state, or values returned by Hooks.** Always create new objects or use the setter from useState.
### 2. Rules of Hooks
- **Only call Hooks at the top level** of a function component or another custom Hook—never in loops, conditions, nested functions, or try / catch.
- **Only call Hooks from React functions.** Regular JS functions, classes, event handlers, useMemo, etc. are off-limits.
### 3. React orchestrates execution
- **Don't call component functions directly; render them via JSX.** This keeps Hook rules intact and lets React optimize reconciliation.
- **Never pass Hooks around as values or mutate them dynamically.** Keep Hook usage static and local to each component.
### 4. State Management
- After calling a setter you'll still read the **previous** state during the same event; updates are queued and batched.
- Use **functional updates** (setX(prev => …)) whenever the next state depends on the previous state.
- Pass a function to useState(initialFn) for **lazy initialization**—it runs only on the first render.
- If the next state is Object.is-equal to the current one, React skips the re-render.
### 5. Effects
- An Effect takes a **setup** function and optional **cleanup**; React runs setup after commit, cleanup before the next setup or on unmount.
- The **dependency array must list every reactive value** referenced inside the Effect, and its length must stay constant.
- Effects run **only on the client**, never during server rendering.
- Use Effects solely to **synchronize with external systems**; if you're not “escaping React,” you probably don't need one.
### 6. Lists & Keys
- Every sibling element in a list **needs a stable, unique key prop**. Never use array indexes or Math.random(); prefer data-driven IDs.
- Keys aren't passed to children and **must not change between renders**; if you return multiple nodes per item, use `<Fragment key={id}>`
### 7. Refs & DOM Access
- useRef stores a mutable .current **without causing re-renders**.
- **Don't call Hooks (including useRef) inside loops, conditions, or map().** Extract a child component instead.
- **Avoid reading or mutating refs during render;** access them in event handlers or Effects after commit.
+1 -1
@@ -62,7 +62,7 @@ test("app", async ({ context, page }) => {
const agent = await startAgent(page, token);
// Wait for the web terminal to open in a new tab
const pagePromise = context.waitForEvent("page", { timeout: 10_000 });
const pagePromise = context.waitForEvent("page");
await page.getByText(appName).click({ timeout: 10_000 });
const app = await pagePromise;
await app.waitForLoadState("domcontentloaded");
@@ -40,7 +40,6 @@ test("create workspace in auto mode", async ({ page }) => {
waitUntil: "domcontentloaded",
},
);
await page.getByRole("button", { name: /confirm and create/i }).click();
await expect(page).toHaveTitle(`${users.member.username}/${name} - Coder`);
});
@@ -54,7 +53,6 @@ test("use an existing workspace that matches the `match` parameter instead of cr
waitUntil: "domcontentloaded",
},
);
await page.getByRole("button", { name: /confirm and create/i }).click();
await expect(page).toHaveTitle(
`${users.member.username}/${prevWorkspace} - Coder`,
);
@@ -68,6 +66,5 @@ test("show error if `match` parameter is invalid", async ({ page }) => {
waitUntil: "domcontentloaded",
},
);
await page.getByRole("button", { name: /confirm and create/i }).click();
await expect(page.getByText("Invalid match value")).toBeVisible();
});
+2 -2
@@ -32,7 +32,7 @@
"test:watch": "vitest",
"test:watch-jest": "jest --watch",
"stats": "STATS=true pnpm build && npx http-server ./stats -p 8081 -c-1",
"update-emojis": "cp -rf ./node_modules/emoji-datasource-apple/img/apple/64/* ./static/emojis && cp -f ./node_modules/emoji-datasource-apple/img/apple/sheets-256/64.png ./static/emojis/spritesheet.png"
"update-emojis": "cp -rf ./node_modules/emoji-datasource-apple/img/apple/64/* ./static/emojis"
},
"dependencies": {
"@emoji-mart/data": "1.2.1",
@@ -41,7 +41,7 @@
"@emotion/css": "11.13.5",
"@emotion/react": "11.14.0",
"@emotion/styled": "11.14.1",
"@fontsource-variable/geist": "5.2.8",
"@fontsource-variable/inter": "5.2.8",
"@fontsource/fira-code": "5.2.7",
"@fontsource/ibm-plex-mono": "5.2.7",
"@fontsource/jetbrains-mono": "5.2.8",
+4 -4
@@ -37,7 +37,7 @@ importers:
'@emotion/styled':
specifier: 11.14.1
version: 11.14.1(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react@19.2.2)
'@fontsource-variable/geist':
'@fontsource-variable/inter':
specifier: 5.2.8
version: 5.2.8
'@fontsource/fira-code':
@@ -1190,8 +1190,8 @@ packages:
'@floating-ui/utils@0.2.10':
resolution: {integrity: sha512-aGTxbpbg8/b5JfU1HXSrbH3wXZuLPJcNEcZQFMxLs3oSzgtVu6nFPkbbGGUvBcUjKV2YyB9Wxxabo+HEH9tcRQ==, tarball: https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.10.tgz}
'@fontsource-variable/geist@5.2.8':
resolution: {integrity: sha512-cJ6m9e+8MQ5dCYJsLylfZrgBh6KkG4bOLckB35Tr9J/EqdkEM6QllH5PxqP1dhTvFup+HtMRPuz9xOjxXJggxw==, tarball: https://registry.npmjs.org/@fontsource-variable/geist/-/geist-5.2.8.tgz}
'@fontsource-variable/inter@5.2.8':
resolution: {integrity: sha512-kOfP2D+ykbcX/P3IFnokOhVRNoTozo5/JxhAIVYLpea/UBmCQ/YWPBfWIDuBImXX/15KH+eKh4xpEUyS2sQQGQ==, tarball: https://registry.npmjs.org/@fontsource-variable/inter/-/inter-5.2.8.tgz}
'@fontsource/fira-code@5.2.7':
resolution: {integrity: sha512-tnB9NNund9TwIym8/7DMJe573nlPEQb+fKUV5GL8TBYXjIhDvL0D7mgmNVNQUPhXp+R7RylQeiBdkA4EbOHPGQ==, tarball: https://registry.npmjs.org/@fontsource/fira-code/-/fira-code-5.2.7.tgz}
@@ -7141,7 +7141,7 @@ snapshots:
'@floating-ui/utils@0.2.10': {}
'@fontsource-variable/geist@5.2.8': {}
'@fontsource-variable/inter@5.2.8': {}
'@fontsource/fira-code@5.2.7': {}
-1
@@ -36,7 +36,6 @@ declare module "@emoji-mart/react" {
emojiButtonSize?: number;
emojiSize?: number;
emojiVersion?: string;
getSpritesheetURL?: (set: string) => string;
onEmojiSelect: (emoji: EmojiData) => void;
}
-40
@@ -2787,46 +2787,6 @@ class ApiMethods {
} satisfies TypesGen.UpdateTaskInputRequest);
};
getTaskLogs = async (
user: string,
id: string,
): Promise<TypesGen.TaskLogsResponse> => {
const response = await this.axios.get<TypesGen.TaskLogsResponse>(
`/api/v2/tasks/${user}/${id}/logs`,
);
return response.data;
};
pauseTask = async (
user: string,
id: string,
): Promise<TypesGen.PauseTaskResponse> => {
const response = await this.axios.post<TypesGen.PauseTaskResponse>(
`/api/v2/tasks/${user}/${id}/pause`,
);
return response.data;
};
resumeTask = async (
user: string,
id: string,
): Promise<TypesGen.ResumeTaskResponse> => {
const response = await this.axios.post<TypesGen.ResumeTaskResponse>(
`/api/v2/tasks/${user}/${id}/resume`,
);
return response.data;
};
sendTaskInput = async (
user: string,
id: string,
input: string,
): Promise<void> => {
await this.axios.post(`/api/v2/tasks/${user}/${id}/send`, {
input,
} satisfies TypesGen.TaskSendRequest);
};
createTaskFeedback = async (
_taskId: string,
_req: CreateTaskFeedbackRequest,
-10
@@ -1426,7 +1426,6 @@ export type CreateWorkspaceBuildReason =
| "jetbrains_connection"
| "ssh_connection"
| "task_manual_pause"
| "task_resume"
| "vscode_connection";
export const CreateWorkspaceBuildReasons: CreateWorkspaceBuildReason[] = [
@@ -1435,7 +1434,6 @@ export const CreateWorkspaceBuildReasons: CreateWorkspaceBuildReason[] = [
"jetbrains_connection",
"ssh_connection",
"task_manual_pause",
"task_resume",
"vscode_connection",
];
@@ -4357,14 +4355,6 @@ export interface Response {
readonly validations?: readonly ValidationError[];
}
// From codersdk/aitasks.go
/**
* ResumeTaskResponse represents the response from resuming a task.
*/
export interface ResumeTaskResponse {
readonly workspace_build: WorkspaceBuild | null;
}
// From codersdk/deployment.go
/**
* RetentionConfig contains configuration for data retention policies.
+19 -13
@@ -7,7 +7,13 @@ import {
TriangleAlertIcon,
XIcon,
} from "lucide-react";
import { type FC, type ReactNode, useState } from "react";
import {
type FC,
forwardRef,
type PropsWithChildren,
type ReactNode,
useState,
} from "react";
import { cn } from "utils/cn";
const alertVariants = cva(
@@ -116,7 +122,7 @@ export const Alert: FC<AlertProps> = ({
data-testid="dismiss-banner-btn"
aria-label="Dismiss"
>
<XIcon className="!p-0" />
<XIcon className="!size-icon-sm !p-0" />
</Button>
)}
</div>
@@ -125,9 +131,7 @@ export const Alert: FC<AlertProps> = ({
);
};
export const AlertDetail: React.FC<React.PropsWithChildren> = ({
children,
}) => {
export const AlertDetail: FC<PropsWithChildren> = ({ children }) => {
return (
<span className="m-0 text-sm" data-chromatic="ignore">
{children}
@@ -135,11 +139,13 @@ export const AlertDetail: React.FC<React.PropsWithChildren> = ({
);
};
export const AlertTitle: React.FC<React.ComponentPropsWithRef<"h1">> = ({
className,
...props
}) => {
return (
<h1 className={cn("m-0 mb-1 text-sm font-medium", className)} {...props} />
);
};
export const AlertTitle = forwardRef<
HTMLHeadingElement,
React.HTMLAttributes<HTMLHeadingElement>
>(({ className, ...props }, ref) => (
<h1
ref={ref}
className={cn("m-0 mb-1 text-sm font-medium", className)}
{...props}
/>
));
@@ -189,7 +189,7 @@ export function Autocomplete<TOption>({
<span className="flex items-center justify-center size-5">
<ChevronDown
className={cn(
"size-icon-lg text-content-secondary transition-transform p-0.5",
"size-4 text-content-secondary transition-transform",
isOpen && "rotate-180",
)}
/>
+10 -11
@@ -13,6 +13,7 @@
import { useTheme } from "@emotion/react";
import * as AvatarPrimitive from "@radix-ui/react-avatar";
import { cva, type VariantProps } from "class-variance-authority";
import * as React from "react";
import { getExternalImageStylesFromUrl } from "theme/externalImages";
import { cn } from "utils/cn";
@@ -57,22 +58,17 @@ export type AvatarProps = AvatarPrimitive.AvatarProps &
VariantProps<typeof avatarVariants> & {
src?: string;
fallback?: string;
ref?: React.Ref<React.ComponentRef<typeof AvatarPrimitive.Root>>;
};
export const Avatar: React.FC<AvatarProps> = ({
className,
size,
variant,
src,
fallback,
children,
...props
}) => {
const Avatar = React.forwardRef<
React.ElementRef<typeof AvatarPrimitive.Root>,
AvatarProps
>(({ className, size, variant, src, fallback, children, ...props }, ref) => {
const theme = useTheme();
return (
<AvatarPrimitive.Root
ref={ref}
className={cn(avatarVariants({ size, variant, className }))}
{...props}
>
@@ -89,4 +85,7 @@ export const Avatar: React.FC<AvatarProps> = ({
{children}
</AvatarPrimitive.Root>
);
};
});
Avatar.displayName = AvatarPrimitive.Root.displayName;
export { Avatar };
+24 -23
@@ -4,6 +4,7 @@
*/
import { Slot } from "@radix-ui/react-slot";
import { cva, type VariantProps } from "class-variance-authority";
import { forwardRef } from "react";
import { cn } from "utils/cn";
const badgeVariants = cva(
@@ -25,8 +26,6 @@ const badgeVariants = cva(
"border border-solid border-border-green bg-surface-green text-highlight-green shadow",
purple:
"border border-solid border-border-purple bg-surface-purple text-highlight-purple shadow",
magenta:
"border border-solid border-border-magenta bg-surface-magenta text-highlight-magenta shadow",
info: "border border-solid border-border-pending bg-surface-sky text-highlight-sky shadow",
},
size: {
@@ -59,26 +58,28 @@ const badgeVariants = cva(
},
);
type BadgeProps = React.ComponentPropsWithRef<"div"> &
VariantProps<typeof badgeVariants> & {
asChild?: boolean;
};
interface BadgeProps
extends React.HTMLAttributes<HTMLDivElement>,
VariantProps<typeof badgeVariants> {
asChild?: boolean;
}
export const Badge: React.FC<BadgeProps> = ({
className,
variant,
size,
border,
hover,
asChild = false,
...props
}) => {
const Comp = asChild ? Slot : "div";
export const Badge = forwardRef<HTMLDivElement, BadgeProps>(
(
{ className, variant, size, border, hover, asChild = false, ...props },
ref,
) => {
const Comp = asChild ? Slot : "div";
return (
<Comp
{...props}
className={cn(badgeVariants({ variant, size, border, hover }), className)}
/>
);
};
return (
<Comp
{...props}
ref={ref}
className={cn(
badgeVariants({ variant, size, border, hover }),
className,
)}
/>
);
},
);
+22 -16
@@ -1,7 +1,13 @@
import { Badge } from "components/Badge/Badge";
import { Stack } from "components/Stack/Stack";
import {
type FC,
forwardRef,
type HTMLAttributes,
type PropsWithChildren,
} from "react";
export const EnabledBadge: React.FC = () => {
export const EnabledBadge: FC = () => {
return (
<Badge className="option-enabled" variant="green" border="solid">
Enabled
@@ -9,27 +15,27 @@ export const EnabledBadge: React.FC = () => {
);
};
export const EntitledBadge: React.FC = () => {
export const EntitledBadge: FC = () => {
return (
<Badge border="solid" variant="green">
Entitled
</Badge>
);
};
export const DisabledBadge: React.FC<React.ComponentPropsWithRef<"div">> = ({
...props
}) => {
export const DisabledBadge: FC = forwardRef<
HTMLDivElement,
HTMLAttributes<HTMLDivElement>
>((props, ref) => {
return (
<Badge {...props} className="option-disabled">
<Badge ref={ref} {...props} className="option-disabled">
Disabled
</Badge>
);
};
});
export const EnterpriseBadge: React.FC = () => {
export const EnterpriseBadge: FC = () => {
return (
<Badge variant="purple" border="solid">
<Badge variant="info" border="solid">
Enterprise
</Badge>
);
@@ -39,17 +45,17 @@ interface PremiumBadgeProps {
children?: React.ReactNode;
}
export const PremiumBadge: React.FC<PremiumBadgeProps> = ({
export const PremiumBadge: FC<PremiumBadgeProps> = ({
children = "Premium",
}) => {
return (
<Badge variant="magenta" border="solid">
<Badge variant="purple" border="solid">
{children}
</Badge>
);
};
export const PreviewBadge: React.FC = () => {
export const PreviewBadge: FC = () => {
return (
<Badge variant="purple" border="solid">
Preview
@@ -57,7 +63,7 @@ export const PreviewBadge: React.FC = () => {
);
};
export const AlphaBadge: React.FC = () => {
export const AlphaBadge: FC = () => {
return (
<Badge variant="purple" border="solid">
Alpha
@@ -65,7 +71,7 @@ export const AlphaBadge: React.FC = () => {
);
};
export const DeprecatedBadge: React.FC = () => {
export const DeprecatedBadge: FC = () => {
return (
<Badge variant="warning" border="solid">
Deprecated
@@ -73,7 +79,7 @@ export const DeprecatedBadge: React.FC = () => {
);
};
export const Badges: React.FC<React.PropsWithChildren> = ({ children }) => {
export const Badges: FC<PropsWithChildren> = ({ children }) => {
return (
<Stack
css={{ margin: "0 0 16px" }}
+89 -91
@@ -4,59 +4,62 @@
*/
import { Slot } from "@radix-ui/react-slot";
import { MoreHorizontal } from "lucide-react";
import {
type ComponentProps,
type ComponentPropsWithoutRef,
type FC,
forwardRef,
type ReactNode,
} from "react";
import { cn } from "utils/cn";
type BreadcrumbProps = React.ComponentPropsWithRef<"nav"> & {
separator?: React.ReactNode;
};
export const Breadcrumb = forwardRef<
HTMLElement,
ComponentPropsWithoutRef<"nav"> & {
separator?: ReactNode;
}
>(({ ...props }, ref) => <nav ref={ref} aria-label="breadcrumb" {...props} />);
Breadcrumb.displayName = "Breadcrumb";
export const Breadcrumb: React.FC<BreadcrumbProps> = ({ ...props }) => {
return <nav aria-label="breadcrumb" {...props} />;
};
export const BreadcrumbList = forwardRef<
HTMLOListElement,
ComponentPropsWithoutRef<"ol">
>(({ className, ...props }, ref) => (
<ol
ref={ref}
className={cn(
"flex flex-wrap items-center text-sm pl-6 my-4 gap-1.5 break-words font-medium list-none sm:gap-2.5",
className,
)}
{...props}
/>
));
export const BreadcrumbList: React.FC<React.ComponentPropsWithRef<"ol">> = ({
className,
...props
}) => {
return (
<ol
className={cn(
"flex flex-wrap items-center text-sm pl-6 my-4 gap-1.5 break-words font-medium list-none sm:gap-2.5",
className,
)}
{...props}
/>
);
};
export const BreadcrumbItem = forwardRef<
HTMLLIElement,
ComponentPropsWithoutRef<"li">
>(({ className, ...props }, ref) => (
<li
ref={ref}
className={cn(
"inline-flex items-center gap-1.5 text-content-secondary",
className,
)}
{...props}
/>
));
export const BreadcrumbItem: React.FC<React.ComponentPropsWithRef<"li">> = ({
className,
...props
}) => {
return (
<li
className={cn(
"inline-flex items-center gap-1.5 text-content-secondary",
className,
)}
{...props}
/>
);
};
type BreadcrumbLinkProps = React.ComponentPropsWithRef<"a"> & {
asChild?: boolean;
};
export const BreadcrumbLink: React.FC<BreadcrumbLinkProps> = ({
asChild,
className,
...props
}) => {
export const BreadcrumbLink = forwardRef<
HTMLAnchorElement,
ComponentPropsWithoutRef<"a"> & {
asChild?: boolean;
}
>(({ asChild, className, ...props }, ref) => {
const Comp = asChild ? Slot : "a";
return (
<Comp
ref={ref}
className={cn(
"text-content-secondary transition-colors hover:text-content-primary no-underline hover:underline",
className,
@@ -64,54 +67,49 @@ export const BreadcrumbLink: React.FC<BreadcrumbLinkProps> = ({
{...props}
/>
);
};
});
export const BreadcrumbPage: React.FC<React.ComponentPropsWithRef<"span">> = ({
export const BreadcrumbPage = forwardRef<
HTMLSpanElement,
ComponentPropsWithoutRef<"span">
>(({ className, ...props }, ref) => (
<span
ref={ref}
aria-current="page"
className={cn("flex items-center gap-2 text-content-secondary", className)}
{...props}
/>
));
export const BreadcrumbSeparator: FC<ComponentProps<"li">> = ({
children,
className,
...props
}) => {
return (
<span
aria-current="page"
className={cn(
"flex items-center gap-2 text-content-secondary",
className,
)}
{...props}
/>
);
};
}) => (
<li
role="presentation"
aria-hidden="true"
className={cn(
"text-content-disabled [&>svg]:w-3.5 [&>svg]:h-3.5",
className,
)}
{...props}
>
/
</li>
);
export const BreadcrumbSeparator: React.FC<
Omit<React.ComponentPropsWithRef<"li">, "children">
> = ({ className, ...props }) => {
return (
<li
role="presentation"
aria-hidden="true"
className={cn(
"text-content-disabled [&>svg]:w-3.5 [&>svg]:h-3.5",
className,
)}
{...props}
>
/
</li>
);
};
export const BreadcrumbEllipsis: React.FC<
Omit<React.ComponentPropsWithRef<"span">, "children">
> = ({ className, ...props }) => {
return (
<span
role="presentation"
aria-hidden="true"
className={cn("flex h-9 w-9 items-center justify-center", className)}
{...props}
>
<MoreHorizontal className="h-4 w-4" />
<span className="sr-only">More</span>
</span>
);
};
export const BreadcrumbEllipsis: FC<ComponentProps<"span">> = ({
className,
...props
}) => (
<span
role="presentation"
aria-hidden="true"
className={cn("flex h-9 w-9 items-center justify-center", className)}
{...props}
>
<MoreHorizontal className="h-4 w-4" />
<span className="sr-only">More</span>
</span>
);
@@ -4,6 +4,7 @@
*/
import { Slot } from "@radix-ui/react-slot";
import { cva, type VariantProps } from "class-variance-authority";
import { forwardRef } from "react";
import { cn } from "utils/cn";
// Be careful when changing the child styles from the button such as images
@@ -57,34 +58,31 @@ const buttonVariants = cva(
},
);
export type ButtonProps = React.ComponentPropsWithRef<"button"> &
VariantProps<typeof buttonVariants> & {
asChild?: boolean;
};
export interface ButtonProps
extends React.ButtonHTMLAttributes<HTMLButtonElement>,
VariantProps<typeof buttonVariants> {
asChild?: boolean;
}
export const Button: React.FC<ButtonProps> = ({
className,
variant,
size,
asChild = false,
...props
}) => {
const Comp = asChild ? Slot : "button";
// We want `type` to default to `"button"` when the component is not being
// used as a `Slot`. The default behavior of any given `<button>` element is
// to submit the closest parent `<form>` because Web Platform reasons. This
// prevents that. However, we don't want to set it on non-`<button>`s when
// `asChild` is set.
// https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/button#type
if (!asChild && !props.type) {
props.type = "button";
}
return (
<Comp
{...props}
className={cn(buttonVariants({ variant, size }), className)}
/>
);
};
export const Button = forwardRef<HTMLButtonElement, ButtonProps>(
({ className, variant, size, asChild = false, ...props }, ref) => {
const Comp = asChild ? Slot : "button";
return (
<Comp
{...props}
ref={ref}
className={cn(buttonVariants({ variant, size }), className)}
// Adding default button type to make sure that buttons don't
// accidentally trigger form actions when clicked. But because
// this Button component is so polymorphic (it's also used to
// make <a> elements look like buttons), we can only safely
// default to adding the prop when we know that we're rendering
// a real HTML button instead of an arbitrary Slot. Adding the
// type attribute to any non-buttons will produce invalid HTML
type={
props.type === undefined && Comp === "button" ? "button" : props.type
}
/>
);
},
);
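Both versions of the Button diff encode the same rule in their comments: default `type="button"` only when a real `<button>` is being rendered, because a native button inside a `<form>` defaults to `type="submit"`, while a `Slot` may render an arbitrary element where a `type` attribute would be invalid HTML. A minimal sketch of that decision as a pure function (the helper name `resolveButtonType` is mine, not part of the component's API):

```typescript
// Hypothetical helper illustrating the type-defaulting rule from the diff.
// Native <button> elements default to type="submit", which submits the
// closest parent <form>; defaulting to "button" prevents accidental
// submissions. When `asChild` is set, the component may render a non-button
// element (e.g. an <a>), so we leave `type` undefined there.
type ButtonType = "button" | "submit" | "reset";

function resolveButtonType(
  asChild: boolean,
  explicitType: ButtonType | undefined,
): ButtonType | undefined {
  if (!asChild && explicitType === undefined) {
    return "button"; // safe default: never submit a form by accident
  }
  return explicitType; // respect an explicit type, or stay undefined for Slots
}
```

Either refactoring in the diff (mutating `props.type` before render, or computing the attribute inline) is equivalent to this guard; the inline form avoids mutating the props object.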
@@ -2,14 +2,7 @@
 * Copied from shadcn/ui on 01/13/2025
* @see {@link https://ui.shadcn.com/docs/components/chart}
*/
import {
type CSSProperties,
createContext,
type Ref,
useContext,
useId,
useMemo,
} from "react";
import * as React from "react";
import * as RechartsPrimitive from "recharts";
import { cn } from "utils/cn";
@@ -30,10 +23,10 @@ type ChartContextProps = {
config: ChartConfig;
};
const ChartContext = createContext<ChartContextProps | null>(null);
const ChartContext = React.createContext<ChartContextProps | null>(null);
function useChart() {
const context = useContext(ChartContext);
const context = React.useContext(ChartContext);
if (!context) {
throw new Error("useChart must be used within a <ChartContainer />");
@@ -42,31 +35,23 @@ function useChart() {
return context;
}
type ChartContainerProps = Omit<
React.ComponentPropsWithRef<"div">,
"children"
> &
Pick<
React.ComponentProps<typeof RechartsPrimitive.ResponsiveContainer>,
"children"
> & {
export const ChartContainer = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div"> & {
config: ChartConfig;
};
export const ChartContainer: React.FC<ChartContainerProps> = ({
id,
className,
children,
config,
...props
}) => {
const uniqueId = useId();
children: React.ComponentProps<
typeof RechartsPrimitive.ResponsiveContainer
>["children"];
}
>(({ id, className, children, config, ...props }, ref) => {
const uniqueId = React.useId();
const chartId = `chart-${id || uniqueId.replace(/:/g, "")}`;
return (
<ChartContext.Provider value={{ config }}>
<div
data-chart={chartId}
ref={ref}
className={cn(
"flex aspect-video justify-center text-xs",
"[&_.recharts-cartesian-axis-tick_text]:fill-muted-foreground",
@@ -94,7 +79,8 @@ export const ChartContainer: React.FC<ChartContainerProps> = ({
</div>
</ChartContext.Provider>
);
};
});
ChartContainer.displayName = "Chart";
const ChartStyle = ({ id, config }: { id: string; config: ChartConfig }) => {
const colorConfig = Object.entries(config).filter(
@@ -131,157 +117,219 @@ ${colorConfig
export const ChartTooltip = RechartsPrimitive.Tooltip;
type ChartTooltipContentProps = React.ComponentProps<
typeof RechartsPrimitive.Tooltip
> & {
className?: string;
color?: string;
hideLabel?: boolean;
hideIndicator?: boolean;
indicator?: "line" | "dot" | "dashed";
nameKey?: string;
labelKey?: string;
ref?: Ref<HTMLDivElement>;
};
export const ChartTooltipContent = React.forwardRef<
HTMLDivElement,
React.ComponentProps<typeof RechartsPrimitive.Tooltip> &
React.ComponentProps<"div"> & {
hideLabel?: boolean;
hideIndicator?: boolean;
indicator?: "line" | "dot" | "dashed";
nameKey?: string;
labelKey?: string;
}
>(
(
{
active,
payload,
className,
indicator = "dot",
hideLabel = false,
hideIndicator = false,
label,
labelFormatter,
labelClassName,
formatter,
color,
nameKey,
labelKey,
},
ref,
) => {
const { config } = useChart();
export const ChartTooltipContent: React.FC<ChartTooltipContentProps> = ({
active,
payload,
formatter,
className,
color,
hideLabel = false,
hideIndicator = false,
indicator = "dot",
nameKey,
labelKey,
label,
labelFormatter,
labelClassName,
ref,
}) => {
const { config } = useChart();
const tooltipLabel = React.useMemo(() => {
if (hideLabel || !payload?.length) {
return null;
}
const tooltipLabel = useMemo(() => {
if (hideLabel || !payload?.length) {
const [item] = payload;
const key = `${labelKey || item.dataKey || item.name || "value"}`;
const itemConfig = getPayloadConfigFromPayload(config, item, key);
const value =
!labelKey && typeof label === "string"
? config[label as keyof typeof config]?.label || label
: itemConfig?.label;
if (labelFormatter) {
return (
<div className={cn("font-medium", labelClassName)}>
{labelFormatter(value, payload)}
</div>
);
}
if (!value) {
return null;
}
return <div className={cn("font-medium", labelClassName)}>{value}</div>;
}, [
label,
labelFormatter,
payload,
hideLabel,
labelClassName,
config,
labelKey,
]);
if (!active || !payload?.length) {
return null;
}
const [item] = payload;
const key = `${labelKey || item.dataKey || item.name || "value"}`;
const itemConfig = getPayloadConfigFromPayload(config, item, key);
const value =
!labelKey && typeof label === "string"
? config[label as keyof typeof config]?.label || label
: itemConfig?.label;
const nestLabel = payload.length === 1 && indicator !== "dot";
if (labelFormatter) {
return (
<div className={cn("font-medium", labelClassName)}>
{labelFormatter(value, payload)}
return (
<div
ref={ref}
className={cn(
"grid min-w-[8rem] items-start gap-1 rounded-lg border border-solid border-border bg-surface-primary px-3 py-2 text-xs shadow-xl",
className,
)}
>
{!nestLabel ? tooltipLabel : null}
<div className="grid gap-1.5">
{payload.map((item, index) => {
const key = `${nameKey || item.name || item.dataKey || "value"}`;
const itemConfig = getPayloadConfigFromPayload(config, item, key);
const indicatorColor = color || item.payload.fill || item.color;
return (
<div
key={item.dataKey}
className={cn(
"flex w-full flex-wrap items-stretch gap-2 [&>svg]:h-2.5 [&>svg]:w-2.5 [&>svg]:text-muted-foreground",
indicator === "dot" && "items-center",
)}
>
{formatter && item?.value !== undefined && item.name ? (
formatter(item.value, item.name, item, index, item.payload)
) : (
<>
{itemConfig?.icon ? (
<itemConfig.icon />
) : (
!hideIndicator && (
<div
className={cn(
"shrink-0 rounded-[2px] border-[--color-border] bg-[--color-bg]",
{
"h-2.5 w-2.5": indicator === "dot",
"w-1": indicator === "line",
"w-0 border-[1.5px] border-dashed bg-transparent":
indicator === "dashed",
"my-0.5": nestLabel && indicator === "dashed",
},
)}
style={
{
"--color-bg": indicatorColor,
"--color-border": indicatorColor,
} as React.CSSProperties
}
/>
)
)}
<div
className={cn(
"flex flex-1 justify-between leading-none",
nestLabel ? "items-end" : "items-center",
)}
>
<div className="grid gap-1.5">
{nestLabel ? tooltipLabel : null}
<span className="text-muted-foreground">
{itemConfig?.label || item.name}
</span>
</div>
{item.value && (
<span className="font-mono font-medium tabular-nums text-foreground">
{item.value.toLocaleString()}
</span>
)}
</div>
</>
)}
</div>
);
})}
</div>
);
}
</div>
);
},
);
ChartTooltipContent.displayName = "ChartTooltip";
if (!value) {
const _ChartLegend = RechartsPrimitive.Legend;
const ChartLegendContent = React.forwardRef<
HTMLDivElement,
React.ComponentProps<"div"> &
Pick<RechartsPrimitive.LegendProps, "payload" | "verticalAlign"> & {
hideIcon?: boolean;
nameKey?: string;
}
>(
(
{ className, hideIcon = false, payload, verticalAlign = "bottom", nameKey },
ref,
) => {
const { config } = useChart();
if (!payload?.length) {
return null;
}
return <div className={cn("font-medium", labelClassName)}>{value}</div>;
}, [
label,
labelFormatter,
payload,
hideLabel,
labelClassName,
config,
labelKey,
]);
if (!active || !payload?.length) {
return null;
}
const nestLabel = payload.length === 1 && indicator !== "dot";
return (
<div
ref={ref}
className={cn(
"grid min-w-[8rem] items-start gap-1 rounded-lg border border-solid border-border bg-surface-primary px-3 py-2 text-xs shadow-xl",
className,
)}
>
{!nestLabel ? tooltipLabel : null}
<div className="grid gap-1.5">
{payload.map((item, index) => {
const key = `${nameKey || item.name || item.dataKey || "value"}`;
return (
<div
ref={ref}
className={cn(
"flex items-center justify-center gap-4",
verticalAlign === "top" ? "pb-3" : "pt-3",
className,
)}
>
{payload.map((item) => {
const key = `${nameKey || item.dataKey || "value"}`;
const itemConfig = getPayloadConfigFromPayload(config, item, key);
const indicatorColor = color || item.payload.fill || item.color;
return (
<div
key={item.dataKey}
key={item.value}
className={cn(
"flex w-full flex-wrap items-stretch gap-2 [&>svg]:h-2.5 [&>svg]:w-2.5 [&>svg]:text-muted-foreground",
indicator === "dot" && "items-center",
"flex items-center gap-1.5 [&>svg]:h-3 [&>svg]:w-3 [&>svg]:text-muted-foreground",
)}
>
{formatter && item?.value !== undefined && item.name ? (
formatter(item.value, item.name, item, index, item.payload)
{itemConfig?.icon && !hideIcon ? (
<itemConfig.icon />
) : (
<>
{itemConfig?.icon ? (
<itemConfig.icon />
) : (
!hideIndicator && (
<div
className={cn(
"shrink-0 rounded-[2px] border-[--color-border] bg-[--color-bg]",
{
"h-2.5 w-2.5": indicator === "dot",
"w-1": indicator === "line",
"w-0 border-[1.5px] border-dashed bg-transparent":
indicator === "dashed",
"my-0.5": nestLabel && indicator === "dashed",
},
)}
style={
{
"--color-bg": indicatorColor,
"--color-border": indicatorColor,
} as CSSProperties
}
/>
)
)}
<div
className={cn(
"flex flex-1 justify-between leading-none",
nestLabel ? "items-end" : "items-center",
)}
>
<div className="grid gap-1.5">
{nestLabel ? tooltipLabel : null}
<span className="text-muted-foreground">
{itemConfig?.label || item.name}
</span>
</div>
{item.value && (
<span className="font-mono font-medium tabular-nums text-foreground">
{item.value.toLocaleString()}
</span>
)}
</div>
</>
<div
className="h-2 w-2 shrink-0 rounded-[2px]"
style={{
backgroundColor: item.color,
}}
/>
)}
{itemConfig?.label}
</div>
);
})}
</div>
</div>
);
};
);
},
);
ChartLegendContent.displayName = "ChartLegend";
// Helper to extract item config from a payload.
function getPayloadConfigFromPayload(
@@ -4,40 +4,41 @@
*/
import * as CheckboxPrimitive from "@radix-ui/react-checkbox";
import { Check, Minus } from "lucide-react";
import * as React from "react";
import { cn } from "utils/cn";
/**
* To allow for an indeterminate state the checkbox must be controlled, otherwise the checked prop would remain undefined
*/
export const Checkbox: React.FC<
React.ComponentPropsWithRef<typeof CheckboxPrimitive.Root>
> = ({ className, ...props }) => {
return (
<CheckboxPrimitive.Root
className={cn(
`peer size-[18px] shrink-0 rounded-sm border border-border border-solid
export const Checkbox = React.forwardRef<
React.ElementRef<typeof CheckboxPrimitive.Root>,
React.ComponentPropsWithoutRef<typeof CheckboxPrimitive.Root>
>(({ className, ...props }, ref) => (
<CheckboxPrimitive.Root
ref={ref}
className={cn(
`peer size-[18px] shrink-0 rounded-sm border border-border border-solid
focus-visible:outline-none focus-visible:ring-2
focus-visible:ring-content-link focus-visible:ring-offset-4 focus-visible:ring-offset-surface-primary
disabled:cursor-not-allowed disabled:bg-surface-primary disabled:data-[state=checked]:bg-surface-tertiary
data-[state=unchecked]:bg-surface-primary
data-[state=checked]:bg-surface-invert-primary data-[state=checked]:text-content-invert
hover:enabled:border-border-hover hover:data-[state=checked]:bg-surface-invert-secondary`,
className,
)}
{...props}
className,
)}
{...props}
>
<CheckboxPrimitive.Indicator
className={cn("flex items-center justify-center text-current relative")}
>
<CheckboxPrimitive.Indicator
className={cn("flex items-center justify-center text-current relative")}
>
<div className="flex">
{(props.checked === true || props.defaultChecked === true) && (
<Check className="w-4 h-4" strokeWidth={2.5} />
)}
{props.checked === "indeterminate" && (
<Minus className="w-4 h-4" strokeWidth={2.5} />
)}
</div>
</CheckboxPrimitive.Indicator>
</CheckboxPrimitive.Root>
);
};
<div className="flex">
{(props.checked === true || props.defaultChecked === true) && (
<Check className="w-4 h-4" strokeWidth={2.5} />
)}
{props.checked === "indeterminate" && (
<Minus className="w-4 h-4" strokeWidth={2.5} />
)}
</div>
</CheckboxPrimitive.Indicator>
</CheckboxPrimitive.Root>
));
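The comment above the Checkbox notes that the indeterminate state only works when the component is controlled, and the indicator branch picks an icon from `checked`/`defaultChecked`. A sketch of that branch as a pure function, assuming Radix's `boolean | "indeterminate"` checked model (the name `indicatorIcon` is mine; it returns the first matching icon rather than rendering both conditions independently as the JSX does):

```typescript
// Mirrors the indicator logic in the Checkbox diff: a Check icon when the
// box is checked (or defaultChecked), a Minus for "indeterminate", and no
// icon otherwise. Radix Checkbox models checked as boolean | "indeterminate".
type CheckedState = boolean | "indeterminate";

function indicatorIcon(
  checked: CheckedState | undefined,
  defaultChecked?: boolean,
): "check" | "minus" | null {
  if (checked === true || defaultChecked === true) return "check";
  if (checked === "indeterminate") return "minus";
  return null; // unchecked: the Indicator renders nothing
}
```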
@@ -20,7 +20,7 @@ const meta: Meta<typeof Collapsible> = {
</h4>
<CollapsibleTrigger asChild>
<Button size="sm">
<ChevronsUpDown />
<ChevronsUpDown className="h-4 w-4" />
<span className="sr-only">Toggle</span>
</Button>
</CollapsibleTrigger>
@@ -1,153 +1,82 @@
import type { Meta, StoryObj } from "@storybook/react-vite";
import type { SelectFilterOption } from "components/Filter/SelectFilter";
import { useState } from "react";
import { expect, screen, userEvent, waitFor, within } from "storybook/test";
import {
Combobox,
ComboboxButton,
ComboboxContent,
ComboboxEmpty,
ComboboxInput,
ComboboxItem,
ComboboxList,
ComboboxTrigger,
} from "./Combobox";
import { Combobox } from "./Combobox";
const options: SelectFilterOption[] = [
{ value: "go", label: "Go" },
{ value: "gleam", label: "Gleam" },
{ value: "kotlin", label: "Kotlin" },
{ value: "rust", label: "Rust" },
];
const simpleOptions = ["Go", "Gleam", "Kotlin", "Rust"];
const advancedOptions: SelectFilterOption[] = [
{ value: "go", label: "Go", startIcon: "/icon/go.svg" },
{ value: "gleam", label: "Gleam", startIcon: "/icon/gleam.svg" },
const advancedOptions = [
{
value: "kotlin",
label: "Kotlin",
startIcon: "/icon/kotlin.svg",
displayName: "Go",
value: "go",
icon: "/icon/go.svg",
},
{ value: "rust", label: "Rust", startIcon: "/icon/rust.svg" },
];
{
displayName: "Gleam",
value: "gleam",
icon: "https://github.com/gleam-lang.png",
},
{
displayName: "Kotlin",
value: "kotlin",
description: "Kotlin 2.1, OpenJDK 24, gradle",
icon: "/icon/kotlin.svg",
},
{
displayName: "Rust",
value: "rust",
icon: "/icon/rust.svg",
},
] as const;
const ComboboxWithHooks = ({
optionsList = options,
options = advancedOptions,
}: {
optionsList?: SelectFilterOption[];
options?: React.ComponentProps<typeof Combobox>["options"];
}) => {
const [value, setValue] = useState<string | undefined>(undefined);
const selectedOption = optionsList.find((opt) => opt.value === value);
return (
<Combobox value={value} onValueChange={setValue}>
<ComboboxTrigger asChild>
<ComboboxButton
selectedOption={selectedOption}
placeholder="Select option"
/>
</ComboboxTrigger>
<ComboboxContent className="w-60">
<ComboboxInput placeholder="Search..." />
<ComboboxList>
{optionsList.map((option) => (
<ComboboxItem key={option.value} value={option.value}>
{option.label}
</ComboboxItem>
))}
</ComboboxList>
<ComboboxEmpty>No results found</ComboboxEmpty>
</ComboboxContent>
</Combobox>
);
};
const ComboboxWithCustomValue = ({
optionsList = options,
}: {
optionsList?: SelectFilterOption[];
}) => {
const [value, setValue] = useState<string | undefined>(undefined);
const [inputValue, setInputValue] = useState("");
const [value, setValue] = useState("");
const [open, setOpen] = useState(false);
const selectedOption = optionsList.find((opt) => opt.value === value);
const displayLabel = selectedOption?.label ?? value;
const handleKeyDown = (e: React.KeyboardEvent) => {
if (
e.key === "Enter" &&
inputValue &&
!optionsList.some((o) => o.value === inputValue)
) {
setValue(inputValue);
setInputValue("");
setOpen(false);
}
};
const [inputValue, setInputValue] = useState("");
return (
<Combobox
value={value}
onValueChange={setValue}
options={options}
placeholder="Select option"
open={open}
onOpenChange={setOpen}
>
<ComboboxTrigger asChild>
<ComboboxButton
selectedOption={
displayLabel
? { label: displayLabel, value: value ?? "" }
: undefined
}
placeholder="Select option"
/>
</ComboboxTrigger>
<ComboboxContent className="w-60">
<ComboboxInput
placeholder="Search or enter custom..."
value={inputValue}
onValueChange={setInputValue}
onKeyDown={handleKeyDown}
/>
<ComboboxList>
{optionsList.map((option) => (
<ComboboxItem key={option.value} value={option.value}>
{option.label}
</ComboboxItem>
))}
</ComboboxList>
<ComboboxEmpty>
<span>No results found</span>
{inputValue && (
<span className="block text-content-secondary text-xs mt-1">
Press Enter to use "{inputValue}"
</span>
)}
</ComboboxEmpty>
</ComboboxContent>
</Combobox>
inputValue={inputValue}
onInputChange={setInputValue}
onSelect={setValue}
onKeyDown={(e) => {
if (e.key === "Enter" && inputValue && !options.includes(inputValue)) {
setValue(inputValue);
setInputValue("");
setOpen(false);
}
}}
/>
);
};
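The story's keydown handler commits a free-typed value only on Enter, only when the input is non-empty, and only when the text is not already a known option. That guard, pulled out as a standalone function for clarity (the name `commitCustomValue` is mine, not part of the Combobox API):

```typescript
// Returns the value to commit when a key is pressed in the combobox input,
// or null when nothing should be committed. Mirrors the onKeyDown guard in
// the story: Enter + non-empty input + not an existing option.
function commitCustomValue(
  key: string,
  inputValue: string,
  options: readonly string[],
): string | null {
  if (key === "Enter" && inputValue && !options.includes(inputValue)) {
    return inputValue;
  }
  return null;
}
```

On a commit the story then clears the input and closes the popover, which is why the handler also calls `setInputValue("")` and `setOpen(false)`.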
const meta: Meta<typeof Combobox> = {
title: "components/Combobox",
component: Combobox,
args: { options: advancedOptions },
};
export default meta;
type Story = StoryObj<typeof Combobox>;
export const Default: Story = {
render: () => <ComboboxWithHooks />,
};
export const Default: Story = {};
export const WithAdvancedOptions: Story = {
render: () => <ComboboxWithHooks optionsList={advancedOptions} />,
export const SimpleOptions: Story = {
args: {
options: simpleOptions,
},
};
export const OpenCombobox: Story = {
render: () => <ComboboxWithHooks />,
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
await userEvent.click(canvas.getByRole("button"));
@@ -162,10 +91,6 @@ export const SelectOption: Story = {
const canvas = within(canvasElement);
await userEvent.click(canvas.getByRole("button"));
await userEvent.click(screen.getByText("Go"));
await waitFor(() =>
expect(canvas.getByRole("button")).toHaveTextContent("Go"),
);
},
};
@@ -175,35 +100,25 @@ export const SearchAndFilter: Story = {
const canvas = within(canvasElement);
await userEvent.click(canvas.getByRole("button"));
await userEvent.type(screen.getByRole("combobox"), "r");
await waitFor(() => {
expect(screen.getByRole("option", { name: /Rust/ })).toBeInTheDocument();
expect(
screen.queryByRole("option", { name: /^Go$/ }),
screen.queryByRole("option", { name: "Kotlin" }),
).not.toBeInTheDocument();
});
await userEvent.click(screen.getByRole("option", { name: "Rust" }));
},
};
export const WithCustomValue: Story = {
render: () => <ComboboxWithCustomValue />,
};
export const EnterCustomValue: Story = {
render: () => <ComboboxWithCustomValue />,
render: () => <ComboboxWithHooks />,
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
await userEvent.click(canvas.getByRole("button"));
await userEvent.type(screen.getByRole("combobox"), "Custom Value{enter}");
await waitFor(() =>
expect(canvas.getByRole("button")).toHaveTextContent("Custom Value"),
);
await userEvent.type(screen.getByRole("combobox"), "Swift{enter}");
},
};
export const NoResults: Story = {
render: () => <ComboboxWithCustomValue />,
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
await userEvent.click(canvas.getByRole("button"));
@@ -211,7 +126,7 @@ export const NoResults: Story = {
await waitFor(() => {
expect(screen.getByText("No results found")).toBeInTheDocument();
expect(screen.getByText(/Press Enter to use/)).toBeInTheDocument();
expect(screen.getByText("Enter custom value")).toBeInTheDocument();
});
},
};
@@ -221,17 +136,12 @@ export const ClearSelectedOption: Story = {
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
await userEvent.click(canvas.getByRole("button"));
// const goOption = screen.getByText("Go");
// First select an option
await userEvent.click(canvas.getByRole("button"));
await userEvent.click(screen.getByRole("option", { name: /Go/ }));
await waitFor(() =>
expect(canvas.getByRole("button")).toHaveTextContent("Go"),
);
// Then clear it by selecting it again (toggle behavior)
await userEvent.click(canvas.getByRole("button"));
await userEvent.click(screen.getByRole("option", { name: /Go/ }));
await userEvent.click(await screen.findByRole("option", { name: "Go" }));
// Then clear it by selecting it again
await userEvent.click(await screen.findByRole("option", { name: "Go" }));
await waitFor(() =>
expect(canvas.getByRole("button")).toHaveTextContent("Select option"),

Some files were not shown because too many files have changed in this diff.