Compare commits

...

134 Commits

Author SHA1 Message Date
Cian Johnston 4e98814e82 cdev: fix oidc port binding 2026-02-14 17:11:34 +00:00
Cian Johnston 11e5e8e7b0 fix(cdev): address all golangci-lint warnings 2026-02-14 09:50:03 +00:00
Cian Johnston a033050ead fix(cdev): add mutex and atomic write for concurrent compose operations 2026-02-13 22:03:52 +00:00
Cian Johnston d1cd7ddafc fix(cdev): strip dangling compose depends_on references
Each service's Start() calls WriteCompose() which dumps all registered
services to YAML. Docker Compose validates the entire file, including
depends_on entries referencing services not yet registered. The
load-balancer's catalog DAG only depends on Docker (not Coderd), so it
starts before coderd-0 is registered, causing:

  service "load-balancer" depends on undefined service "coderd-0"

Fix by:
1. Removing the unnecessary depends_on from the load-balancer service
   in both the runtime SetCompose call and the generate path.
2. Adding a defensive filter in WriteCompose() that strips depends_on
   entries referencing unregistered services.
2026-02-13 18:39:34 +00:00
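The defensive filter in step 2 can be sketched as a pure function over the registered services; the `ComposeService` type and function name here are assumptions for illustration, not the actual cdev code:

```go
package main

import "fmt"

// ComposeService is a minimal stand-in for a compose service entry.
type ComposeService struct {
	DependsOn []string
}

// stripDanglingDependsOn drops depends_on entries that reference
// services not (yet) registered, so WriteCompose never emits a file
// that Docker Compose rejects for an undefined service.
func stripDanglingDependsOn(services map[string]*ComposeService) {
	for _, svc := range services {
		kept := svc.DependsOn[:0]
		for _, dep := range svc.DependsOn {
			if _, ok := services[dep]; ok {
				kept = append(kept, dep)
			}
		}
		svc.DependsOn = kept
	}
}

func main() {
	services := map[string]*ComposeService{
		"load-balancer": {DependsOn: []string{"coderd-0", "docker"}},
		"docker":        {},
	}
	// coderd-0 is not registered yet, so its reference is stripped.
	stripDanglingDependsOn(services)
	fmt.Println(services["load-balancer"].DependsOn) // [docker]
}
```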
Cian Johnston 78529e5a60 feat(cdev): port load balancer to compose 2026-02-13 18:03:56 +00:00
Cian Johnston 401b4688c4 refactor(cdev): convert compose builders to chainable methods on ComposeFile 2026-02-13 18:00:50 +00:00
Cian Johnston fba1d921c1 chore(cdev): remove dockertest container helper and docker-dev scripts 2026-02-13 17:59:42 +00:00
Cian Johnston b6c5dbf994 feat(cdev): rewrite main.go for compose + add generate command 2026-02-13 17:59:42 +00:00
Cian Johnston aa1749ac08 feat(cdev): update API handlers for docker client 2026-02-13 17:59:42 +00:00
Cian Johnston e258a633dd feat(cdev): rewrite clean.go for compose 2026-02-13 17:59:41 +00:00
Cian Johnston 17d7b60d50 feat(cdev): port all services to compose 2026-02-13 17:59:34 +00:00
Cian Johnston adece8177c feat(cdev): rewrite docker service for compose management 2026-02-13 17:59:07 +00:00
Cian Johnston 2cb75e31c0 feat(cdev): add compose types and generation 2026-02-13 17:58:15 +00:00
Cian Johnston c2e99e9142 feat(cdev): add polling to air 2026-02-13 16:33:35 +00:00
Cian Johnston f7f91c5f52 fix(cdev): fix test oidc healthcheck 2026-02-13 16:33:13 +00:00
Cian Johnston 32ce4ee32e refactor(cdev): standardize container services on Docker healthchecks
Add a shared waitForHealthy helper in docker.go that polls Docker's
container health status via InspectContainer. Convert all five container
services (coderd, oidc, postgres, site, prometheus) to use Docker
healthchecks in their container configs and delegate to waitForHealthy
instead of HTTP-polling from the host.

- docker.go: Add waitForHealthy shared helper
- coderd.go: Add curl-based healthcheck, use waitForHealthy
- oidc.go: Add wget-based healthcheck, use waitForHealthy
- postgres.go: Add pg_isready healthcheck, use waitForHealthy
- site.go: Add wget-based healthcheck, use waitForHealthy
- prometheus.go: Replace inline health polling with waitForHealthy
2026-02-13 15:49:31 +00:00
Cian Johnston c18e3b1e2f fix(cdev): refine air watch config - drop sql, add exclude dirs 2026-02-13 15:38:29 +00:00
Cian Johnston 58f65e12f2 feat(cdev): add --watch flag for air hot reload of coderd 2026-02-13 15:38:24 +00:00
Cian Johnston 7c7dc9ddb9 chore: add air as Go tool dependency for coderd hot reload 2026-02-13 15:38:05 +00:00
Cian Johnston b0d6dc10a1 fix(cdev): use Docker healthcheck for Prometheus readiness instead of HTTP polling
On Docker Desktop (macOS/Windows), container bridge IPs are unreachable
from the host, causing waitForReady to always time out after 60s.

Replace direct HTTP polling of the bridge IP with a Docker healthcheck
that runs wget inside the container, then poll the container health
status via the Docker API.
2026-02-13 15:32:58 +00:00
Cian Johnston f5f5db85c0 fix(cdev): fix nginx websocket Origin/Host mismatch and add proper websocket support
- Use $http_host instead of $host to preserve port in Host header,
  fixing Origin vs Host mismatch in coderd's websocket library.
- Add map directive for conditional Connection header (upgrade vs close).
- Add proxy_read_timeout and proxy_send_timeout of 86400s to prevent
  nginx from killing idle websocket connections.
2026-02-13 15:11:18 +00:00
Cian Johnston 5a7d8ccebb feat(oidctest): add backchannelBaseURL for split-horizon OIDC discovery
Adds a backchannelBaseURL option to FakeIDP that overrides
server-to-server endpoint URLs (token, userinfo, jwks, revocation,
device auth, external auth) in the OIDC discovery response while
keeping authorization_endpoint on the issuer URL for browsers.

Also adds the -backchannel-base-url CLI flag to testidp and wires
it into the cdev OIDC container config.
2026-02-13 13:29:33 +00:00
Cian Johnston d5a2f3916d feat(cdev): add bridge networking and nginx load balancer
- Migrate all containers from host networking to cdev Docker bridge network
- Add EnsureNetwork to Docker service for bridge network management
- Add nginx-based load balancer service for round-robin across HA coderd instances
- Add per-instance port mapping (3001+ coderd, 6060+ pprof, 2112+ metrics)
- Add --instance flag to pprof command for targeting specific HA instances
- Update postgres/oidc with InternalURL/InternalIssuerURL for container-to-container comms

fix(cdev): fix OIDC login for bridge networking

The OIDC issuer URL must be browser-reachable (localhost:4500) but
coderd discovers OIDC via the bridge network (load-balancer:4500).
Set testidp -issuer to localhost:4500 for browser redirects, coderd
--oidc-issuer-url to load-balancer:4500 for discovery, and enable
--dangerous-oidc-skip-issuer-checks to tolerate the mismatch.
2026-02-13 13:29:21 +00:00
Steven Masley 39908f4c0b add url button 2026-02-12 17:52:36 -06:00
Steven Masley 5ef3b2166f add better errors when docker is not available 2026-02-12 17:38:54 -06:00
Steven Masley fdcfa63841 longer error 2026-02-12 17:22:24 -06:00
Steven Masley 05eedbf17a inject a subdomain app into template 2026-02-12 17:11:17 -06:00
Cian Johnston c1b274e47f Merge branch 'cdev-ui-rsr0' into h7n/basement-musicians-codev 2026-02-12 23:08:13 +00:00
Cian Johnston fd86c4046d feat(cdev): add start service endpoint and contextual UI buttons 2026-02-12 22:54:58 +00:00
Cian Johnston d8ccf510d4 fix(cdev): update restart/stop handlers to manage unit.Manager status
The restart and stop API handlers were calling svc.Stop()/svc.Start()
directly, bypassing unit.Manager.UpdateStatus(). This caused the status
badge to stay stuck on completed after a restart or stop.

Add RestartService() and StopService() methods to Catalog that manage
the full status lifecycle (pending -> started -> complete) with
notifySubscribers() calls at each transition. Rewrite the API handlers
to delegate to these new methods.
2026-02-12 22:46:53 +00:00
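The full status lifecycle the commit describes can be sketched as follows; the type and method names are illustrative assumptions, not the real unit.Manager API:

```go
package main

import "fmt"

type service interface {
	Stop() error
	Start() error
}

type catalog struct {
	statuses    map[string]string
	subscribers []func(name, status string)
}

// setStatus records the new status and notifies subscribers, so the
// UI badge tracks every transition instead of sticking on "completed".
func (c *catalog) setStatus(name, status string) {
	c.statuses[name] = status
	for _, notify := range c.subscribers {
		notify(name, status)
	}
}

// RestartService drives pending -> started -> complete around the
// underlying Stop/Start calls, rather than letting API handlers call
// svc.Stop()/svc.Start() directly and bypass status updates.
func (c *catalog) RestartService(name string, svc service) error {
	c.setStatus(name, "pending")
	if err := svc.Stop(); err != nil {
		return err
	}
	c.setStatus(name, "started")
	if err := svc.Start(); err != nil {
		return err
	}
	c.setStatus(name, "complete")
	return nil
}

type fakeSvc struct{}

func (fakeSvc) Stop() error  { return nil }
func (fakeSvc) Start() error { return nil }

func main() {
	c := &catalog{statuses: map[string]string{}}
	var seen []string
	c.subscribers = append(c.subscribers, func(_, status string) { seen = append(seen, status) })
	if err := c.RestartService("coderd-0", fakeSvc{}); err != nil {
		panic(err)
	}
	fmt.Println(seen) // [pending started complete]
}
```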
Steven Masley b20bb2e94d fix dockergroup detection 2026-02-12 16:35:35 -06:00
Steven Masley e35d06afac fix order and work on docker template 2026-02-12 16:14:37 -06:00
Cian Johnston 57f458d4fc fix(cdev): update restart/stop handlers to manage unit.Manager status
The restart and stop API handlers were calling svc.Stop()/svc.Start()
directly, bypassing unit.Manager.UpdateStatus(). This caused the status
badge to stay stuck on completed after a restart or stop.

Add RestartService() and StopService() methods to Catalog that manage
the full status lifecycle (pending -> started -> complete) with
notifySubscribers() calls at each transition. Rewrite the API handlers
to delegate to these new methods.
2026-02-12 22:00:36 +00:00
Cian Johnston dd4543338f feat(cdev): replace HTTP polling with SSE for status UI
- Add subscriber pattern to Catalog (Subscribe/Unsubscribe/notifySubscribers)
- Extract buildListServicesResponse helper from handleListServices
- Add handleSSE endpoint with dedup, ticker fallback, and flusher
- Register GET /api/events route
- Replace setInterval polling with EventSource in frontend
- Notify subscribers after status changes, restart, and stop operations
2026-02-12 21:37:59 +00:00
Cian Johnston 1c0b0e3bee chore(cdev): remove unused logTicker 2026-02-12 21:06:02 +00:00
Cian Johnston 877c98e22c fix(cdev): fix issue with docker port and restarting 2026-02-12 21:02:04 +00:00
Steven Masley b8a1608ea2 add some status indicator 2026-02-12 14:02:25 -06:00
Steven Masley d4924c3254 add ui 2026-02-12 13:38:29 -06:00
Steven Masley 8e7b6e66ee log image pulling 2026-02-12 11:47:26 -06:00
Steven Masley 873bc970c0 log image pulling 2026-02-12 11:42:15 -06:00
Steven Masley 60b832a7a0 add dep log 2026-02-12 11:34:27 -06:00
Steven Masley 2b267c4fd0 dry up service names 2026-02-12 11:27:00 -06:00
Cian Johnston 2c288aa48d feat(cdev): always try to add a license 2026-02-12 17:25:07 +00:00
Steven Masley f4ca1b20a3 logs wip 2026-02-12 11:19:49 -06:00
Steven Masley 403562e351 setup working 2026-02-12 11:09:39 -06:00
Steven Masley 91e71a0806 setup working 2026-02-12 10:59:59 -06:00
Cian Johnston a2cb75a915 scripts/cdev: add service label to cleanup logs 2026-02-12 16:55:44 +00:00
Cian Johnston 26ce7347cf cdev: add prometheus and provisioner to servicesToDown 2026-02-12 16:40:26 +00:00
Steven Masley 9fa936de96 add site to down 2026-02-12 10:38:51 -06:00
Steven Masley de1e53854a add frontend service 2026-02-12 10:37:42 -06:00
Cian Johnston 5b70638c20 fix(cdev): remove manual chown, override entrypoint for init container
EnsureVolume already chowns the volume to 65534:65534 on creation.
The init container now runs as the image default user (nobody) with
the entrypoint overridden to sh -c, so mkdir creates dirs with the
correct ownership without needing an explicit chown.
2026-02-12 16:20:28 +00:00
Cian Johnston b2cc4e55a9 fix(cdev): use prom/prometheus image for init container instead of busybox 2026-02-12 16:20:28 +00:00
Cian Johnston ad5aade6e4 fix(cdev): chown prometheus data dir for nobody user 2026-02-12 16:20:28 +00:00
Cian Johnston b0d4f91d2f fix(cdev): auto-pull missing Docker images in RunContainer 2026-02-12 16:20:28 +00:00
Cian Johnston 825b045762 feat(cdev): add prometheus container service 2026-02-12 16:20:28 +00:00
Cian Johnston 5bcbbd7337 feat(cdev): enable prometheus metrics endpoint on coderd 2026-02-12 16:20:28 +00:00
Steven Masley 88e6be28a1 watch images 2026-02-12 10:18:57 -06:00
Cian Johnston a3964ca963 fix(scripts/cdev): remove superfluous 'bytes' from humanize.Bytes log message 2026-02-12 11:44:07 +00:00
Cian Johnston 56410ea623 fix(cdev): resolve all golangci-lint errors in catalog/ files
- Remove unused struct field (BuildSlim.pool)
- Fix receiver naming consistency (BuildSlim d->b)
- Omit unused method receivers per revive rules
- Add checked type assertions (forcetypeassert)
- Rename confusing ensureVolume to createVolumeIfNeeded
- Add blank import justification comments
- Handle ignored errors (db.Close, resp.Body.Close, fmt.Fprintf)
- Use http.NewRequestWithContext instead of client.Get (noctx)
- Fix ineffectual assignment in container.go
- Invert if/else for early return in container.go
- Export LoggerSink type to avoid returning unexported type
- Lengthen short log messages to meet 16-char minimum
- Rename slog.F uuid field to license_id per ruleguard
2026-02-12 10:39:19 +00:00
Cian Johnston 57918bef71 fix(scripts/cdev): resolve all golangci-lint errors in cleanup/clean.go
- Remove redundant dockertest.NewPool() that overwrote pool parameter (SA4009)
- Rename CleanupContainers/Volumes/Images to Containers/Volumes/Images (stuttering)
- Use slog.Error(err) instead of slog.F("error", err) (gocritic/ruleguard)
- Lowercase log messages per linter rules (gocritic/ruleguard)
- Add error checks for ListVolumes/ListImages (ineffassign)
- Add nolint:gosec for int64->uint64 with max(0, val) guard (G115)
2026-02-12 10:39:15 +00:00
Cian Johnston 0a7b2a2c69 fix(scripts/cdev): resolve all golangci-lint errors in main.go
- Handle return values from fmt.Fprintf, fmt.Fprintln, WriteString, Flush
- Add checked type assertion for coderd service
2026-02-12 10:39:12 +00:00
Cian Johnston 495252637f fix(cdev): wait for migrations before inserting license 2026-02-12 00:07:42 +00:00
Cian Johnston a8cc769f90 refactor(cdev): move HA license check to configure phase 2026-02-11 23:54:39 +00:00
Cian Johnston c392434419 refactor(cdev): extract license helpers; require license for HA coderd 2026-02-11 23:50:04 +00:00
Cian Johnston 5031d8d12b fix(cdev): only register provisioner service when count > 0
The provisioner service was always registered in the service graph,
even when --provisioner-count=0 (the default). This cluttered the
service graph with an unused service.

Now the provisioner is created early to expose its options in help
text, but only registered in the catalog when count > 0. A Count()
accessor is added to Provisioner to support this check.
2026-02-11 23:33:25 +00:00
Cian Johnston e889d82c19 feat(cdev): require and insert license for external provisioners 2026-02-11 23:26:37 +00:00
Cian Johnston 64c8eb8728 feat(cdev): wire up Provisioner service in main.go 2026-02-11 23:17:18 +00:00
Cian Johnston ee93188d00 feat(cdev): add external Provisioner service 2026-02-11 23:16:13 +00:00
Cian Johnston 28343e44c5 feat(cdev): add CDevProvisioner service label 2026-02-11 23:13:49 +00:00
Cian Johnston 94cf4ae846 feat(cdev): add ExtraEnv/ExtraArgs to Coderd for cross-service config 2026-02-11 23:13:45 +00:00
Cian Johnston 1c68cb1c7e feat(cdev): add generic Configure[T] and ApplyConfigurations to catalog 2026-02-11 23:13:42 +00:00
Cian Johnston ce3379acaf fix(cdev): fix pprof address and healthcheck endpoint 2026-02-11 22:23:27 +00:00
Cian Johnston 5c2a25eb40 fix(cdev): resolve mutex deadlock in catalog service startup
Pass logger to ServiceBase.Start() and narrow the lock scope in
Catalog.Start() to prevent deadlock. Previously, Start() held a
write lock across wg.Wait(), while service Start() implementations
called ServiceLogger() which tried to acquire a read lock.

Now the lock is released before spawning goroutines by snapshotting
services and loggers into a local slice. The ServiceLogger method
is removed entirely since loggers are passed directly.
2026-02-11 22:23:25 +00:00
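The narrowed lock scope can be sketched as follows: snapshot the services under the lock, release it, then spawn goroutines and wait. Holding the write lock across `wg.Wait()` deadlocks as soon as a service's `Start` tries to take a read lock on the same mutex. (Type names here are assumptions.)

```go
package main

import (
	"fmt"
	"sync"
)

type catalog struct {
	mu       sync.RWMutex
	services []func()
}

// startAll snapshots the service list under the lock, then releases
// the lock BEFORE spawning goroutines and waiting, so services are
// free to take read locks during startup.
func (c *catalog) startAll() {
	c.mu.Lock()
	snapshot := make([]func(), len(c.services))
	copy(snapshot, c.services)
	c.mu.Unlock()

	var wg sync.WaitGroup
	for _, start := range snapshot {
		wg.Add(1)
		go func(start func()) {
			defer wg.Done()
			start()
		}(start)
	}
	wg.Wait()
}

func main() {
	c := &catalog{}
	started := make(chan string, 2)
	c.services = []func(){
		// Each service read-locks the catalog, as ServiceLogger did;
		// with the old write-lock-across-Wait this would deadlock.
		func() { c.mu.RLock(); defer c.mu.RUnlock(); started <- "postgres" },
		func() { c.mu.RLock(); defer c.mu.RUnlock(); started <- "coderd" },
	}
	c.startAll()
	close(started)
	fmt.Println(len(started)) // 2
}
```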
Steven Masley ce0d45b4f2 fake oidc working 2026-02-11 16:09:38 -06:00
Cian Johnston 389785b101 fix(cdev): display slog fields in log sink output 2026-02-11 21:42:46 +00:00
Cian Johnston 4c8d88a0f3 fix(cdev): log service dependency graph on startup 2026-02-11 21:34:34 +00:00
Cian Johnston 944e19480d feat(cdev): add pprof support to coderd containers and pprof CLI command 2026-02-11 21:13:42 +00:00
Cian Johnston 47f8d2efa4 fix(cdev): route docker build output through logger
The buildImage() function in oidc.go was wiring docker build
stdout/stderr directly to os.Stdout/os.Stderr, bypassing the
LogWriter pipeline that all other services use. This caused raw
BuildKit output to appear without the formatted service prefix.
2026-02-11 21:10:26 +00:00
Steven Masley 6f0f7e01d6 oidc work 2026-02-11 12:38:10 -06:00
Cian Johnston ade22c6500 feat(cdev): add PrettySink with per-service emoji logging
- Add PrettySink slog.Sink with emoji prefix, service name, and
  stdout/stderr stream indicators per log line.
- Add Emoji() to ServiceBase interface; implement on all services:
  docker (🐳), build-slim (🔨), postgres (🐘), coderd (🖥️).
- Replace SetLogger with Init(io.Writer) that builds base and
  per-service loggers from registered services.
- Add ServiceLogger(name) for services to get their own logger.
- Update Start() to use per-service loggers (no logger param).
- Switch cleanCmd/downCmd to use NewPrettySink.

refactor(cdev): merge PrettySink formatting into loggerSink

Consolidate PrettySink and loggerSink into a single type that handles
both pretty formatting and controllable Close()/done semantics. Delete
prettysink.go and remove controllableLoggerSink helper.

Update all call sites in catalog.go, coderd.go, postgres.go, and
main.go to use NewLoggerSink.
2026-02-11 18:22:10 +00:00
Steven Masley f1b98f2d9f oidc work 2026-02-11 11:54:00 -06:00
Steven Masley 90a6f1b25b coderd ha update 2026-02-11 11:01:17 -06:00
Steven Masley 86703208bb added down 2026-02-11 10:49:09 -06:00
Steven Masley 209a92688f add ha count to options 2026-02-11 10:38:56 -06:00
Steven Masley 60e44d20f8 coderd is working 2026-02-11 10:22:41 -06:00
Steven Masley 527f2795ed Add postgres 2026-02-11 09:44:18 -06:00
Steven Masley 4ecf17fffe chore: give buildslim a static name and auto remove container 2026-02-11 08:11:12 -06:00
Cian Johnston 786c82fd59 fixup! refactor(cdev): replace ad-hoc logging with slog, extract RunContainer helper 2026-02-11 13:50:29 +00:00
Cian Johnston 62c358ecf1 refactor(cdev): replace ad-hoc logging with slog, extract RunContainer helper
- Add Logger() accessor to Catalog
- Add LogWriter adapter (slog.Logger → io.WriteCloser)
- Extract Create→Attach→Start→Wait pattern into RunContainer()
- Refactor buildslim.go to use structured slog logging and RunContainer
- Refactor volumes.go chown to use RunContainer and add log messages
- Remove fmt.Println/os.Stdout/os.Stderr from buildslim.go
2026-02-11 13:48:51 +00:00
Steven Masley 68a43b1d84 use labels 2026-02-11 07:44:43 -06:00
Steven Masley 121dfc0bce add todo, commit labels 2026-02-11 07:44:16 -06:00
Steven Masley 6b52b3fbc9 cleanup moved and uses a logger 2026-02-11 07:41:34 -06:00
Steven Masley d87f0de67c Add in basic labels 2026-02-11 07:30:57 -06:00
Cian Johnston 64fbaf9361 fix(cdev): use /mnt/volume mount path for chown container
Docker rejects mounting to '/' as a volume destination. Mount to
/mnt/volume instead so the ephemeral chown container can set correct
ownership on newly created volumes.
2026-02-11 13:18:08 +00:00
Cian Johnston e2d615c106 refactor(cdev): move volume management into Docker service as lazy helper
Remove standalone Volume service from the catalog DAG. Volumes are now
created lazily via Docker.EnsureVolume(), which uses sync.Once to
guarantee each volume is created at most once.

- Add VolumeOptions, EnsureVolume, ensureVolume, chownVolume to Docker
- Simplify BuildSlim.DependsOn to only depend on Docker
- Delete catalog/volumes.go
- Remove dead constants and volume registrations from main.go
2026-02-11 13:05:31 +00:00
Cian Johnston 7d47de461d Merge branch 'dogfood-docker-16pm' into h7n/basement-musicians-codev 2026-02-11 12:32:45 +00:00
Cian Johnston b76726cdd0 fix(dogfood): bump dive and kube-linter for arm64 support 2026-02-11 12:15:43 +00:00
Cian Johnston 9c7091778a fix(dogfood): use multi-arch alpine:3.18 for proto stage
The previous image was pinned to an amd64-only digest of a coder
mirror of alpine:3.18. Replace with the official multi-arch
alpine:3.18 tag so the proto stage works on both amd64 and arm64.
2026-02-11 12:05:42 +00:00
Cian Johnston 109addd504 fix(dogfood): make Dockerfile compatible with linux/arm64
Parameterize all hardcoded amd64/x86_64 binary download URLs using
Docker BuildKit's TARGETARCH variable. This enables building the
dogfood image on ARM machines without code changes.

Changes:
- Add ARG TARGETARCH to go, proto, and final build stages
- Make Go checksum selection conditional via case statement
- Replace ~27 hardcoded architecture references in binary download
  URLs with TARGETARCH or mapped variables (ALT_ARCH, TRIVY_ARCH,
  BUN_ARCH, BUN_DIR, KUBE_LINTER_SUFFIX)
- No behavioral change for amd64 builds
2026-02-11 11:40:43 +00:00
Cian Johnston 2e07f05f4c fix catalog import 2026-02-11 11:40:23 +00:00
Cian Johnston 0c263a3f59 go mod tidy 2026-02-11 11:26:37 +00:00
Steven Masley 2977e14ed1 work on cleanup 2026-02-10 17:50:33 -06:00
Steven Masley 5865f56709 capturing logs at least 2026-02-10 15:14:47 -06:00
Steven Masley 5f27c0c8d3 working towards building the slim binary 2026-02-10 14:52:53 -06:00
Steven Masley 645f711b65 add labels to containers 2026-02-10 12:17:15 -06:00
Steven Masley 76a18b3514 chore: name compose network, throw slim-build on host network
slim-build runs containers as part of the build process
2026-02-10 09:09:36 -06:00
Cian Johnston 9f2bd2e0b8 make -j 2026-02-10 12:58:41 +00:00
Cian Johnston baf23c1911 fix: copy correctly-named slim binaries from site/out/bin
make build-slim produces build/coder-slim_{version}_{os}_{arch} but
also copies them to site/out/bin/coder-{os}-{arch} with the names
the bin handler expects. Copy from site/out/bin/ so the agent
download endpoint finds them.
2026-02-10 12:48:26 +00:00
Cian Johnston e42918246d fix: use init-volumes service to fix permissions for coder user
Replace user: 0:0 overrides with an init-volumes service that runs
as root to chown named volumes to uid 1000 (coder user). All other
services run as the default coder user.

Also fix coderv2_config mount path from /root to /home/coder.
2026-02-10 11:31:47 +00:00
Cian Johnston 1328c7a02e fix: run dev containers as root to fix volume permissions
The oss-dogfood image defaults to the coder user, but named volumes
are created root-owned. Run as root (user: 0:0) in all dev services
to avoid permission denied errors on shared volumes.
2026-02-10 11:29:12 +00:00
Cian Johnston ba034de40a refactor: use dogfood image for site service too
Drop node:22 and corepack enable since the dogfood image already
has Node and pnpm installed.
2026-02-10 11:04:05 +00:00
Cian Johnston c5a6db98f5 refactor: use codercom/oss-dogfood:latest for all dev services
Replace golang:1.25 and custom setup.Dockerfile with the dogfood
image which has all build dependencies (Go, Terraform, Node, pnpm,
jq, curl, etc.) pre-installed.
2026-02-10 11:03:39 +00:00
Cian Johnston 049c533027 fix: use golang:1.25 image to match go.mod 2026-02-10 11:00:53 +00:00
Cian Johnston 02839c08a0 feat: add build-slim service for agent binaries
Add a build-slim init service that runs make build-slim and copies
the resulting slim binaries into a shared coder_cache volume. coderd
picks these up via CODER_CACHE_DIRECTORY, serving them at
/bin/coder-{os}-{arch} for workspace agents.

Also adds DOCKER_HOST env var to coderd and setup services, and
fixes the coderd image tag from golang:1.25 to golang:1.24.
2026-02-10 10:50:56 +00:00
Cian Johnston 56a3f8f711 refactor: use /root/.config/coderv2 volume instead of /home/coder 2026-02-09 15:22:41 +00:00
Cian Johnston d9097c3b78 fix: enable corepack for pnpm in site service 2026-02-09 15:18:31 +00:00
Cian Johnston 52d03bac70 feat: add exit trap to setup script for troubleshooting 2026-02-09 14:29:04 +00:00
Cian Johnston 0cf85db334 fix: use persistent session token from coder_dev_home volume 2026-02-09 14:19:13 +00:00
Cian Johnston f4f927209e feat: add configurable docker group_add to coderd service 2026-02-09 14:15:42 +00:00
Cian Johnston c0317c2c32 fix: add healthcheck to coderd service in dev compose 2026-02-09 14:10:49 +00:00
Cian Johnston 6856d972ca chore: add Docker Compose development environment 2026-02-09 14:00:42 +00:00
dependabot[bot] 19d24075da ci: bump the github-actions group with 4 updates (#22010)
Bumps the github-actions group with 4 updates:
[actions/cache](https://github.com/actions/cache),
[docker/login-action](https://github.com/docker/login-action),
[actions/attest](https://github.com/actions/attest) and
[nix-community/cache-nix-action](https://github.com/nix-community/cache-nix-action).

Updates `actions/cache` from 5.0.2 to 5.0.3
Updates `docker/login-action` from 3.6.0 to 3.7.0
Updates `actions/attest` from 3.1.0 to 3.2.0
Bump the npm-development group with 3 updates (<a
href="https://redirect.github.com/actions/attest/issues/320">#320</a>)</li>
<li>See full diff in <a
href="https://github.com/actions/attest/compare/7667f588f2f73a90cea6c7ac70e78266c4f76616...e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d">compare
view</a></li>
</ul>
</details>
<br />

Updates `nix-community/cache-nix-action` from 7.0.1 to 7.0.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nix-community/cache-nix-action/releases">nix-community/cache-nix-action's
releases</a>.</em></p>
<blockquote>
<h2>v7.0.2</h2>
<h2>What's Changed</h2>
<h2>Fixed</h2>
<ul>
<li>Fix: Nix versions under <code>v2.33</code> not supported by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/295">nix-community/cache-nix-action#295</a></li>
<li>Use a more precise check by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in
47869c4cbb023c803424e7311f07a744a2d66296</li>
</ul>
<h2>Changed (deps)</h2>
<!-- raw HTML omitted -->
<ul>
<li>chore(deps-dev): bump <code>@​typescript-eslint/eslint-plugin</code>
from 8.53.0 to 8.53.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/284">nix-community/cache-nix-action#284</a></li>
<li>chore(deps): bump DeterminateSystems/determinate-nix-action from
3.15.1 to 3.15.2 in the minor-actions-dependencies group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/288">nix-community/cache-nix-action#288</a></li>
<li>chore(deps-dev): bump eslint-config-love from 144.0.0 to 147.0.0 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/287">nix-community/cache-nix-action#287</a></li>
<li>chore(deps-dev): bump prettier from 3.8.0 to 3.8.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/286">nix-community/cache-nix-action#286</a></li>
<li>chore(deps-dev): bump <code>@​typescript-eslint/parser</code> from
8.53.1 to 8.54.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/290">nix-community/cache-nix-action#290</a></li>
<li>chore(deps): bump <code>@​actions/github</code> from 7.0.0 to 8.0.0
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/291">nix-community/cache-nix-action#291</a></li>
<li>chore(deps-dev): bump <code>@​typescript-eslint/eslint-plugin</code>
from 8.53.1 to 8.54.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/289">nix-community/cache-nix-action#289</a></li>
<li>chore(deps-dev): bump eslint-config-love from 147.0.0 to 149.0.0 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/294">nix-community/cache-nix-action#294</a></li>
</ul>
<!-- raw HTML omitted -->
<p><strong>Full Changelog</strong>: <a
href="https://github.com/nix-community/cache-nix-action/compare/v7...v7.0.2">https://github.com/nix-community/cache-nix-action/compare/v7...v7.0.2</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/7df957e333c1e5da7721f60227dbba6d06080569"><code>7df957e</code></a>
chore: build the action</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/47869c4cbb023c803424e7311f07a744a2d66296"><code>47869c4</code></a>
fix(action): use a more precise check</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/eca69c462eda8455304862773d53bfe08a7c1fad"><code>eca69c4</code></a>
Merge pull request <a
href="https://redirect.github.com/nix-community/cache-nix-action/issues/295">#295</a>
from nix-community/nix-versions-under-v233-not-supported</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/b6fd2e3f7b9992c952409248b26c3806976ca922"><code>b6fd2e3</code></a>
feat(ci): add test with Nix version &lt;2.33</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/ddd9cbc8ee25d0dbd64bc7bf380398d810fedcc0"><code>ddd9cbc</code></a>
fix(ci): bump action version</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/922e9060c19ec2c406a055d4255ec1760e0af798"><code>922e906</code></a>
chore: build the action</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/4038f94ae961f71f156295e34fc27af3846cb555"><code>4038f94</code></a>
refactor(action): rename constants for command results</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/dfde4d35b86aa2875e5829cfc8b6c2d4c203ab9b"><code>dfde4d3</code></a>
fix(action): choose command based on the Nix version</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/4b2dd9ec99b6d72fad66eeff381bc94d20d7207d"><code>4b2dd9e</code></a>
Merge pull request <a
href="https://redirect.github.com/nix-community/cache-nix-action/issues/294">#294</a>
from nix-community/dependabot/npm_and_yarn/eslint-con...</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/273d1a77100543feec627c2bdd09b6c7060b88ab"><code>273d1a7</code></a>
chore(deps-dev): bump eslint-config-love from 147.0.0 to 149.0.0</li>
<li>Additional commits viewable in <a
href="https://github.com/nix-community/cache-nix-action/compare/106bba72ed8e29c8357661199511ef07790175e9...7df957e333c1e5da7721f60227dbba6d06080569">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 13:35:13 +00:00
dependabot[bot] d017c27eaf chore: bump google.golang.org/api from 0.264.0 to 0.265.0 (#22007)
Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.264.0 to 0.265.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/releases">google.golang.org/api's
releases</a>.</em></p>
<blockquote>
<h2>v0.265.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.264.0...v0.265.0">0.265.0</a>
(2026-02-04)</h2>
<h3>Features</h3>
<ul>
<li>Add checksums for single chunk json uploads (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3448">#3448</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/0f1cb7b9b71b8f21e2bb14d69bd1e11a1ca7a9ff">0f1cb7b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3473">#3473</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/e617dd5dc920921e5fff184be3c33a8ab9c8ce41">e617dd5</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3476">#3476</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/986f55600724d148e102413766cfbdc278adba38">986f556</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3477">#3477</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/cdb1738722afcceb26e6d4be934bac46682c1c25">cdb1738</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3479">#3479</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/2aa3478d4e2a94b30eb6873ff5b41cffef0e89bd">2aa3478</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3480">#3480</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/29bd84381608db3db0385bd8f4544af458df7329">29bd843</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3482">#3482</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/afa65b7fb9b586aac07247474fdd1efc5812e824">afa65b7</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md">google.golang.org/api's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.264.0...v0.265.0">0.265.0</a>
(2026-02-04)</h2>
<h3>Features</h3>
<ul>
<li>Add checksums for single chunk json uploads (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3448">#3448</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/0f1cb7b9b71b8f21e2bb14d69bd1e11a1ca7a9ff">0f1cb7b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3473">#3473</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/e617dd5dc920921e5fff184be3c33a8ab9c8ce41">e617dd5</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3476">#3476</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/986f55600724d148e102413766cfbdc278adba38">986f556</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3477">#3477</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/cdb1738722afcceb26e6d4be934bac46682c1c25">cdb1738</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3479">#3479</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/2aa3478d4e2a94b30eb6873ff5b41cffef0e89bd">2aa3478</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3480">#3480</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/29bd84381608db3db0385bd8f4544af458df7329">29bd843</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3482">#3482</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/afa65b7fb9b586aac07247474fdd1efc5812e824">afa65b7</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/e6edc1df27af3ccdceb9ec580e4e4189500e154f"><code>e6edc1d</code></a>
chore(main): release 0.265.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3474">#3474</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/afa65b7fb9b586aac07247474fdd1efc5812e824"><code>afa65b7</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3482">#3482</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/0554404d716233619aee04791086c3fca768129f"><code>0554404</code></a>
chore: Migrate gsutil usage to gcloud storage (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3466">#3466</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/84932f3abee6aaff6e00d04099c1a10b69d8963d"><code>84932f3</code></a>
chore: replace old go teams with cloud-sdk-go-team (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3475">#3475</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/242927a161200a778bd00dc8ff3136e5eea85b53"><code>242927a</code></a>
chore: Migrate gsutil usage to gcloud storage (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3469">#3469</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/0f1cb7b9b71b8f21e2bb14d69bd1e11a1ca7a9ff"><code>0f1cb7b</code></a>
feat: add checksums for single chunk json uploads (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3448">#3448</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/e92945d638f320e93a83d875f0590c57d43396f4"><code>e92945d</code></a>
chore: Migrate gsutil usage to gcloud storage (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3470">#3470</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/ba218c11dc7d70f76529b2084eff74d4c252e8d0"><code>ba218c1</code></a>
chore: Migrate gsutil usage to gcloud storage (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3468">#3468</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/2e7d0f51983a1b4d905ac01669777b9d3910064d"><code>2e7d0f5</code></a>
chore: Migrate gsutil usage to gcloud storage (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3471">#3471</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/460b37cbd6a873dff58046a15abb1b0289d956ec"><code>460b37c</code></a>
chore: Migrate gsutil usage to gcloud storage (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3467">#3467</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/googleapis/google-api-go-client/compare/v0.264.0...v0.265.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/api&package-manager=go_modules&previous-version=0.264.0&new-version=0.265.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 13:26:56 +00:00
dependabot[bot] 0bab4a2042 chore: bump the x group with 2 updates (#22005)
Bumps the x group with 2 updates:
[golang.org/x/oauth2](https://github.com/golang/oauth2) and
[golang.org/x/sys](https://github.com/golang/sys).

Updates `golang.org/x/oauth2` from 0.34.0 to 0.35.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/oauth2/commit/89ff2e1ac388c1a234a687cb2735341cde3f7122"><code>89ff2e1</code></a>
google: add safer credentials JSON loading options.</li>
<li>See full diff in <a
href="https://github.com/golang/oauth2/compare/v0.34.0...v0.35.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/sys` from 0.40.0 to 0.41.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/sys/commit/fc646e489fd944b6f77d327ab77f1a4bab81d5ad"><code>fc646e4</code></a>
cpu: use IsProcessorFeaturePresent to calculate ARM64 on windows</li>
<li><a
href="https://github.com/golang/sys/commit/f11c7bb268eb8a49f5a42afe15387a159a506935"><code>f11c7bb</code></a>
windows: add IsProcessorFeaturePresent and processor feature consts</li>
<li><a
href="https://github.com/golang/sys/commit/d25a7aaff8c2b056b2059fd7065afe1d4132e082"><code>d25a7aa</code></a>
unix: add IoctlSetString on all platforms</li>
<li><a
href="https://github.com/golang/sys/commit/6fb913b30f367555467f08da4d60f49996c9b17a"><code>6fb913b</code></a>
unix: return early on error in Recvmsg</li>
<li>See full diff in <a
href="https://github.com/golang/sys/compare/v0.40.0...v0.41.0">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 13:26:42 +00:00
dependabot[bot] f3cd74d9d8 chore: bump rust from df6ca8f to 760ad1d in /dogfood/coder (#22009)
Bumps rust from `df6ca8f` to `760ad1d`.


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=rust&package-manager=docker&previous-version=slim&new-version=slim)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 13:26:12 +00:00
dependabot[bot] e3b4099c9d chore: bump github.com/prometheus-community/pro-bing from 0.7.0 to 0.8.0 (#22006)
Bumps
[github.com/prometheus-community/pro-bing](https://github.com/prometheus-community/pro-bing)
from 0.7.0 to 0.8.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/prometheus-community/pro-bing/releases">github.com/prometheus-community/pro-bing's
releases</a>.</em></p>
<blockquote>
<h2>v0.8.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Synchronize common files from prometheus/prometheus by <a
href="https://github.com/prombot"><code>@​prombot</code></a> in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/155">prometheus-community/pro-bing#155</a></li>
<li>Bump golang.org/x/net from 0.38.0 to 0.39.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/154">prometheus-community/pro-bing#154</a></li>
<li>Synchronize common files from prometheus/prometheus by <a
href="https://github.com/prombot"><code>@​prombot</code></a> in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/161">prometheus-community/pro-bing#161</a></li>
<li>Set ping traffic class to zero by default by <a
href="https://github.com/floatingstatic"><code>@​floatingstatic</code></a>
in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/168">prometheus-community/pro-bing#168</a></li>
<li>Bump golang.org/x/net from 0.39.0 to 0.44.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/169">prometheus-community/pro-bing#169</a></li>
<li>Synchronize common files from prometheus/prometheus by <a
href="https://github.com/prombot"><code>@​prombot</code></a> in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/167">prometheus-community/pro-bing#167</a></li>
<li>Update build by <a
href="https://github.com/SuperQ"><code>@​SuperQ</code></a> in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/172">prometheus-community/pro-bing#172</a></li>
<li>Bump golang.org/x/sync from 0.13.0 to 0.17.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/170">prometheus-community/pro-bing#170</a></li>
<li>feat: support setting ICMP source address for outgoing packets by <a
href="https://github.com/snormore"><code>@​snormore</code></a> in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/171">prometheus-community/pro-bing#171</a></li>
<li>Synchronize common files from prometheus/prometheus by <a
href="https://github.com/prombot"><code>@​prombot</code></a> in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/173">prometheus-community/pro-bing#173</a></li>
<li>Bump golang.org/x/net from 0.44.0 to 0.49.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/183">prometheus-community/pro-bing#183</a></li>
<li>Bump golang.org/x/sync from 0.17.0 to 0.19.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/181">prometheus-community/pro-bing#181</a></li>
<li>Synchronize common files from prometheus/prometheus by <a
href="https://github.com/prombot"><code>@​prombot</code></a> in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/179">prometheus-community/pro-bing#179</a></li>
<li>Optimize BPF code to reject non-Echo Reply ICMP packets by <a
href="https://github.com/nvksie"><code>@​nvksie</code></a> in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/180">prometheus-community/pro-bing#180</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/snormore"><code>@​snormore</code></a>
made their first contribution in <a
href="https://redirect.github.com/prometheus-community/pro-bing/pull/171">prometheus-community/pro-bing#171</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/prometheus-community/pro-bing/compare/v0.7.0...v0.8.0">https://github.com/prometheus-community/pro-bing/compare/v0.7.0...v0.8.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/112c6d152733673e7e7b463bd8a339230536260d"><code>112c6d1</code></a>
Merge pull request <a
href="https://redirect.github.com/prometheus-community/pro-bing/issues/180">#180</a>
from nvksie/main</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/c0e523e8e6d005a91f5700083239f903cf39ef2f"><code>c0e523e</code></a>
Merge pull request <a
href="https://redirect.github.com/prometheus-community/pro-bing/issues/179">#179</a>
from prometheus-community/repo_sync</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/dc59983a3a2c41b8b5a2fb3781056a89dd7af680"><code>dc59983</code></a>
Merge pull request <a
href="https://redirect.github.com/prometheus-community/pro-bing/issues/181">#181</a>
from prometheus-community/dependabot/go_modules/golan...</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/3b320ae455af8dfe6e2462e49fcdbdad81bf164f"><code>3b320ae</code></a>
Bump golang.org/x/sync from 0.17.0 to 0.19.0</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/df60cdb87f3c9d6a0ddef2a184254f8e0f9afeb2"><code>df60cdb</code></a>
Merge pull request <a
href="https://redirect.github.com/prometheus-community/pro-bing/issues/183">#183</a>
from prometheus-community/dependabot/go_modules/golan...</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/22f264b8c85e8e2ffc53a21b2e775aabccbb4666"><code>22f264b</code></a>
Bump golang.org/x/net from 0.44.0 to 0.49.0</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/3e7f4fe13f3401f6c2ce76995c564b656749dc2a"><code>3e7f4fe</code></a>
optimize bpf filter, accept Echo Reply only</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/13271982908ad062b4ed542e1cb6a5c77fa7804c"><code>1327198</code></a>
Update common Prometheus files</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/3b66532b7fd1f7ca238988d3654eb48ab4ddc88a"><code>3b66532</code></a>
Merge pull request <a
href="https://redirect.github.com/prometheus-community/pro-bing/issues/173">#173</a>
from prometheus-community/repo_sync</li>
<li><a
href="https://github.com/prometheus-community/pro-bing/commit/4d98d366567dd8b581d39fe59a4c667876d38174"><code>4d98d36</code></a>
Update common Prometheus files</li>
<li>Additional commits viewable in <a
href="https://github.com/prometheus-community/pro-bing/compare/v0.7.0...v0.8.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/prometheus-community/pro-bing&package-manager=go_modules&previous-version=0.7.0&new-version=0.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-09 13:25:33 +00:00
Zach fa2481c650 test: add synctest-based aibridged cache expiry test (#21984)
Resolves the TODO in TestPool by adding TestPool_Expiry which uses Go
1.25's testing/synctest to verify TTL-based cache eviction.

I wanted to get familiar with the new `synctest` package in Go 1.25 and
found this TODO comment, so I decided to take a stab at it 😄
2026-02-09 15:09:40 +02:00
Jake Howell 2c0ffdd590 feat: refactor <TerminalAlerts /> component (#22004)
Quick, easy, and simple set of changes, with some added flavour. Replaces
two uses of MUI-based components with our drop-in-place links.
Added a refresh icon to the `Refresh` button and added the external link
icon `➚` to all of the links, as they all link out to `/docs` (this is
in line with the rest of the application).

|    |    |
|---|---|
| Old | <img width="1152" height="65" alt="ALERT_1"
src="https://github.com/user-attachments/assets/5e0a0ce3-29ef-4fa1-8793-8aa89d80c661"
/> |
| New | <img width="1152" height="65" alt="ALERT_1_FIX"
src="https://github.com/user-attachments/assets/7be1f0b7-1594-478c-b7c1-6f2288064e13"
/> |

|    |    |
|---|---|
| Old | <img width="1152" height="81" alt="ALERT_2"
src="https://github.com/user-attachments/assets/f8e4d65f-5aa1-408c-9149-0511c8367e3b"
/> |
| New | <img width="1152" height="81" alt="ALERT_2_FIX"
src="https://github.com/user-attachments/assets/230e0754-dd18-40d5-825d-5e5082fe806a"
/> |
2026-02-10 00:01:48 +11:00
Jake Howell e8fa04404f fix: remove @mui/ components from <ConnectionLog* /> (#22003)
Migrates `ConnectionLogRow` and `ConnectionLogDescription` off MUI and
Emotion. Replaces `@mui/material/Link` with the existing shadcn-based
`Link` component, swaps the deprecated `Stack` wrappers for plain divs
with Tailwind flex utilities, and converts all Emotion `css` prop styles
to Tailwind classes.

Also fixes a pre-existing lint issue where `tabIndex` was set on a
non-interactive div.
2026-02-09 23:20:44 +11:00
Jake Howell f11a8086b0 fix: migrate all uses of visuallyHidden (#22001)
Replace all usages of MUI's `visuallyHidden` utility from `@mui/utils`
with Tailwind's `sr-only` class. Both produce identical CSS, so this is
a no-op behaviorally -- just removes another MUI dependency from the
codebase. Also updates the accessibility example in the frontend
contributing docs to match.
2026-02-09 23:17:03 +11:00
Spike Curtis 95b3bc9c7a test: fix failnow in goroutine in TestServer_TelemetryDisabled_FinalReport (#21973)
closes: https://github.com/coder/internal/issues/1331

Fixes up an issue in the test where we end up calling `FailNow` outside
the main test goroutine. Also adds the ability to name a `ptytest.PTY`
for cases like this one where we start multiple commands. This will help
debugging if we see the issue again.

This doesn't address the root cause of the failure, but I think we
should close the flake issue. We'd likely need a stacktrace of all
goroutines at the point the test fails, but that's way too much
effort unless we see this again.
2026-02-09 14:20:57 +04:00
Cian Johnston 93b000776f fix(cli): revert #21583 (#22000)
Relates to https://github.com/coder/internal/issues/1217

This reverts commit f799cba395.

@deansheather reported that this breaks ControlMaster.

Investigating alternative fixes to coder/internal#1217
2026-02-09 09:56:33 +00:00
Sas Swart e6fbf501ac feat: add an endpoint to manually pause a coder task (#21889)
Closes https://github.com/coder/internal/issues/1261.

This pull request adds an endpoint to pause coder tasks by stopping the
underlying workspace.
* Rather than living at `POST /api/v2/tasks/{user}/{task}/pause`, the
endpoint is currently experimental.
* We do not currently set the build reason to `task_manual_pause`,
because build reasons are only used on stop transitions.
2026-02-09 08:56:41 +02:00
Dean Sheather d3036d569e chore: only run lint-actions job on CI changes (#21999)
It was split to reduce flaking, but still always ran on `main` anyway
2026-02-09 05:31:17 +00:00
69 changed files with 7527 additions and 497 deletions
+4
@@ -0,0 +1,4 @@
# All artifacts of the build processed are dumped here.
# Ignore it for docker context, as all Dockerfiles should build their own
# binaries.
build
+8 -6
@@ -181,7 +181,7 @@ jobs:
echo "LINT_CACHE_DIR=$dir" >> "$GITHUB_ENV"
- name: golangci-lint cache
- uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5.0.2
+ uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: |
${{ env.LINT_CACHE_DIR }}
@@ -241,7 +241,9 @@ jobs:
lint-actions:
needs: changes
- if: needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
+ # Only run this job if changes to CI workflow files are detected. This job
+ # can flake as it reaches out to GitHub to check referenced actions.
+ if: needs.changes.outputs.ci == 'true'
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
@@ -1184,7 +1186,7 @@ jobs:
persist-credentials: false
- name: GHCR Login
- uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+ uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -1391,7 +1393,7 @@ jobs:
id: attest_main
if: github.ref == 'refs/heads/main'
continue-on-error: true
- uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
+ uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
with:
subject-name: "ghcr.io/coder/coder-preview:main"
predicate-type: "https://slsa.dev/provenance/v1"
@@ -1428,7 +1430,7 @@ jobs:
id: attest_latest
if: github.ref == 'refs/heads/main'
continue-on-error: true
- uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
+ uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
with:
subject-name: "ghcr.io/coder/coder-preview:latest"
predicate-type: "https://slsa.dev/provenance/v1"
@@ -1465,7 +1467,7 @@ jobs:
id: attest_version
if: github.ref == 'refs/heads/main'
continue-on-error: true
- uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
+ uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
with:
subject-name: "ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}"
predicate-type: "https://slsa.dev/provenance/v1"
+1 -1
@@ -76,7 +76,7 @@ jobs:
persist-credentials: false
- name: GHCR Login
- uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+ uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.actor }}
+1 -1
@@ -48,7 +48,7 @@ jobs:
persist-credentials: false
- name: Docker login
- uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+ uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.actor }}
+2 -2
@@ -42,7 +42,7 @@ jobs:
# on version 2.29 and above.
nix_version: "2.28.5"
- - uses: nix-community/cache-nix-action@106bba72ed8e29c8357661199511ef07790175e9 # v7.0.1
+ - uses: nix-community/cache-nix-action@7df957e333c1e5da7721f60227dbba6d06080569 # v7.0.2
with:
# restore and save a cache using this key
primary-key: nix-${{ runner.os }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
@@ -82,7 +82,7 @@ jobs:
- name: Login to DockerHub
if: github.ref == 'refs/heads/main'
- uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+ uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
+1 -1
@@ -248,7 +248,7 @@ jobs:
uses: ./.github/actions/setup-sqlc
- name: GHCR Login
- uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+ uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.actor }}
+4 -4
@@ -233,7 +233,7 @@ jobs:
cat "$CODER_RELEASE_NOTES_FILE"
- name: Docker Login
- uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+ uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -448,7 +448,7 @@ jobs:
id: attest_base
if: ${{ !inputs.dry_run && steps.image-base-tag.outputs.tag != '' }}
continue-on-error: true
- uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
+ uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
with:
subject-name: ${{ steps.image-base-tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -564,7 +564,7 @@ jobs:
id: attest_main
if: ${{ !inputs.dry_run }}
continue-on-error: true
- uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
+ uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
with:
subject-name: ${{ steps.build_docker.outputs.multiarch_image }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -608,7 +608,7 @@ jobs:
id: attest_latest
if: ${{ !inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true' }}
continue-on-error: true
- uses: actions/attest@7667f588f2f73a90cea6c7ac70e78266c4f76616 # v3.1.0
+ uses: actions/attest@e59cbc1ad1ac2d59339667419eb8cdde6eb61e3d # v3.2.0
with:
subject-name: ${{ steps.latest_tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
+4
@@ -98,3 +98,7 @@ AGENTS.local.md
# Ignore plans written by AI agents.
PLAN.md
# cdev load balancer temp config (created under repo root for
# Docker Desktop bind mount compatibility).
.cdev-lb-*
Executable
BIN
Binary file not shown.
+24 -19
@@ -2244,6 +2244,7 @@ type runServerOpts struct {
waitForSnapshot bool
telemetryDisabled bool
waitForTelemetryDisabledCheck bool
name string
}
func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
@@ -2266,25 +2267,23 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
"--cache-dir", cacheDir,
"--log-filter", ".*",
)
finished := make(chan bool, 2)
inv.Logger = inv.Logger.Named(opts.name)
errChan := make(chan error, 1)
pty := ptytest.New(t).Attach(inv)
pty := ptytest.New(t).Named(opts.name).Attach(inv)
go func() {
errChan <- inv.WithContext(ctx).Run()
finished <- true
// close the pty here so that we can start tearing down resources. This test creates multiple servers with
// associated ptys. There is a `t.Cleanup()` that does this, but it waits until the whole test is complete.
_ = pty.Close()
}()
go func() {
defer func() {
finished <- true
}()
if opts.waitForSnapshot {
pty.ExpectMatchContext(testutil.Context(t, testutil.WaitLong), "submitted snapshot")
}
if opts.waitForTelemetryDisabledCheck {
pty.ExpectMatchContext(testutil.Context(t, testutil.WaitLong), "finished telemetry status check")
}
}()
<-finished
if opts.waitForSnapshot {
pty.ExpectMatchContext(testutil.Context(t, testutil.WaitLong), "submitted snapshot")
}
if opts.waitForTelemetryDisabledCheck {
pty.ExpectMatchContext(testutil.Context(t, testutil.WaitLong), "finished telemetry status check")
}
return errChan, cancelFunc
}
waitForShutdown := func(t *testing.T, errChan chan error) error {
@@ -2298,7 +2297,9 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
return nil
}
errChan, cancelFunc := runServer(t, runServerOpts{telemetryDisabled: true, waitForTelemetryDisabledCheck: true})
errChan, cancelFunc := runServer(t, runServerOpts{
telemetryDisabled: true, waitForTelemetryDisabledCheck: true, name: "0disabled",
})
cancelFunc()
require.NoError(t, waitForShutdown(t, errChan))
@@ -2306,7 +2307,7 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
require.Empty(t, deployment)
require.Empty(t, snapshot)
errChan, cancelFunc = runServer(t, runServerOpts{waitForSnapshot: true})
errChan, cancelFunc = runServer(t, runServerOpts{waitForSnapshot: true, name: "1enabled"})
cancelFunc()
require.NoError(t, waitForShutdown(t, errChan))
// we expect to see a deployment and a snapshot twice:
@@ -2325,7 +2326,9 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
}
}
errChan, cancelFunc = runServer(t, runServerOpts{telemetryDisabled: true, waitForTelemetryDisabledCheck: true})
errChan, cancelFunc = runServer(t, runServerOpts{
telemetryDisabled: true, waitForTelemetryDisabledCheck: true, name: "2disabled",
})
cancelFunc()
require.NoError(t, waitForShutdown(t, errChan))
@@ -2341,7 +2344,9 @@ func TestServer_TelemetryDisabled_FinalReport(t *testing.T) {
t.Fatalf("timed out waiting for snapshot")
}
errChan, cancelFunc = runServer(t, runServerOpts{telemetryDisabled: true, waitForTelemetryDisabledCheck: true})
errChan, cancelFunc = runServer(t, runServerOpts{
telemetryDisabled: true, waitForTelemetryDisabledCheck: true, name: "3disabled",
})
cancelFunc()
require.NoError(t, waitForShutdown(t, errChan))
// Since telemetry is disabled and we've already sent a snapshot, we expect no
-58
@@ -24,7 +24,6 @@ import (
"github.com/gofrs/flock"
"github.com/google/uuid"
"github.com/mattn/go-isatty"
"github.com/shirou/gopsutil/v4/process"
"github.com/spf13/afero"
gossh "golang.org/x/crypto/ssh"
gosshagent "golang.org/x/crypto/ssh/agent"
@@ -85,9 +84,6 @@ func (r *RootCmd) ssh() *serpent.Command {
containerName string
containerUser string
// Used in tests to simulate the parent exiting.
testForcePPID int64
)
cmd := &serpent.Command{
Annotations: workspaceCommand,
@@ -179,24 +175,6 @@ func (r *RootCmd) ssh() *serpent.Command {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// When running as a ProxyCommand (stdio mode), monitor the parent process
// and exit if it dies to avoid leaving orphaned processes. This is
// particularly important when editors like VSCode/Cursor spawn SSH
// connections and then crash or are killed - we don't want zombie
// `coder ssh` processes accumulating.
// Note: using gopsutil to check the parent process as this handles
// windows processes as well in a standard way.
if stdio {
ppid := int32(os.Getppid()) // nolint:gosec
checkParentInterval := 10 * time.Second // Arbitrary interval to not be too frequent
if testForcePPID > 0 {
ppid = int32(testForcePPID) // nolint:gosec
checkParentInterval = 100 * time.Millisecond // Shorter interval for testing
}
ctx, cancel = watchParentContext(ctx, quartz.NewReal(), ppid, process.PidExistsWithContext, checkParentInterval)
defer cancel()
}
// Prevent unnecessary logs from the stdlib from messing up the TTY.
// See: https://github.com/coder/coder/issues/13144
log.SetOutput(io.Discard)
@@ -797,12 +775,6 @@ func (r *RootCmd) ssh() *serpent.Command {
Value: serpent.BoolOf(&forceNewTunnel),
Hidden: true,
},
{
Flag: "test.force-ppid",
Description: "Override the parent process ID to simulate a different parent process. ONLY USE THIS IN TESTS.",
Value: serpent.Int64Of(&testForcePPID),
Hidden: true,
},
sshDisableAutostartOption(serpent.BoolOf(&disableAutostart)),
}
return cmd
@@ -1690,33 +1662,3 @@ func normalizeWorkspaceInput(input string) string {
return input // Fallback
}
}
// watchParentContext returns a context that is canceled when the parent process
// dies. It polls using the provided clock and checks if the parent is alive
// using the provided pidExists function.
func watchParentContext(ctx context.Context, clock quartz.Clock, originalPPID int32, pidExists func(context.Context, int32) (bool, error), interval time.Duration) (context.Context, context.CancelFunc) {
ctx, cancel := context.WithCancel(ctx) // intentionally shadowed
go func() {
ticker := clock.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
alive, err := pidExists(ctx, originalPPID)
// If we get an error checking the parent process (e.g., permission
// denied, the process is in an unknown state), we assume the parent
// is still alive to avoid disrupting the SSH connection. We only
// cancel when we definitively know the parent is gone (alive=false, err=nil).
if !alive && err == nil {
cancel()
return
}
}
}
}()
return ctx, cancel
}
-96
@@ -312,102 +312,6 @@ type fakeCloser struct {
err error
}
func TestWatchParentContext(t *testing.T) {
t.Parallel()
t.Run("CancelsWhenParentDies", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
parentAlive := true
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return parentAlive, nil
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we simulate parent death and advance the clock
parentAlive = false
mClock.AdvanceNext()
// Then: The context should be canceled
_ = testutil.TryReceive(ctx, t, childCtx.Done())
})
t.Run("DoesNotCancelWhenParentAlive", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil // Parent always alive
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance the clock several times with the parent alive
for range 3 {
mClock.AdvanceNext()
}
// Then: context should not be canceled
require.NoError(t, childCtx.Err())
})
t.Run("RespectsParentContext", func(t *testing.T) {
t.Parallel()
ctx, cancelParent := context.WithCancel(context.Background())
mClock := quartz.NewMock(t)
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return true, nil
}, testutil.WaitShort)
defer cancel()
// When: we cancel the parent context
cancelParent()
// Then: The context should be canceled
require.ErrorIs(t, childCtx.Err(), context.Canceled)
})
t.Run("DoesNotCancelOnError", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
mClock := quartz.NewMock(t)
trap := mClock.Trap().NewTicker()
defer trap.Close()
// Simulate an error checking parent status (e.g., permission denied).
// We should not cancel the context in this case to avoid disrupting
// the SSH connection.
childCtx, cancel := watchParentContext(ctx, mClock, 1234, func(context.Context, int32) (bool, error) {
return false, xerrors.New("permission denied")
}, testutil.WaitShort)
defer cancel()
// Wait for the ticker to be created
trap.MustWait(ctx).MustRelease(ctx)
// When: we advance clock several times
for range 3 {
mClock.AdvanceNext()
}
// Context should NOT be canceled since we got an error (not a definitive "not alive")
require.NoError(t, childCtx.Err(), "context was canceled even though pidExists returned an error")
})
}
func (c *fakeCloser) Close() error {
*c.closes = append(*c.closes, c)
return c.err
-101
@@ -1122,107 +1122,6 @@ func TestSSH(t *testing.T) {
}
})
// This test ensures that the SSH session exits when the parent process dies.
t.Run("StdioExitOnParentDeath", func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitSuperLong)
defer cancel()
// sleepStart -> agentReady -> sessionStarted -> sleepKill -> sleepDone -> cmdDone
sleepStart := make(chan int)
agentReady := make(chan struct{})
sessionStarted := make(chan struct{})
sleepKill := make(chan struct{})
sleepDone := make(chan struct{})
// Start a sleep process which we will pretend is the parent.
go func() {
sleepCmd := exec.Command("sleep", "infinity")
if !assert.NoError(t, sleepCmd.Start(), "failed to start sleep command") {
return
}
sleepStart <- sleepCmd.Process.Pid
defer close(sleepDone)
<-sleepKill
sleepCmd.Process.Kill()
_ = sleepCmd.Wait()
}()
client, workspace, agentToken := setupWorkspaceForAgent(t)
go func() {
defer close(agentReady)
_ = agenttest.New(t, client.URL, agentToken)
coderdtest.NewWorkspaceAgentWaiter(t, client, workspace.ID).WaitFor(coderdtest.AgentsReady)
}()
clientOutput, clientInput := io.Pipe()
serverOutput, serverInput := io.Pipe()
defer func() {
for _, c := range []io.Closer{clientOutput, clientInput, serverOutput, serverInput} {
_ = c.Close()
}
}()
// Start a connection to the agent once it's ready
go func() {
<-agentReady
conn, channels, requests, err := ssh.NewClientConn(&testutil.ReaderWriterConn{
Reader: serverOutput,
Writer: clientInput,
}, "", &ssh.ClientConfig{
// #nosec
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
})
if !assert.NoError(t, err, "failed to create SSH client connection") {
return
}
defer conn.Close()
sshClient := ssh.NewClient(conn, channels, requests)
defer sshClient.Close()
session, err := sshClient.NewSession()
if !assert.NoError(t, err, "failed to create SSH session") {
return
}
close(sessionStarted)
<-sleepDone
// Ref: https://github.com/coder/internal/issues/1289
// This may return either a nil error or io.EOF.
// There is an inherent race here:
// 1. Sleep process is killed -> sleepDone is closed.
// 2. watchParentContext detects parent death, cancels context,
// causing SSH session teardown.
// 3. We receive from sleepDone and attempt to call session.Close()
// Now either:
// a. Session teardown completes before we call Close(), resulting in io.EOF
// b. We call Close() first, resulting in a nil error.
_ = session.Close()
}()
// Wait for our "parent" process to start
sleepPid := testutil.RequireReceive(ctx, t, sleepStart)
// Wait for the agent to be ready
testutil.SoftTryReceive(ctx, t, agentReady)
inv, root := clitest.New(t, "ssh", "--stdio", workspace.Name, "--test.force-ppid", fmt.Sprintf("%d", sleepPid))
clitest.SetupConfig(t, client, root)
inv.Stdin = clientOutput
inv.Stdout = serverInput
inv.Stderr = io.Discard
// Start the command
clitest.Start(t, inv.WithContext(ctx))
// Wait for a session to be established
testutil.SoftTryReceive(ctx, t, sessionStarted)
// Now kill the fake "parent"
close(sleepKill)
// The sleep process should exit
testutil.SoftTryReceive(ctx, t, sleepDone)
// And then the command should exit. This is tracked by clitest.Start.
})
t.Run("ForwardAgent", func(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("Test not supported on windows")
+60
@@ -1244,3 +1244,63 @@ func (api *API) postWorkspaceAgentTaskLogSnapshot(rw http.ResponseWriter, r *htt
rw.WriteHeader(http.StatusNoContent)
}
// @Summary Pause task
// @ID pause-task
// @Security CoderSessionToken
// @Accept json
// @Tags Tasks
// @Param user path string true "Username, user ID, or 'me' for the authenticated user"
// @Param task path string true "Task ID" format(uuid)
// @Success 202 {object} codersdk.PauseTaskResponse
// @Router /tasks/{user}/{task}/pause [post]
func (api *API) pauseTask(rw http.ResponseWriter, r *http.Request) {
var (
ctx = r.Context()
apiKey = httpmw.APIKey(r)
task = httpmw.TaskParam(r)
)
if !task.WorkspaceID.Valid {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Task does not have a workspace.",
})
return
}
workspace, err := api.Database.GetWorkspaceByID(ctx, task.WorkspaceID.UUID)
if err != nil {
if httpapi.Is404Error(err) {
httpapi.ResourceNotFound(rw)
return
}
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching task workspace.",
Detail: err.Error(),
})
return
}
buildReq := codersdk.CreateWorkspaceBuildRequest{
Transition: codersdk.WorkspaceTransitionStop,
Reason: codersdk.CreateWorkspaceBuildReasonTaskManualPause,
}
build, err := api.postWorkspaceBuildsInternal(
ctx,
apiKey,
workspace,
buildReq,
func(action policy.Action, object rbac.Objecter) bool {
return api.Authorize(r, action, object)
},
audit.WorkspaceBuildBaggageFromRequest(r),
)
if err != nil {
httperror.WriteWorkspaceBuildError(ctx, rw, err)
return
}
httpapi.Write(ctx, rw, http.StatusAccepted, codersdk.PauseTaskResponse{
WorkspaceBuild: &build,
})
}
+359
@@ -16,6 +16,7 @@ import (
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
agentapisdk "github.com/coder/agentapi-sdk-go"
"github.com/coder/coder/v2/agent"
@@ -26,11 +27,14 @@ import (
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/database/pubsub"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/notifications"
"github.com/coder/coder/v2/coderd/notifications/notificationstest"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/util/slice"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
@@ -100,6 +104,36 @@ func createTaskInState(db database.Store, ownerSubject rbac.Subject, ownerOrgID,
}
}
type aiTaskStoreWrapper struct {
database.Store
getWorkspaceByID func(ctx context.Context, id uuid.UUID) (database.Workspace, error)
insertWorkspaceBuild func(ctx context.Context, arg database.InsertWorkspaceBuildParams) error
}
func (s aiTaskStoreWrapper) GetWorkspaceByID(ctx context.Context, id uuid.UUID) (database.Workspace, error) {
if s.getWorkspaceByID != nil {
return s.getWorkspaceByID(ctx, id)
}
return s.Store.GetWorkspaceByID(ctx, id)
}
func (s aiTaskStoreWrapper) InsertWorkspaceBuild(ctx context.Context, arg database.InsertWorkspaceBuildParams) error {
if s.insertWorkspaceBuild != nil {
return s.insertWorkspaceBuild(ctx, arg)
}
return s.Store.InsertWorkspaceBuild(ctx, arg)
}
func (s aiTaskStoreWrapper) InTx(fn func(database.Store) error, opts *database.TxOptions) error {
return s.Store.InTx(func(tx database.Store) error {
return fn(aiTaskStoreWrapper{
Store: tx,
getWorkspaceByID: s.getWorkspaceByID,
insertWorkspaceBuild: s.insertWorkspaceBuild,
})
}, opts)
}
func TestTasks(t *testing.T) {
t.Parallel()
@@ -2422,3 +2456,328 @@ func TestPostWorkspaceAgentTaskSnapshot(t *testing.T) {
require.Equal(t, http.StatusUnauthorized, res.StatusCode)
})
}
func TestPauseTask(t *testing.T) {
t.Parallel()
setupClient := func(t *testing.T, db database.Store, ps pubsub.Pubsub, authorizer rbac.Authorizer) *codersdk.Client {
t.Helper()
client, _, _ := coderdtest.NewWithAPI(t, &coderdtest.Options{
Database: db,
Pubsub: ps,
Authorizer: authorizer,
})
return client
}
setupWorkspaceTask := func(t *testing.T, db database.Store, user codersdk.CreateFirstUserResponse) (database.Task, uuid.UUID) {
t.Helper()
workspaceBuild := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithTask(database.TaskTable{
Prompt: "pause me",
}, nil).Do()
return workspaceBuild.Task, workspaceBuild.Workspace.ID
}
t.Run("OK", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: echo.ApplyComplete,
ProvisionGraph: []*proto.Response{
{Type: &proto.Response_Graph{Graph: &proto.GraphComplete{
HasAiTasks: true,
}}},
},
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
task, err := client.CreateTask(ctx, codersdk.Me, codersdk.CreateTaskRequest{
TemplateVersionID: template.ActiveVersionID,
Input: "pause me",
})
require.NoError(t, err)
require.True(t, task.WorkspaceID.Valid)
workspace, err := client.Workspace(ctx, task.WorkspaceID.UUID)
require.NoError(t, err)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
resp, err := client.PauseTask(ctx, codersdk.Me, task.ID)
require.NoError(t, err)
build := *resp.WorkspaceBuild
require.NotNil(t, build)
require.Equal(t, codersdk.WorkspaceTransitionStop, build.Transition)
require.Equal(t, task.WorkspaceID.UUID, build.WorkspaceID)
require.Equal(t, workspace.LatestBuild.BuildNumber+1, build.BuildNumber)
require.Equal(t, string(codersdk.CreateWorkspaceBuildReasonTaskManualPause), string(build.Reason))
})
t.Run("Non-owner role access", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
client := setupClient(t, db, ps, nil)
owner := coderdtest.CreateFirstUser(t, client)
cases := []struct {
name string
roles []rbac.RoleIdentifier
expectedStatus int
}{
{
name: "org_member",
expectedStatus: http.StatusNotFound,
},
{
name: "org_admin",
roles: []rbac.RoleIdentifier{rbac.ScopedRoleOrgAdmin(owner.OrganizationID)},
expectedStatus: http.StatusAccepted,
},
{
name: "sitewide_member",
roles: []rbac.RoleIdentifier{rbac.RoleMember()},
expectedStatus: http.StatusNotFound,
},
{
name: "sitewide_admin",
roles: []rbac.RoleIdentifier{rbac.RoleOwner()},
expectedStatus: http.StatusAccepted,
},
}
for _, tc := range cases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
task, _ := setupWorkspaceTask(t, db, owner)
userClient, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, tc.roles...)
resp, err := userClient.PauseTask(ctx, codersdk.Me, task.ID)
if tc.expectedStatus == http.StatusAccepted {
require.NoError(t, err)
require.NotNil(t, resp.WorkspaceBuild)
require.NotEqual(t, uuid.Nil, resp.WorkspaceBuild.ID)
return
}
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, tc.expectedStatus, apiErr.StatusCode())
})
}
})
t.Run("Task not found", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
_ = coderdtest.CreateFirstUser(t, client)
_, err := client.PauseTask(ctx, codersdk.Me, uuid.New())
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
})
t.Run("Task lookup forbidden", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
auth := &coderdtest.FakeAuthorizer{
ConditionalReturn: func(_ context.Context, _ rbac.Subject, action policy.Action, object rbac.Object) error {
if action == policy.ActionRead && object.Type == rbac.ResourceTask.Type {
return rbac.UnauthorizedError{}
}
return nil
},
}
client := setupClient(t, db, ps, auth)
user := coderdtest.CreateFirstUser(t, client)
task, _ := setupWorkspaceTask(t, db, user)
_, err := client.PauseTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
})
t.Run("Workspace lookup forbidden", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
auth := &coderdtest.FakeAuthorizer{
ConditionalReturn: func(_ context.Context, _ rbac.Subject, action policy.Action, object rbac.Object) error {
if action == policy.ActionRead && object.Type == rbac.ResourceWorkspace.Type {
return rbac.UnauthorizedError{}
}
return nil
},
}
client := setupClient(t, db, ps, auth)
user := coderdtest.CreateFirstUser(t, client)
task, _ := setupWorkspaceTask(t, db, user)
_, err := client.PauseTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
})
t.Run("No Workspace for Task", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
client := setupClient(t, db, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
workspaceBuild := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).Do()
task := dbgen.Task(t, db, database.TaskTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
TemplateVersionID: workspaceBuild.Build.TemplateVersionID,
Prompt: "no workspace",
})
_, err := client.PauseTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
require.Equal(t, "Task does not have a workspace.", apiErr.Message)
})
t.Run("Workspace not found", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
var workspaceID uuid.UUID
wrapped := aiTaskStoreWrapper{
Store: db,
getWorkspaceByID: func(ctx context.Context, id uuid.UUID) (database.Workspace, error) {
if id == workspaceID && id != uuid.Nil {
return database.Workspace{}, sql.ErrNoRows
}
return db.GetWorkspaceByID(ctx, id)
},
}
client := setupClient(t, wrapped, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
task, workspaceIDValue := setupWorkspaceTask(t, db, user)
workspaceID = workspaceIDValue
_, err := client.PauseTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusNotFound, apiErr.StatusCode())
})
t.Run("Workspace lookup internal error", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
var workspaceID uuid.UUID
wrapped := aiTaskStoreWrapper{
Store: db,
getWorkspaceByID: func(ctx context.Context, id uuid.UUID) (database.Workspace, error) {
if id == workspaceID && id != uuid.Nil {
return database.Workspace{}, xerrors.New("boom")
}
return db.GetWorkspaceByID(ctx, id)
},
}
client := setupClient(t, wrapped, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
task, workspaceIDValue := setupWorkspaceTask(t, db, user)
workspaceID = workspaceIDValue
_, err := client.PauseTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
require.Equal(t, "Internal error fetching task workspace.", apiErr.Message)
})
t.Run("Build Forbidden", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
auth := &coderdtest.FakeAuthorizer{
ConditionalReturn: func(_ context.Context, _ rbac.Subject, action policy.Action, object rbac.Object) error {
if action == policy.ActionWorkspaceStop && object.Type == rbac.ResourceWorkspace.Type {
return rbac.UnauthorizedError{}
}
return nil
},
}
client := setupClient(t, db, ps, auth)
user := coderdtest.CreateFirstUser(t, client)
task, _ := setupWorkspaceTask(t, db, user)
_, err := client.PauseTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusForbidden, apiErr.StatusCode())
})
t.Run("Job already in progress", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
client := setupClient(t, db, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
workspaceBuild := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).
WithTask(database.TaskTable{
Prompt: "pause me",
}, nil).
Starting().
Do()
_, err := client.PauseTask(ctx, codersdk.Me, workspaceBuild.Task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusConflict, apiErr.StatusCode())
})
t.Run("Build Internal Error", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
db, ps := dbtestutil.NewDB(t)
wrapped := aiTaskStoreWrapper{
Store: db,
insertWorkspaceBuild: func(_ context.Context, _ database.InsertWorkspaceBuildParams) error {
return xerrors.New("insert failed")
},
}
client := setupClient(t, wrapped, ps, nil)
user := coderdtest.CreateFirstUser(t, client)
task, _ := setupWorkspaceTask(t, db, user)
_, err := client.PauseTask(ctx, codersdk.Me, task.ID)
var apiErr *codersdk.Error
require.ErrorAs(t, err, &apiErr)
require.Equal(t, http.StatusInternalServerError, apiErr.StatusCode())
})
}
+56 -3
@@ -5824,6 +5824,48 @@ const docTemplate = `{
}
}
},
"/tasks/{user}/{task}/pause": {
"post": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": [
"application/json"
],
"tags": [
"Tasks"
],
"summary": "Pause task",
"operationId": "pause-task",
"parameters": [
{
"type": "string",
"description": "Username, user ID, or 'me' for the authenticated user",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "uuid",
"description": "Task ID",
"name": "task",
"in": "path",
"required": true
}
],
"responses": {
"202": {
"description": "Accepted",
"schema": {
"$ref": "#/definitions/codersdk.PauseTaskResponse"
}
}
}
}
},
"/tasks/{user}/{task}/send": {
"post": {
"security": [
@@ -14102,14 +14144,16 @@ const docTemplate = `{
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection"
"jetbrains_connection",
"task_manual_pause"
],
"x-enum-varnames": [
"CreateWorkspaceBuildReasonDashboard",
"CreateWorkspaceBuildReasonCLI",
"CreateWorkspaceBuildReasonSSHConnection",
"CreateWorkspaceBuildReasonVSCodeConnection",
"CreateWorkspaceBuildReasonJetbrainsConnection"
"CreateWorkspaceBuildReasonJetbrainsConnection",
"CreateWorkspaceBuildReasonTaskManualPause"
]
},
"codersdk.CreateWorkspaceBuildRequest": {
@@ -14143,7 +14187,8 @@ const docTemplate = `{
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection"
"jetbrains_connection",
"task_manual_pause"
],
"allOf": [
{
@@ -17014,6 +17059,14 @@ const docTemplate = `{
}
}
},
"codersdk.PauseTaskResponse": {
"type": "object",
"properties": {
"workspace_build": {
"$ref": "#/definitions/codersdk.WorkspaceBuild"
}
}
},
"codersdk.Permission": {
"type": "object",
"properties": {
+52 -3
@@ -5147,6 +5147,44 @@
}
}
},
"/tasks/{user}/{task}/pause": {
"post": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": ["application/json"],
"tags": ["Tasks"],
"summary": "Pause task",
"operationId": "pause-task",
"parameters": [
{
"type": "string",
"description": "Username, user ID, or 'me' for the authenticated user",
"name": "user",
"in": "path",
"required": true
},
{
"type": "string",
"format": "uuid",
"description": "Task ID",
"name": "task",
"in": "path",
"required": true
}
],
"responses": {
"202": {
"description": "Accepted",
"schema": {
"$ref": "#/definitions/codersdk.PauseTaskResponse"
}
}
}
}
},
"/tasks/{user}/{task}/send": {
"post": {
"security": [
@@ -12662,14 +12700,16 @@
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection"
"jetbrains_connection",
"task_manual_pause"
],
"x-enum-varnames": [
"CreateWorkspaceBuildReasonDashboard",
"CreateWorkspaceBuildReasonCLI",
"CreateWorkspaceBuildReasonSSHConnection",
"CreateWorkspaceBuildReasonVSCodeConnection",
"CreateWorkspaceBuildReasonJetbrainsConnection"
"CreateWorkspaceBuildReasonJetbrainsConnection",
"CreateWorkspaceBuildReasonTaskManualPause"
]
},
"codersdk.CreateWorkspaceBuildRequest": {
@@ -12699,7 +12739,8 @@
"cli",
"ssh_connection",
"vscode_connection",
"jetbrains_connection"
"jetbrains_connection",
"task_manual_pause"
],
"allOf": [
{
@@ -15477,6 +15518,14 @@
}
}
},
"codersdk.PauseTaskResponse": {
"type": "object",
"properties": {
"workspace_build": {
"$ref": "#/definitions/codersdk.WorkspaceBuild"
}
}
},
"codersdk.Permission": {
"type": "object",
"properties": {
+1
@@ -1078,6 +1078,7 @@ func New(options *Options) *API {
r.Patch("/input", api.taskUpdateInput)
r.Post("/send", api.taskSend)
r.Get("/logs", api.taskLogs)
r.Post("/pause", api.pauseTask)
})
})
})
+41 -2
@@ -173,6 +173,11 @@ type FakeIDP struct {
// externalProviderID is optional to match the provider in coderd for
// redirectURLs.
externalProviderID string
// backchannelBaseURL overrides server-to-server endpoint URLs
// (token, userinfo, jwks, revocation, device auth) in the OIDC
// discovery response. The authorization_endpoint stays on the
// issuer URL so browsers can still reach it.
backchannelBaseURL string
logger slog.Logger
// externalAuthValidate will be called when the user tries to validate their
// external auth. The fake IDP will reject any invalid tokens, so this just
@@ -372,6 +377,12 @@ func WithServing() func(*FakeIDP) {
}
}
func WithBackchannelBaseURL(u string) func(*FakeIDP) {
return func(f *FakeIDP) {
f.backchannelBaseURL = u
}
}
func WithIssuer(issuer string) func(*FakeIDP) {
return func(f *FakeIDP) {
f.locked.SetIssuer(issuer)
@@ -504,6 +515,13 @@ func (f *FakeIDP) IssuerURL() *url.URL {
return f.locked.IssuerURL()
}
// Handler returns the HTTP handler for the fake IDP. This can be used to serve
// the IDP on a custom address without using WithServing() which overrides the
// issuer URL.
func (f *FakeIDP) Handler() http.Handler {
return f.locked.Handler()
}
func (f *FakeIDP) updateIssuerURL(t testing.TB, issuer string) {
t.Helper()
@@ -514,7 +532,7 @@ func (f *FakeIDP) updateIssuerURL(t testing.TB, issuer string) {
f.locked.SetIssuerURL(u)
// ProviderJSON is the JSON representation of the OpenID Connect provider
// These are all the urls that the IDP will respond to.
f.locked.SetProvider(ProviderJSON{
pj := ProviderJSON{
Issuer: issuer,
AuthURL: u.ResolveReference(&url.URL{Path: authorizePath}).String(),
TokenURL: u.ResolveReference(&url.URL{Path: tokenPath}).String(),
@@ -526,7 +544,25 @@ func (f *FakeIDP) updateIssuerURL(t testing.TB, issuer string) {
"RS256",
},
ExternalAuthURL: u.ResolveReference(&url.URL{Path: "/external-auth-validate/user"}).String(),
})
}
// If a backchannel base URL is configured, override the
// server-to-server endpoints so that coderd (running in a
// container) can reach the IDP over the Docker network while
// browsers keep using the issuer URL for authorization.
if f.backchannelBaseURL != "" {
bu, err := url.Parse(f.backchannelBaseURL)
require.NoError(t, err, "invalid backchannel base URL")
pj.TokenURL = bu.ResolveReference(&url.URL{Path: tokenPath}).String()
pj.JWKSURL = bu.ResolveReference(&url.URL{Path: keysPath}).String()
pj.UserInfoURL = bu.ResolveReference(&url.URL{Path: userInfoPath}).String()
pj.RevokeURL = bu.ResolveReference(&url.URL{Path: revokeTokenPath}).String()
pj.DeviceCodeURL = bu.ResolveReference(&url.URL{Path: deviceAuth}).String()
pj.ExternalAuthURL = bu.ResolveReference(&url.URL{Path: "/external-auth-validate/user"}).String()
}
f.locked.SetProvider(pj)
}
// realServer turns the FakeIDP into a real http server.
@@ -541,6 +577,9 @@ func (f *FakeIDP) realServer(t testing.TB) *httptest.Server {
}
}
srvURL = strings.ReplaceAll(srvURL, "127.0.0.1", "0.0.0.0")
srvURL = strings.ReplaceAll(srvURL, "localhost", "0.0.0.0")
l, err := net.Listen("tcp", srvURL)
require.NoError(t, err, "failed to create listener")
+1 -1
@@ -384,7 +384,7 @@ func (api *API) postWorkspaceBuildsInternal(
Experiments(api.Experiments).
TemplateVersionPresetID(createBuild.TemplateVersionPresetID)
if transition == database.WorkspaceTransitionStart && createBuild.Reason != "" {
if (transition == database.WorkspaceTransitionStart || transition == database.WorkspaceTransitionStop) && createBuild.Reason != "" {
builder = builder.Reason(database.BuildReason(createBuild.Reason))
}
+25
@@ -329,6 +329,31 @@ func (c *Client) UpdateTaskInput(ctx context.Context, user string, id uuid.UUID,
return nil
}
// PauseTaskResponse represents the response from pausing a task.
type PauseTaskResponse struct {
WorkspaceBuild *WorkspaceBuild `json:"workspace_build"`
}
// PauseTask pauses a task by stopping its workspace.
// Experimental: uses the /api/experimental endpoint.
func (c *Client) PauseTask(ctx context.Context, user string, id uuid.UUID) (PauseTaskResponse, error) {
res, err := c.Request(ctx, http.MethodPost, fmt.Sprintf("/api/experimental/tasks/%s/%s/pause", user, id.String()), nil)
if err != nil {
return PauseTaskResponse{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
return PauseTaskResponse{}, ReadBodyAsError(res)
}
var resp PauseTaskResponse
if err := json.NewDecoder(res.Body).Decode(&resp); err != nil {
return PauseTaskResponse{}, err
}
return resp, nil
}
// TaskLogType indicates the source of a task log entry.
type TaskLogType string
+2 -1
@@ -109,6 +109,7 @@ const (
CreateWorkspaceBuildReasonSSHConnection CreateWorkspaceBuildReason = "ssh_connection"
CreateWorkspaceBuildReasonVSCodeConnection CreateWorkspaceBuildReason = "vscode_connection"
CreateWorkspaceBuildReasonJetbrainsConnection CreateWorkspaceBuildReason = "jetbrains_connection"
CreateWorkspaceBuildReasonTaskManualPause CreateWorkspaceBuildReason = "task_manual_pause"
)
// CreateWorkspaceBuildRequest provides options to update the latest workspace build.
@@ -129,7 +130,7 @@ type CreateWorkspaceBuildRequest struct {
// TemplateVersionPresetID is the ID of the template version preset to use for the build.
TemplateVersionPresetID uuid.UUID `json:"template_version_preset_id,omitempty" format:"uuid"`
// Reason sets the reason for the workspace build.
Reason CreateWorkspaceBuildReason `json:"reason,omitempty" validate:"omitempty,oneof=dashboard cli ssh_connection vscode_connection jetbrains_connection"`
Reason CreateWorkspaceBuildReason `json:"reason,omitempty" validate:"omitempty,oneof=dashboard cli ssh_connection vscode_connection jetbrains_connection task_manual_pause"`
}
type WorkspaceOptions struct {
+2 -6
@@ -220,16 +220,12 @@ screen-readers; a placeholder text value is not enough for all users.
When possible, make sure that all image/graphic elements have accompanying text
that describes the image. `<img />` elements should have an `alt` text value. In
other situations, it might make sense to place invisible, descriptive text
inside the component itself using MUI's `visuallyHidden` utility function.
inside the component itself using Tailwind's `sr-only` class.
```tsx
import { visuallyHidden } from "@mui/utils";
<Button>
<GearIcon />
<Box component="span" sx={visuallyHidden}>
Settings
</Box>
<span className="sr-only">Settings</span>
</Button>;
```
+227 -8
@@ -2184,9 +2184,9 @@ This is required on creation to enable a user-flow of validating a template work
#### Enumerated Values
| Value(s) |
|-----------------------------------------------------------------------------------|
| `cli`, `dashboard`, `jetbrains_connection`, `ssh_connection`, `vscode_connection` |
| Value(s) |
|--------------------------------------------------------------------------------------------------------|
| `cli`, `dashboard`, `jetbrains_connection`, `ssh_connection`, `task_manual_pause`, `vscode_connection` |
## codersdk.CreateWorkspaceBuildRequest
@@ -2227,11 +2227,11 @@ This is required on creation to enable a user-flow of validating a template work
#### Enumerated Values
| Property | Value(s) |
|--------------|-----------------------------------------------------------------------------------|
| `log_level` | `debug` |
| `reason` | `cli`, `dashboard`, `jetbrains_connection`, `ssh_connection`, `vscode_connection` |
| `transition` | `delete`, `start`, `stop` |
| Property | Value(s) |
|--------------|--------------------------------------------------------------------------------------------------------|
| `log_level` | `debug` |
| `reason` | `cli`, `dashboard`, `jetbrains_connection`, `ssh_connection`, `task_manual_pause`, `vscode_connection` |
| `transition` | `delete`, `start`, `stop` |
## codersdk.CreateWorkspaceProxyRequest
@@ -6178,6 +6178,225 @@ Only certain features set these fields: - FeatureManagedAgentLimit|
| `name` | string | true | | |
| `regenerate_token` | boolean | false | | |
## codersdk.PauseTaskResponse
```json
{
"workspace_build": {
"build_number": 0,
"created_at": "2019-08-24T14:15:22Z",
"daily_cost": 0,
"deadline": "2019-08-24T14:15:22Z",
"has_ai_task": true,
"has_external_agent": true,
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"initiator_id": "06588898-9a84-4b35-ba8f-f9cbd64946f3",
"initiator_name": "string",
"job": {
"available_workers": [
"497f6eca-6276-4993-bfeb-53cbbbba6f08"
],
"canceled_at": "2019-08-24T14:15:22Z",
"completed_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"error": "string",
"error_code": "REQUIRED_TEMPLATE_VARIABLES",
"file_id": "8a0cfb4f-ddc9-436d-91bb-75133c583767",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"initiator_id": "06588898-9a84-4b35-ba8f-f9cbd64946f3",
"input": {
"error": "string",
"template_version_id": "0ba39c92-1f1b-4c32-aa3e-9925d7713eb1",
"workspace_build_id": "badaf2eb-96c5-4050-9f1d-db2d39ca5478"
},
"logs_overflowed": true,
"metadata": {
"template_display_name": "string",
"template_icon": "string",
"template_id": "c6d67e98-83ea-49f0-8812-e4abae2b68bc",
"template_name": "string",
"template_version_name": "string",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
},
"organization_id": "7c60d51f-b44e-4682-87d6-449835ea4de6",
"queue_position": 0,
"queue_size": 0,
"started_at": "2019-08-24T14:15:22Z",
"status": "pending",
"tags": {
"property1": "string",
"property2": "string"
},
"type": "template_version_import",
"worker_id": "ae5fa6f7-c55b-40c1-b40a-b36ac467652b",
"worker_name": "string"
},
"matched_provisioners": {
"available": 0,
"count": 0,
"most_recently_seen": "2019-08-24T14:15:22Z"
},
"max_deadline": "2019-08-24T14:15:22Z",
"reason": "initiator",
"resources": [
{
"agents": [
{
"api_version": "string",
"apps": [
{
"command": "string",
"display_name": "string",
"external": true,
"group": "string",
"health": "disabled",
"healthcheck": {
"interval": 0,
"threshold": 0,
"url": "string"
},
"hidden": true,
"icon": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"open_in": "slim-window",
"sharing_level": "owner",
"slug": "string",
"statuses": [
{
"agent_id": "2b1e3b65-2c04-4fa2-a2d7-467901e98978",
"app_id": "affd1d10-9538-4fc8-9e0b-4594a28c1335",
"created_at": "2019-08-24T14:15:22Z",
"icon": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"message": "string",
"needs_user_attention": true,
"state": "working",
"uri": "string",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9"
}
],
"subdomain": true,
"subdomain_name": "string",
"tooltip": "string",
"url": "string"
}
],
"architecture": "string",
"connection_timeout_seconds": 0,
"created_at": "2019-08-24T14:15:22Z",
"directory": "string",
"disconnected_at": "2019-08-24T14:15:22Z",
"display_apps": [
"vscode"
],
"environment_variables": {
"property1": "string",
"property2": "string"
},
"expanded_directory": "string",
"first_connected_at": "2019-08-24T14:15:22Z",
"health": {
"healthy": false,
"reason": "agent has lost connection"
},
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"instance_id": "string",
"last_connected_at": "2019-08-24T14:15:22Z",
"latency": {
"property1": {
"latency_ms": 0,
"preferred": true
},
"property2": {
"latency_ms": 0,
"preferred": true
}
},
"lifecycle_state": "created",
"log_sources": [
{
"created_at": "2019-08-24T14:15:22Z",
"display_name": "string",
"icon": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"workspace_agent_id": "7ad2e618-fea7-4c1a-b70a-f501566a72f1"
}
],
"logs_length": 0,
"logs_overflowed": true,
"name": "string",
"operating_system": "string",
"parent_id": {
"uuid": "string",
"valid": true
},
"ready_at": "2019-08-24T14:15:22Z",
"resource_id": "4d5215ed-38bb-48ed-879a-fdb9ca58522f",
"scripts": [
{
"cron": "string",
"display_name": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"log_path": "string",
"log_source_id": "4197ab25-95cf-4b91-9c78-f7f2af5d353a",
"run_on_start": true,
"run_on_stop": true,
"script": "string",
"start_blocks_login": true,
"timeout": 0
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
"subsystems": [
"envbox"
],
"troubleshooting_url": "string",
"updated_at": "2019-08-24T14:15:22Z",
"version": "string"
}
],
"created_at": "2019-08-24T14:15:22Z",
"daily_cost": 0,
"hide": true,
"icon": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"job_id": "453bd7d7-5355-4d6d-a38e-d9e7eb218c3f",
"metadata": [
{
"key": "string",
"sensitive": true,
"value": "string"
}
],
"name": "string",
"type": "string",
"workspace_transition": "start"
}
],
"status": "pending",
"template_version_id": "0ba39c92-1f1b-4c32-aa3e-9925d7713eb1",
"template_version_name": "string",
"template_version_preset_id": "512a53a7-30da-446e-a1fc-713c630baff1",
"transition": "start",
"updated_at": "2019-08-24T14:15:22Z",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string",
"workspace_owner_avatar_url": "string",
"workspace_owner_id": "e7078695-5279-4c86-8774-3ac2367a2fc7",
"workspace_owner_name": "string"
}
}
```
### Properties
| Name | Type | Required | Restrictions | Description |
|-------------------|----------------------------------------------------|----------|--------------|-------------|
| `workspace_build` | [codersdk.WorkspaceBuild](#codersdkworkspacebuild) | false | | |
## codersdk.Permission
```json
+32
@@ -365,6 +365,38 @@ curl -X GET http://coder-server:8080/api/v2/tasks/{user}/{task}/logs \
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Pause task
### Code samples
```shell
# Example request using curl
curl -X POST http://coder-server:8080/api/v2/tasks/{user}/{task}/pause \
-H 'Accept: */*' \
-H 'Coder-Session-Token: API_KEY'
```
`POST /tasks/{user}/{task}/pause`
### Parameters
| Name | In | Type | Required | Description |
|--------|------|--------------|----------|-------------------------------------------------------|
| `user` | path | string | true | Username, user ID, or 'me' for the authenticated user |
| `task` | path | string(uuid) | true | Task ID |
### Example responses
> 202 Response
### Responses
| Status | Meaning | Description | Schema |
|--------|---------------------------------------------------------------|-------------|--------------------------------------------------------------------|
| 202 | [Accepted](https://tools.ietf.org/html/rfc7231#section-6.3.3) | Accepted | [codersdk.PauseTaskResponse](schemas.md#codersdkpausetaskresponse) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Send input to AI task
### Code samples
+51 -36
@@ -1,5 +1,5 @@
# 1.86.0
FROM rust:slim@sha256:df6ca8f96d338697ccdbe3ccac57a85d2172e03a2429c2d243e74f3bb83ba2f5 AS rust-utils
FROM rust:slim@sha256:760ad1d638d70ebbd0c61e06210e1289cbe45ff6425e3ea6e01241de3e14d08e AS rust-utils
# Install rust helper programs
ENV CARGO_INSTALL_ROOT=/tmp/
# Use more reliable mirrors for Debian packages
@@ -9,16 +9,20 @@ RUN apt-get update && apt-get install -y libssl-dev openssl pkg-config build-ess
RUN cargo install jj-cli typos-cli watchexec-cli
FROM ubuntu:jammy@sha256:c7eb020043d8fc2ae0793fb35a37bff1cf33f156d4d4b12ccc7f3ef8706c38b1 AS go
ARG TARGETARCH
# Install Go manually, so that we can control the version
ARG GO_VERSION=1.25.6
ARG GO_CHECKSUM="f022b6aad78e362bcba9b0b94d09ad58c5a70c6ba3b7582905fababf5fe0181a"
# Boring Go is needed to build FIPS-compliant binaries.
RUN apt-get update && \
apt-get install --yes curl && \
case ${TARGETARCH} in \
amd64) GO_CHECKSUM="f022b6aad78e362bcba9b0b94d09ad58c5a70c6ba3b7582905fababf5fe0181a" ;; \
arm64) GO_CHECKSUM="738ef87d79c34272424ccdf83302b7b0300b8b096ed443896089306117943dd5" ;; \
esac && \
curl --silent --show-error --location \
"https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
"https://go.dev/dl/go${GO_VERSION}.linux-${TARGETARCH}.tar.gz" \
-o /usr/local/go.tar.gz && \
echo "$GO_CHECKSUM /usr/local/go.tar.gz" | sha256sum -c && \
rm -rf /var/lib/apt/lists/*
@@ -94,17 +98,19 @@ RUN apt-get update && \
rm -rf /tmp/go/pkg && \
rm -rf /tmp/go/src
# alpine:3.18
FROM us-docker.pkg.dev/coder-v2-images-public/public/alpine@sha256:fd032399cd767f310a1d1274e81cab9f0fd8a49b3589eba2c3420228cd45b6a7 AS proto
FROM alpine:3.18 AS proto
ARG TARGETARCH
WORKDIR /tmp
RUN apk add curl unzip
RUN curl -L -o protoc.zip https://github.com/protocolbuffers/protobuf/releases/download/v23.4/protoc-23.4-linux-x86_64.zip && \
RUN case ${TARGETARCH} in amd64) PROTOC_ARCH=x86_64;; arm64) PROTOC_ARCH=aarch_64;; esac && \
curl -L -o protoc.zip "https://github.com/protocolbuffers/protobuf/releases/download/v23.4/protoc-23.4-linux-${PROTOC_ARCH}.zip" && \
unzip protoc.zip && \
rm protoc.zip
FROM ubuntu:jammy@sha256:c7eb020043d8fc2ae0793fb35a37bff1cf33f156d4d4b12ccc7f3ef8706c38b1
SHELL ["/bin/bash", "-c"]
ARG TARGETARCH
# Install packages from apt repositories
ARG DEBIAN_FRONTEND="noninteractive"
@@ -214,7 +220,7 @@ RUN sed -i 's|http://archive.ubuntu.com/ubuntu/|http://mirrors.edge.kernel.org/u
# NOTE: In scripts/Dockerfile.base we specifically install Terraform version 1.12.2.
# Installing the same version here to match.
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.14.1/terraform_1.14.1_linux_amd64.zip" && \
RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.14.1/terraform_1.14.1_linux_${TARGETARCH}.zip" && \
unzip /tmp/terraform.zip -d /usr/local/bin && \
rm -f /tmp/terraform.zip && \
chmod +x /usr/local/bin/terraform && \
@@ -223,27 +229,28 @@ RUN wget -O /tmp/terraform.zip "https://releases.hashicorp.com/terraform/1.14.1/
# Install the docker buildx component.
RUN DOCKER_BUILDX_VERSION=$(curl -s "https://api.github.com/repos/docker/buildx/releases/latest" | grep '"tag_name":' | sed -E 's/.*"(v[^"]+)".*/\1/') && \
mkdir -p /usr/local/lib/docker/cli-plugins && \
curl -Lo /usr/local/lib/docker/cli-plugins/docker-buildx "https://github.com/docker/buildx/releases/download/${DOCKER_BUILDX_VERSION}/buildx-${DOCKER_BUILDX_VERSION}.linux-amd64" && \
curl -Lo /usr/local/lib/docker/cli-plugins/docker-buildx "https://github.com/docker/buildx/releases/download/${DOCKER_BUILDX_VERSION}/buildx-${DOCKER_BUILDX_VERSION}.linux-${TARGETARCH}" && \
chmod a+x /usr/local/lib/docker/cli-plugins/docker-buildx
# See https://github.com/cli/cli/issues/6175#issuecomment-1235984381 for proof
# the apt repository is unreliable
RUN GH_CLI_VERSION=$(curl -s "https://api.github.com/repos/cli/cli/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v([^"]+)".*/\1/') && \
curl -L https://github.com/cli/cli/releases/download/v${GH_CLI_VERSION}/gh_${GH_CLI_VERSION}_linux_amd64.deb -o gh.deb && \
curl -L https://github.com/cli/cli/releases/download/v${GH_CLI_VERSION}/gh_${GH_CLI_VERSION}_linux_${TARGETARCH}.deb -o gh.deb && \
dpkg -i gh.deb && \
rm gh.deb
# Install Lazygit
# See https://github.com/jesseduffield/lazygit#ubuntu
RUN LAZYGIT_VERSION=$(curl -s "https://api.github.com/repos/jesseduffield/lazygit/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v*([^"]+)".*/\1/') && \
curl -Lo lazygit.tar.gz "https://github.com/jesseduffield/lazygit/releases/latest/download/lazygit_${LAZYGIT_VERSION}_Linux_x86_64.tar.gz" && \
RUN case ${TARGETARCH} in amd64) LAZYGIT_ARCH=x86_64;; arm64) LAZYGIT_ARCH=arm64;; esac && \
LAZYGIT_VERSION=$(curl -s "https://api.github.com/repos/jesseduffield/lazygit/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v*([^"]+)".*/\1/') && \
curl -Lo lazygit.tar.gz "https://github.com/jesseduffield/lazygit/releases/latest/download/lazygit_${LAZYGIT_VERSION}_Linux_${LAZYGIT_ARCH}.tar.gz" && \
tar xf lazygit.tar.gz -C /usr/local/bin lazygit && \
rm lazygit.tar.gz
# Install doctl
# See https://docs.digitalocean.com/reference/doctl/how-to/install
RUN DOCTL_VERSION=$(curl -s "https://api.github.com/repos/digitalocean/doctl/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v([^"]+)".*/\1/') && \
curl -L https://github.com/digitalocean/doctl/releases/download/v${DOCTL_VERSION}/doctl-${DOCTL_VERSION}-linux-amd64.tar.gz -o doctl.tar.gz && \
curl -L https://github.com/digitalocean/doctl/releases/download/v${DOCTL_VERSION}/doctl-${DOCTL_VERSION}-linux-${TARGETARCH}.tar.gz -o doctl.tar.gz && \
tar xf doctl.tar.gz -C /usr/local/bin doctl && \
rm doctl.tar.gz
@@ -289,12 +296,12 @@ RUN systemctl enable \
# Install tools with published releases, where that is the
# preferred/recommended installation method.
ARG CLOUD_SQL_PROXY_VERSION=2.2.0 \
DIVE_VERSION=0.10.0 \
DIVE_VERSION=0.12.0 \
DOCKER_GCR_VERSION=2.1.8 \
GOLANGCI_LINT_VERSION=1.64.8 \
GRYPE_VERSION=0.61.1 \
HELM_VERSION=3.12.0 \
KUBE_LINTER_VERSION=0.6.3 \
KUBE_LINTER_VERSION=0.8.1 \
KUBECTX_VERSION=0.9.4 \
STRIPE_VERSION=1.14.5 \
TERRAGRUNT_VERSION=0.45.11 \
@@ -303,58 +310,66 @@ ARG CLOUD_SQL_PROXY_VERSION=2.2.0 \
COSIGN_VERSION=2.4.3 \
BUN_VERSION=1.2.15
# cloud_sql_proxy, for connecting to cloudsql instances
# the upstream go.mod prevents this from being installed with go install
RUN curl --silent --show-error --location --output /usr/local/bin/cloud_sql_proxy "https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v${CLOUD_SQL_PROXY_VERSION}/cloud-sql-proxy.linux.amd64" && \
# Map TARGETARCH to variant names used by different projects.
# ALT_ARCH: amd64->x86_64, arm64->arm64 (lazygit, kubectx, kubens, stripe)
# TRIVY_ARCH: amd64->Linux-64bit, arm64->Linux-ARM64
# BUN_ARCH/BUN_DIR: amd64->x64/bun-linux-x64, arm64->aarch64/bun-linux-aarch64
RUN case ${TARGETARCH} in \
amd64) ALT_ARCH=x86_64; TRIVY_ARCH=Linux-64bit; BUN_ARCH=x64; BUN_DIR=bun-linux-x64 ;; \
arm64) ALT_ARCH=arm64; TRIVY_ARCH=Linux-ARM64; BUN_ARCH=aarch64; BUN_DIR=bun-linux-aarch64 ;; \
esac && \
# cloud_sql_proxy, for connecting to cloudsql instances
# the upstream go.mod prevents this from being installed with go install
curl --silent --show-error --location --output /usr/local/bin/cloud_sql_proxy "https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v${CLOUD_SQL_PROXY_VERSION}/cloud-sql-proxy.linux.${TARGETARCH}" && \
chmod a=rx /usr/local/bin/cloud_sql_proxy && \
# dive for scanning image layer utilization metrics in CI
curl --silent --show-error --location "https://github.com/wagoodman/dive/releases/download/v${DIVE_VERSION}/dive_${DIVE_VERSION}_linux_amd64.tar.gz" | \
curl --silent --show-error --location "https://github.com/wagoodman/dive/releases/download/v${DIVE_VERSION}/dive_${DIVE_VERSION}_linux_${TARGETARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- dive && \
# docker-credential-gcr is a Docker credential helper for pushing/pulling
# images from Google Container Registry and Artifact Registry
curl --silent --show-error --location "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${DOCKER_GCR_VERSION}/docker-credential-gcr_linux_amd64-${DOCKER_GCR_VERSION}.tar.gz" | \
curl --silent --show-error --location "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${DOCKER_GCR_VERSION}/docker-credential-gcr_linux_${TARGETARCH}-${DOCKER_GCR_VERSION}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- docker-credential-gcr && \
# golangci-lint performs static code analysis for our Go code
curl --silent --show-error --location "https://github.com/golangci/golangci-lint/releases/download/v${GOLANGCI_LINT_VERSION}/golangci-lint-${GOLANGCI_LINT_VERSION}-linux-amd64.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- --strip-components=1 "golangci-lint-${GOLANGCI_LINT_VERSION}-linux-amd64/golangci-lint" && \
curl --silent --show-error --location "https://github.com/golangci/golangci-lint/releases/download/v${GOLANGCI_LINT_VERSION}/golangci-lint-${GOLANGCI_LINT_VERSION}-linux-${TARGETARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- --strip-components=1 "golangci-lint-${GOLANGCI_LINT_VERSION}-linux-${TARGETARCH}/golangci-lint" && \
# Anchore Grype for scanning container images for security issues
curl --silent --show-error --location "https://github.com/anchore/grype/releases/download/v${GRYPE_VERSION}/grype_${GRYPE_VERSION}_linux_amd64.tar.gz" | \
curl --silent --show-error --location "https://github.com/anchore/grype/releases/download/v${GRYPE_VERSION}/grype_${GRYPE_VERSION}_linux_${TARGETARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- grype && \
# Helm is necessary for deploying Coder
curl --silent --show-error --location "https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- --strip-components=1 linux-amd64/helm && \
curl --silent --show-error --location "https://get.helm.sh/helm-v${HELM_VERSION}-linux-${TARGETARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- --strip-components=1 linux-${TARGETARCH}/helm && \
# kube-linter for linting Kubernetes objects, including those
# that Helm generates from our charts
curl --silent --show-error --location "https://github.com/stackrox/kube-linter/releases/download/${KUBE_LINTER_VERSION}/kube-linter-linux" --output /usr/local/bin/kube-linter && \
curl --silent --show-error --location "https://github.com/stackrox/kube-linter/releases/download/v${KUBE_LINTER_VERSION}/kube-linter-linux_${TARGETARCH}" --output /usr/local/bin/kube-linter && \
# kubens and kubectx for managing Kubernetes namespaces and contexts
curl --silent --show-error --location "https://github.com/ahmetb/kubectx/releases/download/v${KUBECTX_VERSION}/kubectx_v${KUBECTX_VERSION}_linux_x86_64.tar.gz" | \
curl --silent --show-error --location "https://github.com/ahmetb/kubectx/releases/download/v${KUBECTX_VERSION}/kubectx_v${KUBECTX_VERSION}_linux_${ALT_ARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- kubectx && \
curl --silent --show-error --location "https://github.com/ahmetb/kubectx/releases/download/v${KUBECTX_VERSION}/kubens_v${KUBECTX_VERSION}_linux_x86_64.tar.gz" | \
curl --silent --show-error --location "https://github.com/ahmetb/kubectx/releases/download/v${KUBECTX_VERSION}/kubens_v${KUBECTX_VERSION}_linux_${ALT_ARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- kubens && \
# stripe for coder.com billing API
curl --silent --show-error --location "https://github.com/stripe/stripe-cli/releases/download/v${STRIPE_VERSION}/stripe_${STRIPE_VERSION}_linux_x86_64.tar.gz" | \
curl --silent --show-error --location "https://github.com/stripe/stripe-cli/releases/download/v${STRIPE_VERSION}/stripe_${STRIPE_VERSION}_linux_${ALT_ARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- stripe && \
# terragrunt for running Terraform and Terragrunt files
curl --silent --show-error --location --output /usr/local/bin/terragrunt "https://github.com/gruntwork-io/terragrunt/releases/download/v${TERRAGRUNT_VERSION}/terragrunt_linux_amd64" && \
curl --silent --show-error --location --output /usr/local/bin/terragrunt "https://github.com/gruntwork-io/terragrunt/releases/download/v${TERRAGRUNT_VERSION}/terragrunt_linux_${TARGETARCH}" && \
chmod a=rx /usr/local/bin/terragrunt && \
# AquaSec Trivy for scanning container images for security issues
curl --silent --show-error --location "https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_Linux-64bit.tar.gz" | \
curl --silent --show-error --location "https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_${TRIVY_ARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- trivy && \
# Anchore Syft for SBOM generation
curl --silent --show-error --location "https://github.com/anchore/syft/releases/download/v${SYFT_VERSION}/syft_${SYFT_VERSION}_linux_amd64.tar.gz" | \
curl --silent --show-error --location "https://github.com/anchore/syft/releases/download/v${SYFT_VERSION}/syft_${SYFT_VERSION}_linux_${TARGETARCH}.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- syft && \
# Sigstore Cosign for artifact signing and attestation
curl --silent --show-error --location --output /usr/local/bin/cosign "https://github.com/sigstore/cosign/releases/download/v${COSIGN_VERSION}/cosign-linux-amd64" && \
curl --silent --show-error --location --output /usr/local/bin/cosign "https://github.com/sigstore/cosign/releases/download/v${COSIGN_VERSION}/cosign-linux-${TARGETARCH}" && \
chmod a=rx /usr/local/bin/cosign && \
# Install Bun JavaScript runtime to /usr/local/bin
# Ensure unzip is installed right before using it, falling back to a mirror if the default apt source is unavailable
(apt-get update || (sed -i 's|http://archive.ubuntu.com/ubuntu/|http://mirrors.edge.kernel.org/ubuntu/|g' /etc/apt/sources.list && apt-get update)) && \
apt-get install -y unzip && \
curl --silent --show-error --location --fail "https://github.com/oven-sh/bun/releases/download/bun-v${BUN_VERSION}/bun-linux-x64.zip" --output /tmp/bun.zip && \
curl --silent --show-error --location --fail "https://github.com/oven-sh/bun/releases/download/bun-v${BUN_VERSION}/bun-linux-${BUN_ARCH}.zip" --output /tmp/bun.zip && \
unzip -q /tmp/bun.zip -d /tmp && \
mv /tmp/bun-linux-x64/bun /usr/local/bin/ && \
mv /tmp/${BUN_DIR}/bun /usr/local/bin/ && \
chmod a=rx /usr/local/bin/bun && \
rm -rf /tmp/bun.zip /tmp/bun-linux-x64 && \
rm -rf /tmp/bun.zip /tmp/${BUN_DIR} && \
apt-get clean && rm -rf /var/lib/apt/lists/*
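The arch fan-out at the top of this `RUN` maps Docker's `TARGETARCH` onto the differing architecture spellings each upstream release uses. The same mapping as a Go sketch (the `assetArch` helper name is illustrative, not part of the Dockerfile):

```go
package main

import "fmt"

// assetArch mirrors the shell `case "$TARGETARCH"` above: one Docker
// TARGETARCH value expands into the ALT_ARCH, TRIVY_ARCH, and BUN_ARCH
// spellings used in the per-tool download URLs.
func assetArch(targetArch string) (altArch, trivyArch, bunArch string, ok bool) {
	switch targetArch {
	case "amd64":
		return "x86_64", "Linux-64bit", "x64", true
	case "arm64":
		return "arm64", "Linux-ARM64", "aarch64", true
	default:
		// Unlisted architectures have no release assets in this setup.
		return "", "", "", false
	}
}

func main() {
	alt, trivy, bun, _ := assetArch("arm64")
	fmt.Println(alt, trivy, bun) // arm64 Linux-ARM64 aarch64
}
```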
# We use yq during "make deploy" to manually substitute out fields in
@@ -366,7 +381,7 @@ RUN curl --silent --show-error --location --output /usr/local/bin/cloud_sql_prox
# tar --extract --gzip --directory=/usr/local/bin --file=- ./yq_linux_amd64 && \
# mv /usr/local/bin/yq_linux_amd64 /usr/local/bin/yq
RUN curl --silent --show-error --location --output /usr/local/bin/yq "https://github.com/mikefarah/yq/releases/download/3.3.0/yq_linux_amd64" && \
RUN curl --silent --show-error --location --output /usr/local/bin/yq "https://github.com/mikefarah/yq/releases/download/3.3.0/yq_linux_${TARGETARCH}" && \
chmod a=rx /usr/local/bin/yq
# Install GoLand.
+59 -4
@@ -2,8 +2,8 @@ package aibridged_test
import (
"context"
_ "embed"
"testing"
"testing/synctest"
"time"
"github.com/google/uuid"
@@ -105,10 +105,65 @@ func TestPool(t *testing.T) {
require.EqualValues(t, 2, cacheMetrics.KeysEvicted())
require.EqualValues(t, 1, cacheMetrics.Hits())
require.EqualValues(t, 3, cacheMetrics.Misses())
}
// TestPool_Expiry exercises cache-entry expiry.
// This requires Go 1.25's [synctest](https://pkg.go.dev/testing/synctest) since the
// internal cache lib cannot be tested using coder/quartz.
func TestPool_Expiry(t *testing.T) {
t.Parallel()
synctest.Test(t, func(t *testing.T) {
logger := slogtest.Make(t, nil)
ctrl := gomock.NewController(t)
client := mock.NewMockDRPCClient(ctrl)
mcpProxy := mcpmock.NewMockServerProxier(ctrl)
mcpProxy.EXPECT().Init(gomock.Any()).AnyTimes().Return(nil)
mcpProxy.EXPECT().Shutdown(gomock.Any()).AnyTimes().Return(nil)
const ttl = time.Second
opts := aibridged.PoolOptions{MaxItems: 1, TTL: ttl}
pool, err := aibridged.NewCachedBridgePool(opts, nil, logger, nil, testTracer)
require.NoError(t, err)
t.Cleanup(func() { pool.Shutdown(context.Background()) })
req := aibridged.Request{
SessionKey: "key",
InitiatorID: uuid.New(),
APIKeyID: uuid.New().String(),
}
clientFn := func() (aibridged.DRPCClient, error) {
return client, nil
}
ctx := t.Context()
// First acquire is a cache miss.
_, err = pool.Acquire(ctx, req, clientFn, newMockMCPFactory(mcpProxy))
require.NoError(t, err)
// Second acquire is a cache hit.
_, err = pool.Acquire(ctx, req, clientFn, newMockMCPFactory(mcpProxy))
require.NoError(t, err)
metrics := pool.CacheMetrics()
require.EqualValues(t, 1, metrics.Misses())
require.EqualValues(t, 1, metrics.Hits())
// TTL expires
time.Sleep(ttl + time.Millisecond)
// Third acquire is a cache miss because the entry expired.
_, err = pool.Acquire(ctx, req, clientFn, newMockMCPFactory(mcpProxy))
require.NoError(t, err)
metrics = pool.CacheMetrics()
require.EqualValues(t, 2, metrics.Misses())
require.EqualValues(t, 1, metrics.Hits())
// Wait for all eviction goroutines to complete before gomock's ctrl.Finish()
// runs in test cleanup. ristretto's OnEvict callback spawns goroutines that
// need to finish calling mcpProxy.Shutdown() before ctrl.Finish() clears the
// expectations.
synctest.Wait()
})
}
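The comment above `synctest.Wait()` describes a general hazard: eviction callbacks that spawn cleanup goroutines can outlive the test body. A stdlib-only sketch of the same coordination, with an explicit `sync.WaitGroup` standing in for `synctest.Wait()` (names here are illustrative, not part of the aibridged API):

```go
package main

import (
	"fmt"
	"sync"
)

// evictAndWait simulates n cache evictions whose OnEvict callback spawns a
// cleanup goroutine (a stand-in for mcpProxy.Shutdown), then waits for all
// of them, which is the role synctest.Wait() plays in the test above.
func evictAndWait(n int) int {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		finished int
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			finished++
			mu.Unlock()
		}()
	}
	// Without this wait, the caller could proceed (for example into
	// gomock's ctrl.Finish) while cleanup goroutines are still running.
	wg.Wait()
	return finished
}

func main() {
	fmt.Println(evictAndWait(3)) // 3
}
```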
var _ aibridged.MCPProxyBuilder = &mockMCPFactory{}
+11 -7
@@ -163,7 +163,7 @@ require (
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e
github.com/pkg/sftp v1.13.7
github.com/prometheus-community/pro-bing v0.7.0
github.com/prometheus-community/pro-bing v0.8.0
github.com/prometheus/client_golang v1.23.2
github.com/prometheus/client_model v0.6.2
github.com/prometheus/common v0.67.4
@@ -198,14 +198,14 @@ require (
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546
golang.org/x/mod v0.32.0
golang.org/x/net v0.49.0
golang.org/x/oauth2 v0.34.0
golang.org/x/oauth2 v0.35.0
golang.org/x/sync v0.19.0
golang.org/x/sys v0.40.0
golang.org/x/sys v0.41.0
golang.org/x/term v0.39.0
golang.org/x/text v0.33.0
golang.org/x/tools v0.41.0
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da
google.golang.org/api v0.264.0
google.golang.org/api v0.265.0
google.golang.org/grpc v1.78.0
google.golang.org/protobuf v1.36.11
gopkg.in/DataDog/dd-trace-go.v1 v1.74.0
@@ -220,7 +220,7 @@ require (
require (
cloud.google.com/go/auth v0.18.1 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
dario.cat/mergo v1.0.1 // indirect
dario.cat/mergo v1.0.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/DataDog/appsec-internal-go v1.11.2 // indirect
@@ -450,7 +450,7 @@ require (
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto v0.0.0-20251202230838-ff82c1b0f217 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260122232226-8e98ce8d340d // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260128011058-8636f8732409 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
howett.net/plist v1.0.0 // indirect
kernel.org/pub/linux/libs/security/libcap/psx v1.2.77 // indirect
@@ -484,6 +484,7 @@ require (
github.com/go-git/go-git/v5 v5.16.2
github.com/icholy/replace v0.6.0
github.com/mark3labs/mcp-go v0.38.0
github.com/spf13/cobra v1.10.2
gonum.org/v1/gonum v0.17.0
)
@@ -503,6 +504,7 @@ require (
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
github.com/Masterminds/semver/v3 v3.3.1 // indirect
github.com/air-verse/air v1.64.5 // indirect
github.com/alecthomas/chroma v0.10.0 // indirect
github.com/aquasecurity/go-version v0.0.1 // indirect
github.com/aquasecurity/iamgo v0.0.10 // indirect
@@ -540,6 +542,7 @@ require (
github.com/invopop/jsonschema v0.13.0 // indirect
github.com/jackmordaunt/icns/v3 v3.0.1 // indirect
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
github.com/joho/godotenv v1.5.1 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/landlock-lsm/go-landlock v0.0.0-20251103212306-430f8e5cd97c // indirect
github.com/mattn/go-shellwords v1.0.12 // indirect
@@ -548,6 +551,7 @@ require (
github.com/openai/openai-go v1.12.0 // indirect
github.com/openai/openai-go/v3 v3.15.0 // indirect
github.com/package-url/packageurl-go v0.1.3 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/puzpuzpuz/xsync/v3 v3.5.1 // indirect
github.com/rhysd/actionlint v1.7.10 // indirect
@@ -556,7 +560,6 @@ require (
github.com/sergeymakinen/go-bmp v1.0.0 // indirect
github.com/sergeymakinen/go-ico v1.0.0-beta.0 // indirect
github.com/sony/gobreaker/v2 v2.3.0 // indirect
github.com/spf13/cobra v1.10.2 // indirect
github.com/spiffe/go-spiffe/v2 v2.6.0 // indirect
github.com/tidwall/sjson v1.2.5 // indirect
github.com/tmaxmax/go-sse v0.11.0 // indirect
@@ -581,6 +584,7 @@ require (
)
tool (
github.com/air-verse/air
github.com/coder/paralleltestctx/cmd/paralleltestctx
github.com/daixiang0/gci
github.com/rhysd/actionlint/cmd/actionlint
+19 -10
@@ -615,6 +615,8 @@ cloud.google.com/go/workflows v1.9.0/go.mod h1:ZGkj1aFIOd9c8Gerkjjq7OW7I5+l6cSvT
cloud.google.com/go/workflows v1.10.0/go.mod h1:fZ8LmRmZQWacon9UCX1r/g/DfAXx5VcPALq2CxzdePw=
dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=
dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
@@ -705,6 +707,8 @@ github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7l
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/agnivade/levenshtein v1.2.1 h1:EHBY3UOn1gwdy/VbFwgo4cxecRznFk7fKWN1KOX7eoM=
github.com/agnivade/levenshtein v1.2.1/go.mod h1:QVVI16kDrtSuwcpd0p1+xMC6Z/VfhtCyDIjcwga4/DU=
github.com/air-verse/air v1.64.5 h1:+gs/NgTzYYe+gGPyfHy3XxpJReQWC1pIsiKIg0LgNt4=
github.com/air-verse/air v1.64.5/go.mod h1:OaJZSfZqf7wyjS2oP/CcEVyIt0JmZuPh5x1gdtklmmY=
github.com/ajstarks/deck v0.0.0-20200831202436-30c9fc6549a9/go.mod h1:JynElWSGnm/4RlzPXRlREEwqTHAN3T56Bv2ITsFT3gY=
github.com/ajstarks/deck/generate v0.0.0-20210309230005-c3f852c02e19/go.mod h1:T13YZdzov6OU0A1+RfKZiZN9ca6VeKdBdyDV+BY97Tk=
github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=
@@ -810,6 +814,7 @@ github.com/bep/clocks v0.5.0 h1:hhvKVGLPQWRVsBP/UB7ErrHYIO42gINVbvqxvYTPVps=
github.com/bep/clocks v0.5.0/go.mod h1:SUq3q+OOq41y2lRQqH5fsOoxN8GbxSiT6jvoVVLCVhU=
github.com/bep/debounce v1.2.0 h1:wXds8Kq8qRfwAOpAxHrJDbCXgC5aHSzgQb/0gKsHQqo=
github.com/bep/debounce v1.2.0/go.mod h1:H8yggRPQKLUhUoqrJC1bO2xNya7vanpDl7xR3ISbCJ0=
github.com/bep/debounce v1.2.1 h1:v67fRdBA9UQu2NhLFXrSg0Brw7CexQekrBwDMM8bzeY=
github.com/bep/gitmap v1.9.0 h1:2pyb1ex+cdwF6c4tsrhEgEKfyNfxE34d5K+s2sa9byc=
github.com/bep/gitmap v1.9.0/go.mod h1:Juq6e1qqCRvc1W7nzgadPGI9IGV13ZncEebg5atj4Vo=
github.com/bep/goat v0.5.0 h1:S8jLXHCVy/EHIoCY+btKkmcxcXFd34a0Q63/0D4TKeA=
@@ -1493,6 +1498,8 @@ github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGw
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/josharian/native v1.1.1-0.20230202152459-5c7d0dd6ab86 h1:elKwZS1OcdQ0WwEDBeqxKwb7WB62QX8bvZ/FJnVXIfk=
@@ -1705,6 +1712,8 @@ github.com/package-url/packageurl-go v0.1.3 h1:4juMED3hHiz0set3Vq3KeQ75KD1avthoX
github.com/package-url/packageurl-go v0.1.3/go.mod h1:nKAWB8E6uk1MHqiS/lQb9pYBGH2+mdJ2PJc2s50dQY0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
@@ -1743,8 +1752,8 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus-community/pro-bing v0.7.0 h1:KFYFbxC2f2Fp6c+TyxbCOEarf7rbnzr9Gw8eIb0RfZA=
github.com/prometheus-community/pro-bing v0.7.0/go.mod h1:Moob9dvlY50Bfq6i88xIwfyw7xLFHH69LUgx9n5zqCE=
github.com/prometheus-community/pro-bing v0.8.0 h1:CEY/g1/AgERRDjxw5P32ikcOgmrSuXs7xon7ovx6mNc=
github.com/prometheus-community/pro-bing v0.8.0/go.mod h1:Idyxz8raDO6TgkUN6ByiEGvWJNyQd40kN9ZUeho3lN0=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -2264,8 +2273,8 @@ golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec
golang.org/x/oauth2 v0.5.0/go.mod h1:9/XBHVqLaWO3/BRHs5jbpYCnOZVjj5V0ndyaAM7KB4I=
golang.org/x/oauth2 v0.6.0/go.mod h1:ycmewcwgD4Rpr3eZJLSB4Kyyljb3qDh40vJ8STE5HKw=
golang.org/x/oauth2 v0.7.0/go.mod h1:hPLQkd9LyjfXTiRohC/41GhcFqxisoUQ99sCUOHO9x4=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/oauth2 v0.35.0 h1:Mv2mzuHuZuY2+bkyWXIHMfhNdJAdwW3FuWeCPYN5GVQ=
golang.org/x/oauth2 v0.35.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -2385,8 +2394,8 @@ golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 h1:O1cMQHRfwNpDfDJerqRoE2oD+AFlyid87D40L/OkkJo=
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2/go.mod h1:b7fPSJ0pKZ3ccUh8gnTONJxhn3c/PS6tyzQvyqw4iA8=
@@ -2591,8 +2600,8 @@ google.golang.org/api v0.108.0/go.mod h1:2Ts0XTHNVWxypznxWOYUeI4g3WdP9Pk2Qk58+a/
google.golang.org/api v0.110.0/go.mod h1:7FC4Vvx1Mooxh8C5HWjzZHcavuS2f6pmJpZx60ca7iI=
google.golang.org/api v0.111.0/go.mod h1:qtFHvU9mhgTJegR31csQ+rwxyUTHOKFqCKWp1J0fdw0=
google.golang.org/api v0.114.0/go.mod h1:ifYI2ZsFK6/uGddGfAD5BMxlnkBqCmqHSDUVi45N5Yg=
google.golang.org/api v0.264.0 h1:+Fo3DQXBK8gLdf8rFZ3uLu39JpOnhvzJrLMQSoSYZJM=
google.golang.org/api v0.264.0/go.mod h1:fAU1xtNNisHgOF5JooAs8rRaTkl2rT3uaoNGo9NS3R8=
google.golang.org/api v0.265.0 h1:FZvfUdI8nfmuNrE34aOWFPmLC+qRBEiNm3JdivTvAAU=
google.golang.org/api v0.265.0/go.mod h1:uAvfEl3SLUj/7n6k+lJutcswVojHPp2Sp08jWCu8hLY=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -2737,8 +2746,8 @@ google.golang.org/genproto v0.0.0-20251202230838-ff82c1b0f217 h1:GvESR9BIyHUahIb
google.golang.org/genproto v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:yJ2HH4EHEDTd3JiLmhds6NkJ17ITVYOdV3m3VKOnws0=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217 h1:fCvbg86sFXwdrl5LgVcTEvNC+2txB5mgROGmRL5mrls=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260122232226-8e98ce8d340d h1:xXzuihhT3gL/ntduUZwHECzAn57E8dA6l8SOtYWdD8Q=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260122232226-8e98ce8d340d/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260128011058-8636f8732409 h1:H86B94AW+VfJWDqFeEbBPhEtHzJwJfTbgE2lZa54ZAQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260128011058-8636f8732409/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+13 -3
@@ -17,6 +17,7 @@ import (
"github.com/acarl005/stripansi"
"github.com/stretchr/testify/require"
"go.uber.org/atomic"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/pty"
@@ -78,7 +79,7 @@ func newExpecter(t *testing.T, r io.Reader, name string) outExpecter {
ex := outExpecter{
t: t,
out: out,
name: name,
name: atomic.NewString(name),
runeReader: bufio.NewReaderSize(out, utf8.UTFMax),
}
@@ -140,7 +141,7 @@ type outExpecter struct {
t *testing.T
close func(reason string) error
out *stdbuf
name string
name *atomic.String
runeReader *bufio.Reader
}
@@ -361,7 +362,7 @@ func (e *outExpecter) logf(format string, args ...interface{}) {
// Match regular logger timestamp format, we seem to be logging in
// UTC in other places as well, so match here.
e.t.Logf("%s: %s: %s", time.Now().UTC().Format("2006-01-02 15:04:05.000"), e.name, fmt.Sprintf(format, args...))
e.t.Logf("%s: %s: %s", time.Now().UTC().Format("2006-01-02 15:04:05.000"), e.name.Load(), fmt.Sprintf(format, args...))
}
func (e *outExpecter) fatalf(reason string, format string, args ...interface{}) {
@@ -430,6 +431,15 @@ func (p *PTY) WriteLine(str string) {
require.NoError(p.t, err, "write line failed")
}
// Named sets the PTY name in the logs. Defaults to "cmd". Make sure you set this before anything starts writing to the
// pty, or it may not be named consistently. E.g.
//
// p := New(t).Named("myCmd")
func (p *PTY) Named(name string) *PTY {
p.name.Store(name)
return p
}
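This hunk swaps the plain `name string` field for go.uber.org/atomic's `String` so `Named()` can rename the PTY while `logf` reads the name from another goroutine. The same idea in a stdlib-only sketch using `atomic.Pointer[string]` (the `expecter` type here is illustrative, not the real `outExpecter`):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// expecter holds a name that may be replaced after reader goroutines have
// started; atomic.Pointer makes the swap and the reads race-free.
type expecter struct {
	name atomic.Pointer[string]
}

// Named replaces the name and returns the receiver for chaining, mirroring
// the PTY.Named method in the diff above.
func (e *expecter) Named(n string) *expecter {
	e.name.Store(&n)
	return e
}

// Name loads the current name for use in log lines.
func (e *expecter) Name() string { return *e.name.Load() }

func main() {
	e := &expecter{}
	e.Named("cmd")
	fmt.Println(e.Name()) // cmd
	e.Named("myCmd")
	fmt.Println(e.Name()) // myCmd
}
```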
type PTYCmd struct {
outExpecter
pty.PTYCmd
+18
@@ -0,0 +1,18 @@
root = "/app"
tmp_dir = "tmp"
[build]
cmd = "go build -o ./tmp/coder ./enterprise/cmd/coder"
entrypoint = ["./tmp/coder"]
exclude_dir = ["site", "node_modules", ".git", "tmp", "vendor", ".coderv2", "bin", "build", "dist", "out", "test-output", "scaletest"]
exclude_regex = ["_test\\.go$"]
include_ext = ["go"]
# Use polling instead of fsnotify. This is required for macOS +
# Colima (and similar VM-based Docker setups) where host filesystem
# events are not propagated into the container via inotify.
poll = true
poll_interval = 10000
delay = 10000
kill_delay = 5000
send_interrupt = true
stop_on_error = true
+1
@@ -0,0 +1 @@
cdev
+6
@@ -0,0 +1,6 @@
package api
import "embed"
//go:embed static/*
var staticFS embed.FS
+463
@@ -0,0 +1,463 @@
package api
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"sort"
"time"
"github.com/ory/dockertest/v3/docker"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/scripts/cdev/catalog"
)
// ServiceInfo represents a service in the API response.
type ServiceInfo struct {
Name string `json:"name"`
Emoji string `json:"emoji"`
Status unit.Status `json:"status"`
CurrentStep string `json:"current_step,omitempty"`
DependsOn []string `json:"depends_on"`
UnmetDependencies []string `json:"unmet_dependencies,omitempty"`
URL string `json:"url,omitempty"`
}
// ListServicesResponse is the response for GET /api/services.
type ListServicesResponse struct {
Services []ServiceInfo `json:"services"`
}
func serviceNamesToStrings(names []catalog.ServiceName) []string {
result := make([]string, len(names))
for i, n := range names {
result[i] = string(n)
}
return result
}
func (s *Server) buildListServicesResponse() ListServicesResponse {
var services []ServiceInfo
_ = s.catalog.ForEach(func(svc catalog.ServiceBase) error {
status, err := s.catalog.Status(svc.Name())
if err != nil {
return err
}
info := ServiceInfo{
Name: string(svc.Name()),
Emoji: svc.Emoji(),
Status: status,
CurrentStep: svc.CurrentStep(),
DependsOn: serviceNamesToStrings(svc.DependsOn()),
}
// Include URL if service is addressable.
if addressable, ok := svc.(catalog.ServiceAddressable); ok {
info.URL = addressable.URL()
}
// Include unmet dependencies for non-completed services.
if status != unit.StatusComplete {
unmet, _ := s.catalog.UnmetDependencies(svc.Name())
info.UnmetDependencies = unmet
}
sort.Strings(info.DependsOn)
sort.Strings(info.UnmetDependencies)
services = append(services, info)
return nil
})
return ListServicesResponse{Services: services}
}
func (s *Server) handleListServices(w http.ResponseWriter, _ *http.Request) {
s.writeJSON(w, http.StatusOK, s.buildListServicesResponse())
}
func (s *Server) handleGetService(w http.ResponseWriter, r *http.Request) {
name := r.PathValue("name")
svc, ok := s.catalog.Get(catalog.ServiceName(name))
if !ok {
s.writeError(w, http.StatusNotFound, "service not found")
return
}
status, err := s.catalog.Status(catalog.ServiceName(name))
if err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to get service status: "+err.Error())
return
}
info := ServiceInfo{
Name: string(svc.Name()),
Emoji: svc.Emoji(),
Status: status,
CurrentStep: svc.CurrentStep(),
DependsOn: serviceNamesToStrings(svc.DependsOn()),
}
// Include unmet dependencies for non-completed services.
if status != unit.StatusComplete {
unmet, _ := s.catalog.UnmetDependencies(svc.Name())
info.UnmetDependencies = unmet
}
s.writeJSON(w, http.StatusOK, info)
}
func (s *Server) handleStartService(w http.ResponseWriter, r *http.Request) {
name := r.PathValue("name")
if _, ok := s.catalog.Get(catalog.ServiceName(name)); !ok {
s.writeError(w, http.StatusNotFound, "service not found")
return
}
if err := s.catalog.StartService(r.Context(), catalog.ServiceName(name)); err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to start service: "+err.Error())
return
}
s.writeJSON(w, http.StatusOK, map[string]string{"status": "started"})
}
func (s *Server) handleRestartService(w http.ResponseWriter, r *http.Request) {
name := r.PathValue("name")
if _, ok := s.catalog.Get(catalog.ServiceName(name)); !ok {
s.writeError(w, http.StatusNotFound, "service not found")
return
}
if err := s.catalog.RestartService(r.Context(), catalog.ServiceName(name), s.logger); err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to restart service: "+err.Error())
return
}
s.writeJSON(w, http.StatusOK, map[string]string{"status": "restarted"})
}
func (s *Server) handleStopService(w http.ResponseWriter, r *http.Request) {
name := r.PathValue("name")
if _, ok := s.catalog.Get(catalog.ServiceName(name)); !ok {
s.writeError(w, http.StatusNotFound, "service not found")
return
}
if err := s.catalog.StopService(r.Context(), catalog.ServiceName(name)); err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to stop service: "+err.Error())
return
}
s.writeJSON(w, http.StatusOK, map[string]string{"status": "stopped"})
}
func (s *Server) handleServiceLogs(w http.ResponseWriter, r *http.Request) {
name := r.PathValue("name")
dockerSvc, ok := s.catalog.Get(catalog.CDevDocker)
if !ok {
s.writeError(w, http.StatusServiceUnavailable, "docker service not available")
return
}
dockerService, ok := dockerSvc.(*catalog.Docker)
if !ok {
s.writeError(w, http.StatusInternalServerError, "invalid docker service type")
return
}
client := dockerService.Result()
if client == nil {
s.writeError(w, http.StatusServiceUnavailable, "docker not connected")
return
}
// Find containers matching the service label.
filter := catalog.NewServiceLabels(catalog.ServiceName(name)).Filter()
containers, err := client.ListContainers(docker.ListContainersOptions{
All: true,
Filters: filter,
})
if err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to list containers: "+err.Error())
return
}
if len(containers) == 0 {
s.writeError(w, http.StatusNotFound, "no container found for service")
return
}
// Set headers for streaming.
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Connection", "keep-alive")
w.Header().Set("X-Content-Type-Options", "nosniff")
flusher, ok := w.(http.Flusher)
if !ok {
s.writeError(w, http.StatusInternalServerError, "streaming not supported")
return
}
// Create a flushing writer that flushes after each write.
fw := &flushWriter{w: w, f: flusher}
// Stream logs from container.
ctx := r.Context()
err = client.Logs(docker.LogsOptions{
Context: ctx,
Container: containers[0].ID,
OutputStream: fw,
ErrorStream: fw,
Follow: true,
Stdout: true,
Stderr: true,
Tail: "100", // Start with last 100 lines.
Timestamps: true,
})
if err != nil && ctx.Err() == nil {
// Only log error if context wasn't cancelled (client disconnect).
s.logger.Error(ctx, "log streaming error", slog.Error(err))
}
}
// flushWriter wraps a writer and flusher to flush after each write.
type flushWriter struct {
w io.Writer
f http.Flusher
}
func (fw *flushWriter) Write(p []byte) (n int, err error) {
n, err = fw.w.Write(p)
fw.f.Flush()
return n, err
}
func (s *Server) handleStartAllServices(w http.ResponseWriter, _ *http.Request) {
// Start all services in background since this can take a while.
// Use a background context since the request context will be cancelled
// when the response is sent.
go func() {
ctx := context.Background()
if err := s.catalog.Start(ctx); err != nil {
s.logger.Error(ctx, "failed to start all services", slog.Error(err))
}
}()
s.writeJSON(w, http.StatusAccepted, map[string]string{"status": "starting"})
}
func (s *Server) handleHealthz(w http.ResponseWriter, _ *http.Request) {
s.writeJSON(w, http.StatusOK, map[string]string{"status": "ok"})
}
// ImageInfo represents a Docker image in the API response.
type ImageInfo struct {
ID string `json:"id"`
Tags []string `json:"tags"`
Size int64 `json:"size"`
Created int64 `json:"created"`
}
// ListImagesResponse is the response for GET /api/images.
type ListImagesResponse struct {
Images []ImageInfo `json:"images"`
}
func (s *Server) handleListImages(w http.ResponseWriter, _ *http.Request) {
dockerSvc, ok := s.catalog.Get(catalog.CDevDocker)
if !ok {
s.writeError(w, http.StatusServiceUnavailable, "docker service not available")
return
}
dockerService, ok := dockerSvc.(*catalog.Docker)
if !ok {
s.writeError(w, http.StatusInternalServerError, "invalid docker service type")
return
}
client := dockerService.Result()
if client == nil {
s.writeError(w, http.StatusServiceUnavailable, "docker not connected")
return
}
images, err := client.ListImages(docker.ListImagesOptions{
Filters: catalog.NewLabels().Filter(),
})
if err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to list images: "+err.Error())
return
}
var result []ImageInfo
for _, img := range images {
result = append(result, ImageInfo{
ID: img.ID,
Tags: img.RepoTags,
Size: img.Size,
Created: img.Created,
})
}
s.writeJSON(w, http.StatusOK, ListImagesResponse{Images: result})
}
func (s *Server) handleDeleteImage(w http.ResponseWriter, r *http.Request) {
imageID := r.PathValue("id")
dockerSvc, ok := s.catalog.Get(catalog.CDevDocker)
if !ok {
s.writeError(w, http.StatusServiceUnavailable, "docker service not available")
return
}
dockerService, ok := dockerSvc.(*catalog.Docker)
if !ok {
s.writeError(w, http.StatusInternalServerError, "invalid docker service type")
return
}
client := dockerService.Result()
if client == nil {
s.writeError(w, http.StatusServiceUnavailable, "docker not connected")
return
}
if err := client.RemoveImage(imageID); err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to delete image: "+err.Error())
return
}
s.writeJSON(w, http.StatusOK, map[string]string{"status": "deleted"})
}
// VolumeInfo represents a Docker volume in the API response.
type VolumeInfo struct {
Name string `json:"name"`
Driver string `json:"driver"`
}
// ListVolumesResponse is the response for GET /api/volumes.
type ListVolumesResponse struct {
Volumes []VolumeInfo `json:"volumes"`
}
func (s *Server) handleListVolumes(w http.ResponseWriter, _ *http.Request) {
dockerSvc, ok := s.catalog.Get(catalog.CDevDocker)
if !ok {
s.writeError(w, http.StatusServiceUnavailable, "docker service not available")
return
}
dockerService, ok := dockerSvc.(*catalog.Docker)
if !ok {
s.writeError(w, http.StatusInternalServerError, "invalid docker service type")
return
}
client := dockerService.Result()
if client == nil {
s.writeError(w, http.StatusServiceUnavailable, "docker not connected")
return
}
volumes, err := client.ListVolumes(docker.ListVolumesOptions{
Filters: catalog.NewLabels().Filter(),
})
if err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to list volumes: "+err.Error())
return
}
var result []VolumeInfo
for _, vol := range volumes {
result = append(result, VolumeInfo{
Name: vol.Name,
Driver: vol.Driver,
})
}
s.writeJSON(w, http.StatusOK, ListVolumesResponse{Volumes: result})
}
func (s *Server) handleDeleteVolume(w http.ResponseWriter, r *http.Request) {
volumeName := r.PathValue("name")
dockerSvc, ok := s.catalog.Get(catalog.CDevDocker)
if !ok {
s.writeError(w, http.StatusServiceUnavailable, "docker service not available")
return
}
dockerService, ok := dockerSvc.(*catalog.Docker)
if !ok {
s.writeError(w, http.StatusInternalServerError, "invalid docker service type")
return
}
client := dockerService.Result()
if client == nil {
s.writeError(w, http.StatusServiceUnavailable, "docker not connected")
return
}
if err := client.RemoveVolume(volumeName); err != nil {
s.writeError(w, http.StatusInternalServerError, "failed to delete volume: "+err.Error())
return
}
s.writeJSON(w, http.StatusOK, map[string]string{"status": "deleted"})
}
func (s *Server) handleSSE(w http.ResponseWriter, r *http.Request) {
flusher, ok := w.(http.Flusher)
if !ok {
http.Error(w, "streaming unsupported", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "text/event-stream")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Connection", "keep-alive")
sub := s.catalog.Subscribe()
defer s.catalog.Unsubscribe(sub)
ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop()
var lastData []byte
sendState := func() {
data, err := json.Marshal(s.buildListServicesResponse())
if err != nil {
return
}
if bytes.Equal(data, lastData) {
return
}
lastData = data
_, _ = fmt.Fprintf(w, "data: %s\n\n", data)
flusher.Flush()
}
// Send initial state immediately.
sendState()
for {
select {
case <-r.Context().Done():
return
case <-sub:
sendState()
case <-ticker.C:
sendState()
}
}
}
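The `lastData` comparison in `handleSSE` is the core of the change-detection: state is marshaled, and a frame is only emitted when the bytes differ from the previous snapshot. That logic can be factored into a small standalone helper (the `newDeduper` name is illustrative):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// newDeduper returns a closure that reports whether newly marshaled
// state differs from the last snapshot it saw, so idle services do
// not cause redundant SSE frames on every tick.
func newDeduper() func(v any) bool {
	var last []byte
	return func(v any) bool {
		data, err := json.Marshal(v)
		if err != nil {
			return false
		}
		if bytes.Equal(data, last) {
			return false
		}
		last = data
		return true
	}
}

func main() {
	changed := newDeduper()
	fmt.Println(changed(map[string]string{"postgres": "started"}))
	fmt.Println(changed(map[string]string{"postgres": "started"}))
	fmt.Println(changed(map[string]string{"postgres": "completed"}))
}
```

Marshaling `map[string]string` is deterministic (keys are sorted by `encoding/json`), which is what makes the byte comparison reliable.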
@@ -0,0 +1,112 @@
package api
import (
"context"
"encoding/json"
"io/fs"
"net"
"net/http"
"time"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/scripts/cdev/catalog"
)
const (
DefaultAPIPort = "19000"
)
// Server provides an HTTP API for controlling cdev services.
type Server struct {
catalog *catalog.Catalog
logger slog.Logger
srv *http.Server
addr string
}
// NewServer creates a new API server.
func NewServer(c *catalog.Catalog, logger slog.Logger, addr string) *Server {
s := &Server{
catalog: c,
logger: logger,
addr: addr,
}
mux := http.NewServeMux()
// Service endpoints.
mux.HandleFunc("GET /api/events", s.handleSSE)
mux.HandleFunc("GET /api/services", s.handleListServices)
mux.HandleFunc("GET /api/services/{name}", s.handleGetService)
mux.HandleFunc("POST /api/services/{name}/restart", s.handleRestartService)
mux.HandleFunc("POST /api/services/{name}/start", s.handleStartService)
mux.HandleFunc("POST /api/services/{name}/stop", s.handleStopService)
mux.HandleFunc("GET /api/services/{name}/logs", s.handleServiceLogs)
mux.HandleFunc("POST /api/services/start", s.handleStartAllServices)
// Image endpoints.
mux.HandleFunc("GET /api/images", s.handleListImages)
mux.HandleFunc("DELETE /api/images/{id}", s.handleDeleteImage)
// Volume endpoints.
mux.HandleFunc("GET /api/volumes", s.handleListVolumes)
mux.HandleFunc("DELETE /api/volumes/{name}", s.handleDeleteVolume)
// Health endpoint.
mux.HandleFunc("GET /healthz", s.handleHealthz)
// Serve embedded static files (web UI).
staticContent, err := fs.Sub(staticFS, "static")
if err != nil {
panic("failed to create sub filesystem: " + err.Error())
}
mux.Handle("GET /", http.FileServer(http.FS(staticContent)))
s.srv = &http.Server{
Addr: addr,
Handler: mux,
ReadHeaderTimeout: 10 * time.Second,
}
return s
}
// Start begins listening for HTTP requests. This is non-blocking.
func (s *Server) Start(ctx context.Context) error {
ln, err := net.Listen("tcp", s.addr)
if err != nil {
return xerrors.Errorf("listen on %s: %w", s.addr, err)
}
s.logger.Info(ctx, "api server listening", slog.F("addr", s.addr))
go func() {
if err := s.srv.Serve(ln); err != nil && err != http.ErrServerClosed {
s.logger.Error(ctx, "api server error", slog.Error(err))
}
}()
return nil
}
// Stop gracefully shuts down the server.
func (s *Server) Stop(ctx context.Context) error {
return s.srv.Shutdown(ctx)
}
// Addr returns the address the server is listening on.
func (s *Server) Addr() string {
return s.addr
}
func (*Server) writeJSON(w http.ResponseWriter, status int, v any) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)
_ = json.NewEncoder(w).Encode(v)
}
func (s *Server) writeError(w http.ResponseWriter, status int, message string) {
s.writeJSON(w, status, map[string]string{"error": message})
}
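The listen-then-serve split in `Start` matters: `net.Listen` runs synchronously, so a bind failure surfaces to the caller instead of being lost inside the goroutine. A self-contained sketch of the same lifecycle, using `:0` so the OS picks a free port (the address is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"
)

// startStop binds a port synchronously, serves in a goroutine, then
// shuts down gracefully, mirroring Server.Start and Server.Stop.
func startStop() (string, error) {
	srv := &http.Server{
		Handler: http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
			w.WriteHeader(http.StatusOK)
		}),
		ReadHeaderTimeout: 10 * time.Second,
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // bind errors surface here, not in the goroutine
	if err != nil {
		return "", err
	}
	addr := ln.Addr().String()
	go func() { _ = srv.Serve(ln) }() // returns http.ErrServerClosed after Shutdown
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	return addr, srv.Shutdown(ctx)
}

func main() {
	addr, err := startStop()
	fmt.Println(addr, err)
}
```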
@@ -0,0 +1,615 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>cdev - Development Environment</title>
<style>
:root {
--bg: #1a1a2e;
--card-bg: #16213e;
--accent: #0f3460;
--primary: #e94560;
--text: #eaeaea;
--text-muted: #a0a0a0;
--success: #4ade80;
--warning: #fbbf24;
--error: #f87171;
}
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
background: var(--bg);
color: var(--text);
min-height: 100vh;
padding: 2rem;
}
.container { max-width: 900px; margin: 0 auto; }
header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 2rem;
}
h1 { font-size: 1.5rem; display: flex; align-items: center; gap: 0.5rem; }
.status-dot {
width: 10px; height: 10px;
border-radius: 50%;
background: var(--success);
animation: pulse 2s infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.refresh-info { color: var(--text-muted); font-size: 0.875rem; }
.services { display: flex; flex-direction: column; gap: 1rem; }
.service {
background: var(--card-bg);
border-radius: 8px;
padding: 1rem 1.25rem;
display: flex;
align-items: center;
gap: 1rem;
transition: transform 0.1s, box-shadow 0.1s;
}
.service:hover {
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0,0,0,0.3);
}
.service-emoji { font-size: 1.5rem; }
.service-info { flex: 1; }
.service-name { font-weight: 600; font-size: 1.1rem; }
.service-deps { color: var(--text-muted); font-size: 0.8rem; margin-top: 0.25rem; }
.service-step { color: var(--primary); font-size: 0.8rem; margin-top: 0.25rem; font-style: italic; }
.service-unmet { color: var(--warning); font-size: 0.8rem; margin-top: 0.25rem; }
.service-status {
padding: 0.25rem 0.75rem;
border-radius: 9999px;
font-size: 0.75rem;
font-weight: 500;
text-transform: uppercase;
}
.status-completed { background: rgba(74, 222, 128, 0.2); color: var(--success); }
.status-started { background: rgba(251, 191, 36, 0.2); color: var(--warning); }
.status-pending { background: rgba(160, 160, 160, 0.2); color: var(--text-muted); }
.status-not-registered { background: rgba(248, 113, 113, 0.2); color: var(--error); }
.service-actions { display: flex; gap: 0.5rem; }
button {
background: var(--accent);
border: none;
color: var(--text);
padding: 0.5rem 1rem;
border-radius: 6px;
cursor: pointer;
font-size: 0.875rem;
transition: background 0.2s;
}
button:hover { background: var(--primary); }
button:disabled { opacity: 0.5; cursor: not-allowed; }
.error {
background: rgba(248, 113, 113, 0.1);
border: 1px solid var(--error);
color: var(--error);
padding: 1rem;
border-radius: 8px;
margin-bottom: 1rem;
}
.loading {
text-align: center;
padding: 3rem;
color: var(--text-muted);
}
.section-title {
font-size: 1.25rem;
margin: 2rem 0 1rem;
color: var(--text);
display: flex;
align-items: center;
gap: 0.5rem;
}
.section-title:first-of-type { margin-top: 0; }
.images { display: flex; flex-direction: column; gap: 0.5rem; }
.image {
background: var(--card-bg);
border-radius: 8px;
padding: 0.75rem 1rem;
display: flex;
align-items: center;
gap: 1rem;
font-size: 0.9rem;
}
.image-info { flex: 1; min-width: 0; }
.image-tags {
font-family: monospace;
font-size: 0.85rem;
word-break: break-all;
}
.image-meta {
color: var(--text-muted);
font-size: 0.75rem;
margin-top: 0.25rem;
}
.image-id {
font-family: monospace;
font-size: 0.75rem;
color: var(--text-muted);
width: 120px;
overflow: hidden;
text-overflow: ellipsis;
}
button.danger { background: rgba(248, 113, 113, 0.3); }
button.danger:hover { background: var(--error); }
/* Modal styles */
.modal-overlay {
display: none;
position: fixed;
top: 0; left: 0; right: 0; bottom: 0;
background: rgba(0, 0, 0, 0.8);
z-index: 1000;
padding: 2rem;
}
.modal-overlay.active { display: flex; }
.modal {
background: var(--card-bg);
border-radius: 12px;
width: 100%;
max-width: 1000px;
height: 80vh;
margin: auto;
display: flex;
flex-direction: column;
overflow: hidden;
}
.modal-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 1rem 1.5rem;
border-bottom: 1px solid var(--accent);
}
.modal-title { font-size: 1.1rem; font-weight: 600; }
.modal-close {
background: transparent;
border: none;
color: var(--text-muted);
font-size: 1.5rem;
cursor: pointer;
padding: 0.25rem 0.5rem;
}
.modal-close:hover { color: var(--text); background: transparent; }
.modal-body {
flex: 1;
overflow: auto;
padding: 1rem;
background: #0d1117;
}
.log-content {
font-family: 'Monaco', 'Menlo', 'Ubuntu Mono', monospace;
font-size: 0.8rem;
line-height: 1.5;
white-space: pre-wrap;
word-break: break-all;
color: #c9d1d9;
}
.log-error { color: var(--error); padding: 1rem; }
.volumes { display: flex; flex-direction: column; gap: 0.5rem; }
.volume {
background: var(--card-bg);
border-radius: 8px;
padding: 0.75rem 1rem;
display: flex;
align-items: center;
gap: 1rem;
font-size: 0.9rem;
}
.volume-info { flex: 1; min-width: 0; }
.volume-name {
font-family: monospace;
font-size: 0.85rem;
word-break: break-all;
}
.volume-meta {
color: var(--text-muted);
font-size: 0.75rem;
margin-top: 0.25rem;
}
</style>
</head>
<body>
<!-- Logs Modal -->
<div id="logs-modal" class="modal-overlay" onclick="if(event.target===this)closeLogs()">
<div class="modal">
<div class="modal-header">
<span class="modal-title" id="logs-title">Logs</span>
<button class="modal-close" onclick="closeLogs()">&times;</button>
</div>
<div class="modal-body">
<div id="logs-content" class="log-content"></div>
</div>
</div>
</div>
<div class="container">
<header>
<h1><span class="status-dot"></span> cdev</h1>
<span class="refresh-info">Live (SSE)</span>
</header>
<div id="error" class="error" style="display: none;"></div>
<h2 class="section-title">📦 Services</h2>
<div id="services" class="services">
<div class="loading">Loading services...</div>
</div>
<h2 class="section-title">🐳 Docker Images</h2>
<div id="images" class="images">
<div class="loading">Loading images...</div>
</div>
<h2 class="section-title">💾 Docker Volumes</h2>
<div id="volumes" class="volumes">
<div class="loading">Loading volumes...</div>
</div>
</div>
<script>
const API_BASE = '';
let services = [];
let images = [];
let volumes = [];
async function fetchServices() {
try {
const res = await fetch(`${API_BASE}/api/services`);
if (!res.ok) throw new Error(`HTTP ${res.status}`);
const data = await res.json();
services = data.services || [];
hideError();
renderServices();
} catch (err) {
showError(`Failed to fetch services: ${err.message}`);
}
}
async function startService(name) {
setLoading(name, true);
try {
const res = await fetch(`${API_BASE}/api/services/${name}/start`, { method: 'POST' });
if (!res.ok) {
const data = await res.json();
throw new Error(data.error || `HTTP ${res.status}`);
}
} catch (err) {
showError(`Failed to start ${name}: ${err.message}`);
} finally {
setLoading(name, false);
}
}
async function fetchImages() {
try {
const res = await fetch(`${API_BASE}/api/images`);
if (!res.ok) {
const data = await res.json();
throw new Error(data.error || `HTTP ${res.status}`);
}
const data = await res.json();
images = data.images || [];
renderImages();
} catch (err) {
images = [];
renderImages(err.message);
}
}
async function fetchVolumes() {
try {
const res = await fetch(`${API_BASE}/api/volumes`);
if (!res.ok) {
const data = await res.json();
throw new Error(data.error || `HTTP ${res.status}`);
}
const data = await res.json();
volumes = data.volumes || [];
renderVolumes();
} catch (err) {
volumes = [];
renderVolumes(err.message);
}
}
async function restartService(name) {
setLoading(name, true);
try {
const res = await fetch(`${API_BASE}/api/services/${name}/restart`, { method: 'POST' });
if (!res.ok) {
const data = await res.json();
throw new Error(data.error || `HTTP ${res.status}`);
}
} catch (err) {
showError(`Failed to restart ${name}: ${err.message}`);
} finally {
setLoading(name, false);
}
}
async function stopService(name) {
setLoading(name, true);
try {
const res = await fetch(`${API_BASE}/api/services/${name}/stop`, { method: 'POST' });
if (!res.ok) {
const data = await res.json();
throw new Error(data.error || `HTTP ${res.status}`);
}
} catch (err) {
showError(`Failed to stop ${name}: ${err.message}`);
} finally {
setLoading(name, false);
}
}
async function deleteImage(id) {
if (!confirm('Delete this image?')) return;
try {
const res = await fetch(`${API_BASE}/api/images/${encodeURIComponent(id)}`, { method: 'DELETE' });
if (!res.ok) {
const data = await res.json();
throw new Error(data.error || `HTTP ${res.status}`);
}
fetchImages();
} catch (err) {
showError(`Failed to delete image: ${err.message}`);
}
}
async function deleteVolume(name) {
if (!confirm(`Delete volume "${name}"?`)) return;
try {
const res = await fetch(`${API_BASE}/api/volumes/${encodeURIComponent(name)}`, { method: 'DELETE' });
if (!res.ok) {
const data = await res.json();
throw new Error(data.error || `HTTP ${res.status}`);
}
fetchVolumes();
} catch (err) {
showError(`Failed to delete volume: ${err.message}`);
}
}
function setLoading(name, loading) {
const btns = document.querySelectorAll(`[data-service="${name}"] button`);
btns.forEach(btn => btn.disabled = loading);
}
async function copyURL(url) {
try {
await navigator.clipboard.writeText(url);
// Brief visual feedback - find the button and flash it.
const btn = document.querySelector(`button[title="${url}"]`);
if (btn) {
const orig = btn.textContent;
btn.textContent = '✓ Copied';
setTimeout(() => btn.textContent = orig, 1500);
}
} catch (err) {
showError(`Failed to copy URL: ${err.message}`);
}
}
// Logs modal
let logsAbortController = null;
function showLogs(serviceName) {
const modal = document.getElementById('logs-modal');
const title = document.getElementById('logs-title');
const content = document.getElementById('logs-content');
title.textContent = `Logs: ${serviceName}`;
content.textContent = 'Loading...';
modal.classList.add('active');
// Abort any existing stream.
if (logsAbortController) {
logsAbortController.abort();
}
logsAbortController = new AbortController();
// Fetch streaming logs.
fetch(`${API_BASE}/api/services/${serviceName}/logs`, {
signal: logsAbortController.signal
}).then(async response => {
if (!response.ok) {
const data = await response.json();
content.innerHTML = `<div class="log-error">Error: ${data.error || response.statusText}</div>`;
return;
}
content.textContent = '';
const reader = response.body.getReader();
const decoder = new TextDecoder();
const modalBody = content.parentElement;
while (true) {
const { done, value } = await reader.read();
if (done) break;
content.textContent += decoder.decode(value, { stream: true });
// Auto-scroll to bottom.
modalBody.scrollTop = modalBody.scrollHeight;
}
}).catch(err => {
if (err.name !== 'AbortError') {
content.innerHTML = `<div class="log-error">Error: ${err.message}</div>`;
}
});
}
function closeLogs() {
const modal = document.getElementById('logs-modal');
modal.classList.remove('active');
if (logsAbortController) {
logsAbortController.abort();
logsAbortController = null;
}
}
// Close modal on Escape key.
document.addEventListener('keydown', e => {
if (e.key === 'Escape') closeLogs();
});
let errorTimeout = null;
function showError(msg) {
const el = document.getElementById('error');
el.textContent = msg;
el.style.display = 'block';
// Clear any existing timeout and set a new one to auto-hide after 15s.
if (errorTimeout) clearTimeout(errorTimeout);
errorTimeout = setTimeout(hideError, 15000);
}
function hideError() {
document.getElementById('error').style.display = 'none';
if (errorTimeout) {
clearTimeout(errorTimeout);
errorTimeout = null;
}
}
function renderServices() {
const container = document.getElementById('services');
if (services.length === 0) {
container.innerHTML = '<div class="loading">No services found</div>';
return;
}
const sorted = [...services].sort((a, b) => a.name.localeCompare(b.name));
container.innerHTML = sorted.map(svc => {
// Handle empty status (not registered) and format for CSS class.
const status = svc.status || 'not-registered';
const statusClass = 'status-' + status.replace('_', '-');
const statusLabel = status === 'not-registered' ? 'not registered' : status;
const stepHtml = svc.current_step
? `<div class="service-step">▶ ${svc.current_step}</div>`
: '';
const unmetHtml = svc.unmet_dependencies?.length
? `<div class="service-unmet">⏳ waiting for: ${svc.unmet_dependencies.join(', ')}</div>`
: '';
return `
<div class="service" data-service="${svc.name}">
<span class="service-emoji">${svc.emoji}</span>
<div class="service-info">
<div class="service-name">${svc.name}</div>
<div class="service-deps">${svc.depends_on?.length ? `depends on: ${svc.depends_on.join(', ')}` : 'no dependencies'}</div>
${stepHtml}
${unmetHtml}
</div>
<span class="service-status ${statusClass}">${statusLabel}</span>
<div class="service-actions">
${svc.url ? `<button onclick="copyURL('${svc.url}')" title="${svc.url}">📋 URL</button>` : ''}
<button onclick="showLogs('${svc.name}')">Logs</button>
${svc.status === 'pending' ? `<button onclick="startService('${svc.name}')">Start</button>` : ''}
${svc.status !== 'pending' ? `<button onclick="restartService('${svc.name}')">Restart</button>` : ''}
${svc.status !== 'pending' ? `<button onclick="stopService('${svc.name}')">Stop</button>` : ''}
</div>
</div>
`;
}).join('');
}
function formatBytes(bytes) {
if (bytes === 0) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(1)) + ' ' + sizes[i];
}
function formatDate(timestamp) {
return new Date(timestamp * 1000).toLocaleDateString();
}
function renderImages(error) {
const container = document.getElementById('images');
if (error) {
const hint = error.includes('docker not connected')
? ' Start the <strong>docker</strong> service first.'
: '';
container.innerHTML = `<div class="loading" style="color: var(--error)">⚠️ ${error}.${hint}</div>`;
return;
}
if (images.length === 0) {
container.innerHTML = '<div class="loading">No images found</div>';
return;
}
// Sort by creation date, newest first.
const sorted = [...images].sort((a, b) => b.created - a.created);
container.innerHTML = sorted.map(img => {
const shortId = img.id.replace('sha256:', '').substring(0, 12);
const tags = img.tags?.length ? img.tags.join(', ') : '&lt;none&gt;'; // Escape so "<none>" renders instead of parsing as a tag.
return `
<div class="image">
<span class="image-id" title="${img.id}">${shortId}</span>
<div class="image-info">
<div class="image-tags">${tags}</div>
<div class="image-meta">${formatBytes(img.size)} • ${formatDate(img.created)}</div>
</div>
<button class="danger" onclick="deleteImage('${img.id}')">Delete</button>
</div>
`;
}).join('');
}
function renderVolumes(error) {
const container = document.getElementById('volumes');
if (error) {
const hint = error.includes('docker not connected')
? ' Start the <strong>docker</strong> service first.'
: '';
container.innerHTML = `<div class="loading" style="color: var(--error)">⚠️ ${error}.${hint}</div>`;
return;
}
if (volumes.length === 0) {
container.innerHTML = '<div class="loading">No volumes found</div>';
return;
}
// Sort by name.
const sorted = [...volumes].sort((a, b) => a.name.localeCompare(b.name));
container.innerHTML = sorted.map(vol => {
return `
<div class="volume">
<div class="volume-info">
<div class="volume-name">${vol.name}</div>
<div class="volume-meta">${vol.driver}</div>
</div>
<button class="danger" onclick="deleteVolume('${vol.name}')">Delete</button>
</div>
`;
}).join('');
}
// Use Server-Sent Events for live updates.
const evtSource = new EventSource('/api/events');
evtSource.onmessage = (event) => {
const data = JSON.parse(event.data);
services = data.services || [];
// Don't hide errors here - let them timeout naturally.
renderServices();
};
evtSource.onerror = () => {
showError('SSE connection lost, retrying...');
};
// Fetch images and volumes on load and periodically refresh.
fetchImages();
fetchVolumes();
setInterval(fetchImages, 10000);
setInterval(fetchVolumes, 10000);
</script>
</body>
</html>
@@ -0,0 +1,168 @@
package catalog
import (
"context"
"fmt"
"os"
"sync/atomic"
"github.com/ory/dockertest/v3/docker"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
)
const (
// Docker image used for building.
dogfoodImage = "codercom/oss-dogfood"
dogfoodTag = "latest"
)
var _ Service[BuildResult] = (*BuildSlim)(nil)
// BuildSlim builds the slim Coder binaries inside a Docker container.
type BuildSlim struct {
currentStep atomic.Pointer[string]
// Verbose enables verbose output from the build.
Verbose bool
result BuildResult
}
type BuildResult struct {
CoderCache *docker.Volume
GoCache *docker.Volume
}
func NewBuildSlim() *BuildSlim {
return &BuildSlim{
Verbose: true, // Default to verbose for dev experience.
}
}
func (b *BuildSlim) Result() BuildResult {
return b.result
}
func (*BuildSlim) Name() ServiceName {
return CDevBuildSlim
}
func (*BuildSlim) Emoji() string {
return "🔨"
}
func (*BuildSlim) DependsOn() []ServiceName {
return []ServiceName{
OnDocker(),
}
}
func (b *BuildSlim) CurrentStep() string {
if s := b.currentStep.Load(); s != nil {
return *s
}
return ""
}
func (b *BuildSlim) setStep(step string) {
b.currentStep.Store(&step)
}
func (b *BuildSlim) Start(ctx context.Context, logger slog.Logger, c *Catalog) error {
b.setStep("Initializing Docker volumes")
dkr, ok := c.MustGet(OnDocker()).(*Docker)
if !ok {
return xerrors.New("unexpected type for Docker service")
}
goCache, err := dkr.EnsureVolume(ctx, VolumeOptions{
Name: "cdev_go_cache",
Labels: NewServiceLabels(CDevBuildSlim),
UID: 1000, GID: 1000,
})
if err != nil {
return xerrors.Errorf("failed to ensure go cache volume: %w", err)
}
coderCache, err := dkr.EnsureVolume(ctx, VolumeOptions{
Name: "cdev_coder_cache",
Labels: NewServiceLabels(CDevBuildSlim),
UID: 1000, GID: 1000,
})
if err != nil {
return xerrors.Errorf("failed to ensure coder cache volume: %w", err)
}
// Get current working directory for mounting.
cwd, err := os.Getwd()
if err != nil {
return xerrors.Errorf("failed to get working directory: %w", err)
}
// Get docker group ID for socket access.
dockerGroup := os.Getenv("DOCKER_GROUP")
if dockerGroup == "" {
dockerGroup = "999"
}
// Get docker socket path.
dockerSocket := os.Getenv("DOCKER_SOCKET")
if dockerSocket == "" {
dockerSocket = "/var/run/docker.sock"
}
// Register init-volumes and build-slim compose services.
dkr.SetComposeVolume("go_cache", ComposeVolume{})
dkr.SetComposeVolume("coder_cache", ComposeVolume{})
dkr.SetCompose("init-volumes", ComposeService{
Image: dogfoodImage + ":" + dogfoodTag,
User: "0:0",
Volumes: []string{
"go_cache:/go-cache",
"coder_cache:/cache",
},
Command: "chown -R 1000:1000 /go-cache /cache",
Labels: composeServiceLabels("init-volumes"),
})
dkr.SetCompose("build-slim", ComposeService{
Image: dogfoodImage + ":" + dogfoodTag,
NetworkMode: "host",
WorkingDir: "/app",
GroupAdd: []string{dockerGroup},
Environment: map[string]string{
"GOMODCACHE": "/go-cache/mod",
"GOCACHE": "/go-cache/build",
"DOCKER_HOST": fmt.Sprintf("unix://%s", dockerSocket),
},
Volumes: []string{
fmt.Sprintf("%s:/app", cwd),
"go_cache:/go-cache",
"coder_cache:/cache",
fmt.Sprintf("%s:%s", dockerSocket, dockerSocket),
},
Command: `sh -c 'make -j build-slim && mkdir -p /cache/site/orig/bin && cp site/out/bin/coder-* /cache/site/orig/bin/ 2>/dev/null || true && echo "Slim binaries built and cached."'`,
DependsOn: map[string]ComposeDependsOn{
"init-volumes": {Condition: "service_completed_successfully"},
},
Labels: composeServiceLabels("build-slim"),
})
b.setStep("Running make build-slim")
logger.Info(ctx, "building slim binaries via compose")
if err := dkr.DockerComposeRun(ctx, "build-slim"); err != nil {
return err
}
b.setStep("")
logger.Info(ctx, "slim binaries built successfully")
b.result.CoderCache = coderCache
b.result.GoCache = goCache
return nil
}
func (*BuildSlim) Stop(_ context.Context) error {
// Build is a one-shot task, nothing to stop.
return nil
}
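For reference, the `init-volumes`/`build-slim` registration above should produce Compose YAML roughly like the following. This is a hand-written sketch of the expected output; the exact field rendering depends on how `ComposeService` is marshalled, and the `999` group ID and socket path are just the defaults from the environment fallbacks:

```yaml
services:
  init-volumes:
    image: codercom/oss-dogfood:latest
    user: "0:0"
    command: chown -R 1000:1000 /go-cache /cache
    volumes:
      - go_cache:/go-cache
      - coder_cache:/cache
  build-slim:
    image: codercom/oss-dogfood:latest
    network_mode: host
    working_dir: /app
    group_add: ["999"]
    environment:
      GOMODCACHE: /go-cache/mod
      GOCACHE: /go-cache/build
      DOCKER_HOST: unix:///var/run/docker.sock
    depends_on:
      init-volumes:
        condition: service_completed_successfully
volumes:
  go_cache: {}
  coder_cache: {}
```

The `service_completed_successfully` condition is what makes the one-shot `chown` container a hard prerequisite: Compose only starts `build-slim` after `init-volumes` exits with status 0.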
@@ -0,0 +1,610 @@
package catalog
import (
"context"
"fmt"
"io"
"slices"
"strings"
"sync"
"time"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/coderd/util/slice"
"github.com/coder/serpent"
)
const (
CDevLabelEphemeral = "cdev/ephemeral"
CDevLabelCache = "cdev/cache"
)
type ServiceBase interface {
// Name returns a unique identifier for this service.
Name() ServiceName
// Emoji returns a single emoji used to identify this service
// in log output.
Emoji() string
// DependsOn returns the names of services that must complete before this
// service's Start can be called. This determines the order in which
// services are started and stopped.
DependsOn() []ServiceName
// CurrentStep returns a human-readable description of what the service
// is currently doing. Returns empty string if idle/complete.
CurrentStep() string
// Start launches the service. This should not block.
Start(ctx context.Context, logger slog.Logger, c *Catalog) error
// Stop gracefully shuts down the service.
Stop(ctx context.Context) error
}
// ServiceAddressable is implemented by services that expose a URL.
type ServiceAddressable interface {
URL() string
}
type ConfigurableService interface {
ServiceBase
Options() serpent.OptionSet
}
type Service[Result any] interface {
ServiceBase
// Result returns a value that other services can consume.
Result() Result
}
type configurator struct {
target ServiceName
apply func(ServiceBase)
}
type Catalog struct {
mu sync.RWMutex
services map[ServiceName]ServiceBase
loggers map[ServiceName]slog.Logger
logger slog.Logger
w io.Writer
manager *unit.Manager
// startCancels tracks cancel functions for in-progress StartService calls.
// Used to cancel starts when StopService is called.
startCancels map[ServiceName]context.CancelFunc
startCancelsMu sync.Mutex
subscribers map[chan struct{}]struct{}
subscribersMu sync.Mutex
configurators []configurator
configured bool
}
func New() *Catalog {
return &Catalog{
services: make(map[ServiceName]ServiceBase),
loggers: make(map[ServiceName]slog.Logger),
manager: unit.NewManager(),
subscribers: make(map[chan struct{}]struct{}),
startCancels: make(map[ServiceName]context.CancelFunc),
}
}
// Init sets the writer and builds the base logger and all
// per-service loggers. Call this after registration and before
// Start.
func (c *Catalog) Init(w io.Writer) {
c.w = w
c.logger = slog.Make(NewLoggerSink(w, nil))
for name, svc := range c.services {
c.loggers[name] = slog.Make(NewLoggerSink(w, svc))
}
}
// Logger returns the catalog's logger.
func (c *Catalog) Logger() slog.Logger {
return c.logger
}
func Get[T Service[R], R any](c *Catalog) R {
var zero T
s, ok := c.Get(zero.Name())
if !ok {
panic(fmt.Sprintf("catalog.Get[%q] not found", zero.Name()))
}
typed, ok := s.(T)
if !ok {
panic(fmt.Sprintf("catalog.Get[%q] has wrong type: %T", zero.Name(), s))
}
return typed.Result()
}
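The generic `Get` above relies on a small trick: the zero value of `T` (a nil pointer for pointer receivers) is enough to call `Name()` and obtain the lookup key, as long as `Name` never dereferences its receiver. A minimal standalone sketch, with `namer`/`resulter` and the `db` service as illustrative stand-ins for the real catalog interfaces:

```go
package main

import "fmt"

type namer interface{ Name() string }

type resulter[R any] interface {
	namer
	Result() R
}

// getResult looks up a service by the key supplied by T's zero value
// and returns its typed Result, panicking on missing or mistyped
// entries, like catalog.Get.
func getResult[T resulter[R], R any](reg map[string]namer) R {
	var zero T // nil pointer; only used to ask for the Name
	s, ok := reg[zero.Name()]
	if !ok {
		panic(fmt.Sprintf("service %q not found", zero.Name()))
	}
	typed, ok := s.(T)
	if !ok {
		panic(fmt.Sprintf("service %q has wrong type: %T", zero.Name(), s))
	}
	return typed.Result()
}

type db struct{ dsn string }

// Name reads no receiver fields, so it is safe on the nil pointer
// produced by `var zero *db`.
func (*db) Name() string      { return "db" }
func (d *db) Result() string  { return d.dsn }

func main() {
	reg := map[string]namer{"db": &db{dsn: "postgres://localhost:5432"}}
	// R cannot be inferred from T's method set, so both type
	// arguments are spelled out.
	fmt.Println(getResult[*db, string](reg))
}
```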
func (c *Catalog) ForEach(f func(s ServiceBase) error) error {
c.mu.RLock()
defer c.mu.RUnlock()
for _, srv := range c.services {
if err := f(srv); err != nil {
return err
}
}
return nil
}
func (c *Catalog) Register(s ...ServiceBase) error {
for _, srv := range s {
if err := c.registerOne(srv); err != nil {
return err
}
}
return nil
}
func (c *Catalog) registerOne(s ServiceBase) error {
c.mu.Lock()
defer c.mu.Unlock()
name := s.Name()
if _, exists := c.services[name]; exists {
return xerrors.Errorf("service %q already registered", name)
}
// Register with unit manager.
if err := c.manager.Register(unit.ID(name)); err != nil && !xerrors.Is(err, unit.ErrUnitAlreadyRegistered) {
return xerrors.Errorf("register %s with manager: %w", name, err)
}
// Add dependencies.
for _, dep := range s.DependsOn() {
// Register dependency if not already registered (it may not exist yet).
_ = c.manager.Register(unit.ID(dep))
if err := c.manager.AddDependency(unit.ID(name), unit.ID(dep), unit.StatusComplete); err != nil {
return xerrors.Errorf("add dependency %s -> %s: %w", name, dep, err)
}
}
c.services[name] = s
return nil
}
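The dependency edges registered above form a DAG that the unit manager walks to decide start order: a service may only start once everything it depends on has completed. The ordering itself can be sketched as a toy topological sort over a `DependsOn` map (the real manager additionally tracks per-unit status; this only orders names):

```go
package main

import "fmt"

// startOrder returns a valid start order in which every service
// appears after all of its dependencies, or an error on a cycle.
func startOrder(deps map[string][]string) ([]string, error) {
	var order []string
	state := map[string]int{} // 0 unseen, 1 visiting, 2 done
	var visit func(n string) error
	visit = func(n string) error {
		switch state[n] {
		case 1:
			return fmt.Errorf("dependency cycle at %q", n)
		case 2:
			return nil
		}
		state[n] = 1
		for _, d := range deps[n] {
			if err := visit(d); err != nil {
				return err
			}
		}
		state[n] = 2
		order = append(order, n)
		return nil
	}
	for n := range deps {
		if err := visit(n); err != nil {
			return nil, err
		}
	}
	return order, nil
}

func main() {
	deps := map[string][]string{
		"coderd":   {"docker", "postgres"},
		"postgres": {"docker"},
		"docker":   nil,
	}
	order, err := startOrder(deps)
	fmt.Println(order, err)
}
```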
func (c *Catalog) MustGet(name ServiceName) ServiceBase {
s, ok := c.Get(name)
if !ok {
panic(fmt.Sprintf("catalog.MustGet: service %q not found", name))
}
return s
}
// Get returns a service by name.
func (c *Catalog) Get(name ServiceName) (ServiceBase, bool) {
s, ok := c.services[name]
return s, ok
}
func (c *Catalog) Status(name ServiceName) (unit.Status, error) {
u, err := c.manager.Unit(unit.ID(name))
if err != nil {
return unit.StatusPending, xerrors.Errorf("get unit for %q: %w", name, err)
}
return u.Status(), nil
}
// UnmetDependencies returns the list of dependencies that are not yet satisfied
// for the given service.
func (c *Catalog) UnmetDependencies(name ServiceName) ([]string, error) {
deps, err := c.manager.GetUnmetDependencies(unit.ID(name))
if err != nil {
return nil, xerrors.Errorf("get unmet dependencies for %q: %w", name, err)
}
result := make([]string, 0, len(deps))
for _, dep := range deps {
result = append(result, string(dep.DependsOn))
}
return result, nil
}
// Configure registers a typed callback to mutate a target service
// before startup. Panics if called after ApplyConfigurations.
func Configure[T ServiceBase](c *Catalog, target ServiceName, fn func(T)) {
if c.configured {
panic(fmt.Sprintf("catalog: Configure(%q) called after ApplyConfigurations", target))
}
c.configurators = append(c.configurators, configurator{
target: target,
apply: func(s ServiceBase) {
typed, ok := s.(T)
if !ok {
panic(fmt.Sprintf("catalog: Configure(%q) type mismatch: got %T", target, s))
}
fn(typed)
},
})
}
// ApplyConfigurations runs all registered Configure callbacks,
// then prevents further Configure calls. Must be called after
// option parsing but before Start.
func (c *Catalog) ApplyConfigurations() error {
for _, cfg := range c.configurators {
svc, ok := c.services[cfg.target]
if !ok {
return xerrors.Errorf("configure target %q not found", cfg.target)
}
cfg.apply(svc)
}
c.configured = true
return nil
}
func (c *Catalog) BuildGraph(ctx context.Context) error {
c.mu.RLock()
defer c.mu.RUnlock()
// Log the service dependency graph on startup.
c.logger.Info(ctx, "service dependency graph")
for _, srv := range c.services {
deps := srv.DependsOn()
if len(deps) == 0 {
c.logger.Info(ctx, fmt.Sprintf(" %s %s (no dependencies)", srv.Emoji(), srv.Name()))
} else {
c.logger.Info(ctx, fmt.Sprintf(" %s %s -> [%s]", srv.Emoji(), srv.Name(), strings.Join(slice.ToStrings(deps), ", ")))
}
}
return nil
}
// Start launches all registered services concurrently.
// Services block until their dependencies (tracked by unit.Manager) are ready.
func (c *Catalog) Start(ctx context.Context) error {
c.mu.Lock()
services := make([]ServiceBase, 0, len(c.services))
for _, srv := range c.services {
services = append(services, srv)
}
c.mu.Unlock()
wg, ctx := errgroup.WithContext(ctx)
wg.SetLimit(-1) // No limit on concurrency, since unit.Manager tracks dependencies.
for _, srv := range services {
wg.Go(func() error {
return c.StartService(ctx, srv.Name())
})
}
// Start a goroutine that prints startup progress every 3 seconds.
startTime := time.Now()
done := make(chan struct{})
go func() {
ticker := time.NewTicker(3 * time.Second)
defer ticker.Stop()
for {
select {
case <-done:
return
case <-ctx.Done():
return
case <-ticker.C:
if c.allUnitsComplete() {
return
}
c.unitsWaiting(ctx, startTime)
}
}
}()
err := wg.Wait()
close(done)
if err != nil {
return xerrors.Errorf("start services: %w", err)
}
return nil
}
// allUnitsComplete returns true if all registered units have completed.
func (c *Catalog) allUnitsComplete() bool {
c.mu.RLock()
defer c.mu.RUnlock()
for name := range c.services {
u, err := c.manager.Unit(unit.ID(name))
if err != nil {
return false
}
if u.Status() != unit.StatusComplete {
return false
}
}
return true
}
// unitsWaiting logs the current state of all units, showing which dependencies
// are blocking each waiting unit. This helps debug startup DAG issues.
func (c *Catalog) unitsWaiting(ctx context.Context, startTime time.Time) {
c.mu.RLock()
defer c.mu.RUnlock()
elapsed := time.Since(startTime).Truncate(time.Millisecond)
var waiting, started, completed []string
for name := range c.services {
u, err := c.manager.Unit(unit.ID(name))
if err != nil {
c.logger.Warn(ctx, "failed to get unit", slog.F("name", name), slog.Error(err))
continue
}
switch u.Status() {
case unit.StatusPending:
waiting = append(waiting, string(name))
case unit.StatusStarted:
started = append(started, string(name))
case unit.StatusComplete:
completed = append(completed, string(name))
}
}
// Sort for deterministic output.
slices.Sort(waiting)
slices.Sort(started)
slices.Sort(completed)
c.logger.Info(ctx, "startup progress",
slog.F("elapsed", elapsed.String()),
slog.F("completed", len(completed)),
slog.F("started", len(started)),
slog.F("waiting", len(waiting)),
)
// Log details for each waiting unit.
for _, name := range waiting {
unmet, err := c.manager.GetUnmetDependencies(unit.ID(name))
if err != nil {
c.logger.Warn(ctx, "failed to get unmet dependencies",
slog.F("name", name), slog.Error(err))
continue
}
if len(unmet) == 0 {
c.logger.Info(ctx, "unit waiting (ready to start)",
slog.F("name", name))
continue
}
// Build a summary of unmet dependencies.
blockers := make([]string, 0, len(unmet))
for _, dep := range unmet {
blockers = append(blockers, fmt.Sprintf("%s(%s!=%s)",
dep.DependsOn, dep.CurrentStatus, dep.RequiredStatus))
}
slices.Sort(blockers)
c.logger.Info(ctx, "unit waiting on dependencies",
slog.F("name", name),
slog.F("blocked_by", strings.Join(blockers, ", ")),
)
}
// Log started units (in progress).
for _, name := range started {
c.logger.Info(ctx, "unit in progress", slog.F("name", name))
}
}
// Subscribe returns a channel that receives a notification whenever
// service state changes. The channel is buffered with size 1 so
// sends never block. Pass the returned channel to Unsubscribe when
// done.
func (c *Catalog) Subscribe() chan struct{} {
ch := make(chan struct{}, 1)
c.subscribersMu.Lock()
c.subscribers[ch] = struct{}{}
c.subscribersMu.Unlock()
return ch
}
// Unsubscribe removes and closes a subscriber channel.
func (c *Catalog) Unsubscribe(ch chan struct{}) {
c.subscribersMu.Lock()
delete(c.subscribers, ch)
c.subscribersMu.Unlock()
close(ch)
}
// NotifySubscribers does a non-blocking send to every subscriber.
// It is exported so that API handlers can trigger notifications
// after operations like restart or stop.
func (c *Catalog) NotifySubscribers() {
c.notifySubscribers()
}
//nolint:revive // Intentional: public NotifySubscribers wraps internal notifySubscribers.
func (c *Catalog) notifySubscribers() {
c.subscribersMu.Lock()
defer c.subscribersMu.Unlock()
for ch := range c.subscribers {
select {
case ch <- struct{}{}:
default:
}
}
}
// RestartService stops a service, resets its status, and starts it again,
// updating the unit.Manager status throughout the lifecycle.
func (c *Catalog) RestartService(ctx context.Context, name ServiceName, logger slog.Logger) error {
svc, ok := c.Get(name)
if !ok {
return xerrors.Errorf("service %q not found", name)
}
if err := svc.Stop(ctx); err != nil {
return xerrors.Errorf("stop %s: %w", name, err)
}
// Reset status to pending, then follow the same lifecycle as Catalog.Start().
if err := c.manager.UpdateStatus(unit.ID(name), unit.StatusPending); err != nil {
return xerrors.Errorf("reset status for %s: %w", name, err)
}
c.notifySubscribers()
if err := c.manager.UpdateStatus(unit.ID(name), unit.StatusStarted); err != nil {
return xerrors.Errorf("update status for %s: %w", name, err)
}
c.notifySubscribers()
if err := svc.Start(ctx, logger, c); err != nil {
return xerrors.Errorf("start %s: %w", name, err)
}
if err := c.manager.UpdateStatus(unit.ID(name), unit.StatusComplete); err != nil {
return xerrors.Errorf("update status for %s: %w", name, err)
}
c.notifySubscribers()
return nil
}
// StartService2 starts a previously stopped service, transitioning its
// unit.Manager status through started → complete. Unlike StartService,
// it does not wait for dependencies and is not idempotent.
func (c *Catalog) StartService2(ctx context.Context, name ServiceName, logger slog.Logger) error {
svc, ok := c.Get(name)
if !ok {
return xerrors.Errorf("service %q not found", name)
}
if err := c.manager.UpdateStatus(unit.ID(name), unit.StatusStarted); err != nil {
return xerrors.Errorf("update status for %s: %w", name, err)
}
c.notifySubscribers()
if err := svc.Start(ctx, logger, c); err != nil {
return xerrors.Errorf("start %s: %w", name, err)
}
if err := c.manager.UpdateStatus(unit.ID(name), unit.StatusComplete); err != nil {
return xerrors.Errorf("update status for %s: %w", name, err)
}
c.notifySubscribers()
return nil
}
// StartService starts a service once its dependencies are satisfied,
// transitioning its unit.Manager status through pending → started → complete.
// It is idempotent: calling it multiple times while a start is in
// progress or after the service is already running will return nil without
// doing anything. StopService will cancel any in-progress start operation.
func (c *Catalog) StartService(ctx context.Context, name ServiceName) (failure error) {
defer func() {
if err := recover(); err != nil {
failure = xerrors.Errorf("panic: %v", err)
}
}()
// Check if service is already started/starting (idempotent).
status, err := c.Status(name)
if err != nil {
return xerrors.Errorf("get status for %s: %w", name, err)
}
if status == unit.StatusStarted || status == unit.StatusComplete {
// Already starting or running, nothing to do.
return nil
}
// Check if a start is already in progress.
c.startCancelsMu.Lock()
if _, exists := c.startCancels[name]; exists {
c.startCancelsMu.Unlock()
return nil // Another start is in progress.
}
// Create a cancellable context for this start operation.
ctx, cancel := context.WithCancel(ctx)
c.startCancels[name] = cancel
c.startCancelsMu.Unlock()
// Clean up when done (success or failure).
defer func() {
c.startCancelsMu.Lock()
delete(c.startCancels, name)
c.startCancelsMu.Unlock()
}()
c.mu.RLock()
srv, ok := c.services[name]
c.mu.RUnlock()
if !ok {
return xerrors.Errorf("service %q not found", name)
}
svcLogger := c.loggers[name]
if err := c.waitForReady(ctx, name); err != nil {
return xerrors.Errorf("wait for %s to be ready: %w", name, err)
}
if err := c.manager.UpdateStatus(unit.ID(name), unit.StatusStarted); err != nil {
return xerrors.Errorf("update status for %s: %w", name, err)
}
c.notifySubscribers()
svcLogger.Info(ctx, "starting service")
if err := srv.Start(ctx, svcLogger, c); err != nil {
return xerrors.Errorf("start %s: %w", name, err)
}
// Mark as complete after starting, which allows dependent services to start.
if err := c.manager.UpdateStatus(unit.ID(name), unit.StatusComplete); err != nil {
return xerrors.Errorf("update status for %s: %w", name, err)
}
c.notifySubscribers()
svcLogger.Info(ctx, "service started", slog.F("name", name))
return nil
}
// StopService stops a service and resets its unit.Manager status to pending.
// If a StartService call is in progress for this service, it will be canceled.
func (c *Catalog) StopService(ctx context.Context, name ServiceName) error {
// Cancel any in-progress start operation.
c.startCancelsMu.Lock()
if cancel, exists := c.startCancels[name]; exists {
cancel()
delete(c.startCancels, name)
}
c.startCancelsMu.Unlock()
svc, ok := c.Get(name)
if !ok {
return xerrors.Errorf("service %q not found", name)
}
if err := svc.Stop(ctx); err != nil {
return xerrors.Errorf("stop %s: %w", name, err)
}
// Reset to pending since the service is no longer running.
if err := c.manager.UpdateStatus(unit.ID(name), unit.StatusPending); err != nil {
return xerrors.Errorf("reset status for %s: %w", name, err)
}
c.notifySubscribers()
return nil
}
// waitForReady blocks until the service's dependencies are satisfied,
// polling the unit.Manager every 15ms.
func (c *Catalog) waitForReady(ctx context.Context, name ServiceName) error {
for {
ready, err := c.manager.IsReady(unit.ID(name))
if err != nil {
return err
}
if ready {
return nil
}
select {
case <-ctx.Done():
return xerrors.Errorf("wait for service %s: %w", name, ctx.Err())
case <-time.After(15 * time.Millisecond):
}
}
}
@@ -0,0 +1,177 @@
package catalog
import (
"context"
"fmt"
"os/exec"
"strings"
"github.com/dustin/go-humanize"
"github.com/ory/dockertest/v3/docker"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
)
// Down stops containers via docker compose down.
func Down(ctx context.Context, logger slog.Logger) error {
logger.Info(ctx, "running docker compose down")
//nolint:gosec // Arguments are controlled.
cmd := exec.CommandContext(ctx, "docker", "compose", "-f", composeFilePath(), "down")
cmd.Stdout = LogWriter(logger, slog.LevelInfo, "compose-down")
cmd.Stderr = LogWriter(logger, slog.LevelWarn, "compose-down")
if err := cmd.Run(); err != nil {
return xerrors.Errorf("docker compose down: %w", err)
}
return nil
}
// Cleanup removes all compose resources including volumes and locally
// built images.
func Cleanup(ctx context.Context, logger slog.Logger) error {
logger.Info(ctx, "running docker compose down -v --rmi local")
//nolint:gosec // Arguments are controlled.
cmd := exec.CommandContext(ctx,
"docker", "compose", "-f", composeFilePath(),
"down", "-v", "--rmi", "local",
)
cmd.Stdout = LogWriter(logger, slog.LevelInfo, "compose-cleanup")
cmd.Stderr = LogWriter(logger, slog.LevelWarn, "compose-cleanup")
if err := cmd.Run(); err != nil {
// If the compose file doesn't exist, fall back to direct
// Docker cleanup via labels.
logger.Warn(ctx, "compose down failed, falling back to label-based cleanup", slog.Error(err))
}
// Also clean up any remaining cdev-labeled resources that may
// not be in the compose file.
client, err := docker.NewClientFromEnv()
if err != nil {
return xerrors.Errorf("connect to docker: %w", err)
}
filter := NewLabels().Filter()
if err := cleanContainers(ctx, logger, client, filter); err != nil {
logger.Error(ctx, "failed to clean up containers", slog.Error(err))
}
if err := cleanVolumes(ctx, logger, client, filter); err != nil {
logger.Error(ctx, "failed to clean up volumes", slog.Error(err))
}
if err := cleanImages(ctx, logger, client, filter); err != nil {
logger.Error(ctx, "failed to clean up images", slog.Error(err))
}
return nil
}
func StopContainers(ctx context.Context, logger slog.Logger, client *docker.Client, filter map[string][]string) error {
containers, err := client.ListContainers(docker.ListContainersOptions{
All: true,
Filters: filter,
Context: ctx,
})
if err != nil {
return xerrors.Errorf("list containers: %w", err)
}
for _, cnt := range containers {
err := client.StopContainer(cnt.ID, 10)
if err != nil && !strings.Contains(err.Error(), "Container not running") {
logger.Error(ctx, fmt.Sprintf("Failed to stop container %s: %v", cnt.ID, err))
// Continue trying to stop other containers even if one fails.
continue
}
}
return nil
}
func Containers(ctx context.Context, logger slog.Logger, client *docker.Client, filter map[string][]string) error {
err := StopContainers(ctx, logger, client, filter)
if err != nil {
return xerrors.Errorf("stop containers: %w", err)
}
res, err := client.PruneContainers(docker.PruneContainersOptions{
Filters: filter,
Context: ctx,
})
if err != nil {
return xerrors.Errorf("prune containers: %w", err)
}
if len(res.ContainersDeleted) == 0 {
return nil
}
logger.Info(ctx, fmt.Sprintf("📋 Deleted %d containers and reclaimed %s of space",
len(res.ContainersDeleted), humanize.Bytes(uint64(max(0, res.SpaceReclaimed))), //nolint:gosec // G115 SpaceReclaimed is non-negative in practice
))
for _, id := range res.ContainersDeleted {
logger.Debug(ctx, "🧹 Deleted container",
slog.F("container_id", id),
)
}
return nil
}
func cleanContainers(ctx context.Context, logger slog.Logger, client *docker.Client, filter map[string][]string) error {
return Containers(ctx, logger, client, filter)
}
func Volumes(ctx context.Context, logger slog.Logger, client *docker.Client, filter map[string][]string) error {
vols, err := client.ListVolumes(docker.ListVolumesOptions{
Filters: filter,
})
if err != nil {
return xerrors.Errorf("list volumes: %w", err)
}
for _, vol := range vols {
err = client.RemoveVolumeWithOptions(docker.RemoveVolumeOptions{
Context: ctx,
Name: vol.Name,
Force: true,
})
if err != nil {
logger.Error(ctx, fmt.Sprintf("Failed to remove volume %s: %v", vol.Name, err))
// Continue trying to remove other volumes even if one fails.
} else {
logger.Debug(ctx, "🧹 Deleted volume",
slog.F("volume_name", vol.Name),
)
}
}
return nil
}
func cleanVolumes(ctx context.Context, logger slog.Logger, client *docker.Client, filter map[string][]string) error {
return Volumes(ctx, logger, client, filter)
}
func Images(ctx context.Context, logger slog.Logger, client *docker.Client, filter map[string][]string) error {
imgs, err := client.ListImages(docker.ListImagesOptions{
Filters: filter,
})
if err != nil {
return xerrors.Errorf("list images: %w", err)
}
for _, img := range imgs {
err = client.RemoveImage(img.ID)
if err != nil {
logger.Error(ctx, fmt.Sprintf("Failed to remove image %s: %v", img.ID, err))
} else {
logger.Debug(ctx, "🧹 Deleted image",
slog.F("image_id", img.ID),
slog.F("image_size", humanize.Bytes(uint64(max(0, img.Size)))), //nolint:gosec // G115 Size is non-negative in practice
)
}
}
return nil
}
func cleanImages(ctx context.Context, logger slog.Logger, client *docker.Client, filter map[string][]string) error {
return Images(ctx, logger, client, filter)
}
@@ -0,0 +1,344 @@
package catalog
import (
"context"
"fmt"
"net/http"
"os"
"os/exec"
"strings"
"sync/atomic"
"time"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/serpent"
)
const (
coderdBasePort = 3000
pprofBasePort = 6060
prometheusBasePort = 2112
)
// PprofPortNum returns the pprof port number for a given coderd
// instance index. Instance 0 uses port 6060, instance 1 uses 6061,
// etc.
func PprofPortNum(index int) int {
return pprofBasePort + index
}
// PrometheusPortNum returns the Prometheus metrics port number for a
// given coderd instance index. Instance 0 uses port 2112, instance 1
// uses 2113, etc.
func PrometheusPortNum(index int) int {
return prometheusBasePort + index
}
// coderdPortNum returns the port number for a given coderd instance index.
// Instance 0 uses port 3000, instance 1 uses 3001, etc.
func coderdPortNum(index int) int {
return coderdBasePort + index
}
// CoderdResult contains the connection info for the running Coderd instance.
type CoderdResult struct {
// URL is the access URL for the Coder instance.
URL string
// Port is the host port mapped to the container's 3000.
Port string
}
var _ Service[CoderdResult] = (*Coderd)(nil)
func OnCoderd() ServiceName {
return (&Coderd{}).Name()
}
// Coderd runs the Coder server inside a Docker container via compose.
type Coderd struct {
currentStep atomic.Pointer[string]
haCount int64
// ExtraEnv contains additional "KEY=VALUE" environment variables
// for the coderd container, set by Configure callbacks.
ExtraEnv []string
// ExtraArgs contains additional CLI arguments for the coderd
// server command, set by Configure callbacks.
ExtraArgs []string
result CoderdResult
logger slog.Logger
dkr *Docker
}
func (c *Coderd) CurrentStep() string {
if s := c.currentStep.Load(); s != nil {
return *s
}
return ""
}
func (c *Coderd) URL() string {
return c.result.URL
}
func (c *Coderd) setStep(step string) {
c.currentStep.Store(&step)
}
func NewCoderd() *Coderd {
return &Coderd{}
}
func (*Coderd) Name() ServiceName {
return CDevCoderd
}
func (*Coderd) Emoji() string {
return "🖥️"
}
// HACount returns the number of coderd instances configured for HA.
func (c *Coderd) HACount() int64 { return c.haCount }
func (*Coderd) DependsOn() []ServiceName {
return []ServiceName{
OnDocker(),
OnPostgres(),
OnBuildSlim(),
OnOIDC(),
}
}
func (c *Coderd) Options() serpent.OptionSet {
return serpent.OptionSet{
{
Name: "Coderd HA Count",
Description: "Number of coderd instances to run in HA mode.",
Required: false,
Flag: "coderd-count",
Env: "CDEV_CODERD_COUNT",
Default: "1",
Value: serpent.Int64Of(&c.haCount),
},
}
}
func OnBuildSlim() ServiceName {
return (&BuildSlim{}).Name()
}
func (c *Coderd) Start(ctx context.Context, logger slog.Logger, cat *Catalog) error {
defer c.setStep("")
c.logger = logger
dkr, ok := cat.MustGet(OnDocker()).(*Docker)
if !ok {
return xerrors.New("unexpected type for Docker service")
}
c.dkr = dkr
oidc, ok := cat.MustGet(OnOIDC()).(*OIDC)
if !ok {
return xerrors.New("unexpected type for OIDC service")
}
// Get current working directory for mounting.
cwd, err := os.Getwd()
if err != nil {
return xerrors.Errorf("get working directory: %w", err)
}
// Get docker socket path.
dockerSocket := os.Getenv("DOCKER_SOCKET")
if dockerSocket == "" {
dockerSocket = "/var/run/docker.sock"
}
// Get docker group ID for socket access.
dockerGroup := os.Getenv("DOCKER_GROUP")
if dockerGroup == "" {
dockerGroup = getDockerGroupID()
}
// Register each HA instance as a compose service.
var serviceNames []string
for i := range c.haCount {
index := int(i)
name := fmt.Sprintf("coderd-%d", index)
serviceNames = append(serviceNames, name)
c.setStep(fmt.Sprintf("Registering coderd-%d compose service", index))
logger.Info(ctx, "registering coderd instance", slog.F("index", index))
port := coderdPortNum(index)
pprofPort := PprofPortNum(index)
prometheusPort := PrometheusPortNum(index)
accessURL := fmt.Sprintf("http://localhost:%d", port)
wildcardAccessURL := fmt.Sprintf("*.localhost:%d", port)
volName := fmt.Sprintf("coderv2_config_%d", index)
dkr.SetComposeVolume(volName, ComposeVolume{})
env := map[string]string{
"CODER_PG_CONNECTION_URL": "postgresql://coder:coder@database:5432/coder?sslmode=disable",
"CODER_HTTP_ADDRESS": "0.0.0.0:3000",
"CODER_ACCESS_URL": accessURL,
"CODER_WILDCARD_ACCESS_URL": wildcardAccessURL,
"CODER_SWAGGER_ENABLE": "true",
"CODER_DANGEROUS_ALLOW_CORS_REQUESTS": "true",
"CODER_TELEMETRY_ENABLE": "false",
"GOMODCACHE": "/go-cache/mod",
"GOCACHE": "/go-cache/build",
"CODER_CACHE_DIRECTORY": "/cache",
"DOCKER_HOST": fmt.Sprintf("unix://%s", dockerSocket),
"CODER_PPROF_ENABLE": "true",
"CODER_PPROF_ADDRESS": fmt.Sprintf("0.0.0.0:%d", pprofPort),
"CODER_PROMETHEUS_ENABLE": "true",
"CODER_PROMETHEUS_ADDRESS": fmt.Sprintf("0.0.0.0:%d", prometheusPort),
}
for _, kv := range c.ExtraEnv {
parts := strings.SplitN(kv, "=", 2)
if len(parts) == 2 {
env[parts[0]] = parts[1]
}
}
cmd := []string{
"go", "run", "./enterprise/cmd/coder", "server",
"--http-address", "0.0.0.0:3000",
"--access-url", accessURL,
"--wildcard-access-url", wildcardAccessURL,
"--swagger-enable",
"--dangerous-allow-cors-requests=true",
"--enable-terraform-debug-mode",
"--pprof-enable",
"--pprof-address", fmt.Sprintf("0.0.0.0:%d", pprofPort),
"--prometheus-enable",
"--prometheus-address", fmt.Sprintf("0.0.0.0:%d", prometheusPort),
"--oidc-issuer-url", oidc.Result().IssuerURL,
"--oidc-client-id", oidc.Result().ClientID,
"--oidc-client-secret", oidc.Result().ClientSecret,
}
cmd = append(cmd, c.ExtraArgs...)
depends := map[string]ComposeDependsOn{
"database": {Condition: "service_healthy"},
"build-slim": {Condition: "service_completed_successfully"},
}
dkr.SetCompose(name, ComposeService{
Image: dogfoodImage + ":" + dogfoodTag,
WorkingDir: "/app",
Networks: []string{composeNetworkName},
GroupAdd: []string{dockerGroup},
Environment: env,
Command: cmd,
Ports: []string{
fmt.Sprintf("%d:3000", port),
fmt.Sprintf("%d:%d", pprofPort, pprofPort),
fmt.Sprintf("%d:%d", prometheusPort, prometheusPort),
},
Volumes: []string{
fmt.Sprintf("%s:/app", cwd),
"go_cache:/go-cache",
"coder_cache:/cache",
fmt.Sprintf("%s:/home/coder/.config/coderv2", volName),
fmt.Sprintf("%s:%s", dockerSocket, dockerSocket),
},
DependsOn: depends,
Restart: "unless-stopped",
Labels: composeServiceLabels("coderd"),
Healthcheck: &ComposeHealthcheck{
Test: []string{"CMD-SHELL", "curl -sf http://localhost:3000/api/v2/buildinfo || exit 1"},
Interval: "5s",
Timeout: "5s",
Retries: 60,
StartPeriod: "120s",
},
})
}
c.setStep("Starting coderd via compose")
if err := dkr.DockerComposeUp(ctx, serviceNames...); err != nil {
return xerrors.Errorf("docker compose up coderd: %w", err)
}
port := coderdPortNum(0)
c.result = CoderdResult{
URL: fmt.Sprintf("http://localhost:%d", port),
Port: fmt.Sprintf("%d", port),
}
c.setStep("Inserting license if set")
logger.Info(ctx, "inserting license for coderd", slog.F("ha_count", c.haCount))
if err := EnsureLicense(ctx, logger, cat); err != nil {
if c.haCount > 1 {
// HA deployments require a license; fail hard.
return xerrors.Errorf("ensure license: %w", err)
}
logger.Warn(ctx, "ensure license failed; continuing without one", slog.Error(err))
}
c.setStep("Waiting for coderd to be ready")
return c.waitForReady(ctx, logger)
}
func (c *Coderd) waitForReady(ctx context.Context, logger slog.Logger) error {
ticker := time.NewTicker(2 * time.Second)
defer ticker.Stop()
// Coderd can take a while to start, especially on first run with go run.
timeout := time.After(5 * time.Minute)
healthURL := c.result.URL + "/api/v2/buildinfo" // this actually returns when the server is ready, as opposed to healthz
logger.Info(ctx, "waiting for coderd to be ready", slog.F("health_url", healthURL))
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-timeout:
return xerrors.New("timeout waiting for coderd to be ready")
case <-ticker.C:
req, err := http.NewRequestWithContext(ctx, http.MethodGet, healthURL, nil)
if err != nil {
continue
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
continue
}
_ = resp.Body.Close()
if resp.StatusCode == http.StatusOK {
logger.Info(ctx, "coderd server is ready and accepting connections", slog.F("url", c.result.URL))
return nil
}
}
}
}
func (c *Coderd) Stop(ctx context.Context) error {
if c.dkr == nil {
return nil
}
// Stop every HA instance, not just coderd-0.
names := make([]string, 0, c.haCount)
for i := range c.haCount {
names = append(names, fmt.Sprintf("coderd-%d", i))
}
return c.dkr.DockerComposeStop(ctx, names...)
}
func (c *Coderd) Result() CoderdResult {
return c.result
}
// getDockerGroupID returns the GID of the docker group via getent,
// falling back to "999" if the lookup fails.
func getDockerGroupID() string {
out, err := exec.Command("getent", "group", "docker").Output()
if err == nil {
// Format is "docker:x:GID:users", we want the third field.
parts := strings.Split(strings.TrimSpace(string(out)), ":")
if len(parts) >= 3 {
return parts[2]
}
}
return "999"
}
@@ -0,0 +1,487 @@
package catalog
import (
"fmt"
"path/filepath"
"strings"
"gopkg.in/yaml.v3"
)
// Compose file types that marshal to valid docker-compose YAML.
// ComposeFile represents a docker-compose.yml file.
type ComposeFile struct {
Services map[string]ComposeService `yaml:"services"`
Volumes map[string]ComposeVolume `yaml:"volumes,omitempty"`
Networks map[string]ComposeNetwork `yaml:"networks,omitempty"`
cfg ComposeConfig `yaml:"-"`
}
// NewComposeFile creates a new ComposeFile with initialized maps and
// the given config stored for use by builder methods.
func NewComposeFile(cfg ComposeConfig) *ComposeFile {
return &ComposeFile{
Services: make(map[string]ComposeService),
Volumes: make(map[string]ComposeVolume),
Networks: map[string]ComposeNetwork{
composeNetworkName: {Driver: "bridge"},
},
cfg: cfg,
}
}
// ComposeService represents a single service in a compose file.
type ComposeService struct {
Image string `yaml:"image,omitempty"`
Build *ComposeBuild `yaml:"build,omitempty"`
Command any `yaml:"command,omitempty"`
Entrypoint any `yaml:"entrypoint,omitempty"`
Environment map[string]string `yaml:"environment,omitempty"`
Ports []string `yaml:"ports,omitempty"`
Volumes []string `yaml:"volumes,omitempty"`
DependsOn map[string]ComposeDependsOn `yaml:"depends_on,omitempty"`
Networks []string `yaml:"networks,omitempty"`
NetworkMode string `yaml:"network_mode,omitempty"`
WorkingDir string `yaml:"working_dir,omitempty"`
Labels []string `yaml:"labels,omitempty"`
GroupAdd []string `yaml:"group_add,omitempty"`
User string `yaml:"user,omitempty"`
Restart string `yaml:"restart,omitempty"`
Healthcheck *ComposeHealthcheck `yaml:"healthcheck,omitempty"`
}
// ComposeBuild represents build configuration for a service.
type ComposeBuild struct {
Context string `yaml:"context"`
Dockerfile string `yaml:"dockerfile,omitempty"`
}
// ComposeDependsOn represents a dependency condition.
type ComposeDependsOn struct {
Condition string `yaml:"condition"`
}
// ComposeHealthcheck represents a healthcheck configuration.
type ComposeHealthcheck struct {
Test []string `yaml:"test"`
Interval string `yaml:"interval,omitempty"`
Timeout string `yaml:"timeout,omitempty"`
Retries int `yaml:"retries,omitempty"`
StartPeriod string `yaml:"start_period,omitempty"`
}
// ComposeVolume represents a named volume declaration.
type ComposeVolume struct{}
// ComposeNetwork represents a network declaration.
type ComposeNetwork struct {
Driver string `yaml:"driver,omitempty"`
}
const (
composeNetworkName = "coder-dev"
composeDogfood = "codercom/oss-dogfood:latest"
)
// ComposeConfig holds the configuration for generating a compose file.
type ComposeConfig struct {
CoderdCount int
ProvisionerCount int
OIDC bool
Prometheus bool
DockerGroup string
DockerSocket string
CWD string
License string
}
func composeServiceLabels(service string) []string {
return []string{
CDevLabel + "=true",
CDevService + "=" + service,
}
}
// Generate builds the full ComposeFile from the given config.
func Generate(cfg ComposeConfig) *ComposeFile {
if cfg.CoderdCount < 1 {
cfg.CoderdCount = 1
}
cf := NewComposeFile(cfg)
cf.AddDatabase().AddInitVolumes().AddBuildSlim()
for i := range cfg.CoderdCount {
cf.AddCoderd(i)
}
if cfg.OIDC {
cf.AddOIDC()
}
cf.AddSite()
for i := range cfg.ProvisionerCount {
cf.AddProvisioner(i)
}
if cfg.Prometheus {
cf.AddPrometheus()
}
cf.AddLoadBalancer(cfg.CoderdCount)
return cf
}
// AddLoadBalancer adds the nginx load balancer service that fronts
// all cdev services with separate listeners per service.
func (cf *ComposeFile) AddLoadBalancer(haCount int) *ComposeFile {
cfg := cf.cfg
if haCount < 1 {
haCount = 1
}
nginxConf := filepath.Join(cfg.CWD, ".cdev-lb", "nginx.conf")
var ports []string
addPort := func(port int) {
ports = append(ports, fmt.Sprintf("%d:%d", port, port))
}
// Load-balanced coderd.
addPort(coderdBasePort)
// Individual coderd instances (3001..3000+N).
for i := range haCount {
addPort(coderdBasePort + 1 + i)
}
// pprof per instance.
for i := range haCount {
addPort(pprofBasePort + i)
}
// Metrics per instance.
for i := range haCount {
addPort(prometheusBasePort + i)
}
// OIDC.
addPort(oidcPort)
// Prometheus UI.
addPort(prometheusUIPort2)
// Site dev server.
addPort(sitePort)
cf.Services["load-balancer"] = ComposeService{
Image: nginxImage + ":" + nginxTag,
Volumes: []string{nginxConf + ":/etc/nginx/nginx.conf:ro"},
Ports: ports,
Networks: []string{composeNetworkName},
Labels: composeServiceLabels("load-balancer"),
}
return cf
}
// GenerateYAML generates the compose YAML bytes from the given config.
func GenerateYAML(cfg ComposeConfig) ([]byte, error) {
cf := Generate(cfg)
return yaml.Marshal(cf)
}
// AddDatabase adds the PostgreSQL database service.
func (cf *ComposeFile) AddDatabase() *ComposeFile {
cf.Volumes["coder_dev_data"] = ComposeVolume{}
cf.Services["database"] = ComposeService{
Image: "postgres:17",
Environment: map[string]string{
"POSTGRES_USER": postgresUser,
"POSTGRES_PASSWORD": postgresPassword,
"POSTGRES_DB": postgresDB,
},
Volumes: []string{"coder_dev_data:/var/lib/postgresql/data"},
Ports: []string{"5432:5432"},
Networks: []string{composeNetworkName},
Labels: composeServiceLabels("database"),
Healthcheck: &ComposeHealthcheck{
Test: []string{"CMD-SHELL", "pg_isready -U coder"},
Interval: "2s",
Timeout: "5s",
Retries: 10,
},
}
return cf
}
// AddInitVolumes adds the volume initialization service.
func (cf *ComposeFile) AddInitVolumes() *ComposeFile {
cf.Volumes["go_cache"] = ComposeVolume{}
cf.Volumes["coder_cache"] = ComposeVolume{}
cf.Volumes["site_node_modules"] = ComposeVolume{}
cf.Services["init-volumes"] = ComposeService{
Image: composeDogfood,
User: "0:0",
Volumes: []string{
"go_cache:/go-cache",
"coder_cache:/cache",
"site_node_modules:/app/site/node_modules",
},
Command: "chown -R 1000:1000 /go-cache /cache /app/site/node_modules",
Labels: composeServiceLabels("init-volumes"),
}
return cf
}
// AddBuildSlim adds the slim binary build service.
func (cf *ComposeFile) AddBuildSlim() *ComposeFile {
cfg := cf.cfg
cf.Services["build-slim"] = ComposeService{
Image: composeDogfood,
NetworkMode: "host",
WorkingDir: "/app",
GroupAdd: []string{cfg.DockerGroup},
Environment: map[string]string{
"GOMODCACHE": "/go-cache/mod",
"GOCACHE": "/go-cache/build",
"DOCKER_HOST": fmt.Sprintf("unix://%s", cfg.DockerSocket),
},
Volumes: []string{
fmt.Sprintf("%s:/app", cfg.CWD),
"go_cache:/go-cache",
"coder_cache:/cache",
fmt.Sprintf("%s:/var/run/docker.sock", cfg.DockerSocket),
},
Command: `sh -c 'make -j build-slim && mkdir -p /cache/site/orig/bin && cp site/out/bin/coder-* /cache/site/orig/bin/ 2>/dev/null || true && echo "Slim binaries built and cached."'`,
DependsOn: map[string]ComposeDependsOn{
"init-volumes": {Condition: "service_completed_successfully"},
"database": {Condition: "service_healthy"},
},
Labels: composeServiceLabels("build-slim"),
}
return cf
}
// AddCoderd adds a coderd service instance at the given index.
func (cf *ComposeFile) AddCoderd(index int) *ComposeFile {
cfg := cf.cfg
name := fmt.Sprintf("coderd-%d", index)
hostPort := 3000 + index
pprofPort := 6060 + index
promPort := 2112 + index
volName := fmt.Sprintf("coderv2_config_%d", index)
cf.Volumes[volName] = ComposeVolume{}
pgURL := "postgresql://coder:coder@database:5432/coder?sslmode=disable" //nolint:gosec // G101: Dev-only postgres credentials.
accessURL := fmt.Sprintf("http://localhost:%d", hostPort)
env := map[string]string{
"CODER_PG_CONNECTION_URL": pgURL,
"CODER_HTTP_ADDRESS": "0.0.0.0:3000",
"CODER_ACCESS_URL": accessURL,
"CODER_SWAGGER_ENABLE": "true",
"CODER_DANGEROUS_ALLOW_CORS_REQUESTS": "true",
"CODER_TELEMETRY_ENABLE": "false",
"GOMODCACHE": "/go-cache/mod",
"GOCACHE": "/go-cache/build",
"CODER_CACHE_DIRECTORY": "/cache",
"DOCKER_HOST": "unix:///var/run/docker.sock",
"CODER_PPROF_ENABLE": "true",
"CODER_PPROF_ADDRESS": "0.0.0.0:6060",
"CODER_PROMETHEUS_ENABLE": "true",
"CODER_PROMETHEUS_ADDRESS": "0.0.0.0:2112",
}
if cfg.ProvisionerCount > 0 {
env["CODER_PROVISIONER_DAEMONS"] = "0"
}
if cfg.License != "" {
env["CODER_LICENSE"] = cfg.License
}
cmd := []string{
"go", "run", "./enterprise/cmd/coder", "server",
"--http-address", "0.0.0.0:3000",
"--access-url", accessURL,
"--swagger-enable",
"--dangerous-allow-cors-requests=true",
"--enable-terraform-debug-mode",
"--pprof-enable",
"--pprof-address", "0.0.0.0:6060",
"--prometheus-enable",
"--prometheus-address", "0.0.0.0:2112",
}
if cfg.OIDC {
cmd = append(cmd,
"--oidc-issuer-url", "http://oidc:4500",
"--oidc-client-id", "static-client-id",
"--oidc-client-secret", "static-client-secret",
)
}
depends := map[string]ComposeDependsOn{
"database": {Condition: "service_healthy"},
"build-slim": {Condition: "service_completed_successfully"},
}
if cfg.OIDC {
depends["oidc"] = ComposeDependsOn{Condition: "service_healthy"}
}
cf.Services[name] = ComposeService{
Image: composeDogfood,
WorkingDir: "/app",
Networks: []string{composeNetworkName},
GroupAdd: []string{cfg.DockerGroup},
Environment: env,
Command: cmd,
Ports: []string{
fmt.Sprintf("%d:3000", hostPort),
fmt.Sprintf("%d:6060", pprofPort),
fmt.Sprintf("%d:2112", promPort),
},
Volumes: []string{
fmt.Sprintf("%s:/app", cfg.CWD),
"go_cache:/go-cache",
"coder_cache:/cache",
fmt.Sprintf("%s:/home/coder/.config/coderv2", volName),
fmt.Sprintf("%s:/var/run/docker.sock", cfg.DockerSocket),
},
DependsOn: depends,
Restart: "unless-stopped",
Labels: composeServiceLabels("coderd"),
Healthcheck: &ComposeHealthcheck{
Test: []string{"CMD-SHELL", "curl -sf http://localhost:3000/api/v2/buildinfo || exit 1"},
Interval: "5s",
Timeout: "5s",
Retries: 60,
StartPeriod: "120s",
},
}
return cf
}
// AddOIDC adds the OIDC test identity provider service.
func (cf *ComposeFile) AddOIDC() *ComposeFile {
cf.Services["oidc"] = ComposeService{
Build: &ComposeBuild{
Context: ".",
Dockerfile: "scripts/testidp/Dockerfile.testidp",
},
Ports: []string{"4500:4500"},
Networks: []string{composeNetworkName},
Command: "-client-id static-client-id -client-sec static-client-secret -issuer http://oidc:4500",
Labels: composeServiceLabels("oidc"),
Healthcheck: &ComposeHealthcheck{
Test: []string{"CMD-SHELL", "curl -sf http://localhost:4500/.well-known/openid-configuration || exit 1"},
Interval: "2s",
Timeout: "5s",
Retries: 15,
},
}
return cf
}
// AddSite adds the frontend dev server service.
func (cf *ComposeFile) AddSite() *ComposeFile {
cfg := cf.cfg
cf.Services["site"] = ComposeService{
Image: composeDogfood,
Networks: []string{composeNetworkName},
WorkingDir: "/app/site",
Environment: map[string]string{
"CODER_HOST": "http://coderd-0:3000",
},
Ports: []string{"8080:8080"},
Volumes: []string{
fmt.Sprintf("%s/site:/app/site", cfg.CWD),
"site_node_modules:/app/site/node_modules",
},
Command: `sh -c "pnpm install --frozen-lockfile && pnpm dev --host"`,
DependsOn: map[string]ComposeDependsOn{
"coderd-0": {Condition: "service_healthy"},
},
Labels: composeServiceLabels("site"),
}
return cf
}
// AddProvisioner adds an external provisioner service at the given index.
func (cf *ComposeFile) AddProvisioner(index int) *ComposeFile {
cfg := cf.cfg
name := fmt.Sprintf("provisioner-%d", index)
env := map[string]string{
"CODER_URL": "http://coderd-0:3000",
"GOMODCACHE": "/go-cache/mod",
"GOCACHE": "/go-cache/build",
"CODER_CACHE_DIRECTORY": "/cache",
"DOCKER_HOST": "unix:///var/run/docker.sock",
"CODER_PROVISIONER_DAEMON_NAME": fmt.Sprintf("cdev-provisioner-%d", index),
}
cf.Services[name] = ComposeService{
Image: composeDogfood,
Networks: []string{composeNetworkName},
WorkingDir: "/app",
Environment: env,
Command: []string{"go", "run", "./enterprise/cmd/coder", "provisioner", "start", "--verbose"},
Volumes: []string{
fmt.Sprintf("%s:/app", cfg.CWD),
"go_cache:/go-cache",
"coder_cache:/cache",
fmt.Sprintf("%s:/var/run/docker.sock", cfg.DockerSocket),
},
GroupAdd: []string{cfg.DockerGroup},
DependsOn: map[string]ComposeDependsOn{
"coderd-0": {Condition: "service_healthy"},
},
Labels: composeServiceLabels("provisioner"),
}
return cf
}
// AddPrometheus adds Prometheus monitoring services.
func (cf *ComposeFile) AddPrometheus() *ComposeFile {
cfg := cf.cfg
cf.Volumes["prometheus"] = ComposeVolume{}
// Build scrape targets for all coderd instances.
var targets []string
for i := range cfg.CoderdCount {
targets = append(targets, fmt.Sprintf("coderd-%d:2112", i))
}
targetsStr := `"` + strings.Join(targets, `", "`) + `"`
configScript := fmt.Sprintf(
`mkdir -p /prom-vol/config /prom-vol/data && printf '%%s' 'global:
scrape_interval: 15s
scrape_configs:
- job_name: "coder"
static_configs:
- targets: [%s]
' > /prom-vol/config/prometheus.yml`, targetsStr)
cf.Services["prometheus-init"] = ComposeService{
Image: "prom/prometheus:latest",
Entrypoint: []string{"sh", "-c"},
Command: configScript,
Volumes: []string{"prometheus:/prom-vol"},
Labels: composeServiceLabels("prometheus-init"),
}
cf.Services["prometheus"] = ComposeService{
Image: "prom/prometheus:latest",
Command: []string{
"--config.file=/prom-vol/config/prometheus.yml",
"--storage.tsdb.path=/prom-vol/data",
"--web.listen-address=0.0.0.0:9090",
},
Ports: []string{"9090:9090"},
Networks: []string{composeNetworkName},
Volumes: []string{"prometheus:/prom-vol"},
DependsOn: map[string]ComposeDependsOn{
"prometheus-init": {Condition: "service_completed_successfully"},
"coderd-0": {Condition: "service_healthy"},
},
Labels: composeServiceLabels("prometheus"),
Healthcheck: &ComposeHealthcheck{
Test: []string{"CMD-SHELL", "curl -sf http://localhost:9090/-/ready || exit 1"},
Interval: "2s",
Timeout: "5s",
Retries: 15,
},
}
return cf
}
@@ -0,0 +1,362 @@
package catalog
import (
"context"
"fmt"
"os"
"os/exec"
"path/filepath"
"sync"
"sync/atomic"
"time"
"github.com/ory/dockertest/v3/docker"
"golang.org/x/xerrors"
"gopkg.in/yaml.v3"
"cdr.dev/slog/v3"
)
// waitForHealthy polls Docker's container health status until it
// reports "healthy" or the timeout expires. The container must
// have a Healthcheck configured in its docker.Config.
func waitForHealthy(ctx context.Context, logger slog.Logger, client *docker.Client, containerName string, timeout time.Duration) error {
ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop()
deadline := time.After(timeout)
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-deadline:
return xerrors.Errorf("timeout waiting for %s to be healthy", containerName)
case <-ticker.C:
ctr, err := client.InspectContainer(containerName)
if err != nil {
continue
}
if ctr.State.Health.Status == "healthy" {
logger.Info(ctx, "container is healthy", slog.F("container", containerName))
return nil
}
}
}
}
var _ Service[*docker.Client] = (*Docker)(nil)
func OnDocker() ServiceName {
return (&Docker{}).Name()
}
// VolumeOptions configures a Docker volume to be lazily created.
type VolumeOptions struct {
Name string
Labels map[string]string
UID int // Chown is skipped when both UID and GID are 0.
GID int
}
// CDevNetworkName is the Docker bridge network used by all cdev
// containers.
const CDevNetworkName = "cdev"
type volumeOnce struct {
once sync.Once
vol *docker.Volume
err error
}
type Docker struct {
currentStep atomic.Pointer[string]
client *docker.Client
volumes map[string]*volumeOnce
volumesMu sync.Mutex
networkID string
networkOnce sync.Once
networkErr error
composeMu sync.Mutex
// compose holds registered compose services keyed by name.
compose map[string]ComposeService
// composeVolumes holds registered compose volumes keyed by name.
composeVolumes map[string]ComposeVolume
}
func NewDocker() *Docker {
return &Docker{
volumes: make(map[string]*volumeOnce),
compose: make(map[string]ComposeService),
composeVolumes: make(map[string]ComposeVolume),
}
}
func (*Docker) Name() ServiceName {
return CDevDocker
}
func (*Docker) Emoji() string {
return "🐳"
}
func (*Docker) DependsOn() []ServiceName {
return []ServiceName{}
}
func (d *Docker) CurrentStep() string {
if s := d.currentStep.Load(); s != nil {
return *s
}
return ""
}
func (d *Docker) setStep(step string) {
d.currentStep.Store(&step)
}
func (d *Docker) Start(_ context.Context, _ slog.Logger, _ *Catalog) error {
d.setStep("Connecting to Docker daemon")
client, err := docker.NewClientFromEnv()
if err != nil {
return xerrors.Errorf("connect to docker: %w", err)
}
d.client = client
d.setStep("")
return nil
}
func (*Docker) Stop(_ context.Context) error {
return nil
}
func (d *Docker) Result() *docker.Client {
return d.client
}
// EnsureVolume lazily creates a named Docker volume, returning it
// on all subsequent calls without repeating the creation work.
func (d *Docker) EnsureVolume(ctx context.Context, opts VolumeOptions) (*docker.Volume, error) {
d.volumesMu.Lock()
vo, ok := d.volumes[opts.Name]
if !ok {
vo = &volumeOnce{}
d.volumes[opts.Name] = vo
}
d.volumesMu.Unlock()
vo.once.Do(func() {
vo.vol, vo.err = d.createVolumeIfNeeded(ctx, opts)
})
return vo.vol, vo.err
}
// EnsureNetwork lazily creates the cdev Docker bridge network,
// returning its ID on all subsequent calls without repeating the
// creation work.
func (d *Docker) EnsureNetwork(_ context.Context, labels map[string]string) (string, error) {
d.networkOnce.Do(func() {
d.networkID, d.networkErr = d.createNetworkIfNeeded(labels)
})
return d.networkID, d.networkErr
}
func (d *Docker) createNetworkIfNeeded(labels map[string]string) (string, error) {
networks, err := d.client.FilteredListNetworks(docker.NetworkFilterOpts{
"name": map[string]bool{CDevNetworkName: true},
})
if err != nil {
return "", xerrors.Errorf("failed to list networks: %w", err)
}
// FilteredListNetworks does substring matching, so check for
// an exact name match before deciding to create.
for _, n := range networks {
if n.Name == CDevNetworkName {
return n.ID, nil
}
}
net, err := d.client.CreateNetwork(docker.CreateNetworkOptions{
Name: CDevNetworkName,
Driver: "bridge",
Labels: labels,
})
if err != nil {
return "", xerrors.Errorf("failed to create network %s: %w", CDevNetworkName, err)
}
return net.ID, nil
}
func (d *Docker) createVolumeIfNeeded(ctx context.Context, opts VolumeOptions) (*docker.Volume, error) {
vol, err := d.client.InspectVolume(opts.Name)
if err != nil {
vol, err = d.client.CreateVolume(docker.CreateVolumeOptions{
Name: opts.Name,
Labels: opts.Labels,
})
if err != nil {
return nil, xerrors.Errorf("failed to create volume %s: %w", opts.Name, err)
}
if opts.UID != 0 || opts.GID != 0 {
if err := d.chownVolume(ctx, opts); err != nil {
return nil, xerrors.Errorf("failed to chown volume %s: %w", opts.Name, err)
}
}
}
return vol, nil
}
func (d *Docker) chownVolume(ctx context.Context, opts VolumeOptions) error {
initCmd := fmt.Sprintf("chown -R %d:%d /mnt/volume", opts.UID, opts.GID)
container, err := d.client.CreateContainer(docker.CreateContainerOptions{
Config: &docker.Config{
Image: dogfoodImage + ":" + dogfoodTag,
User: "0:0",
Cmd: []string{"sh", "-c", initCmd},
Labels: map[string]string{
CDevLabel: "true",
CDevLabelEphemeral: "true",
},
},
HostConfig: &docker.HostConfig{
AutoRemove: true,
Binds: []string{fmt.Sprintf("%s:/mnt/volume", opts.Name)},
},
})
if err != nil {
return xerrors.Errorf("failed to create init container: %w", err)
}
if err := d.client.StartContainer(container.ID, nil); err != nil {
return xerrors.Errorf("failed to start init container: %w", err)
}
exitCode, err := d.client.WaitContainerWithContext(container.ID, ctx)
if err != nil {
return xerrors.Errorf("failed waiting for init: %w", err)
}
if exitCode != 0 {
return xerrors.Errorf("init volumes failed with exit code %d", exitCode)
}
return nil
}
// SetCompose registers a compose service definition.
func (d *Docker) SetCompose(name string, svc ComposeService) {
d.composeMu.Lock()
defer d.composeMu.Unlock()
d.compose[name] = svc
}
// SetComposeVolume registers a compose volume definition.
func (d *Docker) SetComposeVolume(name string, vol ComposeVolume) {
d.composeMu.Lock()
defer d.composeMu.Unlock()
d.composeVolumes[name] = vol
}
// composeFilePath returns the path to the compose file.
func composeFilePath() string {
return filepath.Join(".cdev", "docker-compose.yml")
}
// WriteCompose writes the current compose state to
// .cdev/docker-compose.yml.
func (d *Docker) WriteCompose(_ context.Context) error {
d.composeMu.Lock()
defer d.composeMu.Unlock()
// Strip depends_on entries referencing services not yet
// registered — the catalog DAG handles ordering, and
// partial compose files may not contain all services.
services := make(map[string]ComposeService, len(d.compose))
for name, svc := range d.compose {
if len(svc.DependsOn) > 0 {
filtered := make(map[string]ComposeDependsOn, len(svc.DependsOn))
for dep, cond := range svc.DependsOn {
if _, ok := d.compose[dep]; ok {
filtered[dep] = cond
}
}
svc.DependsOn = filtered
}
services[name] = svc
}
cf := &ComposeFile{
Services: services,
Volumes: d.composeVolumes,
Networks: map[string]ComposeNetwork{
composeNetworkName: {Driver: "bridge"},
},
}
data, err := yaml.Marshal(cf)
if err != nil {
return xerrors.Errorf("marshal compose file: %w", err)
}
if err := os.MkdirAll(".cdev", 0o755); err != nil {
return xerrors.Errorf("create .cdev directory: %w", err)
}
// Atomic write: temp file + rename to avoid readers
// seeing a truncated file.
tmp := composeFilePath() + ".tmp"
if err := os.WriteFile(tmp, data, 0o644); err != nil {
return xerrors.Errorf("write compose temp file: %w", err)
}
if err := os.Rename(tmp, composeFilePath()); err != nil {
return xerrors.Errorf("rename compose file: %w", err)
}
return nil
}
// DockerComposeUp runs `docker compose up -d` for the given services.
func (d *Docker) DockerComposeUp(ctx context.Context, services ...string) error {
if err := d.WriteCompose(ctx); err != nil {
return err
}
args := []string{"compose", "-f", composeFilePath(), "up", "-d"}
args = append(args, services...)
//nolint:gosec // Arguments are controlled.
cmd := exec.CommandContext(ctx, "docker", args...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return xerrors.Errorf("docker compose up: %w", err)
}
return nil
}
// DockerComposeRun runs `docker compose run --rm` for a blocking
// one-shot service.
func (d *Docker) DockerComposeRun(ctx context.Context, service string) error {
if err := d.WriteCompose(ctx); err != nil {
return err
}
args := []string{
"compose", "-f", composeFilePath(),
"run", "--rm", service,
}
//nolint:gosec // Arguments are controlled.
cmd := exec.CommandContext(ctx, "docker", args...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return xerrors.Errorf("docker compose run %s: %w", service, err)
}
return nil
}
// DockerComposeStop runs `docker compose stop` for the given services.
func (d *Docker) DockerComposeStop(ctx context.Context, services ...string) error {
args := []string{"compose", "-f", composeFilePath(), "stop"}
args = append(args, services...)
//nolint:gosec // Arguments are controlled.
cmd := exec.CommandContext(ctx, "docker", args...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return xerrors.Errorf("docker compose stop: %w", err)
}
return nil
}
@@ -0,0 +1,55 @@
package catalog
import "fmt"
const (
CDevLabel = "cdev"
CDevService = "cdev/service"
)
type ServiceName string
const (
CDevDocker ServiceName = "docker"
CDevBuildSlim ServiceName = "build-slim"
CDevPostgres ServiceName = "postgres"
CDevCoderd ServiceName = "coderd"
CDevOIDC ServiceName = "oidc"
CDevProvisioner ServiceName = "provisioner"
CDevPrometheus ServiceName = "prometheus"
CDevSetup ServiceName = "setup"
CDevSite ServiceName = "site"
CDevLoadBalancer ServiceName = "load-balancer"
)
type Labels map[string]string
func NewServiceLabels(service ServiceName) Labels {
return NewLabels().WithService(service)
}
func NewLabels() Labels {
return map[string]string{
CDevLabel: "true",
}
}
func (l Labels) WithService(service ServiceName) Labels {
return l.With(CDevService, string(service))
}
func (l Labels) With(key, value string) Labels {
l[key] = value
return l
}
func (l Labels) Filter() map[string][]string {
list := make([]string, 0)
for k, v := range l {
list = append(list, fmt.Sprintf("%s=%s", k, v))
}
return map[string][]string{
"label": list,
}
}
@@ -0,0 +1,105 @@
package catalog
import (
"context"
"os"
"time"
"github.com/golang-jwt/jwt/v4"
"github.com/google/uuid"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbtime"
_ "github.com/lib/pq" // Imported for postgres driver side effects.
)
// RequireLicense panics if CODER_LICENSE is not set. Call this
// during the configuration phase for features that require a
// license (external provisioners, HA).
func RequireLicense(feature string) {
if os.Getenv("CODER_LICENSE") == "" {
panic("CODER_LICENSE must be set when using " + feature)
}
}
// EnsureLicense checks if the license JWT from CODER_LICENSE is
// already in the database, and inserts it if not. The JWT is parsed
// without verification to extract the exp and uuid claims — this is
// acceptable since cdev is a development tool.
func EnsureLicense(ctx context.Context, logger slog.Logger, cat *Catalog) error {
licenseJWT := os.Getenv("CODER_LICENSE")
if licenseJWT == "" {
return nil
}
pg, ok := cat.MustGet(OnPostgres()).(*Postgres)
if !ok {
return xerrors.New("unexpected type for Postgres service")
}
// Wait for coderd to finish running migrations before
// attempting to read or write the licenses table.
beforeMig := time.Now()
err := pg.waitForMigrations(ctx, logger)
if err != nil {
return xerrors.Errorf("wait for postgres migrations: %w", err)
}
logger.Info(ctx, "waited for postgres migrations", slog.F("duration", time.Since(beforeMig)))
db, err := pg.sqlDB()
if err != nil {
return xerrors.Errorf("connect to database: %w", err)
}
defer db.Close()
store := database.New(db)
// Check if this exact JWT is already in the database.
licenses, err := store.GetLicenses(ctx)
if err != nil {
return xerrors.Errorf("get licenses: %w", err)
}
for _, lic := range licenses {
if lic.JWT == licenseJWT {
logger.Info(ctx, "license already present in database")
return nil
}
}
// Parse JWT claims without verification to extract exp and uuid.
parser := jwt.NewParser()
claims := &jwt.RegisteredClaims{}
_, _, err = parser.ParseUnverified(licenseJWT, claims)
if err != nil {
return xerrors.Errorf("parse license JWT: %w", err)
}
if claims.ExpiresAt == nil {
return xerrors.New("license JWT missing exp claim")
}
// UUID comes from the standard "jti" claim (claims.ID).
// Fallback to random UUID for older licenses without one.
licenseUUID, err := uuid.Parse(claims.ID)
if err != nil {
licenseUUID = uuid.New()
}
_, err = store.InsertLicense(ctx, database.InsertLicenseParams{
UploadedAt: dbtime.Now(),
JWT: licenseJWT,
Exp: claims.ExpiresAt.Time,
UUID: licenseUUID,
})
if err != nil {
return xerrors.Errorf("insert license: %w", err)
}
logger.Info(ctx, "license inserted into database",
slog.F("license_id", licenseUUID),
slog.F("expires", claims.ExpiresAt.Time),
)
return nil
}
@@ -0,0 +1,335 @@
package catalog
import (
"bytes"
"context"
"fmt"
"os"
"path/filepath"
"sync/atomic"
"text/template"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
)
const (
nginxImage = "nginx"
nginxTag = "alpine"
oidcPort = 4500
prometheusUIPort2 = 9090
)
// LoadBalancerResult contains connection info for the running load
// balancer.
type LoadBalancerResult struct {
// CoderdURL is the load-balanced coderd URL.
CoderdURL string
}
var _ Service[LoadBalancerResult] = (*LoadBalancer)(nil)
// OnLoadBalancer returns the service name for the load balancer.
func OnLoadBalancer() ServiceName {
return (&LoadBalancer{}).Name()
}
// LoadBalancer runs an nginx container that fronts all cdev services
// with separate listeners per service on sequential ports.
type LoadBalancer struct {
currentStep atomic.Pointer[string]
tmpDir string
result LoadBalancerResult
}
// NewLoadBalancer creates a new LoadBalancer service.
func NewLoadBalancer() *LoadBalancer {
return &LoadBalancer{}
}
func (lb *LoadBalancer) CurrentStep() string {
if s := lb.currentStep.Load(); s != nil {
return *s
}
return ""
}
func (lb *LoadBalancer) URL() string {
return lb.result.CoderdURL
}
func (lb *LoadBalancer) setStep(step string) {
lb.currentStep.Store(&step)
}
func (*LoadBalancer) Name() ServiceName {
return CDevLoadBalancer
}
func (*LoadBalancer) Emoji() string {
return "⚖️"
}
func (*LoadBalancer) DependsOn() []ServiceName {
return []ServiceName{OnDocker()}
}
func (lb *LoadBalancer) Start(ctx context.Context, logger slog.Logger, cat *Catalog) error {
defer lb.setStep("")
dkr, ok := cat.MustGet(OnDocker()).(*Docker)
if !ok {
return xerrors.New("unexpected type for Docker service")
}
coderd, ok := cat.MustGet(OnCoderd()).(*Coderd)
if !ok {
return xerrors.New("unexpected type for Coderd service")
}
haCount := int(coderd.HACount())
if haCount < 1 {
haCount = 1
}
lb.setStep("generating nginx config")
// Write nginx config under the current working directory so
// Docker Desktop can access it (macOS /var/folders temp dirs
// are not shared with the Docker VM by default).
cwd, err := os.Getwd()
if err != nil {
return xerrors.Errorf("get working directory: %w", err)
}
tmpDir, err := os.MkdirTemp(cwd, ".cdev-lb-*")
if err != nil {
return xerrors.Errorf("create temp dir: %w", err)
}
lb.tmpDir = tmpDir
nginxConf := generateNginxConfig(haCount)
if err := os.WriteFile(filepath.Join(tmpDir, "nginx.conf"), []byte(nginxConf), 0o644); err != nil { //nolint:gosec // G306: nginx.conf must be readable by the container.
return xerrors.Errorf("write nginx.conf: %w", err)
}
// Build port mappings for the compose service.
var ports []string
addPort := func(port int) {
ports = append(ports, fmt.Sprintf("%d:%d", port, port))
}
// Load-balanced coderd.
addPort(coderdBasePort)
// Individual coderd instances (3001..3000+N).
for i := range haCount {
addPort(coderdBasePort + 1 + i)
}
// pprof per instance.
for i := range haCount {
addPort(pprofBasePort + i)
}
// Metrics per instance.
for i := range haCount {
addPort(prometheusBasePort + i)
}
// OIDC.
addPort(oidcPort)
// Prometheus UI.
addPort(prometheusUIPort2)
// Site dev server.
addPort(sitePort)
lb.setStep("starting nginx container")
logger.Info(ctx, "starting load balancer container", slog.F("ha_count", haCount))
dkr.SetCompose("load-balancer", ComposeService{
Image: nginxImage + ":" + nginxTag,
Volumes: []string{filepath.Join(tmpDir, "nginx.conf") + ":/etc/nginx/nginx.conf:ro"},
Ports: ports,
Networks: []string{composeNetworkName},
Labels: composeServiceLabels("load-balancer"),
})
if err := dkr.DockerComposeUp(ctx, "load-balancer"); err != nil {
return xerrors.Errorf("start load balancer container: %w", err)
}
lb.result = LoadBalancerResult{
CoderdURL: fmt.Sprintf("http://localhost:%d", coderdBasePort),
}
logger.Info(ctx, "load balancer is ready",
slog.F("coderd_url", lb.result.CoderdURL),
)
return nil
}
func (lb *LoadBalancer) Stop(_ context.Context) error {
if lb.tmpDir != "" {
_ = os.RemoveAll(lb.tmpDir)
lb.tmpDir = ""
}
return nil
}
func (lb *LoadBalancer) Result() LoadBalancerResult {
return lb.result
}
// nginxConfigData holds the data for rendering the nginx config
// template.
type nginxConfigData struct {
HACount int
CoderdBasePort int
PprofBasePort int
MetricsBasePort int
Instances []int
}
//nolint:lll // Template content is inherently wide.
var nginxConfigTmpl = template.Must(template.New("nginx.conf").Funcs(template.FuncMap{
"add": func(a, b int) int { return a + b },
"pct": func(i, total int) string {
if total <= 0 || i == total-1 {
return "*"
}
return fmt.Sprintf("%.1f%%", float64(i+1)/float64(total)*100)
},
}).Parse(`events {
worker_connections 1024;
}
http {
# Use Docker's embedded DNS so nginx resolves container
# hostnames at request time rather than at startup. This
# lets the load balancer start before its backends exist.
resolver 127.0.0.11 valid=5s;
# Map upgrade header to connection type for conditional websocket support.
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# Distribute requests across coderd instances by request ID.
split_clients $request_id $coderd_backend {
{{- range $i, $idx := .Instances }}
{{ pct $i $.HACount }} coderd-{{ $idx }}:3000;
{{- end }}
}
# Load-balanced coderd.
server {
listen 3000;
location / {
proxy_pass http://$coderd_backend;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
{{ range .Instances }}
# coderd-{{ . }} direct access.
server {
listen {{ add $.CoderdBasePort (add . 1) }};
location / {
set $coderd_{{ . }} http://coderd-{{ . }}:3000;
proxy_pass $coderd_{{ . }};
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
{{ end }}
{{- range .Instances }}
# pprof coderd-{{ . }}.
server {
listen {{ add $.PprofBasePort . }};
location / {
set $pprof_{{ . }} http://coderd-{{ . }}:6060;
proxy_pass $pprof_{{ . }};
}
}
{{ end }}
{{- range .Instances }}
# metrics coderd-{{ . }}.
server {
listen {{ add $.MetricsBasePort . }};
location / {
set $metrics_{{ . }} http://coderd-{{ . }}:2112;
proxy_pass $metrics_{{ . }};
}
}
{{ end }}
# OIDC.
server {
listen 4500;
location / {
set $oidc http://oidc:4500;
proxy_pass $oidc;
}
}
# Prometheus UI.
server {
listen 9090;
location / {
set $prometheus http://prometheus:9090;
proxy_pass $prometheus;
}
}
# Site dev server.
server {
listen 8080;
location / {
set $site http://site:8080;
proxy_pass $site;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
}
`))
// generateNginxConfig builds the nginx.conf for load balancing all
// cdev services.
func generateNginxConfig(haCount int) string {
instances := make([]int, haCount)
for i := range haCount {
instances[i] = i
}
var buf bytes.Buffer
err := nginxConfigTmpl.Execute(&buf, nginxConfigData{
HACount: haCount,
CoderdBasePort: coderdBasePort,
PprofBasePort: pprofBasePort,
MetricsBasePort: prometheusBasePort,
Instances: instances,
})
if err != nil {
panic(fmt.Sprintf("nginx config template: %v", err))
}
return buf.String()
}
@@ -0,0 +1,30 @@
package catalog
import (
"bufio"
"context"
"io"
"time"
"cdr.dev/slog/v3"
)
// LogWriter returns an io.WriteCloser that logs each line written
// to it at the given level. The caller must close the returned
// writer when done to terminate the internal goroutine.
func LogWriter(logger slog.Logger, level slog.Level, containerName string) io.WriteCloser {
pr, pw := io.Pipe()
go func() {
scanner := bufio.NewScanner(pr)
for scanner.Scan() {
logger.Log(context.Background(), slog.SinkEntry{
Time: time.Now(),
Level: level,
Message: scanner.Text(),
Fields: slog.M(slog.F("container", containerName)),
})
}
_ = pr.Close()
}()
return pw
}
@@ -0,0 +1,221 @@
package catalog
import (
"context"
"net/http"
"os"
"os/exec"
"sync/atomic"
"time"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
)
const (
testidpImage = "cdev-testidp"
testidpTag = "latest"
testidpPort = "4500/tcp"
testidpHostPort = "4500"
testidpClientID = "static-client-id"
testidpClientSec = "static-client-secret"
testidpIssuerURL = "http://localhost:4500"
)
// OIDCResult contains the connection info for the running OIDC IDP.
type OIDCResult struct {
// IssuerURL is the OIDC issuer URL.
IssuerURL string
// ClientID is the OIDC client ID.
ClientID string
// ClientSecret is the OIDC client secret.
ClientSecret string
// Port is the host port mapped to the container's 4500.
Port string
}
var _ Service[OIDCResult] = (*OIDC)(nil)
func OnOIDC() ServiceName {
return (&OIDC{}).Name()
}
// OIDC runs a fake OIDC identity provider via docker compose.
type OIDC struct {
currentStep atomic.Pointer[string]
result OIDCResult
dkr *Docker
}
func (o *OIDC) CurrentStep() string {
if s := o.currentStep.Load(); s != nil {
return *s
}
return ""
}
func (o *OIDC) URL() string {
return o.result.IssuerURL
}
func (o *OIDC) setStep(step string) {
o.currentStep.Store(&step)
}
func NewOIDC() *OIDC {
return &OIDC{}
}
func (*OIDC) Name() ServiceName {
return CDevOIDC
}
func (*OIDC) Emoji() string {
return "🔒"
}
func (*OIDC) DependsOn() []ServiceName {
return []ServiceName{
OnDocker(),
}
}
func (o *OIDC) Start(ctx context.Context, logger slog.Logger, c *Catalog) error {
defer o.setStep("")
d, ok := c.MustGet(OnDocker()).(*Docker)
if !ok {
return xerrors.New("unexpected type for Docker service")
}
o.dkr = d
o.setStep("building testidp docker image (this can take awhile)")
// Build the testidp image from the Dockerfile.
if err := o.buildImage(ctx, logger); err != nil {
return xerrors.Errorf("build testidp image: %w", err)
}
o.setStep("Registering OIDC compose service")
logger.Info(ctx, "registering oidc compose service")
d.SetCompose("oidc", ComposeService{
Image: testidpImage + ":" + testidpTag,
Command: []string{
"-client-id", testidpClientID,
"-client-sec", testidpClientSec,
"-issuer", testidpIssuerURL,
},
Networks: []string{composeNetworkName},
Labels: composeServiceLabels("oidc"),
Healthcheck: &ComposeHealthcheck{
Test: []string{"CMD-SHELL", "curl -sf http://oidc:4500/.well-known/openid-configuration || exit 1"},
Interval: "2s",
Timeout: "5s",
Retries: 15,
},
})
o.setStep("Starting OIDC via compose")
if err := d.DockerComposeUp(ctx, "oidc"); err != nil {
return xerrors.Errorf("docker compose up oidc: %w", err)
}
o.result = OIDCResult{
IssuerURL: testidpIssuerURL,
ClientID: testidpClientID,
ClientSecret: testidpClientSec,
Port: testidpHostPort,
}
return o.waitForReady(ctx, logger)
}
func (*OIDC) buildImage(ctx context.Context, logger slog.Logger) error {
// Check if image already exists.
//nolint:gosec // Arguments are controlled.
checkCmd := exec.CommandContext(ctx, "docker", "image", "inspect", testidpImage+":"+testidpTag)
if err := checkCmd.Run(); err == nil {
logger.Info(ctx, "testidp image already exists, skipping build")
return nil
}
cwd, err := os.Getwd()
if err != nil {
return xerrors.Errorf("get working directory: %w", err)
}
logger.Info(ctx, "building testidp image")
labels := NewServiceLabels(CDevOIDC)
// Use docker CLI directly because go-dockerclient doesn't handle BuildKit
// output properly (Docker 23+ uses BuildKit by default).
args := []string{
"build",
"-f", "scripts/testidp/Dockerfile.testidp",
"-t", testidpImage + ":" + testidpTag,
}
for k, v := range labels {
args = append(args, "--label", k+"="+v)
}
args = append(args, cwd)
//nolint:gosec // Arguments are controlled, not arbitrary user input.
cmd := exec.CommandContext(ctx, "docker", args...)
stdoutLog := LogWriter(logger, slog.LevelInfo, "testidp-build")
stderrLog := LogWriter(logger, slog.LevelWarn, "testidp-build")
defer stdoutLog.Close()
defer stderrLog.Close()
cmd.Stdout = stdoutLog
cmd.Stderr = stderrLog
return cmd.Run()
}
func (o *OIDC) waitForReady(ctx context.Context, logger slog.Logger) error {
ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(60 * time.Second)
client := &http.Client{Timeout: 2 * time.Second}
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-timeout:
return xerrors.New("timeout waiting for oidc to be ready")
case <-ticker.C:
// Check the well-known endpoint.
wellKnownURL := o.result.IssuerURL + "/.well-known/openid-configuration"
req, err := http.NewRequestWithContext(ctx, http.MethodGet, wellKnownURL, nil)
if err != nil {
continue
}
resp, err := client.Do(req)
if err != nil {
continue
}
_ = resp.Body.Close()
if resp.StatusCode == http.StatusOK {
logger.Info(ctx, "oidc provider is ready and accepting connections",
slog.F("issuer_url", o.result.IssuerURL),
slog.F("client_id", o.result.ClientID),
)
return nil
}
}
}
}
func (o *OIDC) Stop(ctx context.Context) error {
if o.dkr == nil {
return nil
}
return o.dkr.DockerComposeStop(ctx, "oidc")
}
func (o *OIDC) Result() OIDCResult {
return o.result
}
@@ -0,0 +1,200 @@
package catalog
import (
"context"
"database/sql"
"fmt"
"sync/atomic"
"time"
"golang.org/x/xerrors"
_ "github.com/lib/pq" // Imported for postgres driver side effects.
"cdr.dev/slog/v3"
)
const (
postgresImage = "postgres"
postgresTag = "17"
postgresUser = "coder"
postgresPassword = "coder"
postgresDB = "coder"
postgresPort = "5432/tcp"
)
// PostgresResult contains the connection info for the running Postgres instance.
type PostgresResult struct {
// URL is the connection string for the database.
URL string
// Port is the host port mapped to the container's 5432.
Port string
}
var _ Service[PostgresResult] = (*Postgres)(nil)
func OnPostgres() ServiceName {
return (&Postgres{}).Name()
}
// Postgres runs a PostgreSQL database via docker compose.
type Postgres struct {
currentStep atomic.Pointer[string]
result PostgresResult
}
func (p *Postgres) CurrentStep() string {
if s := p.currentStep.Load(); s != nil {
return *s
}
return ""
}
func (p *Postgres) setStep(step string) {
p.currentStep.Store(&step)
}
func NewPostgres() *Postgres {
return &Postgres{}
}
func (*Postgres) Name() ServiceName {
return CDevPostgres
}
func (*Postgres) Emoji() string {
return "🐘"
}
func (*Postgres) DependsOn() []ServiceName {
return []ServiceName{
OnDocker(),
}
}
func (p *Postgres) Start(ctx context.Context, logger slog.Logger, c *Catalog) error {
defer p.setStep("")
d, ok := c.MustGet(OnDocker()).(*Docker)
if !ok {
return xerrors.New("unexpected type for Docker service")
}
p.setStep("Registering database compose service")
logger.Info(ctx, "registering postgres compose service")
d.SetComposeVolume("coder_dev_data", ComposeVolume{})
d.SetCompose("database", ComposeService{
Image: postgresImage + ":" + postgresTag,
Environment: map[string]string{
"POSTGRES_USER": postgresUser,
"POSTGRES_PASSWORD": postgresPassword,
"POSTGRES_DB": postgresDB,
},
Volumes: []string{"coder_dev_data:/var/lib/postgresql/data"},
Ports: []string{"5432:5432"},
Networks: []string{composeNetworkName},
Labels: composeServiceLabels("database"),
Healthcheck: &ComposeHealthcheck{
Test: []string{"CMD-SHELL", "pg_isready -U coder"},
Interval: "2s",
Timeout: "5s",
Retries: 10,
},
})
p.setStep("Starting PostgreSQL via compose")
if err := d.DockerComposeUp(ctx, "database"); err != nil {
return xerrors.Errorf("docker compose up database: %w", err)
}
// Fixed port mapping via compose.
p.result = PostgresResult{
URL: fmt.Sprintf("postgres://%s:%s@localhost:5432/%s?sslmode=disable", postgresUser, postgresPassword, postgresDB),
Port: "5432",
}
p.setStep("Waiting for PostgreSQL to be ready")
return p.waitForReady(ctx, logger)
}
func (p *Postgres) sqlDB() (*sql.DB, error) {
db, err := sql.Open("postgres", p.result.URL)
if err != nil {
return nil, xerrors.Errorf("open database: %w", err)
}
return db, nil
}
// waitForMigrations polls the schema_migrations table until
// migrations are complete (version != 0 and dirty = false).
// This is necessary because EnsureLicense may run concurrently
// with coderd's startup, which performs migrations.
func (p *Postgres) waitForMigrations(ctx context.Context, logger slog.Logger) error {
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
timeout := time.After(5 * time.Minute)
db, err := sql.Open("postgres", p.result.URL)
if err != nil {
return xerrors.Errorf("open database: %w", err)
}
defer db.Close()
for {
var version int64
var dirty bool
err := db.QueryRowContext(ctx,
"SELECT version, dirty FROM schema_migrations LIMIT 1",
).Scan(&version, &dirty)
if err == nil && version != 0 && !dirty {
logger.Info(ctx, "migrations complete",
slog.F("version", version),
)
return nil
}
select {
case <-ctx.Done():
return ctx.Err()
case <-timeout:
return xerrors.New("timed out waiting for migrations")
case <-ticker.C:
}
}
}
func (p *Postgres) waitForReady(ctx context.Context, logger slog.Logger) error {
ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(60 * time.Second)
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-timeout:
return xerrors.New("timeout waiting for postgres to be ready")
case <-ticker.C:
db, err := sql.Open("postgres", p.result.URL)
if err != nil {
continue
}
err = db.PingContext(ctx)
_ = db.Close()
if err == nil {
logger.Info(ctx, "postgres is ready", slog.F("url", p.result.URL))
return nil
}
}
}
}
func (*Postgres) Stop(_ context.Context) error {
// Don't stop the container - it persists across runs.
// Use "cdev down" to fully clean up.
return nil
}
func (p *Postgres) Result() PostgresResult {
return p.result
}
@@ -0,0 +1,223 @@
package catalog
import (
"context"
"fmt"
"net/http"
"strings"
"sync/atomic"
"time"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/serpent"
)
const (
prometheusImage = "prom/prometheus"
prometheusTag = "latest"
prometheusUIPort = 9090
)
// PrometheusResult contains connection info for the running
// Prometheus instance.
type PrometheusResult struct {
// URL is the base URL for the Prometheus UI.
URL string
}
var _ Service[PrometheusResult] = (*Prometheus)(nil)
var _ ConfigurableService = (*Prometheus)(nil)
// OnPrometheus returns the service name for the Prometheus service.
func OnPrometheus() ServiceName {
return (&Prometheus{}).Name()
}
// Prometheus runs a Prometheus container that scrapes coderd metrics
// via docker compose.
type Prometheus struct {
currentStep atomic.Pointer[string]
enabled bool
result PrometheusResult
}
func (p *Prometheus) CurrentStep() string {
if s := p.currentStep.Load(); s != nil {
return *s
}
return ""
}
func (p *Prometheus) URL() string {
return p.result.URL
}
func (p *Prometheus) setStep(step string) {
p.currentStep.Store(&step)
}
// NewPrometheus creates a new Prometheus service.
func NewPrometheus() *Prometheus {
return &Prometheus{}
}
// Enabled returns whether the Prometheus service is enabled.
func (p *Prometheus) Enabled() bool { return p.enabled }
func (*Prometheus) Name() ServiceName {
return CDevPrometheus
}
func (*Prometheus) Emoji() string {
return "📊"
}
func (*Prometheus) DependsOn() []ServiceName {
return []ServiceName{OnDocker(), OnCoderd()}
}
func (p *Prometheus) Options() serpent.OptionSet {
return serpent.OptionSet{{
Name: "Prometheus",
Description: "Enable Prometheus metrics collection.",
Flag: "prometheus",
Env: "CDEV_PROMETHEUS",
Default: "false",
Value: serpent.BoolOf(&p.enabled),
}}
}
// generateConfig builds a prometheus.yml scrape config targeting
// each coderd HA instance's metrics endpoint.
func generateConfig(haCount int) string {
var targets []string
for i := range haCount {
targets = append(targets, fmt.Sprintf("\"coderd-%d:2112\"", i))
}
return fmt.Sprintf(`global:
scrape_interval: 15s
scrape_configs:
- job_name: "coder"
static_configs:
- targets: [%s]
`, strings.Join(targets, ", "))
}
func (p *Prometheus) Start(ctx context.Context, logger slog.Logger, cat *Catalog) error {
defer p.setStep("")
dkr, ok := cat.MustGet(OnDocker()).(*Docker)
if !ok {
return xerrors.New("unexpected type for Docker service")
}
coderd, ok := cat.MustGet(OnCoderd()).(*Coderd)
if !ok {
return xerrors.New("unexpected type for Coderd service")
}
// Generate the scrape config based on HA count.
haCount := int(coderd.HACount())
if haCount < 1 {
haCount = 1
}
configYAML := generateConfig(haCount)
dkr.SetComposeVolume("prometheus", ComposeVolume{})
// Register prometheus-init (one-shot config writer).
configScript := fmt.Sprintf(
"mkdir -p /prom-vol/config /prom-vol/data && printf '%%s' '%s' > /prom-vol/config/prometheus.yml",
strings.ReplaceAll(configYAML, "'", "'\"'\"'"),
)
dkr.SetCompose("prometheus-init", ComposeService{
Image: prometheusImage + ":" + prometheusTag,
Entrypoint: []string{"sh", "-c"},
Command: configScript,
Volumes: []string{"prometheus:/prom-vol"},
Labels: composeServiceLabels("prometheus-init"),
})
dkr.SetCompose("prometheus", ComposeService{
Image: prometheusImage + ":" + prometheusTag,
Command: []string{
"--config.file=/prom-vol/config/prometheus.yml",
"--storage.tsdb.path=/prom-vol/data",
fmt.Sprintf("--web.listen-address=0.0.0.0:%d", prometheusUIPort),
},
Ports: []string{fmt.Sprintf("%d:%d", prometheusUIPort, prometheusUIPort)},
Networks: []string{composeNetworkName},
Volumes: []string{"prometheus:/prom-vol"},
DependsOn: map[string]ComposeDependsOn{
"prometheus-init": {Condition: "service_completed_successfully"},
"coderd-0": {Condition: "service_healthy"},
},
Labels: composeServiceLabels("prometheus"),
Healthcheck: &ComposeHealthcheck{
Test: []string{"CMD-SHELL", fmt.Sprintf("curl -sf http://localhost:%d/-/ready || exit 1", prometheusUIPort)},
Interval: "2s",
Timeout: "5s",
Retries: 15,
},
})
p.setStep("Starting Prometheus via compose")
logger.Info(ctx, "starting prometheus via compose")
if err := dkr.DockerComposeUp(ctx, "prometheus-init", "prometheus"); err != nil {
return xerrors.Errorf("docker compose up prometheus: %w", err)
}
p.result = PrometheusResult{
URL: fmt.Sprintf("http://localhost:%d", prometheusUIPort),
}
return p.waitForReady(ctx, logger)
}
func (p *Prometheus) waitForReady(ctx context.Context, logger slog.Logger) error {
ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(60 * time.Second)
client := &http.Client{Timeout: 2 * time.Second}
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-timeout:
return xerrors.New("timeout waiting for prometheus to be ready")
case <-ticker.C:
readyURL := fmt.Sprintf("http://localhost:%d/-/ready", prometheusUIPort)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, readyURL, nil)
if err != nil {
continue
}
resp, err := client.Do(req)
if err != nil {
continue
}
_ = resp.Body.Close()
if resp.StatusCode == http.StatusOK {
logger.Info(ctx, "prometheus is ready",
slog.F("url", p.result.URL),
)
return nil
}
}
}
}
func (*Prometheus) Stop(_ context.Context) error {
return nil
}
func (p *Prometheus) Result() PrometheusResult {
return p.result
}
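One subtle bit in Start above is the prometheus-init config writer: the YAML is embedded in a single-quoted sh string, so every single quote inside it must be escaped as `'"'"'` (close the quote, emit a double-quoted quote, reopen). A standalone sketch of that escaping (the helper name is mine):

```go
package main

import (
	"fmt"
	"strings"
)

// shSingleQuote wraps s in single quotes for POSIX sh, escaping each
// embedded single quote as '"'"' — the same trick the prometheus-init
// configScript uses via strings.ReplaceAll.
func shSingleQuote(s string) string {
	return "'" + strings.ReplaceAll(s, "'", `'"'"'`) + "'"
}

func main() {
	fmt.Println(shSingleQuote("it's ok"))
}
```

Inside single quotes, sh performs no expansion at all, which is why this is safer than double-quoting YAML that may contain `$` or backslashes.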
@@ -0,0 +1,226 @@
package catalog
import (
"context"
"fmt"
"os"
"sync/atomic"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/provisionerkey"
"github.com/coder/serpent"
_ "github.com/lib/pq" // Imported for postgres driver side effects.
)
// ProvisionerResult contains the provisioner key for connecting
// external provisioner daemons.
type ProvisionerResult struct {
// Key is the plaintext provisioner key.
Key string
}
var _ Service[ProvisionerResult] = (*Provisioner)(nil)
var _ ConfigurableService = (*Provisioner)(nil)
// OnProvisioner returns the service name for the provisioner service.
func OnProvisioner() ServiceName {
return (&Provisioner{}).Name()
}
// Provisioner runs external provisioner daemons via docker compose.
type Provisioner struct {
currentStep atomic.Pointer[string]
count int64
result ProvisionerResult
}
func (p *Provisioner) CurrentStep() string {
if s := p.currentStep.Load(); s != nil {
return *s
}
return ""
}
func (p *Provisioner) setStep(step string) {
p.currentStep.Store(&step)
}
// NewProvisioner creates a new Provisioner and registers a Configure
// callback to disable built-in provisioners on coderd when external
// provisioners are enabled.
func NewProvisioner(cat *Catalog) *Provisioner {
p := &Provisioner{}
Configure[*Coderd](cat, OnCoderd(), func(c *Coderd) {
if p.count > 0 {
// Fail fast: license is required for external provisioners.
RequireLicense("external provisioners (--provisioner-count > 0)")
c.ExtraEnv = append(c.ExtraEnv, "CODER_PROVISIONER_DAEMONS=0")
}
})
return p
}
// Count returns the configured number of provisioner instances.
func (p *Provisioner) Count() int64 { return p.count }
func (*Provisioner) Name() ServiceName {
return CDevProvisioner
}
func (*Provisioner) Emoji() string {
return "⚙️"
}
func (*Provisioner) DependsOn() []ServiceName {
return []ServiceName{OnCoderd()}
}
func (p *Provisioner) Options() serpent.OptionSet {
return serpent.OptionSet{{
Name: "Provisioner Count",
Description: "Number of external provisioner daemons to start. 0 disables (uses built-in).",
Flag: "provisioner-count",
Env: "CDEV_PROVISIONER_COUNT",
Default: "0",
Value: serpent.Int64Of(&p.count),
}}
}
func (p *Provisioner) Start(ctx context.Context, logger slog.Logger, cat *Catalog) error {
if p.count == 0 {
return nil
}
defer p.setStep("")
pg, ok := cat.MustGet(OnPostgres()).(*Postgres)
if !ok {
return xerrors.New("unexpected type for Postgres service")
}
// Ensure license is in the database before provisioner setup.
if err := EnsureLicense(ctx, logger, cat); err != nil {
return xerrors.Errorf("ensure license: %w", err)
}
// Open direct DB connection to create the provisioner key.
sqlDB, err := pg.sqlDB()
if err != nil {
return xerrors.Errorf("open database: %w", err)
}
defer sqlDB.Close()
store := database.New(sqlDB)
// Get default organization.
org, err := store.GetDefaultOrganization(ctx)
if err != nil {
return xerrors.Errorf("get default organization: %w", err)
}
// Generate provisioner key.
params, secret, err := provisionerkey.New(org.ID, "cdev-external", nil)
if err != nil {
return xerrors.Errorf("generate provisioner key: %w", err)
}
// Upsert: delete existing, then insert fresh.
existing, err := store.GetProvisionerKeyByName(ctx, database.GetProvisionerKeyByNameParams{
OrganizationID: org.ID,
Name: "cdev-external",
})
if err == nil {
_ = store.DeleteProvisionerKey(ctx, existing.ID)
}
_, err = store.InsertProvisionerKey(ctx, params)
if err != nil {
return xerrors.Errorf("insert provisioner key: %w", err)
}
p.result = ProvisionerResult{Key: secret}
logger.Info(ctx, "provisioner key created", slog.F("name", "cdev-external"))
// Register and start provisioner containers via compose.
dkr, ok := cat.MustGet(OnDocker()).(*Docker)
if !ok {
return xerrors.New("unexpected type for Docker service")
}
coderd, ok := cat.MustGet(OnCoderd()).(*Coderd)
if !ok {
return xerrors.New("unexpected type for Coderd service")
}
cwd, err := os.Getwd()
if err != nil {
return xerrors.Errorf("get working directory: %w", err)
}
dockerGroup := os.Getenv("DOCKER_GROUP")
if dockerGroup == "" {
dockerGroup = "999"
}
dockerSocket := os.Getenv("DOCKER_SOCKET")
if dockerSocket == "" {
dockerSocket = "/var/run/docker.sock"
}
_ = coderd.Result() // Coderd is only a startup-ordering dependency; reference it to keep the compiler happy.
p.setStep("Starting provisioner daemons")
var serviceNames []string
for i := range p.count {
index := int(i)
name := fmt.Sprintf("provisioner-%d", index)
serviceNames = append(serviceNames, name)
logger.Info(ctx, "registering provisioner compose service", slog.F("index", index))
dkr.SetCompose(name, ComposeService{
Image: dogfoodImage + ":" + dogfoodTag,
Networks: []string{composeNetworkName},
WorkingDir: "/app",
Environment: map[string]string{
"CODER_URL": "http://coderd-0:3000",
"CODER_PROVISIONER_DAEMON_KEY": secret,
"CODER_PROVISIONER_DAEMON_NAME": fmt.Sprintf("cdev-provisioner-%d", index),
"GOMODCACHE": "/go-cache/mod",
"GOCACHE": "/go-cache/build",
"CODER_CACHE_DIRECTORY": "/cache",
"DOCKER_HOST": fmt.Sprintf("unix://%s", dockerSocket),
},
Command: []string{
"go", "run", "./enterprise/cmd/coder",
"provisioner", "start",
"--verbose",
},
Volumes: []string{
fmt.Sprintf("%s:/app", cwd),
"go_cache:/go-cache",
"coder_cache:/cache",
fmt.Sprintf("%s:%s", dockerSocket, dockerSocket),
},
GroupAdd: []string{dockerGroup},
DependsOn: map[string]ComposeDependsOn{
"coderd-0": {Condition: "service_healthy"},
},
Labels: composeServiceLabels("provisioner"),
})
}
if err := dkr.DockerComposeUp(ctx, serviceNames...); err != nil {
return xerrors.Errorf("docker compose up provisioners: %w", err)
}
return nil
}
func (*Provisioner) Stop(_ context.Context) error {
return nil
}
func (p *Provisioner) Result() ProvisionerResult {
return p.result
}
@@ -0,0 +1,510 @@
package catalog
import (
"archive/tar"
"bytes"
"context"
"errors"
"io"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"sync/atomic"
"time"
"golang.org/x/xerrors"
"github.com/google/uuid"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/codersdk"
)
const (
defaultAdminEmail = "admin@coder.com"
defaultAdminUsername = "admin"
defaultAdminName = "Admin User"
defaultAdminPassword = "SomeSecurePassword!"
defaultMemberEmail = "member@coder.com"
defaultMemberUsername = "member"
defaultMemberName = "Regular User"
)
// SetupResult contains the credentials for the created users.
type SetupResult struct {
// AdminEmail is the email of the admin user.
AdminEmail string
// AdminUsername is the username of the admin user.
AdminUsername string
// AdminPassword is the password for both admin and member users.
AdminPassword string
// MemberEmail is the email of the regular member user.
MemberEmail string
// MemberUsername is the username of the regular member user.
MemberUsername string
// SessionToken is the admin session token for API access.
SessionToken string
}
var _ Service[SetupResult] = (*Setup)(nil)
func OnSetup() ServiceName {
return (&Setup{}).Name()
}
// Setup creates the first user and a regular member user for the Coder
// deployment. This is a one-shot service that runs after coderd is ready.
type Setup struct {
currentStep atomic.Pointer[string]
result SetupResult
}
func (s *Setup) CurrentStep() string {
if st := s.currentStep.Load(); st != nil {
return *st
}
return ""
}
func (s *Setup) setStep(step string) {
s.currentStep.Store(&step)
}
func NewSetup() *Setup {
return &Setup{}
}
func (*Setup) Name() ServiceName {
return CDevSetup
}
func (*Setup) Emoji() string {
return "👤"
}
func (*Setup) DependsOn() []ServiceName {
return []ServiceName{
OnCoderd(),
}
}
func (s *Setup) Start(ctx context.Context, logger slog.Logger, c *Catalog) error {
defer s.setStep("")
coderd, ok := c.MustGet(OnCoderd()).(*Coderd)
if !ok {
return xerrors.New("unexpected type for Coderd service")
}
coderdResult := coderd.Result()
coderdURL, err := url.Parse(coderdResult.URL)
if err != nil {
return xerrors.Errorf("parse coderd URL: %w", err)
}
client := codersdk.New(coderdURL)
pg, ok := c.MustGet(OnPostgres()).(*Postgres)
if !ok {
return xerrors.New("unexpected type for Postgres service")
}
err = pg.waitForMigrations(ctx, logger)
if err != nil {
return xerrors.Errorf("wait for postgres migrations: %w", err)
}
// Check whether the first user already exists. If one does, skip
// creation and log in with the default credentials instead.
hasFirstUser, err := client.HasFirstUser(ctx)
if err != nil {
return xerrors.Errorf("check first user: %w", err)
}
s.result = SetupResult{
AdminEmail: defaultAdminEmail,
AdminUsername: defaultAdminUsername,
AdminPassword: defaultAdminPassword,
MemberEmail: defaultMemberEmail,
MemberUsername: defaultMemberUsername,
}
if !hasFirstUser {
// Create the first admin user.
s.setStep("Creating first admin user")
logger.Info(ctx, "creating first admin user",
slog.F("email", defaultAdminEmail),
slog.F("username", defaultAdminUsername))
_, err = client.CreateFirstUser(ctx, codersdk.CreateFirstUserRequest{
Email: defaultAdminEmail,
Username: defaultAdminUsername,
Name: defaultAdminName,
Password: defaultAdminPassword,
Trial: false,
})
if err != nil {
return xerrors.Errorf("create first user: %w", err)
}
logger.Info(ctx, "first admin user created successfully")
} else {
logger.Info(ctx, "first user already exists, skipping creation")
}
// Login to get a session token.
s.setStep("Logging in as admin")
logger.Info(ctx, "logging in as admin user")
loginResp, err := client.LoginWithPassword(ctx, codersdk.LoginWithPasswordRequest{
Email: defaultAdminEmail,
Password: defaultAdminPassword,
})
if err != nil {
return xerrors.Errorf("login as admin: %w", err)
}
client.SetSessionToken(loginResp.SessionToken)
s.result.SessionToken = loginResp.SessionToken
// Check if member user already exists.
memberExists := false
_, err = client.User(ctx, defaultMemberUsername)
if err == nil {
memberExists = true
} else {
var sdkErr *codersdk.Error
if !errors.As(err, &sdkErr) {
return xerrors.Errorf("check member user: %w", err)
}
switch sdkErr.StatusCode() {
case http.StatusNotFound:
memberExists = false
case http.StatusBadRequest:
// https://github.com/coder/coder/pull/22069 fixes this bug.
memberExists = false
default:
return xerrors.Errorf("check member user: %w", err)
}
}
if !memberExists {
org, err := client.OrganizationByName(ctx, codersdk.DefaultOrganization)
if err != nil {
return xerrors.Errorf("get default organization: %w", err)
}
// Create a regular member user.
s.setStep("Creating member user")
logger.Info(ctx, "creating regular member user",
slog.F("email", defaultMemberEmail),
slog.F("username", defaultMemberUsername))
_, err = client.CreateUserWithOrgs(ctx, codersdk.CreateUserRequestWithOrgs{
Email: defaultMemberEmail,
Username: defaultMemberUsername,
Name: defaultMemberName,
Password: defaultAdminPassword,
UserLoginType: codersdk.LoginTypePassword,
UserStatus: nil,
OrganizationIDs: []uuid.UUID{org.ID},
})
if err != nil {
return xerrors.Errorf("create member user: %w", err)
}
logger.Info(ctx, "regular member user created successfully")
} else {
logger.Info(ctx, "member user already exists, skipping creation")
}
// Create docker template if it doesn't exist.
s.setStep("Creating docker template")
if err := s.createDockerTemplate(ctx, logger, client); err != nil {
// Don't fail setup if template creation fails - it's not critical.
logger.Warn(ctx, "failed to create docker template", slog.Error(err))
}
logger.Info(ctx, "setup completed successfully",
slog.F("admin_email", s.result.AdminEmail),
slog.F("admin_username", s.result.AdminUsername),
slog.F("member_email", s.result.MemberEmail),
slog.F("member_username", s.result.MemberUsername))
return nil
}
func (s *Setup) createDockerTemplate(ctx context.Context, logger slog.Logger, client *codersdk.Client) error {
const templateName = "docker"
// Check if template already exists.
org, err := client.OrganizationByName(ctx, codersdk.DefaultOrganization)
if err != nil {
return xerrors.Errorf("get default organization: %w", err)
}
_, err = client.TemplateByName(ctx, org.ID, templateName)
if err == nil {
logger.Info(ctx, "docker template already exists, skipping creation")
return nil
}
// Template doesn't exist, create it.
logger.Info(ctx, "creating docker template")
// Copy template to temp directory and run terraform init to generate lock file.
s.setStep("Initializing terraform providers")
templateDir := filepath.Join("examples", "templates", "docker")
tempDir, err := s.prepareTemplateDir(ctx, logger, templateDir)
if err != nil {
return xerrors.Errorf("prepare template directory: %w", err)
}
defer os.RemoveAll(tempDir)
// Create a tar archive of the initialized template files.
tarData, err := createTarFromDir(tempDir)
if err != nil {
return xerrors.Errorf("create tar archive: %w", err)
}
// Upload the template files.
s.setStep("Uploading template files")
uploadResp, err := client.Upload(ctx, codersdk.ContentTypeTar, bytes.NewReader(tarData))
if err != nil {
return xerrors.Errorf("upload template files: %w", err)
}
// Create a template version.
s.setStep("Creating template version")
version, err := client.CreateTemplateVersion(ctx, org.ID, codersdk.CreateTemplateVersionRequest{
Name: "v1.0.0",
StorageMethod: codersdk.ProvisionerStorageMethodFile,
FileID: uploadResp.ID,
Provisioner: codersdk.ProvisionerTypeTerraform,
})
if err != nil {
return xerrors.Errorf("create template version: %w", err)
}
// Wait for the template version to be ready.
s.setStep("Waiting for template to build")
version, err = s.waitForTemplateVersion(ctx, logger, client, version.ID)
if err != nil {
return xerrors.Errorf("wait for template version: %w", err)
}
if version.Job.Status != codersdk.ProvisionerJobSucceeded {
logger.Error(ctx, "template version build failed", slog.F("error", version.Job.Error))
return xerrors.Errorf("template version failed: %s", version.Job.Status)
}
// Create the template.
s.setStep("Finalizing template")
_, err = client.CreateTemplate(ctx, org.ID, codersdk.CreateTemplateRequest{
Name: templateName,
DisplayName: "Docker",
Description: "Develop in Docker containers",
Icon: "/icon/docker.png",
VersionID: version.ID,
})
if err != nil {
return xerrors.Errorf("create template: %w", err)
}
logger.Info(ctx, "docker template created successfully")
return nil
}
// prepareTemplateDir copies the template to a temp directory and runs terraform init
// to generate the lock file that Coder's provisioner needs.
func (s *Setup) prepareTemplateDir(ctx context.Context, logger slog.Logger, srcDir string) (string, error) {
// Create temp directory.
tempDir, err := os.MkdirTemp("", "cdev-template-*")
if err != nil {
return "", xerrors.Errorf("create temp dir: %w", err)
}
// Copy all files from source to temp directory.
err = filepath.Walk(srcDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
relPath, err := filepath.Rel(srcDir, path)
if err != nil {
return err
}
destPath := filepath.Join(tempDir, relPath)
if info.IsDir() {
return os.MkdirAll(destPath, info.Mode())
}
// Copy file.
srcFile, err := os.Open(path)
if err != nil {
return err
}
defer srcFile.Close()
destFile, err := os.OpenFile(destPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, info.Mode())
if err != nil {
return err
}
defer destFile.Close()
_, err = io.Copy(destFile, srcFile)
return err
})
if err != nil {
_ = os.RemoveAll(tempDir)
return "", xerrors.Errorf("copy template files: %w", err)
}
// Inject additional modules into main.tf for development.
if err := s.injectDevModules(filepath.Join(tempDir, "main.tf")); err != nil {
_ = os.RemoveAll(tempDir)
return "", xerrors.Errorf("inject dev modules: %w", err)
}
// Run terraform init to download providers and create lock file.
logger.Info(ctx, "running terraform init", slog.F("dir", tempDir))
cmd := exec.CommandContext(ctx, "terraform", "init")
cmd.Dir = tempDir
output, err := cmd.CombinedOutput()
if err != nil {
_ = os.RemoveAll(tempDir)
return "", xerrors.Errorf("terraform init failed: %w\nOutput: %s", err, string(output))
}
logger.Debug(ctx, "terraform init completed", slog.F("output", string(output)))
// Remove the .terraform directory - we only need the lock file.
// The provisioner will download the providers itself.
tfDir := filepath.Join(tempDir, ".terraform")
if err := os.RemoveAll(tfDir); err != nil {
logger.Warn(ctx, "failed to remove .terraform directory", slog.Error(err))
}
return tempDir, nil
}
// injectDevModules appends additional Terraform modules to main.tf for development.
func (*Setup) injectDevModules(mainTFPath string) error {
const filebrowserModule = `
# ============================================================
# Development modules injected by cdev
# ============================================================
# See https://registry.coder.com/modules/coder/filebrowser
module "filebrowser" {
count = data.coder_workspace.me.start_count
source = "registry.coder.com/coder/filebrowser/coder"
version = "~> 1.0"
agent_id = coder_agent.main.id
agent_name = "main"
}
`
f, err := os.OpenFile(mainTFPath, os.O_APPEND|os.O_WRONLY, 0o644)
if err != nil {
return xerrors.Errorf("open main.tf: %w", err)
}
defer f.Close()
if _, err := f.WriteString(filebrowserModule); err != nil {
return xerrors.Errorf("write filebrowser module: %w", err)
}
return nil
}
func (*Setup) waitForTemplateVersion(ctx context.Context, logger slog.Logger, client *codersdk.Client, versionID uuid.UUID) (codersdk.TemplateVersion, error) {
ticker := time.NewTicker(2 * time.Second)
defer ticker.Stop()
timeout := time.After(5 * time.Minute)
for {
select {
case <-ctx.Done():
return codersdk.TemplateVersion{}, ctx.Err()
case <-timeout:
return codersdk.TemplateVersion{}, xerrors.New("timeout waiting for template version")
case <-ticker.C:
version, err := client.TemplateVersion(ctx, versionID)
if err != nil {
logger.Warn(ctx, "failed to get template version", slog.Error(err))
continue
}
if !version.Job.Status.Active() {
return version, nil
}
logger.Debug(ctx, "template version still building",
slog.F("status", version.Job.Status))
}
}
}
// createTarFromDir creates a tar archive from a directory.
func createTarFromDir(dir string) ([]byte, error) {
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
// Skip directories.
if info.IsDir() {
return nil
}
// Get relative path.
relPath, err := filepath.Rel(dir, path)
if err != nil {
return err
}
// Create tar header.
header, err := tar.FileInfoHeader(info, "")
if err != nil {
return err
}
header.Name = relPath
if err := tw.WriteHeader(header); err != nil {
return err
}
// Write file content.
file, err := os.Open(path)
if err != nil {
return err
}
defer file.Close()
_, err = io.Copy(tw, file)
return err
})
if err != nil {
return nil, err
}
if err := tw.Close(); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func (*Setup) Stop(_ context.Context) error {
// Setup is a one-shot task, nothing to stop.
return nil
}
func (s *Setup) Result() SetupResult {
return s.result
}
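createTarFromDir produces exactly the archive shape the Upload call expects: regular-file entries named by their path relative to the template root, with directories skipped. A self-contained round trip over that entry layout (helper names are illustrative, not from the source):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
)

// singleFileTar writes one regular-file entry the way createTarFromDir
// does: a relative Name and the file bytes, nothing else.
func singleFileTar(name string, content []byte) []byte {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	_ = tw.WriteHeader(&tar.Header{Name: name, Mode: 0o644, Size: int64(len(content))})
	_, _ = tw.Write(content)
	_ = tw.Close()
	return buf.Bytes()
}

// readFirst returns the name and body of the first entry in the archive.
func readFirst(data []byte) (string, string, error) {
	tr := tar.NewReader(bytes.NewReader(data))
	hdr, err := tr.Next()
	if err != nil {
		return "", "", err
	}
	body, err := io.ReadAll(tr)
	return hdr.Name, string(body), err
}

func main() {
	name, body, err := readFirst(singleFileTar("main.tf", []byte("# demo")))
	if err != nil {
		panic(err)
	}
	fmt.Println(name, body)
}
```

Relative names matter: absolute paths or `..` components in tar headers would be rejected (or mis-rooted) on the server side, which is why the walk rewrites `header.Name` with `filepath.Rel`.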
@@ -0,0 +1,66 @@
package catalog
import (
"context"
"fmt"
"io"
"sync"
"sync/atomic"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/pretty"
)
// LoggerSink is a controllable slog.Sink with pretty formatting.
type LoggerSink struct {
mu sync.Mutex
w io.Writer
emoji string
serviceName ServiceName
done atomic.Bool
}
// NewLoggerSink returns a controllable sink with pretty formatting.
// If svc is non-nil, lines are prefixed with the service's emoji
// and name. Pass nil for non-service contexts.
func NewLoggerSink(w io.Writer, svc ServiceBase) *LoggerSink {
s := &LoggerSink{w: w, emoji: "🚀", serviceName: "cdev"}
if svc != nil {
s.emoji = svc.Emoji()
s.serviceName = svc.Name()
}
return s
}
func (l *LoggerSink) LogEntry(_ context.Context, e slog.SinkEntry) {
if l.done.Load() {
return
}
ts := cliui.Timestamp(e.Time)
var streamTag string
if e.Level >= slog.LevelWarn {
streamTag = pretty.Sprint(cliui.DefaultStyles.Warn, "stderr")
} else {
streamTag = pretty.Sprint(cliui.DefaultStyles.Keyword, "stdout")
}
serviceLabel := fmt.Sprintf("%s %-10s", l.emoji, l.serviceName)
var fields string
for _, f := range e.Fields {
fields += fmt.Sprintf(" %s=%v", f.Name, f.Value)
}
l.mu.Lock()
defer l.mu.Unlock()
_, _ = fmt.Fprintf(l.w, "%s %s [%s] %s%s\n", serviceLabel, ts, streamTag, e.Message, fields)
}
func (*LoggerSink) Sync() {}
func (l *LoggerSink) Close() {
l.done.Store(true)
}
@@ -0,0 +1,180 @@
package catalog
import (
"context"
"fmt"
"net/http"
"os"
"sync/atomic"
"time"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
)
const (
sitePort = 8080
)
// SiteResult contains the connection info for the running Site dev server.
type SiteResult struct {
// URL is the access URL for the frontend dev server.
URL string
// Port is the host port mapped to the container's 8080.
Port string
}
var _ Service[SiteResult] = (*Site)(nil)
func OnSite() ServiceName {
return (&Site{}).Name()
}
// Site runs the Coder frontend dev server via docker compose.
type Site struct {
currentStep atomic.Pointer[string]
result SiteResult
}
func (s *Site) CurrentStep() string {
if st := s.currentStep.Load(); st != nil {
return *st
}
return ""
}
func (s *Site) URL() string {
return s.result.URL
}
func (s *Site) setStep(step string) {
s.currentStep.Store(&step)
}
func NewSite() *Site {
return &Site{}
}
func (*Site) Name() ServiceName {
return CDevSite
}
func (*Site) Emoji() string {
return "🌐"
}
func (*Site) DependsOn() []ServiceName {
return []ServiceName{
OnDocker(),
OnSetup(),
}
}
func (s *Site) Start(ctx context.Context, logger slog.Logger, c *Catalog) error {
defer s.setStep("")
dkr, ok := c.MustGet(OnDocker()).(*Docker)
if !ok {
return xerrors.New("unexpected type for Docker service")
}
// Get coderd result for the backend URL.
coderd, ok := c.MustGet(OnCoderd()).(*Coderd)
if !ok {
return xerrors.New("unexpected type for Coderd service")
}
// Get current working directory for mounting.
cwd, err := os.Getwd()
if err != nil {
return xerrors.Errorf("get working directory: %w", err)
}
portStr := fmt.Sprintf("%d", sitePort)
s.setStep("Registering site compose service")
logger.Info(ctx, "registering site compose service", slog.F("port", sitePort))
dkr.SetComposeVolume("site_node_modules", ComposeVolume{})
dkr.SetCompose("site", ComposeService{
Image: dogfoodImage + ":" + dogfoodTag,
Networks: []string{composeNetworkName},
WorkingDir: "/app/site",
Environment: map[string]string{
"CODER_HOST": "http://coderd-0:3000",
},
Ports: []string{fmt.Sprintf("%s:%s", portStr, portStr)},
Volumes: []string{
fmt.Sprintf("%s:/app", cwd),
"site_node_modules:/app/site/node_modules",
},
Command: `sh -c "pnpm install --frozen-lockfile && pnpm dev --host"`,
DependsOn: map[string]ComposeDependsOn{
"coderd-0": {Condition: "service_healthy"},
},
Restart: "unless-stopped",
Labels: composeServiceLabels("site"),
})
s.setStep("Starting site via compose")
if err := dkr.DockerComposeUp(ctx, "site"); err != nil {
return xerrors.Errorf("docker compose up site: %w", err)
}
s.result = SiteResult{
URL: fmt.Sprintf("http://localhost:%d", sitePort),
Port: portStr,
}
// Touch the coderd result so the declared dependency is exercised.
_ = coderd.Result()
s.setStep("Waiting for dev server")
return s.waitForReady(ctx, logger)
}
func (s *Site) waitForReady(ctx context.Context, logger slog.Logger) error {
ticker := time.NewTicker(2 * time.Second)
defer ticker.Stop()
// Site dev server can take a while to start, especially on first run
// with pnpm install.
timeout := time.After(5 * time.Minute)
healthURL := s.result.URL
logger.Info(ctx, "waiting for site dev server to be ready", slog.F("health_url", healthURL))
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-timeout:
return xerrors.New("timeout waiting for site dev server to be ready")
case <-ticker.C:
req, err := http.NewRequestWithContext(ctx, http.MethodGet, healthURL, nil)
if err != nil {
continue
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
continue
}
_ = resp.Body.Close()
if resp.StatusCode == http.StatusOK {
logger.Info(ctx, "site dev server is ready and accepting connections", slog.F("url", s.result.URL))
return nil
}
}
}
}
func (*Site) Stop(_ context.Context) error {
// Don't stop the container - it persists across runs.
// Use "cdev down" to fully clean up.
return nil
}
func (s *Site) Result() SiteResult {
return s.result
}
@@ -0,0 +1,825 @@
package main
import (
"context"
"encoding/json"
"fmt"
"net/http"
"os"
"os/exec"
"os/signal"
"slices"
"strings"
"syscall"
"text/tabwriter"
"time"
tea "github.com/charmbracelet/bubbletea"
"github.com/ory/dockertest/v3/docker"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/scripts/cdev/api"
"github.com/coder/coder/v2/scripts/cdev/catalog"
"github.com/coder/serpent"
)
func main() {
cmd := &serpent.Command{
Use: "cdev",
Short: "Development environment manager for Coder",
Long: "A smart, opinionated tool for running the Coder development stack.",
Children: []*serpent.Command{
upCmd(),
psCmd(),
resourcesCmd(),
downCmd(),
cleanCmd(),
pprofCmd(),
logsCmd(),
generateCmd(),
},
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
sigs := make(chan os.Signal, 1)
// We want to catch SIGINT (Ctrl+C) and SIGTERM (graceful shutdown).
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-sigs
// Cancel the root context to begin shutdown.
// TODO: Would be best to call a `Close()` function and try a graceful shutdown first, but this is good enough for now.
cancel()
}()
err := cmd.Invoke().WithContext(ctx).WithOS().Run()
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "error: %v\n", err)
os.Exit(1) //nolint:gocritic // exitAfterDefer: deferred cancel is for the non-error path.
}
}
func cleanCmd() *serpent.Command {
return &serpent.Command{
Use: "clean",
Short: "Remove all cdev-managed resources (volumes, containers, etc.)",
Handler: func(inv *serpent.Invocation) error {
logger := slog.Make(catalog.NewLoggerSink(inv.Stderr, nil))
return catalog.Cleanup(inv.Context(), logger)
},
}
}
func downCmd() *serpent.Command {
return &serpent.Command{
Use: "down",
Short: "Stop all running cdev-managed containers, but keep volumes and other resources.",
Handler: func(inv *serpent.Invocation) error {
logger := slog.Make(catalog.NewLoggerSink(inv.Stderr, nil))
return catalog.Down(inv.Context(), logger)
},
}
}
func psCmd() *serpent.Command {
var apiAddr string
var interval time.Duration
return &serpent.Command{
Use: "ps",
Short: "Show status of cdev services.",
Options: serpent.OptionSet{
{
Flag: "api-addr",
Description: "Address of the cdev control API server.",
Default: "localhost:" + api.DefaultAPIPort,
Value: serpent.StringOf(&apiAddr),
},
{
Flag: "interval",
FlagShorthand: "n",
Description: "Refresh interval (0 to disable auto-refresh).",
Default: "2s",
Value: serpent.DurationOf(&interval),
},
},
Handler: func(inv *serpent.Invocation) error {
m := &psModel{
apiAddr: apiAddr,
interval: interval,
}
p := tea.NewProgram(m,
tea.WithContext(inv.Context()),
tea.WithOutput(inv.Stdout),
tea.WithInput(inv.Stdin),
)
_, err := p.Run()
return err
},
}
}
// psModel is the bubbletea model for the ps command.
type psModel struct {
apiAddr string
interval time.Duration
services []api.ServiceInfo
err error
}
type psTickMsg time.Time
type psDataMsg struct {
services []api.ServiceInfo
}
func (m *psModel) Init() tea.Cmd {
cmds := []tea.Cmd{m.fetchData}
if m.interval > 0 {
cmds = append(cmds, m.tick())
}
return tea.Batch(cmds...)
}
func (m *psModel) tick() tea.Cmd {
return tea.Tick(m.interval, func(t time.Time) tea.Msg {
return psTickMsg(t)
})
}
func (m *psModel) fetchData() tea.Msg {
url := fmt.Sprintf("http://%s/api/services", m.apiAddr)
req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, url, nil)
if err != nil {
return err
}
resp, err := http.DefaultClient.Do(req) //nolint:gosec // User-provided API address.
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return xerrors.Errorf("API returned status %d", resp.StatusCode)
}
var data api.ListServicesResponse
if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
return err
}
return psDataMsg{services: data.Services}
}
func (m *psModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case tea.KeyMsg:
switch msg.String() {
case "q", "ctrl+c":
return m, tea.Quit
case "r":
// Manual refresh.
return m, m.fetchData
}
case psTickMsg:
return m, tea.Batch(m.fetchData, m.tick())
case psDataMsg:
m.services = msg.services
m.err = nil
return m, nil
case error:
m.err = msg
return m, nil
}
return m, nil
}
func (m *psModel) View() string {
if m.err != nil {
return fmt.Sprintf("Error: %v\n\nIs cdev running? Try: cdev up\n\nPress q to quit, r to retry.\n", m.err)
}
if len(m.services) == 0 {
return "Loading...\n"
}
var s strings.Builder
_, _ = s.WriteString("SERVICES\n")
tw := tabwriter.NewWriter(&s, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintln(tw, "NAME\tEMOJI\tSTATUS\tCURRENT STEP\tDEPENDS ON")
// Sort services by name.
services := slices.Clone(m.services)
slices.SortFunc(services, func(a, b api.ServiceInfo) int {
return strings.Compare(a.Name, b.Name)
})
for _, svc := range services {
deps := "-"
if len(svc.DependsOn) > 0 {
deps = strings.Join(svc.DependsOn, ", ")
}
step := "-"
if svc.CurrentStep != "" {
step = svc.CurrentStep
}
_, _ = fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%s\n", svc.Name, svc.Emoji, svc.Status, step, deps)
}
_ = tw.Flush()
if m.interval > 0 {
_, _ = s.WriteString(fmt.Sprintf("\nRefreshing every %s. Press q to quit, r to refresh.\n", m.interval))
} else {
_, _ = s.WriteString("\nPress q to quit, r to refresh.\n")
}
return s.String()
}
func resourcesCmd() *serpent.Command {
var interval time.Duration
return &serpent.Command{
Use: "resources",
Aliases: []string{"res"},
Short: "Watch all cdev-managed resources like containers, images, and volumes.",
Options: serpent.OptionSet{
{
Flag: "interval",
FlagShorthand: "n",
Description: "Refresh interval.",
Default: "2s",
Value: serpent.DurationOf(&interval),
},
},
Handler: func(inv *serpent.Invocation) error {
client, err := docker.NewClientFromEnv()
if err != nil {
return xerrors.Errorf("failed to connect to docker: %w", err)
}
m := &watchModel{
client: client,
interval: interval,
filter: catalog.NewLabels().Filter(),
}
p := tea.NewProgram(m,
tea.WithContext(inv.Context()),
tea.WithOutput(inv.Stdout),
tea.WithInput(inv.Stdin),
)
_, err = p.Run()
return err
},
}
}
// watchModel is the bubbletea model for the resources command.
type watchModel struct {
client *docker.Client
interval time.Duration
filter map[string][]string
containers []docker.APIContainers
volumes []docker.Volume
images []docker.APIImages
err error
}
type tickMsg time.Time
func (m *watchModel) Init() tea.Cmd {
return tea.Batch(m.fetchData, m.tick())
}
func (m *watchModel) tick() tea.Cmd {
return tea.Tick(m.interval, func(t time.Time) tea.Msg {
return tickMsg(t)
})
}
func (m *watchModel) fetchData() tea.Msg {
containers, err := m.client.ListContainers(docker.ListContainersOptions{
All: true,
Filters: m.filter,
})
if err != nil {
return err
}
vols, err := m.client.ListVolumes(docker.ListVolumesOptions{
Filters: m.filter,
})
if err != nil {
return err
}
imgs, err := m.client.ListImages(docker.ListImagesOptions{
Filters: m.filter,
All: true,
})
if err != nil {
return err
}
return dataMsg{containers: containers, volumes: vols, images: imgs}
}
type dataMsg struct {
containers []docker.APIContainers
volumes []docker.Volume
images []docker.APIImages
}
func (m *watchModel) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg := msg.(type) {
case tea.KeyMsg:
switch msg.String() {
case "q", "ctrl+c":
return m, tea.Quit
}
case tickMsg:
return m, tea.Batch(m.fetchData, m.tick())
case dataMsg:
m.containers = msg.containers
m.volumes = msg.volumes
m.images = msg.images
return m, nil
case error:
m.err = msg
return m, tea.Quit
}
return m, nil
}
func (m *watchModel) View() string {
if m.err != nil {
return fmt.Sprintf("Error: %v\n", m.err)
}
var s strings.Builder
// Containers table.
_, _ = s.WriteString("CONTAINERS\n")
tw := tabwriter.NewWriter(&s, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintln(tw, "NAME\tIMAGE\tSTATUS\tPORTS")
// Sort containers by name.
containers := slices.Clone(m.containers)
slices.SortFunc(containers, func(a, b docker.APIContainers) int {
return strings.Compare(a.Names[0], b.Names[0])
})
for _, c := range containers {
name := strings.TrimPrefix(c.Names[0], "/")
ports := formatPorts(c.Ports)
_, _ = fmt.Fprintf(tw, "%s\t%s\t%s\t%s\n", name, c.Image, c.Status, ports)
}
_ = tw.Flush()
if len(containers) == 0 {
_, _ = s.WriteString(" (none)\n")
}
// Volumes table.
_, _ = s.WriteString("\nVOLUMES\n")
tw = tabwriter.NewWriter(&s, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintln(tw, "NAME\tDRIVER\tLABELS")
// Sort volumes by name.
volumes := slices.Clone(m.volumes)
slices.SortFunc(volumes, func(a, b docker.Volume) int {
return strings.Compare(a.Name, b.Name)
})
for _, v := range volumes {
labels := formatLabels(v.Labels)
_, _ = fmt.Fprintf(tw, "%s\t%s\t%s\n", v.Name, v.Driver, labels)
}
_ = tw.Flush()
if len(volumes) == 0 {
_, _ = s.WriteString(" (none)\n")
}
// Images table.
_, _ = s.WriteString("\nIMAGES\n")
tw = tabwriter.NewWriter(&s, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintln(tw, "TAG\tID\tSIZE\tLABELS")
// Sort images by tag.
images := slices.Clone(m.images)
slices.SortFunc(images, func(a, b docker.APIImages) int {
aTag := formatImageTag(a.RepoTags)
bTag := formatImageTag(b.RepoTags)
return strings.Compare(aTag, bTag)
})
for _, img := range images {
tag := formatImageTag(img.RepoTags)
id := formatImageID(img.ID)
size := formatSize(img.Size)
labels := formatLabels(img.Labels)
_, _ = fmt.Fprintf(tw, "%s\t%s\t%s\t%s\n", tag, id, size, labels)
}
_ = tw.Flush()
if len(images) == 0 {
_, _ = s.WriteString(" (none)\n")
}
_, _ = s.WriteString(fmt.Sprintf("\nRefreshing every %s. Press q to quit.\n", m.interval))
return s.String()
}
func formatPorts(ports []docker.APIPort) string {
var parts []string
for _, p := range ports {
if p.PublicPort != 0 {
parts = append(parts, fmt.Sprintf("%s:%d->%d/%s", p.IP, p.PublicPort, p.PrivatePort, p.Type))
}
}
if len(parts) == 0 {
return "-"
}
return strings.Join(parts, ", ")
}
func formatLabels(labels map[string]string) string {
var parts []string
for k, v := range labels {
// Only show cdev-specific labels for brevity.
if strings.HasPrefix(k, "cdev") {
parts = append(parts, fmt.Sprintf("%s=%s", k, v))
}
}
if len(parts) == 0 {
return "-"
}
// Sort for deterministic output.
slices.Sort(parts)
return strings.Join(parts, ", ")
}
func formatImageTag(repoTags []string) string {
if len(repoTags) == 0 {
return "<none>"
}
return repoTags[0]
}
func formatImageID(id string) string {
// Shorten "sha256:abc123..." to "abc123..." (first 12 chars of hash).
id = strings.TrimPrefix(id, "sha256:")
if len(id) > 12 {
return id[:12]
}
return id
}
func formatSize(bytes int64) string {
const (
kb = 1024
mb = kb * 1024
gb = mb * 1024
)
switch {
case bytes >= gb:
return fmt.Sprintf("%.1fGB", float64(bytes)/gb)
case bytes >= mb:
return fmt.Sprintf("%.1fMB", float64(bytes)/mb)
case bytes >= kb:
return fmt.Sprintf("%.1fKB", float64(bytes)/kb)
default:
return fmt.Sprintf("%dB", bytes)
}
}
func pprofCmd() *serpent.Command {
var instance int64
return &serpent.Command{
Use: "pprof <profile>",
Short: "Open pprof web UI for a running coderd instance",
Long: `Open the pprof web UI for a running coderd instance.
Supported profiles:
profile CPU profile (30s sample)
heap Heap memory allocations
goroutine Stack traces of all goroutines
allocs Past memory allocations
block Stack traces of blocking operations
mutex Stack traces of mutex contention
threadcreate Stack traces that led to new OS threads
trace Execution trace (30s sample)
Examples:
cdev pprof heap
cdev pprof profile
cdev pprof goroutine
cdev pprof -i 1 heap # instance 1`,
Options: serpent.OptionSet{
{
Name: "Instance",
Description: "Coderd instance index (0-based).",
Flag: "instance",
FlagShorthand: "i",
Default: "0",
Value: serpent.Int64Of(&instance),
},
},
Handler: func(inv *serpent.Invocation) error {
if len(inv.Args) != 1 {
_ = serpent.DefaultHelpFn()(inv)
return xerrors.New("expected exactly one argument: the profile name")
}
profile := inv.Args[0]
url := fmt.Sprintf("http://localhost:%d/debug/pprof/%s", catalog.PprofPortNum(int(instance)), profile)
if profile == "profile" || profile == "trace" {
url += "?seconds=30"
}
_, _ = fmt.Fprintf(inv.Stdout, "Opening pprof web UI for instance %d, %q at %s\n", instance, profile, url)
//nolint:gosec // User-provided profile name is passed as a URL path.
cmd := exec.CommandContext(inv.Context(), "go", "tool", "pprof", "-http=:", url)
cmd.Stdout = inv.Stdout
cmd.Stderr = inv.Stderr
return cmd.Run()
},
}
}
func logsCmd() *serpent.Command {
var follow bool
return &serpent.Command{
Use: "logs <service>",
Short: "Show logs for a cdev-managed service",
Long: `Show logs for a cdev-managed service container.
Available services:
coderd Main Coder API server
postgres PostgreSQL database
oidc OIDC test provider
provisioner Provisioner daemon
prometheus Prometheus metrics server
site Frontend development server
Examples:
cdev logs coderd
cdev logs -f postgres`,
Options: serpent.OptionSet{
{
Flag: "follow",
FlagShorthand: "f",
Description: "Follow log output (like tail -f).",
Default: "false",
Value: serpent.BoolOf(&follow),
},
},
Handler: func(inv *serpent.Invocation) error {
if len(inv.Args) != 1 {
_ = serpent.DefaultHelpFn()(inv)
return xerrors.New("expected exactly one argument: the service name")
}
service := inv.Args[0]
client, err := docker.NewClientFromEnv()
if err != nil {
return xerrors.Errorf("failed to connect to docker: %w", err)
}
// Find containers matching the service label.
filter := catalog.NewServiceLabels(catalog.ServiceName(service)).Filter()
containers, err := client.ListContainers(docker.ListContainersOptions{
All: true,
Filters: filter,
})
if err != nil {
return xerrors.Errorf("failed to list containers: %w", err)
}
if len(containers) == 0 {
return xerrors.Errorf("no container found for service %q", service)
}
// Use the first container's name (strip leading slash).
containerName := strings.TrimPrefix(containers[0].Names[0], "/")
// Build docker logs command.
args := []string{"logs"}
if follow {
args = append(args, "-f")
}
args = append(args, containerName)
//nolint:gosec // User-provided service name is validated by docker.
cmd := exec.CommandContext(inv.Context(), "docker", args...)
cmd.Stdout = inv.Stdout
cmd.Stderr = inv.Stderr
return cmd.Run()
},
}
}
func generateCmd() *serpent.Command {
var (
coderdCount int64
provisionerCount int64
oidc bool
prometheus bool
outputFile string
)
return &serpent.Command{
Use: "generate",
Short: "Generate docker-compose.yml for the cdev stack",
Options: serpent.OptionSet{
{
Flag: "coderd-count",
Default: "1",
Value: serpent.Int64Of(&coderdCount),
},
{
Flag: "provisioner-count",
Default: "0",
Value: serpent.Int64Of(&provisionerCount),
},
{
Flag: "oidc",
Default: "false",
Value: serpent.BoolOf(&oidc),
},
{
Flag: "prometheus",
Default: "false",
Value: serpent.BoolOf(&prometheus),
},
{
Flag: "output",
FlagShorthand: "o",
Description: "Output file (default: stdout).",
Value: serpent.StringOf(&outputFile),
},
},
Handler: func(inv *serpent.Invocation) error {
cwd, err := os.Getwd()
if err != nil {
return xerrors.Errorf("get working directory: %w", err)
}
dockerGroup := os.Getenv("DOCKER_GROUP")
if dockerGroup == "" {
dockerGroup = "999"
}
dockerSocket := os.Getenv("DOCKER_SOCKET")
if dockerSocket == "" {
dockerSocket = "/var/run/docker.sock"
}
cfg := catalog.ComposeConfig{
CoderdCount: int(coderdCount),
ProvisionerCount: int(provisionerCount),
OIDC: oidc,
Prometheus: prometheus,
DockerGroup: dockerGroup,
DockerSocket: dockerSocket,
CWD: cwd,
License: os.Getenv("CODER_LICENSE"),
}
data, err := catalog.GenerateYAML(cfg)
if err != nil {
return xerrors.Errorf("generate compose YAML: %w", err)
}
if outputFile != "" {
if err := os.WriteFile(outputFile, data, 0o644); err != nil { //nolint:gosec // G306: Generated compose file, 0o644 is intentional.
return xerrors.Errorf("write output file: %w", err)
}
_, _ = fmt.Fprintf(inv.Stdout, "Wrote compose file to %s\n", outputFile)
return nil
}
_, err = inv.Stdout.Write(data)
return err
},
}
}
func upCmd() *serpent.Command {
services := catalog.New()
err := services.Register(
catalog.NewDocker(),
catalog.NewBuildSlim(),
catalog.NewPostgres(),
catalog.NewCoderd(),
catalog.NewOIDC(),
catalog.NewSetup(),
catalog.NewSite(),
catalog.NewLoadBalancer(),
)
if err != nil {
panic(fmt.Sprintf("failed to register services: %v", err))
}
// Create provisioner to collect its options, but don't register
// it yet — we only register when count > 0 (after option parsing).
provisioner := catalog.NewProvisioner(services)
prometheusSvc := catalog.NewPrometheus()
// Fail fast if HA is enabled without a license.
catalog.Configure[*catalog.Coderd](services, catalog.OnCoderd(), func(c *catalog.Coderd) {
if c.HACount() > 1 {
catalog.RequireLicense("HA coderd (--coderd-count > 1)")
}
})
var apiAddr string
var startPaused bool
optionSet := serpent.OptionSet{
{
Flag: "api-addr",
Description: "Address for the cdev control API server.",
Default: "localhost:" + api.DefaultAPIPort,
Value: serpent.StringOf(&apiAddr),
},
{
Flag: "start-paused",
Description: "Start cdev without auto-starting services. Services can be started via the API or UI.",
Default: "false",
Value: serpent.BoolOf(&startPaused),
},
}
_ = services.ForEach(func(srv catalog.ServiceBase) error {
if configurable, ok := srv.(catalog.ConfigurableService); ok {
optionSet = append(optionSet, configurable.Options()...)
}
return nil
})
// Add provisioner options even though it's not registered yet,
// so --provisioner-count always appears in help text.
optionSet = append(optionSet, provisioner.Options()...)
optionSet = append(optionSet, prometheusSvc.Options()...)
return &serpent.Command{
Use: "up",
Short: "Start the development environment",
Options: optionSet,
Handler: func(inv *serpent.Invocation) error {
ctx := inv.Context()
// Register provisioner only if count > 0.
if provisioner.Count() > 0 {
if err := services.Register(provisioner); err != nil {
return xerrors.Errorf("failed to register provisioner: %w", err)
}
}
// Register prometheus only if enabled.
if prometheusSvc.Enabled() {
if err := services.Register(prometheusSvc); err != nil {
return xerrors.Errorf("failed to register prometheus: %w", err)
}
}
services.Init(inv.Stderr)
if err := services.ApplyConfigurations(); err != nil {
return xerrors.Errorf("failed to apply configurations: %w", err)
}
// Start the API server first so we can query status while services
// are starting.
apiServer := api.NewServer(services, services.Logger(), apiAddr)
if err := apiServer.Start(ctx); err != nil {
return xerrors.Errorf("failed to start API server: %w", err)
}
_, _ = fmt.Fprintf(inv.Stdout, "🔌 API server is ready at http://%s\n", apiAddr)
if startPaused {
_, _ = fmt.Fprintln(inv.Stdout, "⏸️ Started in paused mode. Services can be started via the API or UI.")
_, _ = fmt.Fprintf(inv.Stdout, " Start all: curl -X POST http://%s/api/services/start\n", apiAddr)
_, _ = fmt.Fprintf(inv.Stdout, " UI: http://%s\n", apiAddr)
<-inv.Context().Done()
return nil
}
_, _ = fmt.Fprintln(inv.Stdout, "🚀 Starting cdev...")
err = services.Start(ctx)
if err != nil {
return xerrors.Errorf("failed to start services: %w", err)
}
coderd, ok := services.MustGet(catalog.OnCoderd()).(*catalog.Coderd)
if !ok {
return xerrors.New("unexpected type for coderd service")
}
_, _ = fmt.Fprintf(inv.Stdout, "✅ Coder is ready at %s\n", coderd.Result().URL)
if prometheusSvc.Enabled() {
_, _ = fmt.Fprintf(inv.Stdout, "📊 Prometheus is ready at http://localhost:9090\n")
}
<-inv.Context().Done()
return nil
},
}
}
@@ -0,0 +1,27 @@
# Dockerfile for testidp - a fake OIDC identity provider for development.
# This is used by the cdev catalog to run a local OIDC provider.
FROM golang:1.25.6-alpine AS builder
WORKDIR /app
# Copy go mod files first for better caching.
COPY go.mod go.sum ./
RUN go mod download
# Copy source code (testidp has many transitive deps within the repo).
COPY . .
# Build the testidp binary.
RUN CGO_ENABLED=0 go build -o /testidp ./scripts/testidp
# Runtime image with debug tools.
FROM alpine:3.21
RUN apk add --no-cache curl wget netcat-openbsd bind-tools
COPY --from=builder /testidp /testidp
# Default port for the IDP.
EXPOSE 4500
ENTRYPOINT ["/testidp"]
@@ -0,0 +1,17 @@
build
site/*
!site/*.go
!site/static/*.html
node_modules
**/node_modules
**/testdata
docs/*
helm/*
.idea/*
.github/*
.git
*.tsx
*.ts
*.md
**.test
**.bin
@@ -4,6 +4,7 @@ import (
"encoding/json"
"flag"
"log"
"net/http"
"os"
"os/signal"
"strings"
@@ -26,9 +27,11 @@ var (
clientID = flag.String("client-id", "static-client-id", "Client ID, set empty to be random")
clientSecret = flag.String("client-sec", "static-client-secret", "Client Secret, set empty to be random")
deviceFlow = flag.Bool("device-flow", false, "Enable device flow")
issuerURL = flag.String("issuer", "http://localhost:4500", "Issuer URL that clients will use to reach this IDP")
// By default, no regex means it will never match anything. So at least default to matching something.
extRegex = flag.String("ext-regex", `^(https?://)?example\.com(/.*)?$`, "External auth regex")
tooManyRequests = flag.String("429", "", "Simulate too many requests for a given endpoint.")
backchannelBaseURL = flag.String("backchannel-base-url", "", "Base URL for server-to-server endpoints (token, userinfo, jwks). When set, these endpoints in discovery use this URL while authorization_endpoint keeps the -issuer URL.")
tooManyRequests = flag.String("429", "", "Simulate too many requests for a given endpoint.")
)
func main() {
@@ -84,7 +87,9 @@ func RunIDP() func(t *testing.T) {
return func(t *testing.T) {
idp := oidctest.NewFakeIDP(t,
oidctest.WithServing(),
// Don't use WithServing() - it overrides the issuer URL with the
// actual server address. We serve manually below to preserve our
// configured issuer URL.
oidctest.WithStaticUserInfo(jwt.MapClaims{
// This is a static set of auth fields. Might be beneficial to make flags
// to allow different values here. This is only required for using the
@@ -101,10 +106,22 @@ func RunIDP() func(t *testing.T) {
}),
oidctest.WithDefaultExpire(*expiry),
oidctest.WithStaticCredentials(*clientID, *clientSecret),
oidctest.WithIssuer("http://localhost:4500"),
oidctest.WithIssuer(*issuerURL),
oidctest.WithBackchannelBaseURL(*backchannelBaseURL),
oidctest.WithLogger(slog.Make(sloghuman.Sink(os.Stderr))),
oidctest.With429(tooManyRequestParams),
)
// Serve the IDP manually on port 4500 to preserve the configured issuer URL.
srv := &http.Server{
Addr: ":4500",
Handler: idp.Handler(),
}
go func() {
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
log.Fatalf("IDP server error: %v", err)
}
}()
id, sec := idp.AppCredentials()
prov := idp.WellknownConfig()
const appID = "fake"
@@ -49,7 +49,6 @@
"@monaco-editor/react": "4.7.0",
"@mui/material": "5.18.0",
"@mui/system": "5.18.0",
"@mui/utils": "5.17.1",
"@mui/x-tree-view": "7.29.10",
"@radix-ui/react-avatar": "1.1.11",
"@radix-ui/react-checkbox": "1.3.3",
@@ -61,9 +61,6 @@ importers:
'@mui/system':
specifier: 5.18.0
version: 5.18.0(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@emotion/styled@11.14.1(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react@19.2.2)
'@mui/utils':
specifier: 5.17.1
version: 5.17.1(@types/react@19.2.7)(react@19.2.2)
'@mui/x-tree-view':
specifier: 7.29.10
version: 7.29.10(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@emotion/styled@11.14.1(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react@19.2.2))(@mui/material@5.18.0(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@emotion/styled@11.14.1(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react-dom@19.2.2(react@19.2.2))(react@19.2.2))(@mui/system@5.18.0(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@emotion/styled@11.14.1(@emotion/react@11.14.0(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react@19.2.2))(@types/react@19.2.7)(react-dom@19.2.2(react@19.2.2))(react@19.2.2)
@@ -1425,6 +1425,7 @@ export type CreateWorkspaceBuildReason =
| "dashboard"
| "jetbrains_connection"
| "ssh_connection"
| "task_manual_pause"
| "vscode_connection";
export const CreateWorkspaceBuildReasons: CreateWorkspaceBuildReason[] = [
@@ -1432,6 +1433,7 @@ export const CreateWorkspaceBuildReasons: CreateWorkspaceBuildReason[] = [
"dashboard",
"jetbrains_connection",
"ssh_connection",
"task_manual_pause",
"vscode_connection",
];
@@ -3583,6 +3585,14 @@ export interface PatchWorkspaceProxy {
*/
export const PathAppSessionTokenCookie = "coder_path_app_session_token";
// From codersdk/aitasks.go
/**
* PauseTaskResponse represents the response from pausing a task.
*/
export interface PauseTaskResponse {
readonly workspace_build: WorkspaceBuild | null;
}
// From codersdk/roles.go
/**
* Permission is the format passed into the rego.
@@ -1,7 +1,6 @@
import { css, Global, useTheme } from "@emotion/react";
import InputAdornment from "@mui/material/InputAdornment";
import TextField, { type TextFieldProps } from "@mui/material/TextField";
import { visuallyHidden } from "@mui/utils";
import { Button } from "components/Button/Button";
import { ExternalImage } from "components/ExternalImage/ExternalImage";
import { Loader } from "components/Loader/Loader";
@@ -116,7 +115,7 @@ export const IconField: FC<IconFieldProps> = ({
- Except we don't do it when running tests, because Jest doesn't define
`IntersectionObserver`, and it would make them slower anyway. */}
{process.env.NODE_ENV !== "test" && (
<div css={{ ...visuallyHidden }}>
<div className="sr-only" aria-hidden="true">
<Suspense>
<EmojiPicker onEmojiSelect={() => {}} />
</Suspense>
@@ -1,5 +1,4 @@
import Skeleton from "@mui/material/Skeleton";
import { visuallyHidden } from "@mui/utils";
import type * as TypesGen from "api/typesGenerated";
import { Abbr } from "components/Abbr/Abbr";
import { Button } from "components/Button/Button";
@@ -74,7 +73,7 @@ export const ProxyMenu: FC<ProxyMenuProps> = ({ proxyContextValue }) => {
<DropdownMenu open={open} onOpenChange={setOpen}>
<DropdownMenuTrigger asChild>
<Button variant="outline" size="lg">
<span css={{ ...visuallyHidden }}>
<span className="sr-only">
Latency for {selectedProxy?.display_name ?? "your region"}
</span>
@@ -1,5 +1,5 @@
import Link from "@mui/material/Link";
import type { ConnectionLog } from "api/typesGenerated";
import { Link } from "components/Link/Link";
import type { FC, ReactNode } from "react";
import { Link as RouterLink } from "react-router";
import { connectionTypeToFriendlyName } from "utils/connection";
@@ -62,11 +62,10 @@ export const ConnectionLogDescription: FC<ConnectionLogDescriptionProps> = ({
<span>
{user ? user.username : "Unauthenticated user"} {actionText} in{" "}
{isOwnWorkspace ? "their" : `${workspace_owner_username}'s`}{" "}
<Link
component={RouterLink}
to={`/@${workspace_owner_username}/${workspace_name}`}
>
<strong>{workspace_name}</strong>
<Link asChild showExternalIcon={false} className="text-base">
<RouterLink to={`/@${workspace_owner_username}/${workspace_name}`}>
<strong>{workspace_name}</strong>
</RouterLink>
</Link>{" "}
workspace
</span>
@@ -81,11 +80,10 @@ export const ConnectionLogDescription: FC<ConnectionLogDescriptionProps> = ({
return (
<span>
{friendlyType} session to {workspace_owner_username}'s{" "}
<Link
component={RouterLink}
to={`/@${workspace_owner_username}/${workspace_name}`}
>
<strong>{workspace_name}</strong>
<Link asChild showExternalIcon={false} className="text-base">
<RouterLink to={`/@${workspace_owner_username}/${workspace_name}`}>
<strong>{workspace_name}</strong>
</RouterLink>
</Link>{" "}
workspace{" "}
</span>
@@ -1,8 +1,6 @@
import type { CSSObject, Interpolation, Theme } from "@emotion/react";
import Link from "@mui/material/Link";
import type { ConnectionLog } from "api/typesGenerated";
import { Avatar } from "components/Avatar/Avatar";
import { Stack } from "components/Stack/Stack";
import { Link } from "components/Link/Link";
import { StatusPill } from "components/StatusPill/StatusPill";
import { TableCell } from "components/Table/Table";
import { TimelineEntry } from "components/Timeline/TimelineEntry";
@@ -38,18 +36,9 @@ export const ConnectionLogRow: FC<ConnectionLogRowProps> = ({
data-testid={`connection-log-row-${connectionLog.id}`}
clickable={false}
>
<TableCell css={styles.connectionLogCell}>
<Stack
direction="row"
alignItems="center"
css={styles.connectionLogHeader}
tabIndex={0}
>
<Stack
direction="row"
alignItems="center"
css={styles.connectionLogHeaderInfo}
>
<TableCell className="!p-0 border-0">
<div className="flex flex-row items-center gap-4 py-4 px-8">
<div className="flex flex-row items-center gap-4 flex-1">
{/* Non-web logs don't have an associated user, so we
* display a default network icon instead */}
{connectionLog.web_info?.user ? (
@@ -63,27 +52,17 @@ export const ConnectionLogRow: FC<ConnectionLogRowProps> = ({
</Avatar>
)}
<Stack
alignItems="center"
css={styles.fullWidth}
justifyContent="space-between"
direction="row"
>
<Stack
css={styles.connectionLogSummary}
direction="row"
alignItems="baseline"
spacing={1}
>
<div className="flex flex-row items-center justify-between w-full">
<div className="flex flex-row items-baseline gap-2 text-base">
<ConnectionLogDescription connectionLog={connectionLog} />
<span css={styles.connectionLogTime}>
<span className="text-content-secondary text-xs">
{new Date(connectionLog.connect_time).toLocaleTimeString()}
{connectionLog.ssh_info?.disconnect_time &&
`${new Date(connectionLog.ssh_info.disconnect_time).toLocaleTimeString()}`}
</span>
</Stack>
</div>
<Stack direction="row" alignItems="center">
<div className="flex flex-row items-center gap-4">
{code !== undefined && (
<StatusPill
code={code}
@@ -93,29 +72,31 @@ export const ConnectionLogRow: FC<ConnectionLogRowProps> = ({
)}
<Tooltip>
<TooltipTrigger asChild>
<InfoIcon
css={(theme) => ({
color: theme.palette.info.light,
})}
/>
<InfoIcon className="text-content-link" />
</TooltipTrigger>
<TooltipContent side="bottom">
<div css={styles.connectionLogInfoTooltip}>
<div className="flex flex-col gap-2">
{connectionLog.ip && (
<div>
<h4 css={styles.connectionLogInfoheader}>IP:</h4>
<h4 className="m-0 text-content-primary text-sm leading-[150%] font-semibold">
IP:
</h4>
<div>{connectionLog.ip}</div>
</div>
)}
{userAgent?.os.name && (
<div>
<h4 css={styles.connectionLogInfoheader}>OS:</h4>
<h4 className="m-0 text-content-primary text-sm leading-[150%] font-semibold">
OS:
</h4>
<div>{userAgent.os.name}</div>
</div>
)}
{userAgent?.browser.name && (
<div>
<h4 css={styles.connectionLogInfoheader}>Browser:</h4>
<h4 className="m-0 text-content-primary text-sm leading-[150%] font-semibold">
Browser:
</h4>
<div>
{userAgent.browser.name} {userAgent.browser.version}
</div>
@@ -123,21 +104,26 @@ export const ConnectionLogRow: FC<ConnectionLogRowProps> = ({
)}
{connectionLog.organization && (
<div>
<h4 css={styles.connectionLogInfoheader}>
<h4 className="m-0 text-content-primary text-sm leading-[150%] font-semibold">
Organization:
</h4>
<Link
component={RouterLink}
to={`/organizations/${connectionLog.organization.name}`}
asChild
showExternalIcon={false}
className="px-0 text-xs"
>
{connectionLog.organization.display_name ||
connectionLog.organization.name}
<RouterLink
to={`/organizations/${connectionLog.organization.name}`}
>
{connectionLog.organization.display_name ||
connectionLog.organization.name}
</RouterLink>
</Link>
</div>
)}
{connectionLog.ssh_info?.disconnect_reason && (
<div>
<h4 css={styles.connectionLogInfoheader}>
<h4 className="m-0 text-content-primary text-sm leading-[150%] font-semibold">
Close Reason:
</h4>
<div>{connectionLog.ssh_info?.disconnect_reason}</div>
@@ -146,54 +132,11 @@ export const ConnectionLogRow: FC<ConnectionLogRowProps> = ({
</div>
</TooltipContent>
</Tooltip>
</Stack>
</Stack>
</Stack>
</Stack>
</div>
</div>
</div>
</div>
</TableCell>
</TimelineEntry>
);
};
const styles = {
connectionLogCell: {
padding: "0 !important",
border: 0,
},
connectionLogHeader: {
padding: "16px 32px",
},
connectionLogHeaderInfo: {
flex: 1,
},
connectionLogSummary: (theme) => ({
...(theme.typography.body1 as CSSObject),
fontFamily: "inherit",
}),
connectionLogTime: (theme) => ({
color: theme.palette.text.secondary,
fontSize: 12,
}),
connectionLogInfoheader: (theme) => ({
margin: 0,
color: theme.palette.text.primary,
fontSize: 14,
lineHeight: "150%",
fontWeight: 600,
}),
connectionLogInfoTooltip: {
display: "flex",
flexDirection: "column",
gap: 8,
},
fullWidth: {
width: "100%",
},
} satisfies Record<string, Interpolation<Theme>>;
@@ -1,7 +1,6 @@
import type { Interpolation, Theme } from "@emotion/react";
import Drawer from "@mui/material/Drawer";
import IconButton from "@mui/material/IconButton";
import { visuallyHidden } from "@mui/utils";
import { JobError } from "api/queries/templates";
import type { TemplateVersion } from "api/typesGenerated";
import { Button } from "components/Button/Button";
@@ -46,7 +45,7 @@ export const BuildLogsDrawer: FC<BuildLogsDrawerProps> = ({
<h3 css={styles.title}>Creating template...</h3>
<IconButton size="small" onClick={drawerProps.onClose}>
<XIcon className="size-icon-sm" />
<span style={visuallyHidden}>Close build logs</span>
<span className="sr-only">Close build logs</span>
</IconButton>
</header>
@@ -1,4 +1,3 @@
import { visuallyHidden } from "@mui/utils";
import type { AuthMethods } from "api/typesGenerated";
import { Button } from "components/Button/Button";
import { ExternalImage } from "components/ExternalImage/ExternalImage";
@@ -80,7 +79,7 @@ const OidcIcon: FC<OidcIconProps> = ({ iconUrl }) => {
return (
<>
<img alt="" src={iconUrl} aria-labelledby={oidcId} />
<div id={oidcId} css={{ ...visuallyHidden }}>
<div id={oidcId} className="sr-only">
Open ID Connect
</div>
</>
@@ -1,4 +1,3 @@
import Link from "@mui/material/Link";
import type { WorkspaceAgent } from "api/typesGenerated";
import {
Alert,
@@ -6,6 +5,8 @@ import {
type AlertProps,
} from "components/Alert/Alert";
import { Button } from "components/Button/Button";
import { Link } from "components/Link/Link";
import { RefreshCcwIcon } from "lucide-react";
import { type FC, useEffect, useRef, useState } from "react";
import { cn } from "utils/cn";
import { docs } from "utils/docs";
@@ -205,6 +206,7 @@ const RefreshSessionButton: FC = () => {
window.location.reload();
}}
>
<RefreshCcwIcon />
{isRefreshing ? "Refreshing session..." : "Refresh session"}
</Button>
);
@@ -1,5 +1,4 @@
import { useTheme } from "@emotion/react";
import visuallyHidden from "@mui/utils/visuallyHidden";
import { richParameters } from "api/queries/templates";
import { workspaceBuildParameters } from "api/queries/workspaceBuilds";
import type {
@@ -69,7 +68,7 @@ export const BuildParametersPopover: FC<BuildParametersPopoverProps> = ({
className="min-w-fit"
>
<ChevronDownIcon />
<span css={{ ...visuallyHidden }}>{label}</span>
<span className="sr-only">{label}</span>
</TopbarButton>
</PopoverTrigger>
<PopoverContent
@@ -1,6 +1,5 @@
import type { Interpolation, Theme } from "@emotion/react";
import Link, { type LinkProps } from "@mui/material/Link";
import { visuallyHidden } from "@mui/utils";
import { getErrorMessage } from "api/errors";
import {
updateDeadline,
@@ -218,7 +217,7 @@ const AutostopDisplay: FC<AutostopDisplayProps> = ({
}}
>
<MinusIcon />
<span style={visuallyHidden}>Subtract 1 hour from deadline</span>
<span className="sr-only">Subtract 1 hour from deadline</span>
</Button>
</TooltipTrigger>
<TooltipContent side="bottom">
@@ -236,7 +235,7 @@ const AutostopDisplay: FC<AutostopDisplayProps> = ({
}}
>
<PlusIcon />
<span style={visuallyHidden}>Add 1 hour to deadline</span>
<span className="sr-only">Add 1 hour to deadline</span>
</Button>
</TooltipTrigger>
<TooltipContent side="bottom">Add 1 hour to deadline</TooltipContent>
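The recurring change across these hunks swaps MUI's `visuallyHidden` style object for Tailwind's `sr-only` utility class. Both hide content visually while keeping it available to screen readers; a minimal sketch of the styles involved, with values mirroring the two libraries' documented output (reproduced here as plain objects for illustration, not imported from either library):

```typescript
// Sketch: approximate CSS that MUI's `visuallyHidden` applies inline,
// and that Tailwind generates for the `sr-only` class. Values are taken
// from the libraries' documented behavior; this is illustrative only.
const visuallyHiddenStyle = {
  position: "absolute",
  width: "1px",
  height: "1px",
  margin: "-1px",
  padding: 0,
  overflow: "hidden",
  clip: "rect(0 0 0 0)",
  whiteSpace: "nowrap",
  border: 0,
};

const srOnlyStyle = {
  position: "absolute",
  width: "1px",
  height: "1px",
  margin: "-1px",
  padding: 0,
  overflow: "hidden",
  clip: "rect(0, 0, 0, 0)",
  whiteSpace: "nowrap",
  borderWidth: 0,
};

// The two are functionally equivalent, which is why the diff can replace
// `css={{ ...visuallyHidden }}` / `style={visuallyHidden}` with
// `className="sr-only"` without changing accessibility behavior.
console.log(visuallyHiddenStyle.position === srOnlyStyle.position); // both "absolute"
```

Using the utility class also drops the `@mui/utils` import from each file, consistent with the broader Emotion-to-Tailwind migration these commits carry out.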