Compare commits


579 Commits

Author SHA1 Message Date
Colin Adler 5137f715f0 chore: add v2.6.1 changelog 2024-03-04 18:28:25 +00:00
Colin Adler 1171ce7add Merge pull request from GHSA-7cc2-r658-7xpf
This fixes a vulnerability with the `CODER_OIDC_EMAIL_DOMAIN` option,
where users with a superset of the allowed email domain would be allowed
to log in. For example, given `CODER_OIDC_EMAIL_DOMAIN=google.com`, a
user would be permitted entry if their email domain was
`colin-google.com`.

(cherry picked from commit 4439a920e4)
2024-03-04 18:26:39 +00:00
Ben Potter b3e3521274 docs: add v2.6.0 changelog (#11320)
* docs: add v2.6.0 changelog

* fmt
2023-12-21 22:33:13 +00:00
Kayla Washburn 029c92fede fix: fix name for external auth connections (#11318) 2023-12-21 15:27:16 -07:00
Kayla Washburn db71c0fa54 refactor: remove theme "color palettes" (#11314) 2023-12-21 14:45:54 -07:00
Asher 5cfa34b31e feat: add OAuth2 applications (#11197)
* Add database tables for OAuth2 applications

These are applications that will be able to use OAuth2 to get an API key
from Coder.

* Add endpoints for managing OAuth2 applications

These let you add, update, and remove OAuth2 applications.

* Add frontend for managing OAuth2 applications
2023-12-21 21:38:42 +00:00
Kayla Washburn e044d3b752 fix: add additional theme colors (#11313) 2023-12-21 12:59:39 -07:00
Jon Ayers 0b7d68dc3f chore: remove template_update_policies experiment (#11250) 2023-12-21 13:39:33 -06:00
Muhammad Atif Ali 5b071f4d94 feat(examples/templates): add GCP VM devcontainer template (#11246) 2023-12-21 13:01:10 +00:00
Spike Curtis 52b87a28b0 fix: stop printing warnings on external provisioner daemon command (#11309)
fixes #11307
2023-12-21 16:55:34 +04:00
Spike Curtis db9104c02e fix: avoid panic on nil connection (#11305)
Related to https://github.com/coder/coder/actions/runs/7286675441/job/19855871305

Fixes a panic if the listener returns an error, which can obfuscate the underlying problem and cause unrelated tests to be marked failed.
2023-12-21 14:26:11 +04:00
Steven Masley fe867d02e0 fix: correct perms for forbidden error in TemplateScheduleStore.Load (#11286)
* chore: TemplateScheduleStore.Load() throwing forbidden error
* fix: workspace agent scope to include template
2023-12-20 11:38:49 -06:00
Kira Pilot 20dff2aa5d added react query dev tools (#11293) 2023-12-20 10:08:51 -05:00
Ben Potter 19e4a86711 docs: add guidelines for debugging group sync (#11296)
* docs: add guidelines for debugging group sync

* fmt
2023-12-20 12:52:07 +00:00
Bruno Quaresma e2e56d7d4f refactor(site): move workspace schedule controls to its own component (#11281) 2023-12-20 08:46:18 -03:00
Cian Johnston bfc588955c ci: make test-go-pg depend on sqlc-vet (#11288) 2023-12-20 08:47:47 +00:00
Muhammad Atif Ali 3ffe7f55aa feat(examples/templates): add aws vm devcontainer template (#11248)
* feat(examples/templates): add aws vm devcontainer template

* Create README.md

* add code-server

* fix code-server

* `make fmt`

* Add files via upload

* Update README.md

* fix typo and persist workspace

* always land in the repo directory
2023-12-20 08:24:45 +03:00
Kayla Washburn 97f7a35a47 feat: add light theme (#11266) 2023-12-19 17:03:00 -07:00
Bruno Quaresma e0d34ca6f7 fix(site): fix error when loading workspaces with dormant (#11291) 2023-12-19 20:42:07 -03:00
Steven Masley 24080b121c feat: enable csrf token header (#11283)
* feat: enable csrf token header

* Exempt external auth requests
* ensure dev server bypasses CSRF
* external auth is just get requests
* Add some more routes
* Extra assurance nothing breaks
2023-12-19 15:42:05 -06:00
Steven Masley fbda21a9f2 feat: move moons experiment to ga (released) (#11285)
* feat: release moons experiment as ga
2023-12-19 14:40:22 -06:00
Steven Masley e8be092af0 chore: add sqlc push action on releases (#11171)
* add sqlc push action on releases
* Make sqlc push optional
2023-12-19 20:31:55 +00:00
Steven Masley c1451ca4da chore: implement yaml parsing for external auth configs (#11268)
* chore: yaml parsing for external auth configs
* Also unmarshal and check the output again
2023-12-19 18:09:45 +00:00
dependabot[bot] 016b3ef5a2 chore: bump golang.org/x/crypto from 0.15.0 to 0.17.0 (#11274)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.15.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.15.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 20:52:43 +03:00
Cian Johnston d2d7628522 fix(enterprise/cli): add CODER_PROVISIONER_DAEMON_LOG_* options (#11279)
- Extracts cli.BuildLogger to clilog package
- Updates existing usage of cli.BuildLogger and removes it
- Use clilog to initialize provisionerd logger
2023-12-19 16:49:50 +00:00
Bruno Quaresma 7c4fbe5bae refactor(site): make HelpTooltip easier to reuse and compose (#11242) 2023-12-19 10:43:23 -03:00
Spike Curtis f2606a78dd fix: avoid converting nil node
fixes: #11276
2023-12-19 13:38:15 +04:00
Stephen Kirby 83e1349c2c moved docker installation warning to install/docker (#11273) 2023-12-18 18:19:20 -06:00
MarkE 280d38d4b8 added UI as Dashboard synonym (#11271) 2023-12-18 17:13:07 -06:00
Kayla Washburn 3ab4800a18 chore: clean up lint (#11270) 2023-12-18 14:59:39 -07:00
Bruno Quaresma e84d89353f fix(site): fix template editor filetree navigation (#11260)
Close https://github.com/coder/coder/issues/11203
2023-12-18 14:21:24 -03:00
Cian Johnston ff61475239 fix(coderd/provisionerdserver): use s.timeNow (#11267) 2023-12-18 17:11:50 +00:00
Steven Masley c35b560c87 chore: fix flake, use time closer to actual test (#11240)
* chore: fix flake, use time closer to actual test

The tests were queued, and the autostart time was being set
to the time the table was created, not when the test was actually
being run. This diff was causing failures in CI
2023-12-18 10:55:46 -06:00
Cian Johnston 213b768785 feat(coderd): insert provisioner daemons (#11207)
* Adds UpdateProvisionerDaemonLastSeenAt
* Adds heartbeat to provisioner daemons
* Inserts provisioner daemons to database upon start
* Ensures TagOwner is an empty string and not nil
* Adds COALESCE() in idx_provisioner_daemons_name_owner_key
2023-12-18 16:44:52 +00:00
Steven Masley a6901ae2c5 chore: fix race in cron close behavior (TestAgent_WriteVSCodeConfigs) (#11243)
* chore: add unit test to exercise flake
* Implement a *fix for cron stop() before run()

This fix still has a race condition. I do not see a clean solution
without modifying the cron library. The cron library uses a boolean
to indicate running, and that boolean needs to be set to "true"
before we call "Close()". Or "Close()" should prevent "Run()"
from doing anything.

In either case, this solves the issue for a niche unit test bug
in which the test finishes, calling Close(), before there was
an opportunity to start the goroutine. It probably isn't worth
a lot of time investment, and this fix will suffice
2023-12-18 09:26:40 -06:00
Jon Ayers 56cbd47082 chore: fix TestWorkspaceAutobuild/DormancyThresholdOK flake (#11251) 2023-12-18 09:23:06 -06:00
Muhammad Atif Ali 45e9d93d37 chore: remove unused input from deploy-pr workflow (#11259) 2023-12-18 17:32:53 +03:00
Muhammad Atif Ali 5647e87207 ci: drop chocolatey from ci (#11245) 2023-12-18 17:31:35 +03:00
Dean Sheather 307186325f fix: avoid db import in slim builds (#11258) 2023-12-19 00:09:22 +10:00
dependabot[bot] 28a0242c27 ci: bump the github-actions group with 4 updates (#11256)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-18 13:30:18 +00:00
Dean Sheather e46431078c feat: add AgentAPI using DRPC (#10811)
Co-authored-by: Spike Curtis <spike@coder.com>
2023-12-18 22:53:28 +10:00
Cian Johnston eb781751b8 ci: update flux to 2.2.1 (#11253) 2023-12-18 09:29:46 +00:00
Muhammad Atif Ali 838ab8de7e docs: fix a broken link (#11254) 2023-12-18 09:28:55 +00:00
Ben Potter 2e86b76fb8 docs: improve structure for example templates (#9842)
Co-authored-by: Kyle Carberry <kyle@carberry.com>
Co-authored-by: Muhammad Atif Ali <atif@coder.com>
Co-authored-by: Muhammad Atif Ali <me@matifali.dev>
2023-12-17 17:05:13 +03:00
Steven Masley 3f6096b0d7 chore: unit test to enforce authorized queries match args (#11211)
* chore: unit test to enforce authorized queries match args
* Also check querycontext arguments
2023-12-15 20:31:07 +00:00
Garrett Delfosse 7924bb2a56 feat!: move workspace renames behind flag, disable by default (#11189) 2023-12-15 13:38:47 -05:00
Steven Masley e63de9a259 chore: enforcement of dbauthz tests was broken (#11218)
* chore: enforcement of dbauthz tests was broken

Implemented missing tests to catch back up

---------

Co-authored-by: Cian Johnston <cian@coder.com>
2023-12-15 18:30:21 +00:00
Stephen Kirby 0801760956 docs: add guides section (#11199)
* setup manifest

* added okta guide from steven M

* improved index by adding children

* changed icon to notes.svg

* added meta guide, fixed profile photo fmt
2023-12-15 11:10:41 -06:00
Ravindra Shinde a495952349 Upgrade code-server version to 4.19.1 (#11233) 2023-12-15 14:21:07 +00:00
Marcin Tojek 58c2ce17da refactor(cli): load template variables (#11234) 2023-12-15 14:55:24 +01:00
Cian Johnston fa91992976 ci: add audit docs gen dependency on db gen (#11231)
Audit docs gen depends on queries.sql.go, so add an explicit dependency
2023-12-15 11:49:19 +00:00
Marcin Tojek 89d8a293f0 fix: tar: do not archive .tfvars (#11208) 2023-12-15 11:15:12 +01:00
Spike Curtis 211e59bf65 feat: add tailnet v2 API support to coordinate endpoint (#11228)
closes #10532

Adds v2 support to the /coordinate endpoint via a query parameter.

v1 already has test cases, and we haven't implemented v2 at the client yet, so the only new test case is an unsupported version.
2023-12-15 14:10:24 +04:00
Cian Johnston a41cbb0f03 chore(dogfood): align Terraform version to that of dockerfile.base (#11227) 2023-12-15 10:02:59 +00:00
Dean Sheather 1e49190e12 feat: add server flag to disable user custom quiet hours (#11124) 2023-12-15 19:33:51 +10:00
Spike Curtis a58e4febb9 feat: add tailnet v2 Service and Client (#11225)
Part of #10532

Adds a tailnet ClientService that accepts a net.Conn and serves v1 or v2 of the tailnet API.

Also adds a DRPCService that implements the DRPC interface for the v2 API.  This component is within the ClientService, but needs to be reusable and exported so that we can also embed it in the Agent API.

Finally, includes a NewDRPCClient function that takes a net.Conn and runs dRPC in yamux over it on the client side.
2023-12-15 12:48:39 +04:00
Spike Curtis 9a4e1100fa chore: move drpc transport tools to codersdk/drpc (#11224)
Part of #10532

DRPC transport over yamux and in-mem pipes was previously only used on the provisioner APIs, but now will also be used in tailnet.  Moved to subpackage of codersdk to avoid import loops.
2023-12-15 12:41:39 +04:00
Dean Sheather b36071c6bb feat: allow templates to specify max_ttl or autostop_requirement (#10920) 2023-12-15 18:27:56 +10:00
Spike Curtis 30f032d282 feat: add tailnet ValidateVersion (#11223)
Part of #10532

Adds a method to validate a requested version of the tailnet API
2023-12-15 11:49:30 +04:00
Spike Curtis ad3fed72bc chore: rename Coordinator to CoordinatorV1 (#11222)
Renames the tailnet.Coordinator to represent both v1 and v2 APIs, so that we can use this interface for the main atomic pointer.

Part of #10532
2023-12-15 11:38:12 +04:00
Spike Curtis 545cb9a7cc fix: wait for coordinator in Test_agentIsLegacy (#11214)
Fixes flake https://github.com/coder/coder/runs/19639217635

AGPL coordinator used to process node updates for single_tailnet synchronously, but it's been refactored to process async, so in this test we need to wait for it to be processed.
2023-12-15 07:21:18 +04:00
Ben Potter e6e65fdc64 docs: add v2.5.1 changelog (#11220)
* docs: add v2.5.1 changelog

* fix typo
2023-12-14 17:35:36 -06:00
Colin Adler 4672700ef6 chore: add additional fields to license telemetry (#11173)
This sends the email address the license was issued to, and whether or not it's a trial, in the telemetry payload. It's a bit janky since the license parsing is all enterprise-licensed.
2023-12-14 15:52:52 -06:00
Jon Ayers 06394a5b8c Revert "fix: prevent data race when mutating tags (#11200)" (#11216)
This reverts commit 82f7b0cef4.
2023-12-14 12:37:55 -06:00
Kayla Washburn 81ed112cd3 fix: fix auto theme (#11215) 2023-12-14 11:31:42 -07:00
Spike Curtis fad457420b fix: copy StringMap on insert and query in dbmem (#11206)
Addresses the issue in #11185 for the StringMap datatype.

There are other slice data types in our database package that also need to be fixed, but that'll be a different PR
2023-12-14 22:23:29 +04:00
Bruno Quaresma 32c93a887e fix(site): fix initial body background color 2023-12-14 18:15:25 +00:00
Bruno Quaresma 43411d20ba fix(site): fix pending color on dark blue theme (#11212) 2023-12-14 15:08:53 -03:00
Kayla Washburn 133dc66143 feat: add a theme picker (#11140) 2023-12-14 10:38:44 -07:00
Bruno Quaresma 0cd4842d18 fix(site): fix pending indicator color (#11209) 2023-12-14 11:30:40 -03:00
Cian Johnston df7ed18e1b chore(coderd/autobuild): wait for active template version and inactive template version (#11210) 2023-12-14 13:58:57 +00:00
Cian Johnston 5b0e6bfa2a feat(coderd/database): add api_version to provisioner_daemons table (#11204)
Adds column api_version to the provisioner_daemons table.
This is distinct from the coderd version, and is used to handle breaking changes in the provisioner daemon API.
2023-12-14 12:52:41 +00:00
Muhammad Atif Ali b779655f01 ci: fix syntax for ipv6 address in fly.io wsproxies (#11205) 2023-12-14 15:26:43 +03:00
Jon Ayers 82f7b0cef4 fix: prevent data race when mutating tags (#11200) 2023-12-14 08:56:59 +00:00
Colin Adler eb81fcf1e1 fix: lower amount of cached timezones for deployment daus (#11196)
Updates https://github.com/coder/customers/issues/384

This should help alleviate some pressure, but doesn't really fix the
root cause. See above issue for more details.
2023-12-13 16:50:29 -06:00
Stephen Kirby a3432b4265 docs: add faqs from sharkymark (#11168)
* added sharkymark FAQs page

* make fmt

* fixed typos for link

* changed FAQs icon to (i)

* satisfied review

* make fmt

* added docs links for coder_app, CODER_ACCESS_URL

* removed mentions of mark

* fixed some minor code formatting issues

* fixed numbered bullets rendering, make fmt
2023-12-13 15:56:11 -06:00
Muhammad Atif Ali c3eb68a585 Update CODER_WILDCARD_ACCESS_URL in fly-wsproxies configuration files (#11195) 2023-12-13 21:43:53 +00:00
Muhammad Atif Ali d82ed008f2 ci: revert fly proxies to shared cpu type (#11194) 2023-12-13 21:15:56 +00:00
Muhammad Atif Ali 3924b294fb ci: bump memory to 1024 MB for fly.io proxies (#11193)
* Update paris-coder.toml

* Update sao-paulo-coder.toml

* Update sydney-coder.toml
2023-12-13 20:03:46 +00:00
Muhammad Atif Ali 12f728189c ci: add wildcard support to fly.io wsproxies (#11188)
* ci: add wildcard support to fly.io wsproxies

* Update sao-paulo-coder.toml

* Update sydney-coder.toml

* Update paris-coder.toml

* Apply suggestions from code review

Co-authored-by: Dean Sheather <dean@deansheather.com>

* Update .github/fly-wsproxies/sao-paulo-coder.toml

Co-authored-by: Dean Sheather <dean@deansheather.com>

* Update sao-paulo-coder.toml

* Update sydney-coder.toml

---------

Co-authored-by: Dean Sheather <dean@deansheather.com>
2023-12-13 22:44:04 +03:00
Steven Masley b7bdb17460 feat: add metrics to workspace agent scripts (#11132)
* push startup script metrics to agent
2023-12-13 11:45:43 -06:00
Steven Masley 41ed581460 chore: include build version header on subdomain apps (#11172)
Idk why this was not the case before; this is very helpful to have
2023-12-13 11:45:27 -06:00
Marcin Tojek fd43985e94 fix: nix: switch to go1.21.5 (#11183) 2023-12-13 14:41:18 +01:00
Muhammad Atif Ali c60c75c833 ci: do not rebuild but use artifacts from the build job (#11180) 2023-12-13 12:46:22 +00:00
Marcin Tojek f2a91157a9 fix: update nix to include sqlc v1.24.0 (#11182) 2023-12-13 13:35:02 +01:00
Cian Johnston 4f7ae6461b feat(coderd/database): add UpsertProvisionerDaemons query (#11178)
Co-authored-by: Marcin Tojek <marcin@coder.com>
2023-12-13 12:31:40 +00:00
Marcin Tojek ef4d1b68e1 test: insights metrics: verify plugin usage (#11156) 2023-12-13 10:46:52 +01:00
Dean Sheather 8b8a763ca9 chore: use flux 2.2.0 (#11174) 2023-12-13 09:26:48 +00:00
Spike Curtis bf3b35b1e2 fix: stop logging context Canceled as error (#11177)
fixes #11166 and a related log that could have the same problem
2023-12-13 13:08:30 +04:00
Spike Curtis 43ba3146a9 feat: add test case for BlockDirect + listening ports (#11152)
Adds a test case for #10391 with single tailnet out of experimental
2023-12-13 12:28:09 +04:00
Steven Masley 6800fc8477 chore: bump go (->v1.21.5) and sqlc (->v1.24.0) to new versions (#11170) 2023-12-12 18:50:23 -06:00
Steven Masley 6b4d908e7e chore: makefile set sqlc-vet to .Phony (#11169) 2023-12-12 22:55:13 +00:00
Steven Masley e52d848d05 chore: validate queries using sqlc-vet in github actions (#11163) 2023-12-12 15:53:26 -06:00
Steven Masley dba0dfa859 chore: correct 500 -> 404 on workspace agent mw (#11129)
* chore: correct 500 -> 404
2023-12-12 15:14:32 -06:00
Steven Masley 0181e036f6 chore: remove unused query failing to prepare (#11167) 2023-12-12 15:02:15 -06:00
Ammar Bandukwala 19c0cfdabf chore(provisionersdk): add test for not following symlinks (#11165) 2023-12-12 14:44:50 -06:00
Cian Johnston 2471f3b9a8 ci: set flux version to 2.1.2 (#11164) 2023-12-12 20:17:01 +00:00
Kayla Washburn f67c5cf72b fix: only show orphan option while deleting failed workspaces (#11161) 2023-12-12 11:18:04 -07:00
Kayla Washburn 689da5b7c1 feat(site): improve bulk delete flow (#11093) 2023-12-12 10:14:28 -07:00
sempie 007b2b8db0 docs: add text to docs mentioning appearance settings for oidc sign-on page (#11159)
* add text to docs mentioning appearance settings for oidc sign-on page
2023-12-12 11:33:44 -05:00
Ben Potter cab8ffa54a docs: add v2.5.0 changelog (#11139)
* docs: add v2.5.0 changelog

* fix typos

* Apply suggestions from code review

* changes from feedback

* more fixes

* Update docs/changelogs/v2.5.0.md

Co-authored-by: Muhammad Atif Ali <atif@coder.com>

* Update docs/changelogs/v2.5.0.md

* fmt

* updates

---------

Co-authored-by: Muhammad Atif Ali <atif@coder.com>
2023-12-12 09:52:11 -06:00
Mathias Fredriksson b32a0a9af6 fix(go.mod): switch to sftp fork to fix file upload permissions (#11157)
Fixes #6685
Upstream https://github.com/pkg/sftp/pull/567
Related https://github.com/mutagen-io/mutagen/issues/459
2023-12-12 17:42:03 +02:00
Jon Ayers 41dbe7de4e fix: use correct permission when determining orphan deletion privileges (#11143) 2023-12-12 08:24:04 -06:00
Cian Johnston 8afbc8f7f5 chore(site): update test entities (#11155) 2023-12-12 13:03:37 +00:00
Spike Curtis edeb9bb42a fix: appease linter on darwin (#11154)
Fixing up some linting errors that show up on Darwin, but not in CI.
2023-12-12 17:02:28 +04:00
Cian Johnston 2883cad6ad fix(coderd/autobuild): wait for template version job in TestExecutorInactiveWorkspace (#11150) 2023-12-12 12:43:02 +00:00
Muhammad Atif Ali dde21cebcc chore(dogfood): use go 1.20.11 to match CI (#11153) 2023-12-12 11:45:28 +00:00
Cian Johnston b02796655e fix(coderd/database): remove column updated_at from provisioner_daemons table (#11108) 2023-12-12 11:19:28 +00:00
Cian Johnston 197cd935cf chore(Makefile): use linter version from dogfood Dockerfile (#11147)
* chore(Makefile): use golangci-lint version from dogfood Dockerfile

* chore(dogfood/Dockerfile): update golangci-lint to latest version

* chore(coderd): address linter complaints
2023-12-12 10:02:32 +00:00
Cian Johnston d07fa9c62f ci: offlinedocs: install protoc (#11148) 2023-12-12 10:00:16 +00:00
Jon Ayers 45c07317c0 docs: add documentation for template update policies (#11145) 2023-12-11 19:05:25 -06:00
Michael Smith 3ce7b2ebe6 fix: remove URL desyncs when trying to search users table (#11144)
* fix: remove URL search params desync

* refactor: clean up payload definition for clarity
2023-12-12 00:45:03 +00:00
Jon Ayers ba3b835339 fix: prevent editing build parameters if template requires active version (#11117)
Co-authored-by: McKayla Washburn <mckayla@hey.com>
2023-12-11 15:54:16 -07:00
Garrett Delfosse b7ea330aea fix: ensure we are talking to coder on first user check (#11130) 2023-12-11 14:27:32 -05:00
Stephen Kirby e37bbe6208 fixed small typo in docs/admin/configure (#11135) 2023-12-11 12:49:28 -06:00
Kayla Washburn 6775a86785 chore: make "users"."avatar_url" NOT NULL (#11112) 2023-12-11 10:09:51 -07:00
Mathias Fredriksson 3e5d292135 feat: add support for coder_env (#11102)
Fixes #10166
2023-12-11 16:10:18 +02:00
Muhammad Atif Ali 4612c28d99 ci: update tj-actions/branch-names action in dogfood.yaml (#11120) 2023-12-11 16:49:53 +03:00
dependabot[bot] 486d1fb697 chore: bump alpine from 3.18.5 to 3.19.0 in /scripts (#11126)
Bumps alpine from 3.18.5 to 3.19.0.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-11 23:24:16 +10:00
dependabot[bot] 6823194683 ci: bump the github-actions group with 7 updates (#11123)
Bumps the github-actions group with 7 updates:

| Package | From | To |
| --- | --- | --- |
| [crate-ci/typos](https://github.com/crate-ci/typos) | `1.16.23` | `1.16.24` |
| [google-github-actions/setup-gcloud](https://github.com/google-github-actions/setup-gcloud) | `1` | `2` |
| [google-github-actions/get-gke-credentials](https://github.com/google-github-actions/get-gke-credentials) | `1` | `2` |
| [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) | `2` | `3` |
| [docker/build-push-action](https://github.com/docker/build-push-action) | `4` | `5` |
| [aquasecurity/trivy-action](https://github.com/aquasecurity/trivy-action) | `0.14.0` | `0.16.0` |
| [actions/stale](https://github.com/actions/stale) | `8.0.0` | `9.0.0` |


Updates `crate-ci/typos` from 1.16.23 to 1.16.24
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.16.23...v1.16.24)

Updates `google-github-actions/setup-gcloud` from 1 to 2
- [Release notes](https://github.com/google-github-actions/setup-gcloud/releases)
- [Changelog](https://github.com/google-github-actions/setup-gcloud/blob/main/CHANGELOG.md)
- [Commits](https://github.com/google-github-actions/setup-gcloud/compare/v1...v2)

Updates `google-github-actions/get-gke-credentials` from 1 to 2
- [Release notes](https://github.com/google-github-actions/get-gke-credentials/releases)
- [Changelog](https://github.com/google-github-actions/get-gke-credentials/blob/main/CHANGELOG.md)
- [Commits](https://github.com/google-github-actions/get-gke-credentials/compare/v1...v2)

Updates `docker/setup-buildx-action` from 2 to 3
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v2...v3)

Updates `docker/build-push-action` from 4 to 5
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v4...v5)

Updates `aquasecurity/trivy-action` from 0.14.0 to 0.16.0
- [Release notes](https://github.com/aquasecurity/trivy-action/releases)
- [Commits](https://github.com/aquasecurity/trivy-action/compare/2b6a709cf9c4025c5438138008beaddbb02086f0...91713af97dc80187565512baba96e4364e983601)

Updates `actions/stale` from 8.0.0 to 9.0.0
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v8.0.0...v9.0.0)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: google-github-actions/setup-gcloud
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: google-github-actions/get-gke-credentials
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: aquasecurity/trivy-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-11 23:21:07 +10:00
Muhammad Atif Ali 2c7ad1c094 ci: ungroup Dockerfile dependabot changes (#11125) 2023-12-11 16:16:28 +03:00
Spike Curtis 8d9157dc35 fix: use provisionerd context when failing job on canceled acquire (#11118)
Spotted during code read. We need to use the provisionerd auth context when failing a job due to a lost provisioner daemon.
2023-12-11 14:52:44 +04:00
Spike Curtis 50575e1a9a fix: use fake local network for port-forward tests (#11119)
Fixes #10979

Testing code that listens on a specific port has created a long battle with flakes.  Previous attempts to deal with this include opening a listener on a port chosen by the OS, then closing the listener, noting the port and starting the test with that port.
This still flakes, notably in macOS which has a proclivity to reuse ports quickly.

Instead of fighting with the chaos that is an OS networking stack, this PR fakes the host networking in tests.

I've taken a small step here, only faking out the Listen() calls that port-forward makes, but I think over time we should be transitioning all networking the CLI does to an abstract interface so we can fake it.  This allows us to run in parallel without flakes and
presents an opportunity to test error paths as well.
2023-12-11 14:51:56 +04:00
Jon Ayers 37f6b38d53 fix: return 403 when rebuilding workspace with require_active_version (#11114) 2023-12-08 23:03:46 -06:00
Bruno Quaresma 8488afa8df chore(site): enable react-query cache (#11113) 2023-12-08 23:58:29 +00:00
Kayla Washburn d8e95001e8 chore: add theme_preference column to users table (#11069) 2023-12-08 21:59:53 +00:00
Kayla Washburn ebd6c1b573 feat(site): bring back dark blue (#11071) 2023-12-08 14:38:35 -07:00
Garrett Delfosse 716759aacf fix: provide helpful error when no login url specified (#11110) 2023-12-08 14:44:40 -05:00
Eric Paulsen 167c759149 docs: add license and template insights prom metrics (#11109)
* docs: add license and template insights prom metrics

* add: coderd_insights_applications_usage_seconds
2023-12-08 14:17:14 -05:00
Garrett Delfosse d8467c11ad fix: handle no memory limit in coder stat mem (#11107) 2023-12-08 12:46:53 -05:00
Spike Curtis 6d66cb246d feat: display 'Deprecated' warning for agents using old API version (#11058)
Fixes #10340
2023-12-08 20:20:44 +04:00
Steven Masley 78517cab52 feat: add group allowlist for oidc (#11070)
* feat: group allow list in OIDC settings
2023-12-08 10:14:19 -06:00
Steven Masley cb89bc1729 feat: restart stopped workspaces on ssh command (#11050)
* feat: autostart workspaces on ssh & port forward

This is opt-out by default. VS Code SSH does not have this behavior
2023-12-08 10:01:13 -06:00
Bruno Quaresma 1f7c63cf1b fix(site): hide ws proxy on menu when disabled (#11101) 2023-12-08 11:47:09 -03:00
Bruno Quaresma 9d8578e0e3 refactor(site): apply minor naming improvements (#11080)
Minor naming and logic improvements to improve readability
2023-12-08 11:46:18 -03:00
Bruno Quaresma 2c7394bb3d refactor(site): change a few names related to workspace actions (#11079) 2023-12-08 13:41:58 +00:00
Cian Johnston 2b19a2369f chore(coderd): move provisionerd tags to provisionersdk (#11100) 2023-12-08 12:10:25 +00:00
Cian Johnston 4ca4736411 ci: reconcile provisionerd as well (#11085) 2023-12-08 09:55:43 +00:00
Marcin Tojek 918a82436e fix: insights: remove time-dependent tests (#11099) 2023-12-08 09:51:18 +00:00
Jon Ayers 02696f2df9 chore: fix flake in TestExecutorAutostopTemplateDisabled (#11096) 2023-12-08 09:02:54 +00:00
Spike Curtis b4ca1d6579 feat: include server agent API version in buildinfo (#11057)
First part of #10340 -- we need this version to compare with agents to tell if they are on a deprecated Agent API version
2023-12-08 12:50:25 +04:00
Muhammad Atif Ali f0969f99ad revert: "chore(dogfood): remove agent_name from jetbrains-ide module" (#11095) 2023-12-08 01:14:37 +00:00
Jon Ayers e73a202aed feat: show dormant workspaces by default (#11053) 2023-12-07 18:09:35 -06:00
Muhammad Atif Ali be31b2e4d7 chore(dogfood): remove agent_name from jetbrains-ide module
This is no longer needed.
Depends on https://github.com/coder/modules/pull/99
2023-12-08 02:34:21 +03:00
Jon Ayers ce49a55f56 chore: update build_reason 'autolock' -> 'dormancy' (#11074) 2023-12-07 17:11:57 -06:00
Steven Masley 8221544514 chore: check if process is nil (#11090)
* chore: check if process is nil

We check if process is nil in the ports_supported file.
This just matches that defensive check; not sure if it can be nil.
2023-12-07 22:23:42 +00:00
Asher dbbf8acc26 fix: track JetBrains connections (#10968)
* feat: implement jetbrains agentssh tracking

Based on tcp forwarding instead of ssh connections

* Add JetBrains tracking to bottom bar
2023-12-07 12:15:54 -09:00
Cian Johnston 51687c74c8 fix(coderd/healthcheck): do not return null regions in RegionsResponse (#11088) 2023-12-07 21:10:12 +00:00
Garrett Delfosse 228cbec99b fix: stop updating agent stats from deleted workspaces (#11026)
Co-authored-by: Steven Masley <stevenmasley@gmail.com>
2023-12-07 13:55:29 -05:00
Cian Johnston 1e349f0d50 feat(cli): allow specifying name of provisioner daemon (#11077)
- Adds a --name argument to provisionerd start
- Plumbs through name to integrated and external provisioners
- Defaults to hostname if not specified for external, hostname-N for integrated
- Adds cliutil.Hostname
2023-12-07 16:59:13 +00:00
Garrett Delfosse 8aea6040c8 fix: use unique workspace owners over unique users (#11044) 2023-12-07 10:53:15 -05:00
Kira Pilot 091fdd6761 fix: redirect unauthorized git users to login screen (#10995)
* fix: redirect to login screen if unauthorized git user

* consolidated language

* fix redirect
2023-12-07 09:19:31 -05:00
Barton Ip 5d2e87f1a7 docs: add warning about Sysbox before installation (#10619)
* Add warning about Sysbox before installation

* Formatting things
2023-12-07 16:58:50 +03:00
Spike Curtis b34ecf1e9e fix: fix deadlock of mappingQuery on context canceled
Fixes #11078

Replace bare channel send with SendCtx so that we properly shut down when the context is canceled.
2023-12-07 17:19:18 +04:00
Marcin Tojek 941e3873a8 fix: implement fake DeleteOldWorkspaceAgentStats (#11076) 2023-12-07 14:08:16 +01:00
Bruno Quaresma c0d68a4c2c fix(site): fix clickable props on the workspace table row (#11072) 2023-12-06 19:50:39 +00:00
dependabot[bot] 567ecca61b chore: bump vite from 4.5.0 to 4.5.1 in /site (#11052)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v4.5.1/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v4.5.1/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-06 22:40:24 +03:00
Bruno Quaresma 667ee41165 refactor(site): improve minor queries and visuals on external auth (#11066) 2023-12-06 16:17:31 -03:00
Bruno Quaresma 8a6bfc9d28 feat(site): do not show health warning when the warning is dismissed (#11068) 2023-12-06 16:06:58 -03:00
Steven Masley 2947b827fb chore: use httpError to allow better error elevation (#11065) 2023-12-06 10:27:40 -06:00
Bruno Quaresma dd01bde9b6 fix(site): fix template editor route (#11063) 2023-12-06 15:59:00 +00:00
Bruno Quaresma 44f9613bf2 feat(site): dismiss health section warnings (#11059) 2023-12-06 12:50:35 -03:00
Bruno Quaresma 2bc11d2e63 fix(site): fetch health data only if has permissions (#11062) 2023-12-06 15:47:58 +00:00
Bruno Quaresma 43488b44ce chore(site): refactor pagination text (#11061) 2023-12-06 12:19:29 -03:00
Steven Masley b376b2cd13 feat: add user/settings page for managing external auth (#10945)
Also add support for unlinking on the coder side to allow reflow.
2023-12-06 08:41:45 -06:00
Marcin Tojek f6891bc465 fix: implement fake DeleteOldWorkspaceAgentLogs (#11042) 2023-12-06 14:31:43 +01:00
Bruno Quaresma 088fd0b904 chore(site): ignore updated at on chromatic (#11060) 2023-12-06 10:19:33 -03:00
Spike Curtis 2c86d0bed0 feat: support v2 Tailnet API in AGPL coordinator (#11010)
Fixes #10529
2023-12-06 15:04:28 +04:00
Cian Johnston 38ed816207 fix(coderd/debug): fix caching issue with dismissed sections (#11051) 2023-12-06 08:38:03 +00:00
Kira Pilot 53453c06a1 fix: display app templates correctly in build preview (#10994)
* fix: appropriately display display_app apps in template build preview

* added display apps to build preview

* added test, consolidated names

* handling empty state
2023-12-05 16:01:40 -05:00
Steven Masley 81a3b36884 feat: add endpoints to list all authed external apps (#10944)
* feat: add endpoints to list all authed external apps

Listing the apps allows users to auth to external apps without going through the create workspace flow.
2023-12-05 14:03:44 -06:00
Cian Johnston feaa9894a4 fix(site/src/api/typesGenerated): generate HealthSection enums (#11049)
Relates to #8971

- Introduces a codersdk.HealthSection enum type
- Refactors existing references using strings to use new HealthSection type
2023-12-05 20:00:27 +00:00
Cian Johnston f66e802fae fix(coderd/debug): putDeploymentHealthSettings: use 204 instead of 304 if not modified (#11048) 2023-12-05 19:06:56 +00:00
Bruno Quaresma 876d448d69 fix(site): fix padding for loader (#11046) 2023-12-05 17:18:31 +00:00
Eric Paulsen 3dcbf63cbe add: document suspended users not consuming seat (#11045) 2023-12-05 12:05:05 -05:00
Bruno Quaresma 0f47b58bfb feat(site): refactor health pages (#11025) 2023-12-05 13:58:51 -03:00
Cian Johnston 2e4e0b2d2c fix(scripts/apitypings): force health.Message and health.Severity to correct types (#11043)
* Force typegen types for some fields of derp health report
* Explicitly allocate slices for RegionReport.{Errors,Warnings} to avoid nulls in API response
2023-12-05 16:31:48 +00:00
Cian Johnston a235644046 fix(codersdk): make codersdk.ProvisionerDaemon.UpdatedAt a codersdk.NullTime (#11037) 2023-12-05 15:40:45 +00:00
Michael Smith fab343a2e9 fix: increase default staleTime for paginated data (#11041)
* fix: update default staleTime for paginated data

* fix: swap cacheTime for staleTime in app-wide query client

* fix: revert cacheTime change

* fix: update debug limit

* fix: apply staleTime to prefetches

* refactor: cleanup code
2023-12-05 14:41:06 +00:00
Muhammad Atif Ali f0b4badf74 ci: add arm64 and amd64 portable binaries to winget (#11030)
* ci: add arm64 and amd64 portable binaries to winget 

This PR updates the `release.yaml` workflow to automate updates for the `arm64` and `x64` zip installers in winget. This has recently been merged into [winget](https://github.com/microsoft/winget-pkgs/pull/129175).

Thanks to @mdanish-kh for the upstream PR.

* fixup!

* remove extra `--urls` flags

* remove architecture override.

`wingetcreate` does not need an architecture override, as it now supports parsing the URL for `amd64` and correctly marking it as the x64 architecture.

Reference:
1. https://github.com/microsoft/winget-create/blob/08baf0e61e62dabcb2487397984fc69fad6a7499/src/WingetCreateCore/Common/PackageParser.cs#L594C56-L594C61
2. PR: https://github.com/microsoft/winget-create/pull/445
3. This has been available since version https://github.com/microsoft/winget-create/releases/tag/v1.5.3.0

* fixup!

* Update release.yaml
2023-12-05 17:06:39 +03:00
Cian Johnston 5fad611020 feat(coderd): add last_seen_at and version to provisioner_daemons table (#11033)
Related to #10676

- Adds columns last_seen_at and version to provisioner_daemons table
- Adds the above to codersdk.ProvisionerDaemons struct
2023-12-05 13:54:38 +00:00
Michael Smith dd1f8331de fix: disable prefetches for audits table (#11040) 2023-12-05 08:49:11 -05:00
Cian Johnston 1b2ed5bc9b ci: add missing go tools to offlinedocs build step (#11034) 2023-12-05 12:03:29 +00:00
Mathias Fredriksson e300b036be feat(scaletest): add greedy agent test to runner (#10559) 2023-12-05 12:37:10 +02:00
Spike Curtis dca8125263 fix: update tailscale to include fix to prevent race (#11032)
fixes #10876
2023-12-05 14:30:19 +04:00
Dean Sheather 695f57f7ff fix: use header flags in wsproxy server (#10985) 2023-12-05 14:13:42 +04:00
Dean Sheather b07b40b346 chore: revert nix dogfood image (#11022)
The nix image isn't used because it doesn't work, and we haven't been
updating our "pre-nix" tag since the changes were made. Reverts back to
being a regular Dockerfile.
2023-12-05 09:02:57 +00:00
Cian Johnston d70f9ea26c chore(docs): apply async suggestions from #10915 (#10976) 2023-12-05 09:01:03 +00:00
Bruno Quaresma dff53d0787 fix(site): fix filter font size (#11028) 2023-12-04 18:17:43 -03:00
Kayla Washburn 185400db11 refactor: remove usage of <Box> and sx (#10702) 2023-12-04 12:09:04 -07:00
Garrett Delfosse 1e6ea6133c fix: pass in time parameter to prevent flakes (#11023)
Co-authored-by: Dean Sheather <dean@deansheather.com>
2023-12-04 12:20:22 -05:00
Marcin Tojek a42b6c185d fix(site): e2e: use click instead of check (#11024) 2023-12-04 18:02:46 +01:00
dependabot[bot] b8e9262c51 chore: bump the scripts-docker group in /scripts with 1 update (#11020)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 10:38:58 -06:00
Garrett Delfosse ccd5e1a749 fix: use database for user creation to prevent flake (#10992) 2023-12-04 11:05:17 -05:00
Steven Masley 2f54f769be feat: allow IDP to return single string for roles/groups claim (#10993)
* feat: allow IDP to return single string instead of array for roles/groups claim

This is to support ADFS
2023-12-04 10:01:45 -06:00
dependabot[bot] 3883d7181d chore: bump the offlinedocs group in /offlinedocs with 6 updates (#11014)
* chore: bump the offlinedocs group in /offlinedocs with 6 updates

Bumps the offlinedocs group in /offlinedocs with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [fs-extra](https://github.com/jprichardson/node-fs-extra) | `11.1.1` | `11.2.0` |
| [react-markdown](https://github.com/remarkjs/react-markdown) | `8.0.3` | `9.0.1` |
| [rehype-raw](https://github.com/rehypejs/rehype-raw) | `6.1.1` | `7.0.0` |
| [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) | `18.18.1` | `18.19.2` |
| [eslint](https://github.com/eslint/eslint) | `8.53.0` | `8.55.0` |
| [typescript](https://github.com/Microsoft/TypeScript) | `5.1.6` | `5.3.2` |


Updates `fs-extra` from 11.1.1 to 11.2.0
- [Changelog](https://github.com/jprichardson/node-fs-extra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jprichardson/node-fs-extra/compare/11.1.1...11.2.0)

Updates `react-markdown` from 8.0.3 to 9.0.1
- [Release notes](https://github.com/remarkjs/react-markdown/releases)
- [Changelog](https://github.com/remarkjs/react-markdown/blob/main/changelog.md)
- [Commits](https://github.com/remarkjs/react-markdown/compare/8.0.3...9.0.1)

Updates `rehype-raw` from 6.1.1 to 7.0.0
- [Release notes](https://github.com/rehypejs/rehype-raw/releases)
- [Commits](https://github.com/rehypejs/rehype-raw/compare/6.1.1...7.0.0)

Updates `@types/node` from 18.18.1 to 18.19.2
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Updates `eslint` from 8.53.0 to 8.55.0
- [Release notes](https://github.com/eslint/eslint/releases)
- [Changelog](https://github.com/eslint/eslint/blob/main/CHANGELOG.md)
- [Commits](https://github.com/eslint/eslint/compare/v8.53.0...v8.55.0)

Updates `typescript` from 5.1.6 to 5.3.2
- [Release notes](https://github.com/Microsoft/TypeScript/releases)
- [Commits](https://github.com/Microsoft/TypeScript/compare/v5.1.6...v5.3.2)

---
updated-dependencies:
- dependency-name: fs-extra
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: offlinedocs
- dependency-name: react-markdown
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: offlinedocs
- dependency-name: rehype-raw
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: offlinedocs
- dependency-name: "@types/node"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: offlinedocs
- dependency-name: eslint
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: offlinedocs
- dependency-name: typescript
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: offlinedocs
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix: install react-gfm v4 and update type signatures

* fix: update link-nesting for a11y/hydration issue

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Parkreiner <michaelsmith@coder.com>
2023-12-04 10:11:01 -05:00
dependabot[bot] 2443a9f861 ci: bump the github-actions group with 2 updates (#11018)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-04 16:57:44 +03:00
sharkymark 676e215a91 chore: path app IDEs (#11007) 2023-12-04 11:22:22 +00:00
Mathias Fredriksson 70cede8f7a test(agent): improve TestAgent_Dial tests (#11013)
Refs #11008
2023-12-04 13:11:30 +02:00
Muhammad Atif Ali b212bd4ac5 chore: deploy workspace proxies on fly.io (#10983)
Co-authored-by: Dean Sheather <dean@deansheather.com>
2023-12-04 12:12:22 +03:00
Spike Curtis dbadae5a9c Revert "chore(helm): gitignore and rm helm chart tarballs from vcs (#10951)" (#11009)
This reverts commit 7f62085a02.
2023-12-04 06:59:06 +00:00
Spike Curtis 0536b58b48 fix: parse username/workspace correctly on coder state push --build (#10974)
Fixes the same issue as #10884 but for state push
2023-12-04 09:58:35 +04:00
Szabolcs Fruhwald baf3bf6b9c feat: add workspace_id, owner_name to agent manifest (#10199)
Co-authored-by: Kyle Carberry <kyle@carberry.com>
Co-authored-by: Atif Ali <atif@coder.com>
2023-12-04 00:41:54 +03:00
Michael Smith 28eca2e53f fix: create centralized PaginationContainer component (#10967)
* chore: add Pagination component, add new test, and update other pagination tests

* fix: add back temp spacing for WorkspacesPageView

* chore: update AuditPage to use Pagination

* chore: update UsersPage to use Pagination

* refactor: move parts of Pagination into WorkspacesPageView

* fix: handle empty states for pagination labels better

* docs: rewrite comment for clarity

* refactor: rename components/properties for clarity

* fix: rename component files for clarity

* chore: add story for PaginationContainer

* chore: rename story for clarity

* fix: handle undefined case better

* fix: update imports for PaginationContainer mocks

* fix: update story values for clarity

* fix: update scroll logic to go to the bottom instead of the top

* fix: update mock setup for test

* fix: update stories

* fix: remove scrolling functionality

* fix: remove deprecated property

* refactor: rename prop

* fix: remove debounce flake
2023-12-02 17:37:59 -05:00
Dean Sheather d9a169556a chore: run deploy job on regular runner 2023-12-02 10:08:33 -08:00
Colin Adler 6b3c4c00a2 fix: UpdateWorkspaceDormantDeletingAt interval out of range (#11000) 2023-12-02 11:47:08 -06:00
Colin Adler 49ed66c7ad chore: remove ALTER TYPE .. ADD VALUE from migration 65 (#10998)
Follow up of https://github.com/coder/coder/pull/10966
2023-12-02 11:40:23 -06:00
Colin Adler cbcf7561e5 chore: remove ALTER TYPE .. ADD VALUE from migration 46 (#10997)
Follow up of https://github.com/coder/coder/pull/10966
2023-12-02 11:38:12 -06:00
Colin Adler 427572199e chore: remove ALTER TYPE .. ADD VALUE from migration 18 (#10996)
Follow up of https://github.com/coder/coder/pull/10966
2023-12-02 11:35:25 -06:00
Dean Sheather c82e878b50 chore: disable legacy dogfood deploy (#10999) 2023-12-03 02:20:19 +10:00
Colin Adler 8e684c8195 feat: run all migrations in a transaction (#10966)
Updates coder/customers#365

This PR updates our migration framework to run all migrations in a single transaction. This is the same behavior we had in v1 and ensures that failed migrations don't bring the whole deployment down. If a migration fails now, it will automatically be rolled back to the previous version, allowing the deployment to continue functioning.
2023-12-01 16:11:10 -06:00
Garrett Delfosse 60d0aa6930 fix: handle 404 on unknown top level routes (#10964) 2023-12-01 12:35:44 -05:00
Bruno Quaresma 2aa79369a2 refactor(site): improve health check page sidebar (#10960) 2023-12-01 12:43:51 -03:00
Cian Johnston 432925df31 ci: make offlinedocs required (#10980) 2023-12-01 14:37:47 +00:00
Mathias Fredriksson 6fe84025aa chore(Makefile): exclude .terraform directories (#10988) 2023-12-01 15:13:51 +02:00
Marcin Tojek 13b89f79df feat: purge old provisioner daemons (#10949) 2023-12-01 12:43:05 +00:00
Dean Sheather 153abd5003 chore: fix build job pt.3 (#10986) 2023-12-01 12:25:06 +00:00
Dean Sheather 122cbaa134 chore: fix build job (#10984) 2023-12-01 12:08:10 +00:00
Dean Sheather 15875a76ae chore: add new deploy job for new dogfood (#10852) 2023-12-01 03:16:49 -08:00
Cian Johnston 9ad96288b2 fix(helm/provisioner): run helm dependency update (#10982) 2023-12-01 10:30:00 +00:00
Cian Johnston 7f62085a02 chore(helm): gitignore and rm helm chart tarballs from vcs (#10951) 2023-12-01 09:52:54 +00:00
Cian Johnston d49bcc93fe fix(docs): remove anchor links from headings in admin/healthcheck.md (#10975)
Relates to #8965

* Fixes offlinedocs that broke from the change in "feat(coderd/healthcheck): add access URL error codes and healthcheck doc" (#10915) by removing the offending anchor links from the page subheadings.
* Makes offlinedocs also conditional on changes to docs
2023-12-01 09:49:18 +00:00
Spike Curtis b267497c6d fix: parse username/workspace correctly on coder state pull --build (#10973)
fixes #10884
2023-12-01 13:03:49 +04:00
Spike Curtis 46d95cb0f0 fix: wait for dial goroutine to complete (#10959)
Fixes flake seen here: https://github.com/coder/coder/runs/19170327767

The goroutine that attempts to dial the socket didn't complete before the test did.  Here we add an explicit wait for it to complete in each run of the loop.
2023-12-01 11:37:32 +04:00
Spike Curtis 812fb95273 fix: prevent connIO from panicking in race between Close and Enqueue (#10948)
Spotted during a code read.  ConnIO unlocks the mutex before attempting to write to the response channel, which could allow another goroutine to call Close() and close the channel, causing a panic.

The fix is to hold the mutex. This won't cause a deadlock because the `select{}` has a `default` case, so we won't block even if the receiver isn't keeping up.
2023-12-01 10:23:29 +04:00
Spike Curtis 612e67a53b feat: add cleanup of lost tailnet peers and tunnels to PGCoordinator (#10939)
Adds the "lost" peer cleanup queries to PGCoordinator, including tests.
2023-12-01 10:13:29 +04:00
dependabot[bot] d9ccd97d36 chore: bump @adobe/css-tools from 4.3.1 to 4.3.2 in /site (#10970)
Bumps [@adobe/css-tools](https://github.com/adobe/css-tools) from 4.3.1 to 4.3.2.
- [Changelog](https://github.com/adobe/css-tools/blob/main/History.md)
- [Commits](https://github.com/adobe/css-tools/commits)

---
updated-dependencies:
- dependency-name: "@adobe/css-tools"
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-01 09:09:33 +03:00
Spike Curtis 571d358e4b feat: add queries to clean lost connections in PGCoordinator (#10938)
Adds cleanup queries to clean out "lost" peer and tunnel state after 24 hours.  We leave this state in the database so that anything trying to connect to the peer can see that it was lost, but clean it up after 24 hours to ensure our table doesn't grow without bounds.
2023-12-01 10:02:30 +04:00
Spike Curtis 0cab6e7763 feat: support graceful disconnect in PGCoordinator (#10937)
Adds support for graceful disconnect to PGCoordinator.  When peers gracefully disconnect, they send a disconnect message.  This triggers the peer to be disconnected from all tunneled peers.

The Multi-Agent Client supports graceful disconnect, since it is in memory and we know that when it is closed, we really mean to disconnect.

The v1 agent and client Websocket connections do not support graceful disconnect, since the v1 protocol doesn't have this feature.  That means that if a v1 peer connects to a v2 peer, when the v1 peer's coordinator connection is closed, the v2 peer will
see it as "lost" since we don't know whether the v1 peer meant to disconnect, or it just lost connectivity to the coordinator.
2023-12-01 09:55:25 +04:00
Jon Ayers 967db2801b chore: refactor ResolveAutostart tests to use dbfake (#10603) 2023-11-30 19:33:04 -06:00
Jon Ayers 12a4b114de fix: fix TestWorkspaceAutobuild/InactiveTTLOK flake (#10965) 2023-11-30 18:29:41 -06:00
Michael Smith d016f93de8 feat: add usePaginatedQuery hook (#10803)
* wip: commit current progress on usePaginatedQuery

* chore: add cacheTime to users query

* chore: update cache logic for UsersPage usersQuery

* wip: commit progress on Pagination

* chore: add function overloads to prepareQuery

* wip: commit progress on usePaginatedQuery

* docs: add clarifying comment about implementation

* chore: remove optional prefetch property from query options

* chore: redefine queryKey

* refactor: consolidate how queryKey/queryFn are called

* refactor: clean up pagination code more

* fix: remove redundant properties

* refactor: clean up code

* wip: commit progress on usePaginatedQuery

* wip: commit current pagination progress

* docs: clean up comments for clarity

* wip: get type signatures compatible (breaks runtime logic slightly)

* refactor: clean up type definitions

* chore: add support for custom onInvalidPage functions

* refactor: clean up type definitions more for clarity reasons

* chore: delete Pagination component (separate PR)

* chore: remove cacheTime fixes (to be resolved in future PR)

* docs: add clarifying/intellisense comments for DX

* refactor: link users queries to same queryKey implementation

* docs: remove misleading comment

* docs: more comments

* chore: update onInvalidPage params for more flexibility

* fix: remove explicit any

* refactor: clean up type definitions

* refactor: rename query params for consistency

* refactor: clean up input validation for page changes

* refactor/fix: update hook to be aware of async data

* chore: add contravariance to dictionary

* refactor: increase type-safety of usePaginatedQuery

* docs: more comments

* chore: move usePaginatedQuery file

* fix: add back cacheTime

* chore: swap in usePaginatedQuery for users table

* chore: add goToFirstPage to usePaginatedQuery

* fix: make page redirects work properly

* refactor: clean up clamp logic

* chore: swap in usePaginatedQuery for Audits table

* refactor: move dependencies around

* fix: remove deprecated properties from hook

* refactor: clean up code more

* docs: add todo comment

* chore: update testing fixtures

* wip: commit current progress for tests

* fix: update useEffectEvent to sync via layout effects

* wip: commit more progress on tests

* wip: stub out all expected test cases

* wip: more test progress

* wip: more test progress

* wip: commit more test progress

* wip: AHHHHHHHH

* chore: finish two more test cases

* wip: add in all tests (still need to investigate prefetching

* refactor: clean up code slightly

* fix: remove math bugs when calculating pages

* fix: wrap up all testing and clean up cases

* docs: update comments for clarity

* fix: update error-handling for invalid page handling

* fix: apply suggestions
2023-11-30 17:44:03 -05:00
Jon Ayers 329aa45c16 fix: fix TestWorkspaceAutobuild/DormantNoAutostart flake (#10963) 2023-11-30 15:45:27 -06:00
Steven Masley 0a16bda786 chore: add external auth providers to oidctest (#10958)
* implement external auth in oidctest
* Refactor more external tests to new oidctest
2023-11-30 14:05:15 -06:00
Mathias Fredriksson 99151183bc feat(scaletest): replace bash with dd in ssh/rpty traffic and use pseudorandomness (#10821)
Fixes #10795
Refs #8556
2023-11-30 19:30:12 +02:00
Cian Johnston 433be7b16d chore(docs/admin/healthcheck): remove GHFM tips (#10954) 2023-11-30 16:33:41 +00:00
Cian Johnston 07895006d9 refactor(coderd/healthcheck): make Warnings an object with { Code, Message } (#10950)
- Adds health.Message { code string, message string }
- Refactors existing warnings []string to be of type []health.Message instead
2023-11-30 14:49:50 +00:00
Cian Johnston 4f9292859d feat(coderd/healthcheck): add access URL error codes and healthcheck doc (#10915)
Relates to #8965

- Added error codes for separate code paths in health checks
- Prefixed errors and warnings with error code prefixes
- Added a docs page with details on each code, cause and solution

Co-authored-by: Muhammad Atif Ali <atif@coder.com>
2023-11-30 12:15:40 +00:00
dependabot[bot] 5b2f43619b chore: bump the react group in /site with 4 updates (#10869)
* chore: bump the react group in /site with 3 updates

Bumps the react group in /site with 3 updates: [react-helmet-async](https://github.com/staylor/react-helmet-async), [react-markdown](https://github.com/remarkjs/react-markdown) and [react-router-dom](https://github.com/remix-run/react-router/tree/HEAD/packages/react-router-dom).


Updates `react-helmet-async` from 1.3.0 to 2.0.1
- [Release notes](https://github.com/staylor/react-helmet-async/releases)
- [Commits](https://github.com/staylor/react-helmet-async/commits)

Updates `react-markdown` from 8.0.7 to 9.0.1
- [Release notes](https://github.com/remarkjs/react-markdown/releases)
- [Changelog](https://github.com/remarkjs/react-markdown/blob/main/changelog.md)
- [Commits](https://github.com/remarkjs/react-markdown/compare/8.0.7...9.0.1)

Updates `react-router-dom` from 6.16.0 to 6.20.0
- [Release notes](https://github.com/remix-run/react-router/releases)
- [Changelog](https://github.com/remix-run/react-router/blob/main/packages/react-router-dom/CHANGELOG.md)
- [Commits](https://github.com/remix-run/react-router/commits/react-router-dom@6.20.0/packages/react-router-dom)

---
updated-dependencies:
- dependency-name: react-helmet-async
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: react
- dependency-name: react-markdown
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: react
- dependency-name: react-router-dom
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: react
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix lint

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Atif Ali <atif@coder.com>
2023-11-29 23:11:59 +03:00
Garrett Delfosse d41f9f8b47 fix: do not allow selection of unsuccessful versions (#10941) 2023-11-29 13:01:17 -05:00
Bruno Quaresma 2e8ab2aeaf chore(site): enable react-query cache (#10943) 2023-11-29 17:53:11 +00:00
Bruno Quaresma e4d7b0b664 docs: update FE guide (#10942) 2023-11-29 17:27:36 +00:00
Marcin Tojek 2b574e2b2d feat: add dismissed property to the healthcheck section (#10940) 2023-11-29 16:37:40 +00:00
Kira Pilot d374becdeb fix: redirect to new url after template name update (#10926)
* fix: updating template name routes to correct URL

* added e2e test
2023-11-29 10:54:21 -05:00
Kira Pilot 88f4490ad6 fix: clear workspace name validation on field dirty (#10927) 2023-11-29 10:53:45 -05:00
Steven Masley cb6c0f3cbb chore: refactor oidc group and role sync to methods (#10918)
The 'userOIDC' method body was getting unwieldy.
I think there is a good way to redesign the flow, but
I do not want to undertake that at this time.
The easy win is just to move some LoC to other methods
and clean up the main method.
2023-11-29 09:24:00 -06:00
Spike Curtis 2b71e38b31 feat: add status to tailnet mapping query (#10936)
Adds the `status` column to the mapping query so that we can add graceful disconnect logic around it
2023-11-29 16:53:01 +04:00
Mathias Fredriksson f431aa53d2 chore(go.mod): update github.com/coder/ssh (#10934) 2023-11-29 13:19:49 +02:00
Spike Curtis 2dc565d5de chore: remove New----Builder from dbfake function names (#10882)
Drop "New" and "Builder" from the function names, in favor of the top-level resource created.  This shortens tests and gives a nice syntax.  Since everything is a builder, the prefix and suffix don't add much value and just make things harder to read.

I've also chosen to leave `Do()` as the function to insert into the database.  Even though it's a builder pattern, I fear `.Build()` might be confusing with Workspace Builds.  One other idea is `Insert()` but if we later add dbfake functions that update, this might be inconsistent.
2023-11-29 11:06:04 +04:00
Jon Ayers 48d69c9e60 fix: update autostart context to include querying users (#10929) 2023-11-28 17:56:49 -06:00
Bruno Quaresma e9c12c30cf feat(site): refactor template version editor layout (#10912) 2023-11-28 16:42:31 -03:00
Garrett Delfosse afbda2235c fix: insert replica when removed by cleanup (#10917) 2023-11-28 14:15:09 -05:00
Spike Curtis 52901e1219 feat: implement HTMLDebug for PGCoord with v2 API (#10914)
Implements HTMLDebug for the PGCoordinator with the new v2 API and related DB tables.
2023-11-28 22:37:20 +04:00
Eric Paulsen 18c4a98865 fix: numerical validation grammar (#10924) 2023-11-28 10:14:53 -08:00
Marcin Tojek 19b6d194fc feat: manage health settings using Coder API (#10861) 2023-11-28 18:15:17 +01:00
Dean Sheather 452668c893 chore: avoid dbmock test errors in dbgen (#10923) 2023-11-28 17:04:25 +00:00
Spike Curtis 14bd489af6 feat: add queries for PGCoord HTMLDebug (#10913)
Adds queries for implementing HTMLDebug on the new PGCoordinator
2023-11-28 20:19:32 +04:00
Dean Sheather 3416f6dfb5 chore: update port-forwarding documentation (#10916) 2023-11-28 23:54:19 +10:00
Bruno Quaresma 6808daef0f chore(site): use variable font for Inter (#10903) 2023-11-27 21:35:29 +00:00
Garrett Delfosse 74c5261013 fix: add spacing for yes/no prompts (#10907) 2023-11-27 16:12:07 -05:00
Michael Smith 1f6e39c0b0 fix: hide groups in account page if not enabled (#10898) 2023-11-27 14:06:00 -05:00
Bruno Quaresma a4d74b8b44 chore(site): remove paperLight background value (#10857)
I noticed we have been overusing colors in the UI; simplifying makes the "look and feel" more consistent and the styles easier to maintain over time.

![image](https://github.com/coder/coder/assets/3165839/f70c831d-eba8-4521-820a-6257ae0bedf1)

If you want to have a better sense of what it looks like, I recommend you go to the Chromatic snapshot.
2023-11-27 15:52:20 -03:00
dependabot[bot] c634a38bd7 ci: bump the github-actions group with 1 update (#10890)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-27 21:47:26 +03:00
Muhammad Atif Ali 4cb94d1347 chore: update dependabot to use single groups (#10870)
* chore: update dependabot.yaml to use single groups

This will hopefully reduce @dependabot spamming PRs.

* Update dependabot.yaml
2023-11-27 21:27:58 +03:00
Kira Pilot 54c3fc63d9 fix: document workspace filter query param correctly (#10894) 2023-11-27 12:57:24 -05:00
Steven Masley 20525c8b2e chore: add script to analyze which releases have migrations (#10823)
* chore: add script to analyze which releases have migrations
2023-11-27 10:53:32 -06:00
Steven Masley abb2c7656a chore: add claims to oauth link in db for debug (#10827)
* chore: add claims to oauth link in db for debug
2023-11-27 10:47:23 -06:00
Cian Johnston 0534f8f59b fix(provisionersdk): use mtime instead of atime for session cleanup (#10893)
See #10892

- Updates provisionersdk session cleanup to use mtime instead of atime.
- Also runs go mod tidy.
2023-11-27 16:21:59 +00:00
Dean Sheather f28df8e7b8 chore: update wgtunnel to avoid panic (#10877) 2023-11-28 02:19:40 +10:00
Cian Johnston 0babc3c555 fix(provisioner/terraform/cleanup): use mtime instead of atime (#10892)
- Updates plugin staleness check to check mtime instead of atime, as atime has been shown to be unreliable
- Updates existing unit test to use a real filesystem as Afero's in-memory FS doesn't support atimes at all
2023-11-27 15:19:41 +00:00
Bruno Quaresma 707d0e97d9 fix(site): fix sidebar styles (#10891) 2023-11-27 09:55:20 -03:00
Mathias Fredriksson f441ad66e1 fix(codersdk): keep workspace agent connection open after dial context (#10863) 2023-11-27 14:29:57 +02:00
Spike Curtis 3a0a4ddfcd chore: convert dbfake.ProvisionerJobResources to builder (#10881)
Convert to builder for consistency with rest of the package.  This will make it easier to use, and means we can drop "Builder" from function arguments since they are all builders in the package.
2023-11-27 14:46:31 +04:00
Spike Curtis 4548ad7cef chore: remove dbfake.Workspace (#10880)
Remove dbfake.Workspace and use builder instead.
2023-11-27 14:39:16 +04:00
Spike Curtis 78283a7fb9 chore: remove dbfake.WorkspaceWithAgent (#10879)
Replace dbfake.WorkspaceWithAgent() with the builder pattern and remove this function.
2023-11-27 14:30:15 +04:00
Spike Curtis 82d5130b07 chore: convert dbfake.Workspace and .WorkspaceWithAgent to a builder pattern (#10878)
Converts dbfake Workspace and WorkspaceWithAgent to builder pattern.
2023-11-27 14:16:31 +04:00
Cian Johnston b73397e08c fix(site): add workspace proxy section to health page (#10862)
* Adds workspace proxy section to health page
* Conditionally places workspace proxy warnings in errors or warnings based on calculated severity
* Adds some more stories we were missing for HealthPage
2023-11-27 09:26:02 +00:00
Spike Curtis 6c67add2d9 fix: detect and retry reverse port forward on used port (#10844)
Fixes #10799

The flake happens when we try to set up a remote forward, but the port we've chosen is not free. In the flaked example, it's actually the SSH listener that occupies the port we try to remote forward, leading to confusing reads (cf. the linked issue).

This fix simplifies the tests considerably by using the Go ssh client, rather than shelling out to OpenSSH. This avoids using a pseudoterminal, avoids the need for starting any local OS listeners to communicate the forwarding (go SSH just returns in-process listeners), and avoids an OS listener to wire OpenSSH up to the agentConn.

With the simplified logic, we can immediately tell if a remote forward on a random port fails, so we can do this in a loop until success or timeout.

I've also simplified and fixed up the other forwarding tests. Since we set up forwarding in-process with Go ssh, we can remove a lot of the `require.Eventually` logic.
2023-11-27 09:42:45 +04:00
Dean Sheather d5ddcbdda0 chore: fix flake in templates_test.go (#10875) 2023-11-27 15:29:10 +10:00
lbi22 7029ccfbdf feat: add support for custom permissions in Helm chart rbac.yaml file (#10590)
Co-authored-by: Dean Sheather <dean@deansheather.com>
Co-authored-by: Atif Ali <atif@coder.com>
2023-11-27 14:12:46 +10:00
Ben Potter 3530d39740 docs: fix typo in additional-clusters.md (#10868) 2023-11-26 12:53:33 +00:00
Cian Johnston dd161b172e feat: allow auditors to read template insights (#10860)
- Adds a template_insights pseudo-resource
- Grants auditor and template admin roles read access on template_insights
- Updates existing RBAC checks to check for read template_insights, falling back to template update permissions where necessary
- Updates TemplateLayout to show Insights tab if can read template_insights or can update template
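The permission fallback described above can be sketched as follows; `canViewInsights` and the `can` callback are hypothetical stand-ins for the real RBAC authorizer, not Coder's API:

```go
package main

import "fmt"

// canViewInsights mirrors the check described above: allow access if the
// subject can read the template_insights pseudo-resource, falling back to
// the older template-update permission.
func canViewInsights(can func(action, resource string) bool) bool {
	return can("read", "template_insights") || can("update", "template")
}

func main() {
	auditor := func(action, resource string) bool {
		return action == "read" && resource == "template_insights"
	}
	fmt.Println(canViewInsights(auditor)) // auditors can now read insights
}
```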
2023-11-24 17:21:32 +00:00
Mathias Fredriksson e73901cf56 fix(coderd): remove nil ptr deref in watchWorkspace (#10859)
Fixes #10849
2023-11-24 15:16:21 +00:00
Cian Johnston 411ce46442 feat(coderd/healthcheck): add health check for proxy (#10846)
Adds a health check for workspace proxies:
- Healthy iff all proxies are healthy and the same version,
- Warning if some proxies are unhealthy,
- Error if all proxies are unhealthy, or do not all have the same version.
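The three severity rules above can be sketched as a single function; the name and return strings are illustrative, not the real healthcheck API:

```go
package main

import "fmt"

// proxySeverity applies the rules above: error if every proxy is unhealthy
// or the versions disagree, warning if only some are unhealthy, ok otherwise.
func proxySeverity(healthy, total int, sameVersion bool) string {
	switch {
	case total > 0 && (healthy == 0 || !sameVersion):
		return "error"
	case healthy < total:
		return "warning"
	default:
		return "ok"
	}
}

func main() {
	fmt.Println(proxySeverity(3, 3, true))  // ok
	fmt.Println(proxySeverity(2, 3, true))  // warning
	fmt.Println(proxySeverity(0, 3, true))  // error
	fmt.Println(proxySeverity(3, 3, false)) // error
}
```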
2023-11-24 15:06:51 +00:00
Marcin Tojek b501046cf9 test: increase test coverage around health severity (#10858) 2023-11-24 15:42:17 +01:00
Mathias Fredriksson 61be4dfe5a fix: improve exit codes for agent/agentssh and cli/ssh (#10850) 2023-11-24 14:35:56 +02:00
Mathias Fredriksson dbdcad0d09 test(agent/agentssh): fix flake in signal test (#10855) 2023-11-24 13:47:40 +02:00
Marcin Tojek 34841cf2b7 fix: healthcheck warnings should be empty array (#10856) 2023-11-24 12:37:07 +01:00
Mathias Fredriksson 2c6e0f7d0a feat(agent/agentssh): handle session signals (#10842) 2023-11-23 19:55:36 +02:00
Marcin Tojek a7c27cad26 feat: add database support for dismissed healthchecks (#10845) 2023-11-23 16:18:12 +00:00
Cian Johnston f342d10c31 fix(enterprise/coderd/proxyhealth): properly defer healthCheckDuration observe (#10848) 2023-11-23 15:23:40 +00:00
Marcin Tojek 78df68348a feat: include health severity in reports (#10817) 2023-11-23 16:08:41 +01:00
sharkymark e311e9ec24 chore: correct disabling direct and STUN; add vs code remote required URLs (#10830)
* chore: correct disabling direct and STUN; add vs code remote required URLs

* chore: offline docs
2023-11-22 20:04:56 -06:00
Michael Smith 491e0e3abf fix: display explicit 'retry' button(s) when a workspace fails (#10720)
* refactor: remove workspace error enums

* fix: add in retry button for failed workspaces

* fix: make handleBuildRetry auto-detect debug permissions

* chore: consolidate retry messaging

* chore: update renderWorkspacePage to accept parameters

* chore: make workspace test helpers take explicit workspace parameter

* refactor: update how parameters for tests are defined

* fix: update old tests to be correctly parameterized
2023-11-22 16:03:09 -05:00
dependabot[bot] 65c726eb50 chore: bump eslint from 8.52.0 to 8.53.0 in /offlinedocs (#10686)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-22 21:35:05 +03:00
Colin Adler 7f39ff854e fix: skip autostart for suspended/dormant users (#10771) 2023-11-22 11:14:32 -06:00
Zubarev Alexander 614c17924c fix(docs): disable CODER_DERP_SERVER_STUN_ADDRESSES correctly (#10840) 2023-11-22 11:14:01 -06:00
Mathias Fredriksson 6ecba0fda7 fix(coderd): prevent logging error for query cancellation in watchWorkspaceAgentMetadata (#10843) 2023-11-22 15:32:31 +00:00
Bruno Quaresma d58239b9ec chore(site): ignore chromatic changes on syntax highlight (#10839) 2023-11-22 09:51:46 -03:00
Bruno Quaresma ddf5569b10 fix(site): fix tabs (#10838) 2023-11-22 09:33:02 -03:00
Mathias Fredriksson a20ec6659d fix(site): use correct default insights time for day interval (#10837) 2023-11-22 12:30:04 +00:00
Spike Curtis 89c13c2212 fix: enable FeatureHighAvailability if it is licensed (#10834)
fixes #10810

The tailnet coordinators don't depend on replicasync, so we can still enable HA coordinators even if the relay URL is unset.

The in-memory, non-HA coordinator probably has lower latency than the PG Coordinator, since the latter has to query the database, so enterprise customers might want to disable it for single-replica deployments, but this PR default-enables the HA coordinator. We could add support to disable it later if anyone complains. Connection-setup latency matters, but I don't believe the coordinator contributes significantly at this point for reasonable Postgres round-trip times.
2023-11-22 14:46:55 +04:00
Marcin Tojek 8dd003ba5e fix: preserve order of node reports in healthcheck (#10835) 2023-11-22 11:15:11 +01:00
dependabot[bot] 60c01555b9 chore: bump react-icons from 4.11.0 to 4.12.0 in /offlinedocs (#10687)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-22 13:00:59 +03:00
Dean Sheather a9c0c01629 chore: fix flake in listening ports test (#10833) 2023-11-22 09:30:51 +00:00
Spike Curtis f20cc66c04 fix: give SSH stdio sessions a chance to close before closing netstack (#10815)
Man, graceful shutdown is hard.  Even after my changes, we were still hitting a graceful shutdown race: https://github.com/coder/coder/runs/18886842123

The problem was that while we attempt a graceful shutdown at the SSH layer by closing the session for writing, we were not giving it a chance to complete before continuing to tear down the stack of closers, including one that closes the netstack, and thus drop the TCP connection before it closes.
2023-11-22 13:11:21 +04:00
Spike Curtis b25e5dc90b chore: remove dbfake.WorkspaceBuild in favor of builder pattern (#10814)
I'd like to convert dbfake into a builder pattern to prevent a proliferation of XXXWithYYY methods. This is one step along the way, removing the non-builder function.
2023-11-22 13:04:58 +04:00
dependabot[bot] b73d9d788b chore: bump github.com/go-jose/go-jose/v3 from 3.0.0 to 3.0.1 (#10828)
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.0 to 3.0.1.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/v3/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.0...v3.0.1)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-22 08:21:59 +03:00
Jon Ayers 8d1cfbce8f fix: update workspace cleanup flag names for template cmds (#10805) 2023-11-21 18:20:01 -06:00
Jon Ayers 51b58cfc98 fix: only update last_used_at when connection count > 0 (#10808) 2023-11-21 18:10:41 -06:00
Jon Ayers 782fe84c7c feat: disable start/restart if active version required (#10809) 2023-11-21 18:06:30 -06:00
Marcin Tojek 214123d476 test: skip flaky HealthyWithNodeDegraded (#10826) 2023-11-21 20:46:58 +01:00
Muhammad Atif Ali 1c2f9e3199 chore: refactor to move the notes to the top 2023-11-21 22:03:21 +03:00
Kayla Washburn 8cd8901db5 refactor: avoid @emotion/css when possible (#10807) 2023-11-21 11:29:43 -07:00
Kayla Washburn 26b5390f4b refactor: remove usage of styled and withStyles (#10806) 2023-11-21 10:43:01 -07:00
Jon Ayers ad3eb4bb75 Revert "docs: add documentation for template update policies (#10804)" (#10822)
This reverts commit e6dc9eeffc.
2023-11-21 17:10:08 +00:00
dependabot[bot] d0ac4cb4b1 chore: bump prettier from 3.0.0 to 3.1.0 in /site (#10695)
* chore: bump prettier from 3.0.0 to 3.1.0 in /site

Bumps [prettier](https://github.com/prettier/prettier) from 3.0.0 to 3.1.0.
- [Release notes](https://github.com/prettier/prettier/releases)
- [Changelog](https://github.com/prettier/prettier/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prettier/prettier/compare/3.0.0...3.1.0)

---
updated-dependencies:
- dependency-name: prettier
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* prettier

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Kira Pilot <kira.pilot23@gmail.com>
2023-11-21 11:48:40 -05:00
Kayla Washburn e51eeb67ce refactor: improve settings sidebar components (#10801) 2023-11-21 09:38:55 -07:00
dependabot[bot] 7fa70ce159 chore: bump github.com/aws/smithy-go from 1.16.0 to 1.17.0 (#10788)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 16:33:15 +03:00
dependabot[bot] 4590149810 chore: bump google.golang.org/api from 0.150.0 to 0.151.0 (#10787)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-21 16:32:49 +03:00
Spike Curtis 5d5b5aa074 chore: use dbfake for ssh tests rather than provisionerd (#10812)
Refactors SSH tests to skip provisionerd and instead use dbfake to insert workspaces and builds.  This should make tests faster and more reliable.

dbfake.WorkspaceBuild is refactored to use a "builder" pattern with "fluent" options, as the number of options and variants was starting to get out of hand.
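A minimal sketch of the fluent-builder shape this refactor adopts; the type, methods, and return values here are illustrative stand-ins, not the real dbfake package, which performs database inserts:

```go
package main

import "fmt"

// WorkspaceBuildBuilder accumulates options via fluent methods; Do()
// performs the final action. Value receivers return copies, so chained
// calls don't mutate shared state.
type WorkspaceBuildBuilder struct {
	owner     string
	withAgent bool
}

func WorkspaceBuild(owner string) WorkspaceBuildBuilder {
	return WorkspaceBuildBuilder{owner: owner}
}

// WithAgent returns a copy with the agent option set, so calls chain.
func (b WorkspaceBuildBuilder) WithAgent() WorkspaceBuildBuilder {
	b.withAgent = true
	return b
}

func (b WorkspaceBuildBuilder) Do() string {
	if b.withAgent {
		return b.owner + ": build+agent"
	}
	return b.owner + ": build"
}

func main() {
	fmt.Println(WorkspaceBuild("alice").WithAgent().Do()) // alice: build+agent
}
```

The appeal over `XXXWithYYY` helper functions is that each new option is one small method instead of a new combinatorial variant.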
2023-11-21 16:22:08 +04:00
Marcin Tojek 048dc0450f feat: ensure coder remains healthy with single degraded DERP server (#10813) 2023-11-21 12:58:25 +01:00
Cian Johnston abafc0863c feat(coderd): store workspace proxy version in the database (#10790)
Stores workspace proxy version in database upon registration.
2023-11-21 11:21:25 +00:00
Steven Masley 7060069034 fix: prevent change in defaults if user unsets in template edit (#10793)
* fix: template edit not change defaults if user unset
2023-11-20 18:14:30 -06:00
Jon Ayers e6dc9eeffc docs: add documentation for template update policies (#10804)
Co-authored-by: Ben Potter <ben@coder.com>
2023-11-20 16:30:24 -06:00
Kira Pilot ace188bfc2 fix: clarify language in orphan section of delete modal (#10764)
* fix: clarify language in orphan section of delete modal

* tinted title

* Update site/src/pages/WorkspacePage/WorkspaceDeleteDialog/WorkspaceDeleteDialog.tsx

Co-authored-by: Muhammad Atif Ali <atif@coder.com>

* prettier

---------

Co-authored-by: Muhammad Atif Ali <atif@coder.com>
2023-11-20 15:04:51 -05:00
Steven Masley 5229d7fd3a feat: implement deprecated flag for templates to prevent new workspaces (#10745)
* feat: implement deprecated flag for templates to prevent new workspaces
* Add deprecated filter to template fetching
* Add deprecated to template table
* Add deprecated notice to template page
* Add ui to deprecate a template
2023-11-20 19:16:18 +00:00
Marcin Tojek d8df87d5ae fix: insights metrics comparison (#10800)
* fix: insights metrics comparison

* links
2023-11-20 18:37:46 +01:00
Mathias Fredriksson 6b3f599438 fix(site): correctly interpret timezone based on offset in formatOffset (#10797)
Fixes #10784
2023-11-20 19:30:09 +02:00
Kayla Washburn 9b6433e3a7 chore: remove theme experiment (#10798)
Co-authored-by: Kyle Carberry <kyle@carberry.com>
2023-11-20 09:53:20 -07:00
Spike Curtis 92ef0baff3 fix: remove pty match for TestSSH/RemoteForward (#10789)
Fixes #10578
2023-11-20 20:50:09 +04:00
Michael Smith df4f34ac15 fix: prevent alt text from appearing if OIDC icon fails to load (#10792)
* fix: update alt text issue
2023-11-20 10:51:25 -05:00
Bruno Quaresma fbec79f35d refactor(site): refactor login screen (#10768) 2023-11-20 11:19:50 -03:00
Bruno Quaresma 2895c108c2 chore(site): remove Typography component (#10769)
* Remove Typography from NavbarView

* Remove Typography from EmptyState

* Remove Typography from Paywall

* Fix font size

* Remove Typography from CliAuthPage

* Remove Typography from Single SignOn

* Remove Typography from file dialog

* Remove from not found

* Remove from Section

* Remove from global snackbar

* Remove Typography component

* Add eslint role
2023-11-20 10:15:40 -03:00
Spike Curtis 5173bce5cc fix: stop redirecting DERP and replicasync http requests (#10752)
Fixes an issue where setting CODER_REDIRECT_TO_ACCESS_URL breaks use of multiple Coder server replicas for DERP traffic.
2023-11-20 14:46:59 +04:00
Spike Curtis 5c48cb4447 feat: modify PG Coordinator to work with new v2 Tailnet API (#10573)
re: #10528

Refactors PG Coordinator to work with the Tailnet v2 API, including wrappers for the existing v1 API.

The debug endpoint functions but doesn't return sensible data; that will be addressed in another stacked PR.
2023-11-20 14:31:04 +04:00
Muhammad Atif Ali a8c25180db fix(docs): fix a broken link (#10783) 2023-11-20 12:49:07 +03:00
JounQin 148eb90bda docs: migrate all deprecated CODER_ADDRESS to CODER_HTTP_ADDRESS (#10780)
Co-authored-by: Muhammad Atif Ali <me@matifali.dev>
2023-11-19 17:54:02 +00:00
JounQin 9b864ed700 docs: align CODER_HTTP_ADDRESS with document (#10779) 2023-11-19 15:38:39 +00:00
Ammar Bandukwala cfe35f54b4 feat(cli/agent): preserve old logs (#10776)
See https://github.com/coder/coder/pull/7815 for background.
2023-11-18 10:53:56 -06:00
Eric Paulsen 328a383f15 fix: set ignore_changes on EC2 example templates (#10773) 2023-11-18 01:07:27 -05:00
Colin Adler 3aef070959 fix: return non-null warning arrays in healthcheck (#10774) 2023-11-17 22:25:44 +00:00
Cian Johnston 2c3ebc50cb fix(site): handle null warnings in health page (#10775) 2023-11-17 22:10:13 +00:00
Ben Potter d19a762589 docs: add v2.4.1 changelog (#10770) 2023-11-17 14:46:07 -06:00
Steven Masley 0f17d7c144 chore: return context.Canceled when in Prepare for rbac (#10763)
Previously this returned a custom rego canceled error. This change conforms with
how Authorize handles the error.
2023-11-17 20:28:59 +00:00
Kayla Washburn 875cae1fc9 chore: lint sink_test.go (#10765) 2023-11-17 09:45:24 -07:00
Steven Masley e448c10122 chore: add uuid's to ssh sessions for logging (#10721)
* chore: add uuid to ssh connection logs
2023-11-17 16:04:23 +00:00
Cian Johnston befb42b6fd feat(site): add refresh button on health page (#10719)
Adds a button on DeploymentHealth page to immediately re-run the healthcheck.

Co-authored-by: BrunoQuaresma <bruno_nonato_quaresma@hotmail.com>
2023-11-17 15:26:25 +00:00
Bruno Quaresma e6f11a383a refactor(site): add minor improvements to the schedule controls (#10756)
Demo:

https://github.com/coder/coder/assets/3165839/d6ea83c0-6390-42d9-bd48-3438fc8685db
2023-11-17 12:03:44 -03:00
Bruno Quaresma 20c2dda13f refactor(site): replace secondary by primary color (#10757) 2023-11-17 12:02:58 -03:00
Bruno Quaresma b508c325b1 refactor(site): add minor tweaks to the workspace delete dialog (#10758)
Before:
<img width="483" alt="Screenshot 2023-11-17 at 11 29 25" src="https://github.com/coder/coder/assets/3165839/28e07832-d816-48d3-a3d5-500227f2799e">

After:
<img width="491" alt="Screenshot 2023-11-17 at 11 29 30" src="https://github.com/coder/coder/assets/3165839/e01bc181-34af-4299-b86a-9081a5efd954">
2023-11-17 12:01:57 -03:00
Marcin Tojek 8999d5785a feat: do not fail DERP healthcheck if WebSocket is used (#10714) 2023-11-17 16:00:49 +01:00
Bruno Quaresma 24aa223399 refactor(site): adjust a few colors (#10750) 2023-11-17 09:27:07 -03:00
Bruno Quaresma 4121121797 fix(site): prevent overwriting of newest workspace data during optimistic updates (#10751) 2023-11-17 09:13:46 -03:00
Spike Curtis 71f87d054f fix: accept legacy redirect HTTP environment variables (#10748)
> Can someone help me understand the differences between these env variables:
>
>    CODER_REDIRECT_TO_ACCESS_URL
>    CODER_TLS_REDIRECT_HTTP_TO_HTTPS
>    CODER_TLS_REDIRECT_HTTP

Oh man, what a mess. It looks like `CODER_TLS_REDIRECT_HTTP` appears in our config docs. Maybe that was the initial name for the environment variable?

At some point, both the flag and the environment variable were `--tls-redirect-http-to-https` and `CODER_TLS_REDIRECT_HTTP_TO_HTTPS`.  `CODER_TLS_REDIRECT_HTTP` did nothing.

However, then we introduced `CODER_REDIRECT_TO_ACCESS_URL`, we put in some deprecation code that was maybe fat-fingered such that we accept the environment variable `CODER_TLS_REDIRECT_HTTP` but the flag `--tls-redirect-http-to-https`.  Our docs still refer to `CODER_TLS_REDIRECT_HTTP` at https://coder.com/docs/v2/latest/admin/configure#address

So, I think we should still accept `CODER_TLS_REDIRECT_HTTP`, since it was working and appears in an example doc, but also fix the deprecation code to accept the `CODER_TLS_REDIRECT_HTTP_TO_HTTPS` environment variable.
2023-11-17 15:09:29 +04:00
Marcin Tojek fc249fab1e skip TestCollectInsights (#10749) 2023-11-17 10:57:53 +01:00
Spike Curtis 3dd35e019b fix: close ssh sessions gracefully (#10732)
Re-enables TestSSH/RemoteForward_Unix_Signal and addresses the underlying race: we were not closing the remote forward on context expiry, only the session and connection.

However, there is still a more fundamental issue: we don't have the ability to ensure that TCP sessions are properly terminated before tearing down the Tailnet conn. This is due to the sockets API's assumption that the underlying IP interface is long-lived compared with the TCP socket, so closing a socket returns immediately and does not wait for the TCP termination handshake; that is handled asynchronously in the tcpip stack. This assumption does not hold for us and tailnet, since on shutdown we also tear down the tailnet connection, and this can race with the TCP termination.

Closing the remote forward explicitly should prevent forward state from accumulating, since the Close() function waits for a reply from the remote SSH server.

I've also attempted to work around the TCP/tailnet issue for `--stdio` by using `CloseWrite()` instead of `Close()`. By closing the write side of the connection, we half-close the TCP connection; the server detects this and closes the other direction, which then triggers our read loop to exit only after the server has had a chance to process the close.

TODO in a stacked PR is to implement this logic for `vscodessh` as well.
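The half-close behavior can be demonstrated with plain TCP; this is a sketch of the mechanism only, not the actual `coder ssh --stdio` code:

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// halfCloseRoundTrip demonstrates the CloseWrite trick: the client sends a
// FIN via CloseWrite(), the server sees EOF, replies, and closes; the
// client's read loop exits only after the server has processed the close.
func halfCloseRoundTrip() (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer ln.Close()
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		io.Copy(io.Discard, conn) // read until the client half-closes
		conn.Write([]byte("server done"))
		conn.Close()
	}()
	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return "", err
	}
	defer conn.Close()
	conn.Write([]byte("payload"))
	conn.(*net.TCPConn).CloseWrite() // half-close: send FIN, keep reading
	reply, err := io.ReadAll(conn)   // still receives the server's reply
	return string(reply), err
}

func main() {
	reply, err := halfCloseRoundTrip()
	fmt.Println(reply, err == nil)
}
```

A plain `Close()` in place of `CloseWrite()` would drop the read side too, racing with the server's reply.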
2023-11-17 12:43:20 +04:00
Bruno Quaresma ba955f44d0 fix(site): fix scroll when having many build options (#10744) 2023-11-16 22:13:59 +00:00
Bruno Quaresma 88c1ee6d52 chore(site): increase stop workspace timeout (#10742) 2023-11-16 18:27:51 -03:00
Kayla Washburn 111ac3de8a chore: switch to zinc for our gray palette (#10740) 2023-11-16 14:22:40 -07:00
Bruno Quaresma fefe02c2df fix(site): fix group name validation (#10739) 2023-11-16 18:16:24 -03:00
Kira Pilot 9f3a955ebf fix: show all experiments in deployments list if opted into (#10722) 2023-11-16 10:53:35 -05:00
Marcin Tojek 0e5eecd7da feat: add more logging around echo tar (#10731) 2023-11-16 16:52:04 +01:00
dependabot[bot] ced6ae01b7 chore: bump prettier from 3.0.0 to 3.1.0 in /offlinedocs (#10688)
Bumps [prettier](https://github.com/prettier/prettier) from 3.0.0 to 3.1.0.
- [Release notes](https://github.com/prettier/prettier/releases)
- [Changelog](https://github.com/prettier/prettier/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prettier/prettier/compare/3.0.0...3.1.0)

---
updated-dependencies:
- dependency-name: prettier
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-16 12:31:32 -03:00
Bruno Quaresma f47ecb54aa chore: disable trial activation on e2e tests (#10683) 2023-11-16 12:19:22 -03:00
Mathias Fredriksson 198b56c137 fix(coderd): fix memory leak in watchWorkspaceAgentMetadata (#10685)
Fixes #10550
2023-11-16 17:03:53 +02:00
Dean Sheather c130f8d6d0 chore: disable test on save in vscode (#10730) 2023-11-16 22:27:08 +10:00
Dean Sheather 10204ba829 chore: retry healthcheck in proxy region test (#10729) 2023-11-16 22:21:16 +10:00
Jon Ayers 9ac44aa74f fix: disable autoupdate workspace setting when template setting enabled (#10662) 2023-11-15 16:58:55 -06:00
Kayla Washburn 8ddc8b3447 site: new dark theme (#10331) 2023-11-15 14:39:26 -07:00
Cian Johnston bd17290ff4 chore(coderd/autobuild): address some logic errors in autostart tests (#10713) 2023-11-15 16:26:10 +00:00
Kira Pilot 38163edf2f feat: allow autostop to be specified in minutes and seconds (#10707)
* feat: allow autostop to be specified in minutes and seconds

* fix test
2023-11-15 11:01:26 -05:00
Cian Johnston 9d310388e5 feat(coderd): /debug/health: add parameter to force healthcheck (#10677) 2023-11-15 15:54:15 +00:00
Steven Masley 290180b104 feat!: bump workspace activity by 1 hour (#10704)
Marked as a breaking change as the previous activity bump was always the TTL duration of the workspace/template.

This change is more cost-conservative, bumping by only 1 hour for workspace activity. To accommodate wrap-around, e.g. bumping a workspace into the next autostart, the deadline is bumped by the TTL if the workspace crosses the autostart threshold.

This is a niche case, likely caused by an idle terminal keeping a workspace alive through the night. The next morning, the workspace's activity bump is the default TTL at autostart, much as if the workspace had been autostarted again.

In practice, a good way to avoid this is to set a max_deadline of <24hrs to avoid wrap around entirely.
2023-11-15 09:42:27 -06:00
Cian Johnston 6085b92fae feat(site): add annotation to display values of type clibase.Duration correctly (#10667)
* Adds an annotation format_duration_ns to all deployment values of type clibase.Duration
* Adds a unit test that complains if you forget to add the above annotation to a clibase.Duration
* Modifies optionValue() to check for the presence of format_duration_ns when displaying an option.
2023-11-15 12:29:20 +00:00
Spike Curtis 34c9661f1b fix: disable flaky test TestSSH/RemoteForward_Unix_Signal (#10711) 2023-11-15 11:04:36 +00:00
Spike Curtis 1516c6636b feat: add SQL queries for v2 PG Coordinator (#10572)
re #10528

Adds SQL queries to support Tailnet v2 API in the PG Coordinator
2023-11-15 10:13:27 +04:00
dependabot[bot] a8ce099638 chore: bump @octokit/types from 12.1.1 to 12.3.0 in /site (#10693)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-15 01:02:53 +03:00
dependabot[bot] b568344fe1 chore: bump chromatic from 7.6.0 to 9.0.0 in /site (#10697)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-15 01:02:28 +03:00
dependabot[bot] 3ae438b968 chore: bump cronstrue from 2.41.0 to 2.43.0 in /site (#10698)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-15 01:01:53 +03:00
dependabot[bot] acda90236d chore: bump ts-proto from 1.163.0 to 1.164.0 in /site (#10699)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-15 01:01:07 +03:00
dependabot[bot] f623153438 chore: bump @testing-library/react from 14.0.0 to 14.1.0 in /site (#10700)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-15 01:00:46 +03:00
Ben Potter f3ffcba63b chore: clarify namespace requirement for kubernetes template (#10657) 2023-11-14 21:50:58 +00:00
Ben Potter 3091f8f70c chore: fix docs for max lifetime (#10706) 2023-11-14 21:08:06 +00:00
dependabot[bot] c14c1cce13 ci: bump the github-actions group with 1 update (#10694)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 20:15:52 +00:00
Colin Adler cb22df9bea chore: tidy go.mod (#10703) 2023-11-14 14:12:58 -06:00
Colin Adler fbfd192370 chore: update openssl in Dockerfile (#10701)
Includes a security fix for CVE-2023-5363 and CVE-2023-5678.
2023-11-14 13:40:30 -06:00
Spike Curtis 4894eda711 feat: capture cli logs in tests (#10669)
Adds a Logger to cli Invocation and standardizes CLI commands to use it.  clitest creates a test logger by default so that CLI command logs are captured in the test logs.

CLI commands that do their own log configuration are modified to add sinks to the existing logger, rather than create a new one.  This ensures we still capture logs in CLI tests.
2023-11-14 22:56:27 +04:00
Bruno Quaresma 90b6e86555 chore(site): remove xstate (#10659) 2023-11-14 18:34:38 +00:00
Kira Pilot ef70165a8a feat: add orphan option to workspace delete in UI (#10654)
* added workspace delete dialog

* added stories and tests

* PR review

* fix flake

* fixed stories
2023-11-14 11:32:05 -05:00
dependabot[bot] 4f08330297 chore: bump github.com/coder/retry from 1.4.0 to 1.5.1 (#10672)
Bumps [github.com/coder/retry](https://github.com/coder/retry) from 1.4.0 to 1.5.1.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/coder/retry/commit/f5ccc4d2d45135bf65c7ccc5e78942dd7df19c84"><code>f5ccc4d</code></a> Fix double-scaling bug</li>
<li><a href="https://github.com/coder/retry/commit/14c7c27e14e40827a36754dd2071b09249d426f8"><code>14c7c27</code></a> Add support for Jitter (<a href="https://redirect.github.com/coder/retry/issues/28">#28</a>)</li>
<li><a href="https://github.com/coder/retry/commit/12627b155ff59e5f62c15d262ba1ba06f17daa90"><code>12627b1</code></a> Update README to give a goto example</li>
<li><a href="https://github.com/coder/retry/commit/a8710231a1a7a7f884eb894aca0bee24c5caf21c"><code>a871023</code></a> Make minor format improvements to README</li>
<li>See full diff in <a href="https://github.com/coder/retry/compare/v1.4.0...v1.5.1">compare view</a></li>
</ul>
</details>
2023-11-14 10:00:07 -06:00
dependabot[bot] 4965f1853b chore: bump github.com/fergusstrange/embedded-postgres from 1.24.0 to 1.25.0 (#10674)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 12:51:00 +00:00
Spike Curtis dc4b1ef406 fix: lock log sink against concurrent write and close (#10668)
fixes #10663
2023-11-14 16:38:34 +04:00
dependabot[bot] 530be2f96a chore: bump github.com/valyala/fasthttp from 1.50.0 to 1.51.0 (#10671)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 12:35:10 +00:00
dependabot[bot] 1b20b3cfa8 chore: bump google.golang.org/api from 0.148.0 to 0.150.0 (#10673)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 12:28:21 +00:00
Colin Adler e0afee1b85 feat: add debug endpoint for single tailnet (#10485) 2023-11-13 17:14:12 -06:00
dependabot[bot] f4de2b64ec chore: bump gopkg.in/DataDog/dd-trace-go.v1 from 1.56.1 to 1.57.0 (#10647)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-14 00:58:27 +03:00
dependabot[bot] 3f4791c9de ci: bump the github-actions group with 4 updates (#10649)
Bumps the github-actions group with 4 updates: [crate-ci/typos](https://github.com/crate-ci/typos), [actions/github-script](https://github.com/actions/github-script), [DeterminateSystems/nix-installer-action](https://github.com/determinatesystems/nix-installer-action) and [aquasecurity/trivy-action](https://github.com/aquasecurity/trivy-action).


Updates `crate-ci/typos` from 1.16.22 to 1.16.23
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.16.22...v1.16.23)

Updates `actions/github-script` from 5 to 6
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](https://github.com/actions/github-script/compare/v5...v6)

Updates `DeterminateSystems/nix-installer-action` from 6 to 7
- [Release notes](https://github.com/determinatesystems/nix-installer-action/releases)
- [Commits](https://github.com/determinatesystems/nix-installer-action/compare/v6...v7)

Updates `aquasecurity/trivy-action` from 0.13.1 to 0.14.0
- [Release notes](https://github.com/aquasecurity/trivy-action/releases)
- [Commits](https://github.com/aquasecurity/trivy-action/compare/f78e9ecf42a1271402d4f484518b9313235990e1...2b6a709cf9c4025c5438138008beaddbb02086f0)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: actions/github-script
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: DeterminateSystems/nix-installer-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: aquasecurity/trivy-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 22:53:19 +03:00
dependabot[bot] 4a0ca8aa5b chore: bump github.com/go-playground/validator/v10 (#10646)
Bumps [github.com/go-playground/validator/v10](https://github.com/go-playground/validator) from 10.15.1 to 10.16.0.
- [Release notes](https://github.com/go-playground/validator/releases)
- [Commits](https://github.com/go-playground/validator/compare/v10.15.1...v10.16.0)

---
updated-dependencies:
- dependency-name: github.com/go-playground/validator/v10
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 22:49:55 +03:00
dependabot[bot] 1fe5c969c7 chore: bump github.com/hashicorp/terraform-json (#10648)
Bumps [github.com/hashicorp/terraform-json](https://github.com/hashicorp/terraform-json) from 0.17.2-0.20230905102422-cd7b46b136bb to 0.18.0.
- [Release notes](https://github.com/hashicorp/terraform-json/releases)
- [Commits](https://github.com/hashicorp/terraform-json/commits/v0.18.0)

---
updated-dependencies:
- dependency-name: github.com/hashicorp/terraform-json
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 22:48:42 +03:00
Jon Ayers 75ab16d19a fix: prevent db deadlock when workspaces go dormant (#10618) 2023-11-13 13:40:20 -06:00
dependabot[bot] 76e7a1d06b chore: bump the golang-x group with 4 updates (#10644)
Bumps the golang-x group with 4 updates: [golang.org/x/crypto](https://github.com/golang/crypto), [golang.org/x/net](https://github.com/golang/net), [golang.org/x/oauth2](https://github.com/golang/oauth2) and [golang.org/x/tools](https://github.com/golang/tools).


Updates `golang.org/x/crypto` from 0.14.0 to 0.15.0
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.15.0)

Updates `golang.org/x/net` from 0.17.0 to 0.18.0
- [Commits](https://github.com/golang/net/compare/v0.17.0...v0.18.0)

Updates `golang.org/x/oauth2` from 0.13.0 to 0.14.0
- [Commits](https://github.com/golang/oauth2/compare/v0.13.0...v0.14.0)

Updates `golang.org/x/tools` from 0.14.0 to 0.15.0
- [Release notes](https://github.com/golang/tools/releases)
- [Commits](https://github.com/golang/tools/compare/v0.14.0...v0.15.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x
- dependency-name: golang.org/x/net
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x
- dependency-name: golang.org/x/oauth2
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x
- dependency-name: golang.org/x/tools
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-13 12:31:41 -06:00
Kayla Washburn 33761c9c7d refactor: add experimental NewTheme (#10613) 2023-11-13 10:09:44 -07:00
Kira Pilot 652097ed3a fix: update HealthcheckDatabaseReport mocks (#10655) 2023-11-13 11:28:20 -05:00
Marcin Tojek fbd34139b5 refactor(site): use generated Healthcheck API entities (#10650) 2023-11-13 15:58:57 +01:00
Cian Johnston b69c237b8a feat(coderd/healthcheck): allow configuring database hc threshold (#10623)
* feat(coderd/healthcheck): allow configuring database hc threshold
* feat(coderd): add database hc latency, plumb through
* feat(coderd): allow configuring healthcheck refresh interval
2023-11-13 14:14:43 +00:00
Michael Smith e4211ccb40 fix: add missing focus state styling to buttons and checkboxes (#10614)
* fix: add focus styling to checkboxes

* fix: add focus styling to icon buttons

* fix: add focus styling to switches

* fix: swap outlines for box-shadows for more styling control
2023-11-13 08:08:18 -05:00
Spike Curtis f400d8a0c5 fix: handle SIGHUP from OpenSSH (#10638)
Fixes an issue where remote forwards are not correctly torn down when using OpenSSH with `coder ssh --stdio`. OpenSSH sends a disconnect signal, but then also sends SIGHUP to `coder`. Previously, we just exited when we got SIGHUP, and this raced against properly disconnecting.

Fixes https://github.com/coder/customers/issues/327
2023-11-13 15:14:42 +04:00
Muhammad Atif Ali be0436afbe ci: bump terraform version to 1.5.7 to match embedded terraform version (#10630) 2023-11-13 10:06:36 +03:00
Muhammad Atif Ali 715bbd3edd ci: bump go to version 1.20.11 (#10631) 2023-11-13 10:06:26 +03:00
Anunaya Srivastava 5f0417d14e fix: nix-shell on macOS (#10591)
strace is unavailable on macOS; flake.nix is updated to handle this
scenario.
2023-11-11 12:06:26 +03:00
Cian Johnston a4f1319108 feat(cli): allow showing schedules for multiple workspaces (#10596)
* `coder list`: adds information about next start/stop to available columns (not default)
* `coder schedule show` is now essentially `coder list` with a different set of columns
* Updates cli schedule unit tests to use new dbfake

Co-authored-by: Mathias Fredriksson <mafredri@gmail.com>
2023-11-10 13:51:49 +00:00
Jon Ayers 177affbe4b feat: add frontend warning when autostart disabled due to automatic updates (#10508) 2023-11-09 17:01:12 -06:00
Eric Paulsen 9c5b631323 feat: add docs for Bitbucket Server external auth config (#10617) 2023-11-09 16:14:22 -05:00
Michael Smith 8290fee3f7 fix: remove accidental scrollbar from deployment banner (#10616)
* chore: clean up DeploymentBannerView markup

* fix: remove extra scrollbar

* refactor: remove needless calc call
2023-11-09 15:52:23 -05:00
Marcin Tojek 61fac2dcfc feat(cli): create workspace using parameters from existing workspace (#10604) 2023-11-09 19:22:47 +01:00
Muhammad Atif Ali 076db31486 ci: use actions/setup-go builtin cache (#10608) 2023-11-09 20:41:31 +03:00
Michael Smith ad3abe350f refactor: revamp pagination UI view logic (#10567)
* chore: revamp Page Utility tests

* refactor: simplify component design for PageButton

* chore: beef up isNonInitialPage and add tests

* docs: clean up comments

* chore: quick refactor for buildPagedList

* refactor: clean up math calculations for buildPagedList

* chore: rename PageButtons file

* chore: revamp how nav buttons are defined

* fix: remove test disabled state

* chore: clean up base nav button

* chore: rename props for clarity

* refactor: clean up logic for isNonInitialPage

* chore: add more tests and catch bugs

* docs: fix confusing typo in comments

* chore: add one more test case for pagination buttons

* refactor: update props definition for PaginationNavButton

* fix: remove possible state sync bugs
2023-11-09 09:10:14 -05:00
Cian Johnston 8a7f0e9eb9 refactor(cli): extract workspace list parameters (#10605)
Extracts the --search and --all parameters to a separate struct in cliui.
2023-11-09 12:16:43 +00:00
Mathias Fredriksson 473585de6c fix(scripts): forward all necessary ports for remote playwright (#10606) 2023-11-09 12:02:46 +00:00
Mathias Fredriksson e71c53d4d0 chore(site): add remote playwright support and script (#10445) 2023-11-09 13:26:26 +02:00
Marcin Tojek ed7e43b54c feat: expose parameter insights as Prometheus metrics (#10574) 2023-11-09 10:30:40 +01:00
Jon Ayers e23873ff8f feat: add endpoint for resolving autostart status (#10507) 2023-11-08 23:24:56 -06:00
Jon Ayers cf8ee78547 fix: disable autostart for flaky test (#10598) 2023-11-08 17:56:36 -06:00
Bruno Quaresma 645c4bd612 fix(site): fix daylight savings date range issue (#10595)
Close https://github.com/coder/coder/issues/10575
2023-11-08 16:49:09 -03:00
Bruno Quaresma a328d20bcb chore(site): remove workspace schedule banner service (#10588)
Related to https://github.com/coder/coder/issues/9943
2023-11-08 16:48:54 -03:00
Kyle Carberry 2cf2904515 fix: improve language of latest build error (#10593) 2023-11-08 18:38:46 +00:00
Steven Masley 63a4f5f4a7 fix: case insensitive magic label (#10592) 2023-11-08 11:17:14 -06:00
Steven Masley aded7b1513 feat: implement bitbucket-server external auth defaults (#10520)
* feat: implement bitbucket-server external auth defaults

Bitbucket cloud != Bitbucket server
Add reasonable defaults for server

* change "bitbucket" to "bitbucket-cloud"
2023-11-08 11:05:51 -06:00
Bruno Quaresma 71153e2317 chore(site): remove workspace schedule machine (#10583)
Related to https://github.com/coder/coder/issues/9943
2023-11-08 13:46:29 -03:00
Cian Johnston 26740cf00d chore(scripts/rules.go): broaden scope of testingWithOwnerUser linter (#10548)
* Updated testingWithOwnerUser ruleguard rule to detect:
  a) Passing client from coderdenttest.New() to clitest.SetupConfig() similar to what already exists for AGPL code
  b) Usage of any method of the owner client from coderdenttest.New() - all usages of the owner client must be justified with a `//nolint:gocritic` comment.
* Fixed resulting linter complaints.
* Added new coderdtest helpers CreateGroup and UpdateTemplateMeta.
* Modified check_enterprise_import.sh to ignore scripts/rules.go.
2023-11-08 14:54:48 +00:00
Michael Smith 057b43a935 fix: remove stray 0 when no data is in users table (#10584) 2023-11-08 09:06:14 -05:00
Bruno Quaresma f418983f23 chore(site): make chromatic ignore changes inside of the code editor (#10586) 2023-11-08 11:01:28 -03:00
Bruno Quaresma de196b89b6 chore(site): revert remark-gfm upgrade (#10580) 2023-11-08 08:23:09 -05:00
Bruno Quaresma 7f26111c01 feat(site): add stop and start batch actions (#10565) 2023-11-08 09:29:22 -03:00
Bruno Quaresma 861ae1a23a fix(site): fix bottom bar height (#10579) 2023-11-08 12:21:20 +00:00
Ammar Bandukwala 4f3925d0b3 ci: close likely-no issues automatically (#10569) 2023-11-08 04:54:44 +00:00
Kira Pilot 4316c1c862 fix: display all metadata items alongside daily_cost (#10554)
* resolves #10411

* Update site/src/components/Resources/ResourceCard.test.tsx
2023-11-07 13:04:10 -05:00
Kayla Washburn 9e4558ae3a feat: parse resource metadata values as markdown (#10521) 2023-11-07 10:34:24 -07:00
Mathias Fredriksson 43a867441a feat(cli): add template filter support to exp scaletest cleanup and traffic (#10558) 2023-11-07 16:41:55 +00:00
Kayla Washburn 1dd3eb603b fix: hide promote/archive buttons for template versions from users without permission (#10555) 2023-11-07 09:33:14 -07:00
Marcin Tojek 0a550815e9 feat: expose app insights as Prometheus metrics (#10346) 2023-11-07 17:14:59 +01:00
Cian Johnston 8441c36dfb fix(site/src/api): getDeploymentDAUs: truncate tz_offset to whole number (#10563) 2023-11-07 16:00:00 +00:00
Bruno Quaresma 651d14ea68 fix(site): fix agent log error (#10557) 2023-11-07 10:37:09 -05:00
Steven Masley 64398def48 feat: add configurable cipher suites for tls listening (#10505)
* feat: add configurable cipher suites for tls listening
* tls.VersionName is Go 1.21+, so copy the function
2023-11-07 14:55:39 +00:00
Mathias Fredriksson e36503afd2 test(codersdk/agentsdk): fix context cancel flush test (#10560)
This change tests that the patch request is cancelled instead of hoping
that there's no race between context cancellations leading to patch
never being called.
2023-11-07 16:47:23 +02:00
Michael Smith b0aa91bf27 fix: disable pagination nav buttons correctly (#10561)
* fix: update button disabling logic
2023-11-07 09:36:26 -05:00
Michael Smith f5c4826e4c feat: add list of user's groups to Accounts page (#10522)
* chore: add query for a user's groups

* chore: integrate user groups into UI

* refactor: split UI card into separate component

* chore: enforce alt text for AvatarCard

* chore: add proper alt text support for Avatar

* fix: update props for Avatar call sites

* finish AccountPage changes

* wip: commit progress on AvatarCard

* fix: add better UI error handling

* fix: update theme setup for AvatarCard

* fix: update styling for AccountPage

* fix: make error message conditional

* chore: update styling for AvatarCard

* chore: finish AvatarCard

* fix: add maxWidth support to AvatarCard

* chore: update how no max width is defined

* chore: add AvatarCard stories

* fix: remove incorrect semantics for AvatarCard

* docs: add comment about flexbox behavior

* docs: add clarifying text about prop

* fix: fix grammar for singular groups

* refactor: split off AccountUserGroups and add story

* fix: differentiate mock groups more
2023-11-07 08:36:53 -05:00
Michael Smith 8c3828b531 fix: stop SSHKeysPage from flaking (#10553)
* refactor: reorganize SSHKeysPage

* refactor: update render behavior for GlobalSnackbar

* fix: remove redundant error handling

* docs: Clean up wording on docs

* fix: remove temp error handling tests

* fix: remove local error alert

* fix: remove error logging hacks
2023-11-07 08:31:06 -05:00
dependabot[bot] b83a8ce76d chore: bump github.com/aws/smithy-go from 1.15.0 to 1.16.0 (#10543)
Bumps [github.com/aws/smithy-go](https://github.com/aws/smithy-go) from 1.15.0 to 1.16.0.
- [Release notes](https://github.com/aws/smithy-go/releases)
- [Changelog](https://github.com/aws/smithy-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/smithy-go/compare/v1.15.0...v1.16.0)

---
updated-dependencies:
- dependency-name: github.com/aws/smithy-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-07 11:32:47 +00:00
Cian Johnston 4208c30d32 fix(coderd/rbac): allow user admin all perms on ResourceUserData (#10556) 2023-11-07 08:54:12 +00:00
Dean Sheather f84485d2c4 chore: add timezone to quiet hours display message in UI (#10538) 2023-11-07 08:36:11 +00:00
Spike Curtis c87deb868b fix: upgrade tailscale to fix STUN probes on dual stack (#10535)
Fixes STUN probe issues on dual stack systems by incorporating https://github.com/coder/tailscale/pull/43
2023-11-07 08:48:27 +04:00
Bruno Quaresma 14925e71a7 refactor(site): add version back to workspace header (#10552) 2023-11-06 13:46:16 -05:00
Bruno Quaresma a9797fa391 refactor(site): improve templates empty state (#10518) 2023-11-06 12:24:45 -05:00
dependabot[bot] e976f50415 ci: bump the github-actions group with 2 updates (#10537)
Bumps the github-actions group with 2 updates: [crate-ci/typos](https://github.com/crate-ci/typos) and [aquasecurity/trivy-action](https://github.com/aquasecurity/trivy-action).


Updates `crate-ci/typos` from 1.16.21 to 1.16.22
- [Release notes](https://github.com/crate-ci/typos/releases)
- [Changelog](https://github.com/crate-ci/typos/blob/master/CHANGELOG.md)
- [Commits](https://github.com/crate-ci/typos/compare/v1.16.21...v1.16.22)

Updates `aquasecurity/trivy-action` from 0.13.0 to 0.13.1
- [Release notes](https://github.com/aquasecurity/trivy-action/releases)
- [Commits](https://github.com/aquasecurity/trivy-action/compare/b77b85c0254bba6789e787844f0585cde1e56320...f78e9ecf42a1271402d4f484518b9313235990e1)

---
updated-dependencies:
- dependency-name: crate-ci/typos
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: aquasecurity/trivy-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 11:20:25 -06:00
dependabot[bot] ee15adda4b chore: bump ts-proto from 1.162.2 to 1.163.0 in /site (#10541)
Bumps [ts-proto](https://github.com/stephenh/ts-proto) from 1.162.2 to 1.163.0.
- [Release notes](https://github.com/stephenh/ts-proto/releases)
- [Changelog](https://github.com/stephenh/ts-proto/blob/main/CHANGELOG.md)
- [Commits](https://github.com/stephenh/ts-proto/compare/v1.162.2...v1.163.0)

---
updated-dependencies:
- dependency-name: ts-proto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 11:20:11 -06:00
dependabot[bot] a5c409dfee chore: bump github.com/gohugoio/hugo from 0.119.0 to 0.120.3 (#10544)
Bumps [github.com/gohugoio/hugo](https://github.com/gohugoio/hugo) from 0.119.0 to 0.120.3.
- [Release notes](https://github.com/gohugoio/hugo/releases)
- [Changelog](https://github.com/gohugoio/hugo/blob/master/hugoreleaser.toml)
- [Commits](https://github.com/gohugoio/hugo/compare/v0.119.0...v0.120.3)

---
updated-dependencies:
- dependency-name: github.com/gohugoio/hugo
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 11:20:00 -06:00
Kyle Carberry 7162dc7e14 fix: use DefaultTransport in exchangeWithClientSecret if nil (#10551) 2023-11-06 16:55:00 +00:00
Kayla Washburn ca6e6213bf chore: use px values instead of theme.spacing and theme.shape.borderRadius (#10519) 2023-11-06 09:43:06 -07:00
dependabot[bot] 0cb875cba5 chore: bump remark-gfm from 3.0.1 to 4.0.0 in /site (#10540)
Bumps [remark-gfm](https://github.com/remarkjs/remark-gfm) from 3.0.1 to 4.0.0.
- [Release notes](https://github.com/remarkjs/remark-gfm/releases)
- [Commits](https://github.com/remarkjs/remark-gfm/compare/3.0.1...4.0.0)

---
updated-dependencies:
- dependency-name: remark-gfm
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 11:32:45 -05:00
dependabot[bot] 04dd663680 chore: bump github.com/fatih/color from 1.15.0 to 1.16.0 (#10546)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 18:06:02 +03:00
Patrick McKee ddaf913088 feat: expose prometheus port in helm chart (#10448)
Co-authored-by: Dean Sheather <dean@deansheather.com>
2023-11-06 14:47:28 +00:00
dependabot[bot] 44bb958114 chore: bump the golang-x group with 4 updates (#10542)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 17:41:35 +03:00
Cian Johnston 4277ca02e5 feat(cli): prompt for misspelled parameter names (#10350)
* feat(cli): add cliutil/levenshtein package
* feat(cli): attempt to catch misspelled parameter names
2023-11-06 13:44:39 +00:00
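Catching misspelled parameter names, as in the commit above, typically works by computing the Levenshtein edit distance from the given name to each known parameter and suggesting the closest one within a small threshold. A minimal sketch of that approach, assuming illustrative names (`levenshteinDistance`/`closestMatch` are not Coder's actual `cliutil/levenshtein` API):

```go
package main

import "fmt"

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

// levenshteinDistance returns the edit distance between a and b,
// where insertions, deletions, and substitutions each cost 1.
func levenshteinDistance(a, b string) int {
	ar, br := []rune(a), []rune(b)
	prev := make([]int, len(br)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ar); i++ {
		cur := make([]int, len(br)+1)
		cur[0] = i
		for j := 1; j <= len(br); j++ {
			cost := 1
			if ar[i-1] == br[j-1] {
				cost = 0
			}
			cur[j] = minInt(minInt(cur[j-1]+1, prev[j]+1), prev[j-1]+cost)
		}
		prev = cur
	}
	return prev[len(br)]
}

// closestMatch returns the candidate nearest to input, if any is
// within maxDist edits.
func closestMatch(input string, candidates []string, maxDist int) (string, bool) {
	best, bestDist := "", maxDist+1
	for _, c := range candidates {
		if d := levenshteinDistance(input, c); d < bestDist {
			best, bestDist = c, d
		}
	}
	return best, bestDist <= maxDist
}

func main() {
	if m, ok := closestMatch("cpu_cuont", []string{"cpu_count", "memory_gb"}, 3); ok {
		fmt.Printf("did you mean %q?\n", m)
	}
}
```

The threshold matters: a small cap (2-3 edits) keeps suggestions helpful without proposing unrelated names for wildly wrong input.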
Dean Sheather bb5acb0332 fix: allow users to use quiet hours endpoint (#10547) 2023-11-06 13:16:50 +00:00
Dean Sheather 95e5419626 chore: fail server startup on invalid DERP map (#10536) 2023-11-06 23:04:07 +10:00
Bruno Quaresma 5b9e26a13f refactor(site): handle edge cases for non-admin users with no workspaces and templates (#10517) 2023-11-06 09:34:45 -03:00
Muhammad Atif Ali 55fb6b663a chore: pin devcontainer.json to pre-nix image (#10417)
fixes #10416
This is a workaround, pinned to an old version of the image.
While testing, it seems like `--privileged` is no longer required.
2023-11-06 15:01:47 +03:00
dependabot[bot] 06d91bee34 chore: bump @playwright/test from 1.38.0 to 1.39.0 in /site (#10458)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-06 13:43:12 +03:00
Cian Johnston 26c3c1226e chore(coderd): add MockAuditor.Contains test helper (#10421)
* Adds a Contains() method on MockAuditor to help with asserting the presence of an audit log with specific fields.
* Updates existing usages of verifyAuditWorkspaceCreated to use the new helper
* Updates test referenced in PR#10396.
2023-11-06 09:17:07 +00:00
Bruno Quaresma e36b606498 fix(site): fix user dropdown width (#10523) 2023-11-04 12:05:19 -03:00
Michael Smith 744c73394a feat: allow users to duplicate workspaces by parameters (#10362)
* chore: add queries for workspace build info

* refactor: clean up logic for CreateWorkspacePage to support multiple modes

* chore: add custom workspace duplication hook

* chore: integrate mode into CreateWorkspacePageView

* fix: add mode to CreateWorkspacePageView stories

* refactor: extract workspace duplication outside CreateWorkspacePage file

* chore: integrate useWorkspaceDuplication into WorkspaceActions

* chore: delete unnecessary function

* refactor: swap useReducer for useState

* fix: swap warning alert for info alert

* refactor: move info alert message

* refactor: simplify UI logic for mode alerts

* fix: prevent dismissed Alerts from affecting layouts

* fix: remove unnecessary prop binding

* docs: reword comment for clarity

* chore: update msw build params to return multiple params

* chore: rename duplicationReady to isDuplicationReady

* chore: expose root component for testing/re-rendering

* chore: get tests in place (still have act warnings)

* refactor: move stuff around for clarity

* chore: finish tests

* chore: revamp tests
2023-11-03 18:23:09 -04:00
Kyle Carberry 23f02651f9 chore: migrate CLI tests to use dbfake (#10500) 2023-11-03 12:22:32 -05:00
dependabot[bot] 6588494abd chore: bump ts-proto from 1.159.1 to 1.162.2 in /site (#10462)
Bumps [ts-proto](https://github.com/stephenh/ts-proto) from 1.159.1 to 1.162.2.
- [Release notes](https://github.com/stephenh/ts-proto/releases)
- [Changelog](https://github.com/stephenh/ts-proto/blob/main/CHANGELOG.md)
- [Commits](https://github.com/stephenh/ts-proto/compare/v1.159.1...v1.162.2)

---
updated-dependencies:
- dependency-name: ts-proto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-03 13:20:36 -04:00
dependabot[bot] 84dc001f7e chore: bump cronstrue from 2.32.0 to 2.41.0 in /site (#10463)
Bumps [cronstrue](https://github.com/bradymholt/cronstrue) from 2.32.0 to 2.41.0.
- [Release notes](https://github.com/bradymholt/cronstrue/releases)
- [Changelog](https://github.com/bradymholt/cRonstrue/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bradymholt/cronstrue/compare/v2.32.0...v2.41.0)

---
updated-dependencies:
- dependency-name: cronstrue
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-03 13:20:32 -04:00
dependabot[bot] 311d1dc576 chore: bump @octokit/types from 12.0.0 to 12.1.1 in /site (#10466)
Bumps [@octokit/types](https://github.com/octokit/types.ts) from 12.0.0 to 12.1.1.
- [Release notes](https://github.com/octokit/types.ts/releases)
- [Commits](https://github.com/octokit/types.ts/compare/v12.0.0...v12.1.1)

---
updated-dependencies:
- dependency-name: "@octokit/types"
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-03 13:20:26 -04:00
dependabot[bot] b86e2e4cd4 chore: bump monaco-editor from 0.43.0 to 0.44.0 in /site (#10467)
Bumps [monaco-editor](https://github.com/microsoft/monaco-editor) from 0.43.0 to 0.44.0.
- [Changelog](https://github.com/microsoft/monaco-editor/blob/main/CHANGELOG.md)
- [Commits](https://github.com/microsoft/monaco-editor/compare/v0.43.0...v0.44.0)

---
updated-dependencies:
- dependency-name: monaco-editor
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-03 13:20:18 -04:00
Bruno Quaresma 7d63dc2b02 refactor(site): add minor design improvements on the setup page (#10511) 2023-11-03 12:53:11 -04:00
Kyle Carberry bb4ce87242 fix: add support for custom auth header with client secret (#10513)
This fixes OAuth2 with JFrog Artifactory.
2023-11-03 16:26:30 +00:00
Kyle Carberry 21dc93c8a3 feat: add log-dir flag to vscodessh for debuggability (#10514) 2023-11-03 16:21:31 +00:00
dependabot[bot] 08844d03fb chore: bump vite from 4.4.2 to 4.5.0 in /site (#10459)
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 4.4.2 to 4.5.0.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v4.5.0/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-03 10:09:22 -04:00
Bruno Quaresma ca353cb81c refactor(site): improve first workspace creation time (#10510)
One tiny improvement to make onboarding faster. When a user has no workspaces, show the existing templates with direct links to workspace creation instead of asking them to see all templates, select one, and then click "Create workspace".

Before:
<img width="1351" alt="Screenshot 2023-11-03 at 10 11 32" src="https://github.com/coder/coder/assets/3165839/46050f16-0196-477a-90e2-a0f475c8b707">

After:
<img width="1360" alt="Screenshot 2023-11-03 at 10 11 43" src="https://github.com/coder/coder/assets/3165839/5bef3d50-b192-49b5-8bdf-dec9654f529f">
2023-11-03 11:03:21 -03:00
Bruno Quaresma c9aeea6f64 chore(site): remove template version editor xservice (#10490)
Close https://github.com/coder/coder/issues/9942
2023-11-02 21:42:33 -03:00
Bruno Quaresma 03045bd47a fix(site): fix dialog loading buttons displaying text over the spinner (#10501) 2023-11-02 21:34:18 -03:00
Bruno Quaresma 01ceb84a22 fix(site): fix health tooltip on deployment bar (#10502)
Fix https://github.com/coder/coder/issues/10489
2023-11-02 21:32:24 -03:00
Bruno Quaresma 716b86b380 refactor(site): make minor design tweaks and fix issues on more options menus (#10493)
- Fix menus not closing when clicking and navigating to a lazy loaded page
- Minor design tweaks
- Make all "More options" menus consistent

Before:

<img width="243" alt="Screenshot 2023-11-02 at 10 21 02" src="https://github.com/coder/coder/assets/3165839/4d4eee7f-60d9-4c55-9559-468760715fe7">
<img width="246" alt="Screenshot 2023-11-02 at 10 18 03" src="https://github.com/coder/coder/assets/3165839/a834263a-f950-4f02-b3c7-c631928c0421">
<img width="251" alt="Screenshot 2023-11-02 at 10 07 40" src="https://github.com/coder/coder/assets/3165839/b2135281-1ffe-422b-a054-0c175f0dc2ad">

Now:

<img width="279" alt="Screenshot 2023-11-02 at 10 21 07" src="https://github.com/coder/coder/assets/3165839/a36b4025-3df0-4bd1-8071-7f1127caa2e2">
<img width="257" alt="Screenshot 2023-11-02 at 10 18 08" src="https://github.com/coder/coder/assets/3165839/57f737d4-fa32-4657-b59d-cf26029f8a69">
<img width="236" alt="Screenshot 2023-11-02 at 10 07 48" src="https://github.com/coder/coder/assets/3165839/a45a7f7d-f492-4498-a1f9-d86f7815d119">
2023-11-02 21:32:04 -03:00
Jon Ayers 2dce4151ba feat: add cli support for workspace automatic updates (#10438) 2023-11-02 14:41:34 -05:00
Bruno Quaresma e756baa0c4 refactor(site): simplify proxy menu (#10496) 2023-11-02 15:39:46 -04:00
Bruno Quaresma ae20df4229 refactor(site): remove version and last built from workspace header (#10495) 2023-11-02 16:26:41 -03:00
Bruno Quaresma d2b8a93638 fix(site): fix favicon theme (#10497) 2023-11-02 18:51:39 +00:00
Kayla Washburn 921b6eb4ee chore: use emotion for styling (pt. 9) (#10474) 2023-11-02 17:51:23 +00:00
Kyle Carberry 839a16e299 feat: add dbfake for workspace builds and resources (#10426)
* feat: add dbfakedata for workspace builds and resources

This creates `coderdtest.NewWithDatabase` and adds a series of
helper functions to `dbfake` that insert structured fake data
for resources into the database.

It allows us to remove provisionerd from a significant amount of
tests which should speed them up and reduce flakes.

* Rename dbfakedata to dbfake

* Migrate workspaceagents_test.go to use the new dbfake

* Migrate agent_test.go to use the new fakes

* Fix comments
2023-11-02 17:15:07 +00:00
Colin Adler ac9c16864c chore: update audit log api docs (#10486) 2023-11-02 16:12:38 +00:00
Bruno Quaresma e756a95759 refactor(site): minor improvements on users page popovers (#10492) 2023-11-02 13:39:52 +00:00
dependabot[bot] b8449d5894 chore: bump axios from 1.5.0 to 1.6.0 in /site (#10460)
Bumps [axios](https://github.com/axios/axios) from 1.5.0 to 1.6.0.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v1.x/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v1.5.0...v1.6.0)

---
updated-dependencies:
- dependency-name: axios
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-02 08:50:18 -04:00
dependabot[bot] 725cda9463 chore: bump next from 13.5.3 to 14.0.1 in /offlinedocs (#10469)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Atif Ali <atif@coder.com>
2023-11-02 09:29:56 +00:00
dependabot[bot] af1c74d62d chore: bump eslint-config-next from 13.5.3 to 14.0.1 in /offlinedocs (#10470)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-02 11:58:50 +03:00
Steven Masley 0c993ea329 feat: add observability configuration values to deployment page (#10471)
* feat: add observability configuration values to deployment page

- Moved audit logging to this page
- Logging, prometheus, tracing, debug, and pprof settings
2023-11-01 15:56:02 -05:00
dependabot[bot] 5c49ce0194 chore: bump eslint from 8.50.0 to 8.52.0 in /offlinedocs (#10468)
Bumps [eslint](https://github.com/eslint/eslint) from 8.50.0 to 8.52.0.
- [Release notes](https://github.com/eslint/eslint/releases)
- [Changelog](https://github.com/eslint/eslint/blob/main/CHANGELOG.md)
- [Commits](https://github.com/eslint/eslint/compare/v8.50.0...v8.52.0)

---
updated-dependencies:
- dependency-name: eslint
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-01 23:48:52 +03:00
dependabot[bot] b5405dc424 chore: bump chromatic from 7.2.0 to 7.6.0 in /site (#10464)
Bumps [chromatic](https://github.com/chromaui/chromatic-cli) from 7.2.0 to 7.6.0.
- [Release notes](https://github.com/chromaui/chromatic-cli/releases)
- [Changelog](https://github.com/chromaui/chromatic-cli/blob/main/CHANGELOG.md)
- [Commits](https://github.com/chromaui/chromatic-cli/compare/v7.2.0...v7.6.0)

---
updated-dependencies:
- dependency-name: chromatic
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-01 23:13:13 +03:00
Kayla Washburn 7f70a23844 chore: use emotion for styling (pt. 8) (#10447) 2023-11-01 12:43:42 -06:00
dependabot[bot] b3e6a461ed chore: bump the storybook group in /site with 7 updates (#10456)
Bumps the storybook group in /site with 7 updates:

| Package | From | To |
| --- | --- | --- |
| [@storybook/addon-actions](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/actions) | `7.4.0` | `7.5.2` |
| [@storybook/addon-essentials](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/essentials) | `7.4.0` | `7.5.2` |
| [@storybook/addon-links](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/links) | `7.4.0` | `7.5.2` |
| [@storybook/addon-mdx-gfm](https://github.com/storybookjs/storybook/tree/HEAD/code/addons/gfm) | `7.4.0` | `7.5.2` |
| [@storybook/react](https://github.com/storybookjs/storybook/tree/HEAD/code/renderers/react) | `7.4.0` | `7.5.2` |
| [@storybook/react-vite](https://github.com/storybookjs/storybook/tree/HEAD/code/frameworks/react-vite) | `7.4.0` | `7.5.2` |
| [storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/cli) | `7.4.0` | `7.5.2` |


Updates `@storybook/addon-actions` from 7.4.0 to 7.5.2
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v7.5.2/code/addons/actions)

Updates `@storybook/addon-essentials` from 7.4.0 to 7.5.2
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v7.5.2/code/addons/essentials)

Updates `@storybook/addon-links` from 7.4.0 to 7.5.2
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v7.5.2/code/addons/links)

Updates `@storybook/addon-mdx-gfm` from 7.4.0 to 7.5.2
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v7.5.2/code/addons/gfm)

Updates `@storybook/react` from 7.4.0 to 7.5.2
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v7.5.2/code/renderers/react)

Updates `@storybook/react-vite` from 7.4.0 to 7.5.2
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v7.5.2/code/frameworks/react-vite)

Updates `storybook` from 7.4.0 to 7.5.2
- [Release notes](https://github.com/storybookjs/storybook/releases)
- [Changelog](https://github.com/storybookjs/storybook/blob/next/CHANGELOG.md)
- [Commits](https://github.com/storybookjs/storybook/commits/v7.5.2/code/lib/cli)

---
updated-dependencies:
- dependency-name: "@storybook/addon-actions"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: storybook
- dependency-name: "@storybook/addon-essentials"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: storybook
- dependency-name: "@storybook/addon-links"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: storybook
- dependency-name: "@storybook/addon-mdx-gfm"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: storybook
- dependency-name: "@storybook/react"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: storybook
- dependency-name: "@storybook/react-vite"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: storybook
- dependency-name: storybook
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: storybook
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-01 20:19:19 +03:00
Kayla Washburn 5284d974ef chore: use emotion for styling (pt. 7) (#10431) 2023-11-01 09:28:26 -06:00
dependabot[bot] ec7d7595ff chore: bump @monaco-editor/react from 4.5.0 to 4.6.0 in /site (#10465)
Bumps [@monaco-editor/react](https://github.com/suren-atoyan/monaco-react) from 4.5.0 to 4.6.0.
- [Release notes](https://github.com/suren-atoyan/monaco-react/releases)
- [Changelog](https://github.com/suren-atoyan/monaco-react/blob/master/CHANGELOG.md)
- [Commits](https://github.com/suren-atoyan/monaco-react/compare/v4.5.0...v4.6.0)

---
updated-dependencies:
- dependency-name: "@monaco-editor/react"
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-01 10:28:04 -04:00
dependabot[bot] 59c7c340a3 chore: bump the eslint group in /site with 7 updates (#10457)
Bumps the eslint group in /site with 7 updates:

| Package | From | To |
| --- | --- | --- |
| [eslint-plugin-testing-library](https://github.com/testing-library/eslint-plugin-testing-library) | `6.0.1` | `6.1.0` |
| [@typescript-eslint/eslint-plugin](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/eslint-plugin) | `6.7.0` | `6.9.1` |
| [@typescript-eslint/parser](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/parser) | `6.7.0` | `6.9.1` |
| [eslint](https://github.com/eslint/eslint) | `8.50.0` | `8.52.0` |
| [eslint-plugin-import](https://github.com/import-js/eslint-plugin-import) | `2.28.0` | `2.29.0` |
| [eslint-plugin-jest](https://github.com/jest-community/eslint-plugin-jest) | `27.4.0` | `27.6.0` |
| [eslint-plugin-unicorn](https://github.com/sindresorhus/eslint-plugin-unicorn) | `48.0.0` | `49.0.0` |


Updates `eslint-plugin-testing-library` from 6.0.1 to 6.1.0
- [Release notes](https://github.com/testing-library/eslint-plugin-testing-library/releases)
- [Changelog](https://github.com/testing-library/eslint-plugin-testing-library/blob/main/.releaserc.json)
- [Commits](https://github.com/testing-library/eslint-plugin-testing-library/compare/v6.0.1...v6.1.0)

Updates `@typescript-eslint/eslint-plugin` from 6.7.0 to 6.9.1
- [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases)
- [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/CHANGELOG.md)
- [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v6.9.1/packages/eslint-plugin)

Updates `@typescript-eslint/parser` from 6.7.0 to 6.9.1
- [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases)
- [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/parser/CHANGELOG.md)
- [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v6.9.1/packages/parser)

Updates `eslint` from 8.50.0 to 8.52.0
- [Release notes](https://github.com/eslint/eslint/releases)
- [Changelog](https://github.com/eslint/eslint/blob/main/CHANGELOG.md)
- [Commits](https://github.com/eslint/eslint/compare/v8.50.0...v8.52.0)

Updates `eslint-plugin-import` from 2.28.0 to 2.29.0
- [Release notes](https://github.com/import-js/eslint-plugin-import/releases)
- [Changelog](https://github.com/import-js/eslint-plugin-import/blob/main/CHANGELOG.md)
- [Commits](https://github.com/import-js/eslint-plugin-import/compare/v2.28.0...v2.29.0)

Updates `eslint-plugin-jest` from 27.4.0 to 27.6.0
- [Release notes](https://github.com/jest-community/eslint-plugin-jest/releases)
- [Changelog](https://github.com/jest-community/eslint-plugin-jest/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jest-community/eslint-plugin-jest/compare/v27.4.0...v27.6.0)

Updates `eslint-plugin-unicorn` from 48.0.0 to 49.0.0
- [Release notes](https://github.com/sindresorhus/eslint-plugin-unicorn/releases)
- [Commits](https://github.com/sindresorhus/eslint-plugin-unicorn/compare/v48.0.0...v49.0.0)

---
updated-dependencies:
- dependency-name: eslint-plugin-testing-library
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: eslint
- dependency-name: "@typescript-eslint/eslint-plugin"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: eslint
- dependency-name: "@typescript-eslint/parser"
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: eslint
- dependency-name: eslint
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: eslint
- dependency-name: eslint-plugin-import
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: eslint
- dependency-name: eslint-plugin-jest
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: eslint
- dependency-name: eslint-plugin-unicorn
  dependency-type: direct:development
  update-type: version-update:semver-major
  dependency-group: eslint
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-01 10:26:56 -04:00
Spike Curtis cac29e0b4d feat: add tables for PGCoordinator v2 (#10442)
Adds tables for a simplified PG Coordinator that only considers Peers and Tunnels, rather than the agent/client distinctions we have today.
2023-11-01 16:30:09 +04:00
Spike Curtis 95ce697e3a fix: schedule autobuild directly on TestExecutorAutostopTemplateDisabled (#10453)
Fixes flake seen here: https://github.com/coder/coder/actions/runs/6716682414/job/18253279654

The test used a cron schedule to compute autobuild ticks, with ticks every hour on the hour. The default TTL was set to an hour. Usually, the next tick is less than one hour in the future, unless the test runs at :00 past the hour, which it did in my flaked run. But, given that this is an autostop test, the cron schedule is irrelevant (such schedules are only used for autostart). So, I've removed it from the test and compute the build ticks directly.

Also, the test originally had the workspace TTL set longer than the default template TTL, and then tested that no build happened when the tick was prior to both. This seems odd, as we want to demonstrate that the executor disregards the workspace TTL. So, I changed the test to set the workspace TTL shorter, send in a tick between the two and verify that we don't autostop, then send a tick after the template TTL and verify that we do.
2023-11-01 15:16:20 +04:00
2023-11-01 15:16:20 +04:00
Spike Curtis 94eb9b8db1 fix: disable t.Parallel on TestPortForward (#10449)
I've said it before, I'll say it again: you can't create a timed context before calling `t.Parallel()` and then use it after.

Fixes flakes like https://github.com/coder/coder/actions/runs/6716682414/job/18253279157

I've chosen just to drop `t.Parallel()` entirely rather than create a second context after the parallel call, since the vast majority of the test time happens before where the parallel call was.  It does all the tailnet setup before `t.Parallel()`.
Leaving a call to `t.Parallel()` is a bug risk for future maintainers to come in and use the wrong context in the latter part of the test by accident.
2023-11-01 13:45:13 +04:00
Spike Curtis 6882e8e524 feat: add conversions from tailnet to proto (#10441)
Adds conversions from existing tailnet types to protobuf
2023-11-01 10:54:00 +04:00
Jon Ayers f4026edd71 feat: add frontend support for enabling automatic workspace updates (#10375) 2023-10-31 17:06:36 -05:00
Spike Curtis 3200b85d87 Revert "chore: bump go.uber.org/goleak from 1.2.1 to 1.3.0 (#10398)" (#10444)
This reverts commit 8fe3dcf18a.
2023-10-31 12:53:29 +00:00
Spike Curtis 8d5a13d768 fix: update tailscale to fixed STUN probe version (#10439) 2023-10-31 10:21:19 +00:00
Spike Curtis a7c671ca07 feat: add workspace agent APIVersion (#10419)
Fixes #10339
2023-10-31 10:08:43 +04:00
dependabot[bot] 90573a6e99 chore: bump github.com/open-policy-agent/opa from 0.57.0 to 0.58.0 (#10424)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-30 21:08:41 +00:00
dependabot[bot] 0bf156cde3 chore: bump github.com/google/uuid from 1.3.1 to 1.4.0 (#10422)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-30 20:12:53 +00:00
dependabot[bot] eaf9176bc5 chore: bump github.com/docker/docker from 23.0.5+incompatible to 24.0.7+incompatible (#10427)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-30 15:07:46 -05:00
Ben Potter e491217a12 docs: add v2.3.3 changelog (#10435) 2023-10-30 15:06:11 -05:00
Steven Masley 9d2b805fb7 fix: prevent infinite redirect oauth auth flow (#10430)
* fix: prevent infinite redirect oauth auth flow
2023-10-30 14:45:06 -05:00
Kyle Carberry 7fc1a65b14 fix: add new aws regions to instance identity (#10434)
Fixes #10433
2023-10-30 19:44:29 +00:00
Kayla Washburn fdf035cd06 chore: remove fly template (#10429) 2023-10-30 13:16:43 -06:00
dependabot[bot] fc1d823cae chore: bump github.com/go-logr/logr from 1.2.4 to 1.3.0 (#10423)
Bumps [github.com/go-logr/logr](https://github.com/go-logr/logr) from 1.2.4 to 1.3.0.
- [Release notes](https://github.com/go-logr/logr/releases)
- [Changelog](https://github.com/go-logr/logr/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-logr/logr/compare/v1.2.4...v1.3.0)

---
updated-dependencies:
- dependency-name: github.com/go-logr/logr
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-30 21:36:01 +03:00
dependabot[bot] 8fe3dcf18a chore: bump go.uber.org/goleak from 1.2.1 to 1.3.0 (#10398)
Bumps [go.uber.org/goleak](https://github.com/uber-go/goleak) from 1.2.1 to 1.3.0.
- [Release notes](https://github.com/uber-go/goleak/releases)
- [Changelog](https://github.com/uber-go/goleak/blob/master/CHANGELOG.md)
- [Commits](https://github.com/uber-go/goleak/compare/v1.2.1...v1.3.0)

---
updated-dependencies:
- dependency-name: go.uber.org/goleak
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-30 21:35:40 +03:00
Kyle Carberry 5abfe5afd0 chore: rename dbfake to dbmem (#10432) 2023-10-30 17:42:20 +00:00
Spike Curtis 7a8da08124 feat: add api_version column to workspace_agents (#10418)
Adds api_version to workspace_agents table

Part of #10399
2023-10-30 21:30:49 +04:00
dependabot[bot] 6b7858c516 ci: bump the github-actions group with 2 updates (#10420)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-30 11:25:37 +00:00
Mathias Fredriksson 9d3785def8 test(cli/cliui): make agent tests more robust (#10415)
Fixes #10408
2023-10-30 13:20:10 +02:00
Spike Curtis 2a6fd90140 feat: add tailnet and agent API definitions (#10324)
Adds API definitions and packages for Tailnet and Agent APIs (API version 2.0)
2023-10-30 12:14:45 +04:00
Spike Curtis c2e3648484 fix: disable tests broken by daylight savings (#10414) 2023-10-30 06:44:30 +00:00
dependabot[bot] 3b50530a63 chore: bump gopkg.in/DataDog/dd-trace-go.v1 from 1.55.0 to 1.56.1 (#10403)
Bumps gopkg.in/DataDog/dd-trace-go.v1 from 1.55.0 to 1.56.1.

---
updated-dependencies:
- dependency-name: gopkg.in/DataDog/dd-trace-go.v1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-27 22:41:33 +03:00
Mathias Fredriksson 7fecd39e23 fix(agent/agentscripts): display informative error for ErrWaitDelay (#10407)
Fixes #10400
2023-10-27 19:07:26 +03:00
Muhammad Atif Ali 99fda4a8e2 docs: replace gitauth with externalauth (#10409) 2023-10-27 10:53:56 -04:00
Muhammad Atif Ali 51aa32cfcf chore: limit history to the last 30 runs/days for PR deploy and cleanup workflows (#10406) 2023-10-27 11:15:21 +00:00
Muhammad Atif Ali 6ae8bfed94 chore(examples): fix a small typo (#10404) 2023-10-26 09:42:46 +00:00
dependabot[bot] 35e7d7854a chore: bump google.golang.org/grpc from 1.58.2 to 1.59.0 (#10381)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-26 11:36:50 +03:00
dependabot[bot] edcbd4f394 chore: bump github.com/coreos/go-oidc/v3 from 3.6.0 to 3.7.0 (#10397)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-26 11:36:10 +03:00
dependabot[bot] ea578ceabb chore: bump github.com/prometheus/common from 0.44.0 to 0.45.0 (#10399)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-26 11:35:47 +03:00
Mathias Fredriksson 0ddd54d34b fix(coderd/provisionerdserver): avoid error log during shutdown (#10402) 2023-10-25 18:31:28 +03:00
Josh Vawdrey fdc9097d6c feat(provisioner): expose template version to provisioner (#10306) 2023-10-25 14:44:08 +03:00
dependabot[bot] e7fd2cb1a6 chore: bump github.com/djherbis/times from 1.5.0 to 1.6.0 (#10380)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-24 23:03:08 +03:00
dependabot[bot] 670ee4d54f chore: bump google.golang.org/api from 0.147.0 to 0.148.0 (#10383)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-24 22:58:41 +03:00
dependabot[bot] 39fbf74c7d ci: bump the github-actions group with 1 update (#10379)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-24 22:14:48 +03:00
Mathias Fredriksson eac155aec2 test(cli): fix TestServer flake due to DNS lookup (#10390) 2023-10-24 22:12:03 +03:00
Michael Smith 7732ac475a refactor: update logic for all metadata query factories (#10356)
* refactor: simplify metadata patterns

* fix: add return type to me factory

* fix: make sure query key for me is always defined at type level
2023-10-24 08:42:38 -04:00
Josh Vawdrey 6b2aee4133 feat(cli): make the dotfiles repository directory configurable (#10377) 2023-10-24 12:00:04 +03:00
Devarshi Shimpi d8592bf09a fix(README.md): update installation link (#10275) 2023-10-24 11:36:38 +03:00
Asher 4af8446f48 fix: initialize terminal with correct size (#10369)
* Fit once during creation

This does not fix any bugs (that I know of) but we only need to fit once
when the terminal is created, not every time we reconnect.  Granted,
currently we do not support reconnecting without refreshing anyway so it
does not really matter, but this just seems more correct.

Plus now we will not have to pass the fit addon around.

* Pass size when connecting web socket URL

I think this will solve an issue where screen does not correctly
handle an immediate resize.  It seems to ignore the resize, but even if
you send it again nothing changes, seemingly thinking it is already at
that size?

* Use new struct for decoding reconnecting pty requests

Decoding a JSON message does not touch omitted (or null) fields so once
a message with a resize comes in, every single message from that point
will cause a resize.

I am not sure if this is an actual problem in practice but at the very
least it seems unintentional.
2023-10-23 23:42:39 -04:00
Mathias Fredriksson 1286904de8 test(agent): improve TestAgent_Session_TTY_MOTD_Update (#10385) 2023-10-23 17:32:28 +00:00
Mathias Fredriksson 09f7b8e88c fix(agent/agentscripts): track cron run and wait for cron stop (#10388)
Fixes #10289
2023-10-23 17:08:52 +00:00
Mathias Fredriksson 1a2aea3a6b fix(agent): prevent metadata from being discarded if report is slow (#10386) 2023-10-23 17:02:54 +00:00
Mathias Fredriksson 6683ad989a test(coderd): fix TestWorkspaceBuild flake (#10387)
Fixes #10335
2023-10-23 19:45:54 +03:00
Mathias Fredriksson 8f1b4fb061 test(agent): fix service banner trim test flake (#10384) 2023-10-23 18:06:59 +03:00
Ben Potter a7243b3f3b docs: add v2.3.2 changelog (#10371) 2023-10-20 21:26:43 +00:00
Jon Ayers 1372bf82f5 chore: revert "chore: remove workspace_actions experiment (#10030)" (#10363) 2023-10-20 13:21:53 -05:00
Asher 57c9d88703 chore(site): remove terminal xservice (#10234)
* Remove terminalXService

This is a prelude to the change I actually want to make, which is to
send the size of the terminal on the web socket URL after we do a fit.
I have found xstate so confusing that it was easier to just rewrite it.

* Fix hanging tests

I am not really sure what ws.connected is doing but it seems to somehow
block updates.  Something to do with `act()` maybe?

Basically, the useEffect creating the terminal never updates once the
config query finishes, so the web socket is never created, and the test
hangs forever.

It might have been working before only because the web socket was
created using xstate rather than useEffect and once it connected it
would unblock and React could update again but this is just a guess.

* Ignore other config changes

The terminal only cares about the renderer specifically, no need to
recreate the terminal if something else changes.

* Break out port forward URL open to util

Felt like this could be broken out to reduce the component size.  Also
trying to figure out why it is causing the terminal to create multiple
times.

* Prevent handleWebLink change from recreating terminal

Depending on the timing, handleWebLink was causing the terminal to get
recreated.  We only need to create the terminal once unless the render
type changes.

Recreating the terminal was also recreating the web socket pointlessly.
2023-10-20 10:18:17 -08:00
Muhammad Atif Ali 5ebb702e00 chore: add OIDC provider logos (#10365)
* chore: add OIDC provider logos

* Add files via upload

* fmt
2023-10-20 19:30:05 +03:00
Eric Paulsen 9dbc913798 fix: additional cluster SA, role names (#10366) 2023-10-20 11:44:16 -04:00
Kira Pilot ed5567ba28 fix: show dormant and suspended users in groups (#10333)
* fix: show dormant and suspended users in groups

* added status column
2023-10-20 11:36:00 -04:00
Bruno Quaresma ac322724b0 chore(site): replace custom LoadingButton from the one in MUI (#10351) 2023-10-20 09:57:27 -03:00
Bruno Quaresma 3d9bfdd5dc chore(site): remove update check service (#10355) 2023-10-20 09:41:34 -03:00
Bruno Quaresma 1ba5169109 chore(site): remove search users and groups xservice (#10353) 2023-10-20 09:33:07 -03:00
Jon Ayers d33526108f feat: add frontend support for mandating active template version (#10338) 2023-10-19 18:21:52 -05:00
Jon Ayers f5f150d568 feat: add cli support for --require-active-version (#10337) 2023-10-19 17:16:15 -05:00
Ammar Bandukwala b799014832 docs: rework telemetry doc and add CLI warning (#10354) 2023-10-19 15:50:20 -05:00
Kira Pilot 9c9319f81e fix: resolve User is not unauthenticated error seen on logout (#10349)
* fix: do not cache getAuthenticatedUser call

* use initialQuery, add back meta tag for initial load of users

* lift initialUserData
2023-10-19 14:50:53 -04:00
Michael Smith ab2904a676 feat: add user groups column to users table (#10284)
* refactor: extract UserRoleCell into separate component

* wip: add placeholder Groups column

* fix: remove redundant css styles

* refactor: update EditRolesButton to use Sets to detect selections

* wip: commit progress for updated roles column

* wip: commit current role pill progress

* fix: update state sync logic

* chore: add groupsByUserId query options factory

* fix: update return value of select function

* chore: drill groups data down to cell component

* wip: commit current cell progress

* fix: remove redundant classes

* wip: commit current styling progress

* fix: update line height for CTA

* fix: update spacing

* chore: add tooltip for Groups column header

* fix: remove tsbuild file

* refactor: consolidate tooltip components

* fix: update font size defaults inside theme

* fix: expand hoverable/clickable area of groups cell

* fix: remove possible undefined cases from HelpTooltip

* chore: add popover functionality to groups

* wip: commit progress on groups tooltip

* fix: remove zero-height group name visual bug

* feat: get basic version of user group tooltips done

* perf: move sort order callback outside loop

* fix: update spacing for tooltip

* feat: make popovers entirely hover-based

* fix: disable scroll locking for popover

* docs: add comments explaining some pitfalls with Popover component

* refactor: simplify userRoleCell implementation

* feat: complete main feature

* fix: prevent scroll lock for role tooltips

* fix: change import to type import

* refactor: simplify how groups are clustered

* refactor: update UserRoleCell to use Popover

* refactor: remove unnecessary fragment

* chore: add id/aria support for Popover

* refactor: update UserGroupsCell to use Popover

* chore: redo visual design for UserGroupsCell

* fix: shrink UserGroupsCell text

* fix: update UsersTable test to include groups info
2023-10-19 14:31:48 -04:00
Bruno Quaresma 557adab224 chore(site): remove template ACL XService (#10332) 2023-10-19 14:59:08 -03:00
dependabot[bot] 21f87313bd chore: bump github.com/aws/smithy-go from 1.14.2 to 1.15.0 (#10282)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-19 16:08:56 +03:00
Muhammad Atif Ali 42c21d400f fix(docs): update external-auth docs to use coder_external_auth (#10347) 2023-10-19 12:30:48 +00:00
Bruno Quaresma f677c4470b chore(site): add custom popover component (#10319) 2023-10-19 09:13:21 -03:00
Bruno Quaresma b8c7b56fda fix(site): fix tabs in the template layout (#10334) 2023-10-19 09:12:41 -03:00
Marcin Tojek c4f590581e feat: expose template insights as Prometheus metrics (#10325) 2023-10-19 08:45:12 +00:00
Jon Ayers 997493d4ae feat: add template setting to require active template version (#10277) 2023-10-18 17:07:21 -05:00
Colin Adler 1ad998ee3a fix: add requester IP to workspace build audit logs (#10242) 2023-10-18 15:08:02 -05:00
Colin Adler 504cedf15a feat: add telemetry for external provisioners (#10322) 2023-10-18 14:20:30 -05:00
Mathias Fredriksson 9b73020f11 ci(.github): set DataDog upload timeout (#10328) 2023-10-18 20:07:52 +03:00
Bruno Quaresma c93fe8ddbe chore(site): remove template version machine (#10315) 2023-10-18 09:18:03 -03:00
Muhammad Atif Ali fe05fd1e6e docs: update vscode web docs (#10327) 2023-10-18 12:13:44 +00:00
Kayla Washburn 2b5e02f5b2 refactor: improve e2e test reporting (#10304) 2023-10-17 16:11:42 -06:00
1349 changed files with 80950 additions and 37430 deletions
+4 -3
@@ -4,9 +4,10 @@
 	"features": {
 		// See all possible options here https://github.com/devcontainers/features/tree/main/src/docker-in-docker
-		"ghcr.io/devcontainers/features/docker-in-docker:2": {}
+		"ghcr.io/devcontainers/features/docker-in-docker:2": {
+			"moby": "false"
+		}
 	},
 	// SYS_PTRACE to enable go debugging
-	// without --priviliged the Github Codespace build fails (not required otherwise)
-	"runArgs": ["--cap-add=SYS_PTRACE", "--privileged"]
+	"runArgs": ["--cap-add=SYS_PTRACE"]
 }
+1 -47
@@ -4,61 +4,15 @@ description: |
 inputs:
   version:
     description: "The Go version to use."
-    default: "1.20.10"
+    default: "1.21.5"
 runs:
   using: "composite"
   steps:
-    - name: Cache go toolchain
-      uses: buildjet/cache@v3
-      with:
-        path: |
-          ${{ runner.tool_cache }}/go/${{ inputs.version }}
-        key: gotoolchain-${{ runner.os }}-${{ inputs.version }}
-        restore-keys: |
-          gotoolchain-${{ runner.os }}-
     - name: Setup Go
       uses: buildjet/setup-go@v4
       with:
         # We do our own caching for implementation clarity.
         cache: false
         go-version: ${{ inputs.version }}
-    - name: Get cache dirs
-      shell: bash
-      run: |
-        set -x
-        echo "GOMODCACHE=$(go env GOMODCACHE)" >> $GITHUB_ENV
-        echo "GOCACHE=$(go env GOCACHE)" >> $GITHUB_ENV
-    # We split up GOMODCACHE from GOCACHE because the latter must be invalidated
-    # on code change, but the former can be kept.
-    - name: Cache $GOMODCACHE
-      uses: buildjet/cache@v3
-      with:
-        path: |
-          ${{ env.GOMODCACHE }}
-        key: gomodcache-${{ runner.os }}-${{ hashFiles('**/go.sum') }}-${{ github.job }}
-    # restore-keys aren't used because it causes the cache to grow
-    # infinitely. go.sum changes very infrequently, so rebuilding from
-    # scratch every now and then isn't terrible.
-    - name: Cache $GOCACHE
-      uses: buildjet/cache@v3
-      with:
-        path: |
-          ${{ env.GOCACHE }}
-        # Job name must be included in the key for effective test cache reuse.
-        # The key format is intentionally different than GOMODCACHE, because any
-        # time a Go file changes we invalidate this cache, whereas GOMODCACHE is
-        # only invalidated when go.sum changes.
-        # The number in the key is incremented when the cache gets too large,
-        # since this technically grows without bound.
-        key: gocache2-${{ runner.os }}-${{ github.job }}-${{ hashFiles('**/*.go', 'go.**') }}
-        restore-keys: |
-          gocache2-${{ runner.os }}-${{ github.job }}-
-          gocache2-${{ runner.os }}-
     - name: Install gotestsum
       shell: bash
       run: go install gotest.tools/gotestsum@latest
+1 -1
@@ -7,4 +7,4 @@ runs:
     - name: Setup sqlc
       uses: sqlc-dev/setup-sqlc@v4
       with:
-        sqlc-version: "1.20.0"
+        sqlc-version: "1.24.0"
+2 -2
@@ -5,7 +5,7 @@ runs:
   using: "composite"
   steps:
     - name: Install Terraform
-      uses: hashicorp/setup-terraform@v2
+      uses: hashicorp/setup-terraform@v3
       with:
-        terraform_version: 1.5.5
+        terraform_version: 1.5.7
         terraform_wrapper: false
+8 -33
@@ -44,13 +44,9 @@ updates:
         update-types:
           - version-update:semver-patch
     groups:
-      otel:
+      go:
         patterns:
-          - "go.nhat.io/otelsql"
-          - "go.opentelemetry.io/otel*"
-      golang-x:
-        patterns:
-          - "golang.org/x/*"
+          - "*"
   # Update our Dockerfile.
   - package-ecosystem: "docker"
@@ -66,10 +62,6 @@ updates:
       # We need to coordinate terraform updates with the version hardcoded in
       # our Go code.
       - dependency-name: "terraform"
-    groups:
-      scripts-docker:
-        patterns:
-          - "*"
   - package-ecosystem: "npm"
     directory: "/site/"
@@ -94,30 +86,9 @@ updates:
           - version-update:semver-major
     open-pull-requests-limit: 15
     groups:
-      react:
+      site:
         patterns:
-          - "react*"
-          - "@types/react*"
-      xterm:
-        patterns:
-          - "xterm*"
-      xstate:
-        patterns:
-          - "xstate"
-          - "@xstate*"
-      mui:
-        patterns:
-          - "@mui*"
-      storybook:
-        patterns:
-          - "@storybook*"
-          - "storybook*"
-      eslint:
-        patterns:
-          - "eslint*"
-          - "@eslint*"
-          - "@typescript-eslint/eslint-plugin"
-          - "@typescript-eslint/parser"
+          - "*"
   - package-ecosystem: "npm"
     directory: "/offlinedocs/"
@@ -140,6 +111,10 @@ updates:
       - dependency-name: "@types/node"
         update-types:
           - version-update:semver-major
+    groups:
+      offlinedocs:
+        patterns:
+          - "*"
   # Update dogfood.
   - package-ecosystem: "terraform"
+27
@@ -0,0 +1,27 @@
app = "paris-coder"
primary_region = "cdg"
[experimental]
entrypoint = ["/bin/sh", "-c", "CODER_DERP_SERVER_RELAY_URL=\"http://[${FLY_PRIVATE_IP}]:3000\" /opt/coder wsproxy server"]
auto_rollback = true
[build]
image = "ghcr.io/coder/coder-preview:main"
[env]
CODER_ACCESS_URL = "https://paris.fly.dev.coder.com"
CODER_HTTP_ADDRESS = "0.0.0.0:3000"
CODER_PRIMARY_ACCESS_URL = "https://dev.coder.com"
CODER_WILDCARD_ACCESS_URL = "*--apps.paris.fly.dev.coder.com"
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 0
[[vm]]
cpu_kind = "shared"
cpus = 2
memory_mb = 512
@@ -0,0 +1,27 @@
app = "sao-paulo-coder"
primary_region = "gru"
[experimental]
entrypoint = ["/bin/sh", "-c", "CODER_DERP_SERVER_RELAY_URL=\"http://[${FLY_PRIVATE_IP}]:3000\" /opt/coder wsproxy server"]
auto_rollback = true
[build]
image = "ghcr.io/coder/coder-preview:main"
[env]
CODER_ACCESS_URL = "https://sao-paulo.fly.dev.coder.com"
CODER_HTTP_ADDRESS = "0.0.0.0:3000"
CODER_PRIMARY_ACCESS_URL = "https://dev.coder.com"
CODER_WILDCARD_ACCESS_URL = "*--apps.sao-paulo.fly.dev.coder.com"
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 0
[[vm]]
cpu_kind = "shared"
cpus = 2
memory_mb = 512
+27
@@ -0,0 +1,27 @@
app = "sydney-coder"
primary_region = "syd"
[experimental]
entrypoint = ["/bin/sh", "-c", "CODER_DERP_SERVER_RELAY_URL=\"http://[${FLY_PRIVATE_IP}]:3000\" /opt/coder wsproxy server"]
auto_rollback = true
[build]
image = "ghcr.io/coder/coder-preview:main"
[env]
CODER_ACCESS_URL = "https://sydney.fly.dev.coder.com"
CODER_HTTP_ADDRESS = "0.0.0.0:3000"
CODER_PRIMARY_ACCESS_URL = "https://dev.coder.com"
CODER_WILDCARD_ACCESS_URL = "*--apps.sydney.fly.dev.coder.com"
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 0
[[vm]]
cpu_kind = "shared"
cpus = 2
memory_mb = 512
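The wsproxy entrypoints above wrap `FLY_PRIVATE_IP` in square brackets because Fly private addresses are IPv6, and an IPv6 host in a URL must be bracketed. A minimal standalone sketch of that URL construction, with an illustrative placeholder address:

```shell
#!/bin/sh
# Sketch of the relay-URL construction from the [experimental] entrypoint
# above. Fly private IPs are IPv6, so the host must be bracketed per the
# URL syntax; the address below is an illustrative placeholder.
set -eu

FLY_PRIVATE_IP="fdaa:0:1::3"
CODER_DERP_SERVER_RELAY_URL="http://[${FLY_PRIVATE_IP}]:3000"
echo "$CODER_DERP_SERVER_RELAY_URL"   # http://[fdaa:0:1::3]:3000
```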
.github/workflows/ci.yaml
@@ -31,10 +31,12 @@ jobs:
runs-on: ubuntu-latest
outputs:
docs-only: ${{ steps.filter.outputs.docs_count == steps.filter.outputs.all_count }}
docs: ${{ steps.filter.outputs.docs }}
go: ${{ steps.filter.outputs.go }}
ts: ${{ steps.filter.outputs.ts }}
k8s: ${{ steps.filter.outputs.k8s }}
ci: ${{ steps.filter.outputs.ci }}
db: ${{ steps.filter.outputs.db }}
offlinedocs-only: ${{ steps.filter.outputs.offlinedocs_count == steps.filter.outputs.all_count }}
offlinedocs: ${{ steps.filter.outputs.offlinedocs }}
steps:
@@ -56,6 +58,12 @@ jobs:
- "examples/web-server/**"
- "examples/monitoring/**"
- "examples/lima/**"
db:
- "**.sql"
- "coderd/database/queries/**"
- "coderd/database/migrations"
- "coderd/database/sqlc.yaml"
- "coderd/database/dump.sql"
go:
- "**.sql"
- "**.go"
@@ -136,7 +144,7 @@ jobs:
# Check for any typos
- name: Check for typos
uses: crate-ci/typos@v1.16.19
uses: crate-ci/typos@v1.16.25
with:
config: .github/workflows/typos.toml
@@ -220,7 +228,7 @@ jobs:
with:
# This doesn't need caching. It's super fast anyways!
cache: false
go-version: 1.20.10
go-version: 1.21.5
- name: Install shfmt
run: go install mvdan.cc/sh/v3/cmd/shfmt@v3.7.0
@@ -291,14 +299,9 @@ jobs:
gotestsum --junitfile="gotests.xml" --jsonfile="gotests.json" \
--packages="./..." -- $PARALLEL_FLAG -short -failfast $COVERAGE_FLAGS
- name: Print test stats
if: success() || failure()
run: |
# Artifacts are not available after rerunning a job,
# so we need to print the test stats to the log.
go run ./scripts/ci-report/main.go gotests.json | tee gotests_stats.json
- name: Upload test stats to Datadog
timeout-minutes: 1
continue-on-error: true
uses: ./.github/actions/upload-datadog
if: success() || failure()
with:
@@ -319,7 +322,9 @@ jobs:
test-go-pg:
runs-on: ${{ github.repository_owner == 'coder' && 'buildjet-8vcpu-ubuntu-2204' || 'ubuntu-latest' }}
needs: changes
needs:
- changes
- sqlc-vet # No point in testing the DB if the queries are invalid
if: needs.changes.outputs.go == 'true' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
# This timeout must be greater than the timeout set by `go test` in
# `make test-postgres` to ensure we receive a trace of running
@@ -343,14 +348,9 @@ jobs:
export TS_DEBUG_DISCO=true
make test-postgres
- name: Print test stats
if: success() || failure()
run: |
# Artifacts are not available after rerunning a job,
# so we need to print the test stats to the log.
go run ./scripts/ci-report/main.go gotests.json | tee gotests_stats.json
- name: Upload test stats to Datadog
timeout-minutes: 1
continue-on-error: true
uses: ./.github/actions/upload-datadog
if: success() || failure()
with:
@@ -391,105 +391,13 @@ jobs:
gotestsum --junitfile="gotests.xml" -- -race ./...
- name: Upload test stats to Datadog
timeout-minutes: 1
continue-on-error: true
uses: ./.github/actions/upload-datadog
if: always()
with:
api-key: ${{ secrets.DATADOG_API_KEY }}
deploy:
name: "deploy"
runs-on: ${{ github.repository_owner == 'coder' && 'buildjet-16vcpu-ubuntu-2204' || 'ubuntu-latest' }}
timeout-minutes: 30
needs: changes
if: |
github.ref == 'refs/heads/main' && !github.event.pull_request.head.repo.fork
&& needs.changes.outputs.docs-only == 'false'
permissions:
contents: read
id-token: write
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@v1
with:
workload_identity_provider: projects/573722524737/locations/global/workloadIdentityPools/github/providers/github
service_account: coder-ci@coder-dogfood.iam.gserviceaccount.com
- name: Set up Google Cloud SDK
uses: google-github-actions/setup-gcloud@v1
- name: Setup Node
uses: ./.github/actions/setup-node
- name: Setup Go
uses: ./.github/actions/setup-go
- name: Install goimports
run: go install golang.org/x/tools/cmd/goimports@latest
- name: Install nfpm
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.16.0
- name: Install zstd
run: sudo apt-get install -y zstd
- name: Build Release
run: |
set -euo pipefail
go mod download
version="$(./scripts/version.sh)"
make gen/mark-fresh
make -j \
build/coder_"$version"_windows_amd64.zip \
build/coder_"$version"_linux_amd64.{tar.gz,deb}
- name: Install Release
run: |
set -euo pipefail
regions=(
# gcp-region-id instance-name systemd-service-name
"us-central1-a coder coder"
"australia-southeast1-b coder-sydney coder-workspace-proxy"
"europe-west3-c coder-europe coder-workspace-proxy"
"southamerica-east1-b coder-brazil coder-workspace-proxy"
)
deb_pkg="./build/coder_$(./scripts/version.sh)_linux_amd64.deb"
if [ ! -f "$deb_pkg" ]; then
echo "deb package not found: $deb_pkg"
ls -l ./build
exit 1
fi
gcloud config set project coder-dogfood
for region in "${regions[@]}"; do
echo "::group::$region"
set -- $region
set -x
gcloud config set compute/zone "$1"
gcloud compute scp "$deb_pkg" "${2}:/tmp/coder.deb"
gcloud compute ssh "$2" -- /bin/sh -c "set -eux; sudo dpkg -i --force-confdef /tmp/coder.deb; sudo systemctl daemon-reload; sudo service '$3' restart"
set +x
echo "::endgroup::"
done
- name: Upload build artifacts
uses: actions/upload-artifact@v3
with:
name: coder
path: |
./build/*.zip
./build/*.tar.gz
./build/*.deb
retention-days: 7
test-js:
runs-on: ${{ github.repository_owner == 'coder' && 'buildjet-8vcpu-ubuntu-2204' || 'ubuntu-latest' }}
needs: changes
@@ -572,7 +480,7 @@ jobs:
- name: Upload Playwright Failed Tests
if: always() && github.actor != 'dependabot[bot]' && runner.os == 'Linux' && !github.event.pull_request.head.repo.fork
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: failed-test-videos
path: ./site/test-results/**/*.webm
@@ -580,7 +488,7 @@ jobs:
- name: Upload pprof dumps
if: always() && github.actor != 'dependabot[bot]' && runner.os == 'Linux' && !github.event.pull_request.head.repo.fork
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: debug-pprof-dumps
path: ./site/test-results/**/debug-pprof-*.txt
@@ -607,7 +515,7 @@ jobs:
# the check to pass. This is desired in PRs, but not in mainline.
- name: Publish to Chromatic (non-mainline)
if: github.ref != 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@v1
uses: chromaui/action@v10
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -635,7 +543,7 @@ jobs:
# infinitely "in progress" in mainline unless we re-review each build.
- name: Publish to Chromatic (mainline)
if: github.ref == 'refs/heads/main' && github.repository_owner == 'coder'
uses: chromaui/action@v1
uses: chromaui/action@v10
env:
NODE_OPTIONS: "--max_old_space_size=4096"
STORYBOOK: true
@@ -655,7 +563,8 @@ jobs:
name: offlinedocs
needs: changes
runs-on: ${{ github.repository_owner == 'coder' && 'buildjet-8vcpu-ubuntu-2204' || 'ubuntu-latest' }}
if: needs.changes.outputs.offlinedocs == 'true' || needs.changes.outputs.ci == 'true'
if: needs.changes.outputs.offlinedocs == 'true' || needs.changes.outputs.ci == 'true' || needs.changes.outputs.docs == 'true'
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -668,11 +577,25 @@ jobs:
with:
directory: offlinedocs
- name: Install Protoc
run: |
mkdir -p /tmp/proto
pushd /tmp/proto
curl -L -o protoc.zip https://github.com/protocolbuffers/protobuf/releases/download/v23.3/protoc-23.3-linux-x86_64.zip
unzip protoc.zip
cp -r ./bin/* /usr/local/bin
cp -r ./include /usr/local/bin/include
popd
- name: Setup Go
uses: ./.github/actions/setup-go
- name: Install go tools
run: |
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.33
go install golang.org/x/tools/cmd/goimports@latest
go install github.com/mikefarah/yq/v4@v4.30.6
go install github.com/golang/mock/mockgen@v1.6.0
- name: Setup sqlc
@@ -704,6 +627,7 @@ jobs:
- test-js
- test-e2e
- offlinedocs
- sqlc-vet
# Allow this job to run even if the needed jobs fail, are skipped or
# cancelled.
if: always()
@@ -718,6 +642,8 @@ jobs:
echo "- test-go-pg: ${{ needs.test-go-pg.result }}"
echo "- test-go-race: ${{ needs.test-go-race.result }}"
echo "- test-js: ${{ needs.test-js.result }}"
echo "- test-e2e: ${{ needs.test-e2e.result }}"
echo "- offlinedocs: ${{ needs.offlinedocs.result }}"
echo
# We allow skipped jobs to pass, but not failed or cancelled jobs.
@@ -728,29 +654,23 @@ jobs:
echo "Required checks have passed"
build-main-image:
# This build and publihes ghcr.io/coder/coder-preview:main for each merge commit to main branch.
# We are only building this for amd64 plateform. (>95% pulls are for amd64)
build:
# This builds and publishes ghcr.io/coder/coder-preview:main for each commit
# to main branch. We are only building this for amd64 platform. (>95% pulls
# are for amd64)
needs: changes
if: github.ref == 'refs/heads/main' && needs.changes.outputs.docs-only == 'false'
runs-on: ${{ github.repository_owner == 'coder' && 'buildjet-8vcpu-ubuntu-2204' || 'ubuntu-latest' }}
env:
DOCKER_CLI_EXPERIMENTAL: "enabled"
outputs:
IMAGE: ghcr.io/coder/coder-preview:${{ steps.build-docker.outputs.tag }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup Node
uses: ./.github/actions/setup-node
- name: Setup Go
uses: ./.github/actions/setup-go
- name: Setup sqlc
uses: ./.github/actions/setup-sqlc
- name: GHCR Login
uses: docker/login-action@v3
with:
@@ -758,27 +678,51 @@ jobs:
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Linux amd64 Docker image
id: build_and_push
- name: Setup Node
uses: ./.github/actions/setup-node
- name: Setup Go
uses: ./.github/actions/setup-go
- name: Install nfpm
run: go install github.com/goreleaser/nfpm/v2/cmd/nfpm@v2.16.0
- name: Install zstd
run: sudo apt-get install -y zstd
- name: Build
run: |
set -euxo pipefail
go mod download
make gen/mark-fresh
export DOCKER_IMAGE_NO_PREREQUISITES=true
version="$(./scripts/version.sh)"
make gen/mark-fresh
make -j \
build/coder_linux_amd64 \
build/coder_"$version"_windows_amd64.zip \
build/coder_"$version"_linux_amd64.{tar.gz,deb}
- name: Build and Push Linux amd64 Docker Image
id: build-docker
run: |
set -euxo pipefail
version="$(./scripts/version.sh)"
tag="main-$(echo "$version" | sed 's/+/-/g')"
export CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
make -j build/coder_linux_amd64
./scripts/build_docker.sh \
--arch amd64 \
--target ghcr.io/coder/coder-preview:main \
--target "ghcr.io/coder/coder-preview:$tag" \
--version $version \
--push \
build/coder_linux_amd64
# Tag image with new package tag and push
tag=$(echo "$version" | sed 's/+/-/g')
docker tag ghcr.io/coder/coder-preview:main ghcr.io/coder/coder-preview:main-$tag
docker push ghcr.io/coder/coder-preview:main-$tag
# Tag as main
docker tag "ghcr.io/coder/coder-preview:$tag" ghcr.io/coder/coder-preview:main
docker push ghcr.io/coder/coder-preview:main
# Store the tag in an output variable so we can use it in other jobs
echo "tag=$tag" >> $GITHUB_OUTPUT
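The tagging step above pipes the version through `sed 's/+/-/g'` because Docker tag references only allow `[A-Za-z0-9_.-]`, so the semver build-metadata separator `+` must be mapped to `-`. A standalone sketch of that sanitization, with an illustrative version string:

```shell
#!/bin/sh
# Sketch of the tag sanitization in the build step above: '+' is not a
# valid character in a Docker tag, so it is replaced with '-'.
set -eu

version="2.6.0+dev-g1234abc"   # illustrative; CI derives this from scripts/version.sh
tag="main-$(echo "$version" | sed 's/+/-/g')"
echo "$tag"   # main-2.6.0-dev-g1234abc
```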
- name: Prune old images
uses: vlaurin/action-ghcr-prune@v0.5.0
@@ -790,3 +734,183 @@ jobs:
keep-tags-regexes: ^pr
prune-tags-regexes: ^main-
prune-untagged: true
- name: Upload build artifacts
uses: actions/upload-artifact@v4
with:
name: coder
path: |
./build/*.zip
./build/*.tar.gz
./build/*.deb
retention-days: 7
deploy:
name: "deploy"
runs-on: ubuntu-latest
timeout-minutes: 30
needs:
- changes
- build
if: |
github.ref == 'refs/heads/main' && !github.event.pull_request.head.repo.fork
&& needs.changes.outputs.docs-only == 'false'
permissions:
contents: read
id-token: write
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@v2
with:
workload_identity_provider: projects/573722524737/locations/global/workloadIdentityPools/github/providers/github
service_account: coder-ci@coder-dogfood.iam.gserviceaccount.com
- name: Set up Google Cloud SDK
uses: google-github-actions/setup-gcloud@v2
- name: Set up Flux CLI
uses: fluxcd/flux2/action@main
with:
# Keep this up to date with the version of flux installed in dogfood cluster
version: "2.2.1"
- name: Get Cluster Credentials
uses: "google-github-actions/get-gke-credentials@v2"
with:
cluster_name: dogfood-v2
location: us-central1-a
project_id: coder-dogfood-v2
- name: Reconcile Flux
run: |
set -euxo pipefail
flux --namespace flux-system reconcile source git flux-system
flux --namespace flux-system reconcile source git coder-main
flux --namespace flux-system reconcile kustomization flux-system
flux --namespace flux-system reconcile kustomization coder
flux --namespace flux-system reconcile source chart coder-coder
flux --namespace flux-system reconcile source chart coder-coder-provisioner
flux --namespace coder reconcile helmrelease coder
flux --namespace coder reconcile helmrelease coder-provisioner
# Just updating Flux is usually not enough. The Helm release may get
# redeployed, but unless something causes the Deployment to update the
# pods won't be recreated. It's important that the pods get recreated,
# since we use `imagePullPolicy: Always` to ensure we're running the
# latest image.
- name: Rollout Deployment
run: |
set -euxo pipefail
kubectl --namespace coder rollout restart deployment/coder
kubectl --namespace coder rollout status deployment/coder
kubectl --namespace coder rollout restart deployment/coder-provisioner
kubectl --namespace coder rollout status deployment/coder-provisioner
deploy-wsproxies:
runs-on: ubuntu-latest
needs: build
if: github.ref == 'refs/heads/main' && !github.event.pull_request.head.repo.fork
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup flyctl
uses: superfly/flyctl-actions/setup-flyctl@master
- name: Deploy workspace proxies
run: |
flyctl deploy --image "$IMAGE" --app paris-coder --config ./.github/fly-wsproxies/paris-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_PARIS" --yes
flyctl deploy --image "$IMAGE" --app sydney-coder --config ./.github/fly-wsproxies/sydney-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_SYDNEY" --yes
flyctl deploy --image "$IMAGE" --app sao-paulo-coder --config ./.github/fly-wsproxies/sao-paulo-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_SAO_PAULO" --yes
env:
FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
IMAGE: ${{ needs.build.outputs.IMAGE }}
TOKEN_PARIS: ${{ secrets.FLY_PARIS_CODER_PROXY_SESSION_TOKEN }}
TOKEN_SYDNEY: ${{ secrets.FLY_SYDNEY_CODER_PROXY_SESSION_TOKEN }}
TOKEN_SAO_PAULO: ${{ secrets.FLY_SAO_PAULO_CODER_PROXY_SESSION_TOKEN }}
deploy-legacy-proxies:
runs-on: ubuntu-latest
timeout-minutes: 30
needs: build
if: github.ref == 'refs/heads/main' && !github.event.pull_request.head.repo.fork
permissions:
contents: read
id-token: write
steps:
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@v2
with:
workload_identity_provider: projects/573722524737/locations/global/workloadIdentityPools/github/providers/github
service_account: coder-ci@coder-dogfood.iam.gserviceaccount.com
- name: Set up Google Cloud SDK
uses: google-github-actions/setup-gcloud@v2
- name: Download build artifacts
uses: actions/download-artifact@v4
with:
name: coder
path: ./build
- name: Install Release
run: |
set -euo pipefail
regions=(
# gcp-region-id instance-name systemd-service-name
"australia-southeast1-b coder-sydney coder-workspace-proxy"
"europe-west3-c coder-europe coder-workspace-proxy"
"southamerica-east1-b coder-brazil coder-workspace-proxy"
)
deb_pkg=$(find ./build -name "coder_*_linux_amd64.deb" -print -quit)
if [ -z "$deb_pkg" ]; then
echo "deb package $deb_pkg not found"
ls -l ./build
exit 1
fi
gcloud config set project coder-dogfood
for region in "${regions[@]}"; do
echo "::group::$region"
set -- $region
set -x
gcloud config set compute/zone "$1"
gcloud compute scp "$deb_pkg" "${2}:/tmp/coder.deb"
gcloud compute ssh "$2" -- /bin/sh -c "set -eux; sudo dpkg -i --force-confdef /tmp/coder.deb; sudo systemctl daemon-reload; sudo service '$3' restart"
set +x
echo "::endgroup::"
done
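The "Install Release" loop above uses a POSIX shell idiom worth calling out: an unquoted `set -- $region` word-splits each space-separated record into the positional parameters `$1`, `$2`, `$3` (zone, instance, service). A minimal sketch of the idiom, with illustrative records:

```shell
#!/bin/sh
# Sketch of the word-splitting idiom from the deploy loop above:
# `set -- $record` (intentionally unquoted) splits a space-separated
# record into positional parameters. Records below are illustrative.
set -eu

records="us-central1-a coder coder
europe-west3-c coder-europe coder-workspace-proxy"

echo "$records" | while read -r record; do
  # shellcheck disable=SC2086 # splitting is intentional here
  set -- $record
  echo "zone=$1 instance=$2 service=$3"
done
```

Running this prints one `zone=... instance=... service=...` line per record, mirroring how the workflow selects a GCP zone, instance, and systemd service for each region.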
# sqlc-vet runs a postgres docker container, runs Coder migrations, and then
# runs sqlc-vet to ensure all queries are valid. This catches any mistakes
# in migrations or sqlc queries that makes a query unable to be prepared.
sqlc-vet:
runs-on: ${{ github.repository_owner == 'coder' && 'buildjet-8vcpu-ubuntu-2204' || 'ubuntu-latest' }}
needs: changes
if: needs.changes.outputs.db == 'true' || needs.changes.outputs.ci == 'true' || github.ref == 'refs/heads/main'
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 1
# We need golang to run the migration main.go
- name: Setup Go
uses: ./.github/actions/setup-go
- name: Setup sqlc
uses: ./.github/actions/setup-sqlc
- name: Setup and run sqlc vet
run: |
make sqlc-vet
@@ -55,7 +55,7 @@ jobs:
if: ${{ github.event_name == 'pull_request_target' && success() && !github.event.pull_request.draft }}
steps:
- name: release-labels
uses: actions/github-script@v6
uses: actions/github-script@v7
with:
# This script ensures PR title and labels are in sync:
#
.github/workflows/dogfood.yaml
@@ -5,15 +5,11 @@ on:
branches:
- main
paths:
- "flake.nix"
- "flake.lock"
- "dogfood/**"
- ".github/workflows/dogfood.yaml"
# Uncomment these lines when testing with CI.
# pull_request:
# paths:
# - "flake.nix"
# - "flake.lock"
# - "dogfood/**"
# - ".github/workflows/dogfood.yaml"
workflow_dispatch:
@@ -27,7 +23,7 @@ jobs:
- name: Get branch name
id: branch-name
uses: tj-actions/branch-names@v6.5
uses: tj-actions/branch-names@v8
- name: "Branch name to Docker tag name"
id: docker-tag-name
@@ -37,13 +33,8 @@ jobs:
tag=${tag//\//--}
echo "tag=${tag}" >> $GITHUB_OUTPUT
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@v6
- name: Run the Magic Nix Cache
uses: DeterminateSystems/magic-nix-cache-action@v2
- run: nix build .#devEnvImage && ./result | docker load
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
@@ -51,10 +42,15 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Tag and Push
run: |
docker tag codercom/oss-dogfood:latest codercom/oss-dogfood:${{ steps.docker-tag-name.outputs.tag }}
docker push codercom/oss-dogfood -a
- name: Build and push
uses: docker/build-push-action@v5
with:
context: "{{defaultContext}}:dogfood"
pull: true
push: true
tags: "codercom/oss-dogfood:${{ steps.docker-tag-name.outputs.tag }},codercom/oss-dogfood:latest"
cache-from: type=registry,ref=codercom/oss-dogfood:latest
cache-to: type=inline
deploy_template:
needs: deploy_image
.github/workflows/pr-deploy.yaml
@@ -9,10 +9,6 @@ on:
- main
workflow_dispatch:
inputs:
pr_number:
description: "PR number"
type: number
required: true
experiments:
description: "Experiments to enable"
required: false
@@ -355,6 +351,7 @@ jobs:
- name: Install/Upgrade Helm chart
run: |
set -euo pipefail
helm dependency update --skip-refresh ./helm/coder
helm upgrade --install "pr${{ env.PR_NUMBER }}" ./helm/coder \
--namespace "pr${{ env.PR_NUMBER }}" \
--values ./pr-deploy-values.yaml \
.github/workflows/release.yaml
@@ -281,13 +281,13 @@ jobs:
CODER_GPG_RELEASE_KEY_BASE64: ${{ secrets.GPG_RELEASE_KEY_BASE64 }}
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@v1
uses: google-github-actions/auth@v2
with:
workload_identity_provider: ${{ secrets.GCP_WORKLOAD_ID_PROVIDER }}
service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}
- name: Setup GCloud SDK
uses: "google-github-actions/setup-gcloud@v1"
uses: "google-github-actions/setup-gcloud@v2"
- name: Publish Helm Chart
if: ${{ !inputs.dry_run }}
@@ -306,7 +306,7 @@ jobs:
- name: Upload artifacts to actions (if dry-run)
if: ${{ inputs.dry_run }}
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: release-artifacts
path: |
@@ -434,27 +434,26 @@ jobs:
$release_assets = gh release view --repo coder/coder "v${version}" --json assets | `
ConvertFrom-Json
# Get the installer URL from the release assets.
$installer_url = $release_assets.assets | `
# Get the installer URLs from the release assets.
$amd64_installer_url = $release_assets.assets | `
Where-Object name -Match ".*_windows_amd64_installer.exe$" | `
Select -ExpandProperty url
$amd64_zip_url = $release_assets.assets | `
Where-Object name -Match ".*_windows_amd64.zip$" | `
Select -ExpandProperty url
$arm64_zip_url = $release_assets.assets | `
Where-Object name -Match ".*_windows_arm64.zip$" | `
Select -ExpandProperty url
echo "Installer URL: ${installer_url}"
echo "amd64 Installer URL: ${amd64_installer_url}"
echo "amd64 zip URL: ${amd64_zip_url}"
echo "arm64 zip URL: ${arm64_zip_url}"
echo "Package version: ${version}"
# The URL "|X64" suffix forces the architecture as it cannot be
# sniffed properly from the URL. wingetcreate checks both the URL and
# binary magic bytes for the architecture and they need to both match,
# but they only check for `x64`, `win64` and `_64` in the URL. Our URL
# contains `amd64` which doesn't match sadly.
#
# wingetcreate will still do the binary magic bytes check, so if we
# accidentally change the architecture of the installer, it will fail
# submission.
.\wingetcreate.exe update Coder.Coder `
--submit `
--version "${version}" `
--urls "${installer_url}|X64" `
--urls "${amd64_installer_url}" "${amd64_zip_url}" "${arm64_zip_url}" `
--token "$env:WINGET_GH_TOKEN"
env:
@@ -481,65 +480,28 @@ jobs:
# different repo.
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
publish-chocolatey:
name: Publish to Chocolatey
runs-on: windows-latest
# publish-sqlc pushes the latest schema to sqlc cloud.
# At present these pushes cannot be tagged, so the last push is always the latest.
publish-sqlc:
name: "Publish to schema sqlc cloud"
runs-on: "ubuntu-latest"
needs: release
if: ${{ !inputs.dry_run }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-depth: 1
# Same reason as for release.
- name: Fetch git tags
run: git fetch --tags --force
# We need golang to run the migration main.go
- name: Setup Go
uses: ./.github/actions/setup-go
# From https://chocolatey.org
- name: Install Chocolatey
- name: Setup sqlc
uses: ./.github/actions/setup-sqlc
- name: Push schema to sqlc cloud
# Don't block a release on this
continue-on-error: true
run: |
Set-ExecutionPolicy Bypass -Scope Process -Force
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
- name: Build chocolatey package
run: |
cd scripts/chocolatey
# The package version is the same as the tag minus the leading "v".
# The version in this output already has the leading "v" removed but
# we do it again to be safe.
$version = "${{ needs.release.outputs.version }}".Trim('v')
$release_assets = gh release view --repo coder/coder "v${version}" --json assets | `
ConvertFrom-Json
# Get the URL for the Windows ZIP from the release assets.
$zip_url = $release_assets.assets | `
Where-Object name -Match ".*_windows_amd64.zip$" | `
Select -ExpandProperty url
echo "ZIP URL: ${zip_url}"
echo "Package version: ${version}"
echo "Downloading ZIP..."
Invoke-WebRequest $zip_url -OutFile assets.zip
echo "Extracting ZIP..."
Expand-Archive assets.zip -DestinationPath assets/
# No need to specify nuspec if there's only one in the directory.
choco pack --version=$version binary_path=assets/coder.exe
choco apikey --api-key $env:CHOCO_API_KEY --source https://push.chocolatey.org/
# No need to specify nupkg if there's only one in the directory.
choco push --source https://push.chocolatey.org/
env:
CHOCO_API_KEY: ${{ secrets.CHOCO_API_KEY }}
# We need a GitHub token for the gh CLI to function under GitHub Actions
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
make sqlc-push
.github/workflows/security.yaml
@@ -29,7 +29,7 @@ jobs:
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
uses: github/codeql-action/init@v3
with:
languages: go, javascript
@@ -42,7 +42,7 @@ jobs:
rm Makefile
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
uses: github/codeql-action/analyze@v3
- name: Send Slack notification on failure
if: ${{ failure() }}
@@ -122,7 +122,7 @@ jobs:
image_name: ${{ steps.build.outputs.image }}
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@fbd16365eb88e12433951383f5e99bd901fc618f
uses: aquasecurity/trivy-action@91713af97dc80187565512baba96e4364e983601
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
@@ -130,13 +130,13 @@ jobs:
severity: "CRITICAL,HIGH"
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@v2
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: trivy-results.sarif
category: "Trivy"
- name: Upload Trivy scan results as an artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: trivy
path: trivy-results.sarif
.github/workflows/stale.yaml
@@ -13,7 +13,7 @@ jobs:
actions: write
steps:
- name: stale
uses: actions/stale@v8.0.0
uses: actions/stale@v9.0.0
with:
stale-issue-label: "stale"
stale-pr-label: "stale"
@@ -30,6 +30,52 @@ jobs:
operations-per-run: 60
# Start with the oldest issues, always.
ascending: true
- name: "Close old issues labeled likely-no"
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const thirtyDaysAgo = new Date(new Date().setDate(new Date().getDate() - 30));
console.log(`Looking for issues labeled with 'likely-no' more than 30 days ago, which is after ${thirtyDaysAgo.toISOString()}`);
const issues = await github.rest.issues.listForRepo({
owner: context.repo.owner,
repo: context.repo.repo,
labels: 'likely-no',
state: 'open',
});
console.log(`Found ${issues.data.length} open issues labeled with 'likely-no'`);
for (const issue of issues.data) {
console.log(`Checking issue #${issue.number} created at ${issue.created_at}`);
const timeline = await github.rest.issues.listEventsForTimeline({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
});
const labelEvent = timeline.data.find(event => event.event === 'labeled' && event.label.name === 'likely-no');
if (labelEvent) {
console.log(`Issue #${issue.number} was labeled with 'likely-no' at ${labelEvent.created_at}`);
if (new Date(labelEvent.created_at) < thirtyDaysAgo) {
console.log(`Issue #${issue.number} is older than 30 days with 'likely-no' label, closing issue.`);
await github.rest.issues.update({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'not planned'
});
}
} else {
console.log(`Issue #${issue.number} does not have a 'likely-no' label event in its timeline.`);
}
}
branches:
runs-on: ubuntu-latest
steps:
@@ -52,8 +98,8 @@ jobs:
with:
token: ${{ github.token }}
repository: ${{ github.repository }}
retain_days: 1
keep_minimum_runs: 1
retain_days: 30
keep_minimum_runs: 30
delete_workflow_pattern: pr-cleanup.yaml
- name: Delete PR Deploy workflow skipped runs
@@ -61,7 +107,6 @@ jobs:
with:
token: ${{ github.token }}
repository: ${{ github.repository }}
retain_days: 0
keep_minimum_runs: 0
delete_run_by_conclusion_pattern: skipped
retain_days: 30
keep_minimum_runs: 30
delete_workflow_pattern: pr-deploy.yaml
.github/workflows/typos.toml
@@ -14,6 +14,7 @@ darcula = "darcula"
Hashi = "Hashi"
trialer = "trialer"
encrypter = "encrypter"
hel = "hel" # as in helsinki
[files]
extend-exclude = [
@@ -29,4 +30,5 @@ extend-exclude = [
"**/*_test.go",
"**/*.test.tsx",
"**/pnpm-lock.yaml",
"tailnet/testdata/**",
]
.gitignore
@@ -20,7 +20,6 @@ yarn-error.log
# Front-end ignore patterns.
.next/
site/**/*.typegen.ts
site/build-storybook.log
site/coverage/
site/storybook-static/
.prettierignore
@@ -23,7 +23,6 @@ yarn-error.log
# Front-end ignore patterns.
.next/
site/**/*.typegen.ts
site/build-storybook.log
site/coverage/
site/storybook-static/
@@ -83,6 +82,8 @@ helm/**/templates/*.yaml
# Testdata shouldn't be formatted.
scripts/apitypings/testdata/**/*.ts
enterprise/tailnet/testdata/*.golden.html
tailnet/testdata/*.golden.html
# Generated files shouldn't be formatted.
site/e2e/provisionerGenerated.ts
@@ -8,6 +8,8 @@ helm/**/templates/*.yaml
# Testdata shouldn't be formatted.
scripts/apitypings/testdata/**/*.ts
enterprise/tailnet/testdata/*.golden.html
tailnet/testdata/*.golden.html
# Generated files shouldn't be formatted.
site/e2e/provisionerGenerated.ts
.vscode/settings.json
@@ -18,9 +18,10 @@
"coderdenttest",
"coderdtest",
"codersdk",
"contravariance",
"cronstrue",
"databasefake",
"dbfake",
"dbmem",
"dbgen",
"dbtype",
"DERP",
@@ -170,7 +171,7 @@
"wsconncache",
"wsjson",
"xerrors",
"xstate",
"xlarge",
"yamux"
],
"cSpell.ignorePaths": ["site/package.json", ".vscode/settings.json"],
@@ -206,8 +207,6 @@
"files.insertFinalNewline": true,
"go.lintTool": "golangci-lint",
"go.lintFlags": ["--fast"],
"go.lintOnSave": "package",
"go.coverOnSave": true,
"go.coverageDecorator": {
"type": "gutter",
"coveredGutterStyle": "blockgreen",
Makefile
@@ -50,7 +50,7 @@ endif
# Note, all find statements should be written with `.` or `./path` as
# the search path so that these exclusions match.
FIND_EXCLUSIONS= \
-not \( \( -path '*/.git/*' -o -path './build/*' -o -path './vendor/*' -o -path './.coderv2/*' -o -path '*/node_modules/*' -o -path '*/out/*' -o -path './coderd/apidoc/*' -o -path '*/.next/*' \) -prune \)
-not \( \( -path '*/.git/*' -o -path './build/*' -o -path './vendor/*' -o -path './.coderv2/*' -o -path '*/node_modules/*' -o -path '*/out/*' -o -path './coderd/apidoc/*' -o -path '*/.next/*' -o -path '*/.terraform/*' \) -prune \)
# Source files used for make targets, evaluated on use.
GO_SRC_FILES := $(shell find . $(FIND_EXCLUSIONS) -type f -name '*.go' -not -name '*_test.go')
# All the shell files in the repo, excluding ignored files.
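The `FIND_EXCLUSIONS` change above relies on `find`'s `-path ... -prune` pattern: pruning stops `find` from descending into excluded directories at all, which is much faster than filtering matches afterwards. A minimal sketch of the same construction, using a temporary directory and illustrative paths:

```shell
#!/bin/sh
# Sketch of the `-not \( -path ... -prune \)` pattern used by
# FIND_EXCLUSIONS above. The directory layout is illustrative.
set -eu

dir="$(mktemp -d)"
mkdir -p "$dir/src" "$dir/node_modules/pkg"
touch "$dir/src/main.go" "$dir/node_modules/pkg/ignored.go"

# Descend from `.` but prune anything under node_modules before visiting it.
found="$(cd "$dir" && find . -not \( -path '*/node_modules/*' -prune \) -type f -name '*.go')"
echo "$found"   # ./src/main.go

rm -rf "$dir"
```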
@@ -428,7 +428,8 @@ lint/ts:
lint/go:
./scripts/check_enterprise_imports.sh
go install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.53.2
linter_ver=$(shell egrep -o 'GOLANGCI_LINT_VERSION=\S+' dogfood/Dockerfile | cut -d '=' -f 2)
go install github.com/golangci/golangci-lint/cmd/golangci-lint@v$$linter_ver
golangci-lint run
.PHONY: lint/go
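The `lint/go` change above replaces a hard-coded golangci-lint version with one extracted from `dogfood/Dockerfile`, so the Makefile cannot drift from the version pinned in the dev image. A standalone sketch of that extraction, using a stand-in Dockerfile and an illustrative version (with `grep -E` in place of the deprecated `egrep`):

```shell
#!/bin/sh
# Sketch of the single-source-of-truth version pin used by `lint/go` above:
# the version is grepped out of a Dockerfile line rather than duplicated.
# The Dockerfile contents and version are illustrative.
set -eu

tmp_dockerfile="$(mktemp)"
cat > "$tmp_dockerfile" <<'EOF'
FROM ubuntu:22.04
ARG GOLANGCI_LINT_VERSION=1.55.2
EOF

linter_ver="$(grep -Eo 'GOLANGCI_LINT_VERSION=\S+' "$tmp_dockerfile" | cut -d '=' -f 2)"
echo "would run: go install github.com/golangci/golangci-lint/cmd/golangci-lint@v${linter_ver}"

rm -f "$tmp_dockerfile"
```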
@@ -448,13 +449,15 @@ lint/helm:
DB_GEN_FILES := \
coderd/database/querier.go \
coderd/database/unique_constraint.go \
coderd/database/dbfake/dbfake.go \
coderd/database/dbmem/dbmem.go \
coderd/database/dbmetrics/dbmetrics.go \
coderd/database/dbauthz/dbauthz.go \
coderd/database/dbmock/dbmock.go
# all gen targets should be added here and to gen/mark-fresh
gen: \
tailnet/proto/tailnet.pb.go \
agent/proto/agent.pb.go \
provisionersdk/proto/provisioner.pb.go \
provisionerd/proto/provisionerd.pb.go \
coderd/database/dump.sql \
@@ -479,6 +482,8 @@ gen: \
# used during releases so we don't run generation scripts.
gen/mark-fresh:
files="\
tailnet/proto/tailnet.pb.go \
agent/proto/agent.pb.go \
provisionersdk/proto/provisioner.pb.go \
provisionerd/proto/provisionerd.pb.go \
coderd/database/dump.sql \
@@ -524,6 +529,22 @@ coderd/database/querier.go: coderd/database/sqlc.yaml coderd/database/dump.sql $
coderd/database/dbmock/dbmock.go: coderd/database/db.go coderd/database/querier.go
go generate ./coderd/database/dbmock/
tailnet/proto/tailnet.pb.go: tailnet/proto/tailnet.proto
protoc \
--go_out=. \
--go_opt=paths=source_relative \
--go-drpc_out=. \
--go-drpc_opt=paths=source_relative \
./tailnet/proto/tailnet.proto
agent/proto/agent.pb.go: agent/proto/agent.proto
protoc \
--go_out=. \
--go_opt=paths=source_relative \
--go-drpc_out=. \
--go-drpc_opt=paths=source_relative \
./agent/proto/agent.proto
provisionersdk/proto/provisioner.pb.go: provisionersdk/proto/provisioner.proto
protoc \
--go_out=. \
@@ -567,7 +588,7 @@ docs/cli.md: scripts/clidocgen/main.go examples/examples.gen.json $(GO_SRC_FILES
CI=true BASE_PATH="." go run ./scripts/clidocgen
pnpm run format:write:only ./docs/cli.md ./docs/cli/*.md ./docs/manifest.json
docs/admin/audit-logs.md: scripts/auditdocgen/main.go enterprise/audit/table.go coderd/rbac/object_gen.go
docs/admin/audit-logs.md: coderd/database/querier.go scripts/auditdocgen/main.go enterprise/audit/table.go coderd/rbac/object_gen.go
go run scripts/auditdocgen/main.go
pnpm run format:write:only ./docs/admin/audit-logs.md
@@ -575,7 +596,16 @@ coderd/apidoc/swagger.json: $(shell find ./scripts/apidocgen $(FIND_EXCLUSIONS)
./scripts/apidocgen/generate.sh
pnpm run format:write:only ./docs/api ./docs/manifest.json ./coderd/apidoc/swagger.json
update-golden-files: cli/testdata/.gen-golden helm/coder/tests/testdata/.gen-golden helm/provisioner/tests/testdata/.gen-golden scripts/ci-report/testdata/.gen-golden enterprise/cli/testdata/.gen-golden coderd/.gen-golden provisioner/terraform/testdata/.gen-golden
update-golden-files: \
cli/testdata/.gen-golden \
helm/coder/tests/testdata/.gen-golden \
helm/provisioner/tests/testdata/.gen-golden \
scripts/ci-report/testdata/.gen-golden \
enterprise/cli/testdata/.gen-golden \
enterprise/tailnet/testdata/.gen-golden \
tailnet/testdata/.gen-golden \
coderd/.gen-golden \
provisioner/terraform/testdata/.gen-golden
.PHONY: update-golden-files
cli/testdata/.gen-golden: $(wildcard cli/testdata/*.golden) $(wildcard cli/*.tpl) $(GO_SRC_FILES) $(wildcard cli/*_test.go)
@@ -586,6 +616,14 @@ enterprise/cli/testdata/.gen-golden: $(wildcard enterprise/cli/testdata/*.golden
go test ./enterprise/cli -run="TestEnterpriseCommandHelp" -update
touch "$@"
tailnet/testdata/.gen-golden: $(wildcard tailnet/testdata/*.golden.html) $(GO_SRC_FILES) $(wildcard tailnet/*_test.go)
go test ./tailnet -run="TestDebugTemplate" -update
touch "$@"
enterprise/tailnet/testdata/.gen-golden: $(wildcard enterprise/tailnet/testdata/*.golden.html) $(GO_SRC_FILES) $(wildcard enterprise/tailnet/*_test.go)
go test ./enterprise/tailnet -run="TestDebugTemplate" -update
touch "$@"
helm/coder/tests/testdata/.gen-golden: $(wildcard helm/coder/tests/testdata/*.yaml) $(wildcard helm/coder/tests/testdata/*.golden) $(GO_SRC_FILES) $(wildcard helm/coder/tests/*_test.go)
go test ./helm/coder/tests -run=TestUpdateGoldenFiles -update
touch "$@"
@@ -670,6 +708,33 @@ test:
gotestsum --format standard-quiet -- -v -short -count=1 ./...
.PHONY: test
# sqlc-cloud-is-setup will fail if no SQLc auth token is set. Use this as a
# dependency for any sqlc-cloud related targets.
sqlc-cloud-is-setup:
if [[ "$(SQLC_AUTH_TOKEN)" == "" ]]; then
echo "ERROR: 'SQLC_AUTH_TOKEN' must be set to auth with sqlc cloud before running verify." 1>&2
exit 1
fi
.PHONY: sqlc-cloud-is-setup
sqlc-push: sqlc-cloud-is-setup test-postgres-docker
echo "--- sqlc push"
SQLC_DATABASE_URL="postgresql://postgres:postgres@localhost:5432/$(shell go run scripts/migrate-ci/main.go)" \
sqlc push -f coderd/database/sqlc.yaml && echo "Passed sqlc push"
.PHONY: sqlc-push
sqlc-verify: sqlc-cloud-is-setup test-postgres-docker
echo "--- sqlc verify"
SQLC_DATABASE_URL="postgresql://postgres:postgres@localhost:5432/$(shell go run scripts/migrate-ci/main.go)" \
sqlc verify -f coderd/database/sqlc.yaml && echo "Passed sqlc verify"
.PHONY: sqlc-verify
sqlc-vet: test-postgres-docker
echo "--- sqlc vet"
SQLC_DATABASE_URL="postgresql://postgres:postgres@localhost:5432/$(shell go run scripts/migrate-ci/main.go)" \
sqlc vet -f coderd/database/sqlc.yaml && echo "Passed sqlc vet"
.PHONY: sqlc-vet
# When updating -timeout for this test, keep in sync with
# test-go-postgres (.github/workflows/coder.yaml).
# Do add coverage flags so that test caching works.
+1 -1
@@ -70,7 +70,7 @@ curl -L https://coder.com/install.sh | sh
You can run the install script with `--dry-run` to preview the commands it will run without executing them. You can modify the installation process by passing flags; run the install script with `--help` for reference.
> See [install](docs/install) for additional methods.
> See [install](https://coder.com/docs/v2/latest/install) for additional methods.
Once installed, you can start a production deployment<sup>1</sup> with a single command:
+36 -14
@@ -35,6 +35,8 @@ import (
"tailscale.com/types/netlogtype"
"cdr.dev/slog"
"github.com/coder/retry"
"github.com/coder/coder/v2/agent/agentproc"
"github.com/coder/coder/v2/agent/agentscripts"
"github.com/coder/coder/v2/agent/agentssh"
@@ -45,7 +47,6 @@ import (
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/tailnet"
"github.com/coder/retry"
)
const (
@@ -68,6 +69,7 @@ type Options struct {
EnvironmentVariables map[string]string
Logger slog.Logger
IgnorePorts map[int]string
PortCacheDuration time.Duration
SSHMaxTimeout time.Duration
TailnetListenPort uint16
Subsystems []codersdk.AgentSubsystem
@@ -126,6 +128,9 @@ func New(options Options) Agent {
if options.ServiceBannerRefreshInterval == 0 {
options.ServiceBannerRefreshInterval = 2 * time.Minute
}
if options.PortCacheDuration == 0 {
options.PortCacheDuration = 1 * time.Second
}
prometheusRegistry := options.PrometheusRegistry
if prometheusRegistry == nil {
@@ -153,6 +158,7 @@ func New(options Options) Agent {
lifecycleReported: make(chan codersdk.WorkspaceAgentLifecycle, 1),
lifecycleStates: []agentsdk.PostLifecycleRequest{{State: codersdk.WorkspaceAgentLifecycleCreated}},
ignorePorts: options.IgnorePorts,
portCacheDuration: options.PortCacheDuration,
connStatsChan: make(chan *agentsdk.Stats, 1),
reportMetadataInterval: options.ReportMetadataInterval,
serviceBannerRefreshInterval: options.ServiceBannerRefreshInterval,
@@ -181,8 +187,9 @@ type agent struct {
// ignorePorts tells the API handler which ports to ignore when
// listing all listening ports. This is helpful for hiding ports that
// are used by the agent and that the user does not care about.
ignorePorts map[int]string
subsystems []codersdk.AgentSubsystem
ignorePorts map[int]string
portCacheDuration time.Duration
subsystems []codersdk.AgentSubsystem
reconnectingPTYs sync.Map
reconnectingPTYTimeout time.Duration
@@ -216,8 +223,10 @@ type agent struct {
connCountReconnectingPTY atomic.Int64
prometheusRegistry *prometheus.Registry
metrics *agentMetrics
syscaller agentproc.Syscaller
// metrics are prometheus registered metrics that will be collected and
// labeled in Coder with the agent + workspace.
metrics *agentMetrics
syscaller agentproc.Syscaller
// modifiedProcs is used for testing process priority management.
modifiedProcs chan []*agentproc.Process
@@ -246,6 +255,9 @@ func (a *agent) init(ctx context.Context) {
Filesystem: a.filesystem,
PatchLogs: a.client.PatchLogs,
})
// Register runner metrics. If the prom registry is nil, the metrics
// will not report anywhere.
a.scriptRunner.RegisterMetrics(a.prometheusRegistry)
go a.runLoop(ctx)
}
@@ -536,6 +548,14 @@ func (a *agent) reportMetadataLoop(ctx context.Context) {
continue
case <-report:
if len(updatedMetadata) > 0 {
select {
case <-reportSemaphore:
default:
// If there's already a report in flight, don't send
// another one, wait for next tick instead.
continue
}
metadata := make([]agentsdk.Metadata, 0, len(updatedMetadata))
for key, result := range updatedMetadata {
metadata = append(metadata, agentsdk.Metadata{
@@ -545,14 +565,6 @@ func (a *agent) reportMetadataLoop(ctx context.Context) {
delete(updatedMetadata, key)
}
select {
case <-reportSemaphore:
default:
// If there's already a report in flight, don't send
// another one, wait for next tick instead.
continue
}
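The guard above uses a buffered channel as a semaphore with a non-blocking receive: if no token is available, a report is already in flight, so the loop skips sending and waits for the next tick. A minimal, self-contained sketch of the pattern (names are illustrative, not from the agent code):

```go
package main

import "fmt"

// tryAcquire performs a non-blocking receive on the semaphore channel and
// reports whether a report slot was free.
func tryAcquire(sem chan struct{}) bool {
	select {
	case <-sem:
		return true
	default:
		return false
	}
}

func main() {
	sem := make(chan struct{}, 1)
	sem <- struct{}{}            // one token: at most one report in flight
	fmt.Println(tryAcquire(sem)) // true: slot was free
	fmt.Println(tryAcquire(sem)) // false: a report is already in flight
	sem <- struct{}{}            // the in-flight report finished
	fmt.Println(tryAcquire(sem)) // true again
}
```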
go func() {
ctx, cancel := context.WithTimeout(ctx, reportTimeout)
defer func() {
@@ -739,11 +751,14 @@ func (a *agent) run(ctx context.Context) error {
return xerrors.Errorf("init script runner: %w", err)
}
err = a.trackConnGoroutine(func() {
start := time.Now()
err := a.scriptRunner.Execute(ctx, func(script codersdk.WorkspaceAgentScript) bool {
return script.RunOnStart
})
// Measure the time immediately after the script has finished
dur := time.Since(start).Seconds()
if err != nil {
a.logger.Warn(ctx, "startup script failed", slog.Error(err))
a.logger.Warn(ctx, "startup script(s) failed", slog.Error(err))
if errors.Is(err, agentscripts.ErrTimeout) {
a.setLifecycle(ctx, codersdk.WorkspaceAgentLifecycleStartTimeout)
} else {
@@ -752,6 +767,12 @@ func (a *agent) run(ctx context.Context) error {
} else {
a.setLifecycle(ctx, codersdk.WorkspaceAgentLifecycleReady)
}
label := "false"
if err == nil {
label = "true"
}
a.metrics.startupScriptSeconds.WithLabelValues(label).Set(dur)
a.scriptRunner.StartCron()
})
if err != nil {
@@ -1465,6 +1486,7 @@ func (a *agent) Close() error {
return script.RunOnStop
})
if err != nil {
a.logger.Warn(ctx, "shutdown script(s) failed", slog.Error(err))
if errors.Is(err, agentscripts.ErrTimeout) {
lifecycleState = codersdk.WorkspaceAgentLifecycleShutdownTimeout
} else {
+266 -319
@@ -1,11 +1,13 @@
package agent_test
import (
"bufio"
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"math/rand"
"net"
"net/http"
"net/http/httptest"
@@ -17,7 +19,6 @@ import (
"path/filepath"
"regexp"
"runtime"
"strconv"
"strings"
"sync"
"sync/atomic"
@@ -25,7 +26,7 @@ import (
"testing"
"time"
scp "github.com/bramvdbogaerde/go-scp"
"github.com/bramvdbogaerde/go-scp"
"github.com/golang/mock/gomock"
"github.com/google/uuid"
"github.com/pion/udp"
@@ -45,6 +46,7 @@ import (
"cdr.dev/slog"
"cdr.dev/slog/sloggers/sloghuman"
"cdr.dev/slog/sloggers/slogtest"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agentproc"
"github.com/coder/coder/v2/agent/agentproc/agentproctest"
@@ -52,7 +54,6 @@ import (
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/pty"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/tailnet"
"github.com/coder/coder/v2/tailnet/tailnettest"
@@ -153,7 +154,7 @@ func TestAgent_Stats_Magic(t *testing.T) {
require.NoError(t, err)
require.Equal(t, expected, strings.TrimSpace(string(output)))
})
t.Run("Tracks", func(t *testing.T) {
t.Run("TracksVSCode", func(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("Sleeping for infinity doesn't work on Windows")
@@ -192,6 +193,77 @@ func TestAgent_Stats_Magic(t *testing.T) {
err = session.Wait()
require.NoError(t, err)
})
t.Run("TracksJetBrains", func(t *testing.T) {
t.Parallel()
if runtime.GOOS != "linux" {
t.Skip("JetBrains tracking is only supported on Linux")
}
ctx := testutil.Context(t, testutil.WaitLong)
// JetBrains tracking works by looking at the process name listening on the
// forwarded port. If the process's command line includes the magic string
// we are looking for, then we assume it is a JetBrains editor. So when we
// connect to the port we must ensure the process includes that magic string
// to fool the agent into thinking this is JetBrains. To do this we need to
// spawn an external process (in this case a simple echo server) so we can
// control the process name. The -D here is just to mimic how Java options
// are set but is not necessary as the agent looks only for the magic
// string itself anywhere in the command.
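To illustrate the detection described in the comment above, here is a minimal sketch of matching a magic marker anywhere in a process command line. The `magic` value below is a placeholder for illustration; the real marker is `agentssh.MagicProcessCmdlineJetBrains`:

```go
package main

import (
	"fmt"
	"strings"
)

// magic is a hypothetical stand-in for agentssh.MagicProcessCmdlineJetBrains.
const magic = "idea.vendor.name=JetBrains"

// isJetBrains reports whether the command line of the process listening on
// the forwarded port contains the magic string, anywhere in the line.
func isJetBrains(cmdline string) bool {
	return strings.Contains(cmdline, magic)
}

func main() {
	fmt.Println(isJetBrains("java -Didea.vendor.name=JetBrains -cp app Main")) // true
	fmt.Println(isJetBrains("node server.js"))                                 // false
}
```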
_, b, _, ok := runtime.Caller(0)
require.True(t, ok)
dir := filepath.Join(filepath.Dir(b), "../scripts/echoserver/main.go")
echoServerCmd := exec.Command("go", "run", dir,
"-D", agentssh.MagicProcessCmdlineJetBrains)
stdout, err := echoServerCmd.StdoutPipe()
require.NoError(t, err)
err = echoServerCmd.Start()
require.NoError(t, err)
defer echoServerCmd.Process.Kill()
// The echo server prints its port as the first line.
sc := bufio.NewScanner(stdout)
sc.Scan()
remotePort := sc.Text()
//nolint:dogsled
conn, _, stats, _, _ := setupAgent(t, agentsdk.Manifest{}, 0)
sshClient, err := conn.SSHClient(ctx)
require.NoError(t, err)
tunneledConn, err := sshClient.Dial("tcp", fmt.Sprintf("127.0.0.1:%s", remotePort))
require.NoError(t, err)
t.Cleanup(func() {
// always close on failure of test
_ = conn.Close()
_ = tunneledConn.Close()
})
var s *agentsdk.Stats
require.Eventuallyf(t, func() bool {
var ok bool
s, ok = <-stats
return ok && s.ConnectionCount > 0 &&
s.SessionCountJetBrains == 1
}, testutil.WaitLong, testutil.IntervalFast,
"never saw stats with conn open: %+v", s,
)
// Kill the server and connection after checking for the echo.
requireEcho(t, tunneledConn)
_ = echoServerCmd.Process.Kill()
_ = tunneledConn.Close()
require.Eventuallyf(t, func() bool {
var ok bool
s, ok = <-stats
return ok && s.ConnectionCount == 0 &&
s.SessionCountJetBrains == 0
}, testutil.WaitLong, testutil.IntervalFast,
"never saw stats after conn closes: %+v", s,
)
})
}
func TestAgent_SessionExec(t *testing.T) {
@@ -350,8 +422,13 @@ func TestAgent_Session_TTY_MOTD(t *testing.T) {
unexpected: []string{},
},
{
name: "Trim",
manifest: agentsdk.Manifest{},
name: "Trim",
// Enable motd since it will be printed after the banner,
// this ensures that we can test for an exact amount of
// newlines.
manifest: agentsdk.Manifest{
MOTDFile: name,
},
banner: codersdk.ServiceBannerConfig{
Enabled: true,
Message: "\n\n\n\n\n\nbanner\n\n\n\n\n\n",
@@ -375,6 +452,7 @@ func TestAgent_Session_TTY_MOTD(t *testing.T) {
}
}
//nolint:tparallel // Sub tests need to run sequentially.
func TestAgent_Session_TTY_MOTD_Update(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
@@ -434,33 +512,38 @@ func TestAgent_Session_TTY_MOTD_Update(t *testing.T) {
}
//nolint:dogsled // Allow the blank identifiers.
conn, client, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0, setSBInterval)
for _, test := range tests {
sshClient, err := conn.SSHClient(ctx)
require.NoError(t, err)
t.Cleanup(func() {
_ = sshClient.Close()
})
//nolint:paralleltest // These tests need to swap the banner func.
for i, test := range tests {
test := test
// Set new banner func and wait for the agent to call it to update the
// banner.
ready := make(chan struct{}, 2)
client.SetServiceBannerFunc(func() (codersdk.ServiceBannerConfig, error) {
select {
case ready <- struct{}{}:
default:
}
return test.banner, nil
})
<-ready
<-ready // Wait for two updates to ensure the value has propagated.
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
// Set new banner func and wait for the agent to call it to update the
// banner.
ready := make(chan struct{}, 2)
client.SetServiceBannerFunc(func() (codersdk.ServiceBannerConfig, error) {
select {
case ready <- struct{}{}:
default:
}
return test.banner, nil
})
<-ready
<-ready // Wait for two updates to ensure the value has propagated.
sshClient, err := conn.SSHClient(ctx)
require.NoError(t, err)
t.Cleanup(func() {
_ = sshClient.Close()
})
session, err := sshClient.NewSession()
require.NoError(t, err)
t.Cleanup(func() {
_ = session.Close()
})
session, err := sshClient.NewSession()
require.NoError(t, err)
t.Cleanup(func() {
_ = session.Close()
})
testSessionOutput(t, session, test.expected, test.unexpected, nil)
testSessionOutput(t, session, test.expected, test.unexpected, nil)
})
}
}
@@ -637,150 +720,57 @@ func TestAgent_Session_TTY_HugeOutputIsNotLost(t *testing.T) {
}
}
//nolint:paralleltest // This test reserves a port.
func TestAgent_TCPLocalForwarding(t *testing.T) {
random, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
_ = random.Close()
tcpAddr, valid := random.Addr().(*net.TCPAddr)
require.True(t, valid)
randomPort := tcpAddr.Port
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
local, err := net.Listen("tcp", "127.0.0.1:0")
rl, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
defer local.Close()
tcpAddr, valid = local.Addr().(*net.TCPAddr)
defer rl.Close()
tcpAddr, valid := rl.Addr().(*net.TCPAddr)
require.True(t, valid)
remotePort := tcpAddr.Port
done := make(chan struct{})
go func() {
defer close(done)
conn, err := local.Accept()
if !assert.NoError(t, err) {
return
}
defer conn.Close()
b := make([]byte, 4)
_, err = conn.Read(b)
if !assert.NoError(t, err) {
return
}
_, err = conn.Write(b)
if !assert.NoError(t, err) {
return
}
}()
go echoOnce(t, rl)
_, proc := setupSSHCommand(t, []string{"-L", fmt.Sprintf("%d:127.0.0.1:%d", randomPort, remotePort)}, []string{"sleep", "5"})
sshClient := setupAgentSSHClient(ctx, t)
go func() {
err := proc.Wait()
select {
case <-done:
default:
assert.NoError(t, err)
}
}()
require.Eventually(t, func() bool {
conn, err := net.Dial("tcp", "127.0.0.1:"+strconv.Itoa(randomPort))
if err != nil {
return false
}
defer conn.Close()
_, err = conn.Write([]byte("test"))
if !assert.NoError(t, err) {
return false
}
b := make([]byte, 4)
_, err = conn.Read(b)
if !assert.NoError(t, err) {
return false
}
if !assert.Equal(t, "test", string(b)) {
return false
}
return true
}, testutil.WaitLong, testutil.IntervalSlow)
<-done
_ = proc.Kill()
conn, err := sshClient.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", remotePort))
require.NoError(t, err)
defer conn.Close()
requireEcho(t, conn)
}
//nolint:paralleltest // This test reserves a port.
func TestAgent_TCPRemoteForwarding(t *testing.T) {
random, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
_ = random.Close()
tcpAddr, valid := random.Addr().(*net.TCPAddr)
require.True(t, valid)
randomPort := tcpAddr.Port
t.Parallel()
ctx := testutil.Context(t, testutil.WaitLong)
sshClient := setupAgentSSHClient(ctx, t)
l, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
defer l.Close()
tcpAddr, valid = l.Addr().(*net.TCPAddr)
require.True(t, valid)
localPort := tcpAddr.Port
done := make(chan struct{})
go func() {
defer close(done)
conn, err := l.Accept()
localhost := netip.MustParseAddr("127.0.0.1")
var randomPort uint16
var ll net.Listener
var err error
for {
randomPort = pickRandomPort()
addr := net.TCPAddrFromAddrPort(netip.AddrPortFrom(localhost, randomPort))
ll, err = sshClient.ListenTCP(addr)
if err != nil {
return
t.Logf("error remote forwarding: %s", err.Error())
select {
case <-ctx.Done():
t.Fatal("timed out getting random listener")
default:
continue
}
}
defer conn.Close()
b := make([]byte, 4)
_, err = conn.Read(b)
if !assert.NoError(t, err) {
return
}
_, err = conn.Write(b)
if !assert.NoError(t, err) {
return
}
}()
break
}
defer ll.Close()
go echoOnce(t, ll)
_, proc := setupSSHCommand(t, []string{"-R", fmt.Sprintf("127.0.0.1:%d:127.0.0.1:%d", randomPort, localPort)}, []string{"sleep", "5"})
go func() {
err := proc.Wait()
select {
case <-done:
default:
assert.NoError(t, err)
}
}()
require.Eventually(t, func() bool {
conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", randomPort))
if err != nil {
return false
}
defer conn.Close()
_, err = conn.Write([]byte("test"))
if !assert.NoError(t, err) {
return false
}
b := make([]byte, 4)
_, err = conn.Read(b)
if !assert.NoError(t, err) {
return false
}
if !assert.Equal(t, "test", string(b)) {
return false
}
return true
}, testutil.WaitLong, testutil.IntervalSlow)
<-done
_ = proc.Kill()
conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", randomPort))
require.NoError(t, err)
defer conn.Close()
requireEcho(t, conn)
}
func TestAgent_UnixLocalForwarding(t *testing.T) {
@@ -788,52 +778,18 @@ func TestAgent_UnixLocalForwarding(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("unix domain sockets are not fully supported on Windows")
}
ctx := testutil.Context(t, testutil.WaitLong)
tmpdir := tempDirUnixSocket(t)
remoteSocketPath := filepath.Join(tmpdir, "remote-socket")
localSocketPath := filepath.Join(tmpdir, "local-socket")
l, err := net.Listen("unix", remoteSocketPath)
require.NoError(t, err)
defer l.Close()
go echoOnce(t, l)
done := make(chan struct{})
go func() {
defer close(done)
sshClient := setupAgentSSHClient(ctx, t)
conn, err := l.Accept()
if err != nil {
return
}
defer conn.Close()
b := make([]byte, 4)
_, err = conn.Read(b)
if !assert.NoError(t, err) {
return
}
_, err = conn.Write(b)
if !assert.NoError(t, err) {
return
}
}()
_, proc := setupSSHCommand(t, []string{"-L", fmt.Sprintf("%s:%s", localSocketPath, remoteSocketPath)}, []string{"sleep", "5"})
go func() {
err := proc.Wait()
select {
case <-done:
default:
assert.NoError(t, err)
}
}()
require.Eventually(t, func() bool {
_, err := os.Stat(localSocketPath)
return err == nil
}, testutil.WaitLong, testutil.IntervalFast)
conn, err := net.Dial("unix", localSocketPath)
conn, err := sshClient.Dial("unix", remoteSocketPath)
require.NoError(t, err)
defer conn.Close()
_, err = conn.Write([]byte("test"))
@@ -843,9 +799,6 @@ func TestAgent_UnixLocalForwarding(t *testing.T) {
require.NoError(t, err)
require.Equal(t, "test", string(b))
_ = conn.Close()
<-done
_ = proc.Kill()
}
func TestAgent_UnixRemoteForwarding(t *testing.T) {
@@ -856,66 +809,19 @@ func TestAgent_UnixRemoteForwarding(t *testing.T) {
tmpdir := tempDirUnixSocket(t)
remoteSocketPath := filepath.Join(tmpdir, "remote-socket")
localSocketPath := filepath.Join(tmpdir, "local-socket")
l, err := net.Listen("unix", localSocketPath)
ctx := testutil.Context(t, testutil.WaitLong)
sshClient := setupAgentSSHClient(ctx, t)
l, err := sshClient.ListenUnix(remoteSocketPath)
require.NoError(t, err)
defer l.Close()
go echoOnce(t, l)
done := make(chan struct{})
go func() {
defer close(done)
conn, err := l.Accept()
if err != nil {
return
}
defer conn.Close()
b := make([]byte, 4)
_, err = conn.Read(b)
if !assert.NoError(t, err) {
return
}
_, err = conn.Write(b)
if !assert.NoError(t, err) {
return
}
}()
_, proc := setupSSHCommand(t, []string{"-R", fmt.Sprintf("%s:%s", remoteSocketPath, localSocketPath)}, []string{"sleep", "5"})
go func() {
err := proc.Wait()
select {
case <-done:
default:
assert.NoError(t, err)
}
}()
// It's possible that the socket is created but the server is not ready to
// accept connections yet. We need to retry until we can connect.
//
// Note that we wait long here because if the tailnet connection has trouble
// connecting, it could take 5 seconds or more to reconnect.
var conn net.Conn
require.Eventually(t, func() bool {
var err error
conn, err = net.Dial("unix", remoteSocketPath)
return err == nil
}, testutil.WaitLong, testutil.IntervalFast)
conn, err := net.Dial("unix", remoteSocketPath)
require.NoError(t, err)
defer conn.Close()
_, err = conn.Write([]byte("test"))
require.NoError(t, err)
b := make([]byte, 4)
_, err = conn.Read(b)
require.NoError(t, err)
require.Equal(t, "test", string(b))
_ = conn.Close()
<-done
_ = proc.Kill()
requireEcho(t, conn)
}
func TestAgent_SFTP(t *testing.T) {
@@ -1714,32 +1620,34 @@ func TestAgent_Dial(t *testing.T) {
t.Run(c.name, func(t *testing.T) {
t.Parallel()
// Setup listener
// The purpose of this test is to ensure that a client can dial a
// listener in the workspace over tailnet.
l := c.setup(t)
defer l.Close()
go func() {
for {
c, err := l.Accept()
if err != nil {
return
}
done := make(chan struct{})
defer func() {
l.Close()
<-done
}()
go testAccept(t, c)
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
go func() {
defer close(done)
c, err := l.Accept()
if assert.NoError(t, err, "accept connection") {
defer c.Close()
testAccept(ctx, t, c)
}
}()
//nolint:dogsled
conn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0)
require.True(t, conn.AwaitReachable(context.Background()))
conn1, err := conn.DialContext(context.Background(), l.Addr().Network(), l.Addr().String())
agentConn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0)
require.True(t, agentConn.AwaitReachable(ctx))
conn, err := agentConn.DialContext(ctx, l.Addr().Network(), l.Addr().String())
require.NoError(t, err)
defer conn1.Close()
conn2, err := conn.DialContext(context.Background(), l.Addr().Network(), l.Addr().String())
require.NoError(t, err)
defer conn2.Close()
testDial(t, conn2)
testDial(t, conn1)
time.Sleep(150 * time.Millisecond)
defer conn.Close()
testDial(ctx, t, conn)
})
}
}
@@ -2052,50 +1960,14 @@ func TestAgent_DebugServer(t *testing.T) {
})
}
func setupSSHCommand(t *testing.T, beforeArgs []string, afterArgs []string) (*ptytest.PTYCmd, pty.Process) {
//nolint:dogsled
// setupAgentSSHClient creates an agent, dials it, and sets up an ssh.Client for it
func setupAgentSSHClient(ctx context.Context, t *testing.T) *ssh.Client {
//nolint: dogsled
agentConn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0)
listener, err := net.Listen("tcp", "127.0.0.1:0")
sshClient, err := agentConn.SSHClient(ctx)
require.NoError(t, err)
waitGroup := sync.WaitGroup{}
go func() {
defer listener.Close()
for {
conn, err := listener.Accept()
if err != nil {
return
}
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
ssh, err := agentConn.SSH(ctx)
cancel()
if err != nil {
_ = conn.Close()
return
}
waitGroup.Add(1)
go func() {
agentssh.Bicopy(context.Background(), conn, ssh)
waitGroup.Done()
}()
}
}()
t.Cleanup(func() {
_ = listener.Close()
waitGroup.Wait()
})
tcpAddr, valid := listener.Addr().(*net.TCPAddr)
require.True(t, valid)
args := append(beforeArgs,
"-o", "HostName "+tcpAddr.IP.String(),
"-o", "Port "+strconv.Itoa(tcpAddr.Port),
"-o", "StrictHostKeyChecking=no",
"-o", "UserKnownHostsFile=/dev/null",
"host",
)
args = append(args, afterArgs...)
cmd := pty.Command("ssh", args...)
return ptytest.Start(t, cmd)
t.Cleanup(func() { sshClient.Close() })
return sshClient
}
func setupSSHSession(
@@ -2205,22 +2077,41 @@ func setupAgent(t *testing.T, metadata agentsdk.Manifest, ptyTimeout time.Durati
var dialTestPayload = []byte("dean-was-here123")
func testDial(t *testing.T, c net.Conn) {
func testDial(ctx context.Context, t *testing.T, c net.Conn) {
t.Helper()
if deadline, ok := ctx.Deadline(); ok {
err := c.SetDeadline(deadline)
assert.NoError(t, err)
defer func() {
err := c.SetDeadline(time.Time{})
assert.NoError(t, err)
}()
}
assertWritePayload(t, c, dialTestPayload)
assertReadPayload(t, c, dialTestPayload)
}
func testAccept(t *testing.T, c net.Conn) {
func testAccept(ctx context.Context, t *testing.T, c net.Conn) {
t.Helper()
defer c.Close()
if deadline, ok := ctx.Deadline(); ok {
err := c.SetDeadline(deadline)
assert.NoError(t, err)
defer func() {
err := c.SetDeadline(time.Time{})
assert.NoError(t, err)
}()
}
assertReadPayload(t, c, dialTestPayload)
assertWritePayload(t, c, dialTestPayload)
}
func assertReadPayload(t *testing.T, r io.Reader, payload []byte) {
t.Helper()
b := make([]byte, len(payload)+16)
n, err := r.Read(b)
assert.NoError(t, err, "read payload")
@@ -2229,6 +2120,7 @@ func assertReadPayload(t *testing.T, r io.Reader, payload []byte) {
}
func assertWritePayload(t *testing.T, w io.Writer, payload []byte) {
t.Helper()
n, err := w.Write(payload)
assert.NoError(t, err, "write payload")
assert.Equal(t, len(payload), n, "payload length does not match")
@@ -2345,6 +2237,17 @@ func TestAgent_Metrics_SSH(t *testing.T) {
Type: agentsdk.AgentMetricTypeCounter,
Value: 0,
},
{
Name: "coderd_agentstats_startup_script_seconds",
Type: agentsdk.AgentMetricTypeGauge,
Value: 0,
Labels: []agentsdk.AgentMetricLabel{
{
Name: "success",
Value: "true",
},
},
},
}
var actual []*promgo.MetricFamily
@@ -2569,3 +2472,47 @@ func (s *syncWriter) Write(p []byte) (int, error) {
defer s.mu.Unlock()
return s.w.Write(p)
}
// pickRandomPort picks a random port number from the ephemeral range. We do this entirely randomly
// instead of opening a listener and closing it to find a port that is likely to be free, since
// sometimes the OS reallocates the port very quickly.
func pickRandomPort() uint16 {
const (
// Overlap of windows, linux in https://en.wikipedia.org/wiki/Ephemeral_port
min = 49152
max = 60999
)
n := max - min
x := rand.Intn(n) //nolint: gosec
return uint16(min + x)
}
// echoOnce accepts a single connection, reads 4 bytes, and echoes them back
func echoOnce(t *testing.T, ll net.Listener) {
t.Helper()
conn, err := ll.Accept()
if err != nil {
return
}
defer conn.Close()
b := make([]byte, 4)
_, err = conn.Read(b)
if !assert.NoError(t, err) {
return
}
_, err = conn.Write(b)
if !assert.NoError(t, err) {
return
}
}
// requireEcho sends 4 bytes and requires the read response to match what was sent.
func requireEcho(t *testing.T, conn net.Conn) {
t.Helper()
_, err := conn.Write([]byte("test"))
require.NoError(t, err)
b := make([]byte, 4)
_, err = conn.Read(b)
require.NoError(t, err)
require.Equal(t, "test", string(b))
}
+4 -4
@@ -7,18 +7,18 @@ import (
"github.com/spf13/afero"
)
func (p *Process) Niceness(sc Syscaller) (int, error) {
func (*Process) Niceness(Syscaller) (int, error) {
return 0, errUnimplemented
}
func (p *Process) SetNiceness(sc Syscaller, score int) error {
func (*Process) SetNiceness(Syscaller, int) error {
return errUnimplemented
}
func (p *Process) Cmd() string {
func (*Process) Cmd() string {
return ""
}
func List(fs afero.Fs, syscaller Syscaller) ([]*Process, error) {
func List(afero.Fs, Syscaller) ([]*Process, error) {
return nil, errUnimplemented
}
+1
@@ -10,6 +10,7 @@ type Syscaller interface {
Kill(pid int32, sig syscall.Signal) error
}
// nolint: unused // used on some but not all platforms
const defaultProcDir = "/proc"
type Process struct {
+3 -3
@@ -17,14 +17,14 @@ var errUnimplemented = xerrors.New("unimplemented")
type nopSyscaller struct{}
func (nopSyscaller) SetPriority(pid int32, priority int) error {
func (nopSyscaller) SetPriority(int32, int) error {
return errUnimplemented
}
func (nopSyscaller) GetPriority(pid int32) (int, error) {
func (nopSyscaller) GetPriority(int32) (int, error) {
return 0, errUnimplemented
}
func (nopSyscaller) Kill(pid int32, sig syscall.Signal) error {
func (nopSyscaller) Kill(int32, syscall.Signal) error {
return errUnimplemented
}
+79 -5
@@ -13,12 +13,14 @@ import (
"sync/atomic"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/robfig/cron/v3"
"github.com/spf13/afero"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentssh"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
@@ -27,6 +29,14 @@ import (
var (
// ErrTimeout is returned when a script times out.
ErrTimeout = xerrors.New("script timed out")
// ErrOutputPipesOpen is returned when a script exits leaving the output
// pipe(s) (stdout, stderr) open. This happens because we set WaitDelay on
// the command, which gives us two things:
//
// 1. The ability to ensure that a script exits (this is important for e.g.
// blocking login, and avoiding doing so indefinitely)
// 2. Improved command cancellation on timeout
ErrOutputPipesOpen = xerrors.New("script exited without closing output pipes")
parser = cron.NewParser(cron.Second | cron.Minute | cron.Hour | cron.Dom | cron.Month | cron.DowOptional)
)
@@ -49,6 +59,11 @@ func New(opts Options) *Runner {
cronCtxCancel: cronCtxCancel,
cron: cron.New(cron.WithParser(parser)),
closed: make(chan struct{}),
scriptsExecuted: prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: "agent",
Subsystem: "scripts",
Name: "executed_total",
}, []string{"success"}),
}
}
@@ -63,6 +78,19 @@ type Runner struct {
cron *cron.Cron
initialized atomic.Bool
scripts []codersdk.WorkspaceAgentScript
// scriptsExecuted includes all scripts executed by the workspace agent. Agents
// execute startup scripts, and scripts on a cron schedule. Both will increment
// this counter.
scriptsExecuted *prometheus.CounterVec
}
func (r *Runner) RegisterMetrics(reg prometheus.Registerer) {
if reg == nil {
// If no registry, do nothing.
return
}
reg.MustRegister(r.scriptsExecuted)
}
// Init initializes the runner with the provided scripts.
@@ -82,7 +110,7 @@ func (r *Runner) Init(scripts []codersdk.WorkspaceAgentScript) error {
}
script := script
_, err := r.cron.AddFunc(script.Cron, func() {
err := r.run(r.cronCtx, script)
err := r.trackRun(r.cronCtx, script)
if err != nil {
r.Logger.Warn(context.Background(), "run agent script on schedule", slog.Error(err))
}
@@ -97,7 +125,26 @@ func (r *Runner) Init(scripts []codersdk.WorkspaceAgentScript) error {
// StartCron starts the cron scheduler.
// This is done async to allow for the caller to execute scripts prior.
func (r *Runner) StartCron() {
r.cron.Start()
// cron.Start() and cron.Stop() do not guarantee that the cron goroutine
// has exited by the time the context returned by cron.Stop() is done, so
// we need to track it manually.
err := r.trackCommandGoroutine(func() {
// Since this is run async, in quick unit tests, it is possible the
// Close() function gets called before we even start the cron.
// In these cases, the Run() will never end.
// So if we are closed, we just return, and skip the Run() entirely.
select {
case <-r.cronCtx.Done():
// The cronCtx is canceled before the cron is stopped in Close(), so if
// the ctx is canceled, Close() has been called or is about to be
// called. So do nothing!
default:
r.cron.Run()
}
})
if err != nil {
r.Logger.Warn(context.Background(), "start cron failed", slog.Error(err))
}
}
// Execute runs a set of scripts according to a filter.
@@ -115,7 +162,7 @@ func (r *Runner) Execute(ctx context.Context, filter func(script codersdk.Worksp
}
script := script
eg.Go(func() error {
err := r.run(ctx, script)
err := r.trackRun(ctx, script)
if err != nil {
return xerrors.Errorf("run agent script %q: %w", script.LogSourceID, err)
}
@@ -125,6 +172,17 @@ func (r *Runner) Execute(ctx context.Context, filter func(script codersdk.Worksp
return eg.Wait()
}
// trackRun wraps "run" with metrics.
func (r *Runner) trackRun(ctx context.Context, script codersdk.WorkspaceAgentScript) error {
err := r.run(ctx, script)
if err != nil {
r.scriptsExecuted.WithLabelValues("false").Add(1)
} else {
r.scriptsExecuted.WithLabelValues("true").Add(1)
}
return err
}
// run executes the provided script with the timeout.
// If the timeout is exceeded, the process is sent an interrupt signal.
// If the process does not exit after a few seconds, it is forcefully killed.
@@ -240,7 +298,22 @@ func (r *Runner) run(ctx context.Context, script codersdk.WorkspaceAgentScript)
err = cmdCtx.Err()
case err = <-cmdDone:
}
if errors.Is(err, context.DeadlineExceeded) {
switch {
case errors.Is(err, exec.ErrWaitDelay):
err = ErrOutputPipesOpen
message := fmt.Sprintf("script exited successfully, but output pipes were not closed after %s", cmd.WaitDelay)
details := fmt.Sprint(
"This usually means a child process was started with references to stdout or stderr. As a result, this " +
"process may now have been terminated. Consider redirecting the output or using a separate " +
"\"coder_script\" for the process, see " +
"https://coder.com/docs/v2/latest/templates/troubleshooting#startup-script-issues for more information.",
)
// Inform the user by propagating the message via log writers.
_, _ = fmt.Fprintf(cmd.Stderr, "WARNING: %s. %s\n", message, details)
// Also log to agent logs for ease of debugging.
r.Logger.Warn(ctx, message, slog.F("details", details), slog.Error(err))
case errors.Is(err, context.DeadlineExceeded):
err = ErrTimeout
}
return err
@@ -253,8 +326,9 @@ func (r *Runner) Close() error {
return nil
}
close(r.closed)
// Must cancel the cron ctx BEFORE stopping the cron.
r.cronCtxCancel()
r.cron.Stop()
<-r.cron.Stop().Done()
r.cmdCloseWait.Wait()
return nil
}
@@ -53,6 +53,15 @@ func TestTimeout(t *testing.T) {
require.ErrorIs(t, runner.Execute(context.Background(), nil), agentscripts.ErrTimeout)
}
// TestCronClose exists because cron.Run() can start after Close() has
// already been called, which previously caused a deadlock.
func TestCronClose(t *testing.T) {
t.Parallel()
runner := agentscripts.New(agentscripts.Options{})
runner.StartCron()
require.NoError(t, runner.Close(), "close runner")
}
func setup(t *testing.T, patchLogs func(ctx context.Context, req agentsdk.PatchLogs) error) *agentscripts.Runner {
t.Helper()
if patchLogs == nil {
@@ -19,6 +19,7 @@ import (
"time"
"github.com/gliderlabs/ssh"
"github.com/google/uuid"
"github.com/kballard/go-shellquote"
"github.com/pkg/sftp"
"github.com/prometheus/client_golang/prometheus"
@@ -46,8 +47,12 @@ const (
MagicSessionTypeEnvironmentVariable = "CODER_SSH_SESSION_TYPE"
// MagicSessionTypeVSCode is set in the SSH config by the VS Code extension to identify itself.
MagicSessionTypeVSCode = "vscode"
// MagicSessionTypeJetBrains is set in the SSH config by the JetBrains extension to identify itself.
// MagicSessionTypeJetBrains is set in the SSH config by the JetBrains
// extension to identify itself.
MagicSessionTypeJetBrains = "jetbrains"
// MagicProcessCmdlineJetBrains is a string in a process's command line that
// uniquely identifies it as JetBrains software.
MagicProcessCmdlineJetBrains = "idea.vendor.name=JetBrains"
)
type Server struct {
@@ -110,7 +115,11 @@ func NewServer(ctx context.Context, logger slog.Logger, prometheusRegistry *prom
srv := &ssh.Server{
ChannelHandlers: map[string]ssh.ChannelHandler{
"direct-tcpip": ssh.DirectTCPIPHandler,
"direct-tcpip": func(srv *ssh.Server, conn *gossh.ServerConn, newChan gossh.NewChannel, ctx ssh.Context) {
// Wrapper is designed to find and track JetBrains Gateway connections.
wrapped := NewJetbrainsChannelWatcher(ctx, s.logger, newChan, &s.connCountJetBrains)
ssh.DirectTCPIPHandler(srv, conn, wrapped, ctx)
},
"direct-streamlocal@openssh.com": directStreamLocalHandler,
"session": ssh.DefaultSessionHandler,
},
@@ -141,7 +150,7 @@ func NewServer(ctx context.Context, logger slog.Logger, prometheusRegistry *prom
},
ReversePortForwardingCallback: func(ctx ssh.Context, bindHost string, bindPort uint32) bool {
// Allow all reverse port forwarding.
s.logger.Debug(ctx, "local port forward",
s.logger.Debug(ctx, "reverse port forward",
slog.F("bind_host", bindHost),
slog.F("bind_port", bindPort))
return true
@@ -192,9 +201,16 @@ func (s *Server) ConnStats() ConnStats {
}
func (s *Server) sessionHandler(session ssh.Session) {
logger := s.logger.With(slog.F("remote_addr", session.RemoteAddr()), slog.F("local_addr", session.LocalAddr()))
logger.Info(session.Context(), "handling ssh session")
ctx := session.Context()
logger := s.logger.With(
slog.F("remote_addr", session.RemoteAddr()),
slog.F("local_addr", session.LocalAddr()),
// Assign a random UUID to each session so that logs from the same
// SSH session can be correlated.
slog.F("id", uuid.NewString()),
)
logger.Info(ctx, "handling ssh session")
if !s.trackSession(session, true) {
// See (*Server).Close() for why we call Close instead of Exit.
_ = session.Close()
@@ -218,7 +234,7 @@ func (s *Server) sessionHandler(session ssh.Session) {
switch ss := session.Subsystem(); ss {
case "":
case "sftp":
s.sftpHandler(session)
s.sftpHandler(logger, session)
return
default:
logger.Warn(ctx, "unsupported subsystem", slog.F("subsystem", ss))
@@ -226,11 +242,32 @@ func (s *Server) sessionHandler(session ssh.Session) {
return
}
err := s.sessionStart(session, extraEnv)
err := s.sessionStart(logger, session, extraEnv)
var exitError *exec.ExitError
if xerrors.As(err, &exitError) {
logger.Info(ctx, "ssh session returned", slog.Error(exitError))
_ = session.Exit(exitError.ExitCode())
code := exitError.ExitCode()
if code == -1 {
// If we return -1 here, it will be transmitted as an
// uint32(4294967295). This exit code is nonsense, so
// instead we return 255 (same as OpenSSH). This is
// also the same exit code that the shell returns for
// -1.
//
// For signals, we could consider sending 128+signal
// instead (however, OpenSSH doesn't seem to do this).
code = 255
}
logger.Info(ctx, "ssh session returned",
slog.Error(exitError),
slog.F("process_exit_code", exitError.ExitCode()),
slog.F("exit_code", code),
)
// TODO(mafredri): For signal exit, there's also an "exit-signal"
// request (session.Exit sends "exit-status"), however, since it's
// not implemented on the session interface and not used by
// OpenSSH, we'll leave it for now.
_ = session.Exit(code)
return
}
if err != nil {
@@ -244,7 +281,7 @@ func (s *Server) sessionHandler(session ssh.Session) {
_ = session.Exit(0)
}
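The exit-code handling in the hunk above maps `-1` (which Go's `os/exec` reports when a process was killed by a signal) to `255`, since `-1` would be transmitted over SSH as `uint32(4294967295)`. A minimal sketch of that mapping as a standalone function — `sshExitCode` is an illustrative name, not from the diff:

```go
package main

import "fmt"

// sshExitCode maps a process exit code to the value reported over SSH.
// os/exec reports -1 when the process was signaled; -1 is nonsense as a
// uint32 exit status, so return 255 (same as OpenSSH and the shell).
func sshExitCode(code int) int {
	if code == -1 {
		return 255
	}
	return code
}

func main() {
	fmt.Println(sshExitCode(-1), sshExitCode(0), sshExitCode(2)) // 255 0 2
}
```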
func (s *Server) sessionStart(session ssh.Session, extraEnv []string) (retErr error) {
func (s *Server) sessionStart(logger slog.Logger, session ssh.Session, extraEnv []string) (retErr error) {
ctx := session.Context()
env := append(session.Environ(), extraEnv...)
var magicType string
@@ -252,23 +289,23 @@ func (s *Server) sessionStart(session ssh.Session, extraEnv []string) (retErr er
if !strings.HasPrefix(kv, MagicSessionTypeEnvironmentVariable) {
continue
}
magicType = strings.TrimPrefix(kv, MagicSessionTypeEnvironmentVariable+"=")
magicType = strings.ToLower(strings.TrimPrefix(kv, MagicSessionTypeEnvironmentVariable+"="))
env = append(env[:index], env[index+1:]...)
}
// Always force lowercase checking to be case-insensitive.
switch strings.ToLower(magicType) {
case strings.ToLower(MagicSessionTypeVSCode):
switch magicType {
case MagicSessionTypeVSCode:
s.connCountVSCode.Add(1)
defer s.connCountVSCode.Add(-1)
case strings.ToLower(MagicSessionTypeJetBrains):
s.connCountJetBrains.Add(1)
defer s.connCountJetBrains.Add(-1)
case MagicSessionTypeJetBrains:
// Do nothing here because JetBrains launches hundreds of SSH sessions.
// We instead track JetBrains via its single persistent TCP forwarding channel.
case "":
s.connCountSSHSession.Add(1)
defer s.connCountSSHSession.Add(-1)
default:
s.logger.Warn(ctx, "invalid magic ssh session type specified", slog.F("type", magicType))
logger.Warn(ctx, "invalid magic ssh session type specified", slog.F("type", magicType))
}
magicTypeLabel := magicTypeMetricLabel(magicType)
@@ -301,12 +338,12 @@ func (s *Server) sessionStart(session ssh.Session, extraEnv []string) (retErr er
}
if isPty {
return s.startPTYSession(session, magicTypeLabel, cmd, sshPty, windowSize)
return s.startPTYSession(logger, session, magicTypeLabel, cmd, sshPty, windowSize)
}
return s.startNonPTYSession(session, magicTypeLabel, cmd.AsExec())
return s.startNonPTYSession(logger, session, magicTypeLabel, cmd.AsExec())
}
func (s *Server) startNonPTYSession(session ssh.Session, magicTypeLabel string, cmd *exec.Cmd) error {
func (s *Server) startNonPTYSession(logger slog.Logger, session ssh.Session, magicTypeLabel string, cmd *exec.Cmd) error {
s.metrics.sessionsTotal.WithLabelValues(magicTypeLabel, "no").Add(1)
cmd.Stdout = session
@@ -330,6 +367,17 @@ func (s *Server) startNonPTYSession(session ssh.Session, magicTypeLabel string,
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "no", "start_command").Add(1)
return xerrors.Errorf("start: %w", err)
}
sigs := make(chan ssh.Signal, 1)
session.Signals(sigs)
defer func() {
session.Signals(nil)
close(sigs)
}()
go func() {
for sig := range sigs {
s.handleSignal(logger, sig, cmd.Process, magicTypeLabel)
}
}()
return cmd.Wait()
}
@@ -340,9 +388,10 @@ type ptySession interface {
Context() ssh.Context
DisablePTYEmulation()
RawCommand() string
Signals(chan<- ssh.Signal)
}
func (s *Server) startPTYSession(session ptySession, magicTypeLabel string, cmd *pty.Cmd, sshPty ssh.Pty, windowSize <-chan ssh.Window) (retErr error) {
func (s *Server) startPTYSession(logger slog.Logger, session ptySession, magicTypeLabel string, cmd *pty.Cmd, sshPty ssh.Pty, windowSize <-chan ssh.Window) (retErr error) {
s.metrics.sessionsTotal.WithLabelValues(magicTypeLabel, "yes").Add(1)
ctx := session.Context()
@@ -355,7 +404,7 @@ func (s *Server) startPTYSession(session ptySession, magicTypeLabel string, cmd
if serviceBanner != nil {
err := showServiceBanner(session, serviceBanner)
if err != nil {
s.logger.Error(ctx, "agent failed to show service banner", slog.Error(err))
logger.Error(ctx, "agent failed to show service banner", slog.Error(err))
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "service_banner").Add(1)
}
}
@@ -366,11 +415,11 @@ func (s *Server) startPTYSession(session ptySession, magicTypeLabel string, cmd
if manifest != nil {
err := showMOTD(s.fs, session, manifest.MOTDFile)
if err != nil {
s.logger.Error(ctx, "agent failed to show MOTD", slog.Error(err))
logger.Error(ctx, "agent failed to show MOTD", slog.Error(err))
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "motd").Add(1)
}
} else {
s.logger.Warn(ctx, "metadata lookup failed, unable to show MOTD")
logger.Warn(ctx, "metadata lookup failed, unable to show MOTD")
}
}
@@ -379,7 +428,7 @@ func (s *Server) startPTYSession(session ptySession, magicTypeLabel string, cmd
// The pty package sets `SSH_TTY` on supported platforms.
ptty, process, err := pty.Start(cmd, pty.WithPTYOption(
pty.WithSSHRequest(sshPty),
pty.WithLogger(slog.Stdlib(ctx, s.logger, slog.LevelInfo)),
pty.WithLogger(slog.Stdlib(ctx, logger, slog.LevelInfo)),
))
if err != nil {
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "start_command").Add(1)
@@ -388,20 +437,43 @@ func (s *Server) startPTYSession(session ptySession, magicTypeLabel string, cmd
defer func() {
closeErr := ptty.Close()
if closeErr != nil {
s.logger.Warn(ctx, "failed to close tty", slog.Error(closeErr))
logger.Warn(ctx, "failed to close tty", slog.Error(closeErr))
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "close").Add(1)
if retErr == nil {
retErr = closeErr
}
}
}()
sigs := make(chan ssh.Signal, 1)
session.Signals(sigs)
defer func() {
session.Signals(nil)
close(sigs)
}()
go func() {
for win := range windowSize {
resizeErr := ptty.Resize(uint16(win.Height), uint16(win.Width))
// If the pty is closed, then command has exited, no need to log.
if resizeErr != nil && !errors.Is(resizeErr, pty.ErrClosed) {
s.logger.Warn(ctx, "failed to resize tty", slog.Error(resizeErr))
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "resize").Add(1)
for {
if sigs == nil && windowSize == nil {
return
}
select {
case sig, ok := <-sigs:
if !ok {
sigs = nil
continue
}
s.handleSignal(logger, sig, process, magicTypeLabel)
case win, ok := <-windowSize:
if !ok {
windowSize = nil
continue
}
resizeErr := ptty.Resize(uint16(win.Height), uint16(win.Width))
// If the pty is closed, then command has exited, no need to log.
if resizeErr != nil && !errors.Is(resizeErr, pty.ErrClosed) {
logger.Warn(ctx, "failed to resize tty", slog.Error(resizeErr))
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "resize").Add(1)
}
}
}
}()
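The goroutine above merges signal and window-resize handling into one select loop, setting each channel to nil once it closes. A nil channel blocks forever in a `select`, so the loop exits only when both channels are done. A self-contained sketch of the pattern (`drainBoth` is an illustrative name):

```go
package main

import "fmt"

// drainBoth consumes two channels in one goroutine-style loop, setting
// each to nil once it closes; since a nil channel never fires in select,
// the loop ends only when both are exhausted.
func drainBoth(a, b <-chan int) (sum int) {
	for {
		if a == nil && b == nil {
			return sum
		}
		select {
		case v, ok := <-a:
			if !ok {
				a = nil
				continue
			}
			sum += v
		case v, ok := <-b:
			if !ok {
				b = nil
				continue
			}
			sum += v
		}
	}
}

func main() {
	a := make(chan int, 2)
	b := make(chan int, 1)
	a <- 1
	a <- 2
	b <- 3
	close(a)
	close(b)
	fmt.Println(drainBoth(a, b)) // 6
}
```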
@@ -422,7 +494,7 @@ func (s *Server) startPTYSession(session ptySession, magicTypeLabel string, cmd
// 2. The client hangs up, which cancels the command's Context, and go will
// kill the command's process. This then has the same effect as (1).
n, err := io.Copy(session, ptty.OutputReader())
s.logger.Debug(ctx, "copy output done", slog.F("bytes", n), slog.Error(err))
logger.Debug(ctx, "copy output done", slog.F("bytes", n), slog.Error(err))
if err != nil {
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "output_io_copy").Add(1)
return xerrors.Errorf("copy error: %w", err)
@@ -435,7 +507,7 @@ func (s *Server) startPTYSession(session ptySession, magicTypeLabel string, cmd
// ExitErrors just mean the command we run returned a non-zero exit code, which is normal
// and not something to be concerned about. But, if it's something else, we should log it.
if err != nil && !xerrors.As(err, &exitErr) {
s.logger.Warn(ctx, "process wait exited with error", slog.Error(err))
logger.Warn(ctx, "process wait exited with error", slog.Error(err))
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "wait").Add(1)
}
if err != nil {
@@ -444,7 +516,19 @@ func (s *Server) startPTYSession(session ptySession, magicTypeLabel string, cmd
return nil
}
func (s *Server) sftpHandler(session ssh.Session) {
func (s *Server) handleSignal(logger slog.Logger, ssig ssh.Signal, signaler interface{ Signal(os.Signal) error }, magicTypeLabel string) {
ctx := context.Background()
sig := osSignalFrom(ssig)
logger = logger.With(slog.F("ssh_signal", ssig), slog.F("signal", sig.String()))
logger.Info(ctx, "received signal from client")
err := signaler.Signal(sig)
if err != nil {
logger.Warn(ctx, "signaling the process failed", slog.Error(err))
s.metrics.sessionErrors.WithLabelValues(magicTypeLabel, "yes", "signal").Add(1)
}
}
func (s *Server) sftpHandler(logger slog.Logger, session ssh.Session) {
s.metrics.sftpConnectionsTotal.Add(1)
ctx := session.Context()
@@ -460,14 +544,14 @@ func (s *Server) sftpHandler(session ssh.Session) {
// directory so that SFTP connections land there.
homedir, err := userHomeDir()
if err != nil {
s.logger.Warn(ctx, "get sftp working directory failed, unable to get home dir", slog.Error(err))
logger.Warn(ctx, "get sftp working directory failed, unable to get home dir", slog.Error(err))
} else {
opts = append(opts, sftp.WithServerWorkingDirectory(homedir))
}
server, err := sftp.NewServer(session, opts...)
if err != nil {
s.logger.Debug(ctx, "initialize sftp server", slog.Error(err))
logger.Debug(ctx, "initialize sftp server", slog.Error(err))
return
}
defer server.Close()
@@ -485,7 +569,7 @@ func (s *Server) sftpHandler(session ssh.Session) {
_ = session.Exit(0)
return
}
s.logger.Warn(ctx, "sftp server closed with error", slog.Error(err))
logger.Warn(ctx, "sftp server closed with error", slog.Error(err))
s.metrics.sftpServerErrors.Add(1)
_ = session.Exit(1)
}
@@ -63,7 +63,7 @@ func Test_sessionStart_orphan(t *testing.T) {
// we don't really care what the error is here. In the larger scenario,
// the client has disconnected, so we can't return any error information
// to them.
_ = s.startPTYSession(sess, "ssh", cmd, ptyInfo, windowSize)
_ = s.startPTYSession(logger, sess, "ssh", cmd, ptyInfo, windowSize)
}()
readDone := make(chan struct{})
@@ -114,6 +114,11 @@ type testSSHContext struct {
context.Context
}
var (
_ gliderssh.Context = testSSHContext{}
_ ptySession = &testSession{}
)
func newTestSession(ctx context.Context) (toClient *io.PipeReader, fromClient *io.PipeWriter, s ptySession) {
toClient, fromPty := io.Pipe()
toPty, fromClient := io.Pipe()
@@ -144,6 +149,10 @@ func (s *testSession) Write(p []byte) (n int, err error) {
return s.fromPty.Write(p)
}
func (*testSession) Signals(_ chan<- gliderssh.Signal) {
// Not implemented, but will be called.
}
func (testSSHContext) Lock() {
panic("not implemented")
}
@@ -3,8 +3,10 @@
package agentssh_test
import (
"bufio"
"bytes"
"context"
"fmt"
"net"
"runtime"
"strings"
@@ -24,6 +26,7 @@ import (
"github.com/coder/coder/v2/agent/agentssh"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
func TestMain(m *testing.M) {
@@ -57,8 +60,8 @@ func TestNewServer_ServeClient(t *testing.T) {
var b bytes.Buffer
sess, err := c.NewSession()
sess.Stdout = &b
require.NoError(t, err)
sess.Stdout = &b
err = sess.Start("echo hello")
require.NoError(t, err)
@@ -139,6 +142,7 @@ func TestNewServer_CloseActiveConnections(t *testing.T) {
defer wg.Done()
c := sshClient(t, ln.Addr().String())
sess, err := c.NewSession()
assert.NoError(t, err)
sess.Stdin = pty.Input()
sess.Stdout = pty.Output()
sess.Stderr = pty.Output()
@@ -159,6 +163,159 @@ func TestNewServer_CloseActiveConnections(t *testing.T) {
wg.Wait()
}
func TestNewServer_Signal(t *testing.T) {
t.Parallel()
t.Run("Stdout", func(t *testing.T) {
t.Parallel()
ctx := context.Background()
logger := slogtest.Make(t, nil)
s, err := agentssh.NewServer(ctx, logger, prometheus.NewRegistry(), afero.NewMemMapFs(), 0, "")
require.NoError(t, err)
defer s.Close()
// The assumption is that these are set before serving SSH connections.
s.AgentToken = func() string { return "" }
s.Manifest = atomic.NewPointer(&agentsdk.Manifest{})
ln, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
done := make(chan struct{})
go func() {
defer close(done)
err := s.Serve(ln)
assert.Error(t, err) // Server is closed.
}()
defer func() {
err := s.Close()
require.NoError(t, err)
<-done
}()
c := sshClient(t, ln.Addr().String())
sess, err := c.NewSession()
require.NoError(t, err)
r, err := sess.StdoutPipe()
require.NoError(t, err)
// Perform multiple sleeps since the interrupt signal doesn't propagate to
// the process group; this lets us exit early.
sleeps := strings.Repeat("sleep 1 && ", int(testutil.WaitMedium.Seconds()))
err = sess.Start(fmt.Sprintf("echo hello && %s echo bye", sleeps))
require.NoError(t, err)
sc := bufio.NewScanner(r)
for sc.Scan() {
t.Log(sc.Text())
if strings.Contains(sc.Text(), "hello") {
break
}
}
require.NoError(t, sc.Err())
err = sess.Signal(ssh.SIGKILL)
require.NoError(t, err)
// Assumption: the signal propagates and the command exits, closing stdout.
for sc.Scan() {
t.Log(sc.Text())
require.NotContains(t, sc.Text(), "bye")
}
require.NoError(t, sc.Err())
err = sess.Wait()
exitErr := &ssh.ExitError{}
require.ErrorAs(t, err, &exitErr)
wantCode := 255
if runtime.GOOS == "windows" {
wantCode = 1
}
require.Equal(t, wantCode, exitErr.ExitStatus())
})
t.Run("PTY", func(t *testing.T) {
t.Parallel()
ctx := context.Background()
logger := slogtest.Make(t, nil)
s, err := agentssh.NewServer(ctx, logger, prometheus.NewRegistry(), afero.NewMemMapFs(), 0, "")
require.NoError(t, err)
defer s.Close()
// The assumption is that these are set before serving SSH connections.
s.AgentToken = func() string { return "" }
s.Manifest = atomic.NewPointer(&agentsdk.Manifest{})
ln, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
done := make(chan struct{})
go func() {
defer close(done)
err := s.Serve(ln)
assert.Error(t, err) // Server is closed.
}()
defer func() {
err := s.Close()
require.NoError(t, err)
<-done
}()
c := sshClient(t, ln.Addr().String())
pty := ptytest.New(t)
sess, err := c.NewSession()
require.NoError(t, err)
r, err := sess.StdoutPipe()
require.NoError(t, err)
// Note: we request a pty but don't use ptytest here because we can't
// easily test for no text before EOF.
sess.Stdin = pty.Input()
sess.Stderr = pty.Output()
err = sess.RequestPty("xterm", 80, 80, nil)
require.NoError(t, err)
// Perform multiple sleeps since the interrupt signal doesn't propagate to
// the process group; this lets us exit early.
sleeps := strings.Repeat("sleep 1 && ", int(testutil.WaitMedium.Seconds()))
err = sess.Start(fmt.Sprintf("echo hello && %s echo bye", sleeps))
require.NoError(t, err)
sc := bufio.NewScanner(r)
for sc.Scan() {
t.Log(sc.Text())
if strings.Contains(sc.Text(), "hello") {
break
}
}
require.NoError(t, sc.Err())
err = sess.Signal(ssh.SIGKILL)
require.NoError(t, err)
// Assumption: the signal propagates and the command exits, closing stdout.
for sc.Scan() {
t.Log(sc.Text())
require.NotContains(t, sc.Text(), "bye")
}
require.NoError(t, sc.Err())
err = sess.Wait()
exitErr := &ssh.ExitError{}
require.ErrorAs(t, err, &exitErr)
wantCode := 255
if runtime.GOOS == "windows" {
wantCode = 1
}
require.Equal(t, wantCode, exitErr.ExitStatus())
})
}
func sshClient(t *testing.T, addr string) *ssh.Client {
conn, err := net.Dial("tcp", addr)
require.NoError(t, err)
@@ -37,6 +37,7 @@ type forwardedUnixHandler struct {
}
func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server, req *gossh.Request) (bool, []byte) {
h.log.Debug(ctx, "handling SSH unix forward")
h.Lock()
if h.forwards == nil {
h.forwards = make(map[string]net.Listener)
@@ -47,22 +48,25 @@ func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server,
h.log.Warn(ctx, "SSH unix forward request from client with no gossh connection")
return false, nil
}
log := h.log.With(slog.F("remote_addr", conn.RemoteAddr()))
switch req.Type {
case "streamlocal-forward@openssh.com":
var reqPayload streamLocalForwardPayload
err := gossh.Unmarshal(req.Payload, &reqPayload)
if err != nil {
h.log.Warn(ctx, "parse streamlocal-forward@openssh.com request payload from client", slog.Error(err))
h.log.Warn(ctx, "parse streamlocal-forward@openssh.com request (SSH unix forward) payload from client", slog.Error(err))
return false, nil
}
addr := reqPayload.SocketPath
log = log.With(slog.F("socket_path", addr))
log.Debug(ctx, "request begin SSH unix forward")
h.Lock()
_, ok := h.forwards[addr]
h.Unlock()
if ok {
h.log.Warn(ctx, "SSH unix forward request for socket path that is already being forwarded (maybe to another client?)",
log.Warn(ctx, "SSH unix forward request for socket path that is already being forwarded (maybe to another client?)",
slog.F("socket_path", addr),
)
return false, nil
@@ -72,9 +76,8 @@ func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server,
parentDir := filepath.Dir(addr)
err = os.MkdirAll(parentDir, 0o700)
if err != nil {
h.log.Warn(ctx, "create parent dir for SSH unix forward request",
log.Warn(ctx, "create parent dir for SSH unix forward request",
slog.F("parent_dir", parentDir),
slog.F("socket_path", addr),
slog.Error(err),
)
return false, nil
@@ -82,12 +85,13 @@ func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server,
ln, err := net.Listen("unix", addr)
if err != nil {
h.log.Warn(ctx, "listen on Unix socket for SSH unix forward request",
log.Warn(ctx, "listen on Unix socket for SSH unix forward request",
slog.F("socket_path", addr),
slog.Error(err),
)
return false, nil
}
log.Debug(ctx, "SSH unix forward listening on socket")
// The listener needs to successfully start before it can be added to
// the map, so we don't have to worry about checking for an existing
@@ -97,6 +101,7 @@ func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server,
h.Lock()
h.forwards[addr] = ln
h.Unlock()
log.Debug(ctx, "SSH unix forward added to cache")
ctx, cancel := context.WithCancel(ctx)
go func() {
@@ -110,14 +115,15 @@ func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server,
c, err := ln.Accept()
if err != nil {
if !xerrors.Is(err, net.ErrClosed) {
h.log.Warn(ctx, "accept on local Unix socket for SSH unix forward request",
slog.F("socket_path", addr),
log.Warn(ctx, "accept on local Unix socket for SSH unix forward request",
slog.Error(err),
)
}
// closed below
log.Debug(ctx, "SSH unix forward listener closed")
break
}
log.Debug(ctx, "accepted SSH unix forward connection")
payload := gossh.Marshal(&forwardedStreamLocalPayload{
SocketPath: addr,
})
@@ -125,7 +131,7 @@ func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server,
go func() {
ch, reqs, err := conn.OpenChannel("forwarded-streamlocal@openssh.com", payload)
if err != nil {
h.log.Warn(ctx, "open SSH channel to forward Unix connection to client",
h.log.Warn(ctx, "open SSH unix forward channel to client",
slog.F("socket_path", addr),
slog.Error(err),
)
@@ -143,6 +149,7 @@ func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server,
delete(h.forwards, addr)
}
h.Unlock()
log.Debug(ctx, "SSH unix forward listener removed from cache", slog.F("path", addr))
_ = ln.Close()
}()
@@ -152,9 +159,10 @@ func (h *forwardedUnixHandler) HandleSSHRequest(ctx ssh.Context, _ *ssh.Server,
var reqPayload streamLocalForwardPayload
err := gossh.Unmarshal(req.Payload, &reqPayload)
if err != nil {
h.log.Warn(ctx, "parse cancel-streamlocal-forward@openssh.com request payload from client", slog.Error(err))
h.log.Warn(ctx, "parse cancel-streamlocal-forward@openssh.com (SSH unix forward) request payload from client", slog.Error(err))
return false, nil
}
log.Debug(ctx, "request to cancel SSH unix forward", slog.F("path", reqPayload.SocketPath))
h.Lock()
ln, ok := h.forwards[reqPayload.SocketPath]
h.Unlock()
@@ -0,0 +1,90 @@
package agentssh
import (
"strings"
"sync"
"github.com/gliderlabs/ssh"
"go.uber.org/atomic"
gossh "golang.org/x/crypto/ssh"
"cdr.dev/slog"
)
// localForwardChannelData is copied from the ssh package.
type localForwardChannelData struct {
DestAddr string
DestPort uint32
OriginAddr string
OriginPort uint32
}
// JetbrainsChannelWatcher is used to track JetBrains port forwarded (Gateway)
// channels. If the port forward is something other than JetBrains, this struct
// is a noop.
type JetbrainsChannelWatcher struct {
gossh.NewChannel
jetbrainsCounter *atomic.Int64
}
func NewJetbrainsChannelWatcher(ctx ssh.Context, logger slog.Logger, newChannel gossh.NewChannel, counter *atomic.Int64) gossh.NewChannel {
d := localForwardChannelData{}
if err := gossh.Unmarshal(newChannel.ExtraData(), &d); err != nil {
// If the data fails to unmarshal, do nothing.
logger.Warn(ctx, "failed to unmarshal port forward data", slog.Error(err))
return newChannel
}
// If we do get a port, we should be able to get the matching PID and from
// there look up the invocation.
cmdline, err := getListeningPortProcessCmdline(d.DestPort)
if err != nil {
logger.Warn(ctx, "failed to inspect port",
slog.F("destination_port", d.DestPort),
slog.Error(err))
return newChannel
}
// If this is not JetBrains, then we do not need to do anything special. We
// attempt to match on something that appears unique to JetBrains software.
if !strings.Contains(strings.ToLower(cmdline), strings.ToLower(MagicProcessCmdlineJetBrains)) {
return newChannel
}
logger.Debug(ctx, "discovered forwarded JetBrains process",
slog.F("destination_port", d.DestPort))
return &JetbrainsChannelWatcher{
NewChannel: newChannel,
jetbrainsCounter: counter,
}
}
func (w *JetbrainsChannelWatcher) Accept() (gossh.Channel, <-chan *gossh.Request, error) {
c, r, err := w.NewChannel.Accept()
if err != nil {
return c, r, err
}
w.jetbrainsCounter.Add(1)
return &ChannelOnClose{
Channel: c,
done: func() {
w.jetbrainsCounter.Add(-1)
},
}, r, err
}
type ChannelOnClose struct {
gossh.Channel
// once ensures Close only decrements the counter a single time,
// because Close can be called multiple times.
once sync.Once
done func()
}
func (c *ChannelOnClose) Close() error {
c.once.Do(c.done)
return c.Channel.Close()
}
@@ -0,0 +1,37 @@
//go:build linux
package agentssh
import (
"fmt"
"os"
"github.com/cakturk/go-netstat/netstat"
"golang.org/x/xerrors"
)
func getListeningPortProcessCmdline(port uint32) (string, error) {
tabs, err := netstat.TCPSocks(func(s *netstat.SockTabEntry) bool {
return s.LocalAddr != nil && uint32(s.LocalAddr.Port) == port
})
if err != nil {
return "", xerrors.Errorf("inspect port %d: %w", port, err)
}
if len(tabs) == 0 {
return "", nil
}
// Defensive check.
if tabs[0].Process == nil {
return "", nil
}
// The process name provided by go-netstat does not include the full
// command line, so grab that instead.
pid := tabs[0].Process.Pid
data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cmdline", pid))
if err != nil {
return "", xerrors.Errorf("read /proc/%d/cmdline: %w", pid, err)
}
return string(data), nil
}
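One detail worth noting about the function above: `/proc/<pid>/cmdline` separates (and terminates) arguments with NUL bytes, so the raw string still contains `\x00` between arguments. The substring match on `idea.vendor.name=JetBrains` works regardless, but if argv were ever needed, splitting looks like this sketch (`argvFromCmdline` is an illustrative name, not in the diff):

```go
package main

import (
	"fmt"
	"strings"
)

// argvFromCmdline splits the raw contents of /proc/<pid>/cmdline into
// arguments; the file uses NUL bytes as separators and terminator, and
// FieldsFunc drops the resulting empty trailing field.
func argvFromCmdline(raw string) []string {
	return strings.FieldsFunc(raw, func(r rune) bool { return r == 0 })
}

func main() {
	raw := "java\x00-Didea.vendor.name=JetBrains\x00"
	fmt.Println(argvFromCmdline(raw)) // [java -Didea.vendor.name=JetBrains]
}
```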
@@ -0,0 +1,9 @@
//go:build !linux
package agentssh
func getListeningPortProcessCmdline(uint32) (string, error) {
// We are not worrying about other platforms at the moment because Gateway
// only supports Linux anyway.
return "", nil
}
@@ -0,0 +1,45 @@
//go:build !windows
package agentssh
import (
"os"
"github.com/gliderlabs/ssh"
"golang.org/x/sys/unix"
)
func osSignalFrom(sig ssh.Signal) os.Signal {
switch sig {
case ssh.SIGABRT:
return unix.SIGABRT
case ssh.SIGALRM:
return unix.SIGALRM
case ssh.SIGFPE:
return unix.SIGFPE
case ssh.SIGHUP:
return unix.SIGHUP
case ssh.SIGILL:
return unix.SIGILL
case ssh.SIGINT:
return unix.SIGINT
case ssh.SIGKILL:
return unix.SIGKILL
case ssh.SIGPIPE:
return unix.SIGPIPE
case ssh.SIGQUIT:
return unix.SIGQUIT
case ssh.SIGSEGV:
return unix.SIGSEGV
case ssh.SIGTERM:
return unix.SIGTERM
case ssh.SIGUSR1:
return unix.SIGUSR1
case ssh.SIGUSR2:
return unix.SIGUSR2
// Unhandled, use sane fallback.
default:
return unix.SIGKILL
}
}
@@ -0,0 +1,15 @@
package agentssh
import (
"os"
"github.com/gliderlabs/ssh"
)
func osSignalFrom(sig ssh.Signal) os.Signal {
switch sig {
// Signals are not supported on Windows.
default:
return os.Kill
}
}
@@ -24,7 +24,7 @@ func NewClient(t testing.TB,
agentID uuid.UUID,
manifest agentsdk.Manifest,
statsChan chan *agentsdk.Stats,
coordinator tailnet.Coordinator,
coordinator tailnet.CoordinatorV1,
) *Client {
if manifest.AgentID == uuid.Nil {
manifest.AgentID = agentID
@@ -47,7 +47,7 @@ type Client struct {
manifest agentsdk.Manifest
metadata map[string]agentsdk.Metadata
statsChan chan *agentsdk.Stats
coordinator tailnet.Coordinator
coordinator tailnet.CoordinatorV1
LastWorkspaceAgent func()
PatchWorkspaceLogs func() error
GetServiceBannerFunc func() (codersdk.ServiceBannerConfig, error)
@@ -26,17 +26,30 @@ func (a *agent) apiHandler() http.Handler {
cpy[k] = b
}
lp := &listeningPortsHandler{ignorePorts: cpy}
cacheDuration := 1 * time.Second
if a.portCacheDuration > 0 {
cacheDuration = a.portCacheDuration
}
lp := &listeningPortsHandler{
ignorePorts: cpy,
cacheDuration: cacheDuration,
}
r.Get("/api/v0/listening-ports", lp.handler)
return r
}
type listeningPortsHandler struct {
- mut sync.Mutex
- ports []codersdk.WorkspaceAgentListeningPort
- mtime time.Time
- ignorePorts map[int]string
+ ignorePorts map[int]string
+ cacheDuration time.Duration
+ //nolint: unused // used on some but not all platforms
+ mut sync.Mutex
+ //nolint: unused // used on some but not all platforms
+ ports []codersdk.WorkspaceAgentListeningPort
+ //nolint: unused // used on some but not all platforms
+ mtime time.Time
}
// handler returns a list of listening ports. This is tested by coderd's
+12
@@ -17,6 +17,9 @@ import (
type agentMetrics struct {
connectionsTotal prometheus.Counter
reconnectingPTYErrors *prometheus.CounterVec
// startupScriptSeconds is the time in seconds that the start script(s)
// took to run. This is reported once per agent.
startupScriptSeconds *prometheus.GaugeVec
}
func newAgentMetrics(registerer prometheus.Registerer) *agentMetrics {
@@ -35,9 +38,18 @@ func newAgentMetrics(registerer prometheus.Registerer) *agentMetrics {
)
registerer.MustRegister(reconnectingPTYErrors)
startupScriptSeconds := prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "coderd",
Subsystem: "agentstats",
Name: "startup_script_seconds",
Help: "Amount of time taken to run the startup script in seconds.",
}, []string{"success"})
registerer.MustRegister(startupScriptSeconds)
return &agentMetrics{
connectionsTotal: connectionsTotal,
reconnectingPTYErrors: reconnectingPTYErrors,
startupScriptSeconds: startupScriptSeconds,
}
}
+1 -1
@@ -15,7 +15,7 @@ func (lp *listeningPortsHandler) getListeningPorts() ([]codersdk.WorkspaceAgentL
lp.mut.Lock()
defer lp.mut.Unlock()
- if time.Since(lp.mtime) < time.Second {
+ if time.Since(lp.mtime) < lp.cacheDuration {
// copy
ports := make([]codersdk.WorkspaceAgentListeningPort, len(lp.ports))
copy(ports, lp.ports)
+1 -1
@@ -4,7 +4,7 @@ package agent
import "github.com/coder/coder/v2/codersdk"
- func (lp *listeningPortsHandler) getListeningPorts() ([]codersdk.WorkspaceAgentListeningPort, error) {
+ func (*listeningPortsHandler) getListeningPorts() ([]codersdk.WorkspaceAgentListeningPort, error) {
// Can't scan for ports on non-linux or non-windows_amd64 systems at the
// moment. The UI will not show any "no ports found" message to the user, so
// the user won't suspect a thing.
File diff suppressed because it is too large.
+262
@@ -0,0 +1,262 @@
syntax = "proto3";
option go_package = "github.com/coder/coder/v2/agent/proto";
package coder.agent.v2;
import "tailnet/proto/tailnet.proto";
import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";
message WorkspaceApp {
bytes id = 1;
string url = 2;
bool external = 3;
string slug = 4;
string display_name = 5;
string command = 6;
string icon = 7;
bool subdomain = 8;
string subdomain_name = 9;
enum SharingLevel {
SHARING_LEVEL_UNSPECIFIED = 0;
OWNER = 1;
AUTHENTICATED = 2;
PUBLIC = 3;
}
SharingLevel sharing_level = 10;
message Healthcheck {
string url = 1;
google.protobuf.Duration interval = 2;
int32 threshold = 3;
}
Healthcheck healthcheck = 11;
enum Health {
HEALTH_UNSPECIFIED = 0;
DISABLED = 1;
INITIALIZING = 2;
HEALTHY = 3;
UNHEALTHY = 4;
}
Health health = 12;
}
message WorkspaceAgentScript {
bytes log_source_id = 1;
string log_path = 2;
string script = 3;
string cron = 4;
bool run_on_start = 5;
bool run_on_stop = 6;
bool start_blocks_login = 7;
google.protobuf.Duration timeout = 8;
}
message WorkspaceAgentMetadata {
message Result {
google.protobuf.Timestamp collected_at = 1;
int64 age = 2;
string value = 3;
string error = 4;
}
Result result = 1;
message Description {
string display_name = 1;
string key = 2;
string script = 3;
google.protobuf.Duration interval = 4;
google.protobuf.Duration timeout = 5;
}
Description description = 2;
}
message Manifest {
bytes agent_id = 1;
string owner_username = 13;
bytes workspace_id = 14;
uint32 git_auth_configs = 2;
map<string, string> environment_variables = 3;
string directory = 4;
string vs_code_port_proxy_uri = 5;
string motd_path = 6;
bool disable_direct_connections = 7;
bool derp_force_websockets = 8;
coder.tailnet.v2.DERPMap derp_map = 9;
repeated WorkspaceAgentScript scripts = 10;
repeated WorkspaceApp apps = 11;
repeated WorkspaceAgentMetadata.Description metadata = 12;
}
message GetManifestRequest {}
message ServiceBanner {
bool enabled = 1;
string message = 2;
string background_color = 3;
}
message GetServiceBannerRequest {}
message Stats {
// ConnectionsByProto is a count of connections by protocol.
map<string, int64> connections_by_proto = 1;
// ConnectionCount is the number of connections received by an agent.
int64 connection_count = 2;
// ConnectionMedianLatencyMS is the median latency of all connections in milliseconds.
double connection_median_latency_ms = 3;
// RxPackets is the number of received packets.
int64 rx_packets = 4;
// RxBytes is the number of received bytes.
int64 rx_bytes = 5;
// TxPackets is the number of transmitted packets.
int64 tx_packets = 6;
// TxBytes is the number of transmitted bytes.
int64 tx_bytes = 7;
// SessionCountVSCode is the number of connections received by an agent
// that are from our VS Code extension.
int64 session_count_vscode = 8;
// SessionCountJetBrains is the number of connections received by an agent
// that are from our JetBrains extension.
int64 session_count_jetbrains = 9;
// SessionCountReconnectingPTY is the number of connections received by an agent
// that are from the reconnecting web terminal.
int64 session_count_reconnecting_pty = 10;
// SessionCountSSH is the number of connections received by an agent
// that are normal, non-tagged SSH sessions.
int64 session_count_ssh = 11;
message Metric {
string name = 1;
enum Type {
TYPE_UNSPECIFIED = 0;
COUNTER = 1;
GAUGE = 2;
}
Type type = 2;
double value = 3;
message Label {
string name = 1;
string value = 2;
}
repeated Label labels = 4;
}
repeated Metric metrics = 12;
}
message UpdateStatsRequest{
Stats stats = 1;
}
message UpdateStatsResponse {
google.protobuf.Duration report_interval = 1;
}
message Lifecycle {
enum State {
STATE_UNSPECIFIED = 0;
CREATED = 1;
STARTING = 2;
START_TIMEOUT = 3;
START_ERROR = 4;
READY = 5;
SHUTTING_DOWN = 6;
SHUTDOWN_TIMEOUT = 7;
SHUTDOWN_ERROR = 8;
OFF = 9;
}
State state = 1;
google.protobuf.Timestamp changed_at = 2;
}
message UpdateLifecycleRequest {
Lifecycle lifecycle = 1;
}
enum AppHealth {
APP_HEALTH_UNSPECIFIED = 0;
DISABLED = 1;
INITIALIZING = 2;
HEALTHY = 3;
UNHEALTHY = 4;
}
message BatchUpdateAppHealthRequest {
message HealthUpdate {
bytes id = 1;
AppHealth health = 2;
}
repeated HealthUpdate updates = 1;
}
message BatchUpdateAppHealthResponse {}
message Startup {
string version = 1;
string expanded_directory = 2;
enum Subsystem {
SUBSYSTEM_UNSPECIFIED = 0;
ENVBOX = 1;
ENVBUILDER = 2;
EXECTRACE = 3;
}
repeated Subsystem subsystems = 3;
}
message UpdateStartupRequest{
Startup startup = 1;
}
message Metadata {
string key = 1;
WorkspaceAgentMetadata.Result result = 2;
}
message BatchUpdateMetadataRequest {
repeated Metadata metadata = 2;
}
message BatchUpdateMetadataResponse {}
message Log {
google.protobuf.Timestamp created_at = 1;
string output = 2;
enum Level {
LEVEL_UNSPECIFIED = 0;
TRACE = 1;
DEBUG = 2;
INFO = 3;
WARN = 4;
ERROR = 5;
}
Level level = 3;
}
message BatchCreateLogsRequest {
bytes log_source_id = 1;
repeated Log logs = 2;
}
message BatchCreateLogsResponse {}
service Agent {
rpc GetManifest(GetManifestRequest) returns (Manifest);
rpc GetServiceBanner(GetServiceBannerRequest) returns (ServiceBanner);
rpc UpdateStats(UpdateStatsRequest) returns (UpdateStatsResponse);
rpc UpdateLifecycle(UpdateLifecycleRequest) returns (Lifecycle);
rpc BatchUpdateAppHealths(BatchUpdateAppHealthRequest) returns (BatchUpdateAppHealthResponse);
rpc UpdateStartup(UpdateStartupRequest) returns (Startup);
rpc BatchUpdateMetadata(BatchUpdateMetadataRequest) returns (BatchUpdateMetadataResponse);
rpc BatchCreateLogs(BatchCreateLogsRequest) returns (BatchCreateLogsResponse);
rpc StreamDERPMaps(tailnet.v2.StreamDERPMapsRequest) returns (stream tailnet.v2.DERPMap);
rpc CoordinateTailnet(stream tailnet.v2.CoordinateRequest) returns (stream tailnet.v2.CoordinateResponse);
}
+539
@@ -0,0 +1,539 @@
// Code generated by protoc-gen-go-drpc. DO NOT EDIT.
// protoc-gen-go-drpc version: v0.0.33
// source: agent/proto/agent.proto
package proto
import (
context "context"
errors "errors"
proto1 "github.com/coder/coder/v2/tailnet/proto"
protojson "google.golang.org/protobuf/encoding/protojson"
proto "google.golang.org/protobuf/proto"
drpc "storj.io/drpc"
drpcerr "storj.io/drpc/drpcerr"
)
type drpcEncoding_File_agent_proto_agent_proto struct{}
func (drpcEncoding_File_agent_proto_agent_proto) Marshal(msg drpc.Message) ([]byte, error) {
return proto.Marshal(msg.(proto.Message))
}
func (drpcEncoding_File_agent_proto_agent_proto) MarshalAppend(buf []byte, msg drpc.Message) ([]byte, error) {
return proto.MarshalOptions{}.MarshalAppend(buf, msg.(proto.Message))
}
func (drpcEncoding_File_agent_proto_agent_proto) Unmarshal(buf []byte, msg drpc.Message) error {
return proto.Unmarshal(buf, msg.(proto.Message))
}
func (drpcEncoding_File_agent_proto_agent_proto) JSONMarshal(msg drpc.Message) ([]byte, error) {
return protojson.Marshal(msg.(proto.Message))
}
func (drpcEncoding_File_agent_proto_agent_proto) JSONUnmarshal(buf []byte, msg drpc.Message) error {
return protojson.Unmarshal(buf, msg.(proto.Message))
}
type DRPCAgentClient interface {
DRPCConn() drpc.Conn
GetManifest(ctx context.Context, in *GetManifestRequest) (*Manifest, error)
GetServiceBanner(ctx context.Context, in *GetServiceBannerRequest) (*ServiceBanner, error)
UpdateStats(ctx context.Context, in *UpdateStatsRequest) (*UpdateStatsResponse, error)
UpdateLifecycle(ctx context.Context, in *UpdateLifecycleRequest) (*Lifecycle, error)
BatchUpdateAppHealths(ctx context.Context, in *BatchUpdateAppHealthRequest) (*BatchUpdateAppHealthResponse, error)
UpdateStartup(ctx context.Context, in *UpdateStartupRequest) (*Startup, error)
BatchUpdateMetadata(ctx context.Context, in *BatchUpdateMetadataRequest) (*BatchUpdateMetadataResponse, error)
BatchCreateLogs(ctx context.Context, in *BatchCreateLogsRequest) (*BatchCreateLogsResponse, error)
StreamDERPMaps(ctx context.Context, in *proto1.StreamDERPMapsRequest) (DRPCAgent_StreamDERPMapsClient, error)
CoordinateTailnet(ctx context.Context) (DRPCAgent_CoordinateTailnetClient, error)
}
type drpcAgentClient struct {
cc drpc.Conn
}
func NewDRPCAgentClient(cc drpc.Conn) DRPCAgentClient {
return &drpcAgentClient{cc}
}
func (c *drpcAgentClient) DRPCConn() drpc.Conn { return c.cc }
func (c *drpcAgentClient) GetManifest(ctx context.Context, in *GetManifestRequest) (*Manifest, error) {
out := new(Manifest)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/GetManifest", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentClient) GetServiceBanner(ctx context.Context, in *GetServiceBannerRequest) (*ServiceBanner, error) {
out := new(ServiceBanner)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/GetServiceBanner", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentClient) UpdateStats(ctx context.Context, in *UpdateStatsRequest) (*UpdateStatsResponse, error) {
out := new(UpdateStatsResponse)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/UpdateStats", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentClient) UpdateLifecycle(ctx context.Context, in *UpdateLifecycleRequest) (*Lifecycle, error) {
out := new(Lifecycle)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/UpdateLifecycle", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentClient) BatchUpdateAppHealths(ctx context.Context, in *BatchUpdateAppHealthRequest) (*BatchUpdateAppHealthResponse, error) {
out := new(BatchUpdateAppHealthResponse)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/BatchUpdateAppHealths", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentClient) UpdateStartup(ctx context.Context, in *UpdateStartupRequest) (*Startup, error) {
out := new(Startup)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/UpdateStartup", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentClient) BatchUpdateMetadata(ctx context.Context, in *BatchUpdateMetadataRequest) (*BatchUpdateMetadataResponse, error) {
out := new(BatchUpdateMetadataResponse)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/BatchUpdateMetadata", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentClient) BatchCreateLogs(ctx context.Context, in *BatchCreateLogsRequest) (*BatchCreateLogsResponse, error) {
out := new(BatchCreateLogsResponse)
err := c.cc.Invoke(ctx, "/coder.agent.v2.Agent/BatchCreateLogs", drpcEncoding_File_agent_proto_agent_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentClient) StreamDERPMaps(ctx context.Context, in *proto1.StreamDERPMapsRequest) (DRPCAgent_StreamDERPMapsClient, error) {
stream, err := c.cc.NewStream(ctx, "/coder.agent.v2.Agent/StreamDERPMaps", drpcEncoding_File_agent_proto_agent_proto{})
if err != nil {
return nil, err
}
x := &drpcAgent_StreamDERPMapsClient{stream}
if err := x.MsgSend(in, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return nil, err
}
if err := x.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type DRPCAgent_StreamDERPMapsClient interface {
drpc.Stream
Recv() (*proto1.DERPMap, error)
}
type drpcAgent_StreamDERPMapsClient struct {
drpc.Stream
}
func (x *drpcAgent_StreamDERPMapsClient) GetStream() drpc.Stream {
return x.Stream
}
func (x *drpcAgent_StreamDERPMapsClient) Recv() (*proto1.DERPMap, error) {
m := new(proto1.DERPMap)
if err := x.MsgRecv(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return nil, err
}
return m, nil
}
func (x *drpcAgent_StreamDERPMapsClient) RecvMsg(m *proto1.DERPMap) error {
return x.MsgRecv(m, drpcEncoding_File_agent_proto_agent_proto{})
}
func (c *drpcAgentClient) CoordinateTailnet(ctx context.Context) (DRPCAgent_CoordinateTailnetClient, error) {
stream, err := c.cc.NewStream(ctx, "/coder.agent.v2.Agent/CoordinateTailnet", drpcEncoding_File_agent_proto_agent_proto{})
if err != nil {
return nil, err
}
x := &drpcAgent_CoordinateTailnetClient{stream}
return x, nil
}
type DRPCAgent_CoordinateTailnetClient interface {
drpc.Stream
Send(*proto1.CoordinateRequest) error
Recv() (*proto1.CoordinateResponse, error)
}
type drpcAgent_CoordinateTailnetClient struct {
drpc.Stream
}
func (x *drpcAgent_CoordinateTailnetClient) GetStream() drpc.Stream {
return x.Stream
}
func (x *drpcAgent_CoordinateTailnetClient) Send(m *proto1.CoordinateRequest) error {
return x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{})
}
func (x *drpcAgent_CoordinateTailnetClient) Recv() (*proto1.CoordinateResponse, error) {
m := new(proto1.CoordinateResponse)
if err := x.MsgRecv(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return nil, err
}
return m, nil
}
func (x *drpcAgent_CoordinateTailnetClient) RecvMsg(m *proto1.CoordinateResponse) error {
return x.MsgRecv(m, drpcEncoding_File_agent_proto_agent_proto{})
}
type DRPCAgentServer interface {
GetManifest(context.Context, *GetManifestRequest) (*Manifest, error)
GetServiceBanner(context.Context, *GetServiceBannerRequest) (*ServiceBanner, error)
UpdateStats(context.Context, *UpdateStatsRequest) (*UpdateStatsResponse, error)
UpdateLifecycle(context.Context, *UpdateLifecycleRequest) (*Lifecycle, error)
BatchUpdateAppHealths(context.Context, *BatchUpdateAppHealthRequest) (*BatchUpdateAppHealthResponse, error)
UpdateStartup(context.Context, *UpdateStartupRequest) (*Startup, error)
BatchUpdateMetadata(context.Context, *BatchUpdateMetadataRequest) (*BatchUpdateMetadataResponse, error)
BatchCreateLogs(context.Context, *BatchCreateLogsRequest) (*BatchCreateLogsResponse, error)
StreamDERPMaps(*proto1.StreamDERPMapsRequest, DRPCAgent_StreamDERPMapsStream) error
CoordinateTailnet(DRPCAgent_CoordinateTailnetStream) error
}
type DRPCAgentUnimplementedServer struct{}
func (s *DRPCAgentUnimplementedServer) GetManifest(context.Context, *GetManifestRequest) (*Manifest, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) GetServiceBanner(context.Context, *GetServiceBannerRequest) (*ServiceBanner, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) UpdateStats(context.Context, *UpdateStatsRequest) (*UpdateStatsResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) UpdateLifecycle(context.Context, *UpdateLifecycleRequest) (*Lifecycle, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) BatchUpdateAppHealths(context.Context, *BatchUpdateAppHealthRequest) (*BatchUpdateAppHealthResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) UpdateStartup(context.Context, *UpdateStartupRequest) (*Startup, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) BatchUpdateMetadata(context.Context, *BatchUpdateMetadataRequest) (*BatchUpdateMetadataResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) BatchCreateLogs(context.Context, *BatchCreateLogsRequest) (*BatchCreateLogsResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) StreamDERPMaps(*proto1.StreamDERPMapsRequest, DRPCAgent_StreamDERPMapsStream) error {
return drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentUnimplementedServer) CoordinateTailnet(DRPCAgent_CoordinateTailnetStream) error {
return drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
type DRPCAgentDescription struct{}
func (DRPCAgentDescription) NumMethods() int { return 10 }
func (DRPCAgentDescription) Method(n int) (string, drpc.Encoding, drpc.Receiver, interface{}, bool) {
switch n {
case 0:
return "/coder.agent.v2.Agent/GetManifest", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
GetManifest(
ctx,
in1.(*GetManifestRequest),
)
}, DRPCAgentServer.GetManifest, true
case 1:
return "/coder.agent.v2.Agent/GetServiceBanner", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
GetServiceBanner(
ctx,
in1.(*GetServiceBannerRequest),
)
}, DRPCAgentServer.GetServiceBanner, true
case 2:
return "/coder.agent.v2.Agent/UpdateStats", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
UpdateStats(
ctx,
in1.(*UpdateStatsRequest),
)
}, DRPCAgentServer.UpdateStats, true
case 3:
return "/coder.agent.v2.Agent/UpdateLifecycle", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
UpdateLifecycle(
ctx,
in1.(*UpdateLifecycleRequest),
)
}, DRPCAgentServer.UpdateLifecycle, true
case 4:
return "/coder.agent.v2.Agent/BatchUpdateAppHealths", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
BatchUpdateAppHealths(
ctx,
in1.(*BatchUpdateAppHealthRequest),
)
}, DRPCAgentServer.BatchUpdateAppHealths, true
case 5:
return "/coder.agent.v2.Agent/UpdateStartup", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
UpdateStartup(
ctx,
in1.(*UpdateStartupRequest),
)
}, DRPCAgentServer.UpdateStartup, true
case 6:
return "/coder.agent.v2.Agent/BatchUpdateMetadata", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
BatchUpdateMetadata(
ctx,
in1.(*BatchUpdateMetadataRequest),
)
}, DRPCAgentServer.BatchUpdateMetadata, true
case 7:
return "/coder.agent.v2.Agent/BatchCreateLogs", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentServer).
BatchCreateLogs(
ctx,
in1.(*BatchCreateLogsRequest),
)
}, DRPCAgentServer.BatchCreateLogs, true
case 8:
return "/coder.agent.v2.Agent/StreamDERPMaps", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return nil, srv.(DRPCAgentServer).
StreamDERPMaps(
in1.(*proto1.StreamDERPMapsRequest),
&drpcAgent_StreamDERPMapsStream{in2.(drpc.Stream)},
)
}, DRPCAgentServer.StreamDERPMaps, true
case 9:
return "/coder.agent.v2.Agent/CoordinateTailnet", drpcEncoding_File_agent_proto_agent_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return nil, srv.(DRPCAgentServer).
CoordinateTailnet(
&drpcAgent_CoordinateTailnetStream{in1.(drpc.Stream)},
)
}, DRPCAgentServer.CoordinateTailnet, true
default:
return "", nil, nil, nil, false
}
}
func DRPCRegisterAgent(mux drpc.Mux, impl DRPCAgentServer) error {
return mux.Register(impl, DRPCAgentDescription{})
}
type DRPCAgent_GetManifestStream interface {
drpc.Stream
SendAndClose(*Manifest) error
}
type drpcAgent_GetManifestStream struct {
drpc.Stream
}
func (x *drpcAgent_GetManifestStream) SendAndClose(m *Manifest) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgent_GetServiceBannerStream interface {
drpc.Stream
SendAndClose(*ServiceBanner) error
}
type drpcAgent_GetServiceBannerStream struct {
drpc.Stream
}
func (x *drpcAgent_GetServiceBannerStream) SendAndClose(m *ServiceBanner) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgent_UpdateStatsStream interface {
drpc.Stream
SendAndClose(*UpdateStatsResponse) error
}
type drpcAgent_UpdateStatsStream struct {
drpc.Stream
}
func (x *drpcAgent_UpdateStatsStream) SendAndClose(m *UpdateStatsResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgent_UpdateLifecycleStream interface {
drpc.Stream
SendAndClose(*Lifecycle) error
}
type drpcAgent_UpdateLifecycleStream struct {
drpc.Stream
}
func (x *drpcAgent_UpdateLifecycleStream) SendAndClose(m *Lifecycle) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgent_BatchUpdateAppHealthsStream interface {
drpc.Stream
SendAndClose(*BatchUpdateAppHealthResponse) error
}
type drpcAgent_BatchUpdateAppHealthsStream struct {
drpc.Stream
}
func (x *drpcAgent_BatchUpdateAppHealthsStream) SendAndClose(m *BatchUpdateAppHealthResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgent_UpdateStartupStream interface {
drpc.Stream
SendAndClose(*Startup) error
}
type drpcAgent_UpdateStartupStream struct {
drpc.Stream
}
func (x *drpcAgent_UpdateStartupStream) SendAndClose(m *Startup) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgent_BatchUpdateMetadataStream interface {
drpc.Stream
SendAndClose(*BatchUpdateMetadataResponse) error
}
type drpcAgent_BatchUpdateMetadataStream struct {
drpc.Stream
}
func (x *drpcAgent_BatchUpdateMetadataStream) SendAndClose(m *BatchUpdateMetadataResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgent_BatchCreateLogsStream interface {
drpc.Stream
SendAndClose(*BatchCreateLogsResponse) error
}
type drpcAgent_BatchCreateLogsStream struct {
drpc.Stream
}
func (x *drpcAgent_BatchCreateLogsStream) SendAndClose(m *BatchCreateLogsResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgent_StreamDERPMapsStream interface {
drpc.Stream
Send(*proto1.DERPMap) error
}
type drpcAgent_StreamDERPMapsStream struct {
drpc.Stream
}
func (x *drpcAgent_StreamDERPMapsStream) Send(m *proto1.DERPMap) error {
return x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{})
}
type DRPCAgent_CoordinateTailnetStream interface {
drpc.Stream
Send(*proto1.CoordinateResponse) error
Recv() (*proto1.CoordinateRequest, error)
}
type drpcAgent_CoordinateTailnetStream struct {
drpc.Stream
}
func (x *drpcAgent_CoordinateTailnetStream) Send(m *proto1.CoordinateResponse) error {
return x.MsgSend(m, drpcEncoding_File_agent_proto_agent_proto{})
}
func (x *drpcAgent_CoordinateTailnetStream) Recv() (*proto1.CoordinateRequest, error) {
m := new(proto1.CoordinateRequest)
if err := x.MsgRecv(m, drpcEncoding_File_agent_proto_agent_proto{}); err != nil {
return nil, err
}
return m, nil
}
func (x *drpcAgent_CoordinateTailnetStream) RecvMsg(m *proto1.CoordinateRequest) error {
return x.MsgRecv(m, drpcEncoding_File_agent_proto_agent_proto{})
}
+106
@@ -0,0 +1,106 @@
package proto
import (
"strings"
"github.com/google/uuid"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/codersdk"
)
func SDKAgentMetadataDescriptionsFromProto(descriptions []*WorkspaceAgentMetadata_Description) []codersdk.WorkspaceAgentMetadataDescription {
ret := make([]codersdk.WorkspaceAgentMetadataDescription, len(descriptions))
for i, description := range descriptions {
ret[i] = SDKAgentMetadataDescriptionFromProto(description)
}
return ret
}
func SDKAgentMetadataDescriptionFromProto(description *WorkspaceAgentMetadata_Description) codersdk.WorkspaceAgentMetadataDescription {
return codersdk.WorkspaceAgentMetadataDescription{
DisplayName: description.DisplayName,
Key: description.Key,
Script: description.Script,
Interval: int64(description.Interval.AsDuration()),
Timeout: int64(description.Timeout.AsDuration()),
}
}
func SDKAgentScriptsFromProto(protoScripts []*WorkspaceAgentScript) ([]codersdk.WorkspaceAgentScript, error) {
ret := make([]codersdk.WorkspaceAgentScript, len(protoScripts))
for i, protoScript := range protoScripts {
app, err := SDKAgentScriptFromProto(protoScript)
if err != nil {
return nil, xerrors.Errorf("parse script %v: %w", i, err)
}
ret[i] = app
}
return ret, nil
}
func SDKAgentScriptFromProto(protoScript *WorkspaceAgentScript) (codersdk.WorkspaceAgentScript, error) {
id, err := uuid.FromBytes(protoScript.LogSourceId)
if err != nil {
return codersdk.WorkspaceAgentScript{}, xerrors.Errorf("parse id: %w", err)
}
return codersdk.WorkspaceAgentScript{
LogSourceID: id,
LogPath: protoScript.LogPath,
Script: protoScript.Script,
Cron: protoScript.Cron,
RunOnStart: protoScript.RunOnStart,
RunOnStop: protoScript.RunOnStop,
StartBlocksLogin: protoScript.StartBlocksLogin,
Timeout: protoScript.Timeout.AsDuration(),
}, nil
}
func SDKAppsFromProto(protoApps []*WorkspaceApp) ([]codersdk.WorkspaceApp, error) {
ret := make([]codersdk.WorkspaceApp, len(protoApps))
for i, protoApp := range protoApps {
app, err := SDKAppFromProto(protoApp)
if err != nil {
return nil, xerrors.Errorf("parse app %v (%q): %w", i, protoApp.Slug, err)
}
ret[i] = app
}
return ret, nil
}
func SDKAppFromProto(protoApp *WorkspaceApp) (codersdk.WorkspaceApp, error) {
id, err := uuid.FromBytes(protoApp.Id)
if err != nil {
return codersdk.WorkspaceApp{}, xerrors.Errorf("parse id: %w", err)
}
var sharingLevel codersdk.WorkspaceAppSharingLevel = codersdk.WorkspaceAppSharingLevel(strings.ToLower(protoApp.SharingLevel.String()))
if _, ok := codersdk.MapWorkspaceAppSharingLevels[sharingLevel]; !ok {
return codersdk.WorkspaceApp{}, xerrors.Errorf("unknown app sharing level: %v (%q)", protoApp.SharingLevel, protoApp.SharingLevel.String())
}
var health codersdk.WorkspaceAppHealth = codersdk.WorkspaceAppHealth(strings.ToLower(protoApp.Health.String()))
if _, ok := codersdk.MapWorkspaceAppHealths[health]; !ok {
return codersdk.WorkspaceApp{}, xerrors.Errorf("unknown app health: %v (%q)", protoApp.Health, protoApp.Health.String())
}
return codersdk.WorkspaceApp{
ID: id,
URL: protoApp.Url,
External: protoApp.External,
Slug: protoApp.Slug,
DisplayName: protoApp.DisplayName,
Command: protoApp.Command,
Icon: protoApp.Icon,
Subdomain: protoApp.Subdomain,
SubdomainName: protoApp.SubdomainName,
SharingLevel: sharingLevel,
Healthcheck: codersdk.Healthcheck{
URL: protoApp.Healthcheck.Url,
Interval: int32(protoApp.Healthcheck.Interval.AsDuration().Seconds()),
Threshold: protoApp.Healthcheck.Threshold,
},
Health: health,
}, nil
}
+1 -1
@@ -196,8 +196,8 @@ func (s *ptyState) waitForStateOrContext(ctx context.Context, state State) (Stat
// until EOF or an error writing to ptty or reading from conn.
func readConnLoop(ctx context.Context, conn net.Conn, ptty pty.PTYCmd, metrics *prometheus.CounterVec, logger slog.Logger) {
decoder := json.NewDecoder(conn)
- var req codersdk.ReconnectingPTYRequest
for {
+ var req codersdk.ReconnectingPTYRequest
err := decoder.Decode(&req)
if xerrors.Is(err, io.EOF) {
return
+4
@@ -13,6 +13,10 @@ import (
func Get(username string) (string, error) {
// This command will output "UserShell: /bin/zsh" if successful, we
// can ignore the error since we have fallback behavior.
if !filepath.IsLocal(username) {
return "", xerrors.Errorf("username is nonlocal path: %s", username)
}
//nolint: gosec // input checked above
out, _ := exec.Command("dscl", ".", "-read", filepath.Join("/Users", username), "UserShell").Output()
s, ok := strings.CutPrefix(string(out), "UserShell: ")
if ok {
+6 -6
@@ -8,7 +8,6 @@ import (
"net/http/pprof"
"net/url"
"os"
"os/signal"
"path/filepath"
"runtime"
"strconv"
@@ -117,7 +116,7 @@ func (r *RootCmd) workspaceAgent() *clibase.Cmd {
defer logWriter.Close()
sinks = append(sinks, sloghuman.Sink(logWriter))
- logger := slog.Make(sinks...).Leveled(slog.LevelDebug)
+ logger := inv.Logger.AppendSinks(sinks...).Leveled(slog.LevelDebug)
logger.Info(ctx, "spawning reaper process")
// Do not start a reaper on the child process. It's important
@@ -144,7 +143,7 @@ func (r *RootCmd) workspaceAgent() *clibase.Cmd {
// Note that we don't want to handle these signals in the
// process that runs as PID 1, that's why we do this after
// the reaper forked.
- ctx, stopNotify := signal.NotifyContext(ctx, InterruptSignals...)
+ ctx, stopNotify := inv.SignalNotifyContext(ctx, InterruptSignals...)
defer stopNotify()
// DumpHandler does signal handling, so we call it after the
@@ -154,13 +153,14 @@ func (r *RootCmd) workspaceAgent() *clibase.Cmd {
logWriter := &lumberjackWriteCloseFixer{w: &lumberjack.Logger{
Filename: filepath.Join(logDir, "coder-agent.log"),
MaxSize: 5, // MB
- // Without this, rotated logs will never be deleted.
- MaxBackups: 1,
+ // Per customer incident on November 17th, 2023, it's helpful
+ // to have the logs of the last few restarts to debug a failing agent.
+ MaxBackups: 10,
}}
defer logWriter.Close()
sinks = append(sinks, sloghuman.Sink(logWriter))
- logger := slog.Make(sinks...).Leveled(slog.LevelDebug)
+ logger := inv.Logger.AppendSinks(sinks...).Leveled(slog.LevelDebug)
version := buildinfo.Version()
logger.Info(ctx, "agent is starting now",
+63 -125
@@ -16,10 +16,11 @@ import (
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/provisioner/echo"
"github.com/coder/coder/v2/provisionersdk/proto"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
func TestWorkspaceAgent(t *testing.T) {
@@ -28,83 +29,62 @@ func TestWorkspaceAgent(t *testing.T) {
t.Run("LogDirectory", func(t *testing.T) {
t.Parallel()
authToken := uuid.NewString()
client := coderdtest.New(t, &coderdtest.Options{
IncludeProvisionerDaemon: true,
})
client, db := coderdtest.NewWithDatabase(t, nil)
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: echo.ProvisionApplyWithAgent(authToken),
})
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
workspace := coderdtest.CreateWorkspace(t, client, user.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
r := dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).
WithAgent().
Do()
logDir := t.TempDir()
inv, _ := clitest.New(t,
"agent",
"--auth", "token",
"--agent-token", authToken,
"--agent-token", r.AgentToken,
"--agent-url", client.URL.String(),
"--log-dir", logDir,
)
pty := ptytest.New(t).Attach(inv)
clitest.Start(t, inv)
ctx := inv.Context()
pty.ExpectMatchContext(ctx, "agent is starting now")
coderdtest.AwaitWorkspaceAgents(t, client, workspace.ID)
coderdtest.AwaitWorkspaceAgents(t, client, r.Workspace.ID)
info, err := os.Stat(filepath.Join(logDir, "coder-agent.log"))
require.NoError(t, err)
require.Greater(t, info.Size(), int64(0))
require.Eventually(t, func() bool {
info, err := os.Stat(filepath.Join(logDir, "coder-agent.log"))
if err != nil {
return false
}
return info.Size() > 0
}, testutil.WaitLong, testutil.IntervalMedium)
})
t.Run("Azure", func(t *testing.T) {
t.Parallel()
instanceID := "instanceidentifier"
certificates, metadataClient := coderdtest.NewAzureInstanceIdentity(t, instanceID)
client := coderdtest.New(t, &coderdtest.Options{
AzureCertificates: certificates,
IncludeProvisionerDaemon: true,
client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{
AzureCertificates: certificates,
})
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: []*proto.Response{{
Type: &proto.Response_Apply{
Apply: &proto.ApplyComplete{
Resources: []*proto.Resource{{
Name: "somename",
Type: "someinstance",
Agents: []*proto.Agent{{
Auth: &proto.Agent_InstanceId{
InstanceId: instanceID,
},
}},
}},
},
},
}},
})
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
workspace := coderdtest.CreateWorkspace(t, client, user.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
r := dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
agents[0].Auth = &proto.Agent_InstanceId{InstanceId: instanceID}
return agents
}).Do()
inv, _ := clitest.New(t, "agent", "--auth", "azure-instance-identity", "--agent-url", client.URL.String())
inv = inv.WithContext(
//nolint:revive,staticcheck
context.WithValue(inv.Context(), "azure-client", metadataClient),
)
ctx := inv.Context()
clitest.Start(t, inv)
coderdtest.AwaitWorkspaceAgents(t, client, workspace.ID)
workspace, err := client.Workspace(ctx, workspace.ID)
coderdtest.AwaitWorkspaceAgents(t, client, r.Workspace.ID)
workspace, err := client.Workspace(ctx, r.Workspace.ID)
require.NoError(t, err)
resources := workspace.LatestBuild.Resources
if assert.NotEmpty(t, workspace.LatestBuild.Resources) && assert.NotEmpty(t, resources[0].Agents) {
@@ -120,43 +100,28 @@ func TestWorkspaceAgent(t *testing.T) {
t.Parallel()
instanceID := "instanceidentifier"
certificates, metadataClient := coderdtest.NewAWSInstanceIdentity(t, instanceID)
client := coderdtest.New(t, &coderdtest.Options{
AWSCertificates: certificates,
IncludeProvisionerDaemon: true,
client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{
AWSCertificates: certificates,
})
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: []*proto.Response{{
Type: &proto.Response_Apply{
Apply: &proto.ApplyComplete{
Resources: []*proto.Resource{{
Name: "somename",
Type: "someinstance",
Agents: []*proto.Agent{{
Auth: &proto.Agent_InstanceId{
InstanceId: instanceID,
},
}},
}},
},
},
}},
})
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
workspace := coderdtest.CreateWorkspace(t, client, user.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
r := dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
agents[0].Auth = &proto.Agent_InstanceId{InstanceId: instanceID}
return agents
}).Do()
inv, _ := clitest.New(t, "agent", "--auth", "aws-instance-identity", "--agent-url", client.URL.String())
inv = inv.WithContext(
//nolint:revive,staticcheck
context.WithValue(inv.Context(), "aws-client", metadataClient),
)
clitest.Start(t, inv)
ctx := inv.Context()
coderdtest.AwaitWorkspaceAgents(t, client, workspace.ID)
workspace, err := client.Workspace(ctx, workspace.ID)
coderdtest.AwaitWorkspaceAgents(t, client, r.Workspace.ID)
workspace, err := client.Workspace(ctx, r.Workspace.ID)
require.NoError(t, err)
resources := workspace.LatestBuild.Resources
if assert.NotEmpty(t, resources) && assert.NotEmpty(t, resources[0].Agents) {
@@ -172,38 +137,22 @@ func TestWorkspaceAgent(t *testing.T) {
t.Parallel()
instanceID := "instanceidentifier"
validator, metadataClient := coderdtest.NewGoogleInstanceIdentity(t, instanceID, false)
client := coderdtest.New(t, &coderdtest.Options{
GoogleTokenValidator: validator,
IncludeProvisionerDaemon: true,
client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{
GoogleTokenValidator: validator,
})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: []*proto.Response{{
Type: &proto.Response_Apply{
Apply: &proto.ApplyComplete{
Resources: []*proto.Resource{{
Name: "somename",
Type: "someinstance",
Agents: []*proto.Agent{{
Auth: &proto.Agent_InstanceId{
InstanceId: instanceID,
},
}},
}},
},
},
}},
})
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
workspace := coderdtest.CreateWorkspace(t, member, owner.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
member, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
r := dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: owner.OrganizationID,
OwnerID: memberUser.ID,
}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
agents[0].Auth = &proto.Agent_InstanceId{InstanceId: instanceID}
return agents
}).Do()
inv, cfg := clitest.New(t, "agent", "--auth", "google-instance-identity", "--agent-url", client.URL.String())
ptytest.New(t).Attach(inv)
clitest.SetupConfig(t, member, cfg)
clitest.Start(t,
inv.WithContext(
//nolint:revive,staticcheck
@@ -212,9 +161,8 @@ func TestWorkspaceAgent(t *testing.T) {
)
ctx := inv.Context()
coderdtest.AwaitWorkspaceAgents(t, client, workspace.ID)
workspace, err := client.Workspace(ctx, workspace.ID)
coderdtest.AwaitWorkspaceAgents(t, client, r.Workspace.ID)
workspace, err := client.Workspace(ctx, r.Workspace.ID)
require.NoError(t, err)
resources := workspace.LatestBuild.Resources
if assert.NotEmpty(t, resources) && assert.NotEmpty(t, resources[0].Agents) {
@@ -244,37 +192,27 @@ func TestWorkspaceAgent(t *testing.T) {
t.Run("PostStartup", func(t *testing.T) {
t.Parallel()
authToken := uuid.NewString()
client := coderdtest.New(t, &coderdtest.Options{
IncludeProvisionerDaemon: true,
})
client, db := coderdtest.NewWithDatabase(t, nil)
user := coderdtest.CreateFirstUser(t, client)
version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: echo.ProvisionApplyWithAgent(authToken),
})
template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
workspace := coderdtest.CreateWorkspace(t, client, user.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
r := dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
logDir := t.TempDir()
inv, _ := clitest.New(t,
"agent",
"--auth", "token",
"--agent-token", authToken,
"--agent-token", r.AgentToken,
"--agent-url", client.URL.String(),
"--log-dir", logDir,
)
// Set the subsystems for the agent.
inv.Environ.Set(agent.EnvAgentSubsystem, fmt.Sprintf("%s,%s", codersdk.AgentSubsystemExectrace, codersdk.AgentSubsystemEnvbox))
pty := ptytest.New(t).Attach(inv)
clitest.Start(t, inv)
pty.ExpectMatchContext(inv.Context(), "agent is starting now")
resources := coderdtest.AwaitWorkspaceAgents(t, client, workspace.ID)
resources := coderdtest.AwaitWorkspaceAgents(t, client, r.Workspace.ID)
require.Len(t, resources, 1)
require.Len(t, resources[0].Agents, 1)
require.Len(t, resources[0].Agents[0].Subsystems, 2)
+58
@@ -0,0 +1,58 @@
package cli
import (
"fmt"
"strings"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/clibase"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/codersdk"
)
func (r *RootCmd) autoupdate() *clibase.Cmd {
client := new(codersdk.Client)
cmd := &clibase.Cmd{
Annotations: workspaceCommand,
Use: "autoupdate <workspace> <always|never>",
Short: "Toggle auto-update policy for a workspace",
Middleware: clibase.Chain(
clibase.RequireNArgs(2),
r.InitClient(client),
),
Handler: func(inv *clibase.Invocation) error {
policy := strings.ToLower(inv.Args[1])
err := validateAutoUpdatePolicy(policy)
if err != nil {
return xerrors.Errorf("validate policy: %w", err)
}
workspace, err := namedWorkspace(inv.Context(), client, inv.Args[0])
if err != nil {
return xerrors.Errorf("get workspace: %w", err)
}
err = client.UpdateWorkspaceAutomaticUpdates(inv.Context(), workspace.ID, codersdk.UpdateWorkspaceAutomaticUpdatesRequest{
AutomaticUpdates: codersdk.AutomaticUpdates(policy),
})
if err != nil {
return xerrors.Errorf("update workspace automatic updates policy: %w", err)
}
_, _ = fmt.Fprintf(inv.Stdout, "Updated workspace %q auto-update policy to %q\n", workspace.Name, policy)
return nil
},
}
cmd.Options = append(cmd.Options, cliui.SkipPromptOption())
return cmd
}
func validateAutoUpdatePolicy(arg string) error {
switch codersdk.AutomaticUpdates(arg) {
case codersdk.AutomaticUpdatesAlways, codersdk.AutomaticUpdatesNever:
return nil
default:
return xerrors.Errorf("invalid option %q must be either of %q or %q", arg, codersdk.AutomaticUpdatesAlways, codersdk.AutomaticUpdatesNever)
}
}
+79
@@ -0,0 +1,79 @@
package cli_test
import (
"bytes"
"fmt"
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk"
)
func TestAutoUpdate(t *testing.T) {
t.Parallel()
t.Run("OK", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, member, owner.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
require.Equal(t, codersdk.AutomaticUpdatesNever, workspace.AutomaticUpdates)
expectedPolicy := codersdk.AutomaticUpdatesAlways
inv, root := clitest.New(t, "autoupdate", workspace.Name, string(expectedPolicy))
clitest.SetupConfig(t, member, root)
var buf bytes.Buffer
inv.Stdout = &buf
err := inv.Run()
require.NoError(t, err)
require.Contains(t, buf.String(), fmt.Sprintf("Updated workspace %q auto-update policy to %q", workspace.Name, expectedPolicy))
workspace = coderdtest.MustWorkspace(t, client, workspace.ID)
require.Equal(t, expectedPolicy, workspace.AutomaticUpdates)
})
t.Run("InvalidArgs", func(t *testing.T) {
type testcase struct {
Name string
Args []string
ErrorContains string
}
cases := []testcase{
{
Name: "NoPolicy",
Args: []string{"autoupdate", "ws"},
ErrorContains: "wanted 2 args but got 1",
},
{
Name: "InvalidPolicy",
Args: []string{"autoupdate", "ws", "sometimes"},
ErrorContains: `invalid option "sometimes" must be either of`,
},
}
for _, c := range cases {
c := c
t.Run(c.Name, func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
_ = coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, c.Args...)
clitest.SetupConfig(t, client, root)
err := inv.Run()
require.Error(t, err)
require.Contains(t, err.Error(), c.ErrorContains)
})
}
})
}
+40
@@ -7,9 +7,13 @@ import (
"fmt"
"io"
"os"
"os/signal"
"strings"
"testing"
"unicode"
"cdr.dev/slog"
"github.com/spf13/pflag"
"golang.org/x/exp/slices"
"golang.org/x/xerrors"
@@ -168,6 +172,7 @@ func (c *Cmd) Invoke(args ...string) *Invocation {
Stdout: io.Discard,
Stderr: io.Discard,
Stdin: strings.NewReader(""),
Logger: slog.Make(),
}
}
@@ -183,6 +188,11 @@ type Invocation struct {
Stdout io.Writer
Stderr io.Writer
Stdin io.Reader
Logger slog.Logger
Net Net
// testing
signalNotifyContext func(parent context.Context, signals ...os.Signal) (ctx context.Context, stop context.CancelFunc)
}
// WithOS returns the invocation as a main package, filling in the invocation's unset
@@ -194,6 +204,36 @@ func (inv *Invocation) WithOS() *Invocation {
i.Stdin = os.Stdin
i.Args = os.Args[1:]
i.Environ = ParseEnviron(os.Environ(), "")
i.Net = osNet{}
})
}
// WithTestSignalNotifyContext allows overriding the default implementation of SignalNotifyContext.
// This should only be used in testing.
func (inv *Invocation) WithTestSignalNotifyContext(
_ testing.TB, // ensure we only call this from tests
f func(parent context.Context, signals ...os.Signal) (ctx context.Context, stop context.CancelFunc),
) *Invocation {
return inv.with(func(i *Invocation) {
i.signalNotifyContext = f
})
}
// SignalNotifyContext is equivalent to signal.NotifyContext, but supports being overridden in
// tests.
func (inv *Invocation) SignalNotifyContext(parent context.Context, signals ...os.Signal) (ctx context.Context, stop context.CancelFunc) {
if inv.signalNotifyContext == nil {
return signal.NotifyContext(parent, signals...)
}
return inv.signalNotifyContext(parent, signals...)
}
func (inv *Invocation) WithTestParsedFlags(
_ testing.TB, // ensure we only call this from tests
parsedFlags *pflag.FlagSet,
) *Invocation {
return inv.with(func(i *Invocation) {
i.parsedFlags = parsedFlags
})
}
+50
@@ -0,0 +1,50 @@
package clibase
import (
"net"
"strconv"
"github.com/pion/udp"
"golang.org/x/xerrors"
)
// Net abstracts CLI commands interacting with the operating system networking.
//
// At present, it covers opening local listening sockets, since doing this
// flake-free in testing is a challenge: it's hard to pick a port we know
// a priori will be free.
type Net interface {
// Listen has the same semantics as `net.Listen` but also supports `udp`
Listen(network, address string) (net.Listener, error)
}
// osNet is an implementation that calls the real OS for networking.
type osNet struct{}
func (osNet) Listen(network, address string) (net.Listener, error) {
switch network {
case "tcp", "tcp4", "tcp6", "unix", "unixpacket":
return net.Listen(network, address)
case "udp":
host, port, err := net.SplitHostPort(address)
if err != nil {
return nil, xerrors.Errorf("split %q: %w", address, err)
}
var portInt int
portInt, err = strconv.Atoi(port)
if err != nil {
return nil, xerrors.Errorf("parse port %v from %q as int: %w", port, address, err)
}
// Use pion here so that we get a stream-style net.Conn listener, instead
// of a packet-oriented connection that can read and write to multiple
// addresses.
return udp.Listen(network, &net.UDPAddr{
IP: net.ParseIP(host),
Port: portInt,
})
default:
return nil, xerrors.Errorf("unknown listen network %q", network)
}
}
+211
@@ -0,0 +1,211 @@
package clilog
import (
"context"
"fmt"
"io"
"os"
"regexp"
"strings"
"golang.org/x/xerrors"
"cdr.dev/slog"
"cdr.dev/slog/sloggers/sloghuman"
"cdr.dev/slog/sloggers/slogjson"
"cdr.dev/slog/sloggers/slogstackdriver"
"github.com/coder/coder/v2/cli/clibase"
"github.com/coder/coder/v2/coderd/tracing"
"github.com/coder/coder/v2/codersdk"
)
type (
Option func(*Builder)
Builder struct {
Filter []string
Human string
JSON string
Stackdriver string
Trace bool
Verbose bool
}
)
func New(opts ...Option) *Builder {
b := &Builder{}
for _, opt := range opts {
opt(b)
}
return b
}
func WithFilter(filters ...string) Option {
return func(b *Builder) {
b.Filter = filters
}
}
func WithHuman(loc string) Option {
return func(b *Builder) {
b.Human = loc
}
}
func WithJSON(loc string) Option {
return func(b *Builder) {
b.JSON = loc
}
}
func WithStackdriver(loc string) Option {
return func(b *Builder) {
b.Stackdriver = loc
}
}
func WithTrace() Option {
return func(b *Builder) {
b.Trace = true
}
}
func WithVerbose() Option {
return func(b *Builder) {
b.Verbose = true
}
}
func FromDeploymentValues(vals *codersdk.DeploymentValues) Option {
return func(b *Builder) {
b.Filter = vals.Logging.Filter.Value()
b.Human = vals.Logging.Human.Value()
b.JSON = vals.Logging.JSON.Value()
b.Stackdriver = vals.Logging.Stackdriver.Value()
b.Trace = vals.Trace.Enable.Value()
b.Verbose = vals.Verbose.Value()
}
}
func (b *Builder) Build(inv *clibase.Invocation) (log slog.Logger, closeLog func(), err error) {
var (
sinks = []slog.Sink{}
closers = []func() error{}
)
defer func() {
if err != nil {
for _, closer := range closers {
_ = closer()
}
}
}()
noopClose := func() {}
addSinkIfProvided := func(sinkFn func(io.Writer) slog.Sink, loc string) error {
switch loc {
case "":
case "/dev/stdout":
sinks = append(sinks, sinkFn(inv.Stdout))
case "/dev/stderr":
sinks = append(sinks, sinkFn(inv.Stderr))
default:
fi, err := os.OpenFile(loc, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o644)
if err != nil {
return xerrors.Errorf("open log file %q: %w", loc, err)
}
closers = append(closers, fi.Close)
sinks = append(sinks, sinkFn(fi))
}
return nil
}
err = addSinkIfProvided(sloghuman.Sink, b.Human)
if err != nil {
return slog.Logger{}, noopClose, xerrors.Errorf("add human sink: %w", err)
}
err = addSinkIfProvided(slogjson.Sink, b.JSON)
if err != nil {
return slog.Logger{}, noopClose, xerrors.Errorf("add json sink: %w", err)
}
err = addSinkIfProvided(slogstackdriver.Sink, b.Stackdriver)
if err != nil {
return slog.Logger{}, noopClose, xerrors.Errorf("add stackdriver sink: %w", err)
}
if b.Trace {
sinks = append(sinks, tracing.SlogSink{})
}
// Users should log to the null device if they don't want logs.
if len(sinks) == 0 {
return slog.Logger{}, noopClose, xerrors.New("no loggers provided, use /dev/null to disable logging")
}
filter := &debugFilterSink{next: sinks}
err = filter.compile(b.Filter)
if err != nil {
return slog.Logger{}, noopClose, xerrors.Errorf("compile filters: %w", err)
}
level := slog.LevelInfo
// Debug logging is always enabled if a filter is present.
if b.Verbose || filter.re != nil {
level = slog.LevelDebug
}
return inv.Logger.AppendSinks(filter).Leveled(level), func() {
for _, closer := range closers {
_ = closer()
}
}, nil
}
var _ slog.Sink = &debugFilterSink{}
type debugFilterSink struct {
next []slog.Sink
re *regexp.Regexp
}
func (f *debugFilterSink) compile(res []string) error {
if len(res) == 0 {
return nil
}
var reb strings.Builder
for i, re := range res {
_, _ = fmt.Fprintf(&reb, "(%s)", re)
if i != len(res)-1 {
_, _ = reb.WriteRune('|')
}
}
re, err := regexp.Compile(reb.String())
if err != nil {
return xerrors.Errorf("compile regex: %w", err)
}
f.re = re
return nil
}
func (f *debugFilterSink) LogEntry(ctx context.Context, ent slog.SinkEntry) {
if ent.Level == slog.LevelDebug {
logName := strings.Join(ent.LoggerNames, ".")
if f.re != nil && !f.re.MatchString(logName) && !f.re.MatchString(ent.Message) {
return
}
}
for _, sink := range f.next {
sink.LogEntry(ctx, ent)
}
}
func (f *debugFilterSink) Sync() {
for _, sink := range f.next {
sink.Sync()
}
}
+243
@@ -0,0 +1,243 @@
package clilog_test
import (
"encoding/json"
"io/fs"
"os"
"path/filepath"
"strings"
"testing"
"github.com/coder/coder/v2/cli/clibase"
"github.com/coder/coder/v2/cli/clilog"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestBuilder(t *testing.T) {
t.Parallel()
t.Run("NoConfiguration", func(t *testing.T) {
t.Parallel()
cmd := &clibase.Cmd{
Use: "test",
Handler: testHandler(t),
}
err := cmd.Invoke().Run()
require.ErrorContains(t, err, "no loggers provided, use /dev/null to disable logging")
})
t.Run("Verbose", func(t *testing.T) {
t.Parallel()
tempFile := filepath.Join(t.TempDir(), "test.log")
cmd := &clibase.Cmd{
Use: "test",
Handler: testHandler(t,
clilog.WithHuman(tempFile),
clilog.WithVerbose(),
),
}
err := cmd.Invoke().Run()
require.NoError(t, err)
assertLogs(t, tempFile, debugLog, infoLog, warnLog, filterLog)
})
t.Run("WithFilter", func(t *testing.T) {
t.Parallel()
tempFile := filepath.Join(t.TempDir(), "test.log")
cmd := &clibase.Cmd{
Use: "test",
Handler: testHandler(t,
clilog.WithHuman(tempFile),
// clilog.WithVerbose(), // implicit
clilog.WithFilter("important debug message"),
),
}
err := cmd.Invoke().Run()
require.NoError(t, err)
assertLogs(t, tempFile, infoLog, warnLog, filterLog)
})
t.Run("WithHuman", func(t *testing.T) {
t.Parallel()
tempFile := filepath.Join(t.TempDir(), "test.log")
cmd := &clibase.Cmd{
Use: "test",
Handler: testHandler(t, clilog.WithHuman(tempFile)),
}
err := cmd.Invoke().Run()
require.NoError(t, err)
assertLogs(t, tempFile, infoLog, warnLog)
})
t.Run("WithJSON", func(t *testing.T) {
t.Parallel()
tempFile := filepath.Join(t.TempDir(), "test.log")
cmd := &clibase.Cmd{
Use: "test",
Handler: testHandler(t, clilog.WithJSON(tempFile), clilog.WithVerbose()),
}
err := cmd.Invoke().Run()
require.NoError(t, err)
assertLogsJSON(t, tempFile, debug, debugLog, info, infoLog, warn, warnLog, debug, filterLog)
})
t.Run("FromDeploymentValues", func(t *testing.T) {
t.Parallel()
t.Run("Defaults", func(t *testing.T) {
stdoutPath := filepath.Join(t.TempDir(), "stdout")
stderrPath := filepath.Join(t.TempDir(), "stderr")
stdout, err := os.OpenFile(stdoutPath, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o644)
require.NoError(t, err)
t.Cleanup(func() { _ = stdout.Close() })
stderr, err := os.OpenFile(stderrPath, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o644)
require.NoError(t, err)
t.Cleanup(func() { _ = stderr.Close() })
// Use the default deployment values.
dv := coderdtest.DeploymentValues(t)
cmd := &clibase.Cmd{
Use: "test",
Handler: testHandler(t, clilog.FromDeploymentValues(dv)),
}
inv := cmd.Invoke()
inv.Stdout = stdout
inv.Stderr = stderr
err = inv.Run()
require.NoError(t, err)
assertLogs(t, stdoutPath, "")
assertLogs(t, stderrPath, infoLog, warnLog)
})
t.Run("Override", func(t *testing.T) {
tempFile := filepath.Join(t.TempDir(), "test.log")
tempJSON := filepath.Join(t.TempDir(), "test.json")
dv := &codersdk.DeploymentValues{
Logging: codersdk.LoggingConfig{
Filter: []string{"foo", "baz"},
Human: clibase.String(tempFile),
JSON: clibase.String(tempJSON),
},
Verbose: true,
Trace: codersdk.TraceConfig{
Enable: true,
},
}
cmd := &clibase.Cmd{
Use: "test",
Handler: testHandler(t, clilog.FromDeploymentValues(dv)),
}
err := cmd.Invoke().Run()
require.NoError(t, err)
assertLogs(t, tempFile, infoLog, warnLog)
assertLogsJSON(t, tempJSON, info, infoLog, warn, warnLog)
})
})
t.Run("NotFound", func(t *testing.T) {
t.Parallel()
tempFile := filepath.Join(t.TempDir(), "doesnotexist", "test.log")
cmd := &clibase.Cmd{
Use: "test",
Handler: func(inv *clibase.Invocation) error {
logger, closeLog, err := clilog.New(
clilog.WithFilter("foo", "baz"),
clilog.WithHuman(tempFile),
clilog.WithVerbose(),
).Build(inv)
if err != nil {
return err
}
defer closeLog()
logger.Error(inv.Context(), "you will never see this")
return nil
},
}
err := cmd.Invoke().Run()
require.ErrorIs(t, err, fs.ErrNotExist)
})
}
var (
debug = "DEBUG"
info = "INFO"
warn = "WARN"
debugLog = "this is a debug message"
infoLog = "this is an info message"
warnLog = "this is a warning message"
filterLog = "this is an important debug message you want to see"
)
func testHandler(t testing.TB, opts ...clilog.Option) clibase.HandlerFunc {
t.Helper()
return func(inv *clibase.Invocation) error {
logger, closeLog, err := clilog.New(opts...).Build(inv)
if err != nil {
return err
}
defer closeLog()
logger.Debug(inv.Context(), debugLog)
logger.Info(inv.Context(), infoLog)
logger.Warn(inv.Context(), warnLog)
logger.Debug(inv.Context(), filterLog)
return nil
}
}
func assertLogs(t testing.TB, path string, expected ...string) {
t.Helper()
data, err := os.ReadFile(path)
require.NoError(t, err)
logs := strings.Split(strings.TrimSpace(string(data)), "\n")
if !assert.Len(t, logs, len(expected)) {
t.Logf(string(data))
t.FailNow()
}
for i, log := range logs {
require.Contains(t, log, expected[i])
}
}
func assertLogsJSON(t testing.TB, path string, levelExpected ...string) {
t.Helper()
data, err := os.ReadFile(path)
require.NoError(t, err)
if len(levelExpected)%2 != 0 {
t.Errorf("levelExpected must be a list of level-message pairs")
return
}
logs := strings.Split(strings.TrimSpace(string(data)), "\n")
if !assert.Len(t, logs, len(levelExpected)/2) {
t.Logf(string(data))
t.FailNow()
}
for i, log := range logs {
var entry struct {
Level string `json:"level"`
Message string `json:"msg"`
}
err := json.NewDecoder(strings.NewReader(log)).Decode(&entry)
require.NoError(t, err)
require.Equal(t, levelExpected[2*i], entry.Level)
require.Equal(t, levelExpected[2*i+1], entry.Message)
}
}
+2
@@ -0,0 +1,2 @@
// Package clilog provides a fluent API for configuring structured logging.
package clilog
+11
@@ -44,6 +44,13 @@ const (
cgroupV2MemoryStat = "/sys/fs/cgroup/memory.stat"
)
const (
// 9223372036854771712 is the highest positive signed 64-bit integer (2^63-1),
// rounded down to a multiple of 4096 (2^12), the most common page size on x86 systems.
// This is used by Docker to indicate no memory limit.
UnlimitedMemory int64 = 9223372036854771712
)
// ContainerCPU returns the CPU usage of the container cgroup.
// This is calculated as difference of two samples of the
// CPU usage of the container cgroup.
@@ -271,6 +278,10 @@ func (s *Statter) cGroupV1Memory(p Prefix) (*Result, error) {
// Nonetheless, if it is not, assume there is no limit set.
maxUsageBytes = -1
}
// Set to unlimited if we detect the unlimited docker value.
if maxUsageBytes == UnlimitedMemory {
maxUsageBytes = -1
}
// need a space after total_rss so we don't hit something else
usageBytes, err := readInt64(s.fs, cgroupV1MemoryUsageBytes)
+23
@@ -197,6 +197,18 @@ func TestStatter(t *testing.T) {
assert.Nil(t, mem.Total)
assert.Equal(t, "B", mem.Unit)
})
t.Run("ContainerMemory/NoLimit", func(t *testing.T) {
t.Parallel()
fs := initFS(t, fsContainerCgroupV1DockerNoMemoryLimit)
s, err := New(WithFS(fs), withNoWait)
require.NoError(t, err)
mem, err := s.ContainerMemory(PrefixDefault)
require.NoError(t, err)
require.NotNil(t, mem)
assert.Equal(t, 268435456.0, mem.Used)
assert.Nil(t, mem.Total)
assert.Equal(t, "B", mem.Unit)
})
})
t.Run("CGroupV2", func(t *testing.T) {
@@ -384,6 +396,17 @@ proc /proc/sys proc ro,nosuid,nodev,noexec,relatime 0 0`,
cgroupV1MemoryUsageBytes: "536870912",
cgroupV1MemoryStat: "total_inactive_file 268435456",
}
fsContainerCgroupV1DockerNoMemoryLimit = map[string]string{
procOneCgroup: "0::/docker/aa86ac98959eeedeae0ecb6e0c9ddd8ae8b97a9d0fdccccf7ea7a474f4e0bb1f",
procMounts: `overlay / overlay rw,relatime,lowerdir=/some/path:/some/path,upperdir=/some/path:/some/path,workdir=/some/path:/some/path 0 0
proc /proc/sys proc ro,nosuid,nodev,noexec,relatime 0 0`,
cgroupV1CPUAcctUsage: "0",
cgroupV1CFSQuotaUs: "-1",
cgroupV1CFSPeriodUs: "100000",
cgroupV1MemoryMaxUsageBytes: "9223372036854771712",
cgroupV1MemoryUsageBytes: "536870912",
cgroupV1MemoryStat: "total_inactive_file 268435456",
}
fsContainerCgroupV1AltPath = map[string]string{
procOneCgroup: "0::/docker/aa86ac98959eeedeae0ecb6e0c9ddd8ae8b97a9d0fdccccf7ea7a474f4e0bb1f",
procMounts: `overlay / overlay rw,relatime,lowerdir=/some/path:/some/path,upperdir=/some/path:/some/path,workdir=/some/path:/some/path 0 0
+6 -1
@@ -59,13 +59,18 @@ func NewWithCommand(
t testing.TB, cmd *clibase.Cmd, args ...string,
) (*clibase.Invocation, config.Root) {
configDir := config.Root(t.TempDir())
logger := slogtest.Make(t, nil)
// I would really like to fail tests on error logs, but realistically, turning this on by default
// in all our CLI tests is going to create a lot of flaky noise.
logger := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).
Leveled(slog.LevelDebug).
Named("cli")
i := &clibase.Invocation{
Command: cmd,
Args: append([]string{"--global-config", string(configDir)}, args...),
Stdin: io.LimitReader(nil, 0),
Stdout: (&logWriter{prefix: "stdout", log: logger}),
Stderr: (&logWriter{prefix: "stderr", log: logger}),
Logger: logger,
}
t.Logf("invoking command: %s %s", cmd.Name(), strings.Join(i.Args, " "))
+59
@@ -0,0 +1,59 @@
package clitest
import (
"context"
"os"
"sync"
"testing"
"github.com/stretchr/testify/assert"
)
type FakeSignalNotifier struct {
sync.Mutex
t *testing.T
ctx context.Context
cancel context.CancelFunc
signals []os.Signal
stopped bool
}
func NewFakeSignalNotifier(t *testing.T) *FakeSignalNotifier {
fsn := &FakeSignalNotifier{t: t}
return fsn
}
func (f *FakeSignalNotifier) Stop() {
f.Lock()
defer f.Unlock()
f.stopped = true
if f.cancel == nil {
f.t.Error("stopped before started")
return
}
f.cancel()
}
func (f *FakeSignalNotifier) NotifyContext(parent context.Context, signals ...os.Signal) (ctx context.Context, stop context.CancelFunc) {
f.Lock()
defer f.Unlock()
f.signals = signals
f.ctx, f.cancel = context.WithCancel(parent)
return f.ctx, f.Stop
}
func (f *FakeSignalNotifier) Notify() {
f.Lock()
defer f.Unlock()
if f.cancel == nil {
f.t.Error("notified before started")
return
}
f.cancel()
}
func (f *FakeSignalNotifier) AssertStopped() {
f.Lock()
defer f.Unlock()
assert.True(f.t, f.stopped)
}
+101 -43
@@ -5,12 +5,14 @@ import (
"bytes"
"context"
"io"
"os"
"strings"
"sync/atomic"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
@@ -25,9 +27,31 @@ import (
func TestAgent(t *testing.T) {
t.Parallel()
waitLines := func(t *testing.T, output <-chan string, lines ...string) error {
t.Helper()
var got []string
outerLoop:
for _, want := range lines {
for {
select {
case line := <-output:
got = append(got, line)
if strings.Contains(line, want) {
continue outerLoop
}
case <-time.After(testutil.WaitShort):
assert.Failf(t, "timed out waiting for line", "want: %q; got: %q", want, got)
return xerrors.Errorf("timed out waiting for line: %q; got: %q", want, got)
}
}
}
return nil
}
for _, tc := range []struct {
name string
iter []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error
iter []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error
logs chan []codersdk.WorkspaceAgentLog
opts cliui.AgentOptions
want []string
@@ -38,12 +62,15 @@ func TestAgent(t *testing.T) {
opts: cliui.AgentOptions{
FetchInterval: time.Millisecond,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnecting
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "⧗ Waiting for the workspace agent to connect")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.FirstConnectedAt = ptr.Ref(time.Now())
return nil
@@ -62,12 +89,15 @@ func TestAgent(t *testing.T) {
opts: cliui.AgentOptions{
FetchInterval: time.Millisecond,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnecting
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "⧗ Waiting for the workspace agent to connect")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleStartTimeout
agent.FirstConnectedAt = ptr.Ref(time.Now())
@@ -87,18 +117,24 @@ func TestAgent(t *testing.T) {
opts: cliui.AgentOptions{
FetchInterval: 1 * time.Millisecond,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnecting
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleStarting
agent.StartedAt = ptr.Ref(time.Now())
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "⧗ Waiting for the workspace agent to connect")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentTimeout
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "The workspace agent is having trouble connecting, wait for it to connect or restart your workspace.")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.FirstConnectedAt = ptr.Ref(time.Now())
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleReady
@@ -120,8 +156,8 @@ func TestAgent(t *testing.T) {
opts: cliui.AgentOptions{
FetchInterval: 1 * time.Millisecond,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentDisconnected
agent.FirstConnectedAt = ptr.Ref(time.Now().Add(-1 * time.Minute))
agent.LastConnectedAt = ptr.Ref(time.Now().Add(-1 * time.Minute))
@@ -131,7 +167,10 @@ func TestAgent(t *testing.T) {
agent.ReadyAt = ptr.Ref(time.Now())
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "⧗ The workspace agent lost connection")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.DisconnectedAt = nil
agent.LastConnectedAt = ptr.Ref(time.Now())
@@ -151,8 +190,8 @@ func TestAgent(t *testing.T) {
FetchInterval: time.Millisecond,
Wait: true,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.FirstConnectedAt = ptr.Ref(time.Now())
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleStarting
@@ -170,7 +209,7 @@ func TestAgent(t *testing.T) {
}
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleReady
agent.ReadyAt = ptr.Ref(time.Now())
logs <- []codersdk.WorkspaceAgentLog{
@@ -195,8 +234,8 @@ func TestAgent(t *testing.T) {
FetchInterval: time.Millisecond,
Wait: true,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.FirstConnectedAt = ptr.Ref(time.Now())
agent.StartedAt = ptr.Ref(time.Now())
@@ -224,8 +263,8 @@ func TestAgent(t *testing.T) {
opts: cliui.AgentOptions{
FetchInterval: time.Millisecond,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentDisconnected
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleOff
return nil
@@ -239,8 +278,8 @@ func TestAgent(t *testing.T) {
FetchInterval: time.Millisecond,
Wait: true,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, logs chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnected
agent.FirstConnectedAt = ptr.Ref(time.Now())
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleStarting
@@ -253,7 +292,10 @@ func TestAgent(t *testing.T) {
}
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, logs chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "Hello world")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.ReadyAt = ptr.Ref(time.Now())
agent.LifecycleState = codersdk.WorkspaceAgentLifecycleShuttingDown
return nil
@@ -272,12 +314,15 @@ func TestAgent(t *testing.T) {
FetchInterval: time.Millisecond,
Wait: true,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentConnecting
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "⧗ Waiting for the workspace agent to connect")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return xerrors.New("bad")
},
},
@@ -292,13 +337,16 @@ func TestAgent(t *testing.T) {
FetchInterval: time.Millisecond,
Wait: true,
},
iter: []func(context.Context, *codersdk.WorkspaceAgent, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
iter: []func(context.Context, *testing.T, *codersdk.WorkspaceAgent, <-chan string, chan []codersdk.WorkspaceAgentLog) error{
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, _ <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
agent.Status = codersdk.WorkspaceAgentTimeout
agent.TroubleshootingURL = "https://troubleshoot"
return nil
},
func(_ context.Context, agent *codersdk.WorkspaceAgent, _ chan []codersdk.WorkspaceAgentLog) error {
func(_ context.Context, t *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return waitLines(t, output, "The workspace agent is having trouble connecting, wait for it to connect or restart your workspace.")
},
func(_ context.Context, _ *testing.T, agent *codersdk.WorkspaceAgent, output <-chan string, _ chan []codersdk.WorkspaceAgentLog) error {
return xerrors.New("bad")
},
},
@@ -317,21 +365,27 @@ func TestAgent(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
var buf bytes.Buffer
r, w, err := os.Pipe()
require.NoError(t, err, "create pipe failed")
defer r.Close()
defer w.Close()
agent := codersdk.WorkspaceAgent{
ID: uuid.New(),
Status: codersdk.WorkspaceAgentConnecting,
CreatedAt: time.Now(),
LifecycleState: codersdk.WorkspaceAgentLifecycleCreated,
}
output := make(chan string, 100) // Buffered to avoid blocking, overflow is discarded.
logs := make(chan []codersdk.WorkspaceAgentLog, 1)
cmd := &clibase.Cmd{
Handler: func(inv *clibase.Invocation) error {
tc.opts.Fetch = func(_ context.Context, _ uuid.UUID) (codersdk.WorkspaceAgent, error) {
t.Log("iter", len(tc.iter))
var err error
if len(tc.iter) > 0 {
err = tc.iter[0](ctx, &agent, logs)
err = tc.iter[0](ctx, t, &agent, output, logs)
tc.iter = tc.iter[1:]
}
return agent, err
@@ -352,27 +406,25 @@ func TestAgent(t *testing.T) {
close(fetchLogs)
return fetchLogs, closeFunc(func() error { return nil }), nil
}
err := cliui.Agent(inv.Context(), &buf, uuid.Nil, tc.opts)
err := cliui.Agent(inv.Context(), w, uuid.Nil, tc.opts)
_ = w.Close()
return err
},
}
inv := cmd.Invoke()
w := clitest.StartWithWaiter(t, inv)
if tc.wantErr {
w.RequireError()
} else {
w.RequireSuccess()
}
waiter := clitest.StartWithWaiter(t, inv)
s := bufio.NewScanner(&buf)
s := bufio.NewScanner(r)
for s.Scan() {
line := s.Text()
t.Log(line)
select {
case output <- line:
default:
t.Logf("output overflow: %s", line)
}
if len(tc.want) == 0 {
for i := 0; i < 5; i++ {
t.Log(line)
}
require.Fail(t, "unexpected line", line)
}
require.Contains(t, line, tc.want[0])
@@ -382,6 +434,12 @@ func TestAgent(t *testing.T) {
if len(tc.want) > 0 {
require.Fail(t, "missing lines: "+strings.Join(tc.want, ", "))
}
if tc.wantErr {
waiter.RequireError()
} else {
waiter.RequireSuccess()
}
})
}
+63
@@ -0,0 +1,63 @@
package cliui
import (
"github.com/coder/coder/v2/cli/clibase"
"github.com/coder/coder/v2/codersdk"
)
var defaultQuery = "owner:me"
// WorkspaceFilter wraps codersdk.WorkspaceFilter
// and allows easy integration to a CLI command.
// Example usage:
//
// func (r *RootCmd) MyCmd() *clibase.Cmd {
// var (
// filter cliui.WorkspaceFilter
// ...
// )
// cmd := &clibase.Cmd{
// ...
// }
// filter.AttachOptions(&cmd.Options)
// ...
// return cmd
// }
//
// The above will add the following flags to the command:
// --all
// --search
type WorkspaceFilter struct {
searchQuery string
all bool
}
func (w *WorkspaceFilter) Filter() codersdk.WorkspaceFilter {
var f codersdk.WorkspaceFilter
if w.all {
return f
}
f.FilterQuery = w.searchQuery
if f.FilterQuery == "" {
f.FilterQuery = defaultQuery
}
return f
}
func (w *WorkspaceFilter) AttachOptions(opts *clibase.OptionSet) {
*opts = append(*opts,
clibase.Option{
Flag: "all",
FlagShorthand: "a",
Description: "List all workspaces, ignoring the search query.",
Value: clibase.BoolOf(&w.all),
},
clibase.Option{
Flag: "search",
Description: "Search for a workspace with a query.",
Default: defaultQuery,
Value: clibase.StringOf(&w.searchQuery),
},
)
}
+1 -1
@@ -71,7 +71,7 @@ func Prompt(inv *clibase.Invocation, opts PromptOptions) (string, error) {
} else {
renderedNo = Bold(ConfirmNo)
}
pretty.Fprintf(inv.Stdout, DefaultStyles.Placeholder, "(%s/%s)", renderedYes, renderedNo)
pretty.Fprintf(inv.Stdout, DefaultStyles.Placeholder, "(%s/%s) ", renderedYes, renderedNo)
} else if opts.Default != "" {
_, _ = fmt.Fprint(inv.Stdout, pretty.Sprint(DefaultStyles.Placeholder, "("+opts.Default+") "))
}
+40
@@ -0,0 +1,40 @@
package cliutil
import (
"os"
"strings"
"sync"
)
var (
hostname string
hostnameOnce sync.Once
)
// Hostname returns the hostname of the machine, lowercased,
// with any trailing domain suffix stripped.
// The result is cached after the first call.
// If the hostname cannot be determined for any reason,
// "localhost" is returned instead.
func Hostname() string {
hostnameOnce.Do(func() { hostname = getHostname() })
return hostname
}
func getHostname() string {
h, err := os.Hostname()
if err != nil {
// Something must be very wrong if this fails.
// We'll just return localhost and hope for the best.
return "localhost"
}
// On some platforms, the hostname can be an FQDN. We only want the first label.
if idx := strings.Index(h, "."); idx != -1 {
h = h[:idx]
}
// For the sake of consistency, we also want to lowercase the hostname.
// Per RFC 4343, DNS lookups must be case-insensitive.
return strings.ToLower(h)
}
+99
@@ -0,0 +1,99 @@
package levenshtein
import (
"golang.org/x/exp/constraints"
"golang.org/x/xerrors"
)
// Matches returns the closest matches to the needle from the haystack.
// The maxDistance parameter is the maximum Levenshtein distance to consider.
// If no matches are found, an empty slice is returned.
func Matches(needle string, maxDistance int, haystack ...string) (matches []string) {
for _, hay := range haystack {
if d, err := Distance(needle, hay, maxDistance); err == nil && d <= maxDistance {
matches = append(matches, hay)
}
}
return matches
}
var ErrMaxDist = xerrors.New("levenshtein: maxDist exceeded")
// Distance returns the edit distance between a and b using the
// Wagner-Fischer algorithm.
// a and b must each be at most 255 bytes long.
// maxDist is the maximum distance to consider.
// A value of -1 for maxDist means no maximum.
func Distance(a, b string, maxDist int) (int, error) {
if len(a) > 255 {
return 0, xerrors.Errorf("levenshtein: a must be at most 255 characters long")
}
if len(b) > 255 {
return 0, xerrors.Errorf("levenshtein: b must be at most 255 characters long")
}
m := uint8(len(a))
n := uint8(len(b))
// Special cases for empty strings
if m == 0 {
return int(n), nil
}
if n == 0 {
return int(m), nil
}
// Allocate a matrix of size m+1 * n+1
d := make([][]uint8, 0)
var i, j uint8
for i = 0; i < m+1; i++ {
di := make([]uint8, n+1)
d = append(d, di)
}
// Source prefixes
for i = 1; i < m+1; i++ {
d[i][0] = i
}
// Target prefixes
for j = 1; j < n+1; j++ {
d[0][j] = j // nolint:gosec // this cannot overflow
}
// Compute the distance
for j = 0; j < n; j++ {
for i = 0; i < m; i++ {
var subCost uint8
// Substitution costs 1 when the bytes differ.
if a[i] != b[j] {
subCost = 1
}
// Remember: the matrix is (m+1) x (n+1), so indices are offset by one.
d[i+1][j+1] = min(
d[i][j+1]+1, // deletion
d[i+1][j]+1, // insertion
d[i][j]+subCost, // substitution
)
// check maxDist on the diagonal
if maxDist > -1 && i == j && d[i+1][j+1] > uint8(maxDist) {
return int(d[i+1][j+1]), ErrMaxDist
}
}
}
return int(d[m][n]), nil
}
func min[T constraints.Ordered](ts ...T) T {
if len(ts) == 0 {
panic("min: no arguments")
}
m := ts[0]
for _, t := range ts[1:] {
if t < m {
m = t
}
}
return m
}
+194
@@ -0,0 +1,194 @@
package levenshtein_test
import (
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/cli/cliutil/levenshtein"
)
func Test_Levenshtein_Matches(t *testing.T) {
t.Parallel()
for _, tt := range []struct {
Name string
Needle string
MaxDistance int
Haystack []string
Expected []string
}{
{
Name: "empty",
Needle: "",
MaxDistance: 0,
Haystack: []string{},
Expected: []string{},
},
{
Name: "empty haystack",
Needle: "foo",
MaxDistance: 0,
Haystack: []string{},
Expected: []string{},
},
{
Name: "empty needle",
Needle: "",
MaxDistance: 0,
Haystack: []string{"foo"},
Expected: []string{},
},
{
Name: "exact match distance 0",
Needle: "foo",
MaxDistance: 0,
Haystack: []string{"foo", "fob"},
Expected: []string{"foo"},
},
{
Name: "exact match distance 1",
Needle: "foo",
MaxDistance: 1,
Haystack: []string{"foo", "bar"},
Expected: []string{"foo"},
},
{
Name: "not found",
Needle: "foo",
MaxDistance: 1,
Haystack: []string{"bar"},
Expected: []string{},
},
{
Name: "1 deletion",
Needle: "foo",
MaxDistance: 1,
Haystack: []string{"bar", "fo"},
Expected: []string{"fo"},
},
{
Name: "one deletion, two matches",
Needle: "foo",
MaxDistance: 1,
Haystack: []string{"bar", "fo", "fou"},
Expected: []string{"fo", "fou"},
},
{
Name: "one deletion, one addition",
Needle: "foo",
MaxDistance: 1,
Haystack: []string{"bar", "fo", "fou", "f"},
Expected: []string{"fo", "fou"},
},
{
Name: "distance 2",
Needle: "foo",
MaxDistance: 2,
Haystack: []string{"bar", "boo", "boof"},
Expected: []string{"boo", "boof"},
},
{
Name: "longer input",
Needle: "kuberenetes",
MaxDistance: 5,
Haystack: []string{"kubernetes", "kubeconfig", "kubectl", "kube"},
Expected: []string{"kubernetes"},
},
} {
tt := tt
t.Run(tt.Name, func(t *testing.T) {
t.Parallel()
actual := levenshtein.Matches(tt.Needle, tt.MaxDistance, tt.Haystack...)
require.ElementsMatch(t, tt.Expected, actual)
})
}
}
func Test_Levenshtein_Distance(t *testing.T) {
t.Parallel()
for _, tt := range []struct {
Name string
A string
B string
MaxDist int
Expected int
Error string
}{
{
Name: "empty",
A: "",
B: "",
MaxDist: -1,
Expected: 0,
},
{
Name: "a empty",
A: "",
B: "foo",
MaxDist: -1,
Expected: 3,
},
{
Name: "b empty",
A: "foo",
B: "",
MaxDist: -1,
Expected: 3,
},
{
Name: "a is b",
A: "foo",
B: "foo",
MaxDist: -1,
Expected: 0,
},
{
Name: "one addition",
A: "foo",
B: "fooo",
MaxDist: -1,
Expected: 1,
},
{
Name: "one deletion",
A: "fooo",
B: "foo",
MaxDist: -1,
Expected: 1,
},
{
Name: "one substitution",
A: "foo",
B: "fou",
MaxDist: -1,
Expected: 1,
},
{
Name: "different strings entirely",
A: "foo",
B: "bar",
MaxDist: -1,
Expected: 3,
},
{
Name: "different strings, max distance 2",
A: "foo",
B: "bar",
MaxDist: 2,
Error: levenshtein.ErrMaxDist.Error(),
},
} {
tt := tt
t.Run(tt.Name, func(t *testing.T) {
t.Parallel()
actual, err := levenshtein.Distance(tt.A, tt.B, tt.MaxDist)
if tt.Error == "" {
require.NoError(t, err)
require.Equal(t, tt.Expected, actual)
} else {
require.EqualError(t, err, tt.Error)
}
})
}
}
+38
@@ -0,0 +1,38 @@
package cliutil
import (
"io"
"sync"
)
type discardAfterClose struct {
sync.Mutex
wc io.WriteCloser
closed bool
}
// DiscardAfterClose is an io.WriteCloser that silently discards writes after it is closed.
// It is useful as a target for a slog.Sink so that an underlying WriteCloser, such as a file,
// can be cleaned up without race conditions from still-active loggers.
func DiscardAfterClose(wc io.WriteCloser) io.WriteCloser {
return &discardAfterClose{wc: wc}
}
func (d *discardAfterClose) Write(p []byte) (n int, err error) {
d.Lock()
defer d.Unlock()
if d.closed {
return len(p), nil
}
return d.wc.Write(p)
}
func (d *discardAfterClose) Close() error {
d.Lock()
defer d.Unlock()
if d.closed {
return nil
}
d.closed = true
return d.wc.Close()
}
+54
@@ -0,0 +1,54 @@
package cliutil_test
import (
"testing"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/cliutil"
)
func TestDiscardAfterClose(t *testing.T) {
t.Parallel()
exErr := xerrors.New("test")
fwc := &fakeWriteCloser{err: exErr}
uut := cliutil.DiscardAfterClose(fwc)
n, err := uut.Write([]byte("one"))
require.Equal(t, 3, n)
require.NoError(t, err)
n, err = uut.Write([]byte("two"))
require.Equal(t, 3, n)
require.NoError(t, err)
err = uut.Close()
require.Equal(t, exErr, err)
n, err = uut.Write([]byte("three"))
require.Equal(t, 5, n)
require.NoError(t, err)
require.Len(t, fwc.writes, 2)
require.EqualValues(t, "one", fwc.writes[0])
require.EqualValues(t, "two", fwc.writes[1])
}
type fakeWriteCloser struct {
writes [][]byte
closed bool
err error
}
func (f *fakeWriteCloser) Write(p []byte) (n int, err error) {
q := make([]byte, len(p))
copy(q, p)
f.writes = append(f.writes, q)
return len(p), nil
}
func (f *fakeWriteCloser) Close() error {
f.closed = true
return f.err
}
+24 -4
@@ -13,6 +13,7 @@ import (
"path/filepath"
"runtime"
"sort"
"strconv"
"strings"
"github.com/cli/safeexec"
@@ -46,9 +47,10 @@ const (
// sshConfigOptions represents options that can be stored and read
// from the coder config in ~/.ssh/coder.
type sshConfigOptions struct {
waitEnum string
userHostPrefix string
sshOptions []string
waitEnum string
userHostPrefix string
sshOptions []string
disableAutostart bool
}
// addOptions expects options in the form of "option=value" or "option value".
@@ -106,7 +108,7 @@ func (o sshConfigOptions) equal(other sshConfigOptions) bool {
if !slices.Equal(opt1, opt2) {
return false
}
return o.waitEnum == other.waitEnum && o.userHostPrefix == other.userHostPrefix
return o.waitEnum == other.waitEnum && o.userHostPrefix == other.userHostPrefix && o.disableAutostart == other.disableAutostart
}
func (o sshConfigOptions) asList() (list []string) {
@@ -116,6 +118,9 @@ func (o sshConfigOptions) asList() (list []string) {
if o.userHostPrefix != "" {
list = append(list, fmt.Sprintf("ssh-host-prefix: %s", o.userHostPrefix))
}
if o.disableAutostart {
list = append(list, fmt.Sprintf("disable-autostart: %v", o.disableAutostart))
}
for _, opt := range o.sshOptions {
list = append(list, fmt.Sprintf("ssh-option: %s", opt))
}
@@ -392,6 +397,9 @@ func (r *RootCmd) configSSH() *clibase.Cmd {
if sshConfigOpts.waitEnum != "auto" {
flags += " --wait=" + sshConfigOpts.waitEnum
}
if sshConfigOpts.disableAutostart {
flags += " --disable-autostart=true"
}
defaultOptions = append(defaultOptions, fmt.Sprintf(
"ProxyCommand %s --global-config %s ssh --stdio%s %s",
escapedCoderBinary, escapedGlobalConfig, flags, workspaceHostname,
@@ -566,6 +574,13 @@ func (r *RootCmd) configSSH() *clibase.Cmd {
Default: "auto",
Value: clibase.EnumOf(&sshConfigOpts.waitEnum, "yes", "no", "auto"),
},
{
Flag: "disable-autostart",
Description: "Disable starting the workspace automatically when connecting via SSH.",
Env: "CODER_CONFIGSSH_DISABLE_AUTOSTART",
Value: clibase.BoolOf(&sshConfigOpts.disableAutostart),
Default: "false",
},
{
Flag: "force-unix-filepaths",
Env: "CODER_CONFIGSSH_UNIX_FILEPATHS",
@@ -602,6 +617,9 @@ func sshConfigWriteSectionHeader(w io.Writer, addNewline bool, o sshConfigOption
if o.userHostPrefix != "" {
_, _ = fmt.Fprintf(&ow, "# :%s=%s\n", "ssh-host-prefix", o.userHostPrefix)
}
if o.disableAutostart {
_, _ = fmt.Fprintf(&ow, "# :%s=%v\n", "disable-autostart", o.disableAutostart)
}
for _, opt := range o.sshOptions {
_, _ = fmt.Fprintf(&ow, "# :%s=%s\n", "ssh-option", opt)
}
@@ -634,6 +652,8 @@ func sshConfigParseLastOptions(r io.Reader) (o sshConfigOptions) {
o.userHostPrefix = parts[1]
case "ssh-option":
o.sshOptions = append(o.sshOptions, parts[1])
case "disable-autostart":
o.disableAutostart, _ = strconv.ParseBool(parts[1])
default:
// Unknown option, ignore.
}
+36 -61
@@ -22,8 +22,9 @@ import (
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/provisioner/echo"
"github.com/coder/coder/v2/provisionersdk/proto"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
@@ -64,8 +65,7 @@ func TestConfigSSH(t *testing.T) {
const hostname = "test-coder."
const expectedKey = "ConnectionAttempts"
const removeKey = "ConnectionTimeout"
client := coderdtest.New(t, &coderdtest.Options{
IncludeProvisionerDaemon: true,
client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{
ConfigSSH: codersdk.SSHConfigResponse{
HostnamePrefix: hostname,
SSHConfigOptions: map[string]string{
@@ -76,32 +76,13 @@ func TestConfigSSH(t *testing.T) {
},
})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
authToken := uuid.NewString()
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionPlan: []*proto.Response{{
Type: &proto.Response_Plan{
Plan: &proto.PlanComplete{
Resources: []*proto.Resource{{
Name: "example",
Type: "aws_instance",
Agents: []*proto.Agent{{
Id: uuid.NewString(),
Name: "example",
}},
}},
},
},
}},
ProvisionApply: echo.ProvisionApplyWithAgent(authToken),
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, member, owner.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
_ = agenttest.New(t, client.URL, authToken)
resources := coderdtest.AwaitWorkspaceAgents(t, client, workspace.ID)
member, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
r := dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: owner.OrganizationID,
OwnerID: memberUser.ID,
}).WithAgent().Do()
_ = agenttest.New(t, client.URL, r.AgentToken)
resources := coderdtest.AwaitWorkspaceAgents(t, client, r.Workspace.ID)
agentConn, err := client.DialWorkspaceAgent(context.Background(), resources[0].Agents[0].ID, nil)
require.NoError(t, err)
defer agentConn.Close()
@@ -172,7 +153,7 @@ func TestConfigSSH(t *testing.T) {
home := filepath.Dir(filepath.Dir(sshConfigFile))
// #nosec
sshCmd := exec.Command("ssh", "-F", sshConfigFile, hostname+workspace.Name, "echo", "test")
sshCmd := exec.Command("ssh", "-F", sshConfigFile, hostname+r.Workspace.Name, "echo", "test")
pty = ptytest.New(t)
// Set HOME because coder config is included from ~/.ssh/coder.
sshCmd.Env = append(sshCmd.Env, fmt.Sprintf("HOME=%s", home))
@@ -213,13 +194,13 @@ func TestConfigSSH_FileWriteAndOptionsFlow(t *testing.T) {
match, write string
}
tests := []struct {
name string
args []string
matches []match
writeConfig writeConfig
wantConfig wantConfig
wantErr bool
echoResponse *echo.Responses
name string
args []string
matches []match
writeConfig writeConfig
wantConfig wantConfig
wantErr bool
hasAgent bool
}{
{
name: "Config file is created",
@@ -576,11 +557,8 @@ func TestConfigSSH_FileWriteAndOptionsFlow(t *testing.T) {
args: []string{
"-y", "--coder-binary-path", "/foo/bar/coder",
},
wantErr: false,
echoResponse: &echo.Responses{
Parse: echo.ParseComplete,
ProvisionApply: echo.ProvisionApplyWithAgent(""),
},
wantErr: false,
hasAgent: true,
wantConfig: wantConfig{
regexMatch: "ProxyCommand /foo/bar/coder",
},
@@ -591,15 +569,14 @@ func TestConfigSSH_FileWriteAndOptionsFlow(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
var (
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, tt.echoResponse)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID)
_ = coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
)
client, db := coderdtest.NewWithDatabase(t, nil)
user := coderdtest.CreateFirstUser(t, client)
if tt.hasAgent {
_ = dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
}
// Prepare ssh config files.
sshConfigName := sshConfigFileName(t)
@@ -613,6 +590,7 @@ func TestConfigSSH_FileWriteAndOptionsFlow(t *testing.T) {
}
args = append(args, tt.args...)
inv, root := clitest.New(t, args...)
//nolint:gocritic // This has always ran with the admin user.
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t)
@@ -710,17 +688,14 @@ func TestConfigSSH_Hostnames(t *testing.T) {
resources = append(resources, resource)
}
-client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
+client, db := coderdtest.NewWithDatabase(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
-member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
-// authToken := uuid.NewString()
-version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID,
-echo.WithResources(resources))
-coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
-template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
-workspace := coderdtest.CreateWorkspace(t, member, owner.OrganizationID, template.ID)
-coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
+member, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
+r := dbfake.WorkspaceBuild(t, db, database.Workspace{
+OrganizationID: owner.OrganizationID,
+OwnerID: memberUser.ID,
+}).Resource(resources...).Do()
sshConfigFile := sshConfigFileName(t)
inv, root := clitest.New(t, "config-ssh", "--ssh-config-file", sshConfigFile)
@@ -745,7 +720,7 @@ func TestConfigSSH_Hostnames(t *testing.T) {
var expectedHosts []string
for _, hostnamePattern := range tt.expected {
-hostname := strings.ReplaceAll(hostnamePattern, "@", workspace.Name)
+hostname := strings.ReplaceAll(hostnamePattern, "@", r.Workspace.Name)
expectedHosts = append(expectedHosts, hostname)
}
-6
@@ -1,6 +0,0 @@
package cli
const (
timeFormat = "3:04PM MST"
dateFormat = "Jan 2, 2006"
)
+57 -14
@@ -26,8 +26,9 @@ func (r *RootCmd) create() *clibase.Cmd {
stopAfter time.Duration
workspaceName string
-parameterFlags workspaceParameterFlags
-autoUpdates string
+parameterFlags workspaceParameterFlags
+autoUpdates string
+copyParametersFrom string
)
client := new(codersdk.Client)
cmd := &clibase.Cmd{
@@ -76,7 +77,24 @@ func (r *RootCmd) create() *clibase.Cmd {
return xerrors.Errorf("A workspace already exists named %q!", workspaceName)
}
var sourceWorkspace codersdk.Workspace
if copyParametersFrom != "" {
sourceWorkspaceOwner, sourceWorkspaceName, err := splitNamedWorkspace(copyParametersFrom)
if err != nil {
return err
}
sourceWorkspace, err = client.WorkspaceByOwnerAndName(inv.Context(), sourceWorkspaceOwner, sourceWorkspaceName, codersdk.WorkspaceOptions{})
if err != nil {
return xerrors.Errorf("get source workspace: %w", err)
}
_, _ = fmt.Fprintf(inv.Stdout, "Coder will use the same template %q as the source workspace.\n", sourceWorkspace.TemplateName)
templateName = sourceWorkspace.TemplateName
}
var template codersdk.Template
var templateVersionID uuid.UUID
if templateName == "" {
_, _ = fmt.Fprintln(inv.Stdout, pretty.Sprint(cliui.DefaultStyles.Wrap, "Select a template below to preview the provisioned infrastructure:"))
@@ -118,11 +136,19 @@ func (r *RootCmd) create() *clibase.Cmd {
}
template = templateByName[option]
templateVersionID = template.ActiveVersionID
} else if sourceWorkspace.LatestBuild.TemplateVersionID != uuid.Nil {
template, err = client.Template(inv.Context(), sourceWorkspace.TemplateID)
if err != nil {
return xerrors.Errorf("get template by name: %w", err)
}
templateVersionID = sourceWorkspace.LatestBuild.TemplateVersionID
} else {
template, err = client.TemplateByName(inv.Context(), organization.ID, templateName)
if err != nil {
return xerrors.Errorf("get template by name: %w", err)
}
templateVersionID = template.ActiveVersionID
}
var schedSpec *string
@@ -134,18 +160,28 @@ func (r *RootCmd) create() *clibase.Cmd {
schedSpec = ptr.Ref(sched.String())
}
-cliRichParameters, err := asWorkspaceBuildParameters(parameterFlags.richParameters)
+cliBuildParameters, err := asWorkspaceBuildParameters(parameterFlags.richParameters)
if err != nil {
return xerrors.Errorf("can't parse given parameter values: %w", err)
}
var sourceWorkspaceParameters []codersdk.WorkspaceBuildParameter
if copyParametersFrom != "" {
sourceWorkspaceParameters, err = client.WorkspaceBuildParameters(inv.Context(), sourceWorkspace.LatestBuild.ID)
if err != nil {
return xerrors.Errorf("get source workspace build parameters: %w", err)
}
}
richParameters, err := prepWorkspaceBuild(inv, client, prepWorkspaceBuildArgs{
-Action: WorkspaceCreate,
-Template: template,
-NewWorkspaceName: workspaceName,
+Action: WorkspaceCreate,
+TemplateVersionID: templateVersionID,
+NewWorkspaceName: workspaceName,
RichParameterFile: parameterFlags.richParameterFile,
-RichParameters: cliRichParameters,
+RichParameters: cliBuildParameters,
+SourceWorkspaceParameters: sourceWorkspaceParameters,
})
if err != nil {
return xerrors.Errorf("prepare build: %w", err)
@@ -165,7 +201,7 @@ func (r *RootCmd) create() *clibase.Cmd {
}
workspace, err := client.CreateWorkspace(inv.Context(), organization.ID, workspaceOwner, codersdk.CreateWorkspaceRequest{
-TemplateID: template.ID,
+TemplateVersionID: templateVersionID,
Name: workspaceName,
AutostartSchedule: schedSpec,
TTLMillis: ttlMillis,
@@ -217,6 +253,12 @@ func (r *RootCmd) create() *clibase.Cmd {
Default: string(codersdk.AutomaticUpdatesNever),
Value: clibase.StringOf(&autoUpdates),
},
clibase.Option{
Flag: "copy-parameters-from",
Env: "CODER_WORKSPACE_COPY_PARAMETERS_FROM",
Description: "Specify the source workspace name to copy parameters from.",
Value: clibase.StringOf(&copyParametersFrom),
},
cliui.SkipPromptOption(),
)
cmd.Options = append(cmd.Options, parameterFlags.cliParameters()...)
@@ -224,12 +266,12 @@ func (r *RootCmd) create() *clibase.Cmd {
}
type prepWorkspaceBuildArgs struct {
-Action WorkspaceCLIAction
-Template codersdk.Template
-NewWorkspaceName string
+WorkspaceID uuid.UUID
+Action WorkspaceCLIAction
+TemplateVersionID uuid.UUID
+NewWorkspaceName string
-LastBuildParameters []codersdk.WorkspaceBuildParameter
+LastBuildParameters []codersdk.WorkspaceBuildParameter
+SourceWorkspaceParameters []codersdk.WorkspaceBuildParameter
PromptBuildOptions bool
BuildOptions []codersdk.WorkspaceBuildParameter
@@ -244,7 +286,7 @@ type prepWorkspaceBuildArgs struct {
func prepWorkspaceBuild(inv *clibase.Invocation, client *codersdk.Client, args prepWorkspaceBuildArgs) ([]codersdk.WorkspaceBuildParameter, error) {
ctx := inv.Context()
-templateVersion, err := client.TemplateVersion(ctx, args.Template.ActiveVersionID)
+templateVersion, err := client.TemplateVersion(ctx, args.TemplateVersionID)
if err != nil {
return nil, xerrors.Errorf("get template version: %w", err)
}
@@ -264,6 +306,7 @@ func prepWorkspaceBuild(inv *clibase.Invocation, client *codersdk.Client, args p
resolver := new(ParameterResolver).
WithLastBuildParameters(args.LastBuildParameters).
WithSourceWorkspaceParameters(args.SourceWorkspaceParameters).
WithPromptBuildOptions(args.PromptBuildOptions).
WithBuildOptions(args.BuildOptions).
WithPromptRichParameters(args.PromptRichParameters).
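The hunks above change which template version a new workspace is built from: when `--copy-parameters-from` is given, the source workspace's latest build version takes precedence over the template's active version. A simplified, hypothetical sketch of that precedence (the types and names here are illustrative stand-ins, not the codersdk API):

```go
package main

import "fmt"

// sourceWorkspace is a stand-in for codersdk.Workspace, reduced to the one
// field this decision needs.
type sourceWorkspace struct {
	latestBuildVersionID string
}

// pickTemplateVersion mirrors the precedence above: a non-nil source
// workspace (i.e. --copy-parameters-from was given) pins the build to the
// source's template version; otherwise the template's active version is used.
func pickTemplateVersion(src *sourceWorkspace, activeVersionID string) string {
	if src != nil && src.latestBuildVersionID != "" {
		return src.latestBuildVersionID
	}
	return activeVersionID
}

func main() {
	fmt.Println(pickTemplateVersion(&sourceWorkspace{latestBuildVersionID: "v-old"}, "v-active")) // v-old
	fmt.Println(pickTemplateVersion(nil, "v-active"))                                            // v-active
}
```

This is also why the `CopyParametersFromNotUpdatedWorkspace` test below expects the new workspace to keep the older `version.ID` even after the template's active version moves on.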
+143
@@ -391,6 +391,149 @@ func TestCreateWithRichParameters(t *testing.T) {
}
<-doneChan
})
t.Run("WrongParameterName/DidYouMean", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
wrongFirstParameterName := "frst-prameter"
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name,
"--parameter", fmt.Sprintf("%s=%s", wrongFirstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
assert.ErrorContains(t, err, "parameter \""+wrongFirstParameterName+"\" is not present in the template")
assert.ErrorContains(t, err, "Did you mean: "+firstParameterName)
})
t.Run("CopyParameters", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Firstly, create a regular workspace using template with parameters.
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "-y",
"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "can't create first workspace")
// Secondly, create a new workspace using parameters from the previous workspace.
const otherWorkspace = "other-workspace"
inv, root = clitest.New(t, "create", "--copy-parameters-from", "my-workspace", otherWorkspace, "-y")
clitest.SetupConfig(t, member, root)
pty = ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err = inv.Run()
require.NoError(t, err, "can't create a workspace based on the source workspace")
// Verify if the new workspace uses expected parameters.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
Name: otherWorkspace,
})
require.NoError(t, err, "can't list available workspaces")
require.Len(t, workspaces.Workspaces, 1)
otherWorkspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
buildParameters, err := client.WorkspaceBuildParameters(ctx, otherWorkspaceLatestBuild.ID)
require.NoError(t, err)
require.Len(t, buildParameters, 3)
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
})
t.Run("CopyParametersFromNotUpdatedWorkspace", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, echoResponses)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
// Firstly, create a regular workspace using template with parameters.
inv, root := clitest.New(t, "create", "my-workspace", "--template", template.Name, "-y",
"--parameter", fmt.Sprintf("%s=%s", firstParameterName, firstParameterValue),
"--parameter", fmt.Sprintf("%s=%s", secondParameterName, secondParameterValue),
"--parameter", fmt.Sprintf("%s=%s", immutableParameterName, immutableParameterValue))
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.Run()
require.NoError(t, err, "can't create first workspace")
// Secondly, update the template to the newer version.
version2 := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, prepareEchoResponses([]*proto.RichParameter{
{Name: "third_parameter", Type: "string", DefaultValue: "not-relevant"},
}), func(ctvr *codersdk.CreateTemplateVersionRequest) {
ctvr.TemplateID = template.ID
})
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version2.ID)
coderdtest.UpdateActiveTemplateVersion(t, client, template.ID, version2.ID)
// Thirdly, create a new workspace using parameters from the previous workspace.
const otherWorkspace = "other-workspace"
inv, root = clitest.New(t, "create", "--copy-parameters-from", "my-workspace", otherWorkspace, "-y")
clitest.SetupConfig(t, member, root)
pty = ptytest.New(t).Attach(inv)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err = inv.Run()
require.NoError(t, err, "can't create a workspace based on the source workspace")
// Verify if the new workspace uses expected parameters.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspaces, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
Name: otherWorkspace,
})
require.NoError(t, err, "can't list available workspaces")
require.Len(t, workspaces.Workspaces, 1)
otherWorkspaceLatestBuild := workspaces.Workspaces[0].LatestBuild
require.Equal(t, version.ID, otherWorkspaceLatestBuild.TemplateVersionID)
buildParameters, err := client.WorkspaceBuildParameters(ctx, otherWorkspaceLatestBuild.ID)
require.NoError(t, err)
require.Len(t, buildParameters, 3)
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: firstParameterName, Value: firstParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: secondParameterName, Value: secondParameterValue})
require.Contains(t, buildParameters, codersdk.WorkspaceBuildParameter{Name: immutableParameterName, Value: immutableParameterValue})
})
}
func TestCreateValidateRichParameters(t *testing.T) {
+12 -5
@@ -22,6 +22,7 @@ import (
func (r *RootCmd) dotfiles() *clibase.Cmd {
var symlinkDir string
var gitbranch string
var dotfilesRepoDir string
cmd := &clibase.Cmd{
Use: "dotfiles <git_repo_url>",
@@ -35,11 +36,10 @@ func (r *RootCmd) dotfiles() *clibase.Cmd {
),
Handler: func(inv *clibase.Invocation) error {
var (
-dotfilesRepoDir = "dotfiles"
-gitRepo = inv.Args[0]
-cfg = r.createConfig()
-cfgDir = string(cfg)
-dotfilesDir = filepath.Join(cfgDir, dotfilesRepoDir)
+gitRepo = inv.Args[0]
+cfg = r.createConfig()
+cfgDir = string(cfg)
+dotfilesDir = filepath.Join(cfgDir, dotfilesRepoDir)
// This follows the same pattern outlined by others in the market:
// https://github.com/coder/coder/pull/1696#issue-1245742312
installScriptSet = []string{
@@ -290,6 +290,13 @@ func (r *RootCmd) dotfiles() *clibase.Cmd {
"If empty, will default to cloning the default branch or using the existing branch in the cloned repo on disk.",
Value: clibase.StringOf(&gitbranch),
},
{
Flag: "repo-dir",
Default: "dotfiles",
Env: "CODER_DOTFILES_REPO_DIR",
Description: "Specifies the directory for the dotfiles repository, relative to global config directory.",
Value: clibase.StringOf(&dotfilesRepoDir),
},
cliui.SkipPromptOption(),
}
return cmd
+62
@@ -50,6 +50,68 @@ func TestDotfiles(t *testing.T) {
require.NoError(t, err)
require.Equal(t, string(b), "wow")
})
t.Run("SwitchRepoDir", func(t *testing.T) {
t.Parallel()
_, root := clitest.New(t)
testRepo := testGitRepo(t, root)
// nolint:gosec
err := os.WriteFile(filepath.Join(testRepo, ".bashrc"), []byte("wow"), 0o750)
require.NoError(t, err)
c := exec.Command("git", "add", ".bashrc")
c.Dir = testRepo
err = c.Run()
require.NoError(t, err)
c = exec.Command("git", "commit", "-m", `"add .bashrc"`)
c.Dir = testRepo
out, err := c.CombinedOutput()
require.NoError(t, err, string(out))
inv, _ := clitest.New(t, "dotfiles", "--global-config", string(root), "--symlink-dir", string(root), "--repo-dir", "testrepo", "-y", testRepo)
err = inv.Run()
require.NoError(t, err)
b, err := os.ReadFile(filepath.Join(string(root), ".bashrc"))
require.NoError(t, err)
require.Equal(t, string(b), "wow")
stat, staterr := os.Stat(filepath.Join(string(root), "testrepo"))
require.NoError(t, staterr)
require.True(t, stat.IsDir())
})
t.Run("SwitchRepoDirRelative", func(t *testing.T) {
t.Parallel()
_, root := clitest.New(t)
testRepo := testGitRepo(t, root)
// nolint:gosec
err := os.WriteFile(filepath.Join(testRepo, ".bashrc"), []byte("wow"), 0o750)
require.NoError(t, err)
c := exec.Command("git", "add", ".bashrc")
c.Dir = testRepo
err = c.Run()
require.NoError(t, err)
c = exec.Command("git", "commit", "-m", `"add .bashrc"`)
c.Dir = testRepo
out, err := c.CombinedOutput()
require.NoError(t, err, string(out))
inv, _ := clitest.New(t, "dotfiles", "--global-config", string(root), "--symlink-dir", string(root), "--repo-dir", "./relrepo", "-y", testRepo)
err = inv.Run()
require.NoError(t, err)
b, err := os.ReadFile(filepath.Join(string(root), ".bashrc"))
require.NoError(t, err)
require.Equal(t, string(b), "wow")
stat, staterr := os.Stat(filepath.Join(string(root), "relrepo"))
require.NoError(t, staterr)
require.True(t, stat.IsDir())
})
t.Run("InstallScript", func(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
+113 -51
@@ -10,6 +10,7 @@ import (
"math/rand"
"net/http"
"os"
+"os/signal"
"strconv"
"strings"
"sync"
@@ -173,11 +174,12 @@ func (s *scaletestStrategyFlags) attach(opts *clibase.OptionSet) {
func (s *scaletestStrategyFlags) toStrategy() harness.ExecutionStrategy {
var strategy harness.ExecutionStrategy
-if s.concurrency == 1 {
+switch s.concurrency {
+case 1:
strategy = harness.LinearExecutionStrategy{}
-} else if s.concurrency == 0 {
+case 0:
strategy = harness.ConcurrentExecutionStrategy{}
-} else {
+default:
strategy = harness.ParallelExecutionStrategy{
Limit: int(s.concurrency),
}
@@ -244,7 +246,9 @@ func (o *scaleTestOutput) write(res harness.Results, stdout io.Writer) error {
err := s.Sync()
// On Linux, EINVAL is returned when calling fsync on /dev/stdout. We
// can safely ignore this error.
-if err != nil && !xerrors.Is(err, syscall.EINVAL) {
+// On macOS, ENOTTY is returned when calling sync on /dev/stdout. We
+// can safely ignore this error.
+if err != nil && !xerrors.Is(err, syscall.EINVAL) && !xerrors.Is(err, syscall.ENOTTY) {
return xerrors.Errorf("flush output file: %w", err)
}
}
@@ -394,6 +398,8 @@ func (r *userCleanupRunner) Run(ctx context.Context, _ string, _ io.Writer) erro
}
func (r *RootCmd) scaletestCleanup() *clibase.Cmd {
var template string
cleanupStrategy := &scaletestStrategyFlags{cleanup: true}
client := new(codersdk.Client)
@@ -407,22 +413,29 @@ func (r *RootCmd) scaletestCleanup() *clibase.Cmd {
Handler: func(inv *clibase.Invocation) error {
ctx := inv.Context()
-_, err := requireAdmin(ctx, client)
+me, err := requireAdmin(ctx, client)
if err != nil {
return err
}
client.HTTPClient = &http.Client{
-Transport: &headerTransport{
-transport: http.DefaultTransport,
-header: map[string][]string{
+Transport: &codersdk.HeaderTransport{
+Transport: http.DefaultTransport,
+Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
if template != "" {
_, err := parseTemplate(ctx, client, me.OrganizationIDs, template)
if err != nil {
return xerrors.Errorf("parse template: %w", err)
}
}
cliui.Infof(inv.Stdout, "Fetching scaletest workspaces...")
-workspaces, err := getScaletestWorkspaces(ctx, client)
+workspaces, err := getScaletestWorkspaces(ctx, client, template)
if err != nil {
return err
}
@@ -494,6 +507,15 @@ func (r *RootCmd) scaletestCleanup() *clibase.Cmd {
},
}
cmd.Options = clibase.OptionSet{
{
Flag: "template",
Env: "CODER_SCALETEST_CLEANUP_TEMPLATE",
Description: "Name or ID of the template. Only delete workspaces created from the given template.",
Value: clibase.StringOf(&template),
},
}
cleanupStrategy.attach(&cmd.Options)
return cmd
}
@@ -548,9 +570,9 @@ func (r *RootCmd) scaletestCreateWorkspaces() *clibase.Cmd {
}
client.HTTPClient = &http.Client{
-Transport: &headerTransport{
-transport: http.DefaultTransport,
-header: map[string][]string{
+Transport: &codersdk.HeaderTransport{
+Transport: http.DefaultTransport,
+Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
@@ -564,34 +586,12 @@ func (r *RootCmd) scaletestCreateWorkspaces() *clibase.Cmd {
return xerrors.Errorf("could not parse --output flags")
}
-var tpl codersdk.Template
if template == "" {
return xerrors.Errorf("--template is required")
}
-if id, err := uuid.Parse(template); err == nil && id != uuid.Nil {
-tpl, err = client.Template(ctx, id)
-if err != nil {
-return xerrors.Errorf("get template by ID %q: %w", template, err)
-}
-} else {
-// List templates in all orgs until we find a match.
-orgLoop:
-for _, orgID := range me.OrganizationIDs {
-tpls, err := client.TemplatesByOrganization(ctx, orgID)
-if err != nil {
-return xerrors.Errorf("list templates in org %q: %w", orgID, err)
-}
-for _, t := range tpls {
-if t.Name == template {
-tpl = t
-break orgLoop
-}
-}
-}
-}
-if tpl.ID == uuid.Nil {
-return xerrors.Errorf("could not find template %q in any organization", template)
+tpl, err := parseTemplate(ctx, client, me.OrganizationIDs, template)
+if err != nil {
+return xerrors.Errorf("parse template: %w", err)
}
cliRichParameters, err := asWorkspaceBuildParameters(parameterFlags.richParameters)
@@ -600,9 +600,9 @@ func (r *RootCmd) scaletestCreateWorkspaces() *clibase.Cmd {
}
richParameters, err := prepWorkspaceBuild(inv, client, prepWorkspaceBuildArgs{
-Action: WorkspaceCreate,
-Template: tpl,
-NewWorkspaceName: "scaletest-N", // TODO: the scaletest runner will pass in a different name here. Does this matter?
+Action: WorkspaceCreate,
+TemplateVersionID: tpl.ActiveVersionID,
+NewWorkspaceName: "scaletest-N", // TODO: the scaletest runner will pass in a different name here. Does this matter?
RichParameterFile: parameterFlags.richParameterFile,
RichParameters: cliRichParameters,
@@ -859,6 +859,7 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *clibase.Cmd {
tickInterval time.Duration
bytesPerTick int64
ssh bool
template string
client = &codersdk.Client{}
tracingFlags = &scaletestTracingFlags{}
@@ -874,26 +875,43 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *clibase.Cmd {
Middleware: clibase.Chain(
r.InitClient(client),
),
-Handler: func(inv *clibase.Invocation) error {
+Handler: func(inv *clibase.Invocation) (err error) {
ctx := inv.Context()
notifyCtx, stop := signal.NotifyContext(ctx, InterruptSignals...) // Checked later.
defer stop()
ctx = notifyCtx
me, err := requireAdmin(ctx, client)
if err != nil {
return err
}
reg := prometheus.NewRegistry()
metrics := workspacetraffic.NewMetrics(reg, "username", "workspace_name", "agent_name")
-logger := slog.Make(sloghuman.Sink(io.Discard))
+logger := inv.Logger
prometheusSrvClose := ServeHandler(ctx, logger, promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), prometheusFlags.Address, "prometheus")
defer prometheusSrvClose()
// Bypass rate limiting
client.HTTPClient = &http.Client{
-Transport: &headerTransport{
-transport: http.DefaultTransport,
-header: map[string][]string{
+Transport: &codersdk.HeaderTransport{
+Transport: http.DefaultTransport,
+Header: map[string][]string{
codersdk.BypassRatelimitHeader: {"true"},
},
},
}
-workspaces, err := getScaletestWorkspaces(inv.Context(), client)
+if template != "" {
+_, err := parseTemplate(ctx, client, me.OrganizationIDs, template)
+if err != nil {
+return xerrors.Errorf("parse template: %w", err)
+}
+}
+workspaces, err := getScaletestWorkspaces(inv.Context(), client, template)
if err != nil {
return err
}
@@ -955,6 +973,7 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *clibase.Cmd {
ReadMetrics: metrics.ReadMetrics(ws.OwnerName, ws.Name, agentName),
WriteMetrics: metrics.WriteMetrics(ws.OwnerName, ws.Name, agentName),
SSH: ssh,
Echo: ssh,
}
if err := config.Validate(); err != nil {
@@ -980,6 +999,11 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *clibase.Cmd {
return xerrors.Errorf("run test harness (harness failure, not a test failure): %w", err)
}
// If the command was interrupted, skip stats.
if notifyCtx.Err() != nil {
return notifyCtx.Err()
}
res := th.Results()
for _, o := range outputs {
err = o.write(res, inv.Stdout)
@@ -997,6 +1021,13 @@ func (r *RootCmd) scaletestWorkspaceTraffic() *clibase.Cmd {
}
cmd.Options = []clibase.Option{
{
Flag: "template",
FlagShorthand: "t",
Env: "CODER_SCALETEST_TEMPLATE",
Description: "Name or ID of the template. Traffic generation will be limited to workspaces created from this template.",
Value: clibase.StringOf(&template),
},
{
Flag: "bytes-per-tick",
Env: "CODER_SCALETEST_WORKSPACE_TRAFFIC_BYTES_PER_TICK",
@@ -1058,7 +1089,7 @@ func (r *RootCmd) scaletestDashboard() *clibase.Cmd {
return xerrors.Errorf("--jitter must be less than --interval")
}
ctx := inv.Context()
-logger := slog.Make(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelInfo)
+logger := inv.Logger.AppendSinks(sloghuman.Sink(inv.Stdout))
if r.verbose {
logger = logger.Leveled(slog.LevelDebug)
}
@@ -1281,7 +1312,7 @@ func isScaleTestWorkspace(workspace codersdk.Workspace) bool {
strings.HasPrefix(workspace.Name, "scaletest-")
}
-func getScaletestWorkspaces(ctx context.Context, client *codersdk.Client) ([]codersdk.Workspace, error) {
+func getScaletestWorkspaces(ctx context.Context, client *codersdk.Client, template string) ([]codersdk.Workspace, error) {
var (
pageNumber = 0
limit = 100
@@ -1290,9 +1321,10 @@ func getScaletestWorkspaces(ctx context.Context, client *codersdk.Client) ([]cod
for {
page, err := client.Workspaces(ctx, codersdk.WorkspaceFilter{
-Name: "scaletest-",
-Offset: pageNumber * limit,
-Limit: limit,
+Name: "scaletest-",
+Template: template,
+Offset: pageNumber * limit,
+Limit: limit,
})
if err != nil {
return nil, xerrors.Errorf("fetch scaletest workspaces page %d: %w", pageNumber, err)
@@ -1349,3 +1381,33 @@ func getScaletestUsers(ctx context.Context, client *codersdk.Client) ([]codersdk
return users, nil
}
func parseTemplate(ctx context.Context, client *codersdk.Client, organizationIDs []uuid.UUID, template string) (tpl codersdk.Template, err error) {
if id, err := uuid.Parse(template); err == nil && id != uuid.Nil {
tpl, err = client.Template(ctx, id)
if err != nil {
return tpl, xerrors.Errorf("get template by ID %q: %w", template, err)
}
} else {
// List templates in all orgs until we find a match.
orgLoop:
for _, orgID := range organizationIDs {
tpls, err := client.TemplatesByOrganization(ctx, orgID)
if err != nil {
return tpl, xerrors.Errorf("list templates in org %q: %w", orgID, err)
}
for _, t := range tpls {
if t.Name == template {
tpl = t
break orgLoop
}
}
}
}
if tpl.ID == uuid.Nil {
return tpl, xerrors.Errorf("could not find template %q in any organization", template)
}
return tpl, nil
}
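`parseTemplate` above first tries the argument as a UUID and only then falls back to a name search across the caller's organizations. A self-contained sketch of that ID-or-name dispatch, where the regexp check and map lookup are illustrative stand-ins for `uuid.Parse` and `TemplatesByOrganization`:

```go
package main

import (
	"fmt"
	"regexp"
)

// uuidRE loosely matches the canonical 8-4-4-4-12 UUID form, standing in
// for the uuid.Parse call in parseTemplate.
var uuidRE = regexp.MustCompile(`^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$`)

// resolveTemplate returns the template ID for an argument that is either an
// ID already or a template name known in one of the caller's organizations.
func resolveTemplate(arg string, idByName map[string]string) (string, error) {
	if uuidRE.MatchString(arg) {
		return arg, nil // already an ID; a real client would fetch it directly
	}
	if id, ok := idByName[arg]; ok {
		return id, nil
	}
	return "", fmt.Errorf("could not find template %q in any organization", arg)
}

func main() {
	templates := map[string]string{"docker": "0f0e0d0c-0b0a-0908-0706-050403020100"}
	fmt.Println(resolveTemplate("docker", templates))
	_, err := resolveTemplate("doesnotexist", templates)
	fmt.Println(err)
}
```

The error string mirrors the one the tests below assert on.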
+50
@@ -91,6 +91,56 @@ func TestScaleTestWorkspaceTraffic(t *testing.T) {
require.ErrorContains(t, err, "no scaletest workspaces exist")
}
// This test just validates that the CLI command accepts its known arguments.
func TestScaleTestWorkspaceTraffic_Template(t *testing.T) {
t.Parallel()
ctx, cancelFunc := context.WithTimeout(context.Background(), testutil.WaitMedium)
defer cancelFunc()
log := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
client := coderdtest.New(t, &coderdtest.Options{
Logger: &log,
})
_ = coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, "exp", "scaletest", "workspace-traffic",
"--template", "doesnotexist",
)
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.WithContext(ctx).Run()
require.ErrorContains(t, err, "could not find template \"doesnotexist\" in any organization")
}
// This test just validates that the CLI command accepts its known arguments.
func TestScaleTestCleanup_Template(t *testing.T) {
t.Parallel()
ctx, cancelFunc := context.WithTimeout(context.Background(), testutil.WaitMedium)
defer cancelFunc()
log := slogtest.Make(t, &slogtest.Options{IgnoreErrors: true})
client := coderdtest.New(t, &coderdtest.Options{
Logger: &log,
})
_ = coderdtest.CreateFirstUser(t, client)
inv, root := clitest.New(t, "exp", "scaletest", "cleanup",
"--template", "doesnotexist",
)
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t)
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
err := inv.WithContext(ctx).Run()
require.ErrorContains(t, err, "could not find template \"doesnotexist\" in any organization")
}
// This test just validates that the CLI command accepts its known arguments.
func TestScaleTestDashboard(t *testing.T) {
t.Parallel()
+1 -2
@@ -2,7 +2,6 @@ package cli
import (
"encoding/json"
-"os/signal"
"golang.org/x/xerrors"
@@ -63,7 +62,7 @@ fi
Handler: func(inv *clibase.Invocation) error {
ctx := inv.Context()
-ctx, stop := signal.NotifyContext(ctx, InterruptSignals...)
+ctx, stop := inv.SignalNotifyContext(ctx, InterruptSignals...)
defer stop()
client, err := r.createAgentClient()
+1 -2
@@ -4,7 +4,6 @@ import (
"errors"
"fmt"
"net/http"
-"os/signal"
"time"
"golang.org/x/xerrors"
@@ -26,7 +25,7 @@ func (r *RootCmd) gitAskpass() *clibase.Cmd {
Handler: func(inv *clibase.Invocation) error {
ctx := inv.Context()
-ctx, stop := signal.NotifyContext(ctx, InterruptSignals...)
+ctx, stop := inv.SignalNotifyContext(ctx, InterruptSignals...)
defer stop()
user, host, err := gitauth.ParseAskpass(inv.Args[0])
+1 -2
@@ -8,7 +8,6 @@ import (
"io"
"os"
"os/exec"
-"os/signal"
"path/filepath"
"strings"
@@ -30,7 +29,7 @@ func (r *RootCmd) gitssh() *clibase.Cmd {
// Catch interrupt signals to ensure the temporary private
// key file is cleaned up on most cases.
-ctx, stop := signal.NotifyContext(ctx, InterruptSignals...)
+ctx, stop := inv.SignalNotifyContext(ctx, InterruptSignals...)
defer stop()
// Early check so errors are reported immediately.
+11 -17
@@ -16,7 +16,6 @@ import (
"testing"
"github.com/gliderlabs/ssh"
-"github.com/google/uuid"
"github.com/stretchr/testify/require"
gossh "golang.org/x/crypto/ssh"
@@ -24,9 +23,10 @@ import (
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
+"github.com/coder/coder/v2/coderd/database"
+"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/agentsdk"
-"github.com/coder/coder/v2/provisioner/echo"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
@@ -34,7 +34,7 @@ import (
func prepareTestGitSSH(ctx context.Context, t *testing.T) (*agentsdk.Client, string, gossh.PublicKey) {
t.Helper()
-client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
+client, db := coderdtest.NewWithDatabase(t, nil)
user := coderdtest.CreateFirstUser(t, client)
ctx, cancel := context.WithCancel(ctx)
@@ -48,25 +48,19 @@ func prepareTestGitSSH(ctx context.Context, t *testing.T) (*agentsdk.Client, str
require.NoError(t, err)
// setup template
-agentToken := uuid.NewString()
-version := coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, &echo.Responses{
-Parse: echo.ParseComplete,
-ProvisionPlan: echo.PlanComplete,
-ProvisionApply: echo.ProvisionApplyWithAgent(agentToken),
-})
-template := coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
-coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
-workspace := coderdtest.CreateWorkspace(t, client, user.OrganizationID, template.ID)
-coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
+r := dbfake.WorkspaceBuild(t, db, database.Workspace{
+OrganizationID: user.OrganizationID,
+OwnerID: user.UserID,
+}).WithAgent().Do()
// start workspace agent
agentClient := agentsdk.New(client.URL)
-agentClient.SetSessionToken(agentToken)
-_ = agenttest.New(t, client.URL, agentToken, func(o *agent.Options) {
+agentClient.SetSessionToken(r.AgentToken)
+_ = agenttest.New(t, client.URL, r.AgentToken, func(o *agent.Options) {
o.Client = agentClient
})
-_ = coderdtest.AwaitWorkspaceAgents(t, client, workspace.ID)
-return agentClient, agentToken, pubkey
+_ = coderdtest.AwaitWorkspaceAgents(t, client, r.Workspace.ID)
+return agentClient, r.AgentToken, pubkey
}
func serveSSHForGitSSH(t *testing.T, handler func(ssh.Session), pubkeys ...gossh.PublicKey) *net.TCPAddr {
+37 -73
@@ -1,19 +1,17 @@
package cli
import (
"context"
"fmt"
"strconv"
"time"
"github.com/google/uuid"
"github.com/coder/pretty"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/clibase"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/coderd/schedule/cron"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/pretty"
)
// workspaceListRow is the type provided to the OutputFormatter. This is a bit
@@ -31,57 +29,42 @@ type workspaceListRow struct {
LastBuilt string `json:"-" table:"last built"`
Outdated bool `json:"-" table:"outdated"`
StartsAt string `json:"-" table:"starts at"`
StartsNext string `json:"-" table:"starts next"`
StopsAfter string `json:"-" table:"stops after"`
StopsNext string `json:"-" table:"stops next"`
DailyCost string `json:"-" table:"daily cost"`
}
func workspaceListRowFromWorkspace(now time.Time, usersByID map[uuid.UUID]codersdk.User, workspace codersdk.Workspace) workspaceListRow {
func workspaceListRowFromWorkspace(now time.Time, workspace codersdk.Workspace) workspaceListRow {
status := codersdk.WorkspaceDisplayStatus(workspace.LatestBuild.Job.Status, workspace.LatestBuild.Transition)
lastBuilt := now.UTC().Sub(workspace.LatestBuild.Job.CreatedAt).Truncate(time.Second)
autostartDisplay := "-"
if !ptr.NilOrEmpty(workspace.AutostartSchedule) {
if sched, err := cron.Weekly(*workspace.AutostartSchedule); err == nil {
autostartDisplay = fmt.Sprintf("%s %s (%s)", sched.Time(), sched.DaysOfWeek(), sched.Location())
}
}
autostopDisplay := "-"
if !ptr.NilOrZero(workspace.TTLMillis) {
dur := time.Duration(*workspace.TTLMillis) * time.Millisecond
autostopDisplay = durationDisplay(dur)
if !workspace.LatestBuild.Deadline.IsZero() && workspace.LatestBuild.Deadline.Time.After(now) && status == "Running" {
remaining := time.Until(workspace.LatestBuild.Deadline.Time)
autostopDisplay = fmt.Sprintf("%s (%s)", autostopDisplay, relative(remaining))
}
}
schedRow := scheduleListRowFromWorkspace(now, workspace)
healthy := ""
if status == "Starting" || status == "Started" {
healthy = strconv.FormatBool(workspace.Health.Healthy)
}
user := usersByID[workspace.OwnerID]
return workspaceListRow{
Workspace: workspace,
WorkspaceName: user.Username + "/" + workspace.Name,
WorkspaceName: workspace.OwnerName + "/" + workspace.Name,
Template: workspace.TemplateName,
Status: status,
Healthy: healthy,
LastBuilt: durationDisplay(lastBuilt),
Outdated: workspace.Outdated,
StartsAt: autostartDisplay,
StopsAfter: autostopDisplay,
StartsAt: schedRow.StartsAt,
StartsNext: schedRow.StartsNext,
StopsAfter: schedRow.StopsAfter,
StopsNext: schedRow.StopsNext,
DailyCost: strconv.Itoa(int(workspace.LatestBuild.DailyCost)),
}
}
func (r *RootCmd) list() *clibase.Cmd {
var (
all bool
defaultQuery = "owner:me"
searchQuery string
displayWorkspaces []workspaceListRow
formatter = cliui.NewOutputFormatter(
filter cliui.WorkspaceFilter
formatter = cliui.NewOutputFormatter(
cliui.TableFormat(
[]workspaceListRow{},
[]string{
@@ -109,18 +92,12 @@ func (r *RootCmd) list() *clibase.Cmd {
r.InitClient(client),
),
Handler: func(inv *clibase.Invocation) error {
filter := codersdk.WorkspaceFilter{
FilterQuery: searchQuery,
}
if all && searchQuery == defaultQuery {
filter.FilterQuery = ""
}
res, err := client.Workspaces(inv.Context(), filter)
res, err := queryConvertWorkspaces(inv.Context(), client, filter.Filter(), workspaceListRowFromWorkspace)
if err != nil {
return err
}
if len(res.Workspaces) == 0 {
if len(res) == 0 {
pretty.Fprintf(inv.Stderr, cliui.DefaultStyles.Prompt, "No workspaces found! Create one:\n")
_, _ = fmt.Fprintln(inv.Stderr)
_, _ = fmt.Fprintln(inv.Stderr, " "+pretty.Sprint(cliui.DefaultStyles.Code, "coder create <name>"))
@@ -128,23 +105,7 @@ func (r *RootCmd) list() *clibase.Cmd {
return nil
}
userRes, err := client.Users(inv.Context(), codersdk.UsersRequest{})
if err != nil {
return err
}
usersByID := map[uuid.UUID]codersdk.User{}
for _, user := range userRes.Users {
usersByID[user.ID] = user
}
now := time.Now()
displayWorkspaces = make([]workspaceListRow, len(res.Workspaces))
for i, workspace := range res.Workspaces {
displayWorkspaces[i] = workspaceListRowFromWorkspace(now, usersByID, workspace)
}
out, err := formatter.Format(inv.Context(), displayWorkspaces)
out, err := formatter.Format(inv.Context(), res)
if err != nil {
return err
}
@@ -153,22 +114,25 @@ func (r *RootCmd) list() *clibase.Cmd {
return err
},
}
cmd.Options = clibase.OptionSet{
{
Flag: "all",
FlagShorthand: "a",
Description: "Specifies whether all workspaces will be listed or not.",
Value: clibase.BoolOf(&all),
},
{
Flag: "search",
Description: "Search for a workspace with a query.",
Default: defaultQuery,
Value: clibase.StringOf(&searchQuery),
},
}
filter.AttachOptions(&cmd.Options)
formatter.AttachOptions(&cmd.Options)
return cmd
}
// queryConvertWorkspaces is a helper function for converting
// codersdk.Workspaces to a different type.
// It's used by the list command to convert workspaces to
// workspaceListRow, and by the schedule command to
// convert workspaces to scheduleListRow.
func queryConvertWorkspaces[T any](ctx context.Context, client *codersdk.Client, filter codersdk.WorkspaceFilter, convertF func(time.Time, codersdk.Workspace) T) ([]T, error) {
var empty []T
workspaces, err := client.Workspaces(ctx, filter)
if err != nil {
return empty, xerrors.Errorf("query workspaces: %w", err)
}
converted := make([]T, len(workspaces.Workspaces))
for i, workspace := range workspaces.Workspaces {
converted[i] = convertF(time.Now(), workspace)
}
return converted, nil
}
@@ -11,6 +11,8 @@ import (
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
@@ -20,14 +22,15 @@ func TestList(t *testing.T) {
t.Parallel()
t.Run("Single", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
client, db := coderdtest.NewWithDatabase(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, member, owner.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
member, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
// setup template
r := dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: owner.OrganizationID,
OwnerID: memberUser.ID,
}).WithAgent().Do()
inv, root := clitest.New(t, "ls")
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
@@ -40,7 +43,7 @@ func TestList(t *testing.T) {
assert.NoError(t, errC)
close(done)
}()
pty.ExpectMatch(workspace.Name)
pty.ExpectMatch(r.Workspace.Name)
pty.ExpectMatch("Started")
cancelFunc()
<-done
@@ -48,14 +51,13 @@ func TestList(t *testing.T) {
t.Run("JSON", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
client, db := coderdtest.NewWithDatabase(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, member, owner.OrganizationID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
member, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
_ = dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: owner.OrganizationID,
OwnerID: memberUser.ID,
}).WithAgent().Do()
inv, root := clitest.New(t, "list", "--output=json")
clitest.SetupConfig(t, member, root)
@@ -68,8 +70,8 @@ func TestList(t *testing.T) {
err := inv.WithContext(ctx).Run()
require.NoError(t, err)
var templates []codersdk.Workspace
require.NoError(t, json.Unmarshal(out.Bytes(), &templates))
require.Len(t, templates, 1)
var workspaces []codersdk.Workspace
require.NoError(t, json.Unmarshal(out.Bytes(), &workspaces))
require.Len(t, workspaces, 1)
})
}
@@ -147,6 +147,10 @@ func (r *RootCmd) login() *clibase.Cmd {
rawURL = inv.Args[0]
}
if rawURL == "" {
return xerrors.Errorf("no url argument provided")
}
if !strings.HasPrefix(rawURL, "http://") && !strings.HasPrefix(rawURL, "https://") {
scheme := "https"
if strings.HasPrefix(rawURL, "localhost") {
@@ -3,6 +3,8 @@ package cli_test
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"runtime"
"testing"
@@ -36,6 +38,39 @@ func TestLogin(t *testing.T) {
require.ErrorContains(t, err, errMsg)
})
t.Run("InitialUserNonCoderURLFail", func(t *testing.T) {
t.Parallel()
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)
w.Write([]byte("Not Found"))
}))
defer ts.Close()
badLoginURL := ts.URL
root, _ := clitest.New(t, "login", badLoginURL)
err := root.Run()
errMsg := fmt.Sprintf("Failed to check server %q for first user, is the URL correct and is coder accessible from your browser?", badLoginURL)
require.ErrorContains(t, err, errMsg)
})
t.Run("InitialUserNonCoderURLSuccess", func(t *testing.T) {
t.Parallel()
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("X-Coder-Build-Version", "something")
w.WriteHeader(http.StatusNotFound)
w.Write([]byte("Not Found"))
}))
defer ts.Close()
badLoginURL := ts.URL
root, _ := clitest.New(t, "login", badLoginURL)
err := root.Run()
// this means we passed the check for a valid coder server
require.ErrorContains(t, err, "the initial user cannot be created in non-interactive mode")
})
t.Run("InitialUserTTY", func(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, nil)
@@ -20,6 +20,13 @@ type workspaceParameterFlags struct {
richParameterFile string
richParameters []string
promptRichParameters bool
}
func (wpf *workspaceParameterFlags) allOptions() []clibase.Option {
options := append(wpf.cliBuildOptions(), wpf.cliParameters()...)
return append(options, wpf.alwaysPrompt())
}
func (wpf *workspaceParameterFlags) cliBuildOptions() []clibase.Option {
@@ -55,6 +62,14 @@ func (wpf *workspaceParameterFlags) cliParameters() []clibase.Option {
}
}
func (wpf *workspaceParameterFlags) alwaysPrompt() clibase.Option {
return clibase.Option{
Flag: "always-prompt",
Description: "Always prompt all parameters. Does not pull parameter values from existing workspace.",
Value: clibase.BoolOf(&wpf.promptRichParameters),
}
}
func asWorkspaceBuildParameters(nameValuePairs []string) ([]codersdk.WorkspaceBuildParameter, error) {
var params []codersdk.WorkspaceBuildParameter
for _, nameValue := range nameValuePairs {
@@ -2,14 +2,15 @@ package cli
import (
"fmt"
"strings"
"golang.org/x/xerrors"
"github.com/coder/pretty"
"github.com/coder/coder/v2/cli/clibase"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/cli/cliutil/levenshtein"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/pretty"
)
type WorkspaceCLIAction int
@@ -22,7 +23,8 @@ const (
)
type ParameterResolver struct {
lastBuildParameters []codersdk.WorkspaceBuildParameter
lastBuildParameters []codersdk.WorkspaceBuildParameter
sourceWorkspaceParameters []codersdk.WorkspaceBuildParameter
richParameters []codersdk.WorkspaceBuildParameter
richParametersFile map[string]string
@@ -37,6 +39,11 @@ func (pr *ParameterResolver) WithLastBuildParameters(params []codersdk.Workspace
return pr
}
func (pr *ParameterResolver) WithSourceWorkspaceParameters(params []codersdk.WorkspaceBuildParameter) *ParameterResolver {
pr.sourceWorkspaceParameters = params
return pr
}
func (pr *ParameterResolver) WithRichParameters(params []codersdk.WorkspaceBuildParameter) *ParameterResolver {
pr.richParameters = params
return pr
@@ -68,6 +75,7 @@ func (pr *ParameterResolver) Resolve(inv *clibase.Invocation, action WorkspaceCL
staged = pr.resolveWithParametersMapFile(staged)
staged = pr.resolveWithCommandLineOrEnv(staged)
staged = pr.resolveWithSourceBuildParameters(staged, templateVersionParameters)
staged = pr.resolveWithLastBuildParameters(staged, templateVersionParameters)
if err = pr.verifyConstraints(staged, action, templateVersionParameters); err != nil {
return nil, err
@@ -159,11 +167,35 @@ next:
return resolved
}
func (pr *ParameterResolver) resolveWithSourceBuildParameters(resolved []codersdk.WorkspaceBuildParameter, templateVersionParameters []codersdk.TemplateVersionParameter) []codersdk.WorkspaceBuildParameter {
next:
for _, buildParameter := range pr.sourceWorkspaceParameters {
tvp := findTemplateVersionParameter(buildParameter, templateVersionParameters)
if tvp == nil {
continue // it looks like this parameter is not present anymore
}
if tvp.Ephemeral {
continue // ephemeral parameters should not be passed to consecutive builds
}
for i, r := range resolved {
if r.Name == buildParameter.Name {
resolved[i].Value = buildParameter.Value
continue next
}
}
resolved = append(resolved, buildParameter)
}
return resolved
}
func (pr *ParameterResolver) verifyConstraints(resolved []codersdk.WorkspaceBuildParameter, action WorkspaceCLIAction, templateVersionParameters []codersdk.TemplateVersionParameter) error {
for _, r := range resolved {
tvp := findTemplateVersionParameter(r, templateVersionParameters)
if tvp == nil {
return xerrors.Errorf("parameter %q is not present in the template", r.Name)
return templateVersionParametersNotFound(r.Name, templateVersionParameters)
}
if tvp.Ephemeral && !pr.promptBuildOptions && findWorkspaceBuildParameter(tvp.Name, pr.buildOptions) == nil {
@@ -194,7 +226,7 @@ func (pr *ParameterResolver) resolveWithInput(resolved []codersdk.WorkspaceBuild
(action == WorkspaceUpdate && promptParameterOption) ||
(action == WorkspaceUpdate && tvp.Mutable && tvp.Required) ||
(action == WorkspaceUpdate && !tvp.Mutable && firstTimeUse) ||
(action == WorkspaceUpdate && tvp.Mutable && !tvp.Ephemeral && pr.promptRichParameters) {
(tvp.Mutable && !tvp.Ephemeral && pr.promptRichParameters) {
parameterValue, err := cliui.RichParameter(inv, tvp)
if err != nil {
return nil, err
@@ -254,3 +286,19 @@ func isValidTemplateParameterOption(buildParameter codersdk.WorkspaceBuildParame
}
return false
}
func templateVersionParametersNotFound(unknown string, params []codersdk.TemplateVersionParameter) error {
var sb strings.Builder
_, _ = sb.WriteString(fmt.Sprintf("parameter %q is not present in the template.", unknown))
// Going with a fairly generous edit distance
maxDist := len(unknown) / 2
var paramNames []string
for _, p := range params {
paramNames = append(paramNames, p.Name)
}
matches := levenshtein.Matches(unknown, maxDist, paramNames...)
if len(matches) > 0 {
_, _ = sb.WriteString(fmt.Sprintf("\nDid you mean: %s", strings.Join(matches, ", ")))
}
return xerrors.Errorf(sb.String())
}
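The `templateVersionParametersNotFound` error above suggests near-miss parameter names within an edit distance of `len(unknown)/2`. A self-contained sketch of that suggestion logic (assuming a classic dynamic-programming Levenshtein distance; the real `cliutil/levenshtein.Matches` helper may filter or rank differently):

```go
package main

import (
	"fmt"
	"strings"
)

// distance computes the Levenshtein edit distance between a and b using
// the standard two-row dynamic-programming formulation.
func distance(a, b string) int {
	prev := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(a); i++ {
		cur := make([]int, len(b)+1)
		cur[0] = i
		for j := 1; j <= len(b); j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			cur[j] = min(min(cur[j-1]+1, prev[j]+1), prev[j-1]+cost)
		}
		prev = cur
	}
	return prev[len(b)]
}

func min(x, y int) int {
	if x < y {
		return x
	}
	return y
}

// suggest returns the candidates within maxDist edits of unknown,
// mirroring the "Did you mean" behavior in the diff above.
func suggest(unknown string, maxDist int, candidates ...string) []string {
	var matches []string
	for _, c := range candidates {
		if distance(unknown, c) <= maxDist {
			matches = append(matches, c)
		}
	}
	return matches
}

func main() {
	unknown := "regon"
	got := suggest(unknown, len(unknown)/2, "region", "cpu", "memory")
	fmt.Println(strings.Join(got, ", ")) // region
}
```

The `len(unknown)/2` threshold is the "fairly generous edit distance" the comment in the diff refers to: short typos still match, while unrelated names stay out of the suggestion list.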
@@ -40,15 +40,16 @@ func (r *RootCmd) ping() *clibase.Cmd {
workspaceName := inv.Args[0]
_, workspaceAgent, err := getWorkspaceAndAgent(
ctx, inv, client,
false, // Do not autostart for a ping.
codersdk.Me, workspaceName,
)
if err != nil {
return err
}
var logger slog.Logger
logger := inv.Logger
if r.verbose {
logger = slog.Make(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelDebug)
logger = logger.AppendSinks(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelDebug)
}
if r.disableDirect {
@@ -19,7 +19,7 @@ func TestPing(t *testing.T) {
t.Run("OK", func(t *testing.T) {
t.Parallel()
client, workspace, agentToken := setupWorkspaceForAgent(t, nil)
client, workspace, agentToken := setupWorkspaceForAgent(t)
inv, root := clitest.New(t, "ping", workspace.Name)
clitest.SetupConfig(t, client, root)
pty := ptytest.New(t)
@@ -12,7 +12,6 @@ import (
"sync"
"syscall"
"github.com/pion/udp"
"golang.org/x/xerrors"
"cdr.dev/slog"
@@ -26,8 +25,9 @@ import (
func (r *RootCmd) portForward() *clibase.Cmd {
var (
tcpForwards []string // <port>:<port>
udpForwards []string // <port>:<port>
tcpForwards []string // <port>:<port>
udpForwards []string // <port>:<port>
disableAutostart bool
)
client := new(codersdk.Client)
cmd := &clibase.Cmd{
@@ -76,7 +76,7 @@ func (r *RootCmd) portForward() *clibase.Cmd {
return xerrors.New("no port-forwards requested")
}
workspace, workspaceAgent, err := getWorkspaceAndAgent(ctx, inv, client, codersdk.Me, inv.Args[0])
workspace, workspaceAgent, err := getWorkspaceAndAgent(ctx, inv, client, !disableAutostart, codersdk.Me, inv.Args[0])
if err != nil {
return err
}
@@ -98,9 +98,9 @@ func (r *RootCmd) portForward() *clibase.Cmd {
return xerrors.Errorf("await agent: %w", err)
}
var logger slog.Logger
logger := inv.Logger
if r.verbose {
logger = slog.Make(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelDebug)
logger = logger.AppendSinks(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelDebug)
}
if r.disableDirect {
@@ -120,6 +120,7 @@ func (r *RootCmd) portForward() *clibase.Cmd {
wg = new(sync.WaitGroup)
listeners = make([]net.Listener, len(specs))
closeAllListeners = func() {
logger.Debug(ctx, "closing all listeners")
for _, l := range listeners {
if l == nil {
continue
@@ -131,8 +132,9 @@ func (r *RootCmd) portForward() *clibase.Cmd {
defer closeAllListeners()
for i, spec := range specs {
l, err := listenAndPortForward(ctx, inv, conn, wg, spec)
l, err := listenAndPortForward(ctx, inv, conn, wg, spec, logger)
if err != nil {
logger.Error(ctx, "failed to listen", slog.F("spec", spec), slog.Error(err))
return err
}
listeners[i] = l
@@ -150,8 +152,10 @@ func (r *RootCmd) portForward() *clibase.Cmd {
select {
case <-ctx.Done():
logger.Debug(ctx, "command context expired waiting for signal", slog.Error(ctx.Err()))
closeErr = ctx.Err()
case <-sigs:
case sig := <-sigs:
logger.Debug(ctx, "received signal", slog.F("signal", sig))
_, _ = fmt.Fprintln(inv.Stderr, "\nReceived signal, closing all listeners and active connections")
}
@@ -160,6 +164,7 @@ func (r *RootCmd) portForward() *clibase.Cmd {
}()
conn.AwaitReachable(ctx)
logger.Debug(ctx, "ready to accept connections to forward")
_, _ = fmt.Fprintln(inv.Stderr, "Ready!")
wg.Wait()
return closeErr
@@ -180,44 +185,28 @@ func (r *RootCmd) portForward() *clibase.Cmd {
Description: "Forward UDP port(s) from the workspace to the local machine. The UDP connection has TCP-like semantics to support stateful UDP protocols.",
Value: clibase.StringArrayOf(&udpForwards),
},
sshDisableAutostartOption(clibase.BoolOf(&disableAutostart)),
}
return cmd
}
func listenAndPortForward(ctx context.Context, inv *clibase.Invocation, conn *codersdk.WorkspaceAgentConn, wg *sync.WaitGroup, spec portForwardSpec) (net.Listener, error) {
func listenAndPortForward(
ctx context.Context,
inv *clibase.Invocation,
conn *codersdk.WorkspaceAgentConn,
wg *sync.WaitGroup,
spec portForwardSpec,
logger slog.Logger,
) (net.Listener, error) {
logger = logger.With(slog.F("network", spec.listenNetwork), slog.F("address", spec.listenAddress))
_, _ = fmt.Fprintf(inv.Stderr, "Forwarding '%v://%v' locally to '%v://%v' in the workspace\n", spec.listenNetwork, spec.listenAddress, spec.dialNetwork, spec.dialAddress)
var (
l net.Listener
err error
)
switch spec.listenNetwork {
case "tcp":
l, err = net.Listen(spec.listenNetwork, spec.listenAddress)
case "udp":
var host, port string
host, port, err = net.SplitHostPort(spec.listenAddress)
if err != nil {
return nil, xerrors.Errorf("split %q: %w", spec.listenAddress, err)
}
var portInt int
portInt, err = strconv.Atoi(port)
if err != nil {
return nil, xerrors.Errorf("parse port %v from %q as int: %w", port, spec.listenAddress, err)
}
l, err = udp.Listen(spec.listenNetwork, &net.UDPAddr{
IP: net.ParseIP(host),
Port: portInt,
})
default:
return nil, xerrors.Errorf("unknown listen network %q", spec.listenNetwork)
}
l, err := inv.Net.Listen(spec.listenNetwork, spec.listenAddress)
if err != nil {
return nil, xerrors.Errorf("listen '%v://%v': %w", spec.listenNetwork, spec.listenAddress, err)
}
logger.Debug(ctx, "listening")
wg.Add(1)
go func(spec portForwardSpec) {
@@ -227,12 +216,14 @@ func listenAndPortForward(ctx context.Context, inv *clibase.Invocation, conn *co
if err != nil {
// Silently ignore net.ErrClosed errors.
if xerrors.Is(err, net.ErrClosed) {
logger.Debug(ctx, "listener closed")
return
}
_, _ = fmt.Fprintf(inv.Stderr, "Error accepting connection from '%v://%v': %v\n", spec.listenNetwork, spec.listenAddress, err)
_, _ = fmt.Fprintln(inv.Stderr, "Killing listener")
return
}
logger.Debug(ctx, "accepted connection", slog.F("remote_addr", netConn.RemoteAddr()))
go func(netConn net.Conn) {
defer netConn.Close()
@@ -242,8 +233,10 @@ func listenAndPortForward(ctx context.Context, inv *clibase.Invocation, conn *co
return
}
defer remoteConn.Close()
logger.Debug(ctx, "dialed remote", slog.F("remote_addr", netConn.RemoteAddr()))
agentssh.Bicopy(ctx, netConn, remoteConn)
logger.Debug(ctx, "connection closing", slog.F("remote_addr", netConn.RemoteAddr()))
}(netConn)
}
}(spec)
@@ -13,13 +13,15 @@ import (
"github.com/pion/udp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/provisioner/echo"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
@@ -44,47 +46,35 @@ func TestPortForward_None(t *testing.T) {
pty.ExpectMatch("port-forward <workspace>")
}
//nolint:tparallel,paralleltest // Subtests require setup that must not be done in parallel.
func TestPortForward(t *testing.T) {
t.Parallel()
cases := []struct {
name string
network string
// The flag to pass to `coder port-forward X` to port-forward this type
// of connection. Has two format args (both strings), the first is the
// local address and the second is the remote address.
flag string
// The flag(s) to pass to `coder port-forward X` to port-forward this type
// of connection. Has one format arg (string) for the remote address.
flag []string
// setupRemote creates a "remote" listener to emulate a service in the
// workspace.
setupRemote func(t *testing.T) net.Listener
// setupLocal returns an available port that the
// port-forward command will listen on "locally". Returns the address
// you pass to net.Dial, and the port/path you pass to `coder
// port-forward`.
setupLocal func(t *testing.T) (string, string)
// the local address(es) to "dial"
localAddress []string
}{
{
name: "TCP",
network: "tcp",
flag: "--tcp=%v:%v",
flag: []string{"--tcp=5555:%v", "--tcp=6666:%v"},
setupRemote: func(t *testing.T) net.Listener {
l, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err, "create TCP listener")
return l
},
setupLocal: func(t *testing.T) (string, string) {
l, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err, "create TCP listener to generate random port")
defer l.Close()
_, port, err := net.SplitHostPort(l.Addr().String())
require.NoErrorf(t, err, "split TCP address %q", l.Addr().String())
return l.Addr().String(), port
},
localAddress: []string{"127.0.0.1:5555", "127.0.0.1:6666"},
},
{
name: "UDP",
network: "udp",
flag: "--udp=%v:%v",
flag: []string{"--udp=7777:%v", "--udp=8888:%v"},
setupRemote: func(t *testing.T) net.Listener {
addr := net.UDPAddr{
IP: net.ParseIP("127.0.0.1"),
@@ -94,61 +84,37 @@ func TestPortForward(t *testing.T) {
require.NoError(t, err, "create UDP listener")
return l
},
setupLocal: func(t *testing.T) (string, string) {
addr := net.UDPAddr{
IP: net.ParseIP("127.0.0.1"),
Port: 0,
}
l, err := udp.Listen("udp", &addr)
require.NoError(t, err, "create UDP listener to generate random port")
defer l.Close()
_, port, err := net.SplitHostPort(l.Addr().String())
require.NoErrorf(t, err, "split UDP address %q", l.Addr().String())
return l.Addr().String(), port
},
localAddress: []string{"127.0.0.1:7777", "127.0.0.1:8888"},
},
{
name: "TCPWithAddress",
network: "tcp",
flag: "--tcp=%v:%v",
network: "tcp",
flag: []string{"--tcp=10.10.10.99:9999:%v", "--tcp=10.10.10.10:1010:%v"},
setupRemote: func(t *testing.T) net.Listener {
l, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err, "create TCP listener")
return l
},
setupLocal: func(t *testing.T) (string, string) {
l, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err, "create TCP listener to generate random port")
defer l.Close()
_, port, err := net.SplitHostPort(l.Addr().String())
require.NoErrorf(t, err, "split TCP address %q", l.Addr().String())
return l.Addr().String(), fmt.Sprint("0.0.0.0:", port)
},
localAddress: []string{"10.10.10.99:9999", "10.10.10.10:1010"},
},
}
// Setup agent once to be shared between test-cases (avoid expensive
// non-parallel setup).
var (
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
admin = coderdtest.CreateFirstUser(t, client)
member, _ = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
workspace = runAgent(t, client, member)
client, db = coderdtest.NewWithDatabase(t, nil)
admin = coderdtest.CreateFirstUser(t, client)
member, memberUser = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
workspace = runAgent(t, client, memberUser.ID, db)
)
for _, c := range cases {
c := c
// Delay parallel tests here because setupLocal reserves
// a free open port which is not guaranteed to be free
// between the listener closing and port-forward ready.
t.Run(c.name+"_OnePort", func(t *testing.T) {
t.Parallel()
p1 := setupTestListener(t, c.setupRemote(t))
// Create a flag that forwards from local to listener 1.
localAddress, localFlag := c.setupLocal(t)
flag := fmt.Sprintf(c.flag, localFlag, p1)
flag := fmt.Sprintf(c.flag[0], p1)
// Launch port-forward in a goroutine so we can start dialing
// the "local" listener.
@@ -158,23 +124,27 @@ func TestPortForward(t *testing.T) {
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
iNet := newInProcNet()
inv.Net = iNet
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
errC := make(chan error)
go func() {
errC <- inv.WithContext(ctx).Run()
err := inv.WithContext(ctx).Run()
t.Logf("command complete; err=%s", err.Error())
errC <- err
}()
pty.ExpectMatchContext(ctx, "Ready!")
t.Parallel() // Port is reserved, enable parallel execution.
// Open two connections simultaneously and test them out of
// sync.
d := net.Dialer{Timeout: testutil.WaitShort}
c1, err := d.DialContext(ctx, c.network, localAddress)
dialCtx, dialCtxCancel := context.WithTimeout(ctx, testutil.WaitShort)
defer dialCtxCancel()
c1, err := iNet.dial(dialCtx, addr{c.network, c.localAddress[0]})
require.NoError(t, err, "open connection 1 to 'local' listener")
defer c1.Close()
c2, err := d.DialContext(ctx, c.network, localAddress)
c2, err := iNet.dial(dialCtx, addr{c.network, c.localAddress[0]})
require.NoError(t, err, "open connection 2 to 'local' listener")
defer c2.Close()
testDial(t, c2)
@@ -186,16 +156,15 @@ func TestPortForward(t *testing.T) {
})
t.Run(c.name+"_TwoPorts", func(t *testing.T) {
t.Parallel()
var (
p1 = setupTestListener(t, c.setupRemote(t))
p2 = setupTestListener(t, c.setupRemote(t))
)
// Create a flags for listener 1 and listener 2.
localAddress1, localFlag1 := c.setupLocal(t)
localAddress2, localFlag2 := c.setupLocal(t)
flag1 := fmt.Sprintf(c.flag, localFlag1, p1)
flag2 := fmt.Sprintf(c.flag, localFlag2, p2)
flag1 := fmt.Sprintf(c.flag[0], p1)
flag2 := fmt.Sprintf(c.flag[1], p2)
// Launch port-forward in a goroutine so we can start dialing
// the "local" listeners.
@@ -205,6 +174,9 @@ func TestPortForward(t *testing.T) {
inv.Stdin = pty.Input()
inv.Stdout = pty.Output()
inv.Stderr = pty.Output()
iNet := newInProcNet()
inv.Net = iNet
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
errC := make(chan error)
@@ -213,15 +185,14 @@ func TestPortForward(t *testing.T) {
}()
pty.ExpectMatchContext(ctx, "Ready!")
t.Parallel() // Port is reserved, enable parallel execution.
// Open a connection to both listener 1 and 2 simultaneously and
// then test them out of order.
d := net.Dialer{Timeout: testutil.WaitShort}
c1, err := d.DialContext(ctx, c.network, localAddress1)
dialCtx, dialCtxCancel := context.WithTimeout(ctx, testutil.WaitShort)
defer dialCtxCancel()
c1, err := iNet.dial(dialCtx, addr{c.network, c.localAddress[0]})
require.NoError(t, err, "open connection 1 to 'local' listener 1")
defer c1.Close()
c2, err := d.DialContext(ctx, c.network, localAddress2)
c2, err := iNet.dial(dialCtx, addr{c.network, c.localAddress[1]})
require.NoError(t, err, "open connection 2 to 'local' listener 2")
defer c2.Close()
testDial(t, c2)
@@ -233,8 +204,8 @@ func TestPortForward(t *testing.T) {
})
}
// Test doing TCP and UDP at the same time.
t.Run("All", func(t *testing.T) {
t.Parallel()
var (
dials = []addr{}
flags = []string{}
@@ -244,12 +215,11 @@ func TestPortForward(t *testing.T) {
for _, c := range cases {
p := setupTestListener(t, c.setupRemote(t))
localAddress, localFlag := c.setupLocal(t)
dials = append(dials, addr{
network: c.network,
addr: localAddress,
addr: c.localAddress[0],
})
flags = append(flags, fmt.Sprintf(c.flag, localFlag, p))
flags = append(flags, fmt.Sprintf(c.flag[0], p))
}
// Launch port-forward in a goroutine so we can start dialing
@@ -258,6 +228,9 @@ func TestPortForward(t *testing.T) {
clitest.SetupConfig(t, member, root)
pty := ptytest.New(t).Attach(inv)
inv.Stderr = pty.Output()
iNet := newInProcNet()
inv.Net = iNet
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
errC := make(chan error)
@@ -266,15 +239,14 @@ func TestPortForward(t *testing.T) {
}()
pty.ExpectMatchContext(ctx, "Ready!")
t.Parallel() // Port is reserved, enable parallel execution.
// Open connections to all items in the "dial" array.
var (
d = net.Dialer{Timeout: testutil.WaitShort}
conns = make([]net.Conn, len(dials))
dialCtx, dialCtxCancel = context.WithTimeout(ctx, testutil.WaitShort)
conns = make([]net.Conn, len(dials))
)
defer dialCtxCancel()
for i, a := range dials {
c, err := d.DialContext(ctx, a.network, a.addr)
c, err := iNet.dial(dialCtx, a)
require.NoErrorf(t, err, "open connection %v to 'local' listener %v", i+1, i+1)
t.Cleanup(func() {
_ = c.Close()
@@ -296,35 +268,23 @@ func TestPortForward(t *testing.T) {
// runAgent creates a fake workspace and starts an agent locally for that
// workspace. The agent will be cleaned up on test completion.
// nolint:unused
func runAgent(t *testing.T, adminClient, userClient *codersdk.Client) codersdk.Workspace {
ctx := context.Background()
user, err := userClient.User(ctx, codersdk.Me)
func runAgent(t *testing.T, client *codersdk.Client, owner uuid.UUID, db database.Store) database.Workspace {
user, err := client.User(context.Background(), codersdk.Me)
require.NoError(t, err, "specified user does not exist")
require.Greater(t, len(user.OrganizationIDs), 0, "user has no organizations")
orgID := user.OrganizationIDs[0]
r := dbfake.WorkspaceBuild(t, db, database.Workspace{
OrganizationID: orgID,
OwnerID: owner,
}).WithAgent().Do()
// Setup template
agentToken := uuid.NewString()
version := coderdtest.CreateTemplateVersion(t, adminClient, orgID, &echo.Responses{
Parse: echo.ParseComplete,
ProvisionPlan: echo.PlanComplete,
ProvisionApply: echo.ProvisionApplyWithAgent(agentToken),
})
// Create template and workspace
template := coderdtest.CreateTemplate(t, adminClient, orgID, version.ID)
coderdtest.AwaitTemplateVersionJobCompleted(t, adminClient, version.ID)
workspace := coderdtest.CreateWorkspace(t, userClient, orgID, template.ID)
coderdtest.AwaitWorkspaceBuildJobCompleted(t, adminClient, workspace.LatestBuild.ID)
_ = agenttest.New(t, adminClient.URL, agentToken,
_ = agenttest.New(t, client.URL, r.AgentToken,
func(o *agent.Options) {
o.SSHMaxTimeout = 60 * time.Second
},
)
coderdtest.AwaitWorkspaceAgents(t, adminClient, workspace.ID)
return workspace
coderdtest.AwaitWorkspaceAgents(t, client, r.Workspace.ID)
return r.Workspace
}
// setupTestListener starts accepting connections and echoing a single packet.
@@ -404,3 +364,90 @@ type addr struct {
network string
addr string
}
func (a addr) Network() string {
return a.network
}
func (a addr) Address() string {
return a.addr
}
func (a addr) String() string {
return a.network + "|" + a.addr
}
type inProcNet struct {
sync.Mutex
listeners map[addr]*inProcListener
}
type inProcListener struct {
c chan net.Conn
n *inProcNet
a addr
o sync.Once
}
func newInProcNet() *inProcNet {
return &inProcNet{listeners: make(map[addr]*inProcListener)}
}
func (n *inProcNet) Listen(network, address string) (net.Listener, error) {
a := addr{network, address}
n.Lock()
defer n.Unlock()
if _, ok := n.listeners[a]; ok {
return nil, xerrors.New("busy")
}
l := newInProcListener(n, a)
n.listeners[a] = l
return l, nil
}
func (n *inProcNet) dial(ctx context.Context, a addr) (net.Conn, error) {
n.Lock()
defer n.Unlock()
l, ok := n.listeners[a]
if !ok {
return nil, xerrors.Errorf("nothing listening on %s", a)
}
x, y := net.Pipe()
select {
case <-ctx.Done():
return nil, ctx.Err()
case l.c <- x:
return y, nil
}
}
func newInProcListener(n *inProcNet, a addr) *inProcListener {
return &inProcListener{
c: make(chan net.Conn),
n: n,
a: a,
}
}
func (l *inProcListener) Accept() (net.Conn, error) {
c, ok := <-l.c
if !ok {
return nil, net.ErrClosed
}
return c, nil
}
func (l *inProcListener) Close() error {
l.o.Do(func() {
l.n.Lock()
defer l.n.Unlock()
delete(l.n.listeners, l.a)
close(l.c)
})
return nil
}
func (l *inProcListener) Addr() net.Addr {
return l.a
}
@@ -15,7 +15,7 @@ import (
func TestRename(t *testing.T) {
t.Parallel()
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true, AllowWorkspaceRenames: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, nil)
@@ -2,15 +2,15 @@ package cli
import (
"fmt"
"net/http"
"time"
"golang.org/x/xerrors"
"github.com/coder/pretty"
"github.com/coder/coder/v2/cli/clibase"
"github.com/coder/coder/v2/cli/cliui"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/pretty"
)
func (r *RootCmd) restart() *clibase.Cmd {
@@ -25,7 +25,7 @@ func (r *RootCmd) restart() *clibase.Cmd {
clibase.RequireNArgs(1),
r.InitClient(client),
),
Options: append(parameterFlags.cliBuildOptions(), cliui.SkipPromptOption()),
Options: clibase.OptionSet{cliui.SkipPromptOption()},
Handler: func(inv *clibase.Invocation) error {
ctx := inv.Context()
out := inv.Stdout
@@ -35,30 +35,7 @@ func (r *RootCmd) restart() *clibase.Cmd {
return err
}
lastBuildParameters, err := client.WorkspaceBuildParameters(inv.Context(), workspace.LatestBuild.ID)
if err != nil {
return err
}
template, err := client.Template(inv.Context(), workspace.TemplateID)
if err != nil {
return err
}
buildOptions, err := asWorkspaceBuildParameters(parameterFlags.buildOptions)
if err != nil {
return xerrors.Errorf("can't parse build options: %w", err)
}
buildParameters, err := prepStartWorkspace(inv, client, prepStartWorkspaceArgs{
Action: WorkspaceRestart,
Template: template,
LastBuildParameters: lastBuildParameters,
PromptBuildOptions: parameterFlags.promptBuildOptions,
BuildOptions: buildOptions,
})
startReq, err := buildWorkspaceStartRequest(inv, client, workspace, parameterFlags, WorkspaceRestart)
if err != nil {
return err
}
@@ -77,18 +54,25 @@ func (r *RootCmd) restart() *clibase.Cmd {
if err != nil {
return err
}
err = cliui.WorkspaceBuild(ctx, out, client, build.ID)
if err != nil {
return err
}
build, err = client.CreateWorkspaceBuild(ctx, workspace.ID, codersdk.CreateWorkspaceBuildRequest{
Transition: codersdk.WorkspaceTransitionStart,
RichParameterValues: buildParameters,
})
if err != nil {
build, err = client.CreateWorkspaceBuild(ctx, workspace.ID, startReq)
// It's possible for a workspace build to fail due to the template requiring starting
// workspaces with the active version.
if cerr, ok := codersdk.AsError(err); ok && cerr.StatusCode() == http.StatusForbidden {
_, _ = fmt.Fprintln(inv.Stdout, "Failed to restart with the template version from your last build. Policy may require you to restart with the current active template version.")
build, err = startWorkspace(inv, client, workspace, parameterFlags, WorkspaceUpdate)
if err != nil {
return xerrors.Errorf("start workspace with active template version: %w", err)
}
} else if err != nil {
return err
}
err = cliui.WorkspaceBuild(ctx, out, client, build.ID)
if err != nil {
return err
@@ -101,5 +85,8 @@ func (r *RootCmd) restart() *clibase.Cmd {
return nil
},
}
cmd.Options = append(cmd.Options, parameterFlags.allOptions()...)
return cmd
}
@@ -239,4 +239,55 @@ func TestRestartWithParameters(t *testing.T) {
Value: immutableParameterValue,
})
})
t.Run("AlwaysPrompt", func(t *testing.T) {
t.Parallel()
// Create the workspace
client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
owner := coderdtest.CreateFirstUser(t, client)
member, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
version := coderdtest.CreateTemplateVersion(t, client, owner.OrganizationID, mutableParamsResponse)
coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template := coderdtest.CreateTemplate(t, client, owner.OrganizationID, version.ID)
workspace := coderdtest.CreateWorkspace(t, member, owner.OrganizationID, template.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
cwr.RichParameterValues = []codersdk.WorkspaceBuildParameter{
{
Name: mutableParameterName,
Value: mutableParameterValue,
},
}
})
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
inv, root := clitest.New(t, "restart", workspace.Name, "-y", "--always-prompt")
clitest.SetupConfig(t, member, root)
doneChan := make(chan struct{})
pty := ptytest.New(t).Attach(inv)
go func() {
defer close(doneChan)
err := inv.Run()
assert.NoError(t, err)
}()
// We should be prompted for the parameters again.
newValue := "xyz"
pty.ExpectMatch(mutableParameterName)
pty.WriteLine(newValue)
pty.ExpectMatch("workspace has been restarted")
<-doneChan
// Verify that the updated values are persisted.
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
workspace, err := client.WorkspaceByOwnerAndName(ctx, workspace.OwnerName, workspace.Name, codersdk.WorkspaceOptions{})
require.NoError(t, err)
actualParameters, err := client.WorkspaceBuildParameters(ctx, workspace.LatestBuild.ID)
require.NoError(t, err)
require.Contains(t, actualParameters, codersdk.WorkspaceBuildParameter{
Name: mutableParameterName,
Value: newValue,
})
})
}
@@ -30,7 +30,6 @@ import (
"github.com/coder/pretty"
"cdr.dev/slog"
"github.com/coder/coder/v2/buildinfo"
"github.com/coder/coder/v2/cli/clibase"
"github.com/coder/coder/v2/cli/cliui"
@@ -97,6 +96,7 @@ func (r *RootCmd) Core() []*clibase.Cmd {
r.version(defaultVersionInfo),
// Workspace Commands
r.autoupdate(),
r.configSSH(),
r.create(),
r.deleteWorkspace(),
@@ -136,14 +136,22 @@ func (r *RootCmd) RunMain(subcommands []*clibase.Cmd) {
}
err = cmd.Invoke().WithOS().Run()
if err != nil {
code := 1
var exitErr *exitError
if errors.As(err, &exitErr) {
code = exitErr.code
err = exitErr.err
}
if errors.Is(err, cliui.Canceled) {
//nolint:revive
os.Exit(1)
os.Exit(code)
}
f := prettyErrorFormatter{w: os.Stderr, verbose: r.verbose}
f.format(err)
if err != nil {
f.format(err)
}
//nolint:revive
os.Exit(1)
os.Exit(code)
}
}
@@ -441,21 +449,6 @@ func (r *RootCmd) Command(subcommands []*clibase.Cmd) (*clibase.Cmd, error) {
return cmd, nil
}
type contextKey int
const (
contextKeyLogger contextKey = iota
)
func ContextWithLogger(ctx context.Context, l slog.Logger) context.Context {
return context.WithValue(ctx, contextKeyLogger, l)
}
func LoggerFromContext(ctx context.Context) (slog.Logger, bool) {
l, ok := ctx.Value(contextKeyLogger).(slog.Logger)
return l, ok
}
// RootCmd contains parameters and helpers useful to all commands.
type RootCmd struct {
clientURL *url.URL
@@ -478,11 +471,11 @@ type RootCmd struct {
}
func addTelemetryHeader(client *codersdk.Client, inv *clibase.Invocation) {
transport, ok := client.HTTPClient.Transport.(*headerTransport)
transport, ok := client.HTTPClient.Transport.(*codersdk.HeaderTransport)
if !ok {
transport = &headerTransport{
transport: client.HTTPClient.Transport,
header: http.Header{},
transport = &codersdk.HeaderTransport{
Transport: client.HTTPClient.Transport,
Header: http.Header{},
}
client.HTTPClient.Transport = transport
}
@@ -516,13 +509,17 @@ func addTelemetryHeader(client *codersdk.Client, inv *clibase.Invocation) {
return
}
transport.header.Add(codersdk.CLITelemetryHeader, s)
transport.Header.Add(codersdk.CLITelemetryHeader, s)
}
// InitClient sets client to a new client.
// It reads from global configuration files if flags are not set.
func (r *RootCmd) InitClient(client *codersdk.Client) clibase.MiddlewareFunc {
return r.initClientInternal(client, false)
return clibase.Chain(
r.initClientInternal(client, false),
// By default, we should print warnings in addition to initializing the client
r.PrintWarnings(client),
)
}
func (r *RootCmd) InitClientMissingTokenOK(client *codersdk.Client) clibase.MiddlewareFunc {
@@ -582,7 +579,20 @@ func (r *RootCmd) initClientInternal(client *codersdk.Client, allowTokenMissing
client.SetLogBodies(true)
}
client.DisableDirectConnections = r.disableDirect
return next(inv)
}
}
}
func (r *RootCmd) PrintWarnings(client *codersdk.Client) clibase.MiddlewareFunc {
if client == nil {
panic("client is nil")
}
if r == nil {
panic("root is nil")
}
return func(next clibase.HandlerFunc) clibase.HandlerFunc {
return func(inv *clibase.Invocation) error {
// We send these requests in parallel to minimize latency.
var (
versionErr = make(chan error)
@@ -598,14 +608,14 @@ func (r *RootCmd) initClientInternal(client *codersdk.Client, allowTokenMissing
close(warningErr)
}()
if err = <-versionErr; err != nil {
if err := <-versionErr; err != nil {
// Just log the error here. We never want to fail a command
// due to a pre-run.
pretty.Fprintf(inv.Stderr, cliui.DefaultStyles.Warn, "check versions error: %s", err)
_, _ = fmt.Fprintln(inv.Stderr)
}
if err = <-warningErr; err != nil {
if err := <-warningErr; err != nil {
// Same as above
pretty.Fprintf(inv.Stderr, cliui.DefaultStyles.Warn, "check entitlement warnings error: %s", err)
_, _ = fmt.Fprintln(inv.Stderr)
@@ -616,10 +626,10 @@ func (r *RootCmd) initClientInternal(client *codersdk.Client, allowTokenMissing
}
}
func (r *RootCmd) setClient(ctx context.Context, client *codersdk.Client, serverURL *url.URL) error {
transport := &headerTransport{
transport: http.DefaultTransport,
header: http.Header{},
func (r *RootCmd) HeaderTransport(ctx context.Context, serverURL *url.URL) (*codersdk.HeaderTransport, error) {
transport := &codersdk.HeaderTransport{
Transport: http.DefaultTransport,
Header: http.Header{},
}
headers := r.header
if r.headerCommand != "" {
@@ -637,23 +647,32 @@ func (r *RootCmd) setClient(ctx context.Context, client *codersdk.Client, server
cmd.Stderr = io.Discard
err := cmd.Run()
if err != nil {
return xerrors.Errorf("failed to run %v: %w", cmd.Args, err)
return nil, xerrors.Errorf("failed to run %v: %w", cmd.Args, err)
}
scanner := bufio.NewScanner(&outBuf)
for scanner.Scan() {
headers = append(headers, scanner.Text())
}
if err := scanner.Err(); err != nil {
return xerrors.Errorf("scan %v: %w", cmd.Args, err)
return nil, xerrors.Errorf("scan %v: %w", cmd.Args, err)
}
}
for _, header := range headers {
parts := strings.SplitN(header, "=", 2)
if len(parts) < 2 {
return xerrors.Errorf("split header %q had less than two parts", header)
return nil, xerrors.Errorf("split header %q had less than two parts", header)
}
transport.header.Add(parts[0], parts[1])
transport.Header.Add(parts[0], parts[1])
}
return transport, nil
}
func (r *RootCmd) setClient(ctx context.Context, client *codersdk.Client, serverURL *url.URL) error {
transport, err := r.HeaderTransport(ctx, serverURL)
if err != nil {
return xerrors.Errorf("create header transport: %w", err)
}
client.URL = serverURL
client.HTTPClient = &http.Client{
Transport: transport,
@@ -860,24 +879,6 @@ func (r *RootCmd) Verbosef(inv *clibase.Invocation, fmtStr string, args ...inter
}
}
type headerTransport struct {
transport http.RoundTripper
header http.Header
}
func (h *headerTransport) Header() http.Header {
return h.header.Clone()
}
func (h *headerTransport) RoundTrip(req *http.Request) (*http.Response, error) {
for k, v := range h.header {
for _, vv := range v {
req.Header.Add(k, vv)
}
}
return h.transport.RoundTrip(req)
}
// DumpHandler provides a custom SIGQUIT and SIGTRAP handler that dumps the
// stacktrace of all goroutines to stderr and a well-known file in the home
// directory. This is useful for debugging deadlock issues that may occur in
@@ -968,6 +969,30 @@ func DumpHandler(ctx context.Context) {
}
}
type exitError struct {
code int
err error
}
var _ error = (*exitError)(nil)
func (e *exitError) Error() string {
if e.err != nil {
return fmt.Sprintf("exit code %d: %v", e.code, e.err)
}
return fmt.Sprintf("exit code %d", e.code)
}
func (e *exitError) Unwrap() error {
return e.err
}
// ExitError returns an error that will cause the CLI to exit with the given
// exit code. If err is non-nil, it will be wrapped by the returned error.
func ExitError(code int, err error) error {
return &exitError{code: code, err: err}
}
// isConnectionError is a convenience function for checking if the source of an
// error is due to a 'connection refused', 'no such host', etc.
func isConnectionError(err error) bool {
@@ -136,9 +136,9 @@ func TestDERPHeaders(t *testing.T) {
})
var (
admin = coderdtest.CreateFirstUser(t, client)
member, _ = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
workspace = runAgent(t, client, member)
admin = coderdtest.CreateFirstUser(t, client)
member, memberUser = coderdtest.CreateAnotherUser(t, client, admin.OrganizationID)
workspace = runAgent(t, client, memberUser.ID, newOptions.Database)
)
// Inject custom /derp handler so we can inspect the headers.
@@ -3,9 +3,9 @@ package cli
import (
"fmt"
"io"
"strings"
"time"
"github.com/jedib0t/go-pretty/v6/table"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/cli/clibase"
@@ -17,7 +17,7 @@ import (
)
const (
scheduleShowDescriptionLong = `Shows the following information for the given workspace:
scheduleShowDescriptionLong = `Shows the following information for the given workspace(s):
* The automatic start schedule
* The next scheduled start time
* The duration after which it will stop
@@ -72,25 +72,67 @@ func (r *RootCmd) schedules() *clibase.Cmd {
return scheduleCmd
}
// scheduleShow() is just a wrapper for list() with some different defaults.
func (r *RootCmd) scheduleShow() *clibase.Cmd {
var (
filter cliui.WorkspaceFilter
formatter = cliui.NewOutputFormatter(
cliui.TableFormat(
[]scheduleListRow{},
[]string{
"workspace",
"starts at",
"starts next",
"stops after",
"stops next",
},
),
cliui.JSONFormat(),
)
)
client := new(codersdk.Client)
showCmd := &clibase.Cmd{
Use: "show <workspace-name>",
Short: "Show workspace schedule",
Use: "show <workspace | --search <query> | --all>",
Short: "Show workspace schedules",
Long: scheduleShowDescriptionLong,
Middleware: clibase.Chain(
clibase.RequireNArgs(1),
clibase.RequireRangeArgs(0, 1),
r.InitClient(client),
),
Handler: func(inv *clibase.Invocation) error {
workspace, err := namedWorkspace(inv.Context(), client, inv.Args[0])
// To preserve existing behavior, if an argument is passed we will
// only show the schedule for that workspace.
// This will clobber the search query if one is passed.
f := filter.Filter()
if len(inv.Args) == 1 {
// If the argument contains a slash, we assume it's a full owner/name reference
if strings.Contains(inv.Args[0], "/") {
_, workspaceName, err := splitNamedWorkspace(inv.Args[0])
if err != nil {
return err
}
f.FilterQuery = fmt.Sprintf("name:%s", workspaceName)
} else {
// Otherwise, we assume it's a workspace name owned by the current user
f.FilterQuery = fmt.Sprintf("owner:me name:%s", inv.Args[0])
}
}
res, err := queryConvertWorkspaces(inv.Context(), client, f, scheduleListRowFromWorkspace)
if err != nil {
return err
}
return displaySchedule(workspace, inv.Stdout)
out, err := formatter.Format(inv.Context(), res)
if err != nil {
return err
}
_, err = fmt.Fprintln(inv.Stdout, out)
return err
},
}
filter.AttachOptions(&showCmd.Options)
formatter.AttachOptions(&showCmd.Options)
return showCmd
}
@@ -242,50 +284,52 @@ func (r *RootCmd) scheduleOverride() *clibase.Cmd {
return overrideCmd
}
func displaySchedule(workspace codersdk.Workspace, out io.Writer) error {
loc, err := tz.TimezoneIANA()
func displaySchedule(ws codersdk.Workspace, out io.Writer) error {
rows := []workspaceListRow{workspaceListRowFromWorkspace(time.Now(), ws)}
rendered, err := cliui.DisplayTable(rows, "workspace", []string{
"workspace", "starts at", "starts next", "stops after", "stops next",
})
if err != nil {
loc = time.UTC // best effort
return err
}
_, err = fmt.Fprintln(out, rendered)
return err
}
// scheduleListRow is a row in the schedule list.
// This is required for proper JSON output.
type scheduleListRow struct {
WorkspaceName string `json:"workspace" table:"workspace,default_sort"`
StartsAt string `json:"starts_at" table:"starts at"`
StartsNext string `json:"starts_next" table:"starts next"`
StopsAfter string `json:"stops_after" table:"stops after"`
StopsNext string `json:"stops_next" table:"stops next"`
}
func scheduleListRowFromWorkspace(now time.Time, workspace codersdk.Workspace) scheduleListRow {
autostartDisplay := ""
nextStartDisplay := ""
if !ptr.NilOrEmpty(workspace.AutostartSchedule) {
if sched, err := cron.Weekly(*workspace.AutostartSchedule); err == nil {
autostartDisplay = sched.Humanize()
nextStartDisplay = timeDisplay(sched.Next(now))
}
}
autostopDisplay := ""
nextStopDisplay := ""
if !ptr.NilOrZero(workspace.TTLMillis) {
dur := time.Duration(*workspace.TTLMillis) * time.Millisecond
autostopDisplay = durationDisplay(dur)
if !workspace.LatestBuild.Deadline.IsZero() && workspace.LatestBuild.Transition == codersdk.WorkspaceTransitionStart {
nextStopDisplay = timeDisplay(workspace.LatestBuild.Deadline.Time)
}
}
return scheduleListRow{
WorkspaceName: workspace.OwnerName + "/" + workspace.Name,
StartsAt: autostartDisplay,
StartsNext: nextStartDisplay,
StopsAfter: autostopDisplay,
StopsNext: nextStopDisplay,
}
var (
schedStart = "manual"
schedStop = "manual"
schedNextStart = "-"
schedNextStop = "-"
)
if !ptr.NilOrEmpty(workspace.AutostartSchedule) {
sched, err := cron.Weekly(ptr.NilToEmpty(workspace.AutostartSchedule))
if err != nil {
// This should never happen.
_, _ = fmt.Fprintf(out, "Invalid autostart schedule %q for workspace %s: %s\n", *workspace.AutostartSchedule, workspace.Name, err.Error())
return nil
}
schedNext := sched.Next(time.Now()).In(sched.Location())
schedStart = fmt.Sprintf("%s %s (%s)", sched.Time(), sched.DaysOfWeek(), sched.Location())
schedNextStart = schedNext.Format(timeFormat + " on " + dateFormat)
}
if !ptr.NilOrZero(workspace.TTLMillis) {
d := time.Duration(*workspace.TTLMillis) * time.Millisecond
schedStop = durationDisplay(d) + " after start"
}
if !workspace.LatestBuild.Deadline.IsZero() {
if workspace.LatestBuild.Transition != "start" {
schedNextStop = "-"
} else {
schedNextStop = workspace.LatestBuild.Deadline.Time.In(loc).Format(timeFormat + " on " + dateFormat)
schedNextStop = fmt.Sprintf("%s (in %s)", schedNextStop, durationDisplay(time.Until(workspace.LatestBuild.Deadline.Time)))
}
}
tw := cliui.Table()
tw.AppendRow(table.Row{"Starts at", schedStart})
tw.AppendRow(table.Row{"Starts next", schedNextStart})
tw.AppendRow(table.Row{"Stops at", schedStop})
tw.AppendRow(table.Row{"Stops next", schedNextStop})
_, _ = fmt.Fprintln(out, tw.Render())
return nil
}
@@ -3,8 +3,9 @@ package cli_test
import (
"bytes"
"context"
"fmt"
"strings"
"database/sql"
"encoding/json"
"sort"
"testing"
"time"
@@ -14,372 +15,349 @@ import (
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/schedule/cron"
"github.com/coder/coder/v2/coderd/util/tz"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
// setupTestSchedule creates 4 workspaces:
// 1. a-owner-ws1: owned by owner, has both autostart and autostop enabled.
// 2. b-owner-ws2: owned by owner, has only autostart enabled.
// 3. c-member-ws3: owned by member, has only autostop enabled.
// 4. d-member-ws4: owned by member, has neither autostart nor autostop enabled.
// It returns the owner and member clients, the database, and the workspaces.
// The workspaces are returned in the same order as they are created.
func setupTestSchedule(t *testing.T, sched *cron.Schedule) (ownerClient, memberClient *codersdk.Client, db database.Store, ws []codersdk.Workspace) {
t.Helper()
ownerClient, db = coderdtest.NewWithDatabase(t, nil)
owner := coderdtest.CreateFirstUser(t, ownerClient)
memberClient, memberUser := coderdtest.CreateAnotherUserMutators(t, ownerClient, owner.OrganizationID, nil, func(r *codersdk.CreateUserRequest) {
r.Username = "testuser2" // ensure deterministic ordering
})
_ = dbfake.WorkspaceBuild(t, db, database.Workspace{
Name: "a-owner",
OwnerID: owner.UserID,
OrganizationID: owner.OrganizationID,
AutostartSchedule: sql.NullString{String: sched.String(), Valid: true},
Ttl: sql.NullInt64{Int64: 8 * time.Hour.Nanoseconds(), Valid: true},
}).WithAgent().Do()
_ = dbfake.WorkspaceBuild(t, db, database.Workspace{
Name: "b-owner",
OwnerID: owner.UserID,
OrganizationID: owner.OrganizationID,
AutostartSchedule: sql.NullString{String: sched.String(), Valid: true},
}).WithAgent().Do()
_ = dbfake.WorkspaceBuild(t, db, database.Workspace{
Name: "c-member",
OwnerID: memberUser.ID,
OrganizationID: owner.OrganizationID,
Ttl: sql.NullInt64{Int64: 8 * time.Hour.Nanoseconds(), Valid: true},
}).WithAgent().Do()
_ = dbfake.WorkspaceBuild(t, db, database.Workspace{
Name: "d-member",
OwnerID: memberUser.ID,
OrganizationID: owner.OrganizationID,
}).WithAgent().Do()
// Need this for LatestBuild.Deadline
resp, err := ownerClient.Workspaces(context.Background(), codersdk.WorkspaceFilter{})
require.NoError(t, err)
require.Len(t, resp.Workspaces, 4)
// Ensure same order as in CLI output
ws = resp.Workspaces
sort.Slice(ws, func(i, j int) bool {
a := ws[i].OwnerName + "/" + ws[i].Name
b := ws[j].OwnerName + "/" + ws[j].Name
return a < b
})
return ownerClient, memberClient, db, ws
}
//nolint:paralleltest // t.Setenv
func TestScheduleShow(t *testing.T) {
t.Parallel()
t.Run("Enabled", func(t *testing.T) {
t.Parallel()
// Given
// Set timezone to Asia/Kolkata to surface any timezone-related bugs.
t.Setenv("TZ", "Asia/Kolkata")
loc, err := tz.TimezoneIANA()
require.NoError(t, err)
require.Equal(t, "Asia/Kolkata", loc.String())
sched, err := cron.Weekly("CRON_TZ=Europe/Dublin 30 7 * * Mon-Fri")
require.NoError(t, err, "invalid schedule")
ownerClient, memberClient, _, ws := setupTestSchedule(t, sched)
now := time.Now()
var (
tz = "Europe/Dublin"
sched = "30 7 * * 1-5"
schedCron = fmt.Sprintf("CRON_TZ=%s %s", tz, sched)
ttl = 8 * time.Hour
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
cwr.AutostartSchedule = ptr.Ref(schedCron)
cwr.TTLMillis = ptr.Ref(ttl.Milliseconds())
})
_ = coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
cmdArgs = []string{"schedule", "show", workspace.Name}
stdoutBuf = &bytes.Buffer{}
)
t.Run("OwnerNoArgs", func(t *testing.T) {
// When: owner specifies no args
inv, root := clitest.New(t, "schedule", "show")
//nolint:gocritic // Testing that owner user sees all
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
inv, root := clitest.New(t, cmdArgs...)
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
err := inv.Run()
require.NoError(t, err, "unexpected error")
lines := strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if assert.Len(t, lines, 4) {
assert.Contains(t, lines[0], "Starts at 7:30AM Mon-Fri (Europe/Dublin)")
assert.Contains(t, lines[1], "Starts next 7:30AM")
// it should have either IST or GMT
if !strings.Contains(lines[1], "IST") && !strings.Contains(lines[1], "GMT") {
t.Error("expected either IST or GMT")
}
assert.Contains(t, lines[2], "Stops at 8h after start")
assert.NotContains(t, lines[3], "Stops next -")
}
// Then: they should see their own workspaces.
// 1st workspace: a-owner-ws1 has both autostart and autostop enabled.
pty.ExpectMatch(ws[0].OwnerName + "/" + ws[0].Name)
pty.ExpectMatch(sched.Humanize())
pty.ExpectMatch(sched.Next(now).In(loc).Format(time.RFC3339))
pty.ExpectMatch("8h")
pty.ExpectMatch(ws[0].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339))
// 2nd workspace: b-owner-ws2 has only autostart enabled.
pty.ExpectMatch(ws[1].OwnerName + "/" + ws[1].Name)
pty.ExpectMatch(sched.Humanize())
pty.ExpectMatch(sched.Next(now).In(loc).Format(time.RFC3339))
})
t.Run("Manual", func(t *testing.T) {
t.Parallel()
t.Run("OwnerAll", func(t *testing.T) {
// When: owner lists all workspaces
inv, root := clitest.New(t, "schedule", "show", "--all")
//nolint:gocritic // Testing that owner user sees all
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
var (
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
cwr.AutostartSchedule = nil
cwr.TTLMillis = nil
})
_ = coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
cmdArgs = []string{"schedule", "show", workspace.Name}
stdoutBuf = &bytes.Buffer{}
)
inv, root := clitest.New(t, cmdArgs...)
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
err := inv.Run()
require.NoError(t, err, "unexpected error")
lines := strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if assert.Len(t, lines, 4) {
assert.Contains(t, lines[0], "Starts at manual")
assert.Contains(t, lines[1], "Starts next -")
assert.Contains(t, lines[2], "Stops at manual")
assert.Contains(t, lines[3], "Stops next -")
}
// Then: they should see all workspaces
// 1st workspace: a-owner-ws1 has both autostart and autostop enabled.
pty.ExpectMatch(ws[0].OwnerName + "/" + ws[0].Name)
pty.ExpectMatch(sched.Humanize())
pty.ExpectMatch(sched.Next(now).In(loc).Format(time.RFC3339))
pty.ExpectMatch("8h")
pty.ExpectMatch(ws[0].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339))
// 2nd workspace: b-owner-ws2 has only autostart enabled.
pty.ExpectMatch(ws[1].OwnerName + "/" + ws[1].Name)
pty.ExpectMatch(sched.Humanize())
pty.ExpectMatch(sched.Next(now).In(loc).Format(time.RFC3339))
// 3rd workspace: c-member-ws3 has only autostop enabled.
pty.ExpectMatch(ws[2].OwnerName + "/" + ws[2].Name)
pty.ExpectMatch("8h")
pty.ExpectMatch(ws[2].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339))
// 4th workspace: d-member-ws4 has neither autostart nor autostop enabled.
pty.ExpectMatch(ws[3].OwnerName + "/" + ws[3].Name)
})
t.Run("NotFound", func(t *testing.T) {
t.Parallel()
t.Run("OwnerSearchByName", func(t *testing.T) {
// When: owner specifies a search query
inv, root := clitest.New(t, "schedule", "show", "--search", "name:"+ws[1].Name)
//nolint:gocritic // Testing that owner user sees all
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
var (
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
)
inv, root := clitest.New(t, "schedule", "show", "doesnotexist")
clitest.SetupConfig(t, client, root)
err := inv.Run()
require.ErrorContains(t, err, "status code 404", "unexpected error")
// Then: they should see workspaces matching that query
// 2nd workspace: b-owner-ws2 has only autostart enabled.
pty.ExpectMatch(ws[1].OwnerName + "/" + ws[1].Name)
pty.ExpectMatch(sched.Humanize())
pty.ExpectMatch(sched.Next(now).In(loc).Format(time.RFC3339))
})
}
func TestScheduleStart(t *testing.T) {
t.Parallel()
t.Run("OwnerOneArg", func(t *testing.T) {
// When: owner asks for a specific workspace by name
inv, root := clitest.New(t, "schedule", "show", ws[2].OwnerName+"/"+ws[2].Name)
//nolint:gocritic // Testing that owner user sees all
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
var (
ctx = context.Background()
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
cwr.AutostartSchedule = nil
// Then: they should see that workspace
// 3rd workspace: c-member-ws3 has only autostop enabled.
pty.ExpectMatch(ws[2].OwnerName + "/" + ws[2].Name)
pty.ExpectMatch("8h")
pty.ExpectMatch(ws[2].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339))
})
t.Run("MemberNoArgs", func(t *testing.T) {
// When: a member specifies no args
inv, root := clitest.New(t, "schedule", "show")
clitest.SetupConfig(t, memberClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Then: they should see their own workspaces
// 1st workspace: c-member-ws3 has only autostop enabled.
pty.ExpectMatch(ws[2].OwnerName + "/" + ws[2].Name)
pty.ExpectMatch("8h")
pty.ExpectMatch(ws[2].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339))
// 2nd workspace: d-member-ws4 has neither autostart nor autostop enabled.
pty.ExpectMatch(ws[3].OwnerName + "/" + ws[3].Name)
})
t.Run("MemberAll", func(t *testing.T) {
// When: a member lists all workspaces
inv, root := clitest.New(t, "schedule", "show", "--all")
clitest.SetupConfig(t, memberClient, root)
pty := ptytest.New(t).Attach(inv)
ctx := testutil.Context(t, testutil.WaitShort)
errC := make(chan error)
go func() {
errC <- inv.WithContext(ctx).Run()
}()
require.NoError(t, <-errC)
// Then: they should only see their own
// 1st workspace: c-member-ws3 has only autostop enabled.
pty.ExpectMatch(ws[2].OwnerName + "/" + ws[2].Name)
pty.ExpectMatch("8h")
pty.ExpectMatch(ws[2].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339))
// 2nd workspace: d-member-ws4 has neither autostart nor autostop enabled.
pty.ExpectMatch(ws[3].OwnerName + "/" + ws[3].Name)
})
t.Run("JSON", func(t *testing.T) {
// When: owner lists all workspaces in JSON format
inv, root := clitest.New(t, "schedule", "show", "--all", "--output", "json")
var buf bytes.Buffer
inv.Stdout = &buf
clitest.SetupConfig(t, ownerClient, root)
ctx := testutil.Context(t, testutil.WaitShort)
errC := make(chan error)
go func() {
errC <- inv.WithContext(ctx).Run()
}()
assert.NoError(t, <-errC)
// Then: they should see all workspace schedules in JSON format
var parsed []map[string]string
require.NoError(t, json.Unmarshal(buf.Bytes(), &parsed))
require.Len(t, parsed, 4)
// Ensure same order as in CLI output
sort.Slice(parsed, func(i, j int) bool {
a := parsed[i]["workspace"]
b := parsed[j]["workspace"]
return a < b
})
// 1st workspace: a-owner-ws1 has both autostart and autostop enabled.
assert.Equal(t, ws[0].OwnerName+"/"+ws[0].Name, parsed[0]["workspace"])
assert.Equal(t, sched.Humanize(), parsed[0]["starts_at"])
assert.Equal(t, sched.Next(now).In(loc).Format(time.RFC3339), parsed[0]["starts_next"])
assert.Equal(t, "8h", parsed[0]["stops_after"])
assert.Equal(t, ws[0].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339), parsed[0]["stops_next"])
// 2nd workspace: b-owner-ws2 has only autostart enabled.
assert.Equal(t, ws[1].OwnerName+"/"+ws[1].Name, parsed[1]["workspace"])
assert.Equal(t, sched.Humanize(), parsed[1]["starts_at"])
assert.Equal(t, sched.Next(now).In(loc).Format(time.RFC3339), parsed[1]["starts_next"])
assert.Empty(t, parsed[1]["stops_after"])
assert.Empty(t, parsed[1]["stops_next"])
// 3rd workspace: c-member-ws3 has only autostop enabled.
assert.Equal(t, ws[2].OwnerName+"/"+ws[2].Name, parsed[2]["workspace"])
assert.Empty(t, parsed[2]["starts_at"])
assert.Empty(t, parsed[2]["starts_next"])
assert.Equal(t, "8h", parsed[2]["stops_after"])
assert.Equal(t, ws[2].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339), parsed[2]["stops_next"])
// 4th workspace: d-member-ws4 has neither autostart nor autostop enabled.
assert.Equal(t, ws[3].OwnerName+"/"+ws[3].Name, parsed[3]["workspace"])
assert.Empty(t, parsed[3]["starts_at"])
assert.Empty(t, parsed[3]["starts_next"])
assert.Empty(t, parsed[3]["stops_after"])
})
}
func TestScheduleStart(t *testing.T) {
t.Parallel()
var (
ctx = context.Background()
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
cwr.AutostartSchedule = nil
})
_ = coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
tz = "Europe/Dublin"
sched = "CRON_TZ=Europe/Dublin 30 9 * * Mon-Fri"
stdoutBuf = &bytes.Buffer{}
)
// Set a well-specified autostart schedule
inv, root := clitest.New(t, "schedule", "start", workspace.Name, "9:30AM", "Mon-Fri", tz)
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
err := inv.Run()
assert.NoError(t, err, "unexpected error")
lines := strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if assert.Len(t, lines, 4) {
assert.Contains(t, lines[0], "Starts at 9:30AM Mon-Fri (Europe/Dublin)")
assert.Contains(t, lines[1], "Starts next 9:30AM")
// it should have either IST or GMT
if !strings.Contains(lines[1], "IST") && !strings.Contains(lines[1], "GMT") {
t.Error("expected either IST or GMT")
}
}
// Ensure autostart schedule updated
updated, err := client.Workspace(ctx, workspace.ID)
require.NoError(t, err, "fetch updated workspace")
require.Equal(t, sched, *updated.AutostartSchedule, "expected autostart schedule to be set")
// Reset stdout
stdoutBuf = &bytes.Buffer{}
// unset schedule
inv, root = clitest.New(t, "schedule", "start", workspace.Name, "manual")
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
err = inv.Run()
assert.NoError(t, err, "unexpected error")
lines = strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if assert.Len(t, lines, 4) {
assert.Contains(t, lines[0], "Starts at manual")
assert.Contains(t, lines[1], "Starts next -")
}
}
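TestScheduleStart above turns the CLI arguments `9:30AM`, `Mon-Fri`, and a timezone into the stored cron spec `CRON_TZ=Europe/Dublin 30 9 * * Mon-Fri`. A standalone standard-library sketch of that mapping (`buildSchedule` is a hypothetical helper; the real conversion lives in coder's cli package):

```go
package main

import (
	"fmt"
	"time"
)

// buildSchedule sketches how arguments like "9:30AM", "Mon-Fri", and a
// timezone name could map onto the CRON_TZ expression the test asserts on.
func buildSchedule(clock, days, tzName string) (string, error) {
	// time.Kitchen ("3:04PM") matches the clock format used by the CLI args.
	t, err := time.Parse(time.Kitchen, clock)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("CRON_TZ=%s %d %d * * %s", tzName, t.Minute(), t.Hour(), days), nil
}

func main() {
	sched, err := buildSchedule("9:30AM", "Mon-Fri", "Europe/Dublin")
	if err != nil {
		panic(err)
	}
	fmt.Println(sched) // CRON_TZ=Europe/Dublin 30 9 * * Mon-Fri
}
```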
func TestScheduleStop(t *testing.T) {
t.Parallel()
var (
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
ttl = 8*time.Hour + 30*time.Minute
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID)
_ = coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
stdoutBuf = &bytes.Buffer{}
)
// Set the workspace TTL
inv, root := clitest.New(t, "schedule", "stop", workspace.Name, ttl.String())
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
err := inv.Run()
assert.NoError(t, err, "unexpected error")
lines := strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if assert.Len(t, lines, 4) {
assert.Contains(t, lines[2], "Stops at 8h30m after start")
// Should not be manual
assert.NotContains(t, lines[3], "Stops next -")
}
// Reset stdout
stdoutBuf = &bytes.Buffer{}
// Unset the workspace TTL
inv, root = clitest.New(t, "schedule", "stop", workspace.Name, "manual")
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
err = inv.Run()
assert.NoError(t, err, "unexpected error")
lines = strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if assert.Len(t, lines, 4) {
assert.Contains(t, lines[2], "Stops at manual")
// Deadline of a running workspace is not updated.
assert.NotContains(t, lines[3], "Stops next -")
}
}
func TestScheduleOverride(t *testing.T) {
t.Parallel()
t.Run("OK", func(t *testing.T) {
t.Parallel()
// Given: we have a workspace
var (
err error
ctx = context.Background()
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID)
cmdArgs = []string{"schedule", "override-stop", workspace.Name, "10h"}
stdoutBuf = &bytes.Buffer{}
)
// Given: we wait for the workspace to be built
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
workspace, err = client.Workspace(ctx, workspace.ID)
require.NoError(t, err)
expectedDeadline := time.Now().Add(10 * time.Hour)
// Assert test invariant: workspace build has a deadline set equal to now plus ttl
initDeadline := time.Now().Add(time.Duration(*workspace.TTLMillis) * time.Millisecond)
require.WithinDuration(t, initDeadline, workspace.LatestBuild.Deadline.Time, time.Minute)
inv, root := clitest.New(t, cmdArgs...)
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
// When: we execute `coder schedule override-stop <workspace> 10h`
err = inv.WithContext(ctx).Run()
require.NoError(t, err)
// Then: the deadline of the latest build is pushed out to 10 hours from now
updated, err := client.Workspace(ctx, workspace.ID)
require.NoError(t, err)
require.WithinDuration(t, expectedDeadline, updated.LatestBuild.Deadline.Time, time.Minute)
})
t.Run("InvalidDuration", func(t *testing.T) {
t.Parallel()
// Given: we have a workspace
var (
err error
ctx = context.Background()
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID)
cmdArgs = []string{"schedule", "override-stop", workspace.Name, "kwyjibo"}
stdoutBuf = &bytes.Buffer{}
)
// Given: we wait for the workspace to be built
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
workspace, err = client.Workspace(ctx, workspace.ID)
require.NoError(t, err)
// Assert test invariant: workspace build has a deadline set equal to now plus ttl
initDeadline := time.Now().Add(time.Duration(*workspace.TTLMillis) * time.Millisecond)
require.WithinDuration(t, initDeadline, workspace.LatestBuild.Deadline.Time, time.Minute)
inv, root := clitest.New(t, cmdArgs...)
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
// When: we execute `coder schedule override-stop <workspace> <invalid duration>`
err = inv.WithContext(ctx).Run()
// Then: the command fails
require.ErrorContains(t, err, "invalid duration")
})
t.Run("NoDeadline", func(t *testing.T) {
t.Parallel()
// Given: we have a workspace with no deadline set
var (
err error
ctx = context.Background()
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
template = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, template.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
cwr.TTLMillis = nil
})
cmdArgs = []string{"schedule", "override-stop", workspace.Name, "1h"}
stdoutBuf = &bytes.Buffer{}
)
require.Zero(t, template.DefaultTTLMillis)
require.Empty(t, template.AutostopRequirement.DaysOfWeek)
require.EqualValues(t, 1, template.AutostopRequirement.Weeks)
// Unset the workspace TTL
err = client.UpdateWorkspaceTTL(ctx, workspace.ID, codersdk.UpdateWorkspaceTTLRequest{TTLMillis: nil})
require.NoError(t, err)
workspace, err = client.Workspace(ctx, workspace.ID)
require.NoError(t, err)
require.Nil(t, workspace.TTLMillis)
// Given: we wait for the workspace to build
coderdtest.AwaitWorkspaceBuildJobCompleted(t, client, workspace.LatestBuild.ID)
workspace, err = client.Workspace(ctx, workspace.ID)
require.NoError(t, err)
// NOTE(cian): need to stop and start the workspace as we do not update the deadline
// see: https://github.com/coder/coder/issues/2224
coderdtest.MustTransitionWorkspace(t, client, workspace.ID, database.WorkspaceTransitionStart, database.WorkspaceTransitionStop)
coderdtest.MustTransitionWorkspace(t, client, workspace.ID, database.WorkspaceTransitionStop, database.WorkspaceTransitionStart)
// Assert test invariant: the workspace build has no deadline set
require.Zero(t, workspace.LatestBuild.Deadline)
require.NoError(t, err)
inv, root := clitest.New(t, cmdArgs...)
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
// When: we execute `coder schedule override-stop <workspace> 1h`
err = inv.WithContext(ctx).Run()
require.Error(t, err)
// Then: nothing happens and the deadline remains unset
updated, err := client.Workspace(ctx, workspace.ID)
require.NoError(t, err)
require.Zero(t, updated.LatestBuild.Deadline)
})
}
//nolint:paralleltest // t.Setenv
func TestScheduleStartDefaults(t *testing.T) {
t.Setenv("TZ", "Pacific/Tongatapu")
var (
client = coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
user = coderdtest.CreateFirstUser(t, client)
version = coderdtest.CreateTemplateVersion(t, client, user.OrganizationID, nil)
_ = coderdtest.AwaitTemplateVersionJobCompleted(t, client, version.ID)
project = coderdtest.CreateTemplate(t, client, user.OrganizationID, version.ID)
workspace = coderdtest.CreateWorkspace(t, client, user.OrganizationID, project.ID, func(cwr *codersdk.CreateWorkspaceRequest) {
cwr.AutostartSchedule = nil
})
stdoutBuf = &bytes.Buffer{}
)
// Set an underspecified schedule
inv, root := clitest.New(t, "schedule", "start", workspace.Name, "9:30AM")
clitest.SetupConfig(t, client, root)
inv.Stdout = stdoutBuf
err := inv.Run()
require.NoError(t, err, "unexpected error")
lines := strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if assert.Len(t, lines, 4) {
assert.Contains(t, lines[0], "Starts at 9:30AM daily (Pacific/Tongatapu)")
assert.Contains(t, lines[1], "Starts next 9:30AM +13 on")
assert.Contains(t, lines[2], "Stops at 8h after start")
}
}
//nolint:paralleltest // t.Setenv
func TestScheduleModify(t *testing.T) {
// Given
// Set timezone to Asia/Kolkata to surface any timezone-related bugs.
t.Setenv("TZ", "Asia/Kolkata")
loc, err := tz.TimezoneIANA()
require.NoError(t, err)
require.Equal(t, "Asia/Kolkata", loc.String())
sched, err := cron.Weekly("CRON_TZ=Europe/Dublin 30 7 * * Mon-Fri")
require.NoError(t, err, "invalid schedule")
ownerClient, _, _, ws := setupTestSchedule(t, sched)
now := time.Now()
t.Run("SetStart", func(t *testing.T) {
// When: we set the start schedule
inv, root := clitest.New(t,
"schedule", "start", ws[3].OwnerName+"/"+ws[3].Name, "7:30AM", "Mon-Fri", "Europe/Dublin",
)
//nolint:gocritic // this workspace is not owned by the same user
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Then: the updated schedule should be shown
pty.ExpectMatch(ws[3].OwnerName + "/" + ws[3].Name)
pty.ExpectMatch(sched.Humanize())
pty.ExpectMatch(sched.Next(now).In(loc).Format(time.RFC3339))
})
t.Run("SetStop", func(t *testing.T) {
// When: we set the stop schedule
inv, root := clitest.New(t,
"schedule", "stop", ws[2].OwnerName+"/"+ws[2].Name, "8h30m",
)
//nolint:gocritic // this workspace is not owned by the same user
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Then: the updated schedule should be shown
pty.ExpectMatch(ws[2].OwnerName + "/" + ws[2].Name)
pty.ExpectMatch("8h30m")
pty.ExpectMatch(ws[2].LatestBuild.Deadline.Time.In(loc).Format(time.RFC3339))
})
t.Run("UnsetStart", func(t *testing.T) {
// When: we unset the start schedule
inv, root := clitest.New(t,
"schedule", "start", ws[1].OwnerName+"/"+ws[1].Name, "manual",
)
//nolint:gocritic // this workspace is owned by owner
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Then: the updated schedule should be shown
pty.ExpectMatch(ws[1].OwnerName + "/" + ws[1].Name)
})
t.Run("UnsetStop", func(t *testing.T) {
// When: we unset the stop schedule
inv, root := clitest.New(t,
"schedule", "stop", ws[0].OwnerName+"/"+ws[0].Name, "manual",
)
//nolint:gocritic // this workspace is owned by owner
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Then: the updated schedule should be shown
pty.ExpectMatch(ws[0].OwnerName + "/" + ws[0].Name)
})
}
//nolint:paralleltest // t.Setenv
func TestScheduleOverride(t *testing.T) {
// Given
// Set timezone to Asia/Kolkata to surface any timezone-related bugs.
t.Setenv("TZ", "Asia/Kolkata")
loc, err := tz.TimezoneIANA()
require.NoError(t, err)
require.Equal(t, "Asia/Kolkata", loc.String())
sched, err := cron.Weekly("CRON_TZ=Europe/Dublin 30 7 * * Mon-Fri")
require.NoError(t, err, "invalid schedule")
ownerClient, _, _, ws := setupTestSchedule(t, sched)
now := time.Now()
// To reduce the likelihood of time-related flakes, we only match up to the hour.
expectedDeadline := time.Now().In(loc).Add(10 * time.Hour).Format("2006-01-02T15:")
// When: we override the stop schedule
inv, root := clitest.New(t,
"schedule", "override-stop", ws[0].OwnerName+"/"+ws[0].Name, "10h",
)
clitest.SetupConfig(t, ownerClient, root)
pty := ptytest.New(t).Attach(inv)
require.NoError(t, inv.Run())
// Then: the updated schedule should be shown
pty.ExpectMatch(ws[0].OwnerName + "/" + ws[0].Name)
pty.ExpectMatch(sched.Humanize())
pty.ExpectMatch(sched.Next(now).In(loc).Format(time.RFC3339))
pty.ExpectMatch("8h")
pty.ExpectMatch(expectedDeadline)
}