Compare commits

...

24 Commits

Author SHA1 Message Date
Garrett Delfosse ce2aed9002 feat: move boundary code from coder/boundary to enterprise/cli/boundary 2026-01-14 13:07:05 -05:00
Steven Masley 8d6a202ee4 chore: git ignore jetbrains run configs (#21497)
JetBrains IDE users can save their debug/test run configs to `.run`.
2026-01-14 06:51:35 -06:00
Sas Swart ffa83a4ebc docs: add documentation for coder script ordering (#21090)
This pull request adds documentation and guidance for the Coder script
ordering feature. We:
* explain the use case, benefits, and requirements
* provide example configuration snippets
* discuss best practices and troubleshooting

---------

Co-authored-by: Cian Johnston <cian@coder.com>
Co-authored-by: DevCats <christofer@coder.com>
2026-01-14 14:40:38 +02:00
blinkagent[bot] b3a81be1aa fix(coderd/database): remove hardcoded public schema from migration 000401 (#21493) 2026-01-14 05:40:30 +02:00
Andrew Aquino 0c5809726d fix(docs): show dynamic parameters demo in local GIF instead of Imgur link (#21487)
Fixes a bug where the dynamic parameters demo GIF isn't viewable in
the UK:

<img width="720" height="798" alt="image"
src="https://github.com/user-attachments/assets/757cd4fb-6b32-4db8-87fa-31a01588d69d"
/>
2026-01-13 09:31:32 -08:00
Susana Ferreira 000bc334c9 fix: reuse reconciliation lock transaction for read operations in prebuilds (#21408)
## Description

Reuses the reconciliation lock transaction for read operations during
prebuilds reconciliation, reducing unnecessary database connections.

## Changes

* Use the lock transaction (`db`) for read operations and `c.store` for
write operations:
  * `GetPrebuildsSettings`: now uses `db`
  * `SnapshotState`: now uses `db`
* `MembershipReconciler`: continues to use `c.store` (performs write
operations)
* Add comments explaining the transaction model and when to use `db` vs
`c.store`

Related to: https://github.com/coder/coder/pull/20587
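The read/write split described above can be sketched as follows. This is a minimal illustration, not the actual coderd database layer: `fakeStore`, `inTx`, and the method names are hypothetical stand-ins that only record which handle served each call.

```go
package main

import "fmt"

// fakeStore is a hypothetical stand-in for the coderd database layer;
// it records which handle served each call so the routing is visible.
type fakeStore struct {
	name string
	log  *[]string
}

func (s fakeStore) GetPrebuildsSettings() string {
	*s.log = append(*s.log, s.name+":read")
	return "{}"
}

func (s fakeStore) UpsertMembership(user string) {
	*s.log = append(*s.log, s.name+":write")
}

// inTx simulates acquiring the reconciliation lock transaction and
// hands the callback a transaction-scoped handle (db).
func inTx(log *[]string, fn func(db fakeStore)) {
	fn(fakeStore{name: "tx", log: log})
}

// reconcile routes reads through the lock transaction (db) and writes
// through the outer store (c.store in the real code), matching the
// split described in the PR.
func reconcile(store fakeStore, log *[]string) {
	inTx(log, func(db fakeStore) {
		_ = db.GetPrebuildsSettings()      // read: reuses the lock tx
		store.UpsertMembership("prebuild") // write: outer store
	})
}

func main() {
	var log []string
	reconcile(fakeStore{name: "store", log: &log}, &log)
	fmt.Println(log) // [tx:read store:write]
}
```

Because reads ride along on the connection already held for the advisory lock, reconciliation no longer opens extra connections just to query state.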
2026-01-13 15:04:51 +00:00
Cian Johnston 8dd7d8b882 chore: clean up coder build directory on shutdown (#21490)
Adds a step to delete the `build/` directory inside the Coder repo on
shutdown.

---------

Co-authored-by: Dean Sheather <dean@deansheather.com>
2026-01-13 12:45:50 +00:00
Susana Ferreira 74b6d12a8a feat: implement selective MITM with configurable domain allowlist in aibridgeproxyd (#21473)
## Description

Implements selective MITM (Man-in-the-Middle) in `aibridgeproxyd` so
that only requests to allowlisted domains are intercepted and decrypted.
Requests to all other domains are tunneled directly without decryption.

## Changes

* New config option: `CODER_AIBRIDGE_PROXY_DOMAIN_ALLOWLIST` (default:
`api.anthropic.com`,`api.openai.com`)
* Selective MITM: Uses `goproxy.ReqHostIs()` to only intercept `CONNECT`
requests to allowlisted hosts
* Certificate caching: Now only generates/caches certificates for
allowlisted domains
* Validation: Startup fails if domain allowlist is empty or contains
invalid entries

Closes: https://github.com/coder/internal/issues/1182
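The startup validation described above can be sketched like this. It is an assumption-laden sketch, not the real flag parsing in `aibridgeproxyd`: the function name and the malformed-entry heuristic are illustrative only.

```go
package main

import (
	"fmt"
	"strings"
)

// parseAllowlist mirrors the startup validation described above: it
// rejects an empty allowlist and entries that are blank or obviously
// malformed (containing spaces, slashes, or ports). This is a sketch;
// the real validation in aibridgeproxyd may differ.
func parseAllowlist(raw string) ([]string, error) {
	if strings.TrimSpace(raw) == "" {
		return nil, fmt.Errorf("domain allowlist must not be empty")
	}
	var domains []string
	for _, d := range strings.Split(raw, ",") {
		d = strings.TrimSpace(d)
		if d == "" || strings.ContainsAny(d, " /:") {
			return nil, fmt.Errorf("invalid allowlist entry: %q", d)
		}
		domains = append(domains, d)
	}
	return domains, nil
}

func main() {
	domains, err := parseAllowlist("api.anthropic.com,api.openai.com")
	fmt.Println(domains, err) // [api.anthropic.com api.openai.com] <nil>
}
```

The parsed list would then feed the interception condition (the PR uses `goproxy.ReqHostIs()`), so only `CONNECT` requests to those hosts are decrypted while everything else is tunneled through untouched.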
2026-01-13 11:30:51 +00:00
Cian Johnston 64e7a77983 feat: add user_agent to loggermw (#21485)
Adds the `user_agent` field to `httpmw/loggermw`.
2026-01-13 10:50:01 +00:00
Danny Kopping 7d558e76e9 fix: make make test runnable again (#21251)
Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-13 10:36:06 +00:00
Danny Kopping 40adf91cb0 feat: add profiling options to tests in Makefile (#21488)
Usage example:

```bash
$ make test TEST_CPUPROFILE=cpu.prof TEST_MEMPROFILE=mem.prof TEST_PACKAGES=./coderd
```

Note that `TEST_PACKAGES` has to be specified, otherwise you get `cannot
use -{cpu,memory}profile flag with multiple packages`.

Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-13 10:53:09 +02:00
Danny Kopping 49a42eff5c feat: make database connection pool size configurable (#21403)
Closes https://github.com/coder/coder/issues/21360

A few considerations/notes:
- I've kept the number of conns at 10 in all other places except coderd,
which uses the config value
- I opted to also make idle conns configurable; the greater the delta
between max open and max idle, the more connection churn
- Postgres maintains a [_process_ per
connection](https://www.postgresql.org/docs/current/connect-estab.html),
contrary to what the comment said previously
- Operators should be able to tune this, since process churn can
negatively affect OS scheduling
- I've set the value to `"auto"` by default so it's not another knob one
_has to_ twiddle; `"auto"` sets max idle = max conns / 3
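The `"auto"` derivation can be sketched as below. The function and setting names are illustrative, not the actual coderd config fields, and the floor of 1 idle connection is my assumption, not stated in the PR.

```go
package main

import "fmt"

// maxIdleConns sketches the "auto" behaviour described above: when the
// operator leaves the setting on auto, max idle is derived as
// max open conns / 3. (The floor of 1 is an assumption for the sketch.)
func maxIdleConns(setting string, maxOpen int) int {
	if setting != "auto" {
		// An explicit value would be parsed here; omitted in this sketch.
		return maxOpen
	}
	if idle := maxOpen / 3; idle > 0 {
		return idle
	}
	return 1
}

func main() {
	// The results would be applied via database/sql, e.g.:
	//   db.SetMaxOpenConns(maxOpen)
	//   db.SetMaxIdleConns(maxIdleConns("auto", maxOpen))
	fmt.Println(maxIdleConns("auto", 30)) // 10
}
```

Keeping idle close to open reduces connection churn, which matters because each Postgres connection is a server-side process.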

---------

Signed-off-by: Danny Kopping <danny@coder.com>
2026-01-13 10:50:57 +02:00
Spike Curtis 61ae5b81ab fix: fix scaletest sdkclient duplication (#21475)
Fixes an issue introduced in #21288.

The default sdkclient created by the CLI root includes several additional http.RoundTripper wrappers to check versions and attach telemetry, so `DupClientCopyingHeaders` would break and scale tests would fail.

Instead of explicitly adding support for these additional wrappers to `DupClientCopyingHeaders`, I think we should just stop unwrapping and move on. Scale tests don't need these wrapped functions.

This is a bit fragile, since it depends on the headers wrapper being outermost; but that must already be true for other uses, since things like dialing DERP unwrap the client in a similar way to extract the auth headers. Longer term this needs a refactor to make HTTP headers in the SDK a more first-class resource instead of this hacky RoundTripper wrapping, but that's for a different day.
2026-01-13 11:14:06 +04:00
George K cc2efe9e1f feat(coderd/rbac): make organization-member a per-org system custom role (#21359)
Migrated the built-in organization-member role to DB storage so it can be customized per org.

Closes https://github.com/coder/internal/issues/1073 (part 1)
2026-01-12 18:19:19 -08:00
Cian Johnston 2b448c7178 feat(cli): enrich user-agent header for client requests (#21483)
Adds the following information to CLI User-Agent headers to aid
deployment administrators in troubleshooting where requests are coming
from.

Before: `Go-http-client/1.1`
After: `coder-cli/v2.34.5 (linux/amd64; coder whoami)`

🤖 These changes were generated by Claude Sonnet 4.5 but reviewed and
edited manually by me.
2026-01-12 17:46:05 +00:00
dependabot[bot] 2730e29105 chore: bump google.golang.org/api from 0.258.0 to 0.259.0 (#21480)
Bumps
[google.golang.org/api](https://github.com/googleapis/google-api-go-client)
from 0.258.0 to 0.259.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-go-client/releases">google.golang.org/api's
releases</a>.</em></p>
<blockquote>
<h2>v0.259.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-go-client/compare/v0.258.0...v0.259.0">0.259.0</a>
(2026-01-06)</h2>
<h3>⚠ BREAKING CHANGES</h3>
<ul>
<li>remove firebaseremoteconfig from package list (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3422">#3422</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3412">#3412</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/c7d21a4d7b388f98004cdef7eb1da28afda20e3c">c7d21a4</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3415">#3415</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/6860a5e602d186c2b09c124bf66eed5ff9a4417c">6860a5e</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3417">#3417</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/0a99634bc071a7c86eef4397bc7f236f7e691453">0a99634</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3419">#3419</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/03d987b2b4bed89a1d97eae8fd1c1390b03aa5ed">03d987b</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3421">#3421</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/632ee92f17be886948004adc2096825fb259d5e3">632ee92</a>)</li>
<li><strong>all:</strong> Auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3425">#3425</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/b5998236840eb877911befa581668ad47ea5dc02">b599823</a>)</li>
<li>Support write checksums in json resumable uploads (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3405">#3405</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/6e57e384f3af2773be6ec086c7cca6a500a9c9f5">6e57e38</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>option:</strong> Remove option.WithAuthCredentials from
validation (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3420">#3420</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/2c337321d374c3e9f02c09c75cb94b73eaf23fd2">2c33732</a>)</li>
<li>Remove firebaseremoteconfig from package list (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3422">#3422</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/fd0ce7cd83e33d83e3040e4cc3c8f39fc4aed6dd">fd0ce7c</a>)</li>
<li><strong>transport:</strong> Remove singleton and restore normal
usage of otelgrpc.clientHandler (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3424">#3424</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/24fbfcbae5daea4fd67445129091522c6fad5200">24fbfcb</a>),
refs <a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/2321">#2321</a>
<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/2329">#2329</a></li>
</ul>
<h3>Miscellaneous Chores</h3>
<ul>
<li>Correct release version (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3426">#3426</a>)
(<a
href="https://github.com/googleapis/google-api-go-client/commit/a783dbb2bb83627f299916fb808756cc64038fdd">a783dbb</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/854019061430bb37ad7160fcfe91dec9f8e54328"><code>8540190</code></a>
chore(main): release 0.259.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3413">#3413</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/6e57e384f3af2773be6ec086c7cca6a500a9c9f5"><code>6e57e38</code></a>
feat: support write checksums in json resumable uploads (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3405">#3405</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/1d9673aa44353250400b723978014707fee94563"><code>1d9673a</code></a>
chore(all): update module google.golang.org/grpc to v1.78.0 (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3423">#3423</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/a783dbb2bb83627f299916fb808756cc64038fdd"><code>a783dbb</code></a>
chore: correct release version (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3426">#3426</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/fd0ce7cd83e33d83e3040e4cc3c8f39fc4aed6dd"><code>fd0ce7c</code></a>
fix!: remove firebaseremoteconfig from package list (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3422">#3422</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/b5998236840eb877911befa581668ad47ea5dc02"><code>b599823</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3425">#3425</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/24fbfcbae5daea4fd67445129091522c6fad5200"><code>24fbfcb</code></a>
fix(transport): remove singleton and restore normal usage of
otelgrpc.clientH...</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/632ee92f17be886948004adc2096825fb259d5e3"><code>632ee92</code></a>
feat(all): auto-regenerate discovery clients (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3421">#3421</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/2c337321d374c3e9f02c09c75cb94b73eaf23fd2"><code>2c33732</code></a>
fix(option): remove option.WithAuthCredentials from validation (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3420">#3420</a>)</li>
<li><a
href="https://github.com/googleapis/google-api-go-client/commit/75e055a4fbf9c61e8b875065f0e0693d0f6ba77c"><code>75e055a</code></a>
chore(all): update all (<a
href="https://redirect.github.com/googleapis/google-api-go-client/issues/3418">#3418</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/googleapis/google-api-go-client/compare/v0.258.0...v0.259.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=google.golang.org/api&package-manager=go_modules&previous-version=0.258.0&new-version=0.259.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 15:26:54 +00:00
dependabot[bot] 150763720d chore: bump gonum.org/v1/gonum from 0.16.0 to 0.17.0 (#21481)
Bumps [gonum.org/v1/gonum](https://github.com/gonum/gonum) from 0.16.0
to 0.17.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/gonum/gonum/releases">gonum.org/v1/gonum's
releases</a>.</em></p>
<blockquote>
<h2>v0.17.0</h2>
<p>Release v0.17.0 is a minor release in the v0.17 branch.</p>
<p>Bug fixes/improvements since v0.16.0:</p>
<p>fc402bc4 spatial: add Umeyama's algorithm for estimating point
pattern transformation parameters
837a68db optimize: add configurable MinimumStepSize
ac810a10 mathext: optimize Li2 and add benchmarks
8da34cf6 optimize/functions: add sphere function
a9119bd3 distuv: add non-central t distribution
27d16a49 spatial/r2: increase box scale test tolerance
9c251ca0 mathext: add dilogarithm function Li2
509ffe02 mathext: add Hypergeo for computing the Gaussian Hypergeometric
function
98271d5d graph/network: add Dinic maximum flow function
672aa59e stat: implement Wasserstein distance calculation
4408afac stat: add an example to compute a confidence interval
43738f81 graph/network: add diameter example for Eccentricity
6b50a894 graph/network: add eccentricity measurement
e62ddf59 lapack/testlapack: fix random source use</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/gonum/gonum/commit/fc402bc485e3a92f8d4f1f0ee5a49e2edf232ed2"><code>fc402bc</code></a>
spatial: add Umeyama's algorithm for estimating point pattern
transformation ...</li>
<li><a
href="https://github.com/gonum/gonum/commit/93a8c051bbc0286e46ff296f8eabf0b37273620f"><code>93a8c05</code></a>
A+C: add Mohamed Ali Bouhaouala</li>
<li><a
href="https://github.com/gonum/gonum/commit/837a68db3f5f0ec24e9922aef24c16872820327d"><code>837a68d</code></a>
optimize: add configurable MinimumStepSize</li>
<li><a
href="https://github.com/gonum/gonum/commit/ac810a105c3fd4eb2955093d9839a2a856a2fe5f"><code>ac810a1</code></a>
mathext: optimize Li2 and add benchmarks</li>
<li><a
href="https://github.com/gonum/gonum/commit/9a4c13cfe22ee229ea5d3ccf7e78c8b482b2c32a"><code>9a4c13c</code></a>
A+C: add Nathan Rooy</li>
<li><a
href="https://github.com/gonum/gonum/commit/8da34cf6b4b610e7e1c7fab827f921dc40d5df27"><code>8da34cf</code></a>
optimize/functions: add sphere function</li>
<li><a
href="https://github.com/gonum/gonum/commit/a9119bd313fe095fec9203481b1e75d506e9d42b"><code>a9119bd</code></a>
distuv: add non-central t distribution</li>
<li><a
href="https://github.com/gonum/gonum/commit/27d16a49cbd53b5bd83509f52ecc0b9a00f4de06"><code>27d16a4</code></a>
spatial/r2: increase box scale test tolerance</li>
<li><a
href="https://github.com/gonum/gonum/commit/ba05c1592d9864fe2786368ff0285bb4a8d21500"><code>ba05c15</code></a>
all: use go tool directive</li>
<li><a
href="https://github.com/gonum/gonum/commit/9c251ca02972205ba15bd868a57c53380dd468ed"><code>9c251ca</code></a>
mathext: add dilogarithm function Li2</li>
<li>Additional commits viewable in <a
href="https://github.com/gonum/gonum/compare/v0.16.0...v0.17.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=gonum.org/v1/gonum&package-manager=go_modules&previous-version=0.16.0&new-version=0.17.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 15:09:36 +00:00
dependabot[bot] 8b995e3e06 chore: bump github.com/valyala/fasthttp from 1.68.0 to 1.69.0 (#21479)
Bumps [github.com/valyala/fasthttp](https://github.com/valyala/fasthttp)
from 1.68.0 to 1.69.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/valyala/fasthttp/releases">github.com/valyala/fasthttp's
releases</a>.</em></p>
<blockquote>
<h2>v1.69.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Add sortkeys by <a
href="https://github.com/pjebs"><code>@​pjebs</code></a> in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2118">valyala/fasthttp#2118</a></li>
<li>Expose header parsing error variables by <a
href="https://github.com/ReneWerner87"><code>@​ReneWerner87</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2096">valyala/fasthttp#2096</a></li>
<li>Add documentation that modifying during iteration can panic by <a
href="https://github.com/erikdubbelboer"><code>@​erikdubbelboer</code></a>
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2122">valyala/fasthttp#2122</a></li>
<li>update readme by <a
href="https://github.com/pjebs"><code>@​pjebs</code></a> in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2114">valyala/fasthttp#2114</a></li>
<li>chore(deps): bump actions/upload-artifact from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2092">valyala/fasthttp#2092</a></li>
<li>chore(deps): bump golangci/golangci-lint-action from 8 to 9 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2095">valyala/fasthttp#2095</a></li>
<li>chore(deps): bump golang.org/x/sys from 0.37.0 to 0.38.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2094">valyala/fasthttp#2094</a></li>
<li>chore(deps): bump golang.org/x/crypto from 0.43.0 to 0.44.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2098">valyala/fasthttp#2098</a></li>
<li>chore(deps): bump golang.org/x/net from 0.46.0 to 0.47.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2097">valyala/fasthttp#2097</a></li>
<li>chore(deps): bump golang.org/x/crypto from 0.44.0 to 0.45.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2099">valyala/fasthttp#2099</a></li>
<li>chore(deps): bump actions/checkout from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2101">valyala/fasthttp#2101</a></li>
<li>chore(deps): bump github.com/klauspost/compress from 1.18.1 to
1.18.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2103">valyala/fasthttp#2103</a></li>
<li>chore(deps): bump golang.org/x/net from 0.47.0 to 0.48.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2109">valyala/fasthttp#2109</a></li>
<li>chore(deps): bump actions/upload-artifact from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2111">valyala/fasthttp#2111</a></li>
<li>chore(deps): bump securego/gosec from 2.22.10 to 2.22.11 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/valyala/fasthttp/pull/2110">valyala/fasthttp#2110</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/valyala/fasthttp/compare/v1.68.0...v1.69.0">https://github.com/valyala/fasthttp/compare/v1.68.0...v1.69.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/valyala/fasthttp/commit/7cf1fb7967ea5fe8c4ab6380d2e5885a9ff7b540"><code>7cf1fb7</code></a>
Add documentation that modifying during iteration can panic (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2122">#2122</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/7b5cb77b95e2200cab14572519bd7dfdcc55fdeb"><code>7b5cb77</code></a>
Add sortkeys (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2118">#2118</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/42f89fbefde644b077e1caef94fb3e5741c4c595"><code>42f89fb</code></a>
update readme (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2114">#2114</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/fb6b6d160c1f7dcfff5b79f1f8efb231c4bb2abf"><code>fb6b6d1</code></a>
chore(deps): bump securego/gosec from 2.22.10 to 2.22.11 (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2110">#2110</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/fe7e70d901b8ec24a68558e17eeb2c30ad0fec9c"><code>fe7e70d</code></a>
chore(deps): bump actions/upload-artifact from 5 to 6 (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2111">#2111</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/69ef8f70f62b1fd4aefa96c5d73a5834c0cc942e"><code>69ef8f7</code></a>
chore(deps): bump golang.org/x/net from 0.47.0 to 0.48.0 (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2109">#2109</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/c2db56193f8baf0864735bcff0369bbd1f8c6d0d"><code>c2db561</code></a>
chore(deps): bump github.com/klauspost/compress from 1.18.1 to 1.18.2
(<a
href="https://redirect.github.com/valyala/fasthttp/issues/2103">#2103</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/ec00ff0e62071e5915a988ee79391b65e84b5453"><code>ec00ff0</code></a>
chore(deps): bump actions/checkout from 5 to 6 (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2101">#2101</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/5d415acb4e79ebd008bffea29e9d81986e3da346"><code>5d415ac</code></a>
chore(deps): bump golang.org/x/crypto from 0.44.0 to 0.45.0 (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2099">#2099</a>)</li>
<li><a
href="https://github.com/valyala/fasthttp/commit/cc8220f6920689b15893c4e81bef71d9875e9c7b"><code>cc8220f</code></a>
chore(deps): bump golang.org/x/net from 0.46.0 to 0.47.0 (<a
href="https://redirect.github.com/valyala/fasthttp/issues/2097">#2097</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/valyala/fasthttp/compare/v1.68.0...v1.69.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/valyala/fasthttp&package-manager=go_modules&previous-version=1.68.0&new-version=1.69.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 15:08:10 +00:00
dependabot[bot] 2c2c67665f ci: bump the github-actions group across 1 directory with 3 updates (#21482)
Bumps the github-actions group with 3 updates in the / directory:
[dependabot/fetch-metadata](https://github.com/dependabot/fetch-metadata),
[nix-community/cache-nix-action](https://github.com/nix-community/cache-nix-action)
and
[toshimaru/auto-author-assign](https://github.com/toshimaru/auto-author-assign).

Updates `dependabot/fetch-metadata` from 2.4.0 to 2.5.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/dependabot/fetch-metadata/releases">dependabot/fetch-metadata's
releases</a>.</em></p>
<blockquote>
<h2>v2.5.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump actions/publish-immutable-action from 0.0.3 to 0.0.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/628">dependabot/fetch-metadata#628</a></li>
<li>Bump the dev-dependencies group with 11 updates by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/629">dependabot/fetch-metadata#629</a></li>
<li>Bump actions/create-github-app-token from 2.0.6 to 2.1.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/635">dependabot/fetch-metadata#635</a></li>
<li>Bump actions/create-github-app-token from 2.1.1 to 2.1.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/638">dependabot/fetch-metadata#638</a></li>
<li>Bump actions/checkout from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/636">dependabot/fetch-metadata#636</a></li>
<li>Bump actions/setup-node from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/637">dependabot/fetch-metadata#637</a></li>
<li>Bump actions/setup-node from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/639">dependabot/fetch-metadata#639</a></li>
<li>Bump actions/create-github-app-token from 2.1.4 to 2.2.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/643">dependabot/fetch-metadata#643</a></li>
<li>Bump actions/checkout from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/642">dependabot/fetch-metadata#642</a></li>
<li>Bump actions/create-github-app-token from 2.2.0 to 2.2.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/648">dependabot/fetch-metadata#648</a></li>
<li>Bump js-yaml from 3.14.1 to 3.14.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/644">dependabot/fetch-metadata#644</a></li>
<li>Bump express from 5.1.0 to 5.2.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/645">dependabot/fetch-metadata#645</a></li>
<li>Bump <code>@​modelcontextprotocol/sdk</code> from 1.11.2 to 1.24.0
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/647">dependabot/fetch-metadata#647</a></li>
<li>v2.5.0 by <a
href="https://github.com/fetch-metadata-action-automation"><code>@​fetch-metadata-action-automation</code></a>[bot]
in <a
href="https://redirect.github.com/dependabot/fetch-metadata/pull/631">dependabot/fetch-metadata#631</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/dependabot/fetch-metadata/compare/v2...v2.5.0">https://github.com/dependabot/fetch-metadata/compare/v2...v2.5.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/21025c705c08248db411dc16f3619e6b5f9ea21a"><code>21025c7</code></a>
v2.5.0</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/252291c4909623444d34d29176583b6bae564c4a"><code>252291c</code></a>
Merge pull request <a
href="https://redirect.github.com/dependabot/fetch-metadata/issues/647">#647</a>
from dependabot/dependabot/npm_and_yarn/modelcontextp...</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/fa144c97df0d508a206af2a27295ecc2935effbd"><code>fa144c9</code></a>
chore: Migrate jest expectation function</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/33c7a0bfc8c64c28af2c81b3431ef4c59ec496b4"><code>33c7a0b</code></a>
bug: Mock PR body in test</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/99c27add52552e57615946e8e3e30bb1e06c907f"><code>99c27ad</code></a>
Bump <code>@​modelcontextprotocol/sdk</code> from 1.11.2 to 1.24.0</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/3837dcc013fa49857b3ce43e5e985c87b36856fe"><code>3837dcc</code></a>
Merge pull request <a
href="https://redirect.github.com/dependabot/fetch-metadata/issues/645">#645</a>
from dependabot/dependabot/npm_and_yarn/express-5.2.1</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/d411582f801e564114e3c0e221a9301030b6b7dd"><code>d411582</code></a>
Bump express from 5.1.0 to 5.2.1</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/186ccbbe83ea100061d2a4e5ad1e78372b949c3f"><code>186ccbb</code></a>
Merge pull request <a
href="https://redirect.github.com/dependabot/fetch-metadata/issues/644">#644</a>
from dependabot/dependabot/npm_and_yarn/js-yaml-3.14.2</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/84c891ecc223caac49af317368a1df9d6fb72ff7"><code>84c891e</code></a>
Bump js-yaml from 3.14.1 to 3.14.2</li>
<li><a
href="https://github.com/dependabot/fetch-metadata/commit/4542092e926ee0072c057475cbe8b76968714a21"><code>4542092</code></a>
Merge pull request <a
href="https://redirect.github.com/dependabot/fetch-metadata/issues/648">#648</a>
from dependabot/dependabot/github_actions/actions/cre...</li>
<li>Additional commits viewable in <a
href="https://github.com/dependabot/fetch-metadata/compare/08eff52bf64351f401fb50d4972fa95b9f2c2d1b...21025c705c08248db411dc16f3619e6b5f9ea21a">compare
view</a></li>
</ul>
</details>
<br />

Updates `nix-community/cache-nix-action` from 6.1.3 to 7.0.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/nix-community/cache-nix-action/releases">nix-community/cache-nix-action's
releases</a>.</em></p>
<blockquote>
<h2>v7.0.0</h2>
<h2>What's Changed</h2>
<h3>Breaking changes</h3>
<ul>
<li>Cache only <code>/nix</code> by default by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/261">nix-community/cache-nix-action#261</a></li>
<li>Improve <code>saveFromGC</code> by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/253">nix-community/cache-nix-action#253</a></li>
<li>Update dependencies by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/228">nix-community/cache-nix-action#228</a></li>
</ul>
<h3>Added</h3>
<ul>
<li>Support ca-derivations by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/130">nix-community/cache-nix-action#130</a></li>
<li>Support <code>cachix/install-nix-action</code> and
<code>DeterminateSystems/determinate-nix-action</code> by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/234">nix-community/cache-nix-action#234</a></li>
<li>Support custom cache URL by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/244">nix-community/cache-nix-action#244</a></li>
<li>Use Temporal by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/260">nix-community/cache-nix-action#260</a></li>
</ul>
<h3>Fixed</h3>
<ul>
<li>Fix assumptions in nix commands by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/240">nix-community/cache-nix-action#240</a></li>
<li>Install sqlite on macOS only when it's missing and if there's at
least one cache to restore by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/241">nix-community/cache-nix-action#241</a></li>
<li>Run <code>zstd</code> in multi-threaded mode by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/243">nix-community/cache-nix-action#243</a></li>
<li>Align with upstream by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/249">nix-community/cache-nix-action#249</a></li>
<li>Update saveFromGC package by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/254">nix-community/cache-nix-action#254</a></li>
<li>Fix skipping restore on hit primary key by <a
href="https://github.com/deemp"><code>@​deemp</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/259">nix-community/cache-nix-action#259</a></li>
</ul>
<h3>Changed (docs)</h3>
<ul>
<li>fix <code>nix_conf</code> example in readme by <a
href="https://github.com/peterbecich"><code>@​peterbecich</code></a> in
<a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/132">nix-community/cache-nix-action#132</a></li>
<li>add <code>nothing-but-nix</code> to readme by <a
href="https://github.com/peterbecich"><code>@​peterbecich</code></a> in
<a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/162">nix-community/cache-nix-action#162</a></li>
<li>Update status of <code>magic-nix-cache-action</code> by <a
href="https://github.com/lucperkins"><code>@​lucperkins</code></a> in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/161">nix-community/cache-nix-action#161</a></li>
</ul>
<h3>Changed (deps)</h3>
<!-- raw HTML omitted -->
<ul>
<li>chore(deps): bump actions/checkout from 4 to 5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/183">nix-community/cache-nix-action#183</a></li>
<li>chore(deps-dev): bump eslint from 9.22.0 to 9.37.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/207">nix-community/cache-nix-action#207</a></li>
<li>chore(deps-dev): bump eslint-plugin-import from 2.31.0 to 2.32.0 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/210">nix-community/cache-nix-action#210</a></li>
<li>chore(deps-dev): bump <code>@​typescript-eslint/parser</code> from
8.26.1 to 8.46.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/208">nix-community/cache-nix-action#208</a></li>
<li>chore(deps-dev): bump ts-jest from 29.2.6 to 29.4.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/200">nix-community/cache-nix-action#200</a></li>
<li>chore(deps): bump nixbuild/nix-quick-install-action from 30 to 34 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/204">nix-community/cache-nix-action#204</a></li>
<li>chore(deps-dev): bump <code>@​typescript-eslint/eslint-plugin</code>
from 8.26.1 to 8.46.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/209">nix-community/cache-nix-action#209</a></li>
<li>chore(deps-dev): bump eslint-import-resolver-typescript from 3.8.3
to 4.4.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/143">nix-community/cache-nix-action#143</a></li>
<li>chore(deps-dev): bump eslint-plugin-n from 17.16.2 to 17.23.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/215">nix-community/cache-nix-action#215</a></li>
<li>chore(deps-dev): bump nock from 14.0.1 to 14.0.10 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/213">nix-community/cache-nix-action#213</a></li>
<li>chore(deps-dev): bump ts-jest from 29.4.4 to 29.4.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/211">nix-community/cache-nix-action#211</a></li>
<li>chore(deps-dev): bump eslint-plugin-jest from 28.11.0 to 29.0.1 by
<a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/214">nix-community/cache-nix-action#214</a></li>
<li>chore(deps): bump actions/checkout from 5 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/220">nix-community/cache-nix-action#220</a></li>
<li>chore(deps): bump dedent from 1.5.3 to 1.7.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/218">nix-community/cache-nix-action#218</a></li>
<li>chore(deps-dev): bump prettier from 3.5.3 to 3.6.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/nix-community/cache-nix-action/pull/216">nix-community/cache-nix-action#216</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/b426b118b6dc86d6952988d396aa7c6b09776d08"><code>b426b11</code></a>
chore: update docs</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/4bec4a908ea92e7c1b67b20cc4fd603014a22e1c"><code>4bec4a9</code></a>
fix(readme): improve the typical job explanation</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/a084f54b888218ed2c3f358e3d5a6ae5af164b25"><code>a084f54</code></a>
chore: update docs</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/f0ee4ceeda6370d9059e4d1356124668f4cf0bfe"><code>f0ee4ce</code></a>
fix(readme): improve the section about caching approaches</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/5764445d30f0763098b7a4ccbdaf01419d666e99"><code>5764445</code></a>
fix(readme): improve example - show how to use ISO 8601 duration format
in `p...</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/7b6e0ca65529ad4f25cc125059556d432556f564"><code>7b6e0ca</code></a>
fix(readme): improve comments</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/01b2c9a1def1aa05e61ea0fd5772ffa018f3f677"><code>01b2c9a</code></a>
Merge pull request <a
href="https://redirect.github.com/nix-community/cache-nix-action/issues/264">#264</a>
from nix-community/dependabot/npm_and_yarn/eslint-plu...</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/c62435b446f5eac45d711e3e9301350e8ac4bb16"><code>c62435b</code></a>
chore(deps-dev): bump eslint-plugin-jest from 29.11.2 to 29.12.0</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/69bb33a85010f6093f94a43682182f5455b2c18d"><code>69bb33a</code></a>
fix(readme): explain which files get restored</li>
<li><a
href="https://github.com/nix-community/cache-nix-action/commit/507f991008894d9be5f9cf90f38caaf3dcb650a2"><code>507f991</code></a>
Merge pull request <a
href="https://redirect.github.com/nix-community/cache-nix-action/issues/261">#261</a>
from nix-community/cache-only-nix-store</li>
<li>Additional commits viewable in <a
href="https://github.com/nix-community/cache-nix-action/compare/135667ec418502fa5a3598af6fb9eb733888ce6a...b426b118b6dc86d6952988d396aa7c6b09776d08">compare
view</a></li>
</ul>
</details>
<br />

Updates `toshimaru/auto-author-assign` from 2.1.1 to 3.0.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/toshimaru/auto-author-assign/releases">toshimaru/auto-author-assign's
releases</a>.</em></p>
<blockquote>
<h2>v3.0.1</h2>
<!-- raw HTML omitted -->
<h2>What's Changed</h2>
<h3>Dependencies</h3>
<ul>
<li>build(deps): bump <code>@​actions/core</code> from 1.11.1 to 2.0.1
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/122">toshimaru/auto-author-assign#122</a></li>
</ul>
<h3>Chores</h3>
<ul>
<li>chore(main): release 3.0.1 by <a
href="https://github.com/github-actions"><code>@​github-actions</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/138">toshimaru/auto-author-assign#138</a></li>
<li>Replace ubuntu-latest with ubuntu-slim across workflows and
documentation by <a
href="https://github.com/Copilot"><code>@​Copilot</code></a> in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/137">toshimaru/auto-author-assign#137</a></li>
<li>Add workflow_dispatch trigger to release-please workflow by <a
href="https://github.com/Copilot"><code>@​Copilot</code></a> in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/136">toshimaru/auto-author-assign#136</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/toshimaru/auto-author-assign/compare/v3.0.0...v3.0.1">https://github.com/toshimaru/auto-author-assign/compare/v3.0.0...v3.0.1</a></p>
<h2>v3.0.0</h2>
<!-- raw HTML omitted -->
<h2>What's Changed</h2>
<ul>
<li>Bump Node.js from 20 to 24 by <a
href="https://github.com/toshimaru"><code>@​toshimaru</code></a> in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/128">toshimaru/auto-author-assign#128</a></li>
<li>Migrate from standard-version to release-please by <a
href="https://github.com/toshimaru"><code>@​toshimaru</code></a> in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/129">toshimaru/auto-author-assign#129</a></li>
<li>feat: Add <code>npm run package</code> instead of <code>build</code>
by <a href="https://github.com/toshimaru"><code>@​toshimaru</code></a>
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/130">toshimaru/auto-author-assign#130</a></li>
</ul>
<h3>Chores</h3>
<ul>
<li>chore(main): release 3.0.0 by <a
href="https://github.com/github-actions"><code>@​github-actions</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/135">toshimaru/auto-author-assign#135</a></li>
<li>chore: Remove reviewers from dependabot.yml by <a
href="https://github.com/google-labs-jules"><code>@​google-labs-jules</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/127">toshimaru/auto-author-assign#127</a></li>
</ul>
<h3>Docs</h3>
<ul>
<li>docs(ai): Create <code>AGENTS.md</code>(<code>CLAUDE.md</code>) file
by <a href="https://github.com/toshimaru"><code>@​toshimaru</code></a>
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/125">toshimaru/auto-author-assign#125</a></li>
<li>docs: bump version to 2.1.2 in <code>README.md</code> by <a
href="https://github.com/toshimaru"><code>@​toshimaru</code></a> in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/134">toshimaru/auto-author-assign#134</a></li>
<li>docs(ai): Create build-script.md for Claude Code / Restore
<code>CHANGELOG.md</code> by <a
href="https://github.com/toshimaru"><code>@​toshimaru</code></a> in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/132">toshimaru/auto-author-assign#132</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/toshimaru/auto-author-assign/compare/v2.1.2...v3.0.0">https://github.com/toshimaru/auto-author-assign/compare/v2.1.2...v3.0.0</a></p>
<h2>v2.1.2</h2>
<!-- raw HTML omitted -->
<h2>What's Changed</h2>
<h3>Dependencies</h3>
<ul>
<li>build(deps): bump actions/setup-node from 4 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/110">toshimaru/auto-author-assign#110</a></li>
<li>build(deps): bump actions/checkout from 4 to 6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/111">toshimaru/auto-author-assign#111</a></li>
<li>build(deps): bump undici from 5.28.4 to 5.29.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/118">toshimaru/auto-author-assign#118</a></li>
<li>build(deps): bump <code>@​octokit/plugin-paginate-rest</code> from
9.1.5 to 9.2.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/115">toshimaru/auto-author-assign#115</a></li>
<li>build(deps-dev): bump <code>@​vercel/ncc</code> from 0.38.1 to
0.38.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/114">toshimaru/auto-author-assign#114</a></li>
<li>build(deps): bump <code>@​actions/core</code> from 1.10.1 to 1.11.1
by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a>[bot]
in <a
href="https://redirect.github.com/toshimaru/auto-author-assign/pull/105">toshimaru/auto-author-assign#105</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/toshimaru/auto-author-assign/blob/main/CHANGELOG.md">toshimaru/auto-author-assign's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
<h2><a
href="https://github.com/toshimaru/auto-author-assign/compare/v3.0.0...v3.0.1">3.0.1</a>
(2025-12-25)</h2>
<h3>Miscellaneous Chores</h3>
<ul>
<li>release 3.0.1 (<a
href="https://github.com/toshimaru/auto-author-assign/commit/718d4ed5349747d47952ae841ae03fcbdd74ebea">718d4ed</a>)</li>
</ul>
<h2><a
href="https://github.com/toshimaru/auto-author-assign/compare/v2.1.2...v3.0.0">3.0.0</a>
(2025-12-21)</h2>
<h3>Features</h3>
<ul>
<li>Add <code>npm run package</code> instead of <code>build</code> (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/130">#130</a>)
(<a
href="https://github.com/toshimaru/auto-author-assign/commit/972720f0403d2873e807f16e350c5b0b1be4dda3">972720f</a>)</li>
</ul>
<h3>Miscellaneous Chores</h3>
<ul>
<li>release 3.0.0 (<a
href="https://github.com/toshimaru/auto-author-assign/commit/d100ceff34d1e9cd2c4ea5b8055922f1409f3068">d100cef</a>)</li>
</ul>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v2.1.1...v2.1.2">2.1.2</a>
(2025-12-16)</h3>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v2.1.0...v2.1.1">2.1.1</a>
(2024-06-26)</h3>
<h2><a
href="https://github.com/toshimaru/auto-author-assign/compare/v2.0.1...v2.1.0">2.1.0</a>
(2024-01-17)</h2>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v2.0.0...v2.0.1">2.0.1</a>
(2023-09-26)</h3>
<h2><a
href="https://github.com/toshimaru/auto-author-assign/compare/v1.6.2...v2.0.0">2.0.0</a>
(2023-09-24)</h2>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v1.6.1...v1.6.2">1.6.2</a>
(2023-01-03)</h3>
<ul>
<li>chore: dependencies update</li>
</ul>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v1.6.0...v1.6.1">1.6.1</a>
(2022-08-01)</h3>
<ul>
<li>doc: README Update</li>
</ul>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v1.5.1...v1.6.0">1.6.0</a>
(2022-07-28)</h3>
<ul>
<li>feat: Add auto-author-assign for the issues</li>
</ul>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v1.5.0...v1.5.1">1.5.1</a>
(2022-07-22)</h3>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v1.4.0...v1.5.0">1.5.0</a>
(2022-03-28)</h3>
<ul>
<li>Bump node from node12 to node16</li>
</ul>
<h3><a
href="https://github.com/toshimaru/auto-author-assign/compare/v1.3.7...v1.4.0">1.4.0</a>
(2021-10-17)</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/4d585cc37690897bd9015942ed6e766aa7cdb97f"><code>4d585cc</code></a>
chore(main): release 3.0.1 (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/138">#138</a>)</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/718d4ed5349747d47952ae841ae03fcbdd74ebea"><code>718d4ed</code></a>
chore: release 3.0.1</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/4a5388d22f6d4ff1d3dd731718ecef020b6ba4d7"><code>4a5388d</code></a>
build(deps): bump <code>@​actions/core</code> from 1.11.1 to 2.0.1 (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/122">#122</a>)</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/988cabb6fa31f6fbe7445a9404c4a81c595da880"><code>988cabb</code></a>
Add workflow_dispatch to release-please.yml (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/136">#136</a>)</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/fccc493a2659c5efe9f9f5afbbba91afb29a8a2f"><code>fccc493</code></a>
Replace ubuntu-latest with ubuntu-slim across workflows and
documentation (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/137">#137</a>)</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/c66af760da33f680c9baa5e8aa27c3a933b11593"><code>c66af76</code></a>
chore(main): release 3.0.0 (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/135">#135</a>)</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/d100ceff34d1e9cd2c4ea5b8055922f1409f3068"><code>d100cef</code></a>
chore: release 3.0.0</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/a076d1056015d81890e49a0cea0d907609200384"><code>a076d10</code></a>
docs: bump version to 2.1.2 in <code>README.md</code> (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/134">#134</a>)</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/e7df92b95b730fface0fd16ad67929d77df07251"><code>e7df92b</code></a>
docs(ai): Create build-script.md for Claude Code / Restore
<code>CHANGELOG.md</code> (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/132">#132</a>)</li>
<li><a
href="https://github.com/toshimaru/auto-author-assign/commit/972720f0403d2873e807f16e350c5b0b1be4dda3"><code>972720f</code></a>
feat: Add <code>npm run package</code> instead of <code>build</code> (<a
href="https://redirect.github.com/toshimaru/auto-author-assign/issues/130">#130</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/toshimaru/auto-author-assign/compare/16f0022cf3d7970c106d8d1105f75a1165edb516...4d585cc37690897bd9015942ed6e766aa7cdb97f">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 15:07:29 +00:00
dependabot[bot] 4e8e158ee4 chore: bump the x group with 4 updates (#21477)
Bumps the x group with 4 updates:
[golang.org/x/mod](https://github.com/golang/mod),
[golang.org/x/sys](https://github.com/golang/sys),
[golang.org/x/term](https://github.com/golang/term) and
[golang.org/x/text](https://github.com/golang/text).

Updates `golang.org/x/mod` from 0.31.0 to 0.32.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/mod/commit/4c04067938546e62fc0572259a68a6912726bcdd"><code>4c04067</code></a>
go.mod: update golang.org/x dependencies</li>
<li>See full diff in <a
href="https://github.com/golang/mod/compare/v0.31.0...v0.32.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/sys` from 0.39.0 to 0.40.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/sys/commit/2f442297556c884f9b52fc6ef7280083f4d65023"><code>2f44229</code></a>
sys/cpu: add symbolic constants for remaining cpuid bits</li>
<li><a
href="https://github.com/golang/sys/commit/e5770d27b7f2fca0e959b31bdb18fad4afba8565"><code>e5770d2</code></a>
sys/cpu: use symbolic names for masks</li>
<li><a
href="https://github.com/golang/sys/commit/714a44c845225bf4314182db4c910ef151c32d2f"><code>714a44c</code></a>
sys/cpu: modify x86 port to match what internal/cpu does</li>
<li>See full diff in <a
href="https://github.com/golang/sys/compare/v0.39.0...v0.40.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/term` from 0.38.0 to 0.39.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/term/commit/a7e5b0437ffa3159709172efbe396bc546550e23"><code>a7e5b04</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="https://github.com/golang/term/commit/943f25d3595f79ce29c4175d889758d38b375688"><code>943f25d</code></a>
x/term: handle transpose</li>
<li><a
href="https://github.com/golang/term/commit/9b991dd831b8a478f9fc99a0b39b492b4e25a3c0"><code>9b991dd</code></a>
x/term: handle delete key</li>
<li>See full diff in <a
href="https://github.com/golang/term/compare/v0.38.0...v0.39.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `golang.org/x/text` from 0.32.0 to 0.33.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/golang/text/commit/536231a9abc69feaab8d726b5ec75ee8d3620829"><code>536231a</code></a>
go.mod: update golang.org/x dependencies</li>
<li>See full diff in <a
href="https://github.com/golang/text/compare/v0.32.0...v0.33.0">compare
view</a></li>
</ul>
</details>
<br />



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 15:05:13 +00:00
Kacper Sawicki 6ca70d3618 feat(cli): add --no-build flag to state push for state-only updates (#21374)
## Summary

Adds a `--no-build` flag to `coder state push` that updates the
Terraform state directly without triggering a workspace build.

## Use Case

This enables state-only migrations, such as migrating Kubernetes
resources from deprecated types (e.g., `kubernetes_config_map`) to
versioned types (e.g., `kubernetes_config_map_v1`):

```bash
coder state pull my-workspace > state.json
terraform init
terraform state rm -state=state.json kubernetes_config_map.example
terraform import -state=state.json kubernetes_config_map_v1.example default/example
coder state push --no-build my-workspace state.json
```

## Changes

- Add `PUT /api/v2/workspacebuilds/{id}/state` endpoint to update state
without triggering a build
- Add `UpdateWorkspaceBuildState` SDK method
- Add `--no-build`/`-n` flag to `coder state push`
- Add confirmation prompt (can be skipped with `--yes`/`-y`) since this
is a potentially dangerous operation
- Add test for `--no-build` functionality

Fixes #21336
2026-01-12 15:16:59 +01:00
Ehab Younes a581431bc8 fix(site): show apps with disabled health status on workspaces list (#21428)
- Fix to display apps with disabled health status on workspaces list
- Migrate WorkspacesPage jest test into vitest
- Modularize vitest setup into separate files:
  - setup/polyfills.ts: Blob, ResizeObserver polyfills
  - setup/domStubs.ts: Radix UI pointer capture stubs
  - setup/mocks.ts: useProxyLatency mock
  - setup/msw.ts: MSW server lifecycle

Fixed #20319
2026-01-12 13:37:30 +03:00
dependabot[bot] d5100543ea chore: bump the coder-modules group across 3 directories with 4 updates (#21474)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 00:44:22 +00:00
Zach 091d31224d fix: replace moby/moby namesgenerator with internal implementation (#21377)
Replace the external moby/moby/pkg/namesgenerator dependency with an
internal implementation using gofakeit/v7. The moby package has ~25k
unique name combinations, and with its retry parameter only adds a
random digit 0-9, giving ~250k possibilities. In parallel tests, this
has led to collisions (flakes).

The new internal API at coderd/util/namesgenerator eliminates the
external dependency and offers functions with explicit uniqueness
guarantees. This PR also consolidates fragmented name generation in a
few places to use the new package.

| Old (moby/moby)                     | New                    |
|-------------------------------------|------------------------|
| namesgenerator.GetRandomName(0)     | NameWith("_")          |
| namesgenerator.GetRandomName(>0)    | NameDigitWith("_")     |
| testutil.GetRandomName(t)           | UniqueName()           |
| testutil.GetRandomNameHyphenated(t) | UniqueNameWith("-")    |

namesgenerator package API:
- NameWith(delim): random name, not unique
- NameDigitWith(delim): random name with 1-9 suffix, not unique
- UniqueName(): guaranteed unique via atomic counter
- UniqueNameWith(delim): unique with custom delimiter

Names continue to be Docker-style `[adjective][delim][surname]`. Unique
names are truncated to 32 characters (preserving the numeric suffix) to
fit common name length limits in Coder.

Related test flakes:
https://github.com/coder/internal/issues/1212
https://github.com/coder/internal/issues/118
https://github.com/coder/internal/issues/1068
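The uniqueness guarantee described above can be sketched in a few lines. This is an illustrative stand-in (fixed base words, hypothetical `uniqueNameWith` helper), not the package's actual implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"sync/atomic"
)

var counter atomic.Int64

// uniqueNameWith sketches the UniqueNameWith behavior described above:
// a Docker-style [adjective][delim][surname] base plus an atomic-counter
// suffix, truncated to 32 characters while preserving the numeric suffix.
// The base is fixed here for illustration; the real package picks random words.
func uniqueNameWith(delim string) string {
	base := "magnificent" + delim + "ramanujan"
	suffix := strconv.FormatInt(counter.Add(1), 10)
	const maxLen = 32
	if len(base)+len(suffix) > maxLen {
		base = base[:maxLen-len(suffix)]
	}
	return base + suffix
}

func main() {
	fmt.Println(uniqueNameWith("-"))
	fmt.Println(uniqueNameWith("-"))
}
```

The atomic counter makes collisions impossible within a process, which is exactly the property the flaky parallel tests needed.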
2026-01-09 15:40:26 -07:00
165 changed files with 11182 additions and 933 deletions
+1
@@ -71,6 +71,7 @@ runs:
if [[ ${RACE_DETECTION} == true ]]; then
gotestsum --junitfile="gotests.xml" --packages="${TEST_PACKAGES}" -- \
-tags=testsmallbatch \
-race \
-parallel "${TEST_NUM_PARALLEL_TESTS}" \
-p "${TEST_NUM_PARALLEL_PACKAGES}"
+1 -1
@@ -23,7 +23,7 @@ jobs:
steps:
- name: Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@08eff52bf64351f401fb50d4972fa95b9f2c2d1b # v2.4.0
uses: dependabot/fetch-metadata@21025c705c08248db411dc16f3619e6b5f9ea21a # v2.5.0
with:
github-token: "${{ secrets.GITHUB_TOKEN }}"
+1 -1
@@ -42,7 +42,7 @@ jobs:
# on version 2.29 and above.
nix_version: "2.28.5"
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
- uses: nix-community/cache-nix-action@b426b118b6dc86d6952988d396aa7c6b09776d08 # v7.0.0
with:
# restore and save a cache using this key
primary-key: nix-${{ runner.os }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+1 -1
@@ -20,4 +20,4 @@ jobs:
egress-policy: audit
- name: Assign author
uses: toshimaru/auto-author-assign@16f0022cf3d7970c106d8d1105f75a1165edb516 # v2.1.1
uses: toshimaru/auto-author-assign@4d585cc37690897bd9015942ed6e766aa7cdb97f # v3.0.1
+1
@@ -3,6 +3,7 @@
.eslintcache
.gitpod.yml
.idea
.run
**/*.swp
gotests.coverage
gotests.xml
+8
@@ -211,6 +211,14 @@ issues:
- path: scripts/rules.go
linters:
- ALL
# Boundary code is imported from github.com/coder/boundary and has different
# lint standards. Suppress lint issues in this imported code.
- path: enterprise/cli/boundary/
linters:
- revive
- gocritic
- gosec
- errorlint
fix: true
max-issues-per-linter: 0
+13 -2
@@ -1018,7 +1018,8 @@ endif
# default to 8x8 parallelism to avoid overwhelming our workspaces. Hopefully we can remove these defaults
# when we get our test suite's resource utilization under control.
GOTEST_FLAGS := -v -p $(or $(TEST_NUM_PARALLEL_PACKAGES),"8") -parallel=$(or $(TEST_NUM_PARALLEL_TESTS),"8")
# Use testsmallbatch tag to reduce wireguard memory allocation in tests (from ~18GB to negligible).
GOTEST_FLAGS := -tags=testsmallbatch -v -p $(or $(TEST_NUM_PARALLEL_PACKAGES),"8") -parallel=$(or $(TEST_NUM_PARALLEL_TESTS),"8")
# The most common use is to set TEST_COUNT=1 to avoid Go's test cache.
ifdef TEST_COUNT
@@ -1033,6 +1034,14 @@ ifdef RUN
GOTEST_FLAGS += -run $(RUN)
endif
ifdef TEST_CPUPROFILE
GOTEST_FLAGS += -cpuprofile=$(TEST_CPUPROFILE)
endif
ifdef TEST_MEMPROFILE
GOTEST_FLAGS += -memprofile=$(TEST_MEMPROFILE)
endif
TEST_PACKAGES ?= ./...
test:
@@ -1081,6 +1090,7 @@ test-postgres: test-postgres-docker
--jsonfile="gotests.json" \
$(GOTESTSUM_RETRY_FLAGS) \
--packages="./..." -- \
-tags=testsmallbatch \
-timeout=20m \
-count=1
.PHONY: test-postgres
@@ -1153,7 +1163,7 @@ test-postgres-docker:
# Make sure to keep this in sync with test-go-race from .github/workflows/ci.yaml.
test-race:
$(GIT_FLAGS) gotestsum --junitfile="gotests.xml" -- -race -count=1 -parallel 4 -p 4 ./...
$(GIT_FLAGS) gotestsum --junitfile="gotests.xml" -- -tags=testsmallbatch -race -count=1 -parallel 4 -p 4 ./...
.PHONY: test-race
test-tailnet-integration:
@@ -1163,6 +1173,7 @@ test-tailnet-integration:
TS_DEBUG_NETCHECK=true \
GOTRACEBACK=single \
go test \
-tags=testsmallbatch \
-exec "sudo -E" \
-timeout=5m \
-count=1 \
+10 -4
@@ -1,12 +1,18 @@
package cli
import (
boundarycli "github.com/coder/boundary/cli"
"golang.org/x/xerrors"
"github.com/coder/serpent"
)
func (*RootCmd) boundary() *serpent.Command {
cmd := boundarycli.BaseCommand() // Package coder/boundary/cli exports a "base command" designed to be integrated as a subcommand.
cmd.Use += " [args...]" // The base command looks like `boundary -- command`. Serpent adds the flags piece, but we need to add the args.
return cmd
return &serpent.Command{
Use: "boundary",
Short: "Network isolation tool for monitoring and restricting HTTP/HTTPS requests (enterprise)",
Long: `boundary creates an isolated network environment for target processes. This is an enterprise feature.`,
Handler: func(_ *serpent.Invocation) error {
return xerrors.New("boundary is an enterprise feature; upgrade to use this command")
},
}
}
+3 -7
@@ -5,15 +5,13 @@ import (
"github.com/stretchr/testify/assert"
boundarycli "github.com/coder/boundary/cli"
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/pty/ptytest"
"github.com/coder/coder/v2/testutil"
)
// Actually testing the functionality of coder/boundary takes place in the
// coder/boundary repo, since it's a dependency of coder.
// Here we want to test basically that integrating it as a subcommand doesn't break anything.
// Here we want to test that integrating boundary as a subcommand doesn't break anything.
// The full boundary functionality is tested in enterprise/cli.
func TestBoundarySubcommand(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
@@ -27,7 +25,5 @@ func TestBoundarySubcommand(t *testing.T) {
}()
// Expect the --help output to include the short description.
// We're simply confirming that `coder boundary --help` ran without a runtime error, as
// a good chunk of serpent's self-validation logic happens at runtime.
pty.ExpectMatch(boundarycli.BaseCommand().Short)
pty.ExpectMatch("Network isolation tool")
}
+17
@@ -684,6 +684,7 @@ func (r *RootCmd) HeaderTransport(ctx context.Context, serverURL *url.URL) (*cod
func (r *RootCmd) createHTTPClient(ctx context.Context, serverURL *url.URL, inv *serpent.Invocation) (*http.Client, error) {
transport := http.DefaultTransport
transport = wrapTransportWithTelemetryHeader(transport, inv)
transport = wrapTransportWithUserAgentHeader(transport, inv)
if !r.noVersionCheck {
transport = wrapTransportWithVersionMismatchCheck(transport, inv, buildinfo.Version(), func(ctx context.Context) (codersdk.BuildInfoResponse, error) {
// Create a new client without any wrapped transport
@@ -1497,6 +1498,22 @@ func wrapTransportWithTelemetryHeader(transport http.RoundTripper, inv *serpent.
})
}
// wrapTransportWithUserAgentHeader sets a User-Agent header for all CLI requests
// that includes the CLI version, os/arch, and the specific command being run.
func wrapTransportWithUserAgentHeader(transport http.RoundTripper, inv *serpent.Invocation) http.RoundTripper {
var (
userAgent string
once sync.Once
)
return roundTripper(func(req *http.Request) (*http.Response, error) {
once.Do(func() {
userAgent = fmt.Sprintf("coder-cli/%s (%s/%s; %s)", buildinfo.Version(), runtime.GOOS, runtime.GOARCH, inv.Command.FullName())
})
req.Header.Set("User-Agent", userAgent)
return transport.RoundTrip(req)
})
}
type roundTripper func(req *http.Request) (*http.Response, error)
func (r roundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
+56
@@ -380,3 +380,59 @@ func agentClientCommand(clientRef **agentsdk.Client) *serpent.Command {
agentAuth.AttachOptions(cmd, false)
return cmd
}
func TestWrapTransportWithUserAgentHeader(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
cmdArgs []string
cmdEnv map[string]string
expectedUserAgentHeader string
}{
{
name: "top-level command",
cmdArgs: []string{"login"},
expectedUserAgentHeader: fmt.Sprintf("coder-cli/%s (%s/%s; coder login)", buildinfo.Version(), runtime.GOOS, runtime.GOARCH),
},
{
name: "nested commands",
cmdArgs: []string{"templates", "list"},
expectedUserAgentHeader: fmt.Sprintf("coder-cli/%s (%s/%s; coder templates list)", buildinfo.Version(), runtime.GOOS, runtime.GOARCH),
},
{
name: "does not include positional args, flags, or env",
cmdArgs: []string{"templates", "push", "my-template", "-d", "/path/to/template", "--yes", "--var", "myvar=myvalue"},
cmdEnv: map[string]string{"SECRET_KEY": "secret_value"},
expectedUserAgentHeader: fmt.Sprintf("coder-cli/%s (%s/%s; coder templates push)", buildinfo.Version(), runtime.GOOS, runtime.GOARCH),
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
ch := make(chan string, 1)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
select {
case ch <- r.Header.Get("User-Agent"):
default: // already sent
}
}))
t.Cleanup(srv.Close)
args := append([]string{}, tc.cmdArgs...)
inv, _ := clitest.New(t, args...)
inv.Environ.Set("CODER_URL", srv.URL)
for k, v := range tc.cmdEnv {
inv.Environ.Set(k, v)
}
ctx := testutil.Context(t, testutil.WaitShort)
_ = inv.WithContext(ctx).Run() // Ignore error as we only care about headers.
actual := testutil.RequireReceive(ctx, t, ch)
require.Equal(t, tc.expectedUserAgentHeader, actual, "User-Agent should match expected format exactly")
})
}
}
+50 -17
@@ -747,7 +747,16 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
// "bare" read on this channel.
var pubsubWatchdogTimeout <-chan struct{}
sqlDB, dbURL, err := getAndMigratePostgresDB(ctx, logger, vals.PostgresURL.String(), codersdk.PostgresAuth(vals.PostgresAuth), sqlDriver)
maxOpenConns := int(vals.PostgresConnMaxOpen.Value())
maxIdleConns, err := codersdk.ComputeMaxIdleConns(maxOpenConns, vals.PostgresConnMaxIdle.Value())
if err != nil {
return xerrors.Errorf("compute max idle connections: %w", err)
}
logger.Debug(ctx, "creating database connection pool", slog.F("max_open_conns", maxOpenConns), slog.F("max_idle_conns", maxIdleConns))
sqlDB, dbURL, err := getAndMigratePostgresDB(ctx, logger, vals.PostgresURL.String(), codersdk.PostgresAuth(vals.PostgresAuth), sqlDriver,
WithMaxOpenConns(maxOpenConns),
WithMaxIdleConns(maxIdleConns),
)
if err != nil {
return xerrors.Errorf("connect to postgres: %w", err)
}
@@ -2324,6 +2333,29 @@ func IsLocalhost(host string) bool {
return host == "localhost" || host == "127.0.0.1" || host == "::1"
}
// PostgresConnectOptions contains options for connecting to Postgres.
type PostgresConnectOptions struct {
MaxOpenConns int
MaxIdleConns int
}
// PostgresConnectOption is a functional option for ConnectToPostgres.
type PostgresConnectOption func(*PostgresConnectOptions)
// WithMaxOpenConns sets the maximum number of open connections to the database.
func WithMaxOpenConns(n int) PostgresConnectOption {
return func(o *PostgresConnectOptions) {
o.MaxOpenConns = n
}
}
// WithMaxIdleConns sets the maximum number of idle connections in the pool.
func WithMaxIdleConns(n int) PostgresConnectOption {
return func(o *PostgresConnectOptions) {
o.MaxIdleConns = n
}
}
// ConnectToPostgres takes in the migration command to run on the database once
// it connects. To avoid running migrations, pass in `nil` or a no-op function.
// Regardless of the passed in migration function, if the database is not fully
@@ -2331,7 +2363,15 @@ func IsLocalhost(host string) bool {
// future or past migration version.
//
// If no error is returned, the database is fully migrated and up to date.
func ConnectToPostgres(ctx context.Context, logger slog.Logger, driver string, dbURL string, migrate func(db *sql.DB) error) (*sql.DB, error) {
func ConnectToPostgres(ctx context.Context, logger slog.Logger, driver string, dbURL string, migrate func(db *sql.DB) error, opts ...PostgresConnectOption) (*sql.DB, error) {
// Apply defaults.
options := PostgresConnectOptions{
MaxOpenConns: 10,
MaxIdleConns: 3,
}
for _, opt := range opts {
opt(&options)
}
logger.Debug(ctx, "connecting to postgresql")
var err error
@@ -2414,19 +2454,12 @@ func ConnectToPostgres(ctx context.Context, logger slog.Logger, driver string, d
// cannot accept new connections, so we try to limit that here.
// Requests will wait for a new connection instead of a hard error
// if a limit is set.
sqlDB.SetMaxOpenConns(10)
// Allow a max of 3 idle connections at a time. Lower values end up
// creating a lot of connection churn. Since each connection uses about
// 10MB of memory, we're allocating 30MB to Postgres connections per
// replica, but is better than causing Postgres to spawn a thread 15-20
// times/sec. PGBouncer's transaction pooling is not the greatest so
// it's not optimal for us to deploy.
//
// This was set to 10 before we started doing HA deployments, but 3 was
// later determined to be a better middle ground as to not use up all
// of PGs default connection limit while simultaneously avoiding a lot
// of connection churn.
sqlDB.SetMaxIdleConns(3)
sqlDB.SetMaxOpenConns(options.MaxOpenConns)
// Limit idle connections to reduce connection churn while keeping some
// connections ready for reuse. When a connection is returned to the pool
// but the idle pool is full, it's closed immediately - which can cause
// connection establishment overhead when load fluctuates.
sqlDB.SetMaxIdleConns(options.MaxIdleConns)
dbNeedsClosing = false
return sqlDB, nil
@@ -2830,7 +2863,7 @@ func signalNotifyContext(ctx context.Context, inv *serpent.Invocation, sig ...os
return inv.SignalNotifyContext(ctx, sig...)
}
func getAndMigratePostgresDB(ctx context.Context, logger slog.Logger, postgresURL string, auth codersdk.PostgresAuth, sqlDriver string) (*sql.DB, string, error) {
func getAndMigratePostgresDB(ctx context.Context, logger slog.Logger, postgresURL string, auth codersdk.PostgresAuth, sqlDriver string, opts ...PostgresConnectOption) (*sql.DB, string, error) {
dbURL, err := escapePostgresURLUserInfo(postgresURL)
if err != nil {
return nil, "", xerrors.Errorf("escaping postgres URL: %w", err)
@@ -2843,7 +2876,7 @@ func getAndMigratePostgresDB(ctx context.Context, logger slog.Logger, postgresUR
}
}
sqlDB, err := ConnectToPostgres(ctx, logger, sqlDriver, dbURL, migrations.Up)
sqlDB, err := ConnectToPostgres(ctx, logger, sqlDriver, dbURL, migrations.Up, opts...)
if err != nil {
return nil, "", xerrors.Errorf("connect to postgres: %w", err)
}
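The functional-options shape introduced in this diff can be tried in isolation. A minimal sketch, where `connect` is a hypothetical stand-in for `ConnectToPostgres` and the defaults match the diff (10 open, 3 idle):

```go
package main

import "fmt"

// PostgresConnectOptions mirrors the options struct added in the diff above.
type PostgresConnectOptions struct {
	MaxOpenConns int
	MaxIdleConns int
}

// PostgresConnectOption is a functional option applied over the defaults.
type PostgresConnectOption func(*PostgresConnectOptions)

func WithMaxOpenConns(n int) PostgresConnectOption {
	return func(o *PostgresConnectOptions) { o.MaxOpenConns = n }
}

func WithMaxIdleConns(n int) PostgresConnectOption {
	return func(o *PostgresConnectOptions) { o.MaxIdleConns = n }
}

// connect shows how defaults are applied before variadic options override them.
func connect(opts ...PostgresConnectOption) PostgresConnectOptions {
	options := PostgresConnectOptions{MaxOpenConns: 10, MaxIdleConns: 3}
	for _, opt := range opts {
		opt(&options)
	}
	return options
}

func main() {
	fmt.Println(connect())                                          // defaults apply: {10 3}
	fmt.Println(connect(WithMaxOpenConns(50), WithMaxIdleConns(16))) // overrides: {50 16}
}
```

The variadic signature keeps existing `ConnectToPostgres` callers compiling unchanged while letting the server pass through the new pool-size flags.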
+3 -3
@@ -197,7 +197,7 @@ func TestSharingStatus(t *testing.T) {
ctx = testutil.Context(t, testutil.WaitMedium)
)
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toShareWithUser.ID.String(): codersdk.WorkspaceRoleUse,
},
@@ -248,7 +248,7 @@ func TestSharingRemove(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitMedium)
// Share the workspace with a user to later remove
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toShareWithUser.ID.String(): codersdk.WorkspaceRoleUse,
toRemoveUser.ID.String(): codersdk.WorkspaceRoleUse,
@@ -309,7 +309,7 @@ func TestSharingRemove(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitMedium)
// Share the workspace with a user to later remove
err := client.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
err := workspaceOwnerClient.UpdateWorkspaceACL(ctx, workspace.ID, codersdk.UpdateWorkspaceACL{
UserRoles: map[string]codersdk.WorkspaceRole{
toRemoveUser2.ID.String(): codersdk.WorkspaceRoleUse,
toRemoveUser1.ID.String(): codersdk.WorkspaceRoleUse,
+17
@@ -87,6 +87,7 @@ func buildNumberOption(n *int64) serpent.Option {
func (r *RootCmd) statePush() *serpent.Command {
var buildNumber int64
var noBuild bool
cmd := &serpent.Command{
Use: "push <workspace> <file>",
Short: "Push a Terraform state file to a workspace.",
@@ -126,6 +127,16 @@ func (r *RootCmd) statePush() *serpent.Command {
return err
}
if noBuild {
// Update state directly without triggering a build.
err = client.UpdateWorkspaceBuildState(inv.Context(), build.ID, state)
if err != nil {
return err
}
_, _ = fmt.Fprintln(inv.Stdout, "State updated successfully.")
return nil
}
build, err = client.CreateWorkspaceBuild(inv.Context(), workspace.ID, codersdk.CreateWorkspaceBuildRequest{
TemplateVersionID: build.TemplateVersionID,
Transition: build.Transition,
@@ -139,6 +150,12 @@ func (r *RootCmd) statePush() *serpent.Command {
}
cmd.Options = serpent.OptionSet{
buildNumberOption(&buildNumber),
{
Flag: "no-build",
FlagShorthand: "n",
Description: "Update the state without triggering a workspace build. Useful for state-only migrations.",
Value: serpent.BoolOf(&noBuild),
},
}
return cmd
}
+47
@@ -2,6 +2,7 @@ package cli_test
import (
"bytes"
"context"
"fmt"
"os"
"path/filepath"
@@ -14,6 +15,7 @@ import (
"github.com/coder/coder/v2/cli/clitest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/provisioner/echo"
@@ -157,4 +159,49 @@ func TestStatePush(t *testing.T) {
err := inv.Run()
require.NoError(t, err)
})
t.Run("NoBuild", func(t *testing.T) {
t.Parallel()
client, store := coderdtest.NewWithDatabase(t, nil)
owner := coderdtest.CreateFirstUser(t, client)
templateAdmin, taUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, rbac.RoleTemplateAdmin())
initialState := []byte("initial state")
r := dbfake.WorkspaceBuild(t, store, database.WorkspaceTable{
OrganizationID: owner.OrganizationID,
OwnerID: taUser.ID,
}).
Seed(database.WorkspaceBuild{ProvisionerState: initialState}).
Do()
wantState := []byte("updated state")
stateFile, err := os.CreateTemp(t.TempDir(), "")
require.NoError(t, err)
_, err = stateFile.Write(wantState)
require.NoError(t, err)
err = stateFile.Close()
require.NoError(t, err)
inv, root := clitest.New(t, "state", "push", "--no-build", r.Workspace.Name, stateFile.Name())
clitest.SetupConfig(t, templateAdmin, root)
var stdout bytes.Buffer
inv.Stdout = &stdout
err = inv.Run()
require.NoError(t, err)
require.Contains(t, stdout.String(), "State updated successfully")
// Verify the state was updated by pulling it.
inv, root = clitest.New(t, "state", "pull", r.Workspace.Name)
var gotState bytes.Buffer
inv.Stdout = &gotState
clitest.SetupConfig(t, templateAdmin, root)
err = inv.Run()
require.NoError(t, err)
require.Equal(t, wantState, bytes.TrimSpace(gotState.Bytes()))
// Verify no new build was created.
builds, err := store.GetWorkspaceBuildsByWorkspaceID(dbauthz.AsSystemRestricted(context.Background()), database.GetWorkspaceBuildsByWorkspaceIDParams{
WorkspaceID: r.Workspace.ID,
})
require.NoError(t, err)
require.Len(t, builds, 1, "expected only the initial build, no new build should be created")
})
}
+8
@@ -65,6 +65,14 @@ OPTIONS:
Type of auth to use when connecting to postgres. For AWS RDS, using
IAM authentication (awsiamrds) is recommended.
--postgres-conn-max-idle string, $CODER_PG_CONN_MAX_IDLE (default: auto)
Maximum number of idle connections to the database. Set to "auto" (the
default) to use max open / 3. Value must be greater or equal to 0; 0
means explicitly no idle connections.
--postgres-conn-max-open int, $CODER_PG_CONN_MAX_OPEN (default: 10)
Maximum number of open connections to the database. Defaults to 10.
--postgres-url string, $CODER_PG_CONNECTION_URL
URL of a PostgreSQL database. If empty, PostgreSQL binaries will be
downloaded from Maven (https://repo1.maven.org/maven2) and store all
+4
@@ -9,5 +9,9 @@ OPTIONS:
-b, --build int
Specify a workspace build to target by name. Defaults to latest.
-n, --no-build bool
Update the state without triggering a workspace build. Useful for
state-only migrations.
———
Run `coder --help` for a list of global options.
+15
@@ -483,6 +483,14 @@ ephemeralDeployment: false
# authentication (awsiamrds) is recommended.
# (default: password, type: enum[password\|awsiamrds])
pgAuth: password
# Maximum number of open connections to the database. Defaults to 10.
# (default: 10, type: int)
pgConnMaxOpen: 10
# Maximum number of idle connections to the database. Set to "auto" (the default)
# to use max open / 3. Value must be greater or equal to 0; 0 means explicitly no
# idle connections.
# (default: auto, type: string)
pgConnMaxIdle: auto
# A URL to an external Terms of Service that must be accepted by users when
# logging in.
# (default: <unset>, type: string)
@@ -779,6 +787,13 @@ aibridgeproxy:
# Path to the CA private key file for AI Bridge Proxy.
# (default: <unset>, type: string)
key_file: ""
# Comma-separated list of domains for which HTTPS traffic will be decrypted and
# routed through AI Bridge. Requests to other domains will be tunneled directly
# without decryption.
# (default: api.anthropic.com,api.openai.com, type: string-array)
domain_allowlist:
- api.anthropic.com
- api.openai.com
# Configure data retention policies for various database tables. Retention
# policies automatically purge old data to reduce database size and improve
# performance. Setting a retention duration to 0 disables automatic purging for
+65 -3
@@ -3349,8 +3349,8 @@ const docTemplate = `{
"tags": [
"Members"
],
"summary": "Upsert a custom organization role",
"operationId": "upsert-a-custom-organization-role",
"summary": "Update a custom organization role",
"operationId": "update-a-custom-organization-role",
"parameters": [
{
"type": "string",
@@ -3361,7 +3361,7 @@ const docTemplate = `{
"required": true
},
{
"description": "Upsert role request",
"description": "Update role request",
"name": "request",
"in": "body",
"required": true,
@@ -10225,6 +10225,45 @@ const docTemplate = `{
}
}
}
},
"put": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": [
"application/json"
],
"tags": [
"Builds"
],
"summary": "Update workspace build state",
"operationId": "update-workspace-build-state",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Workspace build ID",
"name": "workspacebuild",
"in": "path",
"required": true
},
{
"description": "Request body",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/codersdk.UpdateWorkspaceBuildStateRequest"
}
}
],
"responses": {
"204": {
"description": "No Content"
}
}
}
},
"/workspacebuilds/{workspacebuild}/timings": {
@@ -12016,6 +12055,12 @@ const docTemplate = `{
"cert_file": {
"type": "string"
},
"domain_allowlist": {
"type": "array",
"items": {
"type": "string"
}
},
"enabled": {
"type": "boolean"
},
@@ -14341,6 +14386,12 @@ const docTemplate = `{
"pg_auth": {
"type": "string"
},
"pg_conn_max_idle": {
"type": "string"
},
"pg_conn_max_open": {
"type": "integer"
},
"pg_connection_url": {
"type": "string"
},
@@ -19550,6 +19601,17 @@ const docTemplate = `{
}
}
},
"codersdk.UpdateWorkspaceBuildStateRequest": {
"type": "object",
"properties": {
"state": {
"type": "array",
"items": {
"type": "integer"
}
}
}
},
"codersdk.UpdateWorkspaceDormancy": {
"type": "object",
"properties": {
+61 -3
@@ -2935,8 +2935,8 @@
"consumes": ["application/json"],
"produces": ["application/json"],
"tags": ["Members"],
"summary": "Upsert a custom organization role",
"operationId": "upsert-a-custom-organization-role",
"summary": "Update a custom organization role",
"operationId": "update-a-custom-organization-role",
"parameters": [
{
"type": "string",
@@ -2947,7 +2947,7 @@
"required": true
},
{
"description": "Upsert role request",
"description": "Update role request",
"name": "request",
"in": "body",
"required": true,
@@ -9056,6 +9056,41 @@
}
}
}
},
"put": {
"security": [
{
"CoderSessionToken": []
}
],
"consumes": ["application/json"],
"tags": ["Builds"],
"summary": "Update workspace build state",
"operationId": "update-workspace-build-state",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Workspace build ID",
"name": "workspacebuild",
"in": "path",
"required": true
},
{
"description": "Request body",
"name": "request",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/codersdk.UpdateWorkspaceBuildStateRequest"
}
}
],
"responses": {
"204": {
"description": "No Content"
}
}
}
},
"/workspacebuilds/{workspacebuild}/timings": {
@@ -10681,6 +10716,12 @@
"cert_file": {
"type": "string"
},
"domain_allowlist": {
"type": "array",
"items": {
"type": "string"
}
},
"enabled": {
"type": "boolean"
},
@@ -12924,6 +12965,12 @@
"pg_auth": {
"type": "string"
},
"pg_conn_max_idle": {
"type": "string"
},
"pg_conn_max_open": {
"type": "integer"
},
"pg_connection_url": {
"type": "string"
},
@@ -17935,6 +17982,17 @@
}
}
},
"codersdk.UpdateWorkspaceBuildStateRequest": {
"type": "object",
"properties": {
"state": {
"type": "array",
"items": {
"type": "integer"
}
}
}
},
"codersdk.UpdateWorkspaceDormancy": {
"type": "object",
"properties": {
+2 -2
@@ -9,7 +9,6 @@ import (
"github.com/go-chi/chi/v5"
"github.com/google/uuid"
"github.com/moby/moby/pkg/namesgenerator"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
@@ -22,6 +21,7 @@ import (
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/telemetry"
"github.com/coder/coder/v2/coderd/util/namesgenerator"
"github.com/coder/coder/v2/codersdk"
)
@@ -101,7 +101,7 @@ func (api *API) postToken(rw http.ResponseWriter, r *http.Request) {
}
}
tokenName := namesgenerator.GetRandomName(1)
tokenName := namesgenerator.NameDigitWith("_")
if len(createToken.TokenName) != 0 {
tokenName = createToken.TokenName
+1 -1
@@ -1037,7 +1037,7 @@ func TestExecutorRequireActiveVersion(t *testing.T) {
//nolint We need to set this in the database directly, because the API will return an error
// letting you know that this feature requires an enterprise license.
err = db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(me, owner.OrganizationID)), database.UpdateTemplateAccessControlByIDParams{
err = db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(me)), database.UpdateTemplateAccessControlByIDParams{
ID: template.ID,
RequireActiveVersion: true,
})
+11
@@ -568,6 +568,16 @@ func New(options *Options) *API {
// bugs that may only occur when a key isn't precached in tests and the latency cost is minimal.
cryptokeys.StartRotator(ctx, options.Logger, options.Database)
// Ensure all system role permissions are current.
//nolint:gocritic // Startup reconciliation reads/writes system roles. There is
// no user request context here, so use a system-restricted context.
err = rolestore.ReconcileSystemRoles(dbauthz.AsSystemRestricted(ctx), options.Logger, options.Database)
if err != nil {
// Not ideal, but merely logging the error and continuing
// (rather than using Fatal) would be a potential security hole.
options.Logger.Fatal(ctx, "failed to reconcile system role permissions", slog.Error(err))
}
// AGPL uses a no-op build usage checker as there are no license
// entitlements to enforce. This is swapped out in
// enterprise/coderd/coderd.go.
@@ -1503,6 +1513,7 @@ func New(options *Options) *API {
r.Get("/parameters", api.workspaceBuildParameters)
r.Get("/resources", api.workspaceBuildResourcesDeprecated)
r.Get("/state", api.workspaceBuildState)
r.Put("/state", api.workspaceBuildUpdateState)
r.Get("/timings", api.workspaceBuildTimings)
})
r.Route("/authcheck", func(r chi.Router) {
+4 -4
@@ -11,7 +11,6 @@ import (
"testing"
"github.com/google/uuid"
"github.com/moby/moby/pkg/namesgenerator"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
@@ -22,6 +21,7 @@ import (
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/rbac/regosql"
"github.com/coder/coder/v2/coderd/util/namesgenerator"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/cryptorand"
)
@@ -439,10 +439,10 @@ func RandomRBACObject() rbac.Object {
OrgID: uuid.NewString(),
Type: randomRBACType(),
ACLUserList: map[string][]policy.Action{
namesgenerator.GetRandomName(1): {RandomRBACAction()},
namesgenerator.UniqueName(): {RandomRBACAction()},
},
ACLGroupList: map[string][]policy.Action{
namesgenerator.GetRandomName(1): {RandomRBACAction()},
namesgenerator.UniqueName(): {RandomRBACAction()},
},
}
}
@@ -471,7 +471,7 @@ func RandomRBACSubject() rbac.Subject {
return rbac.Subject{
ID: uuid.NewString(),
Roles: rbac.RoleIdentifiers{rbac.RoleMember()},
Groups: []string{namesgenerator.GetRandomName(1)},
Groups: []string{namesgenerator.UniqueName()},
Scope: rbac.ScopeAll,
}
}
+61 -36
@@ -30,17 +30,17 @@ import (
"sync/atomic"
"testing"
"time"
"unicode"
"cloud.google.com/go/compute/metadata"
"github.com/fullsailor/pkcs7"
"github.com/go-chi/chi/v5"
"github.com/golang-jwt/jwt/v4"
"github.com/google/uuid"
"github.com/moby/moby/pkg/namesgenerator"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/text/cases"
"golang.org/x/text/language"
"golang.org/x/xerrors"
"google.golang.org/api/idtoken"
"google.golang.org/api/option"
@@ -76,10 +76,12 @@ import (
"github.com/coder/coder/v2/coderd/provisionerdserver"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/rbac/rolestore"
"github.com/coder/coder/v2/coderd/runtimeconfig"
"github.com/coder/coder/v2/coderd/schedule"
"github.com/coder/coder/v2/coderd/telemetry"
"github.com/coder/coder/v2/coderd/updatecheck"
"github.com/coder/coder/v2/coderd/util/namesgenerator"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/coderd/webpush"
"github.com/coder/coder/v2/coderd/workspaceapps"
@@ -767,8 +769,9 @@ func CreateAnotherUserMutators(t testing.TB, client *codersdk.Client, organizati
return createAnotherUserRetry(t, client, []uuid.UUID{organizationID}, 5, roles, mutators...)
}
// AuthzUserSubject does not include the user's groups.
func AuthzUserSubject(user codersdk.User, orgID uuid.UUID) rbac.Subject {
// AuthzUserSubject does not include the user's groups or the org-member role
// (which is a db-backed system role).
func AuthzUserSubject(user codersdk.User) rbac.Subject {
roles := make(rbac.RoleIdentifiers, 0, len(user.Roles))
// Member role is always implied
roles = append(roles, rbac.RoleMember())
@@ -779,8 +782,6 @@ func AuthzUserSubject(user codersdk.User, orgID uuid.UUID) rbac.Subject {
OrganizationID: orgID,
})
}
// We assume only 1 org exists
roles = append(roles, rbac.ScopedRoleOrgMember(orgID))
return rbac.Subject{
ID: user.ID.String(),
@@ -790,9 +791,55 @@ func AuthzUserSubject(user codersdk.User, orgID uuid.UUID) rbac.Subject {
}
}
// AuthzUserSubjectWithDB is like AuthzUserSubject but adds db-backed roles
// (like organization-member).
func AuthzUserSubjectWithDB(ctx context.Context, t testing.TB, db database.Store, user codersdk.User) rbac.Subject {
t.Helper()
roles := make(rbac.RoleIdentifiers, 0, len(user.Roles)+2)
// Member role is always implied
roles = append(roles, rbac.RoleMember())
for _, r := range user.Roles {
parsedOrgID, _ := uuid.Parse(r.OrganizationID) // defaults to nil
roles = append(roles, rbac.RoleIdentifier{
Name: r.Name,
OrganizationID: parsedOrgID,
})
}
//nolint:gocritic // We're constructing the subject. The incoming ctx
// typically has no dbauthz actor yet, and using AuthzUserSubject(user)
// here would be circular (it lacks DB-backed org-member roles needed for
// organization:read). Use system-restricted ctx for the membership lookup.
orgs, err := db.GetOrganizationsByUserID(dbauthz.AsSystemRestricted(ctx), database.GetOrganizationsByUserIDParams{
UserID: user.ID,
Deleted: sql.NullBool{
Valid: true,
Bool: false,
},
})
require.NoError(t, err)
for _, org := range orgs {
roles = append(roles, rbac.ScopedRoleOrgMember(org.ID))
}
//nolint:gocritic // We need to expand DB-backed/system roles. The caller
// ctx may not have permission to read system roles, so use system-restricted
// context for the internal role lookup.
rbacRoles, err := rolestore.Expand(dbauthz.AsSystemRestricted(ctx), db, roles)
require.NoError(t, err)
return rbac.Subject{
ID: user.ID.String(),
Roles: rbacRoles,
Groups: []string{},
Scope: rbac.ScopeAll,
}.WithCachedASTValue()
}
func createAnotherUserRetry(t testing.TB, client *codersdk.Client, organizationIDs []uuid.UUID, retries int, roles []rbac.RoleIdentifier, mutators ...func(r *codersdk.CreateUserRequestWithOrgs)) (*codersdk.Client, codersdk.User) {
req := codersdk.CreateUserRequestWithOrgs{
Email: namesgenerator.GetRandomName(10) + "@coder.com",
Email: namesgenerator.UniqueName() + "@coder.com",
Username: RandomUsername(t),
Name: RandomName(t),
Password: "SomeSecurePassword!",
@@ -1556,37 +1603,15 @@ func NewAzureInstanceIdentity(t testing.TB, instanceID string) (x509.VerifyOptio
}
}
func RandomUsername(t testing.TB) string {
suffix, err := cryptorand.String(3)
require.NoError(t, err)
suffix = "-" + suffix
n := strings.ReplaceAll(namesgenerator.GetRandomName(10), "_", "-") + suffix
if len(n) > 32 {
n = n[:32-len(suffix)] + suffix
}
return n
func RandomUsername(_ testing.TB) string {
return namesgenerator.UniqueNameWith("-")
}
func RandomName(t testing.TB) string {
var sb strings.Builder
var err error
ss := strings.Split(namesgenerator.GetRandomName(10), "_")
for si, s := range ss {
for ri, r := range s {
if ri == 0 {
_, err = sb.WriteRune(unicode.ToTitle(r))
require.NoError(t, err)
} else {
_, err = sb.WriteRune(r)
require.NoError(t, err)
}
}
if si < len(ss)-1 {
_, err = sb.WriteRune(' ')
require.NoError(t, err)
}
}
return sb.String()
// RandomName returns a random name in title case (e.g. "Happy Einstein").
func RandomName(_ testing.TB) string {
return cases.Title(language.English).String(
namesgenerator.NameWith(" "),
)
}
// Used to easily create an HTTP transport!
+22
@@ -1,8 +1,11 @@
package coderdtest_test
import (
"strings"
"testing"
"unicode"
"github.com/stretchr/testify/require"
"go.uber.org/goleak"
"github.com/coder/coder/v2/coderd/coderdtest"
@@ -28,3 +31,22 @@ func TestNew(t *testing.T) {
_, _ = coderdtest.NewGoogleInstanceIdentity(t, "example", false)
_, _ = coderdtest.NewAWSInstanceIdentity(t, "an-instance")
}
func TestRandomName(t *testing.T) {
t.Parallel()
for range 10 {
name := coderdtest.RandomName(t)
require.NotEmpty(t, name, "name should not be empty")
require.NotContains(t, name, "_", "name should not contain underscores")
// Should be title cased (e.g., "Happy Einstein").
words := strings.Split(name, " ")
require.Len(t, words, 2, "name should have exactly two words")
for _, word := range words {
firstRune := []rune(word)[0]
require.True(t, unicode.IsUpper(firstRune), "word %q should start with uppercase letter", word)
}
}
}
+235 -6
@@ -73,6 +73,7 @@ func TestInsertCustomRoles(t *testing.T) {
site []codersdk.Permission
org []codersdk.Permission
user []codersdk.Permission
member []codersdk.Permission
errorContains string
}{
{
@@ -171,6 +172,16 @@ func TestInsertCustomRoles(t *testing.T) {
}),
errorContains: "organization roles specify site or user permissions",
},
{
// Not allowing these at this time.
name: "member-permissions",
organizationID: orgID,
subject: merge(canCreateCustomRole),
member: codersdk.CreatePermissions(map[codersdk.RBACResource][]codersdk.RBACAction{
codersdk.ResourceWorkspace: {codersdk.ActionRead},
}),
errorContains: "non-system roles specify member permissions",
},
{
name: "site-escalation",
organizationID: orgID,
@@ -213,12 +224,13 @@ func TestInsertCustomRoles(t *testing.T) {
ctx = dbauthz.As(ctx, subject)
_, err := az.InsertCustomRole(ctx, database.InsertCustomRoleParams{
Name: "test-role",
DisplayName: "",
OrganizationID: uuid.NullUUID{UUID: tc.organizationID, Valid: true},
SitePermissions: db2sdk.List(tc.site, convertSDKPerm),
OrgPermissions: db2sdk.List(tc.org, convertSDKPerm),
UserPermissions: db2sdk.List(tc.user, convertSDKPerm),
Name: "test-role",
DisplayName: "",
OrganizationID: uuid.NullUUID{UUID: tc.organizationID, Valid: true},
SitePermissions: db2sdk.List(tc.site, convertSDKPerm),
OrgPermissions: db2sdk.List(tc.org, convertSDKPerm),
UserPermissions: db2sdk.List(tc.user, convertSDKPerm),
MemberPermissions: db2sdk.List(tc.member, convertSDKPerm),
})
if tc.errorContains != "" {
require.ErrorContains(t, err, tc.errorContains)
@@ -250,3 +262,220 @@ func convertSDKPerm(perm codersdk.Permission) database.CustomRolePermission {
Action: policy.Action(perm.Action),
}
}
func TestSystemRoles(t *testing.T) {
t.Parallel()
orgID := uuid.New()
canManageOrgRoles := rbac.Role{
Identifier: rbac.RoleIdentifier{Name: "can-manage-org-roles"},
DisplayName: "",
Site: rbac.Permissions(map[string][]policy.Action{
rbac.ResourceAssignOrgRole.Type: {policy.ActionRead, policy.ActionCreate, policy.ActionUpdate},
}),
}
canCreateSystem := rbac.Role{
Identifier: rbac.RoleIdentifier{Name: "can-create-system"},
DisplayName: "",
Site: rbac.Permissions(map[string][]policy.Action{
rbac.ResourceSystem.Type: {policy.ActionCreate},
}),
}
canUpdateSystem := rbac.Role{
Identifier: rbac.RoleIdentifier{Name: "can-update-system"},
DisplayName: "",
Site: rbac.Permissions(map[string][]policy.Action{
rbac.ResourceSystem.Type: {policy.ActionUpdate},
}),
}
userID := uuid.New()
subjectNoSystemPerms := rbac.Subject{
FriendlyName: "Test user",
ID: userID.String(),
Roles: rbac.Roles([]rbac.Role{canManageOrgRoles}),
Groups: nil,
Scope: rbac.ScopeAll,
}
subjectWithSystemCreatePerms := subjectNoSystemPerms
subjectWithSystemCreatePerms.Roles = rbac.Roles([]rbac.Role{canManageOrgRoles, canCreateSystem})
subjectWithSystemUpdatePerms := subjectNoSystemPerms
subjectWithSystemUpdatePerms.Roles = rbac.Roles([]rbac.Role{canManageOrgRoles, canUpdateSystem})
db, _ := dbtestutil.NewDB(t)
rec := &coderdtest.RecordingAuthorizer{
Wrapped: rbac.NewAuthorizer(prometheus.NewRegistry()),
}
az := dbauthz.New(db, rec, slog.Make(), coderdtest.AccessControlStorePointer())
t.Run("insert-requires-system-create", func(t *testing.T) {
t.Parallel()
insertParamsTemplate := database.InsertCustomRoleParams{
Name: "",
OrganizationID: uuid.NullUUID{
UUID: orgID,
Valid: true,
},
SitePermissions: database.CustomRolePermissions{},
OrgPermissions: database.CustomRolePermissions{},
UserPermissions: database.CustomRolePermissions{},
MemberPermissions: database.CustomRolePermissions{},
IsSystem: true,
}
t.Run("deny-no-system-perms", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
insertParams := insertParamsTemplate
insertParams.Name = "test-system-role-" + uuid.NewString()
ctx = dbauthz.As(ctx, subjectNoSystemPerms)
_, err := az.InsertCustomRole(ctx, insertParams)
require.ErrorContains(t, err, "forbidden")
})
t.Run("deny-update-only", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
insertParams := insertParamsTemplate
insertParams.Name = "test-system-role-" + uuid.NewString()
ctx = dbauthz.As(ctx, subjectWithSystemUpdatePerms)
_, err := az.InsertCustomRole(ctx, insertParams)
require.ErrorContains(t, err, "forbidden")
})
t.Run("allow-create-only", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
insertParams := insertParamsTemplate
insertParams.Name = "test-system-role-" + uuid.NewString()
ctx = dbauthz.As(ctx, subjectWithSystemCreatePerms)
_, err := az.InsertCustomRole(ctx, insertParams)
require.NoError(t, err)
})
})
t.Run("update-requires-system-update", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
ctx = dbauthz.As(ctx, subjectWithSystemCreatePerms)
// Setup: create the role that we will attempt to update in
// subtests. One role for all is fine as we are only testing
// authz.
role, err := az.InsertCustomRole(ctx, database.InsertCustomRoleParams{
Name: "test-system-role-" + uuid.NewString(),
OrganizationID: uuid.NullUUID{
UUID: orgID,
Valid: true,
},
SitePermissions: database.CustomRolePermissions{},
OrgPermissions: database.CustomRolePermissions{},
UserPermissions: database.CustomRolePermissions{},
MemberPermissions: database.CustomRolePermissions{},
IsSystem: true,
})
require.NoError(t, err)
// Use same params for all updates as we're only testing authz.
updateParams := database.UpdateCustomRoleParams{
Name: role.Name,
OrganizationID: uuid.NullUUID{
UUID: orgID,
Valid: true,
},
DisplayName: "",
SitePermissions: database.CustomRolePermissions{},
OrgPermissions: database.CustomRolePermissions{},
UserPermissions: database.CustomRolePermissions{},
MemberPermissions: database.CustomRolePermissions{},
}
t.Run("deny-no-system-perms", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
ctx = dbauthz.As(ctx, subjectNoSystemPerms)
_, err := az.UpdateCustomRole(ctx, updateParams)
require.ErrorContains(t, err, "forbidden")
})
t.Run("deny-create-only", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
ctx = dbauthz.As(ctx, subjectWithSystemCreatePerms)
_, err := az.UpdateCustomRole(ctx, updateParams)
require.ErrorContains(t, err, "forbidden")
})
t.Run("allow-update-only", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
ctx = dbauthz.As(ctx, subjectWithSystemUpdatePerms)
_, err := az.UpdateCustomRole(ctx, updateParams)
require.NoError(t, err)
})
})
t.Run("allow-member-permissions", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
ctx = dbauthz.As(ctx, subjectWithSystemCreatePerms)
_, err := az.InsertCustomRole(ctx, database.InsertCustomRoleParams{
Name: "test-system-role-member-perms",
OrganizationID: uuid.NullUUID{
UUID: orgID,
Valid: true,
},
SitePermissions: database.CustomRolePermissions{},
OrgPermissions: database.CustomRolePermissions{},
UserPermissions: database.CustomRolePermissions{},
MemberPermissions: database.CustomRolePermissions{
{
ResourceType: rbac.ResourceWorkspace.Type,
Action: policy.ActionRead,
},
},
IsSystem: true,
})
require.NoError(t, err)
})
t.Run("allow-negative-permissions", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitMedium)
ctx = dbauthz.As(ctx, subjectWithSystemCreatePerms)
_, err := az.InsertCustomRole(ctx, database.InsertCustomRoleParams{
Name: "test-system-role-negative",
OrganizationID: uuid.NullUUID{
UUID: orgID,
Valid: true,
},
SitePermissions: database.CustomRolePermissions{},
OrgPermissions: database.CustomRolePermissions{
{
Negate: true,
ResourceType: rbac.ResourceWorkspace.Type,
Action: policy.ActionShare,
},
},
UserPermissions: database.CustomRolePermissions{},
MemberPermissions: database.CustomRolePermissions{},
IsSystem: true,
})
require.NoError(t, err)
})
}
+102 -42
@@ -1161,13 +1161,18 @@ func (q *querier) canAssignRoles(ctx context.Context, orgID uuid.UUID, added, re
for _, roleName := range grantedRoles {
if _, isCustom := customRolesMap[roleName]; isCustom {
// To support a dynamic mapping of what roles can assign what, we need
// to store this in the database. For now, just use a static role so
// owners and org admins can assign roles.
if roleName.IsOrgRole() {
roleName = rbac.CustomOrganizationRole(roleName.OrganizationID)
} else {
roleName = rbac.CustomSiteRole()
// System roles are stored in the database but have a fixed, code-defined
// meaning. Do not rewrite the name for them so the static "who can assign
// what" mapping applies.
if !rbac.SystemRoleName(roleName.Name) {
// To support a dynamic mapping of what roles can assign what, we need
// to store this in the database. For now, just use a static role so
// owners and org admins can assign roles.
if roleName.IsOrgRole() {
roleName = rbac.CustomOrganizationRole(roleName.OrganizationID)
} else {
roleName = rbac.CustomSiteRole()
}
}
}
@@ -1282,33 +1287,39 @@ func (q *querier) customRoleEscalationCheck(ctx context.Context, actor rbac.Subj
// - Check custom roles are valid for their resource types + actions
// - Check the actor can create the custom role
// - Check the custom role does not grant perms the actor does not have
// - Prevent negative perms
// - Prevent roles with site and org permissions.
func (q *querier) customRoleCheck(ctx context.Context, role database.CustomRole) error {
// - Prevent negative perms for non-system roles
// - Prevent roles that have both organization scoped and non-organization scoped permissions
func (q *querier) customRoleCheck(ctx context.Context, role database.CustomRole, action policy.Action) error {
act, ok := ActorFromContext(ctx)
if !ok {
return ErrNoActor
}
// Org permissions require an org role
if role.OrganizationID.UUID == uuid.Nil && len(role.OrgPermissions) > 0 {
return xerrors.Errorf("organization permissions require specifying an organization id")
// Org and org member permissions require an org role.
if role.OrganizationID.UUID == uuid.Nil && (len(role.OrgPermissions) > 0 || len(role.MemberPermissions) > 0) {
return xerrors.Errorf("organization and member permissions require specifying an organization id")
}
// Org roles can only specify org permissions
// Org roles can only specify org permissions; system roles can also specify orgMember ones.
if role.OrganizationID.UUID != uuid.Nil && (len(role.SitePermissions) > 0 || len(role.UserPermissions) > 0) {
return xerrors.Errorf("organization roles specify site or user permissions")
}
// For now only system roles can specify orgMember permissions.
if !role.IsSystem && len(role.MemberPermissions) > 0 {
return xerrors.Errorf("non-system roles specify member permissions")
}
// The rbac.Role has a 'Valid()' function on it that will do a lot
// of checks.
rbacRole, err := rolestore.ConvertDBRole(database.CustomRole{
Name: role.Name,
DisplayName: role.DisplayName,
SitePermissions: role.SitePermissions,
OrgPermissions: role.OrgPermissions,
UserPermissions: role.UserPermissions,
OrganizationID: role.OrganizationID,
Name: role.Name,
DisplayName: role.DisplayName,
SitePermissions: role.SitePermissions,
OrgPermissions: role.OrgPermissions,
UserPermissions: role.UserPermissions,
MemberPermissions: role.MemberPermissions,
OrganizationID: role.OrganizationID,
})
if err != nil {
return xerrors.Errorf("invalid args: %w", err)
@@ -1333,6 +1344,16 @@ func (q *querier) customRoleCheck(ctx context.Context, role database.CustomRole)
return xerrors.Errorf("invalid custom role, cannot assign permissions to more than 1 org at a time")
}
// System roles are managed internally and may include permissions
// (including negative ones) that user-facing custom role APIs
// should reject. Still validate that the role shape and perms
// are internally consistent via rbacRole.Valid() above.
if role.IsSystem {
// Defensive programming: the caller should have checked that
// the action is authorized, but we double-check.
return q.authorizeContext(ctx, action, rbac.ResourceSystem)
}
// Prevent escalation
for _, sitePerm := range rbacRole.Site {
err := q.customRoleEscalationCheck(ctx, act, sitePerm, rbac.Object{Type: sitePerm.ResourceType})
@@ -4132,21 +4153,33 @@ func (q *querier) InsertCustomRole(ctx context.Context, arg database.InsertCusto
if !arg.OrganizationID.Valid || arg.OrganizationID.UUID == uuid.Nil {
return database.CustomRole{}, NotAuthorizedError{Err: xerrors.New("custom roles must belong to an organization")}
}
if err := q.authorizeContext(ctx, policy.ActionCreate, rbac.ResourceAssignOrgRole.InOrg(arg.OrganizationID.UUID)); err != nil {
rbacObj := rbac.ResourceAssignOrgRole.InOrg(arg.OrganizationID.UUID)
if err := q.authorizeContext(ctx, policy.ActionCreate, rbacObj); err != nil {
return database.CustomRole{}, err
}
if arg.IsSystem {
err := q.authorizeContext(ctx, policy.ActionCreate, rbac.ResourceSystem)
if err != nil {
return database.CustomRole{}, err
}
}
if err := q.customRoleCheck(ctx, database.CustomRole{
Name: arg.Name,
DisplayName: arg.DisplayName,
SitePermissions: arg.SitePermissions,
OrgPermissions: arg.OrgPermissions,
UserPermissions: arg.UserPermissions,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
OrganizationID: arg.OrganizationID,
ID: uuid.New(),
}); err != nil {
Name: arg.Name,
DisplayName: arg.DisplayName,
SitePermissions: arg.SitePermissions,
OrgPermissions: arg.OrgPermissions,
UserPermissions: arg.UserPermissions,
MemberPermissions: arg.MemberPermissions,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
OrganizationID: arg.OrganizationID,
ID: uuid.New(),
IsSystem: arg.IsSystem,
}, policy.ActionCreate); err != nil {
return database.CustomRole{}, err
}
return q.db.InsertCustomRole(ctx, arg)
@@ -4886,21 +4919,48 @@ func (q *querier) UpdateCustomRole(ctx context.Context, arg database.UpdateCusto
if !arg.OrganizationID.Valid || arg.OrganizationID.UUID == uuid.Nil {
return database.CustomRole{}, NotAuthorizedError{Err: xerrors.New("custom roles must belong to an organization")}
}
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceAssignOrgRole.InOrg(arg.OrganizationID.UUID)); err != nil {
rbacObj := rbac.ResourceAssignOrgRole.InOrg(arg.OrganizationID.UUID)
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbacObj); err != nil {
return database.CustomRole{}, err
}
existing, err := database.ExpectOne(q.db.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: []database.NameOrganizationPair{
{
Name: arg.Name,
OrganizationID: arg.OrganizationID.UUID,
},
},
ExcludeOrgRoles: false,
OrganizationID: uuid.Nil,
IncludeSystemRoles: true,
}))
if err != nil {
return database.CustomRole{}, err
}
if existing.IsSystem {
err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceSystem)
if err != nil {
return database.CustomRole{}, err
}
}
if err := q.customRoleCheck(ctx, database.CustomRole{
Name: arg.Name,
DisplayName: arg.DisplayName,
SitePermissions: arg.SitePermissions,
OrgPermissions: arg.OrgPermissions,
UserPermissions: arg.UserPermissions,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
OrganizationID: arg.OrganizationID,
ID: uuid.New(),
}); err != nil {
Name: arg.Name,
DisplayName: arg.DisplayName,
SitePermissions: arg.SitePermissions,
OrgPermissions: arg.OrgPermissions,
UserPermissions: arg.UserPermissions,
MemberPermissions: arg.MemberPermissions,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
OrganizationID: arg.OrganizationID,
ID: uuid.New(),
IsSystem: existing.IsSystem,
}, policy.ActionUpdate); err != nil {
return database.CustomRole{}, err
}
return q.db.UpdateCustomRole(ctx, arg)
+7 -2
@@ -15,6 +15,7 @@ import (
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/rolestore"
)
// nolint:tparallel
@@ -109,8 +110,12 @@ func TestGroupsAuth(t *testing.T) {
{
Name: "GroupMember",
Subject: rbac.Subject{
ID: users[0].ID.String(),
Roles: rbac.Roles(must(rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(org.ID)}.Expand())),
ID: users[0].ID.String(),
Roles: must(rolestore.Expand(
context.Background(),
store,
[]rbac.RoleIdentifier{rbac.RoleMember(), rbac.ScopedRoleOrgMember(org.ID)},
)),
Groups: []string{
group.ID.String(),
},
+39
@@ -105,12 +105,51 @@ func (s *MethodTestSuite) TearDownSuite() {
var testActorID = uuid.New()
type includeSystemRolesMatcher struct{}
func (includeSystemRolesMatcher) Matches(x any) bool {
p, ok := x.(database.CustomRolesParams)
if !ok {
return false
}
return p.IncludeSystemRoles
}
func (includeSystemRolesMatcher) String() string {
return "CustomRolesParams with IncludeSystemRoles=true"
}
// Mocked runs a subtest with a mocked database, removing the overhead of a
// real postgres database and resulting in much faster tests.
func (s *MethodTestSuite) Mocked(testCaseF func(dmb *dbmock.MockStore, faker *gofakeit.Faker, check *expects)) func() {
t := s.T()
mDB := dbmock.NewMockStore(gomock.NewController(t))
mDB.EXPECT().Wrappers().Return([]string{}).AnyTimes()
// dbauthz now expands DB-backed system roles (e.g. organization-member)
// during role-assignment validation, which triggers a CustomRoles lookup
// with IncludeSystemRoles=true.
mDB.EXPECT().CustomRoles(gomock.Any(), includeSystemRolesMatcher{}).DoAndReturn(func(_ context.Context, arg database.CustomRolesParams) ([]database.CustomRole, error) {
if len(arg.LookupRoles) == 0 {
return []database.CustomRole{}, nil
}
out := make([]database.CustomRole, 0, len(arg.LookupRoles))
for _, pair := range arg.LookupRoles {
// Minimal set of fields that the tested code uses.
out = append(out, database.CustomRole{
Name: pair.Name,
OrganizationID: uuid.NullUUID{
UUID: pair.OrganizationID,
Valid: pair.OrganizationID != uuid.Nil,
},
IsSystem: rbac.SystemRoleName(pair.Name),
ID: uuid.New(),
})
}
return out, nil
}).AnyTimes()
// Use a constant seed to prevent flakes from random data generation.
faker := gofakeit.New(0)
+49 -8
@@ -29,6 +29,7 @@ import (
"github.com/coder/coder/v2/coderd/database/pubsub"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/rbac/rolestore"
"github.com/coder/coder/v2/coderd/taskname"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/cryptorand"
@@ -41,8 +42,16 @@ import (
// genCtx is to give all generator functions permission if the db is a dbauthz db.
var genCtx = dbauthz.As(context.Background(), rbac.Subject{
ID: "owner",
Roles: rbac.Roles(must(rbac.RoleIdentifiers{rbac.RoleOwner()}.Expand())),
ID: "owner",
Roles: rbac.Roles(append(
must(rbac.RoleIdentifiers{rbac.RoleOwner()}.Expand()),
rbac.Role{
Identifier: rbac.RoleIdentifier{Name: "dbgen-workspace-sharer"},
Site: rbac.Permissions(map[string][]policy.Action{
rbac.ResourceWorkspace.Type: {policy.ActionShare},
}),
},
)),
Groups: []string{},
Scope: rbac.ExpandableScope(rbac.ScopeAll),
})
@@ -639,6 +648,36 @@ func Organization(t testing.TB, db database.Store, orig database.Organization) d
UpdatedAt: takeFirst(orig.UpdatedAt, dbtime.Now()),
})
require.NoError(t, err, "insert organization")
// Populate the placeholder organization-member system role (created by
// DB trigger/migration) so org members have expected permissions.
//nolint:gocritic // ReconcileOrgMemberRole needs the system:update
// permission that `genCtx` does not have.
sysCtx := dbauthz.AsSystemRestricted(genCtx)
_, _, err = rolestore.ReconcileOrgMemberRole(sysCtx, db, database.CustomRole{
Name: rbac.RoleOrgMember(),
OrganizationID: uuid.NullUUID{
UUID: org.ID,
Valid: true,
},
}, org.WorkspaceSharingDisabled)
if errors.Is(err, sql.ErrNoRows) {
// The trigger that creates the placeholder role didn't run (e.g.,
// triggers were disabled in the test). Create the role manually.
err = rolestore.CreateOrgMemberRole(sysCtx, db, org)
require.NoError(t, err, "create organization-member role")
_, _, err = rolestore.ReconcileOrgMemberRole(sysCtx, db, database.CustomRole{
Name: rbac.RoleOrgMember(),
OrganizationID: uuid.NullUUID{
UUID: org.ID,
Valid: true,
},
}, org.WorkspaceSharingDisabled)
}
require.NoError(t, err, "reconcile organization-member role")
return org
}
@@ -1395,12 +1434,14 @@ func WorkspaceAgentVolumeResourceMonitor(t testing.TB, db database.Store, seed d
func CustomRole(t testing.TB, db database.Store, seed database.CustomRole) database.CustomRole {
role, err := db.InsertCustomRole(genCtx, database.InsertCustomRoleParams{
Name: takeFirst(seed.Name, strings.ToLower(testutil.GetRandomName(t))),
DisplayName: testutil.GetRandomName(t),
OrganizationID: seed.OrganizationID,
SitePermissions: takeFirstSlice(seed.SitePermissions, []database.CustomRolePermission{}),
OrgPermissions: takeFirstSlice(seed.SitePermissions, []database.CustomRolePermission{}),
UserPermissions: takeFirstSlice(seed.SitePermissions, []database.CustomRolePermission{}),
Name: takeFirst(seed.Name, strings.ToLower(testutil.GetRandomName(t))),
DisplayName: testutil.GetRandomName(t),
OrganizationID: seed.OrganizationID,
SitePermissions: takeFirstSlice(seed.SitePermissions, []database.CustomRolePermission{}),
OrgPermissions:    takeFirstSlice(seed.OrgPermissions, []database.CustomRolePermission{}),
UserPermissions:   takeFirstSlice(seed.UserPermissions, []database.CustomRolePermission{}),
MemberPermissions: takeFirstSlice(seed.MemberPermissions, []database.CustomRolePermission{}),
IsSystem: seed.IsSystem,
})
require.NoError(t, err, "insert custom role")
return role
@@ -746,6 +746,37 @@ BEGIN
END;
$$;
CREATE FUNCTION insert_org_member_system_role() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
INSERT INTO custom_roles (
name,
display_name,
organization_id,
site_permissions,
org_permissions,
user_permissions,
member_permissions,
is_system,
created_at,
updated_at
) VALUES (
'organization-member',
'',
NEW.id,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
true,
NOW(),
NOW()
);
RETURN NEW;
END;
$$;
CREATE FUNCTION insert_user_links_fail_if_user_deleted() RETURNS trigger
LANGUAGE plpgsql
AS $$
@@ -1203,6 +1234,8 @@ CREATE TABLE custom_roles (
updated_at timestamp with time zone DEFAULT CURRENT_TIMESTAMP NOT NULL,
organization_id uuid,
id uuid DEFAULT gen_random_uuid() NOT NULL,
is_system boolean DEFAULT false NOT NULL,
member_permissions jsonb DEFAULT '[]'::jsonb NOT NULL,
CONSTRAINT organization_id_not_zero CHECK ((organization_id <> '00000000-0000-0000-0000-000000000000'::uuid))
);
@@ -1212,6 +1245,8 @@ COMMENT ON COLUMN custom_roles.organization_id IS 'Roles can optionally be scope
COMMENT ON COLUMN custom_roles.id IS 'Custom roles ID is used purely for auditing purposes. Name is a better unique identifier.';
COMMENT ON COLUMN custom_roles.is_system IS 'System roles are managed by Coder and cannot be modified or deleted by users.';
CREATE TABLE dbcrypt_keys (
number integer NOT NULL,
active_key_digest text,
@@ -1595,7 +1630,8 @@ CREATE TABLE organizations (
is_default boolean DEFAULT false NOT NULL,
display_name text NOT NULL,
icon text DEFAULT ''::text NOT NULL,
deleted boolean DEFAULT false NOT NULL
deleted boolean DEFAULT false NOT NULL,
workspace_sharing_disabled boolean DEFAULT false NOT NULL
);
CREATE TABLE parameter_schemas (
@@ -3546,6 +3582,8 @@ CREATE TRIGGER trigger_delete_oauth2_provider_app_token AFTER DELETE ON oauth2_p
CREATE TRIGGER trigger_insert_apikeys BEFORE INSERT ON api_keys FOR EACH ROW EXECUTE FUNCTION insert_apikey_fail_if_user_deleted();
CREATE TRIGGER trigger_insert_org_member_system_role AFTER INSERT ON organizations FOR EACH ROW EXECUTE FUNCTION insert_org_member_system_role();
CREATE TRIGGER trigger_nullify_next_start_at_on_workspace_autostart_modificati AFTER UPDATE ON workspaces FOR EACH ROW EXECUTE FUNCTION nullify_next_start_at_on_workspace_autostart_modification();
CREATE TRIGGER trigger_update_users AFTER INSERT OR UPDATE ON users FOR EACH ROW WHEN ((new.deleted = true)) EXECUTE FUNCTION delete_deleted_user_resources();
@@ -13,6 +13,7 @@ const (
LockIDNotificationsReportGenerator
LockIDCryptoKeyRotation
LockIDReconcilePrebuilds
LockIDReconcileSystemRoles
)
// GenLockID generates a unique and consistent lock ID from a given string.
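The idea behind GenLockID — hashing a name to a stable `int64` advisory-lock key — can be sketched in plain Go. This is a minimal illustration assuming an FNV-1a style hash; the hash actually used by `GenLockID` is not shown here, so treat the specific algorithm as an assumption:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// genLockID derives a deterministic int64 lock ID from a string, so the
// same name always maps to the same pg_advisory_lock key across replicas.
// Sketch only; the concrete hash in GenLockID is an assumption.
func genLockID(name string) int64 {
	h := fnv.New64()
	_, _ = h.Write([]byte(name))
	// The result may be negative; Postgres advisory locks accept any int64.
	return int64(h.Sum64())
}

func main() {
	// Two calls with the same name agree, so every coderd instance
	// contends on the same lock for LockIDReconcileSystemRoles-style IDs.
	fmt.Println(genLockID("reconcile-system-roles") == genLockID("reconcile-system-roles"))
}
```

The determinism is what matters: any process that computes the ID from the same string takes the same database-wide lock, without coordinating lock numbers by hand.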
@@ -1 +1 @@
DROP INDEX IF EXISTS public.workspace_agents_auth_instance_id_deleted_idx;
DROP INDEX IF EXISTS workspace_agents_auth_instance_id_deleted_idx;
@@ -1 +1 @@
CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON public.workspace_agents (auth_instance_id, deleted);
CREATE INDEX IF NOT EXISTS workspace_agents_auth_instance_id_deleted_idx ON workspace_agents (auth_instance_id, deleted);
@@ -0,0 +1,3 @@
ALTER TABLE custom_roles DROP COLUMN IF EXISTS member_permissions;
ALTER TABLE custom_roles DROP COLUMN IF EXISTS is_system;
@@ -0,0 +1,10 @@
-- Add is_system column to identify system-managed roles.
ALTER TABLE custom_roles
ADD COLUMN is_system boolean NOT NULL DEFAULT false;
-- Add member_permissions column for member-scoped permissions within an organization.
ALTER TABLE custom_roles
ADD COLUMN member_permissions jsonb NOT NULL DEFAULT '[]'::jsonb;
COMMENT ON COLUMN custom_roles.is_system IS
'System roles are managed by Coder and cannot be modified or deleted by users.';
@@ -0,0 +1 @@
ALTER TABLE organizations DROP COLUMN IF EXISTS workspace_sharing_disabled;
@@ -0,0 +1,2 @@
ALTER TABLE organizations
ADD COLUMN workspace_sharing_disabled boolean NOT NULL DEFAULT false;
@@ -0,0 +1,6 @@
-- Drop the trigger and function created by the up migration.
DROP TRIGGER IF EXISTS trigger_insert_org_member_system_role ON organizations;
DROP FUNCTION IF EXISTS insert_org_member_system_role;
-- Remove organization-member system roles created by the up migration.
DELETE FROM custom_roles WHERE name = 'organization-member' AND is_system = true;
@@ -0,0 +1,85 @@
-- Create placeholder organization-member system roles for existing
-- organizations. Also add a trigger that creates the placeholder role
-- when an organization is created. Permissions will be empty until
-- populated by the reconciliation routine.
--
-- Note: why do all this in the database (as opposed to coderd)? Less
-- room for race conditions. If the role doesn't exist when coderd
-- expects it, the only correct option is to panic. On the other hand,
-- a placeholder role with empty permissions is harmless and the
-- reconciliation process is idempotent.
-- 'organization-member' is reserved and blocked from being created in
-- coderd, but let's do a delete just in case.
DELETE FROM custom_roles WHERE name = 'organization-member';
-- Create roles for the existing organizations.
INSERT INTO custom_roles (
name,
display_name,
organization_id,
site_permissions,
org_permissions,
user_permissions,
member_permissions,
is_system,
created_at,
updated_at
)
SELECT
'organization-member', -- reserved role name, so it doesn't exist in DB yet
'',
id,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
true,
NOW(),
NOW()
FROM
organizations
WHERE
NOT EXISTS (
SELECT 1
FROM custom_roles
WHERE
custom_roles.name = 'organization-member'
AND custom_roles.organization_id = organizations.id
);
-- When we insert a new organization, we also want to create a
-- placeholder org-member system role for it.
CREATE OR REPLACE FUNCTION insert_org_member_system_role() RETURNS trigger AS $$
BEGIN
INSERT INTO custom_roles (
name,
display_name,
organization_id,
site_permissions,
org_permissions,
user_permissions,
member_permissions,
is_system,
created_at,
updated_at
) VALUES (
'organization-member',
'',
NEW.id,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
'[]'::jsonb,
true,
NOW(),
NOW()
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_insert_org_member_system_role
AFTER INSERT ON organizations
FOR EACH ROW
EXECUTE FUNCTION insert_org_member_system_role();
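The migration comment's idempotency argument — a placeholder role with empty permissions is harmless, and creating it repeatedly converges to the same state — can be sketched with an in-memory stand-in. The `roleKey` type and `ensurePlaceholderRole` helper below are hypothetical, purely to illustrate the NOT EXISTS / trigger behavior:

```go
package main

import "fmt"

// roleKey mirrors the (organization, name) identity used by the migration.
type roleKey struct{ org, name string }

// ensurePlaceholderRole is idempotent: a second call for the same org is a
// no-op, just as the migration's NOT EXISTS guard and the insert trigger
// never produce a duplicate organization-member row.
func ensurePlaceholderRole(roles map[roleKey][]string, org string) {
	k := roleKey{org: org, name: "organization-member"}
	if _, ok := roles[k]; !ok {
		// Empty permissions until the reconciliation routine populates them.
		roles[k] = nil
	}
}

func main() {
	roles := map[roleKey][]string{}
	ensurePlaceholderRole(roles, "org-1")
	ensurePlaceholderRole(roles, "org-1") // safe to repeat
	fmt.Println(len(roles))
}
```

Because the operation is a converging upsert, it is safe for both the backfill over existing organizations and the per-insert trigger to run in any order.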
@@ -3741,6 +3741,9 @@ type CustomRole struct {
OrganizationID uuid.NullUUID `db:"organization_id" json:"organization_id"`
// Custom roles ID is used purely for auditing purposes. Name is a better unique identifier.
ID uuid.UUID `db:"id" json:"id"`
// System roles are managed by Coder and cannot be modified or deleted by users.
IsSystem bool `db:"is_system" json:"is_system"`
MemberPermissions CustomRolePermissions `db:"member_permissions" json:"member_permissions"`
}
// A table used to store the keys used to encrypt the database.
@@ -4006,15 +4009,16 @@ type OAuth2ProviderAppToken struct {
}
type Organization struct {
ID uuid.UUID `db:"id" json:"id"`
Name string `db:"name" json:"name"`
Description string `db:"description" json:"description"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
IsDefault bool `db:"is_default" json:"is_default"`
DisplayName string `db:"display_name" json:"display_name"`
Icon string `db:"icon" json:"icon"`
Deleted bool `db:"deleted" json:"deleted"`
ID uuid.UUID `db:"id" json:"id"`
Name string `db:"name" json:"name"`
Description string `db:"description" json:"description"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
IsDefault bool `db:"is_default" json:"is_default"`
DisplayName string `db:"display_name" json:"display_name"`
Icon string `db:"icon" json:"icon"`
Deleted bool `db:"deleted" json:"deleted"`
WorkspaceSharingDisabled bool `db:"workspace_sharing_disabled" json:"workspace_sharing_disabled"`
}
type OrganizationMember struct {
@@ -2228,6 +2228,82 @@ func TestReadCustomRoles(t *testing.T) {
}
}
func TestDeleteCustomRoleDoesNotDeleteSystemRole(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
org := dbgen.Organization(t, db, database.Organization{})
ctx := testutil.Context(t, testutil.WaitShort)
systemRole, err := db.InsertCustomRole(ctx, database.InsertCustomRoleParams{
Name: "test-system-role",
DisplayName: "",
OrganizationID: uuid.NullUUID{
UUID: org.ID,
Valid: true,
},
SitePermissions: database.CustomRolePermissions{},
OrgPermissions: database.CustomRolePermissions{},
UserPermissions: database.CustomRolePermissions{},
MemberPermissions: database.CustomRolePermissions{},
IsSystem: true,
})
require.NoError(t, err)
nonSystemRole, err := db.InsertCustomRole(ctx, database.InsertCustomRoleParams{
Name: "test-custom-role",
DisplayName: "",
OrganizationID: uuid.NullUUID{
UUID: org.ID,
Valid: true,
},
SitePermissions: database.CustomRolePermissions{},
OrgPermissions: database.CustomRolePermissions{},
UserPermissions: database.CustomRolePermissions{},
MemberPermissions: database.CustomRolePermissions{},
IsSystem: false,
})
require.NoError(t, err)
err = db.DeleteCustomRole(ctx, database.DeleteCustomRoleParams{
Name: systemRole.Name,
OrganizationID: uuid.NullUUID{
UUID: org.ID,
Valid: true,
},
})
require.NoError(t, err)
err = db.DeleteCustomRole(ctx, database.DeleteCustomRoleParams{
Name: nonSystemRole.Name,
OrganizationID: uuid.NullUUID{
UUID: org.ID,
Valid: true,
},
})
require.NoError(t, err)
roles, err := db.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: []database.NameOrganizationPair{
{
Name: systemRole.Name,
OrganizationID: org.ID,
},
{
Name: nonSystemRole.Name,
OrganizationID: org.ID,
},
},
IncludeSystemRoles: true,
})
require.NoError(t, err)
require.Len(t, roles, 1)
require.Equal(t, systemRole.Name, roles[0].Name)
require.True(t, roles[0].IsSystem)
}
func TestAuthorizedAuditLogs(t *testing.T) {
t.Parallel()
@@ -7808,7 +7808,7 @@ func (q *sqlQuerier) UpdateMemberRoles(ctx context.Context, arg UpdateMemberRole
const getDefaultOrganization = `-- name: GetDefaultOrganization :one
SELECT
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted, workspace_sharing_disabled
FROM
organizations
WHERE
@@ -7830,13 +7830,14 @@ func (q *sqlQuerier) GetDefaultOrganization(ctx context.Context) (Organization,
&i.DisplayName,
&i.Icon,
&i.Deleted,
&i.WorkspaceSharingDisabled,
)
return i, err
}
const getOrganizationByID = `-- name: GetOrganizationByID :one
SELECT
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted, workspace_sharing_disabled
FROM
organizations
WHERE
@@ -7856,13 +7857,14 @@ func (q *sqlQuerier) GetOrganizationByID(ctx context.Context, id uuid.UUID) (Org
&i.DisplayName,
&i.Icon,
&i.Deleted,
&i.WorkspaceSharingDisabled,
)
return i, err
}
const getOrganizationByName = `-- name: GetOrganizationByName :one
SELECT
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted, workspace_sharing_disabled
FROM
organizations
WHERE
@@ -7891,6 +7893,7 @@ func (q *sqlQuerier) GetOrganizationByName(ctx context.Context, arg GetOrganizat
&i.DisplayName,
&i.Icon,
&i.Deleted,
&i.WorkspaceSharingDisabled,
)
return i, err
}
@@ -7961,7 +7964,7 @@ func (q *sqlQuerier) GetOrganizationResourceCountByID(ctx context.Context, organ
const getOrganizations = `-- name: GetOrganizations :many
SELECT
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted, workspace_sharing_disabled
FROM
organizations
WHERE
@@ -8005,6 +8008,7 @@ func (q *sqlQuerier) GetOrganizations(ctx context.Context, arg GetOrganizationsP
&i.DisplayName,
&i.Icon,
&i.Deleted,
&i.WorkspaceSharingDisabled,
); err != nil {
return nil, err
}
@@ -8021,7 +8025,7 @@ func (q *sqlQuerier) GetOrganizations(ctx context.Context, arg GetOrganizationsP
const getOrganizationsByUserID = `-- name: GetOrganizationsByUserID :many
SELECT
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted
id, name, description, created_at, updated_at, is_default, display_name, icon, deleted, workspace_sharing_disabled
FROM
organizations
WHERE
@@ -8066,6 +8070,7 @@ func (q *sqlQuerier) GetOrganizationsByUserID(ctx context.Context, arg GetOrgani
&i.DisplayName,
&i.Icon,
&i.Deleted,
&i.WorkspaceSharingDisabled,
); err != nil {
return nil, err
}
@@ -8085,7 +8090,7 @@ INSERT INTO
organizations (id, "name", display_name, description, icon, created_at, updated_at, is_default)
VALUES
-- If no organizations exist, and this is the first, make it the default.
($1, $2, $3, $4, $5, $6, $7, (SELECT TRUE FROM organizations LIMIT 1) IS NULL) RETURNING id, name, description, created_at, updated_at, is_default, display_name, icon, deleted
($1, $2, $3, $4, $5, $6, $7, (SELECT TRUE FROM organizations LIMIT 1) IS NULL) RETURNING id, name, description, created_at, updated_at, is_default, display_name, icon, deleted, workspace_sharing_disabled
`
type InsertOrganizationParams struct {
@@ -8119,6 +8124,7 @@ func (q *sqlQuerier) InsertOrganization(ctx context.Context, arg InsertOrganizat
&i.DisplayName,
&i.Icon,
&i.Deleted,
&i.WorkspaceSharingDisabled,
)
return i, err
}
@@ -8134,7 +8140,7 @@ SET
icon = $5
WHERE
id = $6
RETURNING id, name, description, created_at, updated_at, is_default, display_name, icon, deleted
RETURNING id, name, description, created_at, updated_at, is_default, display_name, icon, deleted, workspace_sharing_disabled
`
type UpdateOrganizationParams struct {
@@ -8166,6 +8172,7 @@ func (q *sqlQuerier) UpdateOrganization(ctx context.Context, arg UpdateOrganizat
&i.DisplayName,
&i.Icon,
&i.Deleted,
&i.WorkspaceSharingDisabled,
)
return i, err
}
@@ -11927,7 +11934,7 @@ func (q *sqlQuerier) UpdateReplica(ctx context.Context, arg UpdateReplicaParams)
const customRoles = `-- name: CustomRoles :many
SELECT
name, display_name, site_permissions, org_permissions, user_permissions, created_at, updated_at, organization_id, id
name, display_name, site_permissions, org_permissions, user_permissions, created_at, updated_at, organization_id, id, is_system, member_permissions
FROM
custom_roles
WHERE
@@ -11950,16 +11957,30 @@ WHERE
organization_id = $3
ELSE true
END
-- Filter system roles. By default, system roles are excluded.
-- System roles are managed by Coder and should be hidden from user-facing APIs.
-- The authorization system uses @include_system_roles = true to load them.
AND CASE WHEN $4 :: boolean THEN
true
ELSE
is_system = false
END
`
type CustomRolesParams struct {
LookupRoles []NameOrganizationPair `db:"lookup_roles" json:"lookup_roles"`
ExcludeOrgRoles bool `db:"exclude_org_roles" json:"exclude_org_roles"`
OrganizationID uuid.UUID `db:"organization_id" json:"organization_id"`
LookupRoles []NameOrganizationPair `db:"lookup_roles" json:"lookup_roles"`
ExcludeOrgRoles bool `db:"exclude_org_roles" json:"exclude_org_roles"`
OrganizationID uuid.UUID `db:"organization_id" json:"organization_id"`
IncludeSystemRoles bool `db:"include_system_roles" json:"include_system_roles"`
}
func (q *sqlQuerier) CustomRoles(ctx context.Context, arg CustomRolesParams) ([]CustomRole, error) {
rows, err := q.db.QueryContext(ctx, customRoles, pq.Array(arg.LookupRoles), arg.ExcludeOrgRoles, arg.OrganizationID)
rows, err := q.db.QueryContext(ctx, customRoles,
pq.Array(arg.LookupRoles),
arg.ExcludeOrgRoles,
arg.OrganizationID,
arg.IncludeSystemRoles,
)
if err != nil {
return nil, err
}
@@ -11977,6 +11998,8 @@ func (q *sqlQuerier) CustomRoles(ctx context.Context, arg CustomRolesParams) ([]
&i.UpdatedAt,
&i.OrganizationID,
&i.ID,
&i.IsSystem,
&i.MemberPermissions,
); err != nil {
return nil, err
}
@@ -11997,6 +12020,9 @@ DELETE FROM
WHERE
name = lower($1)
AND organization_id = $2
-- Prevents accidental deletion of system roles even if the API
-- layer check is bypassed due to a bug.
AND is_system = false
`
type DeleteCustomRoleParams struct {
@@ -12018,6 +12044,8 @@ INSERT INTO
site_permissions,
org_permissions,
user_permissions,
member_permissions,
is_system,
created_at,
updated_at
)
@@ -12029,19 +12057,23 @@ VALUES (
$4,
$5,
$6,
$7,
$8,
now(),
now()
)
RETURNING name, display_name, site_permissions, org_permissions, user_permissions, created_at, updated_at, organization_id, id
RETURNING name, display_name, site_permissions, org_permissions, user_permissions, created_at, updated_at, organization_id, id, is_system, member_permissions
`
type InsertCustomRoleParams struct {
Name string `db:"name" json:"name"`
DisplayName string `db:"display_name" json:"display_name"`
OrganizationID uuid.NullUUID `db:"organization_id" json:"organization_id"`
SitePermissions CustomRolePermissions `db:"site_permissions" json:"site_permissions"`
OrgPermissions CustomRolePermissions `db:"org_permissions" json:"org_permissions"`
UserPermissions CustomRolePermissions `db:"user_permissions" json:"user_permissions"`
Name string `db:"name" json:"name"`
DisplayName string `db:"display_name" json:"display_name"`
OrganizationID uuid.NullUUID `db:"organization_id" json:"organization_id"`
SitePermissions CustomRolePermissions `db:"site_permissions" json:"site_permissions"`
OrgPermissions CustomRolePermissions `db:"org_permissions" json:"org_permissions"`
UserPermissions CustomRolePermissions `db:"user_permissions" json:"user_permissions"`
MemberPermissions CustomRolePermissions `db:"member_permissions" json:"member_permissions"`
IsSystem bool `db:"is_system" json:"is_system"`
}
func (q *sqlQuerier) InsertCustomRole(ctx context.Context, arg InsertCustomRoleParams) (CustomRole, error) {
@@ -12052,6 +12084,8 @@ func (q *sqlQuerier) InsertCustomRole(ctx context.Context, arg InsertCustomRoleP
arg.SitePermissions,
arg.OrgPermissions,
arg.UserPermissions,
arg.MemberPermissions,
arg.IsSystem,
)
var i CustomRole
err := row.Scan(
@@ -12064,6 +12098,8 @@ func (q *sqlQuerier) InsertCustomRole(ctx context.Context, arg InsertCustomRoleP
&i.UpdatedAt,
&i.OrganizationID,
&i.ID,
&i.IsSystem,
&i.MemberPermissions,
)
return i, err
}
@@ -12076,20 +12112,22 @@ SET
site_permissions = $2,
org_permissions = $3,
user_permissions = $4,
member_permissions = $5,
updated_at = now()
WHERE
name = lower($5)
AND organization_id = $6
RETURNING name, display_name, site_permissions, org_permissions, user_permissions, created_at, updated_at, organization_id, id
name = lower($6)
AND organization_id = $7
RETURNING name, display_name, site_permissions, org_permissions, user_permissions, created_at, updated_at, organization_id, id, is_system, member_permissions
`
type UpdateCustomRoleParams struct {
DisplayName string `db:"display_name" json:"display_name"`
SitePermissions CustomRolePermissions `db:"site_permissions" json:"site_permissions"`
OrgPermissions CustomRolePermissions `db:"org_permissions" json:"org_permissions"`
UserPermissions CustomRolePermissions `db:"user_permissions" json:"user_permissions"`
Name string `db:"name" json:"name"`
OrganizationID uuid.NullUUID `db:"organization_id" json:"organization_id"`
DisplayName string `db:"display_name" json:"display_name"`
SitePermissions CustomRolePermissions `db:"site_permissions" json:"site_permissions"`
OrgPermissions CustomRolePermissions `db:"org_permissions" json:"org_permissions"`
UserPermissions CustomRolePermissions `db:"user_permissions" json:"user_permissions"`
MemberPermissions CustomRolePermissions `db:"member_permissions" json:"member_permissions"`
Name string `db:"name" json:"name"`
OrganizationID uuid.NullUUID `db:"organization_id" json:"organization_id"`
}
func (q *sqlQuerier) UpdateCustomRole(ctx context.Context, arg UpdateCustomRoleParams) (CustomRole, error) {
@@ -12098,6 +12136,7 @@ func (q *sqlQuerier) UpdateCustomRole(ctx context.Context, arg UpdateCustomRoleP
arg.SitePermissions,
arg.OrgPermissions,
arg.UserPermissions,
arg.MemberPermissions,
arg.Name,
arg.OrganizationID,
)
@@ -12112,6 +12151,8 @@ func (q *sqlQuerier) UpdateCustomRole(ctx context.Context, arg UpdateCustomRoleP
&i.UpdatedAt,
&i.OrganizationID,
&i.ID,
&i.IsSystem,
&i.MemberPermissions,
)
return i, err
}
@@ -23,6 +23,14 @@ WHERE
organization_id = @organization_id
ELSE true
END
-- Filter system roles. By default, system roles are excluded.
-- System roles are managed by Coder and should be hidden from user-facing APIs.
-- The authorization system uses @include_system_roles = true to load them.
AND CASE WHEN @include_system_roles :: boolean THEN
true
ELSE
is_system = false
END
;
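The CASE predicate above (show system roles only when `@include_system_roles` is true) can be mirrored in plain Go. This is a sketch with hypothetical types — `role` stands in for `database.CustomRole` — not the query itself:

```go
package main

import "fmt"

// role is a stripped-down stand-in for database.CustomRole.
type role struct {
	Name     string
	IsSystem bool
}

// filterSystemRoles mirrors the SQL predicate: when includeSystemRoles is
// false (the default for user-facing APIs), rows with is_system = true are
// excluded; the authorization path passes true and sees everything.
func filterSystemRoles(roles []role, includeSystemRoles bool) []role {
	if includeSystemRoles {
		return roles
	}
	out := make([]role, 0, len(roles))
	for _, r := range roles {
		if !r.IsSystem {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	all := []role{
		{Name: "organization-member", IsSystem: true},
		{Name: "my-custom-role", IsSystem: false},
	}
	fmt.Println(len(filterSystemRoles(all, false))) // user-facing view
	fmt.Println(len(filterSystemRoles(all, true)))  // authorization view
}
```

Defaulting the flag to "exclude" keeps system roles out of every caller that does not opt in, which is the same defense-in-depth stance as the `is_system = false` guard on DeleteCustomRole below.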
-- name: DeleteCustomRole :exec
@@ -31,6 +39,9 @@ DELETE FROM
WHERE
name = lower(@name)
AND organization_id = @organization_id
-- Prevents accidental deletion of system roles even if the API
-- layer check is bypassed due to a bug.
AND is_system = false
;
-- name: InsertCustomRole :one
@@ -42,6 +53,8 @@ INSERT INTO
site_permissions,
org_permissions,
user_permissions,
member_permissions,
is_system,
created_at,
updated_at
)
@@ -53,6 +66,8 @@ VALUES (
@site_permissions,
@org_permissions,
@user_permissions,
@member_permissions,
@is_system,
now(),
now()
)
@@ -66,6 +81,7 @@ SET
site_permissions = @site_permissions,
org_permissions = @org_permissions,
user_permissions = @user_permissions,
member_permissions = @member_permissions,
updated_at = now()
WHERE
name = lower(@name)
@@ -53,6 +53,9 @@ sql:
- column: "custom_roles.user_permissions"
go_type:
type: "CustomRolePermissions"
- column: "custom_roles.member_permissions"
go_type:
type: "CustomRolePermissions"
- column: "provisioner_daemons.tags"
go_type:
type: "StringMap"
@@ -307,13 +307,13 @@ func TestExternalAuthManagement(t *testing.T) {
gitlab.ExternalLogin(t, client)
links, err := db.GetExternalAuthLinksByUserID(
dbauthz.As(ctx, coderdtest.AuthzUserSubject(user, ownerUser.OrganizationID)), user.ID)
dbauthz.As(ctx, coderdtest.AuthzUserSubject(user)), user.ID)
require.NoError(t, err)
require.Len(t, links, 2)
// Expire the links
for _, l := range links {
_, err := db.UpdateExternalAuthLink(dbauthz.As(ctx, coderdtest.AuthzUserSubject(user, ownerUser.OrganizationID)), database.UpdateExternalAuthLinkParams{
_, err := db.UpdateExternalAuthLink(dbauthz.As(ctx, coderdtest.AuthzUserSubject(user)), database.UpdateExternalAuthLinkParams{
ProviderID: l.ProviderID,
UserID: l.UserID,
UpdatedAt: dbtime.Now(),
@@ -80,6 +80,7 @@ func Logger(log slog.Logger) func(next http.Handler) http.Handler {
}
httplog := log.With(
slog.F("user_agent", r.Header.Get("User-Agent")),
slog.F("host", httpapi.RequestHost(r)),
slog.F("path", r.URL.Path),
slog.F("proto", r.Proto),
@@ -90,7 +90,7 @@ func TestLoggerMiddleware_SingleRequest(t *testing.T) {
}
// Check that the log contains the expected fields
requiredFields := []string{"host", "path", "proto", "remote_addr", "start", "took", "status_code", "latency_ms"}
requiredFields := []string{"host", "path", "proto", "remote_addr", "start", "took", "status_code", "user_agent", "latency_ms"}
for _, field := range requiredFields {
_, exists := fieldsMap[field]
require.True(t, exists, "field %q is missing in log fields", field)
@@ -393,9 +393,10 @@ func convertOrganizationMembers(ctx context.Context, db database.Store, mems []d
}
customRoles, err := db.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: roleLookup,
ExcludeOrgRoles: false,
OrganizationID: uuid.Nil,
LookupRoles: roleLookup,
ExcludeOrgRoles: false,
OrganizationID: uuid.Nil,
IncludeSystemRoles: false,
})
if err != nil {
// We are missing the display names, but that is not absolutely required. So just
@@ -168,7 +168,7 @@ func TestFilter(t *testing.T) {
Name: "Admin",
Actor: Subject{
ID: userIDs[0].String(),
Roles: RoleIdentifiers{ScopedRoleOrgMember(orgIDs[0]), RoleAuditor(), RoleOwner(), RoleMember()},
Roles: RoleIdentifiers{RoleAuditor(), RoleOwner(), RoleMember()},
},
ObjectType: ResourceWorkspace.Type,
Action: policy.ActionRead,
@@ -177,7 +177,7 @@ func TestFilter(t *testing.T) {
Name: "OrgAdmin",
Actor: Subject{
ID: userIDs[0].String(),
Roles: RoleIdentifiers{ScopedRoleOrgMember(orgIDs[0]), ScopedRoleOrgAdmin(orgIDs[0]), RoleMember()},
Roles: RoleIdentifiers{ScopedRoleOrgAdmin(orgIDs[0]), RoleMember()},
},
ObjectType: ResourceWorkspace.Type,
Action: policy.ActionRead,
@@ -186,7 +186,7 @@ func TestFilter(t *testing.T) {
Name: "OrgMember",
Actor: Subject{
ID: userIDs[0].String(),
Roles: RoleIdentifiers{ScopedRoleOrgMember(orgIDs[0]), ScopedRoleOrgMember(orgIDs[1]), RoleMember()},
Roles: RoleIdentifiers{RoleMember()},
},
ObjectType: ResourceWorkspace.Type,
Action: policy.ActionRead,
@@ -196,11 +196,9 @@ func TestFilter(t *testing.T) {
Actor: Subject{
ID: userIDs[0].String(),
Roles: RoleIdentifiers{
ScopedRoleOrgMember(orgIDs[0]), ScopedRoleOrgAdmin(orgIDs[0]),
ScopedRoleOrgMember(orgIDs[1]), ScopedRoleOrgAdmin(orgIDs[1]),
ScopedRoleOrgMember(orgIDs[2]), ScopedRoleOrgAdmin(orgIDs[2]),
ScopedRoleOrgMember(orgIDs[4]),
ScopedRoleOrgMember(orgIDs[5]),
ScopedRoleOrgAdmin(orgIDs[0]),
ScopedRoleOrgAdmin(orgIDs[1]),
ScopedRoleOrgAdmin(orgIDs[2]),
RoleMember(),
},
},
@@ -221,10 +219,6 @@ func TestFilter(t *testing.T) {
Actor: Subject{
ID: userIDs[0].String(),
Roles: RoleIdentifiers{
ScopedRoleOrgMember(orgIDs[0]),
ScopedRoleOrgMember(orgIDs[1]),
ScopedRoleOrgMember(orgIDs[2]),
ScopedRoleOrgMember(orgIDs[3]),
RoleMember(),
},
},
@@ -235,7 +229,7 @@ func TestFilter(t *testing.T) {
Name: "ScopeApplicationConnect",
Actor: Subject{
ID: userIDs[0].String(),
Roles: RoleIdentifiers{ScopedRoleOrgMember(orgIDs[0]), RoleAuditor(), RoleOwner(), RoleMember()},
Roles: RoleIdentifiers{RoleAuditor(), RoleOwner(), RoleMember()},
},
ObjectType: ResourceWorkspace.Type,
Action: policy.ActionRead,
@@ -312,7 +306,7 @@ func TestAuthorizeDomain(t *testing.T) {
Groups: []string{allUsersGroup},
Roles: Roles{
must(RoleByName(RoleMember())),
must(RoleByName(ScopedRoleOrgMember(defOrg))),
orgMemberRole(defOrg),
},
}
@@ -456,7 +450,7 @@ func TestAuthorizeDomain(t *testing.T) {
Scope: must(ExpandScope(ScopeAll)),
Roles: Roles{
must(RoleByName(ScopedRoleOrgAdmin(defOrg))),
must(RoleByName(ScopedRoleOrgMember(defOrg))),
orgMemberRole(defOrg),
must(RoleByName(RoleMember())),
},
}
@@ -502,39 +496,40 @@ func TestAuthorizeDomain(t *testing.T) {
},
}
siteAdminWorkspaceActions := slice.Omit(ResourceWorkspace.AvailableActions(), policy.ActionShare)
testAuthorize(t, "SiteAdmin", user, []authTestCase{
// Similar to an orphaned user, but has site level perms
{resource: ResourceTemplate.AnyOrganization(), actions: []policy.Action{policy.ActionCreate}, allow: true},
// Org + me
{resource: ResourceWorkspace.InOrg(defOrg).WithOwner(user.ID), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.InOrg(defOrg), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.InOrg(defOrg).WithOwner(user.ID), actions: siteAdminWorkspaceActions, allow: true},
{resource: ResourceWorkspace.InOrg(defOrg), actions: siteAdminWorkspaceActions, allow: true},
{resource: ResourceWorkspace.WithOwner(user.ID), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.WithOwner(user.ID), actions: siteAdminWorkspaceActions, allow: true},
{resource: ResourceWorkspace.All(), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.All(), actions: siteAdminWorkspaceActions, allow: true},
// Other org + me
{resource: ResourceWorkspace.InOrg(unusedID).WithOwner(user.ID), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.InOrg(unusedID), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.InOrg(unusedID).WithOwner(user.ID), actions: siteAdminWorkspaceActions, allow: true},
{resource: ResourceWorkspace.InOrg(unusedID), actions: siteAdminWorkspaceActions, allow: true},
// Other org + other user
{resource: ResourceWorkspace.InOrg(defOrg).WithOwner("not-me"), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.InOrg(defOrg).WithOwner("not-me"), actions: siteAdminWorkspaceActions, allow: true},
{resource: ResourceWorkspace.WithOwner("not-me"), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.WithOwner("not-me"), actions: siteAdminWorkspaceActions, allow: true},
// Other org + other user
{resource: ResourceWorkspace.InOrg(unusedID).WithOwner("not-me"), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.InOrg(unusedID), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.InOrg(unusedID).WithOwner("not-me"), actions: siteAdminWorkspaceActions, allow: true},
{resource: ResourceWorkspace.InOrg(unusedID), actions: siteAdminWorkspaceActions, allow: true},
{resource: ResourceWorkspace.WithOwner("not-me"), actions: ResourceWorkspace.AvailableActions(), allow: true},
{resource: ResourceWorkspace.WithOwner("not-me"), actions: siteAdminWorkspaceActions, allow: true},
})
user = Subject{
ID: "me",
Scope: must(ExpandScope(ScopeApplicationConnect)),
Roles: Roles{
must(RoleByName(ScopedRoleOrgMember(defOrg))),
orgMemberRole(defOrg),
must(RoleByName(RoleMember())),
},
}
@@ -762,7 +757,7 @@ func TestAuthorizeLevels(t *testing.T) {
testAuthorize(t, "AdminAlwaysAllow", user,
cases(func(c authTestCase) authTestCase {
c.actions = ResourceWorkspace.AvailableActions()
c.actions = slice.Omit(ResourceWorkspace.AvailableActions(), policy.ActionShare)
c.allow = true
return c
}, []authTestCase{
@@ -890,7 +885,7 @@ func TestAuthorizeScope(t *testing.T) {
ID: "me",
Roles: Roles{
must(RoleByName(RoleMember())),
must(RoleByName(ScopedRoleOrgMember(defOrg))),
orgMemberRole(defOrg),
},
Scope: must(ExpandScope(ScopeApplicationConnect)),
}
@@ -926,7 +921,7 @@ func TestAuthorizeScope(t *testing.T) {
ID: "me",
Roles: Roles{
must(RoleByName(RoleMember())),
must(RoleByName(ScopedRoleOrgMember(defOrg))),
orgMemberRole(defOrg),
},
Scope: Scope{
Role: Role{
@@ -1015,7 +1010,7 @@ func TestAuthorizeScope(t *testing.T) {
ID: "me",
Roles: Roles{
must(RoleByName(RoleMember())),
must(RoleByName(ScopedRoleOrgMember(defOrg))),
orgMemberRole(defOrg),
},
Scope: Scope{
Role: Role{
@@ -1070,7 +1065,7 @@ func TestAuthorizeScope(t *testing.T) {
ID: meID.String(),
Roles: Roles{
must(RoleByName(RoleMember())),
must(RoleByName(ScopedRoleOrgMember(defOrg))),
orgMemberRole(defOrg),
},
Scope: must(ScopeNoUserData.Expand()),
}
@@ -1138,7 +1133,7 @@ func TestAuthorizeScope(t *testing.T) {
// This is odd behavior, as without this membership role, the test for
// the workspace fails. Maybe scopes should just assume the user
// is a member.
must(RoleByName(ScopedRoleOrgMember(defOrg))),
orgMemberRole(defOrg),
},
Scope: Scope{
Role: Role{
@@ -1404,6 +1399,28 @@ func testAuthorize(t *testing.T, name string, subject Subject, sets ...[]authTes
}
}
// orgMemberRole returns an organization-member role for RBAC-only tests.
//
// organization-member is now a DB-backed system role (not a built-in role), so
// RoleByName won't resolve it here. Assume the default behavior: workspace
// sharing enabled.
func orgMemberRole(orgID uuid.UUID) Role {
workspaceSharingDisabled := false
orgPerms, memberPerms := OrgMemberPermissions(workspaceSharingDisabled)
return Role{
Identifier: ScopedRoleOrgMember(orgID),
DisplayName: "",
Site: []Permission{},
User: []Permission{},
ByOrgID: map[string]OrgPermissions{
orgID.String(): {
Org: orgPerms,
Member: memberPerms,
},
},
}
}
func must[T any](value T, err error) T {
if err != nil {
panic(err)
@@ -229,15 +229,30 @@ func allPermsExcept(excepts ...Objecter) []Permission {
// https://github.com/coder/coder/issues/1194
var builtInRoles map[string]func(orgID uuid.UUID) Role
// systemRoles are roles that have migrated from builtInRoles to
// database storage. This migration is partial - permissions are still
// generated at runtime and reconciled to the database, rather than
// the database being the source of truth.
var systemRoles = map[string]struct{}{
RoleOrgMember(): {},
}
func SystemRoleName(name string) bool {
_, ok := systemRoles[name]
return ok
}
type RoleOptions struct {
NoOwnerWorkspaceExec bool
}
// ReservedRoleName exists because the database should only allow unique role
// names, but some roles are built in. So these names are reserved
// names, but some roles are built in or generated at runtime. So these names
// are reserved
func ReservedRoleName(name string) bool {
_, ok := builtInRoles[name]
return ok
_, isBuiltIn := builtInRoles[name]
_, isSystem := systemRoles[name]
return isBuiltIn || isSystem
}
// ReloadBuiltinRoles loads the static roles into the builtInRoles map.
@@ -252,7 +267,7 @@ func ReloadBuiltinRoles(opts *RoleOptions) {
opts = &RoleOptions{}
}
ownerWorkspaceActions := ResourceWorkspace.AvailableActions()
ownerWorkspaceActions := slice.Omit(ResourceWorkspace.AvailableActions(), policy.ActionShare)
if opts.NoOwnerWorkspaceExec {
// Remove ssh and application connect from the owner role. This
// prevents owners from having exec access to all workspaces.
@@ -431,39 +446,6 @@ func ReloadBuiltinRoles(opts *RoleOptions) {
},
}
},
// orgMember is an implied role to any member in an organization.
orgMember: func(organizationID uuid.UUID) Role {
return Role{
Identifier: RoleIdentifier{Name: orgMember, OrganizationID: organizationID},
DisplayName: "",
Site: []Permission{},
User: []Permission{},
ByOrgID: map[string]OrgPermissions{
organizationID.String(): {
Org: Permissions(map[string][]policy.Action{
// All users can see the provisioner daemons for workspace
// creation.
ResourceProvisionerDaemon.Type: {policy.ActionRead},
// All org members can read the organization
ResourceOrganization.Type: {policy.ActionRead},
// Can read available roles.
ResourceAssignOrgRole.Type: {policy.ActionRead},
}),
Member: append(allPermsExcept(ResourceWorkspaceDormant, ResourcePrebuiltWorkspace, ResourceUser, ResourceOrganizationMember),
Permissions(map[string][]policy.Action{
// Reduced permission set on dormant workspaces. No build, ssh, or exec
ResourceWorkspaceDormant.Type: {policy.ActionRead, policy.ActionDelete, policy.ActionCreate, policy.ActionUpdate, policy.ActionWorkspaceStop, policy.ActionCreateAgent, policy.ActionDeleteAgent},
// Can read their own organization member record
ResourceOrganizationMember.Type: {policy.ActionRead},
// Users can create provisioner daemons scoped to themselves.
ResourceProvisionerDaemon.Type: {policy.ActionRead, policy.ActionCreate, policy.ActionRead, policy.ActionUpdate},
})...,
),
},
},
}
},
orgAuditor: func(organizationID uuid.UUID) Role {
return Role{
Identifier: RoleIdentifier{Name: orgAuditor, OrganizationID: organizationID},
@@ -915,3 +897,110 @@ func DeduplicatePermissions(perms []Permission) []Permission {
}
return deduped
}
// PermissionsEqual compares two permission slices as sets. Order and
// duplicate entries do not matter; it only checks that both slices
// contain the same unique permissions.
func PermissionsEqual(a, b []Permission) bool {
setA := make(map[Permission]struct{}, len(a))
for _, p := range a {
setA[p] = struct{}{}
}
setB := make(map[Permission]struct{}, len(b))
for _, p := range b {
if _, ok := setA[p]; !ok {
return false
}
setB[p] = struct{}{}
}
return len(setA) == len(setB)
}
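The set semantics of `PermissionsEqual` (order-insensitive, duplicate-tolerant) can be exercised with a standalone sketch. The `Permission` struct below is a simplified stand-in for the real `rbac.Permission` type, not the actual coder/coder definition:

```go
package main

import "fmt"

// Permission is a simplified stand-in for rbac.Permission.
type Permission struct {
	ResourceType string
	Action       string
	Negate       bool
}

// permissionsEqual mirrors the comparison above: build a set from a,
// verify every element of b is in it, then compare unique counts so
// that duplicates and ordering are ignored.
func permissionsEqual(a, b []Permission) bool {
	setA := make(map[Permission]struct{}, len(a))
	for _, p := range a {
		setA[p] = struct{}{}
	}
	setB := make(map[Permission]struct{}, len(b))
	for _, p := range b {
		if _, ok := setA[p]; !ok {
			return false
		}
		setB[p] = struct{}{}
	}
	return len(setA) == len(setB)
}

func main() {
	a := []Permission{
		{"workspace", "read", false},
		{"template", "update", false},
	}
	// Same set as a, but reordered and with a duplicate entry.
	b := []Permission{
		{"template", "update", false},
		{"workspace", "read", false},
		{"workspace", "read", false},
	}
	fmt.Println(permissionsEqual(a, b))    // true: duplicates and order do not matter
	fmt.Println(permissionsEqual(a, a[:1])) // false: strict subset
}
```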
// OrgMemberPermissions returns the permissions for the organization-member
// system role. The results are then stored in the database and can vary per
// organization based on the workspace_sharing_disabled setting.
// This is the source of truth for org-member permissions, used by:
// - the startup reconciliation routine, to keep permissions current with
// RBAC resources
// - the organization workspace sharing setting endpoint, when updating
// the setting
// - the org creation endpoint, when populating the organization-member
// system role created by the DB trigger
//
//nolint:revive // workspaceSharingDisabled is an org setting
func OrgMemberPermissions(workspaceSharingDisabled bool) (
orgPerms, memberPerms []Permission,
) {
// Organization-level permissions that all org members get.
orgPermMap := map[string][]policy.Action{
// All users can see provisioner daemons for workspace creation.
ResourceProvisionerDaemon.Type: {policy.ActionRead},
// All org members can read the organization.
ResourceOrganization.Type: {policy.ActionRead},
// Can read available roles.
ResourceAssignOrgRole.Type: {policy.ActionRead},
}
// When workspace sharing is enabled, members need to see other org members
// and groups to share workspaces with them.
if !workspaceSharingDisabled {
orgPermMap[ResourceOrganizationMember.Type] = []policy.Action{policy.ActionRead}
orgPermMap[ResourceGroup.Type] = []policy.Action{policy.ActionRead}
}
orgPerms = Permissions(orgPermMap)
// Member-scoped permissions (resources owned by the member).
// Uses allPermsExcept to automatically include permissions for new resources.
memberPerms = append(
allPermsExcept(
ResourceWorkspaceDormant,
ResourcePrebuiltWorkspace,
ResourceUser,
ResourceOrganizationMember,
),
Permissions(map[string][]policy.Action{
// Reduced permission set on dormant workspaces. No build,
// ssh, or exec.
ResourceWorkspaceDormant.Type: {
policy.ActionRead,
policy.ActionDelete,
policy.ActionCreate,
policy.ActionUpdate,
policy.ActionWorkspaceStop,
policy.ActionCreateAgent,
policy.ActionDeleteAgent,
},
// Can read their own organization member record.
ResourceOrganizationMember.Type: {
policy.ActionRead,
},
// Users can create provisioner daemons scoped to themselves.
//
// TODO(geokat): copied from the original built-in role
// verbatim, but seems to be a no-op (not excepted above;
// plus no owner is set for the ProvisionerDaemon RBAC
// object).
ResourceProvisionerDaemon.Type: {
policy.ActionRead,
policy.ActionCreate,
policy.ActionUpdate,
},
})...,
)
if workspaceSharingDisabled {
// Org-level negation blocks sharing on ANY workspace in the
// org. This overrides any positive permission from other
// roles, including org-admin.
orgPerms = append(orgPerms, Permission{
Negate: true,
ResourceType: ResourceWorkspace.Type,
Action: policy.ActionShare,
})
}
return orgPerms, memberPerms
}
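The org-level negation appended above is meant to override positive grants from other roles, including org-admin. A minimal standalone model of that override semantics is sketched below; the real evaluation happens in the generated OPA policy, so this is only an illustrative approximation with a simplified `perm` struct:

```go
package main

import "fmt"

// perm is a simplified stand-in for rbac.Permission.
type perm struct {
	resourceType string
	action       string
	negate       bool
}

// allowed models the override rule: if any matching permission is
// negated, the action is denied regardless of other grants; otherwise
// the action is allowed only if some permission grants it.
func allowed(perms []perm, resource, action string) bool {
	granted := false
	for _, p := range perms {
		if p.resourceType != resource || p.action != action {
			continue
		}
		if p.negate {
			return false // negation wins over any positive grant
		}
		granted = true
	}
	return granted
}

func main() {
	// A positive share grant (e.g. from org-admin) combined with the
	// org-level negation produced when workspace sharing is disabled.
	perms := []perm{
		{"workspace", "share", false},
		{"workspace", "share", true},
	}
	fmt.Println(allowed(perms, "workspace", "share")) // false: negation overrides
}
```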
@@ -1,6 +1,7 @@
package rbac
import (
"slices"
"testing"
"github.com/google/uuid"
@@ -74,7 +75,7 @@ func TestRegoInputValue(t *testing.T) {
// Expand all roles and make sure we have a good copy.
// This is because these tests modify the roles, and we don't want to
// modify the original roles.
roles, err := RoleIdentifiers{ScopedRoleOrgMember(uuid.New()), ScopedRoleOrgAdmin(uuid.New()), RoleMember()}.Expand()
roles, err := RoleIdentifiers{ScopedRoleOrgAuditor(uuid.New()), ScopedRoleOrgAdmin(uuid.New()), RoleMember()}.Expand()
require.NoError(t, err, "failed to expand roles")
for i := range roles {
// If all cached values are nil, then the role will not use
@@ -224,9 +225,9 @@ func TestRoleByName(t *testing.T) {
{Role: builtInRoles[orgAdmin](uuid.New())},
{Role: builtInRoles[orgAdmin](uuid.New())},
{Role: builtInRoles[orgMember](uuid.New())},
{Role: builtInRoles[orgMember](uuid.New())},
{Role: builtInRoles[orgMember](uuid.New())},
{Role: builtInRoles[orgAuditor](uuid.New())},
{Role: builtInRoles[orgAuditor](uuid.New())},
{Role: builtInRoles[orgAuditor](uuid.New())},
}
for _, c := range testCases {
@@ -271,6 +272,62 @@ func TestDeduplicatePermissions(t *testing.T) {
require.Equal(t, want, got)
}
func TestPermissionsEqual(t *testing.T) {
t.Parallel()
a := []Permission{
{ResourceType: ResourceWorkspace.Type, Action: policy.ActionRead},
{ResourceType: ResourceTemplate.Type, Action: policy.ActionUpdate},
{ResourceType: ResourceWorkspace.Type, Action: policy.ActionShare, Negate: true},
}
t.Run("Order", func(t *testing.T) {
t.Parallel()
b := []Permission{
a[2],
a[0],
a[1],
}
require.True(t, PermissionsEqual(a, b))
})
t.Run("SubsetAndSuperset", func(t *testing.T) {
t.Parallel()
require.False(t, PermissionsEqual(a, a[:2]))
b := append(slices.Clone(a), Permission{ResourceType: ResourceWorkspace.Type, Action: policy.ActionUpdate})
require.False(t, PermissionsEqual(a, b))
})
t.Run("Negate", func(t *testing.T) {
t.Parallel()
b := slices.Clone(a)
b[0] = Permission{
ResourceType: ResourceWorkspace.Type, Action: policy.ActionRead, Negate: true,
}
require.False(t, PermissionsEqual(a, b))
})
t.Run("Duplicates", func(t *testing.T) {
t.Parallel()
b := append(slices.Clone(a), a[0])
require.True(t, PermissionsEqual(a, b), "equal sets with duplicates should compare equal even without pre-deduplication")
})
t.Run("NilEmpty", func(t *testing.T) {
t.Parallel()
var nilSlice []Permission
emptySlice := []Permission{}
require.True(t, PermissionsEqual(nilSlice, emptySlice))
require.True(t, PermissionsEqual(emptySlice, nilSlice))
})
}
// equalRoles compares 2 roles for equality.
func equalRoles(t *testing.T, a, b Role) {
require.Equal(t, a.Identifier, b.Identifier, "role names")
@@ -3,6 +3,7 @@ package rbac_test
import (
"context"
"fmt"
"slices"
"testing"
"github.com/google/uuid"
@@ -50,6 +51,56 @@ func TestBuiltInRoles(t *testing.T) {
}
}
func TestSystemRolesAreReservedRoleNames(t *testing.T) {
t.Parallel()
require.True(t, rbac.ReservedRoleName(rbac.RoleOrgMember()))
}
func TestOrgMemberPermissions(t *testing.T) {
t.Parallel()
t.Run("WorkspaceSharingEnabled", func(t *testing.T) {
t.Parallel()
orgPerms, _ := rbac.OrgMemberPermissions(false)
require.True(t, slices.Contains(orgPerms, rbac.Permission{
ResourceType: rbac.ResourceOrganizationMember.Type,
Action: policy.ActionRead,
}))
require.True(t, slices.Contains(orgPerms, rbac.Permission{
ResourceType: rbac.ResourceGroup.Type,
Action: policy.ActionRead,
}))
require.False(t, slices.Contains(orgPerms, rbac.Permission{
Negate: true,
ResourceType: rbac.ResourceWorkspace.Type,
Action: policy.ActionShare,
}))
})
t.Run("WorkspaceSharingDisabled", func(t *testing.T) {
t.Parallel()
orgPerms, _ := rbac.OrgMemberPermissions(true)
require.False(t, slices.Contains(orgPerms, rbac.Permission{
ResourceType: rbac.ResourceOrganizationMember.Type,
Action: policy.ActionRead,
}))
require.False(t, slices.Contains(orgPerms, rbac.Permission{
ResourceType: rbac.ResourceGroup.Type,
Action: policy.ActionRead,
}))
require.True(t, slices.Contains(orgPerms, rbac.Permission{
Negate: true,
ResourceType: rbac.ResourceWorkspace.Type,
Action: policy.ActionShare,
}))
})
}
//nolint:tparallel,paralleltest
func TestOwnerExec(t *testing.T) {
owner := rbac.Subject{
@@ -86,6 +137,19 @@ func TestOwnerExec(t *testing.T) {
})
}
// These were "pared down" in https://github.com/coder/coder/pull/21359 to avoid
// using the now DB-backed organization-member role. As a result, they no longer
// model real-world org-scoped users (who also have organization-member).
//
// For example, `org_auditor` is now expected to be forbidden for
// `assign_org_role:read`, even though in production an org auditor can read
// available org roles via the org-member baseline.
//
// The tests are still useful for unit-testing the built-in roles in isolation.
//
// TODO(geokat): Add an integration test that includes organization-member to
// recover the old test coverage.
//
// nolint:tparallel,paralleltest // subtests share a map, just run sequentially.
func TestRolePermissions(t *testing.T) {
t.Parallel()
@@ -110,34 +174,30 @@ func TestRolePermissions(t *testing.T) {
// Subjects to user
memberMe := authSubject{Name: "member_me", Actor: rbac.Subject{ID: currentUser.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember()}}}
orgMemberMe := authSubject{Name: "org_member_me", Actor: rbac.Subject{ID: currentUser.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(orgID)}}}
orgMemberMeBanWorkspace := authSubject{Name: "org_member_me_workspace_ban", Actor: rbac.Subject{ID: currentUser.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(orgID), rbac.ScopedRoleOrgWorkspaceCreationBan(orgID)}}}
groupMemberMe := authSubject{Name: "group_member_me", Actor: rbac.Subject{ID: currentUser.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(orgID)}, Groups: []string{groupID.String()}}}
owner := authSubject{Name: "owner", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.RoleOwner()}}}
templateAdmin := authSubject{Name: "template-admin", Actor: rbac.Subject{ID: templateAdminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.RoleTemplateAdmin()}}}
userAdmin := authSubject{Name: "user-admin", Actor: rbac.Subject{ID: userAdminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.RoleUserAdmin()}}}
auditor := authSubject{Name: "auditor", Actor: rbac.Subject{ID: auditorID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.RoleAuditor()}}}
orgAdmin := authSubject{Name: "org_admin", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(orgID), rbac.ScopedRoleOrgAdmin(orgID)}}}
orgAuditor := authSubject{Name: "org_auditor", Actor: rbac.Subject{ID: auditorID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(orgID), rbac.ScopedRoleOrgAuditor(orgID)}}}
orgUserAdmin := authSubject{Name: "org_user_admin", Actor: rbac.Subject{ID: templateAdminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(orgID), rbac.ScopedRoleOrgUserAdmin(orgID)}}}
orgTemplateAdmin := authSubject{Name: "org_template_admin", Actor: rbac.Subject{ID: userAdminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(orgID), rbac.ScopedRoleOrgTemplateAdmin(orgID)}}}
orgAdmin := authSubject{Name: "org_admin", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgAdmin(orgID)}}}
orgAuditor := authSubject{Name: "org_auditor", Actor: rbac.Subject{ID: auditorID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgAuditor(orgID)}}}
orgUserAdmin := authSubject{Name: "org_user_admin", Actor: rbac.Subject{ID: templateAdminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgUserAdmin(orgID)}}}
orgTemplateAdmin := authSubject{Name: "org_template_admin", Actor: rbac.Subject{ID: userAdminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgTemplateAdmin(orgID)}}}
orgAdminBanWorkspace := authSubject{Name: "org_admin_workspace_ban", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgAdmin(orgID), rbac.ScopedRoleOrgWorkspaceCreationBan(orgID)}}}
setOrgNotMe := authSubjectSet{orgAdmin, orgAuditor, orgUserAdmin, orgTemplateAdmin}
otherOrgMember := authSubject{Name: "org_member_other", Actor: rbac.Subject{ID: uuid.NewString(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(otherOrg)}}}
otherOrgAdmin := authSubject{Name: "org_admin_other", Actor: rbac.Subject{ID: uuid.NewString(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(otherOrg), rbac.ScopedRoleOrgAdmin(otherOrg)}}}
otherOrgAuditor := authSubject{Name: "org_auditor_other", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(otherOrg), rbac.ScopedRoleOrgAuditor(otherOrg)}}}
otherOrgUserAdmin := authSubject{Name: "org_user_admin_other", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(otherOrg), rbac.ScopedRoleOrgUserAdmin(otherOrg)}}}
otherOrgTemplateAdmin := authSubject{Name: "org_template_admin_other", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgMember(otherOrg), rbac.ScopedRoleOrgTemplateAdmin(otherOrg)}}}
setOtherOrg := authSubjectSet{otherOrgMember, otherOrgAdmin, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin}
otherOrgAdmin := authSubject{Name: "org_admin_other", Actor: rbac.Subject{ID: uuid.NewString(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgAdmin(otherOrg)}}}
otherOrgAuditor := authSubject{Name: "org_auditor_other", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgAuditor(otherOrg)}}}
otherOrgUserAdmin := authSubject{Name: "org_user_admin_other", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgUserAdmin(otherOrg)}}}
otherOrgTemplateAdmin := authSubject{Name: "org_template_admin_other", Actor: rbac.Subject{ID: adminID.String(), Roles: rbac.RoleIdentifiers{rbac.RoleMember(), rbac.ScopedRoleOrgTemplateAdmin(otherOrg)}}}
setOtherOrg := authSubjectSet{otherOrgAdmin, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin}
// requiredSubjects are required to be asserted in each test case. This is
// to make sure one is not forgotten.
requiredSubjects := []authSubject{
memberMe, owner,
orgMemberMe, orgAdmin,
otherOrgAdmin, otherOrgMember, orgAuditor, orgUserAdmin, orgTemplateAdmin,
orgAdmin, otherOrgAdmin, orgAuditor, orgUserAdmin, orgTemplateAdmin,
templateAdmin, userAdmin, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
}
@@ -159,10 +219,10 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceUserObject(currentUser),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {orgMemberMe, owner, memberMe, templateAdmin, userAdmin, orgUserAdmin, otherOrgAdmin, otherOrgUserAdmin, orgAdmin},
true: {owner, memberMe, templateAdmin, userAdmin, orgUserAdmin, otherOrgAdmin, otherOrgUserAdmin, orgAdmin},
false: {
orgTemplateAdmin, orgAuditor,
otherOrgMember, otherOrgAuditor, otherOrgTemplateAdmin,
otherOrgAuditor, otherOrgTemplateAdmin,
},
},
},
@@ -172,7 +232,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceUser,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin},
},
},
{
@@ -181,7 +241,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe, orgAdmin, templateAdmin, orgTemplateAdmin, orgMemberMeBanWorkspace},
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin, orgAdminBanWorkspace},
false: {setOtherOrg, memberMe, userAdmin, orgAuditor, orgUserAdmin},
},
},
@@ -191,7 +251,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionUpdate},
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe, orgAdmin},
true: {owner, orgAdmin, orgAdminBanWorkspace},
false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
@@ -201,8 +261,8 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionDelete},
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe, orgAdmin},
false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgMemberMeBanWorkspace},
true: {owner, orgAdmin},
false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgAdminBanWorkspace},
},
},
{
@@ -211,7 +271,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionSSH},
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe},
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
@@ -221,7 +281,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionApplicationConnect},
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe},
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
@@ -230,8 +290,8 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreateAgent, policy.ActionDeleteAgent},
Resource: rbac.ResourceWorkspace.WithID(workspaceID).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe, orgAdmin},
false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgMemberMeBanWorkspace},
true: {owner, orgAdmin},
false: {setOtherOrg, memberMe, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor, orgAdminBanWorkspace},
},
},
{
@@ -242,9 +302,9 @@ func TestRolePermissions(t *testing.T) {
InOrg(orgID).
WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe, orgAdmin, orgMemberMeBanWorkspace},
true: {orgAdmin, orgAdminBanWorkspace},
false: {
memberMe, setOtherOrg,
owner, memberMe, setOtherOrg,
templateAdmin, userAdmin,
orgTemplateAdmin, orgUserAdmin, orgAuditor,
},
@@ -260,10 +320,10 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {},
false: {
orgMemberMe, orgAdmin, owner, setOtherOrg,
orgAdmin, owner, setOtherOrg,
userAdmin, memberMe,
templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor,
orgMemberMeBanWorkspace,
orgAdminBanWorkspace,
},
},
},
@@ -273,7 +333,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceTemplate.WithID(templateID).InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin},
false: {setOtherOrg, orgUserAdmin, orgAuditor, memberMe, orgMemberMe, userAdmin},
false: {setOtherOrg, orgUserAdmin, orgAuditor, memberMe, userAdmin},
},
},
{
@@ -282,7 +342,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceTemplate.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAuditor, orgAdmin, templateAdmin, orgTemplateAdmin},
false: {setOtherOrg, orgUserAdmin, memberMe, userAdmin, orgMemberMe},
false: {setOtherOrg, orgUserAdmin, memberMe, userAdmin},
},
},
{
@@ -292,8 +352,8 @@ func TestRolePermissions(t *testing.T) {
groupID.String(): {policy.ActionUse},
}),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin, groupMemberMe},
false: {setOtherOrg, orgAuditor, orgUserAdmin, memberMe, userAdmin, orgMemberMe},
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin},
false: {setOtherOrg, orgAuditor, orgUserAdmin, memberMe, userAdmin},
},
},
{
@@ -304,7 +364,7 @@ func TestRolePermissions(t *testing.T) {
true: {owner, templateAdmin},
// Org template admins can only read org scoped files.
// File scope is currently not org scoped :cry:
false: {setOtherOrg, orgTemplateAdmin, orgMemberMe, orgAdmin, memberMe, userAdmin, orgAuditor, orgUserAdmin},
false: {setOtherOrg, orgTemplateAdmin, orgAdmin, memberMe, userAdmin, orgAuditor, orgUserAdmin},
},
},
{
@@ -312,7 +372,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead},
Resource: rbac.ResourceFile.WithID(fileID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, memberMe, orgMemberMe, templateAdmin},
true: {owner, memberMe, templateAdmin},
false: {setOtherOrg, setOrgNotMe, userAdmin},
},
},
@@ -322,7 +382,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOrganization,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -331,7 +391,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOrganization.WithID(orgID).InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
false: {setOtherOrg, orgTemplateAdmin, orgUserAdmin, orgAuditor, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, orgTemplateAdmin, orgUserAdmin, orgAuditor, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -339,7 +399,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceOrganization.WithID(orgID).InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgMemberMe, templateAdmin, orgTemplateAdmin, auditor, orgAuditor, userAdmin, orgUserAdmin},
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin, auditor, orgAuditor, userAdmin, orgUserAdmin},
false: {setOtherOrg, memberMe},
},
},
@@ -349,7 +409,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignOrgRole,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, userAdmin, orgMemberMe, memberMe, templateAdmin},
false: {setOtherOrg, setOrgNotMe, userAdmin, memberMe, templateAdmin},
},
},
{
@@ -358,7 +418,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignRole,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, userAdmin},
false: {setOtherOrg, setOrgNotMe, orgMemberMe, memberMe, templateAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin},
},
},
{
@@ -366,7 +426,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceAssignRole,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {setOtherOrg, setOrgNotMe, owner, orgMemberMe, memberMe, templateAdmin, userAdmin},
true: {setOtherOrg, setOrgNotMe, owner, memberMe, templateAdmin, userAdmin},
false: {},
},
},
@@ -376,7 +436,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignOrgRole.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, userAdmin, orgUserAdmin},
false: {setOtherOrg, orgMemberMe, memberMe, templateAdmin, orgTemplateAdmin, orgAuditor},
false: {setOtherOrg, memberMe, templateAdmin, orgTemplateAdmin, orgAuditor},
},
},
{
@@ -385,7 +445,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAssignOrgRole.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin},
false: {setOtherOrg, orgUserAdmin, orgTemplateAdmin, orgAuditor, orgMemberMe, memberMe, templateAdmin, userAdmin},
false: {setOtherOrg, orgUserAdmin, orgTemplateAdmin, orgAuditor, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -393,8 +453,8 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceAssignOrgRole.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, setOrgNotMe, orgMemberMe, userAdmin, templateAdmin},
false: {setOtherOrg, memberMe},
true: {owner, orgAdmin, orgUserAdmin, userAdmin, templateAdmin},
false: {setOtherOrg, memberMe, orgAuditor, orgTemplateAdmin},
},
},
{
@@ -402,7 +462,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionDelete, policy.ActionUpdate},
Resource: rbac.ResourceApiKey.WithID(apiKeyID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe, memberMe},
true: {owner, memberMe},
false: {setOtherOrg, setOrgNotMe, templateAdmin, userAdmin},
},
},
@@ -413,7 +473,7 @@ func TestRolePermissions(t *testing.T) {
},
Resource: rbac.ResourceInboxNotification.WithID(uuid.New()).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe, orgAdmin},
true: {owner, orgAdmin},
false: {setOtherOrg, orgUserAdmin, orgTemplateAdmin, orgAuditor, templateAdmin, userAdmin, memberMe},
},
},
@@ -422,7 +482,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionReadPersonal, policy.ActionUpdatePersonal},
Resource: rbac.ResourceUserObject(currentUser),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgMemberMe, memberMe, userAdmin},
true: {owner, memberMe, userAdmin},
false: {setOtherOrg, setOrgNotMe, templateAdmin},
},
},
@@ -432,7 +492,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOrganizationMember.WithID(currentUser).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, userAdmin, orgUserAdmin},
false: {setOtherOrg, orgTemplateAdmin, orgAuditor, orgMemberMe, memberMe, templateAdmin},
false: {setOtherOrg, orgTemplateAdmin, orgAuditor, memberMe, templateAdmin},
},
},
{
@@ -440,7 +500,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceOrganizationMember.WithID(currentUser).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAuditor, orgAdmin, userAdmin, orgMemberMe, templateAdmin, orgUserAdmin, orgTemplateAdmin},
true: {owner, orgAuditor, orgAdmin, userAdmin, templateAdmin, orgUserAdmin, orgTemplateAdmin},
false: {memberMe, setOtherOrg},
},
},
@@ -453,7 +513,7 @@ func TestRolePermissions(t *testing.T) {
}),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgMemberMe, templateAdmin, orgUserAdmin, orgTemplateAdmin, orgAuditor},
true: {owner, orgAdmin, templateAdmin, orgUserAdmin, orgTemplateAdmin, orgAuditor},
false: {setOtherOrg, memberMe, userAdmin},
},
},
@@ -467,7 +527,7 @@ func TestRolePermissions(t *testing.T) {
}),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, userAdmin, orgUserAdmin},
false: {setOtherOrg, memberMe, orgMemberMe, templateAdmin, orgTemplateAdmin, groupMemberMe, orgAuditor},
false: {setOtherOrg, memberMe, templateAdmin, orgTemplateAdmin, orgAuditor},
},
},
{
@@ -479,8 +539,8 @@ func TestRolePermissions(t *testing.T) {
},
}),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, groupMemberMe, orgAuditor},
false: {setOtherOrg, memberMe, orgMemberMe},
true: {owner, orgAdmin, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
false: {setOtherOrg, memberMe},
},
},
{
@@ -488,7 +548,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceGroupMember.WithID(currentUser).InOrg(orgID).WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAuditor, orgAdmin, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgMemberMe, groupMemberMe},
true: {owner, orgAuditor, orgAdmin, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin},
false: {setOtherOrg, memberMe},
},
},
@@ -498,7 +558,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceGroupMember.WithID(adminID).InOrg(orgID).WithOwner(adminID.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAuditor, orgAdmin, userAdmin, templateAdmin, orgTemplateAdmin, orgUserAdmin},
false: {setOtherOrg, memberMe, orgMemberMe, groupMemberMe},
false: {setOtherOrg, memberMe},
},
},
{
@@ -506,7 +566,7 @@ func TestRolePermissions(t *testing.T) {
Actions: append(crud, policy.ActionWorkspaceStop, policy.ActionCreateAgent, policy.ActionDeleteAgent),
Resource: rbac.ResourceWorkspaceDormant.WithID(uuid.New()).InOrg(orgID).WithOwner(memberMe.Actor.ID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {orgMemberMe, orgAdmin, owner},
true: {orgAdmin, owner},
false: {setOtherOrg, userAdmin, memberMe, templateAdmin, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
@@ -516,7 +576,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspaceDormant.WithID(uuid.New()).InOrg(orgID).WithOwner(memberMe.Actor.ID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {},
false: {setOtherOrg, setOrgNotMe, memberMe, userAdmin, orgMemberMe, owner, templateAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, userAdmin, owner, templateAdmin},
},
},
{
@@ -524,7 +584,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionWorkspaceStart, policy.ActionWorkspaceStop},
Resource: rbac.ResourceWorkspace.WithID(uuid.New()).InOrg(orgID).WithOwner(memberMe.Actor.ID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgMemberMe},
true: {owner, orgAdmin},
false: {setOtherOrg, userAdmin, templateAdmin, memberMe, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
@@ -534,7 +594,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourcePrebuiltWorkspace.WithID(uuid.New()).InOrg(orgID).WithOwner(database.PrebuildsSystemUserID.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, templateAdmin, orgTemplateAdmin},
false: {setOtherOrg, userAdmin, memberMe, orgUserAdmin, orgAuditor, orgMemberMe},
false: {setOtherOrg, userAdmin, memberMe, orgUserAdmin, orgAuditor},
},
},
{
@@ -542,7 +602,7 @@ func TestRolePermissions(t *testing.T) {
Actions: crud,
Resource: rbac.ResourceTask.WithID(uuid.New()).InOrg(orgID).WithOwner(memberMe.Actor.ID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgMemberMe},
true: {owner, orgAdmin},
false: {setOtherOrg, userAdmin, templateAdmin, memberMe, orgTemplateAdmin, orgUserAdmin, orgAuditor},
},
},
@@ -553,7 +613,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceLicense,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -562,7 +622,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceDeploymentStats,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -571,7 +631,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceDeploymentConfig,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -580,7 +640,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceDebugInfo,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -589,7 +649,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceReplicas,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -598,7 +658,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceTailnetCoordinator,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -607,7 +667,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceAuditLog,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -616,7 +676,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceProvisionerDaemon.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, templateAdmin, orgAdmin, orgTemplateAdmin},
false: {setOtherOrg, orgAuditor, orgUserAdmin, memberMe, orgMemberMe, userAdmin},
false: {setOtherOrg, orgAuditor, orgUserAdmin, memberMe, userAdmin},
},
},
{
@@ -624,8 +684,8 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceProvisionerDaemon.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, templateAdmin, setOrgNotMe, orgMemberMe},
false: {setOtherOrg, memberMe, userAdmin},
true: {owner, templateAdmin, orgAdmin, orgTemplateAdmin},
false: {setOtherOrg, memberMe, userAdmin, orgAuditor, orgUserAdmin},
},
},
{
@@ -633,7 +693,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionUpdate, policy.ActionDelete},
Resource: rbac.ResourceProvisionerDaemon.WithOwner(currentUser.String()).InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, templateAdmin, orgTemplateAdmin, orgMemberMe, orgAdmin},
true: {owner, templateAdmin, orgTemplateAdmin, orgAdmin},
false: {setOtherOrg, memberMe, userAdmin, orgUserAdmin, orgAuditor},
},
},
@@ -643,7 +703,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceProvisionerJobs.InOrg(orgID),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgTemplateAdmin, orgAdmin},
false: {setOtherOrg, memberMe, orgMemberMe, templateAdmin, userAdmin, orgUserAdmin, orgAuditor},
false: {setOtherOrg, memberMe, templateAdmin, userAdmin, orgUserAdmin, orgAuditor},
},
},
{
@@ -652,7 +712,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceSystem,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -661,7 +721,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOauth2App,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -669,7 +729,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceOauth2App,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, setOrgNotMe, setOtherOrg, memberMe, orgMemberMe, templateAdmin, userAdmin},
true: {owner, setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
false: {},
},
},
@@ -679,7 +739,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOauth2AppSecret,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOrgNotMe, setOtherOrg, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -688,7 +748,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceOauth2AppCodeToken,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOrgNotMe, setOtherOrg, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -697,7 +757,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceWorkspaceProxy,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOrgNotMe, setOtherOrg, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -705,7 +765,7 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead},
Resource: rbac.ResourceWorkspaceProxy,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, setOrgNotMe, setOtherOrg, memberMe, orgMemberMe, templateAdmin, userAdmin},
true: {owner, setOrgNotMe, setOtherOrg, memberMe, templateAdmin, userAdmin},
false: {},
},
},
@@ -716,11 +776,11 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionRead, policy.ActionUpdate},
Resource: rbac.ResourceNotificationPreference.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {memberMe, orgMemberMe, owner},
true: {memberMe, owner},
false: {
userAdmin, orgUserAdmin, templateAdmin,
orgAuditor, orgTemplateAdmin,
otherOrgMember, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
orgAdmin, otherOrgAdmin,
},
},
@@ -733,9 +793,9 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe, orgMemberMe, userAdmin, orgUserAdmin, templateAdmin,
memberMe, userAdmin, orgUserAdmin, templateAdmin,
orgAuditor, orgTemplateAdmin,
otherOrgMember, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
orgAdmin, otherOrgAdmin,
},
},
@@ -747,7 +807,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe, orgMemberMe, otherOrgMember,
memberMe,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -767,8 +827,8 @@ func TestRolePermissions(t *testing.T) {
false: {
memberMe, templateAdmin, orgUserAdmin, userAdmin,
orgAdmin, orgAuditor, orgTemplateAdmin,
otherOrgMember, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
otherOrgAdmin, orgMemberMe,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
otherOrgAdmin,
},
},
},
@@ -778,8 +838,8 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionDelete},
Resource: rbac.ResourceWebpushSubscription.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, memberMe, orgMemberMe},
false: {otherOrgMember, orgAdmin, otherOrgAdmin, orgAuditor, otherOrgAuditor, templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin, userAdmin, orgUserAdmin, otherOrgUserAdmin},
true: {owner, memberMe},
false: {orgAdmin, otherOrgAdmin, orgAuditor, otherOrgAuditor, templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin, userAdmin, orgUserAdmin, otherOrgUserAdmin},
},
},
// AnyOrganization tests
@@ -791,8 +851,8 @@ func TestRolePermissions(t *testing.T) {
true: {owner, userAdmin, orgAdmin, otherOrgAdmin, orgUserAdmin, otherOrgUserAdmin},
false: {
memberMe, templateAdmin,
orgTemplateAdmin, orgMemberMe, orgAuditor,
otherOrgMember, otherOrgAuditor, otherOrgTemplateAdmin,
orgTemplateAdmin, orgAuditor,
otherOrgAuditor, otherOrgTemplateAdmin,
},
},
},
@@ -804,8 +864,8 @@ func TestRolePermissions(t *testing.T) {
true: {owner, templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin, orgAdmin, otherOrgAdmin},
false: {
userAdmin, memberMe,
orgMemberMe, orgAuditor, orgUserAdmin,
otherOrgMember, otherOrgAuditor, otherOrgUserAdmin,
orgAuditor, orgUserAdmin,
otherOrgAuditor, otherOrgUserAdmin,
},
},
},
@@ -814,11 +874,11 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate},
Resource: rbac.ResourceWorkspace.AnyOrganization().WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, otherOrgAdmin, orgMemberMe},
true: {owner, orgAdmin, otherOrgAdmin},
false: {
memberMe, userAdmin, templateAdmin,
orgAuditor, orgUserAdmin, orgTemplateAdmin,
otherOrgMember, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
},
},
},
@@ -828,7 +888,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceCryptoKey,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
{
@@ -838,10 +898,10 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, orgAdmin, orgUserAdmin, userAdmin},
false: {
orgMemberMe, otherOrgAdmin,
otherOrgAdmin,
memberMe, templateAdmin,
orgAuditor, orgTemplateAdmin,
otherOrgMember, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
},
},
},
@@ -853,10 +913,10 @@ func TestRolePermissions(t *testing.T) {
true: {owner, userAdmin},
false: {
orgAdmin, orgUserAdmin,
orgMemberMe, otherOrgAdmin,
otherOrgAdmin,
memberMe, templateAdmin,
orgAuditor, orgTemplateAdmin,
otherOrgMember, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
},
},
},
@@ -867,7 +927,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe, orgMemberMe, otherOrgMember,
memberMe,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -882,7 +942,7 @@ func TestRolePermissions(t *testing.T) {
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {
memberMe, orgMemberMe, otherOrgMember,
memberMe,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -896,7 +956,7 @@ func TestRolePermissions(t *testing.T) {
Resource: rbac.ResourceConnectionLog,
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner},
false: {setOtherOrg, setOrgNotMe, memberMe, orgMemberMe, templateAdmin, userAdmin},
false: {setOtherOrg, setOrgNotMe, memberMe, templateAdmin, userAdmin},
},
},
// Only the user themselves can access their own secrets — no one else.
@@ -905,10 +965,10 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionUpdate, policy.ActionDelete},
Resource: rbac.ResourceUserSecret.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {memberMe, orgMemberMe},
true: {memberMe},
false: {
owner, orgAdmin,
otherOrgAdmin, otherOrgMember, orgAuditor, orgUserAdmin, orgTemplateAdmin,
otherOrgAdmin, orgAuditor, orgUserAdmin, orgTemplateAdmin,
templateAdmin, userAdmin, otherOrgAuditor, otherOrgUserAdmin, otherOrgTemplateAdmin,
},
},
@@ -921,7 +981,7 @@ func TestRolePermissions(t *testing.T) {
true: {},
false: {
owner,
memberMe, orgMemberMe, otherOrgMember,
memberMe,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -934,9 +994,8 @@ func TestRolePermissions(t *testing.T) {
Actions: []policy.Action{policy.ActionCreate, policy.ActionRead, policy.ActionUpdate},
Resource: rbac.ResourceAibridgeInterception.WithOwner(currentUser.String()),
AuthorizeMap: map[bool][]hasAuthSubjects{
true: {owner, memberMe, orgMemberMe},
true: {owner, memberMe},
false: {
otherOrgMember,
orgAdmin, otherOrgAdmin,
orgAuditor, otherOrgAuditor,
templateAdmin, orgTemplateAdmin, otherOrgTemplateAdmin,
@@ -1096,7 +1155,6 @@ func TestListRoles(t *testing.T) {
require.ElementsMatch(t, []string{
fmt.Sprintf("organization-admin:%s", orgID.String()),
fmt.Sprintf("organization-member:%s", orgID.String()),
fmt.Sprintf("organization-auditor:%s", orgID.String()),
fmt.Sprintf("organization-user-admin:%s", orgID.String()),
fmt.Sprintf("organization-template-admin:%s", orgID.String()),
+172 -7
@@ -7,6 +7,7 @@ import (
"github.com/google/uuid"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/util/syncmap"
@@ -83,9 +84,10 @@ func Expand(ctx context.Context, db database.Store, names []rbac.RoleIdentifier)
// the expansion. These roles are no-ops. Should we raise some kind of
// warning when this happens?
dbroles, err := db.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: lookupArgs,
ExcludeOrgRoles: false,
OrganizationID: uuid.Nil,
LookupRoles: lookupArgs,
ExcludeOrgRoles: false,
OrganizationID: uuid.Nil,
IncludeSystemRoles: true,
})
if err != nil {
return nil, xerrors.Errorf("fetch custom roles: %w", err)
@@ -105,7 +107,8 @@ func Expand(ctx context.Context, db database.Store, names []rbac.RoleIdentifier)
return roles, nil
}
func convertPermissions(dbPerms []database.CustomRolePermission) []rbac.Permission {
// ConvertDBPermissions converts database permissions to RBAC permissions.
func ConvertDBPermissions(dbPerms []database.CustomRolePermission) []rbac.Permission {
n := make([]rbac.Permission, 0, len(dbPerms))
for _, dbPerm := range dbPerms {
n = append(n, rbac.Permission{
@@ -117,14 +120,28 @@ func convertPermissions(dbPerms []database.CustomRolePermission) []rbac.Permissi
return n
}
// ConvertPermissionsToDB converts RBAC permissions to the database
// format.
func ConvertPermissionsToDB(perms []rbac.Permission) []database.CustomRolePermission {
dbPerms := make([]database.CustomRolePermission, 0, len(perms))
for _, perm := range perms {
dbPerms = append(dbPerms, database.CustomRolePermission{
Negate: perm.Negate,
ResourceType: perm.ResourceType,
Action: perm.Action,
})
}
return dbPerms
}
// ConvertDBRole should not be used by any human-facing APIs. It is used
// for authz purposes.
func ConvertDBRole(dbRole database.CustomRole) (rbac.Role, error) {
role := rbac.Role{
Identifier: dbRole.RoleIdentifier(),
DisplayName: dbRole.DisplayName,
Site: convertPermissions(dbRole.SitePermissions),
User: convertPermissions(dbRole.UserPermissions),
Site: ConvertDBPermissions(dbRole.SitePermissions),
User: ConvertDBPermissions(dbRole.UserPermissions),
}
// Org permissions only make sense if an org id is specified.
@@ -135,10 +152,158 @@ func ConvertDBRole(dbRole database.CustomRole) (rbac.Role, error) {
if dbRole.OrganizationID.UUID != uuid.Nil {
role.ByOrgID = map[string]rbac.OrgPermissions{
dbRole.OrganizationID.UUID.String(): {
Org: convertPermissions(dbRole.OrgPermissions),
Org: ConvertDBPermissions(dbRole.OrgPermissions),
Member: ConvertDBPermissions(dbRole.MemberPermissions),
},
}
}
return role, nil
}
// ReconcileSystemRoles ensures that every organization's org-member
// system role in the DB is up-to-date, with permissions reflecting
// current RBAC resources and the organization's
// workspace_sharing_disabled setting. Uses a PostgreSQL advisory lock
// (LockIDReconcileSystemRoles) to safely handle multi-instance
// deployments. Uses set-based comparison to avoid unnecessary
// database writes when permissions haven't changed.
func ReconcileSystemRoles(ctx context.Context, log slog.Logger, db database.Store) error {
return db.InTx(func(tx database.Store) error {
// Acquire advisory lock to prevent concurrent updates from
// multiple coderd instances. Other instances will block here
// until we release the lock (when this transaction commits).
err := tx.AcquireLock(ctx, database.LockIDReconcileSystemRoles)
if err != nil {
return xerrors.Errorf("acquire system roles reconciliation lock: %w", err)
}
orgs, err := tx.GetOrganizations(ctx, database.GetOrganizationsParams{})
if err != nil {
return xerrors.Errorf("fetch organizations: %w", err)
}
customRoles, err := tx.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: nil,
ExcludeOrgRoles: false,
OrganizationID: uuid.Nil,
IncludeSystemRoles: true,
})
if err != nil {
return xerrors.Errorf("fetch custom roles: %w", err)
}
// Find org-member roles and index by organization ID for quick lookup.
rolesByOrg := make(map[uuid.UUID]database.CustomRole)
for _, role := range customRoles {
if role.IsSystem && role.Name == rbac.RoleOrgMember() && role.OrganizationID.Valid {
rolesByOrg[role.OrganizationID.UUID] = role
}
}
for _, org := range orgs {
role, exists := rolesByOrg[org.ID]
if !exists {
// Something is very wrong: the role should have been created by the
// database trigger or migration. Log loudly and try creating it as
// a last-ditch effort before giving up.
log.Critical(ctx, "missing organization-member system role; trying to re-create",
slog.F("organization_id", org.ID))
if err := CreateOrgMemberRole(ctx, tx, org); err != nil {
return xerrors.Errorf("create missing organization-member role for organization %s: %w",
org.ID, err)
}
// Nothing more to do; the new role's permissions are up-to-date.
continue
}
_, _, err := ReconcileOrgMemberRole(ctx, tx, role, org.WorkspaceSharingDisabled)
if err != nil {
return xerrors.Errorf("reconcile organization-member role for organization %s: %w",
org.ID, err)
}
}
return nil
}, nil)
}
// ReconcileOrgMemberRole ensures the passed-in org-member role's
// permissions are correct (current) and stored in the DB. Uses
// set-based comparison to avoid unnecessary database writes when
// permissions haven't changed. Returns the correct role and a boolean
// indicating whether a reconciliation write was necessary.
// NOTE: Callers must acquire `database.LockIDReconcileSystemRoles` at
// the start of the transaction and hold it for the transaction's
// duration. This prevents concurrent org-member reconciliation from
// racing and producing inconsistent writes.
func ReconcileOrgMemberRole(
ctx context.Context,
tx database.Store,
in database.CustomRole,
workspaceSharingDisabled bool,
) (
database.CustomRole, bool, error,
) {
// All fields except OrgPermissions and MemberPermissions will be the same.
out := in
// Paranoia check: we don't use these in custom roles yet.
// TODO(geokat): Have these as check constraints in DB for now?
out.SitePermissions = database.CustomRolePermissions{}
out.UserPermissions = database.CustomRolePermissions{}
out.DisplayName = ""
inOrgPerms := ConvertDBPermissions(in.OrgPermissions)
inMemberPerms := ConvertDBPermissions(in.MemberPermissions)
outOrgPerms, outMemberPerms := rbac.OrgMemberPermissions(workspaceSharingDisabled)
// Compare using set-based comparison (order doesn't matter).
match := rbac.PermissionsEqual(inOrgPerms, outOrgPerms) &&
rbac.PermissionsEqual(inMemberPerms, outMemberPerms)
if !match {
out.OrgPermissions = ConvertPermissionsToDB(outOrgPerms)
out.MemberPermissions = ConvertPermissionsToDB(outMemberPerms)
_, err := tx.UpdateCustomRole(ctx, database.UpdateCustomRoleParams{
Name: out.Name,
OrganizationID: out.OrganizationID,
DisplayName: out.DisplayName,
SitePermissions: out.SitePermissions,
UserPermissions: out.UserPermissions,
OrgPermissions: out.OrgPermissions,
MemberPermissions: out.MemberPermissions,
})
if err != nil {
return out, !match, xerrors.Errorf("update organization-member custom role for organization %s: %w",
in.OrganizationID.UUID, err)
}
}
return out, !match, nil
}
// CreateOrgMemberRole creates an org-member system role for an organization.
func CreateOrgMemberRole(ctx context.Context, tx database.Store, org database.Organization) error {
orgPerms, memberPerms := rbac.OrgMemberPermissions(org.WorkspaceSharingDisabled)
_, err := tx.InsertCustomRole(ctx, database.InsertCustomRoleParams{
Name: rbac.RoleOrgMember(),
DisplayName: "",
OrganizationID: uuid.NullUUID{UUID: org.ID, Valid: true},
SitePermissions: database.CustomRolePermissions{},
OrgPermissions: ConvertPermissionsToDB(orgPerms),
UserPermissions: database.CustomRolePermissions{},
MemberPermissions: ConvertPermissionsToDB(memberPerms),
IsSystem: true,
})
if err != nil {
return xerrors.Errorf("insert org-member role: %w", err)
}
return nil
}
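The set-based comparison that `ReconcileOrgMemberRole` leans on (via `rbac.PermissionsEqual`) can be sketched as an order-insensitive multiset check over permission entries. The `Permission` type and `permissionsEqual` function below are simplified stand-ins for illustration, not the actual coder implementation:

```go
package main

import "fmt"

// Permission is a simplified stand-in for rbac.Permission.
type Permission struct {
	Negate       bool
	ResourceType string
	Action       string
}

// permissionsEqual reports whether two permission slices contain the
// same entries, ignoring order. It counts occurrences of each entry in
// a, then decrements for each entry in b; any mismatch fails fast.
func permissionsEqual(a, b []Permission) bool {
	if len(a) != len(b) {
		return false
	}
	counts := make(map[Permission]int, len(a))
	for _, p := range a {
		counts[p]++
	}
	for _, p := range b {
		counts[p]--
		if counts[p] < 0 {
			return false
		}
	}
	return true
}

func main() {
	x := []Permission{{false, "workspace", "read"}, {false, "workspace", "create"}}
	y := []Permission{{false, "workspace", "create"}, {false, "workspace", "read"}}
	fmt.Println(permissionsEqual(x, y)) // same entries, different order: true
	fmt.Println(permissionsEqual(x, x[:1]))
}
```

Comparing as a set rather than slice-by-index is what lets reconciliation skip the `UpdateCustomRole` write when the DB happens to store the same permissions in a different order.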
+132
@@ -1,11 +1,13 @@
package rolestore_test
import (
"database/sql"
"testing"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"cdr.dev/slog/v3"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
@@ -39,3 +41,133 @@ func TestExpandCustomRoleRoles(t *testing.T) {
require.NoError(t, err)
require.Len(t, roles, 1, "role found")
}
func TestReconcileOrgMemberRole(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
org := dbgen.Organization(t, db, database.Organization{})
ctx := testutil.Context(t, testutil.WaitShort)
existing, err := database.ExpectOne(db.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: []database.NameOrganizationPair{
{
Name: rbac.RoleOrgMember(),
OrganizationID: org.ID,
},
},
IncludeSystemRoles: true,
}))
require.NoError(t, err)
_, err = db.UpdateCustomRole(ctx, database.UpdateCustomRoleParams{
Name: existing.Name,
OrganizationID: uuid.NullUUID{
UUID: org.ID,
Valid: true,
},
DisplayName: "",
SitePermissions: database.CustomRolePermissions{},
UserPermissions: database.CustomRolePermissions{},
OrgPermissions: database.CustomRolePermissions{},
MemberPermissions: database.CustomRolePermissions{},
})
require.NoError(t, err)
stale := existing
stale.OrgPermissions = database.CustomRolePermissions{}
stale.MemberPermissions = database.CustomRolePermissions{}
reconciled, didUpdate, err := rolestore.ReconcileOrgMemberRole(ctx, db, stale, org.WorkspaceSharingDisabled)
require.NoError(t, err)
require.True(t, didUpdate, "expected reconciliation to update stale permissions")
got, err := database.ExpectOne(db.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: []database.NameOrganizationPair{
{
Name: rbac.RoleOrgMember(),
OrganizationID: org.ID,
},
},
IncludeSystemRoles: true,
}))
require.NoError(t, err)
wantOrg, wantMember := rbac.OrgMemberPermissions(org.WorkspaceSharingDisabled)
require.True(t, rbac.PermissionsEqual(rolestore.ConvertDBPermissions(got.OrgPermissions), wantOrg))
require.True(t, rbac.PermissionsEqual(rolestore.ConvertDBPermissions(got.MemberPermissions), wantMember))
require.True(t, rbac.PermissionsEqual(rolestore.ConvertDBPermissions(reconciled.OrgPermissions), wantOrg))
require.True(t, rbac.PermissionsEqual(rolestore.ConvertDBPermissions(reconciled.MemberPermissions), wantMember))
_, didUpdate, err = rolestore.ReconcileOrgMemberRole(ctx, db, reconciled, org.WorkspaceSharingDisabled)
require.NoError(t, err)
require.False(t, didUpdate, "expected no-op reconciliation when permissions are already current")
}
func TestReconcileSystemRoles(t *testing.T) {
t.Parallel()
var sqlDB *sql.DB
db, _, sqlDB := dbtestutil.NewDBWithSQLDB(t)
// The DB trigger will create system roles for the org.
org1 := dbgen.Organization(t, db, database.Organization{})
org2 := dbgen.Organization(t, db, database.Organization{})
ctx := testutil.Context(t, testutil.WaitShort)
_, err := sqlDB.ExecContext(ctx, "UPDATE organizations SET workspace_sharing_disabled = true WHERE id = $1", org2.ID)
require.NoError(t, err)
// Simulate a missing system role by bypassing the application's
// safety check in DeleteCustomRole (which prevents deleting
// system roles).
res, err := sqlDB.ExecContext(ctx,
"DELETE FROM custom_roles WHERE name = lower($1) AND organization_id = $2",
rbac.RoleOrgMember(),
org1.ID,
)
require.NoError(t, err)
affected, err := res.RowsAffected()
require.NoError(t, err)
require.Equal(t, int64(1), affected)
// Not using testutil.Logger() here because it would fail on the
// CRITICAL log line due to the deleted custom role.
err = rolestore.ReconcileSystemRoles(ctx, slog.Make(), db)
require.NoError(t, err)
orgs, err := db.GetOrganizations(ctx, database.GetOrganizationsParams{})
require.NoError(t, err)
orgByID := make(map[uuid.UUID]database.Organization, len(orgs))
for _, org := range orgs {
orgByID[org.ID] = org
}
assertOrgMemberRole := func(t *testing.T, orgID uuid.UUID) {
t.Helper()
org := orgByID[orgID]
got, err := database.ExpectOne(db.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: []database.NameOrganizationPair{
{
Name: rbac.RoleOrgMember(),
OrganizationID: orgID,
},
},
IncludeSystemRoles: true,
}))
require.NoError(t, err)
require.True(t, got.IsSystem)
wantOrg, wantMember := rbac.OrgMemberPermissions(org.WorkspaceSharingDisabled)
require.True(t, rbac.PermissionsEqual(rolestore.ConvertDBPermissions(got.OrgPermissions), wantOrg))
require.True(t, rbac.PermissionsEqual(rolestore.ConvertDBPermissions(got.MemberPermissions), wantMember))
}
assertOrgMemberRole(t, org1.ID)
assertOrgMemberRole(t, org2.ID)
}
+7 -5
@@ -34,8 +34,9 @@ func (api *API) AssignableSiteRoles(rw http.ResponseWriter, r *http.Request) {
dbCustomRoles, err := api.Database.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: nil,
// Only site wide custom roles to be included
ExcludeOrgRoles: true,
OrganizationID: uuid.Nil,
ExcludeOrgRoles: true,
OrganizationID: uuid.Nil,
IncludeSystemRoles: false,
})
if err != nil {
httpapi.InternalServerError(rw, err)
@@ -67,9 +68,10 @@ func (api *API) assignableOrgRoles(rw http.ResponseWriter, r *http.Request) {
roles := rbac.OrganizationRoles(organization.ID)
dbCustomRoles, err := api.Database.CustomRoles(ctx, database.CustomRolesParams{
LookupRoles: nil,
ExcludeOrgRoles: false,
OrganizationID: organization.ID,
LookupRoles: nil,
ExcludeOrgRoles: false,
OrganizationID: organization.ID,
IncludeSystemRoles: false,
})
if err != nil {
httpapi.InternalServerError(rw, err)
+2 -5
@@ -12,11 +12,11 @@ import (
"github.com/anthropics/anthropic-sdk-go"
anthropicoption "github.com/anthropics/anthropic-sdk-go/option"
"github.com/moby/moby/pkg/namesgenerator"
"golang.org/x/xerrors"
"cdr.dev/slog/v3"
"github.com/coder/aisdk-go"
"github.com/coder/coder/v2/coderd/util/namesgenerator"
strutil "github.com/coder/coder/v2/coderd/util/strings"
"github.com/coder/coder/v2/codersdk"
)
@@ -125,10 +125,7 @@ func generateFallback() TaskName {
// We have a 32 character limit for the name.
// We have a 5 character suffix `-ffff`.
// This leaves us with 27 characters for the name.
//
// `namesgenerator.GetRandomName(0)` can generate names
// up to 27 characters, but we truncate defensively.
name := strings.ReplaceAll(namesgenerator.GetRandomName(0), "_", "-")
name := namesgenerator.NameWith("-")
name = name[:min(len(name), 27)]
name = strings.TrimSuffix(name, "-")
+7 -7
@@ -480,7 +480,7 @@ func TestTemplates(t *testing.T) {
// Deprecate bar template
deprecationMessage := "Some deprecated message"
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin, user.OrganizationID)), database.UpdateTemplateAccessControlByIDParams{
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin)), database.UpdateTemplateAccessControlByIDParams{
ID: bar.ID,
RequireActiveVersion: false,
Deprecated: deprecationMessage,
@@ -522,13 +522,13 @@ func TestTemplates(t *testing.T) {
// Deprecate foo and bar templates
deprecationMessage := "Some deprecated message"
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin, user.OrganizationID)), database.UpdateTemplateAccessControlByIDParams{
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin)), database.UpdateTemplateAccessControlByIDParams{
ID: foo.ID,
RequireActiveVersion: false,
Deprecated: deprecationMessage,
})
require.NoError(t, err)
err = db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin, user.OrganizationID)), database.UpdateTemplateAccessControlByIDParams{
err = db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin)), database.UpdateTemplateAccessControlByIDParams{
ID: bar.ID,
RequireActiveVersion: false,
Deprecated: deprecationMessage,
@@ -637,7 +637,7 @@ func TestTemplates(t *testing.T) {
// Deprecate bar template
deprecationMessage := "Some deprecated message"
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin, user.OrganizationID)), database.UpdateTemplateAccessControlByIDParams{
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin)), database.UpdateTemplateAccessControlByIDParams{
ID: bar.ID,
RequireActiveVersion: false,
Deprecated: deprecationMessage,
@@ -650,7 +650,7 @@ func TestTemplates(t *testing.T) {
require.Equal(t, deprecationMessage, updatedBar.DeprecationMessage)
// Re-enable bar template
err = db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin, user.OrganizationID)), database.UpdateTemplateAccessControlByIDParams{
err = db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin)), database.UpdateTemplateAccessControlByIDParams{
ID: bar.ID,
RequireActiveVersion: false,
Deprecated: "",
@@ -793,7 +793,7 @@ func TestTemplatesByOrganization(t *testing.T) {
// Deprecate bar template
deprecationMessage := "Some deprecated message"
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin, user.OrganizationID)), database.UpdateTemplateAccessControlByIDParams{
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin)), database.UpdateTemplateAccessControlByIDParams{
ID: bar.ID,
RequireActiveVersion: false,
Deprecated: deprecationMessage,
@@ -1004,7 +1004,7 @@ func TestPatchTemplateMeta(t *testing.T) {
ctx := testutil.Context(t, testutil.WaitLong)
// nolint:gocritic // Setting up unit test data
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin, user.OrganizationID)), database.UpdateTemplateAccessControlByIDParams{
err := db.UpdateTemplateAccessControlByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(tplAdmin)), database.UpdateTemplateAccessControlByIDParams{
ID: template.ID,
RequireActiveVersion: false,
Deprecated: "Some deprecated message",
+2 -2
@@ -16,7 +16,6 @@ import (
"github.com/go-chi/chi/v5"
"github.com/google/uuid"
"github.com/moby/moby/pkg/namesgenerator"
"github.com/sqlc-dev/pqtype"
"github.com/zclconf/go-cty/cty"
"golang.org/x/xerrors"
@@ -37,6 +36,7 @@ import (
"github.com/coder/coder/v2/coderd/rbac"
"github.com/coder/coder/v2/coderd/rbac/policy"
"github.com/coder/coder/v2/coderd/tracing"
"github.com/coder/coder/v2/coderd/util/namesgenerator"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/examples"
@@ -1699,7 +1699,7 @@ func (api *API) postTemplateVersionsByOrganization(rw http.ResponseWriter, r *ht
}
if req.Name == "" {
req.Name = namesgenerator.GetRandomName(1)
req.Name = namesgenerator.NameDigitWith("_")
}
err = tx.InsertTemplateVersion(ctx, database.InsertTemplateVersionParams{
+2 -2
@@ -182,7 +182,7 @@ func TestPostTemplateVersionsByOrganization(t *testing.T) {
admin, err := client.User(ctx, user.UserID.String())
require.NoError(t, err)
tvDB, err := db.GetTemplateVersionByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(admin, user.OrganizationID)), version.ID)
tvDB, err := db.GetTemplateVersionByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(admin)), version.ID)
require.NoError(t, err)
require.False(t, tvDB.SourceExampleID.Valid)
})
@@ -232,7 +232,7 @@ func TestPostTemplateVersionsByOrganization(t *testing.T) {
admin, err := client.User(ctx, user.UserID.String())
require.NoError(t, err)
tvDB, err := db.GetTemplateVersionByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(admin, user.OrganizationID)), tv.ID)
tvDB, err := db.GetTemplateVersionByID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(admin)), tv.ID)
require.NoError(t, err)
require.Equal(t, ls[0].ID, tvDB.SourceExampleID.String)
+2 -2
@@ -19,7 +19,6 @@ import (
"github.com/go-jose/go-jose/v4/jwt"
"github.com/google/go-github/v43/github"
"github.com/google/uuid"
"github.com/moby/moby/pkg/namesgenerator"
"golang.org/x/oauth2"
"golang.org/x/xerrors"
@@ -41,6 +40,7 @@ import (
"github.com/coder/coder/v2/coderd/render"
"github.com/coder/coder/v2/coderd/telemetry"
"github.com/coder/coder/v2/coderd/userpassword"
"github.com/coder/coder/v2/coderd/util/namesgenerator"
"github.com/coder/coder/v2/coderd/util/ptr"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/cryptorand"
@@ -1723,7 +1723,7 @@ func (api *API) oauthLogin(r *http.Request, params *oauthLoginParams) ([]*http.C
validUsername bool
)
for i := 0; i < 10; i++ {
alternate := fmt.Sprintf("%s-%s", original, namesgenerator.GetRandomName(1))
alternate := fmt.Sprintf("%s-%s", original, namesgenerator.NameDigitWith("_"))
params.Username = codersdk.UsernameFrom(alternate)
+3 -2
@@ -873,7 +873,8 @@ func TestUserOAuth2Github(t *testing.T) {
},
},
})
first := coderdtest.CreateFirstUser(t, owner)
coderdtest.CreateFirstUser(t, owner)
ctx := testutil.Context(t, testutil.WaitLong)
ownerUser, err := owner.User(context.Background(), "me")
@@ -890,7 +891,7 @@ func TestUserOAuth2Github(t *testing.T) {
err = owner.DeleteUser(ctx, deleted.ID)
require.NoError(t, err)
// Check no user links for the user
links, err := db.GetUserLinksByUserID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(ownerUser, first.OrganizationID)), deleted.ID)
links, err := db.GetUserLinksByUserID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(ownerUser)), deleted.ID)
require.NoError(t, err)
require.Empty(t, links)
@@ -0,0 +1,82 @@
// Package namesgenerator generates random names.
//
// This package provides functions for generating random names in the format
// "adjective_surname" with various options for delimiters and uniqueness.
//
// For identifiers that must be unique within a process, use UniqueName or
// UniqueNameWith. For display purposes where uniqueness is not required,
// use NameWith.
package namesgenerator
import (
"math/rand/v2"
"strconv"
"strings"
"sync/atomic"
"github.com/brianvoe/gofakeit/v7"
)
// maxNameLen is the maximum length for names. Many places in Coder have a 32
// character limit for names (e.g. usernames, workspace names).
const maxNameLen = 32
// counter provides unique suffixes for UniqueName functions.
var counter atomic.Int64
// NameWith returns a random name with a custom delimiter.
// Names are not guaranteed to be unique.
func NameWith(delim string) string {
const seed = 0 // gofakeit will use a random crypto seed.
faker := gofakeit.New(seed)
adjective := strings.ToLower(faker.AdjectiveDescriptive())
last := strings.ToLower(faker.LastName())
return adjective + delim + last
}
// NameDigitWith returns a random name with a single random digit suffix (1-9),
// in the format "[adjective][delim][surname][digit]" e.g. "happy_smith9".
// Provides some collision resistance while keeping names short and clean.
// Not guaranteed to be unique.
func NameDigitWith(delim string) string {
const (
minDigit = 1
maxDigit = 9
)
//nolint:gosec // The random digit doesn't need to be cryptographically secure.
return NameWith(delim) + strconv.Itoa(minDigit+rand.IntN(maxDigit-minDigit+1))
}
// UniqueName returns a random name with a monotonically increasing suffix,
// guaranteeing uniqueness within the process. The name is truncated to 32
// characters if necessary, preserving the numeric suffix.
func UniqueName() string {
return UniqueNameWith("_")
}
// UniqueNameWith returns a unique name with a custom delimiter.
// See UniqueName for details on uniqueness guarantees.
func UniqueNameWith(delim string) string {
name := NameWith(delim) + strconv.FormatInt(counter.Add(1), 10)
return truncate(name, maxNameLen)
}
// truncate truncates a name to maxLen characters. It assumes the name ends with
// a numeric suffix and preserves it, truncating the base name portion instead.
func truncate(name string, maxLen int) string {
if len(name) <= maxLen {
return name
}
// Find where the numeric suffix starts.
suffixStart := len(name)
for suffixStart > 0 && name[suffixStart-1] >= '0' && name[suffixStart-1] <= '9' {
suffixStart--
}
base := name[:suffixStart]
suffix := name[suffixStart:]
truncateAt := maxLen - len(suffix)
if truncateAt <= 0 {
return suffix[len(suffix)-maxLen:] // Fallback: the numeric suffix alone fills maxLen; shouldn't happen in practice.
}
return base[:truncateAt] + suffix
}
@@ -0,0 +1,103 @@
package namesgenerator
import (
"strings"
"testing"
"unicode"
"github.com/stretchr/testify/assert"
)
func TestTruncate(t *testing.T) {
t.Parallel()
tests := []struct {
name string
input string
maxLen int
want string
}{
{
name: "no truncation needed",
input: "foo1",
maxLen: 10,
want: "foo1",
},
{
name: "exact fit",
input: "foo1",
maxLen: 4,
want: "foo1",
},
{
name: "truncate base",
input: "foobar42",
maxLen: 5,
want: "foo42",
},
{
name: "truncate more",
input: "foobar3",
maxLen: 3,
want: "fo3",
},
{
name: "long suffix",
input: "foo123456",
maxLen: 8,
want: "fo123456",
},
{
name: "realistic name",
input: "condescending_proskuriakova999999",
maxLen: 32,
want: "condescending_proskuriakov999999",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
got := truncate(tt.input, tt.maxLen)
assert.Equal(t, tt.want, got)
assert.LessOrEqual(t, len(got), tt.maxLen)
})
}
}
func TestUniqueNameLength(t *testing.T) {
t.Parallel()
// Generate many names to exercise the truncation logic.
const iter = 10000
for range iter {
name := UniqueName()
assert.LessOrEqual(t, len(name), maxNameLen)
assert.Contains(t, name, "_")
assert.Equal(t, name, strings.ToLower(name))
verifyNoWhitespace(t, name)
}
}
func TestUniqueNameWithLength(t *testing.T) {
t.Parallel()
// Generate many names with hyphen delimiter.
const iter = 10000
for range iter {
name := UniqueNameWith("-")
assert.LessOrEqual(t, len(name), maxNameLen)
assert.Contains(t, name, "-")
assert.Equal(t, name, strings.ToLower(name))
verifyNoWhitespace(t, name)
}
}
func verifyNoWhitespace(t *testing.T, s string) {
t.Helper()
for _, r := range s {
if unicode.IsSpace(r) {
t.Fatalf("found whitespace in string %q: %v", s, r)
}
}
}
+4 -4
@@ -31,7 +31,7 @@ func TestPostWorkspaceAgentPortShare(t *testing.T) {
agents[0].Directory = tmpDir
return agents
}).Do()
agents, err := db.GetWorkspaceAgentsInLatestBuildByWorkspaceID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(user, owner.OrganizationID)), r.Workspace.ID)
agents, err := db.GetWorkspaceAgentsInLatestBuildByWorkspaceID(dbauthz.As(ctx, coderdtest.AuthzUserSubjectWithDB(ctx, t, db, user)), r.Workspace.ID)
require.NoError(t, err)
// owner level should fail
@@ -148,7 +148,7 @@ func TestGetWorkspaceAgentPortShares(t *testing.T) {
agents[0].Directory = tmpDir
return agents
}).Do()
agents, err := db.GetWorkspaceAgentsInLatestBuildByWorkspaceID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(user, owner.OrganizationID)), r.Workspace.ID)
agents, err := db.GetWorkspaceAgentsInLatestBuildByWorkspaceID(dbauthz.As(ctx, coderdtest.AuthzUserSubjectWithDB(ctx, t, db, user)), r.Workspace.ID)
require.NoError(t, err)
_, err = client.UpsertWorkspaceAgentPortShare(ctx, r.Workspace.ID, codersdk.UpsertWorkspaceAgentPortShareRequest{
@@ -184,7 +184,7 @@ func TestDeleteWorkspaceAgentPortShare(t *testing.T) {
agents[0].Directory = tmpDir
return agents
}).Do()
agents, err := db.GetWorkspaceAgentsInLatestBuildByWorkspaceID(dbauthz.As(ctx, coderdtest.AuthzUserSubject(user, owner.OrganizationID)), r.Workspace.ID)
agents, err := db.GetWorkspaceAgentsInLatestBuildByWorkspaceID(dbauthz.As(ctx, coderdtest.AuthzUserSubjectWithDB(ctx, t, db, user)), r.Workspace.ID)
require.NoError(t, err)
// create
@@ -211,7 +211,7 @@ func TestDeleteWorkspaceAgentPortShare(t *testing.T) {
})
require.Error(t, err)
_, err = db.GetWorkspaceAgentPortShare(dbauthz.As(ctx, coderdtest.AuthzUserSubject(user, owner.OrganizationID)), database.GetWorkspaceAgentPortShareParams{
_, err = db.GetWorkspaceAgentPortShare(dbauthz.As(ctx, coderdtest.AuthzUserSubjectWithDB(ctx, t, db, user)), database.GetWorkspaceAgentPortShareParams{
WorkspaceID: r.Workspace.ID,
AgentName: agents[0].Name,
Port: 8080,
+57
@@ -882,6 +882,63 @@ func (api *API) workspaceBuildState(rw http.ResponseWriter, r *http.Request) {
_, _ = rw.Write(workspaceBuild.ProvisionerState)
}
// @Summary Update workspace build state
// @ID update-workspace-build-state
// @Security CoderSessionToken
// @Accept json
// @Tags Builds
// @Param workspacebuild path string true "Workspace build ID" format(uuid)
// @Param request body codersdk.UpdateWorkspaceBuildStateRequest true "Request body"
// @Success 204
// @Router /workspacebuilds/{workspacebuild}/state [put]
func (api *API) workspaceBuildUpdateState(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
workspaceBuild := httpmw.WorkspaceBuildParam(r)
workspace, err := api.Database.GetWorkspaceByID(ctx, workspaceBuild.WorkspaceID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to get workspace for this build.",
})
return
}
template, err := api.Database.GetTemplateByID(ctx, workspace.TemplateID)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to get template.",
Detail: err.Error(),
})
return
}
// You must have update permissions on the template to update the state.
if !api.Authorize(r, policy.ActionUpdate, template.RBACObject()) {
httpapi.ResourceNotFound(rw)
return
}
var req codersdk.UpdateWorkspaceBuildStateRequest
if !httpapi.Read(ctx, rw, r, &req) {
return
}
// Use system context since we've already verified authorization via template permissions.
// nolint:gocritic // System access required for provisioner state update.
err = api.Database.UpdateWorkspaceBuildProvisionerStateByID(dbauthz.AsSystemRestricted(ctx), database.UpdateWorkspaceBuildProvisionerStateByIDParams{
ID: workspaceBuild.ID,
ProvisionerState: req.State,
UpdatedAt: dbtime.Now(),
})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to update workspace build state.",
Detail: err.Error(),
})
return
}
rw.WriteHeader(http.StatusNoContent)
}
// @Summary Get workspace build timings by ID
// @ID get-workspace-build-timings-by-id
// @Security CoderSessionToken
+71 -4
@@ -442,6 +442,10 @@ var PostgresAuthDrivers = []string{
string(PostgresAuthAWSIAMRDS),
}
// PostgresConnMaxIdleAuto is the value for auto-computing max idle connections
// based on max open connections.
const PostgresConnMaxIdleAuto = "auto"
// DeploymentValues is the central set of configuration values for the Coder server.
type DeploymentValues struct {
Verbose serpent.Bool `json:"verbose,omitempty"`
@@ -462,6 +466,8 @@ type DeploymentValues struct {
EphemeralDeployment serpent.Bool `json:"ephemeral_deployment,omitempty" typescript:",notnull"`
PostgresURL serpent.String `json:"pg_connection_url,omitempty" typescript:",notnull"`
PostgresAuth string `json:"pg_auth,omitempty" typescript:",notnull"`
PostgresConnMaxOpen serpent.Int64 `json:"pg_conn_max_open,omitempty" typescript:",notnull"`
PostgresConnMaxIdle serpent.String `json:"pg_conn_max_idle,omitempty" typescript:",notnull"`
OAuth2 OAuth2Config `json:"oauth2,omitempty" typescript:",notnull"`
OIDC OIDCConfig `json:"oidc,omitempty" typescript:",notnull"`
Telemetry TelemetryConfig `json:"telemetry,omitempty" typescript:",notnull"`
@@ -2623,6 +2629,30 @@ func (c *DeploymentValues) Options() serpent.OptionSet {
Value: serpent.EnumOf(&c.PostgresAuth, PostgresAuthDrivers...),
YAML: "pgAuth",
},
{
Name: "Postgres Connection Max Open",
Description: "Maximum number of open connections to the database. Defaults to 10.",
Flag: "postgres-conn-max-open",
Env: "CODER_PG_CONN_MAX_OPEN",
Default: "10",
Value: serpent.Validate(&c.PostgresConnMaxOpen, func(value *serpent.Int64) error {
if value.Value() <= 0 {
return xerrors.New("must be greater than zero")
}
return nil
}),
YAML: "pgConnMaxOpen",
},
{
Name: "Postgres Connection Max Idle",
Description: "Maximum number of idle connections to the database. Set to \"auto\" (the default) to use max open / 3. " +
"Value must be greater than or equal to 0; 0 explicitly disables idle connections.",
Flag: "postgres-conn-max-idle",
Env: "CODER_PG_CONN_MAX_IDLE",
Default: PostgresConnMaxIdleAuto,
Value: &c.PostgresConnMaxIdle,
YAML: "pgConnMaxIdle",
},
{
Name: "Secure Auth Cookie",
Description: "Controls if the 'Secure' property is set on browser session cookies.",
@@ -3496,6 +3526,17 @@ Write out the current server config as YAML to stdout.`,
Group: &deploymentGroupAIBridgeProxy,
YAML: "key_file",
},
{
Name: "AI Bridge Proxy Domain Allowlist",
Description: "Comma-separated list of domains for which HTTPS traffic will be decrypted and routed through AI Bridge. Requests to other domains will be tunneled directly without decryption.",
Flag: "aibridge-proxy-domain-allowlist",
Env: "CODER_AIBRIDGE_PROXY_DOMAIN_ALLOWLIST",
Value: &c.AI.BridgeProxyConfig.DomainAllowlist,
Default: "api.anthropic.com,api.openai.com",
Hidden: true,
Group: &deploymentGroupAIBridgeProxy,
YAML: "domain_allowlist",
},
// Retention settings
{
@@ -3590,10 +3631,11 @@ type AIBridgeBedrockConfig struct {
}
type AIBridgeProxyConfig struct {
Enabled serpent.Bool `json:"enabled" typescript:",notnull"`
ListenAddr serpent.String `json:"listen_addr" typescript:",notnull"`
CertFile serpent.String `json:"cert_file" typescript:",notnull"`
KeyFile serpent.String `json:"key_file" typescript:",notnull"`
Enabled serpent.Bool `json:"enabled" typescript:",notnull"`
ListenAddr serpent.String `json:"listen_addr" typescript:",notnull"`
CertFile serpent.String `json:"cert_file" typescript:",notnull"`
KeyFile serpent.String `json:"key_file" typescript:",notnull"`
DomainAllowlist serpent.StringArray `json:"domain_allowlist" typescript:",notnull"`
}
type AIConfig struct {
@@ -4128,3 +4170,28 @@ func (c CryptoKey) CanVerify(now time.Time) bool {
beforeDelete := c.DeletesAt.IsZero() || now.Before(c.DeletesAt)
return hasSecret && beforeDelete
}
// ComputeMaxIdleConns calculates the effective maxIdleConns value. If
// configuredIdle is "auto", it returns maxOpen/3 with a minimum of 1. If
// configuredIdle exceeds maxOpen, it returns an error.
func ComputeMaxIdleConns(maxOpen int, configuredIdle string) (int, error) {
configuredIdle = strings.TrimSpace(configuredIdle)
if configuredIdle == PostgresConnMaxIdleAuto {
computed := maxOpen / 3
if computed < 1 {
return 1, nil
}
return computed, nil
}
idle, err := strconv.Atoi(configuredIdle)
if err != nil {
return 0, xerrors.Errorf("invalid max idle connections %q: must be %q or >= 0", configuredIdle, PostgresConnMaxIdleAuto)
}
if idle < 0 {
return 0, xerrors.Errorf("max idle connections must be %q or >= 0", PostgresConnMaxIdleAuto)
}
if idle > maxOpen {
return 0, xerrors.Errorf("max idle connections (%d) cannot exceed max open connections (%d)", idle, maxOpen)
}
return idle, nil
}
+117
@@ -765,3 +765,120 @@ func TestRetentionConfigParsing(t *testing.T) {
})
}
}
func TestComputeMaxIdleConns(t *testing.T) {
t.Parallel()
tests := []struct {
name string
maxOpen int
configuredIdle string
expectedIdle int
expectError bool
errorContains string
}{
{
name: "auto_default_10_open",
maxOpen: 10,
configuredIdle: "auto",
expectedIdle: 3, // 10/3 = 3
},
{
name: "auto_with_whitespace",
maxOpen: 10,
configuredIdle: " auto ",
expectedIdle: 3, // 10/3 = 3
},
{
name: "auto_30_open",
maxOpen: 30,
configuredIdle: "auto",
expectedIdle: 10, // 30/3 = 10
},
{
name: "auto_minimum_1",
maxOpen: 1,
configuredIdle: "auto",
expectedIdle: 1, // 1/3 = 0, but minimum is 1
},
{
name: "auto_minimum_2_open",
maxOpen: 2,
configuredIdle: "auto",
expectedIdle: 1, // 2/3 = 0, but minimum is 1
},
{
name: "auto_3_open",
maxOpen: 3,
configuredIdle: "auto",
expectedIdle: 1, // 3/3 = 1
},
{
name: "explicit_equal_to_max",
maxOpen: 10,
configuredIdle: "10",
expectedIdle: 10,
},
{
name: "explicit_less_than_max",
maxOpen: 10,
configuredIdle: "5",
expectedIdle: 5,
},
{
name: "explicit_with_whitespace",
maxOpen: 10,
configuredIdle: " 5 ",
expectedIdle: 5,
},
{
name: "explicit_0",
maxOpen: 10,
configuredIdle: "0",
expectedIdle: 0,
},
{
name: "error_exceeds_max",
maxOpen: 10,
configuredIdle: "15",
expectError: true,
errorContains: "cannot exceed",
},
{
name: "error_exceeds_max_by_1",
maxOpen: 10,
configuredIdle: "11",
expectError: true,
errorContains: "cannot exceed",
},
{
name: "error_invalid_string",
maxOpen: 10,
configuredIdle: "invalid",
expectError: true,
errorContains: "must be \"auto\" or >= 0",
},
{
name: "error_negative",
maxOpen: 10,
configuredIdle: "-1",
expectError: true,
errorContains: "must be \"auto\" or >= 0",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
result, err := codersdk.ComputeMaxIdleConns(tt.maxOpen, tt.configuredIdle)
if tt.expectError {
require.Error(t, err)
require.Contains(t, err.Error(), tt.errorContains)
} else {
require.NoError(t, err)
require.Equal(t, tt.expectedIdle, result)
}
})
}
}
+3 -2
@@ -5,8 +5,9 @@ import (
"regexp"
"strings"
"github.com/moby/moby/pkg/namesgenerator"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/coderd/util/namesgenerator"
)
var (
@@ -35,7 +36,7 @@ func UsernameFrom(str string) string {
if valid := NameValid(str); valid == nil {
return str
}
return strings.ReplaceAll(namesgenerator.GetRandomName(1), "_", "-")
return namesgenerator.NameDigitWith("-")
}
// NameValid returns whether the input string is a valid name.
+22
@@ -188,6 +188,28 @@ func (c *Client) WorkspaceBuildState(ctx context.Context, build uuid.UUID) ([]by
return io.ReadAll(res.Body)
}
// UpdateWorkspaceBuildStateRequest is the request body for updating the
// provisioner state of a workspace build.
type UpdateWorkspaceBuildStateRequest struct {
State []byte `json:"state"`
}
// UpdateWorkspaceBuildState updates the provisioner state of the build without
// triggering a new build. This is useful for state-only migrations.
func (c *Client) UpdateWorkspaceBuildState(ctx context.Context, build uuid.UUID, state []byte) error {
res, err := c.Request(ctx, http.MethodPut, fmt.Sprintf("/api/v2/workspacebuilds/%s/state", build), UpdateWorkspaceBuildStateRequest{
State: state,
})
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusNoContent {
return ReadBodyAsError(res)
}
return nil
}
func (c *Client) WorkspaceBuildByUsernameAndWorkspaceNameAndBuildNumber(ctx context.Context, username string, workspaceName string, buildNumber string) (WorkspaceBuild, error) {
res, err := c.Request(ctx, http.MethodGet, fmt.Sprintf("/api/v2/users/%s/workspace/%s/builds/%s", username, workspaceName, buildNumber), nil)
if err != nil {
+2 -2
@@ -19,7 +19,7 @@ We track the following resources:
| AuditOAuthConvertState<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>created_at</td><td>true</td></tr><tr><td>expires_at</td><td>true</td></tr><tr><td>from_login_type</td><td>true</td></tr><tr><td>to_login_type</td><td>true</td></tr><tr><td>user_id</td><td>true</td></tr></tbody></table> |
| Group<br><i>create, write, delete</i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>avatar_url</td><td>true</td></tr><tr><td>display_name</td><td>true</td></tr><tr><td>id</td><td>true</td></tr><tr><td>members</td><td>true</td></tr><tr><td>name</td><td>true</td></tr><tr><td>organization_id</td><td>false</td></tr><tr><td>quota_allowance</td><td>true</td></tr><tr><td>source</td><td>false</td></tr></tbody></table> |
| AuditableOrganizationMember<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>created_at</td><td>true</td></tr><tr><td>organization_id</td><td>false</td></tr><tr><td>roles</td><td>true</td></tr><tr><td>updated_at</td><td>true</td></tr><tr><td>user_id</td><td>true</td></tr><tr><td>username</td><td>true</td></tr></tbody></table> |
| CustomRole<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>created_at</td><td>false</td></tr><tr><td>display_name</td><td>true</td></tr><tr><td>id</td><td>false</td></tr><tr><td>name</td><td>true</td></tr><tr><td>org_permissions</td><td>true</td></tr><tr><td>organization_id</td><td>false</td></tr><tr><td>site_permissions</td><td>true</td></tr><tr><td>updated_at</td><td>false</td></tr><tr><td>user_permissions</td><td>true</td></tr></tbody></table> |
| CustomRole<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>created_at</td><td>false</td></tr><tr><td>display_name</td><td>true</td></tr><tr><td>id</td><td>false</td></tr><tr><td>is_system</td><td>false</td></tr><tr><td>member_permissions</td><td>true</td></tr><tr><td>name</td><td>true</td></tr><tr><td>org_permissions</td><td>true</td></tr><tr><td>organization_id</td><td>false</td></tr><tr><td>site_permissions</td><td>true</td></tr><tr><td>updated_at</td><td>false</td></tr><tr><td>user_permissions</td><td>true</td></tr></tbody></table> |
| GitSSHKey<br><i>create</i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>created_at</td><td>false</td></tr><tr><td>private_key</td><td>true</td></tr><tr><td>public_key</td><td>true</td></tr><tr><td>updated_at</td><td>false</td></tr><tr><td>user_id</td><td>true</td></tr></tbody></table> |
| GroupSyncSettings<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>auto_create_missing_groups</td><td>true</td></tr><tr><td>field</td><td>true</td></tr><tr><td>legacy_group_name_mapping</td><td>false</td></tr><tr><td>mapping</td><td>true</td></tr><tr><td>regex_filter</td><td>true</td></tr></tbody></table> |
| HealthSettings<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>dismissed_healthchecks</td><td>true</td></tr><tr><td>id</td><td>false</td></tr></tbody></table> |
@@ -28,7 +28,7 @@ We track the following resources:
| NotificationsSettings<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>id</td><td>false</td></tr><tr><td>notifier_paused</td><td>true</td></tr></tbody></table> |
| OAuth2ProviderApp<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>callback_url</td><td>true</td></tr><tr><td>client_id_issued_at</td><td>false</td></tr><tr><td>client_secret_expires_at</td><td>true</td></tr><tr><td>client_type</td><td>true</td></tr><tr><td>client_uri</td><td>true</td></tr><tr><td>contacts</td><td>true</td></tr><tr><td>created_at</td><td>false</td></tr><tr><td>dynamically_registered</td><td>true</td></tr><tr><td>grant_types</td><td>true</td></tr><tr><td>icon</td><td>true</td></tr><tr><td>id</td><td>false</td></tr><tr><td>jwks</td><td>true</td></tr><tr><td>jwks_uri</td><td>true</td></tr><tr><td>logo_uri</td><td>true</td></tr><tr><td>name</td><td>true</td></tr><tr><td>policy_uri</td><td>true</td></tr><tr><td>redirect_uris</td><td>true</td></tr><tr><td>registration_access_token</td><td>true</td></tr><tr><td>registration_client_uri</td><td>true</td></tr><tr><td>response_types</td><td>true</td></tr><tr><td>scope</td><td>true</td></tr><tr><td>software_id</td><td>true</td></tr><tr><td>software_version</td><td>true</td></tr><tr><td>token_endpoint_auth_method</td><td>true</td></tr><tr><td>tos_uri</td><td>true</td></tr><tr><td>updated_at</td><td>false</td></tr></tbody></table> |
| OAuth2ProviderAppSecret<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>app_id</td><td>false</td></tr><tr><td>created_at</td><td>false</td></tr><tr><td>display_secret</td><td>false</td></tr><tr><td>hashed_secret</td><td>false</td></tr><tr><td>id</td><td>false</td></tr><tr><td>last_used_at</td><td>false</td></tr><tr><td>secret_prefix</td><td>false</td></tr></tbody></table> |
| Organization<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>created_at</td><td>false</td></tr><tr><td>deleted</td><td>true</td></tr><tr><td>description</td><td>true</td></tr><tr><td>display_name</td><td>true</td></tr><tr><td>icon</td><td>true</td></tr><tr><td>id</td><td>false</td></tr><tr><td>is_default</td><td>true</td></tr><tr><td>name</td><td>true</td></tr><tr><td>updated_at</td><td>true</td></tr></tbody></table> |
| Organization<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>created_at</td><td>false</td></tr><tr><td>deleted</td><td>true</td></tr><tr><td>description</td><td>true</td></tr><tr><td>display_name</td><td>true</td></tr><tr><td>icon</td><td>true</td></tr><tr><td>id</td><td>false</td></tr><tr><td>is_default</td><td>true</td></tr><tr><td>name</td><td>true</td></tr><tr><td>updated_at</td><td>true</td></tr><tr><td>workspace_sharing_disabled</td><td>true</td></tr></tbody></table> |
| OrganizationSyncSettings<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>assign_default</td><td>true</td></tr><tr><td>field</td><td>true</td></tr><tr><td>mapping</td><td>true</td></tr></tbody></table> |
| PrebuildsSettings<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>id</td><td>false</td></tr><tr><td>reconciliation_paused</td><td>true</td></tr></tbody></table> |
| RoleSyncSettings<br><i></i> | <table><thead><tr><th>Field</th><th>Tracked</th></tr></thead><tbody> | <tr><td>field</td><td>true</td></tr><tr><td>mapping</td><td>true</td></tr></tbody></table> |
@@ -5,7 +5,7 @@ enriched input types, and user identity awareness.
This allows template authors to create interactive workspace creation forms with more environment customization,
and that means fewer templates to maintain.
![Dynamic Parameters in Action](https://i.imgur.com/uR8mpRJ.gif)
![Dynamic Parameters in Action](../../../images/admin/templates/extend-templates/dyn-params/dynamic-parameters-in-action.gif)
All parameters are parsed from Terraform, so your workspace creation forms live in the same location as your provisioning code.
You can use all the native Terraform functions and conditional logic to build a self-service tooling catalog for every template.
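As a rough sketch of that conditional logic (attribute names follow the coder Terraform provider; the "gpu" group and the size values are hypothetical), a parameter's option list can be driven by the workspace owner's group membership:

```terraform
data "coder_workspace_owner" "me" {}

locals {
  # Offer larger sizes only to members of the (hypothetical) "gpu" group.
  cpu_sizes = contains(data.coder_workspace_owner.me.groups, "gpu") ? ["2", "4", "8"] : ["2", "4"]
}

data "coder_parameter" "cpu" {
  name         = "cpu"
  display_name = "CPU cores"
  type         = "string"
  default      = "2"

  # Options are plain Terraform, so a dynamic block works like anywhere else.
  dynamic "option" {
    for_each = local.cpu_sizes
    content {
      name  = "${option.value} cores"
      value = option.value
    }
  }
}
```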
@@ -0,0 +1,215 @@
# Workspace Startup Coordination Examples
## Script Example
This example shows a complete, production-ready script that starts Claude Code
only after a repository has been cloned. It includes error handling, graceful
degradation, and cleanup on exit:
```bash
#!/bin/bash
set -euo pipefail

UNIT_NAME="claude-code"
DEPENDENCIES="git-clone"
REPO_DIR="/workspace/repo"

# Track if sync started successfully
SYNC_STARTED=0

# Declare dependencies
if [ -n "$DEPENDENCIES" ]; then
  if command -v coder > /dev/null 2>&1; then
    IFS=',' read -ra DEPS <<< "$DEPENDENCIES"
    for dep in "${DEPS[@]}"; do
      dep=$(echo "$dep" | xargs)
      if [ -n "$dep" ]; then
        echo "Waiting for dependency: $dep"
        coder exp sync want "$UNIT_NAME" "$dep" > /dev/null 2>&1 || \
          echo "Warning: Failed to register dependency $dep, continuing..."
      fi
    done
  else
    echo "Coder CLI not found, running without sync coordination"
  fi
fi

# Start sync and track success
if [ -n "$UNIT_NAME" ]; then
  if command -v coder > /dev/null 2>&1; then
    if coder exp sync start "$UNIT_NAME" > /dev/null 2>&1; then
      SYNC_STARTED=1
      echo "Started sync: $UNIT_NAME"
    else
      echo "Sync start failed or not available, continuing without sync..."
    fi
  fi
fi

# Ensure completion on exit (even if script fails)
cleanup_sync() {
  if [ "$SYNC_STARTED" -eq 1 ] && [ -n "$UNIT_NAME" ]; then
    echo "Completing sync: $UNIT_NAME"
    coder exp sync complete "$UNIT_NAME" > /dev/null 2>&1 || \
      echo "Warning: Sync complete failed, but continuing..."
  fi
}
trap cleanup_sync EXIT

# Now do the actual work
echo "Repository cloned, starting Claude Code"
cd "$REPO_DIR"
claude
```
This script demonstrates several [best practices](./usage.md#best-practices):
- Checking for Coder CLI availability before using sync commands
- Tracking whether `coder exp sync` started successfully
- Using `trap` to ensure completion even if the script exits early
- Graceful degradation when `coder exp sync` isn't available
- Redirecting `coder exp sync` output to reduce noise in logs
## Template Migration Example
Below is a simple example Docker template that clones [Miguel Grinberg's example Flask repo](https://github.com/miguelgrinberg/microblog/) using the [`git-clone` module](https://registry.coder.com/modules/coder/git-clone) and installs the required dependencies for the project:
- Python development headers (required for building some Python packages)
- Python dependencies from the project's `requirements.txt`
We've omitted some details (such as persistent storage) for brevity, but these are easily added.
### Before
```terraform
data "coder_provisioner" "me" {}
data "coder_workspace" "me" {}
data "coder_workspace_owner" "me" {}

resource "docker_container" "workspace" {
  count      = data.coder_workspace.me.start_count
  image      = "codercom/enterprise-base:ubuntu"
  name       = "coder-${data.coder_workspace_owner.me.name}-${lower(data.coder_workspace.me.name)}"
  entrypoint = ["sh", "-c", coder_agent.main.init_script]
  env = [
    "CODER_AGENT_TOKEN=${coder_agent.main.token}",
  ]
}

resource "coder_agent" "main" {
  arch = data.coder_provisioner.me.arch
  os   = "linux"
}

module "git-clone" {
  count    = data.coder_workspace.me.start_count
  source   = "registry.coder.com/coder/git-clone/coder"
  version  = "1.2.3"
  agent_id = coder_agent.main.id
  url      = "https://github.com/miguelgrinberg/microblog"
}

resource "coder_script" "setup" {
  count        = data.coder_workspace.me.start_count
  agent_id     = coder_agent.main.id
  display_name = "Installing Dependencies"
  run_on_start = true
  script       = <<EOT
sudo apt-get update
sudo apt-get install --yes python-dev-is-python3
cd ${module.git-clone[count.index].repo_dir}
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
EOT
}
```
The above template has two issues:
1. There is a race between cloning the repository and the `pip install` commands, which can lead to failed workspace startups in some cases.
2. The `apt` commands do not depend on the `git clone` command, so the two could run in parallel for a faster startup.
Based on the above, we can improve both the startup time and reliability of the template by splitting the monolithic startup script into multiple independent scripts:
- Install `apt` dependencies
- Install `pip` dependencies (depends on the `git-clone` module and the above step)
### After
Here is the updated version of the template:
```terraform
data "coder_provisioner" "me" {}
data "coder_workspace" "me" {}
data "coder_workspace_owner" "me" {}

resource "docker_container" "workspace" {
  count      = data.coder_workspace.me.start_count
  image      = "codercom/enterprise-base:ubuntu"
  name       = "coder-${data.coder_workspace_owner.me.name}-${lower(data.coder_workspace.me.name)}"
  entrypoint = ["sh", "-c", coder_agent.main.init_script]
  env = [
    "CODER_AGENT_TOKEN=${coder_agent.main.token}",
    "CODER_AGENT_SOCKET_SERVER_ENABLED=true"
  ]
}

resource "coder_agent" "main" {
  arch = data.coder_provisioner.me.arch
  os   = "linux"
}

module "git-clone" {
  count    = data.coder_workspace.me.start_count
  source   = "registry.coder.com/coder/git-clone/coder"
  version  = "1.2.3"
  agent_id = coder_agent.main.id
  url      = "https://github.com/miguelgrinberg/microblog/"
  post_clone_script = <<-EOT
    coder exp sync start git-clone && coder exp sync complete git-clone
  EOT
}

resource "coder_script" "apt-install" {
  count        = data.coder_workspace.me.start_count
  agent_id     = coder_agent.main.id
  display_name = "Installing APT Dependencies"
  run_on_start = true
  script       = <<EOT
trap 'coder exp sync complete apt-install' EXIT
coder exp sync start apt-install
sudo apt-get update
sudo apt-get install --yes python-dev-is-python3
EOT
}

resource "coder_script" "pip-install" {
  count        = data.coder_workspace.me.start_count
  agent_id     = coder_agent.main.id
  display_name = "Installing Python Dependencies"
  run_on_start = true
  script       = <<EOT
trap 'coder exp sync complete pip-install' EXIT
coder exp sync want pip-install git-clone apt-install
coder exp sync start pip-install
cd ${module.git-clone[count.index].repo_dir}
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
EOT
}
```
A short summary of the changes:
- We've added `CODER_AGENT_SOCKET_SERVER_ENABLED=true` to the environment variables of the Docker container in which the Coder agent runs.
- We've broken the monolithic "setup" script into two separate scripts: one for the `apt` commands, and one for the `pip` commands.
- In each script, we've added a `coder exp sync start $SCRIPT_NAME` command to mark the startup script as started.
- We've also added an exit trap to ensure that we mark the startup scripts as completed. Without this, the `coder exp sync wait` command would eventually time out.
- We have used the `post_clone_script` feature of the `git-clone` module to allow waiting on the Git repository clone.
- In the `pip-install` script, we have declared a dependency on both `git-clone` and `apt-install`.
With these changes, startup time is reduced significantly, and the race between cloning the repository and installing dependencies is eliminated.
@@ -0,0 +1,50 @@
# Workspace Startup Coordination
> [!NOTE]
> This feature is experimental and may change without notice in future releases.
When workspaces start, scripts often need to run in a specific order.
For example, an IDE or coding agent might need the repository cloned
before it can start. Without explicit coordination, these scripts can
race against each other, leading to startup failures and inconsistent
workspace states.
Coder's workspace startup coordination feature lets you declare
dependencies between startup scripts and ensure they run in the correct order.
This eliminates race conditions and makes workspace startup predictable and
reliable.
## Why use this?
Simply placing all of your workspace initialization logic in a single script works, but leads to slow workspace startup times.
Breaking this out into multiple independent `coder_script` resources improves startup times by allowing the scripts to run in parallel.
However, this can lead to intermittent failures between dependent scripts due to timing issues.
Up until now, template authors have had to rely on manual coordination methods (for example, touching a file upon completion).
The goal of startup script coordination is to provide a single reliable source of truth for coordination between workspace startup scripts.
## Quick Start
To start using workspace startup coordination, follow these steps:
1. Set the environment variable `CODER_AGENT_SOCKET_SERVER_ENABLED=true` in your template to enable the agent socket server. The environment variable *must* be readable by the agent process. For example, in a template using the `kreuzwerker/docker` provider:
```terraform
resource "docker_container" "workspace" {
  image = "codercom/enterprise-base:ubuntu"
  env = [
    "CODER_AGENT_TOKEN=${coder_agent.main.token}",
    "CODER_AGENT_SOCKET_SERVER_ENABLED=true",
  ]
}
```
1. Add calls to `coder exp sync (start|complete)` in your startup scripts where required:
```bash
trap 'coder exp sync complete my-script' EXIT
coder exp sync want my-script my-other-script
coder exp sync start my-script
# Existing startup logic
```
For more information, refer to the [usage documentation](./usage.md), [troubleshooting documentation](./troubleshooting.md), or view our [examples](./example.md).
@@ -0,0 +1,98 @@
# Workspace Startup Coordination Troubleshooting
> [!NOTE]
> This feature is experimental and may change without notice in future releases.
## Test Sync Availability
From a workspace terminal, test if sync is working using `coder exp sync ping`:
```bash
coder exp sync ping
```
* If sync is working, expect the output to be `Success`.
* Otherwise, you will see an error message similar to the below:
```bash
error: connect to agent socket: connect to socket: dial unix /tmp/coder-agent.sock: connect: permission denied
```
## Check Unit Status
You can check the status of a specific unit using `coder exp sync status`:
```bash
coder exp sync status git-clone
```
If the unit exists, you will see output similar to the below:
```bash
# coder exp sync status git-clone
Unit: git-clone
Status: completed
Ready: true
```
If the unit is not known to the agent, you will see output similar to the below:
```bash
# coder exp sync status doesnotexist
Unit: doesnotexist
Status: not registered
Ready: true
Dependencies:
No dependencies found
```
## Common Issues
### Socket not enabled
If the Coder Agent Socket Server is not enabled, you will see an error message similar to the below when running `coder exp sync ping`:
```bash
error: connect to agent socket: connect to socket: dial unix /tmp/coder-agent.sock: connect: no such file or directory
```
Verify `CODER_AGENT_SOCKET_SERVER_ENABLED=true` is set in the Coder agent's environment:
```bash
tr '\0' '\n' < /proc/$(pidof -s coder)/environ | grep CODER_AGENT_SOCKET_SERVER_ENABLED
```
If the output of the above command is empty, review your template and ensure that the environment variable is set such that it is readable by the Coder agent process. Setting it on the `coder_agent` resource directly is **not** sufficient.
## Workspace startup script hangs
If the workspace startup scripts appear to 'hang', one or more of your startup scripts may be waiting for a dependency that never completes.
* Inside the workspace, review `/tmp/coder-script-*.log` for more details on your script's execution.
> **Tip:** add `set -x` to the top of your script to enable trace output, then update and restart the workspace to capture it.
* Review your template and verify that `coder exp sync complete <unit>` is called when each script finishes, e.g. via an exit trap.
* View the unit status using `coder exp sync status <unit>`.
## Workspace startup scripts fail
If the workspace startup scripts fail:
* Review `/tmp/coder-script-*.log` inside the workspace for script errors.
* Verify the Coder CLI is available in `$PATH` inside the workspace:
```bash
command -v coder
```
## Cycle detected
If you see an error similar to the below in your startup script logs, you have defined a cyclic dependency:
```bash
error: declare dependency failed: cannot add dependency: adding edge for unit "bar": failed to add dependency
adding edge (bar -> foo): cycle detected
```
To fix this, review your dependency declarations and redesign them to remove the cycle. It may help to sketch out the dependency graph on paper.
@@ -0,0 +1,283 @@
# Workspace Startup Coordination Usage
> [!NOTE]
> This feature is experimental and may change without notice in future releases.
Startup coordination is built around the concept of **units**. You declare units in your Coder workspace template using the `coder exp sync` command in `coder_script` resources. When the Coder agent starts, it keeps an in-memory directed acyclic graph (DAG) of all units of which it is aware. When you need to synchronize with another unit, you can use `coder exp sync start $UNIT_NAME` to block until all dependencies of that unit have been marked complete.
## What is a unit?
A **unit** is a named phase of work, typically corresponding to a script or initialization
task.
- Units **may** declare dependencies on other units, creating an explicit ordering for workspace initialization.
- Units **must** be registered before they can be marked as complete.
- Units **may** be marked as dependencies before they are registered.
- Units **must not** declare cyclic dependencies. Attempting to create a cyclic dependency will result in an error.
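These semantics can be illustrated with a plain-bash simulation (no Coder CLI required). The `sync_start` and `sync_complete` functions below are stand-ins that model the blocking behavior with sentinel files in a temporary directory:

```shell
#!/bin/bash
# Stand-ins for 'coder exp sync': completion state lives in a temp dir.
STATE="$(mktemp -d)"
sync_complete() { touch "$STATE/$1.done"; }
sync_start() {               # block until every dependency of $1 is done
  local unit="$1"; shift
  for dep in "$@"; do
    while [ ! -f "$STATE/$dep.done" ]; do sleep 0.1; done
  done
}

# 'git-clone' completes in the background after a short delay...
( sleep 0.3; sync_complete git-clone ) &

# ...while 'app' blocks in sync_start until that happens.
sync_start app git-clone
APP_RAN_AFTER_CLONE="yes"
wait
```

In the real system the agent tracks this state in its DAG, but the observable behavior is the same: `start` blocks until every declared dependency has been marked complete.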
## Requirements
> [!IMPORTANT]
> The `coder exp sync` command is only available in Coder v2.30 and later.
To use startup dependencies in your templates, you must:
- Enable the Coder Agent Socket Server.
- Modify your workspace startup scripts to run in parallel and declare dependencies as required using `coder exp sync`.
### Enable the Coder Agent Socket Server
The agent socket server provides the communication layer for startup
coordination. To enable it, set `CODER_AGENT_SOCKET_SERVER_ENABLED=true` in the environment in which the agent is running.
The exact method for doing this depends on your infrastructure platform:
<div class="tabs">
#### Docker / Podman
```hcl
resource "docker_container" "workspace" {
  count   = data.coder_workspace.me.start_count
  image   = "codercom/enterprise-base:ubuntu"
  name    = "coder-${data.coder_workspace_owner.me.name}-${lower(data.coder_workspace.me.name)}"
  env = [
    "CODER_AGENT_SOCKET_SERVER_ENABLED=true"
  ]
  command = ["sh", "-c", coder_agent.main.init_script]
}
```
#### Kubernetes
```hcl
resource "kubernetes_pod" "main" {
  count = data.coder_workspace.me.start_count
  metadata {
    name      = "coder-${data.coder_workspace_owner.me.name}-${lower(data.coder_workspace.me.name)}"
    namespace = var.workspaces_namespace
  }
  spec {
    container {
      name    = "dev"
      image   = "codercom/enterprise-base:ubuntu"
      command = ["sh", "-c", coder_agent.main.init_script]
      env {
        name  = "CODER_AGENT_SOCKET_SERVER_ENABLED"
        value = "true"
      }
    }
  }
}
```
#### AWS EC2 / VMs
For virtual machines, pass the environment variable through cloud-init or your
provisioning system:
```hcl
locals {
  agent_env = {
    "CODER_AGENT_SOCKET_SERVER_ENABLED" = "true"
  }
}

# In your cloud-init userdata template:
# %{ for key, value in local.agent_env ~}
# export ${key}="${value}"
# %{ endfor ~}
```
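As a concrete sketch (the resource name and surrounding wiring here are hypothetical, not from the Coder docs), the `local.agent_env` map above could be rendered into an EC2 instance's user data using Terraform's inline template directives:

```hcl
# Hypothetical wiring: render local.agent_env into cloud-init user data.
resource "aws_instance" "dev" {
  # ... AMI, instance type, and networking omitted ...
  user_data = <<-EOT
    #!/bin/bash
    %{~ for key, value in local.agent_env ~}
    export ${key}="${value}"
    %{~ endfor ~}
    ${coder_agent.main.init_script}
  EOT
}
```

The key point is only that the variable must end up in the environment of the process that runs the agent init script.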
</div>
### Declare Dependencies in your Workspace Startup Scripts
<div class="tabs">
#### Single Dependency
Here's a simple example of a script that depends on another unit completing
first:
```bash
#!/bin/bash
UNIT_NAME="my-setup"
# Declare dependency on git-clone
coder exp sync want "$UNIT_NAME" "git-clone"
# Wait for dependencies and mark as started
coder exp sync start "$UNIT_NAME"
# Do your work here
echo "Running after git-clone completes"
# Signal completion
coder exp sync complete "$UNIT_NAME"
```
This script will wait until the `git-clone` unit completes before starting its
own work.
#### Multiple Dependencies
If your unit depends on multiple other units, you can declare all dependencies
before starting:
```bash
#!/bin/bash
UNIT_NAME="my-app"
DEPENDENCIES="git-clone,env-setup,database-migration"

# Declare all dependencies
if [ -n "$DEPENDENCIES" ]; then
  IFS=',' read -ra DEPS <<< "$DEPENDENCIES"
  for dep in "${DEPS[@]}"; do
    dep=$(echo "$dep" | xargs) # Trim whitespace
    if [ -n "$dep" ]; then
      coder exp sync want "$UNIT_NAME" "$dep"
    fi
  done
fi

# Wait for all dependencies
coder exp sync start "$UNIT_NAME"

# Your work here
echo "All dependencies satisfied, starting application"

# Signal completion
coder exp sync complete "$UNIT_NAME"
```
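The comma-splitting loop above can be checked in isolation without the Coder CLI; this sketch simply collects the trimmed names instead of declaring them:

```shell
#!/bin/bash
# Parse a comma-separated dependency list, trimming stray whitespace.
DEPENDENCIES="git-clone, env-setup ,database-migration"
PARSED=()
IFS=',' read -ra DEPS <<< "$DEPENDENCIES"
for dep in "${DEPS[@]}"; do
  dep=$(echo "$dep" | xargs)   # trim leading/trailing whitespace
  if [ -n "$dep" ]; then
    PARSED+=("$dep")
  fi
done
echo "${PARSED[@]}"
```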
</div>
## Best Practices
### Test your changes before rolling out to all users
Before rolling out to all users:
1. Create a test workspace from the updated template
2. Check workspace build logs for sync messages
3. Verify all units reach "completed" status
4. Test workspace functionality
Once you're satisfied, [promote the new template version](../../../reference/cli/templates_versions_promote.md).
### Handle missing CLI gracefully
Not all workspaces will have the Coder CLI available in `$PATH`. Check for availability of the Coder CLI before using
sync commands:
```bash
if command -v coder > /dev/null 2>&1; then
  coder exp sync start "$UNIT_NAME"
else
  echo "Coder CLI not available, continuing without coordination"
fi
```
### Complete units that start successfully
Units **must** call `coder exp sync complete` to unblock dependent units. Use `trap` to ensure
completion even if your script exits early or encounters errors:
```bash
SYNC_STARTED=0
if coder exp sync start "$UNIT_NAME"; then
  SYNC_STARTED=1
fi

cleanup_sync() {
  if [ "$SYNC_STARTED" -eq 1 ]; then
    coder exp sync complete "$UNIT_NAME"
  fi
}
trap cleanup_sync EXIT
```
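You can verify the trap behavior without a running agent. In this sketch, `sync_complete` is a stand-in that prints instead of calling the CLI, and the inner script fails partway through; the `EXIT` trap still fires:

```shell
#!/bin/bash
# Run the trap pattern in a child shell that exits early, and capture output.
run_with_trap() {
  bash -c '
    sync_complete() { echo "completed:$1"; }   # stand-in for the real CLI call
    SYNC_STARTED=1
    cleanup_sync() {
      if [ "$SYNC_STARTED" -eq 1 ]; then
        sync_complete "my-unit"
      fi
    }
    trap cleanup_sync EXIT
    echo "working"
    exit 1                                     # early failure: trap still fires
  '
}
output="$(run_with_trap)" || true
echo "$output"
```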
### Use descriptive unit names
Names should explain what the unit does, not its position in a sequence:
- Good: `git-clone`, `env-setup`, `database-migration`
- Avoid: `step1`, `init`, `script-1`
### Prefix unit names with a unique namespace
When using `coder exp sync` in modules, generic unit names like `git-clone` are likely to collide. Prefix each unit with your module's name to ensure it does not conflict with units declared elsewhere.
- Good: `<module>.git-clone`, `<module>.claude`
- Bad: `git-clone`, `claude`
### Document dependencies
Add comments explaining why dependencies exist:
```hcl
resource "coder_script" "ide_setup" {
  # Depends on git-clone because we need .vscode/extensions.json
  # Depends on env-setup because we need $NODE_PATH configured
  script = <<-EOT
    coder exp sync want "ide-setup" "git-clone"
    coder exp sync want "ide-setup" "env-setup"
    # ...
  EOT
}
```
### Avoid circular dependencies
The Coder Agent detects and rejects circular dependencies, but they indicate a design problem:
```bash
# This will fail
coder exp sync want "unit-a" "unit-b"
coder exp sync want "unit-b" "unit-a"
```
## Frequently Asked Questions
### How do I identify scripts that can benefit from startup coordination?
Look for these patterns in existing templates:
- `sleep` commands used to order scripts
- Using files to coordinate startup between scripts (e.g. `touch /tmp/startup-complete`)
- Scripts that fail intermittently on startup
- Comments like "must run after X" or "wait for Y"
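For instance, the sentinel-file handshake below is a common manual workaround that sync replaces (the pattern is illustrative; real templates typically use a fixed path like `/tmp/startup-complete`):

```shell
#!/bin/bash
# Manual coordination via a sentinel file: one script touches a file when
# done, another polls for it. This is the pattern 'coder exp sync' replaces.
SENTINEL="$(mktemp -u)"

# Producer (e.g. the clone script) signals completion...
( sleep 0.2; touch "$SENTINEL" ) &

# ...while the consumer busy-waits until the file appears.
while [ ! -f "$SENTINEL" ]; do sleep 0.1; done
wait
echo "dependency ready"
```

Unlike `coder exp sync`, this approach has no timeout, no cycle detection, and no way to inspect which dependency is blocking.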
### Will this slow down my workspace?
No. The socket server adds minimal overhead, and the default polling interval is 1
second, so waiting for dependencies adds at most a few seconds to startup.
If anything, you are likely to see faster startups, since coordination makes it easier to run complex initialization steps in parallel.
### How do units interact with each other?
Units with no dependencies run immediately and in parallel.
Only units with unsatisfied dependencies wait for their dependencies.
### How long can a dependency take to complete?
By default, `coder exp sync start` has a 5-minute timeout to prevent indefinite hangs.
Upon timeout, the command will exit with an error code and print `timeout waiting for dependencies of unit <unit_name>` to stderr.
You can adjust this timeout as necessary for long-running operations:
```bash
coder exp sync start "long-operation" --timeout 10m
```
### Is state stored between restarts?
No. Sync state is kept in-memory only and resets on workspace restart.
This is intentional to ensure clean initialization on every start.
+23
View File
@@ -667,6 +667,29 @@
"description": "Log workspace processes",
"path": "./admin/templates/extending-templates/process-logging.md",
"state": ["premium"]
},
{
  "title": "Startup Dependencies",
  "description": "Coordinate workspace startup with dependency management",
  "path": "./admin/templates/startup-coordination/index.md",
  "state": ["early access"],
  "children": [
    {
      "title": "Usage",
      "description": "How to use startup coordination",
      "path": "./admin/templates/startup-coordination/usage.md"
    },
    {
      "title": "Troubleshooting",
      "description": "Troubleshoot startup coordination",
      "path": "./admin/templates/startup-coordination/troubleshooting.md"
    },
    {
      "title": "Examples",
      "description": "Examples of startup coordination",
      "path": "./admin/templates/startup-coordination/example.md"
    }
  ]
}
]
},
+38
View File
@@ -1183,6 +1183,44 @@ curl -X GET http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild}/sta
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Update workspace build state
### Code samples
```shell
# Example request using curl
curl -X PUT http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild}/state \
-H 'Content-Type: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`PUT /workspacebuilds/{workspacebuild}/state`
> Body parameter
```json
{
"state": [
0
]
}
```
### Parameters
| Name | In | Type | Required | Description |
|------------------|------|--------------------------------------------------------------------------------------------------|----------|--------------------|
| `workspacebuild` | path | string(uuid) | true | Workspace build ID |
| `body` | body | [codersdk.UpdateWorkspaceBuildStateRequest](schemas.md#codersdkupdateworkspacebuildstaterequest) | true | Request body |
### Responses
| Status | Meaning | Description | Schema |
|--------|-----------------------------------------------------------------|-------------|--------|
| 204 | [No Content](https://tools.ietf.org/html/rfc7231#section-6.3.5) | No Content | |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get workspace build timings by ID
### Code samples
+5
View File
@@ -164,6 +164,9 @@ curl -X GET http://coder-server:8080/api/v2/deployment/config \
"ai": {
"aibridge_proxy": {
"cert_file": "string",
"domain_allowlist": [
"string"
],
"enabled": true,
"key_file": "string",
"listen_addr": "string"
@@ -433,6 +436,8 @@ curl -X GET http://coder-server:8080/api/v2/deployment/config \
"username_field": "string"
},
"pg_auth": "string",
"pg_conn_max_idle": "string",
"pg_conn_max_open": 0,
"pg_connection_url": "string",
"pprof": {
"address": {
+3 -3
View File
@@ -179,7 +179,7 @@ Status Code **200**
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Upsert a custom organization role
## Update a custom organization role
### Code samples
@@ -235,7 +235,7 @@ curl -X PUT http://coder-server:8080/api/v2/organizations/{organization}/members
| Name | In | Type | Required | Description |
|----------------|------|--------------------------------------------------------------------|----------|---------------------|
| `organization` | path | string(uuid) | true | Organization ID |
| `body` | body | [codersdk.CustomRoleRequest](schemas.md#codersdkcustomrolerequest) | true | Upsert role request |
| `body` | body | [codersdk.CustomRoleRequest](schemas.md#codersdkcustomrolerequest) | true | Update role request |
### Example responses
@@ -285,7 +285,7 @@ curl -X PUT http://coder-server:8080/api/v2/organizations/{organization}/members
|--------|---------------------------------------------------------|-------------|---------------------------------------------------|
| 200 | [OK](https://tools.ietf.org/html/rfc7231#section-6.3.1) | OK | array of [codersdk.Role](schemas.md#codersdkrole) |
<h3 id="upsert-a-custom-organization-role-responseschema">Response Schema</h3>
<h3 id="update-a-custom-organization-role-responseschema">Response Schema</h3>
Status Code **200**
+41 -6
View File
@@ -597,6 +597,9 @@
```json
{
"cert_file": "string",
"domain_allowlist": [
"string"
],
"enabled": true,
"key_file": "string",
"listen_addr": "string"
@@ -605,12 +608,13 @@
### Properties
| Name | Type | Required | Restrictions | Description |
|---------------|---------|----------|--------------|-------------|
| `cert_file` | string | false | | |
| `enabled` | boolean | false | | |
| `key_file` | string | false | | |
| `listen_addr` | string | false | | |
| Name | Type | Required | Restrictions | Description |
|--------------------|-----------------|----------|--------------|-------------|
| `cert_file` | string | false | | |
| `domain_allowlist` | array of string | false | | |
| `enabled` | boolean | false | | |
| `key_file` | string | false | | |
| `listen_addr` | string | false | | |
## codersdk.AIBridgeTokenUsage
@@ -712,6 +716,9 @@
{
"aibridge_proxy": {
"cert_file": "string",
"domain_allowlist": [
"string"
],
"enabled": true,
"key_file": "string",
"listen_addr": "string"
@@ -2624,6 +2631,9 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"ai": {
"aibridge_proxy": {
"cert_file": "string",
"domain_allowlist": [
"string"
],
"enabled": true,
"key_file": "string",
"listen_addr": "string"
@@ -2893,6 +2903,8 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"username_field": "string"
},
"pg_auth": "string",
"pg_conn_max_idle": "string",
"pg_conn_max_open": 0,
"pg_connection_url": "string",
"pprof": {
"address": {
@@ -3163,6 +3175,9 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"ai": {
"aibridge_proxy": {
"cert_file": "string",
"domain_allowlist": [
"string"
],
"enabled": true,
"key_file": "string",
"listen_addr": "string"
@@ -3432,6 +3447,8 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
"username_field": "string"
},
"pg_auth": "string",
"pg_conn_max_idle": "string",
"pg_conn_max_open": 0,
"pg_connection_url": "string",
"pprof": {
"address": {
@@ -3622,6 +3639,8 @@ CreateWorkspaceRequest provides options for creating a new workspace. Only one o
| `oauth2` | [codersdk.OAuth2Config](#codersdkoauth2config) | false | | |
| `oidc` | [codersdk.OIDCConfig](#codersdkoidcconfig) | false | | |
| `pg_auth` | string | false | | |
| `pg_conn_max_idle` | string | false | | |
| `pg_conn_max_open` | integer | false | | |
| `pg_connection_url` | string | false | | |
| `pprof` | [codersdk.PprofConfig](#codersdkpprofconfig) | false | | |
| `prometheus` | [codersdk.PrometheusConfig](#codersdkprometheusconfig) | false | | |
@@ -9147,6 +9166,22 @@ If the schedule is empty, the user will be updated to use the default schedule.|
|------------|--------|----------|--------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `schedule` | string | false | | Schedule is expected to be of the form `CRON_TZ=<IANA Timezone> <min> <hour> * * <dow>` Example: `CRON_TZ=US/Central 30 9 * * 1-5` represents 0930 in the timezone US/Central on weekdays (Mon-Fri). `CRON_TZ` defaults to UTC if not present. |
## codersdk.UpdateWorkspaceBuildStateRequest
```json
{
"state": [
0
]
}
```
### Properties
| Name | Type | Required | Restrictions | Description |
|---------|------------------|----------|--------------|-------------|
| `state` | array of integer | false | | |
## codersdk.UpdateWorkspaceDormancy
```json
+22
View File
@@ -1015,6 +1015,28 @@ URL of a PostgreSQL database. If empty, PostgreSQL binaries will be downloaded f
Type of auth to use when connecting to postgres. For AWS RDS, using IAM authentication (awsiamrds) is recommended.
### --postgres-conn-max-open
| | |
|-------------|--------------------------------------|
| Type | <code>int</code> |
| Environment | <code>$CODER_PG_CONN_MAX_OPEN</code> |
| YAML | <code>pgConnMaxOpen</code> |
| Default | <code>10</code> |
Maximum number of open connections to the database. Defaults to 10.
### --postgres-conn-max-idle
| | |
|-------------|--------------------------------------|
| Type | <code>string</code> |
| Environment | <code>$CODER_PG_CONN_MAX_IDLE</code> |
| YAML | <code>pgConnMaxIdle</code> |
| Default | <code>auto</code> |
Maximum number of idle connections to the database. Set to "auto" (the default) to use max open / 3. Value must be greater or equal to 0; 0 means explicitly no idle connections.
### --secure-auth-cookie
| | |
+8
View File
@@ -18,3 +18,11 @@ coder state push [flags] <workspace> <file>
| Type | <code>int</code> |
Specify a workspace build to target by name. Defaults to latest.
### -n, --no-build
| | |
|------|-------------------|
| Type | <code>bool</code> |
Update the state without triggering a workspace build. Useful for state-only migrations.
@@ -218,6 +218,43 @@ performance. Coder's
[validated architectures](../../admin/infrastructure/validated-architectures/index.md)
give specific sizing recommendations for various user scales.
### Connection pool tuning
Coder Server maintains a pool of connections to PostgreSQL. You can tune the
pool size with these settings:
- `--postgres-conn-max-open` (env: `CODER_PG_CONN_MAX_OPEN`): Maximum number of open
connections. Default: 10. Ensure that your PostgreSQL Server has `max_connections`
set appropriately to accommodate all Coder Server replicas multiplied by the
maximum number of open connections. We recommend configuring an additional 20%
of connections to account for churn and other clients.
- `--postgres-conn-max-idle` (env: `CODER_PG_CONN_MAX_IDLE`): Maximum number of idle
connections kept in the pool. Default: "auto", which uses max open / 3.
When a connection is returned to the pool and the idle pool is already full, the
connection is closed immediately. This can cause connection establishment
overhead (churn) when load fluctuates. Monitor these metrics to understand your
connection pool behavior:
- **Capacity**: `go_sql_max_open_connections - go_sql_in_use_connections` shows
  how many connections are available for new requests. If this reaches 0, Coder
  Server performance will start to degrade. Note, however, that this is only a
  point-in-time view of the pool.

  For a more systematic view, run
  `sum by (pod) (increase(go_sql_wait_duration_seconds_total[1m]))` to see how long
  each Coder replica spent waiting on the connection pool (i.e. no free connections), and
  `sum by (pod) (increase(go_sql_wait_count_total[$__interval]))` to see how many
  connection requests had to wait. If either value seems unacceptably high, try tuning the settings above.
- **Churn**: `sum(rate(go_sql_max_idle_closed_total[$__rate_interval]))` shows
how many connections are being closed because the idle pool is full.
If you see high churn, consider increasing `--postgres-conn-max-idle` to keep more
connections ready for reuse. If you see capacity consistently near zero,
consider increasing `--postgres-conn-max-open`.
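For example, both settings can also be tuned via the server YAML config using the keys documented above (the values here are illustrative, not recommendations):

```yaml
# Hypothetical tuning for a deployment with sustained load:
pgConnMaxOpen: 40   # up from the default of 10
pgConnMaxIdle: 20   # explicit, instead of "auto" (max open / 3)
```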
## Workspace proxies
Workspace proxies proxy HTTP traffic from end users to workspaces for Coder apps
+3 -1
View File
@@ -60,7 +60,9 @@ as [JetBrains](./workspace-access/jetbrains/index.md) or
Once started, the Coder agent is responsible for running your workspace startup
scripts. These may configure tools, service connections, or personalization with
[dotfiles](./workspace-dotfiles.md).
[dotfiles](./workspace-dotfiles.md). For complex initialization with multiple
dependent scripts, see
[Workspace Startup Coordination](../admin/templates/startup-coordination/index.md).
Once these steps have completed, your workspace will now be in the `Running`
state. You can access it via any of the [supported methods](./index.md), stop it
+1 -1
View File
@@ -140,7 +140,7 @@ module "jetbrains" {
module "filebrowser" {
source = "dev.registry.coder.com/coder/filebrowser/coder"
version = "1.1.3"
version = "1.1.4"
agent_id = coder_agent.dev.id
}
+6 -3
View File
@@ -380,7 +380,7 @@ module "personalize" {
module "mux" {
count = data.coder_workspace.me.start_count
source = "registry.coder.com/coder/mux/coder"
version = "1.0.5"
version = "1.0.7"
agent_id = coder_agent.dev.id
subdomain = true
}
@@ -410,7 +410,7 @@ module "vscode-web" {
module "jetbrains" {
count = contains(jsondecode(data.coder_parameter.ide_choices.value), "jetbrains") ? data.coder_workspace.me.start_count : 0
source = "dev.registry.coder.com/coder/jetbrains/coder"
version = "1.2.1"
version = "1.3.0"
agent_id = coder_agent.dev.id
agent_name = "dev"
folder = local.repo_dir
@@ -421,7 +421,7 @@ module "jetbrains" {
module "filebrowser" {
count = data.coder_workspace.me.start_count
source = "dev.registry.coder.com/coder/filebrowser/coder"
version = "1.1.3"
version = "1.1.4"
agent_id = coder_agent.dev.id
agent_name = "dev"
}
@@ -632,6 +632,9 @@ resource "coder_agent" "dev" {
# accumulating waste and growing too large.
go clean -cache
# Clean up the coder build directory as this can get quite large
rm -rf "${local.repo_dir}/build"
# Clean up the unused resources to keep storage usage low.
#
# WARNING! This will remove:
+76 -24
@@ -10,6 +10,7 @@ import (
"net"
"net/http"
"net/url"
"slices"
"strings"
"sync"
"time"
@@ -69,6 +70,10 @@ type Options struct {
// CertStore is an optional certificate cache for MITM. If nil, a default
// cache is created. Exposed for testing.
CertStore goproxy.CertStorage
// DomainAllowlist is the list of domains to intercept and route through AI Bridge.
// Only requests to these domains will be MITM'd and forwarded to aibridged.
// Requests to other domains will be tunneled directly without decryption.
DomainAllowlist []string
}
func New(ctx context.Context, logger slog.Logger, opts Options) (*Server, error) {
@@ -78,10 +83,6 @@ func New(ctx context.Context, logger slog.Logger, opts Options) (*Server, error)
return nil, xerrors.New("listen address is required")
}
if opts.CertFile == "" || opts.KeyFile == "" {
return nil, xerrors.New("cert file and key file are required")
}
if strings.TrimSpace(opts.CoderAccessURL) == "" {
return nil, xerrors.New("coder access URL is required")
}
@@ -90,6 +91,31 @@ func New(ctx context.Context, logger slog.Logger, opts Options) (*Server, error)
return nil, xerrors.Errorf("invalid coder access URL %q: %w", opts.CoderAccessURL, err)
}
if opts.CertFile == "" || opts.KeyFile == "" {
return nil, xerrors.New("cert file and key file are required")
}
allowedPorts := opts.AllowedPorts
if len(allowedPorts) == 0 {
allowedPorts = []string{"80", "443"}
}
if len(opts.DomainAllowlist) == 0 {
return nil, xerrors.New("domain allow list is required")
}
mitmHosts, err := convertDomainsToHosts(opts.DomainAllowlist, allowedPorts)
if err != nil {
return nil, xerrors.Errorf("invalid domain allowlist: %w", err)
}
if len(mitmHosts) == 0 {
return nil, xerrors.New("domain allowlist is empty, at least one domain is required")
}
logger.Info(ctx, "configured domain allowlist for MITM",
slog.F("domains", opts.DomainAllowlist),
slog.F("hosts", mitmHosts),
)
// Load CA certificate for MITM
certPEM, err := loadMitmCertificate(opts.CertFile, opts.KeyFile)
if err != nil {
@@ -100,9 +126,6 @@ func New(ctx context.Context, logger slog.Logger, opts Options) (*Server, error)
// Cache generated leaf certificates to avoid expensive RSA key generation
// and signing on every request to the same hostname.
// TODO(ssncferreira): Currently certs are cached for all MITM'd hosts, but once
// host filtering is implemented, only AI provider certs will be cached.
// Related to https://github.com/coder/internal/issues/1182
if opts.CertStore != nil {
proxy.CertStore = opts.CertStore
} else {
@@ -118,19 +141,19 @@ func New(ctx context.Context, logger slog.Logger, opts Options) (*Server, error)
}
// Reject CONNECT requests to non-standard ports.
allowedPorts := opts.AllowedPorts
if len(allowedPorts) == 0 {
allowedPorts = []string{"80", "443"}
}
proxy.OnRequest().HandleConnectFunc(srv.portMiddleware(allowedPorts))
// Extract Coder session token from proxy authentication to forward to aibridged.
proxy.OnRequest().HandleConnectFunc(srv.authMiddleware)
// Apply MITM with authentication only to allowlisted hosts.
proxy.OnRequest(
// Only CONNECT requests to these hosts will be intercepted and decrypted.
// All other requests will be tunneled directly to their destination.
goproxy.ReqHostIs(mitmHosts...),
).HandleConnectFunc(
// Extract Coder session token from proxy authentication to forward to aibridged.
srv.authMiddleware,
)
// Handle decrypted requests: route to aibridged for known AI providers, or passthrough to original destination.
// TODO(ssncferreira): Currently the proxy always behaves as MITM, but this should only happen for known
// AI providers as all other requests should be tunneled. This will be implemented upstack.
// Related to https://github.com/coder/internal/issues/1182
proxy.OnRequest().DoFunc(srv.handleRequest)
// Create listener first so we can get the actual address.
@@ -249,6 +272,38 @@ func (s *Server) portMiddleware(allowedPorts []string) func(host string, ctx *go
}
}
// convertDomainsToHosts converts a list of domain names to host:port combinations.
// Each domain is combined with each allowed port.
// Returns an error if a domain includes a port that's not in the allowed ports list.
// For example, ["api.anthropic.com"] with ports ["443"] becomes ["api.anthropic.com:443"].
func convertDomainsToHosts(domains []string, allowedPorts []string) ([]string, error) {
var hosts []string
for _, domain := range domains {
domain = strings.TrimSpace(strings.ToLower(domain))
if domain == "" {
continue
}
// If domain already includes a port, validate it's in the allowed list.
if strings.Contains(domain, ":") {
host, port, err := net.SplitHostPort(domain)
if err != nil {
return nil, xerrors.Errorf("invalid domain %q: %w", domain, err)
}
if !slices.Contains(allowedPorts, port) {
return nil, xerrors.Errorf("invalid port in domain %q: port %s is not in allowed ports %v", domain, port, allowedPorts)
}
hosts = append(hosts, host+":"+port)
} else {
// Otherwise, combine domain with all allowed ports.
for _, port := range allowedPorts {
hosts = append(hosts, domain+":"+port)
}
}
}
return hosts, nil
}
// authMiddleware is a CONNECT middleware that extracts the Coder session token
// from the Proxy-Authorization header and stores it in ctx.UserData for use by
// downstream request handlers.
@@ -317,10 +372,6 @@ func extractCoderTokenFromProxyAuth(proxyAuth string) string {
// - Known AI providers return their provider name, used to route to the
// corresponding aibridge endpoint.
// - Unknown hosts return empty string and are passed through directly.
//
// TODO(ssncferreira): Provider list configurable via domain allowlists will be implemented upstack.
//
// Related to https://github.com/coder/internal/issues/1182.
func providerFromURL(reqURL *url.URL) string {
if reqURL == nil {
return ""
@@ -345,14 +396,15 @@ func (s *Server) handleRequest(req *http.Request, ctx *goproxy.ProxyCtx) (*http.
// Check if this request is for a supported AI provider.
provider := providerFromURL(req.URL)
if provider == "" {
// TODO(ssncferreira): After implementing selective MITM, this case should never
// happen since unknown hosts will be tunneled, not decrypted.
// Related to https://github.com/coder/internal/issues/1182
s.logger.Debug(s.ctx, "passthrough request to unknown host",
// This can happen if a domain is in the allowlist but doesn't have a
// corresponding provider mapping in providerFromURL(). The request was
// decrypted but we don't know how to route it to aibridged.
s.logger.Warn(s.ctx, "decrypted request has no provider mapping, passing through",
slog.F("host", req.Host),
slog.F("method", req.Method),
slog.F("path", originalPath),
)
// Passthrough to the original destination.
return req, nil
}
+334 -140
@@ -96,10 +96,11 @@ func generateSharedTestCA() (certFile, keyFile string, err error) {
}
type testProxyConfig struct {
listenAddr string
coderAccessURL string
allowedPorts []string
certStore *aibridgeproxyd.CertCache
listenAddr string
coderAccessURL string
allowedPorts []string
certStore *aibridgeproxyd.CertCache
domainAllowlist []string
}
type testProxyOption func(*testProxyConfig)
@@ -122,6 +123,12 @@ func withCertStore(store *aibridgeproxyd.CertCache) testProxyOption {
}
}
func withDomainAllowlist(domains ...string) testProxyOption {
return func(cfg *testProxyConfig) {
cfg.domainAllowlist = domains
}
}
// newTestProxy creates a new AI Bridge Proxy server for testing.
// It uses the shared test CA and registers cleanup automatically.
// It waits for the proxy server to be ready before returning.
@@ -129,8 +136,9 @@ func newTestProxy(t *testing.T, opts ...testProxyOption) *aibridgeproxyd.Server
t.Helper()
cfg := &testProxyConfig{
listenAddr: "127.0.0.1:0",
coderAccessURL: "http://localhost:3000",
listenAddr: "127.0.0.1:0",
coderAccessURL: "http://localhost:3000",
domainAllowlist: []string{"127.0.0.1", "localhost"},
}
for _, opt := range opts {
opt(cfg)
@@ -140,11 +148,12 @@ func newTestProxy(t *testing.T, opts ...testProxyOption) *aibridgeproxyd.Server
logger := slogtest.Make(t, nil)
aibridgeOpts := aibridgeproxyd.Options{
ListenAddr: cfg.listenAddr,
CoderAccessURL: cfg.coderAccessURL,
CertFile: certFile,
KeyFile: keyFile,
AllowedPorts: cfg.allowedPorts,
ListenAddr: cfg.listenAddr,
CoderAccessURL: cfg.coderAccessURL,
CertFile: certFile,
KeyFile: keyFile,
AllowedPorts: cfg.allowedPorts,
DomainAllowlist: cfg.domainAllowlist,
}
if cfg.certStore != nil {
aibridgeOpts.CertStore = cfg.certStore
@@ -169,9 +178,10 @@ func newTestProxy(t *testing.T, opts ...testProxyOption) *aibridgeproxyd.Server
return srv
}
// newProxyClient creates an HTTP client configured to use the proxy and trust its CA.
// It adds a Proxy-Authorization header with the provided token for authentication.
func newProxyClient(t *testing.T, srv *aibridgeproxyd.Server, proxyAuth string) *http.Client {
// getProxyCertPool returns a cert pool containing the shared test CA certificate.
// This is used for tests where requests are MITM'd by the proxy, so the client
// needs to trust the proxy's CA to verify the generated certificates.
func getProxyCertPool(t *testing.T) *x509.CertPool {
t.Helper()
certFile, _ := getSharedTestCA(t)
@@ -183,6 +193,16 @@ func newProxyClient(t *testing.T, srv *aibridgeproxyd.Server, proxyAuth string)
ok := certPool.AppendCertsFromPEM(certPEM)
require.True(t, ok)
return certPool
}
// newProxyClient creates an HTTP client configured to use the proxy.
// It adds a Proxy-Authorization header with the provided token for authentication.
// The certPool parameter specifies which certificates the client should trust.
// For MITM'd requests, use the proxy's CA. For passthrough, use the target server's cert.
func newProxyClient(t *testing.T, srv *aibridgeproxyd.Server, proxyAuth string, certPool *x509.CertPool) *http.Client {
t.Helper()
// Create an HTTP client configured to use the proxy.
proxyURL, err := url.Parse("http://" + srv.Addr())
require.NoError(t, err)
@@ -207,8 +227,8 @@ func newProxyClient(t *testing.T, srv *aibridgeproxyd.Server, proxyAuth string)
}
// newTargetServer creates a mock HTTPS server that will be the target of proxied requests.
// It returns the server's parsed URL. The server is automatically closed when the test ends.
func newTargetServer(t *testing.T, handler http.HandlerFunc) *url.URL {
// It returns the server and its parsed URL. The server is automatically closed when the test ends.
func newTargetServer(t *testing.T, handler http.HandlerFunc) (*httptest.Server, *url.URL) {
t.Helper()
srv := httptest.NewTLSServer(handler)
@@ -217,7 +237,7 @@ func newTargetServer(t *testing.T, handler http.HandlerFunc) *url.URL {
srvURL, err := url.Parse(srv.URL)
require.NoError(t, err)
return srvURL
return srv, srvURL
}
// makeProxyAuthHeader creates a Proxy-Authorization header value with the given token.
@@ -237,10 +257,27 @@ func TestNew(t *testing.T) {
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "listen address is required")
})
t.Run("EmptyListenAddr", func(t *testing.T) {
t.Parallel()
certFile, keyFile := getSharedTestCA(t)
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "listen address is required")
@@ -253,9 +290,10 @@ func TestNew(t *testing.T) {
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "127.0.0.1:0",
CertFile: certFile,
KeyFile: keyFile,
ListenAddr: "127.0.0.1:0",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "coder access URL is required")
@@ -268,10 +306,11 @@ func TestNew(t *testing.T) {
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "127.0.0.1:0",
CoderAccessURL: " ",
CertFile: certFile,
KeyFile: keyFile,
ListenAddr: "127.0.0.1:0",
CoderAccessURL: " ",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "coder access URL is required")
@@ -284,10 +323,11 @@ func TestNew(t *testing.T) {
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "://invalid",
CertFile: certFile,
KeyFile: keyFile,
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "://invalid",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "invalid coder access URL")
@@ -299,9 +339,10 @@ func TestNew(t *testing.T) {
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: ":0",
CoderAccessURL: "http://localhost:3000",
KeyFile: "key.pem",
ListenAddr: ":0",
CoderAccessURL: "http://localhost:3000",
KeyFile: "key.pem",
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "cert file and key file are required")
@@ -313,9 +354,10 @@ func TestNew(t *testing.T) {
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: ":0",
CoderAccessURL: "http://localhost:3000",
CertFile: "cert.pem",
ListenAddr: ":0",
CoderAccessURL: "http://localhost:3000",
CertFile: "cert.pem",
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "cert file and key file are required")
@@ -327,15 +369,83 @@ func TestNew(t *testing.T) {
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: ":0",
CoderAccessURL: "http://localhost:3000",
CertFile: "/nonexistent/cert.pem",
KeyFile: "/nonexistent/key.pem",
ListenAddr: ":0",
CoderAccessURL: "http://localhost:3000",
CertFile: "/nonexistent/cert.pem",
KeyFile: "/nonexistent/key.pem",
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "failed to load MITM certificate")
})
t.Run("MissingDomainAllowlist", func(t *testing.T) {
t.Parallel()
certFile, keyFile := getSharedTestCA(t)
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: ":0",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
})
require.Error(t, err)
require.Contains(t, err.Error(), "domain allow list is required")
})
t.Run("EmptyDomainAllowlist", func(t *testing.T) {
t.Parallel()
certFile, keyFile := getSharedTestCA(t)
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: ":0",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{""},
})
require.Error(t, err)
require.Contains(t, err.Error(), "domain allowlist is empty, at least one domain is required")
})
t.Run("InvalidDomainAllowlist", func(t *testing.T) {
t.Parallel()
certFile, keyFile := getSharedTestCA(t)
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"[invalid:domain"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "invalid domain")
})
t.Run("DomainWithNonAllowedPort", func(t *testing.T) {
t.Parallel()
certFile, keyFile := getSharedTestCA(t)
logger := slogtest.Make(t, nil)
_, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"api.anthropic.com:8443"},
})
require.Error(t, err)
require.Contains(t, err.Error(), "invalid port in domain")
})
t.Run("Success", func(t *testing.T) {
t.Parallel()
@@ -343,10 +453,11 @@ func TestNew(t *testing.T) {
logger := slogtest.Make(t, nil)
srv, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"api.anthropic.com", "api.openai.com"},
})
require.NoError(t, err)
require.NotNil(t, srv)
@@ -363,10 +474,11 @@ func TestClose(t *testing.T) {
logger := slogtest.Make(t, nil)
srv, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "http://localhost:3000",
CertFile: certFile,
KeyFile: keyFile,
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.NoError(t, err)
@@ -381,38 +493,86 @@ func TestClose(t *testing.T) {
func TestProxy_CertCaching(t *testing.T) {
t.Parallel()
// Create a mock HTTPS server that will be the target of the proxied request.
targetURL := newTargetServer(t, func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
})
tests := []struct {
name string
domainAllowlist []string
passthrough bool
}{
{
name: "AllowlistedDomainCached",
domainAllowlist: nil, // will use targetURL.Hostname()
passthrough: false,
},
{
name: "NonAllowlistedDomainNotCached",
domainAllowlist: []string{"other.example.com"},
passthrough: true,
},
}
// Create a cert cache so we can inspect it after the request.
certCache := aibridgeproxyd.NewCertCache()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
// Start the proxy server with the certificate cache.
srv := newTestProxy(t,
withAllowedPorts(targetURL.Port()),
withCertStore(certCache),
)
// Create a mock HTTPS server that will be the target of the proxied request.
targetServer, targetURL := newTargetServer(t, func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
})
// Make a request through the proxy to the target server.
// This triggers MITM and caches the generated certificate.
client := newProxyClient(t, srv, makeProxyAuthHeader("test-session-token"))
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, targetURL.String(), nil)
require.NoError(t, err)
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Create a cert cache so we can inspect it after the request.
certCache := aibridgeproxyd.NewCertCache()
// Fetch with a generator that tracks calls: if the certificate was cached
// during the request above, the generator should not be called.
genCalls := 0
_, err = certCache.Fetch(targetURL.Hostname(), func() (*tls.Certificate, error) {
genCalls++
return &tls.Certificate{}, nil
})
require.NoError(t, err)
require.Equal(t, 0, genCalls, "certificate should have been cached during request")
// Configure domain allowlist.
domainAllowlist := tt.domainAllowlist
if domainAllowlist == nil {
domainAllowlist = []string{targetURL.Hostname()}
}
// Start the proxy server with the certificate cache.
srv := newTestProxy(t,
withAllowedPorts(targetURL.Port()),
withCertStore(certCache),
withDomainAllowlist(domainAllowlist...),
)
// Build the cert pool for the client to trust.
// - For MITM'd requests, the client connects through the proxy which generates
// certificates signed by our test CA, so it needs to trust the proxy's CA.
// - For passthrough requests, the client connects directly to the target server
// through a tunnel, so it needs to trust the target's self-signed certificate.
var certPool *x509.CertPool
if tt.passthrough {
certPool = x509.NewCertPool()
certPool.AddCert(targetServer.Certificate())
} else {
certPool = getProxyCertPool(t)
}
// Make a request through the proxy to the target server.
client := newProxyClient(t, srv, makeProxyAuthHeader("test-session-token"), certPool)
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, targetURL.String(), nil)
require.NoError(t, err)
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Fetch with a generator that tracks calls.
genCalls := 0
_, err = certCache.Fetch(targetURL.Hostname(), func() (*tls.Certificate, error) {
genCalls++
return &tls.Certificate{}, nil
})
require.NoError(t, err)
if tt.passthrough {
// Certificate should NOT have been cached since request was tunneled.
require.Equal(t, 1, genCalls, "certificate should NOT have been cached for non-allowlisted domain")
} else {
// Certificate should have been cached during MITM.
require.Equal(t, 0, genCalls, "certificate should have been cached during request")
}
})
}
}
func TestProxy_PortValidation(t *testing.T) {
@@ -445,16 +605,19 @@ func TestProxy_PortValidation(t *testing.T) {
t.Parallel()
// Create a target HTTPS server that will be the destination of our proxied request.
targetURL := newTargetServer(t, func(w http.ResponseWriter, r *http.Request) {
_, targetURL := newTargetServer(t, func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("hello from target"))
})
// Start the proxy server on a random port to avoid conflicts when running tests in parallel.
srv := newTestProxy(t, withAllowedPorts(tt.allowedPorts(targetURL)...))
srv := newTestProxy(t,
withAllowedPorts(tt.allowedPorts(targetURL)...),
withDomainAllowlist(targetURL.Hostname()),
)
// Make a request through the proxy to the target server.
client := newProxyClient(t, srv, makeProxyAuthHeader("test-session-token"))
client := newProxyClient(t, srv, makeProxyAuthHeader("test-session-token"), getProxyCertPool(t))
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, targetURL.String(), nil)
require.NoError(t, err)
@@ -511,16 +674,19 @@ func TestProxy_Authentication(t *testing.T) {
t.Parallel()
// Create a mock HTTPS server that will be the target of our proxied request.
targetURL := newTargetServer(t, func(w http.ResponseWriter, _ *http.Request) {
_, targetURL := newTargetServer(t, func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("hello from target"))
})
// Start the proxy server on a random port to avoid conflicts when running tests in parallel.
srv := newTestProxy(t, withAllowedPorts(targetURL.Port()))
srv := newTestProxy(t,
withAllowedPorts(targetURL.Port()),
withDomainAllowlist(targetURL.Hostname()),
)
// Make a request through the proxy to the target server.
client := newProxyClient(t, srv, tt.proxyAuth)
client := newProxyClient(t, srv, tt.proxyAuth, getProxyCertPool(t))
req, err := http.NewRequestWithContext(t.Context(), http.MethodGet, targetURL.String(), nil)
require.NoError(t, err)
resp, err := client.Do(req)
@@ -545,44 +711,72 @@ func TestProxy_MITM(t *testing.T) {
t.Parallel()
tests := []struct {
name string
targetHost string
targetPort string // optional, if empty uses default HTTPS port (443)
targetPath string
expectedPath string
passthrough bool
name string
domainAllowlist []string
allowedPorts []string
buildTargetURL func(passthroughURL *url.URL) (string, error)
passthrough bool
noAIBridgeRouting bool
expectedPath string
}{
{
name: "AnthropicMessages",
targetHost: "api.anthropic.com",
targetPath: "/v1/messages",
name: "MitmdAnthropic",
domainAllowlist: []string{"api.anthropic.com"},
allowedPorts: []string{"443"},
buildTargetURL: func(_ *url.URL) (string, error) {
return "https://api.anthropic.com/v1/messages", nil
},
expectedPath: "/api/v2/aibridge/anthropic/v1/messages",
},
{
name: "AnthropicNonDefaultPort",
targetHost: "api.anthropic.com",
targetPort: "8443",
targetPath: "/v1/messages",
name: "MitmdAnthropicNonDefaultPort",
domainAllowlist: []string{"api.anthropic.com"},
allowedPorts: []string{"8443"},
buildTargetURL: func(_ *url.URL) (string, error) {
return "https://api.anthropic.com:8443/v1/messages", nil
},
expectedPath: "/api/v2/aibridge/anthropic/v1/messages",
},
{
name: "OpenAIChatCompletions",
targetHost: "api.openai.com",
targetPath: "/v1/chat/completions",
name: "MitmdOpenAI",
domainAllowlist: []string{"api.openai.com"},
allowedPorts: []string{"443"},
buildTargetURL: func(_ *url.URL) (string, error) {
return "https://api.openai.com/v1/chat/completions", nil
},
expectedPath: "/api/v2/aibridge/openai/v1/chat/completions",
},
{
name: "OpenAINonDefaultPort",
targetHost: "api.openai.com",
targetPort: "8443",
targetPath: "/v1/chat/completions",
name: "MitmdOpenAINonDefaultPort",
domainAllowlist: []string{"api.openai.com"},
allowedPorts: []string{"8443"},
buildTargetURL: func(_ *url.URL) (string, error) {
return "https://api.openai.com:8443/v1/chat/completions", nil
},
expectedPath: "/api/v2/aibridge/openai/v1/chat/completions",
},
{
name: "UnknownHostPassthrough",
targetPath: "/some/path",
name: "PassthroughUnknownHost",
domainAllowlist: []string{"other.example.com"},
allowedPorts: nil, // will use passthroughURL.Port()
buildTargetURL: func(passthroughURL *url.URL) (string, error) {
return url.JoinPath(passthroughURL.String(), "/some/path")
},
passthrough: true,
},
// The host is MITM'd but has no provider mapping.
// The request is decrypted but passed through to the original destination
// instead of being routed to aibridge.
{
name: "MitmdWithoutAIBridgeRouting",
domainAllowlist: nil, // will use passthroughURL.Hostname()
allowedPorts: nil, // will use passthroughURL.Port()
buildTargetURL: func(passthroughURL *url.URL) (string, error) {
return url.JoinPath(passthroughURL.String(), "/some/path")
},
passthrough: false,
noAIBridgeRouting: true,
},
}
for _, tt := range tests {
@@ -603,50 +797,49 @@ func TestProxy_MITM(t *testing.T) {
t.Cleanup(func() { aibridgedServer.Close() })
// Create a mock target server for passthrough tests.
passthroughURL := newTargetServer(t, func(w http.ResponseWriter, _ *http.Request) {
passthroughServer, passthroughURL := newTargetServer(t, func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("hello from passthrough"))
})
// Configure allowed ports based on test case.
// AI provider tests connect to the specified port, or 443 if not specified.
// Passthrough tests connect directly to the local target server's random port.
var allowedPorts []string
switch {
case tt.passthrough:
// Configure allowed ports.
allowedPorts := tt.allowedPorts
if allowedPorts == nil {
allowedPorts = []string{passthroughURL.Port()}
case tt.targetPort != "":
allowedPorts = []string{tt.targetPort}
default:
allowedPorts = []string{"443"}
}
// Configure domain allowlist.
domainAllowlist := tt.domainAllowlist
if domainAllowlist == nil {
domainAllowlist = []string{passthroughURL.Hostname()}
}
// Start the proxy server pointing to our mock aibridged.
srv := newTestProxy(t,
withCoderAccessURL(aibridgedServer.URL),
withAllowedPorts(allowedPorts...),
withDomainAllowlist(domainAllowlist...),
)
// Build the target URL:
// - For passthrough, target the local mock TLS server.
// - For AI providers, use their real hostnames to trigger routing.
// Non-default ports are included explicitly; default port (443) is omitted.
var targetURL string
var err error
switch {
case tt.passthrough:
targetURL, err = url.JoinPath(passthroughURL.String(), tt.targetPath)
require.NoError(t, err)
case tt.targetPort != "":
targetURL, err = url.JoinPath("https://"+tt.targetHost+":"+tt.targetPort, tt.targetPath)
require.NoError(t, err)
default:
targetURL, err = url.JoinPath("https://"+tt.targetHost, tt.targetPath)
require.NoError(t, err)
targetURL, err := tt.buildTargetURL(passthroughURL)
require.NoError(t, err)
// Build the cert pool for the client to trust.
// - For MITM'd requests, the client connects through the proxy which generates
// certificates signed by our test CA, so it needs to trust the proxy's CA.
// - For passthrough requests, the client connects directly to the target server
// through a tunnel, so it needs to trust the target's self-signed certificate.
var certPool *x509.CertPool
if tt.passthrough {
certPool = x509.NewCertPool()
certPool.AddCert(passthroughServer.Certificate())
} else {
certPool = getProxyCertPool(t)
}
// Make a request through the proxy to the target URL.
client := newProxyClient(t, srv, makeProxyAuthHeader("test-session-token"))
client := newProxyClient(t, srv, makeProxyAuthHeader("test-session-token"), certPool)
req, err := http.NewRequestWithContext(t.Context(), http.MethodPost, targetURL, strings.NewReader(`{}`))
require.NoError(t, err)
req.Header.Set("Content-Type", "application/json")
@@ -659,16 +852,16 @@ func TestProxy_MITM(t *testing.T) {
require.NoError(t, err)
require.Equal(t, http.StatusOK, resp.StatusCode)
if tt.passthrough {
if tt.passthrough || tt.noAIBridgeRouting {
// Verify request went to target server, not aibridged.
require.Equal(t, "hello from passthrough", string(body))
require.Empty(t, receivedPath, "aibridged should not receive passthrough requests")
require.Empty(t, receivedAuth, "aibridged should not receive passthrough requests")
require.Empty(t, receivedAuth, "passthrough requests are not authenticated by the proxy")
} else {
// Verify the request was routed to aibridged correctly.
require.Equal(t, "hello from aibridged", string(body))
require.Equal(t, tt.expectedPath, receivedPath)
require.Equal(t, "Bearer test-session-token", receivedAuth)
require.Equal(t, "Bearer test-session-token", receivedAuth, "MITM'd requests must include authentication")
}
})
}
@@ -743,10 +936,11 @@ func TestServeCACert_CompoundPEM(t *testing.T) {
logger := slogtest.Make(t, nil)
srv, err := aibridgeproxyd.New(t.Context(), logger, aibridgeproxyd.Options{
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "http://localhost:3000",
CertFile: compoundCertFile,
KeyFile: keyFile,
ListenAddr: "127.0.0.1:0",
CoderAccessURL: "http://localhost:3000",
CertFile: compoundCertFile,
KeyFile: keyFile,
DomainAllowlist: []string{"127.0.0.1", "localhost"},
})
require.NoError(t, err)
t.Cleanup(func() { _ = srv.Close() })
+18 -15
@@ -62,12 +62,14 @@ var auditableResourcesTypes = map[any]map[string]Action{
"roles": ActionTrack,
},
&database.CustomRole{}: {
"name": ActionTrack,
"display_name": ActionTrack,
"site_permissions": ActionTrack,
"org_permissions": ActionTrack,
"user_permissions": ActionTrack,
"organization_id": ActionIgnore, // Never changes.
"name": ActionTrack,
"display_name": ActionTrack,
"site_permissions": ActionTrack,
"org_permissions": ActionTrack,
"user_permissions": ActionTrack,
"member_permissions": ActionTrack,
"organization_id": ActionIgnore, // Never changes.
"is_system": ActionIgnore, // Never changes.
"id": ActionIgnore,
"created_at": ActionIgnore,
@@ -309,15 +311,16 @@ var auditableResourcesTypes = map[any]map[string]Action{
"secret_prefix": ActionIgnore,
},
&database.Organization{}: {
"id": ActionIgnore,
"name": ActionTrack,
"description": ActionTrack,
"deleted": ActionTrack,
"created_at": ActionIgnore,
"updated_at": ActionTrack,
"is_default": ActionTrack,
"display_name": ActionTrack,
"icon": ActionTrack,
"workspace_sharing_disabled": ActionTrack,
},
&database.NotificationTemplate{}: {
"id": ActionIgnore,
@@ -18,10 +18,11 @@ func newAIBridgeProxyDaemon(coderAPI *coderd.API) (*aibridgeproxyd.Server, error
logger := coderAPI.Logger.Named("aibridgeproxyd")
srv, err := aibridgeproxyd.New(ctx, logger, aibridgeproxyd.Options{
ListenAddr: coderAPI.DeploymentValues.AI.BridgeProxyConfig.ListenAddr.String(),
CoderAccessURL: coderAPI.AccessURL.String(),
CertFile: coderAPI.DeploymentValues.AI.BridgeProxyConfig.CertFile.String(),
KeyFile: coderAPI.DeploymentValues.AI.BridgeProxyConfig.KeyFile.String(),
DomainAllowlist: coderAPI.DeploymentValues.AI.BridgeProxyConfig.DomainAllowlist.Value(),
})
if err != nil {
return nil, xerrors.Errorf("failed to start in-memory aibridgeproxy daemon: %w", err)
@@ -0,0 +1,33 @@
//nolint:revive,gocritic,errname,unconvert
package audit
import "log/slog"
// LogAuditor implements proxy.Auditor by logging to slog
type LogAuditor struct {
logger *slog.Logger
}
// NewLogAuditor creates a new LogAuditor
func NewLogAuditor(logger *slog.Logger) *LogAuditor {
return &LogAuditor{
logger: logger,
}
}
// AuditRequest logs the request using structured logging
func (a *LogAuditor) AuditRequest(req Request) {
if req.Allowed {
a.logger.Info("ALLOW",
"method", req.Method,
"url", req.URL,
"host", req.Host,
"rule", req.Rule)
} else {
a.logger.Warn("DENY",
"method", req.Method,
"url", req.URL,
"host", req.Host,
)
}
}
@@ -0,0 +1,10 @@
//nolint:paralleltest,testpackage,revive,gocritic
package audit
import "testing"
// Stub test file - tests removed
func TestStub(t *testing.T) {
// This is a stub test
t.Skip("stub test file")
}
@@ -0,0 +1,65 @@
//nolint:revive,gocritic,errname,unconvert
package audit
import (
"context"
"log/slog"
"os"
"golang.org/x/xerrors"
)
// MultiAuditor wraps multiple auditors and sends audit events to all of them.
type MultiAuditor struct {
auditors []Auditor
}
// NewMultiAuditor creates a new MultiAuditor that sends to all provided auditors.
func NewMultiAuditor(auditors ...Auditor) *MultiAuditor {
return &MultiAuditor{auditors: auditors}
}
// AuditRequest sends the request to all wrapped auditors.
func (m *MultiAuditor) AuditRequest(req Request) {
for _, a := range m.auditors {
a.AuditRequest(req)
}
}
// SetupAuditor creates and configures the appropriate auditors based on the
// provided configuration. It always includes a LogAuditor for stderr logging,
// and conditionally adds a SocketAuditor if audit logs are enabled and the
// workspace agent's log proxy socket exists.
func SetupAuditor(ctx context.Context, logger *slog.Logger, disableAuditLogs bool, logProxySocketPath string) (Auditor, error) {
stderrAuditor := NewLogAuditor(logger)
auditors := []Auditor{stderrAuditor}
if !disableAuditLogs {
if logProxySocketPath == "" {
return nil, xerrors.New("log proxy socket path is undefined")
}
// Since boundary is separately versioned from a Coder deployment, it's possible
// Coder is on an older version that will not create the socket and listen for
// the audit logs. Here we check for the socket to determine if the workspace
// agent is on a new enough version to prevent boundary application log spam from
// trying to connect to the agent. This assumes the agent will run and start the
// log proxy server before boundary runs.
_, err := os.Stat(logProxySocketPath)
if err != nil && !os.IsNotExist(err) {
return nil, xerrors.Errorf("failed to stat log proxy socket: %w", err)
}
agentWillProxy := !os.IsNotExist(err)
if agentWillProxy {
socketAuditor := NewSocketAuditor(logger, logProxySocketPath)
go socketAuditor.Loop(ctx)
auditors = append(auditors, socketAuditor)
} else {
logger.Warn("Socket audit logs are disabled; workspace agent has not created log proxy socket",
"socket", logProxySocketPath)
}
} else {
logger.Warn("Audit logs are disabled by configuration")
}
return NewMultiAuditor(auditors...), nil
}
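The MultiAuditor fan-out above is a small composite: every event goes to each wrapped auditor in order. A self-contained sketch of the same shape (the names here are illustrative stand-ins, not the actual package types):

```go
package main

import "fmt"

// Auditor, printAuditor, and multi mirror the shape of the Auditor
// interface, LogAuditor, and MultiAuditor in this diff.
type Auditor interface{ Audit(msg string) }

type printAuditor struct{ prefix string }

func (p printAuditor) Audit(msg string) { fmt.Println(p.prefix, msg) }

// funcAuditor adapts a plain function into an Auditor, which is handy
// for tests or one-off sinks.
type funcAuditor func(string)

func (f funcAuditor) Audit(msg string) { f(msg) }

type multi struct{ auditors []Auditor }

// Audit forwards the event to every wrapped auditor in order.
func (m multi) Audit(msg string) {
	for _, a := range m.auditors {
		a.Audit(msg)
	}
}

func main() {
	m := multi{auditors: []Auditor{
		printAuditor{prefix: "stderr:"},
		printAuditor{prefix: "socket:"},
	}}
	m.Audit("ALLOW GET https://example.com")
}
```

Because the fan-out is synchronous and ordered, a slow auditor delays the rest; the diff's SocketAuditor sidesteps this by only enqueueing in its AuditRequest and doing I/O in a separate loop.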
@@ -0,0 +1,143 @@
//nolint:paralleltest,testpackage,revive,gocritic
package audit
import (
"context"
"io"
"log/slog"
"os"
"path/filepath"
"testing"
)
type mockAuditor struct {
onAudit func(req Request)
}
func (m *mockAuditor) AuditRequest(req Request) {
if m.onAudit != nil {
m.onAudit(req)
}
}
func TestSetupAuditor_DisabledAuditLogs(t *testing.T) {
t.Parallel()
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
ctx := context.Background()
auditor, err := SetupAuditor(ctx, logger, true, "")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
multi, ok := auditor.(*MultiAuditor)
if !ok {
t.Fatalf("expected *MultiAuditor, got %T", auditor)
}
if len(multi.auditors) != 1 {
t.Errorf("expected 1 auditor, got %d", len(multi.auditors))
}
if _, ok := multi.auditors[0].(*LogAuditor); !ok {
t.Errorf("expected *LogAuditor, got %T", multi.auditors[0])
}
}
func TestSetupAuditor_EmptySocketPath(t *testing.T) {
t.Parallel()
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
ctx := context.Background()
_, err := SetupAuditor(ctx, logger, false, "")
if err == nil {
t.Fatal("expected error for empty socket path, got nil")
}
}
func TestSetupAuditor_SocketDoesNotExist(t *testing.T) {
t.Parallel()
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
ctx := context.Background()
auditor, err := SetupAuditor(ctx, logger, false, "/nonexistent/socket/path")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
multi, ok := auditor.(*MultiAuditor)
if !ok {
t.Fatalf("expected *MultiAuditor, got %T", auditor)
}
if len(multi.auditors) != 1 {
t.Errorf("expected 1 auditor, got %d", len(multi.auditors))
}
if _, ok := multi.auditors[0].(*LogAuditor); !ok {
t.Errorf("expected *LogAuditor, got %T", multi.auditors[0])
}
}
func TestSetupAuditor_SocketExists(t *testing.T) {
t.Parallel()
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Create a temporary file to simulate the socket existing
tmpDir := t.TempDir()
socketPath := filepath.Join(tmpDir, "test.sock")
f, err := os.Create(socketPath)
if err != nil {
t.Fatalf("failed to create temp file: %v", err)
}
err = f.Close()
if err != nil {
t.Fatalf("failed to close temp file: %v", err)
}
auditor, err := SetupAuditor(ctx, logger, false, socketPath)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
multi, ok := auditor.(*MultiAuditor)
if !ok {
t.Fatalf("expected *MultiAuditor, got %T", auditor)
}
if len(multi.auditors) != 2 {
t.Errorf("expected 2 auditors, got %d", len(multi.auditors))
}
if _, ok := multi.auditors[0].(*LogAuditor); !ok {
t.Errorf("expected first auditor to be *LogAuditor, got %T", multi.auditors[0])
}
if _, ok := multi.auditors[1].(*SocketAuditor); !ok {
t.Errorf("expected second auditor to be *SocketAuditor, got %T", multi.auditors[1])
}
}
func TestMultiAuditor_AuditRequest(t *testing.T) {
t.Parallel()
var called1, called2 bool
auditor1 := &mockAuditor{onAudit: func(req Request) { called1 = true }}
auditor2 := &mockAuditor{onAudit: func(req Request) { called2 = true }}
multi := NewMultiAuditor(auditor1, auditor2)
multi.AuditRequest(Request{Method: "GET", URL: "https://example.com"})
if !called1 {
t.Error("expected first auditor to be called")
}
if !called2 {
t.Error("expected second auditor to be called")
}
}
@@ -0,0 +1,15 @@
//nolint:revive,gocritic,errname,unconvert
package audit
type Auditor interface {
AuditRequest(req Request)
}
// Request represents information about an HTTP request for auditing
type Request struct {
Method string
URL string // The fully qualified request URL (scheme, domain, optional path).
Host string
Allowed bool
Rule string // The rule that matched (if any)
}
@@ -0,0 +1,247 @@
//nolint:revive,gocritic,errname,unconvert
package audit
import (
"context"
"log/slog"
"net"
"time"
"golang.org/x/xerrors"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/timestamppb"
"github.com/coder/coder/v2/agent/boundarylogproxy/codec"
agentproto "github.com/coder/coder/v2/agent/proto"
)
const (
// The batch size and timer duration are chosen to provide reasonable responsiveness
// for consumers of the aggregated logs while still minimizing the agent <-> coderd
// network I/O when an AI agent is actively making network requests.
defaultBatchSize = 10
defaultBatchTimerDuration = 5 * time.Second
)
// SocketAuditor implements the Auditor interface. It sends logs to the
// workspace agent's boundary log proxy socket. It queues logs and sends
// them in batches using a batch size and timer. The internal queue operates
// as a FIFO, i.e., logs are sent in the order they are received and dropped
// if the queue is full.
type SocketAuditor struct {
dial func() (net.Conn, error)
logger *slog.Logger
logCh chan *agentproto.BoundaryLog
batchSize int
batchTimerDuration time.Duration
socketPath string
// onFlushAttempt is called after each flush attempt (intended for testing).
onFlushAttempt func()
}
// NewSocketAuditor creates a new SocketAuditor that sends logs to the agent's
// boundary log proxy socket at the given socketPath once SocketAuditor.Loop
// is called.
func NewSocketAuditor(logger *slog.Logger, socketPath string) *SocketAuditor {
// This channel buffer size intends to allow enough buffering for bursty
// AI agent network requests while a batch is being sent to the workspace
// agent.
const logChBufSize = 2 * defaultBatchSize
return &SocketAuditor{
dial: func() (net.Conn, error) {
return net.Dial("unix", socketPath)
},
logger: logger,
logCh: make(chan *agentproto.BoundaryLog, logChBufSize),
batchSize: defaultBatchSize,
batchTimerDuration: defaultBatchTimerDuration,
socketPath: socketPath,
}
}
// AuditRequest implements the Auditor interface. It queues the log to be sent to the
// agent in a batch.
func (s *SocketAuditor) AuditRequest(req Request) {
httpReq := &agentproto.BoundaryLog_HttpRequest{
Method: req.Method,
Url: req.URL,
}
// Only include the matched rule for allowed requests. Boundary is deny by
// default, so rules are what allow requests.
if req.Allowed {
httpReq.MatchedRule = req.Rule
}
log := &agentproto.BoundaryLog{
Allowed: req.Allowed,
Time: timestamppb.Now(),
Resource: &agentproto.BoundaryLog_HttpRequest_{HttpRequest: httpReq},
}
select {
case s.logCh <- log:
default:
s.logger.Warn("audit log dropped, channel full")
}
}
// flushErr represents an error from flush, distinguishing between
// permanent errors (bad data) and transient errors (network issues).
type flushErr struct {
err error
permanent bool
}
func (e *flushErr) Error() string { return e.err.Error() }
// flush sends the current batch of logs to the given connection.
func flush(conn net.Conn, logs []*agentproto.BoundaryLog) *flushErr {
if len(logs) == 0 {
return nil
}
req := &agentproto.ReportBoundaryLogsRequest{
Logs: logs,
}
data, err := proto.Marshal(req)
if err != nil {
return &flushErr{err: err, permanent: true}
}
err = codec.WriteFrame(conn, codec.TagV1, data)
if err != nil {
return &flushErr{err: xerrors.Errorf("write frame: %w", err)}
}
return nil
}
// Loop handles the I/O to send audit logs to the agent.
func (s *SocketAuditor) Loop(ctx context.Context) {
var conn net.Conn
batch := make([]*agentproto.BoundaryLog, 0, s.batchSize)
t := time.NewTimer(0)
t.Stop()
// connect attempts to establish a connection to the socket.
connect := func() {
if conn != nil {
return
}
var err error
conn, err = s.dial()
if err != nil {
s.logger.Warn("failed to connect to audit socket", "path", s.socketPath, "error", err)
conn = nil
}
}
// closeConn closes the current connection if open.
closeConn := func() {
if conn != nil {
_ = conn.Close()
conn = nil
}
}
// clearBatch nils out entries so they can be garbage collected, then resets
// the batch length while preserving the slice's backing array.
clearBatch := func() {
for i := range len(batch) {
batch[i] = nil
}
batch = batch[:0]
}
// doFlush flushes the batch and handles errors by reconnecting.
doFlush := func() {
t.Stop()
defer func() {
if s.onFlushAttempt != nil {
s.onFlushAttempt()
}
}()
if len(batch) == 0 {
return
}
connect()
if conn == nil {
// No connection: logs will be retried on next flush.
s.logger.Warn("no connection to flush; resetting batch timer",
"duration_sec", s.batchTimerDuration.Seconds(),
"batch_size", len(batch))
// Reset the timer so we aren't stuck waiting for the batch to fill
// or a new log to arrive before the next attempt.
t.Reset(s.batchTimerDuration)
return
}
if err := flush(conn, batch); err != nil {
if err.permanent {
// Data error: discard batch to avoid infinite retries.
s.logger.Warn("dropping batch due to data error on flush attempt",
"error", err, "batch_size", len(batch))
clearBatch()
} else {
// Network error: close connection but keep batch and retry.
s.logger.Warn("failed to flush audit logs; resetting batch timer to reconnect and retry",
"error", err, "duration_sec", s.batchTimerDuration.Seconds(),
"batch_size", len(batch))
closeConn()
// Reset the timer so we aren't stuck waiting for a new log to
// arrive before the next attempt.
t.Reset(s.batchTimerDuration)
}
return
}
clearBatch()
}
connect()
for {
select {
case <-ctx.Done():
// Drain any pending logs before the last flush. Not concerned about
// growing the batch slice here since we're exiting.
drain:
for {
select {
case log := <-s.logCh:
batch = append(batch, log)
default:
break drain
}
}
doFlush()
closeConn()
return
case <-t.C:
doFlush()
case log := <-s.logCh:
// If batch is at capacity, attempt flushing first and drop the log if
// the batch is still full.
if len(batch) >= s.batchSize {
doFlush()
if len(batch) >= s.batchSize {
s.logger.Warn("audit log dropped, batch full")
continue
}
}
batch = append(batch, log)
if len(batch) == 1 {
t.Reset(s.batchTimerDuration)
}
if len(batch) >= s.batchSize {
doFlush()
}
}
}
}
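The non-blocking enqueue in AuditRequest (a `select` with a `default` arm on a buffered channel) is what keeps auditing off the request's hot path, at the cost of dropping logs under sustained overload. A minimal standalone sketch of that pattern (illustrative names, not the diff's code):

```go
package main

import "fmt"

// tryEnqueue mirrors the non-blocking send used by AuditRequest: a buffered
// channel absorbs bursts, and when it is full the message is dropped rather
// than blocking the caller.
func tryEnqueue(ch chan string, msg string) bool {
	select {
	case ch <- msg:
		return true
	default:
		return false // buffer full: drop
	}
}

func main() {
	ch := make(chan string, 2)
	fmt.Println(tryEnqueue(ch, "a")) // true
	fmt.Println(tryEnqueue(ch, "b")) // true
	fmt.Println(tryEnqueue(ch, "c")) // false: buffer full, dropped
}
```

Sizing the buffer at a small multiple of the batch size (the diff uses 2× defaultBatchSize) gives the consumer one batch of headroom while a previous batch is in flight.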
@@ -0,0 +1,373 @@
//nolint:paralleltest,testpackage,revive,gocritic
package audit
import (
"context"
"io"
"log/slog"
"net"
"sync"
"sync/atomic"
"testing"
"time"
"golang.org/x/xerrors"
"google.golang.org/protobuf/proto"
"github.com/coder/coder/v2/agent/boundarylogproxy/codec"
agentproto "github.com/coder/coder/v2/agent/proto"
)
func TestSocketAuditor_AuditRequest_QueuesLog(t *testing.T) {
t.Parallel()
auditor := setupSocketAuditor(t)
auditor.AuditRequest(Request{
Method: "GET",
URL: "https://example.com",
Host: "example.com",
Allowed: true,
Rule: "allow-all",
})
select {
case log := <-auditor.logCh:
if !log.Allowed {
t.Errorf("expected Allowed=true, got %v", log.Allowed)
}
httpReq := log.GetHttpRequest()
if httpReq == nil {
t.Fatal("expected HttpRequest, got nil")
}
if httpReq.Method != "GET" {
t.Errorf("expected Method=GET, got %s", httpReq.Method)
}
if httpReq.Url != "https://example.com" {
t.Errorf("expected URL=https://example.com, got %s", httpReq.Url)
}
// Rule should be set for allowed requests
if httpReq.MatchedRule != "allow-all" {
t.Errorf("unexpected MatchedRule %v", httpReq.MatchedRule)
}
default:
t.Fatal("expected log in channel, got none")
}
}
func TestSocketAuditor_AuditRequest_AllowIncludesRule(t *testing.T) {
t.Parallel()
auditor := setupSocketAuditor(t)
auditor.AuditRequest(Request{
Method: "POST",
URL: "https://evil.com",
Host: "evil.com",
Allowed: true,
Rule: "allow-evil",
})
select {
case log := <-auditor.logCh:
if !log.Allowed {
t.Errorf("expected Allowed=true, got %v", log.Allowed)
}
httpReq := log.GetHttpRequest()
if httpReq == nil {
t.Fatal("expected HttpRequest, got nil")
}
if httpReq.MatchedRule != "allow-evil" {
t.Errorf("expected MatchedRule=allow-evil, got %s", httpReq.MatchedRule)
}
default:
t.Fatal("expected log in channel, got none")
}
}
func TestSocketAuditor_AuditRequest_DropsWhenFull(t *testing.T) {
t.Parallel()
auditor := setupSocketAuditor(t)
// Fill the channel (capacity is 2*batchSize = 20)
for i := 0; i < 2*auditor.batchSize; i++ {
auditor.AuditRequest(Request{Method: "GET", URL: "https://example.com", Allowed: true})
}
// This should not block and drop the log
auditor.AuditRequest(Request{Method: "GET", URL: "https://dropped.com", Allowed: true})
// Drain the channel and verify all entries are from the original batch (dropped.com was dropped)
for i := 0; i < 2*auditor.batchSize; i++ {
v := <-auditor.logCh
resource, ok := v.Resource.(*agentproto.BoundaryLog_HttpRequest_)
if !ok {
t.Fatal("unexpected resource type")
}
if resource.HttpRequest.Url != "https://example.com" {
t.Errorf("expected batch to be FIFO, got %s", resource.HttpRequest.Url)
}
}
select {
case v := <-auditor.logCh:
t.Errorf("expected empty channel, got %v", v)
default:
}
}
func TestSocketAuditor_Loop_FlushesOnBatchSize(t *testing.T) {
t.Parallel()
auditor, serverConn := setupTestAuditor(t)
auditor.batchTimerDuration = time.Hour // Ensure timer doesn't interfere with the test
received := make(chan *agentproto.ReportBoundaryLogsRequest, 1)
go readFromConn(t, serverConn, received)
go auditor.Loop(t.Context())
// Send exactly a full batch of logs to trigger a flush
for i := 0; i < auditor.batchSize; i++ {
auditor.AuditRequest(Request{Method: "GET", URL: "https://example.com", Allowed: true})
}
select {
case req := <-received:
if len(req.Logs) != auditor.batchSize {
t.Errorf("expected %d logs, got %d", auditor.batchSize, len(req.Logs))
}
case <-time.After(5 * time.Second):
t.Fatal("timeout waiting for flush")
}
}
func TestSocketAuditor_Loop_FlushesOnTimer(t *testing.T) {
t.Parallel()
auditor, serverConn := setupTestAuditor(t)
auditor.batchTimerDuration = 3 * time.Second
received := make(chan *agentproto.ReportBoundaryLogsRequest, 1)
go readFromConn(t, serverConn, received)
go auditor.Loop(t.Context())
// A single log should start the timer
auditor.AuditRequest(Request{Method: "GET", URL: "https://example.com", Allowed: true})
// Should flush after the timer duration elapses
select {
case req := <-received:
if len(req.Logs) != 1 {
t.Errorf("expected 1 log, got %d", len(req.Logs))
}
case <-time.After(2 * auditor.batchTimerDuration):
t.Fatal("timeout waiting for timer flush")
}
}
func TestSocketAuditor_Loop_FlushesOnContextCancel(t *testing.T) {
t.Parallel()
auditor, serverConn := setupTestAuditor(t)
// Make the timer long to always exercise the context cancellation case
auditor.batchTimerDuration = time.Hour
received := make(chan *agentproto.ReportBoundaryLogsRequest, 1)
go readFromConn(t, serverConn, received)
ctx, cancel := context.WithCancel(t.Context())
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
auditor.Loop(ctx)
}()
// Send a log but don't fill the batch
auditor.AuditRequest(Request{Method: "GET", URL: "https://example.com", Allowed: true})
cancel()
select {
case req := <-received:
if len(req.Logs) != 1 {
t.Errorf("expected 1 log, got %d", len(req.Logs))
}
case <-time.After(5 * time.Second):
t.Fatal("timeout waiting for shutdown flush")
}
wg.Wait()
}
func TestSocketAuditor_Loop_RetriesOnConnectionFailure(t *testing.T) {
t.Parallel()
clientConn, serverConn := net.Pipe()
t.Cleanup(func() {
err := clientConn.Close()
if err != nil {
t.Errorf("close client connection: %v", err)
}
err = serverConn.Close()
if err != nil {
t.Errorf("close server connection: %v", err)
}
})
var dialCount atomic.Int32
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
auditor := &SocketAuditor{
dial: func() (net.Conn, error) {
// First dial attempt fails, subsequent ones succeed
if dialCount.Add(1) == 1 {
return nil, xerrors.New("connection refused")
}
return clientConn, nil
},
logger: logger,
logCh: make(chan *agentproto.BoundaryLog, 2*defaultBatchSize),
batchSize: defaultBatchSize,
batchTimerDuration: time.Hour, // Ensure timer doesn't interfere with the test
}
// Set up hook to detect flush attempts
flushed := make(chan struct{}, 1)
auditor.onFlushAttempt = func() {
select {
case flushed <- struct{}{}:
default:
}
}
received := make(chan *agentproto.ReportBoundaryLogsRequest, 1)
go readFromConn(t, serverConn, received)
go auditor.Loop(t.Context())
// Send batchSize+1 logs so we can verify the last log here gets dropped.
for i := 0; i < auditor.batchSize+1; i++ {
auditor.AuditRequest(Request{Method: "GET", URL: "https://servernotup.com", Allowed: true})
}
// Wait for the first flush attempt (which will fail)
select {
case <-flushed:
case <-time.After(5 * time.Second):
t.Fatal("timeout waiting for first flush attempt")
}
// Send one more log - batch is at capacity, so this triggers flush first
// The flush succeeds (dial now works), sending the retained batch.
auditor.AuditRequest(Request{Method: "POST", URL: "https://serverup.com", Allowed: true})
// Should receive the retained batch (the new log goes into a fresh batch)
select {
case req := <-received:
if len(req.Logs) != auditor.batchSize {
t.Errorf("expected %d logs from retry, got %d", auditor.batchSize, len(req.Logs))
}
for _, log := range req.Logs {
resource, ok := log.Resource.(*agentproto.BoundaryLog_HttpRequest_)
if !ok {
t.Fatal("unexpected resource type")
}
if resource.HttpRequest.Url != "https://servernotup.com" {
t.Errorf("expected URL https://servernotup.com, got %v", resource.HttpRequest.Url)
}
}
case <-time.After(5 * time.Second):
t.Fatal("timeout waiting for retry flush")
}
}
func TestFlush_EmptyBatch(t *testing.T) {
t.Parallel()
err := flush(nil, nil)
if err != nil {
t.Errorf("expected nil error for empty batch, got %v", err)
}
err = flush(nil, []*agentproto.BoundaryLog{})
if err != nil {
t.Errorf("expected nil error for empty slice, got %v", err)
}
}
// setupSocketAuditor creates a SocketAuditor for tests that only exercise
// the queueing behavior (no connection needed).
func setupSocketAuditor(t *testing.T) *SocketAuditor {
t.Helper()
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
return &SocketAuditor{
dial: func() (net.Conn, error) {
return nil, xerrors.New("not connected")
},
logger: logger,
logCh: make(chan *agentproto.BoundaryLog, 2*defaultBatchSize),
batchSize: defaultBatchSize,
batchTimerDuration: defaultBatchTimerDuration,
}
}
// setupTestAuditor creates a SocketAuditor with an in-memory connection using
// net.Pipe(). Returns the auditor and the server-side connection for reading.
func setupTestAuditor(t *testing.T) (*SocketAuditor, net.Conn) {
t.Helper()
clientConn, serverConn := net.Pipe()
t.Cleanup(func() {
err := clientConn.Close()
if err != nil {
t.Error("Failed to close client connection", "error", err)
}
err = serverConn.Close()
if err != nil {
t.Error("Failed to close server connection", "error", err)
}
})
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
auditor := &SocketAuditor{
dial: func() (net.Conn, error) {
return clientConn, nil
},
logger: logger,
logCh: make(chan *agentproto.BoundaryLog, 2*defaultBatchSize),
batchSize: defaultBatchSize,
batchTimerDuration: defaultBatchTimerDuration,
}
return auditor, serverConn
}
// readFromConn reads length-prefixed protobuf messages from a connection and
// sends them to the received channel.
func readFromConn(t *testing.T, conn net.Conn, received chan<- *agentproto.ReportBoundaryLogsRequest) {
t.Helper()
buf := make([]byte, 1<<10)
for {
tag, data, err := codec.ReadFrame(conn, buf)
if err != nil {
return // connection closed
}
if tag != codec.TagV1 {
t.Errorf("invalid tag: %d", tag)
}
var req agentproto.ReportBoundaryLogsRequest
if err := proto.Unmarshal(data, &req); err != nil {
t.Errorf("failed to unmarshal: %v", err)
return
}
received <- &req
}
}
