Compare commits


15 Commits

Author SHA1 Message Date
Dean Sheather 7db167552a fix: avoid connection logging crashes in agent [2.25] (#20309)
# For release 2.25

- Ignore errors when reporting a connection from the server, just log
them instead
- Translate connection log IP `localhost` to `127.0.0.1` on both the
server and the agent
- Temporary fix: convert invalid IPs to `127.0.0.1` since the database
forbids NULL

Relates to #20194

(cherry picked from commit 03440f6ae2)
2025-10-16 01:56:16 +11:00
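The normalization described in the commit message above (translating `localhost` and unparseable addresses to `127.0.0.1` because the database forbids NULL IPs) can be sketched as a small standalone helper. This is an illustrative sketch only; `normalizeConnIP` is a hypothetical name, not the actual function in coder/coder.

```go
package main

import (
	"fmt"
	"net"
)

// normalizeConnIP is a hypothetical sketch of the behaviour the commit
// describes: translate "localhost" to "127.0.0.1", and fall back to
// "127.0.0.1" for anything that does not parse as an IP address.
func normalizeConnIP(raw string) string {
	if raw == "localhost" {
		raw = "127.0.0.1"
	}
	if net.ParseIP(raw) == nil {
		// The database schema forbids NULL IPs, so substitute loopback.
		return "127.0.0.1"
	}
	return raw
}

func main() {
	fmt.Println(normalizeConnIP("localhost"))   // 127.0.0.1
	fmt.Println(normalizeConnIP("not-an-ip"))   // 127.0.0.1
	fmt.Println(normalizeConnIP("192.168.0.5")) // 192.168.0.5
}
```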
Danielle Maywood f3db876772 fix: stop reading closed channel for /watch devcontainers endpoint (#19373) (#20095)
Fixes https://github.com/coder/coder/issues/19372
2025-10-01 16:32:39 -05:00
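The bug class behind "reading a closed channel" is worth spelling out: a receive from a closed Go channel returns immediately with the zero value, so a watch loop that ignores the `ok` flag spins forever emitting bogus updates. The sketch below illustrates the general pattern, not the actual endpoint code.

```go
package main

import "fmt"

// drainUpdates shows the defensive pattern: check the ok flag on every
// receive so the loop terminates once the channel is closed, instead of
// looping on zero values forever.
func drainUpdates(updates <-chan string) []string {
	var got []string
	for {
		u, ok := <-updates
		if !ok {
			// Channel closed: stop watching.
			return got
		}
		got = append(got, u)
	}
}

func main() {
	ch := make(chan string, 2)
	ch <- "created"
	ch <- "started"
	close(ch)
	fmt.Println(drainUpdates(ch)) // [created started]
}
```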
Cian Johnston a9bdbdb004 fix(coderd): ensure agent WebSocket conn is cleaned up (#19711) (#20093)
Co-authored-by: Danielle Maywood <danielle@themaywoods.com>
2025-10-01 15:39:20 -05:00
Stephen Kirby f51e22da5c fix: pin pg_dump version when generating schema (#19696) (#19697)
Co-authored-by: Ethan <39577870+ethanndickson@users.noreply.github.com>
2025-09-03 23:11:45 -05:00
Cian Johnston a79adb1558 fix(coderd): add audit log on creating a new session key (#19672) (#19684)
Fixes https://github.com/coder/coder/issues/19671
(re-?)Adds an audit log entry when an API key is created via `coder
login`.

NOTE: This does _not_ backfill audit logs.


(cherry picked from commit bd6e91eeab)
2025-09-03 14:33:45 +01:00
Cian Johnston ec660907fa fix: expire token for prebuilds user when regenerating session token (#19667) (#19668)
* provisionerdserver: Expires prebuild user token for workspace, if it
exists, when regenerating session token.
* dbauthz: disallow prebuilds user from creating api keys
* dbpurge: added functionality to expire stale api keys owned by the
prebuilds user

(cherry picked from commit 06cbb2890f)
2025-09-03 09:23:02 +01:00
Jakub Domeracki ee8050986d chore: update the slim binaries upload from the build directory to the GCS bucket (#19521)
Updated the upload script to copy the slim binaries from the ./build
directory to the GCS bucket (instead of the ./site/out/bin directory)
2025-08-25 14:58:14 +02:00
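The updated upload step relies on a bash associative array mapping the published CLI name to the slim build artifact, iterated by key. A minimal sketch of that pattern, with illustrative paths and names:

```shell
#!/usr/bin/env bash
# Requires bash 4+ for associative arrays. Names here are illustrative.
set -euo pipefail

version="2.25.0"
declare -A binaries
binaries["coder-linux-amd64"]="coder-slim_${version}_linux_amd64"
binaries["coder-darwin-arm64"]="coder-slim_${version}_darwin_arm64"

# "${!binaries[@]}" expands to the KEYS; "${binaries[$cli_name]}" looks
# up the value for one key.
for cli_name in "${!binaries[@]}"; do
  slim_binary="${binaries[$cli_name]}"
  echo "would upload ./build/${slim_binary} as ${cli_name}"
done
```

Note the `!` in `"${!binaries[@]}"`: without it the loop iterates over the values, which is exactly the distinction the new upload loop depends on.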
Rowan Smith ed39f4c92c chore: fix typo in clientNetcheckSummary for support bundle command (#19482)
(cherry picked from commit 33708413b8)

bringing in https://github.com/coder/coder/pull/19441 to the 2.25
release branch to fix a bug in the `support bundle` command.
2025-08-22 13:38:02 +10:00
Ethan d324cf7fa8 ci: fix gcp service accounts (#19312) (#19315)
Backport of #19312
2025-08-12 22:31:07 +10:00
Jakub Domeracki 3bf6a00876 chore: revert CLI binary publishing for releases.coder.com (#19236) 2025-08-07 11:06:14 -05:00
Jakub Domeracki 9eb5fc695e chore: fix CLI binary publishing for releases.coder.com (#19230) 2025-08-07 10:41:48 -05:00
Spike Curtis 079328d874 fix: upgrade to 1.24.6 to fix race in lib/pq queries (#19214) (#19218)
THIS IS A SECURITY FIX - cherry picked from #19214 

upgrade to go 1.24.6 to avoid https://github.com/golang/go/issues/74831
(CVE-2025-47907)

Also points to a new version of our lib/pq fork that worked around the
Go issue, which should restore better performance.
2025-08-07 15:18:55 +04:00
Cian Johnston e68ffe85b7 ci: bump xcode version to 16.1.0 (#19125) (#19221)
(cherry picked from commit 0d7cc5c156)

required for CI to pass with new runner version
2025-08-07 11:40:40 +01:00
Stephen Kirby e6ec95757a Cherry-pick for release 2.25 (#19169)
Co-authored-by: Sas Swart <sas.swart.cdk@gmail.com>
Co-authored-by: Danielle Maywood <danielle@themaywoods.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Ethan <39577870+ethanndickson@users.noreply.github.com>
Co-authored-by: Hugo Dutka <hugo@coder.com>
Co-authored-by: Thomas Kosiewski <tk@coder.com>
Co-authored-by: Cian Johnston <cian@coder.com>
2025-08-05 11:50:51 -05:00
gcp-cherry-pick-bot[bot] f1cf81c10b chore: add openai icon (cherry-pick #19118) (#19176)
Co-authored-by: ケイラ <mckayla@hey.com>
Co-authored-by: 35C4n0r <70096901+35C4n0r@users.noreply.github.com>
2025-08-05 12:17:53 +05:00
46 changed files with 1298 additions and 253 deletions
+1 -1
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
-default: "1.24.4"
+default: "1.24.6"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
+14 -9
@@ -256,8 +256,8 @@ jobs:
pushd /tmp/proto
curl -L -o protoc.zip https://github.com/protocolbuffers/protobuf/releases/download/v23.4/protoc-23.4-linux-x86_64.zip
unzip protoc.zip
-cp -r ./bin/* /usr/local/bin
-cp -r ./include /usr/local/bin/include
+sudo cp -r ./bin/* /usr/local/bin
+sudo cp -r ./include /usr/local/bin/include
popd
- name: make gen
@@ -340,6 +340,11 @@ jobs:
- name: Disable Spotlight Indexing
if: runner.os == 'macOS'
run: |
+enabled=$(sudo mdutil -a -s | grep "Indexing enabled" | wc -l)
+if [ $enabled -eq 0 ]; then
+echo "Spotlight indexing is already disabled"
+exit 0
+fi
sudo mdutil -a -i off
sudo mdutil -X /
sudo launchctl bootout system /System/Library/LaunchDaemons/com.apple.metadata.mds.plist
@@ -864,8 +869,8 @@ jobs:
pushd /tmp/proto
curl -L -o protoc.zip https://github.com/protocolbuffers/protobuf/releases/download/v23.4/protoc-23.4-linux-x86_64.zip
unzip protoc.zip
-cp -r ./bin/* /usr/local/bin
-cp -r ./include /usr/local/bin/include
+sudo cp -r ./bin/* /usr/local/bin
+sudo cp -r ./include /usr/local/bin/include
popd
- name: Setup Go
@@ -959,7 +964,7 @@ jobs:
- name: Switch XCode Version
uses: maxim-lobanov/setup-xcode@60606e260d2fc5762a71e64e74b2174e8ea3c8bd # v1.6.0
with:
-xcode-version: "16.0.0"
+xcode-version: "16.1.0"
- name: Setup Go
uses: ./.github/actions/setup-go
@@ -1118,8 +1123,8 @@ jobs:
id: gcloud_auth
uses: google-github-actions/auth@140bb5113ffb6b65a7e9b937a81fa96cf5064462 # v2.1.11
with:
-workload_identity_provider: ${{ secrets.GCP_CODE_SIGNING_WORKLOAD_ID_PROVIDER }}
-service_account: ${{ secrets.GCP_CODE_SIGNING_SERVICE_ACCOUNT }}
+workload_identity_provider: ${{ vars.GCP_CODE_SIGNING_WORKLOAD_ID_PROVIDER }}
+service_account: ${{ vars.GCP_CODE_SIGNING_SERVICE_ACCOUNT }}
token_format: "access_token"
- name: Setup GCloud SDK
@@ -1422,8 +1427,8 @@ jobs:
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@140bb5113ffb6b65a7e9b937a81fa96cf5064462 # v2.1.11
with:
-workload_identity_provider: projects/573722524737/locations/global/workloadIdentityPools/github/providers/github
-service_account: coder-ci@coder-dogfood.iam.gserviceaccount.com
+workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
+service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
- name: Set up Google Cloud SDK
uses: google-github-actions/setup-gcloud@6a7c903a70c8625ed6700fa299f5ddb4ca6022e9 # v2.1.5
+2 -2
@@ -131,8 +131,8 @@ jobs:
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@140bb5113ffb6b65a7e9b937a81fa96cf5064462 # v2.1.11
with:
-workload_identity_provider: projects/573722524737/locations/global/workloadIdentityPools/github/providers/github
-service_account: coder-ci@coder-dogfood.iam.gserviceaccount.com
+workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
+service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
- name: Terraform init and validate
run: |
+1 -1
@@ -420,7 +420,7 @@ jobs:
curl -fsSL "$URL" -o "${DEST}"
chmod +x "${DEST}"
"${DEST}" version
-mv "${DEST}" /usr/local/bin/coder
+sudo mv "${DEST}" /usr/local/bin/coder
- name: Create first user
if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
+20 -19
@@ -60,7 +60,7 @@ jobs:
- name: Switch XCode Version
uses: maxim-lobanov/setup-xcode@60606e260d2fc5762a71e64e74b2174e8ea3c8bd # v1.6.0
with:
-xcode-version: "16.0.0"
+xcode-version: "16.1.0"
- name: Setup Go
uses: ./.github/actions/setup-go
@@ -288,8 +288,8 @@ jobs:
id: gcloud_auth
uses: google-github-actions/auth@140bb5113ffb6b65a7e9b937a81fa96cf5064462 # v2.1.11
with:
-workload_identity_provider: ${{ secrets.GCP_CODE_SIGNING_WORKLOAD_ID_PROVIDER }}
-service_account: ${{ secrets.GCP_CODE_SIGNING_SERVICE_ACCOUNT }}
+workload_identity_provider: ${{ vars.GCP_CODE_SIGNING_WORKLOAD_ID_PROVIDER }}
+service_account: ${{ vars.GCP_CODE_SIGNING_SERVICE_ACCOUNT }}
token_format: "access_token"
- name: Setup GCloud SDK
@@ -641,21 +641,22 @@ jobs:
version="$(./scripts/version.sh)"
-binaries=(
-"coder-darwin-amd64"
-"coder-darwin-arm64"
-"coder-linux-amd64"
-"coder-linux-arm64"
-"coder-linux-armv7"
-"coder-windows-amd64.exe"
-"coder-windows-arm64.exe"
-)
+# Source array of slim binaries
+declare -A binaries
+binaries["coder-darwin-amd64"]="coder-slim_${version}_darwin_amd64"
+binaries["coder-darwin-arm64"]="coder-slim_${version}_darwin_arm64"
+binaries["coder-linux-amd64"]="coder-slim_${version}_linux_amd64"
+binaries["coder-linux-arm64"]="coder-slim_${version}_linux_arm64"
+binaries["coder-linux-armv7"]="coder-slim_${version}_linux_armv7"
+binaries["coder-windows-amd64.exe"]="coder-slim_${version}_windows_amd64.exe"
+binaries["coder-windows-arm64.exe"]="coder-slim_${version}_windows_arm64.exe"
-for binary in "${binaries[@]}"; do
-detached_signature="${binary}.asc"
-gcloud storage cp "./site/out/bin/${binary}" "gs://releases.coder.com/coder-cli/${version}/${binary}"
-gcloud storage cp "./site/out/bin/${detached_signature}" "gs://releases.coder.com/coder-cli/${version}/${detached_signature}"
-done
+for cli_name in "${!binaries[@]}"; do
+slim_binary="${binaries[$cli_name]}"
+detached_signature="${slim_binary}.asc"
+gcloud storage cp "./build/${slim_binary}" "gs://releases.coder.com/coder-cli/${version}/${cli_name}"
+gcloud storage cp "./build/${detached_signature}" "gs://releases.coder.com/coder-cli/${version}/${cli_name}.asc"
+done
- name: Publish release
run: |
@@ -698,8 +699,8 @@ jobs:
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@140bb5113ffb6b65a7e9b937a81fa96cf5064462 # v2.1.11
with:
-workload_identity_provider: ${{ secrets.GCP_WORKLOAD_ID_PROVIDER }}
-service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}
+workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
+service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
- name: Setup GCloud SDK
uses: google-github-actions/setup-gcloud@6a7c903a70c8625ed6700fa299f5ddb4ca6022e9 # 2.1.5
+14 -3
@@ -790,11 +790,15 @@ func (a *agent) reportConnectionsLoop(ctx context.Context, aAPI proto.DRPCAgentC
logger.Debug(ctx, "reporting connection")
_, err := aAPI.ReportConnection(ctx, payload)
if err != nil {
-return xerrors.Errorf("failed to report connection: %w", err)
+// Do not fail the loop if we fail to report a connection, just
+// log a warning.
+// Related to https://github.com/coder/coder/issues/20194
+logger.Warn(ctx, "failed to report connection to server", slog.Error(err))
+// no continue here, we still need to remove it from the slice
+} else {
+logger.Debug(ctx, "successfully reported connection")
}
-logger.Debug(ctx, "successfully reported connection")
// Remove the payload we sent.
a.reportConnectionsMu.Lock()
a.reportConnections[0] = nil // Release the pointer from the underlying array.
@@ -825,6 +829,13 @@ func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_T
ip = host
}
+// If the IP is "localhost" (which it can be in some cases), set it to
+// 127.0.0.1 instead.
+// Related to https://github.com/coder/coder/issues/20194
+if ip == "localhost" {
+ip = "127.0.0.1"
+}
a.reportConnectionsMu.Lock()
defer a.reportConnectionsMu.Unlock()
+24 -8
@@ -77,7 +77,8 @@ type API struct {
subAgentURL string
subAgentEnv []string
-projectDiscovery bool // If we should perform project discovery or not.
+projectDiscovery bool // If we should perform project discovery or not.
+discoveryAutostart bool // If we should autostart discovered projects.
ownerName string
workspaceName string
@@ -144,7 +145,8 @@ func WithCommandEnv(ce CommandEnv) Option {
strings.HasPrefix(s, "CODER_AGENT_TOKEN=") ||
strings.HasPrefix(s, "CODER_AGENT_AUTH=") ||
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_ENABLE=") ||
-strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE=")
+strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE=") ||
+strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=")
})
return shell, dir, env, nil
}
@@ -287,6 +289,14 @@ func WithProjectDiscovery(projectDiscovery bool) Option {
}
}
+// WithDiscoveryAutostart sets if the API should attempt to autostart
+// projects that have been discovered
+func WithDiscoveryAutostart(discoveryAutostart bool) Option {
+return func(api *API) {
+api.discoveryAutostart = discoveryAutostart
+}
+}
// ScriptLogger is an interface for sending devcontainer logs to the
// controlplane.
type ScriptLogger interface {
@@ -542,11 +552,13 @@ func (api *API) discoverDevcontainersInProject(projectPath string) error {
Container: nil,
}
-config, err := api.dccli.ReadConfig(api.ctx, workspaceFolder, path, []string{})
-if err != nil {
-logger.Error(api.ctx, "read project configuration", slog.Error(err))
-} else if config.Configuration.Customizations.Coder.AutoStart {
-dc.Status = codersdk.WorkspaceAgentDevcontainerStatusStarting
+if api.discoveryAutostart {
+config, err := api.dccli.ReadConfig(api.ctx, workspaceFolder, path, []string{})
+if err != nil {
+logger.Error(api.ctx, "read project configuration", slog.Error(err))
+} else if config.Configuration.Customizations.Coder.AutoStart {
+dc.Status = codersdk.WorkspaceAgentDevcontainerStatusStarting
+}
}
api.knownDevcontainers[workspaceFolder] = dc
@@ -751,7 +763,11 @@ func (api *API) broadcastUpdatesLocked() {
func (api *API) watchContainers(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
-conn, err := websocket.Accept(rw, r, nil)
+conn, err := websocket.Accept(rw, r, &websocket.AcceptOptions{
+// We want `NoContextTakeover` compression to balance improving
+// bandwidth cost/latency with minimal memory usage overhead.
+CompressionMode: websocket.CompressionNoContextTakeover,
+})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to upgrade connection to websocket.",
+70
@@ -3792,6 +3792,7 @@ func TestDevcontainerDiscovery(t *testing.T) {
agentcontainers.WithContainerCLI(&fakeContainerCLI{}),
agentcontainers.WithDevcontainerCLI(mDCCLI),
agentcontainers.WithProjectDiscovery(true),
+agentcontainers.WithDiscoveryAutostart(true),
)
api.Start()
defer api.Close()
@@ -3813,5 +3814,74 @@ func TestDevcontainerDiscovery(t *testing.T) {
// Then: We expect the mock infra to not fail.
})
}
t.Run("Disabled", func(t *testing.T) {
t.Parallel()
var (
ctx = testutil.Context(t, testutil.WaitShort)
logger = testutil.Logger(t)
mClock = quartz.NewMock(t)
mDCCLI = acmock.NewMockDevcontainerCLI(gomock.NewController(t))
fs = map[string]string{
"/home/coder/.git/HEAD": "",
"/home/coder/.devcontainer/devcontainer.json": "",
}
r = chi.NewRouter()
)
// We expect that neither `ReadConfig`, nor `Up` are called as we
// have explicitly disabled the agentcontainers API from attempting
// to autostart devcontainers that it discovers.
mDCCLI.EXPECT().ReadConfig(gomock.Any(),
"/home/coder",
"/home/coder/.devcontainer/devcontainer.json",
[]string{},
).Return(agentcontainers.DevcontainerConfig{
Configuration: agentcontainers.DevcontainerConfiguration{
Customizations: agentcontainers.DevcontainerCustomizations{
Coder: agentcontainers.CoderCustomization{
AutoStart: true,
},
},
},
}, nil).Times(0)
mDCCLI.EXPECT().Up(gomock.Any(),
"/home/coder",
"/home/coder/.devcontainer/devcontainer.json",
gomock.Any(),
).Return("", nil).Times(0)
api := agentcontainers.NewAPI(logger,
agentcontainers.WithClock(mClock),
agentcontainers.WithWatcher(watcher.NewNoop()),
agentcontainers.WithFileSystem(initFS(t, fs)),
agentcontainers.WithManifestInfo("owner", "workspace", "parent-agent", "/home/coder"),
agentcontainers.WithContainerCLI(&fakeContainerCLI{}),
agentcontainers.WithDevcontainerCLI(mDCCLI),
agentcontainers.WithProjectDiscovery(true),
agentcontainers.WithDiscoveryAutostart(false),
)
api.Start()
defer api.Close()
r.Mount("/", api.Routes())
// When: All expected dev containers have been found.
require.Eventuallyf(t, func() bool {
req := httptest.NewRequest(http.MethodGet, "/", nil).WithContext(ctx)
rec := httptest.NewRecorder()
r.ServeHTTP(rec, req)
got := codersdk.WorkspaceAgentListContainersResponse{}
err := json.NewDecoder(rec.Body).Decode(&got)
require.NoError(t, err)
return len(got.Devcontainers) >= 1
}, testutil.WaitShort, testutil.IntervalFast, "dev containers never found")
// Then: We expect the mock infra to not fail.
})
})
}
+26 -17
@@ -40,23 +40,24 @@ import (
func (r *RootCmd) workspaceAgent() *serpent.Command {
var (
auth string
logDir string
scriptDataDir string
pprofAddress string
noReap bool
sshMaxTimeout time.Duration
tailnetListenPort int64
prometheusAddress string
debugAddress string
slogHumanPath string
slogJSONPath string
slogStackdriverPath string
blockFileTransfer bool
agentHeaderCommand string
agentHeader []string
devcontainers bool
devcontainerProjectDiscovery bool
+devcontainerDiscoveryAutostart bool
)
cmd := &serpent.Command{
Use: "agent",
@@ -366,6 +367,7 @@ func (r *RootCmd) workspaceAgent() *serpent.Command {
DevcontainerAPIOptions: []agentcontainers.Option{
agentcontainers.WithSubAgentURL(r.agentURL.String()),
agentcontainers.WithProjectDiscovery(devcontainerProjectDiscovery),
+agentcontainers.WithDiscoveryAutostart(devcontainerDiscoveryAutostart),
},
})
@@ -519,6 +521,13 @@ func (r *RootCmd) workspaceAgent() *serpent.Command {
Description: "Allow the agent to search the filesystem for devcontainer projects.",
Value: serpent.BoolOf(&devcontainerProjectDiscovery),
},
+{
+Flag: "devcontainers-discovery-autostart-enable",
+Default: "false",
+Env: "CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE",
+Description: "Allow the agent to autostart devcontainer projects it discovers based on their configuration.",
+Value: serpent.BoolOf(&devcontainerDiscoveryAutostart),
+},
}
return cmd
+1 -1
@@ -251,7 +251,7 @@ func summarizeBundle(inv *serpent.Invocation, bun *support.Bundle) {
clientNetcheckSummary := bun.Network.Netcheck.Summarize("Client netcheck:", docsURL)
if len(clientNetcheckSummary) > 0 {
-cliui.Warn(inv.Stdout, "Networking issues detected:", deployHealthSummary...)
+cliui.Warn(inv.Stdout, "Networking issues detected:", clientNetcheckSummary...)
}
}
+4
@@ -33,6 +33,10 @@ OPTIONS:
--debug-address string, $CODER_AGENT_DEBUG_ADDRESS (default: 127.0.0.1:2113)
The bind address to serve a debug HTTP server.
+--devcontainers-discovery-autostart-enable bool, $CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE (default: false)
+Allow the agent to autostart devcontainer projects it discovers based
+on their configuration.
--devcontainers-enable bool, $CODER_AGENT_DEVCONTAINERS_ENABLE (default: true)
Allow the agent to automatically detect running devcontainers.
+24 -1
@@ -3,9 +3,11 @@ package agentapi
import (
"context"
"database/sql"
"net"
"sync/atomic"
"github.com/google/uuid"
"github.com/sqlc-dev/pqtype"
"golang.org/x/xerrors"
"google.golang.org/protobuf/types/known/emptypb"
@@ -61,6 +63,27 @@ func (a *ConnLogAPI) ReportConnection(ctx context.Context, req *agentproto.Repor
return nil, xerrors.Errorf("get workspace by agent id: %w", err)
}
+// Some older clients may incorrectly report "localhost" as the IP address.
+// Related to https://github.com/coder/coder/issues/20194
+logIPRaw := req.GetConnection().GetIp()
+if logIPRaw == "localhost" {
+logIPRaw = "127.0.0.1"
+}
+// TEMPORARY FIX for https://github.com/coder/coder/issues/20194
+logIP := database.ParseIP(logIPRaw)
+if !logIP.Valid {
+// In older versions of Coder, NULL IPs are not permitted in the DB, so
+// use 127.0.0.1 instead.
+logIP = pqtype.Inet{
+IPNet: net.IPNet{
+IP: net.IPv4(127, 0, 0, 1),
+Mask: net.CIDRMask(32, 32),
+},
+Valid: true,
+}
+}
reason := req.GetConnection().GetReason()
connLogger := *a.ConnectionLogger.Load()
err = connLogger.Upsert(ctx, database.UpsertConnectionLogParams{
@@ -73,7 +96,7 @@ func (a *ConnLogAPI) ReportConnection(ctx context.Context, req *agentproto.Repor
AgentName: workspaceAgent.Name,
Type: connectionType,
Code: code,
-Ip: database.ParseIP(req.GetConnection().GetIp()),
+Ip: logIP,
ConnectionID: uuid.NullUUID{
UUID: connectionID,
Valid: true,
+5
@@ -110,6 +110,11 @@ func TestConnectionLog(t *testing.T) {
mDB := dbmock.NewMockStore(gomock.NewController(t))
mDB.EXPECT().GetWorkspaceByAgentID(gomock.Any(), agent.ID).Return(workspace, nil)
+// TEMPORARY FIX for https://github.com/coder/coder/issues/20194
+if tt.ip == "" {
+tt.ip = "127.0.0.1"
+}
api := &agentapi.ConnLogAPI{
ConnectionLogger: asAtomicPointer[connectionlog.ConnectionLogger](connLogger),
Database: mDB,
+33 -3
@@ -12,6 +12,8 @@ import (
"github.com/moby/moby/pkg/namesgenerator"
"golang.org/x/xerrors"
"cdr.dev/slog"
"github.com/coder/coder/v2/coderd/apikey"
"github.com/coder/coder/v2/coderd/audit"
"github.com/coder/coder/v2/coderd/database"
@@ -56,6 +58,14 @@ func (api *API) postToken(rw http.ResponseWriter, r *http.Request) {
return
}
+// TODO(Cian): System users technically just have the 'member' role
+// and we don't want to disallow all members from creating API keys.
+if user.IsSystem {
+api.Logger.Warn(ctx, "disallowed creating api key for system user", slog.F("user_id", user.ID))
+httpapi.Forbidden(rw)
+return
+}
scope := database.APIKeyScopeAll
if scope != "" {
scope = database.APIKeyScope(createToken.Scope)
@@ -121,10 +131,29 @@ func (api *API) postToken(rw http.ResponseWriter, r *http.Request) {
// @Success 201 {object} codersdk.GenerateAPIKeyResponse
// @Router /users/{user}/keys [post]
func (api *API) postAPIKey(rw http.ResponseWriter, r *http.Request) {
-ctx := r.Context()
-user := httpmw.UserParam(r)
+var (
+ctx = r.Context()
+user = httpmw.UserParam(r)
+auditor = api.Auditor.Load()
+aReq, commitAudit = audit.InitRequest[database.APIKey](rw, &audit.RequestParams{
+Audit: *auditor,
+Log: api.Logger,
+Request: r,
+Action: database.AuditActionCreate,
+})
+)
+aReq.Old = database.APIKey{}
+defer commitAudit()
-cookie, _, err := api.createAPIKey(ctx, apikey.CreateParams{
+// TODO(Cian): System users technically just have the 'member' role
+// and we don't want to disallow all members from creating API keys.
+if user.IsSystem {
+api.Logger.Warn(ctx, "disallowed creating api key for system user", slog.F("user_id", user.ID))
+httpapi.Forbidden(rw)
+return
+}
+cookie, key, err := api.createAPIKey(ctx, apikey.CreateParams{
UserID: user.ID,
DefaultLifetime: api.DeploymentValues.Sessions.DefaultTokenDuration.Value(),
LoginType: database.LoginTypePassword,
@@ -138,6 +167,7 @@ func (api *API) postAPIKey(rw http.ResponseWriter, r *http.Request) {
return
}
+aReq.New = *key
// We intentionally do not set the cookie on the response here.
// Setting the cookie will couple the browser session to the API
// key we return here, meaning logging out of the website would
+54 -2
@@ -2,6 +2,7 @@ package coderd_test
import (
"context"
"encoding/json"
"net/http"
"strings"
"testing"
@@ -13,8 +14,10 @@ import (
"github.com/coder/coder/v2/coderd/audit"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
"github.com/coder/serpent"
@@ -301,14 +304,32 @@ func TestSessionExpiry(t *testing.T) {
func TestAPIKey_OK(t *testing.T) {
t.Parallel()
+// Given: a deployment with auditing enabled
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
-client := coderdtest.New(t, &coderdtest.Options{IncludeProvisionerDaemon: true})
-_ = coderdtest.CreateFirstUser(t, client)
+auditor := audit.NewMock()
+client := coderdtest.New(t, &coderdtest.Options{Auditor: auditor})
+owner := coderdtest.CreateFirstUser(t, client)
+auditor.ResetLogs()
+// When: an API key is created
res, err := client.CreateAPIKey(ctx, codersdk.Me)
require.NoError(t, err)
require.Greater(t, len(res.Key), 2)
+// Then: an audit log is generated
+als := auditor.AuditLogs()
+require.Len(t, als, 1)
+al := als[0]
+assert.Equal(t, owner.UserID, al.UserID)
+assert.Equal(t, database.AuditActionCreate, al.Action)
+assert.Equal(t, database.ResourceTypeApiKey, al.ResourceType)
+// Then: the diff MUST NOT contain the generated key.
+raw, err := json.Marshal(al)
+require.NoError(t, err)
+require.NotContains(t, res.Key, string(raw))
}
func TestAPIKey_Deleted(t *testing.T) {
@@ -351,3 +372,34 @@ func TestAPIKey_SetDefault(t *testing.T) {
require.NoError(t, err)
require.EqualValues(t, dc.Sessions.DefaultTokenDuration.Value().Seconds(), apiKey1.LifetimeSeconds)
}
func TestAPIKey_PrebuildsNotAllowed(t *testing.T) {
t.Parallel()
db, pubsub := dbtestutil.NewDB(t)
dc := coderdtest.DeploymentValues(t)
dc.Sessions.DefaultTokenDuration = serpent.Duration(time.Hour * 12)
client := coderdtest.New(t, &coderdtest.Options{
Database: db,
Pubsub: pubsub,
DeploymentValues: dc,
})
ctx := testutil.Context(t, testutil.WaitLong)
// Given: an existing api token for the prebuilds user
_, prebuildsToken := dbgen.APIKey(t, db, database.APIKey{
UserID: database.PrebuildsSystemUserID,
})
client.SetSessionToken(prebuildsToken)
// When: the prebuilds user tries to create an API key
_, err := client.CreateAPIKey(ctx, database.PrebuildsSystemUserID.String())
// Then: denied.
require.ErrorContains(t, err, httpapi.ResourceForbiddenResponse.Message)
// When: the prebuilds user tries to create a token
_, err = client.CreateToken(ctx, database.PrebuildsSystemUserID.String(), codersdk.CreateTokenRequest{})
// Then: also denied.
require.ErrorContains(t, err, httpapi.ResourceForbiddenResponse.Message)
}
+15
@@ -1725,6 +1725,13 @@ func (q *querier) EnqueueNotificationMessage(ctx context.Context, arg database.E
return q.db.EnqueueNotificationMessage(ctx, arg)
}
+func (q *querier) ExpirePrebuildsAPIKeys(ctx context.Context, now time.Time) error {
+if err := q.authorizeContext(ctx, policy.ActionDelete, rbac.ResourceApiKey); err != nil {
+return err
+}
+return q.db.ExpirePrebuildsAPIKeys(ctx, now)
+}
func (q *querier) FavoriteWorkspace(ctx context.Context, id uuid.UUID) error {
fetch := func(ctx context.Context, id uuid.UUID) (database.Workspace, error) {
return q.db.GetWorkspaceByID(ctx, id)
@@ -3623,6 +3630,14 @@ func (q *querier) HasTemplateVersionsWithAITask(ctx context.Context) (bool, erro
}
func (q *querier) InsertAPIKey(ctx context.Context, arg database.InsertAPIKeyParams) (database.APIKey, error) {
+// TODO(Cian): ideally this would be encoded in the policy, but system users are just members and we
+// don't currently have a capability to conditionally deny creating resources by owner ID in a role.
+// We also need to enrich rbac.Actor with IsSystem so that we can distinguish all system users.
+// For now, there is only one system user (prebuilds).
+if act, ok := ActorFromContext(ctx); ok && act.ID == database.PrebuildsSystemUserID.String() {
+return database.APIKey{}, logNotAuthorizedError(ctx, q.log, NotAuthorizedError{Err: xerrors.Errorf("prebuild user may not create api keys")})
+}
return insert(q.log, q.auth,
rbac.ResourceApiKey.WithOwner(arg.UserID.String()),
q.db.InsertAPIKey)(ctx, arg)
+21
@@ -14,14 +14,17 @@ import (
"github.com/google/uuid"
"github.com/sqlc-dev/pqtype"
"github.com/stretchr/testify/require"
"go.uber.org/mock/gomock"
"golang.org/x/xerrors"
"cdr.dev/slog"
"cdr.dev/slog/sloggers/slogtest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/db2sdk"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbmock"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/notifications"
@@ -1681,6 +1684,9 @@ func (s *MethodTestSuite) TestUser() {
u := dbgen.User(s.T(), db, database.User{})
check.Args(u.ID).Asserts(rbac.ResourceApiKey.WithOwner(u.ID.String()), policy.ActionDelete).Returns()
}))
+s.Run("ExpirePrebuildsAPIKeys", s.Subtest(func(db database.Store, check *expects) {
+check.Args(dbtime.Now()).Asserts(rbac.ResourceApiKey, policy.ActionDelete).Returns()
+}))
s.Run("GetQuotaAllowanceForUser", s.Subtest(func(db database.Store, check *expects) {
u := dbgen.User(s.T(), db, database.User{})
check.Args(database.GetQuotaAllowanceForUserParams{
@@ -5845,3 +5851,18 @@ func (s *MethodTestSuite) TestAuthorizePrebuiltWorkspace() {
}).Asserts(w, policy.ActionUpdate, w.AsPrebuild(), policy.ActionUpdate)
}))
}
// Ensures that the prebuilds actor may never insert an api key.
func TestInsertAPIKey_AsPrebuildsUser(t *testing.T) {
t.Parallel()
prebuildsSubj := rbac.Subject{
ID: database.PrebuildsSystemUserID.String(),
}
ctx := dbauthz.As(testutil.Context(t, testutil.WaitShort), prebuildsSubj)
mDB := dbmock.NewMockStore(gomock.NewController(t))
log := slogtest.Make(t, nil)
mDB.EXPECT().Wrappers().Times(1).Return([]string{})
dbz := dbauthz.New(mDB, nil, log, nil)
_, err := dbz.InsertAPIKey(ctx, database.InsertAPIKeyParams{})
require.True(t, dbauthz.IsNotAuthorizedError(err))
}
+7 -3
@@ -156,7 +156,7 @@ func Template(t testing.TB, db database.Store, seed database.Template) database.
return template
}
-func APIKey(t testing.TB, db database.Store, seed database.APIKey) (key database.APIKey, token string) {
+func APIKey(t testing.TB, db database.Store, seed database.APIKey, munge ...func(*database.InsertAPIKeyParams)) (key database.APIKey, token string) {
id, _ := cryptorand.String(10)
secret, _ := cryptorand.String(22)
hashed := sha256.Sum256([]byte(secret))
@@ -172,7 +172,7 @@ func APIKey(t testing.TB, db database.Store, seed database.APIKey) (key database
}
}
-key, err := db.InsertAPIKey(genCtx, database.InsertAPIKeyParams{
+params := database.InsertAPIKeyParams{
ID: takeFirst(seed.ID, id),
// 0 defaults to 86400 at the db layer
LifetimeSeconds: takeFirst(seed.LifetimeSeconds, 0),
@@ -186,7 +186,11 @@ func APIKey(t testing.TB, db database.Store, seed database.APIKey) (key database
LoginType: takeFirst(seed.LoginType, database.LoginTypePassword),
Scope: takeFirst(seed.Scope, database.APIKeyScopeAll),
TokenName: takeFirst(seed.TokenName),
-})
+}
+for _, fn := range munge {
+fn(&params)
+}
+key, err := db.InsertAPIKey(genCtx, params)
require.NoError(t, err, "insert api key")
return key, fmt.Sprintf("%s-%s", key.ID, secret)
}
@@ -509,6 +509,13 @@ func (m queryMetricsStore) EnqueueNotificationMessage(ctx context.Context, arg d
return r0
}
func (m queryMetricsStore) ExpirePrebuildsAPIKeys(ctx context.Context, now time.Time) error {
start := time.Now()
r0 := m.s.ExpirePrebuildsAPIKeys(ctx, now)
m.queryLatencies.WithLabelValues("ExpirePrebuildsAPIKeys").Observe(time.Since(start).Seconds())
return r0
}
func (m queryMetricsStore) FavoriteWorkspace(ctx context.Context, arg uuid.UUID) error {
start := time.Now()
r0 := m.s.FavoriteWorkspace(ctx, arg)
+14
@@ -933,6 +933,20 @@ func (mr *MockStoreMockRecorder) EnqueueNotificationMessage(ctx, arg any) *gomoc
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "EnqueueNotificationMessage", reflect.TypeOf((*MockStore)(nil).EnqueueNotificationMessage), ctx, arg)
}
// ExpirePrebuildsAPIKeys mocks base method.
func (m *MockStore) ExpirePrebuildsAPIKeys(ctx context.Context, now time.Time) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ExpirePrebuildsAPIKeys", ctx, now)
ret0, _ := ret[0].(error)
return ret0
}
// ExpirePrebuildsAPIKeys indicates an expected call of ExpirePrebuildsAPIKeys.
func (mr *MockStoreMockRecorder) ExpirePrebuildsAPIKeys(ctx, now any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ExpirePrebuildsAPIKeys", reflect.TypeOf((*MockStore)(nil).ExpirePrebuildsAPIKeys), ctx, now)
}
// FavoriteWorkspace mocks base method.
func (m *MockStore) FavoriteWorkspace(ctx context.Context, id uuid.UUID) error {
m.ctrl.T.Helper()
@@ -67,6 +67,9 @@ func New(ctx context.Context, logger slog.Logger, db database.Store, clk quartz.
if err := tx.DeleteOldNotificationMessages(ctx); err != nil {
return xerrors.Errorf("failed to delete old notification messages: %w", err)
}
if err := tx.ExpirePrebuildsAPIKeys(ctx, dbtime.Time(start)); err != nil {
return xerrors.Errorf("failed to expire prebuilds user api keys: %w", err)
}
deleteOldAuditLogConnectionEventsBefore := start.Add(-maxAuditLogConnectionEventAge)
if err := tx.DeleteOldAuditLogConnectionEvents(ctx, database.DeleteOldAuditLogConnectionEventsParams{
@@ -25,6 +25,7 @@ import (
"github.com/coder/coder/v2/coderd/database/dbrollup"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/provisionerdserver"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/provisionerd/proto"
"github.com/coder/coder/v2/provisionersdk"
@@ -635,3 +636,68 @@ func TestDeleteOldAuditLogConnectionEventsLimit(t *testing.T) {
require.Len(t, logs, 0)
}
func TestExpireOldAPIKeys(t *testing.T) {
t.Parallel()
// Given: a number of workspaces and API keys owned by a regular user and the prebuilds system user.
var (
ctx = testutil.Context(t, testutil.WaitShort)
now = dbtime.Now()
db, _ = dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
org = dbgen.Organization(t, db, database.Organization{})
user = dbgen.User(t, db, database.User{})
tpl = dbgen.Template(t, db, database.Template{OrganizationID: org.ID, CreatedBy: user.ID})
userWs = dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: user.ID,
TemplateID: tpl.ID,
})
prebuildsWs = dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: database.PrebuildsSystemUserID,
TemplateID: tpl.ID,
})
createAPIKey = func(userID uuid.UUID, name string) database.APIKey {
k, _ := dbgen.APIKey(t, db, database.APIKey{UserID: userID, TokenName: name, ExpiresAt: now.Add(time.Hour)}, func(iap *database.InsertAPIKeyParams) {
iap.TokenName = name
})
return k
}
assertKeyActive = func(kid string) {
k, err := db.GetAPIKeyByID(ctx, kid)
require.NoError(t, err)
assert.True(t, k.ExpiresAt.After(now))
}
assertKeyExpired = func(kid string) {
k, err := db.GetAPIKeyByID(ctx, kid)
require.NoError(t, err)
assert.True(t, k.ExpiresAt.Equal(now))
}
unnamedUserAPIKey = createAPIKey(user.ID, "")
unnamedPrebuildsAPIKey = createAPIKey(database.PrebuildsSystemUserID, "")
namedUserAPIKey = createAPIKey(user.ID, "my-token")
namedPrebuildsAPIKey = createAPIKey(database.PrebuildsSystemUserID, "also-my-token")
userWorkspaceAPIKey1 = createAPIKey(user.ID, provisionerdserver.WorkspaceSessionTokenName(user.ID, userWs.ID))
userWorkspaceAPIKey2 = createAPIKey(user.ID, provisionerdserver.WorkspaceSessionTokenName(user.ID, prebuildsWs.ID))
prebuildsWorkspaceAPIKey1 = createAPIKey(database.PrebuildsSystemUserID, provisionerdserver.WorkspaceSessionTokenName(database.PrebuildsSystemUserID, prebuildsWs.ID))
prebuildsWorkspaceAPIKey2 = createAPIKey(database.PrebuildsSystemUserID, provisionerdserver.WorkspaceSessionTokenName(database.PrebuildsSystemUserID, userWs.ID))
)
// When: we call ExpirePrebuildsAPIKeys
err := db.ExpirePrebuildsAPIKeys(ctx, now)
// Then: no error is reported.
require.NoError(t, err)
// We do not touch user API keys.
assertKeyActive(unnamedUserAPIKey.ID)
assertKeyActive(namedUserAPIKey.ID)
assertKeyActive(userWorkspaceAPIKey1.ID)
assertKeyActive(userWorkspaceAPIKey2.ID)
// Unnamed prebuilds API keys get expired.
assertKeyExpired(unnamedPrebuildsAPIKey.ID)
// API keys for workspaces still owned by prebuilds user remain active until claimed.
assertKeyActive(prebuildsWorkspaceAPIKey1.ID)
// API keys for workspaces no longer owned by prebuilds user get expired.
assertKeyExpired(prebuildsWorkspaceAPIKey2.ID)
// Out of an abundance of caution, we do not expire explicitly named prebuilds API keys.
assertKeyActive(namedPrebuildsAPIKey.ID)
}
@@ -10,7 +10,6 @@ import (
"os/exec"
"path/filepath"
"regexp"
"strconv"
"strings"
"testing"
"time"
@@ -251,26 +250,31 @@ func PGDump(dbURL string) ([]byte, error) {
return stdout.Bytes(), nil
}
const minimumPostgreSQLVersion = 13
const (
minimumPostgreSQLVersion = 13
postgresImageSha = "sha256:467e7f2fb97b2f29d616e0be1d02218a7bbdfb94eb3cda7461fd80165edfd1f7"
)
// PGDumpSchemaOnly is for use by gen/dump only.
// It runs pg_dump against dbURL and sets a consistent timezone and encoding.
func PGDumpSchemaOnly(dbURL string) ([]byte, error) {
hasPGDump := false
if _, err := exec.LookPath("pg_dump"); err == nil {
out, err := exec.Command("pg_dump", "--version").Output()
if err == nil {
// Parse output:
// pg_dump (PostgreSQL) 14.5 (Ubuntu 14.5-0ubuntu0.22.04.1)
parts := strings.Split(string(out), " ")
if len(parts) > 2 {
version, err := strconv.Atoi(strings.Split(parts[2], ".")[0])
if err == nil && version >= minimumPostgreSQLVersion {
hasPGDump = true
}
}
}
}
// TODO: Temporarily pin pg_dump to the docker image until
// https://github.com/sqlc-dev/sqlc/issues/4065 is resolved.
// if _, err := exec.LookPath("pg_dump"); err == nil {
// out, err := exec.Command("pg_dump", "--version").Output()
// if err == nil {
// // Parse output:
// // pg_dump (PostgreSQL) 14.5 (Ubuntu 14.5-0ubuntu0.22.04.1)
// parts := strings.Split(string(out), " ")
// if len(parts) > 2 {
// version, err := strconv.Atoi(strings.Split(parts[2], ".")[0])
// if err == nil && version >= minimumPostgreSQLVersion {
// hasPGDump = true
// }
// }
// }
// }
cmdArgs := []string{
"pg_dump",
@@ -295,7 +299,7 @@ func PGDumpSchemaOnly(dbURL string) ([]byte, error) {
"run",
"--rm",
"--network=host",
fmt.Sprintf("%s:%d", postgresImage, minimumPostgreSQLVersion),
fmt.Sprintf("%s:%d@%s", postgresImage, minimumPostgreSQLVersion, postgresImageSha),
}, cmdArgs...)
}
cmd := exec.Command(cmdArgs[0], cmdArgs[1:]...) //#nosec
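The commented-out block above used to detect a usable local `pg_dump` by parsing its `--version` output. A standalone sketch of that parse (the function name is illustrative, not from the source):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// majorVersion parses output such as
//   pg_dump (PostgreSQL) 14.5 (Ubuntu 14.5-0ubuntu0.22.04.1)
// and returns the major version, mirroring the disabled local-binary
// detection: split on spaces, take the third field, keep digits before
// the first dot.
func majorVersion(out string) (int, bool) {
	parts := strings.Split(out, " ")
	if len(parts) <= 2 {
		return 0, false
	}
	v, err := strconv.Atoi(strings.Split(parts[2], ".")[0])
	if err != nil {
		return 0, false
	}
	return v, true
}

func main() {
	v, ok := majorVersion("pg_dump (PostgreSQL) 14.5 (Ubuntu 14.5-0ubuntu0.22.04.1)")
	fmt.Println(v, ok)
}
```

While the detection is disabled, the Docker fallback pins the image by digest (`image:tag@sha256:...`), so the schema dump is byte-for-byte reproducible regardless of what tag `postgres:13` currently points to.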
@@ -128,6 +128,11 @@ type sqlcQuerier interface {
// of the test-only in-memory database. Do not use this in new code.
DisableForeignKeysAndTriggers(ctx context.Context) error
EnqueueNotificationMessage(ctx context.Context, arg EnqueueNotificationMessageParams) error
// Firstly, collect api_keys owned by the prebuilds user that correlate
// to workspaces no longer owned by the prebuilds user.
// Next, collect api_keys that belong to the prebuilds user but have no token name.
// These were most likely created via 'coder login' as the prebuilds user.
ExpirePrebuildsAPIKeys(ctx context.Context, now time.Time) error
FavoriteWorkspace(ctx context.Context, id uuid.UUID) error
FetchMemoryResourceMonitorsByAgentID(ctx context.Context, agentID uuid.UUID) (WorkspaceAgentMemoryResourceMonitor, error)
FetchMemoryResourceMonitorsUpdatedAfter(ctx context.Context, updatedAt time.Time) ([]WorkspaceAgentMemoryResourceMonitor, error)
@@ -144,6 +144,46 @@ func (q *sqlQuerier) DeleteApplicationConnectAPIKeysByUserID(ctx context.Context
return err
}
const expirePrebuildsAPIKeys = `-- name: ExpirePrebuildsAPIKeys :exec
WITH unexpired_prebuilds_workspace_session_tokens AS (
SELECT id, SUBSTRING(token_name FROM 38 FOR 36)::uuid AS workspace_id
FROM api_keys
WHERE user_id = 'c42fdf75-3097-471c-8c33-fb52454d81c0'::uuid
AND expires_at > $1::timestamptz
AND token_name SIMILAR TO 'c42fdf75-3097-471c-8c33-fb52454d81c0_[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}_session_token'
),
stale_prebuilds_workspace_session_tokens AS (
SELECT upwst.id
FROM unexpired_prebuilds_workspace_session_tokens upwst
LEFT JOIN workspaces w
ON w.id = upwst.workspace_id
WHERE w.owner_id <> 'c42fdf75-3097-471c-8c33-fb52454d81c0'::uuid
),
unnamed_prebuilds_api_keys AS (
SELECT id
FROM api_keys
WHERE user_id = 'c42fdf75-3097-471c-8c33-fb52454d81c0'::uuid
AND token_name = ''
AND expires_at > $1::timestamptz
)
UPDATE api_keys
SET expires_at = $1::timestamptz
WHERE id IN (
SELECT id FROM stale_prebuilds_workspace_session_tokens
UNION
SELECT id FROM unnamed_prebuilds_api_keys
)
`
// Firstly, collect api_keys owned by the prebuilds user that correlate
// to workspaces no longer owned by the prebuilds user.
// Next, collect api_keys that belong to the prebuilds user but have no token name.
// These were most likely created via 'coder login' as the prebuilds user.
func (q *sqlQuerier) ExpirePrebuildsAPIKeys(ctx context.Context, now time.Time) error {
_, err := q.db.ExecContext(ctx, expirePrebuildsAPIKeys, now)
return err
}
const getAPIKeyByID = `-- name: GetAPIKeyByID :one
SELECT
id, hashed_secret, user_id, last_used, expires_at, created_at, updated_at, login_type, lifetime_seconds, ip_address, scope, token_name
@@ -83,3 +83,37 @@ DELETE FROM
api_keys
WHERE
user_id = $1;
-- name: ExpirePrebuildsAPIKeys :exec
-- Firstly, collect api_keys owned by the prebuilds user that correlate
-- to workspaces no longer owned by the prebuilds user.
WITH unexpired_prebuilds_workspace_session_tokens AS (
SELECT id, SUBSTRING(token_name FROM 38 FOR 36)::uuid AS workspace_id
FROM api_keys
WHERE user_id = 'c42fdf75-3097-471c-8c33-fb52454d81c0'::uuid
AND expires_at > @now::timestamptz
AND token_name SIMILAR TO 'c42fdf75-3097-471c-8c33-fb52454d81c0_[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}_session_token'
),
stale_prebuilds_workspace_session_tokens AS (
SELECT upwst.id
FROM unexpired_prebuilds_workspace_session_tokens upwst
LEFT JOIN workspaces w
ON w.id = upwst.workspace_id
WHERE w.owner_id <> 'c42fdf75-3097-471c-8c33-fb52454d81c0'::uuid
),
-- Next, collect api_keys that belong to the prebuilds user but have no token name.
-- These were most likely created via 'coder login' as the prebuilds user.
unnamed_prebuilds_api_keys AS (
SELECT id
FROM api_keys
WHERE user_id = 'c42fdf75-3097-471c-8c33-fb52454d81c0'::uuid
AND token_name = ''
AND expires_at > @now::timestamptz
)
UPDATE api_keys
SET expires_at = @now::timestamptz
WHERE id IN (
SELECT id FROM stale_prebuilds_workspace_session_tokens
UNION
SELECT id FROM unnamed_prebuilds_api_keys
);
@@ -26,6 +26,14 @@ func tagValidationError(diags hcl.Diagnostics) *DiagnosticError {
}
}
func presetValidationError(diags hcl.Diagnostics) *DiagnosticError {
return &DiagnosticError{
Message: "Unable to validate presets",
Diagnostics: diags,
KeyedDiagnostics: make(map[string]hcl.Diagnostics),
}
}
type DiagnosticError struct {
// Message is the human-readable message that will be returned to the user.
Message string
@@ -0,0 +1,28 @@
package dynamicparameters
import (
"github.com/hashicorp/hcl/v2"
"github.com/coder/preview"
)
// CheckPresets extracts the preset-related diagnostics from a template version's presets.
func CheckPresets(output *preview.Output, diags hcl.Diagnostics) *DiagnosticError {
de := presetValidationError(diags)
if output == nil {
return de
}
presets := output.Presets
for _, preset := range presets {
if hcl.Diagnostics(preset.Diagnostics).HasErrors() {
de.Extend(preset.Name, hcl.Diagnostics(preset.Diagnostics))
}
}
if de.HasError() {
return de
}
return nil
}
@@ -11,6 +11,10 @@ import (
func CheckTags(output *preview.Output, diags hcl.Diagnostics) *DiagnosticError {
de := tagValidationError(diags)
if output == nil {
return de
}
failedTags := output.WorkspaceTags.UnusableTags()
if len(failedTags) == 0 && !de.HasError() {
return nil // No errors, all is good!
@@ -2711,15 +2711,23 @@ func InsertWorkspaceResource(ctx context.Context, db database.Store, jobID uuid.
return nil
}
func workspaceSessionTokenName(workspace database.Workspace) string {
return fmt.Sprintf("%s_%s_session_token", workspace.OwnerID, workspace.ID)
func WorkspaceSessionTokenName(ownerID, workspaceID uuid.UUID) string {
return fmt.Sprintf("%s_%s_session_token", ownerID, workspaceID)
}
func (s *server) regenerateSessionToken(ctx context.Context, user database.User, workspace database.Workspace) (string, error) {
// NOTE(Cian): Once a workspace is claimed, there's no reason for the session token to be valid any longer.
// Not generating any session token at all for a system user may unintentionally break existing templates,
// which we want to avoid. If there's no session token for the workspace belonging to the prebuilds user,
// then there's nothing for us to worry about here.
// TODO(Cian): Update this to handle _all_ system users. At the time of writing, only one system user exists.
if err := deleteSessionTokenForUserAndWorkspace(ctx, s.Database, database.PrebuildsSystemUserID, workspace.ID); err != nil && !errors.Is(err, sql.ErrNoRows) {
s.Logger.Error(ctx, "failed to delete prebuilds session token", slog.Error(err), slog.F("workspace_id", workspace.ID))
}
newkey, sessionToken, err := apikey.Generate(apikey.CreateParams{
UserID: user.ID,
LoginType: user.LoginType,
TokenName: workspaceSessionTokenName(workspace),
TokenName: WorkspaceSessionTokenName(workspace.OwnerID, workspace.ID),
DefaultLifetime: s.DeploymentValues.Sessions.DefaultTokenDuration.Value(),
LifetimeSeconds: int64(s.DeploymentValues.Sessions.MaximumTokenDuration.Value().Seconds()),
})
@@ -2747,10 +2755,14 @@ func (s *server) regenerateSessionToken(ctx context.Context, user database.User,
}
func deleteSessionToken(ctx context.Context, db database.Store, workspace database.Workspace) error {
return deleteSessionTokenForUserAndWorkspace(ctx, db, workspace.OwnerID, workspace.ID)
}
func deleteSessionTokenForUserAndWorkspace(ctx context.Context, db database.Store, userID, workspaceID uuid.UUID) error {
err := db.InTx(func(tx database.Store) error {
key, err := tx.GetAPIKeyByName(ctx, database.GetAPIKeyByNameParams{
UserID: workspace.OwnerID,
TokenName: workspaceSessionTokenName(workspace),
UserID: userID,
TokenName: WorkspaceSessionTokenName(userID, workspaceID),
})
if err == nil {
err = tx.DeleteAPIKeyByID(ctx, key.ID)
@@ -3576,6 +3576,70 @@ func TestNotifications(t *testing.T) {
})
}
func TestServer_ExpirePrebuildsSessionToken(t *testing.T) {
t.Parallel()
// Given: a prebuilt workspace where an API key was previously created for the prebuilds user.
var (
ctx = testutil.Context(t, testutil.WaitShort)
srv, db, ps, pd = setup(t, false, nil)
user = dbgen.User(t, db, database.User{})
template = dbgen.Template(t, db, database.Template{
OrganizationID: pd.OrganizationID,
CreatedBy: user.ID,
})
version = dbgen.TemplateVersion(t, db, database.TemplateVersion{
TemplateID: uuid.NullUUID{UUID: template.ID, Valid: true},
OrganizationID: pd.OrganizationID,
CreatedBy: user.ID,
})
workspace = dbgen.Workspace(t, db, database.WorkspaceTable{
OrganizationID: pd.OrganizationID,
TemplateID: template.ID,
OwnerID: database.PrebuildsSystemUserID,
})
workspaceBuildID = uuid.New()
buildJob = dbgen.ProvisionerJob(t, db, ps, database.ProvisionerJob{
OrganizationID: pd.OrganizationID,
FileID: dbgen.File(t, db, database.File{CreatedBy: user.ID}).ID,
Type: database.ProvisionerJobTypeWorkspaceBuild,
Input: must(json.Marshal(provisionerdserver.WorkspaceProvisionJob{
WorkspaceBuildID: workspaceBuildID,
})),
InitiatorID: database.PrebuildsSystemUserID,
Tags: pd.Tags,
})
_ = dbgen.WorkspaceBuild(t, db, database.WorkspaceBuild{
ID: workspaceBuildID,
WorkspaceID: workspace.ID,
TemplateVersionID: version.ID,
JobID: buildJob.ID,
Transition: database.WorkspaceTransitionStart,
InitiatorID: database.PrebuildsSystemUserID,
})
existingKey, _ = dbgen.APIKey(t, db, database.APIKey{
UserID: database.PrebuildsSystemUserID,
TokenName: provisionerdserver.WorkspaceSessionTokenName(database.PrebuildsSystemUserID, workspace.ID),
})
)
// When: the prebuild claim job is acquired
fs := newFakeStream(ctx)
err := srv.AcquireJobWithCancel(fs)
require.NoError(t, err)
job, err := fs.waitForJob()
require.NoError(t, err)
require.NotNil(t, job)
workspaceBuildJob := job.Type.(*proto.AcquiredJob_WorkspaceBuild_).WorkspaceBuild
require.NotNil(t, workspaceBuildJob.Metadata)
// Assert test invariant: we acquired the expected build job
require.Equal(t, workspaceBuildID.String(), workspaceBuildJob.WorkspaceBuildId)
// Then: The session token should be deleted
_, err = db.GetAPIKeyByID(ctx, existingKey.ID)
require.ErrorIs(t, err, sql.ErrNoRows, "api key for prebuilds user should be deleted")
}
type overrides struct {
ctx context.Context
deploymentValues *codersdk.DeploymentValues
@@ -1822,6 +1822,14 @@ func (api *API) dynamicTemplateVersionTags(ctx context.Context, rw http.Response
return nil, false
}
// Fails early if presets are invalid to prevent downstream workspace creation errors
presetErr := dynamicparameters.CheckPresets(output, nil)
if presetErr != nil {
code, resp := presetErr.Response()
httpapi.Write(ctx, rw, code, resp)
return nil, false
}
return output.WorkspaceTags.Tags(), true
}
@@ -620,6 +620,119 @@ func TestPostTemplateVersionsByOrganization(t *testing.T) {
})
}
})
t.Run("Presets", func(t *testing.T) {
t.Parallel()
store, ps := dbtestutil.NewDB(t)
client := coderdtest.New(t, &coderdtest.Options{
Database: store,
Pubsub: ps,
})
owner := coderdtest.CreateFirstUser(t, client)
templateAdmin, _ := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID, rbac.RoleTemplateAdmin())
for _, tt := range []struct {
name string
files map[string]string
expectError string
}{
{
name: "valid preset",
files: map[string]string{
`main.tf`: `
terraform {
required_providers {
coder = {
source = "coder/coder"
version = "2.8.0"
}
}
}
data "coder_parameter" "valid_parameter" {
name = "valid_parameter_name"
default = "valid_option_value"
option {
name = "valid_option_name"
value = "valid_option_value"
}
}
data "coder_workspace_preset" "valid_preset" {
name = "valid_preset"
parameters = {
"valid_parameter_name" = "valid_option_value"
}
}
`,
},
},
{
name: "invalid preset",
files: map[string]string{
`main.tf`: `
terraform {
required_providers {
coder = {
source = "coder/coder"
version = "2.8.0"
}
}
}
data "coder_parameter" "valid_parameter" {
name = "valid_parameter_name"
default = "valid_option_value"
option {
name = "valid_option_name"
value = "valid_option_value"
}
}
data "coder_workspace_preset" "invalid_parameter_name" {
name = "invalid_parameter_name"
parameters = {
"invalid_parameter_name" = "irrelevant_value"
}
}
`,
},
expectError: "Undefined Parameter",
},
} {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
// Create an archive from the files provided in the test case.
tarFile := testutil.CreateTar(t, tt.files)
// Post the archive file
fi, err := templateAdmin.Upload(ctx, "application/x-tar", bytes.NewReader(tarFile))
require.NoError(t, err)
// Create a template version from the archive
tvName := testutil.GetRandomNameHyphenated(t)
tv, err := templateAdmin.CreateTemplateVersion(ctx, owner.OrganizationID, codersdk.CreateTemplateVersionRequest{
Name: tvName,
StorageMethod: codersdk.ProvisionerStorageMethodFile,
Provisioner: codersdk.ProvisionerTypeTerraform,
FileID: fi.ID,
})
if tt.expectError == "" {
require.NoError(t, err)
// Assert the expected provisioner job is created from the template version import
pj, err := store.GetProvisionerJobByID(ctx, tv.Job.ID)
require.NoError(t, err)
require.NotNil(t, pj)
// Also assert that we get the expected information back from the API endpoint
require.Zero(t, tv.MatchedProvisioners.Count)
require.Zero(t, tv.MatchedProvisioners.Available)
require.Zero(t, tv.MatchedProvisioners.MostRecentlySeen.Time)
} else {
require.ErrorContains(t, err, tt.expectError)
require.Equal(t, tv.Job.ID, uuid.Nil)
}
})
}
})
}
func TestPatchCancelTemplateVersion(t *testing.T) {
@@ -817,12 +817,13 @@ func (api *API) watchWorkspaceAgentContainers(rw http.ResponseWriter, r *http.Re
var (
ctx = r.Context()
workspaceAgent = httpmw.WorkspaceAgentParam(r)
logger = api.Logger.Named("agent_container_watcher").With(slog.F("agent_id", workspaceAgent.ID))
)
// If the agent is unreachable, the request will hang. Assume that if we
// don't get a response after 30s that the agent is unreachable.
dialCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
dialCtx, dialCancel := context.WithTimeout(ctx, 30*time.Second)
defer dialCancel()
apiAgent, err := db2sdk.WorkspaceAgent(
api.DERPMap(),
*api.TailnetCoordinator.Load(),
@@ -857,8 +858,7 @@ func (api *API) watchWorkspaceAgentContainers(rw http.ResponseWriter, r *http.Re
}
defer release()
watcherLogger := api.Logger.Named("agent_container_watcher").With(slog.F("agent_id", workspaceAgent.ID))
containersCh, closer, err := agentConn.WatchContainers(ctx, watcherLogger)
containersCh, closer, err := agentConn.WatchContainers(ctx, logger)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error watching agent's containers.",
@@ -877,6 +877,9 @@ func (api *API) watchWorkspaceAgentContainers(rw http.ResponseWriter, r *http.Re
return
}
ctx, cancel := context.WithCancel(r.Context())
defer cancel()
// Here we close the websocket for reading, so that the websocket library will handle pings and
// close frames.
_ = conn.CloseRead(context.Background())
@@ -884,7 +887,7 @@ func (api *API) watchWorkspaceAgentContainers(rw http.ResponseWriter, r *http.Re
ctx, wsNetConn := codersdk.WebsocketNetConn(ctx, conn, websocket.MessageText)
defer wsNetConn.Close()
go httpapi.Heartbeat(ctx, conn)
go httpapi.HeartbeatClose(ctx, logger, cancel, conn)
encoder := json.NewEncoder(wsNetConn)
@@ -896,7 +899,11 @@ func (api *API) watchWorkspaceAgentContainers(rw http.ResponseWriter, r *http.Re
case <-ctx.Done():
return
case containers := <-containersCh:
case containers, ok := <-containersCh:
if !ok {
return
}
if err := encoder.Encode(containers); err != nil {
api.Logger.Error(ctx, "encode containers", slog.Error(err))
return
@@ -1389,169 +1389,147 @@ func TestWorkspaceAgentContainers(t *testing.T) {
func TestWatchWorkspaceAgentDevcontainers(t *testing.T) {
t.Parallel()
var (
ctx = testutil.Context(t, testutil.WaitLong)
logger = slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
mClock = quartz.NewMock(t)
updaterTickerTrap = mClock.Trap().TickerFunc("updaterLoop")
mCtrl = gomock.NewController(t)
mCCLI = acmock.NewMockContainerCLI(mCtrl)
t.Run("OK", func(t *testing.T) {
t.Parallel()
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{Logger: &logger})
user = coderdtest.CreateFirstUser(t, client)
r = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
return agents
}).Do()
var (
ctx = testutil.Context(t, testutil.WaitLong)
logger = slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
mClock = quartz.NewMock(t)
updaterTickerTrap = mClock.Trap().TickerFunc("updaterLoop")
mCtrl = gomock.NewController(t)
mCCLI = acmock.NewMockContainerCLI(mCtrl)
fakeContainer1 = codersdk.WorkspaceAgentContainer{
ID: "container1",
CreatedAt: dbtime.Now(),
FriendlyName: "container1",
Image: "busybox:latest",
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/home/coder/project1",
agentcontainers.DevcontainerConfigFileLabel: "/home/coder/project1/.devcontainer/devcontainer.json",
},
Running: true,
Status: "running",
}
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{Logger: &logger})
user = coderdtest.CreateFirstUser(t, client)
r = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
return agents
}).Do()
fakeContainer2 = codersdk.WorkspaceAgentContainer{
ID: "container1",
CreatedAt: dbtime.Now(),
FriendlyName: "container2",
Image: "busybox:latest",
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/home/coder/project2",
agentcontainers.DevcontainerConfigFileLabel: "/home/coder/project2/.devcontainer/devcontainer.json",
},
Running: true,
Status: "running",
}
)
fakeContainer1 = codersdk.WorkspaceAgentContainer{
ID: "container1",
CreatedAt: dbtime.Now(),
FriendlyName: "container1",
Image: "busybox:latest",
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/home/coder/project1",
agentcontainers.DevcontainerConfigFileLabel: "/home/coder/project1/.devcontainer/devcontainer.json",
},
Running: true,
Status: "running",
}
stages := []struct {
containers []codersdk.WorkspaceAgentContainer
expected codersdk.WorkspaceAgentListContainersResponse
}{
{
containers: []codersdk.WorkspaceAgentContainer{fakeContainer1},
expected: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{fakeContainer1},
Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
{
Name: "project1",
WorkspaceFolder: fakeContainer1.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer1.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "running",
Container: &fakeContainer1,
fakeContainer2 = codersdk.WorkspaceAgentContainer{
ID: "container1",
CreatedAt: dbtime.Now(),
FriendlyName: "container2",
Image: "busybox:latest",
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/home/coder/project2",
agentcontainers.DevcontainerConfigFileLabel: "/home/coder/project2/.devcontainer/devcontainer.json",
},
Running: true,
Status: "running",
}
)
stages := []struct {
containers []codersdk.WorkspaceAgentContainer
expected codersdk.WorkspaceAgentListContainersResponse
}{
{
containers: []codersdk.WorkspaceAgentContainer{fakeContainer1},
expected: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{fakeContainer1},
Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
{
Name: "project1",
WorkspaceFolder: fakeContainer1.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer1.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "running",
Container: &fakeContainer1,
},
},
},
},
},
{
containers: []codersdk.WorkspaceAgentContainer{fakeContainer1, fakeContainer2},
expected: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{fakeContainer1, fakeContainer2},
Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
{
Name: "project1",
WorkspaceFolder: fakeContainer1.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer1.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "running",
Container: &fakeContainer1,
},
{
Name: "project2",
WorkspaceFolder: fakeContainer2.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer2.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "running",
Container: &fakeContainer2,
{
containers: []codersdk.WorkspaceAgentContainer{fakeContainer1, fakeContainer2},
expected: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{fakeContainer1, fakeContainer2},
Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
{
Name: "project1",
WorkspaceFolder: fakeContainer1.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer1.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "running",
Container: &fakeContainer1,
},
{
Name: "project2",
WorkspaceFolder: fakeContainer2.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer2.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "running",
Container: &fakeContainer2,
},
},
},
},
},
{
containers: []codersdk.WorkspaceAgentContainer{fakeContainer2},
expected: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{fakeContainer2},
Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
{
Name: "",
WorkspaceFolder: fakeContainer1.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer1.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "stopped",
Container: nil,
},
{
Name: "project2",
WorkspaceFolder: fakeContainer2.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer2.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "running",
Container: &fakeContainer2,
{
containers: []codersdk.WorkspaceAgentContainer{fakeContainer2},
expected: codersdk.WorkspaceAgentListContainersResponse{
Containers: []codersdk.WorkspaceAgentContainer{fakeContainer2},
Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
{
Name: "",
WorkspaceFolder: fakeContainer1.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer1.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "stopped",
Container: nil,
},
{
Name: "project2",
WorkspaceFolder: fakeContainer2.Labels[agentcontainers.DevcontainerLocalFolderLabel],
ConfigPath: fakeContainer2.Labels[agentcontainers.DevcontainerConfigFileLabel],
Status: "running",
Container: &fakeContainer2,
},
},
},
},
},
}
// Set up initial state for immediate send on connection
mCCLI.EXPECT().List(gomock.Any()).Return(codersdk.WorkspaceAgentListContainersResponse{Containers: stages[0].containers}, nil)
mCCLI.EXPECT().DetectArchitecture(gomock.Any(), gomock.Any()).Return("<none>", nil).AnyTimes()
_ = agenttest.New(t, client.URL, r.AgentToken, func(o *agent.Options) {
o.Logger = logger.Named("agent")
o.Devcontainers = true
o.DevcontainerAPIOptions = []agentcontainers.Option{
agentcontainers.WithClock(mClock),
agentcontainers.WithContainerCLI(mCCLI),
agentcontainers.WithWatcher(watcher.NewNoop()),
}
})
resources := coderdtest.NewWorkspaceAgentWaiter(t, client, r.Workspace.ID).Wait()
require.Len(t, resources, 1, "expected one resource")
require.Len(t, resources[0].Agents, 1, "expected one agent")
agentID := resources[0].Agents[0].ID
// Set up initial state for immediate send on connection
mCCLI.EXPECT().List(gomock.Any()).Return(codersdk.WorkspaceAgentListContainersResponse{Containers: stages[0].containers}, nil)
mCCLI.EXPECT().DetectArchitecture(gomock.Any(), gomock.Any()).Return("<none>", nil).AnyTimes()
updaterTickerTrap.MustWait(ctx).MustRelease(ctx)
defer updaterTickerTrap.Close()
_ = agenttest.New(t, client.URL, r.AgentToken, func(o *agent.Options) {
o.Logger = logger.Named("agent")
o.Devcontainers = true
o.DevcontainerAPIOptions = []agentcontainers.Option{
agentcontainers.WithClock(mClock),
agentcontainers.WithContainerCLI(mCCLI),
agentcontainers.WithWatcher(watcher.NewNoop()),
}
})
containers, closer, err := client.WatchWorkspaceAgentContainers(ctx, agentID)
require.NoError(t, err)
defer func() {
closer.Close()
}()
resources := coderdtest.NewWorkspaceAgentWaiter(t, client, r.Workspace.ID).Wait()
require.Len(t, resources, 1, "expected one resource")
require.Len(t, resources[0].Agents, 1, "expected one agent")
agentID := resources[0].Agents[0].ID
// Read initial state sent immediately on connection
var got codersdk.WorkspaceAgentListContainersResponse
select {
case <-ctx.Done():
case got = <-containers:
}
require.NoError(t, ctx.Err())
updaterTickerTrap.MustWait(ctx).MustRelease(ctx)
defer updaterTickerTrap.Close()
require.Equal(t, stages[0].expected.Containers, got.Containers)
require.Len(t, got.Devcontainers, len(stages[0].expected.Devcontainers))
for j, expectedDev := range stages[0].expected.Devcontainers {
gotDev := got.Devcontainers[j]
require.Equal(t, expectedDev.Name, gotDev.Name)
require.Equal(t, expectedDev.WorkspaceFolder, gotDev.WorkspaceFolder)
require.Equal(t, expectedDev.ConfigPath, gotDev.ConfigPath)
require.Equal(t, expectedDev.Status, gotDev.Status)
require.Equal(t, expectedDev.Container, gotDev.Container)
}
// Process remaining stages through updater loop
for i, stage := range stages[1:] {
mCCLI.EXPECT().List(gomock.Any()).Return(codersdk.WorkspaceAgentListContainersResponse{Containers: stage.containers}, nil)
_, aw := mClock.AdvanceNext()
aw.MustWait(ctx)
var got codersdk.WorkspaceAgentListContainersResponse
select {
case <-ctx.Done():
case got = <-containers:
}
require.NoError(t, ctx.Err())
require.Equal(t, stages[i+1].expected.Containers, got.Containers)
require.Len(t, got.Devcontainers, len(stages[i+1].expected.Devcontainers))
for j, expectedDev := range stages[i+1].expected.Devcontainers {
gotDev := got.Devcontainers[j]
require.Equal(t, expectedDev.Name, gotDev.Name)
require.Equal(t, expectedDev.WorkspaceFolder, gotDev.WorkspaceFolder)
require.Equal(t, expectedDev.ConfigPath, gotDev.ConfigPath)
require.Equal(t, expectedDev.Status, gotDev.Status)
require.Equal(t, expectedDev.Container, gotDev.Container)
}
}
})
t.Run("PayloadTooLarge", func(t *testing.T) {
t.Parallel()
var (
ctx = testutil.Context(t, testutil.WaitShort)
logger = slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}).Leveled(slog.LevelDebug)
mCtrl = gomock.NewController(t)
mCCLI = acmock.NewMockContainerCLI(mCtrl)
client, db = coderdtest.NewWithDatabase(t, &coderdtest.Options{Logger: &logger})
user = coderdtest.CreateFirstUser(t, client)
r = dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
return agents
}).Do()
)
// WebSocket limit is 4MiB, so we want to ensure we create _more_ than 4MiB worth of payload.
// Creating 20,000 fake containers creates a payload of roughly 7MiB.
var fakeContainers []codersdk.WorkspaceAgentContainer
for range 20_000 {
fakeContainers = append(fakeContainers, codersdk.WorkspaceAgentContainer{
CreatedAt: time.Now(),
ID: uuid.NewString(),
FriendlyName: uuid.NewString(),
Image: "busybox:latest",
Labels: map[string]string{
agentcontainers.DevcontainerLocalFolderLabel: "/home/coder/project",
agentcontainers.DevcontainerConfigFileLabel: "/home/coder/project/.devcontainer/devcontainer.json",
},
Running: false,
Ports: []codersdk.WorkspaceAgentContainerPort{},
Status: string(codersdk.WorkspaceAgentDevcontainerStatusRunning),
Volumes: map[string]string{},
})
}
mCCLI.EXPECT().List(gomock.Any()).Return(codersdk.WorkspaceAgentListContainersResponse{Containers: fakeContainers}, nil)
mCCLI.EXPECT().DetectArchitecture(gomock.Any(), gomock.Any()).Return("<none>", nil).AnyTimes()
_ = agenttest.New(t, client.URL, r.AgentToken, func(o *agent.Options) {
o.Logger = logger.Named("agent")
o.Devcontainers = true
o.DevcontainerAPIOptions = []agentcontainers.Option{
agentcontainers.WithContainerCLI(mCCLI),
agentcontainers.WithWatcher(watcher.NewNoop()),
}
})
resources := coderdtest.NewWorkspaceAgentWaiter(t, client, r.Workspace.ID).Wait()
require.Len(t, resources, 1, "expected one resource")
require.Len(t, resources[0].Agents, 1, "expected one agent")
agentID := resources[0].Agents[0].ID
containers, closer, err := client.WatchWorkspaceAgentContainers(ctx, agentID)
require.NoError(t, err)
defer func() {
closer.Close()
}()
select {
case <-ctx.Done():
t.Fail()
case _, ok := <-containers:
require.False(t, ok)
}
})
}
func TestWorkspaceAgentRecreateDevcontainer(t *testing.T) {
@@ -550,7 +550,9 @@ func (c *Client) WatchWorkspaceAgentContainers(ctx context.Context, agentID uuid
}})
conn, res, err := websocket.Dial(ctx, reqURL.String(), &websocket.DialOptions{
CompressionMode: websocket.CompressionDisabled,
// We want `NoContextTakeover` compression to balance improving
// bandwidth cost/latency with minimal memory usage overhead.
CompressionMode: websocket.CompressionNoContextTakeover,
HTTPClient: &http.Client{
Jar: jar,
Transport: c.HTTPClient.Transport,
@@ -563,6 +565,12 @@ func (c *Client) WatchWorkspaceAgentContainers(ctx context.Context, agentID uuid
return nil, nil, ReadBodyAsError(res)
}
// When a workspace has a few devcontainers running, or a single devcontainer
// has a large amount of apps, then each payload can easily exceed 32KiB.
// We up the limit to 4MiB to give us plenty of headroom for workspaces that
// have lots of dev containers with lots of apps.
conn.SetReadLimit(1 << 22) // 4MiB
d := wsjson.NewDecoder[WorkspaceAgentListContainersResponse](conn, websocket.MessageText, c.logger)
return d.Chan(), d, nil
}
@@ -400,6 +400,10 @@ func (c *AgentConn) WatchContainers(ctx context.Context, logger slog.Logger) (<-
conn, res, err := websocket.Dial(ctx, url, &websocket.DialOptions{
HTTPClient: c.apiClient(),
// We want `NoContextTakeover` compression to balance improving
// bandwidth cost/latency with minimal memory usage overhead.
CompressionMode: websocket.CompressionNoContextTakeover,
})
if err != nil {
if res == nil {
@@ -411,6 +415,12 @@ func (c *AgentConn) WatchContainers(ctx context.Context, logger slog.Logger) (<-
defer res.Body.Close()
}
// When a workspace has a few devcontainers running, or a single devcontainer
// has a large amount of apps, then each payload can easily exceed 32KiB.
// We up the limit to 4MiB to give us plenty of headroom for workspaces that
// have lots of dev containers with lots of apps.
conn.SetReadLimit(1 << 22) // 4MiB
d := wsjson.NewDecoder[codersdk.WorkspaceAgentListContainersResponse](conn, websocket.MessageText, logger)
return d.Chan(), d, nil
}
@@ -0,0 +1,236 @@
# OAuth2 Provider (Experimental)
> [!WARNING]
> The OAuth2 provider functionality is currently **experimental and unstable**. This feature:
>
> - Is subject to breaking changes without notice
> - May have incomplete functionality
> - Is not recommended for production use
> - Requires the `oauth2` experiment flag to be enabled
>
> Use this feature for development and testing purposes only.
Coder can act as an OAuth2 authorization server, allowing third-party applications to authenticate users through Coder and access the Coder API on their behalf. This enables integrations where external applications can leverage Coder's authentication and user management.
## Requirements
- Admin privileges in Coder
- OAuth2 experiment flag enabled
- HTTPS recommended for production deployments
## Enable OAuth2 Provider
Add the `oauth2` experiment flag to your Coder server:
```bash
coder server --experiments oauth2
```
Or set the environment variable:
```env
CODER_EXPERIMENTS=oauth2
```
## Creating OAuth2 Applications
### Method 1: Web UI
1. Navigate to **Deployment Settings** → **OAuth2 Applications**
2. Click **Create Application**
3. Fill in the application details:
- **Name**: Your application name
- **Callback URL**: `https://yourapp.example.com/callback`
- **Icon**: Optional icon URL
### Method 2: Management API
Create an application using the Coder API:
```bash
curl -X POST \
-H "Authorization: Bearer $CODER_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "My Application",
"callback_url": "https://myapp.example.com/callback",
"icon": "https://myapp.example.com/icon.png"
}' \
"$CODER_URL/api/v2/oauth2-provider/apps"
```
Generate a client secret:
```bash
curl -X POST \
-H "Authorization: Bearer $CODER_SESSION_TOKEN" \
"$CODER_URL/api/v2/oauth2-provider/apps/$APP_ID/secrets"
```
## Integration Patterns
### Standard OAuth2 Flow
1. **Authorization Request**: Redirect users to Coder's authorization endpoint:
```url
https://coder.example.com/oauth2/authorize?
client_id=your-client-id&
response_type=code&
redirect_uri=https://yourapp.example.com/callback&
state=random-string
```
2. **Token Exchange**: Exchange the authorization code for an access token:
```bash
curl -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=authorization_code" \
-d "code=$AUTH_CODE" \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" \
-d "redirect_uri=https://yourapp.example.com/callback" \
"$CODER_URL/oauth2/tokens"
```
3. **API Access**: Use the access token to call Coder's API:
```bash
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
"$CODER_URL/api/v2/users/me"
```
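The authorization request in step 1 is just a URL with query parameters, so a client can assemble it programmatically. A minimal Python sketch, where the client ID and callback URL are placeholder values rather than anything issued by a real deployment:

```python
import secrets
from urllib.parse import urlencode


def build_authorize_url(base_url: str, client_id: str, redirect_uri: str) -> tuple[str, str]:
    """Build the OAuth2 authorization URL plus a CSRF-protecting state value."""
    # The state parameter should be random and verified when the callback fires.
    state = secrets.token_urlsafe(32)
    query = urlencode({
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "state": state,
    })
    return f"{base_url}/oauth2/authorize?{query}", state


url, state = build_authorize_url(
    "https://coder.example.com",
    "your-client-id",
    "https://yourapp.example.com/callback",
)
print(url)
```

On the callback, reject the response if the returned `state` does not match the value generated here.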
### PKCE Flow (Public Clients)
For mobile apps and single-page applications, use PKCE for enhanced security:
1. Generate a code verifier and challenge:
```bash
CODE_VERIFIER=$(openssl rand -base64 96 | tr -d "=+/\n" | cut -c1-128)
CODE_CHALLENGE=$(printf '%s' "$CODE_VERIFIER" | openssl dgst -sha256 -binary | base64 | tr '+/' '-_' | tr -d '=')
```
2. Include PKCE parameters in the authorization request:
```url
https://coder.example.com/oauth2/authorize?
client_id=your-client-id&
response_type=code&
code_challenge=$CODE_CHALLENGE&
code_challenge_method=S256&
redirect_uri=https://yourapp.example.com/callback
```
3. Include the code verifier in the token exchange:
```bash
curl -X POST \
-d "grant_type=authorization_code" \
-d "code=$AUTH_CODE" \
-d "client_id=$CLIENT_ID" \
-d "code_verifier=$CODE_VERIFIER" \
"$CODER_URL/oauth2/tokens"
```
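The S256 transform used above is defined by RFC 7636 as the base64url encoding (without `=` padding) of the SHA-256 digest of the verifier. A Python sketch for cross-checking a challenge, using the test vector from RFC 7636 Appendix B:

```python
import base64
import hashlib


def s256_challenge(verifier: str) -> str:
    """code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), no padding."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")


# Test vector from RFC 7636, Appendix B.
verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
print(s256_challenge(verifier))  # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

If the server reports "PKCE verification failed", comparing your client's challenge against this function for the same verifier is a quick way to isolate an encoding bug.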
## Discovery Endpoints
Coder provides OAuth2 discovery endpoints for programmatic integration:
- **Authorization Server Metadata**: `GET /.well-known/oauth-authorization-server`
- **Protected Resource Metadata**: `GET /.well-known/oauth-protected-resource`
These endpoints return server capabilities and endpoint URLs according to [RFC 8414](https://datatracker.ietf.org/doc/html/rfc8414) and [RFC 9728](https://datatracker.ietf.org/doc/html/rfc9728).
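A client can use discovery to avoid hardcoding endpoint URLs. The sketch below parses an illustrative metadata document containing the standard RFC 8414 fields; the JSON actually returned by a Coder deployment may include additional or differently valued fields:

```python
import json

# Illustrative response shape only. In practice, fetch the real document from
# https://coder.example.com/.well-known/oauth-authorization-server
metadata_json = """{
  "issuer": "https://coder.example.com",
  "authorization_endpoint": "https://coder.example.com/oauth2/authorize",
  "token_endpoint": "https://coder.example.com/oauth2/tokens",
  "code_challenge_methods_supported": ["S256"]
}"""

metadata = json.loads(metadata_json)
authorize_url = metadata["authorization_endpoint"]
token_url = metadata["token_endpoint"]
print(authorize_url, token_url)
```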
## Token Management
### Refresh Tokens
Refresh an expired access token:
```bash
curl -X POST \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=refresh_token" \
-d "refresh_token=$REFRESH_TOKEN" \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" \
"$CODER_URL/oauth2/tokens"
```
### Revoke Access
Revoke all tokens for an application:
```bash
curl -X DELETE \
-H "Authorization: Bearer $CODER_SESSION_TOKEN" \
"$CODER_URL/oauth2/tokens?client_id=$CLIENT_ID"
```
## Testing and Development
Coder provides comprehensive test scripts for OAuth2 development:
```bash
# Navigate to the OAuth2 test scripts
cd scripts/oauth2/
# Run the full automated test suite
./test-mcp-oauth2.sh
# Create a test application for manual testing
eval $(./setup-test-app.sh)
# Run an interactive browser-based test
./test-manual-flow.sh
# Clean up when done
./cleanup-test-app.sh
```
For more details on testing, see the [OAuth2 test scripts README](../../../scripts/oauth2/README.md).

## Common Issues
### "OAuth2 experiment not enabled"
Add `oauth2` to your experiment flags: `coder server --experiments oauth2`
### "Invalid redirect_uri"
Ensure the redirect URI in your request exactly matches the one registered for your application.
### "PKCE verification failed"
Verify that the `code_verifier` used in the token request matches the one used to generate the `code_challenge`.
## Security Considerations
- **Use HTTPS**: Always use HTTPS in production to protect tokens in transit
- **Implement PKCE**: Use PKCE for all public clients (mobile apps, SPAs)
- **Validate redirect URLs**: Only register trusted redirect URIs for your applications
- **Rotate secrets**: Periodically rotate client secrets using the management API
## Limitations
As an experimental feature, the current implementation has limitations:
- No scope system - all tokens have full API access
- No client credentials grant support
- Limited to opaque access tokens (no JWT support)
## Standards Compliance
This implementation follows established OAuth2 standards including [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749) (OAuth2 core), [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636) (PKCE), and related specifications for discovery and client registration.
## Next Steps
- Review the [API Reference](../../reference/api/index.md) for complete endpoint documentation
- Check [External Authentication](../external-auth/index.md) for configuring Coder as an OAuth2 client
- See [Security Best Practices](../security/index.md) for deployment security guidance
## Feedback
This is an experimental feature under active development. Please report issues and feedback through [GitHub Issues](https://github.com/coder/coder/issues) with the `oauth2` label.
@@ -1,6 +1,6 @@
# MCP Server
Power users can configure Claude Desktop, Cursor, or other external agents to interact with Coder in order to:
Power users can configure [claude.ai](https://claude.ai), Claude Desktop, Cursor, or other external agents to interact with Coder in order to:
- List workspaces
- Create/start/stop workspaces
@@ -12,6 +12,8 @@ Power users can configure Claude Desktop, Cursor, or other external agents to in
In this model, any custom agent could interact with a remote Coder workspace, or Coder can be used in a remote pipeline or a larger workflow.
## Local MCP server
The Coder CLI has options to automatically configure MCP servers for you. On your local machine, run the following command:
```sh
@@ -30,4 +32,27 @@ coder exp mcp server
```
> [!NOTE]
> The MCP server is authenticated with the same identity as your Coder CLI and can perform any action on the user's behalf. Fine-grained permissions and a remote MCP server are in development. [Contact us](https://coder.com/contact) if this use case is important to you.
> The MCP server is authenticated with the same identity as your Coder CLI and can perform any action on the user's behalf. Fine-grained permissions are in development. [Contact us](https://coder.com/contact) if this use case is important to you.
## Remote MCP server
Coder can expose an MCP server via HTTP. This is useful for connecting web-based agents, like [claude.ai](https://claude.ai), to Coder. This is an experimental feature and is subject to change.
To enable this feature, activate the `oauth2` and `mcp-server-http` experiments using an environment variable or a CLI flag:
```sh
CODER_EXPERIMENTS="oauth2,mcp-server-http" coder server
# or
coder server --experiments=oauth2,mcp-server-http
```
The Coder server will expose the MCP server at:
```txt
https://coder.example.com/api/experimental/mcp/http
```
> [!NOTE]
> At this time, the remote MCP server is not compatible with web-based ChatGPT.
Users can authenticate applications to use the remote MCP server with [OAuth2](../admin/integrations/oauth2-provider.md). An authenticated application can perform any action on the user's behalf. Fine-grained permissions are in development.
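As a rough illustration, an MCP client over the streamable HTTP transport opens the session by POSTing a JSON-RPC `initialize` request with the OAuth2 access token attached. The payload below is a hypothetical sketch: the protocol version string, capabilities, and client info are illustrative values, not something this document or the Coder API defines.

```python
import json

# Hypothetical 'initialize' request shape for an MCP client; field values
# here (protocolVersion, clientInfo) are illustrative assumptions.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.0.1"},
    },
}

headers = {
    # Access token obtained from Coder's OAuth2 token endpoint.
    "Authorization": "Bearer $ACCESS_TOKEN",
    "Content-Type": "application/json",
}
body = json.dumps(payload)
print(body)
```

The request would be POSTed to `https://coder.example.com/api/experimental/mcp/http`; consult the MCP specification for the full handshake.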
@@ -718,6 +718,11 @@
"title": "Hashicorp Vault",
"description": "Integrate Coder with Hashicorp Vault",
"path": "./admin/integrations/vault.md"
},
{
"title": "OAuth2 Provider",
"description": "Use Coder as an OAuth2 provider",
"path": "./admin/integrations/oauth2-provider.md"
}
]
},
@@ -11,7 +11,7 @@ RUN cargo install jj-cli typos-cli watchexec-cli
FROM ubuntu:jammy@sha256:0e5e4a57c2499249aafc3b40fcd541e9a456aab7296681a3994d631587203f97 AS go
# Install Go manually, so that we can control the version
ARG GO_VERSION=1.24.4
ARG GO_VERSION=1.24.6
# Boring Go is needed to build FIPS-compliant binaries.
RUN apt-get update && \
@@ -1,6 +1,6 @@
module github.com/coder/coder/v2
go 1.24.4
go 1.24.6
// Required until a v3 of chroma is created to lazily initialize all XML files.
// None of our dependencies seem to use the registries anyways, so this
@@ -58,7 +58,7 @@ replace github.com/imulab/go-scim/pkg/v2 => github.com/coder/go-scim/pkg/v2 v2.0
// Adds support for a new Listener from a driver.Connector
// This lets us use rotating authentication tokens for passwords in connection strings
// which we use in the awsiamrds package.
replace github.com/lib/pq => github.com/coder/pq v1.10.5-0.20250630052411-a259f96b6102
replace github.com/lib/pq => github.com/coder/pq v1.10.5-0.20250807075151-6ad9b0a25151
// Removes an init() function that causes terminal sequences to be printed to the web terminal when
// used in conjunction with agent-exec. See https://github.com/coder/coder/pull/15817
@@ -912,8 +912,8 @@ github.com/coder/go-scim/pkg/v2 v2.0.0-20230221055123-1d63c1222136 h1:0RgB61LcNs
github.com/coder/go-scim/pkg/v2 v2.0.0-20230221055123-1d63c1222136/go.mod h1:VkD1P761nykiq75dz+4iFqIQIZka189tx1BQLOp0Skc=
github.com/coder/guts v1.5.0 h1:a94apf7xMf5jDdg1bIHzncbRiTn3+BvBZgrFSDbUnyI=
github.com/coder/guts v1.5.0/go.mod h1:0Sbv5Kp83u1Nl7MIQiV2zmacJ3o02I341bkWkjWXSUQ=
github.com/coder/pq v1.10.5-0.20250630052411-a259f96b6102 h1:ahTJlTRmTogsubgRVGOUj40dg62WvqPQkzTQP7pyepI=
github.com/coder/pq v1.10.5-0.20250630052411-a259f96b6102/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/coder/pq v1.10.5-0.20250807075151-6ad9b0a25151 h1:YAxwg3lraGNRwoQ18H7R7n+wsCqNve7Brdvj0F1rDnU=
github.com/coder/pq v1.10.5-0.20250807075151-6ad9b0a25151/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/coder/pretty v0.0.0-20230908205945-e89ba86370e0 h1:3A0ES21Ke+FxEM8CXx9n47SZOKOpgSE1bbJzlE4qPVs=
github.com/coder/pretty v0.0.0-20230908205945-e89ba86370e0/go.mod h1:5UuS2Ts+nTToAMeOjNlnHFkPahrtDkmpydBen/3wgZc=
github.com/coder/preview v1.0.3-0.20250714153828-a737d4750448 h1:S86sFp4Dr4dUn++fXOMOTu6ClnEZ/NrGCYv7bxZjYYc=
@@ -156,6 +156,7 @@ export const defaultParametersForBuiltinIcons = new Map<string, string>([
["/icon/kasmvnc.svg", "whiteWithColor"],
["/icon/kiro.svg", "whiteWithColor"],
["/icon/memory.svg", "monochrome"],
["/icon/openai.svg", "monochrome"],
["/icon/rust.svg", "monochrome"],
["/icon/terminal.svg", "monochrome"],
["/icon/widgets.svg", "monochrome"],
@@ -85,6 +85,7 @@
"nomad.svg",
"novnc.svg",
"okta.svg",
"openai.svg",
"personalize.svg",
"php.svg",
"phpstorm.svg",
@@ -0,0 +1,2 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg fill="#000000" width="800px" height="800px" viewBox="0 0 24 24" role="img" xmlns="http://www.w3.org/2000/svg"><title>OpenAI icon</title><path d="M22.2819 9.8211a5.9847 5.9847 0 0 0-.5157-4.9108 6.0462 6.0462 0 0 0-6.5098-2.9A6.0651 6.0651 0 0 0 4.9807 4.1818a5.9847 5.9847 0 0 0-3.9977 2.9 6.0462 6.0462 0 0 0 .7427 7.0966 5.98 5.98 0 0 0 .511 4.9107 6.051 6.051 0 0 0 6.5146 2.9001A5.9847 5.9847 0 0 0 13.2599 24a6.0557 6.0557 0 0 0 5.7718-4.2058 5.9894 5.9894 0 0 0 3.9977-2.9001 6.0557 6.0557 0 0 0-.7475-7.0729zm-9.022 12.6081a4.4755 4.4755 0 0 1-2.8764-1.0408l.1419-.0804 4.7783-2.7582a.7948.7948 0 0 0 .3927-.6813v-6.7369l2.02 1.1686a.071.071 0 0 1 .038.052v5.5826a4.504 4.504 0 0 1-4.4945 4.4944zm-9.6607-4.1254a4.4708 4.4708 0 0 1-.5346-3.0137l.142.0852 4.783 2.7582a.7712.7712 0 0 0 .7806 0l5.8428-3.3685v2.3324a.0804.0804 0 0 1-.0332.0615L9.74 19.9502a4.4992 4.4992 0 0 1-6.1408-1.6464zM2.3408 7.8956a4.485 4.485 0 0 1 2.3655-1.9728V11.6a.7664.7664 0 0 0 .3879.6765l5.8144 3.3543-2.0201 1.1685a.0757.0757 0 0 1-.071 0l-4.8303-2.7865A4.504 4.504 0 0 1 2.3408 7.872zm16.5963 3.8558L13.1038 8.364 15.1192 7.2a.0757.0757 0 0 1 .071 0l4.8303 2.7913a4.4944 4.4944 0 0 1-.6765 8.1042v-5.6772a.79.79 0 0 0-.407-.667zm2.0107-3.0231l-.142-.0852-4.7735-2.7818a.7759.7759 0 0 0-.7854 0L9.409 9.2297V6.8974a.0662.0662 0 0 1 .0284-.0615l4.8303-2.7866a4.4992 4.4992 0 0 1 6.6802 4.66zM8.3065 12.863l-2.02-1.1638a.0804.0804 0 0 1-.038-.0567V6.0742a4.4992 4.4992 0 0 1 7.3757-3.4537l-.142.0805L8.704 5.459a.7948.7948 0 0 0-.3927.6813zm1.0976-2.3654l2.602-1.4998 2.6069 1.4998v2.9994l-2.5974 1.4997-2.6067-1.4997Z"/></svg>
