Compare commits


6 Commits

Author SHA1 Message Date
Spike Curtis 0aaeb1e8db fix: bump coder/tailscale to pick up RTM_MISS fix (#24187) (#24215)
## Cherry-pick of #24187 onto `release/2.29`

Bumps `coder/tailscale` to
[`e956a95`](https://github.com/coder/tailscale/commit/e956a950740bd737c55451f56e77038f7430a919)
([PR #113](https://github.com/coder/tailscale/pull/113)) to pick up the
`RTM_MISS` fix for the Darwin network monitor.

### Why

On Darwin, `RTM_MISS` route-socket messages (fired on every failed route
lookup) were not filtered by `netmon`, causing each one to be treated as
a `LinkChange`. When netcheck sends STUN probes to an IPv6 address with
no route, this creates a self-sustaining feedback loop: `RTM_MISS` →
`LinkChange` → `ReSTUN` → netcheck → v6 STUN probe → `RTM_MISS` → …

The loop drives DERP home-region flapping at ~70× baseline, which at
fleet scale saturates PostgreSQL's `NOTIFY` lock and causes coordinator
health-check timeouts.

The upstream fix adds a single `if msg.Type == unix.RTM_MISS { return
true }` check to `skipRouteMessage`. This is safe because `RTM_MISS` is
a lookup-path signal, not a table-mutation signal — route withdrawals
always emit `RTM_DELETE` before any subsequent lookup can miss.

This issue was only reported recently, after users updated to macOS
26.4.
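The filter and its safety argument can be sketched as follows. This is a minimal standalone sketch, not the actual `netmon` code: the constants mirror Darwin's routing-socket message types, and the real implementation uses `golang.org/x/sys/unix` inside `coder/tailscale`.

```go
package main

import "fmt"

// Illustrative constants mirroring Darwin routing-socket message types;
// the upstream code reads these from golang.org/x/sys/unix.
const (
	rtmDelete = 0x2  // route removed from the table (a real mutation)
	rtmMiss   = 0x10 // failed route lookup (no table change)
)

// skipRouteMessage sketches the one-line upstream check: RTM_MISS is a
// lookup-path signal, so dropping it cannot hide a route withdrawal,
// because withdrawals always emit RTM_DELETE first.
func skipRouteMessage(msgType int) bool {
	return msgType == rtmMiss
}

func main() {
	fmt.Println(skipRouteMessage(rtmMiss))   // failed lookups are filtered out
	fmt.Println(skipRouteMessage(rtmDelete)) // table mutations still propagate
}
```

With the filter in place, the v6 STUN probe's `RTM_MISS` no longer produces a `LinkChange`, which breaks the feedback loop at its first link.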

### Notes

The cherry-pick resolved as empty due to divergent tailscale base
versions between `main` (`33e050f`) and `release/2.29` (`6eafe0f`). The
dependency was manually updated with `go mod tidy`, which also bumped
the `go` directive to 1.25.7 (required by the new tailscale module) and
pulled in required transitive dependency updates (golang.org/x/*, etc.).

Relates to ENG-2394

> 🤖 Generated by Coder Agents
2026-04-10 14:36:37 -04:00
Garrett Delfosse b0cfd1a3b5 chore: update Go from 1.25.6 to 1.25.7 (#22042) (#24221)
Cherry-pick of e82edf1b6b onto
`release/2.29`.

Original PR: #22042

**Conflict resolution:** `testutil/unixsocket.go` was deleted on
`release/2.29` but modified in the original commit — resolved by keeping
it deleted.

**Note:** Merge #24245 first to fix pre-existing CI failures on
`release/2.29`.

> 🤖 Generated by Coder Agents

Co-authored-by: Jon Ayers <jon@coder.com>
2026-04-10 13:58:12 -04:00
Garrett Delfosse ccba5732aa fix(site): resolve WS/HTTP race condition on workspace parameters page (backport v2.29) (#24249)
Cherry-pick of
https://github.com/coder/coder/commit/17d214b4a4a75a39fc499784dd2e65f81b7cdcf1
onto `release/2.29`.

Original PR: #22556

When the workspace parameters page loads, the WebSocket sends an initial
response with template defaults. For parameters with no default, the
server returns `{valid: false, value: ""}`. On first render,
`useSyncFormParameters` overwrites the form's correctly-autofilled value
with `""`. This fix preserves the current form value when the server
value is `{valid: false}`.

Fixes the consistently failing e2e test `create workspace with default
and required parameters`.
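The decision rule behind the fix is small enough to state directly. A sketch in Go for brevity (the real change lives in the TypeScript `useSyncFormParameters` hook; the type and function names here are illustrative):

```go
package main

import "fmt"

// serverParam mirrors, as an assumption, the shape of the initial
// WebSocket value: {valid: false, value: ""} for parameters that have
// no template default.
type serverParam struct {
	Valid bool
	Value string
}

// resolveValue sketches the fix: adopt the server's value only when it
// is valid; otherwise keep whatever the form already autofilled.
func resolveValue(formValue string, server serverParam) string {
	if !server.Valid {
		return formValue
	}
	return server.Value
}

func main() {
	// The autofilled value survives the initial invalid response.
	fmt.Println(resolveValue("my-workspace", serverParam{Valid: false, Value: ""}))
	// A valid server value still takes precedence.
	fmt.Println(resolveValue("", serverParam{Valid: true, Value: "fallback"}))
}
```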

> 🤖 Generated by Coder Agents

Co-authored-by: Kyle Carberry <kyle@coder.com>
2026-04-10 13:44:48 -04:00
Garrett Delfosse f8edef292f fix: backport dogfood CI fixes to release/2.29 (#24245)
Cherry-picks two fixes onto `release/2.29` that are both needed together
to unblock CI:

1. **Remove trivy from Dockerfile** (original: #23367, commit
`4c9041b2`) — upstream Trivy v0.41.0 release artifact was deleted,
causing `gzip: stdin: not in gzip format` in the `build_image` job.
2. **Remove subdomain from coder_app with command** (original: #22990,
commit `fd634626`) — `coder_app` no longer supports both `command` and
`subdomain`, causing `deploy_template` to fail on `terraform validate`.

Both commits are required for CI to pass — the first fixes the Docker
build and the second fixes Terraform validation.

> 🤖 Generated by Coder Agents

---------

Co-authored-by: Cian Johnston <cian@coder.com>
Co-authored-by: Ethan <39577870+ethanndickson@users.noreply.github.com>
2026-04-10 11:20:34 -04:00
George K 72ce5ac4ab perf: cap count queries, use native UUID ops for audit/conn logs (backport #23835) (#24116)
Backport of #23835.

Audit and connection log pages were timing out due to expensive COUNT(*)
queries over large tables. This commit adds opt-in count capping:
requests can return a `count_cap` field signaling that the count was
truncated at a threshold, avoiding full table scans that caused page
timeouts.

Text-cast UUID comparisons in regosql-generated authorization queries
also contributed to the slowdown by preventing index usage for
connection and audit log queries. These now emit native UUID operators.

Frontend changes handle the capped state in `usePaginatedQuery` and
`PaginationWidget`, optionally displaying a capped count in the
pagination UI (e.g. "Showing 2,076 to 2,100 of 2,000+ logs").
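The cap-detection convention (query at most cap+1 rows, so a count exceeding the cap means "truncated") can be sketched as a display helper. Assumptions: `formatCount` is an illustrative name, not an actual frontend helper, and a cap of 0 means capping is disabled, matching the `NULLIF(cap, 0) + 1` trick in the SQL.

```go
package main

import "fmt"

// formatCount renders a possibly-capped total. The backend LIMITs the
// count query to capLimit+1 rows, so count > capLimit signals that the
// true total was truncated and the UI should show "capLimit+".
// capLimit == 0 disables capping entirely.
func formatCount(count, capLimit int) string {
	if capLimit > 0 && count > capLimit {
		return fmt.Sprintf("%d+", capLimit)
	}
	return fmt.Sprintf("%d", count)
}

func main() {
	fmt.Println(formatCount(2001, 2000)) // truncated at the cap: "2000+"
	fmt.Println(formatCount(1500, 2000)) // under the cap, exact: "1500"
	fmt.Println(formatCount(5000, 0))    // capping disabled: "5000"
}
```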

---

Cherry-picked from 86ca61d6ca
2026-04-09 12:46:24 -04:00
Jakub Domeracki 399623b328 chore: remove trivy GHA job (backport v2.29) (#23860) 2026-04-01 12:33:50 +05:00
36 changed files with 916 additions and 577 deletions
+1 -1
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.25.6"
default: "1.25.7"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
-113
@@ -63,116 +63,3 @@ jobs:
--data "{\"content\": \"$msg\"}" \
"${{ secrets.SLACK_SECURITY_FAILURE_WEBHOOK_URL }}"
trivy:
permissions:
security-events: write
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@58077d3c7e43986b6b15fba718e8ea69e387dfcc # v2.16.0
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Setup Go
uses: ./.github/actions/setup-go
- name: Setup Node
uses: ./.github/actions/setup-node
- name: Setup sqlc
uses: ./.github/actions/setup-sqlc
- name: Install cosign
uses: ./.github/actions/install-cosign
- name: Install syft
uses: ./.github/actions/install-syft
- name: Install yq
run: go run github.com/mikefarah/yq/v4@v4.44.3
- name: Install mockgen
run: go install go.uber.org/mock/mockgen@v0.6.0
- name: Install protoc-gen-go
run: go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.30
- name: Install protoc-gen-go-drpc
run: go install storj.io/drpc/cmd/protoc-gen-go-drpc@v0.0.34
- name: Install Protoc
run: |
# protoc must be in lockstep with our dogfood Dockerfile or the
# version in the comments will differ. This is also defined in
# ci.yaml.
set -euxo pipefail
cd dogfood/coder
mkdir -p /usr/local/bin
mkdir -p /usr/local/include
DOCKER_BUILDKIT=1 docker build . --target proto -t protoc
protoc_path=/usr/local/bin/protoc
docker run --rm --entrypoint cat protoc /tmp/bin/protoc > $protoc_path
chmod +x $protoc_path
protoc --version
# Copy the generated files to the include directory.
docker run --rm -v /usr/local/include:/target protoc cp -r /tmp/include/google /target/
ls -la /usr/local/include/google/protobuf/
stat /usr/local/include/google/protobuf/timestamp.proto
- name: Build Coder linux amd64 Docker image
id: build
run: |
set -euo pipefail
version="$(./scripts/version.sh)"
image_job="build/coder_${version}_linux_amd64.tag"
# This environment variable force make to not build packages and
# archives (which the Docker image depends on due to technical reasons
# related to concurrent FS writes).
export DOCKER_IMAGE_NO_PREREQUISITES=true
# This environment variables forces scripts/build_docker.sh to build
# the base image tag locally instead of using the cached version from
# the registry.
CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
export CODER_IMAGE_BUILD_BASE_TAG
# We would like to use make -j here, but it doesn't work with the some recent additions
# to our code generation.
make "$image_job"
echo "image=$(cat "$image_job")" >> "$GITHUB_OUTPUT"
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@c1824fd6edce30d7ab345a9989de00bbd46ef284 # v0.34.0
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
output: trivy-results.sarif
severity: "CRITICAL,HIGH"
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
with:
sarif_file: trivy-results.sarif
category: "Trivy"
- name: Upload Trivy scan results as an artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: trivy
path: trivy-results.sarif
retention-days: 7
- name: Send Slack notification on failure
if: ${{ failure() }}
run: |
msg="❌ Trivy Failed\n\nhttps://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
curl \
-qfsSL \
-X POST \
-H "Content-Type: application/json" \
--data "{\"content\": \"$msg\"}" \
"${{ secrets.SLACK_SECURITY_FAILURE_WEBHOOK_URL }}"
+6
@@ -12902,6 +12902,9 @@ const docTemplate = `{
},
"count": {
"type": "integer"
},
"count_cap": {
"type": "integer"
}
}
},
@@ -13209,6 +13212,9 @@ const docTemplate = `{
},
"count": {
"type": "integer"
},
"count_cap": {
"type": "integer"
}
}
},
+6
@@ -11550,6 +11550,9 @@
},
"count": {
"type": "integer"
},
"count_cap": {
"type": "integer"
}
}
},
@@ -11836,6 +11839,9 @@
},
"count": {
"type": "integer"
},
"count_cap": {
"type": "integer"
}
}
},
+8 -1
@@ -26,6 +26,11 @@ import (
"github.com/coder/coder/v2/codersdk"
)
// Limit the count query to avoid a slow sequential scan due to joins
// on a large table. Set to 0 to disable capping (but also see the note
// in the SQL query).
const auditLogCountCap = 2000
// @Summary Get audit logs
// @ID get-audit-logs
// @Security CoderSessionToken
@@ -66,7 +71,7 @@ func (api *API) auditLogs(rw http.ResponseWriter, r *http.Request) {
countFilter.Username = ""
}
// Use the same filters to count the number of audit logs
countFilter.CountCap = auditLogCountCap
count, err := api.Database.CountAuditLogs(ctx, countFilter)
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
@@ -81,6 +86,7 @@ func (api *API) auditLogs(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.AuditLogResponse{
AuditLogs: []codersdk.AuditLog{},
Count: 0,
CountCap: auditLogCountCap,
})
return
}
@@ -98,6 +104,7 @@ func (api *API) auditLogs(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.AuditLogResponse{
AuditLogs: api.convertAuditLogs(ctx, dblogs),
Count: count,
CountCap: auditLogCountCap,
})
}
+2
@@ -607,6 +607,7 @@ func (q *sqlQuerier) CountAuthorizedAuditLogs(ctx context.Context, arg CountAudi
arg.DateTo,
arg.BuildReason,
arg.RequestID,
arg.CountCap,
)
if err != nil {
return 0, err
@@ -743,6 +744,7 @@ func (q *sqlQuerier) CountAuthorizedConnectionLogs(ctx context.Context, arg Coun
arg.WorkspaceID,
arg.ConnectionID,
arg.Status,
arg.CountCap,
)
if err != nil {
return 0, err
@@ -145,5 +145,13 @@ func extractWhereClause(query string) string {
// Remove SQL comments
whereClause = regexp.MustCompile(`(?m)--.*$`).ReplaceAllString(whereClause, "")
// Normalize indentation so subquery wrapping doesn't cause
// mismatches.
lines := strings.Split(whereClause, "\n")
for i, line := range lines {
lines[i] = strings.TrimLeft(line, " \t")
}
whereClause = strings.Join(lines, "\n")
return strings.TrimSpace(whereClause)
}
+210 -191
@@ -1486,93 +1486,105 @@ func (q *sqlQuerier) UpdateAPIKeyByID(ctx context.Context, arg UpdateAPIKeyByIDP
}
const countAuditLogs = `-- name: CountAuditLogs :one
SELECT COUNT(*)
FROM audit_logs
LEFT JOIN users ON audit_logs.user_id = users.id
LEFT JOIN organizations ON audit_logs.organization_id = organizations.id
-- First join on workspaces to get the initial workspace create
-- to workspace build 1 id. This is because the first create is
-- is a different audit log than subsequent starts.
LEFT JOIN workspaces ON audit_logs.resource_type = 'workspace'
AND audit_logs.resource_id = workspaces.id
-- Get the reason from the build if the resource type
-- is a workspace_build
LEFT JOIN workspace_builds wb_build ON audit_logs.resource_type = 'workspace_build'
AND audit_logs.resource_id = wb_build.id
-- Get the reason from the build #1 if this is the first
-- workspace create.
LEFT JOIN workspace_builds wb_workspace ON audit_logs.resource_type = 'workspace'
AND audit_logs.action = 'create'
AND workspaces.id = wb_workspace.workspace_id
AND wb_workspace.build_number = 1
WHERE
-- Filter resource_type
CASE
WHEN $1::text != '' THEN resource_type = $1::resource_type
ELSE true
END
-- Filter resource_id
AND CASE
WHEN $2::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN resource_id = $2
ELSE true
END
-- Filter organization_id
AND CASE
WHEN $3::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.organization_id = $3
ELSE true
END
-- Filter by resource_target
AND CASE
WHEN $4::text != '' THEN resource_target = $4
ELSE true
END
-- Filter action
AND CASE
WHEN $5::text != '' THEN action = $5::audit_action
ELSE true
END
-- Filter by user_id
AND CASE
WHEN $6::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN user_id = $6
ELSE true
END
-- Filter by username
AND CASE
WHEN $7::text != '' THEN user_id = (
SELECT id
FROM users
WHERE lower(username) = lower($7)
AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN $8::text != '' THEN users.email = $8
ELSE true
END
-- Filter by date_from
AND CASE
WHEN $9::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" >= $9
ELSE true
END
-- Filter by date_to
AND CASE
WHEN $10::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" <= $10
ELSE true
END
-- Filter by build_reason
AND CASE
WHEN $11::text != '' THEN COALESCE(wb_build.reason::text, wb_workspace.reason::text) = $11
ELSE true
END
-- Filter request_id
AND CASE
WHEN $12::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.request_id = $12
ELSE true
END
-- Authorize Filter clause will be injected below in CountAuthorizedAuditLogs
-- @authorize_filter
SELECT COUNT(*) FROM (
SELECT 1
FROM audit_logs
LEFT JOIN users ON audit_logs.user_id = users.id
LEFT JOIN organizations ON audit_logs.organization_id = organizations.id
-- First join on workspaces to get the initial workspace create
-- to workspace build 1 id. This is because the first create is
-- is a different audit log than subsequent starts.
LEFT JOIN workspaces ON audit_logs.resource_type = 'workspace'
AND audit_logs.resource_id = workspaces.id
-- Get the reason from the build if the resource type
-- is a workspace_build
LEFT JOIN workspace_builds wb_build ON audit_logs.resource_type = 'workspace_build'
AND audit_logs.resource_id = wb_build.id
-- Get the reason from the build #1 if this is the first
-- workspace create.
LEFT JOIN workspace_builds wb_workspace ON audit_logs.resource_type = 'workspace'
AND audit_logs.action = 'create'
AND workspaces.id = wb_workspace.workspace_id
AND wb_workspace.build_number = 1
WHERE
-- Filter resource_type
CASE
WHEN $1::text != '' THEN resource_type = $1::resource_type
ELSE true
END
-- Filter resource_id
AND CASE
WHEN $2::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN resource_id = $2
ELSE true
END
-- Filter organization_id
AND CASE
WHEN $3::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.organization_id = $3
ELSE true
END
-- Filter by resource_target
AND CASE
WHEN $4::text != '' THEN resource_target = $4
ELSE true
END
-- Filter action
AND CASE
WHEN $5::text != '' THEN action = $5::audit_action
ELSE true
END
-- Filter by user_id
AND CASE
WHEN $6::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN user_id = $6
ELSE true
END
-- Filter by username
AND CASE
WHEN $7::text != '' THEN user_id = (
SELECT id
FROM users
WHERE lower(username) = lower($7)
AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN $8::text != '' THEN users.email = $8
ELSE true
END
-- Filter by date_from
AND CASE
WHEN $9::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" >= $9
ELSE true
END
-- Filter by date_to
AND CASE
WHEN $10::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" <= $10
ELSE true
END
-- Filter by build_reason
AND CASE
WHEN $11::text != '' THEN COALESCE(wb_build.reason::text, wb_workspace.reason::text) = $11
ELSE true
END
-- Filter request_id
AND CASE
WHEN $12::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.request_id = $12
ELSE true
END
-- Authorize Filter clause will be injected below in CountAuthorizedAuditLogs
-- @authorize_filter
-- Avoid a slow scan on a large table with joins. The caller
-- passes the count cap and we add 1 so the frontend can detect
-- capping and show "... of N+". A cap of 0 means no limit (NULLIF
-- -> NULL + 1 = NULL).
-- NOTE: Parameterizing this so that we can easily change from,
-- e.g., 2000 to 5000. However, use literal NULL (or no LIMIT)
-- here if disabling the capping on a large table permanently.
-- This way the PG planner can plan parallel execution for
-- potential large wins.
LIMIT NULLIF($13::int, 0) + 1
) AS limited_count
`
type CountAuditLogsParams struct {
@@ -1588,6 +1600,7 @@ type CountAuditLogsParams struct {
DateTo time.Time `db:"date_to" json:"date_to"`
BuildReason string `db:"build_reason" json:"build_reason"`
RequestID uuid.UUID `db:"request_id" json:"request_id"`
CountCap int32 `db:"count_cap" json:"count_cap"`
}
func (q *sqlQuerier) CountAuditLogs(ctx context.Context, arg CountAuditLogsParams) (int64, error) {
@@ -1604,6 +1617,7 @@ func (q *sqlQuerier) CountAuditLogs(ctx context.Context, arg CountAuditLogsParam
arg.DateTo,
arg.BuildReason,
arg.RequestID,
arg.CountCap,
)
var count int64
err := row.Scan(&count)
@@ -1952,110 +1966,113 @@ func (q *sqlQuerier) InsertAuditLog(ctx context.Context, arg InsertAuditLogParam
}
const countConnectionLogs = `-- name: CountConnectionLogs :one
SELECT
COUNT(*) AS count
FROM
connection_logs
JOIN users AS workspace_owner ON
connection_logs.workspace_owner_id = workspace_owner.id
LEFT JOIN users ON
connection_logs.user_id = users.id
JOIN organizations ON
connection_logs.organization_id = organizations.id
WHERE
-- Filter organization_id
CASE
WHEN $1 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.organization_id = $1
ELSE true
END
-- Filter by workspace owner username
AND CASE
WHEN $2 :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE lower(username) = lower($2) AND deleted = false
)
ELSE true
END
-- Filter by workspace_owner_id
AND CASE
WHEN $3 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
workspace_owner_id = $3
ELSE true
END
-- Filter by workspace_owner_email
AND CASE
WHEN $4 :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE email = $4 AND deleted = false
)
ELSE true
END
-- Filter by type
AND CASE
WHEN $5 :: text != '' THEN
type = $5 :: connection_type
ELSE true
END
-- Filter by user_id
AND CASE
WHEN $6 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
user_id = $6
ELSE true
END
-- Filter by username
AND CASE
WHEN $7 :: text != '' THEN
user_id = (
SELECT id FROM users
WHERE lower(username) = lower($7) AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN $8 :: text != '' THEN
users.email = $8
ELSE true
END
-- Filter by connected_after
AND CASE
WHEN $9 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time >= $9
ELSE true
END
-- Filter by connected_before
AND CASE
WHEN $10 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time <= $10
ELSE true
END
-- Filter by workspace_id
AND CASE
WHEN $11 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.workspace_id = $11
ELSE true
END
-- Filter by connection_id
AND CASE
WHEN $12 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.connection_id = $12
ELSE true
END
-- Filter by whether the session has a disconnect_time
AND CASE
WHEN $13 :: text != '' THEN
(($13 = 'ongoing' AND disconnect_time IS NULL) OR
($13 = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
ELSE true
END
-- Authorize Filter clause will be injected below in
-- CountAuthorizedConnectionLogs
-- @authorize_filter
SELECT COUNT(*) AS count FROM (
SELECT 1
FROM
connection_logs
JOIN users AS workspace_owner ON
connection_logs.workspace_owner_id = workspace_owner.id
LEFT JOIN users ON
connection_logs.user_id = users.id
JOIN organizations ON
connection_logs.organization_id = organizations.id
WHERE
-- Filter organization_id
CASE
WHEN $1 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.organization_id = $1
ELSE true
END
-- Filter by workspace owner username
AND CASE
WHEN $2 :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE lower(username) = lower($2) AND deleted = false
)
ELSE true
END
-- Filter by workspace_owner_id
AND CASE
WHEN $3 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
workspace_owner_id = $3
ELSE true
END
-- Filter by workspace_owner_email
AND CASE
WHEN $4 :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE email = $4 AND deleted = false
)
ELSE true
END
-- Filter by type
AND CASE
WHEN $5 :: text != '' THEN
type = $5 :: connection_type
ELSE true
END
-- Filter by user_id
AND CASE
WHEN $6 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
user_id = $6
ELSE true
END
-- Filter by username
AND CASE
WHEN $7 :: text != '' THEN
user_id = (
SELECT id FROM users
WHERE lower(username) = lower($7) AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN $8 :: text != '' THEN
users.email = $8
ELSE true
END
-- Filter by connected_after
AND CASE
WHEN $9 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time >= $9
ELSE true
END
-- Filter by connected_before
AND CASE
WHEN $10 :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time <= $10
ELSE true
END
-- Filter by workspace_id
AND CASE
WHEN $11 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.workspace_id = $11
ELSE true
END
-- Filter by connection_id
AND CASE
WHEN $12 :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.connection_id = $12
ELSE true
END
-- Filter by whether the session has a disconnect_time
AND CASE
WHEN $13 :: text != '' THEN
(($13 = 'ongoing' AND disconnect_time IS NULL) OR
($13 = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
ELSE true
END
-- Authorize Filter clause will be injected below in
-- CountAuthorizedConnectionLogs
-- @authorize_filter
-- NOTE: See the CountAuditLogs LIMIT note.
LIMIT NULLIF($14::int, 0) + 1
) AS limited_count
`
type CountConnectionLogsParams struct {
@@ -2072,6 +2089,7 @@ type CountConnectionLogsParams struct {
WorkspaceID uuid.UUID `db:"workspace_id" json:"workspace_id"`
ConnectionID uuid.UUID `db:"connection_id" json:"connection_id"`
Status string `db:"status" json:"status"`
CountCap int32 `db:"count_cap" json:"count_cap"`
}
func (q *sqlQuerier) CountConnectionLogs(ctx context.Context, arg CountConnectionLogsParams) (int64, error) {
@@ -2089,6 +2107,7 @@ func (q *sqlQuerier) CountConnectionLogs(ctx context.Context, arg CountConnectio
arg.WorkspaceID,
arg.ConnectionID,
arg.Status,
arg.CountCap,
)
var count int64
err := row.Scan(&count)
+99 -88
@@ -149,94 +149,105 @@ VALUES (
RETURNING *;
-- name: CountAuditLogs :one
SELECT COUNT(*)
FROM audit_logs
LEFT JOIN users ON audit_logs.user_id = users.id
LEFT JOIN organizations ON audit_logs.organization_id = organizations.id
-- First join on workspaces to get the initial workspace create
-- to workspace build 1 id. This is because the first create is
-- is a different audit log than subsequent starts.
LEFT JOIN workspaces ON audit_logs.resource_type = 'workspace'
AND audit_logs.resource_id = workspaces.id
-- Get the reason from the build if the resource type
-- is a workspace_build
LEFT JOIN workspace_builds wb_build ON audit_logs.resource_type = 'workspace_build'
AND audit_logs.resource_id = wb_build.id
-- Get the reason from the build #1 if this is the first
-- workspace create.
LEFT JOIN workspace_builds wb_workspace ON audit_logs.resource_type = 'workspace'
AND audit_logs.action = 'create'
AND workspaces.id = wb_workspace.workspace_id
AND wb_workspace.build_number = 1
WHERE
-- Filter resource_type
CASE
WHEN @resource_type::text != '' THEN resource_type = @resource_type::resource_type
ELSE true
END
-- Filter resource_id
AND CASE
WHEN @resource_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN resource_id = @resource_id
ELSE true
END
-- Filter organization_id
AND CASE
WHEN @organization_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.organization_id = @organization_id
ELSE true
END
-- Filter by resource_target
AND CASE
WHEN @resource_target::text != '' THEN resource_target = @resource_target
ELSE true
END
-- Filter action
AND CASE
WHEN @action::text != '' THEN action = @action::audit_action
ELSE true
END
-- Filter by user_id
AND CASE
WHEN @user_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN user_id = @user_id
ELSE true
END
-- Filter by username
AND CASE
WHEN @username::text != '' THEN user_id = (
SELECT id
FROM users
WHERE lower(username) = lower(@username)
AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN @email::text != '' THEN users.email = @email
ELSE true
END
-- Filter by date_from
AND CASE
WHEN @date_from::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" >= @date_from
ELSE true
END
-- Filter by date_to
AND CASE
WHEN @date_to::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" <= @date_to
ELSE true
END
-- Filter by build_reason
AND CASE
WHEN @build_reason::text != '' THEN COALESCE(wb_build.reason::text, wb_workspace.reason::text) = @build_reason
ELSE true
END
-- Filter request_id
AND CASE
WHEN @request_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.request_id = @request_id
ELSE true
END
-- Authorize Filter clause will be injected below in CountAuthorizedAuditLogs
-- @authorize_filter
;
SELECT COUNT(*) FROM (
SELECT 1
FROM audit_logs
LEFT JOIN users ON audit_logs.user_id = users.id
LEFT JOIN organizations ON audit_logs.organization_id = organizations.id
-- First join on workspaces to get the initial workspace create
-- to workspace build 1 id. This is because the first create is
-- is a different audit log than subsequent starts.
LEFT JOIN workspaces ON audit_logs.resource_type = 'workspace'
AND audit_logs.resource_id = workspaces.id
-- Get the reason from the build if the resource type
-- is a workspace_build
LEFT JOIN workspace_builds wb_build ON audit_logs.resource_type = 'workspace_build'
AND audit_logs.resource_id = wb_build.id
-- Get the reason from the build #1 if this is the first
-- workspace create.
LEFT JOIN workspace_builds wb_workspace ON audit_logs.resource_type = 'workspace'
AND audit_logs.action = 'create'
AND workspaces.id = wb_workspace.workspace_id
AND wb_workspace.build_number = 1
WHERE
-- Filter resource_type
CASE
WHEN @resource_type::text != '' THEN resource_type = @resource_type::resource_type
ELSE true
END
-- Filter resource_id
AND CASE
WHEN @resource_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN resource_id = @resource_id
ELSE true
END
-- Filter organization_id
AND CASE
WHEN @organization_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.organization_id = @organization_id
ELSE true
END
-- Filter by resource_target
AND CASE
WHEN @resource_target::text != '' THEN resource_target = @resource_target
ELSE true
END
-- Filter action
AND CASE
WHEN @action::text != '' THEN action = @action::audit_action
ELSE true
END
-- Filter by user_id
AND CASE
WHEN @user_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN user_id = @user_id
ELSE true
END
-- Filter by username
AND CASE
WHEN @username::text != '' THEN user_id = (
SELECT id
FROM users
WHERE lower(username) = lower(@username)
AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN @email::text != '' THEN users.email = @email
ELSE true
END
-- Filter by date_from
AND CASE
WHEN @date_from::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" >= @date_from
ELSE true
END
-- Filter by date_to
AND CASE
WHEN @date_to::timestamp with time zone != '0001-01-01 00:00:00Z' THEN "time" <= @date_to
ELSE true
END
-- Filter by build_reason
AND CASE
WHEN @build_reason::text != '' THEN COALESCE(wb_build.reason::text, wb_workspace.reason::text) = @build_reason
ELSE true
END
-- Filter request_id
AND CASE
WHEN @request_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN audit_logs.request_id = @request_id
ELSE true
END
-- Authorize Filter clause will be injected below in CountAuthorizedAuditLogs
-- @authorize_filter
-- Avoid a slow scan on a large table with joins. The caller
-- passes the count cap and we add 1 so the frontend can detect
-- capping and show "... of N+". A cap of 0 means no limit (NULLIF
-- -> NULL + 1 = NULL).
-- NOTE: Parameterizing this so that we can easily change from,
-- e.g., 2000 to 5000. However, use literal NULL (or no LIMIT)
-- here if disabling the capping on a large table permanently.
-- This way the PG planner can plan parallel execution for
-- potential large wins.
LIMIT NULLIF(@count_cap::int, 0) + 1
) AS limited_count;
-- name: DeleteOldAuditLogConnectionEvents :exec
DELETE FROM audit_logs
+107 -105
@@ -133,111 +133,113 @@ OFFSET
@offset_opt;
-- name: CountConnectionLogs :one
SELECT
COUNT(*) AS count
FROM
connection_logs
JOIN users AS workspace_owner ON
connection_logs.workspace_owner_id = workspace_owner.id
LEFT JOIN users ON
connection_logs.user_id = users.id
JOIN organizations ON
connection_logs.organization_id = organizations.id
WHERE
-- Filter organization_id
CASE
WHEN @organization_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.organization_id = @organization_id
ELSE true
END
-- Filter by workspace owner username
AND CASE
WHEN @workspace_owner :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE lower(username) = lower(@workspace_owner) AND deleted = false
)
ELSE true
END
-- Filter by workspace_owner_id
AND CASE
WHEN @workspace_owner_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
workspace_owner_id = @workspace_owner_id
ELSE true
END
-- Filter by workspace_owner_email
AND CASE
WHEN @workspace_owner_email :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE email = @workspace_owner_email AND deleted = false
)
ELSE true
END
-- Filter by type
AND CASE
WHEN @type :: text != '' THEN
type = @type :: connection_type
ELSE true
END
-- Filter by user_id
AND CASE
WHEN @user_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
user_id = @user_id
ELSE true
END
-- Filter by username
AND CASE
WHEN @username :: text != '' THEN
user_id = (
SELECT id FROM users
WHERE lower(username) = lower(@username) AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN @user_email :: text != '' THEN
users.email = @user_email
ELSE true
END
-- Filter by connected_after
AND CASE
WHEN @connected_after :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time >= @connected_after
ELSE true
END
-- Filter by connected_before
AND CASE
WHEN @connected_before :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time <= @connected_before
ELSE true
END
-- Filter by workspace_id
AND CASE
WHEN @workspace_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.workspace_id = @workspace_id
ELSE true
END
-- Filter by connection_id
AND CASE
WHEN @connection_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.connection_id = @connection_id
ELSE true
END
-- Filter by whether the session has a disconnect_time
AND CASE
WHEN @status :: text != '' THEN
((@status = 'ongoing' AND disconnect_time IS NULL) OR
(@status = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
ELSE true
END
-- Authorize Filter clause will be injected below in
-- CountAuthorizedConnectionLogs
-- @authorize_filter
;
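The CASE-based filter chain above relies on a zero-value sentinel convention: an all-zero UUID, empty string, or zero timestamp passed for a named parameter means "this filter is disabled". A minimal Go sketch of that convention (the helper name is illustrative, not from the coder codebase):

```go
package main

import "fmt"

// zeroUUID is the sentinel the SQL compares against; an all-zero UUID
// (like an empty string or zero timestamp for other types) disables a filter.
const zeroUUID = "00000000-0000-0000-0000-000000000000"

// uuidFilterActive mirrors the SQL guard
// `CASE WHEN @param::uuid != '00000000-...'::uuid THEN ... ELSE true END`:
// only a non-zero UUID actually constrains the query.
func uuidFilterActive(id string) bool {
	return id != "" && id != zeroUUID
}

func main() {
	fmt.Println(uuidFilterActive(zeroUUID))                               // false: filter skipped
	fmt.Println(uuidFilterActive("8c0b9bdc-a013-4b14-a49b-5747bc335708")) // true: filter applied
}
```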
SELECT COUNT(*) AS count FROM (
SELECT 1
FROM
connection_logs
JOIN users AS workspace_owner ON
connection_logs.workspace_owner_id = workspace_owner.id
LEFT JOIN users ON
connection_logs.user_id = users.id
JOIN organizations ON
connection_logs.organization_id = organizations.id
WHERE
-- Filter organization_id
CASE
WHEN @organization_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.organization_id = @organization_id
ELSE true
END
-- Filter by workspace owner username
AND CASE
WHEN @workspace_owner :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE lower(username) = lower(@workspace_owner) AND deleted = false
)
ELSE true
END
-- Filter by workspace_owner_id
AND CASE
WHEN @workspace_owner_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
workspace_owner_id = @workspace_owner_id
ELSE true
END
-- Filter by workspace_owner_email
AND CASE
WHEN @workspace_owner_email :: text != '' THEN
workspace_owner_id = (
SELECT id FROM users
WHERE email = @workspace_owner_email AND deleted = false
)
ELSE true
END
-- Filter by type
AND CASE
WHEN @type :: text != '' THEN
type = @type :: connection_type
ELSE true
END
-- Filter by user_id
AND CASE
WHEN @user_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
user_id = @user_id
ELSE true
END
-- Filter by username
AND CASE
WHEN @username :: text != '' THEN
user_id = (
SELECT id FROM users
WHERE lower(username) = lower(@username) AND deleted = false
)
ELSE true
END
-- Filter by user_email
AND CASE
WHEN @user_email :: text != '' THEN
users.email = @user_email
ELSE true
END
-- Filter by connected_after
AND CASE
WHEN @connected_after :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time >= @connected_after
ELSE true
END
-- Filter by connected_before
AND CASE
WHEN @connected_before :: timestamp with time zone != '0001-01-01 00:00:00Z' THEN
connect_time <= @connected_before
ELSE true
END
-- Filter by workspace_id
AND CASE
WHEN @workspace_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.workspace_id = @workspace_id
ELSE true
END
-- Filter by connection_id
AND CASE
WHEN @connection_id :: uuid != '00000000-0000-0000-0000-000000000000'::uuid THEN
connection_logs.connection_id = @connection_id
ELSE true
END
-- Filter by whether the session has a disconnect_time
AND CASE
WHEN @status :: text != '' THEN
((@status = 'ongoing' AND disconnect_time IS NULL) OR
(@status = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
ELSE true
END
-- Authorize Filter clause will be injected below in
-- CountAuthorizedConnectionLogs
-- @authorize_filter
-- NOTE: See the CountAuditLogs LIMIT note.
LIMIT NULLIF(@count_cap::int, 0) + 1
) AS limited_count;
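The `LIMIT NULLIF(@count_cap::int, 0) + 1` clause caps the count scan at cap+1 rows (NULLIF makes a cap of 0 mean "no limit"), so a returned count greater than the cap signals "at least this many rows" without scanning the whole table. A hedged Go sketch of how a caller can interpret that result (the helper name is illustrative, not from the codebase):

```go
package main

import "fmt"

// countIsCapped sketches how a client of the capped count query can tell
// an exact count from a truncated one: the server counts at most cap+1
// rows, so any count greater than cap means "more than cap exist".
func countIsCapped(count, cap int64) bool {
	return cap > 0 && count > cap
}

func main() {
	fmt.Println(countIsCapped(2001, 2000)) // true: at least 2001 rows exist
	fmt.Println(countIsCapped(1500, 2000)) // false: 1500 is the exact count
}
```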
-- name: UpsertConnectionLog :one
INSERT INTO connection_logs (
+34
@@ -282,6 +282,40 @@ neq(input.object.owner, "");
p("'10d03e62-7703-4df5-a358-4f76577d4e2f' = id :: text") + " AND " + p("id :: text != ''") + " AND " + p("'' = ''"),
),
},
{
Name: "AuditLogUUID",
Queries: []string{
`"8c0b9bdc-a013-4b14-a49b-5747bc335708" = input.object.org_owner`,
`input.object.org_owner != ""`,
`neq(input.object.org_owner, "8c0b9bdc-a013-4b14-a49b-5747bc335708")`,
`input.object.org_owner in {"8c0b9bdc-a013-4b14-a49b-5747bc335708", "05f58202-4bfc-43ce-9ba4-5ff6e0174a71"}`,
`"read" in input.object.acl_group_list[input.object.org_owner]`,
},
ExpectedSQL: p(
p("audit_logs.organization_id = '8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid") + " OR " +
p("audit_logs.organization_id IS NOT NULL") + " OR " +
p("audit_logs.organization_id != '8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid") + " OR " +
p("audit_logs.organization_id = ANY(ARRAY ['05f58202-4bfc-43ce-9ba4-5ff6e0174a71'::uuid,'8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid])") + " OR " +
"(false)"),
VariableConverter: regosql.AuditLogConverter(),
},
{
Name: "ConnectionLogUUID",
Queries: []string{
`"8c0b9bdc-a013-4b14-a49b-5747bc335708" = input.object.org_owner`,
`input.object.org_owner != ""`,
`neq(input.object.org_owner, "8c0b9bdc-a013-4b14-a49b-5747bc335708")`,
`input.object.org_owner in {"8c0b9bdc-a013-4b14-a49b-5747bc335708"}`,
`"read" in input.object.acl_group_list[input.object.org_owner]`,
},
ExpectedSQL: p(
p("connection_logs.organization_id = '8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid") + " OR " +
p("connection_logs.organization_id IS NOT NULL") + " OR " +
p("connection_logs.organization_id != '8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid") + " OR " +
p("connection_logs.organization_id = ANY(ARRAY ['8c0b9bdc-a013-4b14-a49b-5747bc335708'::uuid])") + " OR " +
"(false)"),
VariableConverter: regosql.ConnectionLogConverter(),
},
}
for _, tc := range testCases {
+2 -2
@@ -53,7 +53,7 @@ func WorkspaceConverter() *sqltypes.VariableConverter {
func AuditLogConverter() *sqltypes.VariableConverter {
matcher := sqltypes.NewVariableConverter().RegisterMatcher(
resourceIDMatcher(),
sqltypes.StringVarMatcher("COALESCE(audit_logs.organization_id :: text, '')", []string{"input", "object", "org_owner"}),
sqltypes.UUIDVarMatcher("audit_logs.organization_id", []string{"input", "object", "org_owner"}),
// Audit logs have no user owner; they are only owned by an organization.
sqltypes.AlwaysFalse(userOwnerMatcher()),
)
@@ -67,7 +67,7 @@ func AuditLogConverter() *sqltypes.VariableConverter {
func ConnectionLogConverter() *sqltypes.VariableConverter {
matcher := sqltypes.NewVariableConverter().RegisterMatcher(
resourceIDMatcher(),
sqltypes.StringVarMatcher("COALESCE(connection_logs.organization_id :: text, '')", []string{"input", "object", "org_owner"}),
sqltypes.UUIDVarMatcher("connection_logs.organization_id", []string{"input", "object", "org_owner"}),
// Connection logs have no user owner; they are only owned by an organization.
sqltypes.AlwaysFalse(userOwnerMatcher()),
)
+114
@@ -0,0 +1,114 @@
package sqltypes
import (
"fmt"
"strings"
"github.com/open-policy-agent/opa/ast"
"golang.org/x/xerrors"
)
var (
_ VariableMatcher = astUUIDVar{}
_ Node = astUUIDVar{}
_ SupportsEquality = astUUIDVar{}
)
// astUUIDVar is a variable that represents a UUID column. Unlike
// astStringVar it emits native UUID comparisons (column = 'val'::uuid)
// instead of text-based ones (COALESCE(column::text, '') = 'val').
// This allows PostgreSQL to use indexes on UUID columns.
type astUUIDVar struct {
Source RegoSource
FieldPath []string
ColumnString string
}
func UUIDVarMatcher(sqlColumn string, regoPath []string) VariableMatcher {
return astUUIDVar{FieldPath: regoPath, ColumnString: sqlColumn}
}
func (astUUIDVar) UseAs() Node { return astUUIDVar{} }
func (u astUUIDVar) ConvertVariable(rego ast.Ref) (Node, bool) {
left, err := RegoVarPath(u.FieldPath, rego)
if err == nil && len(left) == 0 {
return astUUIDVar{
Source: RegoSource(rego.String()),
FieldPath: u.FieldPath,
ColumnString: u.ColumnString,
}, true
}
return nil, false
}
func (u astUUIDVar) SQLString(_ *SQLGenerator) string {
return u.ColumnString
}
// EqualsSQLString handles equality comparisons for UUID columns.
// Rego always produces string literals, so we accept AstString and
// cast the literal to ::uuid in the output SQL. This lets PG use
// native UUID indexes instead of falling back to text comparisons.
// nolint:revive
func (u astUUIDVar) EqualsSQLString(cfg *SQLGenerator, not bool, other Node) (string, error) {
switch other.UseAs().(type) {
case AstString:
// The other side is a rego string literal like
// "8c0b9bdc-a013-4b14-a49b-5747bc335708". Emit a comparison
// that casts the literal to uuid so PG can use indexes:
// column = 'val'::uuid
// instead of the text-based:
// 'val' = COALESCE(column::text, '')
s, ok := other.(AstString)
if !ok {
return "", xerrors.Errorf("expected AstString, got %T", other)
}
if s.Value == "" {
// Empty string in rego means "no value". Compare the
// column against NULL since UUID columns represent
// absent values as NULL, not empty strings.
op := "IS NULL"
if not {
op = "IS NOT NULL"
}
return fmt.Sprintf("%s %s", u.ColumnString, op), nil
}
return fmt.Sprintf("%s %s '%s'::uuid",
u.ColumnString, equalsOp(not), s.Value), nil
case astUUIDVar:
return basicSQLEquality(cfg, not, u, other), nil
default:
return "", xerrors.Errorf("unsupported equality: %T %s %T",
u, equalsOp(not), other)
}
}
// ContainedInSQL implements SupportsContainedIn so that a UUID column
// can appear in membership checks like `col = ANY(ARRAY[...])`. The
// array elements are rego strings, so we cast each to ::uuid.
func (u astUUIDVar) ContainedInSQL(_ *SQLGenerator, haystack Node) (string, error) {
arr, ok := haystack.(ASTArray)
if !ok {
return "", xerrors.Errorf("unsupported containedIn: %T in %T", u, haystack)
}
if len(arr.Value) == 0 {
return "false", nil
}
// Build ARRAY['uuid1'::uuid, 'uuid2'::uuid, ...]
values := make([]string, 0, len(arr.Value))
for _, v := range arr.Value {
s, ok := v.(AstString)
if !ok {
return "", xerrors.Errorf("expected AstString array element, got %T", v)
}
values = append(values, fmt.Sprintf("'%s'::uuid", s.Value))
}
return fmt.Sprintf("%s = ANY(ARRAY [%s])",
u.ColumnString,
strings.Join(values, ",")), nil
}
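To illustrate the SQL shapes astUUIDVar produces, here is a standalone sketch that reproduces the formatting by hand (it does not use the real regosql types; the column name and UUID values are just examples taken from the test cases above):

```go
package main

import (
	"fmt"
	"strings"
)

// equalsSQL mirrors astUUIDVar.EqualsSQLString: an empty rego string maps
// to a NULL check, anything else becomes a native ::uuid comparison.
func equalsSQL(col, val string, not bool) string {
	if val == "" {
		if not {
			return col + " IS NOT NULL"
		}
		return col + " IS NULL"
	}
	op := "="
	if not {
		op = "!="
	}
	return fmt.Sprintf("%s %s '%s'::uuid", col, op, val)
}

// containedInSQL mirrors astUUIDVar.ContainedInSQL: each array element is
// cast to ::uuid inside an ANY(ARRAY [...]) membership check.
func containedInSQL(col string, vals []string) string {
	if len(vals) == 0 {
		return "false"
	}
	cast := make([]string, 0, len(vals))
	for _, v := range vals {
		cast = append(cast, "'"+v+"'::uuid")
	}
	return fmt.Sprintf("%s = ANY(ARRAY [%s])", col, strings.Join(cast, ","))
}

func main() {
	col := "connection_logs.organization_id"
	id := "8c0b9bdc-a013-4b14-a49b-5747bc335708"
	fmt.Println(equalsSQL(col, id, false))
	fmt.Println(equalsSQL(col, "", true)) // empty rego string: IS NOT NULL
	fmt.Println(containedInSQL(col, []string{id}))
}
```

The outputs match the `ExpectedSQL` strings in the ConnectionLogUUID test case above, which is the behavior this sketch is modeled on.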
+2 -1
@@ -67,7 +67,7 @@ func AuditLogs(ctx context.Context, db database.Store, query string) (database.G
}
// Prepare the count filter, which uses the same parameters as the GetAuditLogsOffsetParams.
// nolint:exhaustruct // UserID is not obtained from the query parameters.
// nolint:exhaustruct // UserID and CountCap are not obtained from the query parameters.
countFilter := database.CountAuditLogsParams{
RequestID: filter.RequestID,
ResourceID: filter.ResourceID,
@@ -124,6 +124,7 @@ func ConnectionLogs(ctx context.Context, db database.Store, query string, apiKey
}
// This MUST be kept in sync with the above
// nolint:exhaustruct // CountCap is not obtained from the query parameters.
countFilter := database.CountConnectionLogsParams{
OrganizationID: filter.OrganizationID,
WorkspaceOwner: filter.WorkspaceOwner,
+1
@@ -209,6 +209,7 @@ type AuditLogsRequest struct {
type AuditLogResponse struct {
AuditLogs []AuditLog `json:"audit_logs"`
Count int64 `json:"count"`
CountCap int64 `json:"count_cap"`
}
type CreateTestAuditLogRequest struct {
+1
@@ -96,6 +96,7 @@ type ConnectionLogsRequest struct {
type ConnectionLogResponse struct {
ConnectionLogs []ConnectionLog `json:"connection_logs"`
Count int64 `json:"count"`
CountCap int64 `json:"count_cap"`
}
func (c *Client) ConnectionLogs(ctx context.Context, req ConnectionLogsRequest) (ConnectionLogResponse, error) {
+2 -1
@@ -88,7 +88,8 @@ curl -X GET http://coder-server:8080/api/v2/audit?limit=0 \
"user_agent": "string"
}
],
"count": 0
"count": 0,
"count_cap": 0
}
```
+2 -1
@@ -289,7 +289,8 @@ curl -X GET http://coder-server:8080/api/v2/connectionlog?limit=0 \
"workspace_owner_username": "string"
}
],
"count": 0
"count": 0,
"count_cap": 0
}
```
+6 -2
@@ -1417,7 +1417,8 @@
"user_agent": "string"
}
],
"count": 0
"count": 0,
"count_cap": 0
}
```
@@ -1427,6 +1428,7 @@
|--------------|-------------------------------------------------|----------|--------------|-------------|
| `audit_logs` | array of [codersdk.AuditLog](#codersdkauditlog) | false | | |
| `count` | integer | false | | |
| `count_cap` | integer | false | | |
## codersdk.AuthMethod
@@ -1845,7 +1847,8 @@ AuthorizationObject can represent a "set" of objects, such as: all workspaces in
"workspace_owner_username": "string"
}
],
"count": 0
"count": 0,
"count_cap": 0
}
```
@@ -1855,6 +1858,7 @@ AuthorizationObject can represent a "set" of objects, such as: all workspaces in
|-------------------|-----------------------------------------------------------|----------|--------------|-------------|
| `connection_logs` | array of [codersdk.ConnectionLog](#codersdkconnectionlog) | false | | |
| `count` | integer | false | | |
| `count_cap` | integer | false | | |
## codersdk.ConnectionLogSSHInfo
+2 -6
@@ -11,8 +11,8 @@ RUN cargo install jj-cli typos-cli watchexec-cli@2.3.2
FROM ubuntu:jammy@sha256:104ae83764a5119017b8e8d6218fa0832b09df65aae7d5a6de29a85d813da2fb AS go
# Install Go manually, so that we can control the version
ARG GO_VERSION=1.25.6
ARG GO_CHECKSUM="f022b6aad78e362bcba9b0b94d09ad58c5a70c6ba3b7582905fababf5fe0181a"
ARG GO_VERSION=1.25.7
ARG GO_CHECKSUM="12e6d6a191091ae27dc31f6efc630e3a3b8ba409baf3573d955b196fdf086005"
# Boring Go is needed to build FIPS-compliant binaries.
RUN apt-get update && \
@@ -298,7 +298,6 @@ ARG CLOUD_SQL_PROXY_VERSION=2.2.0 \
KUBECTX_VERSION=0.9.4 \
STRIPE_VERSION=1.14.5 \
TERRAGRUNT_VERSION=0.45.11 \
TRIVY_VERSION=0.41.0 \
SYFT_VERSION=1.20.0 \
COSIGN_VERSION=2.4.3 \
BUN_VERSION=1.2.15
@@ -337,9 +336,6 @@ RUN curl --silent --show-error --location --output /usr/local/bin/cloud_sql_prox
# terragrunt for running Terraform and Terragrunt files
curl --silent --show-error --location --output /usr/local/bin/terragrunt "https://github.com/gruntwork-io/terragrunt/releases/download/v${TERRAGRUNT_VERSION}/terragrunt_linux_amd64" && \
chmod a=rx /usr/local/bin/terragrunt && \
# AquaSec Trivy for scanning container images for security issues
curl --silent --show-error --location "https://github.com/aquasecurity/trivy/releases/download/v${TRIVY_VERSION}/trivy_${TRIVY_VERSION}_Linux-64bit.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- trivy && \
# Anchore Syft for SBOM generation
curl --silent --show-error --location "https://github.com/anchore/syft/releases/download/v${SYFT_VERSION}/syft_${SYFT_VERSION}_linux_amd64.tar.gz" | \
tar --extract --gzip --directory=/usr/local/bin --file=- syft && \
-1
@@ -890,7 +890,6 @@ resource "coder_app" "develop_sh" {
icon = "${data.coder_workspace.me.access_url}/emojis/1f4bb.png" // 💻
command = "screen -x develop_sh"
share = "authenticated"
subdomain = true
open_in = "tab"
order = 0
}
+6
@@ -16,6 +16,9 @@ import (
"github.com/coder/coder/v2/codersdk"
)
// NOTE: See the auditLogCountCap note.
const connectionLogCountCap = 2000
// @Summary Get connection logs
// @ID get-connection-logs
// @Security CoderSessionToken
@@ -49,6 +52,7 @@ func (api *API) connectionLogs(rw http.ResponseWriter, r *http.Request) {
// #nosec G115 - Safe conversion as pagination limit is expected to be within int32 range
filter.LimitOpt = int32(page.Limit)
countFilter.CountCap = connectionLogCountCap
count, err := api.Database.CountConnectionLogs(ctx, countFilter)
if dbauthz.IsNotAuthorizedError(err) {
httpapi.Forbidden(rw)
@@ -63,6 +67,7 @@ func (api *API) connectionLogs(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.ConnectionLogResponse{
ConnectionLogs: []codersdk.ConnectionLog{},
Count: 0,
CountCap: connectionLogCountCap,
})
return
}
@@ -80,6 +85,7 @@ func (api *API) connectionLogs(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(ctx, rw, http.StatusOK, codersdk.ConnectionLogResponse{
ConnectionLogs: convertConnectionLogs(dblogs),
Count: count,
CountCap: connectionLogCountCap,
})
}
+3 -3
@@ -94,7 +94,7 @@
# 3. Update the sha256 and run again
# 4. Nix will fail with the correct vendorHash
# 5. Update the vendorHash
sqlc-custom = unstablePkgs.buildGo124Module {
sqlc-custom = unstablePkgs.buildGo125Module {
pname = "sqlc";
version = "coder-fork-aab4e865a51df0c43e1839f81a9d349b41d14f05";
@@ -156,7 +156,7 @@
gnused
gnugrep
gnutar
unstablePkgs.go_1_24
unstablePkgs.go_1_25
gofumpt
go-migrate
(pinnedPkgs.golangci-lint)
@@ -224,7 +224,7 @@
# slim bundle into its own derivation.
buildFat =
osArch:
unstablePkgs.buildGo124Module {
unstablePkgs.buildGo125Module {
name = "coder-${osArch}";
# Updated with ./scripts/update-flake.sh.
# This should be updated whenever go.mod changes!
+14 -14
@@ -1,6 +1,6 @@
module github.com/coder/coder/v2
go 1.25.6
go 1.25.7
// Required until a v3 of chroma is created to lazily initialize all XML files.
// None of our dependencies seem to use the registries anyways, so this
@@ -36,7 +36,7 @@ replace github.com/tcnksm/go-httpstat => github.com/coder/go-httpstat v0.0.0-202
// There are a few minor changes we make to Tailscale that we're slowly upstreaming. Compare here:
// https://github.com/tailscale/tailscale/compare/main...coder:tailscale:main
replace tailscale.com => github.com/coder/tailscale v1.1.1-0.20250829055706-6eafe0f9199e
replace tailscale.com => github.com/coder/tailscale v1.1.1-0.20260409064601-e956a950740b
// This is replaced to include
// 1. a fix for a data race: c.f. https://github.com/tailscale/wireguard-go/pull/25
@@ -194,16 +194,16 @@ require (
go.uber.org/goleak v1.3.1-0.20240429205332-517bace7cc29
go.uber.org/mock v0.6.0
go4.org/netipx v0.0.0-20230728180743-ad4cb58a6516
golang.org/x/crypto v0.46.0
golang.org/x/crypto v0.48.0
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546
golang.org/x/mod v0.31.0
golang.org/x/net v0.48.0
golang.org/x/mod v0.33.0
golang.org/x/net v0.50.0
golang.org/x/oauth2 v0.34.0
golang.org/x/sync v0.19.0
golang.org/x/sys v0.40.0
golang.org/x/term v0.38.0
golang.org/x/text v0.32.0
golang.org/x/tools v0.40.0
golang.org/x/sync v0.20.0
golang.org/x/sys v0.41.0
golang.org/x/term v0.40.0
golang.org/x/text v0.35.0
golang.org/x/tools v0.42.0
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da
google.golang.org/api v0.260.0
google.golang.org/grpc v1.78.0
@@ -223,7 +223,7 @@ require (
cloud.google.com/go/logging v1.13.1 // indirect
cloud.google.com/go/longrunning v0.7.0 // indirect
dario.cat/mergo v1.0.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
filippo.io/edwards25519 v1.1.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/DataDog/appsec-internal-go v1.11.2 // indirect
github.com/DataDog/datadog-agent/pkg/obfuscate v0.64.2 // indirect
@@ -283,7 +283,7 @@ require (
github.com/containerd/continuity v0.4.5 // indirect
github.com/coreos/go-iptables v0.6.0 // indirect
github.com/dlclark/regexp2 v1.11.5 // indirect
github.com/docker/cli v29.1.1+incompatible // indirect
github.com/docker/cli v29.2.0+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dop251/goja v0.0.0-20241024094426-79f3a7efcdbd // indirect
@@ -478,7 +478,7 @@ require (
github.com/danieljoos/wincred v1.2.3
github.com/dgraph-io/ristretto/v2 v2.3.0
github.com/fsnotify/fsnotify v1.9.0
github.com/go-git/go-git/v5 v5.16.5
github.com/go-git/go-git/v5 v5.17.1
github.com/icholy/replace v0.6.0
github.com/mark3labs/mcp-go v0.38.0
gonum.org/v1/gonum v0.16.0
@@ -524,7 +524,7 @@ require (
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
github.com/esiqveland/notify v0.13.3 // indirect
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
github.com/go-git/go-billy/v5 v5.6.2 // indirect
github.com/go-git/go-billy/v5 v5.8.0 // indirect
github.com/go-openapi/swag/conv v0.25.4 // indirect
github.com/go-openapi/swag/jsonname v0.25.4 // indirect
github.com/go-openapi/swag/jsonutils v0.25.4 // indirect
+28 -27
@@ -24,8 +24,9 @@ cloud.google.com/go/trace v1.11.7 h1:kDNDX8JkaAG3R2nq1lIdkb7FCSi1rCmsEtKVsty7p+U
cloud.google.com/go/trace v1.11.7/go.mod h1:TNn9d5V3fQVf6s4SCveVMIBS2LJUqo73GACmq/Tky0s=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
filippo.io/edwards25519 v1.1.1 h1:YpjwWWlNmGIDyXOn8zLzqiD+9TyIlPhGFG96P39uBpw=
filippo.io/edwards25519 v1.1.1/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
filippo.io/mkcert v1.4.4 h1:8eVbbwfVlaqUM7OwuftKc2nuYOoTDQWqsoXmzoXZdbc=
filippo.io/mkcert v1.4.4/go.mod h1:VyvOchVuAye3BoUsPUOOofKygVwLV2KQMVFJNRq+1dA=
git.sr.ht/~jackmordaunt/go-toast v1.1.2 h1:/yrfI55LRt1M7H1vkaw+NaH1+L1CDxrqDltwm5euVuE=
@@ -325,8 +326,8 @@ github.com/coder/serpent v0.12.0 h1:fUu3qVjeRvVy3DB/C2EFFvOctm+f2HKyckyfA86O63Q=
github.com/coder/serpent v0.12.0/go.mod h1:mPEpD8Cq106E0glBs5ROAAGoALLtD5HAAMVZmjf4zO0=
github.com/coder/ssh v0.0.0-20231128192721-70855dedb788 h1:YoUSJ19E8AtuUFVYBpXuOD6a/zVP3rcxezNsoDseTUw=
github.com/coder/ssh v0.0.0-20231128192721-70855dedb788/go.mod h1:aGQbuCLyhRLMzZF067xc84Lh7JDs1FKwCmF1Crl9dxQ=
github.com/coder/tailscale v1.1.1-0.20250829055706-6eafe0f9199e h1:9RKGKzGLHtTvVBQublzDGtCtal3cXP13diCHoAIGPeI=
github.com/coder/tailscale v1.1.1-0.20250829055706-6eafe0f9199e/go.mod h1:jU9T1vEs+DOs8NtGp1F2PT0/TOGVwtg/JCCKYRgvMOs=
github.com/coder/tailscale v1.1.1-0.20260409064601-e956a950740b h1:HW3db+iEczHHSsPLJokZRJTO788qf782qJcR9YAeAaM=
github.com/coder/tailscale v1.1.1-0.20260409064601-e956a950740b/go.mod h1:9lK5yqqKpK5yhDv4G8ZDDHr2S8EATEjLyUkLTKDbPzU=
github.com/coder/terraform-config-inspect v0.0.0-20250107175719-6d06d90c630e h1:JNLPDi2P73laR1oAclY6jWzAbucf70ASAvf5mh2cME0=
github.com/coder/terraform-config-inspect v0.0.0-20250107175719-6d06d90c630e/go.mod h1:Gz/z9Hbn+4KSp8A2FBtNszfLSdT2Tn/uAKGuVqqWmDI=
github.com/coder/terraform-provider-coder/v2 v2.13.1 h1:dtPaJUvueFm+XwBPUMWQCc5Z1QUQBW4B4RNyzX4h4y8=
@@ -395,8 +396,8 @@ github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5
github.com/dlclark/regexp2 v1.4.0/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc=
github.com/dlclark/regexp2 v1.11.5 h1:Q/sSnsKerHeCkc/jSTNq1oCm7KiVgUMZRDUoRu0JQZQ=
github.com/dlclark/regexp2 v1.11.5/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/docker/cli v29.1.1+incompatible h1:gGQk5qx62yPKRm3bUdKBzmDBSQzp17hlSLbV1F7jjys=
github.com/docker/cli v29.1.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v29.2.0+incompatible h1:9oBd9+YM7rxjZLfyMGxjraKBKE4/nVyvVfN4qNl9XRM=
github.com/docker/cli v29.2.0+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/docker v28.5.2+incompatible h1:DBX0Y0zAjZbSrm1uzOkdr1onVghKaftjlSWt4AFexzM=
github.com/docker/docker v28.5.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
@@ -482,10 +483,10 @@ github.com/go-chi/httprate v0.15.0 h1:j54xcWV9KGmPf/X4H32/aTH+wBlrvxL7P+SdnRqxh5
github.com/go-chi/httprate v0.15.0/go.mod h1:rzGHhVrsBn3IMLYDOZQsSU4fJNWcjui4fWKJcCId1R4=
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI=
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UNbRM=
github.com/go-git/go-billy/v5 v5.6.2/go.mod h1:rcFC2rAsp/erv7CMz9GczHcuD0D32fWzH+MJAU+jaUU=
github.com/go-git/go-git/v5 v5.16.5 h1:mdkuqblwr57kVfXri5TTH+nMFLNUxIj9Z7F5ykFbw5s=
github.com/go-git/go-git/v5 v5.16.5/go.mod h1:QOMLpNf1qxuSY4StA/ArOdfFR2TrKEjJiye2kel2m+M=
github.com/go-git/go-billy/v5 v5.8.0 h1:I8hjc3LbBlXTtVuFNJuwYuMiHvQJDq1AT6u4DwDzZG0=
github.com/go-git/go-billy/v5 v5.8.0/go.mod h1:RpvI/rw4Vr5QA+Z60c6d6LXH0rYJo0uD5SqfmrrheCY=
github.com/go-git/go-git/v5 v5.17.1 h1:WnljyxIzSj9BRRUlnmAU35ohDsjRK0EKmL0evDqi5Jk=
github.com/go-git/go-git/v5 v5.17.1/go.mod h1:pW/VmeqkanRFqR6AljLcs7EA7FbZaN5MQqO7oZADXpo=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
@@ -1306,12 +1307,12 @@ golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/image v0.32.0 h1:6lZQWq75h7L5IWNk0r+SCpUJ6tUVd3v4ZHnbRKLkUDQ=
golang.org/x/image v0.32.0/go.mod h1:/R37rrQmKXtO6tYXAjtDLwQgFLHmhW+V6ayXlxzP2Pc=
golang.org/x/image v0.38.0 h1:5l+q+Y9JDC7mBOMjo4/aPhMDcxEptsX+Tt3GgRQRPuE=
golang.org/x/image v0.38.0/go.mod h1:/3f6vaXC+6CEanU4KJxbcUZyEePbyKbaLoDOe4ehFYY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -1320,8 +1321,8 @@ golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.31.0 h1:HaW9xtz0+kOcWKwli0ZXy79Ix+UW/vOfmWI5QVd2tgI=
golang.org/x/mod v0.31.0/go.mod h1:43JraMp9cGx1Rx3AqioxrbrhNsLl2l/iNAvuBkrezpg=
golang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=
golang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -1337,8 +1338,8 @@ golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/net v0.50.0 h1:ucWh9eiCGyDR3vtzso0WMQinm2Dnt8cFMuQa9K33J60=
golang.org/x/net v0.50.0/go.mod h1:UgoSli3F/pBgdJBHCTc+tp3gmrU4XswgGRgtnwWTfyM=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1352,8 +1353,8 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1392,8 +1393,8 @@ golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1406,8 +1407,8 @@ golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
golang.org/x/term v0.40.0 h1:36e4zGLqU4yhjlmxEaagx2KuYbJq3EwY8K943ZsHcvg=
golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
@@ -1420,8 +1421,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8=
golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1434,8 +1435,8 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.40.0 h1:yLkxfA+Qnul4cs9QA3KnlFu0lVmd8JJfoq+E41uSutA=
golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc=
golang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -744,6 +744,7 @@ export interface AuditLog {
export interface AuditLogResponse {
readonly audit_logs: readonly AuditLog[];
readonly count: number;
readonly count_cap: number;
}
// From codersdk/audit.go
@@ -1050,6 +1051,7 @@ export interface ConnectionLog {
export interface ConnectionLogResponse {
readonly connection_logs: readonly ConnectionLog[];
readonly count: number;
readonly count_cap: number;
}
// From codersdk/connectionlog.go
@@ -18,6 +18,7 @@ export const mockPaginationResultBase: ResultBase = {
limit: 25,
hasNextPage: false,
hasPreviousPage: false,
countIsCapped: false,
goToPreviousPage: () => {},
goToNextPage: () => {},
goToFirstPage: () => {},
@@ -33,6 +34,7 @@ export const mockInitialRenderResult: PaginationResult = {
hasPreviousPage: false,
totalRecords: undefined,
totalPages: undefined,
countIsCapped: false,
};
export const mockSuccessResult: PaginationResult = {
@@ -119,3 +119,54 @@ export const SecondPageWithData: Story = {
children: <div>New data for page 2</div>,
},
};
export const CappedCountFirstPage: Story = {
args: {
query: {
...mockPaginationResultBase,
isSuccess: true,
currentPage: 1,
currentOffsetStart: 1,
totalRecords: 2000,
totalPages: 80,
hasPreviousPage: false,
hasNextPage: true,
isPlaceholderData: false,
countIsCapped: true,
},
},
};
export const CappedCountMiddlePage: Story = {
args: {
query: {
...mockPaginationResultBase,
isSuccess: true,
currentPage: 3,
currentOffsetStart: 51,
totalRecords: 2000,
totalPages: 80,
hasPreviousPage: true,
hasNextPage: true,
isPlaceholderData: false,
countIsCapped: true,
},
},
};
export const CappedCountBeyondKnownPages: Story = {
args: {
query: {
...mockPaginationResultBase,
isSuccess: true,
currentPage: 85,
currentOffsetStart: 2101,
totalRecords: 2000,
totalPages: 85,
hasPreviousPage: true,
hasNextPage: true,
isPlaceholderData: false,
countIsCapped: true,
},
},
};
@@ -25,6 +25,7 @@ export const PaginationContainer: FC<PaginationProps> = ({
totalRecords={query.totalRecords}
currentOffsetStart={query.currentOffsetStart}
paginationUnitLabel={paginationUnitLabel}
countIsCapped={query.countIsCapped}
/>
<div
@@ -40,6 +41,7 @@ export const PaginationContainer: FC<PaginationProps> = ({
{query.isSuccess && (
<PaginationWidgetBase
totalRecords={query.totalRecords}
totalPages={query.totalPages}
currentPage={query.currentPage}
pageSize={query.limit}
onPageChange={query.onPageChange}
@@ -7,6 +7,7 @@ type PaginationHeaderProps = {
limit: number;
totalRecords: number | undefined;
currentOffsetStart: number | undefined;
countIsCapped?: boolean;
// Temporary escape hatch until Workspaces can be switched over to using
// PaginationContainer
@@ -18,6 +19,7 @@ export const PaginationHeader: FC<PaginationHeaderProps> = ({
limit,
totalRecords,
currentOffsetStart,
countIsCapped,
className,
}) => {
const theme = useTheme();
@@ -49,12 +51,20 @@ export const PaginationHeader: FC<PaginationHeaderProps> = ({
{totalRecords !== 0 && currentOffsetStart !== undefined && (
<div>
Showing <strong>{currentOffsetStart}</strong> to{" "}
Showing <strong>{currentOffsetStart.toLocaleString()}</strong> to{" "}
<strong>
{currentOffsetStart +
Math.min(limit - 1, totalRecords - currentOffsetStart)}
{(
currentOffsetStart +
(countIsCapped
? limit - 1
: Math.min(limit - 1, totalRecords - currentOffsetStart))
).toLocaleString()}
</strong>{" "}
of{" "}
<strong>
{totalRecords.toLocaleString()}
{countIsCapped && "+"}
</strong>{" "}
of <strong>{totalRecords.toLocaleString()}</strong>{" "}
{paginationUnitLabel}
</div>
)}
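The "Showing X to Y of Z" arithmetic in the header hunk above can be sketched standalone. This is a minimal illustration of the diff's logic, not an actual export; the name `rangeLabel` and the returned string shape are assumptions:

```typescript
// Sketch of the capped-count range text from the PaginationHeader diff.
// When the count is capped, the true total is unknown, so the range end is
// simply offsetStart + limit - 1 and a "+" is appended to the total.
function rangeLabel(
	offsetStart: number,
	limit: number,
	totalRecords: number,
	countIsCapped: boolean,
): string {
	const end =
		offsetStart +
		(countIsCapped
			? limit - 1
			: Math.min(limit - 1, totalRecords - offsetStart));
	const total = `${totalRecords.toLocaleString()}${countIsCapped ? "+" : ""}`;
	return `Showing ${offsetStart.toLocaleString()} to ${end.toLocaleString()} of ${total}`;
}

// e.g. rangeLabel(51, 25, 2000, true) reads "Showing 51 to 75 of 2,000+",
// while an uncapped final page clamps to the real total.
```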
@@ -14,6 +14,10 @@ export type PaginationWidgetBaseProps = {
hasPreviousPage?: boolean;
hasNextPage?: boolean;
/** Override the computed totalPages.
* Used when, e.g., the row count is capped and the user navigates beyond
* the known range, so totalPages stays at least as high as currentPage. */
totalPages?: number;
};
export const PaginationWidgetBase: FC<PaginationWidgetBaseProps> = ({
@@ -23,10 +27,11 @@ export const PaginationWidgetBase: FC<PaginationWidgetBaseProps> = ({
onPageChange,
hasPreviousPage,
hasNextPage,
totalPages: totalPagesProp,
}) => {
const theme = useTheme();
const isMobile = useMediaQuery(theme.breakpoints.down("md"));
const totalPages = Math.ceil(totalRecords / pageSize);
const totalPages = totalPagesProp ?? Math.ceil(totalRecords / pageSize);
if (totalPages < 2) {
return null;
@@ -269,6 +269,78 @@ describe.skip(usePaginatedQuery.name, () => {
});
});
describe("Capped count behavior", () => {
const mockQueryKey = jest.fn(() => ["mock"]);
// Returns count 2001 (capped) with items on pages up to page 84
// (84 * 25 = 2100 items total).
const mockCappedQueryFn = jest.fn(({ pageNumber, limit }) => {
const totalItems = 2100;
const offset = (pageNumber - 1) * limit;
// Returns 0 items when the requested page is past the end, simulating
// an empty server response.
const itemsOnPage = Math.max(0, Math.min(limit, totalItems - offset));
return Promise.resolve({
data: new Array(itemsOnPage).fill(pageNumber),
count: 2001,
count_cap: 2000,
});
});
it("Caps totalRecords at 2000 when count exceeds cap", async () => {
const { result } = await render({
queryKey: mockQueryKey,
queryFn: mockCappedQueryFn,
});
await waitFor(() => expect(result.current.isSuccess).toBe(true));
expect(result.current.totalRecords).toBe(2000);
});
it("hasNextPage is true when count is capped", async () => {
const { result } = await render(
{ queryKey: mockQueryKey, queryFn: mockCappedQueryFn },
"/?page=80",
);
await waitFor(() => expect(result.current.isSuccess).toBe(true));
expect(result.current.hasNextPage).toBe(true);
});
it("hasPreviousPage is true when count is capped and page is beyond cap", async () => {
const { result } = await render(
{ queryKey: mockQueryKey, queryFn: mockCappedQueryFn },
"/?page=83",
);
await waitFor(() => expect(result.current.isSuccess).toBe(true));
expect(result.current.hasPreviousPage).toBe(true);
});
it("Does not redirect to last page when count is capped and page is valid", async () => {
const { result } = await render(
{ queryKey: mockQueryKey, queryFn: mockCappedQueryFn },
"/?page=83",
);
await waitFor(() => expect(result.current.isSuccess).toBe(true));
// Should stay on page 83 — not redirect to page 80.
expect(result.current.currentPage).toBe(83);
});
it("Redirects to last known page when navigating beyond actual data", async () => {
const { result } = await render(
{ queryKey: mockQueryKey, queryFn: mockCappedQueryFn },
"/?page=999",
);
// Page 999 has no items. Should redirect to page 81
// (ceil(2001 / 25) = 81), the last page guaranteed to
// have data.
await waitFor(() => expect(result.current.currentPage).toBe(81));
});
});
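The redirect target asserted in the last test above follows from the raw (uncapped) count, not the cap. A minimal sketch, with an illustrative function name that is not part of the hook's API:

```typescript
// When a fetched page comes back empty, the hook recomputes totalPages from
// the raw server count, since ceil(count / limit) is the last page that is
// guaranteed to contain data.
function lastKnownPage(count: number, limit: number): number {
	return Math.ceil(count / limit);
}

// lastKnownPage(2001, 25) → 81, matching the redirect in the test above.
```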
describe("Passing in searchParams property", () => {
const mockQueryKey = jest.fn(() => ["mock"]);
const mockQueryFn = jest.fn(({ pageNumber, limit }) =>
@@ -144,16 +144,44 @@ export function usePaginatedQuery<
placeholderData: keepPreviousData,
});
const totalRecords = query.data?.count;
const totalPages =
totalRecords !== undefined ? Math.ceil(totalRecords / limit) : undefined;
const count = query.data?.count;
const countCap = query.data?.count_cap;
const countIsCapped =
countCap !== undefined &&
countCap > 0 &&
count !== undefined &&
count > countCap;
const totalRecords = countIsCapped ? countCap : count;
let totalPages =
totalRecords !== undefined
? Math.max(
Math.ceil(totalRecords / limit),
// The true count is not known; let the user navigate forward
// until they hit an empty page (checked below).
countIsCapped ? currentPage : 0,
)
: undefined;
// When the true count is unknown, the user can navigate past
// all actual data. If that happens, we need to redirect (via
// updatePageIfInvalid) to the last page guaranteed not to be
// empty.
const pageIsEmpty =
query.data != null &&
!Object.values(query.data).some((v) => Array.isArray(v) && v.length > 0);
if (pageIsEmpty) {
totalPages = count !== undefined ? Math.ceil(count / limit) : 1;
}
const hasNextPage =
totalRecords !== undefined && limit + currentPageOffset < totalRecords;
totalRecords !== undefined &&
((countIsCapped && !pageIsEmpty) ||
limit + currentPageOffset < totalRecords);
const hasPreviousPage =
totalRecords !== undefined &&
currentPage > 1 &&
currentPageOffset - limit < totalRecords;
((countIsCapped && !pageIsEmpty) ||
currentPageOffset - limit < totalRecords);
const queryClient = useQueryClient();
const prefetchPage = useEffectEvent((newPage: number) => {
@@ -224,10 +252,14 @@ export function usePaginatedQuery<
});
useEffect(() => {
if (!query.isFetching && totalPages !== undefined) {
if (
!query.isFetching &&
totalPages !== undefined &&
currentPage > totalPages
) {
void updatePageIfInvalid(totalPages);
}
}, [updatePageIfInvalid, query.isFetching, totalPages]);
}, [updatePageIfInvalid, query.isFetching, totalPages, currentPage]);
const onPageChange = (newPage: number) => {
// Page 1 is the only page that can be safely navigated to without knowing
@@ -236,7 +268,12 @@ export function usePaginatedQuery<
return;
}
const cleanedInput = clamp(Math.trunc(newPage), 1, totalPages ?? 1);
// If the true count is unknown, we allow navigating past the
// known page range.
const upperBound = countIsCapped
? Number.MAX_SAFE_INTEGER
: (totalPages ?? 1);
const cleanedInput = clamp(Math.trunc(newPage), 1, upperBound);
if (Number.isNaN(cleanedInput)) {
return;
}
@@ -274,6 +311,7 @@ export function usePaginatedQuery<
totalRecords: totalRecords as number,
totalPages: totalPages as number,
currentOffsetStart: currentPageOffset + 1,
countIsCapped,
}
: {
isSuccess: false,
@@ -282,6 +320,7 @@ export function usePaginatedQuery<
totalRecords: undefined,
totalPages: undefined,
currentOffsetStart: undefined,
countIsCapped: false as const,
}),
};
@@ -323,6 +362,7 @@ export type PaginationResultInfo = {
totalRecords: undefined;
totalPages: undefined;
currentOffsetStart: undefined;
countIsCapped: false;
}
| {
isSuccess: true;
@@ -331,6 +371,7 @@ export type PaginationResultInfo = {
totalRecords: number;
totalPages: number;
currentOffsetStart: number;
countIsCapped: boolean;
}
);
@@ -417,6 +458,7 @@ type QueryPageParamsWithPayload<TPayload = never> = QueryPageParams & {
*/
export type PaginatedData = {
count: number;
count_cap?: number;
};
/**
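Taken together, the capped-count math in the `usePaginatedQuery` hunks above can be sketched in isolation. The function name `resolvePagination` is illustrative only; the logic mirrors the diff:

```typescript
// Sketch of the capped-count pagination math from usePaginatedQuery.
// `count` is the server-reported total; `countCap` is the cap
// (0 or undefined means "not capped").
function resolvePagination(
	count: number,
	countCap: number | undefined,
	limit: number,
	currentPage: number,
) {
	const countIsCapped =
		countCap !== undefined && countCap > 0 && count > countCap;
	const totalRecords = countIsCapped ? countCap : count;
	// When capped, the true count is unknown, so totalPages is kept at
	// least as high as the current page to allow forward navigation.
	const totalPages = Math.max(
		Math.ceil(totalRecords / limit),
		countIsCapped ? currentPage : 0,
	);
	return { countIsCapped, totalRecords, totalPages };
}

// e.g. resolvePagination(2001, 2000, 25, 83) caps totalRecords at 2000 and
// reports totalPages = max(80, 83) = 83, matching the stories and tests above.
```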
@@ -27,16 +27,30 @@ export function useSyncFormParameters({
useEffect(() => {
if (!parameters) return;
const currentFormValues = formValuesRef.current;
const newParameterValues = parameters.map((param) => ({
name: param.name,
value: param.value.valid ? param.value.value : "",
}));
const currentFormValuesMap = new Map(
currentFormValues.map((value) => [value.name, value.value]),
);
const newParameterValues = parameters.map((param) => {
// When the server value is not valid (e.g., the initial
// WebSocket response before any user input is sent),
// preserve the current form value. This prevents the sync
// hook from overwriting autofilled values (from the
// previous build) with empty strings before the server
// has had a chance to process them.
if (!param.value.valid) {
const existingValue = currentFormValuesMap.get(param.name);
if (existingValue !== undefined) {
return { name: param.name, value: existingValue };
}
}
return {
name: param.name,
value: param.value.valid ? param.value.value : "",
};
});
const isChanged =
currentFormValues.length !== newParameterValues.length ||
newParameterValues.some(
@@ -65,6 +65,7 @@ describe("AuditPage", () => {
const getAuditLogsSpy = jest.spyOn(API, "getAuditLogs").mockResolvedValue({
audit_logs: [MockAuditLog, MockAuditLog2],
count: 2,
count_cap: 0,
});
// When
@@ -84,7 +85,11 @@ describe("AuditPage", () => {
it("filters by URL", async () => {
const getAuditLogsSpy = jest
.spyOn(API, "getAuditLogs")
.mockResolvedValue({ audit_logs: [MockAuditLog], count: 1 });
.mockResolvedValue({
audit_logs: [MockAuditLog],
count: 1,
count_cap: 0,
});
const query = "resource_type:workspace action:create";
await renderPage({ filter: query });
@@ -115,4 +120,29 @@ describe("AuditPage", () => {
);
});
});
describe("Capped count", () => {
it("shows capped count indicator and navigates to next page with correct offset", async () => {
jest.spyOn(API, "getAuditLogs").mockResolvedValue({
audit_logs: [MockAuditLog, MockAuditLog2],
count: 2001,
count_cap: 2000,
});
const user = userEvent.setup();
await renderPage();
await screen.findByText(/2,000\+/);
await user.click(screen.getByRole("button", { name: /next page/i }));
await waitFor(() =>
expect(API.getAuditLogs).toHaveBeenLastCalledWith<[AuditLogsRequest]>({
limit: DEFAULT_RECORDS_PER_PAGE,
offset: DEFAULT_RECORDS_PER_PAGE,
q: "",
}),
);
});
});
});
@@ -69,6 +69,7 @@ describe("ConnectionLogPage", () => {
MockDisconnectedSSHConnectionLog,
],
count: 2,
count_cap: 0,
});
// When
@@ -95,6 +96,7 @@ describe("ConnectionLogPage", () => {
.mockResolvedValue({
connection_logs: [MockConnectedSSHConnectionLog],
count: 1,
count_cap: 0,
});
const query = "type:ssh status:connected";