Compare commits

...

139 Commits

Author SHA1 Message Date
Atif Ali 19b0ce5586 feat(connectionlog): show session timeline in detail dialog 2026-02-13 18:38:24 +05:00
Atif Ali 20bdaf3248 fix(diagnostics): capture unexpected ssh disconnect reasons 2026-02-13 18:35:22 +05:00
Mathias Fredriksson 6c1fe84185 fix(enterprise/coderd/diagnostic): use per-peer tracking in classifyStatusFromTimeline
Previously the function used global booleans to track lost/recovered
state across all peers. This meant peer B recovering would mask peer A
still being lost, producing a false clean_disconnected. Ongoing
sessions also returned early and never got classified.

Rewrite to track per-peer lost/recovered state using a map keyed by
short peer ID. Remove the client_disconnected inference from missing
peering events, which produced false positives for orphaned sessions
with no agent IDs. Update diagnosticStatusBucket accordingly.
2026-02-13 13:10:28 +00:00
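The per-peer rewrite this commit describes can be sketched as follows. This is a minimal illustration, not the actual Coder code: the `Event` struct and status strings are simplified stand-ins for the real diagnostic timeline types.

```go
package main

import "fmt"

// Event is a simplified stand-in for a timeline peering event.
type Event struct {
	PeerID string // short peer ID
	Kind   string // "peer_lost" or "peer_recovered"
}

// classify tracks lost/recovered state per peer in a map instead of
// with global booleans, so one peer recovering cannot mask another
// peer that is still lost.
func classify(events []Event) string {
	lost := map[string]bool{} // keyed by short peer ID
	for _, ev := range events {
		switch ev.Kind {
		case "peer_lost":
			lost[ev.PeerID] = true
		case "peer_recovered":
			lost[ev.PeerID] = false
		}
	}
	for _, stillLost := range lost {
		if stillLost {
			return "control_lost"
		}
	}
	return "clean_disconnected"
}

func main() {
	// Peer A is lost and never recovers; peer B recovers.
	// Global lost/recovered booleans would report clean_disconnected here.
	fmt.Println(classify([]Event{
		{PeerID: "aaaaaaaa", Kind: "peer_lost"},
		{PeerID: "bbbbbbbb", Kind: "peer_lost"},
		{PeerID: "bbbbbbbb", Kind: "peer_recovered"},
	})) // control_lost
}
```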
Mathias Fredriksson 44d7ee977f fix(site/pages/OperatorDiagnosticPage): use warning variant for client_disconnected status
client_disconnected indicates the client vanished without a clean
coordinator disconnect, which is abnormal. Previously it displayed
the same grey inactive indicator as clean_disconnected, masking
its severity. Use the warning (orange) variant and relabel to
"Client Lost" to distinguish it.
2026-02-13 13:09:50 +00:00
Mathias Fredriksson 9bbc20011a fix(enterprise/coderd/diagnostic): handle peer_update_lost and fix disconnect semantics
Previously the diagnostic handler classified sessions only from
disconnect_reason strings, never detecting client_disconnected or
peering-event-based control_lost. The peer_update_lost event was
silently dropped by the default case in mergePeeringEventsIntoTimeline,
and peer_update_disconnected was mislabeled as "Peer lost contact"
when it actually represents a clean coordinator disconnect.

Changes:
- Add DiagnosticTimelineEventPeerDisconnected constant for the clean
  coordinator disconnect event (peer_update_disconnected).
- Handle peer_update_lost in the switch to emit peer_lost timeline
  events with error severity.
- Reclassify peer_update_disconnected as info severity with correct
  "Peer disconnected" description.
- Add classifyStatusFromTimeline to upgrade session status based on
  peering events: sessions with peer loss but no clean coord disconnect
  become control_lost, sessions with no peering events become
  client_disconnected.
- Include client_disconnected in the "lost" summary bucket.
2026-02-13 13:09:50 +00:00
Atif Ali ecf7344d21 fix(pr-deploy): update template and set pod hostname 2026-02-13 17:56:08 +05:00
Mathias Fredriksson fdceba32d7 Revert "fix(coderd): increase peer telemetry TTL from 2 minutes to 72 hours"
This reverts commit cf6f9ef018.
2026-02-13 12:32:23 +00:00
Mathias Fredriksson d68e2f477e fix(site/src/pages/OperatorDiagnosticPage): remove dashed border from system connections
The dashed border-top on system connection rows was visually noisy.
Keep the opacity dimming only.
2026-02-13 12:31:06 +00:00
Mathias Fredriksson f9c5f50596 feat(enterprise/coderd/diagnostic): include peer identity in peering event descriptions
Previously peering events in the timeline showed generic descriptions
like "Tunnel removed" or "Peer lost contact" with no indication of
which peer was involved.

When a peering event has one agent peer and one non-agent peer, the
description now includes the non-agent peer's 8-char UUID prefix
for identification. When both peers are agents, the description falls
back to the generic form.
2026-02-13 12:12:47 +00:00
Mathias Fredriksson 308f619ae5 fix(site/src/pages/OperatorDiagnosticPage): dim system connections in session UI
System connections are tailnet tunnels wrapping user connections (SSH,
PTY, etc.). They open before and close after the real connection, so
showing them prominently clutters the display.

- typeDisplayLabel now skips system connections when building the
  session title. Falls back to "System (N)" when all connections
  are system.
- ConnectionSubRow renders system connections with dashed border
  and 50% opacity.
- ForensicTimeline dims events whose description starts with
  "system" using secondary color and italic style.
2026-02-13 12:12:47 +00:00
Atif Ali 31aa0fd08b enable oauth2,mcp-server-http experiments 2026-02-13 16:50:32 +05:00
Mathias Fredriksson 179ea7768e fix(enterprise/coderd/diagnostic): scope peering events to per-session agent IDs
Previously mergePeeringEventsIntoTimeline received a global agent ID
set built from ALL connection logs, causing every session to include
peering events for every agent in the time window. This produced
thousands of duplicate peering events per session.

buildWorkspaceSessions, buildSessionsFromOrphanedLogs, and
buildLiveSessionsForWorkspace now return a session-to-agent-IDs map
alongside their sessions. The merge call site passes per-session
agent IDs so each session only gets peering events for its own
agents.
2026-02-13 11:41:31 +00:00
Mathias Fredriksson 97fda34770 fix(site/pages/OperatorDiagnosticPage): show distinct connection types in session label
Previously typeDisplayLabel used only the first connection's type to
label all connections, so a session with 2 workspace_app/code-server
and 1 reconnecting_pty would show "code-server (3)" instead of
distinguishing the terminal connection.

Group connections by their display label and show each distinct group
with its count, e.g. "code-server (2), Terminal".
2026-02-13 11:41:23 +00:00
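The grouping logic this commit describes lives in the TypeScript frontend; a dependency-free Go sketch of the same idea (illustrative names, not the actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// typeDisplayLabel groups connections by their display label and
// shows each distinct group with its count, instead of labeling
// everything with the first connection's type.
func typeDisplayLabel(labels []string) string {
	counts := map[string]int{}
	var order []string // preserve first-seen order for stable output
	for _, l := range labels {
		if counts[l] == 0 {
			order = append(order, l)
		}
		counts[l]++
	}
	parts := make([]string, 0, len(order))
	for _, l := range order {
		if counts[l] > 1 {
			parts = append(parts, fmt.Sprintf("%s (%d)", l, counts[l]))
		} else {
			parts = append(parts, l)
		}
	}
	return strings.Join(parts, ", ")
}

func main() {
	fmt.Println(typeDisplayLabel([]string{"code-server", "code-server", "Terminal"}))
	// code-server (2), Terminal
}
```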
Mathias Fredriksson 758bd7e287 fix(enterprise/coderd): make clean_usage pattern mutually exclusive with other patterns
Previously detectCleanUsage ran unconditionally alongside other pattern
detectors, so both workspace_autostart and clean_usage could fire at
once, producing contradictory banners. Now clean_usage only fires when
no other pattern was detected.
2026-02-13 11:21:04 +00:00
Mathias Fredriksson 76dee02f99 feat(site/src/pages/OperatorDiagnosticPage): add refetch loading indicator
When filters change, the page refetches data while keeping stale data
visible via keepPreviousData. Previously there was no visual feedback
during the refetch. Now a spinner with 'Updating...' text appears
below the toolbar and the data sections dim to 60% opacity until the
new data arrives. The initial skeleton loading state is unchanged.
2026-02-13 11:11:23 +00:00
Mathias Fredriksson bf1dd581fb fix(site/OperatorDiagnosticPage): keep previous data during filter refetch
Without keepPreviousData, changing a filter caused the component tree
to unmount (data=undefined during refetch) and remount when the new
response arrived, appearing as a page reload/crash.
2026-02-13 11:05:51 +00:00
Mathias Fredriksson 760af814d9 fix(enterprise/coderd/diagnostic): use empty slice not nil for filtered sessions
When filtering removes all sessions from a workspace, the nil slice
serialized to JSON as null, causing the frontend to crash on
null.flatMap(). Use make([]..., 0) to ensure an empty JSON array.
2026-02-13 10:54:12 +00:00
Mathias Fredriksson cf6f9ef018 fix(coderd): increase peer telemetry TTL from 2 minutes to 72 hours
The 2-minute TTL caused telemetry to expire before the diagnostic view
could read it. SendConnectedTelemetry fires once on connection, so any
connection older than 2 minutes had no P2P/latency data. 72 hours
matches the diagnostic view's max window. Memory cost is negligible
(~80 bytes per entry, one per unique agent+peer pair).
2026-02-13 10:51:30 +00:00
Mathias Fredriksson e564e914cd fix: robust session keys and relaxed clean_usage pattern
SessionRow key uses id+started_at composite to prevent collisions.
detectCleanUsage now fires when there are no control_lost sessions,
tolerating workspace auto-stop events as expected behavior.
2026-02-13 10:47:35 +00:00
Mathias Fredriksson 4c4dd5c99d fix(enterprise/coderd/diagnostic): generate unique IDs for live and orphaned sessions
Live and orphaned sessions had zero-value UUIDs causing React key
collisions when filtering. Generate uuid.New() for each.
2026-02-13 10:34:06 +00:00
Mathias Fredriksson 174b8b06f3 Revert "feat(enterprise/coderd/diagnostic): add historical latency from workspace_agent_stats"
This reverts commit e2928f35ee.
2026-02-13 10:33:04 +00:00
Mathias Fredriksson e2928f35ee feat(enterprise/coderd/diagnostic): add historical latency from workspace_agent_stats
Closed sessions now show latency data using P50/P95 aggregate latency
from workspace_agent_stats. Summary P95 latency is also populated.
2026-02-13 10:30:41 +00:00
Mathias Fredriksson 4ae56f2fd6 fix: move session filters to query parameters with backend filtering
Filters (status, workspace) are now query params on the API request.
The backend filters sessions in Go code after assembly. Changing a
filter triggers a new API call via react-query key invalidation.
2026-02-13 10:26:04 +00:00
Mathias Fredriksson f217c9f855 fix: send telemetry for port-forward and reconnecting PTY connections
SendConnectedTelemetry was only called for SSH and speedtest. Port
forwarding and reconnecting PTY connections had no initial telemetry
event, so the diagnostic view could not show P2P/latency for them.
2026-02-13 10:13:05 +00:00
Mathias Fredriksson 0d56e7066d feat(site/OperatorDiagnosticPage): type-first multi-column row layout
Connection type (VS Code, SSH, Terminal, code-server, Port 6666) is now
the prominent first column. Client identity is secondary. Workspace,
duration, time, and status have fixed-width columns.
2026-02-13 10:01:30 +00:00
Mathias Fredriksson 6f95706f5d fix(site/OperatorDiagnosticPage): show hostname with IP in tooltip, not inline 2026-02-13 09:36:01 +00:00
Mathias Fredriksson 355d6eee22 fix: group identical ongoing connections and show full identity in labels
Backend: group ongoing connections by (agent, ip, type, detail) so 3
curl requests over the same port-forward show as 1 session with 3
connections, while SSH and workspace_app stay separate sessions.

Frontend: always show both description and IP in session labels,
separated by a dot. No information is hidden.
2026-02-13 09:32:07 +00:00
Mathias Fredriksson a693e2554a feat(site/pages/OperatorDiagnosticPage): add session filters, timeline grouping, and better labels
Add client-side status and workspace filters to the session list,
with toggle buttons for All/Connected/Disconnected/Workspace Stopped
and a workspace dropdown when multiple workspaces exist.

Collapse 3+ consecutive same-kind timeline events into expandable
summary lines showing count and time range.

Improve session row labels: show "Local (browser)" for 127.0.0.1,
"Tailnet peer" for fd7a: addresses, prefer app slug over generic
connection type, and show connection count for multi-connection
sessions.
2026-02-13 09:22:05 +00:00
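The labeling rules in this commit are frontend TypeScript; sketched here in Go for illustration (the function name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// clientLabel mirrors the rules above: 127.0.0.1 is the local
// browser, fd7a:-prefixed addresses are tailnet peers, anything
// else is shown as-is.
func clientLabel(ip string) string {
	switch {
	case ip == "127.0.0.1":
		return "Local (browser)"
	case strings.HasPrefix(ip, "fd7a:"):
		return "Tailnet peer"
	default:
		return ip
	}
}

func main() {
	fmt.Println(clientLabel("127.0.0.1"))  // Local (browser)
	fmt.Println(clientLabel("fd7a::1"))    // Tailnet peer
	fmt.Println(clientLabel("192.0.2.10")) // 192.0.2.10
}
```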
Mathias Fredriksson b412cdd91a feat(enterprise/coderd/diagnostic): add peering events to timeline, filter system connections
Enrich session timelines with peering events from the database by
querying TailnetPeeringEvents for all agent IDs found in connection
logs. Events are filtered to each session's time window and mapped
to timeline event kinds (tunnel_created, tunnel_removed, node_update,
peer_lost, peer_recovered).

Add workspace_state_change timeline events when a connection's
disconnect reason contains "workspace stopped" or "workspace deleted".
The event is inserted once per session, 1 second before the first
such disconnect timestamp.

Filter system connections (type=system) from the ongoing-log
partition and from buildSummary so coordinator tunnel lifecycle
events do not appear in session views or summary counts.
2026-02-13 08:31:50 +00:00
Mathias Fredriksson 2185aea300 fix(site/ConnectionLogPage): restore diagnostic link in session rows
The ConnectionLog page was converted from flat connection rows to
session-grouped rows (3e84596fc), but the new GlobalSessionRow did not
carry over the diagnostic links that Mathias added to the old
ConnectionLogDescription component. The workspace owner username is
now a Link to /connectionlog/diagnostics/:username, restoring the
navigation path to the operator diagnostic page.
2026-02-13 07:33:57 +00:00
Spike Curtis f6e7976300 feat: include peer update events in PGCoordinator use of eventsink 2026-02-13 06:54:33 +00:00
Spike Curtis 3ef31d73c5 feat: show connection status in drill-down 2026-02-13 06:16:29 +00:00
Spike Curtis 929a319f09 WIP merge coordinator events with logs 2026-02-13 06:01:14 +00:00
Spike Curtis 197139915f CLI and coderd use IP derived from their peer ID 2026-02-13 06:01:14 +00:00
Spike Curtis 506c0c9e66 feat: merge peering events and connection logs
Signed-off-by: Spike Curtis <spike@coder.com>
2026-02-13 06:01:09 +00:00
Ethan Dickson fbb8d5f6ab fix: resolve gen, fmt, and lint issues 2026-02-13 04:49:20 +00:00
Seth Shelnutt e8e22306c1 feat(support): include workspace sessions in support bundle 2026-02-12 22:36:22 -05:00
Seth Shelnutt c246d4864d fix(dbauthz): skip session-related methods in test suite
Add the three new global workspace session methods to the skipMethods
list to match the pattern of other connection log/session methods.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-12 22:14:43 -05:00
Seth Shelnutt 44ea0f106f fix(site): update ConnectionLogPage tests to use session API
The tests were mocking the old getConnectionLogs API, but the page was
refactored to use getGlobalWorkspaceSessions. Updated tests to mock the
correct API and use the proper data shape and test IDs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-12 22:12:11 -05:00
Seth Shelnutt b3474da27b Formatting 2026-02-12 22:06:47 -05:00
Seth Shelnutt daa67c40e8 feat(enterprise/tailnet): wire EventSink into pgCoord for connection_log events
Wire the EventSink interface into the HA pgCoord coordinator so that
system/tunnel connection_log rows are created when tunnels are
added/removed, matching the AGPL coordinator behavior.

Changes:
- Add variadic eventSink parameter to NewPGCoord and NewTestPGCoord
  for backward compatibility (existing callers compile without changes)
- Pass EventSink through newPGCoordInternal to the tunneler
- Fire AddedTunnel after successful UpsertTailnetTunnel
- Fire RemovedTunnel after successful DeleteTailnetTunnel
- For DeleteAllTailnetTunnels, capture active destinations in cache()
  via pendingRemovals map, then fire RemovedTunnel for each after DB
  deletion succeeds
- Create EventSink in enterprise/coderd/coderd.go when HA is enabled
- Fix internal test to pass nil EventSink to newPGCoordInternal
2026-02-12 21:55:20 -05:00
Seth Shelnutt 1660111e92 fix(site): add hover/focus highlight to expanded connection rows
Add outline and background highlight on hover and focus for
connection rows in both GlobalSessionRow and WorkspaceSessionRow.
Uses the same outline-1/outline-border-hover pattern as clickable
table rows elsewhere in the codebase (Table.tsx, useClickableTableRow).
Includes transition-colors for smooth visual feedback.
2026-02-12 21:44:11 -05:00
Seth Shelnutt efac6273b7 fix(site): improve session row layout, labels, and client info display
Three fixes for GlobalSessionRow and WorkspaceSessionRow:

1. Arrow/vertical bar alignment: Move arrow into a fixed-width
   container with pl-4 padding and margin={false}, preventing the
   TimelineEntry vertical bar from intersecting the expand arrow.

2. User/workspace clarity: In GlobalSessionRow, swap order to show
   owner username (primary) above workspace name (secondary), making
   it clear which is which.

3. Consistent client info: Replace the cascading fallback
   (short_description || hostname || ip) with structured display:
   - Session label shows description (e.g. 'CLI ssh') or hostname
   - Client location shown separately below when both exist
   - Expanded connections show status dot, type, time, and description
2026-02-12 21:44:11 -05:00
Seth Shelnutt ee4a146400 feat(database): rewrite session grouping from IP-based to hostname-based
Sessions are now grouped by client_hostname (with IP fallback) instead
of IP alone. This matches the live session grouping logic in
mergeWorkspaceConnectionsIntoSessions, so overlapping connections from
the same machine (which get unique random IPv6 addresses) collapse into
one session.

Key changes:
- Migration makes workspace_sessions.ip nullable and replaces the
  IP-based index with hostname-based + IP-fallback partial indexes.
- CloseConnectionLogsAndCreateSessions groups non-system (primary)
  connections by COALESCE(client_hostname, host(ip), 'unknown') with
  30-minute gap tolerance. System connections attach to the earliest
  overlapping primary session; orphaned system connections with an IP
  get their own session.
- FindOrCreateSessionForDisconnect matches by client_hostname (with
  IP fallback when hostname is NULL), fulfilling the existing TODO.
- New tests: GroupsByHostname, SystemAttachesToFirstSession,
  OrphanSystemGetsOwnSession, SystemNoIPNoSession,
  SeparateSessionsForLargeGap.
2026-02-12 21:44:10 -05:00
Seth Shelnutt 405bb442d9 fix(enterprise/tailnet): look up srcNode from DB in pgCoord tunneler for connection_log events 2026-02-12 21:44:09 -05:00
Seth Shelnutt b8c109ff53 fix(db): include system connections in session creation, deduplicate with existing sessions
System/tunnel connections (from dbsink) were never appearing in
session history because:
1. dbsink doesn't call assignSessionForDisconnect on disconnect
2. The tunnel typically tears down before CloseConnectionLogsAndCreateSessions
   runs, so disconnect_time is already set
3. The previous fix changed the filter to only process disconnect_time IS NULL,
   which excluded these already-disconnected system connections

Fix by restoring the (disconnect_time IS NULL OR session_id IS NULL) filter
so already-disconnected system connections are included. To prevent the
duplicate session race with assignSessionForDisconnect (for agent-reported
connections), the query now checks workspace_sessions for existing
overlapping sessions before inserting and reuses them when found.

New test coverage:
- AlreadyDisconnectedGetsSession: system connection disconnected by dbsink
  gets assigned to a session at workspace stop
- ReusesExistingSession: when assignSessionForDisconnect already created a
  session, CloseConnectionLogsAndCreateSessions reuses it instead of
  creating a duplicate
2026-02-12 21:43:13 -05:00
Seth Shelnutt 4c1d293066 fix(dbauthz): implement UpdateConnectionLogSessionID authorization
Was panic("not implemented"), causing a crash when
assignSessionForDisconnect tried to link a connection log to its
session after SSH disconnect.
2026-02-12 21:43:12 -05:00
Seth Shelnutt c22769c87f fix(agentapi): handle all driver types for session UUID from COALESCE
The lib/pq driver returns []byte (not string) when scanning a UUID
into interface{} from a COALESCE expression. The previous type switch
only handled uuid.UUID and string, missing []byte entirely.

Simplify to fmt.Sprintf + uuid.Parse which handles string, []byte,
and any other Stringer type. Also log the concrete type on failure
to aid debugging.
2026-02-12 21:43:12 -05:00
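The `fmt.Sprintf` normalization this commit describes works because `%s` renders `string`, `[]byte`, and any `fmt.Stringer` identically. A dependency-free sketch (the real code then calls `uuid.Parse` on the result, elided here):

```go
package main

import "fmt"

// normalizeUUID converts whatever concrete type the SQL driver
// returned (string, []byte, or any fmt.Stringer) into a string,
// avoiding a type switch that misses []byte. The caller would then
// parse the result with uuid.Parse.
func normalizeUUID(raw interface{}) string {
	return fmt.Sprintf("%s", raw)
}

func main() {
	id := "a1b2c3d4-0000-0000-0000-000000000000"
	fmt.Println(normalizeUUID(id))         // string from some drivers
	fmt.Println(normalizeUUID([]byte(id))) // []byte from lib/pq
}
```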
Seth Shelnutt 6966a55c5a fix(db): prevent duplicate sessions from race between disconnect and workspace stop
CloseConnectionLogsAndCreateSessions previously matched connection logs
where (disconnect_time IS NULL OR session_id IS NULL). The second
condition created a race with assignSessionForDisconnect: when an SSH
disconnect sets disconnect_time in one transaction and session_id in
a subsequent one, CloseConnectionLogsAndCreateSessions could see the
log during the gap (disconnect_time set, session_id NULL) and create
a duplicate session.

Fix by only processing truly open connections (disconnect_time IS
NULL). Connections already disconnected by the agent are handled by
FindOrCreateSessionForDisconnect in the normal disconnect flow.
2026-02-12 21:43:12 -05:00
Seth Shelnutt d323decce1 fix(agentapi): handle string UUID from FindOrCreateSessionForDisconnect
The COALESCE in FindOrCreateSessionForDisconnect returns interface{},
and database/sql scans this as a string rather than uuid.UUID. The
type assertion sessionIDRaw.(uuid.UUID) always failed, causing the
session to be created but never linked to the connection log.

Handle both uuid.UUID and string types with a type switch.
2026-02-12 21:43:11 -05:00
Seth Shelnutt 6004982361 fix(db): cast ip parameter to inet in FindOrCreateSessionForDisconnect
The @ip parameter was being passed as a Go string, but the
workspace_sessions.ip column is inet type. PostgreSQL cannot
compare inet = text directly, causing:

  pq: operator does not exist: inet = text

Fix by adding ::inet casts in the SQL query. Also update the
caller to pass existingLog.Ip (pqtype.Inet) instead of the raw
string, which is both type-safe and uses the canonical IP from
the database.
2026-02-12 21:43:10 -05:00
Seth Shelnutt 9725ea2dd8 fix(db): handle NULL IPs in CloseConnectionLogsAndCreateSessions
Tunnel disconnect events (RemovedTunnel) can create connection_logs
rows with NULL IP when no prior connect event exists. The
CloseConnectionLogsAndCreateSessions query tried to INSERT these
NULL IPs into workspace_sessions, which has a NOT NULL constraint
on the ip column, causing the error:

  pq: null value in column "ip" of relation "workspace_sessions"
  violates not-null constraint

Fix by excluding NULL-IP rows from session creation (WHERE ip IS NOT
NULL in session_groups CTE) and using LEFT JOINs so those rows are
still properly closed with disconnect_time and disconnect_reason set,
but without a session_id.
2026-02-12 21:43:09 -05:00
Seth Shelnutt c055af8ddd fix(db): add ::uuid casts in FindOrCreateSessionForDisconnect
The @workspace_id parameter is typed as string (because the advisory
lock uses @workspace_id::text), but lines 16 and 25 used it without
a cast against the UUID workspace_id column, causing:
  pq: operator does not exist: uuid = text
2026-02-12 21:43:08 -05:00
Seth Shelnutt be63cabfad fix(tailnet): fix RBAC context for connection log lookups and fire disconnect events
Two bugs prevented system/tunnel connections from appearing in
workspace session history:

1. logTunnelConnection used AsConnectionLogger for all DB calls,
   but that subject only has connection_log write permissions.
   Read-only lookups (agent, resource, build, workspace) all
   failed silently. Fix: use AsSystemRestricted for reads, keep
   AsConnectionLogger only for the UpsertConnectionLog write.

2. removePeerLocked called tunnels.removeAll(id) without firing
   RemovedTunnel events, so disconnect connection_logs were never
   created. Fix: iterate both bySrc and byDst to fire
   RemovedTunnel for all tunnel directions before clearing.
2026-02-12 21:43:08 -05:00
Seth Shelnutt 1dbe0d4664 feat(tailnet): track system/tunnel connections via EventSink
Add connection_log entries for tailnet tunnel peers (e.g., Coder
Desktop) so they appear in workspace session history.

Changes:
- Add migration 000421 with 'system' connection_type enum value.
- Expand EventSink.AddedTunnel to accept srcNode *proto.Node for
  extracting IP/hostname metadata from tunnel peers.
- Update dbsink to create connection_log entries on AddedTunnel
  (connected) and RemovedTunnel (disconnected) using a deterministic
  connection_id derived from (src, dst) UUIDs.
- Use dbauthz.AsConnectionLogger for connection_log RBAC permissions
  since the eventSinkSubject only covers tailnet_coordinator resources.
- Add ConnectionTypeSystem to the provisionerdserver types list so
  CloseConnectionLogsAndCreateSessions handles system connections on
  workspace stop/delete.
2026-02-12 21:43:07 -05:00
Seth Shelnutt 22a67b8ee8 fix: link session_id in Path A and use time-overlap grouping in Path B
- Add UpdateConnectionLogSessionID query to set session_id on
  connection_log rows when assignSessionForDisconnect creates/finds
  a session (Path A). This prevents CloseConnectionLogsAndCreateSessions
  from re-processing already-handled connections.

- Rewrite CloseConnectionLogsAndCreateSessions to use connected-
  components time-overlap grouping with 30-minute gap tolerance,
  matching FindOrCreateSessionForDisconnect's window. Previously it
  grouped ALL connections from the same IP into one mega-session
  (GROUP BY ip).
2026-02-12 21:43:07 -05:00
Seth Shelnutt 86373ead1a fix: strip port from RemoteAddr and handle NULL IPs in session JOIN
Two bugs prevented workspace app connections from being assigned to sessions:

1. r.RemoteAddr includes a port (e.g. "192.168.1.1:54321") but
   database.ParseIP uses net.ParseIP which doesn't handle host:port
   format, resulting in NULL IPs for web/app connections.

2. The CloseConnectionLogsAndCreateSessions query JOINed on
   ctc.ip = ns.ip, which fails for NULL IPs because NULL = NULL
   is false in SQL.

Fix 1: Use net.SplitHostPort to extract the bare IP before passing
to database.ParseIP, with fallback to the raw string.

Fix 2: Use IS NOT DISTINCT FROM instead of = for the IP comparison
so NULL IPs still match correctly.
2026-02-12 21:43:07 -05:00
Seth Shelnutt d358b087ea fix(db): fix CloseConnectionLogsAndCreateSessions to catch disconnected rows
The query only matched rows with disconnect_time IS NULL, but by the
time a workspace stops, the Upsert from ReportConnection has already
set disconnect_time on SSH rows. This meant zero rows matched and no
sessions were ever created.

Fix by also matching rows where session_id IS NULL (disconnected but
never assigned to a session). Use COALESCE for disconnect_time and
disconnect_reason in the UPDATE to preserve values already set by
the agent's disconnect report. Use the actual disconnect_time (with
fallback to closed_at) for session ended_at calculation.
2026-02-12 21:43:07 -05:00
Seth Shelnutt 3461572d0b fix(agentapi): use AsConnectionLogger in assignSessionForDisconnect
The agent's RBAC context has WorkspaceAgentScope which doesn't include
connection_log permissions, causing silent failures when trying to
create sessions on disconnect. Use dbauthz.AsConnectionLogger(ctx)
to match how other connection log operations handle authorization.
2026-02-12 21:43:07 -05:00
Seth Shelnutt d0085d2dbe fix(coderd): fix workspace sessions RBAC and empty ID query
Two bugs in the workspace session history page:

1. The sessions query fired before the workspace loaded, sending a
   request with an empty workspace ID. Fix: pass enabled: !!workspaceId
   to the paginated query options so it waits for the workspace to load.

2. The workspace sessions handler used dbauthz context which checks
   ResourceConnectionLog read permission - a permission regular
   workspace owners don't have. Fix: use dbauthz.AsSystemRestricted
   since the user is already authorized to access the workspace via
   route middleware.
2026-02-12 21:43:07 -05:00
Seth Shelnutt 032938279e refactor: deduplicate connection status helpers and ConnectionDetailDialog
- Remove local connectionStatusLabel, connectionStatusColor,
  connectionStatusDot, and connectionTypeLabel from GlobalSessionRow.tsx
  in favor of imports from modules/resources/ConnectionStatus.ts
- Delete duplicate ConnectionDetailDialog.tsx from ConnectionLogPage,
  import the WorkspaceSessionsPage version instead
- Update GlobalSessionRow to pass the 'open' prop required by the
  canonical ConnectionDetailDialog interface
2026-02-12 21:43:07 -05:00
Seth Shelnutt 3e84596fc2 feat(site): convert ConnectionLog page to global sessions view
Replace the flat per-connection ConnectionLog page with a session-grouped
view using the new GET /api/v2/connectionlog/sessions endpoint.

Changes:
- Add getGlobalWorkspaceSessions API client method
- Add paginatedGlobalWorkspaceSessions react-query hook
- Create GlobalSessionRow with expandable connections list
- Create ConnectionDetailDialog for viewing connection details
- Update ConnectionLogFilter to remove status/type menus (session-level)
- Rewrite ConnectionLogPageView to use sessions timeline
- Update ConnectionLogPage to use sessions query
- Update storybook stories for new data shape
2026-02-12 21:43:07 -05:00
Seth Shelnutt 85e3e19673 feat(site): add workspace session history page
- Extract shared connection display helpers (connectionStatusLabel,
  connectionStatusColor, connectionStatusDot, connectionTypeLabel)
  from AgentRow.tsx into new ConnectionStatus.ts module
- Add getWorkspaceSessions API client method
- Add paginatedWorkspaceSessions react-query hook
- Create WorkspaceSessionsPage with data container, page view,
  expandable session rows, and connection detail dialog
- Add /sessions route under /:username/:workspace
- Add 'Session history' link in workspace kebab menu
2026-02-12 21:43:06 -05:00
Seth Shelnutt 52febdb0ef feat(coderd): add global workspace sessions API and enrich WorkspaceConnection type
- Add DisconnectReason, ExitCode, UserAgent fields to WorkspaceConnection SDK type
- Update ConvertConnectionLogToSDK to populate new detail fields from database
- Export ConvertDBSessionToSDK and ConvertConnectionLogToSDK for enterprise use
- Add GetGlobalWorkspaceSessionsOffset and CountGlobalWorkspaceSessions SQL queries
- Add GlobalWorkspaceSession, GlobalWorkspaceSessionsResponse, GlobalWorkspaceSessionsRequest SDK types
- Add GlobalWorkspaceSessions client method
- Add WorkspaceSessions searchquery parser with workspace_owner, workspace_id, started_after, started_before filters
- Add globalWorkspaceSessions enterprise handler with pagination, search, and batch connection fetching
- Register GET /connectionlog/sessions route with connection log feature gate
- Implement dbauthz authorization stubs for new queries
- Regenerate database code, mocks, metrics, and TypeScript types
2026-02-12 21:42:33 -05:00
Atif Ali 7134021388 ci(branch-deploy): add pod annotation to force rollout on deploy
Helm upgrade doesn't restart pods when the image tag stays the same,
even with pullPolicy: Always. Adding GITHUB_SHA as a pod annotation
ensures Kubernetes sees a spec change on every push and triggers a
rolling update automatically.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 23:09:05 +05:00
Mathias Fredriksson fc9cad154c fix: resolve build issues across gen, fmt, lint, and tests
Go lint fixes:
- Add nolint:gosec for int->int32 pagination conversions
- Rename sessionIds to sessionIDs (var-naming)
- Remove extra empty line in diagnostic.go block
- Add missing exhaustruct fields (SessionID, ClientHostname, ShortDescription)
- Add nolint:gocritic comment for AsSystemRestricted usage
- Replace magic 25ms with testutil.IntervalFast

SQL fix:
- Fix GetOngoingAgentConnectionsLast24h WHERE clause: sqlc.arg('rn')
  generated a query parameter instead of referencing the CTE column,
  so the per-agent row limit was never applied.

Test fixes:
- Add missing scan columns in modelqueries.go GetAuthorizedConnectionLogsOffset
- Add mock expectations for disconnect session assignment in agentapi
- Add new workspace-session methods to dbauthz skipMethods
- Add workspace_sessions migration fixture
- Skip flaky X11 eviction test

TS lint fixes:
- Remove unused exports (DiagnosticUser, DiagnosticTimelineEventKind,
  DiagnosticPatternType) from local types.ts
- Ignore @biomejs/cli-linux-x64 in knip config
2026-02-12 17:06:16 +00:00
Mathias Fredriksson 402cd8edf4 feat: add live operator diagnostic API endpoint
Adds GET /api/v2/connectionlog/diagnostics/{username} that assembles
a diagnostic report from workspace_sessions, connection_logs, and
coordinator telemetry. Includes pattern detection, timeline synthesis,
and explanation generation.

Frontend switches to the live API with ?demo=true toggle to preserve
mock scenarios.
2026-02-12 16:22:50 +00:00
Mathias Fredriksson 758fd11aeb make fmt 2026-02-12 11:53:47 +00:00
Mathias Fredriksson 09a7ab3c60 feat(site): add operator diagnostic view with mock data
Frontend-only page at /connectionlog/diagnostics/:username (temporarily
under connection log, will move to its own section).
2026-02-12 11:50:45 +00:00
Ethan Dickson d3f50a07a9 feat(coderd): publish workspace update on telemetry events
When the server receives network telemetry from an identified client
(CLI SSH session), it now publishes a workspace watch update so
dashboard subscribers see fresh connection stats without manual refresh.

Implementation:
- Extract inline IdentifiedTelemetryHandler into api.handleIdentifiedTelemetry
  method on *API in coderd/coderd.go.
- After batch-updating the PeerNetworkTelemetryStore, resolve the workspace
  via GetWorkspaceByAgentID and publish a single workspace event per batch.
- Reuse WorkspaceEventKindConnectionLogUpdate to avoid new enum churn.

Also fixes pre-existing compile failures in
workspaceconnections_internal_test.go caused by the function rename
(mergeWorkspaceConnections -> mergeWorkspaceConnectionsIntoSessions)
and the migration to nested WorkspaceSession.Connections fields.
2026-02-12 07:16:25 +00:00
Ethan Dickson 9434940fd6 feat: display connection telemetry badges and add periodic heartbeat
Add real-time network telemetry display to the workspace resources UI
and a periodic client-side heartbeat to keep the data fresh.

Frontend (site/src/modules/resources/AgentRow.tsx):

- Add TelemetryBadge component showing latency and connection type
  (e.g. "12ms (Direct)", "45ms (via DERP)") for each session row.
- Export connectionTelemetrySummary() helper that formats P2P connections
  as "Xms (Direct)" and relayed connections as "Xms (via DERP)".
- Export connectionLabel() helper that deduplicates redundant type
  suffixes (e.g. "CLI ssh" becomes just "CLI" for SSH connections).
- Collapsed single-connection rows show the badge inline; expanded
  multi-connection rows show a badge per child.

Frontend stories and tests:

- Add 3 Storybook stories: SingleP2PConnection, SingleRelayConnection,
  NoTelemetryConnection.
- Add 5 connectionTelemetrySummary unit tests and 2 connectionLabel
  unit tests covering P2P, relay, missing data, and dedup cases.

Backend (tailnet/conn.go):

- Add TelemetryHeartbeatInterval option (default 30s) that controls
  how often the client pings its peer to refresh server-side telemetry.
  The server store evicts entries after 2 minutes, so stable connections
  were silently expiring without periodic refresh.
- Extend watchConnChange() with a dedicated heartbeat ticker alongside
  the existing 50ms connection-type change detector.
- Heartbeat is only active when TelemetrySink is configured; set
  interval <= 0 to disable.

Backend tests (tailnet/conn_test.go):

- Add fakeTelemetrySink test helper capturing events via buffered
  channel.
- Add TelemetryHeartbeat test: verifies at least 3 CONNECTED events
  arrive with a 100ms heartbeat interval, proving periodic refresh.
- Add TelemetryHeartbeatStopsOnClose test: verifies no events arrive
  after Conn.Close(), proving clean shutdown.
2026-02-12 05:19:44 +00:00
M Atif Ali 476cd08fa6 ci(branch-deploy): delete orphaned PVs on fresh deploy
When a namespace is deleted, the PVC is removed but the PV may
survive with a Retain reclaim policy. On reinstall, the new PVC
binds to the stale PV, reusing the old Postgres data (which has
an admin user created with a random password from a prior run).
This causes 401 errors on the login step.

Fix: after namespace deletion, find and delete any PVs that were
bound to PVCs in that namespace before recreating it.
2026-02-12 00:59:29 +05:00
M Atif Ali 88d019c1de ci(branch-deploy): stop writing admin creds to k8s secret 2026-02-12 00:36:04 +05:00
M Atif Ali c161306ed6 ci(branch-deploy): handle existing first user non-interactively 2026-02-12 00:29:51 +05:00
M Atif Ali 04d4634b7c ci(workflows): set template push directory explicitly 2026-02-12 00:19:46 +05:00
M Atif Ali dca7f1ede4 ci(branch-deploy): remove pr naming and use deploy vars 2026-02-12 00:12:00 +05:00
M Atif Ali 0a1f3660a9 feat(ci): add branch deploy workflow for test.cdr.dev 2026-02-11 23:23:35 +05:00
Seth Shelnutt 184ae244fd Fix rebase issues from ss/netgru2
This fixes two issues in workspaceconnections.go and
AgentRow.stories.tsx introduced by the previous, botched rebase.
2026-02-11 09:46:02 -05:00
Mathias Fredriksson 47abc5e190 fix: gen proto version 2026-02-11 14:20:58 +00:00
Seth Shelnutt 02353d36d0 fix(site): redesign connection labels and remove inline IPv6
Add connectionLabel() helper that uses short_description (client
identity) as primary label with protocol/app type as secondary detail:
- 'Coder Desktop · App: code-server' (both available)
- 'CLI ssh · SSH' (both available)
- 'Coder Desktop' (tunnel-only, no type)
- 'SSH' (no short_description)

Remove raw tailnet IPv6 addresses from connection rows — they are
internal addresses that confuse users and add visual noise. The session
header already shows client_hostname as the meaningful identifier.
2026-02-11 07:57:38 -05:00
Seth Shelnutt 750e883540 feat(coderd): populate client_hostname and short_description from DB in connectionFromLog()
Previously, connectionFromLog() ignored ClientHostname and ShortDescription
from the DB row even though those fields were available. This meant that if
a tunnel peer disconnected, we would lose that metadata.

Now we populate these fields as fallback values from the DB. The existing
mergeConnectionsFlat() logic will still override them with live peer data
when a matching tunnel peer is found (lines 199-200).
2026-02-11 07:57:38 -05:00
Seth Shelnutt ad313e7298 fix(site): show connection type as primary label with short_description as secondary
- Change expanded connection rows to always show connectionTypeLabel() as primary
- Display short_description in parentheses as secondary text when present
- Add source IP (conn.ip) to each connection row in expanded view
- Fix single-connection session header to show type first, short_description after
- Ensures 'App: code-server' is always visible, with '(Coder Desktop)' as context
2026-02-11 07:57:38 -05:00
Seth Shelnutt c7036561f4 feat(site): add tooltip with absolute datetime on connection timestamps
Hovering over the relative time (e.g. '5 minutes ago') now shows a
tooltip with 'Connected at <formatted date>' using the standard
formatDate() utility.
2026-02-11 07:57:38 -05:00
Seth Shelnutt 1080169274 fix(site): use relativeTime() for connection timestamps in session rows
Replace raw new Date().toLocaleString() with relativeTime() from
utils/time, consistent with how other list views in the codebase
display timestamps (e.g. '5 minutes ago' instead of '2/10/2026,
10:08:44 PM').
2026-02-11 07:57:38 -05:00
Seth Shelnutt ae06584e62 fix(site): show connection type inline for single-connection sessions
When a session has only one connection, display its short_description
or type label inline on the session row instead of '1 active
connection'. This way users can see what the connection is without
needing a dropdown.
2026-02-11 07:57:38 -05:00
Seth Shelnutt 1f23f4e8b2 fix(site): show short_description for connections in expanded session view
Use conn.short_description when available, falling back to
connectionTypeLabel(). This ensures the first connection (e.g. Coder
Desktop tunnel) shows its label like 'Coder Server' in the expanded
list instead of just a bare timestamp.
2026-02-11 07:57:38 -05:00
Seth Shelnutt 9dc6c3c6e9 fix(site): fix session row arrow direction and display name
- Arrow now points down when collapsed, up when expanded (close={expanded})
- Session header shows client_hostname/IP instead of first connection's
  short_description, so all connections appear with proper labels in the
  expanded view
- Badge text changed to 'X active connections'
2026-02-11 07:57:38 -05:00
Seth Shelnutt 4446f59262 fix(coderd): group sessions by ClientHostname instead of (IP, ClientHostname)
Change the session grouping key in mergeWorkspaceConnectionsIntoSessions
from (IP, ClientHostname) to ClientHostname with IP fallback. This ensures
connections from the same machine (SSH, Coder Desktop, IDE) that use
different tailnet IPs collapse into a single expandable session.

- Replace sessionKey struct with string key using host:/ip: prefixes
- Update sort to order by hostname first, then IP
- Add TODO comments to SQL queries (CloseConnectionLogsAndCreateSessions,
  FindOrCreateSessionForDisconnect) noting they should be updated to match
2026-02-11 07:57:37 -05:00
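The grouping-key change above can be sketched as a small function (a hypothetical name; the real key is built inside mergeWorkspaceConnectionsIntoSessions), using the `host:`/`ip:` prefixes the commit describes so hostname keys and IP keys can never collide:

```go
package main

import "fmt"

// sessionKey groups connections by client hostname when known, falling
// back to tailnet IP, so one machine's SSH, Coder Desktop, and IDE
// connections collapse into a single expandable session.
func sessionKey(clientHostname, ip string) string {
	if clientHostname != "" {
		return "host:" + clientHostname
	}
	return "ip:" + ip
}

func main() {
	// Two different tailnet IPs from the same laptop share one key.
	fmt.Println(sessionKey("atif-mbp", "fd7a::1")) // host:atif-mbp
	fmt.Println(sessionKey("atif-mbp", "fd7a::2")) // host:atif-mbp
	fmt.Println(sessionKey("", "fd7a::3"))         // ip:fd7a::3
}
```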
Seth Shelnutt fe8b59600c Workaround pnpm issue with node version detection
Don't use strict engine versions to avoid:
Unsupported engine: wanted: {"node":">=18.0.0 <23.0.0"} (current: {"node":"22","pnpm":"10.14.0"})
2026-02-11 07:57:37 -05:00
Seth Shelnutt 56e056626e feat(codersdk): add WorkspaceSessions client method and tests
Add SDK client method WorkspaceSessions to codersdk/workspacesessions.go
for calling GET /api/v2/workspaces/{workspace}/sessions.

Add comprehensive tests in coderd/workspacesessions_test.go:
- TestWorkspaceSessions_EmptyResponse: verifies empty sessions list
- TestWorkspaceSessions_WithHistoricSessions: verifies historic sessions
  created via CloseConnectionLogsAndCreateSessions with nested connections
- TestWorkspaceAgentConnections_LiveSessionGrouping: verifies live
  connections are grouped into sessions by IP address
2026-02-11 07:57:37 -05:00
Seth Shelnutt de73ec8c6a chore(site): regenerate TypeScript types and fix DropdownArrow prop
- Regenerate typesGenerated.ts from Go SDK types using apitypings
- Fix DropdownArrow usage in SessionRow: use 'close' prop instead of 'open'
- pnpm exec tsc --noEmit passes
2026-02-11 07:57:37 -05:00
Seth Shelnutt 09db46b4fd fix(test): update workspaceconnections_test to use Sessions instead of Connections
The SDK API changed from WorkspaceAgent.Connections to
WorkspaceAgent.Sessions. Update test assertions to navigate
the session/connection hierarchy.
2026-02-11 07:57:37 -05:00
Seth Shelnutt fb9a9cf075 feat(agentapi): assign sessions and store peer info in connection logs
At connect time, look up the tailnet peer matching the connection IP
to capture the client hostname and short description. These fields are
stored on the connection log for later session grouping.

At disconnect time, look up the existing connection log and call
FindOrCreateSessionForDisconnect to assign the connection to a
workspace session.

Wire the TailnetCoordinator into ConnLogAPI via api.go Options.
2026-02-11 07:57:37 -05:00
Seth Shelnutt 7a1032d6ed refactor(provisionerdserver): use CloseConnectionLogsAndCreateSessions for bulk workspace close
When a workspace is stopped or deleted, use CloseConnectionLogsAndCreateSessions
instead of CloseOpenAgentConnectionLogsForWorkspace to also create
workspace_sessions when bulk-closing open connections.
2026-02-11 07:57:37 -05:00
Seth Shelnutt 44338a2bf3 feat(dbauthz): implement authorization wrappers for workspace session queries
Implement the 6 dbauthz authorization wrappers that were stubbed with
panic("not implemented") for new workspace session queries:

- CloseConnectionLogsAndCreateSessions: ActionUpdate on ResourceConnectionLog
- CountWorkspaceSessions: ActionRead on ResourceConnectionLog
- FindOrCreateSessionForDisconnect: ActionUpdate on ResourceConnectionLog
- GetConnectionLogByConnectionID: ActionRead on ResourceConnectionLog
- GetConnectionLogsBySessionIDs: ActionRead on ResourceConnectionLog
- GetWorkspaceSessionsOffset: ActionRead on ResourceConnectionLog
2026-02-11 07:57:37 -05:00
Seth Shelnutt 1a093ebdc2 chore(database): regenerate dbmock with new workspace session queries 2026-02-11 07:57:37 -05:00
Seth Shelnutt bb5c04dd92 fix(database): restructure advisory lock as CTE for sqlc compatibility
sqlc doesn't support multi-statement queries (separated by semicolons).
Move pg_advisory_xact_lock into a WITH clause CTE so the entire
FindOrCreateSessionForDisconnect query is a single statement.
2026-02-11 07:57:37 -05:00
Seth Shelnutt 8eff5a2f29 feat(database): generate Go code for workspace sessions queries
- sqlc workaround: use sqlc.arg('rn') for CTE column (sqlc-dev/sqlc#3585)
- Qualify ambiguous owner_id in workspaces.sql filtered_workspaces_order
- Generated: querier.go, queries.sql.go, models.go, dbauthz, dbmetrics
- Note: dbmock not regenerated due to mockgen env issue (separate fix)
2026-02-11 07:57:37 -05:00
Seth Shelnutt 9cf4811ede fix(database): qualify column refs in filtered_workspaces_order CTE
Add explicit 'fw.' table alias prefix to column references in the ORDER BY
clause of the filtered_workspaces_order CTE. This resolves potential
ambiguity for sqlc's column detection.
2026-02-11 07:57:37 -05:00
Seth Shelnutt 745cd43b4c fix(database): expand SELECT * to explicit columns and qualify ambiguous refs 2026-02-11 07:57:37 -05:00
Seth Shelnutt bfa3c341e6 feat(site): update AgentRow to show expandable sessions instead of flat connections
- Add WorkspaceSession type to typesGenerated.ts
- Add sessions field to WorkspaceAgent interface
- Replace AgentConnectionsTable with AgentSessionsTable component
- Add SessionRow component with Collapsible support
- Sessions with multiple connections are expandable
- Sessions with single connection show inline (no expand arrow)
2026-02-11 07:57:36 -05:00
Seth Shelnutt 40ef295cef feat(coderd): add workspace sessions endpoint handler
Add new endpoint GET /workspaces/{workspace}/sessions for fetching
historic sessions with their nested connections.

The handler:
- Parses pagination from limit/offset query params
- Fetches sessions from workspace_sessions table
- Fetches associated connections in one batch query
- Groups connections by session_id and returns nested structure
- Includes proper swagger documentation

Route registered in coderd.go alongside other workspace routes.

Note: Requires 'make gen' to generate database query methods
(GetWorkspaceSessionsOffset, CountWorkspaceSessions, GetConnectionLogsBySessionIDs)
from the SQL queries in coderd/database/queries/workspacesessions.sql.
2026-02-11 07:56:53 -05:00
Seth Shelnutt 4e8e581448 refactor(coderd): update callers to use sessions-based API
- Change mergeWorkspaceConnections to mergeWorkspaceConnectionsIntoSessions
- Change agent.Connections to agent.Sessions in workspaceagents.go
- Change agent.Connections to agent.Sessions in workspacebuilds.go
2026-02-11 07:56:52 -05:00
Seth Shelnutt 5062c5a251 refactor(coderd): rename mergeWorkspaceConnections to group connections into sessions
- Renamed mergeWorkspaceConnections to mergeWorkspaceConnectionsIntoSessions
- Extracted flat connection merging logic into mergeConnectionsFlat
- Added session grouping by (IP, ClientHostname)
- Added helper functions: deriveSessionStatus, earliestTime
- Updated internal tests to use mergeConnectionsFlat
- Returns []codersdk.WorkspaceSession instead of []codersdk.WorkspaceConnection
2026-02-11 07:54:57 -05:00
Seth Shelnutt 813ee5d403 feat(codersdk): add WorkspaceSession type and update WorkspaceAgent
- Add WorkspaceSession struct to represent client sessions with one or
  more connections, grouped by IP for live sessions or by database ID
  for historic sessions
- Update WorkspaceAgent.Connections to WorkspaceAgent.Sessions
- Add WorkspaceSessionsResponse type in new workspacesessions.go file
2026-02-11 07:47:25 -05:00
Seth Shelnutt 5c0c1162a9 feat(database): add new columns and bulk close query to connectionlogs.sql
- Update UpsertConnectionLog to accept session_id, client_hostname, short_description
- Update GetOngoingAgentConnectionsLast24h to return the new columns
- Add CloseConnectionLogsAndCreateSessions query for bulk workspace stop
2026-02-11 07:47:25 -05:00
Seth Shelnutt a3c1ddfc3d feat(database): add SQL queries for workspace sessions
Add new workspacesessions.sql with queries for session tracking:
- FindOrCreateSessionForDisconnect: Find/create session with advisory lock
- GetWorkspaceSessionsOffset: Paginated sessions with connection count
- CountWorkspaceSessions: Count query for pagination
- GetConnectionLogsBySessionIDs: Batch fetch connections by session
- GetConnectionLogByConnectionID: Lookup connection by ID
2026-02-11 07:47:25 -05:00
Seth Shelnutt d8053cb7fd chore(db): update dump.sql for workspace_sessions migration 2026-02-11 07:47:25 -05:00
Seth Shelnutt ac6f9aaff9 chore(db): add migration 000420 for workspace sessions
Add workspace_sessions table for hierarchical session tracking:
- Groups multiple connections from the same client IP
- Links to workspaces and optionally to agents
- Tracks session start/end times and metadata

Add columns to connection_logs:
- session_id for linking to parent session
- client_hostname and short_description for metadata
2026-02-11 07:47:25 -05:00
Ethan Dickson a24df6ea71 feat(coderd): enrich workspace connections with per-peer network telemetry
Show real-time network telemetry (transport mode, latency, home DERP
region) per connection row in the workspace agent connections UI. Each
row now reflects its own client's network path rather than a single
agent-wide snapshot.

Implementation:
- Thread the coordinator peer ID through the dRPC telemetry ingestion
  path so each client's ping observations are keyed by (agentID,
  peerID) in a new in-memory PeerNetworkTelemetryStore.
- During the workspace-connections API merge step, match each
  connection-log row to its tunnel peer by IP, then look up only that
  peer's telemetry entry — eliminating cross-row contamination when
  multiple clients connect to the same agent.
- Unmatched coordinator peers (no app-layer session log) surface as
  ConnectionTypeSystem rows with their own telemetry.
- Change home_derp from a bare integer to a structured type carrying
  the DERP region name for display.
- Add per-entry max-age eviction and independent peer disconnect
  handling so one client disconnecting does not wipe another's state.
2026-02-11 10:00:55 +00:00
Spike Curtis db27a5a49a feat: write coordinator events to db event log 2026-02-10 11:54:14 +00:00
Spike Curtis d23f78bb33 chore: introduce EventSink 2026-02-10 11:54:14 +00:00
Spike Curtis aacea6a8cf chore: add tables for peering events 2026-02-10 11:54:14 +00:00
Mathias Fredriksson 0c65031450 fix(coderd/provisionerdserver): disconnect apps and forwards too 2026-02-10 11:35:10 +00:00
Mathias Fredriksson 0b72adf15b fix(coderd/database): reduce workspace app active window to 1m30s 2026-02-10 09:25:23 +00:00
Mathias Fredriksson 9df29448ff fix(coderd/database): use user agent as filter for connection logs 2026-02-10 08:30:28 +00:00
Mathias Fredriksson e68a6bc89a feat(coderd/workspaceconnections): sort connection logs by IP then newest first 2026-02-10 08:03:43 +00:00
Mathias Fredriksson dc80e044fa feat: track workspace app and port forwarding connections
Add WORKSPACE_APP and PORT_FORWARDING connection types so apps and
forwarded ports appear in the workspace connections table alongside
SSH and other session types.

Tailnet: add a TCP connection callback that fires connect/disconnect
events for all forwarded TCP connections. Wrap forwarded conns with
remoteAddrConn to preserve the real source tailnet IP from netstack.

Agent: wire the tailnet callback to reportConnection, classifying
connections as WORKSPACE_APP (port matches a manifest app) or
PORT_FORWARDING (everything else). Add slug_or_port to the proto.

Database: add migration 000417 with updated_at on connection_logs
for recency-based active session detection, and connection_id on
workspace_app_audit_sessions for stable upsert keys. Update the
ongoing connections query to use an activity window for web types.

Frontend: surface a Detail field (app slug or port number) in the
connections table.
2026-02-10 08:03:43 +00:00
Spike Curtis 41d4f81200 feat: display short description and hostname 2026-02-10 06:38:20 +00:00
Ethan Dickson cca70d85d0 feat: publish workspace update on connection log events
Wire up a PublishWorkspaceUpdateFn callback in ConnLogAPI so that each
ReportConnection call publishes a workspace event after successfully
upserting the connection log. This enables real-time UI updates when
agent connections are established or torn down.

- Add WorkspaceEventKindConnectionLogUpdate to wspubsub event kinds.
- Thread publishWorkspaceUpdate through ConnLogAPI initialization.
- Add TestConnectionLogPublishesWorkspaceUpdate verifying the callback
  is invoked with the correct agent and event kind.
2026-02-10 06:34:51 +00:00
Ethan Dickson 2535920770 feat: reconcile open connection logs on workspace stop/delete
Add a server-side safety net that closes still-open agent connection log
rows (disconnect_time IS NULL) when a workspace build completes with
transition STOP or DELETE. Agents being torn down may never report a
DISCONNECT event, leaving connection log rows permanently open.

The reconciliation runs as a best-effort, post-transaction operation in
completeWorkspaceBuildJob. It:
- Closes only agent connection types (ssh, vscode, jetbrains,
  reconnecting_pty), leaving workspace_app and port_forwarding rows
  untouched since those are connect-only web events.
- Sets disconnect_time = GREATEST(connect_time, now) to guard against
  agent clock skew producing a disconnect before connect.
- Sets disconnect_reason to 'workspace stopped' or 'workspace deleted'
  without overwriting any existing reason.
- Uses a 5s timeout on the server lifecycle context so it never blocks
  job completion or depends on the RPC request context.

Additionally, start storing agent_id on new connection log rows:
- Add nullable agent_id column via migration 000417.
- Populate agent_id in both agent-reported and web-app connection log
  upserts.
- On upsert conflict, backfill agent_id with COALESCE so pre-rollout
  rows get populated on their next update without a full backfill.

Also remove an unused helper (workspaceConnectionsFromLogs) that was
causing lint failures.
2026-02-10 05:05:59 +00:00
Spike Curtis e4acf33c30 feat: include short description and host name in connections 2026-02-09 13:16:06 +00:00
Mathias Fredriksson 2daa25b47e feat(coderd): merge connection logs and coordinator tunnels into unified view
Previously, coordinator tunnel peers and connection logs independently
set WorkspaceAgent.Connections, with connection logs always overwriting
coordinator data. This meant real-time network status (ongoing vs
control_lost) from the coordinator was lost when connection logs were
present.

Add mergeWorkspaceConnections() that correlates both sources by tailnet
IP address. Connection logs provide the application type (ssh, vscode,
etc.) while tunnel peers provide real-time network status. Matched
entries get both fields; unmatched entries from either source appear
independently.

Remove the coordinator-only Connections code from db2sdk.WorkspaceAgent()
and update both HTTP handler callsites (workspaceagents.go and
workspacebuilds.go) to use the new merge function.
2026-02-09 12:40:46 +00:00
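The correlation described above can be sketched with simplified types (the real function operates on database rows and tailnet peer structs; these names are illustrative). Connection logs contribute the application type, tunnel peers contribute live status, matched by tailnet IP; unmatched entries from either side survive on their own:

```go
package main

import "fmt"

type connLog struct{ IP, Type string }       // app-layer: ssh, vscode, ...
type tunnelPeer struct{ IP, Status string }  // network-layer: ongoing, control_lost

type connection struct{ IP, Type, Status string }

func mergeWorkspaceConnections(logs []connLog, peers []tunnelPeer) []connection {
	byIP := map[string]*connection{}
	var out []*connection
	for _, l := range logs {
		c := &connection{IP: l.IP, Type: l.Type}
		byIP[l.IP] = c
		out = append(out, c)
	}
	for _, p := range peers {
		if c, ok := byIP[p.IP]; ok {
			c.Status = p.Status // matched: app type + real-time status
			continue
		}
		// Peer with no connection log still appears, status only.
		out = append(out, &connection{IP: p.IP, Status: p.Status})
	}
	res := make([]connection, len(out))
	for i, c := range out {
		res[i] = *c
	}
	return res
}

func main() {
	merged := mergeWorkspaceConnections(
		[]connLog{{IP: "fd7a::1", Type: "ssh"}},
		[]tunnelPeer{{IP: "fd7a::1", Status: "ongoing"}, {IP: "fd7a::2", Status: "control_lost"}},
	)
	fmt.Println(len(merged))                      // 2
	fmt.Println(merged[0].Type, merged[0].Status) // ssh ongoing
}
```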
Mathias Fredriksson f9b38be2f3 feat: add TunnelPeers to coordinator and populate workspace connections
Add a TunnelPeers method to the CoordinatorV2 interface that returns
active tunnel peers for a given agent. The in-memory coordinator reads
tunnels.byDst under RLock, the enterprise pgCoord queries
tailnet_tunnels joined with tailnet_peers filtered by dst_id, applies
heartbeat filtering, and resolves the best mapping per peer (NODE beats
LOST).

Replace the hardcoded placeholder connection data in
db2sdk.WorkspaceAgent with real coordinator tunnel data. IP addresses
are parsed from the peer node addresses, status is mapped from the
coordinator peer update kind (NODE -> ongoing, LOST -> control_lost).

Thread the authenticated user ID into the client peer name at both
coordination entry points so tunnel peer data carries user identity.

The Type field on WorkspaceConnection is left empty since the
coordinator operates at the tunnel layer and does not know the
application type (SSH, VS Code, etc.). That information comes from
connection logs (Ethan's task) and can be merged later.
2026-02-09 12:01:40 +00:00
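The "NODE beats LOST" selection mentioned above can be sketched as below. This is a guess at the shape only: the kind/recency tie-break and all names are assumptions, not the actual bestMappings logic in pgCoord.

```go
package main

import "fmt"

// mapping is a coordinator's view of one peer. Multiple coordinators may
// report different views of the same peer.
type mapping struct {
	Kind      string // "NODE" (has node info) or "LOST" (peer presumed gone)
	UpdatedAt int64
}

// bestMapping picks one mapping per peer: a NODE mapping wins over LOST,
// and among equals the most recent wins. Requires a non-empty slice.
func bestMapping(ms []mapping) mapping {
	best := ms[0]
	for _, m := range ms[1:] {
		switch {
		case m.Kind == "NODE" && best.Kind == "LOST":
			best = m
		case m.Kind == best.Kind && m.UpdatedAt > best.UpdatedAt:
			best = m
		}
	}
	return best
}

func main() {
	// An older NODE report still beats a newer LOST report.
	got := bestMapping([]mapping{
		{Kind: "LOST", UpdatedAt: 100},
		{Kind: "NODE", UpdatedAt: 50},
	})
	fmt.Println(got.Kind) // NODE
}
```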
Ethan Dickson 270e52537d feat: populate WorkspaceAgent.connections from connection_logs (partial) 2026-02-09 11:52:58 +00:00
Spike Curtis e409f3d656 feat: add short description to tailnet connections 2026-02-09 11:42:55 +00:00
Mathias Fredriksson 3d506178ed Revert "feat(tailnet): add TunnelPeers method to CoordinatorV2 interface"
This reverts commit 89aef9f5d1.
2026-02-09 10:47:31 +00:00
Mathias Fredriksson d67c8e49e6 Revert "feat(enterprise/tailnet): implement TunnelPeers on pgCoord"
This reverts commit 5bab1f33ec.
2026-02-09 10:47:30 +00:00
Mathias Fredriksson 205c7204ef Revert "fix(coderd): use real coordinator tunnel data for workspace connections"
This reverts commit ec9bdf126e.
2026-02-09 10:47:28 +00:00
Mathias Fredriksson 6125f01e7d Revert "test(tailnet): add unit tests for TunnelPeers on in-memory coordinator"
This reverts commit 5625d4fcf5.
2026-02-09 10:47:24 +00:00
Mathias Fredriksson 5625d4fcf5 test(tailnet): add unit tests for TunnelPeers on in-memory coordinator
Cover four cases: nil return for unknown agent, connected client with
full field assertions (ID, Name, Node, Status, Start), client
disconnect removing the peer, and multiple concurrent clients.
2026-02-09 10:39:14 +00:00
Mathias Fredriksson ec9bdf126e fix(coderd): use real coordinator tunnel data for workspace connections
Previously WorkspaceAgent() returned hardcoded fake connections for
connection testing. Replace this with real data from the coordinator's
TunnelPeers method, mapping tunnel peer info to WorkspaceConnection
structs with IP addresses parsed from node addresses and status
derived from the peer update kind.

Also thread the authenticated user's ID into the client peer name
at both coordination entry points (workspaceAgentClientCoordinate
and tailnetRPCConn) so tunnel peer data includes user identity
instead of a generic "client" name.
2026-02-09 10:33:34 +00:00
Mathias Fredriksson 5bab1f33ec feat(enterprise/tailnet): implement TunnelPeers on pgCoord
Previously TunnelPeers returned nil with a TODO comment. This
implements it by querying GetTailnetTunnelPeerBindingsByDstID,
unmarshalling proto nodes, filtering by coordinator heartbeats,
and selecting the best mapping per peer using the same NODE-beats-LOST
logic from bestMappings.

The Name field on TunnelPeerInfo is left empty since tailnet_peers
does not store peer names.
2026-02-09 10:33:31 +00:00
Mathias Fredriksson 89aef9f5d1 feat(tailnet): add TunnelPeers method to CoordinatorV2 interface
Add TunnelPeerInfo struct and TunnelPeers method for querying peers
with active tunnels to a given agent. The in-memory coordinator
implements this by reading tunnels.byDst under RLock. The enterprise
pgCoord has a temporary stub returning nil (real implementation in a
follow-up task).

Also adds GetTailnetTunnelPeerBindingsByDstID SQL query that joins
tailnet_peers with tailnet_tunnels filtered by dst_id, for use by
the pgCoord implementation.
2026-02-09 10:29:55 +00:00
Spike Curtis 40b555238f chore: add hostname and short description to tailnet proto 2026-02-09 10:04:59 +00:00
Spike Curtis 5af4118e7a feat: show connection logs on Workspace 2026-02-09 09:35:47 +00:00
Spike Curtis fab998c6e0 WIP: add temporary example connection data 2026-02-09 08:11:32 +00:00
Spike Curtis 9e8539eae2 chore: renamed to WorkspaceAgentStatus and make gen 2026-02-09 07:52:42 +00:00
Spike Curtis 44ea2e63b8 chore: add basic connection info to SDK response 2026-02-09 04:10:31 +00:00
180 changed files with 20925 additions and 6004 deletions
+4 -4
@@ -1,13 +1,13 @@
 apiVersion: cert-manager.io/v1
 kind: Certificate
 metadata:
-  name: pr${PR_NUMBER}-tls
+  name: ${DEPLOY_NAME}-tls
   namespace: pr-deployment-certs
 spec:
-  secretName: pr${PR_NUMBER}-tls
+  secretName: ${DEPLOY_NAME}-tls
   issuerRef:
     name: letsencrypt
     kind: ClusterIssuer
   dnsNames:
-    - "${PR_HOSTNAME}"
-    - "*.${PR_HOSTNAME}"
+    - "${DEPLOY_HOSTNAME}"
+    - "*.${DEPLOY_HOSTNAME}"
+9 -9
@@ -1,15 +1,15 @@
 apiVersion: v1
 kind: ServiceAccount
 metadata:
-  name: coder-workspace-pr${PR_NUMBER}
-  namespace: pr${PR_NUMBER}
+  name: coder-workspace-${DEPLOY_NAME}
+  namespace: ${DEPLOY_NAME}
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
-  name: coder-workspace-pr${PR_NUMBER}
-  namespace: pr${PR_NUMBER}
+  name: coder-workspace-${DEPLOY_NAME}
+  namespace: ${DEPLOY_NAME}
 rules:
   - apiGroups: ["*"]
     resources: ["*"]
@@ -19,13 +19,13 @@ rules:
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
-  name: coder-workspace-pr${PR_NUMBER}
-  namespace: pr${PR_NUMBER}
+  name: coder-workspace-${DEPLOY_NAME}
+  namespace: ${DEPLOY_NAME}
 subjects:
   - kind: ServiceAccount
-    name: coder-workspace-pr${PR_NUMBER}
-    namespace: pr${PR_NUMBER}
+    name: coder-workspace-${DEPLOY_NAME}
+    namespace: ${DEPLOY_NAME}
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
-  name: coder-workspace-pr${PR_NUMBER}
+  name: coder-workspace-${DEPLOY_NAME}
+52 -19
@@ -12,9 +12,23 @@ terraform {
 provider "coder" {
 }

+variable "use_kubeconfig" {
+  type        = bool
+  description = <<-EOF
+  Use host kubeconfig? (true/false)
+
+  Set this to false if the Coder host is itself running as a Pod on the same
+  Kubernetes cluster as you are deploying workspaces to.
+
+  Set this to true if the Coder host is running outside the Kubernetes cluster
+  for workspaces. A valid "~/.kube/config" must be present on the Coder host.
+  EOF
+  default     = false
+}
+
 variable "namespace" {
   type        = string
-  description = "The Kubernetes namespace to create workspaces in (must exist prior to creating workspaces)"
+  description = "The Kubernetes namespace to create workspaces in (must exist prior to creating workspaces). If the Coder host is itself running as a Pod on the same Kubernetes cluster as you are deploying workspaces to, set this to the same namespace."
 }

 data "coder_parameter" "cpu" {
@@ -82,7 +96,8 @@ data "coder_parameter" "home_disk_size" {
 }

 provider "kubernetes" {
-  config_path = null
+  # Authenticate via ~/.kube/config or a Coder-specific ServiceAccount, depending on admin preferences
+  config_path = var.use_kubeconfig == true ? "~/.kube/config" : null
 }

 data "coder_workspace" "me" {}
@@ -94,10 +109,12 @@ resource "coder_agent" "main" {
   startup_script = <<-EOT
     set -e

-    # install and start code-server
+    # Install the latest code-server.
+    # Append "--version x.x.x" to install a specific version of code-server.
     curl -fsSL https://code-server.dev/install.sh | sh -s -- --method=standalone --prefix=/tmp/code-server
-    /tmp/code-server/bin/code-server --auth none --port 13337 >/tmp/code-server.log 2>&1 &
+
+    # Start code-server in the background.
+    /tmp/code-server/bin/code-server --auth none --port 13337 >/tmp/code-server.log 2>&1 &
   EOT

   # The following metadata blocks are optional. They are used to display
@@ -174,13 +191,13 @@ resource "coder_app" "code-server" {
   }
 }

-resource "kubernetes_persistent_volume_claim" "home" {
+resource "kubernetes_persistent_volume_claim_v1" "home" {
   metadata {
-    name      = "coder-${lower(data.coder_workspace_owner.me.name)}-${lower(data.coder_workspace.me.name)}-home"
+    name      = "coder-${data.coder_workspace.me.id}-home"
     namespace = var.namespace
     labels = {
      "app.kubernetes.io/name"     = "coder-pvc"
-      "app.kubernetes.io/instance" = "coder-pvc-${lower(data.coder_workspace_owner.me.name)}-${lower(data.coder_workspace.me.name)}"
+      "app.kubernetes.io/instance" = "coder-pvc-${data.coder_workspace.me.id}"
       "app.kubernetes.io/part-of"  = "coder"
       //Coder-specific labels.
       "com.coder.resource"         = "true"
@@ -204,18 +221,18 @@ resource "kubernetes_persistent_volume_claim" "home" {
   }
 }

-resource "kubernetes_deployment" "main" {
+resource "kubernetes_deployment_v1" "main" {
   count = data.coder_workspace.me.start_count
   depends_on = [
-    kubernetes_persistent_volume_claim.home
+    kubernetes_persistent_volume_claim_v1.home
   ]
   wait_for_rollout = false
   metadata {
-    name      = "coder-${lower(data.coder_workspace_owner.me.name)}-${lower(data.coder_workspace.me.name)}"
+    name      = "coder-${data.coder_workspace.me.id}"
     namespace = var.namespace
     labels = {
       "app.kubernetes.io/name"     = "coder-workspace"
-      "app.kubernetes.io/instance" = "coder-workspace-${lower(data.coder_workspace_owner.me.name)}-${lower(data.coder_workspace.me.name)}"
+      "app.kubernetes.io/instance" = "coder-workspace-${data.coder_workspace.me.id}"
       "app.kubernetes.io/part-of"  = "coder"
       "com.coder.resource"         = "true"
       "com.coder.workspace.id"     = data.coder_workspace.me.id
@@ -232,7 +249,14 @@ resource "kubernetes_deployment" "main" {
     replicas = 1
     selector {
       match_labels = {
-        "app.kubernetes.io/name" = "coder-workspace"
+        "app.kubernetes.io/name"     = "coder-workspace"
+        "app.kubernetes.io/instance" = "coder-workspace-${data.coder_workspace.me.id}"
+        "app.kubernetes.io/part-of"  = "coder"
+        "com.coder.resource"         = "true"
+        "com.coder.workspace.id"     = data.coder_workspace.me.id
+        "com.coder.workspace.name"   = data.coder_workspace.me.name
+        "com.coder.user.id"          = data.coder_workspace_owner.me.id
+        "com.coder.user.username"    = data.coder_workspace_owner.me.name
       }
     }
     strategy {
@@ -242,20 +266,29 @@ resource "kubernetes_deployment" "main" {
     template {
       metadata {
         labels = {
-          "app.kubernetes.io/name" = "coder-workspace"
+          "app.kubernetes.io/name"     = "coder-workspace"
+          "app.kubernetes.io/instance" = "coder-workspace-${data.coder_workspace.me.id}"
+          "app.kubernetes.io/part-of"  = "coder"
+          "com.coder.resource"         = "true"
+          "com.coder.workspace.id"     = data.coder_workspace.me.id
+          "com.coder.workspace.name"   = data.coder_workspace.me.name
+          "com.coder.user.id"          = data.coder_workspace_owner.me.id
+          "com.coder.user.username"    = data.coder_workspace_owner.me.name
         }
       }
       spec {
         hostname = lower(data.coder_workspace.me.name)
         security_context {
-          run_as_user = 1000
-          fs_group    = 1000
+          run_as_user     = 1000
+          fs_group        = 1000
+          run_as_non_root = true
         }
         service_account_name = "coder-workspace-${var.namespace}"
         container {
           name              = "dev"
-          image             = "bencdr/devops-tools"
-          image_pull_policy = "Always"
+          image             = "codercom/enterprise-base:ubuntu"
+          image_pull_policy = "IfNotPresent"
           command           = ["sh", "-c", coder_agent.main.init_script]
           security_context {
             run_as_user = "1000"
@@ -284,7 +317,7 @@ resource "kubernetes_deployment" "main" {
     volume {
       name = "home"
       persistent_volume_claim {
-        claim_name = kubernetes_persistent_volume_claim.home.metadata.0.name
claim_name = kubernetes_persistent_volume_claim_v1.home.metadata.0.name
read_only = false
}
}
+9 -7
@@ -1,24 +1,26 @@
coder:
podAnnotations:
deploy-sha: "${GITHUB_SHA}"
image:
repo: "${REPO}"
tag: "pr${PR_NUMBER}"
tag: "${DEPLOY_NAME}"
pullPolicy: Always
service:
type: ClusterIP
ingress:
enable: true
className: traefik
host: "${PR_HOSTNAME}"
wildcardHost: "*.${PR_HOSTNAME}"
host: "${DEPLOY_HOSTNAME}"
wildcardHost: "*.${DEPLOY_HOSTNAME}"
tls:
enable: true
secretName: "pr${PR_NUMBER}-tls"
wildcardSecretName: "pr${PR_NUMBER}-tls"
secretName: "${DEPLOY_NAME}-tls"
wildcardSecretName: "${DEPLOY_NAME}-tls"
env:
- name: "CODER_ACCESS_URL"
value: "https://${PR_HOSTNAME}"
value: "https://${DEPLOY_HOSTNAME}"
- name: "CODER_WILDCARD_ACCESS_URL"
value: "*.${PR_HOSTNAME}"
value: "*.${DEPLOY_HOSTNAME}"
- name: "CODER_EXPERIMENTS"
value: "${EXPERIMENTS}"
- name: CODER_PG_CONNECTION_URL
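The values file above is rendered with `envsubst`, substituting the `DEPLOY_*` variables exported by the workflow. The same `${VAR}` expansion can be sketched in Go with `os.Expand`; the variable names and values below are illustrative, not the real deployment settings:

```go
package main

import (
	"fmt"
	"os"
)

// render expands ${VAR} references the way envsubst does in the
// workflow, pulling values from the supplied map instead of the
// process environment.
func render(tmpl string, vars map[string]string) string {
	return os.Expand(tmpl, func(key string) string { return vars[key] })
}

func main() {
	vars := map[string]string{
		"DEPLOY_NAME":     "my-branch",
		"DEPLOY_HOSTNAME": "my-branch.test.example.com",
	}
	fmt.Println(render("host: ${DEPLOY_HOSTNAME}", vars))
	fmt.Println(render("secretName: ${DEPLOY_NAME}-tls", vars))
}
```

Note that unlike `envsubst`, `os.Expand` replaces unknown variables with an empty string rather than leaving them intact, which matches the workflow's behavior of failing later rather than at render time.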
+408
@@ -0,0 +1,408 @@
name: Deploy Branch
on:
push:
workflow_dispatch:
permissions:
contents: read
concurrency:
group: deploy-${{ github.ref_name }}
cancel-in-progress: true
jobs:
build:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
permissions:
packages: write
env:
CODER_IMAGE_TAG: "ghcr.io/coder/coder-preview:${{ github.ref_name }}"
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup Node
uses: ./.github/actions/setup-node
- name: Setup Go
uses: ./.github/actions/setup-go
- name: Setup sqlc
uses: ./.github/actions/setup-sqlc
- name: GHCR Login
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker image
run: |
set -euo pipefail
go mod download
make gen/mark-fresh
export DOCKER_IMAGE_NO_PREREQUISITES=true
version="$(./scripts/version.sh)"
CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
export CODER_IMAGE_BUILD_BASE_TAG
make -j build/coder_linux_amd64
./scripts/build_docker.sh \
--arch amd64 \
--target "${CODER_IMAGE_TAG}" \
--version "$version" \
--push \
build/coder_linux_amd64
deploy:
needs: build
runs-on: ubuntu-latest
env:
BRANCH_NAME: ${{ github.ref_name }}
DEPLOY_NAME: "${{ github.ref_name }}"
TEST_DOMAIN_SUFFIX: "${{ startsWith(secrets.PR_DEPLOYMENTS_DOMAIN, 'test.') && secrets.PR_DEPLOYMENTS_DOMAIN || format('test.{0}', secrets.PR_DEPLOYMENTS_DOMAIN) }}"
BRANCH_HOSTNAME: "${{ github.ref_name }}.${{ startsWith(secrets.PR_DEPLOYMENTS_DOMAIN, 'test.') && secrets.PR_DEPLOYMENTS_DOMAIN || format('test.{0}', secrets.PR_DEPLOYMENTS_DOMAIN) }}"
CODER_IMAGE_TAG: "ghcr.io/coder/coder-preview:${{ github.ref_name }}"
REPO: ghcr.io/coder/coder-preview
EXPERIMENTS: "*,oauth2,mcp-server-http"
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: Set up kubeconfig
run: |
set -euo pipefail
mkdir -p ~/.kube
echo "${{ secrets.PR_DEPLOYMENTS_KUBECONFIG_BASE64 }}" | base64 --decode > ~/.kube/config
chmod 600 ~/.kube/config
- name: Verify cluster authentication
run: |
set -euo pipefail
kubectl auth can-i get namespaces > /dev/null
- name: Check if deployment exists
id: check
run: |
set -euo pipefail
set +e
helm_status_output="$(helm status "${DEPLOY_NAME}" --namespace "${DEPLOY_NAME}" 2>&1)"
helm_status_code=$?
set -e
if [ "$helm_status_code" -eq 0 ]; then
echo "new=false" >> "$GITHUB_OUTPUT"
elif echo "$helm_status_output" | grep -qi "release: not found"; then
echo "new=true" >> "$GITHUB_OUTPUT"
else
echo "$helm_status_output"
exit "$helm_status_code"
fi
# ---- Every push: ensure routing + TLS ----
- name: Ensure DNS records
run: |
set -euo pipefail
api_base_url="https://api.cloudflare.com/client/v4/zones/${{ secrets.PR_DEPLOYMENTS_ZONE_ID }}/dns_records"
base_name="${BRANCH_HOSTNAME}"
base_target="${TEST_DOMAIN_SUFFIX}"
wildcard_name="*.${BRANCH_HOSTNAME}"
ensure_cname_record() {
local record_name="$1"
local record_content="$2"
echo "Ensuring CNAME ${record_name} -> ${record_content}."
set +e
lookup_raw_response="$(
curl -sS -G "${api_base_url}" \
-H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
-H "Content-Type:application/json" \
--data-urlencode "name=${record_name}" \
--data-urlencode "per_page=100" \
-w '\n%{http_code}'
)"
lookup_exit_code=$?
set -e
if [ "$lookup_exit_code" -eq 0 ]; then
lookup_response="${lookup_raw_response%$'\n'*}"
lookup_http_code="${lookup_raw_response##*$'\n'}"
if [ "$lookup_http_code" = "200" ] && echo "$lookup_response" | jq -e '.success == true' > /dev/null 2>&1; then
if echo "$lookup_response" | jq -e '.result[]? | select(.type != "CNAME")' > /dev/null 2>&1; then
echo "Conflicting non-CNAME DNS record exists for ${record_name}."
echo "$lookup_response"
return 1
fi
existing_cname_id="$(echo "$lookup_response" | jq -r '.result[]? | select(.type == "CNAME") | .id' | head -n1)"
if [ -n "$existing_cname_id" ]; then
existing_content="$(echo "$lookup_response" | jq -r --arg id "$existing_cname_id" '.result[] | select(.id == $id) | .content')"
if [ "$existing_content" = "$record_content" ]; then
echo "CNAME already set for ${record_name}."
return 0
fi
echo "Updating existing CNAME for ${record_name}."
update_response="$(
curl -sS -X PUT "${api_base_url}/${existing_cname_id}" \
-H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
-H "Content-Type:application/json" \
--data '{"type":"CNAME","name":"'"${record_name}"'","content":"'"${record_content}"'","ttl":1,"proxied":false}'
)"
if echo "$update_response" | jq -e '.success == true' > /dev/null 2>&1; then
echo "Updated CNAME for ${record_name}."
return 0
fi
echo "Cloudflare API error while updating ${record_name}:"
echo "$update_response"
return 1
fi
fi
else
echo "Could not query DNS record ${record_name}; attempting create."
fi
max_attempts=6
attempt=1
last_response=""
last_http_code=""
while [ "$attempt" -le "$max_attempts" ]; do
echo "Creating DNS record ${record_name} (attempt ${attempt}/${max_attempts})."
set +e
raw_response="$(
curl -sS -X POST "${api_base_url}" \
-H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
-H "Content-Type:application/json" \
--data '{"type":"CNAME","name":"'"${record_name}"'","content":"'"${record_content}"'","ttl":1,"proxied":false}' \
-w '\n%{http_code}'
)"
curl_exit_code=$?
set -e
curl_failed=false
if [ "$curl_exit_code" -eq 0 ]; then
response="${raw_response%$'\n'*}"
http_code="${raw_response##*$'\n'}"
else
response="curl exited with code ${curl_exit_code}."
http_code="000"
curl_failed=true
fi
last_response="$response"
last_http_code="$http_code"
if echo "$response" | jq -e '.success == true' > /dev/null 2>&1; then
echo "Created DNS record ${record_name}."
return 0
fi
# 81057: identical record exists. 81053: host record conflict.
if echo "$response" | jq -e '.errors[]? | select(.code == 81057 or .code == 81053)' > /dev/null 2>&1; then
echo "DNS record already exists for ${record_name}."
return 0
fi
transient_error=false
if [ "$curl_failed" = true ] || [ "$http_code" = "429" ]; then
transient_error=true
elif [[ "$http_code" =~ ^[0-9]{3}$ ]] && [ "$http_code" -ge 500 ] && [ "$http_code" -lt 600 ]; then
transient_error=true
fi
if echo "$response" | jq -e '.errors[]? | select(.code == 10000 or .code == 10001)' > /dev/null 2>&1; then
transient_error=true
fi
if [ "$transient_error" = true ] && [ "$attempt" -lt "$max_attempts" ]; then
sleep_seconds=$((attempt * 5))
echo "Transient Cloudflare API error (HTTP ${http_code}). Retrying in ${sleep_seconds}s."
sleep "$sleep_seconds"
attempt=$((attempt + 1))
continue
fi
break
done
echo "Cloudflare API error while creating DNS record ${record_name} after ${attempt} attempt(s):"
echo "HTTP status: ${last_http_code}"
echo "$last_response"
return 1
}
ensure_cname_record "${base_name}" "${base_target}"
ensure_cname_record "${wildcard_name}" "${base_name}"
# ---- First deploy only ----
- name: Create namespace
if: steps.check.outputs.new == 'true'
run: |
set -euo pipefail
kubectl delete namespace "${DEPLOY_NAME}" --wait=true || true
# Delete any orphaned PVs that were bound to PVCs in this
# namespace. Without this, the old PV (with stale Postgres
# data) gets reused on reinstall, causing auth failures.
kubectl get pv -o json | \
jq -r '.items[] | select(.spec.claimRef.namespace=="'"${DEPLOY_NAME}"'") | .metadata.name' | \
xargs -r kubectl delete pv || true
kubectl create namespace "${DEPLOY_NAME}"
# ---- Every push: ensure deployment certificate ----
- name: Ensure certificate
env:
DEPLOY_HOSTNAME: ${{ env.BRANCH_HOSTNAME }}
run: |
set -euo pipefail
cert_secret_name="${DEPLOY_NAME}-tls"
envsubst < ./.github/pr-deployments/certificate.yaml | kubectl apply -f -
if ! kubectl -n pr-deployment-certs wait --for=condition=Ready "certificate/${cert_secret_name}" --timeout=10m; then
echo "Timed out waiting for certificate ${cert_secret_name} to become Ready after 10 minutes."
kubectl -n pr-deployment-certs describe certificate "${cert_secret_name}" || true
kubectl -n pr-deployment-certs get certificaterequest,order,challenge -l "cert-manager.io/certificate-name=${cert_secret_name}" || true
exit 1
fi
kubectl get secret "${cert_secret_name}" -n pr-deployment-certs -o json |
jq 'del(.metadata.namespace,.metadata.creationTimestamp,.metadata.resourceVersion,.metadata.selfLink,.metadata.uid,.metadata.managedFields)' |
kubectl -n "${DEPLOY_NAME}" apply -f -
- name: Set up PostgreSQL
if: steps.check.outputs.new == 'true'
run: |
set -euo pipefail
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install coder-db bitnami/postgresql \
--namespace "${DEPLOY_NAME}" \
--set image.repository=bitnamilegacy/postgresql \
--set auth.username=coder \
--set auth.password=coder \
--set auth.database=coder \
--set persistence.size=10Gi
kubectl create secret generic coder-db-url -n "${DEPLOY_NAME}" \
--from-literal=url="postgres://coder:coder@coder-db-postgresql.${DEPLOY_NAME}.svc.cluster.local:5432/coder?sslmode=disable"
- name: Create RBAC
if: steps.check.outputs.new == 'true'
run: envsubst < ./.github/pr-deployments/rbac.yaml | kubectl apply -f -
# ---- Every push ----
- name: Create values.yaml
env:
DEPLOY_HOSTNAME: ${{ env.BRANCH_HOSTNAME }}
REPO: ${{ env.REPO }}
PR_DEPLOYMENTS_GITHUB_OAUTH_CLIENT_ID: ${{ secrets.PR_DEPLOYMENTS_GITHUB_OAUTH_CLIENT_ID }}
PR_DEPLOYMENTS_GITHUB_OAUTH_CLIENT_SECRET: ${{ secrets.PR_DEPLOYMENTS_GITHUB_OAUTH_CLIENT_SECRET }}
run: envsubst < ./.github/pr-deployments/values.yaml > ./deploy-values.yaml
- name: Install/Upgrade Helm chart
run: |
set -euo pipefail
helm dependency update --skip-refresh ./helm/coder
helm upgrade --install "${DEPLOY_NAME}" ./helm/coder \
--namespace "${DEPLOY_NAME}" \
--values ./deploy-values.yaml \
--force
- name: Install coder-logstream-kube
if: steps.check.outputs.new == 'true'
run: |
helm repo add coder-logstream-kube https://helm.coder.com/logstream-kube
helm upgrade --install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \
--namespace "${DEPLOY_NAME}" \
--set url="https://${BRANCH_HOSTNAME}" \
--set "namespaces[0]=${DEPLOY_NAME}"
- name: Create first user and template
if: steps.check.outputs.new == 'true'
env:
PR_DEPLOYMENTS_ADMIN_PASSWORD: ${{ secrets.PR_DEPLOYMENTS_ADMIN_PASSWORD }}
run: |
set -euo pipefail
URL="https://${BRANCH_HOSTNAME}/bin/coder-linux-amd64"
COUNT=0
until curl --output /dev/null --silent --head --fail "$URL"; do
sleep 5
COUNT=$((COUNT+1))
if [ "$COUNT" -ge 60 ]; then echo "Timed out"; exit 1; fi
done
curl -fsSL "$URL" -o /tmp/coder && chmod +x /tmp/coder
password="${PR_DEPLOYMENTS_ADMIN_PASSWORD}"
if [ -z "$password" ]; then
echo "Missing PR_DEPLOYMENTS_ADMIN_PASSWORD repository secret."
exit 1
fi
echo "::add-mask::$password"
admin_username="${BRANCH_NAME}-admin"
admin_email="${BRANCH_NAME}@coder.com"
coder_url="https://${BRANCH_HOSTNAME}"
first_user_status="$(curl -sS -o /dev/null -w '%{http_code}' "${coder_url}/api/v2/users/first")"
if [ "$first_user_status" = "404" ]; then
/tmp/coder login \
--first-user-username "$admin_username" \
--first-user-email "$admin_email" \
--first-user-password "$password" \
--first-user-trial=false \
--use-token-as-session \
"$coder_url"
elif [ "$first_user_status" = "200" ]; then
login_payload="$(jq -n --arg email "$admin_email" --arg password "$password" '{email: $email, password: $password}')"
login_response="$(
curl -sS -X POST "${coder_url}/api/v2/users/login" \
-H "Content-Type: application/json" \
--data "$login_payload" \
-w '\n%{http_code}'
)"
login_body="${login_response%$'\n'*}"
login_status="${login_response##*$'\n'}"
if [ "$login_status" != "201" ]; then
echo "Password login failed for existing deployment (HTTP ${login_status})."
echo "$login_body"
exit 1
fi
session_token="$(echo "$login_body" | jq -r '.session_token // empty')"
if [ -z "$session_token" ]; then
echo "Password login response is missing session_token."
exit 1
fi
echo "::add-mask::$session_token"
/tmp/coder login \
--token "$session_token" \
--use-token-as-session \
"$coder_url"
else
echo "Unexpected status from /api/v2/users/first: ${first_user_status}."
exit 1
fi
cd .github/pr-deployments/template
/tmp/coder templates push -y --directory . --variable "namespace=${DEPLOY_NAME}" kubernetes
/tmp/coder create --template="kubernetes" kube \
--parameter cpu=2 --parameter memory=4 --parameter home_disk_size=2 -y
/tmp/coder stop kube -y
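The "Ensure DNS records" step earlier in this workflow retries only transient Cloudflare API failures (curl itself failing, HTTP 429, or any 5xx) with a linear backoff of `attempt * 5` seconds. That classification can be sketched as a small, self-contained Go helper; the function names here are illustrative:

```go
package main

import "fmt"

// isTransient mirrors the workflow's retry test: retry when the HTTP
// request itself failed, on rate limiting (429), or on any 5xx.
func isTransient(curlFailed bool, httpCode int) bool {
	if curlFailed || httpCode == 429 {
		return true
	}
	return httpCode >= 500 && httpCode < 600
}

// backoffSeconds mirrors the workflow's sleep_seconds=$((attempt * 5)).
func backoffSeconds(attempt int) int { return attempt * 5 }

func main() {
	fmt.Println(isTransient(false, 429), backoffSeconds(1))
	fmt.Println(isTransient(false, 503), backoffSeconds(2))
	fmt.Println(isTransient(false, 404), backoffSeconds(3))
}
```

Permanent errors such as 4xx responses (other than 429) break out of the loop immediately, so a misconfigured API token fails fast instead of burning all six attempts.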
+3 -1
@@ -285,6 +285,8 @@ jobs:
PR_NUMBER: ${{ needs.get_info.outputs.PR_NUMBER }}
PR_TITLE: ${{ needs.get_info.outputs.PR_TITLE }}
PR_URL: ${{ needs.get_info.outputs.PR_URL }}
DEPLOY_NAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}"
DEPLOY_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
steps:
- name: Harden Runner
@@ -521,7 +523,7 @@ jobs:
run: |
set -euo pipefail
cd .github/pr-deployments/template
coder templates push -y --variable "namespace=pr${PR_NUMBER}" kubernetes
coder templates push -y --directory . --variable "namespace=pr${PR_NUMBER}" kubernetes
# Create workspace
coder create --template="kubernetes" kube --parameter cpu=2 --parameter memory=4 --parameter home_disk_size=2 -y
+90 -19
@@ -12,6 +12,7 @@ import (
"net"
"net/http"
"net/netip"
"net/url"
"os"
"os/user"
"path/filepath"
@@ -881,7 +882,7 @@ const (
reportConnectionBufferLimit = 2048
)
func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_Type, ip string) (disconnected func(code int, reason string)) {
func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_Type, ip string, options ...func(*proto.Connection)) (disconnected func(code int, reason string)) {
// A blank IP can unfortunately happen if the connection is broken in a data race before we get to introspect it. We
// still report it, and the recipient can handle a blank IP.
if ip != "" {
@@ -912,16 +913,20 @@ func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_T
slog.F("ip", ip),
)
} else {
connectMsg := &proto.Connection{
Id: id[:],
Action: proto.Connection_CONNECT,
Type: connectionType,
Timestamp: timestamppb.New(time.Now()),
Ip: ip,
StatusCode: 0,
Reason: nil,
}
for _, opt := range options {
opt(connectMsg)
}
a.reportConnections = append(a.reportConnections, &proto.ReportConnectionRequest{
Connection: &proto.Connection{
Id: id[:],
Action: proto.Connection_CONNECT,
Type: connectionType,
Timestamp: timestamppb.New(time.Now()),
Ip: ip,
StatusCode: 0,
Reason: nil,
},
Connection: connectMsg,
})
select {
case a.reportConnectionsUpdate <- struct{}{}:
@@ -942,16 +947,20 @@ func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_T
return
}
disconnMsg := &proto.Connection{
Id: id[:],
Action: proto.Connection_DISCONNECT,
Type: connectionType,
Timestamp: timestamppb.New(time.Now()),
Ip: ip,
StatusCode: int32(code), //nolint:gosec
Reason: &reason,
}
for _, opt := range options {
opt(disconnMsg)
}
a.reportConnections = append(a.reportConnections, &proto.ReportConnectionRequest{
Connection: &proto.Connection{
Id: id[:],
Action: proto.Connection_DISCONNECT,
Type: connectionType,
Timestamp: timestamppb.New(time.Now()),
Ip: ip,
StatusCode: int32(code), //nolint:gosec
Reason: &reason,
},
Connection: disconnMsg,
})
select {
case a.reportConnectionsUpdate <- struct{}{}:
@@ -1377,6 +1386,8 @@ func (a *agent) createOrUpdateNetwork(manifestOK, networkOK *checkpoint) func(co
manifest.DERPForceWebSockets,
manifest.DisableDirectConnections,
keySeed,
manifest.WorkspaceName,
manifest.Apps,
)
if err != nil {
return xerrors.Errorf("create tailnet: %w", err)
@@ -1525,12 +1536,39 @@ func (a *agent) trackGoroutine(fn func()) error {
return nil
}
// appPortFromURL extracts the port from a workspace app URL,
// defaulting to 80/443 by scheme.
func appPortFromURL(rawURL string) uint16 {
u, err := url.Parse(rawURL)
if err != nil {
return 0
}
p := u.Port()
if p == "" {
switch u.Scheme {
case "http":
return 80
case "https":
return 443
default:
return 0
}
}
port, err := strconv.ParseUint(p, 10, 16)
if err != nil {
return 0
}
return uint16(port)
}
func (a *agent) createTailnet(
ctx context.Context,
agentID uuid.UUID,
derpMap *tailcfg.DERPMap,
derpForceWebSockets, disableDirectConnections bool,
keySeed int64,
workspaceName string,
apps []codersdk.WorkspaceApp,
) (_ *tailnet.Conn, err error) {
// Inject `CODER_AGENT_HEADER` into the DERP header.
var header http.Header
@@ -1539,6 +1577,18 @@ func (a *agent) createTailnet(
header = headerTransport.Header
}
}
// Build port-to-app mapping for workspace app connection tracking
// via the tailnet callback.
portToApp := make(map[uint16]codersdk.WorkspaceApp)
for _, app := range apps {
port := appPortFromURL(app.URL)
if port == 0 || app.External {
continue
}
portToApp[port] = app
}
network, err := tailnet.NewConn(&tailnet.Options{
ID: agentID,
Addresses: a.wireguardAddresses(agentID),
@@ -1548,6 +1598,27 @@ func (a *agent) createTailnet(
Logger: a.logger.Named("net.tailnet"),
ListenPort: a.tailnetListenPort,
BlockEndpoints: disableDirectConnections,
ShortDescription: "Workspace Agent",
Hostname: workspaceName,
TCPConnCallback: func(src, dst netip.AddrPort) (disconnected func(int, string)) {
app, ok := portToApp[dst.Port()]
connType := proto.Connection_PORT_FORWARDING
slugOrPort := strconv.Itoa(int(dst.Port()))
if ok {
connType = proto.Connection_WORKSPACE_APP
if app.Slug != "" {
slugOrPort = app.Slug
}
}
return a.reportConnection(
uuid.New(),
connType,
src.String(),
func(c *proto.Connection) {
c.SlugOrPort = &slugOrPort
},
)
},
})
if err != nil {
return nil, xerrors.Errorf("create tailnet: %w", err)
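The `TCPConnCallback` above classifies each inbound connection by destination port: ports registered in `portToApp` are reported as workspace apps (preferring the app slug when set), everything else as plain port forwarding keyed by the numeric port. A simplified sketch of that decision, using plain strings in place of the proto enum values:

```go
package main

import (
	"fmt"
	"strconv"
)

// classifyConn mirrors the callback's logic: a destination port that
// maps to a workspace app is reported as WORKSPACE_APP (using the
// slug when one is set), anything else as PORT_FORWARDING keyed by
// the numeric port.
func classifyConn(portToApp map[uint16]string, dstPort uint16) (connType, slugOrPort string) {
	slugOrPort = strconv.Itoa(int(dstPort))
	slug, ok := portToApp[dstPort]
	if !ok {
		return "PORT_FORWARDING", slugOrPort
	}
	if slug != "" {
		slugOrPort = slug
	}
	return "WORKSPACE_APP", slugOrPort
}

func main() {
	apps := map[uint16]string{13337: "code-server"}
	fmt.Println(classifyConn(apps, 13337))
	fmt.Println(classifyConn(apps, 8080))
}
```

External apps never enter the map (they are skipped when building `portToApp`), so connections to them fall through to the port-forwarding bucket, matching the diff's intent.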
+96
@@ -2843,6 +2843,102 @@ func TestAgent_Dial(t *testing.T) {
}
}
// TestAgent_PortForwardConnectionType verifies connection
// type classification for forwarded TCP connections.
func TestAgent_PortForwardConnectionType(t *testing.T) {
t.Parallel()
// Start a TCP echo server for the "app" port.
appListener, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
t.Cleanup(func() { _ = appListener.Close() })
appPort := appListener.Addr().(*net.TCPAddr).Port
// Start a TCP echo server for a non-app port.
nonAppListener, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
t.Cleanup(func() { _ = nonAppListener.Close() })
nonAppPort := nonAppListener.Addr().(*net.TCPAddr).Port
echoOnce := func(l net.Listener) <-chan struct{} {
done := make(chan struct{})
go func() {
defer close(done)
c, err := l.Accept()
if err != nil {
return
}
defer c.Close()
_, _ = io.Copy(c, c)
}()
return done
}
ctx := testutil.Context(t, testutil.WaitLong)
//nolint:dogsled
agentConn, agentClient, _, _, _ := setupAgent(t, agentsdk.Manifest{
Apps: []codersdk.WorkspaceApp{
{
ID: uuid.New(),
Slug: "myapp",
URL: fmt.Sprintf("http://localhost:%d", appPort),
SharingLevel: codersdk.WorkspaceAppSharingLevelOwner,
Health: codersdk.WorkspaceAppHealthDisabled,
},
},
}, 0)
require.True(t, agentConn.AwaitReachable(ctx))
// Phase 1: Connect to the app port, expect WORKSPACE_APP.
appDone := echoOnce(appListener)
conn, err := agentConn.DialContext(ctx, "tcp", appListener.Addr().String())
require.NoError(t, err)
testDial(ctx, t, conn)
_ = conn.Close()
<-appDone
var reports []*proto.ReportConnectionRequest
require.Eventually(t, func() bool {
reports = agentClient.GetConnectionReports()
return len(reports) >= 2
}, testutil.WaitMedium, testutil.IntervalFast,
"waiting for 2 connection reports for workspace app",
)
require.Equal(t, proto.Connection_CONNECT, reports[0].GetConnection().GetAction())
require.Equal(t, proto.Connection_WORKSPACE_APP, reports[0].GetConnection().GetType())
require.Equal(t, "myapp", reports[0].GetConnection().GetSlugOrPort())
require.Equal(t, proto.Connection_DISCONNECT, reports[1].GetConnection().GetAction())
require.Equal(t, proto.Connection_WORKSPACE_APP, reports[1].GetConnection().GetType())
require.Equal(t, "myapp", reports[1].GetConnection().GetSlugOrPort())
// Phase 2: Connect to the non-app port, expect PORT_FORWARDING.
nonAppDone := echoOnce(nonAppListener)
conn, err = agentConn.DialContext(ctx, "tcp", nonAppListener.Addr().String())
require.NoError(t, err)
testDial(ctx, t, conn)
_ = conn.Close()
<-nonAppDone
nonAppPortStr := strconv.Itoa(nonAppPort)
require.Eventually(t, func() bool {
reports = agentClient.GetConnectionReports()
return len(reports) >= 4
}, testutil.WaitMedium, testutil.IntervalFast,
"waiting for 4 connection reports total",
)
require.Equal(t, proto.Connection_CONNECT, reports[2].GetConnection().GetAction())
require.Equal(t, proto.Connection_PORT_FORWARDING, reports[2].GetConnection().GetType())
require.Equal(t, nonAppPortStr, reports[2].GetConnection().GetSlugOrPort())
require.Equal(t, proto.Connection_DISCONNECT, reports[3].GetConnection().GetAction())
require.Equal(t, proto.Connection_PORT_FORWARDING, reports[3].GetConnection().GetType())
require.Equal(t, nonAppPortStr, reports[3].GetConnection().GetSlugOrPort())
}
// TestAgent_UpdatedDERP checks that agents can handle their DERP map being
// updated, and that clients can also handle it.
func TestAgent_UpdatedDERP(t *testing.T) {
+142 -383
@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.30.0
// protoc v4.23.4
// protoc-gen-go v1.36.9
// protoc v6.33.1
// source: agent/agentsocket/proto/agentsocket.proto
package proto
@@ -11,6 +11,7 @@ import (
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@@ -21,18 +22,16 @@ const (
)
type PingRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *PingRequest) Reset() {
*x = PingRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *PingRequest) String() string {
@@ -43,7 +42,7 @@ func (*PingRequest) ProtoMessage() {}
func (x *PingRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -59,18 +58,16 @@ func (*PingRequest) Descriptor() ([]byte, []int) {
}
type PingResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *PingResponse) Reset() {
*x = PingResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *PingResponse) String() string {
@@ -81,7 +78,7 @@ func (*PingResponse) ProtoMessage() {}
func (x *PingResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -97,20 +94,17 @@ func (*PingResponse) Descriptor() ([]byte, []int) {
}
type SyncStartRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *SyncStartRequest) Reset() {
*x = SyncStartRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncStartRequest) String() string {
@@ -121,7 +115,7 @@ func (*SyncStartRequest) ProtoMessage() {}
func (x *SyncStartRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -144,18 +138,16 @@ func (x *SyncStartRequest) GetUnit() string {
}
type SyncStartResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SyncStartResponse) Reset() {
*x = SyncStartResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncStartResponse) String() string {
@@ -166,7 +158,7 @@ func (*SyncStartResponse) ProtoMessage() {}
func (x *SyncStartResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -182,21 +174,18 @@ func (*SyncStartResponse) Descriptor() ([]byte, []int) {
}
type SyncWantRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *SyncWantRequest) Reset() {
*x = SyncWantRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncWantRequest) String() string {
@@ -207,7 +196,7 @@ func (*SyncWantRequest) ProtoMessage() {}
func (x *SyncWantRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -237,18 +226,16 @@ func (x *SyncWantRequest) GetDependsOn() string {
}
type SyncWantResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SyncWantResponse) Reset() {
*x = SyncWantResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncWantResponse) String() string {
@@ -259,7 +246,7 @@ func (*SyncWantResponse) ProtoMessage() {}
func (x *SyncWantResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -275,20 +262,17 @@ func (*SyncWantResponse) Descriptor() ([]byte, []int) {
}
type SyncCompleteRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *SyncCompleteRequest) Reset() {
*x = SyncCompleteRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncCompleteRequest) String() string {
@@ -299,7 +283,7 @@ func (*SyncCompleteRequest) ProtoMessage() {}
func (x *SyncCompleteRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -322,18 +306,16 @@ func (x *SyncCompleteRequest) GetUnit() string {
}
type SyncCompleteResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SyncCompleteResponse) Reset() {
*x = SyncCompleteResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncCompleteResponse) String() string {
@@ -344,7 +326,7 @@ func (*SyncCompleteResponse) ProtoMessage() {}
func (x *SyncCompleteResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -360,20 +342,17 @@ func (*SyncCompleteResponse) Descriptor() ([]byte, []int) {
}
type SyncReadyRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *SyncReadyRequest) Reset() {
*x = SyncReadyRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncReadyRequest) String() string {
@@ -384,7 +363,7 @@ func (*SyncReadyRequest) ProtoMessage() {}
func (x *SyncReadyRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -407,20 +386,17 @@ func (x *SyncReadyRequest) GetUnit() string {
}
type SyncReadyResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Ready bool `protobuf:"varint,1,opt,name=ready,proto3" json:"ready,omitempty"`
unknownFields protoimpl.UnknownFields
Ready bool `protobuf:"varint,1,opt,name=ready,proto3" json:"ready,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *SyncReadyResponse) Reset() {
*x = SyncReadyResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncReadyResponse) String() string {
@@ -431,7 +407,7 @@ func (*SyncReadyResponse) ProtoMessage() {}
func (x *SyncReadyResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -454,20 +430,17 @@ func (x *SyncReadyResponse) GetReady() bool {
}
type SyncStatusRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *SyncStatusRequest) Reset() {
*x = SyncStatusRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncStatusRequest) String() string {
@@ -478,7 +451,7 @@ func (*SyncStatusRequest) ProtoMessage() {}
func (x *SyncStatusRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -501,24 +474,21 @@ func (x *SyncStatusRequest) GetUnit() string {
}
type DependencyInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
RequiredStatus string `protobuf:"bytes,3,opt,name=required_status,json=requiredStatus,proto3" json:"required_status,omitempty"`
CurrentStatus string `protobuf:"bytes,4,opt,name=current_status,json=currentStatus,proto3" json:"current_status,omitempty"`
IsSatisfied bool `protobuf:"varint,5,opt,name=is_satisfied,json=isSatisfied,proto3" json:"is_satisfied,omitempty"`
state protoimpl.MessageState `protogen:"open.v1"`
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
RequiredStatus string `protobuf:"bytes,3,opt,name=required_status,json=requiredStatus,proto3" json:"required_status,omitempty"`
CurrentStatus string `protobuf:"bytes,4,opt,name=current_status,json=currentStatus,proto3" json:"current_status,omitempty"`
IsSatisfied bool `protobuf:"varint,5,opt,name=is_satisfied,json=isSatisfied,proto3" json:"is_satisfied,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DependencyInfo) Reset() {
*x = DependencyInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DependencyInfo) String() string {
@@ -529,7 +499,7 @@ func (*DependencyInfo) ProtoMessage() {}
func (x *DependencyInfo) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -580,22 +550,19 @@ func (x *DependencyInfo) GetIsSatisfied() bool {
}
type SyncStatusResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Status string `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"`
IsReady bool `protobuf:"varint,2,opt,name=is_ready,json=isReady,proto3" json:"is_ready,omitempty"`
Dependencies []*DependencyInfo `protobuf:"bytes,3,rep,name=dependencies,proto3" json:"dependencies,omitempty"`
unknownFields protoimpl.UnknownFields
Status string `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"`
IsReady bool `protobuf:"varint,2,opt,name=is_ready,json=isReady,proto3" json:"is_ready,omitempty"`
Dependencies []*DependencyInfo `protobuf:"bytes,3,rep,name=dependencies,proto3" json:"dependencies,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *SyncStatusResponse) Reset() {
*x = SyncStatusResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SyncStatusResponse) String() string {
@@ -606,7 +573,7 @@ func (*SyncStatusResponse) ProtoMessage() {}
func (x *SyncStatusResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -644,111 +611,62 @@ func (x *SyncStatusResponse) GetDependencies() []*DependencyInfo {
var File_agent_agentsocket_proto_agentsocket_proto protoreflect.FileDescriptor
var file_agent_agentsocket_proto_agentsocket_proto_rawDesc = []byte{
0x0a, 0x29, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73,
0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x14, 0x63, 0x6f, 0x64,
0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76,
0x31, 0x22, 0x0d, 0x0a, 0x0b, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x22, 0x0e, 0x0a, 0x0c, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x22, 0x26, 0x0a, 0x10, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x22, 0x13, 0x0a, 0x11, 0x53, 0x79, 0x6e, 0x63,
0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x44, 0x0a,
0x0f, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
0x75, 0x6e, 0x69, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x5f,
0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64,
0x73, 0x4f, 0x6e, 0x22, 0x12, 0x0a, 0x10, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x29, 0x0a, 0x13, 0x53, 0x79, 0x6e, 0x63, 0x43,
0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12,
0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e,
0x69, 0x74, 0x22, 0x16, 0x0a, 0x14, 0x53, 0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65,
0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x26, 0x0a, 0x10, 0x53, 0x79,
0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12,
0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e,
0x69, 0x74, 0x22, 0x29, 0x0a, 0x11, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x72, 0x65, 0x61, 0x64, 0x79,
0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x05, 0x72, 0x65, 0x61, 0x64, 0x79, 0x22, 0x27, 0x0a,
0x11, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x22, 0xb6, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x70, 0x65, 0x6e,
0x64, 0x65, 0x6e, 0x63, 0x79, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69,
0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x12, 0x1d, 0x0a,
0x0a, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x5f, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28,
0x09, 0x52, 0x09, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x4f, 0x6e, 0x12, 0x27, 0x0a, 0x0f,
0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18,
0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x53,
0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x25, 0x0a, 0x0e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74,
0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x63,
0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x21, 0x0a, 0x0c,
0x69, 0x73, 0x5f, 0x73, 0x61, 0x74, 0x69, 0x73, 0x66, 0x69, 0x65, 0x64, 0x18, 0x05, 0x20, 0x01,
0x28, 0x08, 0x52, 0x0b, 0x69, 0x73, 0x53, 0x61, 0x74, 0x69, 0x73, 0x66, 0x69, 0x65, 0x64, 0x22,
0x91, 0x01, 0x0a, 0x12, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x19,
0x0a, 0x08, 0x69, 0x73, 0x5f, 0x72, 0x65, 0x61, 0x64, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08,
0x52, 0x07, 0x69, 0x73, 0x52, 0x65, 0x61, 0x64, 0x79, 0x12, 0x48, 0x0a, 0x0c, 0x64, 0x65, 0x70,
0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63, 0x69, 0x65, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x24, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63,
0x79, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x0c, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63,
0x69, 0x65, 0x73, 0x32, 0xbb, 0x04, 0x0a, 0x0b, 0x41, 0x67, 0x65, 0x6e, 0x74, 0x53, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x12, 0x4d, 0x0a, 0x04, 0x50, 0x69, 0x6e, 0x67, 0x12, 0x21, 0x2e, 0x63, 0x6f,
0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e,
0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22,
0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b,
0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x5c, 0x0a, 0x09, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12,
0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x59, 0x0a, 0x08, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x12, 0x25, 0x2e, 0x63,
0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74,
0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e,
0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x57,
0x61, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x65, 0x0a, 0x0c, 0x53,
0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x12, 0x29, 0x2e, 0x63, 0x6f,
0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e,
0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2a, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61,
0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79,
0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x5c, 0x0a, 0x09, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x12,
0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x5f, 0x0a, 0x0a, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x27,
0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b,
0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x28, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
0x65, 0x42, 0x33, 0x5a, 0x31, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x63, 0x6f, 0x64, 0x65, 0x72, 0x2f, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2f, 0x76, 0x32, 0x2f, 0x61,
0x67, 0x65, 0x6e, 0x74, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74,
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
const file_agent_agentsocket_proto_agentsocket_proto_rawDesc = "" +
"\n" +
")agent/agentsocket/proto/agentsocket.proto\x12\x14coder.agentsocket.v1\"\r\n" +
"\vPingRequest\"\x0e\n" +
"\fPingResponse\"&\n" +
"\x10SyncStartRequest\x12\x12\n" +
"\x04unit\x18\x01 \x01(\tR\x04unit\"\x13\n" +
"\x11SyncStartResponse\"D\n" +
"\x0fSyncWantRequest\x12\x12\n" +
"\x04unit\x18\x01 \x01(\tR\x04unit\x12\x1d\n" +
"\n" +
"depends_on\x18\x02 \x01(\tR\tdependsOn\"\x12\n" +
"\x10SyncWantResponse\")\n" +
"\x13SyncCompleteRequest\x12\x12\n" +
"\x04unit\x18\x01 \x01(\tR\x04unit\"\x16\n" +
"\x14SyncCompleteResponse\"&\n" +
"\x10SyncReadyRequest\x12\x12\n" +
"\x04unit\x18\x01 \x01(\tR\x04unit\")\n" +
"\x11SyncReadyResponse\x12\x14\n" +
"\x05ready\x18\x01 \x01(\bR\x05ready\"'\n" +
"\x11SyncStatusRequest\x12\x12\n" +
"\x04unit\x18\x01 \x01(\tR\x04unit\"\xb6\x01\n" +
"\x0eDependencyInfo\x12\x12\n" +
"\x04unit\x18\x01 \x01(\tR\x04unit\x12\x1d\n" +
"\n" +
"depends_on\x18\x02 \x01(\tR\tdependsOn\x12'\n" +
"\x0frequired_status\x18\x03 \x01(\tR\x0erequiredStatus\x12%\n" +
"\x0ecurrent_status\x18\x04 \x01(\tR\rcurrentStatus\x12!\n" +
"\fis_satisfied\x18\x05 \x01(\bR\visSatisfied\"\x91\x01\n" +
"\x12SyncStatusResponse\x12\x16\n" +
"\x06status\x18\x01 \x01(\tR\x06status\x12\x19\n" +
"\bis_ready\x18\x02 \x01(\bR\aisReady\x12H\n" +
"\fdependencies\x18\x03 \x03(\v2$.coder.agentsocket.v1.DependencyInfoR\fdependencies2\xbb\x04\n" +
"\vAgentSocket\x12M\n" +
"\x04Ping\x12!.coder.agentsocket.v1.PingRequest\x1a\".coder.agentsocket.v1.PingResponse\x12\\\n" +
"\tSyncStart\x12&.coder.agentsocket.v1.SyncStartRequest\x1a'.coder.agentsocket.v1.SyncStartResponse\x12Y\n" +
"\bSyncWant\x12%.coder.agentsocket.v1.SyncWantRequest\x1a&.coder.agentsocket.v1.SyncWantResponse\x12e\n" +
"\fSyncComplete\x12).coder.agentsocket.v1.SyncCompleteRequest\x1a*.coder.agentsocket.v1.SyncCompleteResponse\x12\\\n" +
"\tSyncReady\x12&.coder.agentsocket.v1.SyncReadyRequest\x1a'.coder.agentsocket.v1.SyncReadyResponse\x12_\n" +
"\n" +
"SyncStatus\x12'.coder.agentsocket.v1.SyncStatusRequest\x1a(.coder.agentsocket.v1.SyncStatusResponseB3Z1github.com/coder/coder/v2/agent/agentsocket/protob\x06proto3"
var (
file_agent_agentsocket_proto_agentsocket_proto_rawDescOnce sync.Once
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = file_agent_agentsocket_proto_agentsocket_proto_rawDesc
file_agent_agentsocket_proto_agentsocket_proto_rawDescData []byte
)
func file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP() []byte {
file_agent_agentsocket_proto_agentsocket_proto_rawDescOnce.Do(func() {
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = protoimpl.X.CompressGZIP(file_agent_agentsocket_proto_agentsocket_proto_rawDescData)
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_agent_agentsocket_proto_agentsocket_proto_rawDesc), len(file_agent_agentsocket_proto_agentsocket_proto_rawDesc)))
})
return file_agent_agentsocket_proto_agentsocket_proto_rawDescData
}
var file_agent_agentsocket_proto_agentsocket_proto_msgTypes = make([]protoimpl.MessageInfo, 13)
var file_agent_agentsocket_proto_agentsocket_proto_goTypes = []interface{}{
var file_agent_agentsocket_proto_agentsocket_proto_goTypes = []any{
(*PingRequest)(nil), // 0: coder.agentsocket.v1.PingRequest
(*PingResponse)(nil), // 1: coder.agentsocket.v1.PingResponse
(*SyncStartRequest)(nil), // 2: coder.agentsocket.v1.SyncStartRequest
@@ -789,169 +707,11 @@ func file_agent_agentsocket_proto_agentsocket_proto_init() {
if File_agent_agentsocket_proto_agentsocket_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PingRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PingResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStartRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStartResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncWantRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncWantResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncCompleteRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncCompleteResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncReadyRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncReadyResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStatusRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DependencyInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStatusResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_agent_agentsocket_proto_agentsocket_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_agent_agentsocket_proto_agentsocket_proto_rawDesc), len(file_agent_agentsocket_proto_agentsocket_proto_rawDesc)),
NumEnums: 0,
NumMessages: 13,
NumExtensions: 0,
@@ -962,7 +722,6 @@ func file_agent_agentsocket_proto_agentsocket_proto_init() {
MessageInfos: file_agent_agentsocket_proto_agentsocket_proto_msgTypes,
}.Build()
File_agent_agentsocket_proto_agentsocket_proto = out.File
file_agent_agentsocket_proto_agentsocket_proto_rawDesc = nil
file_agent_agentsocket_proto_agentsocket_proto_goTypes = nil
file_agent_agentsocket_proto_agentsocket_proto_depIdxs = nil
}
+13 -1
@@ -359,6 +359,17 @@ func (s *sessionCloseTracker) Close() error {
return s.Session.Close()
}
func fallbackDisconnectReason(code int, reason string) string {
if reason != "" || code == 0 {
return reason
}
return fmt.Sprintf(
"connection ended unexpectedly: session closed without explicit reason (exit code: %d)",
code,
)
}
func extractContainerInfo(env []string) (container, containerUser string, filteredEnv []string) {
for _, kv := range env {
if strings.HasPrefix(kv, ContainerEnvironmentVariable+"=") {
@@ -439,7 +450,8 @@ func (s *Server) sessionHandler(session ssh.Session) {
disconnected := s.config.ReportConnection(id, magicType, remoteAddrString)
defer func() {
disconnected(scr.exitCode(), reason)
code := scr.exitCode()
disconnected(code, fallbackDisconnectReason(code, reason))
}()
}
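The hunk above only reports a fallback reason when the session ended abnormally (non-zero exit code) without an explicit reason. A minimal standalone sketch of that behavior, mirroring the helper as shown in the diff:

```go
package main

import "fmt"

// fallbackDisconnectReason mirrors the helper added in the diff above:
// a caller-provided reason, or a clean exit (code 0), is returned as-is;
// otherwise a synthetic reason embedding the exit code is generated so
// diagnostics never see an empty reason for an unexpected disconnect.
func fallbackDisconnectReason(code int, reason string) string {
	if reason != "" || code == 0 {
		return reason
	}
	return fmt.Sprintf(
		"connection ended unexpectedly: session closed without explicit reason (exit code: %d)",
		code,
	)
}

func main() {
	fmt.Println(fallbackDisconnectReason(255, "network path changed")) // provided reason wins
	fmt.Println(fallbackDisconnectReason(0, ""))                       // clean exit: empty reason kept
	fmt.Println(fallbackDisconnectReason(1, ""))                       // synthetic fallback
}
```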
+27
@@ -7,6 +7,7 @@ import (
"context"
"io"
"net"
"strings"
"testing"
gliderssh "github.com/gliderlabs/ssh"
@@ -102,6 +103,32 @@ func waitForChan(ctx context.Context, t *testing.T, c <-chan struct{}, msg strin
}
}
func TestFallbackDisconnectReason(t *testing.T) {
t.Parallel()
t.Run("KeepProvidedReason", func(t *testing.T) {
t.Parallel()
reason := fallbackDisconnectReason(255, "network path changed")
assert.Equal(t, "network path changed", reason)
})
t.Run("KeepEmptyReasonForCleanExit", func(t *testing.T) {
t.Parallel()
reason := fallbackDisconnectReason(0, "")
assert.Equal(t, "", reason)
})
t.Run("FallbackReasonForUnexpectedExit", func(t *testing.T) {
t.Parallel()
reason := fallbackDisconnectReason(1, "")
assert.True(t, strings.Contains(reason, "ended unexpectedly"))
assert.True(t, strings.Contains(reason, "exit code: 1"))
})
}
type testSession struct {
ctx testSSHContext
+1
@@ -131,6 +131,7 @@ func TestServer_X11(t *testing.T) {
func TestServer_X11_EvictionLRU(t *testing.T) {
t.Parallel()
t.Skip("Flaky test, times out in CI")
if runtime.GOOS != "linux" {
t.Skip("X11 forwarding is only supported on Linux")
}
+33 -13
@@ -576,6 +576,8 @@ const (
Connection_VSCODE Connection_Type = 2
Connection_JETBRAINS Connection_Type = 3
Connection_RECONNECTING_PTY Connection_Type = 4
Connection_WORKSPACE_APP Connection_Type = 5
Connection_PORT_FORWARDING Connection_Type = 6
)
// Enum value maps for Connection_Type.
@@ -586,6 +588,8 @@ var (
2: "VSCODE",
3: "JETBRAINS",
4: "RECONNECTING_PTY",
5: "WORKSPACE_APP",
6: "PORT_FORWARDING",
}
Connection_Type_value = map[string]int32{
"TYPE_UNSPECIFIED": 0,
@@ -593,6 +597,8 @@ var (
"VSCODE": 2,
"JETBRAINS": 3,
"RECONNECTING_PTY": 4,
"WORKSPACE_APP": 5,
"PORT_FORWARDING": 6,
}
)
@@ -2858,6 +2864,7 @@ type Connection struct {
Ip string `protobuf:"bytes,5,opt,name=ip,proto3" json:"ip,omitempty"`
StatusCode int32 `protobuf:"varint,6,opt,name=status_code,json=statusCode,proto3" json:"status_code,omitempty"`
Reason *string `protobuf:"bytes,7,opt,name=reason,proto3,oneof" json:"reason,omitempty"`
SlugOrPort *string `protobuf:"bytes,8,opt,name=slug_or_port,json=slugOrPort,proto3,oneof" json:"slug_or_port,omitempty"`
}
func (x *Connection) Reset() {
@@ -2941,6 +2948,13 @@ func (x *Connection) GetReason() string {
return ""
}
func (x *Connection) GetSlugOrPort() string {
if x != nil && x.SlugOrPort != nil {
return *x.SlugOrPort
}
return ""
}
type ReportConnectionRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -5105,7 +5119,7 @@ var file_agent_proto_agent_proto_rawDesc = []byte{
0x74, 0x6f, 0x74, 0x61, 0x6c, 0x42, 0x09, 0x0a, 0x07, 0x5f, 0x6d, 0x65, 0x6d, 0x6f, 0x72, 0x79,
0x22, 0x26, 0x0a, 0x24, 0x50, 0x75, 0x73, 0x68, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
0x73, 0x4d, 0x6f, 0x6e, 0x69, 0x74, 0x6f, 0x72, 0x69, 0x6e, 0x67, 0x55, 0x73, 0x61, 0x67, 0x65,
0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0xb6, 0x03, 0x0a, 0x0a, 0x43, 0x6f, 0x6e,
0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x96, 0x04, 0x0a, 0x0a, 0x43, 0x6f, 0x6e,
0x6e, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0c, 0x52, 0x02, 0x69, 0x64, 0x12, 0x39, 0x0a, 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f,
0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x21, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
@@ -5122,18 +5136,24 @@ var file_agent_proto_agent_proto_rawDesc = []byte{
0x70, 0x12, 0x1f, 0x0a, 0x0b, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x5f, 0x63, 0x6f, 0x64, 0x65,
0x18, 0x06, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0a, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x43, 0x6f,
0x64, 0x65, 0x12, 0x1b, 0x0a, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x07, 0x20, 0x01,
0x28, 0x09, 0x48, 0x00, 0x52, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x88, 0x01, 0x01, 0x22,
0x3d, 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x16, 0x0a, 0x12, 0x41, 0x43, 0x54,
0x49, 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10,
0x00, 0x12, 0x0b, 0x0a, 0x07, 0x43, 0x4f, 0x4e, 0x4e, 0x45, 0x43, 0x54, 0x10, 0x01, 0x12, 0x0e,
0x0a, 0x0a, 0x44, 0x49, 0x53, 0x43, 0x4f, 0x4e, 0x4e, 0x45, 0x43, 0x54, 0x10, 0x02, 0x22, 0x56,
0x0a, 0x04, 0x54, 0x79, 0x70, 0x65, 0x12, 0x14, 0x0a, 0x10, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55,
0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x07, 0x0a, 0x03,
0x53, 0x53, 0x48, 0x10, 0x01, 0x12, 0x0a, 0x0a, 0x06, 0x56, 0x53, 0x43, 0x4f, 0x44, 0x45, 0x10,
0x02, 0x12, 0x0d, 0x0a, 0x09, 0x4a, 0x45, 0x54, 0x42, 0x52, 0x41, 0x49, 0x4e, 0x53, 0x10, 0x03,
0x12, 0x14, 0x0a, 0x10, 0x52, 0x45, 0x43, 0x4f, 0x4e, 0x4e, 0x45, 0x43, 0x54, 0x49, 0x4e, 0x47,
0x5f, 0x50, 0x54, 0x59, 0x10, 0x04, 0x42, 0x09, 0x0a, 0x07, 0x5f, 0x72, 0x65, 0x61, 0x73, 0x6f,
0x6e, 0x22, 0x55, 0x0a, 0x17, 0x52, 0x65, 0x70, 0x6f, 0x72, 0x74, 0x43, 0x6f, 0x6e, 0x6e, 0x65,
0x28, 0x09, 0x48, 0x00, 0x52, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x88, 0x01, 0x01, 0x12,
0x25, 0x0a, 0x0c, 0x73, 0x6c, 0x75, 0x67, 0x5f, 0x6f, 0x72, 0x5f, 0x70, 0x6f, 0x72, 0x74, 0x18,
0x08, 0x20, 0x01, 0x28, 0x09, 0x48, 0x01, 0x52, 0x0a, 0x73, 0x6c, 0x75, 0x67, 0x4f, 0x72, 0x50,
0x6f, 0x72, 0x74, 0x88, 0x01, 0x01, 0x22, 0x3d, 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e,
0x12, 0x16, 0x0a, 0x12, 0x41, 0x43, 0x54, 0x49, 0x4f, 0x4e, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45,
0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x0b, 0x0a, 0x07, 0x43, 0x4f, 0x4e, 0x4e,
0x45, 0x43, 0x54, 0x10, 0x01, 0x12, 0x0e, 0x0a, 0x0a, 0x44, 0x49, 0x53, 0x43, 0x4f, 0x4e, 0x4e,
0x45, 0x43, 0x54, 0x10, 0x02, 0x22, 0x7e, 0x0a, 0x04, 0x54, 0x79, 0x70, 0x65, 0x12, 0x14, 0x0a,
0x10, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45,
0x44, 0x10, 0x00, 0x12, 0x07, 0x0a, 0x03, 0x53, 0x53, 0x48, 0x10, 0x01, 0x12, 0x0a, 0x0a, 0x06,
0x56, 0x53, 0x43, 0x4f, 0x44, 0x45, 0x10, 0x02, 0x12, 0x0d, 0x0a, 0x09, 0x4a, 0x45, 0x54, 0x42,
0x52, 0x41, 0x49, 0x4e, 0x53, 0x10, 0x03, 0x12, 0x14, 0x0a, 0x10, 0x52, 0x45, 0x43, 0x4f, 0x4e,
0x4e, 0x45, 0x43, 0x54, 0x49, 0x4e, 0x47, 0x5f, 0x50, 0x54, 0x59, 0x10, 0x04, 0x12, 0x11, 0x0a,
0x0d, 0x57, 0x4f, 0x52, 0x4b, 0x53, 0x50, 0x41, 0x43, 0x45, 0x5f, 0x41, 0x50, 0x50, 0x10, 0x05,
0x12, 0x13, 0x0a, 0x0f, 0x50, 0x4f, 0x52, 0x54, 0x5f, 0x46, 0x4f, 0x52, 0x57, 0x41, 0x52, 0x44,
0x49, 0x4e, 0x47, 0x10, 0x06, 0x42, 0x09, 0x0a, 0x07, 0x5f, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e,
0x42, 0x0f, 0x0a, 0x0d, 0x5f, 0x73, 0x6c, 0x75, 0x67, 0x5f, 0x6f, 0x72, 0x5f, 0x70, 0x6f, 0x72,
0x74, 0x22, 0x55, 0x0a, 0x17, 0x52, 0x65, 0x70, 0x6f, 0x72, 0x74, 0x43, 0x6f, 0x6e, 0x6e, 0x65,
0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x3a, 0x0a, 0x0a,
0x63, 0x6f, 0x6e, 0x6e, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x1a, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x2e, 0x76,
+3
@@ -364,6 +364,8 @@ message Connection {
VSCODE = 2;
JETBRAINS = 3;
RECONNECTING_PTY = 4;
WORKSPACE_APP = 5;
PORT_FORWARDING = 6;
}
bytes id = 1;
@@ -373,6 +375,7 @@ message Connection {
string ip = 5;
int32 status_code = 6;
optional string reason = 7;
optional string slug_or_port = 8;
}
message ReportConnectionRequest {
@@ -1,4 +1,4 @@
-package cliutil
+package hostname
import (
"os"
+3 -1
@@ -123,7 +123,9 @@ func (r *RootCmd) ping() *serpent.Command {
spin.Start()
}
-opts := &workspacesdk.DialAgentOptions{}
+opts := &workspacesdk.DialAgentOptions{
+	ShortDescription: "CLI ping",
+}
if r.verbose {
opts.Logger = inv.Logger.AppendSinks(sloghuman.Sink(inv.Stdout)).Leveled(slog.LevelDebug)
+3 -1
@@ -107,7 +107,9 @@ func (r *RootCmd) portForward() *serpent.Command {
return xerrors.Errorf("await agent: %w", err)
}
-opts := &workspacesdk.DialAgentOptions{}
+opts := &workspacesdk.DialAgentOptions{
+	ShortDescription: "CLI port-forward",
+}
logger := inv.Logger
if r.verbose {
+2 -2
@@ -59,7 +59,7 @@ import (
"github.com/coder/coder/v2/buildinfo"
"github.com/coder/coder/v2/cli/clilog"
"github.com/coder/coder/v2/cli/cliui"
-"github.com/coder/coder/v2/cli/cliutil"
+"github.com/coder/coder/v2/cli/cliutil/hostname"
"github.com/coder/coder/v2/cli/config"
"github.com/coder/coder/v2/coderd"
"github.com/coder/coder/v2/coderd/autobuild"
@@ -1029,7 +1029,7 @@ func (r *RootCmd) Server(newAPI func(context.Context, *coderd.Options) (*coderd.
suffix := fmt.Sprintf("%d", i)
// The suffix is added to the hostname, so we may need to trim to fit into
// the 64 character limit.
-hostname := stringutil.Truncate(cliutil.Hostname(), 63-len(suffix))
+hostname := stringutil.Truncate(hostname.Hostname(), 63-len(suffix))
name := fmt.Sprintf("%s-%s", hostname, suffix)
daemonCacheDir := filepath.Join(cacheDir, fmt.Sprintf("provisioner-%d", i))
daemon, err := newProvisionerDaemon(
+3 -1
@@ -97,7 +97,9 @@ func (r *RootCmd) speedtest() *serpent.Command {
return xerrors.Errorf("await agent: %w", err)
}
-opts := &workspacesdk.DialAgentOptions{}
+opts := &workspacesdk.DialAgentOptions{
+	ShortDescription: "CLI speedtest",
+}
if r.verbose {
opts.Logger = inv.Logger.AppendSinks(sloghuman.Sink(inv.Stderr)).Leveled(slog.LevelDebug)
}
+8 -3
@@ -365,6 +365,10 @@ func (r *RootCmd) ssh() *serpent.Command {
}
return err
}
shortDescription := "CLI ssh"
if stdio {
shortDescription = "CLI ssh (stdio)"
}
// If we're in stdio mode, check to see if we can use Coder Connect.
// We don't support Coder Connect over non-stdio coder ssh yet.
@@ -405,9 +409,10 @@ func (r *RootCmd) ssh() *serpent.Command {
}
conn, err := wsClient.
DialAgent(ctx, workspaceAgent.ID, &workspacesdk.DialAgentOptions{
-Logger: logger,
-BlockEndpoints: r.disableDirect,
-EnableTelemetry: !r.disableNetworkTelemetry,
+Logger: logger,
+BlockEndpoints: r.disableDirect,
+EnableTelemetry: !r.disableNetworkTelemetry,
+ShortDescription: shortDescription,
})
if err != nil {
return xerrors.Errorf("dial agent: %w", err)
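The hunks above all repeat one pattern: each CLI entrypoint (ping, port-forward, speedtest, ssh) now labels its dial with a `ShortDescription`. A minimal self-contained sketch of that pattern, using a simplified stand-in for the workspacesdk types — `dialAgent` and its return value are illustrative, not the real API:

```go
package main

import "fmt"

// DialAgentOptions is a simplified stand-in for the workspacesdk options
// struct extended by this change; only the new field is modeled here.
type DialAgentOptions struct {
	ShortDescription string
}

// dialAgent is a hypothetical dial function that reports which label the
// caller supplied, mirroring how the real dial threads it to the agent.
func dialAgent(opts *DialAgentOptions) string {
	if opts.ShortDescription == "" {
		return "dialing agent (no description)"
	}
	return fmt.Sprintf("dialing agent as %q", opts.ShortDescription)
}

func main() {
	// Mirrors the ssh hunk: stdio mode picks a more specific label.
	stdio := true
	shortDescription := "CLI ssh"
	if stdio {
		shortDescription = "CLI ssh (stdio)"
	}
	fmt.Println(dialAgent(&DialAgentOptions{ShortDescription: shortDescription}))
}
```

The label later surfaces in connection logs as `short_description`, which is what makes per-client session grouping possible.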
+1
@@ -418,6 +418,7 @@ func writeBundle(src *support.Bundle, dest *zip.Writer) error {
"workspace/template_version.json": src.Workspace.TemplateVersion,
"workspace/parameters.json": src.Workspace.Parameters,
"workspace/workspace.json": src.Workspace.Workspace,
"workspace/workspace_sessions.json": src.Workspace.WorkspaceSessions,
} {
f, err := dest.Create(k)
if err != nil {
+3 -2
@@ -166,8 +166,9 @@ func (r *RootCmd) vscodeSSH() *serpent.Command {
}
agentConn, err := workspacesdk.New(client).
DialAgent(ctx, workspaceAgent.ID, &workspacesdk.DialAgentOptions{
-Logger: logger,
-BlockEndpoints: r.disableDirect,
+Logger: logger,
+BlockEndpoints: r.disableDirect,
+ShortDescription: "VSCode SSH",
})
if err != nil {
return xerrors.Errorf("dial workspace agent: %w", err)
+7 -5
@@ -202,11 +202,13 @@ func New(opts Options, workspace database.Workspace) *API {
}
api.ConnLogAPI = &ConnLogAPI{
-AgentFn: api.agent,
-ConnectionLogger: opts.ConnectionLogger,
-Database: opts.Database,
-Workspace: api.cachedWorkspaceFields,
-Log: opts.Log,
+AgentFn: api.agent,
+ConnectionLogger: opts.ConnectionLogger,
+TailnetCoordinator: opts.TailnetCoordinator,
+Database: opts.Database,
+Workspace: api.cachedWorkspaceFields,
+Log: opts.Log,
+PublishWorkspaceUpdateFn: api.publishWorkspaceUpdate,
}
api.DRPCService = &tailnet.DRPCService{
+132 -8
@@ -3,6 +3,8 @@ package agentapi
import (
"context"
"database/sql"
"fmt"
"net/netip"
"sync/atomic"
"github.com/google/uuid"
@@ -15,14 +17,18 @@ import (
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/db2sdk"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/wspubsub"
"github.com/coder/coder/v2/tailnet"
)
type ConnLogAPI struct {
-AgentFn func(context.Context) (database.WorkspaceAgent, error)
-ConnectionLogger *atomic.Pointer[connectionlog.ConnectionLogger]
-Workspace *CachedWorkspaceFields
-Database database.Store
-Log slog.Logger
+AgentFn func(context.Context) (database.WorkspaceAgent, error)
+ConnectionLogger *atomic.Pointer[connectionlog.ConnectionLogger]
+TailnetCoordinator *atomic.Pointer[tailnet.Coordinator]
+Workspace *CachedWorkspaceFields
+Database database.Store
+Log slog.Logger
+PublishWorkspaceUpdateFn func(context.Context, *database.WorkspaceAgent, wspubsub.WorkspaceEventKind) error
}
func (a *ConnLogAPI) ReportConnection(ctx context.Context, req *agentproto.ReportConnectionRequest) (*emptypb.Empty, error) {
@@ -88,6 +94,35 @@ func (a *ConnLogAPI) ReportConnection(ctx context.Context, req *agentproto.Repor
}
logIP := database.ParseIP(logIPRaw) // will return null if invalid
// At connect time, look up the tailnet peer to capture the
// client hostname and description for session grouping later.
var clientHostname, shortDescription sql.NullString
if action == database.ConnectionStatusConnected && a.TailnetCoordinator != nil {
if coord := a.TailnetCoordinator.Load(); coord != nil {
for _, peer := range (*coord).TunnelPeers(workspaceAgent.ID) {
if peer.Node != nil {
// Match peer by checking if any of its addresses
// match the connection IP.
for _, addr := range peer.Node.Addresses {
prefix, err := netip.ParsePrefix(addr)
if err != nil {
continue
}
if logIP.Valid && prefix.Addr().String() == logIP.IPNet.IP.String() {
if peer.Node.Hostname != "" {
clientHostname = sql.NullString{String: peer.Node.Hostname, Valid: true}
}
if peer.Node.ShortDescription != "" {
shortDescription = sql.NullString{String: peer.Node.ShortDescription, Valid: true}
}
break
}
}
}
}
}
}
reason := req.GetConnection().GetReason()
connLogger := *a.ConnectionLogger.Load()
err = connLogger.Upsert(ctx, database.UpsertConnectionLogParams{
@@ -98,6 +133,7 @@ func (a *ConnLogAPI) ReportConnection(ctx context.Context, req *agentproto.Repor
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: workspaceAgent.Name,
AgentID: uuid.NullUUID{UUID: workspaceAgent.ID, Valid: true},
Type: connectionType,
Code: code,
Ip: logIP,
@@ -109,6 +145,7 @@ func (a *ConnLogAPI) ReportConnection(ctx context.Context, req *agentproto.Repor
String: reason,
Valid: reason != "",
},
SessionID: uuid.NullUUID{},
// We supply the action:
// - So the DB can handle duplicate connections or disconnections properly.
// - To make it clear whether this is a connection or disconnection
@@ -121,13 +158,100 @@ func (a *ConnLogAPI) ReportConnection(ctx context.Context, req *agentproto.Repor
Valid: false,
},
// N/A
UserAgent: sql.NullString{},
// N/A
SlugOrPort: sql.NullString{},
UserAgent: sql.NullString{},
ClientHostname: clientHostname,
ShortDescription: shortDescription,
SlugOrPort: sql.NullString{
String: req.GetConnection().GetSlugOrPort(),
Valid: req.GetConnection().GetSlugOrPort() != "",
},
})
if err != nil {
return nil, xerrors.Errorf("export connection log: %w", err)
}
// At disconnect time, find or create a session for this connection.
// This groups related connection logs into workspace sessions.
if action == database.ConnectionStatusDisconnected {
a.assignSessionForDisconnect(ctx, connectionID, ws, workspaceAgent, req)
}
if a.PublishWorkspaceUpdateFn != nil {
if err := a.PublishWorkspaceUpdateFn(ctx, &workspaceAgent, wspubsub.WorkspaceEventKindConnectionLogUpdate); err != nil {
a.Log.Warn(ctx, "failed to publish connection log update", slog.Error(err))
}
}
return &emptypb.Empty{}, nil
}
// assignSessionForDisconnect looks up the existing connection log for this
// connection ID and finds or creates a session to group it with.
func (a *ConnLogAPI) assignSessionForDisconnect(
ctx context.Context,
connectionID uuid.UUID,
ws database.WorkspaceIdentity,
workspaceAgent database.WorkspaceAgent,
req *agentproto.ReportConnectionRequest,
) {
//nolint:gocritic // The agent context doesn't have connection_log
// permissions. Session creation is authorized by the workspace
// access already validated in ReportConnection.
ctx = dbauthz.AsConnectionLogger(ctx)
existingLog, err := a.Database.GetConnectionLogByConnectionID(ctx, database.GetConnectionLogByConnectionIDParams{
ConnectionID: uuid.NullUUID{UUID: connectionID, Valid: true},
WorkspaceID: ws.ID,
AgentName: workspaceAgent.Name,
})
if err != nil {
a.Log.Warn(ctx, "failed to look up connection log for session assignment",
slog.Error(err),
slog.F("connection_id", connectionID),
)
return
}
sessionIDRaw, err := a.Database.FindOrCreateSessionForDisconnect(ctx, database.FindOrCreateSessionForDisconnectParams{
WorkspaceID: ws.ID.String(),
Ip: existingLog.Ip,
ClientHostname: existingLog.ClientHostname,
ShortDescription: existingLog.ShortDescription,
ConnectTime: existingLog.ConnectTime,
DisconnectTime: req.GetConnection().GetTimestamp().AsTime(),
AgentID: uuid.NullUUID{UUID: workspaceAgent.ID, Valid: true},
})
if err != nil {
a.Log.Warn(ctx, "failed to find or create session for disconnect",
slog.Error(err),
slog.F("connection_id", connectionID),
)
return
}
// The query uses COALESCE which returns a generic type. The
// database/sql driver may return the UUID as a string, []byte,
// or [16]byte rather than uuid.UUID, so we parse it.
sessionID, parseErr := uuid.Parse(fmt.Sprintf("%s", sessionIDRaw))
if parseErr != nil {
a.Log.Warn(ctx, "failed to parse session ID from FindOrCreateSessionForDisconnect",
slog.Error(parseErr),
slog.F("connection_id", connectionID),
slog.F("session_id_raw", sessionIDRaw),
slog.F("session_id_type", fmt.Sprintf("%T", sessionIDRaw)),
)
return
}
// Link the connection log to its session so that
// CloseConnectionLogsAndCreateSessions skips it.
if err := a.Database.UpdateConnectionLogSessionID(ctx, database.UpdateConnectionLogSessionIDParams{
ID: existingLog.ID,
SessionID: uuid.NullUUID{UUID: sessionID, Valid: true},
}); err != nil {
a.Log.Warn(ctx, "failed to update connection log session ID",
slog.Error(err),
slog.F("connection_id", connectionID),
)
}
}
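The comment in `assignSessionForDisconnect` notes that COALESCE leaves the driver's return type unpredictable: the session ID may arrive as a `string`, `[]byte`, or `[16]byte`. A self-contained sketch of one way to normalize those three shapes into the canonical 36-character form — this illustrates the problem, it is not the code in the diff, which instead round-trips through `fmt.Sprintf` and `uuid.Parse`:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// normalizeUUID converts the dynamic value a database/sql driver may
// return for a COALESCE'd UUID column into canonical hyphenated form.
// Illustrative only; function name and shapes are assumptions.
func normalizeUUID(raw any) (string, error) {
	switch v := raw.(type) {
	case string:
		return v, nil
	case []byte:
		if len(v) == 16 {
			return hyphenate(v), nil
		}
		return string(v), nil // already textual
	case [16]byte:
		return hyphenate(v[:]), nil
	default:
		return "", fmt.Errorf("unsupported uuid type %T", raw)
	}
}

func hyphenate(b []byte) string {
	h := hex.EncodeToString(b) // 32 hex chars for 16 bytes
	return h[0:8] + "-" + h[8:12] + "-" + h[12:16] + "-" + h[16:20] + "-" + h[20:32]
}

func main() {
	s, err := normalizeUUID([16]byte{0xb4, 0xcb})
	fmt.Println(s, err)
}
```

Parsing via `uuid.Parse` as the diff does has the advantage of also validating the value, at the cost of the `%T`-logging fallback seen above when the format is unexpected.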
+105 -8
@@ -19,6 +19,7 @@ import (
"github.com/coder/coder/v2/coderd/database/db2sdk"
"github.com/coder/coder/v2/coderd/database/dbmock"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/wspubsub"
)
func TestConnectionLog(t *testing.T) {
@@ -41,14 +42,15 @@ func TestConnectionLog(t *testing.T) {
)
tests := []struct {
-name string
-id uuid.UUID
-action *agentproto.Connection_Action
-typ *agentproto.Connection_Type
-time time.Time
-ip string
-status int32
-reason string
+name string
+id uuid.UUID
+action *agentproto.Connection_Action
+typ *agentproto.Connection_Type
+time time.Time
+ip string
+status int32
+reason string
+slugOrPort string
}{
{
name: "SSH Connect",
@@ -84,6 +86,34 @@ func TestConnectionLog(t *testing.T) {
typ: agentproto.Connection_RECONNECTING_PTY.Enum(),
time: dbtime.Now(),
},
{
name: "Port Forwarding Connect",
id: uuid.New(),
action: agentproto.Connection_CONNECT.Enum(),
typ: agentproto.Connection_PORT_FORWARDING.Enum(),
time: dbtime.Now(),
ip: "192.168.1.1",
slugOrPort: "8080",
},
{
name: "Port Forwarding Disconnect",
id: uuid.New(),
action: agentproto.Connection_DISCONNECT.Enum(),
typ: agentproto.Connection_PORT_FORWARDING.Enum(),
time: dbtime.Now(),
ip: "192.168.1.1",
status: 200,
slugOrPort: "8080",
},
{
name: "Workspace App Connect",
id: uuid.New(),
action: agentproto.Connection_CONNECT.Enum(),
typ: agentproto.Connection_WORKSPACE_APP.Enum(),
time: dbtime.Now(),
ip: "10.0.0.1",
slugOrPort: "my-app",
},
{
name: "SSH Disconnect",
id: uuid.New(),
@@ -110,6 +140,10 @@ func TestConnectionLog(t *testing.T) {
mDB := dbmock.NewMockStore(gomock.NewController(t))
mDB.EXPECT().GetWorkspaceByAgentID(gomock.Any(), agent.ID).Return(workspace, nil)
// Disconnect actions trigger session assignment which calls
// GetConnectionLogByConnectionID and FindOrCreateSessionForDisconnect.
mDB.EXPECT().GetConnectionLogByConnectionID(gomock.Any(), gomock.Any()).Return(database.ConnectionLog{}, nil).AnyTimes()
mDB.EXPECT().FindOrCreateSessionForDisconnect(gomock.Any(), gomock.Any()).Return(database.WorkspaceSession{}, nil).AnyTimes()
api := &agentapi.ConnLogAPI{
ConnectionLogger: asAtomicPointer[connectionlog.ConnectionLogger](connLogger),
@@ -128,6 +162,7 @@ func TestConnectionLog(t *testing.T) {
Ip: tt.ip,
StatusCode: tt.status,
Reason: &tt.reason,
SlugOrPort: &tt.slugOrPort,
},
})
@@ -144,6 +179,7 @@ func TestConnectionLog(t *testing.T) {
WorkspaceID: workspace.ID,
WorkspaceName: workspace.Name,
AgentName: agent.Name,
AgentID: uuid.NullUUID{UUID: agent.ID, Valid: true},
UserID: uuid.NullUUID{
UUID: uuid.Nil,
Valid: false,
@@ -164,11 +200,72 @@ func TestConnectionLog(t *testing.T) {
UUID: tt.id,
Valid: tt.id != uuid.Nil,
},
SlugOrPort: sql.NullString{
String: tt.slugOrPort,
Valid: tt.slugOrPort != "",
},
}))
})
}
}
func TestConnectionLogPublishesWorkspaceUpdate(t *testing.T) {
t.Parallel()
var (
owner = database.User{ID: uuid.New(), Username: "cool-user"}
workspace = database.Workspace{
ID: uuid.New(),
OrganizationID: uuid.New(),
OwnerID: owner.ID,
Name: "cool-workspace",
}
agent = database.WorkspaceAgent{ID: uuid.New()}
)
connLogger := connectionlog.NewFake()
mDB := dbmock.NewMockStore(gomock.NewController(t))
mDB.EXPECT().GetWorkspaceByAgentID(gomock.Any(), agent.ID).Return(workspace, nil)
var (
called int
gotKind wspubsub.WorkspaceEventKind
gotAgent uuid.UUID
)
api := &agentapi.ConnLogAPI{
ConnectionLogger: asAtomicPointer[connectionlog.ConnectionLogger](connLogger),
Database: mDB,
AgentFn: func(context.Context) (database.WorkspaceAgent, error) {
return agent, nil
},
Workspace: &agentapi.CachedWorkspaceFields{},
PublishWorkspaceUpdateFn: func(ctx context.Context, agent *database.WorkspaceAgent, kind wspubsub.WorkspaceEventKind) error {
called++
gotKind = kind
gotAgent = agent.ID
return nil
},
}
id := uuid.New()
_, err := api.ReportConnection(context.Background(), &agentproto.ReportConnectionRequest{
Connection: &agentproto.Connection{
Id: id[:],
Action: agentproto.Connection_CONNECT,
Type: agentproto.Connection_SSH,
Timestamp: timestamppb.New(dbtime.Now()),
Ip: "127.0.0.1",
},
})
require.NoError(t, err)
require.Equal(t, 1, called)
require.Equal(t, wspubsub.WorkspaceEventKindConnectionLogUpdate, gotKind)
require.Equal(t, agent.ID, gotAgent)
}
func agentProtoConnectionTypeToConnectionLog(t *testing.T, typ agentproto.Connection_Type) database.ConnectionType {
a, err := db2sdk.ConnectionLogConnectionTypeFromAgentProtoConnectionType(typ)
require.NoError(t, err)
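`TestConnectionLogPublishesWorkspaceUpdate` above exercises a hook-injection pattern: `PublishWorkspaceUpdateFn` is a plain function field, so the test can count invocations without standing up a real pubsub. A minimal sketch of that pattern — the types here are simplified stand-ins for `wspubsub` and the API struct, not the real ones:

```go
package main

import "fmt"

// EventKind stands in for wspubsub.WorkspaceEventKind.
type EventKind string

const EventConnectionLogUpdate EventKind = "connection_log_update"

// API models only the hook field relevant to the test above.
type API struct {
	PublishWorkspaceUpdateFn func(kind EventKind) error
}

// ReportConnection notifies listeners after recording a connection,
// tolerating both a missing hook and a failing one, as the handler does.
func (a *API) ReportConnection() {
	if a.PublishWorkspaceUpdateFn == nil {
		return
	}
	if err := a.PublishWorkspaceUpdateFn(EventConnectionLogUpdate); err != nil {
		fmt.Println("publish failed:", err)
	}
}

func main() {
	var called int
	api := &API{PublishWorkspaceUpdateFn: func(kind EventKind) error {
		called++
		return nil
	}}
	api.ReportConnection()
	fmt.Println("publish hook called", called, "time(s)")
}
```

Keeping the hook nilable is what lets older call sites construct the API without wiring a publisher.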
+889 -2
@@ -499,6 +499,92 @@ const docTemplate = `{
}
}
},
"/connectionlog/diagnostics/{username}": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": [
"application/json"
],
"tags": [
"Enterprise"
],
"summary": "Get user diagnostic report",
"operationId": "get-user-diagnostic-report",
"parameters": [
{
"type": "string",
"description": "Username",
"name": "username",
"in": "path",
"required": true
},
{
"type": "integer",
"description": "Hours to look back (default 72, max 168)",
"name": "hours",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/codersdk.UserDiagnosticResponse"
}
}
}
}
},
"/connectionlog/sessions": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": [
"application/json"
],
"tags": [
"Enterprise"
],
"summary": "Get global workspace sessions",
"operationId": "get-global-workspace-sessions",
"parameters": [
{
"type": "string",
"description": "Search query",
"name": "q",
"in": "query"
},
{
"type": "integer",
"description": "Page limit",
"name": "limit",
"in": "query",
"required": true
},
{
"type": "integer",
"description": "Page offset",
"name": "offset",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/codersdk.GlobalWorkspaceSessionsResponse"
}
}
}
}
},
"/csp/reports": {
"post": {
"security": [
@@ -11675,6 +11761,53 @@ const docTemplate = `{
}
}
},
"/workspaces/{workspace}/sessions": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": [
"application/json"
],
"tags": [
"Workspaces"
],
"summary": "Get workspace sessions",
"operationId": "get-workspace-sessions",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Workspace ID",
"name": "workspace",
"in": "path",
"required": true
},
{
"type": "integer",
"description": "Page limit",
"name": "limit",
"in": "query"
},
{
"type": "integer",
"description": "Page offset",
"name": "offset",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/codersdk.WorkspaceSessionsResponse"
}
}
}
}
},
"/workspaces/{workspace}/timings": {
"get": {
"security": [
@@ -13463,6 +13596,21 @@ const docTemplate = `{
}
}
},
"codersdk.ConnectionDiagnosticSeverity": {
"type": "string",
"enum": [
"info",
"warning",
"error",
"critical"
],
"x-enum-varnames": [
"ConnectionDiagnosticSeverityInfo",
"ConnectionDiagnosticSeverityWarning",
"ConnectionDiagnosticSeverityError",
"ConnectionDiagnosticSeverityCritical"
]
},
"codersdk.ConnectionLatency": {
"type": "object",
"properties": {
@@ -13598,7 +13746,8 @@ const docTemplate = `{
"jetbrains",
"reconnecting_pty",
"workspace_app",
-"port_forwarding"
+"port_forwarding",
+"system"
],
"x-enum-varnames": [
"ConnectionTypeSSH",
@@ -13606,7 +13755,8 @@ const docTemplate = `{
"ConnectionTypeJetBrains",
"ConnectionTypeReconnectingPTY",
"ConnectionTypeWorkspaceApp",
-"ConnectionTypePortForwarding"
+"ConnectionTypePortForwarding",
+"ConnectionTypeSystem"
]
},
"codersdk.ConvertLoginRequest": {
@@ -14754,6 +14904,74 @@ const docTemplate = `{
}
}
},
"codersdk.DiagnosticConnection": {
"type": "object",
"properties": {
"agent_id": {
"type": "string",
"format": "uuid"
},
"agent_name": {
"type": "string"
},
"client_hostname": {
"type": "string"
},
"detail": {
"type": "string"
},
"explanation": {
"type": "string"
},
"home_derp": {
"$ref": "#/definitions/codersdk.DiagnosticHomeDERP"
},
"id": {
"type": "string",
"format": "uuid"
},
"ip": {
"type": "string"
},
"latency_ms": {
"type": "number"
},
"p2p": {
"type": "boolean"
},
"short_description": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"type": {
"$ref": "#/definitions/codersdk.ConnectionType"
},
"workspace_id": {
"type": "string",
"format": "uuid"
},
"workspace_name": {
"type": "string"
}
}
},
"codersdk.DiagnosticDurationRange": {
"type": "object",
"properties": {
"max_seconds": {
"type": "number"
},
"min_seconds": {
"type": "number"
}
}
},
"codersdk.DiagnosticExtra": {
"type": "object",
"properties": {
@@ -14762,6 +14980,246 @@ const docTemplate = `{
}
}
},
"codersdk.DiagnosticHealth": {
"type": "string",
"enum": [
"healthy",
"degraded",
"unhealthy",
"inactive"
],
"x-enum-varnames": [
"DiagnosticHealthHealthy",
"DiagnosticHealthDegraded",
"DiagnosticHealthUnhealthy",
"DiagnosticHealthInactive"
]
},
"codersdk.DiagnosticHomeDERP": {
"type": "object",
"properties": {
"id": {
"type": "integer"
},
"name": {
"type": "string"
}
}
},
"codersdk.DiagnosticNetworkSummary": {
"type": "object",
"properties": {
"avg_latency_ms": {
"type": "number"
},
"derp_connections": {
"type": "integer"
},
"p2p_connections": {
"type": "integer"
},
"p95_latency_ms": {
"type": "number"
},
"primary_derp_region": {
"type": "string"
}
}
},
"codersdk.DiagnosticPattern": {
"type": "object",
"properties": {
"affected_sessions": {
"type": "integer"
},
"commonalities": {
"$ref": "#/definitions/codersdk.DiagnosticPatternCommonality"
},
"description": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"recommendation": {
"type": "string"
},
"severity": {
"$ref": "#/definitions/codersdk.ConnectionDiagnosticSeverity"
},
"title": {
"type": "string"
},
"total_sessions": {
"type": "integer"
},
"type": {
"$ref": "#/definitions/codersdk.DiagnosticPatternType"
}
}
},
"codersdk.DiagnosticPatternCommonality": {
"type": "object",
"properties": {
"client_descriptions": {
"type": "array",
"items": {
"type": "string"
}
},
"connection_types": {
"type": "array",
"items": {
"type": "string"
}
},
"disconnect_reasons": {
"type": "array",
"items": {
"type": "string"
}
},
"duration_range": {
"$ref": "#/definitions/codersdk.DiagnosticDurationRange"
},
"time_of_day_range": {
"type": "string"
}
}
},
"codersdk.DiagnosticPatternType": {
"type": "string",
"enum": [
"device_sleep",
"workspace_autostart",
"network_policy",
"agent_crash",
"latency_degradation",
"derp_fallback",
"clean_usage",
"unknown_drops"
],
"x-enum-varnames": [
"DiagnosticPatternDeviceSleep",
"DiagnosticPatternWorkspaceAutostart",
"DiagnosticPatternNetworkPolicy",
"DiagnosticPatternAgentCrash",
"DiagnosticPatternLatencyDegradation",
"DiagnosticPatternDERPFallback",
"DiagnosticPatternCleanUsage",
"DiagnosticPatternUnknownDrops"
]
},
"codersdk.DiagnosticSession": {
"type": "object",
"properties": {
"agent_name": {
"type": "string"
},
"client_hostname": {
"type": "string"
},
"connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticSessionConn"
}
},
"disconnect_reason": {
"type": "string"
},
"duration_seconds": {
"type": "number"
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"explanation": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"ip": {
"type": "string"
},
"network": {
"$ref": "#/definitions/codersdk.DiagnosticSessionNetwork"
},
"short_description": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"timeline": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticTimelineEvent"
}
},
"workspace_id": {
"type": "string",
"format": "uuid"
},
"workspace_name": {
"type": "string"
}
}
},
"codersdk.DiagnosticSessionConn": {
"type": "object",
"properties": {
"connected_at": {
"type": "string",
"format": "date-time"
},
"detail": {
"type": "string"
},
"disconnected_at": {
"type": "string",
"format": "date-time"
},
"exit_code": {
"type": "integer"
},
"explanation": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"type": {
"$ref": "#/definitions/codersdk.ConnectionType"
}
}
},
"codersdk.DiagnosticSessionNetwork": {
"type": "object",
"properties": {
"avg_latency_ms": {
"type": "number"
},
"home_derp": {
"type": "string"
},
"p2p": {
"type": "boolean"
}
}
},
"codersdk.DiagnosticSeverityString": {
"type": "string",
"enum": [
@@ -14773,6 +15231,193 @@ const docTemplate = `{
"DiagnosticSeverityWarning"
]
},
"codersdk.DiagnosticStatusBreakdown": {
"type": "object",
"properties": {
"clean": {
"type": "integer"
},
"lost": {
"type": "integer"
},
"ongoing": {
"type": "integer"
},
"workspace_deleted": {
"type": "integer"
},
"workspace_stopped": {
"type": "integer"
}
}
},
"codersdk.DiagnosticSummary": {
"type": "object",
"properties": {
"active_connections": {
"type": "integer"
},
"by_status": {
"$ref": "#/definitions/codersdk.DiagnosticStatusBreakdown"
},
"by_type": {
"type": "object",
"additionalProperties": {
"type": "integer"
}
},
"headline": {
"type": "string"
},
"network": {
"$ref": "#/definitions/codersdk.DiagnosticNetworkSummary"
},
"total_connections": {
"type": "integer"
},
"total_sessions": {
"type": "integer"
}
}
},
"codersdk.DiagnosticTimeWindow": {
"type": "object",
"properties": {
"end": {
"type": "string",
"format": "date-time"
},
"hours": {
"type": "integer"
},
"start": {
"type": "string",
"format": "date-time"
}
}
},
"codersdk.DiagnosticTimelineEvent": {
"type": "object",
"properties": {
"description": {
"type": "string"
},
"kind": {
"$ref": "#/definitions/codersdk.DiagnosticTimelineEventKind"
},
"metadata": {
"type": "object",
"additionalProperties": {}
},
"severity": {
"$ref": "#/definitions/codersdk.ConnectionDiagnosticSeverity"
},
"timestamp": {
"type": "string",
"format": "date-time"
}
}
},
"codersdk.DiagnosticTimelineEventKind": {
"type": "string",
"enum": [
"tunnel_created",
"tunnel_removed",
"node_update",
"peer_lost",
"peer_recovered",
"connection_opened",
"connection_closed",
"derp_fallback",
"p2p_established",
"latency_spike",
"workspace_state_change"
],
"x-enum-varnames": [
"DiagnosticTimelineEventTunnelCreated",
"DiagnosticTimelineEventTunnelRemoved",
"DiagnosticTimelineEventNodeUpdate",
"DiagnosticTimelineEventPeerLost",
"DiagnosticTimelineEventPeerRecovered",
"DiagnosticTimelineEventConnectionOpened",
"DiagnosticTimelineEventConnectionClosed",
"DiagnosticTimelineEventDERPFallback",
"DiagnosticTimelineEventP2PEstablished",
"DiagnosticTimelineEventLatencySpike",
"DiagnosticTimelineEventWorkspaceStateChange"
]
},
"codersdk.DiagnosticUser": {
"type": "object",
"properties": {
"avatar_url": {
"type": "string"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"email": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"last_seen_at": {
"type": "string",
"format": "date-time"
},
"name": {
"type": "string"
},
"roles": {
"type": "array",
"items": {
"type": "string"
}
},
"username": {
"type": "string"
}
}
},
"codersdk.DiagnosticWorkspace": {
"type": "object",
"properties": {
"health": {
"$ref": "#/definitions/codersdk.DiagnosticHealth"
},
"health_reason": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"name": {
"type": "string"
},
"owner_username": {
"type": "string"
},
"sessions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticSession"
}
},
"status": {
"type": "string"
},
"template_display_name": {
"type": "string"
},
"template_name": {
"type": "string"
}
}
},
"codersdk.DisplayApp": {
"type": "string",
"enum": [
@@ -15269,6 +15914,65 @@ const docTemplate = `{
}
}
},
"codersdk.GlobalWorkspaceSession": {
"type": "object",
"properties": {
"client_hostname": {
"type": "string"
},
"connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceConnection"
}
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"id": {
"description": "nil for live sessions",
"type": "string"
},
"ip": {
"type": "string"
},
"short_description": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"workspace_id": {
"type": "string",
"format": "uuid"
},
"workspace_name": {
"type": "string"
},
"workspace_owner_username": {
"type": "string"
}
}
},
"codersdk.GlobalWorkspaceSessionsResponse": {
"type": "object",
"properties": {
"count": {
"type": "integer"
},
"sessions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.GlobalWorkspaceSession"
}
}
}
},
"codersdk.Group": {
"type": "object",
"properties": {
@@ -20199,6 +20903,42 @@ const docTemplate = `{
}
}
},
"codersdk.UserDiagnosticResponse": {
"type": "object",
"properties": {
"current_connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticConnection"
}
},
"generated_at": {
"type": "string",
"format": "date-time"
},
"patterns": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticPattern"
}
},
"summary": {
"$ref": "#/definitions/codersdk.DiagnosticSummary"
},
"time_window": {
"$ref": "#/definitions/codersdk.DiagnosticTimeWindow"
},
"user": {
"$ref": "#/definitions/codersdk.DiagnosticUser"
},
"workspaces": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticWorkspace"
}
}
}
},
"codersdk.UserLatency": {
"type": "object",
"properties": {
@@ -20699,6 +21439,12 @@ const docTemplate = `{
"$ref": "#/definitions/codersdk.WorkspaceAgentScript"
}
},
"sessions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceSession"
}
},
"started_at": {
"type": "string",
"format": "date-time"
@@ -21509,6 +22255,83 @@ const docTemplate = `{
}
}
},
"codersdk.WorkspaceConnection": {
"type": "object",
"properties": {
"client_hostname": {
"description": "ClientHostname is the hostname of the client that connected to the agent. Self-reported by the client.",
"type": "string"
},
"connected_at": {
"type": "string",
"format": "date-time"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"detail": {
"description": "Detail is the app slug or port number for workspace_app and port_forwarding connections.",
"type": "string"
},
"disconnect_reason": {
"description": "DisconnectReason is the reason the connection was closed.",
"type": "string"
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"exit_code": {
"description": "ExitCode is the exit code of the SSH session.",
"type": "integer"
},
"home_derp": {
"description": "HomeDERP is the DERP region metadata for the agent's home relay.",
"allOf": [
{
"$ref": "#/definitions/codersdk.WorkspaceConnectionHomeDERP"
}
]
},
"ip": {
"type": "string"
},
"latency_ms": {
"description": "LatencyMS is the most recent round-trip latency in\nmilliseconds. Uses P2P latency when direct, DERP otherwise.",
"type": "number"
},
"p2p": {
"description": "P2P indicates a direct peer-to-peer connection (true) or\nDERP relay (false). Nil if telemetry unavailable.",
"type": "boolean"
},
"short_description": {
"description": "ShortDescription is the human-readable short description of the connection. Self-reported by the client.",
"type": "string"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"type": {
"$ref": "#/definitions/codersdk.ConnectionType"
},
"user_agent": {
"description": "UserAgent is the HTTP user agent string from web connections.",
"type": "string"
}
}
},
"codersdk.WorkspaceConnectionHomeDERP": {
"type": "object",
"properties": {
"id": {
"type": "integer"
},
"name": {
"type": "string"
}
}
},
"codersdk.WorkspaceConnectionLatencyMS": {
"type": "object",
"properties": {
@@ -21522,6 +22345,21 @@ const docTemplate = `{
}
}
},
"codersdk.WorkspaceConnectionStatus": {
"type": "string",
"enum": [
"ongoing",
"control_lost",
"client_disconnected",
"clean_disconnected"
],
"x-enum-varnames": [
"ConnectionStatusOngoing",
"ConnectionStatusControlLost",
"ConnectionStatusClientDisconnected",
"ConnectionStatusCleanDisconnected"
]
},
"codersdk.WorkspaceDeploymentStats": {
"type": "object",
"properties": {
@@ -21796,6 +22634,55 @@ const docTemplate = `{
"WorkspaceRoleDeleted"
]
},
"codersdk.WorkspaceSession": {
"type": "object",
"properties": {
"client_hostname": {
"type": "string"
},
"connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceConnection"
}
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"id": {
"description": "nil for live sessions",
"type": "string"
},
"ip": {
"type": "string"
},
"short_description": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
}
}
},
"codersdk.WorkspaceSessionsResponse": {
"type": "object",
"properties": {
"count": {
"type": "integer"
},
"sessions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceSession"
}
}
}
},
"codersdk.WorkspaceSharingSettings": {
"type": "object",
"properties": {
@@ -428,6 +428,84 @@
}
}
},
"/connectionlog/diagnostics/{username}": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": ["application/json"],
"tags": ["Enterprise"],
"summary": "Get user diagnostic report",
"operationId": "get-user-diagnostic-report",
"parameters": [
{
"type": "string",
"description": "Username",
"name": "username",
"in": "path",
"required": true
},
{
"type": "integer",
"description": "Hours to look back (default 72, max 168)",
"name": "hours",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/codersdk.UserDiagnosticResponse"
}
}
}
}
},
"/connectionlog/sessions": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": ["application/json"],
"tags": ["Enterprise"],
"summary": "Get global workspace sessions",
"operationId": "get-global-workspace-sessions",
"parameters": [
{
"type": "string",
"description": "Search query",
"name": "q",
"in": "query"
},
{
"type": "integer",
"description": "Page limit",
"name": "limit",
"in": "query",
"required": true
},
{
"type": "integer",
"description": "Page offset",
"name": "offset",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/codersdk.GlobalWorkspaceSessionsResponse"
}
}
}
}
},
"/csp/reports": {
"post": {
"security": [
@@ -10337,6 +10415,49 @@
}
}
},
"/workspaces/{workspace}/sessions": {
"get": {
"security": [
{
"CoderSessionToken": []
}
],
"produces": ["application/json"],
"tags": ["Workspaces"],
"summary": "Get workspace sessions",
"operationId": "get-workspace-sessions",
"parameters": [
{
"type": "string",
"format": "uuid",
"description": "Workspace ID",
"name": "workspace",
"in": "path",
"required": true
},
{
"type": "integer",
"description": "Page limit",
"name": "limit",
"in": "query"
},
{
"type": "integer",
"description": "Page offset",
"name": "offset",
"in": "query"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"$ref": "#/definitions/codersdk.WorkspaceSessionsResponse"
}
}
}
}
},
"/workspaces/{workspace}/timings": {
"get": {
"security": [
@@ -12058,6 +12179,16 @@
}
}
},
"codersdk.ConnectionDiagnosticSeverity": {
"type": "string",
"enum": ["info", "warning", "error", "critical"],
"x-enum-varnames": [
"ConnectionDiagnosticSeverityInfo",
"ConnectionDiagnosticSeverityWarning",
"ConnectionDiagnosticSeverityError",
"ConnectionDiagnosticSeverityCritical"
]
},
"codersdk.ConnectionLatency": {
"type": "object",
"properties": {
@@ -12193,7 +12324,8 @@
"jetbrains",
"reconnecting_pty",
"workspace_app",
"port_forwarding"
"port_forwarding",
"system"
],
"x-enum-varnames": [
"ConnectionTypeSSH",
@@ -12201,7 +12333,8 @@
"ConnectionTypeJetBrains",
"ConnectionTypeReconnectingPTY",
"ConnectionTypeWorkspaceApp",
"ConnectionTypePortForwarding"
"ConnectionTypePortForwarding",
"ConnectionTypeSystem"
]
},
"codersdk.ConvertLoginRequest": {
@@ -13302,6 +13435,74 @@
}
}
},
"codersdk.DiagnosticConnection": {
"type": "object",
"properties": {
"agent_id": {
"type": "string",
"format": "uuid"
},
"agent_name": {
"type": "string"
},
"client_hostname": {
"type": "string"
},
"detail": {
"type": "string"
},
"explanation": {
"type": "string"
},
"home_derp": {
"$ref": "#/definitions/codersdk.DiagnosticHomeDERP"
},
"id": {
"type": "string",
"format": "uuid"
},
"ip": {
"type": "string"
},
"latency_ms": {
"type": "number"
},
"p2p": {
"type": "boolean"
},
"short_description": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"type": {
"$ref": "#/definitions/codersdk.ConnectionType"
},
"workspace_id": {
"type": "string",
"format": "uuid"
},
"workspace_name": {
"type": "string"
}
}
},
"codersdk.DiagnosticDurationRange": {
"type": "object",
"properties": {
"max_seconds": {
"type": "number"
},
"min_seconds": {
"type": "number"
}
}
},
"codersdk.DiagnosticExtra": {
"type": "object",
"properties": {
@@ -13310,6 +13511,241 @@
}
}
},
"codersdk.DiagnosticHealth": {
"type": "string",
"enum": ["healthy", "degraded", "unhealthy", "inactive"],
"x-enum-varnames": [
"DiagnosticHealthHealthy",
"DiagnosticHealthDegraded",
"DiagnosticHealthUnhealthy",
"DiagnosticHealthInactive"
]
},
"codersdk.DiagnosticHomeDERP": {
"type": "object",
"properties": {
"id": {
"type": "integer"
},
"name": {
"type": "string"
}
}
},
"codersdk.DiagnosticNetworkSummary": {
"type": "object",
"properties": {
"avg_latency_ms": {
"type": "number"
},
"derp_connections": {
"type": "integer"
},
"p2p_connections": {
"type": "integer"
},
"p95_latency_ms": {
"type": "number"
},
"primary_derp_region": {
"type": "string"
}
}
},
"codersdk.DiagnosticPattern": {
"type": "object",
"properties": {
"affected_sessions": {
"type": "integer"
},
"commonalities": {
"$ref": "#/definitions/codersdk.DiagnosticPatternCommonality"
},
"description": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"recommendation": {
"type": "string"
},
"severity": {
"$ref": "#/definitions/codersdk.ConnectionDiagnosticSeverity"
},
"title": {
"type": "string"
},
"total_sessions": {
"type": "integer"
},
"type": {
"$ref": "#/definitions/codersdk.DiagnosticPatternType"
}
}
},
"codersdk.DiagnosticPatternCommonality": {
"type": "object",
"properties": {
"client_descriptions": {
"type": "array",
"items": {
"type": "string"
}
},
"connection_types": {
"type": "array",
"items": {
"type": "string"
}
},
"disconnect_reasons": {
"type": "array",
"items": {
"type": "string"
}
},
"duration_range": {
"$ref": "#/definitions/codersdk.DiagnosticDurationRange"
},
"time_of_day_range": {
"type": "string"
}
}
},
"codersdk.DiagnosticPatternType": {
"type": "string",
"enum": [
"device_sleep",
"workspace_autostart",
"network_policy",
"agent_crash",
"latency_degradation",
"derp_fallback",
"clean_usage",
"unknown_drops"
],
"x-enum-varnames": [
"DiagnosticPatternDeviceSleep",
"DiagnosticPatternWorkspaceAutostart",
"DiagnosticPatternNetworkPolicy",
"DiagnosticPatternAgentCrash",
"DiagnosticPatternLatencyDegradation",
"DiagnosticPatternDERPFallback",
"DiagnosticPatternCleanUsage",
"DiagnosticPatternUnknownDrops"
]
},
"codersdk.DiagnosticSession": {
"type": "object",
"properties": {
"agent_name": {
"type": "string"
},
"client_hostname": {
"type": "string"
},
"connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticSessionConn"
}
},
"disconnect_reason": {
"type": "string"
},
"duration_seconds": {
"type": "number"
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"explanation": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"ip": {
"type": "string"
},
"network": {
"$ref": "#/definitions/codersdk.DiagnosticSessionNetwork"
},
"short_description": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"timeline": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticTimelineEvent"
}
},
"workspace_id": {
"type": "string",
"format": "uuid"
},
"workspace_name": {
"type": "string"
}
}
},
"codersdk.DiagnosticSessionConn": {
"type": "object",
"properties": {
"connected_at": {
"type": "string",
"format": "date-time"
},
"detail": {
"type": "string"
},
"disconnected_at": {
"type": "string",
"format": "date-time"
},
"exit_code": {
"type": "integer"
},
"explanation": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"type": {
"$ref": "#/definitions/codersdk.ConnectionType"
}
}
},
"codersdk.DiagnosticSessionNetwork": {
"type": "object",
"properties": {
"avg_latency_ms": {
"type": "number"
},
"home_derp": {
"type": "string"
},
"p2p": {
"type": "boolean"
}
}
},
"codersdk.DiagnosticSeverityString": {
"type": "string",
"enum": ["error", "warning"],
@@ -13318,6 +13754,193 @@
"DiagnosticSeverityWarning"
]
},
"codersdk.DiagnosticStatusBreakdown": {
"type": "object",
"properties": {
"clean": {
"type": "integer"
},
"lost": {
"type": "integer"
},
"ongoing": {
"type": "integer"
},
"workspace_deleted": {
"type": "integer"
},
"workspace_stopped": {
"type": "integer"
}
}
},
"codersdk.DiagnosticSummary": {
"type": "object",
"properties": {
"active_connections": {
"type": "integer"
},
"by_status": {
"$ref": "#/definitions/codersdk.DiagnosticStatusBreakdown"
},
"by_type": {
"type": "object",
"additionalProperties": {
"type": "integer"
}
},
"headline": {
"type": "string"
},
"network": {
"$ref": "#/definitions/codersdk.DiagnosticNetworkSummary"
},
"total_connections": {
"type": "integer"
},
"total_sessions": {
"type": "integer"
}
}
},
"codersdk.DiagnosticTimeWindow": {
"type": "object",
"properties": {
"end": {
"type": "string",
"format": "date-time"
},
"hours": {
"type": "integer"
},
"start": {
"type": "string",
"format": "date-time"
}
}
},
"codersdk.DiagnosticTimelineEvent": {
"type": "object",
"properties": {
"description": {
"type": "string"
},
"kind": {
"$ref": "#/definitions/codersdk.DiagnosticTimelineEventKind"
},
"metadata": {
"type": "object",
"additionalProperties": {}
},
"severity": {
"$ref": "#/definitions/codersdk.ConnectionDiagnosticSeverity"
},
"timestamp": {
"type": "string",
"format": "date-time"
}
}
},
"codersdk.DiagnosticTimelineEventKind": {
"type": "string",
"enum": [
"tunnel_created",
"tunnel_removed",
"node_update",
"peer_lost",
"peer_recovered",
"connection_opened",
"connection_closed",
"derp_fallback",
"p2p_established",
"latency_spike",
"workspace_state_change"
],
"x-enum-varnames": [
"DiagnosticTimelineEventTunnelCreated",
"DiagnosticTimelineEventTunnelRemoved",
"DiagnosticTimelineEventNodeUpdate",
"DiagnosticTimelineEventPeerLost",
"DiagnosticTimelineEventPeerRecovered",
"DiagnosticTimelineEventConnectionOpened",
"DiagnosticTimelineEventConnectionClosed",
"DiagnosticTimelineEventDERPFallback",
"DiagnosticTimelineEventP2PEstablished",
"DiagnosticTimelineEventLatencySpike",
"DiagnosticTimelineEventWorkspaceStateChange"
]
},
"codersdk.DiagnosticUser": {
"type": "object",
"properties": {
"avatar_url": {
"type": "string"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"email": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"last_seen_at": {
"type": "string",
"format": "date-time"
},
"name": {
"type": "string"
},
"roles": {
"type": "array",
"items": {
"type": "string"
}
},
"username": {
"type": "string"
}
}
},
"codersdk.DiagnosticWorkspace": {
"type": "object",
"properties": {
"health": {
"$ref": "#/definitions/codersdk.DiagnosticHealth"
},
"health_reason": {
"type": "string"
},
"id": {
"type": "string",
"format": "uuid"
},
"name": {
"type": "string"
},
"owner_username": {
"type": "string"
},
"sessions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticSession"
}
},
"status": {
"type": "string"
},
"template_display_name": {
"type": "string"
},
"template_name": {
"type": "string"
}
}
},
"codersdk.DisplayApp": {
"type": "string",
"enum": [
@@ -13810,6 +14433,65 @@
}
}
},
"codersdk.GlobalWorkspaceSession": {
"type": "object",
"properties": {
"client_hostname": {
"type": "string"
},
"connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceConnection"
}
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"id": {
"description": "nil for live sessions",
"type": "string"
},
"ip": {
"type": "string"
},
"short_description": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"workspace_id": {
"type": "string",
"format": "uuid"
},
"workspace_name": {
"type": "string"
},
"workspace_owner_username": {
"type": "string"
}
}
},
"codersdk.GlobalWorkspaceSessionsResponse": {
"type": "object",
"properties": {
"count": {
"type": "integer"
},
"sessions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.GlobalWorkspaceSession"
}
}
}
},
"codersdk.Group": {
"type": "object",
"properties": {
@@ -18518,6 +19200,42 @@
}
}
},
"codersdk.UserDiagnosticResponse": {
"type": "object",
"properties": {
"current_connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticConnection"
}
},
"generated_at": {
"type": "string",
"format": "date-time"
},
"patterns": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticPattern"
}
},
"summary": {
"$ref": "#/definitions/codersdk.DiagnosticSummary"
},
"time_window": {
"$ref": "#/definitions/codersdk.DiagnosticTimeWindow"
},
"user": {
"$ref": "#/definitions/codersdk.DiagnosticUser"
},
"workspaces": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.DiagnosticWorkspace"
}
}
}
},
"codersdk.UserLatency": {
"type": "object",
"properties": {
@@ -19003,6 +19721,12 @@
"$ref": "#/definitions/codersdk.WorkspaceAgentScript"
}
},
"sessions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceSession"
}
},
"started_at": {
"type": "string",
"format": "date-time"
@@ -19758,6 +20482,83 @@
}
}
},
"codersdk.WorkspaceConnection": {
"type": "object",
"properties": {
"client_hostname": {
"description": "ClientHostname is the hostname of the client that connected to the agent. Self-reported by the client.",
"type": "string"
},
"connected_at": {
"type": "string",
"format": "date-time"
},
"created_at": {
"type": "string",
"format": "date-time"
},
"detail": {
"description": "Detail is the app slug or port number for workspace_app and port_forwarding connections.",
"type": "string"
},
"disconnect_reason": {
"description": "DisconnectReason is the reason the connection was closed.",
"type": "string"
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"exit_code": {
"description": "ExitCode is the exit code of the SSH session.",
"type": "integer"
},
"home_derp": {
"description": "HomeDERP is the DERP region metadata for the agent's home relay.",
"allOf": [
{
"$ref": "#/definitions/codersdk.WorkspaceConnectionHomeDERP"
}
]
},
"ip": {
"type": "string"
},
"latency_ms": {
"description": "LatencyMS is the most recent round-trip latency in\nmilliseconds. Uses P2P latency when direct, DERP otherwise.",
"type": "number"
},
"p2p": {
"description": "P2P indicates a direct peer-to-peer connection (true) or\nDERP relay (false). Nil if telemetry unavailable.",
"type": "boolean"
},
"short_description": {
"description": "ShortDescription is the human-readable short description of the connection. Self-reported by the client.",
"type": "string"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
},
"type": {
"$ref": "#/definitions/codersdk.ConnectionType"
},
"user_agent": {
"description": "UserAgent is the HTTP user agent string from web connections.",
"type": "string"
}
}
},
"codersdk.WorkspaceConnectionHomeDERP": {
"type": "object",
"properties": {
"id": {
"type": "integer"
},
"name": {
"type": "string"
}
}
},
"codersdk.WorkspaceConnectionLatencyMS": {
"type": "object",
"properties": {
@@ -19771,6 +20572,21 @@
}
}
},
"codersdk.WorkspaceConnectionStatus": {
"type": "string",
"enum": [
"ongoing",
"control_lost",
"client_disconnected",
"clean_disconnected"
],
"x-enum-varnames": [
"ConnectionStatusOngoing",
"ConnectionStatusControlLost",
"ConnectionStatusClientDisconnected",
"ConnectionStatusCleanDisconnected"
]
},
"codersdk.WorkspaceDeploymentStats": {
"type": "object",
"properties": {
@@ -20034,6 +20850,55 @@
"WorkspaceRoleDeleted"
]
},
"codersdk.WorkspaceSession": {
"type": "object",
"properties": {
"client_hostname": {
"type": "string"
},
"connections": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceConnection"
}
},
"ended_at": {
"type": "string",
"format": "date-time"
},
"id": {
"description": "nil for live sessions",
"type": "string"
},
"ip": {
"type": "string"
},
"short_description": {
"type": "string"
},
"started_at": {
"type": "string",
"format": "date-time"
},
"status": {
"$ref": "#/definitions/codersdk.WorkspaceConnectionStatus"
}
}
},
"codersdk.WorkspaceSessionsResponse": {
"type": "object",
"properties": {
"count": {
"type": "integer"
},
"sessions": {
"type": "array",
"items": {
"$ref": "#/definitions/codersdk.WorkspaceSession"
}
}
}
},
"codersdk.WorkspaceSharingSettings": {
"type": "object",
"properties": {
@@ -90,6 +90,7 @@ import (
"github.com/coder/coder/v2/coderd/workspaceapps/appurl"
"github.com/coder/coder/v2/coderd/workspacestats"
"github.com/coder/coder/v2/coderd/wsbuilder"
"github.com/coder/coder/v2/coderd/wspubsub"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/drpcsdk"
"github.com/coder/coder/v2/codersdk/healthsdk"
@@ -98,6 +99,8 @@ import (
"github.com/coder/coder/v2/provisionersdk"
"github.com/coder/coder/v2/site"
"github.com/coder/coder/v2/tailnet"
"github.com/coder/coder/v2/tailnet/eventsink"
tailnetproto "github.com/coder/coder/v2/tailnet/proto"
"github.com/coder/quartz"
"github.com/coder/serpent"
)
@@ -414,7 +417,8 @@ func New(options *Options) *API {
options.NetworkTelemetryBatchMaxSize = 1_000
}
if options.TailnetCoordinator == nil {
options.TailnetCoordinator = tailnet.NewCoordinator(options.Logger)
eventSink := eventsink.NewEventSink(context.Background(), options.Database, options.Logger)
options.TailnetCoordinator = tailnet.NewCoordinator(options.Logger, eventSink)
}
if options.Auditor == nil {
options.Auditor = audit.NewNop()
@@ -734,20 +738,23 @@ func New(options *Options) *API {
api.Auditor.Store(&options.Auditor)
api.ConnectionLogger.Store(&options.ConnectionLogger)
api.TailnetCoordinator.Store(&options.TailnetCoordinator)
serverTailnetID := uuid.New()
dialer := &InmemTailnetDialer{
CoordPtr: &api.TailnetCoordinator,
DERPFn: api.DERPMap,
Logger: options.Logger,
ClientID: uuid.New(),
ClientID: serverTailnetID,
DatabaseHealthCheck: api.Database,
}
stn, err := NewServerTailnet(api.ctx,
options.Logger,
options.DERPServer,
serverTailnetID,
dialer,
options.DeploymentValues.DERP.Config.ForceWebSockets.Value(),
options.DeploymentValues.DERP.Config.BlockDirect.Value(),
api.TracerProvider,
"Coder Server",
)
if err != nil {
panic("failed to setup server tailnet: " + err.Error())
@@ -763,17 +770,19 @@ func New(options *Options) *API {
api.Options.NetworkTelemetryBatchMaxSize,
api.handleNetworkTelemetry,
)
api.PeerNetworkTelemetryStore = NewPeerNetworkTelemetryStore()
if options.CoordinatorResumeTokenProvider == nil {
panic("CoordinatorResumeTokenProvider is nil")
}
api.TailnetClientService, err = tailnet.NewClientService(tailnet.ClientServiceOptions{
Logger: api.Logger.Named("tailnetclient"),
CoordPtr: &api.TailnetCoordinator,
DERPMapUpdateFrequency: api.Options.DERPMapUpdateFrequency,
DERPMapFn: api.DERPMap,
NetworkTelemetryHandler: api.NetworkTelemetryBatcher.Handler,
ResumeTokenProvider: api.Options.CoordinatorResumeTokenProvider,
WorkspaceUpdatesProvider: api.UpdatesProvider,
Logger: api.Logger.Named("tailnetclient"),
CoordPtr: &api.TailnetCoordinator,
DERPMapUpdateFrequency: api.Options.DERPMapUpdateFrequency,
DERPMapFn: api.DERPMap,
NetworkTelemetryHandler: api.NetworkTelemetryBatcher.Handler,
IdentifiedTelemetryHandler: api.handleIdentifiedTelemetry,
ResumeTokenProvider: api.Options.CoordinatorResumeTokenProvider,
WorkspaceUpdatesProvider: api.UpdatesProvider,
})
if err != nil {
api.Logger.Fatal(context.Background(), "failed to initialize tailnet client service", slog.Error(err))
@@ -1517,6 +1526,7 @@ func New(options *Options) *API {
r.Delete("/", api.deleteWorkspaceAgentPortShare)
})
r.Get("/timings", api.workspaceTimings)
r.Get("/sessions", api.workspaceSessions)
r.Route("/acl", func(r chi.Router) {
r.Use(
httpmw.RequireExperiment(api.Experiments, codersdk.ExperimentWorkspaceSharing),
@@ -1828,6 +1838,7 @@ type API struct {
WorkspaceClientCoordinateOverride atomic.Pointer[func(rw http.ResponseWriter) bool]
TailnetCoordinator atomic.Pointer[tailnet.Coordinator]
NetworkTelemetryBatcher *tailnet.NetworkTelemetryBatcher
PeerNetworkTelemetryStore *PeerNetworkTelemetryStore
TailnetClientService *tailnet.ClientService
// WebpushDispatcher is a way to send notifications to users via Web Push.
WebpushDispatcher webpush.Dispatcher
@@ -1962,6 +1973,36 @@ func (api *API) Close() error {
return nil
}
// handleIdentifiedTelemetry stores peer telemetry events and publishes a
// workspace update so watch subscribers see fresh data.
func (api *API) handleIdentifiedTelemetry(agentID, peerID uuid.UUID, events []*tailnetproto.TelemetryEvent) {
if len(events) == 0 {
return
}
for _, event := range events {
api.PeerNetworkTelemetryStore.Update(agentID, peerID, event)
}
// Telemetry callback runs outside any user request, so we use a system
// context to look up the workspace for the pubsub notification.
ctx := dbauthz.AsSystemRestricted(context.Background()) //nolint:gocritic // Telemetry callback has no user context.
workspace, err := api.Database.GetWorkspaceByAgentID(ctx, agentID)
if err != nil {
api.Logger.Warn(ctx, "failed to resolve workspace for telemetry update",
slog.F("agent_id", agentID),
slog.Error(err),
)
return
}
api.publishWorkspaceUpdate(ctx, workspace.OwnerID, wspubsub.WorkspaceEvent{
Kind: wspubsub.WorkspaceEventKindConnectionLogUpdate,
WorkspaceID: workspace.ID,
AgentID: &agentID,
})
}
func compressHandler(h http.Handler) http.Handler {
level := 5
if flag.Lookup("test.v") != nil {
@@ -82,6 +82,10 @@ func (m *FakeConnectionLogger) Contains(t testing.TB, expected database.UpsertCo
t.Logf("connection log %d: expected AgentName %s, got %s", idx+1, expected.AgentName, cl.AgentName)
continue
}
if expected.AgentID.Valid && cl.AgentID.UUID != expected.AgentID.UUID {
t.Logf("connection log %d: expected AgentID %s, got %s", idx+1, expected.AgentID.UUID, cl.AgentID.UUID)
continue
}
if expected.Type != "" && cl.Type != expected.Type {
t.Logf("connection log %d: expected Type %s, got %s", idx+1, expected.Type, cl.Type)
continue
@@ -0,0 +1,938 @@
package database_test
import (
"context"
"database/sql"
"fmt"
"net"
"testing"
"time"
"github.com/google/uuid"
"github.com/sqlc-dev/pqtype"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
)
func TestCloseOpenAgentConnectionLogsForWorkspace(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws1 := dbgen.Workspace(t, db, database.WorkspaceTable{
ID: uuid.New(),
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
ws2 := dbgen.Workspace(t, db, database.WorkspaceTable{
ID: uuid.New(),
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
ip := pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(127, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
// Simulate agent clock skew by using a connect time in the future.
connectTime := dbtime.Now().Add(time.Hour)
sshLog1, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: connectTime,
OrganizationID: ws1.OrganizationID,
WorkspaceOwnerID: ws1.OwnerID,
WorkspaceID: ws1.ID,
WorkspaceName: ws1.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
appLog, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: dbtime.Now(),
OrganizationID: ws1.OrganizationID,
WorkspaceOwnerID: ws1.OwnerID,
WorkspaceID: ws1.ID,
WorkspaceName: ws1.Name,
AgentName: "agent",
Type: database.ConnectionTypeWorkspaceApp,
Ip: ip,
UserAgent: sql.NullString{String: "test", Valid: true},
UserID: uuid.NullUUID{UUID: ws1.OwnerID, Valid: true},
SlugOrPort: sql.NullString{String: "app", Valid: true},
Code: sql.NullInt32{Int32: 200, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
sshLog2, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: dbtime.Now(),
OrganizationID: ws2.OrganizationID,
WorkspaceOwnerID: ws2.OwnerID,
WorkspaceID: ws2.ID,
WorkspaceName: ws2.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
rowsClosed, err := db.CloseOpenAgentConnectionLogsForWorkspace(ctx, database.CloseOpenAgentConnectionLogsForWorkspaceParams{
WorkspaceID: ws1.ID,
ClosedAt: dbtime.Now(),
Reason: "workspace stopped",
Types: []database.ConnectionType{
database.ConnectionTypeSsh,
database.ConnectionTypeVscode,
database.ConnectionTypeJetbrains,
database.ConnectionTypeReconnectingPty,
},
})
require.NoError(t, err)
require.EqualValues(t, 1, rowsClosed)
ws1Rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{WorkspaceID: ws1.ID})
require.NoError(t, err)
require.Len(t, ws1Rows, 2)
for _, row := range ws1Rows {
switch row.ConnectionLog.ID {
case sshLog1.ID:
updated := row.ConnectionLog
require.True(t, updated.DisconnectTime.Valid)
require.True(t, updated.DisconnectReason.Valid)
require.Equal(t, "workspace stopped", updated.DisconnectReason.String)
require.False(t, updated.DisconnectTime.Time.Before(updated.ConnectTime), "disconnect_time should never be before connect_time")
case appLog.ID:
notClosed := row.ConnectionLog
require.False(t, notClosed.DisconnectTime.Valid)
require.False(t, notClosed.DisconnectReason.Valid)
default:
t.Fatalf("unexpected connection log id: %s", row.ConnectionLog.ID)
}
}
ws2Rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{WorkspaceID: ws2.ID})
require.NoError(t, err)
require.Len(t, ws2Rows, 1)
require.Equal(t, sshLog2.ID, ws2Rows[0].ConnectionLog.ID)
require.False(t, ws2Rows[0].ConnectionLog.DisconnectTime.Valid)
}
// Regression test: CloseConnectionLogsAndCreateSessions must not fail
// when connection_logs have NULL IPs (e.g., disconnect-only tunnel
// events). NULL-IP logs should be closed but no session created for
// them.
func TestCloseConnectionLogsAndCreateSessions_NullIP(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
validIP := pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(10, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
now := dbtime.Now()
// Connection with a valid IP.
sshLog, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-30 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: validIP,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
// Connection with a NULL IP — simulates a disconnect-only tunnel
// event where the source node info is unavailable.
nullIPLog, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-25 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSystem,
Ip: pqtype.Inet{Valid: false},
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
// This previously failed with: "pq: null value in column ip of
// relation workspace_sessions violates not-null constraint".
closedAt := now.Add(-5 * time.Minute)
_, err = db.CloseConnectionLogsAndCreateSessions(ctx, database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: ws.ID,
Types: []database.ConnectionType{
database.ConnectionTypeSsh,
database.ConnectionTypeSystem,
},
})
require.NoError(t, err)
// Verify both logs were closed.
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
WorkspaceID: ws.ID,
})
require.NoError(t, err)
require.Len(t, rows, 2)
for _, row := range rows {
cl := row.ConnectionLog
require.True(t, cl.DisconnectTime.Valid,
"connection log %s (type=%s) should be closed", cl.ID, cl.Type)
switch cl.ID {
case sshLog.ID:
// Valid-IP log should have a session.
require.True(t, cl.SessionID.Valid,
"valid-IP log should be linked to a session")
case nullIPLog.ID:
// NULL-IP system connection overlaps with the SSH
// session, so it gets attached to that session.
require.True(t, cl.SessionID.Valid,
"NULL-IP system log overlapping with SSH session should be linked to a session")
default:
t.Fatalf("unexpected connection log id: %s", cl.ID)
}
}
}
// Regression test: CloseConnectionLogsAndCreateSessions must handle
// connections that are already disconnected but have no session_id
// (e.g., system/tunnel connections disconnected by dbsink). It must
// also avoid creating duplicate sessions when assignSessionForDisconnect
// has already created one for the same IP/time range.
func TestCloseConnectionLogsAndCreateSessions_AlreadyDisconnectedGetsSession(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
ip := pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(127, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
now := dbtime.Now()
// A system connection that was already disconnected (by dbsink)
// but has no session_id — dbsink doesn't assign sessions.
sysConnID := uuid.New()
_, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-10 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSystem,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: sysConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
_, err = db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-5 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSystem,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: sysConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusDisconnected,
})
require.NoError(t, err)
// Run CloseConnectionLogsAndCreateSessions (workspace stop).
closedAt := now
_, err = db.CloseConnectionLogsAndCreateSessions(ctx, database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: ws.ID,
Types: []database.ConnectionType{
database.ConnectionTypeSsh,
database.ConnectionTypeSystem,
},
})
require.NoError(t, err)
// The system connection should now have a session_id.
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
WorkspaceID: ws.ID,
})
require.NoError(t, err)
require.Len(t, rows, 1)
require.True(t, rows[0].ConnectionLog.SessionID.Valid,
"already-disconnected system connection should be assigned to a session")
}
// Regression test: when assignSessionForDisconnect has already
// created a session for an SSH connection,
// CloseConnectionLogsAndCreateSessions must reuse that session
// instead of creating a duplicate.
func TestCloseConnectionLogsAndCreateSessions_ReusesExistingSession(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
ip := pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(127, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
now := dbtime.Now()
// Simulate an SSH connection where assignSessionForDisconnect
// has already created a session and linked the connection log's
// session_id (the normal successful path).
sshConnID := uuid.New()
_, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-10 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: sshConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
sshLog, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-5 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: sshConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusDisconnected,
})
require.NoError(t, err)
// Create the session that assignSessionForDisconnect would have
// created, and link the connection log to it.
existingSessionIDRaw, err := db.FindOrCreateSessionForDisconnect(ctx, database.FindOrCreateSessionForDisconnectParams{
WorkspaceID: ws.ID.String(),
Ip: ip,
ConnectTime: sshLog.ConnectTime,
DisconnectTime: sshLog.DisconnectTime.Time,
})
require.NoError(t, err)
existingSessionID, err := uuid.Parse(fmt.Sprintf("%s", existingSessionIDRaw))
require.NoError(t, err)
err = db.UpdateConnectionLogSessionID(ctx, database.UpdateConnectionLogSessionIDParams{
ID: sshLog.ID,
SessionID: uuid.NullUUID{UUID: existingSessionID, Valid: true},
})
require.NoError(t, err)
// Also add a system connection (no session, already disconnected).
sysConnID := uuid.New()
_, err = db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-10 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSystem,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: sysConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
_, err = db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-5 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSystem,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: sysConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusDisconnected,
})
require.NoError(t, err)
// Run CloseConnectionLogsAndCreateSessions.
closedAt := now
_, err = db.CloseConnectionLogsAndCreateSessions(ctx, database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: ws.ID,
Types: []database.ConnectionType{
database.ConnectionTypeSsh,
database.ConnectionTypeSystem,
},
})
require.NoError(t, err)
// Verify: the system connection should be assigned to the
// EXISTING session (reused), not a new one.
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
WorkspaceID: ws.ID,
})
require.NoError(t, err)
require.Len(t, rows, 2)
for _, row := range rows {
cl := row.ConnectionLog
require.True(t, cl.SessionID.Valid,
"connection log %s (type=%s) should have a session", cl.ID, cl.Type)
require.Equal(t, existingSessionID, cl.SessionID.UUID,
"connection log %s should reuse the existing session, not create a new one", cl.ID)
}
}
// Test: connections with different IPs but same hostname get grouped
// into one session.
func TestCloseConnectionLogsAndCreateSessions_GroupsByHostname(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
now := dbtime.Now()
hostname := sql.NullString{String: "my-laptop", Valid: true}
// Create 3 SSH connections with different IPs but same hostname,
// overlapping in time.
for i := 0; i < 3; i++ {
ip := pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(10, 0, 0, byte(i+1)),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
_, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(time.Duration(-30+i*5) * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ClientHostname: hostname,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
}
closedAt := now
_, err := db.CloseConnectionLogsAndCreateSessions(ctx, database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: ws.ID,
Types: []database.ConnectionType{
database.ConnectionTypeSsh,
},
})
require.NoError(t, err)
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
WorkspaceID: ws.ID,
})
require.NoError(t, err)
require.Len(t, rows, 3)
// All 3 connections should have the same session_id.
var sessionID uuid.UUID
for i, row := range rows {
cl := row.ConnectionLog
require.True(t, cl.SessionID.Valid,
"connection %d should have a session", i)
if i == 0 {
sessionID = cl.SessionID.UUID
} else {
require.Equal(t, sessionID, cl.SessionID.UUID,
"all connections with same hostname should share one session")
}
}
}
// Test: a long-running system connection gets attached to the first
// overlapping primary session, not the second.
func TestCloseConnectionLogsAndCreateSessions_SystemAttachesToFirstSession(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
ip := pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(10, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
now := dbtime.Now()
// System connection spanning the full workspace lifetime.
sysLog, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-3 * time.Hour),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSystem,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
// SSH session 1: -3h to -2h.
ssh1ConnID := uuid.New()
_, err = db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-3 * time.Hour),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: ssh1ConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
_, err = db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-2 * time.Hour),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: ssh1ConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusDisconnected,
})
require.NoError(t, err)
// SSH session 2: -30min to now (>30min gap from session 1).
_, err = db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-30 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
closedAt := now
_, err = db.CloseConnectionLogsAndCreateSessions(ctx, database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: ws.ID,
Types: []database.ConnectionType{
database.ConnectionTypeSsh,
database.ConnectionTypeSystem,
},
})
require.NoError(t, err)
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
WorkspaceID: ws.ID,
})
require.NoError(t, err)
// Find the system connection's assigned session, and track which
// session the earliest SSH connection belongs to.
var sysSessionID uuid.UUID
var firstSSHSessionID uuid.UUID
var firstSSHConnectTime time.Time
// Collect all session IDs from SSH connections to verify 2
// distinct sessions were created.
sshSessionIDs := make(map[uuid.UUID]bool)
for _, row := range rows {
cl := row.ConnectionLog
if cl.ID == sysLog.ID {
require.True(t, cl.SessionID.Valid,
"system connection should have a session")
sysSessionID = cl.SessionID.UUID
}
if cl.Type == database.ConnectionTypeSsh && cl.SessionID.Valid {
sshSessionIDs[cl.SessionID.UUID] = true
if firstSSHConnectTime.IsZero() || cl.ConnectTime.Before(firstSSHConnectTime) {
firstSSHConnectTime = cl.ConnectTime
firstSSHSessionID = cl.SessionID.UUID
}
}
}
// Two distinct SSH sessions should exist (>30min gap).
require.Len(t, sshSessionIDs, 2, "should have 2 distinct SSH sessions")
// System connection should be attached to the first (earliest)
// session, not the second.
require.Equal(t, firstSSHSessionID, sysSessionID,
"system connection should be attached to the first SSH session")
}
// Test: an orphaned system connection (no overlapping primary sessions)
// with an IP gets its own session.
func TestCloseConnectionLogsAndCreateSessions_OrphanSystemGetsOwnSession(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
ip := pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(10, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
now := dbtime.Now()
// System connection with an IP but no overlapping primary
// connections.
_, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-10 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSystem,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
closedAt := now
_, err = db.CloseConnectionLogsAndCreateSessions(ctx, database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: ws.ID,
Types: []database.ConnectionType{
database.ConnectionTypeSystem,
},
})
require.NoError(t, err)
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
WorkspaceID: ws.ID,
})
require.NoError(t, err)
require.Len(t, rows, 1)
require.True(t, rows[0].ConnectionLog.SessionID.Valid,
"orphaned system connection with IP should get its own session")
}
// Test: a system connection with NULL IP and no overlapping primary
// sessions gets no session (can't create a useful session without IP).
func TestCloseConnectionLogsAndCreateSessions_SystemNoIPNoSession(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
now := dbtime.Now()
// System connection with NULL IP and no overlapping primary.
_, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-10 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSystem,
Ip: pqtype.Inet{Valid: false},
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
closedAt := now
_, err = db.CloseConnectionLogsAndCreateSessions(ctx, database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: ws.ID,
Types: []database.ConnectionType{
database.ConnectionTypeSystem,
},
})
require.NoError(t, err)
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
WorkspaceID: ws.ID,
})
require.NoError(t, err)
require.Len(t, rows, 1)
require.True(t, rows[0].ConnectionLog.DisconnectTime.Valid,
"system connection should be closed")
require.False(t, rows[0].ConnectionLog.SessionID.Valid,
"NULL-IP system connection with no primary overlap should not get a session")
}
// Test: connections from the same hostname with a >30-minute gap
// create separate sessions.
func TestCloseConnectionLogsAndCreateSessions_SeparateSessionsForLargeGap(t *testing.T) {
t.Parallel()
db, _ := dbtestutil.NewDB(t)
ctx := context.Background()
u := dbgen.User(t, db, database.User{})
o := dbgen.Organization(t, db, database.Organization{})
tpl := dbgen.Template(t, db, database.Template{
OrganizationID: o.ID,
CreatedBy: u.ID,
})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OwnerID: u.ID,
OrganizationID: o.ID,
AutomaticUpdates: database.AutomaticUpdatesNever,
TemplateID: tpl.ID,
})
ip := pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(10, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
now := dbtime.Now()
// SSH connection 1: -3h to -2h.
conn1ID := uuid.New()
_, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-3 * time.Hour),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: conn1ID, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
_, err = db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-2 * time.Hour),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: conn1ID, Valid: true},
ConnectionStatus: database.ConnectionStatusDisconnected,
})
require.NoError(t, err)
// SSH connection 2: -30min to now (>30min gap from connection 1).
_, err = db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: now.Add(-30 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
closedAt := now
_, err = db.CloseConnectionLogsAndCreateSessions(ctx, database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: ws.ID,
Types: []database.ConnectionType{
database.ConnectionTypeSsh,
},
})
require.NoError(t, err)
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{
WorkspaceID: ws.ID,
})
require.NoError(t, err)
sessionIDs := make(map[uuid.UUID]bool)
for _, row := range rows {
cl := row.ConnectionLog
if cl.SessionID.Valid {
sessionIDs[cl.SessionID.UUID] = true
}
}
require.Len(t, sessionIDs, 2,
"connections with >30min gap should create 2 separate sessions")
}
@@ -0,0 +1,239 @@
package database_test
import (
"context"
"database/sql"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
)
func TestGetOngoingAgentConnectionsLast24h(t *testing.T) {
t.Parallel()
ctx := context.Background()
db, _ := dbtestutil.NewDB(t)
org := dbfake.Organization(t, db).Do()
user := dbgen.User(t, db, database.User{})
tpl := dbgen.Template(t, db, database.Template{OrganizationID: org.Org.ID, CreatedBy: user.ID})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
OwnerID: user.ID,
TemplateID: tpl.ID,
Name: "ws",
})
now := dbtime.Now()
since := now.Add(-24 * time.Hour)
const (
agent1 = "agent1"
agent2 = "agent2"
)
// Insert a disconnected log that should be excluded.
disconnectedConnID := uuid.New()
disconnected := dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-30 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent1,
Type: database.ConnectionTypeSsh,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: disconnectedConnID, Valid: true},
})
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-20 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
AgentName: disconnected.AgentName,
ConnectionStatus: database.ConnectionStatusDisconnected,
ConnectionID: disconnected.ConnectionID,
DisconnectReason: sql.NullString{String: "closed", Valid: true},
})
// Insert an old log that should be excluded by the 24h window.
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-25 * time.Hour),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent1,
Type: database.ConnectionTypeSsh,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
})
// Insert a web log that should be excluded by the types filter.
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-10 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent1,
Type: database.ConnectionTypeWorkspaceApp,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
})
// Insert 55 active logs for agent1 (should be capped to 50).
for i := 0; i < 55; i++ {
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-time.Duration(i) * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent1,
Type: database.ConnectionTypeVscode,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
})
}
// Insert one active log for agent2.
agent2Log := dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-5 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agent2,
Type: database.ConnectionTypeJetbrains,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
})
logs, err := db.GetOngoingAgentConnectionsLast24h(ctx, database.GetOngoingAgentConnectionsLast24hParams{
WorkspaceIds: []uuid.UUID{ws.ID},
AgentNames: []string{agent1, agent2},
Types: []database.ConnectionType{database.ConnectionTypeSsh, database.ConnectionTypeVscode, database.ConnectionTypeJetbrains, database.ConnectionTypeReconnectingPty},
Since: since,
PerAgentLimit: 50,
})
require.NoError(t, err)
byAgent := map[string][]database.GetOngoingAgentConnectionsLast24hRow{}
for _, l := range logs {
byAgent[l.AgentName] = append(byAgent[l.AgentName], l)
}
// Agent1 should be capped at 50 and contain only active logs within the window.
require.Len(t, byAgent[agent1], 50)
for i, l := range byAgent[agent1] {
require.False(t, l.DisconnectTime.Valid, "expected log to be ongoing")
require.False(t, l.ConnectTime.Before(since), "expected log to be within window")
if i > 0 {
prev := byAgent[agent1][i-1].ConnectTime
require.False(t, prev.Before(l.ConnectTime), "expected logs to be ordered by connect_time desc")
}
}
// Agent2 should include its single active log.
require.Len(t, byAgent[agent2], 1)
require.Equal(t, agent2Log.ID, byAgent[agent2][0].ID)
}
func TestGetOngoingAgentConnectionsLast24h_PortForwarding(t *testing.T) {
t.Parallel()
ctx := context.Background()
db, _ := dbtestutil.NewDB(t)
org := dbfake.Organization(t, db).Do()
user := dbgen.User(t, db, database.User{})
tpl := dbgen.Template(t, db, database.Template{OrganizationID: org.Org.ID, CreatedBy: user.ID})
ws := dbgen.Workspace(t, db, database.WorkspaceTable{
OrganizationID: org.Org.ID,
OwnerID: user.ID,
TemplateID: tpl.ID,
Name: "ws-pf",
})
now := dbtime.Now()
since := now.Add(-24 * time.Hour)
const agentName = "agent-pf"
// Agent-reported: NULL user_agent, included unconditionally.
agentReported := dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-10 * time.Minute),
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agentName,
Type: database.ConnectionTypePortForwarding,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
SlugOrPort: sql.NullString{String: "8080", Valid: true},
Ip: database.ParseIP("fd7a:115c:a1e0:4353:89d9:4ca8:9c42:8d2d"),
})
// Stale proxy-reported: non-NULL user_agent, bumped but older than AppActiveSince.
// Use a non-localhost IP to verify the fix works even behind a reverse proxy.
staleConnID := uuid.New()
staleConnectTime := now.Add(-15 * time.Minute)
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: staleConnectTime,
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agentName,
Type: database.ConnectionTypePortForwarding,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: staleConnID, Valid: true},
SlugOrPort: sql.NullString{String: "3000", Valid: true},
Ip: database.ParseIP("203.0.113.45"),
UserAgent: sql.NullString{String: "Mozilla/5.0", Valid: true},
})
// Bump updated_at to simulate a proxy refresh.
staleBumpTime := now.Add(-8 * time.Minute)
_, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: staleBumpTime,
OrganizationID: ws.OrganizationID,
WorkspaceOwnerID: ws.OwnerID,
WorkspaceID: ws.ID,
WorkspaceName: ws.Name,
AgentName: agentName,
Type: database.ConnectionTypePortForwarding,
ConnectionStatus: database.ConnectionStatusConnected,
ConnectionID: uuid.NullUUID{UUID: staleConnID, Valid: true},
SlugOrPort: sql.NullString{String: "3000", Valid: true},
})
require.NoError(t, err)
appActiveSince := now.Add(-5 * time.Minute)
logs, err := db.GetOngoingAgentConnectionsLast24h(ctx, database.GetOngoingAgentConnectionsLast24hParams{
WorkspaceIds: []uuid.UUID{ws.ID},
AgentNames: []string{agentName},
Types: []database.ConnectionType{database.ConnectionTypePortForwarding},
Since: since,
PerAgentLimit: 50,
AppActiveSince: appActiveSince,
})
require.NoError(t, err)
// Only the agent-reported connection should appear.
require.Len(t, logs, 1)
require.Equal(t, agentReported.ID, logs[0].ID)
require.Equal(t, database.ConnectionTypePortForwarding, logs[0].Type)
require.True(t, logs[0].SlugOrPort.Valid)
require.Equal(t, "8080", logs[0].SlugOrPort.String)
}
@@ -3,3 +3,12 @@ package database
import "github.com/google/uuid"
var PrebuildsSystemUserID = uuid.MustParse("c42fdf75-3097-471c-8c33-fb52454d81c0")
const (
TailnetPeeringEventTypeAddedTunnel = "added_tunnel"
TailnetPeeringEventTypeRemovedTunnel = "removed_tunnel"
TailnetPeeringEventTypePeerUpdateNode = "peer_update_node"
TailnetPeeringEventTypePeerUpdateDisconnected = "peer_update_disconnected"
TailnetPeeringEventTypePeerUpdateLost = "peer_update_lost"
TailnetPeeringEventTypePeerUpdateReadyForHandshake = "peer_update_ready_for_handshake"
)
@@ -849,6 +849,10 @@ func ConnectionLogConnectionTypeFromAgentProtoConnectionType(typ agentproto.Conn
return database.ConnectionTypeVscode, nil
case agentproto.Connection_RECONNECTING_PTY:
return database.ConnectionTypeReconnectingPty, nil
case agentproto.Connection_WORKSPACE_APP:
return database.ConnectionTypeWorkspaceApp, nil
case agentproto.Connection_PORT_FORWARDING:
return database.ConnectionTypePortForwarding, nil
default:
// Also Connection_TYPE_UNSPECIFIED, no mapping.
return "", xerrors.Errorf("unknown agent connection type %q", typ)
@@ -461,6 +461,24 @@ var (
Scope: rbac.ScopeAll,
}.WithCachedASTValue()
subjectTailnetCoordinator = rbac.Subject{
Type: rbac.SubjectTypeTailnetCoordinator,
FriendlyName: "Tailnet Coordinator",
ID: uuid.Nil.String(),
Roles: rbac.Roles([]rbac.Role{
{
Identifier: rbac.RoleIdentifier{Name: "tailnetcoordinator"},
DisplayName: "Tailnet Coordinator",
Site: rbac.Permissions(map[string][]policy.Action{
rbac.ResourceTailnetCoordinator.Type: {policy.WildcardSymbol},
}),
User: []rbac.Permission{},
ByOrgID: map[string]rbac.OrgPermissions{},
},
}),
Scope: rbac.ScopeAll,
}.WithCachedASTValue()
subjectSystemOAuth2 = rbac.Subject{
Type: rbac.SubjectTypeSystemOAuth,
FriendlyName: "System OAuth2",
@@ -726,6 +744,12 @@ func AsSystemRestricted(ctx context.Context) context.Context {
return As(ctx, subjectSystemRestricted)
}
// AsTailnetCoordinator returns a context with an actor that has permissions
// required for tailnet coordinator operations.
func AsTailnetCoordinator(ctx context.Context) context.Context {
return As(ctx, subjectTailnetCoordinator)
}
// AsSystemOAuth2 returns a context with an actor that has permissions
// required for OAuth2 provider operations (token revocation, device codes, registration).
func AsSystemOAuth2(ctx context.Context) context.Context {
@@ -1588,6 +1612,20 @@ func (q *querier) CleanTailnetTunnels(ctx context.Context) error {
return q.db.CleanTailnetTunnels(ctx)
}
func (q *querier) CloseConnectionLogsAndCreateSessions(ctx context.Context, arg database.CloseConnectionLogsAndCreateSessionsParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceConnectionLog); err != nil {
return 0, err
}
return q.db.CloseConnectionLogsAndCreateSessions(ctx, arg)
}
func (q *querier) CloseOpenAgentConnectionLogsForWorkspace(ctx context.Context, arg database.CloseOpenAgentConnectionLogsForWorkspaceParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceConnectionLog); err != nil {
return 0, err
}
return q.db.CloseOpenAgentConnectionLogsForWorkspace(ctx, arg)
}
func (q *querier) CountAIBridgeInterceptions(ctx context.Context, arg database.CountAIBridgeInterceptionsParams) (int64, error) {
prep, err := prepareSQLFilter(ctx, q.auth, policy.ActionRead, rbac.ResourceAibridgeInterception.Type)
if err != nil {
@@ -1623,6 +1661,13 @@ func (q *querier) CountConnectionLogs(ctx context.Context, arg database.CountCon
return q.db.CountAuthorizedConnectionLogs(ctx, arg, prep)
}
func (q *querier) CountGlobalWorkspaceSessions(ctx context.Context, arg database.CountGlobalWorkspaceSessionsParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog); err != nil {
return 0, err
}
return q.db.CountGlobalWorkspaceSessions(ctx, arg)
}
func (q *querier) CountInProgressPrebuilds(ctx context.Context) ([]database.CountInProgressPrebuildsRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceWorkspace.All()); err != nil {
return nil, err
@@ -1644,6 +1689,13 @@ func (q *querier) CountUnreadInboxNotificationsByUserID(ctx context.Context, use
return q.db.CountUnreadInboxNotificationsByUserID(ctx, userID)
}
func (q *querier) CountWorkspaceSessions(ctx context.Context, arg database.CountWorkspaceSessionsParams) (int64, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog); err != nil {
return 0, err
}
return q.db.CountWorkspaceSessions(ctx, arg)
}
func (q *querier) CreateUserSecret(ctx context.Context, arg database.CreateUserSecretParams) (database.UserSecret, error) {
obj := rbac.ResourceUserSecret.WithOwner(arg.UserID.String())
if err := q.authorizeContext(ctx, policy.ActionCreate, obj); err != nil {
@@ -2118,6 +2170,13 @@ func (q *querier) FindMatchingPresetID(ctx context.Context, arg database.FindMat
return q.db.FindMatchingPresetID(ctx, arg)
}
func (q *querier) FindOrCreateSessionForDisconnect(ctx context.Context, arg database.FindOrCreateSessionForDisconnectParams) (interface{}, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceConnectionLog); err != nil {
return nil, err
}
return q.db.FindOrCreateSessionForDisconnect(ctx, arg)
}
func (q *querier) GetAIBridgeInterceptionByID(ctx context.Context, id uuid.UUID) (database.AIBridgeInterception, error) {
return fetch(q.log, q.auth, q.db.GetAIBridgeInterceptionByID)(ctx, id)
}
@@ -2202,6 +2261,13 @@ func (q *querier) GetAllTailnetCoordinators(ctx context.Context) ([]database.Tai
return q.db.GetAllTailnetCoordinators(ctx)
}
func (q *querier) GetAllTailnetPeeringEventsByPeerID(ctx context.Context, srcPeerID uuid.NullUUID) ([]database.TailnetPeeringEvent, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTailnetCoordinator); err != nil {
return nil, err
}
return q.db.GetAllTailnetPeeringEventsByPeerID(ctx, srcPeerID)
}
func (q *querier) GetAllTailnetPeers(ctx context.Context) ([]database.TailnetPeer, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTailnetCoordinator); err != nil {
return nil, err
@@ -2271,6 +2337,20 @@ func (q *querier) GetAuthorizationUserRoles(ctx context.Context, userID uuid.UUI
return q.db.GetAuthorizationUserRoles(ctx, userID)
}
func (q *querier) GetConnectionLogByConnectionID(ctx context.Context, arg database.GetConnectionLogByConnectionIDParams) (database.ConnectionLog, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog); err != nil {
return database.ConnectionLog{}, err
}
return q.db.GetConnectionLogByConnectionID(ctx, arg)
}
func (q *querier) GetConnectionLogsBySessionIDs(ctx context.Context, sessionIDs []uuid.UUID) ([]database.ConnectionLog, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog); err != nil {
return nil, err
}
return q.db.GetConnectionLogsBySessionIDs(ctx, sessionIDs)
}
func (q *querier) GetConnectionLogsOffset(ctx context.Context, arg database.GetConnectionLogsOffsetParams) ([]database.GetConnectionLogsOffsetRow, error) {
// Just like with the audit logs query, shortcut if the user is an owner.
err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog)
@@ -2446,6 +2526,13 @@ func (q *querier) GetGitSSHKey(ctx context.Context, userID uuid.UUID) (database.
return fetchWithAction(q.log, q.auth, policy.ActionReadPersonal, q.db.GetGitSSHKey)(ctx, userID)
}
func (q *querier) GetGlobalWorkspaceSessionsOffset(ctx context.Context, arg database.GetGlobalWorkspaceSessionsOffsetParams) ([]database.GetGlobalWorkspaceSessionsOffsetRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog); err != nil {
return nil, err
}
return q.db.GetGlobalWorkspaceSessionsOffset(ctx, arg)
}
func (q *querier) GetGroupByID(ctx context.Context, id uuid.UUID) (database.Group, error) {
return fetch(q.log, q.auth, q.db.GetGroupByID)(ctx, id)
}
@@ -2712,6 +2799,15 @@ func (q *querier) GetOAuthSigningKey(ctx context.Context) (string, error) {
return q.db.GetOAuthSigningKey(ctx)
}
func (q *querier) GetOngoingAgentConnectionsLast24h(ctx context.Context, arg database.GetOngoingAgentConnectionsLast24hParams) ([]database.GetOngoingAgentConnectionsLast24hRow, error) {
// This is a system-level read; authorization comes from the
// caller using dbauthz.AsSystemRestricted(ctx).
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceSystem); err != nil {
return nil, err
}
return q.db.GetOngoingAgentConnectionsLast24h(ctx, arg)
}
func (q *querier) GetOrganizationByID(ctx context.Context, id uuid.UUID) (database.Organization, error) {
return fetch(q.log, q.auth, q.db.GetOrganizationByID)(ctx, id)
}
@@ -3081,6 +3177,13 @@ func (q *querier) GetTailnetTunnelPeerBindings(ctx context.Context, srcID uuid.U
return q.db.GetTailnetTunnelPeerBindings(ctx, srcID)
}
func (q *querier) GetTailnetTunnelPeerBindingsByDstID(ctx context.Context, dstID uuid.UUID) ([]database.GetTailnetTunnelPeerBindingsByDstIDRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTailnetCoordinator); err != nil {
return nil, err
}
return q.db.GetTailnetTunnelPeerBindingsByDstID(ctx, dstID)
}
func (q *querier) GetTailnetTunnelPeerIDs(ctx context.Context, srcID uuid.UUID) ([]database.GetTailnetTunnelPeerIDsRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceTailnetCoordinator); err != nil {
return nil, err
@@ -4086,6 +4189,13 @@ func (q *querier) GetWorkspaceResourcesCreatedAfter(ctx context.Context, created
return q.db.GetWorkspaceResourcesCreatedAfter(ctx, createdAt)
}
func (q *querier) GetWorkspaceSessionsOffset(ctx context.Context, arg database.GetWorkspaceSessionsOffsetParams) ([]database.GetWorkspaceSessionsOffsetRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceConnectionLog); err != nil {
return nil, err
}
return q.db.GetWorkspaceSessionsOffset(ctx, arg)
}
func (q *querier) GetWorkspaceUniqueOwnerCountByTemplateIDs(ctx context.Context, templateIDs []uuid.UUID) ([]database.GetWorkspaceUniqueOwnerCountByTemplateIDsRow, error) {
if err := q.authorizeContext(ctx, policy.ActionRead, rbac.ResourceSystem); err != nil {
return nil, err
@@ -4400,6 +4510,13 @@ func (q *querier) InsertReplica(ctx context.Context, arg database.InsertReplicaP
return q.db.InsertReplica(ctx, arg)
}
func (q *querier) InsertTailnetPeeringEvent(ctx context.Context, arg database.InsertTailnetPeeringEventParams) error {
if err := q.authorizeContext(ctx, policy.ActionCreate, rbac.ResourceTailnetCoordinator); err != nil {
return err
}
return q.db.InsertTailnetPeeringEvent(ctx, arg)
}
func (q *querier) InsertTask(ctx context.Context, arg database.InsertTaskParams) (database.TaskTable, error) {
// Ensure the actor can access the specified template version (and thus its template).
if _, err := q.GetTemplateVersionByID(ctx, arg.TemplateVersionID); err != nil {
@@ -4948,6 +5065,13 @@ func (q *querier) UpdateAPIKeyByID(ctx context.Context, arg database.UpdateAPIKe
return update(q.log, q.auth, fetch, q.db.UpdateAPIKeyByID)(ctx, arg)
}
func (q *querier) UpdateConnectionLogSessionID(ctx context.Context, arg database.UpdateConnectionLogSessionIDParams) error {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceConnectionLog); err != nil {
return err
}
return q.db.UpdateConnectionLogSessionID(ctx, arg)
}
func (q *querier) UpdateCryptoKeyDeletesAt(ctx context.Context, arg database.UpdateCryptoKeyDeletesAtParams) (database.CryptoKey, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceCryptoKey); err != nil {
return database.CryptoKey{}, err
@@ -6202,9 +6326,9 @@ func (q *querier) UpsertWorkspaceApp(ctx context.Context, arg database.UpsertWor
return q.db.UpsertWorkspaceApp(ctx, arg)
}
func (q *querier) UpsertWorkspaceAppAuditSession(ctx context.Context, arg database.UpsertWorkspaceAppAuditSessionParams) (bool, error) {
func (q *querier) UpsertWorkspaceAppAuditSession(ctx context.Context, arg database.UpsertWorkspaceAppAuditSessionParams) (database.UpsertWorkspaceAppAuditSessionRow, error) {
if err := q.authorizeContext(ctx, policy.ActionUpdate, rbac.ResourceSystem); err != nil {
return false, err
return database.UpsertWorkspaceAppAuditSessionRow{}, err
}
return q.db.UpsertWorkspaceAppAuditSession(ctx, arg)
}
@@ -362,6 +362,11 @@ func (s *MethodTestSuite) TestConnectionLogs() {
dbm.EXPECT().DeleteOldConnectionLogs(gomock.Any(), database.DeleteOldConnectionLogsParams{}).Return(int64(0), nil).AnyTimes()
check.Args(database.DeleteOldConnectionLogsParams{}).Asserts(rbac.ResourceSystem, policy.ActionDelete)
}))
s.Run("CloseOpenAgentConnectionLogsForWorkspace", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
arg := database.CloseOpenAgentConnectionLogsForWorkspaceParams{}
dbm.EXPECT().CloseOpenAgentConnectionLogsForWorkspace(gomock.Any(), arg).Return(int64(0), nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceConnectionLog, policy.ActionUpdate)
}))
}
func (s *MethodTestSuite) TestFile() {
@@ -2841,6 +2846,10 @@ func (s *MethodTestSuite) TestTailnetFunctions() {
check.Args(uuid.New()).
Asserts(rbac.ResourceTailnetCoordinator, policy.ActionRead)
}))
s.Run("GetTailnetTunnelPeerBindingsByDstID", s.Subtest(func(_ database.Store, check *expects) {
check.Args(uuid.New()).
Asserts(rbac.ResourceTailnetCoordinator, policy.ActionRead)
}))
s.Run("GetTailnetTunnelPeerIDs", s.Subtest(func(_ database.Store, check *expects) {
check.Args(uuid.New()).
Asserts(rbac.ResourceTailnetCoordinator, policy.ActionRead)
@@ -3309,7 +3318,7 @@ func (s *MethodTestSuite) TestSystemFunctions() {
agent := testutil.Fake(s.T(), faker, database.WorkspaceAgent{})
app := testutil.Fake(s.T(), faker, database.WorkspaceApp{})
arg := database.UpsertWorkspaceAppAuditSessionParams{AgentID: agent.ID, AppID: app.ID, UserID: u.ID, Ip: "127.0.0.1"}
dbm.EXPECT().UpsertWorkspaceAppAuditSession(gomock.Any(), arg).Return(true, nil).AnyTimes()
dbm.EXPECT().UpsertWorkspaceAppAuditSession(gomock.Any(), arg).Return(database.UpsertWorkspaceAppAuditSessionRow{NewOrStale: true}, nil).AnyTimes()
check.Args(arg).Asserts(rbac.ResourceSystem, policy.ActionUpdate)
}))
s.Run("InsertWorkspaceAgentScriptTimings", s.Mocked(func(dbm *dbmock.MockStore, _ *gofakeit.Faker, check *expects) {
@@ -35,12 +35,25 @@ import (
var errMatchAny = xerrors.New("match any error")
var skipMethods = map[string]string{
"InTx": "Not relevant",
"Ping": "Not relevant",
"PGLocks": "Not relevant",
"Wrappers": "Not relevant",
"AcquireLock": "Not relevant",
"TryAcquireLock": "Not relevant",
"InTx": "Not relevant",
"Ping": "Not relevant",
"PGLocks": "Not relevant",
"Wrappers": "Not relevant",
"AcquireLock": "Not relevant",
"TryAcquireLock": "Not relevant",
"GetOngoingAgentConnectionsLast24h": "Hackathon",
"InsertTailnetPeeringEvent": "Hackathon",
"CloseConnectionLogsAndCreateSessions": "Hackathon",
"CountGlobalWorkspaceSessions": "Hackathon",
"CountWorkspaceSessions": "Hackathon",
"FindOrCreateSessionForDisconnect": "Hackathon",
"GetConnectionLogByConnectionID": "Hackathon",
"GetConnectionLogsBySessionIDs": "Hackathon",
"GetGlobalWorkspaceSessionsOffset": "Hackathon",
"GetWorkspaceSessionsOffset": "Hackathon",
"UpdateConnectionLogSessionID": "Hackathon",
"GetAllTailnetPeeringEventsByPeerID": "Hackathon",
}
// TestMethodTestSuite runs MethodTestSuite.
@@ -86,18 +86,27 @@ func ConnectionLog(t testing.TB, db database.Store, seed database.UpsertConnecti
WorkspaceID: takeFirst(seed.WorkspaceID, uuid.New()),
WorkspaceName: takeFirst(seed.WorkspaceName, testutil.GetRandomName(t)),
AgentName: takeFirst(seed.AgentName, testutil.GetRandomName(t)),
Type: takeFirst(seed.Type, database.ConnectionTypeSsh),
AgentID: uuid.NullUUID{
UUID: takeFirst(seed.AgentID.UUID, uuid.Nil),
Valid: takeFirst(seed.AgentID.Valid, false),
},
Type: takeFirst(seed.Type, database.ConnectionTypeSsh),
Code: sql.NullInt32{
Int32: takeFirst(seed.Code.Int32, 0),
Valid: takeFirst(seed.Code.Valid, false),
},
Ip: pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(127, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
},
Ip: func() pqtype.Inet {
if seed.Ip.Valid {
return seed.Ip
}
return pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(127, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
}
}(),
UserAgent: sql.NullString{
String: takeFirst(seed.UserAgent.String, ""),
Valid: takeFirst(seed.UserAgent.Valid, false),
@@ -118,6 +127,18 @@ func ConnectionLog(t testing.TB, db database.Store, seed database.UpsertConnecti
String: takeFirst(seed.DisconnectReason.String, ""),
Valid: takeFirst(seed.DisconnectReason.Valid, false),
},
SessionID: uuid.NullUUID{
UUID: takeFirst(seed.SessionID.UUID, uuid.Nil),
Valid: takeFirst(seed.SessionID.Valid, false),
},
ClientHostname: sql.NullString{
String: takeFirst(seed.ClientHostname.String, ""),
Valid: takeFirst(seed.ClientHostname.Valid, false),
},
ShortDescription: sql.NullString{
String: takeFirst(seed.ShortDescription.String, ""),
Valid: takeFirst(seed.ShortDescription.Valid, false),
},
ConnectionStatus: takeFirst(seed.ConnectionStatus, database.ConnectionStatusConnected),
})
require.NoError(t, err, "insert connection log")
@@ -231,6 +231,22 @@ func (m queryMetricsStore) CleanTailnetTunnels(ctx context.Context) error {
return r0
}
func (m queryMetricsStore) CloseConnectionLogsAndCreateSessions(ctx context.Context, arg database.CloseConnectionLogsAndCreateSessionsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CloseConnectionLogsAndCreateSessions(ctx, arg)
m.queryLatencies.WithLabelValues("CloseConnectionLogsAndCreateSessions").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CloseConnectionLogsAndCreateSessions").Inc()
return r0, r1
}
func (m queryMetricsStore) CloseOpenAgentConnectionLogsForWorkspace(ctx context.Context, arg database.CloseOpenAgentConnectionLogsForWorkspaceParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CloseOpenAgentConnectionLogsForWorkspace(ctx, arg)
m.queryLatencies.WithLabelValues("CloseOpenAgentConnectionLogsForWorkspace").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CloseOpenAgentConnectionLogsForWorkspace").Inc()
return r0, r1
}
func (m queryMetricsStore) CountAIBridgeInterceptions(ctx context.Context, arg database.CountAIBridgeInterceptionsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountAIBridgeInterceptions(ctx, arg)
@@ -255,6 +271,14 @@ func (m queryMetricsStore) CountConnectionLogs(ctx context.Context, arg database
return r0, r1
}
func (m queryMetricsStore) CountGlobalWorkspaceSessions(ctx context.Context, arg database.CountGlobalWorkspaceSessionsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountGlobalWorkspaceSessions(ctx, arg)
m.queryLatencies.WithLabelValues("CountGlobalWorkspaceSessions").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CountGlobalWorkspaceSessions").Inc()
return r0, r1
}
func (m queryMetricsStore) CountInProgressPrebuilds(ctx context.Context) ([]database.CountInProgressPrebuildsRow, error) {
start := time.Now()
r0, r1 := m.s.CountInProgressPrebuilds(ctx)
@@ -279,6 +303,14 @@ func (m queryMetricsStore) CountUnreadInboxNotificationsByUserID(ctx context.Con
return r0, r1
}
func (m queryMetricsStore) CountWorkspaceSessions(ctx context.Context, arg database.CountWorkspaceSessionsParams) (int64, error) {
start := time.Now()
r0, r1 := m.s.CountWorkspaceSessions(ctx, arg)
m.queryLatencies.WithLabelValues("CountWorkspaceSessions").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "CountWorkspaceSessions").Inc()
return r0, r1
}
func (m queryMetricsStore) CreateUserSecret(ctx context.Context, arg database.CreateUserSecretParams) (database.UserSecret, error) {
start := time.Now()
r0, r1 := m.s.CreateUserSecret(ctx, arg)
@@ -718,6 +750,14 @@ func (m queryMetricsStore) FindMatchingPresetID(ctx context.Context, arg databas
return r0, r1
}
func (m queryMetricsStore) FindOrCreateSessionForDisconnect(ctx context.Context, arg database.FindOrCreateSessionForDisconnectParams) (interface{}, error) {
start := time.Now()
r0, r1 := m.s.FindOrCreateSessionForDisconnect(ctx, arg)
m.queryLatencies.WithLabelValues("FindOrCreateSessionForDisconnect").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "FindOrCreateSessionForDisconnect").Inc()
return r0, r1
}
func (m queryMetricsStore) GetAIBridgeInterceptionByID(ctx context.Context, id uuid.UUID) (database.AIBridgeInterception, error) {
start := time.Now()
r0, r1 := m.s.GetAIBridgeInterceptionByID(ctx, id)
@@ -830,6 +870,14 @@ func (m queryMetricsStore) GetAllTailnetCoordinators(ctx context.Context) ([]dat
return r0, r1
}
func (m queryMetricsStore) GetAllTailnetPeeringEventsByPeerID(ctx context.Context, srcPeerID uuid.NullUUID) ([]database.TailnetPeeringEvent, error) {
start := time.Now()
r0, r1 := m.s.GetAllTailnetPeeringEventsByPeerID(ctx, srcPeerID)
m.queryLatencies.WithLabelValues("GetAllTailnetPeeringEventsByPeerID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetAllTailnetPeeringEventsByPeerID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetAllTailnetPeers(ctx context.Context) ([]database.TailnetPeer, error) {
start := time.Now()
r0, r1 := m.s.GetAllTailnetPeers(ctx)
@@ -902,6 +950,22 @@ func (m queryMetricsStore) GetAuthorizationUserRoles(ctx context.Context, userID
return r0, r1
}
func (m queryMetricsStore) GetConnectionLogByConnectionID(ctx context.Context, arg database.GetConnectionLogByConnectionIDParams) (database.ConnectionLog, error) {
start := time.Now()
r0, r1 := m.s.GetConnectionLogByConnectionID(ctx, arg)
m.queryLatencies.WithLabelValues("GetConnectionLogByConnectionID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetConnectionLogByConnectionID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetConnectionLogsBySessionIDs(ctx context.Context, sessionIds []uuid.UUID) ([]database.ConnectionLog, error) {
start := time.Now()
r0, r1 := m.s.GetConnectionLogsBySessionIDs(ctx, sessionIds)
m.queryLatencies.WithLabelValues("GetConnectionLogsBySessionIDs").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetConnectionLogsBySessionIDs").Inc()
return r0, r1
}
func (m queryMetricsStore) GetConnectionLogsOffset(ctx context.Context, arg database.GetConnectionLogsOffsetParams) ([]database.GetConnectionLogsOffsetRow, error) {
start := time.Now()
r0, r1 := m.s.GetConnectionLogsOffset(ctx, arg)
@@ -1094,6 +1158,14 @@ func (m queryMetricsStore) GetGitSSHKey(ctx context.Context, userID uuid.UUID) (
return r0, r1
}
func (m queryMetricsStore) GetGlobalWorkspaceSessionsOffset(ctx context.Context, arg database.GetGlobalWorkspaceSessionsOffsetParams) ([]database.GetGlobalWorkspaceSessionsOffsetRow, error) {
start := time.Now()
r0, r1 := m.s.GetGlobalWorkspaceSessionsOffset(ctx, arg)
m.queryLatencies.WithLabelValues("GetGlobalWorkspaceSessionsOffset").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetGlobalWorkspaceSessionsOffset").Inc()
return r0, r1
}
func (m queryMetricsStore) GetGroupByID(ctx context.Context, id uuid.UUID) (database.Group, error) {
start := time.Now()
r0, r1 := m.s.GetGroupByID(ctx, id)
@@ -1390,6 +1462,14 @@ func (m queryMetricsStore) GetOAuthSigningKey(ctx context.Context) (string, erro
return r0, r1
}
func (m queryMetricsStore) GetOngoingAgentConnectionsLast24h(ctx context.Context, arg database.GetOngoingAgentConnectionsLast24hParams) ([]database.GetOngoingAgentConnectionsLast24hRow, error) {
start := time.Now()
r0, r1 := m.s.GetOngoingAgentConnectionsLast24h(ctx, arg)
m.queryLatencies.WithLabelValues("GetOngoingAgentConnectionsLast24h").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetOngoingAgentConnectionsLast24h").Inc()
return r0, r1
}
func (m queryMetricsStore) GetOrganizationByID(ctx context.Context, id uuid.UUID) (database.Organization, error) {
start := time.Now()
r0, r1 := m.s.GetOrganizationByID(ctx, id)
@@ -1734,6 +1814,14 @@ func (m queryMetricsStore) GetTailnetTunnelPeerBindings(ctx context.Context, src
return r0, r1
}
func (m queryMetricsStore) GetTailnetTunnelPeerBindingsByDstID(ctx context.Context, dstID uuid.UUID) ([]database.GetTailnetTunnelPeerBindingsByDstIDRow, error) {
start := time.Now()
r0, r1 := m.s.GetTailnetTunnelPeerBindingsByDstID(ctx, dstID)
m.queryLatencies.WithLabelValues("GetTailnetTunnelPeerBindingsByDstID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetTailnetTunnelPeerBindingsByDstID").Inc()
return r0, r1
}
func (m queryMetricsStore) GetTailnetTunnelPeerIDs(ctx context.Context, srcID uuid.UUID) ([]database.GetTailnetTunnelPeerIDsRow, error) {
start := time.Now()
r0, r1 := m.s.GetTailnetTunnelPeerIDs(ctx, srcID)
@@ -2590,6 +2678,14 @@ func (m queryMetricsStore) GetWorkspaceResourcesCreatedAfter(ctx context.Context
return r0, r1
}
func (m queryMetricsStore) GetWorkspaceSessionsOffset(ctx context.Context, arg database.GetWorkspaceSessionsOffsetParams) ([]database.GetWorkspaceSessionsOffsetRow, error) {
start := time.Now()
r0, r1 := m.s.GetWorkspaceSessionsOffset(ctx, arg)
m.queryLatencies.WithLabelValues("GetWorkspaceSessionsOffset").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "GetWorkspaceSessionsOffset").Inc()
return r0, r1
}
func (m queryMetricsStore) GetWorkspaceUniqueOwnerCountByTemplateIDs(ctx context.Context, templateIds []uuid.UUID) ([]database.GetWorkspaceUniqueOwnerCountByTemplateIDsRow, error) {
start := time.Now()
r0, r1 := m.s.GetWorkspaceUniqueOwnerCountByTemplateIDs(ctx, templateIds)
@@ -2918,6 +3014,14 @@ func (m queryMetricsStore) InsertReplica(ctx context.Context, arg database.Inser
return r0, r1
}
func (m queryMetricsStore) InsertTailnetPeeringEvent(ctx context.Context, arg database.InsertTailnetPeeringEventParams) error {
start := time.Now()
r0 := m.s.InsertTailnetPeeringEvent(ctx, arg)
m.queryLatencies.WithLabelValues("InsertTailnetPeeringEvent").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "InsertTailnetPeeringEvent").Inc()
return r0
}
func (m queryMetricsStore) InsertTask(ctx context.Context, arg database.InsertTaskParams) (database.TaskTable, error) {
start := time.Now()
r0, r1 := m.s.InsertTask(ctx, arg)
@@ -3390,6 +3494,14 @@ func (m queryMetricsStore) UpdateAPIKeyByID(ctx context.Context, arg database.Up
return r0
}
func (m queryMetricsStore) UpdateConnectionLogSessionID(ctx context.Context, arg database.UpdateConnectionLogSessionIDParams) error {
start := time.Now()
r0 := m.s.UpdateConnectionLogSessionID(ctx, arg)
m.queryLatencies.WithLabelValues("UpdateConnectionLogSessionID").Observe(time.Since(start).Seconds())
m.queryCounts.WithLabelValues(httpmw.ExtractHTTPRoute(ctx), httpmw.ExtractHTTPMethod(ctx), "UpdateConnectionLogSessionID").Inc()
return r0
}
func (m queryMetricsStore) UpdateCryptoKeyDeletesAt(ctx context.Context, arg database.UpdateCryptoKeyDeletesAtParams) (database.CryptoKey, error) {
start := time.Now()
r0, r1 := m.s.UpdateCryptoKeyDeletesAt(ctx, arg)
@@ -4285,7 +4397,7 @@ func (m queryMetricsStore) UpsertWorkspaceApp(ctx context.Context, arg database.
return r0, r1
}
func (m queryMetricsStore) UpsertWorkspaceAppAuditSession(ctx context.Context, arg database.UpsertWorkspaceAppAuditSessionParams) (bool, error) {
func (m queryMetricsStore) UpsertWorkspaceAppAuditSession(ctx context.Context, arg database.UpsertWorkspaceAppAuditSessionParams) (database.UpsertWorkspaceAppAuditSessionRow, error) {
start := time.Now()
r0, r1 := m.s.UpsertWorkspaceAppAuditSession(ctx, arg)
m.queryLatencies.WithLabelValues("UpsertWorkspaceAppAuditSession").Observe(time.Since(start).Seconds())
@@ -276,6 +276,36 @@ func (mr *MockStoreMockRecorder) CleanTailnetTunnels(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CleanTailnetTunnels", reflect.TypeOf((*MockStore)(nil).CleanTailnetTunnels), ctx)
}
// CloseConnectionLogsAndCreateSessions mocks base method.
func (m *MockStore) CloseConnectionLogsAndCreateSessions(ctx context.Context, arg database.CloseConnectionLogsAndCreateSessionsParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CloseConnectionLogsAndCreateSessions", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CloseConnectionLogsAndCreateSessions indicates an expected call of CloseConnectionLogsAndCreateSessions.
func (mr *MockStoreMockRecorder) CloseConnectionLogsAndCreateSessions(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CloseConnectionLogsAndCreateSessions", reflect.TypeOf((*MockStore)(nil).CloseConnectionLogsAndCreateSessions), ctx, arg)
}
// CloseOpenAgentConnectionLogsForWorkspace mocks base method.
func (m *MockStore) CloseOpenAgentConnectionLogsForWorkspace(ctx context.Context, arg database.CloseOpenAgentConnectionLogsForWorkspaceParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CloseOpenAgentConnectionLogsForWorkspace", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CloseOpenAgentConnectionLogsForWorkspace indicates an expected call of CloseOpenAgentConnectionLogsForWorkspace.
func (mr *MockStoreMockRecorder) CloseOpenAgentConnectionLogsForWorkspace(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CloseOpenAgentConnectionLogsForWorkspace", reflect.TypeOf((*MockStore)(nil).CloseOpenAgentConnectionLogsForWorkspace), ctx, arg)
}
// CountAIBridgeInterceptions mocks base method.
func (m *MockStore) CountAIBridgeInterceptions(ctx context.Context, arg database.CountAIBridgeInterceptionsParams) (int64, error) {
m.ctrl.T.Helper()
@@ -366,6 +396,21 @@ func (mr *MockStoreMockRecorder) CountConnectionLogs(ctx, arg any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountConnectionLogs", reflect.TypeOf((*MockStore)(nil).CountConnectionLogs), ctx, arg)
}
// CountGlobalWorkspaceSessions mocks base method.
func (m *MockStore) CountGlobalWorkspaceSessions(ctx context.Context, arg database.CountGlobalWorkspaceSessionsParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CountGlobalWorkspaceSessions", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CountGlobalWorkspaceSessions indicates an expected call of CountGlobalWorkspaceSessions.
func (mr *MockStoreMockRecorder) CountGlobalWorkspaceSessions(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountGlobalWorkspaceSessions", reflect.TypeOf((*MockStore)(nil).CountGlobalWorkspaceSessions), ctx, arg)
}
// CountInProgressPrebuilds mocks base method.
func (m *MockStore) CountInProgressPrebuilds(ctx context.Context) ([]database.CountInProgressPrebuildsRow, error) {
m.ctrl.T.Helper()
@@ -411,6 +456,21 @@ func (mr *MockStoreMockRecorder) CountUnreadInboxNotificationsByUserID(ctx, user
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountUnreadInboxNotificationsByUserID", reflect.TypeOf((*MockStore)(nil).CountUnreadInboxNotificationsByUserID), ctx, userID)
}
// CountWorkspaceSessions mocks base method.
func (m *MockStore) CountWorkspaceSessions(ctx context.Context, arg database.CountWorkspaceSessionsParams) (int64, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CountWorkspaceSessions", ctx, arg)
ret0, _ := ret[0].(int64)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// CountWorkspaceSessions indicates an expected call of CountWorkspaceSessions.
func (mr *MockStoreMockRecorder) CountWorkspaceSessions(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CountWorkspaceSessions", reflect.TypeOf((*MockStore)(nil).CountWorkspaceSessions), ctx, arg)
}
// CreateUserSecret mocks base method.
func (m *MockStore) CreateUserSecret(ctx context.Context, arg database.CreateUserSecretParams) (database.UserSecret, error) {
m.ctrl.T.Helper()
@@ -1199,6 +1259,21 @@ func (mr *MockStoreMockRecorder) FindMatchingPresetID(ctx, arg any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FindMatchingPresetID", reflect.TypeOf((*MockStore)(nil).FindMatchingPresetID), ctx, arg)
}
// FindOrCreateSessionForDisconnect mocks base method.
func (m *MockStore) FindOrCreateSessionForDisconnect(ctx context.Context, arg database.FindOrCreateSessionForDisconnectParams) (any, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "FindOrCreateSessionForDisconnect", ctx, arg)
ret0, _ := ret[0].(any)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// FindOrCreateSessionForDisconnect indicates an expected call of FindOrCreateSessionForDisconnect.
func (mr *MockStoreMockRecorder) FindOrCreateSessionForDisconnect(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "FindOrCreateSessionForDisconnect", reflect.TypeOf((*MockStore)(nil).FindOrCreateSessionForDisconnect), ctx, arg)
}
// GetAIBridgeInterceptionByID mocks base method.
func (m *MockStore) GetAIBridgeInterceptionByID(ctx context.Context, id uuid.UUID) (database.AIBridgeInterception, error) {
m.ctrl.T.Helper()
@@ -1409,6 +1484,21 @@ func (mr *MockStoreMockRecorder) GetAllTailnetCoordinators(ctx any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAllTailnetCoordinators", reflect.TypeOf((*MockStore)(nil).GetAllTailnetCoordinators), ctx)
}
// GetAllTailnetPeeringEventsByPeerID mocks base method.
func (m *MockStore) GetAllTailnetPeeringEventsByPeerID(ctx context.Context, srcPeerID uuid.NullUUID) ([]database.TailnetPeeringEvent, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetAllTailnetPeeringEventsByPeerID", ctx, srcPeerID)
ret0, _ := ret[0].([]database.TailnetPeeringEvent)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetAllTailnetPeeringEventsByPeerID indicates an expected call of GetAllTailnetPeeringEventsByPeerID.
func (mr *MockStoreMockRecorder) GetAllTailnetPeeringEventsByPeerID(ctx, srcPeerID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAllTailnetPeeringEventsByPeerID", reflect.TypeOf((*MockStore)(nil).GetAllTailnetPeeringEventsByPeerID), ctx, srcPeerID)
}
// GetAllTailnetPeers mocks base method.
func (m *MockStore) GetAllTailnetPeers(ctx context.Context) ([]database.TailnetPeer, error) {
m.ctrl.T.Helper()
@@ -1649,6 +1739,36 @@ func (mr *MockStoreMockRecorder) GetAuthorizedWorkspacesAndAgentsByOwnerID(ctx,
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAuthorizedWorkspacesAndAgentsByOwnerID", reflect.TypeOf((*MockStore)(nil).GetAuthorizedWorkspacesAndAgentsByOwnerID), ctx, ownerID, prepared)
}
// GetConnectionLogByConnectionID mocks base method.
func (m *MockStore) GetConnectionLogByConnectionID(ctx context.Context, arg database.GetConnectionLogByConnectionIDParams) (database.ConnectionLog, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetConnectionLogByConnectionID", ctx, arg)
ret0, _ := ret[0].(database.ConnectionLog)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetConnectionLogByConnectionID indicates an expected call of GetConnectionLogByConnectionID.
func (mr *MockStoreMockRecorder) GetConnectionLogByConnectionID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetConnectionLogByConnectionID", reflect.TypeOf((*MockStore)(nil).GetConnectionLogByConnectionID), ctx, arg)
}
// GetConnectionLogsBySessionIDs mocks base method.
func (m *MockStore) GetConnectionLogsBySessionIDs(ctx context.Context, sessionIds []uuid.UUID) ([]database.ConnectionLog, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetConnectionLogsBySessionIDs", ctx, sessionIds)
ret0, _ := ret[0].([]database.ConnectionLog)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetConnectionLogsBySessionIDs indicates an expected call of GetConnectionLogsBySessionIDs.
func (mr *MockStoreMockRecorder) GetConnectionLogsBySessionIDs(ctx, sessionIds any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetConnectionLogsBySessionIDs", reflect.TypeOf((*MockStore)(nil).GetConnectionLogsBySessionIDs), ctx, sessionIds)
}
// GetConnectionLogsOffset mocks base method.
func (m *MockStore) GetConnectionLogsOffset(ctx context.Context, arg database.GetConnectionLogsOffsetParams) ([]database.GetConnectionLogsOffsetRow, error) {
m.ctrl.T.Helper()
@@ -2009,6 +2129,21 @@ func (mr *MockStoreMockRecorder) GetGitSSHKey(ctx, userID any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetGitSSHKey", reflect.TypeOf((*MockStore)(nil).GetGitSSHKey), ctx, userID)
}
// GetGlobalWorkspaceSessionsOffset mocks base method.
func (m *MockStore) GetGlobalWorkspaceSessionsOffset(ctx context.Context, arg database.GetGlobalWorkspaceSessionsOffsetParams) ([]database.GetGlobalWorkspaceSessionsOffsetRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetGlobalWorkspaceSessionsOffset", ctx, arg)
ret0, _ := ret[0].([]database.GetGlobalWorkspaceSessionsOffsetRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetGlobalWorkspaceSessionsOffset indicates an expected call of GetGlobalWorkspaceSessionsOffset.
func (mr *MockStoreMockRecorder) GetGlobalWorkspaceSessionsOffset(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetGlobalWorkspaceSessionsOffset", reflect.TypeOf((*MockStore)(nil).GetGlobalWorkspaceSessionsOffset), ctx, arg)
}
// GetGroupByID mocks base method.
func (m *MockStore) GetGroupByID(ctx context.Context, id uuid.UUID) (database.Group, error) {
m.ctrl.T.Helper()
@@ -2564,6 +2699,21 @@ func (mr *MockStoreMockRecorder) GetOAuthSigningKey(ctx any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetOAuthSigningKey", reflect.TypeOf((*MockStore)(nil).GetOAuthSigningKey), ctx)
}
// GetOngoingAgentConnectionsLast24h mocks base method.
func (m *MockStore) GetOngoingAgentConnectionsLast24h(ctx context.Context, arg database.GetOngoingAgentConnectionsLast24hParams) ([]database.GetOngoingAgentConnectionsLast24hRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetOngoingAgentConnectionsLast24h", ctx, arg)
ret0, _ := ret[0].([]database.GetOngoingAgentConnectionsLast24hRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetOngoingAgentConnectionsLast24h indicates an expected call of GetOngoingAgentConnectionsLast24h.
func (mr *MockStoreMockRecorder) GetOngoingAgentConnectionsLast24h(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetOngoingAgentConnectionsLast24h", reflect.TypeOf((*MockStore)(nil).GetOngoingAgentConnectionsLast24h), ctx, arg)
}
// GetOrganizationByID mocks base method.
func (m *MockStore) GetOrganizationByID(ctx context.Context, id uuid.UUID) (database.Organization, error) {
m.ctrl.T.Helper()
@@ -3209,6 +3359,21 @@ func (mr *MockStoreMockRecorder) GetTailnetTunnelPeerBindings(ctx, srcID any) *g
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetTailnetTunnelPeerBindings", reflect.TypeOf((*MockStore)(nil).GetTailnetTunnelPeerBindings), ctx, srcID)
}
// GetTailnetTunnelPeerBindingsByDstID mocks base method.
func (m *MockStore) GetTailnetTunnelPeerBindingsByDstID(ctx context.Context, dstID uuid.UUID) ([]database.GetTailnetTunnelPeerBindingsByDstIDRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetTailnetTunnelPeerBindingsByDstID", ctx, dstID)
ret0, _ := ret[0].([]database.GetTailnetTunnelPeerBindingsByDstIDRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetTailnetTunnelPeerBindingsByDstID indicates an expected call of GetTailnetTunnelPeerBindingsByDstID.
func (mr *MockStoreMockRecorder) GetTailnetTunnelPeerBindingsByDstID(ctx, dstID any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetTailnetTunnelPeerBindingsByDstID", reflect.TypeOf((*MockStore)(nil).GetTailnetTunnelPeerBindingsByDstID), ctx, dstID)
}
// GetTailnetTunnelPeerIDs mocks base method.
func (m *MockStore) GetTailnetTunnelPeerIDs(ctx context.Context, srcID uuid.UUID) ([]database.GetTailnetTunnelPeerIDsRow, error) {
m.ctrl.T.Helper()
@@ -4844,6 +5009,21 @@ func (mr *MockStoreMockRecorder) GetWorkspaceResourcesCreatedAfter(ctx, createdA
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetWorkspaceResourcesCreatedAfter", reflect.TypeOf((*MockStore)(nil).GetWorkspaceResourcesCreatedAfter), ctx, createdAt)
}
// GetWorkspaceSessionsOffset mocks base method.
func (m *MockStore) GetWorkspaceSessionsOffset(ctx context.Context, arg database.GetWorkspaceSessionsOffsetParams) ([]database.GetWorkspaceSessionsOffsetRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetWorkspaceSessionsOffset", ctx, arg)
ret0, _ := ret[0].([]database.GetWorkspaceSessionsOffsetRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// GetWorkspaceSessionsOffset indicates an expected call of GetWorkspaceSessionsOffset.
func (mr *MockStoreMockRecorder) GetWorkspaceSessionsOffset(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetWorkspaceSessionsOffset", reflect.TypeOf((*MockStore)(nil).GetWorkspaceSessionsOffset), ctx, arg)
}
// GetWorkspaceUniqueOwnerCountByTemplateIDs mocks base method.
func (m *MockStore) GetWorkspaceUniqueOwnerCountByTemplateIDs(ctx context.Context, templateIds []uuid.UUID) ([]database.GetWorkspaceUniqueOwnerCountByTemplateIDsRow, error) {
m.ctrl.T.Helper()
@@ -5469,6 +5649,20 @@ func (mr *MockStoreMockRecorder) InsertReplica(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertReplica", reflect.TypeOf((*MockStore)(nil).InsertReplica), ctx, arg)
}
// InsertTailnetPeeringEvent mocks base method.
func (m *MockStore) InsertTailnetPeeringEvent(ctx context.Context, arg database.InsertTailnetPeeringEventParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "InsertTailnetPeeringEvent", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// InsertTailnetPeeringEvent indicates an expected call of InsertTailnetPeeringEvent.
func (mr *MockStoreMockRecorder) InsertTailnetPeeringEvent(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "InsertTailnetPeeringEvent", reflect.TypeOf((*MockStore)(nil).InsertTailnetPeeringEvent), ctx, arg)
}
// InsertTask mocks base method.
func (m *MockStore) InsertTask(ctx context.Context, arg database.InsertTaskParams) (database.TaskTable, error) {
m.ctrl.T.Helper()
@@ -6380,6 +6574,20 @@ func (mr *MockStoreMockRecorder) UpdateAPIKeyByID(ctx, arg any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateAPIKeyByID", reflect.TypeOf((*MockStore)(nil).UpdateAPIKeyByID), ctx, arg)
}
// UpdateConnectionLogSessionID mocks base method.
func (m *MockStore) UpdateConnectionLogSessionID(ctx context.Context, arg database.UpdateConnectionLogSessionIDParams) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpdateConnectionLogSessionID", ctx, arg)
ret0, _ := ret[0].(error)
return ret0
}
// UpdateConnectionLogSessionID indicates an expected call of UpdateConnectionLogSessionID.
func (mr *MockStoreMockRecorder) UpdateConnectionLogSessionID(ctx, arg any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "UpdateConnectionLogSessionID", reflect.TypeOf((*MockStore)(nil).UpdateConnectionLogSessionID), ctx, arg)
}
// UpdateCryptoKeyDeletesAt mocks base method.
func (m *MockStore) UpdateCryptoKeyDeletesAt(ctx context.Context, arg database.UpdateCryptoKeyDeletesAtParams) (database.CryptoKey, error) {
m.ctrl.T.Helper()
@@ -7993,10 +8201,10 @@ func (mr *MockStoreMockRecorder) UpsertWorkspaceApp(ctx, arg any) *gomock.Call {
}
// UpsertWorkspaceAppAuditSession mocks base method.
-func (m *MockStore) UpsertWorkspaceAppAuditSession(ctx context.Context, arg database.UpsertWorkspaceAppAuditSessionParams) (bool, error) {
+func (m *MockStore) UpsertWorkspaceAppAuditSession(ctx context.Context, arg database.UpsertWorkspaceAppAuditSessionParams) (database.UpsertWorkspaceAppAuditSessionRow, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "UpsertWorkspaceAppAuditSession", ctx, arg)
-ret0, _ := ret[0].(bool)
+ret0, _ := ret[0].(database.UpsertWorkspaceAppAuditSessionRow)
ret1, _ := ret[1].(error)
return ret0, ret1
}
@@ -271,7 +271,8 @@ CREATE TYPE connection_type AS ENUM (
'jetbrains',
'reconnecting_pty',
'workspace_app',
-'port_forwarding'
+'port_forwarding',
+'system'
);
CREATE TYPE cors_behavior AS ENUM (
@@ -1015,6 +1016,11 @@ BEGIN
END;
$$;
CREATE TABLE agent_peering_ids (
agent_id uuid NOT NULL,
peering_id bytea NOT NULL
);
CREATE TABLE aibridge_interceptions (
id uuid NOT NULL,
initiator_id uuid NOT NULL,
@@ -1159,7 +1165,12 @@ CREATE TABLE connection_logs (
slug_or_port text,
connection_id uuid,
disconnect_time timestamp with time zone,
-disconnect_reason text
+disconnect_reason text,
+agent_id uuid,
+updated_at timestamp with time zone DEFAULT now() NOT NULL,
+session_id uuid,
+client_hostname text,
+short_description text
);
COMMENT ON COLUMN connection_logs.code IS 'Either the HTTP status code of the web request, or the exit code of an SSH connection. For non-web connections, this is Null until we receive a disconnect event for the same connection_id.';
@@ -1176,6 +1187,8 @@ COMMENT ON COLUMN connection_logs.disconnect_time IS 'The time the connection wa
COMMENT ON COLUMN connection_logs.disconnect_reason IS 'The reason the connection was closed. Null for web connections. For other connections, this is null until we receive a disconnect event for the same connection_id.';
COMMENT ON COLUMN connection_logs.updated_at IS 'Last time this connection log was confirmed active. For agent connections, equals connect_time. For web connections, bumped while the session is active.';
CREATE TABLE crypto_keys (
feature crypto_key_feature NOT NULL,
sequence integer NOT NULL,
@@ -1771,6 +1784,15 @@ CREATE UNLOGGED TABLE tailnet_coordinators (
COMMENT ON TABLE tailnet_coordinators IS 'We keep this separate from replicas in case we need to break the coordinator out into its own service';
CREATE TABLE tailnet_peering_events (
peering_id bytea NOT NULL,
event_type text NOT NULL,
src_peer_id uuid,
dst_peer_id uuid,
node bytea,
occurred_at timestamp with time zone NOT NULL
);
CREATE UNLOGGED TABLE tailnet_peers (
id uuid NOT NULL,
coordinator_id uuid NOT NULL,
@@ -2604,7 +2626,8 @@ CREATE UNLOGGED TABLE workspace_app_audit_sessions (
status_code integer NOT NULL,
started_at timestamp with time zone NOT NULL,
updated_at timestamp with time zone NOT NULL,
-id uuid NOT NULL
+id uuid NOT NULL,
+connection_id uuid
);
);
COMMENT ON TABLE workspace_app_audit_sessions IS 'Audit sessions for workspace apps, the data in this table is ephemeral and is used to deduplicate audit log entries for workspace apps. While a session is active, the same data will not be logged again. This table does not store historical data.';
@@ -2898,6 +2921,18 @@ CREATE SEQUENCE workspace_resource_metadata_id_seq
ALTER SEQUENCE workspace_resource_metadata_id_seq OWNED BY workspace_resource_metadata.id;
CREATE TABLE workspace_sessions (
id uuid DEFAULT gen_random_uuid() NOT NULL,
workspace_id uuid NOT NULL,
agent_id uuid,
ip inet,
client_hostname text,
short_description text,
started_at timestamp with time zone NOT NULL,
ended_at timestamp with time zone NOT NULL,
created_at timestamp with time zone DEFAULT now() NOT NULL
);
CREATE VIEW workspaces_expanded AS
SELECT workspaces.id,
workspaces.created_at,
@@ -2955,6 +2990,9 @@ ALTER TABLE ONLY workspace_proxies ALTER COLUMN region_id SET DEFAULT nextval('w
ALTER TABLE ONLY workspace_resource_metadata ALTER COLUMN id SET DEFAULT nextval('workspace_resource_metadata_id_seq'::regclass);
ALTER TABLE ONLY agent_peering_ids
ADD CONSTRAINT agent_peering_ids_pkey PRIMARY KEY (agent_id, peering_id);
ALTER TABLE ONLY workspace_agent_stats
ADD CONSTRAINT agent_stats_pkey PRIMARY KEY (id);
@@ -3261,6 +3299,9 @@ ALTER TABLE ONLY workspace_resource_metadata
ALTER TABLE ONLY workspace_resources
ADD CONSTRAINT workspace_resources_pkey PRIMARY KEY (id);
ALTER TABLE ONLY workspace_sessions
ADD CONSTRAINT workspace_sessions_pkey PRIMARY KEY (id);
ALTER TABLE ONLY workspaces
ADD CONSTRAINT workspaces_pkey PRIMARY KEY (id);
@@ -3312,6 +3353,8 @@ COMMENT ON INDEX idx_connection_logs_connection_id_workspace_id_agent_name IS 'C
CREATE INDEX idx_connection_logs_organization_id ON connection_logs USING btree (organization_id);
CREATE INDEX idx_connection_logs_session ON connection_logs USING btree (session_id) WHERE (session_id IS NOT NULL);
CREATE INDEX idx_connection_logs_workspace_id ON connection_logs USING btree (workspace_id);
CREATE INDEX idx_connection_logs_workspace_owner_id ON connection_logs USING btree (workspace_owner_id);
@@ -3366,6 +3409,12 @@ CREATE INDEX idx_workspace_app_statuses_workspace_id_created_at ON workspace_app
CREATE INDEX idx_workspace_builds_initiator_id ON workspace_builds USING btree (initiator_id);
CREATE INDEX idx_workspace_sessions_hostname_lookup ON workspace_sessions USING btree (workspace_id, client_hostname, started_at) WHERE (client_hostname IS NOT NULL);
CREATE INDEX idx_workspace_sessions_ip_lookup ON workspace_sessions USING btree (workspace_id, ip, started_at) WHERE ((ip IS NOT NULL) AND (client_hostname IS NULL));
CREATE INDEX idx_workspace_sessions_workspace ON workspace_sessions USING btree (workspace_id, started_at DESC);
CREATE UNIQUE INDEX notification_messages_dedupe_hash_idx ON notification_messages USING btree (dedupe_hash);
CREATE UNIQUE INDEX organizations_single_default_org ON organizations USING btree (is_default) WHERE (is_default = true);
@@ -3553,6 +3602,9 @@ ALTER TABLE ONLY api_keys
ALTER TABLE ONLY connection_logs
ADD CONSTRAINT connection_logs_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ALTER TABLE ONLY connection_logs
ADD CONSTRAINT connection_logs_session_id_fkey FOREIGN KEY (session_id) REFERENCES workspace_sessions(id) ON DELETE SET NULL;
ALTER TABLE ONLY connection_logs
ADD CONSTRAINT connection_logs_workspace_id_fkey FOREIGN KEY (workspace_id) REFERENCES workspaces(id) ON DELETE CASCADE;
@@ -3826,6 +3878,12 @@ ALTER TABLE ONLY workspace_resource_metadata
ALTER TABLE ONLY workspace_resources
ADD CONSTRAINT workspace_resources_job_id_fkey FOREIGN KEY (job_id) REFERENCES provisioner_jobs(id) ON DELETE CASCADE;
ALTER TABLE ONLY workspace_sessions
ADD CONSTRAINT workspace_sessions_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE SET NULL;
ALTER TABLE ONLY workspace_sessions
ADD CONSTRAINT workspace_sessions_workspace_id_fkey FOREIGN KEY (workspace_id) REFERENCES workspaces(id) ON DELETE CASCADE;
ALTER TABLE ONLY workspaces
ADD CONSTRAINT workspaces_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE RESTRICT;
@@ -9,6 +9,7 @@ const (
ForeignKeyAibridgeInterceptionsInitiatorID ForeignKeyConstraint = "aibridge_interceptions_initiator_id_fkey" // ALTER TABLE ONLY aibridge_interceptions ADD CONSTRAINT aibridge_interceptions_initiator_id_fkey FOREIGN KEY (initiator_id) REFERENCES users(id);
ForeignKeyAPIKeysUserIDUUID ForeignKeyConstraint = "api_keys_user_id_uuid_fkey" // ALTER TABLE ONLY api_keys ADD CONSTRAINT api_keys_user_id_uuid_fkey FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyConnectionLogsOrganizationID ForeignKeyConstraint = "connection_logs_organization_id_fkey" // ALTER TABLE ONLY connection_logs ADD CONSTRAINT connection_logs_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE CASCADE;
ForeignKeyConnectionLogsSessionID ForeignKeyConstraint = "connection_logs_session_id_fkey" // ALTER TABLE ONLY connection_logs ADD CONSTRAINT connection_logs_session_id_fkey FOREIGN KEY (session_id) REFERENCES workspace_sessions(id) ON DELETE SET NULL;
ForeignKeyConnectionLogsWorkspaceID ForeignKeyConstraint = "connection_logs_workspace_id_fkey" // ALTER TABLE ONLY connection_logs ADD CONSTRAINT connection_logs_workspace_id_fkey FOREIGN KEY (workspace_id) REFERENCES workspaces(id) ON DELETE CASCADE;
ForeignKeyConnectionLogsWorkspaceOwnerID ForeignKeyConstraint = "connection_logs_workspace_owner_id_fkey" // ALTER TABLE ONLY connection_logs ADD CONSTRAINT connection_logs_workspace_owner_id_fkey FOREIGN KEY (workspace_owner_id) REFERENCES users(id) ON DELETE CASCADE;
ForeignKeyCryptoKeysSecretKeyID ForeignKeyConstraint = "crypto_keys_secret_key_id_fkey" // ALTER TABLE ONLY crypto_keys ADD CONSTRAINT crypto_keys_secret_key_id_fkey FOREIGN KEY (secret_key_id) REFERENCES dbcrypt_keys(active_key_digest);
@@ -100,6 +101,8 @@ const (
ForeignKeyWorkspaceModulesJobID ForeignKeyConstraint = "workspace_modules_job_id_fkey" // ALTER TABLE ONLY workspace_modules ADD CONSTRAINT workspace_modules_job_id_fkey FOREIGN KEY (job_id) REFERENCES provisioner_jobs(id) ON DELETE CASCADE;
ForeignKeyWorkspaceResourceMetadataWorkspaceResourceID ForeignKeyConstraint = "workspace_resource_metadata_workspace_resource_id_fkey" // ALTER TABLE ONLY workspace_resource_metadata ADD CONSTRAINT workspace_resource_metadata_workspace_resource_id_fkey FOREIGN KEY (workspace_resource_id) REFERENCES workspace_resources(id) ON DELETE CASCADE;
ForeignKeyWorkspaceResourcesJobID ForeignKeyConstraint = "workspace_resources_job_id_fkey" // ALTER TABLE ONLY workspace_resources ADD CONSTRAINT workspace_resources_job_id_fkey FOREIGN KEY (job_id) REFERENCES provisioner_jobs(id) ON DELETE CASCADE;
ForeignKeyWorkspaceSessionsAgentID ForeignKeyConstraint = "workspace_sessions_agent_id_fkey" // ALTER TABLE ONLY workspace_sessions ADD CONSTRAINT workspace_sessions_agent_id_fkey FOREIGN KEY (agent_id) REFERENCES workspace_agents(id) ON DELETE SET NULL;
ForeignKeyWorkspaceSessionsWorkspaceID ForeignKeyConstraint = "workspace_sessions_workspace_id_fkey" // ALTER TABLE ONLY workspace_sessions ADD CONSTRAINT workspace_sessions_workspace_id_fkey FOREIGN KEY (workspace_id) REFERENCES workspaces(id) ON DELETE CASCADE;
ForeignKeyWorkspacesOrganizationID ForeignKeyConstraint = "workspaces_organization_id_fkey" // ALTER TABLE ONLY workspaces ADD CONSTRAINT workspaces_organization_id_fkey FOREIGN KEY (organization_id) REFERENCES organizations(id) ON DELETE RESTRICT;
ForeignKeyWorkspacesOwnerID ForeignKeyConstraint = "workspaces_owner_id_fkey" // ALTER TABLE ONLY workspaces ADD CONSTRAINT workspaces_owner_id_fkey FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE RESTRICT;
ForeignKeyWorkspacesTemplateID ForeignKeyConstraint = "workspaces_template_id_fkey" // ALTER TABLE ONLY workspaces ADD CONSTRAINT workspaces_template_id_fkey FOREIGN KEY (template_id) REFERENCES templates(id) ON DELETE RESTRICT;
@@ -0,0 +1,2 @@
ALTER TABLE connection_logs
DROP COLUMN agent_id;
@@ -0,0 +1,2 @@
ALTER TABLE connection_logs
ADD COLUMN agent_id uuid;
@@ -0,0 +1,2 @@
ALTER TABLE workspace_app_audit_sessions DROP COLUMN IF EXISTS connection_id;
ALTER TABLE connection_logs DROP COLUMN IF EXISTS updated_at;
@@ -0,0 +1,14 @@
ALTER TABLE workspace_app_audit_sessions
ADD COLUMN connection_id uuid;
ALTER TABLE connection_logs
ADD COLUMN updated_at timestamp with time zone;
UPDATE connection_logs SET updated_at = connect_time WHERE updated_at IS NULL;
ALTER TABLE connection_logs
ALTER COLUMN updated_at SET NOT NULL,
ALTER COLUMN updated_at SET DEFAULT now();
COMMENT ON COLUMN connection_logs.updated_at IS
'Last time this connection log was confirmed active. For agent connections, equals connect_time. For web connections, bumped while the session is active.';
@@ -0,0 +1,2 @@
DROP TABLE IF EXISTS tailnet_peering_events;
DROP TABLE IF EXISTS agent_peering_ids;
@@ -0,0 +1,14 @@
CREATE TABLE agent_peering_ids (
agent_id uuid NOT NULL,
peering_id bytea NOT NULL,
PRIMARY KEY (agent_id, peering_id)
);
CREATE TABLE tailnet_peering_events (
peering_id bytea NOT NULL,
event_type text NOT NULL,
src_peer_id uuid,
dst_peer_id uuid,
node bytea,
occurred_at timestamp with time zone NOT NULL
);
@@ -0,0 +1,8 @@
DROP INDEX IF EXISTS idx_connection_logs_session;
ALTER TABLE connection_logs
DROP COLUMN IF EXISTS short_description,
DROP COLUMN IF EXISTS client_hostname,
DROP COLUMN IF EXISTS session_id;
DROP TABLE IF EXISTS workspace_sessions;
@@ -0,0 +1,21 @@
CREATE TABLE workspace_sessions (
id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
workspace_id uuid NOT NULL REFERENCES workspaces(id) ON DELETE CASCADE,
agent_id uuid REFERENCES workspace_agents(id) ON DELETE SET NULL,
ip inet NOT NULL,
client_hostname text,
short_description text,
started_at timestamp with time zone NOT NULL,
ended_at timestamp with time zone NOT NULL,
created_at timestamp with time zone NOT NULL DEFAULT now()
);
CREATE INDEX idx_workspace_sessions_workspace ON workspace_sessions (workspace_id, started_at DESC);
CREATE INDEX idx_workspace_sessions_lookup ON workspace_sessions (workspace_id, ip, started_at);
ALTER TABLE connection_logs
ADD COLUMN session_id uuid REFERENCES workspace_sessions(id) ON DELETE SET NULL,
ADD COLUMN client_hostname text,
ADD COLUMN short_description text;
CREATE INDEX idx_connection_logs_session ON connection_logs (session_id) WHERE session_id IS NOT NULL;
@@ -0,0 +1 @@
-- No-op: PostgreSQL does not support removing enum values.
@@ -0,0 +1 @@
ALTER TYPE connection_type ADD VALUE IF NOT EXISTS 'system';
@@ -0,0 +1,6 @@
UPDATE workspace_sessions SET ip = '0.0.0.0'::inet WHERE ip IS NULL;
ALTER TABLE workspace_sessions ALTER COLUMN ip SET NOT NULL;
DROP INDEX IF EXISTS idx_workspace_sessions_hostname_lookup;
DROP INDEX IF EXISTS idx_workspace_sessions_ip_lookup;
CREATE INDEX idx_workspace_sessions_lookup ON workspace_sessions (workspace_id, ip, started_at);
@@ -0,0 +1,13 @@
-- Make workspace_sessions.ip nullable since sessions now group by
-- hostname (with IP fallback), and a session may span multiple IPs.
ALTER TABLE workspace_sessions ALTER COLUMN ip DROP NOT NULL;
-- Replace the IP-based lookup index with hostname-based indexes
-- to support the new grouping logic.
DROP INDEX IF EXISTS idx_workspace_sessions_lookup;
CREATE INDEX idx_workspace_sessions_hostname_lookup
ON workspace_sessions (workspace_id, client_hostname, started_at)
WHERE client_hostname IS NOT NULL;
CREATE INDEX idx_workspace_sessions_ip_lookup
ON workspace_sessions (workspace_id, ip, started_at)
WHERE ip IS NOT NULL AND client_hostname IS NULL;
@@ -0,0 +1,17 @@
INSERT INTO agent_peering_ids
(agent_id, peering_id)
VALUES (
'c0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11',
'\xdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef'
);
INSERT INTO tailnet_peering_events
(peering_id, event_type, src_peer_id, dst_peer_id, node, occurred_at)
VALUES (
'\xdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef',
'added_tunnel',
'c0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11',
'b0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11',
'a fake protobuf byte string',
'2025-01-15 10:23:54+00'
);
@@ -0,0 +1,11 @@
INSERT INTO workspace_sessions
(id, workspace_id, agent_id, ip, started_at, ended_at, created_at)
VALUES (
'a1b2c3d4-e5f6-7890-abcd-ef1234567890',
'3a9a1feb-e89d-457c-9d53-ac751b198ebe',
'5f8e48e4-1304-45bd-b91a-ab12c8bfc20f',
'127.0.0.1',
'2025-01-01 10:00:00+00',
'2025-01-01 11:00:00+00',
'2025-01-01 11:00:00+00'
);
@@ -689,6 +689,11 @@ func (q *sqlQuerier) GetAuthorizedConnectionLogsOffset(ctx context.Context, arg
&i.ConnectionLog.ConnectionID,
&i.ConnectionLog.DisconnectTime,
&i.ConnectionLog.DisconnectReason,
&i.ConnectionLog.AgentID,
&i.ConnectionLog.UpdatedAt,
&i.ConnectionLog.SessionID,
&i.ConnectionLog.ClientHostname,
&i.ConnectionLog.ShortDescription,
&i.UserUsername,
&i.UserName,
&i.UserEmail,
@@ -1101,6 +1101,7 @@ const (
ConnectionTypeReconnectingPty ConnectionType = "reconnecting_pty"
ConnectionTypeWorkspaceApp ConnectionType = "workspace_app"
ConnectionTypePortForwarding ConnectionType = "port_forwarding"
ConnectionTypeSystem ConnectionType = "system"
)
func (e *ConnectionType) Scan(src interface{}) error {
@@ -1145,7 +1146,8 @@ func (e ConnectionType) Valid() bool {
ConnectionTypeJetbrains,
ConnectionTypeReconnectingPty,
ConnectionTypeWorkspaceApp,
-ConnectionTypePortForwarding:
+ConnectionTypePortForwarding,
+ConnectionTypeSystem:
ConnectionTypeSystem:
return true
}
return false
@@ -1159,6 +1161,7 @@ func AllConnectionTypeValues() []ConnectionType {
ConnectionTypeReconnectingPty,
ConnectionTypeWorkspaceApp,
ConnectionTypePortForwarding,
ConnectionTypeSystem,
}
}
@@ -3702,6 +3705,11 @@ type APIKey struct {
AllowList AllowList `db:"allow_list" json:"allow_list"`
}
type AgentPeeringID struct {
AgentID uuid.UUID `db:"agent_id" json:"agent_id"`
PeeringID []byte `db:"peering_id" json:"peering_id"`
}
type AuditLog struct {
ID uuid.UUID `db:"id" json:"id"`
Time time.Time `db:"time" json:"time"`
@@ -3762,6 +3770,12 @@ type ConnectionLog struct {
DisconnectTime sql.NullTime `db:"disconnect_time" json:"disconnect_time"`
// The reason the connection was closed. Null for web connections. For other connections, this is null until we receive a disconnect event for the same connection_id.
DisconnectReason sql.NullString `db:"disconnect_reason" json:"disconnect_reason"`
AgentID uuid.NullUUID `db:"agent_id" json:"agent_id"`
// Last time this connection log was confirmed active. For agent connections, equals connect_time. For web connections, bumped while the session is active.
UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
SessionID uuid.NullUUID `db:"session_id" json:"session_id"`
ClientHostname sql.NullString `db:"client_hostname" json:"client_hostname"`
ShortDescription sql.NullString `db:"short_description" json:"short_description"`
}
type CryptoKey struct {
@@ -4228,6 +4242,15 @@ type TailnetPeer struct {
Status TailnetStatus `db:"status" json:"status"`
}
type TailnetPeeringEvent struct {
PeeringID []byte `db:"peering_id" json:"peering_id"`
EventType string `db:"event_type" json:"event_type"`
SrcPeerID uuid.NullUUID `db:"src_peer_id" json:"src_peer_id"`
DstPeerID uuid.NullUUID `db:"dst_peer_id" json:"dst_peer_id"`
Node []byte `db:"node" json:"node"`
OccurredAt time.Time `db:"occurred_at" json:"occurred_at"`
}
type TailnetTunnel struct {
CoordinatorID uuid.UUID `db:"coordinator_id" json:"coordinator_id"`
SrcID uuid.UUID `db:"src_id" json:"src_id"`
@@ -4933,8 +4956,9 @@ type WorkspaceAppAuditSession struct {
// The time the user started the session.
StartedAt time.Time `db:"started_at" json:"started_at"`
// The time the session was last updated.
-UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
-ID uuid.UUID `db:"id" json:"id"`
+UpdatedAt time.Time `db:"updated_at" json:"updated_at"`
+ID uuid.UUID `db:"id" json:"id"`
+ConnectionID uuid.NullUUID `db:"connection_id" json:"connection_id"`
}
// A record of workspace app usage statistics
@@ -5109,6 +5133,18 @@ type WorkspaceResourceMetadatum struct {
ID int64 `db:"id" json:"id"`
}
type WorkspaceSession struct {
ID uuid.UUID `db:"id" json:"id"`
WorkspaceID uuid.UUID `db:"workspace_id" json:"workspace_id"`
AgentID uuid.NullUUID `db:"agent_id" json:"agent_id"`
Ip pqtype.Inet `db:"ip" json:"ip"`
ClientHostname sql.NullString `db:"client_hostname" json:"client_hostname"`
ShortDescription sql.NullString `db:"short_description" json:"short_description"`
StartedAt time.Time `db:"started_at" json:"started_at"`
EndedAt time.Time `db:"ended_at" json:"ended_at"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
}
type WorkspaceTable struct {
ID uuid.UUID `db:"id" json:"id"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
@@ -68,15 +68,43 @@ type sqlcQuerier interface {
CleanTailnetCoordinators(ctx context.Context) error
CleanTailnetLostPeers(ctx context.Context) error
CleanTailnetTunnels(ctx context.Context) error
// Atomically closes open connections and creates sessions grouped by
// client_hostname (with IP fallback) and time overlap. Non-system
// connections drive session boundaries; system connections attach to
// the first overlapping session or get their own if orphaned.
//
// Processes connections that are still open (disconnect_time IS NULL) OR
// already disconnected but not yet assigned to a session (session_id IS
// NULL). The latter covers system/tunnel connections whose disconnect is
// recorded by dbsink but which have no session-assignment code path.
// Phase 1: Group non-system connections by hostname+time overlap.
// System connections persist for the entire workspace lifetime and
// would create mega-sessions if included in boundary computation.
// Check for pre-existing sessions that match by hostname (or IP
// fallback) and overlap in time, to avoid duplicates from the race
// with FindOrCreateSessionForDisconnect.
// Combine existing and newly created sessions.
// Phase 2: Assign system connections to the earliest overlapping
// primary session. First check sessions from this batch, then fall
// back to pre-existing workspace_sessions.
// Also match system connections to pre-existing sessions (created
// by FindOrCreateSessionForDisconnect) that aren't in this batch.
// Create sessions for orphaned system connections (no overlapping
// primary session) that have an IP.
// Combine all session sources for the final UPDATE.
CloseConnectionLogsAndCreateSessions(ctx context.Context, arg CloseConnectionLogsAndCreateSessionsParams) (int64, error)
CloseOpenAgentConnectionLogsForWorkspace(ctx context.Context, arg CloseOpenAgentConnectionLogsForWorkspaceParams) (int64, error)
CountAIBridgeInterceptions(ctx context.Context, arg CountAIBridgeInterceptionsParams) (int64, error)
CountAuditLogs(ctx context.Context, arg CountAuditLogsParams) (int64, error)
CountConnectionLogs(ctx context.Context, arg CountConnectionLogsParams) (int64, error)
CountGlobalWorkspaceSessions(ctx context.Context, arg CountGlobalWorkspaceSessionsParams) (int64, error)
// CountInProgressPrebuilds returns the number of in-progress prebuilds, grouped by preset ID and transition.
// Prebuild considered in-progress if it's in the "pending", "starting", "stopping", or "deleting" state.
CountInProgressPrebuilds(ctx context.Context) ([]CountInProgressPrebuildsRow, error)
// CountPendingNonActivePrebuilds returns the number of pending prebuilds for non-active template versions
CountPendingNonActivePrebuilds(ctx context.Context) ([]CountPendingNonActivePrebuildsRow, error)
CountUnreadInboxNotificationsByUserID(ctx context.Context, userID uuid.UUID) (int64, error)
CountWorkspaceSessions(ctx context.Context, arg CountWorkspaceSessionsParams) (int64, error)
CreateUserSecret(ctx context.Context, arg CreateUserSecretParams) (UserSecret, error)
CustomRoles(ctx context.Context, arg CustomRolesParams) ([]CustomRole, error)
DeleteAPIKeyByID(ctx context.Context, id string) error
@@ -161,6 +189,11 @@ type sqlcQuerier interface {
// The query finds presets where all preset parameters are present in the provided parameters,
// and returns the preset with the most parameters (largest subset).
FindMatchingPresetID(ctx context.Context, arg FindMatchingPresetIDParams) (uuid.UUID, error)
// Find existing session within time window, or create new one.
// Uses advisory lock to prevent duplicate sessions from concurrent disconnects.
// Groups by client_hostname (with IP fallback) to match the live session
// grouping in mergeWorkspaceConnectionsIntoSessions.
FindOrCreateSessionForDisconnect(ctx context.Context, arg FindOrCreateSessionForDisconnectParams) (interface{}, error)
GetAIBridgeInterceptionByID(ctx context.Context, id uuid.UUID) (AIBridgeInterception, error)
GetAIBridgeInterceptions(ctx context.Context) ([]AIBridgeInterception, error)
GetAIBridgeTokenUsagesByInterceptionID(ctx context.Context, interceptionID uuid.UUID) ([]AIBridgeTokenUsage, error)
@@ -177,6 +210,7 @@ type sqlcQuerier interface {
GetActiveWorkspaceBuildsByTemplateID(ctx context.Context, templateID uuid.UUID) ([]WorkspaceBuild, error)
// For PG Coordinator HTMLDebug
GetAllTailnetCoordinators(ctx context.Context) ([]TailnetCoordinator, error)
GetAllTailnetPeeringEventsByPeerID(ctx context.Context, srcPeerID uuid.NullUUID) ([]TailnetPeeringEvent, error)
GetAllTailnetPeers(ctx context.Context) ([]TailnetPeer, error)
GetAllTailnetTunnels(ctx context.Context) ([]TailnetTunnel, error)
// Atomic read+delete prevents replicas that flush between a separate read and
@@ -199,6 +233,8 @@ type sqlcQuerier interface {
// This function returns roles for authorization purposes. Implied member roles
// are included.
GetAuthorizationUserRoles(ctx context.Context, userID uuid.UUID) (GetAuthorizationUserRolesRow, error)
GetConnectionLogByConnectionID(ctx context.Context, arg GetConnectionLogByConnectionIDParams) (ConnectionLog, error)
GetConnectionLogsBySessionIDs(ctx context.Context, sessionIds []uuid.UUID) ([]ConnectionLog, error)
GetConnectionLogsOffset(ctx context.Context, arg GetConnectionLogsOffsetParams) ([]GetConnectionLogsOffsetRow, error)
GetCoordinatorResumeTokenSigningKey(ctx context.Context) (string, error)
GetCryptoKeyByFeatureAndSequence(ctx context.Context, arg GetCryptoKeyByFeatureAndSequenceParams) (CryptoKey, error)
@@ -231,6 +267,7 @@ type sqlcQuerier interface {
// param limit_opt: The limit of notifications to fetch. If the limit is not specified, it defaults to 25
GetFilteredInboxNotificationsByUserID(ctx context.Context, arg GetFilteredInboxNotificationsByUserIDParams) ([]InboxNotification, error)
GetGitSSHKey(ctx context.Context, userID uuid.UUID) (GitSSHKey, error)
GetGlobalWorkspaceSessionsOffset(ctx context.Context, arg GetGlobalWorkspaceSessionsOffsetParams) ([]GetGlobalWorkspaceSessionsOffsetRow, error)
GetGroupByID(ctx context.Context, id uuid.UUID) (Group, error)
GetGroupByOrgAndName(ctx context.Context, arg GetGroupByOrgAndNameParams) (Group, error)
GetGroupMembers(ctx context.Context, includeSystem bool) ([]GroupMember, error)
@@ -278,6 +315,7 @@ type sqlcQuerier interface {
GetOAuth2ProviderApps(ctx context.Context) ([]OAuth2ProviderApp, error)
GetOAuth2ProviderAppsByUserID(ctx context.Context, userID uuid.UUID) ([]GetOAuth2ProviderAppsByUserIDRow, error)
GetOAuthSigningKey(ctx context.Context) (string, error)
GetOngoingAgentConnectionsLast24h(ctx context.Context, arg GetOngoingAgentConnectionsLast24hParams) ([]GetOngoingAgentConnectionsLast24hRow, error)
GetOrganizationByID(ctx context.Context, id uuid.UUID) (Organization, error)
GetOrganizationByName(ctx context.Context, arg GetOrganizationByNameParams) (Organization, error)
GetOrganizationIDsByMemberIDs(ctx context.Context, ids []uuid.UUID) ([]GetOrganizationIDsByMemberIDsRow, error)
@@ -354,6 +392,7 @@ type sqlcQuerier interface {
GetRuntimeConfig(ctx context.Context, key string) (string, error)
GetTailnetPeers(ctx context.Context, id uuid.UUID) ([]TailnetPeer, error)
GetTailnetTunnelPeerBindings(ctx context.Context, srcID uuid.UUID) ([]GetTailnetTunnelPeerBindingsRow, error)
GetTailnetTunnelPeerBindingsByDstID(ctx context.Context, dstID uuid.UUID) ([]GetTailnetTunnelPeerBindingsByDstIDRow, error)
GetTailnetTunnelPeerIDs(ctx context.Context, srcID uuid.UUID) ([]GetTailnetTunnelPeerIDsRow, error)
GetTaskByID(ctx context.Context, id uuid.UUID) (Task, error)
GetTaskByOwnerIDAndName(ctx context.Context, arg GetTaskByOwnerIDAndNameParams) (Task, error)
@@ -533,6 +572,7 @@ type sqlcQuerier interface {
GetWorkspaceResourcesByJobID(ctx context.Context, jobID uuid.UUID) ([]WorkspaceResource, error)
GetWorkspaceResourcesByJobIDs(ctx context.Context, ids []uuid.UUID) ([]WorkspaceResource, error)
GetWorkspaceResourcesCreatedAfter(ctx context.Context, createdAt time.Time) ([]WorkspaceResource, error)
GetWorkspaceSessionsOffset(ctx context.Context, arg GetWorkspaceSessionsOffsetParams) ([]GetWorkspaceSessionsOffsetRow, error)
GetWorkspaceUniqueOwnerCountByTemplateIDs(ctx context.Context, templateIds []uuid.UUID) ([]GetWorkspaceUniqueOwnerCountByTemplateIDsRow, error)
// build_params is used to filter by build parameters if present.
// It has to be a CTE because the set returning function 'unnest' cannot
@@ -584,6 +624,7 @@ type sqlcQuerier interface {
InsertProvisionerJobTimings(ctx context.Context, arg InsertProvisionerJobTimingsParams) ([]ProvisionerJobTiming, error)
InsertProvisionerKey(ctx context.Context, arg InsertProvisionerKeyParams) (ProvisionerKey, error)
InsertReplica(ctx context.Context, arg InsertReplicaParams) (Replica, error)
InsertTailnetPeeringEvent(ctx context.Context, arg InsertTailnetPeeringEventParams) error
InsertTask(ctx context.Context, arg InsertTaskParams) (TaskTable, error)
InsertTelemetryItemIfNotExists(ctx context.Context, arg InsertTelemetryItemIfNotExistsParams) error
// Inserts a new lock row into the telemetry_locks table. Replicas should call
@@ -670,6 +711,8 @@ type sqlcQuerier interface {
UnfavoriteWorkspace(ctx context.Context, id uuid.UUID) error
UpdateAIBridgeInterceptionEnded(ctx context.Context, arg UpdateAIBridgeInterceptionEndedParams) (AIBridgeInterception, error)
UpdateAPIKeyByID(ctx context.Context, arg UpdateAPIKeyByIDParams) error
// Links a connection log row to its workspace session.
UpdateConnectionLogSessionID(ctx context.Context, arg UpdateConnectionLogSessionIDParams) error
UpdateCryptoKeyDeletesAt(ctx context.Context, arg UpdateCryptoKeyDeletesAtParams) (CryptoKey, error)
UpdateCustomRole(ctx context.Context, arg UpdateCustomRoleParams) (CustomRole, error)
UpdateExternalAuthLink(ctx context.Context, arg UpdateExternalAuthLinkParams) (ExternalAuthLink, error)
@@ -799,10 +842,11 @@ type sqlcQuerier interface {
UpsertWorkspaceAgentPortShare(ctx context.Context, arg UpsertWorkspaceAgentPortShareParams) (WorkspaceAgentPortShare, error)
UpsertWorkspaceApp(ctx context.Context, arg UpsertWorkspaceAppParams) (WorkspaceApp, error)
//
// The returned boolean, new_or_stale, can be used to deduce if a new session
// was started. This means that a new row was inserted (no previous session) or
// the updated_at is older than stale interval.
UpsertWorkspaceAppAuditSession(ctx context.Context, arg UpsertWorkspaceAppAuditSessionParams) (bool, error)
// The returned columns, new_or_stale and connection_id, can be used to deduce
// if a new session was started and which connection_id to use. new_or_stale is
// true when a new row was inserted (no previous session) or the updated_at is
// older than the stale interval.
UpsertWorkspaceAppAuditSession(ctx context.Context, arg UpsertWorkspaceAppAuditSessionParams) (UpsertWorkspaceAppAuditSessionRow, error)
ValidateGroupIDs(ctx context.Context, groupIds []uuid.UUID) (ValidateGroupIDsRow, error)
ValidateUserIDs(ctx context.Context, userIds []uuid.UUID) (ValidateUserIDsRow, error)
}
@@ -3081,7 +3081,7 @@ func TestConnectionLogsOffsetFilters(t *testing.T) {
params: database.GetConnectionLogsOffsetParams{
Status: string(codersdk.ConnectionLogStatusOngoing),
},
expectedLogIDs: []uuid.UUID{log4.ID},
expectedLogIDs: []uuid.UUID{log1.ID, log4.ID},
},
{
name: "StatusCompleted",
@@ -3308,12 +3308,16 @@ func TestUpsertConnectionLog(t *testing.T) {
origLog, err := db.UpsertConnectionLog(ctx, connectParams2)
require.NoError(t, err)
require.Equal(t, log, origLog, "connect update should be a no-op")
// updated_at is always bumped on conflict to track activity.
require.True(t, connectTime2.Equal(origLog.UpdatedAt), "expected updated_at %s, got %s", connectTime2, origLog.UpdatedAt)
origLog.UpdatedAt = log.UpdatedAt
require.Equal(t, log, origLog, "connect update should be a no-op except updated_at")
// Check that still only one row exists.
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{})
require.NoError(t, err)
require.Len(t, rows, 1)
rows[0].ConnectionLog.UpdatedAt = log.UpdatedAt
require.Equal(t, log, rows[0].ConnectionLog)
})
@@ -3395,6 +3399,8 @@ func TestUpsertConnectionLog(t *testing.T) {
secondRows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{})
require.NoError(t, err)
require.Len(t, secondRows, 1)
// updated_at is always bumped on conflict to track activity.
secondRows[0].ConnectionLog.UpdatedAt = firstRows[0].ConnectionLog.UpdatedAt
require.Equal(t, firstRows, secondRows)
// Upsert a disconnection, which should also be a no-op
@@ -114,9 +114,7 @@ WHERE
AND CASE
WHEN @status :: text != '' THEN
((@status = 'ongoing' AND disconnect_time IS NULL) OR
(@status = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
(@status = 'completed' AND disconnect_time IS NOT NULL))
ELSE true
END
-- Authorize Filter clause will be injected below in
@@ -229,9 +227,7 @@ WHERE
AND CASE
WHEN @status :: text != '' THEN
((@status = 'ongoing' AND disconnect_time IS NULL) OR
(@status = 'completed' AND disconnect_time IS NOT NULL)) AND
-- Exclude web events, since we don't know their close time.
"type" NOT IN ('workspace_app', 'port_forwarding')
(@status = 'completed' AND disconnect_time IS NOT NULL))
ELSE true
END
-- Authorize Filter clause will be injected below in
@@ -260,6 +256,7 @@ INSERT INTO connection_logs (
workspace_id,
workspace_name,
agent_name,
agent_id,
type,
code,
ip,
@@ -268,18 +265,24 @@ INSERT INTO connection_logs (
slug_or_port,
connection_id,
disconnect_reason,
disconnect_time
disconnect_time,
updated_at,
session_id,
client_hostname,
short_description
) VALUES
($1, @time, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14,
($1, @time, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15,
-- If we've only received a disconnect event, mark the event as immediately
-- closed.
CASE
WHEN @connection_status::connection_status = 'disconnected'
THEN @time :: timestamp with time zone
ELSE NULL
END)
END,
@time, $16, $17, $18)
ON CONFLICT (connection_id, workspace_id, agent_name)
DO UPDATE SET
updated_at = @time,
-- No-op if the connection is still open.
disconnect_time = CASE
WHEN @connection_status::connection_status = 'disconnected'
@@ -301,5 +304,274 @@ DO UPDATE SET
AND connection_logs.code IS NULL
THEN EXCLUDED.code
ELSE connection_logs.code
END
END,
agent_id = COALESCE(connection_logs.agent_id, EXCLUDED.agent_id)
RETURNING *;
-- name: CloseOpenAgentConnectionLogsForWorkspace :execrows
UPDATE connection_logs
SET
disconnect_time = GREATEST(connect_time, @closed_at :: timestamp with time zone),
-- Do not overwrite any existing reason.
disconnect_reason = COALESCE(disconnect_reason, @reason :: text)
WHERE disconnect_time IS NULL
AND workspace_id = @workspace_id :: uuid
AND type = ANY(@types :: connection_type[]);
-- name: GetOngoingAgentConnectionsLast24h :many
WITH ranked AS (
SELECT
id,
connect_time,
organization_id,
workspace_owner_id,
workspace_id,
workspace_name,
agent_name,
type,
ip,
code,
user_agent,
user_id,
slug_or_port,
connection_id,
disconnect_time,
disconnect_reason,
agent_id,
updated_at,
session_id,
client_hostname,
short_description,
row_number() OVER (
PARTITION BY workspace_id, agent_name
ORDER BY connect_time DESC
) AS rn
FROM
connection_logs
WHERE
workspace_id = ANY(@workspace_ids :: uuid[])
AND agent_name = ANY(@agent_names :: text[])
AND type = ANY(@types :: connection_type[])
AND disconnect_time IS NULL
AND (
-- Non-web types always included while connected.
type NOT IN ('workspace_app', 'port_forwarding')
-- Agent-reported web connections have NULL user_agent
-- and carry proper disconnect lifecycle tracking.
OR user_agent IS NULL
-- Proxy-reported web connections (non-NULL user_agent)
-- rely on updated_at being bumped on each token refresh.
OR updated_at >= @app_active_since :: timestamp with time zone
)
AND connect_time >= @since :: timestamp with time zone
)
SELECT
id,
connect_time,
organization_id,
workspace_owner_id,
workspace_id,
workspace_name,
agent_name,
type,
ip,
code,
user_agent,
user_id,
slug_or_port,
connection_id,
disconnect_time,
disconnect_reason,
updated_at,
session_id,
client_hostname,
short_description
FROM
ranked
WHERE
rn <= @per_agent_limit
ORDER BY
workspace_id,
agent_name,
connect_time DESC;
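The `ranked` CTE above is a standard top-N-per-group pattern: `row_number()` partitions by `(workspace_id, agent_name)`, orders by `connect_time DESC`, and the outer query keeps rows with `rn <= @per_agent_limit`. A minimal in-memory sketch of the same filter, with illustrative types that are not the generated sqlc ones:

```go
package main

import (
	"fmt"
	"sort"
)

// agentKey identifies the partition used by the query's window function.
type agentKey struct{ workspaceID, agentName string }

type ongoingConn struct {
	key         agentKey
	connectUnix int64
}

// latestPerAgent mirrors the ranked CTE: within each (workspace, agent)
// partition, order by connect time descending and keep at most limit rows.
func latestPerAgent(conns []ongoingConn, limit int) []ongoingConn {
	byAgent := make(map[agentKey][]ongoingConn)
	for _, c := range conns {
		byAgent[c.key] = append(byAgent[c.key], c)
	}
	var out []ongoingConn
	for _, group := range byAgent {
		sort.Slice(group, func(i, j int) bool {
			return group[i].connectUnix > group[j].connectUnix
		})
		if len(group) > limit {
			group = group[:limit]
		}
		out = append(out, group...)
	}
	return out
}

func main() {
	k := agentKey{"ws1", "main"}
	conns := []ongoingConn{{k, 100}, {k, 300}, {k, 200}}
	fmt.Println(len(latestPerAgent(conns, 2))) // prints 2: only the newest two survive
}
```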
-- name: UpdateConnectionLogSessionID :exec
-- Links a connection log row to its workspace session.
UPDATE connection_logs SET session_id = @session_id WHERE id = @id;
-- name: CloseConnectionLogsAndCreateSessions :execrows
-- Atomically closes open connections and creates sessions grouped by
-- client_hostname (with IP fallback) and time overlap. Non-system
-- connections drive session boundaries; system connections attach to
-- the first overlapping session or get their own if orphaned.
--
-- Processes connections that are still open (disconnect_time IS NULL) OR
-- already disconnected but not yet assigned to a session (session_id IS
-- NULL). The latter covers system/tunnel connections whose disconnect is
-- recorded by dbsink but which have no session-assignment code path.
WITH connections_to_close AS (
SELECT id, ip, connect_time, disconnect_time, agent_id,
client_hostname, short_description, type
FROM connection_logs
WHERE (disconnect_time IS NULL OR session_id IS NULL)
AND workspace_id = @workspace_id
AND type = ANY(@types::connection_type[])
),
-- Phase 1: Group non-system connections by hostname+time overlap.
-- System connections persist for the entire workspace lifetime and
-- would create mega-sessions if included in boundary computation.
primary_connections AS (
SELECT *,
COALESCE(client_hostname, host(ip), 'unknown') AS group_key
FROM connections_to_close
WHERE type != 'system'
),
ordered AS (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY group_key ORDER BY connect_time) AS rn,
MAX(COALESCE(disconnect_time, @closed_at::timestamptz))
OVER (PARTITION BY group_key ORDER BY connect_time
ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS running_max_end
FROM primary_connections
),
with_boundaries AS (
SELECT *,
SUM(CASE
WHEN rn = 1 THEN 1
WHEN connect_time > running_max_end + INTERVAL '30 minutes' THEN 1
ELSE 0
END) OVER (PARTITION BY group_key ORDER BY connect_time) AS group_id
FROM ordered
),
session_groups AS (
SELECT
group_key,
group_id,
MIN(connect_time) AS started_at,
MAX(COALESCE(disconnect_time, @closed_at::timestamptz)) AS ended_at,
(array_agg(agent_id ORDER BY connect_time) FILTER (WHERE agent_id IS NOT NULL))[1] AS agent_id,
(array_agg(ip ORDER BY connect_time) FILTER (WHERE ip IS NOT NULL))[1] AS ip,
(array_agg(client_hostname ORDER BY connect_time) FILTER (WHERE client_hostname IS NOT NULL))[1] AS client_hostname,
(array_agg(short_description ORDER BY connect_time) FILTER (WHERE short_description IS NOT NULL))[1] AS short_description
FROM with_boundaries
GROUP BY group_key, group_id
),
-- Check for pre-existing sessions that match by hostname (or IP
-- fallback) and overlap in time, to avoid duplicates from the race
-- with FindOrCreateSessionForDisconnect.
existing_sessions AS (
SELECT DISTINCT ON (sg.group_key, sg.group_id)
sg.group_key, sg.group_id, ws.id AS session_id
FROM session_groups sg
JOIN workspace_sessions ws
ON ws.workspace_id = @workspace_id
AND (
(sg.client_hostname IS NOT NULL AND ws.client_hostname = sg.client_hostname)
OR (sg.client_hostname IS NULL AND sg.ip IS NOT NULL AND ws.ip = sg.ip AND ws.client_hostname IS NULL)
)
AND sg.started_at <= ws.ended_at + INTERVAL '30 minutes'
AND sg.ended_at >= ws.started_at - INTERVAL '30 minutes'
ORDER BY sg.group_key, sg.group_id, ws.started_at DESC
),
new_sessions AS (
INSERT INTO workspace_sessions (workspace_id, agent_id, ip, client_hostname, short_description, started_at, ended_at)
SELECT @workspace_id, sg.agent_id, sg.ip, sg.client_hostname, sg.short_description, sg.started_at, sg.ended_at
FROM session_groups sg
WHERE NOT EXISTS (
SELECT 1 FROM existing_sessions es
WHERE es.group_key = sg.group_key AND es.group_id = sg.group_id
)
RETURNING id, ip, started_at
),
-- Combine existing and newly created sessions.
all_sessions AS (
SELECT ns.id, sg.group_key, sg.started_at
FROM new_sessions ns
JOIN session_groups sg
ON sg.started_at = ns.started_at
AND (sg.ip IS NOT DISTINCT FROM ns.ip)
UNION ALL
SELECT es.session_id AS id, es.group_key, sg.started_at
FROM existing_sessions es
JOIN session_groups sg ON es.group_key = sg.group_key AND es.group_id = sg.group_id
),
-- Phase 2: Assign system connections to the earliest overlapping
-- primary session. First check sessions from this batch, then fall
-- back to pre-existing workspace_sessions.
system_batch_match AS (
SELECT DISTINCT ON (c.id)
c.id AS connection_id,
alls.id AS session_id,
sg.started_at AS session_start
FROM connections_to_close c
JOIN all_sessions alls ON true
JOIN session_groups sg ON alls.group_key = sg.group_key AND alls.started_at = sg.started_at
WHERE c.type = 'system'
AND COALESCE(c.disconnect_time, @closed_at::timestamptz) >= sg.started_at
AND c.connect_time <= sg.ended_at
ORDER BY c.id, sg.started_at
),
-- Also match system connections to pre-existing sessions (created
-- by FindOrCreateSessionForDisconnect) that aren't in this batch.
system_existing_match AS (
SELECT DISTINCT ON (c.id)
c.id AS connection_id,
ws.id AS session_id
FROM connections_to_close c
JOIN workspace_sessions ws
ON ws.workspace_id = @workspace_id
AND COALESCE(c.disconnect_time, @closed_at::timestamptz) >= ws.started_at
AND c.connect_time <= ws.ended_at
WHERE c.type = 'system'
AND NOT EXISTS (SELECT 1 FROM system_batch_match sbm WHERE sbm.connection_id = c.id)
ORDER BY c.id, ws.started_at
),
system_session_match AS (
SELECT connection_id, session_id FROM system_batch_match
UNION ALL
SELECT connection_id, session_id FROM system_existing_match
),
-- Create sessions for orphaned system connections (no overlapping
-- primary session) that have an IP.
orphan_system AS (
SELECT c.*
FROM connections_to_close c
LEFT JOIN system_session_match ssm ON ssm.connection_id = c.id
WHERE c.type = 'system'
AND ssm.connection_id IS NULL
AND c.ip IS NOT NULL
),
orphan_system_sessions AS (
INSERT INTO workspace_sessions (workspace_id, agent_id, ip, client_hostname, short_description, started_at, ended_at)
SELECT @workspace_id, os.agent_id, os.ip, os.client_hostname, os.short_description,
os.connect_time, COALESCE(os.disconnect_time, @closed_at::timestamptz)
FROM orphan_system os
RETURNING id, ip, started_at
),
-- Combine all session sources for the final UPDATE.
final_sessions AS (
-- Primary sessions matched to non-system connections.
SELECT wb.id AS connection_id, alls.id AS session_id
FROM with_boundaries wb
JOIN session_groups sg ON wb.group_key = sg.group_key AND wb.group_id = sg.group_id
JOIN all_sessions alls ON sg.group_key = alls.group_key AND sg.started_at = alls.started_at
UNION ALL
-- System connections matched to primary sessions.
SELECT ssm.connection_id, ssm.session_id
FROM system_session_match ssm
UNION ALL
-- Orphaned system connections with their own sessions.
SELECT os.id, oss.id
FROM orphan_system os
JOIN orphan_system_sessions oss ON os.ip = oss.ip AND os.connect_time = oss.started_at
)
UPDATE connection_logs cl
SET
disconnect_time = COALESCE(cl.disconnect_time, @closed_at),
disconnect_reason = COALESCE(cl.disconnect_reason, @reason),
session_id = COALESCE(cl.session_id, fs.session_id)
FROM connections_to_close ctc
LEFT JOIN final_sessions fs ON ctc.id = fs.connection_id
WHERE cl.id = ctc.id;
@@ -126,5 +126,29 @@ SELECT * FROM tailnet_coordinators;
-- name: GetAllTailnetPeers :many
SELECT * FROM tailnet_peers;
-- name: GetTailnetTunnelPeerBindingsByDstID :many
SELECT tp.id AS peer_id, tp.coordinator_id, tp.updated_at, tp.node, tp.status
FROM tailnet_peers tp
INNER JOIN tailnet_tunnels tt ON tp.id = tt.src_id
WHERE tt.dst_id = @dst_id;
-- name: GetAllTailnetTunnels :many
SELECT * FROM tailnet_tunnels;
-- name: InsertTailnetPeeringEvent :exec
INSERT INTO tailnet_peering_events (
peering_id,
event_type,
src_peer_id,
dst_peer_id,
node,
occurred_at
)
VALUES
($1, $2, $3, $4, $5, $6);
-- name: GetAllTailnetPeeringEventsByPeerID :many
SELECT *
FROM tailnet_peering_events
WHERE src_peer_id = $1 OR dst_peer_id = $1
ORDER BY peering_id, occurred_at;
@@ -1,8 +1,9 @@
-- name: UpsertWorkspaceAppAuditSession :one
--
-- The returned boolean, new_or_stale, can be used to deduce if a new session
-- was started. This means that a new row was inserted (no previous session) or
-- the updated_at is older than stale interval.
-- The returned columns, new_or_stale and connection_id, can be used to deduce
-- if a new session was started and which connection_id to use. new_or_stale is
-- true when a new row was inserted (no previous session) or the updated_at is
-- older than the stale interval.
INSERT INTO
workspace_app_audit_sessions (
id,
@@ -14,7 +15,8 @@ INSERT INTO
slug_or_port,
status_code,
started_at,
updated_at
updated_at,
connection_id
)
VALUES
(
@@ -27,7 +29,8 @@ VALUES
$7,
$8,
$9,
$10
$10,
$11
)
ON CONFLICT
(agent_id, app_id, user_id, ip, user_agent, slug_or_port, status_code)
@@ -45,6 +48,12 @@ DO
THEN workspace_app_audit_sessions.started_at
ELSE EXCLUDED.started_at
END,
connection_id = CASE
WHEN workspace_app_audit_sessions.updated_at > NOW() - (@stale_interval_ms::bigint || ' ms')::interval
THEN workspace_app_audit_sessions.connection_id
ELSE EXCLUDED.connection_id
END,
updated_at = EXCLUDED.updated_at
RETURNING
id = $1 AS new_or_stale;
id = $1 AS new_or_stale,
connection_id;
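The change above turns the single `new_or_stale` boolean into a two-column row so callers also learn which `connection_id` won the upsert. A hedged sketch of the intended consumption, with a stand-in row type rather than the generated sqlc one:

```go
package main

import "fmt"

// auditSessionRow stands in for the row returned by
// UpsertWorkspaceAppAuditSession; names are illustrative.
type auditSessionRow struct {
	NewOrStale   bool
	ConnectionID string
}

// connectionIDForRequest shows how the two returned columns compose: on a
// new or stale session, emit a connect log with the returned connection ID;
// otherwise keep reusing the surviving session's ID without logging.
func connectionIDForRequest(row auditSessionRow, logConnect func(string)) string {
	if row.NewOrStale {
		logConnect(row.ConnectionID)
	}
	return row.ConnectionID
}

func main() {
	logged := 0
	id := connectionIDForRequest(
		auditSessionRow{NewOrStale: true, ConnectionID: "conn-1"},
		func(string) { logged++ },
	)
	fmt.Println(id, logged) // prints "conn-1 1"
}
```

Returning the connection ID from the upsert avoids a second round trip and ensures concurrent requests against a live session all observe the same ID.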
@@ -399,13 +399,13 @@ WHERE
filtered_workspaces fw
ORDER BY
-- To ensure that 'favorite' workspaces show up first in the list only for their owner.
CASE WHEN owner_id = @requester_id AND favorite THEN 0 ELSE 1 END ASC,
(latest_build_completed_at IS NOT NULL AND
latest_build_canceled_at IS NULL AND
latest_build_error IS NULL AND
latest_build_transition = 'start'::workspace_transition) DESC,
LOWER(owner_username) ASC,
LOWER(name) ASC
CASE WHEN fw.owner_id = @requester_id AND fw.favorite THEN 0 ELSE 1 END ASC,
(fw.latest_build_completed_at IS NOT NULL AND
fw.latest_build_canceled_at IS NULL AND
fw.latest_build_error IS NULL AND
fw.latest_build_transition = 'start'::workspace_transition) DESC,
LOWER(fw.owner_username) ASC,
LOWER(fw.name) ASC
LIMIT
CASE
WHEN @limit_ :: integer > 0 THEN
@@ -0,0 +1,110 @@
-- name: FindOrCreateSessionForDisconnect :one
-- Find an existing session within the time window, or create a new one.
-- Uses advisory lock to prevent duplicate sessions from concurrent disconnects.
-- Groups by client_hostname (with IP fallback) to match the live session
-- grouping in mergeWorkspaceConnectionsIntoSessions.
WITH lock AS (
SELECT pg_advisory_xact_lock(
hashtext(@workspace_id::text || COALESCE(@client_hostname, host(@ip::inet), 'unknown'))
)
),
existing AS (
SELECT id FROM workspace_sessions
WHERE workspace_id = @workspace_id::uuid
AND (
(@client_hostname IS NOT NULL AND client_hostname = @client_hostname)
OR
(@client_hostname IS NULL AND client_hostname IS NULL AND ip = @ip::inet)
)
AND @connect_time BETWEEN started_at - INTERVAL '30 minutes' AND ended_at + INTERVAL '30 minutes'
ORDER BY started_at DESC
LIMIT 1
),
new_session AS (
INSERT INTO workspace_sessions (workspace_id, agent_id, ip, client_hostname, short_description, started_at, ended_at)
SELECT @workspace_id::uuid, @agent_id, @ip::inet, @client_hostname, @short_description, @connect_time, @disconnect_time
WHERE NOT EXISTS (SELECT 1 FROM existing)
RETURNING id
),
updated_session AS (
UPDATE workspace_sessions
SET started_at = LEAST(started_at, @connect_time),
ended_at = GREATEST(ended_at, @disconnect_time)
WHERE id = (SELECT id FROM existing)
RETURNING id
)
SELECT COALESCE(
(SELECT id FROM updated_session),
(SELECT id FROM new_session)
) AS id;
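Both the advisory-lock key above and the batch query's `group_key` derive from the same fallback chain, `COALESCE(client_hostname, host(ip), 'unknown')`, which is what keeps concurrent disconnects for one client serialized on one lock. That fallback, sketched in Go (illustrative; the real code computes it in SQL):

```go
package main

import "fmt"

// groupKey mirrors COALESCE(client_hostname, host(ip), 'unknown'): prefer
// the reported hostname, fall back to the source IP, then to a sentinel.
func groupKey(clientHostname, ip string) string {
	if clientHostname != "" {
		return clientHostname
	}
	if ip != "" {
		return ip
	}
	return "unknown"
}

func main() {
	fmt.Println(groupKey("", "10.0.0.5")) // prints "10.0.0.5"
}
```

Because `pg_advisory_xact_lock` hashes `workspace_id || groupKey`, two disconnects from the same client in the same workspace contend on the same lock, while unrelated clients proceed in parallel.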
-- name: GetWorkspaceSessionsOffset :many
SELECT
ws.*,
(SELECT COUNT(*) FROM connection_logs cl WHERE cl.session_id = ws.id) AS connection_count
FROM workspace_sessions ws
WHERE ws.workspace_id = @workspace_id
AND CASE WHEN @started_after::timestamptz != '0001-01-01 00:00:00Z'::timestamptz
THEN ws.started_at >= @started_after ELSE true END
AND CASE WHEN @started_before::timestamptz != '0001-01-01 00:00:00Z'::timestamptz
THEN ws.started_at <= @started_before ELSE true END
ORDER BY ws.started_at DESC
LIMIT @limit_count
OFFSET @offset_count;
-- name: CountWorkspaceSessions :one
SELECT COUNT(*) FROM workspace_sessions ws
WHERE ws.workspace_id = @workspace_id
AND CASE WHEN @started_after::timestamptz != '0001-01-01 00:00:00Z'::timestamptz
THEN ws.started_at >= @started_after ELSE true END
AND CASE WHEN @started_before::timestamptz != '0001-01-01 00:00:00Z'::timestamptz
THEN ws.started_at <= @started_before ELSE true END;
-- name: GetConnectionLogsBySessionIDs :many
SELECT * FROM connection_logs
WHERE session_id = ANY(@session_ids::uuid[])
ORDER BY session_id, connect_time DESC;
-- name: GetConnectionLogByConnectionID :one
SELECT * FROM connection_logs
WHERE connection_id = @connection_id
AND workspace_id = @workspace_id
AND agent_name = @agent_name
LIMIT 1;
-- name: GetGlobalWorkspaceSessionsOffset :many
SELECT
ws.*,
w.name AS workspace_name,
workspace_owner.username AS workspace_owner_username,
(SELECT COUNT(*) FROM connection_logs cl WHERE cl.session_id = ws.id) AS connection_count
FROM workspace_sessions ws
JOIN workspaces w ON w.id = ws.workspace_id
JOIN users workspace_owner ON workspace_owner.id = w.owner_id
WHERE
CASE WHEN @workspace_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid
THEN ws.workspace_id = @workspace_id ELSE true END
AND CASE WHEN @workspace_owner::text != ''
THEN workspace_owner.username = @workspace_owner ELSE true END
AND CASE WHEN @started_after::timestamptz != '0001-01-01 00:00:00Z'::timestamptz
THEN ws.started_at >= @started_after ELSE true END
AND CASE WHEN @started_before::timestamptz != '0001-01-01 00:00:00Z'::timestamptz
THEN ws.started_at <= @started_before ELSE true END
ORDER BY ws.started_at DESC
LIMIT @limit_count
OFFSET @offset_count;
-- name: CountGlobalWorkspaceSessions :one
SELECT COUNT(*) FROM workspace_sessions ws
JOIN workspaces w ON w.id = ws.workspace_id
JOIN users workspace_owner ON workspace_owner.id = w.owner_id
WHERE
CASE WHEN @workspace_id::uuid != '00000000-0000-0000-0000-000000000000'::uuid
THEN ws.workspace_id = @workspace_id ELSE true END
AND CASE WHEN @workspace_owner::text != ''
THEN workspace_owner.username = @workspace_owner ELSE true END
AND CASE WHEN @started_after::timestamptz != '0001-01-01 00:00:00Z'::timestamptz
THEN ws.started_at >= @started_after ELSE true END
AND CASE WHEN @started_before::timestamptz != '0001-01-01 00:00:00Z'::timestamptz
THEN ws.started_at <= @started_before ELSE true END;
@@ -6,6 +6,7 @@ type UniqueConstraint string
// UniqueConstraint enums.
const (
UniqueAgentPeeringIDsPkey UniqueConstraint = "agent_peering_ids_pkey" // ALTER TABLE ONLY agent_peering_ids ADD CONSTRAINT agent_peering_ids_pkey PRIMARY KEY (agent_id, peering_id);
UniqueAgentStatsPkey UniqueConstraint = "agent_stats_pkey" // ALTER TABLE ONLY workspace_agent_stats ADD CONSTRAINT agent_stats_pkey PRIMARY KEY (id);
UniqueAibridgeInterceptionsPkey UniqueConstraint = "aibridge_interceptions_pkey" // ALTER TABLE ONLY aibridge_interceptions ADD CONSTRAINT aibridge_interceptions_pkey PRIMARY KEY (id);
UniqueAibridgeTokenUsagesPkey UniqueConstraint = "aibridge_token_usages_pkey" // ALTER TABLE ONLY aibridge_token_usages ADD CONSTRAINT aibridge_token_usages_pkey PRIMARY KEY (id);
@@ -108,6 +109,7 @@ const (
UniqueWorkspaceResourceMetadataName UniqueConstraint = "workspace_resource_metadata_name" // ALTER TABLE ONLY workspace_resource_metadata ADD CONSTRAINT workspace_resource_metadata_name UNIQUE (workspace_resource_id, key);
UniqueWorkspaceResourceMetadataPkey UniqueConstraint = "workspace_resource_metadata_pkey" // ALTER TABLE ONLY workspace_resource_metadata ADD CONSTRAINT workspace_resource_metadata_pkey PRIMARY KEY (id);
UniqueWorkspaceResourcesPkey UniqueConstraint = "workspace_resources_pkey" // ALTER TABLE ONLY workspace_resources ADD CONSTRAINT workspace_resources_pkey PRIMARY KEY (id);
UniqueWorkspaceSessionsPkey UniqueConstraint = "workspace_sessions_pkey" // ALTER TABLE ONLY workspace_sessions ADD CONSTRAINT workspace_sessions_pkey PRIMARY KEY (id);
UniqueWorkspacesPkey UniqueConstraint = "workspaces_pkey" // ALTER TABLE ONLY workspaces ADD CONSTRAINT workspaces_pkey PRIMARY KEY (id);
UniqueIndexAPIKeyName UniqueConstraint = "idx_api_key_name" // CREATE UNIQUE INDEX idx_api_key_name ON api_keys USING btree (user_id, token_name) WHERE (login_type = 'token'::login_type);
UniqueIndexConnectionLogsConnectionIDWorkspaceIDAgentName UniqueConstraint = "idx_connection_logs_connection_id_workspace_id_agent_name" // CREATE UNIQUE INDEX idx_connection_logs_connection_id_workspace_id_agent_name ON connection_logs USING btree (connection_id, workspace_id, agent_name);
@@ -0,0 +1,155 @@
package coderd
import (
"context"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"go.uber.org/mock/gomock"
"golang.org/x/xerrors"
"cdr.dev/slog/v3/sloggers/slogtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbmock"
"github.com/coder/coder/v2/coderd/database/pubsub"
"github.com/coder/coder/v2/coderd/wspubsub"
tailnetproto "github.com/coder/coder/v2/tailnet/proto"
"github.com/coder/coder/v2/testutil"
)
func TestHandleIdentifiedTelemetry(t *testing.T) {
t.Parallel()
t.Run("PublishesWorkspaceUpdate", func(t *testing.T) {
t.Parallel()
api, dbM, ps := newIdentifiedTelemetryTestAPI(t)
ownerID := uuid.New()
workspaceID := uuid.New()
agentID := uuid.New()
peerID := uuid.New()
dbM.EXPECT().GetWorkspaceByAgentID(gomock.Any(), agentID).Return(database.Workspace{
ID: workspaceID,
OwnerID: ownerID,
}, nil)
events, errs := subscribeWorkspaceEvents(t, ps, ownerID)
api.handleIdentifiedTelemetry(agentID, peerID, []*tailnetproto.TelemetryEvent{{
Status: tailnetproto.TelemetryEvent_CONNECTED,
}})
select {
case err := <-errs:
require.NoError(t, err)
default:
}
select {
case event := <-events:
require.Equal(t, wspubsub.WorkspaceEventKindConnectionLogUpdate, event.Kind)
require.Equal(t, workspaceID, event.WorkspaceID)
require.NotNil(t, event.AgentID)
require.Equal(t, agentID, *event.AgentID)
case err := <-errs:
require.NoError(t, err)
case <-time.After(testutil.IntervalMedium):
t.Fatal("timed out waiting for workspace event")
}
require.NotNil(t, api.PeerNetworkTelemetryStore.Get(agentID, peerID))
})
t.Run("EmptyBatchNoPublish", func(t *testing.T) {
t.Parallel()
api, _, ps := newIdentifiedTelemetryTestAPI(t)
events, errs := subscribeWorkspaceEvents(t, ps, uuid.Nil)
agentID := uuid.New()
peerID := uuid.New()
api.handleIdentifiedTelemetry(agentID, peerID, []*tailnetproto.TelemetryEvent{})
select {
case event := <-events:
t.Fatalf("unexpected workspace event: %+v", event)
case err := <-errs:
t.Fatalf("unexpected pubsub error: %v", err)
case <-time.After(testutil.IntervalFast):
}
require.Nil(t, api.PeerNetworkTelemetryStore.Get(agentID, peerID))
})
t.Run("LookupFailureNoPublish", func(t *testing.T) {
t.Parallel()
api, dbM, ps := newIdentifiedTelemetryTestAPI(t)
agentID := uuid.New()
peerID := uuid.New()
dbM.EXPECT().GetWorkspaceByAgentID(gomock.Any(), agentID).Return(database.Workspace{}, xerrors.New("lookup failed"))
events, errs := subscribeWorkspaceEvents(t, ps, uuid.Nil)
api.handleIdentifiedTelemetry(agentID, peerID, []*tailnetproto.TelemetryEvent{{
Status: tailnetproto.TelemetryEvent_CONNECTED,
}})
select {
case event := <-events:
t.Fatalf("unexpected workspace event: %+v", event)
case err := <-errs:
t.Fatalf("unexpected pubsub error: %v", err)
case <-time.After(testutil.IntervalFast):
}
require.NotNil(t, api.PeerNetworkTelemetryStore.Get(agentID, peerID))
})
}
func newIdentifiedTelemetryTestAPI(t *testing.T) (*API, *dbmock.MockStore, pubsub.Pubsub) {
t.Helper()
dbM := dbmock.NewMockStore(gomock.NewController(t))
ps := pubsub.NewInMemory()
api := &API{
Options: &Options{
Database: dbM,
Pubsub: ps,
Logger: slogtest.Make(t, &slogtest.Options{IgnoreErrors: true}),
},
PeerNetworkTelemetryStore: NewPeerNetworkTelemetryStore(),
}
return api, dbM, ps
}
func subscribeWorkspaceEvents(t *testing.T, ps pubsub.Pubsub, ownerID uuid.UUID) (<-chan wspubsub.WorkspaceEvent, <-chan error) {
t.Helper()
events := make(chan wspubsub.WorkspaceEvent, 1)
errs := make(chan error, 1)
cancel, err := ps.SubscribeWithErr(wspubsub.WorkspaceEventChannel(ownerID), wspubsub.HandleWorkspaceEvent(
func(_ context.Context, event wspubsub.WorkspaceEvent, err error) {
if err != nil {
select {
case errs <- err:
default:
}
return
}
select {
case events <- event:
default:
}
},
))
require.NoError(t, err)
t.Cleanup(cancel)
return events, errs
}
@@ -0,0 +1,178 @@
package coderd
import (
"sync"
"time"
"github.com/google/uuid"
tailnetproto "github.com/coder/coder/v2/tailnet/proto"
)
const peerNetworkTelemetryMaxAge = 2 * time.Minute
type PeerNetworkTelemetry struct {
P2P *bool
DERPLatency *time.Duration
P2PLatency *time.Duration
HomeDERP int
LastUpdatedAt time.Time
}
type PeerNetworkTelemetryStore struct {
mu sync.RWMutex
byAgent map[uuid.UUID]map[uuid.UUID]*PeerNetworkTelemetry
}
func NewPeerNetworkTelemetryStore() *PeerNetworkTelemetryStore {
return &PeerNetworkTelemetryStore{
byAgent: make(map[uuid.UUID]map[uuid.UUID]*PeerNetworkTelemetry),
}
}
func (s *PeerNetworkTelemetryStore) Update(agentID, peerID uuid.UUID, event *tailnetproto.TelemetryEvent) {
if event == nil {
return
}
if event.Status == tailnetproto.TelemetryEvent_DISCONNECTED {
s.Delete(agentID, peerID)
return
}
if event.Status != tailnetproto.TelemetryEvent_CONNECTED {
return
}
s.mu.Lock()
defer s.mu.Unlock()
byPeer := s.byAgent[agentID]
if byPeer == nil {
byPeer = make(map[uuid.UUID]*PeerNetworkTelemetry)
s.byAgent[agentID] = byPeer
}
existing := byPeer[peerID]
entry := &PeerNetworkTelemetry{
LastUpdatedAt: time.Now(),
}
// HomeDERP: prefer explicit non-zero value from the event,
// otherwise preserve the prior known value.
if event.HomeDerp != 0 {
entry.HomeDERP = int(event.HomeDerp)
} else if existing != nil {
entry.HomeDERP = existing.HomeDERP
}
// Determine whether this event carries any mode/latency signal.
hasNetworkInfo := event.P2PEndpoint != nil || event.DerpLatency != nil || event.P2PLatency != nil
if hasNetworkInfo {
// Apply explicit values from the event.
if event.P2PEndpoint != nil {
p2p := true
entry.P2P = &p2p
}
if event.DerpLatency != nil {
d := event.DerpLatency.AsDuration()
entry.DERPLatency = &d
p2p := false
entry.P2P = &p2p
}
if event.P2PLatency != nil {
d := event.P2PLatency.AsDuration()
entry.P2PLatency = &d
p2p := true
entry.P2P = &p2p
}
} else if existing != nil {
// Event has no mode/latency info — preserve prior values
// so a bare CONNECTED event doesn't wipe known state.
entry.P2P = existing.P2P
entry.DERPLatency = existing.DERPLatency
entry.P2PLatency = existing.P2PLatency
}
byPeer[peerID] = entry
}
func (s *PeerNetworkTelemetryStore) Get(agentID uuid.UUID, peerID ...uuid.UUID) *PeerNetworkTelemetry {
if len(peerID) > 0 {
return s.getByPeer(agentID, peerID[0])
}
// Legacy callers only provide agentID. Return the freshest peer entry.
entries := s.GetAll(agentID)
var latest *PeerNetworkTelemetry
for _, entry := range entries {
if latest == nil || entry.LastUpdatedAt.After(latest.LastUpdatedAt) {
latest = entry
}
}
return latest
}
func (s *PeerNetworkTelemetryStore) getByPeer(agentID, peerID uuid.UUID) *PeerNetworkTelemetry {
s.mu.Lock()
defer s.mu.Unlock()
byPeer := s.byAgent[agentID]
if byPeer == nil {
return nil
}
entry := byPeer[peerID]
if entry == nil {
return nil
}
if time.Since(entry.LastUpdatedAt) > peerNetworkTelemetryMaxAge {
delete(byPeer, peerID)
if len(byPeer) == 0 {
delete(s.byAgent, agentID)
}
return nil
}
return entry
}
func (s *PeerNetworkTelemetryStore) GetAll(agentID uuid.UUID) map[uuid.UUID]*PeerNetworkTelemetry {
s.mu.Lock()
defer s.mu.Unlock()
byPeer := s.byAgent[agentID]
if len(byPeer) == 0 {
return nil
}
entries := make(map[uuid.UUID]*PeerNetworkTelemetry, len(byPeer))
now := time.Now()
for peerID, entry := range byPeer {
if now.Sub(entry.LastUpdatedAt) > peerNetworkTelemetryMaxAge {
delete(byPeer, peerID)
continue
}
entries[peerID] = entry
}
if len(byPeer) == 0 {
delete(s.byAgent, agentID)
}
if len(entries) == 0 {
return nil
}
return entries
}
func (s *PeerNetworkTelemetryStore) Delete(agentID, peerID uuid.UUID) {
s.mu.Lock()
defer s.mu.Unlock()
byPeer := s.byAgent[agentID]
if byPeer == nil {
return
}
delete(byPeer, peerID)
if len(byPeer) == 0 {
delete(s.byAgent, agentID)
}
}
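The store above evicts stale entries lazily: there is no background sweeper, and `getByPeer`/`GetAll` delete any entry older than `peerNetworkTelemetryMaxAge` at read time. A minimal standalone sketch of that pattern (stdlib only; the names `ttlMap`, `entry`, and the 50ms max age are illustrative, not from the code above):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// entry pairs a value with the time it was last written.
type entry struct {
	value     string
	updatedAt time.Time
}

// ttlMap evicts lazily on read: stale entries are deleted the next
// time they are looked up, never by a background goroutine.
type ttlMap struct {
	mu     sync.Mutex
	maxAge time.Duration
	items  map[string]entry
}

func newTTLMap(maxAge time.Duration) *ttlMap {
	return &ttlMap{maxAge: maxAge, items: make(map[string]entry)}
}

func (m *ttlMap) set(key, value string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.items[key] = entry{value: value, updatedAt: time.Now()}
}

// get returns the value and true only while the entry is fresh;
// a stale entry is removed and reported as missing, so readers
// never observe data older than maxAge.
func (m *ttlMap) get(key string) (string, bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	e, ok := m.items[key]
	if !ok {
		return "", false
	}
	if time.Since(e.updatedAt) > m.maxAge {
		delete(m.items, key)
		return "", false
	}
	return e.value, true
}

func main() {
	m := newTTLMap(50 * time.Millisecond)
	m.set("peer", "p2p")
	v, ok := m.get("peer")
	fmt.Println(v, ok)
	time.Sleep(60 * time.Millisecond)
	_, ok = m.get("peer")
	fmt.Println(ok)
}
```

Note `get` takes the full write lock (as `getByPeer` does with `s.mu.Lock()` despite the `RWMutex`) because eviction mutates the map; an `RLock` would race with the `delete`.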
@@ -0,0 +1,333 @@
package coderd_test
import (
"sync"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/types/known/durationpb"
"github.com/coder/coder/v2/coderd"
tailnetproto "github.com/coder/coder/v2/tailnet/proto"
)
func TestPeerNetworkTelemetryStore(t *testing.T) {
t.Parallel()
t.Run("UpdateAndGet", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(50 * time.Millisecond),
HomeDerp: 1,
})
got := store.Get(agentID, peerID)
require.NotNil(t, got)
require.NotNil(t, got.P2P)
require.True(t, *got.P2P)
require.NotNil(t, got.P2PLatency)
require.Equal(t, 50*time.Millisecond, *got.P2PLatency)
require.Equal(t, 1, got.HomeDERP)
})
t.Run("GetMissing", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
require.Nil(t, store.Get(uuid.New(), uuid.New()))
})
t.Run("LatestWins", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(10 * time.Millisecond),
HomeDerp: 1,
})
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
DerpLatency: durationpb.New(75 * time.Millisecond),
HomeDerp: 2,
})
got := store.Get(agentID, peerID)
require.NotNil(t, got)
require.NotNil(t, got.P2P)
require.False(t, *got.P2P)
require.NotNil(t, got.DERPLatency)
require.Equal(t, 75*time.Millisecond, *got.DERPLatency)
require.Nil(t, got.P2PLatency)
require.Equal(t, 2, got.HomeDERP)
})
t.Run("ConnectedWithoutLatencyPreservesExistingModeAndLatency", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(50 * time.Millisecond),
HomeDerp: 1,
})
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
})
got := store.Get(agentID, peerID)
require.NotNil(t, got)
require.NotNil(t, got.P2P)
require.True(t, *got.P2P)
require.NotNil(t, got.P2PLatency)
require.Equal(t, 50*time.Millisecond, *got.P2PLatency)
require.Equal(t, 1, got.HomeDERP)
})
t.Run("ConnectedWithHomeDerpZeroPreservesPreviousHomeDerp", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
HomeDerp: 3,
})
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
DerpLatency: durationpb.New(20 * time.Millisecond),
})
got := store.Get(agentID, peerID)
require.NotNil(t, got)
require.Equal(t, 3, got.HomeDERP)
require.NotNil(t, got.P2P)
require.False(t, *got.P2P)
require.NotNil(t, got.DERPLatency)
require.Equal(t, 20*time.Millisecond, *got.DERPLatency)
})
t.Run("ConnectedWithExplicitLatencyOverridesPreviousValues", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(50 * time.Millisecond),
})
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
DerpLatency: durationpb.New(30 * time.Millisecond),
})
got := store.Get(agentID, peerID)
require.NotNil(t, got)
require.NotNil(t, got.P2P)
require.False(t, *got.P2P)
require.NotNil(t, got.DERPLatency)
require.Equal(t, 30*time.Millisecond, *got.DERPLatency)
require.Nil(t, got.P2PLatency)
})
t.Run("ConnectedWithoutLatencyLeavesModeUnknown", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
HomeDerp: 1,
})
got := store.Get(agentID, peerID)
require.NotNil(t, got)
require.Nil(t, got.P2P)
require.Nil(t, got.DERPLatency)
require.Nil(t, got.P2PLatency)
})
t.Run("TwoPeersIndependentState", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerA := uuid.New()
peerB := uuid.New()
store.Update(agentID, peerA, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(10 * time.Millisecond),
HomeDerp: 1,
})
store.Update(agentID, peerB, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
DerpLatency: durationpb.New(80 * time.Millisecond),
HomeDerp: 2,
})
gotA := store.Get(agentID, peerA)
require.NotNil(t, gotA)
require.NotNil(t, gotA.P2P)
require.True(t, *gotA.P2P)
require.NotNil(t, gotA.P2PLatency)
require.Equal(t, 10*time.Millisecond, *gotA.P2PLatency)
require.Nil(t, gotA.DERPLatency)
require.Equal(t, 1, gotA.HomeDERP)
gotB := store.Get(agentID, peerB)
require.NotNil(t, gotB)
require.NotNil(t, gotB.P2P)
require.False(t, *gotB.P2P)
require.NotNil(t, gotB.DERPLatency)
require.Equal(t, 80*time.Millisecond, *gotB.DERPLatency)
require.Nil(t, gotB.P2PLatency)
require.Equal(t, 2, gotB.HomeDERP)
all := store.GetAll(agentID)
require.Len(t, all, 2)
require.Same(t, gotA, all[peerA])
require.Same(t, gotB, all[peerB])
})
t.Run("PeerDisconnectDoesNotWipeOther", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerA := uuid.New()
peerB := uuid.New()
store.Update(agentID, peerA, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(15 * time.Millisecond),
HomeDerp: 5,
})
store.Update(agentID, peerB, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
DerpLatency: durationpb.New(70 * time.Millisecond),
HomeDerp: 6,
})
store.Update(agentID, peerA, &tailnetproto.TelemetryEvent{Status: tailnetproto.TelemetryEvent_DISCONNECTED})
require.Nil(t, store.Get(agentID, peerA))
gotB := store.Get(agentID, peerB)
require.NotNil(t, gotB)
require.NotNil(t, gotB.P2P)
require.False(t, *gotB.P2P)
require.Equal(t, 6, gotB.HomeDERP)
require.NotNil(t, gotB.DERPLatency)
require.Equal(t, 70*time.Millisecond, *gotB.DERPLatency)
all := store.GetAll(agentID)
require.Len(t, all, 1)
require.Contains(t, all, peerB)
})
t.Run("DisconnectedDeletes", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(15 * time.Millisecond),
})
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{Status: tailnetproto.TelemetryEvent_DISCONNECTED})
require.Nil(t, store.Get(agentID, peerID))
})
t.Run("StaleEntryEvicted", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(20 * time.Millisecond),
})
entry := store.Get(agentID, peerID)
require.NotNil(t, entry)
entry.LastUpdatedAt = time.Now().Add(-3 * time.Minute)
require.Nil(t, store.Get(agentID, peerID))
require.Nil(t, store.Get(agentID, peerID))
})
t.Run("Delete", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentID := uuid.New()
peerID := uuid.New()
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(15 * time.Millisecond),
})
store.Delete(agentID, peerID)
require.Nil(t, store.Get(agentID, peerID))
})
t.Run("ConcurrentAccess", func(t *testing.T) {
t.Parallel()
store := coderd.NewPeerNetworkTelemetryStore()
agentIDs := make([]uuid.UUID, 8)
for i := range agentIDs {
agentIDs[i] = uuid.New()
}
peerIDs := make([]uuid.UUID, 16)
for i := range peerIDs {
peerIDs[i] = uuid.New()
}
const (
goroutines = 8
iterations = 100
)
var wg sync.WaitGroup
wg.Add(goroutines)
for g := 0; g < goroutines; g++ {
go func(worker int) {
defer wg.Done()
for i := 0; i < iterations; i++ {
agentID := agentIDs[(worker+i)%len(agentIDs)]
peerID := peerIDs[(worker*iterations+i)%len(peerIDs)]
store.Update(agentID, peerID, &tailnetproto.TelemetryEvent{
Status: tailnetproto.TelemetryEvent_CONNECTED,
P2PLatency: durationpb.New(time.Duration(i+1) * time.Millisecond),
HomeDerp: int32(worker + 1), //nolint:gosec // test data, worker is small
})
_ = store.Get(agentID, peerID)
_ = store.GetAll(agentID)
if i%10 == 0 {
store.Delete(agentID, peerID)
}
}
}(g)
}
wg.Wait()
})
}
@@ -0,0 +1,152 @@
package provisionerdserver_test
import (
"context"
"database/sql"
"encoding/json"
"net"
"testing"
"github.com/google/uuid"
"github.com/sqlc-dev/pqtype"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/coderd/provisionerdserver"
"github.com/coder/coder/v2/provisionerd/proto"
sdkproto "github.com/coder/coder/v2/provisionersdk/proto"
)
func TestCompleteJob_ClosesOpenConnectionLogsOnStopOrDelete(t *testing.T) {
t.Parallel()
ctx := context.Background()
for _, tc := range []struct {
name string
transition database.WorkspaceTransition
reason string
}{
{
name: "Stop",
transition: database.WorkspaceTransitionStop,
reason: "workspace stopped",
},
{
name: "Delete",
transition: database.WorkspaceTransitionDelete,
reason: "workspace deleted",
},
} {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
srv, db, ps, pd := setup(t, false, nil)
user := dbgen.User(t, db, database.User{})
template := dbgen.Template(t, db, database.Template{
Name: "template",
CreatedBy: user.ID,
Provisioner: database.ProvisionerTypeEcho,
OrganizationID: pd.OrganizationID,
})
file := dbgen.File(t, db, database.File{CreatedBy: user.ID})
workspaceTable := dbgen.Workspace(t, db, database.WorkspaceTable{
TemplateID: template.ID,
OwnerID: user.ID,
OrganizationID: pd.OrganizationID,
})
version := dbgen.TemplateVersion(t, db, database.TemplateVersion{
OrganizationID: pd.OrganizationID,
CreatedBy: user.ID,
TemplateID: uuid.NullUUID{
UUID: template.ID,
Valid: true,
},
JobID: uuid.New(),
})
wsBuildID := uuid.New()
job := dbgen.ProvisionerJob(t, db, ps, database.ProvisionerJob{
ID: uuid.New(),
FileID: file.ID,
InitiatorID: user.ID,
Type: database.ProvisionerJobTypeWorkspaceBuild,
Input: must(json.Marshal(provisionerdserver.WorkspaceProvisionJob{
WorkspaceBuildID: wsBuildID,
})),
OrganizationID: pd.OrganizationID,
})
_ = dbgen.WorkspaceBuild(t, db, database.WorkspaceBuild{
ID: wsBuildID,
JobID: job.ID,
WorkspaceID: workspaceTable.ID,
TemplateVersionID: version.ID,
BuildNumber: 2,
Transition: tc.transition,
Reason: database.BuildReasonInitiator,
})
_, err := db.AcquireProvisionerJob(ctx, database.AcquireProvisionerJobParams{
OrganizationID: pd.OrganizationID,
WorkerID: uuid.NullUUID{
UUID: pd.ID,
Valid: true,
},
Types: []database.ProvisionerType{database.ProvisionerTypeEcho},
ProvisionerTags: must(json.Marshal(job.Tags)),
StartedAt: sql.NullTime{Time: job.CreatedAt, Valid: true},
})
require.NoError(t, err)
// Insert an open SSH connection log for the workspace.
ip := pqtype.Inet{
Valid: true,
IPNet: net.IPNet{
IP: net.IPv4(127, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
}
openLog, err := db.UpsertConnectionLog(ctx, database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: dbtime.Now(),
OrganizationID: workspaceTable.OrganizationID,
WorkspaceOwnerID: workspaceTable.OwnerID,
WorkspaceID: workspaceTable.ID,
WorkspaceName: workspaceTable.Name,
AgentName: "agent",
Type: database.ConnectionTypeSsh,
Ip: ip,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
require.NoError(t, err)
require.False(t, openLog.DisconnectTime.Valid)
_, err = srv.CompleteJob(ctx, &proto.CompletedJob{
JobId: job.ID.String(),
Type: &proto.CompletedJob_WorkspaceBuild_{
WorkspaceBuild: &proto.CompletedJob_WorkspaceBuild{
State: []byte{},
Resources: []*sdkproto.Resource{},
},
},
})
require.NoError(t, err)
rows, err := db.GetConnectionLogsOffset(ctx, database.GetConnectionLogsOffsetParams{WorkspaceID: workspaceTable.ID})
require.NoError(t, err)
require.Len(t, rows, 1)
updated := rows[0].ConnectionLog
require.Equal(t, openLog.ID, updated.ID)
require.True(t, updated.DisconnectTime.Valid)
require.True(t, updated.DisconnectReason.Valid)
require.Equal(t, tc.reason, updated.DisconnectReason.String)
require.False(t, updated.DisconnectTime.Time.Before(updated.ConnectTime), "disconnect_time should never be before connect_time")
})
}
}
@@ -1919,6 +1919,7 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
var workspace database.Workspace
var getWorkspaceError error
var completedAt time.Time
// Execute all database modifications in a transaction
err = s.Database.InTx(func(db database.Store) error {
@@ -1926,6 +1927,8 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
// able to customize the current time from within tests.
now := s.timeNow()
completedAt = now
workspace, getWorkspaceError = db.GetWorkspaceByID(ctx, workspaceBuild.WorkspaceID)
if getWorkspaceError != nil {
s.Logger.Error(ctx,
@@ -2339,6 +2342,51 @@ func (s *server) completeWorkspaceBuildJob(ctx context.Context, job database.Pro
// Post-transaction operations (operations that do not require transactions or
// are external to the database, like audit logging, notifications, etc.)
if workspaceBuild.Transition == database.WorkspaceTransitionStop ||
workspaceBuild.Transition == database.WorkspaceTransitionDelete {
reason := "workspace stopped"
if workspaceBuild.Transition == database.WorkspaceTransitionDelete {
reason = "workspace deleted"
}
agentConnectionTypes := []database.ConnectionType{
database.ConnectionTypeSsh,
database.ConnectionTypeVscode,
database.ConnectionTypeJetbrains,
database.ConnectionTypeReconnectingPty,
database.ConnectionTypeWorkspaceApp,
database.ConnectionTypePortForwarding,
database.ConnectionTypeSystem,
}
//nolint:gocritic // Best-effort cleanup should not depend on RPC context.
sysCtx, cancel := context.WithTimeout(
dbauthz.AsConnectionLogger(s.lifecycleCtx),
5*time.Second,
)
defer cancel()
rowsClosed, closeErr := s.Database.CloseConnectionLogsAndCreateSessions(sysCtx, database.CloseConnectionLogsAndCreateSessionsParams{
WorkspaceID: workspaceBuild.WorkspaceID,
ClosedAt: sql.NullTime{Time: completedAt, Valid: true},
Reason: sql.NullString{String: reason, Valid: true},
Types: agentConnectionTypes,
})
if closeErr != nil {
s.Logger.Warn(ctx, "close open connection logs failed",
slog.F("workspace_id", workspaceBuild.WorkspaceID),
slog.F("workspace_build_id", workspaceBuild.ID),
slog.F("transition", workspaceBuild.Transition),
slog.Error(closeErr),
)
} else if rowsClosed > 0 {
s.Logger.Info(ctx, "closed open connection logs",
slog.F("workspace_id", workspaceBuild.WorkspaceID),
slog.F("rows", rowsClosed),
)
}
}
// audit the outcome of the workspace build
if getWorkspaceError == nil {
// If the workspace has been deleted, notify the owner about it.
@@ -74,6 +74,7 @@ const (
SubjectTypeSystemReadProvisionerDaemons SubjectType = "system_read_provisioner_daemons"
SubjectTypeSystemRestricted SubjectType = "system_restricted"
SubjectTypeSystemOAuth SubjectType = "system_oauth"
SubjectTypeTailnetCoordinator SubjectType = "tailnet_coordinator"
SubjectTypeNotifier SubjectType = "notifier"
SubjectTypeSubAgentAPI SubjectType = "sub_agent_api"
SubjectTypeFileReader SubjectType = "file_reader"
@@ -142,6 +142,46 @@ func ConnectionLogs(ctx context.Context, db database.Store, query string, apiKey
return filter, countFilter, parser.Errors
}
// WorkspaceSessions parses a search query string into database filter
// parameters for the global workspace sessions endpoint.
func WorkspaceSessions(_ context.Context, _ database.Store, query string, _ database.APIKey) (database.GetGlobalWorkspaceSessionsOffsetParams, database.CountGlobalWorkspaceSessionsParams, []codersdk.ValidationError) {
// Always lowercase for all searches.
query = strings.ToLower(query)
values, errors := searchTerms(query, func(term string, values url.Values) error {
values.Add("search", term)
return nil
})
if len(errors) > 0 {
// nolint:exhaustruct // We don't need to initialize these structs because we return an error.
return database.GetGlobalWorkspaceSessionsOffsetParams{}, database.CountGlobalWorkspaceSessionsParams{}, errors
}
parser := httpapi.NewQueryParamParser()
filter := database.GetGlobalWorkspaceSessionsOffsetParams{
WorkspaceOwner: parser.String(values, "", "workspace_owner"),
WorkspaceID: parser.UUID(values, uuid.Nil, "workspace_id"),
StartedAfter: parser.Time3339Nano(values, time.Time{}, "started_after"),
StartedBefore: parser.Time3339Nano(values, time.Time{}, "started_before"),
}
if filter.WorkspaceOwner == "me" {
// The "me" keyword is not supported for workspace_owner in
// global sessions since we filter by workspace owner, not
// the requesting user. Reset to empty to avoid confusion.
filter.WorkspaceOwner = ""
}
// This MUST be kept in sync with the above.
countFilter := database.CountGlobalWorkspaceSessionsParams{
WorkspaceOwner: filter.WorkspaceOwner,
WorkspaceID: filter.WorkspaceID,
StartedAfter: filter.StartedAfter,
StartedBefore: filter.StartedBefore,
}
parser.ErrorExcessParams(values)
return filter, countFilter, parser.Errors
}
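The "MUST be kept in sync" comment guards the mirrored fields by convention only. One hypothetical way to make the invariant structural (a sketch, not what the code above does; `sessionFilter`, `offsetParams`, and `countParams` are illustrative stand-ins for the generated query param types) is to populate the shared fields once and copy them into both structs in a single helper:

```go
package main

import "fmt"

// sessionFilter holds the fields common to the offset and count queries.
type sessionFilter struct {
	WorkspaceOwner string
	WorkspaceID    string
}

// offsetParams mirrors the paginated query's parameters.
type offsetParams struct {
	WorkspaceOwner string
	WorkspaceID    string
	Limit          int
}

// countParams mirrors the count query's parameters.
type countParams struct {
	WorkspaceOwner string
	WorkspaceID    string
}

// buildParams is the single place the shared fields are copied,
// so the two structs cannot silently drift apart.
func buildParams(f sessionFilter) (offsetParams, countParams) {
	return offsetParams{
			WorkspaceOwner: f.WorkspaceOwner,
			WorkspaceID:    f.WorkspaceID,
			Limit:          50,
		}, countParams{
			WorkspaceOwner: f.WorkspaceOwner,
			WorkspaceID:    f.WorkspaceID,
		}
}

func main() {
	o, c := buildParams(sessionFilter{WorkspaceOwner: "alice"})
	fmt.Println(o.WorkspaceOwner == c.WorkspaceOwner && o.WorkspaceID == c.WorkspaceID)
}
```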
func Users(query string) (database.GetUsersParams, []codersdk.ValidationError) {
// Always lowercase for all searches.
query = strings.ToLower(query)
@@ -49,22 +49,30 @@ func init() {
var _ workspaceapps.AgentProvider = (*ServerTailnet)(nil)
// NewServerTailnet creates a new tailnet intended for use by coderd.
// NewServerTailnet creates a new tailnet intended for use by coderd. The
// clientID is used to derive deterministic tailnet IP addresses for the
// server, matching how agents derive IPs from their agent ID.
func NewServerTailnet(
ctx context.Context,
logger slog.Logger,
derpServer *derp.Server,
clientID uuid.UUID,
dialer tailnet.ControlProtocolDialer,
derpForceWebSockets bool,
blockEndpoints bool,
traceProvider trace.TracerProvider,
shortDescription string,
) (*ServerTailnet, error) {
logger = logger.Named("servertailnet")
conn, err := tailnet.NewConn(&tailnet.Options{
Addresses: []netip.Prefix{tailnet.TailscaleServicePrefix.RandomPrefix()},
Addresses: []netip.Prefix{
tailnet.TailscaleServicePrefix.PrefixFromUUID(clientID),
tailnet.CoderServicePrefix.PrefixFromUUID(clientID),
},
DERPForceWebSockets: derpForceWebSockets,
Logger: logger,
BlockEndpoints: blockEndpoints,
ShortDescription: shortDescription,
})
if err != nil {
return nil, xerrors.Errorf("create tailnet conn: %w", err)
@@ -480,10 +480,12 @@ func setupServerTailnetAgent(t *testing.T, agentNum int, opts ...tailnettest.DER
context.Background(),
logger,
derpServer,
uuid.UUID{5},
dialer,
false,
!derpMap.HasSTUN(),
trace.NewNoopTracerProvider(),
"Coder Server",
)
require.NoError(t, err)
@@ -124,6 +124,34 @@ func (api *API) workspaceAgent(rw http.ResponseWriter, r *http.Request) {
return
}
// nolint:gocritic // Intentionally visible to any authorized workspace reader.
connectionLogs, err := getOngoingAgentConnectionsLast24h(
dbauthz.AsSystemRestricted(ctx),
api.Database,
[]uuid.UUID{waws.WorkspaceTable.ID},
[]string{waws.WorkspaceAgent.Name},
)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching workspace agent connections.",
Detail: err.Error(),
})
return
}
// nolint:gocritic // Reading tailnet peering events requires coordinator
// context since they record control-plane state transitions.
peeringEvents, err := api.Database.GetAllTailnetPeeringEventsByPeerID(dbauthz.AsTailnetCoordinator(api.ctx), uuid.NullUUID{UUID: waws.WorkspaceAgent.ID, Valid: true})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching workspace agent peering events.",
Detail: err.Error(),
})
return
}
peerTelemetry := api.PeerNetworkTelemetryStore.GetAll(waws.WorkspaceAgent.ID)
if sessions := mergeWorkspaceConnectionsIntoSessions(waws.WorkspaceAgent.ID, peeringEvents, connectionLogs, api.DERPMap(), peerTelemetry); len(sessions) > 0 {
apiAgent.Sessions = sessions
}
httpapi.Write(ctx, rw, http.StatusOK, apiAgent)
}
@@ -1529,8 +1557,12 @@ func (api *API) workspaceAgentClientCoordinate(rw http.ResponseWriter, r *http.R
go httpapi.HeartbeatClose(ctx, api.Logger, cancel, conn)
defer conn.Close(websocket.StatusNormalClosure, "")
peerName := "client"
if key, ok := httpmw.APIKeyOptional(r); ok {
peerName = key.UserID.String()
}
err = api.TailnetClientService.ServeClient(ctx, version, wsNetConn, tailnet.StreamID{
Name: "client",
Name: peerName,
ID: peerID,
Auth: tailnet.ClientCoordinateeAuth{
AgentID: waws.WorkspaceAgent.ID,
@@ -2340,7 +2372,7 @@ func (api *API) tailnetRPCConn(rw http.ResponseWriter, r *http.Request) {
defer cancel()
go httpapi.HeartbeatClose(ctx, api.Logger, cancel, conn)
err = api.TailnetClientService.ServeClient(ctx, version, wsNetConn, tailnet.StreamID{
Name: "client",
Name: apiKey.UserID.String(),
ID: peerID,
Auth: tailnet.ClientUserCoordinateeAuth{
Auth: &rbacAuthorizer{
@@ -4,6 +4,7 @@ import (
"context"
"database/sql"
"fmt"
"net"
"net/http"
"net/url"
"path"
@@ -444,7 +445,12 @@ func (p *DBTokenProvider) connLogInitRequest(w http.ResponseWriter, r *http.Requ
userID = aReq.apiKey.UserID
}
userAgent := r.UserAgent()
// Strip the port from RemoteAddr (which is "host:port")
// so that database.ParseIP can parse the bare IP address.
ip := r.RemoteAddr
if host, _, err := net.SplitHostPort(ip); err == nil {
ip = host
}
// Approximation of the status code.
// #nosec G115 - Safe conversion as HTTP status code is expected to be within int32 range (typically 100-599)
@@ -479,12 +485,13 @@ func (p *DBTokenProvider) connLogInitRequest(w http.ResponseWriter, r *http.Requ
slog.F("status_code", statusCode),
)
var newOrStale bool
connectionID := uuid.New()
var result database.UpsertWorkspaceAppAuditSessionRow
err := p.Database.InTx(func(tx database.Store) (err error) {
// nolint:gocritic // System context is needed to write audit sessions.
dangerousSystemCtx := dbauthz.AsSystemRestricted(ctx)
newOrStale, err = tx.UpsertWorkspaceAppAuditSession(dangerousSystemCtx, database.UpsertWorkspaceAppAuditSessionParams{
result, err = tx.UpsertWorkspaceAppAuditSession(dangerousSystemCtx, database.UpsertWorkspaceAppAuditSessionParams{
// Config.
StaleIntervalMS: p.WorkspaceAppAuditSessionTimeout.Milliseconds(),
@@ -499,6 +506,10 @@ func (p *DBTokenProvider) connLogInitRequest(w http.ResponseWriter, r *http.Requ
StatusCode: statusCode,
StartedAt: aReq.time,
UpdatedAt: aReq.time,
ConnectionID: uuid.NullUUID{
UUID: connectionID,
Valid: true,
},
})
if err != nil {
return xerrors.Errorf("insert workspace app audit session: %w", err)
@@ -514,15 +525,8 @@ func (p *DBTokenProvider) connLogInitRequest(w http.ResponseWriter, r *http.Requ
return
}
if !newOrStale {
// We either didn't insert a new session, or the session
// didn't timeout due to inactivity.
return
}
connLogger := *p.ConnectionLogger.Load()
err = connLogger.Upsert(ctx, database.UpsertConnectionLogParams{
upsertParams := database.UpsertConnectionLogParams{
ID: uuid.New(),
Time: aReq.time,
OrganizationID: aReq.dbReq.Workspace.OrganizationID,
@@ -530,6 +534,7 @@ func (p *DBTokenProvider) connLogInitRequest(w http.ResponseWriter, r *http.Requ
WorkspaceID: aReq.dbReq.Workspace.ID,
WorkspaceName: aReq.dbReq.Workspace.Name,
AgentName: aReq.dbReq.Agent.Name,
AgentID: uuid.NullUUID{UUID: aReq.dbReq.Agent.ID, Valid: true},
Type: connType,
Code: sql.NullInt32{
Int32: statusCode,
@@ -543,11 +548,23 @@ func (p *DBTokenProvider) connLogInitRequest(w http.ResponseWriter, r *http.Requ
},
SlugOrPort: sql.NullString{Valid: slugOrPort != "", String: slugOrPort},
ConnectionStatus: database.ConnectionStatusConnected,
// N/A
ConnectionID: uuid.NullUUID{},
ConnectionID: result.ConnectionID,
DisconnectReason: sql.NullString{},
})
SessionID: uuid.NullUUID{},
ClientHostname: sql.NullString{},
ShortDescription: sql.NullString{},
}
if !result.NewOrStale {
// Session still active. Bump updated_at on the existing
// connection log via the ON CONFLICT path.
if err := connLogger.Upsert(ctx, upsertParams); err != nil {
logger.Error(ctx, "bump connection log updated_at failed", slog.Error(err))
}
return
}
err = connLogger.Upsert(ctx, upsertParams)
if err != nil {
logger.Error(ctx, "upsert connection log failed", slog.Error(err))
return
@@ -318,7 +318,7 @@ func Test_ResolveRequest(t *testing.T) {
require.Equal(t, codersdk.SignedAppTokenCookie, cookie.Name)
require.Equal(t, req.BasePath, cookie.Path)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, app, database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, app, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
var parsedToken workspaceapps.SignedToken
@@ -398,7 +398,7 @@ func Test_ResolveRequest(t *testing.T) {
require.NotNil(t, token)
require.Zero(t, w.StatusCode)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, app, database.ConnectionTypeWorkspaceApp, secondUser.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, app, database.ConnectionTypeWorkspaceApp, secondUser.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
}
})
@@ -438,7 +438,7 @@ func Test_ResolveRequest(t *testing.T) {
require.NotZero(t, rw.Code)
require.NotEqual(t, http.StatusOK, rw.Code)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, app, database.ConnectionTypeWorkspaceApp, uuid.Nil)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, app, database.ConnectionTypeWorkspaceApp, uuid.Nil)
require.Len(t, connLogger.ConnectionLogs(), 1)
} else {
if !assert.True(t, ok) {
@@ -452,7 +452,7 @@ func Test_ResolveRequest(t *testing.T) {
t.Fatalf("expected 200 (or unset) response code, got %d", rw.Code)
}
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, app, database.ConnectionTypeWorkspaceApp, uuid.Nil)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, app, database.ConnectionTypeWorkspaceApp, uuid.Nil)
require.Len(t, connLogger.ConnectionLogs(), 1)
}
_ = w.Body.Close()
@@ -577,7 +577,7 @@ func Test_ResolveRequest(t *testing.T) {
require.Equal(t, token.AgentNameOrID, c.agent)
require.Equal(t, token.WorkspaceID, workspace.ID)
require.Equal(t, token.AgentID, agentID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, token.AppSlugOrPort, database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, token.AppSlugOrPort, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
} else {
require.Nil(t, token)
@@ -663,7 +663,7 @@ func Test_ResolveRequest(t *testing.T) {
require.NoError(t, err)
require.Equal(t, appNameOwner, parsedToken.AppSlugOrPort)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, appNameOwner, database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, appNameOwner, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
})
@@ -736,7 +736,7 @@ func Test_ResolveRequest(t *testing.T) {
require.True(t, ok)
require.Equal(t, req.AppSlugOrPort, token.AppSlugOrPort)
require.Equal(t, "http://127.0.0.1:9090", token.AppURL)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, "9090", database.ConnectionTypePortForwarding, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, "9090", database.ConnectionTypePortForwarding, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
})
@@ -809,7 +809,7 @@ func Test_ResolveRequest(t *testing.T) {
})
require.True(t, ok)
require.Equal(t, req.AppSlugOrPort, token.AppSlugOrPort)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, appNameEndsInS, database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, appNameEndsInS, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
})
@@ -846,7 +846,7 @@ func Test_ResolveRequest(t *testing.T) {
require.Equal(t, req.AgentNameOrID, token.Request.AgentNameOrID)
require.Empty(t, token.AppSlugOrPort)
require.Empty(t, token.AppURL)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, "terminal", database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, "terminal", database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
})
@@ -880,7 +880,7 @@ func Test_ResolveRequest(t *testing.T) {
})
require.False(t, ok)
require.Nil(t, token)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, appNameOwner, database.ConnectionTypeWorkspaceApp, secondUser.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, appNameOwner, database.ConnectionTypeWorkspaceApp, secondUser.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
})
@@ -954,7 +954,7 @@ func Test_ResolveRequest(t *testing.T) {
require.Equal(t, http.StatusSeeOther, w.StatusCode)
// Note that we don't capture the owner UUID here because the apiKey
// check/authorization exits early.
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, appNameOwner, database.ConnectionTypeWorkspaceApp, uuid.Nil)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, appNameOwner, database.ConnectionTypeWorkspaceApp, uuid.Nil)
require.Len(t, connLogger.ConnectionLogs(), 1)
loc, err := w.Location()
@@ -1016,7 +1016,7 @@ func Test_ResolveRequest(t *testing.T) {
w := rw.Result()
defer w.Body.Close()
require.Equal(t, http.StatusBadGateway, w.StatusCode)
assertConnLogContains(t, rw, r, connLogger, workspace, agentNameUnhealthy, appNameAgentUnhealthy, database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, uuid.Nil, agentNameUnhealthy, appNameAgentUnhealthy, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
body, err := io.ReadAll(w.Body)
@@ -1075,7 +1075,7 @@ func Test_ResolveRequest(t *testing.T) {
})
require.True(t, ok, "ResolveRequest failed, should pass even though app is initializing")
require.NotNil(t, token)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, token.AppSlugOrPort, database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, token.AppSlugOrPort, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
})
@@ -1133,7 +1133,7 @@ func Test_ResolveRequest(t *testing.T) {
})
require.True(t, ok, "ResolveRequest failed, should pass even though app is unhealthy")
require.NotNil(t, token)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, token.AppSlugOrPort, database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, token.AppSlugOrPort, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
})
@@ -1170,7 +1170,7 @@ func Test_ResolveRequest(t *testing.T) {
AppRequest: req,
})
require.True(t, ok)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, app, database.ConnectionTypeWorkspaceApp, me.ID)
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, app, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 1)
// Second request, no audit log because the session is active.
@@ -1188,7 +1188,7 @@ func Test_ResolveRequest(t *testing.T) {
AppRequest: req,
})
require.True(t, ok)
require.Len(t, connLogger.ConnectionLogs(), 1, "single connection log, previous session active")
require.Len(t, connLogger.ConnectionLogs(), 2, "one connection log, two upserts (updated_at bump)")
// Third request, session timed out, new audit log.
rw = httptest.NewRecorder()
@@ -1206,8 +1206,8 @@ func Test_ResolveRequest(t *testing.T) {
AppRequest: req,
})
require.True(t, ok)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, app, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 2, "two connection logs, session timed out")
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, app, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 3, "two connection logs, three upserts (session timed out)")
// Fourth request, new IP produces new audit log.
auditableIP = testutil.RandomIPv6(t)
@@ -1225,8 +1225,8 @@ func Test_ResolveRequest(t *testing.T) {
AppRequest: req,
})
require.True(t, ok)
assertConnLogContains(t, rw, r, connLogger, workspace, agentName, app, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 3, "three connection logs, new IP")
assertConnLogContains(t, rw, r, connLogger, workspace, agentID, agentName, app, database.ConnectionTypeWorkspaceApp, me.ID)
require.Len(t, connLogger.ConnectionLogs(), 4, "three connection logs, four upserts (new IP)")
}
})
}
@@ -1267,7 +1267,7 @@ func signedTokenProviderWithConnLogger(t testing.TB, provider workspaceapps.Sign
return &shallowCopy
}
func assertConnLogContains(t *testing.T, rr *httptest.ResponseRecorder, r *http.Request, connLogger *connectionlog.FakeConnectionLogger, workspace codersdk.Workspace, agentName string, slugOrPort string, typ database.ConnectionType, userID uuid.UUID) {
func assertConnLogContains(t *testing.T, rr *httptest.ResponseRecorder, r *http.Request, connLogger *connectionlog.FakeConnectionLogger, workspace codersdk.Workspace, agentID uuid.UUID, agentName string, slugOrPort string, typ database.ConnectionType, userID uuid.UUID) {
t.Helper()
resp := rr.Result()
@@ -1279,6 +1279,7 @@ func assertConnLogContains(t *testing.T, rr *httptest.ResponseRecorder, r *http.
WorkspaceID: workspace.ID,
WorkspaceName: workspace.Name,
AgentName: agentName,
AgentID: uuid.NullUUID{UUID: agentID, Valid: agentID != uuid.Nil},
Type: typ,
Ip: database.ParseIP(r.RemoteAddr),
UserAgent: sql.NullString{Valid: r.UserAgent() != "", String: r.UserAgent()},

@@ -90,6 +90,7 @@ func (api *API) workspaceBuild(rw http.ResponseWriter, r *http.Request) {
data.logSources,
data.templateVersions[0],
nil,
groupConnectionLogsByWorkspaceAndAgent(data.connectionLogs),
)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
@@ -209,6 +210,7 @@ func (api *API) workspaceBuilds(rw http.ResponseWriter, r *http.Request) {
data.logSources,
data.templateVersions,
data.provisionerDaemons,
data.connectionLogs,
)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
@@ -300,6 +302,7 @@ func (api *API) workspaceBuildByBuildNumber(rw http.ResponseWriter, r *http.Requ
data.logSources,
data.templateVersions[0],
data.provisionerDaemons,
groupConnectionLogsByWorkspaceAndAgent(data.connectionLogs),
)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
@@ -545,6 +548,7 @@ func (api *API) postWorkspaceBuildsInternal(
[]database.WorkspaceAgentLogSource{},
database.TemplateVersion{},
provisionerDaemons,
nil,
)
if err != nil {
return codersdk.WorkspaceBuild{}, httperror.NewResponseError(
@@ -977,6 +981,7 @@ type workspaceBuildsData struct {
scripts []database.WorkspaceAgentScript
logSources []database.WorkspaceAgentLogSource
provisionerDaemons []database.GetEligibleProvisionerDaemonsByProvisionerJobIDsRow
connectionLogs []database.GetOngoingAgentConnectionsLast24hRow
}
func (api *API) workspaceBuildsData(ctx context.Context, workspaceBuilds []database.WorkspaceBuild) (workspaceBuildsData, error) {
@@ -1045,6 +1050,33 @@ func (api *API) workspaceBuildsData(ctx context.Context, workspaceBuilds []datab
return workspaceBuildsData{}, xerrors.Errorf("get workspace agents: %w", err)
}
connectionLogs := []database.GetOngoingAgentConnectionsLast24hRow{}
if len(workspaceBuilds) > 0 && len(agents) > 0 {
workspaceIDSet := make(map[uuid.UUID]struct{}, len(workspaceBuilds))
for _, build := range workspaceBuilds {
workspaceIDSet[build.WorkspaceID] = struct{}{}
}
workspaceIDs := make([]uuid.UUID, 0, len(workspaceIDSet))
for workspaceID := range workspaceIDSet {
workspaceIDs = append(workspaceIDs, workspaceID)
}
agentNameSet := make(map[string]struct{}, len(agents))
for _, agent := range agents {
agentNameSet[agent.Name] = struct{}{}
}
agentNames := make([]string, 0, len(agentNameSet))
for agentName := range agentNameSet {
agentNames = append(agentNames, agentName)
}
// nolint:gocritic // Intentionally visible to any authorized workspace reader.
connectionLogs, err = getOngoingAgentConnectionsLast24h(dbauthz.AsSystemRestricted(ctx), api.Database, workspaceIDs, agentNames)
if err != nil {
return workspaceBuildsData{}, xerrors.Errorf("get ongoing agent connections: %w", err)
}
}
if len(resources) == 0 {
return workspaceBuildsData{
jobs: jobs,
@@ -1108,6 +1140,7 @@ func (api *API) workspaceBuildsData(ctx context.Context, workspaceBuilds []datab
appStatuses: statuses,
scripts: scripts,
logSources: logSources,
connectionLogs: connectionLogs,
provisionerDaemons: pendingJobProvisioners,
}, nil
}
@@ -1125,6 +1158,7 @@ func (api *API) convertWorkspaceBuilds(
agentLogSources []database.WorkspaceAgentLogSource,
templateVersions []database.TemplateVersion,
provisionerDaemons []database.GetEligibleProvisionerDaemonsByProvisionerJobIDsRow,
connectionLogs []database.GetOngoingAgentConnectionsLast24hRow,
) ([]codersdk.WorkspaceBuild, error) {
workspaceByID := map[uuid.UUID]database.Workspace{}
for _, workspace := range workspaces {
@@ -1138,6 +1172,7 @@ func (api *API) convertWorkspaceBuilds(
for _, templateVersion := range templateVersions {
templateVersionByID[templateVersion.ID] = templateVersion
}
connectionLogsByWorkspaceAndAgent := groupConnectionLogsByWorkspaceAndAgent(connectionLogs)
// Should never be nil for API consistency
apiBuilds := []codersdk.WorkspaceBuild{}
@@ -1168,6 +1203,7 @@ func (api *API) convertWorkspaceBuilds(
agentLogSources,
templateVersion,
provisionerDaemons,
connectionLogsByWorkspaceAndAgent,
)
if err != nil {
return nil, xerrors.Errorf("converting workspace build: %w", err)
@@ -1192,6 +1228,7 @@ func (api *API) convertWorkspaceBuild(
agentLogSources []database.WorkspaceAgentLogSource,
templateVersion database.TemplateVersion,
provisionerDaemons []database.GetEligibleProvisionerDaemonsByProvisionerJobIDsRow,
connectionLogsByWorkspaceAndAgent map[uuid.UUID]map[string][]database.GetOngoingAgentConnectionsLast24hRow,
) (codersdk.WorkspaceBuild, error) {
resourcesByJobID := map[uuid.UUID][]database.WorkspaceResource{}
for _, resource := range workspaceResources {
@@ -1259,6 +1296,23 @@ func (api *API) convertWorkspaceBuild(
if err != nil {
return codersdk.WorkspaceBuild{}, xerrors.Errorf("converting workspace agent: %w", err)
}
var agentLogs []database.GetOngoingAgentConnectionsLast24hRow
if connectionLogsByWorkspaceAndAgent != nil {
if byAgent, ok := connectionLogsByWorkspaceAndAgent[build.WorkspaceID]; ok {
agentLogs = byAgent[agent.Name]
}
}
// nolint:gocritic // Reading tailnet peering events requires
// coordinator context.
peeringEvents, err := api.Database.GetAllTailnetPeeringEventsByPeerID(dbauthz.AsTailnetCoordinator(api.ctx), uuid.NullUUID{UUID: agent.ID, Valid: true})
if err != nil {
return codersdk.WorkspaceBuild{}, xerrors.Errorf("getting tailnet peering events: %w", err)
}
peerTelemetry := api.PeerNetworkTelemetryStore.GetAll(agent.ID)
if sessions := mergeWorkspaceConnectionsIntoSessions(agent.ID, peeringEvents, agentLogs, api.DERPMap(), peerTelemetry); len(sessions) > 0 {
apiAgent.Sessions = sessions
}
apiAgents = append(apiAgents, apiAgent)
}
metadata := append(make([]database.WorkspaceResourceMetadatum, 0), metadataByResourceID[resource.ID]...)
@@ -0,0 +1,360 @@
package coderd
import (
"cmp"
"context"
"fmt"
"net/netip"
"slices"
"time"
"github.com/google/uuid"
gProto "google.golang.org/protobuf/proto"
tailcfg "tailscale.com/tailcfg"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/tailnet"
"github.com/coder/coder/v2/tailnet/proto"
)
const (
workspaceAgentConnectionsPerAgentLimit int64 = 50
workspaceAgentConnectionsWindow time.Duration = 24 * time.Hour
// Web app connection logs have updated_at bumped on each token refresh
// (~1/min for HTTP apps). Use 1m30s as the activity window.
workspaceAppActiveWindow time.Duration = 90 * time.Second
)
var workspaceAgentConnectionsTypes = []database.ConnectionType{
database.ConnectionTypeSsh,
database.ConnectionTypeVscode,
database.ConnectionTypeJetbrains,
database.ConnectionTypeReconnectingPty,
database.ConnectionTypeWorkspaceApp,
database.ConnectionTypePortForwarding,
}
func getOngoingAgentConnectionsLast24h(ctx context.Context, db database.Store, workspaceIDs []uuid.UUID, agentNames []string) ([]database.GetOngoingAgentConnectionsLast24hRow, error) {
return db.GetOngoingAgentConnectionsLast24h(ctx, database.GetOngoingAgentConnectionsLast24hParams{
WorkspaceIds: workspaceIDs,
AgentNames: agentNames,
Types: workspaceAgentConnectionsTypes,
Since: dbtime.Now().Add(-workspaceAgentConnectionsWindow),
AppActiveSince: dbtime.Now().Add(-workspaceAppActiveWindow),
PerAgentLimit: workspaceAgentConnectionsPerAgentLimit,
})
}
func groupConnectionLogsByWorkspaceAndAgent(logs []database.GetOngoingAgentConnectionsLast24hRow) map[uuid.UUID]map[string][]database.GetOngoingAgentConnectionsLast24hRow {
byWorkspaceAndAgent := make(map[uuid.UUID]map[string][]database.GetOngoingAgentConnectionsLast24hRow)
for _, l := range logs {
byAgent, ok := byWorkspaceAndAgent[l.WorkspaceID]
if !ok {
byAgent = make(map[string][]database.GetOngoingAgentConnectionsLast24hRow)
byWorkspaceAndAgent[l.WorkspaceID] = byAgent
}
byAgent[l.AgentName] = append(byAgent[l.AgentName], l)
}
return byWorkspaceAndAgent
}
func connectionFromLog(log database.GetOngoingAgentConnectionsLast24hRow) codersdk.WorkspaceConnection {
connectTime := log.ConnectTime
var ip *netip.Addr
if log.Ip.Valid {
if addr, ok := netip.AddrFromSlice(log.Ip.IPNet.IP); ok {
addr = addr.Unmap()
ip = &addr
}
}
conn := codersdk.WorkspaceConnection{
IP: ip,
Status: codersdk.ConnectionStatusOngoing,
CreatedAt: connectTime,
ConnectedAt: &connectTime,
Type: codersdk.ConnectionType(log.Type),
}
if log.SlugOrPort.Valid {
conn.Detail = log.SlugOrPort.String
}
if log.ClientHostname.Valid {
conn.ClientHostname = log.ClientHostname.String
}
if log.ShortDescription.Valid {
conn.ShortDescription = log.ShortDescription.String
}
return conn
}
type peeringRecord struct {
agentID uuid.UUID
controlEvents []database.TailnetPeeringEvent
connectionLogs []database.GetOngoingAgentConnectionsLast24hRow
peerTelemetry *PeerNetworkTelemetry
}
// mergeWorkspaceConnectionsIntoSessions groups ongoing connections into
// sessions. Connections are grouped by ClientHostname when available
// (so that SSH, Coder Desktop, and IDE connections from the same machine
// become one expandable session), falling back to IP when hostname is
// unknown. Live sessions don't have a session_id yet; the grouping is
// computed at query time.
//
// This function combines three data sources:
// - peerTelemetry: live per-peer network telemetry for real-time status
// - peeringEvents: DB-persisted control plane events for historical status
// - connectionLogs: application-layer connection records (ssh, vscode, etc.)
func mergeWorkspaceConnectionsIntoSessions(
agentID uuid.UUID,
peeringEvents []database.TailnetPeeringEvent,
connectionLogs []database.GetOngoingAgentConnectionsLast24hRow,
derpMap *tailcfg.DERPMap,
peerTelemetry map[uuid.UUID]*PeerNetworkTelemetry,
) []codersdk.WorkspaceSession {
if len(peeringEvents) == 0 && len(connectionLogs) == 0 {
return nil
}
// Build flat connections using peering events and tunnel peers.
connections := mergeConnectionsFlat(agentID, peeringEvents, connectionLogs, derpMap, peerTelemetry)
// Group by ClientHostname when available, otherwise by IP.
// This ensures connections from the same machine (e.g. SSH +
// Coder Desktop + IDE) collapse into a single session even if
// they use different tailnet IPs.
groups := make(map[string][]codersdk.WorkspaceConnection)
for _, conn := range connections {
var key string
if conn.ClientHostname != "" {
key = "host:" + conn.ClientHostname
} else if conn.IP != nil {
key = "ip:" + conn.IP.String()
}
groups[key] = append(groups[key], conn)
}
// Convert to sessions.
var sessions []codersdk.WorkspaceSession
for _, conns := range groups {
if len(conns) == 0 {
continue
}
sessions = append(sessions, codersdk.WorkspaceSession{
// No ID for live sessions (ephemeral grouping).
IP: conns[0].IP,
ClientHostname: conns[0].ClientHostname,
ShortDescription: conns[0].ShortDescription,
Status: deriveSessionStatus(conns),
StartedAt: earliestTime(conns),
Connections: conns,
})
}
// Sort sessions by hostname first, then IP for stable ordering.
slices.SortFunc(sessions, func(a, b codersdk.WorkspaceSession) int {
if c := cmp.Compare(a.ClientHostname, b.ClientHostname); c != 0 {
return c
}
aIP, bIP := "", ""
if a.IP != nil {
aIP = a.IP.String()
}
if b.IP != nil {
bIP = b.IP.String()
}
return cmp.Compare(aIP, bIP)
})
return sessions
}
// mergeConnectionsFlat combines DB peering events, connection logs, and
// per-peer network telemetry into a unified view. Peering events provide
// persisted control plane history, connection logs provide the
// application-layer type (ssh, vscode, etc.), and telemetry provides
// real-time network status. Entries are correlated by tailnet IP address.
func mergeConnectionsFlat(
agentID uuid.UUID,
peeringEvents []database.TailnetPeeringEvent,
connectionLogs []database.GetOngoingAgentConnectionsLast24hRow,
derpMap *tailcfg.DERPMap,
peerTelemetry map[uuid.UUID]*PeerNetworkTelemetry,
) []codersdk.WorkspaceConnection {
agentAddr := tailnet.CoderServicePrefix.AddrFromUUID(agentID)
// Build peering records from DB events, keyed by peering ID.
peeringRecords := make(map[string]*peeringRecord)
for _, pe := range peeringEvents {
record, ok := peeringRecords[string(pe.PeeringID)]
if !ok {
record = &peeringRecord{
agentID: agentID,
}
peeringRecords[string(pe.PeeringID)] = record
}
record.controlEvents = append(record.controlEvents, pe)
}
var connections []codersdk.WorkspaceConnection
for _, log := range connectionLogs {
if !log.Ip.Valid {
connections = append(connections, connectionFromLog(log))
continue
}
clientIP, ok := netip.AddrFromSlice(log.Ip.IPNet.IP)
if !ok || clientIP.Is4() {
connections = append(connections, connectionFromLog(log))
continue
}
peeringID := tailnet.PeeringIDFromAddrs(agentAddr, clientIP)
record, ok := peeringRecords[string(peeringID)]
if !ok {
record = &peeringRecord{
agentID: agentID,
}
peeringRecords[string(peeringID)] = record
}
record.connectionLogs = append(record.connectionLogs, log)
}
// Apply network telemetry per peer to ongoing connections.
for clientID, telemetry := range peerTelemetry {
peeringID := tailnet.PeeringIDFromUUIDs(agentID, clientID)
record, ok := peeringRecords[string(peeringID)]
if !ok {
continue
}
record.peerTelemetry = telemetry
}
for _, record := range peeringRecords {
connections = append(connections, connectionFromRecord(record, derpMap))
}
// Sort by creation time
slices.SortFunc(connections, func(a, b codersdk.WorkspaceConnection) int {
return b.CreatedAt.Compare(a.CreatedAt) // Newest first.
})
return connections
}
func connectionFromRecord(record *peeringRecord, derpMap *tailcfg.DERPMap) codersdk.WorkspaceConnection {
slices.SortFunc(record.controlEvents, func(a, b database.TailnetPeeringEvent) int {
return a.OccurredAt.Compare(b.OccurredAt)
})
slices.SortFunc(record.connectionLogs, func(a, b database.GetOngoingAgentConnectionsLast24hRow) int {
return a.ConnectTime.Compare(b.ConnectTime)
})
conn := codersdk.WorkspaceConnection{
Status: codersdk.ConnectionStatusOngoing,
}
for _, ce := range record.controlEvents {
if conn.CreatedAt.IsZero() {
conn.CreatedAt = ce.OccurredAt
}
switch ce.EventType {
case database.TailnetPeeringEventTypePeerUpdateLost:
conn.Status = codersdk.ConnectionStatusControlLost
case database.TailnetPeeringEventTypePeerUpdateDisconnected:
conn.Status = codersdk.ConnectionStatusCleanDisconnected
conn.EndedAt = &ce.OccurredAt
case database.TailnetPeeringEventTypeRemovedTunnel:
conn.Status = codersdk.ConnectionStatusCleanDisconnected
conn.EndedAt = &ce.OccurredAt
case database.TailnetPeeringEventTypeAddedTunnel:
clientIP := tailnet.CoderServicePrefix.AddrFromUUID(ce.SrcPeerID.UUID)
conn.IP = &clientIP
case database.TailnetPeeringEventTypePeerUpdateNode:
if ce.SrcPeerID.Valid && ce.SrcPeerID.UUID != record.agentID && ce.Node != nil {
pNode := new(proto.Node)
if err := gProto.Unmarshal(ce.Node, pNode); err == nil {
conn.ClientHostname = pNode.Hostname
conn.ShortDescription = pNode.ShortDescription
}
}
}
}
for _, log := range record.connectionLogs {
if conn.CreatedAt.IsZero() {
conn.CreatedAt = log.ConnectTime
}
if log.Ip.Valid {
if addr, ok := netip.AddrFromSlice(log.Ip.IPNet.IP); ok {
addr = addr.Unmap()
conn.IP = &addr
}
}
if log.SlugOrPort.Valid {
conn.Detail = log.SlugOrPort.String
}
if log.Type.Valid() {
conn.Type = codersdk.ConnectionType(log.Type)
}
if conn.Status != codersdk.ConnectionStatusControlLost &&
conn.Status != codersdk.ConnectionStatusCleanDisconnected && log.DisconnectTime.Valid {
conn.Status = codersdk.ConnectionStatusClientDisconnected
}
if conn.EndedAt == nil && log.DisconnectTime.Valid {
conn.EndedAt = &log.DisconnectTime.Time
}
}
if record.peerTelemetry == nil {
return conn
}
if record.peerTelemetry.P2P != nil {
p2p := *record.peerTelemetry.P2P
conn.P2P = &p2p
}
if record.peerTelemetry.HomeDERP > 0 {
regionID := record.peerTelemetry.HomeDERP
name := fmt.Sprintf("Unnamed %d", regionID)
if derpMap != nil {
if region, ok := derpMap.Regions[regionID]; ok && region != nil && region.RegionName != "" {
name = region.RegionName
}
}
conn.HomeDERP = &codersdk.WorkspaceConnectionHomeDERP{
ID: regionID,
Name: name,
}
}
if record.peerTelemetry.P2P != nil && *record.peerTelemetry.P2P && record.peerTelemetry.P2PLatency != nil {
ms := float64(*record.peerTelemetry.P2PLatency) / float64(time.Millisecond)
conn.LatencyMS = &ms
} else if record.peerTelemetry.DERPLatency != nil {
ms := float64(*record.peerTelemetry.DERPLatency) / float64(time.Millisecond)
conn.LatencyMS = &ms
}
return conn
}
func deriveSessionStatus(conns []codersdk.WorkspaceConnection) codersdk.WorkspaceConnectionStatus {
for _, c := range conns {
if c.Status == codersdk.ConnectionStatusOngoing {
return codersdk.ConnectionStatusOngoing
}
}
if len(conns) > 0 {
return conns[0].Status
}
return codersdk.ConnectionStatusCleanDisconnected
}
func earliestTime(conns []codersdk.WorkspaceConnection) time.Time {
if len(conns) == 0 {
return time.Time{}
}
earliest := conns[0].CreatedAt
for _, c := range conns[1:] {
if c.CreatedAt.Before(earliest) {
earliest = c.CreatedAt
}
}
return earliest
}
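The hostname-else-IP grouping rule used by mergeWorkspaceConnectionsIntoSessions above can be sketched with simplified types (a standalone illustration under assumed plain-string fields, not the production code):

```go
package main

import "fmt"

type conn struct {
	ClientHostname string
	IP             string // empty when unknown
}

// sessionKey mirrors the grouping rule: prefer the client hostname so
// SSH, Coder Desktop, and IDE connections from one machine collapse
// into a single session; fall back to IP when the hostname is unknown.
func sessionKey(c conn) string {
	if c.ClientHostname != "" {
		return "host:" + c.ClientHostname
	}
	if c.IP != "" {
		return "ip:" + c.IP
	}
	return "" // unknown origin: grouped together as a last resort
}

func main() {
	conns := []conn{
		{ClientHostname: "laptop", IP: "fd7a::1"},
		{ClientHostname: "laptop", IP: "fd7a::2"}, // same machine, different tailnet IP
		{ClientHostname: "", IP: "fd7a::3"},
	}
	groups := map[string][]conn{}
	for _, c := range conns {
		groups[sessionKey(c)] = append(groups[sessionKey(c)], c)
	}
	fmt.Println(len(groups))                // 2 sessions
	fmt.Println(len(groups["host:laptop"])) // 2 connections in the laptop session
}
```

Grouping by hostname before IP is what lets a single client machine with several tailnet addresses still appear as one expandable session.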
@@ -0,0 +1,92 @@
package coderd_test
import (
"context"
"database/sql"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
)
func TestWorkspaceAgentConnections_FromConnectionLogs(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
client, _, _ := coderdtest.NewWithAPI(t, &coderdtest.Options{
Database: db,
Pubsub: ps,
})
user := coderdtest.CreateFirstUser(t, client)
r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
now := dbtime.Now()
// One active SSH connection should be returned.
sshConnID := uuid.New()
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-1 * time.Minute),
OrganizationID: r.Workspace.OrganizationID,
WorkspaceOwnerID: r.Workspace.OwnerID,
WorkspaceID: r.Workspace.ID,
WorkspaceName: r.Workspace.Name,
AgentName: r.Agents[0].Name,
Type: database.ConnectionTypeSsh,
ConnectionID: uuid.NullUUID{UUID: sshConnID, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
// A stale workspace app connection should be filtered out.
// Its last activity falls outside the 90-second app activity
// window (workspaceAppActiveWindow), so this web connection is
// treated as disconnected and excluded from the results.
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-10 * time.Minute),
OrganizationID: r.Workspace.OrganizationID,
WorkspaceOwnerID: r.Workspace.OwnerID,
WorkspaceID: r.Workspace.ID,
WorkspaceName: r.Workspace.Name,
AgentName: r.Agents[0].Name,
Type: database.ConnectionTypeWorkspaceApp,
ConnectionStatus: database.ConnectionStatusConnected,
UserID: uuid.NullUUID{UUID: user.UserID, Valid: true},
UserAgent: sql.NullString{String: "Mozilla/5.0", Valid: true},
SlugOrPort: sql.NullString{String: "code-server", Valid: true},
})
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
workspace, err := client.Workspace(ctx, r.Workspace.ID)
require.NoError(t, err)
require.NotEmpty(t, workspace.LatestBuild.Resources)
require.NotEmpty(t, workspace.LatestBuild.Resources[0].Agents)
agent := workspace.LatestBuild.Resources[0].Agents[0]
require.Equal(t, r.Agents[0].Name, agent.Name)
require.Len(t, agent.Sessions, 1)
require.Equal(t, codersdk.ConnectionStatusOngoing, agent.Sessions[0].Status)
require.NotEmpty(t, agent.Sessions[0].Connections)
require.Equal(t, codersdk.ConnectionTypeSSH, agent.Sessions[0].Connections[0].Type)
require.NotNil(t, agent.Sessions[0].IP)
require.Equal(t, "127.0.0.1", agent.Sessions[0].IP.String())
apiAgent, err := client.WorkspaceAgent(ctx, agent.ID)
require.NoError(t, err)
require.Len(t, apiAgent.Sessions, 1)
require.Equal(t, codersdk.ConnectionTypeSSH, apiAgent.Sessions[0].Connections[0].Type)
}
@@ -857,6 +857,7 @@ func createWorkspace(
[]database.WorkspaceAgentLogSource{},
database.TemplateVersion{},
provisionerDaemons,
nil,
)
if err != nil {
return codersdk.Workspace{}, httperror.NewResponseError(http.StatusInternalServerError, codersdk.Response{
@@ -2579,6 +2580,7 @@ func (api *API) workspaceData(ctx context.Context, workspaces []database.Workspa
data.logSources,
data.templateVersions,
data.provisionerDaemons,
data.connectionLogs,
)
if err != nil {
return workspaceData{}, xerrors.Errorf("convert workspace builds: %w", err)
@@ -0,0 +1,195 @@
package coderd
import (
"net/http"
"net/netip"
"time"
"github.com/google/uuid"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbauthz"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpmw"
"github.com/coder/coder/v2/codersdk"
)
// @Summary Get workspace sessions
// @ID get-workspace-sessions
// @Security CoderSessionToken
// @Tags Workspaces
// @Produce json
// @Param workspace path string true "Workspace ID" format(uuid)
// @Param limit query int false "Page limit"
// @Param offset query int false "Page offset"
// @Success 200 {object} codersdk.WorkspaceSessionsResponse
// @Router /workspaces/{workspace}/sessions [get]
func (api *API) workspaceSessions(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
workspace := httpmw.WorkspaceParam(r)
// Parse pagination from query params.
queryParams := httpapi.NewQueryParamParser()
limit := queryParams.Int(r.URL.Query(), 25, "limit")
offset := queryParams.Int(r.URL.Query(), 0, "offset")
if len(queryParams.Errors) > 0 {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "Invalid query parameters.",
Validations: queryParams.Errors,
})
return
}
// Fetch sessions. Use AsSystemRestricted because the user is
// already authorized to access the workspace via route
// middleware; the ResourceConnectionLog RBAC check would
// incorrectly reject regular workspace owners.
//nolint:gocritic // Workspace access is verified by middleware.
sysCtx := dbauthz.AsSystemRestricted(ctx)
sessions, err := api.Database.GetWorkspaceSessionsOffset(sysCtx, database.GetWorkspaceSessionsOffsetParams{
WorkspaceID: workspace.ID,
LimitCount: int32(limit), //nolint:gosec // query param is validated and bounded
OffsetCount: int32(offset), //nolint:gosec // query param is validated and bounded
StartedAfter: time.Time{},
StartedBefore: time.Time{},
})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching sessions.",
Detail: err.Error(),
})
return
}
// Get total count for pagination.
//nolint:gocritic // Workspace access is verified by middleware.
totalCount, err := api.Database.CountWorkspaceSessions(sysCtx, database.CountWorkspaceSessionsParams{
WorkspaceID: workspace.ID,
StartedAfter: time.Time{},
StartedBefore: time.Time{},
})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error counting sessions.",
Detail: err.Error(),
})
return
}
// Fetch connections for all sessions in one query.
sessionIDs := make([]uuid.UUID, len(sessions))
for i, s := range sessions {
sessionIDs[i] = s.ID
}
var connections []database.ConnectionLog
if len(sessionIDs) > 0 {
//nolint:gocritic // Workspace access is verified by middleware.
connections, err = api.Database.GetConnectionLogsBySessionIDs(sysCtx, sessionIDs)
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Internal error fetching connections.",
Detail: err.Error(),
})
return
}
}
// Group connections by session_id.
connsBySession := make(map[uuid.UUID][]database.ConnectionLog)
for _, conn := range connections {
if conn.SessionID.Valid {
connsBySession[conn.SessionID.UUID] = append(connsBySession[conn.SessionID.UUID], conn)
}
}
// Build response with nested connections.
response := codersdk.WorkspaceSessionsResponse{
Sessions: make([]codersdk.WorkspaceSession, len(sessions)),
Count: totalCount,
}
for i, s := range sessions {
response.Sessions[i] = ConvertDBSessionToSDK(s, connsBySession[s.ID])
}
httpapi.Write(ctx, rw, http.StatusOK, response)
}
// ConvertDBSessionToSDK converts a database workspace session row and its
// connection logs into the SDK representation.
func ConvertDBSessionToSDK(s database.GetWorkspaceSessionsOffsetRow, connections []database.ConnectionLog) codersdk.WorkspaceSession {
id := s.ID
session := codersdk.WorkspaceSession{
ID: &id,
Status: codersdk.ConnectionStatusCleanDisconnected, // Historic sessions are disconnected.
StartedAt: s.StartedAt,
EndedAt: &s.EndedAt,
Connections: make([]codersdk.WorkspaceConnection, len(connections)),
}
// Parse IP.
if s.Ip.Valid {
if addr, ok := netip.AddrFromSlice(s.Ip.IPNet.IP); ok {
addr = addr.Unmap()
session.IP = &addr
}
}
if s.ClientHostname.Valid {
session.ClientHostname = s.ClientHostname.String
}
if s.ShortDescription.Valid {
session.ShortDescription = s.ShortDescription.String
}
for i, conn := range connections {
session.Connections[i] = ConvertConnectionLogToSDK(conn)
}
return session
}
// ConvertConnectionLogToSDK converts a database connection log into the
// SDK representation used within workspace sessions.
func ConvertConnectionLogToSDK(conn database.ConnectionLog) codersdk.WorkspaceConnection {
wc := codersdk.WorkspaceConnection{
Status: codersdk.ConnectionStatusCleanDisconnected,
CreatedAt: conn.ConnectTime,
Type: codersdk.ConnectionType(conn.Type),
}
// Parse IP.
if conn.Ip.Valid {
if addr, ok := netip.AddrFromSlice(conn.Ip.IPNet.IP); ok {
addr = addr.Unmap()
wc.IP = &addr
}
}
if conn.SlugOrPort.Valid {
wc.Detail = conn.SlugOrPort.String
}
if conn.DisconnectTime.Valid {
wc.EndedAt = &conn.DisconnectTime.Time
}
if conn.DisconnectReason.Valid {
wc.DisconnectReason = conn.DisconnectReason.String
}
if conn.Code.Valid {
code := conn.Code.Int32
wc.ExitCode = &code
}
if conn.UserAgent.Valid {
wc.UserAgent = conn.UserAgent.String
}
if conn.ClientHostname.Valid {
wc.ClientHostname = conn.ClientHostname.String
}
if conn.ShortDescription.Valid {
wc.ShortDescription = conn.ShortDescription.String
}
return wc
}
+215
@@ -0,0 +1,215 @@
package coderd_test
import (
"context"
"database/sql"
"net"
"testing"
"time"
"github.com/google/uuid"
"github.com/sqlc-dev/pqtype"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/coderd/database"
"github.com/coder/coder/v2/coderd/database/dbfake"
"github.com/coder/coder/v2/coderd/database/dbgen"
"github.com/coder/coder/v2/coderd/database/dbtestutil"
"github.com/coder/coder/v2/coderd/database/dbtime"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/testutil"
)
func TestWorkspaceSessions_EmptyResponse(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
client, _, _ := coderdtest.NewWithAPI(t, &coderdtest.Options{
Database: db,
Pubsub: ps,
})
user := coderdtest.CreateFirstUser(t, client)
r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
resp, err := client.WorkspaceSessions(ctx, r.Workspace.ID)
require.NoError(t, err)
require.Empty(t, resp.Sessions)
require.Equal(t, int64(0), resp.Count)
}
func TestWorkspaceSessions_WithHistoricSessions(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t, dbtestutil.WithDumpOnFailure())
client, _, _ := coderdtest.NewWithAPI(t, &coderdtest.Options{
Database: db,
Pubsub: ps,
})
user := coderdtest.CreateFirstUser(t, client)
r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
now := dbtime.Now()
// Insert two connected SSH connections from the same IP.
connID1 := uuid.New()
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-30 * time.Minute),
OrganizationID: r.Workspace.OrganizationID,
WorkspaceOwnerID: r.Workspace.OwnerID,
WorkspaceID: r.Workspace.ID,
WorkspaceName: r.Workspace.Name,
AgentName: r.Agents[0].Name,
Type: database.ConnectionTypeSsh,
ConnectionID: uuid.NullUUID{UUID: connID1, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
connID2 := uuid.New()
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-25 * time.Minute),
OrganizationID: r.Workspace.OrganizationID,
WorkspaceOwnerID: r.Workspace.OwnerID,
WorkspaceID: r.Workspace.ID,
WorkspaceName: r.Workspace.Name,
AgentName: r.Agents[0].Name,
Type: database.ConnectionTypeSsh,
ConnectionID: uuid.NullUUID{UUID: connID2, Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
// Close the connections and create sessions atomically.
closedAt := now.Add(-5 * time.Minute)
_, err := db.CloseConnectionLogsAndCreateSessions(context.Background(), database.CloseConnectionLogsAndCreateSessionsParams{
ClosedAt: sql.NullTime{Time: closedAt, Valid: true},
Reason: sql.NullString{String: "workspace stopped", Valid: true},
WorkspaceID: r.Workspace.ID,
Types: []database.ConnectionType{database.ConnectionTypeSsh},
})
require.NoError(t, err)
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
resp, err := client.WorkspaceSessions(ctx, r.Workspace.ID)
require.NoError(t, err)
// CloseConnectionLogsAndCreateSessions groups by IP, so both
// connections from 127.0.0.1 should be in a single session.
require.Equal(t, int64(1), resp.Count)
require.Len(t, resp.Sessions, 1)
require.NotNil(t, resp.Sessions[0].IP)
require.Equal(t, "127.0.0.1", resp.Sessions[0].IP.String())
require.Equal(t, codersdk.ConnectionStatusCleanDisconnected, resp.Sessions[0].Status)
require.Len(t, resp.Sessions[0].Connections, 2)
}
func TestWorkspaceAgentConnections_LiveSessionGrouping(t *testing.T) {
t.Parallel()
db, ps := dbtestutil.NewDB(t)
client, _, _ := coderdtest.NewWithAPI(t, &coderdtest.Options{
Database: db,
Pubsub: ps,
})
user := coderdtest.CreateFirstUser(t, client)
r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
OrganizationID: user.OrganizationID,
OwnerID: user.UserID,
}).WithAgent().Do()
now := dbtime.Now()
// Two ongoing SSH connections from the same IP (127.0.0.1, the
// default in dbgen).
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-2 * time.Minute),
OrganizationID: r.Workspace.OrganizationID,
WorkspaceOwnerID: r.Workspace.OwnerID,
WorkspaceID: r.Workspace.ID,
WorkspaceName: r.Workspace.Name,
AgentName: r.Agents[0].Name,
Type: database.ConnectionTypeSsh,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-1 * time.Minute),
OrganizationID: r.Workspace.OrganizationID,
WorkspaceOwnerID: r.Workspace.OwnerID,
WorkspaceID: r.Workspace.ID,
WorkspaceName: r.Workspace.Name,
AgentName: r.Agents[0].Name,
Type: database.ConnectionTypeSsh,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
})
// One ongoing SSH connection from a different IP (10.0.0.1).
_ = dbgen.ConnectionLog(t, db, database.UpsertConnectionLogParams{
Time: now.Add(-1 * time.Minute),
OrganizationID: r.Workspace.OrganizationID,
WorkspaceOwnerID: r.Workspace.OwnerID,
WorkspaceID: r.Workspace.ID,
WorkspaceName: r.Workspace.Name,
AgentName: r.Agents[0].Name,
Type: database.ConnectionTypeSsh,
ConnectionID: uuid.NullUUID{UUID: uuid.New(), Valid: true},
ConnectionStatus: database.ConnectionStatusConnected,
Ip: pqtype.Inet{
IPNet: net.IPNet{
IP: net.IPv4(10, 0, 0, 1),
Mask: net.IPv4Mask(255, 255, 255, 255),
},
Valid: true,
},
})
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
workspace, err := client.Workspace(ctx, r.Workspace.ID)
require.NoError(t, err)
require.NotEmpty(t, workspace.LatestBuild.Resources)
require.NotEmpty(t, workspace.LatestBuild.Resources[0].Agents)
agent := workspace.LatestBuild.Resources[0].Agents[0]
require.Len(t, agent.Sessions, 2)
// Find which session is which by IP.
var session127, session10 codersdk.WorkspaceSession
for _, s := range agent.Sessions {
require.NotNil(t, s.IP)
switch s.IP.String() {
case "127.0.0.1":
session127 = s
case "10.0.0.1":
session10 = s
default:
t.Fatalf("unexpected session IP: %s", s.IP.String())
}
}
// The 127.0.0.1 session should have 2 connections.
require.Len(t, session127.Connections, 2)
require.Equal(t, codersdk.ConnectionStatusOngoing, session127.Status)
// The 10.0.0.1 session should have 1 connection.
require.Len(t, session10.Connections, 1)
require.Equal(t, codersdk.ConnectionStatusOngoing, session10.Status)
}
+5 -4
@@ -45,10 +45,11 @@ type WorkspaceEvent struct {
type WorkspaceEventKind string
const (
WorkspaceEventKindStateChange WorkspaceEventKind = "state_change"
WorkspaceEventKindStatsUpdate WorkspaceEventKind = "stats_update"
WorkspaceEventKindMetadataUpdate WorkspaceEventKind = "mtd_update"
WorkspaceEventKindAppHealthUpdate WorkspaceEventKind = "app_health"
WorkspaceEventKindStateChange WorkspaceEventKind = "state_change"
WorkspaceEventKindStatsUpdate WorkspaceEventKind = "stats_update"
WorkspaceEventKindMetadataUpdate WorkspaceEventKind = "mtd_update"
WorkspaceEventKindAppHealthUpdate WorkspaceEventKind = "app_health"
WorkspaceEventKindConnectionLogUpdate WorkspaceEventKind = "connection_log_update"
WorkspaceEventKindAgentLifecycleUpdate WorkspaceEventKind = "agt_lifecycle_update"
WorkspaceEventKindAgentConnectionUpdate WorkspaceEventKind = "agt_connection_update"
+1
@@ -46,6 +46,7 @@ const (
ConnectionTypeReconnectingPTY ConnectionType = "reconnecting_pty"
ConnectionTypeWorkspaceApp ConnectionType = "workspace_app"
ConnectionTypePortForwarding ConnectionType = "port_forwarding"
ConnectionTypeSystem ConnectionType = "system"
)
// ConnectionLogStatus is the status of a connection log entry.
+261
@@ -0,0 +1,261 @@
package codersdk
import (
"context"
"encoding/json"
"fmt"
"net/http"
"strconv"
"time"
"github.com/google/uuid"
)
// UserDiagnosticResponse is the top-level response from the operator
// diagnostic endpoint for a single user.
type UserDiagnosticResponse struct {
User DiagnosticUser `json:"user"`
GeneratedAt time.Time `json:"generated_at" format:"date-time"`
TimeWindow DiagnosticTimeWindow `json:"time_window"`
Summary DiagnosticSummary `json:"summary"`
CurrentConnections []DiagnosticConnection `json:"current_connections"`
Workspaces []DiagnosticWorkspace `json:"workspaces"`
Patterns []DiagnosticPattern `json:"patterns"`
}
// DiagnosticUser identifies the user being diagnosed.
type DiagnosticUser struct {
ID uuid.UUID `json:"id" format:"uuid"`
Username string `json:"username"`
Name string `json:"name"`
AvatarURL string `json:"avatar_url"`
Email string `json:"email"`
Roles []string `json:"roles"`
LastSeenAt time.Time `json:"last_seen_at" format:"date-time"`
CreatedAt time.Time `json:"created_at" format:"date-time"`
}
// DiagnosticTimeWindow describes the time range covered by the diagnostic.
type DiagnosticTimeWindow struct {
Start time.Time `json:"start" format:"date-time"`
End time.Time `json:"end" format:"date-time"`
Hours int `json:"hours"`
}
// DiagnosticSummary aggregates connection statistics across the time window.
type DiagnosticSummary struct {
TotalSessions int `json:"total_sessions"`
TotalConnections int `json:"total_connections"`
ActiveConnections int `json:"active_connections"`
ByStatus DiagnosticStatusBreakdown `json:"by_status"`
ByType map[string]int `json:"by_type"`
Network DiagnosticNetworkSummary `json:"network"`
Headline string `json:"headline"`
}
// DiagnosticStatusBreakdown counts sessions by their terminal status.
type DiagnosticStatusBreakdown struct {
Ongoing int `json:"ongoing"`
Clean int `json:"clean"`
Lost int `json:"lost"`
WorkspaceStopped int `json:"workspace_stopped"`
WorkspaceDeleted int `json:"workspace_deleted"`
}
// DiagnosticNetworkSummary contains aggregate network quality metrics.
type DiagnosticNetworkSummary struct {
P2PConnections int `json:"p2p_connections"`
DERPConnections int `json:"derp_connections"`
AvgLatencyMS *float64 `json:"avg_latency_ms"`
P95LatencyMS *float64 `json:"p95_latency_ms"`
PrimaryDERPRegion *string `json:"primary_derp_region"`
}
// DiagnosticConnection describes a single live or historical connection.
type DiagnosticConnection struct {
ID uuid.UUID `json:"id" format:"uuid"`
WorkspaceID uuid.UUID `json:"workspace_id" format:"uuid"`
WorkspaceName string `json:"workspace_name"`
AgentID uuid.UUID `json:"agent_id" format:"uuid"`
AgentName string `json:"agent_name"`
IP string `json:"ip"`
ClientHostname string `json:"client_hostname"`
ShortDescription string `json:"short_description"`
Type ConnectionType `json:"type"`
Detail string `json:"detail"`
Status WorkspaceConnectionStatus `json:"status"`
StartedAt time.Time `json:"started_at" format:"date-time"`
P2P *bool `json:"p2p"`
LatencyMS *float64 `json:"latency_ms"`
HomeDERP *DiagnosticHomeDERP `json:"home_derp"`
Explanation string `json:"explanation"`
}
// DiagnosticHomeDERP identifies a DERP relay region.
type DiagnosticHomeDERP struct {
ID int `json:"id"`
Name string `json:"name"`
}
// DiagnosticWorkspace groups sessions for a single workspace.
type DiagnosticWorkspace struct {
ID uuid.UUID `json:"id" format:"uuid"`
Name string `json:"name"`
OwnerUsername string `json:"owner_username"`
Status string `json:"status"`
TemplateName string `json:"template_name"`
TemplateDisplayName string `json:"template_display_name"`
Health DiagnosticHealth `json:"health"`
HealthReason string `json:"health_reason"`
Sessions []DiagnosticSession `json:"sessions"`
}
// DiagnosticHealth represents workspace health status.
type DiagnosticHealth string
const (
DiagnosticHealthHealthy DiagnosticHealth = "healthy"
DiagnosticHealthDegraded DiagnosticHealth = "degraded"
DiagnosticHealthUnhealthy DiagnosticHealth = "unhealthy"
DiagnosticHealthInactive DiagnosticHealth = "inactive"
)
// DiagnosticSession represents a client session with one or more connections.
type DiagnosticSession struct {
ID uuid.UUID `json:"id" format:"uuid"`
WorkspaceID uuid.UUID `json:"workspace_id" format:"uuid"`
WorkspaceName string `json:"workspace_name"`
AgentName string `json:"agent_name"`
IP string `json:"ip"`
ClientHostname string `json:"client_hostname"`
ShortDescription string `json:"short_description"`
StartedAt time.Time `json:"started_at" format:"date-time"`
EndedAt *time.Time `json:"ended_at" format:"date-time"`
DurationSeconds *float64 `json:"duration_seconds"`
Status WorkspaceConnectionStatus `json:"status"`
DisconnectReason string `json:"disconnect_reason"`
Explanation string `json:"explanation"`
Network DiagnosticSessionNetwork `json:"network"`
Connections []DiagnosticSessionConn `json:"connections"`
Timeline []DiagnosticTimelineEvent `json:"timeline"`
}
// DiagnosticSessionNetwork holds per-session network quality info.
type DiagnosticSessionNetwork struct {
P2P *bool `json:"p2p"`
AvgLatencyMS *float64 `json:"avg_latency_ms"`
HomeDERP *string `json:"home_derp"`
}
// DiagnosticSessionConn represents a single connection within a session.
type DiagnosticSessionConn struct {
ID uuid.UUID `json:"id" format:"uuid"`
Type ConnectionType `json:"type"`
Detail string `json:"detail"`
ConnectedAt time.Time `json:"connected_at" format:"date-time"`
DisconnectedAt *time.Time `json:"disconnected_at" format:"date-time"`
Status WorkspaceConnectionStatus `json:"status"`
ExitCode *int32 `json:"exit_code"`
Explanation string `json:"explanation"`
}
// DiagnosticTimelineEventKind enumerates timeline event types.
type DiagnosticTimelineEventKind string
const (
DiagnosticTimelineEventTunnelCreated DiagnosticTimelineEventKind = "tunnel_created"
DiagnosticTimelineEventTunnelRemoved DiagnosticTimelineEventKind = "tunnel_removed"
DiagnosticTimelineEventNodeUpdate DiagnosticTimelineEventKind = "node_update"
DiagnosticTimelineEventPeerDisconnected DiagnosticTimelineEventKind = "peer_disconnected"
DiagnosticTimelineEventPeerLost DiagnosticTimelineEventKind = "peer_lost"
DiagnosticTimelineEventPeerRecovered DiagnosticTimelineEventKind = "peer_recovered"
DiagnosticTimelineEventConnectionOpened DiagnosticTimelineEventKind = "connection_opened"
DiagnosticTimelineEventConnectionClosed DiagnosticTimelineEventKind = "connection_closed"
DiagnosticTimelineEventDERPFallback DiagnosticTimelineEventKind = "derp_fallback"
DiagnosticTimelineEventP2PEstablished DiagnosticTimelineEventKind = "p2p_established"
DiagnosticTimelineEventLatencySpike DiagnosticTimelineEventKind = "latency_spike"
DiagnosticTimelineEventWorkspaceStateChange DiagnosticTimelineEventKind = "workspace_state_change"
)
// DiagnosticTimelineEvent records a point-in-time event within a session.
type DiagnosticTimelineEvent struct {
Timestamp time.Time `json:"timestamp" format:"date-time"`
Kind DiagnosticTimelineEventKind `json:"kind"`
Description string `json:"description"`
Metadata map[string]any `json:"metadata"`
Severity ConnectionDiagnosticSeverity `json:"severity"`
}
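These timeline events feed the per-peer classification described in the commit message for `classifyStatusFromTimeline`: lost/recovered state is tracked in a map keyed by peer ID, so one peer recovering cannot mask another peer that stayed lost. A simplified sketch of that idea, with a trimmed event shape (the real function lives in `enterprise/coderd/diagnostic` and differs in detail):

```go
package main

import "fmt"

// event is a trimmed timeline event: just a kind and the short peer ID
// it refers to (hypothetical shape; real events carry more metadata).
type event struct {
	kind string // e.g. "peer_lost", "peer_recovered"
	peer string
}

// anyPeerStillLost replays the timeline, tracking lost state per peer.
// A peer_recovered event clears only that peer's lost flag.
func anyPeerStillLost(timeline []event) bool {
	lost := make(map[string]bool)
	for _, ev := range timeline {
		switch ev.kind {
		case "peer_lost":
			lost[ev.peer] = true
		case "peer_recovered":
			delete(lost, ev.peer)
		}
	}
	return len(lost) > 0
}

func main() {
	tl := []event{
		{kind: "peer_lost", peer: "a"},
		{kind: "peer_lost", peer: "b"},
		{kind: "peer_recovered", peer: "b"}, // b recovers; a is still lost
	}
	fmt.Println(anyPeerStillLost(tl))
}
```

With global booleans instead of the map, the final `peer_recovered` for `b` would have cleared the lost flag for both peers, producing the false `clean_disconnected` the fix removes.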
// ConnectionDiagnosticSeverity represents event or pattern severity.
type ConnectionDiagnosticSeverity string
const (
ConnectionDiagnosticSeverityInfo ConnectionDiagnosticSeverity = "info"
ConnectionDiagnosticSeverityWarning ConnectionDiagnosticSeverity = "warning"
ConnectionDiagnosticSeverityError ConnectionDiagnosticSeverity = "error"
ConnectionDiagnosticSeverityCritical ConnectionDiagnosticSeverity = "critical"
)
// DiagnosticPatternType enumerates recognized connection patterns.
type DiagnosticPatternType string
const (
DiagnosticPatternDeviceSleep DiagnosticPatternType = "device_sleep"
DiagnosticPatternWorkspaceAutostart DiagnosticPatternType = "workspace_autostart"
DiagnosticPatternNetworkPolicy DiagnosticPatternType = "network_policy"
DiagnosticPatternAgentCrash DiagnosticPatternType = "agent_crash"
DiagnosticPatternLatencyDegradation DiagnosticPatternType = "latency_degradation"
DiagnosticPatternDERPFallback DiagnosticPatternType = "derp_fallback"
DiagnosticPatternCleanUsage DiagnosticPatternType = "clean_usage"
DiagnosticPatternUnknownDrops DiagnosticPatternType = "unknown_drops"
)
// DiagnosticPattern describes a detected pattern across sessions.
type DiagnosticPattern struct {
ID uuid.UUID `json:"id" format:"uuid"`
Type DiagnosticPatternType `json:"type"`
Severity ConnectionDiagnosticSeverity `json:"severity"`
AffectedSessions int `json:"affected_sessions"`
TotalSessions int `json:"total_sessions"`
Title string `json:"title"`
Description string `json:"description"`
Commonalities DiagnosticPatternCommonality `json:"commonalities"`
Recommendation string `json:"recommendation"`
}
// DiagnosticPatternCommonality captures shared attributes of affected sessions.
type DiagnosticPatternCommonality struct {
ConnectionTypes []string `json:"connection_types"`
ClientDescriptions []string `json:"client_descriptions"`
DurationRange *DiagnosticDurationRange `json:"duration_range"`
DisconnectReasons []string `json:"disconnect_reasons"`
TimeOfDayRange *string `json:"time_of_day_range"`
}
// DiagnosticDurationRange is a min/max pair of seconds.
type DiagnosticDurationRange struct {
MinSeconds float64 `json:"min_seconds"`
MaxSeconds float64 `json:"max_seconds"`
}
// UserDiagnostic fetches the operator diagnostic report for a user.
func (c *Client) UserDiagnostic(ctx context.Context, username string, hours int) (UserDiagnosticResponse, error) {
res, err := c.Request(ctx, http.MethodGet,
fmt.Sprintf("/api/v2/connectionlog/diagnostics/%s", username),
nil,
func(r *http.Request) {
q := r.URL.Query()
q.Set("hours", strconv.Itoa(hours))
r.URL.RawQuery = q.Encode()
},
)
if err != nil {
return UserDiagnosticResponse{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return UserDiagnosticResponse{}, ReadBodyAsError(res)
}
var resp UserDiagnosticResponse
return resp, json.NewDecoder(res.Body).Decode(&resp)
}
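`UserDiagnostic` assembles its request path from the username and passes `hours` as a query parameter via a request-mutating option. A standalone sketch of the resulting URL shape using only the standard library (the real client goes through `c.Request`; `diagnosticURL` and the base URL here are illustrative, not part of the SDK):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// diagnosticURL mirrors how UserDiagnostic builds its request: a fixed
// path containing the username, plus an "hours" query parameter.
func diagnosticURL(base, username string, hours int) string {
	u, err := url.Parse(base)
	if err != nil {
		panic(err)
	}
	u.Path = "/api/v2/connectionlog/diagnostics/" + username
	q := u.Query()
	q.Set("hours", strconv.Itoa(hours))
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(diagnosticURL("https://coder.example.com", "alice", 24))
}
```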
+1 -1
@@ -100,7 +100,7 @@ Examples:
ctx, cancel := context.WithTimeoutCause(ctx, 5*time.Minute, xerrors.New("MCP handler timeout after 5 min"))
defer cancel()
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace)
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace, "AI tool call - bash")
if err != nil {
return WorkspaceBashResult{}, err
}
+8 -7
@@ -1444,7 +1444,7 @@ var WorkspaceLS = Tool[WorkspaceLSArgs, WorkspaceLSResponse]{
},
UserClientOptional: true,
Handler: func(ctx context.Context, deps Deps, args WorkspaceLSArgs) (WorkspaceLSResponse, error) {
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace)
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace, "AI tool call - ls")
if err != nil {
return WorkspaceLSResponse{}, err
}
@@ -1509,7 +1509,7 @@ var WorkspaceReadFile = Tool[WorkspaceReadFileArgs, WorkspaceReadFileResponse]{
},
UserClientOptional: true,
Handler: func(ctx context.Context, deps Deps, args WorkspaceReadFileArgs) (WorkspaceReadFileResponse, error) {
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace)
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace, "AI tool call - read file")
if err != nil {
return WorkspaceReadFileResponse{}, err
}
@@ -1582,7 +1582,7 @@ content you are trying to write, then re-encode it properly.
},
UserClientOptional: true,
Handler: func(ctx context.Context, deps Deps, args WorkspaceWriteFileArgs) (codersdk.Response, error) {
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace)
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace, "AI tool call - write file")
if err != nil {
return codersdk.Response{}, err
}
@@ -1644,7 +1644,7 @@ var WorkspaceEditFile = Tool[WorkspaceEditFileArgs, codersdk.Response]{
},
UserClientOptional: true,
Handler: func(ctx context.Context, deps Deps, args WorkspaceEditFileArgs) (codersdk.Response, error) {
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace)
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace, "AI tool call - edit file")
if err != nil {
return codersdk.Response{}, err
}
@@ -1721,7 +1721,7 @@ var WorkspaceEditFiles = Tool[WorkspaceEditFilesArgs, codersdk.Response]{
},
UserClientOptional: true,
Handler: func(ctx context.Context, deps Deps, args WorkspaceEditFilesArgs) (codersdk.Response, error) {
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace)
conn, err := newAgentConn(ctx, deps.coderClient, args.Workspace, "AI tool call - edit files")
if err != nil {
return codersdk.Response{}, err
}
@@ -2156,7 +2156,7 @@ func NormalizeWorkspaceInput(input string) string {
// newAgentConn returns a connection to the agent specified by the workspace,
// which must be in the format [owner/]workspace[.agent].
func newAgentConn(ctx context.Context, client *codersdk.Client, workspace string) (workspacesdk.AgentConn, error) {
func newAgentConn(ctx context.Context, client *codersdk.Client, workspace string, shortDescription string) (workspacesdk.AgentConn, error) {
workspaceName := NormalizeWorkspaceInput(workspace)
_, workspaceAgent, err := findWorkspaceAndAgent(ctx, client, workspaceName)
if err != nil {
@@ -2176,7 +2176,8 @@ func newAgentConn(ctx context.Context, client *codersdk.Client, workspace string
wsClient := workspacesdk.New(client)
conn, err := wsClient.DialAgent(ctx, workspaceAgent.ID, &workspacesdk.DialAgentOptions{
BlockEndpoints: false,
BlockEndpoints: false,
ShortDescription: shortDescription,
})
if err != nil {
return nil, xerrors.Errorf("failed to dial agent: %w", err)
+65
@@ -7,6 +7,7 @@ import (
"io"
"net/http"
"net/http/cookiejar"
"net/netip"
"strings"
"time"
@@ -176,8 +177,72 @@ type WorkspaceAgent struct {
// of the `coder_script` resource. It's only referenced by old clients.
// Deprecated: Remove in the future!
StartupScriptBehavior WorkspaceAgentStartupScriptBehavior `json:"startup_script_behavior"`
Sessions []WorkspaceSession `json:"sessions,omitempty"`
}
// WorkspaceConnectionHomeDERP identifies the DERP relay region
// used as the agent's home relay.
type WorkspaceConnectionHomeDERP struct {
ID int `json:"id"`
Name string `json:"name"`
}
type WorkspaceConnection struct {
IP *netip.Addr `json:"ip,omitempty"`
Status WorkspaceConnectionStatus `json:"status"`
CreatedAt time.Time `json:"created_at" format:"date-time"`
ConnectedAt *time.Time `json:"connected_at,omitempty" format:"date-time"`
EndedAt *time.Time `json:"ended_at,omitempty" format:"date-time"`
Type ConnectionType `json:"type"`
// Detail is the app slug or port number for workspace_app and port_forwarding connections.
Detail string `json:"detail,omitempty"`
// ClientHostname is the hostname of the client that connected to the agent. Self-reported by the client.
ClientHostname string `json:"client_hostname,omitempty"`
// ShortDescription is the human-readable short description of the connection. Self-reported by the client.
ShortDescription string `json:"short_description,omitempty"`
// P2P indicates a direct peer-to-peer connection (true) or
// DERP relay (false). Nil if telemetry unavailable.
P2P *bool `json:"p2p,omitempty"`
// LatencyMS is the most recent round-trip latency in
// milliseconds. Uses P2P latency when direct, DERP otherwise.
LatencyMS *float64 `json:"latency_ms,omitempty"`
// HomeDERP is the DERP region metadata for the agent's home relay.
HomeDERP *WorkspaceConnectionHomeDERP `json:"home_derp,omitempty"`
// DisconnectReason is the reason the connection was closed.
DisconnectReason string `json:"disconnect_reason,omitempty"`
// ExitCode is the exit code of the SSH session.
ExitCode *int32 `json:"exit_code,omitempty"`
// UserAgent is the HTTP user agent string from web connections.
UserAgent string `json:"user_agent,omitempty"`
}
// WorkspaceSession represents a client's session containing one or more connections.
// Live sessions are grouped by IP at query time; historic sessions have a database ID.
type WorkspaceSession struct {
ID *uuid.UUID `json:"id,omitempty"` // nil for live sessions
IP *netip.Addr `json:"ip,omitempty"`
ClientHostname string `json:"client_hostname,omitempty"`
ShortDescription string `json:"short_description,omitempty"`
Status WorkspaceConnectionStatus `json:"status"`
StartedAt time.Time `json:"started_at" format:"date-time"`
EndedAt *time.Time `json:"ended_at,omitempty" format:"date-time"`
Connections []WorkspaceConnection `json:"connections"`
}
type WorkspaceConnectionStatus string
const (
// ConnectionStatusOngoing is the status of a connection that has started but not finished yet.
ConnectionStatusOngoing WorkspaceConnectionStatus = "ongoing"
// ConnectionStatusControlLost is a connection where we lost contact with the client at the Tailnet Coordinator.
ConnectionStatusControlLost WorkspaceConnectionStatus = "control_lost"
// ConnectionStatusClientDisconnected is a connection where the client disconnected without a clean Tailnet Coordinator disconnect.
ConnectionStatusClientDisconnected WorkspaceConnectionStatus = "client_disconnected"
// ConnectionStatusCleanDisconnected is a connection that cleanly disconnected at both the Tailnet Coordinator and the client.
ConnectionStatusCleanDisconnected WorkspaceConnectionStatus = "clean_disconnected"
)
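Per the `WorkspaceSession` doc comment, live sessions are grouped by IP at query time and carry no database ID, while historic sessions do. A small sketch of how a consumer can tell them apart (trimmed local types stand in for the SDK's `WorkspaceSession` and `*uuid.UUID`; this is illustrative, not SDK code):

```go
package main

import "fmt"

// session is a trimmed WorkspaceSession: live sessions have no database
// ID (they are assembled on the fly from open connections).
type session struct {
	id     *string // stand-in for *uuid.UUID
	status string
}

// describe distinguishes live from historic sessions by the nil ID,
// as the SDK comment implies.
func describe(s session) string {
	if s.id == nil {
		return "live (" + s.status + ")"
	}
	return "historic (" + s.status + ")"
}

func main() {
	id := "some-session-id"
	fmt.Println(describe(session{status: "ongoing"}))
	fmt.Println(describe(session{id: &id, status: "clean_disconnected"}))
}
```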
type WorkspaceAgentLogSource struct {
WorkspaceAgentID uuid.UUID `json:"workspace_agent_id" format:"uuid"`
ID uuid.UUID `json:"id" format:"uuid"`
+3
@@ -175,6 +175,7 @@ func (c *agentConn) ReconnectingPTY(ctx context.Context, id uuid.UUID, height, w
return nil, xerrors.Errorf("workspace agent not reachable in time: %v", ctx.Err())
}
c.SendConnectedTelemetry(c.agentAddress(), tailnet.TelemetryApplicationReconnectingPTY)
conn, err := c.Conn.DialContextTCP(ctx, netip.AddrPortFrom(c.agentAddress(), AgentReconnectingPTYPort))
if err != nil {
return nil, err
@@ -290,6 +291,8 @@ func (c *agentConn) DialContext(ctx context.Context, network string, addr string
port, _ := strconv.ParseUint(rawPort, 10, 16)
ipp := netip.AddrPortFrom(c.agentAddress(), uint16(port))
c.SendConnectedTelemetry(c.agentAddress(), tailnet.TelemetryApplicationPortForward)
switch network {
case "tcp":
return c.Conn.DialContextTCP(ctx, ipp)
+80 -3
@@ -8,6 +8,7 @@ import (
"net/http"
"net/http/cookiejar"
"net/netip"
"net/url"
"os"
"strconv"
"strings"
@@ -188,6 +189,8 @@ type DialAgentOptions struct {
// Whether the client will send network telemetry events.
// Enable instead of Disable so it's initialized to false (in tests).
EnableTelemetry bool
// ShortDescription is the human-readable short description of the connection.
ShortDescription string
}
// RewriteDERPMap rewrites the DERP map to use the configured access URL of the
@@ -236,9 +239,40 @@ func (c *Client) DialAgent(dialCtx context.Context, agentID uuid.UUID, options *
dialer := NewWebsocketDialer(options.Logger, coordinateURL, wsOptions)
clk := quartz.NewReal()
controller := tailnet.NewController(options.Logger, dialer)
controller.ResumeTokenCtrl = tailnet.NewBasicResumeTokenController(options.Logger, clk)
resumeTokenCtrl := tailnet.NewBasicResumeTokenController(options.Logger, clk)
controller.ResumeTokenCtrl = resumeTokenCtrl
// Pre-dial to obtain a server-assigned PeerID. This opens a temporary
// DRPC connection, calls RefreshResumeToken (which now returns the
// PeerID), then closes. The resume token is seeded into the controller
// so subsequent dials reuse the same PeerID.
var addresses []netip.Prefix
var connID uuid.UUID
tokenResp, err := c.preDialPeerID(dialCtx, coordinateURL, wsOptions, options.Logger)
if err != nil {
// Graceful fallback: if the pre-dial fails (e.g. old server),
// use a random IP address like before.
options.Logger.Warn(dialCtx, "failed to pre-dial for peer ID, falling back to random address", slog.Error(err))
ip := tailnet.CoderServicePrefix.RandomAddr()
addresses = []netip.Prefix{netip.PrefixFrom(ip, 128)}
} else {
peerID, parseErr := uuid.FromBytes(tokenResp.PeerId)
if parseErr != nil || peerID == uuid.Nil {
// Server returned a response without a PeerID (old server).
options.Logger.Warn(dialCtx, "server did not return peer ID, falling back to random address")
ip := tailnet.TailscaleServicePrefix.RandomAddr()
addresses = []netip.Prefix{netip.PrefixFrom(ip, 128)}
} else {
connID = peerID
addresses = []netip.Prefix{
tailnet.CoderServicePrefix.PrefixFromUUID(peerID),
}
resumeTokenCtrl.SetInitialToken(tokenResp)
options.Logger.Debug(dialCtx, "obtained server-assigned peer ID",
slog.F("peer_id", peerID.String()))
}
}
ip := tailnet.TailscaleServicePrefix.RandomAddr()
var header http.Header
if headerTransport, ok := c.client.HTTPClient.Transport.(*codersdk.HeaderTransport); ok {
header = headerTransport.Header
@@ -252,7 +286,8 @@ func (c *Client) DialAgent(dialCtx context.Context, agentID uuid.UUID, options *
c.RewriteDERPMap(connInfo.DERPMap)
conn, err := tailnet.NewConn(&tailnet.Options{
Addresses: []netip.Prefix{netip.PrefixFrom(ip, 128)},
ID: connID,
Addresses: addresses,
DERPMap: connInfo.DERPMap,
DERPHeader: &header,
DERPForceWebSockets: connInfo.DERPForceWebSockets,
@@ -261,6 +296,7 @@ func (c *Client) DialAgent(dialCtx context.Context, agentID uuid.UUID, options *
CaptureHook: options.CaptureHook,
ClientType: proto.TelemetryEvent_CLI,
TelemetrySink: telemetrySink,
ShortDescription: options.ShortDescription,
})
if err != nil {
return nil, xerrors.Errorf("create tailnet: %w", err)
@@ -306,6 +342,47 @@ func (c *Client) DialAgent(dialCtx context.Context, agentID uuid.UUID, options *
return agentConn, nil
}
// preDialPeerID opens a temporary DRPC connection to the coordinate endpoint
// and calls RefreshResumeToken to obtain the server-assigned PeerID and an
// initial resume token. The connection is closed before returning. This
// allows the caller to derive tailnet IP addresses from the PeerID before
// creating the tailnet.Conn.
func (*Client) preDialPeerID(
ctx context.Context,
coordinateURL *url.URL,
wsOptions *websocket.DialOptions,
logger slog.Logger,
) (*proto.RefreshResumeTokenResponse, error) {
u := new(url.URL)
*u = *coordinateURL
q := u.Query()
// Use version 2.0 for the pre-dial. RefreshResumeToken was added in
// 2.3 but fails gracefully as "unimplemented" on older servers.
q.Add("version", "2.0")
u.RawQuery = q.Encode()
// nolint:bodyclose
ws, _, err := websocket.Dial(ctx, u.String(), wsOptions)
if err != nil {
return nil, xerrors.Errorf("pre-dial websocket: %w", err)
}
defer ws.Close(websocket.StatusNormalClosure, "pre-dial complete")
client, err := tailnet.NewDRPCClient(
websocket.NetConn(ctx, ws, websocket.MessageBinary),
logger,
)
if err != nil {
return nil, xerrors.Errorf("pre-dial DRPC client: %w", err)
}
resp, err := client.RefreshResumeToken(ctx, &proto.RefreshResumeTokenRequest{})
if err != nil {
return nil, xerrors.Errorf("pre-dial RefreshResumeToken: %w", err)
}
return resp, nil
}
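After `preDialPeerID` returns, `DialAgent` falls back to a random address whenever the response carries no usable PeerID: a parse error or a `uuid.Nil` value both indicate an old server. A stdlib-only sketch of that validity check on the raw bytes (the real code uses `uuid.FromBytes` from `github.com/google/uuid`; `peerIDValid` is a hypothetical helper, not in the codebase):

```go
package main

import (
	"bytes"
	"fmt"
)

// peerIDValid mirrors the fallback decision: a missing, wrong-length,
// or all-zero (uuid.Nil) PeerId means the server did not assign one,
// so the client must fall back to a random tailnet address.
func peerIDValid(peerID []byte) bool {
	if len(peerID) != 16 {
		return false // uuid.FromBytes would fail here
	}
	return !bytes.Equal(peerID, make([]byte, 16)) // reject uuid.Nil
}

func main() {
	fmt.Println(peerIDValid(nil))              // no PeerId in response
	fmt.Println(peerIDValid(make([]byte, 16))) // uuid.Nil
	id := make([]byte, 16)
	id[0] = 0x42
	fmt.Println(peerIDValid(id))
}
```

Treating `uuid.Nil` the same as a parse failure keeps both "old server" shapes on the single fallback path rather than deriving addresses from a zero UUID.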
// @typescript-ignore:WorkspaceAgentReconnectingPTYOpts
type WorkspaceAgentReconnectingPTYOpts struct {
AgentID uuid.UUID
+83
@@ -0,0 +1,83 @@
package codersdk
import (
"context"
"encoding/json"
"fmt"
"net/http"
"strings"
"github.com/google/uuid"
)
// WorkspaceSessionsResponse is the response for listing workspace sessions.
type WorkspaceSessionsResponse struct {
Sessions []WorkspaceSession `json:"sessions"`
Count int64 `json:"count"`
}
// WorkspaceSessions returns the sessions for a workspace.
func (c *Client) WorkspaceSessions(ctx context.Context, workspaceID uuid.UUID) (WorkspaceSessionsResponse, error) {
res, err := c.Request(ctx, http.MethodGet, fmt.Sprintf("/api/v2/workspaces/%s/sessions", workspaceID), nil)
if err != nil {
return WorkspaceSessionsResponse{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return WorkspaceSessionsResponse{}, ReadBodyAsError(res)
}
var resp WorkspaceSessionsResponse
return resp, json.NewDecoder(res.Body).Decode(&resp)
}
// GlobalWorkspaceSession extends WorkspaceSession with workspace
// metadata for the global sessions view.
type GlobalWorkspaceSession struct {
WorkspaceSession
WorkspaceID uuid.UUID `json:"workspace_id" format:"uuid"`
WorkspaceName string `json:"workspace_name"`
WorkspaceOwnerUsername string `json:"workspace_owner_username"`
}
// GlobalWorkspaceSessionsResponse is the response for the global
// workspace sessions endpoint.
type GlobalWorkspaceSessionsResponse struct {
Sessions []GlobalWorkspaceSession `json:"sessions"`
Count int64 `json:"count"`
}
// GlobalWorkspaceSessionsRequest is the request for the global
// workspace sessions endpoint.
type GlobalWorkspaceSessionsRequest struct {
SearchQuery string `json:"q,omitempty"`
Pagination
}
// GlobalWorkspaceSessions returns workspace sessions across all
// workspaces, with optional search filters.
func (c *Client) GlobalWorkspaceSessions(ctx context.Context, req GlobalWorkspaceSessionsRequest) (GlobalWorkspaceSessionsResponse, error) {
res, err := c.Request(ctx, http.MethodGet, "/api/v2/connectionlog/sessions", nil, req.Pagination.asRequestOption(), func(r *http.Request) {
q := r.URL.Query()
var params []string
if req.SearchQuery != "" {
params = append(params, req.SearchQuery)
}
q.Set("q", strings.Join(params, " "))
r.URL.RawQuery = q.Encode()
})
if err != nil {
return GlobalWorkspaceSessionsResponse{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return GlobalWorkspaceSessionsResponse{}, ReadBodyAsError(res)
}
var resp GlobalWorkspaceSessionsResponse
err = json.NewDecoder(res.Body).Decode(&resp)
if err != nil {
return GlobalWorkspaceSessionsResponse{}, err
}
return resp, nil
}
+33
@@ -631,6 +631,39 @@ curl -X GET http://coder-server:8080/api/v2/workspaceagents/{workspaceagent} \
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
+263 -15
@@ -191,6 +191,39 @@ curl -X GET http://coder-server:8080/api/v2/users/{user}/workspace/{workspacenam
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -431,6 +464,39 @@ curl -X GET http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild} \
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -790,6 +856,39 @@ curl -X GET http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild}/res
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -914,6 +1013,32 @@ Status Code **200**
| `»»» script` | string | false | | |
| `»»» start_blocks_login` | boolean | false | | |
| `»»» timeout` | integer | false | | |
| `»» sessions` | array | false | | |
| `»»» client_hostname` | string | false | | |
| `»»» connections` | array | false | | |
| `»»»» client_hostname` | string | false | | Client hostname is the hostname of the client that connected to the agent. Self-reported by the client. |
| `»»»» connected_at` | string(date-time) | false | | |
| `»»»» created_at` | string(date-time) | false | | |
| `»»»» detail` | string | false | | Detail is the app slug or port number for workspace_app and port_forwarding connections. |
| `»»»» disconnect_reason` | string | false | | Disconnect reason is the reason the connection was closed. |
| `»»»» ended_at` | string(date-time) | false | | |
| `»»»» exit_code` | integer | false | | Exit code is the exit code of the SSH session. |
| `»»»» home_derp` | [codersdk.WorkspaceConnectionHomeDERP](schemas.md#codersdkworkspaceconnectionhomederp) | false | | Home derp is the DERP region metadata for the agent's home relay. |
| `»»»»» id` | integer | false | | |
| `»»»»» name` | string | false | | |
| `»»»» ip` | string | false | | |
| `»»»» latency_ms` | number | false | | Latency ms is the most recent round-trip latency in milliseconds. Uses P2P latency when direct, DERP otherwise. |
| `»»»» p2p` | boolean | false | | P2p indicates a direct peer-to-peer connection (true) or DERP relay (false). Nil if telemetry unavailable. |
| `»»»» short_description` | string | false | | Short description is the human-readable short description of the connection. Self-reported by the client. |
| `»»»» status` | [codersdk.WorkspaceConnectionStatus](schemas.md#codersdkworkspaceconnectionstatus) | false | | |
| `»»»» type` | [codersdk.ConnectionType](schemas.md#codersdkconnectiontype) | false | | |
| `»»»» user_agent` | string | false | | User agent is the HTTP user agent string from web connections. |
| `»»» ended_at` | string(date-time) | false | | |
| `»»» id` | string | false | | nil for live sessions |
| `»»» ip` | string | false | | |
| `»»» short_description` | string | false | | |
| `»»» started_at` | string(date-time) | false | | |
| `»»» status` | [codersdk.WorkspaceConnectionStatus](schemas.md#codersdkworkspaceconnectionstatus) | false | | |
| `»» started_at` | string(date-time) | false | | |
| `»» startup_script_behavior` | [codersdk.WorkspaceAgentStartupScriptBehavior](schemas.md#codersdkworkspaceagentstartupscriptbehavior) | false | | Startup script behavior is a legacy field that is deprecated in favor of the `coder_script` resource. It's only referenced by old clients. Deprecated: Remove in the future! |
| `»» status` | [codersdk.WorkspaceAgentStatus](schemas.md#codersdkworkspaceagentstatus) | false | | |
@@ -944,8 +1069,9 @@ Status Code **200**
| `sharing_level` | `authenticated`, `organization`, `owner`, `public` |
| `state` | `complete`, `failure`, `idle`, `working` |
| `lifecycle_state` | `created`, `off`, `ready`, `shutdown_error`, `shutdown_timeout`, `shutting_down`, `start_error`, `start_timeout`, `starting` |
| `status` | `clean_disconnected`, `client_disconnected`, `connected`, `connecting`, `control_lost`, `disconnected`, `ongoing`, `timeout` |
| `type` | `jetbrains`, `port_forwarding`, `reconnecting_pty`, `ssh`, `system`, `vscode`, `workspace_app` |
| `startup_script_behavior` | `blocking`, `non-blocking` |
| `status` | `connected`, `connecting`, `disconnected`, `timeout` |
| `workspace_transition` | `delete`, `start`, `stop` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -1139,6 +1265,39 @@ curl -X GET http://coder-server:8080/api/v2/workspacebuilds/{workspacebuild}/sta
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -1490,6 +1649,36 @@ curl -X GET http://coder-server:8080/api/v2/workspaces/{workspace}/builds \
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -1676,6 +1865,32 @@ Status Code **200**
| `»»»» script` | string | false | | |
| `»»»» start_blocks_login` | boolean | false | | |
| `»»»» timeout` | integer | false | | |
| `»»» sessions` | array | false | | |
| `»»»» client_hostname` | string | false | | |
| `»»»» connections` | array | false | | |
| `»»»»» client_hostname` | string | false | | Client hostname is the hostname of the client that connected to the agent. Self-reported by the client. |
| `»»»»» connected_at` | string(date-time) | false | | |
| `»»»»» created_at` | string(date-time) | false | | |
| `»»»»» detail` | string | false | | Detail is the app slug or port number for workspace_app and port_forwarding connections. |
| `»»»»» disconnect_reason` | string | false | | Disconnect reason is the reason the connection was closed. |
| `»»»»» ended_at` | string(date-time) | false | | |
| `»»»»» exit_code` | integer | false | | Exit code is the exit code of the SSH session. |
| `»»»»» home_derp` | [codersdk.WorkspaceConnectionHomeDERP](schemas.md#codersdkworkspaceconnectionhomederp) | false | | Home derp is the DERP region metadata for the agent's home relay. |
| `»»»»»» id` | integer | false | | |
| `»»»»»» name` | string | false | | |
| `»»»»» ip` | string | false | | |
| `»»»»» latency_ms` | number | false | | Latency ms is the most recent round-trip latency in milliseconds. Uses P2P latency when direct, DERP otherwise. |
| `»»»»» p2p` | boolean | false | | P2p indicates a direct peer-to-peer connection (true) or DERP relay (false). Nil if telemetry unavailable. |
| `»»»»» short_description` | string | false | | Short description is the human-readable short description of the connection. Self-reported by the client. |
| `»»»»» status` | [codersdk.WorkspaceConnectionStatus](schemas.md#codersdkworkspaceconnectionstatus) | false | | |
| `»»»»» type` | [codersdk.ConnectionType](schemas.md#codersdkconnectiontype) | false | | |
| `»»»»» user_agent` | string | false | | User agent is the HTTP user agent string from web connections. |
| `»»»» ended_at` | string(date-time) | false | | |
| `»»»» id` | string | false | | nil for live sessions |
| `»»»» ip` | string | false | | |
| `»»»» short_description` | string | false | | |
| `»»»» started_at` | string(date-time) | false | | |
| `»»»» status` | [codersdk.WorkspaceConnectionStatus](schemas.md#codersdkworkspaceconnectionstatus) | false | | |
| `»»» started_at` | string(date-time) | false | | |
| `»»» startup_script_behavior` | [codersdk.WorkspaceAgentStartupScriptBehavior](schemas.md#codersdkworkspaceagentstartupscriptbehavior) | false | | Startup script behavior is a legacy field that is deprecated in favor of the `coder_script` resource. It's only referenced by old clients. Deprecated: Remove in the future! |
| `»»» status` | [codersdk.WorkspaceAgentStatus](schemas.md#codersdkworkspaceagentstatus) | false | | |
@@ -1710,20 +1925,20 @@ Status Code **200**
#### Enumerated Values
| Property | Value(s) |
|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `status` | `canceled`, `canceling`, `connected`, `connecting`, `deleted`, `deleting`, `disconnected`, `failed`, `pending`, `running`, `starting`, `stopped`, `stopping`, `succeeded`, `timeout` |
| `type` | `template_version_dry_run`, `template_version_import`, `workspace_build` |
| `reason` | `autostart`, `autostop`, `initiator` |
| `health` | `disabled`, `healthy`, `initializing`, `unhealthy` |
| `open_in` | `slim-window`, `tab` |
| `sharing_level` | `authenticated`, `organization`, `owner`, `public` |
| `state` | `complete`, `failure`, `idle`, `working` |
| `lifecycle_state` | `created`, `off`, `ready`, `shutdown_error`, `shutdown_timeout`, `shutting_down`, `start_error`, `start_timeout`, `starting` |
| `startup_script_behavior` | `blocking`, `non-blocking` |
| `workspace_transition` | `delete`, `start`, `stop` |
| `transition` | `delete`, `start`, `stop` |
| Property | Value(s) |
|---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `error_code` | `REQUIRED_TEMPLATE_VARIABLES` |
| `status` | `canceled`, `canceling`, `clean_disconnected`, `client_disconnected`, `connected`, `connecting`, `control_lost`, `deleted`, `deleting`, `disconnected`, `failed`, `ongoing`, `pending`, `running`, `starting`, `stopped`, `stopping`, `succeeded`, `timeout` |
| `type` | `jetbrains`, `port_forwarding`, `reconnecting_pty`, `ssh`, `system`, `template_version_dry_run`, `template_version_import`, `vscode`, `workspace_app`, `workspace_build` |
| `reason` | `autostart`, `autostop`, `initiator` |
| `health` | `disabled`, `healthy`, `initializing`, `unhealthy` |
| `open_in` | `slim-window`, `tab` |
| `sharing_level` | `authenticated`, `organization`, `owner`, `public` |
| `state` | `complete`, `failure`, `idle`, `working` |
| `lifecycle_state` | `created`, `off`, `ready`, `shutdown_error`, `shutdown_timeout`, `shutting_down`, `start_error`, `start_timeout`, `starting` |
| `startup_script_behavior` | `blocking`, `non-blocking` |
| `workspace_transition` | `delete`, `start`, `stop` |
| `transition` | `delete`, `start`, `stop` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -1941,6 +2156,39 @@ curl -X POST http://coder-server:8080/api/v2/workspaces/{workspace}/builds \
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
+263
@@ -301,6 +301,269 @@ curl -X GET http://coder-server:8080/api/v2/connectionlog?limit=0 \
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get user diagnostic report
### Code samples
```shell
# Example request using curl
curl -X GET http://coder-server:8080/api/v2/connectionlog/diagnostics/{username} \
-H 'Accept: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`GET /connectionlog/diagnostics/{username}`
### Parameters
| Name | In | Type | Required | Description |
|------------|-------|---------|----------|------------------------------------------|
| `username` | path | string | true | Username |
| `hours` | query | integer | false | Hours to look back (default 72, max 168) |
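Per the table above, `hours` is an optional query parameter that defaults to 72 and is capped at 168. A small Go sketch of assembling the request URL (the `diagnosticsURL` helper is illustrative, not part of codersdk):

```go
package main

import (
	"fmt"
	"net/url"
)

// diagnosticsURL builds the diagnostics report URL for a user. When
// hours is zero the parameter is omitted and the server falls back to
// its 72-hour default.
func diagnosticsURL(base, username string, hours int) string {
	u, err := url.Parse(base)
	if err != nil {
		panic(err)
	}
	u.Path = "/api/v2/connectionlog/diagnostics/" + url.PathEscape(username)
	if hours > 0 {
		q := u.Query()
		q.Set("hours", fmt.Sprint(hours))
		u.RawQuery = q.Encode()
	}
	return u.String()
}

func main() {
	fmt.Println(diagnosticsURL("http://coder-server:8080", "alice", 24))
}
```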
### Example responses
> 200 Response
```json
{
"current_connections": [
{
"agent_id": "2b1e3b65-2c04-4fa2-a2d7-467901e98978",
"agent_name": "string",
"client_hostname": "string",
"detail": "string",
"explanation": "string",
"home_derp": {
"id": 0,
"name": "string"
},
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing",
"type": "ssh",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
}
],
"generated_at": "2019-08-24T14:15:22Z",
"patterns": [
{
"affected_sessions": 0,
"commonalities": {
"client_descriptions": [
"string"
],
"connection_types": [
"string"
],
"disconnect_reasons": [
"string"
],
"duration_range": {
"max_seconds": 0,
"min_seconds": 0
},
"time_of_day_range": "string"
},
"description": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"recommendation": "string",
"severity": "info",
"title": "string",
"total_sessions": 0,
"type": "device_sleep"
}
],
"summary": {
"active_connections": 0,
"by_status": {
"clean": 0,
"lost": 0,
"ongoing": 0,
"workspace_deleted": 0,
"workspace_stopped": 0
},
"by_type": {
"property1": 0,
"property2": 0
},
"headline": "string",
"network": {
"avg_latency_ms": 0,
"derp_connections": 0,
"p2p_connections": 0,
"p95_latency_ms": 0,
"primary_derp_region": "string"
},
"total_connections": 0,
"total_sessions": 0
},
"time_window": {
"end": "2019-08-24T14:15:22Z",
"hours": 0,
"start": "2019-08-24T14:15:22Z"
},
"user": {
"avatar_url": "string",
"created_at": "2019-08-24T14:15:22Z",
"email": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"last_seen_at": "2019-08-24T14:15:22Z",
"name": "string",
"roles": [
"string"
],
"username": "string"
},
"workspaces": [
{
"health": "healthy",
"health_reason": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"name": "string",
"owner_username": "string",
"sessions": [
{
"agent_name": "string",
"client_hostname": "string",
"connections": [
{
"connected_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnected_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"explanation": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"status": "ongoing",
"type": "ssh"
}
],
"disconnect_reason": "string",
"duration_seconds": 0,
"ended_at": "2019-08-24T14:15:22Z",
"explanation": "string",
"id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",
"ip": "string",
"network": {
"avg_latency_ms": 0,
"home_derp": "string",
"p2p": true
},
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing",
"timeline": [
{
"description": "string",
"kind": "tunnel_created",
"metadata": {
"property1": null,
"property2": null
},
"severity": "info",
"timestamp": "2019-08-24T14:15:22Z"
}
],
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string"
}
],
"status": "string",
"template_display_name": "string",
"template_name": "string"
}
]
}
```
### Responses
| Status | Meaning | Description | Schema |
|--------|---------------------------------------------------------|-------------|------------------------------------------------------------------------------|
| 200 | [OK](https://tools.ietf.org/html/rfc7231#section-6.3.1) | OK | [codersdk.UserDiagnosticResponse](schemas.md#codersdkuserdiagnosticresponse) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get global workspace sessions
### Code samples
```shell
# Example request using curl
curl -X GET http://coder-server:8080/api/v2/connectionlog/sessions?limit=0 \
-H 'Accept: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`GET /connectionlog/sessions`
### Parameters
| Name | In | Type | Required | Description |
|----------|-------|---------|----------|--------------|
| `q` | query | string | false | Search query |
| `limit` | query | integer | true | Page limit |
| `offset` | query | integer | false | Page offset |
### Example responses
> 200 Response
```json
{
"count": 0,
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing",
"workspace_id": "0967198e-ec7b-4c6b-b4d3-f71244cadbe9",
"workspace_name": "string",
"workspace_owner_username": "string"
}
]
}
```
### Responses
| Status | Meaning | Description | Schema |
|--------|---------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------|
| 200 | [OK](https://tools.ietf.org/html/rfc7231#section-6.3.1) | OK | [codersdk.GlobalWorkspaceSessionsResponse](schemas.md#codersdkglobalworkspacesessionsresponse) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get entitlements
### Code samples
+1155 -3
File diff suppressed because it is too large.
+122 -2
@@ -2482,6 +2482,39 @@ curl -X GET http://coder-server:8080/api/v2/templateversions/{templateversion}/d
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -2606,6 +2639,32 @@ Status Code **200**
| `»»» script` | string | false | | |
| `»»» start_blocks_login` | boolean | false | | |
| `»»» timeout` | integer | false | | |
| `»» sessions` | array | false | | |
| `»»» client_hostname` | string | false | | |
| `»»» connections` | array | false | | |
| `»»»» client_hostname` | string | false | | Client hostname is the hostname of the client that connected to the agent. Self-reported by the client. |
| `»»»» connected_at` | string(date-time) | false | | |
| `»»»» created_at` | string(date-time) | false | | |
| `»»»» detail` | string | false | | Detail is the app slug or port number for workspace_app and port_forwarding connections. |
| `»»»» disconnect_reason` | string | false | | Disconnect reason is the reason the connection was closed. |
| `»»»» ended_at` | string(date-time) | false | | |
| `»»»» exit_code` | integer | false | | Exit code is the exit code of the SSH session. |
| `»»»» home_derp` | [codersdk.WorkspaceConnectionHomeDERP](schemas.md#codersdkworkspaceconnectionhomederp) | false | | Home derp is the DERP region metadata for the agent's home relay. |
| `»»»»» id` | integer | false | | |
| `»»»»» name` | string | false | | |
| `»»»» ip` | string | false | | |
| `»»»» latency_ms` | number | false | | Latency ms is the most recent round-trip latency in milliseconds. Uses P2P latency when direct, DERP otherwise. |
| `»»»» p2p` | boolean | false | | P2p indicates a direct peer-to-peer connection (true) or DERP relay (false). Nil if telemetry unavailable. |
| `»»»» short_description` | string | false | | Short description is the human-readable short description of the connection. Self-reported by the client. |
| `»»»» status` | [codersdk.WorkspaceConnectionStatus](schemas.md#codersdkworkspaceconnectionstatus) | false | | |
| `»»»» type` | [codersdk.ConnectionType](schemas.md#codersdkconnectiontype) | false | | |
| `»»»» user_agent` | string | false | | User agent is the HTTP user agent string from web connections. |
| `»»» ended_at` | string(date-time) | false | | |
| `»»» id` | string | false | | nil for live sessions |
| `»»» ip` | string | false | | |
| `»»» short_description` | string | false | | |
| `»»» started_at` | string(date-time) | false | | |
| `»»» status` | [codersdk.WorkspaceConnectionStatus](schemas.md#codersdkworkspaceconnectionstatus) | false | | |
| `»» started_at` | string(date-time) | false | | |
| `»» startup_script_behavior` | [codersdk.WorkspaceAgentStartupScriptBehavior](schemas.md#codersdkworkspaceagentstartupscriptbehavior) | false | | Startup script behavior is a legacy field that is deprecated in favor of the `coder_script` resource. It's only referenced by old clients. Deprecated: Remove in the future! |
| `»» status` | [codersdk.WorkspaceAgentStatus](schemas.md#codersdkworkspaceagentstatus) | false | | |
@@ -2636,8 +2695,9 @@ Status Code **200**
| `sharing_level` | `authenticated`, `organization`, `owner`, `public` |
| `state` | `complete`, `failure`, `idle`, `working` |
| `lifecycle_state` | `created`, `off`, `ready`, `shutdown_error`, `shutdown_timeout`, `shutting_down`, `start_error`, `start_timeout`, `starting` |
| `status` | `clean_disconnected`, `client_disconnected`, `connected`, `connecting`, `control_lost`, `disconnected`, `ongoing`, `timeout` |
| `type` | `jetbrains`, `port_forwarding`, `reconnecting_pty`, `ssh`, `system`, `vscode`, `workspace_app` |
| `startup_script_behavior` | `blocking`, `non-blocking` |
| `status` | `connected`, `connecting`, `disconnected`, `timeout` |
| `workspace_transition` | `delete`, `start`, `stop` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
@@ -3148,6 +3208,39 @@ curl -X GET http://coder-server:8080/api/v2/templateversions/{templateversion}/r
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -3272,6 +3365,32 @@ Status Code **200**
| `»»» script` | string | false | | |
| `»»» start_blocks_login` | boolean | false | | |
| `»»» timeout` | integer | false | | |
| `»» sessions` | array | false | | |
| `»»» client_hostname` | string | false | | |
| `»»» connections` | array | false | | |
| `»»»» client_hostname` | string | false | | Client hostname is the hostname of the client that connected to the agent. Self-reported by the client. |
| `»»»» connected_at` | string(date-time) | false | | |
| `»»»» created_at` | string(date-time) | false | | |
| `»»»» detail` | string | false | | Detail is the app slug or port number for workspace_app and port_forwarding connections. |
| `»»»» disconnect_reason` | string | false | | Disconnect reason is the reason the connection was closed. |
| `»»»» ended_at` | string(date-time) | false | | |
| `»»»» exit_code` | integer | false | | Exit code is the exit code of the SSH session. |
| `»»»» home_derp` | [codersdk.WorkspaceConnectionHomeDERP](schemas.md#codersdkworkspaceconnectionhomederp) | false | | Home derp is the DERP region metadata for the agent's home relay. |
| `»»»»» id` | integer | false | | |
| `»»»»» name` | string | false | | |
| `»»»» ip` | string | false | | |
| `»»»» latency_ms` | number | false | | Latency ms is the most recent round-trip latency in milliseconds. Uses P2P latency when direct, DERP otherwise. |
| `»»»» p2p` | boolean | false | | P2p indicates a direct peer-to-peer connection (true) or DERP relay (false). Nil if telemetry unavailable. |
| `»»»» short_description` | string | false | | Short description is the human-readable short description of the connection. Self-reported by the client. |
| `»»»» status` | [codersdk.WorkspaceConnectionStatus](schemas.md#codersdkworkspaceconnectionstatus) | false | | |
| `»»»» type` | [codersdk.ConnectionType](schemas.md#codersdkconnectiontype) | false | | |
| `»»»» user_agent` | string | false | | User agent is the HTTP user agent string from web connections. |
| `»»» ended_at` | string(date-time) | false | | |
| `»»» id` | string | false | | nil for live sessions |
| `»»» ip` | string | false | | |
| `»»» short_description` | string | false | | |
| `»»» started_at` | string(date-time) | false | | |
| `»»» status` | [codersdk.WorkspaceConnectionStatus](schemas.md#codersdkworkspaceconnectionstatus) | false | | |
| `»» started_at` | string(date-time) | false | | |
| `»» startup_script_behavior` | [codersdk.WorkspaceAgentStartupScriptBehavior](schemas.md#codersdkworkspaceagentstartupscriptbehavior) | false | | Startup script behavior is a legacy field that is deprecated in favor of the `coder_script` resource. It's only referenced by old clients. Deprecated: Remove in the future! |
| `»» status` | [codersdk.WorkspaceAgentStatus](schemas.md#codersdkworkspaceagentstatus) | false | | |
@@ -3302,8 +3421,9 @@ Status Code **200**
| `sharing_level` | `authenticated`, `organization`, `owner`, `public` |
| `state` | `complete`, `failure`, `idle`, `working` |
| `lifecycle_state` | `created`, `off`, `ready`, `shutdown_error`, `shutdown_timeout`, `shutting_down`, `start_error`, `start_timeout`, `starting` |
| `status` | `clean_disconnected`, `client_disconnected`, `connected`, `connecting`, `control_lost`, `disconnected`, `ongoing`, `timeout` |
| `type` | `jetbrains`, `port_forwarding`, `reconnecting_pty`, `ssh`, `system`, `vscode`, `workspace_app` |
| `startup_script_behavior` | `blocking`, `non-blocking` |
| `status` | `connected`, `connecting`, `disconnected`, `timeout` |
| `workspace_transition` | `delete`, `start`, `stop` |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
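When rendering the `status` enum values listed above, it can help to bucket them by severity: `clean_disconnected` represents a clean coordinator disconnect, while `client_disconnected` means the client vanished without one. The exact grouping below is an assumption for display purposes, not part of the API:

```python
# Illustrative severity bucketing for codersdk.WorkspaceConnectionStatus
# values; the groupings are an assumption for display purposes only.
SEVERITY = {
    "connected": "ok",
    "connecting": "ok",
    "ongoing": "ok",
    "clean_disconnected": "inactive",
    "disconnected": "inactive",
    # Client vanished without a clean coordinator disconnect.
    "client_disconnected": "warning",
    "control_lost": "warning",
    "timeout": "warning",
}


def status_severity(status: str) -> str:
    """Return the display severity for a status, or "unknown"."""
    return SEVERITY.get(status, "unknown")
```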
@@ -246,6 +246,36 @@ of the template will be used.
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -551,6 +581,36 @@ curl -X GET http://coder-server:8080/api/v2/users/{user}/workspace/{workspacenam
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -881,6 +941,36 @@ of the template will be used.
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -1172,6 +1262,18 @@ curl -X GET http://coder-server:8080/api/v2/workspaces \
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -1478,6 +1580,36 @@ curl -X GET http://coder-server:8080/api/v2/workspaces/{workspace} \
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -2043,6 +2175,36 @@ curl -X PUT http://coder-server:8080/api/v2/workspaces/{workspace}/dormant \
"timeout": 0
}
],
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
],
"started_at": "2019-08-24T14:15:22Z",
"startup_script_behavior": "blocking",
"status": "connecting",
@@ -2271,6 +2433,78 @@ curl -X GET http://coder-server:8080/api/v2/workspaces/{workspace}/resolve-autos
To perform this operation, you must be authenticated. [Learn more](authentication.md).
## Get workspace sessions
### Code samples
```shell
# Example request using curl
curl -X GET http://coder-server:8080/api/v2/workspaces/{workspace}/sessions \
-H 'Accept: application/json' \
-H 'Coder-Session-Token: API_KEY'
```
`GET /workspaces/{workspace}/sessions`
### Parameters
| Name | In | Type | Required | Description |
|-------------|-------|--------------|----------|--------------|
| `workspace` | path | string(uuid) | true | Workspace ID |
| `limit` | query | integer | false | Page limit |
| `offset` | query | integer | false | Page offset |
### Example responses
> 200 Response
```json
{
"count": 0,
"sessions": [
{
"client_hostname": "string",
"connections": [
{
"client_hostname": "string",
"connected_at": "2019-08-24T14:15:22Z",
"created_at": "2019-08-24T14:15:22Z",
"detail": "string",
"disconnect_reason": "string",
"ended_at": "2019-08-24T14:15:22Z",
"exit_code": 0,
"home_derp": {
"id": 0,
"name": "string"
},
"ip": "string",
"latency_ms": 0,
"p2p": true,
"short_description": "string",
"status": "ongoing",
"type": "ssh",
"user_agent": "string"
}
],
"ended_at": "2019-08-24T14:15:22Z",
"id": "string",
"ip": "string",
"short_description": "string",
"started_at": "2019-08-24T14:15:22Z",
"status": "ongoing"
}
]
}
```
### Responses
| Status | Meaning | Description | Schema |
|--------|---------------------------------------------------------|-------------|------------------------------------------------------------------------------------|
| 200 | [OK](https://tools.ietf.org/html/rfc7231#section-6.3.1) | OK | [codersdk.WorkspaceSessionsResponse](schemas.md#codersdkworkspacesessionsresponse) |
To perform this operation, you must be authenticated. [Learn more](authentication.md).
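The `limit` and `offset` query parameters paginate the result, and `count` in the response body is the total number of sessions. A sketch of paging through every session (the `fetch_page` callable stands in for the authenticated HTTP call and is not part of any SDK):

```python
def fetch_all_sessions(fetch_page, limit=50):
    """Collect every session from GET /workspaces/{workspace}/sessions.

    `fetch_page(limit=..., offset=...)` must return the endpoint's JSON
    body: a dict with "count" (total sessions) and "sessions" (one page).
    """
    sessions = []
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        sessions.extend(page["sessions"])
        offset += limit
        # Stop once we've covered the reported total, or the server
        # returns an empty page (defensive against a changing count).
        if offset >= page["count"] or not page["sessions"]:
            break
    return sessions
```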
## Get workspace timings by ID
### Code samples
File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.