Compare commits


4 Commits

Author SHA1 Message Date
Stephen Kirby e5a74a775d chore: pull in cherry picks for v2.24 (#18674)
Co-authored-by: Danny Kopping <danny@coder.com>
Co-authored-by: Steven Masley <Emyrk@users.noreply.github.com>
Co-authored-by: Bruno Quaresma <bruno@coder.com>
Co-authored-by: Sas Swart <sas.swart.cdk@gmail.com>
Co-authored-by: Susana Ferreira <susana@coder.com>
Co-authored-by: Danielle Maywood <danielle@themaywoods.com>
Co-authored-by: Mathias Fredriksson <mafredri@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Asher <ash@coder.com>
Co-authored-by: Hugo Dutka <hugo@coder.com>
2025-07-01 14:33:21 -05:00
gcp-cherry-pick-bot[bot] de494d0a49 feat: add Coder registry links to template creation and editing (cherry-pick #18680) (#18697)
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Garrett Delfosse <garrett@coder.com>
2025-07-01 23:15:30 +05:00
gcp-cherry-pick-bot[bot] 774792476c fix: stop tearing down non-TTY processes on SSH session end (cherry-pick #18673) (#18676)
Cherry-picked fix: stop tearing down non-TTY processes on SSH session
end (#18673)

(possibly temporary) fix for #18519

Matches OpenSSH for non-tty sessions, where we don't actively terminate
the process.

Adds explicit tracking to the SSH server for these processes so that if
we are shutting down we terminate them: this ensures that we can shut
down quickly to allow shutdown scripts to run. It also ensures our tests
don't leak system resources.

Co-authored-by: Spike Curtis <spike@coder.com>
2025-06-30 23:09:37 +04:00
gcp-cherry-pick-bot[bot] 4a61bbeae4 chore: bump github.com/go-viper/mapstructure/v2 from 2.2.1 to 2.3.0 (cherry-pick #18647) (#18649)
Co-authored-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-27 22:59:21 +05:00
2604 changed files with 63720 additions and 189094 deletions
-218
@@ -1,218 +0,0 @@
# Database Development Patterns
## Database Work Overview
### Database Generation Process
1. Modify SQL files in `coderd/database/queries/`
2. Run `make gen`
3. If errors about audit table, update `enterprise/audit/table.go`
4. Run `make gen` again
5. Run `make lint` to catch any remaining issues
## Migration Guidelines
### Creating Migration Files
**Location**: `coderd/database/migrations/`
**Format**: `{number}_{description}.{up|down}.sql`
- Number must be unique and sequential
- Always include both up and down migrations
### Helper Scripts
| Script | Purpose |
|---------------------------------------------------------------------|-----------------------------------------|
| `./coderd/database/migrations/create_migration.sh "migration name"` | Creates new migration files |
| `./coderd/database/migrations/fix_migration_numbers.sh` | Renumbers migrations to avoid conflicts |
| `./coderd/database/migrations/create_fixture.sh "fixture name"` | Creates test fixtures for migrations |
### Database Query Organization
- **MUST DO**: Make any database change (adding or modifying queries) in the `coderd/database/queries/*.sql` files
- **MUST DO**: Group queries in files by context, e.g. `prebuilds.sql`, `users.sql`, `oauth2.sql`
- After changing any `coderd/database/queries/*.sql` file, run `make gen` to regenerate the corresponding ORM code
## Handling Nullable Fields
Use `sql.NullString`, `sql.NullBool`, etc. for optional database fields:
```go
CodeChallenge: sql.NullString{
	String: params.codeChallenge,
	Valid:  params.codeChallenge != "",
},
```
Set `.Valid = true` when providing values.
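Reading a nullable column back mirrors the write path; a minimal sketch, reusing the `CodeChallenge` field from the example above:
```go
// Treat a NULL code_challenge as the empty string.
var codeChallenge string
if row.CodeChallenge.Valid {
	codeChallenge = row.CodeChallenge.String
}
```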
## Audit Table Updates
If adding fields to auditable types:
1. Update `enterprise/audit/table.go`
2. Add each new field with appropriate action:
- `ActionTrack`: Field should be tracked in audit logs
- `ActionIgnore`: Field should be ignored in audit logs
- `ActionSecret`: Field contains sensitive data
3. Run `make gen` to verify no audit errors
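For orientation, the audit table is essentially a map from auditable resource to per-field actions. A rough, self-contained sketch of that shape (the real `enterprise/audit/table.go` differs in naming and structure):
```go
type Action string

const (
	ActionTrack  Action = "track"  // record old/new values in audit logs
	ActionIgnore Action = "ignore" // omit from audit logs
	ActionSecret Action = "secret" // sensitive; redact the value
)

// Illustrative only: each auditable type maps every field to an action,
// and `make gen` fails if a new field is missing an entry.
var auditableResources = map[string]map[string]Action{
	"database.User": {
		"id":              ActionTrack,
		"updated_at":      ActionIgnore,
		"hashed_password": ActionSecret,
	},
}
```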
## Database Architecture
### Core Components
- **PostgreSQL 13+** recommended for production
- **Migrations** managed with `migrate`
- **Database authorization** through `dbauthz` package
### Authorization Patterns
```go
// Public endpoints needing system access (OAuth2 registration)
app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
// Authenticated endpoints with user context
app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)
// System operations in middleware
roles, err := db.GetAuthorizationUserRoles(dbauthz.AsSystemRestricted(ctx), userID)
```
## Common Database Issues
### Migration Issues
1. **Migration conflicts**: Use `fix_migration_numbers.sh` to renumber
2. **Missing down migration**: Always create both up and down files
3. **Schema inconsistencies**: Verify against existing schema
### Field Handling Issues
1. **Nullable field errors**: Use `sql.Null*` types consistently
2. **Missing audit entries**: Update `enterprise/audit/table.go`
### Query Issues
1. **Query organization**: Group related queries in appropriate files
2. **Generated code errors**: Run `make gen` after query changes
3. **Performance issues**: Add appropriate indexes in migrations
## Database Testing
### Test Database Setup
```go
func TestDatabaseFunction(t *testing.T) {
	ctx := context.Background()
	db := dbtestutil.NewDB(t)

	// Test against a real database; param and expected are
	// placeholders for your query's input and output.
	result, err := db.GetSomething(ctx, param)
	require.NoError(t, err)
	require.Equal(t, expected, result)
}
```
## Best Practices
### Schema Design
1. **Use appropriate data types**: VARCHAR for strings, TIMESTAMP for times
2. **Add constraints**: NOT NULL, UNIQUE, FOREIGN KEY as appropriate
3. **Create indexes**: For frequently queried columns
4. **Consider performance**: Normalize appropriately but avoid over-normalization
### Query Writing
1. **Use parameterized queries**: Prevent SQL injection
2. **Handle errors appropriately**: Check for specific error types
3. **Use transactions**: For related operations that must succeed together (see the sketch after this list)
4. **Optimize queries**: Use EXPLAIN to understand query performance
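For the transaction guideline above, a hedged sketch of grouping related writes (assuming an `InTx`-style helper on the store; the exact signature in this codebase may differ, and `InsertAuditLog`/`auditParams` are stand-in names):
```go
// Run related writes atomically: if either fails, both roll back.
err := db.InTx(func(tx database.Store) error {
	user, err := tx.UpdateUser(ctx, updateParams)
	if err != nil {
		return xerrors.Errorf("update user: %w", err)
	}
	if err := tx.InsertAuditLog(ctx, auditParams(user)); err != nil {
		return xerrors.Errorf("insert audit log: %w", err)
	}
	return nil
}, nil)
```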
### Migration Writing
1. **Make migrations reversible**: Always include down migration
2. **Test migrations**: On copy of production data if possible
3. **Keep migrations small**: One logical change per migration
4. **Document complex changes**: Add comments explaining rationale
## Advanced Patterns
### Complex Queries
```sql
-- Example: Complex join with aggregation
SELECT
u.id,
u.username,
COUNT(w.id) as workspace_count
FROM users u
LEFT JOIN workspaces w ON u.id = w.owner_id
WHERE u.created_at > $1
GROUP BY u.id, u.username
ORDER BY workspace_count DESC;
```
### Conditional Queries
```sql
-- Example: Dynamic filtering
SELECT * FROM oauth2_provider_apps
WHERE
($1::text IS NULL OR name ILIKE '%' || $1 || '%')
AND ($2::uuid IS NULL OR organization_id = $2)
ORDER BY created_at DESC;
```
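On the Go side, the generated function for a query like this takes nullable parameters, and passing an invalid `sql.Null*`/`uuid.NullUUID` value disables the corresponding filter. A sketch with hypothetical sqlc-generated names:
```go
// Hypothetical generated signature for the query above.
apps, err := db.GetOAuth2ProviderApps(ctx, database.GetOAuth2ProviderAppsParams{
	Name:           sql.NullString{String: "my-app", Valid: true},
	OrganizationID: uuid.NullUUID{}, // Valid is false, so no org filter
})
```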
### Audit Patterns
```go
// Example: auditable database operation (illustrative; oldUser is the
// row as fetched before the update, newUser the row after).
func (q *sqlQuerier) UpdateUser(ctx context.Context, arg UpdateUserParams) (User, error) {
	// ... fetch oldUser, apply the update, producing newUser ...

	// Audit the change.
	if auditor := audit.FromContext(ctx); auditor != nil {
		auditor.Record(audit.UserUpdate{
			UserID: arg.ID,
			Old:    oldUser,
			New:    newUser,
		})
	}
	return newUser, nil
}
```
## Debugging Database Issues
### Common Debug Commands
```bash
# Check database connection
make test-postgres
# Run specific database tests
go test ./coderd/database/... -run TestSpecificFunction
# Check query generation
make gen
# Verify audit table
make lint
```
### Debug Techniques
1. **Enable query logging**: Set appropriate log levels
2. **Use database tools**: pgAdmin, psql for direct inspection
3. **Check constraints**: UNIQUE, FOREIGN KEY violations
4. **Analyze performance**: Use EXPLAIN ANALYZE for slow queries
### Troubleshooting Checklist
- [ ] Migration files exist (both up and down)
- [ ] `make gen` run after query changes
- [ ] Audit table updated for new fields
- [ ] Nullable fields use `sql.Null*` types
- [ ] Authorization context appropriate for endpoint type
-157
@@ -1,157 +0,0 @@
# OAuth2 Development Guide
## RFC Compliance Development
### Implementing Standard Protocols
When implementing standard protocols (OAuth2, OpenID Connect, etc.):
1. **Fetch and Analyze Official RFCs**:
- Always read the actual RFC specifications before implementation
- Use WebFetch tool to get current RFC content for compliance verification
- Document RFC requirements in code comments
2. **Default Values Matter**:
- Pay close attention to RFC-specified default values
- Example: RFC 7591 specifies `client_secret_basic` as default, not `client_secret_post`
- Ensure consistency between database migrations and application code
3. **Security Requirements**:
- Follow RFC security considerations precisely
- Example: RFC 7592 prohibits returning registration access tokens in GET responses
- Implement proper error responses per protocol specifications
4. **Validation Compliance**:
- Implement comprehensive validation per RFC requirements
- Support protocol-specific features (e.g., custom schemes for native OAuth2 apps)
- Test edge cases defined in specifications
## OAuth2 Provider Implementation
### OAuth2 Spec Compliance
1. **Follow RFC 6749 for token responses**
- Use `expires_in` (seconds) not `expiry` (timestamp) in token responses
- Return proper OAuth2 error format: `{"error": "code", "error_description": "details"}`
2. **Error Response Format**
- Create OAuth2-compliant error responses for token endpoint
- Use standard error codes: `invalid_client`, `invalid_grant`, `invalid_request`
- Avoid generic error responses for OAuth2 endpoints
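For the token response shape, a minimal struct matching RFC 6749 section 5.1 (illustrative; the actual `codersdk` type may differ):
```go
// expires_in is a lifetime in seconds, not an absolute timestamp.
type oauth2TokenResponse struct {
	AccessToken  string `json:"access_token"`
	TokenType    string `json:"token_type"`
	ExpiresIn    int64  `json:"expires_in"`
	RefreshToken string `json:"refresh_token,omitempty"`
	Scope        string `json:"scope,omitempty"`
}
```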
### PKCE Implementation
- Support both with and without PKCE for backward compatibility
- Use S256 method for code challenge
- Properly validate code_verifier against stored code_challenge (see the sketch below)
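The S256 check itself is small; a sketch of validating a `code_verifier` against the stored challenge per RFC 7636:
```go
import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/base64"
)

// verifyS256 reports whether codeVerifier hashes to storedChallenge
// using the S256 method: BASE64URL(SHA256(verifier)), no padding.
func verifyS256(codeVerifier, storedChallenge string) bool {
	hash := sha256.Sum256([]byte(codeVerifier))
	computed := base64.RawURLEncoding.EncodeToString(hash[:])
	// Constant-time compare avoids leaking timing information.
	return subtle.ConstantTimeCompare([]byte(computed), []byte(storedChallenge)) == 1
}
```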
### UI Authorization Flow
- Use POST requests for consent, not GET with links
- Avoid dependency on referer headers for security decisions
- Support proper state parameter validation
### RFC 8707 Resource Indicators
- Store resource parameters in database for server-side validation (opaque tokens)
- Validate resource consistency between authorization and token requests
- Support audience validation in refresh token flows
- Resource parameter is optional but must be consistent when provided
## OAuth2 Error Handling Pattern
```go
// Define specific OAuth2 errors
var (
	errInvalidPKCE = xerrors.New("invalid code_verifier")
)

// Use OAuth2-compliant error responses
type OAuth2Error struct {
	Error            string `json:"error"`
	ErrorDescription string `json:"error_description,omitempty"`
}

// Return proper OAuth2 errors
if errors.Is(err, errInvalidPKCE) {
	writeOAuth2Error(ctx, rw, http.StatusBadRequest, "invalid_grant", "The PKCE code verifier is invalid")
	return
}
```
## Testing OAuth2 Features
### Test Scripts
Located in `./scripts/oauth2/`:
- `test-mcp-oauth2.sh` - Full automated test suite
- `setup-test-app.sh` - Create test OAuth2 app
- `cleanup-test-app.sh` - Remove test app
- `generate-pkce.sh` - Generate PKCE parameters
- `test-manual-flow.sh` - Manual browser testing
Always run the full test suite after OAuth2 changes:
```bash
./scripts/oauth2/test-mcp-oauth2.sh
```
### RFC Protocol Testing
1. **Compliance Test Coverage**:
- Test all RFC-defined error codes and responses
- Validate proper HTTP status codes for different scenarios
- Test protocol-specific edge cases (URI formats, token formats, etc.)
2. **Security Boundary Testing**:
- Test client isolation and privilege separation
- Verify information disclosure protections
- Test token security and proper invalidation
## Common OAuth2 Issues
1. **OAuth2 endpoints returning wrong error format** - Ensure OAuth2 endpoints return RFC 6749 compliant errors
2. **Resource indicator validation failing** - Ensure database stores and retrieves resource parameters correctly
3. **PKCE tests failing** - Verify both authorization code storage and token exchange handle PKCE fields
4. **RFC compliance failures** - Verify against actual RFC specifications, not assumptions
5. **Authorization context errors in public endpoints** - Use `dbauthz.AsSystemRestricted(ctx)` pattern
6. **Default value mismatches** - Ensure database migrations match application code defaults
7. **Bearer token authentication issues** - Check token extraction precedence and format validation (sketch below)
8. **URI validation failures** - Support both standard schemes and custom schemes per protocol requirements
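For issue 7, header extraction per RFC 6750 is worth getting exactly right; a minimal sketch (the precedence order across header, query, and body parameters is codebase-specific):
```go
// bearerToken extracts a token from the Authorization header per
// RFC 6750, reporting false if the header is absent or malformed.
func bearerToken(r *http.Request) (string, bool) {
	const prefix = "Bearer "
	auth := r.Header.Get("Authorization")
	if len(auth) <= len(prefix) || !strings.EqualFold(auth[:len(prefix)], prefix) {
		return "", false
	}
	return auth[len(prefix):], true
}
```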
## Authorization Context Patterns
```go
// Public endpoints needing system access (OAuth2 registration)
app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
// Authenticated endpoints with user context
app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)
// System operations in middleware
roles, err := db.GetAuthorizationUserRoles(dbauthz.AsSystemRestricted(ctx), userID)
```
## OAuth2/Authentication Work Patterns
- Types go in `codersdk/oauth2.go` or similar
- Handlers go in `coderd/oauth2.go` or `coderd/identityprovider/`
- Database fields need migration + audit table updates
- Always support backward compatibility
## Protocol Implementation Checklist
Before completing OAuth2 or authentication feature work:
- [ ] Verify RFC compliance by reading actual specifications
- [ ] Implement proper error response formats per protocol
- [ ] Add comprehensive validation for all protocol fields
- [ ] Test security boundaries and token handling
- [ ] Update RBAC permissions for new resources
- [ ] Add audit logging support if applicable
- [ ] Create database migrations with proper defaults
- [ ] Add comprehensive test coverage including edge cases
- [ ] Verify linting compliance
- [ ] Test both positive and negative scenarios
- [ ] Document protocol-specific patterns and requirements
-212
@@ -1,212 +0,0 @@
# Testing Patterns and Best Practices
## Testing Best Practices
### Avoiding Race Conditions
1. **Unique Test Identifiers**:
- Never use hardcoded names in concurrent tests
- Use `time.Now().UnixNano()` or similar for unique identifiers
- Example: `fmt.Sprintf("test-client-%s-%d", t.Name(), time.Now().UnixNano())`
2. **Database Constraint Awareness**:
- Understand unique constraints that can cause test conflicts
- Generate unique values for all constrained fields
- Test name isolation prevents cross-test interference
### Testing Patterns
- Use table-driven tests for comprehensive coverage
- Mock external dependencies
- Test both positive and negative cases
- Use `testutil.WaitLong` for timeouts in tests (example below)
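For the `testutil.WaitLong` point, a sketch of polling an asynchronous condition (assuming the codebase's `testutil` duration constants; `client.Workspace` and the status field are stand-ins for whatever you are asserting on):
```go
// Bound the wait with testutil.WaitLong so slow CI runners don't flake.
require.Eventually(t, func() bool {
	ws, err := client.Workspace(ctx, workspaceID)
	return err == nil && ws.LatestBuild.Status == expectedStatus
}, testutil.WaitLong, testutil.IntervalFast)
```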
### Test Package Naming
- **Test packages**: Use `package_test` naming (e.g., `identityprovider_test`) for black-box testing
## RFC Protocol Testing
### Compliance Test Coverage
1. **Test all RFC-defined error codes and responses**
2. **Validate proper HTTP status codes for different scenarios**
3. **Test protocol-specific edge cases** (URI formats, token formats, etc.)
### Security Boundary Testing
1. **Test client isolation and privilege separation**
2. **Verify information disclosure protections**
3. **Test token security and proper invalidation**
## Test Organization
### Test File Structure
```
coderd/
├── oauth2.go # Implementation
├── oauth2_test.go # Main tests
├── oauth2_test_helpers.go # Test utilities
└── oauth2_validation.go # Validation logic
```
### Test Categories
1. **Unit Tests**: Test individual functions in isolation
2. **Integration Tests**: Test API endpoints with database
3. **End-to-End Tests**: Full workflow testing
4. **Race Tests**: Concurrent access testing
## Test Commands
### Running Tests
| Command | Purpose |
|---------|---------|
| `make test` | Run all Go tests |
| `make test RUN=TestFunctionName` | Run specific test |
| `go test -v ./path/to/package -run TestFunctionName` | Run test with verbose output |
| `make test-postgres` | Run tests with Postgres database |
| `make test-race` | Run tests with Go race detector |
| `make test-e2e` | Run end-to-end tests |
### Frontend Testing
| Command | Purpose |
|---------|---------|
| `pnpm test` | Run frontend tests |
| `pnpm check` | Run code checks |
## Common Testing Issues
### Database-Related
1. **SQL type errors** - Use `sql.Null*` types for nullable fields
2. **Race conditions in tests** - Use unique identifiers instead of hardcoded names
### OAuth2 Testing
1. **PKCE tests failing** - Verify both authorization code storage and token exchange handle PKCE fields
2. **Resource indicator validation failing** - Ensure database stores and retrieves resource parameters correctly
### General Issues
1. **Missing newlines** - Ensure files end with a newline character
2. **Package naming errors** - Use `package_test` naming for test files
3. **Log message formatting errors** - Use lowercase, descriptive messages without special characters
## Systematic Testing Approach
### Multi-Issue Problem Solving
When facing multiple failing tests or complex integration issues:
1. **Identify Root Causes**:
- Run failing tests individually to isolate issues
- Use LSP tools to trace through call chains
- Check both compilation and runtime errors
2. **Fix in Logical Order**:
- Address compilation issues first (imports, syntax)
- Fix authorization and RBAC issues next
- Resolve business logic and validation issues
- Handle edge cases and race conditions last
3. **Verification Strategy**:
- Test each fix individually before moving to next issue
- Use `make lint` and `make gen` after database changes
- Verify RFC compliance with actual specifications
- Run comprehensive test suites before considering complete
## Test Data Management
### Unique Test Data
```go
// Good: Unique identifiers prevent conflicts
clientName := fmt.Sprintf("test-client-%s-%d", t.Name(), time.Now().UnixNano())
// Bad: Hardcoded names cause race conditions
clientName := "test-client"
```
### Test Cleanup
```go
func TestSomething(t *testing.T) {
	// Setup
	client := coderdtest.New(t, nil)

	// Test code here

	// Cleanup happens automatically via t.Cleanup() in coderdtest
}
```
## Test Utilities
### Common Test Patterns
```go
// Table-driven tests
tests := []struct {
	name     string
	input    InputType
	expected OutputType
	wantErr  bool
}{
	{
		name:     "valid input",
		input:    validInput,
		expected: expectedOutput,
		wantErr:  false,
	},
	// ... more test cases
}

for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		result, err := functionUnderTest(tt.input)
		if tt.wantErr {
			require.Error(t, err)
			return
		}
		require.NoError(t, err)
		require.Equal(t, tt.expected, result)
	})
}
```
### Test Assertions
```go
// Use testify/require for assertions
require.NoError(t, err)
require.Equal(t, expected, actual)
require.NotNil(t, result)
require.True(t, condition)
```
## Performance Testing
### Load Testing
- Use `scaletest/` directory for load testing scenarios
- Run `./scaletest/scaletest.sh` for performance testing
### Benchmarking
```go
func BenchmarkFunction(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// Function call to benchmark
		_ = functionUnderTest(input)
	}
}
```
Run benchmarks with:
```bash
go test -bench=. -benchmem ./package/path
```
-239
@@ -1,239 +0,0 @@
# Troubleshooting Guide
## Common Issues
### Database Issues
1. **"Audit table entry missing action"**
- **Solution**: Update `enterprise/audit/table.go`
- Add each new field with appropriate action (ActionTrack, ActionIgnore, ActionSecret)
- Run `make gen` to verify no audit errors
2. **SQL type errors**
- **Solution**: Use `sql.Null*` types for nullable fields
- Set `.Valid = true` when providing values
- Example:
```go
CodeChallenge: sql.NullString{
	String: params.codeChallenge,
	Valid:  params.codeChallenge != "",
},
```
### Testing Issues
3. **"package should be X_test"**
- **Solution**: Use `package_test` naming for test files
- Example: `identityprovider_test` for black-box testing
4. **Race conditions in tests**
- **Solution**: Use unique identifiers instead of hardcoded names
- Example: `fmt.Sprintf("test-client-%s-%d", t.Name(), time.Now().UnixNano())`
- Never use hardcoded names in concurrent tests
5. **Missing newlines**
- **Solution**: Ensure files end with a newline character
- Most editors can be configured to add this automatically
### OAuth2 Issues
6. **OAuth2 endpoints returning wrong error format**
- **Solution**: Ensure OAuth2 endpoints return RFC 6749 compliant errors
- Use standard error codes: `invalid_client`, `invalid_grant`, `invalid_request`
- Format: `{"error": "code", "error_description": "details"}`
7. **Resource indicator validation failing**
- **Solution**: Ensure database stores and retrieves resource parameters correctly
- Check both authorization code storage and token exchange handling
8. **PKCE tests failing**
- **Solution**: Verify both authorization code storage and token exchange handle PKCE fields
- Check `CodeChallenge` and `CodeChallengeMethod` field handling
### RFC Compliance Issues
9. **RFC compliance failures**
- **Solution**: Verify against actual RFC specifications, not assumptions
- Use WebFetch tool to get current RFC content for compliance verification
- Read the actual RFC specifications before implementation
10. **Default value mismatches**
- **Solution**: Ensure database migrations match application code defaults
- Example: RFC 7591 specifies `client_secret_basic` as default, not `client_secret_post`
### Authorization Issues
11. **Authorization context errors in public endpoints**
- **Solution**: Use `dbauthz.AsSystemRestricted(ctx)` pattern
- Example:
```go
// Public endpoints needing system access
app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
```
### Authentication Issues
12. **Bearer token authentication issues**
- **Solution**: Check token extraction precedence and format validation
- Ensure proper RFC 6750 Bearer Token Support implementation
13. **URI validation failures**
- **Solution**: Support both standard schemes and custom schemes per protocol requirements
- Native OAuth2 apps may use custom schemes
### General Development Issues
14. **Log message formatting errors**
- **Solution**: Use lowercase, descriptive messages without special characters
- Follow Go logging conventions
## Systematic Debugging Approach
YOU MUST ALWAYS find the root cause of any issue you are debugging.
YOU MUST NEVER fix a symptom or add a workaround instead of finding the root cause, even if it is faster.
### Multi-Issue Problem Solving
When facing multiple failing tests or complex integration issues:
1. **Identify Root Causes**:
- Run failing tests individually to isolate issues
- Use LSP tools to trace through call chains
- Read error messages carefully: check both compilation and runtime errors
- Reproduce consistently: ensure you can reliably reproduce the issue before investigating
- Check recent changes: review git diff and recent commits for anything that could have caused this
- When you don't know, say "I don't understand X" rather than pretending to know
2. **Fix in Logical Order**:
- Address compilation issues first (imports, syntax)
- Fix authorization and RBAC issues next
- Resolve business logic and validation issues
- Handle edge cases and race conditions last
- IF your first fix doesn't work, STOP and re-analyze rather than adding more fixes
3. **Verification Strategy**:
- Always test each fix individually before moving to the next issue
- Verify before continuing: did your test work? If not, form a new hypothesis rather than adding more fixes
- Use `make lint` and `make gen` after database changes
- Verify RFC compliance with actual specifications
- Run comprehensive test suites before considering complete
## Debug Commands
### Useful Debug Commands
| Command | Purpose |
|----------------------------------------------|---------------------------------------|
| `make lint` | Run all linters |
| `make gen` | Generate mocks, database queries |
| `go test -v ./path/to/package -run TestName` | Run specific test with verbose output |
| `go test -race ./...` | Run tests with race detector |
### LSP Debugging
#### Go LSP (Backend)
| Command | Purpose |
|----------------------------------------------------|------------------------------|
| `mcp__go-language-server__definition symbolName` | Find function definition |
| `mcp__go-language-server__references symbolName` | Find all references |
| `mcp__go-language-server__diagnostics filePath` | Check for compilation errors |
| `mcp__go-language-server__hover filePath line col` | Get type information |
#### TypeScript LSP (Frontend)
| Command | Purpose |
|----------------------------------------------------------------------------|------------------------------------|
| `mcp__typescript-language-server__definition symbolName` | Find component/function definition |
| `mcp__typescript-language-server__references symbolName` | Find all component/type usages |
| `mcp__typescript-language-server__diagnostics filePath` | Check for TypeScript errors |
| `mcp__typescript-language-server__hover filePath line col` | Get type information |
| `mcp__typescript-language-server__rename_symbol filePath line col newName` | Rename across codebase |
## Common Error Messages
### Database Errors
**Error**: `pq: relation "oauth2_provider_app_codes" does not exist`
- **Cause**: Missing database migration
- **Solution**: Run database migrations, check migration files
**Error**: `audit table entry missing action for field X`
- **Cause**: New field added without audit table update
- **Solution**: Update `enterprise/audit/table.go`
### Go Compilation Errors
**Error**: `package should be identityprovider_test`
- **Cause**: Test package naming convention violation
- **Solution**: Use `package_test` naming for black-box tests
**Error**: `cannot use X (type Y) as type Z`
- **Cause**: Type mismatch, often with nullable fields
- **Solution**: Use appropriate `sql.Null*` types
### OAuth2 Errors
**Error**: `invalid_client` but client exists
- **Cause**: Authorization context issue
- **Solution**: Use `dbauthz.AsSystemRestricted(ctx)` for public endpoints
**Error**: PKCE validation failing
- **Cause**: Missing PKCE fields in database operations
- **Solution**: Ensure `CodeChallenge` and `CodeChallengeMethod` are handled
## Prevention Strategies
### Before Making Changes
1. **Read the relevant documentation**
2. **Check if similar patterns exist in codebase**
3. **Understand the authorization context requirements**
4. **Plan database changes carefully**
### During Development
1. **Run tests frequently**: `make test`
2. **Use LSP tools for navigation**: Avoid manual searching
3. **Follow RFC specifications precisely**
4. **Update audit tables when adding database fields**
### Before Committing
1. **Run full test suite**: `make test`
2. **Check linting**: `make lint`
3. **Test with race detector**: `make test-race`
## Getting Help
### Internal Resources
- Check existing similar implementations in codebase
- Use LSP tools to understand code relationships
- For Go code: Use `mcp__go-language-server__*` commands
- For TypeScript/React code: Use `mcp__typescript-language-server__*` commands
- Read related test files for expected behavior
### External Resources
- Official RFC specifications for protocol compliance
- Go documentation for language features
- PostgreSQL documentation for database issues
### Debug Information Collection
When reporting issues, include:
1. **Exact error message**
2. **Steps to reproduce**
3. **Relevant code snippets**
4. **Test output (if applicable)**
5. **Environment information** (OS, Go version, etc.)
-227
@@ -1,227 +0,0 @@
# Development Workflows and Guidelines
## Quick Start Checklist for New Features
### Before Starting
- [ ] Run `git pull` to ensure you're on latest code
- [ ] Check if feature touches database - you'll need migrations
- [ ] Check if feature touches audit logs - update `enterprise/audit/table.go`
## Development Server
### Starting Development Mode
- **Use `./scripts/develop.sh` to start Coder in development mode**
- This automatically builds and runs with `--dev` flag and proper access URL
- **⚠️ Do NOT manually run `make build && ./coder server --dev` - use the script instead**
### Development Workflow
1. **Always start with the development script**: `./scripts/develop.sh`
2. **Make changes** to your code
3. **The script will automatically rebuild** and restart as needed
4. **Access the development server** at the URL provided by the script
## Code Style Guidelines
### Go Style
- Follow [Effective Go](https://go.dev/doc/effective_go) and [Go's Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
- Create packages when used during implementation
- Validate abstractions against implementations
- **Test packages**: Use `package_test` naming (e.g., `identityprovider_test`) for black-box testing
### Error Handling
- Use descriptive error messages
- Wrap errors with context
- Propagate errors appropriately
- Use proper error types
- Pattern: `xerrors.Errorf("failed to X: %w", err)`
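In context, that pattern looks like the following (`fetchTemplate` is a stand-in name):
```go
// Wrap with enough context to locate the failure; %w preserves the
// underlying error for errors.Is/errors.As at call sites.
template, err := fetchTemplate(ctx, templateID)
if err != nil {
	return xerrors.Errorf("fetch template %s: %w", templateID, err)
}
```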
## Naming Conventions
- Names MUST tell what code does, not how it's implemented or its history
- Follow Go and TypeScript naming conventions
- When changing code, never document the old behavior or the behavior change
- NEVER use implementation details in names (e.g., "ZodValidator", "MCPWrapper", "JSONParser")
- NEVER use temporal/historical context in names (e.g., "LegacyHandler", "UnifiedTool", "ImprovedInterface", "EnhancedParser")
- NEVER use pattern names unless they add clarity (e.g., prefer "Tool" over "ToolFactory")
- Abbreviate only when obvious
### Comments
- Document exported functions, types, and non-obvious logic
- Follow JSDoc format for TypeScript
- Use godoc format for Go code
## Database Migration Workflows
### Migration Guidelines
1. **Create migration files**:
- Location: `coderd/database/migrations/`
- Format: `{number}_{description}.{up|down}.sql`
- Number must be unique and sequential
- Always include both up and down migrations
2. **Use helper scripts**:
- `./coderd/database/migrations/create_migration.sh "migration name"` - Creates new migration files
- `./coderd/database/migrations/fix_migration_numbers.sh` - Renumbers migrations to avoid conflicts
- `./coderd/database/migrations/create_fixture.sh "fixture name"` - Creates test fixtures for migrations
3. **Update database queries**:
- **MUST DO**: Make any database change (adding or modifying queries) in the `coderd/database/queries/*.sql` files
- **MUST DO**: Group queries in files by context, e.g. `prebuilds.sql`, `users.sql`, `oauth2.sql`
- After changing any `coderd/database/queries/*.sql` file, run `make gen` to regenerate the corresponding ORM code
4. **Handle nullable fields**:
- Use `sql.NullString`, `sql.NullBool`, etc. for optional database fields
- Set `.Valid = true` when providing values
5. **Audit table updates**:
- If adding fields to auditable types, update `enterprise/audit/table.go`
- Add each new field with appropriate action (ActionTrack, ActionIgnore, ActionSecret)
- Run `make gen` to verify no audit errors
### Database Generation Process
1. Modify SQL files in `coderd/database/queries/`
2. Run `make gen`
3. If errors about audit table, update `enterprise/audit/table.go`
4. Run `make gen` again
5. Run `make lint` to catch any remaining issues
## API Development Workflow
### Adding New API Endpoints
1. **Define types** in `codersdk/` package
2. **Add handler** in appropriate `coderd/` file (skeleton after this list)
3. **Register route** in `coderd/coderd.go`
4. **Add tests** in `coderd/*_test.go` files
5. **Update OpenAPI** by running `make gen`
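A skeletal handler for step 2 (hedged: helper names such as `httpapi.Write` follow this codebase's conventions, while the `frob` names are purely illustrative):
```go
// GET /api/v2/frobs/{id} - illustrative endpoint skeleton.
func (api *API) frob(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	frob, err := api.Database.GetFrobByID(ctx, chi.URLParam(r, "id"))
	if err != nil {
		httpapi.InternalServerError(rw, err)
		return
	}
	// Convert the database row into the codersdk response type.
	httpapi.Write(ctx, rw, http.StatusOK, convertFrob(frob))
}
```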
## Testing Workflows
### Test Execution
- Run full test suite: `make test`
- Run specific test: `make test RUN=TestFunctionName`
- Run with Postgres: `make test-postgres`
- Run with race detector: `make test-race`
- Run end-to-end tests: `make test-e2e`
### Test Development
- Use table-driven tests for comprehensive coverage
- Mock external dependencies
- Test both positive and negative cases
- Use `testutil.WaitLong` for timeouts in tests
- Always use `t.Parallel()` in tests
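Combining the table-driven and `t.Parallel()` points:
```go
for _, tt := range tests {
	tt := tt // capture range variable; required for parallel subtests before Go 1.22
	t.Run(tt.name, func(t *testing.T) {
		t.Parallel()
		got, err := functionUnderTest(tt.input)
		require.NoError(t, err)
		require.Equal(t, tt.expected, got)
	})
}
```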
## Commit Style
- Follow [Conventional Commits 1.0.0](https://www.conventionalcommits.org/en/v1.0.0/)
- Format: `type(scope): message`
- Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
- Keep message titles concise (~70 characters)
- Use imperative, present tense in commit titles
## Code Navigation and Investigation
### Using LSP Tools (STRONGLY RECOMMENDED)
**IMPORTANT**: Always use LSP tools for code navigation and understanding. These tools provide accurate, real-time analysis of the codebase and should be your first choice for code investigation.
#### Go LSP Tools (for backend code)
1. **Find function definitions** (USE THIS FREQUENTLY):
- `mcp__go-language-server__definition symbolName`
- Example: `mcp__go-language-server__definition getOAuth2ProviderAppAuthorize`
- Quickly jump to function implementations across packages
2. **Find symbol references** (ESSENTIAL FOR UNDERSTANDING IMPACT):
- `mcp__go-language-server__references symbolName`
- Locate all usages of functions, types, or variables
- Critical for refactoring and understanding data flow
3. **Get symbol information**:
- `mcp__go-language-server__hover filePath line column`
- Get type information and documentation at specific positions
#### TypeScript LSP Tools (for frontend code in site/)
1. **Find component/function definitions** (USE THIS FREQUENTLY):
- `mcp__typescript-language-server__definition symbolName`
- Example: `mcp__typescript-language-server__definition LoginPage`
- Quickly navigate to React components, hooks, and utility functions
2. **Find symbol references** (ESSENTIAL FOR UNDERSTANDING IMPACT):
- `mcp__typescript-language-server__references symbolName`
- Locate all usages of components, types, or functions
- Critical for refactoring React components and understanding prop usage
3. **Get type information**:
- `mcp__typescript-language-server__hover filePath line column`
- Get TypeScript type information and JSDoc documentation
4. **Rename symbols safely**:
- `mcp__typescript-language-server__rename_symbol filePath line column newName`
- Rename components, props, or functions across the entire codebase
5. **Check for TypeScript errors**:
- `mcp__typescript-language-server__diagnostics filePath`
- Get compilation errors and warnings for a specific file
### Investigation Strategy (LSP-First Approach)
#### Backend Investigation (Go)
1. **Start with route registration** in `coderd/coderd.go` to understand API endpoints
2. **Use Go LSP `definition` lookup** to trace from route handlers to actual implementations
3. **Use Go LSP `references`** to understand how functions are called throughout the codebase
4. **Follow the middleware chain** using LSP tools to understand request processing flow
5. **Check test files** for expected behavior and error patterns
#### Frontend Investigation (TypeScript/React)
1. **Start with route definitions** in `site/src/App.tsx` or router configuration
2. **Use TypeScript LSP `definition`** to navigate to React components and hooks
3. **Use TypeScript LSP `references`** to find all component usages and prop drilling
4. **Follow the component hierarchy** using LSP tools to understand data flow
5. **Check for TypeScript errors** with `diagnostics` before making changes
6. **Examine test files** (`.test.tsx`) for component behavior and expected props
## Troubleshooting Development Issues
### Common Issues
1. **Development server won't start** - Use `./scripts/develop.sh` instead of manual commands
2. **Database migration errors** - Check migration file format and use helper scripts
3. **Audit table errors** - Update `enterprise/audit/table.go` with new fields
4. **OAuth2 compliance issues** - Ensure RFC-compliant error responses
### Debug Commands
- Check linting: `make lint`
- Generate code: `make gen`
- Clean build: `make clean`
## Development Environment Setup
### Prerequisites
- Go (version specified in go.mod)
- Node.js and pnpm for frontend development
- PostgreSQL for database testing
- Docker for containerized testing
### First Time Setup
1. Clone the repository
2. Run `./scripts/develop.sh` to start development server
3. Access the development URL provided
4. Create admin user as prompted
5. Begin development
-133
@@ -1,133 +0,0 @@
#!/bin/bash
# Claude Code hook script for file formatting
# This script integrates with the centralized Makefile formatting targets
# and supports the Claude Code hooks system for automatic file formatting.
set -euo pipefail
# A variable to memoize the command for canonicalizing paths.
_CANONICALIZE_CMD=""
# canonicalize_path resolves a path to its absolute, canonical form.
# It tries 'realpath' and 'readlink -f' in order.
# The chosen command is memoized to avoid repeated checks.
# If none of these are available, it returns an empty string.
canonicalize_path() {
local path_to_resolve="$1"
# If we haven't determined a command yet, find one.
if [[ -z "$_CANONICALIZE_CMD" ]]; then
if command -v realpath >/dev/null 2>&1; then
_CANONICALIZE_CMD="realpath"
elif command -v readlink >/dev/null 2>&1 && readlink -f . >/dev/null 2>&1; then
_CANONICALIZE_CMD="readlink"
else
# No command found, so we can't resolve.
# We set a "none" value to prevent re-checking.
_CANONICALIZE_CMD="none"
fi
fi
# Now, execute the command.
case "$_CANONICALIZE_CMD" in
realpath)
realpath "$path_to_resolve" 2>/dev/null
;;
readlink)
readlink -f "$path_to_resolve" 2>/dev/null
;;
*)
# This handles the "none" case or any unexpected error.
echo ""
;;
esac
}
# Read JSON input from stdin
input=$(cat)
# Extract the file path from the JSON input
# Expected format: {"tool_input": {"file_path": "/absolute/path/to/file"}} or {"tool_response": {"filePath": "/absolute/path/to/file"}}
file_path=$(echo "$input" | jq -r '.tool_input.file_path // .tool_response.filePath // empty')
# Secure path canonicalization to prevent path traversal attacks
# Resolve repo root to an absolute, canonical path.
repo_root_raw="$(cd "$(dirname "$0")/../.." && pwd)"
repo_root="$(canonicalize_path "$repo_root_raw")"
if [[ -z "$repo_root" ]]; then
# Fallback if canonicalization fails
repo_root="$repo_root_raw"
fi
# Resolve the input path to an absolute path
if [[ "$file_path" = /* ]]; then
# Already absolute
abs_file_path="$file_path"
else
# Make relative paths absolute from repo root
abs_file_path="$repo_root/$file_path"
fi
# Canonicalize the path (resolve symlinks and ".." segments)
canonical_file_path="$(canonicalize_path "$abs_file_path")"
# Check if canonicalization failed or if the resolved path is outside the repo
if [[ -z "$canonical_file_path" ]] || { [[ "$canonical_file_path" != "$repo_root" ]] && [[ "$canonical_file_path" != "$repo_root"/* ]]; }; then
echo "Error: File path is outside repository or invalid: $file_path" >&2
exit 1
fi
# Handle the case where the file path is the repository root itself.
if [[ "$canonical_file_path" == "$repo_root" ]]; then
echo "Warning: Formatting the repository root is not a supported operation. Skipping." >&2
exit 0
fi
# Convert back to relative path from repo root for consistency
file_path="${canonical_file_path#"$repo_root"/}"
if [[ -z "$file_path" ]]; then
echo "Error: No file path provided in input" >&2
exit 1
fi
# Check if file exists
if [[ ! -f "$file_path" ]]; then
echo "Error: File does not exist: $file_path" >&2
exit 1
fi
# Get the file extension to determine the appropriate formatter
file_ext="${file_path##*.}"
# Change to the project root directory (where the Makefile is located)
cd "$(dirname "$0")/../.."
# Call the appropriate Makefile target based on file extension
case "$file_ext" in
go)
make fmt/go FILE="$file_path"
echo "✓ Formatted Go file: $file_path"
;;
js | jsx | ts | tsx)
make fmt/ts FILE="$file_path"
echo "✓ Formatted TypeScript/JavaScript file: $file_path"
;;
tf | tfvars)
make fmt/terraform FILE="$file_path"
echo "✓ Formatted Terraform file: $file_path"
;;
sh)
make fmt/shfmt FILE="$file_path"
echo "✓ Formatted shell script: $file_path"
;;
md)
make fmt/markdown FILE="$file_path"
echo "✓ Formatted Markdown file: $file_path"
;;
*)
echo "No formatter available for file extension: $file_ext"
exit 0
;;
esac
-15
@@ -1,15 +0,0 @@
{
"hooks": {
"PostToolUse": [
{
"matcher": "Edit|Write|MultiEdit",
"hooks": [
{
"type": "command",
"command": ".claude/scripts/format.sh"
}
]
}
]
}
}
+3 -67
@@ -1,16 +1,11 @@
{
"name": "Development environments on your infrastructure",
"image": "codercom/oss-dogfood:latest",
"features": {
// See all possible options here https://github.com/devcontainers/features/tree/main/src/docker-in-docker
"ghcr.io/devcontainers/features/docker-in-docker:2": {
"moby": "false"
},
"ghcr.io/coder/devcontainer-features/code-server:1": {
"auth": "none",
"port": 13337
},
"./filebrowser": {
"folder": "${containerWorkspaceFolder}"
}
},
// SYS_PTRACE to enable go debugging
@@ -18,65 +13,6 @@
"customizations": {
"vscode": {
"extensions": ["biomejs.biome"]
},
"coder": {
"apps": [
{
"slug": "cursor",
"displayName": "Cursor Desktop",
"url": "cursor://coder.coder-remote/openDevContainer?owner=${localEnv:CODER_WORKSPACE_OWNER_NAME}&workspace=${localEnv:CODER_WORKSPACE_NAME}&agent=${localEnv:CODER_WORKSPACE_PARENT_AGENT_NAME}&url=${localEnv:CODER_URL}&token=$SESSION_TOKEN&devContainerName=${localEnv:CONTAINER_ID}&devContainerFolder=${containerWorkspaceFolder}&localWorkspaceFolder=${localWorkspaceFolder}",
"external": true,
"icon": "/icon/cursor.svg",
"order": 1
},
{
"slug": "windsurf",
"displayName": "Windsurf Editor",
"url": "windsurf://coder.coder-remote/openDevContainer?owner=${localEnv:CODER_WORKSPACE_OWNER_NAME}&workspace=${localEnv:CODER_WORKSPACE_NAME}&agent=${localEnv:CODER_WORKSPACE_PARENT_AGENT_NAME}&url=${localEnv:CODER_URL}&token=$SESSION_TOKEN&devContainerName=${localEnv:CONTAINER_ID}&devContainerFolder=${containerWorkspaceFolder}&localWorkspaceFolder=${localWorkspaceFolder}",
"external": true,
"icon": "/icon/windsurf.svg",
"order": 4
},
{
"slug": "zed",
"displayName": "Zed Editor",
"url": "zed://ssh/${localEnv:CODER_WORKSPACE_AGENT_NAME}.${localEnv:CODER_WORKSPACE_NAME}.${localEnv:CODER_WORKSPACE_OWNER_NAME}.coder${containerWorkspaceFolder}",
"external": true,
"icon": "/icon/zed.svg",
"order": 5
},
// Reproduce `code-server` app here from the code-server
// feature so that we can set the correct folder and order.
// Currently, the order cannot be specified via option because
// we parse it as a number whereas variable interpolation
// results in a string. Additionally we set health check which
// is not yet set in the feature.
{
"slug": "code-server",
"displayName": "code-server",
"url": "http://${localEnv:FEATURE_CODE_SERVER_OPTION_HOST:127.0.0.1}:${localEnv:FEATURE_CODE_SERVER_OPTION_PORT:8080}/?folder=${containerWorkspaceFolder}",
"openIn": "${localEnv:FEATURE_CODE_SERVER_OPTION_APPOPENIN:slim-window}",
"share": "${localEnv:FEATURE_CODE_SERVER_OPTION_APPSHARE:owner}",
"icon": "/icon/code.svg",
"group": "${localEnv:FEATURE_CODE_SERVER_OPTION_APPGROUP:Web Editors}",
"order": 3,
"healthCheck": {
"url": "http://${localEnv:FEATURE_CODE_SERVER_OPTION_HOST:127.0.0.1}:${localEnv:FEATURE_CODE_SERVER_OPTION_PORT:8080}/healthz",
"interval": 5,
"threshold": 2
}
}
]
}
},
"mounts": [
// Add a volume for the Coder home directory to persist shell history,
// and speed up dotfiles init and/or personalization.
"source=coder-coder-devcontainer-home,target=/home/coder,type=volume",
// Mount the entire home because conditional mounts are not supported.
// See: https://github.com/devcontainers/spec/issues/132
"source=${localEnv:HOME},target=/mnt/home/coder,type=bind,readonly"
],
"postCreateCommand": ["./.devcontainer/scripts/post_create.sh"],
"postStartCommand": ["./.devcontainer/scripts/post_start.sh"]
}
}
@@ -1,46 +0,0 @@
{
"id": "filebrowser",
"version": "0.0.1",
"name": "File Browser",
"description": "A web-based file browser for your development container",
"options": {
"port": {
"type": "string",
"default": "13339",
"description": "The port to run filebrowser on"
},
"folder": {
"type": "string",
"default": "",
"description": "The root directory for filebrowser to serve"
},
"baseUrl": {
"type": "string",
"default": "",
"description": "The base URL for filebrowser (e.g., /filebrowser)"
}
},
"entrypoint": "/usr/local/bin/filebrowser-entrypoint",
"dependsOn": {
"ghcr.io/devcontainers/features/common-utils:2": {}
},
"customizations": {
"coder": {
"apps": [
{
"slug": "filebrowser",
"displayName": "File Browser",
"url": "http://localhost:${localEnv:FEATURE_FILEBROWSER_OPTION_PORT:13339}",
"icon": "/icon/filebrowser.svg",
"order": 3,
"subdomain": true,
"healthcheck": {
"url": "http://localhost:${localEnv:FEATURE_FILEBROWSER_OPTION_PORT:13339}/health",
"interval": 5,
"threshold": 2
}
}
]
}
}
}
-54
@@ -1,54 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
BOLD='\033[0;1m'
printf "%sInstalling filebrowser\n\n" "${BOLD}"
# Check if filebrowser is installed.
if ! command -v filebrowser &>/dev/null; then
VERSION="v2.42.1"
EXPECTED_HASH="7d83c0f077df10a8ec9bfd9bf6e745da5d172c3c768a322b0e50583a6bc1d3cc"
curl -fsSL "https://github.com/filebrowser/filebrowser/releases/download/${VERSION}/linux-amd64-filebrowser.tar.gz" -o /tmp/filebrowser.tar.gz
echo "${EXPECTED_HASH} /tmp/filebrowser.tar.gz" | sha256sum -c
tar -xzf /tmp/filebrowser.tar.gz -C /tmp
sudo mv /tmp/filebrowser /usr/local/bin/
sudo chmod +x /usr/local/bin/filebrowser
rm /tmp/filebrowser.tar.gz
fi
# Create entrypoint.
cat >/usr/local/bin/filebrowser-entrypoint <<EOF
#!/usr/bin/env bash
PORT="${PORT}"
FOLDER="${FOLDER:-}"
FOLDER="\${FOLDER:-\$(pwd)}"
BASEURL="${BASEURL:-}"
LOG_PATH=/tmp/filebrowser.log
export FB_DATABASE="\${HOME}/.filebrowser.db"
printf "🛠️ Configuring filebrowser\n\n"
# Check if filebrowser db exists.
if [[ ! -f "\${FB_DATABASE}" ]]; then
filebrowser config init >>\${LOG_PATH} 2>&1
filebrowser users add admin "" --perm.admin=true --viewMode=mosaic >>\${LOG_PATH} 2>&1
fi
filebrowser config set --baseurl=\${BASEURL} --port=\${PORT} --auth.method=noauth --root=\${FOLDER} >>\${LOG_PATH} 2>&1
printf "👷 Starting filebrowser...\n\n"
printf "📂 Serving \${FOLDER} at http://localhost:\${PORT}\n\n"
filebrowser >>\${LOG_PATH} 2>&1 &
printf "📝 Logs at \${LOG_PATH}\n\n"
EOF
chmod +x /usr/local/bin/filebrowser-entrypoint
printf "🥳 Installation complete!\n\n"
-67
@@ -1,67 +0,0 @@
#!/bin/sh
install_devcontainer_cli() {
set -e
echo "🔧 Installing DevContainer CLI..."
cd "$(dirname "$0")/../tools/devcontainer-cli"
npm ci --omit=dev
ln -sf "$(pwd)/node_modules/.bin/devcontainer" "$(npm config get prefix)/bin/devcontainer"
}
install_ssh_config() {
echo "🔑 Installing SSH configuration..."
if [ -d /mnt/home/coder/.ssh ]; then
rsync -a /mnt/home/coder/.ssh/ ~/.ssh/
chmod 0700 ~/.ssh
else
echo "⚠️ SSH directory not found."
fi
}
install_git_config() {
echo "📂 Installing Git configuration..."
if [ -f /mnt/home/coder/git/config ]; then
rsync -a /mnt/home/coder/git/ ~/.config/git/
elif [ -f /mnt/home/coder/.gitconfig ]; then
rsync -a /mnt/home/coder/.gitconfig ~/.gitconfig
else
echo "⚠️ Git configuration directory not found."
fi
}
install_dotfiles() {
if [ ! -d /mnt/home/coder/.config/coderv2/dotfiles ]; then
echo "⚠️ Dotfiles directory not found."
return
fi
cd /mnt/home/coder/.config/coderv2/dotfiles || return
for script in install.sh install bootstrap.sh bootstrap script/bootstrap setup.sh setup script/setup; do
if [ -x $script ]; then
echo "📦 Installing dotfiles..."
./$script || {
echo "❌ Error running $script. Please check the script for issues."
return
}
echo "✅ Dotfiles installed successfully."
return
fi
done
echo "⚠️ No install script found in dotfiles directory."
}
personalize() {
# Allow script to continue as Coder dogfood utilizes a hack to
# synchronize startup script execution.
touch /tmp/.coder-startup-script.done
if [ -x /mnt/home/coder/personalize ]; then
echo "🎨 Personalizing environment..."
/mnt/home/coder/personalize
fi
}
install_devcontainer_cli
install_ssh_config
install_git_config
install_dotfiles
personalize
-4
@@ -1,4 +0,0 @@
#!/bin/sh
# Start Docker service if not already running.
sudo service docker start
-26
@@ -1,26 +0,0 @@
{
"name": "devcontainer-cli",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "devcontainer-cli",
"version": "1.0.0",
"dependencies": {
"@devcontainers/cli": "^0.80.0"
}
},
"node_modules/@devcontainers/cli": {
"version": "0.80.0",
"resolved": "https://registry.npmjs.org/@devcontainers/cli/-/cli-0.80.0.tgz",
"integrity": "sha512-w2EaxgjyeVGyzfA/KUEZBhyXqu/5PyWNXcnrXsZOBrt3aN2zyGiHrXoG54TF6K0b5DSCF01Rt5fnIyrCeFzFKw==",
"bin": {
"devcontainer": "devcontainer.js"
},
"engines": {
"node": "^16.13.0 || >=18.0.0"
}
}
}
}
@@ -1,8 +0,0 @@
{
"name": "devcontainer-cli",
"private": true,
"version": "1.0.0",
"dependencies": {
"@devcontainers/cli": "^0.80.0"
}
}
+1 -9
@@ -7,7 +7,7 @@ trim_trailing_whitespace = true
insert_final_newline = true
indent_style = tab
[*.{yaml,yml,tf,tftpl,tfvars,nix}]
[*.{yaml,yml,tf,tfvars,nix}]
indent_style = space
indent_size = 2
@@ -18,11 +18,3 @@ indent_size = 2
[coderd/database/dump.sql]
indent_style = space
indent_size = 4
[coderd/database/queries/*.sql]
indent_style = tab
indent_size = 4
[coderd/database/migrations/*.sql]
indent_style = tab
indent_size = 4
+1 -3
@@ -15,8 +15,6 @@ provisionersdk/proto/*.go linguist-generated=true
*.tfstate.json linguist-generated=true
*.tfstate.dot linguist-generated=true
*.tfplan.dot linguist-generated=true
site/e2e/google/protobuf/timestampGenerated.ts
site/e2e/provisionerGenerated.ts linguist-generated=true
site/src/api/countriesGenerated.tsx linguist-generated=true
site/src/api/rbacresourcesGenerated.tsx linguist-generated=true
site/src/api/typesGenerated.ts linguist-generated=true
site/src/pages/SetupPage/countries.tsx linguist-generated=true
-4
@@ -25,9 +25,5 @@ ignorePatterns:
- pattern: "docs.github.com"
- pattern: "claude.ai"
- pattern: "splunk.com"
- pattern: "stackoverflow.com/questions"
- pattern: "developer.hashicorp.com/terraform/language"
- pattern: "platform.openai.com"
- pattern: "api.openai.com"
aliveStatusCodes:
- 200
@@ -1,49 +0,0 @@
name: "Download Embedded Postgres Cache"
description: |
Downloads the embedded postgres cache and outputs today's cache key.
A PR job can use a cache if it was created by its base branch, its current
branch, or the default branch.
https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache
outputs:
cache-key:
description: "Today's cache key"
value: ${{ steps.vars.outputs.cache-key }}
inputs:
key-prefix:
description: "Prefix for the cache key"
required: true
cache-path:
description: "Path to the cache directory"
required: true
runs:
using: "composite"
steps:
- name: Get date values and cache key
id: vars
shell: bash
run: |
export YEAR_MONTH=$(date +'%Y-%m')
export PREV_YEAR_MONTH=$(date -d 'last month' +'%Y-%m')
export DAY=$(date +'%d')
echo "year-month=$YEAR_MONTH" >> "$GITHUB_OUTPUT"
echo "prev-year-month=$PREV_YEAR_MONTH" >> "$GITHUB_OUTPUT"
echo "cache-key=${INPUTS_KEY_PREFIX}-${YEAR_MONTH}-${DAY}" >> "$GITHUB_OUTPUT"
env:
INPUTS_KEY_PREFIX: ${{ inputs.key-prefix }}
# By default, depot keeps caches for 14 days. This is plenty for embedded
# postgres, which changes infrequently.
# https://depot.dev/docs/github-actions/overview#cache-retention-policy
- name: Download embedded Postgres cache
uses: actions/cache/restore@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ inputs.cache-path }}
key: ${{ steps.vars.outputs.cache-key }}
# > If there are multiple partial matches for a restore key, the action returns the most recently created cache.
# https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/caching-dependencies-to-speed-up-workflows#matching-a-cache-key
# The second restore key allows non-main branches to use the cache from the previous month.
# This prevents PRs from rebuilding the cache on the first day of the month.
# It also makes sure that once a month, the cache is fully reset.
restore-keys: |
${{ inputs.key-prefix }}-${{ steps.vars.outputs.year-month }}-
${{ github.ref != 'refs/heads/main' && format('{0}-{1}-', inputs.key-prefix, steps.vars.outputs.prev-year-month) || '' }}
@@ -1,18 +0,0 @@
name: "Upload Embedded Postgres Cache"
description: Uploads the embedded Postgres cache. This only runs on the main branch.
inputs:
cache-key:
description: "Cache key"
required: true
cache-path:
description: "Path to the cache directory"
required: true
runs:
using: "composite"
steps:
- name: Upload Embedded Postgres cache
if: ${{ github.ref == 'refs/heads/main' }}
uses: actions/cache/save@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ inputs.cache-path }}
key: ${{ inputs.cache-key }}
@@ -1,33 +0,0 @@
name: "Setup Embedded Postgres Cache Paths"
description: Sets up a path for cached embedded postgres binaries.
outputs:
embedded-pg-cache:
description: "Value of EMBEDDED_PG_CACHE_DIR"
value: ${{ steps.paths.outputs.embedded-pg-cache }}
cached-dirs:
description: "directories that should be cached between CI runs"
value: ${{ steps.paths.outputs.cached-dirs }}
runs:
using: "composite"
steps:
- name: Override Go paths
id: paths
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
with:
script: |
const path = require('path');
// RUNNER_TEMP should be backed by a RAM disk on Windows if
// coder/setup-ramdisk-action was used
const runnerTemp = process.env.RUNNER_TEMP;
const embeddedPgCacheDir = path.join(runnerTemp, 'embedded-pg-cache');
core.exportVariable('EMBEDDED_PG_CACHE_DIR', embeddedPgCacheDir);
core.setOutput('embedded-pg-cache', embeddedPgCacheDir);
const cachedDirs = `${embeddedPgCacheDir}`;
core.setOutput('cached-dirs', cachedDirs);
- name: Create directories
shell: bash
run: |
set -e
mkdir -p "$EMBEDDED_PG_CACHE_DIR"
+1 -1
@@ -4,7 +4,7 @@ description: |
inputs:
version:
description: "The Go version to use."
default: "1.24.10"
default: "1.24.4"
use-preinstalled-go:
description: "Whether to use preinstalled Go."
default: "false"
+1 -1
@@ -16,7 +16,7 @@ runs:
- name: Setup Node
uses: actions/setup-node@0a44ba7841725637a19e28fa30b79a866c81b0a6 # v4.0.4
with:
node-version: 22.19.0
node-version: 20.16.0
# See https://github.com/actions/setup-node#caching-global-packages-data
cache: "pnpm"
cache-dependency-path: ${{ inputs.directory }}/pnpm-lock.yaml
+3 -10
@@ -5,13 +5,6 @@ runs:
using: "composite"
steps:
- name: Setup sqlc
# uses: sqlc-dev/setup-sqlc@c0209b9199cd1cce6a14fc27cabcec491b651761 # v4.0.0
# with:
# sqlc-version: "1.30.0"
# Switched to coder/sqlc fork to fix ambiguous column bug, see:
# - https://github.com/coder/sqlc/pull/1
# - https://github.com/sqlc-dev/sqlc/pull/4159
shell: bash
run: |
CGO_ENABLED=1 go install github.com/coder/sqlc/cmd/sqlc@aab4e865a51df0c43e1839f81a9d349b41d14f05
uses: sqlc-dev/setup-sqlc@c0209b9199cd1cce6a14fc27cabcec491b651761 # v4.0.0
with:
sqlc-version: "1.27.0"
+1 -1
@@ -7,5 +7,5 @@ runs:
- name: Install Terraform
uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
with:
terraform_version: 1.13.4
terraform_version: 1.12.2
terraform_wrapper: false
@@ -27,11 +27,9 @@ runs:
export YEAR_MONTH=$(date +'%Y-%m')
export PREV_YEAR_MONTH=$(date -d 'last month' +'%Y-%m')
export DAY=$(date +'%d')
echo "year-month=$YEAR_MONTH" >> "$GITHUB_OUTPUT"
echo "prev-year-month=$PREV_YEAR_MONTH" >> "$GITHUB_OUTPUT"
echo "cache-key=${INPUTS_KEY_PREFIX}-${YEAR_MONTH}-${DAY}" >> "$GITHUB_OUTPUT"
env:
INPUTS_KEY_PREFIX: ${{ inputs.key-prefix }}
echo "year-month=$YEAR_MONTH" >> $GITHUB_OUTPUT
echo "prev-year-month=$PREV_YEAR_MONTH" >> $GITHUB_OUTPUT
echo "cache-key=${{ inputs.key-prefix }}-${YEAR_MONTH}-${DAY}" >> $GITHUB_OUTPUT
# TODO: As a cost optimization, we could remove caches that are older than
# a day or two. By default, depot keeps caches for 14 days, which isn't
+14 -14
@@ -12,12 +12,13 @@ runs:
run: |
set -e
echo "owner: $REPO_OWNER"
if [[ "$REPO_OWNER" != "coder" ]]; then
owner=${{ github.repository_owner }}
echo "owner: $owner"
if [[ $owner != "coder" ]]; then
echo "Not a pull request from the main repo, skipping..."
exit 0
fi
if [[ -z "${DATADOG_API_KEY}" ]]; then
if [[ -z "${{ inputs.api-key }}" ]]; then
# This can happen for dependabot.
echo "No API key provided, skipping..."
exit 0
@@ -30,38 +31,37 @@ runs:
TMP_DIR=$(mktemp -d)
if [[ "${RUNNER_OS}" == "Windows" ]]; then
if [[ "${{ runner.os }}" == "Windows" ]]; then
BINARY_PATH="${TMP_DIR}/datadog-ci.exe"
BINARY_URL="https://github.com/DataDog/datadog-ci/releases/download/${BINARY_VERSION}/datadog-ci_win-x64"
elif [[ "${RUNNER_OS}" == "macOS" ]]; then
elif [[ "${{ runner.os }}" == "macOS" ]]; then
BINARY_PATH="${TMP_DIR}/datadog-ci"
BINARY_URL="https://github.com/DataDog/datadog-ci/releases/download/${BINARY_VERSION}/datadog-ci_darwin-arm64"
elif [[ "${RUNNER_OS}" == "Linux" ]]; then
elif [[ "${{ runner.os }}" == "Linux" ]]; then
BINARY_PATH="${TMP_DIR}/datadog-ci"
BINARY_URL="https://github.com/DataDog/datadog-ci/releases/download/${BINARY_VERSION}/datadog-ci_linux-x64"
else
echo "Unsupported OS: $RUNNER_OS"
echo "Unsupported OS: ${{ runner.os }}"
exit 1
fi
echo "Downloading DataDog CI binary version ${BINARY_VERSION} for $RUNNER_OS..."
echo "Downloading DataDog CI binary version ${BINARY_VERSION} for ${{ runner.os }}..."
curl -sSL "$BINARY_URL" -o "$BINARY_PATH"
if [[ "${RUNNER_OS}" == "Windows" ]]; then
if [[ "${{ runner.os }}" == "Windows" ]]; then
echo "$BINARY_HASH_WINDOWS $BINARY_PATH" | sha256sum --check
elif [[ "${RUNNER_OS}" == "macOS" ]]; then
elif [[ "${{ runner.os }}" == "macOS" ]]; then
echo "$BINARY_HASH_MACOS $BINARY_PATH" | shasum -a 256 --check
elif [[ "${RUNNER_OS}" == "Linux" ]]; then
elif [[ "${{ runner.os }}" == "Linux" ]]; then
echo "$BINARY_HASH_LINUX $BINARY_PATH" | sha256sum --check
fi
# Make binary executable (not needed for Windows)
if [[ "${RUNNER_OS}" != "Windows" ]]; then
if [[ "${{ runner.os }}" != "Windows" ]]; then
chmod +x "$BINARY_PATH"
fi
"$BINARY_PATH" junit upload --service coder ./gotests.xml \
--tags "os:${RUNNER_OS}" --tags "runner_name:${RUNNER_NAME}"
--tags os:${{runner.os}} --tags runner_name:${{runner.name}}
env:
REPO_OWNER: ${{ github.repository_owner }}
DATADOG_API_KEY: ${{ inputs.api-key }}
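Most of the hunks in this compare toggle between two ways of getting `${{ }}` expression values into `run:` scripts, as in the Datadog action above. Inline interpolation pastes the expanded text straight into the script source, so attacker-influenced values such as PR titles can inject shell commands (the pattern zizmor flags); binding the expression in an `env:` block hands it to the script as plain data. A minimal sketch of the two forms (step names are illustrative):

```yaml
# Risky: the expression is substituted into the script source itself,
# so a crafted value can break out of the quoting.
- name: interpolated (avoid)
  run: echo "os is ${{ runner.os }}, title is ${{ github.event.pull_request.title }}"

# Safer: the expression is bound to an environment variable and the
# shell only ever sees it as data.
- name: env-mapped (prefer)
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "os is $RUNNER_OS, title is $PR_TITLE"
```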
-5
@@ -33,7 +33,6 @@ updates:
- dependency-name: "*"
update-types:
- version-update:semver-patch
- dependency-name: "github.com/mark3labs/mcp-go"
# Update our Dockerfile.
- package-ecosystem: "docker"
@@ -80,9 +79,6 @@ updates:
mui:
patterns:
- "@mui*"
radix:
patterns:
- "@radix-ui/*"
react:
patterns:
- "react"
@@ -107,7 +103,6 @@ updates:
- dependency-name: "*"
update-types:
- version-update:semver-major
- dependency-name: "@playwright/test"
open-pull-requests-limit: 15
- package-ecosystem: "terraform"
@@ -0,0 +1,34 @@
app = "sao-paulo-coder"
primary_region = "gru"
[experimental]
entrypoint = ["/bin/sh", "-c", "CODER_DERP_SERVER_RELAY_URL=\"http://[${FLY_PRIVATE_IP}]:3000\" /opt/coder wsproxy server"]
auto_rollback = true
[build]
image = "ghcr.io/coder/coder-preview:main"
[env]
CODER_ACCESS_URL = "https://sao-paulo.fly.dev.coder.com"
CODER_HTTP_ADDRESS = "0.0.0.0:3000"
CODER_PRIMARY_ACCESS_URL = "https://dev.coder.com"
CODER_WILDCARD_ACCESS_URL = "*--apps.sao-paulo.fly.dev.coder.com"
CODER_VERBOSE = "true"
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 0
# Ref: https://fly.io/docs/reference/configuration/#http_service-concurrency
[http_service.concurrency]
type = "requests"
soft_limit = 50
hard_limit = 100
[[vm]]
cpu_kind = "shared"
cpus = 2
memory_mb = 512
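This new Fly.io config has no deploy invocation in this extract, but the deleted deploy workflow further down declares a `FLY_SAO_PAULO_CODER_PROXY_SESSION_TOKEN` secret for it. A sketch of the matching push, mirroring the paris/sydney/jnb steps in that workflow (the step itself is an assumption, not shown in this diff):

```yaml
- name: Deploy sao-paulo workspace proxy   # hypothetical step
  run: |
    flyctl deploy --image "$IMAGE" --app sao-paulo-coder \
      --config ./.github/fly-wsproxies/sao-paulo-coder.toml \
      --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_SAO_PAULO" --yes
  env:
    FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
    IMAGE: ${{ inputs.image }}
    TOKEN_SAO_PAULO: ${{ secrets.FLY_SAO_PAULO_CODER_PROXY_SESSION_TOKEN }}
```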
-5
@@ -1,5 +0,0 @@
<!--
If you have used AI to produce some or all of this PR, please ensure you have read our [AI Contribution guidelines](https://coder.com/docs/about/contributing/AI_CONTRIBUTING) before submitting.
-->
+391 -285
File diff suppressed because it is too large.
+1 -2
@@ -3,7 +3,6 @@ name: contrib
on:
issue_comment:
types: [created, edited]
# zizmor: ignore[dangerous-triggers] We explicitly want to run on pull_request_target.
pull_request_target:
types:
- opened
@@ -53,7 +52,7 @@ jobs:
if: ${{ github.event_name == 'pull_request_target' && !github.event.pull_request.draft }}
steps:
- name: release-labels
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
# This script ensures PR title and labels are in sync:
#
+9 -10
@@ -15,7 +15,7 @@ jobs:
github.event_name == 'pull_request' &&
github.event.action == 'opened' &&
github.event.pull_request.user.login == 'dependabot[bot]' &&
github.event.pull_request.user.id == 49699333 &&
github.actor_id == 49699333 &&
github.repository == 'coder/coder'
permissions:
pull-requests: write
@@ -44,6 +44,10 @@ jobs:
GH_TOKEN: ${{secrets.GITHUB_TOKEN}}
- name: Send Slack notification
env:
PR_URL: ${{github.event.pull_request.html_url}}
PR_TITLE: ${{github.event.pull_request.title}}
PR_NUMBER: ${{github.event.pull_request.number}}
run: |
curl -X POST -H 'Content-type: application/json' \
--data '{
@@ -54,7 +58,7 @@ jobs:
"type": "header",
"text": {
"type": "plain_text",
"text": ":pr-merged: Auto merge enabled for Dependabot PR #'"${PR_NUMBER}"'",
"text": ":pr-merged: Auto merge enabled for Dependabot PR #${{ env.PR_NUMBER }}",
"emoji": true
}
},
@@ -63,7 +67,7 @@ jobs:
"fields": [
{
"type": "mrkdwn",
"text": "'"${PR_TITLE}"'"
"text": "${{ env.PR_TITLE }}"
}
]
},
@@ -76,14 +80,9 @@ jobs:
"type": "plain_text",
"text": "View PR"
},
"url": "'"${PR_URL}"'"
"url": "${{ env.PR_URL }}"
}
]
}
]
}' "${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}"
env:
SLACK_WEBHOOK: ${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}
PR_NUMBER: ${{ github.event.pull_request.number }}
PR_TITLE: ${{ github.event.pull_request.title }}
PR_URL: ${{ github.event.pull_request.html_url }}
}' ${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}
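The `'"${PR_TITLE}"'` splices in the hardened variant above are easy to misread: each one closes the single-quoted JSON literal, expands the variable inside double quotes, and reopens the single-quoted literal, so the shell performs the expansion while the surrounding JSON stays uninterpreted. A standalone sketch (the webhook secret name is illustrative):

```yaml
- name: splice a variable into single-quoted JSON
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
    SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}   # illustrative secret
  run: |
    # '{"text": "'  +  "${PR_TITLE}"  +  '"}'  are three concatenated
    # shell words: close the single quotes, expand double-quoted, reopen.
    curl -X POST -H 'Content-type: application/json' \
      --data '{"text": "'"${PR_TITLE}"'"}' "$SLACK_WEBHOOK"
```

Note the splice only guards the shell layer; a literal double quote in the title would still break the JSON, which is why the nightly-gauntlet payload later in this compare passes its text through `jq -Rsa` before embedding it.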
-172
@@ -1,172 +0,0 @@
name: deploy
on:
# Via workflow_call, called from ci.yaml
workflow_call:
inputs:
image:
description: "Image and tag to potentially deploy. Current branch will be validated against should-deploy check."
required: true
type: string
secrets:
FLY_API_TOKEN:
required: true
FLY_PARIS_CODER_PROXY_SESSION_TOKEN:
required: true
FLY_SYDNEY_CODER_PROXY_SESSION_TOKEN:
required: true
FLY_SAO_PAULO_CODER_PROXY_SESSION_TOKEN:
required: true
FLY_JNB_CODER_PROXY_SESSION_TOKEN:
required: true
permissions:
contents: read
concurrency:
group: ${{ github.workflow }} # no per-branch concurrency
cancel-in-progress: false
jobs:
# Determines if the given branch should be deployed to dogfood.
should-deploy:
name: should-deploy
runs-on: ubuntu-latest
outputs:
verdict: ${{ steps.check.outputs.verdict }} # DEPLOY or NOOP
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Check if deploy is enabled
id: check
run: |
set -euo pipefail
verdict="$(./scripts/should_deploy.sh)"
echo "verdict=$verdict" >> "$GITHUB_OUTPUT"
deploy:
name: "deploy"
runs-on: ubuntu-latest
timeout-minutes: 30
needs: should-deploy
if: needs.should-deploy.outputs.verdict == 'DEPLOY'
permissions:
contents: read
id-token: write
packages: write # to retag image as dogfood
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: GHCR Login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
with:
workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
- name: Set up Google Cloud SDK
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
- name: Set up Flux CLI
uses: fluxcd/flux2/action@b6e76ca2534f76dcb8dd94fb057cdfa923c3b641 # v2.7.3
with:
# Keep this and the github action up to date with the version of flux installed in dogfood cluster
version: "2.7.0"
- name: Get Cluster Credentials
uses: google-github-actions/get-gke-credentials@3da1e46a907576cefaa90c484278bb5b259dd395 # v3.0.0
with:
cluster_name: dogfood-v2
location: us-central1-a
project_id: coder-dogfood-v2
# Retag image as dogfood while maintaining the multi-arch manifest
- name: Tag image as dogfood
run: docker buildx imagetools create --tag "ghcr.io/coder/coder-preview:dogfood" "$IMAGE"
env:
IMAGE: ${{ inputs.image }}
- name: Reconcile Flux
run: |
set -euxo pipefail
flux --namespace flux-system reconcile source git flux-system
flux --namespace flux-system reconcile source git coder-main
flux --namespace flux-system reconcile kustomization flux-system
flux --namespace flux-system reconcile kustomization coder
flux --namespace flux-system reconcile source chart coder-coder
flux --namespace flux-system reconcile source chart coder-coder-provisioner
flux --namespace coder reconcile helmrelease coder
flux --namespace coder reconcile helmrelease coder-provisioner
flux --namespace coder reconcile helmrelease coder-provisioner-tagged
flux --namespace coder reconcile helmrelease coder-provisioner-tagged-prebuilds
# Just updating Flux is usually not enough. The Helm release may get
# redeployed, but unless something causes the Deployment to update the
# pods won't be recreated. It's important that the pods get recreated,
# since we use `imagePullPolicy: Always` to ensure we're running the
# latest image.
- name: Rollout Deployment
run: |
set -euxo pipefail
kubectl --namespace coder rollout restart deployment/coder
kubectl --namespace coder rollout status deployment/coder
kubectl --namespace coder rollout restart deployment/coder-provisioner
kubectl --namespace coder rollout status deployment/coder-provisioner
kubectl --namespace coder rollout restart deployment/coder-provisioner-tagged
kubectl --namespace coder rollout status deployment/coder-provisioner-tagged
kubectl --namespace coder rollout restart deployment/coder-provisioner-tagged-prebuilds
kubectl --namespace coder rollout status deployment/coder-provisioner-tagged-prebuilds
deploy-wsproxies:
runs-on: ubuntu-latest
needs: deploy
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Setup flyctl
uses: superfly/flyctl-actions/setup-flyctl@fc53c09e1bc3be6f54706524e3b82c4f462f77be # v1.5
- name: Deploy workspace proxies
run: |
flyctl deploy --image "$IMAGE" --app paris-coder --config ./.github/fly-wsproxies/paris-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_PARIS" --yes
flyctl deploy --image "$IMAGE" --app sydney-coder --config ./.github/fly-wsproxies/sydney-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_SYDNEY" --yes
flyctl deploy --image "$IMAGE" --app jnb-coder --config ./.github/fly-wsproxies/jnb-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_JNB" --yes
env:
FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
IMAGE: ${{ inputs.image }}
TOKEN_PARIS: ${{ secrets.FLY_PARIS_CODER_PROXY_SESSION_TOKEN }}
TOKEN_SYDNEY: ${{ secrets.FLY_SYDNEY_CODER_PROXY_SESSION_TOKEN }}
TOKEN_JNB: ${{ secrets.FLY_JNB_CODER_PROXY_SESSION_TOKEN }}
-205
@@ -1,205 +0,0 @@
# This workflow checks if a PR requires documentation updates.
# It creates a Coder Task that uses AI to analyze the PR changes,
# search existing docs, and comment with recommendations.
#
# Triggered by: Adding the "doc-check" label to a PR, or manual dispatch.
name: AI Documentation Check
on:
pull_request:
types:
- labeled
workflow_dispatch:
inputs:
pr_url:
description: "Pull Request URL to check"
required: true
type: string
template_preset:
description: "Template preset to use"
required: false
default: ""
type: string
jobs:
doc-check:
name: Analyze PR for Documentation Updates Needed
runs-on: ubuntu-latest
if: |
(github.event.label.name == 'doc-check' || github.event_name == 'workflow_dispatch') &&
(github.event.pull_request.draft == false || github.event_name == 'workflow_dispatch')
timeout-minutes: 30
env:
CODER_URL: ${{ secrets.DOC_CHECK_CODER_URL }}
CODER_SESSION_TOKEN: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
permissions:
contents: read
pull-requests: write
actions: write
steps:
- name: Determine PR Context
id: determine-context
env:
GITHUB_ACTOR: ${{ github.actor }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_PR_HTML_URL: ${{ github.event.pull_request.html_url }}
GITHUB_EVENT_PR_NUMBER: ${{ github.event.pull_request.number }}
GITHUB_EVENT_SENDER_ID: ${{ github.event.sender.id }}
GITHUB_EVENT_SENDER_LOGIN: ${{ github.event.sender.login }}
INPUTS_PR_URL: ${{ inputs.pr_url }}
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || '' }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"
# For workflow_dispatch, use the provided PR URL
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
exit 1
fi
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${INPUTS_PR_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${INPUTS_PR_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
# Extract PR number from URL for later use
PR_NUMBER=$(echo "${INPUTS_PR_URL}" | grep -oP '(?<=pull/)\d+')
echo "pr_number=${PR_NUMBER}" >> "${GITHUB_OUTPUT}"
elif [[ "${GITHUB_EVENT_NAME}" == "pull_request" ]]; then
GITHUB_USER_ID=${GITHUB_EVENT_SENDER_ID}
echo "Using label adder: ${GITHUB_EVENT_SENDER_LOGIN} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_EVENT_SENDER_LOGIN}" >> "${GITHUB_OUTPUT}"
echo "Using PR URL: ${GITHUB_EVENT_PR_HTML_URL}"
# Convert /pull/ to /issues/ for create-task-action compatibility
ISSUE_URL="${GITHUB_EVENT_PR_HTML_URL/\/pull\//\/issues\/}"
echo "pr_url=${ISSUE_URL}" >> "${GITHUB_OUTPUT}"
echo "pr_number=${GITHUB_EVENT_PR_NUMBER}" >> "${GITHUB_OUTPUT}"
else
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
exit 1
fi
- name: Extract changed files and build prompt
id: extract-context
env:
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
PR_NUMBER: ${{ steps.determine-context.outputs.pr_number }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Analyzing PR #${PR_NUMBER}"
# Build task prompt - using unquoted heredoc so variables expand
TASK_PROMPT=$(cat <<EOF
Review PR #${PR_NUMBER} and determine if documentation needs updating or creating.
PR URL: ${PR_URL}
WORKFLOW:
1. Setup (repo is pre-cloned at ~/coder)
cd ~/coder
git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER}
git checkout pr-${PR_NUMBER}
2. Get PR info
Use GitHub MCP tools to get PR title, body, and diff
Or use: git diff main...pr-${PR_NUMBER}
3. Understand Changes
Read the diff and identify what changed
Ask: Is this user-facing? Does it change behavior? Is it a new feature?
4. Search for Related Docs
cat ~/coder/docs/manifest.json | jq '.routes[] | {title, path}' | head -50
grep -ri "relevant_term" ~/coder/docs/ --include="*.md"
5. Decide
NEEDS DOCS if: New feature, API change, CLI change, behavior change, user-visible
NO DOCS if: Internal refactor, test-only, already documented, non-user-facing, dependency updates
FIRST check: Did this PR already update docs? If yes and complete, say "No Changes Needed"
6. Comment on the PR using this format
COMMENT FORMAT:
## 📚 Documentation Check
### ✅ Updates Needed
- **[docs/path/file.md](github_link)** - Brief what needs changing
### 📝 New Docs Needed
- **docs/suggested/location.md** - What should be documented
### ✨ No Changes Needed
[Reason: Documents already updated in PR | Internal changes only | Test-only | No user-facing impact]
---
*This comment was generated by an AI Agent through [Coder Tasks](https://coder.com/docs/ai-coder/tasks)*
DOCS STRUCTURE:
Read ~/coder/docs/manifest.json for the complete documentation structure.
Common areas include: reference/, admin/, user-guides/, ai-coder/, install/, tutorials/
But check manifest.json - it has everything.
EOF
)
# Output the prompt
{
echo "task_prompt<<EOFOUTPUT"
echo "${TASK_PROMPT}"
echo "EOFOUTPUT"
} >> "${GITHUB_OUTPUT}"
- name: Checkout create-task-action
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
persist-credentials: false
ref: main
repository: coder/create-task-action
- name: Create Coder Task for Documentation Check
id: create_task
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.DOC_CHECK_CODER_URL }}
coder-token: ${{ secrets.DOC_CHECK_CODER_SESSION_TOKEN }}
coder-organization: "default"
coder-template-name: coder
coder-template-preset: ${{ steps.determine-context.outputs.template_preset }}
coder-task-name-prefix: doc-check
coder-task-prompt: ${{ steps.extract-context.outputs.task_prompt }}
github-user-id: ${{ steps.determine-context.outputs.github_user_id }}
github-token: ${{ github.token }}
github-issue-url: ${{ steps.determine-context.outputs.pr_url }}
comment-on-issue: true
- name: Write outputs
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_URL: ${{ steps.create_task.outputs.task-url }}
PR_URL: ${{ steps.determine-context.outputs.pr_url }}
run: |
{
echo "## Documentation Check Task"
echo ""
echo "**PR:** ${PR_URL}"
echo "**Task created:** ${TASK_CREATED}"
echo "**Task name:** ${TASK_NAME}"
echo "**Task URL:** ${TASK_URL}"
echo ""
echo "The Coder task is analyzing the PR changes and will comment with documentation recommendations."
} >> "${GITHUB_STEP_SUMMARY}"
+4 -6
@@ -38,17 +38,15 @@ jobs:
if: github.repository_owner == 'coder'
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Docker login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -62,7 +60,7 @@ jobs:
# This uses OIDC authentication, so no auth variables are required.
- name: Build base Docker image via depot.dev
uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
uses: depot/build-push-action@2583627a84956d07561420dcc1d0eb1f2af3fac0 # v1.15.0
with:
project: wl5hnrrkns
context: base-build-context
+4 -12
@@ -23,14 +23,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup Node
uses: ./.github/actions/setup-node
- uses: tj-actions/changed-files@70069877f29101175ed2b055d210fe8b1d54d7d7 # v45.0.7
- uses: tj-actions/changed-files@666c9d29007687c52e3c7aa2aac6c0ffcadeadc3 # v45.0.7
id: changed-files
with:
files: |
@@ -41,16 +39,10 @@ jobs:
- name: lint
if: steps.changed-files.outputs.any_changed == 'true'
run: |
# shellcheck disable=SC2086
pnpm exec markdownlint-cli2 $ALL_CHANGED_FILES
env:
ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
pnpm exec markdownlint-cli2 ${{ steps.changed-files.outputs.all_changed_files }}
- name: fmt
if: steps.changed-files.outputs.any_changed == 'true'
run: |
# markdown-table-formatter requires a space separated list of files
# shellcheck disable=SC2086
echo $ALL_CHANGED_FILES | tr ',' '\n' | pnpm exec markdown-table-formatter --check
env:
ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
echo ${{ steps.changed-files.outputs.all_changed_files }} | tr ',' '\n' | pnpm exec markdown-table-formatter --check
+22 -36
@@ -18,7 +18,8 @@ on:
workflow_dispatch:
permissions:
contents: read
# Necessary for GCP authentication (https://github.com/google-github-actions/setup-gcloud#usage)
id-token: write
jobs:
build_image:
@@ -26,21 +27,15 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-4' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup Nix
uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
with:
# Pinning to 2.28 here, as Nix gets a "error: [json.exception.type_error.302] type must be array, but is string"
# on version 2.29 and above.
nix_version: "2.28.5"
uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
@@ -63,16 +58,15 @@ jobs:
- name: Get branch name
id: branch-name
uses: tj-actions/branch-names@5250492686b253f06fa55861556d1027b067aeb5 # v9.0.2
uses: tj-actions/branch-names@dde14ac574a8b9b1cedc59a1cf312788af43d8d8 # v8.2.1
- name: "Branch name to Docker tag name"
id: docker-tag-name
run: |
tag=${{ steps.branch-name.outputs.current_branch }}
# Replace / with --, e.g. user/feature => user--feature.
tag=${BRANCH_NAME//\//--}
echo "tag=${tag}" >> "$GITHUB_OUTPUT"
env:
BRANCH_NAME: ${{ steps.branch-name.outputs.current_branch }}
tag=${tag//\//--}
echo "tag=${tag}" >> $GITHUB_OUTPUT
- name: Set up Depot CLI
uses: depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1.6.0
@@ -82,13 +76,13 @@ jobs:
- name: Login to DockerHub
if: github.ref == 'refs/heads/main'
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_PASSWORD }}
- name: Build and push Non-Nix image
uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
uses: depot/build-push-action@2583627a84956d07561420dcc1d0eb1f2af3fac0 # v1.15.0
with:
project: b4q6ltmpzh
token: ${{ secrets.DEPOT_TOKEN }}
@@ -109,39 +103,32 @@ jobs:
CURRENT_SYSTEM=$(nix eval --impure --raw --expr 'builtins.currentSystem')
docker image tag "codercom/oss-dogfood-nix:latest-$CURRENT_SYSTEM" "codercom/oss-dogfood-nix:${DOCKER_TAG}"
docker image push "codercom/oss-dogfood-nix:${DOCKER_TAG}"
docker image tag codercom/oss-dogfood-nix:latest-$CURRENT_SYSTEM codercom/oss-dogfood-nix:${{ steps.docker-tag-name.outputs.tag }}
docker image push codercom/oss-dogfood-nix:${{ steps.docker-tag-name.outputs.tag }}
docker image tag "codercom/oss-dogfood-nix:latest-$CURRENT_SYSTEM" "codercom/oss-dogfood-nix:latest"
docker image push "codercom/oss-dogfood-nix:latest"
env:
DOCKER_TAG: ${{ steps.docker-tag-name.outputs.tag }}
docker image tag codercom/oss-dogfood-nix:latest-$CURRENT_SYSTEM codercom/oss-dogfood-nix:latest
docker image push codercom/oss-dogfood-nix:latest
deploy_template:
needs: build_image
runs-on: ubuntu-latest
permissions:
# Necessary for GCP authentication (https://github.com/google-github-actions/setup-gcloud#usage)
id-token: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup Terraform
uses: ./.github/actions/setup-tf
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
uses: google-github-actions/auth@ba79af03959ebeac9769e648f473a284504d9193 # v2.1.10
with:
workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
workload_identity_provider: projects/573722524737/locations/global/workloadIdentityPools/github/providers/github
service_account: coder-ci@coder-dogfood.iam.gserviceaccount.com
- name: Terraform init and validate
run: |
@@ -161,12 +148,12 @@ jobs:
- name: Get short commit SHA
if: github.ref == 'refs/heads/main'
id: vars
run: echo "sha_short=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
- name: Get latest commit title
if: github.ref == 'refs/heads/main'
id: message
run: echo "pr_title=$(git log --format=%s -n 1 ${{ github.sha }})" >> "$GITHUB_OUTPUT"
run: echo "pr_title=$(git log --format=%s -n 1 ${{ github.sha }})" >> $GITHUB_OUTPUT
- name: "Push template"
if: github.ref == 'refs/heads/main'
@@ -178,7 +165,6 @@ jobs:
CODER_URL: https://dev.coder.com
CODER_SESSION_TOKEN: ${{ secrets.CODER_SESSION_TOKEN }}
# Template source & details
TF_VAR_CODER_DOGFOOD_ANTHROPIC_API_KEY: ${{ secrets.CODER_DOGFOOD_ANTHROPIC_API_KEY }}
TF_VAR_CODER_TEMPLATE_NAME: ${{ secrets.CODER_TEMPLATE_NAME }}
TF_VAR_CODER_TEMPLATE_VERSION: ${{ steps.vars.outputs.sha_short }}
TF_VAR_CODER_TEMPLATE_DIR: ./coder
-204
@@ -1,204 +0,0 @@
# The nightly-gauntlet runs tests that are either too flaky or too slow to block
# every PR.
name: nightly-gauntlet
on:
schedule:
# Every day at 4AM
- cron: "0 4 * * 1-5"
workflow_dispatch:
permissions:
contents: read
jobs:
test-go-pg:
# make sure to adjust NUM_PARALLEL_PACKAGES and NUM_PARALLEL_TESTS below
# when changing runner sizes
runs-on: ${{ matrix.os == 'macos-latest' && github.repository_owner == 'coder' && 'depot-macos-latest' || matrix.os == 'windows-2022' && github.repository_owner == 'coder' && 'depot-windows-2022-16' || matrix.os }}
# This timeout must be greater than the timeout set by `go test` in
# `make test-postgres` to ensure we receive a trace of running
# goroutines. Setting this to the timeout +5m should work quite well
# even if some of the preceding steps are slow.
timeout-minutes: 25
strategy:
matrix:
os:
- macos-latest
- windows-2022
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
with:
egress-policy: audit
# macOS indexes all new files in the background. Our Postgres tests
# create and destroy thousands of databases on disk, and Spotlight
# tries to index all of them, seriously slowing down the tests.
- name: Disable Spotlight Indexing
if: runner.os == 'macOS'
run: |
enabled=$(sudo mdutil -a -s | { grep -Fc "Indexing enabled" || true; })
if [ "$enabled" -eq 0 ]; then
echo "Spotlight indexing is already disabled"
exit 0
fi
sudo mdutil -a -i off
sudo mdutil -X /
sudo launchctl bootout system /System/Library/LaunchDaemons/com.apple.metadata.mds.plist
# Set up RAM disks to speed up the rest of the job. This action is in
# a separate repository to allow its use before actions/checkout.
- name: Setup RAM Disks
if: runner.os == 'Windows'
uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
persist-credentials: false
- name: Setup Go
uses: ./.github/actions/setup-go
with:
# Runners have Go baked-in and Go will automatically
# download the toolchain configured in go.mod, so we don't
# need to reinstall it. It's faster on Windows runners.
use-preinstalled-go: ${{ runner.os == 'Windows' }}
- name: Setup Terraform
uses: ./.github/actions/setup-tf
- name: Setup Embedded Postgres Cache Paths
id: embedded-pg-cache
uses: ./.github/actions/setup-embedded-pg-cache-paths
- name: Download Embedded Postgres Cache
id: download-embedded-pg-cache
uses: ./.github/actions/embedded-pg-cache/download
with:
key-prefix: embedded-pg-${{ runner.os }}-${{ runner.arch }}
cache-path: ${{ steps.embedded-pg-cache.outputs.cached-dirs }}
- name: Test with PostgreSQL Database
env:
POSTGRES_VERSION: "13"
TS_DEBUG_DISCO: "true"
LC_CTYPE: "en_US.UTF-8"
LC_ALL: "en_US.UTF-8"
shell: bash
run: |
set -o errexit
set -o pipefail
if [ "${{ runner.os }}" == "Windows" ]; then
# Create a temp dir on the R: ramdisk drive for Windows. The default
# C: drive is extremely slow: https://github.com/actions/runner-images/issues/8755
mkdir -p "R:/temp/embedded-pg"
go run scripts/embedded-pg/main.go -path "R:/temp/embedded-pg" -cache "${EMBEDDED_PG_CACHE_DIR}"
elif [ "${{ runner.os }}" == "macOS" ]; then
# Postgres runs faster on a ramdisk on macOS too
mkdir -p /tmp/tmpfs
sudo mount_tmpfs -o noowners -s 8g /tmp/tmpfs
go run scripts/embedded-pg/main.go -path /tmp/tmpfs/embedded-pg -cache "${EMBEDDED_PG_CACHE_DIR}"
elif [ "${{ runner.os }}" == "Linux" ]; then
make test-postgres-docker
fi
# if macOS, install google-chrome for scaletests
# As another concern, should we really have this kind of external dependency
# requirement on standard CI?
if [ "${{ matrix.os }}" == "macos-latest" ]; then
brew install google-chrome
fi
# macOS will output "The default interactive shell is now zsh"
# intermittently in CI...
if [ "${{ matrix.os }}" == "macos-latest" ]; then
touch ~/.bash_profile && echo "export BASH_SILENCE_DEPRECATION_WARNING=1" >> ~/.bash_profile
fi
if [ "${{ runner.os }}" == "Windows" ]; then
# Our Windows runners have 16 cores.
# On Windows Postgres chokes up when we have 16x16=256 tests
# running in parallel, and dbtestutil.NewDB starts to take more than
# 10s to complete sometimes causing test timeouts. With 16x8=128 tests
# Postgres tends not to choke.
NUM_PARALLEL_PACKAGES=8
NUM_PARALLEL_TESTS=16
elif [ "${{ runner.os }}" == "macOS" ]; then
# Our macOS runners have 8 cores. We set NUM_PARALLEL_TESTS to 16
# because the tests complete faster and Postgres doesn't choke. It seems
# that macOS's tmpfs is faster than the one on Windows.
NUM_PARALLEL_PACKAGES=8
NUM_PARALLEL_TESTS=16
elif [ "${{ runner.os }}" == "Linux" ]; then
# Our Linux runners have 8 cores.
NUM_PARALLEL_PACKAGES=8
NUM_PARALLEL_TESTS=8
fi
# run tests without cache
TESTCOUNT="-count=1"
DB=ci gotestsum \
--format standard-quiet --packages "./..." \
-- -timeout=20m -v -p "$NUM_PARALLEL_PACKAGES" -parallel="$NUM_PARALLEL_TESTS" "$TESTCOUNT"
- name: Upload Embedded Postgres Cache
uses: ./.github/actions/embedded-pg-cache/upload
# We only use the embedded Postgres cache on macOS and Windows runners.
if: runner.OS == 'macOS' || runner.OS == 'Windows'
with:
cache-key: ${{ steps.download-embedded-pg-cache.outputs.cache-key }}
cache-path: "${{ steps.embedded-pg-cache.outputs.embedded-pg-cache }}"
- name: Upload test stats to Datadog
timeout-minutes: 1
continue-on-error: true
uses: ./.github/actions/upload-datadog
if: success() || failure()
with:
api-key: ${{ secrets.DATADOG_API_KEY }}
notify-slack-on-failure:
needs:
- test-go-pg
runs-on: ubuntu-latest
if: failure() && github.ref == 'refs/heads/main'
steps:
- name: Send Slack notification
run: |
ESCAPED_PROMPT=$(printf "%s" "<@U09LQ75AHKR> $BLINK_CI_FAILURE_PROMPT" | jq -Rsa .)
curl -X POST -H 'Content-type: application/json' \
--data '{
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": "❌ Nightly gauntlet failed",
"emoji": true
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*View failure:* <'"${RUN_URL}"'|Click here>"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": '"$ESCAPED_PROMPT"'
}
}
]
}' "${SLACK_WEBHOOK}"
env:
SLACK_WEBHOOK: ${{ secrets.CI_FAILURE_SLACK_WEBHOOK }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
BLINK_CI_FAILURE_PROMPT: ${{ vars.BLINK_CI_FAILURE_PROMPT }}
+1 -2
@@ -3,7 +3,6 @@
name: PR Auto Assign
on:
# zizmor: ignore[dangerous-triggers] We explicitly want to run on pull_request_target.
pull_request_target:
types: [opened]
@@ -15,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
+7 -17
@@ -19,7 +19,7 @@ jobs:
packages: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
@@ -27,12 +27,10 @@ jobs:
id: pr_number
run: |
if [ -n "${{ github.event.pull_request.number }}" ]; then
echo "PR_NUMBER=${{ github.event.pull_request.number }}" >> "$GITHUB_OUTPUT"
echo "PR_NUMBER=${{ github.event.pull_request.number }}" >> $GITHUB_OUTPUT
else
echo "PR_NUMBER=${PR_NUMBER}" >> "$GITHUB_OUTPUT"
echo "PR_NUMBER=${{ github.event.inputs.pr_number }}" >> $GITHUB_OUTPUT
fi
env:
PR_NUMBER: ${{ github.event.inputs.pr_number }}
- name: Delete image
continue-on-error: true
@@ -53,21 +51,17 @@ jobs:
- name: Delete helm release
run: |
set -euo pipefail
helm delete --namespace "pr${PR_NUMBER}" "pr${PR_NUMBER}" || echo "helm release not found"
env:
PR_NUMBER: ${{ steps.pr_number.outputs.PR_NUMBER }}
helm delete --namespace "pr${{ steps.pr_number.outputs.PR_NUMBER }}" "pr${{ steps.pr_number.outputs.PR_NUMBER }}" || echo "helm release not found"
- name: "Remove PR namespace"
run: |
kubectl delete namespace "pr${PR_NUMBER}" || echo "namespace not found"
env:
PR_NUMBER: ${{ steps.pr_number.outputs.PR_NUMBER }}
kubectl delete namespace "pr${{ steps.pr_number.outputs.PR_NUMBER }}" || echo "namespace not found"
- name: "Remove DNS records"
run: |
set -euo pipefail
# Get identifier for the record
record_id=$(curl -X GET "https://api.cloudflare.com/client/v4/zones/${{ secrets.PR_DEPLOYMENTS_ZONE_ID }}/dns_records?name=%2A.pr${PR_NUMBER}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}" \
record_id=$(curl -X GET "https://api.cloudflare.com/client/v4/zones/${{ secrets.PR_DEPLOYMENTS_ZONE_ID }}/dns_records?name=%2A.pr${{ steps.pr_number.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}" \
-H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
-H "Content-Type:application/json" | jq -r '.result[0].id') || echo "DNS record not found"
@@ -79,13 +73,9 @@ jobs:
-H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
-H "Content-Type:application/json" | jq -r '.success'
) || echo "DNS record not found"
env:
PR_NUMBER: ${{ steps.pr_number.outputs.PR_NUMBER }}
- name: "Delete certificate"
if: ${{ github.event.pull_request.merged == true }}
run: |
set -euxo pipefail
kubectl delete certificate "pr${PR_NUMBER}-tls" -n pr-deployment-certs || echo "certificate not found"
env:
PR_NUMBER: ${{ steps.pr_number.outputs.PR_NUMBER }}
kubectl delete certificate "pr${{ steps.pr_number.outputs.PR_NUMBER }}-tls" -n pr-deployment-certs || echo "certificate not found"
+68 -85
@@ -39,14 +39,12 @@ jobs:
PR_OPEN: ${{ steps.check_pr.outputs.pr_open }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Check if PR is open
id: check_pr
@@ -57,7 +55,7 @@ jobs:
echo "PR doesn't exist or is closed."
pr_open=false
fi
echo "pr_open=$pr_open" >> "$GITHUB_OUTPUT"
echo "pr_open=$pr_open" >> $GITHUB_OUTPUT
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -76,15 +74,14 @@ jobs:
runs-on: "ubuntu-latest"
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Get PR number, title, and branch name
id: pr_info
@@ -93,11 +90,9 @@ jobs:
PR_NUMBER=$(gh pr view --json number | jq -r '.number')
PR_TITLE=$(gh pr view --json title | jq -r '.title')
PR_URL=$(gh pr view --json url | jq -r '.url')
{
echo "PR_URL=$PR_URL"
echo "PR_NUMBER=$PR_NUMBER"
echo "PR_TITLE=$PR_TITLE"
} >> "$GITHUB_OUTPUT"
echo "PR_URL=$PR_URL" >> $GITHUB_OUTPUT
echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_OUTPUT
echo "PR_TITLE=$PR_TITLE" >> $GITHUB_OUTPUT
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -105,8 +100,8 @@ jobs:
id: set_tags
run: |
set -euo pipefail
echo "CODER_BASE_IMAGE_TAG=$CODER_BASE_IMAGE_TAG" >> "$GITHUB_OUTPUT"
echo "CODER_IMAGE_TAG=$CODER_IMAGE_TAG" >> "$GITHUB_OUTPUT"
echo "CODER_BASE_IMAGE_TAG=$CODER_BASE_IMAGE_TAG" >> $GITHUB_OUTPUT
echo "CODER_IMAGE_TAG=$CODER_IMAGE_TAG" >> $GITHUB_OUTPUT
env:
CODER_BASE_IMAGE_TAG: ghcr.io/coder/coder-preview-base:pr${{ steps.pr_info.outputs.PR_NUMBER }}
CODER_IMAGE_TAG: ghcr.io/coder/coder-preview:pr${{ steps.pr_info.outputs.PR_NUMBER }}
@@ -123,16 +118,14 @@ jobs:
id: check_deployment
run: |
set -euo pipefail
if helm status "pr${PR_NUMBER}" --namespace "pr${PR_NUMBER}" > /dev/null 2>&1; then
if helm status "pr${{ steps.pr_info.outputs.PR_NUMBER }}" --namespace "pr${{ steps.pr_info.outputs.PR_NUMBER }}" > /dev/null 2>&1; then
echo "Deployment already exists. Skipping deployment."
NEW=false
else
echo "Deployment doesn't exist."
NEW=true
fi
echo "NEW=$NEW" >> "$GITHUB_OUTPUT"
env:
PR_NUMBER: ${{ steps.pr_info.outputs.PR_NUMBER }}
echo "NEW=$NEW" >> $GITHUB_OUTPUT
- name: Check changed files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
@@ -161,20 +154,17 @@ jobs:
- name: Print number of changed files
run: |
set -euo pipefail
echo "Total number of changed files: ${ALL_COUNT}"
echo "Number of ignored files: ${IGNORED_COUNT}"
env:
ALL_COUNT: ${{ steps.filter.outputs.all_count }}
IGNORED_COUNT: ${{ steps.filter.outputs.ignored_count }}
echo "Total number of changed files: ${{ steps.filter.outputs.all_count }}"
echo "Number of ignored files: ${{ steps.filter.outputs.ignored_count }}"
- name: Build conditionals
id: build_conditionals
run: |
set -euo pipefail
# build if the workflow is manually triggered and the deployment doesn't exist (first build or force rebuild)
echo "first_or_force_build=${{ (github.event_name == 'workflow_dispatch' && steps.check_deployment.outputs.NEW == 'true') || github.event.inputs.build == 'true' }}" >> "$GITHUB_OUTPUT"
echo "first_or_force_build=${{ (github.event_name == 'workflow_dispatch' && steps.check_deployment.outputs.NEW == 'true') || github.event.inputs.build == 'true' }}" >> $GITHUB_OUTPUT
# build if the deployment already exist and there are changes in the files that we care about (automatic updates)
echo "automatic_rebuild=${{ steps.check_deployment.outputs.NEW == 'false' && steps.filter.outputs.all_count > steps.filter.outputs.ignored_count }}" >> "$GITHUB_OUTPUT"
echo "automatic_rebuild=${{ steps.check_deployment.outputs.NEW == 'false' && steps.filter.outputs.all_count > steps.filter.outputs.ignored_count }}" >> $GITHUB_OUTPUT
comment-pr:
needs: get_info
@@ -184,12 +174,12 @@ jobs:
pull-requests: write # needed for commenting on PRs
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Find Comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: fc
with:
issue-number: ${{ needs.get_info.outputs.PR_NUMBER }}
@@ -199,7 +189,7 @@ jobs:
- name: Comment on PR
id: comment_id
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
with:
comment-id: ${{ steps.fc.outputs.comment-id }}
issue-number: ${{ needs.get_info.outputs.PR_NUMBER }}
@@ -228,15 +218,14 @@ jobs:
CODER_IMAGE_TAG: ${{ needs.get_info.outputs.CODER_IMAGE_TAG }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup Node
uses: ./.github/actions/setup-node
@@ -248,7 +237,7 @@ jobs:
uses: ./.github/actions/setup-sqlc
- name: GHCR Login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -261,13 +250,12 @@ jobs:
make gen/mark-fresh
export DOCKER_IMAGE_NO_PREREQUISITES=true
version="$(./scripts/version.sh)"
CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
export CODER_IMAGE_BUILD_BASE_TAG
export CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
make -j build/coder_linux_amd64
./scripts/build_docker.sh \
--arch amd64 \
--target "${CODER_IMAGE_TAG}" \
--version "$version" \
--target ${{ env.CODER_IMAGE_TAG }} \
--version $version \
--push \
build/coder_linux_amd64
@@ -288,7 +276,7 @@ jobs:
PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
@@ -305,13 +293,13 @@ jobs:
set -euo pipefail
foundTag=$(
gh api /orgs/coder/packages/container/coder-preview/versions |
jq -r --arg tag "pr${PR_NUMBER}" '.[] |
jq -r --arg tag "pr${{ env.PR_NUMBER }}" '.[] |
select(.metadata.container.tags == [$tag]) |
.metadata.container.tags[0]'
)
if [ -z "$foundTag" ]; then
echo "Image not found"
echo "${CODER_IMAGE_TAG} not found in ghcr.io/coder/coder-preview"
echo "${{ env.CODER_IMAGE_TAG }} not found in ghcr.io/coder/coder-preview"
exit 1
else
echo "Image found"
@@ -326,42 +314,40 @@ jobs:
curl -X POST "https://api.cloudflare.com/client/v4/zones/${{ secrets.PR_DEPLOYMENTS_ZONE_ID }}/dns_records" \
-H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
-H "Content-Type:application/json" \
--data '{"type":"CNAME","name":"*.'"${PR_HOSTNAME}"'","content":"'"${PR_HOSTNAME}"'","ttl":1,"proxied":false}'
--data '{"type":"CNAME","name":"*.${{ env.PR_HOSTNAME }}","content":"${{ env.PR_HOSTNAME }}","ttl":1,"proxied":false}'
- name: Create PR namespace
if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
run: |
set -euo pipefail
# try to delete the namespace, but don't fail if it doesn't exist
kubectl delete namespace "pr${PR_NUMBER}" || true
kubectl create namespace "pr${PR_NUMBER}"
kubectl delete namespace "pr${{ env.PR_NUMBER }}" || true
kubectl create namespace "pr${{ env.PR_NUMBER }}"
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Check and Create Certificate
if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
run: |
# Using kubectl to check if a Certificate resource already exists
# we are doing this to avoid letsencrypt rate limits
if ! kubectl get certificate "pr${PR_NUMBER}-tls" -n pr-deployment-certs > /dev/null 2>&1; then
if ! kubectl get certificate pr${{ env.PR_NUMBER }}-tls -n pr-deployment-certs > /dev/null 2>&1; then
echo "Certificate doesn't exist. Creating a new one."
envsubst < ./.github/pr-deployments/certificate.yaml | kubectl apply -f -
else
echo "Certificate exists. Skipping certificate creation."
fi
echo "Copy certificate from pr-deployment-certs to pr${PR_NUMBER} namespace"
until kubectl get secret "pr${PR_NUMBER}-tls" -n pr-deployment-certs &> /dev/null
echo "Copy certificate from pr-deployment-certs to pr${{ env.PR_NUMBER }} namespace"
until kubectl get secret pr${{ env.PR_NUMBER }}-tls -n pr-deployment-certs &> /dev/null
do
echo "Waiting for secret pr${PR_NUMBER}-tls to be created..."
echo "Waiting for secret pr${{ env.PR_NUMBER }}-tls to be created..."
sleep 5
done
(
kubectl get secret "pr${PR_NUMBER}-tls" -n pr-deployment-certs -o json |
kubectl get secret pr${{ env.PR_NUMBER }}-tls -n pr-deployment-certs -o json |
jq 'del(.metadata.namespace,.metadata.creationTimestamp,.metadata.resourceVersion,.metadata.selfLink,.metadata.uid,.metadata.managedFields)' |
kubectl -n "pr${PR_NUMBER}" apply -f -
kubectl -n pr${{ env.PR_NUMBER }} apply -f -
)
- name: Set up PostgreSQL database
@@ -369,14 +355,13 @@ jobs:
run: |
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install coder-db bitnami/postgresql \
--namespace "pr${PR_NUMBER}" \
--set image.repository=bitnamilegacy/postgresql \
--namespace pr${{ env.PR_NUMBER }} \
--set auth.username=coder \
--set auth.password=coder \
--set auth.database=coder \
--set persistence.size=10Gi
kubectl create secret generic coder-db-url -n "pr${PR_NUMBER}" \
--from-literal=url="postgres://coder:coder@coder-db-postgresql.pr${PR_NUMBER}.svc.cluster.local:5432/coder?sslmode=disable"
kubectl create secret generic coder-db-url -n pr${{ env.PR_NUMBER }} \
--from-literal=url="postgres://coder:coder@coder-db-postgresql.pr${{ env.PR_NUMBER }}.svc.cluster.local:5432/coder?sslmode=disable"
- name: Create a service account, role, and rolebinding for the PR namespace
if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
@@ -398,8 +383,8 @@ jobs:
run: |
set -euo pipefail
helm dependency update --skip-refresh ./helm/coder
helm upgrade --install "pr${PR_NUMBER}" ./helm/coder \
--namespace "pr${PR_NUMBER}" \
helm upgrade --install "pr${{ env.PR_NUMBER }}" ./helm/coder \
--namespace "pr${{ env.PR_NUMBER }}" \
--values ./pr-deploy-values.yaml \
--force
@@ -408,8 +393,8 @@ jobs:
run: |
helm repo add coder-logstream-kube https://helm.coder.com/logstream-kube
helm upgrade --install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \
--namespace "pr${PR_NUMBER}" \
--set url="https://${PR_HOSTNAME}"
--namespace "pr${{ env.PR_NUMBER }}" \
--set url="https://${{ env.PR_HOSTNAME }}"
- name: Get Coder binary
if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
@@ -417,16 +402,16 @@ jobs:
set -euo pipefail
DEST="${HOME}/coder"
URL="https://${PR_HOSTNAME}/bin/coder-linux-amd64"
URL="https://${{ env.PR_HOSTNAME }}/bin/coder-linux-amd64"
mkdir -p "$(dirname "$DEST")"
mkdir -p "$(dirname ${DEST})"
COUNT=0
until curl --output /dev/null --silent --head --fail "$URL"; do
until $(curl --output /dev/null --silent --head --fail "$URL"); do
printf '.'
sleep 5
COUNT=$((COUNT+1))
if [ "$COUNT" -ge 60 ]; then
if [ $COUNT -ge 60 ]; then
echo "Timed out waiting for URL to be available"
exit 1
fi
@@ -435,7 +420,7 @@ jobs:
curl -fsSL "$URL" -o "${DEST}"
chmod +x "${DEST}"
"${DEST}" version
sudo mv "${DEST}" /usr/local/bin/coder
mv "${DEST}" /usr/local/bin/coder
- name: Create first user
if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
@@ -450,24 +435,24 @@ jobs:
# add mask so that the password is not printed to the logs
echo "::add-mask::$password"
echo "password=$password" >> "$GITHUB_OUTPUT"
echo "password=$password" >> $GITHUB_OUTPUT
coder login \
--first-user-username "pr${PR_NUMBER}-admin" \
--first-user-email "pr${PR_NUMBER}@coder.com" \
--first-user-password "$password" \
--first-user-username pr${{ env.PR_NUMBER }}-admin \
--first-user-email pr${{ env.PR_NUMBER }}@coder.com \
--first-user-password $password \
--first-user-trial=false \
--use-token-as-session \
"https://${PR_HOSTNAME}"
https://${{ env.PR_HOSTNAME }}
# Create a user for the github.actor
# TODO: update once https://github.com/coder/coder/issues/15466 is resolved
# coder users create \
# --username ${GITHUB_ACTOR} \
# --username ${{ github.actor }} \
# --login-type github
# promote the user to admin role
# coder org members edit-role ${GITHUB_ACTOR} organization-admin
# coder org members edit-role ${{ github.actor }} organization-admin
# TODO: update once https://github.com/coder/internal/issues/207 is resolved
- name: Send Slack notification
@@ -476,22 +461,20 @@ jobs:
curl -s -o /dev/null -X POST -H 'Content-type: application/json' \
-d \
'{
"pr_number": "'"${PR_NUMBER}"'",
"pr_url": "'"${PR_URL}"'",
"pr_title": "'"${PR_TITLE}"'",
"pr_access_url": "'"https://${PR_HOSTNAME}"'",
"pr_username": "'"pr${PR_NUMBER}-admin"'",
"pr_email": "'"pr${PR_NUMBER}@coder.com"'",
"pr_password": "'"${PASSWORD}"'",
"pr_actor": "'"${GITHUB_ACTOR}"'"
"pr_number": "'"${{ env.PR_NUMBER }}"'",
"pr_url": "'"${{ env.PR_URL }}"'",
"pr_title": "'"${{ env.PR_TITLE }}"'",
"pr_access_url": "'"https://${{ env.PR_HOSTNAME }}"'",
"pr_username": "'"pr${{ env.PR_NUMBER }}-admin"'",
"pr_email": "'"pr${{ env.PR_NUMBER }}@coder.com"'",
"pr_password": "'"${{ steps.setup_deployment.outputs.password }}"'",
"pr_actor": "'"${{ github.actor }}"'"
}' \
${{ secrets.PR_DEPLOYMENTS_SLACK_WEBHOOK }}
echo "Slack notification sent"
env:
PASSWORD: ${{ steps.setup_deployment.outputs.password }}
- name: Find Comment
uses: peter-evans/find-comment@b30e6a3c0ed37e7c023ccd3f1db5c6c0b0c23aad # v4.0.0
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
id: fc
with:
issue-number: ${{ env.PR_NUMBER }}
@@ -500,7 +483,7 @@ jobs:
direction: last
- name: Comment on PR
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
env:
STATUS: ${{ needs.get_info.outputs.NEW == 'true' && 'Created' || 'Updated' }}
with:
@@ -521,7 +504,7 @@ jobs:
run: |
set -euo pipefail
cd .github/pr-deployments/template
coder templates push -y --variable "namespace=pr${PR_NUMBER}" kubernetes
coder templates push -y --variable namespace=pr${{ env.PR_NUMBER }} kubernetes
# Create workspace
coder create --template="kubernetes" kube --parameter cpu=2 --parameter memory=4 --parameter home_disk_size=2 -y
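One genuine bug fix hides in the quoting churn above: `until $(curl --output /dev/null --silent --head --fail "$URL")` wraps the probe in command substitution, which runs curl and then tries to execute curl's stdout as the loop condition. Because `--output /dev/null` leaves stdout empty, bash happens to fall back to the substitution's exit status, but any unexpected output would be executed as a command. Testing the exit status directly, as the replacement line does, is the idiomatic form. The fixed loop in isolation (the URL is illustrative):

```yaml
- name: wait until a URL responds
  run: |
    URL="https://example.invalid/bin/coder-linux-amd64"   # illustrative
    COUNT=0
    # was: until $(curl --output /dev/null --silent --head --fail "$URL"); do
    until curl --output /dev/null --silent --head --fail "$URL"; do
      printf '.'
      sleep 5
      COUNT=$((COUNT + 1))
      if [ "$COUNT" -ge 60 ]; then
        echo "Timed out waiting for URL to be available"
        exit 1
      fi
    done
```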
+1 -1
@@ -14,7 +14,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
+69 -138
@@ -32,43 +32,15 @@ env:
CODER_RELEASE_NOTES: ${{ inputs.release_notes }}
jobs:
# Only allow maintainers/admins to release.
check-perms:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Allow only maintainers/admins
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const {data} = await github.rest.repos.getCollaboratorPermissionLevel({
owner: context.repo.owner,
repo: context.repo.repo,
username: context.actor
});
const role = data.role_name || data.user?.role_name || data.permission;
const perms = data.user?.permissions || {};
core.info(`Actor ${context.actor} permission=${data.permission}, role_name=${role}`);
const allowed =
role === 'admin' ||
role === 'maintain' ||
perms.admin === true ||
perms.maintain === true;
if (!allowed) core.setFailed('Denied: requires maintain or admin');
# build-dylib is a separate job to build the dylib on macOS.
build-dylib:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-macos-latest' || 'macos-latest' }}
needs: check-perms
steps:
# Harden Runner doesn't work on macOS.
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
# If the event that triggered the build was an annotated tag (which our
# tags are supposed to be), actions/checkout has a bug where the tag in
@@ -81,16 +53,14 @@ jobs:
- name: Setup build tools
run: |
brew install bash gnu-getopt make
{
echo "$(brew --prefix bash)/bin"
echo "$(brew --prefix gnu-getopt)/bin"
echo "$(brew --prefix make)/libexec/gnubin"
} >> "$GITHUB_PATH"
echo "$(brew --prefix bash)/bin" >> $GITHUB_PATH
echo "$(brew --prefix gnu-getopt)/bin" >> $GITHUB_PATH
echo "$(brew --prefix make)/libexec/gnubin" >> $GITHUB_PATH
- name: Switch XCode Version
uses: maxim-lobanov/setup-xcode@60606e260d2fc5762a71e64e74b2174e8ea3c8bd # v1.6.0
with:
xcode-version: "16.1.0"
xcode-version: "16.0.0"
- name: Setup Go
uses: ./.github/actions/setup-go
@@ -131,7 +101,7 @@ jobs:
AC_CERTIFICATE_PASSWORD_FILE: /tmp/apple_cert_password.txt
- name: Upload build artifacts
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: dylibs
path: |
@@ -144,7 +114,7 @@ jobs:
release:
name: Build and publish
needs: [build-dylib, check-perms]
needs: build-dylib
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
permissions:
# Required to publish a release
@@ -164,15 +134,14 @@ jobs:
version: ${{ steps.version.outputs.version }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
# If the event that triggered the build was an annotated tag (which our
# tags are supposed to be), actions/checkout has a bug where the tag in
@@ -187,9 +156,9 @@ jobs:
run: |
set -euo pipefail
version="$(./scripts/version.sh)"
echo "version=$version" >> "$GITHUB_OUTPUT"
echo "version=$version" >> $GITHUB_OUTPUT
# Speed up future version.sh calls.
echo "CODER_FORCE_VERSION=$version" >> "$GITHUB_ENV"
echo "CODER_FORCE_VERSION=$version" >> $GITHUB_ENV
echo "$version"
# Verify that all expectations for a release are met.
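The version step above writes the same value through two mechanisms with different scopes: `GITHUB_OUTPUT` exposes it as a step output for `${{ steps.version.outputs.version }}` expressions, while `GITHUB_ENV` injects `CODER_FORCE_VERSION` into the environment of every subsequent step, which is what lets later `version.sh` invocations short-circuit. A minimal sketch of the two scopes:

```yaml
- name: Get version
  id: version
  run: |
    version="$(./scripts/version.sh)"
    echo "version=$version" >> "$GITHUB_OUTPUT"            # step output
    echo "CODER_FORCE_VERSION=$version" >> "$GITHUB_ENV"   # env for later steps
- name: later step
  env:
    FROM_OUTPUT: ${{ steps.version.outputs.version }}
  run: |
    echo "via output: $FROM_OUTPUT"
    echo "via env:    $CODER_FORCE_VERSION"   # populated by GITHUB_ENV above
```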
@@ -231,7 +200,7 @@ jobs:
release_notes_file="$(mktemp -t release_notes.XXXXXX)"
echo "$CODER_RELEASE_NOTES" > "$release_notes_file"
echo CODER_RELEASE_NOTES_FILE="$release_notes_file" >> "$GITHUB_ENV"
echo CODER_RELEASE_NOTES_FILE="$release_notes_file" >> $GITHUB_ENV
- name: Show release notes
run: |
@@ -239,7 +208,7 @@ jobs:
cat "$CODER_RELEASE_NOTES_FILE"
- name: Docker Login
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -253,7 +222,7 @@ jobs:
# Necessary for signing Windows binaries.
- name: Setup Java
uses: actions/setup-java@dded0888837ed1f317902acf8a20df0ad188d165 # v5.0.0
uses: actions/setup-java@c5195efecf7bdfc987ee8bae7a71cb8b11521c00 # v4.7.1
with:
distribution: "zulu"
java-version: "11.0"
@@ -317,17 +286,17 @@ jobs:
# Setup GCloud for signing Windows binaries.
- name: Authenticate to Google Cloud
id: gcloud_auth
uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
uses: google-github-actions/auth@ba79af03959ebeac9769e648f473a284504d9193 # v2.1.10
with:
workload_identity_provider: ${{ vars.GCP_CODE_SIGNING_WORKLOAD_ID_PROVIDER }}
service_account: ${{ vars.GCP_CODE_SIGNING_SERVICE_ACCOUNT }}
workload_identity_provider: ${{ secrets.GCP_CODE_SIGNING_WORKLOAD_ID_PROVIDER }}
service_account: ${{ secrets.GCP_CODE_SIGNING_SERVICE_ACCOUNT }}
token_format: "access_token"
- name: Setup GCloud SDK
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
uses: google-github-actions/setup-gcloud@77e7a554d41e2ee56fc945c52dfd3f33d12def9a # v2.1.4
- name: Download dylibs
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: dylibs
path: ./build
@@ -354,8 +323,6 @@ jobs:
env:
CODER_SIGN_WINDOWS: "1"
CODER_SIGN_DARWIN: "1"
CODER_SIGN_GPG: "1"
CODER_GPG_RELEASE_KEY_BASE64: ${{ secrets.GPG_RELEASE_KEY_BASE64 }}
CODER_WINDOWS_RESOURCES: "1"
AC_CERTIFICATE_FILE: /tmp/apple_cert.p12
AC_CERTIFICATE_PASSWORD_FILE: /tmp/apple_cert_password.txt
@@ -381,9 +348,9 @@ jobs:
set -euo pipefail
if [[ "${CODER_RELEASE:-}" != *t* ]] || [[ "${CODER_DRY_RUN:-}" == *t* ]]; then
# Empty value means use the default and avoid building a fresh one.
echo "tag=" >> "$GITHUB_OUTPUT"
echo "tag=" >> $GITHUB_OUTPUT
else
echo "tag=$(CODER_IMAGE_BASE=ghcr.io/coder/coder-base ./scripts/image_tag.sh)" >> "$GITHUB_OUTPUT"
echo "tag=$(CODER_IMAGE_BASE=ghcr.io/coder/coder-base ./scripts/image_tag.sh)" >> $GITHUB_OUTPUT
fi
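The dry-run guard above leans on two bash idioms; a minimal sketch with hypothetical values:

```yaml
- name: Truthiness check
  run: |
    # "${CODER_DRY_RUN:-}" substitutes an empty string when the variable
    # is unset, which is safe under `set -u`; the [[ ... == *t* ]] glob
    # then treats any value containing "t" (true, t) as truthy.
    if [[ "${CODER_DRY_RUN:-}" == *t* ]]; then
      echo "dry run: skipping publish"
    fi
```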
- name: Create empty base-build-context directory
@@ -397,7 +364,7 @@ jobs:
# This uses OIDC authentication, so no auth variables are required.
- name: Build base Docker image via depot.dev
if: steps.image-base-tag.outputs.tag != ''
uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
uses: depot/build-push-action@2583627a84956d07561420dcc1d0eb1f2af3fac0 # v1.15.0
with:
project: wl5hnrrkns
context: base-build-context
@@ -418,7 +385,7 @@ jobs:
# available immediately
for i in {1..10}; do
rc=0
raw_manifests=$(docker buildx imagetools inspect --raw "${IMAGE_TAG}") || rc=$?
raw_manifests=$(docker buildx imagetools inspect --raw "${{ steps.image-base-tag.outputs.tag }}") || rc=$?
if [[ "$rc" -eq 0 ]]; then
break
fi
@@ -440,8 +407,6 @@ jobs:
echo "$manifests" | grep -q linux/amd64
echo "$manifests" | grep -q linux/arm64
echo "$manifests" | grep -q linux/arm/v7
env:
IMAGE_TAG: ${{ steps.image-base-tag.outputs.tag }}
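The `IMAGE_TAG` change above swaps an inline `${{ ... }}` interpolation for an environment variable. Workflow expressions are substituted into the script text before bash runs, so a crafted step output could inject shell syntax; routing the value through `env:` keeps it as plain data. A minimal sketch of the pattern:

```yaml
- name: Inspect image manifest
  env:
    IMAGE_TAG: ${{ steps.image-base-tag.outputs.tag }}
  run: |
    # IMAGE_TAG is now an ordinary environment variable, expanded by the
    # shell rather than pasted into the script by the workflow engine.
    docker buildx imagetools inspect --raw "${IMAGE_TAG}"
```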
# GitHub attestation provides SLSA provenance for Docker images, establishing a verifiable
# record that these images were built in GitHub Actions with specific inputs and environment.
@@ -454,7 +419,7 @@ jobs:
id: attest_base
if: ${{ !inputs.dry_run && steps.image-base-tag.outputs.tag != '' }}
continue-on-error: true
uses: actions/attest@daf44fb950173508f38bd2406030372c1d1162b1 # v3.0.0
uses: actions/attest@ce27ba3b4a9a139d9a20a4a07d69fabb52f1e5bc # v2.4.0
with:
subject-name: ${{ steps.image-base-tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -509,7 +474,7 @@ jobs:
# Save multiarch image tag for attestation
multiarch_image="$(./scripts/image_tag.sh)"
echo "multiarch_image=${multiarch_image}" >> "$GITHUB_OUTPUT"
echo "multiarch_image=${multiarch_image}" >> $GITHUB_OUTPUT
# For debugging, print all docker image tags
docker images
@@ -517,15 +482,16 @@ jobs:
# if the current version is equal to the highest (according to semver)
# version in the repo, also create a multi-arch image as ":latest" and
# push it
created_latest_tag=false
if [[ "$(git tag | grep '^v' | grep -vE '(rc|dev|-|\+|\/)' | sort -r --version-sort | head -n1)" == "v$(./scripts/version.sh)" ]]; then
# shellcheck disable=SC2046
./scripts/build_docker_multiarch.sh \
--push \
--target "$(./scripts/image_tag.sh --version latest)" \
$(cat build/coder_"$version"_linux_{amd64,arm64,armv7}.tag)
echo "created_latest_tag=true" >> "$GITHUB_OUTPUT"
created_latest_tag=true
echo "created_latest_tag=true" >> $GITHUB_OUTPUT
else
echo "created_latest_tag=false" >> "$GITHUB_OUTPUT"
echo "created_latest_tag=false" >> $GITHUB_OUTPUT
fi
env:
CODER_BASE_IMAGE_TAG: ${{ steps.image-base-tag.outputs.tag }}
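The `created_latest_tag` branch above decides whether this build is the newest stable version. Restated as a standalone sketch:

```yaml
- name: Detect highest stable tag
  run: |
    # Keep only stable v-prefixed tags (no rc/dev/pre-release/metadata),
    # sort them as versions, and compare the newest against this build.
    latest="$(git tag | grep '^v' | grep -vE '(rc|dev|-|\+|\/)' | sort -r --version-sort | head -n1)"
    if [[ "$latest" == "v$(./scripts/version.sh)" ]]; then
      echo "this build is the highest stable release"
    fi
```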
@@ -533,27 +499,24 @@ jobs:
- name: SBOM Generation and Attestation
if: ${{ !inputs.dry_run }}
env:
COSIGN_EXPERIMENTAL: '1'
MULTIARCH_IMAGE: ${{ steps.build_docker.outputs.multiarch_image }}
VERSION: ${{ steps.version.outputs.version }}
CREATED_LATEST_TAG: ${{ steps.build_docker.outputs.created_latest_tag }}
COSIGN_EXPERIMENTAL: "1"
run: |
set -euxo pipefail
# Generate SBOM for multi-arch image with version in filename
echo "Generating SBOM for multi-arch image: ${MULTIARCH_IMAGE}"
syft "${MULTIARCH_IMAGE}" -o spdx-json > "coder_${VERSION}_sbom.spdx.json"
echo "Generating SBOM for multi-arch image: ${{ steps.build_docker.outputs.multiarch_image }}"
syft "${{ steps.build_docker.outputs.multiarch_image }}" -o spdx-json > coder_${{ steps.version.outputs.version }}_sbom.spdx.json
# Attest SBOM to multi-arch image
echo "Attesting SBOM to multi-arch image: ${MULTIARCH_IMAGE}"
cosign clean --force=true "${MULTIARCH_IMAGE}"
echo "Attesting SBOM to multi-arch image: ${{ steps.build_docker.outputs.multiarch_image }}"
cosign clean --force=true "${{ steps.build_docker.outputs.multiarch_image }}"
cosign attest --type spdxjson \
--predicate "coder_${VERSION}_sbom.spdx.json" \
--predicate coder_${{ steps.version.outputs.version }}_sbom.spdx.json \
--yes \
"${MULTIARCH_IMAGE}"
"${{ steps.build_docker.outputs.multiarch_image }}"
# If latest tag was created, also attest it
if [[ "${CREATED_LATEST_TAG}" == "true" ]]; then
if [[ "${{ steps.build_docker.outputs.created_latest_tag }}" == "true" ]]; then
latest_tag="$(./scripts/image_tag.sh --version latest)"
echo "Generating SBOM for latest image: ${latest_tag}"
syft "${latest_tag}" -o spdx-json > coder_latest_sbom.spdx.json
@@ -570,7 +533,7 @@ jobs:
id: attest_main
if: ${{ !inputs.dry_run }}
continue-on-error: true
uses: actions/attest@daf44fb950173508f38bd2406030372c1d1162b1 # v3.0.0
uses: actions/attest@ce27ba3b4a9a139d9a20a4a07d69fabb52f1e5bc # v2.4.0
with:
subject-name: ${{ steps.build_docker.outputs.multiarch_image }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -607,14 +570,14 @@ jobs:
- name: Get latest tag name
id: latest_tag
if: ${{ !inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true' }}
run: echo "tag=$(./scripts/image_tag.sh --version latest)" >> "$GITHUB_OUTPUT"
run: echo "tag=$(./scripts/image_tag.sh --version latest)" >> $GITHUB_OUTPUT
# If this is the highest version according to semver, also attest the "latest" tag
- name: GitHub Attestation for "latest" Docker image
id: attest_latest
if: ${{ !inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true' }}
continue-on-error: true
uses: actions/attest@daf44fb950173508f38bd2406030372c1d1162b1 # v3.0.0
uses: actions/attest@ce27ba3b4a9a139d9a20a4a07d69fabb52f1e5bc # v2.4.0
with:
subject-name: ${{ steps.latest_tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -650,7 +613,7 @@ jobs:
# Report attestation failures but don't fail the workflow
- name: Check attestation status
if: ${{ !inputs.dry_run }}
run: | # zizmor: ignore[template-injection] We're just reading steps.attest_x.outcome here, no risk of injection
run: |
if [[ "${{ steps.attest_base.outcome }}" == "failure" && "${{ steps.attest_base.conclusion }}" != "skipped" ]]; then
echo "::warning::GitHub attestation for base image failed"
fi
@@ -669,30 +632,6 @@ jobs:
- name: ls build
run: ls -lh build
- name: Publish Coder CLI binaries and detached signatures to GCS
if: ${{ !inputs.dry_run }}
run: |
set -euxo pipefail
version="$(./scripts/version.sh)"
# Source array of slim binaries
declare -A binaries
binaries["coder-darwin-amd64"]="coder-slim_${version}_darwin_amd64"
binaries["coder-darwin-arm64"]="coder-slim_${version}_darwin_arm64"
binaries["coder-linux-amd64"]="coder-slim_${version}_linux_amd64"
binaries["coder-linux-arm64"]="coder-slim_${version}_linux_arm64"
binaries["coder-linux-armv7"]="coder-slim_${version}_linux_armv7"
binaries["coder-windows-amd64.exe"]="coder-slim_${version}_windows_amd64.exe"
binaries["coder-windows-arm64.exe"]="coder-slim_${version}_windows_arm64.exe"
for cli_name in "${!binaries[@]}"; do
slim_binary="${binaries[$cli_name]}"
detached_signature="${slim_binary}.asc"
gcloud storage cp "./build/${slim_binary}" "gs://releases.coder.com/coder-cli/${version}/${cli_name}"
gcloud storage cp "./build/${detached_signature}" "gs://releases.coder.com/coder-cli/${version}/${cli_name}.asc"
done
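The removed GCS publish step above drives its uploads from a bash associative array; a trimmed sketch of the construct (`$version` is assumed to be set earlier in the job):

```yaml
- name: Map CLI names to slim binaries
  run: |
    # declare -A creates an associative array; "${!binaries[@]}" iterates
    # its keys, and "${binaries[$key]}" looks up each value.
    declare -A binaries
    binaries["coder-linux-amd64"]="coder-slim_${version}_linux_amd64"
    binaries["coder-darwin-arm64"]="coder-slim_${version}_darwin_arm64"
    for cli_name in "${!binaries[@]}"; do
      echo "would upload ${binaries[$cli_name]} as ${cli_name}"
    done
```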
- name: Publish release
run: |
set -euo pipefail
@@ -715,11 +654,11 @@ jobs:
./build/*.apk
./build/*.deb
./build/*.rpm
"./coder_${VERSION}_sbom.spdx.json"
./coder_${{ steps.version.outputs.version }}_sbom.spdx.json
)
# Only include the latest SBOM file if it was created
if [[ "${CREATED_LATEST_TAG}" == "true" ]]; then
if [[ "${{ steps.build_docker.outputs.created_latest_tag }}" == "true" ]]; then
files+=(./coder_latest_sbom.spdx.json)
fi
@@ -730,17 +669,15 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CODER_GPG_RELEASE_KEY_BASE64: ${{ secrets.GPG_RELEASE_KEY_BASE64 }}
VERSION: ${{ steps.version.outputs.version }}
CREATED_LATEST_TAG: ${{ steps.build_docker.outputs.created_latest_tag }}
- name: Authenticate to Google Cloud
uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
uses: google-github-actions/auth@ba79af03959ebeac9769e648f473a284504d9193 # v2.1.10
with:
workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
workload_identity_provider: ${{ secrets.GCP_WORKLOAD_ID_PROVIDER }}
service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}
- name: Setup GCloud SDK
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # 3.0.1
uses: google-github-actions/setup-gcloud@77e7a554d41e2ee56fc945c52dfd3f33d12def9a # 2.1.4
- name: Publish Helm Chart
if: ${{ !inputs.dry_run }}
@@ -752,16 +689,14 @@ jobs:
cp "build/provisioner_helm_${version}.tgz" build/helm
gsutil cp gs://helm.coder.com/v2/index.yaml build/helm/index.yaml
helm repo index build/helm --url https://helm.coder.com/v2 --merge build/helm/index.yaml
gsutil -h "Cache-Control:no-cache,max-age=0" cp "build/helm/coder_helm_${version}.tgz" gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp "build/helm/provisioner_helm_${version}.tgz" gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp "build/helm/index.yaml" gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp "helm/artifacthub-repo.yml" gs://helm.coder.com/v2
helm push "build/coder_helm_${version}.tgz" oci://ghcr.io/coder/chart
helm push "build/provisioner_helm_${version}.tgz" oci://ghcr.io/coder/chart
gsutil -h "Cache-Control:no-cache,max-age=0" cp build/helm/coder_helm_${version}.tgz gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp build/helm/provisioner_helm_${version}.tgz gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp build/helm/index.yaml gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp helm/artifacthub-repo.yml gs://helm.coder.com/v2
- name: Upload artifacts to actions (if dry-run)
if: ${{ inputs.dry_run }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: release-artifacts
path: |
@@ -777,7 +712,7 @@ jobs:
- name: Upload latest sbom artifact to actions (if dry-run)
if: inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: latest-sbom-artifact
path: ./coder_latest_sbom.spdx.json
@@ -785,7 +720,7 @@ jobs:
- name: Send repository-dispatch event
if: ${{ !inputs.dry_run }}
uses: peter-evans/repository-dispatch@28959ce8df70de7be546dd1250a005dd32156697 # v4.0.1
uses: peter-evans/repository-dispatch@ff45666b9427631e3450c54a1bcbee4d9ff4d7c0 # v3.0.0
with:
token: ${{ secrets.CDRCI_GITHUB_TOKEN }}
repository: coder/packages
@@ -802,18 +737,18 @@ jobs:
# TODO: skip this if it's not a new release (i.e. a backport). This is
# fine right now because it just makes a PR that we can close.
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Update homebrew
env:
# Variables used by the `gh` command
GH_REPO: coder/homebrew-coder
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
VERSION: ${{ needs.release.outputs.version }}
run: |
# Keep version number around for reference, removing any potential leading v
coder_version="$(echo "${VERSION}" | tr -d v)"
coder_version="$(echo "${{ needs.release.outputs.version }}" | tr -d v)"
set -euxo pipefail
@@ -832,9 +767,9 @@ jobs:
wget "$checksums_url" -O checksums.txt
# Get the SHAs
darwin_arm_sha="$(grep "darwin_arm64.zip" checksums.txt | awk '{ print $1 }')"
darwin_intel_sha="$(grep "darwin_amd64.zip" checksums.txt | awk '{ print $1 }')"
linux_sha="$(grep "linux_amd64.tar.gz" checksums.txt | awk '{ print $1 }')"
darwin_arm_sha="$(cat checksums.txt | grep "darwin_arm64.zip" | awk '{ print $1 }')"
darwin_intel_sha="$(cat checksums.txt | grep "darwin_amd64.zip" | awk '{ print $1 }')"
linux_sha="$(cat checksums.txt | grep "linux_amd64.tar.gz" | awk '{ print $1 }')"
echo "macOS arm64: $darwin_arm_sha"
echo "macOS amd64: $darwin_intel_sha"
@@ -847,7 +782,7 @@ jobs:
# Check if a PR already exists.
pr_count="$(gh pr list --search "head:$brew_branch" --json id,closed | jq -r ".[] | select(.closed == false) | .id" | wc -l)"
if [ "$pr_count" -gt 0 ]; then
if [[ "$pr_count" > 0 ]]; then
echo "Bailing out as PR already exists" 2>&1
exit 0
fi
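The PR-count guard above also switches from `[[ "$pr_count" > 0 ]]` to `[ "$pr_count" -gt 0 ]`. Inside `[[ ]]`, `>` compares strings lexicographically; `-gt` compares numbers. A minimal sketch of the difference:

```yaml
- name: Numeric vs lexicographic comparison
  run: |
    pr_count="10"
    # String comparison: "10" sorts before "9", so this test is false.
    [[ "$pr_count" > "9" ]] || echo "lexicographic: 10 is not > 9"
    # Numeric comparison: 10 is greater than 9, so this prints.
    [ "$pr_count" -gt 9 ] && echo "numeric: 10 > 9"
```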
@@ -866,8 +801,8 @@ jobs:
-B master -H "$brew_branch" \
-t "coder $coder_version" \
-b "" \
-r "${GITHUB_ACTOR}" \
-a "${GITHUB_ACTOR}" \
-r "${{ github.actor }}" \
-a "${{ github.actor }}" \
-b "This automatic PR was triggered by the release of Coder v$coder_version"
publish-winget:
@@ -878,7 +813,7 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
@@ -888,10 +823,9 @@ jobs:
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
# If the event that triggered the build was an annotated tag (which our
# tags are supposed to be), actions/checkout has a bug where the tag in
@@ -910,7 +844,7 @@ jobs:
# The package version is the same as the tag minus the leading "v".
# The version in this output already has the leading "v" removed but
# we do it again to be safe.
$version = $env:VERSION.Trim('v')
$version = "${{ needs.release.outputs.version }}".Trim('v')
$release_assets = gh release view --repo coder/coder "v${version}" --json assets | `
ConvertFrom-Json
@@ -942,14 +876,13 @@ jobs:
# For wingetcreate. We need a real token since we're pushing a commit
# to GitHub and then making a PR in a different repo.
WINGET_GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
VERSION: ${{ needs.release.outputs.version }}
- name: Comment on PR
run: |
# wait 30 seconds
Start-Sleep -Seconds 30.0
# Find the PR that wingetcreate just made.
$version = $env:VERSION.Trim('v')
$version = "${{ needs.release.outputs.version }}".Trim('v')
$pr_list = gh pr list --repo microsoft/winget-pkgs --search "author:cdrci Coder.Coder version ${version}" --limit 1 --json number | `
ConvertFrom-Json
$pr_number = $pr_list[0].number
@@ -960,7 +893,6 @@ jobs:
# For gh CLI. We need a real token since we're commenting on a PR in a
# different repo.
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
VERSION: ${{ needs.release.outputs.version }}
# publish-sqlc pushes the latest schema to sqlc cloud.
# At present these pushes cannot be tagged, so the last push is always the latest.
@@ -971,15 +903,14 @@ jobs:
if: ${{ !inputs.dry_run }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 1
persist-credentials: false
# We need golang to run the migration main.go
- name: Setup Go
+5 -5
@@ -20,17 +20,17 @@ jobs:
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: "Checkout code"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
uses: ossf/scorecard-action@05b42c624433fc40578a4040d5cf5e36ddca8cde # v2.4.2
with:
results_file: results.sarif
results_format: sarif
@@ -39,7 +39,7 @@ jobs:
# Upload the results as artifacts.
- name: "Upload artifact"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: SARIF file
path: results.sarif
@@ -47,6 +47,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
uses: github/codeql-action/upload-sarif@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
with:
sarif_file: results.sarif
+11 -15
@@ -27,20 +27,18 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup Go
uses: ./.github/actions/setup-go
- name: Initialize CodeQL
uses: github/codeql-action/init@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
uses: github/codeql-action/init@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
with:
languages: go, javascript
@@ -50,7 +48,7 @@ jobs:
rm Makefile
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
uses: github/codeql-action/analyze@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
- name: Send Slack notification on failure
if: ${{ failure() }}
@@ -69,15 +67,14 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Setup Go
uses: ./.github/actions/setup-go
@@ -137,16 +134,15 @@ jobs:
# This environment variables forces scripts/build_docker.sh to build
# the base image tag locally instead of using the cached version from
# the registry.
CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
export CODER_IMAGE_BUILD_BASE_TAG
export CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
# We would like to use make -j here, but it doesn't work with some recent additions
# to our code generation.
make "$image_job"
echo "image=$(cat "$image_job")" >> "$GITHUB_OUTPUT"
echo "image=$(cat "$image_job")" >> $GITHUB_OUTPUT
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8
uses: aquasecurity/trivy-action@76071ef0d7ec797419534a183b498b4d6366cf37
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
@@ -154,13 +150,13 @@ jobs:
severity: "CRITICAL,HIGH"
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@014f16e7ab1402f30e7c3329d33797e7948572db # v3.29.5
uses: github/codeql-action/upload-sarif@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
with:
sarif_file: trivy-results.sarif
category: "Trivy"
- name: Upload Trivy scan results as an artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: trivy
path: trivy-results.sarif
+8 -10
@@ -18,12 +18,12 @@ jobs:
pull-requests: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: stale
uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
with:
stale-issue-label: "stale"
stale-pr-label: "stale"
@@ -44,7 +44,7 @@ jobs:
# Start with the oldest issues, always.
ascending: true
- name: "Close old issues labeled likely-no"
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
@@ -96,14 +96,12 @@ jobs:
contents: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Run delete-old-branches-action
uses: beatlabs/delete-old-branches-action@4eeeb8740ff8b3cb310296ddd6b43c3387734588 # v0.0.11
with:
@@ -120,12 +118,12 @@ jobs:
actions: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Delete PR Cleanup workflow runs
uses: Mattraks/delete-workflow-runs@5bf9a1dac5c4d041c029f0a8370ddf0c5cb5aeb7 # v2.1.0
uses: Mattraks/delete-workflow-runs@39f0bbed25d76b34de5594dceab824811479e5de # v2.0.6
with:
token: ${{ github.token }}
repository: ${{ github.repository }}
@@ -134,7 +132,7 @@ jobs:
delete_workflow_pattern: pr-cleanup.yaml
- name: Delete PR Deploy workflow skipped runs
uses: Mattraks/delete-workflow-runs@5bf9a1dac5c4d041c029f0a8370ddf0c5cb5aeb7 # v2.1.0
uses: Mattraks/delete-workflow-runs@39f0bbed25d76b34de5594dceab824811479e5de # v2.0.6
with:
token: ${{ github.token }}
repository: ${{ github.repository }}
+1 -1
@@ -19,7 +19,7 @@ jobs:
timeout-minutes: 5
steps:
- name: Start Coder workspace
uses: coder/start-workspace-action@f97a681b4cc7985c9eef9963750c7cc6ebc93a19
uses: coder/start-workspace-action@35a4608cefc7e8cc56573cae7c3b85304575cb72
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
github-username: >-
-190
@@ -1,190 +0,0 @@
name: AI Triage Automation
on:
issues:
types:
- labeled
workflow_dispatch:
inputs:
issue_url:
description: "GitHub Issue URL to process"
required: true
type: string
template_name:
description: "Coder template to use for workspace"
required: true
default: "coder"
type: string
template_preset:
description: "Template preset to use"
required: false
default: ""
type: string
prefix:
description: "Prefix for workspace name"
required: false
default: "traiage"
type: string
jobs:
traiage:
name: Triage GitHub Issue with Claude Code
runs-on: ubuntu-latest
if: github.event.label.name == 'traiage' || github.event_name == 'workflow_dispatch'
timeout-minutes: 30
env:
CODER_URL: ${{ secrets.TRAIAGE_CODER_URL }}
CODER_SESSION_TOKEN: ${{ secrets.TRAIAGE_CODER_SESSION_TOKEN }}
permissions:
contents: read
issues: write
actions: write
steps:
# This is only required for testing locally using nektos/act, so leaving commented out.
# An alternative is to use a larger or custom image.
# - name: Install Github CLI
# id: install-gh
# run: |
# (type -p wget >/dev/null || (sudo apt update && sudo apt install wget -y)) \
# && sudo mkdir -p -m 755 /etc/apt/keyrings \
# && out=$(mktemp) && wget -nv -O$out https://cli.github.com/packages/githubcli-archive-keyring.gpg \
# && cat $out | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
# && sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
# && sudo mkdir -p -m 755 /etc/apt/sources.list.d \
# && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
# && sudo apt update \
# && sudo apt install gh -y
- name: Determine Inputs
id: determine-inputs
if: always()
env:
GITHUB_ACTOR: ${{ github.actor }}
GITHUB_EVENT_ISSUE_HTML_URL: ${{ github.event.issue.html_url }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_USER_ID: ${{ github.event.sender.id }}
GITHUB_EVENT_USER_LOGIN: ${{ github.event.sender.login }}
INPUTS_ISSUE_URL: ${{ inputs.issue_url }}
INPUTS_TEMPLATE_NAME: ${{ inputs.template_name || 'coder' }}
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || ''}}
INPUTS_PREFIX: ${{ inputs.prefix || 'traiage' }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Using template name: ${INPUTS_TEMPLATE_NAME}"
echo "template_name=${INPUTS_TEMPLATE_NAME}" >> "${GITHUB_OUTPUT}"
echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"
echo "Using prefix: ${INPUTS_PREFIX}"
echo "prefix=${INPUTS_PREFIX}" >> "${GITHUB_OUTPUT}"
# For workflow_dispatch, use the actor who triggered it
# For issues events, use the issue author.
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
exit 1
fi
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"
echo "Using issue URL: ${INPUTS_ISSUE_URL}"
echo "issue_url=${INPUTS_ISSUE_URL}" >> "${GITHUB_OUTPUT}"
exit 0
elif [[ "${GITHUB_EVENT_NAME}" == "issues" ]]; then
GITHUB_USER_ID=${GITHUB_EVENT_USER_ID}
echo "Using issue author: ${GITHUB_EVENT_USER_LOGIN} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_EVENT_USER_LOGIN}" >> "${GITHUB_OUTPUT}"
echo "Using issue URL: ${GITHUB_EVENT_ISSUE_HTML_URL}"
echo "issue_url=${GITHUB_EVENT_ISSUE_HTML_URL}" >> "${GITHUB_OUTPUT}"
exit 0
else
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
exit 1
fi
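The removed workflow above resolves the triggering actor to a numeric user ID via the GitHub CLI; a minimal sketch of that lookup (token and actor are provided by the runner):

```yaml
- name: Resolve a GitHub user ID
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    # `gh api users/<login>` fetches the public user object; --jq extracts
    # the numeric id field for downstream steps.
    GITHUB_USER_ID="$(gh api "users/${GITHUB_ACTOR}" --jq '.id')"
    echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
```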
- name: Verify push access
env:
GITHUB_REPOSITORY: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
GITHUB_USERNAME: ${{ steps.determine-inputs.outputs.github_username }}
GITHUB_USER_ID: ${{ steps.determine-inputs.outputs.github_user_id }}
run: |
# Query the actor's permission on this repo
can_push="$(gh api "/repos/${GITHUB_REPOSITORY}/collaborators/${GITHUB_USERNAME}/permission" --jq '.user.permissions.push')"
if [[ "${can_push}" != "true" ]]; then
echo "::error title=Access Denied::${GITHUB_USERNAME} does not have push access to ${GITHUB_REPOSITORY}"
exit 1
fi
- name: Extract context key and description from issue
id: extract-context
env:
ISSUE_URL: ${{ steps.determine-inputs.outputs.issue_url }}
GH_TOKEN: ${{ github.token }}
run: |
issue_number="$(gh issue view "${ISSUE_URL}" --json number --jq '.number')"
context_key="gh-${issue_number}"
TASK_PROMPT=$(cat <<EOF
Fix ${ISSUE_URL}
1. Use the gh CLI to read the issue description and comments.
2. Think carefully and try to understand the root cause. If the issue is unclear or not well defined, ask me to clarify and provide more information.
3. Write a proposed implementation plan to PLAN.md for me to review before starting implementation. Your plan should use TDD and only make the minimal changes necessary to fix the root cause.
4. When I approve your plan, start working on it. If you encounter issues with the plan, ask me for clarification and update the plan as required.
5. When you have finished implementation according to the plan, commit and push your changes, and create a PR using the gh CLI for me to review.
EOF
)
echo "context_key=${context_key}" >> "${GITHUB_OUTPUT}"
{
echo "TASK_PROMPT<<EOF"
echo "${TASK_PROMPT}"
echo "EOF"
} >> "${GITHUB_OUTPUT}"
- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 1
path: ./.github/actions/create-task-action
persist-credentials: false
ref: main
repository: coder/create-task-action
- name: Create Coder Task
id: create_task
uses: ./.github/actions/create-task-action
with:
coder-url: ${{ secrets.TRAIAGE_CODER_URL }}
coder-token: ${{ secrets.TRAIAGE_CODER_SESSION_TOKEN }}
coder-organization: "default"
coder-template-name: coder
coder-template-preset: ${{ steps.determine-inputs.outputs.template_preset }}
coder-task-name-prefix: gh-coder
coder-task-prompt: ${{ steps.extract-context.outputs.task_prompt }}
github-user-id: ${{ steps.determine-inputs.outputs.github_user_id }}
github-token: ${{ github.token }}
github-issue-url: ${{ steps.determine-inputs.outputs.issue_url }}
comment-on-issue: ${{ startsWith(steps.determine-inputs.outputs.issue_url, format('{0}/{1}', github.server_url, github.repository)) }}
- name: Write outputs
env:
TASK_CREATED: ${{ steps.create_task.outputs.task-created }}
TASK_NAME: ${{ steps.create_task.outputs.task-name }}
TASK_URL: ${{ steps.create_task.outputs.task-url }}
run: |
{
echo "**Task created:** ${TASK_CREATED}"
echo "**Task name:** ${TASK_NAME}"
echo "**Task URL**: ${TASK_URL}"
} >> "${GITHUB_STEP_SUMMARY}"
+1 -4
@@ -1,6 +1,5 @@
[default]
extend-ignore-identifiers-re = ["gho_.*"]
extend-ignore-re = ["(#|//)\\s*spellchecker:ignore-next-line\\n.*"]
[default.extend-identifiers]
alog = "alog"
@@ -9,7 +8,6 @@ IST = "IST"
MacOS = "macOS"
AKS = "AKS"
O_WRONLY = "O_WRONLY"
AIBridge = "AI Bridge"
[default.extend-words]
AKS = "AKS"
@@ -30,7 +28,6 @@ HELO = "HELO"
LKE = "LKE"
byt = "byt"
typ = "typ"
Inferrable = "Inferrable"
[files]
extend-exclude = [
@@ -50,5 +47,5 @@ extend-exclude = [
"provisioner/terraform/testdata/**",
# notifications' golden files confuse the detector because of quoted-printable encoding
"coderd/notifications/testdata/**",
"agent/agentcontainers/testdata/devcontainercli/**",
"agent/agentcontainers/testdata/devcontainercli/**"
]
+4 -9
@@ -21,17 +21,15 @@ jobs:
pull-requests: write # required to post PR review comments by the action
steps:
- name: Harden Runner
uses: step-security/harden-runner@95d9a5deda9de15063e7595e9719c11c38c90ae2 # v2.13.2
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Check Markdown links
uses: umbrelladocs/action-linkspector@652f85bc57bb1e7d4327260decc10aa68f7694c3 # v1.4.0
uses: umbrelladocs/action-linkspector@e2ccef58c4b9eb89cd71ee23a8629744bba75aa6 # v1.3.5
id: markdown-link-check
# checks all markdown files from /docs including all subfolders
with:
@@ -43,10 +41,7 @@ jobs:
- name: Send Slack notification
if: failure() && github.event_name == 'schedule'
run: |
curl \
-X POST \
-H 'Content-type: application/json' \
-d '{"msg":"Broken links found in the documentation. Please check the logs at '"${LOGS_URL}"'"}' "${{ secrets.DOCS_LINK_SLACK_WEBHOOK }}"
curl -X POST -H 'Content-type: application/json' -d '{"msg":"Broken links found in the documentation. Please check the logs at ${{ env.LOGS_URL }}"}' ${{ secrets.DOCS_LINK_SLACK_WEBHOOK }}
echo "Sent Slack notification"
env:
LOGS_URL: https://github.com/coder/coder/actions/runs/${{ github.run_id }}
-4
@@ -1,4 +0,0 @@
rules:
cache-poisoning:
ignore:
- "ci.yaml:184"
-8
@@ -12,9 +12,6 @@ node_modules/
vendor/
yarn-error.log
# Test output files
test-output/
# VSCode settings.
**/.vscode/*
# Allow VSCode recommendations and default settings in project root.
@@ -89,8 +86,3 @@ result
__debug_bin*
**/.claude/settings.local.json
/.env
# Ignore plans written by AI agents.
PLAN.md
+2 -11
@@ -169,16 +169,6 @@ linters-settings:
- name: var-declaration
- name: var-naming
- name: waitgroup-by-value
usetesting:
# Only os-setenv is enabled because we migrated to usetesting from another linter that
# only covered os-setenv.
os-setenv: true
os-create-temp: false
os-mkdir-temp: false
os-temp-dir: false
os-chdir: false
context-background: false
context-todo: false
# irrelevant as of Go v1.22: https://go.dev/blog/loopvar-preview
govet:
@@ -191,6 +181,7 @@ linters-settings:
issues:
exclude-dirs:
- coderd/database/dbmem
- node_modules
- .git
@@ -262,6 +253,7 @@ linters:
# - wastedassign
- staticcheck
- tenv
# In Go, it's possible for a package to test its internal functionality
# without testing any exported functions. This is enabled to promote
# decomposing a package before testing its internals. A function caller
@@ -274,5 +266,4 @@ linters:
- typecheck
- unconvert
- unused
- usetesting
- dupl
-36
@@ -1,36 +0,0 @@
{
"mcpServers": {
"go-language-server": {
"type": "stdio",
"command": "go",
"args": [
"run",
"github.com/isaacphi/mcp-language-server@latest",
"-workspace",
"./",
"-lsp",
"go",
"--",
"run",
"golang.org/x/tools/gopls@latest"
],
"env": {}
},
"typescript-language-server": {
"type": "stdio",
"command": "go",
"args": [
"run",
"github.com/isaacphi/mcp-language-server@latest",
"-workspace",
"./site/",
"-lsp",
"pnpx",
"--",
"typescript-language-server",
"--stdio"
],
"env": {}
}
}
}
+2 -4
@@ -49,18 +49,16 @@
"[javascript][javascriptreact][json][jsonc][typescript][typescriptreact]": {
"editor.defaultFormatter": "biomejs.biome",
"editor.codeActionsOnSave": {
"source.fixAll.biome": "explicit"
"quickfix.biome": "explicit"
// "source.organizeImports.biome": "explicit"
}
},
"tailwindCSS.classFunctions": ["cva", "cn"],
"[css][html][markdown][yaml]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"typos.config": ".github/workflows/typos.toml",
"[markdown]": {
"editor.defaultFormatter": "DavidAnson.vscode-markdownlint"
},
"biome.lsp.bin": "site/node_modules/.bin/biome"
}
}
-1
@@ -1 +0,0 @@
CLAUDE.md
+72 -125
@@ -1,159 +1,106 @@
# Coder Development Guidelines
You are an experienced, pragmatic software engineer. You don't over-engineer a solution when a simple one is possible.
Rule #1: If you want exception to ANY rule, YOU MUST STOP and get explicit permission first. BREAKING THE LETTER OR SPIRIT OF THE RULES IS FAILURE.
Read [cursor rules](.cursorrules).
## Foundational rules
## Build/Test/Lint Commands
- Doing it right is better than doing it fast. You are not in a rush. NEVER skip steps or take shortcuts.
- Tedious, systematic work is often the correct solution. Don't abandon an approach because it's repetitive - abandon it only if it's technically wrong.
- Honesty is a core value.
### Main Commands
## Our relationship
- `make build` or `make build-fat` - Build all "fat" binaries (includes "server" functionality)
- `make build-slim` - Build "slim" binaries
- `make test` - Run Go tests
- `make test RUN=TestFunctionName` or `go test -v ./path/to/package -run TestFunctionName` - Test single
- `make test-postgres` - Run tests with Postgres database
- `make test-race` - Run tests with Go race detector
- `make test-e2e` - Run end-to-end tests
- `make lint` - Run all linters
- `make fmt` - Format all code
- `make gen` - Generates mocks, database queries and other auto-generated files
- Act as a critical peer reviewer. Your job is to disagree with me when I'm wrong, not to please me. Prioritize accuracy and reasoning over agreement.
- YOU MUST speak up immediately when you don't know something or we're in over our heads
- YOU MUST call out bad ideas, unreasonable expectations, and mistakes - I depend on this
- NEVER be agreeable just to be nice - I NEED your HONEST technical judgment
- NEVER write the phrase "You're absolutely right!" You are not a sycophant. We're working together because I value your opinion. Do not agree with me unless you can justify it with evidence or reasoning.
- YOU MUST ALWAYS STOP and ask for clarification rather than making assumptions.
- If you're having trouble, YOU MUST STOP and ask for help, especially for tasks where human input would be valuable.
- When you disagree with my approach, YOU MUST push back. Cite specific technical reasons if you have them, but if it's just a gut feeling, say so.
- If you're uncomfortable pushing back out loud, just say "Houston, we have a problem". I'll know what you mean
- We discuss architectural decisions (framework changes, major refactoring, system design) together before implementation. Routine fixes and clear implementations don't need discussion.
### Frontend Commands (site directory)
## Proactiveness
- `pnpm build` - Build frontend
- `pnpm dev` - Run development server
- `pnpm check` - Run code checks
- `pnpm format` - Format frontend code
- `pnpm lint` - Lint frontend code
- `pnpm test` - Run frontend tests
When asked to do something, just do it - including obvious follow-up actions needed to complete the task properly.
Only pause to ask for confirmation when:
## Code Style Guidelines
- Multiple valid approaches exist and the choice matters
- The action would delete or significantly restructure existing code
- You genuinely don't understand what's being asked
- Your partner asked a question (answer the question, don't jump to implementation)
### Go
@.claude/docs/WORKFLOWS.md
@package.json
- Follow [Effective Go](https://go.dev/doc/effective_go) and [Go's Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
- Use `gofumpt` for formatting
- Create packages when used during implementation
- Validate abstractions against implementations
## Essential Commands
### Error Handling
| Task | Command | Notes |
|-------------------|--------------------------|----------------------------------|
| **Development** | `./scripts/develop.sh` | ⚠️ Don't use manual build |
| **Build** | `make build` | Fat binaries (includes server) |
| **Build Slim** | `make build-slim` | Slim binaries |
| **Test** | `make test` | Full test suite |
| **Test Single** | `make test RUN=TestName` | Faster than full suite |
| **Test Postgres** | `make test-postgres` | Run tests with Postgres database |
| **Test Race** | `make test-race` | Run tests with Go race detector |
| **Lint** | `make lint` | Always run after changes |
| **Generate** | `make gen` | After database changes |
| **Format** | `make fmt` | Auto-format code |
| **Clean** | `make clean` | Clean build artifacts |
- Use descriptive error messages
- Wrap errors with context
- Propagate errors appropriately
- Use proper error types
- (`xerrors.Errorf("failed to X: %w", err)`)
### Documentation Commands
### Naming
- `pnpm run format-docs` - Format markdown tables in docs
- `pnpm run lint-docs` - Lint and fix markdown files
- `pnpm run storybook` - Run Storybook (from site directory)
- Use clear, descriptive names
- Abbreviate only when obvious
- Follow Go and TypeScript naming conventions
## Critical Patterns
### Comments
### Database Changes (ALWAYS FOLLOW)
- Document exported functions, types, and non-obvious logic
- Follow JSDoc format for TypeScript
- Use godoc format for Go code
1. Modify `coderd/database/queries/*.sql` files
2. Run `make gen`
3. If audit errors: update `enterprise/audit/table.go`
4. Run `make gen` again
## Commit Style
### LSP Navigation (USE FIRST)
- Follow [Conventional Commits 1.0.0](https://www.conventionalcommits.org/en/v1.0.0/)
- Format: `type(scope): message`
- Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
- Keep message titles concise (~70 characters)
- Use imperative, present tense in commit titles
#### Go LSP (for backend code)
## Database queries
- **Find definitions**: `mcp__go-language-server__definition symbolName`
- **Find references**: `mcp__go-language-server__references symbolName`
- **Get type info**: `mcp__go-language-server__hover filePath line column`
- **Rename symbol**: `mcp__go-language-server__rename_symbol filePath line column newName`
#### TypeScript LSP (for frontend code in site/)
- **Find definitions**: `mcp__typescript-language-server__definition symbolName`
- **Find references**: `mcp__typescript-language-server__references symbolName`
- **Get type info**: `mcp__typescript-language-server__hover filePath line column`
- **Rename symbol**: `mcp__typescript-language-server__rename_symbol filePath line column newName`
### OAuth2 Error Handling
```go
// OAuth2-compliant error responses
writeOAuth2Error(ctx, rw, http.StatusBadRequest, "invalid_grant", "description")
```
### Authorization Context
```go
// Public endpoints needing system access
app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
// Authenticated endpoints with user context
app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)
```
## Quick Reference
### Full workflows available in imported WORKFLOWS.md
### New Feature Checklist
- [ ] Run `git pull` to ensure latest code
- [ ] Check if feature touches database - you'll need migrations
- [ ] Check if feature touches audit logs - update `enterprise/audit/table.go`
- MUST DO! Any changes to database - adding queries, modifying queries should be done in the `coderd\database\queries\*.sql` files. Use `make gen` to generate necessary changes after.
- MUST DO! Queries are grouped in files relating to context - e.g. `prebuilds.sql`, `users.sql`, `provisionerjobs.sql`.
- After making changes to any `coderd\database\queries\*.sql` files you must run `make gen` to generate respective ORM changes.
## Architecture
- **coderd**: Main API service
- **provisionerd**: Infrastructure provisioning
- **Agents**: Workspace services (SSH, port forwarding)
- **Database**: PostgreSQL with `dbauthz` authorization
### Core Components
## Testing
- **coderd**: Main API service connecting workspaces, provisioners, and users
- **provisionerd**: Execution context for infrastructure-modifying providers
- **Agents**: Services in remote workspaces providing features like SSH and port forwarding
- **Workspaces**: Cloud resources defined by Terraform
### Race Condition Prevention
## Sub-modules
- Use unique identifiers: `fmt.Sprintf("test-client-%s-%d", t.Name(), time.Now().UnixNano())`
- Never use hardcoded names in concurrent tests
### Template System
### OAuth2 Testing
- Templates define infrastructure for workspaces using Terraform
- Environment variables pass context between Coder and templates
- Official modules extend development environments
- Full suite: `./scripts/oauth2/test-mcp-oauth2.sh`
- Manual testing: `./scripts/oauth2/test-manual-flow.sh`
### RBAC System
### Timing Issues
- Permissions defined at site, organization, and user levels
- Object-Action model protects resources
- Built-in roles: owner, member, auditor, templateAdmin
- Permission format: `<sign>?<level>.<object>.<id>.<action>`
NEVER use `time.Sleep` to mitigate timing issues. If a problem seems to call
for `time.Sleep`, read through https://github.com/coder/quartz, and specifically its [README](https://github.com/coder/quartz/blob/main/README.md), to better understand how to handle timing issues.
### Database
## Code Style
- PostgreSQL 13+ recommended for production
- Migrations managed with `migrate`
- Database authorization through `dbauthz` package
### Detailed guidelines in imported WORKFLOWS.md
## Frontend
- Follow [Uber Go Style Guide](https://github.com/uber-go/guide/blob/master/style.md)
- Commit format: `type(scope): message`
The frontend is contained in the site folder.
## Detailed Development Guides
@.claude/docs/OAUTH2.md
@.claude/docs/TESTING.md
@.claude/docs/TROUBLESHOOTING.md
@.claude/docs/DATABASE.md
## Common Pitfalls
1. **Audit table errors** → Update `enterprise/audit/table.go`
2. **OAuth2 errors** → Return RFC-compliant format
3. **Race conditions** → Use unique test identifiers
4. **Missing newlines** → Ensure files end with newline
---
*This file stays lean and actionable. Detailed workflows and explanations are imported automatically.*
For building Frontend refer to [this document](docs/about/contributing/frontend.md)
+4 -27
@@ -1,31 +1,8 @@
# These APIs are versioned, so any changes need to be carefully reviewed for
# whether to bump API major or minor versions.
# These APIs are versioned, so any changes need to be carefully reviewed for whether
# to bump API major or minor versions.
agent/proto/ @spikecurtis @johnstcn
provisionerd/proto/ @spikecurtis @johnstcn
provisionersdk/proto/ @spikecurtis @johnstcn
tailnet/proto/ @spikecurtis @johnstcn
vpn/vpn.proto @spikecurtis @johnstcn
vpn/version.go @spikecurtis @johnstcn
# This caching code is particularly tricky, and one must be very careful when
# altering it.
coderd/files/ @aslilac
coderd/dynamicparameters/ @Emyrk
coderd/rbac/ @Emyrk
# Mainly dependent on coder/guts, which is maintained by @Emyrk
scripts/apitypings/ @Emyrk
scripts/gensite/ @aslilac
# The blood and guts of the autostop algorithm, which is quite complex and
# requires elite ball knowledge of most of the scheduling code to make changes
# without inadvertently affecting other parts of the codebase.
coderd/schedule/autostop.go @deansheather @DanielleMaywood
# Usage tracking code requires intimate knowledge of Tallyman and Metronome, as
# well as guidance from revenue.
coderd/usage/ @deansheather @spikecurtis
enterprise/coderd/usage/ @deansheather @spikecurtis
.github/ @jdomeracki-coder
provisionerd/proto/ @spikecurtis @johnstcn
provisionersdk/proto/ @spikecurtis @johnstcn
+8 -164
@@ -252,10 +252,6 @@ $(CODER_ALL_BINARIES): go.mod go.sum \
fi
cp "$@" "./site/out/bin/coder-$$os-$$arch$$dot_ext"
if [[ "$${CODER_SIGN_GPG:-0}" == "1" ]]; then
cp "$@.asc" "./site/out/bin/coder-$$os-$$arch$$dot_ext.asc"
fi
fi
# This task builds Coder Desktop dylibs
@@ -460,31 +456,16 @@ fmt: fmt/ts fmt/go fmt/terraform fmt/shfmt fmt/biome fmt/markdown
.PHONY: fmt
fmt/go:
ifdef FILE
# Format single file
if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.go ]] && ! grep -q "DO NOT EDIT" "$(FILE)"; then \
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/go$(RESET) $(FILE)"; \
go run mvdan.cc/gofumpt@v0.8.0 -w -l "$(FILE)"; \
fi
else
go mod tidy
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/go$(RESET)"
# VS Code users should check out
# https://github.com/mvdan/gofumpt#visual-studio-code
find . $(FIND_EXCLUSIONS) -type f -name '*.go' -print0 | \
xargs -0 grep -E --null -L '^// Code generated .* DO NOT EDIT\.$$' | \
xargs -0 go run mvdan.cc/gofumpt@v0.8.0 -w -l
endif
xargs -0 grep --null -L "DO NOT EDIT" | \
xargs -0 go run mvdan.cc/gofumpt@v0.4.0 -w -l
.PHONY: fmt/go
fmt/ts: site/node_modules/.installed
ifdef FILE
# Format single TypeScript/JavaScript file
if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.ts ]] || [[ "$(FILE)" == *.tsx ]] || [[ "$(FILE)" == *.js ]] || [[ "$(FILE)" == *.jsx ]]; then \
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/ts$(RESET) $(FILE)"; \
(cd site/ && pnpm exec biome format --write "../$(FILE)"); \
fi
else
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/ts$(RESET)"
cd site
# Avoid writing files in CI to reduce file write activity
@@ -493,17 +474,9 @@ ifdef CI
else
pnpm run check:fix
endif
endif
.PHONY: fmt/ts
fmt/biome: site/node_modules/.installed
ifdef FILE
# Format single file with biome
if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.ts ]] || [[ "$(FILE)" == *.tsx ]] || [[ "$(FILE)" == *.js ]] || [[ "$(FILE)" == *.jsx ]]; then \
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/biome$(RESET) $(FILE)"; \
(cd site/ && pnpm exec biome format --write "../$(FILE)"); \
fi
else
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/biome$(RESET)"
cd site/
# Avoid writing files in CI to reduce file write activity
@@ -512,30 +485,14 @@ ifdef CI
else
pnpm run format
endif
endif
.PHONY: fmt/biome
fmt/terraform: $(wildcard *.tf)
ifdef FILE
# Format single Terraform file
if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.tf ]] || [[ "$(FILE)" == *.tfvars ]]; then \
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/terraform$(RESET) $(FILE)"; \
terraform fmt "$(FILE)"; \
fi
else
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/terraform$(RESET)"
terraform fmt -recursive
endif
.PHONY: fmt/terraform
fmt/shfmt: $(SHELL_SRC_FILES)
ifdef FILE
# Format single shell script
if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.sh ]]; then \
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/shfmt$(RESET) $(FILE)"; \
shfmt -w "$(FILE)"; \
fi
else
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/shfmt$(RESET)"
# Only do diff check in CI, errors on diff.
ifdef CI
@@ -543,25 +500,14 @@ ifdef CI
else
shfmt -w $(SHELL_SRC_FILES)
endif
endif
.PHONY: fmt/shfmt
fmt/markdown: node_modules/.installed
ifdef FILE
# Format single markdown file
if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.md ]]; then \
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/markdown$(RESET) $(FILE)"; \
pnpm exec markdown-table-formatter "$(FILE)"; \
fi
else
echo "$(GREEN)==>$(RESET) $(BOLD)fmt/markdown$(RESET)"
pnpm format-docs
endif
.PHONY: fmt/markdown
# Note: we don't run zizmor in the lint target because it takes a while. CI
# runs it explicitly.
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown
.PHONY: lint
lint/site-icons:
@@ -578,7 +524,6 @@ lint/go:
./scripts/check_codersdk_imports.sh
linter_ver=$(shell egrep -o 'GOLANGCI_LINT_VERSION=\S+' dogfood/coder/Dockerfile | cut -d '=' -f 2)
go run github.com/golangci/golangci-lint/cmd/golangci-lint@v$$linter_ver run
go run github.com/coder/paralleltestctx/cmd/paralleltestctx@v0.0.1 -custom-funcs="testutil.Context" ./...
.PHONY: lint/go
lint/examples:
@@ -600,31 +545,13 @@ lint/markdown: node_modules/.installed
pnpm lint-docs
.PHONY: lint/markdown
lint/actions: lint/actions/actionlint lint/actions/zizmor
.PHONY: lint/actions
lint/actions/actionlint:
go run github.com/rhysd/actionlint/cmd/actionlint@v1.7.7
.PHONY: lint/actions/actionlint
lint/actions/zizmor:
./scripts/zizmor.sh \
--strict-collection \
--persona=regular \
.
.PHONY: lint/actions/zizmor
# Verify api_key_scope enum contains all RBAC <resource>:<action> values.
lint/check-scopes: coderd/database/dump.sql
go run ./scripts/check-scopes
.PHONY: lint/check-scopes
# All files generated by the database should be added here, and this can be used
# as a target for jobs that need to run after the database is generated.
DB_GEN_FILES := \
coderd/database/dump.sql \
coderd/database/querier.go \
coderd/database/unique_constraint.go \
coderd/database/dbmem/dbmem.go \
coderd/database/dbmetrics/dbmetrics.go \
coderd/database/dbauthz/dbauthz.go \
coderd/database/dbmock/dbmock.go
@@ -635,24 +562,16 @@ TAILNETTEST_MOCKS := \
tailnet/tailnettest/workspaceupdatesprovidermock.go \
tailnet/tailnettest/subscriptionmock.go
AIBRIDGED_MOCKS := \
enterprise/aibridged/aibridgedmock/clientmock.go \
enterprise/aibridged/aibridgedmock/poolmock.go
GEN_FILES := \
tailnet/proto/tailnet.pb.go \
agent/proto/agent.pb.go \
agent/agentsocket/proto/agentsocket.pb.go \
provisionersdk/proto/provisioner.pb.go \
provisionerd/proto/provisionerd.pb.go \
vpn/vpn.pb.go \
enterprise/aibridged/proto/aibridged.pb.go \
$(DB_GEN_FILES) \
$(SITE_GEN_FILES) \
coderd/rbac/object_gen.go \
codersdk/rbacresources_gen.go \
coderd/rbac/scopes_constants_gen.go \
codersdk/apikey_scopes_gen.go \
docs/admin/integrations/prometheus.md \
docs/reference/cli/index.md \
docs/admin/security/audit-logs.md \
@@ -665,9 +584,7 @@ GEN_FILES := \
coderd/database/pubsub/psmock/psmock.go \
agent/agentcontainers/acmock/acmock.go \
agent/agentcontainers/dcspec/dcspec_gen.go \
coderd/httpmw/loggermw/loggermock/loggermock.go \
codersdk/workspacesdk/agentconnmock/agentconnmock.go \
$(AIBRIDGED_MOCKS)
coderd/httpmw/loggermw/loggermock/loggermock.go
# all gen targets should be added here and to gen/mark-fresh
gen: gen/db gen/golden-files $(GEN_FILES)
@@ -677,7 +594,6 @@ gen/db: $(DB_GEN_FILES)
.PHONY: gen/db
gen/golden-files: \
agent/unit/testdata/.gen-golden \
cli/testdata/.gen-golden \
coderd/.gen-golden \
coderd/notifications/.gen-golden \
@@ -697,15 +613,12 @@ gen/mark-fresh:
agent/proto/agent.pb.go \
provisionersdk/proto/provisioner.pb.go \
provisionerd/proto/provisionerd.pb.go \
agent/agentsocket/proto/agentsocket.pb.go \
vpn/vpn.pb.go \
enterprise/aibridged/proto/aibridged.pb.go \
coderd/database/dump.sql \
$(DB_GEN_FILES) \
site/src/api/typesGenerated.ts \
coderd/rbac/object_gen.go \
codersdk/rbacresources_gen.go \
coderd/rbac/scopes_constants_gen.go \
site/src/api/rbacresourcesGenerated.ts \
site/src/api/countriesGenerated.ts \
docs/admin/integrations/prometheus.md \
@@ -721,8 +634,6 @@ gen/mark-fresh:
agent/agentcontainers/acmock/acmock.go \
agent/agentcontainers/dcspec/dcspec_gen.go \
coderd/httpmw/loggermw/loggermock/loggermock.go \
codersdk/workspacesdk/agentconnmock/agentconnmock.go \
$(AIBRIDGED_MOCKS) \
"
for file in $$files; do
@@ -766,14 +677,6 @@ coderd/httpmw/loggermw/loggermock/loggermock.go: coderd/httpmw/loggermw/logger.g
go generate ./coderd/httpmw/loggermw/loggermock/
touch "$@"
codersdk/workspacesdk/agentconnmock/agentconnmock.go: codersdk/workspacesdk/agentconn.go
go generate ./codersdk/workspacesdk/agentconnmock/
touch "$@"
$(AIBRIDGED_MOCKS): enterprise/aibridged/client.go enterprise/aibridged/pool.go
go generate ./enterprise/aibridged/aibridgedmock/
touch "$@"
agent/agentcontainers/dcspec/dcspec_gen.go: \
node_modules/.installed \
agent/agentcontainers/dcspec/devContainer.base.schema.json \
@@ -802,14 +705,6 @@ agent/proto/agent.pb.go: agent/proto/agent.proto
--go-drpc_opt=paths=source_relative \
./agent/proto/agent.proto
agent/agentsocket/proto/agentsocket.pb.go: agent/agentsocket/proto/agentsocket.proto
protoc \
--go_out=. \
--go_opt=paths=source_relative \
--go-drpc_out=. \
--go-drpc_opt=paths=source_relative \
./agent/agentsocket/proto/agentsocket.proto
provisionersdk/proto/provisioner.pb.go: provisionersdk/proto/provisioner.proto
protoc \
--go_out=. \
@@ -832,14 +727,6 @@ vpn/vpn.pb.go: vpn/vpn.proto
--go_opt=paths=source_relative \
./vpn/vpn.proto
enterprise/aibridged/proto/aibridged.pb.go: enterprise/aibridged/proto/aibridged.proto
protoc \
--go_out=. \
--go_opt=paths=source_relative \
--go-drpc_out=. \
--go-drpc_opt=paths=source_relative \
./enterprise/aibridged/proto/aibridged.proto
site/src/api/typesGenerated.ts: site/node_modules/.installed $(wildcard scripts/apitypings/*) $(shell find ./codersdk $(FIND_EXCLUSIONS) -type f -name '*.go')
# -C sets the directory for the go run command
go run -C ./scripts/apitypings main.go > $@
@@ -866,15 +753,6 @@ coderd/rbac/object_gen.go: scripts/typegen/rbacobject.gotmpl scripts/typegen/mai
rmdir -v "$$tempdir"
touch "$@"
coderd/rbac/scopes_constants_gen.go: scripts/typegen/scopenames.gotmpl scripts/typegen/main.go coderd/rbac/policy/policy.go
# Generate typed low-level ScopeName constants from RBACPermissions
# Write to a temp file first to avoid truncating the package during build
# since the generator imports the rbac package.
tempfile=$(shell mktemp /tmp/scopes_constants_gen.XXXXXX)
go run ./scripts/typegen/main.go rbac scopenames > "$$tempfile"
mv -v "$$tempfile" coderd/rbac/scopes_constants_gen.go
touch "$@"
codersdk/rbacresources_gen.go: scripts/typegen/codersdk.gotmpl scripts/typegen/main.go coderd/rbac/object.go coderd/rbac/policy/policy.go
# Do no overwrite codersdk/rbacresources_gen.go directly, as it would make the file empty, breaking
# the `codersdk` package and any parallel build targets.
@@ -882,12 +760,6 @@ codersdk/rbacresources_gen.go: scripts/typegen/codersdk.gotmpl scripts/typegen/m
mv /tmp/rbacresources_gen.go codersdk/rbacresources_gen.go
touch "$@"
codersdk/apikey_scopes_gen.go: scripts/apikeyscopesgen/main.go coderd/rbac/scopes_catalog.go coderd/rbac/scopes.go
# Generate SDK constants for external API key scopes.
go run ./scripts/apikeyscopesgen > /tmp/apikey_scopes_gen.go
mv /tmp/apikey_scopes_gen.go codersdk/apikey_scopes_gen.go
touch "$@"
site/src/api/rbacresourcesGenerated.ts: site/node_modules/.installed scripts/typegen/codersdk.gotmpl scripts/typegen/main.go coderd/rbac/object.go coderd/rbac/policy/policy.go
go run scripts/typegen/main.go rbac typescript > "$@"
(cd site/ && pnpm exec biome format --write src/api/rbacresourcesGenerated.ts)
@@ -963,10 +835,6 @@ clean/golden-files:
-type f -name '*.golden' -delete
.PHONY: clean/golden-files
agent/unit/testdata/.gen-golden: $(wildcard agent/unit/testdata/*.golden) $(GO_SRC_FILES) $(wildcard agent/unit/*_test.go)
TZ=UTC go test ./agent/unit -run="TestGraph" -update
touch "$@"
cli/testdata/.gen-golden: $(wildcard cli/testdata/*.golden) $(wildcard cli/*.tpl) $(GO_SRC_FILES) $(wildcard cli/*_test.go)
TZ=UTC go test ./cli -run="Test(CommandHelp|ServerYAML|ErrorExamples|.*Golden)" -update
touch "$@"
@@ -1016,31 +884,12 @@ else
GOTESTSUM_RETRY_FLAGS :=
endif
# default to 8x8 parallelism to avoid overwhelming our workspaces. Hopefully we can remove these defaults
# when we get our test suite's resource utilization under control.
GOTEST_FLAGS := -v -p $(or $(TEST_NUM_PARALLEL_PACKAGES),"8") -parallel=$(or $(TEST_NUM_PARALLEL_TESTS),"8")
# The most common use is to set TEST_COUNT=1 to avoid Go's test cache.
ifdef TEST_COUNT
GOTEST_FLAGS += -count=$(TEST_COUNT)
endif
ifdef TEST_SHORT
GOTEST_FLAGS += -short
endif
ifdef RUN
GOTEST_FLAGS += -run $(RUN)
endif
TEST_PACKAGES ?= ./...
test:
$(GIT_FLAGS) gotestsum --format standard-quiet $(GOTESTSUM_RETRY_FLAGS) --packages="$(TEST_PACKAGES)" -- $(GOTEST_FLAGS)
$(GIT_FLAGS) gotestsum --format standard-quiet $(GOTESTSUM_RETRY_FLAGS) --packages="./..." -- -v -short -count=1 $(if $(RUN),-run $(RUN))
.PHONY: test
test-cli:
$(MAKE) test TEST_PACKAGES="./cli..."
$(GIT_FLAGS) gotestsum --format standard-quiet $(GOTESTSUM_RETRY_FLAGS) --packages="./cli/..." -- -v -short -count=1
.PHONY: test-cli
# sqlc-cloud-is-setup will fail if no SQLc auth token is set. Use this as a
@@ -1076,7 +925,7 @@ sqlc-vet: test-postgres-docker
test-postgres: test-postgres-docker
# The postgres test is prone to failure, so we limit parallelism for
# more consistent execution.
$(GIT_FLAGS) gotestsum \
$(GIT_FLAGS) DB=ci gotestsum \
--junitfile="gotests.xml" \
--jsonfile="gotests.json" \
$(GOTESTSUM_RETRY_FLAGS) \
@@ -1192,8 +1041,3 @@ endif
dogfood/coder/nix.hash: flake.nix flake.lock
sha256sum flake.nix flake.lock >./dogfood/coder/nix.hash
# Count the number of test databases created per test package.
count-test-databases:
PGPASSWORD=postgres psql -h localhost -U postgres -d coder_testing -P pager=off -c 'SELECT test_package, count(*) as count from test_databases GROUP BY test_package ORDER BY count DESC'
.PHONY: count-test-databases
+85 -128 agent/agent.go
@@ -8,7 +8,6 @@ import (
"fmt"
"hash/fnv"
"io"
"maps"
"net"
"net/http"
"net/netip"
@@ -41,7 +40,6 @@ import (
"github.com/coder/coder/v2/agent/agentcontainers"
"github.com/coder/coder/v2/agent/agentexec"
"github.com/coder/coder/v2/agent/agentscripts"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agentssh"
"github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/agent/proto/resourcesmonitor"
@@ -72,21 +70,17 @@ const (
)
type Options struct {
Filesystem afero.Fs
LogDir string
TempDir string
ScriptDataDir string
Client Client
ReconnectingPTYTimeout time.Duration
EnvironmentVariables map[string]string
Logger slog.Logger
// IgnorePorts tells the api handler which ports to ignore when
// listing all listening ports. This is helpful to hide ports that
// are used by the agent, which the user does not care about.
IgnorePorts map[int]string
// ListeningPortsGetter is used to get the list of listening ports. Only
// tests should set this. If unset, a default that queries the OS will be used.
ListeningPortsGetter ListeningPortsGetter
Filesystem afero.Fs
LogDir string
TempDir string
ScriptDataDir string
ExchangeToken func(ctx context.Context) (string, error)
Client Client
ReconnectingPTYTimeout time.Duration
EnvironmentVariables map[string]string
Logger slog.Logger
IgnorePorts map[int]string
PortCacheDuration time.Duration
SSHMaxTimeout time.Duration
TailnetListenPort uint16
Subsystems []codersdk.AgentSubsystem
@@ -98,16 +92,13 @@ type Options struct {
Devcontainers bool
DevcontainerAPIOptions []agentcontainers.Option // Enable Devcontainers for these to be effective.
Clock quartz.Clock
SocketServerEnabled bool
SocketPath string // Path for the agent socket server socket
}
type Client interface {
ConnectRPC26(ctx context.Context) (
proto.DRPCAgentClient26, tailnetproto.DRPCTailnetClient26, error,
)
tailnet.DERPMapRewriter
agentsdk.RefreshableSessionTokenProvider
RewriteDERPMap(derpMap *tailcfg.DERPMap)
}
type Agent interface {
@@ -140,13 +131,20 @@ func New(options Options) Agent {
}
options.ScriptDataDir = options.TempDir
}
if options.ExchangeToken == nil {
options.ExchangeToken = func(_ context.Context) (string, error) {
return "", nil
}
}
if options.ReportMetadataInterval == 0 {
options.ReportMetadataInterval = time.Second
}
if options.ServiceBannerRefreshInterval == 0 {
options.ServiceBannerRefreshInterval = 2 * time.Minute
}
if options.PortCacheDuration == 0 {
options.PortCacheDuration = 1 * time.Second
}
if options.Clock == nil {
options.Clock = quartz.NewReal()
}
@@ -160,38 +158,31 @@ func New(options Options) Agent {
options.Execer = agentexec.DefaultExecer
}
if options.ListeningPortsGetter == nil {
options.ListeningPortsGetter = &osListeningPortsGetter{
cacheDuration: 1 * time.Second,
}
}
hardCtx, hardCancel := context.WithCancel(context.Background())
gracefulCtx, gracefulCancel := context.WithCancel(hardCtx)
a := &agent{
clock: options.Clock,
tailnetListenPort: options.TailnetListenPort,
reconnectingPTYTimeout: options.ReconnectingPTYTimeout,
logger: options.Logger,
gracefulCtx: gracefulCtx,
gracefulCancel: gracefulCancel,
hardCtx: hardCtx,
hardCancel: hardCancel,
coordDisconnected: make(chan struct{}),
environmentVariables: options.EnvironmentVariables,
client: options.Client,
filesystem: options.Filesystem,
logDir: options.LogDir,
tempDir: options.TempDir,
scriptDataDir: options.ScriptDataDir,
lifecycleUpdate: make(chan struct{}, 1),
lifecycleReported: make(chan codersdk.WorkspaceAgentLifecycle, 1),
lifecycleStates: []agentsdk.PostLifecycleRequest{{State: codersdk.WorkspaceAgentLifecycleCreated}},
reportConnectionsUpdate: make(chan struct{}, 1),
listeningPortsHandler: listeningPortsHandler{
getter: options.ListeningPortsGetter,
ignorePorts: maps.Clone(options.IgnorePorts),
},
clock: options.Clock,
tailnetListenPort: options.TailnetListenPort,
reconnectingPTYTimeout: options.ReconnectingPTYTimeout,
logger: options.Logger,
gracefulCtx: gracefulCtx,
gracefulCancel: gracefulCancel,
hardCtx: hardCtx,
hardCancel: hardCancel,
coordDisconnected: make(chan struct{}),
environmentVariables: options.EnvironmentVariables,
client: options.Client,
exchangeToken: options.ExchangeToken,
filesystem: options.Filesystem,
logDir: options.LogDir,
tempDir: options.TempDir,
scriptDataDir: options.ScriptDataDir,
lifecycleUpdate: make(chan struct{}, 1),
lifecycleReported: make(chan codersdk.WorkspaceAgentLifecycle, 1),
lifecycleStates: []agentsdk.PostLifecycleRequest{{State: codersdk.WorkspaceAgentLifecycleCreated}},
reportConnectionsUpdate: make(chan struct{}, 1),
ignorePorts: options.IgnorePorts,
portCacheDuration: options.PortCacheDuration,
reportMetadataInterval: options.ReportMetadataInterval,
announcementBannersRefreshInterval: options.ServiceBannerRefreshInterval,
sshMaxTimeout: options.SSHMaxTimeout,
@@ -205,8 +196,6 @@ func New(options Options) Agent {
devcontainers: options.Devcontainers,
containerAPIOptions: options.DevcontainerAPIOptions,
socketPath: options.SocketPath,
socketServerEnabled: options.SocketServerEnabled,
}
// Initially, we have a closed channel, reflecting the fact that we are not initially connected.
// Each time we connect we replace the channel (while holding the closeMutex) with a new one
@@ -214,21 +203,27 @@ func New(options Options) Agent {
// coordinator during shut down.
close(a.coordDisconnected)
a.announcementBanners.Store(new([]codersdk.BannerConfig))
a.sessionToken.Store(new(string))
a.init()
return a
}
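The `sessionToken.Store(new(string))` seeding above pairs with the `atomic.Pointer[string]` field added to the agent struct below: readers such as `updateCommandEnv` can dereference the token without taking a lock. A minimal illustration of the pattern (Go 1.19+ for the generic `atomic.Pointer`), not the agent's actual code:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var token atomic.Pointer[string]
	token.Store(new(string)) // Seed with an empty string so Load never returns nil.

	t := "refreshed-token"
	token.Store(&t) // Writers atomically swap in a new pointer...

	fmt.Println(*token.Load()) // ...and readers dereference without locking.
}
```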
type agent struct {
clock quartz.Clock
logger slog.Logger
client Client
tailnetListenPort uint16
filesystem afero.Fs
logDir string
tempDir string
scriptDataDir string
listeningPortsHandler listeningPortsHandler
subsystems []codersdk.AgentSubsystem
clock quartz.Clock
logger slog.Logger
client Client
exchangeToken func(ctx context.Context) (string, error)
tailnetListenPort uint16
filesystem afero.Fs
logDir string
tempDir string
scriptDataDir string
// ignorePorts tells the api handler which ports to ignore when
// listing all listening ports. This is helpful to hide ports that
// are used by the agent, which the user does not care about.
ignorePorts map[int]string
portCacheDuration time.Duration
subsystems []codersdk.AgentSubsystem
reconnectingPTYTimeout time.Duration
reconnectingPTYServer *reconnectingpty.Server
@@ -259,6 +254,7 @@ type agent struct {
scriptRunner *agentscripts.Runner
announcementBanners atomic.Pointer[[]codersdk.BannerConfig] // announcementBanners is atomic because it is periodically updated.
announcementBannersRefreshInterval time.Duration
sessionToken atomic.Pointer[string]
sshServer *agentssh.Server
sshMaxTimeout time.Duration
blockFileTransfer bool
@@ -284,10 +280,6 @@ type agent struct {
devcontainers bool
containerAPIOptions []agentcontainers.Option
containerAPI *agentcontainers.API
socketServerEnabled bool
socketPath string
socketServer *agentsocket.Server
}
func (a *agent) TailnetConn() *tailnet.Conn {
@@ -344,16 +336,18 @@ func (a *agent) init() {
// will not report anywhere.
a.scriptRunner.RegisterMetrics(a.prometheusRegistry)
containerAPIOpts := []agentcontainers.Option{
agentcontainers.WithExecer(a.execer),
agentcontainers.WithCommandEnv(a.sshServer.CommandEnv),
agentcontainers.WithScriptLogger(func(logSourceID uuid.UUID) agentcontainers.ScriptLogger {
return a.logSender.GetScriptLogger(logSourceID)
}),
}
containerAPIOpts = append(containerAPIOpts, a.containerAPIOptions...)
if a.devcontainers {
containerAPIOpts := []agentcontainers.Option{
agentcontainers.WithExecer(a.execer),
agentcontainers.WithCommandEnv(a.sshServer.CommandEnv),
agentcontainers.WithScriptLogger(func(logSourceID uuid.UUID) agentcontainers.ScriptLogger {
return a.logSender.GetScriptLogger(logSourceID)
}),
}
containerAPIOpts = append(containerAPIOpts, a.containerAPIOptions...)
a.containerAPI = agentcontainers.NewAPI(a.logger.Named("containers"), containerAPIOpts...)
a.containerAPI = agentcontainers.NewAPI(a.logger.Named("containers"), containerAPIOpts...)
}
a.reconnectingPTYServer = reconnectingpty.NewServer(
a.logger.Named("reconnecting-pty"),
@@ -367,32 +361,9 @@ func (a *agent) init() {
s.ExperimentalContainers = a.devcontainers
},
)
a.initSocketServer()
go a.runLoop()
}
// initSocketServer initializes a server that allows direct communication with a workspace agent using IPC.
func (a *agent) initSocketServer() {
if !a.socketServerEnabled {
a.logger.Info(a.hardCtx, "socket server is disabled")
return
}
server, err := agentsocket.NewServer(
a.logger.Named("socket"),
agentsocket.WithPath(a.socketPath),
)
if err != nil {
a.logger.Warn(a.hardCtx, "failed to create socket server", slog.Error(err), slog.F("path", a.socketPath))
return
}
a.socketServer = server
a.logger.Debug(a.hardCtx, "socket server started", slog.F("path", a.socketPath))
}
// runLoop attempts to start the agent in a retry loop.
// Coder may be offline temporarily, a connection issue
// may be happening, but regardless after the intermittent
@@ -594,6 +565,7 @@ func (a *agent) reportMetadata(ctx context.Context, aAPI proto.DRPCAgentClient26
// channel to synchronize the results and avoid both messy
// mutex logic and overloading the API.
for _, md := range manifest.Metadata {
md := md
// We send the result to the channel in the goroutine to avoid
// sending the same result multiple times. So, we don't care about
// the return values.
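The `md := md` line is the classic pre-Go 1.22 idiom: before 1.22 the range variable was reused across iterations, so every goroutine launched in the loop would otherwise capture the same variable. A self-contained sketch of the failure mode it prevents:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	items := []string{"a", "b", "c"}
	var wg sync.WaitGroup
	for _, it := range items {
		it := it // Shadow the loop variable so each goroutine gets its own copy.
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(it) // Without the shadow, all goroutines could print "c" on Go <1.22.
		}()
	}
	wg.Wait()
}
```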
@@ -821,15 +793,11 @@ func (a *agent) reportConnectionsLoop(ctx context.Context, aAPI proto.DRPCAgentC
logger.Debug(ctx, "reporting connection")
_, err := aAPI.ReportConnection(ctx, payload)
if err != nil {
// Do not fail the loop if we fail to report a connection, just
// log a warning.
// Related to https://github.com/coder/coder/issues/20194
logger.Warn(ctx, "failed to report connection to server", slog.Error(err))
// keep going, we still need to remove it from the slice
} else {
logger.Debug(ctx, "successfully reported connection")
return xerrors.Errorf("failed to report connection: %w", err)
}
logger.Debug(ctx, "successfully reported connection")
// Remove the payload we sent.
a.reportConnectionsMu.Lock()
a.reportConnections[0] = nil // Release the pointer from the underlying array.
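Nil-ing the head element before reslicing matters because `a.reportConnections[1:]` still shares the original backing array; without the nil, the popped payload stays reachable and cannot be collected. The same pop, factored into a generic helper as a sketch:

```go
// Sketch of a leak-free pop over a pointer slice, mirroring the
// reportConnections handling above.
func pop[T any](q []*T) (head *T, rest []*T) {
	head = q[0]
	q[0] = nil // Drop the backing array's reference so the GC can reclaim it.
	return head, q[1:]
}
```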
@@ -860,13 +828,6 @@ func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_T
ip = host
}
// If the IP is "localhost" (which it can be in some cases), set it to
// 127.0.0.1 instead.
// Related to https://github.com/coder/coder/issues/20194
if ip == "localhost" {
ip = "127.0.0.1"
}
a.reportConnectionsMu.Lock()
defer a.reportConnectionsMu.Unlock()
@@ -958,10 +919,11 @@ func (a *agent) run() (retErr error) {
// This allows the agent to refresh its token if necessary.
// For instance identity this is required, since the instance
// may not have re-provisioned, but a new agent ID was created.
err := a.client.RefreshToken(a.hardCtx)
sessionToken, err := a.exchangeToken(a.hardCtx)
if err != nil {
return xerrors.Errorf("refresh token: %w", err)
return xerrors.Errorf("exchange token: %w", err)
}
a.sessionToken.Store(&sessionToken)
// ConnectRPC returns the dRPC connection we use for the Agent and Tailnet v2+ APIs
aAPI, tAPI, err := a.client.ConnectRPC26(a.hardCtx)
@@ -1127,7 +1089,7 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
if err != nil {
return xerrors.Errorf("fetch metadata: %w", err)
}
a.logger.Info(ctx, "fetched manifest")
a.logger.Info(ctx, "fetched manifest", slog.F("manifest", mp))
manifest, err := agentsdk.ManifestFromProto(mp)
if err != nil {
a.logger.Critical(ctx, "failed to convert manifest", slog.F("manifest", mp), slog.Error(err))
@@ -1201,7 +1163,7 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
scripts = manifest.Scripts
devcontainerScripts map[uuid.UUID]codersdk.WorkspaceAgentScript
)
if a.devcontainers {
if a.containerAPI != nil {
// Init the container API with the manifest and client so that
// we can start accepting requests. The final start of the API
// happens after the startup scripts have been executed to
@@ -1209,7 +1171,7 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
// return existing devcontainers but actual container detection
// and creation will be deferred.
a.containerAPI.Init(
agentcontainers.WithManifestInfo(manifest.OwnerName, manifest.WorkspaceName, manifest.AgentName, manifest.Directory),
agentcontainers.WithManifestInfo(manifest.OwnerName, manifest.WorkspaceName, manifest.AgentName),
agentcontainers.WithDevcontainers(manifest.Devcontainers, manifest.Scripts),
agentcontainers.WithSubAgentClient(agentcontainers.NewSubAgentClientFromAPI(a.logger, aAPI)),
)
@@ -1236,7 +1198,7 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
// autostarted devcontainer will be included in this time.
err := a.scriptRunner.Execute(a.gracefulCtx, agentscripts.ExecuteStartScripts)
if a.devcontainers {
if a.containerAPI != nil {
// Start the container API after the startup scripts have
// been executed to ensure that the required tools can be
// installed.
@@ -1400,7 +1362,7 @@ func (a *agent) updateCommandEnv(current []string) (updated []string, err error)
"CODER_WORKSPACE_OWNER_NAME": manifest.OwnerName,
// Specific Coder subcommands require the agent token exposed!
"CODER_AGENT_TOKEN": a.client.GetSessionToken(),
"CODER_AGENT_TOKEN": *a.sessionToken.Load(),
// Git on Windows resolves with UNIX-style paths.
// If using backslashes, it's unable to find the executable.
@@ -1960,7 +1922,6 @@ func (a *agent) Close() error {
lifecycleState = codersdk.WorkspaceAgentLifecycleShutdownError
}
}
a.setLifecycle(lifecycleState)
err = a.scriptRunner.Close()
@@ -1968,16 +1929,12 @@ func (a *agent) Close() error {
a.logger.Error(a.hardCtx, "script runner close", slog.Error(err))
}
if a.socketServer != nil {
if err := a.socketServer.Close(); err != nil {
a.logger.Error(a.hardCtx, "socket server close", slog.Error(err))
if a.containerAPI != nil {
if err := a.containerAPI.Close(); err != nil {
a.logger.Error(a.hardCtx, "container API close", slog.Error(err))
}
}
if err := a.containerAPI.Close(); err != nil {
a.logger.Error(a.hardCtx, "container API close", slog.Error(err))
}
// Wait for the graceful shutdown to complete, but don't wait forever so
// that we don't break user expectations.
go func() {
+121 -381 agent/agent_test.go
@@ -22,6 +22,7 @@ import (
"slices"
"strconv"
"strings"
"sync/atomic"
"testing"
"time"
@@ -455,6 +456,8 @@ func TestAgent_GitSSH(t *testing.T) {
func TestAgent_SessionTTYShell(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
t.Cleanup(cancel)
if runtime.GOOS == "windows" {
// This might be our implementation, or ConPTY itself.
// It's difficult to find extensive tests for it, so
@@ -465,7 +468,6 @@ func TestAgent_SessionTTYShell(t *testing.T) {
for _, port := range sshPorts {
t.Run(fmt.Sprintf("(%d)", port), func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
session := setupSSHSessionOnPort(t, agentsdk.Manifest{}, codersdk.ServiceBannerConfig{}, nil, port)
command := "sh"
@@ -1807,12 +1809,11 @@ func TestAgent_ReconnectingPTY(t *testing.T) {
//nolint:dogsled
conn, agentClient, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0)
idConnectionReport := uuid.New()
id := uuid.New()
// Test that the connection is reported. This must be tested in the
// first connection because we care about verifying all of these.
netConn0, err := conn.ReconnectingPTY(ctx, idConnectionReport, 80, 80, "bash --norc")
netConn0, err := conn.ReconnectingPTY(ctx, id, 80, 80, "bash --norc")
require.NoError(t, err)
_ = netConn0.Close()
assertConnectionReport(t, agentClient, proto.Connection_RECONNECTING_PTY, 0, "")
@@ -2028,8 +2029,7 @@ func runSubAgentMain() int {
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
req = req.WithContext(ctx)
client := &http.Client{}
resp, err := client.Do(req)
resp, err := http.DefaultClient.Do(req)
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "agent connection failed: %v\n", err)
return 11
@@ -2130,7 +2130,7 @@ func TestAgent_DevcontainerAutostart(t *testing.T) {
"name": "mywork",
"image": "ubuntu:latest",
"cmd": ["sleep", "infinity"],
"runArgs": ["--network=host", "--label=`+agentcontainers.DevcontainerIsTestRunLabel+`=true"]
"runArgs": ["--network=host"]
}`), 0o600)
require.NoError(t, err, "write devcontainer.json")
@@ -2167,7 +2167,6 @@ func TestAgent_DevcontainerAutostart(t *testing.T) {
// Only match this specific dev container.
agentcontainers.WithClock(mClock),
agentcontainers.WithContainerLabelIncludeFilter("devcontainer.local_folder", tempWorkspaceFolder),
agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerIsTestRunLabel, "true"),
agentcontainers.WithSubAgentURL(srv.URL),
// The agent will copy "itself", but in the case of this test, the
// agent is actually this test binary. So we'll tell the test binary
@@ -2289,8 +2288,7 @@ func TestAgent_DevcontainerRecreate(t *testing.T) {
err = os.WriteFile(devcontainerFile, []byte(`{
"name": "mywork",
"image": "busybox:latest",
"cmd": ["sleep", "infinity"],
"runArgs": ["--label=`+agentcontainers.DevcontainerIsTestRunLabel+`=true"]
"cmd": ["sleep", "infinity"]
}`), 0o600)
require.NoError(t, err, "write devcontainer.json")
@@ -2317,7 +2315,6 @@ func TestAgent_DevcontainerRecreate(t *testing.T) {
o.Devcontainers = true
o.DevcontainerAPIOptions = append(o.DevcontainerAPIOptions,
agentcontainers.WithContainerLabelIncludeFilter("devcontainer.local_folder", workspaceFolder),
agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerIsTestRunLabel, "true"),
)
})
@@ -2372,7 +2369,7 @@ func TestAgent_DevcontainerRecreate(t *testing.T) {
// devcontainer, we do it in a goroutine so we can process logs
// concurrently.
go func(container codersdk.WorkspaceAgentContainer) {
_, err := conn.RecreateDevcontainer(ctx, devcontainerID.String())
_, err := conn.RecreateDevcontainer(ctx, container.ID)
assert.NoError(t, err, "recreate devcontainer should succeed")
}(container)
@@ -2441,8 +2438,7 @@ func TestAgent_DevcontainersDisabledForSubAgent(t *testing.T) {
// Setup the agent with devcontainers enabled initially.
//nolint:dogsled
conn, _, _, _, _ := setupAgent(t, manifest, 0, func(_ *agenttest.Client, o *agent.Options) {
o.Devcontainers = true
conn, _, _, _, _ := setupAgent(t, manifest, 0, func(*agenttest.Client, *agent.Options) {
})
// Query the containers API endpoint. This should fail because
@@ -2454,214 +2450,8 @@ func TestAgent_DevcontainersDisabledForSubAgent(t *testing.T) {
require.Error(t, err)
// Verify the error message contains the expected text.
require.Contains(t, err.Error(), "Dev Container feature not supported.")
require.Contains(t, err.Error(), "Dev Container integration inside other Dev Containers is explicitly not supported.")
}
// TestAgent_DevcontainerPrebuildClaim tests that we correctly handle
// the claiming process for running devcontainers.
//
// You can run it manually as follows:
//
// CODER_TEST_USE_DOCKER=1 go test -count=1 ./agent -run TestAgent_DevcontainerPrebuildClaim
//
//nolint:paralleltest // This test sets an environment variable.
func TestAgent_DevcontainerPrebuildClaim(t *testing.T) {
if os.Getenv("CODER_TEST_USE_DOCKER") != "1" {
t.Skip("Set CODER_TEST_USE_DOCKER=1 to run this test")
}
if _, err := exec.LookPath("devcontainer"); err != nil {
t.Skip("This test requires the devcontainer CLI: npm install -g @devcontainers/cli")
}
pool, err := dockertest.NewPool("")
require.NoError(t, err, "Could not connect to docker")
var (
ctx = testutil.Context(t, testutil.WaitShort)
devcontainerID = uuid.New()
devcontainerLogSourceID = uuid.New()
workspaceFolder = filepath.Join(t.TempDir(), "project")
devcontainerPath = filepath.Join(workspaceFolder, ".devcontainer")
devcontainerConfig = filepath.Join(devcontainerPath, "devcontainer.json")
)
// Given: A devcontainer project.
t.Logf("Workspace folder: %s", workspaceFolder)
err = os.MkdirAll(devcontainerPath, 0o755)
require.NoError(t, err, "create dev container directory")
// Given: This devcontainer project specifies an app that uses the owner name and workspace name.
err = os.WriteFile(devcontainerConfig, []byte(`{
"name": "project",
"image": "busybox:latest",
"cmd": ["sleep", "infinity"],
"runArgs": ["--label=`+agentcontainers.DevcontainerIsTestRunLabel+`=true"],
"customizations": {
"coder": {
"apps": [{
"slug": "zed",
"url": "zed://ssh/${localEnv:CODER_WORKSPACE_AGENT_NAME}.${localEnv:CODER_WORKSPACE_NAME}.${localEnv:CODER_WORKSPACE_OWNER_NAME}.coder${containerWorkspaceFolder}"
}]
}
}
}`), 0o600)
require.NoError(t, err, "write devcontainer config")
// Given: A manifest with a prebuild username and workspace name.
manifest := agentsdk.Manifest{
OwnerName: "prebuilds",
WorkspaceName: "prebuilds-xyz-123",
Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
{ID: devcontainerID, Name: "test", WorkspaceFolder: workspaceFolder},
},
Scripts: []codersdk.WorkspaceAgentScript{
{ID: devcontainerID, LogSourceID: devcontainerLogSourceID},
},
}
// When: We create an agent with devcontainers enabled.
//nolint:dogsled
conn, client, _, _, _ := setupAgent(t, manifest, 0, func(_ *agenttest.Client, o *agent.Options) {
o.Devcontainers = true
o.DevcontainerAPIOptions = append(o.DevcontainerAPIOptions,
agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerLocalFolderLabel, workspaceFolder),
agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerIsTestRunLabel, "true"),
)
})
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
return slices.Contains(client.GetLifecycleStates(), codersdk.WorkspaceAgentLifecycleReady)
}, testutil.IntervalMedium, "agent not ready")
var dcPrebuild codersdk.WorkspaceAgentDevcontainer
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
resp, err := conn.ListContainers(ctx)
require.NoError(t, err)
for _, dc := range resp.Devcontainers {
if dc.Container == nil {
continue
}
v, ok := dc.Container.Labels[agentcontainers.DevcontainerLocalFolderLabel]
if ok && v == workspaceFolder {
dcPrebuild = dc
return true
}
}
return false
}, testutil.IntervalMedium, "devcontainer not found")
defer func() {
pool.Client.RemoveContainer(docker.RemoveContainerOptions{
ID: dcPrebuild.Container.ID,
RemoveVolumes: true,
Force: true,
})
}()
// Then: We expect a sub agent to have been created.
subAgents := client.GetSubAgents()
require.Len(t, subAgents, 1)
subAgent := subAgents[0]
subAgentID, err := uuid.FromBytes(subAgent.GetId())
require.NoError(t, err)
// And: We expect there to be 1 app.
subAgentApps, err := client.GetSubAgentApps(subAgentID)
require.NoError(t, err)
require.Len(t, subAgentApps, 1)
// And: This app should contain the prebuild workspace name and owner name.
subAgentApp := subAgentApps[0]
require.Equal(t, "zed://ssh/project.prebuilds-xyz-123.prebuilds.coder/workspaces/project", subAgentApp.GetUrl())
// Given: We close the client and connection
client.Close()
conn.Close()
// Given: A new manifest with a regular user owner name and workspace name.
manifest = agentsdk.Manifest{
OwnerName: "user",
WorkspaceName: "user-workspace",
Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
{ID: devcontainerID, Name: "test", WorkspaceFolder: workspaceFolder},
},
Scripts: []codersdk.WorkspaceAgentScript{
{ID: devcontainerID, LogSourceID: devcontainerLogSourceID},
},
}
// When: We create an agent with devcontainers enabled.
//nolint:dogsled
conn, client, _, _, _ = setupAgent(t, manifest, 0, func(_ *agenttest.Client, o *agent.Options) {
o.Devcontainers = true
o.DevcontainerAPIOptions = append(o.DevcontainerAPIOptions,
agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerLocalFolderLabel, workspaceFolder),
agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerIsTestRunLabel, "true"),
)
})
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
return slices.Contains(client.GetLifecycleStates(), codersdk.WorkspaceAgentLifecycleReady)
}, testutil.IntervalMedium, "agent not ready")
var dcClaimed codersdk.WorkspaceAgentDevcontainer
testutil.Eventually(ctx, t, func(ctx context.Context) bool {
resp, err := conn.ListContainers(ctx)
require.NoError(t, err)
for _, dc := range resp.Devcontainers {
if dc.Container == nil {
continue
}
v, ok := dc.Container.Labels[agentcontainers.DevcontainerLocalFolderLabel]
if ok && v == workspaceFolder {
dcClaimed = dc
return true
}
}
return false
}, testutil.IntervalMedium, "devcontainer not found")
defer func() {
if dcClaimed.Container.ID != dcPrebuild.Container.ID {
pool.Client.RemoveContainer(docker.RemoveContainerOptions{
ID: dcClaimed.Container.ID,
RemoveVolumes: true,
Force: true,
})
}
}()
// Then: We expect the claimed devcontainer and prebuild devcontainer
// to be using the same underlying container.
require.Equal(t, dcPrebuild.Container.ID, dcClaimed.Container.ID)
// And: We expect there to be a sub agent created.
subAgents = client.GetSubAgents()
require.Len(t, subAgents, 1)
subAgent = subAgents[0]
subAgentID, err = uuid.FromBytes(subAgent.GetId())
require.NoError(t, err)
// And: We expect there to be an app.
subAgentApps, err = client.GetSubAgentApps(subAgentID)
require.NoError(t, err)
require.Len(t, subAgentApps, 1)
// And: We expect this app to have the user's owner name and workspace name.
subAgentApp = subAgentApps[0]
require.Equal(t, "zed://ssh/project.user-workspace.user.coder/workspaces/project", subAgentApp.GetUrl())
require.Contains(t, err.Error(), "The agent dev containers feature is experimental and not enabled by default.")
require.Contains(t, err.Error(), "To enable this feature, set CODER_AGENT_DEVCONTAINERS_ENABLE=true in your template.")
}
func TestAgent_Dial(t *testing.T) {
@@ -2669,11 +2459,11 @@ func TestAgent_Dial(t *testing.T) {
cases := []struct {
name string
setup func(t testing.TB) net.Listener
setup func(t *testing.T) net.Listener
}{
{
name: "TCP",
setup: func(t testing.TB) net.Listener {
setup: func(t *testing.T) net.Listener {
l, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err, "create TCP listener")
return l
@@ -2681,7 +2471,7 @@ func TestAgent_Dial(t *testing.T) {
},
{
name: "UDP",
setup: func(t testing.TB) net.Listener {
setup: func(t *testing.T) net.Listener {
addr := net.UDPAddr{
IP: net.ParseIP("127.0.0.1"),
Port: 0,
@@ -2699,69 +2489,57 @@ func TestAgent_Dial(t *testing.T) {
// The purpose of this test is to ensure that a client can dial a
// listener in the workspace over tailnet.
//
// The OS sometimes drops packets if the system can't keep up with
// them. For TCP packets, it's typically fine due to
// retransmissions, but for UDP packets, it can fail this test.
//
// The OS gets involved for the Wireguard traffic (either via DERP
// or direct UDP), and also for the traffic between the agent and
// the listener in the "workspace".
//
// To avoid this, we'll retry this test up to 3 times.
//nolint:gocritic // This test is flaky due to uncontrollable OS packet drops under heavy load.
testutil.RunRetry(t, 3, func(t testing.TB) {
ctx := testutil.Context(t, testutil.WaitLong)
l := c.setup(t)
done := make(chan struct{})
defer func() {
l.Close()
<-done
}()
l := c.setup(t)
done := make(chan struct{})
defer func() {
l.Close()
<-done
}()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
go func() {
defer close(done)
for range 2 {
c, err := l.Accept()
if assert.NoError(t, err, "accept connection") {
testAccept(ctx, t, c)
_ = c.Close()
}
go func() {
defer close(done)
for range 2 {
c, err := l.Accept()
if assert.NoError(t, err, "accept connection") {
testAccept(ctx, t, c)
_ = c.Close()
}
}()
agentID := uuid.UUID{0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8}
//nolint:dogsled
agentConn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{
AgentID: agentID,
}, 0)
require.True(t, agentConn.AwaitReachable(ctx))
conn, err := agentConn.DialContext(ctx, l.Addr().Network(), l.Addr().String())
require.NoError(t, err)
testDial(ctx, t, conn)
err = conn.Close()
require.NoError(t, err)
// also connect via the CoderServicePrefix, to test that we can reach the agent on this
// IP. This will be required for CoderVPN.
_, rawPort, _ := net.SplitHostPort(l.Addr().String())
port, _ := strconv.ParseUint(rawPort, 10, 16)
ipp := netip.AddrPortFrom(tailnet.CoderServicePrefix.AddrFromUUID(agentID), uint16(port))
switch l.Addr().Network() {
case "tcp":
conn, err = agentConn.TailnetConn().DialContextTCP(ctx, ipp)
case "udp":
conn, err = agentConn.TailnetConn().DialContextUDP(ctx, ipp)
default:
t.Fatalf("unknown network: %s", l.Addr().Network())
}
require.NoError(t, err)
testDial(ctx, t, conn)
err = conn.Close()
require.NoError(t, err)
})
}()
agentID := uuid.UUID{0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8}
//nolint:dogsled
agentConn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{
AgentID: agentID,
}, 0)
require.True(t, agentConn.AwaitReachable(ctx))
conn, err := agentConn.DialContext(ctx, l.Addr().Network(), l.Addr().String())
require.NoError(t, err)
testDial(ctx, t, conn)
err = conn.Close()
require.NoError(t, err)
// also connect via the CoderServicePrefix, to test that we can reach the agent on this
// IP. This will be required for CoderVPN.
_, rawPort, _ := net.SplitHostPort(l.Addr().String())
port, _ := strconv.ParseUint(rawPort, 10, 16)
ipp := netip.AddrPortFrom(tailnet.CoderServicePrefix.AddrFromUUID(agentID), uint16(port))
switch l.Addr().Network() {
case "tcp":
conn, err = agentConn.Conn.DialContextTCP(ctx, ipp)
case "udp":
conn, err = agentConn.Conn.DialContextUDP(ctx, ipp)
default:
t.Fatalf("unknown network: %s", l.Addr().Network())
}
require.NoError(t, err)
testDial(ctx, t, conn)
err = conn.Close()
require.NoError(t, err)
})
}
}
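`testutil.RunRetry` is Coder's own helper and its implementation is not part of this diff. The `func(t testing.TB)` signature suggests it hands the body a wrapper TB that records failures instead of failing the parent test, retrying until an attempt passes. A rough sketch under that assumption (real code would need to cover more of the `testing.TB` surface):

```go
// Rough sketch only; not the actual testutil.RunRetry implementation.
package testutil

import (
	"runtime"
	"testing"
)

type recordingTB struct {
	testing.TB // Embed the parent for Logf, Helper, etc.
	failed     bool
}

func (r *recordingTB) Errorf(format string, args ...any) { r.failed = true; r.Logf(format, args...) }
func (r *recordingTB) Fatalf(format string, args ...any) { r.Errorf(format, args...); runtime.Goexit() }
func (r *recordingTB) FailNow()                          { r.failed = true; runtime.Goexit() }

func runRetry(t *testing.T, attempts int, body func(tb testing.TB)) {
	t.Helper()
	for i := 1; i <= attempts; i++ {
		rec := &recordingTB{TB: t}
		done := make(chan struct{})
		go func() { // A goroutine isolates runtime.Goexit from Fatalf/FailNow.
			defer close(done)
			body(rec)
		}()
		<-done
		if !rec.failed {
			return
		}
		t.Logf("attempt %d failed, retrying", i)
	}
	t.Fatalf("all %d attempts failed", attempts)
}
```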
@@ -2812,7 +2590,7 @@ func TestAgent_UpdatedDERP(t *testing.T) {
})
// Setup a client connection.
newClientConn := func(derpMap *tailcfg.DERPMap, name string) workspacesdk.AgentConn {
newClientConn := func(derpMap *tailcfg.DERPMap, name string) *workspacesdk.AgentConn {
conn, err := tailnet.NewConn(&tailnet.Options{
Addresses: []netip.Prefix{tailnet.TailscaleServicePrefix.RandomPrefix()},
DERPMap: derpMap,
@@ -2892,13 +2670,13 @@ func TestAgent_UpdatedDERP(t *testing.T) {
// Connect from a second client and make sure it uses the new DERP map.
conn2 := newClientConn(newDerpMap, "client2")
require.Equal(t, []int{2}, conn2.TailnetConn().DERPMap().RegionIDs())
require.Equal(t, []int{2}, conn2.DERPMap().RegionIDs())
t.Log("conn2 got the new DERPMap")
// If the first client gets a DERP map update, it should be able to
// reconnect just fine.
conn1.TailnetConn().SetDERPMap(newDerpMap)
require.Equal(t, []int{2}, conn1.TailnetConn().DERPMap().RegionIDs())
conn1.SetDERPMap(newDerpMap)
require.Equal(t, []int{2}, conn1.DERPMap().RegionIDs())
t.Log("set the new DERPMap on conn1")
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
@@ -2927,11 +2705,11 @@ func TestAgent_Speedtest(t *testing.T) {
func TestAgent_Reconnect(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
logger := testutil.Logger(t)
// After the agent is disconnected from a coordinator, it's supposed
// to reconnect!
fCoordinator := tailnettest.NewFakeCoordinator()
coordinator := tailnet.NewCoordinator(logger)
defer coordinator.Close()
agentID := uuid.New()
statsCh := make(chan *proto.Stats, 50)
@@ -2943,24 +2721,27 @@ func TestAgent_Reconnect(t *testing.T) {
DERPMap: derpMap,
},
statsCh,
fCoordinator,
coordinator,
)
defer client.Close()
initialized := atomic.Int32{}
closer := agent.New(agent.Options{
ExchangeToken: func(ctx context.Context) (string, error) {
initialized.Add(1)
return "", nil
},
Client: client,
Logger: logger.Named("agent"),
})
defer closer.Close()
call1 := testutil.RequireReceive(ctx, t, fCoordinator.CoordinateCalls)
require.Equal(t, client.GetNumRefreshTokenCalls(), 1)
close(call1.Resps) // hang up
// expect reconnect
testutil.RequireReceive(ctx, t, fCoordinator.CoordinateCalls)
// Check that the agent refreshes the token when it reconnects.
require.Equal(t, client.GetNumRefreshTokenCalls(), 2)
closer.Close()
require.Eventually(t, func() bool {
return coordinator.Node(agentID) != nil
}, testutil.WaitShort, testutil.IntervalFast)
client.LastWorkspaceAgent()
require.Eventually(t, func() bool {
return initialized.Load() == 2
}, testutil.WaitShort, testutil.IntervalFast)
}
func TestAgent_WriteVSCodeConfigs(t *testing.T) {
@@ -2982,6 +2763,9 @@ func TestAgent_WriteVSCodeConfigs(t *testing.T) {
defer client.Close()
filesystem := afero.NewMemMapFs()
closer := agent.New(agent.Options{
ExchangeToken: func(ctx context.Context) (string, error) {
return "", nil
},
Client: client,
Logger: logger.Named("agent"),
Filesystem: filesystem,
@@ -3010,6 +2794,9 @@ func TestAgent_DebugServer(t *testing.T) {
conn, _, _, _, agnt := setupAgent(t, agentsdk.Manifest{
DERPMap: derpMap,
}, 0, func(c *agenttest.Client, o *agent.Options) {
o.ExchangeToken = func(context.Context) (string, error) {
return "token", nil
}
o.LogDir = logDir
})
@@ -3255,8 +3042,8 @@ func setupSSHSessionOnPort(
return session
}
func setupAgent(t testing.TB, metadata agentsdk.Manifest, ptyTimeout time.Duration, opts ...func(*agenttest.Client, *agent.Options)) (
workspacesdk.AgentConn,
func setupAgent(t *testing.T, metadata agentsdk.Manifest, ptyTimeout time.Duration, opts ...func(*agenttest.Client, *agent.Options)) (
*workspacesdk.AgentConn,
*agenttest.Client,
<-chan *proto.Stats,
afero.Fs,
@@ -3353,7 +3140,7 @@ func setupAgent(t testing.TB, metadata agentsdk.Manifest, ptyTimeout time.Durati
var dialTestPayload = []byte("dean-was-here123")
func testDial(ctx context.Context, t testing.TB, c net.Conn) {
func testDial(ctx context.Context, t *testing.T, c net.Conn) {
t.Helper()
if deadline, ok := ctx.Deadline(); ok {
@@ -3369,7 +3156,7 @@ func testDial(ctx context.Context, t testing.TB, c net.Conn) {
assertReadPayload(t, c, dialTestPayload)
}
func testAccept(ctx context.Context, t testing.TB, c net.Conn) {
func testAccept(ctx context.Context, t *testing.T, c net.Conn) {
t.Helper()
defer c.Close()
@@ -3386,7 +3173,7 @@ func testAccept(ctx context.Context, t testing.TB, c net.Conn) {
assertWritePayload(t, c, dialTestPayload)
}
func assertReadPayload(t testing.TB, r io.Reader, payload []byte) {
func assertReadPayload(t *testing.T, r io.Reader, payload []byte) {
t.Helper()
b := make([]byte, len(payload)+16)
n, err := r.Read(b)
@@ -3395,11 +3182,11 @@ func assertReadPayload(t testing.TB, r io.Reader, payload []byte) {
assert.Equal(t, payload, b[:n])
}
func assertWritePayload(t testing.TB, w io.Writer, payload []byte) {
func assertWritePayload(t *testing.T, w io.Writer, payload []byte) {
t.Helper()
n, err := w.Write(payload)
assert.NoError(t, err, "write payload")
assert.Equal(t, len(payload), n, "written payload length does not match")
assert.Equal(t, len(payload), n, "payload length does not match")
}
func testSessionOutput(t *testing.T, session *ssh.Session, expected, unexpected []string, expectedRe *regexp.Regexp) {
@@ -3477,31 +3264,16 @@ func TestAgent_Metrics_SSH(t *testing.T) {
err = session.Shell()
require.NoError(t, err)
expected := []struct {
Name string
Type proto.Stats_Metric_Type
CheckFn func(float64) error
Labels []*proto.Stats_Metric_Label
}{
expected := []*proto.Stats_Metric{
{
Name: "agent_reconnecting_pty_connections_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 0 {
return nil
}
return xerrors.Errorf("expected 0, got %f", v)
},
Name: "agent_reconnecting_pty_connections_total",
Type: proto.Stats_Metric_COUNTER,
Value: 0,
},
{
Name: "agent_sessions_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 1 {
return nil
}
return xerrors.Errorf("expected 1, got %f", v)
},
Name: "agent_sessions_total",
Type: proto.Stats_Metric_COUNTER,
Value: 1,
Labels: []*proto.Stats_Metric_Label{
{
Name: "magic_type",
@@ -3514,44 +3286,24 @@ func TestAgent_Metrics_SSH(t *testing.T) {
},
},
{
Name: "agent_ssh_server_failed_connections_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 0 {
return nil
}
return xerrors.Errorf("expected 0, got %f", v)
},
Name: "agent_ssh_server_failed_connections_total",
Type: proto.Stats_Metric_COUNTER,
Value: 0,
},
{
Name: "agent_ssh_server_sftp_connections_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 0 {
return nil
}
return xerrors.Errorf("expected 0, got %f", v)
},
Name: "agent_ssh_server_sftp_connections_total",
Type: proto.Stats_Metric_COUNTER,
Value: 0,
},
{
Name: "agent_ssh_server_sftp_server_errors_total",
Type: proto.Stats_Metric_COUNTER,
CheckFn: func(v float64) error {
if v == 0 {
return nil
}
return xerrors.Errorf("expected 0, got %f", v)
},
Name: "agent_ssh_server_sftp_server_errors_total",
Type: proto.Stats_Metric_COUNTER,
Value: 0,
},
{
Name: "coderd_agentstats_currently_reachable_peers",
Type: proto.Stats_Metric_GAUGE,
CheckFn: func(float64) error {
// We can't reliably ping a peer here, and networking is out of
// scope of this test, so we just test that the metric exists
// with the correct labels.
return nil
},
Name: "coderd_agentstats_currently_reachable_peers",
Type: proto.Stats_Metric_GAUGE,
Value: 0,
Labels: []*proto.Stats_Metric_Label{
{
Name: "connection_type",
@@ -3560,11 +3312,9 @@ func TestAgent_Metrics_SSH(t *testing.T) {
},
},
{
Name: "coderd_agentstats_currently_reachable_peers",
Type: proto.Stats_Metric_GAUGE,
CheckFn: func(float64) error {
return nil
},
Name: "coderd_agentstats_currently_reachable_peers",
Type: proto.Stats_Metric_GAUGE,
Value: 1,
Labels: []*proto.Stats_Metric_Label{
{
Name: "connection_type",
@@ -3573,20 +3323,9 @@ func TestAgent_Metrics_SSH(t *testing.T) {
},
},
{
Name: "coderd_agentstats_startup_script_seconds",
Type: proto.Stats_Metric_GAUGE,
CheckFn: func(f float64) error {
if f >= 0 {
return nil
}
return xerrors.Errorf("expected >= 0, got %f", f)
},
Labels: []*proto.Stats_Metric_Label{
{
Name: "success",
Value: "true",
},
},
Name: "coderd_agentstats_startup_script_seconds",
Type: proto.Stats_Metric_GAUGE,
Value: 1,
},
}
@@ -3608,10 +3347,11 @@ func TestAgent_Metrics_SSH(t *testing.T) {
for _, m := range mf.GetMetric() {
assert.Equal(t, expected[i].Name, mf.GetName())
assert.Equal(t, expected[i].Type.String(), mf.GetType().String())
// Value is the maximum expected value.
if expected[i].Type == proto.Stats_Metric_GAUGE {
assert.NoError(t, expected[i].CheckFn(m.GetGauge().GetValue()), "check fn for %s failed", expected[i].Name)
assert.GreaterOrEqualf(t, expected[i].Value, m.GetGauge().GetValue(), "expected %s to be greater than or equal to %f, got %f", expected[i].Name, expected[i].Value, m.GetGauge().GetValue())
} else if expected[i].Type == proto.Stats_Metric_COUNTER {
assert.NoError(t, expected[i].CheckFn(m.GetCounter().GetValue()), "check fn for %s failed", expected[i].Name)
assert.GreaterOrEqualf(t, expected[i].Value, m.GetCounter().GetValue(), "expected %s to be greater than or equal to %f, got %f", expected[i].Name, expected[i].Value, m.GetCounter().GetValue())
}
for j, lbl := range expected[i].Labels {
assert.Equal(t, m.GetLabel()[j], &promgo.LabelPair{
+18 -351 agent/agentcontainers/api.go
@@ -2,11 +2,8 @@ package agentcontainers
import (
"context"
"encoding/json"
"errors"
"fmt"
"io/fs"
"maps"
"net/http"
"os"
"path"
@@ -21,13 +18,10 @@ import (
"github.com/fsnotify/fsnotify"
"github.com/go-chi/chi/v5"
"github.com/go-git/go-git/v5/plumbing/format/gitignore"
"github.com/google/uuid"
"github.com/spf13/afero"
"golang.org/x/xerrors"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentcontainers/ignore"
"github.com/coder/coder/v2/agent/agentcontainers/watcher"
"github.com/coder/coder/v2/agent/agentexec"
"github.com/coder/coder/v2/agent/usershell"
@@ -36,7 +30,6 @@ import (
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/provisioner"
"github.com/coder/quartz"
"github.com/coder/websocket"
)
const (
@@ -60,12 +53,10 @@ type API struct {
cancel context.CancelFunc
watcherDone chan struct{}
updaterDone chan struct{}
discoverDone chan struct{}
updateTrigger chan chan error // Channel to trigger manual refresh.
updateInterval time.Duration // Interval for periodic container updates.
logger slog.Logger
watcher watcher.Watcher
fs afero.Fs
execer agentexec.Execer
commandEnv CommandEnv
ccli ContainerCLI
@@ -77,17 +68,12 @@ type API struct {
subAgentURL string
subAgentEnv []string
projectDiscovery bool // If we should perform project discovery or not.
discoveryAutostart bool // If we should autostart discovered projects.
ownerName string
workspaceName string
parentAgent string
agentDirectory string
ownerName string
workspaceName string
parentAgent string
mu sync.RWMutex // Protects the following fields.
initDone chan struct{} // Closed by Init.
updateChans []chan struct{}
closed bool
containers codersdk.WorkspaceAgentListContainersResponse // Output from the last list operation.
containersErr error // Error from the last list operation.
@@ -144,9 +130,7 @@ func WithCommandEnv(ce CommandEnv) Option {
strings.HasPrefix(s, "CODER_WORKSPACE_AGENT_URL=") ||
strings.HasPrefix(s, "CODER_AGENT_TOKEN=") ||
strings.HasPrefix(s, "CODER_AGENT_AUTH=") ||
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_ENABLE=") ||
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE=") ||
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=")
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_ENABLE=")
})
return shell, dir, env, nil
}
@@ -163,8 +147,8 @@ func WithContainerCLI(ccli ContainerCLI) Option {
// WithContainerLabelIncludeFilter sets a label filter for containers.
// This option can be given multiple times to filter by multiple labels.
// The behavior is such that only containers matching all of the provided
// labels will be included.
// The behavior is such that only containers matching one or more of the
// provided labels will be included.
func WithContainerLabelIncludeFilter(label, value string) Option {
return func(api *API) {
api.containerLabelIncludeFilter[label] = value
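The API is assembled through the functional options pattern: each `With*` constructor returns a closure that mutates the `*API` before it starts. Stripped down to its skeleton, with illustrative field names:

```go
type API struct {
	labelFilter map[string]string
}

type Option func(*API)

func WithLabelFilter(label, value string) Option {
	return func(api *API) { api.labelFilter[label] = value }
}

func NewAPI(opts ...Option) *API {
	api := &API{labelFilter: map[string]string{}}
	for _, opt := range opts {
		opt(api) // Each option mutates the API before it starts.
	}
	return api
}
```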
@@ -204,12 +188,11 @@ func WithSubAgentEnv(env ...string) Option {
// WithManifestInfo sets the owner name, and workspace name
// for the sub-agent.
func WithManifestInfo(owner, workspace, parentAgent, agentDirectory string) Option {
func WithManifestInfo(owner, workspace, parentAgent string) Option {
return func(api *API) {
api.ownerName = owner
api.workspaceName = workspace
api.parentAgent = parentAgent
api.agentDirectory = agentDirectory
}
}
@@ -274,29 +257,6 @@ func WithWatcher(w watcher.Watcher) Option {
}
}
// WithFileSystem sets the file system used for discovering projects.
func WithFileSystem(fileSystem afero.Fs) Option {
return func(api *API) {
api.fs = fileSystem
}
}
// WithProjectDiscovery sets if the API should attempt to discover
// projects on the filesystem.
func WithProjectDiscovery(projectDiscovery bool) Option {
return func(api *API) {
api.projectDiscovery = projectDiscovery
}
}
// WithDiscoveryAutostart sets if the API should attempt to autostart
// projects that have been discovered
func WithDiscoveryAutostart(discoveryAutostart bool) Option {
return func(api *API) {
api.discoveryAutostart = discoveryAutostart
}
}
// ScriptLogger is an interface for sending devcontainer logs to the
// controlplane.
type ScriptLogger interface {
@@ -367,9 +327,6 @@ func NewAPI(logger slog.Logger, options ...Option) *API {
api.watcher = watcher.NewNoop()
}
}
if api.fs == nil {
api.fs = afero.NewOsFs()
}
if api.subAgentClient.Load() == nil {
var c SubAgentClient = noopSubAgentClient{}
api.subAgentClient.Store(&c)
@@ -411,12 +368,6 @@ func (api *API) Start() {
return
}
if api.projectDiscovery && api.agentDirectory != "" {
api.discoverDone = make(chan struct{})
go api.discover()
}
api.watcherDone = make(chan struct{})
api.updaterDone = make(chan struct{})
@@ -424,162 +375,6 @@ func (api *API) Start() {
go api.updaterLoop()
}
func (api *API) discover() {
defer close(api.discoverDone)
defer api.logger.Debug(api.ctx, "project discovery finished")
api.logger.Debug(api.ctx, "project discovery started")
if err := api.discoverDevcontainerProjects(); err != nil {
api.logger.Error(api.ctx, "discovering dev container projects", slog.Error(err))
}
if err := api.RefreshContainers(api.ctx); err != nil {
api.logger.Error(api.ctx, "refreshing containers after discovery", slog.Error(err))
}
}
func (api *API) discoverDevcontainerProjects() error {
isGitProject, err := afero.DirExists(api.fs, filepath.Join(api.agentDirectory, ".git"))
if err != nil {
return xerrors.Errorf(".git dir exists: %w", err)
}
// If the agent directory is a git project, we'll search
// the project for any `.devcontainer/devcontainer.json`
// files.
if isGitProject {
return api.discoverDevcontainersInProject(api.agentDirectory)
}
// The agent directory is _not_ a git project, so we'll
// search the top level of the agent directory for any
// git projects, and search those.
entries, err := afero.ReadDir(api.fs, api.agentDirectory)
if err != nil {
return xerrors.Errorf("read agent directory: %w", err)
}
for _, entry := range entries {
if !entry.IsDir() {
continue
}
isGitProject, err = afero.DirExists(api.fs, filepath.Join(api.agentDirectory, entry.Name(), ".git"))
if err != nil {
return xerrors.Errorf(".git dir exists: %w", err)
}
// If this directory is a git project, we'll search
// it for any `.devcontainer/devcontainer.json` files.
if isGitProject {
if err := api.discoverDevcontainersInProject(filepath.Join(api.agentDirectory, entry.Name())); err != nil {
return err
}
}
}
return nil
}
func (api *API) discoverDevcontainersInProject(projectPath string) error {
logger := api.logger.
Named("project-discovery").
With(slog.F("project_path", projectPath))
globalPatterns, err := ignore.LoadGlobalPatterns(api.fs)
if err != nil {
return xerrors.Errorf("read global git ignore patterns: %w", err)
}
patterns, err := ignore.ReadPatterns(api.ctx, logger, api.fs, projectPath)
if err != nil {
return xerrors.Errorf("read git ignore patterns: %w", err)
}
matcher := gitignore.NewMatcher(append(globalPatterns, patterns...))
devcontainerConfigPaths := []string{
"/.devcontainer/devcontainer.json",
"/.devcontainer.json",
}
return afero.Walk(api.fs, projectPath, func(path string, info fs.FileInfo, err error) error {
if err != nil {
logger.Error(api.ctx, "encountered error while walking for dev container projects",
slog.F("path", path),
slog.Error(err))
return nil
}
pathParts := ignore.FilePathToParts(path)
// We know that a directory entry cannot be a `devcontainer.json` file, so we
// always skip processing directories. If the directory happens to be ignored
// by git then we'll make sure to ignore all of the children of that directory.
if info.IsDir() {
if matcher.Match(pathParts, true) {
return fs.SkipDir
}
return nil
}
if matcher.Match(pathParts, false) {
return nil
}
for _, relativeConfigPath := range devcontainerConfigPaths {
if !strings.HasSuffix(path, relativeConfigPath) {
continue
}
workspaceFolder := strings.TrimSuffix(path, relativeConfigPath)
logger := logger.With(slog.F("workspace_folder", workspaceFolder))
logger.Debug(api.ctx, "discovered dev container project")
api.mu.Lock()
if _, found := api.knownDevcontainers[workspaceFolder]; !found {
logger.Debug(api.ctx, "adding dev container project")
dc := codersdk.WorkspaceAgentDevcontainer{
ID: uuid.New(),
Name: "", // Updated later based on container state.
WorkspaceFolder: workspaceFolder,
ConfigPath: path,
Status: codersdk.WorkspaceAgentDevcontainerStatusStopped,
Dirty: false, // Updated later based on config file changes.
Container: nil,
}
if api.discoveryAutostart {
config, err := api.dccli.ReadConfig(api.ctx, workspaceFolder, path, []string{})
if err != nil {
logger.Error(api.ctx, "read project configuration", slog.Error(err))
} else if config.Configuration.Customizations.Coder.AutoStart {
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusStarting
}
}
api.knownDevcontainers[workspaceFolder] = dc
api.broadcastUpdatesLocked()
if dc.Status == codersdk.WorkspaceAgentDevcontainerStatusStarting {
api.asyncWg.Add(1)
go func() {
defer api.asyncWg.Done()
_ = api.CreateDevcontainer(dc.WorkspaceFolder, dc.ConfigPath)
}()
}
}
api.mu.Unlock()
}
return nil
})
}
func (api *API) watcherLoop() {
defer close(api.watcherDone)
defer api.logger.Debug(api.ctx, "watcher loop stopped")
@@ -654,7 +449,6 @@ func (api *API) updaterLoop() {
// We utilize a TickerFunc here instead of a regular Ticker so that
// we can guarantee execution of the updateContainers method after
// advancing the clock.
var prevErr error
ticker := api.clock.TickerFunc(api.ctx, api.updateInterval, func() error {
done := make(chan error, 1)
var sent bool
@@ -672,16 +466,12 @@ func (api *API) updaterLoop() {
if err != nil {
if errors.Is(err, context.Canceled) {
api.logger.Warn(api.ctx, "updater loop ticker canceled", slog.Error(err))
return nil
}
// Avoid excessive logging of the same error.
if prevErr == nil || prevErr.Error() != err.Error() {
} else {
api.logger.Error(api.ctx, "updater loop ticker failed", slog.Error(err))
}
prevErr = err
} else {
prevErr = nil
}
default:
api.logger.Debug(api.ctx, "updater loop ticker skipped, update in progress")
}
return nil // Always nil to keep the ticker going.
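The `prevErr` bookkeeping above exists purely to keep a persistent failure from flooding the log on every tick. The same idea as a standalone helper, sketched:

```go
// errDeduper reports whether an error is worth logging: only when its
// message differs from the previous tick's.
type errDeduper struct {
	prev string
}

func (d *errDeduper) shouldLog(err error) bool {
	if err == nil {
		d.prev = ""
		return false
	}
	if err.Error() == d.prev {
		return false // Same failure as last tick; stay quiet.
	}
	d.prev = err.Error()
	return true
}
```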
@@ -738,7 +528,6 @@ func (api *API) Routes() http.Handler {
r.Use(ensureInitDoneMW)
r.Get("/", api.handleList)
r.Get("/watch", api.watchContainers)
// TODO(mafredri): Simplify this route as the previous /devcontainers
// /-route was dropped. We can drop the /devcontainers prefix here too.
r.Route("/devcontainers/{devcontainer}", func(r chi.Router) {
@@ -748,92 +537,6 @@ func (api *API) Routes() http.Handler {
return r
}
func (api *API) broadcastUpdatesLocked() {
// Broadcast state changes to WebSocket listeners.
for _, ch := range api.updateChans {
select {
case ch <- struct{}{}:
default:
}
}
}
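The select-with-default above is a non-blocking fan-out: each watcher owns a buffered channel of size 1, so a slow consumer coalesces notifications instead of stalling the broadcaster. In isolation:

```go
// Non-blocking broadcast over per-subscriber channels, as used by
// broadcastUpdatesLocked above.
func broadcast(subs []chan struct{}) {
	for _, ch := range subs {
		select {
		case ch <- struct{}{}:
		default: // Subscriber already has a pending notification; skip.
		}
	}
}
```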
func (api *API) watchContainers(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
conn, err := websocket.Accept(rw, r, &websocket.AcceptOptions{
// We want `NoContextTakeover` compression to balance improving
// bandwidth cost/latency with minimal memory usage overhead.
CompressionMode: websocket.CompressionNoContextTakeover,
})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to upgrade connection to websocket.",
Detail: err.Error(),
})
return
}
// Here we close the websocket for reading, so that the websocket library will handle pings and
// close frames.
_ = conn.CloseRead(context.Background())
ctx, wsNetConn := codersdk.WebsocketNetConn(ctx, conn, websocket.MessageText)
defer wsNetConn.Close()
go httpapi.Heartbeat(ctx, conn)
updateCh := make(chan struct{}, 1)
api.mu.Lock()
api.updateChans = append(api.updateChans, updateCh)
api.mu.Unlock()
defer func() {
api.mu.Lock()
api.updateChans = slices.DeleteFunc(api.updateChans, func(ch chan struct{}) bool {
return ch == updateCh
})
close(updateCh)
api.mu.Unlock()
}()
encoder := json.NewEncoder(wsNetConn)
ct, err := api.getContainers()
if err != nil {
api.logger.Error(ctx, "unable to get containers", slog.Error(err))
return
}
if err := encoder.Encode(ct); err != nil {
api.logger.Error(ctx, "encode container list", slog.Error(err))
return
}
for {
select {
case <-api.ctx.Done():
return
case <-ctx.Done():
return
case <-updateCh:
ct, err := api.getContainers()
if err != nil {
api.logger.Error(ctx, "unable to get containers", slog.Error(err))
continue
}
if err := encoder.Encode(ct); err != nil {
api.logger.Error(ctx, "encode container list", slog.Error(err))
return
}
}
}
}
// handleList handles the HTTP request to list containers.
func (api *API) handleList(rw http.ResponseWriter, r *http.Request) {
ct, err := api.getContainers()
@@ -873,26 +576,8 @@ func (api *API) updateContainers(ctx context.Context) error {
api.mu.Lock()
defer api.mu.Unlock()
var previouslyKnownDevcontainers map[string]codersdk.WorkspaceAgentDevcontainer
if len(api.updateChans) > 0 {
previouslyKnownDevcontainers = maps.Clone(api.knownDevcontainers)
}
api.processUpdatedContainersLocked(ctx, updated)
if len(api.updateChans) > 0 {
statesAreEqual := maps.EqualFunc(
previouslyKnownDevcontainers,
api.knownDevcontainers,
func(dc1, dc2 codersdk.WorkspaceAgentDevcontainer) bool {
return dc1.Equals(dc2)
})
if !statesAreEqual {
api.broadcastUpdatesLocked()
}
}
api.logger.Debug(ctx, "containers updated successfully", slog.F("container_count", len(api.containers.Containers)), slog.F("warning_count", len(api.containers.Warnings)), slog.F("devcontainer_count", len(api.knownDevcontainers)))
return nil
@@ -941,22 +626,17 @@ func (api *API) processUpdatedContainersLocked(ctx context.Context, updated code
slog.F("config_file", configFile),
)
// If we haven't set any include filters, we should explicitly ignore test devcontainers.
if len(api.containerLabelIncludeFilter) == 0 && container.Labels[DevcontainerIsTestRunLabel] == "true" {
continue
}
// Filter out devcontainer tests, unless explicitly set in include filters.
if len(api.containerLabelIncludeFilter) > 0 {
includeContainer := true
if len(api.containerLabelIncludeFilter) > 0 || container.Labels[DevcontainerIsTestRunLabel] == "true" {
var ok bool
for label, value := range api.containerLabelIncludeFilter {
v, found := container.Labels[label]
includeContainer = includeContainer && (found && v == value)
if v, found := container.Labels[label]; found && v == value {
ok = true
}
}
// Verbose debug logging is fine here since typically filters
// are only used in development or testing environments.
if !includeContainer {
if !ok {
logger.Debug(ctx, "container does not match include filter, ignoring devcontainer", slog.F("container_labels", container.Labels), slog.F("include_filter", api.containerLabelIncludeFilter))
continue
}
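The two versions of this hunk implement different filter semantics: one requires a container to match every configured label, the other accepts a match on any single label, which is what the updated `WithContainerLabelIncludeFilter` doc comment above describes. Side by side, as a sketch:

```go
// labels is the container's label set; filter is the configured include filter.
func matchesAll(labels, filter map[string]string) bool {
	for k, v := range filter {
		if labels[k] != v {
			return false // One miss rejects the container.
		}
	}
	return true
}

func matchesAny(labels, filter map[string]string) bool {
	for k, v := range filter {
		if labels[k] == v {
			return true // A single hit is enough.
		}
	}
	return false
}
```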
@@ -1037,9 +717,6 @@ func (api *API) processUpdatedContainersLocked(ctx context.Context, updated code
err := api.maybeInjectSubAgentIntoContainerLocked(ctx, dc)
if err != nil {
logger.Error(ctx, "inject subagent into container failed", slog.Error(err))
dc.Error = err.Error()
} else {
dc.Error = ""
}
}
@@ -1266,10 +943,7 @@ func (api *API) handleDevcontainerRecreate(w http.ResponseWriter, r *http.Reques
// devcontainer multiple times in parallel.
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusStarting
dc.Container = nil
dc.Error = ""
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.broadcastUpdatesLocked()
go func() {
_ = api.CreateDevcontainer(dc.WorkspaceFolder, dc.ConfigPath, WithRemoveExistingContainer())
}()
@@ -1358,7 +1032,6 @@ func (api *API) CreateDevcontainer(workspaceFolder, configPath string, opts ...D
api.mu.Lock()
dc = api.knownDevcontainers[dc.WorkspaceFolder]
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusError
dc.Error = err.Error()
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.recreateErrorTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "errorTimes")
api.mu.Unlock()
@@ -1382,10 +1055,8 @@ func (api *API) CreateDevcontainer(workspaceFolder, configPath string, opts ...D
}
}
dc.Dirty = false
dc.Error = ""
api.recreateSuccessTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "successTimes")
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.broadcastUpdatesLocked()
api.mu.Unlock()
// Ensure an immediate refresh to accurately reflect the
@@ -1852,9 +1523,7 @@ func (api *API) maybeInjectSubAgentIntoContainerLocked(ctx context.Context, dc c
originalName := subAgentConfig.Name
for attempt := 1; attempt <= maxAttemptsToNameAgent; attempt++ {
agent, err := client.Create(ctx, subAgentConfig)
if err == nil {
proc.agent = agent // Only reassign on success.
if proc.agent, err = client.Create(ctx, subAgentConfig); err == nil {
if api.usingWorkspaceFolderName[dc.WorkspaceFolder] {
api.devcontainerNames[dc.Name] = true
delete(api.usingWorkspaceFolderName, dc.WorkspaceFolder)
@@ -1862,6 +1531,7 @@ func (api *API) maybeInjectSubAgentIntoContainerLocked(ctx context.Context, dc c
break
}
// NOTE(DanielleMaywood):
// Ordinarily we'd use `errors.As` here, but it didn't appear to work. Not
// sure if this is because of the communication protocol? Instead I've opted
@@ -2016,9 +1686,6 @@ func (api *API) Close() error {
if api.updaterDone != nil {
<-api.updaterDone
}
if api.discoverDone != nil {
<-api.discoverDone
}
// Wait for all async tasks to complete.
api.asyncWg.Wait()
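Close follows a common coordinated-shutdown pattern: cancel the lifetime context, wait on per-loop done channels so long-running updaters exit cleanly, then wait on a WaitGroup tracking fire-and-forget goroutines. A stripped-down sketch of the same pattern (field names are illustrative):

```go
package shutdown

import (
	"context"
	"sync"
)

type worker struct {
	cancel      context.CancelFunc
	updaterDone chan struct{} // closed by the updater loop on exit
	asyncWg     sync.WaitGroup
}

func (w *worker) Close() error {
	w.cancel() // signal all loops to stop
	if w.updaterDone != nil {
		<-w.updaterDone // wait for the long-running loop to drain
	}
	w.asyncWg.Wait() // wait for short-lived async tasks
	return nil
}
```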
File diff suppressed because it is too large.
@@ -55,11 +55,11 @@ func TestIntegrationDockerCLI(t *testing.T) {
}, testutil.WaitShort, testutil.IntervalSlow, "Container did not start in time")
dcli := agentcontainers.NewDockerCLI(agentexec.DefaultExecer)
ctx := testutil.Context(t, testutil.WaitMedium) // Longer timeout for multiple subtests
containerName := strings.TrimPrefix(ct.Container.Name, "/")
t.Run("DetectArchitecture", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
arch, err := dcli.DetectArchitecture(ctx, containerName)
require.NoError(t, err, "DetectArchitecture failed")
@@ -71,7 +71,6 @@ func TestIntegrationDockerCLI(t *testing.T) {
t.Run("Copy", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
want := "Help, I'm trapped!"
tempFile := filepath.Join(t.TempDir(), "test-file.txt")
@@ -91,7 +90,6 @@ func TestIntegrationDockerCLI(t *testing.T) {
t.Run("ExecAs", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
// Test ExecAs without specifying user (should use container's default).
want := "root"
+1 -1
@@ -61,7 +61,7 @@ fi
exec 3>&-
# Format the generated code.
go run mvdan.cc/gofumpt@v0.8.0 -w -l "${TMPDIR}/${DEST_FILENAME}"
go run mvdan.cc/gofumpt@v0.4.0 -w -l "${TMPDIR}/${DEST_FILENAME}"
# Add a header so that Go recognizes this as a generated file.
if grep -q -- "\[-i extension\]" < <(sed -h 2>&1); then
+32 -33
@@ -91,7 +91,6 @@ type CoderCustomization struct {
Apps []SubAgentApp `json:"apps,omitempty"`
Name string `json:"name,omitempty"`
Ignore bool `json:"ignore,omitempty"`
AutoStart bool `json:"autoStart,omitempty"`
}
type DevcontainerWorkspace struct {
@@ -107,63 +106,63 @@ type DevcontainerCLI interface {
// DevcontainerCLIUpOptions are options for the devcontainer CLI Up
// command.
type DevcontainerCLIUpOptions func(*DevcontainerCLIUpConfig)
type DevcontainerCLIUpOptions func(*devcontainerCLIUpConfig)
type DevcontainerCLIUpConfig struct {
Args []string // Additional arguments for the Up command.
Stdout io.Writer
Stderr io.Writer
type devcontainerCLIUpConfig struct {
args []string // Additional arguments for the Up command.
stdout io.Writer
stderr io.Writer
}
// WithRemoveExistingContainer is an option to remove the existing
// container.
func WithRemoveExistingContainer() DevcontainerCLIUpOptions {
return func(o *DevcontainerCLIUpConfig) {
o.Args = append(o.Args, "--remove-existing-container")
return func(o *devcontainerCLIUpConfig) {
o.args = append(o.args, "--remove-existing-container")
}
}
// WithUpOutput sets additional stdout and stderr writers for logs
// during Up operations.
func WithUpOutput(stdout, stderr io.Writer) DevcontainerCLIUpOptions {
return func(o *DevcontainerCLIUpConfig) {
o.Stdout = stdout
o.Stderr = stderr
return func(o *devcontainerCLIUpConfig) {
o.stdout = stdout
o.stderr = stderr
}
}
// DevcontainerCLIExecOptions are options for the devcontainer CLI Exec
// command.
type DevcontainerCLIExecOptions func(*DevcontainerCLIExecConfig)
type DevcontainerCLIExecOptions func(*devcontainerCLIExecConfig)
type DevcontainerCLIExecConfig struct {
Args []string // Additional arguments for the Exec command.
Stdout io.Writer
Stderr io.Writer
type devcontainerCLIExecConfig struct {
args []string // Additional arguments for the Exec command.
stdout io.Writer
stderr io.Writer
}
// WithExecOutput sets additional stdout and stderr writers for logs
// during Exec operations.
func WithExecOutput(stdout, stderr io.Writer) DevcontainerCLIExecOptions {
return func(o *DevcontainerCLIExecConfig) {
o.Stdout = stdout
o.Stderr = stderr
return func(o *devcontainerCLIExecConfig) {
o.stdout = stdout
o.stderr = stderr
}
}
// WithExecContainerID sets the container ID to target a specific
// container.
func WithExecContainerID(id string) DevcontainerCLIExecOptions {
return func(o *DevcontainerCLIExecConfig) {
o.Args = append(o.Args, "--container-id", id)
return func(o *devcontainerCLIExecConfig) {
o.args = append(o.args, "--container-id", id)
}
}
// WithRemoteEnv sets environment variables for the Exec command.
func WithRemoteEnv(env ...string) DevcontainerCLIExecOptions {
return func(o *DevcontainerCLIExecConfig) {
return func(o *devcontainerCLIExecConfig) {
for _, e := range env {
o.Args = append(o.Args, "--remote-env", e)
o.args = append(o.args, "--remote-env", e)
}
}
}
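These helpers are the classic functional-options pattern: each `With*` constructor returns a closure that mutates an unexported config struct, and an apply step folds the options over safe defaults. A compact, self-contained illustration of the pattern (names simplified from the code above):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

type upConfig struct {
	args           []string
	stdout, stderr io.Writer
}

type UpOption func(*upConfig)

func WithRemoveExisting() UpOption {
	return func(c *upConfig) { c.args = append(c.args, "--remove-existing-container") }
}

func WithOutput(stdout, stderr io.Writer) UpOption {
	return func(c *upConfig) { c.stdout, c.stderr = stdout, stderr }
}

// apply folds options over defaults; nil options are skipped, matching
// the applyDevcontainerCLIUpOptions behavior shown above.
func apply(opts ...UpOption) upConfig {
	conf := upConfig{stdout: io.Discard, stderr: io.Discard}
	for _, opt := range opts {
		if opt != nil {
			opt(&conf)
		}
	}
	return conf
}

func main() {
	conf := apply(WithRemoveExisting(), WithOutput(os.Stdout, os.Stderr))
	fmt.Println(conf.args) // [--remove-existing-container]
}
```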
@@ -186,8 +185,8 @@ func WithReadConfigOutput(stdout, stderr io.Writer) DevcontainerCLIReadConfigOpt
}
}
func applyDevcontainerCLIUpOptions(opts []DevcontainerCLIUpOptions) DevcontainerCLIUpConfig {
conf := DevcontainerCLIUpConfig{Stdout: io.Discard, Stderr: io.Discard}
func applyDevcontainerCLIUpOptions(opts []DevcontainerCLIUpOptions) devcontainerCLIUpConfig {
conf := devcontainerCLIUpConfig{stdout: io.Discard, stderr: io.Discard}
for _, opt := range opts {
if opt != nil {
opt(&conf)
@@ -196,8 +195,8 @@ func applyDevcontainerCLIUpOptions(opts []DevcontainerCLIUpOptions) Devcontainer
return conf
}
func applyDevcontainerCLIExecOptions(opts []DevcontainerCLIExecOptions) DevcontainerCLIExecConfig {
conf := DevcontainerCLIExecConfig{Stdout: io.Discard, Stderr: io.Discard}
func applyDevcontainerCLIExecOptions(opts []DevcontainerCLIExecOptions) devcontainerCLIExecConfig {
conf := devcontainerCLIExecConfig{stdout: io.Discard, stderr: io.Discard}
for _, opt := range opts {
if opt != nil {
opt(&conf)
@@ -242,7 +241,7 @@ func (d *devcontainerCLI) Up(ctx context.Context, workspaceFolder, configPath st
if configPath != "" {
args = append(args, "--config", configPath)
}
args = append(args, conf.Args...)
args = append(args, conf.args...)
cmd := d.execer.CommandContext(ctx, "devcontainer", args...)
// Capture stdout for parsing and stream logs for both default and provided writers.
@@ -252,14 +251,14 @@ func (d *devcontainerCLI) Up(ctx context.Context, workspaceFolder, configPath st
&devcontainerCLILogWriter{
ctx: ctx,
logger: logger.With(slog.F("stdout", true)),
writer: conf.Stdout,
writer: conf.stdout,
},
)
// Stream stderr logs and provided writer if any.
cmd.Stderr = &devcontainerCLILogWriter{
ctx: ctx,
logger: logger.With(slog.F("stderr", true)),
writer: conf.Stderr,
writer: conf.stderr,
}
if err := cmd.Run(); err != nil {
@@ -294,17 +293,17 @@ func (d *devcontainerCLI) Exec(ctx context.Context, workspaceFolder, configPath
if configPath != "" {
args = append(args, "--config", configPath)
}
args = append(args, conf.Args...)
args = append(args, conf.args...)
args = append(args, cmd)
args = append(args, cmdArgs...)
c := d.execer.CommandContext(ctx, "devcontainer", args...)
c.Stdout = io.MultiWriter(conf.Stdout, &devcontainerCLILogWriter{
c.Stdout = io.MultiWriter(conf.stdout, &devcontainerCLILogWriter{
ctx: ctx,
logger: logger.With(slog.F("stdout", true)),
writer: io.Discard,
})
c.Stderr = io.MultiWriter(conf.Stderr, &devcontainerCLILogWriter{
c.Stderr = io.MultiWriter(conf.stderr, &devcontainerCLILogWriter{
ctx: ctx,
logger: logger.With(slog.F("stderr", true)),
writer: io.Discard,
@@ -593,7 +593,7 @@ func setupDevcontainerWorkspace(t *testing.T, workspaceFolder string) string {
"containerEnv": {
"TEST_CONTAINER": "true"
},
"runArgs": ["--label=com.coder.test=devcontainercli", "--label=` + agentcontainers.DevcontainerIsTestRunLabel + `=true"]
"runArgs": ["--label", "com.coder.test=devcontainercli"]
}`
err = os.WriteFile(configPath, []byte(content), 0o600)
require.NoError(t, err, "create devcontainer.json file")
-124
@@ -1,124 +0,0 @@
package ignore
import (
"bytes"
"context"
"errors"
"io/fs"
"os"
"path/filepath"
"strings"
"github.com/go-git/go-git/v5/plumbing/format/config"
"github.com/go-git/go-git/v5/plumbing/format/gitignore"
"github.com/spf13/afero"
"golang.org/x/xerrors"
"cdr.dev/slog"
)
const (
gitconfigFile = ".gitconfig"
gitignoreFile = ".gitignore"
gitInfoExcludeFile = ".git/info/exclude"
)
func FilePathToParts(path string) []string {
components := []string{}
if path == "" {
return components
}
for segment := range strings.SplitSeq(filepath.Clean(path), string(filepath.Separator)) {
if segment != "" {
components = append(components, segment)
}
}
return components
}
func readIgnoreFile(fileSystem afero.Fs, path, ignore string) ([]gitignore.Pattern, error) {
var ps []gitignore.Pattern
data, err := afero.ReadFile(fileSystem, filepath.Join(path, ignore))
if err != nil && !errors.Is(err, os.ErrNotExist) {
return nil, err
}
for s := range strings.SplitSeq(string(data), "\n") {
if !strings.HasPrefix(s, "#") && len(strings.TrimSpace(s)) > 0 {
ps = append(ps, gitignore.ParsePattern(s, FilePathToParts(path)))
}
}
return ps, nil
}
func ReadPatterns(ctx context.Context, logger slog.Logger, fileSystem afero.Fs, path string) ([]gitignore.Pattern, error) {
var ps []gitignore.Pattern
subPs, err := readIgnoreFile(fileSystem, path, gitInfoExcludeFile)
if err != nil {
return nil, err
}
ps = append(ps, subPs...)
if err := afero.Walk(fileSystem, path, func(path string, info fs.FileInfo, err error) error {
if err != nil {
logger.Error(ctx, "encountered error while walking for git ignore files",
slog.F("path", path),
slog.Error(err))
return nil
}
if !info.IsDir() {
return nil
}
subPs, err := readIgnoreFile(fileSystem, path, gitignoreFile)
if err != nil {
return err
}
ps = append(ps, subPs...)
return nil
}); err != nil {
return nil, err
}
return ps, nil
}
func loadPatterns(fileSystem afero.Fs, path string) ([]gitignore.Pattern, error) {
data, err := afero.ReadFile(fileSystem, path)
if err != nil && !errors.Is(err, os.ErrNotExist) {
return nil, err
}
decoder := config.NewDecoder(bytes.NewBuffer(data))
conf := config.New()
if err := decoder.Decode(conf); err != nil {
return nil, xerrors.Errorf("decode config: %w", err)
}
excludes := conf.Section("core").Options.Get("excludesfile")
if excludes == "" {
return nil, nil
}
return readIgnoreFile(fileSystem, "", excludes)
}
func LoadGlobalPatterns(fileSystem afero.Fs) ([]gitignore.Pattern, error) {
home, err := os.UserHomeDir()
if err != nil {
return nil, err
}
return loadPatterns(fileSystem, filepath.Join(home, gitconfigFile))
}
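Patterns collected by this package were consumed through go-git's gitignore matcher; `Match` takes the path pre-split into components, which is what `FilePathToParts` produces. A short sketch of that consumption (the pattern strings are illustrative):

```go
package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/plumbing/format/gitignore"
)

func main() {
	ps := []gitignore.Pattern{
		gitignore.ParsePattern("node_modules/", nil), // trailing slash: directory-only
		gitignore.ParsePattern("*.log", nil),
	}
	m := gitignore.NewMatcher(ps)

	fmt.Println(m.Match([]string{"node_modules"}, true))      // true
	fmt.Println(m.Match([]string{"app", "debug.log"}, false)) // true
	fmt.Println(m.Match([]string{"main.go"}, false))          // false
}
```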
-38
@@ -1,38 +0,0 @@
package ignore_test
import (
"fmt"
"testing"
"github.com/stretchr/testify/require"
"github.com/coder/coder/v2/agent/agentcontainers/ignore"
)
func TestFilePathToParts(t *testing.T) {
t.Parallel()
tests := []struct {
path string
expected []string
}{
{"", []string{}},
{"/", []string{}},
{"foo", []string{"foo"}},
{"/foo", []string{"foo"}},
{"./foo/bar", []string{"foo", "bar"}},
{"../foo/bar", []string{"..", "foo", "bar"}},
{"foo/bar/baz", []string{"foo", "bar", "baz"}},
{"/foo/bar/baz", []string{"foo", "bar", "baz"}},
{"foo/../bar", []string{"bar"}},
}
for _, tt := range tests {
t.Run(fmt.Sprintf("`%s`", tt.path), func(t *testing.T) {
t.Parallel()
parts := ignore.FilePathToParts(tt.path)
require.Equal(t, tt.expected, parts)
})
}
}
+8 -16
@@ -188,7 +188,7 @@ func (a *subAgentAPIClient) List(ctx context.Context) ([]SubAgent, error) {
return agents, nil
}
func (a *subAgentAPIClient) Create(ctx context.Context, agent SubAgent) (_ SubAgent, err error) {
func (a *subAgentAPIClient) Create(ctx context.Context, agent SubAgent) (SubAgent, error) {
a.logger.Debug(ctx, "creating sub agent", slog.F("name", agent.Name), slog.F("directory", agent.Directory))
displayApps := make([]agentproto.CreateSubAgentRequest_DisplayApp, 0, len(agent.DisplayApps))
@@ -233,27 +233,19 @@ func (a *subAgentAPIClient) Create(ctx context.Context, agent SubAgent) (_ SubAg
if err != nil {
return SubAgent{}, err
}
defer func() {
if err != nil {
// Best effort.
_, _ = a.api.DeleteSubAgent(ctx, &agentproto.DeleteSubAgentRequest{
Id: resp.GetAgent().GetId(),
})
}
}()
agent.Name = resp.GetAgent().GetName()
agent.ID, err = uuid.FromBytes(resp.GetAgent().GetId())
agent.Name = resp.Agent.Name
agent.ID, err = uuid.FromBytes(resp.Agent.Id)
if err != nil {
return SubAgent{}, err
return agent, err
}
agent.AuthToken, err = uuid.FromBytes(resp.GetAgent().GetAuthToken())
agent.AuthToken, err = uuid.FromBytes(resp.Agent.AuthToken)
if err != nil {
return SubAgent{}, err
return agent, err
}
for _, appError := range resp.GetAppCreationErrors() {
app := apps[appError.GetIndex()]
for _, appError := range resp.AppCreationErrors {
app := apps[appError.Index]
a.logger.Warn(ctx, "unable to create app",
slog.F("agent_name", agent.Name),
+15 -26
@@ -4,7 +4,6 @@ import (
"context"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/fsnotify/fsnotify"
@@ -89,34 +88,24 @@ func TestFSNotifyWatcher(t *testing.T) {
break
}
// TODO(DanielleMaywood):
// Unfortunately it appears this atomic-rename phase of the test is flaky on macOS.
//
// This test flake could be indicative of an issue that may present itself
// in a running environment. Fortunately, we only use this (as of 2025-07-29)
// for our dev container integration. We do not expect the host workspace
// (where this is used) to ever run on macOS, as containers are a Linux
// paradigm.
if runtime.GOOS != "darwin" {
err = os.WriteFile(testFile+".atomic", []byte(`{"test": "atomic"}`), 0o600)
require.NoError(t, err, "write new atomic test file failed")
err = os.WriteFile(testFile+".atomic", []byte(`{"test": "atomic"}`), 0o600)
require.NoError(t, err, "write new atomic test file failed")
err = os.Rename(testFile+".atomic", testFile)
require.NoError(t, err, "rename atomic test file failed")
err = os.Rename(testFile+".atomic", testFile)
require.NoError(t, err, "rename atomic test file failed")
// Verify that we receive the event we want.
for {
event, err := wut.Next(ctx)
require.NoError(t, err, "next event failed")
require.NotNil(t, event, "want non-nil event")
if !event.Has(fsnotify.Create) {
t.Logf("Ignoring event: %s", event)
continue
}
require.Truef(t, event.Has(fsnotify.Create), "want create event: %s", event.String())
require.Equal(t, event.Name, testFile, "want event for test file")
break
// Verify that we receive the event we want.
for {
event, err := wut.Next(ctx)
require.NoError(t, err, "next event failed")
require.NotNil(t, event, "want non-nil event")
if !event.Has(fsnotify.Create) {
t.Logf("Ignoring event: %s", event)
continue
}
require.Truef(t, event.Has(fsnotify.Create), "want create event: %s", event.String())
require.Equal(t, event.Name, testFile, "want event for test file")
break
}
// Test removing the file from the watcher.
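The test relies on a property of fsnotify worth calling out: an atomic replace (write to a temporary file, then rename it over the target) surfaces as a Create event for the destination path. A minimal watcher that observes this (paths illustrative; the Errors channel is omitted for brevity):

```go
package main

import (
	"log"
	"os"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add(os.TempDir()); err != nil {
		log.Fatal(err)
	}

	// A rename into the watched directory is reported as Create for the
	// destination, which is the event the test above waits for.
	for ev := range w.Events {
		if ev.Has(fsnotify.Create) {
			log.Printf("created: %s", ev.Name)
		}
	}
}
```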
+2
@@ -149,6 +149,7 @@ func (r *Runner) Init(scripts []codersdk.WorkspaceAgentScript, scriptCompleted S
if script.Cron == "" {
continue
}
script := script
_, err := r.cron.AddFunc(script.Cron, func() {
err := r.trackRun(r.cronCtx, script, ExecuteCronScripts)
if err != nil {
@@ -223,6 +224,7 @@ func (r *Runner) Execute(ctx context.Context, option ExecuteOption) error {
continue
}
script := script
eg.Go(func() error {
err := r.trackRun(ctx, script, option)
if err != nil {
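The `script := script` lines added here re-declare the loop variable inside the body so each closure captures its own copy. Before Go 1.22, every iteration of a range loop shared a single variable, so closures registered with cron or an errgroup would all observe the final element. A self-contained demonstration:

```go
package main

import "fmt"

func main() {
	funcs := make([]func(), 0, 3)
	for _, s := range []string{"a", "b", "c"} {
		s := s // shadow the loop variable so each closure gets its own copy
		funcs = append(funcs, func() { fmt.Println(s) })
	}
	for _, f := range funcs {
		f() // prints a, b, c; without the shadow, pre-Go 1.22 prints c, c, c
	}
}
```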
-146
@@ -1,146 +0,0 @@
package agentsocket
import (
"context"
"golang.org/x/xerrors"
"storj.io/drpc"
"storj.io/drpc/drpcconn"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
)
// Option represents a configuration option for NewClient.
type Option func(*options)
type options struct {
path string
}
// WithPath sets the socket path. If not provided or empty, the client will
// auto-discover the default socket path.
func WithPath(path string) Option {
return func(opts *options) {
if path == "" {
return
}
opts.path = path
}
}
// Client provides a client for communicating with the workspace agentsocket API.
type Client struct {
client proto.DRPCAgentSocketClient
conn drpc.Conn
}
// NewClient creates a new socket client and opens a connection to the socket.
// If path is not provided via WithPath or is empty, it will auto-discover the
// default socket path.
func NewClient(ctx context.Context, opts ...Option) (*Client, error) {
options := &options{}
for _, opt := range opts {
opt(options)
}
conn, err := dialSocket(ctx, options.path)
if err != nil {
return nil, xerrors.Errorf("connect to socket: %w", err)
}
drpcConn := drpcconn.New(conn)
client := proto.NewDRPCAgentSocketClient(drpcConn)
return &Client{
client: client,
conn: drpcConn,
}, nil
}
// Close closes the socket connection.
func (c *Client) Close() error {
return c.conn.Close()
}
// Ping sends a ping request to the agent.
func (c *Client) Ping(ctx context.Context) error {
_, err := c.client.Ping(ctx, &proto.PingRequest{})
return err
}
// SyncStart starts a unit in the dependency graph.
func (c *Client) SyncStart(ctx context.Context, unitName unit.ID) error {
_, err := c.client.SyncStart(ctx, &proto.SyncStartRequest{
Unit: string(unitName),
})
return err
}
// SyncWant declares a dependency between units.
func (c *Client) SyncWant(ctx context.Context, unitName, dependsOn unit.ID) error {
_, err := c.client.SyncWant(ctx, &proto.SyncWantRequest{
Unit: string(unitName),
DependsOn: string(dependsOn),
})
return err
}
// SyncComplete marks a unit as complete in the dependency graph.
func (c *Client) SyncComplete(ctx context.Context, unitName unit.ID) error {
_, err := c.client.SyncComplete(ctx, &proto.SyncCompleteRequest{
Unit: string(unitName),
})
return err
}
// SyncReady reports whether a unit is ready to be started, i.e. whether all of its dependencies are satisfied.
func (c *Client) SyncReady(ctx context.Context, unitName unit.ID) (bool, error) {
resp, err := c.client.SyncReady(ctx, &proto.SyncReadyRequest{
Unit: string(unitName),
})
	if err != nil {
		return false, err
	}
	return resp.Ready, nil
}
// SyncStatus gets the status of a unit and its dependencies.
func (c *Client) SyncStatus(ctx context.Context, unitName unit.ID) (SyncStatusResponse, error) {
resp, err := c.client.SyncStatus(ctx, &proto.SyncStatusRequest{
Unit: string(unitName),
})
if err != nil {
return SyncStatusResponse{}, err
}
var dependencies []DependencyInfo
for _, dep := range resp.Dependencies {
dependencies = append(dependencies, DependencyInfo{
DependsOn: unit.ID(dep.DependsOn),
RequiredStatus: unit.Status(dep.RequiredStatus),
CurrentStatus: unit.Status(dep.CurrentStatus),
IsSatisfied: dep.IsSatisfied,
})
}
return SyncStatusResponse{
UnitName: unitName,
Status: unit.Status(resp.Status),
IsReady: resp.IsReady,
Dependencies: dependencies,
}, nil
}
// SyncStatusResponse contains the status information for a unit.
type SyncStatusResponse struct {
UnitName unit.ID `table:"unit,default_sort" json:"unit_name"`
Status unit.Status `table:"status" json:"status"`
IsReady bool `table:"ready" json:"is_ready"`
Dependencies []DependencyInfo `table:"dependencies" json:"dependencies"`
}
// DependencyInfo contains information about a unit dependency.
type DependencyInfo struct {
DependsOn unit.ID `table:"depends on,default_sort" json:"depends_on"`
RequiredStatus unit.Status `table:"required status" json:"required_status"`
CurrentStatus unit.Status `table:"current status" json:"current_status"`
IsSatisfied bool `table:"satisfied" json:"is_satisfied"`
}
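For reference, usage of this client (as it existed before this change) would have looked roughly like the sketch below; the socket path is illustrative, and omitting WithPath falls back to auto-discovery as documented above:

```go
package main

import (
	"context"
	"log"

	"github.com/coder/coder/v2/agent/agentsocket"
)

func main() {
	ctx := context.Background()

	// WithPath is optional; omitting it auto-discovers the default socket.
	c, err := agentsocket.NewClient(ctx, agentsocket.WithPath("/tmp/agent.sock"))
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	if err := c.Ping(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("agent is alive")
}
```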
-968
@@ -1,968 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.30.0
// protoc v4.23.4
// source: agent/agentsocket/proto/agentsocket.proto
package proto
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type PingRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *PingRequest) Reset() {
*x = PingRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PingRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PingRequest) ProtoMessage() {}
func (x *PingRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PingRequest.ProtoReflect.Descriptor instead.
func (*PingRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{0}
}
type PingResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *PingResponse) Reset() {
*x = PingResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PingResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PingResponse) ProtoMessage() {}
func (x *PingResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PingResponse.ProtoReflect.Descriptor instead.
func (*PingResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{1}
}
type SyncStartRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncStartRequest) Reset() {
*x = SyncStartRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStartRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStartRequest) ProtoMessage() {}
func (x *SyncStartRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStartRequest.ProtoReflect.Descriptor instead.
func (*SyncStartRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{2}
}
func (x *SyncStartRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncStartResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncStartResponse) Reset() {
*x = SyncStartResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStartResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStartResponse) ProtoMessage() {}
func (x *SyncStartResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStartResponse.ProtoReflect.Descriptor instead.
func (*SyncStartResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{3}
}
type SyncWantRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
}
func (x *SyncWantRequest) Reset() {
*x = SyncWantRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncWantRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncWantRequest) ProtoMessage() {}
func (x *SyncWantRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncWantRequest.ProtoReflect.Descriptor instead.
func (*SyncWantRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{4}
}
func (x *SyncWantRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
func (x *SyncWantRequest) GetDependsOn() string {
if x != nil {
return x.DependsOn
}
return ""
}
type SyncWantResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncWantResponse) Reset() {
*x = SyncWantResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncWantResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncWantResponse) ProtoMessage() {}
func (x *SyncWantResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncWantResponse.ProtoReflect.Descriptor instead.
func (*SyncWantResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{5}
}
type SyncCompleteRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncCompleteRequest) Reset() {
*x = SyncCompleteRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncCompleteRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncCompleteRequest) ProtoMessage() {}
func (x *SyncCompleteRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncCompleteRequest.ProtoReflect.Descriptor instead.
func (*SyncCompleteRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{6}
}
func (x *SyncCompleteRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncCompleteResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *SyncCompleteResponse) Reset() {
*x = SyncCompleteResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncCompleteResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncCompleteResponse) ProtoMessage() {}
func (x *SyncCompleteResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncCompleteResponse.ProtoReflect.Descriptor instead.
func (*SyncCompleteResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{7}
}
type SyncReadyRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncReadyRequest) Reset() {
*x = SyncReadyRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncReadyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncReadyRequest) ProtoMessage() {}
func (x *SyncReadyRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncReadyRequest.ProtoReflect.Descriptor instead.
func (*SyncReadyRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{8}
}
func (x *SyncReadyRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type SyncReadyResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Ready bool `protobuf:"varint,1,opt,name=ready,proto3" json:"ready,omitempty"`
}
func (x *SyncReadyResponse) Reset() {
*x = SyncReadyResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncReadyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncReadyResponse) ProtoMessage() {}
func (x *SyncReadyResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncReadyResponse.ProtoReflect.Descriptor instead.
func (*SyncReadyResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{9}
}
func (x *SyncReadyResponse) GetReady() bool {
if x != nil {
return x.Ready
}
return false
}
type SyncStatusRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
}
func (x *SyncStatusRequest) Reset() {
*x = SyncStatusRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStatusRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStatusRequest) ProtoMessage() {}
func (x *SyncStatusRequest) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStatusRequest.ProtoReflect.Descriptor instead.
func (*SyncStatusRequest) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{10}
}
func (x *SyncStatusRequest) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
type DependencyInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Unit string `protobuf:"bytes,1,opt,name=unit,proto3" json:"unit,omitempty"`
DependsOn string `protobuf:"bytes,2,opt,name=depends_on,json=dependsOn,proto3" json:"depends_on,omitempty"`
RequiredStatus string `protobuf:"bytes,3,opt,name=required_status,json=requiredStatus,proto3" json:"required_status,omitempty"`
CurrentStatus string `protobuf:"bytes,4,opt,name=current_status,json=currentStatus,proto3" json:"current_status,omitempty"`
IsSatisfied bool `protobuf:"varint,5,opt,name=is_satisfied,json=isSatisfied,proto3" json:"is_satisfied,omitempty"`
}
func (x *DependencyInfo) Reset() {
*x = DependencyInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DependencyInfo) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DependencyInfo) ProtoMessage() {}
func (x *DependencyInfo) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DependencyInfo.ProtoReflect.Descriptor instead.
func (*DependencyInfo) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{11}
}
func (x *DependencyInfo) GetUnit() string {
if x != nil {
return x.Unit
}
return ""
}
func (x *DependencyInfo) GetDependsOn() string {
if x != nil {
return x.DependsOn
}
return ""
}
func (x *DependencyInfo) GetRequiredStatus() string {
if x != nil {
return x.RequiredStatus
}
return ""
}
func (x *DependencyInfo) GetCurrentStatus() string {
if x != nil {
return x.CurrentStatus
}
return ""
}
func (x *DependencyInfo) GetIsSatisfied() bool {
if x != nil {
return x.IsSatisfied
}
return false
}
type SyncStatusResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Status string `protobuf:"bytes,1,opt,name=status,proto3" json:"status,omitempty"`
IsReady bool `protobuf:"varint,2,opt,name=is_ready,json=isReady,proto3" json:"is_ready,omitempty"`
Dependencies []*DependencyInfo `protobuf:"bytes,3,rep,name=dependencies,proto3" json:"dependencies,omitempty"`
}
func (x *SyncStatusResponse) Reset() {
*x = SyncStatusResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SyncStatusResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SyncStatusResponse) ProtoMessage() {}
func (x *SyncStatusResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SyncStatusResponse.ProtoReflect.Descriptor instead.
func (*SyncStatusResponse) Descriptor() ([]byte, []int) {
return file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP(), []int{12}
}
func (x *SyncStatusResponse) GetStatus() string {
if x != nil {
return x.Status
}
return ""
}
func (x *SyncStatusResponse) GetIsReady() bool {
if x != nil {
return x.IsReady
}
return false
}
func (x *SyncStatusResponse) GetDependencies() []*DependencyInfo {
if x != nil {
return x.Dependencies
}
return nil
}
var File_agent_agentsocket_proto_agentsocket_proto protoreflect.FileDescriptor
var file_agent_agentsocket_proto_agentsocket_proto_rawDesc = []byte{
0x0a, 0x29, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73,
0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x14, 0x63, 0x6f, 0x64,
0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76,
0x31, 0x22, 0x0d, 0x0a, 0x0b, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x22, 0x0e, 0x0a, 0x0c, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x22, 0x26, 0x0a, 0x10, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01,
0x28, 0x09, 0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x22, 0x13, 0x0a, 0x11, 0x53, 0x79, 0x6e, 0x63,
0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x44, 0x0a,
0x0f, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
0x75, 0x6e, 0x69, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x5f,
0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64,
0x73, 0x4f, 0x6e, 0x22, 0x12, 0x0a, 0x10, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x29, 0x0a, 0x13, 0x53, 0x79, 0x6e, 0x63, 0x43,
0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12,
0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e,
0x69, 0x74, 0x22, 0x16, 0x0a, 0x14, 0x53, 0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65,
0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x26, 0x0a, 0x10, 0x53, 0x79,
0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x12,
0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e,
0x69, 0x74, 0x22, 0x29, 0x0a, 0x11, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x72, 0x65, 0x61, 0x64, 0x79,
0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x05, 0x72, 0x65, 0x61, 0x64, 0x79, 0x22, 0x27, 0x0a,
0x11, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09,
0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x22, 0xb6, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x70, 0x65, 0x6e,
0x64, 0x65, 0x6e, 0x63, 0x79, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x12, 0x0a, 0x04, 0x75, 0x6e, 0x69,
0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x75, 0x6e, 0x69, 0x74, 0x12, 0x1d, 0x0a,
0x0a, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x5f, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28,
0x09, 0x52, 0x09, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x73, 0x4f, 0x6e, 0x12, 0x27, 0x0a, 0x0f,
0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18,
0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0e, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x53,
0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x25, 0x0a, 0x0e, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74,
0x5f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x63,
0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x21, 0x0a, 0x0c,
0x69, 0x73, 0x5f, 0x73, 0x61, 0x74, 0x69, 0x73, 0x66, 0x69, 0x65, 0x64, 0x18, 0x05, 0x20, 0x01,
0x28, 0x08, 0x52, 0x0b, 0x69, 0x73, 0x53, 0x61, 0x74, 0x69, 0x73, 0x66, 0x69, 0x65, 0x64, 0x22,
0x91, 0x01, 0x0a, 0x12, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x19,
0x0a, 0x08, 0x69, 0x73, 0x5f, 0x72, 0x65, 0x61, 0x64, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x08,
0x52, 0x07, 0x69, 0x73, 0x52, 0x65, 0x61, 0x64, 0x79, 0x12, 0x48, 0x0a, 0x0c, 0x64, 0x65, 0x70,
0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63, 0x69, 0x65, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32,
0x24, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63,
0x79, 0x49, 0x6e, 0x66, 0x6f, 0x52, 0x0c, 0x64, 0x65, 0x70, 0x65, 0x6e, 0x64, 0x65, 0x6e, 0x63,
0x69, 0x65, 0x73, 0x32, 0xbb, 0x04, 0x0a, 0x0b, 0x41, 0x67, 0x65, 0x6e, 0x74, 0x53, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x12, 0x4d, 0x0a, 0x04, 0x50, 0x69, 0x6e, 0x67, 0x12, 0x21, 0x2e, 0x63, 0x6f,
0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e,
0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22,
0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b,
0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x50, 0x69, 0x6e, 0x67, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x5c, 0x0a, 0x09, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x12,
0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x72, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x59, 0x0a, 0x08, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x12, 0x25, 0x2e, 0x63,
0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74,
0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x57, 0x61, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e,
0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x57,
0x61, 0x6e, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x65, 0x0a, 0x0c, 0x53,
0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x12, 0x29, 0x2e, 0x63, 0x6f,
0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e,
0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2a, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61,
0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79,
0x6e, 0x63, 0x43, 0x6f, 0x6d, 0x70, 0x6c, 0x65, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x5c, 0x0a, 0x09, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x12,
0x26, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63,
0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x52, 0x65, 0x61, 0x64, 0x79, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x5f, 0x0a, 0x0a, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x27,
0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b,
0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53, 0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x28, 0x2e, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2e,
0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74, 0x2e, 0x76, 0x31, 0x2e, 0x53,
0x79, 0x6e, 0x63, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
0x65, 0x42, 0x33, 0x5a, 0x31, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x63, 0x6f, 0x64, 0x65, 0x72, 0x2f, 0x63, 0x6f, 0x64, 0x65, 0x72, 0x2f, 0x76, 0x32, 0x2f, 0x61,
0x67, 0x65, 0x6e, 0x74, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x73, 0x6f, 0x63, 0x6b, 0x65, 0x74,
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_agent_agentsocket_proto_agentsocket_proto_rawDescOnce sync.Once
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = file_agent_agentsocket_proto_agentsocket_proto_rawDesc
)
func file_agent_agentsocket_proto_agentsocket_proto_rawDescGZIP() []byte {
file_agent_agentsocket_proto_agentsocket_proto_rawDescOnce.Do(func() {
file_agent_agentsocket_proto_agentsocket_proto_rawDescData = protoimpl.X.CompressGZIP(file_agent_agentsocket_proto_agentsocket_proto_rawDescData)
})
return file_agent_agentsocket_proto_agentsocket_proto_rawDescData
}
var file_agent_agentsocket_proto_agentsocket_proto_msgTypes = make([]protoimpl.MessageInfo, 13)
var file_agent_agentsocket_proto_agentsocket_proto_goTypes = []interface{}{
(*PingRequest)(nil), // 0: coder.agentsocket.v1.PingRequest
(*PingResponse)(nil), // 1: coder.agentsocket.v1.PingResponse
(*SyncStartRequest)(nil), // 2: coder.agentsocket.v1.SyncStartRequest
(*SyncStartResponse)(nil), // 3: coder.agentsocket.v1.SyncStartResponse
(*SyncWantRequest)(nil), // 4: coder.agentsocket.v1.SyncWantRequest
(*SyncWantResponse)(nil), // 5: coder.agentsocket.v1.SyncWantResponse
(*SyncCompleteRequest)(nil), // 6: coder.agentsocket.v1.SyncCompleteRequest
(*SyncCompleteResponse)(nil), // 7: coder.agentsocket.v1.SyncCompleteResponse
(*SyncReadyRequest)(nil), // 8: coder.agentsocket.v1.SyncReadyRequest
(*SyncReadyResponse)(nil), // 9: coder.agentsocket.v1.SyncReadyResponse
(*SyncStatusRequest)(nil), // 10: coder.agentsocket.v1.SyncStatusRequest
(*DependencyInfo)(nil), // 11: coder.agentsocket.v1.DependencyInfo
(*SyncStatusResponse)(nil), // 12: coder.agentsocket.v1.SyncStatusResponse
}
var file_agent_agentsocket_proto_agentsocket_proto_depIdxs = []int32{
11, // 0: coder.agentsocket.v1.SyncStatusResponse.dependencies:type_name -> coder.agentsocket.v1.DependencyInfo
0, // 1: coder.agentsocket.v1.AgentSocket.Ping:input_type -> coder.agentsocket.v1.PingRequest
2, // 2: coder.agentsocket.v1.AgentSocket.SyncStart:input_type -> coder.agentsocket.v1.SyncStartRequest
4, // 3: coder.agentsocket.v1.AgentSocket.SyncWant:input_type -> coder.agentsocket.v1.SyncWantRequest
6, // 4: coder.agentsocket.v1.AgentSocket.SyncComplete:input_type -> coder.agentsocket.v1.SyncCompleteRequest
8, // 5: coder.agentsocket.v1.AgentSocket.SyncReady:input_type -> coder.agentsocket.v1.SyncReadyRequest
10, // 6: coder.agentsocket.v1.AgentSocket.SyncStatus:input_type -> coder.agentsocket.v1.SyncStatusRequest
1, // 7: coder.agentsocket.v1.AgentSocket.Ping:output_type -> coder.agentsocket.v1.PingResponse
3, // 8: coder.agentsocket.v1.AgentSocket.SyncStart:output_type -> coder.agentsocket.v1.SyncStartResponse
5, // 9: coder.agentsocket.v1.AgentSocket.SyncWant:output_type -> coder.agentsocket.v1.SyncWantResponse
7, // 10: coder.agentsocket.v1.AgentSocket.SyncComplete:output_type -> coder.agentsocket.v1.SyncCompleteResponse
9, // 11: coder.agentsocket.v1.AgentSocket.SyncReady:output_type -> coder.agentsocket.v1.SyncReadyResponse
12, // 12: coder.agentsocket.v1.AgentSocket.SyncStatus:output_type -> coder.agentsocket.v1.SyncStatusResponse
7, // [7:13] is the sub-list for method output_type
1, // [1:7] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_agent_agentsocket_proto_agentsocket_proto_init() }
func file_agent_agentsocket_proto_agentsocket_proto_init() {
if File_agent_agentsocket_proto_agentsocket_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PingRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PingResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStartRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStartResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncWantRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncWantResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncCompleteRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncCompleteResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncReadyRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncReadyResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStatusRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DependencyInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_agentsocket_proto_agentsocket_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SyncStatusResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_agent_agentsocket_proto_agentsocket_proto_rawDesc,
NumEnums: 0,
NumMessages: 13,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_agent_agentsocket_proto_agentsocket_proto_goTypes,
DependencyIndexes: file_agent_agentsocket_proto_agentsocket_proto_depIdxs,
MessageInfos: file_agent_agentsocket_proto_agentsocket_proto_msgTypes,
}.Build()
File_agent_agentsocket_proto_agentsocket_proto = out.File
file_agent_agentsocket_proto_agentsocket_proto_rawDesc = nil
file_agent_agentsocket_proto_agentsocket_proto_goTypes = nil
file_agent_agentsocket_proto_agentsocket_proto_depIdxs = nil
}
-69
@@ -1,69 +0,0 @@
syntax = "proto3";
option go_package = "github.com/coder/coder/v2/agent/agentsocket/proto";
package coder.agentsocket.v1;
message PingRequest {}
message PingResponse {}
message SyncStartRequest {
string unit = 1;
}
message SyncStartResponse {}
message SyncWantRequest {
string unit = 1;
string depends_on = 2;
}
message SyncWantResponse {}
message SyncCompleteRequest {
string unit = 1;
}
message SyncCompleteResponse {}
message SyncReadyRequest {
string unit = 1;
}
message SyncReadyResponse {
bool ready = 1;
}
message SyncStatusRequest {
string unit = 1;
}
message DependencyInfo {
string unit = 1;
string depends_on = 2;
string required_status = 3;
string current_status = 4;
bool is_satisfied = 5;
}
message SyncStatusResponse {
string status = 1;
bool is_ready = 2;
repeated DependencyInfo dependencies = 3;
}
// AgentSocket provides direct access to the agent over local IPC.
service AgentSocket {
// Ping the agent to check if it is alive.
rpc Ping(PingRequest) returns (PingResponse);
// Report the start of a unit.
rpc SyncStart(SyncStartRequest) returns (SyncStartResponse);
// Declare a dependency between units.
rpc SyncWant(SyncWantRequest) returns (SyncWantResponse);
// Report the completion of a unit.
rpc SyncComplete(SyncCompleteRequest) returns (SyncCompleteResponse);
// Report whether a unit is ready to be started. That is, all dependencies are satisfied.
rpc SyncReady(SyncReadyRequest) returns (SyncReadyResponse);
// Get the status of a unit and list its dependencies.
rpc SyncStatus(SyncStatusRequest) returns (SyncStatusResponse);
}
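protoc-gen-go-drpc turns this service definition into the DRPC client and server interfaces shown in the next file. A rough sketch of the server-side wiring such generated code supports (the socket path is illustrative, and `DRPCRegisterAgentSocket` is the registration helper drpc codegen conventionally emits, below the truncation point shown here):

```go
package main

import (
	"context"
	"log"
	"net"

	"storj.io/drpc/drpcmux"
	"storj.io/drpc/drpcserver"

	"github.com/coder/coder/v2/agent/agentsocket/proto"
)

// server embeds the Unimplemented stub so unhandled RPCs return a
// drpc Unimplemented error, and overrides only Ping.
type server struct {
	proto.DRPCAgentSocketUnimplementedServer
}

func (*server) Ping(context.Context, *proto.PingRequest) (*proto.PingResponse, error) {
	return &proto.PingResponse{}, nil
}

func main() {
	mux := drpcmux.New()
	if err := proto.DRPCRegisterAgentSocket(mux, &server{}); err != nil {
		log.Fatal(err)
	}
	lis, err := net.Listen("unix", "/tmp/agent.sock")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(drpcserver.New(mux).Serve(context.Background(), lis))
}
```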
@@ -1,311 +0,0 @@
// Code generated by protoc-gen-go-drpc. DO NOT EDIT.
// protoc-gen-go-drpc version: v0.0.34
// source: agent/agentsocket/proto/agentsocket.proto
package proto
import (
context "context"
errors "errors"
protojson "google.golang.org/protobuf/encoding/protojson"
proto "google.golang.org/protobuf/proto"
drpc "storj.io/drpc"
drpcerr "storj.io/drpc/drpcerr"
)
type drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto struct{}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) Marshal(msg drpc.Message) ([]byte, error) {
return proto.Marshal(msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) MarshalAppend(buf []byte, msg drpc.Message) ([]byte, error) {
return proto.MarshalOptions{}.MarshalAppend(buf, msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) Unmarshal(buf []byte, msg drpc.Message) error {
return proto.Unmarshal(buf, msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) JSONMarshal(msg drpc.Message) ([]byte, error) {
return protojson.Marshal(msg.(proto.Message))
}
func (drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto) JSONUnmarshal(buf []byte, msg drpc.Message) error {
return protojson.Unmarshal(buf, msg.(proto.Message))
}
type DRPCAgentSocketClient interface {
DRPCConn() drpc.Conn
Ping(ctx context.Context, in *PingRequest) (*PingResponse, error)
SyncStart(ctx context.Context, in *SyncStartRequest) (*SyncStartResponse, error)
SyncWant(ctx context.Context, in *SyncWantRequest) (*SyncWantResponse, error)
SyncComplete(ctx context.Context, in *SyncCompleteRequest) (*SyncCompleteResponse, error)
SyncReady(ctx context.Context, in *SyncReadyRequest) (*SyncReadyResponse, error)
SyncStatus(ctx context.Context, in *SyncStatusRequest) (*SyncStatusResponse, error)
}
type drpcAgentSocketClient struct {
cc drpc.Conn
}
func NewDRPCAgentSocketClient(cc drpc.Conn) DRPCAgentSocketClient {
return &drpcAgentSocketClient{cc}
}
func (c *drpcAgentSocketClient) DRPCConn() drpc.Conn { return c.cc }
func (c *drpcAgentSocketClient) Ping(ctx context.Context, in *PingRequest) (*PingResponse, error) {
out := new(PingResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/Ping", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncStart(ctx context.Context, in *SyncStartRequest) (*SyncStartResponse, error) {
out := new(SyncStartResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncStart", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncWant(ctx context.Context, in *SyncWantRequest) (*SyncWantResponse, error) {
out := new(SyncWantResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncWant", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncComplete(ctx context.Context, in *SyncCompleteRequest) (*SyncCompleteResponse, error) {
out := new(SyncCompleteResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncComplete", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncReady(ctx context.Context, in *SyncReadyRequest) (*SyncReadyResponse, error) {
out := new(SyncReadyResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncReady", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
func (c *drpcAgentSocketClient) SyncStatus(ctx context.Context, in *SyncStatusRequest) (*SyncStatusResponse, error) {
out := new(SyncStatusResponse)
err := c.cc.Invoke(ctx, "/coder.agentsocket.v1.AgentSocket/SyncStatus", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}, in, out)
if err != nil {
return nil, err
}
return out, nil
}
type DRPCAgentSocketServer interface {
Ping(context.Context, *PingRequest) (*PingResponse, error)
SyncStart(context.Context, *SyncStartRequest) (*SyncStartResponse, error)
SyncWant(context.Context, *SyncWantRequest) (*SyncWantResponse, error)
SyncComplete(context.Context, *SyncCompleteRequest) (*SyncCompleteResponse, error)
SyncReady(context.Context, *SyncReadyRequest) (*SyncReadyResponse, error)
SyncStatus(context.Context, *SyncStatusRequest) (*SyncStatusResponse, error)
}
type DRPCAgentSocketUnimplementedServer struct{}
func (s *DRPCAgentSocketUnimplementedServer) Ping(context.Context, *PingRequest) (*PingResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncStart(context.Context, *SyncStartRequest) (*SyncStartResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncWant(context.Context, *SyncWantRequest) (*SyncWantResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncComplete(context.Context, *SyncCompleteRequest) (*SyncCompleteResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncReady(context.Context, *SyncReadyRequest) (*SyncReadyResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
func (s *DRPCAgentSocketUnimplementedServer) SyncStatus(context.Context, *SyncStatusRequest) (*SyncStatusResponse, error) {
return nil, drpcerr.WithCode(errors.New("Unimplemented"), drpcerr.Unimplemented)
}
type DRPCAgentSocketDescription struct{}
func (DRPCAgentSocketDescription) NumMethods() int { return 6 }
func (DRPCAgentSocketDescription) Method(n int) (string, drpc.Encoding, drpc.Receiver, interface{}, bool) {
switch n {
case 0:
return "/coder.agentsocket.v1.AgentSocket/Ping", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
Ping(
ctx,
in1.(*PingRequest),
)
}, DRPCAgentSocketServer.Ping, true
case 1:
return "/coder.agentsocket.v1.AgentSocket/SyncStart", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncStart(
ctx,
in1.(*SyncStartRequest),
)
}, DRPCAgentSocketServer.SyncStart, true
case 2:
return "/coder.agentsocket.v1.AgentSocket/SyncWant", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncWant(
ctx,
in1.(*SyncWantRequest),
)
}, DRPCAgentSocketServer.SyncWant, true
case 3:
return "/coder.agentsocket.v1.AgentSocket/SyncComplete", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncComplete(
ctx,
in1.(*SyncCompleteRequest),
)
}, DRPCAgentSocketServer.SyncComplete, true
case 4:
return "/coder.agentsocket.v1.AgentSocket/SyncReady", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncReady(
ctx,
in1.(*SyncReadyRequest),
)
}, DRPCAgentSocketServer.SyncReady, true
case 5:
return "/coder.agentsocket.v1.AgentSocket/SyncStatus", drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{},
func(srv interface{}, ctx context.Context, in1, in2 interface{}) (drpc.Message, error) {
return srv.(DRPCAgentSocketServer).
SyncStatus(
ctx,
in1.(*SyncStatusRequest),
)
}, DRPCAgentSocketServer.SyncStatus, true
default:
return "", nil, nil, nil, false
}
}
func DRPCRegisterAgentSocket(mux drpc.Mux, impl DRPCAgentSocketServer) error {
return mux.Register(impl, DRPCAgentSocketDescription{})
}
type DRPCAgentSocket_PingStream interface {
drpc.Stream
SendAndClose(*PingResponse) error
}
type drpcAgentSocket_PingStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_PingStream) SendAndClose(m *PingResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncStartStream interface {
drpc.Stream
SendAndClose(*SyncStartResponse) error
}
type drpcAgentSocket_SyncStartStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncStartStream) SendAndClose(m *SyncStartResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncWantStream interface {
drpc.Stream
SendAndClose(*SyncWantResponse) error
}
type drpcAgentSocket_SyncWantStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncWantStream) SendAndClose(m *SyncWantResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncCompleteStream interface {
drpc.Stream
SendAndClose(*SyncCompleteResponse) error
}
type drpcAgentSocket_SyncCompleteStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncCompleteStream) SendAndClose(m *SyncCompleteResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncReadyStream interface {
drpc.Stream
SendAndClose(*SyncReadyResponse) error
}
type drpcAgentSocket_SyncReadyStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncReadyStream) SendAndClose(m *SyncReadyResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
type DRPCAgentSocket_SyncStatusStream interface {
drpc.Stream
SendAndClose(*SyncStatusResponse) error
}
type drpcAgentSocket_SyncStatusStream struct {
drpc.Stream
}
func (x *drpcAgentSocket_SyncStatusStream) SendAndClose(m *SyncStatusResponse) error {
if err := x.MsgSend(m, drpcEncoding_File_agent_agentsocket_proto_agentsocket_proto{}); err != nil {
return err
}
return x.CloseSend()
}
-17
@@ -1,17 +0,0 @@
package proto
import "github.com/coder/coder/v2/apiversion"
// Version history:
//
// API v1.0:
// - Initial release
// - Ping
// - Sync operations: SyncStart, SyncWant, SyncComplete, SyncReady, SyncStatus
const (
CurrentMajor = 1
CurrentMinor = 0
)
var CurrentVersion = apiversion.New(CurrentMajor, CurrentMinor)
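A sketch of how a peer's advertised version might be checked against CurrentVersion. This assumes the apiversion package exposes Validate(version string) error, which is not shown in this diff, and checkPeerVersion is a hypothetical helper:
package proto
import "golang.org/x/xerrors"
// checkPeerVersion rejects peers whose advertised agentsocket API version is
// incompatible with CurrentVersion.
func checkPeerVersion(peer string) error {
	if err := CurrentVersion.Validate(peer); err != nil {
		return xerrors.Errorf("unsupported agentsocket API version %q: %w", peer, err)
	}
	return nil
}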
-138
@@ -1,138 +0,0 @@
package agentsocket
import (
"context"
"errors"
"net"
"sync"
"golang.org/x/xerrors"
"storj.io/drpc/drpcmux"
"storj.io/drpc/drpcserver"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/codersdk/drpcsdk"
)
// Server provides access to the DRPCAgentSocketService via a Unix domain socket.
// Do not construct Server{} directly; use NewServer() instead.
type Server struct {
logger slog.Logger
path string
drpcServer *drpcserver.Server
service *DRPCAgentSocketService
mu sync.Mutex
listener net.Listener
ctx context.Context
cancel context.CancelFunc
wg sync.WaitGroup
}
// NewServer creates a new agent socket server.
func NewServer(logger slog.Logger, opts ...Option) (*Server, error) {
options := &options{}
for _, opt := range opts {
opt(options)
}
logger = logger.Named("agentsocket-server")
server := &Server{
logger: logger,
path: options.path,
service: &DRPCAgentSocketService{
logger: logger,
unitManager: unit.NewManager(),
},
}
mux := drpcmux.New()
err := proto.DRPCRegisterAgentSocket(mux, server.service)
if err != nil {
return nil, xerrors.Errorf("failed to register drpc service: %w", err)
}
server.drpcServer = drpcserver.NewWithOptions(mux, drpcserver.Options{
Manager: drpcsdk.DefaultDRPCOptions(nil),
Log: func(err error) {
if errors.Is(err, context.Canceled) ||
errors.Is(err, context.DeadlineExceeded) {
return
}
logger.Debug(context.Background(), "drpc server error", slog.Error(err))
},
})
listener, err := createSocket(server.path)
if err != nil {
return nil, xerrors.Errorf("create socket: %w", err)
}
server.listener = listener
// This context is canceled by server.Close();
// canceling it closes all connections.
server.ctx, server.cancel = context.WithCancel(context.Background())
server.logger.Info(server.ctx, "agent socket server started", slog.F("path", server.path))
server.wg.Add(1)
go func() {
defer server.wg.Done()
server.acceptConnections()
}()
return server, nil
}
// Close stops the server and cleans up resources.
func (s *Server) Close() error {
s.mu.Lock()
if s.listener == nil {
s.mu.Unlock()
return nil
}
s.logger.Info(s.ctx, "stopping agent socket server")
s.cancel()
if err := s.listener.Close(); err != nil {
s.logger.Warn(s.ctx, "error closing socket listener", slog.Error(err))
}
s.listener = nil
s.mu.Unlock()
// Wait for all connections to finish
s.wg.Wait()
if err := cleanupSocket(s.path); err != nil {
s.logger.Warn(s.ctx, "error cleaning up socket file", slog.Error(err))
}
s.logger.Info(s.ctx, "agent socket server stopped")
return nil
}
func (s *Server) acceptConnections() {
// In an edge case, Close() might race with acceptConnections() and set s.listener to nil.
// Therefore, we grab a copy of the listener under a lock. We might still get a nil listener,
// but then we know close has already run and we can return early.
s.mu.Lock()
listener := s.listener
s.mu.Unlock()
if listener == nil {
return
}
err := s.drpcServer.Serve(s.ctx, listener)
if err != nil {
s.logger.Warn(s.ctx, "error serving drpc server", slog.Error(err))
}
}
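Condensed, the lifecycle that NewServer and Close implement looks like this from a caller's perspective (a minimal sketch; the socket path is illustrative):
package main
import (
	"cdr.dev/slog"
	"github.com/coder/coder/v2/agent/agentsocket"
)
func main() {
	logger := slog.Make()
	srv, err := agentsocket.NewServer(logger, agentsocket.WithPath("/tmp/example-agent.sock"))
	if err != nil {
		panic(err)
	}
	// Close cancels the server context, closes the listener, waits for
	// in-flight connections, and removes the socket file; calling it a
	// second time returns nil immediately because the listener is gone.
	defer srv.Close()
}
The StartStop test below exercises exactly this pairing.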
-138
@@ -1,138 +0,0 @@
package agentsocket_test
import (
"context"
"path/filepath"
"runtime"
"testing"
"github.com/google/uuid"
"github.com/spf13/afero"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/agenttest"
agentproto "github.com/coder/coder/v2/agent/proto"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/tailnet"
"github.com/coder/coder/v2/tailnet/tailnettest"
"github.com/coder/coder/v2/testutil"
)
func TestServer(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("StartStop", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
require.NoError(t, server.Close())
})
t.Run("AlreadyStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server1, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
defer server1.Close()
_, err = agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.ErrorContains(t, err, "create socket")
})
t.Run("AutoSocketPath", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
server, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.NoError(t, err)
require.NoError(t, server.Close())
})
}
func TestServerWindowsNotSupported(t *testing.T) {
t.Parallel()
if runtime.GOOS != "windows" {
t.Skip("this test only runs on Windows")
}
t.Run("NewServer", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(t.TempDir(), "test.sock")
logger := slog.Make().Leveled(slog.LevelDebug)
_, err := agentsocket.NewServer(logger, agentsocket.WithPath(socketPath))
require.ErrorContains(t, err, "agentsocket is not supported on Windows")
})
t.Run("NewClient", func(t *testing.T) {
t.Parallel()
_, err := agentsocket.NewClient(context.Background(), agentsocket.WithPath("test.sock"))
require.ErrorContains(t, err, "agentsocket is not supported on Windows")
})
}
func TestAgentInitializesOnWindowsWithoutSocketServer(t *testing.T) {
t.Parallel()
if runtime.GOOS != "windows" {
t.Skip("this test only runs on Windows")
}
ctx := testutil.Context(t, testutil.WaitShort)
logger := testutil.Logger(t).Named("agent")
derpMap, _ := tailnettest.RunDERPAndSTUN(t)
coordinator := tailnet.NewCoordinator(logger)
t.Cleanup(func() {
_ = coordinator.Close()
})
statsCh := make(chan *agentproto.Stats, 50)
agentID := uuid.New()
manifest := agentsdk.Manifest{
AgentID: agentID,
AgentName: "test-agent",
WorkspaceName: "test-workspace",
OwnerName: "test-user",
WorkspaceID: uuid.New(),
DERPMap: derpMap,
}
client := agenttest.NewClient(t, logger.Named("agenttest"), agentID, manifest, statsCh, coordinator)
t.Cleanup(client.Close)
options := agent.Options{
Client: client,
Filesystem: afero.NewMemMapFs(),
Logger: logger.Named("agent"),
ReconnectingPTYTimeout: testutil.WaitShort,
EnvironmentVariables: map[string]string{},
SocketPath: "",
}
agnt := agent.New(options)
t.Cleanup(func() {
_ = agnt.Close()
})
startup := testutil.TryReceive(ctx, t, client.GetStartup())
require.NotNil(t, startup, "agent should send startup message")
err := agnt.Close()
require.NoError(t, err, "agent should close cleanly")
}
-152
@@ -1,152 +0,0 @@
package agentsocket
import (
"context"
"errors"
"golang.org/x/xerrors"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket/proto"
"github.com/coder/coder/v2/agent/unit"
)
var _ proto.DRPCAgentSocketServer = (*DRPCAgentSocketService)(nil)
var ErrUnitManagerNotAvailable = xerrors.New("unit manager not available")
// DRPCAgentSocketService implements the DRPC agent socket service.
type DRPCAgentSocketService struct {
unitManager *unit.Manager
logger slog.Logger
}
// Ping responds to a ping request to check if the service is alive.
func (*DRPCAgentSocketService) Ping(_ context.Context, _ *proto.PingRequest) (*proto.PingResponse, error) {
return &proto.PingResponse{}, nil
}
// SyncStart starts a unit in the dependency graph.
func (s *DRPCAgentSocketService) SyncStart(_ context.Context, req *proto.SyncStartRequest) (*proto.SyncStartResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("SyncStart: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
if err := s.unitManager.Register(unitID); err != nil {
if !errors.Is(err, unit.ErrUnitAlreadyRegistered) {
return nil, xerrors.Errorf("SyncStart: %w", err)
}
}
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
if !isReady {
return nil, xerrors.Errorf("cannot start unit %q: unit not ready", req.Unit)
}
err = s.unitManager.UpdateStatus(unitID, unit.StatusStarted)
if err != nil {
return nil, xerrors.Errorf("cannot start unit %q: %w", req.Unit, err)
}
return &proto.SyncStartResponse{}, nil
}
// SyncWant declares a dependency between units.
func (s *DRPCAgentSocketService) SyncWant(_ context.Context, req *proto.SyncWantRequest) (*proto.SyncWantResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot add dependency: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
dependsOnID := unit.ID(req.DependsOn)
if err := s.unitManager.Register(unitID); err != nil && !errors.Is(err, unit.ErrUnitAlreadyRegistered) {
return nil, xerrors.Errorf("cannot add dependency: %w", err)
}
if err := s.unitManager.AddDependency(unitID, dependsOnID, unit.StatusComplete); err != nil {
return nil, xerrors.Errorf("cannot add dependency: %w", err)
}
return &proto.SyncWantResponse{}, nil
}
// SyncComplete marks a unit as complete in the dependency graph.
func (s *DRPCAgentSocketService) SyncComplete(_ context.Context, req *proto.SyncCompleteRequest) (*proto.SyncCompleteResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot complete unit: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
if err := s.unitManager.UpdateStatus(unitID, unit.StatusComplete); err != nil {
return nil, xerrors.Errorf("cannot complete unit %q: %w", req.Unit, err)
}
return &proto.SyncCompleteResponse{}, nil
}
// SyncReady checks whether a unit is ready to be started. That is, all dependencies are satisfied.
func (s *DRPCAgentSocketService) SyncReady(_ context.Context, req *proto.SyncReadyRequest) (*proto.SyncReadyResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot check readiness: %w", ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
return &proto.SyncReadyResponse{
Ready: isReady,
}, nil
}
// SyncStatus gets the status of a unit and lists its dependencies.
func (s *DRPCAgentSocketService) SyncStatus(_ context.Context, req *proto.SyncStatusRequest) (*proto.SyncStatusResponse, error) {
if s.unitManager == nil {
return nil, xerrors.Errorf("cannot get status for unit %q: %w", req.Unit, ErrUnitManagerNotAvailable)
}
unitID := unit.ID(req.Unit)
isReady, err := s.unitManager.IsReady(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot check readiness: %w", err)
}
dependencies, err := s.unitManager.GetAllDependencies(unitID)
switch {
case errors.Is(err, unit.ErrUnitNotFound):
dependencies = []unit.Dependency{}
case err != nil:
return nil, xerrors.Errorf("cannot get dependencies: %w", err)
}
var depInfos []*proto.DependencyInfo
for _, dep := range dependencies {
depInfos = append(depInfos, &proto.DependencyInfo{
Unit: string(dep.Unit),
DependsOn: string(dep.DependsOn),
RequiredStatus: string(dep.RequiredStatus),
CurrentStatus: string(dep.CurrentStatus),
IsSatisfied: dep.IsSatisfied,
})
}
u, err := s.unitManager.Unit(unitID)
if err != nil {
return nil, xerrors.Errorf("cannot get status for unit %q: %w", req.Unit, err)
}
return &proto.SyncStatusResponse{
Status: string(u.Status()),
IsReady: isReady,
Dependencies: depInfos,
}, nil
}
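The service above is a thin RPC shim over unit.Manager, so the dependency semantics are easiest to see against the manager directly. A sketch using only calls that appear in this file (the unit package itself is not part of this diff, and the exact status-transition rules it enforces are assumed):
package main
import (
	"errors"
	"fmt"
	"github.com/coder/coder/v2/agent/unit"
)
func main() {
	m := unit.NewManager()
	// "app" wants "db" to reach StatusComplete before it may start.
	if err := m.Register(unit.ID("app")); err != nil && !errors.Is(err, unit.ErrUnitAlreadyRegistered) {
		panic(err)
	}
	if err := m.AddDependency(unit.ID("app"), unit.ID("db"), unit.StatusComplete); err != nil {
		panic(err)
	}
	ready, err := m.IsReady(unit.ID("app"))
	if err != nil {
		panic(err)
	}
	fmt.Println(ready) // false: "db" has not completed yet
	// Drive "db" through its lifecycle; afterwards "app" becomes ready.
	if err := m.UpdateStatus(unit.ID("db"), unit.StatusStarted); err != nil {
		panic(err)
	}
	if err := m.UpdateStatus(unit.ID("db"), unit.StatusComplete); err != nil {
		panic(err)
	}
	ready, err = m.IsReady(unit.ID("app"))
	if err != nil {
		panic(err)
	}
	fmt.Println(ready) // true: the dependency is satisfied
}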
-389
@@ -1,389 +0,0 @@
package agentsocket_test
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"cdr.dev/slog"
"github.com/coder/coder/v2/agent/agentsocket"
"github.com/coder/coder/v2/agent/unit"
"github.com/coder/coder/v2/testutil"
)
// tempDirUnixSocket returns a temporary directory that can safely hold unix
// sockets (probably).
//
// During tests on darwin we hit the max path length limit for unix sockets
// pretty easily in the default location, so this function uses /tmp instead to
// get shorter paths. To keep paths short, we use a hash of the test name
// instead of the full test name.
func tempDirUnixSocket(t *testing.T) string {
t.Helper()
if runtime.GOOS == "darwin" {
// Use a short hash of the test name to keep the path under 104 chars
hash := sha256.Sum256([]byte(t.Name()))
hashStr := hex.EncodeToString(hash[:])[:8] // Use first 8 chars of hash
dir, err := os.MkdirTemp("/tmp", fmt.Sprintf("c-%s-", hashStr))
require.NoError(t, err, "create temp dir for unix socket test")
t.Cleanup(func() {
err := os.RemoveAll(dir)
assert.NoError(t, err, "remove temp dir", dir)
})
return dir
}
return t.TempDir()
}
// newSocketClient creates a DRPC client connected to the Unix socket at the given path.
func newSocketClient(ctx context.Context, t *testing.T, socketPath string) *agentsocket.Client {
t.Helper()
client, err := agentsocket.NewClient(ctx, agentsocket.WithPath(socketPath))
t.Cleanup(func() {
_ = client.Close()
})
require.NoError(t, err)
return client
}
func TestDRPCAgentSocketService(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("agentsocket is not supported on Windows")
}
t.Run("Ping", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
err = client.Ping(ctx)
require.NoError(t, err)
})
t.Run("SyncStart", func(t *testing.T) {
t.Parallel()
t.Run("NewUnit", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
})
t.Run("UnitAlreadyStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// First Start
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
// Second Start
err = client.SyncStart(ctx, "test-unit")
require.ErrorContains(t, err, unit.ErrSameStatusAlreadySet.Error())
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
})
t.Run("UnitAlreadyCompleted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// First start
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
// Complete the unit
err = client.SyncComplete(ctx, "test-unit")
require.NoError(t, err)
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusComplete, status.Status)
// Second start
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
})
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
require.NoError(t, err)
err = client.SyncStart(ctx, "test-unit")
require.ErrorContains(t, err, "unit not ready")
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusPending, status.Status)
require.False(t, status.IsReady)
})
})
t.Run("SyncWant", func(t *testing.T) {
t.Parallel()
t.Run("NewUnits", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// If dependency units are not registered, they are registered automatically
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Len(t, status.Dependencies, 1)
require.Equal(t, unit.ID("dependency-unit"), status.Dependencies[0].DependsOn)
require.Equal(t, unit.StatusComplete, status.Dependencies[0].RequiredStatus)
})
t.Run("DependencyAlreadyRegistered", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// Start the dependency unit
err = client.SyncStart(ctx, "dependency-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "dependency-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
// Add the dependency after the dependency unit has already started
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
// Dependencies can be added even if the dependency unit has already started
require.NoError(t, err)
// The dependency is now reflected in the test unit's status
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.ID("dependency-unit"), status.Dependencies[0].DependsOn)
require.Equal(t, unit.StatusComplete, status.Dependencies[0].RequiredStatus)
})
t.Run("DependencyAddedAfterDependentStarted", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// Start the dependent unit
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
status, err := client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.StatusStarted, status.Status)
// Add the dependency after the dependency unit has already started
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
// Dependencies can be added even if the dependent unit has already started.
// The dependency applies the next time a unit is started. The current status is not updated.
// This is to allow flexible dependency management. It does mean that users of this API should
// take care to add dependencies before they start their dependent units.
require.NoError(t, err)
// The dependency is now reflected in the test unit's status
status, err = client.SyncStatus(ctx, "test-unit")
require.NoError(t, err)
require.Equal(t, unit.ID("dependency-unit"), status.Dependencies[0].DependsOn)
require.Equal(t, unit.StatusComplete, status.Dependencies[0].RequiredStatus)
})
})
t.Run("SyncReady", func(t *testing.T) {
t.Parallel()
t.Run("UnregisteredUnit", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
ready, err := client.SyncReady(ctx, "unregistered-unit")
require.NoError(t, err)
require.True(t, ready)
})
t.Run("UnitNotReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// Register a unit with an unsatisfied dependency
err = client.SyncWant(ctx, "test-unit", "dependency-unit")
require.NoError(t, err)
// Check readiness - should be false because dependency is not satisfied
ready, err := client.SyncReady(ctx, "test-unit")
require.NoError(t, err)
require.False(t, ready)
})
t.Run("UnitReady", func(t *testing.T) {
t.Parallel()
socketPath := filepath.Join(tempDirUnixSocket(t), "test.sock")
ctx := testutil.Context(t, testutil.WaitShort)
server, err := agentsocket.NewServer(
slog.Make().Leveled(slog.LevelDebug),
agentsocket.WithPath(socketPath),
)
require.NoError(t, err)
defer server.Close()
client := newSocketClient(ctx, t, socketPath)
// Register a unit with no dependencies - should be ready immediately
err = client.SyncStart(ctx, "test-unit")
require.NoError(t, err)
// Check readiness - should be true
ready, err := client.SyncReady(ctx, "test-unit")
require.NoError(t, err)
require.True(t, ready)
// Also test a unit with satisfied dependencies
err = client.SyncWant(ctx, "dependent-unit", "test-unit")
require.NoError(t, err)
// Complete the dependency
err = client.SyncComplete(ctx, "test-unit")
require.NoError(t, err)
// Now dependent-unit should be ready
ready, err = client.SyncReady(ctx, "dependent-unit")
require.NoError(t, err)
require.True(t, ready)
})
})
}
-73
@@ -1,73 +0,0 @@
//go:build !windows
package agentsocket
import (
"context"
"net"
"os"
"path/filepath"
"time"
"golang.org/x/xerrors"
)
const defaultSocketPath = "/tmp/coder-agent.sock"
func createSocket(path string) (net.Listener, error) {
if path == "" {
path = defaultSocketPath
}
if !isSocketAvailable(path) {
return nil, xerrors.Errorf("socket path %s is not available", path)
}
if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
return nil, xerrors.Errorf("remove existing socket: %w", err)
}
parentDir := filepath.Dir(path)
if err := os.MkdirAll(parentDir, 0o700); err != nil {
return nil, xerrors.Errorf("create socket directory: %w", err)
}
listener, err := net.Listen("unix", path)
if err != nil {
return nil, xerrors.Errorf("listen on unix socket: %w", err)
}
if err := os.Chmod(path, 0o600); err != nil {
_ = listener.Close()
return nil, xerrors.Errorf("set socket permissions: %w", err)
}
return listener, nil
}
func cleanupSocket(path string) error {
return os.Remove(path)
}
func isSocketAvailable(path string) bool {
if _, err := os.Stat(path); os.IsNotExist(err) {
return true
}
// Try to connect to see if it's actually listening.
dialer := net.Dialer{Timeout: 10 * time.Second}
conn, err := dialer.Dial("unix", path)
if err != nil {
return true
}
_ = conn.Close()
return false
}
func dialSocket(ctx context.Context, path string) (net.Conn, error) {
if path == "" {
path = defaultSocketPath
}
dialer := net.Dialer{}
return dialer.DialContext(ctx, "unix", path)
}
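One behavior of createSocket worth spelling out: a leftover socket file from a crashed process fails the dial probe in isSocketAvailable and is therefore treated as available, removed, and re-bound. A hypothetical in-package test sketching that path (a plain file stands in for the stale socket, since closing a live unix listener unlinks its file):
//go:build !windows
package agentsocket
import (
	"os"
	"path/filepath"
	"testing"
	"github.com/stretchr/testify/require"
)
func TestStaleSocketIsReplaced(t *testing.T) {
	path := filepath.Join(t.TempDir(), "stale.sock")
	// Simulate a crash leftover: a file exists at the path, but nothing is
	// accepting connections on it, so the dial probe fails.
	require.NoError(t, os.WriteFile(path, nil, 0o600))
	ln, err := createSocket(path)
	require.NoError(t, err) // the stale file was removed and re-bound
	require.NoError(t, ln.Close())
}
The AlreadyStarted test earlier covers the inverse case: a path with a live listener dials successfully, so createSocket refuses it.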
-22
@@ -1,22 +0,0 @@
//go:build windows
package agentsocket
import (
"context"
"net"
"golang.org/x/xerrors"
)
func createSocket(_ string) (net.Listener, error) {
return nil, xerrors.New("agentsocket is not supported on Windows")
}
func cleanupSocket(_ string) error {
return nil
}
func dialSocket(_ context.Context, _ string) (net.Conn, error) {
return nil, xerrors.New("agentsocket is not supported on Windows")
}
+5 -31
@@ -46,8 +46,6 @@ const (
// MagicProcessCmdlineJetBrains is a string in a process's command line that
// uniquely identifies it as JetBrains software.
MagicProcessCmdlineJetBrains = "idea.vendor.name=JetBrains"
MagicProcessCmdlineToolbox = "com.jetbrains.toolbox"
MagicProcessCmdlineGateway = "remote-dev-server"
// BlockedFileTransferErrorCode indicates that SSH server restricted the raw command from performing
// the file transfer.
@@ -119,10 +117,6 @@ type Config struct {
// Note that this is different from the devcontainers feature, which uses
// subagents.
ExperimentalContainers bool
// X11Net allows overriding the networking implementation used for X11
// forwarding listeners. When nil, a default implementation backed by the
// standard library networking package is used.
X11Net X11Network
}
type Server struct {
@@ -137,10 +131,9 @@ type Server struct {
// a lock on mu but protected by closing.
wg sync.WaitGroup
Execer agentexec.Execer
logger slog.Logger
srv *ssh.Server
x11Forwarder *x11Forwarder
Execer agentexec.Execer
logger slog.Logger
srv *ssh.Server
config *Config
@@ -197,20 +190,6 @@ func NewServer(ctx context.Context, logger slog.Logger, prometheusRegistry *prom
config: config,
metrics: metrics,
x11Forwarder: &x11Forwarder{
logger: logger,
x11HandlerErrors: metrics.x11HandlerErrors,
fs: fs,
displayOffset: *config.X11DisplayOffset,
sessions: make(map[*x11Session]struct{}),
connections: make(map[net.Conn]struct{}),
network: func() X11Network {
if config.X11Net != nil {
return config.X11Net
}
return osNet{}
}(),
},
}
srv := &ssh.Server{
@@ -478,7 +457,7 @@ func (s *Server) sessionHandler(session ssh.Session) {
x11, hasX11 := session.X11()
if hasX11 {
display, handled := s.x11Forwarder.x11Handler(ctx, session)
display, handled := s.x11Handler(ctx, x11)
if !handled {
logger.Error(ctx, "x11 handler failed")
closeCause("x11 handler failed")
@@ -611,9 +590,7 @@ func (s *Server) startNonPTYSession(logger slog.Logger, session ssh.Session, mag
// and SSH server close may be delayed.
cmd.SysProcAttr = cmdSysProcAttr()
// to match OpenSSH, we don't actually tear a non-TTY command down, even if the session ends. OpenSSH closes the
// pipes to the process when the session ends, which is what happens here since we wire the command up to the
// session for I/O.
// to match OpenSSH, we don't actually tear a non-TTY command down, even if the session ends.
// c.f. https://github.com/coder/coder/issues/18519#issuecomment-3019118271
cmd.Cancel = nil
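// (Context on the mechanism, as of Go 1.20's os/exec: exec.CommandContext
// installs Cancel as a call to Process.Kill, so the child would otherwise be
// killed as soon as the session context is done. With Cancel nil and no
// WaitDelay set, the context is not watched at all, and the process only
// notices the session ending as EOF on its stdio pipes.)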
@@ -1177,9 +1154,6 @@ func (s *Server) Close() error {
s.mu.Unlock()
s.logger.Debug(ctx, "closing X11 forwarding")
_ = s.x11Forwarder.Close()
s.logger.Debug(ctx, "waiting for all goroutines to exit")
s.wg.Wait() // Wait for all goroutines to exit.
-88
@@ -8,9 +8,7 @@ import (
"context"
"fmt"
"net"
"os"
"os/user"
"path/filepath"
"runtime"
"strings"
"sync"
@@ -405,92 +403,6 @@ func TestNewServer_Signal(t *testing.T) {
})
}
func TestSSHServer_ClosesStdin(t *testing.T) {
t.Parallel()
if runtime.GOOS == "windows" {
t.Skip("bash doesn't exist on Windows")
}
ctx := testutil.Context(t, testutil.WaitMedium)
logger := testutil.Logger(t)
s, err := agentssh.NewServer(ctx, logger.Named("ssh-server"), prometheus.NewRegistry(), afero.NewMemMapFs(), agentexec.DefaultExecer, nil)
require.NoError(t, err)
logger = logger.Named("test")
defer s.Close()
err = s.UpdateHostSigner(42)
assert.NoError(t, err)
ln, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
done := make(chan struct{})
go func() {
defer close(done)
err := s.Serve(ln)
assert.Error(t, err) // Server is closed.
}()
defer func() {
err := s.Close()
require.NoError(t, err)
<-done
}()
c := sshClient(t, ln.Addr().String())
sess, err := c.NewSession()
require.NoError(t, err)
stdout, err := sess.StdoutPipe()
require.NoError(t, err)
stdin, err := sess.StdinPipe()
require.NoError(t, err)
defer stdin.Close()
dir := t.TempDir()
err = os.MkdirAll(dir, 0o755)
require.NoError(t, err)
filePath := filepath.Join(dir, "result.txt")
// the shell command `read` will block until data is written to stdin, or closed. It will return
// exit code 1 if it hits EOF, which is what we want to test.
cmdErrCh := make(chan error, 1)
go func() {
cmdErrCh <- sess.Start(fmt.Sprintf(`echo started; echo "read exit code: $(read && echo 0 || echo 1)" > %s`, filePath))
}()
cmdErr := testutil.RequireReceive(ctx, t, cmdErrCh)
require.NoError(t, cmdErr)
readCh := make(chan error, 1)
go func() {
buf := make([]byte, 8)
_, err := stdout.Read(buf)
assert.Equal(t, "started\n", string(buf))
readCh <- err
}()
err = testutil.RequireReceive(ctx, t, readCh)
require.NoError(t, err)
err = sess.Close()
require.NoError(t, err)
var content []byte
expected := []byte("read exit code: 1\n")
testutil.Eventually(ctx, t, func(_ context.Context) bool {
content, err = os.ReadFile(filePath)
if err != nil {
logger.Debug(ctx, "failed to read file; will retry", slog.Error(err))
return false
}
if len(content) != len(expected) {
logger.Debug(ctx, "file is partially written", slog.F("content", content))
return false
}
return true
}, testutil.IntervalFast)
require.NoError(t, err)
require.Equal(t, string(expected), string(content))
}
func sshClient(t *testing.T, addr string) *ssh.Client {
conn, err := net.Dial("tcp", addr)
require.NoError(t, err)
+1 -16
@@ -53,7 +53,7 @@ func NewJetbrainsChannelWatcher(ctx ssh.Context, logger slog.Logger, reportConne
// If this is not JetBrains, then we do not need to do anything special. We
// attempt to match on something that appears unique to JetBrains software.
if !isJetbrainsProcess(cmdline) {
if !strings.Contains(strings.ToLower(cmdline), strings.ToLower(MagicProcessCmdlineJetBrains)) {
return newChannel
}
@@ -104,18 +104,3 @@ func (c *ChannelOnClose) Close() error {
c.once.Do(c.done)
return c.Channel.Close()
}
func isJetbrainsProcess(cmdline string) bool {
opts := []string{
MagicProcessCmdlineJetBrains,
MagicProcessCmdlineToolbox,
MagicProcessCmdlineGateway,
}
for _, opt := range opts {
if strings.Contains(strings.ToLower(cmdline), strings.ToLower(opt)) {
return true
}
}
return false
}
+72 -287
@@ -7,16 +7,15 @@ import (
"errors"
"fmt"
"io"
"math"
"net"
"os"
"path/filepath"
"strconv"
"sync"
"time"
"github.com/gliderlabs/ssh"
"github.com/gofrs/flock"
"github.com/prometheus/client_golang/prometheus"
"github.com/spf13/afero"
gossh "golang.org/x/crypto/ssh"
"golang.org/x/xerrors"
@@ -30,51 +29,8 @@ const (
X11StartPort = 6000
// X11DefaultDisplayOffset is the default offset for X11 forwarding.
X11DefaultDisplayOffset = 10
X11MaxDisplays = 200
// X11MaxPort is the highest port we will ever use for X11 forwarding. This limits the total number of TCP sockets
// we will create. It seems more useful to have a maximum port number than a direct limit on sockets with no max
// port because we'd like to be able to tell users the exact range of ports the Agent might use.
X11MaxPort = X11StartPort + X11MaxDisplays
)
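// (Worked example of the mapping used by createX11Listener below: display
// number = port - X11StartPort, so a listener bound to port 6010 serves
// DISPLAY=localhost:10. With the default offset of 10 and X11MaxDisplays of
// 200, the x11Forwarder variant only ever binds ports 6010 through 6200.)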
// X11Network abstracts the creation of network listeners for X11 forwarding.
// It is intended mainly for testing; production code uses the default
// implementation backed by the operating system networking stack.
type X11Network interface {
Listen(network, address string) (net.Listener, error)
}
// osNet is the default X11Network implementation that uses the standard
// library network stack.
type osNet struct{}
func (osNet) Listen(network, address string) (net.Listener, error) {
return net.Listen(network, address)
}
type x11Forwarder struct {
logger slog.Logger
x11HandlerErrors *prometheus.CounterVec
fs afero.Fs
displayOffset int
// network creates X11 listener sockets. Defaults to osNet{}.
network X11Network
mu sync.Mutex
sessions map[*x11Session]struct{}
connections map[net.Conn]struct{}
closing bool
wg sync.WaitGroup
}
type x11Session struct {
session ssh.Session
display int
listener net.Listener
usedAt time.Time
}
// x11Callback is called when the client requests X11 forwarding.
func (*Server) x11Callback(_ ssh.Context, _ ssh.X11) bool {
// Always allow.
@@ -83,243 +39,115 @@ func (*Server) x11Callback(_ ssh.Context, _ ssh.X11) bool {
// x11Handler is called when a session has requested X11 forwarding.
// It listens for X11 connections and forwards them to the client.
func (x *x11Forwarder) x11Handler(sshCtx ssh.Context, sshSession ssh.Session) (displayNumber int, handled bool) {
x11, hasX11 := sshSession.X11()
if !hasX11 {
return -1, false
}
serverConn, valid := sshCtx.Value(ssh.ContextKeyConn).(*gossh.ServerConn)
func (s *Server) x11Handler(ctx ssh.Context, x11 ssh.X11) (displayNumber int, handled bool) {
serverConn, valid := ctx.Value(ssh.ContextKeyConn).(*gossh.ServerConn)
if !valid {
x.logger.Warn(sshCtx, "failed to get server connection")
s.logger.Warn(ctx, "failed to get server connection")
return -1, false
}
ctx := slog.With(sshCtx, slog.F("session_id", fmt.Sprintf("%x", serverConn.SessionID())))
hostname, err := os.Hostname()
if err != nil {
x.logger.Warn(ctx, "failed to get hostname", slog.Error(err))
x.x11HandlerErrors.WithLabelValues("hostname").Add(1)
s.logger.Warn(ctx, "failed to get hostname", slog.Error(err))
s.metrics.x11HandlerErrors.WithLabelValues("hostname").Add(1)
return -1, false
}
x11session, err := x.createX11Session(ctx, sshSession)
ln, display, err := createX11Listener(ctx, *s.config.X11DisplayOffset)
if err != nil {
x.logger.Warn(ctx, "failed to create X11 listener", slog.Error(err))
x.x11HandlerErrors.WithLabelValues("listen").Add(1)
s.logger.Warn(ctx, "failed to create X11 listener", slog.Error(err))
s.metrics.x11HandlerErrors.WithLabelValues("listen").Add(1)
return -1, false
}
s.trackListener(ln, true)
defer func() {
if !handled {
x.closeAndRemoveSession(x11session)
s.trackListener(ln, false)
_ = ln.Close()
}
}()
err = addXauthEntry(ctx, x.fs, hostname, strconv.Itoa(x11session.display), x11.AuthProtocol, x11.AuthCookie)
err = addXauthEntry(ctx, s.fs, hostname, strconv.Itoa(display), x11.AuthProtocol, x11.AuthCookie)
if err != nil {
x.logger.Warn(ctx, "failed to add Xauthority entry", slog.Error(err))
x.x11HandlerErrors.WithLabelValues("xauthority").Add(1)
s.logger.Warn(ctx, "failed to add Xauthority entry", slog.Error(err))
s.metrics.x11HandlerErrors.WithLabelValues("xauthority").Add(1)
return -1, false
}
// clean up the X11 session if the SSH session completes.
go func() {
// Don't leave the listener open after the session is gone.
<-ctx.Done()
x.closeAndRemoveSession(x11session)
_ = ln.Close()
}()
go x.listenForConnections(ctx, x11session, serverConn, x11)
x.logger.Debug(ctx, "X11 forwarding started", slog.F("display", x11session.display))
go func() {
defer ln.Close()
defer s.trackListener(ln, false)
return x11session.display, true
}
func (x *x11Forwarder) trackGoroutine() (closing bool, done func()) {
x.mu.Lock()
defer x.mu.Unlock()
if !x.closing {
x.wg.Add(1)
return false, func() { x.wg.Done() }
}
return true, func() {}
}
func (x *x11Forwarder) listenForConnections(
ctx context.Context, session *x11Session, serverConn *gossh.ServerConn, x11 ssh.X11,
) {
defer x.closeAndRemoveSession(session)
if closing, done := x.trackGoroutine(); closing {
return
} else { // nolint: revive
defer done()
}
for {
conn, err := session.listener.Accept()
if err != nil {
if errors.Is(err, net.ErrClosed) {
for {
conn, err := ln.Accept()
if err != nil {
if errors.Is(err, net.ErrClosed) {
return
}
s.logger.Warn(ctx, "failed to accept X11 connection", slog.Error(err))
return
}
x.logger.Warn(ctx, "failed to accept X11 connection", slog.Error(err))
return
}
// Update session usage time since a new X11 connection was forwarded.
x.mu.Lock()
session.usedAt = time.Now()
x.mu.Unlock()
if x11.SingleConnection {
x.logger.Debug(ctx, "single connection requested, closing X11 listener")
x.closeAndRemoveSession(session)
}
var originAddr string
var originPort uint32
if tcpConn, ok := conn.(*net.TCPConn); ok {
if tcpAddr, ok := tcpConn.LocalAddr().(*net.TCPAddr); ok {
originAddr = tcpAddr.IP.String()
// #nosec G115 - Safe conversion as TCP port numbers are within uint32 range (0-65535)
originPort = uint32(tcpAddr.Port)
if x11.SingleConnection {
s.logger.Debug(ctx, "single connection requested, closing X11 listener")
_ = ln.Close()
}
}
// Fallback values for in-memory or non-TCP connections.
if originAddr == "" {
originAddr = "127.0.0.1"
}
channel, reqs, err := serverConn.OpenChannel("x11", gossh.Marshal(struct {
OriginatorAddress string
OriginatorPort uint32
}{
OriginatorAddress: originAddr,
OriginatorPort: originPort,
}))
if err != nil {
x.logger.Warn(ctx, "failed to open X11 channel", slog.Error(err))
_ = conn.Close()
continue
}
go gossh.DiscardRequests(reqs)
tcpConn, ok := conn.(*net.TCPConn)
if !ok {
s.logger.Warn(ctx, fmt.Sprintf("failed to cast connection to TCPConn. got: %T", conn))
_ = conn.Close()
continue
}
tcpAddr, ok := tcpConn.LocalAddr().(*net.TCPAddr)
if !ok {
s.logger.Warn(ctx, fmt.Sprintf("failed to cast local address to TCPAddr. got: %T", tcpConn.LocalAddr()))
_ = conn.Close()
continue
}
if !x.trackConn(conn, true) {
x.logger.Warn(ctx, "failed to track X11 connection")
_ = conn.Close()
continue
}
go func() {
defer x.trackConn(conn, false)
Bicopy(ctx, conn, channel)
}()
}
}
channel, reqs, err := serverConn.OpenChannel("x11", gossh.Marshal(struct {
OriginatorAddress string
OriginatorPort uint32
}{
OriginatorAddress: tcpAddr.IP.String(),
// #nosec G115 - Safe conversion as TCP port numbers are within uint32 range (0-65535)
OriginatorPort: uint32(tcpAddr.Port),
}))
if err != nil {
s.logger.Warn(ctx, "failed to open X11 channel", slog.Error(err))
_ = conn.Close()
continue
}
go gossh.DiscardRequests(reqs)
// closeAndRemoveSession closes and removes the session.
func (x *x11Forwarder) closeAndRemoveSession(x11session *x11Session) {
_ = x11session.listener.Close()
x.mu.Lock()
delete(x.sessions, x11session)
x.mu.Unlock()
}
if !s.trackConn(ln, conn, true) {
s.logger.Warn(ctx, "failed to track X11 connection")
_ = conn.Close()
continue
}
go func() {
defer s.trackConn(ln, conn, false)
Bicopy(ctx, conn, channel)
}()
}
}()
// createX11Session creates an X11 forwarding session.
func (x *x11Forwarder) createX11Session(ctx context.Context, sshSession ssh.Session) (*x11Session, error) {
var (
ln net.Listener
display int
err error
)
// retry listener creation after evictions. Limit to 10 retries to prevent pathological cases looping forever.
const maxRetries = 10
for try := range maxRetries {
ln, display, err = x.createX11Listener(ctx)
if err == nil {
break
}
if try == maxRetries-1 {
return nil, xerrors.New("max retries exceeded while creating X11 session")
}
x.logger.Warn(ctx, "failed to create X11 listener; will evict an X11 forwarding session",
slog.F("num_current_sessions", x.numSessions()),
slog.Error(err))
x.evictLeastRecentlyUsedSession()
}
x.mu.Lock()
defer x.mu.Unlock()
if x.closing {
closeErr := ln.Close()
if closeErr != nil {
x.logger.Error(ctx, "error closing X11 listener", slog.Error(closeErr))
}
return nil, xerrors.New("server is closing")
}
x11Sess := &x11Session{
session: sshSession,
display: display,
listener: ln,
usedAt: time.Now(),
}
x.sessions[x11Sess] = struct{}{}
return x11Sess, nil
}
func (x *x11Forwarder) numSessions() int {
x.mu.Lock()
defer x.mu.Unlock()
return len(x.sessions)
}
func (x *x11Forwarder) popLeastRecentlyUsedSession() *x11Session {
x.mu.Lock()
defer x.mu.Unlock()
var lru *x11Session
for s := range x.sessions {
if lru == nil {
lru = s
continue
}
if s.usedAt.Before(lru.usedAt) {
lru = s
continue
}
}
if lru == nil {
x.logger.Debug(context.Background(), "tried to pop from empty set of X11 sessions")
return nil
}
delete(x.sessions, lru)
return lru
}
func (x *x11Forwarder) evictLeastRecentlyUsedSession() {
lru := x.popLeastRecentlyUsedSession()
if lru == nil {
return
}
err := lru.listener.Close()
if err != nil {
x.logger.Error(context.Background(), "failed to close evicted X11 session listener", slog.Error(err))
}
// when we evict, we also want to force the SSH session to be closed as well. This is because we intend to reuse
// the X11 TCP listener port for a new X11 forwarding session. If we left the SSH session up, then graphical apps
// started in that session could potentially connect to an unintended X11 Server (i.e. the display on a different
// computer than the one that started the SSH session). Most likely, this session is a zombie anyway if we've
// reached the maximum number of X11 forwarding sessions.
err = lru.session.Close()
if err != nil {
x.logger.Error(context.Background(), "failed to close evicted X11 SSH session", slog.Error(err))
}
return display, true
}
// createX11Listener creates a listener for X11 forwarding, it will use
// the next available port starting from X11StartPort and displayOffset.
func (x *x11Forwarder) createX11Listener(ctx context.Context) (ln net.Listener, display int, err error) {
func createX11Listener(ctx context.Context, displayOffset int) (ln net.Listener, display int, err error) {
var lc net.ListenConfig
// Look for an open port to listen on.
for port := X11StartPort + x.displayOffset; port <= X11MaxPort; port++ {
if ctx.Err() != nil {
return nil, -1, ctx.Err()
}
ln, err = x.network.Listen("tcp", fmt.Sprintf("localhost:%d", port))
for port := X11StartPort + displayOffset; port < math.MaxUint16; port++ {
ln, err = lc.Listen(ctx, "tcp", fmt.Sprintf("localhost:%d", port))
if err == nil {
display = port - X11StartPort
return ln, display, nil
@@ -328,49 +156,6 @@ func (x *x11Forwarder) createX11Listener(ctx context.Context) (ln net.Listener,
return nil, -1, xerrors.Errorf("failed to find open port for X11 listener: %w", err)
}
// trackConn registers the connection with the x11Forwarder. If the server is
// closed, the connection is not registered and should be closed.
//
//nolint:revive
func (x *x11Forwarder) trackConn(c net.Conn, add bool) (ok bool) {
x.mu.Lock()
defer x.mu.Unlock()
if add {
if x.closing {
// Server or listener closed.
return false
}
x.wg.Add(1)
x.connections[c] = struct{}{}
return true
}
x.wg.Done()
delete(x.connections, c)
return true
}
func (x *x11Forwarder) Close() error {
x.mu.Lock()
x.closing = true
for s := range x.sessions {
sErr := s.listener.Close()
if sErr != nil {
x.logger.Debug(context.Background(), "failed to close X11 listener", slog.Error(sErr))
}
}
for c := range x.connections {
cErr := c.Close()
if cErr != nil {
x.logger.Debug(context.Background(), "failed to close X11 connection", slog.Error(cErr))
}
}
x.mu.Unlock()
x.wg.Wait()
return nil
}
// addXauthEntry adds an Xauthority entry to the Xauthority file.
// The Xauthority file is located at ~/.Xauthority.
func addXauthEntry(ctx context.Context, fs afero.Fs, host string, display string, authProtocol string, authCookie string) error {
+14 -229
@@ -3,9 +3,9 @@ package agentssh_test
import (
"bufio"
"bytes"
"context"
"encoding/hex"
"fmt"
"io"
"net"
"os"
"path/filepath"
@@ -32,19 +32,10 @@ func TestServer_X11(t *testing.T) {
t.Skip("X11 forwarding is only supported on Linux")
}
ctx := testutil.Context(t, testutil.WaitShort)
ctx := context.Background()
logger := testutil.Logger(t)
fs := afero.NewMemMapFs()
// Use in-process networking for X11 forwarding.
inproc := testutil.NewInProcNet()
// Create server config with custom X11 listener.
cfg := &agentssh.Config{
X11Net: inproc,
}
s, err := agentssh.NewServer(ctx, logger, prometheus.NewRegistry(), fs, agentexec.DefaultExecer, cfg)
fs := afero.NewOsFs()
s, err := agentssh.NewServer(ctx, logger, prometheus.NewRegistry(), fs, agentexec.DefaultExecer, &agentssh.Config{})
require.NoError(t, err)
defer s.Close()
err = s.UpdateHostSigner(42)
@@ -102,15 +93,17 @@ func TestServer_X11(t *testing.T) {
x11Chans := c.HandleChannelOpen("x11")
payload := "hello world"
go func() {
conn, err := inproc.Dial(ctx, testutil.NewAddr("tcp", fmt.Sprintf("localhost:%d", agentssh.X11StartPort+displayNumber)))
assert.NoError(t, err)
_, err = conn.Write([]byte(payload))
assert.NoError(t, err)
_ = conn.Close()
}()
require.Eventually(t, func() bool {
conn, err := net.Dial("tcp", fmt.Sprintf("localhost:%d", agentssh.X11StartPort+displayNumber))
if err == nil {
_, err = conn.Write([]byte(payload))
assert.NoError(t, err)
_ = conn.Close()
}
return err == nil
}, testutil.WaitShort, testutil.IntervalFast)
x11 := testutil.RequireReceive(ctx, t, x11Chans)
x11 := <-x11Chans
ch, reqs, err := x11.Accept()
require.NoError(t, err)
go gossh.DiscardRequests(reqs)
@@ -128,211 +121,3 @@ func TestServer_X11(t *testing.T) {
_, err = fs.Stat(filepath.Join(home, ".Xauthority"))
require.NoError(t, err)
}
func TestServer_X11_EvictionLRU(t *testing.T) {
t.Parallel()
if runtime.GOOS != "linux" {
t.Skip("X11 forwarding is only supported on Linux")
}
ctx := testutil.Context(t, testutil.WaitSuperLong)
logger := testutil.Logger(t)
fs := afero.NewMemMapFs()
// Use in-process networking for X11 forwarding.
inproc := testutil.NewInProcNet()
cfg := &agentssh.Config{
X11Net: inproc,
}
s, err := agentssh.NewServer(ctx, logger, prometheus.NewRegistry(), fs, agentexec.DefaultExecer, cfg)
require.NoError(t, err)
defer s.Close()
err = s.UpdateHostSigner(42)
require.NoError(t, err)
ln, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
done := testutil.Go(t, func() {
err := s.Serve(ln)
assert.Error(t, err)
})
c := sshClient(t, ln.Addr().String())
// Block off one port to test that the x11Forwarder evicts when it runs out of ports at the highest port, not after a fixed number of listeners.
externalListener, err := inproc.Listen("tcp",
fmt.Sprintf("localhost:%d", agentssh.X11StartPort+agentssh.X11DefaultDisplayOffset+1))
require.NoError(t, err)
defer externalListener.Close()
// Calculate how many simultaneous X11 sessions we can create given the
// configured port range.
startPort := agentssh.X11StartPort + agentssh.X11DefaultDisplayOffset
maxSessions := agentssh.X11MaxPort - startPort + 1 - 1 // -1 for the blocked port
require.Greater(t, maxSessions, 0, "expected a positive maxSessions value")
// shellSession holds references to the session and its standard streams so
// that the test can keep them open (and optionally interact with them) for
// the lifetime of the test. If we don't start the Shell with pipes in place,
// the session will be torn down asynchronously during the test.
type shellSession struct {
sess *gossh.Session
stdin io.WriteCloser
stdout io.Reader
stderr io.Reader
// scanner is used to read the output of the session, line by line.
scanner *bufio.Scanner
}
sessions := make([]shellSession, 0, maxSessions)
for i := 0; i < maxSessions; i++ {
sess, err := c.NewSession()
require.NoError(t, err)
_, err = sess.SendRequest("x11-req", true, gossh.Marshal(ssh.X11{
AuthProtocol: "MIT-MAGIC-COOKIE-1",
AuthCookie: hex.EncodeToString([]byte(fmt.Sprintf("cookie%d", i))),
ScreenNumber: uint32(0),
}))
require.NoError(t, err)
stdin, err := sess.StdinPipe()
require.NoError(t, err)
stdout, err := sess.StdoutPipe()
require.NoError(t, err)
stderr, err := sess.StderrPipe()
require.NoError(t, err)
require.NoError(t, sess.Shell())
// The SSH server lazily starts the session. We need to write a command
// and read back to ensure the X11 forwarding is started.
scanner := bufio.NewScanner(stdout)
msg := fmt.Sprintf("ready-%d", i)
_, err = stdin.Write([]byte("echo " + msg + "\n"))
require.NoError(t, err)
// Read until we get the message (first token may be empty due to shell prompt)
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if strings.Contains(line, msg) {
break
}
}
require.NoError(t, scanner.Err())
sessions = append(sessions, shellSession{
sess: sess,
stdin: stdin,
stdout: stdout,
stderr: stderr,
scanner: scanner,
})
}
// Connect X11 forwarding to the first session. This is used to test that
// connecting counts as a use of the display.
x11Chans := c.HandleChannelOpen("x11")
payload := "hello world"
go func() {
conn, err := inproc.Dial(ctx, testutil.NewAddr("tcp", fmt.Sprintf("localhost:%d", agentssh.X11StartPort+agentssh.X11DefaultDisplayOffset)))
if !assert.NoError(t, err) {
return
}
_, err = conn.Write([]byte(payload))
assert.NoError(t, err)
_ = conn.Close()
}()
x11 := testutil.RequireReceive(ctx, t, x11Chans)
ch, reqs, err := x11.Accept()
require.NoError(t, err)
go gossh.DiscardRequests(reqs)
got := make([]byte, len(payload))
_, err = ch.Read(got)
require.NoError(t, err)
assert.Equal(t, payload, string(got))
_ = ch.Close()
// Create one more session which should evict a session and reuse the display.
// The first session was used to connect X11 forwarding, so it should not be evicted.
// Therefore, the second session should be evicted and its display reused.
extraSess, err := c.NewSession()
require.NoError(t, err)
_, err = extraSess.SendRequest("x11-req", true, gossh.Marshal(ssh.X11{
AuthProtocol: "MIT-MAGIC-COOKIE-1",
AuthCookie: hex.EncodeToString([]byte("extra")),
ScreenNumber: uint32(0),
}))
require.NoError(t, err)
// Ask the remote side for the DISPLAY value so we can extract the display
// number that was assigned to this session.
out, err := extraSess.Output("echo DISPLAY=$DISPLAY")
require.NoError(t, err)
// Example output line: "DISPLAY=localhost:10.0".
var newDisplayNumber int
{
sc := bufio.NewScanner(bytes.NewReader(out))
for sc.Scan() {
line := strings.TrimSpace(sc.Text())
if strings.HasPrefix(line, "DISPLAY=") {
parts := strings.SplitN(line, ":", 2)
require.Len(t, parts, 2)
displayPart := parts[1]
if strings.Contains(displayPart, ".") {
displayPart = strings.SplitN(displayPart, ".", 2)[0]
}
var convErr error
newDisplayNumber, convErr = strconv.Atoi(displayPart)
require.NoError(t, convErr)
break
}
}
require.NoError(t, sc.Err())
}
// The reused display number should correspond to the SECOND session (display
// offset 12): offset +1 is the blocked port, so the second session received
// offset +2.
expectedDisplay := agentssh.X11DefaultDisplayOffset + 2
assert.Equal(t, expectedDisplay, newDisplayNumber, "second session should have been evicted and its display reused")
// First session should still be alive: send cmd and read output.
msgFirst := "still-alive"
_, err = sessions[0].stdin.Write([]byte("echo " + msgFirst + "\n"))
require.NoError(t, err)
for sessions[0].scanner.Scan() {
line := strings.TrimSpace(sessions[0].scanner.Text())
if strings.Contains(line, msgFirst) {
break
}
}
require.NoError(t, sessions[0].scanner.Err())
// Second session should now be closed.
_, err = sessions[1].stdin.Write([]byte("echo dead\n"))
require.ErrorIs(t, err, io.EOF)
err = sessions[1].sess.Wait()
require.Error(t, err)
// Cleanup.
for i, sh := range sessions {
if i == 1 {
// already closed
continue
}
err = sh.stdin.Close()
require.NoError(t, err)
err = sh.sess.Wait()
require.NoError(t, err)
}
err = extraSess.Close()
require.ErrorIs(t, err, io.EOF)
err = s.Close()
require.NoError(t, err)
_ = testutil.TryReceive(ctx, t, done)
}
+9 -1
@@ -1,6 +1,7 @@
package agenttest
import (
"context"
"net/url"
"testing"
@@ -30,11 +31,18 @@ func New(t testing.TB, coderURL *url.URL, agentToken string, opts ...func(*agent
}
if o.Client == nil {
agentClient := agentsdk.New(coderURL)
agentClient.SetSessionToken(agentToken)
agentClient.SDK.SetLogger(log)
o.Client = agentClient
}
if o.ExchangeToken == nil {
o.ExchangeToken = func(_ context.Context) (string, error) {
return agentToken, nil
}
}
if o.LogDir == "" {
o.LogDir = t.TempDir()
}
+4 -30
@@ -3,7 +3,6 @@ package agenttest
import (
"context"
"io"
"net/http"
"slices"
"sync"
"sync/atomic"
@@ -29,7 +28,6 @@ import (
"github.com/coder/coder/v2/tailnet"
"github.com/coder/coder/v2/tailnet/proto"
"github.com/coder/coder/v2/testutil"
"github.com/coder/websocket"
)
const statsInterval = 500 * time.Millisecond
@@ -88,34 +86,10 @@ type Client struct {
fakeAgentAPI *FakeAgentAPI
LastWorkspaceAgent func()
mu sync.Mutex // Protects following.
logs []agentsdk.Log
derpMapUpdates chan *tailcfg.DERPMap
derpMapOnce sync.Once
}
func (*Client) RewriteDERPMap(*tailcfg.DERPMap) {}
+33 -46
@@ -2,44 +2,46 @@ package agent
import (
"net/http"
"sync"
"time"
"github.com/go-chi/chi/v5"
"github.com/google/uuid"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/coderd/httpmw/loggermw"
"github.com/coder/coder/v2/coderd/tracing"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/coder/v2/httpmw"
)
func (a *agent) apiHandler() http.Handler {
r := chi.NewRouter()
r.Use(
httpmw.Recover(a.logger),
tracing.StatusWriterMiddleware,
loggermw.Logger(a.logger),
)
r.Get("/", func(rw http.ResponseWriter, r *http.Request) {
httpapi.Write(r.Context(), rw, http.StatusOK, codersdk.Response{
Message: "Hello from the agent!",
})
})
if a.devcontainers {
// Make a copy to ensure the map is not modified after the handler is
// created.
cpy := make(map[int]string)
for k, b := range a.ignorePorts {
cpy[k] = b
}
cacheDuration := 1 * time.Second
if a.portCacheDuration > 0 {
cacheDuration = a.portCacheDuration
}
lp := &listeningPortsHandler{
ignorePorts: cpy,
cacheDuration: cacheDuration,
}
if a.containerAPI != nil {
r.Mount("/api/v0/containers", a.containerAPI.Routes())
} else if manifest := a.manifest.Load(); manifest != nil && manifest.ParentID != uuid.Nil {
r.HandleFunc("/api/v0/containers", func(w http.ResponseWriter, r *http.Request) {
httpapi.Write(r.Context(), w, http.StatusForbidden, codersdk.Response{
Message: "Dev Container feature not supported.",
Detail: "Dev Container integration inside other Dev Containers is explicitly not supported.",
})
})
} else {
r.HandleFunc("/api/v0/containers", func(w http.ResponseWriter, r *http.Request) {
httpapi.Write(r.Context(), w, http.StatusForbidden, codersdk.Response{
Message: "Dev Container feature not enabled.",
Message: "The agent dev containers feature is experimental and not enabled by default.",
Detail: "To enable this feature, set CODER_AGENT_DEVCONTAINERS_ENABLE=true in your template.",
})
})
@@ -47,12 +49,9 @@ func (a *agent) apiHandler() http.Handler {
promHandler := PrometheusMetricsHandler(a.prometheusRegistry, a.logger)
r.Get("/api/v0/listening-ports", a.listeningPortsHandler.handler)
r.Get("/api/v0/listening-ports", lp.handler)
r.Get("/api/v0/netcheck", a.HandleNetcheck)
r.Post("/api/v0/list-directory", a.HandleLS)
r.Get("/api/v0/read-file", a.HandleReadFile)
r.Post("/api/v0/write-file", a.HandleWriteFile)
r.Post("/api/v0/edit-files", a.HandleEditFiles)
r.Get("/debug/logs", a.HandleHTTPDebugLogs)
r.Get("/debug/magicsock", a.HandleHTTPDebugMagicsock)
r.Get("/debug/magicsock/debug-logging/{state}", a.HandleHTTPMagicsockDebugLoggingState)
@@ -62,21 +61,22 @@ func (a *agent) apiHandler() http.Handler {
return r
}
type listeningPortsHandler struct {
ignorePorts map[int]string
cacheDuration time.Duration
//nolint: unused // used on some but not all platforms
mut sync.Mutex
//nolint: unused // used on some but not all platforms
ports []codersdk.WorkspaceAgentListeningPort
//nolint: unused // used on some but not all platforms
mtime time.Time
}
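// The mut, ports, and mtime fields back a simple time-based cache on the
// platforms that use them: a getListeningPorts implementation can return the
// previously scanned ports until cacheDuration has elapsed. (This is a
// reading of the fields above, not a guarantee about every platform.)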
// handler returns a list of listening ports. This is tested by coderd's
// TestWorkspaceAgentListeningPorts test.
func (lp *listeningPortsHandler) handler(rw http.ResponseWriter, r *http.Request) {
ports, err := lp.getListeningPorts()
if err != nil {
httpapi.Write(r.Context(), rw, http.StatusInternalServerError, codersdk.Response{
Message: "Could not scan for listening ports.",
@@ -85,20 +85,7 @@ func (lp *listeningPortsHandler) handler(rw http.ResponseWriter, r *http.Request
return
}
httpapi.Write(r.Context(), rw, http.StatusOK, codersdk.WorkspaceAgentListeningPortsResponse{
Ports: ports,
})
}
+1 -2
@@ -63,7 +63,6 @@ func NewAppHealthReporterWithClock(
// run a ticker for each app health check.
var mu sync.RWMutex
failures := make(map[uuid.UUID]int, 0)
for _, nextApp := range apps {
if !shouldStartTicker(nextApp) {
continue
@@ -92,7 +91,7 @@ func NewAppHealthReporterWithClock(
if err != nil {
return err
}
res, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
-275
@@ -1,275 +0,0 @@
package agent
import (
"context"
"errors"
"fmt"
"io"
"mime"
"net/http"
"os"
"path/filepath"
"strconv"
"syscall"
"github.com/icholy/replace"
"github.com/spf13/afero"
"golang.org/x/text/transform"
"golang.org/x/xerrors"
"cdr.dev/slog"
"github.com/coder/coder/v2/coderd/httpapi"
"github.com/coder/coder/v2/codersdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
)
type HTTPResponseCode = int
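// HandleReadFile streams a file from the workspace filesystem. It accepts an
// absolute "path" plus optional "offset" and "limit" query parameters, for
// example (illustrative request):
//
//	GET /api/v0/read-file?path=/home/coder/notes.txt&offset=0&limit=1024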
func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
query := r.URL.Query()
parser := httpapi.NewQueryParamParser().RequiredNotEmpty("path")
path := parser.String(query, "", "path")
offset := parser.PositiveInt64(query, 0, "offset")
limit := parser.PositiveInt64(query, 0, "limit")
parser.ErrorExcessParams(query)
if len(parser.Errors) > 0 {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "Query parameters have invalid values.",
Validations: parser.Errors,
})
return
}
status, err := a.streamFile(ctx, rw, path, offset, limit)
if err != nil {
httpapi.Write(ctx, rw, status, codersdk.Response{
Message: err.Error(),
})
return
}
}
func (a *agent) streamFile(ctx context.Context, rw http.ResponseWriter, path string, offset, limit int64) (HTTPResponseCode, error) {
if !filepath.IsAbs(path) {
return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
}
f, err := a.filesystem.Open(path)
if err != nil {
status := http.StatusInternalServerError
switch {
case errors.Is(err, os.ErrNotExist):
status = http.StatusNotFound
case errors.Is(err, os.ErrPermission):
status = http.StatusForbidden
}
return status, err
}
defer f.Close()
stat, err := f.Stat()
if err != nil {
return http.StatusInternalServerError, err
}
if stat.IsDir() {
return http.StatusBadRequest, xerrors.Errorf("open %s: not a file", path)
}
size := stat.Size()
if limit == 0 {
limit = size
}
bytesRemaining := max(size-offset, 0)
bytesToRead := min(bytesRemaining, limit)
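// For example, reading a 10-byte file with offset=3 and limit=0: limit
// defaults to the file size (10), bytesRemaining is 7, and bytesToRead is
// min(7, 10) = 7, i.e. the rest of the file.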
// Relying on just the file name for the mime type for now.
mimeType := mime.TypeByExtension(filepath.Ext(path))
if mimeType == "" {
mimeType = "application/octet-stream"
}
rw.Header().Set("Content-Type", mimeType)
rw.Header().Set("Content-Length", strconv.FormatInt(bytesToRead, 10))
rw.WriteHeader(http.StatusOK)
reader := io.NewSectionReader(f, offset, bytesToRead)
_, err = io.Copy(rw, reader)
if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
a.logger.Error(ctx, "workspace agent read file", slog.Error(err))
}
return 0, nil
}
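// HandleWriteFile writes the request body to the file named by the required,
// absolute "path" query parameter, creating any missing parent directories.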
func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
query := r.URL.Query()
parser := httpapi.NewQueryParamParser().RequiredNotEmpty("path")
path := parser.String(query, "", "path")
parser.ErrorExcessParams(query)
if len(parser.Errors) > 0 {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "Query parameters have invalid values.",
Validations: parser.Errors,
})
return
}
status, err := a.writeFile(ctx, r, path)
if err != nil {
httpapi.Write(ctx, rw, status, codersdk.Response{
Message: err.Error(),
})
return
}
httpapi.Write(ctx, rw, http.StatusOK, codersdk.Response{
Message: fmt.Sprintf("Successfully wrote to %q", path),
})
}
func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HTTPResponseCode, error) {
if !filepath.IsAbs(path) {
return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
}
dir := filepath.Dir(path)
err := a.filesystem.MkdirAll(dir, 0o755)
if err != nil {
status := http.StatusInternalServerError
switch {
case errors.Is(err, os.ErrPermission):
status = http.StatusForbidden
case errors.Is(err, syscall.ENOTDIR):
status = http.StatusBadRequest
}
return status, err
}
f, err := a.filesystem.Create(path)
if err != nil {
status := http.StatusInternalServerError
switch {
case errors.Is(err, os.ErrPermission):
status = http.StatusForbidden
case errors.Is(err, syscall.EISDIR):
status = http.StatusBadRequest
}
return status, err
}
defer f.Close()
_, err = io.Copy(f, r.Body)
if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
a.logger.Error(ctx, "workspace agent write file", slog.Error(err))
}
return 0, nil
}
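// HandleEditFiles applies search/replace edits to one or more files. Errors
// are joined across files and the highest HTTP status code wins; each file is
// rewritten through a temporary sibling file and an atomic rename (see
// editFile below).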
func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var req workspacesdk.FileEditRequest
if !httpapi.Read(ctx, rw, r, &req) {
return
}
if len(req.Files) == 0 {
httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
Message: "must specify at least one file",
})
return
}
var combinedErr error
status := http.StatusOK
for _, edit := range req.Files {
s, err := a.editFile(r.Context(), edit.Path, edit.Edits)
// Keep the highest response status, so 500 will be preferred over 400, etc.
if s > status {
status = s
}
if err != nil {
combinedErr = errors.Join(combinedErr, err)
}
}
if combinedErr != nil {
httpapi.Write(ctx, rw, status, codersdk.Response{
Message: combinedErr.Error(),
})
return
}
httpapi.Write(ctx, rw, http.StatusOK, codersdk.Response{
Message: "Successfully edited file(s)",
})
}
func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.FileEdit) (int, error) {
if path == "" {
return http.StatusBadRequest, xerrors.New("\"path\" is required")
}
if !filepath.IsAbs(path) {
return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
}
if len(edits) == 0 {
return http.StatusBadRequest, xerrors.New("must specify at least one edit")
}
f, err := a.filesystem.Open(path)
if err != nil {
status := http.StatusInternalServerError
switch {
case errors.Is(err, os.ErrNotExist):
status = http.StatusNotFound
case errors.Is(err, os.ErrPermission):
status = http.StatusForbidden
}
return status, err
}
defer f.Close()
stat, err := f.Stat()
if err != nil {
return http.StatusInternalServerError, err
}
if stat.IsDir() {
return http.StatusBadRequest, xerrors.Errorf("open %s: not a file", path)
}
transforms := make([]transform.Transformer, len(edits))
for i, edit := range edits {
transforms[i] = replace.String(edit.Search, edit.Replace)
}
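// The transformers are chained, so each edit sees the output of the previous
// one: with edits {foo->bar, bar->qux}, the input "foo bar" becomes "qux qux".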
// Create an adjacent file to ensure it will be on the same device and can be
// moved atomically.
tmpfile, err := afero.TempFile(a.filesystem, filepath.Dir(path), filepath.Base(path))
if err != nil {
return http.StatusInternalServerError, err
}
defer tmpfile.Close()
_, err = io.Copy(tmpfile, replace.Chain(f, transforms...))
if err != nil {
if rerr := a.filesystem.Remove(tmpfile.Name()); rerr != nil {
a.logger.Warn(ctx, "unable to clean up temp file", slog.Error(rerr))
}
return http.StatusInternalServerError, xerrors.Errorf("edit %s: %w", path, err)
}
err = a.filesystem.Rename(tmpfile.Name(), path)
if err != nil {
return http.StatusInternalServerError, err
}
return 0, nil
}
-722
@@ -1,722 +0,0 @@
package agent_test
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"os"
"path/filepath"
"runtime"
"syscall"
"testing"
"github.com/spf13/afero"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/agent"
"github.com/coder/coder/v2/agent/agenttest"
"github.com/coder/coder/v2/coderd/coderdtest"
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/codersdk/workspacesdk"
"github.com/coder/coder/v2/testutil"
)
type testFs struct {
afero.Fs
// intercept can return an error for testing when a call fails.
intercept func(call, file string) error
}
func newTestFs(base afero.Fs, intercept func(call, file string) error) *testFs {
return &testFs{
Fs: base,
intercept: intercept,
}
}
func (fs *testFs) Open(name string) (afero.File, error) {
if err := fs.intercept("open", name); err != nil {
return nil, err
}
return fs.Fs.Open(name)
}
func (fs *testFs) Create(name string) (afero.File, error) {
if err := fs.intercept("create", name); err != nil {
return nil, err
}
// Unlike os, afero lets you create files where directories already exist and
// lets you nest them underneath files, somehow.
stat, err := fs.Fs.Stat(name)
if err == nil && stat.IsDir() {
return nil, &os.PathError{
Op: "open",
Path: name,
Err: syscall.EISDIR,
}
}
stat, err = fs.Fs.Stat(filepath.Dir(name))
if err == nil && !stat.IsDir() {
return nil, &os.PathError{
Op: "open",
Path: name,
Err: syscall.ENOTDIR,
}
}
return fs.Fs.Create(name)
}
func (fs *testFs) MkdirAll(name string, mode os.FileMode) error {
if err := fs.intercept("mkdirall", name); err != nil {
return err
}
// Unlike os, afero lets you create directories where files already exist and
// lets you nest them underneath files somehow.
stat, err := fs.Fs.Stat(filepath.Dir(name))
if err == nil && !stat.IsDir() {
return &os.PathError{
Op: "mkdir",
Path: name,
Err: syscall.ENOTDIR,
}
}
stat, err = fs.Fs.Stat(name)
if err == nil && !stat.IsDir() {
return &os.PathError{
Op: "mkdir",
Path: name,
Err: syscall.ENOTDIR,
}
}
return fs.Fs.MkdirAll(name, mode)
}
func (fs *testFs) Rename(oldName, newName string) error {
if err := fs.intercept("rename", newName); err != nil {
return err
}
return fs.Fs.Rename(oldName, newName)
}
func TestReadFile(t *testing.T) {
t.Parallel()
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms")
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath {
return os.ErrPermission
}
return nil
})
})
dirPath := filepath.Join(tmpdir, "a-directory")
err := fs.MkdirAll(dirPath, 0o755)
require.NoError(t, err)
filePath := filepath.Join(tmpdir, "file")
err = afero.WriteFile(fs, filePath, []byte("content"), 0o644)
require.NoError(t, err)
imagePath := filepath.Join(tmpdir, "file.png")
err = afero.WriteFile(fs, imagePath, []byte("not really an image"), 0o644)
require.NoError(t, err)
tests := []struct {
name string
path string
limit int64
offset int64
bytes []byte
mimeType string
errCode int
error string
}{
{
name: "NoPath",
path: "",
errCode: http.StatusBadRequest,
error: "\"path\" is required",
},
{
name: "RelativePathDotSlash",
path: "./relative",
errCode: http.StatusBadRequest,
error: "file path must be absolute",
},
{
name: "RelativePath",
path: "also-relative",
errCode: http.StatusBadRequest,
error: "file path must be absolute",
},
{
name: "NegativeLimit",
path: filePath,
limit: -10,
errCode: http.StatusBadRequest,
error: "value is negative",
},
{
name: "NegativeOffset",
path: filePath,
offset: -10,
errCode: http.StatusBadRequest,
error: "value is negative",
},
{
name: "NonExistent",
path: filepath.Join(tmpdir, "does-not-exist"),
errCode: http.StatusNotFound,
error: "file does not exist",
},
{
name: "IsDir",
path: dirPath,
errCode: http.StatusBadRequest,
error: "not a file",
},
{
name: "NoPermissions",
path: noPermsFilePath,
errCode: http.StatusForbidden,
error: "permission denied",
},
{
name: "Defaults",
path: filePath,
bytes: []byte("content"),
mimeType: "application/octet-stream",
},
{
name: "Limit1",
path: filePath,
limit: 1,
bytes: []byte("c"),
mimeType: "application/octet-stream",
},
{
name: "Offset1",
path: filePath,
offset: 1,
bytes: []byte("ontent"),
mimeType: "application/octet-stream",
},
{
name: "Limit1Offset2",
path: filePath,
limit: 1,
offset: 2,
bytes: []byte("n"),
mimeType: "application/octet-stream",
},
{
name: "Limit7Offset0",
path: filePath,
limit: 7,
offset: 0,
bytes: []byte("content"),
mimeType: "application/octet-stream",
},
{
name: "Limit100",
path: filePath,
limit: 100,
bytes: []byte("content"),
mimeType: "application/octet-stream",
},
{
name: "Offset7",
path: filePath,
offset: 7,
bytes: []byte{},
mimeType: "application/octet-stream",
},
{
name: "Offset100",
path: filePath,
offset: 100,
bytes: []byte{},
mimeType: "application/octet-stream",
},
{
name: "MimeTypePng",
path: imagePath,
bytes: []byte("not really an image"),
mimeType: "image/png",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
reader, mimeType, err := conn.ReadFile(ctx, tt.path, tt.offset, tt.limit)
if tt.errCode != 0 {
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
require.Contains(t, cerr.Error(), tt.error)
require.Equal(t, tt.errCode, cerr.StatusCode())
} else {
require.NoError(t, err)
defer reader.Close()
bytes, err := io.ReadAll(reader)
require.NoError(t, err)
require.Equal(t, tt.bytes, bytes)
require.Equal(t, tt.mimeType, mimeType)
}
})
}
}
func TestWriteFile(t *testing.T) {
t.Parallel()
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
noPermsDirPath := filepath.Join(tmpdir, "no-perms-dir")
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath || file == noPermsDirPath {
return os.ErrPermission
}
return nil
})
})
dirPath := filepath.Join(tmpdir, "directory")
err := fs.MkdirAll(dirPath, 0o755)
require.NoError(t, err)
filePath := filepath.Join(tmpdir, "file")
err = afero.WriteFile(fs, filePath, []byte("content"), 0o644)
require.NoError(t, err)
notDirErr := "not a directory"
if runtime.GOOS == "windows" {
notDirErr = "cannot find the path"
}
tests := []struct {
name string
path string
bytes []byte
errCode int
error string
}{
{
name: "NoPath",
path: "",
errCode: http.StatusBadRequest,
error: "\"path\" is required",
},
{
name: "RelativePathDotSlash",
path: "./relative",
errCode: http.StatusBadRequest,
error: "file path must be absolute",
},
{
name: "RelativePath",
path: "also-relative",
errCode: http.StatusBadRequest,
error: "file path must be absolute",
},
{
name: "NonExistent",
path: filepath.Join(tmpdir, "/nested/does-not-exist"),
bytes: []byte("now it does exist"),
},
{
name: "IsDir",
path: dirPath,
errCode: http.StatusBadRequest,
error: "is a directory",
},
{
name: "IsNotDir",
path: filepath.Join(filePath, "file2"),
errCode: http.StatusBadRequest,
error: notDirErr,
},
{
name: "NoPermissionsFile",
path: noPermsFilePath,
errCode: http.StatusForbidden,
error: "permission denied",
},
{
name: "NoPermissionsDir",
path: filepath.Join(noPermsDirPath, "within-no-perm-dir"),
errCode: http.StatusForbidden,
error: "permission denied",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
reader := bytes.NewReader(tt.bytes)
err := conn.WriteFile(ctx, tt.path, reader)
if tt.errCode != 0 {
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
require.Contains(t, cerr.Error(), tt.error)
require.Equal(t, tt.errCode, cerr.StatusCode())
} else {
require.NoError(t, err)
b, err := afero.ReadFile(fs, tt.path)
require.NoError(t, err)
require.Equal(t, tt.bytes, b)
}
})
}
}
func TestEditFiles(t *testing.T) {
t.Parallel()
tmpdir := os.TempDir()
noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
failRenameFilePath := filepath.Join(tmpdir, "fail-rename")
//nolint:dogsled
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
if file == noPermsFilePath {
return &os.PathError{
Op: call,
Path: file,
Err: os.ErrPermission,
}
} else if file == failRenameFilePath && call == "rename" {
return xerrors.New("rename failed")
}
return nil
})
})
dirPath := filepath.Join(tmpdir, "directory")
err := fs.MkdirAll(dirPath, 0o755)
require.NoError(t, err)
tests := []struct {
name string
contents map[string]string
edits []workspacesdk.FileEdits
expected map[string]string
errCode int
errors []string
}{
{
name: "NoFiles",
errCode: http.StatusBadRequest,
errors: []string{"must specify at least one file"},
},
{
name: "NoPath",
errCode: http.StatusBadRequest,
edits: []workspacesdk.FileEdits{
{
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
},
},
},
errors: []string{"\"path\" is required"},
},
{
name: "RelativePathDotSlash",
edits: []workspacesdk.FileEdits{
{
Path: "./relative",
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
},
},
},
errCode: http.StatusBadRequest,
errors: []string{"file path must be absolute"},
},
{
name: "RelativePath",
edits: []workspacesdk.FileEdits{
{
Path: "also-relative",
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
},
},
},
errCode: http.StatusBadRequest,
errors: []string{"file path must be absolute"},
},
{
name: "NoEdits",
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "no-edits"),
},
},
errCode: http.StatusBadRequest,
errors: []string{"must specify at least one edit"},
},
{
name: "NonExistent",
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "does-not-exist"),
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
},
},
},
errCode: http.StatusNotFound,
errors: []string{"file does not exist"},
},
{
name: "IsDir",
edits: []workspacesdk.FileEdits{
{
Path: dirPath,
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
},
},
},
errCode: http.StatusBadRequest,
errors: []string{"not a file"},
},
{
name: "NoPermissions",
edits: []workspacesdk.FileEdits{
{
Path: noPermsFilePath,
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
},
},
},
errCode: http.StatusForbidden,
errors: []string{"permission denied"},
},
{
name: "FailRename",
contents: map[string]string{failRenameFilePath: "foo bar"},
edits: []workspacesdk.FileEdits{
{
Path: failRenameFilePath,
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
},
},
},
errCode: http.StatusInternalServerError,
errors: []string{"rename failed"},
},
{
name: "Edit1",
contents: map[string]string{filepath.Join(tmpdir, "edit1"): "foo bar"},
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "edit1"),
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
},
},
},
expected: map[string]string{filepath.Join(tmpdir, "edit1"): "bar bar"},
},
{
name: "EditEdit", // Later edits apply to the output of earlier edits.
contents: map[string]string{filepath.Join(tmpdir, "edit-edit"): "foo bar"},
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "edit-edit"),
Edits: []workspacesdk.FileEdit{
{
Search: "foo",
Replace: "bar",
},
{
Search: "bar",
Replace: "qux",
},
},
},
},
expected: map[string]string{filepath.Join(tmpdir, "edit-edit"): "qux qux"},
},
{
name: "Multiline",
contents: map[string]string{filepath.Join(tmpdir, "multiline"): "foo\nbar\nbaz\nqux"},
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "multiline"),
Edits: []workspacesdk.FileEdit{
{
Search: "bar\nbaz",
Replace: "frob",
},
},
},
},
expected: map[string]string{filepath.Join(tmpdir, "multiline"): "foo\nfrob\nqux"},
},
{
name: "Multifile",
contents: map[string]string{
filepath.Join(tmpdir, "file1"): "file 1",
filepath.Join(tmpdir, "file2"): "file 2",
filepath.Join(tmpdir, "file3"): "file 3",
},
edits: []workspacesdk.FileEdits{
{
Path: filepath.Join(tmpdir, "file1"),
Edits: []workspacesdk.FileEdit{
{
Search: "file",
Replace: "edited1",
},
},
},
{
Path: filepath.Join(tmpdir, "file2"),
Edits: []workspacesdk.FileEdit{
{
Search: "file",
Replace: "edited2",
},
},
},
{
Path: filepath.Join(tmpdir, "file3"),
Edits: []workspacesdk.FileEdit{
{
Search: "file",
Replace: "edited3",
},
},
},
},
expected: map[string]string{
filepath.Join(tmpdir, "file1"): "edited1 1",
filepath.Join(tmpdir, "file2"): "edited2 2",
filepath.Join(tmpdir, "file3"): "edited3 3",
},
},
{
name: "MultiError",
contents: map[string]string{
filepath.Join(tmpdir, "file8"): "file 8",
},
edits: []workspacesdk.FileEdits{
{
Path: noPermsFilePath,
Edits: []workspacesdk.FileEdit{
{
Search: "file",
Replace: "edited7",
},
},
},
{
Path: filepath.Join(tmpdir, "file8"),
Edits: []workspacesdk.FileEdit{
{
Search: "file",
Replace: "edited8",
},
},
},
{
Path: filepath.Join(tmpdir, "file9"),
Edits: []workspacesdk.FileEdit{
{
Search: "file",
Replace: "edited9",
},
},
},
},
expected: map[string]string{
filepath.Join(tmpdir, "file8"): "edited8 8",
},
// Higher status codes will override lower ones, so in this case the 404
// takes priority over the 403.
errCode: http.StatusNotFound,
errors: []string{
fmt.Sprintf("%s: permission denied", noPermsFilePath),
"file9: file does not exist",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
defer cancel()
for path, content := range tt.contents {
err := afero.WriteFile(fs, path, []byte(content), 0o644)
require.NoError(t, err)
}
err := conn.EditFiles(ctx, workspacesdk.FileEditRequest{Files: tt.edits})
if tt.errCode != 0 {
require.Error(t, err)
cerr := coderdtest.SDKError(t, err)
for _, want := range tt.errors {
require.Contains(t, cerr.Error(), want)
}
require.Equal(t, tt.errCode, cerr.StatusCode())
} else {
require.NoError(t, err)
}
for path, expect := range tt.expected {
b, err := afero.ReadFile(fs, path)
require.NoError(t, err)
require.Equal(t, expect, string(b))
}
})
}
}
@@ -1,350 +0,0 @@
package backedpipe
import (
"context"
"io"
"sync"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/singleflight"
"golang.org/x/xerrors"
)
var (
ErrPipeClosed = xerrors.New("pipe is closed")
ErrPipeAlreadyConnected = xerrors.New("pipe is already connected")
ErrReconnectionInProgress = xerrors.New("reconnection already in progress")
ErrReconnectFailed = xerrors.New("reconnect failed")
ErrInvalidSequenceNumber = xerrors.New("remote sequence number exceeds local sequence")
ErrReconnectWriterFailed = xerrors.New("reconnect writer failed")
)
// connectionState represents the current state of the BackedPipe connection.
type connectionState int
const (
// connected indicates the pipe is connected and operational.
connected connectionState = iota
// disconnected indicates the pipe is not connected but not closed.
disconnected
// reconnecting indicates a reconnection attempt is in progress.
reconnecting
// closed indicates the pipe is permanently closed.
closed
)
// ErrorEvent represents an error from a reader or writer with connection generation info.
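// For example, a reader error tagged with Generation 3 that arrives after the
// pipe has already reconnected (current generation 4) is stale and is ignored.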
type ErrorEvent struct {
Err error
Component string // "reader" or "writer"
Generation uint64 // connection generation when error occurred
}
const (
// Default buffer capacity used by the writer - 64MB
DefaultBufferSize = 64 * 1024 * 1024
)
// Reconnector is an interface for establishing connections when the BackedPipe needs to reconnect.
// Implementations should:
// 1. Establish a new connection to the remote side
// 2. Exchange sequence numbers with the remote side
// 3. Return the new connection and the remote's reader sequence number
//
// The readerSeqNum parameter is the local reader's current sequence number
// (total bytes successfully read from the remote). This must be sent to the
// remote so it can replay its data to us starting from this number.
//
// The returned remoteReaderSeqNum should be the remote side's reader sequence
// number (how many bytes of our outbound data it has successfully read). This
// informs our writer where to resume (i.e., which bytes to replay to the remote).
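//
// A hypothetical implementation over TCP might look like the sketch below;
// the 8-byte big-endian sequence-number handshake is an illustrative
// assumption, not a wire format defined by this package:
//
//	type tcpReconnector struct{ addr string }
//
//	func (r *tcpReconnector) Reconnect(ctx context.Context, readerSeqNum uint64) (io.ReadWriteCloser, uint64, error) {
//		conn, err := (&net.Dialer{}).DialContext(ctx, "tcp", r.addr)
//		if err != nil {
//			return nil, 0, err
//		}
//		// Send our reader sequence number, then read the remote's.
//		if err := binary.Write(conn, binary.BigEndian, readerSeqNum); err != nil {
//			_ = conn.Close()
//			return nil, 0, err
//		}
//		var remoteSeq uint64
//		if err := binary.Read(conn, binary.BigEndian, &remoteSeq); err != nil {
//			_ = conn.Close()
//			return nil, 0, err
//		}
//		return conn, remoteSeq, nil
//	}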
type Reconnector interface {
Reconnect(ctx context.Context, readerSeqNum uint64) (conn io.ReadWriteCloser, remoteReaderSeqNum uint64, err error)
}
// BackedPipe provides a reliable bidirectional byte stream over unreliable network connections.
// It orchestrates a BackedReader and BackedWriter to provide transparent reconnection
// and data replay capabilities.
type BackedPipe struct {
ctx context.Context
cancel context.CancelFunc
mu sync.RWMutex
reader *BackedReader
writer *BackedWriter
reconnector Reconnector
conn io.ReadWriteCloser
// State machine
state connectionState
connGen uint64 // Increments on each successful reconnection
// Unified error handling with generation filtering
errChan chan ErrorEvent
// singleflight group to dedupe concurrent ForceReconnect calls
sf singleflight.Group
// Track first error per generation to avoid duplicate reconnections
lastErrorGen uint64
}
// NewBackedPipe creates a new BackedPipe with default options and the specified reconnector.
// The pipe starts disconnected and must be connected using Connect().
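// Typical usage (illustrative sketch):
//
//	bp := NewBackedPipe(ctx, reconnector)
//	if err := bp.Connect(); err != nil {
//		return err
//	}
//	// bp now reads and writes like a single io.ReadWriteCloser that
//	// transparently survives reconnections of the underlying transport.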
func NewBackedPipe(ctx context.Context, reconnector Reconnector) *BackedPipe {
pipeCtx, cancel := context.WithCancel(ctx)
errChan := make(chan ErrorEvent, 1)
bp := &BackedPipe{
ctx: pipeCtx,
cancel: cancel,
reconnector: reconnector,
state: disconnected,
connGen: 0, // Start with generation 0
errChan: errChan,
}
// Create reader and writer with typed error channel for generation-aware error reporting
bp.reader = NewBackedReader(errChan)
bp.writer = NewBackedWriter(DefaultBufferSize, errChan)
// Start error handler goroutine
go bp.handleErrors()
return bp
}
// Connect establishes the initial connection using the reconnect function.
func (bp *BackedPipe) Connect() error {
bp.mu.Lock()
defer bp.mu.Unlock()
if bp.state == closed {
return ErrPipeClosed
}
if bp.state == connected {
return ErrPipeAlreadyConnected
}
// Use internal context for the actual reconnect operation to ensure
// Close() reliably cancels any in-flight attempt.
return bp.reconnectLocked()
}
// Read implements io.Reader by delegating to the BackedReader.
func (bp *BackedPipe) Read(p []byte) (int, error) {
return bp.reader.Read(p)
}
// Write implements io.Writer by delegating to the BackedWriter.
func (bp *BackedPipe) Write(p []byte) (int, error) {
bp.mu.RLock()
writer := bp.writer
state := bp.state
bp.mu.RUnlock()
if state == closed {
return 0, io.EOF
}
return writer.Write(p)
}
// Close closes the pipe and all underlying connections.
func (bp *BackedPipe) Close() error {
bp.mu.Lock()
defer bp.mu.Unlock()
if bp.state == closed {
return nil
}
bp.state = closed
bp.cancel() // Cancel main context
// Close all components in parallel to avoid deadlocks
//
// IMPORTANT: The connection must be closed first to unblock any
// readers or writers that might be holding the mutex on Read/Write
var g errgroup.Group
if bp.conn != nil {
conn := bp.conn
g.Go(func() error {
return conn.Close()
})
bp.conn = nil
}
if bp.reader != nil {
reader := bp.reader
g.Go(func() error {
return reader.Close()
})
}
if bp.writer != nil {
writer := bp.writer
g.Go(func() error {
return writer.Close()
})
}
// Wait for all close operations to complete and return any error
return g.Wait()
}
// Connected returns whether the pipe is currently connected.
func (bp *BackedPipe) Connected() bool {
bp.mu.RLock()
defer bp.mu.RUnlock()
return bp.state == connected && bp.reader.Connected() && bp.writer.Connected()
}
// reconnectLocked handles the reconnection logic. Must be called with write lock held.
func (bp *BackedPipe) reconnectLocked() error {
if bp.state == reconnecting {
return ErrReconnectionInProgress
}
bp.state = reconnecting
defer func() {
// Only reset to disconnected if we're still in reconnecting state
// (successful reconnection will set state to connected)
if bp.state == reconnecting {
bp.state = disconnected
}
}()
// Close existing connection if any
if bp.conn != nil {
_ = bp.conn.Close()
bp.conn = nil
}
// Increment the generation and update both reader and writer.
// We do it now to track even the connections that fail during
// Reconnect.
bp.connGen++
bp.reader.SetGeneration(bp.connGen)
bp.writer.SetGeneration(bp.connGen)
// Reconnect reader and writer
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go bp.reader.Reconnect(seqNum, newR)
// Get the precise reader sequence number from the reader while it holds its lock
readerSeqNum, ok := <-seqNum
if !ok {
// Reader was closed during reconnection
return ErrReconnectFailed
}
// Perform reconnect using the exact sequence number we just received
conn, remoteReaderSeqNum, err := bp.reconnector.Reconnect(bp.ctx, readerSeqNum)
if err != nil {
// Unblock reader reconnect
newR <- nil
return ErrReconnectFailed
}
// Provide the new connection to the reader (reader still holds its lock)
newR <- conn
// Replay our outbound data from the remote's reader sequence number
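// For example, if we have written 100 bytes in total and the remote reports
// remoteReaderSeqNum=80, the writer resends bytes 80..99 before new writes.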
writerReconnectErr := bp.writer.Reconnect(remoteReaderSeqNum, conn)
if writerReconnectErr != nil {
return ErrReconnectWriterFailed
}
// Success - update state
bp.conn = conn
bp.state = connected
return nil
}
// handleErrors listens for connection errors from reader/writer and triggers reconnection.
// It filters errors from old connections and ensures only the first error per generation
// triggers reconnection.
func (bp *BackedPipe) handleErrors() {
for {
select {
case <-bp.ctx.Done():
return
case errorEvt := <-bp.errChan:
bp.handleConnectionError(errorEvt)
}
}
}
// handleConnectionError handles errors from either reader or writer components.
// It filters errors from old connections and ensures only one reconnection per generation.
func (bp *BackedPipe) handleConnectionError(errorEvt ErrorEvent) {
bp.mu.Lock()
defer bp.mu.Unlock()
// Skip if already closed
if bp.state == closed {
return
}
// Filter errors from old connections (lower generation)
if errorEvt.Generation < bp.connGen {
return
}
// Skip if not connected (already disconnected or reconnecting)
if bp.state != connected {
return
}
// Skip if we've already seen an error for this generation
if bp.lastErrorGen >= errorEvt.Generation {
return
}
// This is the first error for this generation
bp.lastErrorGen = errorEvt.Generation
// Mark as disconnected
bp.state = disconnected
// Try to reconnect using internal context
reconnectErr := bp.reconnectLocked()
if reconnectErr != nil {
// Reconnection failed - log or handle as needed
// For now, we'll just continue and wait for manual reconnection
_ = errorEvt.Err // Use the original error from the component
_ = errorEvt.Component // Component info available for potential logging by higher layers
}
}
// ForceReconnect forces a reconnection attempt immediately. It can be used to
// switch over proactively once a new connection is available, and concurrent
// calls are deduplicated so only one reconnection attempt runs at a time.
func (bp *BackedPipe) ForceReconnect() error {
// Deduplicate concurrent ForceReconnect calls so only one reconnection
// attempt runs at a time from this API. Use the pipe's internal context
// to ensure Close() cancels any in-flight attempt.
_, err, _ := bp.sf.Do("force-reconnect", func() (interface{}, error) {
bp.mu.Lock()
defer bp.mu.Unlock()
if bp.state == closed {
return nil, io.EOF
}
// Don't force reconnect if already reconnecting
if bp.state == reconnecting {
return nil, ErrReconnectionInProgress
}
return nil, bp.reconnectLocked()
})
return err
}
@@ -1,989 +0,0 @@
package backedpipe_test
import (
"bytes"
"context"
"io"
"sync"
"testing"
"time"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/agent/immortalstreams/backedpipe"
"github.com/coder/coder/v2/testutil"
)
// mockConnection implements io.ReadWriteCloser for testing
type mockConnection struct {
mu sync.Mutex
readBuffer bytes.Buffer
writeBuffer bytes.Buffer
closed bool
readError error
writeError error
closeError error
readFunc func([]byte) (int, error)
writeFunc func([]byte) (int, error)
seqNum uint64
}
func newMockConnection() *mockConnection {
return &mockConnection{}
}
func (mc *mockConnection) Read(p []byte) (int, error) {
mc.mu.Lock()
defer mc.mu.Unlock()
if mc.readFunc != nil {
return mc.readFunc(p)
}
if mc.readError != nil {
return 0, mc.readError
}
return mc.readBuffer.Read(p)
}
func (mc *mockConnection) Write(p []byte) (int, error) {
mc.mu.Lock()
defer mc.mu.Unlock()
if mc.writeFunc != nil {
return mc.writeFunc(p)
}
if mc.writeError != nil {
return 0, mc.writeError
}
return mc.writeBuffer.Write(p)
}
func (mc *mockConnection) Close() error {
mc.mu.Lock()
defer mc.mu.Unlock()
mc.closed = true
return mc.closeError
}
func (mc *mockConnection) WriteString(s string) {
mc.mu.Lock()
defer mc.mu.Unlock()
_, _ = mc.readBuffer.WriteString(s)
}
func (mc *mockConnection) ReadString() string {
mc.mu.Lock()
defer mc.mu.Unlock()
return mc.writeBuffer.String()
}
func (mc *mockConnection) SetReadError(err error) {
mc.mu.Lock()
defer mc.mu.Unlock()
mc.readError = err
}
func (mc *mockConnection) SetWriteError(err error) {
mc.mu.Lock()
defer mc.mu.Unlock()
mc.writeError = err
}
func (mc *mockConnection) Reset() {
mc.mu.Lock()
defer mc.mu.Unlock()
mc.readBuffer.Reset()
mc.writeBuffer.Reset()
mc.readError = nil
mc.writeError = nil
mc.closed = false
}
// mockReconnector implements the Reconnector interface for testing
type mockReconnector struct {
mu sync.Mutex
connections []*mockConnection
connectionIndex int
callCount int
signalChan chan struct{}
}
// Reconnect implements the Reconnector interface
func (m *mockReconnector) Reconnect(ctx context.Context, readerSeqNum uint64) (io.ReadWriteCloser, uint64, error) {
m.mu.Lock()
defer m.mu.Unlock()
m.callCount++
if m.connectionIndex >= len(m.connections) {
return nil, 0, xerrors.New("no more connections available")
}
conn := m.connections[m.connectionIndex]
m.connectionIndex++
// Signal when reconnection happens
if m.connectionIndex > 1 {
select {
case m.signalChan <- struct{}{}:
default:
}
}
// Determine remoteReaderSeqNum (how many bytes of our outbound data the remote has read)
var remoteReaderSeqNum uint64
switch {
case m.callCount == 1:
remoteReaderSeqNum = 0
case conn.seqNum != 0:
remoteReaderSeqNum = conn.seqNum
default:
// Default to 0 if unspecified
remoteReaderSeqNum = 0
}
return conn, remoteReaderSeqNum, nil
}
// GetCallCount returns the current call count in a thread-safe manner
func (m *mockReconnector) GetCallCount() int {
m.mu.Lock()
defer m.mu.Unlock()
return m.callCount
}
// mockReconnectFunc creates a unified reconnector with all behaviors enabled
func mockReconnectFunc(connections ...*mockConnection) (*mockReconnector, chan struct{}) {
signalChan := make(chan struct{}, 1)
reconnector := &mockReconnector{
connections: connections,
signalChan: signalChan,
}
return reconnector, signalChan
}
// blockingReconnector is a reconnector that blocks on a channel for deterministic testing
type blockingReconnector struct {
conn1 *mockConnection
conn2 *mockConnection
callCount int
blockChan <-chan struct{}
blockedChan chan struct{}
mu sync.Mutex
signalOnce sync.Once // Ensure we only signal once for the first actual reconnect
}
func (b *blockingReconnector) Reconnect(ctx context.Context, readerSeqNum uint64) (io.ReadWriteCloser, uint64, error) {
b.mu.Lock()
b.callCount++
currentCall := b.callCount
b.mu.Unlock()
if currentCall == 1 {
// Initial connect
return b.conn1, 0, nil
}
// Signal that we're about to block, but only once for the first reconnect attempt
// This ensures we properly test singleflight deduplication
b.signalOnce.Do(func() {
select {
case b.blockedChan <- struct{}{}:
default:
// If channel is full, don't block
}
})
// For subsequent calls, block until channel is closed
select {
case <-b.blockChan:
// Channel closed, proceed with reconnection
case <-ctx.Done():
return nil, 0, ctx.Err()
}
return b.conn2, 0, nil
}
// GetCallCount returns the current call count in a thread-safe manner
func (b *blockingReconnector) GetCallCount() int {
b.mu.Lock()
defer b.mu.Unlock()
return b.callCount
}
func mockBlockingReconnectFunc(conn1, conn2 *mockConnection, blockChan <-chan struct{}) (*blockingReconnector, chan struct{}) {
blockedChan := make(chan struct{}, 1)
reconnector := &blockingReconnector{
conn1: conn1,
conn2: conn2,
blockChan: blockChan,
blockedChan: blockedChan,
}
return reconnector, blockedChan
}
// eofTestReconnector is a custom reconnector for the EOF test case
type eofTestReconnector struct {
mu sync.Mutex
conn1 io.ReadWriteCloser
conn2 io.ReadWriteCloser
callCount int
}
func (e *eofTestReconnector) Reconnect(ctx context.Context, readerSeqNum uint64) (io.ReadWriteCloser, uint64, error) {
e.mu.Lock()
defer e.mu.Unlock()
e.callCount++
if e.callCount == 1 {
return e.conn1, 0, nil
}
if e.callCount == 2 {
// Second call is the reconnection after EOF
// Return 5 to indicate remote has read all 5 bytes of "hello"
return e.conn2, 5, nil
}
return nil, 0, xerrors.New("no more connections")
}
// GetCallCount returns the current call count in a thread-safe manner
func (e *eofTestReconnector) GetCallCount() int {
e.mu.Lock()
defer e.mu.Unlock()
return e.callCount
}
func TestBackedPipe_NewBackedPipe(t *testing.T) {
t.Parallel()
ctx := context.Background()
reconnectFn, _ := mockReconnectFunc(newMockConnection())
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
defer bp.Close()
require.NotNil(t, bp)
require.False(t, bp.Connected())
}
func TestBackedPipe_Connect(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn := newMockConnection()
reconnector, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnector)
defer bp.Close()
err := bp.Connect()
require.NoError(t, err)
require.True(t, bp.Connected())
require.Equal(t, 1, reconnector.GetCallCount())
}
func TestBackedPipe_ConnectAlreadyConnected(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
defer bp.Close()
err := bp.Connect()
require.NoError(t, err)
// Second connect should fail
err = bp.Connect()
require.Error(t, err)
require.ErrorIs(t, err, backedpipe.ErrPipeAlreadyConnected)
}
func TestBackedPipe_ConnectAfterClose(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
err := bp.Close()
require.NoError(t, err)
err = bp.Connect()
require.Error(t, err)
require.ErrorIs(t, err, backedpipe.ErrPipeClosed)
}
func TestBackedPipe_BasicReadWrite(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
defer bp.Close()
err := bp.Connect()
require.NoError(t, err)
// Write data
n, err := bp.Write([]byte("hello"))
require.NoError(t, err)
require.Equal(t, 5, n)
// Simulate data coming back
conn.WriteString("world")
// Read data
buf := make([]byte, 10)
n, err = bp.Read(buf)
require.NoError(t, err)
require.Equal(t, 5, n)
require.Equal(t, "world", string(buf[:n]))
}
func TestBackedPipe_WriteBeforeConnect(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
defer bp.Close()
// Write before connecting should block
writeComplete := make(chan error, 1)
go func() {
_, err := bp.Write([]byte("hello"))
writeComplete <- err
}()
// Verify write is blocked
select {
case <-writeComplete:
t.Fatal("Write should have blocked when disconnected")
case <-time.After(100 * time.Millisecond):
// Expected - write is blocked
}
// Connect should unblock the write
err := bp.Connect()
require.NoError(t, err)
// Write should now complete
err = testutil.RequireReceive(ctx, t, writeComplete)
require.NoError(t, err)
// Check that data was replayed to connection
require.Equal(t, "hello", conn.ReadString())
}
func TestBackedPipe_ReadBlocksWhenDisconnected(t *testing.T) {
t.Parallel()
ctx := context.Background()
testCtx := testutil.Context(t, testutil.WaitShort)
reconnectFn, _ := mockReconnectFunc(newMockConnection())
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
defer bp.Close()
// Start a read that should block
readDone := make(chan struct{})
readStarted := make(chan struct{}, 1)
var readErr error
go func() {
defer close(readDone)
readStarted <- struct{}{} // Signal that we're about to start the read
buf := make([]byte, 10)
_, readErr = bp.Read(buf)
}()
// Wait for the goroutine to start
testutil.TryReceive(testCtx, t, readStarted)
// Ensure the read is actually blocked by verifying it hasn't completed
require.Eventually(t, func() bool {
select {
case <-readDone:
t.Fatal("Read should be blocked when disconnected")
return false
default:
// Good, still blocked
return true
}
}, testutil.WaitShort, testutil.IntervalMedium)
// Close should unblock the read
bp.Close()
testutil.TryReceive(testCtx, t, readDone)
require.Equal(t, io.EOF, readErr)
}
func TestBackedPipe_Reconnection(t *testing.T) {
t.Parallel()
ctx := context.Background()
testCtx := testutil.Context(t, testutil.WaitShort)
conn1 := newMockConnection()
conn2 := newMockConnection()
conn2.seqNum = 17 // Remote has received 17 bytes, so replay from sequence 17
reconnectFn, signalChan := mockReconnectFunc(conn1, conn2)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
defer bp.Close()
// Initial connect
err := bp.Connect()
require.NoError(t, err)
// Write some data before failure
bp.Write([]byte("before disconnect***"))
// Simulate connection failure
conn1.SetReadError(xerrors.New("connection lost"))
conn1.SetWriteError(xerrors.New("connection lost"))
// Trigger a write to cause the pipe to notice the failure
_, _ = bp.Write([]byte("trigger failure "))
testutil.RequireReceive(testCtx, t, signalChan)
// Wait for reconnection to complete
require.Eventually(t, func() bool {
return bp.Connected()
}, testutil.WaitShort, testutil.IntervalFast, "pipe should reconnect")
replayedData := conn2.ReadString()
require.Equal(t, "***trigger failure ", replayedData, "Should replay exactly the data written after sequence 17")
// Verify that new writes work with the reconnected pipe
_, err = bp.Write([]byte("new data after reconnect"))
require.NoError(t, err)
// Read all data from the connection (replayed + new data)
allData := conn2.ReadString()
require.Equal(t, "***trigger failure new data after reconnect", allData, "Should have replayed data plus new data")
}
func TestBackedPipe_Close(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
err := bp.Connect()
require.NoError(t, err)
err = bp.Close()
require.NoError(t, err)
require.True(t, conn.closed)
// Operations after close should fail
_, err = bp.Read(make([]byte, 10))
require.Equal(t, io.EOF, err)
_, err = bp.Write([]byte("test"))
require.Equal(t, io.EOF, err)
}
func TestBackedPipe_CloseIdempotent(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
err := bp.Close()
require.NoError(t, err)
// Second close should be no-op
err = bp.Close()
require.NoError(t, err)
}
func TestBackedPipe_ReconnectFunctionFailure(t *testing.T) {
t.Parallel()
ctx := context.Background()
failingReconnector := &mockReconnector{
connections: nil, // No connections available
}
bp := backedpipe.NewBackedPipe(ctx, failingReconnector)
defer bp.Close()
err := bp.Connect()
require.Error(t, err)
require.ErrorIs(t, err, backedpipe.ErrReconnectFailed)
require.False(t, bp.Connected())
}
func TestBackedPipe_ForceReconnect(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn1 := newMockConnection()
conn2 := newMockConnection()
// Set conn2 sequence number to 9 to indicate remote has read all 9 bytes of "test data"
conn2.seqNum = 9
reconnector, _ := mockReconnectFunc(conn1, conn2)
bp := backedpipe.NewBackedPipe(ctx, reconnector)
defer bp.Close()
// Initial connect
err := bp.Connect()
require.NoError(t, err)
require.True(t, bp.Connected())
require.Equal(t, 1, reconnector.GetCallCount())
// Write some data to the first connection
_, err = bp.Write([]byte("test data"))
require.NoError(t, err)
require.Equal(t, "test data", conn1.ReadString())
// Force a reconnection
err = bp.ForceReconnect()
require.NoError(t, err)
require.True(t, bp.Connected())
require.Equal(t, 2, reconnector.GetCallCount())
// Since the mock returns the proper sequence number, no data should be replayed
// The new connection should be empty
require.Equal(t, "", conn2.ReadString())
// Verify that data can still be written and read after forced reconnection
_, err = bp.Write([]byte("new data"))
require.NoError(t, err)
require.Equal(t, "new data", conn2.ReadString())
// Verify that reads work with the new connection
conn2.WriteString("response data")
buf := make([]byte, 20)
n, err := bp.Read(buf)
require.NoError(t, err)
require.Equal(t, 13, n)
require.Equal(t, "response data", string(buf[:n]))
}
func TestBackedPipe_ForceReconnectWhenClosed(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
// Close the pipe first
err := bp.Close()
require.NoError(t, err)
// Try to force reconnect when closed
err = bp.ForceReconnect()
require.Error(t, err)
require.Equal(t, io.EOF, err)
}
func TestBackedPipe_StateTransitionsAndGenerationTracking(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn1 := newMockConnection()
conn2 := newMockConnection()
conn3 := newMockConnection()
reconnector, signalChan := mockReconnectFunc(conn1, conn2, conn3)
bp := backedpipe.NewBackedPipe(ctx, reconnector)
defer bp.Close()
// Initial state should be disconnected
require.False(t, bp.Connected())
// Connect should transition to connected
err := bp.Connect()
require.NoError(t, err)
require.True(t, bp.Connected())
require.Equal(t, 1, reconnector.GetCallCount())
// Write some data
_, err = bp.Write([]byte("test data gen 1"))
require.NoError(t, err)
// Simulate connection failure by setting errors on connection
conn1.SetReadError(xerrors.New("connection lost"))
conn1.SetWriteError(xerrors.New("connection lost"))
// Trigger a write to cause the pipe to notice the failure
_, _ = bp.Write([]byte("trigger failure"))
// Wait for reconnection signal
testutil.RequireReceive(testutil.Context(t, testutil.WaitShort), t, signalChan)
// Wait for reconnection to complete
require.Eventually(t, func() bool {
return bp.Connected()
}, testutil.WaitShort, testutil.IntervalFast, "should reconnect")
require.Equal(t, 2, reconnector.GetCallCount())
// Force another reconnection
err = bp.ForceReconnect()
require.NoError(t, err)
require.True(t, bp.Connected())
require.Equal(t, 3, reconnector.GetCallCount())
// Close should transition to closed state
err = bp.Close()
require.NoError(t, err)
require.False(t, bp.Connected())
// Operations on closed pipe should fail
err = bp.Connect()
require.Equal(t, backedpipe.ErrPipeClosed, err)
err = bp.ForceReconnect()
require.Equal(t, io.EOF, err)
}
func TestBackedPipe_GenerationFiltering(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn1 := newMockConnection()
conn2 := newMockConnection()
reconnector, _ := mockReconnectFunc(conn1, conn2)
bp := backedpipe.NewBackedPipe(ctx, reconnector)
defer bp.Close()
// Connect
err := bp.Connect()
require.NoError(t, err)
require.True(t, bp.Connected())
// Simulate multiple rapid errors from the same connection generation
// Only the first one should trigger reconnection
conn1.SetReadError(xerrors.New("error 1"))
conn1.SetWriteError(xerrors.New("error 2"))
// Trigger multiple errors quickly
var wg sync.WaitGroup
wg.Add(2)
go func() {
defer wg.Done()
_, _ = bp.Write([]byte("trigger error 1"))
}()
go func() {
defer wg.Done()
_, _ = bp.Write([]byte("trigger error 2"))
}()
// Wait for both writes to complete
wg.Wait()
// Wait for reconnection to complete
require.Eventually(t, func() bool {
return bp.Connected()
}, testutil.WaitShort, testutil.IntervalFast, "should reconnect once")
// Should have only reconnected once despite multiple errors
require.Equal(t, 2, reconnector.GetCallCount()) // Initial connect + 1 reconnect
}
func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
t.Parallel()
ctx := context.Background()
testCtx := testutil.Context(t, testutil.WaitShort)
// Create a blocking reconnector for deterministic testing
conn1 := newMockConnection()
conn2 := newMockConnection()
blockChan := make(chan struct{})
reconnector, blockedChan := mockBlockingReconnectFunc(conn1, conn2, blockChan)
bp := backedpipe.NewBackedPipe(ctx, reconnector)
defer bp.Close()
// Initial connect
err := bp.Connect()
require.NoError(t, err)
require.Equal(t, 1, reconnector.GetCallCount(), "should have exactly 1 call after initial connect")
// We'll use channels to coordinate the test execution:
// 1. Start all goroutines but have them wait
// 2. Release the first one and wait for it to block
// 3. Release the others while the first is still blocked
const numConcurrent = 3
startSignals := make([]chan struct{}, numConcurrent)
startedSignals := make([]chan struct{}, numConcurrent)
for i := range startSignals {
startSignals[i] = make(chan struct{})
startedSignals[i] = make(chan struct{})
}
errors := make([]error, numConcurrent)
var wg sync.WaitGroup
// Start all goroutines
for i := 0; i < numConcurrent; i++ {
wg.Add(1)
go func(idx int) {
defer wg.Done()
// Wait for the signal to start
<-startSignals[idx]
// Signal that we're about to call ForceReconnect
close(startedSignals[idx])
errors[idx] = bp.ForceReconnect()
}(i)
}
// Start the first ForceReconnect and wait for it to block
close(startSignals[0])
<-startedSignals[0]
// Wait for the first reconnect to actually start and block
testutil.RequireReceive(testCtx, t, blockedChan)
// Now start all the other ForceReconnect calls
// They should all join the same singleflight operation
for i := 1; i < numConcurrent; i++ {
close(startSignals[i])
}
// Wait for all additional goroutines to have started their calls
for i := 1; i < numConcurrent; i++ {
<-startedSignals[i]
}
// At this point, one reconnect has started and is blocked,
// and all other goroutines have called ForceReconnect and should be
// waiting on the same singleflight operation.
// Due to singleflight, only one reconnect should have been attempted.
require.Equal(t, 2, reconnector.GetCallCount(), "should have exactly 2 calls: initial connect + 1 reconnect due to singleflight")
// Release the blocking reconnect function
close(blockChan)
// Wait for all ForceReconnect calls to complete
wg.Wait()
// All calls should succeed (they share the same result from singleflight)
for i, err := range errors {
require.NoError(t, err, "ForceReconnect %d should succeed", i, err)
}
// Final verification: call count should still be exactly 2
require.Equal(t, 2, reconnector.GetCallCount(), "final call count should be exactly 2: initial connect + 1 singleflight reconnect")
}
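// The coalescing verified above is the singleflight pattern from
// golang.org/x/sync/singleflight: concurrent callers sharing a key get a
// single in-flight call and share its result. An illustrative sketch,
// assuming the pipe uses an equivalent mechanism internally (doReconnect is
// a hypothetical helper):
//
//	var g singleflight.Group
//	// All concurrent reconnect requests funnel through one call; every
//	// caller receives the error from that single execution.
//	_, err, _ := g.Do("reconnect", func() (interface{}, error) {
//		return nil, doReconnect()
//	})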
func TestBackedPipe_SingleReconnectionOnMultipleErrors(t *testing.T) {
t.Parallel()
ctx := context.Background()
testCtx := testutil.Context(t, testutil.WaitShort)
// Create connections for initial connect and reconnection
conn1 := newMockConnection()
conn2 := newMockConnection()
reconnector, signalChan := mockReconnectFunc(conn1, conn2)
bp := backedpipe.NewBackedPipe(ctx, reconnector)
defer bp.Close()
// Initial connect
err := bp.Connect()
require.NoError(t, err)
require.True(t, bp.Connected())
require.Equal(t, 1, reconnector.GetCallCount())
// Write some initial data to establish the connection
_, err = bp.Write([]byte("initial data"))
require.NoError(t, err)
// Set up both read and write errors on the connection
conn1.SetReadError(xerrors.New("read connection lost"))
conn1.SetWriteError(xerrors.New("write connection lost"))
// Trigger write error (this will trigger reconnection)
go func() {
_, _ = bp.Write([]byte("trigger write error"))
}()
// Wait for reconnection to start
testutil.RequireReceive(testCtx, t, signalChan)
// Wait for reconnection to complete
require.Eventually(t, func() bool {
return bp.Connected()
}, testutil.WaitShort, testutil.IntervalFast, "should reconnect after write error")
// Verify that only one reconnection occurred
require.Equal(t, 2, reconnector.GetCallCount(), "should have exactly 2 calls: initial connect + 1 reconnection")
require.True(t, bp.Connected(), "should be connected after reconnection")
}
func TestBackedPipe_ForceReconnectWhenDisconnected(t *testing.T) {
t.Parallel()
ctx := context.Background()
conn := newMockConnection()
reconnector, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnector)
defer bp.Close()
// Don't connect initially, just force reconnect
err := bp.ForceReconnect()
require.NoError(t, err)
require.True(t, bp.Connected())
require.Equal(t, 1, reconnector.GetCallCount())
// Verify we can write and read
_, err = bp.Write([]byte("test"))
require.NoError(t, err)
require.Equal(t, "test", conn.ReadString())
conn.WriteString("response")
buf := make([]byte, 10)
n, err := bp.Read(buf)
require.NoError(t, err)
require.Equal(t, 8, n)
require.Equal(t, "response", string(buf[:n]))
}
func TestBackedPipe_EOFTriggersReconnection(t *testing.T) {
t.Parallel()
ctx := context.Background()
// Create connections where we can control when EOF occurs
conn1 := newMockConnection()
conn2 := newMockConnection()
conn2.WriteString("newdata") // Pre-populate conn2 with data
// Make conn1 return EOF after reading "world"
hasReadData := false
conn1.readFunc = func(p []byte) (int, error) {
// Don't lock here - the Read method already holds the lock
// First time: return "world"
if !hasReadData && conn1.readBuffer.Len() > 0 {
n, _ := conn1.readBuffer.Read(p)
hasReadData = true
return n, nil
}
// After that: return EOF
return 0, io.EOF
}
conn1.WriteString("world")
reconnector := &eofTestReconnector{
conn1: conn1,
conn2: conn2,
}
bp := backedpipe.NewBackedPipe(ctx, reconnector)
defer bp.Close()
// Initial connect
err := bp.Connect()
require.NoError(t, err)
require.Equal(t, 1, reconnector.GetCallCount())
// Write some data
_, err = bp.Write([]byte("hello"))
require.NoError(t, err)
buf := make([]byte, 10)
// First read should succeed
n, err := bp.Read(buf)
require.NoError(t, err)
require.Equal(t, 5, n)
require.Equal(t, "world", string(buf[:n]))
// Next read will encounter EOF and should trigger reconnection
// After reconnection, it should read from conn2
n, err = bp.Read(buf)
require.NoError(t, err)
require.Equal(t, 7, n)
require.Equal(t, "newdata", string(buf[:n]))
// Verify reconnection happened
require.Equal(t, 2, reconnector.GetCallCount())
// Verify the pipe is still connected and functional
require.True(t, bp.Connected())
// Further writes should go to the new connection
_, err = bp.Write([]byte("aftereof"))
require.NoError(t, err)
require.Equal(t, "aftereof", conn2.ReadString())
}
func BenchmarkBackedPipe_Write(b *testing.B) {
ctx := context.Background()
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
bp.Connect()
b.Cleanup(func() {
_ = bp.Close()
})
data := make([]byte, 1024) // 1KB writes
b.ResetTimer()
for i := 0; i < b.N; i++ {
bp.Write(data)
}
}
func BenchmarkBackedPipe_Read(b *testing.B) {
ctx := context.Background()
conn := newMockConnection()
reconnectFn, _ := mockReconnectFunc(conn)
bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
bp.Connect()
b.Cleanup(func() {
_ = bp.Close()
})
buf := make([]byte, 1024)
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Fill connection with fresh data for each iteration
conn.WriteString(string(buf))
bp.Read(buf)
}
}
@@ -1,166 +0,0 @@
package backedpipe
import (
"io"
"sync"
)
// BackedReader wraps an unreliable io.Reader and makes it resilient to disconnections.
// It tracks sequence numbers for all bytes read and can handle reconnection,
// blocking reads when disconnected instead of erroring.
type BackedReader struct {
mu sync.Mutex
cond *sync.Cond
reader io.Reader
sequenceNum uint64
closed bool
// Error channel for generation-aware error reporting
errorEventChan chan<- ErrorEvent
// Current connection generation for error reporting
currentGen uint64
}
// NewBackedReader creates a new BackedReader with generation-aware error reporting.
// The reader is initially disconnected and must be connected using Reconnect before
// reads will succeed. The errorEventChan will receive ErrorEvent structs containing
// error details, component info, and connection generation.
func NewBackedReader(errorEventChan chan<- ErrorEvent) *BackedReader {
if errorEventChan == nil {
panic("error event channel cannot be nil")
}
br := &BackedReader{
errorEventChan: errorEventChan,
}
br.cond = sync.NewCond(&br.mu)
return br
}
// Read implements io.Reader. It blocks when disconnected until either:
// 1. A reconnection is established
// 2. The reader is closed
//
// When connected, it reads from the underlying reader and updates sequence numbers.
// Connection failures are automatically detected and reported to the higher layer via the error event channel.
func (br *BackedReader) Read(p []byte) (int, error) {
br.mu.Lock()
defer br.mu.Unlock()
for {
// Step 1: Wait until we have a reader or are closed
for br.reader == nil && !br.closed {
br.cond.Wait()
}
if br.closed {
return 0, io.EOF
}
// Step 2: Perform the read while holding the mutex
// This ensures proper synchronization with Reconnect and Close operations
n, err := br.reader.Read(p)
br.sequenceNum += uint64(n) // #nosec G115 -- n is always >= 0 per io.Reader contract
if err == nil {
return n, nil
}
// Mark reader as disconnected so future reads will wait for reconnection
br.reader = nil
// Notify parent of error with generation information
select {
case br.errorEventChan <- ErrorEvent{
Err: err,
Component: "reader",
Generation: br.currentGen,
}:
default:
// Channel is full, drop the error.
// This is not a problem: we set the reader to nil and block until
// reconnected, so no new errors will be sent until the pipe processes
// this error and reconnects.
}
// If we got some data before the error, return it now
if n > 0 {
return n, nil
}
}
}
// Reconnect coordinates reconnection using channels for better synchronization.
// The seqNum channel is used to send the current sequence number to the caller.
// The newR channel is used to receive the new reader from the caller.
// This allows for better coordination during the reconnection process.
func (br *BackedReader) Reconnect(seqNum chan<- uint64, newR <-chan io.Reader) {
// Grab the lock
br.mu.Lock()
defer br.mu.Unlock()
if br.closed {
// Close the channel to indicate closed state
close(seqNum)
return
}
// Get the sequence number to send to the other side via seqNum channel
seqNum <- br.sequenceNum
close(seqNum)
// Wait for the reconnect to complete via the newR channel, which delivers the new io.Reader
newReader := <-newR
// If reconnection fails while we are starting it, the caller sends nil on newR
if newReader == nil {
// Reconnection failed, keep current state
return
}
// Reconnection successful
br.reader = newReader
// Notify any waiting reads via the cond
br.cond.Broadcast()
}
// Close the reader and wake up any blocked reads.
// After closing, all Read calls will return io.EOF.
func (br *BackedReader) Close() error {
br.mu.Lock()
defer br.mu.Unlock()
if br.closed {
return nil
}
br.closed = true
br.reader = nil
// Wake up any blocked reads
br.cond.Broadcast()
return nil
}
// SequenceNum returns the current sequence number (total bytes read).
func (br *BackedReader) SequenceNum() uint64 {
br.mu.Lock()
defer br.mu.Unlock()
return br.sequenceNum
}
// Connected returns whether the reader is currently connected.
func (br *BackedReader) Connected() bool {
br.mu.Lock()
defer br.mu.Unlock()
return br.reader != nil
}
// SetGeneration sets the current connection generation for error reporting.
func (br *BackedReader) SetGeneration(generation uint64) {
br.mu.Lock()
defer br.mu.Unlock()
br.currentGen = generation
}
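A minimal caller-side sketch of the Reconnect handshake above: receive the current sequence number (a seqNum channel closed without a value means the reader was already closed), establish a new connection from that offset, and send the result back on newR, with nil signaling failure. The dial helper here is an assumption for illustration and not part of this package.

func reconnectReaderSketch(br *BackedReader, dial func(fromSeq uint64) (io.Reader, error)) error {
	seqNum := make(chan uint64, 1)
	newR := make(chan io.Reader, 1)
	go br.Reconnect(seqNum, newR)
	seq, ok := <-seqNum
	if !ok {
		return io.EOF // Reconnect closed seqNum without sending: reader is closed
	}
	r, err := dial(seq)
	if err != nil {
		newR <- nil // reconnection failed; the reader stays disconnected
		return err
	}
	newR <- r // blocked reads wake up and resume on the new connection
	return nil
}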
@@ -1,603 +0,0 @@
package backedpipe_test
import (
"context"
"io"
"sync"
"testing"
"time"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/agent/immortalstreams/backedpipe"
"github.com/coder/coder/v2/testutil"
)
// mockReader implements io.Reader with controllable behavior for testing
type mockReader struct {
mu sync.Mutex
data []byte
pos int
err error
readFunc func([]byte) (int, error)
}
func newMockReader(data string) *mockReader {
return &mockReader{data: []byte(data)}
}
func (mr *mockReader) Read(p []byte) (int, error) {
mr.mu.Lock()
defer mr.mu.Unlock()
if mr.readFunc != nil {
return mr.readFunc(p)
}
if mr.err != nil {
return 0, mr.err
}
if mr.pos >= len(mr.data) {
return 0, io.EOF
}
n := copy(p, mr.data[mr.pos:])
mr.pos += n
return n, nil
}
func (mr *mockReader) setError(err error) {
mr.mu.Lock()
defer mr.mu.Unlock()
mr.err = err
}
func TestBackedReader_NewBackedReader(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
require.NotNil(t, br)
require.Equal(t, uint64(0), br.SequenceNum())
require.False(t, br.Connected())
}
func TestBackedReader_BasicReadOperation(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
reader := newMockReader("hello world")
// Connect the reader
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go br.Reconnect(seqNum, newR)
// Get sequence number from reader
seq := testutil.RequireReceive(ctx, t, seqNum)
require.Equal(t, uint64(0), seq)
// Send new reader
testutil.RequireSend(ctx, t, newR, io.Reader(reader))
// Read data
buf := make([]byte, 5)
n, err := br.Read(buf)
require.NoError(t, err)
require.Equal(t, 5, n)
require.Equal(t, "hello", string(buf))
require.Equal(t, uint64(5), br.SequenceNum())
// Read more data
n, err = br.Read(buf)
require.NoError(t, err)
require.Equal(t, 5, n)
require.Equal(t, " worl", string(buf))
require.Equal(t, uint64(10), br.SequenceNum())
}
func TestBackedReader_ReadBlocksWhenDisconnected(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
// Start a read operation that should block
readDone := make(chan struct{})
var readErr error
var readBuf []byte
var readN int
go func() {
defer close(readDone)
buf := make([]byte, 10)
readN, readErr = br.Read(buf)
readBuf = buf[:readN]
}()
// Ensure the read is actually blocked by verifying it hasn't completed
// and that the reader is not connected
select {
case <-readDone:
t.Fatal("Read should be blocked when disconnected")
default:
// Read is still blocked, which is what we want
}
require.False(t, br.Connected(), "Reader should not be connected")
// Connect and the read should unblock
reader := newMockReader("test")
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go br.Reconnect(seqNum, newR)
// Get sequence number and send new reader
testutil.RequireReceive(ctx, t, seqNum)
testutil.RequireSend(ctx, t, newR, io.Reader(reader))
// Wait for read to complete
testutil.TryReceive(ctx, t, readDone)
require.NoError(t, readErr)
require.Equal(t, "test", string(readBuf))
}
func TestBackedReader_ReconnectionAfterFailure(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
reader1 := newMockReader("first")
// Initial connection
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go br.Reconnect(seqNum, newR)
// Get sequence number and send new reader
testutil.RequireReceive(ctx, t, seqNum)
testutil.RequireSend(ctx, t, newR, io.Reader(reader1))
// Read some data
buf := make([]byte, 5)
n, err := br.Read(buf)
require.NoError(t, err)
require.Equal(t, "first", string(buf[:n]))
require.Equal(t, uint64(5), br.SequenceNum())
// Simulate connection failure
reader1.setError(xerrors.New("connection lost"))
// Start a read that will block due to connection failure
readDone := make(chan error, 1)
go func() {
_, err := br.Read(buf)
readDone <- err
}()
// Wait for the error to be reported via error channel
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
require.Error(t, receivedErrorEvent.Err)
require.Equal(t, "reader", receivedErrorEvent.Component)
require.Contains(t, receivedErrorEvent.Err.Error(), "connection lost")
// Verify read is still blocked
select {
case err := <-readDone:
t.Fatalf("Read should still be blocked, but completed with: %v", err)
default:
// Good, still blocked
}
// Verify disconnection
require.False(t, br.Connected())
// Reconnect with new reader
reader2 := newMockReader("second")
seqNum2 := make(chan uint64, 1)
newR2 := make(chan io.Reader, 1)
go br.Reconnect(seqNum2, newR2)
// Get sequence number and send new reader
seq := testutil.RequireReceive(ctx, t, seqNum2)
require.Equal(t, uint64(5), seq) // Should return current sequence number
testutil.RequireSend(ctx, t, newR2, io.Reader(reader2))
// Wait for read to unblock and succeed with new data
readErr := testutil.RequireReceive(ctx, t, readDone)
require.NoError(t, readErr) // Should succeed with new reader
require.True(t, br.Connected())
}
func TestBackedReader_Close(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
reader := newMockReader("test")
// Connect
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go br.Reconnect(seqNum, newR)
// Get sequence number and send new reader
testutil.RequireReceive(ctx, t, seqNum)
testutil.RequireSend(ctx, t, newR, io.Reader(reader))
// First, read all available data
buf := make([]byte, 10)
n, err := br.Read(buf)
require.NoError(t, err)
require.Equal(t, 4, n) // "test" is 4 bytes
// Close the reader before EOF triggers reconnection
err = br.Close()
require.NoError(t, err)
// After close, reads should return EOF
n, err = br.Read(buf)
require.Equal(t, 0, n)
require.Equal(t, io.EOF, err)
// Subsequent reads should return EOF
_, err = br.Read(buf)
require.Equal(t, io.EOF, err)
}
func TestBackedReader_CloseIdempotent(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
err := br.Close()
require.NoError(t, err)
// Second close should be no-op
err = br.Close()
require.NoError(t, err)
}
func TestBackedReader_ReconnectAfterClose(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
err := br.Close()
require.NoError(t, err)
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go br.Reconnect(seqNum, newR)
// Should get 0 sequence number for closed reader
seq := testutil.TryReceive(ctx, t, seqNum)
require.Equal(t, uint64(0), seq)
}
// Helper function to reconnect a reader using channels
func reconnectReader(ctx context.Context, t testing.TB, br *backedpipe.BackedReader, reader io.Reader) {
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go br.Reconnect(seqNum, newR)
// Get sequence number and send new reader
testutil.RequireReceive(ctx, t, seqNum)
testutil.RequireSend(ctx, t, newR, reader)
}
func TestBackedReader_SequenceNumberTracking(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
reader := newMockReader("0123456789")
reconnectReader(ctx, t, br, reader)
// Read in chunks and verify sequence number
buf := make([]byte, 3)
n, err := br.Read(buf)
require.NoError(t, err)
require.Equal(t, 3, n)
require.Equal(t, uint64(3), br.SequenceNum())
n, err = br.Read(buf)
require.NoError(t, err)
require.Equal(t, 3, n)
require.Equal(t, uint64(6), br.SequenceNum())
n, err = br.Read(buf)
require.NoError(t, err)
require.Equal(t, 3, n)
require.Equal(t, uint64(9), br.SequenceNum())
}
func TestBackedReader_EOFHandling(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
reader := newMockReader("test")
reconnectReader(ctx, t, br, reader)
// Read all data
buf := make([]byte, 10)
n, err := br.Read(buf)
require.NoError(t, err)
require.Equal(t, 4, n)
require.Equal(t, "test", string(buf[:n]))
// Next read should encounter EOF, which triggers disconnection
// The read should block waiting for reconnection
readDone := make(chan struct{})
var readErr error
var readN int
go func() {
defer close(readDone)
readN, readErr = br.Read(buf)
}()
// Wait for EOF to be reported via error channel
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
require.Equal(t, io.EOF, receivedErrorEvent.Err)
require.Equal(t, "reader", receivedErrorEvent.Component)
// Reader should be disconnected after EOF
require.False(t, br.Connected())
// Read should still be blocked
select {
case <-readDone:
t.Fatal("Read should be blocked waiting for reconnection after EOF")
default:
// Good, still blocked
}
// Reconnect with new data
reader2 := newMockReader("more")
reconnectReader(ctx, t, br, reader2)
// Wait for the blocked read to complete with new data
testutil.TryReceive(ctx, t, readDone)
require.NoError(t, readErr)
require.Equal(t, 4, readN)
require.Equal(t, "more", string(buf[:readN]))
}
func BenchmarkBackedReader_Read(b *testing.B) {
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
buf := make([]byte, 1024)
// Create a reader that never returns EOF; it always fills the buffer with data
reader := &mockReader{
readFunc: func(p []byte) (int, error) {
// Fill buffer with 'x' characters - never EOF
for i := range p {
p[i] = 'x'
}
return len(p), nil
},
}
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
defer cancel()
reconnectReader(ctx, b, br, reader)
b.ResetTimer()
for i := 0; i < b.N; i++ {
br.Read(buf)
}
}
func TestBackedReader_PartialReads(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
// Create a reader that returns partial reads
reader := &mockReader{
readFunc: func(p []byte) (int, error) {
// Always return just 1 byte at a time
if len(p) == 0 {
return 0, nil
}
p[0] = 'A'
return 1, nil
},
}
reconnectReader(ctx, t, br, reader)
// Read multiple times
buf := make([]byte, 10)
for i := 0; i < 5; i++ {
n, err := br.Read(buf)
require.NoError(t, err)
require.Equal(t, 1, n)
require.Equal(t, byte('A'), buf[0])
}
require.Equal(t, uint64(5), br.SequenceNum())
}
func TestBackedReader_CloseWhileBlockedOnUnderlyingReader(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
// Create a reader that blocks on Read calls but can be unblocked
readStarted := make(chan struct{}, 1)
readUnblocked := make(chan struct{})
blockingReader := &mockReader{
readFunc: func(p []byte) (int, error) {
select {
case readStarted <- struct{}{}:
default:
}
<-readUnblocked // Block until signaled
// After unblocking, return an error to simulate connection failure
return 0, xerrors.New("connection interrupted")
},
}
// Connect the blocking reader
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go br.Reconnect(seqNum, newR)
// Get sequence number and send blocking reader
testutil.RequireReceive(ctx, t, seqNum)
testutil.RequireSend(ctx, t, newR, io.Reader(blockingReader))
// Start a read that will block on the underlying reader
readDone := make(chan struct{})
var readErr error
var readN int
go func() {
defer close(readDone)
buf := make([]byte, 10)
readN, readErr = br.Read(buf)
}()
// Wait for the read to start and block on the underlying reader
testutil.RequireReceive(ctx, t, readStarted)
// Verify read is blocked by checking that it hasn't completed
// and ensuring we have adequate time for it to reach the blocking state
require.Eventually(t, func() bool {
select {
case <-readDone:
t.Fatal("Read should be blocked on underlying reader")
return false
default:
// Good, still blocked
return true
}
}, testutil.WaitShort, testutil.IntervalMedium)
// Start Close() in a goroutine since it will block until the underlying read completes
closeDone := make(chan error, 1)
go func() {
closeDone <- br.Close()
}()
// Verify Close() is also blocked waiting for the underlying read
select {
case <-closeDone:
t.Fatal("Close should be blocked until underlying read completes")
case <-time.After(10 * time.Millisecond):
// Good, Close is blocked
}
// Unblock the underlying reader, which will cause both the read and close to complete
close(readUnblocked)
// Wait for both the read and close to complete
testutil.TryReceive(ctx, t, readDone)
closeErr := testutil.RequireReceive(ctx, t, closeDone)
require.NoError(t, closeErr)
// The read should return EOF because Close() was called while it was blocked,
// even though the underlying reader returned an error
require.Equal(t, 0, readN)
require.Equal(t, io.EOF, readErr)
// Subsequent reads should return EOF since the reader is now closed
buf := make([]byte, 10)
n, err := br.Read(buf)
require.Equal(t, 0, n)
require.Equal(t, io.EOF, err)
}
func TestBackedReader_CloseWhileBlockedWaitingForReconnect(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
br := backedpipe.NewBackedReader(errChan)
reader1 := newMockReader("initial")
// Initial connection
seqNum := make(chan uint64, 1)
newR := make(chan io.Reader, 1)
go br.Reconnect(seqNum, newR)
// Get sequence number and send initial reader
testutil.RequireReceive(ctx, t, seqNum)
testutil.RequireSend(ctx, t, newR, io.Reader(reader1))
// Read initial data
buf := make([]byte, 10)
n, err := br.Read(buf)
require.NoError(t, err)
require.Equal(t, "initial", string(buf[:n]))
// Simulate connection failure
reader1.setError(xerrors.New("connection lost"))
// Start a read that will block waiting for reconnection
readDone := make(chan struct{})
var readErr error
var readN int
go func() {
defer close(readDone)
readN, readErr = br.Read(buf)
}()
// Wait for the error to be reported (indicating disconnection)
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
require.Error(t, receivedErrorEvent.Err)
require.Equal(t, "reader", receivedErrorEvent.Component)
require.Contains(t, receivedErrorEvent.Err.Error(), "connection lost")
// Verify read is blocked waiting for reconnection
select {
case <-readDone:
t.Fatal("Read should be blocked waiting for reconnection")
default:
// Good, still blocked
}
// Verify reader is disconnected
require.False(t, br.Connected())
// Close the BackedReader while read is blocked waiting for reconnection
err = br.Close()
require.NoError(t, err)
// The read should unblock and return EOF
testutil.TryReceive(ctx, t, readDone)
require.Equal(t, 0, readN)
require.Equal(t, io.EOF, readErr)
}
@@ -1,243 +0,0 @@
package backedpipe
import (
"io"
"os"
"sync"
"golang.org/x/xerrors"
)
var (
ErrWriterClosed = xerrors.New("cannot reconnect closed writer")
ErrNilWriter = xerrors.New("new writer cannot be nil")
ErrFutureSequence = xerrors.New("cannot replay from future sequence")
ErrReplayDataUnavailable = xerrors.New("failed to read replay data")
ErrReplayFailed = xerrors.New("replay failed")
ErrPartialReplay = xerrors.New("partial replay")
)
// BackedWriter wraps an unreliable io.Writer and makes it resilient to disconnections.
// It maintains a ring buffer of recent writes for replay during reconnection.
type BackedWriter struct {
mu sync.Mutex
cond *sync.Cond
writer io.Writer
buffer *ringBuffer
sequenceNum uint64 // total bytes written
closed bool
// Error channel for generation-aware error reporting
errorEventChan chan<- ErrorEvent
// Current connection generation for error reporting
currentGen uint64
}
// NewBackedWriter creates a new BackedWriter with generation-aware error reporting.
// The writer is initially disconnected and will block writes until connected.
// The errorEventChan will receive ErrorEvent structs containing error details,
// component info, and connection generation. Capacity must be > 0.
func NewBackedWriter(capacity int, errorEventChan chan<- ErrorEvent) *BackedWriter {
if capacity <= 0 {
panic("backed writer capacity must be > 0")
}
if errorEventChan == nil {
panic("error event channel cannot be nil")
}
bw := &BackedWriter{
buffer: newRingBuffer(capacity),
errorEventChan: errorEventChan,
}
bw.cond = sync.NewCond(&bw.mu)
return bw
}
// blockUntilConnectedOrClosed blocks until either a writer is available or the BackedWriter is closed.
// Returns os.ErrClosed if closed while waiting, nil if connected. You must hold the mutex to call this.
func (bw *BackedWriter) blockUntilConnectedOrClosed() error {
for bw.writer == nil && !bw.closed {
bw.cond.Wait()
}
if bw.closed {
return os.ErrClosed
}
return nil
}
// Write implements io.Writer.
// When connected, it writes to both the ring buffer (to preserve data in case we need to replay it)
// and the underlying writer.
// If the underlying write fails, the writer is marked as disconnected and the write blocks
// until reconnection occurs.
func (bw *BackedWriter) Write(p []byte) (int, error) {
if len(p) == 0 {
return 0, nil
}
bw.mu.Lock()
defer bw.mu.Unlock()
// Block until connected
if err := bw.blockUntilConnectedOrClosed(); err != nil {
return 0, err
}
// Write to buffer
bw.buffer.Write(p)
bw.sequenceNum += uint64(len(p))
// Try to write to underlying writer
n, err := bw.writer.Write(p)
if err == nil && n != len(p) {
err = io.ErrShortWrite
}
if err != nil {
// Connection failed or partial write, mark as disconnected
bw.writer = nil
// Notify parent of error with generation information
select {
case bw.errorEventChan <- ErrorEvent{
Err: err,
Component: "writer",
Generation: bw.currentGen,
}:
default:
// Channel is full, drop the error.
// This is not a problem: we set the writer to nil and block until
// reconnected, so no new errors will be sent until the pipe processes
// this error and reconnects.
}
// Block until reconnected - reconnection will replay this data
if err := bw.blockUntilConnectedOrClosed(); err != nil {
return 0, err
}
// Don't retry - reconnection replay handled it
return len(p), nil
}
// Write succeeded
return len(p), nil
}
// Reconnect replaces the current writer with a new one and replays data from the specified
// sequence number. If the requested sequence number is no longer in the buffer,
// returns an error indicating data loss.
//
// IMPORTANT: You must close the current writer, if any, before calling this method.
// Otherwise, if a Write operation is currently blocked in the underlying writer's
// Write method, this method will deadlock waiting for the mutex that Write holds.
func (bw *BackedWriter) Reconnect(replayFromSeq uint64, newWriter io.Writer) error {
bw.mu.Lock()
defer bw.mu.Unlock()
if bw.closed {
return ErrWriterClosed
}
if newWriter == nil {
return ErrNilWriter
}
// Check if we can replay from the requested sequence number
if replayFromSeq > bw.sequenceNum {
return ErrFutureSequence
}
// Calculate how many bytes we need to replay
replayBytes := bw.sequenceNum - replayFromSeq
var replayData []byte
if replayBytes > 0 {
// Get the last replayBytes from buffer
// If the buffer doesn't have enough data (some was evicted),
// ReadLast will return an error
var err error
// Safe conversion: The check above (replayFromSeq > bw.sequenceNum) ensures
// replayBytes = bw.sequenceNum - replayFromSeq is always <= bw.sequenceNum.
// Since sequence numbers are much smaller than maxInt, the uint64->int conversion is safe.
//nolint:gosec // Safe conversion: replayBytes <= sequenceNum, which is much less than maxInt
replayData, err = bw.buffer.ReadLast(int(replayBytes))
if err != nil {
return ErrReplayDataUnavailable
}
}
// Clear the current writer first in case replay fails
bw.writer = nil
// Replay data if needed. We keep the mutex held during replay to ensure
// no concurrent operations can interfere with the reconnection process.
if len(replayData) > 0 {
n, err := newWriter.Write(replayData)
if err != nil {
// Reconnect failed, writer remains nil
return ErrReplayFailed
}
if n != len(replayData) {
// Reconnect failed, writer remains nil
return ErrPartialReplay
}
}
// Set new writer only after successful replay. This ensures no concurrent
// writes can interfere with the replay operation.
bw.writer = newWriter
// Wake up any operations waiting for connection
bw.cond.Broadcast()
return nil
}
// Close closes the writer and prevents further writes.
// After closing, all Write calls will return os.ErrClosed.
// This code keeps the Close() signature consistent with io.Closer,
// but it never actually returns an error.
//
// IMPORTANT: You must close the current underlying writer, if any, before calling
// this method. Otherwise, if a Write operation is currently blocked in the
// underlying writer's Write method, this method will deadlock waiting for the
// mutex that Write holds.
func (bw *BackedWriter) Close() error {
bw.mu.Lock()
defer bw.mu.Unlock()
if bw.closed {
return nil
}
bw.closed = true
bw.writer = nil
// Wake up any blocked operations
bw.cond.Broadcast()
return nil
}
// SequenceNum returns the current sequence number (total bytes written).
func (bw *BackedWriter) SequenceNum() uint64 {
bw.mu.Lock()
defer bw.mu.Unlock()
return bw.sequenceNum
}
// Connected returns whether the writer is currently connected.
func (bw *BackedWriter) Connected() bool {
bw.mu.Lock()
defer bw.mu.Unlock()
return bw.writer != nil
}
// SetGeneration sets the current connection generation for error reporting.
func (bw *BackedWriter) SetGeneration(generation uint64) {
bw.mu.Lock()
defer bw.mu.Unlock()
bw.currentGen = generation
}
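A concrete sketch of the replay arithmetic above, in the same shape as the partial-replay test later in this diff: after 11 bytes have been written, reconnecting with replayFromSeq = 5 computes replayBytes = 11 - 5 = 6 and replays the last six buffered bytes before normal writes resume. The function and its firstConn/secondConn writers are illustrative only, not part of the package.

func replaySketch(firstConn, secondConn io.Writer) {
	errChan := make(chan ErrorEvent, 1)
	bw := NewBackedWriter(DefaultBufferSize, errChan)
	_ = bw.Reconnect(0, firstConn)         // initial connection
	_, _ = bw.Write([]byte("hello world")) // sequenceNum is now 11
	// replayFromSeq = 5 → replayBytes = 11 - 5 = 6, so the last six buffered
	// bytes, " world", are written to secondConn before new writes proceed.
	_ = bw.Reconnect(5, secondConn)
}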
@@ -1,992 +0,0 @@
package backedpipe_test
import (
"bytes"
"os"
"sync"
"testing"
"time"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"github.com/coder/coder/v2/agent/immortalstreams/backedpipe"
"github.com/coder/coder/v2/testutil"
)
// mockWriter implements io.Writer with controllable behavior for testing
type mockWriter struct {
mu sync.Mutex
buffer bytes.Buffer
err error
writeFunc func([]byte) (int, error)
writeCalls int
}
func newMockWriter() *mockWriter {
return &mockWriter{}
}
// newBackedWriterForTest creates a BackedWriter with a small buffer for testing eviction behavior
func newBackedWriterForTest(bufferSize int) *backedpipe.BackedWriter {
errChan := make(chan backedpipe.ErrorEvent, 1)
return backedpipe.NewBackedWriter(bufferSize, errChan)
}
func (mw *mockWriter) Write(p []byte) (int, error) {
mw.mu.Lock()
defer mw.mu.Unlock()
mw.writeCalls++
if mw.writeFunc != nil {
return mw.writeFunc(p)
}
if mw.err != nil {
return 0, mw.err
}
return mw.buffer.Write(p)
}
func (mw *mockWriter) Len() int {
mw.mu.Lock()
defer mw.mu.Unlock()
return mw.buffer.Len()
}
func (mw *mockWriter) Reset() {
mw.mu.Lock()
defer mw.mu.Unlock()
mw.buffer.Reset()
mw.writeCalls = 0
mw.err = nil
mw.writeFunc = nil
}
func (mw *mockWriter) setError(err error) {
mw.mu.Lock()
defer mw.mu.Unlock()
mw.err = err
}
func TestBackedWriter_NewBackedWriter(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
require.NotNil(t, bw)
require.Equal(t, uint64(0), bw.SequenceNum())
require.False(t, bw.Connected())
}
func TestBackedWriter_WriteBlocksWhenDisconnected(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Write should block when disconnected
writeComplete := make(chan struct{})
var writeErr error
var n int
go func() {
defer close(writeComplete)
n, writeErr = bw.Write([]byte("hello"))
}()
// Verify write is blocked
select {
case <-writeComplete:
t.Fatal("Write should have blocked when disconnected")
case <-time.After(50 * time.Millisecond):
// Expected - write is blocked
}
// Connect and verify write completes
writer := newMockWriter()
err := bw.Reconnect(0, writer)
require.NoError(t, err)
// Write should now complete
testutil.TryReceive(ctx, t, writeComplete)
require.NoError(t, writeErr)
require.Equal(t, 5, n)
require.Equal(t, uint64(5), bw.SequenceNum())
require.Equal(t, []byte("hello"), writer.buffer.Bytes())
}
func TestBackedWriter_WriteToUnderlyingWhenConnected(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
writer := newMockWriter()
// Connect
err := bw.Reconnect(0, writer)
require.NoError(t, err)
require.True(t, bw.Connected())
// Write should go to both buffer and underlying writer
n, err := bw.Write([]byte("hello"))
require.NoError(t, err)
require.Equal(t, 5, n)
// Data should be buffered
require.Equal(t, uint64(5), bw.SequenceNum())
// Check underlying writer
require.Equal(t, []byte("hello"), writer.buffer.Bytes())
}
func TestBackedWriter_BlockOnWriteFailure(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
writer := newMockWriter()
// Connect
err := bw.Reconnect(0, writer)
require.NoError(t, err)
// Cause write to fail
writer.setError(xerrors.New("write failed"))
// Write should block when underlying writer fails, not succeed immediately
writeComplete := make(chan struct{})
var writeErr error
var n int
go func() {
defer close(writeComplete)
n, writeErr = bw.Write([]byte("hello"))
}()
// Verify write is blocked
select {
case <-writeComplete:
t.Fatal("Write should have blocked when underlying writer fails")
case <-time.After(50 * time.Millisecond):
// Expected - write is blocked
}
// Wait for error event which implies writer was marked disconnected
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
require.Contains(t, receivedErrorEvent.Err.Error(), "write failed")
require.Equal(t, "writer", receivedErrorEvent.Component)
require.False(t, bw.Connected())
// Reconnect with working writer and verify write completes
writer2 := newMockWriter()
err = bw.Reconnect(0, writer2) // Replay from beginning
require.NoError(t, err)
// Write should now complete
testutil.TryReceive(ctx, t, writeComplete)
require.NoError(t, writeErr)
require.Equal(t, 5, n)
require.Equal(t, uint64(5), bw.SequenceNum())
require.Equal(t, []byte("hello"), writer2.buffer.Bytes())
}
func TestBackedWriter_ReplayOnReconnect(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Connect initially to write some data
writer1 := newMockWriter()
err := bw.Reconnect(0, writer1)
require.NoError(t, err)
// Write some data while connected
_, err = bw.Write([]byte("hello"))
require.NoError(t, err)
_, err = bw.Write([]byte(" world"))
require.NoError(t, err)
require.Equal(t, uint64(11), bw.SequenceNum())
// Disconnect by causing a write failure
writer1.setError(xerrors.New("connection lost"))
// Write should block when underlying writer fails
writeComplete := make(chan struct{})
var writeErr error
var n int
go func() {
defer close(writeComplete)
n, writeErr = bw.Write([]byte("test"))
}()
// Verify write is blocked
select {
case <-writeComplete:
t.Fatal("Write should have blocked when underlying writer fails")
case <-time.After(50 * time.Millisecond):
// Expected - write is blocked
}
// Wait for error event which implies writer was marked disconnected
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
require.Contains(t, receivedErrorEvent.Err.Error(), "connection lost")
require.Equal(t, "writer", receivedErrorEvent.Component)
require.False(t, bw.Connected())
// Reconnect with new writer and request replay from beginning
writer2 := newMockWriter()
err = bw.Reconnect(0, writer2)
require.NoError(t, err)
// Write should now complete
select {
case <-writeComplete:
// Expected - write completed
case <-time.After(100 * time.Millisecond):
t.Fatal("Write should have completed after reconnection")
}
require.NoError(t, writeErr)
require.Equal(t, 4, n)
// Should have replayed all data including the failed write that was buffered
require.Equal(t, []byte("hello worldtest"), writer2.buffer.Bytes())
// Write new data should go to both
_, err = bw.Write([]byte("!"))
require.NoError(t, err)
require.Equal(t, []byte("hello worldtest!"), writer2.buffer.Bytes())
}
func TestBackedWriter_PartialReplay(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Connect initially to write some data
writer1 := newMockWriter()
err := bw.Reconnect(0, writer1)
require.NoError(t, err)
// Write some data
_, err = bw.Write([]byte("hello"))
require.NoError(t, err)
_, err = bw.Write([]byte(" world"))
require.NoError(t, err)
_, err = bw.Write([]byte("!"))
require.NoError(t, err)
// Reconnect with new writer and request replay from middle
writer2 := newMockWriter()
err = bw.Reconnect(5, writer2) // From " world!"
require.NoError(t, err)
// Should have replayed only the requested portion
require.Equal(t, []byte(" world!"), writer2.buffer.Bytes())
}
func TestBackedWriter_ReplayFromFutureSequence(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Connect initially to write some data
writer1 := newMockWriter()
err := bw.Reconnect(0, writer1)
require.NoError(t, err)
_, err = bw.Write([]byte("hello"))
require.NoError(t, err)
writer2 := newMockWriter()
err = bw.Reconnect(10, writer2) // Future sequence
require.Error(t, err)
require.ErrorIs(t, err, backedpipe.ErrFutureSequence)
}
func TestBackedWriter_ReplayDataLoss(t *testing.T) {
t.Parallel()
bw := newBackedWriterForTest(10) // Small buffer for testing
// Connect initially to write some data
writer1 := newMockWriter()
err := bw.Reconnect(0, writer1)
require.NoError(t, err)
// Fill buffer beyond capacity to cause eviction
_, err = bw.Write([]byte("0123456789")) // Fills buffer exactly
require.NoError(t, err)
_, err = bw.Write([]byte("abcdef")) // Should evict "012345"
require.NoError(t, err)
writer2 := newMockWriter()
err = bw.Reconnect(0, writer2) // Try to replay from evicted data
// With the new error handling, this should fail because we can't read all the data
require.Error(t, err)
require.ErrorIs(t, err, backedpipe.ErrReplayDataUnavailable)
}
func TestBackedWriter_BufferEviction(t *testing.T) {
t.Parallel()
bw := newBackedWriterForTest(5) // Very small buffer for testing
// Connect initially
writer := newMockWriter()
err := bw.Reconnect(0, writer)
require.NoError(t, err)
// Write data that will cause eviction
n, err := bw.Write([]byte("abcde"))
require.NoError(t, err)
require.Equal(t, 5, n)
// Write more to cause eviction
n, err = bw.Write([]byte("fg"))
require.NoError(t, err)
require.Equal(t, 2, n)
// Verify that the buffer contains only the latest data after eviction
// Total sequence number should be 7 (5 + 2)
require.Equal(t, uint64(7), bw.SequenceNum())
// Try to reconnect from the beginning - this should fail because
// the early data was evicted from the buffer
writer2 := newMockWriter()
err = bw.Reconnect(0, writer2)
require.Error(t, err)
require.ErrorIs(t, err, backedpipe.ErrReplayDataUnavailable)
// However, reconnecting from a sequence that's still in the buffer should work
// The buffer should contain the last 5 bytes: "cdefg"
writer3 := newMockWriter()
err = bw.Reconnect(2, writer3) // From sequence 2, should replay "cdefg"
require.NoError(t, err)
require.Equal(t, []byte("cdefg"), writer3.buffer.Bytes())
require.True(t, bw.Connected())
}
func TestBackedWriter_Close(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
writer := newMockWriter()
bw.Reconnect(0, writer)
err := bw.Close()
require.NoError(t, err)
// Writes after close should fail
_, err = bw.Write([]byte("test"))
require.Equal(t, os.ErrClosed, err)
// Reconnect after close should fail
err = bw.Reconnect(0, newMockWriter())
require.Error(t, err)
require.ErrorIs(t, err, backedpipe.ErrWriterClosed)
}
func TestBackedWriter_CloseIdempotent(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
err := bw.Close()
require.NoError(t, err)
// Second close should be no-op
err = bw.Close()
require.NoError(t, err)
}
func TestBackedWriter_ReconnectDuringReplay(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Connect initially to write some data
writer1 := newMockWriter()
err := bw.Reconnect(0, writer1)
require.NoError(t, err)
_, err = bw.Write([]byte("hello world"))
require.NoError(t, err)
// Create a writer that fails during replay
writer2 := &mockWriter{
err: backedpipe.ErrReplayFailed,
}
err = bw.Reconnect(0, writer2)
require.Error(t, err)
require.ErrorIs(t, err, backedpipe.ErrReplayFailed)
require.False(t, bw.Connected())
}
func TestBackedWriter_BlockOnPartialWrite(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Create writer that does partial writes
writer := &mockWriter{
writeFunc: func(p []byte) (int, error) {
if len(p) > 3 {
return 3, nil // Only write first 3 bytes
}
return len(p), nil
},
}
bw.Reconnect(0, writer)
// Write should block due to partial write
writeComplete := make(chan struct{})
var writeErr error
var n int
go func() {
defer close(writeComplete)
n, writeErr = bw.Write([]byte("hello"))
}()
// Verify write is blocked
select {
case <-writeComplete:
t.Fatal("Write should have blocked when underlying writer does partial write")
case <-time.After(50 * time.Millisecond):
// Expected - write is blocked
}
// Wait for error event which implies writer was marked disconnected
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
require.Contains(t, receivedErrorEvent.Err.Error(), "short write")
require.Equal(t, "writer", receivedErrorEvent.Component)
require.False(t, bw.Connected())
// Reconnect with working writer and verify write completes
writer2 := newMockWriter()
err := bw.Reconnect(0, writer2) // Replay from beginning
require.NoError(t, err)
// Write should now complete
testutil.TryReceive(ctx, t, writeComplete)
require.NoError(t, writeErr)
require.Equal(t, 5, n)
require.Equal(t, uint64(5), bw.SequenceNum())
require.Equal(t, []byte("hello"), writer2.buffer.Bytes())
}
func TestBackedWriter_WriteUnblocksOnReconnect(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Start a single write that should block
writeResult := make(chan error, 1)
go func() {
_, err := bw.Write([]byte("test"))
writeResult <- err
}()
// Verify write is blocked
select {
case <-writeResult:
t.Fatal("Write should have blocked when disconnected")
case <-time.After(50 * time.Millisecond):
// Expected - write is blocked
}
// Connect and verify write completes
writer := newMockWriter()
err := bw.Reconnect(0, writer)
require.NoError(t, err)
// Write should now complete
err = testutil.RequireReceive(ctx, t, writeResult)
require.NoError(t, err)
// Write should have been written to the underlying writer
require.Equal(t, "test", writer.buffer.String())
}
func TestBackedWriter_CloseUnblocksWaitingWrites(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Start a write that should block
writeComplete := make(chan error, 1)
go func() {
_, err := bw.Write([]byte("test"))
writeComplete <- err
}()
// Verify write is blocked
select {
case <-writeComplete:
t.Fatal("Write should have blocked when disconnected")
case <-time.After(50 * time.Millisecond):
// Expected - write is blocked
}
// Close the writer
err := bw.Close()
require.NoError(t, err)
// Write should now complete with error
err = testutil.RequireReceive(ctx, t, writeComplete)
require.Equal(t, os.ErrClosed, err)
}
func TestBackedWriter_WriteBlocksAfterDisconnection(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
writer := newMockWriter()
// Connect initially
err := bw.Reconnect(0, writer)
require.NoError(t, err)
// Write should succeed when connected
_, err = bw.Write([]byte("hello"))
require.NoError(t, err)
// Cause disconnection - the write should now block instead of returning an error
writer.setError(xerrors.New("connection lost"))
// This write should block
writeComplete := make(chan error, 1)
go func() {
_, err := bw.Write([]byte("world"))
writeComplete <- err
}()
// Verify write is blocked
select {
case <-writeComplete:
t.Fatal("Write should have blocked after disconnection")
case <-time.After(50 * time.Millisecond):
// Expected - write is blocked
}
// Wait for error event which implies writer was marked disconnected
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
require.Contains(t, receivedErrorEvent.Err.Error(), "connection lost")
require.Equal(t, "writer", receivedErrorEvent.Component)
require.False(t, bw.Connected())
// Reconnect and verify write completes
writer2 := newMockWriter()
err = bw.Reconnect(5, writer2) // Replay from after "hello"
require.NoError(t, err)
err = testutil.RequireReceive(ctx, t, writeComplete)
require.NoError(t, err)
// Check that only "world" was written during replay (not duplicated)
require.Equal(t, []byte("world"), writer2.buffer.Bytes()) // Only "world" since we replayed from sequence 5
}
func TestBackedWriter_ConcurrentWriteAndClose(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Don't connect initially - this will cause writes to block in blockUntilConnectedOrClosed()
writeStarted := make(chan struct{}, 1)
// Start a write operation that will block waiting for connection
writeComplete := make(chan struct{})
var writeErr error
var n int
go func() {
defer close(writeComplete)
// Signal that we're about to start the write
writeStarted <- struct{}{}
// This write will block in blockUntilConnectedOrClosed() since no writer is connected
n, writeErr = bw.Write([]byte("hello"))
}()
// Wait for write goroutine to start
ctx := testutil.Context(t, testutil.WaitShort)
testutil.RequireReceive(ctx, t, writeStarted)
// Ensure the write is actually blocked by repeatedly checking that:
// 1. The write hasn't completed yet
// 2. The writer is still not connected
// We use require.Eventually to give it a fair chance to reach the blocking state
require.Eventually(t, func() bool {
select {
case <-writeComplete:
t.Fatal("Write should be blocked when no writer is connected")
return false
default:
// Write is still blocked, which is what we want
return !bw.Connected()
}
}, testutil.WaitShort, testutil.IntervalMedium)
// Close the writer while the write is blocked waiting for connection
closeErr := bw.Close()
require.NoError(t, closeErr)
// Wait for write to complete
select {
case <-writeComplete:
// Good, write completed
case <-ctx.Done():
t.Fatal("Write did not complete in time")
}
// The write should have failed with os.ErrClosed because Close() was called
// while it was waiting for connection
require.ErrorIs(t, writeErr, os.ErrClosed)
require.Equal(t, 0, n)
// Subsequent writes should also fail
n, err := bw.Write([]byte("world"))
require.Equal(t, 0, n)
require.ErrorIs(t, err, os.ErrClosed)
}
func TestBackedWriter_ConcurrentWriteAndReconnect(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Initial connection
writer1 := newMockWriter()
err := bw.Reconnect(0, writer1)
require.NoError(t, err)
// Write some initial data
_, err = bw.Write([]byte("initial"))
require.NoError(t, err)
// Start reconnection which will block new writes
replayStarted := make(chan struct{}, 1) // Buffered to prevent race condition
replayCanComplete := make(chan struct{})
writer2 := &mockWriter{
writeFunc: func(p []byte) (int, error) {
// Signal that replay has started
select {
case replayStarted <- struct{}{}:
default:
// Signal already sent, which is fine
}
// Wait for test to allow replay to complete
<-replayCanComplete
return len(p), nil
},
}
// Start the reconnection in a goroutine so we can control timing
reconnectComplete := make(chan error, 1)
go func() {
reconnectComplete <- bw.Reconnect(0, writer2)
}()
ctx := testutil.Context(t, testutil.WaitShort)
// Wait for replay to start
testutil.RequireReceive(ctx, t, replayStarted)
// Now start a write operation that will be blocked by the ongoing reconnect
writeStarted := make(chan struct{}, 1)
writeComplete := make(chan struct{})
var writeErr error
var n int
go func() {
defer close(writeComplete)
// Signal that we're about to start the write
writeStarted <- struct{}{}
// This write should be blocked during reconnect
n, writeErr = bw.Write([]byte("blocked"))
}()
// Wait for write to start
testutil.RequireReceive(ctx, t, writeStarted)
// Use a small timeout to ensure the write goroutine has a chance to get blocked
// on the mutex before we check if it's still blocked
writeCheckTimer := time.NewTimer(testutil.IntervalFast)
defer writeCheckTimer.Stop()
select {
case <-writeComplete:
t.Fatal("Write should be blocked during reconnect")
case <-writeCheckTimer.C:
// Write is still blocked after a reasonable wait
}
// Allow replay to complete, which will allow reconnect to finish
close(replayCanComplete)
// Wait for reconnection to complete
select {
case reconnectErr := <-reconnectComplete:
require.NoError(t, reconnectErr)
case <-ctx.Done():
t.Fatal("Reconnect did not complete in time")
}
// Wait for write to complete
<-writeComplete
// Write should succeed after reconnection completes
require.NoError(t, writeErr)
require.Equal(t, 7, n) // "blocked" is 7 bytes
// Verify the writer is connected
require.True(t, bw.Connected())
}
func TestBackedWriter_ConcurrentReconnectAndClose(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Initial connection and write some data
writer1 := newMockWriter()
err := bw.Reconnect(0, writer1)
require.NoError(t, err)
_, err = bw.Write([]byte("test data"))
require.NoError(t, err)
// Start reconnection with slow replay
reconnectStarted := make(chan struct{}, 1)
replayCanComplete := make(chan struct{})
reconnectComplete := make(chan struct{})
var reconnectErr error
go func() {
defer close(reconnectComplete)
writer2 := &mockWriter{
writeFunc: func(p []byte) (int, error) {
// Signal that replay has started
select {
case reconnectStarted <- struct{}{}:
default:
}
// Wait for test to allow replay to complete
<-replayCanComplete
return len(p), nil
},
}
reconnectErr = bw.Reconnect(0, writer2)
}()
// Wait for reconnection to start
ctx := testutil.Context(t, testutil.WaitShort)
testutil.RequireReceive(ctx, t, reconnectStarted)
// Start Close() in a separate goroutine since it will block until Reconnect() completes
closeStarted := make(chan struct{}, 1)
closeComplete := make(chan error, 1)
go func() {
closeStarted <- struct{}{} // Signal that Close() is starting
closeComplete <- bw.Close()
}()
// Wait for Close() to start, then give it a moment to attempt to acquire the mutex
testutil.RequireReceive(ctx, t, closeStarted)
closeCheckTimer := time.NewTimer(testutil.IntervalFast)
defer closeCheckTimer.Stop()
select {
case <-closeComplete:
t.Fatal("Close should be blocked during reconnect")
case <-closeCheckTimer.C:
// Good, Close is still blocked after a reasonable wait
}
// Allow replay to complete so reconnection can finish
close(replayCanComplete)
// Wait for reconnect to complete
select {
case <-reconnectComplete:
// Good, reconnect completed
case <-ctx.Done():
t.Fatal("Reconnect did not complete in time")
}
// Wait for close to complete
select {
case closeErr := <-closeComplete:
require.NoError(t, closeErr)
case <-ctx.Done():
t.Fatal("Close did not complete in time")
}
// With mutex held during replay, Close() waits for Reconnect() to finish.
// So Reconnect() should succeed, then Close() runs and closes the writer.
require.NoError(t, reconnectErr)
// Verify writer is closed (Close() ran after Reconnect() completed)
require.False(t, bw.Connected())
}
func TestBackedWriter_MultipleWritesDuringReconnect(t *testing.T) {
t.Parallel()
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Initial connection
writer1 := newMockWriter()
err := bw.Reconnect(0, writer1)
require.NoError(t, err)
// Write some initial data
_, err = bw.Write([]byte("initial"))
require.NoError(t, err)
// Start multiple write operations
numWriters := 5
var wg sync.WaitGroup
writeResults := make([]error, numWriters)
writesStarted := make(chan struct{}, numWriters)
for i := 0; i < numWriters; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
// Signal that this write is starting
writesStarted <- struct{}{}
data := []byte{byte('A' + id)}
_, writeResults[id] = bw.Write(data)
}(i)
}
// Wait for all writes to start
ctx := testutil.Context(t, testutil.WaitLong)
for i := 0; i < numWriters; i++ {
testutil.RequireReceive(ctx, t, writesStarted)
}
// Use a timer to ensure all write goroutines have had a chance to start executing
// and potentially get blocked on the mutex before we start the reconnection
writesReadyTimer := time.NewTimer(testutil.IntervalFast)
defer writesReadyTimer.Stop()
<-writesReadyTimer.C
// Start reconnection with controlled replay
replayStarted := make(chan struct{}, 1)
replayCanComplete := make(chan struct{})
writer2 := &mockWriter{
writeFunc: func(p []byte) (int, error) {
// Signal that replay has started
select {
case replayStarted <- struct{}{}:
default:
}
// Wait for test to allow replay to complete
<-replayCanComplete
return len(p), nil
},
}
// Start reconnection in a goroutine so we can control timing
reconnectComplete := make(chan error, 1)
go func() {
reconnectComplete <- bw.Reconnect(0, writer2)
}()
// Wait for replay to start
testutil.RequireReceive(ctx, t, replayStarted)
// Allow replay to complete
close(replayCanComplete)
// Wait for reconnection to complete
select {
case reconnectErr := <-reconnectComplete:
require.NoError(t, reconnectErr)
case <-ctx.Done():
t.Fatal("Reconnect did not complete in time")
}
// Wait for all writes to complete
wg.Wait()
// All writes should succeed
for i, err := range writeResults {
require.NoError(t, err, "Write %d should succeed", i)
}
// Verify the writer is connected
require.True(t, bw.Connected())
}
func BenchmarkBackedWriter_Write(b *testing.B) {
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan) // 64KB buffer
writer := newMockWriter()
bw.Reconnect(0, writer)
data := bytes.Repeat([]byte("x"), 1024) // 1KB writes
b.ResetTimer()
for i := 0; i < b.N; i++ {
bw.Write(data)
}
}
func BenchmarkBackedWriter_Reconnect(b *testing.B) {
errChan := make(chan backedpipe.ErrorEvent, 1)
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
// Connect initially to fill buffer with data
initialWriter := newMockWriter()
err := bw.Reconnect(0, initialWriter)
if err != nil {
b.Fatal(err)
}
// Fill buffer with data
data := bytes.Repeat([]byte("x"), 1024)
for i := 0; i < 32; i++ {
bw.Write(data)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
writer := newMockWriter()
bw.Reconnect(0, writer)
}
}
