Compare commits

6 Commits

| SHA1 |
|---|
| `f97bd76bb5` |
| `5059c23b43` |
| `e5a74a775d` |
| `de494d0a49` |
| `774792476c` |
| `4a61bbeae4` |
@@ -1,218 +0,0 @@

# Database Development Patterns

## Database Work Overview

### Database Generation Process

1. Modify SQL files in `coderd/database/queries/`
2. Run `make gen`
3. If you see errors about the audit table, update `enterprise/audit/table.go`
4. Run `make gen` again
5. Run `make lint` to catch any remaining issues

## Migration Guidelines

### Creating Migration Files

**Location**: `coderd/database/migrations/`

**Format**: `{number}_{description}.{up|down}.sql`

- Number must be unique and sequential
- Always include both up and down migrations
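As a concrete illustration of the naming and pairing rules, a hypothetical migration pair might look like this (the number, table, and column are invented for the example):

```sql
-- 000301_add_widget_color.up.sql
ALTER TABLE widgets ADD COLUMN color text NOT NULL DEFAULT '';

-- 000301_add_widget_color.down.sql
ALTER TABLE widgets DROP COLUMN color;
```

The down file must undo exactly what the up file did, so the pair can be applied and rolled back cleanly.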

### Helper Scripts

| Script | Purpose |
|---|---|
| `./coderd/database/migrations/create_migration.sh "migration name"` | Creates new migration files |
| `./coderd/database/migrations/fix_migration_numbers.sh` | Renumbers migrations to avoid conflicts |
| `./coderd/database/migrations/create_fixture.sh "fixture name"` | Creates test fixtures for migrations |

### Database Query Organization

- **MUST DO**: Make all database changes (adding or modifying queries) in the `coderd/database/queries/*.sql` files
- **MUST DO**: Group queries in files by context, e.g. `prebuilds.sql`, `users.sql`, `oauth2.sql`
- After changing any `coderd/database/queries/*.sql` file, run `make gen` to regenerate the corresponding ORM code

## Handling Nullable Fields

Use `sql.NullString`, `sql.NullBool`, etc. for optional database fields:

```go
CodeChallenge: sql.NullString{
	String: params.codeChallenge,
	Valid:  params.codeChallenge != "",
}
```

Set `.Valid = true` when providing values.
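The `Valid` flag, not the string's zero value, is what determines whether `NULL` is written. A minimal self-contained sketch using only `database/sql` types (the helper name is invented for illustration):

```go
package main

import (
	"database/sql"
	"fmt"
)

// toNullString wraps an optional string: an empty input becomes SQL NULL.
func toNullString(s string) sql.NullString {
	return sql.NullString{String: s, Valid: s != ""}
}

func main() {
	set := toNullString("abc123")
	unset := toNullString("")
	// The unset value would be stored as NULL, not as an empty string.
	fmt.Println(set.Valid, unset.Valid) // prints: true false
}
```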

## Audit Table Updates

If adding fields to auditable types:

1. Update `enterprise/audit/table.go`
2. Add each new field with the appropriate action:
   - `ActionTrack`: Field should be tracked in audit logs
   - `ActionIgnore`: Field should be ignored in audit logs
   - `ActionSecret`: Field contains sensitive data
3. Run `make gen` to verify no audit errors
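The shape of such an entry can be sketched as a field-to-action map. Note this is a standalone illustration: the `Action` type and the field names below are stand-ins, not the real definitions in `enterprise/audit/table.go`:

```go
package main

import "fmt"

// Action mirrors the kind of per-field audit policy described above.
type Action string

const (
	ActionTrack  Action = "track"
	ActionIgnore Action = "ignore"
	ActionSecret Action = "secret"
)

// auditableFields maps a hypothetical table's columns to audit actions.
var auditableFields = map[string]Action{
	"username":        ActionTrack,  // value appears in audit logs
	"updated_at":      ActionIgnore, // noise, not audited
	"hashed_password": ActionSecret, // change is recorded, value redacted
}

func main() {
	fmt.Println(auditableFields["hashed_password"]) // prints: secret
}
```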

## Database Architecture

### Core Components

- **PostgreSQL 13+** recommended for production
- **Migrations** managed with `migrate`
- **Database authorization** through the `dbauthz` package

### Authorization Patterns

```go
// Public endpoints needing system access (OAuth2 registration)
app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)

// Authenticated endpoints with user context
app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)

// System operations in middleware
roles, err := db.GetAuthorizationUserRoles(dbauthz.AsSystemRestricted(ctx), userID)
```

## Common Database Issues

### Migration Issues

1. **Migration conflicts**: Use `fix_migration_numbers.sh` to renumber
2. **Missing down migration**: Always create both up and down files
3. **Schema inconsistencies**: Verify against the existing schema

### Field Handling Issues

1. **Nullable field errors**: Use `sql.Null*` types consistently
2. **Missing audit entries**: Update `enterprise/audit/table.go`

### Query Issues

1. **Query organization**: Group related queries in appropriate files
2. **Generated code errors**: Run `make gen` after query changes
3. **Performance issues**: Add appropriate indexes in migrations

## Database Testing

### Test Database Setup

```go
func TestDatabaseFunction(t *testing.T) {
	db := dbtestutil.NewDB(t)

	// Test with real database
	result, err := db.GetSomething(ctx, param)
	require.NoError(t, err)
	require.Equal(t, expected, result)
}
```

## Best Practices

### Schema Design

1. **Use appropriate data types**: VARCHAR for strings, TIMESTAMP for times
2. **Add constraints**: NOT NULL, UNIQUE, FOREIGN KEY as appropriate
3. **Create indexes**: For frequently queried columns
4. **Consider performance**: Normalize appropriately but avoid over-normalization
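Those guidelines combined, as a hypothetical table definition (the table and columns are illustrative, not part of the actual schema):

```sql
CREATE TABLE api_tokens (
	id uuid PRIMARY KEY,
	user_id uuid NOT NULL REFERENCES users (id) ON DELETE CASCADE,
	name varchar(64) NOT NULL,
	created_at timestamptz NOT NULL DEFAULT now(),
	UNIQUE (user_id, name)
);

-- Index the column used in frequent lookups.
CREATE INDEX idx_api_tokens_user_id ON api_tokens (user_id);
```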

### Query Writing

1. **Use parameterized queries**: Prevent SQL injection
2. **Handle errors appropriately**: Check for specific error types
3. **Use transactions**: For related operations that must succeed together
4. **Optimize queries**: Use EXPLAIN to understand query performance
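Points 1 and 3 in plain SQL: a parameterized, transactional pair of statements that must succeed together (the table is illustrative):

```sql
BEGIN;
UPDATE accounts SET balance = balance - $1 WHERE id = $2;
UPDATE accounts SET balance = balance + $1 WHERE id = $3;
COMMIT; -- or ROLLBACK on error, so neither update applies alone
```

The `$1`/`$2` placeholders keep user input out of the SQL text, and the transaction guarantees the two updates are atomic.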

### Migration Writing

1. **Make migrations reversible**: Always include a down migration
2. **Test migrations**: On a copy of production data if possible
3. **Keep migrations small**: One logical change per migration
4. **Document complex changes**: Add comments explaining the rationale

## Advanced Patterns

### Complex Queries

```sql
-- Example: Complex join with aggregation
SELECT
	u.id,
	u.username,
	COUNT(w.id) AS workspace_count
FROM users u
LEFT JOIN workspaces w ON u.id = w.owner_id
WHERE u.created_at > $1
GROUP BY u.id, u.username
ORDER BY workspace_count DESC;
```

### Conditional Queries

```sql
-- Example: Dynamic filtering
SELECT * FROM oauth2_provider_apps
WHERE
	($1::text IS NULL OR name ILIKE '%' || $1 || '%')
	AND ($2::uuid IS NULL OR organization_id = $2)
ORDER BY created_at DESC;
```

### Audit Patterns

```go
// Example: Auditable database operation
func (q *sqlQuerier) UpdateUser(ctx context.Context, arg UpdateUserParams) (User, error) {
	// Implementation here

	// Audit the change
	if auditor := audit.FromContext(ctx); auditor != nil {
		auditor.Record(audit.UserUpdate{
			UserID: arg.ID,
			Old:    oldUser,
			New:    newUser,
		})
	}

	return newUser, nil
}
```

## Debugging Database Issues

### Common Debug Commands

```bash
# Check database connection
make test-postgres

# Run specific database tests
go test ./coderd/database/... -run TestSpecificFunction

# Check query generation
make gen

# Verify audit table
make lint
```

### Debug Techniques

1. **Enable query logging**: Set appropriate log levels
2. **Use database tools**: pgAdmin, psql for direct inspection
3. **Check constraints**: UNIQUE, FOREIGN KEY violations
4. **Analyze performance**: Use EXPLAIN ANALYZE for slow queries
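For point 4, wrapping the suspect query is enough to get actual row counts, timings, and the chosen plan (the query shown is illustrative):

```sql
EXPLAIN ANALYZE
SELECT * FROM workspaces WHERE owner_id = $1;
-- Sequential scans on large tables in the output usually indicate a missing index.
```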

### Troubleshooting Checklist

- [ ] Migration files exist (both up and down)
- [ ] `make gen` run after query changes
- [ ] Audit table updated for new fields
- [ ] Nullable fields use `sql.Null*` types
- [ ] Authorization context appropriate for endpoint type
@@ -1,157 +0,0 @@

# OAuth2 Development Guide

## RFC Compliance Development

### Implementing Standard Protocols

When implementing standard protocols (OAuth2, OpenID Connect, etc.):

1. **Fetch and Analyze Official RFCs**:
   - Always read the actual RFC specifications before implementation
   - Use the WebFetch tool to get current RFC content for compliance verification
   - Document RFC requirements in code comments

2. **Default Values Matter**:
   - Pay close attention to RFC-specified default values
   - Example: RFC 7591 specifies `client_secret_basic` as the default, not `client_secret_post`
   - Ensure consistency between database migrations and application code

3. **Security Requirements**:
   - Follow RFC security considerations precisely
   - Example: RFC 7592 prohibits returning registration access tokens in GET responses
   - Implement proper error responses per protocol specifications

4. **Validation Compliance**:
   - Implement comprehensive validation per RFC requirements
   - Support protocol-specific features (e.g., custom schemes for native OAuth2 apps)
   - Test edge cases defined in specifications

## OAuth2 Provider Implementation

### OAuth2 Spec Compliance

1. **Follow RFC 6749 for token responses**
   - Use `expires_in` (seconds) not `expiry` (timestamp) in token responses
   - Return the proper OAuth2 error format: `{"error": "code", "error_description": "details"}`

2. **Error Response Format**
   - Create OAuth2-compliant error responses for the token endpoint
   - Use standard error codes: `invalid_client`, `invalid_grant`, `invalid_request`
   - Avoid generic error responses for OAuth2 endpoints

### PKCE Implementation

- Support both with and without PKCE for backward compatibility
- Use the S256 method for the code challenge
- Properly validate `code_verifier` against the stored `code_challenge`

### UI Authorization Flow

- Use POST requests for consent, not GET with links
- Avoid depending on `Referer` headers for security decisions
- Support proper `state` parameter validation

### RFC 8707 Resource Indicators

- Store resource parameters in the database for server-side validation (opaque tokens)
- Validate resource consistency between authorization and token requests
- Support audience validation in refresh token flows
- The resource parameter is optional but must be consistent when provided
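The consistency rule in the last two bullets can be sketched as a small helper. This is a hypothetical simplification (real policy on omitted parameters may be stricter):

```go
package main

import "fmt"

// resourceConsistent reports whether the resource parameter on a token
// request matches what was stored at authorization time. Per RFC 8707 the
// parameter is optional, but when both sides supply it they must agree.
func resourceConsistent(authorized, requested string) bool {
	if requested == "" {
		// Sketch assumption: omitting the parameter at exchange time passes.
		return true
	}
	return authorized == requested
}

func main() {
	fmt.Println(resourceConsistent("https://api.example.com", "https://api.example.com"))
	fmt.Println(resourceConsistent("https://api.example.com", "https://other.example.com"))
}
```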

## OAuth2 Error Handling Pattern

```go
// Define specific OAuth2 errors
var (
	errInvalidPKCE = xerrors.New("invalid code_verifier")
)

// Use OAuth2-compliant error responses
type OAuth2Error struct {
	Error            string `json:"error"`
	ErrorDescription string `json:"error_description,omitempty"`
}

// Return proper OAuth2 errors
if errors.Is(err, errInvalidPKCE) {
	writeOAuth2Error(ctx, rw, http.StatusBadRequest, "invalid_grant", "The PKCE code verifier is invalid")
	return
}
```

## Testing OAuth2 Features

### Test Scripts

Located in `./scripts/oauth2/`:

- `test-mcp-oauth2.sh` - Full automated test suite
- `setup-test-app.sh` - Create test OAuth2 app
- `cleanup-test-app.sh` - Remove test app
- `generate-pkce.sh` - Generate PKCE parameters
- `test-manual-flow.sh` - Manual browser testing

Always run the full test suite after OAuth2 changes:

```bash
./scripts/oauth2/test-mcp-oauth2.sh
```

### RFC Protocol Testing

1. **Compliance Test Coverage**:
   - Test all RFC-defined error codes and responses
   - Validate proper HTTP status codes for different scenarios
   - Test protocol-specific edge cases (URI formats, token formats, etc.)

2. **Security Boundary Testing**:
   - Test client isolation and privilege separation
   - Verify information disclosure protections
   - Test token security and proper invalidation

## Common OAuth2 Issues

1. **OAuth2 endpoints returning the wrong error format** - Ensure OAuth2 endpoints return RFC 6749 compliant errors
2. **Resource indicator validation failing** - Ensure the database stores and retrieves resource parameters correctly
3. **PKCE tests failing** - Verify both authorization code storage and token exchange handle PKCE fields
4. **RFC compliance failures** - Verify against actual RFC specifications, not assumptions
5. **Authorization context errors in public endpoints** - Use the `dbauthz.AsSystemRestricted(ctx)` pattern
6. **Default value mismatches** - Ensure database migrations match application code defaults
7. **Bearer token authentication issues** - Check token extraction precedence and format validation
8. **URI validation failures** - Support both standard schemes and custom schemes per protocol requirements

## Authorization Context Patterns

```go
// Public endpoints needing system access (OAuth2 registration)
app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)

// Authenticated endpoints with user context
app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)

// System operations in middleware
roles, err := db.GetAuthorizationUserRoles(dbauthz.AsSystemRestricted(ctx), userID)
```

## OAuth2/Authentication Work Patterns

- Types go in `codersdk/oauth2.go` or similar
- Handlers go in `coderd/oauth2.go` or `coderd/identityprovider/`
- Database fields need a migration plus audit table updates
- Always support backward compatibility

## Protocol Implementation Checklist

Before completing OAuth2 or authentication feature work:

- [ ] Verify RFC compliance by reading actual specifications
- [ ] Implement proper error response formats per protocol
- [ ] Add comprehensive validation for all protocol fields
- [ ] Test security boundaries and token handling
- [ ] Update RBAC permissions for new resources
- [ ] Add audit logging support if applicable
- [ ] Create database migrations with proper defaults
- [ ] Add comprehensive test coverage including edge cases
- [ ] Verify linting compliance
- [ ] Test both positive and negative scenarios
- [ ] Document protocol-specific patterns and requirements
@@ -1,212 +0,0 @@

# Testing Patterns and Best Practices

## Testing Best Practices

### Avoiding Race Conditions

1. **Unique Test Identifiers**:
   - Never use hardcoded names in concurrent tests
   - Use `time.Now().UnixNano()` or similar for unique identifiers
   - Example: `fmt.Sprintf("test-client-%s-%d", t.Name(), time.Now().UnixNano())`

2. **Database Constraint Awareness**:
   - Understand unique constraints that can cause test conflicts
   - Generate unique values for all constrained fields
   - Test name isolation prevents cross-test interference

### Testing Patterns

- Use table-driven tests for comprehensive coverage
- Mock external dependencies
- Test both positive and negative cases
- Use `testutil.WaitLong` for timeouts in tests

### Test Package Naming

- **Test packages**: Use `package_test` naming (e.g., `identityprovider_test`) for black-box testing

## RFC Protocol Testing

### Compliance Test Coverage

1. **Test all RFC-defined error codes and responses**
2. **Validate proper HTTP status codes for different scenarios**
3. **Test protocol-specific edge cases** (URI formats, token formats, etc.)

### Security Boundary Testing

1. **Test client isolation and privilege separation**
2. **Verify information disclosure protections**
3. **Test token security and proper invalidation**

## Test Organization

### Test File Structure

```
coderd/
├── oauth2.go                 # Implementation
├── oauth2_test.go            # Main tests
├── oauth2_test_helpers.go    # Test utilities
└── oauth2_validation.go      # Validation logic
```

### Test Categories

1. **Unit Tests**: Test individual functions in isolation
2. **Integration Tests**: Test API endpoints with the database
3. **End-to-End Tests**: Full workflow testing
4. **Race Tests**: Concurrent access testing

## Test Commands

### Running Tests

| Command | Purpose |
|---------|---------|
| `make test` | Run all Go tests |
| `make test RUN=TestFunctionName` | Run specific test |
| `go test -v ./path/to/package -run TestFunctionName` | Run test with verbose output |
| `make test-postgres` | Run tests with Postgres database |
| `make test-race` | Run tests with Go race detector |
| `make test-e2e` | Run end-to-end tests |

### Frontend Testing

| Command | Purpose |
|---------|---------|
| `pnpm test` | Run frontend tests |
| `pnpm check` | Run code checks |

## Common Testing Issues

### Database-Related

1. **SQL type errors** - Use `sql.Null*` types for nullable fields
2. **Race conditions in tests** - Use unique identifiers instead of hardcoded names

### OAuth2 Testing

1. **PKCE tests failing** - Verify both authorization code storage and token exchange handle PKCE fields
2. **Resource indicator validation failing** - Ensure the database stores and retrieves resource parameters correctly

### General Issues

1. **Missing newlines** - Ensure files end with a newline character
2. **Package naming errors** - Use `package_test` naming for test files
3. **Log message formatting errors** - Use lowercase, descriptive messages without special characters

## Systematic Testing Approach

### Multi-Issue Problem Solving

When facing multiple failing tests or complex integration issues:

1. **Identify Root Causes**:
   - Run failing tests individually to isolate issues
   - Use LSP tools to trace through call chains
   - Check both compilation and runtime errors

2. **Fix in Logical Order**:
   - Address compilation issues first (imports, syntax)
   - Fix authorization and RBAC issues next
   - Resolve business logic and validation issues
   - Handle edge cases and race conditions last

3. **Verification Strategy**:
   - Test each fix individually before moving to the next issue
   - Use `make lint` and `make gen` after database changes
   - Verify RFC compliance with actual specifications
   - Run comprehensive test suites before considering the work complete

## Test Data Management

### Unique Test Data

```go
// Good: Unique identifiers prevent conflicts
clientName := fmt.Sprintf("test-client-%s-%d", t.Name(), time.Now().UnixNano())

// Bad: Hardcoded names cause race conditions
clientName := "test-client"
```

### Test Cleanup

```go
func TestSomething(t *testing.T) {
	// Setup
	client := coderdtest.New(t, nil)

	// Test code here

	// Cleanup happens automatically via t.Cleanup() in coderdtest
}
```

## Test Utilities

### Common Test Patterns

```go
// Table-driven tests
tests := []struct {
	name     string
	input    InputType
	expected OutputType
	wantErr  bool
}{
	{
		name:     "valid input",
		input:    validInput,
		expected: expectedOutput,
		wantErr:  false,
	},
	// ... more test cases
}

for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		result, err := functionUnderTest(tt.input)
		if tt.wantErr {
			require.Error(t, err)
			return
		}
		require.NoError(t, err)
		require.Equal(t, tt.expected, result)
	})
}
```

### Test Assertions

```go
// Use testify/require for assertions
require.NoError(t, err)
require.Equal(t, expected, actual)
require.NotNil(t, result)
require.True(t, condition)
```

## Performance Testing

### Load Testing

- Use the `scaletest/` directory for load testing scenarios
- Run `./scaletest/scaletest.sh` for performance testing

### Benchmarking

```go
func BenchmarkFunction(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// Function call to benchmark
		_ = functionUnderTest(input)
	}
}
```

Run benchmarks with:

```bash
go test -bench=. -benchmem ./package/path
```
@@ -1,231 +0,0 @@

# Troubleshooting Guide

## Common Issues

### Database Issues

1. **"Audit table entry missing action"**
   - **Solution**: Update `enterprise/audit/table.go`
   - Add each new field with the appropriate action (`ActionTrack`, `ActionIgnore`, `ActionSecret`)
   - Run `make gen` to verify no audit errors

2. **SQL type errors**
   - **Solution**: Use `sql.Null*` types for nullable fields
   - Set `.Valid = true` when providing values
   - Example:

   ```go
   CodeChallenge: sql.NullString{
       String: params.codeChallenge,
       Valid:  params.codeChallenge != "",
   }
   ```

### Testing Issues

3. **"package should be X_test"**
   - **Solution**: Use `package_test` naming for test files
   - Example: `identityprovider_test` for black-box testing

4. **Race conditions in tests**
   - **Solution**: Use unique identifiers instead of hardcoded names
   - Example: `fmt.Sprintf("test-client-%s-%d", t.Name(), time.Now().UnixNano())`
   - Never use hardcoded names in concurrent tests

5. **Missing newlines**
   - **Solution**: Ensure files end with a newline character
   - Most editors can be configured to add this automatically

### OAuth2 Issues

6. **OAuth2 endpoints returning the wrong error format**
   - **Solution**: Ensure OAuth2 endpoints return RFC 6749 compliant errors
   - Use standard error codes: `invalid_client`, `invalid_grant`, `invalid_request`
   - Format: `{"error": "code", "error_description": "details"}`

7. **Resource indicator validation failing**
   - **Solution**: Ensure the database stores and retrieves resource parameters correctly
   - Check both authorization code storage and token exchange handling

8. **PKCE tests failing**
   - **Solution**: Verify both authorization code storage and token exchange handle PKCE fields
   - Check `CodeChallenge` and `CodeChallengeMethod` field handling

### RFC Compliance Issues

9. **RFC compliance failures**
   - **Solution**: Verify against actual RFC specifications, not assumptions
   - Use the WebFetch tool to get current RFC content for compliance verification
   - Read the actual RFC specifications before implementation

10. **Default value mismatches**
    - **Solution**: Ensure database migrations match application code defaults
    - Example: RFC 7591 specifies `client_secret_basic` as the default, not `client_secret_post`

### Authorization Issues

11. **Authorization context errors in public endpoints**
    - **Solution**: Use the `dbauthz.AsSystemRestricted(ctx)` pattern
    - Example:

    ```go
    // Public endpoints needing system access
    app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)
    ```

### Authentication Issues

12. **Bearer token authentication issues**
    - **Solution**: Check token extraction precedence and format validation
    - Ensure proper RFC 6750 bearer token support

13. **URI validation failures**
    - **Solution**: Support both standard schemes and custom schemes per protocol requirements
    - Native OAuth2 apps may use custom schemes

### General Development Issues

14. **Log message formatting errors**
    - **Solution**: Use lowercase, descriptive messages without special characters
    - Follow Go logging conventions

## Systematic Debugging Approach

### Multi-Issue Problem Solving

When facing multiple failing tests or complex integration issues:

1. **Identify Root Causes**:
   - Run failing tests individually to isolate issues
   - Use LSP tools to trace through call chains
   - Check both compilation and runtime errors

2. **Fix in Logical Order**:
   - Address compilation issues first (imports, syntax)
   - Fix authorization and RBAC issues next
   - Resolve business logic and validation issues
   - Handle edge cases and race conditions last

3. **Verification Strategy**:
   - Test each fix individually before moving to the next issue
   - Use `make lint` and `make gen` after database changes
   - Verify RFC compliance with actual specifications
   - Run comprehensive test suites before considering the work complete

## Debug Commands

### Useful Debug Commands

| Command | Purpose |
|---|---|
| `make lint` | Run all linters |
| `make gen` | Generate mocks, database queries |
| `go test -v ./path/to/package -run TestName` | Run specific test with verbose output |
| `go test -race ./...` | Run tests with race detector |

### LSP Debugging

#### Go LSP (Backend)

| Command | Purpose |
|---|---|
| `mcp__go-language-server__definition symbolName` | Find function definition |
| `mcp__go-language-server__references symbolName` | Find all references |
| `mcp__go-language-server__diagnostics filePath` | Check for compilation errors |
| `mcp__go-language-server__hover filePath line col` | Get type information |

#### TypeScript LSP (Frontend)

| Command | Purpose |
|---|---|
| `mcp__typescript-language-server__definition symbolName` | Find component/function definition |
| `mcp__typescript-language-server__references symbolName` | Find all component/type usages |
| `mcp__typescript-language-server__diagnostics filePath` | Check for TypeScript errors |
| `mcp__typescript-language-server__hover filePath line col` | Get type information |
| `mcp__typescript-language-server__rename_symbol filePath line col newName` | Rename across codebase |

## Common Error Messages

### Database Errors

**Error**: `pq: relation "oauth2_provider_app_codes" does not exist`

- **Cause**: Missing database migration
- **Solution**: Run database migrations and check the migration files

**Error**: `audit table entry missing action for field X`

- **Cause**: New field added without an audit table update
- **Solution**: Update `enterprise/audit/table.go`

### Go Compilation Errors

**Error**: `package should be identityprovider_test`

- **Cause**: Test package naming convention violation
- **Solution**: Use `package_test` naming for black-box tests

**Error**: `cannot use X (type Y) as type Z`

- **Cause**: Type mismatch, often with nullable fields
- **Solution**: Use the appropriate `sql.Null*` types

### OAuth2 Errors

**Error**: `invalid_client` but the client exists

- **Cause**: Authorization context issue
- **Solution**: Use `dbauthz.AsSystemRestricted(ctx)` for public endpoints

**Error**: PKCE validation failing

- **Cause**: Missing PKCE fields in database operations
- **Solution**: Ensure `CodeChallenge` and `CodeChallengeMethod` are handled

## Prevention Strategies

### Before Making Changes

1. **Read the relevant documentation**
2. **Check if similar patterns exist in the codebase**
3. **Understand the authorization context requirements**
4. **Plan database changes carefully**

### During Development

1. **Run tests frequently**: `make test`
2. **Use LSP tools for navigation**: Avoid manual searching
3. **Follow RFC specifications precisely**
4. **Update audit tables when adding database fields**

### Before Committing

1. **Run the full test suite**: `make test`
2. **Check linting**: `make lint`
3. **Test with the race detector**: `make test-race`

## Getting Help

### Internal Resources

- Check existing similar implementations in the codebase
- Use LSP tools to understand code relationships
  - For Go code: use `mcp__go-language-server__*` commands
  - For TypeScript/React code: use `mcp__typescript-language-server__*` commands
- Read related test files for expected behavior

### External Resources

- Official RFC specifications for protocol compliance
- Go documentation for language features
- PostgreSQL documentation for database issues

### Debug Information Collection

When reporting issues, include:

1. **Exact error message**
2. **Steps to reproduce**
3. **Relevant code snippets**
4. **Test output (if applicable)**
5. **Environment information** (OS, Go version, etc.)
@@ -1,223 +0,0 @@

# Development Workflows and Guidelines

## Quick Start Checklist for New Features

### Before Starting

- [ ] Run `git pull` to ensure you're on the latest code
- [ ] Check if the feature touches the database - you'll need migrations
- [ ] Check if the feature touches audit logs - update `enterprise/audit/table.go`

## Development Server

### Starting Development Mode

- **Use `./scripts/develop.sh` to start Coder in development mode**
- This automatically builds and runs with the `--dev` flag and a proper access URL
- **⚠️ Do NOT manually run `make build && ./coder server --dev` - use the script instead**

### Development Workflow

1. **Always start with the development script**: `./scripts/develop.sh`
2. **Make changes** to your code
3. **The script will automatically rebuild** and restart as needed
4. **Access the development server** at the URL provided by the script

## Code Style Guidelines

### Go Style

- Follow [Effective Go](https://go.dev/doc/effective_go) and [Go's Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
- Create packages when used during implementation
- Validate abstractions against implementations
- **Test packages**: Use `package_test` naming (e.g., `identityprovider_test`) for black-box testing

### Error Handling

- Use descriptive error messages
- Wrap errors with context
- Propagate errors appropriately
- Use proper error types
- Pattern: `xerrors.Errorf("failed to X: %w", err)`

### Naming Conventions

- Use clear, descriptive names
- Abbreviate only when the abbreviation is obvious
- Follow Go and TypeScript naming conventions

### Comments

- Document exported functions, types, and non-obvious logic
- Follow JSDoc format for TypeScript
- Use godoc format for Go code

## Database Migration Workflows

### Migration Guidelines

1. **Create migration files**:
   - Location: `coderd/database/migrations/`
   - Format: `{number}_{description}.{up|down}.sql`
   - The number must be unique and sequential
   - Always include both up and down migrations

2. **Use helper scripts**:
   - `./coderd/database/migrations/create_migration.sh "migration name"` - creates new migration files
   - `./coderd/database/migrations/fix_migration_numbers.sh` - renumbers migrations to avoid conflicts
   - `./coderd/database/migrations/create_fixture.sh "fixture name"` - creates test fixtures for migrations

3. **Update database queries**:
   - **MUST DO**: Make any database changes - adding or modifying queries - in the `coderd/database/queries/*.sql` files
   - **MUST DO**: Group queries in files by context - e.g. `prebuilds.sql`, `users.sql`, `oauth2.sql`
   - After changing any `coderd/database/queries/*.sql` file, run `make gen` to regenerate the corresponding ORM code

4. **Handle nullable fields**:
   - Use `sql.NullString`, `sql.NullBool`, etc. for optional database fields
   - Set `.Valid = true` when providing values

5. **Audit table updates**:
   - If adding fields to auditable types, update `enterprise/audit/table.go`
   - Add each new field with the appropriate action (`ActionTrack`, `ActionIgnore`, or `ActionSecret`)
   - Run `make gen` to verify there are no audit errors

### Database Generation Process

1. Modify SQL files in `coderd/database/queries/`
2. Run `make gen`
3. If there are errors about the audit table, update `enterprise/audit/table.go`
4. Run `make gen` again
5. Run `make lint` to catch any remaining issues

## API Development Workflow

### Adding New API Endpoints

1. **Define types** in the `codersdk/` package
2. **Add the handler** in the appropriate `coderd/` file
3. **Register the route** in `coderd/coderd.go`
4. **Add tests** in `coderd/*_test.go` files
5. **Update OpenAPI** by running `make gen`

## Testing Workflows

### Test Execution

- Run the full test suite: `make test`
- Run a specific test: `make test RUN=TestFunctionName`
- Run with Postgres: `make test-postgres`
- Run with the race detector: `make test-race`
- Run end-to-end tests: `make test-e2e`

### Test Development

- Use table-driven tests for comprehensive coverage
- Mock external dependencies
- Test both positive and negative cases
- Use `testutil.WaitLong` for timeouts in tests
- Always use `t.Parallel()` in tests
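
The table-driven shape looks roughly like this runnable sketch; in a real `*_test.go` file each case would run under `t.Run(tc.name, ...)` with `t.Parallel()` as its first statement, and `abs` is a hypothetical function under test:

```go
package main

import "fmt"

// abs is a stand-in for the function being tested.
func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	// The table: one named case per behavior, including edge cases.
	cases := []struct {
		name string
		in   int
		want int
	}{
		{"positive", 3, 3},
		{"negative", -3, 3},
		{"zero", 0, 0},
	}
	for _, tc := range cases {
		got := abs(tc.in)
		fmt.Printf("%s: got=%d want=%d ok=%v\n", tc.name, got, tc.want, got == tc.want)
	}
}
```

Naming each case makes failures self-describing, and `t.Parallel()` inside the subtest keeps the whole table concurrent.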

## Commit Style

- Follow [Conventional Commits 1.0.0](https://www.conventionalcommits.org/en/v1.0.0/)
- Format: `type(scope): message`
- Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
- Keep message titles concise (~70 characters)
- Use the imperative, present tense in commit titles
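
Hypothetical commit titles in this format:

```
feat(coderd): enforce sequential migration numbers
fix(site): handle empty workspace list on the dashboard
chore: bump embedded postgres cache key
```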

## Code Navigation and Investigation

### Using LSP Tools (STRONGLY RECOMMENDED)

**IMPORTANT**: Always use LSP tools for code navigation and understanding. These tools provide accurate, real-time analysis of the codebase and should be your first choice for code investigation.

#### Go LSP Tools (for backend code)

1. **Find function definitions** (USE THIS FREQUENTLY):
   - `mcp__go-language-server__definition symbolName`
   - Example: `mcp__go-language-server__definition getOAuth2ProviderAppAuthorize`
   - Quickly jump to function implementations across packages

2. **Find symbol references** (ESSENTIAL FOR UNDERSTANDING IMPACT):
   - `mcp__go-language-server__references symbolName`
   - Locate all usages of functions, types, or variables
   - Critical for refactoring and understanding data flow

3. **Get symbol information**:
   - `mcp__go-language-server__hover filePath line column`
   - Get type information and documentation at specific positions

#### TypeScript LSP Tools (for frontend code in site/)

1. **Find component/function definitions** (USE THIS FREQUENTLY):
   - `mcp__typescript-language-server__definition symbolName`
   - Example: `mcp__typescript-language-server__definition LoginPage`
   - Quickly navigate to React components, hooks, and utility functions

2. **Find symbol references** (ESSENTIAL FOR UNDERSTANDING IMPACT):
   - `mcp__typescript-language-server__references symbolName`
   - Locate all usages of components, types, or functions
   - Critical for refactoring React components and understanding prop usage

3. **Get type information**:
   - `mcp__typescript-language-server__hover filePath line column`
   - Get TypeScript type information and JSDoc documentation

4. **Rename symbols safely**:
   - `mcp__typescript-language-server__rename_symbol filePath line column newName`
   - Rename components, props, or functions across the entire codebase

5. **Check for TypeScript errors**:
   - `mcp__typescript-language-server__diagnostics filePath`
   - Get compilation errors and warnings for a specific file

### Investigation Strategy (LSP-First Approach)

#### Backend Investigation (Go)

1. **Start with route registration** in `coderd/coderd.go` to understand API endpoints
2. **Use Go LSP `definition`** to trace from route handlers to the actual implementations
3. **Use Go LSP `references`** to understand how functions are called throughout the codebase
4. **Follow the middleware chain** using LSP tools to understand the request processing flow
5. **Check test files** for expected behavior and error patterns

#### Frontend Investigation (TypeScript/React)

1. **Start with route definitions** in `site/src/App.tsx` or the router configuration
2. **Use TypeScript LSP `definition`** to navigate to React components and hooks
3. **Use TypeScript LSP `references`** to find all component usages and prop drilling
4. **Follow the component hierarchy** using LSP tools to understand data flow
5. **Check for TypeScript errors** with `diagnostics` before making changes
6. **Examine test files** (`.test.tsx`) for component behavior and expected props

## Troubleshooting Development Issues

### Common Issues

1. **Development server won't start** - use `./scripts/develop.sh` instead of manual commands
2. **Database migration errors** - check the migration file format and use the helper scripts
3. **Audit table errors** - update `enterprise/audit/table.go` with the new fields
4. **OAuth2 compliance issues** - ensure RFC-compliant error responses

### Debug Commands

- Check linting: `make lint`
- Generate code: `make gen`
- Clean build: `make clean`

## Development Environment Setup

### Prerequisites

- Go (version specified in go.mod)
- Node.js and pnpm for frontend development
- PostgreSQL for database testing
- Docker for containerized testing

### First Time Setup

1. Clone the repository
2. Run `./scripts/develop.sh` to start the development server
3. Access the development URL provided
4. Create an admin user as prompted
5. Begin development

@@ -1,133 +0,0 @@

#!/bin/bash

# Claude Code hook script for file formatting
# This script integrates with the centralized Makefile formatting targets
# and supports the Claude Code hooks system for automatic file formatting.

set -euo pipefail

# A variable to memoize the command for canonicalizing paths.
_CANONICALIZE_CMD=""

# canonicalize_path resolves a path to its absolute, canonical form.
# It tries 'realpath' and 'readlink -f' in order.
# The chosen command is memoized to avoid repeated checks.
# If neither is available, it returns an empty string.
canonicalize_path() {
	local path_to_resolve="$1"

	# If we haven't determined a command yet, find one.
	if [[ -z "$_CANONICALIZE_CMD" ]]; then
		if command -v realpath >/dev/null 2>&1; then
			_CANONICALIZE_CMD="realpath"
		elif command -v readlink >/dev/null 2>&1 && readlink -f . >/dev/null 2>&1; then
			_CANONICALIZE_CMD="readlink"
		else
			# No command found, so we can't resolve.
			# We set a "none" value to prevent re-checking.
			_CANONICALIZE_CMD="none"
		fi
	fi

	# Now, execute the command.
	case "$_CANONICALIZE_CMD" in
	realpath)
		realpath "$path_to_resolve" 2>/dev/null
		;;
	readlink)
		readlink -f "$path_to_resolve" 2>/dev/null
		;;
	*)
		# This handles the "none" case or any unexpected error.
		echo ""
		;;
	esac
}

# Read JSON input from stdin
input=$(cat)

# Extract the file path from the JSON input
# Expected format: {"tool_input": {"file_path": "/absolute/path/to/file"}} or {"tool_response": {"filePath": "/absolute/path/to/file"}}
file_path=$(echo "$input" | jq -r '.tool_input.file_path // .tool_response.filePath // empty')

# Bail out early if no path was provided; the canonicalization below would
# otherwise silently resolve an empty path to the repository root.
if [[ -z "$file_path" ]]; then
	echo "Error: No file path provided in input" >&2
	exit 1
fi

# Secure path canonicalization to prevent path traversal attacks
# Resolve repo root to an absolute, canonical path.
repo_root_raw="$(cd "$(dirname "$0")/../.." && pwd)"
repo_root="$(canonicalize_path "$repo_root_raw")"
if [[ -z "$repo_root" ]]; then
	# Fallback if canonicalization fails
	repo_root="$repo_root_raw"
fi

# Resolve the input path to an absolute path
if [[ "$file_path" = /* ]]; then
	# Already absolute
	abs_file_path="$file_path"
else
	# Make relative paths absolute from the repo root
	abs_file_path="$repo_root/$file_path"
fi

# Canonicalize the path (resolve symlinks and ".." segments)
canonical_file_path="$(canonicalize_path "$abs_file_path")"

# Check if canonicalization failed or if the resolved path is outside the repo
if [[ -z "$canonical_file_path" ]] || { [[ "$canonical_file_path" != "$repo_root" ]] && [[ "$canonical_file_path" != "$repo_root"/* ]]; }; then
	echo "Error: File path is outside repository or invalid: $file_path" >&2
	exit 1
fi

# Handle the case where the file path is the repository root itself.
if [[ "$canonical_file_path" == "$repo_root" ]]; then
	echo "Warning: Formatting the repository root is not a supported operation. Skipping." >&2
	exit 0
fi

# Convert back to a path relative to the repo root for consistency
file_path="${canonical_file_path#"$repo_root"/}"

# Check if the file exists
if [[ ! -f "$file_path" ]]; then
	echo "Error: File does not exist: $file_path" >&2
	exit 1
fi

# Get the file extension to determine the appropriate formatter
file_ext="${file_path##*.}"

# Change to the project root directory (where the Makefile is located)
cd "$(dirname "$0")/../.."

# Call the appropriate Makefile target based on the file extension
case "$file_ext" in
go)
	make fmt/go FILE="$file_path"
	echo "✓ Formatted Go file: $file_path"
	;;
js | jsx | ts | tsx)
	make fmt/ts FILE="$file_path"
	echo "✓ Formatted TypeScript/JavaScript file: $file_path"
	;;
tf | tfvars)
	make fmt/terraform FILE="$file_path"
	echo "✓ Formatted Terraform file: $file_path"
	;;
sh)
	make fmt/shfmt FILE="$file_path"
	echo "✓ Formatted shell script: $file_path"
	;;
md)
	make fmt/markdown FILE="$file_path"
	echo "✓ Formatted Markdown file: $file_path"
	;;
*)
	echo "No formatter available for file extension: $file_ext"
	exit 0
	;;
esac
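
The hook's payload extraction can be exercised by hand; a quick sketch (requires `jq`, and the payload shown is hypothetical):

```shell
# Hypothetical PostToolUse payload, shaped like the hook's expected input.
input='{"tool_input":{"file_path":"site/src/App.tsx"}}'
# Same jq filter the hook uses; falls back to tool_response.filePath.
echo "$input" | jq -r '.tool_input.file_path // .tool_response.filePath // empty'
```

The `// empty` fallback means a payload with neither key produces no output, which the script then rejects as a missing path.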

@@ -1,15 +0,0 @@

{
	"hooks": {
		"PostToolUse": [
			{
				"matcher": "Edit|Write|MultiEdit",
				"hooks": [
					{
						"type": "command",
						"command": ".claude/scripts/format.sh"
					}
				]
			}
		]
	}
}
@@ -1,16 +1,11 @@

{
	"name": "Development environments on your infrastructure",
	"image": "codercom/oss-dogfood:latest",

	"features": {
		// See all possible options here https://github.com/devcontainers/features/tree/main/src/docker-in-docker
		"ghcr.io/devcontainers/features/docker-in-docker:2": {
			"moby": "false"
		},
		"ghcr.io/coder/devcontainer-features/code-server:1": {
			"auth": "none",
			"port": 13337
		},
		"./filebrowser": {
			"folder": "${containerWorkspaceFolder}"
		}
	},
	// SYS_PTRACE to enable go debugging

@@ -18,65 +13,6 @@

	"customizations": {
		"vscode": {
			"extensions": ["biomejs.biome"]
		},
		"coder": {
			"apps": [
				{
					"slug": "cursor",
					"displayName": "Cursor Desktop",
					"url": "cursor://coder.coder-remote/openDevContainer?owner=${localEnv:CODER_WORKSPACE_OWNER_NAME}&workspace=${localEnv:CODER_WORKSPACE_NAME}&agent=${localEnv:CODER_WORKSPACE_PARENT_AGENT_NAME}&url=${localEnv:CODER_URL}&token=$SESSION_TOKEN&devContainerName=${localEnv:CONTAINER_ID}&devContainerFolder=${containerWorkspaceFolder}&localWorkspaceFolder=${localWorkspaceFolder}",
					"external": true,
					"icon": "/icon/cursor.svg",
					"order": 1
				},
				{
					"slug": "windsurf",
					"displayName": "Windsurf Editor",
					"url": "windsurf://coder.coder-remote/openDevContainer?owner=${localEnv:CODER_WORKSPACE_OWNER_NAME}&workspace=${localEnv:CODER_WORKSPACE_NAME}&agent=${localEnv:CODER_WORKSPACE_PARENT_AGENT_NAME}&url=${localEnv:CODER_URL}&token=$SESSION_TOKEN&devContainerName=${localEnv:CONTAINER_ID}&devContainerFolder=${containerWorkspaceFolder}&localWorkspaceFolder=${localWorkspaceFolder}",
					"external": true,
					"icon": "/icon/windsurf.svg",
					"order": 4
				},
				{
					"slug": "zed",
					"displayName": "Zed Editor",
					"url": "zed://ssh/${localEnv:CODER_WORKSPACE_AGENT_NAME}.${localEnv:CODER_WORKSPACE_NAME}.${localEnv:CODER_WORKSPACE_OWNER_NAME}.coder${containerWorkspaceFolder}",
					"external": true,
					"icon": "/icon/zed.svg",
					"order": 5
				},
				// Reproduce `code-server` app here from the code-server
				// feature so that we can set the correct folder and order.
				// Currently, the order cannot be specified via option because
				// we parse it as a number whereas variable interpolation
				// results in a string. Additionally we set health check which
				// is not yet set in the feature.
				{
					"slug": "code-server",
					"displayName": "code-server",
					"url": "http://${localEnv:FEATURE_CODE_SERVER_OPTION_HOST:127.0.0.1}:${localEnv:FEATURE_CODE_SERVER_OPTION_PORT:8080}/?folder=${containerWorkspaceFolder}",
					"openIn": "${localEnv:FEATURE_CODE_SERVER_OPTION_APPOPENIN:slim-window}",
					"share": "${localEnv:FEATURE_CODE_SERVER_OPTION_APPSHARE:owner}",
					"icon": "/icon/code.svg",
					"group": "${localEnv:FEATURE_CODE_SERVER_OPTION_APPGROUP:Web Editors}",
					"order": 3,
					"healthCheck": {
						"url": "http://${localEnv:FEATURE_CODE_SERVER_OPTION_HOST:127.0.0.1}:${localEnv:FEATURE_CODE_SERVER_OPTION_PORT:8080}/healthz",
						"interval": 5,
						"threshold": 2
					}
				}
			]
		}
	},
	"mounts": [
		// Add a volume for the Coder home directory to persist shell history,
		// and speed up dotfiles init and/or personalization.
		"source=coder-coder-devcontainer-home,target=/home/coder,type=volume",
		// Mount the entire home because conditional mounts are not supported.
		// See: https://github.com/devcontainers/spec/issues/132
		"source=${localEnv:HOME},target=/mnt/home/coder,type=bind,readonly"
	],
	"postCreateCommand": ["./.devcontainer/scripts/post_create.sh"],
	"postStartCommand": ["./.devcontainer/scripts/post_start.sh"]
}
@@ -1,46 +0,0 @@

{
	"id": "filebrowser",
	"version": "0.0.1",
	"name": "File Browser",
	"description": "A web-based file browser for your development container",
	"options": {
		"port": {
			"type": "string",
			"default": "13339",
			"description": "The port to run filebrowser on"
		},
		"folder": {
			"type": "string",
			"default": "",
			"description": "The root directory for filebrowser to serve"
		},
		"baseUrl": {
			"type": "string",
			"default": "",
			"description": "The base URL for filebrowser (e.g., /filebrowser)"
		}
	},
	"entrypoint": "/usr/local/bin/filebrowser-entrypoint",
	"dependsOn": {
		"ghcr.io/devcontainers/features/common-utils:2": {}
	},
	"customizations": {
		"coder": {
			"apps": [
				{
					"slug": "filebrowser",
					"displayName": "File Browser",
					"url": "http://localhost:${localEnv:FEATURE_FILEBROWSER_OPTION_PORT:13339}",
					"icon": "/icon/filebrowser.svg",
					"order": 3,
					"subdomain": true,
					"healthcheck": {
						"url": "http://localhost:${localEnv:FEATURE_FILEBROWSER_OPTION_PORT:13339}/health",
						"interval": 5,
						"threshold": 2
					}
				}
			]
		}
	}
}
@@ -1,54 +0,0 @@

#!/usr/bin/env bash

set -euo pipefail

BOLD='\033[0;1m'

printf "%sInstalling filebrowser\n\n" "${BOLD}"

# Check if filebrowser is installed.
if ! command -v filebrowser &>/dev/null; then
	VERSION="v2.42.1"
	EXPECTED_HASH="7d83c0f077df10a8ec9bfd9bf6e745da5d172c3c768a322b0e50583a6bc1d3cc"

	curl -fsSL "https://github.com/filebrowser/filebrowser/releases/download/${VERSION}/linux-amd64-filebrowser.tar.gz" -o /tmp/filebrowser.tar.gz
	echo "${EXPECTED_HASH} /tmp/filebrowser.tar.gz" | sha256sum -c
	tar -xzf /tmp/filebrowser.tar.gz -C /tmp
	sudo mv /tmp/filebrowser /usr/local/bin/
	sudo chmod +x /usr/local/bin/filebrowser
	rm /tmp/filebrowser.tar.gz
fi

# Create the entrypoint.
cat >/usr/local/bin/filebrowser-entrypoint <<EOF
#!/usr/bin/env bash

PORT="${PORT}"
FOLDER="${FOLDER:-}"
FOLDER="\${FOLDER:-\$(pwd)}"
BASEURL="${BASEURL:-}"
LOG_PATH=/tmp/filebrowser.log
export FB_DATABASE="\${HOME}/.filebrowser.db"

printf "🛠️ Configuring filebrowser\n\n"

# Check if the filebrowser db exists.
if [[ ! -f "\${FB_DATABASE}" ]]; then
	filebrowser config init >>\${LOG_PATH} 2>&1
	filebrowser users add admin "" --perm.admin=true --viewMode=mosaic >>\${LOG_PATH} 2>&1
fi

filebrowser config set --baseurl=\${BASEURL} --port=\${PORT} --auth.method=noauth --root=\${FOLDER} >>\${LOG_PATH} 2>&1

printf "👷 Starting filebrowser...\n\n"

printf "📂 Serving \${FOLDER} at http://localhost:\${PORT}\n\n"

filebrowser >>\${LOG_PATH} 2>&1 &

printf "📝 Logs at \${LOG_PATH}\n\n"
EOF

chmod +x /usr/local/bin/filebrowser-entrypoint

printf "🥳 Installation complete!\n\n"
@@ -1,67 +0,0 @@

#!/bin/sh

install_devcontainer_cli() {
	set -e
	echo "🔧 Installing DevContainer CLI..."
	cd "$(dirname "$0")/../tools/devcontainer-cli"
	npm ci --omit=dev
	ln -sf "$(pwd)/node_modules/.bin/devcontainer" "$(npm config get prefix)/bin/devcontainer"
}

install_ssh_config() {
	echo "🔑 Installing SSH configuration..."
	if [ -d /mnt/home/coder/.ssh ]; then
		rsync -a /mnt/home/coder/.ssh/ ~/.ssh/
		chmod 0700 ~/.ssh
	else
		echo "⚠️ SSH directory not found."
	fi
}

install_git_config() {
	echo "📂 Installing Git configuration..."
	if [ -f /mnt/home/coder/git/config ]; then
		rsync -a /mnt/home/coder/git/ ~/.config/git/
	elif [ -f /mnt/home/coder/.gitconfig ]; then
		rsync -a /mnt/home/coder/.gitconfig ~/.gitconfig
	else
		echo "⚠️ Git configuration not found."
	fi
}

install_dotfiles() {
	if [ ! -d /mnt/home/coder/.config/coderv2/dotfiles ]; then
		echo "⚠️ Dotfiles directory not found."
		return
	fi

	cd /mnt/home/coder/.config/coderv2/dotfiles || return
	for script in install.sh install bootstrap.sh bootstrap script/bootstrap setup.sh setup script/setup; do
		if [ -x "$script" ]; then
			echo "📦 Installing dotfiles..."
			./"$script" || {
				echo "❌ Error running $script. Please check the script for issues."
				return
			}
			echo "✅ Dotfiles installed successfully."
			return
		fi
	done
	echo "⚠️ No install script found in dotfiles directory."
}

personalize() {
	# Allow the script to continue, as Coder dogfood uses a hack to
	# synchronize startup script execution.
	touch /tmp/.coder-startup-script.done

	if [ -x /mnt/home/coder/personalize ]; then
		echo "🎨 Personalizing environment..."
		/mnt/home/coder/personalize
	fi
}

install_devcontainer_cli
install_ssh_config
install_git_config
install_dotfiles
personalize
@@ -1,4 +0,0 @@

#!/bin/sh

# Start the Docker service if it is not already running.
sudo service docker start
@@ -1,26 +0,0 @@

{
	"name": "devcontainer-cli",
	"version": "1.0.0",
	"lockfileVersion": 3,
	"requires": true,
	"packages": {
		"": {
			"name": "devcontainer-cli",
			"version": "1.0.0",
			"dependencies": {
				"@devcontainers/cli": "^0.80.0"
			}
		},
		"node_modules/@devcontainers/cli": {
			"version": "0.80.0",
			"resolved": "https://registry.npmjs.org/@devcontainers/cli/-/cli-0.80.0.tgz",
			"integrity": "sha512-w2EaxgjyeVGyzfA/KUEZBhyXqu/5PyWNXcnrXsZOBrt3aN2zyGiHrXoG54TF6K0b5DSCF01Rt5fnIyrCeFzFKw==",
			"bin": {
				"devcontainer": "devcontainer.js"
			},
			"engines": {
				"node": "^16.13.0 || >=18.0.0"
			}
		}
	}
}

@@ -1,8 +0,0 @@

{
	"name": "devcontainer-cli",
	"private": true,
	"version": "1.0.0",
	"dependencies": {
		"@devcontainers/cli": "^0.80.0"
	}
}
+1 -9
@@ -7,7 +7,7 @@ trim_trailing_whitespace = true
insert_final_newline = true
indent_style = tab

[*.{yaml,yml,tf,tftpl,tfvars,nix}]
[*.{yaml,yml,tf,tfvars,nix}]
indent_style = space
indent_size = 2

@@ -18,11 +18,3 @@ indent_size = 2
[coderd/database/dump.sql]
indent_style = space
indent_size = 4

[coderd/database/queries/*.sql]
indent_style = tab
indent_size = 4

[coderd/database/migrations/*.sql]
indent_style = tab
indent_size = 4
+1 -3
@@ -15,8 +15,6 @@ provisionersdk/proto/*.go linguist-generated=true
*.tfstate.json linguist-generated=true
*.tfstate.dot linguist-generated=true
*.tfplan.dot linguist-generated=true
site/e2e/google/protobuf/timestampGenerated.ts
site/e2e/provisionerGenerated.ts linguist-generated=true
site/src/api/countriesGenerated.tsx linguist-generated=true
site/src/api/rbacresourcesGenerated.tsx linguist-generated=true
site/src/api/typesGenerated.ts linguist-generated=true
site/src/pages/SetupPage/countries.tsx linguist-generated=true
@@ -25,7 +25,5 @@ ignorePatterns:
  - pattern: "docs.github.com"
  - pattern: "claude.ai"
  - pattern: "splunk.com"
  - pattern: "stackoverflow.com/questions"
  - pattern: "developer.hashicorp.com/terraform/language"
aliveStatusCodes:
  - 200
@@ -1,49 +0,0 @@

name: "Download Embedded Postgres Cache"
description: |
  Downloads the embedded postgres cache and outputs today's cache key.
  A PR job can use a cache if it was created by its base branch, its current
  branch, or the default branch.
  https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache
outputs:
  cache-key:
    description: "Today's cache key"
    value: ${{ steps.vars.outputs.cache-key }}
inputs:
  key-prefix:
    description: "Prefix for the cache key"
    required: true
  cache-path:
    description: "Path to the cache directory"
    required: true
runs:
  using: "composite"
  steps:
    - name: Get date values and cache key
      id: vars
      shell: bash
      run: |
        export YEAR_MONTH=$(date +'%Y-%m')
        export PREV_YEAR_MONTH=$(date -d 'last month' +'%Y-%m')
        export DAY=$(date +'%d')
        echo "year-month=$YEAR_MONTH" >> "$GITHUB_OUTPUT"
        echo "prev-year-month=$PREV_YEAR_MONTH" >> "$GITHUB_OUTPUT"
        echo "cache-key=${INPUTS_KEY_PREFIX}-${YEAR_MONTH}-${DAY}" >> "$GITHUB_OUTPUT"
      env:
        INPUTS_KEY_PREFIX: ${{ inputs.key-prefix }}

    # By default, depot keeps caches for 14 days. This is plenty for embedded
    # postgres, which changes infrequently.
    # https://depot.dev/docs/github-actions/overview#cache-retention-policy
    - name: Download embedded Postgres cache
      uses: actions/cache/restore@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
      with:
        path: ${{ inputs.cache-path }}
        key: ${{ steps.vars.outputs.cache-key }}
        # > If there are multiple partial matches for a restore key, the action returns the most recently created cache.
        # https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/caching-dependencies-to-speed-up-workflows#matching-a-cache-key
        # The second restore key allows non-main branches to use the cache from the previous month.
        # This prevents PRs from rebuilding the cache on the first day of the month.
        # It also makes sure that once a month, the cache is fully reset.
        restore-keys: |
          ${{ inputs.key-prefix }}-${{ steps.vars.outputs.year-month }}-
          ${{ github.ref != 'refs/heads/main' && format('{0}-{1}-', inputs.key-prefix, steps.vars.outputs.prev-year-month) || '' }}
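
The key derivation in the `vars` step above can be sketched directly in shell (the `embedded-pg` prefix is a hypothetical example of the `key-prefix` input; `date -d 'last month'` assumes GNU date):

```shell
# Derive today's cache key the same way the action does.
KEY_PREFIX="embedded-pg"
YEAR_MONTH=$(date +'%Y-%m')
DAY=$(date +'%d')
echo "${KEY_PREFIX}-${YEAR_MONTH}-${DAY}"
```

Because the day is part of the key, a fresh exact-match cache is written once per day, and the month-only restore key lets stale days fall back to the latest cache within the month.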

@@ -1,18 +0,0 @@

name: "Upload Embedded Postgres Cache"
description: Uploads the embedded Postgres cache. This only runs on the main branch.
inputs:
  cache-key:
    description: "Cache key"
    required: true
  cache-path:
    description: "Path to the cache directory"
    required: true
runs:
  using: "composite"
  steps:
    - name: Upload Embedded Postgres cache
      if: ${{ github.ref == 'refs/heads/main' }}
      uses: actions/cache/save@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
      with:
        path: ${{ inputs.cache-path }}
        key: ${{ inputs.cache-key }}
@@ -1,33 +0,0 @@

name: "Setup Embedded Postgres Cache Paths"
description: Sets up a path for cached embedded postgres binaries.
outputs:
  embedded-pg-cache:
    description: "Value of EMBEDDED_PG_CACHE_DIR"
    value: ${{ steps.paths.outputs.embedded-pg-cache }}
  cached-dirs:
    description: "Directories that should be cached between CI runs"
    value: ${{ steps.paths.outputs.cached-dirs }}
runs:
  using: "composite"
  steps:
    - name: Override Go paths
      id: paths
      uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
      with:
        script: |
          const path = require('path');

          // RUNNER_TEMP should be backed by a RAM disk on Windows if
          // coder/setup-ramdisk-action was used
          const runnerTemp = process.env.RUNNER_TEMP;
          const embeddedPgCacheDir = path.join(runnerTemp, 'embedded-pg-cache');
          core.exportVariable('EMBEDDED_PG_CACHE_DIR', embeddedPgCacheDir);
          core.setOutput('embedded-pg-cache', embeddedPgCacheDir);
          const cachedDirs = `${embeddedPgCacheDir}`;
          core.setOutput('cached-dirs', cachedDirs);

    - name: Create directories
      shell: bash
      run: |
        set -e
        mkdir -p "$EMBEDDED_PG_CACHE_DIR"
@@ -4,7 +4,7 @@ description: |
inputs:
  version:
    description: "The Go version to use."
    default: "1.24.6"
    default: "1.24.4"
  use-preinstalled-go:
    description: "Whether to use preinstalled Go."
    default: "false"
@@ -16,7 +16,7 @@ runs:
    - name: Setup Node
      uses: actions/setup-node@0a44ba7841725637a19e28fa30b79a866c81b0a6 # v4.0.4
      with:
        node-version: 22.19.0
        node-version: 20.16.0
        # See https://github.com/actions/setup-node#caching-global-packages-data
        cache: "pnpm"
        cache-dependency-path: ${{ inputs.directory }}/pnpm-lock.yaml
@@ -7,5 +7,5 @@ runs:
    - name: Install Terraform
      uses: hashicorp/setup-terraform@b9cd54a3c349d3f38e8881555d616ced269862dd # v3.1.2
      with:
        terraform_version: 1.13.0
        terraform_version: 1.12.2
        terraform_wrapper: false
@@ -27,11 +27,9 @@ runs:
        export YEAR_MONTH=$(date +'%Y-%m')
        export PREV_YEAR_MONTH=$(date -d 'last month' +'%Y-%m')
        export DAY=$(date +'%d')
        echo "year-month=$YEAR_MONTH" >> "$GITHUB_OUTPUT"
        echo "prev-year-month=$PREV_YEAR_MONTH" >> "$GITHUB_OUTPUT"
        echo "cache-key=${INPUTS_KEY_PREFIX}-${YEAR_MONTH}-${DAY}" >> "$GITHUB_OUTPUT"
      env:
        INPUTS_KEY_PREFIX: ${{ inputs.key-prefix }}
        echo "year-month=$YEAR_MONTH" >> $GITHUB_OUTPUT
        echo "prev-year-month=$PREV_YEAR_MONTH" >> $GITHUB_OUTPUT
        echo "cache-key=${{ inputs.key-prefix }}-${YEAR_MONTH}-${DAY}" >> $GITHUB_OUTPUT

    # TODO: As a cost optimization, we could remove caches that are older than
    # a day or two. By default, depot keeps caches for 14 days, which isn't
@@ -12,12 +12,13 @@ runs:
    run: |
      set -e

      echo "owner: $REPO_OWNER"
      if [[ "$REPO_OWNER" != "coder" ]]; then
      owner=${{ github.repository_owner }}
      echo "owner: $owner"
      if [[ $owner != "coder" ]]; then
        echo "Not a pull request from the main repo, skipping..."
        exit 0
      fi
      if [[ -z "${DATADOG_API_KEY}" ]]; then
      if [[ -z "${{ inputs.api-key }}" ]]; then
        # This can happen for dependabot.
        echo "No API key provided, skipping..."
        exit 0
@@ -30,38 +31,37 @@ runs:

      TMP_DIR=$(mktemp -d)

      if [[ "${RUNNER_OS}" == "Windows" ]]; then
      if [[ "${{ runner.os }}" == "Windows" ]]; then
        BINARY_PATH="${TMP_DIR}/datadog-ci.exe"
        BINARY_URL="https://github.com/DataDog/datadog-ci/releases/download/${BINARY_VERSION}/datadog-ci_win-x64"
      elif [[ "${RUNNER_OS}" == "macOS" ]]; then
      elif [[ "${{ runner.os }}" == "macOS" ]]; then
        BINARY_PATH="${TMP_DIR}/datadog-ci"
        BINARY_URL="https://github.com/DataDog/datadog-ci/releases/download/${BINARY_VERSION}/datadog-ci_darwin-arm64"
      elif [[ "${RUNNER_OS}" == "Linux" ]]; then
      elif [[ "${{ runner.os }}" == "Linux" ]]; then
        BINARY_PATH="${TMP_DIR}/datadog-ci"
        BINARY_URL="https://github.com/DataDog/datadog-ci/releases/download/${BINARY_VERSION}/datadog-ci_linux-x64"
      else
        echo "Unsupported OS: $RUNNER_OS"
        echo "Unsupported OS: ${{ runner.os }}"
        exit 1
      fi

      echo "Downloading DataDog CI binary version ${BINARY_VERSION} for $RUNNER_OS..."
      echo "Downloading DataDog CI binary version ${BINARY_VERSION} for ${{ runner.os }}..."
      curl -sSL "$BINARY_URL" -o "$BINARY_PATH"

      if [[ "${RUNNER_OS}" == "Windows" ]]; then
      if [[ "${{ runner.os }}" == "Windows" ]]; then
        echo "$BINARY_HASH_WINDOWS $BINARY_PATH" | sha256sum --check
      elif [[ "${RUNNER_OS}" == "macOS" ]]; then
      elif [[ "${{ runner.os }}" == "macOS" ]]; then
        echo "$BINARY_HASH_MACOS $BINARY_PATH" | shasum -a 256 --check
      elif [[ "${RUNNER_OS}" == "Linux" ]]; then
      elif [[ "${{ runner.os }}" == "Linux" ]]; then
        echo "$BINARY_HASH_LINUX $BINARY_PATH" | sha256sum --check
      fi

      # Make binary executable (not needed for Windows)
      if [[ "${RUNNER_OS}" != "Windows" ]]; then
      if [[ "${{ runner.os }}" != "Windows" ]]; then
        chmod +x "$BINARY_PATH"
      fi

      "$BINARY_PATH" junit upload --service coder ./gotests.xml \
        --tags "os:${RUNNER_OS}" --tags "runner_name:${RUNNER_NAME}"
        --tags os:${{runner.os}} --tags runner_name:${{runner.name}}
    env:
      REPO_OWNER: ${{ github.repository_owner }}
      DATADOG_API_KEY: ${{ inputs.api-key }}
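The dispatch in the step above maps the runner OS to a release asset and checksum tool. A standalone sketch of the same branching, with `RUNNER_OS` read from the environment (as the hardened variant does) rather than interpolated; `BINARY_VERSION` here is a hypothetical tag, not one pinned in the workflow:

```shell
#!/usr/bin/env bash
set -euo pipefail

# RUNNER_OS is provided by GitHub Actions; defaulted so the sketch runs anywhere.
RUNNER_OS="${RUNNER_OS:-Linux}"
BINARY_VERSION="${BINARY_VERSION:-v2.48.0}"   # illustrative release tag

# One asset name per OS, mirroring the datadog-ci release layout in the diff.
case "$RUNNER_OS" in
  Windows) ASSET="datadog-ci_win-x64" ;;
  macOS)   ASSET="datadog-ci_darwin-arm64" ;;
  Linux)   ASSET="datadog-ci_linux-x64" ;;
  *) echo "Unsupported OS: $RUNNER_OS" >&2; exit 1 ;;
esac

echo "https://github.com/DataDog/datadog-ci/releases/download/${BINARY_VERSION}/${ASSET}"
```

The unmatched-OS branch exits nonzero, so a typo in the matrix fails the step immediately instead of downloading the wrong binary.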
@@ -33,7 +33,6 @@ updates:
      - dependency-name: "*"
        update-types:
          - version-update:semver-patch
      - dependency-name: "github.com/mark3labs/mcp-go"

  # Update our Dockerfile.
  - package-ecosystem: "docker"
@@ -80,9 +79,6 @@ updates:
      mui:
        patterns:
          - "@mui*"
      radix:
        patterns:
          - "@radix-ui/*"
      react:
        patterns:
          - "react"
@@ -107,7 +103,6 @@ updates:
      - dependency-name: "*"
        update-types:
          - version-update:semver-major
      - dependency-name: "@playwright/test"
    open-pull-requests-limit: 15

  - package-ecosystem: "terraform"
@@ -1,5 +0,0 @@
<!--

If you have used AI to produce some or all of this PR, please ensure you have read our [AI Contribution guidelines](https://coder.com/docs/about/contributing/AI_CONTRIBUTING) before submitting.

-->
+372 -270 (file diff suppressed because it is too large)
@@ -3,7 +3,6 @@ name: contrib
on:
  issue_comment:
    types: [created, edited]
  # zizmor: ignore[dangerous-triggers] We explicitly want to run on pull_request_target.
  pull_request_target:
    types:
      - opened
@@ -53,7 +52,7 @@ jobs:
    if: ${{ github.event_name == 'pull_request_target' && !github.event.pull_request.draft }}
    steps:
      - name: release-labels
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
        with:
          # This script ensures PR title and labels are in sync:
          #
@@ -15,7 +15,7 @@ jobs:
      github.event_name == 'pull_request' &&
      github.event.action == 'opened' &&
      github.event.pull_request.user.login == 'dependabot[bot]' &&
      github.event.pull_request.user.id == 49699333 &&
      github.actor_id == 49699333 &&
      github.repository == 'coder/coder'
    permissions:
      pull-requests: write
@@ -44,6 +44,10 @@ jobs:
          GH_TOKEN: ${{secrets.GITHUB_TOKEN}}

      - name: Send Slack notification
        env:
          PR_URL: ${{github.event.pull_request.html_url}}
          PR_TITLE: ${{github.event.pull_request.title}}
          PR_NUMBER: ${{github.event.pull_request.number}}
        run: |
          curl -X POST -H 'Content-type: application/json' \
          --data '{
@@ -54,7 +58,7 @@ jobs:
              "type": "header",
              "text": {
                "type": "plain_text",
                "text": ":pr-merged: Auto merge enabled for Dependabot PR #'"${PR_NUMBER}"'",
                "text": ":pr-merged: Auto merge enabled for Dependabot PR #${{ env.PR_NUMBER }}",
                "emoji": true
              }
            },
@@ -63,7 +67,7 @@ jobs:
              "fields": [
                {
                  "type": "mrkdwn",
                  "text": "'"${PR_TITLE}"'"
                  "text": "${{ env.PR_TITLE }}"
                }
              ]
            },
@@ -76,14 +80,9 @@ jobs:
                  "type": "plain_text",
                  "text": "View PR"
                },
                "url": "'"${PR_URL}"'"
                "url": "${{ env.PR_URL }}"
              }
            ]
          }
          ]
          }' "${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}"
        env:
          SLACK_WEBHOOK: ${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          PR_TITLE: ${{ github.event.pull_request.title }}
          PR_URL: ${{ github.event.pull_request.html_url }}
          }' ${{ secrets.DEPENDABOT_PRS_SLACK_WEBHOOK }}
@@ -1,174 +0,0 @@
name: deploy

on:
  # Via workflow_call, called from ci.yaml
  workflow_call:
    inputs:
      image:
        description: "Image and tag to potentially deploy. Current branch will be validated against should-deploy check."
        required: true
        type: string
    secrets:
      FLY_API_TOKEN:
        required: true
      FLY_PARIS_CODER_PROXY_SESSION_TOKEN:
        required: true
      FLY_SYDNEY_CODER_PROXY_SESSION_TOKEN:
        required: true
      FLY_SAO_PAULO_CODER_PROXY_SESSION_TOKEN:
        required: true
      FLY_JNB_CODER_PROXY_SESSION_TOKEN:
        required: true

permissions:
  contents: read

concurrency:
  group: ${{ github.workflow }} # no per-branch concurrency
  cancel-in-progress: false

jobs:
  # Determines if the given branch should be deployed to dogfood.
  should-deploy:
    name: should-deploy
    runs-on: ubuntu-latest
    outputs:
      verdict: ${{ steps.check.outputs.verdict }} # DEPLOY or NOOP
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          fetch-depth: 0
          persist-credentials: false

      - name: Check if deploy is enabled
        id: check
        run: |
          set -euo pipefail
          verdict="$(./scripts/should_deploy.sh)"
          echo "verdict=$verdict" >> "$GITHUB_OUTPUT"

  deploy:
    name: "deploy"
    runs-on: ubuntu-latest
    timeout-minutes: 30
    needs: should-deploy
    if: needs.should-deploy.outputs.verdict == 'DEPLOY'
    permissions:
      contents: read
      id-token: write
      packages: write # to retag image as dogfood
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          fetch-depth: 0
          persist-credentials: false

      - name: GHCR Login
        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
        with:
          workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
          service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}

      - name: Set up Google Cloud SDK
        uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1

      - name: Set up Flux CLI
        uses: fluxcd/flux2/action@6bf37f6a560fd84982d67f853162e4b3c2235edb # v2.6.4
        with:
          # Keep this and the github action up to date with the version of flux installed in dogfood cluster
          version: "2.7.0"

      - name: Get Cluster Credentials
        uses: google-github-actions/get-gke-credentials@3da1e46a907576cefaa90c484278bb5b259dd395 # v3.0.0
        with:
          cluster_name: dogfood-v2
          location: us-central1-a
          project_id: coder-dogfood-v2

      # Retag image as dogfood while maintaining the multi-arch manifest
      - name: Tag image as dogfood
        run: docker buildx imagetools create --tag "ghcr.io/coder/coder-preview:dogfood" "$IMAGE"
        env:
          IMAGE: ${{ inputs.image }}

      - name: Reconcile Flux
        run: |
          set -euxo pipefail
          flux --namespace flux-system reconcile source git flux-system
          flux --namespace flux-system reconcile source git coder-main
          flux --namespace flux-system reconcile kustomization flux-system
          flux --namespace flux-system reconcile kustomization coder
          flux --namespace flux-system reconcile source chart coder-coder
          flux --namespace flux-system reconcile source chart coder-coder-provisioner
          flux --namespace coder reconcile helmrelease coder
          flux --namespace coder reconcile helmrelease coder-provisioner
          flux --namespace coder reconcile helmrelease coder-provisioner-tagged
          flux --namespace coder reconcile helmrelease coder-provisioner-tagged-prebuilds

      # Just updating Flux is usually not enough. The Helm release may get
      # redeployed, but unless something causes the Deployment to update the
      # pods won't be recreated. It's important that the pods get recreated,
      # since we use `imagePullPolicy: Always` to ensure we're running the
      # latest image.
      - name: Rollout Deployment
        run: |
          set -euxo pipefail
          kubectl --namespace coder rollout restart deployment/coder
          kubectl --namespace coder rollout status deployment/coder
          kubectl --namespace coder rollout restart deployment/coder-provisioner
          kubectl --namespace coder rollout status deployment/coder-provisioner
          kubectl --namespace coder rollout restart deployment/coder-provisioner-tagged
          kubectl --namespace coder rollout status deployment/coder-provisioner-tagged
          kubectl --namespace coder rollout restart deployment/coder-provisioner-tagged-prebuilds
          kubectl --namespace coder rollout status deployment/coder-provisioner-tagged-prebuilds

  deploy-wsproxies:
    runs-on: ubuntu-latest
    needs: deploy
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          fetch-depth: 0
          persist-credentials: false

      - name: Setup flyctl
        uses: superfly/flyctl-actions/setup-flyctl@fc53c09e1bc3be6f54706524e3b82c4f462f77be # v1.5

      - name: Deploy workspace proxies
        run: |
          flyctl deploy --image "$IMAGE" --app paris-coder --config ./.github/fly-wsproxies/paris-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_PARIS" --yes
          flyctl deploy --image "$IMAGE" --app sydney-coder --config ./.github/fly-wsproxies/sydney-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_SYDNEY" --yes
          flyctl deploy --image "$IMAGE" --app sao-paulo-coder --config ./.github/fly-wsproxies/sao-paulo-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_SAO_PAULO" --yes
          flyctl deploy --image "$IMAGE" --app jnb-coder --config ./.github/fly-wsproxies/jnb-coder.toml --env "CODER_PROXY_SESSION_TOKEN=$TOKEN_JNB" --yes
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
          IMAGE: ${{ inputs.image }}
          TOKEN_PARIS: ${{ secrets.FLY_PARIS_CODER_PROXY_SESSION_TOKEN }}
          TOKEN_SYDNEY: ${{ secrets.FLY_SYDNEY_CODER_PROXY_SESSION_TOKEN }}
          TOKEN_SAO_PAULO: ${{ secrets.FLY_SAO_PAULO_CODER_PROXY_SESSION_TOKEN }}
          TOKEN_JNB: ${{ secrets.FLY_JNB_CODER_PROXY_SESSION_TOKEN }}
@@ -38,17 +38,15 @@ jobs:
    if: github.repository_owner == 'coder'
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

      - name: Docker login
        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
@@ -62,7 +60,7 @@ jobs:

      # This uses OIDC authentication, so no auth variables are required.
      - name: Build base Docker image via depot.dev
        uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
        uses: depot/build-push-action@2583627a84956d07561420dcc1d0eb1f2af3fac0 # v1.15.0
        with:
          project: wl5hnrrkns
          context: base-build-context
@@ -23,14 +23,12 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

      - name: Setup Node
        uses: ./.github/actions/setup-node

      - uses: tj-actions/changed-files@4563c729c555b4141fac99c80f699f571219b836 # v45.0.7
      - uses: tj-actions/changed-files@666c9d29007687c52e3c7aa2aac6c0ffcadeadc3 # v45.0.7
        id: changed-files
        with:
          files: |
@@ -41,16 +39,10 @@ jobs:
      - name: lint
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          # shellcheck disable=SC2086
          pnpm exec markdownlint-cli2 $ALL_CHANGED_FILES
        env:
          ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
          pnpm exec markdownlint-cli2 ${{ steps.changed-files.outputs.all_changed_files }}

      - name: fmt
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          # markdown-table-formatter requires a space separated list of files
          # shellcheck disable=SC2086
          echo $ALL_CHANGED_FILES | tr ',' '\n' | pnpm exec markdown-table-formatter --check
        env:
          ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
          echo ${{ steps.changed-files.outputs.all_changed_files }} | tr ',' '\n' | pnpm exec markdown-table-formatter --check
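In the lint and fmt steps above, `$ALL_CHANGED_FILES` is deliberately left unquoted so each space-separated file becomes its own argument, which is why the step carries `# shellcheck disable=SC2086`. A tiny sketch of that intent, with a hypothetical file list standing in for the `changed-files` step output:

```shell
#!/usr/bin/env bash
set -euo pipefail

# In the workflow this value arrives via
#   env: ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
# The paths here are illustrative.
ALL_CHANGED_FILES="docs/a.md docs/b.md"

# Unquoted on purpose: word-splitting turns the list into one argument per
# file, which is what markdownlint-cli2 expects. Quoting the variable would
# pass a single argument containing spaces.
# shellcheck disable=SC2086
printf '%s\n' $ALL_CHANGED_FILES
```

Passing the list through `env:` rather than interpolating `${{ ... }}` into the script also means a malicious file name cannot be executed as shell code.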
@@ -18,7 +18,8 @@ on:
  workflow_dispatch:

permissions:
  contents: read
  # Necessary for GCP authentication (https://github.com/google-github-actions/setup-gcloud#usage)
  id-token: write

jobs:
  build_image:
@@ -26,21 +27,15 @@ jobs:
    runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-4' || 'ubuntu-latest' }}
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

      - name: Setup Nix
        uses: nixbuild/nix-quick-install-action@1f095fee853b33114486cfdeae62fa099cda35a9 # v33
        with:
          # Pinning to 2.28 here, as Nix gets a "error: [json.exception.type_error.302] type must be array, but is string"
          # on version 2.29 and above.
          nix_version: "2.28.4"
        uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31

      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
        with:
@@ -63,16 +58,15 @@ jobs:

      - name: Get branch name
        id: branch-name
        uses: tj-actions/branch-names@5250492686b253f06fa55861556d1027b067aeb5 # v9.0.2
        uses: tj-actions/branch-names@dde14ac574a8b9b1cedc59a1cf312788af43d8d8 # v8.2.1

      - name: "Branch name to Docker tag name"
        id: docker-tag-name
        run: |
          tag=${{ steps.branch-name.outputs.current_branch }}
          # Replace / with --, e.g. user/feature => user--feature.
          tag=${BRANCH_NAME//\//--}
          echo "tag=${tag}" >> "$GITHUB_OUTPUT"
        env:
          BRANCH_NAME: ${{ steps.branch-name.outputs.current_branch }}
          tag=${tag//\//--}
          echo "tag=${tag}" >> $GITHUB_OUTPUT

      - name: Set up Depot CLI
        uses: depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1.6.0
@@ -82,13 +76,13 @@ jobs:

      - name: Login to DockerHub
        if: github.ref == 'refs/heads/main'
        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}

      - name: Build and push Non-Nix image
        uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
        uses: depot/build-push-action@2583627a84956d07561420dcc1d0eb1f2af3fac0 # v1.15.0
        with:
          project: b4q6ltmpzh
          token: ${{ secrets.DEPOT_TOKEN }}
@@ -109,39 +103,32 @@ jobs:

          CURRENT_SYSTEM=$(nix eval --impure --raw --expr 'builtins.currentSystem')

          docker image tag "codercom/oss-dogfood-nix:latest-$CURRENT_SYSTEM" "codercom/oss-dogfood-nix:${DOCKER_TAG}"
          docker image push "codercom/oss-dogfood-nix:${DOCKER_TAG}"
          docker image tag codercom/oss-dogfood-nix:latest-$CURRENT_SYSTEM codercom/oss-dogfood-nix:${{ steps.docker-tag-name.outputs.tag }}
          docker image push codercom/oss-dogfood-nix:${{ steps.docker-tag-name.outputs.tag }}

          docker image tag "codercom/oss-dogfood-nix:latest-$CURRENT_SYSTEM" "codercom/oss-dogfood-nix:latest"
          docker image push "codercom/oss-dogfood-nix:latest"
        env:
          DOCKER_TAG: ${{ steps.docker-tag-name.outputs.tag }}
          docker image tag codercom/oss-dogfood-nix:latest-$CURRENT_SYSTEM codercom/oss-dogfood-nix:latest
          docker image push codercom/oss-dogfood-nix:latest

  deploy_template:
    needs: build_image
    runs-on: ubuntu-latest
    permissions:
      # Necessary for GCP authentication (https://github.com/google-github-actions/setup-gcloud#usage)
      id-token: write
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

      - name: Setup Terraform
        uses: ./.github/actions/setup-tf

      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
        uses: google-github-actions/auth@ba79af03959ebeac9769e648f473a284504d9193 # v2.1.10
        with:
          workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
          service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
          workload_identity_provider: projects/573722524737/locations/global/workloadIdentityPools/github/providers/github
          service_account: coder-ci@coder-dogfood.iam.gserviceaccount.com

      - name: Terraform init and validate
        run: |
@@ -161,12 +148,12 @@ jobs:
      - name: Get short commit SHA
        if: github.ref == 'refs/heads/main'
        id: vars
        run: echo "sha_short=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
        run: echo "sha_short=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

      - name: Get latest commit title
        if: github.ref == 'refs/heads/main'
        id: message
        run: echo "pr_title=$(git log --format=%s -n 1 ${{ github.sha }})" >> "$GITHUB_OUTPUT"
        run: echo "pr_title=$(git log --format=%s -n 1 ${{ github.sha }})" >> $GITHUB_OUTPUT

      - name: "Push template"
        if: github.ref == 'refs/heads/main'
@@ -178,7 +165,6 @@ jobs:
          CODER_URL: https://dev.coder.com
          CODER_SESSION_TOKEN: ${{ secrets.CODER_SESSION_TOKEN }}
          # Template source & details
          TF_VAR_CODER_DOGFOOD_ANTHROPIC_API_KEY: ${{ secrets.CODER_DOGFOOD_ANTHROPIC_API_KEY }}
          TF_VAR_CODER_TEMPLATE_NAME: ${{ secrets.CODER_TEMPLATE_NAME }}
          TF_VAR_CODER_TEMPLATE_VERSION: ${{ steps.vars.outputs.sha_short }}
          TF_VAR_CODER_TEMPLATE_DIR: ./coder
@@ -1,204 +0,0 @@
# The nightly-gauntlet runs tests that are either too flaky or too slow to block
# every PR.
name: nightly-gauntlet
on:
  schedule:
    # Every day at 4AM
    - cron: "0 4 * * 1-5"
  workflow_dispatch:

permissions:
  contents: read

jobs:
  test-go-pg:
    # make sure to adjust NUM_PARALLEL_PACKAGES and NUM_PARALLEL_TESTS below
    # when changing runner sizes
    runs-on: ${{ matrix.os == 'macos-latest' && github.repository_owner == 'coder' && 'depot-macos-latest' || matrix.os == 'windows-2022' && github.repository_owner == 'coder' && 'depot-windows-2022-16' || matrix.os }}
    # This timeout must be greater than the timeout set by `go test` in
    # `make test-postgres` to ensure we receive a trace of running
    # goroutines. Setting this to the timeout +5m should work quite well
    # even if some of the preceding steps are slow.
    timeout-minutes: 25
    strategy:
      matrix:
        os:
          - macos-latest
          - windows-2022
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        with:
          egress-policy: audit

      # macOS indexes all new files in the background. Our Postgres tests
      # create and destroy thousands of databases on disk, and Spotlight
      # tries to index all of them, seriously slowing down the tests.
      - name: Disable Spotlight Indexing
        if: runner.os == 'macOS'
        run: |
          enabled=$(sudo mdutil -a -s | { grep -Fc "Indexing enabled" || true; })
          if [ "$enabled" -eq 0 ]; then
            echo "Spotlight indexing is already disabled"
            exit 0
          fi
          sudo mdutil -a -i off
          sudo mdutil -X /
          sudo launchctl bootout system /System/Library/LaunchDaemons/com.apple.metadata.mds.plist

      # Set up RAM disks to speed up the rest of the job. This action is in
      # a separate repository to allow its use before actions/checkout.
      - name: Setup RAM Disks
        if: runner.os == 'Windows'
        uses: coder/setup-ramdisk-action@e1100847ab2d7bcd9d14bcda8f2d1b0f07b36f1b # v0.1.0

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          fetch-depth: 1
          persist-credentials: false

      - name: Setup Go
        uses: ./.github/actions/setup-go
        with:
          # Runners have Go baked-in and Go will automatically
          # download the toolchain configured in go.mod, so we don't
          # need to reinstall it. It's faster on Windows runners.
          use-preinstalled-go: ${{ runner.os == 'Windows' }}

      - name: Setup Terraform
        uses: ./.github/actions/setup-tf

      - name: Setup Embedded Postgres Cache Paths
        id: embedded-pg-cache
        uses: ./.github/actions/setup-embedded-pg-cache-paths

      - name: Download Embedded Postgres Cache
        id: download-embedded-pg-cache
        uses: ./.github/actions/embedded-pg-cache/download
        with:
          key-prefix: embedded-pg-${{ runner.os }}-${{ runner.arch }}
          cache-path: ${{ steps.embedded-pg-cache.outputs.cached-dirs }}

      - name: Test with PostgreSQL Database
        env:
          POSTGRES_VERSION: "13"
          TS_DEBUG_DISCO: "true"
          LC_CTYPE: "en_US.UTF-8"
          LC_ALL: "en_US.UTF-8"
        shell: bash
        run: |
          set -o errexit
          set -o pipefail

          if [ "${{ runner.os }}" == "Windows" ]; then
            # Create a temp dir on the R: ramdisk drive for Windows. The default
            # C: drive is extremely slow: https://github.com/actions/runner-images/issues/8755
            mkdir -p "R:/temp/embedded-pg"
            go run scripts/embedded-pg/main.go -path "R:/temp/embedded-pg" -cache "${EMBEDDED_PG_CACHE_DIR}"
          elif [ "${{ runner.os }}" == "macOS" ]; then
            # Postgres runs faster on a ramdisk on macOS too
            mkdir -p /tmp/tmpfs
            sudo mount_tmpfs -o noowners -s 8g /tmp/tmpfs
            go run scripts/embedded-pg/main.go -path /tmp/tmpfs/embedded-pg -cache "${EMBEDDED_PG_CACHE_DIR}"
          elif [ "${{ runner.os }}" == "Linux" ]; then
            make test-postgres-docker
          fi

          # if macOS, install google-chrome for scaletests
          # As another concern, should we really have this kind of external dependency
          # requirement on standard CI?
          if [ "${{ matrix.os }}" == "macos-latest" ]; then
            brew install google-chrome
          fi

          # macOS will output "The default interactive shell is now zsh"
          # intermittently in CI...
          if [ "${{ matrix.os }}" == "macos-latest" ]; then
            touch ~/.bash_profile && echo "export BASH_SILENCE_DEPRECATION_WARNING=1" >> ~/.bash_profile
          fi

          if [ "${{ runner.os }}" == "Windows" ]; then
            # Our Windows runners have 16 cores.
            # On Windows Postgres chokes up when we have 16x16=256 tests
            # running in parallel, and dbtestutil.NewDB starts to take more than
            # 10s to complete sometimes causing test timeouts. With 16x8=128 tests
            # Postgres tends not to choke.
            NUM_PARALLEL_PACKAGES=8
            NUM_PARALLEL_TESTS=16
          elif [ "${{ runner.os }}" == "macOS" ]; then
            # Our macOS runners have 8 cores. We set NUM_PARALLEL_TESTS to 16
            # because the tests complete faster and Postgres doesn't choke. It seems
            # that macOS's tmpfs is faster than the one on Windows.
            NUM_PARALLEL_PACKAGES=8
            NUM_PARALLEL_TESTS=16
          elif [ "${{ runner.os }}" == "Linux" ]; then
            # Our Linux runners have 8 cores.
            NUM_PARALLEL_PACKAGES=8
            NUM_PARALLEL_TESTS=8
          fi

          # run tests without cache
          TESTCOUNT="-count=1"

          DB=ci gotestsum \
            --format standard-quiet --packages "./..." \
            -- -timeout=20m -v -p "$NUM_PARALLEL_PACKAGES" -parallel="$NUM_PARALLEL_TESTS" "$TESTCOUNT"

      - name: Upload Embedded Postgres Cache
        uses: ./.github/actions/embedded-pg-cache/upload
        # We only use the embedded Postgres cache on macOS and Windows runners.
        if: runner.OS == 'macOS' || runner.OS == 'Windows'
        with:
          cache-key: ${{ steps.download-embedded-pg-cache.outputs.cache-key }}
          cache-path: "${{ steps.embedded-pg-cache.outputs.embedded-pg-cache }}"

      - name: Upload test stats to Datadog
        timeout-minutes: 1
        continue-on-error: true
        uses: ./.github/actions/upload-datadog
        if: success() || failure()
        with:
          api-key: ${{ secrets.DATADOG_API_KEY }}

  notify-slack-on-failure:
    needs:
      - test-go-pg
    runs-on: ubuntu-latest
    if: failure() && github.ref == 'refs/heads/main'

    steps:
      - name: Send Slack notification
        run: |
          ESCAPED_PROMPT=$(printf "%s" "<@U09LQ75AHKR> $BLINK_CI_FAILURE_PROMPT" | jq -Rsa .)
          curl -X POST -H 'Content-type: application/json' \
            --data '{
              "blocks": [
                {
                  "type": "header",
                  "text": {
                    "type": "plain_text",
                    "text": "❌ Nightly gauntlet failed",
                    "emoji": true
                  }
                },
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "*View failure:* <'"${RUN_URL}"'|Click here>"
                  }
                },
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": '"$ESCAPED_PROMPT"'
                  }
                }
              ]
            }' "${SLACK_WEBHOOK}"
        env:
          SLACK_WEBHOOK: ${{ secrets.CI_FAILURE_SLACK_WEBHOOK }}
          RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
          BLINK_CI_FAILURE_PROMPT: ${{ vars.BLINK_CI_FAILURE_PROMPT }}
@@ -3,7 +3,6 @@
name: PR Auto Assign

on:
  # zizmor: ignore[dangerous-triggers] We explicitly want to run on pull_request_target.
  pull_request_target:
    types: [opened]

@@ -15,7 +14,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

@@ -19,7 +19,7 @@ jobs:
|
||||
packages: write
|
||||
steps:
|
||||
- name: Harden Runner
|
||||
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
|
||||
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
|
||||
with:
|
||||
egress-policy: audit
|
||||
|
||||
@@ -27,12 +27,10 @@
        id: pr_number
        run: |
          if [ -n "${{ github.event.pull_request.number }}" ]; then
            echo "PR_NUMBER=${{ github.event.pull_request.number }}" >> "$GITHUB_OUTPUT"
            echo "PR_NUMBER=${{ github.event.pull_request.number }}" >> $GITHUB_OUTPUT
          else
            echo "PR_NUMBER=${PR_NUMBER}" >> "$GITHUB_OUTPUT"
            echo "PR_NUMBER=${{ github.event.inputs.pr_number }}" >> $GITHUB_OUTPUT
          fi
        env:
          PR_NUMBER: ${{ github.event.inputs.pr_number }}

      - name: Delete image
        continue-on-error: true
@@ -53,21 +51,17 @@
      - name: Delete helm release
        run: |
          set -euo pipefail
          helm delete --namespace "pr${PR_NUMBER}" "pr${PR_NUMBER}" || echo "helm release not found"
        env:
          PR_NUMBER: ${{ steps.pr_number.outputs.PR_NUMBER }}
          helm delete --namespace "pr${{ steps.pr_number.outputs.PR_NUMBER }}" "pr${{ steps.pr_number.outputs.PR_NUMBER }}" || echo "helm release not found"

      - name: "Remove PR namespace"
        run: |
          kubectl delete namespace "pr${PR_NUMBER}" || echo "namespace not found"
        env:
          PR_NUMBER: ${{ steps.pr_number.outputs.PR_NUMBER }}
          kubectl delete namespace "pr${{ steps.pr_number.outputs.PR_NUMBER }}" || echo "namespace not found"

      - name: "Remove DNS records"
        run: |
          set -euo pipefail
          # Get identifier for the record
          record_id=$(curl -X GET "https://api.cloudflare.com/client/v4/zones/${{ secrets.PR_DEPLOYMENTS_ZONE_ID }}/dns_records?name=%2A.pr${PR_NUMBER}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}" \
          record_id=$(curl -X GET "https://api.cloudflare.com/client/v4/zones/${{ secrets.PR_DEPLOYMENTS_ZONE_ID }}/dns_records?name=%2A.pr${{ steps.pr_number.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}" \
            -H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
            -H "Content-Type:application/json" | jq -r '.result[0].id') || echo "DNS record not found"

@@ -79,13 +73,9 @@
            -H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
            -H "Content-Type:application/json" | jq -r '.success'
          ) || echo "DNS record not found"
        env:
          PR_NUMBER: ${{ steps.pr_number.outputs.PR_NUMBER }}

      - name: "Delete certificate"
        if: ${{ github.event.pull_request.merged == true }}
        run: |
          set -euxo pipefail
          kubectl delete certificate "pr${PR_NUMBER}-tls" -n pr-deployment-certs || echo "certificate not found"
        env:
          PR_NUMBER: ${{ steps.pr_number.outputs.PR_NUMBER }}
          kubectl delete certificate "pr${{ steps.pr_number.outputs.PR_NUMBER }}-tls" -n pr-deployment-certs || echo "certificate not found"

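The cleanup steps above consistently use `… || echo "not found"` so that deleting a resource that is already gone does not fail the job, even under `set -euo pipefail`. A minimal standalone sketch of the pattern, with a stub function standing in for `helm delete`/`kubectl delete` (all names here are hypothetical):

```shell
set -euo pipefail

# Stub that always fails, simulating deletion of a resource that no
# longer exists.
delete_resource() { return 1; }

# `|| echo` gives the compound command a zero exit status, so `set -e`
# does not abort the script when the resource is missing.
delete_resource || echo "resource not found"
echo "cleanup continued"
```

Without the `|| echo` fallback, the first failing delete would terminate the whole cleanup script.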
@@ -39,14 +39,12 @@
      PR_OPEN: ${{ steps.check_pr.outputs.pr_open }}
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

      - name: Check if PR is open
        id: check_pr
@@ -57,7 +55,7 @@
            echo "PR doesn't exist or is closed."
            pr_open=false
          fi
          echo "pr_open=$pr_open" >> "$GITHUB_OUTPUT"
          echo "pr_open=$pr_open" >> $GITHUB_OUTPUT
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

@@ -76,15 +74,14 @@
    runs-on: "ubuntu-latest"
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
          persist-credentials: false

      - name: Get PR number, title, and branch name
        id: pr_info
@@ -93,11 +90,9 @@
          PR_NUMBER=$(gh pr view --json number | jq -r '.number')
          PR_TITLE=$(gh pr view --json title | jq -r '.title')
          PR_URL=$(gh pr view --json url | jq -r '.url')
          {
            echo "PR_URL=$PR_URL"
            echo "PR_NUMBER=$PR_NUMBER"
            echo "PR_TITLE=$PR_TITLE"
          } >> "$GITHUB_OUTPUT"
          echo "PR_URL=$PR_URL" >> $GITHUB_OUTPUT
          echo "PR_NUMBER=$PR_NUMBER" >> $GITHUB_OUTPUT
          echo "PR_TITLE=$PR_TITLE" >> $GITHUB_OUTPUT
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

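Several hunks above trade repeated `>> $GITHUB_OUTPUT` appends for a single grouped redirect. A standalone sketch of the same shell pattern against an ordinary temp file (the `PR_*` values are made up):

```shell
# Grouping the echoes in { …; } opens the output file once instead of
# three times; ShellCheck (SC2129) also prefers this over repeated
# appends to the same file.
out_file=$(mktemp)

PR_URL="https://example.com/pull/42"   # hypothetical values
PR_NUMBER=42
PR_TITLE="Fix flaky test"

{
  echo "PR_URL=$PR_URL"
  echo "PR_NUMBER=$PR_NUMBER"
  echo "PR_TITLE=$PR_TITLE"
} >> "$out_file"

cat "$out_file"
```

In a workflow step, `$out_file` would be `"$GITHUB_OUTPUT"`, quoted so the path survives any whitespace.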
@@ -105,8 +100,8 @@
        id: set_tags
        run: |
          set -euo pipefail
          echo "CODER_BASE_IMAGE_TAG=$CODER_BASE_IMAGE_TAG" >> "$GITHUB_OUTPUT"
          echo "CODER_IMAGE_TAG=$CODER_IMAGE_TAG" >> "$GITHUB_OUTPUT"
          echo "CODER_BASE_IMAGE_TAG=$CODER_BASE_IMAGE_TAG" >> $GITHUB_OUTPUT
          echo "CODER_IMAGE_TAG=$CODER_IMAGE_TAG" >> $GITHUB_OUTPUT
        env:
          CODER_BASE_IMAGE_TAG: ghcr.io/coder/coder-preview-base:pr${{ steps.pr_info.outputs.PR_NUMBER }}
          CODER_IMAGE_TAG: ghcr.io/coder/coder-preview:pr${{ steps.pr_info.outputs.PR_NUMBER }}
@@ -123,16 +118,14 @@
        id: check_deployment
        run: |
          set -euo pipefail
          if helm status "pr${PR_NUMBER}" --namespace "pr${PR_NUMBER}" > /dev/null 2>&1; then
          if helm status "pr${{ steps.pr_info.outputs.PR_NUMBER }}" --namespace "pr${{ steps.pr_info.outputs.PR_NUMBER }}" > /dev/null 2>&1; then
            echo "Deployment already exists. Skipping deployment."
            NEW=false
          else
            echo "Deployment doesn't exist."
            NEW=true
          fi
          echo "NEW=$NEW" >> "$GITHUB_OUTPUT"
        env:
          PR_NUMBER: ${{ steps.pr_info.outputs.PR_NUMBER }}
          echo "NEW=$NEW" >> $GITHUB_OUTPUT

      - name: Check changed files
        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
@@ -161,20 +154,17 @@
      - name: Print number of changed files
        run: |
          set -euo pipefail
          echo "Total number of changed files: ${ALL_COUNT}"
          echo "Number of ignored files: ${IGNORED_COUNT}"
        env:
          ALL_COUNT: ${{ steps.filter.outputs.all_count }}
          IGNORED_COUNT: ${{ steps.filter.outputs.ignored_count }}
          echo "Total number of changed files: ${{ steps.filter.outputs.all_count }}"
          echo "Number of ignored files: ${{ steps.filter.outputs.ignored_count }}"

      - name: Build conditionals
        id: build_conditionals
        run: |
          set -euo pipefail
          # build if the workflow is manually triggered and the deployment doesn't exist (first build or force rebuild)
          echo "first_or_force_build=${{ (github.event_name == 'workflow_dispatch' && steps.check_deployment.outputs.NEW == 'true') || github.event.inputs.build == 'true' }}" >> "$GITHUB_OUTPUT"
          echo "first_or_force_build=${{ (github.event_name == 'workflow_dispatch' && steps.check_deployment.outputs.NEW == 'true') || github.event.inputs.build == 'true' }}" >> $GITHUB_OUTPUT
          # build if the deployment already exist and there are changes in the files that we care about (automatic updates)
          echo "automatic_rebuild=${{ steps.check_deployment.outputs.NEW == 'false' && steps.filter.outputs.all_count > steps.filter.outputs.ignored_count }}" >> "$GITHUB_OUTPUT"
          echo "automatic_rebuild=${{ steps.check_deployment.outputs.NEW == 'false' && steps.filter.outputs.all_count > steps.filter.outputs.ignored_count }}" >> $GITHUB_OUTPUT

  comment-pr:
    needs: get_info
@@ -184,7 +174,7 @@
      pull-requests: write # needed for commenting on PRs
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

@@ -228,15 +218,14 @@
      CODER_IMAGE_TAG: ${{ needs.get_info.outputs.CODER_IMAGE_TAG }}
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
          persist-credentials: false

      - name: Setup Node
        uses: ./.github/actions/setup-node
@@ -248,7 +237,7 @@
        uses: ./.github/actions/setup-sqlc

      - name: GHCR Login
        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
@@ -261,13 +250,12 @@
          make gen/mark-fresh
          export DOCKER_IMAGE_NO_PREREQUISITES=true
          version="$(./scripts/version.sh)"
          CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
          export CODER_IMAGE_BUILD_BASE_TAG
          export CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
          make -j build/coder_linux_amd64
          ./scripts/build_docker.sh \
            --arch amd64 \
            --target "${CODER_IMAGE_TAG}" \
            --version "$version" \
            --target ${{ env.CODER_IMAGE_TAG }} \
            --version $version \
            --push \
            build/coder_linux_amd64

@@ -288,7 +276,7 @@
      PR_HOSTNAME: "pr${{ needs.get_info.outputs.PR_NUMBER }}.${{ secrets.PR_DEPLOYMENTS_DOMAIN }}"
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

@@ -305,13 +293,13 @@
          set -euo pipefail
          foundTag=$(
            gh api /orgs/coder/packages/container/coder-preview/versions |
            jq -r --arg tag "pr${PR_NUMBER}" '.[] |
            jq -r --arg tag "pr${{ env.PR_NUMBER }}" '.[] |
            select(.metadata.container.tags == [$tag]) |
            .metadata.container.tags[0]'
          )
          if [ -z "$foundTag" ]; then
            echo "Image not found"
            echo "${CODER_IMAGE_TAG} not found in ghcr.io/coder/coder-preview"
            echo "${{ env.CODER_IMAGE_TAG }} not found in ghcr.io/coder/coder-preview"
            exit 1
          else
            echo "Image found"
@@ -326,42 +314,40 @@
          curl -X POST "https://api.cloudflare.com/client/v4/zones/${{ secrets.PR_DEPLOYMENTS_ZONE_ID }}/dns_records" \
            -H "Authorization: Bearer ${{ secrets.PR_DEPLOYMENTS_CLOUDFLARE_API_TOKEN }}" \
            -H "Content-Type:application/json" \
            --data '{"type":"CNAME","name":"*.'"${PR_HOSTNAME}"'","content":"'"${PR_HOSTNAME}"'","ttl":1,"proxied":false}'
            --data '{"type":"CNAME","name":"*.${{ env.PR_HOSTNAME }}","content":"${{ env.PR_HOSTNAME }}","ttl":1,"proxied":false}'

      - name: Create PR namespace
        if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
        run: |
          set -euo pipefail
          # try to delete the namespace, but don't fail if it doesn't exist
          kubectl delete namespace "pr${PR_NUMBER}" || true
          kubectl create namespace "pr${PR_NUMBER}"
          kubectl delete namespace "pr${{ env.PR_NUMBER }}" || true
          kubectl create namespace "pr${{ env.PR_NUMBER }}"

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        with:
          persist-credentials: false
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

      - name: Check and Create Certificate
        if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
        run: |
          # Using kubectl to check if a Certificate resource already exists
          # we are doing this to avoid letsenrypt rate limits
          if ! kubectl get certificate "pr${PR_NUMBER}-tls" -n pr-deployment-certs > /dev/null 2>&1; then
          if ! kubectl get certificate pr${{ env.PR_NUMBER }}-tls -n pr-deployment-certs > /dev/null 2>&1; then
            echo "Certificate doesn't exist. Creating a new one."
            envsubst < ./.github/pr-deployments/certificate.yaml | kubectl apply -f -
          else
            echo "Certificate exists. Skipping certificate creation."
          fi
          echo "Copy certificate from pr-deployment-certs to pr${PR_NUMBER} namespace"
          until kubectl get secret "pr${PR_NUMBER}-tls" -n pr-deployment-certs &> /dev/null
          echo "Copy certificate from pr-deployment-certs to pr${{ env.PR_NUMBER }} namespace"
          until kubectl get secret pr${{ env.PR_NUMBER }}-tls -n pr-deployment-certs &> /dev/null
          do
            echo "Waiting for secret pr${PR_NUMBER}-tls to be created..."
            echo "Waiting for secret pr${{ env.PR_NUMBER }}-tls to be created..."
            sleep 5
          done
          (
            kubectl get secret "pr${PR_NUMBER}-tls" -n pr-deployment-certs -o json |
            kubectl get secret pr${{ env.PR_NUMBER }}-tls -n pr-deployment-certs -o json |
            jq 'del(.metadata.namespace,.metadata.creationTimestamp,.metadata.resourceVersion,.metadata.selfLink,.metadata.uid,.metadata.managedFields)' |
            kubectl -n "pr${PR_NUMBER}" apply -f -
            kubectl -n pr${{ env.PR_NUMBER }} apply -f -
          )

      - name: Set up PostgreSQL database
@@ -369,14 +355,13 @@
        run: |
          helm repo add bitnami https://charts.bitnami.com/bitnami
          helm install coder-db bitnami/postgresql \
            --namespace "pr${PR_NUMBER}" \
            --set image.repository=bitnamilegacy/postgresql \
            --namespace pr${{ env.PR_NUMBER }} \
            --set auth.username=coder \
            --set auth.password=coder \
            --set auth.database=coder \
            --set persistence.size=10Gi
          kubectl create secret generic coder-db-url -n "pr${PR_NUMBER}" \
            --from-literal=url="postgres://coder:coder@coder-db-postgresql.pr${PR_NUMBER}.svc.cluster.local:5432/coder?sslmode=disable"
          kubectl create secret generic coder-db-url -n pr${{ env.PR_NUMBER }} \
            --from-literal=url="postgres://coder:coder@coder-db-postgresql.pr${{ env.PR_NUMBER }}.svc.cluster.local:5432/coder?sslmode=disable"

      - name: Create a service account, role, and rolebinding for the PR namespace
        if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
@@ -398,8 +383,8 @@
        run: |
          set -euo pipefail
          helm dependency update --skip-refresh ./helm/coder
          helm upgrade --install "pr${PR_NUMBER}" ./helm/coder \
            --namespace "pr${PR_NUMBER}" \
          helm upgrade --install "pr${{ env.PR_NUMBER }}" ./helm/coder \
            --namespace "pr${{ env.PR_NUMBER }}" \
            --values ./pr-deploy-values.yaml \
            --force

@@ -408,8 +393,8 @@
        run: |
          helm repo add coder-logstream-kube https://helm.coder.com/logstream-kube
          helm upgrade --install coder-logstream-kube coder-logstream-kube/coder-logstream-kube \
            --namespace "pr${PR_NUMBER}" \
            --set url="https://${PR_HOSTNAME}"
            --namespace "pr${{ env.PR_NUMBER }}" \
            --set url="https://${{ env.PR_HOSTNAME }}"

      - name: Get Coder binary
        if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
@@ -417,16 +402,16 @@
          set -euo pipefail

          DEST="${HOME}/coder"
          URL="https://${PR_HOSTNAME}/bin/coder-linux-amd64"
          URL="https://${{ env.PR_HOSTNAME }}/bin/coder-linux-amd64"

          mkdir -p "$(dirname "$DEST")"
          mkdir -p "$(dirname ${DEST})"

          COUNT=0
          until curl --output /dev/null --silent --head --fail "$URL"; do
          until $(curl --output /dev/null --silent --head --fail "$URL"); do
            printf '.'
            sleep 5
            COUNT=$((COUNT+1))
            if [ "$COUNT" -ge 60 ]; then
            if [ $COUNT -ge 60 ]; then
              echo "Timed out waiting for URL to be available"
              exit 1
            fi
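The hunk above also touches an anti-pattern: `until $(curl …)` runs curl's (empty) output as a command, whereas plain `until curl …` tests curl's exit status directly. A runnable sketch of the bounded retry loop, with a stub in place of the network check so it works anywhere (the names and attempt count are made up):

```shell
COUNT=0
COUNT_SEEN=0
ATTEMPTS_NEEDED=3   # hypothetical: the "URL" starts responding on try 3

# Stand-in for: curl --output /dev/null --silent --head --fail "$URL"
check_url() {
  COUNT_SEEN=$((COUNT_SEEN + 1))
  [ "$COUNT_SEEN" -ge "$ATTEMPTS_NEEDED" ]
}

# `until` loops while the command fails; the counter bounds the wait.
until check_url; do
  COUNT=$((COUNT + 1))
  if [ "$COUNT" -ge 60 ]; then
    echo "Timed out waiting for URL to be available"
    exit 1
  fi
done
echo "URL is available after $COUNT retries"
```

Quoting `"$COUNT"` in the `[ … -ge 60 ]` test, as the first version of the line does, avoids a syntax error if the variable were ever empty.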
@@ -435,7 +420,7 @@
          curl -fsSL "$URL" -o "${DEST}"
          chmod +x "${DEST}"
          "${DEST}" version
          sudo mv "${DEST}" /usr/local/bin/coder
          mv "${DEST}" /usr/local/bin/coder

      - name: Create first user
        if: needs.get_info.outputs.NEW == 'true' || github.event.inputs.deploy == 'true'
@@ -450,24 +435,24 @@

          # add mask so that the password is not printed to the logs
          echo "::add-mask::$password"
          echo "password=$password" >> "$GITHUB_OUTPUT"
          echo "password=$password" >> $GITHUB_OUTPUT

          coder login \
            --first-user-username "pr${PR_NUMBER}-admin" \
            --first-user-email "pr${PR_NUMBER}@coder.com" \
            --first-user-password "$password" \
            --first-user-username pr${{ env.PR_NUMBER }}-admin \
            --first-user-email pr${{ env.PR_NUMBER }}@coder.com \
            --first-user-password $password \
            --first-user-trial=false \
            --use-token-as-session \
            "https://${PR_HOSTNAME}"
            https://${{ env.PR_HOSTNAME }}

          # Create a user for the github.actor
          # TODO: update once https://github.com/coder/coder/issues/15466 is resolved
          # coder users create \
          #   --username ${GITHUB_ACTOR} \
          #   --username ${{ github.actor }} \
          #   --login-type github

          # promote the user to admin role
          # coder org members edit-role ${GITHUB_ACTOR} organization-admin
          # coder org members edit-role ${{ github.actor }} organization-admin
          # TODO: update once https://github.com/coder/internal/issues/207 is resolved

      - name: Send Slack notification
@@ -476,19 +461,17 @@
          curl -s -o /dev/null -X POST -H 'Content-type: application/json' \
            -d \
            '{
              "pr_number": "'"${PR_NUMBER}"'",
              "pr_url": "'"${PR_URL}"'",
              "pr_title": "'"${PR_TITLE}"'",
              "pr_access_url": "'"https://${PR_HOSTNAME}"'",
              "pr_username": "'"pr${PR_NUMBER}-admin"'",
              "pr_email": "'"pr${PR_NUMBER}@coder.com"'",
              "pr_password": "'"${PASSWORD}"'",
              "pr_actor": "'"${GITHUB_ACTOR}"'"
              "pr_number": "'"${{ env.PR_NUMBER }}"'",
              "pr_url": "'"${{ env.PR_URL }}"'",
              "pr_title": "'"${{ env.PR_TITLE }}"'",
              "pr_access_url": "'"https://${{ env.PR_HOSTNAME }}"'",
              "pr_username": "'"pr${{ env.PR_NUMBER }}-admin"'",
              "pr_email": "'"pr${{ env.PR_NUMBER }}@coder.com"'",
              "pr_password": "'"${{ steps.setup_deployment.outputs.password }}"'",
              "pr_actor": "'"${{ github.actor }}"'"
            }' \
            ${{ secrets.PR_DEPLOYMENTS_SLACK_WEBHOOK }}
          echo "Slack notification sent"
        env:
          PASSWORD: ${{ steps.setup_deployment.outputs.password }}

      - name: Find Comment
        uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e # v3.1.0
@@ -521,7 +504,7 @@
        run: |
          set -euo pipefail
          cd .github/pr-deployments/template
          coder templates push -y --variable "namespace=pr${PR_NUMBER}" kubernetes
          coder templates push -y --variable namespace=pr${{ env.PR_NUMBER }} kubernetes

          # Create workspace
          coder create --template="kubernetes" kube --parameter cpu=2 --parameter memory=4 --parameter home_disk_size=2 -y

@@ -14,7 +14,7 @@
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

+65 -134
@@ -32,43 +32,15 @@ env:
  CODER_RELEASE_NOTES: ${{ inputs.release_notes }}

jobs:
  # Only allow maintainers/admins to release.
  check-perms:
    runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
    steps:
      - name: Allow only maintainers/admins
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const {data} = await github.rest.repos.getCollaboratorPermissionLevel({
              owner: context.repo.owner,
              repo: context.repo.repo,
              username: context.actor
            });
            const role = data.role_name || data.user?.role_name || data.permission;
            const perms = data.user?.permissions || {};
            core.info(`Actor ${context.actor} permission=${data.permission}, role_name=${role}`);

            const allowed =
              role === 'admin' ||
              role === 'maintain' ||
              perms.admin === true ||
              perms.maintain === true;

            if (!allowed) core.setFailed('Denied: requires maintain or admin');

  # build-dylib is a separate job to build the dylib on macOS.
  build-dylib:
    runs-on: ${{ github.repository_owner == 'coder' && 'depot-macos-latest' || 'macos-latest' }}
    needs: check-perms
    steps:
      # Harden Runner doesn't work on macOS.
      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
          persist-credentials: false

      # If the event that triggered the build was an annotated tag (which our
      # tags are supposed to be), actions/checkout has a bug where the tag in
@@ -81,16 +53,14 @@ jobs:
      - name: Setup build tools
        run: |
          brew install bash gnu-getopt make
          {
            echo "$(brew --prefix bash)/bin"
            echo "$(brew --prefix gnu-getopt)/bin"
            echo "$(brew --prefix make)/libexec/gnubin"
          } >> "$GITHUB_PATH"
          echo "$(brew --prefix bash)/bin" >> $GITHUB_PATH
          echo "$(brew --prefix gnu-getopt)/bin" >> $GITHUB_PATH
          echo "$(brew --prefix make)/libexec/gnubin" >> $GITHUB_PATH

      - name: Switch XCode Version
        uses: maxim-lobanov/setup-xcode@60606e260d2fc5762a71e64e74b2174e8ea3c8bd # v1.6.0
        with:
          xcode-version: "16.1.0"
          xcode-version: "16.0.0"

      - name: Setup Go
        uses: ./.github/actions/setup-go
@@ -144,7 +114,7 @@

  release:
    name: Build and publish
    needs: [build-dylib, check-perms]
    needs: build-dylib
    runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
    permissions:
      # Required to publish a release
@@ -164,15 +134,14 @@
      version: ${{ steps.version.outputs.version }}
    steps:
      - name: Harden Runner
        uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
        uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
        with:
          egress-policy: audit

      - name: Checkout
        uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0
          persist-credentials: false

      # If the event that triggered the build was an annotated tag (which our
      # tags are supposed to be), actions/checkout has a bug where the tag in
@@ -187,9 +156,9 @@
        run: |
          set -euo pipefail
          version="$(./scripts/version.sh)"
          echo "version=$version" >> "$GITHUB_OUTPUT"
          echo "version=$version" >> $GITHUB_OUTPUT
          # Speed up future version.sh calls.
          echo "CODER_FORCE_VERSION=$version" >> "$GITHUB_ENV"
          echo "CODER_FORCE_VERSION=$version" >> $GITHUB_ENV
          echo "$version"

      # Verify that all expectations for a release are met.
@@ -231,7 +200,7 @@

          release_notes_file="$(mktemp -t release_notes.XXXXXX)"
          echo "$CODER_RELEASE_NOTES" > "$release_notes_file"
          echo CODER_RELEASE_NOTES_FILE="$release_notes_file" >> "$GITHUB_ENV"
          echo CODER_RELEASE_NOTES_FILE="$release_notes_file" >> $GITHUB_ENV

      - name: Show release notes
        run: |
@@ -239,7 +208,7 @@
          cat "$CODER_RELEASE_NOTES_FILE"

      - name: Docker Login
        uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
@@ -253,7 +222,7 @@

      # Necessary for signing Windows binaries.
      - name: Setup Java
        uses: actions/setup-java@dded0888837ed1f317902acf8a20df0ad188d165 # v5.0.0
        uses: actions/setup-java@c5195efecf7bdfc987ee8bae7a71cb8b11521c00 # v4.7.1
        with:
          distribution: "zulu"
          java-version: "11.0"
@@ -317,17 +286,17 @@
      # Setup GCloud for signing Windows binaries.
      - name: Authenticate to Google Cloud
        id: gcloud_auth
        uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
        uses: google-github-actions/auth@ba79af03959ebeac9769e648f473a284504d9193 # v2.1.10
        with:
          workload_identity_provider: ${{ vars.GCP_CODE_SIGNING_WORKLOAD_ID_PROVIDER }}
          service_account: ${{ vars.GCP_CODE_SIGNING_SERVICE_ACCOUNT }}
          workload_identity_provider: ${{ secrets.GCP_CODE_SIGNING_WORKLOAD_ID_PROVIDER }}
          service_account: ${{ secrets.GCP_CODE_SIGNING_SERVICE_ACCOUNT }}
          token_format: "access_token"

      - name: Setup GCloud SDK
        uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # v3.0.1
        uses: google-github-actions/setup-gcloud@77e7a554d41e2ee56fc945c52dfd3f33d12def9a # v2.1.4

      - name: Download dylibs
        uses: actions/download-artifact@634f93cb2916e3fdff6788551b99b062d0335ce0 # v5.0.0
        uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
        with:
          name: dylibs
          path: ./build
@@ -354,8 +323,6 @@
        env:
          CODER_SIGN_WINDOWS: "1"
          CODER_SIGN_DARWIN: "1"
          CODER_SIGN_GPG: "1"
          CODER_GPG_RELEASE_KEY_BASE64: ${{ secrets.GPG_RELEASE_KEY_BASE64 }}
          CODER_WINDOWS_RESOURCES: "1"
          AC_CERTIFICATE_FILE: /tmp/apple_cert.p12
          AC_CERTIFICATE_PASSWORD_FILE: /tmp/apple_cert_password.txt
@@ -381,9 +348,9 @@
          set -euo pipefail
          if [[ "${CODER_RELEASE:-}" != *t* ]] || [[ "${CODER_DRY_RUN:-}" == *t* ]]; then
            # Empty value means use the default and avoid building a fresh one.
            echo "tag=" >> "$GITHUB_OUTPUT"
            echo "tag=" >> $GITHUB_OUTPUT
          else
            echo "tag=$(CODER_IMAGE_BASE=ghcr.io/coder/coder-base ./scripts/image_tag.sh)" >> "$GITHUB_OUTPUT"
            echo "tag=$(CODER_IMAGE_BASE=ghcr.io/coder/coder-base ./scripts/image_tag.sh)" >> $GITHUB_OUTPUT
          fi

      - name: Create empty base-build-context directory
@@ -397,7 +364,7 @@
      # This uses OIDC authentication, so no auth variables are required.
      - name: Build base Docker image via depot.dev
        if: steps.image-base-tag.outputs.tag != ''
        uses: depot/build-push-action@9785b135c3c76c33db102e45be96a25ab55cd507 # v1.16.2
        uses: depot/build-push-action@2583627a84956d07561420dcc1d0eb1f2af3fac0 # v1.15.0
        with:
          project: wl5hnrrkns
          context: base-build-context
@@ -418,7 +385,7 @@
          # available immediately
          for i in {1..10}; do
            rc=0
            raw_manifests=$(docker buildx imagetools inspect --raw "${IMAGE_TAG}") || rc=$?
            raw_manifests=$(docker buildx imagetools inspect --raw "${{ steps.image-base-tag.outputs.tag }}") || rc=$?
            if [[ "$rc" -eq 0 ]]; then
              break
            fi
@@ -440,8 +407,6 @@
          echo "$manifests" | grep -q linux/amd64
          echo "$manifests" | grep -q linux/arm64
          echo "$manifests" | grep -q linux/arm/v7
        env:
          IMAGE_TAG: ${{ steps.image-base-tag.outputs.tag }}

      # GitHub attestation provides SLSA provenance for Docker images, establishing a verifiable
      # record that these images were built in GitHub Actions with specific inputs and environment.
@@ -454,7 +419,7 @@
        id: attest_base
        if: ${{ !inputs.dry_run && steps.image-base-tag.outputs.tag != '' }}
        continue-on-error: true
        uses: actions/attest@daf44fb950173508f38bd2406030372c1d1162b1 # v3.0.0
        uses: actions/attest@ce27ba3b4a9a139d9a20a4a07d69fabb52f1e5bc # v2.4.0
        with:
          subject-name: ${{ steps.image-base-tag.outputs.tag }}
          predicate-type: "https://slsa.dev/provenance/v1"
@@ -509,7 +474,7 @@

          # Save multiarch image tag for attestation
          multiarch_image="$(./scripts/image_tag.sh)"
          echo "multiarch_image=${multiarch_image}" >> "$GITHUB_OUTPUT"
          echo "multiarch_image=${multiarch_image}" >> $GITHUB_OUTPUT

          # For debugging, print all docker image tags
          docker images
@@ -517,15 +482,16 @@
          # if the current version is equal to the highest (according to semver)
          # version in the repo, also create a multi-arch image as ":latest" and
          # push it
          created_latest_tag=false
          if [[ "$(git tag | grep '^v' | grep -vE '(rc|dev|-|\+|\/)' | sort -r --version-sort | head -n1)" == "v$(./scripts/version.sh)" ]]; then
            # shellcheck disable=SC2046
            ./scripts/build_docker_multiarch.sh \
              --push \
              --target "$(./scripts/image_tag.sh --version latest)" \
              $(cat build/coder_"$version"_linux_{amd64,arm64,armv7}.tag)
            echo "created_latest_tag=true" >> "$GITHUB_OUTPUT"
            created_latest_tag=true
            echo "created_latest_tag=true" >> $GITHUB_OUTPUT
          else
            echo "created_latest_tag=false" >> "$GITHUB_OUTPUT"
            echo "created_latest_tag=false" >> $GITHUB_OUTPUT
          fi
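The `sort -r --version-sort | head -n1` pipeline above picks the highest release tag by numeric version ordering rather than lexicographically. A small standalone sketch with made-up tags:

```shell
# --version-sort (GNU sort) compares numeric components, so v2.10.0
# outranks v2.9.1; a plain lexical sort would rank v2.9.1 higher.
# The grep mirrors the workflow's filter for rc/dev/pre-release tags.
latest=$(printf 'v2.9.1\nv2.10.0\nv2.10.0-rc1\n' |
  grep -vE '(rc|dev|-|\+|\/)' |
  sort -r --version-sort |
  head -n1)
echo "$latest"
```

Here `v2.10.0-rc1` is filtered out, and version-aware sorting then selects `v2.10.0` over `v2.9.1`.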
env:
|
||||
CODER_BASE_IMAGE_TAG: ${{ steps.image-base-tag.outputs.tag }}
|
||||
@@ -533,27 +499,24 @@ jobs:
|
||||
- name: SBOM Generation and Attestation
|
||||
if: ${{ !inputs.dry_run }}
|
||||
env:
|
||||
COSIGN_EXPERIMENTAL: '1'
|
||||
MULTIARCH_IMAGE: ${{ steps.build_docker.outputs.multiarch_image }}
|
||||
VERSION: ${{ steps.version.outputs.version }}
|
||||
CREATED_LATEST_TAG: ${{ steps.build_docker.outputs.created_latest_tag }}
|
||||
COSIGN_EXPERIMENTAL: "1"
|
||||
run: |
|
||||
set -euxo pipefail
|
||||
|
||||
# Generate SBOM for multi-arch image with version in filename
|
||||
echo "Generating SBOM for multi-arch image: ${MULTIARCH_IMAGE}"
|
||||
syft "${MULTIARCH_IMAGE}" -o spdx-json > "coder_${VERSION}_sbom.spdx.json"
|
||||
echo "Generating SBOM for multi-arch image: ${{ steps.build_docker.outputs.multiarch_image }}"
|
||||
syft "${{ steps.build_docker.outputs.multiarch_image }}" -o spdx-json > coder_${{ steps.version.outputs.version }}_sbom.spdx.json
|
||||
|
||||
# Attest SBOM to multi-arch image
|
||||
echo "Attesting SBOM to multi-arch image: ${MULTIARCH_IMAGE}"
|
||||
cosign clean --force=true "${MULTIARCH_IMAGE}"
|
||||
echo "Attesting SBOM to multi-arch image: ${{ steps.build_docker.outputs.multiarch_image }}"
|
||||
cosign clean --force=true "${{ steps.build_docker.outputs.multiarch_image }}"
|
||||
cosign attest --type spdxjson \
|
||||
--predicate "coder_${VERSION}_sbom.spdx.json" \
|
||||
--predicate coder_${{ steps.version.outputs.version }}_sbom.spdx.json \
|
||||
--yes \
|
||||
"${MULTIARCH_IMAGE}"
|
||||
"${{ steps.build_docker.outputs.multiarch_image }}"
|
||||
|
||||
# If latest tag was created, also attest it
|
||||
if [[ "${CREATED_LATEST_TAG}" == "true" ]]; then
|
||||
if [[ "${{ steps.build_docker.outputs.created_latest_tag }}" == "true" ]]; then
|
||||
latest_tag="$(./scripts/image_tag.sh --version latest)"
|
||||
echo "Generating SBOM for latest image: ${latest_tag}"
|
||||
syft "${latest_tag}" -o spdx-json > coder_latest_sbom.spdx.json
|
||||
@@ -570,7 +533,7 @@ jobs:
id: attest_main
if: ${{ !inputs.dry_run }}
continue-on-error: true
uses: actions/attest@daf44fb950173508f38bd2406030372c1d1162b1 # v3.0.0
uses: actions/attest@ce27ba3b4a9a139d9a20a4a07d69fabb52f1e5bc # v2.4.0
with:
subject-name: ${{ steps.build_docker.outputs.multiarch_image }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -607,14 +570,14 @@ jobs:
- name: Get latest tag name
id: latest_tag
if: ${{ !inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true' }}
run: echo "tag=$(./scripts/image_tag.sh --version latest)" >> "$GITHUB_OUTPUT"
run: echo "tag=$(./scripts/image_tag.sh --version latest)" >> $GITHUB_OUTPUT

# If this is the highest version according to semver, also attest the "latest" tag
- name: GitHub Attestation for "latest" Docker image
id: attest_latest
if: ${{ !inputs.dry_run && steps.build_docker.outputs.created_latest_tag == 'true' }}
continue-on-error: true
uses: actions/attest@daf44fb950173508f38bd2406030372c1d1162b1 # v3.0.0
uses: actions/attest@ce27ba3b4a9a139d9a20a4a07d69fabb52f1e5bc # v2.4.0
with:
subject-name: ${{ steps.latest_tag.outputs.tag }}
predicate-type: "https://slsa.dev/provenance/v1"
@@ -650,7 +613,7 @@ jobs:
# Report attestation failures but don't fail the workflow
- name: Check attestation status
if: ${{ !inputs.dry_run }}
run: | # zizmor: ignore[template-injection] We're just reading steps.attest_x.outcome here, no risk of injection
run: |
if [[ "${{ steps.attest_base.outcome }}" == "failure" && "${{ steps.attest_base.conclusion }}" != "skipped" ]]; then
echo "::warning::GitHub attestation for base image failed"
fi
@@ -669,30 +632,6 @@ jobs:
- name: ls build
run: ls -lh build

- name: Publish Coder CLI binaries and detached signatures to GCS
if: ${{ !inputs.dry_run }}
run: |
set -euxo pipefail

version="$(./scripts/version.sh)"

# Source array of slim binaries
declare -A binaries
binaries["coder-darwin-amd64"]="coder-slim_${version}_darwin_amd64"
binaries["coder-darwin-arm64"]="coder-slim_${version}_darwin_arm64"
binaries["coder-linux-amd64"]="coder-slim_${version}_linux_amd64"
binaries["coder-linux-arm64"]="coder-slim_${version}_linux_arm64"
binaries["coder-linux-armv7"]="coder-slim_${version}_linux_armv7"
binaries["coder-windows-amd64.exe"]="coder-slim_${version}_windows_amd64.exe"
binaries["coder-windows-arm64.exe"]="coder-slim_${version}_windows_arm64.exe"

for cli_name in "${!binaries[@]}"; do
slim_binary="${binaries[$cli_name]}"
detached_signature="${slim_binary}.asc"
gcloud storage cp "./build/${slim_binary}" "gs://releases.coder.com/coder-cli/${version}/${cli_name}"
gcloud storage cp "./build/${detached_signature}" "gs://releases.coder.com/coder-cli/${version}/${cli_name}.asc"
done

- name: Publish release
run: |
set -euo pipefail
@@ -715,11 +654,11 @@ jobs:
./build/*.apk
./build/*.deb
./build/*.rpm
"./coder_${VERSION}_sbom.spdx.json"
./coder_${{ steps.version.outputs.version }}_sbom.spdx.json
)

# Only include the latest SBOM file if it was created
if [[ "${CREATED_LATEST_TAG}" == "true" ]]; then
if [[ "${{ steps.build_docker.outputs.created_latest_tag }}" == "true" ]]; then
files+=(./coder_latest_sbom.spdx.json)
fi

@@ -730,17 +669,15 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CODER_GPG_RELEASE_KEY_BASE64: ${{ secrets.GPG_RELEASE_KEY_BASE64 }}
VERSION: ${{ steps.version.outputs.version }}
CREATED_LATEST_TAG: ${{ steps.build_docker.outputs.created_latest_tag }}

- name: Authenticate to Google Cloud
uses: google-github-actions/auth@7c6bc770dae815cd3e89ee6cdf493a5fab2cc093 # v3.0.0
uses: google-github-actions/auth@ba79af03959ebeac9769e648f473a284504d9193 # v2.1.10
with:
workload_identity_provider: ${{ vars.GCP_WORKLOAD_ID_PROVIDER }}
service_account: ${{ vars.GCP_SERVICE_ACCOUNT }}
workload_identity_provider: ${{ secrets.GCP_WORKLOAD_ID_PROVIDER }}
service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}

- name: Setup GCloud SDK
uses: google-github-actions/setup-gcloud@aa5489c8933f4cc7a4f7d45035b3b1440c9c10db # 3.0.1
uses: google-github-actions/setup-gcloud@77e7a554d41e2ee56fc945c52dfd3f33d12def9a # 2.1.4

- name: Publish Helm Chart
if: ${{ !inputs.dry_run }}
@@ -752,12 +689,10 @@ jobs:
cp "build/provisioner_helm_${version}.tgz" build/helm
gsutil cp gs://helm.coder.com/v2/index.yaml build/helm/index.yaml
helm repo index build/helm --url https://helm.coder.com/v2 --merge build/helm/index.yaml
gsutil -h "Cache-Control:no-cache,max-age=0" cp "build/helm/coder_helm_${version}.tgz" gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp "build/helm/provisioner_helm_${version}.tgz" gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp "build/helm/index.yaml" gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp "helm/artifacthub-repo.yml" gs://helm.coder.com/v2
helm push "build/coder_helm_${version}.tgz" oci://ghcr.io/coder/chart
helm push "build/provisioner_helm_${version}.tgz" oci://ghcr.io/coder/chart
gsutil -h "Cache-Control:no-cache,max-age=0" cp build/helm/coder_helm_${version}.tgz gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp build/helm/provisioner_helm_${version}.tgz gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp build/helm/index.yaml gs://helm.coder.com/v2
gsutil -h "Cache-Control:no-cache,max-age=0" cp helm/artifacthub-repo.yml gs://helm.coder.com/v2

- name: Upload artifacts to actions (if dry-run)
if: ${{ inputs.dry_run }}
@@ -802,18 +737,18 @@ jobs:
# TODO: skip this if it's not a new release (i.e. a backport). This is
# fine right now because it just makes a PR that we can close.
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

- name: Update homebrew
env:
# Variables used by the `gh` command
GH_REPO: coder/homebrew-coder
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
VERSION: ${{ needs.release.outputs.version }}
run: |
# Keep version number around for reference, removing any potential leading v
coder_version="$(echo "${VERSION}" | tr -d v)"
coder_version="$(echo "${{ needs.release.outputs.version }}" | tr -d v)"

set -euxo pipefail

@@ -832,9 +767,9 @@ jobs:
wget "$checksums_url" -O checksums.txt

# Get the SHAs
darwin_arm_sha="$(grep "darwin_arm64.zip" checksums.txt | awk '{ print $1 }')"
darwin_intel_sha="$(grep "darwin_amd64.zip" checksums.txt | awk '{ print $1 }')"
linux_sha="$(grep "linux_amd64.tar.gz" checksums.txt | awk '{ print $1 }')"
darwin_arm_sha="$(cat checksums.txt | grep "darwin_arm64.zip" | awk '{ print $1 }')"
darwin_intel_sha="$(cat checksums.txt | grep "darwin_amd64.zip" | awk '{ print $1 }')"
linux_sha="$(cat checksums.txt | grep "linux_amd64.tar.gz" | awk '{ print $1 }')"

echo "macOS arm64: $darwin_arm_sha"
echo "macOS amd64: $darwin_intel_sha"
@@ -847,7 +782,7 @@ jobs:

# Check if a PR already exists.
pr_count="$(gh pr list --search "head:$brew_branch" --json id,closed | jq -r ".[] | select(.closed == false) | .id" | wc -l)"
if [ "$pr_count" -gt 0 ]; then
if [[ "$pr_count" > 0 ]]; then
echo "Bailing out as PR already exists" 2>&1
exit 0
fi
@@ -866,8 +801,8 @@ jobs:
-B master -H "$brew_branch" \
-t "coder $coder_version" \
-b "" \
-r "${GITHUB_ACTOR}" \
-a "${GITHUB_ACTOR}" \
-r "${{ github.actor }}" \
-a "${{ github.actor }}" \
-b "This automatic PR was triggered by the release of Coder v$coder_version"

publish-winget:
@@ -878,7 +813,7 @@ jobs:

steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit
@@ -888,10 +823,9 @@ jobs:
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}

- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false

# If the event that triggered the build was an annotated tag (which our
# tags are supposed to be), actions/checkout has a bug where the tag in
@@ -910,7 +844,7 @@ jobs:
# The package version is the same as the tag minus the leading "v".
# The version in this output already has the leading "v" removed but
# we do it again to be safe.
$version = $env:VERSION.Trim('v')
$version = "${{ needs.release.outputs.version }}".Trim('v')

$release_assets = gh release view --repo coder/coder "v${version}" --json assets | `
ConvertFrom-Json
@@ -942,14 +876,13 @@ jobs:
# For wingetcreate. We need a real token since we're pushing a commit
# to GitHub and then making a PR in a different repo.
WINGET_GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
VERSION: ${{ needs.release.outputs.version }}

- name: Comment on PR
run: |
# wait 30 seconds
Start-Sleep -Seconds 30.0
# Find the PR that wingetcreate just made.
$version = $env:VERSION.Trim('v')
$version = "${{ needs.release.outputs.version }}".Trim('v')
$pr_list = gh pr list --repo microsoft/winget-pkgs --search "author:cdrci Coder.Coder version ${version}" --limit 1 --json number | `
ConvertFrom-Json
$pr_number = $pr_list[0].number
@@ -960,7 +893,6 @@ jobs:
# For gh CLI. We need a real token since we're commenting on a PR in a
# different repo.
GH_TOKEN: ${{ secrets.CDRCI_GITHUB_TOKEN }}
VERSION: ${{ needs.release.outputs.version }}

# publish-sqlc pushes the latest schema to sqlc cloud.
# At present these pushes cannot be tagged, so the last push is always the latest.
@@ -971,15 +903,14 @@ jobs:
if: ${{ !inputs.dry_run }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 1
persist-credentials: false

# We need golang to run the migration main.go
- name: Setup Go
@@ -20,12 +20,12 @@ jobs:

steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

- name: "Checkout code"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false

@@ -47,6 +47,6 @@ jobs:

# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@192325c86100d080feab897ff886c34abd4c83a3 # v3.29.5
uses: github/codeql-action/upload-sarif@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
with:
sarif_file: results.sarif

@@ -27,20 +27,18 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

- name: Setup Go
uses: ./.github/actions/setup-go

- name: Initialize CodeQL
uses: github/codeql-action/init@192325c86100d080feab897ff886c34abd4c83a3 # v3.29.5
uses: github/codeql-action/init@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
with:
languages: go, javascript

@@ -50,7 +48,7 @@ jobs:
rm Makefile

- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@192325c86100d080feab897ff886c34abd4c83a3 # v3.29.5
uses: github/codeql-action/analyze@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0

- name: Send Slack notification on failure
if: ${{ failure() }}
@@ -69,15 +67,14 @@ jobs:
runs-on: ${{ github.repository_owner == 'coder' && 'depot-ubuntu-22.04-8' || 'ubuntu-latest' }}
steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false

- name: Setup Go
uses: ./.github/actions/setup-go
@@ -137,16 +134,15 @@ jobs:
# This environment variables forces scripts/build_docker.sh to build
# the base image tag locally instead of using the cached version from
# the registry.
CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"
export CODER_IMAGE_BUILD_BASE_TAG
export CODER_IMAGE_BUILD_BASE_TAG="$(CODER_IMAGE_BASE=coder-base ./scripts/image_tag.sh --version "$version")"

# We would like to use make -j here, but it doesn't work with the some recent additions
# to our code generation.
make "$image_job"
echo "image=$(cat "$image_job")" >> "$GITHUB_OUTPUT"
echo "image=$(cat "$image_job")" >> $GITHUB_OUTPUT

- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8
uses: aquasecurity/trivy-action@76071ef0d7ec797419534a183b498b4d6366cf37
with:
image-ref: ${{ steps.build.outputs.image }}
format: sarif
@@ -154,7 +150,7 @@ jobs:
severity: "CRITICAL,HIGH"

- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@192325c86100d080feab897ff886c34abd4c83a3 # v3.29.5
uses: github/codeql-action/upload-sarif@ce28f5bb42b7a9f2c824e633a3f6ee835bab6858 # v3.29.0
with:
sarif_file: trivy-results.sarif
category: "Trivy"
@@ -18,12 +18,12 @@ jobs:
pull-requests: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

- name: stale
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
with:
stale-issue-label: "stale"
stale-pr-label: "stale"
@@ -44,7 +44,7 @@ jobs:
# Start with the oldest issues, always.
ascending: true
- name: "Close old issues labeled likely-no"
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
@@ -96,14 +96,12 @@ jobs:
contents: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

- name: Checkout repository
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Run delete-old-branches-action
uses: beatlabs/delete-old-branches-action@4eeeb8740ff8b3cb310296ddd6b43c3387734588 # v0.0.11
with:
@@ -120,7 +118,7 @@ jobs:
actions: write
steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

@@ -19,7 +19,7 @@ jobs:
timeout-minutes: 5
steps:
- name: Start Coder workspace
uses: coder/start-workspace-action@f97a681b4cc7985c9eef9963750c7cc6ebc93a19
uses: coder/start-workspace-action@35a4608cefc7e8cc56573cae7c3b85304575cb72
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
github-username: >-
@@ -1,217 +0,0 @@
name: AI Triage Automation

on:
issues:
types:
- labeled
workflow_dispatch:
inputs:
issue_url:
description: "GitHub Issue URL to process"
required: true
type: string
template_name:
description: "Coder template to use for workspace"
required: true
default: "coder"
type: string
template_preset:
description: "Template preset to use"
required: true
default: "none"
type: string
prefix:
description: "Prefix for workspace name"
required: false
default: "traiage"
type: string

jobs:
traiage:
name: Triage GitHub Issue with Claude Code
runs-on: ubuntu-latest
if: github.event.label.name == 'traiage' || github.event_name == 'workflow_dispatch'
timeout-minutes: 30
env:
CODER_URL: ${{ secrets.TRAIAGE_CODER_URL }}
CODER_SESSION_TOKEN: ${{ secrets.TRAIAGE_CODER_SESSION_TOKEN }}
permissions:
contents: read
issues: write
actions: write

steps:
# This is only required for testing locally using nektos/act, so leaving commented out.
# An alternative is to use a larger or custom image.
# - name: Install Github CLI
#   id: install-gh
#   run: |
#     (type -p wget >/dev/null || (sudo apt update && sudo apt install wget -y)) \
#     && sudo mkdir -p -m 755 /etc/apt/keyrings \
#     && out=$(mktemp) && wget -nv -O$out https://cli.github.com/packages/githubcli-archive-keyring.gpg \
#     && cat $out | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
#     && sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
#     && sudo mkdir -p -m 755 /etc/apt/sources.list.d \
#     && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
#     && sudo apt update \
#     && sudo apt install gh -y
- name: Determine Inputs
id: determine-inputs
if: always()
env:
GITHUB_ACTOR: ${{ github.actor }}
GITHUB_EVENT_ISSUE_HTML_URL: ${{ github.event.issue.html_url }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_EVENT_USER_ID: ${{ github.event.sender.id }}
GITHUB_EVENT_USER_LOGIN: ${{ github.event.sender.login }}
INPUTS_ISSUE_URL: ${{ inputs.issue_url }}
INPUTS_TEMPLATE_NAME: ${{ inputs.template_name || 'coder' }}
INPUTS_TEMPLATE_PRESET: ${{ inputs.template_preset || 'none'}}
INPUTS_PREFIX: ${{ inputs.prefix || 'traiage' }}
GH_TOKEN: ${{ github.token }}
run: |
echo "Using template name: ${INPUTS_TEMPLATE_NAME}"
echo "template_name=${INPUTS_TEMPLATE_NAME}" >> "${GITHUB_OUTPUT}"

echo "Using template preset: ${INPUTS_TEMPLATE_PRESET}"
echo "template_preset=${INPUTS_TEMPLATE_PRESET}" >> "${GITHUB_OUTPUT}"

echo "Using prefix: ${INPUTS_PREFIX}"
echo "prefix=${INPUTS_PREFIX}" >> "${GITHUB_OUTPUT}"

# For workflow_dispatch, use the actor who triggered it
# For issues events, use the issue author.
if [[ "${GITHUB_EVENT_NAME}" == "workflow_dispatch" ]]; then
if ! GITHUB_USER_ID=$(gh api "users/${GITHUB_ACTOR}" --jq '.id'); then
echo "::error::Failed to get GitHub user ID for actor ${GITHUB_ACTOR}"
exit 1
fi
echo "Using workflow_dispatch actor: ${GITHUB_ACTOR} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_ACTOR}" >> "${GITHUB_OUTPUT}"

echo "Using issue URL: ${INPUTS_ISSUE_URL}"
echo "issue_url=${INPUTS_ISSUE_URL}" >> "${GITHUB_OUTPUT}"

exit 0
elif [[ "${GITHUB_EVENT_NAME}" == "issues" ]]; then
GITHUB_USER_ID=${GITHUB_EVENT_USER_ID}
echo "Using issue author: ${GITHUB_EVENT_USER_LOGIN} (ID: ${GITHUB_USER_ID})"
echo "github_user_id=${GITHUB_USER_ID}" >> "${GITHUB_OUTPUT}"
echo "github_username=${GITHUB_EVENT_USER_LOGIN}" >> "${GITHUB_OUTPUT}"

echo "Using issue URL: ${GITHUB_EVENT_ISSUE_HTML_URL}"
echo "issue_url=${GITHUB_EVENT_ISSUE_HTML_URL}" >> "${GITHUB_OUTPUT}"

exit 0
else
echo "::error::Unsupported event type: ${GITHUB_EVENT_NAME}"
exit 1
fi

- name: Verify push access
env:
GITHUB_REPOSITORY: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
GITHUB_USERNAME: ${{ steps.determine-inputs.outputs.github_username }}
GITHUB_USER_ID: ${{ steps.determine-inputs.outputs.github_user_id }}
run: |
# Query the actor’s permission on this repo
can_push="$(gh api "/repos/${GITHUB_REPOSITORY}/collaborators/${GITHUB_USERNAME}/permission" --jq '.user.permissions.push')"
if [[ "${can_push}" != "true" ]]; then
echo "::error title=Access Denied::${GITHUB_USERNAME} does not have push access to ${GITHUB_REPOSITORY}"
exit 1
fi

- name: Extract context key from issue
id: extract-context
env:
ISSUE_URL: ${{ steps.determine-inputs.outputs.issue_url }}
GH_TOKEN: ${{ github.token }}
run: |
issue_number="$(gh issue view "${ISSUE_URL}" --json number --jq '.number')"
context_key="gh-${issue_number}"
echo "context_key=${context_key}" >> "${GITHUB_OUTPUT}"
echo "CONTEXT_KEY=${context_key}" >> "${GITHUB_ENV}"
- name: Download and install Coder binary
shell: bash
env:
CODER_URL: ${{ secrets.TRAIAGE_CODER_URL }}
run: |
if [ "${{ runner.arch }}" == "ARM64" ]; then
ARCH="arm64"
else
ARCH="amd64"
fi
mkdir -p "${HOME}/.local/bin"
curl -fsSL --compressed "$CODER_URL/bin/coder-linux-${ARCH}" -o "${HOME}/.local/bin/coder"
chmod +x "${HOME}/.local/bin/coder"
export PATH="$HOME/.local/bin:$PATH"
coder version
coder whoami
echo "$HOME/.local/bin" >> "${GITHUB_PATH}"

- name: Get Coder username from GitHub actor
id: get-coder-username
env:
CODER_SESSION_TOKEN: ${{ secrets.TRAIAGE_CODER_SESSION_TOKEN }}
GH_TOKEN: ${{ github.token }}
GITHUB_USER_ID: ${{ steps.determine-inputs.outputs.github_user_id }}
run: |
user_json=$(
coder users list --github-user-id="${GITHUB_USER_ID}" --output=json
)
coder_username=$(jq -r 'first | .username' <<< "$user_json")
[[ -z "${coder_username}" || "${coder_username}" == "null" ]] && echo "No Coder user with GitHub user ID ${GITHUB_USER_ID} found" && exit 1
echo "coder_username=${coder_username}" >> "${GITHUB_OUTPUT}"

- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
fetch-depth: 0

# TODO(Cian): this is a good use-case for 'recipes'
- name: Create Coder task
id: create-task
env:
CODER_USERNAME: ${{ steps.get-coder-username.outputs.coder_username }}
CONTEXT_KEY: ${{ steps.extract-context.outputs.context_key }}
GH_TOKEN: ${{ github.token }}
GITHUB_REPOSITORY: ${{ github.repository }}
ISSUE_URL: ${{ steps.determine-inputs.outputs.issue_url }}
PREFIX: ${{ steps.determine-inputs.outputs.prefix }}
RUN_ID: ${{ github.run_id }}
TEMPLATE_NAME: ${{ steps.determine-inputs.outputs.template_name }}
TEMPLATE_PARAMETERS: ${{ secrets.TRAIAGE_TEMPLATE_PARAMETERS }}
TEMPLATE_PRESET: ${{ steps.determine-inputs.outputs.template_preset }}
run: |
# Fetch issue description using `gh` CLI
#shellcheck disable=SC2016 # The template string should not be subject to shell expansion
issue_description=$(gh issue view "${ISSUE_URL}" \
--json 'title,body,comments' \
--template '{{printf "%s\n\n%s\n\nComments:\n" .title .body}}{{range $k, $v := .comments}} - {{index $v.author "login"}}: {{printf "%s\n" $v.body}}{{end}}')

# Write a prompt to PROMPT_FILE
PROMPT=$(cat <<EOF
Fix ${ISSUE_URL}

Analyze the below GitHub issue description, understand the root cause, and make appropriate changes to resolve the issue.
---
${issue_description}
EOF
)
export PROMPT

export TASK_NAME="${PREFIX}-${CONTEXT_KEY}-${RUN_ID}"
echo "Creating task: $TASK_NAME"
./scripts/traiage.sh create
if [[ "${ISSUE_URL}" == "https://github.com/${GITHUB_REPOSITORY}"* ]]; then
gh issue comment "${ISSUE_URL}" --body "Task created: https://dev.coder.com/tasks/${CODER_USERNAME}/${TASK_NAME}" --create-if-none --edit-last
else
echo "Skipping comment on other repo."
fi
echo "TASK_NAME=${CODER_USERNAME}/${TASK_NAME}" >> "${GITHUB_OUTPUT}"
echo "TASK_NAME=${CODER_USERNAME}/${TASK_NAME}" >> "${GITHUB_ENV}"
@@ -1,6 +1,5 @@
[default]
extend-ignore-identifiers-re = ["gho_.*"]
extend-ignore-re = ["(#|//)\\s*spellchecker:ignore-next-line\\n.*"]

[default.extend-identifiers]
alog = "alog"
@@ -29,7 +28,6 @@ HELO = "HELO"
LKE = "LKE"
byt = "byt"
typ = "typ"
Inferrable = "Inferrable"

[files]
extend-exclude = [
@@ -49,5 +47,5 @@ extend-exclude = [
"provisioner/terraform/testdata/**",
# notifications' golden files confuse the detector because of quoted-printable encoding
"coderd/notifications/testdata/**",
"agent/agentcontainers/testdata/devcontainercli/**",
"agent/agentcontainers/testdata/devcontainercli/**"
]

@@ -21,17 +21,15 @@ jobs:
pull-requests: write # required to post PR review comments by the action
steps:
- name: Harden Runner
uses: step-security/harden-runner@f4a75cfd619ee5ce8d5b864b0d183aff3c69b55a # v2.13.1
uses: step-security/harden-runner@002fdce3c6a235733a90a27c80493a3241e56863 # v2.12.1
with:
egress-policy: audit

- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

- name: Check Markdown links
uses: umbrelladocs/action-linkspector@874d01cae9fd488e3077b08952093235bd626977 # v1.3.7
uses: umbrelladocs/action-linkspector@e2ccef58c4b9eb89cd71ee23a8629744bba75aa6 # v1.3.5
id: markdown-link-check
# checks all markdown files from /docs including all subfolders
with:
@@ -43,10 +41,7 @@ jobs:
- name: Send Slack notification
if: failure() && github.event_name == 'schedule'
run: |
curl \
-X POST \
-H 'Content-type: application/json' \
-d '{"msg":"Broken links found in the documentation. Please check the logs at '"${LOGS_URL}"'"}' "${{ secrets.DOCS_LINK_SLACK_WEBHOOK }}"
curl -X POST -H 'Content-type: application/json' -d '{"msg":"Broken links found in the documentation. Please check the logs at ${{ env.LOGS_URL }}"}' ${{ secrets.DOCS_LINK_SLACK_WEBHOOK }}
echo "Sent Slack notification"
env:
LOGS_URL: https://github.com/coder/coder/actions/runs/${{ github.run_id }}

@@ -1,4 +0,0 @@
rules:
cache-poisoning:
ignore:
- "ci.yaml:184"
+2 -11
@@ -169,16 +169,6 @@ linters-settings:
- name: var-declaration
- name: var-naming
- name: waitgroup-by-value
usetesting:
# Only os-setenv is enabled because we migrated to usetesting from another linter that
# only covered os-setenv.
os-setenv: true
os-create-temp: false
os-mkdir-temp: false
os-temp-dir: false
os-chdir: false
context-background: false
context-todo: false

# irrelevant as of Go v1.22: https://go.dev/blog/loopvar-preview
govet:
@@ -191,6 +181,7 @@ linters-settings:

issues:
exclude-dirs:
- coderd/database/dbmem
- node_modules
- .git

@@ -262,6 +253,7 @@ linters:
# - wastedassign

- staticcheck
- tenv
# In Go, it's possible for a package to test it's internal functionality
# without testing any exported functions. This is enabled to promote
# decomposing a package before testing it's internals. A function caller
@@ -274,5 +266,4 @@ linters:
- typecheck
- unconvert
- unused
- usetesting
- dupl
@@ -1,36 +0,0 @@
{
  "mcpServers": {
    "go-language-server": {
      "type": "stdio",
      "command": "go",
      "args": [
        "run",
        "github.com/isaacphi/mcp-language-server@latest",
        "-workspace",
        "./",
        "-lsp",
        "go",
        "--",
        "run",
        "golang.org/x/tools/gopls@latest"
      ],
      "env": {}
    },
    "typescript-language-server": {
      "type": "stdio",
      "command": "go",
      "args": [
        "run",
        "github.com/isaacphi/mcp-language-server@latest",
        "-workspace",
        "./site/",
        "-lsp",
        "pnpx",
        "--",
        "typescript-language-server",
        "--stdio"
      ],
      "env": {}
    }
  }
}
@@ -49,18 +49,16 @@
  "[javascript][javascriptreact][json][jsonc][typescript][typescriptreact]": {
    "editor.defaultFormatter": "biomejs.biome",
    "editor.codeActionsOnSave": {
      "source.fixAll.biome": "explicit"
      "quickfix.biome": "explicit"
      // "source.organizeImports.biome": "explicit"
    }
  },

  "tailwindCSS.classFunctions": ["cva", "cn"],
  "[css][html][markdown][yaml]": {
    "editor.defaultFormatter": "esbenp.prettier-vscode"
  },
  "typos.config": ".github/workflows/typos.toml",
  "[markdown]": {
    "editor.defaultFormatter": "DavidAnson.vscode-markdownlint"
  },
  "biome.lsp.bin": "site/node_modules/.bin/biome"
}
}
@@ -1,25 +1,21 @@
# Coder Development Guidelines

@.claude/docs/WORKFLOWS.md
@.cursorrules
@README.md
@package.json
Read [cursor rules](.cursorrules).

## 🚀 Essential Commands
## Build/Test/Lint Commands
| Task              | Command                  | Notes                            |
|-------------------|--------------------------|----------------------------------|
| **Development**   | `./scripts/develop.sh`   | ⚠️ Don't use manual build        |
| **Build**         | `make build`             | Fat binaries (includes server)   |
| **Build Slim**    | `make build-slim`        | Slim binaries                    |
| **Test**          | `make test`              | Full test suite                  |
| **Test Single**   | `make test RUN=TestName` | Faster than full suite           |
| **Test Postgres** | `make test-postgres`     | Run tests with Postgres database |
| **Test Race**     | `make test-race`         | Run tests with Go race detector  |
| **Lint**          | `make lint`              | Always run after changes         |
| **Generate**      | `make gen`               | After database changes           |
| **Format**        | `make fmt`               | Auto-format code                 |
| **Clean**         | `make clean`             | Clean build artifacts            |
### Main Commands

- `make build` or `make build-fat` - Build all "fat" binaries (includes "server" functionality)
- `make build-slim` - Build "slim" binaries
- `make test` - Run Go tests
- `make test RUN=TestFunctionName` or `go test -v ./path/to/package -run TestFunctionName` - Run a single test
- `make test-postgres` - Run tests with Postgres database
- `make test-race` - Run tests with Go race detector
- `make test-e2e` - Run end-to-end tests
- `make lint` - Run all linters
- `make fmt` - Format all code
- `make gen` - Generates mocks, database queries and other auto-generated files

### Frontend Commands (site directory)
@@ -30,109 +26,81 @@
- `pnpm lint` - Lint frontend code
- `pnpm test` - Run frontend tests

### Documentation Commands
## Code Style Guidelines

- `pnpm run format-docs` - Format markdown tables in docs
- `pnpm run lint-docs` - Lint and fix markdown files
- `pnpm run storybook` - Run Storybook (from site directory)

### Go

## 🔧 Critical Patterns

- Follow [Effective Go](https://go.dev/doc/effective_go) and [Go's Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments)
- Use `gofumpt` for formatting
- Create packages when used during implementation
- Validate abstractions against implementations

### Database Changes (ALWAYS FOLLOW)
### Error Handling

1. Modify `coderd/database/queries/*.sql` files
2. Run `make gen`
3. If audit errors: update `enterprise/audit/table.go`
4. Run `make gen` again

- Use descriptive error messages
- Wrap errors with context
- Propagate errors appropriately
- Use proper error types (`xerrors.Errorf("failed to X: %w", err)`)
### LSP Navigation (USE FIRST)
### Naming

#### Go LSP (for backend code)

- Use clear, descriptive names
- Abbreviate only when obvious
- Follow Go and TypeScript naming conventions

- **Find definitions**: `mcp__go-language-server__definition symbolName`
- **Find references**: `mcp__go-language-server__references symbolName`
- **Get type info**: `mcp__go-language-server__hover filePath line column`
- **Rename symbol**: `mcp__go-language-server__rename_symbol filePath line column newName`

### Comments

#### TypeScript LSP (for frontend code in site/)

- Document exported functions, types, and non-obvious logic
- Follow JSDoc format for TypeScript
- Use godoc format for Go code

- **Find definitions**: `mcp__typescript-language-server__definition symbolName`
- **Find references**: `mcp__typescript-language-server__references symbolName`
- **Get type info**: `mcp__typescript-language-server__hover filePath line column`
- **Rename symbol**: `mcp__typescript-language-server__rename_symbol filePath line column newName`

## Commit Style

### OAuth2 Error Handling

- Follow [Conventional Commits 1.0.0](https://www.conventionalcommits.org/en/v1.0.0/)
- Format: `type(scope): message`
- Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
- Keep message titles concise (~70 characters)
- Use imperative, present tense in commit titles
```go
// OAuth2-compliant error responses
writeOAuth2Error(ctx, rw, http.StatusBadRequest, "invalid_grant", "description")
```
## Database queries
### Authorization Context

- MUST DO! Any changes to database - adding queries, modifying queries should be done in the `coderd/database/queries/*.sql` files. Use `make gen` to generate necessary changes after.
- MUST DO! Queries are grouped in files relating to context - e.g. `prebuilds.sql`, `users.sql`, `provisionerjobs.sql`.
- After making changes to any `coderd/database/queries/*.sql` files you must run `make gen` to generate respective ORM changes.

```go
// Public endpoints needing system access
app, err := api.Database.GetOAuth2ProviderAppByClientID(dbauthz.AsSystemRestricted(ctx), clientID)

// Authenticated endpoints with user context
app, err := api.Database.GetOAuth2ProviderAppByClientID(ctx, clientID)
```

## Architecture
### Core Components

## 📋 Quick Reference

- **coderd**: Main API service connecting workspaces, provisioners, and users
- **provisionerd**: Execution context for infrastructure-modifying providers
- **Agents**: Services in remote workspaces providing features like SSH and port forwarding
- **Workspaces**: Cloud resources defined by Terraform

### Full workflows available in imported WORKFLOWS.md
## Sub-modules

### New Feature Checklist
### Template System

- [ ] Run `git pull` to ensure latest code
- [ ] Check if feature touches database - you'll need migrations
- [ ] Check if feature touches audit logs - update `enterprise/audit/table.go`

- Templates define infrastructure for workspaces using Terraform
- Environment variables pass context between Coder and templates
- Official modules extend development environments

## 🏗️ Architecture
### RBAC System

- **coderd**: Main API service
- **provisionerd**: Infrastructure provisioning
- **Agents**: Workspace services (SSH, port forwarding)
- **Database**: PostgreSQL with `dbauthz` authorization

- Permissions defined at site, organization, and user levels
- Object-Action model protects resources
- Built-in roles: owner, member, auditor, templateAdmin
- Permission format: `<sign>?<level>.<object>.<id>.<action>`
## 🧪 Testing
### Database

### Race Condition Prevention

- PostgreSQL 13+ recommended for production
- Migrations managed with `migrate`
- Database authorization through `dbauthz` package

- Use unique identifiers: `fmt.Sprintf("test-client-%s-%d", t.Name(), time.Now().UnixNano())`
- Never use hardcoded names in concurrent tests
## Frontend

### OAuth2 Testing

The frontend is contained in the site folder.

- Full suite: `./scripts/oauth2/test-mcp-oauth2.sh`
- Manual testing: `./scripts/oauth2/test-manual-flow.sh`

### Timing Issues

NEVER use `time.Sleep` to mitigate timing issues. If an issue seems like it should use `time.Sleep`, read through https://github.com/coder/quartz and specifically the [README](https://github.com/coder/quartz/blob/main/README.md) to better understand how to handle timing issues.
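Quartz covers clock-driven waits; for the simpler case of waiting on a goroutine, the same principle applies: synchronize on the event itself rather than sleeping for a guessed duration. A minimal sketch (the "work" is a placeholder):

```go
package main

import "fmt"

func main() {
	done := make(chan struct{})
	go func() {
		// ... do the work the test would otherwise poll for ...
		close(done)
	}()

	// Block until the event actually happens instead of guessing a
	// duration with time.Sleep; this can never flake from a wait that
	// was too short.
	<-done
	fmt.Println("worker finished")
}
```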
## 🎯 Code Style

### Detailed guidelines in imported WORKFLOWS.md

- Follow [Uber Go Style Guide](https://github.com/uber-go/guide/blob/master/style.md)
- Commit format: `type(scope): message`

## 📚 Detailed Development Guides

@.claude/docs/OAUTH2.md
@.claude/docs/TESTING.md
@.claude/docs/TROUBLESHOOTING.md
@.claude/docs/DATABASE.md

## 🚨 Common Pitfalls

1. **Audit table errors** → Update `enterprise/audit/table.go`
2. **OAuth2 errors** → Return RFC-compliant format
3. **Race conditions** → Use unique test identifiers
4. **Missing newlines** → Ensure files end with newline

---

*This file stays lean and actionable. Detailed workflows and explanations are imported automatically.*
For building the frontend, refer to [this document](docs/about/contributing/frontend.md)
@@ -1,41 +1,8 @@
# These APIs are versioned, so any changes need to be carefully reviewed for
# whether to bump API major or minor versions.
# These APIs are versioned, so any changes need to be carefully reviewed for whether
# to bump API major or minor versions.
agent/proto/ @spikecurtis @johnstcn
provisionerd/proto/ @spikecurtis @johnstcn
provisionersdk/proto/ @spikecurtis @johnstcn
tailnet/proto/ @spikecurtis @johnstcn
vpn/vpn.proto @spikecurtis @johnstcn
vpn/version.go @spikecurtis @johnstcn

# This caching code is particularly tricky, and one must be very careful when
# altering it.
coderd/files/ @aslilac

coderd/dynamicparameters/ @Emyrk
coderd/rbac/ @Emyrk

# Mainly dependent on coder/guts, which is maintained by @Emyrk
scripts/apitypings/ @Emyrk
scripts/gensite/ @aslilac

site/ @aslilac @Parkreiner
site/src/hooks/ @Parkreiner
# These rules intentionally do not specify any owners. More specific rules
# override less specific rules, so these files are "ignored" by the site/ rule.
site/e2e/google/protobuf/timestampGenerated.ts
site/e2e/provisionerGenerated.ts
site/src/api/countriesGenerated.ts
site/src/api/rbacresourcesGenerated.ts
site/src/api/typesGenerated.ts
site/src/testHelpers/entities.ts
site/CLAUDE.md

# The blood and guts of the autostop algorithm, which is quite complex and
# requires elite ball knowledge of most of the scheduling code to make changes
# without inadvertently affecting other parts of the codebase.
coderd/schedule/autostop.go @deansheather @DanielleMaywood

# Usage tracking code requires intimate knowledge of Tallyman and Metronome, as
# well as guidance from revenue.
coderd/usage/ @deansheather @spikecurtis
enterprise/coderd/usage/ @deansheather @spikecurtis
provisionerd/proto/ @spikecurtis @johnstcn
provisionersdk/proto/ @spikecurtis @johnstcn
@@ -252,10 +252,6 @@ $(CODER_ALL_BINARIES): go.mod go.sum \
	fi

	cp "$@" "./site/out/bin/coder-$$os-$$arch$$dot_ext"

	if [[ "$${CODER_SIGN_GPG:-0}" == "1" ]]; then
		cp "$@.asc" "./site/out/bin/coder-$$os-$$arch$$dot_ext.asc"
	fi
	fi

# This task builds Coder Desktop dylibs
@@ -460,31 +456,16 @@ fmt: fmt/ts fmt/go fmt/terraform fmt/shfmt fmt/biome fmt/markdown
.PHONY: fmt

fmt/go:
ifdef FILE
	# Format single file
	if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.go ]] && ! grep -q "DO NOT EDIT" "$(FILE)"; then \
		echo "$(GREEN)==>$(RESET) $(BOLD)fmt/go$(RESET) $(FILE)"; \
		go run mvdan.cc/gofumpt@v0.8.0 -w -l "$(FILE)"; \
	fi
else
	go mod tidy
	echo "$(GREEN)==>$(RESET) $(BOLD)fmt/go$(RESET)"
	# VS Code users should check out
	# https://github.com/mvdan/gofumpt#visual-studio-code
	find . $(FIND_EXCLUSIONS) -type f -name '*.go' -print0 | \
		xargs -0 grep -E --null -L '^// Code generated .* DO NOT EDIT\.$$' | \
		xargs -0 go run mvdan.cc/gofumpt@v0.8.0 -w -l
endif
	xargs -0 grep --null -L "DO NOT EDIT" | \
	xargs -0 go run mvdan.cc/gofumpt@v0.4.0 -w -l
.PHONY: fmt/go
fmt/ts: site/node_modules/.installed
ifdef FILE
	# Format single TypeScript/JavaScript file
	if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.ts ]] || [[ "$(FILE)" == *.tsx ]] || [[ "$(FILE)" == *.js ]] || [[ "$(FILE)" == *.jsx ]]; then \
		echo "$(GREEN)==>$(RESET) $(BOLD)fmt/ts$(RESET) $(FILE)"; \
		(cd site/ && pnpm exec biome format --write "../$(FILE)"); \
	fi
else
	echo "$(GREEN)==>$(RESET) $(BOLD)fmt/ts$(RESET)"
	cd site
	# Avoid writing files in CI to reduce file write activity
@@ -493,17 +474,9 @@ ifdef CI
else
	pnpm run check:fix
endif
endif
.PHONY: fmt/ts

fmt/biome: site/node_modules/.installed
ifdef FILE
	# Format single file with biome
	if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.ts ]] || [[ "$(FILE)" == *.tsx ]] || [[ "$(FILE)" == *.js ]] || [[ "$(FILE)" == *.jsx ]]; then \
		echo "$(GREEN)==>$(RESET) $(BOLD)fmt/biome$(RESET) $(FILE)"; \
		(cd site/ && pnpm exec biome format --write "../$(FILE)"); \
	fi
else
	echo "$(GREEN)==>$(RESET) $(BOLD)fmt/biome$(RESET)"
	cd site/
	# Avoid writing files in CI to reduce file write activity
fmt/terraform: $(wildcard *.tf)
ifdef FILE
	# Format single Terraform file
	if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.tf ]] || [[ "$(FILE)" == *.tfvars ]]; then \
		echo "$(GREEN)==>$(RESET) $(BOLD)fmt/terraform$(RESET) $(FILE)"; \
		terraform fmt "$(FILE)"; \
	fi
else
	echo "$(GREEN)==>$(RESET) $(BOLD)fmt/terraform$(RESET)"
	terraform fmt -recursive
endif
.PHONY: fmt/terraform

fmt/shfmt: $(SHELL_SRC_FILES)
ifdef FILE
	# Format single shell script
	if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.sh ]]; then \
		echo "$(GREEN)==>$(RESET) $(BOLD)fmt/shfmt$(RESET) $(FILE)"; \
		shfmt -w "$(FILE)"; \
	fi
else
	echo "$(GREEN)==>$(RESET) $(BOLD)fmt/shfmt$(RESET)"
	# Only do diff check in CI, errors on diff.
ifdef CI
@@ -543,25 +500,14 @@ ifdef CI
else
	shfmt -w $(SHELL_SRC_FILES)
endif
endif
.PHONY: fmt/shfmt
fmt/markdown: node_modules/.installed
ifdef FILE
	# Format single markdown file
	if [[ -f "$(FILE)" ]] && [[ "$(FILE)" == *.md ]]; then \
		echo "$(GREEN)==>$(RESET) $(BOLD)fmt/markdown$(RESET) $(FILE)"; \
		pnpm exec markdown-table-formatter "$(FILE)"; \
	fi
else
	echo "$(GREEN)==>$(RESET) $(BOLD)fmt/markdown$(RESET)"
	pnpm format-docs
endif
.PHONY: fmt/markdown

# Note: we don't run zizmor in the lint target because it takes a while. CI
# runs it explicitly.
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown lint/actions/actionlint lint/check-scopes
lint: lint/shellcheck lint/go lint/ts lint/examples lint/helm lint/site-icons lint/markdown
.PHONY: lint
lint/site-icons:
@@ -578,7 +524,6 @@ lint/go:
	./scripts/check_codersdk_imports.sh
	linter_ver=$(shell egrep -o 'GOLANGCI_LINT_VERSION=\S+' dogfood/coder/Dockerfile | cut -d '=' -f 2)
	go run github.com/golangci/golangci-lint/cmd/golangci-lint@v$$linter_ver run
	go run github.com/coder/paralleltestctx/cmd/paralleltestctx@v0.0.1 -custom-funcs="testutil.Context" ./...
.PHONY: lint/go

lint/examples:
@@ -600,31 +545,13 @@ lint/markdown: node_modules/.installed
	pnpm lint-docs
.PHONY: lint/markdown

lint/actions: lint/actions/actionlint lint/actions/zizmor
.PHONY: lint/actions

lint/actions/actionlint:
	go run github.com/rhysd/actionlint/cmd/actionlint@v1.7.7
.PHONY: lint/actions/actionlint

lint/actions/zizmor:
	./scripts/zizmor.sh \
		--strict-collection \
		--persona=regular \
		.
.PHONY: lint/actions/zizmor

# Verify api_key_scope enum contains all RBAC <resource>:<action> values.
lint/check-scopes: coderd/database/dump.sql
	go run ./scripts/check-scopes
.PHONY: lint/check-scopes
# All files generated by the database should be added here, and this can be used
# as a target for jobs that need to run after the database is generated.
DB_GEN_FILES := \
	coderd/database/dump.sql \
	coderd/database/querier.go \
	coderd/database/unique_constraint.go \
	coderd/database/dbmem/dbmem.go \
	coderd/database/dbmetrics/dbmetrics.go \
	coderd/database/dbauthz/dbauthz.go \
	coderd/database/dbmock/dbmock.go
@@ -635,23 +562,16 @@ TAILNETTEST_MOCKS := \
	tailnet/tailnettest/workspaceupdatesprovidermock.go \
	tailnet/tailnettest/subscriptionmock.go

AIBRIDGED_MOCKS := \
	enterprise/x/aibridged/aibridgedmock/clientmock.go \
	enterprise/x/aibridged/aibridgedmock/poolmock.go

GEN_FILES := \
	tailnet/proto/tailnet.pb.go \
	agent/proto/agent.pb.go \
	provisionersdk/proto/provisioner.pb.go \
	provisionerd/proto/provisionerd.pb.go \
	vpn/vpn.pb.go \
	enterprise/x/aibridged/proto/aibridged.pb.go \
	$(DB_GEN_FILES) \
	$(SITE_GEN_FILES) \
	coderd/rbac/object_gen.go \
	codersdk/rbacresources_gen.go \
	coderd/rbac/scopes_constants_gen.go \
	codersdk/apikey_scopes_gen.go \
	docs/admin/integrations/prometheus.md \
	docs/reference/cli/index.md \
	docs/admin/security/audit-logs.md \
@@ -664,9 +584,7 @@ GEN_FILES := \
	coderd/database/pubsub/psmock/psmock.go \
	agent/agentcontainers/acmock/acmock.go \
	agent/agentcontainers/dcspec/dcspec_gen.go \
	coderd/httpmw/loggermw/loggermock/loggermock.go \
	codersdk/workspacesdk/agentconnmock/agentconnmock.go \
	$(AIBRIDGED_MOCKS)
	coderd/httpmw/loggermw/loggermock/loggermock.go

# all gen targets should be added here and to gen/mark-fresh
gen: gen/db gen/golden-files $(GEN_FILES)
@@ -696,13 +614,11 @@ gen/mark-fresh:
	provisionersdk/proto/provisioner.pb.go \
	provisionerd/proto/provisionerd.pb.go \
	vpn/vpn.pb.go \
	enterprise/x/aibridged/proto/aibridged.pb.go \
	coderd/database/dump.sql \
	$(DB_GEN_FILES) \
	site/src/api/typesGenerated.ts \
	coderd/rbac/object_gen.go \
	codersdk/rbacresources_gen.go \
	coderd/rbac/scopes_constants_gen.go \
	site/src/api/rbacresourcesGenerated.ts \
	site/src/api/countriesGenerated.ts \
	docs/admin/integrations/prometheus.md \
@@ -718,8 +634,6 @@ gen/mark-fresh:
	agent/agentcontainers/acmock/acmock.go \
	agent/agentcontainers/dcspec/dcspec_gen.go \
	coderd/httpmw/loggermw/loggermock/loggermock.go \
	codersdk/workspacesdk/agentconnmock/agentconnmock.go \
	$(AIBRIDGED_MOCKS) \
	"

	for file in $$files; do
@@ -763,14 +677,6 @@ coderd/httpmw/loggermw/loggermock/loggermock.go: coderd/httpmw/loggermw/logger.g
	go generate ./coderd/httpmw/loggermw/loggermock/
	touch "$@"

codersdk/workspacesdk/agentconnmock/agentconnmock.go: codersdk/workspacesdk/agentconn.go
	go generate ./codersdk/workspacesdk/agentconnmock/
	touch "$@"

$(AIBRIDGED_MOCKS): enterprise/x/aibridged/client.go enterprise/x/aibridged/pool.go
	go generate ./enterprise/x/aibridged/aibridgedmock/
	touch "$@"

agent/agentcontainers/dcspec/dcspec_gen.go: \
	node_modules/.installed \
	agent/agentcontainers/dcspec/devContainer.base.schema.json \
@@ -821,14 +727,6 @@ vpn/vpn.pb.go: vpn/vpn.proto
	--go_opt=paths=source_relative \
	./vpn/vpn.proto

enterprise/x/aibridged/proto/aibridged.pb.go: enterprise/x/aibridged/proto/aibridged.proto
	protoc \
		--go_out=. \
		--go_opt=paths=source_relative \
		--go-drpc_out=. \
		--go-drpc_opt=paths=source_relative \
		./enterprise/x/aibridged/proto/aibridged.proto

site/src/api/typesGenerated.ts: site/node_modules/.installed $(wildcard scripts/apitypings/*) $(shell find ./codersdk $(FIND_EXCLUSIONS) -type f -name '*.go')
	# -C sets the directory for the go run command
	go run -C ./scripts/apitypings main.go > $@
@@ -855,15 +753,6 @@ coderd/rbac/object_gen.go: scripts/typegen/rbacobject.gotmpl scripts/typegen/mai
	rmdir -v "$$tempdir"
	touch "$@"

coderd/rbac/scopes_constants_gen.go: scripts/typegen/scopenames.gotmpl scripts/typegen/main.go coderd/rbac/policy/policy.go
	# Generate typed low-level ScopeName constants from RBACPermissions
	# Write to a temp file first to avoid truncating the package during build
	# since the generator imports the rbac package.
	tempfile=$(shell mktemp /tmp/scopes_constants_gen.XXXXXX)
	go run ./scripts/typegen/main.go rbac scopenames > "$$tempfile"
	mv -v "$$tempfile" coderd/rbac/scopes_constants_gen.go
	touch "$@"

codersdk/rbacresources_gen.go: scripts/typegen/codersdk.gotmpl scripts/typegen/main.go coderd/rbac/object.go coderd/rbac/policy/policy.go
	# Do not overwrite codersdk/rbacresources_gen.go directly, as it would make the file empty, breaking
	# the `codersdk` package and any parallel build targets.
@@ -871,12 +760,6 @@ codersdk/rbacresources_gen.go: scripts/typegen/codersdk.gotmpl scripts/typegen/m
	mv /tmp/rbacresources_gen.go codersdk/rbacresources_gen.go
	touch "$@"

codersdk/apikey_scopes_gen.go: scripts/apikeyscopesgen/main.go coderd/rbac/scopes_catalog.go coderd/rbac/scopes.go
	# Generate SDK constants for external API key scopes.
	go run ./scripts/apikeyscopesgen > /tmp/apikey_scopes_gen.go
	mv /tmp/apikey_scopes_gen.go codersdk/apikey_scopes_gen.go
	touch "$@"

site/src/api/rbacresourcesGenerated.ts: site/node_modules/.installed scripts/typegen/codersdk.gotmpl scripts/typegen/main.go coderd/rbac/object.go coderd/rbac/policy/policy.go
	go run scripts/typegen/main.go rbac typescript > "$@"
	(cd site/ && pnpm exec biome format --write src/api/rbacresourcesGenerated.ts)
@@ -1001,31 +884,12 @@ else
GOTESTSUM_RETRY_FLAGS :=
endif

# default to 8x8 parallelism to avoid overwhelming our workspaces. Hopefully we can remove these defaults
# when we get our test suite's resource utilization under control.
GOTEST_FLAGS := -v -p $(or $(TEST_NUM_PARALLEL_PACKAGES),"8") -parallel=$(or $(TEST_NUM_PARALLEL_TESTS),"8")

# The most common use is to set TEST_COUNT=1 to avoid Go's test cache.
ifdef TEST_COUNT
GOTEST_FLAGS += -count=$(TEST_COUNT)
endif

ifdef TEST_SHORT
GOTEST_FLAGS += -short
endif

ifdef RUN
GOTEST_FLAGS += -run $(RUN)
endif

TEST_PACKAGES ?= ./...

test:
	$(GIT_FLAGS) gotestsum --format standard-quiet $(GOTESTSUM_RETRY_FLAGS) --packages="$(TEST_PACKAGES)" -- $(GOTEST_FLAGS)
	$(GIT_FLAGS) gotestsum --format standard-quiet $(GOTESTSUM_RETRY_FLAGS) --packages="./..." -- -v -short -count=1 $(if $(RUN),-run $(RUN))
.PHONY: test

test-cli:
	$(MAKE) test TEST_PACKAGES="./cli..."
	$(GIT_FLAGS) gotestsum --format standard-quiet $(GOTESTSUM_RETRY_FLAGS) --packages="./cli/..." -- -v -short -count=1
.PHONY: test-cli

# sqlc-cloud-is-setup will fail if no SQLc auth token is set. Use this as a
@@ -1061,7 +925,7 @@ sqlc-vet: test-postgres-docker
test-postgres: test-postgres-docker
	# The postgres test is prone to failure, so we limit parallelism for
	# more consistent execution.
	$(GIT_FLAGS) gotestsum \
	$(GIT_FLAGS) DB=ci gotestsum \
		--junitfile="gotests.xml" \
		--jsonfile="gotests.json" \
		$(GOTESTSUM_RETRY_FLAGS) \
@@ -74,6 +74,7 @@ type Options struct {
	LogDir string
	TempDir string
	ScriptDataDir string
	ExchangeToken func(ctx context.Context) (string, error)
	Client Client
	ReconnectingPTYTimeout time.Duration
	EnvironmentVariables map[string]string
@@ -97,8 +98,7 @@ type Client interface {
	ConnectRPC26(ctx context.Context) (
		proto.DRPCAgentClient26, tailnetproto.DRPCTailnetClient26, error,
	)
	tailnet.DERPMapRewriter
	agentsdk.RefreshableSessionTokenProvider
	RewriteDERPMap(derpMap *tailcfg.DERPMap)
}

type Agent interface {
@@ -131,6 +131,11 @@ func New(options Options) Agent {
		}
		options.ScriptDataDir = options.TempDir
	}
	if options.ExchangeToken == nil {
		options.ExchangeToken = func(_ context.Context) (string, error) {
			return "", nil
		}
	}
	if options.ReportMetadataInterval == 0 {
		options.ReportMetadataInterval = time.Second
	}
@@ -167,6 +172,7 @@ func New(options Options) Agent {
		coordDisconnected: make(chan struct{}),
		environmentVariables: options.EnvironmentVariables,
		client: options.Client,
		exchangeToken: options.ExchangeToken,
		filesystem: options.Filesystem,
		logDir: options.LogDir,
		tempDir: options.TempDir,
@@ -197,6 +203,7 @@ func New(options Options) Agent {
	// coordinator during shut down.
	close(a.coordDisconnected)
	a.announcementBanners.Store(new([]codersdk.BannerConfig))
	a.sessionToken.Store(new(string))
	a.init()
	return a
}
@@ -205,6 +212,7 @@ type agent struct {
	clock quartz.Clock
	logger slog.Logger
	client Client
	exchangeToken func(ctx context.Context) (string, error)
	tailnetListenPort uint16
	filesystem afero.Fs
	logDir string
@@ -246,6 +254,7 @@ type agent struct {
	scriptRunner *agentscripts.Runner
	announcementBanners atomic.Pointer[[]codersdk.BannerConfig] // announcementBanners is atomic because it is periodically updated.
	announcementBannersRefreshInterval time.Duration
	sessionToken atomic.Pointer[string]
	sshServer *agentssh.Server
	sshMaxTimeout time.Duration
	blockFileTransfer bool
@@ -327,16 +336,18 @@ func (a *agent) init() {
	// will not report anywhere.
	a.scriptRunner.RegisterMetrics(a.prometheusRegistry)

	containerAPIOpts := []agentcontainers.Option{
		agentcontainers.WithExecer(a.execer),
		agentcontainers.WithCommandEnv(a.sshServer.CommandEnv),
		agentcontainers.WithScriptLogger(func(logSourceID uuid.UUID) agentcontainers.ScriptLogger {
			return a.logSender.GetScriptLogger(logSourceID)
		}),
	}
	containerAPIOpts = append(containerAPIOpts, a.containerAPIOptions...)
	if a.devcontainers {
		containerAPIOpts := []agentcontainers.Option{
			agentcontainers.WithExecer(a.execer),
			agentcontainers.WithCommandEnv(a.sshServer.CommandEnv),
			agentcontainers.WithScriptLogger(func(logSourceID uuid.UUID) agentcontainers.ScriptLogger {
				return a.logSender.GetScriptLogger(logSourceID)
			}),
		}
		containerAPIOpts = append(containerAPIOpts, a.containerAPIOptions...)

		a.containerAPI = agentcontainers.NewAPI(a.logger.Named("containers"), containerAPIOpts...)
	a.containerAPI = agentcontainers.NewAPI(a.logger.Named("containers"), containerAPIOpts...)
	}

	a.reconnectingPTYServer = reconnectingpty.NewServer(
		a.logger.Named("reconnecting-pty"),
@@ -554,6 +565,7 @@ func (a *agent) reportMetadata(ctx context.Context, aAPI proto.DRPCAgentClient26
	// channel to synchronize the results and avoid both messy
	// mutex logic and overloading the API.
	for _, md := range manifest.Metadata {
		md := md
		// We send the result to the channel in the goroutine to avoid
		// sending the same result multiple times. So, we don't care about
		// the return values.
@@ -781,15 +793,11 @@ func (a *agent) reportConnectionsLoop(ctx context.Context, aAPI proto.DRPCAgentC
		logger.Debug(ctx, "reporting connection")
		_, err := aAPI.ReportConnection(ctx, payload)
		if err != nil {
			// Do not fail the loop if we fail to report a connection, just
			// log a warning.
			// Related to https://github.com/coder/coder/issues/20194
			logger.Warn(ctx, "failed to report connection to server", slog.Error(err))
			// keep going, we still need to remove it from the slice
		} else {
			logger.Debug(ctx, "successfully reported connection")
			return xerrors.Errorf("failed to report connection: %w", err)
		}

		logger.Debug(ctx, "successfully reported connection")

		// Remove the payload we sent.
		a.reportConnectionsMu.Lock()
		a.reportConnections[0] = nil // Release the pointer from the underlying array.
@@ -820,13 +828,6 @@ func (a *agent) reportConnection(id uuid.UUID, connectionType proto.Connection_T
		ip = host
	}

	// If the IP is "localhost" (which it can be in some cases), set it to
	// 127.0.0.1 instead.
	// Related to https://github.com/coder/coder/issues/20194
	if ip == "localhost" {
		ip = "127.0.0.1"
	}

	a.reportConnectionsMu.Lock()
	defer a.reportConnectionsMu.Unlock()
@@ -918,10 +919,11 @@ func (a *agent) run() (retErr error) {
	// This allows the agent to refresh its token if necessary.
	// For instance identity this is required, since the instance
	// may not have re-provisioned, but a new agent ID was created.
	err := a.client.RefreshToken(a.hardCtx)
	sessionToken, err := a.exchangeToken(a.hardCtx)
	if err != nil {
		return xerrors.Errorf("refresh token: %w", err)
		return xerrors.Errorf("exchange token: %w", err)
	}
	a.sessionToken.Store(&sessionToken)

	// ConnectRPC returns the dRPC connection we use for the Agent and Tailnet v2+ APIs
	aAPI, tAPI, err := a.client.ConnectRPC26(a.hardCtx)
@@ -1161,7 +1163,7 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
		scripts = manifest.Scripts
		devcontainerScripts map[uuid.UUID]codersdk.WorkspaceAgentScript
	)
	if a.devcontainers {
	if a.containerAPI != nil {
		// Init the container API with the manifest and client so that
		// we can start accepting requests. The final start of the API
		// happens after the startup scripts have been executed to
@@ -1169,7 +1171,7 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
		// return existing devcontainers but actual container detection
		// and creation will be deferred.
		a.containerAPI.Init(
			agentcontainers.WithManifestInfo(manifest.OwnerName, manifest.WorkspaceName, manifest.AgentName, manifest.Directory),
			agentcontainers.WithManifestInfo(manifest.OwnerName, manifest.WorkspaceName, manifest.AgentName),
			agentcontainers.WithDevcontainers(manifest.Devcontainers, manifest.Scripts),
			agentcontainers.WithSubAgentClient(agentcontainers.NewSubAgentClientFromAPI(a.logger, aAPI)),
		)
@@ -1196,7 +1198,7 @@ func (a *agent) handleManifest(manifestOK *checkpoint) func(ctx context.Context,
	// autostarted devcontainer will be included in this time.
	err := a.scriptRunner.Execute(a.gracefulCtx, agentscripts.ExecuteStartScripts)
|
||||
|
||||
if a.devcontainers {
|
||||
if a.containerAPI != nil {
|
||||
// Start the container API after the startup scripts have
|
||||
// been executed to ensure that the required tools can be
|
||||
// installed.
|
||||
@@ -1360,7 +1362,7 @@ func (a *agent) updateCommandEnv(current []string) (updated []string, err error)
|
||||
"CODER_WORKSPACE_OWNER_NAME": manifest.OwnerName,
|
||||
|
||||
// Specific Coder subcommands require the agent token exposed!
|
||||
"CODER_AGENT_TOKEN": a.client.GetSessionToken(),
|
||||
"CODER_AGENT_TOKEN": *a.sessionToken.Load(),
|
||||
|
||||
// Git on Windows resolves with UNIX-style paths.
|
||||
// If using backslashes, it's unable to find the executable.
|
||||
@@ -1927,8 +1929,10 @@ func (a *agent) Close() error {
|
||||
a.logger.Error(a.hardCtx, "script runner close", slog.Error(err))
|
||||
}
|
||||
|
||||
if err := a.containerAPI.Close(); err != nil {
|
||||
a.logger.Error(a.hardCtx, "container API close", slog.Error(err))
|
||||
if a.containerAPI != nil {
|
||||
if err := a.containerAPI.Close(); err != nil {
|
||||
a.logger.Error(a.hardCtx, "container API close", slog.Error(err))
|
||||
}
|
||||
}
|
||||
|
||||
// Wait for the graceful shutdown to complete, but don't wait forever so
|
||||
|
||||
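The `run()` hunk above swaps `a.client.RefreshToken` for an `exchangeToken` callback whose result is stored with `a.sessionToken.Store(&sessionToken)` and later read as `*a.sessionToken.Load()` when building the command environment. A minimal sketch of that pattern, storing the token in an `atomic.Pointer` so the RPC loop can refresh it while other goroutines read it without a mutex; `agentState` and `refresh` are illustrative names, not the actual coder types:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// agentState holds the most recently exchanged session token. An
// atomic.Pointer lets one goroutine store a fresh token while readers
// dereference the latest value lock-free.
type agentState struct {
	sessionToken atomic.Pointer[string]
}

// refresh calls the supplied exchange callback (standing in for the
// agent's ExchangeToken option) and publishes the result.
func (a *agentState) refresh(exchange func() (string, error)) error {
	tok, err := exchange()
	if err != nil {
		return fmt.Errorf("exchange token: %w", err)
	}
	a.sessionToken.Store(&tok)
	return nil
}

func main() {
	var a agentState
	_ = a.refresh(func() (string, error) { return "tok-1", nil })
	// Readers dereference the latest stored pointer, mirroring
	// *a.sessionToken.Load() in the diff.
	fmt.Println(*a.sessionToken.Load()) // prints "tok-1"
}
```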
+96 −313
```diff
@@ -22,6 +22,7 @@ import (
 	"slices"
 	"strconv"
 	"strings"
+	"sync/atomic"
 	"testing"
 	"time"

@@ -455,6 +456,8 @@ func TestAgent_GitSSH(t *testing.T) {

 func TestAgent_SessionTTYShell(t *testing.T) {
 	t.Parallel()
+	ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
+	t.Cleanup(cancel)
 	if runtime.GOOS == "windows" {
 		// This might be our implementation, or ConPTY itself.
 		// It's difficult to find extensive tests for it, so
@@ -465,7 +468,6 @@ func TestAgent_SessionTTYShell(t *testing.T) {
 	for _, port := range sshPorts {
 		t.Run(fmt.Sprintf("(%d)", port), func(t *testing.T) {
 			t.Parallel()
-			ctx := testutil.Context(t, testutil.WaitShort)

 			session := setupSSHSessionOnPort(t, agentsdk.Manifest{}, codersdk.ServiceBannerConfig{}, nil, port)
 			command := "sh"
@@ -1807,12 +1809,11 @@ func TestAgent_ReconnectingPTY(t *testing.T) {

	//nolint:dogsled
 	conn, agentClient, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0)
-	idConnectionReport := uuid.New()
+	id := uuid.New()

 	// Test that the connection is reported. This must be tested in the
 	// first connection because we care about verifying all of these.
-	netConn0, err := conn.ReconnectingPTY(ctx, idConnectionReport, 80, 80, "bash --norc")
+	netConn0, err := conn.ReconnectingPTY(ctx, id, 80, 80, "bash --norc")
 	require.NoError(t, err)
 	_ = netConn0.Close()
 	assertConnectionReport(t, agentClient, proto.Connection_RECONNECTING_PTY, 0, "")
@@ -2028,8 +2029,7 @@ func runSubAgentMain() int {
 	ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
 	defer cancel()
 	req = req.WithContext(ctx)
-	client := &http.Client{}
-	resp, err := client.Do(req)
+	resp, err := http.DefaultClient.Do(req)
 	if err != nil {
 		_, _ = fmt.Fprintf(os.Stderr, "agent connection failed: %v\n", err)
 		return 11
@@ -2130,7 +2130,7 @@ func TestAgent_DevcontainerAutostart(t *testing.T) {
 		"name": "mywork",
 		"image": "ubuntu:latest",
 		"cmd": ["sleep", "infinity"],
-		"runArgs": ["--network=host", "--label=`+agentcontainers.DevcontainerIsTestRunLabel+`=true"]
+		"runArgs": ["--network=host"]
 	}`), 0o600)
 	require.NoError(t, err, "write devcontainer.json")

@@ -2167,7 +2167,6 @@ func TestAgent_DevcontainerAutostart(t *testing.T) {
 			// Only match this specific dev container.
 			agentcontainers.WithClock(mClock),
 			agentcontainers.WithContainerLabelIncludeFilter("devcontainer.local_folder", tempWorkspaceFolder),
-			agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerIsTestRunLabel, "true"),
 			agentcontainers.WithSubAgentURL(srv.URL),
 			// The agent will copy "itself", but in the case of this test, the
 			// agent is actually this test binary. So we'll tell the test binary
@@ -2289,8 +2288,7 @@ func TestAgent_DevcontainerRecreate(t *testing.T) {
 	err = os.WriteFile(devcontainerFile, []byte(`{
 		"name": "mywork",
 		"image": "busybox:latest",
-		"cmd": ["sleep", "infinity"],
-		"runArgs": ["--label=`+agentcontainers.DevcontainerIsTestRunLabel+`=true"]
+		"cmd": ["sleep", "infinity"]
 	}`), 0o600)
 	require.NoError(t, err, "write devcontainer.json")

@@ -2317,7 +2315,6 @@ func TestAgent_DevcontainerRecreate(t *testing.T) {
 		o.Devcontainers = true
 		o.DevcontainerAPIOptions = append(o.DevcontainerAPIOptions,
 			agentcontainers.WithContainerLabelIncludeFilter("devcontainer.local_folder", workspaceFolder),
-			agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerIsTestRunLabel, "true"),
 		)
 	})

@@ -2372,7 +2369,7 @@ func TestAgent_DevcontainerRecreate(t *testing.T) {
 	// devcontainer, we do it in a goroutine so we can process logs
 	// concurrently.
 	go func(container codersdk.WorkspaceAgentContainer) {
-		_, err := conn.RecreateDevcontainer(ctx, devcontainerID.String())
+		_, err := conn.RecreateDevcontainer(ctx, container.ID)
 		assert.NoError(t, err, "recreate devcontainer should succeed")
 	}(container)

@@ -2441,8 +2438,7 @@ func TestAgent_DevcontainersDisabledForSubAgent(t *testing.T) {

 	// Setup the agent with devcontainers enabled initially.
 	//nolint:dogsled
-	conn, _, _, _, _ := setupAgent(t, manifest, 0, func(_ *agenttest.Client, o *agent.Options) {
-		o.Devcontainers = true
+	conn, _, _, _, _ := setupAgent(t, manifest, 0, func(*agenttest.Client, *agent.Options) {
 	})

 	// Query the containers API endpoint. This should fail because
@@ -2454,214 +2450,8 @@ func TestAgent_DevcontainersDisabledForSubAgent(t *testing.T) {
 	require.Error(t, err)

 	// Verify the error message contains the expected text.
-	require.Contains(t, err.Error(), "Dev Container feature not supported.")
-	require.Contains(t, err.Error(), "Dev Container integration inside other Dev Containers is explicitly not supported.")
-}
-
-// TestAgent_DevcontainerPrebuildClaim tests that we correctly handle
-// the claiming process for running devcontainers.
-//
-// You can run it manually as follows:
-//
-//	CODER_TEST_USE_DOCKER=1 go test -count=1 ./agent -run TestAgent_DevcontainerPrebuildClaim
-//
-//nolint:paralleltest // This test sets an environment variable.
-func TestAgent_DevcontainerPrebuildClaim(t *testing.T) {
-	if os.Getenv("CODER_TEST_USE_DOCKER") != "1" {
-		t.Skip("Set CODER_TEST_USE_DOCKER=1 to run this test")
-	}
-	if _, err := exec.LookPath("devcontainer"); err != nil {
-		t.Skip("This test requires the devcontainer CLI: npm install -g @devcontainers/cli")
-	}
-
-	pool, err := dockertest.NewPool("")
-	require.NoError(t, err, "Could not connect to docker")
-
-	var (
-		ctx = testutil.Context(t, testutil.WaitShort)
-
-		devcontainerID          = uuid.New()
-		devcontainerLogSourceID = uuid.New()
-
-		workspaceFolder    = filepath.Join(t.TempDir(), "project")
-		devcontainerPath   = filepath.Join(workspaceFolder, ".devcontainer")
-		devcontainerConfig = filepath.Join(devcontainerPath, "devcontainer.json")
-	)
-
-	// Given: A devcontainer project.
-	t.Logf("Workspace folder: %s", workspaceFolder)
-
-	err = os.MkdirAll(devcontainerPath, 0o755)
-	require.NoError(t, err, "create dev container directory")
-
-	// Given: This devcontainer project specifies an app that uses the owner name and workspace name.
-	err = os.WriteFile(devcontainerConfig, []byte(`{
-		"name": "project",
-		"image": "busybox:latest",
-		"cmd": ["sleep", "infinity"],
-		"runArgs": ["--label=`+agentcontainers.DevcontainerIsTestRunLabel+`=true"],
-		"customizations": {
-			"coder": {
-				"apps": [{
-					"slug": "zed",
-					"url": "zed://ssh/${localEnv:CODER_WORKSPACE_AGENT_NAME}.${localEnv:CODER_WORKSPACE_NAME}.${localEnv:CODER_WORKSPACE_OWNER_NAME}.coder${containerWorkspaceFolder}"
-				}]
-			}
-		}
-	}`), 0o600)
-	require.NoError(t, err, "write devcontainer config")
-
-	// Given: A manifest with a prebuild username and workspace name.
-	manifest := agentsdk.Manifest{
-		OwnerName:     "prebuilds",
-		WorkspaceName: "prebuilds-xyz-123",
-
-		Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
-			{ID: devcontainerID, Name: "test", WorkspaceFolder: workspaceFolder},
-		},
-		Scripts: []codersdk.WorkspaceAgentScript{
-			{ID: devcontainerID, LogSourceID: devcontainerLogSourceID},
-		},
-	}
-
-	// When: We create an agent with devcontainers enabled.
-	//nolint:dogsled
-	conn, client, _, _, _ := setupAgent(t, manifest, 0, func(_ *agenttest.Client, o *agent.Options) {
-		o.Devcontainers = true
-		o.DevcontainerAPIOptions = append(o.DevcontainerAPIOptions,
-			agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerLocalFolderLabel, workspaceFolder),
-			agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerIsTestRunLabel, "true"),
-		)
-	})
-
-	testutil.Eventually(ctx, t, func(ctx context.Context) bool {
-		return slices.Contains(client.GetLifecycleStates(), codersdk.WorkspaceAgentLifecycleReady)
-	}, testutil.IntervalMedium, "agent not ready")
-
-	var dcPrebuild codersdk.WorkspaceAgentDevcontainer
-	testutil.Eventually(ctx, t, func(ctx context.Context) bool {
-		resp, err := conn.ListContainers(ctx)
-		require.NoError(t, err)
-
-		for _, dc := range resp.Devcontainers {
-			if dc.Container == nil {
-				continue
-			}
-
-			v, ok := dc.Container.Labels[agentcontainers.DevcontainerLocalFolderLabel]
-			if ok && v == workspaceFolder {
-				dcPrebuild = dc
-				return true
-			}
-		}
-
-		return false
-	}, testutil.IntervalMedium, "devcontainer not found")
-	defer func() {
-		pool.Client.RemoveContainer(docker.RemoveContainerOptions{
-			ID:            dcPrebuild.Container.ID,
-			RemoveVolumes: true,
-			Force:         true,
-		})
-	}()
-
-	// Then: We expect a sub agent to have been created.
-	subAgents := client.GetSubAgents()
-	require.Len(t, subAgents, 1)
-
-	subAgent := subAgents[0]
-	subAgentID, err := uuid.FromBytes(subAgent.GetId())
-	require.NoError(t, err)
-
-	// And: We expect there to be 1 app.
-	subAgentApps, err := client.GetSubAgentApps(subAgentID)
-	require.NoError(t, err)
-	require.Len(t, subAgentApps, 1)
-
-	// And: This app should contain the prebuild workspace name and owner name.
-	subAgentApp := subAgentApps[0]
-	require.Equal(t, "zed://ssh/project.prebuilds-xyz-123.prebuilds.coder/workspaces/project", subAgentApp.GetUrl())
-
-	// Given: We close the client and connection
-	client.Close()
-	conn.Close()
-
-	// Given: A new manifest with a regular user owner name and workspace name.
-	manifest = agentsdk.Manifest{
-		OwnerName:     "user",
-		WorkspaceName: "user-workspace",
-
-		Devcontainers: []codersdk.WorkspaceAgentDevcontainer{
-			{ID: devcontainerID, Name: "test", WorkspaceFolder: workspaceFolder},
-		},
-		Scripts: []codersdk.WorkspaceAgentScript{
-			{ID: devcontainerID, LogSourceID: devcontainerLogSourceID},
-		},
-	}
-
-	// When: We create an agent with devcontainers enabled.
-	//nolint:dogsled
-	conn, client, _, _, _ = setupAgent(t, manifest, 0, func(_ *agenttest.Client, o *agent.Options) {
-		o.Devcontainers = true
-		o.DevcontainerAPIOptions = append(o.DevcontainerAPIOptions,
-			agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerLocalFolderLabel, workspaceFolder),
-			agentcontainers.WithContainerLabelIncludeFilter(agentcontainers.DevcontainerIsTestRunLabel, "true"),
-		)
-	})
-
-	testutil.Eventually(ctx, t, func(ctx context.Context) bool {
-		return slices.Contains(client.GetLifecycleStates(), codersdk.WorkspaceAgentLifecycleReady)
-	}, testutil.IntervalMedium, "agent not ready")
-
-	var dcClaimed codersdk.WorkspaceAgentDevcontainer
-	testutil.Eventually(ctx, t, func(ctx context.Context) bool {
-		resp, err := conn.ListContainers(ctx)
-		require.NoError(t, err)
-
-		for _, dc := range resp.Devcontainers {
-			if dc.Container == nil {
-				continue
-			}
-
-			v, ok := dc.Container.Labels[agentcontainers.DevcontainerLocalFolderLabel]
-			if ok && v == workspaceFolder {
-				dcClaimed = dc
-				return true
-			}
-		}
-
-		return false
-	}, testutil.IntervalMedium, "devcontainer not found")
-	defer func() {
-		if dcClaimed.Container.ID != dcPrebuild.Container.ID {
-			pool.Client.RemoveContainer(docker.RemoveContainerOptions{
-				ID:            dcClaimed.Container.ID,
-				RemoveVolumes: true,
-				Force:         true,
-			})
-		}
-	}()
-
-	// Then: We expect the claimed devcontainer and prebuild devcontainer
-	// to be using the same underlying container.
-	require.Equal(t, dcPrebuild.Container.ID, dcClaimed.Container.ID)
-
-	// And: We expect there to be a sub agent created.
-	subAgents = client.GetSubAgents()
-	require.Len(t, subAgents, 1)
-
-	subAgent = subAgents[0]
-	subAgentID, err = uuid.FromBytes(subAgent.GetId())
-	require.NoError(t, err)
-
-	// And: We expect there to be an app.
-	subAgentApps, err = client.GetSubAgentApps(subAgentID)
-	require.NoError(t, err)
-	require.Len(t, subAgentApps, 1)
-
-	// And: We expect this app to have the user's owner name and workspace name.
-	subAgentApp = subAgentApps[0]
-	require.Equal(t, "zed://ssh/project.user-workspace.user.coder/workspaces/project", subAgentApp.GetUrl())
+	require.Contains(t, err.Error(), "The agent dev containers feature is experimental and not enabled by default.")
+	require.Contains(t, err.Error(), "To enable this feature, set CODER_AGENT_DEVCONTAINERS_ENABLE=true in your template.")
 }

 func TestAgent_Dial(t *testing.T) {
@@ -2669,11 +2459,11 @@ func TestAgent_Dial(t *testing.T) {

 	cases := []struct {
 		name  string
-		setup func(t testing.TB) net.Listener
+		setup func(t *testing.T) net.Listener
 	}{
 		{
 			name: "TCP",
-			setup: func(t testing.TB) net.Listener {
+			setup: func(t *testing.T) net.Listener {
 				l, err := net.Listen("tcp", "127.0.0.1:0")
 				require.NoError(t, err, "create TCP listener")
 				return l
@@ -2681,7 +2471,7 @@ func TestAgent_Dial(t *testing.T) {
 		},
 		{
 			name: "UDP",
-			setup: func(t testing.TB) net.Listener {
+			setup: func(t *testing.T) net.Listener {
 				addr := net.UDPAddr{
 					IP:   net.ParseIP("127.0.0.1"),
 					Port: 0,
@@ -2699,69 +2489,57 @@ func TestAgent_Dial(t *testing.T) {

 			// The purpose of this test is to ensure that a client can dial a
 			// listener in the workspace over tailnet.
-			//
-			// The OS sometimes drops packets if the system can't keep up with
-			// them. For TCP packets, it's typically fine due to
-			// retransmissions, but for UDP packets, it can fail this test.
-			//
-			// The OS gets involved for the Wireguard traffic (either via DERP
-			// or direct UDP), and also for the traffic between the agent and
-			// the listener in the "workspace".
-			//
-			// To avoid this, we'll retry this test up to 3 times.
-			//nolint:gocritic // This test is flaky due to uncontrollable OS packet drops under heavy load.
-			testutil.RunRetry(t, 3, func(t testing.TB) {
-				ctx := testutil.Context(t, testutil.WaitLong)
-				l := c.setup(t)
-				done := make(chan struct{})
-				defer func() {
-					l.Close()
-					<-done
-				}()
+			l := c.setup(t)
+			done := make(chan struct{})
+			defer func() {
+				l.Close()
+				<-done
+			}()
+			ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
+			defer cancel()

-				go func() {
-					defer close(done)
-					for range 2 {
-						c, err := l.Accept()
-						if assert.NoError(t, err, "accept connection") {
-							testAccept(ctx, t, c)
-							_ = c.Close()
-						}
+			go func() {
+				defer close(done)
+				for range 2 {
+					c, err := l.Accept()
+					if assert.NoError(t, err, "accept connection") {
+						testAccept(ctx, t, c)
+						_ = c.Close()
 					}
-				}()
-
-				agentID := uuid.UUID{0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8}
-				//nolint:dogsled
-				agentConn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{
-					AgentID: agentID,
-				}, 0)
-				require.True(t, agentConn.AwaitReachable(ctx))
-				conn, err := agentConn.DialContext(ctx, l.Addr().Network(), l.Addr().String())
-				require.NoError(t, err)
-				testDial(ctx, t, conn)
-				err = conn.Close()
-				require.NoError(t, err)
-
-				// also connect via the CoderServicePrefix, to test that we can reach the agent on this
-				// IP. This will be required for CoderVPN.
-				_, rawPort, _ := net.SplitHostPort(l.Addr().String())
-				port, _ := strconv.ParseUint(rawPort, 10, 16)
-				ipp := netip.AddrPortFrom(tailnet.CoderServicePrefix.AddrFromUUID(agentID), uint16(port))
-
-				switch l.Addr().Network() {
-				case "tcp":
-					conn, err = agentConn.TailnetConn().DialContextTCP(ctx, ipp)
-				case "udp":
-					conn, err = agentConn.TailnetConn().DialContextUDP(ctx, ipp)
-				default:
-					t.Fatalf("unknown network: %s", l.Addr().Network())
-				}
-				require.NoError(t, err)
-				testDial(ctx, t, conn)
-				err = conn.Close()
-				require.NoError(t, err)
-			})
+			}()
+
+			agentID := uuid.UUID{0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8}
+			//nolint:dogsled
+			agentConn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{
+				AgentID: agentID,
+			}, 0)
+			require.True(t, agentConn.AwaitReachable(ctx))
+			conn, err := agentConn.DialContext(ctx, l.Addr().Network(), l.Addr().String())
+			require.NoError(t, err)
+			testDial(ctx, t, conn)
+			err = conn.Close()
+			require.NoError(t, err)
+
+			// also connect via the CoderServicePrefix, to test that we can reach the agent on this
+			// IP. This will be required for CoderVPN.
+			_, rawPort, _ := net.SplitHostPort(l.Addr().String())
+			port, _ := strconv.ParseUint(rawPort, 10, 16)
+			ipp := netip.AddrPortFrom(tailnet.CoderServicePrefix.AddrFromUUID(agentID), uint16(port))
+
+			switch l.Addr().Network() {
+			case "tcp":
+				conn, err = agentConn.Conn.DialContextTCP(ctx, ipp)
+			case "udp":
+				conn, err = agentConn.Conn.DialContextUDP(ctx, ipp)
+			default:
+				t.Fatalf("unknown network: %s", l.Addr().Network())
+			}
+			require.NoError(t, err)
+			testDial(ctx, t, conn)
+			err = conn.Close()
+			require.NoError(t, err)
 		})
 	}
 }
@@ -2812,7 +2590,7 @@ func TestAgent_UpdatedDERP(t *testing.T) {
 	})

 	// Setup a client connection.
-	newClientConn := func(derpMap *tailcfg.DERPMap, name string) workspacesdk.AgentConn {
+	newClientConn := func(derpMap *tailcfg.DERPMap, name string) *workspacesdk.AgentConn {
 		conn, err := tailnet.NewConn(&tailnet.Options{
 			Addresses: []netip.Prefix{tailnet.TailscaleServicePrefix.RandomPrefix()},
 			DERPMap:   derpMap,
@@ -2892,13 +2670,13 @@ func TestAgent_UpdatedDERP(t *testing.T) {

 	// Connect from a second client and make sure it uses the new DERP map.
 	conn2 := newClientConn(newDerpMap, "client2")
-	require.Equal(t, []int{2}, conn2.TailnetConn().DERPMap().RegionIDs())
+	require.Equal(t, []int{2}, conn2.DERPMap().RegionIDs())
 	t.Log("conn2 got the new DERPMap")

 	// If the first client gets a DERP map update, it should be able to
 	// reconnect just fine.
-	conn1.TailnetConn().SetDERPMap(newDerpMap)
-	require.Equal(t, []int{2}, conn1.TailnetConn().DERPMap().RegionIDs())
+	conn1.SetDERPMap(newDerpMap)
+	require.Equal(t, []int{2}, conn1.DERPMap().RegionIDs())
 	t.Log("set the new DERPMap on conn1")
 	ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
 	defer cancel()
@@ -2927,11 +2705,11 @@ func TestAgent_Speedtest(t *testing.T) {

 func TestAgent_Reconnect(t *testing.T) {
 	t.Parallel()
-	ctx := testutil.Context(t, testutil.WaitShort)
 	logger := testutil.Logger(t)
 	// After the agent is disconnected from a coordinator, it's supposed
 	// to reconnect!
-	fCoordinator := tailnettest.NewFakeCoordinator()
+	coordinator := tailnet.NewCoordinator(logger)
+	defer coordinator.Close()

 	agentID := uuid.New()
 	statsCh := make(chan *proto.Stats, 50)
@@ -2943,24 +2721,27 @@ func TestAgent_Reconnect(t *testing.T) {
 			DERPMap: derpMap,
 		},
 		statsCh,
-		fCoordinator,
+		coordinator,
 	)
 	defer client.Close()

+	initialized := atomic.Int32{}
 	closer := agent.New(agent.Options{
+		ExchangeToken: func(ctx context.Context) (string, error) {
+			initialized.Add(1)
+			return "", nil
+		},
 		Client: client,
 		Logger: logger.Named("agent"),
 	})
 	defer closer.Close()

-	call1 := testutil.RequireReceive(ctx, t, fCoordinator.CoordinateCalls)
-	require.Equal(t, client.GetNumRefreshTokenCalls(), 1)
-	close(call1.Resps) // hang up
-	// expect reconnect
-	testutil.RequireReceive(ctx, t, fCoordinator.CoordinateCalls)
-	// Check that the agent refreshes the token when it reconnects.
-	require.Equal(t, client.GetNumRefreshTokenCalls(), 2)
-	closer.Close()
+	require.Eventually(t, func() bool {
+		return coordinator.Node(agentID) != nil
+	}, testutil.WaitShort, testutil.IntervalFast)
+	client.LastWorkspaceAgent()
+	require.Eventually(t, func() bool {
+		return initialized.Load() == 2
+	}, testutil.WaitShort, testutil.IntervalFast)
 }

 func TestAgent_WriteVSCodeConfigs(t *testing.T) {
@@ -2982,6 +2763,9 @@ func TestAgent_WriteVSCodeConfigs(t *testing.T) {
 	defer client.Close()
 	filesystem := afero.NewMemMapFs()
 	closer := agent.New(agent.Options{
+		ExchangeToken: func(ctx context.Context) (string, error) {
+			return "", nil
+		},
 		Client:     client,
 		Logger:     logger.Named("agent"),
 		Filesystem: filesystem,
@@ -3010,6 +2794,9 @@ func TestAgent_DebugServer(t *testing.T) {
 	conn, _, _, _, agnt := setupAgent(t, agentsdk.Manifest{
 		DERPMap: derpMap,
 	}, 0, func(c *agenttest.Client, o *agent.Options) {
+		o.ExchangeToken = func(context.Context) (string, error) {
+			return "token", nil
+		}
 		o.LogDir = logDir
 	})

@@ -3255,8 +3042,8 @@ func setupSSHSessionOnPort(
 	return session
 }

-func setupAgent(t testing.TB, metadata agentsdk.Manifest, ptyTimeout time.Duration, opts ...func(*agenttest.Client, *agent.Options)) (
-	workspacesdk.AgentConn,
+func setupAgent(t *testing.T, metadata agentsdk.Manifest, ptyTimeout time.Duration, opts ...func(*agenttest.Client, *agent.Options)) (
+	*workspacesdk.AgentConn,
 	*agenttest.Client,
 	<-chan *proto.Stats,
 	afero.Fs,
@@ -3353,7 +3140,7 @@ func setupAgent(t testing.TB, metadata agentsdk.Manifest, ptyTimeout time.Durati

 var dialTestPayload = []byte("dean-was-here123")

-func testDial(ctx context.Context, t testing.TB, c net.Conn) {
+func testDial(ctx context.Context, t *testing.T, c net.Conn) {
 	t.Helper()

 	if deadline, ok := ctx.Deadline(); ok {
@@ -3369,7 +3156,7 @@ func testDial(ctx context.Context, t testing.TB, c net.Conn) {
 	assertReadPayload(t, c, dialTestPayload)
 }

-func testAccept(ctx context.Context, t testing.TB, c net.Conn) {
+func testAccept(ctx context.Context, t *testing.T, c net.Conn) {
 	t.Helper()
 	defer c.Close()

@@ -3386,7 +3173,7 @@ func testAccept(ctx context.Context, t testing.TB, c net.Conn) {
 	assertWritePayload(t, c, dialTestPayload)
 }

-func assertReadPayload(t testing.TB, r io.Reader, payload []byte) {
+func assertReadPayload(t *testing.T, r io.Reader, payload []byte) {
 	t.Helper()
 	b := make([]byte, len(payload)+16)
 	n, err := r.Read(b)
@@ -3395,11 +3182,11 @@ func assertReadPayload(t testing.TB, r io.Reader, payload []byte) {
 	assert.Equal(t, payload, b[:n])
 }

-func assertWritePayload(t testing.TB, w io.Writer, payload []byte) {
+func assertWritePayload(t *testing.T, w io.Writer, payload []byte) {
 	t.Helper()
 	n, err := w.Write(payload)
 	assert.NoError(t, err, "write payload")
-	assert.Equal(t, len(payload), n, "written payload length does not match")
+	assert.Equal(t, len(payload), n, "payload length does not match")
 }

 func testSessionOutput(t *testing.T, session *ssh.Session, expected, unexpected []string, expectedRe *regexp.Regexp) {
@@ -3462,11 +3249,7 @@ func TestAgent_Metrics_SSH(t *testing.T) {
 	registry := prometheus.NewRegistry()

 	//nolint:dogsled
-	conn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{
-		// Make sure we always get a DERP connection for
-		// currently_reachable_peers.
-		DisableDirectConnections: true,
-	}, 0, func(_ *agenttest.Client, o *agent.Options) {
+	conn, _, _, _, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, o *agent.Options) {
 		o.PrometheusRegistry = registry
 	})

@@ -3520,7 +3303,7 @@ func TestAgent_Metrics_SSH(t *testing.T) {
 		{
 			Name:  "coderd_agentstats_currently_reachable_peers",
 			Type:  proto.Stats_Metric_GAUGE,
-			Value: 1,
+			Value: 0,
 			Labels: []*proto.Stats_Metric_Label{
 				{
 					Name: "connection_type",
@@ -3531,7 +3314,7 @@ func TestAgent_Metrics_SSH(t *testing.T) {
 		{
 			Name:  "coderd_agentstats_currently_reachable_peers",
 			Type:  proto.Stats_Metric_GAUGE,
-			Value: 0,
+			Value: 1,
 			Labels: []*proto.Stats_Metric_Label{
 				{
 					Name: "connection_type",
```
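The removed retry wrapper above (`testutil.RunRetry(t, 3, func(t testing.TB) {…})`) reruns a flaky test body until one attempt passes, which is also why the dial helpers took `testing.TB` rather than `*testing.T`: the body receives a wrapper that records failures instead of the real test. A rough sketch of how such a wrapper can work; `failRecorder` and this `RunRetry` are hypothetical stand-ins, not coder's actual `testutil` API:

```go
package main

import "fmt"

// failRecorder captures failures the way testing.TB's Errorf does,
// without aborting the whole test on the first flaky attempt.
type failRecorder struct{ failed bool }

func (r *failRecorder) Errorf(format string, args ...any) {
	r.failed = true
	fmt.Printf(format+"\n", args...)
}

// RunRetry reruns fn with a fresh recorder up to attempts times and
// reports whether any attempt completed without a recorded failure.
func RunRetry(attempts int, fn func(r *failRecorder)) bool {
	for i := 0; i < attempts; i++ {
		r := &failRecorder{}
		fn(r)
		if !r.failed {
			return true
		}
	}
	return false
}

func main() {
	tries := 0
	ok := RunRetry(3, func(r *failRecorder) {
		tries++
		if tries < 3 {
			r.Errorf("flaky attempt %d", tries)
		}
	})
	fmt.Println(ok, tries) // passes on the third attempt
}
```

Accepting an interface in the body lets the same helpers run under a plain `*testing.T` once the retry wrapper is removed, which is what this revert does by switching the signatures back.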
+16 −351
@@ -2,11 +2,8 @@ package agentcontainers
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/fs"
|
||||
"maps"
|
||||
"net/http"
|
||||
"os"
|
||||
"path"
|
||||
@@ -21,13 +18,10 @@ import (
|
||||
|
||||
"github.com/fsnotify/fsnotify"
|
||||
"github.com/go-chi/chi/v5"
|
||||
"github.com/go-git/go-git/v5/plumbing/format/gitignore"
|
||||
"github.com/google/uuid"
|
||||
"github.com/spf13/afero"
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"cdr.dev/slog"
|
||||
"github.com/coder/coder/v2/agent/agentcontainers/ignore"
"github.com/coder/coder/v2/agent/agentcontainers/watcher"
"github.com/coder/coder/v2/agent/agentexec"
"github.com/coder/coder/v2/agent/usershell"
@@ -36,7 +30,6 @@ import (
"github.com/coder/coder/v2/codersdk/agentsdk"
"github.com/coder/coder/v2/provisioner"
"github.com/coder/quartz"
"github.com/coder/websocket"
)

const (
@@ -60,12 +53,10 @@ type API struct {
cancel context.CancelFunc
watcherDone chan struct{}
updaterDone chan struct{}
discoverDone chan struct{}
updateTrigger chan chan error // Channel to trigger manual refresh.
updateInterval time.Duration // Interval for periodic container updates.
logger slog.Logger
watcher watcher.Watcher
fs afero.Fs
execer agentexec.Execer
commandEnv CommandEnv
ccli ContainerCLI
@@ -77,17 +68,12 @@ type API struct {
subAgentURL string
subAgentEnv []string

projectDiscovery bool // If we should perform project discovery or not.
discoveryAutostart bool // If we should autostart discovered projects.

ownerName string
workspaceName string
parentAgent string
agentDirectory string
ownerName string
workspaceName string
parentAgent string

mu sync.RWMutex // Protects the following fields.
initDone chan struct{} // Closed by Init.
updateChans []chan struct{}
closed bool
containers codersdk.WorkspaceAgentListContainersResponse // Output from the last list operation.
containersErr error // Error from the last list operation.
@@ -144,9 +130,7 @@ func WithCommandEnv(ce CommandEnv) Option {
strings.HasPrefix(s, "CODER_WORKSPACE_AGENT_URL=") ||
strings.HasPrefix(s, "CODER_AGENT_TOKEN=") ||
strings.HasPrefix(s, "CODER_AGENT_AUTH=") ||
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_ENABLE=") ||
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE=") ||
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE=")
strings.HasPrefix(s, "CODER_AGENT_DEVCONTAINERS_ENABLE=")
})
return shell, dir, env, nil
}
@@ -163,8 +147,8 @@ func WithContainerCLI(ccli ContainerCLI) Option {

// WithContainerLabelIncludeFilter sets a label filter for containers.
// This option can be given multiple times to filter by multiple labels.
// The behavior is such that only containers matching all of the provided
// labels will be included.
// The behavior is such that only containers matching one or more of the
// provided labels will be included.
func WithContainerLabelIncludeFilter(label, value string) Option {
return func(api *API) {
api.containerLabelIncludeFilter[label] = value
@@ -204,12 +188,11 @@ func WithSubAgentEnv(env ...string) Option {

// WithManifestInfo sets the owner name, and workspace name
// for the sub-agent.
func WithManifestInfo(owner, workspace, parentAgent, agentDirectory string) Option {
func WithManifestInfo(owner, workspace, parentAgent string) Option {
return func(api *API) {
api.ownerName = owner
api.workspaceName = workspace
api.parentAgent = parentAgent
api.agentDirectory = agentDirectory
}
}

@@ -274,29 +257,6 @@ func WithWatcher(w watcher.Watcher) Option {
}
}

// WithFileSystem sets the file system used for discovering projects.
func WithFileSystem(fileSystem afero.Fs) Option {
return func(api *API) {
api.fs = fileSystem
}
}

// WithProjectDiscovery sets if the API should attempt to discover
// projects on the filesystem.
func WithProjectDiscovery(projectDiscovery bool) Option {
return func(api *API) {
api.projectDiscovery = projectDiscovery
}
}

// WithDiscoveryAutostart sets if the API should attempt to autostart
// projects that have been discovered
func WithDiscoveryAutostart(discoveryAutostart bool) Option {
return func(api *API) {
api.discoveryAutostart = discoveryAutostart
}
}

// ScriptLogger is an interface for sending devcontainer logs to the
// controlplane.
type ScriptLogger interface {
@@ -367,9 +327,6 @@ func NewAPI(logger slog.Logger, options ...Option) *API {
api.watcher = watcher.NewNoop()
}
}
if api.fs == nil {
api.fs = afero.NewOsFs()
}
if api.subAgentClient.Load() == nil {
var c SubAgentClient = noopSubAgentClient{}
api.subAgentClient.Store(&c)
@@ -411,12 +368,6 @@ func (api *API) Start() {
return
}

if api.projectDiscovery && api.agentDirectory != "" {
api.discoverDone = make(chan struct{})

go api.discover()
}

api.watcherDone = make(chan struct{})
api.updaterDone = make(chan struct{})

@@ -424,162 +375,6 @@ func (api *API) Start() {
go api.updaterLoop()
}

func (api *API) discover() {
defer close(api.discoverDone)
defer api.logger.Debug(api.ctx, "project discovery finished")
api.logger.Debug(api.ctx, "project discovery started")

if err := api.discoverDevcontainerProjects(); err != nil {
api.logger.Error(api.ctx, "discovering dev container projects", slog.Error(err))
}

if err := api.RefreshContainers(api.ctx); err != nil {
api.logger.Error(api.ctx, "refreshing containers after discovery", slog.Error(err))
}
}

func (api *API) discoverDevcontainerProjects() error {
isGitProject, err := afero.DirExists(api.fs, filepath.Join(api.agentDirectory, ".git"))
if err != nil {
return xerrors.Errorf(".git dir exists: %w", err)
}

// If the agent directory is a git project, we'll search
// the project for any `.devcontainer/devcontainer.json`
// files.
if isGitProject {
return api.discoverDevcontainersInProject(api.agentDirectory)
}

// The agent directory is _not_ a git project, so we'll
// search the top level of the agent directory for any
// git projects, and search those.
entries, err := afero.ReadDir(api.fs, api.agentDirectory)
if err != nil {
return xerrors.Errorf("read agent directory: %w", err)
}

for _, entry := range entries {
if !entry.IsDir() {
continue
}

isGitProject, err = afero.DirExists(api.fs, filepath.Join(api.agentDirectory, entry.Name(), ".git"))
if err != nil {
return xerrors.Errorf(".git dir exists: %w", err)
}

// If this directory is a git project, we'll search
// it for any `.devcontainer/devcontainer.json` files.
if isGitProject {
if err := api.discoverDevcontainersInProject(filepath.Join(api.agentDirectory, entry.Name())); err != nil {
return err
}
}
}

return nil
}

func (api *API) discoverDevcontainersInProject(projectPath string) error {
logger := api.logger.
Named("project-discovery").
With(slog.F("project_path", projectPath))

globalPatterns, err := ignore.LoadGlobalPatterns(api.fs)
if err != nil {
return xerrors.Errorf("read global git ignore patterns: %w", err)
}

patterns, err := ignore.ReadPatterns(api.ctx, logger, api.fs, projectPath)
if err != nil {
return xerrors.Errorf("read git ignore patterns: %w", err)
}

matcher := gitignore.NewMatcher(append(globalPatterns, patterns...))

devcontainerConfigPaths := []string{
"/.devcontainer/devcontainer.json",
"/.devcontainer.json",
}

return afero.Walk(api.fs, projectPath, func(path string, info fs.FileInfo, err error) error {
if err != nil {
logger.Error(api.ctx, "encountered error while walking for dev container projects",
slog.F("path", path),
slog.Error(err))
return nil
}

pathParts := ignore.FilePathToParts(path)

// We know that a directory entry cannot be a `devcontainer.json` file, so we
// always skip processing directories. If the directory happens to be ignored
// by git then we'll make sure to ignore all of the children of that directory.
if info.IsDir() {
if matcher.Match(pathParts, true) {
return fs.SkipDir
}

return nil
}

if matcher.Match(pathParts, false) {
return nil
}

for _, relativeConfigPath := range devcontainerConfigPaths {
if !strings.HasSuffix(path, relativeConfigPath) {
continue
}

workspaceFolder := strings.TrimSuffix(path, relativeConfigPath)

logger := logger.With(slog.F("workspace_folder", workspaceFolder))
logger.Debug(api.ctx, "discovered dev container project")

api.mu.Lock()
if _, found := api.knownDevcontainers[workspaceFolder]; !found {
logger.Debug(api.ctx, "adding dev container project")

dc := codersdk.WorkspaceAgentDevcontainer{
ID: uuid.New(),
Name: "", // Updated later based on container state.
WorkspaceFolder: workspaceFolder,
ConfigPath: path,
Status: codersdk.WorkspaceAgentDevcontainerStatusStopped,
Dirty: false, // Updated later based on config file changes.
Container: nil,
}

if api.discoveryAutostart {
config, err := api.dccli.ReadConfig(api.ctx, workspaceFolder, path, []string{})
if err != nil {
logger.Error(api.ctx, "read project configuration", slog.Error(err))
} else if config.Configuration.Customizations.Coder.AutoStart {
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusStarting
}
}

api.knownDevcontainers[workspaceFolder] = dc
api.broadcastUpdatesLocked()

if dc.Status == codersdk.WorkspaceAgentDevcontainerStatusStarting {
api.asyncWg.Add(1)
go func() {
defer api.asyncWg.Done()

_ = api.CreateDevcontainer(dc.WorkspaceFolder, dc.ConfigPath)
}()
}
}
api.mu.Unlock()
}

return nil
})
}

func (api *API) watcherLoop() {
defer close(api.watcherDone)
defer api.logger.Debug(api.ctx, "watcher loop stopped")
@@ -654,7 +449,6 @@ func (api *API) updaterLoop() {
// We utilize a TickerFunc here instead of a regular Ticker so that
// we can guarantee execution of the updateContainers method after
// advancing the clock.
var prevErr error
ticker := api.clock.TickerFunc(api.ctx, api.updateInterval, func() error {
done := make(chan error, 1)
var sent bool
@@ -672,15 +466,9 @@ func (api *API) updaterLoop() {
if err != nil {
if errors.Is(err, context.Canceled) {
api.logger.Warn(api.ctx, "updater loop ticker canceled", slog.Error(err))
return nil
}
// Avoid excessive logging of the same error.
if prevErr == nil || prevErr.Error() != err.Error() {
} else {
api.logger.Error(api.ctx, "updater loop ticker failed", slog.Error(err))
}
prevErr = err
} else {
prevErr = nil
}
default:
api.logger.Debug(api.ctx, "updater loop ticker skipped, update in progress")
@@ -740,7 +528,6 @@ func (api *API) Routes() http.Handler {
r.Use(ensureInitDoneMW)

r.Get("/", api.handleList)
r.Get("/watch", api.watchContainers)
// TODO(mafredri): Simplify this route as the previous /devcontainers
// /-route was dropped. We can drop the /devcontainers prefix here too.
r.Route("/devcontainers/{devcontainer}", func(r chi.Router) {
@@ -750,92 +537,6 @@ func (api *API) Routes() http.Handler {
return r
}

func (api *API) broadcastUpdatesLocked() {
// Broadcast state changes to WebSocket listeners.
for _, ch := range api.updateChans {
select {
case ch <- struct{}{}:
default:
}
}
}

func (api *API) watchContainers(rw http.ResponseWriter, r *http.Request) {
ctx := r.Context()

conn, err := websocket.Accept(rw, r, &websocket.AcceptOptions{
// We want `NoContextTakeover` compression to balance improving
// bandwidth cost/latency with minimal memory usage overhead.
CompressionMode: websocket.CompressionNoContextTakeover,
})
if err != nil {
httpapi.Write(ctx, rw, http.StatusInternalServerError, codersdk.Response{
Message: "Failed to upgrade connection to websocket.",
Detail: err.Error(),
})
return
}

// Here we close the websocket for reading, so that the websocket library will handle pings and
// close frames.
_ = conn.CloseRead(context.Background())

ctx, wsNetConn := codersdk.WebsocketNetConn(ctx, conn, websocket.MessageText)
defer wsNetConn.Close()

go httpapi.Heartbeat(ctx, conn)

updateCh := make(chan struct{}, 1)

api.mu.Lock()
api.updateChans = append(api.updateChans, updateCh)
api.mu.Unlock()

defer func() {
api.mu.Lock()
api.updateChans = slices.DeleteFunc(api.updateChans, func(ch chan struct{}) bool {
return ch == updateCh
})
close(updateCh)
api.mu.Unlock()
}()

encoder := json.NewEncoder(wsNetConn)

ct, err := api.getContainers()
if err != nil {
api.logger.Error(ctx, "unable to get containers", slog.Error(err))
return
}

if err := encoder.Encode(ct); err != nil {
api.logger.Error(ctx, "encode container list", slog.Error(err))
return
}

for {
select {
case <-api.ctx.Done():
return

case <-ctx.Done():
return

case <-updateCh:
ct, err := api.getContainers()
if err != nil {
api.logger.Error(ctx, "unable to get containers", slog.Error(err))
continue
}

if err := encoder.Encode(ct); err != nil {
api.logger.Error(ctx, "encode container list", slog.Error(err))
return
}
}
}
}

// handleList handles the HTTP request to list containers.
func (api *API) handleList(rw http.ResponseWriter, r *http.Request) {
ct, err := api.getContainers()
@@ -875,26 +576,8 @@ func (api *API) updateContainers(ctx context.Context) error {
api.mu.Lock()
defer api.mu.Unlock()

var previouslyKnownDevcontainers map[string]codersdk.WorkspaceAgentDevcontainer
if len(api.updateChans) > 0 {
previouslyKnownDevcontainers = maps.Clone(api.knownDevcontainers)
}

api.processUpdatedContainersLocked(ctx, updated)

if len(api.updateChans) > 0 {
statesAreEqual := maps.EqualFunc(
previouslyKnownDevcontainers,
api.knownDevcontainers,
func(dc1, dc2 codersdk.WorkspaceAgentDevcontainer) bool {
return dc1.Equals(dc2)
})

if !statesAreEqual {
api.broadcastUpdatesLocked()
}
}

api.logger.Debug(ctx, "containers updated successfully", slog.F("container_count", len(api.containers.Containers)), slog.F("warning_count", len(api.containers.Warnings)), slog.F("devcontainer_count", len(api.knownDevcontainers)))

return nil
@@ -943,22 +626,17 @@ func (api *API) processUpdatedContainersLocked(ctx context.Context, updated code
slog.F("config_file", configFile),
)

// If we haven't set any include filters, we should explicitly ignore test devcontainers.
if len(api.containerLabelIncludeFilter) == 0 && container.Labels[DevcontainerIsTestRunLabel] == "true" {
continue
}

// Filter out devcontainer tests, unless explicitly set in include filters.
if len(api.containerLabelIncludeFilter) > 0 {
includeContainer := true
if len(api.containerLabelIncludeFilter) > 0 || container.Labels[DevcontainerIsTestRunLabel] == "true" {
var ok bool
for label, value := range api.containerLabelIncludeFilter {
v, found := container.Labels[label]

includeContainer = includeContainer && (found && v == value)
if v, found := container.Labels[label]; found && v == value {
ok = true
}
}
// Verbose debug logging is fine here since typically filters
// are only used in development or testing environments.
if !includeContainer {
if !ok {
logger.Debug(ctx, "container does not match include filter, ignoring devcontainer", slog.F("container_labels", container.Labels), slog.F("include_filter", api.containerLabelIncludeFilter))
continue
}
@@ -1039,9 +717,6 @@ func (api *API) processUpdatedContainersLocked(ctx context.Context, updated code
err := api.maybeInjectSubAgentIntoContainerLocked(ctx, dc)
if err != nil {
logger.Error(ctx, "inject subagent into container failed", slog.Error(err))
dc.Error = err.Error()
} else {
dc.Error = ""
}
}

@@ -1268,10 +943,7 @@ func (api *API) handleDevcontainerRecreate(w http.ResponseWriter, r *http.Reques
// devcontainer multiple times in parallel.
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusStarting
dc.Container = nil
dc.Error = ""
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.broadcastUpdatesLocked()

go func() {
_ = api.CreateDevcontainer(dc.WorkspaceFolder, dc.ConfigPath, WithRemoveExistingContainer())
}()
@@ -1360,7 +1032,6 @@ func (api *API) CreateDevcontainer(workspaceFolder, configPath string, opts ...D
api.mu.Lock()
dc = api.knownDevcontainers[dc.WorkspaceFolder]
dc.Status = codersdk.WorkspaceAgentDevcontainerStatusError
dc.Error = err.Error()
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.recreateErrorTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "errorTimes")
api.mu.Unlock()
@@ -1384,10 +1055,8 @@
}
}
dc.Dirty = false
dc.Error = ""
api.recreateSuccessTimes[dc.WorkspaceFolder] = api.clock.Now("agentcontainers", "recreate", "successTimes")
api.knownDevcontainers[dc.WorkspaceFolder] = dc
api.broadcastUpdatesLocked()
api.mu.Unlock()

// Ensure an immediate refresh to accurately reflect the
@@ -1854,9 +1523,7 @@ func (api *API) maybeInjectSubAgentIntoContainerLocked(ctx context.Context, dc c
originalName := subAgentConfig.Name

for attempt := 1; attempt <= maxAttemptsToNameAgent; attempt++ {
agent, err := client.Create(ctx, subAgentConfig)
if err == nil {
proc.agent = agent // Only reassign on success.
if proc.agent, err = client.Create(ctx, subAgentConfig); err == nil {
if api.usingWorkspaceFolderName[dc.WorkspaceFolder] {
api.devcontainerNames[dc.Name] = true
delete(api.usingWorkspaceFolderName, dc.WorkspaceFolder)
@@ -1864,6 +1531,7 @@ func (api *API) maybeInjectSubAgentIntoContainerLocked(ctx context.Context, dc c

break
}

// NOTE(DanielleMaywood):
// Ordinarily we'd use `errors.As` here, but it didn't appear to work. Not
// sure if this is because of the communication protocol? Instead I've opted
@@ -2018,9 +1686,6 @@ func (api *API) Close() error {
if api.updaterDone != nil {
<-api.updaterDone
}
if api.discoverDone != nil {
<-api.discoverDone
}

// Wait for all async tasks to complete.
api.asyncWg.Wait()

+12 -1369 File diff suppressed because it is too large
@@ -55,11 +55,11 @@ func TestIntegrationDockerCLI(t *testing.T) {
}, testutil.WaitShort, testutil.IntervalSlow, "Container did not start in time")

dcli := agentcontainers.NewDockerCLI(agentexec.DefaultExecer)
ctx := testutil.Context(t, testutil.WaitMedium) // Longer timeout for multiple subtests
containerName := strings.TrimPrefix(ct.Container.Name, "/")

t.Run("DetectArchitecture", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)

arch, err := dcli.DetectArchitecture(ctx, containerName)
require.NoError(t, err, "DetectArchitecture failed")
@@ -71,7 +71,6 @@ func TestIntegrationDockerCLI(t *testing.T) {

t.Run("Copy", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)

want := "Help, I'm trapped!"
tempFile := filepath.Join(t.TempDir(), "test-file.txt")
@@ -91,7 +90,6 @@ func TestIntegrationDockerCLI(t *testing.T) {

t.Run("ExecAs", func(t *testing.T) {
t.Parallel()
ctx := testutil.Context(t, testutil.WaitShort)

// Test ExecAs without specifying user (should use container's default).
want := "root"

@@ -61,7 +61,7 @@ fi
exec 3>&-

# Format the generated code.
go run mvdan.cc/gofumpt@v0.8.0 -w -l "${TMPDIR}/${DEST_FILENAME}"
go run mvdan.cc/gofumpt@v0.4.0 -w -l "${TMPDIR}/${DEST_FILENAME}"

# Add a header so that Go recognizes this as a generated file.
if grep -q -- "\[-i extension\]" < <(sed -h 2>&1); then

@@ -91,7 +91,6 @@ type CoderCustomization struct {
Apps []SubAgentApp `json:"apps,omitempty"`
Name string `json:"name,omitempty"`
Ignore bool `json:"ignore,omitempty"`
AutoStart bool `json:"autoStart,omitempty"`
}

type DevcontainerWorkspace struct {
@@ -107,63 +106,63 @@ type DevcontainerCLI interface {

// DevcontainerCLIUpOptions are options for the devcontainer CLI Up
// command.
type DevcontainerCLIUpOptions func(*DevcontainerCLIUpConfig)
type DevcontainerCLIUpOptions func(*devcontainerCLIUpConfig)

type DevcontainerCLIUpConfig struct {
Args []string // Additional arguments for the Up command.
Stdout io.Writer
Stderr io.Writer
type devcontainerCLIUpConfig struct {
args []string // Additional arguments for the Up command.
stdout io.Writer
stderr io.Writer
}

// WithRemoveExistingContainer is an option to remove the existing
// container.
func WithRemoveExistingContainer() DevcontainerCLIUpOptions {
return func(o *DevcontainerCLIUpConfig) {
o.Args = append(o.Args, "--remove-existing-container")
return func(o *devcontainerCLIUpConfig) {
o.args = append(o.args, "--remove-existing-container")
}
}

// WithUpOutput sets additional stdout and stderr writers for logs
// during Up operations.
func WithUpOutput(stdout, stderr io.Writer) DevcontainerCLIUpOptions {
return func(o *DevcontainerCLIUpConfig) {
o.Stdout = stdout
o.Stderr = stderr
return func(o *devcontainerCLIUpConfig) {
o.stdout = stdout
o.stderr = stderr
}
}

// DevcontainerCLIExecOptions are options for the devcontainer CLI Exec
// command.
type DevcontainerCLIExecOptions func(*DevcontainerCLIExecConfig)
type DevcontainerCLIExecOptions func(*devcontainerCLIExecConfig)

type DevcontainerCLIExecConfig struct {
Args []string // Additional arguments for the Exec command.
Stdout io.Writer
Stderr io.Writer
type devcontainerCLIExecConfig struct {
args []string // Additional arguments for the Exec command.
stdout io.Writer
stderr io.Writer
}

// WithExecOutput sets additional stdout and stderr writers for logs
// during Exec operations.
func WithExecOutput(stdout, stderr io.Writer) DevcontainerCLIExecOptions {
return func(o *DevcontainerCLIExecConfig) {
o.Stdout = stdout
o.Stderr = stderr
return func(o *devcontainerCLIExecConfig) {
o.stdout = stdout
o.stderr = stderr
}
}

// WithExecContainerID sets the container ID to target a specific
// container.
func WithExecContainerID(id string) DevcontainerCLIExecOptions {
return func(o *DevcontainerCLIExecConfig) {
o.Args = append(o.Args, "--container-id", id)
return func(o *devcontainerCLIExecConfig) {
o.args = append(o.args, "--container-id", id)
}
}

// WithRemoteEnv sets environment variables for the Exec command.
func WithRemoteEnv(env ...string) DevcontainerCLIExecOptions {
return func(o *DevcontainerCLIExecConfig) {
return func(o *devcontainerCLIExecConfig) {
for _, e := range env {
o.Args = append(o.Args, "--remote-env", e)
o.args = append(o.args, "--remote-env", e)
}
}
}
@@ -186,8 +185,8 @@ func WithReadConfigOutput(stdout, stderr io.Writer) DevcontainerCLIReadConfigOpt
}
}

func applyDevcontainerCLIUpOptions(opts []DevcontainerCLIUpOptions) DevcontainerCLIUpConfig {
conf := DevcontainerCLIUpConfig{Stdout: io.Discard, Stderr: io.Discard}
func applyDevcontainerCLIUpOptions(opts []DevcontainerCLIUpOptions) devcontainerCLIUpConfig {
conf := devcontainerCLIUpConfig{stdout: io.Discard, stderr: io.Discard}
for _, opt := range opts {
if opt != nil {
opt(&conf)
@@ -196,8 +195,8 @@ func applyDevcontainerCLIUpOptions(opts []DevcontainerCLIUpOptions) Devcontainer
return conf
}

func applyDevcontainerCLIExecOptions(opts []DevcontainerCLIExecOptions) DevcontainerCLIExecConfig {
conf := DevcontainerCLIExecConfig{Stdout: io.Discard, Stderr: io.Discard}
func applyDevcontainerCLIExecOptions(opts []DevcontainerCLIExecOptions) devcontainerCLIExecConfig {
conf := devcontainerCLIExecConfig{stdout: io.Discard, stderr: io.Discard}
for _, opt := range opts {
if opt != nil {
opt(&conf)
@@ -242,7 +241,7 @@ func (d *devcontainerCLI) Up(ctx context.Context, workspaceFolder, configPath st
if configPath != "" {
args = append(args, "--config", configPath)
}
args = append(args, conf.Args...)
args = append(args, conf.args...)
cmd := d.execer.CommandContext(ctx, "devcontainer", args...)

// Capture stdout for parsing and stream logs for both default and provided writers.
@@ -252,14 +251,14 @@ func (d *devcontainerCLI) Up(ctx context.Context, workspaceFolder, configPath st
&devcontainerCLILogWriter{
ctx: ctx,
logger: logger.With(slog.F("stdout", true)),
writer: conf.Stdout,
writer: conf.stdout,
},
)
// Stream stderr logs and provided writer if any.
cmd.Stderr = &devcontainerCLILogWriter{
ctx: ctx,
logger: logger.With(slog.F("stderr", true)),
writer: conf.Stderr,
writer: conf.stderr,
}

if err := cmd.Run(); err != nil {
@@ -294,17 +293,17 @@ func (d *devcontainerCLI) Exec(ctx context.Context, workspaceFolder, configPath
if configPath != "" {
args = append(args, "--config", configPath)
}
args = append(args, conf.Args...)
args = append(args, conf.args...)
args = append(args, cmd)
args = append(args, cmdArgs...)
c := d.execer.CommandContext(ctx, "devcontainer", args...)

c.Stdout = io.MultiWriter(conf.Stdout, &devcontainerCLILogWriter{
c.Stdout = io.MultiWriter(conf.stdout, &devcontainerCLILogWriter{
ctx: ctx,
logger: logger.With(slog.F("stdout", true)),
writer: io.Discard,
})
c.Stderr = io.MultiWriter(conf.Stderr, &devcontainerCLILogWriter{
c.Stderr = io.MultiWriter(conf.stderr, &devcontainerCLILogWriter{
ctx: ctx,
logger: logger.With(slog.F("stderr", true)),
writer: io.Discard,

@@ -593,7 +593,7 @@ func setupDevcontainerWorkspace(t *testing.T, workspaceFolder string) string {
"containerEnv": {
"TEST_CONTAINER": "true"
},
"runArgs": ["--label=com.coder.test=devcontainercli", "--label=` + agentcontainers.DevcontainerIsTestRunLabel + `=true"]
"runArgs": ["--label", "com.coder.test=devcontainercli"]
}`
err = os.WriteFile(configPath, []byte(content), 0o600)
require.NoError(t, err, "create devcontainer.json file")

@@ -1,124 +0,0 @@
package ignore

import (
"bytes"
"context"
"errors"
"io/fs"
"os"
"path/filepath"
"strings"

"github.com/go-git/go-git/v5/plumbing/format/config"
"github.com/go-git/go-git/v5/plumbing/format/gitignore"
"github.com/spf13/afero"
"golang.org/x/xerrors"

"cdr.dev/slog"
)

const (
gitconfigFile = ".gitconfig"
gitignoreFile = ".gitignore"
gitInfoExcludeFile = ".git/info/exclude"
)

func FilePathToParts(path string) []string {
components := []string{}

if path == "" {
return components
}

for segment := range strings.SplitSeq(filepath.Clean(path), string(filepath.Separator)) {
if segment != "" {
components = append(components, segment)
}
}

return components
}

func readIgnoreFile(fileSystem afero.Fs, path, ignore string) ([]gitignore.Pattern, error) {
var ps []gitignore.Pattern

data, err := afero.ReadFile(fileSystem, filepath.Join(path, ignore))
if err != nil && !errors.Is(err, os.ErrNotExist) {
return nil, err
}

for s := range strings.SplitSeq(string(data), "\n") {
if !strings.HasPrefix(s, "#") && len(strings.TrimSpace(s)) > 0 {
ps = append(ps, gitignore.ParsePattern(s, FilePathToParts(path)))
}
}

return ps, nil
}

func ReadPatterns(ctx context.Context, logger slog.Logger, fileSystem afero.Fs, path string) ([]gitignore.Pattern, error) {
var ps []gitignore.Pattern

subPs, err := readIgnoreFile(fileSystem, path, gitInfoExcludeFile)
if err != nil {
return nil, err
}

ps = append(ps, subPs...)

if err := afero.Walk(fileSystem, path, func(path string, info fs.FileInfo, err error) error {
if err != nil {
logger.Error(ctx, "encountered error while walking for git ignore files",
slog.F("path", path),
slog.Error(err))
return nil
}

if !info.IsDir() {
return nil
}

subPs, err := readIgnoreFile(fileSystem, path, gitignoreFile)
if err != nil {
return err
}

ps = append(ps, subPs...)

return nil
}); err != nil {
return nil, err
}

return ps, nil
}

func loadPatterns(fileSystem afero.Fs, path string) ([]gitignore.Pattern, error) {
data, err := afero.ReadFile(fileSystem, path)
if err != nil && !errors.Is(err, os.ErrNotExist) {
return nil, err
}

decoder := config.NewDecoder(bytes.NewBuffer(data))

conf := config.New()
if err := decoder.Decode(conf); err != nil {
return nil, xerrors.Errorf("decode config: %w", err)
}

excludes := conf.Section("core").Options.Get("excludesfile")
if excludes == "" {
return nil, nil
}

return readIgnoreFile(fileSystem, "", excludes)
}

func LoadGlobalPatterns(fileSystem afero.Fs) ([]gitignore.Pattern, error) {
home, err := os.UserHomeDir()
if err != nil {
return nil, err
}

return loadPatterns(fileSystem, filepath.Join(home, gitconfigFile))
}
@@ -1,38 +0,0 @@
package ignore_test

import (
"fmt"
"testing"

"github.com/stretchr/testify/require"

"github.com/coder/coder/v2/agent/agentcontainers/ignore"
)

func TestFilePathToParts(t *testing.T) {
t.Parallel()

tests := []struct {
path string
expected []string
}{
{"", []string{}},
{"/", []string{}},
{"foo", []string{"foo"}},
{"/foo", []string{"foo"}},
{"./foo/bar", []string{"foo", "bar"}},
{"../foo/bar", []string{"..", "foo", "bar"}},
{"foo/bar/baz", []string{"foo", "bar", "baz"}},
{"/foo/bar/baz", []string{"foo", "bar", "baz"}},
{"foo/../bar", []string{"bar"}},
}

for _, tt := range tests {
t.Run(fmt.Sprintf("`%s`", tt.path), func(t *testing.T) {
t.Parallel()

parts := ignore.FilePathToParts(tt.path)
require.Equal(t, tt.expected, parts)
})
}
}
@@ -188,7 +188,7 @@ func (a *subAgentAPIClient) List(ctx context.Context) ([]SubAgent, error) {
|
||||
return agents, nil
|
||||
}
|
||||
|
||||
func (a *subAgentAPIClient) Create(ctx context.Context, agent SubAgent) (_ SubAgent, err error) {
|
||||
func (a *subAgentAPIClient) Create(ctx context.Context, agent SubAgent) (SubAgent, error) {
|
||||
a.logger.Debug(ctx, "creating sub agent", slog.F("name", agent.Name), slog.F("directory", agent.Directory))
|
||||
|
||||
displayApps := make([]agentproto.CreateSubAgentRequest_DisplayApp, 0, len(agent.DisplayApps))
|
||||
@@ -233,27 +233,19 @@ func (a *subAgentAPIClient) Create(ctx context.Context, agent SubAgent) (_ SubAg
|
||||
if err != nil {
|
||||
return SubAgent{}, err
|
||||
}
|
||||
defer func() {
|
||||
if err != nil {
|
||||
// Best effort.
|
||||
_, _ = a.api.DeleteSubAgent(ctx, &agentproto.DeleteSubAgentRequest{
|
||||
Id: resp.GetAgent().GetId(),
|
||||
})
|
||||
}
|
||||
}()
|
||||
|
||||
agent.Name = resp.GetAgent().GetName()
|
||||
agent.ID, err = uuid.FromBytes(resp.GetAgent().GetId())
|
||||
agent.Name = resp.Agent.Name
|
||||
agent.ID, err = uuid.FromBytes(resp.Agent.Id)
|
||||
if err != nil {
|
||||
return SubAgent{}, err
|
||||
return agent, err
|
||||
}
|
||||
agent.AuthToken, err = uuid.FromBytes(resp.GetAgent().GetAuthToken())
|
||||
agent.AuthToken, err = uuid.FromBytes(resp.Agent.AuthToken)
|
||||
if err != nil {
|
||||
return SubAgent{}, err
|
||||
return agent, err
|
||||
}
|
||||
|
||||
for _, appError := range resp.GetAppCreationErrors() {
|
||||
app := apps[appError.GetIndex()]
|
||||
for _, appError := range resp.AppCreationErrors {
|
||||
app := apps[appError.Index]
|
||||
|
||||
a.logger.Warn(ctx, "unable to create app",
|
||||
slog.F("agent_name", agent.Name),
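The deleted `defer` block only works because of the named `err` result in the old signature (`(_ SubAgent, err error)`): the deferred closure re-reads the final error and rolls back the half-created remote resource. A stripped-down sketch of that pattern (all names here are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// deletes records best-effort rollbacks, standing in for DeleteSubAgent calls.
var deletes []string

// create mimics the removed pattern: because err is a *named* result, the
// deferred closure observes whatever error the function ultimately returns,
// including errors from steps after the resource was created remotely.
func create(fail bool) (_ string, err error) {
	id := "agent-123" // pretend the create RPC succeeded
	defer func() {
		if err != nil {
			deletes = append(deletes, id) // best-effort rollback
		}
	}()
	if fail {
		return "", errors.New("post-create step failed")
	}
	return id, nil
}

func main() {
	_, _ = create(true)
	fmt.Println(deletes) // the failed create was rolled back
}
```

With the plain `(SubAgent, error)` signature the closure can no longer observe the returned error, which is why the defer has to go when the signature changes.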
@@ -4,7 +4,6 @@ import (
 	"context"
 	"os"
 	"path/filepath"
-	"runtime"
 	"testing"

 	"github.com/fsnotify/fsnotify"
@@ -89,34 +88,24 @@ func TestFSNotifyWatcher(t *testing.T) {
 		break
 	}

-	// TODO(DanielleMaywood):
-	// Unfortunately it appears this atomic-rename phase of the test is flakey on macOS.
-	//
-	// This test flake could be indicative of an issue that may present itself
-	// in a running environment. Fortunately, we only use this (as of 2025-07-29)
-	// for our dev container integration. We do not expect the host workspace
-	// (where this is used), to ever be run on macOS, as containers are a linux
-	// paradigm.
-	if runtime.GOOS != "darwin" {
-		err = os.WriteFile(testFile+".atomic", []byte(`{"test": "atomic"}`), 0o600)
-		require.NoError(t, err, "write new atomic test file failed")
+	err = os.WriteFile(testFile+".atomic", []byte(`{"test": "atomic"}`), 0o600)
+	require.NoError(t, err, "write new atomic test file failed")

-		err = os.Rename(testFile+".atomic", testFile)
-		require.NoError(t, err, "rename atomic test file failed")
+	err = os.Rename(testFile+".atomic", testFile)
+	require.NoError(t, err, "rename atomic test file failed")

-		// Verify that we receive the event we want.
-		for {
-			event, err := wut.Next(ctx)
-			require.NoError(t, err, "next event failed")
-			require.NotNil(t, event, "want non-nil event")
-			if !event.Has(fsnotify.Create) {
-				t.Logf("Ignoring event: %s", event)
-				continue
-			}
-			require.Truef(t, event.Has(fsnotify.Create), "want create event: %s", event.String())
-			require.Equal(t, event.Name, testFile, "want event for test file")
-			break
+	// Verify that we receive the event we want.
+	for {
+		event, err := wut.Next(ctx)
+		require.NoError(t, err, "next event failed")
+		require.NotNil(t, event, "want non-nil event")
+		if !event.Has(fsnotify.Create) {
+			t.Logf("Ignoring event: %s", event)
+			continue
+		}
+		require.Truef(t, event.Has(fsnotify.Create), "want create event: %s", event.String())
+		require.Equal(t, event.Name, testFile, "want event for test file")
+		break
 	}

 	// Test removing the file from the watcher.
@@ -149,6 +149,7 @@ func (r *Runner) Init(scripts []codersdk.WorkspaceAgentScript, scriptCompleted S
 		if script.Cron == "" {
 			continue
 		}
+		script := script
 		_, err := r.cron.AddFunc(script.Cron, func() {
 			err := r.trackRun(r.cronCtx, script, ExecuteCronScripts)
 			if err != nil {
@@ -223,6 +224,7 @@ func (r *Runner) Execute(ctx context.Context, option ExecuteOption) error {
 			continue
 		}

+		script := script
 		eg.Go(func() error {
 			err := r.trackRun(ctx, script, option)
 			if err != nil {
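Both hunks add `script := script` immediately before a closure. This is the classic pre-Go-1.22 loop-variable capture fix: without a per-iteration copy, every closure shares the single loop variable and sees its final value. A sketch of the idiom (on Go 1.22 and later the copy is a harmless no-op, since loop variables are per-iteration anyway):

```go
package main

import "fmt"

// capturedFuncs mirrors the fix above: re-declaring the loop variable gives
// each closure its own copy. Before Go 1.22, omitting the copy line made
// every closure return the final value of n.
func capturedFuncs() []func() int {
	var funcs []func() int
	for _, n := range []int{1, 2, 3} {
		n := n // the same idiom as `script := script` in the diff
		funcs = append(funcs, func() int { return n })
	}
	return funcs
}

func main() {
	for _, f := range capturedFuncs() {
		fmt.Print(f(), " ")
	}
	fmt.Println() // prints: 1 2 3
}
```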
@@ -46,8 +46,6 @@ const (
 	// MagicProcessCmdlineJetBrains is a string in a process's command line that
 	// uniquely identifies it as JetBrains software.
 	MagicProcessCmdlineJetBrains = "idea.vendor.name=JetBrains"
-	MagicProcessCmdlineToolbox   = "com.jetbrains.toolbox"
-	MagicProcessCmdlineGateway   = "remote-dev-server"

 	// BlockedFileTransferErrorCode indicates that SSH server restricted the raw command from performing
 	// the file transfer.
@@ -119,10 +117,6 @@ type Config struct {
 	// Note that this is different from the devcontainers feature, which uses
 	// subagents.
 	ExperimentalContainers bool
-	// X11Net allows overriding the networking implementation used for X11
-	// forwarding listeners. When nil, a default implementation backed by the
-	// standard library networking package is used.
-	X11Net X11Network
 }

 type Server struct {
@@ -137,10 +131,9 @@ type Server struct {
 	// a lock on mu but protected by closing.
 	wg sync.WaitGroup

-	Execer       agentexec.Execer
-	logger       slog.Logger
-	srv          *ssh.Server
-	x11Forwarder *x11Forwarder
+	Execer agentexec.Execer
+	logger slog.Logger
+	srv    *ssh.Server

 	config *Config

@@ -197,20 +190,6 @@ func NewServer(ctx context.Context, logger slog.Logger, prometheusRegistry *prom
 		config: config,

 		metrics: metrics,
-		x11Forwarder: &x11Forwarder{
-			logger:           logger,
-			x11HandlerErrors: metrics.x11HandlerErrors,
-			fs:               fs,
-			displayOffset:    *config.X11DisplayOffset,
-			sessions:         make(map[*x11Session]struct{}),
-			connections:      make(map[net.Conn]struct{}),
-			network: func() X11Network {
-				if config.X11Net != nil {
-					return config.X11Net
-				}
-				return osNet{}
-			}(),
-		},
 	}

 	srv := &ssh.Server{
@@ -478,7 +457,7 @@ func (s *Server) sessionHandler(session ssh.Session) {

 	x11, hasX11 := session.X11()
 	if hasX11 {
-		display, handled := s.x11Forwarder.x11Handler(ctx, session)
+		display, handled := s.x11Handler(ctx, x11)
 		if !handled {
 			logger.Error(ctx, "x11 handler failed")
 			closeCause("x11 handler failed")
@@ -611,9 +590,7 @@ func (s *Server) startNonPTYSession(logger slog.Logger, session ssh.Session, mag
 	// and SSH server close may be delayed.
 	cmd.SysProcAttr = cmdSysProcAttr()

-	// to match OpenSSH, we don't actually tear a non-TTY command down, even if the session ends. OpenSSH closes the
-	// pipes to the process when the session ends; which is what happens here since we wire the command up to the
-	// session for I/O.
+	// to match OpenSSH, we don't actually tear a non-TTY command down, even if the session ends.
+	// c.f. https://github.com/coder/coder/issues/18519#issuecomment-3019118271
 	cmd.Cancel = nil

@@ -1177,9 +1154,6 @@ func (s *Server) Close() error {

 	s.mu.Unlock()

-	s.logger.Debug(ctx, "closing X11 forwarding")
-	_ = s.x11Forwarder.Close()
-
 	s.logger.Debug(ctx, "waiting for all goroutines to exit")
 	s.wg.Wait() // Wait for all goroutines to exit.
@@ -8,9 +8,7 @@ import (
 	"context"
 	"fmt"
 	"net"
-	"os"
 	"os/user"
-	"path/filepath"
 	"runtime"
 	"strings"
 	"sync"
@@ -405,92 +403,6 @@ func TestNewServer_Signal(t *testing.T) {
 	})
 }

-func TestSSHServer_ClosesStdin(t *testing.T) {
-	t.Parallel()
-	if runtime.GOOS == "windows" {
-		t.Skip("bash doesn't exist on Windows")
-	}
-
-	ctx := testutil.Context(t, testutil.WaitMedium)
-	logger := testutil.Logger(t)
-	s, err := agentssh.NewServer(ctx, logger.Named("ssh-server"), prometheus.NewRegistry(), afero.NewMemMapFs(), agentexec.DefaultExecer, nil)
-	require.NoError(t, err)
-	logger = logger.Named("test")
-	defer s.Close()
-	err = s.UpdateHostSigner(42)
-	assert.NoError(t, err)
-
-	ln, err := net.Listen("tcp", "127.0.0.1:0")
-	require.NoError(t, err)
-
-	done := make(chan struct{})
-	go func() {
-		defer close(done)
-		err := s.Serve(ln)
-		assert.Error(t, err) // Server is closed.
-	}()
-	defer func() {
-		err := s.Close()
-		require.NoError(t, err)
-		<-done
-	}()
-
-	c := sshClient(t, ln.Addr().String())
-
-	sess, err := c.NewSession()
-	require.NoError(t, err)
-	stdout, err := sess.StdoutPipe()
-	require.NoError(t, err)
-	stdin, err := sess.StdinPipe()
-	require.NoError(t, err)
-	defer stdin.Close()
-
-	dir := t.TempDir()
-	err = os.MkdirAll(dir, 0o755)
-	require.NoError(t, err)
-	filePath := filepath.Join(dir, "result.txt")
-
-	// the shell command `read` will block until data is written to stdin, or closed. It will return
-	// exit code 1 if it hits EOF, which is what we want to test.
-	cmdErrCh := make(chan error, 1)
-	go func() {
-		cmdErrCh <- sess.Start(fmt.Sprintf(`echo started; echo "read exit code: $(read && echo 0 || echo 1)" > %s`, filePath))
-	}()
-
-	cmdErr := testutil.RequireReceive(ctx, t, cmdErrCh)
-	require.NoError(t, cmdErr)
-
-	readCh := make(chan error, 1)
-	go func() {
-		buf := make([]byte, 8)
-		_, err := stdout.Read(buf)
-		assert.Equal(t, "started\n", string(buf))
-		readCh <- err
-	}()
-	err = testutil.RequireReceive(ctx, t, readCh)
-	require.NoError(t, err)
-
-	err = sess.Close()
-	require.NoError(t, err)
-
-	var content []byte
-	expected := []byte("read exit code: 1\n")
-	testutil.Eventually(ctx, t, func(_ context.Context) bool {
-		content, err = os.ReadFile(filePath)
-		if err != nil {
-			logger.Debug(ctx, "failed to read file; will retry", slog.Error(err))
-			return false
-		}
-		if len(content) != len(expected) {
-			logger.Debug(ctx, "file is partially written", slog.F("content", content))
-			return false
-		}
-		return true
-	}, testutil.IntervalFast)
-	require.NoError(t, err)
-	require.Equal(t, string(expected), string(content))
-}

 func sshClient(t *testing.T, addr string) *ssh.Client {
 	conn, err := net.Dial("tcp", addr)
 	require.NoError(t, err)
@@ -53,7 +53,7 @@ func NewJetbrainsChannelWatcher(ctx ssh.Context, logger slog.Logger, reportConne

 	// If this is not JetBrains, then we do not need to do anything special. We
 	// attempt to match on something that appears unique to JetBrains software.
-	if !isJetbrainsProcess(cmdline) {
+	if !strings.Contains(strings.ToLower(cmdline), strings.ToLower(MagicProcessCmdlineJetBrains)) {
 		return newChannel
 	}

@@ -104,18 +104,3 @@ func (c *ChannelOnClose) Close() error {
 	c.once.Do(c.done)
 	return c.Channel.Close()
 }

-func isJetbrainsProcess(cmdline string) bool {
-	opts := []string{
-		MagicProcessCmdlineJetBrains,
-		MagicProcessCmdlineToolbox,
-		MagicProcessCmdlineGateway,
-	}
-
-	for _, opt := range opts {
-		if strings.Contains(strings.ToLower(cmdline), strings.ToLower(opt)) {
-			return true
-		}
-	}
-	return false
-}
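The removed helper generalizes a case-insensitive substring match over several markers; the replacement inlines the single remaining marker. The helper's shape in isolation (a sketch; the marker strings below are examples, not the full set):

```go
package main

import (
	"fmt"
	"strings"
)

// containsAnyFold reports whether cmdline contains any of the markers,
// ignoring case — the same loop isJetbrainsProcess ran over the Magic*
// constants before it was inlined to a single marker.
func containsAnyFold(cmdline string, markers ...string) bool {
	lower := strings.ToLower(cmdline)
	for _, m := range markers {
		if strings.Contains(lower, strings.ToLower(m)) {
			return true
		}
	}
	return false
}

func main() {
	cmdline := "java -Didea.vendor.name=JetBrains -cp app.jar com.example.Main"
	fmt.Println(containsAnyFold(cmdline, "idea.vendor.name=JetBrains", "com.jetbrains.toolbox")) // true
}
```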
+72 -287
@@ -7,16 +7,15 @@ import (
 	"errors"
 	"fmt"
-	"io"
+	"math"
 	"net"
 	"os"
 	"path/filepath"
 	"strconv"
 	"sync"
-	"time"

 	"github.com/gliderlabs/ssh"
 	"github.com/gofrs/flock"
 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/spf13/afero"
 	gossh "golang.org/x/crypto/ssh"
 	"golang.org/x/xerrors"
@@ -30,51 +29,8 @@
 	X11StartPort = 6000
 	// X11DefaultDisplayOffset is the default offset for X11 forwarding.
 	X11DefaultDisplayOffset = 10
-	X11MaxDisplays          = 200
-	// X11MaxPort is the highest port we will ever use for X11 forwarding. This limits the total number of TCP sockets
-	// we will create. It seems more useful to have a maximum port number than a direct limit on sockets with no max
-	// port because we'd like to be able to tell users the exact range of ports the Agent might use.
-	X11MaxPort = X11StartPort + X11MaxDisplays
 )

-// X11Network abstracts the creation of network listeners for X11 forwarding.
-// It is intended mainly for testing; production code uses the default
-// implementation backed by the operating system networking stack.
-type X11Network interface {
-	Listen(network, address string) (net.Listener, error)
-}
-
-// osNet is the default X11Network implementation that uses the standard
-// library network stack.
-type osNet struct{}
-
-func (osNet) Listen(network, address string) (net.Listener, error) {
-	return net.Listen(network, address)
-}
-
-type x11Forwarder struct {
-	logger           slog.Logger
-	x11HandlerErrors *prometheus.CounterVec
-	fs               afero.Fs
-	displayOffset    int
-
-	// network creates X11 listener sockets. Defaults to osNet{}.
-	network X11Network
-
-	mu          sync.Mutex
-	sessions    map[*x11Session]struct{}
-	connections map[net.Conn]struct{}
-	closing     bool
-	wg          sync.WaitGroup
-}
-
-type x11Session struct {
-	session  ssh.Session
-	display  int
-	listener net.Listener
-	usedAt   time.Time
-}
-
 // x11Callback is called when the client requests X11 forwarding.
 func (*Server) x11Callback(_ ssh.Context, _ ssh.X11) bool {
 	// Always allow.
@@ -83,243 +39,115 @@ func (*Server) x11Callback(_ ssh.Context, _ ssh.X11) bool {

 // x11Handler is called when a session has requested X11 forwarding.
 // It listens for X11 connections and forwards them to the client.
-func (x *x11Forwarder) x11Handler(sshCtx ssh.Context, sshSession ssh.Session) (displayNumber int, handled bool) {
-	x11, hasX11 := sshSession.X11()
-	if !hasX11 {
-		return -1, false
-	}
-	serverConn, valid := sshCtx.Value(ssh.ContextKeyConn).(*gossh.ServerConn)
+func (s *Server) x11Handler(ctx ssh.Context, x11 ssh.X11) (displayNumber int, handled bool) {
+	serverConn, valid := ctx.Value(ssh.ContextKeyConn).(*gossh.ServerConn)
 	if !valid {
-		x.logger.Warn(sshCtx, "failed to get server connection")
+		s.logger.Warn(ctx, "failed to get server connection")
 		return -1, false
 	}
-	ctx := slog.With(sshCtx, slog.F("session_id", fmt.Sprintf("%x", serverConn.SessionID())))

 	hostname, err := os.Hostname()
 	if err != nil {
-		x.logger.Warn(ctx, "failed to get hostname", slog.Error(err))
-		x.x11HandlerErrors.WithLabelValues("hostname").Add(1)
+		s.logger.Warn(ctx, "failed to get hostname", slog.Error(err))
+		s.metrics.x11HandlerErrors.WithLabelValues("hostname").Add(1)
 		return -1, false
 	}

-	x11session, err := x.createX11Session(ctx, sshSession)
+	ln, display, err := createX11Listener(ctx, *s.config.X11DisplayOffset)
 	if err != nil {
-		x.logger.Warn(ctx, "failed to create X11 listener", slog.Error(err))
-		x.x11HandlerErrors.WithLabelValues("listen").Add(1)
+		s.logger.Warn(ctx, "failed to create X11 listener", slog.Error(err))
+		s.metrics.x11HandlerErrors.WithLabelValues("listen").Add(1)
 		return -1, false
 	}
+	s.trackListener(ln, true)
 	defer func() {
 		if !handled {
-			x.closeAndRemoveSession(x11session)
+			s.trackListener(ln, false)
+			_ = ln.Close()
 		}
 	}()

-	err = addXauthEntry(ctx, x.fs, hostname, strconv.Itoa(x11session.display), x11.AuthProtocol, x11.AuthCookie)
+	err = addXauthEntry(ctx, s.fs, hostname, strconv.Itoa(display), x11.AuthProtocol, x11.AuthCookie)
 	if err != nil {
-		x.logger.Warn(ctx, "failed to add Xauthority entry", slog.Error(err))
-		x.x11HandlerErrors.WithLabelValues("xauthority").Add(1)
+		s.logger.Warn(ctx, "failed to add Xauthority entry", slog.Error(err))
+		s.metrics.x11HandlerErrors.WithLabelValues("xauthority").Add(1)
 		return -1, false
 	}

-	// clean up the X11 session if the SSH session completes.
 	go func() {
+		// Don't leave the listener open after the session is gone.
 		<-ctx.Done()
-		x.closeAndRemoveSession(x11session)
+		_ = ln.Close()
 	}()

-	go x.listenForConnections(ctx, x11session, serverConn, x11)
-	x.logger.Debug(ctx, "X11 forwarding started", slog.F("display", x11session.display))
-
-	return x11session.display, true
-}
-
-func (x *x11Forwarder) trackGoroutine() (closing bool, done func()) {
-	x.mu.Lock()
-	defer x.mu.Unlock()
-	if !x.closing {
-		x.wg.Add(1)
-		return false, func() { x.wg.Done() }
-	}
-	return true, func() {}
-}
-
-func (x *x11Forwarder) listenForConnections(
-	ctx context.Context, session *x11Session, serverConn *gossh.ServerConn, x11 ssh.X11,
-) {
-	defer x.closeAndRemoveSession(session)
-	if closing, done := x.trackGoroutine(); closing {
-		return
-	} else { // nolint: revive
-		defer done()
-	}
-
-	for {
-		conn, err := session.listener.Accept()
-		if err != nil {
-			if errors.Is(err, net.ErrClosed) {
-				return
-			}
-			x.logger.Warn(ctx, "failed to accept X11 connection", slog.Error(err))
-			return
-		}
-
-		// Update session usage time since a new X11 connection was forwarded.
-		x.mu.Lock()
-		session.usedAt = time.Now()
-		x.mu.Unlock()
-		if x11.SingleConnection {
-			x.logger.Debug(ctx, "single connection requested, closing X11 listener")
-			x.closeAndRemoveSession(session)
-		}
-
-		var originAddr string
-		var originPort uint32
-
-		if tcpConn, ok := conn.(*net.TCPConn); ok {
-			if tcpAddr, ok := tcpConn.LocalAddr().(*net.TCPAddr); ok {
-				originAddr = tcpAddr.IP.String()
-				// #nosec G115 - Safe conversion as TCP port numbers are within uint32 range (0-65535)
-				originPort = uint32(tcpAddr.Port)
-			}
-		}
-		// Fallback values for in-memory or non-TCP connections.
-		if originAddr == "" {
-			originAddr = "127.0.0.1"
-		}
-
-		channel, reqs, err := serverConn.OpenChannel("x11", gossh.Marshal(struct {
-			OriginatorAddress string
-			OriginatorPort    uint32
-		}{
-			OriginatorAddress: originAddr,
-			OriginatorPort:    originPort,
-		}))
-		if err != nil {
-			x.logger.Warn(ctx, "failed to open X11 channel", slog.Error(err))
-			_ = conn.Close()
-			continue
-		}
-		go gossh.DiscardRequests(reqs)
-
-		if !x.trackConn(conn, true) {
-			x.logger.Warn(ctx, "failed to track X11 connection")
-			_ = conn.Close()
-			continue
-		}
-		go func() {
-			defer x.trackConn(conn, false)
-			Bicopy(ctx, conn, channel)
-		}()
-	}
-}
-
-// closeAndRemoveSession closes and removes the session.
-func (x *x11Forwarder) closeAndRemoveSession(x11session *x11Session) {
-	_ = x11session.listener.Close()
-	x.mu.Lock()
-	delete(x.sessions, x11session)
-	x.mu.Unlock()
-}
-
-// createX11Session creates an X11 forwarding session.
-func (x *x11Forwarder) createX11Session(ctx context.Context, sshSession ssh.Session) (*x11Session, error) {
-	var (
-		ln      net.Listener
-		display int
-		err     error
-	)
-	// retry listener creation after evictions. Limit to 10 retries to prevent pathological cases looping forever.
-	const maxRetries = 10
-	for try := range maxRetries {
-		ln, display, err = x.createX11Listener(ctx)
-		if err == nil {
-			break
-		}
-		if try == maxRetries-1 {
-			return nil, xerrors.New("max retries exceeded while creating X11 session")
-		}
-		x.logger.Warn(ctx, "failed to create X11 listener; will evict an X11 forwarding session",
-			slog.F("num_current_sessions", x.numSessions()),
-			slog.Error(err))
-		x.evictLeastRecentlyUsedSession()
-	}
-	x.mu.Lock()
-	defer x.mu.Unlock()
-	if x.closing {
-		closeErr := ln.Close()
-		if closeErr != nil {
-			x.logger.Error(ctx, "error closing X11 listener", slog.Error(closeErr))
-		}
-		return nil, xerrors.New("server is closing")
-	}
-	x11Sess := &x11Session{
-		session:  sshSession,
-		display:  display,
-		listener: ln,
-		usedAt:   time.Now(),
-	}
-	x.sessions[x11Sess] = struct{}{}
-	return x11Sess, nil
-}
-
-func (x *x11Forwarder) numSessions() int {
-	x.mu.Lock()
-	defer x.mu.Unlock()
-	return len(x.sessions)
-}
-
-func (x *x11Forwarder) popLeastRecentlyUsedSession() *x11Session {
-	x.mu.Lock()
-	defer x.mu.Unlock()
-	var lru *x11Session
-	for s := range x.sessions {
-		if lru == nil {
-			lru = s
-			continue
-		}
-		if s.usedAt.Before(lru.usedAt) {
-			lru = s
-			continue
-		}
-	}
-	if lru == nil {
-		x.logger.Debug(context.Background(), "tried to pop from empty set of X11 sessions")
-		return nil
-	}
-	delete(x.sessions, lru)
-	return lru
-}
-
-func (x *x11Forwarder) evictLeastRecentlyUsedSession() {
-	lru := x.popLeastRecentlyUsedSession()
-	if lru == nil {
-		return
-	}
-	err := lru.listener.Close()
-	if err != nil {
-		x.logger.Error(context.Background(), "failed to close evicted X11 session listener", slog.Error(err))
-	}
-	// when we evict, we also want to force the SSH session to be closed as well. This is because we intend to reuse
-	// the X11 TCP listener port for a new X11 forwarding session. If we left the SSH session up, then graphical apps
-	// started in that session could potentially connect to an unintended X11 Server (i.e. the display on a different
-	// computer than the one that started the SSH session). Most likely, this session is a zombie anyway if we've
-	// reached the maximum number of X11 forwarding sessions.
-	err = lru.session.Close()
-	if err != nil {
-		x.logger.Error(context.Background(), "failed to close evicted X11 SSH session", slog.Error(err))
-	}
-}
+	go func() {
+		defer ln.Close()
+		defer s.trackListener(ln, false)
+
+		for {
+			conn, err := ln.Accept()
+			if err != nil {
+				if errors.Is(err, net.ErrClosed) {
+					return
+				}
+				s.logger.Warn(ctx, "failed to accept X11 connection", slog.Error(err))
+				return
+			}
+
+			if x11.SingleConnection {
+				s.logger.Debug(ctx, "single connection requested, closing X11 listener")
+				_ = ln.Close()
+			}
+
+			tcpConn, ok := conn.(*net.TCPConn)
+			if !ok {
+				s.logger.Warn(ctx, fmt.Sprintf("failed to cast connection to TCPConn. got: %T", conn))
+				_ = conn.Close()
+				continue
+			}
+			tcpAddr, ok := tcpConn.LocalAddr().(*net.TCPAddr)
+			if !ok {
+				s.logger.Warn(ctx, fmt.Sprintf("failed to cast local address to TCPAddr. got: %T", tcpConn.LocalAddr()))
+				_ = conn.Close()
+				continue
+			}
+
+			channel, reqs, err := serverConn.OpenChannel("x11", gossh.Marshal(struct {
+				OriginatorAddress string
+				OriginatorPort    uint32
+			}{
+				OriginatorAddress: tcpAddr.IP.String(),
+				// #nosec G115 - Safe conversion as TCP port numbers are within uint32 range (0-65535)
+				OriginatorPort: uint32(tcpAddr.Port),
+			}))
+			if err != nil {
+				s.logger.Warn(ctx, "failed to open X11 channel", slog.Error(err))
+				_ = conn.Close()
+				continue
+			}
+			go gossh.DiscardRequests(reqs)
+
+			if !s.trackConn(ln, conn, true) {
+				s.logger.Warn(ctx, "failed to track X11 connection")
+				_ = conn.Close()
+				continue
+			}
+			go func() {
+				defer s.trackConn(ln, conn, false)
+				Bicopy(ctx, conn, channel)
+			}()
+		}
+	}()
+
+	return display, true
+}
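The removed eviction logic scans the session set for the smallest `usedAt` and drops that session. The core selection, isolated from the locking and SSH teardown (the `session` type here is a hypothetical stand-in for `x11Session`):

```go
package main

import (
	"fmt"
	"time"
)

// session stands in for x11Session: only the usedAt timestamp matters
// for LRU eviction.
type session struct {
	name   string
	usedAt time.Time
}

// sessionAt is a small constructor used by the demo below.
func sessionAt(name string, unixSec int64) *session {
	return &session{name: name, usedAt: time.Unix(unixSec, 0)}
}

// popLRU removes and returns the least-recently-used session from the set,
// mirroring popLeastRecentlyUsedSession above. It returns nil when the set
// is empty.
func popLRU(sessions map[*session]struct{}) *session {
	var lru *session
	for s := range sessions {
		if lru == nil || s.usedAt.Before(lru.usedAt) {
			lru = s
		}
	}
	if lru != nil {
		delete(sessions, lru)
	}
	return lru
}

func main() {
	a := sessionAt("a", 100) // oldest
	b := sessionAt("b", 200)
	set := map[*session]struct{}{a: {}, b: {}}
	fmt.Println(popLRU(set).name) // a
}
```

As the removed comment explains, the real code also closes the evicted SSH session, since the freed display port is about to be reused.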

 // createX11Listener creates a listener for X11 forwarding, it will use
 // the next available port starting from X11StartPort and displayOffset.
-func (x *x11Forwarder) createX11Listener(ctx context.Context) (ln net.Listener, display int, err error) {
+func createX11Listener(ctx context.Context, displayOffset int) (ln net.Listener, display int, err error) {
+	var lc net.ListenConfig
 	// Look for an open port to listen on.
-	for port := X11StartPort + x.displayOffset; port <= X11MaxPort; port++ {
-		if ctx.Err() != nil {
-			return nil, -1, ctx.Err()
-		}
-
-		ln, err = x.network.Listen("tcp", fmt.Sprintf("localhost:%d", port))
+	for port := X11StartPort + displayOffset; port < math.MaxUint16; port++ {
+		ln, err = lc.Listen(ctx, "tcp", fmt.Sprintf("localhost:%d", port))
 		if err == nil {
 			display = port - X11StartPort
 			return ln, display, nil
@@ -328,49 +156,6 @@ func (x *x11Forwarder) createX11Listener(ctx context.Context) (ln net.Listener,
 	return nil, -1, xerrors.Errorf("failed to find open port for X11 listener: %w", err)
 }
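Both versions of `createX11Listener` are a linear scan for the first bindable port, deriving the X11 display number from the offset into the range. The same idea in isolation (a hypothetical helper; the port range below is chosen arbitrarily for the demo):

```go
package main

import (
	"fmt"
	"net"
)

// listenFirstFree scans ports [start, max] and returns the first listener it
// can open, plus the scan offset (the analogue of the display number above).
func listenFirstFree(start, max int) (net.Listener, int, error) {
	for port := start; port <= max; port++ {
		ln, err := net.Listen("tcp", fmt.Sprintf("localhost:%d", port))
		if err == nil {
			return ln, port - start, nil
		}
	}
	return nil, -1, fmt.Errorf("no free port in [%d, %d]", start, max)
}

func main() {
	// Occupy one port, then confirm the scan skips past it to the next one.
	busy, _, err := listenFirstFree(26000, 26010)
	if err != nil {
		fmt.Println("no port available:", err)
		return
	}
	defer busy.Close()

	ln, display, err := listenFirstFree(26000, 26010)
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	defer ln.Close()
	fmt.Println("second scan landed on display offset", display)
}
```

The old code bounded the scan at `X11MaxPort` and evicted sessions when it ran out; the new code simply scans up to the maximum TCP port.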

-// trackConn registers the connection with the x11Forwarder. If the server is
-// closed, the connection is not registered and should be closed.
-//
-//nolint:revive
-func (x *x11Forwarder) trackConn(c net.Conn, add bool) (ok bool) {
-	x.mu.Lock()
-	defer x.mu.Unlock()
-	if add {
-		if x.closing {
-			// Server or listener closed.
-			return false
-		}
-		x.wg.Add(1)
-		x.connections[c] = struct{}{}
-		return true
-	}
-	x.wg.Done()
-	delete(x.connections, c)
-	return true
-}
-
-func (x *x11Forwarder) Close() error {
-	x.mu.Lock()
-	x.closing = true
-
-	for s := range x.sessions {
-		sErr := s.listener.Close()
-		if sErr != nil {
-			x.logger.Debug(context.Background(), "failed to close X11 listener", slog.Error(sErr))
-		}
-	}
-	for c := range x.connections {
-		cErr := c.Close()
-		if cErr != nil {
-			x.logger.Debug(context.Background(), "failed to close X11 connection", slog.Error(cErr))
-		}
-	}
-
-	x.mu.Unlock()
-	x.wg.Wait()
-	return nil
-}
-
 // addXauthEntry adds an Xauthority entry to the Xauthority file.
 // The Xauthority file is located at ~/.Xauthority.
 func addXauthEntry(ctx context.Context, fs afero.Fs, host string, display string, authProtocol string, authCookie string) error {

+14 -229
@@ -3,9 +3,9 @@ package agentssh_test
 import (
 	"bufio"
 	"bytes"
 	"context"
 	"encoding/hex"
 	"fmt"
 	"io"
 	"net"
 	"os"
 	"path/filepath"
@@ -32,19 +32,10 @@ func TestServer_X11(t *testing.T) {
 		t.Skip("X11 forwarding is only supported on Linux")
 	}

-	ctx := testutil.Context(t, testutil.WaitShort)
+	ctx := context.Background()
 	logger := testutil.Logger(t)
-	fs := afero.NewMemMapFs()
-
-	// Use in-process networking for X11 forwarding.
-	inproc := testutil.NewInProcNet()
-
-	// Create server config with custom X11 listener.
-	cfg := &agentssh.Config{
-		X11Net: inproc,
-	}
-
-	s, err := agentssh.NewServer(ctx, logger, prometheus.NewRegistry(), fs, agentexec.DefaultExecer, cfg)
+	fs := afero.NewOsFs()
+	s, err := agentssh.NewServer(ctx, logger, prometheus.NewRegistry(), fs, agentexec.DefaultExecer, &agentssh.Config{})
 	require.NoError(t, err)
 	defer s.Close()
 	err = s.UpdateHostSigner(42)
@@ -102,15 +93,17 @@ func TestServer_X11(t *testing.T) {

 	x11Chans := c.HandleChannelOpen("x11")
 	payload := "hello world"
-	go func() {
-		conn, err := inproc.Dial(ctx, testutil.NewAddr("tcp", fmt.Sprintf("localhost:%d", agentssh.X11StartPort+displayNumber)))
-		assert.NoError(t, err)
-		_, err = conn.Write([]byte(payload))
-		assert.NoError(t, err)
-		_ = conn.Close()
-	}()
+	require.Eventually(t, func() bool {
+		conn, err := net.Dial("tcp", fmt.Sprintf("localhost:%d", agentssh.X11StartPort+displayNumber))
+		if err == nil {
+			_, err = conn.Write([]byte(payload))
+			assert.NoError(t, err)
+			_ = conn.Close()
+		}
+		return err == nil
+	}, testutil.WaitShort, testutil.IntervalFast)

-	x11 := testutil.RequireReceive(ctx, t, x11Chans)
+	x11 := <-x11Chans
 	ch, reqs, err := x11.Accept()
 	require.NoError(t, err)
 	go gossh.DiscardRequests(reqs)
@@ -128,211 +121,3 @@ func TestServer_X11(t *testing.T) {
 	_, err = fs.Stat(filepath.Join(home, ".Xauthority"))
 	require.NoError(t, err)
 }

-func TestServer_X11_EvictionLRU(t *testing.T) {
-	t.Parallel()
-	if runtime.GOOS != "linux" {
-		t.Skip("X11 forwarding is only supported on Linux")
-	}
-
-	ctx := testutil.Context(t, testutil.WaitSuperLong)
-	logger := testutil.Logger(t)
-	fs := afero.NewMemMapFs()
-
-	// Use in-process networking for X11 forwarding.
-	inproc := testutil.NewInProcNet()
-
-	cfg := &agentssh.Config{
-		X11Net: inproc,
-	}
-
-	s, err := agentssh.NewServer(ctx, logger, prometheus.NewRegistry(), fs, agentexec.DefaultExecer, cfg)
-	require.NoError(t, err)
-	defer s.Close()
-	err = s.UpdateHostSigner(42)
-	require.NoError(t, err)
-
-	ln, err := net.Listen("tcp", "127.0.0.1:0")
-	require.NoError(t, err)
-
-	done := testutil.Go(t, func() {
-		err := s.Serve(ln)
-		assert.Error(t, err)
-	})
-
-	c := sshClient(t, ln.Addr().String())
-
-	// block off one port to test x11Forwarder evicts at highest port, not number of listeners.
-	externalListener, err := inproc.Listen("tcp",
-		fmt.Sprintf("localhost:%d", agentssh.X11StartPort+agentssh.X11DefaultDisplayOffset+1))
-	require.NoError(t, err)
-	defer externalListener.Close()
-
-	// Calculate how many simultaneous X11 sessions we can create given the
-	// configured port range.
-	startPort := agentssh.X11StartPort + agentssh.X11DefaultDisplayOffset
-	maxSessions := agentssh.X11MaxPort - startPort + 1 - 1 // -1 for the blocked port
-	require.Greater(t, maxSessions, 0, "expected a positive maxSessions value")
-
-	// shellSession holds references to the session and its standard streams so
-	// that the test can keep them open (and optionally interact with them) for
-	// the lifetime of the test. If we don't start the Shell with pipes in place,
-	// the session will be torn down asynchronously during the test.
-	type shellSession struct {
-		sess   *gossh.Session
-		stdin  io.WriteCloser
-		stdout io.Reader
-		stderr io.Reader
-		// scanner is used to read the output of the session, line by line.
-		scanner *bufio.Scanner
-	}
-
-	sessions := make([]shellSession, 0, maxSessions)
-	for i := 0; i < maxSessions; i++ {
-		sess, err := c.NewSession()
-		require.NoError(t, err)
-
-		_, err = sess.SendRequest("x11-req", true, gossh.Marshal(ssh.X11{
-			AuthProtocol: "MIT-MAGIC-COOKIE-1",
-			AuthCookie:   hex.EncodeToString([]byte(fmt.Sprintf("cookie%d", i))),
-			ScreenNumber: uint32(0),
-		}))
-		require.NoError(t, err)
-
-		stdin, err := sess.StdinPipe()
-		require.NoError(t, err)
-		stdout, err := sess.StdoutPipe()
-		require.NoError(t, err)
-		stderr, err := sess.StderrPipe()
-		require.NoError(t, err)
-		require.NoError(t, sess.Shell())
-
-		// The SSH server lazily starts the session. We need to write a command
-		// and read back to ensure the X11 forwarding is started.
-		scanner := bufio.NewScanner(stdout)
-		msg := fmt.Sprintf("ready-%d", i)
-		_, err = stdin.Write([]byte("echo " + msg + "\n"))
-		require.NoError(t, err)
-		// Read until we get the message (first token may be empty due to shell prompt)
-		for scanner.Scan() {
-			line := strings.TrimSpace(scanner.Text())
-			if strings.Contains(line, msg) {
-				break
-			}
-		}
-		require.NoError(t, scanner.Err())
-
-		sessions = append(sessions, shellSession{
-			sess:    sess,
-			stdin:   stdin,
-			stdout:  stdout,
-			stderr:  stderr,
-			scanner: scanner,
-		})
-	}
-
-	// Connect X11 forwarding to the first session. This is used to test that
-	// connecting counts as a use of the display.
-	x11Chans := c.HandleChannelOpen("x11")
payload := "hello world"
|
||||
go func() {
|
||||
conn, err := inproc.Dial(ctx, testutil.NewAddr("tcp", fmt.Sprintf("localhost:%d", agentssh.X11StartPort+agentssh.X11DefaultDisplayOffset)))
|
||||
if !assert.NoError(t, err) {
|
||||
return
|
||||
}
|
||||
_, err = conn.Write([]byte(payload))
|
||||
assert.NoError(t, err)
|
||||
_ = conn.Close()
|
||||
}()
|
||||
|
||||
x11 := testutil.RequireReceive(ctx, t, x11Chans)
|
||||
ch, reqs, err := x11.Accept()
|
||||
require.NoError(t, err)
|
||||
go gossh.DiscardRequests(reqs)
|
||||
got := make([]byte, len(payload))
|
||||
_, err = ch.Read(got)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, payload, string(got))
|
||||
_ = ch.Close()
|
||||
|
||||
// Create one more session which should evict a session and reuse the display.
|
||||
// The first session was used to connect X11 forwarding, so it should not be evicted.
|
||||
// Therefore, the second session should be evicted and its display reused.
|
||||
extraSess, err := c.NewSession()
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = extraSess.SendRequest("x11-req", true, gossh.Marshal(ssh.X11{
|
||||
AuthProtocol: "MIT-MAGIC-COOKIE-1",
|
||||
AuthCookie: hex.EncodeToString([]byte("extra")),
|
||||
ScreenNumber: uint32(0),
|
||||
}))
|
||||
require.NoError(t, err)
|
||||
|
||||
// Ask the remote side for the DISPLAY value so we can extract the display
|
||||
// number that was assigned to this session.
|
||||
out, err := extraSess.Output("echo DISPLAY=$DISPLAY")
|
||||
require.NoError(t, err)
|
||||
|
||||
// Example output line: "DISPLAY=localhost:10.0".
|
||||
var newDisplayNumber int
|
||||
{
|
||||
sc := bufio.NewScanner(bytes.NewReader(out))
|
||||
for sc.Scan() {
|
||||
line := strings.TrimSpace(sc.Text())
|
||||
if strings.HasPrefix(line, "DISPLAY=") {
|
||||
parts := strings.SplitN(line, ":", 2)
|
||||
require.Len(t, parts, 2)
|
||||
displayPart := parts[1]
|
||||
if strings.Contains(displayPart, ".") {
|
||||
displayPart = strings.SplitN(displayPart, ".", 2)[0]
|
||||
}
|
||||
var convErr error
|
||||
newDisplayNumber, convErr = strconv.Atoi(displayPart)
|
||||
require.NoError(t, convErr)
|
||||
break
|
||||
}
|
||||
}
|
||||
require.NoError(t, sc.Err())
|
||||
}
|
||||
|
||||
// The display number reused should correspond to the SECOND session (display offset 12)
|
||||
expectedDisplay := agentssh.X11DefaultDisplayOffset + 2 // +1 was blocked port
|
||||
assert.Equal(t, expectedDisplay, newDisplayNumber, "second session should have been evicted and its display reused")
|
||||
|
||||
// First session should still be alive: send cmd and read output.
|
||||
msgFirst := "still-alive"
|
||||
_, err = sessions[0].stdin.Write([]byte("echo " + msgFirst + "\n"))
|
||||
require.NoError(t, err)
|
||||
for sessions[0].scanner.Scan() {
|
||||
line := strings.TrimSpace(sessions[0].scanner.Text())
|
||||
if strings.Contains(line, msgFirst) {
|
||||
break
|
||||
}
|
||||
}
|
||||
require.NoError(t, sessions[0].scanner.Err())
|
||||
|
||||
// Second session should now be closed.
|
||||
_, err = sessions[1].stdin.Write([]byte("echo dead\n"))
|
||||
require.ErrorIs(t, err, io.EOF)
|
||||
err = sessions[1].sess.Wait()
|
||||
require.Error(t, err)
|
||||
|
||||
// Cleanup.
|
||||
for i, sh := range sessions {
|
||||
if i == 1 {
|
||||
// already closed
|
||||
continue
|
||||
}
|
||||
err = sh.stdin.Close()
|
||||
require.NoError(t, err)
|
||||
err = sh.sess.Wait()
|
||||
require.NoError(t, err)
|
||||
}
|
||||
err = extraSess.Close()
|
||||
require.ErrorIs(t, err, io.EOF)
|
||||
|
||||
err = s.Close()
|
||||
require.NoError(t, err)
|
||||
_ = testutil.TryReceive(ctx, t, done)
|
||||
}
|
||||
|
||||
@@ -1,6 +1,7 @@
package agenttest

import (
	"context"
	"net/url"
	"testing"

@@ -30,11 +31,18 @@ func New(t testing.TB, coderURL *url.URL, agentToken string, opts ...func(*agent
	}

	if o.Client == nil {
		agentClient := agentsdk.New(coderURL, agentsdk.WithFixedToken(agentToken))
		agentClient := agentsdk.New(coderURL)
		agentClient.SetSessionToken(agentToken)
		agentClient.SDK.SetLogger(log)
		o.Client = agentClient
	}

	if o.ExchangeToken == nil {
		o.ExchangeToken = func(_ context.Context) (string, error) {
			return agentToken, nil
		}
	}

	if o.LogDir == "" {
		o.LogDir = t.TempDir()
	}

@@ -3,7 +3,6 @@ package agenttest
import (
	"context"
	"io"
	"net/http"
	"slices"
	"sync"
	"sync/atomic"
@@ -29,7 +28,6 @@ import (
	"github.com/coder/coder/v2/tailnet"
	"github.com/coder/coder/v2/tailnet/proto"
	"github.com/coder/coder/v2/testutil"
	"github.com/coder/websocket"
)

const statsInterval = 500 * time.Millisecond
@@ -88,34 +86,10 @@ type Client struct {
	fakeAgentAPI *FakeAgentAPI
	LastWorkspaceAgent func()

	mu sync.Mutex // Protects following.
	logs              []agentsdk.Log
	derpMapUpdates    chan *tailcfg.DERPMap
	derpMapOnce       sync.Once
	refreshTokenCalls int
}

func (*Client) AsRequestOption() codersdk.RequestOption {
	return func(_ *http.Request) {}
}

func (*Client) SetDialOption(*websocket.DialOptions) {}

func (*Client) GetSessionToken() string {
	return "agenttest-token"
}

func (c *Client) RefreshToken(context.Context) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.refreshTokenCalls++
	return nil
}

func (c *Client) GetNumRefreshTokenCalls() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.refreshTokenCalls
	mu sync.Mutex // Protects following.
	logs           []agentsdk.Log
	derpMapUpdates chan *tailcfg.DERPMap
	derpMapOnce    sync.Once
}

func (*Client) RewriteDERPMap(*tailcfg.DERPMap) {}

+2 -13
@@ -6,7 +6,6 @@ import (
	"time"

	"github.com/go-chi/chi/v5"
	"github.com/google/uuid"

	"github.com/coder/coder/v2/coderd/httpapi"
	"github.com/coder/coder/v2/codersdk"
@@ -37,19 +36,12 @@ func (a *agent) apiHandler() http.Handler {
		cacheDuration: cacheDuration,
	}

	if a.devcontainers {
		if a.containerAPI != nil {
			r.Mount("/api/v0/containers", a.containerAPI.Routes())
		} else if manifest := a.manifest.Load(); manifest != nil && manifest.ParentID != uuid.Nil {
			r.HandleFunc("/api/v0/containers", func(w http.ResponseWriter, r *http.Request) {
				httpapi.Write(r.Context(), w, http.StatusForbidden, codersdk.Response{
					Message: "Dev Container feature not supported.",
					Detail:  "Dev Container integration inside other Dev Containers is explicitly not supported.",
				})
			})
		} else {
			r.HandleFunc("/api/v0/containers", func(w http.ResponseWriter, r *http.Request) {
				httpapi.Write(r.Context(), w, http.StatusForbidden, codersdk.Response{
					Message: "Dev Container feature not enabled.",
					Message: "The agent dev containers feature is experimental and not enabled by default.",
					Detail:  "To enable this feature, set CODER_AGENT_DEVCONTAINERS_ENABLE=true in your template.",
				})
			})
@@ -60,9 +52,6 @@ func (a *agent) apiHandler() http.Handler {
	r.Get("/api/v0/listening-ports", lp.handler)
	r.Get("/api/v0/netcheck", a.HandleNetcheck)
	r.Post("/api/v0/list-directory", a.HandleLS)
	r.Get("/api/v0/read-file", a.HandleReadFile)
	r.Post("/api/v0/write-file", a.HandleWriteFile)
	r.Post("/api/v0/edit-files", a.HandleEditFiles)
	r.Get("/debug/logs", a.HandleHTTPDebugLogs)
	r.Get("/debug/magicsock", a.HandleHTTPDebugMagicsock)
	r.Get("/debug/magicsock/debug-logging/{state}", a.HandleHTTPMagicsockDebugLoggingState)

+1 -2
@@ -63,7 +63,6 @@ func NewAppHealthReporterWithClock(
	// run a ticker for each app health check.
	var mu sync.RWMutex
	failures := make(map[uuid.UUID]int, 0)
	client := &http.Client{}
	for _, nextApp := range apps {
		if !shouldStartTicker(nextApp) {
			continue
@@ -92,7 +91,7 @@ func NewAppHealthReporterWithClock(
			if err != nil {
				return err
			}
			res, err := client.Do(req)
			res, err := http.DefaultClient.Do(req)
			if err != nil {
				return err
			}

-273
@@ -1,273 +0,0 @@
package agent

import (
	"context"
	"errors"
	"fmt"
	"io"
	"mime"
	"net/http"
	"os"
	"path/filepath"
	"strconv"
	"syscall"

	"github.com/icholy/replace"
	"github.com/spf13/afero"
	"golang.org/x/text/transform"
	"golang.org/x/xerrors"

	"cdr.dev/slog"
	"github.com/coder/coder/v2/coderd/httpapi"
	"github.com/coder/coder/v2/codersdk"
	"github.com/coder/coder/v2/codersdk/workspacesdk"
)

type HTTPResponseCode = int

func (a *agent) HandleReadFile(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	query := r.URL.Query()
	parser := httpapi.NewQueryParamParser().RequiredNotEmpty("path")
	path := parser.String(query, "", "path")
	offset := parser.PositiveInt64(query, 0, "offset")
	limit := parser.PositiveInt64(query, 0, "limit")
	parser.ErrorExcessParams(query)
	if len(parser.Errors) > 0 {
		httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
			Message:     "Query parameters have invalid values.",
			Validations: parser.Errors,
		})
		return
	}

	status, err := a.streamFile(ctx, rw, path, offset, limit)
	if err != nil {
		httpapi.Write(ctx, rw, status, codersdk.Response{
			Message: err.Error(),
		})
		return
	}
}

func (a *agent) streamFile(ctx context.Context, rw http.ResponseWriter, path string, offset, limit int64) (HTTPResponseCode, error) {
	if !filepath.IsAbs(path) {
		return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
	}

	f, err := a.filesystem.Open(path)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
		case errors.Is(err, os.ErrNotExist):
			status = http.StatusNotFound
		case errors.Is(err, os.ErrPermission):
			status = http.StatusForbidden
		}
		return status, err
	}
	defer f.Close()

	stat, err := f.Stat()
	if err != nil {
		return http.StatusInternalServerError, err
	}

	if stat.IsDir() {
		return http.StatusBadRequest, xerrors.Errorf("open %s: not a file", path)
	}

	size := stat.Size()
	if limit == 0 {
		limit = size
	}
	bytesRemaining := max(size-offset, 0)
	bytesToRead := min(bytesRemaining, limit)

	// Relying on just the file name for the mime type for now.
	mimeType := mime.TypeByExtension(filepath.Ext(path))
	if mimeType == "" {
		mimeType = "application/octet-stream"
	}
	rw.Header().Set("Content-Type", mimeType)
	rw.Header().Set("Content-Length", strconv.FormatInt(bytesToRead, 10))
	rw.WriteHeader(http.StatusOK)

	reader := io.NewSectionReader(f, offset, bytesToRead)
	_, err = io.Copy(rw, reader)
	if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
		a.logger.Error(ctx, "workspace agent read file", slog.Error(err))
	}

	return 0, nil
}

func (a *agent) HandleWriteFile(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	query := r.URL.Query()
	parser := httpapi.NewQueryParamParser().RequiredNotEmpty("path")
	path := parser.String(query, "", "path")
	parser.ErrorExcessParams(query)
	if len(parser.Errors) > 0 {
		httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
			Message:     "Query parameters have invalid values.",
			Validations: parser.Errors,
		})
		return
	}

	status, err := a.writeFile(ctx, r, path)
	if err != nil {
		httpapi.Write(ctx, rw, status, codersdk.Response{
			Message: err.Error(),
		})
		return
	}

	httpapi.Write(ctx, rw, http.StatusOK, codersdk.Response{
		Message: fmt.Sprintf("Successfully wrote to %q", path),
	})
}

func (a *agent) writeFile(ctx context.Context, r *http.Request, path string) (HTTPResponseCode, error) {
	if !filepath.IsAbs(path) {
		return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
	}

	dir := filepath.Dir(path)
	err := a.filesystem.MkdirAll(dir, 0o755)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
		case errors.Is(err, os.ErrPermission):
			status = http.StatusForbidden
		case errors.Is(err, syscall.ENOTDIR):
			status = http.StatusBadRequest
		}
		return status, err
	}

	f, err := a.filesystem.Create(path)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
		case errors.Is(err, os.ErrPermission):
			status = http.StatusForbidden
		case errors.Is(err, syscall.EISDIR):
			status = http.StatusBadRequest
		}
		return status, err
	}
	defer f.Close()

	_, err = io.Copy(f, r.Body)
	if err != nil && !errors.Is(err, io.EOF) && ctx.Err() == nil {
		a.logger.Error(ctx, "workspace agent write file", slog.Error(err))
	}

	return 0, nil
}

func (a *agent) HandleEditFiles(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	var req workspacesdk.FileEditRequest
	if !httpapi.Read(ctx, rw, r, &req) {
		return
	}

	if len(req.Files) == 0 {
		httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
			Message: "must specify at least one file",
		})
		return
	}

	var combinedErr error
	status := http.StatusOK
	for _, edit := range req.Files {
		s, err := a.editFile(r.Context(), edit.Path, edit.Edits)
		// Keep the highest response status, so 500 will be preferred over 400, etc.
		if s > status {
			status = s
		}
		if err != nil {
			combinedErr = errors.Join(combinedErr, err)
		}
	}

	if combinedErr != nil {
		httpapi.Write(ctx, rw, status, codersdk.Response{
			Message: combinedErr.Error(),
		})
		return
	}

	httpapi.Write(ctx, rw, http.StatusOK, codersdk.Response{
		Message: "Successfully edited file(s)",
	})
}

func (a *agent) editFile(ctx context.Context, path string, edits []workspacesdk.FileEdit) (int, error) {
	if path == "" {
		return http.StatusBadRequest, xerrors.New("\"path\" is required")
	}

	if !filepath.IsAbs(path) {
		return http.StatusBadRequest, xerrors.Errorf("file path must be absolute: %q", path)
	}

	if len(edits) == 0 {
		return http.StatusBadRequest, xerrors.New("must specify at least one edit")
	}

	f, err := a.filesystem.Open(path)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
		case errors.Is(err, os.ErrNotExist):
			status = http.StatusNotFound
		case errors.Is(err, os.ErrPermission):
			status = http.StatusForbidden
		}
		return status, err
	}
	defer f.Close()

	stat, err := f.Stat()
	if err != nil {
		return http.StatusInternalServerError, err
	}

	if stat.IsDir() {
		return http.StatusBadRequest, xerrors.Errorf("open %s: not a file", path)
	}

	transforms := make([]transform.Transformer, len(edits))
	for i, edit := range edits {
		transforms[i] = replace.String(edit.Search, edit.Replace)
	}

	tmpfile, err := afero.TempFile(a.filesystem, "", filepath.Base(path))
	if err != nil {
		return http.StatusInternalServerError, err
	}
	defer tmpfile.Close()

	_, err = io.Copy(tmpfile, replace.Chain(f, transforms...))
	if err != nil {
		if rerr := a.filesystem.Remove(tmpfile.Name()); rerr != nil {
			a.logger.Warn(ctx, "unable to clean up temp file", slog.Error(rerr))
		}
		return http.StatusInternalServerError, xerrors.Errorf("edit %s: %w", path, err)
	}

	err = a.filesystem.Rename(tmpfile.Name(), path)
	if err != nil {
		return http.StatusInternalServerError, err
	}

	return 0, nil
}
@@ -1,722 +0,0 @@
|
||||
package agent_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"syscall"
|
||||
"testing"
|
||||
|
||||
"github.com/spf13/afero"
|
||||
"github.com/stretchr/testify/require"
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"github.com/coder/coder/v2/agent"
|
||||
"github.com/coder/coder/v2/agent/agenttest"
|
||||
"github.com/coder/coder/v2/coderd/coderdtest"
|
||||
"github.com/coder/coder/v2/codersdk/agentsdk"
|
||||
"github.com/coder/coder/v2/codersdk/workspacesdk"
|
||||
"github.com/coder/coder/v2/testutil"
|
||||
)
|
||||
|
||||
type testFs struct {
|
||||
afero.Fs
|
||||
// intercept can return an error for testing when a call fails.
|
||||
intercept func(call, file string) error
|
||||
}
|
||||
|
||||
func newTestFs(base afero.Fs, intercept func(call, file string) error) *testFs {
|
||||
return &testFs{
|
||||
Fs: base,
|
||||
intercept: intercept,
|
||||
}
|
||||
}
|
||||
|
||||
func (fs *testFs) Open(name string) (afero.File, error) {
|
||||
if err := fs.intercept("open", name); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return fs.Fs.Open(name)
|
||||
}
|
||||
|
||||
func (fs *testFs) Create(name string) (afero.File, error) {
|
||||
if err := fs.intercept("create", name); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Unlike os, afero lets you create files where directories already exist and
|
||||
// lets you nest them underneath files, somehow.
|
||||
stat, err := fs.Fs.Stat(name)
|
||||
if err == nil && stat.IsDir() {
|
||||
return nil, &os.PathError{
|
||||
Op: "open",
|
||||
Path: name,
|
||||
Err: syscall.EISDIR,
|
||||
}
|
||||
}
|
||||
stat, err = fs.Fs.Stat(filepath.Dir(name))
|
||||
if err == nil && !stat.IsDir() {
|
||||
return nil, &os.PathError{
|
||||
Op: "open",
|
||||
Path: name,
|
||||
Err: syscall.ENOTDIR,
|
||||
}
|
||||
}
|
||||
return fs.Fs.Create(name)
|
||||
}
|
||||
|
||||
func (fs *testFs) MkdirAll(name string, mode os.FileMode) error {
|
||||
if err := fs.intercept("mkdirall", name); err != nil {
|
||||
return err
|
||||
}
|
||||
// Unlike os, afero lets you create directories where files already exist and
|
||||
// lets you nest them underneath files somehow.
|
||||
stat, err := fs.Fs.Stat(filepath.Dir(name))
|
||||
if err == nil && !stat.IsDir() {
|
||||
return &os.PathError{
|
||||
Op: "mkdir",
|
||||
Path: name,
|
||||
Err: syscall.ENOTDIR,
|
||||
}
|
||||
}
|
||||
stat, err = fs.Fs.Stat(name)
|
||||
if err == nil && !stat.IsDir() {
|
||||
return &os.PathError{
|
||||
Op: "mkdir",
|
||||
Path: name,
|
||||
Err: syscall.ENOTDIR,
|
||||
}
|
||||
}
|
||||
return fs.Fs.MkdirAll(name, mode)
|
||||
}
|
||||
|
||||
func (fs *testFs) Rename(oldName, newName string) error {
|
||||
if err := fs.intercept("rename", newName); err != nil {
|
||||
return err
|
||||
}
|
||||
return fs.Fs.Rename(oldName, newName)
|
||||
}
|
||||
|
||||
func TestReadFile(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
tmpdir := os.TempDir()
|
||||
noPermsFilePath := filepath.Join(tmpdir, "no-perms")
|
||||
//nolint:dogsled
|
||||
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
|
||||
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
|
||||
if file == noPermsFilePath {
|
||||
return os.ErrPermission
|
||||
}
|
||||
return nil
|
||||
})
|
||||
})
|
||||
|
||||
dirPath := filepath.Join(tmpdir, "a-directory")
|
||||
err := fs.MkdirAll(dirPath, 0o755)
|
||||
require.NoError(t, err)
|
||||
|
||||
filePath := filepath.Join(tmpdir, "file")
|
||||
err = afero.WriteFile(fs, filePath, []byte("content"), 0o644)
|
||||
require.NoError(t, err)
|
||||
|
||||
imagePath := filepath.Join(tmpdir, "file.png")
|
||||
err = afero.WriteFile(fs, imagePath, []byte("not really an image"), 0o644)
|
||||
require.NoError(t, err)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
path string
|
||||
limit int64
|
||||
offset int64
|
||||
bytes []byte
|
||||
mimeType string
|
||||
errCode int
|
||||
error string
|
||||
}{
|
||||
{
|
||||
name: "NoPath",
|
||||
path: "",
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "\"path\" is required",
|
||||
},
|
||||
{
|
||||
name: "RelativePathDotSlash",
|
||||
path: "./relative",
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "file path must be absolute",
|
||||
},
|
||||
{
|
||||
name: "RelativePath",
|
||||
path: "also-relative",
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "file path must be absolute",
|
||||
},
|
||||
{
|
||||
name: "NegativeLimit",
|
||||
path: filePath,
|
||||
limit: -10,
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "value is negative",
|
||||
},
|
||||
{
|
||||
name: "NegativeOffset",
|
||||
path: filePath,
|
||||
offset: -10,
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "value is negative",
|
||||
},
|
||||
{
|
||||
name: "NonExistent",
|
||||
path: filepath.Join(tmpdir, "does-not-exist"),
|
||||
errCode: http.StatusNotFound,
|
||||
error: "file does not exist",
|
||||
},
|
||||
{
|
||||
name: "IsDir",
|
||||
path: dirPath,
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "not a file",
|
||||
},
|
||||
{
|
||||
name: "NoPermissions",
|
||||
path: noPermsFilePath,
|
||||
errCode: http.StatusForbidden,
|
||||
error: "permission denied",
|
||||
},
|
||||
{
|
||||
name: "Defaults",
|
||||
path: filePath,
|
||||
bytes: []byte("content"),
|
||||
mimeType: "application/octet-stream",
|
||||
},
|
||||
{
|
||||
name: "Limit1",
|
||||
path: filePath,
|
||||
limit: 1,
|
||||
bytes: []byte("c"),
|
||||
mimeType: "application/octet-stream",
|
||||
},
|
||||
{
|
||||
name: "Offset1",
|
||||
path: filePath,
|
||||
offset: 1,
|
||||
bytes: []byte("ontent"),
|
||||
mimeType: "application/octet-stream",
|
||||
},
|
||||
{
|
||||
name: "Limit1Offset2",
|
||||
path: filePath,
|
||||
limit: 1,
|
||||
offset: 2,
|
||||
bytes: []byte("n"),
|
||||
mimeType: "application/octet-stream",
|
||||
},
|
||||
{
|
||||
name: "Limit7Offset0",
|
||||
path: filePath,
|
||||
limit: 7,
|
||||
offset: 0,
|
||||
bytes: []byte("content"),
|
||||
mimeType: "application/octet-stream",
|
||||
},
|
||||
{
|
||||
name: "Limit100",
|
||||
path: filePath,
|
||||
limit: 100,
|
||||
bytes: []byte("content"),
|
||||
mimeType: "application/octet-stream",
|
||||
},
|
||||
{
|
||||
name: "Offset7",
|
||||
path: filePath,
|
||||
offset: 7,
|
||||
bytes: []byte{},
|
||||
mimeType: "application/octet-stream",
|
||||
},
|
||||
{
|
||||
name: "Offset100",
|
||||
path: filePath,
|
||||
offset: 100,
|
||||
bytes: []byte{},
|
||||
mimeType: "application/octet-stream",
|
||||
},
|
||||
{
|
||||
name: "MimeTypePng",
|
||||
path: imagePath,
|
||||
bytes: []byte("not really an image"),
|
||||
mimeType: "image/png",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
|
||||
defer cancel()
|
||||
|
||||
reader, mimeType, err := conn.ReadFile(ctx, tt.path, tt.offset, tt.limit)
|
||||
if tt.errCode != 0 {
|
||||
require.Error(t, err)
|
||||
cerr := coderdtest.SDKError(t, err)
|
||||
require.Contains(t, cerr.Error(), tt.error)
|
||||
require.Equal(t, tt.errCode, cerr.StatusCode())
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
defer reader.Close()
|
||||
bytes, err := io.ReadAll(reader)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.bytes, bytes)
|
||||
require.Equal(t, tt.mimeType, mimeType)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriteFile(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
tmpdir := os.TempDir()
|
||||
noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
|
||||
noPermsDirPath := filepath.Join(tmpdir, "no-perms-dir")
|
||||
//nolint:dogsled
|
||||
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
|
||||
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
|
||||
if file == noPermsFilePath || file == noPermsDirPath {
|
||||
return os.ErrPermission
|
||||
}
|
||||
return nil
|
||||
})
|
||||
})
|
||||
|
||||
dirPath := filepath.Join(tmpdir, "directory")
|
||||
err := fs.MkdirAll(dirPath, 0o755)
|
||||
require.NoError(t, err)
|
||||
|
||||
filePath := filepath.Join(tmpdir, "file")
|
||||
err = afero.WriteFile(fs, filePath, []byte("content"), 0o644)
|
||||
require.NoError(t, err)
|
||||
|
||||
notDirErr := "not a directory"
|
||||
if runtime.GOOS == "windows" {
|
||||
notDirErr = "cannot find the path"
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
path string
|
||||
bytes []byte
|
||||
errCode int
|
||||
error string
|
||||
}{
|
||||
{
|
||||
name: "NoPath",
|
||||
path: "",
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "\"path\" is required",
|
||||
},
|
||||
{
|
||||
name: "RelativePathDotSlash",
|
||||
path: "./relative",
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "file path must be absolute",
|
||||
},
|
||||
{
|
||||
name: "RelativePath",
|
||||
path: "also-relative",
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "file path must be absolute",
|
||||
},
|
||||
{
|
||||
name: "NonExistent",
|
||||
path: filepath.Join(tmpdir, "/nested/does-not-exist"),
|
||||
bytes: []byte("now it does exist"),
|
||||
},
|
||||
{
|
||||
name: "IsDir",
|
||||
path: dirPath,
|
||||
errCode: http.StatusBadRequest,
|
||||
error: "is a directory",
|
||||
},
|
||||
{
|
||||
name: "IsNotDir",
|
||||
path: filepath.Join(filePath, "file2"),
|
||||
errCode: http.StatusBadRequest,
|
||||
error: notDirErr,
|
||||
},
|
||||
{
|
||||
name: "NoPermissionsFile",
|
||||
path: noPermsFilePath,
|
||||
errCode: http.StatusForbidden,
|
||||
error: "permission denied",
|
||||
},
|
||||
{
|
||||
name: "NoPermissionsDir",
|
||||
path: filepath.Join(noPermsDirPath, "within-no-perm-dir"),
|
||||
errCode: http.StatusForbidden,
|
||||
error: "permission denied",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
|
||||
defer cancel()
|
||||
|
||||
reader := bytes.NewReader(tt.bytes)
|
||||
err := conn.WriteFile(ctx, tt.path, reader)
|
||||
if tt.errCode != 0 {
|
||||
require.Error(t, err)
|
||||
cerr := coderdtest.SDKError(t, err)
|
||||
require.Contains(t, cerr.Error(), tt.error)
|
||||
require.Equal(t, tt.errCode, cerr.StatusCode())
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
b, err := afero.ReadFile(fs, tt.path)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.bytes, b)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestEditFiles(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
tmpdir := os.TempDir()
|
||||
noPermsFilePath := filepath.Join(tmpdir, "no-perms-file")
|
||||
failRenameFilePath := filepath.Join(tmpdir, "fail-rename")
|
||||
//nolint:dogsled
|
||||
conn, _, _, fs, _ := setupAgent(t, agentsdk.Manifest{}, 0, func(_ *agenttest.Client, opts *agent.Options) {
|
||||
opts.Filesystem = newTestFs(opts.Filesystem, func(call, file string) error {
|
||||
if file == noPermsFilePath {
|
||||
return &os.PathError{
|
||||
Op: call,
|
||||
Path: file,
|
||||
Err: os.ErrPermission,
|
||||
}
|
||||
} else if file == failRenameFilePath && call == "rename" {
|
||||
return xerrors.New("rename failed")
|
||||
}
|
||||
return nil
|
||||
})
|
||||
})
|
||||
|
||||
dirPath := filepath.Join(tmpdir, "directory")
|
||||
err := fs.MkdirAll(dirPath, 0o755)
|
||||
require.NoError(t, err)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
contents map[string]string
|
||||
edits []workspacesdk.FileEdits
|
||||
expected map[string]string
|
||||
errCode int
|
||||
errors []string
|
||||
}{
|
||||
{
|
||||
name: "NoFiles",
|
||||
errCode: http.StatusBadRequest,
|
||||
errors: []string{"must specify at least one file"},
|
||||
},
|
||||
{
|
||||
name: "NoPath",
|
||||
errCode: http.StatusBadRequest,
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
errors: []string{"\"path\" is required"},
|
||||
},
|
||||
{
|
||||
name: "RelativePathDotSlash",
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: "./relative",
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
errCode: http.StatusBadRequest,
|
||||
errors: []string{"file path must be absolute"},
|
||||
},
|
||||
{
|
||||
name: "RelativePath",
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: "also-relative",
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
errCode: http.StatusBadRequest,
|
||||
errors: []string{"file path must be absolute"},
|
||||
},
|
||||
{
|
||||
name: "NoEdits",
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "no-edits"),
|
||||
},
|
||||
},
|
||||
errCode: http.StatusBadRequest,
|
||||
errors: []string{"must specify at least one edit"},
|
||||
},
|
||||
{
|
||||
name: "NonExistent",
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "does-not-exist"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
errCode: http.StatusNotFound,
|
||||
errors: []string{"file does not exist"},
|
||||
},
|
||||
{
|
||||
name: "IsDir",
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: dirPath,
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
errCode: http.StatusBadRequest,
|
||||
errors: []string{"not a file"},
|
||||
},
|
||||
{
|
||||
name: "NoPermissions",
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: noPermsFilePath,
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
errCode: http.StatusForbidden,
|
||||
errors: []string{"permission denied"},
|
||||
},
|
||||
{
|
||||
name: "FailRename",
|
||||
contents: map[string]string{failRenameFilePath: "foo bar"},
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: failRenameFilePath,
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
errCode: http.StatusInternalServerError,
|
||||
errors: []string{"rename failed"},
|
||||
},
|
||||
{
|
||||
name: "Edit1",
|
||||
contents: map[string]string{filepath.Join(tmpdir, "edit1"): "foo bar"},
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "edit1"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: map[string]string{filepath.Join(tmpdir, "edit1"): "bar bar"},
|
||||
},
|
||||
{
|
||||
name: "EditEdit", // Edits affect previous edits.
|
||||
contents: map[string]string{filepath.Join(tmpdir, "edit-edit"): "foo bar"},
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "edit-edit"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "foo",
|
||||
Replace: "bar",
|
||||
},
|
||||
{
|
||||
Search: "bar",
|
||||
Replace: "qux",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: map[string]string{filepath.Join(tmpdir, "edit-edit"): "qux qux"},
|
||||
},
|
||||
{
|
||||
name: "Multiline",
|
||||
contents: map[string]string{filepath.Join(tmpdir, "multiline"): "foo\nbar\nbaz\nqux"},
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "multiline"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "bar\nbaz",
|
||||
Replace: "frob",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: map[string]string{filepath.Join(tmpdir, "multiline"): "foo\nfrob\nqux"},
|
||||
},
|
||||
{
|
||||
name: "Multifile",
|
||||
contents: map[string]string{
|
||||
filepath.Join(tmpdir, "file1"): "file 1",
|
||||
filepath.Join(tmpdir, "file2"): "file 2",
|
||||
filepath.Join(tmpdir, "file3"): "file 3",
|
||||
},
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "file1"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "file",
|
||||
Replace: "edited1",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "file2"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "file",
|
||||
Replace: "edited2",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "file3"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "file",
|
||||
Replace: "edited3",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: map[string]string{
|
||||
filepath.Join(tmpdir, "file1"): "edited1 1",
|
||||
filepath.Join(tmpdir, "file2"): "edited2 2",
|
||||
filepath.Join(tmpdir, "file3"): "edited3 3",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "MultiError",
|
||||
contents: map[string]string{
|
||||
filepath.Join(tmpdir, "file8"): "file 8",
|
||||
},
|
||||
edits: []workspacesdk.FileEdits{
|
||||
{
|
||||
Path: noPermsFilePath,
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "file",
|
||||
Replace: "edited7",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "file8"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "file",
|
||||
Replace: "edited8",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Path: filepath.Join(tmpdir, "file9"),
|
||||
Edits: []workspacesdk.FileEdit{
|
||||
{
|
||||
Search: "file",
|
||||
Replace: "edited9",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expected: map[string]string{
|
||||
filepath.Join(tmpdir, "file8"): "edited8 8",
|
||||
},
|
||||
// Higher status codes will override lower ones, so in this case the 404
|
||||
// takes priority over the 403.
|
||||
errCode: http.StatusNotFound,
|
||||
errors: []string{
|
||||
fmt.Sprintf("%s: permission denied", noPermsFilePath),
|
||||
"file9: file does not exist",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitLong)
|
||||
defer cancel()
|
||||
|
||||
for path, content := range tt.contents {
|
||||
err := afero.WriteFile(fs, path, []byte(content), 0o644)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
err := conn.EditFiles(ctx, workspacesdk.FileEditRequest{Files: tt.edits})
|
||||
if tt.errCode != 0 {
|
||||
require.Error(t, err)
|
||||
cerr := coderdtest.SDKError(t, err)
|
||||
for _, error := range tt.errors {
|
||||
require.Contains(t, cerr.Error(), error)
|
||||
}
|
||||
require.Equal(t, tt.errCode, cerr.StatusCode())
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
}
|
||||
for path, expect := range tt.expected {
|
||||
b, err := afero.ReadFile(fs, path)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, expect, string(b))
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -1,350 +0,0 @@
package backedpipe

import (
    "context"
    "io"
    "sync"

    "golang.org/x/sync/errgroup"
    "golang.org/x/sync/singleflight"
    "golang.org/x/xerrors"
)

var (
    ErrPipeClosed             = xerrors.New("pipe is closed")
    ErrPipeAlreadyConnected   = xerrors.New("pipe is already connected")
    ErrReconnectionInProgress = xerrors.New("reconnection already in progress")
    ErrReconnectFailed        = xerrors.New("reconnect failed")
    ErrInvalidSequenceNumber  = xerrors.New("remote sequence number exceeds local sequence")
    ErrReconnectWriterFailed  = xerrors.New("reconnect writer failed")
)

// connectionState represents the current state of the BackedPipe connection.
type connectionState int

const (
    // connected indicates the pipe is connected and operational.
    connected connectionState = iota
    // disconnected indicates the pipe is not connected but not closed.
    disconnected
    // reconnecting indicates a reconnection attempt is in progress.
    reconnecting
    // closed indicates the pipe is permanently closed.
    closed
)

// ErrorEvent represents an error from a reader or writer with connection generation info.
type ErrorEvent struct {
    Err        error
    Component  string // "reader" or "writer"
    Generation uint64 // connection generation when the error occurred
}

const (
    // DefaultBufferSize is the default buffer capacity used by the writer - 64MB.
    DefaultBufferSize = 64 * 1024 * 1024
)

// Reconnector is an interface for establishing connections when the BackedPipe needs to reconnect.
// Implementations should:
// 1. Establish a new connection to the remote side
// 2. Exchange sequence numbers with the remote side
// 3. Return the new connection and the remote's reader sequence number
//
// The readerSeqNum parameter is the local reader's current sequence number
// (total bytes successfully read from the remote). This must be sent to the
// remote so it can replay its data to us starting from this number.
//
// The returned remoteReaderSeqNum should be the remote side's reader sequence
// number (how many bytes of our outbound data it has successfully read). This
// informs our writer where to resume (i.e., which bytes to replay to the remote).
type Reconnector interface {
    Reconnect(ctx context.Context, readerSeqNum uint64) (conn io.ReadWriteCloser, remoteReaderSeqNum uint64, err error)
}

// BackedPipe provides a reliable bidirectional byte stream over unreliable network connections.
// It orchestrates a BackedReader and BackedWriter to provide transparent reconnection
// and data replay capabilities.
type BackedPipe struct {
    ctx         context.Context
    cancel      context.CancelFunc
    mu          sync.RWMutex
    reader      *BackedReader
    writer      *BackedWriter
    reconnector Reconnector
    conn        io.ReadWriteCloser

    // State machine
    state   connectionState
    connGen uint64 // Incremented on each reconnection attempt

    // Unified error handling with generation filtering
    errChan chan ErrorEvent

    // singleflight group to dedupe concurrent ForceReconnect calls
    sf singleflight.Group

    // Track the first error per generation to avoid duplicate reconnections
    lastErrorGen uint64
}

// NewBackedPipe creates a new BackedPipe with default options and the specified reconnector.
// The pipe starts disconnected and must be connected using Connect().
func NewBackedPipe(ctx context.Context, reconnector Reconnector) *BackedPipe {
    pipeCtx, cancel := context.WithCancel(ctx)

    errChan := make(chan ErrorEvent, 1)

    bp := &BackedPipe{
        ctx:         pipeCtx,
        cancel:      cancel,
        reconnector: reconnector,
        state:       disconnected,
        connGen:     0, // Start with generation 0
        errChan:     errChan,
    }

    // Create reader and writer with a typed error channel for generation-aware error reporting.
    bp.reader = NewBackedReader(errChan)
    bp.writer = NewBackedWriter(DefaultBufferSize, errChan)

    // Start the error handler goroutine.
    go bp.handleErrors()

    return bp
}

// Connect establishes the initial connection using the reconnect function.
func (bp *BackedPipe) Connect() error {
    bp.mu.Lock()
    defer bp.mu.Unlock()

    if bp.state == closed {
        return ErrPipeClosed
    }

    if bp.state == connected {
        return ErrPipeAlreadyConnected
    }

    // Use the internal context for the actual reconnect operation to ensure
    // Close() reliably cancels any in-flight attempt.
    return bp.reconnectLocked()
}

// Read implements io.Reader by delegating to the BackedReader.
func (bp *BackedPipe) Read(p []byte) (int, error) {
    return bp.reader.Read(p)
}

// Write implements io.Writer by delegating to the BackedWriter.
func (bp *BackedPipe) Write(p []byte) (int, error) {
    bp.mu.RLock()
    writer := bp.writer
    state := bp.state
    bp.mu.RUnlock()

    if state == closed {
        return 0, io.EOF
    }

    return writer.Write(p)
}

// Close closes the pipe and all underlying connections.
func (bp *BackedPipe) Close() error {
    bp.mu.Lock()
    defer bp.mu.Unlock()

    if bp.state == closed {
        return nil
    }

    bp.state = closed
    bp.cancel() // Cancel main context

    // Close all components in parallel to avoid deadlocks.
    //
    // IMPORTANT: The connection must be closed first to unblock any
    // readers or writers that might be holding the mutex on Read/Write.
    var g errgroup.Group

    if bp.conn != nil {
        conn := bp.conn
        g.Go(func() error {
            return conn.Close()
        })
        bp.conn = nil
    }

    if bp.reader != nil {
        reader := bp.reader
        g.Go(func() error {
            return reader.Close()
        })
    }

    if bp.writer != nil {
        writer := bp.writer
        g.Go(func() error {
            return writer.Close()
        })
    }

    // Wait for all close operations to complete and return any error.
    return g.Wait()
}

// Connected returns whether the pipe is currently connected.
func (bp *BackedPipe) Connected() bool {
    bp.mu.RLock()
    defer bp.mu.RUnlock()
    return bp.state == connected && bp.reader.Connected() && bp.writer.Connected()
}

// reconnectLocked handles the reconnection logic. Must be called with the write lock held.
func (bp *BackedPipe) reconnectLocked() error {
    if bp.state == reconnecting {
        return ErrReconnectionInProgress
    }

    bp.state = reconnecting
    defer func() {
        // Only reset to disconnected if we're still in the reconnecting state
        // (a successful reconnection sets the state to connected).
        if bp.state == reconnecting {
            bp.state = disconnected
        }
    }()

    // Close the existing connection, if any.
    if bp.conn != nil {
        _ = bp.conn.Close()
        bp.conn = nil
    }

    // Increment the generation and update both reader and writer.
    // We do it now to track even the connections that fail during
    // Reconnect.
    bp.connGen++
    bp.reader.SetGeneration(bp.connGen)
    bp.writer.SetGeneration(bp.connGen)

    // Reconnect the reader and writer.
    seqNum := make(chan uint64, 1)
    newR := make(chan io.Reader, 1)

    go bp.reader.Reconnect(seqNum, newR)

    // Get the precise reader sequence number from the reader while it holds its lock.
    readerSeqNum, ok := <-seqNum
    if !ok {
        // The reader was closed during reconnection.
        return ErrReconnectFailed
    }

    // Perform the reconnect using the exact sequence number we just received.
    conn, remoteReaderSeqNum, err := bp.reconnector.Reconnect(bp.ctx, readerSeqNum)
    if err != nil {
        // Unblock the reader reconnect.
        newR <- nil
        return ErrReconnectFailed
    }

    // Provide the new connection to the reader (the reader still holds its lock).
    newR <- conn

    // Replay our outbound data from the remote's reader sequence number.
    writerReconnectErr := bp.writer.Reconnect(remoteReaderSeqNum, conn)
    if writerReconnectErr != nil {
        return ErrReconnectWriterFailed
    }

    // Success - update state.
    bp.conn = conn
    bp.state = connected

    return nil
}

// handleErrors listens for connection errors from the reader/writer and triggers reconnection.
// It filters errors from old connections and ensures only the first error per generation
// triggers reconnection.
func (bp *BackedPipe) handleErrors() {
    for {
        select {
        case <-bp.ctx.Done():
            return
        case errorEvt := <-bp.errChan:
            bp.handleConnectionError(errorEvt)
        }
    }
}

// handleConnectionError handles errors from either the reader or writer component.
// It filters errors from old connections and ensures only one reconnection per generation.
func (bp *BackedPipe) handleConnectionError(errorEvt ErrorEvent) {
    bp.mu.Lock()
    defer bp.mu.Unlock()

    // Skip if already closed.
    if bp.state == closed {
        return
    }

    // Filter errors from old connections (lower generation).
    if errorEvt.Generation < bp.connGen {
        return
    }

    // Skip if not connected (already disconnected or reconnecting).
    if bp.state != connected {
        return
    }

    // Skip if we've already seen an error for this generation.
    if bp.lastErrorGen >= errorEvt.Generation {
        return
    }

    // This is the first error for this generation.
    bp.lastErrorGen = errorEvt.Generation

    // Mark as disconnected.
    bp.state = disconnected

    // Try to reconnect using the internal context.
    reconnectErr := bp.reconnectLocked()

    if reconnectErr != nil {
        // Reconnection failed - log or handle as needed.
        // For now, we just continue and wait for a manual reconnection.
        _ = errorEvt.Err       // The original error from the component.
        _ = errorEvt.Component // Component info available for potential logging by higher layers.
    }
}

// ForceReconnect forces a reconnection attempt immediately.
// This can be used to force a reconnection if a new connection is established.
// It prevents duplicate reconnections when called concurrently.
func (bp *BackedPipe) ForceReconnect() error {
    // Deduplicate concurrent ForceReconnect calls so only one reconnection
    // attempt runs at a time from this API. Use the pipe's internal context
    // to ensure Close() cancels any in-flight attempt.
    _, err, _ := bp.sf.Do("force-reconnect", func() (interface{}, error) {
        bp.mu.Lock()
        defer bp.mu.Unlock()

        if bp.state == closed {
            return nil, io.EOF
        }

        // Don't force a reconnect if one is already in progress.
        if bp.state == reconnecting {
            return nil, ErrReconnectionInProgress
        }

        return nil, bp.reconnectLocked()
    })
    return err
}

@@ -1,989 +0,0 @@
package backedpipe_test

import (
    "bytes"
    "context"
    "io"
    "sync"
    "testing"
    "time"

    "github.com/stretchr/testify/require"
    "golang.org/x/xerrors"

    "github.com/coder/coder/v2/agent/immortalstreams/backedpipe"
    "github.com/coder/coder/v2/testutil"
)

// mockConnection implements io.ReadWriteCloser for testing.
type mockConnection struct {
    mu          sync.Mutex
    readBuffer  bytes.Buffer
    writeBuffer bytes.Buffer
    closed      bool
    readError   error
    writeError  error
    closeError  error
    readFunc    func([]byte) (int, error)
    writeFunc   func([]byte) (int, error)
    seqNum      uint64
}

func newMockConnection() *mockConnection {
    return &mockConnection{}
}

func (mc *mockConnection) Read(p []byte) (int, error) {
    mc.mu.Lock()
    defer mc.mu.Unlock()

    if mc.readFunc != nil {
        return mc.readFunc(p)
    }

    if mc.readError != nil {
        return 0, mc.readError
    }

    return mc.readBuffer.Read(p)
}

func (mc *mockConnection) Write(p []byte) (int, error) {
    mc.mu.Lock()
    defer mc.mu.Unlock()

    if mc.writeFunc != nil {
        return mc.writeFunc(p)
    }

    if mc.writeError != nil {
        return 0, mc.writeError
    }

    return mc.writeBuffer.Write(p)
}

func (mc *mockConnection) Close() error {
    mc.mu.Lock()
    defer mc.mu.Unlock()
    mc.closed = true
    return mc.closeError
}

func (mc *mockConnection) WriteString(s string) {
    mc.mu.Lock()
    defer mc.mu.Unlock()
    _, _ = mc.readBuffer.WriteString(s)
}

func (mc *mockConnection) ReadString() string {
    mc.mu.Lock()
    defer mc.mu.Unlock()
    return mc.writeBuffer.String()
}

func (mc *mockConnection) SetReadError(err error) {
    mc.mu.Lock()
    defer mc.mu.Unlock()
    mc.readError = err
}

func (mc *mockConnection) SetWriteError(err error) {
    mc.mu.Lock()
    defer mc.mu.Unlock()
    mc.writeError = err
}

func (mc *mockConnection) Reset() {
    mc.mu.Lock()
    defer mc.mu.Unlock()
    mc.readBuffer.Reset()
    mc.writeBuffer.Reset()
    mc.readError = nil
    mc.writeError = nil
    mc.closed = false
}

// mockReconnector implements the Reconnector interface for testing.
type mockReconnector struct {
    mu              sync.Mutex
    connections     []*mockConnection
    connectionIndex int
    callCount       int
    signalChan      chan struct{}
}

// Reconnect implements the Reconnector interface.
func (m *mockReconnector) Reconnect(ctx context.Context, readerSeqNum uint64) (io.ReadWriteCloser, uint64, error) {
    m.mu.Lock()
    defer m.mu.Unlock()

    m.callCount++

    if m.connectionIndex >= len(m.connections) {
        return nil, 0, xerrors.New("no more connections available")
    }

    conn := m.connections[m.connectionIndex]
    m.connectionIndex++

    // Signal when a reconnection happens.
    if m.connectionIndex > 1 {
        select {
        case m.signalChan <- struct{}{}:
        default:
        }
    }

    // Determine remoteReaderSeqNum (how many bytes of our outbound data the remote has read).
    var remoteReaderSeqNum uint64
    switch {
    case m.callCount == 1:
        remoteReaderSeqNum = 0
    case conn.seqNum != 0:
        remoteReaderSeqNum = conn.seqNum
    default:
        // Default to 0 if unspecified.
        remoteReaderSeqNum = 0
    }

    return conn, remoteReaderSeqNum, nil
}

// GetCallCount returns the current call count in a thread-safe manner.
func (m *mockReconnector) GetCallCount() int {
    m.mu.Lock()
    defer m.mu.Unlock()
    return m.callCount
}

// mockReconnectFunc creates a unified reconnector with all behaviors enabled.
func mockReconnectFunc(connections ...*mockConnection) (*mockReconnector, chan struct{}) {
    signalChan := make(chan struct{}, 1)

    reconnector := &mockReconnector{
        connections: connections,
        signalChan:  signalChan,
    }

    return reconnector, signalChan
}

// blockingReconnector is a reconnector that blocks on a channel for deterministic testing.
type blockingReconnector struct {
    conn1       *mockConnection
    conn2       *mockConnection
    callCount   int
    blockChan   <-chan struct{}
    blockedChan chan struct{}
    mu          sync.Mutex
    signalOnce  sync.Once // Ensure we only signal once, for the first actual reconnect.
}

func (b *blockingReconnector) Reconnect(ctx context.Context, readerSeqNum uint64) (io.ReadWriteCloser, uint64, error) {
    b.mu.Lock()
    b.callCount++
    currentCall := b.callCount
    b.mu.Unlock()

    if currentCall == 1 {
        // Initial connect.
        return b.conn1, 0, nil
    }

    // Signal that we're about to block, but only once for the first reconnect attempt.
    // This ensures we properly test singleflight deduplication.
    b.signalOnce.Do(func() {
        select {
        case b.blockedChan <- struct{}{}:
        default:
            // If the channel is full, don't block.
        }
    })

    // For subsequent calls, block until the channel is closed.
    select {
    case <-b.blockChan:
        // Channel closed, proceed with reconnection.
    case <-ctx.Done():
        return nil, 0, ctx.Err()
    }

    return b.conn2, 0, nil
}

// GetCallCount returns the current call count in a thread-safe manner.
func (b *blockingReconnector) GetCallCount() int {
    b.mu.Lock()
    defer b.mu.Unlock()
    return b.callCount
}

func mockBlockingReconnectFunc(conn1, conn2 *mockConnection, blockChan <-chan struct{}) (*blockingReconnector, chan struct{}) {
    blockedChan := make(chan struct{}, 1)
    reconnector := &blockingReconnector{
        conn1:       conn1,
        conn2:       conn2,
        blockChan:   blockChan,
        blockedChan: blockedChan,
    }

    return reconnector, blockedChan
}

// eofTestReconnector is a custom reconnector for the EOF test case.
type eofTestReconnector struct {
    mu        sync.Mutex
    conn1     io.ReadWriteCloser
    conn2     io.ReadWriteCloser
    callCount int
}

func (e *eofTestReconnector) Reconnect(ctx context.Context, readerSeqNum uint64) (io.ReadWriteCloser, uint64, error) {
    e.mu.Lock()
    defer e.mu.Unlock()

    e.callCount++

    if e.callCount == 1 {
        return e.conn1, 0, nil
    }
    if e.callCount == 2 {
        // The second call is the reconnection after EOF.
        // Return 5 to indicate the remote has read all 5 bytes of "hello".
        return e.conn2, 5, nil
    }

    return nil, 0, xerrors.New("no more connections")
}

// GetCallCount returns the current call count in a thread-safe manner.
func (e *eofTestReconnector) GetCallCount() int {
    e.mu.Lock()
    defer e.mu.Unlock()
    return e.callCount
}

func TestBackedPipe_NewBackedPipe(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    reconnectFn, _ := mockReconnectFunc(newMockConnection())

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
    defer bp.Close()
    require.NotNil(t, bp)
    require.False(t, bp.Connected())
}

func TestBackedPipe_Connect(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    conn := newMockConnection()
    reconnector, _ := mockReconnectFunc(conn)

    bp := backedpipe.NewBackedPipe(ctx, reconnector)
    defer bp.Close()

    err := bp.Connect()
    require.NoError(t, err)
    require.True(t, bp.Connected())
    require.Equal(t, 1, reconnector.GetCallCount())
}

func TestBackedPipe_ConnectAlreadyConnected(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    conn := newMockConnection()
    reconnectFn, _ := mockReconnectFunc(conn)

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
    defer bp.Close()

    err := bp.Connect()
    require.NoError(t, err)

    // The second connect should fail.
    err = bp.Connect()
    require.Error(t, err)
    require.ErrorIs(t, err, backedpipe.ErrPipeAlreadyConnected)
}

func TestBackedPipe_ConnectAfterClose(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    conn := newMockConnection()
    reconnectFn, _ := mockReconnectFunc(conn)

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)

    err := bp.Close()
    require.NoError(t, err)

    err = bp.Connect()
    require.Error(t, err)
    require.ErrorIs(t, err, backedpipe.ErrPipeClosed)
}

func TestBackedPipe_BasicReadWrite(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    conn := newMockConnection()
    reconnectFn, _ := mockReconnectFunc(conn)

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
    defer bp.Close()

    err := bp.Connect()
    require.NoError(t, err)

    // Write data.
    n, err := bp.Write([]byte("hello"))
    require.NoError(t, err)
    require.Equal(t, 5, n)

    // Simulate data coming back.
    conn.WriteString("world")

    // Read data.
    buf := make([]byte, 10)
    n, err = bp.Read(buf)
    require.NoError(t, err)
    require.Equal(t, 5, n)
    require.Equal(t, "world", string(buf[:n]))
}

func TestBackedPipe_WriteBeforeConnect(t *testing.T) {
    t.Parallel()
    ctx := testutil.Context(t, testutil.WaitShort)

    conn := newMockConnection()
    reconnectFn, _ := mockReconnectFunc(conn)

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
    defer bp.Close()

    // A write before connecting should block.
    writeComplete := make(chan error, 1)
    go func() {
        _, err := bp.Write([]byte("hello"))
        writeComplete <- err
    }()

    // Verify the write is blocked.
    select {
    case <-writeComplete:
        t.Fatal("Write should have blocked when disconnected")
    case <-time.After(100 * time.Millisecond):
        // Expected - the write is blocked.
    }

    // Connect should unblock the write.
    err := bp.Connect()
    require.NoError(t, err)

    // The write should now complete.
    err = testutil.RequireReceive(ctx, t, writeComplete)
    require.NoError(t, err)

    // Check that the data was replayed to the connection.
    require.Equal(t, "hello", conn.ReadString())
}

func TestBackedPipe_ReadBlocksWhenDisconnected(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    testCtx := testutil.Context(t, testutil.WaitShort)
    reconnectFn, _ := mockReconnectFunc(newMockConnection())

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
    defer bp.Close()

    // Start a read that should block.
    readDone := make(chan struct{})
    readStarted := make(chan struct{}, 1)
    var readErr error

    go func() {
        defer close(readDone)
        readStarted <- struct{}{} // Signal that we're about to start the read.
        buf := make([]byte, 10)
        _, readErr = bp.Read(buf)
    }()

    // Wait for the goroutine to start.
    testutil.TryReceive(testCtx, t, readStarted)

    // Ensure the read is actually blocked by verifying it hasn't completed.
    require.Eventually(t, func() bool {
        select {
        case <-readDone:
            t.Fatal("Read should be blocked when disconnected")
            return false
        default:
            // Good, still blocked.
            return true
        }
    }, testutil.WaitShort, testutil.IntervalMedium)

    // Close should unblock the read.
    bp.Close()

    testutil.TryReceive(testCtx, t, readDone)
    require.Equal(t, io.EOF, readErr)
}

func TestBackedPipe_Reconnection(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    testCtx := testutil.Context(t, testutil.WaitShort)
    conn1 := newMockConnection()
    conn2 := newMockConnection()
    conn2.seqNum = 17 // The remote has received 17 bytes, so replay from sequence 17.
    reconnectFn, signalChan := mockReconnectFunc(conn1, conn2)

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
    defer bp.Close()

    // Initial connect.
    err := bp.Connect()
    require.NoError(t, err)

    // Write some data before the failure.
    _, _ = bp.Write([]byte("before disconnect***"))

    // Simulate a connection failure.
    conn1.SetReadError(xerrors.New("connection lost"))
    conn1.SetWriteError(xerrors.New("connection lost"))

    // Trigger a write to cause the pipe to notice the failure.
    _, _ = bp.Write([]byte("trigger failure "))

    testutil.RequireReceive(testCtx, t, signalChan)

    // Wait for the reconnection to complete.
    require.Eventually(t, func() bool {
        return bp.Connected()
    }, testutil.WaitShort, testutil.IntervalFast, "pipe should reconnect")

    replayedData := conn2.ReadString()
    require.Equal(t, "***trigger failure ", replayedData, "Should replay exactly the data written after sequence 17")

    // Verify that new writes work with the reconnected pipe.
    _, err = bp.Write([]byte("new data after reconnect"))
    require.NoError(t, err)

    // Read all data from the connection (replayed + new data).
    allData := conn2.ReadString()
    require.Equal(t, "***trigger failure new data after reconnect", allData, "Should have replayed data plus new data")
}

func TestBackedPipe_Close(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    conn := newMockConnection()
    reconnectFn, _ := mockReconnectFunc(conn)

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)

    err := bp.Connect()
    require.NoError(t, err)

    err = bp.Close()
    require.NoError(t, err)
    require.True(t, conn.closed)

    // Operations after close should fail.
    _, err = bp.Read(make([]byte, 10))
    require.Equal(t, io.EOF, err)

    _, err = bp.Write([]byte("test"))
    require.Equal(t, io.EOF, err)
}

func TestBackedPipe_CloseIdempotent(t *testing.T) {
    t.Parallel()

    ctx := context.Background()
    conn := newMockConnection()
    reconnectFn, _ := mockReconnectFunc(conn)

    bp := backedpipe.NewBackedPipe(ctx, reconnectFn)

    err := bp.Close()
    require.NoError(t, err)

    // A second close should be a no-op.
    err = bp.Close()
    require.NoError(t, err)
}

func TestBackedPipe_ReconnectFunctionFailure(t *testing.T) {
    t.Parallel()

    ctx := context.Background()

    failingReconnector := &mockReconnector{
        connections: nil, // No connections available.
    }

    bp := backedpipe.NewBackedPipe(ctx, failingReconnector)
||||
defer bp.Close()
|
||||
|
||||
err := bp.Connect()
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, backedpipe.ErrReconnectFailed)
|
||||
require.False(t, bp.Connected())
|
||||
}
|
||||
|
||||
func TestBackedPipe_ForceReconnect(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
ctx := context.Background()
|
||||
conn1 := newMockConnection()
|
||||
conn2 := newMockConnection()
|
||||
// Set conn2 sequence number to 9 to indicate remote has read all 9 bytes of "test data"
|
||||
conn2.seqNum = 9
|
||||
reconnector, _ := mockReconnectFunc(conn1, conn2)
|
||||
|
||||
bp := backedpipe.NewBackedPipe(ctx, reconnector)
|
||||
defer bp.Close()
|
||||
|
||||
// Initial connect
|
||||
err := bp.Connect()
|
||||
require.NoError(t, err)
|
||||
require.True(t, bp.Connected())
|
||||
require.Equal(t, 1, reconnector.GetCallCount())
|
||||
|
||||
// Write some data to the first connection
|
||||
_, err = bp.Write([]byte("test data"))
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "test data", conn1.ReadString())
|
||||
|
||||
// Force a reconnection
|
||||
err = bp.ForceReconnect()
|
||||
require.NoError(t, err)
|
||||
require.True(t, bp.Connected())
|
||||
require.Equal(t, 2, reconnector.GetCallCount())
|
||||
|
||||
// Since the mock returns the proper sequence number, no data should be replayed
|
||||
// The new connection should be empty
|
||||
require.Equal(t, "", conn2.ReadString())
|
||||
|
||||
// Verify that data can still be written and read after forced reconnection
|
||||
_, err = bp.Write([]byte("new data"))
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "new data", conn2.ReadString())
|
||||
|
||||
// Verify that reads work with the new connection
|
||||
conn2.WriteString("response data")
|
||||
buf := make([]byte, 20)
|
||||
n, err := bp.Read(buf)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 13, n)
|
||||
require.Equal(t, "response data", string(buf[:n]))
|
||||
}

func TestBackedPipe_ForceReconnectWhenClosed(t *testing.T) {
	t.Parallel()

	ctx := context.Background()
	conn := newMockConnection()
	reconnectFn, _ := mockReconnectFunc(conn)

	bp := backedpipe.NewBackedPipe(ctx, reconnectFn)

	// Close the pipe first
	err := bp.Close()
	require.NoError(t, err)

	// Try to force reconnect when closed
	err = bp.ForceReconnect()
	require.Error(t, err)
	require.Equal(t, io.EOF, err)
}

func TestBackedPipe_StateTransitionsAndGenerationTracking(t *testing.T) {
	t.Parallel()

	ctx := context.Background()
	conn1 := newMockConnection()
	conn2 := newMockConnection()
	conn3 := newMockConnection()
	reconnector, signalChan := mockReconnectFunc(conn1, conn2, conn3)

	bp := backedpipe.NewBackedPipe(ctx, reconnector)
	defer bp.Close()

	// Initial state should be disconnected
	require.False(t, bp.Connected())

	// Connect should transition to connected
	err := bp.Connect()
	require.NoError(t, err)
	require.True(t, bp.Connected())
	require.Equal(t, 1, reconnector.GetCallCount())

	// Write some data
	_, err = bp.Write([]byte("test data gen 1"))
	require.NoError(t, err)

	// Simulate connection failure by setting errors on connection
	conn1.SetReadError(xerrors.New("connection lost"))
	conn1.SetWriteError(xerrors.New("connection lost"))

	// Trigger a write to cause the pipe to notice the failure
	_, _ = bp.Write([]byte("trigger failure"))

	// Wait for reconnection signal
	testutil.RequireReceive(testutil.Context(t, testutil.WaitShort), t, signalChan)

	// Wait for reconnection to complete
	require.Eventually(t, func() bool {
		return bp.Connected()
	}, testutil.WaitShort, testutil.IntervalFast, "should reconnect")
	require.Equal(t, 2, reconnector.GetCallCount())

	// Force another reconnection
	err = bp.ForceReconnect()
	require.NoError(t, err)
	require.True(t, bp.Connected())
	require.Equal(t, 3, reconnector.GetCallCount())

	// Close should transition to closed state
	err = bp.Close()
	require.NoError(t, err)
	require.False(t, bp.Connected())

	// Operations on closed pipe should fail
	err = bp.Connect()
	require.Equal(t, backedpipe.ErrPipeClosed, err)

	err = bp.ForceReconnect()
	require.Equal(t, io.EOF, err)
}

func TestBackedPipe_GenerationFiltering(t *testing.T) {
	t.Parallel()

	ctx := context.Background()
	conn1 := newMockConnection()
	conn2 := newMockConnection()
	reconnector, _ := mockReconnectFunc(conn1, conn2)

	bp := backedpipe.NewBackedPipe(ctx, reconnector)
	defer bp.Close()

	// Connect
	err := bp.Connect()
	require.NoError(t, err)
	require.True(t, bp.Connected())

	// Simulate multiple rapid errors from the same connection generation
	// Only the first one should trigger reconnection
	conn1.SetReadError(xerrors.New("error 1"))
	conn1.SetWriteError(xerrors.New("error 2"))

	// Trigger multiple errors quickly
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		_, _ = bp.Write([]byte("trigger error 1"))
	}()
	go func() {
		defer wg.Done()
		_, _ = bp.Write([]byte("trigger error 2"))
	}()

	// Wait for both writes to complete
	wg.Wait()

	// Wait for reconnection to complete
	require.Eventually(t, func() bool {
		return bp.Connected()
	}, testutil.WaitShort, testutil.IntervalFast, "should reconnect once")

	// Should have only reconnected once despite multiple errors
	require.Equal(t, 2, reconnector.GetCallCount()) // Initial connect + 1 reconnect
}

func TestBackedPipe_DuplicateReconnectionPrevention(t *testing.T) {
	t.Parallel()

	ctx := context.Background()
	testCtx := testutil.Context(t, testutil.WaitShort)

	// Create a blocking reconnector for deterministic testing
	conn1 := newMockConnection()
	conn2 := newMockConnection()
	blockChan := make(chan struct{})
	reconnector, blockedChan := mockBlockingReconnectFunc(conn1, conn2, blockChan)

	bp := backedpipe.NewBackedPipe(ctx, reconnector)
	defer bp.Close()

	// Initial connect
	err := bp.Connect()
	require.NoError(t, err)
	require.Equal(t, 1, reconnector.GetCallCount(), "should have exactly 1 call after initial connect")

	// We'll use channels to coordinate the test execution:
	// 1. Start all goroutines but have them wait
	// 2. Release the first one and wait for it to block
	// 3. Release the others while the first is still blocked

	const numConcurrent = 3
	startSignals := make([]chan struct{}, numConcurrent)
	startedSignals := make([]chan struct{}, numConcurrent)
	for i := range startSignals {
		startSignals[i] = make(chan struct{})
		startedSignals[i] = make(chan struct{})
	}

	errors := make([]error, numConcurrent)
	var wg sync.WaitGroup

	// Start all goroutines
	for i := 0; i < numConcurrent; i++ {
		wg.Add(1)
		go func(idx int) {
			defer wg.Done()
			// Wait for the signal to start
			<-startSignals[idx]
			// Signal that we're about to call ForceReconnect
			close(startedSignals[idx])
			errors[idx] = bp.ForceReconnect()
		}(i)
	}

	// Start the first ForceReconnect and wait for it to block
	close(startSignals[0])
	<-startedSignals[0]

	// Wait for the first reconnect to actually start and block
	testutil.RequireReceive(testCtx, t, blockedChan)

	// Now start all the other ForceReconnect calls
	// They should all join the same singleflight operation
	for i := 1; i < numConcurrent; i++ {
		close(startSignals[i])
	}

	// Wait for all additional goroutines to have started their calls
	for i := 1; i < numConcurrent; i++ {
		<-startedSignals[i]
	}

	// At this point, one reconnect has started and is blocked,
	// and all other goroutines have called ForceReconnect and should be
	// waiting on the same singleflight operation.
	// Due to singleflight, only one reconnect should have been attempted.
	require.Equal(t, 2, reconnector.GetCallCount(), "should have exactly 2 calls: initial connect + 1 reconnect due to singleflight")

	// Release the blocking reconnect function
	close(blockChan)

	// Wait for all ForceReconnect calls to complete
	wg.Wait()

	// All calls should succeed (they share the same result from singleflight)
	for i, err := range errors {
		require.NoError(t, err, "ForceReconnect %d should succeed", i)
	}

	// Final verification: call count should still be exactly 2
	require.Equal(t, 2, reconnector.GetCallCount(), "final call count should be exactly 2: initial connect + 1 singleflight reconnect")
}

func TestBackedPipe_SingleReconnectionOnMultipleErrors(t *testing.T) {
	t.Parallel()

	ctx := context.Background()
	testCtx := testutil.Context(t, testutil.WaitShort)

	// Create connections for initial connect and reconnection
	conn1 := newMockConnection()
	conn2 := newMockConnection()
	reconnector, signalChan := mockReconnectFunc(conn1, conn2)

	bp := backedpipe.NewBackedPipe(ctx, reconnector)
	defer bp.Close()

	// Initial connect
	err := bp.Connect()
	require.NoError(t, err)
	require.True(t, bp.Connected())
	require.Equal(t, 1, reconnector.GetCallCount())

	// Write some initial data to establish the connection
	_, err = bp.Write([]byte("initial data"))
	require.NoError(t, err)

	// Set up both read and write errors on the connection
	conn1.SetReadError(xerrors.New("read connection lost"))
	conn1.SetWriteError(xerrors.New("write connection lost"))

	// Trigger write error (this will trigger reconnection)
	go func() {
		_, _ = bp.Write([]byte("trigger write error"))
	}()

	// Wait for reconnection to start
	testutil.RequireReceive(testCtx, t, signalChan)

	// Wait for reconnection to complete
	require.Eventually(t, func() bool {
		return bp.Connected()
	}, testutil.WaitShort, testutil.IntervalFast, "should reconnect after write error")

	// Verify that only one reconnection occurred
	require.Equal(t, 2, reconnector.GetCallCount(), "should have exactly 2 calls: initial connect + 1 reconnection")
	require.True(t, bp.Connected(), "should be connected after reconnection")
}

func TestBackedPipe_ForceReconnectWhenDisconnected(t *testing.T) {
	t.Parallel()

	ctx := context.Background()
	conn := newMockConnection()
	reconnector, _ := mockReconnectFunc(conn)

	bp := backedpipe.NewBackedPipe(ctx, reconnector)
	defer bp.Close()

	// Don't connect initially, just force reconnect
	err := bp.ForceReconnect()
	require.NoError(t, err)
	require.True(t, bp.Connected())
	require.Equal(t, 1, reconnector.GetCallCount())

	// Verify we can write and read
	_, err = bp.Write([]byte("test"))
	require.NoError(t, err)
	require.Equal(t, "test", conn.ReadString())

	conn.WriteString("response")
	buf := make([]byte, 10)
	n, err := bp.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 8, n)
	require.Equal(t, "response", string(buf[:n]))
}

func TestBackedPipe_EOFTriggersReconnection(t *testing.T) {
	t.Parallel()

	ctx := context.Background()

	// Create connections where we can control when EOF occurs
	conn1 := newMockConnection()
	conn2 := newMockConnection()
	conn2.WriteString("newdata") // Pre-populate conn2 with data

	// Make conn1 return EOF after reading "world"
	hasReadData := false
	conn1.readFunc = func(p []byte) (int, error) {
		// Don't lock here - the Read method already holds the lock

		// First time: return "world"
		if !hasReadData && conn1.readBuffer.Len() > 0 {
			n, _ := conn1.readBuffer.Read(p)
			hasReadData = true
			return n, nil
		}
		// After that: return EOF
		return 0, io.EOF
	}
	conn1.WriteString("world")

	reconnector := &eofTestReconnector{
		conn1: conn1,
		conn2: conn2,
	}

	bp := backedpipe.NewBackedPipe(ctx, reconnector)
	defer bp.Close()

	// Initial connect
	err := bp.Connect()
	require.NoError(t, err)
	require.Equal(t, 1, reconnector.GetCallCount())

	// Write some data
	_, err = bp.Write([]byte("hello"))
	require.NoError(t, err)

	buf := make([]byte, 10)

	// First read should succeed
	n, err := bp.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 5, n)
	require.Equal(t, "world", string(buf[:n]))

	// Next read will encounter EOF and should trigger reconnection
	// After reconnection, it should read from conn2
	n, err = bp.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 7, n)
	require.Equal(t, "newdata", string(buf[:n]))

	// Verify reconnection happened
	require.Equal(t, 2, reconnector.GetCallCount())

	// Verify the pipe is still connected and functional
	require.True(t, bp.Connected())

	// Further writes should go to the new connection
	_, err = bp.Write([]byte("aftereof"))
	require.NoError(t, err)
	require.Equal(t, "aftereof", conn2.ReadString())
}

func BenchmarkBackedPipe_Write(b *testing.B) {
	ctx := context.Background()
	conn := newMockConnection()
	reconnectFn, _ := mockReconnectFunc(conn)

	bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
	_ = bp.Connect()
	b.Cleanup(func() {
		_ = bp.Close()
	})

	data := make([]byte, 1024) // 1KB writes

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, _ = bp.Write(data)
	}
}

func BenchmarkBackedPipe_Read(b *testing.B) {
	ctx := context.Background()
	conn := newMockConnection()
	reconnectFn, _ := mockReconnectFunc(conn)

	bp := backedpipe.NewBackedPipe(ctx, reconnectFn)
	_ = bp.Connect()
	b.Cleanup(func() {
		_ = bp.Close()
	})

	buf := make([]byte, 1024)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Fill connection with fresh data for each iteration
		conn.WriteString(string(buf))
		_, _ = bp.Read(buf)
	}
}
@@ -1,166 +0,0 @@
package backedpipe

import (
	"io"
	"sync"
)

// BackedReader wraps an unreliable io.Reader and makes it resilient to disconnections.
// It tracks sequence numbers for all bytes read and can handle reconnection,
// blocking reads when disconnected instead of erroring.
type BackedReader struct {
	mu          sync.Mutex
	cond        *sync.Cond
	reader      io.Reader
	sequenceNum uint64
	closed      bool

	// Error channel for generation-aware error reporting
	errorEventChan chan<- ErrorEvent

	// Current connection generation for error reporting
	currentGen uint64
}

// NewBackedReader creates a new BackedReader with generation-aware error reporting.
// The reader is initially disconnected and must be connected using Reconnect before
// reads will succeed. The errorEventChan will receive ErrorEvent structs containing
// error details, component info, and connection generation.
func NewBackedReader(errorEventChan chan<- ErrorEvent) *BackedReader {
	if errorEventChan == nil {
		panic("error event channel cannot be nil")
	}
	br := &BackedReader{
		errorEventChan: errorEventChan,
	}
	br.cond = sync.NewCond(&br.mu)
	return br
}

// Read implements io.Reader. It blocks when disconnected until either:
// 1. A reconnection is established
// 2. The reader is closed
//
// When connected, it reads from the underlying reader and updates sequence numbers.
// Connection failures are automatically detected and reported to the higher layer
// via the error event channel.
func (br *BackedReader) Read(p []byte) (int, error) {
	br.mu.Lock()
	defer br.mu.Unlock()

	for {
		// Step 1: Wait until we have a reader or are closed
		for br.reader == nil && !br.closed {
			br.cond.Wait()
		}

		if br.closed {
			return 0, io.EOF
		}

		// Step 2: Perform the read while holding the mutex
		// This ensures proper synchronization with Reconnect and Close operations
		n, err := br.reader.Read(p)
		br.sequenceNum += uint64(n) // #nosec G115 -- n is always >= 0 per io.Reader contract

		if err == nil {
			return n, nil
		}

		// Mark reader as disconnected so future reads will wait for reconnection
		br.reader = nil

		// Notify parent of error with generation information
		select {
		case br.errorEventChan <- ErrorEvent{
			Err:        err,
			Component:  "reader",
			Generation: br.currentGen,
		}:
		default:
			// Channel is full, drop the error.
			// This is not a problem, because we set the reader to nil
			// and block until reconnected, so no new errors will be sent
			// until the pipe processes the error and reconnects.
		}

		// If we got some data before the error, return it now
		if n > 0 {
			return n, nil
		}
	}
}

// Reconnect coordinates reconnection using channels for better synchronization.
// The seqNum channel is used to send the current sequence number to the caller.
// The newR channel is used to receive the new reader from the caller.
// This allows for better coordination during the reconnection process.
func (br *BackedReader) Reconnect(seqNum chan<- uint64, newR <-chan io.Reader) {
	// Grab the lock
	br.mu.Lock()
	defer br.mu.Unlock()

	if br.closed {
		// Close the channel to indicate closed state
		close(seqNum)
		return
	}

	// Send the current sequence number to the other side via the seqNum channel
	seqNum <- br.sequenceNum
	close(seqNum)

	// Wait for the reconnect to complete via the newR channel, which delivers a new io.Reader
	newReader := <-newR

	// If reconnection fails while we are starting it, the caller sends nil on newR
	if newReader == nil {
		// Reconnection failed, keep current state
		return
	}

	// Reconnection successful
	br.reader = newReader

	// Notify any waiting reads via the cond
	br.cond.Broadcast()
}

// Close the reader and wake up any blocked reads.
// After closing, all Read calls will return io.EOF.
func (br *BackedReader) Close() error {
	br.mu.Lock()
	defer br.mu.Unlock()

	if br.closed {
		return nil
	}

	br.closed = true
	br.reader = nil

	// Wake up any blocked reads
	br.cond.Broadcast()

	return nil
}

// SequenceNum returns the current sequence number (total bytes read).
func (br *BackedReader) SequenceNum() uint64 {
	br.mu.Lock()
	defer br.mu.Unlock()
	return br.sequenceNum
}

// Connected returns whether the reader is currently connected.
func (br *BackedReader) Connected() bool {
	br.mu.Lock()
	defer br.mu.Unlock()
	return br.reader != nil
}

// SetGeneration sets the current connection generation for error reporting.
func (br *BackedReader) SetGeneration(generation uint64) {
	br.mu.Lock()
	defer br.mu.Unlock()
	br.currentGen = generation
}
@@ -1,603 +0,0 @@
package backedpipe_test

import (
	"context"
	"io"
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
	"golang.org/x/xerrors"

	"github.com/coder/coder/v2/agent/immortalstreams/backedpipe"
	"github.com/coder/coder/v2/testutil"
)

// mockReader implements io.Reader with controllable behavior for testing
type mockReader struct {
	mu       sync.Mutex
	data     []byte
	pos      int
	err      error
	readFunc func([]byte) (int, error)
}

func newMockReader(data string) *mockReader {
	return &mockReader{data: []byte(data)}
}

func (mr *mockReader) Read(p []byte) (int, error) {
	mr.mu.Lock()
	defer mr.mu.Unlock()

	if mr.readFunc != nil {
		return mr.readFunc(p)
	}

	if mr.err != nil {
		return 0, mr.err
	}

	if mr.pos >= len(mr.data) {
		return 0, io.EOF
	}

	n := copy(p, mr.data[mr.pos:])
	mr.pos += n
	return n, nil
}

func (mr *mockReader) setError(err error) {
	mr.mu.Lock()
	defer mr.mu.Unlock()
	mr.err = err
}

func TestBackedReader_NewBackedReader(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)
	require.NotNil(t, br)
	require.Equal(t, uint64(0), br.SequenceNum())
	require.False(t, br.Connected())
}

func TestBackedReader_BasicReadOperation(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)
	reader := newMockReader("hello world")

	// Connect the reader
	seqNum := make(chan uint64, 1)
	newR := make(chan io.Reader, 1)

	go br.Reconnect(seqNum, newR)

	// Get sequence number from reader
	seq := testutil.RequireReceive(ctx, t, seqNum)
	require.Equal(t, uint64(0), seq)

	// Send new reader
	testutil.RequireSend(ctx, t, newR, io.Reader(reader))

	// Read data
	buf := make([]byte, 5)
	n, err := br.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 5, n)
	require.Equal(t, "hello", string(buf))
	require.Equal(t, uint64(5), br.SequenceNum())

	// Read more data
	n, err = br.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 5, n)
	require.Equal(t, " worl", string(buf))
	require.Equal(t, uint64(10), br.SequenceNum())
}

func TestBackedReader_ReadBlocksWhenDisconnected(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)

	// Start a read operation that should block
	readDone := make(chan struct{})
	var readErr error
	var readBuf []byte
	var readN int

	go func() {
		defer close(readDone)
		buf := make([]byte, 10)
		readN, readErr = br.Read(buf)
		readBuf = buf[:readN]
	}()

	// Ensure the read is actually blocked by verifying it hasn't completed
	// and that the reader is not connected
	select {
	case <-readDone:
		t.Fatal("Read should be blocked when disconnected")
	default:
		// Read is still blocked, which is what we want
	}
	require.False(t, br.Connected(), "Reader should not be connected")

	// Connect and the read should unblock
	reader := newMockReader("test")
	seqNum := make(chan uint64, 1)
	newR := make(chan io.Reader, 1)

	go br.Reconnect(seqNum, newR)

	// Get sequence number and send new reader
	testutil.RequireReceive(ctx, t, seqNum)
	testutil.RequireSend(ctx, t, newR, io.Reader(reader))

	// Wait for read to complete
	testutil.TryReceive(ctx, t, readDone)
	require.NoError(t, readErr)
	require.Equal(t, "test", string(readBuf))
}

func TestBackedReader_ReconnectionAfterFailure(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)
	reader1 := newMockReader("first")

	// Initial connection
	seqNum := make(chan uint64, 1)
	newR := make(chan io.Reader, 1)

	go br.Reconnect(seqNum, newR)

	// Get sequence number and send new reader
	testutil.RequireReceive(ctx, t, seqNum)
	testutil.RequireSend(ctx, t, newR, io.Reader(reader1))

	// Read some data
	buf := make([]byte, 5)
	n, err := br.Read(buf)
	require.NoError(t, err)
	require.Equal(t, "first", string(buf[:n]))
	require.Equal(t, uint64(5), br.SequenceNum())

	// Simulate connection failure
	reader1.setError(xerrors.New("connection lost"))

	// Start a read that will block due to connection failure
	readDone := make(chan error, 1)
	go func() {
		_, err := br.Read(buf)
		readDone <- err
	}()

	// Wait for the error to be reported via error channel
	receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
	require.Error(t, receivedErrorEvent.Err)
	require.Equal(t, "reader", receivedErrorEvent.Component)
	require.Contains(t, receivedErrorEvent.Err.Error(), "connection lost")

	// Verify read is still blocked
	select {
	case err := <-readDone:
		t.Fatalf("Read should still be blocked, but completed with: %v", err)
	default:
		// Good, still blocked
	}

	// Verify disconnection
	require.False(t, br.Connected())

	// Reconnect with new reader
	reader2 := newMockReader("second")
	seqNum2 := make(chan uint64, 1)
	newR2 := make(chan io.Reader, 1)

	go br.Reconnect(seqNum2, newR2)

	// Get sequence number and send new reader
	seq := testutil.RequireReceive(ctx, t, seqNum2)
	require.Equal(t, uint64(5), seq) // Should return current sequence number
	testutil.RequireSend(ctx, t, newR2, io.Reader(reader2))

	// Wait for read to unblock and succeed with new data
	readErr := testutil.RequireReceive(ctx, t, readDone)
	require.NoError(t, readErr) // Should succeed with new reader
	require.True(t, br.Connected())
}

func TestBackedReader_Close(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)
	reader := newMockReader("test")

	// Connect
	seqNum := make(chan uint64, 1)
	newR := make(chan io.Reader, 1)

	go br.Reconnect(seqNum, newR)

	// Get sequence number and send new reader
	testutil.RequireReceive(ctx, t, seqNum)
	testutil.RequireSend(ctx, t, newR, io.Reader(reader))

	// First, read all available data
	buf := make([]byte, 10)
	n, err := br.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 4, n) // "test" is 4 bytes

	// Close the reader before EOF triggers reconnection
	err = br.Close()
	require.NoError(t, err)

	// After close, reads should return EOF
	n, err = br.Read(buf)
	require.Equal(t, 0, n)
	require.Equal(t, io.EOF, err)

	// Subsequent reads should return EOF
	_, err = br.Read(buf)
	require.Equal(t, io.EOF, err)
}

func TestBackedReader_CloseIdempotent(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)

	err := br.Close()
	require.NoError(t, err)

	// Second close should be no-op
	err = br.Close()
	require.NoError(t, err)
}

func TestBackedReader_ReconnectAfterClose(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)

	err := br.Close()
	require.NoError(t, err)

	seqNum := make(chan uint64, 1)
	newR := make(chan io.Reader, 1)

	go br.Reconnect(seqNum, newR)

	// Reconnect closes seqNum without sending when the reader is already
	// closed, so the receive yields the zero value
	seq := testutil.TryReceive(ctx, t, seqNum)
	require.Equal(t, uint64(0), seq)
}

// Helper function to reconnect a reader using channels
func reconnectReader(ctx context.Context, t testing.TB, br *backedpipe.BackedReader, reader io.Reader) {
	seqNum := make(chan uint64, 1)
	newR := make(chan io.Reader, 1)

	go br.Reconnect(seqNum, newR)

	// Get sequence number and send new reader
	testutil.RequireReceive(ctx, t, seqNum)
	testutil.RequireSend(ctx, t, newR, reader)
}

func TestBackedReader_SequenceNumberTracking(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)
	reader := newMockReader("0123456789")

	reconnectReader(ctx, t, br, reader)

	// Read in chunks and verify sequence number
	buf := make([]byte, 3)

	n, err := br.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 3, n)
	require.Equal(t, uint64(3), br.SequenceNum())

	n, err = br.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 3, n)
	require.Equal(t, uint64(6), br.SequenceNum())

	n, err = br.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 3, n)
	require.Equal(t, uint64(9), br.SequenceNum())
}

func TestBackedReader_EOFHandling(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	br := backedpipe.NewBackedReader(errChan)
	reader := newMockReader("test")

	reconnectReader(ctx, t, br, reader)

	// Read all data
	buf := make([]byte, 10)
	n, err := br.Read(buf)
	require.NoError(t, err)
	require.Equal(t, 4, n)
	require.Equal(t, "test", string(buf[:n]))

	// Next read should encounter EOF, which triggers disconnection
	// The read should block waiting for reconnection
	readDone := make(chan struct{})
	var readErr error
	var readN int

	go func() {
		defer close(readDone)
|
||||
readN, readErr = br.Read(buf)
|
||||
}()
|
||||
|
||||
// Wait for EOF to be reported via error channel
|
||||
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
|
||||
require.Equal(t, io.EOF, receivedErrorEvent.Err)
|
||||
require.Equal(t, "reader", receivedErrorEvent.Component)
|
||||
|
||||
// Reader should be disconnected after EOF
|
||||
require.False(t, br.Connected())
|
||||
|
||||
// Read should still be blocked
|
||||
select {
|
||||
case <-readDone:
|
||||
t.Fatal("Read should be blocked waiting for reconnection after EOF")
|
||||
default:
|
||||
// Good, still blocked
|
||||
}
|
||||
|
||||
// Reconnect with new data
|
||||
reader2 := newMockReader("more")
|
||||
reconnectReader(ctx, t, br, reader2)
|
||||
|
||||
// Wait for the blocked read to complete with new data
|
||||
testutil.TryReceive(ctx, t, readDone)
|
||||
require.NoError(t, readErr)
|
||||
require.Equal(t, 4, readN)
|
||||
require.Equal(t, "more", string(buf[:readN]))
|
||||
}
|
||||
|
||||
func BenchmarkBackedReader_Read(b *testing.B) {
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
br := backedpipe.NewBackedReader(errChan)
|
||||
buf := make([]byte, 1024)
|
||||
|
||||
// Create a reader that never returns EOF by cycling through data
|
||||
reader := &mockReader{
|
||||
readFunc: func(p []byte) (int, error) {
|
||||
// Fill buffer with 'x' characters - never EOF
|
||||
for i := range p {
|
||||
p[i] = 'x'
|
||||
}
|
||||
return len(p), nil
|
||||
},
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), testutil.WaitShort)
|
||||
defer cancel()
|
||||
reconnectReader(ctx, b, br, reader)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
br.Read(buf)
|
||||
}
|
||||
}
|
||||
|
||||
func TestBackedReader_PartialReads(t *testing.T) {
|
||||
t.Parallel()
|
||||
ctx := testutil.Context(t, testutil.WaitShort)
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
br := backedpipe.NewBackedReader(errChan)
|
||||
|
||||
// Create a reader that returns partial reads
|
||||
reader := &mockReader{
|
||||
readFunc: func(p []byte) (int, error) {
|
||||
// Always return just 1 byte at a time
|
||||
if len(p) == 0 {
|
||||
return 0, nil
|
||||
}
|
||||
p[0] = 'A'
|
||||
return 1, nil
|
||||
},
|
||||
}
|
||||
|
||||
reconnectReader(ctx, t, br, reader)
|
||||
|
||||
// Read multiple times
|
||||
buf := make([]byte, 10)
|
||||
for i := 0; i < 5; i++ {
|
||||
n, err := br.Read(buf)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 1, n)
|
||||
require.Equal(t, byte('A'), buf[0])
|
||||
}
|
||||
|
||||
require.Equal(t, uint64(5), br.SequenceNum())
|
||||
}
|
||||
|
||||
func TestBackedReader_CloseWhileBlockedOnUnderlyingReader(t *testing.T) {
|
||||
t.Parallel()
|
||||
ctx := testutil.Context(t, testutil.WaitShort)
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
br := backedpipe.NewBackedReader(errChan)
|
||||
|
||||
// Create a reader that blocks on Read calls but can be unblocked
|
||||
readStarted := make(chan struct{}, 1)
|
||||
readUnblocked := make(chan struct{})
|
||||
blockingReader := &mockReader{
|
||||
readFunc: func(p []byte) (int, error) {
|
||||
select {
|
||||
case readStarted <- struct{}{}:
|
||||
default:
|
||||
}
|
||||
<-readUnblocked // Block until signaled
|
||||
// After unblocking, return an error to simulate connection failure
|
||||
return 0, xerrors.New("connection interrupted")
|
||||
},
|
||||
}
|
||||
|
||||
// Connect the blocking reader
|
||||
seqNum := make(chan uint64, 1)
|
||||
newR := make(chan io.Reader, 1)
|
||||
|
||||
go br.Reconnect(seqNum, newR)
|
||||
|
||||
// Get sequence number and send blocking reader
|
||||
testutil.RequireReceive(ctx, t, seqNum)
|
||||
testutil.RequireSend(ctx, t, newR, io.Reader(blockingReader))
|
||||
|
||||
// Start a read that will block on the underlying reader
|
||||
readDone := make(chan struct{})
|
||||
var readErr error
|
||||
var readN int
|
||||
|
||||
go func() {
|
||||
defer close(readDone)
|
||||
buf := make([]byte, 10)
|
||||
readN, readErr = br.Read(buf)
|
||||
}()
|
||||
|
||||
// Wait for the read to start and block on the underlying reader
|
||||
testutil.RequireReceive(ctx, t, readStarted)
|
||||
|
||||
// Verify read is blocked by checking that it hasn't completed
|
||||
// and ensuring we have adequate time for it to reach the blocking state
|
||||
require.Eventually(t, func() bool {
|
||||
select {
|
||||
case <-readDone:
|
||||
t.Fatal("Read should be blocked on underlying reader")
|
||||
return false
|
||||
default:
|
||||
// Good, still blocked
|
||||
return true
|
||||
}
|
||||
}, testutil.WaitShort, testutil.IntervalMedium)
|
||||
|
||||
// Start Close() in a goroutine since it will block until the underlying read completes
|
||||
closeDone := make(chan error, 1)
|
||||
go func() {
|
||||
closeDone <- br.Close()
|
||||
}()
|
||||
|
||||
// Verify Close() is also blocked waiting for the underlying read
|
||||
select {
|
||||
case <-closeDone:
|
||||
t.Fatal("Close should be blocked until underlying read completes")
|
||||
case <-time.After(10 * time.Millisecond):
|
||||
// Good, Close is blocked
|
||||
}
|
||||
|
||||
// Unblock the underlying reader, which will cause both the read and close to complete
|
||||
close(readUnblocked)
|
||||
|
||||
// Wait for both the read and close to complete
|
||||
testutil.TryReceive(ctx, t, readDone)
|
||||
closeErr := testutil.RequireReceive(ctx, t, closeDone)
|
||||
require.NoError(t, closeErr)
|
||||
|
||||
// The read should return EOF because Close() was called while it was blocked,
|
||||
// even though the underlying reader returned an error
|
||||
require.Equal(t, 0, readN)
|
||||
require.Equal(t, io.EOF, readErr)
|
||||
|
||||
// Subsequent reads should return EOF since the reader is now closed
|
||||
buf := make([]byte, 10)
|
||||
n, err := br.Read(buf)
|
||||
require.Equal(t, 0, n)
|
||||
require.Equal(t, io.EOF, err)
|
||||
}
|
||||
|
||||
func TestBackedReader_CloseWhileBlockedWaitingForReconnect(t *testing.T) {
|
||||
t.Parallel()
|
||||
ctx := testutil.Context(t, testutil.WaitShort)
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
br := backedpipe.NewBackedReader(errChan)
|
||||
reader1 := newMockReader("initial")
|
||||
|
||||
// Initial connection
|
||||
seqNum := make(chan uint64, 1)
|
||||
newR := make(chan io.Reader, 1)
|
||||
|
||||
go br.Reconnect(seqNum, newR)
|
||||
|
||||
// Get sequence number and send initial reader
|
||||
testutil.RequireReceive(ctx, t, seqNum)
|
||||
testutil.RequireSend(ctx, t, newR, io.Reader(reader1))
|
||||
|
||||
// Read initial data
|
||||
buf := make([]byte, 10)
|
||||
n, err := br.Read(buf)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, "initial", string(buf[:n]))
|
||||
|
||||
// Simulate connection failure
|
||||
reader1.setError(xerrors.New("connection lost"))
|
||||
|
||||
// Start a read that will block waiting for reconnection
|
||||
readDone := make(chan struct{})
|
||||
var readErr error
|
||||
var readN int
|
||||
|
||||
go func() {
|
||||
defer close(readDone)
|
||||
readN, readErr = br.Read(buf)
|
||||
}()
|
||||
|
||||
// Wait for the error to be reported (indicating disconnection)
|
||||
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
|
||||
require.Error(t, receivedErrorEvent.Err)
|
||||
require.Equal(t, "reader", receivedErrorEvent.Component)
|
||||
require.Contains(t, receivedErrorEvent.Err.Error(), "connection lost")
|
||||
|
||||
// Verify read is blocked waiting for reconnection
|
||||
select {
|
||||
case <-readDone:
|
||||
t.Fatal("Read should be blocked waiting for reconnection")
|
||||
default:
|
||||
// Good, still blocked
|
||||
}
|
||||
|
||||
// Verify reader is disconnected
|
||||
require.False(t, br.Connected())
|
||||
|
||||
// Close the BackedReader while read is blocked waiting for reconnection
|
||||
err = br.Close()
|
||||
require.NoError(t, err)
|
||||
|
||||
// The read should unblock and return EOF
|
||||
testutil.TryReceive(ctx, t, readDone)
|
||||
require.Equal(t, 0, readN)
|
||||
require.Equal(t, io.EOF, readErr)
|
||||
}
|
||||
@@ -1,243 +0,0 @@
package backedpipe

import (
	"io"
	"os"
	"sync"

	"golang.org/x/xerrors"
)

var (
	ErrWriterClosed          = xerrors.New("cannot reconnect closed writer")
	ErrNilWriter             = xerrors.New("new writer cannot be nil")
	ErrFutureSequence        = xerrors.New("cannot replay from future sequence")
	ErrReplayDataUnavailable = xerrors.New("failed to read replay data")
	ErrReplayFailed          = xerrors.New("replay failed")
	ErrPartialReplay         = xerrors.New("partial replay")
)

// BackedWriter wraps an unreliable io.Writer and makes it resilient to disconnections.
// It maintains a ring buffer of recent writes for replay during reconnection.
type BackedWriter struct {
	mu          sync.Mutex
	cond        *sync.Cond
	writer      io.Writer
	buffer      *ringBuffer
	sequenceNum uint64 // total bytes written
	closed      bool

	// Error channel for generation-aware error reporting
	errorEventChan chan<- ErrorEvent

	// Current connection generation for error reporting
	currentGen uint64
}

// NewBackedWriter creates a new BackedWriter with generation-aware error reporting.
// The writer is initially disconnected and will block writes until connected.
// The errorEventChan will receive ErrorEvent structs containing error details,
// component info, and connection generation. Capacity must be > 0.
func NewBackedWriter(capacity int, errorEventChan chan<- ErrorEvent) *BackedWriter {
	if capacity <= 0 {
		panic("backed writer capacity must be > 0")
	}
	if errorEventChan == nil {
		panic("error event channel cannot be nil")
	}
	bw := &BackedWriter{
		buffer:         newRingBuffer(capacity),
		errorEventChan: errorEventChan,
	}
	bw.cond = sync.NewCond(&bw.mu)
	return bw
}

// blockUntilConnectedOrClosed blocks until either a writer is available or the BackedWriter is closed.
// Returns os.ErrClosed if closed while waiting, nil if connected. You must hold the mutex to call this.
func (bw *BackedWriter) blockUntilConnectedOrClosed() error {
	for bw.writer == nil && !bw.closed {
		bw.cond.Wait()
	}
	if bw.closed {
		return os.ErrClosed
	}
	return nil
}

// Write implements io.Writer.
// When connected, it writes to both the ring buffer (to preserve data in case we need to replay it)
// and the underlying writer.
// If the underlying write fails, the writer is marked as disconnected and the write blocks
// until reconnection occurs.
func (bw *BackedWriter) Write(p []byte) (int, error) {
	if len(p) == 0 {
		return 0, nil
	}

	bw.mu.Lock()
	defer bw.mu.Unlock()

	// Block until connected
	if err := bw.blockUntilConnectedOrClosed(); err != nil {
		return 0, err
	}

	// Write to buffer
	bw.buffer.Write(p)
	bw.sequenceNum += uint64(len(p))

	// Try to write to underlying writer
	n, err := bw.writer.Write(p)
	if err == nil && n != len(p) {
		err = io.ErrShortWrite
	}

	if err != nil {
		// Connection failed or partial write, mark as disconnected
		bw.writer = nil

		// Notify parent of error with generation information
		select {
		case bw.errorEventChan <- ErrorEvent{
			Err:        err,
			Component:  "writer",
			Generation: bw.currentGen,
		}:
		default:
			// Channel is full, drop the error.
			// This is not a problem, because we set the writer to nil
			// and block until reconnected, so no new errors will be sent
			// until the pipe processes the error and reconnects.
		}

		// Block until reconnected - reconnection will replay this data
		if err := bw.blockUntilConnectedOrClosed(); err != nil {
			return 0, err
		}

		// Don't retry - reconnection replay handled it
		return len(p), nil
	}

	// Write succeeded
	return len(p), nil
}

// Reconnect replaces the current writer with a new one and replays data from the specified
// sequence number. If the requested sequence number is no longer in the buffer,
// returns an error indicating data loss.
//
// IMPORTANT: You must close the current writer, if any, before calling this method.
// Otherwise, if a Write operation is currently blocked in the underlying writer's
// Write method, this method will deadlock waiting for the mutex that Write holds.
func (bw *BackedWriter) Reconnect(replayFromSeq uint64, newWriter io.Writer) error {
	bw.mu.Lock()
	defer bw.mu.Unlock()

	if bw.closed {
		return ErrWriterClosed
	}

	if newWriter == nil {
		return ErrNilWriter
	}

	// Check if we can replay from the requested sequence number
	if replayFromSeq > bw.sequenceNum {
		return ErrFutureSequence
	}

	// Calculate how many bytes we need to replay
	replayBytes := bw.sequenceNum - replayFromSeq

	var replayData []byte
	if replayBytes > 0 {
		// Get the last replayBytes from the buffer.
		// If the buffer doesn't have enough data (some was evicted),
		// ReadLast will return an error.
		var err error
		// Safe conversion: The check above (replayFromSeq > bw.sequenceNum) ensures
		// replayBytes = bw.sequenceNum - replayFromSeq is always <= bw.sequenceNum.
		// Since sequence numbers are much smaller than maxInt, the uint64->int conversion is safe.
		//nolint:gosec // Safe conversion: replayBytes <= sequenceNum, which is much less than maxInt
		replayData, err = bw.buffer.ReadLast(int(replayBytes))
		if err != nil {
			return ErrReplayDataUnavailable
		}
	}

	// Clear the current writer first in case replay fails
	bw.writer = nil

	// Replay data if needed. We keep the mutex held during replay to ensure
	// no concurrent operations can interfere with the reconnection process.
	if len(replayData) > 0 {
		n, err := newWriter.Write(replayData)
		if err != nil {
			// Reconnect failed, writer remains nil
			return ErrReplayFailed
		}

		if n != len(replayData) {
			// Reconnect failed, writer remains nil
			return ErrPartialReplay
		}
	}

	// Set new writer only after successful replay. This ensures no concurrent
	// writes can interfere with the replay operation.
	bw.writer = newWriter

	// Wake up any operations waiting for connection
	bw.cond.Broadcast()

	return nil
}

// Close closes the writer and prevents further writes.
// After closing, all Write calls will return os.ErrClosed.
// This code keeps the Close() signature consistent with io.Closer,
// but it never actually returns an error.
//
// IMPORTANT: You must close the current underlying writer, if any, before calling
// this method. Otherwise, if a Write operation is currently blocked in the
// underlying writer's Write method, this method will deadlock waiting for the
// mutex that Write holds.
func (bw *BackedWriter) Close() error {
	bw.mu.Lock()
	defer bw.mu.Unlock()

	if bw.closed {
		return nil
	}

	bw.closed = true
	bw.writer = nil

	// Wake up any blocked operations
	bw.cond.Broadcast()

	return nil
}

// SequenceNum returns the current sequence number (total bytes written).
func (bw *BackedWriter) SequenceNum() uint64 {
	bw.mu.Lock()
	defer bw.mu.Unlock()
	return bw.sequenceNum
}

// Connected returns whether the writer is currently connected.
func (bw *BackedWriter) Connected() bool {
	bw.mu.Lock()
	defer bw.mu.Unlock()
	return bw.writer != nil
}

// SetGeneration sets the current connection generation for error reporting.
func (bw *BackedWriter) SetGeneration(generation uint64) {
	bw.mu.Lock()
	defer bw.mu.Unlock()
	bw.currentGen = generation
}
@@ -1,992 +0,0 @@
|
||||
package backedpipe_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"os"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"github.com/coder/coder/v2/agent/immortalstreams/backedpipe"
|
||||
"github.com/coder/coder/v2/testutil"
|
||||
)
|
||||
|
||||
// mockWriter implements io.Writer with controllable behavior for testing
|
||||
type mockWriter struct {
|
||||
mu sync.Mutex
|
||||
buffer bytes.Buffer
|
||||
err error
|
||||
writeFunc func([]byte) (int, error)
|
||||
writeCalls int
|
||||
}
|
||||
|
||||
func newMockWriter() *mockWriter {
|
||||
return &mockWriter{}
|
||||
}
|
||||
|
||||
// newBackedWriterForTest creates a BackedWriter with a small buffer for testing eviction behavior
|
||||
func newBackedWriterForTest(bufferSize int) *backedpipe.BackedWriter {
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
return backedpipe.NewBackedWriter(bufferSize, errChan)
|
||||
}
|
||||
|
||||
func (mw *mockWriter) Write(p []byte) (int, error) {
|
||||
mw.mu.Lock()
|
||||
defer mw.mu.Unlock()
|
||||
|
||||
mw.writeCalls++
|
||||
|
||||
if mw.writeFunc != nil {
|
||||
return mw.writeFunc(p)
|
||||
}
|
||||
|
||||
if mw.err != nil {
|
||||
return 0, mw.err
|
||||
}
|
||||
|
||||
return mw.buffer.Write(p)
|
||||
}
|
||||
|
||||
func (mw *mockWriter) Len() int {
|
||||
mw.mu.Lock()
|
||||
defer mw.mu.Unlock()
|
||||
return mw.buffer.Len()
|
||||
}
|
||||
|
||||
func (mw *mockWriter) Reset() {
|
||||
mw.mu.Lock()
|
||||
defer mw.mu.Unlock()
|
||||
mw.buffer.Reset()
|
||||
mw.writeCalls = 0
|
||||
mw.err = nil
|
||||
mw.writeFunc = nil
|
||||
}
|
||||
|
||||
func (mw *mockWriter) setError(err error) {
|
||||
mw.mu.Lock()
|
||||
defer mw.mu.Unlock()
|
||||
mw.err = err
|
||||
}
|
||||
|
||||
func TestBackedWriter_NewBackedWriter(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
|
||||
require.NotNil(t, bw)
|
||||
require.Equal(t, uint64(0), bw.SequenceNum())
|
||||
require.False(t, bw.Connected())
|
||||
}
|
||||
|
||||
func TestBackedWriter_WriteBlocksWhenDisconnected(t *testing.T) {
|
||||
t.Parallel()
|
||||
ctx := testutil.Context(t, testutil.WaitShort)
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
|
||||
|
||||
// Write should block when disconnected
|
||||
writeComplete := make(chan struct{})
|
||||
var writeErr error
|
||||
var n int
|
||||
|
||||
go func() {
|
||||
defer close(writeComplete)
|
||||
n, writeErr = bw.Write([]byte("hello"))
|
||||
}()
|
||||
|
||||
// Verify write is blocked
|
||||
select {
|
||||
case <-writeComplete:
|
||||
t.Fatal("Write should have blocked when disconnected")
|
||||
case <-time.After(50 * time.Millisecond):
|
||||
// Expected - write is blocked
|
||||
}
|
||||
|
||||
// Connect and verify write completes
|
||||
writer := newMockWriter()
|
||||
err := bw.Reconnect(0, writer)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Write should now complete
|
||||
testutil.TryReceive(ctx, t, writeComplete)
|
||||
|
||||
require.NoError(t, writeErr)
|
||||
require.Equal(t, 5, n)
|
||||
require.Equal(t, uint64(5), bw.SequenceNum())
|
||||
require.Equal(t, []byte("hello"), writer.buffer.Bytes())
|
||||
}
|
||||
|
||||
func TestBackedWriter_WriteToUnderlyingWhenConnected(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
|
||||
writer := newMockWriter()
|
||||
|
||||
// Connect
|
||||
err := bw.Reconnect(0, writer)
|
||||
require.NoError(t, err)
|
||||
require.True(t, bw.Connected())
|
||||
|
||||
// Write should go to both buffer and underlying writer
|
||||
n, err := bw.Write([]byte("hello"))
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 5, n)
|
||||
|
||||
// Data should be buffered
|
||||
require.Equal(t, uint64(5), bw.SequenceNum())
|
||||
|
||||
// Check underlying writer
|
||||
require.Equal(t, []byte("hello"), writer.buffer.Bytes())
|
||||
}
|
||||
|
||||
func TestBackedWriter_BlockOnWriteFailure(t *testing.T) {
|
||||
t.Parallel()
|
||||
ctx := testutil.Context(t, testutil.WaitShort)
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
|
||||
writer := newMockWriter()
|
||||
|
||||
// Connect
|
||||
err := bw.Reconnect(0, writer)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Cause write to fail
|
||||
writer.setError(xerrors.New("write failed"))
|
||||
|
||||
// Write should block when underlying writer fails, not succeed immediately
|
||||
writeComplete := make(chan struct{})
|
||||
var writeErr error
|
||||
var n int
|
||||
|
||||
go func() {
|
||||
defer close(writeComplete)
|
||||
n, writeErr = bw.Write([]byte("hello"))
|
||||
}()
|
||||
|
||||
// Verify write is blocked
|
||||
select {
|
||||
case <-writeComplete:
|
||||
t.Fatal("Write should have blocked when underlying writer fails")
|
||||
case <-time.After(50 * time.Millisecond):
|
||||
// Expected - write is blocked
|
||||
}
|
||||
|
||||
// Wait for error event which implies writer was marked disconnected
|
||||
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
|
||||
require.Contains(t, receivedErrorEvent.Err.Error(), "write failed")
|
||||
require.Equal(t, "writer", receivedErrorEvent.Component)
|
||||
require.False(t, bw.Connected())
|
||||
|
||||
// Reconnect with working writer and verify write completes
|
||||
writer2 := newMockWriter()
|
||||
err = bw.Reconnect(0, writer2) // Replay from beginning
|
||||
require.NoError(t, err)
|
||||
|
||||
// Write should now complete
|
||||
testutil.TryReceive(ctx, t, writeComplete)
|
||||
|
||||
require.NoError(t, writeErr)
|
||||
require.Equal(t, 5, n)
|
||||
require.Equal(t, uint64(5), bw.SequenceNum())
|
||||
require.Equal(t, []byte("hello"), writer2.buffer.Bytes())
|
||||
}
|
||||
|
||||
func TestBackedWriter_ReplayOnReconnect(t *testing.T) {
|
||||
t.Parallel()
|
||||
ctx := testutil.Context(t, testutil.WaitShort)
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
|
||||
|
||||
// Connect initially to write some data
|
||||
writer1 := newMockWriter()
|
||||
err := bw.Reconnect(0, writer1)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Write some data while connected
|
||||
_, err = bw.Write([]byte("hello"))
|
||||
require.NoError(t, err)
|
||||
_, err = bw.Write([]byte(" world"))
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, uint64(11), bw.SequenceNum())
|
||||
|
||||
// Disconnect by causing a write failure
|
||||
writer1.setError(xerrors.New("connection lost"))
|
||||
|
||||
// Write should block when underlying writer fails
|
||||
writeComplete := make(chan struct{})
|
||||
var writeErr error
|
||||
var n int
|
||||
|
||||
go func() {
|
||||
defer close(writeComplete)
|
||||
n, writeErr = bw.Write([]byte("test"))
|
||||
}()
|
||||
|
||||
// Verify write is blocked
|
||||
select {
|
||||
case <-writeComplete:
|
||||
t.Fatal("Write should have blocked when underlying writer fails")
|
||||
case <-time.After(50 * time.Millisecond):
|
||||
// Expected - write is blocked
|
||||
}
|
||||
|
||||
// Wait for error event which implies writer was marked disconnected
|
||||
receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
|
||||
require.Contains(t, receivedErrorEvent.Err.Error(), "connection lost")
|
||||
require.Equal(t, "writer", receivedErrorEvent.Component)
|
||||
require.False(t, bw.Connected())
|
||||
|
||||
// Reconnect with new writer and request replay from beginning
|
||||
writer2 := newMockWriter()
|
||||
err = bw.Reconnect(0, writer2)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Write should now complete
|
||||
select {
|
||||
case <-writeComplete:
|
||||
// Expected - write completed
|
||||
case <-time.After(100 * time.Millisecond):
|
||||
t.Fatal("Write should have completed after reconnection")
|
||||
}
|
||||
|
||||
require.NoError(t, writeErr)
|
||||
require.Equal(t, 4, n)
|
||||
|
||||
// Should have replayed all data including the failed write that was buffered
|
||||
require.Equal(t, []byte("hello worldtest"), writer2.buffer.Bytes())
|
||||
|
||||
// Write new data should go to both
|
||||
_, err = bw.Write([]byte("!"))
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, []byte("hello worldtest!"), writer2.buffer.Bytes())
|
||||
}
|
||||
|
||||
func TestBackedWriter_PartialReplay(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
|
||||
|
||||
// Connect initially to write some data
|
||||
writer1 := newMockWriter()
|
||||
err := bw.Reconnect(0, writer1)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Write some data
|
||||
_, err = bw.Write([]byte("hello"))
|
||||
require.NoError(t, err)
|
||||
_, err = bw.Write([]byte(" world"))
|
||||
require.NoError(t, err)
|
||||
_, err = bw.Write([]byte("!"))
|
||||
require.NoError(t, err)
|
||||
|
||||
// Reconnect with new writer and request replay from middle
|
||||
writer2 := newMockWriter()
|
||||
err = bw.Reconnect(5, writer2) // From " world!"
|
||||
require.NoError(t, err)
|
||||
|
||||
// Should have replayed only the requested portion
|
||||
require.Equal(t, []byte(" world!"), writer2.buffer.Bytes())
|
||||
}
|
||||
|
||||
func TestBackedWriter_ReplayFromFutureSequence(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
errChan := make(chan backedpipe.ErrorEvent, 1)
|
||||
bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
|
||||
|
||||
// Connect initially to write some data
|
||||
writer1 := newMockWriter()
|
||||
err := bw.Reconnect(0, writer1)
|
||||
require.NoError(t, err)
|
||||
|
||||
_, err = bw.Write([]byte("hello"))
|
||||
require.NoError(t, err)
|
||||
|
||||
writer2 := newMockWriter()
|
||||
err = bw.Reconnect(10, writer2) // Future sequence
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, backedpipe.ErrFutureSequence)
|
||||
}
|
||||
|
||||
func TestBackedWriter_ReplayDataLoss(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
bw := newBackedWriterForTest(10) // Small buffer for testing
|
||||
|
||||
// Connect initially to write some data
|
||||
writer1 := newMockWriter()
|
||||
err := bw.Reconnect(0, writer1)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Fill buffer beyond capacity to cause eviction
|
||||
_, err = bw.Write([]byte("0123456789")) // Fills buffer exactly
|
||||
require.NoError(t, err)
|
||||
_, err = bw.Write([]byte("abcdef")) // Should evict "012345"
|
||||
require.NoError(t, err)
|
||||
|
||||
writer2 := newMockWriter()
|
||||
err = bw.Reconnect(0, writer2) // Try to replay from evicted data
|
||||
// With the new error handling, this should fail because we can't read all the data
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, backedpipe.ErrReplayDataUnavailable)
|
||||
}
|
||||
|
||||
func TestBackedWriter_BufferEviction(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
bw := newBackedWriterForTest(5) // Very small buffer for testing
|
||||
|
||||
// Connect initially
|
||||
writer := newMockWriter()
|
||||
err := bw.Reconnect(0, writer)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Write data that will cause eviction
|
||||
n, err := bw.Write([]byte("abcde"))
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 5, n)
|
||||
|
||||
// Write more to cause eviction
|
||||
n, err = bw.Write([]byte("fg"))
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 2, n)
|
||||
|
||||
// Verify that the buffer contains only the latest data after eviction
|
||||
// Total sequence number should be 7 (5 + 2)
|
||||
require.Equal(t, uint64(7), bw.SequenceNum())
|
||||
|
||||
// Try to reconnect from the beginning - this should fail because
|
||||
// the early data was evicted from the buffer
|
||||
writer2 := newMockWriter()
|
||||
err = bw.Reconnect(0, writer2)
|
||||
require.Error(t, err)
|
||||
require.ErrorIs(t, err, backedpipe.ErrReplayDataUnavailable)
|
||||
|
||||
// However, reconnecting from a sequence that's still in the buffer should work
|
||||
// The buffer should contain the last 5 bytes: "cdefg"
|
||||
writer3 := newMockWriter()
|
||||
err = bw.Reconnect(2, writer3) // From sequence 2, should replay "cdefg"
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, []byte("cdefg"), writer3.buffer.Bytes())
|
||||
require.True(t, bw.Connected())
|
||||
}

func TestBackedWriter_Close(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
	writer := newMockWriter()

	bw.Reconnect(0, writer)

	err := bw.Close()
	require.NoError(t, err)

	// Writes after close should fail
	_, err = bw.Write([]byte("test"))
	require.Equal(t, os.ErrClosed, err)

	// Reconnect after close should fail
	err = bw.Reconnect(0, newMockWriter())
	require.Error(t, err)
	require.ErrorIs(t, err, backedpipe.ErrWriterClosed)
}

func TestBackedWriter_CloseIdempotent(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	err := bw.Close()
	require.NoError(t, err)

	// Second close should be no-op
	err = bw.Close()
	require.NoError(t, err)
}

func TestBackedWriter_ReconnectDuringReplay(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Connect initially to write some data
	writer1 := newMockWriter()
	err := bw.Reconnect(0, writer1)
	require.NoError(t, err)

	_, err = bw.Write([]byte("hello world"))
	require.NoError(t, err)

	// Create a writer that fails during replay
	writer2 := &mockWriter{
		err: backedpipe.ErrReplayFailed,
	}

	err = bw.Reconnect(0, writer2)
	require.Error(t, err)
	require.ErrorIs(t, err, backedpipe.ErrReplayFailed)
	require.False(t, bw.Connected())
}

func TestBackedWriter_BlockOnPartialWrite(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Create writer that does partial writes
	writer := &mockWriter{
		writeFunc: func(p []byte) (int, error) {
			if len(p) > 3 {
				return 3, nil // Only write first 3 bytes
			}
			return len(p), nil
		},
	}

	bw.Reconnect(0, writer)

	// Write should block due to partial write
	writeComplete := make(chan struct{})
	var writeErr error
	var n int

	go func() {
		defer close(writeComplete)
		n, writeErr = bw.Write([]byte("hello"))
	}()

	// Verify write is blocked
	select {
	case <-writeComplete:
		t.Fatal("Write should have blocked when underlying writer does partial write")
	case <-time.After(50 * time.Millisecond):
		// Expected - write is blocked
	}

	// Wait for error event which implies writer was marked disconnected
	receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
	require.Contains(t, receivedErrorEvent.Err.Error(), "short write")
	require.Equal(t, "writer", receivedErrorEvent.Component)
	require.False(t, bw.Connected())

	// Reconnect with working writer and verify write completes
	writer2 := newMockWriter()
	err := bw.Reconnect(0, writer2) // Replay from beginning
	require.NoError(t, err)

	// Write should now complete
	testutil.TryReceive(ctx, t, writeComplete)

	require.NoError(t, writeErr)
	require.Equal(t, 5, n)
	require.Equal(t, uint64(5), bw.SequenceNum())
	require.Equal(t, []byte("hello"), writer2.buffer.Bytes())
}

func TestBackedWriter_WriteUnblocksOnReconnect(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Start a single write that should block
	writeResult := make(chan error, 1)
	go func() {
		_, err := bw.Write([]byte("test"))
		writeResult <- err
	}()

	// Verify write is blocked
	select {
	case <-writeResult:
		t.Fatal("Write should have blocked when disconnected")
	case <-time.After(50 * time.Millisecond):
		// Expected - write is blocked
	}

	// Connect and verify write completes
	writer := newMockWriter()
	err := bw.Reconnect(0, writer)
	require.NoError(t, err)

	// Write should now complete
	err = testutil.RequireReceive(ctx, t, writeResult)
	require.NoError(t, err)

	// Write should have been written to the underlying writer
	require.Equal(t, "test", writer.buffer.String())
}

func TestBackedWriter_CloseUnblocksWaitingWrites(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Start a write that should block
	writeComplete := make(chan error, 1)
	go func() {
		_, err := bw.Write([]byte("test"))
		writeComplete <- err
	}()

	// Verify write is blocked
	select {
	case <-writeComplete:
		t.Fatal("Write should have blocked when disconnected")
	case <-time.After(50 * time.Millisecond):
		// Expected - write is blocked
	}

	// Close the writer
	err := bw.Close()
	require.NoError(t, err)

	// Write should now complete with error
	err = testutil.RequireReceive(ctx, t, writeComplete)
	require.Equal(t, os.ErrClosed, err)
}

func TestBackedWriter_WriteBlocksAfterDisconnection(t *testing.T) {
	t.Parallel()
	ctx := testutil.Context(t, testutil.WaitShort)

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)
	writer := newMockWriter()

	// Connect initially
	err := bw.Reconnect(0, writer)
	require.NoError(t, err)

	// Write should succeed when connected
	_, err = bw.Write([]byte("hello"))
	require.NoError(t, err)

	// Cause disconnection - the write should now block instead of returning an error
	writer.setError(xerrors.New("connection lost"))

	// This write should block
	writeComplete := make(chan error, 1)
	go func() {
		_, err := bw.Write([]byte("world"))
		writeComplete <- err
	}()

	// Verify write is blocked
	select {
	case <-writeComplete:
		t.Fatal("Write should have blocked after disconnection")
	case <-time.After(50 * time.Millisecond):
		// Expected - write is blocked
	}

	// Wait for error event which implies writer was marked disconnected
	receivedErrorEvent := testutil.RequireReceive(ctx, t, errChan)
	require.Contains(t, receivedErrorEvent.Err.Error(), "connection lost")
	require.Equal(t, "writer", receivedErrorEvent.Component)
	require.False(t, bw.Connected())

	// Reconnect and verify write completes
	writer2 := newMockWriter()
	err = bw.Reconnect(5, writer2) // Replay from after "hello"
	require.NoError(t, err)

	err = testutil.RequireReceive(ctx, t, writeComplete)
	require.NoError(t, err)

	// Check that only "world" was written during replay (not duplicated)
	require.Equal(t, []byte("world"), writer2.buffer.Bytes()) // Only "world" since we replayed from sequence 5
}

func TestBackedWriter_ConcurrentWriteAndClose(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Don't connect initially - this will cause writes to block in blockUntilConnectedOrClosed()

	writeStarted := make(chan struct{}, 1)

	// Start a write operation that will block waiting for connection
	writeComplete := make(chan struct{})
	var writeErr error
	var n int

	go func() {
		defer close(writeComplete)
		// Signal that we're about to start the write
		writeStarted <- struct{}{}
		// This write will block in blockUntilConnectedOrClosed() since no writer is connected
		n, writeErr = bw.Write([]byte("hello"))
	}()

	// Wait for write goroutine to start
	ctx := testutil.Context(t, testutil.WaitShort)
	testutil.RequireReceive(ctx, t, writeStarted)

	// Ensure the write is actually blocked by repeatedly checking that:
	// 1. The write hasn't completed yet
	// 2. The writer is still not connected
	// We use require.Eventually to give it a fair chance to reach the blocking state
	require.Eventually(t, func() bool {
		select {
		case <-writeComplete:
			t.Fatal("Write should be blocked when no writer is connected")
			return false
		default:
			// Write is still blocked, which is what we want
			return !bw.Connected()
		}
	}, testutil.WaitShort, testutil.IntervalMedium)

	// Close the writer while the write is blocked waiting for connection
	closeErr := bw.Close()
	require.NoError(t, closeErr)

	// Wait for write to complete
	select {
	case <-writeComplete:
		// Good, write completed
	case <-ctx.Done():
		t.Fatal("Write did not complete in time")
	}

	// The write should have failed with os.ErrClosed because Close() was called
	// while it was waiting for connection
	require.ErrorIs(t, writeErr, os.ErrClosed)
	require.Equal(t, 0, n)

	// Subsequent writes should also fail
	n, err := bw.Write([]byte("world"))
	require.Equal(t, 0, n)
	require.ErrorIs(t, err, os.ErrClosed)
}

func TestBackedWriter_ConcurrentWriteAndReconnect(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Initial connection
	writer1 := newMockWriter()
	err := bw.Reconnect(0, writer1)
	require.NoError(t, err)

	// Write some initial data
	_, err = bw.Write([]byte("initial"))
	require.NoError(t, err)

	// Start reconnection which will block new writes
	replayStarted := make(chan struct{}, 1) // Buffered to prevent race condition
	replayCanComplete := make(chan struct{})
	writer2 := &mockWriter{
		writeFunc: func(p []byte) (int, error) {
			// Signal that replay has started
			select {
			case replayStarted <- struct{}{}:
			default:
				// Signal already sent, which is fine
			}
			// Wait for test to allow replay to complete
			<-replayCanComplete
			return len(p), nil
		},
	}

	// Start the reconnection in a goroutine so we can control timing
	reconnectComplete := make(chan error, 1)
	go func() {
		reconnectComplete <- bw.Reconnect(0, writer2)
	}()

	ctx := testutil.Context(t, testutil.WaitShort)
	// Wait for replay to start
	testutil.RequireReceive(ctx, t, replayStarted)

	// Now start a write operation that will be blocked by the ongoing reconnect
	writeStarted := make(chan struct{}, 1)
	writeComplete := make(chan struct{})
	var writeErr error
	var n int

	go func() {
		defer close(writeComplete)
		// Signal that we're about to start the write
		writeStarted <- struct{}{}
		// This write should be blocked during reconnect
		n, writeErr = bw.Write([]byte("blocked"))
	}()

	// Wait for write to start
	testutil.RequireReceive(ctx, t, writeStarted)

	// Use a small timeout to ensure the write goroutine has a chance to get blocked
	// on the mutex before we check if it's still blocked
	writeCheckTimer := time.NewTimer(testutil.IntervalFast)
	defer writeCheckTimer.Stop()

	select {
	case <-writeComplete:
		t.Fatal("Write should be blocked during reconnect")
	case <-writeCheckTimer.C:
		// Write is still blocked after a reasonable wait
	}

	// Allow replay to complete, which will allow reconnect to finish
	close(replayCanComplete)

	// Wait for reconnection to complete
	select {
	case reconnectErr := <-reconnectComplete:
		require.NoError(t, reconnectErr)
	case <-ctx.Done():
		t.Fatal("Reconnect did not complete in time")
	}

	// Wait for write to complete
	<-writeComplete

	// Write should succeed after reconnection completes
	require.NoError(t, writeErr)
	require.Equal(t, 7, n) // "blocked" is 7 bytes

	// Verify the writer is connected
	require.True(t, bw.Connected())
}

func TestBackedWriter_ConcurrentReconnectAndClose(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Initial connection and write some data
	writer1 := newMockWriter()
	err := bw.Reconnect(0, writer1)
	require.NoError(t, err)
	_, err = bw.Write([]byte("test data"))
	require.NoError(t, err)

	// Start reconnection with slow replay
	reconnectStarted := make(chan struct{}, 1)
	replayCanComplete := make(chan struct{})
	reconnectComplete := make(chan struct{})
	var reconnectErr error

	go func() {
		defer close(reconnectComplete)
		writer2 := &mockWriter{
			writeFunc: func(p []byte) (int, error) {
				// Signal that replay has started
				select {
				case reconnectStarted <- struct{}{}:
				default:
				}
				// Wait for test to allow replay to complete
				<-replayCanComplete
				return len(p), nil
			},
		}
		reconnectErr = bw.Reconnect(0, writer2)
	}()

	// Wait for reconnection to start
	ctx := testutil.Context(t, testutil.WaitShort)
	testutil.RequireReceive(ctx, t, reconnectStarted)

	// Start Close() in a separate goroutine since it will block until Reconnect() completes
	closeStarted := make(chan struct{}, 1)
	closeComplete := make(chan error, 1)
	go func() {
		closeStarted <- struct{}{} // Signal that Close() is starting
		closeComplete <- bw.Close()
	}()

	// Wait for Close() to start, then give it a moment to attempt to acquire the mutex
	testutil.RequireReceive(ctx, t, closeStarted)
	closeCheckTimer := time.NewTimer(testutil.IntervalFast)
	defer closeCheckTimer.Stop()

	select {
	case <-closeComplete:
		t.Fatal("Close should be blocked during reconnect")
	case <-closeCheckTimer.C:
		// Good, Close is still blocked after a reasonable wait
	}

	// Allow replay to complete so reconnection can finish
	close(replayCanComplete)

	// Wait for reconnect to complete
	select {
	case <-reconnectComplete:
		// Good, reconnect completed
	case <-ctx.Done():
		t.Fatal("Reconnect did not complete in time")
	}

	// Wait for close to complete
	select {
	case closeErr := <-closeComplete:
		require.NoError(t, closeErr)
	case <-ctx.Done():
		t.Fatal("Close did not complete in time")
	}

	// With mutex held during replay, Close() waits for Reconnect() to finish.
	// So Reconnect() should succeed, then Close() runs and closes the writer.
	require.NoError(t, reconnectErr)

	// Verify writer is closed (Close() ran after Reconnect() completed)
	require.False(t, bw.Connected())
}

func TestBackedWriter_MultipleWritesDuringReconnect(t *testing.T) {
	t.Parallel()

	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Initial connection
	writer1 := newMockWriter()
	err := bw.Reconnect(0, writer1)
	require.NoError(t, err)

	// Write some initial data
	_, err = bw.Write([]byte("initial"))
	require.NoError(t, err)

	// Start multiple write operations
	numWriters := 5
	var wg sync.WaitGroup
	writeResults := make([]error, numWriters)
	writesStarted := make(chan struct{}, numWriters)

	for i := 0; i < numWriters; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Signal that this write is starting
			writesStarted <- struct{}{}
			data := []byte{byte('A' + id)}
			_, writeResults[id] = bw.Write(data)
		}(i)
	}

	// Wait for all writes to start
	ctx := testutil.Context(t, testutil.WaitLong)
	for i := 0; i < numWriters; i++ {
		testutil.RequireReceive(ctx, t, writesStarted)
	}

	// Use a timer to ensure all write goroutines have had a chance to start executing
	// and potentially get blocked on the mutex before we start the reconnection
	writesReadyTimer := time.NewTimer(testutil.IntervalFast)
	defer writesReadyTimer.Stop()
	<-writesReadyTimer.C

	// Start reconnection with controlled replay
	replayStarted := make(chan struct{}, 1)
	replayCanComplete := make(chan struct{})
	writer2 := &mockWriter{
		writeFunc: func(p []byte) (int, error) {
			// Signal that replay has started
			select {
			case replayStarted <- struct{}{}:
			default:
			}
			// Wait for test to allow replay to complete
			<-replayCanComplete
			return len(p), nil
		},
	}

	// Start reconnection in a goroutine so we can control timing
	reconnectComplete := make(chan error, 1)
	go func() {
		reconnectComplete <- bw.Reconnect(0, writer2)
	}()

	// Wait for replay to start
	testutil.RequireReceive(ctx, t, replayStarted)

	// Allow replay to complete
	close(replayCanComplete)

	// Wait for reconnection to complete
	select {
	case reconnectErr := <-reconnectComplete:
		require.NoError(t, reconnectErr)
	case <-ctx.Done():
		t.Fatal("Reconnect did not complete in time")
	}

	// Wait for all writes to complete
	wg.Wait()

	// All writes should succeed
	for i, err := range writeResults {
		require.NoError(t, err, "Write %d should succeed", i)
	}

	// Verify the writer is connected
	require.True(t, bw.Connected())
}

func BenchmarkBackedWriter_Write(b *testing.B) {
	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan) // 64KB buffer
	writer := newMockWriter()
	bw.Reconnect(0, writer)

	data := bytes.Repeat([]byte("x"), 1024) // 1KB writes

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		bw.Write(data)
	}
}

func BenchmarkBackedWriter_Reconnect(b *testing.B) {
	errChan := make(chan backedpipe.ErrorEvent, 1)
	bw := backedpipe.NewBackedWriter(backedpipe.DefaultBufferSize, errChan)

	// Connect initially to fill buffer with data
	initialWriter := newMockWriter()
	err := bw.Reconnect(0, initialWriter)
	if err != nil {
		b.Fatal(err)
	}

	// Fill buffer with data
	data := bytes.Repeat([]byte("x"), 1024)
	for i := 0; i < 32; i++ {
		bw.Write(data)
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		writer := newMockWriter()
		bw.Reconnect(0, writer)
	}
}
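The reconnect tests above all hinge on the same sequence arithmetic: a reconnect at `fromSeq` can only be served if the bytes between `fromSeq` and the current sequence number are still buffered. The following standalone sketch illustrates that check; `replayLen` is a hypothetical helper written for this note, not part of the `backedpipe` package.

```go
package main

import (
	"errors"
	"fmt"
)

// replayLen reports how many bytes must be replayed to resume a reader
// at fromSeq, given the writer's current sequence number and how many
// bytes are still held in the ring buffer.
func replayLen(seqNum, fromSeq uint64, buffered int) (int, error) {
	if fromSeq > seqNum {
		return 0, errors.New("sequence number from the future")
	}
	need := int(seqNum - fromSeq)
	if need > buffered {
		return 0, errors.New("replay data unavailable: evicted from buffer")
	}
	return need, nil
}

func main() {
	// After writing 7 bytes through a 5-byte buffer, only the last 5 remain.
	n, err := replayLen(7, 2, 5) // reconnect at sequence 2: still replayable
	fmt.Println(n, err)

	_, err = replayLen(7, 0, 5) // reconnect at sequence 0: first 2 bytes were evicted
	fmt.Println(err != nil)
}
```

This mirrors the eviction test: reconnecting at sequence 0 fails with a replay-unavailable error, while sequence 2 succeeds with a 5-byte replay.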
@@ -1,129 +0,0 @@
package backedpipe

import "golang.org/x/xerrors"

// ringBuffer implements an efficient circular buffer with a fixed-size allocation.
// This implementation is not thread-safe and relies on external synchronization.
type ringBuffer struct {
	buffer []byte
	start  int // index of first valid byte
	end    int // index of last valid byte (-1 when empty)
}

// newRingBuffer creates a new ring buffer with the specified capacity.
// Capacity must be > 0.
func newRingBuffer(capacity int) *ringBuffer {
	if capacity <= 0 {
		panic("ring buffer capacity must be > 0")
	}
	return &ringBuffer{
		buffer: make([]byte, capacity),
		end:    -1, // -1 indicates empty buffer
	}
}

// Size returns the current number of bytes in the buffer.
func (rb *ringBuffer) Size() int {
	if rb.end == -1 {
		return 0 // Buffer is empty
	}
	if rb.start <= rb.end {
		return rb.end - rb.start + 1
	}
	// Buffer wraps around
	return len(rb.buffer) - rb.start + rb.end + 1
}

// Write writes data to the ring buffer. If the buffer would overflow,
// it evicts the oldest data to make room for new data.
func (rb *ringBuffer) Write(data []byte) {
	if len(data) == 0 {
		return
	}

	capacity := len(rb.buffer)

	// If data is larger than capacity, only keep the last capacity bytes
	if len(data) > capacity {
		data = data[len(data)-capacity:]
		// Clear buffer and write new data
		rb.start = 0
		rb.end = -1 // Will be set properly below
	}

	// Calculate how much we need to evict to fit new data
	spaceNeeded := len(data)
	availableSpace := capacity - rb.Size()

	if spaceNeeded > availableSpace {
		bytesToEvict := spaceNeeded - availableSpace
		rb.evict(bytesToEvict)
	}

	// Buffer has data, write after current end
	writePos := (rb.end + 1) % capacity
	if writePos+len(data) <= capacity {
		// No wrap needed - single copy
		copy(rb.buffer[writePos:], data)
		rb.end = (rb.end + len(data)) % capacity
	} else {
		// Need to wrap around - two copies
		firstChunk := capacity - writePos
		copy(rb.buffer[writePos:], data[:firstChunk])
		copy(rb.buffer[0:], data[firstChunk:])
		rb.end = len(data) - firstChunk - 1
	}
}

// evict removes the specified number of bytes from the beginning of the buffer.
func (rb *ringBuffer) evict(count int) {
	if count >= rb.Size() {
		// Evict everything
		rb.start = 0
		rb.end = -1
		return
	}

	rb.start = (rb.start + count) % len(rb.buffer)
	// Buffer remains non-empty after partial eviction
}

// ReadLast returns the last n bytes from the buffer.
// If n is greater than the available data, returns an error.
// If n is negative, returns an error.
func (rb *ringBuffer) ReadLast(n int) ([]byte, error) {
	if n < 0 {
		return nil, xerrors.New("cannot read negative number of bytes")
	}

	if n == 0 {
		return nil, nil
	}

	size := rb.Size()

	// If requested more than available, return error
	if n > size {
		return nil, xerrors.Errorf("requested %d bytes but only %d available", n, size)
	}

	result := make([]byte, n)
	capacity := len(rb.buffer)

	// Calculate where to start reading from (n bytes before the end)
	startOffset := size - n
	actualStart := (rb.start + startOffset) % capacity

	// Copy the last n bytes
	if actualStart+n <= capacity {
		// No wrap needed
		copy(result, rb.buffer[actualStart:actualStart+n])
	} else {
		// Need to wrap around
		firstChunk := capacity - actualStart
		copy(result[0:firstChunk], rb.buffer[actualStart:capacity])
		copy(result[firstChunk:], rb.buffer[0:n-firstChunk])
	}

	return result, nil
}
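Both `Write` and `ReadLast` above use the same wrap-around arithmetic: an operation of `n` bytes starting at some position in a buffer of size `capacity` splits into at most two contiguous copies. A minimal sketch of just that split (`splitCopy` is an illustrative helper, not part of the package):

```go
package main

import "fmt"

// splitCopy returns the sizes of the two contiguous copies needed to
// place n bytes starting at pos in a circular buffer of the given
// capacity. second is 0 when no wrap-around occurs.
func splitCopy(pos, n, capacity int) (first, second int) {
	if pos+n <= capacity {
		return n, 0 // fits without wrapping: one copy
	}
	first = capacity - pos // bytes until the end of the backing array
	return first, n - first // remainder wraps to index 0
}

func main() {
	fmt.Println(splitCopy(3, 4, 8)) // fits: 4 0
	fmt.Println(splitCopy(6, 4, 8)) // wraps: 2 2
}
```

The same split drives the two `copy` calls in `Write` (writing at `writePos`) and in `ReadLast` (reading from `actualStart`).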
@@ -1,261 +0,0 @@
package backedpipe

import (
	"bytes"
	"os"
	"runtime"
	"testing"

	"github.com/stretchr/testify/require"
	"go.uber.org/goleak"

	"github.com/coder/coder/v2/testutil"
)

func TestMain(m *testing.M) {
	if runtime.GOOS == "windows" {
		// Don't run goleak on windows tests, they're super flaky right now.
		// See: https://github.com/coder/coder/issues/8954
		os.Exit(m.Run())
	}
	goleak.VerifyTestMain(m, testutil.GoleakOptions...)
}

func TestRingBuffer_NewRingBuffer(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(100)
	// Test that we can write and read from the buffer
	rb.Write([]byte("test"))

	data, err := rb.ReadLast(4)
	require.NoError(t, err)
	require.Equal(t, []byte("test"), data)
}

func TestRingBuffer_WriteAndRead(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(10)

	// Write some data
	rb.Write([]byte("hello"))

	// Read last 4 bytes
	data, err := rb.ReadLast(4)
	require.NoError(t, err)
	require.Equal(t, "ello", string(data))

	// Write more data
	rb.Write([]byte("world"))

	// Read last 5 bytes
	data, err = rb.ReadLast(5)
	require.NoError(t, err)
	require.Equal(t, "world", string(data))

	// Read last 3 bytes
	data, err = rb.ReadLast(3)
	require.NoError(t, err)
	require.Equal(t, "rld", string(data))

	// Read more than available (should be 10 bytes total)
	_, err = rb.ReadLast(15)
	require.Error(t, err)
	require.Contains(t, err.Error(), "requested 15 bytes but only")
}

func TestRingBuffer_OverflowEviction(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(5)

	// Fill buffer
	rb.Write([]byte("abcde"))

	// Overflow should evict oldest data
	rb.Write([]byte("fg"))

	// Should now contain "cdefg"
	data, err := rb.ReadLast(5)
	require.NoError(t, err)
	require.Equal(t, []byte("cdefg"), data)
}

func TestRingBuffer_LargeWrite(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(5)

	// Write data larger than capacity
	rb.Write([]byte("abcdefghij"))

	// Should contain last 5 bytes
	data, err := rb.ReadLast(5)
	require.NoError(t, err)
	require.Equal(t, []byte("fghij"), data)
}

func TestRingBuffer_WrapAround(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(5)

	// Fill buffer
	rb.Write([]byte("abcde"))

	// Write more to cause wrap-around
	rb.Write([]byte("fgh"))

	// Should contain "defgh"
	data, err := rb.ReadLast(5)
	require.NoError(t, err)
	require.Equal(t, []byte("defgh"), data)

	// Test reading last 3 bytes after wrap
	data, err = rb.ReadLast(3)
	require.NoError(t, err)
	require.Equal(t, []byte("fgh"), data)
}

func TestRingBuffer_ReadLastEdgeCases(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(3)

	// Write some data (5 bytes to a 3-byte buffer, so only last 3 bytes remain)
	rb.Write([]byte("hello"))

	// Test reading negative count
	data, err := rb.ReadLast(-1)
	require.Error(t, err)
	require.Contains(t, err.Error(), "cannot read negative number of bytes")
	require.Nil(t, data)

	// Test reading zero bytes
	data, err = rb.ReadLast(0)
	require.NoError(t, err)
	require.Nil(t, data)

	// Test reading more than available (buffer has 3 bytes, try to read 10)
	_, err = rb.ReadLast(10)
	require.Error(t, err)
	require.Contains(t, err.Error(), "requested 10 bytes but only 3 available")

	// Test reading exact amount available
	data, err = rb.ReadLast(3)
	require.NoError(t, err)
	require.Equal(t, []byte("llo"), data)
}

func TestRingBuffer_EmptyWrite(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(10)

	// Write empty data
	rb.Write([]byte{})

	// Buffer should still be empty
	_, err := rb.ReadLast(5)
	require.Error(t, err)
	require.Contains(t, err.Error(), "requested 5 bytes but only 0 available")
}

func TestRingBuffer_MultipleWrites(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(10)

	// Write data in chunks
	rb.Write([]byte("ab"))
	rb.Write([]byte("cd"))
	rb.Write([]byte("ef"))

	data, err := rb.ReadLast(6)
	require.NoError(t, err)
	require.Equal(t, []byte("abcdef"), data)

	// Test partial reads
	data, err = rb.ReadLast(4)
	require.NoError(t, err)
	require.Equal(t, []byte("cdef"), data)

	data, err = rb.ReadLast(2)
	require.NoError(t, err)
	require.Equal(t, []byte("ef"), data)
}

func TestRingBuffer_EdgeCaseEviction(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(3)

	// Write data that will cause eviction
	rb.Write([]byte("abc"))

	// Write more to cause eviction
	rb.Write([]byte("d"))

	// Should now contain "bcd"
	data, err := rb.ReadLast(3)
	require.NoError(t, err)
	require.Equal(t, []byte("bcd"), data)
}

func TestRingBuffer_ComplexWrapAroundScenario(t *testing.T) {
	t.Parallel()

	rb := newRingBuffer(8)

	// Fill buffer
	rb.Write([]byte("12345678"))

	// Evict some and add more to create complex wrap scenario
	rb.Write([]byte("abcd"))
	data, err := rb.ReadLast(8)
	require.NoError(t, err)
	require.Equal(t, []byte("5678abcd"), data)

	// Add more
	rb.Write([]byte("xyz"))
	data, err = rb.ReadLast(8)
	require.NoError(t, err)
	require.Equal(t, []byte("8abcdxyz"), data)

	// Test reading various amounts from the end
	data, err = rb.ReadLast(7)
	require.NoError(t, err)
	require.Equal(t, []byte("abcdxyz"), data)

	data, err = rb.ReadLast(4)
	require.NoError(t, err)
	require.Equal(t, []byte("dxyz"), data)
}

// Benchmark tests for performance validation
func BenchmarkRingBuffer_Write(b *testing.B) {
	rb := newRingBuffer(64 * 1024 * 1024)   // 64MB for benchmarks
	data := bytes.Repeat([]byte("x"), 1024) // 1KB writes

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		rb.Write(data)
	}
}

func BenchmarkRingBuffer_ReadLast(b *testing.B) {
	rb := newRingBuffer(64 * 1024 * 1024) // 64MB for benchmarks
	// Fill buffer with test data
	for i := 0; i < 64; i++ {
		rb.Write(bytes.Repeat([]byte("x"), 1024))
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, err := rb.ReadLast((i % 100) + 1)
		if err != nil {
			b.Fatal(err)
		}
	}
}
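The eviction cases exercised by `TestRingBuffer_OverflowEviction` and friends reduce to "keep only the most recent `capacity` bytes". A simplified, slice-based stand-in (illustrative only, not the actual ring-buffer implementation) makes the expected contents easy to verify by hand:

```go
package main

import "fmt"

// keepLast appends data to existing and trims the result to the most
// recent capacity bytes, modeling the ring buffer's eviction policy
// without the circular-index bookkeeping.
func keepLast(existing, data []byte, capacity int) []byte {
	combined := append(append([]byte{}, existing...), data...)
	if len(combined) > capacity {
		combined = combined[len(combined)-capacity:]
	}
	return combined
}

func main() {
	// Mirrors TestRingBuffer_OverflowEviction: "abcde" then "fg" in a
	// 5-byte buffer leaves "cdefg".
	buf := keepLast(nil, []byte("abcde"), 5)
	buf = keepLast(buf, []byte("fg"), 5)
	fmt.Println(string(buf)) // cdefg
}
```

Any ring-buffer state can be checked against this model; the real implementation differs only in avoiding the allocation and copy per write.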
+78 -69
@@ -11,39 +11,23 @@ import (
	"strings"

	"github.com/shirou/gopsutil/v4/disk"
	"github.com/spf13/afero"
	"golang.org/x/xerrors"

	"github.com/coder/coder/v2/coderd/httpapi"
	"github.com/coder/coder/v2/codersdk"
	"github.com/coder/coder/v2/codersdk/workspacesdk"
)

var WindowsDriveRegex = regexp.MustCompile(`^[a-zA-Z]:\\$`)

func (a *agent) HandleLS(rw http.ResponseWriter, r *http.Request) {
func (*agent) HandleLS(rw http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	// An absolute path may be optionally provided, otherwise a path split into an
	// array must be provided in the body (which can be relative).
	query := r.URL.Query()
	parser := httpapi.NewQueryParamParser()
	path := parser.String(query, "", "path")
	parser.ErrorExcessParams(query)
	if len(parser.Errors) > 0 {
		httpapi.Write(ctx, rw, http.StatusBadRequest, codersdk.Response{
			Message:     "Query parameters have invalid values.",
			Validations: parser.Errors,
		})
	var query LSRequest
	if !httpapi.Read(ctx, rw, r, &query) {
		return
	}

	var req workspacesdk.LSRequest
	if !httpapi.Read(ctx, rw, r, &req) {
		return
	}

	resp, err := listFiles(a.filesystem, path, req)
	resp, err := listFiles(query)
	if err != nil {
		status := http.StatusInternalServerError
		switch {
@@ -62,66 +46,58 @@ func (a *agent) HandleLS(rw http.ResponseWriter, r *http.Request) {
|
||||
httpapi.Write(ctx, rw, http.StatusOK, resp)
|
||||
}
|
||||
|
||||
func listFiles(fs afero.Fs, path string, query workspacesdk.LSRequest) (workspacesdk.LSResponse, error) {
|
||||
absolutePathString := path
|
||||
if absolutePathString != "" {
|
||||
if !filepath.IsAbs(path) {
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("path must be absolute: %q", path)
|
||||
}
|
||||
} else {
|
||||
var fullPath []string
|
||||
switch query.Relativity {
|
||||
case workspacesdk.LSRelativityHome:
|
||||
home, err := os.UserHomeDir()
|
||||
if err != nil {
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("failed to get user home directory: %w", err)
|
||||
}
|
||||
fullPath = []string{home}
|
||||
case workspacesdk.LSRelativityRoot:
|
||||
if runtime.GOOS == "windows" {
|
||||
if len(query.Path) == 0 {
|
||||
return listDrives()
|
||||
}
|
||||
if !WindowsDriveRegex.MatchString(query.Path[0]) {
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("invalid drive letter %q", query.Path[0])
|
||||
}
|
||||
} else {
|
||||
fullPath = []string{"/"}
|
||||
}
|
||||
default:
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("unsupported relativity type %q", query.Relativity)
|
||||
}
|
||||
|
||||
fullPath = append(fullPath, query.Path...)
|
||||
fullPathRelative := filepath.Join(fullPath...)
|
||||
var err error
|
||||
absolutePathString, err = filepath.Abs(fullPathRelative)
|
||||
func listFiles(query LSRequest) (LSResponse, error) {
|
||||
var fullPath []string
|
||||
switch query.Relativity {
|
||||
case LSRelativityHome:
|
||||
home, err := os.UserHomeDir()
|
||||
if err != nil {
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("failed to get absolute path of %q: %w", fullPathRelative, err)
|
||||
return LSResponse{}, xerrors.Errorf("failed to get user home directory: %w", err)
|
||||
}
|
||||
fullPath = []string{home}
|
||||
case LSRelativityRoot:
|
||||
if runtime.GOOS == "windows" {
|
||||
if len(query.Path) == 0 {
|
||||
return listDrives()
|
||||
}
|
||||
if !WindowsDriveRegex.MatchString(query.Path[0]) {
|
||||
return LSResponse{}, xerrors.Errorf("invalid drive letter %q", query.Path[0])
|
||||
}
|
||||
} else {
|
||||
fullPath = []string{"/"}
|
||||
}
|
||||
default:
|
||||
return LSResponse{}, xerrors.Errorf("unsupported relativity type %q", query.Relativity)
|
||||
}
|
||||
|
||||
fullPath = append(fullPath, query.Path...)
|
||||
fullPathRelative := filepath.Join(fullPath...)
|
||||
absolutePathString, err := filepath.Abs(fullPathRelative)
|
||||
if err != nil {
|
||||
return LSResponse{}, xerrors.Errorf("failed to get absolute path of %q: %w", fullPathRelative, err)
|
||||
}
|
||||
|
||||
// codeql[go/path-injection] - The intent is to allow the user to navigate to any directory in their workspace.
|
||||
f, err := fs.Open(absolutePathString)
|
||||
f, err := os.Open(absolutePathString)
|
||||
if err != nil {
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("failed to open directory %q: %w", absolutePathString, err)
|
||||
return LSResponse{}, xerrors.Errorf("failed to open directory %q: %w", absolutePathString, err)
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
stat, err := f.Stat()
|
||||
if err != nil {
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("failed to stat directory %q: %w", absolutePathString, err)
|
||||
return LSResponse{}, xerrors.Errorf("failed to stat directory %q: %w", absolutePathString, err)
|
||||
}
|
||||
|
||||
if !stat.IsDir() {
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("path %q is not a directory", absolutePathString)
|
||||
return LSResponse{}, xerrors.Errorf("path %q is not a directory", absolutePathString)
|
||||
}
|
||||
|
||||
// `contents` may be partially populated even if the operation fails midway.
|
||||
contents, _ := f.Readdir(-1)
|
||||
respContents := make([]workspacesdk.LSFile, 0, len(contents))
|
||||
contents, _ := f.ReadDir(-1)
|
||||
respContents := make([]LSFile, 0, len(contents))
|
||||
for _, file := range contents {
|
||||
respContents = append(respContents, workspacesdk.LSFile{
|
||||
respContents = append(respContents, LSFile{
|
||||
Name: file.Name(),
|
||||
AbsolutePathString: filepath.Join(absolutePathString, file.Name()),
|
||||
IsDir: file.IsDir(),
|
||||
@@ -129,7 +105,7 @@ func listFiles(fs afero.Fs, path string, query workspacesdk.LSRequest) (workspac
|
||||
}
|
||||
|
||||
// Sort alphabetically: directories then files
|
||||
slices.SortFunc(respContents, func(a, b workspacesdk.LSFile) int {
|
||||
slices.SortFunc(respContents, func(a, b LSFile) int {
|
||||
if a.IsDir && !b.IsDir {
|
||||
return -1
|
||||
}
|
||||
@@ -141,35 +117,35 @@ func listFiles(fs afero.Fs, path string, query workspacesdk.LSRequest) (workspac
|
||||
|
||||
absolutePath := pathToArray(absolutePathString)
|
||||
|
||||
return workspacesdk.LSResponse{
|
||||
return LSResponse{
|
||||
AbsolutePath: absolutePath,
|
||||
AbsolutePathString: absolutePathString,
|
||||
Contents: respContents,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func listDrives() (workspacesdk.LSResponse, error) {
|
||||
func listDrives() (LSResponse, error) {
|
||||
// disk.Partitions() will return partitions even if there was a failure to
|
||||
// get one. Any errored partitions will not be returned.
|
||||
partitionStats, err := disk.Partitions(true)
|
||||
if err != nil && len(partitionStats) == 0 {
|
||||
// Only return the error if there were no partitions returned.
|
||||
return workspacesdk.LSResponse{}, xerrors.Errorf("failed to get partitions: %w", err)
|
||||
return LSResponse{}, xerrors.Errorf("failed to get partitions: %w", err)
|
||||
}
|
||||
|
||||
contents := make([]workspacesdk.LSFile, 0, len(partitionStats))
|
||||
contents := make([]LSFile, 0, len(partitionStats))
|
||||
for _, a := range partitionStats {
|
||||
// Drive letters on Windows have a trailing separator as part of their name.
|
||||
// i.e. `os.Open("C:")` does not work, but `os.Open("C:\\")` does.
|
||||
name := a.Mountpoint + string(os.PathSeparator)
|
||||
contents = append(contents, workspacesdk.LSFile{
|
||||
contents = append(contents, LSFile{
|
||||
Name: name,
|
||||
AbsolutePathString: name,
|
||||
IsDir: true,
|
||||
})
|
||||
}
|
||||
|
||||
return workspacesdk.LSResponse{
|
||||
return LSResponse{
|
||||
AbsolutePath: []string{},
|
||||
AbsolutePathString: "",
|
||||
Contents: contents,
|
||||
@@ -187,3 +163,36 @@ func pathToArray(path string) []string {
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
type LSRequest struct {
|
||||
// e.g. [], ["repos", "coder"],
|
||||
Path []string `json:"path"`
|
||||
// Whether the supplied path is relative to the user's home directory,
|
||||
// or the root directory.
|
||||
Relativity LSRelativity `json:"relativity"`
|
||||
}
|
||||
|
||||
type LSResponse struct {
|
||||
AbsolutePath []string `json:"absolute_path"`
|
||||
// Returned so clients can display the full path to the user, and
|
||||
// copy it to configure file sync
|
||||
// e.g. Windows: "C:\\Users\\coder"
|
||||
// Linux: "/home/coder"
|
||||
AbsolutePathString string `json:"absolute_path_string"`
|
||||
Contents []LSFile `json:"contents"`
|
||||
}
|
||||
|
||||
type LSFile struct {
|
||||
Name string `json:"name"`
|
||||
// e.g. "C:\\Users\\coder\\hello.txt"
|
||||
// "/home/coder/hello.txt"
|
||||
AbsolutePathString string `json:"absolute_path_string"`
|
||||
IsDir bool `json:"is_dir"`
|
||||
}
|
||||
|
||||
type LSRelativity string
|
||||
|
||||
const (
|
||||
LSRelativityRoot LSRelativity = "root"
|
||||
LSRelativityHome LSRelativity = "home"
|
||||
)
|
||||

+38 -76
@@ -6,103 +6,67 @@ import (
	"runtime"
	"testing"

	"github.com/spf13/afero"
	"github.com/stretchr/testify/require"

	"github.com/coder/coder/v2/codersdk/workspacesdk"
)

type testFs struct {
	afero.Fs
}

func newTestFs(base afero.Fs) *testFs {
	return &testFs{
		Fs: base,
	}
}

func (*testFs) Open(name string) (afero.File, error) {
	return nil, os.ErrPermission
}

func TestListFilesWithQueryParam(t *testing.T) {
	t.Parallel()

	fs := afero.NewMemMapFs()
	query := workspacesdk.LSRequest{}
	_, err := listFiles(fs, "not-relative", query)
	require.Error(t, err)
	require.Contains(t, err.Error(), "must be absolute")

	tmpDir := t.TempDir()
	err = fs.MkdirAll(tmpDir, 0o755)
	require.NoError(t, err)

	res, err := listFiles(fs, tmpDir, query)
	require.NoError(t, err)
	require.Len(t, res.Contents, 0)
}

func TestListFilesNonExistentDirectory(t *testing.T) {
	t.Parallel()

	fs := afero.NewMemMapFs()
	query := workspacesdk.LSRequest{
	query := LSRequest{
		Path: []string{"idontexist"},
		Relativity: workspacesdk.LSRelativityHome,
		Relativity: LSRelativityHome,
	}
	_, err := listFiles(fs, "", query)
	_, err := listFiles(query)
	require.ErrorIs(t, err, os.ErrNotExist)
}

func TestListFilesPermissionDenied(t *testing.T) {
	t.Parallel()

	fs := newTestFs(afero.NewMemMapFs())
	if runtime.GOOS == "windows" {
		t.Skip("creating an unreadable-by-user directory is non-trivial on Windows")
	}

	home, err := os.UserHomeDir()
	require.NoError(t, err)

	tmpDir := t.TempDir()

	reposDir := filepath.Join(tmpDir, "repos")
	err = fs.MkdirAll(reposDir, 0o000)
	err = os.Mkdir(reposDir, 0o000)
	require.NoError(t, err)

	rel, err := filepath.Rel(home, reposDir)
	require.NoError(t, err)

	query := workspacesdk.LSRequest{
	query := LSRequest{
		Path: pathToArray(rel),
		Relativity: workspacesdk.LSRelativityHome,
		Relativity: LSRelativityHome,
	}
	_, err = listFiles(fs, "", query)
	_, err = listFiles(query)
	require.ErrorIs(t, err, os.ErrPermission)
}

func TestListFilesNotADirectory(t *testing.T) {
	t.Parallel()

	fs := afero.NewMemMapFs()
	home, err := os.UserHomeDir()
	require.NoError(t, err)

	tmpDir := t.TempDir()
	err = fs.MkdirAll(tmpDir, 0o755)
	require.NoError(t, err)

	filePath := filepath.Join(tmpDir, "file.txt")
	err = afero.WriteFile(fs, filePath, []byte("content"), 0o600)
	err = os.WriteFile(filePath, []byte("content"), 0o600)
	require.NoError(t, err)

	rel, err := filepath.Rel(home, filePath)
	require.NoError(t, err)

	query := workspacesdk.LSRequest{
	query := LSRequest{
		Path: pathToArray(rel),
		Relativity: workspacesdk.LSRelativityHome,
		Relativity: LSRelativityHome,
	}
	_, err = listFiles(fs, "", query)
	_, err = listFiles(query)
	require.ErrorContains(t, err, "is not a directory")
}

@@ -112,7 +76,7 @@ func TestListFilesSuccess(t *testing.T) {
	tc := []struct {
		name string
		baseFunc func(t *testing.T) string
		relativity workspacesdk.LSRelativity
		relativity LSRelativity
	}{
		{
			name: "home",
@@ -121,7 +85,7 @@ func TestListFilesSuccess(t *testing.T) {
				require.NoError(t, err)
				return home
			},
			relativity: workspacesdk.LSRelativityHome,
			relativity: LSRelativityHome,
		},
		{
			name: "root",
@@ -131,7 +95,7 @@ func TestListFilesSuccess(t *testing.T) {
				}
				return "/"
			},
			relativity: workspacesdk.LSRelativityRoot,
			relativity: LSRelativityRoot,
		},
	}

@@ -140,20 +104,19 @@ func TestListFilesSuccess(t *testing.T) {
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()

			fs := afero.NewMemMapFs()
			base := tc.baseFunc(t)
			tmpDir := t.TempDir()

			reposDir := filepath.Join(tmpDir, "repos")
			err := fs.MkdirAll(reposDir, 0o755)
			err := os.Mkdir(reposDir, 0o755)
			require.NoError(t, err)

			downloadsDir := filepath.Join(tmpDir, "Downloads")
			err = fs.MkdirAll(downloadsDir, 0o755)
			err = os.Mkdir(downloadsDir, 0o755)
			require.NoError(t, err)

			textFile := filepath.Join(tmpDir, "file.txt")
			err = afero.WriteFile(fs, textFile, []byte("content"), 0o600)
			err = os.WriteFile(textFile, []byte("content"), 0o600)
			require.NoError(t, err)

			var queryComponents []string
@@ -166,16 +129,16 @@ func TestListFilesSuccess(t *testing.T) {
				queryComponents = pathToArray(rel)
			}

			query := workspacesdk.LSRequest{
			query := LSRequest{
				Path: queryComponents,
				Relativity: tc.relativity,
			}
			resp, err := listFiles(fs, "", query)
			resp, err := listFiles(query)
			require.NoError(t, err)

			require.Equal(t, tmpDir, resp.AbsolutePathString)
			// Output is sorted
			require.Equal(t, []workspacesdk.LSFile{
			require.Equal(t, []LSFile{
				{
					Name: "Downloads",
					AbsolutePathString: downloadsDir,
@@ -203,44 +166,43 @@ func TestListFilesListDrives(t *testing.T) {
		t.Skip("skipping test on non-Windows OS")
	}

	fs := afero.NewOsFs()
	query := workspacesdk.LSRequest{
	query := LSRequest{
		Path: []string{},
		Relativity: workspacesdk.LSRelativityRoot,
		Relativity: LSRelativityRoot,
	}
	resp, err := listFiles(fs, "", query)
	resp, err := listFiles(query)
	require.NoError(t, err)
	require.Contains(t, resp.Contents, workspacesdk.LSFile{
	require.Contains(t, resp.Contents, LSFile{
		Name: "C:\\",
		AbsolutePathString: "C:\\",
		IsDir: true,
	})

	query = workspacesdk.LSRequest{
	query = LSRequest{
		Path: []string{"C:\\"},
		Relativity: workspacesdk.LSRelativityRoot,
		Relativity: LSRelativityRoot,
	}
	resp, err = listFiles(fs, "", query)
	resp, err = listFiles(query)
	require.NoError(t, err)

	query = workspacesdk.LSRequest{
	query = LSRequest{
		Path: resp.AbsolutePath,
		Relativity: workspacesdk.LSRelativityRoot,
		Relativity: LSRelativityRoot,
	}
	resp, err = listFiles(fs, "", query)
	resp, err = listFiles(query)
	require.NoError(t, err)
	// System directory should always exist
	require.Contains(t, resp.Contents, workspacesdk.LSFile{
	require.Contains(t, resp.Contents, LSFile{
		Name: "Windows",
		AbsolutePathString: "C:\\Windows",
		IsDir: true,
	})

	query = workspacesdk.LSRequest{
	query = LSRequest{
		// Network drives are not supported.
		Path: []string{"\\sshfs\\work"},
		Relativity: workspacesdk.LSRelativityRoot,
		Relativity: LSRelativityRoot,
	}
	resp, err = listFiles(fs, "", query)
	resp, err = listFiles(query)
	require.ErrorContains(t, err, "drive")
}

@@ -25,7 +25,6 @@ import (

// screenReconnectingPTY provides a reconnectable PTY via `screen`.
type screenReconnectingPTY struct {
	logger slog.Logger
	execer agentexec.Execer
	command *pty.Cmd

@@ -63,7 +62,6 @@ type screenReconnectingPTY struct {
// own which causes it to spawn with the specified size.
func newScreen(ctx context.Context, logger slog.Logger, execer agentexec.Execer, cmd *pty.Cmd, options *Options) *screenReconnectingPTY {
	rpty := &screenReconnectingPTY{
		logger: logger,
		execer: execer,
		command: cmd,
		metrics: options.Metrics,
@@ -175,7 +173,6 @@ func (rpty *screenReconnectingPTY) Attach(ctx context.Context, _ string, conn ne

	ptty, process, err := rpty.doAttach(ctx, conn, height, width, logger)
	if err != nil {
		logger.Debug(ctx, "unable to attach to screen reconnecting pty", slog.Error(err))
		if errors.Is(err, context.Canceled) {
			// Likely the process was too short-lived and canceled the version command.
			// TODO: Is it worth distinguishing between that and a cancel from the
@@ -185,7 +182,6 @@ func (rpty *screenReconnectingPTY) Attach(ctx context.Context, _ string, conn ne
		}
		return err
	}
	logger.Debug(ctx, "attached to screen reconnecting pty")

	defer func() {
		// Log only for debugging since the process might have already exited on its
@@ -407,7 +403,6 @@ func (rpty *screenReconnectingPTY) Wait() {
}

func (rpty *screenReconnectingPTY) Close(err error) {
	rpty.logger.Debug(context.Background(), "closing screen reconnecting pty", slog.Error(err))
	// The closing state change will be handled by the lifecycle.
	rpty.state.setState(StateClosing, err)
}

@@ -1,19 +0,0 @@
package archivefs

import (
	"archive/zip"
	"io"
	"io/fs"

	"github.com/spf13/afero"
	"github.com/spf13/afero/zipfs"
)

// FromZipReader creates a read-only in-memory FS
func FromZipReader(r io.ReaderAt, size int64) (fs.FS, error) {
	zr, err := zip.NewReader(r, size)
	if err != nil {
		return nil, err
	}
	return afero.NewIOFS(zipfs.New(zr)), nil
}

-80
@@ -1,80 +0,0 @@
{
	"vcs": {
		"enabled": true,
		"clientKind": "git",
		"useIgnoreFile": true,
		"defaultBranch": "main"
	},
	"files": {
		"includes": ["**", "!**/pnpm-lock.yaml"],
		"ignoreUnknown": true
	},
	"linter": {
		"rules": {
			"a11y": {
				"noSvgWithoutTitle": "off",
				"useButtonType": "off",
				"useSemanticElements": "off",
				"noStaticElementInteractions": "off"
			},
			"correctness": {
				"noUnusedImports": "warn",
				"useUniqueElementIds": "off", // TODO: This is new but we want to fix it
				"noNestedComponentDefinitions": "off", // TODO: Investigate, since it is used by shadcn components
				"noUnusedVariables": {
					"level": "warn",
					"options": {
						"ignoreRestSiblings": true
					}
				}
			},
			"style": {
				"noNonNullAssertion": "off",
				"noParameterAssign": "off",
				"useDefaultParameterLast": "off",
				"useSelfClosingElements": "off",
				"useAsConstAssertion": "error",
				"useEnumInitializers": "error",
				"useSingleVarDeclarator": "error",
				"noUnusedTemplateLiteral": "error",
				"useNumberNamespace": "error",
				"noInferrableTypes": "error",
				"noUselessElse": "error",
				"noRestrictedImports": {
					"level": "error",
					"options": {
						"paths": {
							"@mui/material": "Use @mui/material/<name> instead. See: https://material-ui.com/guides/minimizing-bundle-size/.",
							"@mui/material/Avatar": "Use components/Avatar/Avatar instead.",
							"@mui/material/Alert": "Use components/Alert/Alert instead.",
							"@mui/material/Popover": "Use components/Popover/Popover instead.",
							"@mui/material/Typography": "Use native HTML elements instead. Eg: <span>, <p>, <h1>, etc.",
							"@mui/material/Box": "Use a <div> instead.",
							"@mui/material/Button": "Use a components/Button/Button instead.",
							"@mui/material/styles": "Import from @emotion/react instead.",
							"@mui/material/Table*": "Import from components/Table/Table instead.",
							"lodash": "Use lodash/<name> instead."
						}
					}
				}
			},
			"suspicious": {
				"noArrayIndexKey": "off",
				"noThenProperty": "off",
				"noTemplateCurlyInString": "off",
				"useIterableCallbackReturn": "off",
				"noUnknownAtRules": "off", // Allow Tailwind directives
				"noConsole": {
					"level": "error",
					"options": {
						"allow": ["error", "info", "warn"]
					}
				}
			},
			"complexity": {
				"noImportantStyles": "off" // TODO: check and fix !important styles
			}
		}
	},
	"$schema": "./node_modules/@biomejs/biome/configuration_schema.json"
}

@@ -1,10 +0,0 @@
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: coder
  annotations:
    github.com/project-slug: 'coder/coder'
spec:
  type: service
  lifecycle: production
  owner: rd

+106 -45
@@ -15,6 +15,7 @@ import (
	"strings"
	"time"

	"cloud.google.com/go/compute/metadata"
	"golang.org/x/xerrors"
	"gopkg.in/natefinch/lumberjack.v2"

@@ -37,27 +38,25 @@ import (
	"github.com/coder/coder/v2/codersdk/agentsdk"
)

func workspaceAgent() *serpent.Command {
func (r *RootCmd) workspaceAgent() *serpent.Command {
	var (
		logDir string
		scriptDataDir string
		pprofAddress string
		noReap bool
		sshMaxTimeout time.Duration
		tailnetListenPort int64
		prometheusAddress string
		debugAddress string
		slogHumanPath string
		slogJSONPath string
		slogStackdriverPath string
		blockFileTransfer bool
		agentHeaderCommand string
		agentHeader []string
		devcontainers bool
		devcontainerProjectDiscovery bool
		devcontainerDiscoveryAutostart bool
		auth string
		logDir string
		scriptDataDir string
		pprofAddress string
		noReap bool
		sshMaxTimeout time.Duration
		tailnetListenPort int64
		prometheusAddress string
		debugAddress string
		slogHumanPath string
		slogJSONPath string
		slogStackdriverPath string
		blockFileTransfer bool
		agentHeaderCommand string
		agentHeader []string
		devcontainers bool
	)
	agentAuth := &AgentAuth{}
	cmd := &serpent.Command{
		Use: "agent",
		Short: `Starts the Coder workspace agent.`,
@@ -175,14 +174,12 @@ func workspaceAgent() *serpent.Command {

			version := buildinfo.Version()
			logger.Info(ctx, "agent is starting now",
				slog.F("url", agentAuth.agentURL),
				slog.F("auth", agentAuth.agentAuth),
				slog.F("url", r.agentURL),
				slog.F("auth", auth),
				slog.F("version", version),
			)
			client, err := agentAuth.CreateClient()
			if err != nil {
				return xerrors.Errorf("create agent client: %w", err)
			}

			client := agentsdk.New(r.agentURL)
			client.SDK.SetLogger(logger)
			// Set a reasonable timeout so requests can't hang forever!
			// The timeout needs to be reasonably long, because requests
@@ -191,7 +188,7 @@ func workspaceAgent() *serpent.Command {
			client.SDK.HTTPClient.Timeout = 30 * time.Second
			// Attach header transport so we process --agent-header and
			// --agent-header-command flags
			headerTransport, err := headerTransport(ctx, &agentAuth.agentURL, agentHeader, agentHeaderCommand)
			headerTransport, err := headerTransport(ctx, r.agentURL, agentHeader, agentHeaderCommand)
			if err != nil {
				return xerrors.Errorf("configure header transport: %w", err)
			}
@@ -215,6 +212,68 @@ func workspaceAgent() *serpent.Command {
				ignorePorts[port] = "debug"
			}

			// exchangeToken returns a session token.
			// This is abstracted to allow for the same looping condition
			// regardless of instance identity auth type.
			var exchangeToken func(context.Context) (agentsdk.AuthenticateResponse, error)
			switch auth {
			case "token":
				token, _ := inv.ParsedFlags().GetString(varAgentToken)
				if token == "" {
					tokenFile, _ := inv.ParsedFlags().GetString(varAgentTokenFile)
					if tokenFile != "" {
						tokenBytes, err := os.ReadFile(tokenFile)
						if err != nil {
							return xerrors.Errorf("read token file %q: %w", tokenFile, err)
						}
						token = strings.TrimSpace(string(tokenBytes))
					}
				}
				if token == "" {
					return xerrors.Errorf("CODER_AGENT_TOKEN or CODER_AGENT_TOKEN_FILE must be set for token auth")
				}
				client.SetSessionToken(token)
			case "google-instance-identity":
				// This is *only* done for testing to mock client authentication.
				// This will never be set in a production scenario.
				var gcpClient *metadata.Client
				gcpClientRaw := ctx.Value("gcp-client")
				if gcpClientRaw != nil {
					gcpClient, _ = gcpClientRaw.(*metadata.Client)
				}
				exchangeToken = func(ctx context.Context) (agentsdk.AuthenticateResponse, error) {
					return client.AuthGoogleInstanceIdentity(ctx, "", gcpClient)
				}
			case "aws-instance-identity":
				// This is *only* done for testing to mock client authentication.
				// This will never be set in a production scenario.
				var awsClient *http.Client
				awsClientRaw := ctx.Value("aws-client")
				if awsClientRaw != nil {
					awsClient, _ = awsClientRaw.(*http.Client)
					if awsClient != nil {
						client.SDK.HTTPClient = awsClient
					}
				}
				exchangeToken = func(ctx context.Context) (agentsdk.AuthenticateResponse, error) {
					return client.AuthAWSInstanceIdentity(ctx)
				}
			case "azure-instance-identity":
				// This is *only* done for testing to mock client authentication.
				// This will never be set in a production scenario.
				var azureClient *http.Client
				azureClientRaw := ctx.Value("azure-client")
				if azureClientRaw != nil {
					azureClient, _ = azureClientRaw.(*http.Client)
					if azureClient != nil {
						client.SDK.HTTPClient = azureClient
					}
				}
				exchangeToken = func(ctx context.Context) (agentsdk.AuthenticateResponse, error) {
					return client.AuthAzureInstanceIdentity(ctx)
				}
			}

			executablePath, err := os.Executable()
			if err != nil {
				return xerrors.Errorf("getting os executable: %w", err)
@@ -282,7 +341,18 @@ func workspaceAgent() *serpent.Command {
				LogDir: logDir,
				ScriptDataDir: scriptDataDir,
				// #nosec G115 - Safe conversion as tailnet listen port is within uint16 range (0-65535)
				TailnetListenPort: uint16(tailnetListenPort),
				TailnetListenPort: uint16(tailnetListenPort),
				ExchangeToken: func(ctx context.Context) (string, error) {
					if exchangeToken == nil {
						return client.SDK.SessionToken(), nil
					}
					resp, err := exchangeToken(ctx)
					if err != nil {
						return "", err
					}
					client.SetSessionToken(resp.SessionToken)
					return resp.SessionToken, nil
				},
				EnvironmentVariables: environmentVariables,
				IgnorePorts: ignorePorts,
				SSHMaxTimeout: sshMaxTimeout,
@@ -293,9 +363,7 @@ func workspaceAgent() *serpent.Command {
				Execer: execer,
				Devcontainers: devcontainers,
				DevcontainerAPIOptions: []agentcontainers.Option{
					agentcontainers.WithSubAgentURL(agentAuth.agentURL.String()),
					agentcontainers.WithProjectDiscovery(devcontainerProjectDiscovery),
					agentcontainers.WithDiscoveryAutostart(devcontainerDiscoveryAutostart),
					agentcontainers.WithSubAgentURL(r.agentURL.String()),
				},
			})

@@ -328,6 +396,13 @@ func workspaceAgent() *serpent.Command {
	}

	cmd.Options = serpent.OptionSet{
		{
			Flag: "auth",
			Default: "token",
			Description: "Specify the authentication type to use for the agent.",
			Env: "CODER_AGENT_AUTH",
			Value: serpent.StringOf(&auth),
		},
		{
			Flag: "log-dir",
			Default: os.TempDir(),
@@ -435,22 +510,8 @@ func workspaceAgent() *serpent.Command {
			Description: "Allow the agent to automatically detect running devcontainers.",
			Value: serpent.BoolOf(&devcontainers),
		},
		{
			Flag: "devcontainers-project-discovery-enable",
			Default: "true",
			Env: "CODER_AGENT_DEVCONTAINERS_PROJECT_DISCOVERY_ENABLE",
			Description: "Allow the agent to search the filesystem for devcontainer projects.",
			Value: serpent.BoolOf(&devcontainerProjectDiscovery),
		},
		{
			Flag: "devcontainers-discovery-autostart-enable",
			Default: "false",
			Env: "CODER_AGENT_DEVCONTAINERS_DISCOVERY_AUTOSTART_ENABLE",
			Description: "Allow the agent to autostart devcontainer projects it discovers based on their configuration.",
			Value: serpent.BoolOf(&devcontainerDiscoveryAutostart),
		},
	}
	agentAuth.AttachOptions(cmd, false)

	return cmd
}

@@ -1,6 +1,7 @@
|
||||
package cli_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
@@ -10,6 +11,7 @@ import (
|
||||
"sync/atomic"
|
||||
"testing"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
@@ -20,6 +22,8 @@ import (
|
||||
"github.com/coder/coder/v2/coderd/database"
|
||||
"github.com/coder/coder/v2/coderd/database/dbfake"
|
||||
"github.com/coder/coder/v2/codersdk"
|
||||
"github.com/coder/coder/v2/codersdk/workspacesdk"
|
||||
"github.com/coder/coder/v2/provisionersdk/proto"
|
||||
"github.com/coder/coder/v2/testutil"
|
||||
)
|
||||
|
||||
@@ -59,6 +63,143 @@ func TestWorkspaceAgent(t *testing.T) {
		}, testutil.WaitLong, testutil.IntervalMedium)
	})

	t.Run("Azure", func(t *testing.T) {
		t.Parallel()
		instanceID := "instanceidentifier"
		certificates, metadataClient := coderdtest.NewAzureInstanceIdentity(t, instanceID)
		client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{
			AzureCertificates: certificates,
		})
		user := coderdtest.CreateFirstUser(t, client)
		r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
			OrganizationID: user.OrganizationID,
			OwnerID:        user.UserID,
		}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
			agents[0].Auth = &proto.Agent_InstanceId{InstanceId: instanceID}
			return agents
		}).Do()

		inv, _ := clitest.New(t, "agent", "--auth", "azure-instance-identity", "--agent-url", client.URL.String())
		inv = inv.WithContext(
			//nolint:revive,staticcheck
			context.WithValue(inv.Context(), "azure-client", metadataClient),
		)

		ctx := inv.Context()
		clitest.Start(t, inv)
		coderdtest.NewWorkspaceAgentWaiter(t, client, r.Workspace.ID).
			MatchResources(matchAgentWithVersion).Wait()
		workspace, err := client.Workspace(ctx, r.Workspace.ID)
		require.NoError(t, err)
		resources := workspace.LatestBuild.Resources
		if assert.NotEmpty(t, workspace.LatestBuild.Resources) && assert.NotEmpty(t, resources[0].Agents) {
			assert.NotEmpty(t, resources[0].Agents[0].Version)
		}
		dialer, err := workspacesdk.New(client).
			DialAgent(ctx, resources[0].Agents[0].ID, nil)
		require.NoError(t, err)
		defer dialer.Close()
		require.True(t, dialer.AwaitReachable(ctx))
	})

	t.Run("AWS", func(t *testing.T) {
		t.Parallel()
		instanceID := "instanceidentifier"
		certificates, metadataClient := coderdtest.NewAWSInstanceIdentity(t, instanceID)
		client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{
			AWSCertificates: certificates,
		})
		user := coderdtest.CreateFirstUser(t, client)
		r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
			OrganizationID: user.OrganizationID,
			OwnerID:        user.UserID,
		}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
			agents[0].Auth = &proto.Agent_InstanceId{InstanceId: instanceID}
			return agents
		}).Do()

		inv, _ := clitest.New(t, "agent", "--auth", "aws-instance-identity", "--agent-url", client.URL.String())
		inv = inv.WithContext(
			//nolint:revive,staticcheck
			context.WithValue(inv.Context(), "aws-client", metadataClient),
		)

		clitest.Start(t, inv)
		ctx := inv.Context()
		coderdtest.NewWorkspaceAgentWaiter(t, client, r.Workspace.ID).
			MatchResources(matchAgentWithVersion).
			Wait()
		workspace, err := client.Workspace(ctx, r.Workspace.ID)
		require.NoError(t, err)
		resources := workspace.LatestBuild.Resources
		if assert.NotEmpty(t, resources) && assert.NotEmpty(t, resources[0].Agents) {
			assert.NotEmpty(t, resources[0].Agents[0].Version)
		}
		dialer, err := workspacesdk.New(client).
			DialAgent(ctx, resources[0].Agents[0].ID, nil)
		require.NoError(t, err)
		defer dialer.Close()
		require.True(t, dialer.AwaitReachable(ctx))
	})

	t.Run("GoogleCloud", func(t *testing.T) {
		t.Parallel()
		instanceID := "instanceidentifier"
		validator, metadataClient := coderdtest.NewGoogleInstanceIdentity(t, instanceID, false)
		client, db := coderdtest.NewWithDatabase(t, &coderdtest.Options{
			GoogleTokenValidator: validator,
		})
		owner := coderdtest.CreateFirstUser(t, client)
		member, memberUser := coderdtest.CreateAnotherUser(t, client, owner.OrganizationID)
		r := dbfake.WorkspaceBuild(t, db, database.WorkspaceTable{
			OrganizationID: owner.OrganizationID,
			OwnerID:        memberUser.ID,
		}).WithAgent(func(agents []*proto.Agent) []*proto.Agent {
			agents[0].Auth = &proto.Agent_InstanceId{InstanceId: instanceID}
			return agents
		}).Do()

		inv, cfg := clitest.New(t, "agent", "--auth", "google-instance-identity", "--agent-url", client.URL.String())
		clitest.SetupConfig(t, member, cfg)

		clitest.Start(t,
			inv.WithContext(
				//nolint:revive,staticcheck
				context.WithValue(inv.Context(), "gcp-client", metadataClient),
			),
		)

		ctx := inv.Context()
		coderdtest.NewWorkspaceAgentWaiter(t, client, r.Workspace.ID).
			MatchResources(matchAgentWithVersion).
			Wait()
		workspace, err := client.Workspace(ctx, r.Workspace.ID)
		require.NoError(t, err)
		resources := workspace.LatestBuild.Resources
		if assert.NotEmpty(t, resources) && assert.NotEmpty(t, resources[0].Agents) {
			assert.NotEmpty(t, resources[0].Agents[0].Version)
		}
		dialer, err := workspacesdk.New(client).DialAgent(ctx, resources[0].Agents[0].ID, nil)
		require.NoError(t, err)
		defer dialer.Close()
		require.True(t, dialer.AwaitReachable(ctx))
		sshClient, err := dialer.SSHClient(ctx)
		require.NoError(t, err)
		defer sshClient.Close()
		session, err := sshClient.NewSession()
		require.NoError(t, err)
		defer session.Close()
		key := "CODER_AGENT_TOKEN"
		command := "sh -c 'echo $" + key + "'"
		if runtime.GOOS == "windows" {
			command = "cmd.exe /c echo %" + key + "%"
		}
		token, err := session.CombinedOutput(command)
		require.NoError(t, err)
		_, err = uuid.Parse(strings.TrimSpace(string(token)))
		require.NoError(t, err)
	})

	t.Run("PostStartup", func(t *testing.T) {
		t.Parallel()
+3 -6
@@ -12,21 +12,18 @@ import (
 )

 func (r *RootCmd) autoupdate() *serpent.Command {
+	client := new(codersdk.Client)
 	cmd := &serpent.Command{
 		Annotations: workspaceCommand,
 		Use:         "autoupdate <workspace> <always|never>",
 		Short:       "Toggle auto-update policy for a workspace",
 		Middleware: serpent.Chain(
 			serpent.RequireNArgs(2),
+			r.InitClient(client),
 		),
 		Handler: func(inv *serpent.Invocation) error {
-			client, err := r.InitClient(inv)
-			if err != nil {
-				return err
-			}
-
 			policy := strings.ToLower(inv.Args[1])
-			err = validateAutoUpdatePolicy(policy)
+			err := validateAutoUpdatePolicy(policy)
 			if err != nil {
 				return xerrors.Errorf("validate policy: %w", err)
 			}
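Both sides of this hunk delegate to `validateAutoUpdatePolicy`, whose body is not part of the diff. A hypothetical sketch, assuming it accepts only the two values advertised by the command's usage string (`<always|never>`):

```go
package main

import (
	"fmt"
	"strings"
)

// validateAutoUpdatePolicy is a sketch of the helper referenced above; the
// real implementation is not shown in this diff. It accepts the two policy
// values from the command's usage string and rejects anything else.
func validateAutoUpdatePolicy(policy string) error {
	switch strings.ToLower(policy) {
	case "always", "never":
		return nil
	default:
		return fmt.Errorf("invalid policy %q: must be \"always\" or \"never\"", policy)
	}
}

func main() {
	fmt.Println(validateAutoUpdatePolicy("always"))
	fmt.Println(validateAutoUpdatePolicy("sometimes") != nil)
}
```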
+1 -26
@@ -53,9 +53,6 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
 	t := time.NewTimer(0)
 	defer t.Stop()

-	startTime := time.Now()
-	baseInterval := opts.FetchInterval
-
 	for {
 		select {
 		case <-ctx.Done():
@@ -71,11 +68,7 @@ func Agent(ctx context.Context, writer io.Writer, agentID uuid.UUID, opts AgentO
 				return
 			}
 			fetchedAgent <- fetchAgent{agent: agent}
-
-			// Adjust the interval based on how long we've been waiting.
-			elapsed := time.Since(startTime)
-			currentInterval := GetProgressiveInterval(baseInterval, elapsed)
-			t.Reset(currentInterval)
+			t.Reset(opts.FetchInterval)
 		}
 	}
 }()
@@ -300,24 +293,6 @@ func safeDuration(sw *stageWriter, a, b *time.Time) time.Duration {
 	return a.Sub(*b)
 }

-// GetProgressiveInterval returns an interval that increases over time.
-// The interval starts at baseInterval and increases to
-// a maximum of baseInterval * 16 over time.
-func GetProgressiveInterval(baseInterval time.Duration, elapsed time.Duration) time.Duration {
-	switch {
-	case elapsed < 60*time.Second:
-		return baseInterval // 500ms for first 60 seconds
-	case elapsed < 2*time.Minute:
-		return baseInterval * 2 // 1s for next 1 minute
-	case elapsed < 5*time.Minute:
-		return baseInterval * 4 // 2s for next 3 minutes
-	case elapsed < 10*time.Minute:
-		return baseInterval * 8 // 4s for next 5 minutes
-	default:
-		return baseInterval * 16 // 8s after 10 minutes
-	}
-}
-
 type closeFunc func() error

 func (c closeFunc) Close() error {
@@ -866,31 +866,3 @@ func TestConnDiagnostics(t *testing.T) {
 		})
 	}
 }
-
-func TestGetProgressiveInterval(t *testing.T) {
-	t.Parallel()
-
-	baseInterval := 500 * time.Millisecond
-
-	testCases := []struct {
-		name     string
-		elapsed  time.Duration
-		expected time.Duration
-	}{
-		{"first_minute", 30 * time.Second, baseInterval},
-		{"second_minute", 90 * time.Second, baseInterval * 2},
-		{"third_to_fifth_minute", 3 * time.Minute, baseInterval * 4},
-		{"sixth_to_tenth_minute", 7 * time.Minute, baseInterval * 8},
-		{"after_ten_minutes", 15 * time.Minute, baseInterval * 16},
-		{"boundary_first_minute", 59 * time.Second, baseInterval},
-		{"boundary_second_minute", 61 * time.Second, baseInterval * 2},
-	}
-
-	for _, tc := range testCases {
-		t.Run(tc.name, func(t *testing.T) {
-			t.Parallel()
-			result := cliui.GetProgressiveInterval(baseInterval, tc.elapsed)
-			require.Equal(t, tc.expected, result)
-		})
-	}
-}
@@ -38,16 +38,15 @@ func RichParameter(inv *serpent.Invocation, templateVersionParameter codersdk.Te
 	// Move the cursor up a single line for nicer display!
 	_, _ = fmt.Fprint(inv.Stdout, "\033[1A")

-	var defaults []string
-	err = json.Unmarshal([]byte(templateVersionParameter.DefaultValue), &defaults)
+	var options []string
+	err = json.Unmarshal([]byte(templateVersionParameter.DefaultValue), &options)
 	if err != nil {
 		return "", err
 	}

-	values, err := RichMultiSelect(inv, RichMultiSelectOptions{
-		Options:           templateVersionParameter.Options,
-		Defaults:          defaults,
-		EnableCustomInput: templateVersionParameter.FormType == "tag-select",
+	values, err := MultiSelect(inv, MultiSelectOptions{
+		Options:  options,
+		Defaults: options,
 	})
 	if err == nil {
 		v, err := json.Marshal(&values)
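Both sides of this hunk decode the parameter's `DefaultValue`, which is stored as a JSON-encoded string array, before prompting, and re-encode the chosen values on the way out. A minimal sketch of that round-trip (the example values are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeDefaults mirrors how the hunk reads a multi-select parameter's
// DefaultValue: a JSON array of strings such as `["a","b"]`.
func decodeDefaults(defaultValue string) ([]string, error) {
	var defaults []string
	if err := json.Unmarshal([]byte(defaultValue), &defaults); err != nil {
		return nil, err
	}
	return defaults, nil
}

func main() {
	// Illustrative value; real defaults come from the template version parameter.
	defaults, err := decodeDefaults(`["red","green"]`)
	if err != nil {
		panic(err)
	}
	fmt.Println(defaults)

	// The selected values are marshaled back to a JSON string before being
	// returned, matching json.Marshal(&values) in the hunk.
	v, err := json.Marshal(&defaults)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(v))
}
```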
+28 -121
@@ -12,7 +12,6 @@ import (
 	"golang.org/x/mod/semver"

 	"github.com/coder/coder/v2/coderd/database/dbtime"
-	"github.com/coder/coder/v2/coderd/util/slice"
 	"github.com/coder/coder/v2/codersdk"
 	"github.com/coder/pretty"
 )
@@ -30,7 +29,6 @@ type WorkspaceResourcesOptions struct {
 	ServerVersion  string
 	ListeningPorts map[uuid.UUID]codersdk.WorkspaceAgentListeningPortsResponse
 	Devcontainers  map[uuid.UUID]codersdk.WorkspaceAgentListContainersResponse
-	ShowDetails    bool
 }

 // WorkspaceResources displays the connection status and tree-view of provided resources.
@@ -71,11 +69,7 @@ func WorkspaceResources(writer io.Writer, resources []codersdk.WorkspaceResource

 	totalAgents := 0
 	for _, resource := range resources {
-		for _, agent := range resource.Agents {
-			if !agent.ParentID.Valid {
-				totalAgents++
-			}
-		}
+		totalAgents += len(resource.Agents)
 	}

 	for _, resource := range resources {
@@ -100,15 +94,12 @@ func WorkspaceResources(writer io.Writer, resources []codersdk.WorkspaceResource
 			"",
 		})
 		// Display all agents associated with the resource.
-		agents := slice.Filter(resource.Agents, func(agent codersdk.WorkspaceAgent) bool {
-			return !agent.ParentID.Valid
-		})
-		for index, agent := range agents {
+		for index, agent := range resource.Agents {
 			tableWriter.AppendRow(renderAgentRow(agent, index, totalAgents, options))
 			for _, row := range renderListeningPorts(options, agent.ID, index, totalAgents) {
 				tableWriter.AppendRow(row)
 			}
-			for _, row := range renderDevcontainers(resources, options, agent.ID, index, totalAgents) {
+			for _, row := range renderDevcontainers(options, agent.ID, index, totalAgents) {
 				tableWriter.AppendRow(row)
 			}
 		}
@@ -134,7 +125,7 @@ func renderAgentRow(agent codersdk.WorkspaceAgent, index, totalAgents int, optio
 	}
 	if !options.HideAccess {
 		sshCommand := "coder ssh " + options.WorkspaceName
-		if totalAgents > 1 || len(options.Devcontainers) > 0 {
+		if totalAgents > 1 {
 			sshCommand += "." + agent.Name
 		}
 		sshCommand = pretty.Sprint(DefaultStyles.Code, sshCommand)
@@ -173,129 +164,45 @@ func renderPortRow(port codersdk.WorkspaceAgentListeningPort, idx, total int) ta
 	return table.Row{sb.String()}
 }

-func renderDevcontainers(resources []codersdk.WorkspaceResource, wro WorkspaceResourcesOptions, agentID uuid.UUID, index, totalAgents int) []table.Row {
+func renderDevcontainers(wro WorkspaceResourcesOptions, agentID uuid.UUID, index, totalAgents int) []table.Row {
 	var rows []table.Row
 	if wro.Devcontainers == nil {
 		return []table.Row{}
 	}
 	dc, ok := wro.Devcontainers[agentID]
-	if !ok || len(dc.Devcontainers) == 0 {
+	if !ok || len(dc.Containers) == 0 {
 		return []table.Row{}
 	}
 	rows = append(rows, table.Row{
 		fmt.Sprintf(" %s─ %s", renderPipe(index, totalAgents), "Devcontainers"),
 	})
-	for idx, devcontainer := range dc.Devcontainers {
-		rows = append(rows, renderDevcontainerRow(resources, devcontainer, idx, len(dc.Devcontainers), wro)...)
+	for idx, container := range dc.Containers {
+		rows = append(rows, renderDevcontainerRow(container, idx, len(dc.Containers)))
 	}
 	return rows
 }

-func renderDevcontainerRow(resources []codersdk.WorkspaceResource, devcontainer codersdk.WorkspaceAgentDevcontainer, index, total int, wro WorkspaceResourcesOptions) []table.Row {
-	var rows []table.Row
-
-	// If the devcontainer is running and has an associated agent, we want to
-	// display the agent's details. Otherwise, we just display the devcontainer
-	// name and status.
-	var subAgent *codersdk.WorkspaceAgent
-	displayName := devcontainer.Name
-	if devcontainer.Agent != nil && devcontainer.Status == codersdk.WorkspaceAgentDevcontainerStatusRunning {
-		for _, resource := range resources {
-			if agent, found := slice.Find(resource.Agents, func(agent codersdk.WorkspaceAgent) bool {
-				return agent.ID == devcontainer.Agent.ID
-			}); found {
-				subAgent = &agent
-				break
-			}
-		}
-		if subAgent != nil {
-			displayName = subAgent.Name
-			displayName += fmt.Sprintf(" (%s, %s)", subAgent.OperatingSystem, subAgent.Architecture)
-		}
-	}
-
-	if devcontainer.Container != nil {
-		displayName += " " + pretty.Sprint(DefaultStyles.Keyword, "["+devcontainer.Container.FriendlyName+"]")
-	}
-
-	// Build the main row.
-	row := table.Row{
-		fmt.Sprintf(" %s─ %s", renderPipe(index, total), displayName),
-	}
-
-	// Add status, health, and version columns.
-	if !wro.HideAgentState {
-		if subAgent != nil {
-			row = append(row, renderAgentStatus(*subAgent))
-			row = append(row, renderAgentHealth(*subAgent))
-			row = append(row, renderAgentVersion(subAgent.Version, wro.ServerVersion))
-		} else {
-			row = append(row, renderDevcontainerStatus(devcontainer.Status))
-			row = append(row, "") // No health for devcontainer without agent.
-			row = append(row, "") // No version for devcontainer without agent.
-		}
-	}
-
-	// Add access column.
-	if !wro.HideAccess {
-		if subAgent != nil {
-			accessString := fmt.Sprintf("coder ssh %s.%s", wro.WorkspaceName, subAgent.Name)
-			row = append(row, pretty.Sprint(DefaultStyles.Code, accessString))
-		} else {
-			row = append(row, "") // No access for devcontainers without agent.
-		}
-	}
-
-	rows = append(rows, row)
-
-	// Add error message if present.
-	if errorMessage := devcontainer.Error; errorMessage != "" {
-		// Cap error message length for display.
-		if !wro.ShowDetails && len(errorMessage) > 80 {
-			errorMessage = errorMessage[:79] + "…"
-		}
-		errorRow := table.Row{
-			" × " + pretty.Sprint(DefaultStyles.Error, errorMessage),
-			"",
-			"",
-			"",
-		}
-		if !wro.HideAccess {
-			errorRow = append(errorRow, "")
-		}
-		rows = append(rows, errorRow)
-	}
-
-	// Add listening ports for the devcontainer agent.
-	if subAgent != nil {
-		portRows := renderListeningPorts(wro, subAgent.ID, index, total)
-		for _, portRow := range portRows {
-			// Adjust indentation for ports under devcontainer agent.
-			if len(portRow) > 0 {
-				if str, ok := portRow[0].(string); ok {
-					portRow[0] = " " + str // Add extra indentation.
-				}
-			}
-			rows = append(rows, portRow)
-		}
-	}
-
-	return rows
-}
-
-func renderDevcontainerStatus(status codersdk.WorkspaceAgentDevcontainerStatus) string {
-	switch status {
-	case codersdk.WorkspaceAgentDevcontainerStatusRunning:
-		return pretty.Sprint(DefaultStyles.Keyword, "▶ running")
-	case codersdk.WorkspaceAgentDevcontainerStatusStopped:
-		return pretty.Sprint(DefaultStyles.Placeholder, "⏹ stopped")
-	case codersdk.WorkspaceAgentDevcontainerStatusStarting:
-		return pretty.Sprint(DefaultStyles.Warn, "⧗ starting")
-	case codersdk.WorkspaceAgentDevcontainerStatusError:
-		return pretty.Sprint(DefaultStyles.Error, "✘ error")
-	default:
-		return pretty.Sprint(DefaultStyles.Placeholder, "○ "+string(status))
-	}
-}
+func renderDevcontainerRow(container codersdk.WorkspaceAgentContainer, index, total int) table.Row {
+	var row table.Row
+	var sb strings.Builder
+	_, _ = sb.WriteString(" ")
+	_, _ = sb.WriteString(renderPipe(index, total))
+	_, _ = sb.WriteString("─ ")
+	_, _ = sb.WriteString(pretty.Sprintf(DefaultStyles.Code, "%s", container.FriendlyName))
+	row = append(row, sb.String())
+	sb.Reset()
+	if container.Running {
+		_, _ = sb.WriteString(pretty.Sprintf(DefaultStyles.Keyword, "(%s)", container.Status))
+	} else {
+		_, _ = sb.WriteString(pretty.Sprintf(DefaultStyles.Error, "(%s)", container.Status))
+	}
+	row = append(row, sb.String())
+	sb.Reset()
+	// "health" is not applicable here.
+	row = append(row, sb.String())
+	_, _ = sb.WriteString(container.Image)
+	row = append(row, sb.String())
+	return row
+}

 func renderAgentStatus(agent codersdk.WorkspaceAgent) string {
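Both row renderers above indent entries with `renderPipe`, which is not shown in this diff. A hypothetical sketch, assuming it returns a box-drawing connector based on the entry's position in the tree (the real implementation may differ):

```go
package main

import "fmt"

// renderPipe is a guess at the helper used above: it returns the tree
// connector for the idx-th of total entries, using └ for the last entry
// and ├ for every other one.
func renderPipe(idx, total int) string {
	if idx == total-1 {
		return "└"
	}
	return "├"
}

func main() {
	total := 3
	for i := 0; i < total; i++ {
		fmt.Printf("  %s─ agent%d\n", renderPipe(i, total), i)
	}
}
```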