## 10. Test Projects

### Test Strategy

The boilerplate scaffolds three test projects, each targeting a different layer and testing style:

| Project | Tests | Style |
|---|---|---|
| `${PROJECT}.Domain.Tests` | Entities, value objects, Result pattern, domain logic | Pure unit tests — no mocks needed (Domain has zero dependencies) |
| `${PROJECT}.Application.Tests` | Command/query handlers, validators, pipeline behaviors | Unit tests with mocked `ApplicationDbContext` and `IUnitOfWork` |
| `${PROJECT}.IntegrationTests` | Full HTTP request/response cycle through the API | Integration tests using `WebApplicationFactory` with a real PostgreSQL instance |

**Shared test tooling across all projects:**

- **xUnit** — test framework (`[Fact]`, `[Theory]`)
- **FluentAssertions** — expressive assertions (`result.Should().Be(...)`)
- **Moq** — mocking framework for isolating dependencies
- **coverlet.collector** — code coverage collection (used by the CI pipeline)

**Integration test additions:**

- **`Microsoft.AspNetCore.Mvc.Testing`** — provides `WebApplicationFactory` for in-process HTTP testing

### Test File Naming Convention

Test files follow the pattern `{ClassUnderTest}Tests.cs`. For example:

- `ResultTests.cs` tests `Result.cs`
- `ValidationBehaviorTests.cs` tests `ValidationBehavior.cs`

### Project References

Each test project references only the layer it tests:

- `Domain.Tests` → `Domain`
- `Application.Tests` → `Application` (which transitively includes Domain)
- `IntegrationTests` → `Api` (which transitively includes everything)

This mirrors the Clean Architecture dependency rule — test projects never reach across layers.

---

## 11. CI/CD Pipeline

The CI/CD pipeline is split into two conceptual sections:

1. **Standard CI** (test, SonarCloud, Qodana) — reusable as-is across projects. Just update repository variables.
2. **Deployment** (Docker build/push, SSH deploy) — project-specific. Customize the deployment target, Tailscale config, and SSH details.
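To illustrate point 1, the repository variables and secrets that the standard CI jobs read can be seeded with the GitHub CLI rather than through the web UI. This is a convenience sketch, not part of the boilerplate; all values shown are placeholders, and the deployment-specific entries (`SSH_*`, `TS_*`, `DEPLOY_PATH`) follow the same pattern.

```shell
# Sketch: seed the CI configuration for the current repository with the
# GitHub CLI (requires an authenticated `gh` session). Values are placeholders.
gh variable set SONAR_PROJECT_KEY --body "my-org_my-project"
gh variable set SONAR_ORGANIZATION_KEY --body "my-org"
gh variable set DOCKERHUB_USERNAME --body "my-dockerhub-user"

gh secret set SONAR_TOKEN --body "placeholder-sonarcloud-token"
gh secret set QODANA_TOKEN --body "placeholder-qodana-token"
gh secret set DOCKERHUB_TOKEN --body "placeholder-dockerhub-token"
```

The full list of required names appears in the "Required GitHub Secrets and Variables" table at the end of this section.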
### `qodana.yaml`

Configuration for JetBrains Qodana static analysis. Points to the `.slnx` solution file and uses the starter inspection profile.

```yaml
#-------------------------------------------------------------------------------#
#               Qodana analysis is configured by qodana.yaml file               #
#             https://www.jetbrains.com/help/qodana/qodana-yaml.html            #
#-------------------------------------------------------------------------------#

#################################################################################
#            WARNING: Do not store sensitive information in this file,          #
#               as its contents will be included in the Qodana report.          #
#################################################################################

version: "1.0"

#Specify IDE code to run analysis without container (Applied in CI/CD pipeline)
ide: QDNET

#Specify the .NET solution to analyze
dotnet:
  solution: ${PROJECT}.slnx

#Specify inspection profile for code analysis
profile:
  name: qodana.starter

#Enable inspections
#include:
#  - name:

#Disable inspections
#exclude:
#  - name:
#    paths:
#      -

#Execute shell command before Qodana execution (Applied in CI/CD pipeline)
#bootstrap: sh ./prepare-qodana.sh

#Install IDE plugins before Qodana execution (Applied in CI/CD pipeline)
#plugins:
#  - id: #(plugin id can be found at https://plugins.jetbrains.com)

# Quality gate. Will fail the CI/CD pipeline if any condition is not met
# severityThresholds - configures maximum thresholds for different problem severities
# testCoverageThresholds - configures minimum code coverage on a whole project and newly added code
# Code Coverage is available in Ultimate and Ultimate Plus plans
#failureConditions:
#  severityThresholds:
#    any: 15
#    critical: 5
#  testCoverageThresholds:
#    fresh: 70
#    total: 50
```

### `.github/workflows/ci.yml`

Full CI/CD pipeline with four jobs:

```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [main, dev]
  pull_request:
    branches: [main, dev]

env:
  DOTNET_VERSION: "10.0.x"
  JAVA_VERSION: "17"
  DOCKER_IMAGE: ${{ vars.DOCKERHUB_USERNAME }}/${PROJECT_LOWER}-api

jobs:
  # ─────────────────────────────────────────────────────────────────
  # Job 1: Build, test, and collect coverage
  # ─────────────────────────────────────────────────────────────────
  test:
    name: Build & Test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:17-alpine
        env:
          POSTGRES_DB: ${PROJECT_LOWER}_test
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready -U postgres"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}

      - name: Restore dependencies
        run: dotnet restore

      - name: Build solution
        run: dotnet build --no-restore --configuration Release

      - name: Run tests
        run: >-
          dotnet test
          --no-build
          --configuration Release
          --logger "trx;LogFileName=test-results.trx"
          --collect:"XPlat Code Coverage"
          --results-directory ./TestResults
        env:
          ConnectionStrings__DefaultConnection: "Host=localhost;Port=5432;Database=${PROJECT_LOWER}_test;Username=postgres;Password=postgres"

      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: ./TestResults
          retention-days: 7

  # ─────────────────────────────────────────────────────────────────
  # Job 2: SonarCloud analysis
  # ─────────────────────────────────────────────────────────────────
  sonar:
    name: SonarCloud Analysis
    runs-on: ubuntu-latest
    needs: test
    services:
      postgres:
        image: postgres:17-alpine
        env:
          POSTGRES_DB: ${PROJECT_LOWER}_test
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready -U postgres"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: ${{ env.DOTNET_VERSION }}

      - name: Setup Java
        uses: actions/setup-java@v4
        with:
          distribution: "temurin"
          java-version: ${{ env.JAVA_VERSION }}

      - name: Install SonarScanner
        run: dotnet tool install --global dotnet-sonarscanner

      - name: Begin SonarCloud analysis
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        run: >-
          dotnet sonarscanner begin
          /k:"${{ vars.SONAR_PROJECT_KEY }}"
          /o:"${{ vars.SONAR_ORGANIZATION_KEY }}"
          /d:sonar.token="${{ secrets.SONAR_TOKEN }}"
          /d:sonar.host.url="https://sonarcloud.io"
          /d:sonar.cs.opencover.reportsPaths="**/TestResults/**/coverage.opencover.xml"
          /d:sonar.exclusions="**/obj/**,**/bin/**"
          /d:sonar.coverage.exclusions="**/obj/**,**/bin/**,**/Migrations/**"

      - name: Build solution
        run: dotnet build --configuration Release

      - name: Run tests with coverage
        run: >-
          dotnet test
          --no-build
          --configuration Release
          --collect:"XPlat Code Coverage;Format=opencover"
          --results-directory ./TestResults
        env:
          ConnectionStrings__DefaultConnection: "Host=localhost;Port=5432;Database=${PROJECT_LOWER}_test;Username=postgres;Password=postgres"

      - name: End SonarCloud analysis
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        run: dotnet sonarscanner end /d:sonar.token="${{ secrets.SONAR_TOKEN }}"

      - name: Check SonarCloud Quality Gate
        uses: sonarsource/sonarqube-quality-gate-action@v1.2.0
        timeout-minutes: 5
        continue-on-error: true
        with:
          scanMetadataReportFile: .sonarqube/out/.sonar/report-task.txt
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

  # ─────────────────────────────────────────────────────────────────
  # Job 3: Qodana analysis (parallel with SonarCloud)
  # ─────────────────────────────────────────────────────────────────
  qodana:
    name: Qodana Analysis
    runs-on: ubuntu-latest
    needs: test
    permissions:
      contents: write
      pull-requests: write
      checks: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Qodana Scan
        uses: JetBrains/qodana-action@v2025.1
        env:
          QODANA_TOKEN: ${{ secrets.QODANA_TOKEN }}

  # ─────────────────────────────────────────────────────────────────
  # Job 4: Build, push, and deploy
  # ─────────────────────────────────────────────────────────────────
  deploy:
    name: Deploy to Server
    runs-on: ubuntu-latest
    needs: [sonar, qodana]
    if: github.ref == 'refs/heads/dev' && github.event_name == 'push'
    environment: Development
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ${{ env.DOCKER_IMAGE }}:latest
            ${{ env.DOCKER_IMAGE }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Connect to Tailscale
        uses: tailscale/github-action@v4
        with:
          oauth-client-id: ${{ vars.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:ci

      - name: Copy compose file to server
        uses: appleboy/scp-action@v1.0.0
        with:
          host: ${{ vars.SSH_HOST }}
          username: ${{ vars.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          port: ${{ vars.SSH_PORT }}
          source: "compose.yml,.env.example"
          target: ${{ vars.DEPLOY_PATH }}

      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1.2.0
        with:
          host: ${{ vars.SSH_HOST }}
          username: ${{ vars.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          port: ${{ vars.SSH_PORT }}
          envs: DOCKER_IMAGE,IMAGE_TAG
          script: |
            cd ${{ vars.DEPLOY_PATH }}

            if [ ! -f .env ]; then
              cp .env.example .env
              chmod 600 .env
            fi

            # Update DOCKER_IMAGE and IMAGE_TAG in .env with values from CI
            sed -i "s|^DOCKER_IMAGE=.*|DOCKER_IMAGE=${DOCKER_IMAGE}|" .env
            sed -i "s|^IMAGE_TAG=.*|IMAGE_TAG=${IMAGE_TAG}|" .env

            docker compose pull
            docker compose up -d

            sleep 10

            # Verify all expected services are running
            EXPECTED=2
            RUNNING=$(docker compose ps --status running --quiet | wc -l)
            if [ "$RUNNING" -lt "$EXPECTED" ]; then
              echo "Expected $EXPECTED services, found $RUNNING running"
              docker compose logs --tail=50
              exit 1
            fi
        env:
          DOCKER_IMAGE: ${{ env.DOCKER_IMAGE }}
          IMAGE_TAG: ${{ github.sha }}
```

### Required GitHub Secrets and Variables

| Type | Name | Description |
|---|---|---|
| **Secret** | `SONAR_TOKEN` | SonarCloud authentication token |
| **Secret** | `QODANA_TOKEN` | Qodana Cloud authentication token |
| **Secret** | `DOCKERHUB_TOKEN` | Docker Hub access token |
| **Secret** | `SSH_KEY` | Private SSH key for deployment server |
| **Secret** | `TS_OAUTH_SECRET` | Tailscale OAuth secret |
| **Variable** | `DOCKERHUB_USERNAME` | Docker Hub username |
| **Variable** | `SONAR_PROJECT_KEY` | SonarCloud project key |
| **Variable** | `SONAR_ORGANIZATION_KEY` | SonarCloud organization key |
| **Variable** | `SSH_HOST` | Deployment server hostname (Tailscale IP) |
| **Variable** | `SSH_USERNAME` | SSH username on deployment server |
| **Variable** | `SSH_PORT` | SSH port on deployment server |
| **Variable** | `TS_OAUTH_CLIENT_ID` | Tailscale OAuth client ID |
| **Variable** | `DEPLOY_PATH` | Path on server where app is deployed |

---

## 12. AI-Assisted Development

### `.github/copilot-instructions.md`

This file provides project-specific context to GitHub Copilot (and other AI assistants that read it). It documents:

- **Git commit conventions** — execute commits directly, no `Co-authored-by` trailers
- **Build & run commands** — `dotnet build`, `dotnet run`, `dotnet test`, EF Core migrations
- **Architecture** — Clean Architecture dependency flow, layer responsibilities
- **Key conventions** — CQRS with MediatR, Result pattern, Domain layer rules
- **Testing conventions** — xUnit, Moq, FluentAssertions, `WebApplicationFactory`
- **Tech stack** — .NET 10, PostgreSQL, MediatR 14, FluentValidation 12, Serilog

This file is automatically picked up by Copilot in VS Code and on GitHub.com, providing contextual suggestions that align with the project's architecture and conventions.

---

## 13. Running the Project

### Local Development (without Docker)

```bash
# Start PostgreSQL (via Docker or locally)
docker compose up db -d

# Run the API
dotnet run --project src/${PROJECT}.Api

# API available at http://localhost:5212
# Health check: http://localhost:5212/health
# OpenAPI spec: http://localhost:5212/openapi/v1.json (dev only)
```

### Docker Compose (full stack)

```bash
# Build and start everything
docker compose up --build -d

# API available at http://localhost:5212
# Health check: http://localhost:5212/health
```

### Running Tests

```bash
# All tests
dotnet test

# Specific test project
dotnet test tests/${PROJECT}.Domain.Tests
dotnet test tests/${PROJECT}.Application.Tests
dotnet test tests/${PROJECT}.IntegrationTests

# Single test by name
dotnet test --filter "FullyQualifiedName~MyTestMethod"
```

### EF Core Migrations

```bash
# Add a new migration (supply a migration name)
dotnet ef migrations add <MigrationName> \
  --project src/${PROJECT}.Infrastructure \
  --startup-project src/${PROJECT}.Api

# Apply migrations
dotnet ef database update \
  --project src/${PROJECT}.Infrastructure \
  --startup-project src/${PROJECT}.Api
```

---

## Appendix A: Outbox Pattern (Optional)

> **This section is an optional enhancement.** The boilerplate uses EF Core's `DomainEventInterceptor` to dispatch domain events in-process during `SaveChanges`. The outbox pattern is the next evolution — use it when you need **guaranteed delivery** of domain events to external systems (message brokers, other microservices) with at-least-once semantics.

### The Problem

The current `DomainEventInterceptor` dispatches events via MediatR *inside* the `SaveChanges` pipeline. This works well for in-process handlers (updating read models, sending notifications, etc.), but has two limitations:

1. **No guarantee of delivery to external systems** — if the application crashes after `SaveChanges` but before an external message is sent, the event is lost.
2. **Coupling to the transaction boundary** — if an event handler calls an external API or publishes to a message broker, you're mixing I/O with the database transaction.

### The Outbox Pattern Solution

Instead of dispatching events immediately, write them as rows in an `OutboxMessages` table within the **same database transaction** as the business data. A background job (Quartz.NET, which is already included in this boilerplate) polls the table and publishes events to external consumers.

**Flow:**

1. Handler modifies entity + raises domain event
2. `SaveChanges` interceptor serializes domain events into the `OutboxMessages` table (same transaction)
3. Transaction commits — business data and outbox messages are atomically consistent
4. Quartz.NET background job polls `OutboxMessages`, publishes to the message broker, marks messages as processed

### Implementation Sketch

**1. Outbox entity (Domain or Infrastructure layer):**

```csharp
public sealed class OutboxMessage
{
    public Guid Id { get; init; }
    public string Type { get; init; } = string.Empty;     // Event CLR type name
    public string Content { get; init; } = string.Empty;  // Serialized event payload (JSON)
    public DateTime OccurredOnUtc { get; init; }
    public DateTime? ProcessedOnUtc { get; set; }
    public string? Error { get; set; }
}
```

**2. Modified interceptor (writes to outbox instead of dispatching):**

```csharp
// In SaveChangesInterceptor, replace MediatR Publish with:
var outboxMessages = domainEvents.Select(e => new OutboxMessage
{
    Id = Guid.NewGuid(),
    Type = e.GetType().Name,
    Content = JsonConvert.SerializeObject(e, new JsonSerializerSettings
    {
        TypeNameHandling = TypeNameHandling.All
    }),
    OccurredOnUtc = DateTime.UtcNow
});

dbContext.Set<OutboxMessage>().AddRange(outboxMessages);
// Events are persisted in the same SaveChanges transaction
```

**3. Quartz.NET background job:**

```csharp
[DisallowConcurrentExecution]
public sealed class ProcessOutboxMessagesJob(
    ApplicationDbContext dbContext,
    IPublisher publisher) : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        var messages = await dbContext.Set<OutboxMessage>()
            .Where(m => m.ProcessedOnUtc == null)
            .OrderBy(m => m.OccurredOnUtc)
            .Take(20)
            .ToListAsync(context.CancellationToken);

        foreach (var message in messages)
        {
            try
            {
                var domainEvent = JsonConvert.DeserializeObject(
                    message.Content,
                    new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All });

                if (domainEvent is not null)
                    await publisher.Publish(domainEvent, context.CancellationToken);

                message.ProcessedOnUtc = DateTime.UtcNow;
            }
            catch (Exception ex)
            {
                message.Error = ex.ToString();
            }
        }

        await dbContext.SaveChangesAsync(context.CancellationToken);
    }
}
```

### When to Adopt This

- You're publishing events to a message broker (RabbitMQ, Azure Service Bus, Kafka)
- You need at-least-once delivery guarantees
- You're in a microservices architecture where services communicate via events

For purely in-process event handling (the common case in a monolith), the existing `DomainEventInterceptor` approach is simpler and sufficient.
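If you do adopt the pattern, `ProcessOutboxMessagesJob` still needs a schedule. A minimal Quartz.NET registration sketch is shown below; the extension-method name and the 10-second polling interval are illustrative assumptions, not something the boilerplate ships.

```csharp
using Microsoft.Extensions.DependencyInjection;
using Quartz;

public static class OutboxSchedulingExtensions
{
    // Hypothetical helper: registers the outbox processor with Quartz.NET's
    // DI integration and runs the scheduler as a hosted service.
    public static IServiceCollection AddOutboxProcessing(this IServiceCollection services)
    {
        services.AddQuartz(configure =>
        {
            var jobKey = new JobKey(nameof(ProcessOutboxMessagesJob));

            configure
                .AddJob<ProcessOutboxMessagesJob>(jobKey)
                .AddTrigger(trigger => trigger
                    .ForJob(jobKey)
                    .WithSimpleSchedule(schedule => schedule
                        .WithIntervalInSeconds(10)   // polling interval: an assumption, tune to taste
                        .RepeatForever()));
        });

        services.AddQuartzHostedService();
        return services;
    }
}
```

Calling `services.AddOutboxProcessing()` from the Infrastructure layer's DI setup keeps the scheduling concern next to the interceptor that fills the outbox. `[DisallowConcurrentExecution]` on the job plus a single trigger ensures only one poll runs at a time, so messages are not double-published by overlapping executions.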