The Sanctuary

Writing about interests: Computer Science, Philosophy, Mathematics, and AI.

Microservices Architecture: Patterns and Implementation with Spring Cloud

Microservices are, first and foremost, an organisational solution masquerading as a technical one. The reason to decompose a monolith is not that monoliths are inherently bad — they are often the most productive architecture for a small team — but that a monolith forces a hundred engineers to coordinate deployments through a single pipeline, and that coordination cost eventually dwarfs the engineering cost. Microservices trade coordination overhead for operational complexity: instead of one deployment you have fifty, each with its own failure modes, its own data store, its own on-call rotation.

I have worked on both sides of that trade-off, and the lesson I keep relearning is that the patterns matter more than the framework. Circuit breakers, service discovery, event-driven communication, distributed tracing — these are not Spring Cloud features; they are survival strategies for distributed systems. Spring Cloud happens to provide excellent implementations of them, and that is what this guide covers.

Why Microservices?

The honest answer is: only when you have to. Monolithic applications become difficult to scale and maintain as they grow, but the threshold is higher than most teams think:

| Aspect | Monolith | Microservices |
|---|---|---|
| Deployment | All-or-nothing | Independent per service |
| Scaling | Scale entire app | Scale specific services |
| Technology | Single stack | Polyglot possible |
| Team Structure | Large coordinated teams | Small autonomous teams |
| Failure Impact | Entire system | Isolated to service |
| Development Speed | Slows with size | Remains constant |

Architecture Overview

A production microservices architecture requires several supporting components:

[Diagram: Microservices architecture overview]

Core Components

| Component | Purpose | Technologies |
|---|---|---|
| API Gateway | Single entry point, routing, auth | Spring Cloud Gateway, Kong |
| Service Discovery | Dynamic service location | Eureka, Consul, Kubernetes DNS |
| Config Server | Centralized configuration | Spring Cloud Config, Vault |
| Circuit Breaker | Fault tolerance | Resilience4j, Hystrix |
| Message Broker | Async communication | Kafka, RabbitMQ |
| Observability | Monitoring, tracing, logging | Prometheus, Jaeger, ELK |

API Gateway Pattern

The API Gateway is the piece of infrastructure that makes the distributed backend look like a single system to the outside world. Without it, every client needs to know the address of every service, handle authentication independently, and deal with the fact that services come and go. The gateway absorbs that complexity:

[Diagram: API Gateway pattern]

Responsibilities

  1. Request Routing: Route requests to appropriate services
  2. Authentication: Validate tokens, enforce security
  3. Rate Limiting: Protect services from overload
  4. Load Balancing: Distribute traffic across instances
  5. Response Aggregation: Combine responses from multiple services
  6. Protocol Translation: REST to gRPC, WebSocket handling

Spring Cloud Gateway Implementation

# application.yml
spring:
  cloud:
    gateway:
      routes:
        - id: order-service
          uri: lb://order-service
          predicates:
            - Path=/api/orders/**
          filters:
            - StripPrefix=1
            - name: CircuitBreaker
              args:
                name: orderCircuitBreaker
                fallbackUri: forward:/fallback/orders

        - id: product-service
          uri: lb://product-service
          predicates:
            - Path=/api/products/**
          filters:
            - StripPrefix=1
            - name: RequestRateLimiter
              args:
                redis-rate-limiter.replenishRate: 100
                redis-rate-limiter.burstCapacity: 200

      default-filters:
        - name: Retry
          args:
            retries: 3
            statuses: BAD_GATEWAY
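The redis-rate-limiter arguments above are token-bucket parameters: replenishRate is the number of tokens added per second, and burstCapacity is the bucket's ceiling. A framework-free sketch of those semantics (the `TokenBucket` class below is illustrative, not Spring Cloud's `RedisRateLimiter`):

```java
// Minimal token-bucket sketch mirroring replenishRate/burstCapacity semantics.
// Illustration of the algorithm only, not the Spring Cloud implementation.
public class TokenBucket {
    private final double replenishRatePerSec; // tokens added per second
    private final double burstCapacity;       // maximum stored tokens
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(double replenishRatePerSec, double burstCapacity, long nowNanos) {
        this.replenishRatePerSec = replenishRatePerSec;
        this.burstCapacity = burstCapacity;
        this.tokens = burstCapacity; // start full: allows an initial burst
        this.lastRefillNanos = nowNanos;
    }

    /** Try to take one token at the given timestamp; false means "429 Too Many Requests". */
    public synchronized boolean tryAcquire(long nowNanos) {
        double elapsedSec = (nowNanos - lastRefillNanos) / 1_000_000_000.0;
        tokens = Math.min(burstCapacity, tokens + elapsedSec * replenishRatePerSec);
        lastRefillNanos = nowNanos;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

With replenishRate 100 and burstCapacity 200 as configured above, a client can burst 200 requests immediately and then sustain 100 per second.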

Gateway Security

@Configuration
@EnableWebFluxSecurity
public class GatewaySecurityConfig {

    @Bean
    public SecurityWebFilterChain springSecurityFilterChain(ServerHttpSecurity http) {
        http
            .csrf(ServerHttpSecurity.CsrfSpec::disable)
            .authorizeExchange(exchanges -> exchanges
                .pathMatchers("/api/public/**").permitAll()
                .pathMatchers("/api/admin/**").hasRole("ADMIN")
                .anyExchange().authenticated()
            )
            .oauth2ResourceServer(oauth2 -> oauth2
                .jwt(Customizer.withDefaults())
            );
        return http.build();
    }
}

Service Discovery

Services need to find each other dynamically as instances scale up/down:

[Diagram: Service discovery flow]

Netflix Eureka Setup

Eureka Server:

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

# eureka-server application.yml
server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    enableSelfPreservation: false  # local development only; keep enabled in production

Service Registration:

# service application.yml
spring:
  application:
    name: order-service

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    preferIpAddress: true
    lease-renewal-interval-in-seconds: 10
    lease-expiration-duration-in-seconds: 30
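The lease settings above mean each instance heartbeats every 10 seconds and is considered dead if no renewal arrives within 30. A toy registry sketch of that renew/expire cycle (class and method names are illustrative, not Eureka's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy lease-based registry illustrating renew/expire semantics; not Eureka's API.
public class LeaseRegistry {
    private final long leaseExpirationMillis;
    private final Map<String, Long> lastRenewal = new ConcurrentHashMap<>();

    public LeaseRegistry(long leaseExpirationMillis) {
        this.leaseExpirationMillis = leaseExpirationMillis;
    }

    /** Register or heartbeat: records the instance's last renewal time. */
    public void renew(String instanceId, long nowMillis) {
        lastRenewal.put(instanceId, nowMillis);
    }

    /** True if the instance is registered and its lease has not expired. */
    public boolean isUp(String instanceId, long nowMillis) {
        Long last = lastRenewal.get(instanceId);
        return last != null && nowMillis - last <= leaseExpirationMillis;
    }

    /** Drop every instance whose lease expired (what Eureka's evictor task does). */
    public void evictExpired(long nowMillis) {
        lastRenewal.entrySet().removeIf(e -> nowMillis - e.getValue() > leaseExpirationMillis);
    }
}
```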

Kubernetes-Native Discovery

With Kubernetes, use native DNS-based discovery:

spring:
  cloud:
    kubernetes:
      discovery:
        enabled: true
        all-namespaces: false
      loadbalancer:
        mode: SERVICE

Circuit Breaker Pattern

The circuit breaker is the pattern I wish I had understood before my first production incident with microservices. Without it, a single slow or failing service cascades through every upstream caller — threads pool up, timeouts stack, and within minutes the entire system is unresponsive. The circuit breaker prevents this by failing fast:

[Diagram: Circuit breaker state transitions]

Circuit Breaker States

| State | Behavior |
|---|---|
| Closed | Requests pass through normally |
| Open | Requests fail immediately (fallback) |
| Half-Open | Limited requests to test recovery |
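The table above is really a small state machine, and writing it out makes the transitions concrete. This sketch is an illustration of the pattern, not the Resilience4j implementation — in particular it closes the circuit after a single successful trial call, where Resilience4j requires `permittedNumberOfCallsInHalfOpenState` successes:

```java
// Minimal circuit-breaker state machine: CLOSED -> OPEN when the failure rate
// over a window breaches the threshold, OPEN -> HALF_OPEN after a wait,
// HALF_OPEN -> CLOSED/OPEN depending on the trial call's outcome.
public class SimpleCircuitBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private final int windowSize;              // calls counted before the rate is evaluated
    private final double failureRateThreshold; // e.g. 0.5 == 50%
    private final long openWaitMillis;         // analogous to waitDurationInOpenState
    private int calls;
    private int failures;
    private long openedAt;

    public SimpleCircuitBreaker(int windowSize, double failureRateThreshold, long openWaitMillis) {
        this.windowSize = windowSize;
        this.failureRateThreshold = failureRateThreshold;
        this.openWaitMillis = openWaitMillis;
    }

    /** Gatekeeper: OPEN rejects immediately; after the wait, move to HALF_OPEN. */
    public boolean allowRequest(long nowMillis) {
        if (state == State.OPEN && nowMillis - openedAt >= openWaitMillis) {
            state = State.HALF_OPEN;
            calls = failures = 0;
        }
        return state != State.OPEN;
    }

    public void recordSuccess() {
        if (state == State.HALF_OPEN) { // simplification: one trial success closes the circuit
            state = State.CLOSED;
            calls = failures = 0;
            return;
        }
        calls++;
    }

    public void recordFailure(long nowMillis) {
        if (state == State.HALF_OPEN) { // a trial failure re-opens immediately
            open(nowMillis);
            return;
        }
        calls++;
        failures++;
        if (calls >= windowSize && (double) failures / calls >= failureRateThreshold) {
            open(nowMillis);
        }
    }

    private void open(long nowMillis) {
        state = State.OPEN;
        openedAt = nowMillis;
        calls = failures = 0;
    }

    public State state() { return state; }
}
```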

Resilience4j Implementation

@Slf4j
@RequiredArgsConstructor
@Service
public class OrderService {

    private final ProductClient productClient;

    @CircuitBreaker(name = "productService", fallbackMethod = "getProductFallback")
    @Retry(name = "productService")
    @TimeLimiter(name = "productService")
    public CompletableFuture<Product> getProduct(String productId) {
        return CompletableFuture.supplyAsync(() ->
            productClient.getProduct(productId)
        );
    }

    public CompletableFuture<Product> getProductFallback(String productId, Exception ex) {
        log.warn("Fallback for product {}: {}", productId, ex.getMessage());
        return CompletableFuture.completedFuture(
            Product.builder()
                .id(productId)
                .name("Product Unavailable")
                .cached(true)
                .build()
        );
    }
}

# application.yml
resilience4j:
  circuitbreaker:
    instances:
      productService:
        registerHealthIndicator: true
        slidingWindowSize: 10
        minimumNumberOfCalls: 5
        permittedNumberOfCallsInHalfOpenState: 3
        automaticTransitionFromOpenToHalfOpenEnabled: true
        waitDurationInOpenState: 5s
        failureRateThreshold: 50
        eventConsumerBufferSize: 10

  retry:
    instances:
      productService:
        maxAttempts: 3
        waitDuration: 100ms
        enableExponentialBackoff: true
        exponentialBackoffMultiplier: 2

  timelimiter:
    instances:
      productService:
        timeoutDuration: 3s
        cancelRunningFuture: true
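The retry block above (waitDuration 100ms, multiplier 2, maxAttempts 3) produces waits of 100ms and then 200ms between the three attempts. The arithmetic, as a sketch (the `BackoffSchedule` class is illustrative, not a Resilience4j type):

```java
import java.util.ArrayList;
import java.util.List;

// Computes the wait before each retry under exponential backoff:
// delay(n) = initial * multiplier^(n-1). With maxAttempts attempts
// there are maxAttempts - 1 waits between them.
public class BackoffSchedule {
    public static List<Long> delaysMillis(int maxAttempts, long initialMillis, double multiplier) {
        List<Long> delays = new ArrayList<>();
        double d = initialMillis;
        for (int retry = 1; retry < maxAttempts; retry++) {
            delays.add((long) d);
            d *= multiplier;
        }
        return delays;
    }
}
```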

Configuration Management

Centralize configuration for all services:

Spring Cloud Config Server

@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}

# config-server application.yml
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/org/config-repo
          default-label: main
          search-paths: '{application}'
        encrypt:
          enabled: true

encrypt:
  key: ${ENCRYPT_KEY}

HashiCorp Vault Integration

For secrets management:

spring:
  cloud:
    vault:
      uri: https://vault.example.com
      authentication: KUBERNETES
      kubernetes:
        role: my-service
        kubernetes-path: kubernetes
      kv:
        enabled: true
        backend: secret
        default-context: application

  config:
    import: vault://

@Configuration
public class DatabaseConfig {

    @Value("${database.username}")
    private String username;

    @Value("${database.password}")
    private String password;  // Fetched from Vault

    // ...
}

Database Per Service

This is the rule that teams resist the hardest — and the one that matters the most. If two services share a database, they are not microservices; they are a distributed monolith with all the operational complexity and none of the independence. Each microservice owns its data:

[Diagram: Database per service]

Patterns

| Pattern | Use Case |
|---|---|
| Private Database | Full isolation, different schemas |
| Schema Per Service | Shared database, logical separation |
| Shared Database | Legacy migration (avoid long-term) |

Implementation

@Entity
@Table(name = "orders")
public class Order {

    @Id
    @GeneratedValue(strategy = GenerationType.UUID)
    private String id;

    private String customerId;  // Reference, not FK
    private String productId;   // Reference, not FK

    @Enumerated(EnumType.STRING)
    private OrderStatus status;

    private BigDecimal totalAmount;

    @CreatedDate
    private Instant createdAt;
}

Data Consistency: Saga Pattern

For distributed transactions:

@Service
@RequiredArgsConstructor
public class OrderSagaOrchestrator {

    private final OrderService orderService;
    private final InventoryService inventoryService;
    private final PaymentService paymentService;

    public void createOrder(CreateOrderCommand command) {
        // Step 1: Create order
        Order order = orderService.createOrder(command);

        try {
            // Step 2: Reserve inventory
            inventoryService.reserveStock(order.getProductId(), order.getQuantity());

            // Step 3: Process payment
            paymentService.processPayment(order.getCustomerId(), order.getTotalAmount());

            // Step 4: Confirm order
            orderService.confirmOrder(order.getId());

        } catch (InventoryException e) {
            // Compensate: Cancel order
            orderService.cancelOrder(order.getId());
            throw e;

        } catch (PaymentException e) {
            // Compensate: Release inventory, cancel order
            inventoryService.releaseStock(order.getProductId(), order.getQuantity());
            orderService.cancelOrder(order.getId());
            throw e;
        }
    }
}
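The orchestrator above hand-codes one compensation path per failure type. The same logic generalizes: remember each completed step's compensating action on a stack and, when any step fails, unwind in reverse order. A framework-free sketch (`SagaExecutor` and `Step` are illustrative names, not a Spring API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Generic saga step runner: execute steps in order, remember each completed
// step's compensation, and on failure unwind the completed steps in reverse.
public class SagaExecutor {
    public interface Step {
        void execute();
        void compensate();

        static Step of(Runnable execute, Runnable compensate) {
            return new Step() {
                @Override public void execute() { execute.run(); }
                @Override public void compensate() { compensate.run(); }
            };
        }
    }

    /** Runs all steps; on failure compensates completed steps (LIFO) and rethrows. */
    public void run(Step... steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.execute();
                completed.push(step); // last completed compensates first
            } catch (RuntimeException e) {
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                throw e;
            }
        }
    }
}
```

In the order flow above, createOrder/cancelOrder and reserveStock/releaseStock would be the execute/compensate pairs.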

Asynchronous Communication

Use message brokers for event-driven communication:

Apache Kafka Integration

@Configuration
public class KafkaConfig {

    @Bean
    public ProducerFactory<String, OrderEvent> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, OrderEvent> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}

@Service
@RequiredArgsConstructor
public class OrderEventPublisher {

    private final KafkaTemplate<String, OrderEvent> kafkaTemplate;

    public void publishOrderCreated(Order order) {
        OrderEvent event = new OrderEvent(
            order.getId(),
            OrderEventType.CREATED,
            order
        );
        kafkaTemplate.send("order-events", order.getId(), event);
    }
}

@Service
public class InventoryEventConsumer {

    @KafkaListener(topics = "order-events", groupId = "inventory-service")
    public void handleOrderEvent(OrderEvent event) {
        switch (event.getType()) {
            case CREATED -> reserveInventory(event.getOrder());
            case CANCELLED -> releaseInventory(event.getOrder());
        }
    }
}
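Kafka delivers at-least-once, so the listener above can receive the same OrderEvent twice, for example after a consumer-group rebalance. The standard defense is an idempotent handler that remembers processed event IDs. A minimal in-memory sketch — in production the seen-set belongs in the service's own database, committed in the same transaction as the side effects:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Idempotent-consumer sketch: process each event id at most once.
// Illustrative only; a real service persists the processed ids durably.
public class IdempotentHandler<T> {
    private final Set<String> processed = ConcurrentHashMap.newKeySet();
    private final Consumer<T> delegate;

    public IdempotentHandler(Consumer<T> delegate) {
        this.delegate = delegate;
    }

    /** Returns true if the event was processed, false if it was a duplicate. */
    public boolean handle(String eventId, T event) {
        if (!processed.add(eventId)) {
            return false; // already seen: redelivery is a no-op
        }
        delegate.accept(event);
        return true;
    }
}
```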

Inter-Service Communication

OpenFeign Clients

@FeignClient(name = "product-service", fallback = ProductClientFallback.class)
public interface ProductClient {

    @GetMapping("/api/products/{id}")
    Product getProduct(@PathVariable String id);

    @GetMapping("/api/products")
    List<Product> getProducts(@RequestParam List<String> ids);
}

@Component
public class ProductClientFallback implements ProductClient {

    @Override
    public Product getProduct(String id) {
        return Product.unavailable(id);
    }

    @Override
    public List<Product> getProducts(List<String> ids) {
        return ids.stream()
            .map(Product::unavailable)
            .collect(toList());
    }
}

gRPC for High Performance

// product.proto
syntax = "proto3";

service ProductService {
    rpc GetProduct(ProductRequest) returns (ProductResponse);
    rpc GetProducts(ProductsRequest) returns (stream ProductResponse);
}

message ProductRequest {
    string product_id = 1;
}

message ProductResponse {
    string id = 1;
    string name = 2;
    double price = 3;
    int32 stock = 4;
}

@GrpcService
@RequiredArgsConstructor
public class ProductGrpcService extends ProductServiceGrpc.ProductServiceImplBase {

    private final ProductRepository repository;

    @Override
    public void getProduct(ProductRequest request, StreamObserver<ProductResponse> observer) {
        Product product = repository.findById(request.getProductId())
            .orElseThrow(() -> new ProductNotFoundException(request.getProductId()));

        observer.onNext(toProto(product));
        observer.onCompleted();
    }
}

Observability

Distributed Tracing with Micrometer

management:
  tracing:
    sampling:
      probability: 1.0
  zipkin:
    tracing:
      endpoint: http://zipkin:9411/api/v2/spans

logging:
  pattern:
    level: "%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]"

Prometheus Metrics

@RestController
public class OrderController {

    private final OrderService orderService;
    private final Counter orderCounter;
    private final Timer orderTimer;

    public OrderController(OrderService orderService, MeterRegistry registry) {
        this.orderService = orderService;
        this.orderCounter = Counter.builder("orders.created")
            .description("Number of orders created")
            .register(registry);

        this.orderTimer = Timer.builder("orders.processing.time")
            .description("Order processing time")
            .register(registry);
    }

    @PostMapping("/orders")
    public Order createOrder(@RequestBody CreateOrderRequest request) {
        return orderTimer.record(() -> {
            Order order = orderService.create(request);
            orderCounter.increment();
            return order;
        });
    }
}

Centralized Logging

# logback-spring.xml
<appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <includeMdcKeyName>traceId</includeMdcKeyName>
        <includeMdcKeyName>spanId</includeMdcKeyName>
    </encoder>
</appender>

Deployment Architecture

Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: kubernetes
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 30
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - port: 80
      targetPort: 8080

Testing Microservices

Contract Testing with Pact

@ExtendWith(PactConsumerTestExt.class)
class ProductClientContractTest {

    @Pact(consumer = "order-service", provider = "product-service")
    public V4Pact getProductPact(PactDslWithProvider builder) {
        return builder
            .given("product exists")
            .uponReceiving("get product request")
            .path("/api/products/prod-123")
            .method("GET")
            .willRespondWith()
            .status(200)
            .body(new PactDslJsonBody()
                .stringValue("id", "prod-123")
                .stringValue("name", "Test Product")
                .decimalType("price", 99.99))
            .toPact(V4Pact.class);
    }

    @Test
    @PactTestFor(pactMethod = "getProductPact")
    void testGetProduct(MockServer mockServer) {
        ProductClient client = Feign.builder()
            .contract(new SpringMvcContract())
            .decoder(new JacksonDecoder())
            .target(ProductClient.class, mockServer.getUrl());
        Product product = client.getProduct("prod-123");

        assertThat(product.getId()).isEqualTo("prod-123");
        assertThat(product.getName()).isEqualTo("Test Product");
    }
}

Integration Testing with Testcontainers

@SpringBootTest
@Testcontainers
class OrderServiceIntegrationTest {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15");

    @Container
    static KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.kafka.bootstrap-servers", kafka::getBootstrapServers);
    }

    @Test
    void shouldCreateOrderAndPublishEvent() {
        // Test implementation
    }
}

Hard-Won Lessons

On Service Design

The most common mistake is decomposing too finely. A “user-preferences-service” that exists solely because someone drew a box on a diagram is not a microservice; it is an unnecessary network hop. Each service should own a meaningful business capability — something that a product manager could name. If it does not have its own data, its own release cadence, and its own reason to exist independently, it should be a module inside another service.

On Operational Discipline

Health checks (liveness and readiness probes) are non-negotiable. Graceful shutdown — draining connections before terminating — prevents the request errors that plague every rolling update without it. Idempotency in your APIs means that retries are safe, which means your circuit breakers and message consumers can retry without fear. And correlation IDs — a single trace ID propagated through every service call — are the difference between debugging a distributed failure in minutes and debugging it in days.
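The correlation-ID idea is simple enough to sketch without a framework: adopt the incoming header if present, mint a new ID otherwise, and keep it in thread-scoped storage for the duration of the request. Micrometer Tracing does this for you via MDC and instrumentation; the class below is an illustrative stand-in, not a real API:

```java
import java.util.Map;
import java.util.UUID;

// Framework-free correlation-id sketch: read the incoming header if present,
// otherwise mint a new id; hold it in a ThreadLocal for the request's duration.
public final class CorrelationContext {
    public static final String HEADER = "X-Correlation-Id";
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    private CorrelationContext() {}

    /** Called at the start of handling a request: adopt or create an id. */
    public static String begin(Map<String, String> incomingHeaders) {
        String id = incomingHeaders.getOrDefault(HEADER, UUID.randomUUID().toString());
        CURRENT.set(id);
        return id;
    }

    /** The id to log with, and to copy onto every outgoing call's headers. */
    public static String currentId() {
        return CURRENT.get();
    }

    /** Called when the request finishes, so the pooled thread can be reused safely. */
    public static void clear() {
        CURRENT.remove();
    }
}
```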

On Security

In a microservices architecture, the network is not a trust boundary. Assume zero trust: authenticate every service-to-service call, manage secrets through Vault rather than environment variables, restrict traffic with network policies, and encrypt everything with mTLS. This sounds paranoid until you consider that a compromised container in a flat network has access to every other service.

Final Thoughts

Microservices are not a goal; they are a trade-off. You gain independent deployability, team autonomy, and the ability to scale individual components — and you pay for it with operational complexity, distributed debugging, and the constant discipline required to keep dozens of services healthy, observable, and secure.

The Spring Cloud ecosystem absorbs a significant portion of that complexity. The API Gateway handles routing and security. Eureka or Kubernetes DNS handles discovery. Resilience4j handles the failure modes that distributed systems inevitably produce. Kafka handles the asynchronous communication that keeps services decoupled. And the observability stack — Prometheus, Jaeger, structured logging — gives you the visibility to understand what is actually happening when things go wrong.

But none of these tools substitute for the hardest part: deciding where to draw the service boundaries. Get the boundaries right and the architecture serves you for years. Get them wrong — split too finely, or along the wrong domain lines — and you spend the next two years merging services back together. Start with a well-structured monolith, extract services only when the organisational or scaling pressure demands it, and treat every new service boundary as a commitment that is expensive to reverse.


A guide to building resilient distributed systems.

Achraf SOLTANI — August 10, 2024