// tower/circuit_breaker/mod.rs

//! Circuit breaker middleware for Tower services.
//!
//! Prevents cascading failures by tracking service health and
//! short-circuiting requests to a failing backend before they hit the
//! network.
//!
//! # States
//!
//! ```text
//! Closed   ──(N consecutive failures)──► Open
//! Open     ──(timeout elapsed)─────────► HalfOpen  (one probe allowed)
//! HalfOpen ──(success rate ≥ threshold)► Closed
//! HalfOpen ──(probe fails)─────────────► Open
//! ```
//!
//! - **Closed** — normal operation; all requests pass through.
//! - **Open** — service is unhealthy; requests are rejected immediately
//!   with [`CircuitError::Open`], avoiding latency pile-up.
//! - **Half-Open** — after the recovery timeout elapses, one probe request
//!   is allowed through.  On success the circuit closes; on failure it
//!   reopens.
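//!
//! The transitions above amount to a small state machine.  A simplified,
//! illustrative sketch — not the actual implementation; the enum and field
//! names here are hypothetical:
//!
//! ```rust,ignore
//! use std::time::Instant;
//!
//! #[derive(Clone, Copy)]
//! enum State {
//!     Closed { consecutive_failures: u32 },
//!     Open { opened_at: Instant },
//!     HalfOpen,
//! }
//!
//! /// Record one failed call and return the next state.
//! fn on_failure(state: State, max_failures: u32) -> State {
//!     match state {
//!         // Enough consecutive failures trips the circuit.
//!         State::Closed { consecutive_failures } if consecutive_failures + 1 >= max_failures => {
//!             State::Open { opened_at: Instant::now() }
//!         }
//!         State::Closed { consecutive_failures } => {
//!             State::Closed { consecutive_failures: consecutive_failures + 1 }
//!         }
//!         // A failed probe sends Half-Open straight back to Open.
//!         State::HalfOpen => State::Open { opened_at: Instant::now() },
//!         // Already open: nothing to record.
//!         open => open,
//!     }
//! }
//! ```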
//!
//! # Policies
//!
//! The circuit-breaking logic is separated from the state machine via the
//! [`CircuitPolicy`] trait.  The built-in [`ConsecutiveFailures`] policy
//! opens after *N* consecutive failures and closes once enough probes
//! succeed.  Implement [`CircuitPolicy`] directly to build latency-based
//! triggers, manual switches, or any other strategy.
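//!
//! As an illustration, a manual kill switch might look roughly like this.
//! The trait methods shown are a sketch, not the real surface — see
//! [`CircuitPolicy`] for the actual required methods:
//!
//! ```rust,ignore
//! use std::sync::atomic::{AtomicBool, Ordering};
//! use tower::circuit_breaker::CircuitPolicy;
//!
//! /// A policy an operator can trip by hand; it ignores observed
//! /// successes and failures entirely.
//! struct ManualSwitch {
//!     tripped: AtomicBool,
//! }
//!
//! impl CircuitPolicy for ManualSwitch {
//!     fn record_success(&self) {}
//!     fn record_failure(&self) {}
//!     fn is_open(&self) -> bool {
//!         self.tripped.load(Ordering::Relaxed)
//!     }
//! }
//! ```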
//!
//! # Relationship to [`tower::retry::budget`]
//!
//! [`Budget`][budget] and circuit breakers are **complementary**, not
//! competing.
//!
//! - A **retry budget** governs *retry worthiness*: it limits how many
//!   retried requests can be issued relative to the originals, preventing
//!   retry amplification inside a single client.
//! - A **circuit breaker** governs *traffic admission*: once failure is
//!   systemic it stops **all** requests (including first attempts) from
//!   reaching the backend, giving it breathing room to recover.
//!
//! Using a circuit breaker without a budget still exposes you to retry
//! storms from clients above; using a budget without a circuit breaker
//! still allows traffic to pile up against a failing backend.  The two
//! compose naturally:
//!
//! ```rust,ignore
//! use std::time::Duration;
//! use tower::ServiceBuilder;
//! use tower::circuit_breaker::CircuitBreakerLayer;
//!
//! // Budget caps how many retries each client issues.
//! // Circuit breaker stops all traffic once failure is systemic.
//! let svc = ServiceBuilder::new()
//!     .layer(CircuitBreakerLayer::new(5, 0.8, Duration::from_secs(30)))
//!     .layer(tower::retry::RetryLayer::new(my_budget_policy))
//!     .service_fn(my_backend);
//! ```
//!
//! [budget]: crate::retry::budget
//!
//! # Quick start
//!
//! ```rust,ignore
//! use std::time::Duration;
//! use tower::ServiceBuilder;
//! use tower::circuit_breaker::CircuitBreakerLayer;
//!
//! let svc = ServiceBuilder::new()
//!     .layer(CircuitBreakerLayer::new(
//!         5,                        // open after 5 consecutive failures
//!         0.8,                      // close when 80% of probes succeed
//!         Duration::from_secs(30),  // wait 30s before sending a probe
//!     ))
//!     .service_fn(|req: String| async move {
//!         Ok::<String, std::io::Error>(req)
//!     });
//! ```
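//!
//! While the circuit is open, calls fail fast with [`CircuitError::Open`],
//! so callers can branch on it — for example to serve a fallback.  A
//! sketch, assuming the breaker's error type is surfaced directly by the
//! wrapped service:
//!
//! ```rust,ignore
//! use tower::{Service, ServiceExt};
//! use tower::circuit_breaker::CircuitError;
//!
//! match svc.ready().await?.call("hello".to_string()).await {
//!     Ok(rsp) => println!("backend said: {rsp}"),
//!     Err(CircuitError::Open) => {
//!         // Rejected locally without a network round-trip; serve a
//!         // cached response or fail fast to the caller.
//!     }
//!     Err(err) => return Err(err.into()),
//! }
//! ```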

mod future;
mod layer;
mod policy;
mod service;

pub use self::{
    future::ResponseFuture,
    layer::CircuitBreakerLayer,
    policy::{CircuitPolicy, ConsecutiveFailures},
    service::{CircuitBreaker, CircuitError, CircuitStatus},
};