Async Rust with Tokio, futures, async functions, and concurrent task execution.
Async programming allows writing non-blocking code that can handle many concurrent operations with fewer threads. It is ideal for I/O-bound tasks like network requests, file reads, and database queries where the program spends most of its time waiting.
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
// A simple future that completes after one poll
struct SimpleFuture {
    done: bool,
}

impl Future for SimpleFuture {
    type Output = String;

    fn poll(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Self::Output> {
        if self.done {
            Poll::Ready(String::from("Done!"))
        } else {
            self.done = true;
            // Ask the executor to poll us again; without this wake-up,
            // poll() would never be called a second time and the
            // future would hang forever.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}
A Future does nothing until it is polled. The poll() method returns Poll::Pending when the operation is not yet complete and Poll::Ready(value) when it is. Before returning Poll::Pending, a future must arrange for the executor's waker to be invoked once progress is possible; otherwise the executor has no reason to poll it again. The async/await syntax hides this machinery.
The async/await syntax makes writing asynchronous code look like synchronous code. An async fn returns an impl Future, and .await suspends execution until the future completes.
use std::time::Duration;
use std::thread;
// Without async: verbose callback style
fn fetch_data_callback<F: Fn(String) + Send + 'static>(callback: F) {
    thread::spawn(move || {
        thread::sleep(Duration::from_secs(1));
        callback("Data from server".to_string());
    });
}
// With async: clean and readable
async fn fetch_data_async() -> String {
println!("Fetching data...");
"Data from server".to_string()
}
// Async functions return impl Future
async fn example() {
    println!("Start");
    let result = "completed".to_string();
    println!("Result: {}", result);
}
#[tokio::main]
async fn main() {
example().await;
println!("Async main completed");
}
Writing async fn foo() -> i32 is equivalent to fn foo() -> impl Future<Output = i32>. The compiler transforms the function body into a state machine that implements Future.
Tokio is the most popular async runtime for Rust, providing task spawning, timers, I/O utilities, and more. You need a runtime to execute futures; Rust's standard library does not include one.
// Using the #[tokio::main] macro
#[tokio::main]
async fn main() {
println!("Tokio runtime is running");
// Spawn an async task
let handle = tokio::spawn(async {
println!("Task running");
"Result".to_string()
});
// Wait for task completion
match handle.await {
Ok(result) => println!("Got: {}", result),
Err(_) => println!("Task panicked"),
}
}
use tokio::runtime::Runtime;
// Manual runtime creation (useful in non-async main)
fn main() {
let rt = Runtime::new().unwrap();
rt.block_on(async {
println!("Running in async context");
let result = async_operation().await;
println!("Result: {}", result);
});
}
async fn async_operation() -> i32 {
42
}
Add tokio = { version = "1", features = ["full"] } to your [dependencies]. The "full" feature enables all Tokio components including timers, I/O, and the multi-threaded runtime.
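If compile times or binary size matter, "full" can be swapped for only the features a program actually uses. For the examples in this section, a leaner selection would be roughly (feature names per Tokio's feature flags):

```toml
[dependencies]
# Leaner alternative to "full": multi-threaded runtime, #[tokio::main]
# macro support, timers, and sync primitives such as mpsc channels.
tokio = { version = "1", features = ["rt-multi-thread", "macros", "time", "sync"] }
```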
tokio::spawn runs a future concurrently on the Tokio runtime. Each spawned task is an independent unit of work that the runtime schedules cooperatively.
#[tokio::main]
async fn main() {
let mut handles = vec![];
for i in 0..5 {
let handle = tokio::spawn(async move {
println!("Task {} starting", i);
tokio::time::sleep(
tokio::time::Duration::from_millis(100 * i as u64)
).await;
println!("Task {} done", i);
i * 2
});
handles.push(handle);
}
// Wait for all tasks
for handle in handles {
match handle.await {
Ok(result) => println!("Result: {}", result),
Err(_) => println!("Task panicked"),
}
}
}
// Tracking concurrent tasks with AtomicUsize
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
#[tokio::main]
async fn main() {
let counter = Arc::new(AtomicUsize::new(0));
for _ in 0..10 {
let counter = counter.clone();
tokio::spawn(async move {
counter.fetch_add(1, Ordering::Relaxed);
tokio::time::sleep(
tokio::time::Duration::from_millis(100)
).await;
counter.fetch_sub(1, Ordering::Relaxed);
});
}
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
}
Tasks passed to tokio::spawn must own all their data (be 'static). Use move closures and Arc to share data across tasks, just like with threads.
Tokio provides async-aware channels for inter-task communication. Unlike std::sync::mpsc, these channels suspend the task (instead of blocking the thread) when waiting for messages.
#[tokio::main]
async fn main() {
let (tx, mut rx) = tokio::sync::mpsc::channel(100);
tokio::spawn(async move {
for i in 0..5 {
tx.send(i).await.unwrap();
tokio::time::sleep(
tokio::time::Duration::from_millis(100)
).await;
}
});
while let Some(value) = rx.recv().await {
println!("Received: {}", value);
}
}
// Async producer-consumer pattern
#[tokio::main]
async fn main() {
let (tx, mut rx) = tokio::sync::mpsc::channel(10);
// Producer task
tokio::spawn(async move {
for i in 0..10 {
match tx.send(format!("Message {}", i)).await {
Ok(_) => {},
Err(_) => break,
}
tokio::time::sleep(
tokio::time::Duration::from_millis(50)
).await;
}
});
// Consumer
let mut count = 0;
while let Some(msg) = rx.recv().await {
println!("{}", msg);
count += 1;
}
println!("Processed {} messages", count);
}
tokio::sync::mpsc::channel(n) creates a bounded channel with capacity n. When full, send().await suspends until space is available. This provides natural back-pressure to prevent producers from overwhelming consumers.
Tokio's select! races multiple futures and runs the branch of whichever completes first. join! waits for all futures to complete concurrently.
// select! races multiple futures
#[tokio::main]
async fn main() {
tokio::select! {
_ = tokio::time::sleep(
tokio::time::Duration::from_secs(1)
) => {
println!("Timeout reached");
}
result = async_operation() => {
println!("Operation completed: {}", result);
}
}
}
async fn async_operation() -> String {
"Done".to_string()
}
// join! waits for all futures concurrently
#[tokio::main]
async fn main() {
let result = tokio::join!(
operation1(),
operation2(),
operation3()
);
println!("Results: {:?}", result);
// (1, 2, 3)
}
async fn operation1() -> i32 { 1 }
async fn operation2() -> i32 { 2 }
async fn operation3() -> i32 { 3 }
Use join! when you need all results (e.g., fetching data from multiple APIs). Use select! when you want to act on whichever completes first (e.g., implementing timeouts, or cancelling a slow operation).
Error handling in async code follows the same Result and ? operator patterns as synchronous Rust. The ? operator works inside async fn just as you would expect.
async fn potentially_failing_operation() -> Result<String, String> {
if true {
Ok("Success".to_string())
} else {
Err("Failed".to_string())
}
}
#[tokio::main]
async fn main() {
match potentially_failing_operation().await {
Ok(value) => println!("Success: {}", value),
Err(e) => println!("Error: {}", e),
}
}
// Using the ? operator in async functions
async fn run_async_operations() -> Result<(), String> {
let result1 = operation1().await?;
let result2 = operation2().await?;
println!("Results: {}, {}", result1, result2);
Ok(())
}
async fn operation1() -> Result<i32, String> { Ok(42) }
async fn operation2() -> Result<i32, String> { Ok(43) }
#[tokio::main]
async fn main() {
match run_async_operations().await {
Ok(_) => println!("All operations succeeded"),
Err(e) => println!("Error: {}", e),
}
}
The ? operator in async functions works identically to synchronous code. If an awaited future returns an Err, the error is propagated immediately. Combine with anyhow or thiserror crates for ergonomic error handling.
The reqwest crate provides a popular async HTTP client that integrates seamlessly with Tokio. This example shows sequential and concurrent HTTP requests.
// Cargo.toml:
// [dependencies]
// reqwest = { version = "0.11", features = ["json"] }
// tokio = { version = "1", features = ["full"] }
#[tokio::main]
async fn main() {
let client = reqwest::Client::new();
match client
.get("https://api.github.com/repos/rust-lang/rust")
.send()
.await
{
Ok(response) => {
match response.text().await {
// {:.100} prints at most 100 characters and cannot
// panic mid-character, unlike byte slicing
Ok(text) => println!("Response: {:.100}", text),
Err(e) => println!("Failed to read body: {}", e),
}
}
Err(e) => println!("Request failed: {}", e),
}
}
// Concurrent HTTP requests
#[tokio::main]
async fn main() {
let client = reqwest::Client::new();
let urls = vec![
"https://example.com",
"https://example.com/page2",
"https://example.com/page3",
];
let mut handles = vec![];
for url in urls {
let client = client.clone();
let handle = tokio::spawn(async move {
match client.get(url).send().await {
Ok(resp) => println!("Status: {}", resp.status()),
Err(e) => println!("Error: {}", e),
}
});
handles.push(handle);
}
for handle in handles {
let _ = handle.await;
}
}
Async tasks are much cheaper than OS threads. You can spawn thousands of async tasks on a single thread pool. Use async for I/O-bound work (network, disk) and threads for CPU-bound work (computation, image processing).