Ceph Days Seattle 2025

Bringing Ceph to Seattle

A full-day event dedicated to sharing Ceph's transformative power and fostering the vibrant Ceph community in Seattle!

The expert Ceph team, Ceph's customers and partners, and the wider Ceph community join forces to discuss the status of the Ceph project, recent improvements and the roadmap, and Ceph community news. The day ends with a networking reception.

Important Dates

  • CFP Opens: 2025-03-30
  • CFP Closes: 2025-04-25 (extended to 2025-05-02)
  • Speakers receive confirmation of acceptance: 2025-05-10
  • Schedule Announcement: 2025-05-10
  • Event Date: 2025-05-15

Apply to be a Presenter!

Register to Attend!


Schedule

What | Who | When
Welcome, Check-in, Coffee, Pastries | 12:00 - 13:00
Introduction to Ceph: The State of the Cephalopod in 2025
New to Ceph? Or a seasoned operator curious about the latest updates?
This talk is your fast track to understanding Ceph in 2025. We’ll cover
what Ceph is, how it works, and where the project is headed — from
new features and architectural changes to project governance and
ecosystem growth. Whether you're deploying your first cluster or
managing petabytes, this session will bring you up to speed.
Dan van der Ster (CLYSO) | 13:00 - 13:30
Choosing the Right Data Protection Strategies For Your Ceph Deployments
Choosing the right data protection strategy for Ceph deployments can be
complicated:

  • Usable-to-raw capacity ratio
  • Replication vs. erasure coding
  • EC profile values for k and m
  • Read and write performance
  • Recovery performance
  • Failure domains
  • Fault tolerance
  • Media saturation
  • min_alloc_size vs IU

Moreover, a given Ceph cluster often benefits from, or even requires, a
combination of strategies and media types, chosen per pool based on
use case. These decisions can be daunting.

Anthony D'Atri (IBM) | 13:30 - 14:00
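As a taste of the capacity math behind those tradeoffs, here is a small illustrative Python sketch (my own, not material from the talk; the function names are hypothetical) comparing usable-to-raw ratios:

```python
def replication_ratio(size: int) -> float:
    """Usable-to-raw capacity ratio for size-way replication.
    A size-3 pool tolerates 2 overlapping failures."""
    return 1.0 / size

def ec_ratio(k: int, m: int) -> float:
    """Usable-to-raw capacity ratio for a k+m erasure-coding profile.
    A k+m pool tolerates m overlapping failures."""
    return k / (k + m)

# 3x replication: ~33% of raw capacity is usable, survives 2 failures.
print(round(replication_ratio(3), 3))   # 0.333
# EC 4+2: ~67% usable and also survives 2 failures, at the cost of
# slower recovery and small-object overhead.
print(round(ec_ratio(4, 2), 3))         # 0.667
```

Note that EC 4+2 and 3x replication tolerate the same number of failures while yielding very different usable capacity, which is exactly why the choice is rarely obvious.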
Ceph Solution Design Tool
In this talk we'll be going over general cluster design recommendations
and how to employ those using the Ceph Solution Design utilities.
We'll also discuss some of the tradeoffs between EC and replication,
sizing the ratio of HDD to flash in hybrid configurations, and more.
Steven Umbehocker (OSNEXUS) | 14:00 - 14:30
Coffee / Tea break | 14:30 - 15:00
Ceph in Proxmox VE
Proxmox embraced Ceph early on, and that integration has become
especially relevant for teams migrating between virtualization
platforms. This talk provides a technical overview of the Ceph
implementation in Proxmox VE and Proxmox Backup Server, along with
ISS's field experiences.
Alex Gorbachev (ISS) | 15:00 - 15:30
Ceph Durability: How Safe Is My Data?
How many nines does your cluster really have? Whether you’re running
a small 50-disk setup or a hyperscale 5000 OSD deployment,
understanding Ceph’s actual data durability is key to making the right
design choices. Replication vs. erasure coding, failure domains,
recovery speeds: these all impact real-world reliability. In this talk, we
introduce a new Ceph durability calculator based on Monte Carlo
simulations to give updated, practical insights into how safe your
data really is with Ceph. Bring your cluster size and
settings — and walk away with numbers.
Dan van der Ster (CLYSO) | 15:30 - 16:00
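The Monte Carlo approach the abstract mentions can be sketched in a few lines of Python. The toy model below is my own simplification, not the calculator presented in the talk: it pessimistically counts data as lost whenever `size` disks are failed at the same time, ignoring CRUSH placement, and all names and defaults are illustrative.

```python
import random

def p_loss(n_disks=50, afr=0.02, size=3, repair_days=2, years=1.0,
           trials=500, seed=1):
    """Toy Monte Carlo durability estimate.

    Simulates day-by-day disk failures at an annualized failure rate
    `afr`, with each failed disk rebuilt after `repair_days`. A trial
    counts as data loss if `size` disks are ever down at once (a
    pessimistic stand-in for losing all replicas of some PG)."""
    rng = random.Random(seed)
    p_day = afr / 365.0              # per-disk, per-day failure probability
    days = int(years * 365)
    losses = 0
    for _ in range(trials):
        repairing = []               # rebuild days remaining per failed disk
        for _ in range(days):
            repairing = [d - 1 for d in repairing if d > 1]
            healthy = n_disks - len(repairing)
            new_fails = sum(rng.random() < p_day for _ in range(healthy))
            repairing.extend([repair_days] * new_fails)
            if len(repairing) >= size:
                losses += 1
                break
    return losses / trials

# 50 disks, 2% AFR, 3-way replication, 2-day rebuilds: the estimated
# annual loss probability should be tiny compared to size=1 (no redundancy).
print(p_loss())
```

Even this crude model makes the abstract's point: shrinking the repair window or raising the redundancy level moves the loss probability by orders of magnitude.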
Optimizing Ceph RGW for Specific Workloads Including AI
One of the big barriers to getting great performance and scalability in
object storage configurations has to do with data placement.
Inefficiently writing small objects to EC storage can kill performance
and cause unintentional wasted space due to padding. We'll talk
about Ceph RGW's support for embedded Lua and how we've
used it in QuantaStor to solve these issues. We'll then dive
into Lua examples you can customize to optimize your object
storage workloads.
Steven Umbehocker (OSNEXUS) | 16:00 - 16:30
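The small-object padding waste this abstract describes can be quantified with a toy model (my own simplification, written in Python rather than Lua, and not RGW's exact on-disk layout): split an object into k data chunks plus m coding chunks, then round each chunk up to the allocation unit.

```python
import math

def ec_padding_overhead(obj_bytes: int, k: int, m: int,
                        alloc: int = 4096) -> float:
    """Raw bytes consumed per logical byte for an object on a k+m EC
    profile, assuming each chunk is rounded up to `alloc` bytes
    (a stand-in for min_alloc_size). Simplified illustrative model."""
    chunk = math.ceil(obj_bytes / k)                  # payload per data chunk
    on_disk = math.ceil(chunk / alloc) * alloc        # padded to alloc unit
    return (k + m) * on_disk / obj_bytes

# A 4 KiB object on EC 4+2: each 1 KiB chunk still occupies a full
# 4 KiB allocation unit, so raw usage is 6x the logical size.
print(ec_padding_overhead(4096, k=4, m=2))            # 6.0
# A 4 MiB object amortizes the padding and approaches the ideal 1.5x.
print(ec_padding_overhead(4 * 1024 * 1024, k=4, m=2)) # 1.5
```

In this model a small object on EC 4+2 costs twice the raw space of 3x replication, which is why steering small objects to a replicated pool or storage class can pay off.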
NVMe over TCP and Block Performance
A seasoned IT professional with over 20 years of leadership experience
in technology solutions and consulting, Mike specializes in data center
modernization, cloud architectures, and disaster recovery strategies.
Currently serving as a Technical Product Manager for IBM Storage
Ceph, he focuses on NVMe over TCP and VMware vSphere integration
for block storage. His expertise spans decades of IT consulting, public
speaking, customer education, strategic planning, and high-value solutions
architecture.
Mike Burkhart | 16:30 - 17:00
Dinner & Drinks & Networking, oh my! | 17:30 - 19:00