Explore the Ceph software-defined storage platform to increase the fault tolerance of your systems.

You will gain a systematic understanding of the basic concepts and terms, and upon completing the course you will be able to install, configure, and manage Ceph.

Ceph

RELEASE DATE
November 1
PRESALE
$890
$1,100
Ceph is a software-defined distributed storage system with high fault tolerance. Its main advantages are:

— unified access to data through object, block, and file interfaces (example below);
— an open-source code base that reduces operating costs, speeds up Ceph's development, and is backed by a professional community;
— advanced backup and data-integrity algorithms and recovery tools that keep services and systems running in case of emergencies.
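
For a taste of the object interface, here is a minimal sketch using the librados Python bindings (python3-rados); the pool name "demo" and the config path are illustrative assumptions:

```python
import rados

# Connect to the cluster using the local ceph.conf (assumed default path).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("demo")            # open an I/O context on the "demo" pool
    ioctx.write_full("greeting", b"hello ceph")   # store an object
    print(ioctx.read("greeting"))                 # read it back: b'hello ceph'
    ioctx.close()
finally:
    cluster.shutdown()
```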

Course Curriculum

You will have access to all modules in your Slurm Account.
№1: What Ceph Is and Is Not
  • What Ceph Is
  • What Ceph can and cannot do
  • Alternatives to Ceph
№2: Ceph Architecture Overview
  • Ceph components
  • Ceph components' functionality
№3: How to Install Ceph
  • Manual configuration
  • Ceph-deploy, cephadm, Ansible (cephadm example below)
  • System requirements in a nutshell
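
As a taste of module №3's cephadm path, a hedged sketch of bootstrapping a first node; the monitor IP is a placeholder and cephadm is assumed to be installed already:

```python
import subprocess

MON_IP = "10.0.0.10"  # placeholder address of the first monitor host

# Bootstrap a minimal single-node cluster; this deploys a mon and a mgr.
subprocess.run(["cephadm", "bootstrap", "--mon-ip", MON_IP], check=True)

# Further hosts are then added through the orchestrator, e.g.:
#   ceph orch host add <hostname> <ip>
```
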
№4: Ways to Use Ceph
  • External interfaces: RBD, CephFS, RGW (S3)
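
The S3-compatible RGW interface from module №4 can be exercised with any S3 client; a minimal boto3 sketch, where the endpoint URL and credentials are placeholders for an RGW user created beforehand (for example with radosgw-admin):

```python
import boto3

# Placeholder endpoint and credentials for an existing RGW user.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="backups")                            # create a bucket in RGW
s3.put_object(Bucket="backups", Key="hello.txt", Body=b"hi")  # upload an object
print([b["Name"] for b in s3.list_buckets()["Buckets"]])      # list buckets
```
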
№5: Ceph Integration with Common Cloud Native Solutions
  • Kubernetes
  • Proxmox
  • OpenNebula
№6: Operating Ceph
  • Replace/add/remove OSD, MDS, RGW (example below)
  • Migrate a monitor to another server
  • Manage configurations. Where to find and how to change them
  • Logging
  • Server reboot. How to stop a cluster
  • Rebalancing
  • Backup
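
As a preview of module №6, a hedged sketch of replacing an OSD with the ceph CLI; the OSD id is a placeholder, and the cluster should return to HEALTH_OK before the final purge:

```python
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its text output."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

osd_id = "12"                # placeholder id of the OSD being replaced
ceph("osd", "out", osd_id)   # stop placing data on it; rebalancing begins
print(ceph("status"))        # watch recovery progress (or use `ceph -w`)

# Once the cluster is back to HEALTH_OK and the disk is physically replaced:
ceph("osd", "purge", osd_id, "--yes-i-really-mean-it")
```
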
№7: Monitoring Ceph
  • Collect metrics. What to pay attention to (example below)
  • Alerting
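
For module №7, the health and capacity data that dashboards and the Prometheus mgr module build on can be pulled as JSON straight from the CLI; a minimal sketch:

```python
import json
import subprocess

def ceph_json(*args):
    """Run a ceph CLI command with JSON output and parse it."""
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)

status = ceph_json("status")
print(status["health"]["status"])       # e.g. HEALTH_OK / HEALTH_WARN
print(status["pgmap"]["num_pgs"])       # placement-group count

df = ceph_json("df")
print(df["stats"]["total_used_bytes"])  # raw bytes used across the cluster
```
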
№8: How to Fix Problems with Ceph
  • Global flags/flapping OSD. How to stop rebalancing (example below)
  • Recovery/rebalance speed
  • Scrubbing
  • Data overflow in clusters
  • Slow ops
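
A hedged sketch of the global flags from module №8 that pause data movement during maintenance, so a flapping OSD does not trigger a rebalancing storm:

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Before maintenance: keep OSDs "in" and stop data movement.
for flag in ("noout", "norebalance", "nobackfill"):
    ceph("osd", "set", flag)

# ... restart the flapping OSD / perform the maintenance ...

# After maintenance: clear the flags so recovery can proceed normally.
for flag in ("nobackfill", "norebalance", "noout"):
    ceph("osd", "unset", flag)
```
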
№9: Ceph Performance. Simple Math
  • What Ceph cluster/storage performance is
  • What your cluster can and cannot do
  • How to estimate and calculate performance (example below)
  • What you need to do to reach a certain performance level
  • Hyper-converged systems
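
In the spirit of module №9's simple math, a back-of-the-envelope estimate; every number below is an illustrative assumption, not a measurement:

```python
osd_count = 12            # OSDs in the cluster (assumed)
iops_per_osd = 20_000     # sustained write IOPS of one SSD-backed OSD (assumed)
replication = 3           # pool replica count
write_amplification = 2   # rough allowance for WAL/metadata overhead (assumed)

# Each client write lands on `replication` OSDs and is amplified on each,
# so the aggregate device IOPS budget is divided by both factors.
client_write_iops = osd_count * iops_per_osd / (replication * write_amplification)
print(f"~{client_write_iops:,.0f} client write IOPS")   # ~40,000 in this example
```
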
№10: How to Choose Hardware for Your Cluster
  • General recommendations on choosing hardware for your cluster
№11: Ceph Pools and Storage Classes
  • Pool Placement
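
Pool placement in module №11 comes down to CRUSH rules; a hedged sketch that pins a new pool to SSD-class devices, with the rule name, pool name, and PG count as placeholders:

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Replicated CRUSH rule "fast": root "default", failure domain "host", device class "ssd".
ceph("osd", "crush", "rule", "create-replicated", "fast", "default", "host", "ssd")

# Create a 128-PG replicated pool that uses the rule, and tag it for RBD.
ceph("osd", "pool", "create", "rbd-ssd", "128", "128", "replicated", "fast")
ceph("osd", "pool", "application", "enable", "rbd-ssd", "rbd")
```
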
Meet Our Course Speakers
Vitaly Filippov, Expert Developer at CUSTIS
— Expert developer at CUSTIS, Linux enthusiast, Ceph specialist
— Writes code in React, Node.js, PHP, Go, Python, Perl, Java, and C++; designs infrastructure solutions
— Considerable expertise in Ceph performance
— DevOps engineer with 7 years of experience and a co-founder of the DevOps Engineers of Saint Petersburg community
Sergey Bondarev, Architect at Southbridge
— Engineer with 25 years of experience
— Certified Kubernetes Administrator
— Kubernetes implementations: all Southbridge Kubernetes projects, including our own infrastructure
— One of the kubespray developers with rights to accept pull requests
Alexander Rudenko, Lead engineer in the CROC Cloud development group
— Lead engineer in cloud development group at CROC
— Has been running Ceph successfully, 24/7, for over 6 years
— Beyond Ceph, has experience with distributed storage systems such as Dell EMC VxFlex (ScaleIO), IBM Spectrum Scale (GPFS), and Gluster
— Develops storage and virtualization subsystems at CROC

Who This Course Is For

Engineers already working with Ceph
You will get a better understanding of Ceph and fill your knowledge gaps. You can also ask specific questions about your infrastructure.
Infrastructure administrators
The new technology will expand your expertise. Ceph is gaining popularity, sparking demand for competent specialists.
Specialists who prioritize fault tolerance
Ceph's architecture facilitates data migration and updates and helps you build a fault-tolerant infrastructure.
Presale until November 1
$1,100
$890
About
Slurm Grew Out of In-House Training at Southbridge
Southbridge is a company that administers high-load systems. We developed these courses to train our own employees and then shared them with the community.
Limited Tech Stack
We use Kubernetes, but not Docker Swarm or Mesos. Likewise, we demonstrate CI processes with GitLab, but we do not use Jenkins or Bitbucket.
Practice: From Zero to Hero
Every one of our courses is result- and practice-oriented. We are very picky about what the market offers and teach only what we use in our own work.
Feel free to contact us if you have further questions.