Is it safe to run Ceph with 2-way replication on 3 OSD nodes?


I want to achieve maximum usable capacity and data resilience on a 3 OSD node setup, where each node contains 2x 1TB OSDs.

Is it safe to run 3 Ceph nodes with 2-way replication?

What are the pros and cons of using 2-way replication? Can it cause data split-brain?

Last but not least, what fault tolerance does the failure domain provide when running 2-way replication?

Thanks!

Sometimes even 3 replicas are not enough, e.g. if the SSD disks (from a cache tier) fail one by one.

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-october/005672.html

For 2 OSDs you can manually set 1 replica as the minimum and 2 replicas as the maximum (I didn't manage to have this set automatically in the case of 1 failed OSD out of 3 OSDs):

osd pool default size = 2     # write an object 2 times
osd pool default min size = 1 # allow writing 1 copy in a degraded state

But the command ceph osd pool set mypoolname min_size 1 sets this for a single pool, not the default settings.
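
As a minimal sketch, assuming a pool named mypoolname already exists (the defaults above only affect pools created after they are set in ceph.conf), the per-pool values can be set and then verified like this:

ceph osd pool set mypoolname size 2      # keep 2 copies of each object
ceph osd pool set mypoolname min_size 1  # still serve I/O with only 1 copy left
ceph osd pool get mypoolname size        # verify the current values
ceph osd pool get mypoolname min_size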

For n = 4 nodes, each with 1 OSD and 1 mon, and settings of replica min_size 1 and size 4, 3 OSDs can fail but only 1 mon can fail (the monitor quorum means more than half must survive). With 4 + 1 monitors, 2 failed monitors can be tolerated (at least 1 monitor should be external, without an OSD). With 8 monitors (four of them external), 3 mons can fail, so 3 nodes, each with 1 OSD and 1 mon, can fail. I'm not sure a setup with 8 monitors is possible.
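
To make the quorum arithmetic explicit: with n monitors, quorum requires a strict majority (floor(n/2) + 1), so the cluster tolerates n - (floor(n/2) + 1) failed monitors:

3 mons: quorum 2, tolerates 1 failure
4 mons: quorum 3, tolerates 1 failure
5 mons: quorum 3, tolerates 2 failures
8 mons: quorum 5, tolerates 3 failures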

Thus, for 3 nodes, each with 1 monitor and 1 OSD, reasonable settings are replica min_size 2 and size 3 or 2; then 1 node can fail. If you have external monitors and set min_size to 1 (this is dangerous) and size to 2 or 1, then 2 nodes can be down. But with 1 replica (no copy, only the original data) you can lose your job soon.
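
A minimal sketch of the safer 3-node configuration, again assuming a pool named mypoolname:

ceph osd pool set mypoolname size 3     # 3 copies, one per node with the default host failure domain
ceph osd pool set mypoolname min_size 2 # stop I/O if fewer than 2 copies are available
ceph osd dump | grep 'replicated size'  # check size/min_size for all pools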

