Ceph (Pacific) Storage Setup with cephadm (Containerized) on Linux


cephadm was introduced with the Octopus release and brings up the cluster in a fully containerized architecture.

Ceph is the future of storage: an open-source, distributed, software-defined storage system that supports object, block, and file-level storage. In contrast to the limitations of traditional storage systems, Ceph is scalable and flexible, and its hardware independence also makes it cost-effective. In addition, thanks to its distributed design it starts automatic repair operations when a failure occurs, it scales up to exabyte levels, and its 3-in-1 protocol support has made Ceph popular.

Our topology, in short, is as follows: 1 node for the Ceph services (Monitor, Management, RADOS Gateway, admin client) + 3 OSD nodes, 4 nodes in total. Each OSD node has 3 separate disks for the storage cluster (e.g. /dev/sdb, /dev/sdc, /dev/sdd). For your own setup you can scale out with more OSD nodes and scale up with more disks per node.
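
As a quick sanity check before installing anything (a minimal sketch; the device names are the ones assumed above and may differ on your hardware), you can confirm on each OSD node that the extra disks are visible and completely empty, since cephadm will only consume clean, unpartitioned devices:

# Run on every OSD node: the data disks should show no FSTYPE and no MOUNTPOINT
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sdb /dev/sdc /dev/sdd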

First, let's download and install the required packages, including Docker. This must be run on all nodes.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update && sudo apt install chrony python3-pip python3 ca-certificates curl gnupg lsb-release lvm2 docker-ce docker-ce-cli containerd.io -y
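
Optionally, you can verify on every node that Docker was installed correctly and that its daemon is running before continuing (a small check, not part of the original steps):

sudo systemctl enable --now docker               # make sure the daemon starts on boot
docker --version                                 # client version
sudo docker info --format '{{.ServerVersion}}'   # confirms the daemon answers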
sudo vi /etc/chrony/chrony.conf
#pool ntp.ubuntu.com        iburst maxsources 4
#pool 0.ubuntu.pool.ntp.org iburst maxsources 1
#pool 1.ubuntu.pool.ntp.org iburst maxsources 1
#pool 2.ubuntu.pool.ntp.org iburst maxsources 2
server $NTP_SERVER iburst
sudo systemctl restart chrony.service
chronyc sources
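
To confirm that time synchronization against $NTP_SERVER is actually working (a quick check; accurate clocks matter for the Ceph monitors):

chronyc tracking      # "Leap status : Normal" means the clock is synchronized
timedatectl status    # "System clock synchronized: yes"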
echo "$USER ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/$USERsudo chmod 0440 /etc/sudoers.d/$USER
sudo vi /etc/hosts
10.10.10.144 ceph-yonetici
10.10.10.145 ceph-osdx
10.10.10.146 ceph-osdy
10.10.10.147 ceph-osdz

Run this on the admin node (ceph-yonetici).

ssh-keygen
ssh-copy-id $USER@ceph-yonetici
ssh-copy-id $USER@ceph-osdx
ssh-copy-id $USER@ceph-osdy
ssh-copy-id $USER@ceph-osdz
vi ~/.ssh/config

Replace the $USER parameter with your current user.

Host ceph-yonetici
   Hostname ceph-yonetici
   User $USER
Host ceph-osdx
   Hostname ceph-osdx
   User $USER
Host ceph-osdy
   Hostname ceph-osdy
   User $USER
Host ceph-osdz
   Hostname ceph-osdz
   User $USER
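
You can then confirm that passwordless SSH works from the admin node to every host (a quick loop over the hostnames defined above):

# Each command should print the remote hostname without asking for a password
for host in ceph-yonetici ceph-osdx ceph-osdy ceph-osdz; do
  ssh "$host" hostname
done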

Let's install cephadm.

curl --silent https://download.ceph.com/keys/release.asc | sudo apt-key add -
sudo apt-add-repository https://download.ceph.com/debian-pacific
sudo apt install -y cephadm

Replace the $USER parameter with your current user (a user with passwordless root privileges); --mon-ip must be the IP of ceph-yonetici.

sudo cephadm bootstrap --mon-ip 10.10.10.144 --initial-dashboard-user admin --initial-dashboard-password 1234 --ssh-user $USER --cluster-network 10.10.10.0/24

To understand the parameter values, you can check https://docs.ceph.com/en/latest/cephadm/install/.

You can inspect the services.

systemctl status ceph-* --no-pager

To run Ceph commands, you can use the "You can access the Ceph CLI with:" line given in the bootstrap output above. That command connects you directly to the admin container.

sudo /usr/sbin/cephadm shell --fsid d0232fa8-6d5b-11ec-8fba-23b487996ad5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

But to be able to run commands directly from the host as well, we can follow one of the methods below; a quick verification follows the alternatives.

alias sudo='sudo '
alias ceph='cephadm shell -- ceph'
alias radosgw-admin='cephadm shell -- radosgw-admin'
echo "alias sudo='sudo '" >> ~/.bashrc
echo "alias ceph='cephadm shell -- ceph'" >> ~/.bashrc
echo "alias radosgw-admin='cephadm shell -- radosgw-admin'" >> ~/.bashrc

Or

cephadm add-repo --release pacific
cephadm install ceph-common

Or

sudo apt install ceph-common
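
Whichever method you pick, you can quickly verify that Ceph commands now run directly from the host (a minimal check, assuming one of the options above was applied):

sudo ceph --version   # should print a Pacific (16.2.x) version string
sudo ceph -s          # same cluster status you would see inside the cephadm shell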

Let's also label the ceph-yonetici host as mon and mgr in addition to _admin, since those services were brought up on that host.

sudo ceph orch host label add ceph-yonetici mon  [or: cephadm shell -- ceph orch host label add ceph-yonetici mon]
sudo ceph orch host label add ceph-yonetici mgr  [or: cephadm shell -- ceph orch host label add ceph-yonetici mgr]

You can examine the cluster's status with the commands below.

sudo ceph -s
sudo ceph orch host ls
sudo ceph orch device ls
sudo ceph orch ps

Now we will add the other hosts (the OSDs) to the cluster, but first let's distribute the public key. Replace the $USER parameter with your current user.

ssh-copy-id -f -i /etc/ceph/ceph.pub $USER@ceph-osdx
ssh-copy-id -f -i /etc/ceph/ceph.pub $USER@ceph-osdy
ssh-copy-id -f -i /etc/ceph/ceph.pub $USER@ceph-osdz

Let's add the hosts.

sudo ceph orch host add ceph-osdx 10.10.10.145 osd
sudo ceph orch host add ceph-osdy 10.10.10.146 osd
sudo ceph orch host add ceph-osdz 10.10.10.147 osd

Now examine the cluster's status again.

sudo ceph -s
sudo ceph orch host ls
sudo ceph orch device ls
sudo ceph orch ps

Now let's add all the disks we listed on the OSD nodes with device ls as OSDs in one go.

sudo ceph orch apply osd --all-available-devices

Or we can add them by specifying them explicitly.

sudo ceph orch daemon add osd <host>:<device-path>
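
For example, with the hosts and disks assumed in the topology above (illustrative values; adjust to your own device paths):

sudo ceph orch daemon add osd ceph-osdx:/dev/sdb
sudo ceph orch daemon add osd ceph-osdx:/dev/sdc
sudo ceph orch daemon add osd ceph-osdx:/dev/sdd
# repeat for ceph-osdy and ceph-osdz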

Let's enable the RADOS Gateway for object storage use. The default rgw port is 80, and by default it brings up 2 daemons.

sudo ceph orch apply rgw ceph-yonetici  [or: sudo ceph orch apply rgw ceph-yonetici '--placement=label:rgw count-per-host:1' --port=7480]
sudo ceph orch host label add ceph-yonetici rgw
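
If you used the bracketed variant with --port=7480 (as the connection details further below assume), you can check that the gateway is up and answering:

sudo ceph orch ps | grep rgw       # the rgw daemon(s) should be in "running" state
curl -i http://10.10.10.144:7480   # an anonymous request returns an XML ListAllMyBuckets response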

If you additionally want to add monitor, manager, and radosgw daemons:

# sudo ceph orch apply mon --placement="ceph-yonetici,ceph-osdx,ceph-osdy"
# sudo ceph orch apply mgr --placement="ceph-yonetici,ceph-osdx,ceph-osdy"
# sudo ceph orch apply rgw --placement="ceph-yonetici,ceph-osdx,ceph-osdy"

If you want to remove a daemon or service:

# sudo ceph orch rm rgw.ceph-yonetici
# sudo ceph orch daemon rm rgw.ceph-osdx

You can create a user so that the Ceph Dashboard and the API can access radosgw, although one is already provided by default. Replace $ACCESS_KEY and $SECRET_KEY with your own values.

sudo ceph dashboard set-rgw-credentials
sudo radosgw-admin user create --uid="cephdash" --display-name="Ceph DashBoard2" --system
sudo /usr/sbin/cephadm shell --fsid d0232fa8-6d5b-11ec-8fba-23b487996ad5 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
echo $ACCESS_KEY > /tmp/access_key
echo $SECRET_KEY > /tmp/secret_key
sudo ceph dashboard set-rgw-api-access-key -i /tmp/access_key
sudo ceph dashboard set-rgw-api-secret-key -i /tmp/secret_key
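
The $ACCESS_KEY and $SECRET_KEY values above can be read from the user you just created (a sketch; the jq filter is optional and assumes jq is installed):

sudo radosgw-admin user info --uid=cephdash
# or pull out only the S3 keys:
sudo radosgw-admin user info --uid=cephdash | jq -r '.keys[0].access_key, .keys[0].secret_key'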

Let's create our pool.

sudo ceph osd crush rule ls
sudo ceph osd pool create default.rgw.buckets.data 32 32 replicated replicated_rule
sudo ceph osd pool application enable default.rgw.buckets.data rgw

I am creating a Swift user; if you prefer, you can create an S3 user instead.

sudo radosgw-admin --uid "ceph-demo" --display-name "Ceph Demo Setup User" --subuser=ceph-demo:swift --key-type swift --access full user create
sudo radosgw-admin --uid "ceph-demo" --max-buckets=0 user modify
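
The Swift password ($SECRET_KEY used below) is generated by radosgw; you can read it back from the user record:

sudo radosgw-admin user info --uid=ceph-demo
# the value you need is under "swift_keys" -> "secret_key" in the JSON output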

Details that can be used for the application connection:

ceph.username = ceph-demo:swift
ceph.password = $SECRET_KEY
ceph.endpoint = http://10.10.10.144:7480/swift/v1
ceph.auth.url = http://10.10.10.144:7480/auth/1.0
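
A quick way to try these credentials from the command line (a sketch that assumes the rgw listens on port 7480 and that $SECRET_KEY holds the swift secret_key read in the previous step):

# Swift v1.0 auth handshake directly against radosgw; a 204 response with
# X-Auth-Token and X-Storage-Url headers means the credentials work
curl -i http://10.10.10.144:7480/auth/1.0 \
     -H "X-Auth-User: ceph-demo:swift" \
     -H "X-Auth-Key: $SECRET_KEY"
# or, if python-swiftclient is installed:
swift -A http://10.10.10.144:7480/auth/1.0 -U ceph-demo:swift -K "$SECRET_KEY" stat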

Useful commands:

sudo ceph osd pool ls
sudo ceph osd pool ls detail
sudo ceph df detail
sudo ceph osd status
sudo ceph health detail
sudo ceph -s
sudo radosgw-admin user list
sudo radosgw-admin user info --uid=ceph-demo
Example output of sudo ceph -s:

  cluster:
    id:     d0232fa8-6d5b-11ec-8fba-23b487996ad5
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum ceph-yonetici,ceph-osdx,ceph-osdy,ceph-osdz (age 19h)
    mgr: ceph-yonetici.uwijtu(active, since 19h), standbys: ceph-osdx.znrgdv
    osd: 9 osds: 9 up (since 13h), 9 in (since 13h)
    rgw: 1 daemons active (1 hosts, 1 zones)

  data:
    pools:   6 pools, 137 pgs
    objects: 240 objects, 6.1 KiB
    usage:   892 MiB used, 89 GiB / 90 GiB avail
    pgs:     137 active+clean

ref:
https://docs.ceph.com/en/pacific/cephadm/install/#enable-ceph-cli
https://chowdera.com/2021/04/20210411052419702x.html
https://dev.to/akhal3d96/exploring-ceph-in-a-multi-node-setup-3c8h
https://achchusnulchikam2.medium.com/deploy-ceph-cluster-with-cephadm-on-centos-8-257b300e7b42

