In my setup, /dev/sda serves as the system disk, and /dev/sdb and /dev/sdc hold the object storage data. Various programs, modules, and frameworks that work with an S3-compatible storage can act as a client; I have successfully tested DragonDisk, CrossFTP and S3Browser.

The gateway will be reachable under the domain name s3.ceph.labspace.studiogrizzly.com.

Edit /etc/ceph/ceph.conf and add a definition for the RADOS Gateway:

[client.radosgw.gateway]
host = node01
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /tmp/radosgw.sock
log file = /var/log/ceph/radosgw.log
rgw dns name = s3.ceph.labspace.studiogrizzly.com
rgw print continue = false
Copy the updated ceph.conf to the other nodes:

scp /etc/ceph/ceph.conf node02:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf node03:/etc/ceph/ceph.conf
Install Apache, the FastCGI module and the gateway itself:

aptitude install apache2 libapache2-mod-fastcgi radosgw
a2enmod rewrite
a2enmod fastcgi
Create /etc/apache2/sites-available/rgw.conf:

FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock

<VirtualHost *:80>
    ServerName s3.ceph.labspace.studiogrizzly.com
    ServerAdmin tweet@studiogrizzly.com
    DocumentRoot /var/www

    RewriteEngine On
    RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

    <IfModule mod_fastcgi.c>
        <Directory /var/www>
            Options +ExecCGI
            AllowOverride All
            SetHandler fastcgi-script
            Order allow,deny
            Allow from all
            AuthBasicAuthoritative Off
        </Directory>
    </IfModule>

    AllowEncodedSlashes On
    ErrorLog /var/log/apache2/error.log
    CustomLog /var/log/apache2/access.log combined
    ServerSignature Off
</VirtualHost>
a2ensite rgw.conf
a2dissite default
Create /var/www/s3gw.fcgi:

#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway
chmod +x /var/www/s3gw.fcgi
Create the data directory for the gateway instance:

mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gateway
Create a keyring for the gateway and generate a key with the required capabilities:

ceph-authtool --create-keyring /etc/ceph/keyring.radosgw.gateway
chmod +r /etc/ceph/keyring.radosgw.gateway
ceph-authtool /etc/ceph/keyring.radosgw.gateway -n client.radosgw.gateway --gen-key
ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow r' /etc/ceph/keyring.radosgw.gateway
Register the new key with the cluster:

ceph -k /etc/ceph/ceph.keyring auth add client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway
service apache2 restart
/etc/init.d/radosgw restart
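Once both services are up, the gateway should answer on port 80. A quick sanity check, assuming the DNS name from ceph.conf already resolves to node01:

import urllib2
# anonymous GET to the gateway root; HTTP 200 means radosgw answers through Apache
print urllib2.urlopen('http://s3.ceph.labspace.studiogrizzly.com/').getcode()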
Now create a user. The command output will contain the access_key and secret_key for the new user.

radosgw-admin user create --uid=i --display-name="Igor" --email=tweet@studiogrizzly.com
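To verify the new credentials, you can connect with boto (the same library used for the upload script later in this article) and list the user's buckets. A minimal sketch, assuming the gateway is reachable at the domain configured above; the key values are placeholders for the ones printed by radosgw-admin:

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='insert_access_key',       # access_key from radosgw-admin output
    aws_secret_access_key='insert_secret_key',   # secret_key from radosgw-admin output
    host='s3.ceph.labspace.studiogrizzly.com',
    port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
print conn.get_all_buckets()   # a freshly created user has no buckets yet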
S3 clients address buckets as subdomains, so a wildcard DNS record under s3.ceph.labspace.studiogrizzly.com must point to the IP address of the host running RADOS Gateway. For example, for a bucket named mybackups the domain is mybackups.s3.ceph.labspace.studiogrizzly.com, and it should resolve to the node01 IP address, which is 192.168.2.31:

* IN CNAME node01.ceph.labspace.studiogrizzly.com.
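To check that the wildcard record is in place, resolve an arbitrary bucket subdomain; a quick sketch, assuming the zone above is already live:

import socket
# any bucket name should resolve through the wildcard CNAME to node01
print socket.gethostbyname('mybackups.s3.ceph.labspace.studiogrizzly.com')   # expect 192.168.2.31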
Stock Apache does not correctly handle the 100-continue HTTP response, which is why rgw print continue = false is set above; a patched build does. Ready packages can be taken here. To balance load across several RADOS Gateway instances, Varnish can be put in front of them:

backend radosgw1 {
  .host = "radosgw1";
  .port = "8080";
  .probe = {
    .url = "/";
    .interval = 2s;
    .timeout = 1s;
    .window = 5;
    .threshold = 3;
  }
}

backend radosgw2 {
  .host = "radosgw2";
  .port = "8080";
  .probe = {
    .url = "/";
    .interval = 2s;
    .timeout = 1s;
    .window = 5;
    .threshold = 3;
  }
}

director cephgw round-robin {
  { .backend = radosgw1; }
  { .backend = radosgw2; }
}
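With the director in front, clients can point at the Varnish host instead of an individual gateway. A sketch of the connection change, assuming Varnish listens on a host named cephgw on port 80 (both the hostname and port here are illustrative, not from the original setup):

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='insert_access_key',
    aws_secret_access_key='insert_secret_key',
    host='cephgw',   # hypothetical Varnish frontend; requests are round-robined to radosgw1/radosgw2
    port=80,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)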
For batch operations it is convenient to use python-boto. Here is an example Python script (mind the indentation) that uploads everything from a directory tree into a bucket. This method is convenient for processing a heap of files automatically. If you don't like Python, no problem: other popular languages have similar S3 libraries.

#!/usr/bin/env python
import fnmatch
import os, sys
import boto
import boto.s3.connection

access_key = 'insert_access_key'
secret_key = 'insert_secret_key'
pidfile = "/tmp/copytoceph.pid"


# Return True if a process with this PID is still alive.
def check_pid(pid):
    try:
        os.kill(pid, 0)
    except OSError:
        return False
    else:
        return True


# Refuse to start if a previous copy of the script is still running.
if os.path.isfile(pidfile):
    pid = long(open(pidfile, 'r').read())
    if check_pid(pid):
        print "%s already exists, doing nothing" % pidfile
        sys.exit()

pid = str(os.getpid())
file(pidfile, 'w').write(pid)

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='cephgw1',
    port=8080,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

mybucket = conn.get_bucket('test')
mylist = mybucket.list()
i = 0

# Walk the tree and upload every file that is not in the bucket yet.
for root, dirnames, filenames in os.walk('/var/storage/photoes', followlinks=True):
    for filename in fnmatch.filter(filenames, '*'):
        myfile = os.path.join(root, filename)
        key = mybucket.get_key(filename)
        i += 1
        if not key:
            key = mybucket.new_key(filename)
            key.set_contents_from_filename(myfile)
            key.set_canned_acl('public-read')
            print key
            print i

os.unlink(pidfile)
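For the reverse direction, a similar sketch pulls every key in the bucket back to local disk. The connection parameters mirror the upload script; the restore directory name is an arbitrary choice for this example:

#!/usr/bin/env python
import os
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='insert_access_key',
    aws_secret_access_key='insert_secret_key',
    host='cephgw1',
    port=8080,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

mybucket = conn.get_bucket('test')
if not os.path.isdir('restore'):
    os.makedirs('restore')

# download every object, naming local files after their keys
for key in mybucket.list():
    key.get_contents_to_filename(os.path.join('restore', key.name))
    print key.name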
Source: https://habr.com/ru/post/180415/