
Amazon S3 (Simple Storage Service) is a powerful online file storage service provided by Amazon Web Services. Think of it as a remote disk on which you can store files in directories, then retrieve and delete them. Companies like Dropbox, Netflix, Pinterest, SlideShare, Tumblr and many others rely on it.
Although the service is excellent, its code is not open, so you have to trust Amazon with your data, and although they provide a free tier for a year, you still need to enter credit card information to create an account. Every software engineer should know S3, so I want my students to gain experience with it and use it in their web applications, but I do not want them to pay for it. Some students also work while traveling, which means a slow Internet connection and expensive traffic, or no Internet access at all.
That's why I started looking for open solutions that emulate the S3 API and can run on any machine. As usual, the world of open source did not disappoint and provided several solutions. Here are my favorites:
- The first thing I came across was Fake S3, written in Ruby and available as a gem. It takes only a few seconds to install, and the library is well maintained (a quick install-and-run example follows this list). It is a great tool to get started with, but it does not implement all of the S3 commands and is not suitable for production use.
- The second option is HPE Helion Eucalyptus, which provides a wide range of AWS emulation services (CloudFormation, CloudWatch, ELB, ...), including S3 support. It is a very complete solution (running only on CentOS), oriented toward the enterprise and, unfortunately, too heavy for personal use or for a small business.
- The last and preferred option is the Scality S3 Server. It is available as a Docker image, which makes it very easy to deploy and start using. The software is suitable for personal use: anyone can get it running in a few seconds without any complicated installation. But it is also suitable for the enterprise, because it is scalable and production-ready. The best of both worlds.
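For comparison, here is roughly what trying Fake S3 looks like; a minimal sketch, assuming the gem is published as 'fakes3' and you only need a local root directory and a port:

$ gem install fakes3
$ fakes3 -r /tmp/fakes3_root -p 4567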
Getting Started with Scality S3 Server
To demonstrate how easy it is to emulate AWS S3 with the Scality S3 Server, let's bring it up!
Requirements:
- Docker
- Ruby and RubyGems
Launch the Scality S3 Server Docker container:
$ docker run -d --name s3server -p 8000:8000 scality/s3server
Unable to find image 'scality/s3server:latest' locally
latest: Pulling from scality/s3server
357ea8c3d80b: Pull complete
52befadefd24: Pull complete
3c0732d5313c: Pull complete
ceb711c7e301: Pull complete
868b1d0e2aad: Pull complete
3a438db159a5: Pull complete
38d1470647f9: Pull complete
4d005fb96ed5: Pull complete
a385ffd009d5: Pull complete
Digest: sha256:4fe4e10cdb88da8d3c57e2f674114423ce4fbc57755dc4490d72bc23fe27409e
Status: Downloaded newer image for scality/s3server:latest
7c61434e5223d614a0739aaa61edf21763354592ba3cc5267946e9995902dc18
Make sure that the Docker container is working properly:
$ docker ps
CONTAINER ID        IMAGE               COMMAND      CREATED      STATUS      PORTS                    NAMES
ed54e677b1b3        scality/s3server    "npm start"  5 days ago   Up 5 days   0.0.0.0:8000->8000/tcp   s3server
Install the AWS SDK for Ruby v2 gem (documentation here):
$ gem install aws-sdk
Now let's create the file that we will upload to our bucket:
$ touch myfavoritefile
Using your favorite text editor, create a file containing your Ruby script; let's call it 's3_script.rb':
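The full script, assembled from the snippets explained in the next section, looks roughly like this:

#!/usr/bin/env ruby

require 'aws-sdk'

# Connect to the Scality S3 Server running in the Docker container
s3 = Aws::S3::Client.new(
  :access_key_id => 'accessKey1',
  :secret_access_key => 'verySecretKey1',
  :region => 'us-west-2',
  :endpoint => 'http://127.0.0.1:8000/',
  :force_path_style => true
)

# Create a bucket and upload our file into it
s3.create_bucket({bucket: "mybucket"})

File.open('myfavoritefile', 'rb') do |file|
  s3.put_object(bucket: 'mybucket', key: 'myfavoritefile', body: file)
end

# List the bucket contents
resp = s3.list_objects_v2(bucket: 'mybucket')
puts resp.contents.map(&:key)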
Run the script:
$ ruby s3_script.rb
myfavoritefile
Congratulations, you have created your first S3 bucket and uploaded a file into it!
Let's break down the code
Here we indicate that the script should be executed by Ruby and include the AWS SDK library:
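#!/usr/bin/env ruby

require 'aws-sdk'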
We initiate a connection to our S3 server running in the Docker container. Note that 'accessKey1' and 'verySecretKey1' are the default access key and secret access key defined by the Scality S3 Server:
s3 = Aws::S3::Client.new(
  :access_key_id => 'accessKey1',
  :secret_access_key => 'verySecretKey1',
  :region => 'us-west-2',
  :endpoint => 'http://127.0.0.1:8000/',
  :force_path_style => true
)
Create an S3 bucket named 'mybucket':
s3.create_bucket({bucket: "mybucket"})
Here we upload the previously created file 'myfavoritefile' into our bucket 'mybucket':
File.open('myfavoritefile', 'rb') do |file|
  s3.put_object(bucket: 'mybucket', key: 'myfavoritefile', body: file)
end
And finally, list the contents of the bucket 'mybucket' and print them to standard output:
resp = s3.list_objects_v2(bucket: 'mybucket')
puts resp.contents.map(&:key)
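If you want to go one step further, the same client can also retrieve and delete the object; this is a follow-up sketch, not part of the original script:

# Download the object back and print its contents
obj = s3.get_object(bucket: 'mybucket', key: 'myfavoritefile')
puts obj.body.read

# Remove the object, then the now-empty bucket
s3.delete_object(bucket: 'mybucket', key: 'myfavoritefile')
s3.delete_bucket(bucket: 'mybucket')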