It just so happened that I needed somewhere to store more than 1.5 TB of data, and to let ordinary users download it via direct links. Since such volumes traditionally call for a VDS, and the rental cost of one does not quite fit into the budget of a project from the "nothing better to do" category, all I had as a starting point was a VPS with a 400 GB SSD, onto which 1.5 TB of pictures could not be squeezed even with lossless compression.
And then I remembered that if you clear the junk out of a Google Drive, such as programs that only run under Windows XP and other things that had migrated from medium to medium since the days when the Internet was neither fast nor unlimited (for example, those 10-20 versions of VirtualBox hardly had any value other than nostalgic), everything should fit very nicely. No sooner said than done. And so, pushing through the limit on the number of API requests (by the way, technical support raised the per-user quota to 10,000 requests per 100 seconds without any problems), the data flowed off to the place of its further deployment.
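For context, here is roughly what that upload step can look like. This is a minimal sketch of my own, not the code actually used in the project: it pushes a single file to Google Drive through the v3 multipart upload endpoint, and TOKEN stands in for a valid OAuth2 access token with Drive access.

<?php

// Minimal sketch: upload one file to Google Drive (API v3, multipart).
// Assumes TOKEN is a valid OAuth2 access token. Reads the file into
// memory, which is fine for a sketch but not for huge files.
define('TOKEN', '*****');

function driveUpload(string $path): string
{
    $boundary = uniqid();
    $metadata = json_encode(['name' => basename($path)]);

    // multipart/related body: JSON metadata first, then the file contents.
    $body = "--$boundary\r\n"
          . "Content-Type: application/json; charset=UTF-8\r\n\r\n"
          . $metadata . "\r\n"
          . "--$boundary\r\n"
          . "Content-Type: application/octet-stream\r\n\r\n"
          . file_get_contents($path) . "\r\n"
          . "--$boundary--";

    $ch = curl_init('https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $body,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => [
            'Authorization: Bearer ' . TOKEN,
            'Content-Type: multipart/related; boundary=' . $boundary,
        ],
    ]);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);

    // The returned ID is what the download part below will need.
    return $response['id'];
}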
Everything seemed fine, but now the files had to be delivered to the end user. Moreover, without any redirects to other resources: the person should simply press the "Download" button and become the happy owner of the coveted file.
Then, by golly, I went all in. At first it was a script on AmPHP, but the load it created did not suit me (a sharp jump to 100% of a core's consumption at the start of a download). Then the curl wrapper for ReactPHP was pressed into service, which fit my wishes for CPU consumption but did not give the speed I wanted (it turned out that you can simply reduce the curl_multi_select call interval, but then we get the same kind of gluttony). I even tried to write a small service in Rust, and it ran quite briskly (it is surprising that it ran at all, given my level of knowledge), but I wanted more, and it was somehow difficult to customize. Besides, all these solutions buffered the response in strange ways, whereas I wanted to track the moment the file download finished with maximum accuracy.
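To illustrate the curl_multi_select tradeoff mentioned above, here is a bare-bones curl_multi loop (my own sketch, not the actual ReactPHP internals, and the URL is a placeholder). The second argument of curl_multi_select is the knob in question: a large timeout keeps the CPU calm between events, a tiny one pumps data more eagerly but spins the loop, which is exactly the gluttony described.

<?php

// Bare-bones curl_multi event loop around a single transfer.
$mh = curl_multi_init();
$ch = curl_init('https://example.com/big-file'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_multi_add_handle($mh, $ch);

do {
    curl_multi_exec($mh, $running);
    // The select timeout: 1.0 s is easy on the CPU but sluggish,
    // 0.001 s is responsive but eats a core.
    curl_multi_select($mh, 1.0);
} while ($running > 0);

curl_multi_remove_handle($mh, $ch);
curl_multi_close($mh);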
In general, for a while it was crooked, but it worked. Until one day a great idea dawned on me: nginx can, in theory, do exactly what I want, it runs briskly, and it allows all sorts of contortions with its configuration. Why not try it? And after half a day of persistent searching, a solution was born that has been running stably for several months and meets all my requirements.
# Downloading files from Google Drive.
location ~* ^/google_drive/(.+)$ {
    # Reachable only via an internal redirect (X-Accel-Redirect), not from outside.
    internal;
    # Limit the download speed per connection (optional).
    limit_rate 1m;
    # So that nginx can resolve the Google Drive host.
    resolver 8.8.8.8;
    # Compose the download URL (the file ID comes from the backend response, hence $upstream_http_file_id).
    set $download_url https://www.googleapis.com/drive/v3/files/$upstream_http_file_id?alt=media;
    # Compose the Content-Disposition header so the user gets the proper file name.
    set $content_disposition 'attachment; filename="$upstream_http_filename"';
    # Do not buffer the response into temporary files.
    proxy_max_temp_file_size 0;
    # The authorization token. It travels in the URI and is captured by the location regex as $1,
    # because for some reason passing it through an $upstream_http_* variable did not work here
    # (if someone knows a better way, please share).
    proxy_set_header Authorization 'Bearer $1';
    # Finally, proxy the request to Google.
    proxy_pass $download_url;
    # Attach our Content-Disposition header.
    add_header Content-Disposition $content_disposition;
    # Hide the headers that arrive from Google.
    proxy_hide_header Content-Disposition;
    proxy_hide_header Alt-Svc;
    proxy_hide_header Expires;
    proxy_hide_header Cache-Control;
    proxy_hide_header Vary;
    proxy_hide_header X-Goog-Hash;
    proxy_hide_header X-GUploader-UploadID;
}
The example will be in PHP and is deliberately written with a minimum of trappings. I think anyone with experience in any other language will be able to integrate this part using my example.
<?php

// Token for the Google Drive API.
define('TOKEN', '*****');

// The ID of the file on Google Drive.
$fileId = 'abcdefghijklmnopqrstuvwxyz1234567890';

// The body will be served by nginx, so why not 204 No Content?
http_response_code(204);

// Header with the file ID (read by nginx as $upstream_http_file_id).
header('File-Id: ' . $fileId);

// Header with the file name (read by nginx as $upstream_http_filename).
header('Filename: ' . 'test.zip');

// The internal redirect. The token is passed as part of the URI,
// from which nginx extracts it as $1.
header('X-Accel-Redirect: ' . rawurlencode('/google_drive/' . TOKEN));
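To make the flow a bit more concrete, a real handler would look the file up somewhere instead of hardcoding it. A minimal sketch of my own, where the $files array is a stand-in for whatever database actually maps public names to Google Drive IDs:

<?php

define('TOKEN', '*****');

// Hypothetical name-to-ID map; in reality this would be a database lookup.
$files = [
    'test.zip'  => 'abcdefghijklmnopqrstuvwxyz1234567890',
    'photo.jpg' => '0987654321zyxwvutsrqponmlkjihgfedcba',
];

$name = basename($_GET['file'] ?? '');

if (!isset($files[$name])) {
    http_response_code(404);
    exit;
}

// Same handoff as above: nginx reads the headers and streams the file.
http_response_code(204);
header('File-Id: ' . $files[$name]);
header('Filename: ' . $name);
header('X-Accel-Redirect: ' . rawurlencode('/google_drive/' . TOKEN));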
In general, this method makes it quite easy to organize the distribution of files to users from almost any cloud storage, even from Telegram or VK (provided that the file size does not exceed the limit of that storage). I had an idea along those lines, but unfortunately I deal with files of up to 2 GB, and I have not yet found a way or a module to glue together responses from several upstreams, and writing some wrappers just for this project is too costly.
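As an illustration of the same pattern pointed at another storage, here is a purely hypothetical sketch for a file kept by a Telegram bot. Nothing here is from my actual setup: BOT_TOKEN, the file ID, and the /telegram_file/ location are all assumptions, and the Bot API only lets a bot fetch fairly small files this way.

<?php

// Hypothetical: X-Accel-Redirect pointed at Telegram instead of Google Drive.
define('BOT_TOKEN', '*****');
$tgFileId = 'BQACAgIAAxkBAA...'; // a file_id the bot got from some message

// Ask the Bot API where the file lives on Telegram's servers.
$info = json_decode(file_get_contents(
    'https://api.telegram.org/bot' . BOT_TOKEN . '/getFile?file_id=' . urlencode($tgFileId)
), true);

// file_path is relative, e.g. "documents/file_42.zip". An internal
// location ~* ^/telegram_file/(.+)$ would then proxy_pass to
// https://api.telegram.org/file/bot<BOT_TOKEN>/$1.
http_response_code(204);
header('Filename: document.zip');
header('X-Accel-Redirect: ' . rawurlencode('/telegram_file/' . $info['result']['file_path']));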
Thank you for your attention. I hope my story was at least a little interesting or useful.
Source: https://habr.com/ru/post/460685/