The small joys of lynx and Linux: downloading files and fixing them

Some time ago we had to move a client's site from third-party hosting to our own server. The site is simple, so the task looked trivial; it then turned out there was not even a database - everything was stored in XML files. The site was about 200 MB in size. Everything seemed straightforward, but, as always, unforeseen problems came up.

The colleague who was transferring the site decided to download everything over FTP, but after a while he complained that the transfer was going badly - the connection kept dropping - and, given that the Internet in our area is slow (I have 600 kbit/s, he has 800 kbit/s), it was very tedious. So he asked me for help.

I decided to drop FTP right away. The third-party hosting had no SSH (apparently a limitation of the cheap plan). It did, however, have an admin panel where nice daily dumps were available, packed as tar.gz archives.
I decided it would be easier to download from there, since I have more experience with Linux and have worked in lynx before. Having logged into our server over SSH, for a few minutes I savored once again the incredible rendering speed of a text browser. :) I logged into the panel quickly enough and downloaded the file at 800 kb/s.
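
For reference, a rough sketch of the sequence - the hostnames here are placeholders, not the real ones:

ssh user@our-server.example
lynx https://panel.hosting.example/

From there it is ordinary lynx navigation: arrow keys to move between links, log in through the panel's form, then follow the link to the daily tar.gz and save it.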

Then I handed the reins back to my colleague, suggesting that in the future it would be worth using lynx. :)

However, a couple of minutes later a message came over ICQ that the file was broken - it would not unpack. That turned out to be true. Opening it with vim archive.tar.gz, I saw a lovely picture: the file started with an HTML insert

X URL: xxxxx
Date: Thu, 28 Mar 2010 15:24:40 GMT
Last-Modified: Wed, 17 Mar 2010 21:00:00 +0000
BASE HREF="xxxxxx"
META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=windows-1251"

So lynx had decided to be helpful and stuck a spurious header into the file. :(

I decided to remove it in a hex editor. I installed heme, but all it let me do was look at the junk and count the number of spurious bytes (414); editing them out there didn't work.
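
If counting bytes by hand in a hex editor is a chore, the offset of the real archive can also be found by searching for the gzip magic bytes (1f 8b). A possible one-liner with GNU grep and bash's $'...' quoting, using the same broken archive file as below:

grep -abo $'\x1f\x8b' broken_archive.bin | head -n 1

Here -a treats the binary file as text, -b prints the byte offset of each match and -o prints only the matching part; the first reported offset (which in this case should be 414) is the number of junk bytes to strip.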

I thought about using the sed editor, but it is still not meant for binary data. In the end, dd helped:
dd if=broken_archive.bin of=archive_new.tar.gz ibs=414 skip=1 && tar -xzvf archive_new.tar.gz

Here if is the input file, of is the output file, ibs is the size of the blocks dd reads with, and skip is the number of such blocks to skip - so ibs=414 skip=1 drops exactly the 414 junk bytes.
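
The same cut can be made in a couple of other ways; two equivalent variants, assuming the junk really is exactly 414 bytes:

tail -c +415 broken_archive.bin > archive_new.tar.gz
dd if=broken_archive.bin of=archive_new.tar.gz bs=1 skip=414

tail -c +N outputs starting from byte N (counting from 1, hence 415), and dd with bs=1 skips byte by byte - slower on a large file, but arguably more obvious to read.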

The excess was cut out with surgical precision. :) The whole thing took a little over 20 minutes, which, given my connection speed, was time well spent. Plus a bit of new experience that might save someone else some time.

PS Maybe someone can tell me how to cure lynx of this foolishness?
PPS And by the way, how could one log in and download a file over HTTP from the terminal of a Windows server?
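
One guess on both counts, not verified on that particular hosting: if the archive URL is reachable directly (perhaps after logging in once through the panel), fetching the raw bytes instead of a rendered page should avoid any prepended header. The URL below is a placeholder:

lynx -source http://panel.hosting.example/backups/daily.tar.gz > daily.tar.gz
wget http://panel.hosting.example/backups/daily.tar.gz

-source tells lynx to write the document to stdout exactly as received, and wget does the same kind of plain HTTP download from any terminal where it is installed.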

Source: https://habr.com/ru/post/90046/

