In this article I want to share my experience with the `asyncio` module, and I would venture to eliminate the "fatal flaw" of this module, namely the lack of an asynchronous HTTP server implementation.

To start, a simple asynchronous echo server:

```python
import asyncio
import logging
import concurrent.futures


@asyncio.coroutine
def handle_connection(reader, writer):
    peername = writer.get_extra_info('peername')
    logging.info('Accepted connection from {}'.format(peername))
    while True:
        try:
            data = yield from asyncio.wait_for(reader.readline(), timeout=10.0)
            if data:
                writer.write(data)
            else:
                logging.info('Connection from {} closed by peer'.format(peername))
                break
        except concurrent.futures.TimeoutError:
            logging.info('Connection from {} closed by timeout'.format(peername))
            break
    writer.close()


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    logging.basicConfig(level=logging.INFO)
    server_gen = asyncio.start_server(handle_connection, port=2007)
    server = loop.run_until_complete(server_gen)
    logging.info('Listening established on {0}'.format(server.sockets[0].getsockname()))
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass  # Press Ctrl+C to stop
    finally:
        server.close()
        loop.close()
```
Note these two lines:

```python
server_gen = asyncio.start_server(handle_connection, port=2007)
server = loop.run_until_complete(server_gen)
```

The first line by itself does not start anything: `asyncio.start_server()` is a coroutine that, when run in the event loop, creates and initializes the TCP server with the specified parameters. The second line is an example of such a call.

Another fragment worth a closer look is the read loop inside `handle_connection()`:

```python
try:
    data = yield from asyncio.wait_for(reader.readline(), timeout=10.0)
    if data:
        writer.write(data)
    else:
        logging.info('Connection from {} closed by peer'.format(peername))
        break
except concurrent.futures.TimeoutError:
    logging.info('Connection from {} closed by timeout'.format(peername))
    break
```
The `reader.readline()` call reads data from the input stream asynchronously, but the wait for data is not time-limited. If you need to abort it on a timeout, wrap the coroutine call in `asyncio.wait_for()`; once the interval specified in seconds has elapsed, a `concurrent.futures.TimeoutError` exception is raised, which can be handled as needed. Checking that `reader.readline()` returned a non-empty value is required in this example: otherwise, after the client drops the connection (connection reset by peer), the loop would keep reading empty values indefinitely.

The same echo server, reworked as a class:

```python
import asyncio
import logging
import concurrent.futures


class EchoServer(object):
    """Echo server class"""

    def __init__(self, host, port, loop=None):
        self._loop = loop or asyncio.get_event_loop()
        self._server = asyncio.start_server(self.handle_connection, host=host, port=port)

    def start(self, and_loop=True):
        self._server = self._loop.run_until_complete(self._server)
        logging.info('Listening established on {0}'.format(self._server.sockets[0].getsockname()))
        if and_loop:
            self._loop.run_forever()

    def stop(self, and_loop=True):
        self._server.close()
        if and_loop:
            self._loop.close()

    @asyncio.coroutine
    def handle_connection(self, reader, writer):
        peername = writer.get_extra_info('peername')
        logging.info('Accepted connection from {}'.format(peername))
        while not reader.at_eof():
            try:
                data = yield from asyncio.wait_for(reader.readline(), timeout=10.0)
                writer.write(data)
            except concurrent.futures.TimeoutError:
                break
        writer.close()


if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    server = EchoServer('127.0.0.1', 2007)
    try:
        server.start()
    except KeyboardInterrupt:
        pass  # Press Ctrl+C to stop
    finally:
        server.stop()
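For a quick check, here is a minimal client sketch that talks to the server above (assuming it is listening on 127.0.0.1:2007). It uses the same pre-3.5, generator-based coroutine syntax as the rest of the article; on Python 3.5+ you would normally write `async def` / `await`:

```python
import asyncio


@asyncio.coroutine
def echo_client(message):
    # Connect to the echo server started above (host and port are assumptions)
    reader, writer = yield from asyncio.open_connection('127.0.0.1', 2007)
    writer.write((message + '\n').encode())   # the server reads line by line
    reply = yield from reader.readline()      # wait for the echoed line
    print('Received:', reply.decode().rstrip())
    writer.close()


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(echo_client('hello'))
    loop.close()
```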
So, the `asyncio` module gives us this capability. However, unlike `tornado`, for example, it does not come with an HTTP server implementation. As they say, it would be a sin not to try to correct this omission :) The result is aqua, a WSGI server built on top of `asyncio`. Here is an example of using it together with `bottle`:

```python
import bottle
import os.path
from os import listdir
from bottle import route, template, static_file

root = os.path.abspath(os.path.dirname(__file__))


@route('/')
def index():
    tmpl = """<!DOCTYPE html>
    <html>
    <head><title>Bottle of Aqua</title></head>
    <body>
    <h3>List of files:</h3>
    <ul>
    % for item in files:
        <li><a href="/files/{{item}}">{{item}}</a></li>
    % end
    </ul>
    </body>
    </html>
    """
    files = [file_name for file_name in listdir(os.path.join(root, 'files'))
             if os.path.isfile(os.path.join(root, 'files', file_name))]
    return template(tmpl, files=files)


@route('/files/<filename>')
def server_static(filename):
    return static_file(filename, root=os.path.join(root, 'files'))


class AquaServer(bottle.ServerAdapter):
    """Bottle server adapter"""

    def run(self, handler):
        import asyncio
        import logging
        from aqua.wsgiserver import WSGIServer

        logging.basicConfig(level=logging.ERROR)
        loop = asyncio.get_event_loop()
        server = WSGIServer(handler, loop=loop)
        server.bind(self.host, self.port)
        try:
            loop.run_forever()
        except KeyboardInterrupt:
            pass  # Press Ctrl+C to stop
        finally:
            server.unbindAll()
            loop.close()


if __name__ == '__main__':
    bottle.run(server=AquaServer, port=5000)
```
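Aqua can also be driven without bottle. The sketch below reuses only the calls visible in the adapter above (`WSGIServer()`, `bind()`, `unbindAll()`) with a trivial WSGI callable of our own; the host and port are arbitrary:

```python
import asyncio
import logging
from aqua.wsgiserver import WSGIServer


def simple_app(environ, start_response):
    """A minimal WSGI application (our own, not part of aqua)."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from aqua\n']


if __name__ == '__main__':
    logging.basicConfig(level=logging.ERROR)
    loop = asyncio.get_event_loop()
    server = WSGIServer(simple_app, loop=loop)   # same call as in the adapter
    server.bind('127.0.0.1', 8080)               # assumed host/port
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass  # Press Ctrl+C to stop
    finally:
        server.unbindAll()
        loop.close()
```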
No particular problems arose while working with the `asyncio` module. The only subtlety is a browser behaviour (Chrome, for example): the browser resets the request if it sees that it is about to receive a large file. Apparently this is done in order to switch to a more optimized way of downloading large files, because the request is then repeated and the file is received normally. But the first, dropped request raises a `ConnectionResetError` exception if data has already been sent to it with `StreamWriter.write()`; this case has to be handled and the connection closed with `StreamWriter.close()`.
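How such a failure might be handled is sketched below. The `send_file()` helper is hypothetical (it is not part of aqua's API) and only illustrates catching the reset and closing the stream:

```python
import asyncio


# Hypothetical helper (not aqua's API): stream a file to the client,
# tolerating the browser dropping the request mid-transfer.
@asyncio.coroutine
def send_file(writer, path, chunk_size=64 * 1024):
    try:
        with open(path, 'rb') as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                writer.write(chunk)
                yield from writer.drain()  # may raise ConnectionResetError
    except ConnectionResetError:
        pass  # the client reset the connection; nothing more to send
    finally:
        writer.close()  # always release the transport
```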
Finally, a small benchmark. The test subjects were aqua in conjunction with `bottle`, the quite popular Waitress WSGI server (also in conjunction with `bottle`), and of course Tornado. The application was the simplest possible hello world. The tests were run with the following parameters: 100 and 1000 simultaneous connections, and three response sizes of 13 bytes, 13 kilobytes and 13 megabytes; the test duration was 10 seconds for the 13-byte and 13-kilobyte responses and 60 seconds for the 13-megabyte one. Below are the results:

| 100 concurrent users | 13 b (10 sec) | | 13 Kb (10 sec) | | 13 Mb (60 sec) | |
|---|---|---|---|---|---|---|
| | Avail. | Trans / sec | Avail. | Trans / sec | Avail. | Trans / sec |
| aqua + bottle | 100.0% | 835.24 | 100.0% | 804.49 | 99.9% | 26.28 |
| waitress + bottle | 100.0% | 707.24 | 100.0% | 642.03 | 100.0% | 8.67 |
| tornado | 100.0% | 2282.45 | 100.0% | 2071.27 | 100.0% | 15.78 |
| 1000 concurrent users | 13 b (10 sec) | | 13 Kb (10 sec) | | 13 Mb (60 sec) | |
|---|---|---|---|---|---|---|
| | Avail. | Trans / sec | Avail. | Trans / sec | Avail. | Trans / sec |
| aqua + bottle | 99.9% | 800.41 | 99.9% | 777.15 | 60.2% | 26.24 |
| waitress + bottle | 94.9% | 689.23 | 99.9% | 621.03 | 37.5% | 8.89 |
| tornado | 100.0% | 1239.88 | 100.0% | 978.73 | 55.7% | 14.51 |
The raw siege output for the 13 MB test with 1000 concurrent users (judging by the transaction rates, ports 5000, 5001 and 5002 correspond to aqua + bottle, waitress + bottle and tornado respectively):

```
$ siege -c 1000 -b -t 60S http://127.0.0.1:5000/
** SIEGE 2.70
** Preparing 1000 concurrent users for battle.

Transactions:                   1570 hits
Availability:                  60.18 %
Elapsed time:                  59.84 secs
Data transferred:           20410.00 MB
Response time:                  5.56 secs
Transaction rate:              26.24 trans/sec
Throughput:                   341.08 MB/sec
Concurrency:                  145.80
Successful transactions:        1570
Failed transactions:            1039
Longest transaction:           20.44
Shortest transaction:           0.00

$ siege -c 1000 -b -t 60S http://127.0.0.1:5001/
** SIEGE 2.70
** Preparing 1000 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                    526 hits
Availability:                  37.49 %
Elapsed time:                  59.20 secs
Data transferred:            6838.00 MB
Response time:                 16.05 secs
Transaction rate:               8.89 trans/sec
Throughput:                   115.51 MB/sec
Concurrency:                  142.58
Successful transactions:         526
Failed transactions:             877
Longest transaction:           42.43
Shortest transaction:            0.00

$ siege -c 1000 -b -t 60S http://127.0.0.1:5002/
** SIEGE 2.70
** Preparing 1000 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                    857 hits
Availability:                  55.65 %
Elapsed time:                  59.07 secs
Data transferred:           11141.00 MB
Response time:                 20.14 secs
Transaction rate:              14.51 trans/sec
Throughput:                   188.61 MB/sec
Concurrency:                  292.16
Successful transactions:         857
Failed transactions:             683
Longest transaction:            51.19
Shortest transaction:            3.26
```
The test results show that an asynchronous HTTP server based on `asyncio` has a right to exist. It is still too early to talk about using such servers in serious projects, but after further testing and running-in, and with the appearance of asynchronous `asyncio` drivers for databases and key-value stores, it may well become possible.

Source: https://habr.com/ru/post/217143/