
Some readers may find the title of this post strange. Indeed, who in their right mind would want to slow down query execution on the server?
My answer: to check how the client application's interface behaves during long-running queries. I ran into this task while implementing database structure import for our ER-designer.
In my humble opinion, during long queries the application interface should cover three aspects:
- Delight the user's eye with all kinds of statistics and animation.
- Keep the user from clicking around and doing something wrong.
- On the other hand, always provide a way to stop the long-running process.
Of course, nobody forbids sprinkling the client code with all sorts of sleep() calls, but there is something unholy about that. Instead, you can use the built-in pg_sleep() function, which takes the number of seconds to pause query execution:
SELECT pg_sleep(1.5);
Let's slow down a query that returns the list of tables in the database:
SELECT oid::regclass FROM pg_class WHERE relkind = 'r'
Example 1
You can place the pg_sleep() call in the FROM clause:
SELECT oid::regclass FROM pg_class, pg_sleep(10) WHERE relkind = 'r'
In this case, the function is executed only once, and the total execution time is about 10 seconds. Moreover, the entire result set is returned at once.
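To see that the pause really happens only once, you can compare now(), which is frozen at the start of the statement, with clock_timestamp(), which returns the actual wall-clock time. A minimal check of my own (the column aliases are made up):
SELECT oid::regclass,
       now()             AS statement_start, -- fixed at the start of the statement
       clock_timestamp() AS row_produced_at  -- real current time for each row
FROM pg_class, pg_sleep(10)
WHERE relkind = 'r'
Every row should show a gap of about 10 seconds between the two timestamps, confirming that the sleep runs once before any rows are produced.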
Example 2
You can place the pg_sleep() call in the SELECT list:
SELECT oid::regclass, pg_sleep(1) FROM pg_class WHERE relkind = 'r'
In this case, the function is executed once for each row of the result set, i.e. as many times as there are tables in your database. This approach has another advantage: using asynchronous query processing, one way or another, you can fetch the rows one by one and please the user with a slowly creeping progress bar; a sketch of the idea follows below.
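The same clock_timestamp() trick illustrates the per-row pause here as well; this is only a sketch, and the column alias is my own:
SELECT oid::regclass,
       clock_timestamp() AS row_produced_at, -- roughly one second later for each next row
       pg_sleep(1)
FROM pg_class
WHERE relkind = 'r'
Consecutive rows come out about a second apart, which is exactly the rhythm a client fetching rows one at a time can use to advance its progress bar.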
All the best to you!
UPD: Thanks to Szymon Guz for reminding me about this functionality.