
Autonomous transactions in PostgreSQL

Let's start by defining what autonomous transactions are in principle:
Autonomous transactions allow you to start new transactions that can commit or discard their changes independently of the parent transaction.

They are implemented in the Oracle DBMS and are a very flexible and convenient tool. The most popular (but far from the only) use of autonomous transactions is logging. Consider a simple example. Imagine the following situation: your database contains a long and complicated stored procedure, such as monthly billing, and you would like to record important events in a log table. Of course, you can just write to that table without any tricks, but there is one serious drawback: if an unhandled error occurs in the procedure, the transaction is rolled back together with all the entries in the log table. Here autonomous transactions come to the rescue. They execute independently of the parent transaction, so you can see entries in the log table even before the billing procedure being logged has finished.

Let's move on to the topic at hand. Despite the pretentious title, there are no autonomous transactions in PostgreSQL. But there is a way to get a similar result.

dblink


First, we need the dblink contrib module. It lets your functions connect to another PostgreSQL instance (or open a second connection to the same one). A detailed description is beyond the scope of this article. For my part, I will say that dblink is one of several useful contrib modules that I always install, regardless of the tasks the database solves.
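On modern PostgreSQL (9.1+), contrib modules are installed per database with CREATE EXTENSION. A quick round trip through dblink looks like this (a sketch; the connection string assumes local trust/peer authentication, so no password is supplied):

```sql
-- Install the dblink extension into the current database (PostgreSQL 9.1+)
CREATE EXTENSION IF NOT EXISTS dblink;

-- Open a throwaway connection back to the current database and run a query.
-- dblink() returns a record set, so we must describe its columns with AS.
SELECT *
FROM dblink('dbname=' || current_database(),
            'SELECT version()')
     AS t(pg_version text);
```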

Example


I will show the described technique with an example from the beginning of the topic. We implement logging on "autonomous" transactions.

Create a log table:
CREATE TABLE "public"."logs" (
  "log_id" BIGSERIAL,
  "source" TEXT NOT NULL,
  "level" TEXT NOT NULL,
  "message" TEXT NOT NULL,
  "time_added" TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,
  CONSTRAINT "logs_pkey" PRIMARY KEY ("log_id")
);
-- Note: the original used WITH OIDS, which is obsolete and was removed in PostgreSQL 12




Create a function for logging:
CREATE OR REPLACE FUNCTION "public"."log"(IN p_source text, IN p_level text, IN p_message text)
RETURNS void AS
$$
DECLARE
  insert_sql text;
BEGIN
  IF p_level NOT IN ('ERROR', 'WARNING', 'INFO', 'DEBUG') THEN
    RAISE EXCEPTION 'Log level should be one of the following types: ERROR, WARNING, INFO, DEBUG';
  END IF;

  -- quote_literal() escapes the values, so a quote character in the
  -- message cannot break (or inject into) the generated statement
  insert_sql := 'INSERT INTO public.logs (source, level, message) VALUES ('
             || quote_literal(p_source)  || ', '
             || quote_literal(p_level)   || ', '
             || quote_literal(p_message) || ')';

  -- dblink_exec runs the statement over a separate connection,
  -- i.e. in a separate transaction
  PERFORM dblink_exec('dbname=' || current_database(), insert_sql);
END;
$$ LANGUAGE plpgsql;




Now at any time we can call the "log" function, and the record will immediately appear in the log table regardless of the status of the transaction from which we called it (PERFORM is PL/pgSQL syntax; from plain SQL, use SELECT public.log(...) instead):
PERFORM "public"."log"('monthly billing', 'INFO', 'Starting monthly billing');



As you can see, the whole trick is that we write to the log table using the dblink_exec function. This means that a new connection, session and transaction is created, in the context of which recording takes place.
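To see that this really behaves like an autonomous transaction, we can write to the log inside a transaction that we then explicitly roll back; the log row survives (a sketch, assuming the table and function above are already in place):

```sql
BEGIN;
SELECT "public"."log"('monthly billing', 'INFO', 'Starting monthly billing');
ROLLBACK;

-- The parent transaction was rolled back, but the insert went through
-- a separate dblink connection, so the row is still there:
SELECT source, level, message FROM public.logs;
```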

Remarks


  1. Unfortunately, this trick cannot completely replace Oracle's autonomous transactions. The problem is performance: I would not advise scattering this technique everywhere, so think about where it is really needed.
  2. This method is applicable not only to PostgreSQL. Every DBMS I know of has functions similar to "dblink_exec".
  3. As correctly noted in the comments, using dblink_connect to create a persistent connection and reusing it afterwards will significantly speed things up.
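That last optimization can be sketched as follows (the connection name 'logger' is arbitrary, my own choice for illustration): dblink_connect opens the connection once, and subsequent dblink_exec calls reuse it by name instead of reconnecting on every log call.

```sql
-- Open a named, persistent connection once (e.g. at the start of a session)
SELECT dblink_connect('logger', 'dbname=' || current_database());

-- Later calls pass the connection name instead of a connection string;
-- inside public.log() the call would become:
--   PERFORM dblink_exec('logger', insert_sql);

-- Close the connection when the session is done with it
SELECT dblink_disconnect('logger');
```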

Source: https://habr.com/ru/post/99645/
