I don't think I'll be far wrong if I say that most readers of this article have a folder on their computer where code snippets are stored that later end up in production projects. Small pieces of algorithms used to test whether an idea can be implemented at all. I call them "nishtyachki" (little goodies).
The longer a programmer works on his tasks, the more this folder swells. Mine has already grown past seven hundred different demos.
The problem is that in 99 percent of cases all these goodies are written "into the drawer": only the owner of the folder knows these developments exist, and yet it sometimes holds whole bins of ideas, implementation approaches, algorithmic tricks, and thoughts cut short mid-flight that it would be no sin to share (what if someone picks up an approach and develops it further?).
In this article, I will share three developments that came out of just such a "folder of goodies" and have been used in our production projects for many years now.
There will be a little assembler, but don't be scared: it is only there as an informational aside.
Let's start with caching
It is unlikely I will reveal a secret by saying that reading a file byte by byte is bad.
Well, that is, it works and produces no errors, but the slowdowns... The disk heads are already red-hot trying to serve everyone's data, and here we are, reading a single byte from a file.
And why do we even read exactly one byte?
If we abstract a little from the load on the file system and imagine that the file we are reading looks like "a byte containing the data block size + the data block, then again a byte with the block size + the block", then everything is perfectly logical. We execute the only sensible logic: read the size prefix, then the data block itself, and repeat until we reach the end of the file.
Convenient? No question about it: of course it is.
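To make the setup concrete, here is a sketch of such a naive reading loop (illustrative only; the procedure name and the processing step are made up for this example):

```pascal
// Naive reading of a "size prefix + data block" file:
// every ReadBuffer call is a separate tiny read from the stream.
procedure ReadBlocks(AStream: TStream);
var
  BlockSize: Byte;
  Block: array of Byte;
begin
  while AStream.Position < AStream.Size do
  begin
    AStream.ReadBuffer(BlockSize, SizeOf(BlockSize)); // one-byte read
    SetLength(Block, BlockSize);
    if BlockSize > 0 then
      AStream.ReadBuffer(Block[0], BlockSize);        // block read
    // ... process Block here ...
  end;
end;
```

With TFileStream underneath, every one of those tiny ReadBuffer calls turns into a call into the OS, which is exactly where the slowdowns come from.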
And here is what we actually have to do to get rid of the slowdowns when reading:
- read a large chunk of data into a temporary buffer at once;
- perform the actual reads from that temporary buffer;
- and when the temporary buffer runs out of data, read from the file again, keeping track of offsets and other related bookkeeping.
And this manual-caching leapfrog repeats in a whole heap of places in the project where file work is required.
Inconvenient? Certainly, and we would like the same simplicity as in the first variant.
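The steps above, written out by hand, turn every simple read into something like this (a rough sketch; all names are illustrative):

```pascal
// Manual caching: read one big chunk, then serve small reads from it.
procedure ReadWithManualCache(AStream: TStream);
const
  CacheSize = $4000; // one 16 KB disk read serves many small reads
var
  Cache: array [0..CacheSize - 1] of Byte;
  CacheLen, CachePos: Integer;
  Value: Byte;

  function CachedReadByte(out B: Byte): Boolean;
  begin
    if CachePos >= CacheLen then
    begin
      // Cache exhausted: refill it from the file, reset the local offset.
      CacheLen := AStream.Read(Cache[0], CacheSize);
      CachePos := 0;
    end;
    Result := CachePos < CacheLen;
    if Result then
    begin
      B := Cache[CachePos];
      Inc(CachePos);
    end;
  end;

begin
  CacheLen := 0;
  CachePos := 0;
  while CachedReadByte(Value) do
  begin
    // ... process Value; any Seek now needs manual offset bookkeeping ...
  end;
end;
```

Workable, but this bookkeeping has to be repeated at every call site, which is precisely the inconvenience discussed next.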
Having understood the essence of the problem, our team came up with the following idea: since all work with data goes through TStream descendants (TFileStream, TWinHTTPStream, TWinFTPStream), why not write a caching proxy over the stream itself? And we would not be the first: take as a model, for example, TStreamAdapter from System.Classes, which serves as a layer between IStream and an abstract TStream.
A convenient thing, by the way: I recommend it.
Our proxy is implemented as a banal TStream descendant, so it can freely control work with the data of any other descendant of this class.
In general, such proxy streams are quite common. TStreamAdapter aside, you are probably aware of classes such as TZCompressionStream and TZDecompressionStream from the ZLib unit, which provide a very convenient way to compress and decompress data stored in any arbitrary TStream descendant. I have indulged in this myself, implementing a fairly convenient proxy in the form of the TFWZipItemStream class, which passes all data through itself, editing it on the fly and computing the checksum of everything that flows through it.
So, drawing on that accumulated experience, the TBufferedStream class was born, and as a hint on how to use it, a comment was immediately attached to the class declaration: "// buffered reading from stream. ReadOnly!!!"
But before we dig into the code of this class, let's write a small console application that measures how different TStream descendants affect execution speed.
As the payload, let's do the following: compute the offsets of the resource section of each library located in the system directory (GetSystemDirectory) and measure the time taken using TBufferedStream, then TFileStream, and finally TMemoryStream.
This test order was chosen to level out the influence of the file system cache: TBufferedStream will work with uncached data, while the next two tests will (should) run noticeably faster thanks to repeated access to data already in the file system cache.
Who do you think will win?
So:
First, we need a function that will build the list of files to work on:
function GetSystemRootFiles: TStringList;
var
  Path: string;
  SR: TSearchRec;
begin
  Result := TStringList.Create;
  SetLength(Path, MAX_PATH);
  GetSystemDirectory(@Path[1], MAX_PATH);
  Path := IncludeTrailingPathDelimiter(PChar(Path));
  if FindFirst(Path + '*.dll', faAnyFile, SR) = 0 then
  try
    repeat
      if SR.FindData.nFileSizeLow > 1024 * 1024 * 2 then
        Result.Add(Path + SR.Name);
    until FindNext(SR) <> 0;
  finally
    FindClose(SR);
  end;
end;
It creates a TStringList instance and fills it with the paths of libraries larger than two megabytes (enough for a demo).
The next function is the common harness that runs each test and measures its time; it is also simple in essence:

function MakeTest(AData: TStringList; StreamType: TStreamClass): DWORD;
var
  TotalTime: DWORD;
  I: Integer;
  AStream: TStream;
begin
  Writeln(StreamType.ClassName, ': ');
  Writeln('===========================================');
  AStream := nil;
  TotalTime := GetTickCount;
  try
    for I := 0 to AData.Count - 1 do
    begin
      if StreamType = TBufferedStream then
        AStream := TBufferedStream.Create(AData[I],
          fmOpenRead or fmShareDenyWrite, $4000);
      if StreamType = TFileStream then
        AStream := TFileStream.Create(AData[I],
          fmOpenRead or fmShareDenyWrite);
      if StreamType = TMemoryStream then
      begin
        AStream := TMemoryStream.Create;
        TMemoryStream(AStream).LoadFromFile(AData[I]);
      end;
      Write('File: "', AData[I], '" CRC = ');
      CalcResOffset(AStream);
    end;
  finally
    Result := GetTickCount - TotalTime;
  end;
end;
The payload functionality itself lives in the common_payload.pas unit as the CalcResOffset procedure:

procedure CalcResOffset(AData: TStream; ReleaseStream: Boolean);
var
  IDH: TImageDosHeader;
  NT: TImageNtHeaders;
  Section: TImageSectionHeader;
  I, A, CRC, Size: Integer;
  Buff: array [0..65] of Byte;
begin
  try
I was too lazy to invent something complex that would clearly demonstrate the need to read a file in chunks, so I settled on working with the sections of a PE file.
The task of this procedure is to find the address of the resource section (.rsrc) of the file passed to it (as a stream) and simply sum all the bytes located in that section.
It immediately shows the two things we need for the job: reading a buffer of data (the DOS header and the PE header), after which we seek to the resource section, from which data is read in 64-byte chunks and added to the running total.
P.S.: yes, I know the section's data is not processed in full, since reading goes in blocks and the final tail that is not a multiple of 64 bytes is not counted, but this is just an example.
We run this whole affair with the following code:

var
  S: TStringList;
  A, B, C: DWORD;
begin
  try
    S := GetSystemRootFiles;
    try
We look at the result (the picture already includes the results for TBufferedStream):
TFileStream, as expected, lagged far behind, but TMemoryStream showed a result very close to that of TBufferedStream, which we have not yet examined.
No wonder: it achieved this with a large memory overhead, since it had to load each library entirely into the application's memory (the drawdown), but it caught up in speed for the very same reason: no need to repeatedly read data from the disk.
And now TBufferedStream itself:

TBufferedStream = class(TStream)
private
  FStream: TStream;
  FOwnership: TStreamOwnership;
  FPosition: Int64;
  FBuff: array of byte;
  FBuffStartPosition: Int64;
  FBuffSize: Integer;
  function GetBuffer_EndPosition: Int64;
  procedure SetBufferSize(Value: Integer);
protected
  property Buffer_StartPosition: Int64 read FBuffStartPosition;
  property Buffer_EndPosition: Int64 read GetBuffer_EndPosition;
  function Buffer_Read(var Buffer; Size: LongInt): Longint;
  function Buffer_Update: Boolean;
  function Buffer_Contains(APosition: Int64): Boolean;
public
  constructor Create(AStream: TStream;
    AOwnership: TStreamOwnership = soReference); overload;
  constructor Create(const AFileName: string; Mode: Word;
    ABuffSize: Integer = 1024 * 1024); overload;
  destructor Destroy; override;
  function Read(var Buffer; Count: Longint): Longint; override;
  function Write(const Buffer; Count: Longint): Longint; override;
  function Seek(const Offset: Int64; Origin: TSeekOrigin): Int64; override;
  property BufferSize: Integer read FBuffSize write SetBufferSize;
  procedure InvalidateBuffer;
end;
There is nothing out of the ordinary in the public section: the same overridden Read / Write / Seek as in any other proxy stream.
The whole trick starts with this function:
function TBufferedStream.Read(var Buffer; Count: Longint): Longint;
var
  Readed: Integer;
begin
  Result := 0;
  while Result < Count do
  begin
    Readed := Buffer_Read(PAnsiChar(@Buffer)[Result], Count - Result);
    Inc(Result, Readed);
    if Readed = 0 then
      if not Buffer_Update then
        Exit;
  end;
end;
As you can see from the code, we try to read the data by calling Buffer_Read, which serves it from the already prepared cache; if nothing could be read, we try to reinitialize the cache by calling Buffer_Update.
Reinitializing the cache looks like this:
function TBufferedStream.Buffer_Update: Boolean;
begin
  FStream.Position := FPosition;
  FBuffStartPosition := FPosition;
  SetLength(FBuff, FBuffSize);
  SetLength(FBuff, FStream.Read(FBuff[0], FBuffSize));
  Result := Length(FBuff) > 0;
end;
That is, we allocate memory for a cache of the size specified in the class's BufferSize property, then try to fill it from the stream we control.
If the read succeeded, we trim the cache to the amount actually read (if a megabyte was requested but only 15 bytes were available, we free the excess memory: why keep more than needed?).
The read operation from the cache is also simple:
function TBufferedStream.Buffer_Read(var Buffer; Size: LongInt): Longint;
begin
  Result := 0;
  if not Buffer_Contains(FPosition) then Exit;
  Result := Buffer_EndPosition - FPosition + 1;
  if Result > Size then
    Result := Size;
  Move(FBuff[Integer(FPosition - Buffer_StartPosition)], Buffer, Result);
  Inc(FPosition, Result);
end;
We simply check the current stream position and make sure the cache really holds the data for that offset, after which a banal Move transfers the data to the external buffer.
The remaining methods of the class are quite trivial, so I will not go through them; they can be found in the demos in the archive for this article: ".\src\bufferedstream\"
What is the result?
- TBufferedStream has a much smaller (at times) read-speed overhead than TFileStream thanks to its built-in cache: the number of disk reads (each of which is a fairly heavy operation in itself) drops dramatically.
- For the same reason its speed overhead is much smaller than TMemoryStream's, since only the data actually needed is read into the cache, not the entire file.
- Its memory overhead is much lower than TMemoryStream's, for obvious reasons. TFileStream, of course, wins on memory, but then again, the speed...
- The class provides an easy-to-use layer that frees you from thinking about the lifetime of the stream it controls while retaining all the functionality needed for work.
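For completeness, a usage sketch based on the constructors declared above (the file name is made up; $4000 is the cache size in bytes):

```pascal
var
  Stream: TStream;
  Prefix: Byte;
begin
  // The second constructor opens the file itself and owns it.
  Stream := TBufferedStream.Create('data.bin',
    fmOpenRead or fmShareDenyWrite, $4000);
  try
    // Byte-sized reads stay as simple as with TFileStream,
    // but most of them are now served from the 16 KB cache.
    Stream.ReadBuffer(Prefix, SizeOf(Prefix));
  finally
    Stream.Free;
  end;
end;
```

The first constructor overload wraps an already existing stream instead, and with soOwned it frees the wrapped stream together with the proxy.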
Liked?
Then we go to the second part.
TOnMemoryStream
But now imagine that the data we want to read is already located in our application's memory. To keep things simple, we will again use the same libraries discussed in the first part of the article. To do the same work as in the CalcResOffset function, we need to somehow hand the library's data to some TStream descendant (for example, the same TMemoryStream).
And what will we do in this case?
In 99 percent of cases: create a TMemoryStream and call Write (WriteBuffer).
Is that normal? We are, in effect, simply copying data we already have, and we do it for one reason only: to be able to work with the data through the familiar TStream.
To remove this extra memory overhead, the following simple class was developed:
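That copying variant, sketched out (ExistingData and DataSize stand in for the buffer you already hold):

```pascal
var
  M: TMemoryStream;
begin
  M := TMemoryStream.Create;
  try
    // Duplicates DataSize bytes that already sit in our address space,
    // purely to obtain a TStream interface over them.
    M.WriteBuffer(ExistingData^, DataSize);
    M.Position := 0;
    CalcResOffset(M, False); // work with the copy through TStream
  finally
    M.Free;
  end;
end;
```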
type TOnMemoryStream = class(TCustomMemoryStream)
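A minimal sketch of what such a class can look like (this is my assumption based on how it is used below: the constructor simply maps the stream onto an existing memory block via the protected TCustomMemoryStream.SetPointer, and Write copies straight into that block):

```pascal
type
  TOnMemoryStream = class(TCustomMemoryStream)
  public
    constructor Create(Ptr: Pointer; Size: LongInt);
    function Write(const Buffer; Count: Longint): Longint; override;
  end;

constructor TOnMemoryStream.Create(Ptr: Pointer; Size: LongInt);
begin
  inherited Create;
  // Point the stream at memory we do not own; nothing is copied.
  SetPointer(Ptr, Size);
end;

function TOnMemoryStream.Write(const Buffer; Count: Longint): Longint;
begin
  // Writes go straight into the mapped block (see the caveats below).
  Result := 0;
  if (Position >= 0) and (Count > 0) and (Position + Count <= Size) then
  begin
    Result := Count;
    Move(Buffer, PByte(Memory)[Position], Count);
    Position := Position + Count;
  end;
end;
```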
I do not even know what to add as a comment to this code, so let's just look at working with the class.
program onmemorystream_demo;

{$APPTYPE CONSOLE}
{$R *.res}

uses
  Windows,
  SysUtils,
  common_payload in '..\common\common_payload.pas',
  OnMemoryStream in 'OnMemoryStream.pas';

var
  M: TOnMemoryStream;
begin
  try
    M := TOnMemoryStream.Create(
      Pointer(GetModuleHandle('ntdll.dll')), 1024 * 1024 * 8);
    try
      CalcResOffset(M, False);
    finally
      M.Free;
    end;
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
  Readln;
end.
Everything is simple here: we find the base address of the loaded NTDLL.DLL and read its resource section directly from memory, using all the advantages of a stream (and with no need to copy anything into a temporary buffer).
Now a few comments on the use of the class.
In general, the class is very pleasant if used only for reading data, but... as the code shows, it does not prohibit writing to the memory block it controls, and that can cause a lot of trouble.
We can easily overwrite data critical to the application and then run into a banal AV, so in our projects this feature of the class is used minimally (literally only for rebuilding search indexes in the right places over a pre-allocated buffer).
By the way, it is for this very reason that we refused to use the "friendly classes" trick that opens access to the TCustomMemoryStream.SetPointer call: in that case the writes would not be controlled by anyone at all, which could end in quite a bada-boom.
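For reference, the rejected "friendly class" trick looks roughly like this (a sketch; TFriendlyMemoryStream and the procedure name are made up):

```pascal
type
  // An empty descendant declared locally, just to reach protected members.
  TFriendlyMemoryStream = class(TCustomMemoryStream);

procedure PointStreamAtBuffer(S: TCustomMemoryStream;
  P: Pointer; Size: LongInt);
begin
  // Legal from the unit where TFriendlyMemoryStream is declared,
  // but after this nothing at all controls writes through S.
  TFriendlyMemoryStream(S).SetPointer(P, Size);
end;
```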
The source code of the class and the example can be found in the archive: ".\src\onmemorystream\"
And now we move on to the concluding part of the article.
A special case of a smart pointer: SharedPtr
Now I am going to teach you something bad.
Let's take a look at how to work with objects in Delphi. Usually it looks like this:
var
  T: TObject;
begin
  T := TObject.Create;
  try
Newbies in the language, of course, forget about the finalization section and produce pearls like this:

T := TObject.Create;

and then, forgetting that the object needs to be released, never even call Free on it.
Some "advanced beginners" manage to produce even this kind of crap code:

try
  T := TObject.Create;
And once I even ran into this implementation:

try
finally
  T := TObject.Create;

Well, the man tried, that much is clear.
However, let's get back to the first, correct variant.
It has the following drawback: if we need to work with several objects at once, the code grows considerably because of the nested finalization sections:
var
  T1, T2, T3: TObject;
begin
  T1 := TObject.Create;
  try
    T2 := TObject.Create;
    try
      T3 := TObject.Create;
      try
There is, of course, a variant that I find somewhat dubious and do not use myself, though lately it comes up quite often on the Internet:

T1 := nil;
T2 := nil;
T3 := nil;
try
  T1 := TObject.Create;
  T2 := TObject.Create;
  T3 := TObject.Create;

Because every variable is initialized up front, calling Free on an object that was never created (if an exception was suddenly raised in an earlier constructor) will not cause an error here, but it still looks too dubious.
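Written out in full, that pattern usually takes this shape (a sketch):

```pascal
var
  T1, T2, T3: TObject;
begin
  T1 := nil;
  T2 := nil;
  T3 := nil;
  try
    T1 := TObject.Create;
    T2 := TObject.Create;
    T3 := TObject.Create;
    // ... work with T1..T3 ...
  finally
    // TObject.Free checks Self for nil, so objects whose
    // constructors never ran are handled without errors.
    T3.Free;
    T2.Free;
    T1.Free;
  end;
end;
```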
And what would you say if I told you that Free need not be called at all?
Yes: just create the object and forget that it ever has to be destroyed.
What does that look like? Like this:

T := TObject.Create;
Well, of course, in exactly this form it will not work without extra machinery: we have no garbage collector and the like. But don't rush to say "Sanya, you've lost your mind!", because we can take an idea from other programming languages and implement it in our "great and mighty" one.
And we will take the idea from SharedPtr:
see the documentation.
The logic of this class is simple: control the lifetime of an object by counting references to it. Fortunately, we can do that; we already have such a mechanism, and it is called interfaces.
But not everything is so simple.
Of course, at first glance you could just roll out the idea directly: implement IUnknown in the class, and that's it; as soon as the reference count of an instance reaches zero, it collapses.
But we can only do that with our own, hand-written classes. What about something like TMemoryStream, which couldn't care less about all this feng shui because it knows nothing about interfaces?
The most logical answer is to write yet another proxy that keeps a reference to the object it controls, implements the reference counting itself, and on its own destruction takes down the object entrusted to it for safekeeping.
But even here things are not so rosy. Fine, we will write a proxy, but how exactly? The idea has already been voiced, yet there will be a noticeable drawdown both in memory and in speed if it uses a classic interfaced class as the reference-counting engine, with all its baggage.
Therefore, let's approach the problem from the technical side and look at the drawbacks of the interface-based implementation:

program slowsharedptr;

{$APPTYPE CONSOLE}
{$R *.res}

uses
  Windows,
  Classes,
  SysUtils;

type
  TObjectDestroyer = class(TInterfacedObject)
  private
    FObject: TObject;
  public
    constructor Create(AObject: TObject);
    destructor Destroy; override;
  end;

  TSharedPtr = record
  private
    FDestroyerObj: TObjectDestroyer;
    FDestroyer: IUnknown;
  public
    constructor Create(const AValue: TObject);
  end;

constructor TObjectDestroyer.Create(AObject: TObject);
begin
  inherited Create;
  FObject := AObject;
end;

destructor TObjectDestroyer.Destroy;
begin
  FObject.Free;
  inherited;
end;

constructor TSharedPtr.Create(const AValue: TObject);
begin
  FDestroyerObj := TObjectDestroyer.Create(AValue);
  FDestroyer := FDestroyerObj;
end;

var
  I: Integer;
  T: DWORD;
begin
  ReportMemoryLeaksOnShutdown := True;
  try
    T := GetTickCount;
    for I := 0 to $FFFFFF do
      TSharedPtr.Create(TObject.Create);
    Writeln(GetTickCount - T);
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
  Readln;
end.
The execution time of this code will be around 3525 milliseconds (remember this number).
The bottom line: the main logic lives in the TObjectDestroyer class, which does the reference counting and destroys the object given to it for safekeeping. TSharedPtr is a structure through which the references are handled correctly at the moment it goes out of scope (in this particular case we could do without the structure, but...).
If you run the example, you will see that the created objects are destroyed before the application terminates (and if that were not so, you would be told about it explicitly, since the ReportMemoryLeaksOnShutdown flag is on).
But let's take a closer look at where there may be an overhead that we do not need (both in terms of memory and speed of execution).
Well, first: TObjectDestroyer.InstanceSize equals 20.
Heh, that is an extra 20 bytes of memory for every object we control, and given that the granularity of the Delphi memory manager is 12 bytes, it is not even 20 bytes that are lost, but all 24. Think that's a trifle? Maybe so, but our version will take (and does take) exactly 12 bytes: if we are removing overhead, let's remove all of it.
The second problem is the redundant overhead when calling interface methods.
Let's recall what the VMT of an object implementing an interface looks like in memory.
The object's VMT begins with the object's own virtual methods, including those that implement the interface, and these implementing methods do not belong to the interface itself.
Right after them comes the VMT of the interface, whose entries, when called, redirect (via a compiler-magic constant computed for each interface at compile time) to the real code.
This can be seen visually by executing the following code:
constructor TSharedPtr.Create(const AValue: TObject);
var
  I: IUnknown;
begin
  FDestroyerObj := TObjectDestroyer.Create(AValue);
  I := FDestroyerObj;
  I._AddRef;
  I._Release;
If we look at the assembler listing, we will see the following:

slowsharedptr.dpr.51: I._AddRef;
004D3C73 8B45F4           mov eax,[ebp-$0c]
004D3C76 50               push eax
004D3C77 8B00             mov eax,[eax]
004D3C79 FF5004           call dword ptr [eax+$04]
...which leads to:
004021A3 83442404F8 add dword ptr [esp+$04],-$08
in the first case, and in the second to:
004021AD 83442404F8 add dword ptr [esp+$04],-$08
If TObjectDestroyer implemented not IUnknown but, say, IEnumerator, the compiler would automatically adjust the offsets into the object's VMT like this:
004D3A4B 83442404F0 add dword ptr [esp+$04],-$10
It is through this kind of jump that the compiler calls the _AddRef and _Release methods whenever the reference count changes (for example, when the interface is assigned to a new variable, or when it goes out of scope).
So now we will defeat all this trouble and write our own interface.
Here we go:
PObjectDestroyer = ^TObjectDestroyer;
TObjectDestroyer = record
strict private
  class var VTable: array[0..2] of Pointer;
  class function QueryInterface(Self: PObjectDestroyer;
    const IID: TGUID; out Obj): HResult; stdcall; static;
  class function _AddRef(Self: PObjectDestroyer): Integer; stdcall; static;
  class function _Release(Self: PObjectDestroyer): Integer; stdcall; static;
  class constructor ClassCreate;
private
  FVTable: Pointer;
  FRefCount: Integer;
  FObj: TObject;
public
  class function Create(AObj: TObject): IUnknown; static;
end;
Do you think this is a record?
No: it is essentially an object in itself, with its own VMT stored in VTable and exactly 12 bytes in size:

FVTable: Pointer;
FRefCount: Integer;
FObj: TObject;
Now the actual "magic".
VMT initialization occurs in the following method:
class constructor TObjectDestroyer.ClassCreate;
begin
  VTable[0] := @QueryInterface;
  VTable[1] := @_AddRef;
  VTable[2] := @_Release;
end;
Everything by the canons; Delphi will not even suspect a trick here, because for it this is a perfectly valid VMT, implemented by all the laws and rules.
But the main constructor looks like this:
class function TObjectDestroyer.Create(AObj: TObject): IUnknown;
var
  P: PObjectDestroyer;
begin
  if AObj = nil then Exit(nil);
  GetMem(P, SizeOf(TObjectDestroyer));
  P^.FVTable := @VTable;
  P^.FRefCount := 0;
  P^.FObj := AObj;
  Result := IUnknown(P);
end;
Via GetMem we allocate memory for the "InstanceSize" of our would-be class (even though it is actually a record), then initialize the required fields: a pointer to the VMT, a reference count, and a pointer to the controlled object.
This immediately bypasses the overhead of the InitInstance call and everything associated with it.
Pay attention - the result of the constructor call is the IUnknown interface.
A hack? Of course.
Does it work? Of course.
The implementation of the QueryInterface, _AddRef, and _Release methods is taken from the standard TInterfacedObject and holds no interest.
However, QueryInterface is essentially redundant in this approach; but since we decided to do everything by the classics, and we allow that some "mad programmer" may still try to call this method, we will leave it in its rightful place (all the more so because it must come first in the interface VMT anyway; we can hardly leave a junk pointer in its slot, can we?).

Now let's work a little on the structure through which we provide reference control:

TSharedPtr<T: class> = record
private
  FPtr: IUnknown;
  function GetValue: T; inline;
public
  class function Create(AObj: T): TSharedPtr<T>; static; inline;
  class function Null: TSharedPtr<T>; static;
  property Value: T read GetValue;
  function Unwrap: T;
end;
The constructor has changed a little:

class function TSharedPtr<T>.Create(AObj: T): TSharedPtr<T>;
begin
  Result.FPtr := TObjectDestroyer.Create(AObj);
end;
However, its essence has not changed. A new method has been added that provides access to the object controlled by our shared pointer:

function TSharedPtr<T>.GetValue: T;
begin
  if FPtr = nil then Exit(nil);
  Result := T(PObjectDestroyer(FPtr)^.FObj);
end;
Well, and two utility routines: the first simply releases the reference by producing an empty shared pointer:

class function TSharedPtr<T>.Null: TSharedPtr<T>;
begin
  Result.FPtr := nil;
end;
And the second detaches the controlled object from this whole mechanism:

function TSharedPtr<T>.Unwrap: T;
begin
  if FPtr = nil then Exit(nil);
  Result := T(PObjectDestroyer(FPtr).FObj);
  PObjectDestroyer(FPtr).FObj := nil;
  FPtr := nil;
end;
Now let's see why we need all this. Consider a situation: say we created an instance of a class monitored by TObjectDestroyer and handed it outside. What happens then?

Right: as soon as the procedure in which the object was created finishes, the object is destroyed immediately, and the external code is left working with an already dead pointer.

That is what the TSharedPtr structure is for: with it you can pass data around the procedures of your application without fear of the object being destroyed prematurely. As soon as nobody really needs it anymore, TObjectDestroyer instantly takes it down, and everyone reaches nirvana.

But that is not all.
Having played with the TSharedPtr implementation for a while, we nevertheless concluded that it was not entirely successful. And do you know why? Because a constructor call like this seemed too verbose to us:

TSharedPtr<TMyObj>.Create(TMyObj.Create);
Yes, that is exactly what you have to call; but in order not to scare programmers unprepared for such happiness, we decided to add a small wrapper:

TSharedPtr = record
public
  class function Create<T: class>(AObj: T): TSharedPtr<T>; static; inline;
end;

...

class function TSharedPtr.Create<T>(AObj: T): TSharedPtr<T>;
begin
  Result.FPtr := TObjectDestroyer.Create(AObj);
end;
After that everything became much more pleasant: creating a shared pointer began to look far more familiar and to resemble the creation of the proxies described earlier:

TSharedPtr.Create(TObject.Create)
But enough ranting; let's measure the speed drawdown (and there will be one, of course). We write the code:

program sharedptr_demo;

{$APPTYPE CONSOLE}
{$R *.res}

uses
  Windows,
  System.SysUtils,
  StaredPtr in 'StaredPtr.pas';

const
  Count = $FFFFFF;

procedure TestObj;
var
  I: Integer;
  Start: Cardinal;
  Obj: TObject;
begin
  Start := GetTickCount;
  for I := 0 to Count - 1 do
  begin
    Obj := TObject.Create;
    try
And look what happened: the first version of the shared pointer took 3525 milliseconds, while the new one produces 2917. We did not try in vain, it turns out.

But what is this AutoDestroy that outran the shared pointer by a whole second? It is a helper, and that is bad. Bad, because this helper is implemented over TObject:

TObjectHelper = class helper for TObject
public
  function AutoDestroy: IUnknown; inline;
end;

...

function TObjectHelper.AutoDestroy: IUnknown;
begin
  Result := TObjectDestroyer.Create(Self);
end;
The fact is that, at least in XE4, conflicts between overlapping class helpers are still not fixed: if you have your own helper for TStream and try to use it alongside TObjectHelper, the project simply will not build. I do not know whether this is solved in XE7, but in XE4 it is definitely present, and for this reason we do not use this piece of code, even though it is noticeably faster than going through the TSharedPtr structure.

Now let's look at the point I mentioned earlier about the jump through the interface VMT. For this we will write two simple procedures; here is the first:

procedure TestInterfacedObjectVMT;
var
  I: IUnknown;
begin
  I := TInterfacedObject.Create;
end;
At the very beginning I mentioned that using even the simplest TSharedPtr in the very first example is somewhat redundant. Indeed, in that case you could simply keep the interface reference in a local variable (which is essentially what TSharedPtr does, only in a slightly different way). So, here is what happens in this version of the code.

1. Creating the object and initializing the interface:

sharedptr_demo.dpr.60: I := TInterfacedObject.Create;
004192BB B201             mov dl,$01
004192BD A11C1E4000       mov eax,[$00401e1c]
004192C2 E899C5FEFF       call TObject.Create
004192C7 8BD0             mov edx,eax
004192C9 85D2             test edx,edx
004192CB 7403             jz $004192d0
004192CD 83EAF8           sub edx,-$08
004192D0 8D45FC           lea eax,[ebp-$04]
004192D3 E8C801FFFF       call @IntfCopy
2. Calling the finalization section:

sharedptr_demo.dpr.61: end;
004192D8 33C0             xor eax,eax
004192DA 5A               pop edx
004192DB 59               pop ecx
004192DC 59               pop ecx
004192DD 648910           mov fs:[eax],edx
004192E0 68F5924100       push $004192f5
004192E5 8D45FC           lea eax,[ebp-$04]
004192E8 E89B01FFFF       call @IntfClear
3. After that, control is transferred to @IntfClear, where the jump announced earlier awaits us:

00401DE1 83442404F8       add dword ptr [esp+$04],-$08
00401DE6 E951770000       jmp TInterfacedObject._Release
And what happens when TObjectDestroyer is used?

procedure TestSharedPtrVMT;
begin
  TObjectDestroyer.Create(TObject.Create);
end;
1. Creating the object and creating the TObjectDestroyer itself:

sharedptr_demo.dpr.66: TObjectDestroyer.Create(TObject.Create);
004D3C27 B201             mov dl,$01
004D3C29 A184164000       mov eax,[$00401684]
004D3C2E E89945F3FF       call TObject.Create
004D3C33 8D55FC           lea edx,[ebp-$04]
004D3C36 E8B5FBFFFF       call TObjectDestroyer.Create
Yes, there is an overhead: one extra call, after all. But what about destruction?

2. Everything is very simple:

sharedptr_demo.dpr.67: end;
004D3C3B 33C0             xor eax,eax
004D3C3D 5A               pop edx
004D3C3E 59               pop ecx
004D3C3F 59               pop ecx
004D3C40 648910           mov fs:[eax],edx
004D3C43 68583C4D00       push $004d3c58
004D3C48 8D45FC           lea eax,[ebp-$04]
004D3C4B E8DC92F3FF       call @IntfClear
004D3C50 C3               ret
Almost identical to the first variant. But the most interesting part happens when @IntfClear is called: it skips the redundant jumps through the VMT and transfers control straight to the class function TObjectDestroyer._Release.

As a result we saved two instructions (add and jmp); unfortunately, for now that is the bare minimum achievable, since with a proxy in play some overhead is simply unavoidable.

In conclusion, it only remains to see how to use this automatic-destruction mechanism in practice. For example, let's create a file stream and write some constant into it:

procedure TestWriteBySharedPtr;
var
  F: TFileStream;
  ConstData: DWORD;
begin
  ConstData := $DEADBEEF;
  F := TFileStream.Create('data.bin', fmCreate);
  TObjectDestroyer.Create(F);
  F.WriteBuffer(ConstData, SizeOf(ConstData));
end;
Yes, that is all: the stream's lifetime is controlled, and no extra gestures are required. The TSharedPtr structure is not used here, since there is no need to pass the pointer between parts of the code, and TObjectDestroyer's functionality is enough.

Now let's read the constant's value back from the file and print it, and at the same time look at passing data between procedures. This is how we create an object controlled by a shared pointer:

function CreateReadStream: TSharedPtr<TFileStream>;
begin
  Result := TSharedPtr.Create(
    TFileStream.Create('data.bin', fmOpenRead or fmShareDenyWrite));
end;
And this is how we get data from that object:

procedure TestReadBySharedPtr;
var
  F: TSharedPtr<TFileStream>;
  ConstData: DWORD;
begin
  F := CreateReadStream;
  F.Value.ReadBuffer(ConstData, SizeOf(ConstData));
  Writeln(IntToHex(ConstData, 8));
end;
As you can see, the code has hardly changed compared to the classical approach to development.

Pros: the need for TRY..FINALLY blocks is gone, and the code has become less cluttered.

Cons: a small speed overhead, and the constructors have grown slightly, forcing us to call TSharedPtr.Create (when the object is handed out to external code) or TObjectDestroyer to control the lifetime. There is also the extra Value property, through which you access the controlled object when using TSharedPtr, but that just takes getting used to, especially since this is all the syntactic sugar Delphi is capable of here. Although I still dream that some day there will be a DEFAULT method of an object (or a default property of a non-array type) that can be called without naming it, simply by referring to the class variable; then we could declare TSharedPtr's Value property as default and work with the underlying object without even knowing it is under a proxy's control :)

Conclusions
Just one conclusion: I got tired of writing all this up.

But seriously, all three approaches shown above are quite convenient in practice, and I use the first two almost everywhere. With TSharedPtr, of course, I am more cautious. Do not think it is bad; the reason is different. Even after so many years of practice it is still uncomfortable for me to look at code without finalization sections: I understand perfectly well that the machinery will do its job, but it feels unusual. So I use TSharedPtr only in a few special cases, when I need to hand an object out to external code that I do not control, although my colleagues hold a slightly different view and use it quite often (not everywhere, of course, because as you have seen, its main drawback is a roughly twofold speed drawdown as the price of convenience).

And with that, perhaps, I will wrap up. Check your bins and share, because there is surely something useful in there.

The source code of the demos is available at this link.