I finally got around to writing this note.
In the last article I wrote about creating a simple XML parser based on SAX. By popular demand, here is the next installment in the series: a SAX vs. DOM performance comparison.
So, a quick reminder. We had an XML file, a data structure describing a doctor. Here it is:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<doc>
<id>3</id>
<fam></fam>
<name></name>
<otc></otc>
<dateb>12-05-1976</dateb>
<datep>13-04-2005</datep>
<datev>02-03-2004</datev>
<datebegin>18-06-2009</datebegin>
<dateend>22-01-2022</dateend>
<vdolid>1</vdolid>
<specid>1</specid>
<klavid>1</klavid>
<stav>1.0</stav>
<progid>1</progid>
</doc>
This whole miracle weighs 411 bytes.
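As a refresher, the parser from the previous article boiled down to a SAX handler. A minimal sketch of what such a handler might look like in Java is below; the DocHandler class name and the map-based field storage are my illustrative assumptions, not necessarily the exact code from that article.

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a SAX handler for the <doc> structure above:
// collects the text of every element into a map keyed by tag name.
public class DocHandler extends DefaultHandler {
    private final Map<String, String> fields = new HashMap<>();
    private StringBuilder current;

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        current = new StringBuilder(); // start collecting text for this element
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        if (current != null) current.append(ch, start, length);
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if (current != null) fields.put(qName, current.toString().trim());
        current = null;
    }

    public Map<String, String> getFields() {
        return fields;
    }
}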
I decided to run a simple test: parse this file in a loop n times.
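My original test harness is not shown here, but a sketch along the same lines using the standard JAXP APIs might look like this; the file path and the iteration count n are placeholders.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

public class ParseBenchmark {
    public static void main(String[] args) throws Exception {
        File xml = new File("doc.xml"); // the 411-byte file above (path assumed)
        int n = 10000;                  // number of iterations (value assumed)

        // SAX: stream the file through a no-op handler n times.
        SAXParserFactory saxFactory = SAXParserFactory.newInstance();
        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            saxFactory.newSAXParser().parse(xml, new DefaultHandler());
        }
        System.out.println("SAX: " + (System.currentTimeMillis() - start) + " ms");

        // DOM: build the full document tree n times.
        DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
        start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            domFactory.newDocumentBuilder().parse(xml);
        }
        System.out.println("DOM: " + (System.currentTimeMillis() - start) + " ms");
    }
}

Creating a fresh parser on every iteration keeps the two runs symmetric; reusing factories or parsers would shift both numbers, but the overall trend would be the same.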
Let's look at the chart.

Note: the vertical axis shows time in milliseconds; the horizontal axis shows the volume of data processed, labeled in the format N * 411 bytes, where N is the number of times the XML file was parsed.
We get a curious relationship: the larger the volume of data processed, the longer DOM takes to run. Against this background, SAX looks very good when you need speed on large volumes.
In any case, I do not want to draw any conclusions here, so as not to start a flame war. Let me just say that when choosing an approach, you should be guided by common sense and your goals.