
Working with resources, or how I tried to push @Cleanup through

This is a fictional story; any resemblance to real events is coincidental.

The development team at Unknown Ltd. had finally shipped a release on time. Head of Development Andrew, Systems Architect South, and ordinary developer Bob got together for planning.

For the upcoming quarter they decided to take on more tasks from the tech-debt backlog, since almost all of the previous one had been spent fixing bugs and making urgent improvements for specific customers.

Everyone settled in and began to discuss the upcoming plan. Bob immediately noticed the task about document generation. The essence of it was this: a generated document consists of the generation schema and settings plus the document content itself. On save, the document is serialized to XML, turned into a stream of bytes and compressed, and the result is placed, as usual, into a BLOB column. When the document needs to be displayed or exported, the same steps repeat in exact reverse order, and voila, the document appears on the client's screen. That's it, simple. But as you know, the devil is in the details: to regenerate a document when only its settings change, the entire document has to be loaded from the database, even though its content is not needed at all. Ouch. They discussed the problem and concluded that Bob would have to do the following (the old save path is sketched just after the plan):
1. Split the storage format: keep the schema and generation settings in a column separate from the compressed document data, so the settings can be loaded and changed without touching the content.
2. Write a migrator that converts all existing documents from the old single-BLOB format to the new one.
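To make the starting point concrete, here is a minimal sketch of the old save/load round trip, assuming a single BLOB column; Document, serializeToXML and deserializeFromXML are hypothetical placeholders, not the project's real API:

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

class OldFormatSketch {
    // Save: serialize to XML, compress, and the resulting bytes go into the BLOB column.
    static byte[] toBlobBytes(Document doc) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (Writer w = new OutputStreamWriter(new GZIPOutputStream(bytes), StandardCharsets.UTF_8)) {
            serializeToXML(doc, w); // schema, generation settings and content all in one XML
        }
        return bytes.toByteArray();
    }

    // Load: the exact reverse. Note the pain point: even to change only the settings,
    // the entire compressed document has to be read back and parsed.
    static Document fromBlobBytes(byte[] blobBytes) throws IOException {
        try (Reader r = new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(blobBytes)), StandardCharsets.UTF_8)) {
            return deserializeFromXML(r);
        }
    }

    // Stubs standing in for the real (unknown) serialization code.
    static class Document {}
    static void serializeToXML(Document doc, Writer w) throws IOException {}
    static Document deserializeFromXML(Reader r) throws IOException { return new Document(); }
}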

And so it was decided. They moved on to the remaining items.

An hour and a half passed.

Bob came back from planning full of enthusiasm; barely sitting down at his desk, he moved the task to "In Progress" and got to work. About two days later the first part was done. Bob committed the changes and sent them off for review. Not wanting to waste any time, he started on the second part, the migration. After a while, the following migrator code was born:

// Imports omitted for brevity (java.sql.*, java.io.*, java.util.zip.*, javax.xml.stream.*, org.xml.sax.*).
public class MigratorV1 {
    private Connection conn;             // Injected
    private SAXParser xmlParser;         // Injected
    private XMLOutputFactory xmlFactory; // Injected

    public void migrate() throws Exception {
        PreparedStatement selectOldContent = conn.prepareStatement(
                "select content from old_data where id = ?");
        PreparedStatement insertNewContent = conn.prepareStatement(
                "insert into new_data (id, scheme, data) values (?, ?, ?)");
        ResultSet oldIdResult = conn.createStatement().executeQuery("select id from old_data");
        while (oldIdResult.next()) {
            long id = oldIdResult.getLong(1);
            selectOldContent.setLong(1, id);
            ResultSet oldContentResult = selectOldContent.executeQuery();
            oldContentResult.next();
            Blob oldContent = oldContentResult.getBlob(1);
            Reader oldContentReader = new InputStreamReader(
                    new GZIPInputStream(oldContent.getBinaryStream()));
            StringWriter newSchemeWriter = new StringWriter();
            XMLStreamWriter newSchemeXMLWriter = xmlFactory.createXMLStreamWriter(newSchemeWriter);
            ByteArrayOutputStream newDataOutput = new ByteArrayOutputStream();
            GZIPOutputStream newZippedDataOutput = new GZIPOutputStream(newDataOutput);
            XMLStreamWriter newDataXMLWriter =
                    xmlFactory.createXMLStreamWriter(newZippedDataOutput, "utf-8");
            xmlParser.parse(new InputSource(oldContentReader), new DefaultHandler() {
                // Use newSchemeXMLWriter and newDataXMLWriter to split the XML
                // into the scheme (String) and the data (byte[])
            });
            newZippedDataOutput.finish(); // complete the gzip stream before reading its bytes
            String newScheme = newSchemeWriter.toString();
            byte[] newData = newDataOutput.toByteArray();
            StringReader newSchemeReader = new StringReader(newScheme);
            ByteArrayInputStream newDataInput = new ByteArrayInputStream(newData);
            insertNewContent.setLong(1, id);
            insertNewContent.setCharacterStream(2, newSchemeReader, newScheme.length());
            insertNewContent.setBlob(3, newDataInput, newData.length);
            insertNewContent.executeUpdate();
        }
    }
}

To use the migrator, client code creates (or in some way injects) a migrator instance and calls its migrate() method. That's all.
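For illustration only, a caller might look roughly like this; the wiring below is an assumption (in the story the three fields are injected by the project's own mechanism, and a matching constructor is assumed here):

// Hypothetical wiring, including the JDBC URL; not the project's actual setup.
Connection conn = DriverManager.getConnection("jdbc:h2:mem:docs", "sa", "");
SAXParser xmlParser = SAXParserFactory.newInstance().newSAXParser();
XMLOutputFactory xmlFactory = XMLOutputFactory.newFactory();

MigratorV1 migrator = new MigratorV1(conn, xmlParser, xmlFactory); // assumed constructor
migrator.migrate();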

Something's not right here, thought Bob. Of course: he forgot to release the resources. Imagine a client with several hundred thousand documents in production, and we never release a single resource. Bob quickly fixed the problem:

public class MigratorV2 {
    private Connection conn;             // Injected
    private SAXParser xmlParser;         // Injected
    private XMLOutputFactory xmlFactory; // Injected

    public void migrate() throws Exception {
        try (
            PreparedStatement selectOldContent = conn.prepareStatement(
                    "select content from old_data where id = ?");
            PreparedStatement insertNewContent = conn.prepareStatement(
                    "insert into new_data (id, scheme, data) values (?, ?, ?)");
            ResultSet oldIdResult = conn.createStatement().executeQuery("select id from old_data");
        ) {
            while (oldIdResult.next()) {
                long id = oldIdResult.getLong(1);
                selectOldContent.setLong(1, id);
                try (ResultSet oldContentResult = selectOldContent.executeQuery()) {
                    oldContentResult.next();
                    String newScheme;
                    byte[] newData;
                    Blob oldContent = null;
                    try {
                        oldContent = oldContentResult.getBlob(1);
                        try (
                            Reader oldContentReader = new InputStreamReader(
                                    new GZIPInputStream(oldContent.getBinaryStream()));
                            StringWriter newSchemeWriter = new StringWriter();
                            ByteArrayOutputStream newDataOutput = new ByteArrayOutputStream();
                            GZIPOutputStream newZippedDataOutput = new GZIPOutputStream(newDataOutput);
                        ) {
                            XMLStreamWriter newSchemeXMLWriter = null;
                            XMLStreamWriter newDataXMLWriter = null;
                            try {
                                newSchemeXMLWriter = xmlFactory.createXMLStreamWriter(newSchemeWriter);
                                newDataXMLWriter = xmlFactory.createXMLStreamWriter(newZippedDataOutput, "utf-8");
                                xmlParser.parse(new InputSource(oldContentReader), new DefaultHandler() {
                                    // Use newSchemeXMLWriter and newDataXMLWriter to split the XML
                                    // into the scheme (String) and the data (byte[])
                                });
                            } finally {
                                if (newSchemeXMLWriter != null) {
                                    try { newSchemeXMLWriter.close(); } catch (XMLStreamException e) {}
                                }
                                if (newDataXMLWriter != null) {
                                    try { newDataXMLWriter.close(); } catch (XMLStreamException e) {}
                                }
                            }
                            newZippedDataOutput.finish(); // complete the gzip stream before reading its bytes
                            newScheme = newSchemeWriter.toString();
                            newData = newDataOutput.toByteArray();
                        }
                    } finally {
                        if (oldContent != null) {
                            try { oldContent.free(); } catch (SQLException e) {}
                        }
                    }
                    try (
                        StringReader newSchemeReader = new StringReader(newScheme);
                        ByteArrayInputStream newDataInput = new ByteArrayInputStream(newData);
                    ) {
                        insertNewContent.setLong(1, id);
                        insertNewContent.setCharacterStream(2, newSchemeReader, newScheme.length());
                        insertNewContent.setBlob(3, newDataInput, newData.length);
                        insertNewContent.executeUpdate();
                    }
                }
            }
        }
    }
}

Oh God, thought Bob, how is anyone supposed to make sense of all this now? This code is hard to follow not only for another developer, but even for me, if I come back to it in a month to fix or add something. Time to decompose, Bob thought, and he split the independent parts of the code into methods:

public class MigratorV3 {
    private Connection conn;             // Injected
    private SAXParser xmlParser;         // Injected
    private XMLOutputFactory xmlFactory; // Injected

    @RequiredArgsConstructor
    private static class NewData {
        final String scheme;
        final byte[] data;
    }

    private List<Long> loadIds() throws Exception {
        List<Long> ids = new ArrayList<>();
        try (ResultSet oldIdResult = conn.createStatement().executeQuery("select id from old_data")) {
            while (oldIdResult.next()) {
                ids.add(oldIdResult.getLong(1));
            }
        }
        return ids;
    }

    private Blob loadOldContent(PreparedStatement selectOldContent, long id) throws Exception {
        selectOldContent.setLong(1, id);
        try (ResultSet oldContentResult = selectOldContent.executeQuery()) {
            oldContentResult.next();
            return oldContentResult.getBlob(1);
        }
    }

    private void oldContentToNewData(Reader oldContentReader, StringWriter newSchemeWriter,
                                     GZIPOutputStream newZippedDataOutput) throws Exception {
        XMLStreamWriter newSchemeXMLWriter = null;
        XMLStreamWriter newDataXMLWriter = null;
        try {
            newSchemeXMLWriter = xmlFactory.createXMLStreamWriter(newSchemeWriter);
            newDataXMLWriter = xmlFactory.createXMLStreamWriter(newZippedDataOutput, "utf-8");
            xmlParser.parse(new InputSource(oldContentReader), new DefaultHandler() {
                // Use newSchemeXMLWriter and newDataXMLWriter to split the XML
                // into the scheme (String) and the data (byte[])
            });
        } finally {
            if (newSchemeXMLWriter != null) {
                try { newSchemeXMLWriter.close(); } catch (XMLStreamException e) {}
            }
            if (newDataXMLWriter != null) {
                try { newDataXMLWriter.close(); } catch (XMLStreamException e) {}
            }
        }
    }

    private NewData generateNewDataFromOldContent(PreparedStatement selectOldContent, long id) throws Exception {
        Blob oldContent = null;
        try {
            oldContent = loadOldContent(selectOldContent, id);
            try (
                Reader oldContentReader = new InputStreamReader(
                        new GZIPInputStream(oldContent.getBinaryStream()));
                StringWriter newSchemeWriter = new StringWriter();
                ByteArrayOutputStream newDataOutput = new ByteArrayOutputStream();
                GZIPOutputStream newZippedDataOutput = new GZIPOutputStream(newDataOutput);
            ) {
                oldContentToNewData(oldContentReader, newSchemeWriter, newZippedDataOutput);
                newZippedDataOutput.finish(); // complete the gzip stream before reading its bytes
                return new NewData(newSchemeWriter.toString(), newDataOutput.toByteArray());
            }
        } finally {
            if (oldContent != null) {
                try { oldContent.free(); } catch (SQLException e) {}
            }
        }
    }

    private void storeNewData(PreparedStatement insertNewContent, long id,
                              String newScheme, byte[] newData) throws Exception {
        try (
            StringReader newSchemeReader = new StringReader(newScheme);
            ByteArrayInputStream newDataInput = new ByteArrayInputStream(newData);
        ) {
            insertNewContent.setLong(1, id);
            insertNewContent.setCharacterStream(2, newSchemeReader, newScheme.length());
            insertNewContent.setBlob(3, newDataInput, newData.length);
            insertNewContent.executeUpdate();
        }
    }

    public void migrate() throws Exception {
        List<Long> ids = loadIds();
        try (
            PreparedStatement selectOldContent = conn.prepareStatement(
                    "select content from old_data where id = ?");
            PreparedStatement insertNewContent = conn.prepareStatement(
                    "insert into new_data (id, scheme, data) values (?, ?, ?)");
        ) {
            for (Long id : ids) {
                NewData newData = generateNewDataFromOldContent(selectOldContent, id);
                storeNewData(insertNewContent, id, newData.scheme, newData.data);
            }
        }
    }
}

As before, the client code creates a migrator and calls migrate(). Inside, the identifiers of all existing documents are loaded; then, for each identifier, the document content is loaded and split by the SAX parser into the parts belonging to the schema and to the data, which are saved into a new table in separate columns.
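The listings elide the handler body with a comment; for intuition, here is a rough sketch of what such a splitter could look like. The element names "scheme" and "data" are assumptions about the document format, not taken from the story:

// Sketch only: route SAX events to one of the two StAX writers depending on
// which subtree we are currently inside.
DefaultHandler splitter = new DefaultHandler() {
    private XMLStreamWriter current; // the writer for the subtree being copied

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs)
            throws SAXException {
        try {
            if ("scheme".equals(qName)) current = newSchemeXMLWriter; // captured from outer scope
            if ("data".equals(qName))   current = newDataXMLWriter;
            if (current != null) {
                current.writeStartElement(qName);
                for (int i = 0; i < attrs.getLength(); i++) {
                    current.writeAttribute(attrs.getQName(i), attrs.getValue(i));
                }
            }
        } catch (XMLStreamException e) {
            throw new SAXException(e);
        }
    }

    @Override
    public void characters(char[] ch, int start, int length) throws SAXException {
        try {
            if (current != null) current.writeCharacters(ch, start, length);
        } catch (XMLStreamException e) {
            throw new SAXException(e);
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName) throws SAXException {
        try {
            if (current != null) current.writeEndElement();
            if ("scheme".equals(qName) || "data".equals(qName)) current = null;
        } catch (XMLStreamException e) {
            throw new SAXException(e);
        }
    }
};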

That seemed a little better, but Bob felt sad. Why don't XMLStreamWriter and Blob implement AutoCloseable, he wondered (both interfaces predate Java 7's try-with-resources: a Blob is released with free() rather than close(), and XMLStreamWriter was never retrofitted), and he started writing wrappers:

public class MigratorV4 {
    private Connection conn;             // Injected
    private SAXParser xmlParser;         // Injected
    private XMLOutputFactory xmlFactory; // Injected

    @RequiredArgsConstructor
    private static class NewData {
        final String scheme;
        final byte[] data;
    }

    @RequiredArgsConstructor
    private static class SmartXMLStreamWriter implements AutoCloseable {
        final XMLStreamWriter writer;

        @Override
        public void close() throws Exception {
            writer.close();
        }
    }

    @RequiredArgsConstructor
    private static class SmartBlob implements AutoCloseable {
        final Blob blob;

        @Override
        public void close() throws Exception {
            blob.free();
        }
    }

    private List<Long> loadIds() throws Exception {
        List<Long> ids = new ArrayList<>();
        try (ResultSet oldIdResult = conn.createStatement().executeQuery("select id from old_data")) {
            while (oldIdResult.next()) {
                ids.add(oldIdResult.getLong(1));
            }
        }
        return ids;
    }

    private Blob loadOldContent(PreparedStatement selectOldContent, long id) throws Exception {
        selectOldContent.setLong(1, id);
        try (ResultSet oldContentResult = selectOldContent.executeQuery()) {
            oldContentResult.next();
            return oldContentResult.getBlob(1);
        }
    }

    private void oldContentToNewData(Reader oldContentReader, StringWriter newSchemeWriter,
                                     GZIPOutputStream newZippedDataOutput) throws Exception {
        try (
            SmartXMLStreamWriter newSchemeXMLWriter =
                    new SmartXMLStreamWriter(xmlFactory.createXMLStreamWriter(newSchemeWriter));
            SmartXMLStreamWriter newDataXMLWriter =
                    new SmartXMLStreamWriter(xmlFactory.createXMLStreamWriter(newZippedDataOutput, "utf-8"));
        ) {
            xmlParser.parse(new InputSource(oldContentReader), new DefaultHandler() {
                // Use newSchemeXMLWriter and newDataXMLWriter to split the XML
                // into the scheme (String) and the data (byte[])
            });
        }
    }

    private NewData generateNewDataFromOldContent(PreparedStatement selectOldContent, long id) throws Exception {
        try (
            SmartBlob oldContent = new SmartBlob(loadOldContent(selectOldContent, id));
            Reader oldContentReader = new InputStreamReader(
                    new GZIPInputStream(oldContent.blob.getBinaryStream()));
            StringWriter newSchemeWriter = new StringWriter();
            ByteArrayOutputStream newDataOutput = new ByteArrayOutputStream();
            GZIPOutputStream newZippedDataOutput = new GZIPOutputStream(newDataOutput);
        ) {
            oldContentToNewData(oldContentReader, newSchemeWriter, newZippedDataOutput);
            newZippedDataOutput.finish(); // complete the gzip stream before reading its bytes
            return new NewData(newSchemeWriter.toString(), newDataOutput.toByteArray());
        }
    }

    private void storeNewData(PreparedStatement insertNewContent, long id,
                              String newScheme, byte[] newData) throws Exception {
        try (
            StringReader newSchemeReader = new StringReader(newScheme);
            ByteArrayInputStream newDataInput = new ByteArrayInputStream(newData);
        ) {
            insertNewContent.setLong(1, id);
            insertNewContent.setCharacterStream(2, newSchemeReader, newScheme.length());
            insertNewContent.setBlob(3, newDataInput, newData.length);
            insertNewContent.executeUpdate();
        }
    }

    public void migrate() throws Exception {
        List<Long> ids = loadIds();
        try (
            PreparedStatement selectOldContent = conn.prepareStatement(
                    "select content from old_data where id = ?");
            PreparedStatement insertNewContent = conn.prepareStatement(
                    "insert into new_data (id, scheme, data) values (?, ?, ?)");
        ) {
            for (Long id : ids) {
                NewData newData = generateNewDataFromOldContent(selectOldContent, id);
                storeNewData(insertNewContent, id, newData.scheme, newData.data);
            }
        }
    }
}

Two wrappers, SmartXMLStreamWriter and SmartBlob, were written; they let try-with-resources automatically close the XMLStreamWriter and free the Blob.

But what if I run into more resources that don't implement AutoCloseable; will I have to write a wrapper every time? Bob turned to South for help. South thought for a moment and produced an original solution built on Java 8 features:

public class MigratorV5 {
    private Connection conn;             // Injected
    private SAXParser xmlParser;         // Injected
    private XMLOutputFactory xmlFactory; // Injected

    @RequiredArgsConstructor
    private static class NewData {
        final String scheme;
        final byte[] data;
    }

    private List<Long> loadIds() throws Exception {
        List<Long> ids = new ArrayList<>();
        try (ResultSet oldIdResult = conn.createStatement().executeQuery("select id from old_data")) {
            while (oldIdResult.next()) {
                ids.add(oldIdResult.getLong(1));
            }
        }
        return ids;
    }

    private Blob loadOldContent(PreparedStatement selectOldContent, long id) throws Exception {
        selectOldContent.setLong(1, id);
        try (ResultSet oldContentResult = selectOldContent.executeQuery()) {
            oldContentResult.next();
            return oldContentResult.getBlob(1);
        }
    }

    private void oldContentToNewData(Reader oldContentReader, StringWriter newSchemeWriter,
                                     GZIPOutputStream newZippedDataOutput) throws Exception {
        XMLStreamWriter newSchemeXMLWriter;
        XMLStreamWriter newDataXMLWriter;
        try (
            AutoCloseable fake1 = (newSchemeXMLWriter = xmlFactory.createXMLStreamWriter(newSchemeWriter))::close;
            AutoCloseable fake2 = (newDataXMLWriter = xmlFactory.createXMLStreamWriter(newZippedDataOutput, "utf-8"))::close;
        ) {
            xmlParser.parse(new InputSource(oldContentReader), new DefaultHandler() {
                // Use newSchemeXMLWriter and newDataXMLWriter to split the XML
                // into the scheme (String) and the data (byte[])
            });
        }
    }

    private NewData generateNewDataFromOldContent(PreparedStatement selectOldContent, long id) throws Exception {
        Blob oldContent;
        try (
            AutoCloseable fake = (oldContent = loadOldContent(selectOldContent, id))::free;
            Reader oldContentReader = new InputStreamReader(
                    new GZIPInputStream(oldContent.getBinaryStream()));
            StringWriter newSchemeWriter = new StringWriter();
            ByteArrayOutputStream newDataOutput = new ByteArrayOutputStream();
            GZIPOutputStream newZippedDataOutput = new GZIPOutputStream(newDataOutput);
        ) {
            oldContentToNewData(oldContentReader, newSchemeWriter, newZippedDataOutput);
            newZippedDataOutput.finish(); // complete the gzip stream before reading its bytes
            return new NewData(newSchemeWriter.toString(), newDataOutput.toByteArray());
        }
    }

    private void storeNewData(PreparedStatement insertNewContent, long id,
                              String newScheme, byte[] newData) throws Exception {
        try (
            StringReader newSchemeReader = new StringReader(newScheme);
            ByteArrayInputStream newDataInput = new ByteArrayInputStream(newData);
        ) {
            insertNewContent.setLong(1, id);
            insertNewContent.setCharacterStream(2, newSchemeReader, newScheme.length());
            insertNewContent.setBlob(3, newDataInput, newData.length);
            insertNewContent.executeUpdate();
        }
    }

    public void migrate() throws Exception {
        List<Long> ids = loadIds();
        try (
            PreparedStatement selectOldContent = conn.prepareStatement(
                    "select content from old_data where id = ?");
            PreparedStatement insertNewContent = conn.prepareStatement(
                    "insert into new_data (id, scheme, data) values (?, ?, ?)");
        ) {
            for (Long id : ids) {
                NewData newData = generateNewDataFromOldContent(selectOldContent, id);
                storeNewData(insertNewContent, id, newData.scheme, newData.data);
            }
        }
    }
}

Yes, yes, exactly what you thought: he used method references. The code turned out just awful. But at least you don't have to write wrappers, thought Bob, and wept.
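Stripped of the migration details, the trick is that AutoCloseable is a single-method interface, so any no-argument release method can stand in for close() via a method reference. A minimal, self-contained illustration:

import java.sql.Blob;

class MethodReferenceTrick {
    static void readBlob(Blob blob) throws Exception {
        // Blob has free() rather than close(), so it cannot go directly into
        // try-with-resources. But blob::free matches AutoCloseable's single
        // method, so this "fake" resource makes the block call blob.free().
        try (AutoCloseable fake = blob::free) {
            // ... work with blob.getBinaryStream() here ...
        }
    }
}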

And then his eye fell on an annotation he was already using everywhere: @RequiredArgsConstructor. Eureka! The Lombok library also has the @Cleanup annotation, born to console the Java programmer who has lost all hope. At compile time it wraps the rest of the variable's scope in try-finally and inserts the code that safely releases the resource. Moreover, it can work with any release method, be it close(), free() or anything else; you only have to name the method (and it is clever enough to fail compilation if it cannot find a suitable one).
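In sketch form, the transformation looks roughly like this (simplified; Lombok's actual generated code differs in details):

import lombok.Cleanup;
import java.io.FileInputStream;
import java.io.InputStream;

class CleanupSketch {
    // What you write: @Cleanup closes `in` at the end of the enclosing scope.
    // For other release methods: @Cleanup("free") Blob oldContent = ...;
    void withLombok() throws Exception {
        @Cleanup InputStream in = new FileInputStream("doc.xml");
        // ... use in ...
    }

    // Roughly what ends up in the bytecode (simplified).
    void whatItCompilesTo() throws Exception {
        InputStream in = new FileInputStream("doc.xml");
        try {
            // ... use in ...
        } finally {
            if (in != null) {
                in.close();
            }
        }
    }
}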

And Bob rewrote the problem areas using @Cleanup:

public class MigratorV6 {
    private Connection conn;             // Injected
    private SAXParser xmlParser;         // Injected
    private XMLOutputFactory xmlFactory; // Injected

    @RequiredArgsConstructor
    private static class NewData {
        final String scheme;
        final byte[] data;
    }

    private List<Long> loadIds() throws Exception {
        List<Long> ids = new ArrayList<>();
        try (ResultSet oldIdResult = conn.createStatement().executeQuery("select id from old_data")) {
            while (oldIdResult.next()) {
                ids.add(oldIdResult.getLong(1));
            }
        }
        return ids;
    }

    private Blob loadOldContent(PreparedStatement selectOldContent, long id) throws Exception {
        selectOldContent.setLong(1, id);
        try (ResultSet oldContentResult = selectOldContent.executeQuery()) {
            oldContentResult.next();
            return oldContentResult.getBlob(1);
        }
    }

    private void oldContentToNewData(Reader oldContentReader, StringWriter newSchemeWriter,
                                     GZIPOutputStream newZippedDataOutput) throws Exception {
        @Cleanup XMLStreamWriter newSchemeXMLWriter = xmlFactory.createXMLStreamWriter(newSchemeWriter);
        @Cleanup XMLStreamWriter newDataXMLWriter = xmlFactory.createXMLStreamWriter(newZippedDataOutput, "utf-8");
        xmlParser.parse(new InputSource(oldContentReader), new DefaultHandler() {
            // Use newSchemeXMLWriter and newDataXMLWriter to split the XML
            // into the scheme (String) and the data (byte[])
        });
    }

    private NewData generateNewDataFromOldContent(PreparedStatement selectOldContent, long id) throws Exception {
        @Cleanup("free") Blob oldContent = loadOldContent(selectOldContent, id);
        try (
            Reader oldContentReader = new InputStreamReader(
                    new GZIPInputStream(oldContent.getBinaryStream()));
            StringWriter newSchemeWriter = new StringWriter();
            ByteArrayOutputStream newDataOutput = new ByteArrayOutputStream();
            GZIPOutputStream newZippedDataOutput = new GZIPOutputStream(newDataOutput);
        ) {
            oldContentToNewData(oldContentReader, newSchemeWriter, newZippedDataOutput);
            newZippedDataOutput.finish(); // complete the gzip stream before reading its bytes
            return new NewData(newSchemeWriter.toString(), newDataOutput.toByteArray());
        }
    }

    private void storeNewData(PreparedStatement insertNewContent, long id,
                              String newScheme, byte[] newData) throws Exception {
        try (
            StringReader newSchemeReader = new StringReader(newScheme);
            ByteArrayInputStream newDataInput = new ByteArrayInputStream(newData);
        ) {
            insertNewContent.setLong(1, id);
            insertNewContent.setCharacterStream(2, newSchemeReader, newScheme.length());
            insertNewContent.setBlob(3, newDataInput, newData.length);
            insertNewContent.executeUpdate();
        }
    }

    public void migrate() throws Exception {
        List<Long> ids = loadIds();
        try (
            PreparedStatement selectOldContent = conn.prepareStatement(
                    "select content from old_data where id = ?");
            PreparedStatement insertNewContent = conn.prepareStatement(
                    "insert into new_data (id, scheme, data) values (?, ?, ?)");
        ) {
            for (Long id : ids) {
                NewData newData = generateNewDataFromOldContent(selectOldContent, id);
                storeNewData(insertNewContent, id, newData.scheme, newData.data);
            }
        }
    }
}

Satisfied with this elegant and, most importantly, out-of-the-box solution, Bob made the long-awaited commit and sent the code off for review.

Nothing foreshadowed trouble. But trouble always lurks around the corner: the commit did not pass review. South and Andrew did not approve of @Cleanup. There are only two places where non-AutoCloseable resources are used, they said. What does this buy us? We don't like this magic! How will we debug the code if something goes wrong? And so on. Bob fought back fiercely, but all his attempts were in vain. Then he made one more attempt to prove its convenience and rolled out the following code:

public class MigratorV7 {
    private Connection conn;             // Injected
    private SAXParser xmlParser;         // Injected
    private XMLOutputFactory xmlFactory; // Injected

    public void migrate() throws Exception {
        @Cleanup PreparedStatement selectOldContent = conn.prepareStatement(
                "select content from old_data where id = ?");
        @Cleanup PreparedStatement insertNewContent = conn.prepareStatement(
                "insert into new_data (id, scheme, data) values (?, ?, ?)");
        @Cleanup ResultSet oldIdResult = conn.createStatement().executeQuery("select id from old_data");
        while (oldIdResult.next()) {
            long id = oldIdResult.getLong(1);
            selectOldContent.setLong(1, id);
            @Cleanup ResultSet oldContentResult = selectOldContent.executeQuery();
            oldContentResult.next();
            @Cleanup("free") Blob oldContent = oldContentResult.getBlob(1);
            @Cleanup Reader oldContentReader = new InputStreamReader(
                    new GZIPInputStream(oldContent.getBinaryStream()));
            @Cleanup StringWriter newSchemeWriter = new StringWriter();
            @Cleanup XMLStreamWriter newSchemeXMLWriter = xmlFactory.createXMLStreamWriter(newSchemeWriter);
            ByteArrayOutputStream newDataOutput = new ByteArrayOutputStream();
            @Cleanup GZIPOutputStream newZippedDataOutput = new GZIPOutputStream(newDataOutput);
            @Cleanup XMLStreamWriter newDataXMLWriter = xmlFactory.createXMLStreamWriter(newZippedDataOutput, "utf-8");
            xmlParser.parse(new InputSource(oldContentReader), new DefaultHandler() {
                // Use newSchemeXMLWriter and newDataXMLWriter to split the XML
                // into the scheme (String) and the data (byte[])
            });
            newZippedDataOutput.finish(); // complete the gzip stream before reading its bytes
            String newScheme = newSchemeWriter.toString();
            byte[] newData = newDataOutput.toByteArray();
            @Cleanup StringReader newSchemeReader = new StringReader(newScheme);
            @Cleanup ByteArrayInputStream newDataInput = new ByteArrayInputStream(newData);
            insertNewContent.setLong(1, id);
            insertNewContent.setCharacterStream(2, newSchemeReader, newScheme.length());
            insertNewContent.setBlob(3, newDataInput, newData.length);
            insertNewContent.executeUpdate();
        }
    }
}

Yes, yes: he removed all the helper methods and went back to plain sequential, procedural code. He didn't even get around to checking that it actually worked, of course; he was too eager to show off its simplicity. This code, he figured, would be clear in a month not only to him, but to anyone else who had to read it.

But once again he found no support. However hard he beat against the wall, the wall was stronger, and he gave up. In the end, the MigratorV5 code was rolled out to production: the one where the Java 8 features are used so clumsily.

Epilogue.

Of course, the code shown here is far from ideal and could be polished further; some parts could be rewritten quite differently, for example using a template (execute-around) approach. The last version is essentially pure procedural style, which is not great (although it does read plainly from top to bottom). But that is not the point. @Cleanup is a handy annotation that helps precisely in those moments when we cannot use try-with-resources: it saves us from extra nesting of code blocks when we don't want to split operations into methods. You shouldn't get carried away with it, but when it is genuinely needed, why not?
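For instance, one way to read the "rewrite it differently" remark is an execute-around helper that takes the release action as a parameter; a sketch under that assumption (withResource and the functional interfaces below are made-up names, not a library API):

import java.sql.Blob;
import javax.xml.stream.XMLStreamWriter;

class ExecuteAround {
    @FunctionalInterface
    interface ThrowingConsumer<T> {
        void accept(T t) throws Exception;
    }

    @FunctionalInterface
    interface Releaser<T> {
        void release(T t) throws Exception;
    }

    // Made-up helper: run the body, then release the resource in finally,
    // whatever the release method happens to be called.
    static <T> void withResource(T resource, Releaser<T> releaser, ThrowingConsumer<T> body)
            throws Exception {
        try {
            body.accept(resource);
        } finally {
            releaser.release(resource);
        }
    }

    static void example(Blob blob, XMLStreamWriter writer) throws Exception {
        withResource(blob, Blob::free, b -> {
            // ... read from b.getBinaryStream() ...
        });
        withResource(writer, XMLStreamWriter::close, w -> {
            // ... write XML with w ...
        });
    }
}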

Source: https://habr.com/ru/post/339046/

