
Flutter Application Performance Testing

Out of the box, Flutter is fast, but does that mean you never need to think about performance? No. It is entirely possible to write Flutter applications that are slow. On the other hand, you can also get the most out of the framework and make your applications not only fast but also efficient, consuming less CPU time and battery.



This is what we want to see: a statistically significant result of comparing two versions of your application for some meaningful metric. Read on to find out how to get this.


There are some general recommendations for optimizing performance in Flutter.



The sad truth is that for many performance optimization questions, the answer is "it depends." Is this particular optimization worth the effort and maintenance cost for this particular widget? Does this particular approach make sense in this particular situation?


The only useful answer to these questions is to test and measure: quantify the performance impact of each choice and make decisions based on that data.


The good news is that Flutter provides excellent performance profiling tools, such as Dart DevTools (currently in preview), which includes Flutter Inspector; you can also use Flutter Inspector directly from Android Studio (with the Flutter plugin installed). There is Flutter Driver for driving your application in automated tests, and profile mode for recording performance information.


The bad news is that modern smartphones are “too” smart.


The problem with governors


Quantifying the performance of a Flutter application is especially difficult because of the CPU and GPU governors on iOS and Android. These system-level daemons adjust the clock speed of the CPU and GPU according to load. Of course, this is mostly a good thing, since it provides smooth operation with lower battery consumption.


The downside is that you can make your application appear much faster by significantly increasing the amount of work it does.


Below, you can see how adding meaningless print calls to a loop in the application caused the governor to switch the CPU to a higher frequency, which made the application much faster and its performance more predictable.
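To make the effect concrete, here is a sketch of the kind of meaningless work involved. The BusyTile widget and the loop are hypothetical illustrations, not the actual Developer Quest code:

```dart
import 'package:flutter/material.dart';

// A hypothetical widget with pointless extra work in build().
class BusyTile extends StatelessWidget {
  const BusyTile({Key key, this.label}) : super(key: key);

  final String label;

  @override
  Widget build(BuildContext context) {
    // Useless work: these prints contribute nothing to the UI, yet the
    // extra load can push the CPU governor to a higher clock speed and
    // make measured build times *drop*.
    for (var i = 0; i < 10; i++) {
      print('meaningless $i');
    }
    return ListTile(title: Text(label));
  }
}
```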


The problem with governors: by default, you cannot trust your numbers. In this chart, individual runs are along the x-axis (labeled with their exact start time) and build times along the y-axis. As you can see, inserting some completely unnecessary print statements causes build times to go down, not up.


In this experiment, the objectively worse code resulted in faster build times (see above), faster rasterization times, and a higher frame rate. When worse code leads to better performance numbers, you cannot rely on those numbers for guidance.


This is just one example of how mobile application performance testing can be unintuitive and difficult.


Below I share some tips that I gathered while working on Flutter Developer Quest for Google I/O.


General tips



CPU/GPU governors


As discussed above, modern mobile operating systems change the frequency of each CPU and GPU at their disposal according to load and some other heuristics. (For example, touching the screen usually ramps up an Android phone's clock speed.)


On Android, you can disable these governors. We call this process "scaling lock".
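The exact commands depend on the device. Below is a minimal sketch, assuming a rooted Android device that exposes the usual cpufreq sysfs paths and has adb on your PATH; governor names and the number of cores vary by device, and this is an illustration of the idea rather than the script used for the measurements in this article:

```dart
import 'dart:io';

// Run a single command in the device shell over adb and fail loudly.
Future<void> adbShell(String command) async {
  final result = await Process.run('adb', ['shell', command]);
  if (result.exitCode != 0) {
    throw Exception('adb shell "$command" failed: ${result.stderr}');
  }
}

Future<void> main() async {
  // Restart adbd with root permissions so we can write to sysfs.
  await Process.run('adb', ['root']);

  // Pin each core to the "performance" governor so its clock speed no
  // longer changes with load. Adjust the core count for your device.
  for (var cpu = 0; cpu < 8; cpu++) {
    await adbShell('echo performance > '
        '/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor');
  }
}
```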




Early version of Developer Quest, tested using Flutter Driver on my desktop.


Flutter Driver


Flutter Driver lets you test your application automatically. Read the "Performance profiling" section on flutter.dev to learn how to use it to profile your application.
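A minimal sketch of a Flutter Driver performance test is shown below. The widget key 'item-list' and the output name 'scrolling' are hypothetical; replace them with keys and names from your own app, and run the test with flutter drive in profile mode:

```dart
// test_driver/perf_test.dart
import 'package:flutter_driver/flutter_driver.dart';
import 'package:test/test.dart';

void main() {
  group('scrolling performance', () {
    FlutterDriver driver;

    setUpAll(() async {
      // Connects to the instrumented app started by `flutter drive`.
      driver = await FlutterDriver.connect();
    });

    tearDownAll(() async {
      await driver?.close();
    });

    test('records a timeline while scrolling', () async {
      // traceAction records a timeline while the callback runs.
      final timeline = await driver.traceAction(() async {
        // Hypothetical widget key; use a key that exists in your app.
        await driver.scroll(find.byValueKey('item-list'), 0, -500,
            Duration(milliseconds: 300));
      });

      // Save both the aggregated summary and the raw timeline JSON.
      final summary = TimelineSummary.summarize(timeline);
      await summary.writeSummaryToFile('scrolling', pretty: true);
      await summary.writeTimelineToFile('scrolling', pretty: true);
    });
  });
}
```

The raw timeline JSON written here is the file you can load into chrome://tracing, as described below; the summary file contains aggregated metrics.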



Chrome timeline tool to check the results of profiling in Flutter.


Timeline


The timeline is the raw output of your profiling run. Flutter writes this information to a JSON file that can be loaded into chrome://tracing .



Metrics


It is good to look at as many metrics as possible, but I found that some are more useful than others.
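The summary file produced by Flutter Driver already aggregates several of them. Here is a minimal sketch that reads it and prints a few frame-build metrics; the file name follows from the example above, and the JSON keys are the ones TimelineSummary writes at the time of writing, so check your own build/ output if they differ:

```dart
import 'dart:convert';
import 'dart:io';

void main() {
  // Default output location for writeSummaryToFile('scrolling', ...).
  final file = File('build/scrolling.timeline_summary.json');
  final summary = jsonDecode(file.readAsStringSync()) as Map<String, dynamic>;

  print('average frame build time (ms): '
      '${summary['average_frame_build_time_millis']}');
  print('worst frame build time (ms): '
      '${summary['worst_frame_build_time_millis']}');
  print('missed frame budget count: '
      '${summary['missed_frame_build_budget_count']}');
}
```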



Results


Once everything is set up, you can confidently compare commits and run experiments. Below you can see the answer to the common dilemma: "Is this optimization worth the maintenance cost?"


I think that in this particular case the answer is yes. Thanks to just a few lines of code, each automated walkthrough of our application takes, on average, 12% less CPU time.


But, and this is the main message of this article, measuring a different optimization may show something completely different. It is tempting, but wrong, to extrapolate a single performance measurement too broadly.


In other words: "it depends." And we have to come to terms with that.



Source: https://habr.com/ru/post/451840/

