
$mol_app_bench: preparing JS benchmarks quickly and easily

Hello, my name is Dmitry Karlovsky and I am... still quite the gourmet. I like to cook exquisite dishes that elegantly and simply solve the everyday problems that have long since set everyone's teeth on edge. You can talk at length about the benefits of various approaches, cheaper support, faster development and simpler debugging, but all of that remains a rather subjective assessment that is hard to weigh. Therefore, sooner or later (but, as a rule, prematurely), the whole discussion comes down to more or less measurable quantities: execution speed, load speed and other speeds. And not only do you need to produce several implementations on different technologies so that there is something to compare, it would also be nice to draw an interface that presents the results in a human-readable way. And all of this takes time, which is always in short supply, especially if you want to do it well.


To simplify the development of benchmarks, we have extracted their common part into a separate application that draws the entire interface, from choosing the variants to be tested to the visual presentation of the results, while the variable part is plugged in from the outside and implements a fairly simple interface. All the state you click together is stored in the URL, so it is easy to share with other gourmets. In addition, localization and sorting of results by various criteria are supported: a real feast for all lovers of fast food.


Hurry up, or all the tasty bits will be eaten.


Next you will find out:

- Bachelor breakfast: simple and tasteful
- Dinner for the whole family: meals for every taste
- The juiciest part: $mol_app_bench itself
- Enjoy your meal



Bachelor breakfast


Simple and tasteful


Suppose we want to know how much it costs to add various HTML elements to the DOM. For example, did you know that in Chrome the element "Q" is 5 times heavier than "BLOCKQUOTE", and the element "IFRAME" is 20 times heavier than "OBJECT"? To find out these and other interesting facts, let's implement this simple benchmark.


The whole benchmark is just an html page that you can host on any site. $mol_app_bench will open it in a frame and interact with it via messages. The skeleton of the page is quite trivial:


    <!doctype html>
    <meta charset="utf-8" />
    <style>
        * {
            /* take every element out of the flow, so that inserting them does not relayout the others */
            position: absolute;
        }
    </style>
    <script>
        // the benchmark code goes here
    </script>

It is necessary to specify in advance how many elements we plan to display; this number will come in handy later:


 var count = 500 

If the benchmark is opened directly, rather than through $mol_app_bench, we simply redirect to $mol_app_bench with an instruction to open the current benchmark:


    if( window.parent === window ) document.location =
        '//eigenmethod.imtqy.com/mol/app/bench/#bench=' + encodeURIComponent( location.href )

The benchmark and the interface communicate through the simplest RPC, with messages of the form [ 'method_name' , ...args ]. For simplicity, we will just call the function with the corresponding name, passing it the remaining elements as parameters:


    window.addEventListener( 'message' , function( event ) {
        window[ event.data[0] ].apply( null , event.data.slice( 1 ) )
    } )
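
For example, one complete exchange for a single measurement could look like this (the step name, the timing and the sandboxFrame variable are illustrative, not taken from the article):

    // interface side: ask the benchmark to run the 'fill' step for the 'div' variant
    sandboxFrame.contentWindow.postMessage( [ 'fill' , 'div' ] , '*' )

    // benchmark side: report that the step has finished and pass the result back
    parent.postMessage( [ 'done' , '12 ms' ] , '*' )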

When $mol_app_bench starts, it will open our benchmark in the frame and send us the message [ 'meta' ] , therefore we will implement the corresponding function:


 function meta() { done( metaData ) } 

$mol_app_bench expects the answer to an RPC call as a message of the form [ 'done' , result ] , so everything is simple here too:


 function done( result ) { parent.postMessage( [ 'done' , result ] , '*' ) } 

The meta method should return the following meta information about the benchmark to $mol_app_bench:

- title : the benchmark name
- descr : a description (markdown is supported)
- samples : the implementation variants that can be measured
- steps : the measurement steps that will be run for each variant

All texts are given per language code. In our case, the meta information looks like this:


    var metaData = {
        title : {
            'en' : 'HTML Elements rendering time' ,
            'ru' : '  HTML ' ,
        } ,
        descr : {
            'en' : 'Simply add **' + count + ' elements**.' ,
            'ru' : '    **' + count + ' **  .' ,
        } ,
        samples : { } ,
        steps : {
            'fill' : {
                title : {
                    'en' : 'Adding elements' ,
                    'ru' : ' ' ,
                } ,
            } ,
        } ,
    }

We have only one measurement step here, fill, and the implementation variants are not listed at all, since they will be generated programmatically from the list of element names. Here is that list, by the way:


    var tagNames = [
        'a' , 'abbr' , 'acronym' , 'address' , 'applet' , 'area' , 'article' , 'aside' , 'audio' , 'b' ,
        'base' , 'basefont' , 'bdi' , 'bdo' , 'bgsound' , 'big' , 'blink' , 'blockquote' , 'body' , 'br' ,
        'button' , 'canvas' , 'caption' , 'center' , 'cite' , 'code' , 'col' , 'colgroup' , 'command' , 'content' ,
        'data' , 'datalist' , 'dd' , 'del' , 'details' , 'dfn' , 'dialog' , 'dir' , 'div' , 'dl' ,
        'dt' , 'element' , 'em' , 'embed' , 'fieldset' , 'figcaption' , 'figure' , 'font' , 'footer' , 'form' ,
        'frame' , 'frameset' , 'head' , 'header' , 'hgroup' , 'hr' , 'html' , 'i' , 'iframe' , 'image' ,
        'img' , 'input' , 'ins' , 'isindex' , 'kbd' , 'keygen' , 'label' , 'legend' , 'li' , 'link' ,
        'listing' , 'main' , 'map' , 'mark' , 'marquee' , 'menu' , 'menuitem' , 'meta' , 'meter' , 'multicol' ,
        'nav' , 'nobr' , 'noembed' , 'noframes' , 'noscript' , 'object' , 'ol' , 'optgroup' , 'option' , 'output' ,
        'p' , 'param' , 'picture' , 'plaintext' , 'pre' , 'progress' , 'q' , 'rp' , 'rt' , 'rtc' ,
        'ruby' , 's' , 'samp' , 'script' , 'section' , 'select' , 'shadow' , 'small' , 'source' , 'spacer' ,
        'span' , 'strike' , 'strong' , 'style' , 'sub' , 'summary' , 'sup' , 'table' , 'tbody' , 'td' ,
        'template' , 'textarea' , 'tfoot' , 'th' , 'thead' , 'time' , 'title' , 'tr' , 'track' , 'tt' ,
        'u' , 'ul' , 'var' , 'video' , 'wbr' , 'xmp'
    ]

Now let's extend the meta information with a config of implementation variants:


    tagNames.forEach( function( tagName ) {
        metaData.samples[ tagName ] = {
            title : { 'en' : tagName }
        }
    } )

After receiving the meta information, $mol_app_bench draws a menu of implementations, letting you choose which of them to measure. For each implementation it will sequentially invoke the methods corresponding to the step names, passing the name of the implementation variant as an argument. In our case, there is only one step:


    function fill( sample ) {

        var body = document.body
        while( body.firstChild ) {
            body.removeChild( body.firstChild )
        }

        requestAnimationFrame( function() {

            var start = Date.now()

            var frag = document.createDocumentFragment()
            for( var i = 0 ; i < count ; ++ i ) {
                frag.appendChild( document.createElement( sample ) )
            }
            body.appendChild( frag )

            setImmediate( function() {
                done( Date.now() - start + ' ms' )
            } )

        } )
    }

Here we first clear the document body, wait for the next animation frame so that the browser finishes its pending work, then create the required number of elements and add them to the DOM all at once. The browser performs some of the operations asynchronously, so via setImmediate we wait until it finishes and only then return the measured time.


However, setImmediate is a relatively new API that is not yet implemented in all browsers, so we implement its simplest version on top of the postMessage we are already using:


    var setImmediate_task

    function setImmediate( task ) {
        setImmediate_task = task
        postMessage( [ 'setImmediate_task' ] , '*' )
    }
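
Note that the benchmark never registers a separate listener for this message: the generic 'message' handler shown earlier receives [ 'setImmediate_task' ], looks up window[ 'setImmediate_task' ] (which is exactly the global var holding the callback) and calls it. For a page that does not route messages through such a handler, a self-contained variant of the same trick might look roughly like this (a sketch, not the code used in this benchmark):

    var setImmediate_queue = []

    // run the oldest queued task whenever our own tick message arrives
    window.addEventListener( 'message' , function( event ) {
        if( event.data !== 'setImmediate_tick' ) return
        var task = setImmediate_queue.shift()
        if( task ) task()
    } )

    function setImmediate( task ) {
        setImmediate_queue.push( task )
        postMessage( 'setImmediate_tick' , '*' )
    }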

That's all:



Dinner for the whole family


Meals for every taste


Suppose we want to know which JS framework is the fastest. To do this, we need to implement the same application with each of them and run the same usage scenarios on each, measuring how long they take. There are so many frameworks in JS that it would take more than a month to learn each of them and implement even the simplest application. Therefore, we will take ready-made applications from the ToDoMVC project. To do this, we fork it, add our benchmark and publish it on imtqy.com (the corresponding pull request is still waiting for its finest hour).


Our benchmark will open implementations in a separate frame:


 <iframe id="sandbox"></iframe> 

We will have 3 steps:



    var metaData = {
        title : {
            'en' : 'ToDoMVC workflow benchmark' ,
            'ru' : 'ToDoMVC -  ' ,
        } ,
        descr : {
            'en' : 'Sample applications is [ToDOMVC](todomvc.com) implementations. Benchmark creates **' + count + ' tasks** in sequence and then removes them.' ,
            'ru' : '    [ToDOMVC](todomvc.com)     .       **' + count + ' **    .' ,
        } ,
        samples : { } ,
        steps : {
            'start' : {
                title : {
                    'en' : 'Load and init' ,
                    'ru' : '  ' ,
                } ,
            } ,
            'fill' : {
                title : {
                    'en' : 'Tasks creating' ,
                    'ru' : ' ' ,
                } ,
            } ,
            'clear' : {
                title : {
                    'en' : 'Tasks removing' ,
                    'ru' : ' ' ,
                } ,
            } ,
        } ,
    }

Information about the implementation variants is stored in the file learn.json, which we load synchronously without much ceremony and use to extend the meta information:


    var xhr = new XMLHttpRequest
    xhr.open( 'get' , '../learn.json' , false )
    xhr.send()

    var learn = JSON.parse( xhr.responseText )

    for( var lib in learn ) {
        if( lib === 'templates' ) continue

        learn[ lib ].examples.forEach( function( example ) {
            if( !/^examples\/[-a-zA-Z0-9_\/]+$/.test( example.url ) ) return

            metaData.samples[ example.url.replace( /(^examples\/|\/$)/g , '' ) ] = {
                title : { 'en' : learn[ lib ].name + ' ' + example.name }
            }
        } )
    }
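
For reference, the loop above assumes learn.json entries shaped roughly like this (an illustrative fragment inferred from how the fields are used, not the actual file contents):

    {
        "backbone" : {
            "name" : "Backbone.js" ,
            "examples" : [
                { "name" : "Example" , "url" : "examples/backbone" }
            ]
        }
    }

The top-level "templates" key and any example whose url does not look like a plain examples/ path are skipped.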

Before getting to the measurements themselves, let's save a reference to the frame and write down the selectors for finding the necessary elements in the applications:


    var sandbox = document.getElementById( 'sandbox' )

    var selector = {
        adder : '#new-todo,.new-todo,.todo__new,[mol_app_todomvc_add]' ,
        adderForm : '#todo-form,.todo-form,#header form' ,
        dropper : '.destroy,[mol_app_todomvc_task_row_drop]' ,
    }

We detect the start of the application by the appearance of an element for adding tasks:


    function start( sample ) {
        var start = Date.now()

        sandbox.src = '../examples/' + sample + '/'

        sandbox.onload = function() {
            step()

            function step() {
                if( sandbox.contentDocument.querySelector( selector.adder ) ) {
                    done( Date.now() - start + ' ms' )
                } else {
                    setTimeout( step , 10 )
                }
            }
        }
    }

To add a task, we simulate all the events that occur in the browser when the user enters the name and presses "ENTER":


    function fill( sample ) {

        var adder = sandbox.contentDocument.querySelector( selector.adder )
        var adderForm = sandbox.contentDocument.querySelector( selector.adderForm )

        var i = 1
        var start = Date.now()

        step()

        function step() {

            adder.value = 'Something to do ' + i
            adder.dispatchEvent( new Event( 'input' , { bubbles : true } ) )
            adder.dispatchEvent( new Event( 'change' , { bubbles : true } ) )

            var event = new Event( 'keydown' , { bubbles : true } )
            event.keyCode = 13
            event.which = 13
            event.key = 'Enter'
            adder.dispatchEvent( event )

            var event = new Event( 'keypress' , { bubbles : true } )
            event.keyCode = 13
            event.which = 13
            event.key = 'Enter'
            adder.dispatchEvent( event )

            var event = new Event( 'compositionend' , { bubbles : true } )
            event.keyCode = 13
            event.which = 13
            event.key = 'Enter'
            adder.dispatchEvent( event )

            var event = new Event( 'keyup' , { bubbles : true } )
            event.keyCode = 13
            event.which = 13
            event.key = 'Enter'
            adder.dispatchEvent( event )

            var event = new Event( 'blur' , { bubbles : true } )
            adder.dispatchEvent( event )

            if( adderForm ) {
                var event = new Event( 'submit' , { bubbles : true } )
                event.keyCode = 13
                event.which = 13
                event.key = 'Enter'
                adderForm.dispatchEvent( event )
            }

            if( ++i <= count ) setImmediate( step )
            else done( Date.now() - start + ' ms' )
        }
    }

We fire this many events because different frameworks listen to different events to implement the same functionality. The set and configuration of events had to be picked by hand, inspecting the implementations as the benchmark was built. Most implementations now behave correctly, but some still misbehave, which is visible to the naked eye when they run. Pull requests fixing the benchmark's behavior with those implementations would be very welcome ;-)
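
The repeated event construction in fill could be factored into a small helper, for instance along these lines (a hypothetical refactoring, not part of the original benchmark):

    // dispatch one Enter-flavoured event of the given type on a target
    function fireEnter( target , type ) {
        var event = new Event( type , { bubbles : true } )
        event.keyCode = 13
        event.which = 13
        event.key = 'Enter'
        target.dispatchEvent( event )
    }

    // which would shorten step() to something like:
    // [ 'keydown' , 'keypress' , 'compositionend' , 'keyup' ].forEach( function( type ) {
    //     fireEnter( adder , type )
    // } )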


And the last step is the sequential removal of all the tasks. Here everything is simple:


    function clear( sample ) {
        var start = Date.now()

        step()

        function step() {
            var dropper = sandbox.contentDocument.querySelector( selector.dropper )
            if( !dropper ) return done( Date.now() - start + ' ms' )

            dropper.dispatchEvent( new Event( 'mousedown' , { bubbles : true } ) )
            dropper.dispatchEvent( new Event( 'mouseup' , { bubbles : true } ) )
            dropper.dispatchEvent( new Event( 'click' , { bubbles : true } ) )

            setImmediate( step )
        }
    }

That's all; it turned out not much more complicated, did it?



The juiciest part: $mol_app_bench


My precious


First, let's outline the structure of our application. It will consist of 2 panels: the main one with the benchmark and the results, and an additional one with the menu for choosing implementations:


    $mol_app_bench $mol_view
        sub /
            <= Addon_page $mol_page
            <= Main_page $mol_page

For the additional panel, let's set the title and the list of implementations itself, which we will generate programmatically later:


    Addon_page $mol_page
        title <= addon_title @ \Samples
        body /
            <= Menu $mol_list
                rows <= menu_options /

Do not be put off by the fact that we hard-coded the English text right here. Thanks to the @ marker, at build time it will be extracted into a file of English strings.
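
Such a file is an ordinary mapping from generated keys to strings; it might look roughly like this (the exact key naming is an assumption for illustration, not taken from the article):

    {
        "$mol_app_bench_addon_title": "Samples",
        "$mol_app_bench_result_col_title_sample": "Sample"
    }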


Next, let us specify that each menu item will be nothing more than a checkbox that allows you to enable or disable measurements of the corresponding implementation:


    Menu_option!id $mol_check_box
        checked?val <=> menu_option_checked!id?val false
        label /
            <= menu_option_title!id \

As you can see, here we used one-way and two-way bindings to interact with nested components. The name to the right of the arrow is a property of our component (in this case, the application component), and the one to the left is a property of the nested component.


In the main panel, we will display an information block with a description and results, as well as a sandbox into which applications will be loaded:


    Main_page $mol_page
        title <= title -
        body /
            <= Inform $mol_view
            <= Sandbox $mol_view
                dom_name \iframe

As you can see, the title of this panel will be the title of the entire application, which will also be displayed in the name of the browser tab.


In the information block we will render the description through the markdown rendering component, and output the results through the comparison table component:


    Inform $mol_view
        sub /
            <= Descr $mol_text
                text <= description \
            <= Result $mol_bench
                result <= result null
                col_head_label!id /
                    <= result_col_title!id /
                col_sort?val <=> result_col_sort?val \

As a finishing touch, let's set the title of the column with the implementation names in the comparison table; we will use it a bit later:


 result_col_title_sample @ \Sample 

Combine all these pieces of code and you get the complete description of the application structure.


Now we can move on to the behavior; to add it, it suffices to extend the class automatically generated from the structure description:


    namespace $.$mol {
        export class $mol_app_bench extends $.$mol_app_bench {
            // the behavior will go here
        }
    }

First of all, we need to get a reference to the benchmark from the address bar:


    @ $mol_mem()
    bench() {
        return $mol_state_arg.value( this.state_key( 'bench' ) ) || 'list/'
    }

As you can see, if no benchmark is specified, the list-rendering benchmark for various frameworks opens, which lives right next to the app.


Next, we need a reference to the sandbox, and not just any reference, but one with the benchmark already loaded into it, so we write the property value only after the frame's 'load' event has fired:


    @ $mol_mem()
    sandbox( next? : HTMLIFrameElement , force? : $mol_atom_force ) : HTMLIFrameElement {

        const next2 = this.Sandbox().dom_node() as HTMLIFrameElement
        next2.src = this.bench()

        next2.onload = event => {
            next2.onload = null
            this.sandbox( next2 , $mol_atom_force )
        }

        throw new $mol_atom_wait( `Loading sandbox...` )
    }

The implementation of the sandbox property turned out to be asynchronous, but this asynchrony is isolated from the surrounding code, so the external interface of the property remains synchronous as before, thanks to the reactivity magic.


The greatest difficulty is hidden in the "result of a remote procedure call" property. Besides the fact that it also encapsulates asynchrony, it has one more restriction: no more than one procedure may be executed at a time, since there is only one sandbox, and running several benchmarks simultaneously is a bad omen. Therefore, the currently executing command is written to the command_current property, and command_result first checks whether another command is currently running, and if so, waits for it to complete.


    'command_current()' : any[]

    @ $mol_mem()
    command_current( next? : any[] , force? : $mol_atom_force ) {
        if( this['command_current()'] ) return
        return next
    }

    @ $mol_mem_key()
    command_result< Result >( command : any[] , next? : Result ) : Result {

        const sandbox = this.sandbox()
        sandbox.valueOf()

        if( next !== void 0 ) return next

        const current = this.command_current( command )
        if( current !== command ) throw new $mol_atom_wait( `Waiting for ${ JSON.stringify( current ) }...` )

        requestAnimationFrame( ()=> {
            sandbox.contentWindow.postMessage( command , '*' )

            window.onmessage = event => {
                if( event.data[ 0 ] !== 'done' ) return

                window.onmessage = null
                this.command_current( null , $mol_atom_force )
                this.command_result( command , event.data[ 1 ] )
            }
        } )

        throw new $mol_atom_wait( `Running ${ command }...` )
    }

Connoisseurs of multithreading will recognize here the "mutex" synchronization primitive, implemented via a "compare and swap" mechanism. And this is no accident, because $mol_atom by default tries to parallelize tasks where possible. Understanding this code may be difficult without understanding the reactivity magic, so I recommend reading the article mentioned above. The main thing is that all the complexity is encapsulated in this property, and the rest of the work will be simple and pleasant. For example, let's get the meta information from the benchmark:


    meta() {

        type meta = {
            title : { [ lang : string ] : string }
            descr : { [ lang : string ] : string }
            samples : {
                [ sample : string ] : {
                    title : { [ lang : string ] : string }
                }
            }
            steps : {
                [ step : string ] : {
                    title : { [ lang : string ] : string }
                }
            }
        }

        return this.command_result< meta >([ 'meta' ])
    }

And now, we get a list of all implementations, sorted by their names:


    @ $mol_mem()
    samples_all( next? : string[] ) {
        return Object.keys( this.meta().samples ).sort( ( a , b )=> {
            const titleA = this.menu_option_title( a ).toLowerCase()
            const titleB = this.menu_option_title( b ).toLowerCase()
            return titleA > titleB ? 1 : titleA < titleB ? -1 : 0
        } )
    }

The names of the implementations are based on the current language:


    menu_option_title( sample : string ) {
        const title = this.meta().samples[ sample ].title
        return title[ $mol_locale.lang() ] || title[ 'en' ]
    }

We get the name and description of the benchmark in the same way:


    @ $mol_mem()
    title() {
        const title = this.meta().title
        return title[ $mol_locale.lang() ] || title[ 'en' ] || super.title()
    }

    @ $mol_mem()
    description() {
        const descr = this.meta().descr
        return descr[ $mol_locale.lang() ] || descr[ 'en' ] || ''
    }

It's time to create a list of menu items:


    menu_options() {
        return this.samples_all().map( sample => this.Menu_option( sample ) )
    }

Whether a menu item is checked will depend on the list of selected implementations:


    @ $mol_mem_key()
    menu_option_checked( sample : string , next? : boolean ) {

        if( next === void 0 ) return this.samples().indexOf( sample ) !== -1

        if( next ) this.samples( this.samples().concat( sample ) )
        else this.samples( this.samples().filter( s => s !== sample ) )

        return next
    }
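
To make the getter/setter duality concrete, a hypothetical usage could look like this (the sample id 'react' is made up for illustration):

    // read: is this implementation currently selected?
    const checked = this.menu_option_checked( 'react' )

    // write: select it, which adds 'react' to samples() and therefore to the URL
    this.menu_option_checked( 'react' , true )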

As the usage above shows, the property implementation is both a getter and a setter at once. The list of selected implementations is stored in the URL in a similar way:


    @ $mol_mem()
    samples( next? : string[] ) : string[] {
        const arg = $mol_state_arg.value( this.state_key( 'sample' ) , next && next.join( '~' ) )
        return arg ? arg.split( '~' ).sort() : []
    }

Before proceeding to the measurements themselves, let's write down all the steps we are going to go through:


    @ $mol_mem()
    steps( next? : string[] ) {
        return Object.keys( this.meta().steps )
    }

Let's go through all the steps for each implementation:


    @ $mol_mem_key()
    result_sample( sampleId : string ) {

        const result : { [ key : string ] : any } = {
            sample : this.menu_option_title( sampleId ) ,
        }

        this.steps().forEach( step => {
            result[ step ] = this.command_result<string>([ step , sampleId ])
        } )

        return result
    }

Now let's go through all the selected implementations and form the overall result, which is passed to the comparison table component, as specified in the application structure description:


    @ $mol_mem()
    result() {

        const result : { [ sample : string ] : { [ step : string ] : any } } = {}

        this.samples().forEach( sample => {
            result[ sample ] = this.result_sample( sample )
        } )

        return result
    }

In the $mol_bench component we also redefined the col_head_label property, binding it to our result_col_title property, which by default returns an empty list. Let's redefine it so that it returns a localized column header:


    result_col_title( col_id : string ) {
        if( col_id === 'sample' ) return [ this.result_col_title_sample() ]

        const title = this.meta().steps[ col_id ].title
        return [ title[ $mol_locale.lang() ] || title[ 'en' ] ]
    }

In addition, we bound its col_sort property to our result_col_sort property. To store it in the URL, we redefine the latter as follows:


    @ $mol_mem()
    result_col_sort( next? : string ) {
        return $mol_state_arg.value( this.state_key( 'sort' ) , next )
    }

With the behavior sorted out, it remains only to season the dish with styles, targeting the automatically generated BEM attributes, and it can be served.


Enjoy your meal


Documentation on writing benchmarks and links to already implemented ones can be found on the $mol_app_bench page.


A few examples:



In the plans:



Don't hold back: suggest your ideas on what to test and how, point out what is missing from the generalized interface described here, try your own recipes and share them with the community.


I am constantly haunted by clever thoughts, but I am faster



Source: https://habr.com/ru/post/322162/

