
An "Import-Export" Draft

Good health to all Habr residents!


MODx Revolution is convenient in many ways. If in MODx Evolution you could do everything, then in MODx Revolution you can do absolutely everything; all it takes is imagination and patience. Still, after Revolution appeared, many people wondered how to drag content from one engine to the other. It is one thing if you have a dozen resources: copy-paste will do. It is quite another with content collections, catalogs, and the like.


Background

I had two collections: an anecdote collection and an epic-stories collection. In the first I gathered my favorite jokes; in the second, stories from "Yaplakal", "IThappens", and other entertaining portals. All of this ran on Evolution 1.0.5. One day, however, I moved my entire multi-domain site onto one engine and one database; in other words, I switched to Revolution. Naturally, the question of transferring the content came up. With the "about me" section and the music section everything was simple: copy-paste. Over the forum I did not worry at all, since it still runs on phpBB. But the anecdote and story collections had to be put on the back burner, because no amount of accumulated patience would have been enough...

Export

On the old site there lived a tiny snippet that pulled a random joke out of the anecdote collection. So, in effect, the collection could already export its data. Later I made a special page that exported the entire contents of the site in JSON format, and forgot about it. When the question of transferring the data came up, I remembered it.
Why JSON? Probably simply because I was tired of going through hell with XML parsers, especially since JSON comes with the simple functions json_encode and json_decode . This extremely convenient circumstance makes the JSON option far preferable to all the others.
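As a minimal sketch of that round trip (plain PHP, nothing MODx-specific; the sample data is made up):

 <?php
 // Encode a nested array into a JSON string, the way the export page does...
 $export = json_encode(array('items' => array(
     array('name' => 'First joke', 'content' => 'Some text')
 )));
 // ...and decode it back into an associative array on the importing side.
 $import = json_decode($export, true);
 echo $import['items'][0]['name']; // prints: First joke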

With export to JSON, everything is simple. Here is the content of the export page (it uses the blank template):
{"items":[ [[Ditto? &startID=`162` &tpl=`cat` &tplLast=`catLast`]] ]} 

Content of the cat chunk:
  {
    "name":"[+pagetitle+]",
    "alias":"[+alias+]",
    "template":"[+template+]",
    "hidemenu":"[+hidemenu+]",
    "content":[ [!Ditto? &startID=`[+id+]` &tpl=`item` &tplLast=`itemLast`!] ]
  },

catLast is the same, only without the trailing comma. Content of the item chunk:
  {
    "name":"[+pagetitle+]",
    "alias":"[+alias+]",
    "template":"[+template+]",
    "hidemenu":"[+hidemenu+]",
    "content":"[+content:strip:noquotes+]"
  },

itemLast is the same, only without the trailing comma.
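For clarity, a two-level fragment of the export these chunks produce looks roughly like this (the titles and aliases here are made up):

 {"items":[
   { "name":"Jokes 2011", "alias":"jokes-2011", "template":"3", "hidemenu":"1",
     "content":[
       { "name":"Joke No. 1", "alias":"joke-1", "template":"4", "hidemenu":"1",
         "content":"The joke text, quotes escaped as &quot; ..." }
     ]
   }
 ]}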

The PHx modifier noquotes is this snippet:
 <?php
 // Escape double quotes as the &quot; entity so the exported JSON stays valid
 // (line breaks are handled by the :strip modifier applied before it).
 return str_replace('"','&quot;',$output);
 ?>

The result is quite a hefty file. The main thing is not to forget to set the content type on the export page: it must be text/javascript . There is probably a way to make Ditto output JSON directly, but there was no time to look into that question.
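In Evolution this content type is just a setting on the resource; for reference, outside MODx the same effect would come from a single header call:

 <?php
 // Serve the export with a JavaScript content type instead of text/html.
 header('Content-Type: text/javascript');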

Import

The file is ready. What next? Then I came across an article about building a social network on MODx and saw exactly how new documents can be created programmatically in MODx Revolution. An idea was born, and a snippet followed:

 <?php
 // Import of resources from a JSON file into MODx Revolution.

 // Create a single resource (plus its TV values) and return the new ID.
 function addItem($ctx,$pagetitle,$template,$isfolder,$hidemenu,$parent,$alias,$content,$td){
     global $modx;
     $newResource = $modx->newObject('modResource');
     $newResource->fromArray(array(
         'pagetitle'   => $pagetitle,
         'longtitle'   => $pagetitle,
         'content'     => $content,
         'template'    => $template,
         'isfolder'    => $isfolder,
         'hidemenu'    => $hidemenu,
         'parent'      => $parent,
         'published'   => '1',
         'alias'       => $alias,
         'context_key' => $ctx
     ));
     if ($newResource->save()) {
         $id = $newResource->get('id');
         $modx->cacheManager->refresh();
         $modx->reloadConfig();
         // Attach the mapped TV values to the freshly created resource.
         if (is_array($td)) {
             foreach ($td as $key => $val) {
                 $tvar = $modx->newObject('modTemplateVarResource');
                 $tvar->set('contentid', $id);
                 $tvar->set('tmplvarid', $key);
                 $tvar->set('value', $val);
                 $tvar->save();
             }
         }
         return $id;
     } else {
         return false;
     }
 }

 // Import one JSON item; optionally recurse into its children.
 function handleItem($ctx,$item,$parent,$tpls,$tvs,$handleChildren=false){
     $hidm = isset($item['hidemenu']) ? $item['hidemenu'] : '0';
     // An array in "content" means this item is a container (folder).
     $isf     = is_array($item['content']) ? '1' : '0';
     $content = is_array($item['content']) ? '' : $item['content'];
     // Map the old template ID onto the new one (0 if no mapping is given).
     $tpl = array_key_exists('tpl'.$item['template'],$tpls) ? $tpls['tpl'.$item['template']] : '0';
     // Collect TV values: JSON field name => TV ID on the new site.
     $td = array();
     foreach ($tvs as $tvn => $tvv)
         if (array_key_exists($tvn,$item)) $td[$tvv] = $item[$tvn];
     $ret = '';
     if ($id = addItem($ctx,$item['name'],$tpl,$isf,$hidm,$parent,$item['alias'],$content,$td)) {
         $ret = 'Resource «<b>'.$item['name'].'</b>» imported successfully! '
              . 'New ID: <b>'.$id.'</b><br />';
         if (is_array($item['content']) && $handleChildren)
             foreach ($item['content'] as $i)
                 $ret .= handleItem($ctx,$i,$id,$tpls,$tvs,$handleChildren);
         return $ret;
     } else {
         return 'Resource «<b>'.$item['name'].'</b>» not imported!<br />';
     }
 }

 // The import log shown to the user.
 $cons = '<h1>Import item log</h1>';
 // Number of items per pass (0 = import everything at once, recursively).
 $item_count = isset($itemCount) ? $itemCount : 4;
 // Context to import into.
 if (!isset($curContext)) $curContext = 'web';
 // Current "page" of the import (passed back via a GET parameter).
 $next_items = isset($_GET['jsonimportnext']) ? intval($_GET['jsonimportnext']) : 0;
 // Template mapping: comma-separated oldID=>newID pairs.
 $tpls = array();
 if (isset($templates)) {
     $tmp = explode(',',$templates);
     foreach ($tmp as $val) {
         $tpls_d = explode('=>',$val);
         $tpls['tpl'.$tpls_d[0]] = $tpls_d[1];
     }
 }
 // TV mapping: comma-separated jsonField=>tvID pairs.
 $tvs = array();
 if (isset($tvParams)) {
     $tmp = explode(',',$tvParams);
     foreach ($tmp as $val) {
         $tvs_d = explode('=>',$val);
         $tvs[$tvs_d[0]] = $tvs_d[1];
     }
 }
 // Run the import.
 if (isset($source) && isset($rootID)) {
     if ($import_content = @file_get_contents($source)) {
         $import_data  = json_decode($import_content,true);
         $import_count = count($import_data['items']);
         if ($item_count != 0) {
             // Paged mode: import $item_count top-level items per request.
             for ($c = 0; $c < $item_count; $c++) {
                 $n = $item_count*$next_items + $c;
                 if (isset($import_data['items'][$n]))
                     $cons .= handleItem($curContext,$import_data['items'][$n],$rootID,$tpls,$tvs);
             }
             $this_res  = $modx->resource->get('alias');
             $this_res .= '.html';
             // Offer the next page only while items remain.
             if ($item_count*($next_items+1) < $import_count) {
                 $cons .= '<br /><a href="'.$this_res.'?jsonimportnext='
                        . ($next_items+1).'">Import next items</a><br />';
             } else {
                 $cons .= '<br /><a href="'.$this_res.'">Start</a>';
             }
         } else {
             // Import everything in one go, recursing into children.
             foreach ($import_data['items'] as $item)
                 $cons .= handleItem($curContext,$item,$rootID,$tpls,$tvs,true);
         }
     } else {
         $cons .= 'Cannot get source!<br />';
     }
 } else {
     $cons .= 'Invalid execution parameters!<br />';
 }
 return $cons;


I must say right away: this does not pretend to be a universal solution. I was in too much of a hurry to share it with those who needed it, so forgive the rough edges. If the solution looks interesting, I will keep developing it and may eventually build a full-fledged add-on for MODx.

The snippet receives the following parameters as input:

  source — URL of the JSON export file;
  itemCount — how many items to import per pass (0 means everything at once, recursively; defaults to 4);
  templates — template mapping, comma-separated oldID=>newID pairs;
  tvParams — TV mapping, comma-separated jsonField=>tvID pairs;
  curContext — the context to import into (defaults to web);
  rootID — id of the resource under which the imported items are created.
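For example, a call might look like this (the IDs and the URL are made up for illustration):

 [[!importJSON? &source=`http://old-site.example/export.html` &itemCount=`4` &templates=`3=>5,4=>6` &tvParams=`author=>12` &curContext=`web` &rootID=`100`]]

Here resources that had template 3 on the old site get template 5 on the new one, and an author field from the JSON (you would have to add it to the export chunks first) goes into the TV with ID 12.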
Why all this talk about pagination and performance? Because when I ran the first version of the snippet, where everything was processed recursively in one go, the server answered with a 502 error: simply put, the hoster killed the process for excessive load. No wonder, given how many documents there were. Hence the page-by-page mode: with the default itemCount of 4, the first request processes items 0-3, the ?jsonimportnext=1 link processes items 4-7, and so on.

How to use

First, we write a simple template:
 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="ru">
 <head>
   <meta http-equiv="content-type" content="text/html; charset=utf-8" />
   <base href="/" />
   <title>[[*pagetitle]]</title>
   <style type="text/css">
     body { font: 12px monospace; }
   </style>
 </head>
 <body>
 <div align="center"><div style="text-align: left; width: 800px;">
   [[!importJSON? &source=`[[*sourceURL]]` &itemCount=`6` &templates=`[[*templatesReplace]]` &tvParams=`[[*tvsReplace]]` &curContext=`[[*currentContext]]` &rootID=`[[*importDestination]]`]]
 </div></div>
 </body>
 </html>

Then we create the TV parameters sourceURL, templatesReplace, tvsReplace, currentContext, and importDestination and bind them to the template. And no need to grumble at currentContext and lecture me about context_key: in theory you can create a single import page and use it to import data into different contexts. That is basically all. Let me also describe how I actually used this thing. Note right away that I did without the category level in the export template, changing startID each time, because of the load limitations. My sequence of actions:
  1. On the old site, open the export file for editing and set the after-save action to "continue".
  2. On the new site, open for editing the document we are transferring the content into (hereinafter the import file). Switch its template to the JSON-import template and save.
  3. In the TV parameters of the import file, set the current context, the URL of the export file, and the template and TV mappings. Save.
  4. In the export file, set startID to the id of the parent resource whose content we are exporting. Save.
  5. In the import file, set the id of the resource we are importing into. Save.
  6. Open the import file in the browser. Then repeat the following until the link at the bottom reads "Start":
    1. Wait for the page to finish loading.
    2. Click the "Import next items" link.
  7. Once everything has been imported from that resource, return to step 4 if anything else needs importing.


Yes, I know that for better performance all of this could be done with direct database queries. But first, it is not a given that this would fix the 502 error. Second, there was no time to study what else in the database is affected when a resource is created, besides site_content. Third, had I written such a solution, I would have immediately been poked with "but-what-about-xPDO".

Once again I remind you that this is only a rough draft of a solution. Thank you all for your attention to my latest reinvented wheel!

Source: https://habr.com/ru/post/157483/

