The Sphinx search server (sphinxsearch) is positioned as a system that scales very well to high loads and large index volumes. That is all well and good - but sometimes there is no machine with a 16-core processor and 256GB of RAM at hand. What if there is only one core? What if there is not much memory? What if it is not a server, or even an average PC, but a SoC router with a far-from-fast CPU and a mere 32MB of RAM, which has to be shared with other processes and the system itself? Will the search engine even start? Will it work? Is it worth doing?
Yes, it will start. Yes, it will work. Yes, it is quite worth it.
Prehistory
I work in a small educational library. No, not a .dll or .so file, but the ordinary kind - a room with shelves and a lot of books, about 30 thousand of them. The library is not even twenty years old; it once began with a couple of shelves and a few dozen books that everybody knew by heart. Then the library grew and the books multiplied; the old room could no longer hold them, and a move took place. By the time I started working there, the collection held about 15 thousand books (or, more officially, "storage units"), and the total length of all the shelves on all the racks came to about a kilometer.
Although ... no, it was not a library. It was a huge dump!
Books were placed on different racks according to incomprehensible criteria; no "official" library management system was used; everything was done "on the knee". I inherited this dump, along with the "database" - an MS Access file in which a single table proudly held everything at once (it could just as well have been an Excel sheet or even a list in Word - there were no basic features like normalization or relationships between tables in it). To borrow a book, a reader had to write their name and the book's title on a lined form.
Searching for a book in the "database" meant opening the table and pressing Ctrl+F.
Searching for the physical book in the "dump" ... oh, that was the main job, and it sometimes ate up a whole working day. Pure brute force. After a few months I had a rough idea of where to look for this or that book, but that only sped things up a little. Eventually I got tired of this pointless exercise and decided to fix everything.
After about a year (most of which was spent on the actual inventory of the books), I had a decent database where all the "storage units" were numbered, fitted with barcodes and tied to specific shelves. Issuing a book was reduced to looking it up in the database to find out which shelf it was on, and then entering the book number (scanned from the barcode) into a virtual card file. Returning a book was again a matter of scanning the barcode, after which the book was automatically removed from the reader's card and the database kindly suggested which shelf to carry it to. A task that used to take a full day now took a few minutes. Hooray!
What next? Normalize the database? Done.
Oh! And shouldn't we drag it out of MS Access into something "freer"? OK, done - moved to mysql. The original database did not suffer at all - I just installed the ODBC driver and replaced the internal tables with links to the now-external ones.
Now nothing stopped me from letting readers search for books themselves! (Nothing stopped them before either - but that was done by making a copy of the .mdb database, deleting sensitive data such as the list of who borrowed what, and then distributing the result as an archive. Enter a few hundred new books - and be so kind as to do the whole routine again ...) A modest php application with the simplest search form was attached to the database - and now you do not even have to look for the books yourself :). Readers find them, take them from the shelves and bring them to me only to scan the code and take them home. The robots are deployed, the human is happy!
What does it cost? A database, a form, search - a full LAMPS stack (= LAMP + S[phinx]), all running on a home desktop with an external IP address.
Unreliable!
Well, so be it. After all, this is not a bank site serving half the country's transactions in real time.
But damn it, a computer running 24/7 also eats electricity. And makes noise ... And come to think of it, apart from the library site there is no other reason to keep it on at all ...
What if you use a router?
The time came when I got thoroughly fed up with the periodic hangs of my old router (a DI-624) and replaced it with a WRT-160NL. I poked around the Internet, read up on its upgrade options ... and the very next day replaced the stock firmware with dd-wrt. I hooked up an external hdd, did some tinkering - and got a modest NAS file dump. I dug around some more, read about optware, tried it - and soon the same external hdd hosted additional software for the router itself. Transmission - sure; lighttpd - easily! php? No problem! Hmm ... would the library site start? Yes, it started. And what if its mysql database were shoved right into the router too? Wow, that worked as well! True, the mysql build had to be trimmed down heavily - only MyISAM tables remained - but still, it works!
And the search?
So here is what we have: the router itself now pumps torrents; it serves the file dump and even the site ... The only thing missing is the sphinx search, which still runs on the desktop. What if that went onto the router too? ..
Yes. As it turned out - it builds, it runs, it works. And works well!
I later changed the router to an Asus RT-N16, and then to a NetGear WNDR-4300. I replaced dd-wrt with openwrt. But sphinx still lives on it.
How to build your software under the router?
Of course, as long as there is a console and the ability to run build scripts, you can get a program onto the device in many different ways, some of them quite hardcore - roughly like installing from sources on a deb- or rpm-based distribution with ./configure && make && make install: it may well build and even fly, but maintainability and scalability are close to zero. So more civilized methods are used instead. I have dealt with two build/packaging systems for routers, and I will describe both.
Optware
My acquaintance with alternative software for third-party firmware began with it.
In short, optware is software that lives entirely under /opt - hence the name. For example, a daemon installed from optware will end up somewhere in /opt/usr/sbin, its shared libraries go to /opt/usr/lib, configs are expected in /opt/etc, and launch scripts in /opt/etc/init.d. If the target system has only 4MB of flash memory, all of it taken up by the firmware - no matter! You just need to mount /opt from somewhere else - and we are golden: we can install optware and get to work. That "somewhere else" can be, for example, an external flash drive (inserted into an existing usb port). Or even an SD card soldered "on the knee" to whatever contacts inside the router we can control programmatically (say, the power and activity LEDs as outputs; the wps and reset buttons as inputs). Then it all works roughly as on an Arduino or any other MCU - we "blink" the chosen contacts in the right order, bit-banging the SPI bus, and read the "button presses"; the SD card understands this, and so we access its contents literally "through the LEDs and a button". Or it can even be an external network FS, mounted onto the router from a running Windows box over the SMB protocol - though in that case the whole point of the router as a small stand-alone computer serving its services disappears.
Optware was once part of the openwrt free-firmware project, but later split off into an independent effort, because the packages built there are completely independent and self-sufficient. The standard libraries they need (libc in the form of uclibc) are also built and shipped inside optware. This lets you run a great many applications with almost no regard for the quirks of a specific device's firmware.
The downside of this approach is that not everything will work. For example, you cannot build and load kernel modules. Also, linking against its own standard libraries means loading them into memory alongside the stock libraries of the firmware, which means higher RAM requirements - and RAM is exactly what a router lacks. Finally, the result is not a build tuned for a specific piece of hardware, but a package for a "universal average router" - about like a program built for i386 that you run on your shiny core i7: many features of the specific processor/platform are ignored in favor of the legacy baseline. And nevertheless, optware is quite a workable solution.
Optware is usually cross-compiled on a linux host (I, for example, build on ubuntu 12.04). Having checked out the project's svn repository, you immediately get the source code and a README with further instructions (the curious - go ahead!).
Openwrt
This is a whole separate firmware that completely replaces the stock one. I had to turn to OpenWrt when it became clear that the capabilities of ddwrt + optware were not enough. Having read articles on Habr about the "smart home", I decided to gradually automate various household electrics. I chose z-wave, bought a USB "dongle" and joined the z-wave.me cloud. However, keeping the desktop on just to connect home automation to the cloud is too wasteful - especially since my router had not one but two whole usb ports! But ... the dongle did not come up: the necessary kernel module (cp210x) was missing from the firmware, and software of that kind cannot be built in optware. Rebuilding the whole dd-wrt firmware is also a thoroughly non-trivial task (a terrible system! My patience did not last long enough to set up the toolchain, satisfy all the dependencies and reach a successful build!). Openwrt did not initially support my router (the rt-n16), but at that moment there were already patches that produced a fully working firmware. So I said goodbye to dd-wrt in favor of an "even freer" firmware.
What is good about openwrt? The fact that everything is configurable. Take the minuses of optware and throw them out: no duplicated libraries (software is built directly against the existing firmware); no problems with kernel modules (it is ordinary full-fledged linux - want an exotic "dongle"? Please! Want to plug in a 3G modem and get the Internet from it? Easily!); and the software can be optimized for the actual platform (the compiler will use the capabilities of the chip in your specific router instead of falling back to legacy).
Finally, it is worth saying that openwrt does not actually conflict with optware - it is simply more flexible and more convenient. Moreover, if an /opt partition survives from the previous firmware, the software there will most likely start perfectly well under the new openwrt. Nothing needs to be rebuilt! (Although all the drawbacks of optware's isolation will remain.)
Embedded Sphinx - what do you need from it?
Sphinx's scalability is impressive - it is enough to look through the achievements on the project's 'powered by' page. Indices of billions of documents and loads of half a billion requests per day are quite normal working numbers. But that is all High Load on serious machines or even clusters, with fast processors and a sea of RAM. What about scalability in the other direction - little memory, a modest processor and disk? It turns out, not so bad. As for non-x86 platforms, Sphinx builds quite normally (and passes its internal tests) on the Raspberry Pi. However, if you look at the details, then apart from the different architecture everything else on the Pi is rather comfortable: as much as 256-512MB of RAM! And sphinx wants that RAM! After all, almost its entire "secret of success" is making maximum use of the available iron. It is not at all guided by the "640Kb ought to be enough for anybody" philosophy - quite the opposite: it tries to track the "fashionable" configurations and use their capabilities to the fullest.
Memory is cheap? Then let's load the whole index into it!
Have fashionable new SSDs with impressive access times appeared in production? Okay, let's play with them, try them out - and maybe shake up the internal strategies and formats once again so that "everything flies".
Moreover, this is not wastefulness of the kind "since we have a fancy new CPU, we can afford to sort these three gigabytes of lines with bubble sort from a script written in BASIC" - quite the opposite: these are reasonable optimizations implemented by experienced (ex) game-dev developers and verified by benchmarks.
And where, one may ask, does that leave us with our antediluvian "not-quite-computer", which can only push bytes from one cable to another? Well, actually - why not?
Of course, cross-compilation is implied (some manage to build and run the native toolchain directly on the router - but I am not quite ready for that approach). And since it is cross-compilation, many of the internal tests of autotools (which sphinx is configured with) will be useless, and we will have to prompt the toolchain ourselves about what hardware we have on board.
So, to run sphinx on a small piece of iron, you first need to think through a few points related to the "miniature" scale of the target platform.
Optimizing memory requirements
How do you satisfy sphinx's gluttony (or rather, its "large-scale attitude") towards RAM? It is actually quite simple. Sphinx does not spend resources "just because"; its "voracity" depends directly on the task and on the index volume, and it is the index that mostly lives in memory. So the smaller the index, the lower the memory requirements.
The dictionary (the .spi file) and the attribute blobs (the .spa, .sps, .spm files) are loaded into RAM. By estimating the sizes of these files you can make a fairly accurate prediction of the future "gluttony".
If things are really tight, you can drop the attributes and enable the "dictionary on disk" option (ondisk_dict). Then everything will be slow and sad, but entirely on disk. Alternatively, you can disable nothing and simply attach a swap partition: whatever fits in memory will "fly"; whatever does not will automatically go to swap, arriving at the same "slow and sad, but it works!" outcome.
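As a back-of-the-envelope sketch (the index path and name below are hypothetical), the prediction above can be scripted: just sum the sizes of the files that searchd keeps in RAM and compare the total with the router's free memory (plus swap, if attached):

```shell
# Sum the byte sizes of the files Sphinx loads into RAM for one index:
# the .spi dictionary and the .spa/.sps/.spm attribute blobs.
estimate_index_ram() {
    total=0
    for f in "$@"; do
        [ -f "$f" ] || continue          # skip files this index does not have
        total=$((total + $(wc -c < "$f")))
    done
    echo "$total"
}

# Usage (hypothetical index path):
# estimate_index_ram /opt/var/sphinx/books.spi /opt/var/sphinx/books.spa \
#                    /opt/var/sphinx/books.sps /opt/var/sphinx/books.spm
```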
Index type?
I took the ordinary (plain) one, with a keywords dictionary (plain crc is already outdated). RT is not needed, if only because the data itself does not change in real time; periodic reindexing is enough (though yes, if necessary, rt would "fly" too). This means the searchd daemon alone is not enough - the indexer is needed as well.
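A minimal sketch of such an index definition in sphinx.conf (the index name, source name and path are placeholders; dict = keywords is the relevant line):

```
index books
{
    source  = books_src
    path    = /opt/var/sphinx/books
    dict    = keywords
}
```

The index is then rebuilt periodically, e.g. from cron: indexer --config /opt/etc/sphinxsearch/sphinx.conf --all --rotate.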
Connecting a data source?
What else? We need support for the desired data source (in our case, mysql). Until recently there was a certain "rake" here, rooted in the architecture of sphinx itself - namely, that it consists of the libsphinx mega-library plus small sources implementing the actual indexer, searchd and other tools, while almost all the functionality for working with indexes (including building them from data sources) sits in libsphinx. The "rake" is that when sphinx is built with mysql support, all the sphinx binaries become dependent, through the common mega-library, on libmysqlclient. On a serious machine this is not a problem, but on a router, depending on a library that is linked "just in case" and completely unnecessary at runtime is wasteful! There are two ways out. The first: make two builds - one with mysql support and one with no indexing support at all - then take the indexer from the first and everything else from the second. The second option appeared recently: simply load all the required external libraries explicitly (via dlopen). Then unnecessary dependencies can be forgotten: indexer will load the library when it needs it, and searchd will not touch it at all.
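For completeness, a sketch of the mysql source block the indexer would use (the credentials, database name and query are placeholders for this library setup):

```
source books_src
{
    type      = mysql
    sql_host  = 127.0.0.1
    sql_user  = librarian
    sql_pass  = secret
    sql_db    = library
    sql_query = SELECT id, title, author FROM books
}
```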
How to run the daemon?
The classic method is implemented in sphinx itself: a double fork - first detach from the controlling terminal, then create a new session - and now we live in the background, have no console and do not react to Ctrl+C. Write our pid to a pid-file - and that is it, we have become a full-fledged daemon. All fine, except that EVERY daemon in the system has to implement this. Why duplicate code in an open-source system? So clever people thought and thought and invented upstart: the supervised process no longer needs to know how to daemonize itself - upstart keeps it in the background, stores the pid-file in the standard place and at the same time acts as a "watchdog", restarting the daemon if it suddenly crashes. Incidentally, at one time this caused problems running sphinx under upstart - the two overly clever programs could not come to an agreement. The correct way to run it under upstart is with the '--nodetach' option, so that upstart monitors the main process (and not its forks): sphinx then does not consider itself a daemon, and all the background management falls to upstart.
In the embedded case, however, there are no upstart problems: both optware and openwrt use classic rc.d scripts. In other words, no special flags are needed; sphinx will manage its lifecycle by itself.
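On openwrt such a classic script can look roughly like this - a sketch based on the standard /etc/rc.common convention; the binary and config paths are assumptions for your layout:

```
#!/bin/sh /etc/rc.common
# /etc/init.d/sphinx - minimal rc.d wrapper; no daemonization flags
# are needed, since searchd detaches into the background by itself
START=80

start() {
    /usr/bin/searchd --config /etc/sphinx/sphinx.conf
}

stop() {
    /usr/bin/searchd --config /etc/sphinx/sphinx.conf --stop
}
```

Enable it with /etc/init.d/sphinx enable so it starts on boot.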
Where to put the logs?
By default, sphinx keeps a general daemon log (started-stopped-rotated) as well as a separate log of all queries. Clearly, writing anything to the router's flash memory, and without size control at that, would be suicide. So we specify 'syslog' for both logs, and also remember to add --with-syslog when configuring the build (nowadays it is enabled by default). As a result, all output goes into the system log, which we then manage at the level of the router itself. There are several options, depending on the firmware's capabilities. On some devices (like the old DIR-300 with its tiny flash) it is easier not to log at all. On others, a chunk is kept in a ring buffer in memory and can be read from the console with the logread command. On still others, it is redirected via udp to whatever server "can receive it". But in any case - it is no longer sphinx's problem!
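The corresponding searchd section of sphinx.conf then contains just (a sketch; the listen ports shown are the usual sphinx defaults):

```
searchd
{
    listen     = 9312
    listen     = 9306:mysql41
    log        = syslog
    query_log  = syslog
    pid_file   = /var/run/searchd.pid
}
```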
Configure and build!
All the questions of running the daemon on the router seem settled. Now it is time for the build itself!
Cheating autotools
Sphinx is configured using autotools. The ./configure script that runs before the build will honestly probe our build system and find out what compiler we have, which functions are supported, what the architecture is and what the byte order is (LSB or MSB). The problem is that almost all of these tests examine not the target router but the system where the toolchain runs. So there is only one way out - to prompt configure with the "correct answers". This is done with environment variables. For example, if ./configure checks for the availability of the qsort function, you can give it the necessary "hint" by defining the ac_cv_func_qsort variable before running ./configure. For example, executing
export ac_cv_func_qsort=no

before running ./configure will make configure assume that you have no qsort function. Accordingly, a package configured this way will either use its own implementation, or simply break during the build (ha-ha!).
To build the embedded sphinx, only a few such "explicit hints" are needed. Here they all are ...
- sphinx_cv_unaligned_ram_access (yes/no) - the name speaks for itself. In short, sphinx's internal index format is compressed and aligned to one byte. If a DWORD has to be taken from such a file (loaded into memory) and it happens to lie at an odd address, on some architectures (sparc, for example) this leads to a crash! You can find out the situation either by reading the datasheet for the target platform, or by the "scientific poke" method - building both possible options (there are only two!) and trying to run them on the target system. (Technical details: if unaligned access is impossible, then instead of simply dereferencing a DWORD pointer, a memcpy call is used to copy the needed 4 bytes from the arbitrary address.) In practice, unaligned access works on my mips(el).
- sphinx_cv_interlocked (yes/no) - simply the cost of building for legacy i386. For example, in optware for dd-wrt you have to put 'no', otherwise the build fails. In a better-tuned openwrt build the flag can be left alone; the default 'yes' gives a fully working version. (Technical details: depending on the flag, either the atomic lock-free operation __sync_fetch_and_add is used, or a variable+mutex construct that does the same work, only with locking.)
- ac_cv_c_bigendian (yes/no) is the funniest variable. If you never publish your indexes "to the outside world", it can be ignored: how the daemon keeps its blobs internally does not matter, as long as it answers queries correctly; and network communication, as expected, is wrapped in hton/ntoh and the like, so endianness does not leak there. However, as soon as you try, say, to build an index on your desktop and then throw it onto the router "to just work" - that is where the differences surface. In short, loading an index with the wrong endianness is currently IMPOSSIBLE. In principle the problem could be fixed, but there is little point (as long as there is no "huge crowd of users" moving indexes between platforms). The difference between the indexes is easy to see by opening the .sph file in a viewer (a plain dump, F3 in mc). In an index created on little-endian you will see the magic signature "SPHX" in the header; on big-endian - "XHPS".
You can find out the correct target value in several ways. First, the already mentioned "scientific poke": sphinx has a built-in runtime check which, if the actual capabilities do not match those specified at compile time, politely complains and suggests the correct value. Second, you can inspect the compiled binary with the file command (or implicitly, by pressing F3 on the file in mc under ubuntu): "ELF 32-bit MSB executable" means big-endian, while "ELF 32-bit LSB executable" means little-endian - in the latter case you are in luck and can build indexes right on the desktop :). Finally, the third and easiest way is to look at the compiler name: 'mips' is big-endian, 'mipsel' is little-endian. Correct endianness is critically important if you decide to use the aot lemmatizer (enabled in the config as morphology = lemmatize_ru, lemmatize_en, lemmatize_de, or all together). The aot dictionary blobs are built on a "normal" little-endian PC, so on the "enemy" architecture they have to be converted (this happens on the fly right at load time, but for that, endianness must have been set correctly at configure time).
- ac_cv_func_realloc_0_nonnull=yes
- ac_cv_func_malloc_0_nonnull=yes - mentioned only for completeness. These are autotools quirks; they are not used in sphinx itself, but without them the build fails, hence this "workaround".
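A quick recap of the endianness checks as commands (the binary path in the comment is illustrative; the target's byte order is what decides ac_cv_c_bigendian, while the host's byte order decides whether desktop-built indexes can be copied over as-is):

```shell
# Target side: inspect a cross-built binary, or just read the triplet.
#   file src/searchd        # "... MSB executable" => big-endian target
#   mips-*   toolchains are big-endian, mipsel-* are little-endian.

# Host side: find out what the build machine itself is.
host_order=$(python3 -c 'import sys; print(sys.byteorder)')
echo "build host is ${host_order}-endian"
```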
Yes, by the way - the variables mentioned (their naming style, the way of prompting "correct answers") are, as you understand, not a special feature of sphinx but a property of autotools. You can "prompt" answers in exactly the same way when configuring any other software written with autotools.
Sphinx in optware
In the target platform folder (I built for ddwrt), go to the make folder and copy template.mk to sphinxsearch.mk, then edit the copy following the instructions inside the file. This is, in fact, the only necessary (and usually sufficient) script for adding your software to optware. Here is the key piece of the script, the one that configures the main build:
$(SPHINXSEARCH_BUILD_DIR)/.configured: sphinxsearch-source make/sphinxsearch.mk
	$(MAKE) libstdc++-stage
	$(MAKE) expat-stage
	$(MAKE) mysql5-stage
	rm -rf $(BUILD_DIR)/$(SPHINXSEARCH_DIR) $(@D)
	$(SPHINXSEARCH_UNZIP) $(DL_DIR)/$(SPHINXSEARCH_SOURCE) | tar -C $(BUILD_DIR) -xvf -
	$(LIBSTEMMER_UNZIP) $(DL_DIR)/$(LIBSTEMMER_SOURCE) | tar -C $(BUILD_DIR)/$(SPHINXSEARCH_DIR) -xvf -
	if test -n "$(SPHINXSEARCH_PATCHES)" ; \
		then cat $(SPHINXSEARCH_PATCHES) | \
		patch -d $(BUILD_DIR)/$(SPHINXSEARCH_DIR) -p0 ; \
	fi
	if test "$(BUILD_DIR)/$(SPHINXSEARCH_DIR)" != "$(@D)" ; \
		then mv $(BUILD_DIR)/$(SPHINXSEARCH_DIR) $(@D) ; \
	fi
	(cd $(@D); \
		export ac_cv_func_realloc_0_nonnull=yes; \
		export ac_cv_func_malloc_0_nonnull=yes; \
		export sphinx_cv_unaligned_ram_access=yes; \
		export ac_cv_c_bigendian=no; \
		export sphinx_cv_interlocked=no; \
		$(TARGET_CONFIGURE_OPTS) \
		CPPFLAGS="$(STAGING_CPPFLAGS) $(SPHINXSEARCH_CPPFLAGS)" \
		LDFLAGS="$(STAGING_LDFLAGS) $(SPHINXSEARCH_LDFLAGS)" \
		./configure \
		--build=$(GNU_HOST_NAME) \
		--host=$(GNU_TARGET_NAME) \
		--target=$(GNU_TARGET_NAME) \
		--prefix=/opt \
		--sysconfdir=/opt/etc/sphinxsearch \
		--with-libstemmer \
		--with-mysql=$(STAGING_PREFIX) \
		--without-unixodbc \
		--with-syslog \
		--enable-dl \
	)
	touch $@
The rest of the build is trivial (make, strip, packaging). Besides the sphinx sources, the libstemmer sources are also required (they are pulled at build time directly from their site). The built expat and mysql are needed too (as long as we depend on them; they will be built as dependencies). In addition to the main makefile (in ./make), the optware source tree also uses the ./sources/sphinxsearch folder, which contains the init script for the daemon; it is packaged and ends up in /opt/etc/init.d on installation. The build is run from the root folder, with certain suffixes appended to the package name:
- make sphinxsearch - just builds sphinx with the cross-toolchain. Useful for debugging.
- make sphinxsearch-ipk - builds sphinx and then creates an .ipk package.
- make sphinxsearch-clean - cleans up the garbage (after the build).
As a result of the build with the -ipk suffix we get a "ready-to-use" sphinxsearch.ipk package, which we install into optware with ipkg. Then everything is as usual - write the config, index (and since dd-wrt builds are LSB by default, we can index right on the desktop) - and off we go. Voila!
If you wish, you can remove the dependency on mysql (see the SPHINXSEARCH_DEPENDS directive in the full config) - since the library is not needed to start the daemon itself (and if you build the index elsewhere, it is not needed on the router at all). You may also need to adjust the ac_cv_... and sphinx_cv_... variables slightly. In the script above they are set for the optware-for-dd-wrt build (which is the mipsel architecture, i.e. little-endian, with the chip feature set reduced to the common minimum).
Sphinx in openwrt
If the target is openwrt on mips, the optware build from ddwrt will not run there (that one is mipsel). If the target is also mipsel, you can run sphinx from optware; however, in terms of careful use of resources it is far more justified to use openwrt's full openness and build sphinx for the target system - this removes the legacy restrictions and produces more optimal code, which also will not load yet another "slightly different" copy of the standard library into memory.
Openwrt builds also use makefiles, only their format and location differ. The necessary files live in a separate folder created somewhere under the package tree in the root of the build environment; I chose package/network/services/sphinx. In that folder, the Makefile is the required minimum; other files and folders with predefined names may also be present. Besides the Makefile, I also use Config.in (it creates a submenu in the main menuconfig) and a files folder holding the sample config and the init script (which, by the way, is much shorter than in optware). The key part of the openwrt configuration looks like this:
CONFIGURE_VARS += \
	ac_cv_func_realloc_0_nonnull=yes \
	ac_cv_func_malloc_0_nonnull=yes \
	ac_cv_c_bigendian=yes \
	sphinx_cv_unaligned_ram_access=yes

CONFIGURE_ARGS += \
	--prefix=/ \
	--sysconfdir=/etc/sphinx \
	$(if $(CONFIG_SPHINX_MYSQL_SUPPORT),--with-mysql,--without-mysql) \
	$(if $(CONFIG_SPHINX_PGSQL_SUPPORT),--with-pgsql,--without-pgsql) \
	$(if $(CONFIG_SPHINX_UNIXODBC_SUPPORT),--with-unixodbc,--without-unixodbc) \
	$(if $(CONFIG_SPHINX_EXPAT_SUPPORT),--with-libexpat,--without-libexpat) \
	$(if $(CONFIG_SPHINX_DYNAMIC_LOAD),--enable-dl) \
	--with-syslog \
	--with-libstemmer
Even visually this is shorter than in optware (the whole script is also much shorter - only 84 lines, most of them boilerplate). It uses variables configured from the menu. In this version you may need to adjust ac_cv_c_bigendian for your needs (I was building for a NetGear WNDR4300 router, which is MIPS, i.e. big-endian). Also, here I did not fetch libstemmer separately during the build; instead you need to download it yourself in advance and pack it into the source tarball (and fix PKG_MD5SUM to match the resulting tarball).
Building a package in openwrt is done in two stages. First, run menuconfig in the root folder and configure the firmware as a whole. There you NEED to go to the Network / Web Servers / Proxies section and select sphinx as a module (M). You can also enter its configuration submenu and enable support for the data sources you need. Then exit menuconfig, saving the changes - and finally run the build of the whole firmware:
make
or of sphinx alone (with its dependencies, of course, if needed):
make package/network/services/sphinx/compile
The resulting packages land in the ./bin/ARCH/packages folder (in my case, ./bin/ar71xx/packages). You can install a package either directly, by copying it to the router (and then pointing opkg at it), or by publishing the build folder on a local web server, writing its path into /etc/opkg.conf on the router, and then running opkg update; opkg install sphinx.
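The feed-based route sketched above boils down to one extra line in the router's opkg config (the host and architecture here are placeholders for your setup):

```
# /etc/opkg.conf on the router - add a source line pointing at the
# published build folder (URL is a placeholder):
src/gz localfeed http://192.168.1.5/bin/ar71xx/packages

# then, on the router:
#   opkg update
#   opkg install sphinx
```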
To index on the router you may need to install the mysql client library separately (I did not list it as an install dependency for the configuration with dynamic loading of the required libs), and also create a symlink for it by hand (ln -s libmysqlclient.so.16.0.0 libmysqlclient.so in the folder where the lib lives). Otherwise everything works out of the box. My application talks to sphinx over the sphinxql protocol (i.e. the same libmysqlclient serves not only sphinx indexing mysql, but also clients talking to sphinx itself), but the legacy sphinx protocol (the sphinx api, which client applications are now strongly advised to move away from) is quite functional as well.
Finally - configs and scripts.
For optware:
Build script: sphinxsearch.mk (the optware make script)
###########################################################
#
# sphinxsearch
#
###########################################################

# You must replace "sphinxsearch" and "SPHINXSEARCH" with the lower case name and
# upper case name of your new package.  Some places below will say
# "Do not change this" - that does not include this global change,
# which must always be done to ensure we have unique names.

#
# SPHINXSEARCH_VERSION, SPHINXSEARCH_SITE and SPHINXSEARCH_SOURCE define
# the upstream location of the source code for the package.
# SPHINXSEARCH_DIR is the directory which is created when the source
# archive is unpacked.
# SPHINXSEARCH_UNZIP is the command used to unzip the source.
# It is usually "zcat" (for .gz) or "bzcat" (for .bz2)
#
# You should change all these variables to suit your package.
# Please make sure that you add a description, and that you
# list all your packages' dependencies, seperated by commas.
#
# If you list yourself as MAINTAINER, please give a valid email
# address, and indicate your irc nick if it cannot be easily deduced
# from your name or email address.  If you leave MAINTAINER set to
# "NSLU2 Linux" other developers will feel free to edit.
#
# http://sphinxsearch.com/files/sphinx-2.0.5-release.tar.gz
#SPHINXSEARCH_SITE=http://sphinxsearch.com/files
SPHINXSEARCH_SITE=http://192.168.1.5:65080/r/sphinxsearch
SPHINXSEARCH_VERSION=2.2.2-4470
SPHINXSEARCH_SOURCE=sphinx-$(SPHINXSEARCH_VERSION).tar.gz
SPHINXSEARCH_DIR=sphinx-$(SPHINXSEARCH_VERSION)
SPHINXSEARCH_UNZIP=zcat
SPHINXSEARCH_MAINTAINER=NSLU2 Linux <nslu2-linux@yahoogroups.com>
SPHINXSEARCH_DESCRIPTION=Sphinx is free open-source SQL full-text search engine.
SPHINXSEARCH_SECTION=misc
SPHINXSEARCH_PRIORITY=optional
SPHINXSEARCH_DEPENDS=libstdc++, expat, mysql5
SPHINXSEARCH_SUGGESTS=
SPHINXSEARCH_CONFLICTS=

LIBSTEMMER_SITE=http://snowball.tartarus.org/dist
LIBSTEMMER_SOURCE=libstemmer_c.tgz
LIBSTEMMER_UNZIP=zcat

#
# SPHINXSEARCH_IPK_VERSION should be incremented when the ipk changes.
#
SPHINXSEARCH_IPK_VERSION=2

#
# SPHINXSEARCH_CONFFILES should be a list of user-editable files
SPHINXSEARCH_CONFFILES=/opt/etc/sphinxsearch/sphinx.conf

#
# SPHINXSEARCH_PATCHES should list any patches, in the the order in
# which they should be applied to the source code.
#
#SPHINXSEARCH_PATCHES=$(SPHINXSEARCH_SOURCE_DIR)/configure.patch
SPHINXSEARCH_PATCHES=

#
# If the compilation of the package requires additional
# compilation or linking flags, then list them here.
#
SPHINXSEARCH_CPPFLAGS=
SPHINXSEARCH_LDFLAGS=

#
# SPHINXSEARCH_BUILD_DIR is the directory in which the build is done.
# SPHINXSEARCH_SOURCE_DIR is the directory which holds all the
# patches and ipkg control files.
# SPHINXSEARCH_IPK_DIR is the directory in which the ipk is built.
# SPHINXSEARCH_IPK is the name of the resulting ipk files.
#
# You should not change any of these variables.
#
SPHINXSEARCH_BUILD_DIR=$(BUILD_DIR)/sphinxsearch
SPHINXSEARCH_SOURCE_DIR=$(SOURCE_DIR)/sphinxsearch
SPHINXSEARCH_IPK_DIR=$(BUILD_DIR)/sphinxsearch-$(SPHINXSEARCH_VERSION)-ipk
SPHINXSEARCH_IPK=$(BUILD_DIR)/sphinxsearch_$(SPHINXSEARCH_VERSION)-$(SPHINXSEARCH_IPK_VERSION)_$(TARGET_ARCH).ipk

.PHONY: sphinxsearch-source sphinxsearch-unpack sphinxsearch sphinxsearch-stage sphinxsearch-ipk sphinxsearch-clean sphinxsearch-dirclean sphinxsearch-check

#
# This is the dependency on the source code.  If the source is missing,
# then it will be fetched from the site using wget.
#
$(DL_DIR)/$(SPHINXSEARCH_SOURCE):
	$(WGET) -P $(@D) $(SPHINXSEARCH_SITE)/$(@F) || \
	$(WGET) -P $(@D) $(SOURCES_NLO_SITE)/$(@F)

$(DL_DIR)/$(LIBSTEMMER_SOURCE):
	$(WGET) -P $(@D) $(LIBSTEMMER_SITE)/$(@F) || \
	$(WGET) -P $(@D) $(SOURCES_NLO_SITE)/$(@F)

#
# The source code depends on it existing within the download directory.
# This target will be called by the top level Makefile to download the
# source code's archive (.tar.gz, .bz2, etc.)
#
sphinxsearch-source: $(DL_DIR)/$(SPHINXSEARCH_SOURCE) $(DL_DIR)/$(LIBSTEMMER_SOURCE) $(SPHINXSEARCH_PATCHES)

#
# This target unpacks the source code in the build directory.
# If the source archive is not .tar.gz or .tar.bz2, then you will need
# to change the commands here.  Patches to the source code are also
# applied in this target as required.
#
# This target also configures the build within the build directory.
# Flags such as LDFLAGS and CPPFLAGS should be passed into configure
# and NOT $(MAKE) below.  Passing it to configure causes configure to
# correctly BUILD the Makefile with the right paths, where passing it
# to Make causes it to override the default search paths of the compiler.
#
# If the compilation of the package requires other packages to be staged
# first, then do that first (eg "$(MAKE) <bar>-stage <baz>-stage").
#
# If the package uses GNU libtool, you should invoke $(PATCH_LIBTOOL) as
# shown below to make various patches to it.
#
$(SPHINXSEARCH_BUILD_DIR)/.configured: sphinxsearch-source make/sphinxsearch.mk
	$(MAKE) libstdc++-stage
	$(MAKE) expat-stage
	$(MAKE) mysql5-stage
	rm -rf $(BUILD_DIR)/$(SPHINXSEARCH_DIR) $(@D)
	$(SPHINXSEARCH_UNZIP) $(DL_DIR)/$(SPHINXSEARCH_SOURCE) | tar -C $(BUILD_DIR) -xvf -
	$(LIBSTEMMER_UNZIP) $(DL_DIR)/$(LIBSTEMMER_SOURCE) | tar -C $(BUILD_DIR)/$(SPHINXSEARCH_DIR) -xvf -
	if test -n "$(SPHINXSEARCH_PATCHES)" ; \
		then cat $(SPHINXSEARCH_PATCHES) | \
		patch -d $(BUILD_DIR)/$(SPHINXSEARCH_DIR) -p0 ; \
	fi
	if test "$(BUILD_DIR)/$(SPHINXSEARCH_DIR)" != "$(@D)" ; \
		then mv $(BUILD_DIR)/$(SPHINXSEARCH_DIR) $(@D) ; \
	fi
	(cd $(@D); \
		export ac_cv_func_realloc_0_nonnull=yes; \
		export ac_cv_func_malloc_0_nonnull=yes; \
		export sphinx_cv_unaligned_ram_access=yes; \
		export sphinx_cv_interlocked=no; \
		$(TARGET_CONFIGURE_OPTS) \
		CPPFLAGS="$(STAGING_CPPFLAGS) $(SPHINXSEARCH_CPPFLAGS)" \
		LDFLAGS="$(STAGING_LDFLAGS) $(SPHINXSEARCH_LDFLAGS)" \
		./configure \
		--build=$(GNU_HOST_NAME) \
		--host=$(GNU_TARGET_NAME) \
		--target=$(GNU_TARGET_NAME) \
		--prefix=/opt \
		--sysconfdir=/opt/etc/sphinxsearch \
		--with-libstemmer \
		--with-mysql=$(STAGING_PREFIX) \
		--without-unixodbc \
		--with-syslog \
		--enable-dl \
	)
#	$(PATCH_LIBTOOL) $(@D)/libtool
	touch $@

sphinxsearch-unpack: $(SPHINXSEARCH_BUILD_DIR)/.configured

#
# This builds the actual binary.
#
$(SPHINXSEARCH_BUILD_DIR)/.built: $(SPHINXSEARCH_BUILD_DIR)/.configured
	rm -f $@
	$(MAKE) -C $(@D)
	touch $@

#
# This is the build convenience target.
#
sphinxsearch: $(SPHINXSEARCH_BUILD_DIR)/.built

#
# If you are building a library, then you need to stage it too.
#
$(SPHINXSEARCH_BUILD_DIR)/.staged: $(SPHINXSEARCH_BUILD_DIR)/.built
	rm -f $@
	$(MAKE) -C $(@D) DESTDIR=$(STAGING_DIR) install
	touch $@

sphinxsearch-stage: $(SPHINXSEARCH_BUILD_DIR)/.staged

#
# This rule creates a control file for ipkg.  It is no longer
# necessary to create a seperate control file under sources/sphinxsearch
#
$(SPHINXSEARCH_IPK_DIR)/CONTROL/control:
	@install -d $(@D)
	@rm -f $@
	@echo "Package: sphinxsearch" >>$@
	@echo "Architecture: $(TARGET_ARCH)" >>$@
	@echo "Priority: $(SPHINXSEARCH_PRIORITY)" >>$@
	@echo "Section: $(SPHINXSEARCH_SECTION)" >>$@
	@echo "Version: $(SPHINXSEARCH_VERSION)-$(SPHINXSEARCH_IPK_VERSION)" >>$@
	@echo "Maintainer: $(SPHINXSEARCH_MAINTAINER)" >>$@
	@echo "Source: $(SPHINXSEARCH_SITE)/$(SPHINXSEARCH_SOURCE)" >>$@
	@echo "Description: $(SPHINXSEARCH_DESCRIPTION)" >>$@
	@echo "Depends: $(SPHINXSEARCH_DEPENDS)" >>$@
	@echo "Suggests: $(SPHINXSEARCH_SUGGESTS)" >>$@
	@echo "Conflicts: $(SPHINXSEARCH_CONFLICTS)" >>$@

#
# This builds the IPK file.
#
# Binaries should be installed into $(SPHINXSEARCH_IPK_DIR)/opt/sbin or $(SPHINXSEARCH_IPK_DIR)/opt/bin
# (use the location in a well-known Linux distro as a guide for choosing sbin or bin).
# Libraries and include files should be installed into $(SPHINXSEARCH_IPK_DIR)/opt/{lib,include}
# Configuration files should be installed in $(SPHINXSEARCH_IPK_DIR)/opt/etc/sphinxsearch/...
# Documentation files should be installed in $(SPHINXSEARCH_IPK_DIR)/opt/doc/sphinxsearch/...
# Daemon startup scripts should be installed in $(SPHINXSEARCH_IPK_DIR)/opt/etc/init.d/S??sphinxsearch
#
# You may need to patch your application to make it use these locations.
#
$(SPHINXSEARCH_IPK): $(SPHINXSEARCH_BUILD_DIR)/.built
	rm -rf $(SPHINXSEARCH_IPK_DIR) $(BUILD_DIR)/sphinxsearch_*_$(TARGET_ARCH).ipk
	$(MAKE) -C $(SPHINXSEARCH_BUILD_DIR) DESTDIR=$(SPHINXSEARCH_IPK_DIR) install-strip
	install -d $(SPHINXSEARCH_IPK_DIR)/opt/etc/sphinxsearch
	install -m 644 $(SPHINXSEARCH_BUILD_DIR)/sphinx-min.conf.dist $(SPHINXSEARCH_IPK_DIR)/opt/etc/sphinxsearch/sphinx.conf
	install -d $(SPHINXSEARCH_IPK_DIR)/opt/doc/sphinxsearch
	install -m 644 $(SPHINXSEARCH_BUILD_DIR)/doc/sphinx.txt $(SPHINXSEARCH_IPK_DIR)/opt/doc/sphinxsearch/sphinx.txt
	rm $(SPHINXSEARCH_IPK_DIR)/opt/etc/sphinxsearch/sphinx.conf.dist
	rm $(SPHINXSEARCH_IPK_DIR)/opt/etc/sphinxsearch/example.sql
	rm $(SPHINXSEARCH_IPK_DIR)/opt/etc/sphinxsearch/sphinx-min.conf.dist
	install -d $(SPHINXSEARCH_IPK_DIR)/opt/etc/init.d
	install -m 755 $(SPHINXSEARCH_SOURCE_DIR)/rc.sphinxsearch $(SPHINXSEARCH_IPK_DIR)/opt/etc/init.d/S90sphinxsearch
	ln -s S90sphinxsearch $(SPHINXSEARCH_IPK_DIR)/opt/etc/init.d/K70sphinxsearch
#	sed -i -e '/^#!/aOPTWARE_TARGET=${OPTWARE_TARGET}' $(SPHINXSEARCH_IPK_DIR)/opt/etc/init.d/SXXsphinxsearch
	$(MAKE) $(SPHINXSEARCH_IPK_DIR)/CONTROL/control
#	install -m 755 $(SPHINXSEARCH_SOURCE_DIR)/postinst $(SPHINXSEARCH_IPK_DIR)/CONTROL/postinst
#	sed -i -e '/^#!/aOPTWARE_TARGET=${OPTWARE_TARGET}' $(SPHINXSEARCH_IPK_DIR)/CONTROL/postinst
#	install -m 755 $(SPHINXSEARCH_SOURCE_DIR)/prerm $(SPHINXSEARCH_IPK_DIR)/CONTROL/prerm
#	sed -i -e '/^#!/aOPTWARE_TARGET=${OPTWARE_TARGET}' $(SPHINXSEARCH_IPK_DIR)/CONTROL/prerm
#	if test -n "$(UPD-ALT_PREFIX)"; then \
#		sed -i -e '/^[ ]*update-alternatives /s|update-alternatives|$(UPD-ALT_PREFIX)/bin/&|' \
#			$(SPHINXSEARCH_IPK_DIR)/CONTROL/postinst $(SPHINXSEARCH_IPK_DIR)/CONTROL/prerm; \
#	fi
	echo $(SPHINXSEARCH_CONFFILES) | sed -e 's/ /\n/g' > $(SPHINXSEARCH_IPK_DIR)/CONTROL/conffiles
	cd $(BUILD_DIR); $(IPKG_BUILD) $(SPHINXSEARCH_IPK_DIR)
	$(WHAT_TO_DO_WITH_IPK_DIR) $(SPHINXSEARCH_IPK_DIR)

#
# This is called from the top level makefile to create the IPK file.
#
sphinxsearch-ipk: $(SPHINXSEARCH_IPK)

#
# This is called from the top level makefile to clean all of the built files.
#
sphinxsearch-clean:
	rm -f $(SPHINXSEARCH_BUILD_DIR)/.built
	-$(MAKE) -C $(SPHINXSEARCH_BUILD_DIR) clean

#
# This is called from the top level makefile to clean all dynamically created
# directories.
#
sphinxsearch-dirclean:
	rm -rf $(BUILD_DIR)/$(SPHINXSEARCH_DIR) $(SPHINXSEARCH_BUILD_DIR) $(SPHINXSEARCH_IPK_DIR) $(SPHINXSEARCH_IPK)

#
# Some sanity check for the package.
#
sphinxsearch-check: $(SPHINXSEARCH_IPK)
	perl scripts/optware-check-package.pl --target=$(OPTWARE_TARGET) $^
Init script (rc.sphinxsearch, goes into sources/sphinxsearch of the optware tree)
#!/bin/sh

NAME=sphinxsearch
DAEMON=searchd

# only used for virgin run
DATA_PART=/mnt
[ -d /mnt/C ] && DATA_PART=/mnt/C

prefix="/opt"
export PATH=${prefix}/bin:${prefix}/sbin:/bin:/usr/bin:/sbin:/usr/sbin:${PATH}
DAEMON=${prefix}/bin/${DAEMON}
SCRIPT="`basename $0`"

test -x $DAEMON || exit 0

if [ -z "$1" ] ; then
	case `echo "$0" | sed 's:^.*/\(.*\):\1:g'` in
		S??*) rc="start" ;;
		K??*) rc="stop" ;;
		*) rc="usage" ;;
	esac
else
	rc="$1"
fi

case "$rc" in
	start)
		if [ -n "`pidof $DAEMON`" ]; then
			echo "$NAME is already running"
		else
			echo "Starting SphinxSearch daemon: $NAME"
			export LD_LIBRARY_PATH=/opt/lib:$LD_LIBRARY_PATH
			pth=`pwd`
			$DAEMON
			cd "$pth"
			export LD_LIBRARY_PATH=$OLD_LIBRARY_PATH
		fi
		;;
	stop)
		if [ -n "`pidof $DAEMON`" ]; then
			echo "Stopping SphinxSearch daemon: $NAME"
			pth=`pwd`
			n=1
			while true; do
				$DAEMON --stop
				sleep 1
				[ ! -n "`pidof $DAEMON`" ] && break
				sleep 5
				[ $n -gt 3 ] && break
				let n+=1
			done
			n=1
			while true; do
				killall -9 $NAME 2>/dev/null
				sleep 1
				[ ! -n "`pidof $DAEMON`" ] && break
				sleep 2
				[ $n -gt 10 ] && break
				let n+=1
			done
			if [ -n "`pidof $DAEMON`" ]; then
				echo "Termination of $NAME was not successful, it keeps running"
				sleep 1
			fi
			cd "$pth"
		else
			echo "$NAME already stopped"
		fi
		;;
	status)
		if [ -n "`pidof $DAEMON`" ]; then
			echo "$NAME is running"
		else
			echo "$NAME is not running"
		fi
		;;
	restart)
		"$0" stop
		"$0" start
		;;
	*)
		echo "Usage: $0 (start|stop|restart|usage)"
		;;
esac

exit 0
For openwrt:
Assembly script (Makefile, goes into package/network/services/sphinx)
include $(TOPDIR)/rules.mk

PKG_NAME:=sphinx
PKG_VERSION:=2.2.2
PKG_REVISION:=4470
PKG_SUFFIX:=stemmer
PKG_RELEASE:=2
#PKG_MD5SUM:=3119bbeafc9e32637339c6e95a3317ef

PKG_SOURCE:=$(PKG_NAME)-$(PKG_VERSION)-$(PKG_REVISION)-$(PKG_SUFFIX).tar.gz
PKG_MAINTAINER:=Aleksey Vinogradov <klirichek@sphinxsearch.com>
PKG_SOURCE_URL:=http://192.168.1.5:65080/r/sphinxsearch
PKG_BUILD_DIR:=$(BUILD_DIR)/$(PKG_NAME)-$(PKG_VERSION)-$(PKG_REVISION)-$(PKG_SUFFIX)
PKG_BUILD_PARALLEL:=1

PKG_DPNDS:= +SPHINX_MYSQL_SUPPORT:libmysqlclient +SPHINX_PGSQL_SUPPORT:libpq +SPHINX_UNIXODBC_SUPPORT:unixodbc +SPHINX_EXPAT_SUPPORT:libexpat

ifeq ($(CONFIG_SPHINX_DYNAMIC_LOAD),y)
PKG_BUILD_DEPENDS:= $(PKG_DEPENDS)
endif

include $(INCLUDE_DIR)/package.mk

define Package/sphinx
  SECTION:=net
  CATEGORY:=Network
  SUBMENU:=Web Servers/Proxies
  TITLE:=sphinxsearch - fast FT search engine server
  DEPENDS:=+libstdcpp +librt +libpthread +zlib
ifneq ($(CONFIG_SPHINX_DYNAMIC_LOAD),y)
  DEPENDS+= $(PKG_DPNDS)
endif
  MENU:=1
endef

define Package/sphinx/config
	source "$(SOURCE)/Config.in"
endef

define Package/sphinx/conffiles
/etc/sphinx/sphinx.conf
endef

# :)
define Package/sphinx/description
	This is placeholder for sphinxsearch description
endef

CONFIGURE_VARS += \
	ac_cv_func_realloc_0_nonnull=yes \
	ac_cv_func_malloc_0_nonnull=yes \
	ac_cv_c_bigendian=yes \
	sphinx_cv_unaligned_ram_access=yes

CONFIGURE_ARGS += \
	--prefix=/ \
	--sysconfdir=/etc/sphinx \
	$(if $(CONFIG_SPHINX_MYSQL_SUPPORT),--with-mysql,--without-mysql) \
	$(if $(CONFIG_SPHINX_PGSQL_SUPPORT),--with-pgsql,--without-pgsql) \
	$(if $(CONFIG_SPHINX_UNIXODBC_SUPPORT),--with-unixodbc,--without-unixodbc) \
	$(if $(CONFIG_SPHINX_EXPAT_SUPPORT),--with-libexpat,--without-libexpat) \
	$(if $(CONFIG_SPHINX_DYNAMIC_LOAD),--enable-dl) \
	--with-syslog \
	--with-libstemmer

define Package/sphinx/install
	$(INSTALL_DIR) $(1)/etc/sphinx
	$(INSTALL_DATA) ./files/sphinx.conf $(1)/etc/sphinx/sphinx.conf
	$(INSTALL_DIR) $(1)/etc/init.d
	$(INSTALL_BIN) ./files/sphinx.init $(1)/etc/init.d/sphinx
	$(INSTALL_DIR) $(1)/usr/sbin
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/src/searchd $(1)/usr/sbin/
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/src/indexer $(1)/usr/sbin/
# indextool and spelldump are not needed on the router
#	$(INSTALL_BIN) $(PKG_BUILD_DIR)/src/indextool $(1)/usr/sbin/
#	$(INSTALL_BIN) $(PKG_BUILD_DIR)/src/spelldump $(1)/usr/sbin/
endef

$(eval $(call BuildPackage,sphinx))
Menu configuration (Config.in, placed next to the Makefile in package/network/services/sphinx)
# sphinx config
menu "Configuration"
	depends on PACKAGE_sphinx

config SPHINX_DYNAMIC_LOAD
	bool "Load all client libs for accessing sources dynamically"
	default y
	help
	  This will force the sphinx to load the necessary db libs only when
	  actually using db sources (otherwise they will be linked statically
	  and will become dependencies of the sphinx package).

config SPHINX_MYSQL_SUPPORT
	bool "Enable indexing of mysql databases"
	select PACKAGE_libmysqlclient
	default n
	help
	  This will build the sphinx with support for indexing mysql
	  databases. It allows using source type=mysql, and needs the
	  libmysqlclient library in order to work.

config SPHINX_PGSQL_SUPPORT
	bool "Enable indexing of postgresql databases"
	select PACKAGE_libpq
	default n
	help
	  This will build the sphinx with support for indexing postgresql
	  databases. It allows using source type=pgsql, and needs the libpq
	  library in order to work.

config SPHINX_UNIXODBC_SUPPORT
	bool "Enable indexing of odbc sources"
	select PACKAGE_unixodbc
	default n
	help
	  This will build the sphinx with support for indexing odbc sources.
	  It allows using source type=odbc, and needs the unixodbc library
	  in order to work.

config SPHINX_EXPAT_SUPPORT
	bool "Enable indexing of xmlpipe sources"
	select PACKAGE_libexpat
	default n
	help
	  This will build the sphinx with support for indexing xmlpipes.
	  It allows using source type=xmlpipe2, and needs the libexpat
	  library in order to work.

endmenu
Init script (sphinx.init, goes into package/network/services/sphinx/files)
#!/bin/sh /etc/rc.common
# Copyright (C) 2010-2011 OpenWrt.org

START=95
STOP=10
SERVICE_STOP_TIME=9

#PREFIX=/opt
PREFIX=""

error() {
	echo "${initscript}:" "$@" 1>&2
}

start() {
	$PREFIX/usr/sbin/searchd
}

stop() {
	$PREFIX/usr/sbin/searchd --stop
}
Config example (sphinx.conf, placed next to sphinx.init in package/network/services/sphinx/files)
#
# Sphinx index for library (clean, simple, functional)
#

source ltslibrary_src
{
	type = mysql

	sql_host = 127.0.0.1
	sql_user = #wiped
	sql_pass = #wiped
	sql_db = my_lib
	sql_query_pre = SET NAMES utf8
	sql_query = SELECT * FROM sphinx_main_index
	sql_joined_field = title FROM QUERY; SELECT * FROM all_titles_sphinx_un
	sql_attr_timestamp = entered
	sql_attr_uint = pages
	sql_attr_float = price
	sql_attr_float = thickness
	sql_attr_uint = crcyear
	sql_attr_string = year
}

index ltslib
{
	source = ltslibrary_src
	path = /mnt/sphinx/index/ltsidx
	preopen = 1
	morphology = lemmatize_ru_all, lemmatize_en_all, lemmatize_de_all, libstemmer_fr
	expand_keywords = 1
	index_exact_words = 1
	min_prefix_len = 2
	min_word_len = 2
	dict = keywords
	stopwords = /mnt/sphinx/stopwords-en.txt
	wordforms = /mnt/bigstore/library/sphinx/wordforms.txt
}

indexer
{
	mem_limit = 32M
}

common
{
	lemmatizer_base = /mnt/sphinx/aot
}

searchd
{
	listen = localhost:9306:mysql41
	log = syslog
	query_log = syslog
	read_timeout = 5
	max_children = 30
	pid_file = /mnt/sphinx/searchd.pid
	max_matches = 1000
	seamless_rotate = 1
	preopen_indexes = 0
	unlink_old = 1
	workers = threads # for RT to work
	binlog_path =
	subtree_docs_cache = 1M
	subtree_hits_cache = 1M
}
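Since the config exposes searchd on localhost:9306 speaking the mysql41 protocol, any stock mysql client (mysql -h 127.0.0.1 -P 9306) can query it over SphinxQL. A hypothetical session against the ltslib index might look like this (the search term is illustrative; id is Sphinx's built-in document id, and pages/year come from the attribute declarations in the source):

```sql
-- full-text search in the ltslib index, returning a few of the
-- attributes declared in the config
SELECT id, pages, year FROM ltslib WHERE MATCH('dickens') LIMIT 10;

-- per-query statistics (total matches, per-keyword hit counts)
SHOW META;
```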
Well, and a couple of pre-built packages:
sphinxsearch_2.2.2-4470-2_mipsel.ipk: for optware under dd-wrt, mipsel platform (LSB).
sphinx_2.2.2-2_ar71xx.ipk: for openwrt on a NetGear WNDR4300, mips platform (MSB).