
Search for images based on content

Image databases can be very large, containing hundreds of thousands or even millions of images. In most cases these databases are indexed by keywords only: an operator enters the keywords and sorts the images into categories. But images can also be found by their own content, where "content" means colors and their distribution, the objects in an image and their spatial arrangement, and so on. Segmentation and recognition algorithms are not yet mature, but several systems (including commercial ones) for searching images by content already exist.



To work with an image database, it is desirable to have a way of searching that is more convenient and efficient than browsing the entire database directly. Most companies perform only two processing steps: selecting images for inclusion in the database and classifying them by assigning keywords. Internet search engines usually extract keywords automatically from image captions. With ordinary databases, images can be found by their textual attributes; in a typical search these attributes include categories, the names of people in the image, and the date the image was created. To speed up the search, the database can be indexed on all these fields, and SQL can then be used to query it. For example, the query:

SELECT * FROM IMAGEBD

WHERE CATEGORY = 'MEI'

could find and return all images in the database that show MEI. In reality, things are not so simple: this type of search has serious limitations. Assigning keywords manually is time-consuming and, much worse, ambiguous, because different operators describe the same image differently. As a result, some of the images found can be very different from what the user expected. The figure shows Google's output for the query "MEI".






Accepting as fact that keywords alone do not provide sufficient efficiency, let us consider a number of other image-search methods.



Let's start with search by example. Instead of specifying keywords, the user presents a sample image to the system or draws a sketch; the system should then find similar images, or images containing the required objects. For simplicity, assume the user gives the system a rough sketch of the expected image plus a set of constraints. If the user provides a blank sketch, the system should return all images satisfying the constraints; the constraints themselves are most naturally expressed as keywords joined by logical conditions. In the most general case the query contains an image that is compared with the images in the database using some distance measure. A distance of 0 means the image exactly matches the query; values greater than 0 correspond to varying degrees of similarity. The search engine should return images sorted by their distance from the sketch.
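This ranked retrieval scheme can be sketched in a few lines. A minimal illustration, assuming images are already reduced to feature vectors (plain lists here) and using Euclidean distance; the names and the database layout are my own, not from any particular system:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_by_distance(query, database):
    """Return (name, distance) pairs sorted by distance to the query.

    A distance of 0 means an exact match; larger values mean
    progressively weaker similarity, exactly as described above."""
    scored = [(name, euclidean(query, feat)) for name, feat in database.items()]
    return sorted(scored, key=lambda pair: pair[1])

# Hypothetical database of precomputed feature vectors.
db = {"img1": [0.2, 0.5], "img2": [0.9, 0.1], "img3": [0.2, 0.6]}
print(rank_by_distance([0.2, 0.5], db))
```

Any real system differs mainly in how the feature vectors are computed and which distance measure is plugged in; the sort-by-distance step stays the same.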



The figure shows a search in the QBIC system using a distance measure based on color layout.







To determine the similarity between a database image and the query image, some distance measure or set of characteristics is usually used that yields a numerical estimate of how alike the images are. Image similarity characteristics can be divided into four main groups:

1. Color similarity

2. Texture similarity

3. Shape similarity

4. Similarity of objects and the relations between them



For simplicity, we consider only color-similarity methods. Color-similarity characteristics are often very simple: they compare the color content of one image with that of another image, or with parameters specified in the query. For example, in the QBIC system the user can specify the percentage of each color in the desired images. The figure shows the images returned by a query specifying 40% red, 30% yellow, and 10% black. Although the retrieved images contain very similar colors, their semantic content differs significantly.
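A "percentage of colors" query of this kind is easy to sketch. The following is an illustration only, assuming pixels are already quantized to coarse color labels; the tolerance value and all names are my own assumptions, not QBIC's actual interface:

```python
from collections import Counter

def color_percentages(pixels):
    """Map each color label to its share of the image, in percent."""
    counts = Counter(pixels)
    total = len(pixels)
    return {color: 100.0 * n / total for color, n in counts.items()}

def matches_query(pixels, query, tolerance=5.0):
    """True if every requested color fills at least the requested
    percentage of the image, within the given tolerance."""
    actual = color_percentages(pixels)
    return all(actual.get(color, 0.0) >= pct - tolerance
               for color, pct in query.items())

# Hypothetical image matching the "40% red, 30% yellow, 10% black" query.
image = ["red"] * 40 + ["yellow"] * 30 + ["black"] * 10 + ["blue"] * 20
print(matches_query(image, {"red": 40, "yellow": 30, "black": 10}))
```

Note that this check says nothing about *where* the colors are, which is exactly why images with very similar color content can have completely different semantics.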







A similar search method is based on comparing color histograms. Distance measures based on color histograms should account for the similarity between different colors. The QBIC system defines the distance as follows:


d_hist^2(I, Q) = (h(I) - h(Q))^T A (h(I) - h(Q))


where h(I) and h(Q) are the histograms of images I and Q, and A is a similarity matrix. In the similarity matrix, elements close to 1 correspond to similar colors and elements close to 0 to very different colors. Another possible distance measure is based on color layout: when forming a query, the user is presented with an empty grid and can assign each cell a color from a table.
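The histogram distance with a color-similarity matrix A is a quadratic form in the difference of the two histograms, d² = (h(I) − h(Q))ᵀ A (h(I) − h(Q)). A minimal pure-Python sketch, with the expansion written out as a double sum (real systems would use a linear-algebra library):

```python
def quadratic_histogram_distance(h_i, h_q, a):
    """Squared quadratic-form distance between two color histograms.

    a[i][j] close to 1 means bins i and j hold similar colors,
    close to 0 means very different colors."""
    diff = [x - y for x, y in zip(h_i, h_q)]
    # (diff^T A) diff, expanded as a double sum over bin pairs.
    return sum(diff[i] * a[i][j] * diff[j]
               for i in range(len(diff))
               for j in range(len(diff)))

# With A = identity the measure reduces to squared Euclidean distance.
identity = [[1, 0], [0, 1]]
print(quadratic_histogram_distance([1, 0], [0, 1], identity))
```

The off-diagonal entries of A are what let the measure treat, say, orange and red as near-matches instead of penalizing them as if they were unrelated bins.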



Similarity characteristics based on color layout with painted grids require a measure that compares the contents of two such grids: each grid cell specified in the query is compared with the corresponding cell of an image from the database, and the per-cell results are combined into a distance between the images:


d_layout(I, Q) = Σ_g d_color(C^I(g), C^Q(g))


where C^I(g) and C^Q(g) are the colors of cell g in images I and Q, respectively.
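A sketch of this cell-by-cell comparison, assuming each cell's color is represented as an (r, g, b) triple and using squared Euclidean distance per cell; both choices are illustrative assumptions, not a specification of any real system:

```python
def cell_color_distance(c1, c2):
    """Squared Euclidean distance between two (r, g, b) cell colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def gridded_layout_distance(grid_i, grid_q):
    """Combine per-cell color distances over corresponding grid cells.

    Grids are flat lists of cell colors in the same cell order, so
    zip pairs each query cell with the matching image cell."""
    return sum(cell_color_distance(ci, cq)
               for ci, cq in zip(grid_i, grid_q))

query_grid = [(255, 0, 0), (0, 0, 255)]   # user painted: red cell, blue cell
image_grid = [(250, 0, 0), (0, 0, 255)]   # a close match from the database
print(gridded_layout_distance(image_grid, query_grid))
```

Unlike the plain histogram, this measure is sensitive to *where* colors appear, which is the whole point of the painted grid.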

Search based on texture similarity, and even more so on shape similarity, is much harder, but the first steps in this direction have already been taken. For example, the ART MUSEUM system stores color images of many paintings. These images are preprocessed into an intermediate representation in three stages:

1. The image is reduced to a fixed size and denoised with a median filter.

2. Edge detection: first with a global threshold, then with a local one. The result is a clean contour image.

3. Redundant contours are removed from the clean contour image; the result is denoised once more, yielding the required abstract representation.



When the user submits a sketch, the same operations are applied to it, producing a linear sketch. The matching algorithm is correlational in nature: the image is divided into cells, and for each cell a correlation with the corresponding cell of a database image is computed. For robustness, this procedure is repeated for several shifts of the linear sketch. In most cases this method successfully finds the desired images.
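The shift-tolerant matching step can be sketched as follows. This is a simplified stand-in for the real correlation: binary edge maps as nested lists, an overlap score in place of true correlation, and a small shift range, all of which are my own assumptions for illustration:

```python
def overlap_score(sketch, image, dx, dy):
    """Fraction of sketch edge pixels that land on image edge pixels
    when the sketch is shifted by (dx, dy)."""
    hits = edges = 0
    for y, row in enumerate(sketch):
        for x, val in enumerate(row):
            if val:
                edges += 1
                iy, ix = y + dy, x + dx
                if 0 <= iy < len(image) and 0 <= ix < len(image[0]) \
                        and image[iy][ix]:
                    hits += 1
    return hits / edges if edges else 0.0

def best_match_score(sketch, image, max_shift=1):
    """Best overlap over all shifts up to max_shift pixels, mirroring
    the 'repeat for several shifts of the linear sketch' step."""
    return max(overlap_score(sketch, image, dx, dy)
               for dx in range(-max_shift, max_shift + 1)
               for dy in range(-max_shift, max_shift + 1))

sketch = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]   # vertical stroke
image  = [[1, 0, 0], [1, 0, 0], [1, 0, 0]]   # same stroke, shifted one cell left
print(best_match_score(sketch, image))
```

Trying several shifts is what makes the match robust: a sketch drawn slightly off-center still scores highly against the right painting.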

It remains only to wait for such systems to appear in our everyday Internet search engines, and then finding pictures will no longer be much of a problem.

Source: https://habr.com/ru/post/103385/


