
MongoDB and IT Job Market Research

Have you ever analyzed vacancies?

Have you ever wondered which technologies are most in demand on the labor market right now? A month ago? A year ago?

How often do new Java developer vacancies open in a particular area of your city, and how quickly are they closed?
In this article I will show how to build a reporting system that answers questions like these. Let's go!


Source

The choice fell on Headhunter.ru


Many of you are probably familiar with Headhunter.ru, or have even used it. Thousands of new vacancies in various fields are posted on this site daily. HeadHunter also has an API that lets developers work with the resource's data.

Tools


Using a simple example, let us walk through building the data acquisition pipeline for the reporting system, based on the Headhunter.ru API. The embedded SQLite DBMS will serve as intermediate storage, the processed data will be stored in the NoSQL database MongoDB, and the main language will be Python 3.4.
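As a sketch of the intermediate storage, the SQLite table could be created as follows. The table name `vacancy_history` comes from the queries later in the article; the exact column types are my assumption, inferred from how the columns are used there:

```python
import sqlite3

# Connect to the intermediate SQLite store (the file name is illustrative).
conn_db = sqlite3.connect('vacancy.db')
c = conn_db.cursor()

# The column set is inferred from the queries used later in the article:
# id_vacancy - vacancy id from the HH API,
# date_load  - the day the daily collector saw the vacancy,
# date_from  - publication date reported by the API.
c.execute("""
    create table if not exists vacancy_history (
        id_vacancy integer,
        date_load  text,
        date_from  text
    )
""")
conn_db.commit()
conn_db.close()
```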

HH API
The capabilities of the HeadHunter API are quite extensive and well documented in the official GitHub documentation. Most importantly, it allows anonymous requests, with no authorization required, to fetch vacancy information in JSON format. Recently a number of methods (employer methods) became paid, but they are not needed for this task.
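A minimal sketch of such an anonymous request: the endpoint and parameter names follow the public HH API documentation for `/vacancies`, while the helper below only builds the URL and parses an already-fetched JSON payload, so it can be shown without a live call:

```python
import json
from urllib.parse import urlencode

API_URL = 'https://api.hh.ru/vacancies'

def build_request_url(text, date_from, date_to, page=0, per_page=100):
    """Build an anonymous search URL for the /vacancies endpoint."""
    params = {'text': text, 'date_from': date_from, 'date_to': date_to,
              'page': page, 'per_page': per_page}
    return API_URL + '?' + urlencode(params)

def extract_ids(payload):
    """Pull vacancy ids and publication dates out of a search response."""
    return [(item['id'], item['published_at']) for item in payload['items']]

# A trimmed example of the search response shape (values are illustrative):
sample = json.loads("""
{"items": [{"id": "18126697", "published_at": "2018-07-09T10:00:00+0300"}],
 "found": 1, "pages": 1, "page": 0}
""")

print(build_request_url('python', '2018-07-09', '2018-07-09'))
print(extract_ids(sample))
```

The detail of each vacancy is then fetched the same way from `/vacancies/{id}`.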

Each vacancy stays on the site for 30 days; after that, if it is not renewed, it is archived. If a vacancy reached the archive before the 30 days expired, it means the employer closed it.

The HeadHunter API (hereinafter HH API) can return the array of vacancies published on any date within the last 30 days, and that is exactly what we will use: we will collect the published vacancies daily, one day at a time.
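The daily job can walk over day-long windows going back up to 30 days; a sketch of the window logic only (no network calls, names are mine):

```python
import datetime as dt

def daily_windows(days_back=30, today=None):
    """Yield (date_from, date_to) pairs, one per day,
    covering the last `days_back` days."""
    today = today or dt.date.today()
    for delta in range(days_back, 0, -1):
        day = today - dt.timedelta(days=delta)
        yield day.isoformat(), (day + dt.timedelta(days=1)).isoformat()

# Each window is then passed as date_from/date_to to the API request.
for date_from, date_to in daily_windows(days_back=3, today=dt.date(2018, 7, 10)):
    print(date_from, date_to)
```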

Implementation



First use case
Suppose we are faced with the task of identifying vacancies that were closed during a certain period, for example in July 2018. This is solved as follows: a simple SQL query against the vacancy_history table returns the data we need, which can then be passed into a DataFrame for further analysis:

 def get_closed_by_period(date_in, date_out):
     c.execute("""
         select
             a.id_vacancy,
             date(a.date_load) as date_last_load,
             date(a.date_from) as date_publish,
             ifnull(a.date_next, date(a.date_load, '+1 day')) as date_close
         from
             (select
                 vh1.id_vacancy, vh1.date_load, vh1.date_from,
                 min(vh2.date_load) as date_next
              from vacancy_history vh1
              left join vacancy_history vh2
                  on vh1.id_vacancy = vh2.id_vacancy
                  and vh1.date_load < vh2.date_load
              where date(vh1.date_load) between :date_in and :date_out
              group by vh1.id_vacancy, vh1.date_load, vh1.date_from
             ) as a
         where a.date_next is null
         """, {"date_in": date_in, "date_out": date_out})
     return c.fetchall()

 date_in = dt.datetime(2018, 7, 1)
 date_out = dt.datetime(2018, 7, 31)
 closed_vacancies = get_closed_by_period(date_in, date_out)
 df = pd.DataFrame(closed_vacancies,
                   columns=['id_vacancy', 'date_last_load', 'date_publish', 'date_close'])
 df.head()

The result looks like this:

   id_vacancy  date_last_load  date_publish  date_close
0  18126697    2018-07-09      2018-07-09    2018-07-10
1  18155121    2018-07-09      2018-06-19    2018-07-10
2  18881605    2018-07-09      2018-07-02    2018-07-10
3  19620783    2018-07-09      2018-06-27    2018-07-10
4  19696188    2018-07-09      2018-06-15    2018-07-10
If we want to analyze the data in Excel or third-party BI tools, we can export the vacancy_history table to a CSV file for further analysis:

 # Export the vacancy_history table to a CSV file
 data = c.execute('select * from vacancy_history')
 with open('vacancy_history.csv', 'w', newline='') as out_csv_file:
     csv_out = csv.writer(out_csv_file)
     csv_out.writerow(d[0] for d in data.description)
     csv_out.writerows(data.fetchall())
 conn_db.close()

Heavy artillery


But what if we need to conduct more complex data analysis? This is where MongoDB comes to the rescue: a document-oriented NoSQL database that stores data in JSON-like documents.

The collection steps described above run daily, so there is no need to go through all the vacancies and fetch detailed information on each of them every time. We take only those received within the last five days.
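The detailed vacancies are then written to MongoDB as documents. A minimal sketch of the conversion: the field names mirror the queries later in the article, the tuple layout and the grouping of the employer into a sub-document are my assumptions, and the pymongo call is shown commented out so the example stays self-contained:

```python
def to_document(row):
    """Convert a (id, name, employer, salary, grade) tuple into a
    MongoDB-ready document. Field names mirror the queries used
    later in the article; the layout itself is illustrative."""
    id_vacancy, name, employer, salary, grade = row
    return {
        'id_vacancy': id_vacancy,
        'name': name,
        'employer': {'name': employer},
        'salary_processed': salary,
        'grade': grade,
    }

doc = to_document((18126697, 'Python teamlead', 'Digitalhr', 230000.0, 'senior'))
print(doc)

# With pymongo, the document would then be upserted into the collection:
# VacancyMongo.replace_one({'id_vacancy': doc['id_vacancy']}, doc, upsert=True)
```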

The completed collection in MongoDB looks something like this:
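Based on the fields used in the queries below, a document in the collection has roughly this shape (the values are illustrative, and the nesting of address/metro is inferred from the metro-station query later in the article):

```python
# An illustrative document from the vacancies collection.
sample_vacancy = {
    'id_vacancy': 18126697,
    'name': 'Web Team Lead / Architect (Python / Django / React)',
    'grade': 'senior',
    'salary_processed': 293901.0,
    'employer': {'name': 'Investex Ltd'},
    'address': {'metro': {'station_name': 'Vasileostrovskaya'}},
}
print(sample_vacancy['name'])
```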



Some more examples


With the database assembled, we can run various analytical queries. For example, here are the top 10 highest-paid Python developer vacancies in St. Petersburg:

 cursor_mongo = VacancyMongo.find({"name": {"$regex": ".*[pP]ython*"}})
 df_mongo = pd.DataFrame(list(cursor_mongo))
 del df_mongo['_id']
 pd.concat([df_mongo.drop(['employer'], axis=1),
            df_mongo['employer'].apply(pd.Series)['name']],
           axis=1)[['grade', 'name', 'salary_processed']] \
     .sort_values('salary_processed', ascending=False)[:10]

Top 10 highest-paid Python vacancies

grade   name                                                 employer                 salary_processed
senior  Web Team Lead / Architect (Python / Django / React)  Investex Ltd             293901.0
senior  Senior Python developer in Montenegro                Betmaster                277141.0
senior  Senior Python developer in Montenegro                Betmaster                275289.0
middle  Back-End Web Developer (Python)                      Soshace                  250000.0
middle  Back-End Web Developer (Python)                      Soshace                  250000.0
senior  Lead Python Engineer for a Swiss Startup             Assaia International AG  250000.0
middle  Back-End Web Developer (Python)                      Soshace                  250000.0
middle  Back-End Web Developer (Python)                      Soshace                  250000.0
senior  Python teamlead                                      Digitalhr                230000.0
senior  Lead Developer (Python, PHP, Javascript)             IK GROUP                 220231.0



Now let us find the metro stations with the highest concentration of Java developer vacancies. Using a regular expression I filter on the vacancy name "Java", and keep only the vacancies that contain an address:

 cursor_mongo = VacancyMongo.find({"name": {"$regex": ".*[jJ]ava[^sS]"},
                                   "address": {"$ne": None}})
 df_mongo = pd.DataFrame(list(cursor_mongo))
 df_mongo['metro'] = df_mongo.apply(
     lambda x: x['address']['metro']['station_name']
               if x['address']['metro'] is not None else None,
     axis=1)
 df_mongo.groupby('metro')['_id'] \
     .count() \
     .reset_index(name='count') \
     .sort_values(['count'], ascending=False)[:10]

Java developer vacancies by metro station

metro                 count
Vasileostrovskaya     87
Petrogradskaya        68
Vyborgskaya           46
Ploshchad Lenina      45
Gorkovskaya           45
Chkalovskaya          43
Narvskaya             32
Ploshchad Vosstaniya  29
Staraya Derevnya      29
Elizarovskaya         27


Results


So, the analytical capabilities of the system are quite broad: it could be used, for example, when planning a startup or opening a new line of business.

Note that only the basic functionality of the system is presented so far; further development is planned in the direction of analyzing geographic coordinates and predicting where in the city new vacancies will appear.

The full source code for this article can be found on my GitHub.

P.S. Comments on the article are welcome; I will be glad to answer your questions and hear your opinions. Thanks!

Source: https://habr.com/ru/post/422627/

