Analyzing large volumes of data is no longer the preserve of the biggest IT companies; ordinary developers increasingly face it too. This task comes up in a number of our projects, so we decided to systematize the experience we have accumulated by sharing the most effective tools and technologies with our i-Free colleagues and partners. Today we will talk about using Apache Spark.
At our recent technical meetup, our colleague from the MoneyTap project spoke about the simplest use cases for Apache Spark, a tool we have been using to process large amounts of data for more than a year. The talk covers everything you need to write a simple analytics system for your own project, along with a few personal recommendations for working with this wonderful tool.