
SQL access to RDBMS via ScalikeJDBC

There is a library that makes working with SQL from Scala programs convenient, and I have not found any mention of it on Habr. I would like to correct this injustice. We are talking about ScalikeJDBC.

ScalikeJDBC's main competitor is Anorm, a library from Play that solves exactly the same task of convenient communication with an RDBMS using pure SQL (without any ORM admixture). However, Anorm lives deep inside Play, and using it in non-Play projects can be difficult. I decided not to wait until that difficulty caught up with me: having heard about ScalikeJDBC, I almost immediately decided to try it out. I will share the results of that trial in this article, in the form of a small demo application, just below.

Before proceeding to an example of using the library, it is worth noting that work with the following DBMS is supported and covered by tests:

- PostgreSQL
- MySQL
- H2 Database Engine
- HSQLDB

The rest (Oracle, MS SQL Server, DB2, Informix, SQLite, thousands of them) should also work, since all communication with the DBMS goes through standard JDBC. However, they are not covered by tests, which may drive the corporate customer to despair.

Sample application


But let us leave the corporate customer alone with his gloomy thoughts and turn to what this article was written for: a short dive into the library's facilities.

Below I give an example of a simple application that uses ScalikeJDBC to access PostgreSQL. I will show how to configure it using Typesafe Config, create a table in the database, run CRUD queries against that table, and convert the results of read queries into Scala objects. I deliberately leave out many configuration options (including configuring without Typesafe Config) and library features in order to stay brief and provide a quick start. A full description of the features is available in the convenient and fairly short documentation, as well as in the wiki on GitHub.

The application will use SBT for building and dependency management, so we create the following build.sbt file in the root of an empty project:

name := "scalike-demo"

version := "0.0"

scalaVersion := "2.11.6"

val scalikejdbcV = "2.2.5"

libraryDependencies ++= Seq(
  "org.postgresql"  %  "postgresql"         % "9.4-1201-jdbc41",
  "org.scalikejdbc" %% "scalikejdbc"        % scalikejdbcV,
  "org.scalikejdbc" %% "scalikejdbc-config" % scalikejdbcV
)

It declares the following dependencies:

- the PostgreSQL JDBC driver;
- scalikejdbc itself;
- the scalikejdbc-config module, which handles configuration via Typesafe Config.

As the DBMS we will use a local PostgreSQL on the standard port (5432). It already has a user pguser with the password securepassword and full access to the demo_db database.
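If that user and database do not exist yet, they can be provisioned, for example, from psql. This is a minimal sketch; the role name, password, and database name simply match the configuration used in this article:

```sql
CREATE ROLE pguser WITH LOGIN PASSWORD 'securepassword';
CREATE DATABASE demo_db OWNER pguser;
```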

Given this, create a configuration file src/main/resources/application.conf with the following content:

db {
  demo_db {
    driver = org.postgresql.Driver
    url = "jdbc:postgresql://localhost:5432/demo_db"
    user = pguser
    password = securepassword
    poolInitialSize = 10
    poolMaxSize = 20
    connectionTimeoutMillis = 1000
    poolValidationQuery = "select 1 as one"
    poolFactoryName = "commons-dbcp"
  }
}

We could have restricted ourselves to the first four parameters; in that case the default connection pool settings would apply.
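For comparison, that minimal variant of the same file would look like this (a sketch; all pool settings then fall back to the library defaults):

```hocon
db {
  demo_db {
    driver = org.postgresql.Driver
    url = "jdbc:postgresql://localhost:5432/demo_db"
    user = pguser
    password = securepassword
  }
}
```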

Next, create a demo package in the src/main/scala folder, where we will put all the Scala code.

DemoApp.scala

Let's start with the main launch object:

package demo

import scalikejdbc.config.DBs

object DemoApp extends App {
  DBs.setup('demo_db)
}

The only line inside the object tells the library to read the access settings for the demo_db database from the configuration files. The DBs object searches for all matching configuration keys (driver, url, user, password, ...) under the db.demo_db node in all configuration files read by Typesafe Config. By convention, Typesafe Config automatically reads the application.conf located on the application classpath.

The result is a ConnectionPool configured for the database.

DbConnected.scala

Next, create a trait that encapsulates obtaining a database connection from the pool:

package demo

import java.sql.Connection

import scalikejdbc._

trait DbConnected {

  def connectionFromPool: Connection = ConnectionPool.borrow('demo_db)  // (1)

  def dbFromPool: DB = DB(connectionFromPool)                           // (2)

  def insideLocalTx[A](sqlRequest: DBSession => A): A = {               // (3)
    using(dbFromPool) { db =>
      db localTx { session =>
        sqlRequest(session)
      }
    }
  }

  def insideReadOnly[A](sqlRequest: DBSession => A): A = {              // (4)
    using(dbFromPool) { db =>
      db readOnly { session =>
        sqlRequest(session)
      }
    }
  }
}

In (1) we obtain a connection (java.sql.Connection) from the pool created and configured in the previous step.
In (2) we wrap the obtained connection in DB, scalikejdbc's basic database accessor.
In (3) and (4) we create convenient wrappers for executing SQL queries: (3) for modifying queries, (4) for read-only queries. We could have done without them, but then we would have to write everywhere:

def delete(userId: Long) = {
  using(dbFromPool) { db =>
    db localTx { implicit session =>
      sql"DELETE FROM t_users WHERE id = ${userId}".execute().apply()
    }
  }
}

instead of:

def delete(userId: Long) = {
  insideLocalTx { implicit session =>
    sql"DELETE FROM t_users WHERE id = ${userId}".execute().apply()
  }
}

After all, no one has repealed DRY yet.

Let us examine in more detail what happens in (3) and (4):

using(dbFromPool) wraps the opening and closing of the database connection around a single request. Without it, we would have to open the connection ourselves (val db = ThreadLocalDB.create(connectionFromPool)) and not forget to close it (db.close()).
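What `using` does can be sketched in plain Scala. This is a simplified loan-pattern helper, not the library's actual implementation, and `FakeConnection` is a hypothetical stand-in for a real connection:

```scala
// Loan pattern: the resource is always closed, even if the body throws.
def using[R <: AutoCloseable, A](resource: R)(body: R => A): A =
  try body(resource)
  finally resource.close()

// Hypothetical resource, used only to demonstrate the close() guarantee.
final class FakeConnection extends AutoCloseable {
  var closed = false
  def query(sql: String): String = s"result of: $sql"
  def close(): Unit = closed = true
}

val conn = new FakeConnection
val res = using(conn)(c => c.query("select 1"))
// conn.closed is now true, although we never called close() ourselves
```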

db.localTx - creates a transaction within which the queries are executed. If an exception occurs inside the block, the transaction is rolled back.
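The commit-or-rollback guarantee of localTx can also be sketched in plain Scala. `FakeTx` is a hypothetical transaction object used only for illustration, not scalikejdbc's code:

```scala
// Hypothetical transaction handle that records its final state.
final class FakeTx {
  var state: String = "open"
  def commit(): Unit = state = "committed"
  def rollback(): Unit = state = "rolled back"
}

// If the body succeeds, commit; if it throws, roll back and rethrow.
def localTx[A](tx: FakeTx)(body: FakeTx => A): A =
  try {
    val result = body(tx)
    tx.commit()
    result
  } catch {
    case e: Throwable =>
      tx.rollback()
      throw e
  }

val ok = new FakeTx
localTx(ok)(_ => 42)                        // body succeeds => committed

val failed = new FakeTx
try localTx(failed)(_ => sys.error("boom"))
catch { case _: Throwable => () }           // body throws => rolled back
```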

db.readOnly - executes queries in read-only mode.

We can use this trait in our DAO classes, of which this educational application will have exactly one.

User.scala

Before creating the DAO class itself, let's create the domain object it will work with. It is a simple case class describing a system user, with a few self-explanatory fields:

package demo

case class User(id: Option[Long] = None,
                name: String,
                email: Option[String] = None,
                age: Option[Int] = None)

Only the name field is required. If id == None, then this indicates that the object has not yet been saved to the database.
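As a quick illustration (plain Scala, no database involved), the lifecycle of the id field might look like this; the id value 1L is of course hypothetical:

```scala
case class User(id: Option[Long] = None,
                name: String,
                email: Option[String] = None,
                age: Option[Int] = None)

val fresh = User(name = "Vasya", age = Some(42)) // not yet persisted: id = None
val saved = fresh.copy(id = Some(1L))            // as it would look after an INSERT
val updated = saved.copy(email = Some("vasya@domain.org"), age = None)
```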

UserDao.scala

Now everything is ready to create our DAO object.

package demo

import scalikejdbc._

class UserDao extends DbConnected {

  def createTable(): Unit = {
    insideLocalTx { implicit session =>
      sql"""CREATE TABLE t_users (
              id BIGSERIAL NOT NULL PRIMARY KEY,
              name VARCHAR(255) NOT NULL,
              email VARCHAR(255),
              age INT)""".execute().apply()
    }
  }

  def create(userToSave: User): Long = {
    insideLocalTx { implicit session =>
      val userId: Long =
        sql"""INSERT INTO t_users (name, email, age)
              VALUES (${userToSave.name}, ${userToSave.email}, ${userToSave.age})"""
          .updateAndReturnGeneratedKey().apply()
      userId
    }
  }

  def read(userId: Long): Option[User] = {
    insideReadOnly { implicit session =>
      sql"SELECT * FROM t_users WHERE id = ${userId}"
        .map(rs => User(rs.longOpt("id"),
                        rs.string("name"),
                        rs.stringOpt("email"),
                        rs.intOpt("age")))
        .single.apply()
    }
  }

  def readAll(): List[User] = {
    insideReadOnly { implicit session =>
      sql"SELECT * FROM t_users"
        .map(rs => User(rs.longOpt("id"),
                        rs.string("name"),
                        rs.stringOpt("email"),
                        rs.intOpt("age")))
        .list.apply()
    }
  }

  def update(userToUpdate: User): Unit = {
    insideLocalTx { implicit session =>
      sql"""UPDATE t_users
            SET name = ${userToUpdate.name},
                email = ${userToUpdate.email},
                age = ${userToUpdate.age}
            WHERE id = ${userToUpdate.id}""".execute().apply()
    }
  }

  def delete(userId: Long): Unit = {
    insideLocalTx { implicit session =>
      sql"DELETE FROM t_users WHERE id = ${userId}".execute().apply()
    }
  }
}


It is already easy to guess what each function does.

An SQL object is created using one of the following notations:

 sql"<SQL here>"
 sql"""<SQL here>"""

This object provides methods that are chained onto the query:

- execute - for statements that do not return data (DDL, DELETE, ...);
- updateAndReturnGeneratedKey - for INSERT statements, returning the generated key;
- map - converts a result row into an arbitrary object;
- single - expects at most one result row and produces an Option;
- list - produces a List of all result rows.

The chain is completed by the apply() call, which executes the constructed query using the implicit session in scope.

It should also be noted that all interpolated parameters such as ${userId} are bound as PreparedStatement parameters, so there is no need to fear SQL injection.
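The idea behind this safety can be sketched with a toy interpolator. This is not scalikejdbc's real implementation; `psql` and `SqlStmt` are hypothetical names used only to show how the SQL template stays separate from the bound values:

```scala
// The template (with ? placeholders) and the values are kept apart,
// which is exactly what makes PreparedStatement binding injection-safe.
case class SqlStmt(template: String, params: Seq[Any])

implicit class SqlInterp(private val sc: StringContext) {
  def psql(args: Any*): SqlStmt =
    // every interpolated value becomes a `?` placeholder plus a bound parameter
    SqlStmt(sc.parts.mkString("?"), args)
}

val userId = 42L
val stmt = psql"DELETE FROM t_users WHERE id = $userId"
// stmt.template is "DELETE FROM t_users WHERE id = ?"
// stmt.params   is Seq(42L)
```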

Finita

Well, our DAO object is ready. It is, of course, strange to see a table-creating method in it... It was added purely for the sake of the example; in an educational application we can afford it. All that remains is to put the DAO object to use. To do so, we modify the DemoApp object created at the beginning. For example, it might take the following form:
package demo

import scalikejdbc.config.DBs

object DemoApp extends App {
  DBs.setup('demo_db)

  val userDao = new UserDao

  userDao.createTable()

  val userId = userDao.create(User(name = "Vasya", age = Some(42)))
  val user = userDao.read(userId).get
  val fullUser = user.copy(email = Some("vasya@domain.org"), age = None)
  userDao.update(fullUser)

  val userToDeleteId = userDao.create(User(name = "Petr"))
  userDao.delete(userToDeleteId)

  userDao.readAll().foreach(println)
}

Conclusion

In this brief review we looked at the capabilities of the ScalikeJDBC library and felt the ease and power with which it lets us build data access objects for relational data. It is good to see that, in the era of ORM dominance, there is a tool that solves its task well while continuing to develop actively.

Thank you for your attention, and may Scala be with you!

Source: https://habr.com/ru/post/256545/

