
I have worked with Rails for quite a while now, and in that time I have seen a lot of Rails applications and have read and written plenty of bad Ruby code. Here are the five mistakes I have seen in almost every one of those applications.
1. Migrations without constraints
Your data model is the heart of your application. Without schema constraints, bugs in your code will slowly corrupt the data until you cannot rely on a single field in your database. Here is the schema for a Contact model:

```ruby
create_table "contacts" do |t|
  t.integer "user_id"
  t.string  "name"
  t.string  "phone"
  t.string  "email"
end
```
What is missing here? Most likely a Contact belongs to (`belongs_to`) a User, and a contact must have at least a name: use database constraints to enforce this. By adding `:null => false` we can always be sure of the model's integrity, even if there are bugs in the validation code, because the database itself will refuse to save a record that violates these constraints.

```ruby
create_table "contacts" do |t|
  t.integer "user_id", :null => false
  t.string  "name",    :null => false
  t.string  "phone"
  t.string  "email"
end
```
Bonus hint: use `:limit => N` to give your string fields a sensible size. By default a string column holds 255 characters, which is almost certainly pointless for a `phone` field, wouldn't you agree?
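A sketch of the same migration with limits added (the specific sizes here are illustrative guesses, not from the article; pick limits that match your data):

```ruby
create_table "contacts" do |t|
  t.integer "user_id", :null => false
  t.string  "name",    :null => false, :limit => 100
  t.string  "phone",   :limit => 20   # a phone number never needs 255 chars
  t.string  "email",   :limit => 100
end
```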
2. Object Oriented Programming
Most Rails developers do not write object-oriented Ruby code. They write MVC-oriented code, spreading models and controllers across the prescribed folders. Some add helper modules full of class methods to the lib folder, but that is as far as it goes. It takes two or three years before a developer realizes: "Rails is just Ruby. I can create plain objects and wire them into the framework however I like, not only the way it prescribes."
Bonus hint: build facades for the third-party services you use. Create a mock facade for your tests so the test suite never hits those services.
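A minimal sketch of the facade idea in plain Ruby (the class and method names here are made up for illustration, not from the article). The application talks only to the facade, never to the HTTP client directly:

```ruby
# Facade over a hypothetical SMS service. In production you pass in the
# real HTTP client; in tests you pass in the fake below.
class SmsGateway
  def initialize(client)
    @client = client
  end

  def deliver(phone, text)
    @client.post("/messages", :to => phone, :body => text)
  end
end

# Test double with the same interface as the real client. It just
# records what would have been sent, so tests never touch the network.
class FakeSmsClient
  attr_reader :sent

  def initialize
    @sent = []
  end

  def post(path, payload)
    @sent << payload
  end
end
```

In tests you build the facade with `FakeSmsClient.new` and assert on `sent`; swapping the client is the only change needed between test and production.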
3. HTML concatenation in helpers
If you write helper methods (well done!), then at least you are trying to keep your templates clean. But very often developers do not know the basics of generating tags inside helpers, which leads to a jumble of concatenated strings:
```ruby
str = "<li class='vehicle_list'> "
str += link_to("#{vehicle.title.upcase} Sale",
               show_all_styles_path(vehicle.id, vehicle.url_title))
str += " </li>"
str.html_safe
```
Yikes! This is ugly and an easy road to XSS vulnerabilities. `content_tag` is your friend:

```ruby
content_tag :li, :class => 'vehicle_list' do
  link_to("#{vehicle.title.upcase} Sale",
          show_all_styles_path(vehicle.id, vehicle.url_title))
end
```
Bonus hint: start writing helpers that take a block as an argument. Nested blocks are great when you need to generate nested HTML.
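The pattern can be sketched outside Rails (in a real helper you would build the tags with `content_tag`; the `boxed` helper here is made up purely to show the shape):

```ruby
# A "helper" that takes a block and wraps whatever the block returns.
def boxed(css_class)
  "<div class='#{css_class}'>#{yield}</div>"
end

# Nested blocks produce nested HTML.
html = boxed("outer") { boxed("inner") { "content" } }
# html == "<div class='outer'><div class='inner'>content</div></div>"
```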
4. Huge queries that load everything into memory
Need to fix something across all your records? Just loop over every one of them and fix each in turn, right?
```ruby
User.has_purchased(true).each do |customer|
  customer.grant_role(:customer)
end
```
Now imagine you run an e-commerce site with a million users. If each `User` object takes 500 bytes of memory, this code will eat about 500 MB. A better option:

```ruby
User.has_purchased(true).find_each do |customer|
  customer.grant_role(:customer)
end
```
`find_each` uses the `find_in_batches` method under the hood, fetching 1,000 records at a time and dramatically reducing memory consumption.
Bonus hint: use `update_all` or bare SQL for bulk data updates. SQL takes some time to learn, but the payoff is huge: up to 100x performance gains.
5. Code review
I will assume that you are using GitHub, and I will also assume that you are not using pull requests. If a new feature takes you a day or two to implement, do it in a separate branch and open a pull request. Your team will be able to review your code, suggest improvements, and point out things you may have missed. I guarantee this will improve the quality of your code.
Bonus hint: do not accept pull requests whose tests are not passing. Tests are invaluable for keeping your application stable, and for your peaceful sleep.
Bonus: a useful comment on the original article.
`find_each` and `find_in_batches` are preferred for large result sets, but keep in mind that you are trading application memory for database CPU cycles.
Suppose you run a query that returns 1,000,000 records:

```sql
SELECT * FROM users WHERE unindexed_field = true;
```
With `User.all.each`, the database runs one huge query, computing the entire result at once. With `User.where(...).find_each(:batch_size => 100)`, the database has to run a similar query 1,000,000 / 100 = 10,000 times. And if you are using MySQL, it may recalculate the result every time, i.e. for records 100-200 it computes the first 200 records, for 200-300 the first 300, for 300-400 the first 400, and so on up to 999,900-1,000,000.
So for large result sets it is certainly better to use `find_each` or `find_in_batches`, but keep in mind that they can cause problems of their own. The common "hack" for this problem is:

```ruby
ids = User.where(...).select(:id).all
ids.in_groups_of(100) do |id_group|
  # process each slice of ids, e.g.
  # User.where(:id => id_group).each { |u| ... }
end
```