I'm sure everyone has had to work with drag-and-drop interfaces, and many have had to build them. In most cases the fact of dropping the draggable object onto the target object is established by checking whether the mouse cursor's coordinates fall inside the target object's bounding box in an event of type mouseUp, dragStop, and the like.
Almost all the examples I have seen work this way. But some time ago, while implementing an interactive task module for an educational resource, I found this approach rather inconvenient. The main reason is that the target objects are significantly smaller than the draggable ones, so aiming with the mouse is awkward and tiring. While dragging a large object, the user completely covers the target object and cannot see where the object will land.
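For reference, here is a minimal sketch of the conventional approach described above: on release, the drop succeeds only if the cursor itself is inside a target's bounding box. The `Rect` type and function names are illustrative, not taken from any particular library.

```typescript
// Illustrative rectangle type used throughout the snippets below.
interface Rect { left: number; top: number; right: number; bottom: number; }

// True if the point (x, y) lies inside rectangle r.
function pointInRect(x: number, y: number, r: Rect): boolean {
  return x >= r.left && x <= r.right && y >= r.top && y <= r.bottom;
}

// Conventional hit test: find the target (if any) under the cursor on mouse-up.
function conventionalDropTarget(cursorX: number, cursorY: number, targets: Rect[]): Rect | null {
  return targets.find(t => pointInRect(cursorX, cursorY, t)) ?? null;
}
```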

Accordingly, it was decided to handle the drop as follows:
- if the mouse cursor is in contact with a target object, the drop goes strictly to that object;
- if not, we rely on the intersection of the draggable object's bounding box with the bounding boxes of the target objects;
- if there is contact with only one target object, everything is clear: drop onto it;
- contact with two or more is an ambiguous situation: it is unclear where to drop.
In such a situation, one could ask the user where they want to drop the object. This is useful if the target objects on the screen are named somehow (for example, numbered). In our case, however, that would be an unnecessary complication of the interface, so we decided to prohibit the drop in this case and to react as if the user had released the object with no contact at all.
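The rules above can be sketched as a single decision function. This reuses the `Rect` and `pointInRect` helpers from the previous snippet; the names and the `DropDecision` shape are assumptions made for illustration, not the actual implementation.

```typescript
// Axis-aligned bounding boxes intersect if they overlap on both axes.
function rectsIntersect(a: Rect, b: Rect): boolean {
  return a.left <= b.right && b.left <= a.right && a.top <= b.bottom && b.top <= a.bottom;
}

type DropDecision =
  | { kind: 'drop'; target: Rect }          // unambiguous: drop onto this target
  | { kind: 'ambiguous'; targets: Rect[] }  // two or more contacts: treated as no drop
  | { kind: 'none' };                       // no contact at all

function decideDrop(cursorX: number, cursorY: number, dragged: Rect, targets: Rect[]): DropDecision {
  // Rule 1: if the cursor itself is over a target, drop strictly onto that target.
  const underCursor = targets.find(t => pointInRect(cursorX, cursorY, t));
  if (underCursor) return { kind: 'drop', target: underCursor };

  // Rule 2: otherwise look at bounding-box intersections of the dragged object.
  const touched = targets.filter(t => rectsIntersect(dragged, t));
  if (touched.length === 1) return { kind: 'drop', target: touched[0] };
  if (touched.length > 1) return { kind: 'ambiguous', targets: touched };
  return { kind: 'none' };
}
```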
Highlighting. We also decided that in the unambiguous case the contacted target object is highlighted (conditionally) in green, and in the ambiguous case all contacted target objects are highlighted in yellow. This gives the user a hint as to why the drop succeeds in one case and not in the other.
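As a rough sketch of that hint logic (again reusing `Rect` and `rectsIntersect` from above; the `Highlight` values and the map-based return are my own assumptions): a single intersecting target gets "green", several get "yellow", everything else gets no highlight.

```typescript
type Highlight = 'green' | 'yellow' | 'none';

function highlightTargets(dragged: Rect, targets: Rect[]): Map<Rect, Highlight> {
  const touched = targets.filter(t => rectsIntersect(dragged, t));
  // One contact means the drop would succeed (green); several mean ambiguity (yellow).
  const colour: Highlight =
    touched.length === 1 ? 'green' : touched.length > 1 ? 'yellow' : 'none';
  const result = new Map<Rect, Highlight>();
  for (const t of targets) {
    result.set(t, touched.includes(t) ? colour : 'none');
  }
  return result;
}
```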
But! Let me remind you that this is a learning task. There is a concern that such highlighting may be perceived as a hint about the correctness or incorrectness of the attempted answer rather than about the validity of the drop itself. Later, the task also gained a requirement to highlight the correct answers after the verification procedure is called: if the drop was on the correct target object, the draggable object is highlighted in green, otherwise in red. This clashed strongly with the highlighting used during the task itself. We changed the colors and styles of the in-task highlighting, but how clear this interface really is remains an open question.
There is an idea for eliminating the ambiguous situation: evaluate the contacts between regions not all at once, but taking into account the order in which they occur over time, and consider only the most recent one. But ambiguity is still possible even here, depending on how many regions come into contact within a single call of the event handler.
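One possible way to track that time ordering is sketched below; this is purely my assumption about how it could be done, not the author's implementation. It keeps an ordered list of targets currently in contact and prefers the most recently entered one. Note that it relies on the same `Rect` instances being passed on every handler call, and that targets entering within the same call still have an arbitrary relative order, which is exactly the residual ambiguity mentioned above.

```typescript
class ContactTracker {
  private order: Rect[] = [];  // targets currently in contact, oldest first

  // Call this from the drag/move event handler.
  update(dragged: Rect, targets: Rect[]): void {
    const touched = new Set(targets.filter(t => rectsIntersect(dragged, t)));
    // Forget targets we no longer touch, preserving the order of the rest.
    this.order = this.order.filter(t => touched.has(t));
    // Append newly touched targets; several appearing in one call are ordered arbitrarily.
    for (const t of touched) {
      if (!this.order.includes(t)) this.order.push(t);
    }
  }

  // The target entered most recently, or null if nothing is in contact.
  lastContact(): Rect | null {
    return this.order.length > 0 ? this.order[this.order.length - 1] : null;
  }
}
```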
In general, there is an opinion that this approach should be abandoned altogether in favor of relying strictly on contact with the mouse cursor. So I would like to hear from those who have dealt with something similar.
UPD: The idea of using region intersections originally came up while developing applications for interactive whiteboards, so that the user would not have to stretch and run back and forth at the board. You can grab the block of a draggable object and easily reach the target object's area with its corner. If you rely on the mouse cursor, you may have to move along the board yet again, which means casting a shadow on it yet again (unless an ultra-short-throw projector saves the day).