Many programmers keep notes as a matter of course, recording the difficulties they run into and the solutions, elegant or otherwise, that they find. That could be a personal technical blog, a team wiki, or even an ordinary notebook; the essence is the same. Over time, small Evernote notes can grow into a full article on Habr. Times change, though: switching jobs often means a new development stack, and the technologies themselves don't stand still (EF Core, by the way, has been at version 1.1 for a couple of months now). On the other hand, Entity Framework 6 has been and remains the "workhorse" for data access in enterprise applications on the .NET stack, not least because of its stability, low entry barrier, and wide adoption. So I hope this article will still be useful to someone.
I don't want to start another round of the "Code First vs. Database First" debate. Instead, let me show how to make your life easier if you prefer Database First. Many developers who use this approach have run into the inconvenience of working with a bulky EDMX file. That file can turn teamwork into hell: merging parallel changes is painful because its internal structure is constantly reshuffled. Among other shortcomings, on models with several hundred entities (a typical legacy monolith), every action in the standard EDMX designer can become painfully slow.
The obvious solution is to abandon EDMX in favor of an alternative way to generate POCOs and store metadata. EF offers one out of the box: the "Generate Code First From Database" item available in Visual Studio (VS2015 for sure). In practice, however, applying subsequent database changes to the resulting model through this tool is very inconvenient. Those who have worked with EF for a long time may remember the Entity Framework Power Tools extension, which solved similar problems; alas, that project is no longer developed (it doesn't run on VS2015 without a hack), and some of its developers now work directly on the EF team.
Everything looked bleak until we found the EntityFramework Reverse POCO Generator. It is an open-source T4 template that generates POCOs from an existing database and offers a large number of settings. All the basic EDMX features are supported, plus a number of extra goodies: FakeDbContext/FakeDbSet generation for unit testing, decorating the model with attributes (e.g. DataContract/DataMember), and so on. Thanks to T4, you have full control over code generation. To sum up: it works reliably, the team likes it, and migrating existing projects to it is easy.
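For illustration, the output looks roughly like this; the entity and context names below are hypothetical, and the exact shape depends on your template settings:

```csharp
using System.Collections.Generic;
using System.Data.Entity;

// A sketch of the kind of code the Reverse POCO Generator emits
// (Customer/Order/MyDbContext are hypothetical names).
public class Customer
{
    public int CustomerId { get; set; }   // PK, identity
    public string Name { get; set; }      // nvarchar(100), not null

    public virtual ICollection<Order> Orders { get; set; }

    public Customer()
    {
        Orders = new List<Order>();
    }
}

public class Order
{
    public int OrderId { get; set; }      // PK, identity
    public int CustomerId { get; set; }   // FK to Customer
}

public partial class MyDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}
```

Since it is a T4 template, any of this (attribute decoration, interfaces, naming) can be customized by editing the .tt file directly.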
Attaching a new entity, or a single entity previously loaded in another context, to a DbContext is usually not hard. The problems begin with graphs, i.e. entities with links: out of the box, EF does not track changes in the contents of the navigation properties of an entity newly attached to a context. For change tracking, each entity must have a corresponding entry during the context's lifetime: an object with service information, including the entity's state (Added, Modified, Deleted, etc.). There are at least two ways to populate the entries when attaching a graph: track each entity's state manually, or use the GraphDiff library.
An example of solution #1 can be found in a recent Pluralsight course by Julie Lerman, a well-known EF expert. A generic implementation of it requires quite a bit of ceremony. All entities must implement the IStateObject interface:
public interface IStateObject
{
    ObjectState State { get; set; }
}
One way or another, you have to keep the State values up to date, so that after manually attaching each entity in the graph to the context:
context.Foos.Attach(foo);
you can walk through all the entries, adjusting their state:
var entity = (IStateObject)entry.Entity;
entry.State = ConvertState(entity.State);
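Put together, a generic implementation might look like the following sketch. ObjectState, ConvertState, and ApplyStateChanges are hypothetical names from this pattern, not part of EF itself:

```csharp
using System.Data.Entity;

public enum ObjectState { Unchanged, Added, Modified, Deleted }

public interface IStateObject
{
    ObjectState State { get; set; }
}

public static class ContextExtensions
{
    // Walks all tracked entities that implement IStateObject and
    // converts our custom state into the EntityState EF understands.
    public static void ApplyStateChanges(this DbContext context)
    {
        foreach (var entry in context.ChangeTracker.Entries<IStateObject>())
        {
            entry.State = ConvertState(entry.Entity.State);
        }
    }

    private static EntityState ConvertState(ObjectState state)
    {
        switch (state)
        {
            case ObjectState.Added:    return EntityState.Added;
            case ObjectState.Modified: return EntityState.Modified;
            case ObjectState.Deleted:  return EntityState.Deleted;
            default:                   return EntityState.Unchanged;
        }
    }
}
```

The calling code attaches the root, sets State on every node of the graph, and then calls context.ApplyStateChanges() before SaveChanges().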
With this approach no extra round-trips to the database are needed, but the solution is verbose, fragile, and potentially unworkable for many-to-many relationships. It also clutters the models (although the requirement to implement the interface can be met by modifying the T4 templates from the previous section of the article).
Now consider solution #2. I'll be brief:
context.UpdateGraph(root, map => map.OwnedCollection(r => r.Childs));
This call attaches the root entity to the context and updates the Childs navigation collection of child objects, at the cost of a single SELECT against the database. This is possible thanks to the GraphDiff library, whose author did all the dirty work and ironed out the main bugs.
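GraphDiff also lets you distinguish between parts of the graph the aggregate owns and parts it merely references. A hedged sketch with hypothetical Order/OrderLine/Product entities (check the library's docs for the exact mapping API):

```csharp
using System.Data.Entity;
using RefactorThis.GraphDiff;  // GraphDiff's namespace

public class OrderRepository
{
    public void Save(MyDbContext context, Order order)
    {
        // Owned children (Lines) are inserted/updated/deleted to match
        // the detached graph; associated entities (Product) are only
        // re-linked, never modified themselves.
        context.UpdateGraph(order, map => map
            .OwnedCollection(o => o.Lines, with => with
                .AssociatedEntity(l => l.Product)));

        context.SaveChanges();
    }
}
```

This one call replaces the whole manual state-tracking dance from solution #1.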
A seemingly simple SELECT ... FROM Table WITH (UPDLOCK) is not supported by EF. But there are interceptors that let you modify the generated SQL in any way you like, for example with regular expressions. Let's add UPDLOCK to every SELECT generated within a context's lifetime (naturally, the granularity need not be the whole context; it all depends on your implementation):
using (var ctx = new MyDbContext().With(SqlLockMode.UpdLock)) {}
To do this, declare the With method on the context and register an interceptor:
public interface ILockContext
{
    SqlLockMode LockMode { get; set; }
    MyDbContext With(SqlLockMode lockMode);
}

public class MyDbConfig : DbConfiguration
{
    public MyDbConfig()
    {
        AddInterceptor(new LockInterceptor());
    }
}

[DbConfigurationType(typeof(MyDbConfig))]
public partial class MyDbContext : ILockContext
{
    public SqlLockMode LockMode { get; set; }

    public MyDbContext With(SqlLockMode lockMode)
    {
        LockMode = lockMode;
        return this;
    }

    private static void MyDbContextStaticPartial() { }
}
public class LockInterceptor : DbCommandInterceptor
{
    public override void ScalarExecuting(DbCommand command,
        DbCommandInterceptionContext<object> interceptionContext)
    {
        AddLockStatement(command, interceptionContext);
    }

    public override void ReaderExecuting(DbCommand command,
        DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        AddLockStatement(command, interceptionContext);
    }

    private void AddLockStatement<T>(DbCommand command,
        DbCommandInterceptionContext<T> interceptionContext)
    {
        var lockMode = GetLock(interceptionContext);
        switch (lockMode)
        {
            case SqlLockMode.UpdLock:
                command.CommandText = SqlModifier.AddUpdLock(command.CommandText);
                break;
        }
    }

    private SqlLockMode GetLock<T>(DbCommandInterceptionContext<T> interceptionContext)
    {
        if (interceptionContext == null)
            return SqlLockMode.None;

        ILockContext lockContext = interceptionContext.DbContexts.First() as ILockContext;
        if (lockContext == null)
            return SqlLockMode.None;

        return lockContext.LockMode;
    }
}
public static class SqlModifier
{
    private static readonly Regex _regex = new Regex(
        @"(?<tableAlias>SELECT\s.*FROM\s.*AS \[Extent\d+])",
        RegexOptions.Multiline | RegexOptions.IgnoreCase);

    public static string AddUpdLock(string text)
    {
        return _regex.Replace(text, "${tableAlias} WITH (UPDLOCK)");
    }
}
public class SqlModifier_Tests
{
    [TestCase("SELECT [Extent1].[Name] AS [Name] FROM [dbo].[Customer] AS [Extent1]")]
    [TestCase("SELECT * FROM [dbo].[Customer] AS [Extent999]")]
    public void AddUpdLock_ValidEfSelectStatement_AddLockAfterTableAlias(string text)
    {
        string expected = text + " WITH (UPDLOCK)";
        string actual = SqlModifier.AddUpdLock(text);
        Assert.AreEqual(expected, actual);
    }

    [TestCase("SELECT [Extent1].[Extent1] AS [Extent1]")]
    [TestCase("SELECT * FROM Order")]
    [TestCase(" AS [Extent111]")]
    public void AddUpdLock_InvalidEfSelectStatement_NoChange(string text)
    {
        string actual = SqlModifier.AddUpdLock(text);
        Assert.AreEqual(text, actual);
    }
}
EF caches several things out of the box, such as model metadata and compiled query plans.
Data caching, however, is limited to a single context's lifetime (recall the Find method), and it is a stretch to call that a full-fledged cache. How do we get a single managed in-process cache shared across all contexts? Use EntityFramework.Plus, or its "poorer" alternative, EntityFramework.Cache:
public void SelectWithCache()
{
    using (var ctx = new MyDbContext())
    {
        ctx.Customers.FromCache().ToList();
    }
}

[Test]
public void SelectWithCache_Test()
{
    TimeSpan expiration = TimeSpan.FromSeconds(5);
    var options = new CacheItemPolicy { SlidingExpiration = expiration };
    QueryCacheManager.DefaultCacheItemPolicy = options;

    SelectWithCache();        // hits the database, result is cached
    SelectWithCache();        // served from cache, no database call
    Thread.Sleep(expiration);
    SelectWithCache();        // cache expired, hits the database again
}
It is enough to run SQL Profiler to verify that the second SelectWithCache() call does not touch the database. Lazy queries will be cached as well.
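EntityFramework.Plus also supports cache tags, which make targeted invalidation possible. A hedged sketch (consult the library's documentation for the exact overloads; the "customers" tag is an example):

```csharp
using System.Linq;
using Z.EntityFramework.Plus;  // EntityFramework.Plus namespace

public class CustomerCacheDemo
{
    public void ReadAndInvalidate()
    {
        using (var ctx = new MyDbContext())
        {
            // Cache the query result under the "customers" tag.
            var customers = ctx.Customers.FromCache("customers").ToList();
        }

        // Later, when customer data changes, drop every cached
        // query that was registered with that tag.
        QueryCacheManager.ExpireTag("customers");
    }
}
```

Tag-based expiration avoids either waiting out the sliding window or flushing the whole cache after a write.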
Moreover, EF can be integrated with a distributed cache, for example via a hand-written cache manager based on System.Runtime.Caching.ObjectCache plugged into EntityFramework.Plus. NCache supports EF integration out of the box (I can't share specifics, as I haven't tried that cache).
public class SchoolConfiguration : DbConfiguration
{
    public SchoolConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(
                maxRetryCount: 3,
                maxDelay: TimeSpan.FromSeconds(10)));
    }
}
SqlAzureExecutionStrategy is a retry strategy shipped with EF6 (disabled by default). With it enabled, receiving one of a specific set of error codes in the response from SQL Server causes the SQL statement to be resent to the server.
// SQL Error Code: 40197
// The service has encountered an error processing your request. Please try again.
case 40197:
// SQL Error Code: 40501
// The service is currently busy. Retry the request after 10 seconds.
case 40501:
// SQL Error Code: 10053
// A transport-level error has occurred when receiving results from the server.
// An established connection was aborted by the software in your host machine.
case 10053:
// SQL Error Code: 10054
// A transport-level error has occurred when sending the request to the server.
// (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
case 10054:
// SQL Error Code: 10060
// A network-related or instance-specific error occurred while establishing a connection to SQL Server.
// The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server
// is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed
// because the connected party did not properly respond after a period of time, or established connection failed
// because connected host has failed to respond.)
case 10060:
// SQL Error Code: 40613
// Database XXXX on server YYYY is not currently available. Please retry the connection later. If the problem persists,
// contact customer support, and provide them the session tracing ID of ZZZZZ.
case 40613:
// SQL Error Code: 40143
// The service has encountered an error processing your request. Please try again.
case 40143:
// SQL Error Code: 233
// The client was unable to establish a connection because of an error during connection initialization process before login.
// Possible causes include the following: the client tried to connect to an unsupported version of SQL Server; the server was too busy
// to accept new connections; or there was a resource limitation (insufficient memory or maximum allowed connections) on the server.
// (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
case 233:
// SQL Error Code: 64
// A connection was successfully established with the server, but then an error occurred during the login process.
// (provider: TCP Provider, error: 0 - The specified network name is no longer available.)
case 64:
// DBNETLIB Error Code: 20
// The instance of SQL Server you attempted to connect to does not support encryption.
case (int)ProcessNetLibErrorCode.EncryptionNotSupported:
    return true;
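If the built-in list doesn't fit your needs, you can derive your own strategy from DbExecutionStrategy. A minimal sketch (error 1205, the deadlock-victim code, is just an example of a condition you might retry on):

```csharp
using System;
using System.Data.Entity.Infrastructure;
using System.Data.SqlClient;
using System.Linq;

public class DeadlockRetryStrategy : DbExecutionStrategy
{
    public DeadlockRetryStrategy()
        : base(maxRetryCount: 3, maxRetryDelay: TimeSpan.FromSeconds(10))
    {
    }

    // Retry only when SQL Server reports error 1205 (chosen as a deadlock victim).
    protected override bool ShouldRetryOn(Exception exception)
    {
        var sqlException = exception as SqlException;
        return sqlException != null
            && sqlException.Errors.Cast<SqlError>().Any(e => e.Number == 1205);
    }
}
```

It is registered the same way as SqlAzureExecutionStrategy above, via SetExecutionStrategy in a DbConfiguration subclass.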
For testing purposes, we want to substitute the DbContext transparently to the calling code and fill fake DbSets with test data. Here are several ways to solve this problem.
Method #1 (long): manually create stubs for IMyDbContext and DbSet and explicitly define the required behavior. Here is what it might look like with the Moq library:
public IMyDbContext Create()
{
    var mockRepository = new MockRepository(MockBehavior.Default);
    var mockContext = mockRepository.Create<IMyDbContext>();
    mockContext.Setup(x => x.SaveChanges()).Returns(int.MaxValue);

    // customers is a List<Customer> with test data, defined elsewhere
    var mockDbSet = MockDbSet<Customer>(customers);
    mockContext.Setup(m => m.Customers).Returns(mockDbSet.Object);

    return mockContext.Object;
}

private Mock<DbSet<T>> MockDbSet<T>(List<T> data = null) where T : class
{
    if (data == null)
        data = new List<T>();

    var queryable = data.AsQueryable();
    var mock = new Mock<DbSet<T>>();
    mock.As<IQueryable<T>>().Setup(m => m.Provider).Returns(queryable.Provider);
    mock.As<IQueryable<T>>().Setup(m => m.Expression).Returns(queryable.Expression);
    mock.As<IQueryable<T>>().Setup(m => m.ElementType).Returns(queryable.ElementType);
    mock.As<IQueryable<T>>().Setup(m => m.GetEnumerator()).Returns(queryable.GetEnumerator());
    return mock;
}
There is a basic MSDN article on the topic: Entity Framework Testing with a Mocking Framework (EF6 onwards). At one point I was so taken with this approach that I put a demo project on GitHub (using EF6 DbFirst, SQL Server, Moq, Ninject). By the way, the already mentioned Entity Framework in the Enterprise course devotes an entire chapter to testing.
Method #2 (short): use the already mentioned Reverse POCO Generator, which by default generates stubs for your DbContext and all DbSets (a FakeDbSet is backed by an ordinary in-memory collection).
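A test with the generated fakes might look like this sketch; FakeMyDbContext and CustomerService are hypothetical names (the actual fake context class name depends on your template settings):

```csharp
using NUnit.Framework;

public class CustomerService_Tests
{
    [Test]
    public void GetActiveCustomers_ReturnsOnlyActive()
    {
        // The generator's fake context keeps data in memory,
        // so no database or mocking framework is needed.
        IMyDbContext ctx = new FakeMyDbContext();
        ctx.Customers.Add(new Customer { Name = "Alice", IsActive = true });
        ctx.Customers.Add(new Customer { Name = "Bob", IsActive = false });

        var service = new CustomerService(ctx);
        var active = service.GetActiveCustomers();

        Assert.AreEqual(1, active.Count);
    }
}
```

Compared with method #1, there is no per-DbSet Moq setup: the fakes come for free with every regeneration of the model.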
When you need to insert thousands of new records into SQL Server at once, BULK operations are far more efficient than regular row-by-row INSERTs. I covered the problem in more detail in a separate article, so below I'll just list ready-to-use solutions based on SqlBulkCopy:
→ EntityFramework.Utilities
→ Entity Framework Extensions
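Both libraries build on SqlBulkCopy under the hood. For comparison, a bare-bones sketch of the raw approach (the connection string, table, and column names are hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;

public static class BulkInsertDemo
{
    public static void Run(string connectionString)
    {
        // Stage the rows in a DataTable matching the target schema.
        var table = new DataTable();
        table.Columns.Add("Name", typeof(string));
        for (int i = 0; i < 10000; i++)
        {
            table.Rows.Add("Customer " + i);
        }

        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.Customer";
            bulk.ColumnMappings.Add("Name", "Name");
            bulk.WriteToServer(table);  // one bulk operation instead of 10,000 INSERTs
        }
    }
}
```

The libraries above wrap this plumbing and map your EF entities to the staging structures for you.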
That's all I have. Share your own recipes and tricks in the comments =)
Source: https://habr.com/ru/post/320128/