
Sometimes an operation simply cannot be made faster. It may depend on a service hosted on an external web server, it may be CPU-intensive, or it may be fast on its own yet consume all of your machine's resources when run many times in parallel. There are many reasons to use caching. Note that PostSharp does not ship with a caching framework of its own; it simply lets you implement caching far faster, without tedious work such as scattering caching code throughout the program's source. It lets you solve the problem elegantly, encapsulating the task in classes that can be reused.


Suppose I want to find out, on a car dealership's website, how much the cars for sale there are worth. To do this, I will use an application that downloads the dealership's price list from a server, keyed by a car's make, model, and year of manufacture. Since the price-list values (for the purposes of our example) change too often, I will call a web service to get them. Suppose the web service is slow and I want to look up a great many cars. As you understand, I cannot make someone else's web service faster, but I can cache the data it returns, thereby reducing the number of requests.
Since one of PostSharp's main features is the ability to "intercept" a method call (that is, to weave into the method so that our code runs both before and after the method body), we will use this framework to implement the caching task:
[Serializable]
public class CacheAttribute : MethodInterceptionAspect
{
    [NonSerialized]
    private static readonly ICache _cache;

    private string _methodName;
    private string _className;

    static CacheAttribute()
    {
        if (!PostSharpEnvironment.IsPostSharpRunning)
        {
            // One-minute cache
            _cache = new StaticMemoryCache(new TimeSpan(0, 1, 0));
            // In practice, resolve this via an IoC container/service locator
        }
    }

    public override void CompileTimeInitialize(MethodBase method, AspectInfo aspectInfo)
    {
        _methodName = method.Name;
        _className = method.DeclaringType.Name;
    }

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        var key = BuildCacheKey(args.Arguments);
        if (_cache[key] != null)
        {
            args.ReturnValue = _cache[key];
        }
        else
        {
            var returnVal = args.Invoke(args.Arguments);
            args.ReturnValue = returnVal;
            _cache[key] = returnVal;
        }
    }

    private string BuildCacheKey(Arguments arguments)
    {
        var sb = new StringBuilder();
        sb.Append(_methodName);
        foreach (var argument in arguments.ToArray())
        {
            sb.Append(argument == null ? "_" : argument.ToString());
        }
        return sb.ToString();
    }
}
I save the method name at compile time and initialize the caching service at run time. As the cache key I use the method name concatenated with the values of all the method arguments (see the BuildCacheKey method), which is unique for each method and each set of parameter values. In the OnInvoke method, I check whether the key already exists in the cache and, if so, return the cached value. Otherwise, I invoke the original method and store its result in the cache for subsequent calls.
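The ICache interface and StaticMemoryCache class used above are not PostSharp types, and this article does not show them. A minimal sketch of what they might look like, backed by the BCL's System.Runtime.Caching.MemoryCache (the names ICache and StaticMemoryCache are taken from the aspect code; everything else here is an assumption), could be:

```csharp
using System;
using System.Runtime.Caching;

// Hypothetical cache abstraction assumed by the aspect: a string-keyed indexer.
public interface ICache
{
    object this[string key] { get; set; }
}

// A minimal sketch backed by the built-in MemoryCache, with a sliding expiration.
public class StaticMemoryCache : ICache
{
    private readonly TimeSpan _slidingExpiration;
    private readonly MemoryCache _cache = MemoryCache.Default;

    public StaticMemoryCache(TimeSpan slidingExpiration)
    {
        _slidingExpiration = slidingExpiration;
    }

    public object this[string key]
    {
        get { return _cache.Get(key); }
        set
        {
            _cache.Set(key, value,
                new CacheItemPolicy { SlidingExpiration = _slidingExpiration });
        }
    }
}
```

A sliding expiration matches the "one minute cache" comment in the aspect's static constructor; an absolute expiration would also be a reasonable choice.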
In my example, there is a GetCarValue method that simulates a web-service call to get information about a car. Its parameters can take many different values, so it may return a different result each time it is called (in our example, only when the cached value is missing):
[Cache]
public decimal GetCarValue(int year, CarMakeAndModel carType)
{
    // Simulate web service latency
    Thread.Sleep(_msToSleep);
    int yearsOld = Math.Abs(DateTime.Now.Year - year);
    int randomAmount = (new Random()).Next(0, 1000);
    int calculatedValue = baselineValue - (yearDiscount * yearsOld) + randomAmount;
    return calculatedValue;
}
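To illustrate the effect of the aspect, here is a hypothetical usage sketch. The CarValueService class name and the CarMakeAndModel constructor are assumptions for illustration; the article does not show them:

```csharp
// Hypothetical host class and constructor arguments, for illustration only.
var service = new CarValueService();
var car = new CarMakeAndModel("Ford", "Focus");

var sw = System.Diagnostics.Stopwatch.StartNew();
decimal first = service.GetCarValue(2010, car);   // slow: hits the "web service"
Console.WriteLine("First call:  {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
decimal second = service.GetCarValue(2010, car);  // fast: served from the cache
Console.WriteLine("Second call: {0} ms", sw.ElapsedMilliseconds);
// With the [Cache] aspect applied, both calls return the same value
// for the lifetime of the cache entry.
```

One caveat: BuildCacheKey relies on each argument's ToString(), so a reference type like CarMakeAndModel should override ToString() to ensure distinct cars produce distinct cache keys.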
A few notes about this aspect:
- I could also have used OnMethodBoundaryAspect instead of MethodInterceptionAspect; both approaches would be correct. In this case I chose MethodInterceptionAspect simply because it covers the program's requirements most directly.
- Remember that there is no point in loading and initializing the cache while PostSharp itself is running (that is, at build time, as opposed to while the application is running), so we must check whether PostSharp is running or not. Another way to load dependencies is to place that code in RuntimeInitialize.
- This aspect does not support 'out' and 'ref' parameters in cached methods. It could, of course, but in my opinion 'out' and 'ref' parameters have no place in methods worth caching, and if you agree with me, let's not waste time implementing them.
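If you wanted to enforce the out/ref restriction at build time, one possible sketch (anticipating the CompileTimeValidate technique shown in the next section; the error number "999" is arbitrary and this override is not part of the article's aspect) is:

```csharp
// Hypothetical build-time check: reject methods that declare 'out' or 'ref'
// parameters, since this aspect cannot cache them.
public override bool CompileTimeValidate(MethodBase method)
{
    foreach (var parameter in method.GetParameters())
    {
        if (parameter.ParameterType.IsByRef) // true for both 'ref' and 'out'
        {
            Message.Write(SeverityType.Error, "999",
                "Methods with out/ref parameters cannot be cached: {0}",
                method.Name);
            return false;
        }
    }
    return true;
}
```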
Checks at compile time
There are always cases where caching is a bad idea: for example, when a method returns a Stream, an IEnumerable, an IQueryable, or a similar lazily-evaluated type, whose values cannot be meaningfully cached. To perform such checks, you need to override the CompileTimeValidate method, for example like this:
public override bool CompileTimeValidate(MethodBase method)
{
    var methodInfo = method as MethodInfo;
    if (methodInfo != null)
    {
        var returnType = methodInfo.ReturnType;
        if (IsDisallowedCacheReturnType(returnType))
        {
            Message.Write(SeverityType.Error, "998",
                "Methods with return type {0} cannot be cached in {1}.{2}",
                returnType.Name, _className, _methodName);
            return false;
        }
    }
    return true;
}

private static readonly IList<Type> DisallowedTypes = new List<Type>
{
    typeof(Stream),
    typeof(IEnumerable),
    typeof(IQueryable)
};

private static bool IsDisallowedCacheReturnType(Type returnType)
{
    return DisallowedTypes.Any(t => t.IsAssignableFrom(returnType));
}
Thus, if any developer tries to apply caching to a method that should not be cached, they will get a compile-time error. Note that when you use IsAssignableFrom with some type, you also cover the classes and interfaces derived from it; in our case, types such as FileStream and IEnumerable<T> are caught as well.
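That behavior of IsAssignableFrom is easy to confirm in a small standalone program:

```csharp
using System;
using System.IO;
using System.Collections;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // IsAssignableFrom covers derived classes and implementing types:
        Console.WriteLine(typeof(Stream).IsAssignableFrom(typeof(FileStream)));      // True
        Console.WriteLine(typeof(IEnumerable).IsAssignableFrom(typeof(List<int>)));  // True
        // ...but not unrelated types:
        Console.WriteLine(typeof(Stream).IsAssignableFrom(typeof(string)));          // False
    }
}
```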
Multithreading
Great; at this point we already have a fine solution for adding caching to every method that needs it. But have you noticed the potential problem hidden in this caching aspect? In multithreaded applications (such as a website), caching does its job well: after the first user populates the cache, every subsequent user benefits from the fast cache access. But what happens when two users request the same information at the same time? With the aspect as written, both users will compute the same cache value. For a car dealership website this hardly matters; however, if your web server handles hundreds or thousands of visitors requesting the same information simultaneously, the problem becomes very real: if they all make requests at the same moment, our cache will repeatedly compute the same values.
A simple fix is to take a lock every time the cache is used. But locking is an expensive, slow operation, and it is better to first check whether the key exists in the cache and only lock if it does not. In that case, however, several threads may simultaneously find the key missing and proceed to compute it, so we must check for the key twice (double-checked locking): once outside the locked section and once inside it:
[Serializable]
public class CacheAttribute : MethodInterceptionAspect
{
    [NonSerialized]
    private object syncRoot;

    public override void RuntimeInitialize(MethodBase method)
    {
        syncRoot = new object();
    }

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        var key = BuildCacheKey(args.Arguments);
        if (_cache[key] != null)
        {
            args.ReturnValue = _cache[key];
        }
        else
        {
            lock (syncRoot)
            {
                // Re-check inside the lock: another thread may have
                // populated the cache while we were waiting
                if (_cache[key] == null)
                {
                    var returnVal = args.Invoke(args.Arguments);
                    args.ReturnValue = returnVal;
                    _cache[key] = returnVal;
                }
                else
                {
                    args.ReturnValue = _cache[key];
                }
            }
        }
    }
}
It may look like a bit of repetition, but it is an excellent way to improve performance in high-load solutions. Instead of locking the entire cache, I lock a private object specific to the method the aspect is applied to, which minimizes lock contention on the cache.
I hope you are not confused. Concurrency problems trip up many developers, but in many applications they are a reality. Armed with this aspect, you no longer have to worry about your own mistakes or those of less-experienced colleagues, whether brand-new developers or developers with 30 years of COBOL behind them seeing C# for the first time :). All they need to know is how to decorate methods with the [Cache] aspect; they do not need to know how the caching is implemented, nor how to make their methods thread-safe. They can concentrate on their own piece of code without being distracted by these cross-cutting concerns.