Added support for batching 2nd level cache operations when loading entities and collections #1633
nhibernate/NHibernate-Caches#29 integrates caches implementing the …
With the current state, in order to support batching other …
I am not planning to implement the other methods from …
I would like to understand how it marries with #1392.
If I understood correctly, the mentioned PR is about minimizing …
All good.
One small addition on naming things:
- IBatchableReadWriteCache -> IBatchableCache
- IBatchableReadCache -> IBatchableReadOnlyCache
…dCache to IBatchableReadOnlyCache
…to reduce type checking.
Great! I've renamed the interfaces and added a small optimization for checking whether batching is supported.
I am currently working on a Redis cache for NHibernate that will support batching from this PR, and I found out that the current locking mechanism defined by …
I am planning to correct that for the …
Are you OK with this?
Yes, better to have the right signature in this interface from the start.
Supporting the same pattern for …
…equate for distributed locks.
How we should deal with all the "transitional" interfaces we add to avoid breaking changes, once we do the next major version, is not yet defined.
For consistency with what we will do for …
I agree that we should avoid adding a "transitional" interface for a small change like this, but I don't see how we could correct that for …
For the cache case, I am thinking about obsoleting … Adding new methods to the cache will then be much easier, since we could add them freely in a base class, provided they are not made abstract.
If that is the case, then wouldn't it be better to replace the interfaces …
No, that will be the job of the PR doing the switch to …
I am ok with the last changes.
The build failures are due to some unrelated trouble on TC side, I am relaunching the builds.
The aim of this PR is to reduce the performance drop when switching from an in-memory 2nd level cache to an external one. With the current status of this PR, only batching of read operations is implemented; batching of write operations is not yet implemented, which is why I marked it as WIP.

To make the new algorithm added in BatchFetchQueue easy to understand, I've made a fiddle that shows how it works in comparison with the old one. As can be seen from the fiddle, the old algorithm isn't always consistent when ordering the returned keys, and in certain cases it makes more cache calls than needed. The new algorithm always orders the returned keys consistently, and the number of cache calls is reduced even when the cache doesn't support batching read operations. In the example you can see that, when batching is disabled, the number of cache calls is reduced by half in certain cases.
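To illustrate the general idea of batched reads (a hypothetical sketch only — the interface and method names below are assumptions for illustration, not the actual code in this PR):

```csharp
// Hypothetical sketch, not the PR's actual API: names and shapes are assumptions.
public interface ICacheSketch
{
    object Get(object key);
}

// Caches able to fetch several keys in one round trip implement this.
public interface IBatchableReadOnlyCacheSketch : ICacheSketch
{
    object[] GetMany(object[] keys);
}

public static class CacheReadBatcher
{
    // Uses GetMany when the cache supports it, otherwise falls back
    // to one Get call per key.
    public static object[] GetBatch(ICacheSketch cache, object[] keys)
    {
        if (cache is IBatchableReadOnlyCacheSketch batchable)
            return batchable.GetMany(keys); // single cache round trip

        var results = new object[keys.Length];
        for (var i = 0; i < keys.Length; i++)
            results[i] = cache.Get(keys[i]); // one round trip per key
        return results;
    }
}
```

For an external cache such as Redis or Memcached, collapsing N round trips into one is where the performance gain comes from.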
Having a batchable write operation is a little trickier when the `read-write` strategy is used. Just having a batchable put operation is not enough, because with the `read-write` strategy we need to lock the keys prior to writing in order to keep the data in the cache consistent. To have a fully batchable write with the `read-write` strategy, we also need batchable lock and unlock methods, so I am planning to add an interface for a batchable write cache with `PutMultiple`, `LockMultiple` and `UnlockMultiple` operations.

I've spent some time analyzing
`Redis` and `Memcached` to see whether they would be able to implement the above methods correctly, and my conclusion was that they could.

Redis:
- `PutMultiple`: can be implemented by using the MSET command, or by an atomic Lua script that loops through the keys and stores them with the SET command and the EX option when the keys should expire after a certain amount of time.
- `LockMultiple`: can be implemented by a Lua script that checks, for each key, whether a lock has already been made before setting the lock. The lock is successful only if it is acquired for all keys. The implementor then needs to save the locked keys in order to release them when `UnlockMultiple` is called, to prevent releasing somebody else's locks.
- `UnlockMultiple`: can be implemented as an atomic Lua script that deletes the locks made by the implementor, using the DEL command.

Memcached:
- `PutMultiple`: can be implemented if the memcached client in use supports the binary protocol.
- `LockMultiple`: can be implemented by pipelining multiple add commands via the binary protocol; the lock is obtained only if all add commands complete successfully. The implementor needs to save the locked keys in order to release them when `UnlockMultiple` is called, to prevent releasing somebody else's locks.
- `UnlockMultiple`: can be implemented by pipelining multiple delete commands, but only for keys that were locked by the implementor.

After this PR, as I am at the moment interested in
`Redis`, and the current Redis provider for NHibernate hasn't been ported to NH 5, I am willing to make one for NHibernate Caches that would support the batching operations from this PR. I've seen that @fredericDelaporte added a `Redis` implementation in his PR using the new AspNet Caching, but unfortunately, after a quick look at their RedisCache class, it doesn't seem to be extensible.

Also, the
`AsyncGenerator` has to be updated to version `0.10.0`, as local functions are used in this PR.
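A rough sketch of what the batchable write interface described above could look like (hypothetical: the interface name and signatures here are assumptions for illustration, not the actual code added by this PR):

```csharp
// Hypothetical sketch of the batchable write cache interface discussed above.
// Name and signatures are assumptions for illustration only.
public interface IBatchableCacheSketch
{
    // Store several key/value pairs in a single cache round trip
    // (e.g. Redis MSET, or memcached binary-protocol pipelining).
    void PutMultiple(object[] keys, object[] values);

    // Try to lock all keys atomically; returns true only when the lock
    // was acquired for every key, so the read-write strategy can safely
    // write the whole batch.
    bool LockMultiple(object[] keys);

    // Release only the locks this implementor acquired, to avoid
    // releasing somebody else's locks.
    void UnlockMultiple(object[] keys);
}
```

The all-or-nothing contract of `LockMultiple` is what makes the Redis Lua-script and memcached pipelined-add implementations described above possible.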