Add IMemoryPoolFactory and cleanup memory pool while idle #61554
base: main
Conversation
src/Servers/Kestrel/Kestrel/src/WebHostBuilderKestrelExtensions.cs (outdated, resolved)
private readonly Counter<long> _evictedMemory;
private readonly Counter<long> _usageRate;

public PinnedBlockMemoryPoolMetrics(IMeterFactory meterFactory)
Do we want a metric for peak?
total_allocated covers that? Oh, nvm. Yeah, peak might be interesting.
I think peak is just measuring the max of current_memory?
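For illustration, a minimal sketch of how a peak value could be derived from the current-memory counter (the type and member names here are hypothetical, not the PR's code):

```csharp
using System.Threading;

// Hypothetical sketch (not the PR's implementation): derive "peak" as the
// maximum observed value of the current-memory counter.
internal sealed class MemoryUsageTracker
{
    private long _currentMemoryBytes;
    private long _peakMemoryBytes;

    public long PeakMemoryBytes => Volatile.Read(ref _peakMemoryBytes);

    public void UpdateCurrentMemory(long deltaBytes)
    {
        var current = Interlocked.Add(ref _currentMemoryBytes, deltaBytes);

        // Raise the recorded peak only while the new current value exceeds it.
        long observedPeak;
        while (current > (observedPeak = Volatile.Read(ref _peakMemoryBytes)) &&
               Interlocked.CompareExchange(ref _peakMemoryBytes, current, observedPeak) != observedPeak)
        {
            // Another thread advanced the peak concurrently; re-check and retry.
        }
    }
}
```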
private long _currentMemory;
private long _evictedMemory;
private DateTimeOffset _nextEviction = DateTime.UtcNow.AddSeconds(10);
Are we going to regret not making this configurable?
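For reference, a minimal sketch of what making the delay configurable might look like, using an injected TimeProvider and an optional delay (the type, members, and default are illustrative, not the PR's code):

```csharp
using System;

// Hypothetical sketch: inject the eviction delay (and a TimeProvider) instead of
// hard-coding 10 seconds, which also makes the behavior easy to control in tests.
internal sealed class EvictionSchedule
{
    private readonly TimeProvider _timeProvider;
    private readonly TimeSpan _evictionDelay;
    private DateTimeOffset _nextEviction;

    public EvictionSchedule(TimeProvider timeProvider, TimeSpan? evictionDelay = null)
    {
        _timeProvider = timeProvider;
        _evictionDelay = evictionDelay ?? TimeSpan.FromSeconds(10);
        _nextEviction = timeProvider.GetUtcNow().Add(_evictionDelay);
    }

    // Returns true when an eviction pass is due and schedules the next one.
    public bool TryStartEviction()
    {
        var now = _timeProvider.GetUtcNow();
        if (now < _nextEviction)
        {
            return false;
        }

        _nextEviction = now.Add(_evictionDelay);
        return true;
    }
}
```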
{
}

public PinnedBlockMemoryPool(IMeterFactory meterFactory)
Maybe we should pass an options object so we can easily add future options
How would you set the options? Just have a generic MemoryPoolOptions type that users can configure in DI? That sounds like something we could add in the future without breaking anything.
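A rough sketch of what such a DI-configurable options type could look like; MemoryPoolOptions, its properties, and the extension method are hypothetical and not part of this PR:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical sketch of a DI-configurable options type for the memory pool.
public sealed class MemoryPoolOptions
{
    public TimeSpan EvictionDelay { get; set; } = TimeSpan.FromSeconds(10);
    public long MaxRetainedBytes { get; set; } = 128 * 1024 * 1024;
}

public static class MemoryPoolServiceCollectionExtensions
{
    public static IServiceCollection ConfigureMemoryPool(
        this IServiceCollection services, Action<MemoryPoolOptions> configure)
        => services.Configure(configure);
}

// Usage: builder.Services.ConfigureMemoryPool(o => o.EvictionDelay = TimeSpan.FromSeconds(30));
```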
internal sealed class PinnedBlockMemoryPoolMetrics
{
    // Note: Dot separated instead of dash.
    public const string MeterName = "System.Buffers.PinnedBlockMemoryPool";
I'm not sure if these metrics should be on a new meter. Couldn't they be on a meter with the same name as Kestrel's meter? A new meter name means folks would need to configure OTEL to export both the Kestrel meter and this System.Buffers.PinnedBlockMemoryPool meter. Having memory stats related to Kestrel on the Kestrel meter makes more sense to me.
I see there are multiple frameworks that use System.Buffers.PinnedBlockMemoryPool. Maybe it does make sense for it to be its own meter. Although if you have multiple frameworks using the pool at the same time then the numbers will get merged together. Is that what you intend?
A solution would be to have an owner tag that says who used the memory. When a memory pool is created the owner could pass in their name, e.g. kestrel, sockets, iis, etc., and the name is internally passed to counters when they're changed. An owner tag would let you view total memory usage vs just the Kestrel HTTP layer vs the sockets layer vs IIS (if running together in the same process).
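A minimal sketch of the owner-tag idea: whoever creates the pool passes a name (e.g. "kestrel", "sockets", "iis") and every measurement carries it as a tag. The meter and instrument names below are illustrative:

```csharp
using System.Collections.Generic;
using System.Diagnostics.Metrics;

// Hypothetical sketch of tagging pool metrics with the owning component.
internal sealed class TaggedPoolMetrics
{
    private readonly Counter<long> _currentMemory;
    private readonly KeyValuePair<string, object?> _ownerTag;

    public TaggedPoolMetrics(IMeterFactory meterFactory, string owner)
    {
        var meter = meterFactory.Create("Microsoft.AspNetCore.MemoryPool");
        _currentMemory = meter.CreateCounter<long>("aspnetcore.memorypool.current_memory", unit: "By");
        _ownerTag = new KeyValuePair<string, object?>("owner", owner);
    }

    public void UpdateCurrentMemory(long deltaBytes)
        => _currentMemory.Add(deltaBytes, _ownerTag);
}
```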
I think the meter should be renamed to indicate that it's specific to ASP.NET Core. Maybe Microsoft.AspNetCore.MemoryPool? That is simple and makes it obvious what the meter is for.
I don't think people care that the underlying pool is implemented using pinned blocks.
Changed the name. But can we limit metric feedback to the issue #61594?
The owner tag seems interesting, but how would it actually work? For example, the app layer can grab the memory pool factory via DI and get a pool instance, so if you wanted a different name for app code then the interface would need to allow setting a name. If this is something you think we should pursue, can you comment on the issue I linked?
public PinnedBlockMemoryPoolMetrics(IMeterFactory meterFactory)
{
    _meter = meterFactory.Create(MeterName);
It looks like there could be multiple instances of the memory pool used across assemblies with shared source. Fortunately, I believe that as long as the same IMeterFactory is used for all of them (injected by DI), the meter and counters created will be the same instances. Metrics values will be aggregated and saved to the same location.
However, I think a unit test to double check that is important.
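A sketch of the kind of test being suggested, assuming the DI-registered IMeterFactory (via AddMetrics) is the one every pool instance receives:

```csharp
using System.Diagnostics.Metrics;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

// Sketch of the suggested sanity check: two components asking the DI-provided
// IMeterFactory for the same meter name should get the same Meter instance, so
// their counter values aggregate together rather than diverging per pool.
public class MeterFactorySharingTests
{
    [Fact]
    public void SameMeterFactory_SameMeterName_ReturnsSameMeterInstance()
    {
        using var services = new ServiceCollection()
            .AddMetrics()
            .BuildServiceProvider();

        var meterFactory = services.GetRequiredService<IMeterFactory>();

        var meter1 = meterFactory.Create("Microsoft.AspNetCore.MemoryPool");
        var meter2 = meterFactory.Create("Microsoft.AspNetCore.MemoryPool");

        Assert.Same(meter1, meter2);
    }
}
```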
Added a test and updated our TestMeterFactory implementation to return the same Meter instance for similar meters. Same as the default meter factory impl: https://source.dot.net/#Microsoft.Extensions.Diagnostics/Metrics/DefaultMeterFactory.cs,37
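For illustration, a minimal sketch of a test meter factory that caches Meter instances per name, similar in spirit to the linked DefaultMeterFactory; this is not the repository's actual TestMeterFactory:

```csharp
using System.Collections.Concurrent;
using System.Diagnostics.Metrics;

// Illustrative sketch: cache and return the same Meter instance per meter name,
// mirroring what the default meter factory does.
internal sealed class CachingTestMeterFactory : IMeterFactory
{
    private readonly ConcurrentDictionary<string, Meter> _meters = new();

    public Meter Create(MeterOptions options)
        => _meters.GetOrAdd(options.Name, name => new Meter(name, options.Version, options.Tags, scope: this));

    public void Dispose()
    {
        foreach (var meter in _meters.Values)
        {
            meter.Dispose();
        }
    }
}
```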
@BrennanConroy FYI naming guidelines for OTEL are here: https://opentelemetry.io/docs/specs/semconv/general/naming/
Here’s a review of the updated changes, covering Key Changes and Features, Strengths, Potential Issues / Suggestions, and a Summary Table.
Conclusion: This is a robust, production-grade adaptive memory pool.
Overall: 👍 Looks great and ready for real workloads.
{
    private readonly IMeterFactory _meterFactory;
    private readonly TimeProvider _timeProvider;
    private readonly ConcurrentDictionary<PinnedBlockMemoryPool, PinnedBlockMemoryPool> _pools = new();
nit: Could ask @stephentoub to create a ConcurrentHashTable<T> 😉 or change the value to a type that ConcurrentDictionary treats as atomic, e.g. nuint:
private readonly ConcurrentDictionary<PinnedBlockMemoryPool, nuint> _pools = new();
Will save on GC write barriers; though probably not too important.
Link to any documentation or discussion about this?
Perhaps: https://learn.microsoft.com/en-us/previous-versions/dotnet/articles/ms973837(v=msdn.10)?redirectedfrom=MSDN#making-generations-work-with-write-barriers
https://learn.microsoft.com/en-us/previous-versions/dotnet/articles/ms973837(v=msdn.10)?redirectedfrom=MSDN#too-many-object-writes
The short version is that updating a reference-typed field is a little more expensive than a value-typed field. It's a micro-optimization.
It's a micro-optimization to work around the right data structure not being available in the runtime (as both key and value are always the same and the value is never read).
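A small sketch of the suggested change, using the dictionary purely as a set with a value-type value; PinnedBlockMemoryPool is stubbed out here for illustration:

```csharp
using System.Collections.Concurrent;

// Sketch of the suggested micro-optimization: the dictionary is only used as a set,
// so a value type such as nuint avoids the GC write barrier that a reference-typed
// value would incur.
internal sealed class PinnedBlockMemoryPool { }

internal sealed class PoolRegistry
{
    private readonly ConcurrentDictionary<PinnedBlockMemoryPool, nuint> _pools = new();

    public void Register(PinnedBlockMemoryPool pool) => _pools.TryAdd(pool, 0);

    public void Unregister(PinnedBlockMemoryPool pool) => _pools.TryRemove(pool, out _);
}
```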
internal sealed class DefaultMemoryPoolFactory : IMemoryPoolFactory<byte>, IAsyncDisposable
{
    private readonly IMeterFactory _meterFactory;
    private readonly ConcurrentDictionary<PinnedBlockMemoryPool, PinnedBlockMemoryPool> _pools = new();
nit: here also, change the ConcurrentDictionary value to nuint:
ConcurrentDictionary<PinnedBlockMemoryPool, nuint>
public DefaultMemoryPoolFactory(IMeterFactory? meterFactory = null)
{
    _meterFactory = meterFactory ?? NoopMeterFactory.Instance;
This looks like it would only be null in tests? I think it would be better if the type always expects a meter factory from DI, and test code is responsible for creating the factory with a test meter factory. Then NoopMeterFactory can be removed from this type and https://github.com/dotnet/aspnetcore/blob/2ad1ff0a9e870a77522fc0725153154834bcb0ac/src/Shared/Metrics/TestMeterFactory.cs used when this type is created in tests.
    await _timerTask;
}

private sealed class NoopMeterFactory : IMeterFactory
This name isn't accurate because the factory will still create meters and people can still listen to those meters. The only difference is the factory isn't set as the owner of the meters. In other words, they're harder to test.
A better simulation of "no op" would be to not create PinnedBlockMemoryPoolMetrics if there's no meter factory, and any calls to it would be _metrics?.UpdateCurrentMemory(-block.Memory.Length);
As mentioned here, I think the best improvement to make is to always require the meter factory and have tests always specify it.
I went with allowing a null IMeterFactory. Most tests aren't doing anything with meters, so it seems like a waste to require them to pass in a meter factory.
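A minimal sketch of the resulting nullable-metrics pattern, assuming simplified stand-in types (these are illustrative, not the PR's exact code):

```csharp
using System.Diagnostics.Metrics;

// Hypothetical sketch: skip creating the metrics object when no IMeterFactory is
// supplied (e.g. in most tests) and guard every call with "?.".
internal sealed class SimplifiedPoolMetrics
{
    private readonly Counter<long> _currentMemory;

    public SimplifiedPoolMetrics(IMeterFactory meterFactory)
    {
        var meter = meterFactory.Create("Microsoft.AspNetCore.MemoryPool");
        _currentMemory = meter.CreateCounter<long>("aspnetcore.memorypool.current_memory", unit: "By");
    }

    public void UpdateCurrentMemory(long deltaBytes) => _currentMemory.Add(deltaBytes);
}

internal sealed class SimplifiedPool
{
    private readonly SimplifiedPoolMetrics? _metrics;

    public SimplifiedPool(IMeterFactory? meterFactory = null)
    {
        _metrics = meterFactory is null ? null : new SimplifiedPoolMetrics(meterFactory);
    }

    public void OnBlockReleasedToGc(int blockLength)
    {
        // No-op when no meter factory was provided.
        _metrics?.UpdateCurrentMemory(-blockLength);
    }
}
```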
Adding the ability to release (to the GC) memory from the PinnedBlockMemoryPool. It does this by counting the rent and return calls and using heuristics to decide if it should free up memory or not based on pool usage (a rough sketch of the idea is included at the end of this description).

Additionally implements the newly approved IMemoryPoolFactory<T> interface and plumbs it through Kestrel, IIS, and HttpSys, with the default implementation for IMemoryPoolFactory<byte> being the PinnedBlockMemoryPool.

Kestrel makes use of its heartbeat loop to clean up the memory pools, whereas IIS and HttpSys don't have an equivalent, so they use a PeriodicTimer approach.

Adds metrics to the pool. Issue for those is at #61594.
Closes #61003
Closes #27394
Closes #55490
Did a few perf runs to see the impact of Interlocked.Increment on the Rent and Return paths. Did longer 60 second runs to make sure the memory pools had plenty of opportunity to run cleanup logic and see the impact of incrementing counters.

TL;DR: didn't notice a perf impact with this PR, so removed the ScalableApproximateCounting code for now. Can always add it back if we find problems.
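For illustration, a rough sketch of the rent/return counting idea described above; the thresholds, names, and eviction policy here are made up and not the PR's actual heuristics:

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Rough sketch: on each periodic tick, compare recent rent activity to the number of
// retained blocks and release a fraction of the idle blocks back to the GC.
internal sealed class IdleBlockEvictor
{
    private const int BlockSize = 4096;

    private readonly ConcurrentQueue<byte[]> _blocks = new();
    private long _rentsSinceLastTick;

    public byte[] Rent()
    {
        Interlocked.Increment(ref _rentsSinceLastTick);
        return _blocks.TryDequeue(out var block) ? block : new byte[BlockSize];
    }

    public void Return(byte[] block) => _blocks.Enqueue(block);

    // Called from Kestrel's heartbeat or a PeriodicTimer loop.
    public void Tick()
    {
        var rents = Interlocked.Exchange(ref _rentsSinceLastTick, 0);
        var retained = _blocks.Count;

        // If far more blocks are retained than were recently used, drop some of them.
        if (retained > rents * 2)
        {
            var toEvict = (retained - (int)rents) / 4; // evict gradually, not all at once
            for (var i = 0; i < toEvict && _blocks.TryDequeue(out _); i++)
            {
                // The dequeued block is dropped and becomes eligible for GC.
            }
        }
    }
}
```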