Description
Summary
I’d like to propose a new configuration directive in PHP-FPM, tentatively named `pm.max_memory`, which would allow administrators to specify a per-child memory usage limit. Once a PHP-FPM worker process finishes handling a request, if its memory usage exceeds this configured threshold, it would be gracefully recycled before handling another request. This feature would complement existing mechanisms (`pm.max_requests`, `memory_limit`, cgroups) but address use cases where neither request-count-based recycling nor OS-level OOM kills provide the desired behavior.
Motivation and Rationale
- Slow Memory Leaks:
  - While memory leaks in core PHP have become less common, it is not unusual for users to run older PHP versions (e.g., 7.4) or to rely on external C extensions that occasionally exhibit leaks. Over many requests, a slow leak can cause a worker’s memory footprint to grow steadily until the system becomes strained.
  - A process-level memory cap checked after each request (when no user code is running) would automatically recycle leaky workers before they grow too large.
- Existing Workarounds:
  - `pm.max_requests`: recycles processes based on request count. This helps, but it is a blunt tool; memory issues sometimes manifest after fewer or more requests than expected.
  - `memory_limit`: kills an individual script mid-request if it exceeds a certain amount of PHP-allocated memory. However, a leak that accumulates across requests may never exceed the per-request `memory_limit`.
  - cgroups / Docker memory limits: typically trigger an OOM kill that can occur at any moment, including mid-request. This can disrupt active requests rather than recycling gracefully.
- Why a New Setting?
  - `pm.max_memory` would allow graceful recycling once the worker has finished a request, preventing mid-execution kills. This behavior is more user-friendly and operationally safer than OOM kills or forcibly lowering `pm.max_requests`.
Proposed Behavior
- Directive: `pm.max_memory = <value>`
  - `0` would disable the setting (no memory-based recycling).
  - A positive integer (e.g., in bytes) indicates the per-child memory threshold.
- Measurement Timing:
  - Check worker memory usage at the end of each request (during request shutdown, before picking up a new request).
  - If usage exceeds `pm.max_memory`, the process exits gracefully (similar to how it does with `pm.max_requests`).
- Memory Metric:
  - Likely the resident set size (RSS) of the process, as commonly displayed by `top` and `ps`, or read from `/proc/self/statm` on Linux. This aligns with what admins typically observe in real-world monitoring.
  - There will be platform-specific differences (e.g., using `getrusage()` or equivalent APIs on non-Linux systems).
- Graceful Behavior:
  - The process exits only after finishing its current request, preventing partial execution or abrupt kills.
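To make the proposal concrete, a pool configuration using the new directive might look like the sketch below. Note that `pm.max_memory` is the hypothetical directive being proposed here and does not exist in current PHP-FPM; the surrounding directives are real.

```ini
; www.conf -- example pool configuration (sketch)
[www]
pm = dynamic
pm.max_children = 20

; Existing request-count-based recycling.
pm.max_requests = 500

; Proposed: after finishing a request, a worker whose RSS exceeds
; this threshold exits gracefully and is respawned by the master.
; 0 (the default) disables memory-based recycling.
; Shown as a plain byte count (256 MB), per the proposal.
pm.max_memory = 268435456
```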
Benefits
- Operational Simplicity: Admins can look at `top` or `ps`, see typical usage and outliers, and choose an appropriate memory limit for each pool.
- Graceful Recycling: Avoids the downsides of an OOM kill, which can happen mid-request and risk data corruption or incomplete responses.
- Better Than `pm.max_requests` for Certain Leaks: Provides more precise control over memory-related issues, rather than guessing how many requests a leaking script can handle.
Potential Implementation Details
- Cross-Platform:
  - On Linux, reading `/proc/self/statm` is straightforward. Other systems may require different APIs, so the feature might initially be limited to platforms where memory usage can be reliably checked.
- Configuration:
  - The default value is `0` (disabled), so existing users are unaffected unless they opt in.
- Edge Cases:
  - Processes with spiky memory usage that stays within `memory_limit` per request: memory usage may drop as soon as the request finishes. That is acceptable; if the worker genuinely releases memory by request end, it will not be terminated. Only persistent usage that never frees up matters here.
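As a sketch of the measurement side, the Linux path could read RSS from `/proc/self/statm`, with a `getrusage()`-based fallback elsewhere. All function names below are illustrative, not actual php-fpm internals, and `ru_maxrss` is a peak value with OS-dependent units, so it is only a rough fallback.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

/* Current resident set size in bytes, read from /proc/self/statm.
 * Returns -1 if the file is unavailable (non-Linux systems). */
static long current_rss_bytes(void)
{
    long pages_total, pages_rss;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f)
        return -1;
    if (fscanf(f, "%ld %ld", &pages_total, &pages_rss) != 2) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return pages_rss * sysconf(_SC_PAGESIZE);
}

/* Rough fallback via getrusage(): ru_maxrss is a peak (high-water)
 * value, reported in kilobytes on Linux/BSD but bytes on macOS. */
static long peak_rss_bytes(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0)
        return -1;
#ifdef __APPLE__
    return ru.ru_maxrss;
#else
    return ru.ru_maxrss * 1024L;
#endif
}

/* Hypothetical end-of-request hook: recycle the worker once it has
 * finished a request and its memory exceeds pm.max_memory (0 = off). */
static int should_recycle(long pm_max_memory)
{
    long rss = current_rss_bytes();
    if (rss < 0)
        rss = peak_rss_bytes();
    return pm_max_memory > 0 && rss > pm_max_memory;
}
```

A real implementation would presumably plug a check like `should_recycle()` into the same code path that already handles `pm.max_requests`, so the worker exits cleanly and the master respawns it.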
Alternatives Considered
- System OOM / cgroups:
  - Not ideal for graceful recycling; OOM kills can occur mid-request and take down the entire process or container.
- `memory_limit`:
  - Only applies to per-request usage inside PHP’s memory allocator, not total process memory (including possible leaks in extensions).
- External Scripts:
  - A watchdog script could kill oversized PHP-FPM workers, but that effectively duplicates the same logic in a less integrated and potentially more abrupt way.
Conclusion
`pm.max_memory` could offer a safer, more precise way to handle slow or gradual memory leaks without relying on request counts or mid-request kills. Feedback on feasibility, naming, implementation strategies, and potential pitfalls is greatly appreciated.
Thank you for considering this feature request!